IMT Institutional Repository: Recent Deposits

http://eprints.imtlucca.it/id/eprint/3701 (deposited 2017-05-08)

Networks of reinforced stochastic processes: Asymptotics for the empirical means

This work deals with systems of interacting reinforced stochastic processes, where each process X^j = (X_{n,j})_n is located at a vertex j of a finite weighted directed graph and can be interpreted as the sequence of "actions" adopted by an agent j of the network. The interaction
among the evolving dynamics of these processes depends on the weighted adjacency matrix W associated with the underlying graph: indeed, the probability that an agent j chooses a certain action depends on its personal "inclination" Z_{n,j} and on the inclinations Z_{n,h}, with h ≠ j, of the other agents, weighted according to the elements of W.
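The interaction mechanism described above can be illustrated with a minimal simulation sketch. The abstract does not specify the exact dynamics, so the column-normalization of W and the Pólya-type step size below are assumptions made purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative (assumed) dynamics: agent j draws action X_{n+1,j} in {0,1}
# with probability given by a W-weighted average of the current inclinations.
N = 4                                  # number of agents / vertices
W = rng.random((N, N))
W /= W.sum(axis=0)                     # assumed column-normalized adjacency matrix

Z = rng.random(N)                      # initial inclinations in (0, 1)
n_steps = 5000
for n in range(1, n_steps + 1):
    p = W.T @ Z                        # P(X_{n,j} = 1) = sum_h W[h, j] * Z[h]
    X = rng.random(N) < p              # actions at time n
    r = 1.0 / (n + 1)                  # Polya-type step size (assumed)
    Z = (1 - r) * Z + r * X            # reinforced update of the inclinations
```

Since each column of W sums to one and Z starts in (0, 1), the probabilities p and the inclinations Z remain in [0, 1] at every step.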
Asymptotic results for the stochastic processes of the personal inclinations Z^j = (Z_{n,j})_n have been the subject of study in recent papers (e.g. [2, 21]), while the asymptotic behavior of the stochastic processes of the actions (X_{n,j})_n has not yet been studied. In this paper, we fill this gap by characterizing the asymptotic behavior of the empirical means N_{n,j} = \frac{1}{n}\sum_{k=1}^n X_{k,j}, proving their almost sure synchronization and some central limit theorems in the sense of stable convergence. Moreover, we discuss some statistical applications of these convergence results, concerning confidence intervals for the random limit toward which all the processes of the system converge and tools to
make inference on the matrix W.

Authors: Giacomo Aletti (giacomo.aletti@unimi.it), Irene Crimaldi (irene.crimaldi@imtlucca.it), Andrea Ghiglietti (andrea.ghiglietti@unimi.it)

http://eprints.imtlucca.it/id/eprint/3699 (deposited 2017-05-08)

Synchronization of Reinforced Stochastic Processes with a Network-based Interaction

Randomly evolving systems composed of elements that interact with one another have always been of great interest in several scientific fields. This work deals with the synchronization phenomenon, which can be roughly defined as the tendency of different components to adopt a common behavior. We continue the study of a model of interacting stochastic processes with reinforcement that
was recently introduced in [21]. Generally speaking, by reinforcement we mean any mechanism for which the probability that a given event occurs increases with the number of times that events of the same type have occurred in the past. A distinctive feature of such systems of interacting stochastic processes is that synchronization is induced along time by the reinforcement mechanism itself and does not require a large-scale limit. We focus on the relationship between the topology of the network of interactions and the long-time synchronization phenomenon. After proving almost sure synchronization, we provide some central limit theorems in the sense
of stable convergence, establishing the convergence rates and the asymptotic distributions for both the convergence to the common limit and the synchronization. The obtained results lead to the construction of asymptotic confidence intervals for the limit random variable and of statistical tests for making inference on the topology of the network.

Authors: Giacomo Aletti, Irene Crimaldi (irene.crimaldi@imtlucca.it), Andrea Ghiglietti
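The synchronization phenomenon that both abstracts refer to can be observed in a toy simulation. The sketch below does not reproduce the precise model of [21]; the fully connected uniform weights and the Pólya-type step size are assumptions for illustration. Starting from deliberately spread-out inclinations, both the inclinations Z_{n,j} and the empirical means N_{n,j} of all agents end up close to a common value:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy illustration (assumed dynamics): track the empirical means
# N_{n,j} = (1/n) * sum_{k<=n} X_{k,j} alongside the inclinations Z_{n,j}
# and observe that all agents approach a common value.
N = 3
W = np.full((N, N), 1.0 / N)           # fully connected, uniform weights (assumed)
Z = np.array([0.2, 0.5, 0.9])          # spread-out initial inclinations
counts = np.zeros(N)                   # running sums of the actions per agent

n_steps = 20000
for n in range(1, n_steps + 1):
    p = W.T @ Z                        # action probabilities at time n
    X = (rng.random(N) < p).astype(float)
    counts += X
    Z = (1 - 1.0 / (n + 1)) * Z + X / (n + 1)

emp_means = counts / n_steps           # empirical means N_{n,j}
print("inclinations:", Z)
print("empirical means:", emp_means)
print("spread of empirical means:", np.ptp(emp_means))
```

The random value the agents agree on varies from run to run (it is the random limit mentioned in the first abstract), but within a single run the spread across agents becomes small as n grows.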