%L eprints3701
%D 2017
%X This work deals with systems of interacting reinforced stochastic processes, where each process X^j = (X_{n,j})_n is located at a vertex j of a finite weighted directed graph, and can be interpreted as the sequence of “actions” adopted by an agent j of the network. The interaction among the evolving dynamics of these processes depends on the weighted adjacency matrix W associated with the underlying graph: indeed, the probability that an agent j chooses a certain action depends on its personal “inclination” Z_{n,j} and on the inclinations Z_{n,h}, with h ≠ j, of the other agents according to the elements of W. Asymptotic results for the stochastic processes of the personal inclinations Z^j = (Z_{n,j})_n have been the subject of recent studies (e.g. [2, 21]), while the asymptotic behavior of the stochastic processes of the actions (X_{n,j})_n has not yet been studied. In this paper, we fill this gap by characterizing the asymptotic behavior of the empirical means N_{n,j} = \sum_{k=1}^n X_{k,j}/n, proving their almost sure synchronization and some central limit theorems in the sense of stable convergence. Moreover, we discuss some statistical applications of these convergence results, concerning confidence intervals for the random limit toward which all the processes of the system converge and tools for making inference on the matrix W.
%R arXiv:1705.02126
%A Giacomo Aletti
%A Irene Crimaldi
%A Andrea Ghiglietti
%I IMT Institute for Advanced Studies Lucca
%T Networks of reinforced stochastic processes: Asymptotics for the empirical means
%K Interacting Systems; Reinforced Stochastic Processes; Urn Models; Complex Networks; Synchronization; Asymptotic Normality.
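
For illustration only, below is a minimal Python/NumPy simulation sketch of a system matching the description in the abstract. The record does not specify the dynamics, so the linear reinforcement update Z_{n+1,j} = (1 - r_n) Z_{n,j} + r_n X_{n+1,j} with a step size r_n of order 1/n, the column-normalization of W, and all variable names are assumptions made for this example, not details taken from the paper.

import numpy as np

# Assumed dynamics (sketch, not the paper's exact specification):
#   P(X_{n,j} = 1 | past) = sum_h W[h, j] * Z_{n-1,h}
#   Z_{n,j} = (1 - r_n) * Z_{n-1,j} + r_n * X_{n,j},  r_n ~ 1/n
rng = np.random.default_rng(0)

N = 4                                    # number of agents / vertices
W = rng.random((N, N))
W /= W.sum(axis=0, keepdims=True)        # column-normalize so each column sums to 1

n_steps = 20000
Z = np.full(N, 0.5)                      # personal inclinations Z_{n,j}
X_sum = np.zeros(N)                      # running sums of the actions X_{k,j}

for n in range(1, n_steps + 1):
    p = W.T @ Z                          # action probabilities via the elements of W
    X = (rng.random(N) < p).astype(float)    # actions of the N agents at time n
    r = 1.0 / (n + 1)                    # assumed step size of order 1/n
    Z = (1.0 - r) * Z + r * X            # reinforcement of the inclinations
    X_sum += X

N_emp = X_sum / n_steps                  # empirical means N_{n,j} = (1/n) sum_k X_{k,j}
print("empirical means per agent:", np.round(N_emp, 3))

Under these assumed dynamics, the per-agent empirical means printed at the end should be close to a common random value, consistent with the almost sure synchronization stated in the abstract.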