TY - JOUR
UR - http://doi.org/10.1162/neco_a_00976
IS - 8
N2 - Optimal control theory and machine learning techniques are combined to formulate and solve in closed form an optimal control formulation of online learning from supervised examples with regularization of the updates. The connections with the classical linear quadratic Gaussian (LQG) optimal control problem, of which the proposed learning paradigm is a nontrivial variation because it involves random matrices, are investigated. The resulting optimal solutions are compared with the Kalman filter estimate of the parameter vector to be learned. It is shown that, thanks to the presence of the regularization term, the proposed algorithm is less sensitive to outliers than the Kalman estimate, and thus provides estimates that are smoother over time. The basic formulation of the proposed online learning framework refers to a discrete-time setting with a finite learning horizon and a linear model. Several extensions are investigated, including the infinite learning horizon and, via the so-called kernel trick, the case of nonlinear models.
JF - Neural Computation
AV - public
ID - eprints3759
TI - LQG Online Learning
Y1 - 2017///
SP - 2203
A1 - Gnecco, Giorgio
A1 - Bemporad, Alberto
A1 - Gori, Marco
A1 - Sanguineti, Marcello
SN - 0899-7667
PB - MIT Press
EP - 2291
VL - 29
ER - 