TY - RPRT
N2 - Optimal control theory and machine learning techniques are combined to propose, and solve in closed form, an optimal control formulation of online learning from supervised examples. The connections with the classical Linear Quadratic Gaussian (LQG) optimal control problem, of which the proposed learning paradigm is a nontrivial variation since it involves random matrices, are investigated. The optimal solutions obtained are compared with the Kalman-filter estimate of the parameter vector to be learned. It is shown that the former enjoy greater smoothness and robustness to outliers, thanks to the presence of a regularization term. The basic formulation of the proposed online-learning framework refers to a discrete-time setting with a finite learning horizon and a linear model. Several extensions are investigated, including the infinite learning horizon and, via the so-called "kernel trick", the case of nonlinear models.
KW - Optimization and Control (math.OC)
M1 - working_paper
A1 - Gnecco, Giorgio
A1 - Bemporad, Alberto
A1 - Gori, Marco
A1 - Sanguineti, Marcello
PB - arXiv
UR - https://arxiv.org/abs/1606.04272
TI - Linear Quadratic Gaussian (LQG) online learning
AV - public
Y1 - 2016///
VL - arXiv:1606.04272
EP - 69
ID - eprints3142
ER -