Gnecco, Giorgio; Bemporad, Alberto; Gori, Marco; Sanguineti, Marcello (2017). LQG Online Learning. Neural Computation, 29(8), pp. 2203-2291. ISSN 0899-7667.
PDF (Submitted Version). Available under License Creative Commons Attribution Non-commercial No Derivatives.
Abstract
Optimal control theory and machine learning techniques are combined to formulate, and solve in closed form, an optimal control formulation of online learning from supervised examples with regularization of the updates. The connections with the classical linear quadratic Gaussian (LQG) optimal control problem, of which the proposed learning paradigm is a nontrivial variation as it involves random matrices, are investigated. The obtained optimal solutions are compared with the Kalman filter estimate of the parameter vector to be learned. It is shown that the proposed algorithm is less sensitive to outliers than the Kalman estimate (thanks to the presence of the regularization term), thus providing smoother estimates over time. The basic formulation of the proposed online learning framework refers to a discrete-time setting with a finite learning horizon and a linear model. Various extensions are investigated, including the infinite learning horizon and, via the so-called kernel trick, the case of nonlinear models.
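To illustrate the idea of "regularization of the updates" described in the abstract, the following is a minimal sketch (not the paper's full LQG solution): each new example is fit by minimizing the squared prediction error plus a penalty lam * ||w - w_prev||^2 on the size of the step, which yields a closed-form damped update. The function name, the penalty weight lam, and the example values are illustrative assumptions, not taken from the paper.

```python
def regularized_step(w_prev, c, y, lam):
    # Minimize (c.w - y)^2 + lam * ||w - w_prev||^2 over w.
    # The minimizer is w_prev + e / (lam + ||c||^2) * c, where e is the
    # prediction error on the new example (hypothetical sketch, not the
    # paper's algorithm).
    e = y - sum(ci * wi for ci, wi in zip(c, w_prev))
    gain = e / (lam + sum(ci * ci for ci in c))
    return [wi + gain * ci for wi, ci in zip(w_prev, c)]

w0 = [0.0, 0.0, 0.0]       # previous parameter estimate
c = [1.0, 2.0, -1.0]       # input vector of the new example
y = 4.0                    # observed output

w_exact = regularized_step(w0, c, y, lam=0.0)  # lam -> 0: fits the example exactly
w_reg = regularized_step(w0, c, y, lam=2.0)    # the penalty shrinks the step
```

With lam = 0 the update interpolates the new example exactly; a positive lam shrinks the step toward the previous estimate, which is the mechanism behind the smoother, less outlier-sensitive trajectories mentioned in the abstract.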
Item Type: | Article |
---|---|
Identification Number: | https://doi.org/10.1162/neco_a_00976 |
Subjects: | Q Science > QA Mathematics > QA75 Electronic computers. Computer science |
Research Area: | Computer Science and Applications |
Depositing User: | Caterina Tangheroni |
Date Deposited: | 04 Aug 2017 11:47 |
Last Modified: | 04 Aug 2017 11:47 |
URI: | http://eprints.imtlucca.it/id/eprint/3759 |