
Learning with Hard Constraints

Gnecco, Giorgio; Gori, Marco; Melacci, Stefano; Sanguineti, Marcello (2013). Learning with Hard Constraints. In: Artificial Neural Networks and Machine Learning – ICANN 2013. Lecture Notes in Computer Science (8131). Springer, pp. 146–153. ISBN 978-3-642-40728-4.

A learning paradigm is proposed in which one has both classical supervised examples and constraints that cannot be violated, called here "hard constraints", such as those enforcing the probabilistic normalization of a density function or imposing coherent decisions of classifiers acting on different views of the same pattern. In contrast, supervised examples can be violated at the cost of some penalization (quantified by the choice of a suitable loss function) and so play the role of "soft constraints". Constrained variational calculus is exploited to derive a representation theorem that describes the "optimal body of the agent", i.e., the functional structure of the solution to the proposed learning problem. It is shown that the solution can be represented in terms of a set of "support constraints", thus extending the well-known notion of "support vectors".

Item Type: Book Section
Identification Number: 10.1007/978-3-642-40728-4_19
Additional Information: 23rd International Conference on Artificial Neural Networks Sofia, Bulgaria, September 10-13, 2013. Proceedings
Uncontrolled Keywords: Learning from constraints; learning with prior knowledge; multi-task learning; support constraints; constrained variational calculus
Subjects: Q Science > QA Mathematics > QA75 Electronic computers. Computer science
Research Area: Computer Science and Applications
Depositing User: Giorgio Gnecco
Date Deposited: 17 Sep 2013 12:55
Last Modified: 18 Feb 2015 12:01
URI: http://eprints.imtlucca.it/id/eprint/1769
