eprintid: 2622
rev_number: 5
eprint_status: archive
userid: 6
dir: disk0/00/00/26/22
datestamp: 2015-02-23 11:14:29
lastmod: 2015-02-23 11:14:29
status_changed: 2015-02-23 11:14:29
type: article
metadata_visibility: show
creators_name: Gnecco, Giorgio
creators_name: Gori, Marco
creators_name: Melacci, Stefano
creators_name: Sanguineti, Marcello
creators_id: giorgio.gnecco@imtlucca.it
creators_id:
creators_id:
creators_id:
title: Foundations of Support Constraint Machines
ispublished: pub
subjects: QA75
divisions: CSA
full_text_status: none
abstract: The mathematical foundations of a new theory for the design of intelligent agents are presented. The proposed learning paradigm is centered around the concept of constraint, representing the interactions with the environment, and the parsimony principle. The classical regularization framework of kernel machines is naturally extended to the case in which the agents interact with a richer environment, where abstract granules of knowledge, compactly described by different linguistic formalisms, can be translated into the unified notion of constraint for defining the hypothesis set. Constrained variational calculus is exploited to derive general representation theorems that provide a description of the optimal body of the agent (i.e., the functional structure of the optimal solution to the learning problem), which is the basis for devising new learning algorithms. We show that regardless of the kind of constraints, the optimal body of the agent is a support constraint machine (SCM) based on representer theorems that extend classical results for kernel machines and provide new representations. In a sense, the expressiveness of constraints yields a semantic-based regularization theory, which strongly restricts the hypothesis set of classical regularization. Some guidelines to unify continuous and discrete computational mechanisms are given so as to accommodate in the same framework various kinds of stimuli, for example, supervised examples and logic predicates. The proposed view of learning from constraints incorporates classical learning from examples and extends naturally to the case in which the examples are subsets of the input space, which is related to learning propositional logic clauses.
date: 2015-02
date_type: published
publication: Neural Computation
volume: 27
number: 2
publisher: MIT Press
pagerange: 388-480
id_number: doi:10.1162/NECO_a_00686
refereed: TRUE
issn: 0899-7667
official_url: http://dx.doi.org/10.1162/NECO_a_00686
citation: Gnecco, Giorgio and Gori, Marco and Melacci, Stefano and Sanguineti, Marcello Foundations of Support Constraint Machines. Neural Computation, 27 (2). pp. 388-480. ISSN 0899-7667 (2015)