TY  - JOUR
SN  - 0925-2312
Y1  - 2014/04//
N1  - Available online 25 October 2013
JF  - Neurocomputing
N2  - Supervised learning is investigated when the data are represented not only by labeled points but also by labeled regions of the input space. In the limit case, such regions degenerate to single points and the proposed approach reduces to the classical learning context. The adopted framework entails the minimization of a functional obtained by introducing a loss function that involves such regions. An additive regularization term is expressed via differential operators that model the smoothness properties of the desired input/output relationship. Representer theorems are given, proving that the optimization problem associated with learning from labeled regions has a unique solution, which takes the form of a linear combination of kernel functions determined by the differential operators together with the regions themselves. As a relevant situation, the case of regions given by multi-dimensional intervals (i.e., "boxes") is investigated, which models prior knowledge expressed by logical propositions.
EP  - 32
VL  - 129
PB  - Elsevier
SP  - 25
A1  - Gnecco, Giorgio
A1  - Gori, Marco
A1  - Melacci, Stefano
A1  - Sanguineti, Marcello
AV  - public
ID  - eprints1794
TI  - A Theoretical Framework for Supervised Learning from Regions
UR  - http://www.sciencedirect.com/science/article/pii/S0925231213009879
KW  - supervised learning; kernel machines; propositional rules; variational calculus; infinite-dimensional optimization; representer theorems
ER  - 