
Can Dictionary-Based Computational Models Outperform the Best Linear Ones?

Gnecco, Giorgio and Kůrková, Věra and Sanguineti, Marcello (2011) Can Dictionary-Based Computational Models Outperform the Best Linear Ones? Neural Networks, 24 (8). pp. 881-887. ISSN 0893-6080


Abstract

Approximation capabilities of two types of computational models are explored: dictionary-based models (i.e., linear combinations of n-tuples of basis functions computable by units belonging to a set called a “dictionary”) and linear ones (i.e., linear combinations of n fixed basis functions). The two models are compared in terms of approximation rates, i.e., the speeds of decrease of approximation errors as the number n of basis functions grows. Proofs of upper bounds on approximation rates of dictionary-based models are inspected to show that, for individual functions, they do not yield estimates for dictionary-based models that fail to hold also for some linear models. On the other hand, the possibility of achieving faster approximation rates with dictionary-based models is demonstrated for worst-case errors in the approximation of suitable sets of functions. For such sets, even geometric upper bounds hold.
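To fix ideas, the following is a minimal sketch of the two models and of the worst-case quantities being compared; the notation is the standard one suggested by the keywords below (worst-case error, Kolmogorov width) and is not taken verbatim from the paper.

% Illustrative sketch (assumed notation): dictionary-based vs. linear approximation.
\[
  \operatorname{span}_n D \;=\; \Bigl\{ \textstyle\sum_{i=1}^{n} c_i g_i \;:\; c_i \in \mathbb{R},\ g_i \in D \Bigr\}
  \qquad \text{(dictionary-based model, dictionary } D\text{)},
\]
\[
  X_n \;=\; \operatorname{span}\{g_1,\dots,g_n\}
  \qquad \text{(linear model with fixed basis functions } g_1,\dots,g_n\text{)}.
\]
% For a set A of functions in a normed space (X, ||.||), the worst-case errors compared are
\[
  \sup_{f \in A}\, \inf_{h \in \operatorname{span}_n D} \|f - h\|
  \qquad \text{versus} \qquad
  d_n(A) \;=\; \inf_{X_n}\, \sup_{f \in A}\, \inf_{h \in X_n} \|f - h\|,
\]
where \(d_n(A)\) is the Kolmogorov \(n\)-width and the outer infimum ranges over all \(n\)-dimensional subspaces \(X_n\) of \(X\).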

Item Type: Article
Identification Number: https://doi.org/10.1016/j.neunet.2011.05.014
Additional Information: Special Issue "Artificial Neural Networks: Selected Papers from ICANN 2010"
Uncontrolled Keywords: Dictionary-based approximation; Linear approximation; Rates of approximation; Worst-case error; Kolmogorov width; Perceptron networks
Subjects: Q Science > QA Mathematics > QA75 Electronic computers. Computer science
Research Area: Computer Science and Applications
Depositing User: Giorgio Gnecco
Date Deposited: 17 Sep 2013 07:44
Last Modified: 17 Sep 2013 07:44
URI: http://eprints.imtlucca.it/id/eprint/1751
