TY - JOUR
TI - Can Dictionary-Based Computational Models Outperform the Best Linear Ones?
A1 - Gnecco, Giorgio
A1 - Kůrková, Věra
A1 - Sanguineti, Marcello
JF - Neural Networks
VL - 24
IS - 8
SP - 881
EP - 887
Y1 - 2011///
PB - Elsevier
SN - 0893-6080
UR - http://www.sciencedirect.com/science/article/pii/S0893608011001560
KW - Dictionary-based approximation; Linear approximation; Rates of approximation; Worst-case error; Kolmogorov width; Perceptron networks
N1 - Special Issue "Artificial Neural Networks: Selected Papers from ICANN 2010"
N2 - Approximation capabilities of two types of computational models are explored: dictionary-based models (i.e., linear combinations of n-tuples of basis functions computable by units belonging to a set called "dictionary") and linear ones (i.e., linear combinations of n fixed basis functions). The two models are compared in terms of approximation rates, i.e., speeds of decrease of approximation errors for a growing number n of basis functions. Proofs of upper bounds on approximation rates by dictionary-based models are inspected to show that, for individual functions, they do not imply estimates for dictionary-based models that do not also hold for some linear models. Instead, the possibility of getting faster approximation rates by dictionary-based models is demonstrated for worst-case errors in approximation of suitable sets of functions. For such sets, even geometric upper bounds hold.
AV - none
ID - eprints1751
ER -