IMT Institutional Repository — results ordered by date deposited (feed retrieved 2024-06-16).

Explicit Shift-Invariant Dictionary Learning
Deposited: 2013-11-20. URL: http://eprints.imtlucca.it/id/eprint/1917
Abstract: In this letter we give efficient solutions for the construction of structured dictionaries for sparse representations. We study circulant and Toeplitz structures and give fast algorithms based on least-squares solutions. We take advantage of explicit circulant structures and apply the resulting algorithms to shift-invariant learning scenarios. Synthetic experiments and comparisons with state-of-the-art methods show the superiority of the proposed methods.
Authors: Cristian Rusu (cristian.rusu@imtlucca.it), Bogdan Dumitrescu, Sotirios A. Tsaftaris (sotirios.tsaftaris@imtlucca.it)

Block Orthonormal Overcomplete Dictionary Learning
Deposited: 2013-11-05. URL: http://eprints.imtlucca.it/id/eprint/1856
Abstract: In the field of sparse representations, the overcomplete dictionary learning problem is of crucial importance and has a growing pool of applications. In this paper we present an iterative dictionary learning algorithm, based on the singular value decomposition, that efficiently constructs unions of orthonormal bases. The key innovation, which improves the running time of the learning procedure, lies in how the sparse representations are computed (each data item is reconstructed in a single orthonormal basis, avoiding slow sparse approximation algorithms), how the bases in the union are used and updated individually, and how the union itself is expanded by examining the worst-reconstructed data items.
The numerical experiments show conclusively the speedup induced by our method compared to previous work, for the same target representation error.
Authors: Cristian Rusu (cristian.rusu@imtlucca.it), Bogdan Dumitrescu

Stagewise K-SVD to Design Efficient Dictionaries for Sparse Representations
Deposited: 2013-06-10. URL: http://eprints.imtlucca.it/id/eprint/1609
Abstract: The problem of training a dictionary for sparse representations from a given dataset receives much attention, mainly due to its applications in coding, classification and pattern recognition. One of the open questions is how to choose the number of atoms in the dictionary: if the dictionary is too small, the representation errors are large; if it is too big, using the dictionary becomes computationally expensive. In this letter, we solve the problem of computing efficient dictionaries of reduced size with a new design method, called Stagewise K-SVD, an adaptation of the popular K-SVD algorithm. Since K-SVD performs very well in practice, we use K-SVD steps to gradually build dictionaries that fulfill an imposed error constraint. The conceptual simplicity of the method makes it easy to apply, while the numerical experiments highlight its efficiency for different overcomplete dictionaries.
Authors: Cristian Rusu (cristian.rusu@imtlucca.it), Bogdan Dumitrescu

Stagewise K-SVD to Design Efficient Dictionaries for Sparse Representations
Deposited: 2013-03-07 (an earlier deposit of the item above). URL: http://eprints.imtlucca.it/id/eprint/1530
Abstract: The problem of training a dictionary for sparse representations from a given dataset receives much attention, mainly due to its applications in coding, classification and pattern recognition.
One of the open questions is how to choose the number of atoms in the dictionary: if the dictionary is too small, the representation errors are large; if it is too big, using the dictionary becomes computationally expensive. In this letter, we solve the problem of computing efficient dictionaries of reduced size with a new design method, called Stagewise K-SVD, an adaptation of the popular K-SVD algorithm. Since K-SVD performs very well in practice, we use K-SVD steps to gradually build dictionaries that fulfill an imposed error constraint. The conceptual simplicity of the method makes it easy to apply, while the numerical experiments highlight its efficiency for different overcomplete dictionaries.
Authors: Cristian Rusu (cristian.rusu@imtlucca.it), Bogdan Dumitrescu

Iterative Reweighted l1 Design of Sparse FIR Filters
Deposited: 2013-03-07. URL: http://eprints.imtlucca.it/id/eprint/1529
Abstract: Sparse FIR filters have lower implementation complexity than full filters, while keeping a good performance level. This paper describes a new method for designing 1D and 2D sparse filters in the minimax sense, using a mixture of reweighted l1 minimization and greedy iterations. The combination proves quite efficient: after the reweighted l1 minimization stage introduces zero coefficients in bulk, a small number of greedy iterations eliminate a few extra coefficients. Experimental results and a comparison with the latest methods show that the proposed method performs very well, both in running speed and in the quality of the solutions obtained.
Authors: Cristian Rusu (cristian.rusu@imtlucca.it), Bogdan Dumitrescu
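The shift-invariant entry above exploits the fact that circulant systems admit fast least-squares solutions. A minimal sketch of the underlying property — a circulant matrix is diagonalized by the DFT, so circulant products reduce to elementwise operations on FFTs — with all variable names illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
c = rng.standard_normal(n)   # first column of the circulant dictionary
x = rng.standard_normal(n)   # an arbitrary signal

# Dense circulant product: column j of C is c cyclically shifted down by j,
# so C @ x is the circular convolution of c and x.
C = np.column_stack([np.roll(c, j) for j in range(n)])
dense = C @ x

# Fast product via the convolution theorem, in O(n log n) instead of O(n^2);
# the same diagonalization turns circulant least squares into n scalar problems.
fast = np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)).real

print(np.allclose(dense, fast))  # True
```

The same identity lets the normal equations of a circulant fit decouple frequency by frequency, which is what makes the explicit structure cheap to learn.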
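The block orthonormal entry computes sparse representations by reconstructing each data item in a single orthonormal basis, which avoids iterative pursuit: in an orthonormal basis Q, the optimal s-term approximation of x is simply the s largest-magnitude entries of Q^T x. A hedged sketch with a toy two-basis union (names and the exhaustive basis-selection loop are illustrative, not the paper's procedure):

```python
import numpy as np

rng = np.random.default_rng(1)
n, s = 16, 3

# A toy union of two orthonormal bases: the identity and a random rotation.
Q1 = np.eye(n)
Q2, _ = np.linalg.qr(rng.standard_normal((n, n)))
bases = [Q1, Q2]

def best_s_term(x, Q, s):
    """Optimal s-term approximation of x in orthonormal basis Q: keep the
    s largest-magnitude coefficients of Q.T @ x — no pursuit algorithm needed."""
    a = Q.T @ x
    keep = np.argsort(np.abs(a))[-s:]
    a_sparse = np.zeros_like(a)
    a_sparse[keep] = a[keep]
    return Q @ a_sparse

x = rng.standard_normal(n)
# Assign x to whichever basis in the union reconstructs it best.
approx = min((best_s_term(x, Q, s) for Q in bases),
             key=lambda r: np.linalg.norm(x - r))
print(np.linalg.norm(x - approx))
```

Because Q is orthonormal, the approximation error is exactly the norm of the discarded coefficients, so picking the best basis is a cheap comparison rather than a sparse-approximation solve.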
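Both Stagewise K-SVD entries build on the standard K-SVD atom update, in which each atom and its coefficient row are refreshed by a rank-1 SVD of the residual restricted to the signals that use that atom. A minimal, self-contained sketch of that single update step (the stagewise dictionary-growth procedure itself is not reproduced here):

```python
import numpy as np

def ksvd_atom_update(D, W, X, j):
    """One K-SVD-style update of atom j of dictionary D and row j of codes W.

    E = X - D @ W + d_j w_j is the residual with atom j's contribution restored;
    restricting E to the signals that actually use atom j and taking its best
    rank-1 approximation yields the new unit-norm atom (leading left singular
    vector) and its coefficient row (s1 * v1)."""
    used = np.nonzero(W[j, :])[0]
    if used.size == 0:
        return D, W                      # atom unused: nothing to update
    E = X[:, used] - D @ W[:, used] + np.outer(D[:, j], W[j, used])
    U, s, Vt = np.linalg.svd(E, full_matrices=False)
    D[:, j] = U[:, 0]
    W[j, used] = s[0] * Vt[0, :]
    return D, W

rng = np.random.default_rng(2)
n, k, m = 8, 12, 50
D = rng.standard_normal((n, k))
D /= np.linalg.norm(D, axis=0)                                 # unit-norm atoms
W = rng.standard_normal((k, m)) * (rng.random((k, m)) < 0.2)   # sparse codes
X = rng.standard_normal((n, m))

before = np.linalg.norm(X - D @ W)
D, W = ksvd_atom_update(D, W, X, j=0)
after = np.linalg.norm(X - D @ W)
print(after <= before + 1e-12)  # True: the rank-1 SVD step never worsens the fit
```

The old atom/row pair is itself a feasible rank-1 candidate, so the SVD step can only decrease (or preserve) the representation error — the monotonicity that the stagewise construction relies on when growing the dictionary toward an error target.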
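The last entry combines reweighted l1 minimization with greedy iterations for minimax sparse FIR design. As a generic illustration of the reweighted l1 principle only — applied here to a least-squares sparse recovery problem via proximal gradient, not the paper's minimax filter formulation; all parameters are illustrative:

```python
import numpy as np

def reweighted_l1(A, b, lam=0.05, outer=5, inner=200, eps=1e-2):
    """Reweighted l1: repeatedly solve a weighted l1-regularized problem,
    setting w_i = 1 / (|x_i| + eps) from the previous solution, so that
    small coefficients are penalized harder and driven exactly to zero."""
    m, n = A.shape
    x = np.zeros(n)
    w = np.ones(n)
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    for _ in range(outer):
        for _ in range(inner):             # ISTA on the weighted subproblem
            z = x - A.T @ (A @ x - b) / L
            t = lam * w / L
            x = np.sign(z) * np.maximum(np.abs(z) - t, 0.0)  # soft threshold
        w = 1.0 / (np.abs(x) + eps)        # reweight toward an l0-like penalty
    return x

rng = np.random.default_rng(3)
m, n = 40, 80
x_true = np.zeros(n)
x_true[rng.choice(n, 5, replace=False)] = rng.standard_normal(5) + 2.0
A = rng.standard_normal((m, n)) / np.sqrt(m)
b = A @ x_true

x_hat = reweighted_l1(A, b)
print(np.count_nonzero(x_hat), "nonzeros out of", n)
```

The reweighting stage zeroes coefficients in bulk, which matches the paper's observation that only a few greedy iterations are needed afterward to remove stragglers.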