IMT Institutional Repository
Feed generated 2024-05-18; no conditions, results ordered by date deposited (most recent first).

Large-scale analysis of neuroimaging data on commercial clouds with content-aware resource allocation strategies
http://eprints.imtlucca.it/id/eprint/3055 (deposited 2016-02-11, last modified 2016-04-06)
The combined use of mice that have genetic mutations (transgenic mouse models) of human pathology and advanced neuroimaging methods (such as magnetic resonance imaging) has the potential to radically change how we approach disease understanding, diagnosis and treatment. Morphological changes occurring in the brain of transgenic animals as a result of the interaction between environment and genotype can be assessed using advanced image analysis methods, an effort described as ‘mouse brain phenotyping’. However, the computational methods involved in the analysis of high-resolution brain images are demanding. While running such analyses on local clusters is possible, not all users have access to such infrastructure, and even for those who do, additional computational capacity can be beneficial (e.g., to meet sudden high-throughput demands). In this paper we use a commercial cloud platform for brain neuroimaging and analysis. We complete a registration-based multi-atlas, multi-template anatomical segmentation, normally a lengthy effort, within a few hours. Naturally, performing such analyses on the cloud entails a monetary cost, and it is worthwhile to identify strategies that allocate resources intelligently. In our context a critical aspect is estimating how long each job will take. We propose a method that estimates the complexity of an image-processing task, a registration, using statistical moments and shape descriptors of the image content.
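The complexity estimation and runtime prediction described here can be sketched as a small feature-extraction plus linear-regression pipeline (a minimal illustration with an assumed feature set — mean, spread, skewness, kurtosis and a crude foreground fraction — not the paper's exact descriptors):

```python
import numpy as np

def moment_features(img):
    """Statistical moments plus a crude shape descriptor of image content.
    Hypothetical feature set: mean, std, skewness, kurtosis, and the
    fraction of pixels above the image mean (a rough foreground fraction)."""
    x = img.ravel().astype(float)
    mu, sigma = x.mean(), x.std() + 1e-12
    z = (x - mu) / sigma
    fg = (img > img.mean()).mean()
    return np.array([mu, sigma, (z**3).mean(), (z**4).mean(), fg])

def fit_runtime_model(images, runtimes):
    """Least-squares linear model mapping content features to observed
    job completion times."""
    X = np.array([moment_features(im) for im in images])
    X = np.hstack([X, np.ones((len(X), 1))])   # bias term
    w, *_ = np.linalg.lstsq(X, np.array(runtimes), rcond=None)
    return w

def predict_runtime(w, img):
    """Predict the completion time of a new job from its image content."""
    f = np.append(moment_features(img), 1.0)
    return float(f @ w)
```

In practice the model would be trained on logged completion times from previous cloud runs and used to schedule or batch new registration jobs.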
We use this information to learn and predict the completion time of a registration. The proposed approach is easy to deploy and could serve as an alternative for laboratories that require instant access to large high-performance-computing infrastructure. To facilitate adoption by the community, we publicly release the source code.
Authors: Massimo Minervini (massimo.minervini@imtlucca.it), Cristian Rusu, Mario Damiano, Valter Tucci, Angelo Bifone, Alessandro Gozzi, Sotirios A. Tsaftaris (sotirios.tsaftaris@imtlucca.it)

Dictionary-based Support Vector Machines for Unsupervised Ischemia Detection at Rest with CP-BOLD Cardiac MRI
http://eprints.imtlucca.it/id/eprint/2767 (deposited 2015-10-09)
Cardiac Phase-resolved Blood-Oxygen-Level-Dependent (CP-BOLD) MRI has recently been demonstrated to detect ongoing myocardial ischemia at rest, taking advantage of spatio-temporal patterns in myocardial signal intensities, which are modulated by the presence of disease. However, this approach requires significant post-processing to detect the disease, and to date only a few images of the acquisition, coupled with fixed thresholds, are used to establish biomarkers. We propose a threshold-free unsupervised approach, based on dictionary learning and one-class support vector machines, which can generate a probabilistic ischemia likelihood map.
Authors: Marco Bevilacqua, Anirban Mukhopadhyay, Ilkay Oksuz (ilkay.oksuz@imtlucca.it), Cristian Rusu, Rohan Dharmakumar, Sotirios A. Tsaftaris

Computationally efficient data and application driven color transforms for the compression and enhancement of images and video
http://eprints.imtlucca.it/id/eprint/2740 (deposited 2015-09-03, last modified 2016-05-06)
An important step in color image or video coding and enhancement is the linear transformation of input (typically RGB) data into a color space more suitable for compression, subsequent analysis, or visualization. The choice of this transform becomes even more critical when operating in distributed and low-computational-power environments, such as visual sensor networks or remote sensing. Data-driven transforms are rarely used due to increased complexity: most schemes adopt fixed transforms to decorrelate the color channels, which are then processed independently. Here we propose two frameworks to find appropriate data-driven transforms in different settings. The first, named approximate Karhunen-Loève Transform (aKLT), performs comparably to the KLT at a fraction of the computational complexity, thus favoring adoption on sensors and resource-constrained devices. Furthermore, we consider an application-aware setting in which an expert system (e.g., a classifier) analyzes imaging data at the receiver's end. In a compression context, distortion may jeopardize the accuracy of the analysis. Since the KLT is not optimal in this setting, we investigate formulations that maximize post-compression expert-system performance. Relaxing decorrelation and energy-compactness constraints, a second transform can be obtained offline with supervised learning methods. Finally, we propose transforms that accommodate both constraints and are found using regularized optimization.
Authors: Massimo Minervini (massimo.minervini@imtlucca.it), Cristian Rusu, Sotirios A. Tsaftaris (sotirios.tsaftaris@imtlucca.it)

Dictionary learning for unsupervised identification of ischemic territories in CP-BOLD Cardiac MRI at rest
http://eprints.imtlucca.it/id/eprint/2700 (deposited 2015-05-29)
Authors: Marco Bevilacqua (marco.bevilacqua@imtlucca.it), Cristian Rusu, Rohan Dharmakumar, Sotirios A. Tsaftaris (sotirios.tsaftaris@imtlucca.it)

Unsupervised and supervised approaches to color space transformation for image coding
http://eprints.imtlucca.it/id/eprint/2549 (deposited 2015-02-02)
The linear transformation of input (typically RGB) data into a color space is important in image compression. Most schemes adopt fixed transforms to decorrelate the color channels; energy-compaction transforms such as the Karhunen-Loève Transform (KLT) entail a complexity increase. Here, we propose a new data-dependent transform (aKLT) that achieves compression performance comparable to the KLT at a fraction of the computational complexity. More importantly, we also consider an application-aware setting in which a classifier analyzes reconstructed images at the receiver's end. In this context, KLT-based approaches may not be optimal, and transforms that maximize post-compression classifier performance are better suited. Relaxing energy-compactness constraints, we propose for the first time a transform that can be found offline by optimizing the Fisher discrimination criterion in a supervised fashion. In lieu of channel decorrelation, we obtain spatial decorrelation, using the same color transform as a rudimentary classifier to detect objects of interest in the input image without adding any computational cost.
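The data-driven transform that both color-coding abstracts revolve around can be sketched by diagonalizing the empirical 3x3 channel covariance (this computes the exact KLT; the cheaper aKLT approximation itself is not reproduced here):

```python
import numpy as np

def klt_color_transform(rgb):
    """Data-driven color transform: rows of T are the eigenvectors of the
    3x3 channel covariance, ordered by decreasing eigenvalue.
    rgb: (N, 3) array of pixels. Returns (T, decorrelated pixels)."""
    X = rgb.astype(float)
    Xc = X - X.mean(axis=0)                # center each channel
    C = Xc.T @ Xc / len(X)                 # empirical channel covariance
    _, evecs = np.linalg.eigh(C)           # eigenvalues in ascending order
    T = evecs[:, ::-1].T                   # rows = principal directions
    return T, Xc @ T.T
```

The transformed channels have a diagonal covariance, which is what makes independent per-channel coding efficient.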
When combined with region-of-interest-capable encoders, such as JPEG 2000, we achieve higher savings by encoding these regions at higher quality.
Authors: Massimo Minervini (massimo.minervini@imtlucca.it), Cristian Rusu, Sotirios A. Tsaftaris (sotirios.tsaftaris@imtlucca.it)

Synthetic generation of myocardial blood–oxygen-level-dependent MRI time series via structural sparse decomposition modeling
http://eprints.imtlucca.it/id/eprint/2279 (deposited 2014-09-10)
This paper aims to identify approaches that generate appropriate synthetic (computer-generated) data for cardiac phase-resolved blood-oxygen-level-dependent (CP-BOLD) MRI. CP-BOLD MRI is a new contrast-agent- and stress-free approach for examining changes in myocardial oxygenation in response to coronary artery disease. However, since signal intensity changes are subtle, rapid visualization is not possible with the naked eye. Quantifying and visualizing the extent of disease relies on myocardial segmentation and registration, to isolate the myocardium and establish temporal correspondences, and on ischemia detection algorithms, to identify temporal differences in BOLD signal intensity patterns. If transmurality of the defect is of interest, pixel-level analysis is necessary, and thus higher precision in registration is required. Such precision is currently not available, affecting the design and performance of ischemia detection algorithms. In this work, to enable algorithmic development of ischemia detection irrespective of registration accuracy, we propose an approach that generates synthetic pixel-level myocardial time series.
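The generative step, synthesizing signals as sparse combinations of dictionary atoms, can be sketched as follows (illustrative only: a random dictionary stands in for one learned from real data, and the structural constraints of the paper's model are omitted):

```python
import numpy as np

def synth_timeseries(D, k, n, rng=None):
    """Generate n synthetic time series, each a sparse combination of k
    randomly chosen columns (atoms) of a dictionary D of shape (T, K)."""
    rng = np.random.default_rng(rng)
    T, K = D.shape
    X = np.zeros((T, n))
    for j in range(n):
        idx = rng.choice(K, size=k, replace=False)   # active atoms
        X[:, j] = D[:, idx] @ rng.normal(size=k)     # random coefficients
    return X
```

Every synthetic series lies, by construction, in the span of the learned atoms, which is what makes the output resemble the training signals.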
We do this by (1) modeling the temporal changes in BOLD signal intensity with sparse multi-component dictionary learning, whereby segmentally derived myocardial time series extracted from canine experimental data are used to learn the model; and (2) demonstrating the resemblance between real and synthetic time series for validation purposes. We envision that the proposed approach can accelerate the development of tools for ischemia detection while markedly reducing experimental costs, so that cardiac BOLD MRI can be rapidly translated into the clinical arena for the noninvasive assessment of ischemic heart disease.
Authors: Cristian Rusu, Rita Morisi (rita.morisi@imtlucca.it), Davide Boschetto (davide.boschetto@imtlucca.it), Rohan Dharmakumar, Sotirios A. Tsaftaris (sotirios.tsaftaris@imtlucca.it)

Explicit Shift-Invariant Dictionary Learning
http://eprints.imtlucca.it/id/eprint/1917 (deposited 2013-11-20)
In this letter we give efficient solutions to the construction of structured dictionaries for sparse representations. We study circulant and Toeplitz structures and give fast algorithms based on least-squares solutions. We take advantage of explicit circulant structures and apply the resulting algorithms to shift-invariant learning scenarios. Synthetic experiments and comparisons with state-of-the-art methods show the superiority of the proposed methods.
Authors: Cristian Rusu (cristian.rusu@imtlucca.it), Bogdan Dumitrescu, Sotirios A. Tsaftaris (sotirios.tsaftaris@imtlucca.it)

Block Orthonormal Overcomplete Dictionary Learning
http://eprints.imtlucca.it/id/eprint/1856 (deposited 2013-11-05)
In the field of sparse representations, the overcomplete dictionary learning problem is of crucial importance and has a growing pool of applications.
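The circulant structure studied in the shift-invariant record above admits an explicit construction, and applying such a dictionary reduces to a circular convolution computable via the FFT; a minimal sketch:

```python
import numpy as np

def circulant_dictionary(atom):
    """Explicit shift-invariant dictionary: column s is the generating
    atom cyclically shifted by s samples."""
    n = len(atom)
    return np.stack([np.roll(atom, s) for s in range(n)], axis=1)

def circulant_apply(atom, x):
    """Multiply by the circulant dictionary without forming it: a circular
    convolution, computed in O(n log n) via the FFT."""
    return np.fft.ifft(np.fft.fft(atom) * np.fft.fft(x)).real
```

This FFT diagonalization is what makes circulant dictionaries attractive computationally: the matrix never needs to be stored or multiplied explicitly.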
In this paper we present an iterative dictionary learning algorithm, based on the singular value decomposition, that efficiently constructs unions of orthonormal bases. The innovations that positively affect the running time of the learning procedure are: the way the sparse representations are computed (each data item is reconstructed in a single orthonormal basis, avoiding slow sparse approximation algorithms), the way the bases in the union are used and updated individually, and the way the union itself is expanded by examining the worst-reconstructed data items. Numerical experiments show conclusively the speedup induced by our method compared to previous work, for the same target representation error.
Authors: Cristian Rusu (cristian.rusu@imtlucca.it), Bogdan Dumitrescu

Estimation of Scribble Placement for Painting Colorization
http://eprints.imtlucca.it/id/eprint/1799 (deposited 2013-09-18, last modified 2015-05-29)
Image colorization has been a topic of interest since the mid-1970s, and several algorithms have been proposed that, given a grayscale image and color scribbles (hints), produce a colorized image. Recently, this approach has been introduced in the field of art conservation and cultural heritage, where black-and-white photographs of paintings at previous stages have been colorized. However, the questions of how many scribbles are necessary and where they should be placed in an image remain unexplored. Here we address this limitation using an iterative algorithm that provides insights into the relationship between locally and globally important scribbles. Given a color image, we randomly select scribbles and attempt to color the grayscale version of the original. We define a scribble contribution measure based on the reconstruction error. We demonstrate our approach using a widely used colorization algorithm and images from a Picasso painting and the peppers test image. We show that areas isolated by thick brushstrokes, or areas with high textural variation, are locally important but contribute very little to the overall representation accuracy. We also find that, in the case of the Picasso painting, on average 10% scribble coverage is enough, and that flat areas can be represented by few scribbles. The proposed method can be used verbatim to test any colorization algorithm.
Authors: Cristian Rusu (cristian.rusu@imtlucca.it), Sotirios A. Tsaftaris (sotirios.tsaftaris@imtlucca.it)

Learning Computationally Efficient Approximations of Complex Image Segmentation Metrics
http://eprints.imtlucca.it/id/eprint/1795 (deposited 2013-09-17)
Image segmentation metrics have been used extensively in the literature to compare segmentation algorithms with each other or against a ground-truth segmentation. Some metrics are easy to compute (e.g., Dice, Jaccard); others are more accurate (e.g., the Hausdorff distance) and may reflect local topology, but are computationally demanding. While attempts have been made to create computationally efficient implementations of such complex metrics, in this paper we approach the problem from a radically different viewpoint: we construct approximations of a complex metric (e.g., the Hausdorff distance) by combining a small number of computationally lightweight metrics in a linear regression model. We also consider feature selection, using sparsity-inducing strategies, to significantly restrict the number of metrics employed without penalizing the predictive power of the model. We demonstrate our methodology with image data from plant phenotyping experiments. We find that a linear model can effectively approximate the Hausdorff distance using even a few features.
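The approximation idea, regressing an expensive metric on a few cheap ones, can be sketched as follows (the feature set here — Dice, Jaccard, absolute area difference — is an illustrative assumption, not the paper's selected features):

```python
import numpy as np

def dice(a, b):
    """Dice overlap of two boolean masks (cheap to compute)."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def jaccard(a, b):
    """Jaccard (intersection-over-union) of two boolean masks."""
    inter = np.logical_and(a, b).sum()
    return inter / np.logical_or(a, b).sum()

def cheap_features(a, b):
    return [dice(a, b), jaccard(a, b), abs(int(a.sum()) - int(b.sum())), 1.0]

def fit_metric_approx(pairs, targets):
    """Linear regression from cheap metrics to an expensive metric such
    as the Hausdorff distance, fitted on (mask pair, target) examples."""
    X = np.array([cheap_features(a, b) for a, b in pairs])
    w, *_ = np.linalg.lstsq(X, np.array(targets), rcond=None)
    return w
```

At deployment, a low-power sensor computes only the cheap metrics and the fitted linear combination, never the expensive metric itself.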
Our approach can find many applications, but it is chiefly expected to benefit distributed sensing scenarios where the sensor has low computational capacity while centralized processing units have higher computational capabilities.
Authors: Massimo Minervini (massimo.minervini@imtlucca.it), Cristian Rusu (cristian.rusu@imtlucca.it), Sotirios A. Tsaftaris (sotirios.tsaftaris@imtlucca.it)

Stagewise K-SVD to Design Efficient Dictionaries for Sparse Representations
http://eprints.imtlucca.it/id/eprint/1609 (deposited 2013-06-10)
The problem of training a dictionary for sparse representations from a given dataset is receiving a lot of attention, mainly due to its applications in coding, classification and pattern recognition. One of the open questions is how to choose the number of atoms in the dictionary: if the dictionary is too small, the representation errors are large; if it is too big, using it becomes computationally expensive. In this letter, we solve the problem of computing efficient dictionaries of reduced size with a new design method, called Stagewise K-SVD, which is an adaptation of the popular K-SVD algorithm. Since K-SVD performs very well in practice, we use K-SVD steps to gradually build dictionaries that fulfill an imposed error constraint. The conceptual simplicity of the method makes it easy to apply, while numerical experiments highlight its efficiency for different overcomplete dictionaries.
Authors: Cristian Rusu (cristian.rusu@imtlucca.it), Bogdan Dumitrescu

Design of Incoherent Frames via Convex Optimization
http://eprints.imtlucca.it/id/eprint/1608 (deposited 2013-06-10)
This paper describes a new procedure for the design of incoherent frames used in the field of sparse representations.
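Incoherence refers to the largest absolute inner product between distinct unit-norm frame columns — the off-diagonal maximum of the Gram matrix — which such designs aim to minimize. A minimal sketch of how it is measured:

```python
import numpy as np

def mutual_coherence(F):
    """Mutual coherence of a frame F (n x m): the largest absolute inner
    product between two distinct unit-norm columns, i.e. the largest
    off-diagonal entry of the Gram matrix of the normalized frame."""
    G = F / np.linalg.norm(F, axis=0)   # normalize columns
    G = np.abs(G.T @ G)
    np.fill_diagonal(G, 0.0)            # ignore self-products
    return G.max()
```

An orthonormal basis has coherence 0; a frame with a repeated direction has coherence 1, and overcomplete frames lie strictly between these extremes.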
We present an efficient algorithm for the design of incoherent frames that works well even when applied to the construction of relatively large frames. The main advantage of the proposed method is that it uses a convex optimization formulation operating directly on the frame, rather than on its Gram matrix. Solving a sequence of convex optimization problems allows the introduction of constraints on the frame that were previously considered impossible or very hard to include, such as non-negativity. Numerous experimental results validate the approach.
Authors: Cristian Rusu (cristian.rusu@imtlucca.it)

Stagewise K-SVD to Design Efficient Dictionaries for Sparse Representations
http://eprints.imtlucca.it/id/eprint/1530 (deposited 2013-03-07, last modified 2013-03-12)
The problem of training a dictionary for sparse representations from a given dataset is receiving a lot of attention, mainly due to its applications in coding, classification and pattern recognition. One of the open questions is how to choose the number of atoms in the dictionary: if the dictionary is too small, the representation errors are large; if it is too big, using it becomes computationally expensive. In this letter, we solve the problem of computing efficient dictionaries of reduced size with a new design method, called Stagewise K-SVD, which is an adaptation of the popular K-SVD algorithm. Since K-SVD performs very well in practice, we use K-SVD steps to gradually build dictionaries that fulfill an imposed error constraint.
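The K-SVD step that the Stagewise K-SVD records build on updates one atom at a time via a rank-1 SVD of the residual. A minimal sketch of a single atom update (a generic textbook-style implementation, not the authors' code):

```python
import numpy as np

def ksvd_atom_update(D, X, Y, j):
    """One K-SVD step: refit atom j of dictionary D (and the matching row
    of the coefficient matrix X) via a rank-1 SVD of the residual,
    restricted to the signals that actually use atom j. Y (n x N) is the
    data matrix; modifies and returns D and X."""
    users = np.flatnonzero(X[j, :])          # signals using atom j
    if users.size == 0:
        return D, X
    # Residual with atom j's contribution removed, on the using signals:
    E = Y[:, users] - D @ X[:, users] + np.outer(D[:, j], X[j, users])
    U, s, Vt = np.linalg.svd(E, full_matrices=False)
    D[:, j] = U[:, 0]                        # best unit-norm atom
    X[j, users] = s[0] * Vt[0, :]            # matching coefficients
    return D, X
```

Because the rank-1 SVD is the best rank-1 approximation of E, each update can only decrease (or leave unchanged) the overall representation error.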
The conceptual simplicity of the method makes it easy to apply, while numerical experiments highlight its efficiency for different overcomplete dictionaries.
Authors: Cristian Rusu (cristian.rusu@imtlucca.it), Bogdan Dumitrescu

Iterative reweighted l1 design of sparse FIR filters
http://eprints.imtlucca.it/id/eprint/1529 (deposited 2013-03-07, last modified 2013-03-12)
Sparse FIR filters have lower implementation complexity than full filters while maintaining a good performance level. This paper describes a new method for designing 1D and 2D sparse filters in the minimax sense, using a mixture of reweighted l1 minimization and greedy iterations. The combination proves quite efficient: after the reweighted l1 minimization stage introduces zero coefficients in bulk, a small number of greedy iterations eliminate a few extra coefficients. Experimental results and a comparison with the latest methods show that the proposed method performs very well in both running speed and the quality of the solutions obtained.
Authors: Cristian Rusu (cristian.rusu@imtlucca.it), Bogdan Dumitrescu

Fast design of efficient dictionaries for sparse representations
http://eprints.imtlucca.it/id/eprint/1528 (deposited 2013-03-07, last modified 2013-03-12)
One of the central issues in the field of sparse representations is the design of overcomplete dictionaries with a fixed sparsity level from a given dataset. This article describes a fast and efficient procedure for the design of such dictionaries. The method implements two ideas: first, a reduction technique is applied to the initial dataset to speed up the subsequent steps; then, the actual training runs a more sophisticated iterative expanding procedure based on K-SVD steps.
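The reweighted l1 stage from the sparse FIR filter record above can be sketched on a generic sparse recovery problem (using weighted soft-thresholding/ISTA as the inner solver; the paper's minimax filter-design formulation is not reproduced):

```python
import numpy as np

def soft(x, t):
    """Element-wise soft-thresholding operator."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def reweighted_l1(A, b, lam=0.1, outer=5, inner=200, eps=1e-3):
    """Reweighted l1 minimization sketch: repeatedly solve a weighted
    l1-regularized least-squares problem (here via ISTA), with weights
    w_i = 1 / (|x_i| + eps) taken from the previous solution, so that
    small coefficients are driven to exact zeros in bulk."""
    x = np.zeros(A.shape[1])
    w = np.ones(A.shape[1])
    L = np.linalg.norm(A, 2) ** 2           # Lipschitz const. of gradient
    for _ in range(outer):
        for _ in range(inner):              # ISTA on the weighted problem
            x = soft(x - (A.T @ (A @ x - b)) / L, lam * w / L)
        w = 1.0 / (np.abs(x) + eps)         # reweight: favor large coefs
    return x
```

The reweighting makes the penalty behave more like an l0 count than a plain l1 norm: large coefficients are barely shrunk, while small ones are zeroed exactly.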
Numerical experiments on image data show the effectiveness of the proposed design strategy.
Authors: Cristian Rusu (cristian.rusu@imtlucca.it)

Clustering before training large datasets - Case study: K-SVD
http://eprints.imtlucca.it/id/eprint/1527 (deposited 2013-03-07, last modified 2013-03-12)
Training and using overcomplete dictionaries has been the subject of many developments in signal processing and sparse representations. The main idea is to train a dictionary able to achieve good sparse representations of the items contained in a given dataset. The most popular approach is the K-SVD algorithm, and in this paper we study its application to large datasets. The main interest is to speed up the training procedure while keeping the representation errors close to specific target values. This goal is reached by using a clustering procedure, called here T-mindot, which reduces the size of the dataset but keeps the most representative data items and a measure of their importance. Experimental simulations compare the running times and representation errors of the training method with and without the clustering procedure, and clearly show how effective T-mindot is.
Authors: Cristian Rusu (cristian.rusu@imtlucca.it)

Clustering large datasets - bounds and applications with K-SVD
http://eprints.imtlucca.it/id/eprint/1525 (deposited 2013-03-07, last modified 2013-03-12)
This article presents a clustering method, T-mindot, used to reduce the dimension of datasets in order to diminish the running time of training algorithms. T-mindot is applied before the K-SVD algorithm in the context of sparse representations for the design of overcomplete dictionaries. Simulations on image data show the efficiency of the proposed method, which leads to a substantial reduction in the execution time of K-SVD while keeping the representation performance of dictionaries designed using the original dataset.
Authors: Cristian Rusu (cristian.rusu@imtlucca.it)

Classification of music genres using sparse representations in overcomplete dictionaries
http://eprints.imtlucca.it/id/eprint/1524 (deposited 2013-03-07, last modified 2013-03-12)
This paper presents a simple but efficient and robust method for music genre classification that utilizes sparse representations in overcomplete dictionaries. The training step creates dictionaries, using the K-SVD algorithm, in which data corresponding to a particular music genre have a sparse representation. In the classification step, the Orthogonal Matching Pursuit (OMP) algorithm is used to separate feature vectors consisting only of Linear Predictive Coding (LPC) coefficients. The paper analyzes in detail a popular case study from the literature, the ISMIR 2004 database. Using the presented method, the correct classification rate over the 6 music genres is 85.59%, a result comparable with the best published so far.
Authors: Cristian Rusu (cristian.rusu@imtlucca.it)
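The classification step, assigning a feature vector to the genre whose dictionary yields the smallest sparse reconstruction error, can be sketched with a generic OMP-based classifier (not the authors' implementation; per-genre dictionaries would be trained with K-SVD on LPC features):

```python
import numpy as np

def omp_error(D, y, k):
    """Residual norm of a greedy k-sparse approximation of y in the
    dictionary D (columns assumed unit norm), via Orthogonal Matching
    Pursuit: repeatedly pick the atom most correlated with the residual,
    then refit on all selected atoms by least squares."""
    idx, r = [], y.copy()
    for _ in range(k):
        idx.append(int(np.argmax(np.abs(D.T @ r))))
        coef, *_ = np.linalg.lstsq(D[:, idx], y, rcond=None)
        r = y - D[:, idx] @ coef
    return float(np.linalg.norm(r))

def classify(dictionaries, y, k=3):
    """Assign y to the class whose dictionary gives the smallest
    k-sparse OMP reconstruction error."""
    return int(np.argmin([omp_error(D, y, k) for D in dictionaries]))
```

A vector drawn from one genre's training distribution is sparsely representable in that genre's dictionary but not (as well) in the others', which is what the minimum-residual rule exploits.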