IMT Institutional Repository: no conditions. Results ordered by Date Deposited (descending).
2024-06-20T01:42:55Z
EPrints
http://eprints.imtlucca.it/
2022-12-20T15:51:26Z
2023-01-02T13:42:36Z
http://eprints.imtlucca.it/id/eprint/4084
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/4084
2022-12-20T15:51:26Z
Value creation mechanisms of cloud computing: a conceptual framework
The management literature has analysed Cloud Computing mainly by focusing on the impact of its technical properties (e.g. accessibility, elasticity, scaling) on firms' dynamics, without explicitly addressing the dynamic generation of value streams. With this paper we fill this gap, linking the unexplored potential sources of value of Cloud Computing with the literature on business model value creation. We define a conceptual model that integrates existing technical knowledge on Cloud Computing with the understudied value creation mechanisms, dynamically representing their interaction. Our approach is based on a mixed methodology built on three pillars:
1) a systematic literature review of the properties of Cloud Computing that affect firms' management, using value generation within business models as the unit of analysis, in order to identify possible gaps;
2) multiple case studies, following the Gioia methodology, to inductively derive the emerging properties from 20 startups in the AWS business case repository;
3) a dynamic representation of the interplay between the technical properties extracted from the literature review and the emergent properties, focusing on the generation of value streams.
The results confirm that the leverage potential of Cloud Computing goes well beyond its technical advantages, reaching deep into the business model and enabling different sources of value creation.
Leonardo Mazzoni
leonardo.mazzoni@imtlucca.it
Gabriele Costa
gabriele.costa@imtlucca.it
2018-03-12T09:18:45Z
2018-03-12T09:18:45Z
http://eprints.imtlucca.it/id/eprint/4009
2018-03-12T09:18:45Z
Direct Optimal Control and Model Predictive Control
Mario Zanon
mario.zanon@imtlucca.it
Andrea Boccia
Vryan Gil S. Palma
Sonja Parenti
Ilaria Xausa
2018-03-12T08:57:17Z
2018-03-12T08:57:17Z
http://eprints.imtlucca.it/id/eprint/4047
2018-03-12T08:57:17Z
Efficient Nonlinear Model Predictive Control Formulations for Economic Objectives with Aerospace and Automotive Applications
This thesis is concerned with optimal control techniques for optimal trajectory planning and for real-time control and estimation. Optimal control is a powerful framework that enjoys increasing popularity due to its applicability to a wide class of problems and its ability to deliver solutions to very complicated problems that cannot be solved intuitively.
The downside of optimal control is the computational burden required to compute the optimal solution. Thanks to recent algorithmic developments and increases in computational power, this burden has been significantly reduced over the last decades. In order to guarantee the effectiveness and reliability of the solver, three main components are necessary: fast and robust algorithms, a good problem formulation, and a mathematical model tailored to optimisation. Indeed, both the model and the optimal control problem can usually be formulated in many different ways, some of which are better suited for optimisation. In this thesis we are concerned with all three components, with a focus on the last two.
Concerning the problem formulation, we propose practical approaches for formulating optimal control, MPC and MHE problems in an optimisation-friendly fashion. Moreover, we analyse the stability properties of various MPC formulations, with a focus on so-called economic MPC, for which the stability theory is still developing.
On the algorithmic level, we review the literature on optimisation and optimal control, and we prove that it is possible to tune tracking MPC formulations so as to locally obtain the same behaviour as economic MPC. The main advantages of tuned tracking MPC over economic MPC are that closed-loop stability is easier to guarantee and that efficient real-time algorithms remain applicable.
On the modelling side, we propose an approach for deriving models of reduced complexity and reduced nonlinearity for multibody mechanical systems. The use of nonminimal coordinates and DAE models enlarges the range of modelling possibilities and allows the control engineer to derive models which are better suited for optimisation. In order to provide an easy framework for the model derivation, we extend the Euler-Lagrange approach and we demonstrate how to implement the proposed approach in practice.
In order to demonstrate the effectiveness of the proposed techniques, we deploy them for two applications: tethered airplanes and autonomous vehicles. Both examples are characterised by fast nonlinear constrained dynamics for which simple controllers cannot be deployed.
Tethered airplanes are of particular interest because they are an emerging technology for wind energy production. In this thesis, we use optimal control to design trajectories which extract maximum energy from the airmass and compare single and dual-airfoil configurations. We moreover demonstrate the effectiveness of MPC and MHE for controlling the system in real time and apply the new tuning procedure for tracking MPC to show its ability to locally approximate economic MPC.
Mario Zanon
mario.zanon@imtlucca.it
2018-03-09T13:34:41Z
2018-03-09T13:34:41Z
http://eprints.imtlucca.it/id/eprint/4034
2018-03-09T13:34:41Z
Modularities maximization in multiplex network analysis using Many-Objective Optimization
Nowadays, social network analysis receives considerable attention from academia, industry and governments. Practical applications such as community detection and centrality in economic networks have become main issues in this research area. Community detection in complex network analysis is mainly accomplished by the Louvain method, which heuristically seeks a partitioning with maximal modularity. Traditionally, community detection has been applied to networks with homogeneous semantics, for instance indicating friendship relations between people or import-export relationships between countries. However, we increasingly deal with more complex, so-called multiplex networks. In a multiplex network the set of nodes stays the same, while there are multiple sets of edges. In the analysis we would like to identify communities, but different edge sets give rise to different modularity-optimizing partitions into communities. We propose to view community detection in such multilayer networks as a many-objective optimization problem. To this end we apply evolutionary many-objective optimization and compute the Pareto fronts between the modularities of different layers. We then group the objective functions into communities in order to better understand the relationship and dependence between different layers (conflict, indifference, complementarity). As a case study, we compute the Pareto fronts for model problems and for economic data sets, showing how to find the modularity trade-offs between different layers.
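As a minimal illustration of why modularity is a per-layer objective, the pure-Python sketch below (not the paper's evolutionary algorithm; the toy layers and the partition are invented for the example) evaluates one partition against two edge sets over the same node set:

```python
def modularity(edges, communities):
    """Newman modularity Q of a node partition for one edge set (one layer)."""
    m = len(edges)
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    comm_of = {n: i for i, nodes in enumerate(communities) for n in nodes}
    q = 0.0
    for u, v in edges:
        if comm_of[u] == comm_of[v]:
            q += 1.0 / m                      # fraction of intra-community edges
    for nodes in communities:
        d = sum(deg.get(n, 0) for n in nodes)
        q -= (d / (2.0 * m)) ** 2             # expected fraction under the null
    return q

# Two layers over the same four nodes: the partition that is good for
# layer 1 is poor for layer 2 -- the trade-off a Pareto front captures.
layer1 = [(0, 1), (2, 3)]   # pairs {0,1} and {2,3} tightly linked
layer2 = [(0, 2), (1, 3)]   # cross pairing
partition = [{0, 1}, {2, 3}]
print(modularity(layer1, partition))  # 0.5 (maximal for two disjoint dyads)
print(modularity(layer2, partition))  # -0.5
```

On a Pareto front, partitions like this one would be traded off against partitions favouring the second layer.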
Asep Maulana
Valerio Gemmetto
Diego Garlaschelli
diego.garlaschelli@imtlucca.it
Iryna Yevesyeva
Michael Emmerich
2018-03-05T16:36:01Z
2018-03-05T16:36:01Z
http://eprints.imtlucca.it/id/eprint/3953
2018-03-05T16:36:01Z
Programming of CAS Systems by Relying on Attribute-Based Communication
In most distributed systems, named connections (i.e., channels) are used as the means for programming interaction between communicating partners. These connections are low level and usually totally independent of the knowledge, the status, the capabilities, in one word, of the attributes of the interacting partners. We have recently introduced a calculus, called AbC, in which interactions among agents are dynamically established by taking into account "connections" as determined by predicates over agent attributes. In this paper, we present a Java run-time environment that has been developed to support modeling and programming of collective adaptive systems by relying on the communication primitives of the AbC calculus. Systems are described as sets of parallel components; each component is equipped with a set of attributes, and communications among components take place in an implicit multicast fashion. By means of a number of examples, we also show how opportunistic behaviors, achieved by run-time attribute updates, can be exploited to express different communication and interaction patterns and to program challenging case studies.
Yehia Moustafa Abd Alrahman
yehia.abdalrahman@imtlucca.it
Rocco De Nicola
r.denicola@imtlucca.it
Michele Loreti
2018-03-05T16:26:29Z
2018-03-05T16:26:29Z
http://eprints.imtlucca.it/id/eprint/3952
2018-03-05T16:26:29Z
Initial Algebra for a System of Right-Linear Functors
In 2003 we showed that right-linear systems of equations over regular expressions, when interpreted in a category of trees, have a solution whenever they enjoy a specific property that we called hierarchicity and that is instrumental to avoid critical mutual recursive definitions. In this note, we prove that a right-linear system of polynomial endofunctors on a cocartesian monoidal closed category which enjoys parameterized left list arithmeticity, has an initial algebra, provided it satisfies a property similar to hierarchicity.
Anna Labella
Rocco De Nicola
r.denicola@imtlucca.it
2018-03-05T16:19:42Z
2018-03-05T16:20:14Z
http://eprints.imtlucca.it/id/eprint/3951
2018-03-05T16:19:42Z
Verifying Properties of Systems Relying on Attribute-Based Communication
AbC is a process calculus designed for describing collective adaptive systems, whose distinguishing feature is a communication mechanism relying on predicates over attributes exposed by components. A novel approach to the analysis of concurrent systems modelled as AbC terms is presented; it relies on the UMC model checker, a tool based on modelling concurrent systems as communicating UML-like state machines. A structural translation from AbC specifications to the UMC internal format is provided and used as the basis for the analysis. Three different algorithmic solutions of the well-studied stable marriage problem are described in AbC and their translations are analysed with UMC. It is shown how the proposed approach can be exploited to identify emerging properties of systems and unwanted behaviour.
Rocco De Nicola
r.denicola@imtlucca.it
Tan Duong
Omar Inverso
Franco Mazzanti
2018-03-05T16:14:42Z
2018-03-05T16:14:42Z
http://eprints.imtlucca.it/id/eprint/3950
2018-03-05T16:14:42Z
AErlang at Work
AErlang is an extension of the Erlang programming language enriched with attribute-based communication. In AErlang, the Erlang send and receive constructs are extended to permit partner selection relying on predicates over sets of attributes. AErlang avoids the limitations of Erlang's point-to-point communication, making it possible to model some of the sophisticated interaction features often observed in modern systems, such as anonymity and adaptation. Using our prototype extension, we show how the extended communication pattern can capture non-trivial process interaction in a natural and intuitive way. We also sketch a modelling technique aimed at automatically verifying AErlang systems, and discuss how it can be used to check some key properties of the considered case study.
Rocco De Nicola
Tan Duong
Omar Inverso
Catia Trubiani
2018-03-05T16:07:40Z
2018-03-05T16:07:40Z
http://eprints.imtlucca.it/id/eprint/3949
2018-03-05T16:07:40Z
AErlang: Empowering Erlang with Attribute-Based Communication
Attribute-based communication provides a novel mechanism to dynamically select groups of communicating entities by relying on predicates over their exposed attributes. In this paper, we embed the basic primitives for attribute-based communication into the functional concurrent language Erlang to obtain what we call AErlang, for attribute Erlang. To evaluate our prototype in terms of performance overhead and scalability we consider solutions of the Stable Marriage Problem based on predicates over attributes and on the classical preference lists, and use them to compare the runtime performance of AErlang with those of Erlang and X10. The outcome of the comparison shows that the overhead introduced by the new communication primitives is acceptable, and our prototype can compete performance-wise with an ad-hoc parallel solution in X10.
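The attribute-based partner selection that AErlang adds to send/receive can be sketched, independently of Erlang, as a predicate-guarded multicast; the Python classes and the buyer/seller attributes below are invented for the illustration and do not reflect AErlang's actual API:

```python
class Component:
    """A process-like entity exposing a set of attributes."""
    def __init__(self, name, **attributes):
        self.name = name
        self.attributes = attributes
        self.inbox = []

class System:
    """Toy attribute-based multicast: a message reaches every component
    whose exposed attributes satisfy the sender's predicate."""
    def __init__(self, components):
        self.components = components

    def send(self, sender, message, predicate):
        for c in self.components:
            if c is not sender and predicate(c.attributes):
                c.inbox.append((sender.name, message))

# Partner selection by predicate, not by channel or process id.
alice = Component("alice", role="seller", item="book")
bob = Component("bob", role="buyer", budget=10)
carol = Component("carol", role="buyer", budget=3)
system = System([alice, bob, carol])
system.send(alice, "book for 5",
            lambda a: a.get("role") == "buyer" and a.get("budget", 0) >= 5)
print([c.name for c in system.components if c.inbox])  # ['bob']
```

The sender never names its partners; receivers are selected at delivery time by their current attribute values, which is what enables anonymity and adaptation.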
Rocco De Nicola
r.denicola@imtlucca.it
Tan Duong
Omar Inverso
Catia Trubiani
2018-03-05T16:02:13Z
2018-03-05T16:02:13Z
http://eprints.imtlucca.it/id/eprint/3948
2018-03-05T16:02:13Z
Smart Contract Negotiation in Cloud Computing
A smart contract is the formalisation of an agreement whose terms are automatically enforced by relying on a transaction protocol, while minimising the need for intermediaries. Such contracts not only specify the service and its quality but also the possible changes at runtime of the terms of the agreement. Although smart contracts provide a great deal of flexibility, analysing their compatibility and reaching agreements with this level of dynamism is considerably more challenging, due to the freedom of clients and providers in formulating needs and offers. We introduce a formal language to specify interactions between offers and requests and present a methodology for the autonomous negotiation of smart contracts, which analyses the cost and the necessary changes for reaching an agreement. Moreover, we describe a set of experiments that provides insights on the relative cost of dynamism in negotiating smart contracts, and we compare the request/offer matching rates of our solution with those of related works.
Vincenzo Scoca
Rafael Brundo Uriarte
Rocco De Nicola
r.denicola@imtlucca.it
2018-03-05T15:44:57Z
2018-03-05T15:44:57Z
http://eprints.imtlucca.it/id/eprint/3946
2018-03-05T15:44:57Z
Programming the Interactions of Collective Adaptive Systems by Relying on Attribute-based Communication
Collective adaptive systems are emerging computational systems consisting of a large number of interacting components and featuring complex behaviour. These systems are usually distributed, heterogeneous, decentralised and interdependent, and operate in dynamic and possibly unpredictable environments. Finding ways to understand and design these systems and, most of all, to model the interactions of their components is a difficult but important endeavour. In this article we propose a language-based approach for programming the interactions of collective adaptive systems by relying on attribute-based communication, a paradigm that permits a group of partners to communicate by considering their run-time properties and capabilities. We introduce AbC, a foundational calculus for attribute-based communication, and show how its linguistic primitives can be used to program a complex and sophisticated variant of the well-known problem of stable allocation in Content Delivery Networks. Other interesting case studies from the realm of collective adaptive systems are also considered. We also illustrate the expressive power of attribute-based communication by showing the natural encoding of other existing communication paradigms into AbC.
Yehia Moustafa Abd Alrahman
yehia.abdalrahman@imtlucca.it
Rocco De Nicola
r.denicola@imtlucca.it
Michele Loreti
2018-03-05T15:40:55Z
2018-03-05T15:40:55Z
http://eprints.imtlucca.it/id/eprint/3945
2018-03-05T15:40:55Z
A Behavioural Theory for Interactions in Collective-Adaptive Systems
We propose a process calculus, named AbC, to study the behavioural theory of interactions in collective-adaptive systems by relying on attribute-based communication. An AbC system consists of a set of parallel components each of which is equipped with a set of attributes. Communication takes place in an implicit multicast fashion, and interaction among components is dynamically established by taking into account "connections" as determined by predicates over their attributes. The structural operational semantics of AbC is based on Labeled Transition Systems that are also used to define bisimilarity between components. Labeled bisimilarity is in full agreement with a barbed congruence, defined by simple basic observables and context closure. The introduced equivalence is used to study the expressiveness of AbC in terms of encoding broadcast channel-based interactions and to establish formal relationships between system descriptions at different levels of abstraction.
Yehia Moustafa Abd Alrahman
yehia.abdalrahman@imtlucca.it
Rocco De Nicola
r.denicola@imtlucca.it
Michele Loreti
2018-01-16T10:14:31Z
2018-01-16T10:14:31Z
http://eprints.imtlucca.it/id/eprint/3863
2018-01-16T10:14:31Z
Uncertainty-aware demand management of water distribution networks in deregulated energy markets
We present an open-source solution for the operational control of drinking water distribution networks which accounts for the inherent uncertainty in water demand and electricity prices in the day-ahead market of a volatile deregulated economy. As increasingly more energy markets adopt this trading scheme, the operation of drinking water networks requires uncertainty-aware control approaches that mitigate the effect of volatility and result in an economic and safe operation of the network that meets the consumers' need for uninterrupted water supply. We propose the use of scenario-based stochastic model predictive control: an advanced control methodology whose considerable computational cost is overcome by harnessing the parallelization capabilities of graphics processing units (GPUs) and using a massively parallelizable algorithm based on the accelerated proximal gradient method.
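The accelerated proximal gradient method underlying the solver can be sketched in its scalar form; the toy objective 0.5*(x-3)^2 + |x| below is an assumption for illustration, far from the GPU-parallel scenario-based problem the paper solves:

```python
def soft_threshold(v, t):
    """Proximal operator of t*|x| (the soft-thresholding map)."""
    return (v - t) if v > t else (v + t) if v < -t else 0.0

def fista(grad_f, prox_g, L, x0, iters=100):
    """Accelerated proximal gradient (FISTA) for min f(x) + g(x), scalar case:
    gradient step on the smooth part f, prox step on the nonsmooth part g,
    plus Nesterov-style momentum via the auxiliary point y."""
    x, y, t = x0, x0, 1.0
    for _ in range(iters):
        x_new = prox_g(y - grad_f(y) / L, 1.0 / L)
        t_new = (1.0 + (1.0 + 4.0 * t * t) ** 0.5) / 2.0
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)
        x, t = x_new, t_new
    return x

# min 0.5*(x - 3)^2 + |x| has closed-form minimizer soft_threshold(3, 1) = 2
x_star = fista(grad_f=lambda x: x - 3.0, prox_g=soft_threshold, L=1.0, x0=0.0)
print(round(x_star, 6))  # 2.0
```

The massive parallelism in the paper comes from evaluating such steps over many demand/price scenarios at once; the iteration skeleton is the same.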
Pantelis Sopasakis
Ajay Kumar Sampathirao
Alberto Bemporad
alberto.bemporad@imtlucca.it
Panagiotis Patrinos
panagiotis.patrinos@imtlucca.it
2017-10-31T09:00:18Z
2017-10-31T09:00:18Z
http://eprints.imtlucca.it/id/eprint/3806
2017-10-31T09:00:18Z
Long-term EVA degradation simulation: climatic zones comparison and possible revision of accelerated tests
Mariacristina Gagliardi
mariacristina.gagliardi@imtlucca.it
Marco Paggi
marco.paggi@imtlucca.it
2017-09-28T06:31:59Z
2017-09-28T06:31:59Z
http://eprints.imtlucca.it/id/eprint/3805
2017-09-28T06:31:59Z
Experimental characterization and numerical simulation of humidity-induced damage in PV cells
Mariacristina Gagliardi
mariacristina.gagliardi@imtlucca.it
Irene Berardone
irene.berardone@polito.it
Marco Paggi
marco.paggi@imtlucca.it
2017-09-28T06:24:42Z
2017-09-28T06:32:17Z
http://eprints.imtlucca.it/id/eprint/3804
2017-09-28T06:24:42Z
Computational and experimental characterization of thermo-oxidative and corrosion phenomena in photovoltaic modules
Irene Berardone
irene.berardone@polito.it
Mariacristina Gagliardi
mariacristina.gagliardi@imtlucca.it
Pietro Lenarda
pietro.lenarda@imtlucca.it
Marco Paggi
marco.paggi@imtlucca.it
2017-09-26T09:19:39Z
2017-09-26T09:19:39Z
http://eprints.imtlucca.it/id/eprint/3765
2017-09-26T09:19:39Z
EGAC: a genetic algorithm to compare chemical reaction networks
Discovering relations between chemical reaction networks (CRNs) is a relevant problem in computational systems biology: for model reduction, to explain whether a given system can be seen as an abstraction of another one; and for model comparison, useful to establish an evolutionary path from simpler networks to more complex ones. This is also related to foundational issues in computer science regarding program equivalence, in light of the established interpretation of a CRN as a kernel programming language for concurrency. Criteria for deciding whether two CRNs can be formally related have recently been developed, but these require that a candidate mapping be provided. Automatically finding candidate mappings is very hard in general, since the search space essentially consists of all possible partitions of a set. In this paper we tackle this problem by developing a genetic algorithm for a class of CRNs called influence networks, which can be used to model a variety of biological systems including cell-cycle switches and gene networks. An extensive numerical evaluation shows that our approach can successfully establish relations between influence networks from the literature which cannot be found by exact algorithms due to their large computational requirements.
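A genetic search over partitions, the core idea of the approach, can be sketched as follows; the label-vector encoding, the toy pairing objective, and all parameter values are assumptions for illustration and stand in for the paper's CRN-comparison criterion:

```python
import random

def random_labels(n, k):
    """A partition of n elements into at most k blocks, as a label vector."""
    return [random.randrange(k) for _ in range(n)]

def fitness(labels, same_pairs):
    """Toy stand-in objective: reward putting required pairs in one block."""
    return sum(1 for i, j in same_pairs if labels[i] == labels[j])

def mutate(labels, k):
    child = labels[:]
    child[random.randrange(len(child))] = random.randrange(k)
    return child

def crossover(a, b):
    """Uniform crossover: each element inherits its label from either parent."""
    return [random.choice(pair) for pair in zip(a, b)]

def ga(n, k, same_pairs, pop=30, gens=200, seed=0):
    random.seed(seed)
    population = [random_labels(n, k) for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda s: fitness(s, same_pairs), reverse=True)
        elite = population[: pop // 2]                 # keep the best half
        offspring = [mutate(crossover(random.choice(elite),
                                      random.choice(elite)), k)
                     for _ in range(pop - len(elite))]
        population = elite + offspring
    return max(population, key=lambda s: fitness(s, same_pairs))

best = ga(n=6, k=3, same_pairs=[(0, 1), (2, 3), (4, 5)])
print(fitness(best, [(0, 1), (2, 3), (4, 5)]))  # 3 when all pairs are grouped
```

In EGAC the fitness would instead score how well a candidate mapping relates the two influence networks; the encoding of partitions and the elitist loop are the generic part.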
Stefano Tognazzi
Mirco Tribastone
mirco.tribastone@imtlucca.it
Max Tschaikowski
max.tschaikowski@imtlucca.it
Andrea Vandin
andrea.vandin@imtlucca.it
2017-09-04T14:28:17Z
2017-09-04T14:28:17Z
http://eprints.imtlucca.it/id/eprint/3777
2017-09-04T14:28:17Z
Optimal Varicella immunization programs for both Varicella and Herpes Zoster Control
A main obstacle to the widespread adoption of varicella immunization in Europe has been the fear of a subsequent boom in natural herpes zoster, caused by the decline in the protective effect of natural immunity boosting due to reduced virus circulation. We apply optimal control to simple models of VZV transmission and reactivation to investigate the existence and feasibility of temporal paths of varicella childhood immunization that are optimal in controlling both varicella and zoster. We analyze the optimality system numerically, focusing on the role played by the structure of the cost functional, the relative cost of zoster versus varicella, and the length of the planning horizon. We show that optimal programs exist but will mostly be unfeasible in real public health contexts due to their complex temporal profiles. This complexity is the consequence of the intrinsically antagonistic nature of varicella immunization programs aimed at controlling both varicella and herpes zoster. However, we show that gradually increasing, smooth (and thereby feasible) vaccination schedules can perform much better than routine programs with constant vaccine uptake. Moreover, we show the optimal temporal profiles of feasible immunization programs targeting with priority the mitigation of the post-immunization natural zoster boom.
Monica Betta
monica.betta@imtlucca.it
Marco Laurino
Andrea Pugliese
Giorgio Guzzetta
Alberto Landi
Piero Manfredi
2017-08-08T09:06:01Z
2017-08-08T09:06:01Z
http://eprints.imtlucca.it/id/eprint/3766
2017-08-08T09:06:01Z
ERODE: A Tool for the Evaluation and Reduction of Ordinary Differential Equations
We present ERODE, a multi-platform tool for the solution and exact reduction of systems of ordinary differential equations (ODEs). ERODE supports two recently introduced, complementary equivalence relations over ODE variables: forward differential equivalence yields a self-consistent aggregate system where each ODE gives the cumulative dynamics of the sum of the original variables in the respective equivalence class; backward differential equivalence identifies variables that have identical solutions whenever starting from the same initial conditions. As a back-end, ERODE uses the well-known Z3 SMT solver to compute the largest equivalence that refines a given initial partition of ODE variables. In the special case of ODEs with polynomial derivatives of degree at most two (covering affine systems and elementary chemical reaction networks), it implements a more efficient partition-refinement algorithm in the style of Paige and Tarjan. ERODE comes with a rich development environment based on the Eclipse plug-in framework offering: (i) seamless project management; (ii) a fully-featured text editor; and (iii) importing and exporting capabilities.
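The intuition behind backward differential equivalence (identical solutions from identical initial conditions) can be checked numerically; the three-variable linear system below is an invented example, and this sketch is not ERODE's SMT-based or partition-refinement algorithm:

```python
def euler(deriv, x0, dt=0.001, steps=2000):
    """Plain explicit Euler integration of dx/dt = deriv(x)."""
    x = list(x0)
    for _ in range(steps):
        dx = deriv(x)
        x = [xi + dt * di for xi, di in zip(x, dx)]
    return x

# dx0/dt = -x0 + x2,  dx1/dt = -x1 + x2,  dx2/dt = -2*x2.
# x0 and x1 have identical derivative expressions once they are identified,
# so from equal initial conditions their solutions coincide for all time:
# the intuition behind backward differential equivalence.
def deriv(x):
    return [-x[0] + x[2], -x[1] + x[2], -2.0 * x[2]]

x = euler(deriv, [1.0, 1.0, 0.5])
print(x[0] == x[1])  # True: x0 and x1 stay equal step by step
```

ERODE proves such equivalences symbolically and for the largest possible partition; the numerical check above only illustrates what the equivalence guarantees.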
Luca Cardelli
Mirco Tribastone
mirco.tribastone@imtlucca.it
Max Tschaikowski
max.tschaikowski@imtlucca.it
Andrea Vandin
andrea.vandin@imtlucca.it
2017-08-08T07:44:36Z
2017-08-08T07:44:36Z
http://eprints.imtlucca.it/id/eprint/3764
2017-08-08T07:44:36Z
Fluid Analysis of Spatio-Temporal Properties of Agents in a Population Model
We consider large stochastic population models in which heterogeneous agents interact locally and move in space. These models are very common, e.g. in the context of mobile wireless networks, crowd dynamics and traffic management, but they are typically very hard to analyze, even when space is discretized into a grid. Here we consider individual agents and look at their properties, e.g. quality-of-service metrics in mobile networks. Leveraging recent results on the combination of stochastic approximation with formal verification, and on the fluid approximation of spatio-temporal population processes, we devise a novel mean-field based approach to check such behaviors, which requires the solution of a low-dimensional set of Partial Differential Equations and is shown to be much faster than simulation. We prove the correctness of the method and validate it on a mobile peer-to-peer network example.
Luca Bortolussi
Max Tschaikowski
max.tschaikowski@imtlucca.it
2017-08-07T10:21:11Z
2017-08-07T10:31:16Z
http://eprints.imtlucca.it/id/eprint/3762
2017-08-07T10:21:11Z
Inferring monopartite projections of bipartite networks: an entropy-based approach
Bipartite networks are currently regarded as providing a major insight into the organization of many real-world systems, unveiling the mechanisms driving the interactions between distinct groups of nodes. One of the most important issues encountered when modeling bipartite networks is devising a way to obtain a (monopartite) projection on the layer of interest which preserves as much as possible the information encoded into the original bipartite structure. In the present paper we propose an algorithm to obtain statistically-validated projections of bipartite networks, according to which any two nodes sharing a statistically-significant number of neighbors are linked. Since assessing the statistical significance of node similarity requires a proper statistical benchmark, here we consider a set of four null models, defined within the exponential random graph framework. Our algorithm outputs a matrix of link-specific p-values, from which a validated projection is straightforwardly obtainable upon running a multiple hypothesis testing procedure. Finally, we test our method on an economic network (i.e. the countries-products World Trade Web representation) and a social network (i.e. MovieLens, collecting the users' ratings of a list of movies). In both cases non-trivial communities are detected: while projecting the World Trade Web on the countries layer reveals modules of similarly-industrialized nations, projecting it on the products layer allows communities characterized by an increasing level of complexity to be detected; in the second case, projecting MovieLens on the films layer allows clusters of movies whose affinity cannot be fully accounted for by genre similarity to be identified.
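The validation pipeline (count shared neighbors, attach a p-value, keep links below a threshold) can be sketched with a simplified null model; here a hypergeometric benchmark with fixed degrees stands in for the paper's exponential random graph ensembles, and the tiny bipartite network is invented. A real application would also correct the threshold for multiple hypothesis testing:

```python
from math import comb

def pvalue_shared(n, d_i, d_j, shared):
    """P(X >= shared) with X hypergeometric: the overlap of two random
    neighbor sets of sizes d_i and d_j drawn from n opposite-layer nodes."""
    total = comb(n, d_j)
    tail = sum(comb(d_i, k) * comb(n - d_i, d_j - k)
               for k in range(shared, min(d_i, d_j) + 1))
    return tail / total

# Bipartite structure: keys = layer of interest, values = opposite-layer ids.
neighbors = {"a": {0, 1, 2, 3}, "b": {0, 1, 2, 4}, "c": {5, 6}}
n_opposite = 20      # size of the opposite layer
alpha = 0.05
validated = []
for u in neighbors:
    for v in neighbors:
        if u < v:
            s = len(neighbors[u] & neighbors[v])
            p = pvalue_shared(n_opposite, len(neighbors[u]),
                              len(neighbors[v]), s)
            if p <= alpha:
                validated.append((u, v))
print(validated)  # [('a', 'b')]: only a and b share significantly many neighbors
```

Replacing the hypergeometric benchmark by a degree-constrained maximum-entropy ensemble, as the paper does, changes the p-value computation but not the overall projection scheme.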
Fabio Saracco
fabio.saracco@imtlucca.it
Mika J. Straka
mika.straka@imtlucca.it
Riccardo Di Clemente
Andrea Gabrielli
Guido Caldarelli
guido.caldarelli@imtlucca.it
Tiziano Squartini
tiziano.squartini@imtlucca.it
2017-08-04T11:50:27Z
2017-08-04T11:50:27Z
http://eprints.imtlucca.it/id/eprint/3760
2017-08-04T11:50:27Z
A SOM-based Chan–Vese model for unsupervised image segmentation
Active Contour Models (ACMs) constitute an efficient energy-based image segmentation framework. They usually treat the segmentation problem as an optimization problem, formulated in terms of a suitable functional constructed in such a way that its minimum is achieved in correspondence with a contour that is a close approximation of the actual object boundary. However, for existing ACMs, handling images that contain objects characterized by many different intensities still represents a challenge. In this paper, we propose a novel ACM that combines, in a global and unsupervised way, the advantages of the Self-Organizing Map (SOM) within the level set framework of a state-of-the-art unsupervised global ACM, the Chan–Vese (C–V) model. We term our proposed model the SOM-based Chan–Vese (SOMCV) active contour model. It works by explicitly integrating the global information coming from the weights (prototypes) of the neurons in a trained SOM to help choose whether to shrink or expand the current contour during the optimization process, which is performed iteratively. The proposed model can handle images that contain objects characterized by complex intensity distributions, and is at the same time robust to additive noise. Experimental results show the high accuracy of the segmentation results obtained by the SOMCV model on several synthetic and real images, when compared to the Chan–Vese model and other image segmentation models.
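The region-competition core of the Chan–Vese fitting energy can be sketched in one dimension; this toy ignores the contour-length penalty and the SOM prototypes that the proposed SOMCV model adds, and the signal is invented:

```python
def chan_vese_1d(intensities, iters=20):
    """Piecewise-constant two-phase fit: alternately update the region
    means c1, c2 and reassign each pixel to the closer mean (the
    zero-length-penalty core of the Chan-Vese energy)."""
    mean = sum(intensities) / len(intensities)
    inside = [v >= mean for v in intensities]      # initial contour: thresholding
    for _ in range(iters):
        ins = [v for v, m in zip(intensities, inside) if m]
        out = [v for v, m in zip(intensities, inside) if not m]
        c1 = sum(ins) / len(ins) if ins else 0.0   # mean inside the contour
        c2 = sum(out) / len(out) if out else 0.0   # mean outside the contour
        inside = [(v - c1) ** 2 <= (v - c2) ** 2 for v in intensities]
    return inside, c1, c2

# A noisy step edge: the region labels settle on the true boundary.
signal = [0.1, 0.2, 0.15, 0.8, 0.9, 0.85]
mask, c1, c2 = chan_vese_1d(signal)
print(mask)  # [False, False, False, True, True, True]
```

In the SOMCV model, the per-pixel decision additionally consults the trained SOM prototypes rather than only the two region means, which is what handles objects with many different intensities.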
Mohammed M. Abdelsamea
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Mohamed Medhat Gaber
2017-08-04T11:47:46Z
2017-08-04T11:47:46Z
http://eprints.imtlucca.it/id/eprint/3759
2017-08-04T11:47:46Z
LQG Online Learning
Optimal control theory and machine learning techniques are combined to formulate and solve in closed form an optimal control formulation of online learning from supervised examples with regularization of the updates. The connections with the classical linear quadratic Gaussian (LQG) optimal control problem, of which the proposed learning paradigm is a nontrivial variation as it involves random matrices, are investigated. The obtained optimal solutions are compared with the Kalman filter estimate of the parameter vector to be learned. It is shown that the proposed algorithm is less sensitive to outliers than the Kalman estimate (thanks to the presence of the regularization term), thus providing estimates that are smoother over time. The basic formulation of the proposed online learning framework refers to a discrete-time setting with a finite learning horizon and a linear model. Various extensions are investigated, including the infinite learning horizon and, via the so-called kernel trick, the case of nonlinear models.
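The qualitative claim about outliers can be illustrated in the scalar case; the update rule below, where a regularization weight shrinks the Kalman-style gain, is a simplified stand-in for the paper's random-matrix LQG formulation:

```python
def online_estimates(data, lam=0.0):
    """Recursive estimate of a constant parameter from noisy samples.
    lam = 0 gives the running mean (the Kalman estimate for a constant
    parameter); lam > 0 adds a regularization that damps each update."""
    w, estimates = 0.0, []
    for t, y in enumerate(data, start=1):
        gain = 1.0 / (t + lam)        # regularization shrinks the gain
        w = w + gain * (y - w)
        estimates.append(w)
    return estimates

# A single large outlier among samples of a parameter whose true value is 1.
data = [1.0, 1.1, 0.9, 10.0, 1.0, 1.05, 0.95]
plain = online_estimates(data, lam=0.0)
damped = online_estimates(data, lam=5.0)
jump = lambda e: max(abs(a - b) for a, b in zip(e, e[1:]))
print(jump(damped) < jump(plain))  # True: regularization smooths the jump
```

The price of the damping is a slower approach to the true value, which is exactly the bias/smoothness trade-off the regularization term controls.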
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Alberto Bemporad
alberto.bemporad@imtlucca.it
Marco Gori
Marcello Sanguineti
2017-08-04T11:42:06Z
2017-08-04T11:42:06Z
http://eprints.imtlucca.it/id/eprint/3758
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3758
2017-08-04T11:42:06Z
Graph-restricted Game Approach for Investigating Human Movement Qualities
A novel computational method for the analysis of expressive full-body movement qualities is introduced, which exploits concepts and tools from graph theory and game theory. The human skeletal structure is modeled as an undirected graph, where the joints are the vertices and the edge set contains both physical and non-physical links. Physical links correspond to connections between adjacent physical body joints (e.g., the forearm, which connects the elbow to the wrist). Non-physical links act as "bridges" between parts of the body not directly connected by the skeletal structure, but sharing very similar feature values. The edge weights depend on features obtained by using Motion Capture data. Then, a mathematical game is constructed over the graph structure, where the vertices represent the players and the edges represent communication channels between them. Hence, the body movement is modeled in terms of a game built on the graph structure. Since the vertices and the edges contribute to the overall quality of the movement, the adopted game-theoretical model is of cooperative nature. A game-theoretical concept, the Shapley value, is exploited as a centrality index to estimate the contribution of each vertex to a shared goal (e.g., to the way a particular movement quality is transferred among the vertices). The proposed method is applied to Motion Capture data of subjects performing expressive movements, recorded in the framework of the H2020-ICT-2015 EU Project WhoLoDance, Project no. 688865. Preliminary results are presented.
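The Shapley value used above as a centrality index can be computed exactly for small graphs by averaging each vertex's marginal contribution over all player orderings. A minimal sketch follows; the characteristic function here (a coalition's worth is the number of edges it fully contains) is one simple illustrative choice, not the feature-based game of the paper.

```python
from itertools import permutations
from math import factorial

# Hedged sketch: exact Shapley values for a toy cooperative game on a
# graph. worth(S) = number of edges with both endpoints in coalition S.

def shapley_values(vertices, edges):
    def worth(coalition):
        s = set(coalition)
        return sum(1 for u, v in edges if u in s and v in s)

    phi = {v: 0.0 for v in vertices}
    # Average the marginal contribution of each vertex over every
    # ordering of the players (feasible only for small vertex sets).
    for order in permutations(vertices):
        seen = []
        for v in order:
            before = worth(seen)
            seen.append(v)
            phi[v] += worth(seen) - before
    n_fact = factorial(len(vertices))
    return {v: phi[v] / n_fact for v in phi}
```

On a three-joint "chain" a-b-c, the middle vertex b receives the largest Shapley value, matching the intuition that it mediates both links; larger skeletons require sampled approximations rather than full enumeration.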
Ksenia Kolykhalova
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Marcello Sanguineti
Antonio Camurri
Gualtiero Volpe
2017-08-04T10:38:09Z
2018-03-08T16:56:04Z
http://eprints.imtlucca.it/id/eprint/3749
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3749
2017-08-04T10:38:09Z
Network reconstruction via density sampling
Reconstructing weighted networks from partial information is necessary in many important circumstances, e.g. for a correct estimation of systemic risk. It has been shown that, in order to achieve an accurate reconstruction, it is crucial to reliably replicate the empirical degree sequence, which is however unknown in many realistic situations. More recently, it has been found that the knowledge of the degree sequence can be replaced by the knowledge of the strength sequence, which is typically accessible, complemented by that of the total number of links, thus considerably relaxing the observational requirements. Here we further relax these requirements and devise a procedure valid when even the total number of links is unavailable. We assume that, apart from the heterogeneity induced by the degree sequence itself, the network is homogeneous, so that its (global) link density can be estimated by sampling subsets of nodes with representative density. We show that the best way of sampling nodes is the random selection scheme, any other procedure being biased towards unrealistically large, or small, link densities. We then introduce in detail our core technique for reconstructing both the topology and the link weights of the unknown network. When tested on real economic and financial data sets, our method achieves a remarkable accuracy and is very robust with respect to the sampled subsets, thus representing a reliable practical tool whenever the available topological information is restricted to small portions of nodes.
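The density-sampling idea can be sketched in a few lines: draw random node subsets, measure the link density of each induced subgraph, and average. The adjacency representation and parameter names below are ours, and the toy setting ignores the degree-heterogeneity correction discussed in the paper.

```python
import random

# Illustrative sketch: estimate a network's global link density from
# the induced subgraphs of uniformly random node subsets.

def sampled_density(adjacency, sample_size, trials, rng):
    """adjacency: dict mapping node -> set of neighbours.
    Returns the average link density over `trials` random subsets."""
    nodes = list(adjacency)
    total = 0.0
    for _ in range(trials):
        subset = rng.sample(nodes, sample_size)
        pairs = sample_size * (sample_size - 1) / 2
        links = sum(1 for i, u in enumerate(subset)
                    for v in subset[i + 1:] if v in adjacency[u])
        total += links / pairs
    return total / trials
```

Uniform random selection matters here: as the abstract notes, sampling schemes that favour high- or low-degree nodes would bias the induced subgraphs towards unrealistically large or small densities.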
Tiziano Squartini
tiziano.squartini@imtlucca.it
Giulio Cimini
giulio.cimini@imtlucca.it
Andrea Gabrielli
Diego Garlaschelli
diego.garlaschelli@imtlucca.it
2017-07-20T10:44:54Z
2017-07-20T10:44:54Z
http://eprints.imtlucca.it/id/eprint/3725
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3725
2017-07-20T10:44:54Z
Predictive Control for Linear and Hybrid Systems
Model Predictive Control (MPC), the dominant advanced control approach in industry over the past twenty-five years, is presented comprehensively in this unique book. With a simple, unified approach, and with attention to real-time implementation, it covers predictive control theory including the stability, feasibility, and robustness of MPC controllers. The theory of explicit MPC, where the nonlinear optimal feedback controller can be calculated efficiently, is presented in the context of linear systems with linear constraints, switched linear systems, and, more generally, linear hybrid systems. Drawing upon years of practical experience and using numerous examples and illustrative applications, the authors discuss the techniques required to design predictive control laws, including algorithms for polyhedral manipulations, mathematical and multiparametric programming, and how to validate the theoretical properties and implement predictive control policies. The most important algorithms feature in an accompanying free online MATLAB toolbox, which allows easy access to sample solutions. Predictive Control for Linear and Hybrid Systems is an ideal reference for graduate, postgraduate and advanced control practitioners interested in theory and/or implementation aspects of predictive control.
Francesco Borrelli
Alberto Bemporad
alberto.bemporad@imtlucca.it
Manfred Morari
2017-06-07T11:10:53Z
2017-06-07T11:10:53Z
http://eprints.imtlucca.it/id/eprint/3711
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3711
2017-06-07T11:10:53Z
Block Placement Strategies for Fault-Resilient Distributed Tuple Spaces: An Experimental Study - (Practical Experience Report)
The tuple space abstraction provides an easy-to-use programming paradigm for distributed applications. Intuitively, it behaves like a distributed shared memory, where applications write and read entries (tuples). When deployed over a wide area network, the tuple space needs to cope efficiently with faults of links and nodes. Erasure coding techniques are increasingly popular for dealing with such catastrophic events, in particular due to their storage efficiency with respect to replication. When a client writes a tuple into the system, it is first striped into k blocks and encoded into n > k blocks, in a fault-redundant manner. Then, any k out of the n blocks are sufficient to reconstruct and read the tuple. This paper presents several strategies for placing those blocks across the set of nodes of a wide area network that together form the tuple space. We present the performance trade-offs of different placement strategies by means of simulations and a Python implementation of a distributed tuple space. Our results reveal important differences in the efficiency of the different strategies, for example in terms of block fetching latency, and show that having some knowledge of the underlying network graph topology is highly beneficial.
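The any-k-of-n property described above can be demonstrated with a toy prime-field code: the k data symbols define a degree-(k-1) polynomial, the n blocks are its evaluations, and any k blocks recover the data by Lagrange interpolation. Production systems typically use Reed-Solomon codes over GF(2^8); the field, function names, and block layout here are purely illustrative.

```python
# Toy k-of-n erasure code over the prime field GF(257).
# Blocks 1..k carry the data itself (systematic); blocks k+1..n are
# parity. Any k of the n blocks reconstruct the original symbols.

P = 257  # prime > 255, so every byte value fits in the field

def _interp_at(points, x):
    """Evaluate the Lagrange polynomial through `points` at x (mod P)."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (x - xj) % P
                den = den * (xi - xj) % P
        # pow(den, P-2, P) is the modular inverse (Fermat's little theorem)
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

def encode(data, n):
    """k data symbols -> n blocks of the form (x, y)."""
    k = len(data)
    points = list(zip(range(1, k + 1), data))
    return [(x, _interp_at(points, x)) for x in range(1, n + 1)]

def decode(blocks, k):
    """Reconstruct the k data symbols from any k distinct blocks."""
    points = blocks[:k]
    return [_interp_at(points, x) for x in range(1, k + 1)]
```

A write would thus scatter the n `(x, y)` blocks over distinct nodes according to one of the placement strategies, and a read fetches whichever k blocks answer fastest.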
Roberta Barbi
Vitaly Buravlev
Claudio Antares Mezzina
claudio.mezzina@imtlucca.it
Valerio Schiavoni
2017-05-08T12:27:18Z
2017-05-08T12:27:18Z
http://eprints.imtlucca.it/id/eprint/3697
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3697
2017-05-08T12:27:18Z
(edited by) Proceedings of the 31st Annual ACM Symposium on Applied Computing, SAC 2016, Special track on service-oriented architectures and programming (SOAP)
The SOAP track aims at bringing together researchers and practitioners having the common objective of transforming Service-Oriented Programming (SOP) into a mature discipline with both solid scientific foundations and mature software engineering development methodologies supported by dedicated tools. From the foundational point of view, many attempts to use formal methods for specification and verification in this setting have been made. Session correlation, service types, contract theories, and communication patterns are only a few examples of the aspects that have been investigated. Moreover, several formal models based upon automata, Petri nets and algebraic approaches have been developed. However, most of these approaches concentrate only on a few features of service-oriented systems in isolation, and a comprehensive approach is still lacking.
From the engineering point of view, there are open issues at many levels. Among others, at the system design level, both traditional approaches based on UML and approaches taking inspiration from Business Process Modelling, e.g. BPMN, are used. At the composition level, orchestration and choreography are continuously being improved both formally and practically, with an evident need for their integration in the development process. At the description and discovery level, there are two separate communities pushing respectively the semantic approach (like ontologies and OWL) and the syntactic one (like WSDL). In particular, the role of discovery engines and protocols is not clear. In this respect, adopted standards are still missing. UDDI looked to be a good candidate, but it is no longer pushed by the main corporations, and its wide adoption seems difficult. Furthermore, a recent implementation platform, the so-called REST services, is emerging and competing with classic Web Services. Finally, features like Quality of Service, security, and dependability need to be taken seriously into account.
SOAP in particular encouraged submissions on what SOP still needs in order to achieve the above goals.
The PC of SOAP 2016 was formed by:
• Farhad Arbab Leiden University and CWI, Amsterdam, NL
• Luís Barbosa University of Minho, Braga, PT
• Massimo Bartoletti Università di Cagliari, IT
• Maurice H. ter Beek ISTI-CNR, Pisa, IT (co-chair)
• Marcello M. Bersani Politecnico di Milano, IT
• Laura Bocchi University of Kent, UK
• Roberto Bruni Università di Pisa, IT
• Marco Carbone IT University of Copenhagen, DK
• Romain Demangeon Université Pierre et Marie Curie, FR
• Schahram Dustdar Vienna University of Technology, AT
• Alessandra Gorla IMDEA Software Institute, Madrid, ES
• Vasileios Koutavas Trinity College Dublin, IE
• Alberto Lluch Lafuente Technical University of Denmark, DK
• Manuel Mazzara Innopolis University, RU
• Hernán Melgratti University of Buenos Aires, AR (co-chair)
• Nicola Mezzetti University of Trento, IT
• Corrado Moiso Telecom Italia, IT
• Alberto Núñez Universidad Complutense de Madrid, ES
• Jorge A. Perez University of Groningen, NL
• Gustavo Petri Purdue University, USA
• António Ravara New University of Lisbon, PT
• Steve Ross-Talbot Cognizant Technology Solutions, UK
• Gwen Salaün Inria Grenoble - Rhône-Alpes, FR
• Francesco Tiezzi Università di Camerino, IT
• Hugo Torres Vieira IMT Lucca, IT (co-chair)
• Emilio Tuosto University of Leicester, UK
• Massimo Vecchio Università degli Studi eCampus, IT
• Peter Wong Travelex, UK
• Yongluan Zhou University of Southern Denmark, DK
SOAP 2016 received a total of 16 submissions. Each submission was reviewed by at least 4 PC members, the vast majority even by 5 PC members. All papers were subject to an animated general discussion among the PC members (with over 100 posts in the message boards). In the end, the PC decided to select only the following four papers for an oral presentation at the conference (an acceptance rate of 25%):
• JxActinium: a runtime manager for secure REST-ful COAP applications working over JXTA by Filippo Battaglia, Giancarlo Iannizzotto, and Lucia Lo Bello
• Improving QoS Delivered by WS-BPEL Scenario Adaptation through Service Execution Parallelization by Dionisis Margaris, Costas Vassilakis, and Panagiotis Georgiadis
• QoS-aware Adaptation for Complex Event Service by Feng Gao, Muhammad Ali, Edward Curry, and Alessandra Mileo
• Service functional testing automation with intelligent scheduling and planning by Lom Messan Hillah, Ariele-Paolo Maesano, Libero Maesano, Fabio De Rosa, Fabrice Kordon, and Pierre-Henri Wuillemin
We would like to thank the PC members, and a few external reviewers, for their detailed reports and the stimulating discussions during the reviewing phase; the authors of submitted papers, the session chairs and the attendees, for contributing to the success of the event; the providers of the START system, which was used to manage the submissions; and in particular all the organizers of SAC 2016, for their invitation to organize this track and for all their excellent assistance and support.
Maurice H. ter Beek
Hernán C. Melgratti
Hugo Torres Vieira
hugo.torresvieira@imtlucca.it
2017-05-08T12:26:53Z
2017-05-08T12:26:53Z
http://eprints.imtlucca.it/id/eprint/3696
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3696
2017-05-08T12:26:53Z
Preface for the special issue on Interaction and Concurrency Experience 2015
This special issue contains extended versions of selected papers from the 8th Interaction and Concurrency Experience workshop (ICE 2015). The workshop was held in Grenoble, France, on June 4-5th, 2015. ICE workshops form a series of international scientific meetings oriented to theoretical computer science researchers with special interest in models, verification, tools, and programming primitives for complex interactions.
The general scope of the venue includes theoretical and applied aspects of interactions and the synchronization mechanisms used among components of concurrent/distributed systems, related to several areas of computer science in the broad spectrum ranging from formal specification and analysis to studies inspired by emerging computational models.
The authors of the most prominent papers presented at ICE 2015 were invited to submit an extended version to this special issue. In order to guarantee the fairness and quality of the selection process, each submission received at least three reviews. The review process has ensured that the accepted articles significantly extend and improve the original workshop contributions.
We want to thank all the authors who contributed to this volume. We would like to thank all the members of the Program Committee of ICE, who helped us in the selection of the papers and who helped the authors to improve their contributions in several ways. Additional referees were involved in the review of the papers invited for this special issue and we thank their timely contributions. We would also like to thank the editors of JLAMP, for their support during the whole editorial process.
Ivan Lanese
Alberto Lluch Lafuente
Sophia Knight
Hugo Torres Vieira
hugo.torresvieira@imtlucca.it
2017-05-08T08:07:14Z
2017-05-08T08:07:14Z
http://eprints.imtlucca.it/id/eprint/3698
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3698
2017-05-08T08:07:14Z
Phenotiki: an open software and hardware platform for affordable and easy image-based phenotyping of rosette-shaped plants
Phenotyping is important to understand plant biology, but current solutions are costly, not versatile or are difficult to deploy. To solve this problem, we present Phenotiki, an affordable system for plant phenotyping that, relying on off-the-shelf parts, provides a platform that is easy to install and maintain, offering an out-of-the-box experience for a well-established phenotyping need: imaging rosette-shaped plants. The accompanying software (with available source code) processes data originating from our device seamlessly and automatically. Our software relies on machine learning to devise robust algorithms, and includes an automated leaf count obtained from 2D images without the need of depth (3D). Our affordable device (~€200) can be deployed in growth chambers or greenhouses to acquire optical 2D images of up to approximately 60 adult Arabidopsis rosettes concurrently. Data from the device are processed remotely on a workstation or via a cloud application (based on CyVerse). In this paper, we present a proof-of-concept validation experiment on top-view images of 24 Arabidopsis plants in a combination of genotypes that has not been compared previously. Phenotypic analysis with respect to morphology, growth, color and leaf count has not been performed comprehensively before now. We confirm the findings of others on some of the extracted traits, showing that we can phenotype at reduced cost. We also perform extensive validations with external measurements and with higher fidelity equipment, and find no loss in statistical accuracy when we use the affordable setting that we propose. Device set-up instructions and analysis software are publicly available (http://phenotiki.com).
Massimo Minervini
Mario Valerio Giuffrida
valerio.giuffrida@imtlucca.it
Pierdomenico Perata
Sotirios A. Tsaftaris
2017-05-04T14:16:41Z
2017-05-04T14:16:41Z
http://eprints.imtlucca.it/id/eprint/3695
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3695
2017-05-04T14:16:41Z
Foundations of Session Types and Behavioural Contracts
Behavioural type systems, usually associated with concurrent or distributed computations, encompass concepts such as interfaces, communication protocols, and contracts, in addition to the traditional input/output operations. The behavioural type of a software component specifies its expected patterns of interaction using expressive type languages, so types can be used to determine automatically whether the component interacts correctly with other components. Two related important notions of behavioural types are those of session types and behavioural contracts. This article surveys the main accomplishments of the last 20 years within these two approaches.
Hans Huttel
Ivan Lanese
Vasco Thudichum Vasconcelos
Luis Caires
Marco Carbone
Pierre-Malo Deniélou
Dimitris Mostrous
Luca Padovani
Antonio Ravara
Emilio Tuosto
Hugo Torres Vieira
hugo.torresvieira@imtlucca.it
Gianluigi Zavattaro
2017-05-04T14:10:39Z
2017-05-04T14:10:39Z
http://eprints.imtlucca.it/id/eprint/3694
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3694
2017-05-04T14:10:39Z
Dynamic role authorization in multiparty conversations
Protocols in distributed settings usually rely on the interaction of several parties and often identify the roles involved in communications. Roles may have a behavioral interpretation, as they do not necessarily correspond to sites or physical devices. Notions of role authorization thus become necessary to consider settings in which, e.g., different sites may be authorized to act on behalf of a single role, or in which one site may be authorized to act on behalf of different roles. This flexibility must be equipped with ways of controlling the roles that the different parties are authorized to represent, including the challenging case in which role authorizations are determined only at runtime. We present a typed framework for the analysis of multiparty interaction with dynamic role authorization and delegation. Building on previous work on conversation types with role assignment, our formal model is based on an extension of the π-calculus in which the basic resources are channel-role pairs, which denote the access right to interact along a given channel representing the given role. To specify dynamic authorization control, our process model includes (1) a novel scoping construct for authorization domains, and (2) communication primitives for authorizations, which allow passing around authorizations to act on a given channel. An authorization error then corresponds to an action involving a channel and a role not enclosed by an appropriate authorization scope. We introduce a typing discipline that ensures that processes never reduce to authorization errors, including when parties dynamically acquire authorizations.
Silvia Ghilezan
Svetlana Jakšić
Jovanka Pantović
Jorge A. Pérez
Hugo Torres Vieira
hugo.torresvieira@imtlucca.it
2017-05-04T13:43:26Z
2017-05-04T13:43:26Z
http://eprints.imtlucca.it/id/eprint/3692
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3692
2017-05-04T13:43:26Z
(edited by) Proceedings 9th Interaction and Concurrency Experience, ICE 2016, Heraklion, Greece, 8-9 June 2016
This volume contains the proceedings of ICE 2016, the 9th Interaction and Concurrency Experience, which was held in Heraklion, Greece on the 8th and 9th of June 2016 as a satellite event of DisCoTec 2016. The ICE procedure for paper selection allows PC members to interact, anonymously, with authors. During the review phase, each submitted paper is published on a discussion forum whose access is restricted to the authors and to all the PC members not declaring a conflict of interest. The PC members post comments and questions that the authors reply to. For the first time, the 2016 edition of ICE included a feature targeting review transparency: reviews of accepted papers were made public on the workshop website and workshop participants in particular were able to access them during the workshop. Each paper was reviewed by three PC members, and altogether nine papers were accepted for publication (the workshop also featured three brief announcements which are not part of this volume). We were proud to host two invited talks, by Alexandra Silva and Uwe Nestmann. The abstracts of these two talks are included in this volume together with the regular papers.
Massimo Bartoletti
Ludovic Henrio
Sophia Knight
Hugo Torres Vieira
hugo.torresvieira@imtlucca.it
2017-05-04T13:42:01Z
2017-05-04T13:42:01Z
http://eprints.imtlucca.it/id/eprint/3693
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3693
2017-05-04T13:42:01Z
Preface for the special issue on Interaction and Concurrency Experience 2014
This special issue contains extended versions of selected papers from the 7th Interaction and Concurrency Experience workshop (ICE 2014). The workshop was held in Berlin (Germany) on June 6th, 2014. ICE workshops form a series of international scientific meetings oriented to theoretical computer science researchers with special interest in models, verification, tools, and programming primitives for complex interactions.
The general scope of the venue includes theoretical and applied aspects of interactions and the synchronization mechanisms used among components of concurrent/distributed systems, related to several areas of computer science in the broad spectrum ranging from formal specification and analysis to studies inspired by emerging computational models.
The authors of the most prominent papers presented at ICE 2014 were invited to submit an extended version to this special issue. In order to guarantee the fairness and quality of the selection process, each submission received at least three reviews. The review process has also ensured that the accepted articles significantly extend and improve the original workshop contributions.
This special issue features three articles:
• Declarative event based models of concurrency and refinement in psi-calculi, by Håkon Normann, Christian Johansen and Thomas Hildebrandt. The authors explore declarative event-based specifications open to runtime refinement, aiming at a declarative model with support for adaptation.
• Contracts as games on event structures, by Massimo Bartoletti, Tiziana Cimoli, G. Michele Pinna and Roberto Zunino. This work presents an event structure based interpretation of contracts, which allows the rights and obligations of contract participants to be studied in a natural setting.
• Relating two automata-based models of orchestration and choreography, by Davide Basile, Pierpaolo Degano, Gian Luigi Ferrari and Emilio Tuosto. This paper compares local contract-based specifications coordinated by orchestrators with communicating machines that rely on decentralized coordination.
We want to thank all the authors who contributed to this volume. We would like to thank all the members of the Program Committee of ICE, who helped us in the selection of the papers and who helped the authors to improve their contributions in several ways. Additional referees were involved in the review of the papers invited for this special issue and we thank their timely contributions. We would also like to thank the editors of JLAMP, for their support during the whole editorial process.
Ivan Lanese
Alberto Lluch Lafuente
Ana Sokolova
Hugo Torres Vieira
hugo.torresvieira@imtlucca.it
2017-05-04T13:38:06Z
2017-05-04T13:38:06Z
http://eprints.imtlucca.it/id/eprint/3691
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3691
2017-05-04T13:38:06Z
(edited by) Proceedings 8th Interaction and Concurrency Experience, ICE 2015, Grenoble, France, 4-5th June 2015
This volume contains the proceedings of ICE 2015, the 8th Interaction and Concurrency Experience, which was held in Grenoble, France on the 4th and 5th of June 2015 as a satellite event of DisCoTec 2015. The ICE procedure for paper selection allows PC members to interact, anonymously, with authors. During the review phase, each submitted paper is published on a discussion forum with access restricted to the authors and to all the PC members not declaring a conflict of interest. The PC members post comments and questions to which the authors reply. Each paper was reviewed by three PC members, and altogether 9 papers, including 1 short paper, were accepted for publication (the workshop also featured 4 brief announcements which are not part of this volume). We were proud to host three invited talks, by Leslie Lamport (shared with the FRIDA workshop), Joseph Sifakis and Steve Ross-Talbot. The abstracts of the last two talks are included in this volume together with the regular papers.
Sophia Knight
Ivan Lanese
Alberto Lluch Lafuente
Hugo Torres Vieira
hugo.torresvieira@imtlucca.it
2017-05-04T13:35:38Z
2017-05-04T13:35:38Z
http://eprints.imtlucca.it/id/eprint/3690
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3690
2017-05-04T13:35:38Z
A Typed Model for Dynamic Authorizations
Security requirements in distributed software systems are inherently dynamic. In the case of authorization policies, resources are meant to be accessed only by authorized parties, but the authorization to access a resource may be dynamically granted/yielded. We describe ongoing work on a model for specifying communication and dynamic authorization handling. We build upon the pi-calculus so as to enrich communication-based systems with authorization specification and delegation; here authorizations regard channel usage and delegation refers to the act of yielding an authorization to another party. Our model includes: (i) a novel scoping construct for authorization, which allows specifying authorization boundaries, and (ii) communication primitives for authorizations, which allow passing around authorizations to act on a given channel. An authorization error may consist in, e.g., performing an action along a name which is not under an appropriate authorization scope. We introduce a typing discipline that ensures that processes never reduce to authorization errors, even when authorizations are dynamically delegated.
Silvia Ghilezan
Svetlana Jakšić
Jovanka Pantović
Jorge A. Pérez
Hugo Torres Vieira
hugo.torresvieira@imtlucca.it
2017-04-18T08:57:40Z
2017-04-18T08:57:40Z
http://eprints.imtlucca.it/id/eprint/3689
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3689
2017-04-18T08:57:40Z
Modeling confirmation bias and polarization
Online users tend to select claims that adhere to their system of beliefs and to ignore dissenting information. Confirmation bias, indeed, plays a pivotal role in viral phenomena. Furthermore, the wide availability of content on the web fosters the aggregation of like-minded people, where debates tend to enforce group polarization. Such a configuration might alter the public debate and thus the formation of the public opinion. In this paper we provide a mathematical model to study online social debates and the related polarization dynamics. We assume the basic updating rule of the Bounded Confidence Model (BCM) and we develop two variations: a) the Rewire with Bounded Confidence Model (RBCM), in which discordant links are broken until convergence is reached; and b) the Unbounded Confidence Model, under which the interaction among discordant pairs of users is allowed even with a negative feedback, either with the rewiring step (RUCM) or without it (UCM). From numerical simulations we find that the new models (UCM and RUCM), unlike the BCM, are able to explain the coexistence of two stable final opinions, often observed in reality. Lastly, we present a mean field approximation of the newly introduced models.
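The basic bounded-confidence updating rule that the variants above build on can be sketched in a few lines: two randomly chosen agents move their opinions towards each other only if the opinions already differ by less than a confidence threshold. The parameter names (`eps`, `mu`) and values below are illustrative, not the paper's calibration.

```python
import random

# Minimal sketch of a bounded-confidence opinion update (Deffuant-style):
# agents i and j interact only if |x_i - x_j| < eps, each moving a
# fraction mu of the gap towards the other.

def bcm_step(opinions, eps, mu, rng):
    i, j = rng.sample(range(len(opinions)), 2)
    if abs(opinions[i] - opinions[j]) < eps:
        diff = opinions[j] - opinions[i]
        opinions[i] += mu * diff
        opinions[j] -= mu * diff
    return opinions

def simulate(n=100, steps=20000, eps=0.5, mu=0.5, seed=1):
    rng = random.Random(seed)
    opinions = [rng.random() for _ in range(n)]
    for _ in range(steps):
        bcm_step(opinions, eps, mu, rng)
    return opinions
```

With a large `eps` every interaction succeeds and the population reaches consensus; shrinking `eps` lets separated opinion clusters survive, which is the starting point for the rewiring and negative-feedback variants described in the abstract.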
Michela Del Vicario
michela.delvicario@imtlucca.it
Antonio Scala
Guido Caldarelli
guido.caldarelli@imtlucca.it
H. Eugene Stanley
Walter Quattrociocchi
walter.quattrociocchi@imtlucca.it
2017-04-18T08:31:54Z
2017-04-18T08:31:54Z
http://eprints.imtlucca.it/id/eprint/3685
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3685
2017-04-18T08:31:54Z
Inside the Echo Chamber
Despite optimistic talk about “collective intelligence,” the Web has helped create an echo chamber where misinformation thrives. Indeed, the viral spread of hoaxes, conspiracy theories, and other false or baseless information online is one of the most disturbing social trends of the early 21st century.
Social scientists are studying this echo chamber by applying computational methods to the traces people leave on Facebook, Twitter and other such outlets. Through this work, they have established that users happily embrace false information as long as it reinforces their preexisting beliefs.
Faced with complex global issues, people of all educational levels choose to believe compact—but false—explanations that clearly identify an object of blame. Unfortunately, attempts to debunk false beliefs seem only to reinforce them. Stopping the spread of misinformation is thus a problem with no apparent simple solutions.
Walter Quattrociocchi
walter.quattrociocchi@imtlucca.it
2017-03-22T10:06:42Z
2017-03-22T10:06:42Z
http://eprints.imtlucca.it/id/eprint/3681
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3681
2017-03-22T10:06:42Z
Reference trajectory planning under constraints and path tracking using linear time-varying model predictive control for agricultural machines
A method for the control of autonomously and slowly moving agricultural machinery is presented. Special emphasis is on offline reference trajectory generation tailored for high-precision closed-loop tracking within agricultural fields using linear time-varying model predictive control. When optimisation is carried out, high-level logistical processing can result in reference paths with sharp corners for field coverage. Subsequent trajectory smoothing can account for specific actuator rate constraints and field geometry; the latter step is the subject of this paper. Focussing on forward motion only, we analyse the role of non-convexly shaped field geometry, repressed area minimisation and spraying gap avoidance. Three design methods for generating smooth reference trajectories are discussed: circle-segments, generalised elementary paths, and bi-elementary paths.
Mogens M. Graf Plessen
Alberto Bemporad
alberto.bemporad@imtlucca.it
2017-03-21T11:05:10Z
2017-09-21T14:56:38Z
http://eprints.imtlucca.it/id/eprint/3666
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3666
2017-03-21T11:05:10Z
Optimal design of low-frequency band gaps in anti-tetrachiral lattice meta-materials
The elastic wave propagation is investigated in a beam lattice material characterized by a square periodic cell with anti-tetrachiral microstructure. With reference to the Floquet-Bloch spectrum, focus is made on the band structure enrichments and modifications which can be achieved by equipping the cellular microstructure with tunable local resonators. By virtue of its composite mechanical nature, the so-built inertial meta-material gains enhanced capacities of passive frequency-band filtering. Indeed the number, placement and properties of the inertial resonators can be designed to open, shift and enlarge the band gaps between one or more pairs of consecutive branches in the frequency spectrum. In order to improve the meta-material performance, several nonlinear optimization problems are formulated. The largest among the band gap amplitudes in the low-frequency range is selected as suited objective function. Proper inequality constraints are introduced to restrict the admissible solutions within a compact set of mechanical and geometric parameters, including only physically realistic properties of both the lattice and the resonators. The optimization problems related to full and partial band gaps are solved by using a globally convergent version of the numerical method of moving asymptotes, combined with a quasi-Monte Carlo multi-start technique. The optimal solutions are numerically computed, discussed and compared from the qualitative and quantitative viewpoints, bringing to light the limits and potential of the meta-material performance. The clearest trends emerging from the numerical analyses are pointed out and interpreted from the physical viewpoint. Finally, some specific recommendations about the microstructural design of the meta-material are synthesized.
Andrea Bacigalupo
andrea.bacigalupo@imtlucca.it
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Marco Lepidi
Luigi Gambarotta
2017-03-21T10:56:30Z
2017-03-21T10:56:30Z
http://eprints.imtlucca.it/id/eprint/3664
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3664
2017-03-21T10:56:30Z
Design of acoustic metamaterials through nonlinear programming
The dispersive wave propagation in a periodic metamaterial with tetrachiral topology and inertial local resonators is investigated. The Floquet-Bloch spectrum of the metamaterial is compared with that of the tetrachiral beam lattice material without resonators. The resonators can be designed to open and shift frequency band gaps, that is, spectrum intervals in which harmonic waves do not propagate. Therefore, an optimal passive control of the frequency band structure can be pursued in the metamaterial. To this aim, a suitable constrained nonlinear optimization problem on a compact set of admissible geometrical and mechanical parameters is stated. According to functional requirements, the particular set of parameters which determines the largest low-frequency band gap between a pair of consecutive branches of the Floquet-Bloch spectrum is obtained. The optimization problem is successfully solved by means of a version of the method of moving asymptotes, combined with a quasi-Monte Carlo multi-start technique.
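The optimization strategy named in the abstract (a local nonlinear-programming method combined with a multi-start technique) can be illustrated on a toy problem. The objective below is an arbitrary multimodal surrogate, not the actual Floquet-Bloch band-gap amplitude, and the box bounds, step sizes and restart count are made-up illustration values:

```python
import math
import random

# Multi-start optimization sketch: random restarts plus a simple local
# search on a box. The objective is an arbitrary multimodal surrogate,
# NOT the band-gap amplitude computed in the paper.
def objective(p):
    x, y = p
    return math.sin(3 * x) * math.cos(2 * y) + 0.5 * math.exp(-((x - 1) ** 2 + (y - 1) ** 2))

def local_search(p, step=0.1, iters=200):
    """Coordinate-wise hill climbing restricted to the box [0, 2]^2."""
    best = list(p)
    for _ in range(iters):
        improved = False
        for i in range(2):
            for d in (-step, step):
                cand = list(best)
                cand[i] = min(2.0, max(0.0, cand[i] + d))
                if objective(cand) > objective(best):
                    best, improved = cand, True
        if not improved:
            step *= 0.5  # refine the search once no move helps
    return best

def multi_start(n_starts=30):
    random.seed(1)
    starts = [[random.uniform(0, 2), random.uniform(0, 2)] for _ in range(n_starts)]
    return max((local_search(s) for s in starts), key=objective)

best = multi_start()
print(best, objective(best))
```

The restarts play the role of the quasi-Monte Carlo multi-start phase: each local search only reaches the optimum of its own basin, so the best result over many starts approximates the global one.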
arXiv:1603.07717 [cond-mat.mtrl-sci]
Andrea Bacigalupo
andrea.bacigalupo@imtlucca.it
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Marco Lepidi
Luigi Gambarotta
2017-02-01T08:49:17Z
2017-02-01T08:49:17Z
http://eprints.imtlucca.it/id/eprint/3652
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3652
2017-02-01T08:49:17Z
Statistical shape modeling of the left ventricle: myocardial infarct classification challenge
Statistical shape modeling is a powerful tool for visualizing and quantifying geometric and functional patterns of the heart. After myocardial infarction (MI), the left ventricle typically remodels in response to physiological challenges. Several methods have been proposed in the literature to describe statistical shape changes. Which method best characterizes left ventricular remodeling after MI is an open research question. A better descriptor of remodeling is expected to provide a more accurate evaluation of disease status in MI patients. We therefore designed a challenge to test shape characterization in MI given a set of three-dimensional left ventricular surface points. The training set comprised 100 MI patients and 100 asymptomatic volunteers (AV). The challenge was initiated in 2015 at the Statistical Atlases and Computational Models of the Heart workshop, in conjunction with the MICCAI conference. The training set with labels was provided to participants, who were asked to submit the likelihood of MI from a different (validation) set of 200 cases (100 AV and 100 MI). Sensitivity, specificity, accuracy and area under the receiver operating characteristic curve were used as the outcome measures. The goals of this challenge were to (1) establish a common dataset for evaluating statistical shape modeling algorithms in MI, and (2) test whether statistical shape modeling provides additional information characterizing MI patients over standard clinical measures. Eleven groups with a wide variety of classification and feature extraction approaches participated in this challenge. All methods achieved excellent classification results, with accuracies ranging from 0.83 to 0.98. The areas under the receiver operating characteristic curves were all above 0.90. Four methods showed significantly higher performance than standard clinical measures. The dataset and software for evaluation are available from the Cardiac Atlas Project website.
A. Suinesiaputra
P. Ablin
X. Alba
M. Alessandrini
J. Allen
W. Bai
S. Cimen
P. Claes
B. R. Cowan
J. D'hooge
N. Duchateau
J. Ehrhardt
A. F. Frangi
A. Gooya
V. Grau
K. Lekadir
A. Lu
A. Mukhopadhyay
Ilkay Oksuz
ilkay.oksuz@imtlucca.it
X. Pennec
M. Pereanez
C. Pinto
P. Piras
M. M. Rohe
D. Rueckert
M. Sermesant
K. Siddiqi
M. Tabassian
L. Teresi
S. A. Tsaftaris
M. Wilms
A. A. Young
X. Zhang
P. Medrano-Gracia
2017-02-01T08:36:30Z
2017-02-01T08:36:30Z
http://eprints.imtlucca.it/id/eprint/3651
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3651
2017-02-01T08:36:30Z
MRI-TRUS Image Synthesis with Application to Image-Guided Prostate Intervention
Accurate and robust fusion of pre-procedure magnetic resonance imaging (MRI) to intra-procedure trans-rectal ultrasound (TRUS) imaging is necessary for image-guided prostate cancer biopsy procedures. The current clinical standard for image fusion relies on non-rigid surface-based registration between semi-automatically segmented prostate surfaces in both the MRI and TRUS. This surface-based registration method does not take advantage of internal anatomical prostate structures, which have the potential to provide useful information for image registration. However, non-rigid, multi-modal intensity-based MRI-TRUS registration is challenging due to the highly non-linear intensity relationships between MRI and TRUS. In this paper, we present preliminary work using image synthesis to cast this problem into a mono-modal registration task, using a large database of over 100 clinical MRI-TRUS image pairs to learn a joint model of MR-TRUS appearance. Thus, given an MRI, we use this learned joint appearance model to synthesize the patient’s corresponding TRUS image appearance, with which we could potentially perform mono-modal intensity-based registration. We present preliminary results of this approach.
John A. Onofrey
Ilkay Oksuz
ilkay.oksuz@imtlucca.it
Saradwata Sarkar
Rajesh Venkataraman
Lawrence H. Staib
Xenophon Papademetris
2017-01-26T14:46:16Z
2017-01-26T14:46:16Z
http://eprints.imtlucca.it/id/eprint/3645
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3645
2017-01-26T14:46:16Z
Stochastic gradient methods for stochastic model predictive control
We introduce a new stochastic gradient algorithm, SAAGA, and investigate its use for solving stochastic MPC problems and multi-stage stochastic optimization programs in general. The method is particularly attractive for scenario-based formulations that involve a large number of scenarios, for which “batch” formulations may become inefficient due to high computational costs. Benefits of the method include cheap computations per iteration and fast convergence due to the sparsity of the proposed problem decomposition.
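The scenario-sampling idea can be illustrated with a generic stochastic-gradient sketch. This is plain SGD on a toy scenario-averaged quadratic with made-up data, not the SAAGA algorithm itself; it only shows why sampling one scenario per iteration keeps each step cheap:

```python
import random

# Generic stochastic-gradient sketch (NOT the SAAGA algorithm): minimize
# the scenario-averaged cost f(x) = (1/N) * sum_i (x - a_i)^2, sampling a
# single scenario per iteration so each step stays cheap.
random.seed(0)
scenarios = [random.gauss(2.0, 1.0) for _ in range(500)]  # made-up scenarios

def sgd(scenarios, steps=20000, x0=0.0):
    x = x0
    for k in range(1, steps + 1):
        a = random.choice(scenarios)     # sample one scenario
        grad = 2.0 * (x - a)             # gradient of the sampled term only
        x -= grad / (k ** 0.7)           # diminishing step size
    return x

x_star = sgd(scenarios)
mean_a = sum(scenarios) / len(scenarios)  # the "batch" minimizer
print(x_star, mean_a)
```

Each iteration touches one scenario instead of all 500, which is the computational advantage the abstract points to for large scenario sets.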
A. Themelis
S. Villa
Panagiotis Patrinos
Alberto Bemporad
alberto.bemporad@imtlucca.it
2017-01-26T14:36:39Z
2017-01-26T14:36:39Z
http://eprints.imtlucca.it/id/eprint/3643
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3643
2017-01-26T14:36:39Z
A Simple Effective Heuristic for Embedded Mixed-Integer Quadratic Programming
In this paper we propose a fast optimization algorithm for approximately minimizing convex quadratic functions over the intersection of affine and separable constraints (i.e., the Cartesian product of possibly nonconvex real sets). This problem class contains many NP-hard problems such as mixed-integer quadratic programming. Our heuristic is based on a variation of the alternating direction method of multipliers (ADMM), an algorithm for solving convex optimization problems. We discuss the favorable computational aspects of our algorithm, which allow it to run quickly even on very modest computational platforms such as embedded processors. We give several examples for which an approximate solution should be found very quickly, such as management of a hybrid-electric vehicle drivetrain. Our numerical experiments suggest that our method is very effective in finding a feasible point with small objective value; indeed, we see that in many cases, it finds the global solution.
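A minimal sketch of this kind of ADMM variation, on a made-up two-variable integer quadratic program (the matrices, penalty parameter and iteration count are illustration values, not from the paper): the x-update handles the convex quadratic, and the z-update projects onto the nonconvex integer set.

```python
# ADMM heuristic sketch for a made-up two-variable integer QP:
# minimize 0.5*x'Qx + q'x subject to x integer.
Q = [[3.0, 1.0], [1.0, 2.0]]
q = [-4.0, -2.5]
rho = 1.0  # ADMM penalty parameter (illustration value)

def solve_2x2(A, b):
    """Solve a 2x2 linear system by Cramer's rule."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(b[0] * A[1][1] - A[0][1] * b[1]) / det,
            (A[0][0] * b[1] - b[0] * A[1][0]) / det]

A = [[Q[0][0] + rho, Q[0][1]], [Q[1][0], Q[1][1] + rho]]
x, z, u = [0.0, 0.0], [0.0, 0.0], [0.0, 0.0]
for _ in range(20):
    # x-update: minimize the quadratic plus the augmented-Lagrangian term
    rhs = [rho * (z[i] - u[i]) - q[i] for i in range(2)]
    x = solve_2x2(A, rhs)
    # z-update: project x + u onto the (nonconvex) set of integer points
    z = [float(round(x[i] + u[i])) for i in range(2)]
    # dual update
    u = [u[i] + x[i] - z[i] for i in range(2)]

print(z)  # candidate integer point
```

As the abstract notes, this is a heuristic: the projection step is exact for separable sets like the integers, but there is no general guarantee the iterates reach the global minimizer.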
Reza Takapoui
Nicholas Moehle
Stephen Boyd
Alberto Bemporad
alberto.bemporad@imtlucca.it
2017-01-26T14:29:19Z
2017-01-26T14:29:19Z
http://eprints.imtlucca.it/id/eprint/3642
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3642
2017-01-26T14:29:19Z
Solving Mixed-Integer Quadratic Programs via Nonnegative Least Squares
This paper proposes a new algorithm for solving Mixed-Integer Quadratic Programming (MIQP) problems. The algorithm is particularly tailored to solving small-scale MIQPs such as those that arise in embedded hybrid Model Predictive Control (MPC) applications. The approach combines branch and bound (B&B) with nonnegative least squares (NNLS), which are used to solve Quadratic Programming (QP) relaxations. The QP algorithm extends a method recently proposed by the author for solving strictly convex QPs, by (i) handling equality and bilateral inequality constraints, (ii) warm starting, and (iii) exploiting easy-to-compute lower bounds on the optimal cost to reduce the number of QP iterations required to solve the relaxed problems. The proposed MIQP algorithm has a speed of execution comparable to state-of-the-art commercial MIQP solvers and is relatively simple to code, as it requires only basic arithmetic operations to solve least-squares problems.
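The B&B mechanism can be sketched on a toy separable binary QP, where the box relaxation has a closed-form solution by clipping. The paper instead solves general (non-separable) QP relaxations with an NNLS-based solver; the data below are made up:

```python
# Branch-and-bound sketch on a toy separable binary QP:
# minimize sum_i d_i * (x_i - c_i)^2 with x_i in {0, 1}. Here the box
# relaxation has the closed form x_i = clip(c_i, 0, 1); the paper uses
# an NNLS-based QP solver for general relaxations. Data are made up.
d = [1.0, 4.0, 2.0]
c = [0.3, 0.7, 0.45]

def relax(fixed):
    """Solve the [0,1]-box relaxation with some variables fixed."""
    x, bound = [], 0.0
    for i in range(len(c)):
        xi = fixed.get(i, min(1.0, max(0.0, c[i])))
        x.append(xi)
        bound += d[i] * (xi - c[i]) ** 2
    return bound, x

def branch_and_bound():
    best_cost, best_x = float('inf'), None
    stack = [{}]                      # each node fixes a subset of variables
    while stack:
        fixed = stack.pop()
        bound, x = relax(fixed)
        if bound >= best_cost:
            continue                  # prune: relaxation cannot improve
        frac = [i for i in range(len(c)) if x[i] not in (0.0, 1.0)]
        if not frac:
            best_cost, best_x = bound, [int(v) for v in x]
            continue                  # integer-feasible leaf
        i = frac[0]                   # branch on the first fractional variable
        for v in (0.0, 1.0):
            child = dict(fixed)
            child[i] = v
            stack.append(child)
    return best_cost, best_x

best_cost, best_x = branch_and_bound()
print(best_x, best_cost)
```

The pruning test is where the paper's point (iii) bites: cheap lower bounds on the relaxation cost let B&B discard subtrees before fully solving their QPs.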
Alberto Bemporad
alberto.bemporad@imtlucca.it
2017-01-26T14:17:52Z
2017-01-26T14:17:52Z
http://eprints.imtlucca.it/id/eprint/3641
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3641
2017-01-26T14:17:52Z
GPU-accelerated stochastic predictive control of drinking water networks
Ajay Kumar Sampathirao
Pantelis Sopasakis
Alberto Bemporad
alberto.bemporad@imtlucca.it
Panagiotis Patrinos
2017-01-26T14:11:24Z
2017-01-26T14:11:24Z
http://eprints.imtlucca.it/id/eprint/3640
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3640
2017-01-26T14:11:24Z
Spatial-based predictive control and geometric corridor planning for adaptive cruise control coupled with obstacle avoidance
M. Graf Plessen
Daniele Bernardini
Hasan Esen
Alberto Bemporad
alberto.bemporad@imtlucca.it
2017-01-24T13:21:02Z
2017-08-28T15:36:22Z
http://eprints.imtlucca.it/id/eprint/3638
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3638
2017-01-24T13:21:02Z
Optimal energy management of a small-size building via hybrid model predictive control
This paper presents the design of a Model Predictive Control (MPC) scheme to optimally manage the thermal and electrical subsystems of a small-size building (“smart house”), with the objective of minimizing the expense for buying energy from the grid, while keeping the room temperature within given time-varying bounds. The system, for which an experimental prototype has been built, includes PV panels, solar collectors, a battery pack, an electrical heater in a thermal storage tank, and two pumps on the solar collector and radiator hydraulic circuits. The presence of binary control inputs together with continuous ones naturally leads to using a hybrid dynamical model, and the MPC controller solves a mixed-integer linear program at each sampling instant, relying on weather forecast data for ambient temperature and solar irradiance. The procedure for controller design is reported with focus on the specific application, and the proposed method is successfully tested on the experimental site.
Albina Khakimova
Aliya Kusatayeva
Akmaral Shamshimova
Dana Sharipova
Alberto Bemporad
alberto.bemporad@imtlucca.it
Yakov Familiant
Almas Shintemirov
Viktor Ten
Matteo Rubagotti
2017-01-24T13:14:41Z
2017-01-24T13:14:41Z
http://eprints.imtlucca.it/id/eprint/3637
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3637
2017-01-24T13:14:41Z
From linear to nonlinear MPC: bridging the gap via the real-time iteration
Linear model predictive control (MPC) can be currently deployed at outstanding speeds, thanks to recent progress in algorithms for solving online the underlying structured quadratic programs. In contrast, nonlinear MPC (NMPC) requires the deployment of more elaborate algorithms, which require longer computation times than linear MPC. Nonetheless, computational speeds for NMPC comparable to those of MPC are now regularly reported, provided that the adequate algorithms are used. In this paper, we aim at clarifying the similarities and differences between linear MPC and NMPC. In particular, we focus our analysis on NMPC based on the real-time iteration (RTI) scheme, as this technique has been successfully tested and, in some applications, requires computational times that are only marginally larger than linear MPC. The goal of the paper is to promote the understanding of RTI-based NMPC within the linear MPC community.
Sébastien Gros
Mario Zanon
Rien Quirynen
Alberto Bemporad
alberto.bemporad@imtlucca.it
Moritz Diehl
2017-01-24T13:10:56Z
2017-01-24T13:16:05Z
http://eprints.imtlucca.it/id/eprint/3636
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3636
2017-01-24T13:10:56Z
A Lyapunov method for stability analysis of piecewise-affine systems over non-invariant domains
This paper analyses stability of discrete-time piecewise-affine systems, defined on possibly non-invariant domains, taking into account the possible presence of multiple dynamics in each of the polytopic regions of the system. An algorithm based on linear programming is proposed, in order to prove exponential stability of the origin and to find a positively invariant estimate of its region of attraction. The results are based on the definition of a piecewise-affine Lyapunov function, which is in general discontinuous on the boundaries of the regions. The proposed method is proven to lead to feasible solutions in a broader range of cases as compared to a previously proposed approach. Two numerical examples are shown, among which a case where the proposed method is applied to a closed-loop system, to which model predictive control was applied without a-priori guarantee of stability.
Matteo Rubagotti
Luca Zaccarian
Alberto Bemporad
alberto.bemporad@imtlucca.it
2017-01-24T13:07:45Z
2017-01-24T13:07:45Z
http://eprints.imtlucca.it/id/eprint/3635
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3635
2017-01-24T13:07:45Z
Optimal distributed task scheduling in volunteer clouds
The ever increasing request of computational resources has shifted the computing paradigm towards solutions where less computation is performed locally. The most widely adopted approach nowadays is represented by cloud computing. With the cloud, users can transparently access virtually infinite resources as easily as any other utility. Next to the cloud, the volunteer computing paradigm has gained attention in the last decade, where the spare resources on each personal machine are shared thanks to the users’ willingness to cooperate. Cloud and volunteer paradigms have recently been seen as companion technologies to better exploit the use of local resources. Conversely, this scenario places complex challenges in managing such a large-scale environment, as the resources available on each node and the presence of the nodes online are not known a priori. The complexity further increases in the presence of tasks that have an associated Service Level Agreement specified, e.g., through a deadline. Distributed management solutions have thus been advocated as the only approaches that are realistically applicable. In this paper, we propose a framework to allocate tasks according to different policies, defined by suitable optimization problems. Then, we provide a distributed optimization approach relying on the Alternating Direction Method of Multipliers (ADMM) for one of these policies, and we compare it with a centralized approach. Results show that, when a centralized approach cannot be adopted in a real environment, it is possible to rely on the good suboptimal solutions found by the ADMM.
Stefano Sebastio
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Alberto Bemporad
alberto.bemporad@imtlucca.it
2016-11-30T10:37:34Z
2016-11-30T10:37:34Z
http://eprints.imtlucca.it/id/eprint/3606
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3606
2016-11-30T10:37:34Z
Machine Learning for Plant Phenotyping Needs Image Processing
We found the article by Singh et al. [1] extremely interesting because it introduces and showcases the utility of machine learning for high-throughput data-driven plant phenotyping. With this letter we aim to emphasize the role that image analysis and processing have in the phenotyping pipeline beyond what is suggested in [1], both in analyzing phenotyping data (e.g., to measure growth) and when providing effective feature extraction to be used by machine learning. Key recent reviews have shown that it is image analysis itself (what the authors of [1] consider as part of pre-processing) that has brought a renaissance in phenotyping [2].
Sotirios A. Tsaftaris
Massimo Minervini
massimo.minervini@imtlucca.it
Hanno Scharr
2016-10-10T15:08:08Z
2016-10-10T15:08:08Z
http://eprints.imtlucca.it/id/eprint/3583
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3583
2016-10-10T15:08:08Z
Real-time model predictive control based on dual gradient projection: Theory and fixed-point FPGA implementation
This paper proposes a method to design robust model predictive control (MPC) laws for discrete-time linear systems with hard mixed constraints on states and inputs, when, because of real-time requirements, only an inexact solution of the associated quadratic program is available. By using a recently proposed dual gradient-projection algorithm, it is proved that the discrepancy between the optimal control law and the obtained one is bounded even if the solver is implemented in fixed-point arithmetic. By defining an alternative MPC problem with tightened constraints, a feasible solution is obtained for the original MPC problem, which guarantees recursive feasibility and asymptotic stability of the closed-loop system with respect to a set including the origin, also considering the presence of external disturbances. The proposed MPC law is implemented on a field-programmable gate array in order to show the practical applicability of the method.
Matteo Rubagotti
Panagiotis Patrinos
Alberto Guiggiani
Alberto Bemporad
alberto.bemporad@imtlucca.it
2016-10-06T09:14:53Z
2016-10-06T09:16:01Z
http://eprints.imtlucca.it/id/eprint/3560
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3560
2016-10-06T09:14:53Z
Multiparty Testing Preorders
Variants of the must testing approach have been successfully applied in Service Oriented Computing for analysing the compliance between (contracts exposed by) clients and servers or, more generally, between two peers. It has however been argued that multiparty scenarios call for more permissive notions of compliance because partners usually do not have full coordination capabilities. We propose two new testing preorders, which are obtained by restricting the set of potential observers. For the first preorder, called uncoordinated, we allow only sets of parallel observers that use different parts of the interface of a given service and have no possibility of intercommunication. For the second preorder, that we call independent, we instead rely on parallel observers that perceive as silent all the actions that are not in the interface of interest. We have that the uncoordinated preorder is coarser than the classical must testing preorder and finer than the independent one. We also provide a characterisation in terms of decorated traces for both preorders: the uncoordinated preorder is defined in terms of must-sets and Mazurkiewicz traces while the independent one is described in terms of must-sets and classes of filtered traces that only contain designated visible actions.
Rocco De Nicola
r.denicola@imtlucca.it
Hernán Melgratti
2016-10-04T09:40:34Z
2016-10-04T09:40:34Z
http://eprints.imtlucca.it/id/eprint/3547
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3547
2016-10-04T09:40:34Z
A hierarchical consensus method for the approximation of the consensus state, based on clustering and spectral graph theory
A hierarchical method for the approximate computation of the consensus state of a network of agents is investigated. The method is motivated theoretically by spectral graph theory arguments. In a first phase, the graph is divided into a number of subgraphs with good spectral properties, i.e., a fast convergence toward the local consensus state of each subgraph. To find the subgraphs, suitable clustering methods are used. Then, an auxiliary graph is considered, to determine the final approximation of the consensus state in the original network. A theoretical investigation is performed of cases for which the hierarchical consensus method has a better performance guarantee than the non-hierarchical one (i.e., it requires a smaller number of iterations to guarantee a desired accuracy in the approximation of the consensus state of the original network). Moreover, numerical results demonstrate the effectiveness of the hierarchical consensus method for several case studies modeling real-world networks.
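The two-phase structure described above can be sketched in a few lines. This is a toy network with two fully connected clusters; the paper selects clusters via spectral-gap arguments, which this sketch does not attempt:

```python
# Hierarchical (two-phase) consensus sketch on a toy network with two
# fully connected clusters. Cluster selection via spectral arguments,
# as in the paper, is not attempted here.

def consensus_iterations(x, W, rounds):
    """Run the linear consensus update x <- W x for a number of rounds."""
    for _ in range(rounds):
        x = [sum(W[i][j] * x[j] for j in range(len(x))) for i in range(len(x))]
    return x

def uniform_W(n):
    """Averaging matrix of a complete graph: one round reaches the mean."""
    return [[1.0 / n] * n for _ in range(n)]

# Phase 1: fast local consensus inside each well-connected cluster.
clusters = [[1.0, 2.0, 3.0], [10.0, 14.0]]
local_means = [consensus_iterations(c, uniform_W(len(c)), rounds=1)[0]
               for c in clusters]

# Phase 2: auxiliary graph over the cluster representatives; weighting by
# cluster size makes the fixed point equal to the global average.
sizes = [len(c) for c in clusters]
estimate = sum(s * m for s, m in zip(sizes, local_means)) / sum(sizes)

exact = sum(v for c in clusters for v in c) / sum(sizes)
print(estimate, exact)
```

The point of the hierarchy is visible even in this toy: each phase runs on a small, well-connected graph with fast mixing, instead of iterating on the whole (possibly poorly connected) network.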
Rita Morisi
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Alberto Bemporad
alberto.bemporad@imtlucca.it
2016-10-04T08:56:19Z
2016-10-04T08:56:19Z
http://eprints.imtlucca.it/id/eprint/3545
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3545
2016-10-04T08:56:19Z
Piecewise affine regression via recursive multiple least squares and multicategory discrimination
In nonlinear regression, choosing an adequate model structure is often a challenging problem. While simple models (such as linear functions) may not be able to capture the underlying relationship among the variables, over-parametrized models described by a large set of nonlinear basis functions tend to overfit the training data, leading to poor generalization on unseen data. Piecewise-affine (PWA) models can describe nonlinear and possibly discontinuous relationships while maintaining simple local affine regressor-to-output mappings, with extreme flexibility when the polyhedral partitioning of the regressor space is learned from data rather than fixed a priori. In this paper, we propose a novel and numerically very efficient two-stage approach for PWA regression based on a combined use of (i) recursive multi-model least-squares techniques for clustering and fitting linear functions to data, and (ii) linear multi-category discrimination, either offline (batch) via a Newton-like algorithm for computing a solution of unconstrained optimization problems with objective functions having a piecewise smooth gradient, or online (recursive) via averaged stochastic gradient descent.
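A batch caricature of stage (i), alternating between assigning samples to the affine model with the smaller residual and refitting each model by least squares, on made-up one-dimensional data from y = |x|. The paper's scheme is recursive and adds a multicategory discrimination stage to recover the polyhedral partition, both omitted here:

```python
# Batch caricature of stage (i) of a two-stage PWA regression approach:
# alternate between assigning samples to the affine model with the smaller
# residual and refitting each model by least squares. Data are made up
# (samples of y = |x|, which is PWA with two pieces).
def fit_affine(pts):
    """Closed-form 1-D least-squares fit y ~ a*x + b."""
    n = len(pts)
    sx = sum(x for x, _ in pts)
    sy = sum(y for _, y in pts)
    sxx = sum(x * x for x, _ in pts)
    sxy = sum(x * y for x, y in pts)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return a, (sy - a * sx) / n

data = [(x, abs(x)) for x in (-2.0, -1.0, -0.5, 0.5, 1.0, 2.0)]
models = [fit_affine(data[:3]), fit_affine(data[3:])]  # crude init by x-order
for _ in range(10):
    groups = [[], []]
    for x, y in data:
        errs = [abs(y - (a * x + b)) for a, b in models]
        groups[errs.index(min(errs))].append((x, y))    # assign to best model
    models = [fit_affine(g) if len(g) >= 2 else m       # refit non-trivial groups
              for g, m in zip(groups, models)]

print(sorted(round(a, 6) for a, b in models))  # slopes of the two pieces
```

On this data the alternation recovers the two affine pieces of y = |x|; in the full method, a separate discrimination step then learns which region of the regressor space each piece owns.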
Valentina Breschi
Dario Piga
dario.piga@imtlucca.it
Alberto Bemporad
alberto.bemporad@imtlucca.it
2016-10-04T08:36:36Z
2016-10-04T08:36:36Z
http://eprints.imtlucca.it/id/eprint/3543
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3543
2016-10-04T08:36:36Z
From linear to nonlinear MPC: bridging the gap via the real-time iteration
Linear model predictive control (MPC) can be currently deployed at outstanding speeds, thanks to recent progress in algorithms for solving online the underlying structured quadratic programs. In contrast, nonlinear MPC (NMPC) requires the deployment of more elaborate algorithms, which require longer computation times than linear MPC. Nonetheless, computational speeds for NMPC comparable to those of MPC are now regularly reported, provided that the adequate algorithms are used. In this paper, we aim at clarifying the similarities and differences between linear MPC and NMPC. In particular, we focus our analysis on NMPC based on the real-time iteration (RTI) scheme, as this technique has been successfully tested and, in some applications, requires computational times that are only marginally larger than linear MPC. The goal of the paper is to promote the understanding of RTI-based NMPC within the linear MPC community.
Sébastien Gros
Mario Zanon
Rien Quirynen
Alberto Bemporad
alberto.bemporad@imtlucca.it
Moritz Diehl
2016-09-22T16:17:07Z
2016-09-22T16:17:07Z
http://eprints.imtlucca.it/id/eprint/3542
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3542
2016-09-22T16:17:07Z
Data Science and Complex Networks. Real Case Studies with Python
This book provides a comprehensive yet concise description of the basic concepts of Complex Network theory. In contrast to other books, the authors present these concepts through real case studies. The application topics span from food webs to the Internet, the World Wide Web and social networks, passing through the International Trade Web and financial time series. The final part is devoted to the definition and implementation of the most important network models.
The text provides information on the structure of the data and on the quality of available datasets. Furthermore, it provides a series of codes that allow immediate implementation of what is theoretically described in the book. Readers already familiar with the concepts introduced in this book can learn the art of coding in Python by using the online material. To this purpose the authors have set up a dedicated website where readers can download and test the codes. The whole project is intended as a learning tool for scientists and practitioners, enabling them to begin working instantly in the field of Complex Networks.
Guido Caldarelli
guido.caldarelli@imtlucca.it
Alessandro Chessa
alessandro.chessa@imtlucca.it
2016-09-16T08:51:39Z
2016-09-16T10:26:17Z
http://eprints.imtlucca.it/id/eprint/3541
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3541
2016-09-16T08:51:39Z
Finely-grained annotated datasets for image-based plant phenotyping
Image-based approaches to plant phenotyping are gaining momentum providing fertile ground for several interesting vision tasks where fine-grained categorization is necessary, such as leaf segmentation among a variety of cultivars, and cultivar (or mutant) identification. However, benchmark data focusing on typical imaging situations and vision tasks are still lacking, making it difficult to compare existing methodologies. This paper describes a collection of benchmark datasets of raw and annotated top-view color images of rosette plants. We briefly describe plant material, imaging setup and procedures for different experiments: one with various cultivars of Arabidopsis and one with tobacco undergoing different treatments. We proceed to define a set of computer vision and classification tasks and provide accompanying datasets and annotations based on our raw data. We describe the annotation process performed by experts and discuss appropriate evaluation criteria. We also offer exemplary use cases and results on some tasks obtained with parts of these data. We hope with the release of this rigorous dataset collection to invigorate the development of algorithms in the context of plant phenotyping but also provide new interesting datasets for the general computer vision community to experiment on. Data are publicly available at http://www.plant-phenotyping.org/datasets.
Massimo Minervini
massimo.minervini@imtlucca.it
Andreas Fischbach
Hanno Scharr
Sotirios A. Tsaftaris
2016-09-08T06:40:19Z
2016-09-08T06:40:19Z
http://eprints.imtlucca.it/id/eprint/3526
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3526
2016-09-08T06:40:19Z
Static VS Dynamic Reversibility in CCS
The notion of reversible computing is attracting interest because of its applications in diverse fields, in particular the study of programming abstractions for fault tolerant systems. Reversible CCS (RCCS), proposed by Danos and Krivine, enacts reversibility by means of memory stacks. Ulidowski and Phillips proposed a general method to reverse a process calculus given in a particular SOS format, by exploiting the idea of making all the operators of a calculus static. CCSK is then derived from CCS with this method. In this paper we show that RCCS is at least as expressive as CCSK.
Doriana Medic
Claudio Antares Mezzina
claudio.mezzina@imtlucca.it
2016-08-31T08:47:26Z
2016-08-31T08:47:26Z
http://eprints.imtlucca.it/id/eprint/3523
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3523
2016-08-31T08:47:26Z
Polarized User and Topic Tracking in Twitter
Digital traces of conversations in micro-blogging platforms and OSNs provide information about user opinion with a high degree of resolution. These information sources can be exploited to understand and monitor collective behaviours. In this work, we focus on polarisation classes, i.e., those topics that require the user to side exclusively with one position. The proposed method provides an iterative classification of users and keywords: first, polarised users are identified, then polarised keywords are discovered by monitoring the activities of previously classified users. This method thus allows tracking users and topics over time. We report several experiments conducted on two Twitter datasets during political election time-frames. We measure the user classification accuracy on a golden set of users, and analyse the relevance of the extracted keywords for the ongoing political discussion.
Mauro Coletto
mauro.coletto@imtlucca.it
Claudio Lucchese
Salvatore Orlando
Raffaele Perego
2016-08-29T09:08:06Z
2016-08-29T09:08:06Z
http://eprints.imtlucca.it/id/eprint/3522
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3522
2016-08-29T09:08:06Z
A computational method to simulate thermo-oxidative degradation phenomena of poly(ethylene-co-vinyl acetate) used in photovoltaics.
Mariacristina Gagliardi
mariacristina.gagliardi@imtlucca.it
Pietro Lenarda
pietro.lenarda@imtlucca.it
Marco Paggi
marco.paggi@imtlucca.it
2016-07-13T09:55:44Z
2016-07-13T09:55:44Z
http://eprints.imtlucca.it/id/eprint/3517
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3517
2016-07-13T09:55:44Z
Statistical Analysis of Probabilistic Models of Software Product Lines with Quantitative Constraints
We investigate the suitability of statistical model checking for the analysis of probabilistic models of software product lines with complex quantitative constraints and advanced feature installation options. Such models are specified in the feature-oriented language QFLan, a rich process algebra whose operational behaviour interacts with a store of constraints, neatly separating product configuration from product behaviour. The resulting probabilistic configurations and behaviour converge seamlessly in a semantics based on DTMCs, thus enabling quantitative analyses ranging from the likelihood of certain behaviour to the expected average cost of products. This is supported by a Maude implementation of QFLan, integrated with the SMT solver Z3 and the distributed statistical model checker MultiVeStA. Our approach is illustrated with a bikes product line case study.
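The core of statistical model checking, estimating a quantitative property by simulating the probabilistic model many times, can be sketched on a toy three-state DTMC. The chain and its probabilities are made up; the paper analyses QFLan models via Maude, Z3 and MultiVeStA:

```python
import random

# Statistical-model-checking sketch: estimate the probability of reaching
# a "fail" state within 10 steps of a toy 3-state DTMC by Monte Carlo
# simulation. The chain and its probabilities are made up.
P = {
    'config': [('run', 0.9), ('fail', 0.1)],
    'run':    [('run', 0.95), ('fail', 0.05)],
    'fail':   [('fail', 1.0)],
}

def simulate(horizon=10):
    """One random trajectory; True iff 'fail' is reached within the horizon."""
    state = 'config'
    for _ in range(horizon):
        r, acc = random.random(), 0.0
        for nxt, p in P[state]:
            acc += p
            if r < acc:
                state = nxt
                break
        if state == 'fail':
            return True
    return False

random.seed(0)
runs = 20000
estimate = sum(simulate() for _ in range(runs)) / runs
print(estimate)  # analytic value for this chain is 1 - 0.9 * 0.95**9 ≈ 0.433
```

Increasing the number of runs tightens the confidence interval around the true probability, which is how a statistical model checker trades simulation budget for accuracy.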
M.H. ter Beek
Axel Legay
Alberto Lluch-Lafuente
alberto.lluch@imtlucca.it
Andrea Vandin
andrea.vandin@imtlucca.it
2016-07-13T09:43:28Z
2016-07-13T09:43:28Z
http://eprints.imtlucca.it/id/eprint/3516
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3516
2016-07-13T09:43:28Z
Quantitative Abstractions for Collective Adaptive Systems
Collective adaptive systems (CAS) consist of a large number of possibly heterogeneous entities evolving according to local interactions that may operate across multiple scales in time and space. The adaptation to changes in the environment, as well as the highly dispersed decision-making process, often leads to emergent behaviour that cannot be understood by simply analysing the objectives, properties, and dynamics of the individual entities in isolation.
As with most complex systems, modelling is a phase of crucial importance for the design of new CAS or the understanding of existing ones. Elsewhere in this volume the typical workflow of formal modelling, analysis, and evaluation of a CAS has been illustrated in detail. In this chapter we treat the problem of efficiently analysing large-scale CAS for quantitative properties. We review algorithms to automatically reduce the dimensionality of a CAS model while preserving modeller-defined state variables, with a focus on descriptions based on systems of ordinary differential equations. We illustrate the theory in a tutorial fashion, with running examples and a number of more substantial case studies ranging from crowd dynamics and epidemiology to biological systems.
Andrea Vandin
andrea.vandin@imtlucca.it
Mirco Tribastone
mirco.tribastone@imtlucca.it
2016-05-26T10:47:34Z
2016-05-26T10:47:34Z
http://eprints.imtlucca.it/id/eprint/3493
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3493
2016-05-26T10:47:34Z
Software Engineering for Collective Autonomic Systems: The ASCENS Approach
Dhaminda B. Abeywickrama
Jacques Combaz
Jaroslav Horký
Andrea Vandin
andrea.vandin@imtlucca.it
Emil Vassev
Jan Kofroň
Alberto Lluch Lafuente
Michele Loreti
Andrea Margheri
Philip Mayer
Giacoma Valentina Monreale
Ugo Montanari
Carlo Pinciroli
Petr Tůma
2016-05-26T10:39:24Z
2016-05-26T10:39:24Z
http://eprints.imtlucca.it/id/eprint/3492
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3492
2016-05-26T10:39:24Z
Quantitative Analysis of Probabilistic Models of Software Product Lines with Statistical Model Checking
We investigate the suitability of statistical model checking techniques for analysing quantitative properties of software product line models with probabilistic aspects. For this purpose, we enrich the feature-oriented language FLAN with action rates, which specify the likelihood of exhibiting particular behaviour or of installing features at a specific moment or in a specific order. The enriched language (called PFLAN) allows us to specify models of software product lines with probabilistic configurations and behaviour, e.g. by considering a PFLAN semantics based on discrete-time Markov chains. The Maude implementation of PFLAN is combined with the distributed statistical model checker MultiVeStA to perform quantitative analyses of a simple product line case study. The presented analyses include the likelihood of certain behaviour of interest (e.g. product malfunctioning) and the expected average cost of products.
Maurice H. ter Beek
maurice.terbeek@isti.cnr.it
Axel Legay
Alberto Lluch Lafuente
Andrea Vandin
andrea.vandin@imtlucca.it
2016-05-26T10:06:19Z
2016-05-26T10:06:19Z
http://eprints.imtlucca.it/id/eprint/3491
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3491
2016-05-26T10:06:19Z
Modelling and Analyzing Adaptive Self-assembly Strategies with Maude
Building adaptive systems with predictable emergent behavior is a challenging task and it is becoming a critical need. The research community has accepted the challenge by introducing approaches of various nature: from software architectures, to programming paradigms, to analysis techniques. We recently proposed a conceptual framework for adaptation centered around the role of control data. In this paper we show that it can be naturally realized in a reflective logical language like Maude by using the Reflective Russian Dolls model. Moreover, we exploit this model to specify and analyse a prominent example of adaptive system: robot swarms equipped with obstacle-avoidance self-assembly strategies. The analysis exploits the statistical model checker PVesta.
Roberto Bruni
Andrea Corradini
Fabio Gadducci
Alberto Lluch Lafuente
Andrea Vandin
andrea.vandin@imtlucca.it
2016-05-23T11:46:35Z
2016-05-23T11:46:35Z
http://eprints.imtlucca.it/id/eprint/3490
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3490
2016-05-23T11:46:35Z
Simple outlier labeling based on quantile regression, with application to the steelmaking process
This paper introduces some methods for outlier identification in the regression setting, motivated by the analysis of steelmaking process data. The proposed methodology extends to the regression setting the boxplot rule, commonly used for outlier screening with univariate data. The focus here is on bivariate settings with a single covariate, but extensions are possible. The proposal is based on quantile regression, including an additional transformation parameter for selecting the best scale for linearity of the conditional quantiles. The resulting method is used to perform effective labeling of potential outliers, with quite low computational complexity, allowing for simple implementation within statistical software as well as commonly used spreadsheets. Some simulation experiments have been carried out to study the swamping and masking properties of the proposal. The methodology is also illustrated by some real-life examples, taking as the response variable the energy consumed in the melting process.
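The univariate boxplot rule that the paper lifts to the regression setting can be sketched as follows. The energy readings are hypothetical illustrative numbers; in the paper's method the fences would be built from quantile-regression estimates of the conditional quartiles (on the selected scale) rather than from raw marginal values.

```python
import statistics

def boxplot_fences(values, k=1.5):
    """Tukey's boxplot rule: fences at Q1 - k*IQR and Q3 + k*IQR."""
    q1, _, q3 = statistics.quantiles(values, n=4)
    iqr = q3 - q1
    return q1 - k * iqr, q3 + k * iqr

def label_outliers(values, k=1.5):
    """Label every point falling outside the boxplot fences."""
    lo, hi = boxplot_fences(values, k)
    return [v for v in values if v < lo or v > hi]

# Hypothetical energy readings (kWh) for a series of melts,
# with one clearly anomalous value.
energy = [410, 395, 402, 420, 398, 405, 415, 900]
print(label_outliers(energy))
```

In the regression extension, each observation would be screened against fences computed from the conditional quartiles at its covariate value, so the rule adapts across the range of the covariate.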
Ruggero Bellio
Mauro Coletto
mauro.coletto@imtlucca.it
2016-05-23T09:52:51Z
2016-05-23T09:52:51Z
http://eprints.imtlucca.it/id/eprint/3489
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3489
2016-05-23T09:52:51Z
Electoral predictions with Twitter: a machine-learning approach
Several studies have shown how to approximately predict public opinion, such as in political elections, by analyzing user activities on blogging platforms and online social networks. The task is challenging for several reasons: sample bias and the automatic understanding of textual content are two of several non-trivial issues. In this work we study how Twitter can provide interesting insights concerning the primary elections of an Italian political party. State-of-the-art approaches rely on indicators based on tweet and user volumes, often including sentiment analysis. We investigate how to exploit and improve those indicators in order to reduce the bias of the Twitter user sample. We propose novel indicators and a novel content-based method. Furthermore, we study how a machine-learning approach can learn correction factors for those indicators. Experimental results on Twitter data support the validity of the proposed methods and their improvement over the state of the art.
Mauro Coletto
mauro.coletto@imtlucca.it
Claudio Lucchese
Salvatore Orlando
Raffaele Perego
2016-05-23T09:41:18Z
2016-05-23T09:41:18Z
http://eprints.imtlucca.it/id/eprint/3488
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3488
2016-05-23T09:41:18Z
Misinformation in the loop: the emergence of narratives in Online Social Networks
The interlink between information and belief formation and revision is a fundamental aspect of social dynamics. The growth of knowledge fostered by a hyper-connected world, together with the unprecedented acceleration of scientific progress, has exposed individuals, governments and countries to an increasing level of complexity in explaining reality and its phenomena. Despite the enthusiastic rhetoric about so-called collective intelligence, conspiracy theories and other unsubstantiated claims find on the Web a natural medium for their diffusion. Cases in which these kinds of false information are used in political debates are far from unimaginable. In this work, we study the behavior of users supporting different (and opposite) worldviews – i.e. scientific and conspiracist thinking – who commented on the posts of the Facebook page of a large Italian political party that advocates direct democracy and e-Participation. We find that users supporting different narratives consume political information in a similar way. Moreover, by analyzing the composition of users active on the page in terms of commenting activity, we notice that almost one fifth of them are polarized consumers of conspiracy stories, and that these generate almost one third of the total comments on the page's posts.
Alessandro Bessi
Mauro Coletto
mauro.coletto@imtlucca.it
George Alexandru Davidescu
Antonio Scala
Walter Quattrociocchi
walter.quattrociocchi@imtlucca.it
2016-05-05T13:57:04Z
2016-05-05T13:57:04Z
http://eprints.imtlucca.it/id/eprint/3481
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3481
2016-05-05T13:57:04Z
Classification-aware distortion metric for HEVC intra coding
Increasingly many vision applications necessitate the transmission of acquired images and video to a remote location for automated processing. When the image data are consumed by analysis algorithms and possibly never seen by a human, tailoring compression to the application is beneficial from a bit rate perspective. We inject prior knowledge of the application in the encoder to make rate-distortion decisions based on an estimate of the accuracy that will be achieved when analyzing reconstructed image data. Focusing on classification (e.g., used for image segmentation), we propose a new application-aware distortion metric based on a geometric interpretation of classification error. We devise an implementation for the High Efficiency Video Coding standard, and derive optimal model parameters for the λ-domain rate control algorithm by curve fitting procedures. We evaluate our approach on time-lapse sequences from plant phenotyping experiments and cell fluorescence microscopy encoded in intra-only mode, observing a reduction in segmentation error across bit rates.
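The rate-distortion decision an encoder makes can be sketched as minimizing the Lagrangian cost J = D + λ·R over candidate coding modes; in the paper's setting, D would be the proposed classification-aware metric rather than a pixel-wise distortion. The mode names and (D, R) estimates below are hypothetical illustrative values, not HEVC internals.

```python
def rd_choose(candidates, lam):
    """Pick the candidate minimizing the Lagrangian cost J = D + lam * R."""
    return min(candidates, key=lambda m: m["D"] + lam * m["R"])

# Hypothetical intra-mode candidates with estimated distortion D and rate R.
modes = [
    {"name": "intra_DC",     "D": 12.0, "R": 40.0},
    {"name": "intra_planar", "D":  9.0, "R": 55.0},
    {"name": "intra_ang_10", "D":  7.5, "R": 80.0},
]

# A small lambda favours low distortion; a large lambda favours low rate.
print(rd_choose(modes, lam=0.05)["name"])
print(rd_choose(modes, lam=0.25)["name"])
```

Swapping the distortion term D for an application-aware estimate is exactly the lever the paper pulls: the minimization machinery stays the same, only the metric changes.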
Massimo Minervini
massimo.minervini@imtlucca.it
Sotirios A. Tsaftaris
sotirios.tsaftaris@imtlucca.it
2016-04-19T07:48:41Z
2016-04-19T09:06:15Z
http://eprints.imtlucca.it/id/eprint/3346
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3346
2016-04-19T07:48:41Z
Reversibility in the higher-order π-calculus
The notion of reversible computation is attracting increasing interest because of its applications in diverse fields, in particular the study of programming abstractions for reliable systems. In this paper, we continue the study undertaken by Danos and Krivine on reversible CCS by defining a reversible higher-order π -calculus, called rhoπ. We prove that reversibility in our calculus is causally consistent and that the causal information used to support reversibility in rhoπ is consistent with the one used in the causal semantics of the π -calculus developed by Boreale and Sangiorgi. Finally, we show that one can faithfully encode rhoπ into a variant of higher-order π, substantially improving on the result we obtained in the conference version of this paper.
Ivan Lanese
Claudio Antares Mezzina
claudio.mezzina@imtlucca.it
Jean-Bernard Stefani
2016-04-13T09:40:21Z
2016-04-13T09:40:21Z
http://eprints.imtlucca.it/id/eprint/3444
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3444
2016-04-13T09:40:21Z
Scaling Size and Parameter Spaces in Variability-Aware Software Performance Models (T)
In software performance engineering, what-if scenarios, architecture optimization, capacity planning, run-time adaptation, and uncertainty management of realistic models typically require the evaluation of many instances. Effective analysis is however hindered by two orthogonal sources of complexity. The first is the infamous problem of state space explosion — the analysis of a single model becomes intractable with its size. The second is due to massive parameter spaces to be explored, but such that computations cannot be reused across model instances. In this paper, we efficiently analyze many queuing models with the distinctive feature of more accurately capturing variability and uncertainty of execution rates by incorporating general (i.e., non-exponential) distributions. Applying product-line engineering methods, we consider a family of models generated by a core that evolves into concrete instances by applying simple delta operations affecting both the topology and the model's parameters. State explosion is tackled by turning to a scalable approximation based on ordinary differential equations. The entire model space is analyzed in a family-based fashion, i.e., at once using an efficient symbolic solution of a super-model that subsumes every concrete instance. Extensive numerical tests show that this is orders of magnitude faster than a naive instance-by-instance analysis.
Matthias Kowal
Max Tschaikowski
max.tschaikowski@imtlucca.it
Mirco Tribastone
mirco.tribastone@imtlucca.it
Ina Schaefer
2016-04-13T09:23:42Z
2016-04-13T09:23:42Z
http://eprints.imtlucca.it/id/eprint/3441
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3441
2016-04-13T09:23:42Z
Forward and Backward Bisimulations for Chemical Reaction Networks
We present two quantitative behavioral equivalences over species of a chemical reaction network (CRN) with semantics based on ordinary differential equations. Forward CRN bisimulation identifies a partition where each equivalence class represents the exact sum of the concentrations of the species belonging to that class. Backward CRN bisimulation relates species that have identical solutions at all time points when starting from the same initial conditions. Both notions can be checked using only CRN syntactical information, i.e., by inspection of the set of reactions. We provide a unified algorithm that computes the coarsest refinement up to our bisimulations in polynomial time. Further, we give algorithms to compute quotient CRNs induced by a bisimulation. As an application, we find significant reductions in a number of models of biological processes from the literature. In two cases we allow the analysis of benchmark models which would be otherwise intractable due to their memory requirements.
Luca Cardelli
Mirco Tribastone
mirco.tribastone@imtlucca.it
Max Tschaikowski
max.tschaikowski@imtlucca.it
Andrea Vandin
andrea.vandin@imtlucca.it
2016-04-13T08:40:24Z
2016-04-13T08:40:24Z
http://eprints.imtlucca.it/id/eprint/3437
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3437
2016-04-13T08:40:24Z
Efficient Syntax-Driven Lumping of Differential Equations
We present an algorithm to compute exact aggregations of a class of systems of ordinary differential equations (ODEs). Our approach consists of an extension of Paige and Tarjan's seminal solution to the coarsest refinement problem by encoding an ODE system into a suitable discrete-state representation. In particular, we consider a simple extension of the syntax of elementary chemical reaction networks because (i) it can express ODEs with derivatives given by polynomials of degree at most two, which are relevant in many applications in natural sciences and engineering; and (ii) we can build on two recently introduced bisimulations, which yield two complementary notions of ODE lumping. Our algorithm computes the largest bisimulations in O(r·s·log s) time, where r is the number of monomials and s is the number of variables in the ODEs. Numerical experiments on real-world models from biochemistry, electrical engineering, and structural mechanics show that our prototype is able to handle ODEs with millions of variables and monomials, providing significant model reductions.
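A naive sketch of exact lumping by partition refinement, for the special case of a linear system dx/dt = Ax: two variables may share a block only if, for every block, their per-block column sums of A coincide; blocks are split on this signature until the partition stabilizes. This quadratic toy is not the paper's O(r·s·log s) Paige-Tarjan-style algorithm, and the matrix below is a hypothetical example in which x0 and x1 turn out to be interchangeable.

```python
def refine(A, partition):
    """One split round: group the states of each block by their signature,
    i.e. the tuple of column sums of A restricted to each current block."""
    new = []
    for block in partition:
        sig = {}
        for j in block:
            key = tuple(sum(A[k][j] for k in H) for H in partition)
            sig.setdefault(key, []).append(j)
        new.extend(sig.values())
    return new

def coarsest_lumping(A):
    """Coarsest exact lumping of dx/dt = A x, by iterated refinement."""
    partition = [list(range(len(A)))]
    while True:
        new = refine(A, partition)
        if len(new) == len(partition):   # no block was split: stable
            return new
        partition = new

# Hypothetical 3-variable system; y1 = x0 + x1 evolves autonomously
# (dy1/dt = -y1), so {x0, x1} lump into one aggregate variable.
A = [[-2.0,  1.0, 0.0],
     [ 1.0, -2.0, 0.0],
     [ 1.0,  1.0, -1.0]]
print(coarsest_lumping(A))
```

Since each round can only split blocks and there are at most s blocks, the loop terminates after at most s rounds; the paper's contribution is doing this refinement in near-linear time for polynomial (not just linear) derivatives.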
Luca Cardelli
Mirco Tribastone
mirco.tribastone@imtlucca.it
Max Tschaikowski
max.tschaikowski@imtlucca.it
Andrea Vandin
andrea.vandin@imtlucca.it
2016-04-13T08:31:40Z
2016-04-13T08:31:40Z
http://eprints.imtlucca.it/id/eprint/3436
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3436
2016-04-13T08:31:40Z
Noise Reduction in Complex Biological Switches
Cells operate in noisy molecular environments via complex regulatory networks. It is possible to understand how molecular counts are related to noise in specific networks, but it is not generally clear how noise relates to network complexity, because different levels of complexity also imply different overall number of molecules. For a fixed function, does increased network complexity reduce noise, beyond the mere increase of overall molecular counts? If so, complexity could provide an advantage counteracting the costs involved in maintaining larger networks. For that purpose, we investigate how noise affects multistable systems, where a small amount of noise could lead to very different outcomes; thus we turn to biochemical switches. Our method for comparing networks of different structure and complexity is to place them in conditions where they produce exactly the same deterministic function. We are then in a good position to compare their noise characteristics relatively to their identical deterministic traces. We show that more complex networks are better at coping with both intrinsic and extrinsic noise. Intrinsic noise tends to decrease with complexity, and extrinsic noise tends to have less impact. Our findings suggest a new role for increased complexity in biological networks, at parity of function.
Luca Cardelli
Attila Csikász-Nagy
Neil Dalchau
Mirco Tribastone
mirco.tribastone@imtlucca.it
Max Tschaikowski
max.tschaikowski@imtlucca.it
2016-04-13T08:26:32Z
2016-04-13T08:26:32Z
http://eprints.imtlucca.it/id/eprint/3435
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3435
2016-04-13T08:26:32Z
Symbolic Computation of Differential Equivalences
Ordinary differential equations (ODEs) are widespread in many natural sciences including chemistry, ecology, and systems biology, and in disciplines such as control theory and electrical engineering. Building on the celebrated molecules-as-processes paradigm, they have become increasingly popular in computer science, with high-level languages and formal methods such as Petri nets, process algebra, and rule-based systems that are interpreted as ODEs. We consider the problem of comparing and minimizing ODEs automatically. Influenced by traditional approaches in the theory of programming, we propose differential equivalence relations. We study them for a basic intermediate language, for which we have decidability results, that can be targeted by a class of high-level specifications. An ODE implicitly represents an uncountable state space, hence reasoning techniques cannot be borrowed from established domains such as probabilistic programs with finite-state Markov chain semantics. We provide novel symbolic procedures to check an equivalence and compute the largest one via partition refinement algorithms that use satisfiability modulo theories. We illustrate the generality of our framework by showing that differential equivalences include (i) well-known notions for the minimization of continuous-time Markov chains (lumpability), (ii) bisimulations for chemical reaction networks recently proposed by Cardelli et al., and (iii) behavioral relations for process algebra with ODE semantics. With a prototype implementation we are able to detect equivalences in biochemical models from the literature that cannot be reduced using competing automatic techniques.
Luca Cardelli
Mirco Tribastone
mirco.tribastone@imtlucca.it
Max Tschaikowski
max.tschaikowski@imtlucca.it
Andrea Vandin
andrea.vandin@imtlucca.it
2016-04-13T08:14:47Z
2016-04-13T08:14:47Z
http://eprints.imtlucca.it/id/eprint/3433
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3433
2016-04-13T08:14:47Z
Comparing Chemical Reaction Networks: A Categorical and Algorithmic Perspective
We study chemical reaction networks (CRNs) as a kernel language for concurrency models with semantics based on ordinary differential equations. We investigate the problem of comparing two CRNs, i.e., to decide whether the trajectories of a source CRN can be matched by a target CRN under an appropriate choice of initial conditions. Using a categorical framework, we extend and relate model-comparison approaches based on structural (syntactic) and on dynamical (semantic) properties of a CRN, proving their equivalence. Then, we provide an algorithm to compare CRNs, running in time linear in the cardinality of all possible comparisons. Finally, we apply our results to biological models from the literature.
Luca Cardelli
Mirco Tribastone
mirco.tribastone@imtlucca.it
Max Tschaikowski
max.tschaikowski@imtlucca.it
Andrea Vandin
andrea.vandin@imtlucca.it
2016-03-21T10:55:23Z
2016-03-21T10:55:23Z
http://eprints.imtlucca.it/id/eprint/3259
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3259
2016-03-21T10:55:23Z
Computational models for the in silico analysis of drug delivery from drug-eluting stents
Stents are tubular meshed devices implanted to restore the patency of a vessel occluded by an atherosclerotic plaque. These devices were introduced into clinical practice in the 1980s. The first stent implanted in a human coronary artery was the Wallstent [1], a self-expandable metallic device. The use of a stent to expand the vessel was introduced to overcome the major limitation of angioplasty, the elastic recoil of the vessel wall, yet it also caused the onset of a different pathology: intra-stent restenosis. This pathology results from injuries to the vessel wall after balloon inflation, as well as from the different fluid-dynamic regime established after stent implantation [2]. Intra-stent restenosis is caused by the abnormal growth of tissue within the stent meshes, leading to failure of the implant.
The common therapeutic approach to limit hyperplasia is the systemic administration of antimitotic and anti-inflammatory drugs. This treatment generally fails because effective dosing levels have toxic effects on patients. Since 2000, a new class of stents has been introduced to address this problem: drug-eluting stents (DES), devices loaded with one or more active principles for local administration of the drug, avoiding the systemic administration of massive doses. DES are metallic devices impregnated with a drug on their surface or coated with a thin polymeric layer containing the active principle.
Mariacristina Gagliardi
mariacristina.gagliardi@imtlucca.it
2016-03-21T09:12:52Z
2016-03-22T11:08:58Z
http://eprints.imtlucca.it/id/eprint/3244
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3244
2016-03-21T09:12:52Z
Experimental and computational study of mechanical and transport properties of a polymer coating for drug-eluting stents
Background: Experimental and computational characterizations in the preclinical development of biomedical devices are complementary and can significantly help in a thorough analysis of performance before clinical evaluation. Methodology: Here the mechanical and drug-delivery properties of a polymer platform, prepared ad hoc to obtain coatings for drug-eluting stents, are reported; the polymer formulation and starting drug loading were varied to study the behavior of the platform, and a finite element model was constructed starting from experimental data. Results: Different platform formulations affected mechanical and drug-transport properties; these properties can be fine-tuned by varying the starting platform formulation. Finite element analysis allowed visualizing drug distribution maps over time in biological tissues for different commercial stents and polymer platform formulations.
Mariacristina Gagliardi
mariacristina.gagliardi@imtlucca.it
2016-03-21T08:41:12Z
2016-03-21T08:41:12Z
http://eprints.imtlucca.it/id/eprint/3240
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3240
2016-03-21T08:41:12Z
Parcellation-based connectome assessment by using structural and functional connectivity
Connectome analysis of the human brain structural and functional architecture provides a unique opportunity to understand the organization of brain networks. In this work, we investigate a novel large scale parcellation-based connectome, merging together information coming from resting state fMRI (rs-fMRI) data and diffusion tensor imaging (DTI) measurements.
Ying-Chia Lin
yingchia.lin@imtlucca.it
Tommaso Gili
Sotirios A. Tsaftaris
Andrea Gabrielli
Mariangela Iorio
Gianfranco Spalletta
Guido Caldarelli
guido.caldarelli@imtlucca.it
2016-03-21T08:41:04Z
2016-03-21T08:41:04Z
http://eprints.imtlucca.it/id/eprint/3239
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3239
2016-03-21T08:41:04Z
A cortical and sub-cortical parcellation clustering by intrinsic functional connectivity
Network analysis of resting-state fMRI (rsfMRI) has been widely utilized to investigate the functional architecture of the whole brain. Here we propose a robust parcellation method that first divides cortical and sub-cortical regions into sub-regions by clustering the rsfMRI data for each subject independently, and then merges those individual parcellations to obtain a global whole brain parcellation. To do so our method relies on majority voting (to merge parcellations of multiple subjects) and enforces spatial constraints within a hierarchical agglomerative clustering framework to define parcels that are spatially homogeneous.
Ying-Chia Lin
yingchia.lin@imtlucca.it
Tommaso Gili
Sotirios A. Tsaftaris
Andrea Gabrielli
Mariangela Iorio
Gianfranco Spalletta
Guido Caldarelli
guido.caldarelli@imtlucca.it
2016-03-14T13:02:02Z
2016-04-06T10:06:19Z
http://eprints.imtlucca.it/id/eprint/3223
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3223
2016-03-14T13:02:02Z
A Cortical and Sub-cortical Parcellation Clustering by Intrinsic Functional Connectivity
Network analysis of resting-state fMRI (rsfMRI) has been widely utilized to investigate the functional architecture of the whole brain. Such analysis can divide the brain into several discrete elements (nodes) connected by links (edges) representing the relation between two elements. The brain cortical and subcortical areas can be segmented or parcelled into several functional and/or structural regions. The connectome analysis of human-brain structure and functional connectivity provides a unique opportunity to understand the organisation of brain networks. However, such analyses require an appropriate definition of functional or structural nodes to efficiently represent cortical regions. In order to address this issue, here we propose a robust parcellation method based on resting-state fMRI, which can be generalized from the single-subject level to the multi-group one. Starting from the input data of a single subject, we construct multi-resolution graph elements and combine voting-based measurements to divide the cortical region into sub-regions, obtaining a whole-brain parcellation. Our parcellation relies on majority voting and imposes spatial constraints within a hierarchical agglomerative clustering framework to define parcels that are spatially homogeneous. We used rsfMRI data collected from 40 healthy subjects and we showed that our proposed algorithm is able to compute stable and reproducible parcellations across the group of subjects at multiple resolution levels. We find that, even though previous methods ensure on average larger overlap between parcels and regions in the AAL atlas, the method proposed herein reduces inter-subject variability, especially when the number of parcels increases. Our high-resolution parcels seem to be functionally more consistent and reliable and can be a useful tool for future analyses that aim to match the functional and structural architecture of the brain.
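The majority-voting merge can be sketched as follows, assuming (hypothetically) that per-subject parcel labels are already aligned across subjects; in practice a label-matching step would precede the vote, and spatial constraints would be enforced during clustering.

```python
from collections import Counter

def majority_vote(labelings):
    """Merge per-subject parcellations: each position (voxel) receives
    its most frequent label across subjects. `labelings` is a list of
    equal-length label sequences (hypothetical, pre-aligned input)."""
    merged = []
    for votes in zip(*labelings):
        merged.append(Counter(votes).most_common(1)[0][0])
    return merged

# Three hypothetical subjects, five voxels each.
subjects = [
    [0, 0, 1, 1, 2],
    [0, 1, 1, 1, 2],
    [0, 0, 1, 2, 2],
]
print(majority_vote(subjects))
```

The voxel-wise vote is what makes the group parcellation robust to single-subject noise: a label survives only if it is consistent across most subjects.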
Ying-Chia Lin
yingchia.lin@imtlucca.it
Tommaso Gili
Sotirios A. Tsaftaris
sotirios.tsaftaris@imtlucca.it
Andrea Gabrielli
Mariangela Iorio
Gianfranco Spalletta
Guido Caldarelli
guido.caldarelli@imtlucca.it
2016-02-29T09:01:28Z
2016-03-01T10:41:59Z
http://eprints.imtlucca.it/id/eprint/3147
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3147
2016-02-29T09:01:28Z
On the approximation of the optimal control functions in stochastic optimal control problems
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Marcello Sanguineti
2016-02-29T09:01:04Z
2016-03-04T08:25:15Z
http://eprints.imtlucca.it/id/eprint/3148
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3148
2016-02-29T09:01:04Z
Symmetry and antisymmetry properties of optimal solutions to regression problems
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
2016-02-26T15:48:35Z
2016-03-04T08:26:07Z
http://eprints.imtlucca.it/id/eprint/3146
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3146
2016-02-26T15:48:35Z
A green policy to schedule tasks in a distributed cloud
Stefano Sebastio
stefano.sebastio@imtlucca.it
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
2016-02-26T15:47:40Z
2016-03-01T10:41:10Z
http://eprints.imtlucca.it/id/eprint/3145
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3145
2016-02-26T15:47:40Z
Binary and multi-class Parkinsonian disorders classification using Support Vector Machines with graph-based features
Rita Morisi
rita.morisi@imtlucca.it
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Nico Lanconelli
Stefano Zanigni
David Neil Manners
Claudia Testa
Stefania Evangelisti
Laura Ludovica Gramegna
Claudio Bianchini
Pietro Cortelli
Caterina Tonon
Raffaele Lodi
2016-02-26T15:44:37Z
2016-03-04T08:24:34Z
http://eprints.imtlucca.it/id/eprint/3144
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3144
2016-02-26T15:44:37Z
Optimal distributed task scheduling in volunteer clouds
Stefano Sebastio
stefano.sebastio@imtlucca.it
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Alberto Bemporad
alberto.bemporad@imtlucca.it
2016-02-26T15:43:29Z
2016-03-04T08:25:32Z
http://eprints.imtlucca.it/id/eprint/3143
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3143
2016-02-26T15:43:29Z
Transboundary pollution control and environmental absorption efficiency management
F. El Ouardighi
K. Kogan
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Marcello Sanguineti
2016-02-26T15:41:35Z
2016-10-04T09:03:37Z
http://eprints.imtlucca.it/id/eprint/3142
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3142
2016-02-26T15:41:35Z
Linear Quadratic Gaussian (LQG) online learning
Optimal control theory and machine learning techniques are combined to propose and solve in closed form an optimal control formulation of online learning from supervised examples. The connections with the classical Linear Quadratic Gaussian (LQG) optimal control problem, of which the proposed learning paradigm is a non-trivial variation as it involves random matrices, are investigated. The obtained optimal solutions are compared with the Kalman-filter estimate of the parameter vector to be learned. It is shown that the former enjoys larger smoothness and robustness to outliers, thanks to the presence of a regularization term. The basic formulation of the proposed online-learning framework refers to a discrete time setting with a finite learning horizon and a linear model. Various extensions are investigated, including the infinite learning horizon and, via the so-called "kernel trick", the case of nonlinear models.
Preprint: arXiv:1606.04272v2 [math.OC] (Optimization and Control)
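The Kalman-filter baseline the paper compares against can be sketched in the scalar case: recursively estimate a parameter w in y = w·x + noise from a stream of supervised pairs. All numbers below (the true parameter, noise level, and priors) are illustrative assumptions, not the paper's random-matrix LQG formulation.

```python
import random

def kalman_scalar(data, w0=0.0, p0=10.0, q=0.0, r=0.25):
    """Kalman-filter estimate of a scalar parameter w in y = w*x + noise.
    w0/p0: prior mean and variance; q: process noise (0 for a static
    parameter); r: measurement-noise variance."""
    w, p = w0, p0
    for x, y in data:
        p = p + q                      # predict (no drift for a static w)
        k = p * x / (x * x * p + r)    # Kalman gain (measurement matrix = x)
        w = w + k * (y - w * x)        # correct with the innovation
        p = (1 - k * x) * p            # posterior variance
    return w

# Hypothetical stream of supervised pairs generated from true_w = 2.0.
rng = random.Random(0)
true_w = 2.0
data = [(x, true_w * x + rng.gauss(0, 0.5))
        for x in (rng.uniform(-1, 1) for _ in range(200))]
print(kalman_scalar(data))
```

The paper's point is that its regularized optimal-control solution tracks this estimate while being smoother and more robust to outliers; for the scalar static-parameter case above, the Kalman recursion coincides with recursive least squares.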
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Alberto Bemporad
alberto.bemporad@imtlucca.it
Marco Gori
Marcello Sanguineti
2016-02-26T15:40:04Z
2016-03-04T08:24:59Z
http://eprints.imtlucca.it/id/eprint/3141
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3141
2016-02-26T15:40:04Z
Symmetric and antisymmetric properties of solutions to kernel-based machine learning problems
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
2016-02-26T15:38:43Z
2016-03-01T10:42:48Z
http://eprints.imtlucca.it/id/eprint/3140
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3140
2016-02-26T15:38:43Z
Welfare effects of uniform and differential pricing schemes: an analysis through quadratic programming
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Fabio Pammolli
f.pammolli@imtlucca.it
Berna Tuncay
berna.tuncay@imtlucca.it
2016-02-26T15:28:19Z
2016-03-04T08:23:15Z
http://eprints.imtlucca.it/id/eprint/3137
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3137
2016-02-26T15:28:19Z
Congestion-aware forwarding strategies for intermittently connected networks
Marco Cello
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Mario Marchese
Marcello Sanguineti
2016-02-26T15:20:00Z
2016-02-26T15:20:00Z
http://eprints.imtlucca.it/id/eprint/3136
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3136
2016-02-26T15:20:00Z
Forwarding strategies for congestion control in intermittently connected networks
Marco Cello
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Mario Marchese
Marcello Sanguineti
2016-02-26T15:16:19Z
2016-02-26T15:16:19Z
http://eprints.imtlucca.it/id/eprint/3135
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3135
2016-02-26T15:16:19Z
A Machine-Learning Paradigm that Includes Pointwise Constraints
The classical framework of learning from examples is enhanced by the introduction of hard pointwise constraints, i.e., constraints on a finite set of examples that cannot be violated. They arise, e.g., when imposing coherent decisions of classifiers acting on different views of the same pattern. Constrained variational calculus is exploited to derive a representer theorem that provides a description of the functional structure of the solution. The general theory is applied to learning from hard linear pointwise constraints combined with classical supervised pairs and loss functions.
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Marco Gori
Stefano Melacci
Marcello Sanguineti
2016-02-26T15:08:57Z
2016-02-26T15:08:57Z
http://eprints.imtlucca.it/id/eprint/3134
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3134
2016-02-26T15:08:57Z
Average Packet Delivery Delay in Intermittently-Connected Networks
Delay/Disruption-Tolerant Networking (DTN) is addressed. It is a communication paradigm that enables
the communication over Intermittently-Connected Networks (ICNs), which are characterized by unpredictable
or scheduled contacts among nodes, high latency, and high bit error rates. DTN exploits storeand-forward
techniques in order to cope with intermittent link issues. A model is proposed, to compute the
average packet delivery delay in ICNs. We assume that the inter-meeting time as well as the contact time
between any two nodes is an exponentially-distributed random-variable. As a consequence, the behavior of
the communication between each pair of nodes can be modeled by a two-state Continuous-Time Markov
Chain (CTMC). It is assumed that the packet generation process at the source node follows a Poisson process,
so in the analysis one can exploit the Poisson Arrivals See Time Averages (PASTA) property. Both
the IP-like paradigm used by traditional TCP/IP protocols and DTN are considered. Numerical results and
simulations are presented.
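The two-state CTMC and PASTA ingredients can be illustrated with a small sketch (illustrative only; the paper's full delay model also covers queueing and multi-hop forwarding, which this toy omits, and the rates and names are hypothetical):

```python
import random

def contact_fraction(lam_meet, lam_leave):
    """Stationary probability that a node pair is in contact, for a two-state
    CTMC with meeting rate lam_meet and contact-ending rate lam_leave."""
    return lam_meet / (lam_meet + lam_leave)

def simulate_mean_wait(lam_meet, lam_leave, n=200_000, seed=1):
    """Monte Carlo estimate of the mean wait until the next contact for a
    packet generated at a uniformly random time (a PASTA-style arrival)."""
    random.seed(seed)
    total = 0.0
    for _ in range(n):
        # with probability pi_on the pair is already in contact: zero wait
        if random.random() < contact_fraction(lam_meet, lam_leave):
            continue
        # otherwise wait a memoryless Exp(lam_meet) residual inter-meeting time
        total += random.expovariate(lam_meet)
    return total / n

pi_on = contact_fraction(2.0, 8.0)        # pair in contact 20% of the time
mean_wait = simulate_mean_wait(2.0, 8.0)  # analytically (1 - pi_on) / lam_meet = 0.4
```

The simulated mean matches the closed-form value (1 - pi_on)/lam_meet, which is exactly the kind of agreement between analysis and simulation the abstract refers to.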
Marco Cello
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Mario Marchese
Marcello Sanguineti
2016-02-26T15:06:47Z
2016-02-26T15:09:47Z
http://eprints.imtlucca.it/id/eprint/3133
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3133
2016-02-26T15:06:47Z
Supervised Learning from Regions and Box Kernels
A supervised learning paradigm is investigated, in which the data are represented by labeled regions of the input space. This learning model is motivated by real-world applications, such as problems of medical diagnosis and image categorization. The associated optimization framework entails the minimization of a functional obtained by introducing a loss function that involves the labeled regions. A regularization term expressed via differential operators, modeling smoothness properties of the desired input/output relationship, is included. It is shown that the optimization problem associated to supervised learning from regions has a unique solution, represented as a linear combination of kernel functions determined by the differential operators together with the regions themselves. The case of regions given by multi-dimensional intervals (i.e., "boxes") is investigated as an interesting instance of learning from regions, which models prior knowledge expressed by logical propositions. The proposed approach covers as a particular case the classical learning context, which corresponds to the situation where regions degenerate to single points. Applications and numerical examples are discussed.
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Marco Gori
Stefano Melacci
Marcello Sanguineti
2016-02-26T15:00:45Z
2016-02-26T15:00:45Z
http://eprints.imtlucca.it/id/eprint/3132
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3132
2016-02-26T15:00:45Z
Evaluating flood hazard at the catchment scale via machine-learning techniques
Massimiliano Degiorgis
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Silvia Gorni
Rita Morisi
rita.morisi@imtlucca.it
Giorgio Roth
Marcello Sanguineti
Angela Celeste Taramasso
2016-02-26T14:43:14Z
2016-02-26T14:43:14Z
http://eprints.imtlucca.it/id/eprint/3131
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3131
2016-02-26T14:43:14Z
Dealing with mixed hard/soft constraints via Support constraint Machines
A learning paradigm is presented, which extends the classical framework of learning from examples by including hard pointwise constraints, i.e., constraints that cannot be violated. In applications, hard pointwise constraints may encode very precise prior knowledge coming from rules, applied, e.g., to a large collection of unsupervised examples. The classical learning framework corresponds to soft pointwise constraints, which can be violated at the cost of some penalization. The functional structure of the optimal solution is derived in terms of a set of "support constraints", which generalize the classical concept of "support vectors". They are at the basis of a novel learning paradigm, which we call "Support Constraint Machines". A case study and a numerical example are presented.
Marcello Sanguineti
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Marco Gori
Stefano Melacci
2016-02-26T14:41:30Z
2016-02-26T14:41:30Z
http://eprints.imtlucca.it/id/eprint/3130
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3130
2016-02-26T14:41:30Z
A Two-Player Differential Game Model for the Management of Transboundary Pollution and Environmental Absorption
The decentralized, nation-level structure of decision-making processes related to polluting emissions is likely to aggravate the decline in the efficiency of carbon sinks. A two-player differential game model of pollution is proposed. It accounts for a time-dependent environmental absorption efficiency and allows for the possibility of a switching of the biosphere from a carbon sink to a source. The impact of negative externalities arising from the non-cooperative transboundary-pollution game in which countries are dynamically involved is investigated. The differences in steady state between cooperative, open-loop, and Markov perfect Nash equilibria are studied. For the latter, two numerical methods for its approximation are compared.
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
F. El Ouardighi
K. Kogan
Marcello Sanguineti
2016-02-26T14:35:46Z
2016-02-29T08:31:00Z
http://eprints.imtlucca.it/id/eprint/3129
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3129
2016-02-26T14:35:46Z
Binary and multi-class classification of parkinsonian disorders with support vector machines based on quantitative brain MR and graph-based features
Laura Ludovica Gramegna
Claudia Testa
Rita Morisi
rita.morisi@imtlucca.it
Stefano Zanigni
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Nico Lanconelli
David Neil Manners
Stefania Evangelisti
Pietro Cortelli
Caterina Tonon
Raffaele Lodi
2016-02-26T13:30:28Z
2016-02-26T13:30:28Z
http://eprints.imtlucca.it/id/eprint/3128
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3128
2016-02-26T13:30:28Z
Online learning as an LQG optimal control problem with random matrices
In this paper, we combine optimal control theory and machine learning techniques to propose and solve an optimal control formulation of online learning from supervised examples, which are used to learn an unknown vector parameter modeling the relationship between the input examples and their outputs. We show some connections of the investigated problem with the classical LQG optimal control problem, of which the proposed problem is a non-trivial variation, as it involves random matrices. We also compare the optimal solution to the proposed problem with the Kalman-filter estimate of the parameter vector to be learned, demonstrating its greater smoothness and robustness to outliers. Extensions of the proposed online-learning framework are mentioned at the end of the paper.
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Alberto Bemporad
alberto.bemporad@imtlucca.it
Marco Gori
Rita Morisi
rita.morisi@imtlucca.it
Marcello Sanguineti
2016-02-26T13:24:09Z
2016-02-26T13:24:09Z
http://eprints.imtlucca.it/id/eprint/3127
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3127
2016-02-26T13:24:09Z
Learning with hard constraints as a limit case of learning with soft constraints
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Marco Gori
Stefano Melacci
Marcello Sanguineti
2016-02-26T13:11:06Z
2016-02-26T13:11:06Z
http://eprints.imtlucca.it/id/eprint/3126
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3126
2016-02-26T13:11:06Z
A SOM-based Chan–Vese model for unsupervised image segmentation
Active Contour Models (ACMs) constitute an efficient energy-based image segmentation framework. They usually treat the segmentation problem as an optimization problem, formulated in terms of a suitable functional, constructed in such a way that its minimum is achieved in correspondence with a contour that is a close approximation of the actual object boundary. However, for existing ACMs, handling images that contain objects characterized by many different intensities still represents a challenge. In this paper, we propose a novel ACM that combines, in a global and unsupervised way, the advantages of the Self-Organizing Map (SOM) with the level set framework of a state-of-the-art unsupervised global ACM, the Chan–Vese (C–V) model. We term our proposed model the SOM-based Chan–Vese (SOMCV) active contour model. It works by explicitly integrating the global information coming from the weights (prototypes) of the neurons in a trained SOM to help choose whether to shrink or expand the current contour during the iterative optimization process. The proposed model can handle images that contain objects characterized by complex intensity distributions, and is at the same time robust to additive noise. Experimental results show the high accuracy of the segmentation results obtained by the SOMCV model on several synthetic and real images, when compared to the Chan–Vese model and other image segmentation models.
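The way a SOM distills global intensity information into a few prototypes can be illustrated on scalar "intensities" (an illustrative sketch only; the parameters, data and function name are hypothetical and not taken from the paper, which couples the trained prototypes to a level set evolution):

```python
import numpy as np

def train_som_1d(samples, n_neurons=4, n_iter=2000, lr0=0.5, sigma0=1.0, seed=0):
    """Train a tiny 1-D SOM on scalar intensities; returns sorted prototypes."""
    rng = np.random.default_rng(seed)
    w = rng.uniform(samples.min(), samples.max(), n_neurons)  # prototype weights
    idx = np.arange(n_neurons)
    for t in range(n_iter):
        x = samples[rng.integers(len(samples))]         # pick a random sample
        bmu = np.argmin(np.abs(w - x))                  # best-matching unit
        lr = lr0 * (1 - t / n_iter)                     # decaying learning rate
        sigma = max(sigma0 * (1 - t / n_iter), 1e-3)    # shrinking neighborhood
        h = np.exp(-((idx - bmu) ** 2) / (2 * sigma**2))
        w += lr * h * (x - w)                           # neighborhood update
    return np.sort(w)

# hypothetical bimodal "image intensities": dark background and bright object
rng = np.random.default_rng(1)
samples = np.concatenate([rng.normal(0.2, 0.03, 500), rng.normal(0.8, 0.03, 500)])
protos = train_som_1d(samples)
```

After training, the prototypes cluster around the two intensity modes; SOMCV consults such prototypes to decide whether a pixel looks more like object or background when shrinking or expanding the contour.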
Mohammed Abdelsamea
mohammed.abdelsamea@imtlucca.it
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Mohamed Medhat Gaber
2016-02-26T12:55:26Z
2016-02-26T12:55:26Z
http://eprints.imtlucca.it/id/eprint/3125
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3125
2016-02-26T12:55:26Z
On the Relationship between Variational Level Set-Based and SOM-Based Active Contours
Most Active Contour Models (ACMs) treat the image segmentation problem as a functional optimization problem, as they work by dividing an image into several regions by optimizing a suitable functional. Among ACMs, variational level set methods have been used to build an active contour with the aim of modeling arbitrarily complex shapes; moreover, they can also handle topological changes of the contours. Self-Organizing Maps (SOMs) have attracted the attention of many computer vision scientists, particularly in modeling an active contour based on the idea of utilizing the prototypes (weights) of a SOM to control the evolution of the contour. SOM-based models have been proposed, in general, with the aim of exploiting the specific ability of SOMs to learn the edge-map information via their topology-preservation property, and of overcoming some drawbacks of other ACMs, such as being trapped in local minima of the image energy functional to be minimized. In this survey, we illustrate the main concepts of variational level set-based ACMs, SOM-based ACMs, and their relationship, and comprehensively review the development of their state-of-the-art models from a machine learning perspective, with a focus on their strengths and weaknesses.
Mohammed Abdelsamea
mohammed.abdelsamea@imtlucca.it
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Mohamed Medhat Gaber
Eyad Elyan
2016-02-26T12:39:16Z
2016-02-26T12:39:16Z
http://eprints.imtlucca.it/id/eprint/3123
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3123
2016-02-26T12:39:16Z
On the Curse of Dimensionality in the Ritz Method
It is shown that the classical Ritz method of the calculus of variations suffers from the “curse of dimensionality,” i.e., an exponential growth, as a function of the number of variables, of the dimension a linear subspace needs in order to achieve a desired relative improvement in the accuracy of approximation of the optimal solution value. The proof is constructive and is obtained by exhibiting a family of infinite-dimensional optimization problems for which this happens, namely those with quadratic functional and spherical constraint. The results provide a theoretical motivation for the search of alternative solution methods, such as the so-called “extended Ritz method,” to deal with the curse of dimensionality.
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
2016-02-26T12:26:43Z
2017-03-21T10:32:36Z
http://eprints.imtlucca.it/id/eprint/3122
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3122
2016-02-26T12:26:43Z
Optimal design of auxetic hexachiral metamaterials with local resonators
A parametric beam lattice model is formulated to analyse the propagation properties of elastic in-plane waves in an auxetic material based on a hexachiral topology of the periodic cell, equipped with inertial local resonators. The Floquet-Bloch boundary conditions are imposed on a reduced-order linear model in the dynamically active degrees of freedom only. Since the resonators can be designed to open and shift band gaps, an optimal design, focused on the largest possible gap in the low-frequency range, is achieved by solving a maximization problem in the bounded space of the significant geometrical and mechanical parameters. A locally optimal solution, for the lowest pair of consecutive dispersion curves, is found by employing the globally convergent version of the Method of Moving Asymptotes, combined with Monte Carlo and quasi-Monte Carlo multi-start techniques.
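The multi-start strategy the abstract mentions can be sketched generically (a stand-in sketch only: plain random hill climbing replaces the Method of Moving Asymptotes, and the objective is a hypothetical multimodal "gap width" surrogate, not the paper's band-gap functional):

```python
import numpy as np

def multistart_maximize(f, bounds, n_starts=20, iters=200, step=0.1, seed=0):
    """Random multi-start local search over a bounded parameter box: each run
    starts from a random point and accepts only improving perturbations."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    best_x, best_v = None, -np.inf
    for _ in range(n_starts):
        x = lo + rng.random(len(lo)) * (hi - lo)            # random start point
        for _ in range(iters):
            cand = np.clip(x + rng.normal(0, step, len(x)), lo, hi)
            if f(cand) > f(x):                              # keep only improvements
                x = cand
        if f(x) > best_v:
            best_x, best_v = x, f(x)
    return best_x, best_v

# hypothetical bimodal objective: global maximum ~1.0 near (0.7, 0.2),
# smaller local maximum ~0.5 near (0.2, 0.8)
gap = lambda p: (np.exp(-20 * ((p[0] - 0.7)**2 + (p[1] - 0.2)**2))
                 + 0.5 * np.exp(-20 * ((p[0] - 0.2)**2 + (p[1] - 0.8)**2)))
x_best, v_best = multistart_maximize(gap, [(0.0, 1.0), (0.0, 1.0)])
```

Restarting a purely local method from many (quasi-)random points is what lets the optimization escape the smaller local maximum, which is the role Monte Carlo and quasi-Monte Carlo multi-start plays in the paper.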
Andrea Bacigalupo
andrea.bacigalupo@imtlucca.it
Marco Lepidi
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Luigi Gambarotta
2016-02-26T12:10:01Z
2016-02-26T12:10:01Z
http://eprints.imtlucca.it/id/eprint/3119
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3119
2016-02-26T12:10:01Z
Automatic Classification of Leading Interactions in a String Quartet
The aim of the present work is to automatically analyze the leading interactions between the musicians of a string quartet, using machine learning techniques applied to nonverbal features of the musicians' behavior, which are detected with the help of a motion capture system. We represent these interactions by a graph of influence of the musicians, which displays the relations "is following" and "is not following" with weighted directed arcs. The goal of the machine learning problem investigated is to assign weights to these arcs in an optimal way. Since only a subset of the available training examples is labeled, a semi-supervised support vector machine is used, based on a linear kernel to limit its model complexity. Specific potential applications within the field of human-computer interaction are also discussed, such as e-learning, networked music performance, and social active listening.
Floriane Dardard
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Donald Glowinski
2016-02-12T13:32:18Z
2016-02-12T13:32:18Z
http://eprints.imtlucca.it/id/eprint/3068
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3068
2016-02-12T13:32:18Z
MultiVeStA: Statistical Model Checking for Discrete Event Simulators
The modeling, analysis and performance evaluation of large-scale systems are difficult tasks. An approach typically followed by engineers consists in simulating system models to obtain statistical estimates of quantitative properties. Similarly, a technique used by computer scientists working on quantitative analysis is Statistical Model Checking (SMC), where rigorous mathematical languages (e.g., logics) are used to express properties, which are automatically estimated, again by simulating the model at hand. These property specification languages provide a formal, compact and elegant way to express properties without hard-coding them in the model definition. This paper presents MultiVeStA, a statistical analysis tool which can be easily integrated with discrete event simulators, enriching them with efficient distributed statistical analysis and SMC capabilities.
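The core SMC loop that such tools automate can be sketched as follows (a sketch of the general recipe only: the Chernoff-Hoeffding sample-size bound shown here is one standard choice, and MultiVeStA's actual confidence-interval machinery may differ; the simulator is a hypothetical stand-in):

```python
import math
import random

def smc_estimate(sim, eps=0.02, delta=0.05, seed=42):
    """Estimate P(property holds) by repeated simulation. The number of runs
    is chosen via the Chernoff-Hoeffding bound so that the estimate is within
    eps of the true probability with confidence at least 1 - delta."""
    random.seed(seed)
    n = math.ceil(math.log(2 / delta) / (2 * eps**2))  # required sample size
    hits = sum(sim() for _ in range(n))                # each run yields True/False
    return hits / n, n

# hypothetical simulator: the property holds on ~70% of the runs
p_hat, n = smc_estimate(lambda: random.random() < 0.7)
```

The point of SMC is exactly this separation: the property is evaluated on each simulation run, and statistics over the runs replace exhaustive state-space exploration.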
Stefano Sebastio
stefano.sebastio@imtlucca.it
Andrea Vandin
andrea.vandin@imtlucca.it
2016-02-12T13:25:31Z
2016-04-06T09:37:22Z
http://eprints.imtlucca.it/id/eprint/3067
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3067
2016-02-12T13:25:31Z
Distributed statistical analysis of complex systems modeled through a chemical metaphor
The chemical-inspired programming approach is an emerging paradigm for defining the behavior of densely distributed and context-aware devices (e.g., in ecosystems of displays tailored to crowd steering, or to obtain profile-based coordinated visualization). Typically, the evolution of such systems cannot be easily predicted, which makes the availability of techniques and tools supporting prior-to-deployment analysis of paramount importance. Exact analysis techniques do not scale well as the complexity of systems grows; as a consequence, approximated techniques based on simulation have assumed a relevant role. This work presents a new simulation-based distributed analysis tool addressing the statistical analysis of such systems. The tool has been obtained by chaining two existing tools: MultiVeStA and Alchemist. The former is a recently proposed lightweight tool which allows one to enrich existing discrete event simulators with automated and distributed statistical analysis capabilities, while the latter is an efficient simulator for chemical-inspired computational systems. The tool is validated against a crowd steering scenario, and insights on performance are provided by discussing how the analysis tasks scale on a multi-core architecture.
Danilo Pianini
Stefano Sebastio
stefano.sebastio@imtlucca.it
Andrea Vandin
andrea.vandin@imtlucca.it
2016-02-12T13:11:53Z
2016-02-12T13:11:53Z
http://eprints.imtlucca.it/id/eprint/3065
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3065
2016-02-12T13:11:53Z
The SCEL Language: Design, Implementation, Verification
SCEL (Service Component Ensemble Language) is a new language specifically designed to rigorously model and program autonomic components and their interaction, while supporting formal reasoning on their behaviors. SCEL brings together various programming abstractions that allow one to directly represent aggregations, behaviors and knowledge according to specific policies. It also naturally supports programming interaction, self-awareness, context-awareness, and adaptation. The solid semantic grounds of the language are exploited for developing logics, tools and methodologies for formal reasoning on system behavior, to establish qualitative and quantitative properties of both the individual components and the overall systems.
Rocco De Nicola
r.denicola@imtlucca.it
Diego Latella
Alberto Lluch Lafuente
Michele Loreti
Andrea Margheri
Mieke Massink
Andrea Morichetta
Rosario Pugliese
Francesco Tiezzi
Andrea Vandin
andrea.vandin@imtlucca.it
2016-02-12T13:04:06Z
2016-04-06T07:58:07Z
http://eprints.imtlucca.it/id/eprint/3064
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3064
2016-02-12T13:04:06Z
Reconciling White-Box and Black-Box Perspectives on Behavioral Self-adaptation
This paper proposes to reconcile two perspectives on behavioral adaptation commonly taken at different stages of the engineering of autonomic computing systems. Requirements engineering activities often take a black-box perspective: a system is considered adaptive with respect to an environment whenever it is able to satisfy its goals irrespective of environment perturbations. Modeling and programming engineering activities often take a white-box perspective: a system is equipped with suitable adaptation mechanisms and its behavior is classified as adaptive depending on whether the adaptation mechanisms are enacted or not. The proposed approach reconciles the black- and white-box perspectives by proposing several notions of coherence between the adaptivity observed from the two perspectives: these notions provide useful criteria for the system developer to assess and possibly modify the adaptation requirements, models and programs of an autonomic system.
Roberto Bruni
Andrea Corradini
Fabio Gadducci
Matthias Hölzl
Alberto Lluch Lafuente
Andrea Vandin
andrea.vandin@imtlucca.it
Martin Wirsing
2016-02-12T12:37:25Z
2016-02-12T12:37:25Z
http://eprints.imtlucca.it/id/eprint/3063
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3063
2016-02-12T12:37:25Z
Differential Bisimulation for a Markovian Process Algebra
Formal languages with semantics based on ordinary differential equations (ODEs) have emerged as a useful tool to reason about large-scale distributed systems. We present differential bisimulation, a behavioral equivalence developed as the ODE counterpart of bisimulations for languages with probabilistic or stochastic semantics. We study it in the context of a Markovian process algebra. Similarly to Markovian bisimulations yielding an aggregated Markov process in the sense of the theory of lumpability, differential bisimulation yields a partition of the ODEs underlying a process algebra term, whereby the sum of the ODE solutions of the same partition block is equal to the solution of a single (lumped) ODE. Differential bisimulation is defined in terms of two symmetries that can be verified only using syntactic checks. This enables the adaptation to a continuous-state semantics of proof techniques and algorithms for finite, discrete-state, labeled transition systems. For instance, we readily obtain a result of compositionality, and provide an efficient partition-refinement algorithm to compute the coarsest ODE aggregation of a model according to differential bisimulation.
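The aggregation property underlying differential bisimulation, that summing the ODE solutions of a partition block matches the solution of a single lumped ODE, can be seen on a toy linear system (an illustrative example of ODE lumping, not taken from the paper; the dynamics are hypothetical):

```python
import numpy as np

def euler(f, x0, dt=1e-3, steps=5000):
    """Plain forward-Euler integration, sufficient for this illustration."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x = x + dt * f(x)
    return x

# Toy 3-variable linear system in which x1 and x2 behave symmetrically:
#   dx0 = -2*x0 + x1 + x2,   dx1 = x0 - x1,   dx2 = x0 - x2
full = euler(lambda x: np.array([-2 * x[0] + x[1] + x[2],
                                 x[0] - x[1],
                                 x[0] - x[2]]), [1.0, 0.0, 0.0])

# Lumped system for y = (x0, x1 + x2):
#   dy0 = -2*y0 + y1,        dy1 = 2*y0 - y1
lumped = euler(lambda y: np.array([-2 * y[0] + y[1],
                                   2 * y[0] - y[1]]), [1.0, 0.0])
```

Because x1 and x2 satisfy the symmetry, the sum x1 + x2 evolves exactly as the single lumped variable y1, so a 3-dimensional system is analyzed through a 2-dimensional one; this is the ODE analogue of lumping a Markov chain under bisimulation.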
Giulio Iacobelli
Mirco Tribastone
mirco.tribastone@imtlucca.it
Andrea Vandin
andrea.vandin@imtlucca.it
2016-02-12T12:27:56Z
2016-02-12T12:27:56Z
http://eprints.imtlucca.it/id/eprint/3062
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3062
2016-02-12T12:27:56Z
A White Box Perspective on Behavioural Adaptation
We present a white-box conceptual framework for adaptation developed in the context of the EU Project ASCENS coordinated by Martin Wirsing. We called it CoDa, for Control Data Adaptation, since it is based on the notion of control data. CoDa promotes a neat separation between application and adaptation logic through a clear identification of the set of data that is relevant for the latter. The framework provides an original perspective from which we survey a representative set of approaches to adaptation, ranging from programming languages and paradigms to computational models and architectural solutions.
Roberto Bruni
Andrea Corradini
Fabio Gadducci
Alberto Lluch Lafuente
Andrea Vandin
andrea.vandin@imtlucca.it
2016-02-12T11:52:51Z
2016-02-12T12:08:13Z
http://eprints.imtlucca.it/id/eprint/3061
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3061
2016-02-12T11:52:51Z
Modelling and analyzing adaptive self-assembly strategies with Maude
Building adaptive systems with predictable emergent behavior is a difficult task and it is becoming a critical need. The research community has accepted the challenge by introducing approaches of various nature: from software architectures to programming paradigms and analysis techniques. Our white-box conceptual approach to adaptive systems based on the notion of control data promotes a clear distinction between the application and the adaptation logic. In this paper we propose a concrete instance of our approach based on (i) a neat identification of control data; (ii) a hierarchical architecture that provides the basic structure to separate the adaptation and application logics; (iii) computational reflection as the main mechanism to realize the adaptation logic; (iv) probabilistic rule-based specifications and quantitative verification techniques to specify and analyze the adaptation logic. We show that our solution can be naturally realized in Maude, a Rewriting Logic based framework, and illustrate our approach by specifying, validating and analyzing a prominent example of adaptive systems: robot swarms equipped with self-assembly strategies.
Roberto Bruni
Andrea Corradini
Fabio Gadducci
Alberto Lluch Lafuente
Andrea Vandin
andrea.vandin@imtlucca.it
2016-02-11T15:16:10Z
2016-04-06T10:06:34Z
http://eprints.imtlucca.it/id/eprint/3055
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3055
2016-02-11T15:16:10Z
Large-scale analysis of neuroimaging data on commercial clouds with content-aware resource allocation strategies
The combined use of mice that carry genetic mutations of human pathology (transgenic mouse models) and advanced neuroimaging methods (such as magnetic resonance imaging) has the potential to radically change how we approach disease understanding, diagnosis and treatment. Morphological changes occurring in the brain of transgenic animals as a result of the interaction between environment and genotype can be assessed using advanced image analysis methods, an effort described as 'mouse brain phenotyping'. However, the computational methods involved in the analysis of high-resolution brain images are demanding. While running such analyses on local clusters is possible, not all users have access to such infrastructure, and even for those that do, additional computational capacity can be beneficial (e.g. to meet sudden high-throughput demands). In this paper we use a commercial cloud platform for brain neuroimaging and analysis. We achieve a registration-based multi-atlas, multi-template anatomical segmentation, normally a lengthy effort, within a few hours. Naturally, performing such analyses on the cloud entails a monetary cost, and it is worthwhile to identify strategies that allocate resources intelligently. In our context a critical aspect is identifying how long each job will take. We propose a method that estimates the complexity of an image-processing task, a registration, using statistical moments and shape descriptors of the image content. We use this information to learn and predict the completion time of a registration. The proposed approach is easy to deploy, and could serve as an alternative for laboratories that may require instant access to large high-performance-computing infrastructures. To facilitate adoption by the community we publicly release the source code.
Massimo Minervini
massimo.minervini@imtlucca.it
Cristian Rusu
Mario Damiano
Valter Tucci
Angelo Bifone
Alessandro Gozzi
Sotirios A. Tsaftaris
sotirios.tsaftaris@imtlucca.it
2016-01-20T10:27:26Z
2016-04-06T10:06:49Z
http://eprints.imtlucca.it/id/eprint/3025
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3025
2016-01-20T10:27:26Z
Supervised Learning of Functional Maps for Infarct Classification
Our submission to the STACOM Challenge at MICCAI 2015 is based on the supervised learning of a functional map representation between the End Systole (ES) and End Diastole (ED) phases of the Left Ventricle (LV), for distinguishing infarcted LVs from healthy ones. The Laplace-Beltrami eigen-spectra of the LV surfaces at ES and ED, represented by their triangular meshes, are used to compute the functional maps. Multi-scale distortions induced by the mapping are further calculated by singular value decomposition of the functional map. During training, the information of whether an LV surface is healthy or diseased is known, and it is used to train an SVM classifier on the singular values at multiple scales corresponding to the distorted areas, augmented with the surface area difference of the epicardium and endocardium meshes. At testing, similar augmented features are calculated and fed to the SVM model for classification. Promising results are obtained both on cross-validation of the training data and on the testing data, which encourages us to believe that this algorithm will perform favourably in comparison to state-of-the-art methods.
Anirban Mukhopadhyay
Ilkay Oksuz
ilkay.oksuz@imtlucca.it
Sotirios A. Tsaftaris
sotirios.tsaftaris@imtlucca.it
2016-01-20T10:15:32Z
2016-04-06T07:35:14Z
http://eprints.imtlucca.it/id/eprint/3024
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3024
2016-01-20T10:15:32Z
Reconstruction of DSC-MRI Data from Sparse Data Exploiting Temporal Redundancy and Contrast Localization
To assess brain perfusion, one of the available methods is the estimation of parameters such as cerebral blood flow (CBF), cerebral blood volume (CBV) and mean transit time (MTT) from Dynamic Susceptibility Contrast-MRI (DSC-MRI). This estimation requires both high temporal resolution, to capture the rapid tracer kinetics, and high spatial resolution, to detect small impairments and reliably discriminate boundaries. With this in mind, we propose a compressed sensing approach to decrease the acquisition time without sacrificing the reconstruction, especially in the region affected by tracer passage. To this end we propose the use of an available TVL1-L2 minimization scheme with a novel additional term that introduces information on the volume at baseline (no tracer). We show on simulated data the benefit of such a scheme, which is able to achieve an accurate reconstruction even at high acceleration (×16), with an RMSE of 2.8, ten times lower than the error obtained with the original reconstruction.
Davide Boschetto
davide.boschetto@imtlucca.it
M. Castellaro
P. Di Prima
A. Bertoldo
Enrico Grisan
2016-01-20T09:41:10Z
2016-01-20T09:41:10Z
http://eprints.imtlucca.it/id/eprint/3023
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3023
2016-01-20T09:41:10Z
X3DMMS: An X3DOM Tool for Molecular and Material Sciences
We present a virtual reality environment based on X3DOM technologies, aimed at enabling researchers in the molecular and materials sciences to set up the initial conditions of a simulation to be performed with the DL_POLY software, through a virtual environment implemented in X3D. After having completed the definition of the molecular system to be studied, in a very intuitive and user-friendly way, the user can write out the DL_POLY input files. In this way the crucial phase of the initial setup of the simulation is simplified and can be performed in a short time.
Even if some technological drawbacks have been experienced in the current X3DOM implementation, we are confident that this approach, which definitively solves the "traditional" issues related to compatibility among different web browsers (plugins) and operating systems, represents a highway for the diffusion of X3D technologies in several application fields.
Fabiana Zollo
fabiana.zollo@imtlucca.it
Luca Caprini
Osvaldo Gervasi
Alessandro Costantini
2016-01-20T09:35:59Z
2016-01-20T09:35:59Z
http://eprints.imtlucca.it/id/eprint/3022
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3022
2016-01-20T09:35:59Z
User Interaction and Data Management for Large Scale Grid Applications
In this paper we present a model that combines the X3DMMS application with the G3CPie execution framework, enabling the user to perform large-scale computations on distributed computing environments. Such an approach facilitates the management and preparation of the data required to define the input files for DL_POLY, a popular Molecular Dynamics (MD) package used for the study of molecular systems. The researcher can define the initial configuration of the molecular system in an intuitive way, making use of the X3DMMS virtual reality environment, and prepare the related MD-package-oriented input files. After having defined the initial conditions of the system, the researcher can carry out the required computations using the G3CPie workflow environment, which controls the execution of the calculation on a distributed computing infrastructure. To test the validity of the developed model, implemented on the EGI infrastructure, we present the results obtained for a propane bulk system, where the solvation process of propane inside the bulk has been investigated. The presented approach provides a reusable example for other laboratories or groups interested both in acting through virtual representations of molecular systems and in porting their applications to distributed computing infrastructures.
Alessandro Costantini
Osvaldo Gervasi
Fabiana Zollo
fabiana.zollo@imtlucca.it
Luca Caprini
2016-01-20T08:59:10Z
2016-01-20T09:37:24Z
http://eprints.imtlucca.it/id/eprint/3021
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3021
2016-01-20T08:59:10Z
Viral Misinformation: The Role of Homophily and Polarization
Alessandro Bessi
Fabio Petroni
Michela Del Vicario
michela.delvicario@imtlucca.it
Fabiana Zollo
fabiana.zollo@imtlucca.it
Aris Anagnostopoulos
Antonio Scala
Guido Caldarelli
guido.caldarelli@imtlucca.it
Walter Quattrociocchi
walter.quattrociocchi@imtlucca.it
2016-01-19T15:59:42Z
2016-04-06T07:39:00Z
http://eprints.imtlucca.it/id/eprint/3018
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3018
2016-01-19T15:59:42Z
Semiautomatic detection of villi in confocal endoscopy for the evaluation of celiac disease
Celiac Disease (CD) is an immune-mediated enteropathy, diagnosed in clinical practice by intestinal biopsy and the concomitant presence of a positive celiac serology. Confocal Laser Endomicroscopy (CLE) allows skilled and trained experts to potentially perform in vivo virtual histology of small-bowel mucosa. In particular, it allows the qualitative evaluation of mucosal alterations such as a decrease in goblet-cell density, the presence of villous atrophy or crypt hypertrophy. We present a semi-automatic method for villi detection in confocal endoscopy images, whose appearance changes in case of villous atrophy. Starting from a set of manual seeds, a first rough segmentation of the villi is obtained by means of mathematical morphology operations. A merge-and-split procedure is then performed to ensure that each seed originates a different region in the final segmentation. A border refinement process is finally performed, evolving the shape of each region according to local gradient intensities. Mean and median Dice coefficients for 290 villi originating from 66 images, when compared to manually obtained ground truth, are 80.71 and 87.96 respectively.
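The seeded rough-segmentation step can be sketched with standard morphology tools. This is a minimal illustration under stated assumptions, not the authors' implementation: the threshold-based foreground mask and the function name are assumptions.

```python
import numpy as np
from scipy import ndimage

def rough_segment_from_seed(image, seed_yx, threshold, n_iter=50):
    """Grow a rough region from a manual seed by iterative binary dilation
    restricted to a foreground mask (geodesic dilation)."""
    mask = image > threshold              # candidate villus tissue (assumed rule)
    region = np.zeros_like(mask)
    region[seed_yx] = True
    # each iteration dilates the region but never leaves the mask,
    # so the result is the mask component connected to the seed
    region = ndimage.binary_dilation(region, iterations=n_iter, mask=mask)
    return region
```

Each manual seed would produce one such region; the merge-and-split and border-refinement stages described above are not sketched here.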
Davide Boschetto
davide.boschetto@imtlucca.it
H. Mirzaei
R.W.L. Leong
Giacomo Tarroni
Enrico Grisan
2016-01-19T15:31:34Z
2016-04-06T07:34:52Z
http://eprints.imtlucca.it/id/eprint/3017
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3017
2016-01-19T15:31:34Z
Detection and density estimation of goblet cells in confocal endoscopy for the evaluation of celiac disease
Celiac Disease (CD) is an immune-mediated enteropathy, diagnosed in clinical practice by intestinal biopsy and the concomitant presence of a positive celiac serology. Confocal Laser Endomicroscopy (CLE) allows skilled and trained experts to potentially perform in vivo virtual histology of small-bowel mucosa. In particular, it allows the qualitative evaluation of mucosal alterations such as a decrease in goblet-cell density, the presence of villous atrophy or crypt hypertrophy. We present a semi-automatic computer-based method for the detection of goblet cells in confocal endoscopy images, whose density changes in case of pathological tissue. After a manual selection of a suitable region of interest, the candidate columnar and goblet cell centers are first detected and the cellular architecture is estimated from their positions using a Voronoi diagram. The region within each Voronoi cell is then analyzed and classified as goblet cell or other. The results suggest that our method is able to detect and label goblet cells immersed in a columnar epithelium in a fast, reliable and automatic way. Accepting 0.44 false positives per image, we obtain a sensitivity of 90.3. Furthermore, estimated and real goblet-cell densities are comparable (error: 9.7 ± 16.9, correlation: 87.2, R2 = 76).
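The Voronoi step (estimating the cellular architecture from detected centers) can be illustrated as follows; the per-region area computation and function names are illustrative assumptions, not the paper's classification features.

```python
import numpy as np
from scipy.spatial import Voronoi

def convex_polygon_area(pts):
    """Shoelace area of a convex polygon; vertices are angle-sorted first."""
    c = pts.mean(axis=0)
    order = np.argsort(np.arctan2(pts[:, 1] - c[1], pts[:, 0] - c[0]))
    x, y = pts[order, 0], pts[order, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))

def voronoi_cell_areas(centers):
    """Area of the Voronoi region of each detected cell center
    (np.inf for unbounded border regions)."""
    vor = Voronoi(centers)
    areas = np.full(len(centers), np.inf)
    for i, region_idx in enumerate(vor.point_region):
        region = vor.regions[region_idx]
        if region and -1 not in region:   # keep finite regions only
            areas[i] = convex_polygon_area(vor.vertices[region])
    return areas
```

Per-region statistics of this kind could then feed a goblet-vs-other classifier, as the abstract describes.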
Davide Boschetto
davide.boschetto@imtlucca.it
H. Mirzaei
R.W.L. Leong
Enrico Grisan
2016-01-19T15:25:22Z
2016-04-06T07:36:06Z
http://eprints.imtlucca.it/id/eprint/3016
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3016
2016-01-19T15:25:22Z
Baseline constrained reconstruction of DSC-MRI tracer kinetics from sparse Fourier data
In order to assess brain perfusion, one of the available methods is the estimation of parameters such as cerebral blood flow (CBF), cerebral blood volume (CBV) and mean transit time (MTT) from Dynamic Susceptibility Contrast MRI (DSC-MRI). This estimation requires both high temporal and spatial resolution to capture the rapid tracer kinetics, detect small impairments and reliably discriminate boundaries. With this in mind, we propose a compressed sensing approach to decrease the acquisition time without sacrificing the reconstruction, especially in the region affected by the tracer. Within the framework of a TV-L1-L2 minimization for solving the reconstruction from partial Fourier data, we introduce a novel baseline-constraining term weighting the difference of the reconstructed volume from the baseline in all regions where no perfusion is apparent. We show that the proposed reconstruction scheme is able to provide accurate estimation of the tracer kinetics (the necessary step for estimating CBF, CBV and MTT) in the volume even at high acceleration (x16), with an RMSE of 11, a third of what is achievable without the baseline constraint.
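The flavor of such a composite objective can be written down directly; this is an illustrative 2-D sketch only — the weights, masks and function names are assumptions, and the actual TV-L1-L2 solver is not shown.

```python
import numpy as np

def tv(x):
    """Anisotropic total variation of a 2-D image (sum of |gradients|)."""
    return np.abs(np.diff(x, axis=0)).sum() + np.abs(np.diff(x, axis=1)).sum()

def baseline_constrained_objective(x, y, sample_mask, baseline, perf_mask,
                                   lam_tv=1e-2, lam_l1=1e-3, lam_b=1.0):
    """TV-L1-L2 objective for reconstruction from partial Fourier data y,
    plus a term penalizing deviation from the baseline volume outside the
    perfused region (weights are illustrative, not from the paper)."""
    Fx = np.fft.fft2(x)
    data_fit = np.linalg.norm((Fx - y)[sample_mask]) ** 2   # L2 data term
    sparsity = np.abs(x).sum()                              # L1 term
    baseline_term = np.linalg.norm((x - baseline)[~perf_mask]) ** 2
    return data_fit + lam_tv * tv(x) + lam_l1 * sparsity + lam_b * baseline_term
```

A solver would minimize this over `x`; the sketch only makes the extra baseline term explicit.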
Davide Boschetto
davide.boschetto@imtlucca.it
P. Di Prima
M. Castellaro
A. Bertoldo
Enrico Grisan
2015-12-15T11:27:37Z
2016-03-18T10:39:28Z
http://eprints.imtlucca.it/id/eprint/2972
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2972
2015-12-15T11:27:37Z
A Concurrent SOM-Based Chan-Vese Model for Image Segmentation
Concurrent Self Organizing Maps (CSOMs) deal with the pattern classification problem in a parallel processing way, aiming to minimize a suitable objective function. Similarly, Active Contour Models (ACMs) (e.g., the Chan-Vese (CV) model) deal with the image segmentation problem as an optimization problem by minimizing a suitable energy functional. The effectiveness of ACMs is a real challenge in many computer vision applications. In this paper, we propose a novel regional ACM, which relies on a CSOM to approximate the foreground and background image intensity distributions in a supervised way, and to drive the active-contour evolution accordingly. We term our model Concurrent Self Organizing Map-based Chan-Vese (CSOM-CV) model. Its main idea is to concurrently integrate the global information extracted by a CSOM from a few supervised pixels into the level-set framework of the CV model to build an effective ACM. Experimental results show the effectiveness of CSOM-CV in segmenting synthetic and real images, when compared with the stand-alone CV and CSOM models.
Mohammed Abdelsamea
mohammed.abdelsamea@imtlucca.it
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Mohamed Medhat Gaber
2015-12-11T11:32:13Z
2015-12-11T11:32:13Z
http://eprints.imtlucca.it/id/eprint/2971
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2971
2015-12-11T11:32:13Z
Douglas-Rachford splitting: Complexity estimates and accelerated variants
We propose a new approach for analyzing convergence of the Douglas-Rachford splitting method for solving convex composite optimization problems. The approach is based on a continuously differentiable function, the Douglas-Rachford Envelope (DRE), whose stationary points correspond to the solutions of the original (possibly nonsmooth) problem. By proving the equivalence between the Douglas-Rachford splitting method and a scaled gradient method applied to the DRE, results from smooth unconstrained optimization are employed to analyze convergence properties of DRS, to tune the method and to derive an accelerated version of it.
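As a toy illustration of the splitting being analyzed, here is the basic DRS iteration on a problem with a closed-form solution. This is a plain-DRS sketch, not the paper's DRE-based accelerated variant; the problem instance and function names are assumptions.

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding: prox of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def drs_lasso_like(a, lam, gamma=1.0, n_iter=200):
    """Douglas-Rachford splitting for  min_x 0.5||x - a||^2 + lam * ||x||_1.
    prox of f(x) = 0.5||x - a||^2 with step gamma is (z + gamma*a)/(1 + gamma);
    prox of g = lam * ||.||_1 is soft-thresholding."""
    z = np.zeros_like(a)
    for _ in range(n_iter):
        x = (z + gamma * a) / (1.0 + gamma)   # x = prox_{gamma f}(z)
        x_bar = soft(2 * x - z, gamma * lam)  # prox_{gamma g}(2x - z)
        z = z + (x_bar - x)                   # relaxation parameter = 1
    return x
```

For this instance the minimizer is exactly `soft(a, lam)`, which makes the fixed point easy to check.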
Panagiotis Patrinos
panagiotis.patrinos@imtlucca.it
Lorenzo Stella
lorenzo.stella@imtlucca.it
Alberto Bemporad
alberto.bemporad@imtlucca.it
2015-12-04T08:57:09Z
2015-12-04T09:00:42Z
http://eprints.imtlucca.it/id/eprint/2967
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2967
2015-12-04T08:57:09Z
Il Futuro della Cyber Security in Italia. Un libro bianco per raccontare le principali sfide che il nostro Paese dovrà affrontare nei prossimi cinque anni
Roberto Baldoni
Rocco De Nicola
r.denicola@imtlucca.it
2015-12-03T14:50:46Z
2015-12-03T14:50:46Z
http://eprints.imtlucca.it/id/eprint/2963
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2963
2015-12-03T14:50:46Z
Social Determinants of Content Selection in the Age of (Mis)Information
Despite the enthusiastic rhetoric about so-called collective intelligence, conspiracy theories – e.g. global warming induced by chemtrails or the link between vaccines and autism – find on the Web a natural medium for their dissemination. Users preferentially consume information according to their system of beliefs, and the strife among users with opposite worldviews (e.g., scientific and conspiracist) may result in heated debates. In this work we provide a genuine example of information consumption on a set of 1.2 million Italian Facebook users. We show, by means of a thorough quantitative analysis, that information supporting different worldviews – i.e. scientific and conspiracist news – is consumed in a comparable way. Moreover, we measure the effect of 4,709 pieces of evidently false information (satirical versions of conspiracist stories) and 4,502 debunking memes (information aiming at contrasting unsubstantiated rumors) on users polarized toward conspiracy claims.
Alessandro Bessi
Guido Caldarelli
guido.caldarelli@imtlucca.it
Michela Del Vicario
michela.delvicario@imtlucca.it
Antonio Scala
Walter Quattrociocchi
walter.quattrociocchi@imtlucca.it
2015-12-03T13:38:45Z
2015-12-03T13:38:45Z
http://eprints.imtlucca.it/id/eprint/2962
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2962
2015-12-03T13:38:45Z
Differential Analysis of Interacting Automata with Immediate Actions
The stochastic modelling of software systems with activities whose durations are separated by many orders of magnitude typically leads to numerical complications due to stiffness. To avoid explicit state-space generation (a prerequisite to tackling this problem via suitable manipulations or aggregations), in this paper we present an accurate and scalable fluid approximation. It is expressed as a compact piecewise-linear system of ordinary differential equations, which have discontinuous right-hand sides as a result of the incorporation of immediateness. We study the nature of this approximation in a general high-level framework of interacting automata. On a case study of client/server interaction, our approach is about two times faster than the analysis conducted on the stiff equations where immediate actions are explicitly modelled.
Luca Bortolussi
Mirco Tribastone
mirco.tribastone@imtlucca.it
2015-12-03T13:32:51Z
2015-12-03T13:32:51Z
http://eprints.imtlucca.it/id/eprint/2961
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2961
2015-12-03T13:32:51Z
Model-based Development and Performance Analysis for Evolving Manufacturing Systems
Manufacturing systems and their control software exhibit a large number of variants, which evolve over time in order to meet changing functional and non-functional requirements. To handle the resulting complexity, we propose a multi-perspective modeling approach with different viewpoints regarding workflow, architecture and component behavior. We combine it with delta modeling to seamlessly capture variability and evolution by the same means on each of the viewpoints. We show how the separation in different viewpoints enables early performance analysis as well as code generation. The approach is illustrated using a case study.
Matthias Kowal
Christian Prehofer
Ina Schaefer
Mirco Tribastone
mirco.tribastone@imtlucca.it
2015-11-30T15:16:11Z
2015-11-30T15:16:11Z
http://eprints.imtlucca.it/id/eprint/2941
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2941
2015-11-30T15:16:11Z
SLAC: A Formal Service-Level-Agreement Language for Cloud Computing
The need for mechanisms to automate and regulate the interaction among the parties involved in offered cloud services is exacerbated by the increasing number of providers and solutions that enable the cloud paradigm. This regulation needs to be defined through a contract, the so-called Service Level Agreement (SLA). We argue that the current solutions for SLA specification cannot cope with the distinctive characteristics of clouds. Therefore, in this paper we define a language, named SLAC, devised for specifying SLAs for the cloud computing domain. The main differences with respect to the existing specification languages are: SLAC is domain-specific, its semantics is formally defined in order to avoid ambiguity, it supports the main cloud deployment models, and it enables the specification of multi-party agreements. Moreover, SLAC supports the business aspects of the domain, such as pricing schemes, business actions and metrics. Furthermore, SLAC comes with an open-source software framework which enables the specification, evaluation and enforcement of SLAs for clouds. We illustrate the potential and effectiveness of the SLAC language and its management framework by experimenting with an OpenNebula cloud system.
Rafael B. Uriarte
Francesco Tiezzi
Rocco De Nicola
r.denicola@imtlucca.it
2015-11-30T15:08:22Z
2015-11-30T15:08:22Z
http://eprints.imtlucca.it/id/eprint/2940
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2940
2015-11-30T15:08:22Z
Self-expression and Dynamic Attribute-Based Ensembles in SCEL
In the field of distributed autonomous computing the current trend is to develop cooperating computational entities enabled with enhanced self-* properties. The expression self-* indicates the possibility of a component inside an ensemble, i.e. a set of collaborative autonomic components, to self organize, heal (repair), optimize and configure with little or no human interaction. We focus on a self-* property called self-expression, defined as the ability to deploy run-time changes of the coordination pattern of the observed ensemble; the goal of the ensemble is to achieve adaptivity by meeting functional and non-functional requirements when specific tasks have to be completed. The purpose of this paper is to rigorously present the mechanisms involved whenever a change in the coordination pattern is needed, and the interactions that take place. To this aim, we use SCEL (Software Component Ensemble Language), a formal language for describing autonomic components and their interactions, featuring a highly dynamic and flexible way to form ensembles based on components’ attributes.
Giacomo Cabri
Nicola Capodieci
Luca Cesari
Rocco De Nicola
r.denicola@imtlucca.it
Rosario Pugliese
Francesco Tiezzi
Franco Zambonelli
2015-11-30T14:55:01Z
2015-11-30T14:55:01Z
http://eprints.imtlucca.it/id/eprint/2939
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2939
2015-11-30T14:55:01Z
Group-by-Group Probabilistic Bisimilarities and Their Logical Characterizations
We provide two interpretations, over nondeterministic and probabilistic processes, of PML, the probabilistic version of Hennessy-Milner logic used by Larsen and Skou to characterize bisimilarity of probabilistic processes without internal nondeterminism. We also exhibit two new bisimulation-based equivalences, which are in full agreement with the two different interpretations of PML. The new equivalences are coarser than the bisimilarity for nondeterministic and probabilistic processes proposed by Segala and Lynch, which instead is in agreement with a version of Hennessy-Milner logic extended with an additional probabilistic operator interpreted over state distributions rather than over individual states. The modal logic characterizations provided for the new equivalences thus offer a uniform framework for reasoning on purely nondeterministic processes, reactive probabilistic processes, and nondeterministic and probabilistic processes.
Marco Bernardo
Rocco De Nicola
r.denicola@imtlucca.it
Michele Loreti
2015-11-30T13:11:55Z
2015-11-30T13:11:55Z
http://eprints.imtlucca.it/id/eprint/2938
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2938
2015-11-30T13:11:55Z
Programming and Verifying Component Ensembles
A simplified version of the kernel language SCEL, that we call SCELlight, is introduced as a formalism for programming and verifying properties of so-called cyber-physical systems consisting of software-intensive ensembles of components, featuring complex intercommunications and interactions with humans and other systems. In order to validate the amenability of the language for verification purposes, we provide a translation of SCELlight specifications into Promela. We test the feasibility of the approach by formally specifying an application scenario, consisting of a collection of components offering a variety of services meeting different quality levels, and by using SPIN to verify that some desired behaviors are guaranteed.
Rocco De Nicola
r.denicola@imtlucca.it
Alberto Lluch Lafuente
Michele Loreti
Andrea Morichetta
Rosario Pugliese
Valerio Senni
Francesco Tiezzi
2015-11-30T12:56:35Z
2015-11-30T12:56:35Z
http://eprints.imtlucca.it/id/eprint/2937
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2937
2015-11-30T12:56:35Z
Introduction to “Rigorous Engineering of Autonomic Ensembles” – Track Introduction
Today’s software systems are becoming increasingly distributed and decentralized and have to adapt autonomously to dynamically changing, open-ended environments. Often their nodes partake in complex interactions with other nodes or with humans. We call these kinds of distributed, complex systems operating in open-ended and changing environments ensembles.
Martin Wirsing
Rocco De Nicola
r.denicola@imtlucca.it
Matthias Hölzl
2015-11-30T12:37:23Z
2016-04-06T07:56:18Z
http://eprints.imtlucca.it/id/eprint/2936
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2936
2015-11-30T12:37:23Z
A Life Cycle for the Development of Autonomic Systems: The E-mobility Showcase
Component ensembles are a promising way of building self-aware autonomic adaptive systems. This approach has been promoted by the EU project ASCENS, which develops the core idea of ensembles by providing rigorous semantics as well as models and methods for the whole development life cycle of an ensemble-based system. These methods specifically address adaptation, self-awareness, self-optimization, and continuous system evolution. In this paper, we demonstrate the key concepts and benefits of the ASCENS approach in the context of intelligent navigation of electric vehicles (e-Mobility), which itself is one of the three key case studies of the project.
Tomáš Bureš
Rocco De Nicola
r.denicola@imtlucca.it
Ilias Gerostathopoulos
Nicklas Hoch
Michal Kit
Nora Koch
Giacoma Valentina Monreale
Ugo Montanari
Rosario Pugliese
Nikola Serbedzija
Martin Wirsing
Franco Zambonelli
2015-11-24T15:47:03Z
2015-11-24T15:47:03Z
http://eprints.imtlucca.it/id/eprint/2930
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2930
2015-11-24T15:47:03Z
Robust model predictive control for discrete-time fractional-order systems
In this paper we propose a tube-based robust model predictive control scheme for fractional-order discrete-time systems of the Grünwald-Letnikov type with state and input constraints. We first approximate the infinite-dimensional fractional-order system by a finite-dimensional linear system and show that the actual dynamics can be approximated arbitrarily tightly. We use the approximate dynamics to design a tube-based model predictive controller which endows the controlled closed-loop system with robust stability properties.
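For reference, the Grünwald-Letnikov fractional difference underlying such systems can be computed directly; a minimal sketch (unit sampling step, function name assumed — the paper's finite-dimensional approximation amounts to truncating the coefficient tail):

```python
import numpy as np
from scipy.special import binom

def gl_fractional_diff(x, alpha):
    """Grunwald-Letnikov fractional difference of order alpha for a 1-D
    sequence x:  D^a x_k = sum_{j=0}^{k} (-1)^j C(alpha, j) x_{k-j}."""
    k = np.arange(len(x))
    c = (-1.0) ** k * binom(alpha, k)   # GL coefficients for real alpha
    # x[i::-1] is (x_i, x_{i-1}, ..., x_0), so the dot realizes the sum above
    return np.array([np.dot(c[:i + 1], x[i::-1]) for i in range(len(x))])
```

For `alpha = 1` this reduces to the ordinary first difference, and for `alpha = 0` to the identity, which is a quick sanity check on the coefficients.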
Pantelis Sopasakis
pantelis.sopasakis@imtlucca.it
Sotiris Ntouskas
Haralambos Sarimveis
2015-11-24T13:00:34Z
2015-11-24T13:00:34Z
http://eprints.imtlucca.it/id/eprint/2931
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2931
2015-11-24T13:00:34Z
The eNanoMapper database for nanomaterial safety information
Background: The NanoSafety Cluster, a cluster of projects funded by the European Commission, identified the need for a computational infrastructure for toxicological data management of engineered nanomaterials (ENMs). Ontologies, open standards, and interoperable designs were envisioned to empower a harmonized approach to European research in nanotechnology. This setting provides a number of opportunities and challenges in the representation of nanomaterials data and the integration of ENM information originating from diverse systems. Within this cluster, eNanoMapper works towards supporting the collaborative safety assessment for ENMs by creating a modular and extensible infrastructure for data sharing, data analysis, and building computational toxicology models for ENMs.
Results: The eNanoMapper database solution builds on the previous experience of the consortium partners in supporting diverse data through flexible data storage, open source components and web services. We have recently described the design of the eNanoMapper prototype database along with a summary of challenges in the representation of ENM data and an extensive review of existing nano-related data models, databases, and nanomaterials-related entries in chemical and toxicogenomic databases. This paper continues with a focus on the database functionality exposed through its application programming interface (API), and its use in visualisation and modelling. Considering the preferred community practice of using spreadsheet templates, we developed a configurable spreadsheet parser facilitating user friendly data preparation and data upload. We further present a web application able to retrieve the experimental data via the API and analyze it with multiple data preprocessing and machine learning algorithms.
Conclusion: We demonstrate how the eNanoMapper database is used to import and publish online ENM and assay data from several data sources, how the “representational state transfer” (REST) API enables building user friendly interfaces and graphical summaries of the data, and how these resources facilitate the modelling of reproducible quantitative structure–activity relationships for nanomaterials (NanoQSAR).
Nina Jeliazkova
Haralambos Chomenides
Philip Doganis
Bengt Fadeel
Roland Grafström
Barry Hardy
Janna Hastings
Markus Hegi
Vedrin Jeliazkov
Nikolay Kochev
Pekka Kohonen
Cristian Munteanu
Haralambos Sarimveis
Bart Smeets
Pantelis Sopasakis
pantelis.sopasakis@imtlucca.it
Georgia Tsiliki
David Vorgrimmler
Egon Willighagen
2015-11-05T14:07:32Z
2015-11-05T14:07:32Z
http://eprints.imtlucca.it/id/eprint/2828
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2828
2015-11-05T14:07:32Z
SEDNAM - Socio-Economic Dynamics: Networks and Agent-Based Models - Introduction
Recent years have witnessed the increasing interest of physicists, mathematicians and computer scientists in socio-economic systems. In our view, the many reasons behind this can be summarized by observing that traditional approaches to disciplines such as sociology and economics have dramatically shown their limitations.
Serge Galam
Marco Alberto Javarone
Tiziano Squartini
tiziano.squartini@imtlucca.it
2015-10-28T15:00:56Z
2015-10-28T15:00:56Z
http://eprints.imtlucca.it/id/eprint/2788
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2788
2015-10-28T15:00:56Z
Debunking in a World of Tribes
Recently a simple military exercise on the Internet was perceived as the beginning of a new civil war in the US. Social media aggregate people around common interests, eliciting a collective framing of narratives and worldviews. However, the wide availability of user-provided content and the direct path between producers and consumers of information often foster confusion about causation, encouraging mistrust, rumors, and even conspiracy thinking. To counter such a trend, attempts to debunk are often undertaken. Here, we examine the effectiveness of debunking through a quantitative analysis of 54 million users over a time span of five years (Jan 2010 to Dec 2014). In particular, we compare how users interact with proven (scientific) and unsubstantiated (conspiracy-like) information on Facebook in the US. Our findings confirm the existence of echo chambers where users interact primarily with either conspiracy-like or scientific pages. Both groups interact similarly with the information within their echo chamber. We examine 47,780 debunking posts and find that attempts at debunking are largely ineffective. For one, only a small fraction of usual consumers of unsubstantiated information interact with the posts. Furthermore, we show that those few are often the most committed conspiracy users and, rather than internalizing debunking information, they often react to it negatively. Indeed, after interacting with debunking posts, users retain, or even increase, their engagement within the conspiracy echo chamber.
Fabiana Zollo
fabiana.zollo@imtlucca.it
Alessandro Bessi
Michela Del Vicario
michela.delvicario@imtlucca.it
Antonio Scala
Guido Caldarelli
guido.caldarelli@imtlucca.it
Louis Shekhtman
Shlomo Havlin
Walter Quattrociocchi
walter.quattrociocchi@imtlucca.it
2015-10-28T14:52:49Z
2016-05-04T09:46:22Z
http://eprints.imtlucca.it/id/eprint/2787
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2787
2015-10-28T14:52:49Z
A Quadratic Programming Algorithm Based on Nonnegative Least Squares with Applications to Embedded Model Predictive Control
This paper proposes an active-set method based on nonnegative least squares (NNLS) to solve strictly convex quadratic programming (QP) problems, such as those that arise in Model Predictive Control (MPC). The main idea is to rephrase the QP problem as a Least Distance Problem (LDP) that is solved via an NNLS reformulation. While the method is rather general for solving strictly convex QPs subject to linear inequality constraints, it is particularly useful for embedded MPC because (i) it is very fast compared to other existing state-of-the-art QP algorithms, (ii) it is very simple to code, requiring only basic arithmetic operations for computing LDL^T decompositions recursively to solve linear systems of equations, and (iii) contrary to iterative methods, it provides the solution or recognizes infeasibility in a finite number of steps.
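The core reformulation chain (QP → least distance problem → NNLS) can be sketched as follows. This illustrative version uses SciPy's dense NNLS solver and the classical Lawson-Hanson LDP construction rather than the paper's recursive LDL^T implementation; the helper names are assumptions.

```python
import numpy as np
from scipy.optimize import nnls

def solve_ldp(G, h):
    """Least Distance Problem  min ||x||  s.t.  G x >= h,
    solved via a single NNLS problem (Lawson & Hanson, ch. 23)."""
    n = G.shape[1]
    E = np.vstack([G.T, h.reshape(1, -1)])   # (n+1) x m stacked matrix
    f = np.zeros(n + 1)
    f[-1] = 1.0
    u, _ = nnls(E, f)                        # min_{u>=0} ||E u - f||
    r = E @ u - f                            # residual encodes the solution
    if abs(r[-1]) < 1e-12:
        raise ValueError("LDP infeasible")
    return -r[:n] / r[-1]

def solve_qp(Q, c, A, b):
    """Strictly convex QP  min 0.5 x'Qx + c'x  s.t.  A x <= b,
    reduced to an LDP via  y = L'x + L^{-1}c  with  Q = LL'."""
    L = np.linalg.cholesky(Q)
    Linv = np.linalg.inv(L)
    M = A @ Linv.T                           # constraints in y-space
    d = b + A @ (Linv.T @ (Linv @ c))        # uses Q^{-1} = L'^{-1} L^{-1}
    y = solve_ldp(-M, -d)                    # M y <= d  <=>  -M y >= -d
    return Linv.T @ (y - Linv @ c)
```

On a tiny QP (`Q = I`, `c = (-1, -1)`, constraint `x1 + x2 <= 1`) this recovers the constrained minimizer `(0.5, 0.5)`.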
Alberto Bemporad
alberto.bemporad@imtlucca.it
2015-10-22T13:41:13Z
2015-10-22T13:41:13Z
http://eprints.imtlucca.it/id/eprint/2779
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2779
2015-10-22T13:41:13Z
Distributed solution of stochastic optimal control problems on GPUs
Stochastic optimal control problems arise in many applications and are, in principle, large-scale, involving up to millions of decision variables. Their use in control applications is often limited by the availability of algorithms that can solve them efficiently and within the sampling time of the controlled system.
In this paper we propose a dual accelerated proximal gradient algorithm which is amenable to parallelization and demonstrate that its GPU implementation affords high speed-up values (with respect to a CPU implementation) and greatly outperforms well-established commercial optimizers such as Gurobi.
Ajay Kumar Sampathirao
Pantelis Sopasakis
pantelis.sopasakis@imtlucca.it
Alberto Bemporad
alberto.bemporad@imtlucca.it
Panagiotis Patrinos
panagiotis.patrinos@imtlucca.it
2015-10-22T13:39:05Z
2016-05-04T10:15:53Z
http://eprints.imtlucca.it/id/eprint/2780
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2780
2015-10-22T13:39:05Z
Scenario-Based Model Predictive Operation Control of Islanded Microgrids
We propose a model predictive control (MPC) approach for the operation of islanded microgrids that takes into account the stochasticity of wind and load forecasts. In comparison to worst case approaches, the probability distribution of the prediction is used to optimize the operation of the microgrid, leading to less conservative solutions. Suitable models for time series forecast are derived and employed to create scenarios. These scenarios and the system measurements are used as inputs for a stochastic MPC, wherein a mixed-integer problem is solved to derive the optimal controls. In the provided case study, the stochastic MPC yields an increase of wind power generation and decrease of conventional generation.
Christian Hans
hans@control.tu-berlin.de
Pantelis Sopasakis
pantelis.sopasakis@imtlucca.it
Alberto Bemporad
alberto.bemporad@imtlucca.it
Raisch Jörg
raisch@control.tu-berlin.de
Carsten Reincke-Collon
2015-10-19T10:16:18Z
2016-02-26T11:47:36Z
http://eprints.imtlucca.it/id/eprint/2778
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2778
2015-10-19T10:16:18Z
Supervised and semi-supervised classifiers for the detection of flood-prone areas
Supervised and semi-supervised machine-learning techniques are applied and compared for the recognition of flood hazard. The learning goal consists in distinguishing between flood-exposed and marginal-risk areas. Kernel-based binary classifiers using six quantitative morphological features, derived from data stored in digital elevation models, are trained to model the relationship between morphology and flood hazard. According to the experimental outcomes, such classifiers are appropriate tools when one is interested in performing an initial low-cost detection of flood-exposed areas, to be possibly refined in successive steps by more time-consuming and costly investigations by experts. The use of these automatic classification techniques is valuable, e.g., in insurance applications, where one is interested in estimating the flood hazard of areas for which limited labeled information is available. The proposed machine-learning techniques are applied to the basin of the Italian Tanaro River. The experimental results show that, for this case study, semi-supervised methods outperform supervised ones when only a few labeled examples are used (the number of labeled examples being the same in both cases), together with a much larger number of unlabeled ones.
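A supervised vs. semi-supervised comparison of this kind can be mimicked on synthetic data. This sketch uses scikit-learn stand-ins (self-training in place of the paper's kernel-based semi-supervised method); the dataset, label counts and settings are all assumptions.

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.metrics import accuracy_score
from sklearn.semi_supervised import SelfTrainingClassifier
from sklearn.svm import SVC

# Two well-separated classes; only 20 of 400 points carry labels.
X, y = make_blobs(n_samples=400, centers=[[-5, -5], [5, 5]],
                  cluster_std=1.0, random_state=0)
labeled = np.concatenate([np.where(y == 0)[0][:10], np.where(y == 1)[0][:10]])
y_semi = np.full(len(X), -1)          # -1 marks "unlabeled" for scikit-learn
y_semi[labeled] = y[labeled]

# Supervised baseline: kernel classifier trained on the labeled points only.
sup = SVC(kernel="rbf", gamma="scale").fit(X[labeled], y[labeled])
acc_sup = accuracy_score(y, sup.predict(X))

# Semi-supervised: self-training additionally exploits the unlabeled points.
semi = SelfTrainingClassifier(SVC(kernel="rbf", gamma="scale", probability=True))
semi.fit(X, y_semi)
acc_semi = accuracy_score(y, semi.predict(X))
```

On real, less separable data (as in the Tanaro case study) the gap between the two regimes is where the comparison becomes informative.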
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Rita Morisi
rita.morisi@imtlucca.it
Giorgio Roth
Marcello Sanguineti
Angela Celeste Taramasso
2015-10-19T09:51:47Z
2016-04-05T12:19:09Z
http://eprints.imtlucca.it/id/eprint/2777
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2777
2015-10-19T09:51:47Z
Supervised Learning Modelization and Segmentation of Cardiac Scar in Delayed Enhanced MRI
Delayed Enhancement Magnetic Resonance Imaging can be used to non-invasively differentiate viable from non-viable myocardium within the Left Ventricle in patients suffering from myocardial diseases. Automated segmentation of scar tissue can be used to accurately quantify the percentage of myocardium affected. This paper presents a method for cardiac scar detection and segmentation based on supervised learning and level set segmentation. First, a model of the appearance of scar tissue is trained using a Support Vector Machines classifier on image-derived descriptors. Based on the areas detected by the classifier, an accurate segmentation is then performed using level sets.
Laura Lara
Sergio Vera
Frederic Perez
Nico Lanconelli
Rita Morisi
rita.morisi@imtlucca.it
Bruno Donini
Dario Turco
Cristiana Corsi
Claudio Lamberti
Giovana Gavidia
Maurizio Bordone
Eduardo Soudah
Nick Curzen
James Rosengarten
John Morgan
Javier Herrero
Miguel A. González Ballester
2015-10-19T09:40:53Z
2016-04-06T08:50:40Z
http://eprints.imtlucca.it/id/eprint/2776
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2776
2015-10-19T09:40:53Z
Binary and Multi-class Parkinsonian Disorders Classification Using Support Vector Machines
This paper presents a method for automated classification of Parkinsonian disorders using Support Vector Machines (SVMs). Magnetic Resonance quantitative markers are used as features to train SVMs with the aim of automatically diagnosing patients with different Parkinsonian disorders. Binary and multi-class classification problems are investigated with the aim of automatically distinguishing subjects with different forms of disorders. A ranking feature selection method is also used as a preprocessing step in order to assess the significance of the different features in diagnosing Parkinsonian disorders. In particular, it turns out that the features selected as the most meaningful ones reflect the opinions of the clinicians as to the most important markers in the diagnosis of these disorders. The results achieved in the classification phase are promising: in the two multi-class classification problems investigated, average accuracies of 81% and 90% are obtained, while in the binary scenarios taken into consideration, the accuracy is never less than 88%.
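A filter-style ranking step of this kind can be sketched as follows. The abstract does not specify the exact ranking criterion, so absolute Pearson correlation with the class label is used here purely as a generic stand-in.

```python
import math

def rank_features(X, y):
    """Rank feature indices by |Pearson correlation| with the label,
    most relevant first (simple filter-style feature ranking)."""
    n, d = len(X), len(X[0])
    my = sum(y) / n
    scores = []
    for j in range(d):
        col = [row[j] for row in X]
        mx = sum(col) / n
        cov = sum((col[i] - mx) * (y[i] - my) for i in range(n))
        vx = sum((c - mx) ** 2 for c in col)
        vy = sum((v - my) ** 2 for v in y)
        r = cov / math.sqrt(vx * vy) if vx > 0 and vy > 0 else 0.0
        scores.append((abs(r), j))
    return [j for _, j in sorted(scores, reverse=True)]
```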
Rita Morisi
rita.morisi@imtlucca.it
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Nico Lanconelli
Stefano Zanigni
David Neil Manners
Claudia Testa
Stefania Evangelisti
Laura Ludovica Gramegna
Claudio Bianchini
Pietro Cortelli
Caterina Tonon
Raffaele Lodi
2015-10-19T09:31:34Z
2015-10-19T09:31:34Z
http://eprints.imtlucca.it/id/eprint/2775
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2775
2015-10-19T09:31:34Z
Semi-automated scar detection in delayed enhanced cardiac magnetic resonance images
Late enhancement cardiac magnetic resonance imaging (MRI) can precisely delineate myocardial scars. We present a semi-automated method for detecting scars in cardiac MRI. This method has the potential to improve routine clinical practice, since quantification is not currently offered due to time constraints. A first segmentation step was developed for extracting the target regions for potential scar and determining pre-candidate objects. Pattern recognition methods are then applied to the segmented images in order to detect the position of the myocardial scar. The database of late gadolinium enhancement (LE) cardiac MR images consists of 111 blocks of images acquired from 63 patients at the University Hospital Southampton NHS Foundation Trust (UK). At least one scar was present for each patient, and all the scars were manually annotated by an expert. A group of images (around one third of the entire set) was used for training the system, which was subsequently tested on all the remaining images. Four different classifiers were trained (Support Vector Machine (SVM), k-nearest neighbor (KNN), Bayesian and feed-forward neural network) and their performance was evaluated by using Free-response Receiver Operating Characteristic (FROC) analysis. Feature selection was implemented for analyzing the importance of the various features. The proposed segmentation method allowed the region affected by the scar to be extracted correctly in 96% of the blocks of images. The SVM was shown to be the best classifier for our task, and our system reached an overall sensitivity of 80% with less than 7 false positives per patient. The method we present provides an effective tool for the detection of scars on cardiac MRI. This may be of value in clinical practice by permitting routine reporting of scar quantification.
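An FROC operating point like the one reported (80% sensitivity at fewer than 7 false positives per patient) combines per-patient detection counts; a minimal sketch of the computation, with made-up counts:

```python
def froc_point(per_patient_counts, scars_per_patient):
    """One FROC operating point: overall sensitivity and average false
    positives per patient, from per-patient (true_pos, false_pos) counts."""
    tp = sum(c[0] for c in per_patient_counts)
    fp = sum(c[1] for c in per_patient_counts)
    sensitivity = tp / sum(scars_per_patient)
    return sensitivity, fp / len(per_patient_counts)
```

Sweeping the classifier's decision threshold and recomputing this point traces the full FROC curve.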
Rita Morisi
rita.morisi@imtlucca.it
Bruno Donini
Nico Lanconelli
James Rosengarden
John Morgan
Stephen Harden
Nick Curzen
2015-10-19T09:22:53Z
2015-10-19T09:22:53Z
http://eprints.imtlucca.it/id/eprint/2774
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2774
2015-10-19T09:22:53Z
Sparse Solutions to the Average Consensus Problem via Various Regularizations of the Fastest Mixing Markov-Chain Problem
In the consensus problem on multi-agent systems, in which the states of the agents represent opinions, the agents aim at reaching a common opinion (or consensus state) through local exchange of information. An important design problem is to choose the degree of interconnection of the subsystems to achieve a good trade-off between a small number of interconnections and a fast convergence to the consensus state, which is the average of the initial opinions under mild conditions. This paper addresses this problem through l₁-norm and l₀-“pseudo-norm” regularized versions of the well-known Fastest Mixing Markov-Chain (FMMC) problem. We show that such versions can be interpreted as robust forms of the FMMC problem and provide results to guide the choice of the regularization parameter.
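The mixing-speed objective behind the FMMC problem is the second-largest eigenvalue modulus (SLEM) of the transition matrix. A minimal numpy sketch (the example matrices are illustrative) shows the trade-off the regularized formulations control: removing interconnections increases the SLEM, slowing convergence to the average.

```python
import numpy as np

def slem(P):
    """Second-largest eigenvalue modulus of a symmetric stochastic matrix;
    a smaller SLEM means faster mixing towards the consensus state."""
    eigs = np.sort(np.abs(np.linalg.eigvalsh(P)))[::-1]
    return eigs[1]

# Fully connected averaging vs a path graph on three agents
# (the path drops the 1-3 interconnection).
P_dense = np.full((3, 3), 1 / 3)
P_path = np.array([[0.5, 0.5, 0.0],
                   [0.5, 0.0, 0.5],
                   [0.0, 0.5, 0.5]])
```

Here `slem(P_dense)` is 0 (one-step mixing) while `slem(P_path)` is 0.5: fewer links, slower consensus.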
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Rita Morisi
rita.morisi@imtlucca.it
Alberto Bemporad
alberto.bemporad@imtlucca.it
2015-10-13T08:07:33Z
2015-10-13T08:08:43Z
http://eprints.imtlucca.it/id/eprint/2771
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2771
2015-10-13T08:07:33Z
Unsupervised Myocardial Segmentation for Cardiac MRI
Though unsupervised segmentation was a de-facto standard for cardiac MRI segmentation early on, the recent cardiac MRI segmentation literature has favored fully supervised techniques such as Dictionary Learning and Atlas-based techniques. However, the benefits of unsupervised techniques, e.g., no need for large amounts of training data and better potential for handling variability in anatomy and image contrast, are more evident with emerging cardiac MR modalities. For example, CP-BOLD is a new MRI technique that has been shown to detect ischemia without any contrast agent, at stress but also at rest. Although CP-BOLD looks similar to standard CINE, changes in myocardial intensity patterns and shape across cardiac phases, due to the heart’s motion, the BOLD effect and artifacts, undermine the assumptions of fully supervised segmentation techniques, resulting in a significant drop in segmentation accuracy. In this paper, we present a fully unsupervised technique for segmenting myocardium from the background in both standard CINE MR and CP-BOLD MR. We combine appearance with motion information (obtained via Optical Flow) in a dictionary learning framework to sparsely represent important features in a low-dimensional space and separate myocardium from background accordingly. Our fully automated method learns background-only models, and a one-class classifier provides the myocardial segmentation. The advantages of the proposed technique are demonstrated on a dataset containing CP-BOLD MR and standard CINE MR image sequences acquired in baseline and ischemic conditions across 10 canine subjects, where our method outperforms state-of-the-art supervised segmentation techniques in CP-BOLD MR and performs at par for standard CINE MR.
Anirban Mukhopadhyay
Ilkay Oksuz
ilkay.oksuz@imtlucca.it
Marco Bevilacqua
Rohan Dharmakumar
Sotirios A. Tsaftaris
2015-10-13T08:04:03Z
2015-10-13T08:04:03Z
http://eprints.imtlucca.it/id/eprint/2770
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2770
2015-10-13T08:04:03Z
Dictionary Learning Based Image Descriptor for Myocardial Registration of CP-BOLD MR
Cardiac Phase-resolved Blood Oxygen-Level-Dependent (CP-BOLD) MRI is a new contrast agent- and stress-free imaging technique for the assessment of myocardial ischemia at rest. The precise registration among the cardiac phases in this cine type acquisition is essential for automating the analysis of images of this technique, since it can potentially lead to better specificity of ischemia detection. However, inconsistency in myocardial intensity patterns and the changes in myocardial shape due to the heart’s motion lead to low registration performance for state-of-the-art methods. This low accuracy can be explained by the lack of distinguishable features in CP-BOLD and inappropriate metric definitions in current intensity-based registration frameworks. In this paper, the sparse representations, which are defined by a discriminative dictionary learning approach for source and target images, are used to improve myocardial registration. This method combines appearance with Gabor and HOG features in a dictionary learning framework to sparsely represent features in a low dimensional space. The sum of absolute differences of these distinctive sparse representations are used to define a similarity term in the registration framework. The proposed approach is validated on a dataset of CP-BOLD MR and standard CINE MR acquired in baseline and ischemic condition across 10 canines.
Ilkay Oksuz
ilkay.oksuz@imtlucca.it
Anirban Mukhopadhyay
Marco Bevilacqua
Rohan Dharmakumar
Sotirios A. Tsaftaris
2015-10-09T11:09:10Z
2015-10-09T11:09:10Z
http://eprints.imtlucca.it/id/eprint/2768
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2768
2015-10-09T11:09:10Z
Effect of BOLD Contrast on Myocardial Registration
Cardiac phase-resolved Blood-Oxygen-Level-Dependent (CP-BOLD) MRI is a new approach for detecting ischemia at rest. Currently, disease assessment relies on segmental analysis and uses only a few images in the phase-resolved acquisition. It is expected that using all phases can permit pixel-level characterization of CP-BOLD MRI. In this study, state-of-the-art image registration techniques are evaluated on cardiac BOLD MRI data for the first time. The results show that cardiac phase-dependent variations in myocardial BOLD contrast in CP-BOLD images create a statistically significant decrease in accuracy compared to standard CINE MR images acquired under conditions of health and myocardial ischemia.
Ilkay Oksuz
ilkay.oksuz@imtlucca.it
Anirban Mukhopadhyay
Marco Bevilacqua
Hsin-Jung Yang
Rohan Dharmakumar
Sotirios A. Tsaftaris
2015-10-09T11:03:42Z
2015-10-09T12:31:26Z
http://eprints.imtlucca.it/id/eprint/2767
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2767
2015-10-09T11:03:42Z
Dictionary-based Support Vector Machines for Unsupervised Ischemia Detection at Rest with CP-BOLD Cardiac MRI
Cardiac Phase-resolved Blood-Oxygen-Level-Dependent (CP-BOLD) MRI has been recently demonstrated to detect an ongoing myocardial ischemia at rest, taking advantage of spatio-temporal patterns in myocardial signal intensities, which are modulated by the presence of disease. However, this approach does require significant post-processing to detect the disease and to this day only a few images of the acquisition are used coupled with fixed thresholds to establish biomarkers. We propose a threshold-free unsupervised approach, based on dictionary learning and one-class support vector machines, which can generate a probabilistic ischemia likelihood map.
Marco Bevilacqua
Anirban Mukhopadhyay
Ilkay Oksuz
ilkay.oksuz@imtlucca.it
Cristian Rusu
Rohan Dharmakumar
Sotirios A. Tsaftaris
2015-10-09T10:30:59Z
2015-10-09T11:10:32Z
http://eprints.imtlucca.it/id/eprint/2766
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2766
2015-10-09T10:30:59Z
Data Driven Feature Learning for Representation of Myocardial BOLD MR Images
Cardiac phase-dependent variations of myocardial signal intensities in Cardiac Phase-resolved Blood-Oxygen-Level-Dependent (CP-BOLD) MRI can be exploited for the identification of ischemic territories. This technique requires segmentation to isolate the myocardium. However, spatio-temporal variations of BOLD contrast, prove challenging for existing automated myocardial segmentation techniques, because they were developed for acquisitions where contrast variations in the myocardium are minimal. Appropriate feature learning mechanisms are necessary to best represent appearance and texture in CP-BOLD data. Here we propose and validate a feature learning technique based on multiscale dictionary model that learns to sparsely represent effective patterns under healthy and ischemic conditions.
Anirban Mukhopadhyay
Marco Bevilacqua
Ilkay Oksuz
ilkay.oksuz@imtlucca.it
Rohan Dharmakumar
Sotirios A. Tsaftaris
2015-09-16T11:12:59Z
2015-09-16T11:35:57Z
http://eprints.imtlucca.it/id/eprint/2748
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2748
2015-09-16T11:12:59Z
The significance of image compression in plant phenotyping applications
We are currently witnessing an increasingly higher throughput in image-based plant phenotyping experiments. The majority of imaging data are collected using complex automated procedures and are then post-processed to extract phenotyping-related information. In this article, we show that the image compression used in such procedures may compromise phenotyping results and this needs to be taken into account. We use three illuminating proof-of-concept experiments that demonstrate that compression (especially in the most common lossy JPEG form) affects measurements of plant traits and the errors introduced can be high. We also systematically explore how compression affects measurement fidelity, quantified as effects on image quality, as well as errors in extracted plant visual traits. To do so, we evaluate a variety of image-based phenotyping scenarios, including size and colour of shoots, leaf and root growth. To show that even visual impressions can be used to assess compression effects, we use root system images as examples. Overall, we find that compression has a considerable effect on several types of analyses (whether visual or quantitative) and that proper care is necessary to ensure that this choice does not affect biological findings. In order to avoid or at least minimise introduced measurement errors, for each scenario, we derive recommendations and provide guidelines on how to identify suitable compression options in practice. We also find that certain compression choices can offer beneficial returns in terms of reducing the amount of data storage without compromising phenotyping results. This may enable even higher throughput experiments in the future.
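The core effect, lossy coding perturbing a measured trait, can be demonstrated with a toy sketch: coarse uniform quantisation stands in for JPEG, and the one-row "image" and threshold are fabricated.

```python
def leaf_area(img, thresh=128):
    """Count pixels above a threshold (toy 'projected shoot area' trait)."""
    return sum(1 for row in img for px in row if px > thresh)

def quantize(img, step=64):
    """Coarse uniform quantisation, a crude stand-in for lossy compression."""
    return [[(px // step) * step for px in row] for row in img]
```

Pixels just above the trait threshold are pushed below it by quantisation, so the measured area shrinks after "compression", which is exactly the kind of bias the article quantifies.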
Massimo Minervini
massimo.minervini@imtlucca.it
Hanno Scharr
Sotirios A. Tsaftaris
sotirios.tsaftaris@imtlucca.it
2015-09-16T11:03:23Z
2015-09-16T11:03:23Z
http://eprints.imtlucca.it/id/eprint/2746
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2746
2015-09-16T11:03:23Z
Data-driven feature learning for myocardial segmentation of CP-BOLD MRI
Cardiac Phase-resolved Blood Oxygen-Level-Dependent (CP-BOLD) MR is capable of diagnosing an ongoing ischemia by detecting changes in myocardial intensity patterns at rest, without any contrast or stress agents. Visualizing and detecting these changes require significant post-processing, including myocardial segmentation for isolating the myocardium. However, changes in myocardial intensity patterns and myocardial shape due to the heart’s motion challenge automated standard CINE MR myocardial segmentation techniques, resulting in a significant drop in segmentation accuracy. We hypothesize that the main reason behind this phenomenon is the lack of discernible features. In this paper, a multiscale discriminative dictionary learning approach is proposed for supervised learning and sparse representation of the myocardium, to improve myocardial feature selection. The technique is validated on a challenging dataset of CP-BOLD MR and standard CINE MR acquired in baseline and ischemic conditions across 10 canine subjects. The proposed method significantly outperforms standard cardiac segmentation techniques, including segmentation via registration, level sets and supervised methods for myocardial segmentation.
Anirban Mukhopadhyay
anirban.mukhopadhyay@imtlucca.it
Ilkay Oksuz
ilkay.oksuz@imtlucca.it
Marco Bevilacqua
marco.bevilacqua@imtlucca.it
Sotirios A. Tsaftaris
sotirios.tsaftaris@imtlucca.it
2015-09-04T10:26:42Z
2016-05-05T13:50:59Z
http://eprints.imtlucca.it/id/eprint/2745
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2745
2015-09-04T10:26:42Z
An interactive tool for semi-automated leaf annotation
High throughput plant phenotyping is emerging as a necessary step towards meeting agricultural demands of the future. Central to its success is the development of robust computer vision algorithms that analyze images and extract phenotyping information to be associated with genotypes and environmental conditions for identifying traits suitable for further development. Obtaining leaf-level quantitative data is important towards better understanding this interaction. While certain efforts have been made to obtain such information in an automated fashion, further innovations are necessary. In this paper we present an annotation tool that can be used to semi-automatically segment leaves in images of rosette plants. This tool, which is designed to run both stand-alone and in cloud-based environments, can be used to annotate data directly for the study of plant and leaf growth or to provide annotated datasets for learning-based approaches to extracting phenotypes from images. It relies on an interactive graph-based segmentation algorithm to propagate expert-provided priors (in the form of pixels) to the rest of the image, using the random walk formulation to find a good per-leaf segmentation. To evaluate the tool we use standardized datasets available from the LSC and LCC 2015 challenges, achieving an average leaf segmentation accuracy of almost 97% using scribbles as annotations. The tool and source code are publicly available at http://www.phenotiki.com and as a GitHub repository at https://github.com/phenotiki/LeafAnnotationTool.
Massimo Minervini
massimo.minervini@imtlucca.it
Mario Valerio Giuffrida
valerio.giuffrida@imtlucca.it
Sotirios A. Tsaftaris
sotirios.tsaftaris@imtlucca.it
2015-09-04T10:24:54Z
2016-05-05T13:48:56Z
http://eprints.imtlucca.it/id/eprint/2744
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2744
2015-09-04T10:24:54Z
Learning to Count Leaves in Rosette Plants
Counting the number of leaves in plants is important for plant phenotyping, since it can be used to assess plant growth stages. We propose a learning-based approach for counting leaves in rosette (model) plants. We relate image-based descriptors learned in an unsupervised fashion to leaf counts using a supervised regression model. To take advantage of the circular and coplanar arrangement of leaves and also to introduce scale and rotation invariance, we learn features in a log-polar representation. Image patches extracted in this log-polar domain are provided to K-means, which builds a codebook in an unsupervised manner. Feature codes are obtained by projecting patches on the codebook using the triangle encoding, introducing both sparsity and a specifically designed representation. A global, per-plant image descriptor is obtained by pooling local features in specific regions of the image. Finally, we provide the global descriptors to a support vector regression framework to estimate the number of leaves in a plant. We evaluate our method on datasets of the Leaf Counting Challenge (LCC), containing images of Arabidopsis and tobacco plants. Experimental results show that on average we reduce the absolute counting error by 40% w.r.t. the winner of the 2014 edition of the challenge, a counting-via-segmentation method. When compared to state-of-the-art density-based approaches to counting, about 75% fewer counting errors are observed on Arabidopsis image data. Our findings suggest that it is possible to treat leaf counting as a regression problem, requiring as input only the total leaf count per training image.
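The triangle-encoding step admits a compact numpy sketch: the activation for each codeword is the distance to that centroid subtracted from the mean distance, clipped at zero, so far-away centroids get a zero code and the representation is sparse. The toy patch and centroids below are illustrative.

```python
import numpy as np

def triangle_encode(patches, centroids):
    """Triangle encoding of patches (N x d) against a K-means codebook
    (K x d): code_k = max(0, mean_distance - distance_to_centroid_k)."""
    d = np.linalg.norm(patches[:, None, :] - centroids[None, :, :], axis=2)
    return np.maximum(0.0, d.mean(axis=1, keepdims=True) - d)
```

Pooling these codes over image regions would then give the global per-plant descriptor fed to the support vector regressor.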
Mario Valerio Giuffrida
valerio.giuffrida@imtlucca.it
Massimo Minervini
massimo.minervini@imtlucca.it
Sotirios A. Tsaftaris
sotirios.tsaftaris@imtlucca.it
2015-09-03T14:31:15Z
2015-11-02T09:40:23Z
http://eprints.imtlucca.it/id/eprint/2743
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2743
2015-09-03T14:31:15Z
Combining behavioural types with security analysis
Today’s software systems are highly distributed and interconnected, and they increasingly rely on communication to achieve their goals; due to their societal importance, security and trustworthiness are crucial aspects for the correctness of these systems. Behavioural types, which extend data types by describing also the structured behaviour of programs, are a widely studied approach to the enforcement of correctness properties in communicating systems. This paper offers a unified overview of proposals based on behavioural types which are aimed at the analysis of security properties.
Massimo Bartoletti
Ilaria Castellani
Pierre-Malo Deniélou
Mariangiola Dezani-Ciancaglini
Silvia Ghilezan
Jovanka Pantović
Jorge A. Pérez
Peter Thiemann
Bernardo Toninho
Hugo Torres Vieira
hugo.torresvieira@imtlucca.it
2015-09-03T08:15:31Z
2016-05-06T14:07:46Z
http://eprints.imtlucca.it/id/eprint/2740
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2740
2015-09-03T08:15:31Z
Computationally efficient data and application driven color transforms for the compression and enhancement of images and video
An important step in color image or video coding and enhancement is the linear transformation of input (typically RGB) data into a color space more suitable for compression, subsequent analysis, or visualization. The choice of this transform becomes even more critical when operating in distributed and low-computational power environments, such as visual sensor networks or remote sensing. Data-driven transforms are rarely used due to increased complexity. Most schemes adopt fixed transforms to decorrelate the color channels which are then processed independently. Here we propose two frameworks to find appropriate data-driven transforms in different settings. The first, named approximate Karhunen-Loève Transform (aKLT), performs comparably to the KLT at a fraction of the computational complexity, thus favoring adoption on sensors and resource-constrained devices. Furthermore, we consider an application-aware setting in which an expert system (e.g., a classifier) analyzes imaging data at the receiver's end. In a compression context, distortion may jeopardize the accuracy of the analysis. Since the KLT is not optimal in this setting, we investigate formulations that maximize post-compression expert system performance. Relaxing decorrelation and energy compactness constraints, a second transform can be obtained offline with supervised learning methods. Finally, we propose transforms that accommodate both constraints, and are found using regularized optimization.
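The baseline that the aKLT approximates is the exact KLT: projection of the colour channels onto the eigenvectors of their covariance matrix, which decorrelates them. A minimal numpy sketch on synthetic data (not the paper's pipeline):

```python
import numpy as np

def klt(pixels):
    """Karhunen-Loeve transform of N x 3 colour data: eigenvectors of the
    channel covariance give the decorrelating colour transform."""
    X = pixels - pixels.mean(axis=0)
    cov = X.T @ X / (len(X) - 1)
    w, V = np.linalg.eigh(cov)   # ascending eigenvalues
    T = V[:, ::-1].T             # rows: principal directions, strongest first
    return T, T @ X.T            # transform matrix and decorrelated channels
```

After the transform the channel covariance is diagonal, which is exactly the property fixed transforms only approximate for a given image ensemble.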
Massimo Minervini
massimo.minervini@imtlucca.it
Cristian Rusu
Sotirios A. Tsaftaris
sotirios.tsaftaris@imtlucca.it
2015-07-27T08:02:39Z
2015-07-27T08:02:39Z
http://eprints.imtlucca.it/id/eprint/2734
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2734
2015-07-27T08:02:39Z
Insensitivity to service-time distributions for fluid queueing models
We study fluid limits based on ordinary differential equations (ODEs) for Markovian queueing models where nonexponential service times are fit by appropriate Coxian distributions to match their first and second moments. We focus on a heavy-load regime, whereby the fluid limit of the queue-length process of the nonexponential queue estimates a bottleneck situation. Under this condition, we show that the ODE solution admits a steady state which is insensitive to the service-time distribution: The ODE steady state only depends on the mean service times. By contrast, the steady-state average performance measures computed by Markovian analysis are in general dependent on the higher-order moments of the service-time distribution. A numerical investigation shows that, given any two Markovian queueing models with Coxian-distributed service times with the same mean and different variance, the model with lower variance converges more rapidly to the (same) fluid limit than the one with higher variance.
Max Tschaikowski
max.tschaikowski@imtlucca.it
Mirco Tribastone
mirco.tribastone@imtlucca.it
2015-07-27T07:57:24Z
2015-07-27T07:57:24Z
http://eprints.imtlucca.it/id/eprint/2733
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2733
2015-07-27T07:57:24Z
A partial-differential approximation for spatial stochastic process algebra
We study a spatial framework for process algebra with ordinary differential equation (ODE) semantics. We consider an explicit mobility model over a 2D lattice where processes may walk to neighbouring regions independently, and interact with each other when they are in the same region. The ODE system size grows linearly with the number of regions, hindering the analysis in practice. Assuming an unbiased random walk, we introduce an approximation in terms of a system of reaction-diffusion partial differential equations, of size independent of the lattice granularity. Numerical tests on a spatial version of the generalised Lotka-Volterra model show high accuracy and very competitive runtimes against ODE solutions for fine-grained lattices.
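The diffusion part of such a reaction-diffusion system discretises, on a lattice, to the standard five-point Laplacian update. A toy explicit step with periodic boundaries (grid values and diffusion constant are illustrative) shows the unbiased random walk spreading mass while conserving the total:

```python
def diffuse(grid, D=0.1):
    """One explicit finite-difference step of the diffusion part of a
    reaction-diffusion PDE on a 2D lattice with periodic boundaries."""
    n, m = len(grid), len(grid[0])
    new = [[0.0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            lap = (grid[(i - 1) % n][j] + grid[(i + 1) % n][j]
                   + grid[i][(j - 1) % m] + grid[i][(j + 1) % m]
                   - 4 * grid[i][j])
            new[i][j] = grid[i][j] + D * lap
    return new
```

A reaction term (e.g. Lotka-Volterra interactions) would be added to each cell's update alongside the Laplacian.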
Max Tschaikowski
max.tschaikowski@imtlucca.it
Mirco Tribastone
mirco.tribastone@imtlucca.it
2015-07-24T12:54:54Z
2015-07-24T12:54:54Z
http://eprints.imtlucca.it/id/eprint/2731
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2731
2015-07-24T12:54:54Z
A unified framework for differential aggregations in Markovian process algebra
Fluid semantics for Markovian process algebra have recently emerged as a computationally attractive approximate way of reasoning about the behaviour of stochastic models of large-scale systems. This interpretation is particularly convenient when sequential components characterised by small local state spaces are present in many independent copies. While the traditional Markovian interpretation causes state-space explosion, fluid semantics is independent of the multiplicities of the sequential components present in the model, just associating a single ordinary differential equation (ODE) with each local state. In this paper we analyse the case of a process algebra model inducing a large ODE system. Previous work, known as exact fluid lumpability, requires two symmetries: ODE aggregation is possible for processes that i) are isomorphic and that ii) are present with the same multiplicities. We first relax the latter requirement by introducing the notion of ordinary fluid lumpability, which yields an ODE system where the sum of the aggregated variables is preserved exactly. Then, we consider approximate variants of both notions of lumpability which make nearby processes symmetric after a perturbation of their parameters. We prove that small perturbations yield nearby differential trajectories. We carry out our study in the context of a process algebra that unifies two synchronisation semantics that are well studied in the literature, useful for the modelling of computer systems and chemical networks, respectively. In both cases, we provide numerical evidence which shows that, in practice, many heterogeneous processes can be aggregated with negligible errors.
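The sum-preservation property of ordinary lumpability can be illustrated on a linear toy system: two local states with identical rate structure are aggregated into a single ODE for their sum, and the aggregated trajectory tracks the sum of the full trajectories. Rates and initial values below are made up.

```python
def euler(f, x0, h=0.001, steps=5000):
    """Plain explicit Euler integration of dx/dt = f(x)."""
    x = list(x0)
    for _ in range(steps):
        x = [xi + h * di for xi, di in zip(x, f(x))]
    return x

k, g = 1.0, 2.0
# Full system: two symmetric states with the same dynamics.
full = euler(lambda x: [-k * x[0] + g, -k * x[1] + g], [5.0, 1.0])
# Lumped system: one ODE for the aggregate y = x1 + x2.
lumped = euler(lambda y: [-k * y[0] + 2 * g], [6.0])
```

Because the dynamics are symmetric, the aggregation is exact here; the approximate notions in the paper bound the error when the rates are only nearly equal.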
Max Tschaikowski
max.tschaikowski@imtlucca.it
Mirco Tribastone
mirco.tribastone@imtlucca.it
2015-07-24T12:44:02Z
2016-04-13T08:34:53Z
http://eprints.imtlucca.it/id/eprint/2730
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2730
2015-07-24T12:44:02Z
Approximate reduction of heterogenous nonlinear models with differential hulls
We present a model reduction technique for a class of nonlinear ordinary differential equation (ODE) models of heterogeneous systems, where heterogeneity is expressed in terms of classes of state variables having the same dynamics structurally, but which are characterized by distinct parameters. To this end, we first build a system of differential inequalities that provides lower and upper bounds for each original state variable, but such that it is homogeneous in its parameters. Then, we use two methods for exact aggregation of ODEs to exploit this homogeneity, yielding a smaller model of size independent of the number of heterogeneous classes. We apply this technique to two case studies: a multiclass queuing network and a model of epidemics spread.
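A differential hull for a decay system with heterogeneous rates can be sketched in a few lines: the bound system replaces the class-specific rates by their extremes, so its size no longer depends on the number of classes. Rates, horizon and initial values are illustrative.

```python
def euler(f, x0, h=0.001, steps=2000):
    """Plain explicit Euler integration of dx/dt = f(x)."""
    x = list(x0)
    for _ in range(steps):
        x = [xi + h * di for xi, di in zip(x, f(x))]
    return x

rates = [0.8, 1.0, 1.3]          # heterogeneous classes, same structure
k_lo, k_hi = min(rates), max(rates)
# Full system: one decaying ODE per class.
full = euler(lambda x: [-k * xi for k, xi in zip(rates, x)], [1.0, 1.0, 1.0])
# Two-variable hull: [lower bound, upper bound] valid for every class.
hull = euler(lambda b: [-k_hi * b[0], -k_lo * b[1]], [1.0, 1.0])
```

Adding further classes with rates inside [0.8, 1.3] leaves the two-variable hull unchanged, which is the size-independence the technique exploits.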
Max Tschaikowski
max.tschaikowski@imtlucca.it
Mirco Tribastone
mirco.tribastone@imtlucca.it
2015-07-22T10:28:33Z
2015-09-22T08:14:50Z
http://eprints.imtlucca.it/id/eprint/2729
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2729
2015-07-22T10:28:33Z
On Expressiveness and Behavioural Theory of Attribute-based Communication
Attribute-based communication is an interesting alternative to broadcast and binary communication when providing abstract models for the so-called Collective Adaptive Systems, which consist of a large number of interacting components that dynamically adjust and combine their behavior to achieve specific goals. A basic process calculus, named AbC, is introduced, whose primary primitive for interaction is attribute-based communication. An AbC system consists of a set of parallel components, each of which is equipped with a set of attributes. Communication takes place in an implicit multicast fashion, and interactions among components are dynamically established by taking into account "connections" as determined by predicates over the attributes exposed by components. First, the syntax and the semantics of AbC are presented; then the expressiveness and effectiveness of the calculus are demonstrated both in terms of the ability to model scenarios featuring collaboration, reconfiguration, and adaptation, and of the possibility of encoding a process calculus for broadcasting channel-based communication and other communication paradigms. Behavioral equivalences for AbC are introduced for establishing formal relationships between different descriptions of the same system.
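The attribute-based multicast at the heart of AbC can be mimicked with a small Python sketch (class, attribute and function names are invented for illustration): a send reaches exactly those components whose attribute environments satisfy the sender's predicate, with no channels named.

```python
class Component:
    """A parallel component carrying an attribute environment and a mailbox."""
    def __init__(self, **attributes):
        self.attributes = attributes
        self.inbox = []

def abc_send(components, sender, msg, predicate):
    """Implicit multicast: deliver msg to every other component whose
    attributes satisfy the predicate (toy model of AbC-style interaction)."""
    for c in components:
        if c is not sender and predicate(c.attributes):
            c.inbox.append(msg)
```

For example, a base station can address "all robots with low battery" without knowing their identities, and reconfiguration amounts to components updating their own attributes.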
Yehia Moustafa Abd Alrahman
yehia.abdalrahman@imtlucca.it
Rocco De Nicola
r.denicola@imtlucca.it
Michele Loreti
2015-06-26T12:53:49Z
2015-06-30T08:21:38Z
http://eprints.imtlucca.it/id/eprint/2720
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2720
2015-06-26T12:53:49Z
Dynamic role authorization in multiparty conversations
Protocol specifications often identify the roles involved in communications. In multiparty protocols that involve task delegation it is often useful to consider settings in which different sites may act on behalf of a single role. It is then crucial to control the roles that the different parties are authorized to represent, including the case in which role authorizations are determined only at runtime. Building on previous work on conversation types with flexible role assignment, here we report initial results on a typed framework for the analysis of multiparty communications with dynamic role authorization and delegation. In the underlying process model, communication prefixes are annotated with role authorizations and authorizations can be passed around. We extend the conversation type system so as to statically distinguish processes that never incur in authorization errors. The proposed static discipline guarantees that processes are always authorized to communicate on behalf of an intended role, also covering the case in which authorizations are dynamically passed around in messages.
Silvia Ghilezan
Svetlana Jakšić
Jovanka Pantović
Jorge A. Pérez
Hugo Torres Vieira
hugo.torresvieira@imtlucca.it
2015-06-26T12:53:33Z
2015-06-26T12:53:33Z
http://eprints.imtlucca.it/id/eprint/2721
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2721
2015-06-26T12:53:33Z
Extensionality of spatial observations in distributed systems
We discuss the tensions between intensionality and extensionality of spatial observations in distributed systems, showing that there are natural models where extensional observational equivalences may be characterized by spatial logics, including the composition and void operators. Our results support the claim that spatial observations do not need to be always considered intensional, even if expressive enough to talk about the structure of systems. For simplicity, our technical development is based on a minimalist process calculus, that already captures the main features of distributed systems, namely local synchronous communication, local computation, asynchronous remote communication, and partial failures.
Luis Caires
Hugo Torres Vieira
hugo.torresvieira@imtlucca.it
2015-06-26T12:53:17Z
2015-06-26T12:53:17Z
http://eprints.imtlucca.it/id/eprint/2722
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2722
2015-06-26T12:53:17Z
An observational model for spatial logics
Spatiality is an important aspect of distributed systems because their computations depend both on the dynamic behaviour and on the structure of their components. Spatial logics have been proposed as the formal device for expressing spatial properties of systems.
We define CCS∥, a CCS-like calculus whose semantics allows one to observe spatial aspects of systems, on top of which we define models of the spatial logic. Our alternative definition of models is proved equivalent to the standard one. Furthermore, logical equivalence is characterized in terms of the bisimilarity of CCS∥.
Emilio Tuosto
Hugo Torres Vieira
hugo.torresvieira@imtlucca.it
2015-06-26T12:52:56Z
2015-06-26T12:52:56Z
http://eprints.imtlucca.it/id/eprint/2723
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2723
2015-06-26T12:52:56Z
Checking for choreography conformance using spatial logic model-checking
We illustrate with a simple example how the Spatial Logic Model Checker can be used to check choreography conformance properties.
Luis Caires
David Tavares Sousa
Hugo Torres Vieira
hugo.torresvieira@imtlucca.it
2015-06-25T12:52:14Z
2015-06-25T12:52:14Z
http://eprints.imtlucca.it/id/eprint/2719
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2719
2015-06-25T12:52:14Z
Type-based access control in data-centric systems
Data-centric multi-user systems, such as web applications, require flexible yet fine-grained data security mechanisms. Such mechanisms are usually enforced by a specially crafted security layer, which adds extra complexity and often leads to error-prone coding, easily causing severe security breaches. In this paper, we introduce a programming language approach for enforcing access control policies to data in data-centric programs by static typing. Our development is based on the general concept of refinement type, but extended so as to address realistic and challenging scenarios of permission-based data security, in which policies dynamically depend on the database state, and flexible combinations of column- and row-level protection of data are necessary. We state and prove soundness and safety of our type system, showing that well-typed programs never break the declared data access control policies.
Luis Caires
Jorge A. Pérez
João C. Seco
Hugo Torres Vieira
hugo.torresvieira@imtlucca.it
Lúcio Ferrão
2015-06-25T12:47:44Z
2015-06-25T12:47:44Z
http://eprints.imtlucca.it/id/eprint/2717
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2717
2015-06-25T12:47:44Z
Automotive and finance case studies in the conversation calculus
We describe the encoding of the Car Break scenario of the SENSORIA Automotive case study and of the Credit Request scenario of the SENSORIA Finance case study using the Conversation Calculus (CSCC). These scenarios consist of an orchestration of services and service clients which are typefully encoded here in a modular way. Namely the latter scenario consists of a workflow involving different actors: a client willing to submit a credit request, a bank employee, and its supervisor. We show how the workflow is well described in the type assigned to the processes implementing it. We first informally describe the CSCC calculus, and then show how the two scenarios can be encoded using the CSCC calculus and the corresponding typing.
Luis Caires
João Costa Seco
Hugo Torres Vieira
hugo.torresvieira@imtlucca.it
2015-06-16T15:22:40Z
2015-06-16T15:22:40Z
http://eprints.imtlucca.it/id/eprint/2708
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2708
2015-06-16T15:22:40Z
Image Analysis: The New Bottleneck in Plant Phenotyping [Applications Corner]
Plant phenotyping is the identification of effects on the phenotype (i.e., the plant appearance and performance) as a result of genotype differences (i.e., differences in the genetic code) and the environmental conditions to which a plant has been exposed [1]–[3]. According to the Food and Agriculture Organization of the United Nations, large-scale experiments in plant phenotyping are a key factor in meeting the agricultural needs of the future to feed the world and provide biomass for energy, while using less water, land, and fertilizer under a constantly evolving environment due to climate change. Working on model plants (such as Arabidopsis), combined with remarkable advances in genotyping, has revolutionized our understanding of biology but has also accelerated the need for precision and automation in phenotyping, favoring approaches that provide quantifiable phenotypic information that can be better used to find associations with the genotype [4]. While early on the collection of phenotypes was manual, noninvasive imaging-based methods are now increasingly being utilized [5], [6]. However, the rate at which phenotypes are extracted in the field or in the lab is not matching the speed of genotyping and is creating a bottleneck.
Massimo Minervini
massimo.minervini@imtlucca.it
Hanno Scharr
Sotirios A. Tsaftaris
sotirios.tsaftaris@imtlucca.it
2015-06-15T09:27:55Z
2015-06-15T09:27:55Z
http://eprints.imtlucca.it/id/eprint/2707
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2707
2015-06-15T09:27:55Z
A uniform definition of stochastic process calculi
We introduce a unifying framework to provide the semantics of process algebras, including their quantitative variants useful for modeling quantitative aspects of behaviors. The unifying framework is then used to describe some of the most representative stochastic process algebras. This provides general and clear support for an understanding of their similarities and differences. The framework is based on State-to-Function Labeled Transition Systems, FuTSs for short, state transition structures where each transition is a triple of the form (s, α, 𝒫), consisting of the source state, the transition label, and a continuation function associating a value (such as a rate or a probability) with each reachable state.
Rocco De Nicola
r.denicola@imtlucca.it
Diego Latella
Michele Loreti
Mieke Massink
2015-06-15T09:14:09Z
2015-06-15T09:14:09Z
http://eprints.imtlucca.it/id/eprint/2706
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2706
2015-06-15T09:14:09Z
Relating strong behavioral equivalences for processes with nondeterminism and probabilities
We present a comparison of behavioral equivalences for nondeterministic and probabilistic processes whose activities are all observable. In particular, we consider trace-based, testing, and bisimulation-based equivalences. For each of them, we examine the discriminating power of three variants stemming from three approaches that differ for the way probabilities of events are compared when nondeterministic choices are resolved via schedulers. The first approach compares two resolutions with respect to the probability distributions of all considered events. The second approach requires that the probabilities of the set of events of a resolution be individually matched by the probabilities of the same events in possibly different resolutions. The third approach only compares the extremal probabilities of each event stemming from the different resolutions. The three approaches have very reasonable motivations and, when applied to fully nondeterministic processes or fully probabilistic processes, give rise to the classical well studied relations. We shall see that, for processes with nondeterminism and probability, they instead give rise to a much wider variety of behavioral relations, whose discriminating power is thoroughly investigated here in the case of deterministic schedulers.
Marco Bernardo
Rocco De Nicola
r.denicola@imtlucca.it
Michele Loreti
2015-06-15T08:55:40Z
2015-06-15T08:55:40Z
http://eprints.imtlucca.it/id/eprint/2705
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2705
2015-06-15T08:55:40Z
Editor's Note
Rocco De Nicola
r.denicola@imtlucca.it
2015-06-15T08:33:16Z
2015-06-15T08:33:16Z
http://eprints.imtlucca.it/id/eprint/2703
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2703
2015-06-15T08:33:16Z
Revisiting bisimilarity and its modal logic for nondeterministic and probabilistic processes
The logic PML is a probabilistic version of Hennessy–Milner logic introduced by Larsen and Skou to characterize bisimilarity over probabilistic processes without internal nondeterminism. In this paper, two alternative interpretations of PML over nondeterministic and probabilistic processes as models are considered, and two new bisimulation-based equivalences that are in full agreement with those interpretations are provided. The new equivalences include as coarsest congruences the two bisimilarities for nondeterministic and probabilistic processes proposed by Segala and Lynch. The latter equivalences are instead known to agree with two versions of Hennessy–Milner logic extended with an additional probabilistic operator interpreted over state distributions in place of individual states. The new interpretations of PML and the corresponding new bisimilarities are thus the first ones to offer a uniform framework for reasoning on processes that are purely nondeterministic or reactive probabilistic or that mix nondeterminism and probability in an alternating/nonalternating way.
Marco Bernardo
Rocco De Nicola
r.denicola@imtlucca.it
Michele Loreti
2015-06-15T08:33:05Z
2015-06-15T08:33:05Z
http://eprints.imtlucca.it/id/eprint/2704
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2704
2015-06-15T08:33:05Z
Revisiting Trace and Testing Equivalences for Nondeterministic and Probabilistic Processes
Two of the most studied extensions of trace and testing equivalences to nondeterministic and probabilistic processes induce distinctions that have been questioned and lack properties that are desirable. Probabilistic trace-distribution equivalence differentiates systems that can perform the same set of traces with the same probabilities, and is not a congruence for parallel composition. Probabilistic testing equivalence, which relies only on extremal success probabilities, is backward compatible with testing equivalences for restricted classes of processes, such as fully nondeterministic processes or generative/reactive probabilistic processes, only if specific sets of tests are admitted. In this paper, new versions of probabilistic trace and testing equivalences are presented for the general class of nondeterministic and probabilistic processes. The new trace equivalence is coarser because it compares execution probabilities of single traces instead of entire trace distributions, and turns out to be compositional. The new testing equivalence requires matching all resolutions of nondeterminism on the basis of their success probabilities, rather than comparing only extremal success probabilities, and considers success probabilities in a trace-by-trace fashion, rather than cumulatively on entire resolutions. It is fully backward compatible with testing equivalences for restricted classes of processes; as a consequence, the trace-by-trace approach uniformly captures the standard probabilistic testing equivalences for generative and reactive probabilistic processes. The paper discusses the new equivalences in full detail and provides a simple spectrum that relates them with existing ones in the setting of nondeterministic and probabilistic processes.
Marco Bernardo
Rocco De Nicola
r.denicola@imtlucca.it
Michele Loreti
2015-06-08T13:19:51Z
2015-06-08T13:19:51Z
http://eprints.imtlucca.it/id/eprint/2702
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2702
2015-06-08T13:19:51Z
Tail-scope: Using friends to estimate heavy tails of degree distributions in large-scale complex networks
Many complex networks in natural and social phenomena have often been characterized by heavy-tailed degree distributions. However, due to the rapidly growing size of network data and privacy concerns about using these data, it becomes more difficult to analyze complete data sets. Thus, it is crucial to devise effective and efficient methods for estimating the heavy tails of degree distributions in large-scale networks using only local information from a small fraction of sampled nodes. Here we propose a tail-scope method based on the local observational bias of the friendship paradox. We show that the tail-scope method outperforms uniform node sampling for estimating heavy tails of degree distributions, while the opposite tendency is observed in the range of small degrees. In order to take advantage of both sampling methods, we devise a hybrid method that successfully recovers the whole range of degree distributions. Our tail-scope method shows how structural heterogeneities of large-scale complex networks can be used to effectively reveal the network structure with only limited local information.
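The friendship-paradox bias that tail-scope exploits can be illustrated with a small sketch (illustrative only, not the authors' implementation): the expected degree of a random end of a random edge exceeds the expected degree of a uniformly sampled node whenever degrees are heterogeneous, so neighbor-based sampling over-represents the hubs that populate the heavy tail.

```python
# Illustrative sketch of the friendship-paradox bias behind tail-scope
# sampling (not the paper's code): edge-end sampling is biased toward
# high-degree nodes compared with uniform node sampling.

def mean_degree_uniform(adj):
    """Expected degree of a uniformly sampled node."""
    return sum(len(ns) for ns in adj.values()) / len(adj)

def mean_degree_edge_end(adj):
    """Expected degree of a uniformly sampled end of a uniformly sampled edge."""
    edges = [(u, v) for u, ns in adj.items() for v in ns if u < v]
    total = sum(len(adj[u]) + len(adj[v]) for u, v in edges)
    return total / (2 * len(edges))

# A star graph: one hub of degree 5, five leaves of degree 1.
star = {0: {1, 2, 3, 4, 5}, 1: {0}, 2: {0}, 3: {0}, 4: {0}, 5: {0}}
print(mean_degree_uniform(star))   # 10/6 ~ 1.67
print(mean_degree_edge_end(star))  # (5+1)/2 = 3.0
```

On the star, edge-end sampling roughly doubles the expected degree; on heavy-tailed networks the gap is what makes the tail estimable from few samples.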
Young-Ho Eom
youngho.eom@imtlucca.it
Hang-Hyun Jo
2015-05-21T10:00:39Z
2016-04-07T09:49:34Z
http://eprints.imtlucca.it/id/eprint/2698
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2698
2015-05-21T10:00:39Z
Default Cascades in Complex Networks: Topology and Systemic Risk
The recent crisis has brought to the fore a crucial question that still remains open: what would be the optimal architecture of financial systems? We investigate the stability of several benchmark topologies under simple default-cascade dynamics in bank networks. We analyze the interplay of several crucial drivers, i.e., network topology, banks' capital ratios, market illiquidity, and random vs. targeted shocks. We find that, in general, topology matters only – but substantially – when the market is illiquid. No single topology is always superior to others. In particular, scale-free networks can be both more robust and more fragile than homogeneous architectures. This finding has important policy implications. We also apply our methodology to a comprehensive dataset of an interbank market from 1999 to 2011.
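A minimal version of such default-cascade dynamics can be sketched as follows (a toy model for illustration, not the paper's calibrated one): a bank defaults once its losses from defaulted counterparties exceed its capital, and defaults propagate until a fixed point is reached.

```python
def default_cascade(exposures, capital, initially_shocked):
    """Propagate defaults to a fixed point (toy model, not the paper's).

    exposures[i][j]: loss bank i suffers if bank j defaults.
    capital[i]:      loss-absorbing capital of bank i.
    Returns the set of defaulted banks.
    """
    defaulted = set(initially_shocked)
    changed = True
    while changed:
        changed = False
        for i in range(len(capital)):
            if i in defaulted:
                continue
            loss = sum(exposures[i].get(j, 0.0) for j in defaulted)
            if loss > capital[i]:   # losses exceed capital: bank i defaults
                defaulted.add(i)
                changed = True
    return defaulted

# Chain 0 <- 1 <- 2: bank 1 lent to bank 0, bank 2 lent to bank 1.
exposures = [{}, {0: 1.0}, {1: 1.0}]
capital = [0.5, 0.5, 10.0]
print(default_cascade(exposures, capital, {0}))  # {0, 1}: bank 2 absorbs the loss
```

Varying the exposure topology and capital ratios in such a loop is, in miniature, the kind of comparison the paper runs across benchmark topologies.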
Tarik Roukny
Hugues Bersini
Hugues Pirotte
Guido Caldarelli
guido.caldarelli@imtlucca.it
Stefano Battiston
2015-05-19T10:05:19Z
2015-05-19T10:05:19Z
http://eprints.imtlucca.it/id/eprint/2683
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2683
2015-05-19T10:05:19Z
Trend of Narratives in the Age of Misinformation
Social media enabled a direct path from producer to consumer of contents, changing the way users get informed, debate, and shape their worldviews. Such disintermediation weakened consensus on socially relevant issues in favor of rumors and mistrust, and fomented conspiracy thinking -- e.g., chem-trails inducing global warming, the link between vaccines and autism, or the New World Order conspiracy.
In this work, we study through a thorough quantitative analysis how different conspiracy topics are consumed on the Italian Facebook. By means of a semi-automatic topic extraction strategy, we show that the most discussed contents semantically refer to four specific categories: environment, diet, health, and geopolitics. We find similar patterns by comparing users' activity (likes and comments) on posts belonging to different semantic categories. However, if we focus on the lifetime -- i.e., the distance in time between the first and the last comment for each user -- we notice a remarkable difference across narratives -- e.g., users polarized on geopolitics are more persistent in commenting, whereas the least persistent are those focused on diet-related topics. Finally, we model users' mobility across topics, finding that the more active a user is, the more likely they are to join all topics. Once inside a conspiracy narrative, users tend to embrace the overall corpus.
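The lifetime measure used here, the distance in time between a user's first and last comment, is straightforward to compute; a minimal sketch (illustrative, not the authors' pipeline; the sample data are hypothetical):

```python
from datetime import datetime

def user_lifetimes(comments):
    """Per-user lifetime: time between a user's first and last comment.

    comments: iterable of (user, timestamp) pairs, in any order.
    """
    first, last = {}, {}
    for user, ts in comments:
        if user not in first or ts < first[user]:
            first[user] = ts
        if user not in last or ts > last[user]:
            last[user] = ts
    return {u: last[u] - first[u] for u in first}

# Hypothetical sample data, not from the study's dataset.
comments = [
    ("alice", datetime(2014, 1, 1)),
    ("alice", datetime(2014, 3, 1)),
    ("bob", datetime(2014, 2, 1)),
]
lifetimes = user_lifetimes(comments)
print(lifetimes["alice"].days)  # 59
```

Grouping such lifetimes by the semantic category of the posts a user engages with gives the persistence comparison described in the abstract.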
Alessandro Bessi
Fabiana Zollo
fabiana.zollo@imtlucca.it
Michela Del Vicario
michela.delvicario@imtlucca.it
Antonio Scala
Guido Caldarelli
guido.caldarelli@imtlucca.it
Walter Quattrociocchi
walter.quattrociocchi@imtlucca.it
2015-05-19T09:35:54Z
2015-10-28T14:47:41Z
http://eprints.imtlucca.it/id/eprint/2681
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2681
2015-05-19T09:35:54Z
Model Predictive Control for Linear Impulsive Systems
Linear impulsive control systems have been extensively studied with respect to their equilibrium points, which, in most cases, are none other than the origin. However, the trajectory of an impulsive system cannot be stabilized to arbitrary desired points, hindering the use of such systems in a great many applications. In this paper, we study the equilibrium of linear impulsive systems with respect to target sets. We properly extend the notion of invariance and design stabilizing model predictive controllers (MPC). Finally, we apply the proposed methodology to control the intravenous bolus administration of Lithium.
Pantelis Sopasakis
pantelis.sopasakis@imtlucca.it
Panagiotis Patrinos
panagiotis.patrinos@imtlucca.it
Haralambos Sarimveis
Alberto Bemporad
alberto.bemporad@imtlucca.it
2015-05-18T15:45:57Z
2015-11-02T13:13:29Z
http://eprints.imtlucca.it/id/eprint/2680
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2680
2015-05-18T15:45:57Z
Identifying geographic clusters: A network analytic approach
In recent years there has been a growing interest in the role of networks and clusters in the global economy. Despite being a popular research topic in economics, sociology and urban studies, geographical clustering of human activity has often been studied by means of predetermined geographical units, such as administrative divisions and metropolitan areas. This approach is intrinsically time-invariant and does not allow one to differentiate between different activities. Our goal in this paper is to present a new methodology for identifying clusters that can be applied to different empirical settings. We use a graph approach based on k-shell decomposition to analyze world biomedical research clusters based on PubMed scientific publications. We identify research institutions and locate their activities in geographical clusters. Leading areas of scientific production and their top performing research institutions are consistently identified at different geographic scales.
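The k-shell decomposition the method builds on can be computed by iteratively peeling minimum-degree nodes; a minimal sketch (illustrative only, not the paper's pipeline):

```python
def core_numbers(adj):
    """k-shell index of every node, by repeatedly peeling the
    minimum-degree node (standard k-core decomposition).

    adj: dict mapping node -> set of neighbours (undirected graph).
    """
    deg = {v: len(ns) for v, ns in adj.items()}
    core, remaining, k = {}, set(adj), 0
    while remaining:
        v = min(remaining, key=deg.get)   # current minimum-degree node
        k = max(k, deg[v])                # shell index never decreases
        core[v] = k
        remaining.remove(v)
        for u in adj[v]:
            if u in remaining:
                deg[u] -= 1               # peel v from the graph
    return core

# Triangle a-b-c with a pendant node d attached to a:
adj = {"a": {"b", "c", "d"}, "b": {"a", "c"}, "c": {"a", "b"}, "d": {"a"}}
print(core_numbers(adj))  # d lies in the 1-shell; a, b, c form the 2-core
```

Applied to a co-location or collaboration graph, nodes with high shell index mark the dense cores that the paper identifies as leading clusters.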
Roberto Catini
roberto.catini@imtlucca.it
Dmytro Karamshuk
dmytro.karamshuk@imtlucca.it
Orion Penner
orion.penner@imtlucca.it
Massimo Riccaboni
massimo.riccaboni@imtlucca.it
2015-05-12T10:23:59Z
2015-05-12T10:23:59Z
http://eprints.imtlucca.it/id/eprint/2673
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2673
2015-05-12T10:23:59Z
Sloshing-aware attitude control of impulsively actuated spacecraft
Upper stages of launchers sometimes drift, with the main engine switched off, for a long period of time until re-ignition and subsequent payload release. During this period a large amount of propellant is still in the tank, and the motion of the fluid (sloshing) has an impact on the attitude of the stage. For this flight phase the classical spring/damper or pendulum models cannot be applied. A more elaborate sloshing-aware model, involving a time-varying inertia tensor, is described in the paper.
Using principles of hybrid systems theory we model the minimum impulse bit (MIB) effect, that is, the minimum torque that can be exerted by the thrusters. We design a hybrid model predictive control scheme for the attitude control of a launcher during its long coasting period, aiming at minimising the actuation count of the thrusters.
Pantelis Sopasakis
pantelis.sopasakis@imtlucca.it
Daniele Bernardini
daniele.bernardini@imtlucca.it
Hans Strauch
Samir Bennani
Alberto Bemporad
alberto.bemporad@imtlucca.it
2015-05-04T15:19:39Z
2015-05-04T15:36:21Z
http://eprints.imtlucca.it/id/eprint/2664
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2664
2015-05-04T15:19:39Z
Causal-Consistent Reversibility in a Tuple-Based Language
Causal-consistent reversibility is a natural way of undoing concurrent computations. We study causal-consistent reversibility in the context of µKlaim, a formal coordination language based on distributed tuple spaces. We consider both uncontrolled reversibility, suitable for studying the basic properties of the reversibility mechanism, and controlled reversibility based on a rollback operator, more suitable for programming applications. The causality structure of the language, and thus the definition of its reversible semantics, differs from all the reversible languages in the literature because of its generative communication paradigm. In particular, the reversible behavior of the µKlaim read primitive, which reads a tuple without consuming it, cannot be matched using channel-based communication. We illustrate the reversible extensions of µKlaim on a simple, but realistic, application scenario.
Elena Giachino
Ivan Lanese
Claudio Antares Mezzina
claudio.mezzina@imtlucca.it
Francesco Tiezzi
2015-04-07T14:00:07Z
2015-04-07T14:00:07Z
http://eprints.imtlucca.it/id/eprint/2657
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2657
2015-04-07T14:00:07Z
A dual gradient-projection algorithm for model predictive control in fixed-point arithmetic
Although linear Model Predictive Control has gained increasing popularity for controlling dynamical systems subject to constraints, the main barrier that prevents its widespread use in embedded applications is the need to solve a Quadratic Program (QP) in real-time. This paper proposes a dual gradient projection (DGP) algorithm specifically tailored for implementation on fixed-point hardware. A detailed convergence rate analysis is presented in the presence of round-off errors due to fixed-point arithmetic. Based on these results, concrete guidelines are provided for selecting the minimum number of fractional and integer bits that guarantee convergence to a suboptimal solution within a pre-specified tolerance, therefore reducing the cost and power consumption of the hardware device.
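The DGP iteration itself is simple; a floating-point sketch for a dense QP, min 0.5 x'Qx + c'x subject to Ax <= b, is given below. This is an illustrative sketch only: the paper's contribution is the fixed-point round-off analysis and bit-width selection, which this float version does not reproduce.

```python
import numpy as np

def dual_gradient_projection(Q, c, A, b, iters=2000):
    """Solve min 0.5 x'Qx + c'x  s.t.  Ax <= b  by projected gradient
    ascent on the dual (float sketch; the paper analyses this iteration
    under fixed-point arithmetic)."""
    Qinv = np.linalg.inv(Q)
    L = np.linalg.norm(A @ Qinv @ A.T, 2)   # Lipschitz constant of the dual gradient
    lam = np.zeros(A.shape[0])              # dual multipliers, lam >= 0
    for _ in range(iters):
        x = -Qinv @ (c + A.T @ lam)         # primal minimiser for current duals
        lam = np.maximum(0.0, lam + (A @ x - b) / L)  # project onto lam >= 0
    return x, lam

# Toy QP: min 0.5 x^2 - 2x  s.t.  x <= 1  ->  x* = 1, lam* = 1
Q = np.array([[1.0]]); c = np.array([-2.0])
A = np.array([[1.0]]); b = np.array([1.0])
x, lam = dual_gradient_projection(Q, c, A, b)
print(round(float(x[0]), 3), round(float(lam[0]), 3))  # 1.0 1.0
```

Every operation in the loop is a matrix-vector product, an addition, or a clamp at zero, which is exactly why the iteration maps well onto fixed-point hardware once the round-off error is bounded.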
Panagiotis Patrinos
panagiotis.patrinos@imtlucca.it
Alberto Guiggiani
alberto.guiggiani@imtlucca.it
Alberto Bemporad
alberto.bemporad@imtlucca.it
2015-04-07T13:53:35Z
2015-10-28T14:49:34Z
http://eprints.imtlucca.it/id/eprint/2656
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2656
2015-04-07T13:53:35Z
A multiparametric quadratic programming algorithm with polyhedral computations based on nonnegative least squares
Model Predictive Control (MPC) is one of the most successful techniques adopted in industry to control multivariable systems under constraints on input and output variables. To circumvent the main drawback of MPC, i.e., the need to solve a Quadratic Program (QP) online to compute the control action, explicit MPC was proposed in the past to precompute the control law offline using multiparametric QP (mpQP). The resulting form of the MPC law is piecewise affine, which is extremely easy to code, can be computed online by simple arithmetic operations, and requires a maximum number of iterations that can be exactly determined a priori. On the other hand, the offline computations to solve the mpQP problem require detecting emptiness, full-dimensionality, and minimal hyperplane representations of polyhedra, and other computational geometry operations. While most of the existing methods solve such operations via linear programming, the approach proposed in this paper relies on a nonnegative least squares (NNLS) solver that is very simple to code, fast to execute, and provides solutions up to machine precision. In addition, the new approach exploits QP duality to identify and construct critical regions and to handle degeneracy issues.
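One of the polyhedral operations mentioned, detecting emptiness of {x : Ax <= b}, reduces via Farkas' lemma to an NNLS problem: the polyhedron is empty iff some y >= 0 satisfies A'y = 0 and b'y = -1, i.e. iff the NNLS residual below is zero. The sketch is self-contained and illustrative only: it uses projected gradient in place of a dedicated active-set NNLS solver, and is not the paper's algorithm.

```python
import numpy as np

def nnls_residual(M, d, iters=5000):
    """min_{y >= 0} ||M y - d||  via projected gradient (illustrative;
    an active-set NNLS solver would be used in practice)."""
    step = 1.0 / np.linalg.norm(M, 2) ** 2
    y = np.zeros(M.shape[1])
    for _ in range(iters):
        y = np.maximum(0.0, y - step * (M.T @ (M @ y - d)))
    return np.linalg.norm(M @ y - d)

def is_empty(A, b, tol=1e-6):
    """Farkas' lemma: {x : Ax <= b} is empty iff there is y >= 0 with
    A'y = 0 and b'y = -1, i.e. iff the NNLS residual is zero."""
    M = np.vstack([A.T, b.reshape(1, -1)])
    d = np.concatenate([np.zeros(A.shape[1]), [-1.0]])
    return bool(nnls_residual(M, d) < tol)

A = np.array([[1.0], [-1.0]])
print(is_empty(A, np.array([0.0, -1.0])))  # True:  x <= 0 and x >= 1
print(is_empty(A, np.array([1.0, 0.0])))   # False: 0 <= x <= 1
```

Full-dimensionality and redundant-hyperplane tests admit similar NNLS reformulations, which is why a single precise NNLS routine can replace a sequence of LP calls.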
Alberto Bemporad
alberto.bemporad@imtlucca.it
2015-03-26T11:45:05Z
2015-03-26T11:45:05Z
http://eprints.imtlucca.it/id/eprint/2447
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2447
2015-03-26T11:45:05Z
Robust pole placement for plants with semialgebraic parametric uncertainty
In this paper we address the problem of robust pole placement for linear-time-invariant systems whose uncertain parameters are assumed to belong to a semialgebraic region. A dynamic controller is designed in order to constrain the coefficients of the closed-loop characteristic polynomial within prescribed intervals. Two main topics arising from the problem of robust pole placement are tackled by means of polynomial optimization. First, necessary conditions on the plant parameters for the existence of a robust controller are given. Then, the set of all admissible robust controllers is sought. Convex relaxation techniques based on sum-of-square decomposition of positive polynomials are used to efficiently solve the formulated optimization problems through semidefinite programming techniques.
Vito Cerone
Dario Piga
dario.piga@imtlucca.it
Diego Regruto
2015-03-26T11:36:44Z
2015-03-26T11:36:44Z
http://eprints.imtlucca.it/id/eprint/2439
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2439
2015-03-26T11:36:44Z
Set-membership EIV identification through LMI relaxation techniques
In this paper the Set-membership Error-In-Variables (EIV) identification problem is considered, that is, the identification of linear dynamic systems when both the output and the input measurements are corrupted by bounded noise. A new approach for the computation of the Parameters Uncertainty Intervals (PUIs) is discussed. First, the problem is formulated in terms of non-convex semi-algebraic optimization. Then, a Linear-Matrix-Inequalities relaxation technique is presented to compute parameter bounds by means of convex optimization. Finally, convergence properties and computational complexity of the given algorithms are discussed. Advantages of the proposed technique with respect to previously published ones are discussed both theoretically and by means of a simulated example.
Vito Cerone
Dario Piga
dario.piga@imtlucca.it
Diego Regruto
2015-03-11T11:18:06Z
2015-07-24T12:26:41Z
http://eprints.imtlucca.it/id/eprint/2633
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2633
2015-03-11T11:18:06Z
Supporting performance awareness in autonomous ensembles
The ASCENS project works with systems of self-aware, self-adaptive and self-expressive ensembles. Performance awareness represents a concern that cuts across multiple aspects of such systems, from the techniques to acquire performance information by monitoring, to the methods of incorporating such information into the design and decision-making processes. This chapter provides an overview of five project contributions – performance monitoring based on the DiSL instrumentation framework, measurement evaluation using the SPL formalism, performance modeling with fluid semantics, adaptation with DEECo and design with IRM-SA – all in the context of the cloud case study.
Lubomír Bulej
Tomáš Bureš
Ilias Gerostathopoulos
Vojtěch Horký
Jaroslav Keznikl
Lukáš Marek
Max Tschaikowski
max.tschaikowski@imtlucca.it
Mirco Tribastone
mirco.tribastone@imtlucca.it
Petr Tůma
2015-03-11T11:14:42Z
2015-03-11T11:14:42Z
http://eprints.imtlucca.it/id/eprint/2632
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2632
2015-03-11T11:14:42Z
Service composition for collective adaptive systems
Collective adaptive systems are large-scale resource-sharing systems which adapt to the demands of their users by redistributing resources to balance load or provide alternative services where the current provision is perceived to be insufficient. Smart transport systems are a primary example where real-time location tracking systems record the location availability of assets such as cycles for hire, or fleet vehicles such as buses, trains and trams. We consider the problem of an informed user optimising his journey using a composition of services offered by different service providers.
Stephen Gilmore
Jane Hillston
Mirco Tribastone
mirco.tribastone@imtlucca.it
2015-03-11T10:09:01Z
2015-03-11T10:09:01Z
http://eprints.imtlucca.it/id/eprint/2631
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2631
2015-03-11T10:09:01Z
A homage to Martin Wirsing
Martin Wirsing was born on Christmas Eve, December 24th, 1948, in Bayreuth, a Bavarian town famous for the annually celebrated Richard Wagner Festival. There he attended the Lerchenbühl School and the high school “Christian Ernestinum”, where he followed the humanistic branch focusing on Latin and Ancient Greek. After that, from 1968 to 1974, Martin studied Mathematics at University Paris 7 and at Ludwig-Maximilians-Universität in Munich. In 1971 he obtained the Maîtrise en Sciences Mathématiques at University Paris 7 and, in 1974, the Diploma in Mathematics at LMU Munich.
Rocco De Nicola
r.denicola@imtlucca.it
Rolf Hennicker
2015-03-03T09:49:51Z
2015-03-03T09:49:51Z
http://eprints.imtlucca.it/id/eprint/2626
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2626
2015-03-03T09:49:51Z
CaSPiS: a calculus of sessions, pipelines and services
Service-oriented computing is calling for novel computational models and languages with well-disciplined primitives for client–server interaction, structured orchestration and unexpected events handling. We present CaSPiS, a process calculus where the conceptual abstractions of sessioning and pipelining play a central role for modelling service-oriented systems. CaSPiS sessions are two-sided, uniquely named and can be nested. CaSPiS pipelines permit orchestrating the flow of data produced by different sessions. The calculus is also equipped with operators for handling (unexpected) termination of the partner's side of a session. Several examples are presented to provide evidence of the flexibility of the chosen set of primitives. One key contribution is a fully abstract encoding of Misra et al.'s orchestration language Orc. Another main result shows that in CaSPiS it is possible to program a ‘graceful termination’ of nested sessions, which guarantees that no session is forced to hang forever after the loss of its partner.
Michele Boreale
Roberto Bruni
Rocco De Nicola
r.denicola@imtlucca.it
Michele Loreti
2015-03-03T09:41:50Z
2015-03-03T09:41:50Z
http://eprints.imtlucca.it/id/eprint/2625
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2625
2015-03-03T09:41:50Z
A formal approach to autonomic systems programming: the SCEL language
Software-intensive cyber-physical systems have to deal with massive numbers of components, featuring complex interactions among components and with humans and other systems. Often, they are designed to operate in open and non-deterministic environments, and to dynamically adapt to new requirements, technologies and external conditions. This class of systems has been named ensembles, and new engineering techniques are needed to address the challenges of developing, integrating, and deploying them. In the paper, we briefly introduce SCEL (Software Component Ensemble Language), a kernel language that takes a holistic approach to programming autonomic computing systems and aims at providing programmers with a complete set of linguistic abstractions for programming the behavior of autonomic components and the formation of autonomic component ensembles, and for controlling the interaction among different components.
Rocco De Nicola
r.denicola@imtlucca.it
2015-02-23T11:14:29Z
2015-02-23T11:14:29Z
http://eprints.imtlucca.it/id/eprint/2622
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2622
2015-02-23T11:14:29Z
Foundations of Support Constraint Machines
The mathematical foundations of a new theory for the design of intelligent agents are presented. The proposed learning paradigm is centered around the concept of constraint, representing the interactions with the environment, and the parsimony principle. The classical regularization framework of kernel machines is naturally extended to the case in which the agents interact with a richer environment, where abstract granules of knowledge, compactly described by different linguistic formalisms, can be translated into the unified notion of constraint for defining the hypothesis set. Constrained variational calculus is exploited to derive general representation theorems that provide a description of the optimal body of the agent (i.e., the functional structure of the optimal solution to the learning problem), which is the basis for devising new learning algorithms. We show that regardless of the kind of constraints, the optimal body of the agent is a support constraint machine (SCM) based on representer theorems that extend classical results for kernel machines and provide new representations. In a sense, the expressiveness of constraints yields a semantic-based regularization theory, which strongly restricts the hypothesis set of classical regularization. Some guidelines to unify continuous and discrete computational mechanisms are given so as to accommodate in the same framework various kinds of stimuli, for example, supervised examples and logic predicates. The proposed view of learning from constraints incorporates classical learning from examples and extends naturally to the case in which the examples are subsets of the input space, which is related to learning propositional logic clauses.
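The extension of the regularization framework described above can be sketched in a generic form (notation ours, not necessarily the paper's): the agent minimises a regularised empirical risk over a hypothesis space restricted by constraints,

```latex
\min_{f \in \mathcal{H}} \; \lambda \, \|f\|_{\mathcal{H}}^{2}
  \;+\; \sum_{i=1}^{n} V\bigl(f(x_i), y_i\bigr)
\qquad \text{subject to} \qquad
\phi_j(f) = 0, \quad j = 1, \dots, m .
```

Here the constraints φ_j stand for the abstract granules of knowledge (e.g. logic predicates) translated into the unified notion of constraint, and the representer theorems discussed in the abstract characterise the optimal body of the agent in terms of the active "support constraints".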
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Marco Gori
Stefano Melacci
Marcello Sanguineti
2015-02-23T11:11:28Z
2015-02-23T11:11:28Z
http://eprints.imtlucca.it/id/eprint/2621
2015-02-23T11:11:28Z
Robust local–global SOM-based ACM
A novel active contour model (ACM) for image segmentation, driven by both local and global image-intensity information encoded by a self-organising map (SOM), is proposed. Experimental results demonstrate the robustness of the proposed model to the contour initialisation and to the additive noise, when compared with the state-of-the-art local and global ACMs. They also demonstrate its robustness to scene changes.
Mohammed Abdelsamea
mohammed.abdelsamea@imtlucca.it
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
2015-02-23T11:04:02Z
2015-02-23T11:04:02Z
http://eprints.imtlucca.it/id/eprint/2620
2015-02-23T11:04:02Z
An efficient Self-Organizing Active Contour model for image segmentation
Active Contour Models (ACMs) constitute a powerful energy-based minimization framework for image segmentation, based on the evolution of an active contour. Among ACMs, supervised ACMs are able to exploit the information extracted from supervised examples to guide the contour evolution. However, their applicability is limited by the accuracy of the probability models they use. As a consequence, effectiveness and efficiency of supervised ACMs are among their main real challenges, especially when handling images containing regions characterized by intensity inhomogeneity. In this paper, to deal with such kinds of images, we propose a new supervised ACM, named Self-Organizing Active Contour (SOAC) model, which combines a variational level set method (a specific kind of ACM) with the weights of the neurons of two Self-Organizing Maps (SOMs). Its main contribution is the development of a new ACM energy functional optimized in such a way that the topological structure of the underlying image intensity distribution is preserved – using the two SOMs – in a parallel-processing and local way. The model has a supervised component since training pixels associated with different regions are assigned to different SOMs. Experimental results show the superior efficiency and effectiveness of SOAC versus several existing ACMs.
Mohammed Abdelsamea
mohammed.abdelsamea@imtlucca.it
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Mohamed Medhat Gaber
2015-02-23T10:45:49Z
2015-11-02T13:02:11Z
http://eprints.imtlucca.it/id/eprint/2618
2015-02-23T10:45:49Z
Learning With Mixed Hard/Soft Pointwise Constraints
A learning paradigm is proposed and investigated, in which the classical framework of learning from examples is enhanced by the introduction of hard pointwise constraints, i.e., constraints imposed on a finite set of examples that cannot be violated. Such constraints arise, e.g., when requiring coherent decisions of classifiers acting on different views of the same pattern. The classical examples of supervised learning, which can be violated at the cost of some penalization (quantified by the choice of a suitable loss function), play the role of soft pointwise constraints. Constrained variational calculus is exploited to derive a representer theorem that provides a description of the functional structure of the optimal solution to the proposed learning paradigm. It is shown that such an optimal solution can be represented in terms of a set of support constraints, which generalize the concept of support vectors and open the door to a novel learning paradigm, called support constraint machines. The general theory is applied to derive the representation of the optimal solution to the problem of learning from hard linear pointwise constraints combined with soft pointwise constraints induced by supervised examples. In some cases, closed-form optimal solutions are obtained.
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Marco Gori
Stefano Melacci
Marcello Sanguineti
2015-02-23T09:55:18Z
2015-02-23T09:55:18Z
http://eprints.imtlucca.it/id/eprint/2616
2015-02-23T09:55:18Z
Evaluation of the Average Packet Delivery Delay in Highly-Disrupted Networks: The DTN and IP-like Protocol Cases
Delay/Disruption-Tolerant Networking (DTN) represents an innovative communication paradigm that enables the communication over Intermittently-Connected Networks (ICNs). ICNs are characterized by unpredictable or scheduled contacts among nodes, high latency, and high bit error rates. DTNs, unlike TCP/IP protocols, make use of store-and-forward techniques in order to cope with intermittent link issues. In this letter, a simple model is proposed to compute the average packet delivery delay in ICNs. Both the IP-like paradigm used by traditional TCP/IP protocols and DTN are considered. The results provide theoretical insights into the applications of these two approaches to ICNs. Numerical results and simulations are presented, too.
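As an illustration of the two paradigms compared in the letter, the toy Monte Carlo below (ours, not the letter's analytical model) contrasts store-and-forward delivery with end-to-end delivery over a two-hop path whose links are intermittently up:

```python
import random

def sim_delay(p_up, mode, trials=2000, seed=42):
    """Average delivery delay (in slots) over a 2-hop path whose links
    are independently up with probability p_up in each slot."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        t = 0
        if mode == "dtn":
            # Store-and-forward: traverse one link at a time; the
            # intermediate node keeps custody of the packet.
            for _hop in range(2):
                t += 1
                while rng.random() >= p_up:
                    t += 1
        else:
            # IP-like: the end-to-end path must be fully up in one slot.
            t += 1
            while not (rng.random() < p_up and rng.random() < p_up):
                t += 1
        total += t
    return total / trials
```

With p_up = 0.3 the store-and-forward delay is roughly 2/p ≈ 6.7 slots, while the end-to-end paradigm needs both links up simultaneously, roughly 1/p² ≈ 11.1 slots, mirroring the qualitative advantage of DTN over IP-like forwarding in ICNs.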
Marco Cello
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Mario Marchese
Marcello Sanguineti
2015-02-23T09:44:36Z
2015-02-23T09:44:36Z
http://eprints.imtlucca.it/id/eprint/2615
2015-02-23T09:44:36Z
Exploiting the Shapley Value in the Estimation of the Position of a Point of Interest for a Group of Individuals
Concepts and tools from cooperative game theory are exploited to quantify the role played by each member of a team in estimating the position of an observed point of interest. The measure of importance known as “Shapley value” is used to this end. From the theoretical point of view, we propose a specific form of the characteristic function for the class of cooperative games under investigation. In the numerical analysis, different configurations of a group of individuals are considered: all individuals looking at a mobile point of interest, one of them replaced with an artificially-generated one who looks exactly toward the point of interest, and directions of the heads replaced with randomly-generated directions. The corresponding experimental outcomes are compared.
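For small teams the Shapley value can be computed exactly by averaging each player's marginal contribution over all join orders. The sketch below uses a toy characteristic function (ours, not the specific form proposed in the paper):

```python
from itertools import permutations
from math import factorial

def shapley(players, v):
    """Exact Shapley value: each player's average marginal contribution
    to the coalition, taken over all join orders."""
    phi = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = frozenset()
        for p in order:
            with_p = coalition | {p}
            phi[p] += v(with_p) - v(coalition)
            coalition = with_p
    n_orders = factorial(len(players))
    return {p: contrib / n_orders for p, contrib in phi.items()}

# Toy characteristic function (illustrative): the point of interest is
# located only when at least two observers cooperate.
v = lambda s: 1.0 if len(s) >= 2 else 0.0
values = shapley(["a", "b", "c"], v)
```

Since the toy game is symmetric, each of the three observers receives exactly one third of the grand coalition's worth.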
Antonio Camurri
Floriane Dardard
Simone Ghisio
Donald Glowinski
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Marcello Sanguineti
2015-02-23T09:38:59Z
2015-05-19T09:17:36Z
http://eprints.imtlucca.it/id/eprint/2614
2015-02-23T09:38:59Z
Narrowing the Search for Optimal Call-Admission Policies Via a Nonlinear Stochastic Knapsack Model
Call admission control with two classes of users is investigated via a nonlinear stochastic knapsack model. The feasibility region represents the subset of the call space, where given constraints on the quality of service have to be satisfied. Admissible strategies are searched for within the class of coordinate-convex policies. Structural properties that the optimal policies belonging to such a class have to satisfy are derived. They are exploited to narrow the search for the optimal solution to the nonlinear stochastic knapsack problem that models call admission control. To illustrate the role played by these properties, the numbers of coordinate-convex policies by which they are satisfied are estimated. A graph-based algorithm to generate all such policies is presented.
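A coordinate-convex policy corresponds to a down-closed admission region on the call-space grid. The sketch below enumerates all such regions by brute force on a tiny grid (an illustrative check only, not the paper's graph-based generation algorithm):

```python
from itertools import combinations, product

def coordinate_convex_policies(c1, c2):
    """Brute-force enumeration of coordinate-convex admission regions on
    the call-space grid {0..c1} x {0..c2}: whenever a state is
    admissible, every componentwise-smaller state must be, too."""
    states = list(product(range(c1 + 1), range(c2 + 1)))

    def down_closed(region):
        return all(
            (x == 0 or (x - 1, y) in region) and
            (y == 0 or (x, y - 1) in region)
            for (x, y) in region)

    policies = []
    for size in range(1, len(states) + 1):
        for subset in combinations(states, size):
            region = set(subset)
            # The empty system (0, 0) must always be admissible.
            if (0, 0) in region and down_closed(region):
                policies.append(region)
    return policies
```

Even on tiny grids the count grows quickly (the down-closed regions of an (m+1)×(n+1) grid are counted by the binomial coefficient C(m+n+2, m+1) minus the empty set), which is one reason structural properties that narrow the search are valuable.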
Marco Cello
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Mario Marchese
Marcello Sanguineti
2015-02-18T14:47:11Z
2015-02-18T14:47:11Z
http://eprints.imtlucca.it/id/eprint/2611
2015-02-18T14:47:11Z
Learning as Constraint Reactions
A theory of learning is proposed, which naturally extends the classic regularization framework of kernel machines to the case in which the agent interacts with a richer environment, compactly described by the notion of constraint. Variational calculus is exploited to derive general representer theorems that give a description of the structure of the solution to the learning problem. It is shown that such a solution can be represented in terms of constraint reactions, which recall the corresponding notion in analytic mechanics. In particular, the derived representer theorems clearly show the extension of the classic kernel expansion on support vectors to the expansion on support constraints. As an application of the proposed theory, three examples are given, which illustrate the dimensional collapse to a finite-dimensional space of parameters. The constraint reactions are calculated for the classic collection of supervised examples, for the case of box constraints, and for the case of hard holonomic linear constraints mixed with supervised examples. Interestingly, this leads to representer theorems for which we can re-use the kernel machine mathematical and algorithmic apparatus.
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Marco Gori
Stefano Melacci
Marcello Sanguineti
2015-02-18T14:33:29Z
2015-02-18T14:33:29Z
http://eprints.imtlucca.it/id/eprint/2610
2015-02-18T14:33:29Z
A Survey of SOM-Based Active Contour Models for Image Segmentation
Self-Organizing Maps (SOMs) have attracted the attention of many computer vision scientists, particularly when dealing with image segmentation as a contour extraction problem. The idea of utilizing the prototypes (weights) of a SOM to model an evolving contour has produced a new class of Active Contour Models (ACMs), known as SOM-based ACMs. Such models have been proposed in general with the aim of exploiting the specific ability of SOMs to learn the edge-map information via their topology preservation property, and of overcoming some drawbacks of other ACMs, such as getting trapped in local minima of the image energy functional to be minimized. In this survey paper, the main principles of SOMs and their application in modelling active contours are first highlighted. Then, we review existing SOM-based ACMs with a focus on their advantages and disadvantages in modelling the evolving contour via different kinds of SOMs. Finally, some current research directions are identified.
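The topology-preservation property mentioned above stems from the SOM update rule: the best-matching unit and its topological neighbours all move toward each input. A minimal 1-D sketch (illustrative only, not an active-contour implementation; all names are ours):

```python
import math
import random

def train_som(data, n_units=6, epochs=100, eta=0.3, sigma=1.0):
    """Minimal 1-D self-organising map: for each input, find the
    best-matching unit (BMU) and pull it and its neighbours toward the
    input, weighted by a Gaussian neighbourhood function."""
    rng = random.Random(0)
    weights = [rng.random() for _ in range(n_units)]
    for _ in range(epochs):
        for x in data:
            bmu = min(range(n_units), key=lambda i: abs(weights[i] - x))
            for i in range(n_units):
                # Neighbourhood strength decays with grid distance to the BMU.
                h = math.exp(-((i - bmu) ** 2) / (2.0 * sigma ** 2))
                weights[i] += eta * h * (x - weights[i])
    return weights

edge_samples = [0.1, 0.3, 0.5, 0.7, 0.9]   # stand-in for edge-map points
w = train_som(edge_samples)
```

Because every update is a convex step toward a data point, the prototypes stay inside the data range while the neighbourhood term keeps nearby units close on the grid, which is the topology-preservation behaviour SOM-based ACMs exploit.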
Mohammed Abdelsamea
mohammed.abdelsamea@imtlucca.it
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Mohamed Medhat Gaber
2015-02-11T14:35:31Z
2015-02-11T14:35:31Z
http://eprints.imtlucca.it/id/eprint/2607
2015-02-11T14:35:31Z
Blending randomness in closed queueing network models
Random environments are stochastic models used to describe events occurring in the environment a system operates in. The goal is to describe events that affect performance and reliability such as breakdowns, repairs, or temporary degradations of resource capacities due to exogenous factors. Despite having been studied for decades, models that include both random environments and queueing networks remain difficult to analyse. To cope with this problem, we introduce the blending algorithm, a novel approximation for closed queueing network models in random environments. The algorithm seeks to obtain the stationary solution of the model by iteratively evaluating the dynamics of the system in between state changes of the environment. To make the approach scalable, the computation relies on a fluid approximation of the queueing network model. A validation study on 1800 models shows that blending can save a significant amount of time compared to simulation, with an average accuracy that grows with the number of servers in each station. We also give an interpretation of this technique in terms of Laplace transforms and use this approach to determine convergence properties.
Giuliano Casale
Mirco Tribastone
mirco.tribastone@imtlucca.it
Peter G. Harrison
2015-02-11T14:31:46Z
2015-07-24T12:22:54Z
http://eprints.imtlucca.it/id/eprint/2606
2015-02-11T14:31:46Z
Tackling continuous state-space explosion in a Markovian process algebra
Fluid or mean-field methods are approximate analytical techniques which have proven effective in tackling the infamous state-space explosion problem which typically arises when modelling large-scale concurrent systems based on interleaving semantics. These methods are particularly suitable in situations which present large populations of simple interacting objects characterised by small local state spaces, since they require the analysis of a problem which is insensitive to the population sizes but is dependent only on the size of the local state spaces. This paper studies the case when the replicated objects are best described as composites which consist of smaller simple objects. A congenial formal modelling framework for situations of this kind may be given by stochastic process algebra. Using PEPA as a representative case, we find that fluid models with replicated copies of composite processes do not scale well with increasing population sizes, thus rendering intractable the analysis of the underlying system of ordinary differential equations (ODEs). We call this problem continuous state-space explosion, by analogy with its counterpart phenomenon in discrete state spaces. The main contribution of this paper is a result of equivalence that simplifies, in an exact way, the potentially massive ODE system arising in those circumstances to one whose size is independent from all the multiplicities in the model. As a byproduct, we find that these simplified ODEs turn out to characterise the fluid behaviour of a family of PEPA models whose elements cannot be related to each other through any known equivalence relation. A substantial numerical assessment investigates the relationship between the different underlying Markov chains and their unique fluid limit, demonstrating its generally good accuracy for all practical purposes.
Max Tschaikowski
max.tschaikowski@imtlucca.it
Mirco Tribastone
mirco.tribastone@imtlucca.it
2015-02-11T14:29:07Z
2015-07-24T12:22:10Z
http://eprints.imtlucca.it/id/eprint/2605
2015-02-11T14:29:07Z
Exact fluid lumpability in Markovian process algebra
Quantitative analysis by means of discrete-state stochastic processes is hindered by the well-known phenomenon of state-space explosion, whereby the size of the state space may have an exponential growth with the number of agents of the system under scrutiny. When the stochastic process underlies a Markovian process algebra model, this problem may be alleviated by suitable notions of behavioural equivalence that induce lumping at the underlying continuous-time Markov chain, establishing an exact relation between a potentially much smaller aggregated chain and the original one. For the analysis of massively parallel systems, however, lumping techniques may not be sufficient to yield a computationally tractable problem. Recently, much work has been directed towards forms of fluid techniques that provide a set of ordinary differential equations (ODEs) approximating the expected path of the stochastic process. Unfortunately, even fluid models of realistic systems may be too large for feasible analysis. This paper studies a behavioural relation for process algebra with fluid semantics, called projected label equivalence, which is shown to yield an exactly fluid lumpable model, i.e., an aggregated ODE system which can be related to the original one without any loss of information. Projected label equivalence relates sequential components of a process term. In general, for any two sequential components that are related in the fluid sense, nothing can be said about their relationship from the stochastic viewpoint. We define and study a notion of well-posedness which allows us to relate fluid lumpability to the stochastic notion of semi-isomorphism, which is a weaker version of the common notion of isomorphism between the doubly labelled transition systems at the basis of the Markovian interpretation.
Max Tschaikowski
max.tschaikowski@imtlucca.it
Mirco Tribastone
mirco.tribastone@imtlucca.it
2015-02-11T14:24:13Z
2015-02-11T14:24:13Z
http://eprints.imtlucca.it/id/eprint/2604
2015-02-11T14:24:13Z
Modelling exogenous variability in cloud deployments
Describing exogenous variability in the resources used by a cloud application leads to stochastic performance models that are difficult to solve. In this paper, we describe the blending algorithm, a novel approximation for queueing network models immersed in a random environment. Random environments are Markov chain-based descriptions of time-varying operational conditions that evolve independently of the system state, therefore they are natural descriptors for exogenous variability in a cloud deployment. The algorithm adopts the principle of solving a separate transient-analysis subproblem for each state of the random environment. Each subproblem is then approximated by a system of ordinary differential equations formulated according to a fluid limit theorem, making the approach scalable and computationally inexpensive. A validation study on several hundred models shows that blending can save up to two orders of magnitude of computational time compared to simulation, enabling efficient exploration of a decision space, which is useful in particular at design-time.
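The idea of evaluating separate fluid dynamics per environment state can be illustrated with a deterministic toy (ours; in the paper the environment is a Markov chain and the model a full queueing network). Here a fluid queue alternates between a "normal" and a "degraded" service regime every five time units:

```python
def modulated_fluid_queue(t_end=20.0, dt=0.001, switch_every=5.0):
    """Euler integration of a single fluid queue whose service rate is
    modulated by a deterministic two-state environment: within each
    environment state the dynamics follow a fixed fluid ODE."""
    lam = 1.0                   # exogenous arrival rate
    mu = {0: 2.0, 1: 0.5}       # service rate per environment state
    x, t = 0.0, 0.0
    while t < t_end:
        env = int(t / switch_every) % 2          # current environment state
        x += dt * (lam - mu[env] * min(x, 1.0))  # single-server fluid drift
        x = max(x, 0.0)                          # queue length stays non-negative
        t += dt
    return x
```

In the "normal" state the queue settles near 0.5; in the "degraded" state the server is overloaded (λ > μ) and the backlog grows linearly, so the trajectory alternates between drain and build-up phases, ending around 3.1 at t = 20.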
Giuliano Casale
Mirco Tribastone
mirco.tribastone@imtlucca.it
2015-02-11T14:20:40Z
2015-02-11T14:20:40Z
http://eprints.imtlucca.it/id/eprint/2603
2015-02-11T14:20:40Z
A fluid model for layered queueing networks
Layered queueing networks are a useful tool for the performance modeling and prediction of software systems that exhibit complex characteristics such as multiple tiers of service, fork/join interactions, and asynchronous communication. These features generally result in nonproduct form behavior for which particularly efficient approximations based on mean value analysis (MVA) have been devised. This paper reconsiders the accuracy of such techniques by providing an interpretation of layered queueing networks as fluid models. Mediated by an automatic translation into a stochastic process algebra, PEPA, a network is associated with a set of ordinary differential equations (ODEs) whose size is insensitive to the population levels in the system under consideration. A substantial numerical assessment demonstrates that this approach significantly improves the quality of the approximation for typical performance indices such as utilization, throughput, and response time. Furthermore, backed by established theoretical results of asymptotic convergence, the error trend shows monotonic decrease with larger population sizes, a behavior which is found to be in sharp contrast with that of approximate mean value analysis, which instead tends to increase.
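The insensitivity to population levels comes from the fluid view: the ODE state tracks station populations rather than individual jobs. A generic two-station closed-network sketch (ours, not the PEPA translation used in the paper):

```python
def fluid_closed_network(n_jobs=10, mu=(1.0, 2.0), servers=(2, 2),
                         t_end=50.0, dt=0.01):
    """Euler integration of the fluid ODE for a two-station closed
    queueing network: dx_i/dt = mu_j * min(x_j, s_j) - mu_i * min(x_i, s_i),
    where min(x_i, s_i) caps each station's output at its server count."""
    x = [float(n_jobs), 0.0]          # all jobs start at station 0
    for _ in range(int(t_end / dt)):
        out = [mu[i] * min(x[i], servers[i]) for i in range(2)]
        x[0] += dt * (out[1] - out[0])
        x[1] += dt * (out[0] - out[1])
    return x
```

At equilibrium the bottleneck (station 0, capacity mu*s = 2) saturates and station 1 balances its throughput, min(x1, 2) = 2/mu1 = 1, so the fluid limit settles at x ≈ (9, 1); the ODE size is the same whether there are 10 jobs or 10 million.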
Mirco Tribastone
mirco.tribastone@imtlucca.it
2015-02-11T14:17:18Z
2015-02-11T14:17:18Z
http://eprints.imtlucca.it/id/eprint/2602
2015-02-11T14:17:18Z
Fluid rewards for a stochastic process algebra
Reasoning about the performance of models of software systems typically entails the derivation of metrics such as throughput, utilization, and response time. If the model is a Markov chain, these are expressed as real functions of the chain, called reward models. The computational complexity of reward-based metrics is of the same order as the solution of the Markov chain, making the analysis infeasible when evaluating large-scale systems. In the context of the stochastic process algebra PEPA, the underlying continuous-time Markov chain has been shown to admit a deterministic (fluid) approximation as a solution of an ordinary differential equation, which effectively circumvents state-space explosion. This paper is concerned with approximating Markovian reward models for PEPA with fluid rewards, i.e., functions of the solution of the differential equation problem. It shows that (1) the Markovian reward models for typical metrics of performance enjoy asymptotic convergence to their fluid analogues, and that (2) via numerical tests, the approximation yields satisfactory accuracy in practice.
Mirco Tribastone
mirco.tribastone@imtlucca.it
Jie Ding
Stephen Gilmore
Jane Hillston
2015-02-11T14:14:02Z
2015-02-11T14:14:02Z
http://eprints.imtlucca.it/id/eprint/2601
2015-02-11T14:14:02Z
Scalable differential analysis of process algebra models
The exact performance analysis of large-scale software systems with discrete-state approaches is difficult because of the well-known problem of state-space explosion. This paper considers this problem with regard to the stochastic process algebra PEPA, presenting a deterministic approximation to the underlying Markov chain model based on ordinary differential equations. The accuracy of the approximation is assessed by means of a substantial case study of a distributed multithreaded application.
Mirco Tribastone
mirco.tribastone@imtlucca.it
Stephen Gilmore
Jane Hillston
2015-02-11T14:10:28Z
2015-02-11T14:10:28Z
http://eprints.imtlucca.it/id/eprint/2600
2015-02-11T14:10:28Z
Stochastic process algebras: from individuals to populations
In this paper we report on progress in the use of stochastic process algebras for representing systems which contain many replications of components such as clients, servers and devices. Such systems have traditionally been difficult to analyse even when using high-level models because of the need to represent the vast range of their potential behaviour. Models of concurrent systems with many components very quickly exceed the storage capacity of computing devices even when efficient data structures are used to minimize the cost of representing each state. Here, we show how population-based models that make use of a continuous approximation of the discrete behaviour can be used to efficiently analyse the temporal behaviour of very large systems via their collective dynamics. This approach enables modellers to study problems that cannot be tackled with traditional discrete-state techniques such as continuous-time Markov chains.
Jane Hillston
Mirco Tribastone
mirco.tribastone@imtlucca.it
Stephen Gilmore
2015-02-11T14:06:11Z
2015-02-11T14:06:11Z
http://eprints.imtlucca.it/id/eprint/2599
2015-02-11T14:06:11Z
Non-functional properties in the model-driven development of service-oriented systems
Systems based on the service-oriented architecture (SOA) principles have become an important cornerstone of the development of enterprise-scale software applications. They are characterized by separating functions into distinct software units, called services, which can be published, requested and dynamically combined in the production of business applications. Service-oriented systems (SOSs) promise high flexibility, improved maintainability, and simple re-use of functionality. Achieving these properties requires an understanding not only of the individual artifacts of the system but also their integration. In this context, non-functional aspects play an important role and should be analyzed and modeled as early as possible in the development cycle. In this paper, we discuss modeling of non-functional aspects of service-oriented systems, and the use of these models for analysis and deployment. Our contribution in this paper is threefold. First, we show how services and service compositions may be modeled in UML by using a profile for SOA (UML4SOA) and how non-functional properties of service-oriented systems can be represented using the non-functional extension of UML4SOA (UML4SOA-NFP) and the MARTE profile. This enables modeling of performance, security and reliable messaging. Second, we discuss formal analysis of models which respect this design, in particular we consider performance estimates and reliability analysis using the stochastically timed process algebra PEPA as the underlying analytical engine. Last but not least, our models are the source for the application of deployment mechanisms which comprise model-to-model and model-to-text transformations implemented in the framework VIATRA. All techniques presented in this work are illustrated by a running example from an eUniversity case study.
Stephen Gilmore
László Gönczy
Nora Koch
Philip Mayer
Mirco Tribastone
mirco.tribastone@imtlucca.it
Dániel Varró
2015-02-11T13:51:46Z
2015-02-11T13:51:46Z
http://eprints.imtlucca.it/id/eprint/2598
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2598
2015-02-11T13:51:46Z
The PEPA eclipse plugin
The PEPA Eclipse Plug-in supports the creation and analysis of performance models, from small-scale Markov models to large-scale simulation studies and differential equation systems. Whichever form of analysis is used, models are expressed in a single high-level language for quantitative modelling, Performance Evaluation Process Algebra (PEPA).
Mirco Tribastone
mirco.tribastone@imtlucca.it
Adam Duguid
Stephen Gilmore
2015-02-11T13:46:09Z
2015-02-11T13:46:09Z
http://eprints.imtlucca.it/id/eprint/2597
2015-02-11T13:46:09Z
Scaling performance analysis using fluid-flow approximation
The fluid interpretation of the process calculus PEPA provides a very useful tool for the performance evaluation of large-scale systems because the tractability of the numerical solution does not depend upon the population levels of the system under study. This paper offers a tutorial on how to use this technique by analysing a case study of a service-oriented application to support an e-University infrastructure.
Mirco Tribastone
mirco.tribastone@imtlucca.it
Stephen Gilmore
2015-02-11T13:40:30Z
2015-02-11T13:40:30Z
http://eprints.imtlucca.it/id/eprint/2596
2015-02-11T13:40:30Z
Quantitative analysis of web services using SRMC
In this tutorial paper we present quantitative methods for analysing Web Services with the goal of understanding how they will perform under increased demand, or when asked to serve a larger pool of service subscribers. We use a process calculus called SRMC to model the service. We apply efficient analysis techniques to numerically evaluate our model. The process calculus and the numerical analysis are supported by a set of software tools which relieve the modeller of the burden of generating and evaluating a large family of related models. The methods are illustrated on a classical example of Web Service usage in a business-to-business scenario.
Allan Clark
Stephen Gilmore
Mirco Tribastone
mirco.tribastone@imtlucca.it
2015-02-11T13:33:24Z
2015-02-11T13:33:24Z
http://eprints.imtlucca.it/id/eprint/2595
2015-02-11T13:33:24Z
Stochastic process algebras
In this tutorial we give an introduction to stochastic process algebras and their use in performance modelling, with a focus on the PEPA formalism. A brief introduction is given to the motivations for extending classical process algebra with stochastic times and probabilistic choice. We then present an introduction to the modelling capabilities of the formalism and the tools available to support Markovian based analysis. The chapter is illustrated throughout by small examples, demonstrating the use of the formalism and the tools.
Allan Clark
Stephen Gilmore
Jane Hillston
Mirco Tribastone
mirco.tribastone@imtlucca.it
2015-02-10T15:32:31Z
2015-02-10T15:32:31Z
http://eprints.imtlucca.it/id/eprint/2594
2015-02-10T15:32:31Z
Behavioral relations in a process algebra for variants
Variant Process Algebra is designed for the formal behavioral modeling of software variation, as arises, for instance, in software product line engineering. Process terms are labelled with the sets of variants, i.e., specific products, where they are enabled. A multi-modal operational semantics enables two compositional forms of reasoning. The first one is concerned with relating the behavior of a variant to the whole family. The second notion relates variants between each other, for instance to be able to formally capture the intuitive idea that a variant is a conservative extension of another, in the sense that it adds more behavior without breaking any existing one. Sufficient conditions are given to establish such a relation statically, by means of syntactic checks on process terms.
Mirco Tribastone
mirco.tribastone@imtlucca.it
2015-02-10T15:26:40Z
2016-02-12T13:12:35Z
http://eprints.imtlucca.it/id/eprint/2593
2015-02-10T15:26:40Z
An analysis pathway for the quantitative evaluation of public transport systems
We consider the problem of evaluating quantitative service-level agreements in public services such as transportation systems. We describe the integration of quantitative analysis tools for data fitting, model generation, simulation, and statistical model-checking, creating an analysis pathway leading from system measurement data to verification results. We apply our pathway to the problem of determining whether public bus systems are delivering an appropriate quality of service as required by regulators. We exercise the pathway on service data obtained from Lothian Buses about the arrival and departure times of their buses on key bus routes through the city of Edinburgh. Although we include only that example in the present paper, our methods are sufficiently general to apply to other transport systems and other cities.
Stephen Gilmore
Mirco Tribastone
mirco.tribastone@imtlucca.it
Andrea Vandin
andrea.vandin@imtlucca.it
2015-02-10T15:06:34Z
2015-02-10T15:06:34Z
http://eprints.imtlucca.it/id/eprint/2592
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2592
2015-02-10T15:06:34Z
Family-based performance analysis of variant-rich software systems
We study models of software systems with variants that stem from a specific choice of configuration parameters with a direct impact on performance properties. Using UML activity diagrams with quantitative annotations, we model such systems as a product line. The efficiency of a product-based evaluation is typically low because each product must be analyzed in isolation, making it difficult to reuse computations across variants. Here, we propose a family-based approach based on symbolic computation. A numerical assessment on large activity diagrams shows that this approach can be up to three orders of magnitude faster than product-based analysis in large models, thus enabling computationally efficient explorations of large parameter spaces.
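A toy illustration (not the authors' algorithm) of why a family-based analysis can beat a product-by-product one: sub-results shared by all variants are evaluated once and reused, instead of being recomputed for each product. The chain-of-activities "model", its durations, and the variant rates are all hypothetical.

```python
# Family-based vs product-based evaluation of a toy variant-rich model:
# the variant-independent part of the analysis is computed once and cached.
from functools import lru_cache

# Shared structure: a chain of activities; each variant only changes the
# rate of one configurable final activity.
common_durations = [2.0, 3.5, 1.2]               # annotations shared by the family
variant_rates = {"A": 1.0, "B": 2.0, "C": 4.0}   # configuration parameter

def product_based(variant):
    # Re-analyses the whole diagram for each product in isolation.
    return sum(common_durations) + 1.0 / variant_rates[variant]

@lru_cache(maxsize=None)
def family_common():
    # Family-based: the variant-independent part is evaluated only once.
    return sum(common_durations)

def family_based(variant):
    return family_common() + 1.0 / variant_rates[variant]

for v in variant_rates:
    assert product_based(v) == family_based(v)
print({v: family_based(v) for v in variant_rates})
```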
Matthias Kowal
Ina Schaefer
Mirco Tribastone
mirco.tribastone@imtlucca.it
2015-02-10T14:53:43Z
2015-07-24T12:27:34Z
http://eprints.imtlucca.it/id/eprint/2591
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2591
2015-02-10T14:53:43Z
Extended differential aggregations in process algebra for performance and biology
We study aggregations for ordinary differential equations induced by fluid semantics for Markovian process algebra which can capture the dynamics of performance models and chemical reaction networks. Whilst previous work has required perfect symmetry for exact aggregation, we present approximate fluid lumpability, which makes nearby processes perfectly symmetric after a perturbation of their parameters. We prove that small perturbations yield nearby differential trajectories. Numerically, we show that many heterogeneous processes can be aggregated with negligible errors.
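The continuity argument above can be sketched numerically with a toy example (invented rates, forward Euler, not the paper's process-algebra construction): two nearly identical processes with exit rates k and k + eps are aggregated as if both had rate k, and the trajectory error stays small.

```python
# Approximate lumping sketch: perturbing the rate of one process makes the
# pair perfectly symmetric; small perturbations yield nearby trajectories.
def integrate(rates, x0, dt=1e-3, t_end=5.0):
    """Forward-Euler integration of dx_i/dt = -r_i * x_i."""
    x, t = list(x0), 0.0
    while t < t_end:
        x = [xi - r * xi * dt for xi, r in zip(x, rates)]
        t += dt
    return x

k, eps = 1.0, 0.05
exact = integrate([k, k + eps], [1.0, 1.0])   # original, heterogeneous model
lumped = integrate([k, k], [1.0, 1.0])        # perturbed, perfectly symmetric
err = max(abs(a - b) for a, b in zip(exact, lumped))
print(f"max trajectory error at t=5: {err:.4f}")
```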
Max Tschaikowski
max.tschaikowski@imtlucca.it
Mirco Tribastone
mirco.tribastone@imtlucca.it
2015-02-10T14:44:18Z
2015-02-10T14:44:18Z
http://eprints.imtlucca.it/id/eprint/2590
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2590
2015-02-10T14:44:18Z
Fluid performability analysis of nested automata models
In this paper we present a class of nested automata for the modelling of performance, availability, and reliability of software systems with hierarchical structure, which we call systems of systems. Quantitative modelling provides valuable insight into the dynamic behaviour of software systems, allowing non-functional properties such as performance, dependability and availability to be assessed. However, the complexity of many systems challenges the feasibility of this approach as the required mathematical models grow too large to admit a computationally efficient solution. In recent years it has been found that in some cases a fluid, or mean field, approximation can provide very good estimates whilst dramatically reducing the computational cost.
The systems of systems which we propose are hierarchically arranged automata in which influence may be exerted between siblings, between parents and children, and even from children to parents, allowing a wide range of complex dynamics to be captured. We show that, under mild conditions, systems of systems can be equipped with fluid approximation models which are several orders of magnitude more efficient to run than explicit state representations, whilst providing excellent estimates of performability measures. This is a significant extension of previous fluid approximation results, with valuable applications for software performance modelling.
Luca Bortolussi
Jane Hillston
Mirco Tribastone
mirco.tribastone@imtlucca.it
2015-02-10T14:33:03Z
2015-02-10T14:33:03Z
http://eprints.imtlucca.it/id/eprint/2589
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2589
2015-02-10T14:33:03Z
Efficient optimization of software performance models via parameter-space pruning
tem's parameters. Unfortunately, for realistic scenarios, the cost of the optimization is typically high, leading to computational difficulties in the exploration of large parameter spaces. This paper proposes an approach to provably exact parameter-space pruning for a class of models of large-scale software systems analyzed with fluid techniques, efficient and scalable deterministic approximations of massively parallel stochastic models. We present a result of monotonicity of fluid solutions with respect to the model parameters, and employ it in the context of optimization programs with evolutionary algorithms by discarding candidate configurations a priori, i.e., without ever solving them, whenever they are proven to give lower fitness than other configurations. An extensive numerical validation shows that this approach yields an average twofold runtime speed-up compared to a baseline optimization algorithm that does not exploit monotonicity. Furthermore, we find that the optimal configuration is within a few percent of the true one obtained by stochastic simulation, whose solution is however orders of magnitude more expensive.
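The pruning idea can be sketched with a toy monotone fitness and a plain random search rather than a full evolutionary loop: if the fluid solution, and hence the fitness, is componentwise monotone in the parameters, a candidate dominated by an already-evaluated configuration that scored below the current best can be discarded without ever solving it. The fitness function and parameter ranges below are invented stand-ins.

```python
# Monotonicity-based pruning sketch: skip candidates provably no better
# than the incumbent, without evaluating them.
import random

def fitness(cfg):
    # Stand-in for an expensive fluid-model solve; monotone in each rate.
    fitness.calls += 1
    return sum(r / (1.0 + r) for r in cfg)
fitness.calls = 0

def dominated(c, e):
    """True if c <= e in every coordinate (so fitness(c) <= fitness(e))."""
    return all(ci <= ei for ci, ei in zip(c, e))

random.seed(1)
candidates = [tuple(random.uniform(0.1, 5.0) for _ in range(3))
              for _ in range(200)]

best, evaluated = float("-inf"), []
for c in candidates:
    # Prune: some evaluated e dominates c and already scored below best.
    if any(dominated(c, e) for e, f in evaluated if f <= best):
        continue
    f = fitness(c)
    evaluated.append((c, f))
    best = max(best, f)

exhaustive_best = max(sum(r / (1.0 + r) for r in c) for c in candidates)
assert abs(best - exhaustive_best) < 1e-12   # pruning is provably exact
print(f"solves: {fitness.calls}/200, best fitness {best:.3f}")
```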
Mirco Tribastone
mirco.tribastone@imtlucca.it
2015-02-10T14:10:52Z
2015-02-10T14:10:52Z
http://eprints.imtlucca.it/id/eprint/2588
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2588
2015-02-10T14:10:52Z
Lumpability of fluid models with heterogeneous agent types
Fluid models have gained popularity in the performance modeling of computing systems and communication networks. When the model under study consists of many different types of agents, the size of the associated system of ordinary differential equations (ODEs) increases with the number of types, making the analysis more difficult. We study this problem for a class of models where heterogeneity is expressed as a perturbation of certain parameters of the ODE vector field. We provide an a priori bound that relates the solutions of the original, heterogeneous model with that of an ODE system of smaller size which arises from aggregating system variables concerning different types of agents. By showing that this bound grows linearly with the intensity of the perturbation, we provide a formal justification to the intuitive possibility of neglecting small differences in agents' behavior as a means to reducing the dimensionality of the original system.
Giulio Iacobelli
Mirco Tribastone
mirco.tribastone@imtlucca.it
2015-02-10T14:06:19Z
2015-07-24T12:31:15Z
http://eprints.imtlucca.it/id/eprint/2587
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2587
2015-02-10T14:06:19Z
Exact fluid lumpability for Markovian process algebra
We study behavioural relations for process algebra with a fluid semantics given in terms of a system of ordinary differential equations (ODEs). We introduce label equivalence, a relation which is shown to induce an exactly lumped fluid model, a potentially smaller ODE system which can be exactly related to the original one. We show that, in general, for two processes that are related in the fluid sense nothing can be said about their relationship from a stochastic viewpoint. However, we identify a class of models for which label equivalence implies a correspondence, called semi-isomorphism, between their transition systems that are at the basis of the Markovian interpretation.
Max Tschaikowski
max.tschaikowski@imtlucca.it
Mirco Tribastone
mirco.tribastone@imtlucca.it
2015-02-10T14:02:29Z
2015-07-24T12:28:15Z
http://eprints.imtlucca.it/id/eprint/2586
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2586
2015-02-10T14:02:29Z
Generalised communication for interacting agents
Process algebras for quantitative evaluation are based on one of two mechanisms for communication: binary, where a channel is shared by exactly two agents, or multiway, where all agents sharing a channel must synchronise. In this paper we consider an intermediate form which we call generalised communication, where only m agents out of n potentially available are involved in the communication. We study this in the context of the stochastic process algebra PEPA, of which we conservatively extend the syntax and semantics. We give an intuitive interpretation in terms of bandwidth assignments to agents communicating over a shared medium. We validate this semantics using a real implementation of a simple peer-to-peer protocol, for which our performance model yields predictions with high accuracy. We prove a result of lumpability that exploits symmetries between identical communicating agents, yielding good scalability of the underlying continuous-time Markov chain (CTMC) with respect to increasing population levels. Furthermore, we present an algorithm that derives the lumped chain directly, without having to generate the full CTMC first.
Max Tschaikowski
max.tschaikowski@imtlucca.it
Mirco Tribastone
mirco.tribastone@imtlucca.it
2015-02-10T13:50:25Z
2015-02-10T13:50:25Z
http://eprints.imtlucca.it/id/eprint/2585
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2585
2015-02-10T13:50:25Z
Performance modeling of design patterns for distributed computation
In software engineering, design patterns are commonly used and represent robust solution templates to frequently occurring problems in software design and implementation. In this paper, we consider performance simulation for two design patterns for processing of parallel messaging. We develop continuous-time Markov chain models of two commonly used design patterns, Half-Sync/Half-Async and Leader/Followers, for their performance evaluation in multicore machines. We propose a unified modeling approach which contemplates a detailed description of the application-level logic and abstracts away from operating system calls and complex locking and networking application programming interfaces. By means of a validation study against implementations on a 16-core machine, we show that the models accurately predict peak throughputs and variation trends with increasing concurrency levels for a wide range of message processing workloads. We also discuss the limits of our models when memory-level internal contention is not captured.
Ronald Strebelow
Mirco Tribastone
mirco.tribastone@imtlucca.it
Christian Prehofer
2015-02-10T13:31:51Z
2015-02-10T13:31:51Z
http://eprints.imtlucca.it/id/eprint/2584
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2584
2015-02-10T13:31:51Z
Fluid limits of queueing networks with batches
This paper presents an analytical model for the performance prediction of queueing networks with batch services and batch arrivals, related to the fluid limit of a suitable single-parameter sequence of continuous-time Markov chains and interpreted as the deterministic approximation of the average behaviour of the stochastic process. Notably, the underlying system of ordinary differential equations exhibits discontinuities in the right-hand sides, which however are proven to yield a meaningful solution. A substantial numerical assessment is used to study the quality of the approximation and shows very good accuracy in networks with large job populations.
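A forward-Euler sketch (toy parameters, not the paper's queueing network) of a fluid limit with a discontinuous right-hand side: batches of size b are served only while at least one full batch is queued, so the vector field switches at x = b and the trajectory "slides" along that surface.

```python
# Fluid batch-service queue with a discontinuous ODE right-hand side.
lam, mu, b = 1.0, 0.5, 4.0   # arrival rate, batch service rate, batch size

def rhs(x):
    # Service removes mu*b units per time unit, but only when x >= b.
    return lam - mu * b * (1.0 if x >= b else 0.0)

x, dt = 20.0, 0.01
for _ in range(3000):        # simulate 30 time units
    x += rhs(x) * dt
print(f"queue level after t=30: {x:.2f}")  # settles near the switch x = b
```

Since lam < mu * b, the fluid level drains until it reaches the switching surface x = b and then chatters around it, which is the numerical counterpart of the sliding-mode solution the paper proves meaningful.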
Luca Bortolussi
Mirco Tribastone
mirco.tribastone@imtlucca.it
2015-02-09T11:30:06Z
2015-02-09T11:30:06Z
http://eprints.imtlucca.it/id/eprint/2583
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2583
2015-02-09T11:30:06Z
Fluid analysis of queueing in two-stage random environments
Many random environments lead to Markov processes for which average-environment (AVG) and near-complete-decomposability (DEC) approximations suffer unacceptably large errors. This is problematic for queueing networks in particular, where state-space explosion hinders the application of numerical methods. In this paper we introduce blending, a novel fluid-based approximation for queueing models in random environments. The technique is here first introduced for random environments with two stages. Blending estimates the equilibrium of the model by iteratively evaluating transient-analysis subproblems for each of the two stages. Each subproblem is solved by means of a very small system of ordinary differential equations, making the approach scalable and simple to implement. Random environments supported by blending are either state-independent, as for models with breakdown and repair, or state-dependent, such as for Markov-modulated queues where the service phase changes only during busy periods. Comparative results with AVG and DEC approximations prove that blending tackles the limitations of existing methods for evaluating queues in random environments.
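The blending iteration above can be sketched on a toy two-stage example: a fluid queue ODE is solved transiently for each environment stage in turn, feeding each stage's final state into the other, until the cycle of states converges. The arrival rate, stage rates and durations, and the single-server fluid ODE are all invented for illustration.

```python
# Blending-style iteration: alternate transient solves of a fluid queue
# across two environment stages until a periodic fixed point is reached.
lam = 0.8
stages = [(1.5, 2.0), (0.5, 1.0)]   # (service rate, stage duration)

def transient(q0, mu, horizon, dt=1e-3):
    """Forward-Euler transient solve of dq/dt = lam - mu*min(q, 1)."""
    q, t = q0, 0.0
    while t < horizon:
        q = max(q + (lam - mu * min(q, 1.0)) * dt, 0.0)
        t += dt
    return q

q, prev = 0.0, None
while prev is None or abs(q - prev) > 1e-8:
    prev = q
    for mu, horizon in stages:
        q = transient(q, mu, horizon)
print(f"periodic fixed point of the two-stage cycle: q = {q:.4f}")
```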
Giuliano Casale
Mirco Tribastone
mirco.tribastone@imtlucca.it
2015-02-09T11:09:43Z
2015-02-09T11:09:43Z
http://eprints.imtlucca.it/id/eprint/2582
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2582
2015-02-09T11:09:43Z
ASCENS: engineering autonomic service-component ensembles
Today’s developers often face the demanding task of developing software for ensembles: systems with massive numbers of nodes, operating in open and non-deterministic environments with complex interactions, and the need to dynamically adapt to new requirements, technologies or environmental conditions without redeployment and without interruption of the system’s functionality. Conventional development approaches and languages do not provide adequate support for the problems posed by this challenge. The goal of the ASCENS project is to develop a coherent, integrated set of methods and tools to build software for ensembles. To this end we research foundational issues that arise during the development of these kinds of systems, and we build mathematical models that address them. Based on these theories we design a family of languages for engineering ensembles, formal methods that can handle the size, complexity and adaptivity required by ensembles, and software-development methods that provide guidance for developers. In this paper we provide an overview of several research areas of ASCENS: the SOTA approach to ensemble engineering and the underlying formal model called GEM, formal notions of adaptation and awareness, the SCEL language, quantitative analysis of ensembles, and finally software-engineering methods for ensembles.
Martin Wirsing
Matthias Hölzl
Mirco Tribastone
mirco.tribastone@imtlucca.it
Franco Zambonelli
2015-02-09T11:05:12Z
2015-02-09T11:05:12Z
http://eprints.imtlucca.it/id/eprint/2581
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2581
2015-02-09T11:05:12Z
Approximate mean value analysis of process algebra models
Studying the existence of product forms of performance models described with compositional techniques is of central importance since this may lead to particularly efficient solution methods. This paper considers a class of models in the stochastic process algebra PEPA which do not enjoy the exact product form solutions available in the literature. However, they can be interpreted as queueing networks with service vacations and multiple resource possession, which have been shown to admit accurate analytical approximations based on mean value analysis. Special attention is devoted to situations where the use of the competing approximate method based on ordinary differential equations may be questionable due to the presence of components with few replicas.
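As background for the mean-value-analysis idea above, the classic exact MVA recursion for a closed product-form network is easy to state (the paper's approximate variant with service vacations and multiple resource possession is not reproduced here; the service demands and population are illustrative).

```python
# Exact mean value analysis for a closed queueing network of delay-free
# queueing stations, via the arrival theorem and Little's law.
def mva(demands, n_jobs):
    q = [0.0] * len(demands)               # mean queue lengths at n = 0
    for n in range(1, n_jobs + 1):
        # Arrival theorem: an arriving job sees the (n-1)-job queue lengths.
        resp = [d * (1.0 + qk) for d, qk in zip(demands, q)]
        x = n / sum(resp)                  # system throughput
        q = [x * r for r in resp]          # Little's law per station
    return x, q

x, q = mva(demands=[1.0, 2.0], n_jobs=10)
print(f"throughput {x:.3f}, queue lengths {[round(v, 2) for v in q]}")
```

With demands [1.0, 2.0], throughput is bounded by the bottleneck capacity 1/2 and approaches it as the population grows.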
Mirco Tribastone
mirco.tribastone@imtlucca.it
2015-02-09T10:58:13Z
2015-02-09T10:58:13Z
http://eprints.imtlucca.it/id/eprint/2580
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2580
2015-02-09T10:58:13Z
Modular performance modelling for mobile applications
We propose a model-based approach to analysing the performance of mobile applications where physical mobility and state changes are modelled by graph transformations from which a model in the Performance Evaluation Process Algebra (PEPA) is derived. To fight scalability problems with state space generation we adopt a modular solution where the graph transformation system is decomposed into views, for which labelled transition systems (LTS) are generated separately and later synchronised in PEPA. We demonstrate that the result of this modular analysis is equivalent to that of the monolithic approach and evaluate practicality and scalability by means of a case study.
Niaz Arijo
Reiko Heckel
Mirco Tribastone
mirco.tribastone@imtlucca.it
Stephen Gilmore
2015-02-09T10:46:27Z
2015-02-09T10:46:27Z
http://eprints.imtlucca.it/id/eprint/2579
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2579
2015-02-09T10:46:27Z
Scalable performance evaluation of computer systems
The present paper provides an overview of recent and ongoing research conducted at the Chair of Programming and Software Engineering of LMU Munich on performance evaluation of large-scale computer systems.
Mirco Tribastone
mirco.tribastone@imtlucca.it
2015-02-09T10:29:56Z
2015-02-09T10:29:56Z
http://eprints.imtlucca.it/id/eprint/2578
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2578
2015-02-09T10:29:56Z
Large-scale modelling with the PEPA eclipse plug-in
We report on recent advances in the development of the PEPA Eclipse Plug-in, a software tool which supports a complete modelling workflow for the stochastic process algebra PEPA. The most notable improvements regard the implementation of the population-based semantics, which constitutes the basis for the aggregation of models for large state spaces. Analysis is supported either via an efficient stochastic simulation algorithm or through fluid approximation based on ordinary differential equations. In either case, the functionality is provided by a common graphical interface, which presents the user with a number of wizards that ease the specification of typical performance measures such as average response time or throughput. Behind the scenes, the engine for stochastic simulation has been extended in order to support both transient and steady-state simulation and to calculate confidence levels and correlations without resorting to external tools.
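A minimal sketch of how a simulation engine can report a confidence level without resorting to external tools: independent replications plus a normal-approximation interval around the sample mean. This is generic statistics, not the plug-in's actual estimator; the simulated quantity and run counts are illustrative.

```python
# Confidence interval from independent stochastic-simulation replications.
import random
import statistics

def replicate(seed, n_events=1000):
    rng = random.Random(seed)
    # Stand-in for one simulation run: mean of exponential delays (rate 2).
    return statistics.mean(rng.expovariate(2.0) for _ in range(n_events))

runs = [replicate(s) for s in range(30)]
mean = statistics.mean(runs)
half = 1.96 * statistics.stdev(runs) / len(runs) ** 0.5
print(f"estimate {mean:.3f} +/- {half:.3f} (95% CI)")
```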
Mirco Tribastone
mirco.tribastone@imtlucca.it
Stephen Gilmore
2015-02-09T09:45:25Z
2015-02-09T09:45:25Z
http://eprints.imtlucca.it/id/eprint/2576
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2576
2015-02-09T09:45:25Z
Performance prediction of service-oriented systems with layered queueing networks
We present a method for the prediction of the performance of a service-oriented architecture during its early stage of development. The system under scrutiny is modelled with the UML and two profiles: UML4SOA for specifying the functional behaviour, and MARTE for the non-functional performance-related characterisation. By means of a case study, we show how such a model can be interpreted as a layered queueing network. This target technique has the advantage of employing, as constituent blocks, entities such as threads and processors, which arise very frequently in real deployment scenarios. Furthermore, the analytical methods for the solution of the performance model scale very well with increasing problem sizes, making it possible to efficiently evaluate the behaviour of large-scale systems.
Mirco Tribastone
mirco.tribastone@imtlucca.it
Philip Mayer
Martin Wirsing
2015-02-09T09:41:12Z
2015-02-09T09:41:12Z
http://eprints.imtlucca.it/id/eprint/2575
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2575
2015-02-09T09:41:12Z
Hybrid semantics for PEPA
In order to circumvent the problem of state-space explosion of large-scale Markovian models, the stochastic process algebra PEPA has been given a fluid semantics based on ordinary differential equations, treating all entities as continuous. However, low numbers of instances and/or relatively slow dynamics may make such an approximation too coarse for some parts of the system. To deal with such situations, we propose a hybrid semantics lying between these two extremes, treating parts of the system as discrete and stochastic and others as continuous and deterministic. The underlying mathematical object for the quantitative evaluation is a stochastic hybrid automaton. A case study of a client/server system with breakdowns and repairs is used to discuss the accuracy and the cost of this hybrid analysis.
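A hand-rolled sketch of the hybrid view above, with toy client/server rates: the server's breakdown/repair behaviour is kept discrete and stochastic, while the large client population is a deterministic fluid variable integrated between the random mode switches. All rates and the fluid ODE are invented for illustration.

```python
# Hybrid simulation: stochastic mode switching + deterministic fluid queue.
import random

rng = random.Random(42)
lam, mu = 5.0, 8.0          # client request rate, service rate while up
fail, repair = 0.2, 1.0     # breakdown and repair rates
q, t, up, dt = 0.0, 0.0, True, 1e-3
T_END = 100.0

while t < T_END:
    # Discrete stochastic part: exponential sojourn in the current mode.
    horizon = min(t + rng.expovariate(fail if up else repair), T_END)
    while t < horizon:
        drain = (mu if up else 0.0) * min(q, 1.0)
        q = max(q + (lam - drain) * dt, 0.0)   # continuous deterministic part
        t += dt
    up = not up
print(f"fluid queue level at t = 100: {q:.1f}")
```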
Luca Bortolussi
Vashti Galpin
Jane Hillston
Mirco Tribastone
mirco.tribastone@imtlucca.it
2015-02-09T09:31:35Z
2015-02-09T09:31:35Z
http://eprints.imtlucca.it/id/eprint/2574
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2574
2015-02-09T09:31:35Z
Scalable differential analysis of large process algebra models
This tutorial is concerned with the performance evaluation of hardware/software systems using ordinary differential equations which approximate large-scale continuous-time Markov processes derived from models described with the stochastic process algebra PEPA. The tutorial is divided into three parts. The first part illustrates the main theoretical results. The second part gives an overview of a software tool, the PEPA Eclipse Plug-in, which supports the differential analysis of PEPA. In the last part, this approach is related to other efficient analysis techniques in the literature. In particular, a comparison against layered queues is presented.
Mirco Tribastone
mirco.tribastone@imtlucca.it
2015-02-09T09:22:38Z
2015-02-09T09:22:38Z
http://eprints.imtlucca.it/id/eprint/2573
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2573
2015-02-09T09:22:38Z
Relating layered queueing networks and process algebra models
This paper presents a process-algebraic interpretation of the Layered Queueing Network model. The semantics of layered multi-class servers, resource contention, multiplicity of threads and processors are mapped into a model described in the stochastic process algebra PEPA. The accuracy of the translation is validated through a case study of a distributed computer system and the numerical results are used to discuss the relative strengths and weaknesses of the different forms of analysis available in both approaches, i.e., simulation, mean-value analysis, and differential approximation.
Mirco Tribastone
mirco.tribastone@imtlucca.it
2015-02-09T09:18:02Z
2015-02-09T09:19:12Z
http://eprints.imtlucca.it/id/eprint/2572
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2572
2015-02-09T09:18:02Z
Scalable analysis of scalable systems
We present a systematic method of analysing the scalability of large-scale systems. We construct a high-level model using the SRMC process calculus and generate variants of this using model transformation. The models are compiled into systems of ordinary differential equations and numerically integrated to predict non-functional properties such as responsiveness and scalability.
Allan Clark
Stephen Gilmore
Mirco Tribastone
mirco.tribastone@imtlucca.it
2015-02-06T13:54:13Z
2015-02-06T13:54:13Z
http://eprints.imtlucca.it/id/eprint/2570
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2570
2015-02-06T13:54:13Z
Automatic translation of UML sequence diagrams into PEPA models
The UML profile for modeling and analysis of real time and embedded systems (MARTE) provides a powerful, standardised framework for the specification of non-functional properties of UML models. In this paper we present an automatic procedure to derive PEPA process algebra models from sequence diagrams (SD) to carry out quantitative evaluation. PEPA has recently been enriched with a fluid-flow semantics facilitating the analysis of models of a scale and complexity which would defeat Markovian analysis.
Mirco Tribastone
mirco.tribastone@imtlucca.it
Stephen Gilmore
2015-02-06T13:47:52Z
2015-02-06T13:47:52Z
http://eprints.imtlucca.it/id/eprint/2569
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2569
2015-02-06T13:47:52Z
Automatic extraction of PEPA performance models from UML activity diagrams annotated with the MARTE profile
Recent trends in software engineering lean towards model-centric development methodologies, a context in which the UML plays a crucial role. To provide modellers with quantitative insights into their artifacts, the UML benefits from a framework for software performance evaluation provided by MARTE, the UML profile for model-driven development of Real Time and Embedded Systems. MARTE offers a rich semantics which is general enough to allow different quantitative analysis techniques to act as underlying performance engines. In the present paper we explore the use of the stochastic process algebra PEPA as one such engine, providing a procedure to systematically map activity diagrams onto PEPA models. Independent activity flows are translated into sequential automata which co-ordinate at the synchronisation points expressed by fork and join nodes of the activity. The PEPA performance model is interpreted against a Markovian semantics which allows the calculation of performance indices such as throughput and utilisation. We also discuss the implementation of a new software tool powered by the popular Eclipse platform which implements the fully automatic translation from MARTE-annotated UML activity diagrams to PEPA models.
Mirco Tribastone
mirco.tribastone@imtlucca.it
Stephen Gilmore
2015-02-06T13:43:58Z
2015-02-06T13:43:58Z
http://eprints.imtlucca.it/id/eprint/2568
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2568
2015-02-06T13:43:58Z
Partial evaluation of PEPA models for fluid-flow analysis
We present an application of partial evaluation to performance models expressed in the PEPA stochastic process algebra [1]. We partially evaluate the state-space of a PEPA model in order to remove uses of the cooperation and hiding operators and compile an arbitrary sub-model into a single sequential component. This transformation is applied to PEPA models which are not in the correct form for the application of the fluid-flow analysis for PEPA [2]. The result of the transformation is a PEPA model which is amenable to fluid-flow analysis but which is strongly equivalent [1] to the input PEPA model and so, by an application of Hillston’s theorem, performance results computed from one model are valid for the other. We apply the method to a Markovian model of a key distribution centre used to facilitate secure distribution of cryptographic session keys between remote principals communicating over an insecure network.
Allan Clark
Adam Duguid
Stephen Gilmore
Mirco Tribastone
mirco.tribastone@imtlucca.it
2015-02-06T13:39:18Z
2015-02-06T13:39:18Z
http://eprints.imtlucca.it/id/eprint/2567
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2567
2015-02-06T13:39:18Z
Service-level agreements for service-oriented computing
Service-oriented computing is dynamic. There may be many possible service instances available for binding, leading to uncertainty about where service requests will execute. We present a novel Markovian process calculus which allows the formal expression of uncertainty about binding as found in service-oriented computing. We show how to compute meaningful quantitative information about the quality of service provided in such a setting. These numerical results can be used to allow the expression of accurate service-level agreements about service-oriented computing.
Allan Clark
Stephen Gilmore
Mirco Tribastone
mirco.tribastone@imtlucca.it
2015-02-06T13:36:11Z
2015-02-06T13:36:11Z
http://eprints.imtlucca.it/id/eprint/2566
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2566
2015-02-06T13:36:11Z
Safety and response-time analysis of an automotive accident assistance service
In the present paper we assess both the safety properties and the response-time profile of a subscription service which provides medical assistance to drivers who are injured in vehicular collisions. We use both timed and untimed process calculi cooperatively to perform the required analysis. The formal analysis tools used are hosted on a high-level modelling platform with support for scripting and orchestration which enables users to build custom analysis processes from the general-purpose analysers which are hosted as services on the platform.
Ashok Argent-Katwala
Allan Clark
Howard Foster
Stephen Gilmore
Philip Mayer
Mirco Tribastone
mirco.tribastone@imtlucca.it
2015-02-06T13:31:04Z
2015-02-06T14:11:08Z
http://eprints.imtlucca.it/id/eprint/2565
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2565
2015-02-06T13:31:04Z
The PEPA Plug-in Project
We present a GUI-based tool supporting the stochastic process algebra PEPA with modules for performance evaluation through Markovian steady-state analysis, fluid flow analysis, and stochastic simulation.
Mirco Tribastone
mirco.tribastone@imtlucca.it
2015-02-06T12:10:47Z
2015-02-06T14:09:44Z
http://eprints.imtlucca.it/id/eprint/2563
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2563
2015-02-06T12:10:47Z
An analytical model of a BitTorrent peer
In this paper we propose a Markovian model of BitTorrent. Unlike existing works, which capture demographic dynamics, it focuses on the behavior of individual peers. To this end, we center our attention on a generic peer, called tagged peer (TP); for each possible logical state of a BT peer-to-peer connection maintained by the TP, we consider a stochastic process which counts the number of such links, and characterize them according to their state. Validation is carried out and steady-state analysis is performed in order to illustrate how performance measures can be extracted from our model.
Mario Barbera
Alfio Lombardo
Giovanni Schembra
Mirco Tribastone
mirco.tribastone@imtlucca.it
2015-02-06T12:01:12Z
2015-02-06T12:01:12Z
http://eprints.imtlucca.it/id/eprint/2562
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2562
2015-02-06T12:01:12Z
Replicating web services for scalability
Web service instances are often replicated to allow service provision to scale to support larger population sizes of users. However, such systems are difficult to analyse because the scale and complexity inherent in the system itself poses challenges for accurate qualitative or quantitative modelling. We use two process calculi cooperatively in the analysis of an example Web service replicated across many servers. The SOCK calculus is used to model service-oriented aspects closely and the PEPA calculus is used to analyse the performance of the system under increasing load.
Mario Bravetti
Stephen Gilmore
Claudio Guidi
Mirco Tribastone
mirco.tribastone@imtlucca.it
2015-02-06T11:57:01Z
2015-02-06T12:01:45Z
http://eprints.imtlucca.it/id/eprint/2561
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2561
2015-02-06T11:57:01Z
Evaluating the scalability of a web service-based distributed e-learning and course management system
A growing concern of Web service providers is scalability. An implementation of a Web service may be able at present to support its user base, but how can a provider judge what will happen if that user base grows? We present a modelling approach based on process algebra which allows service providers to investigate how models of Web service execution scale with increasing client population sizes. The method has the benefit of allowing a simple model of the service to be scaled to realistic population sizes without the modeller needing to aggregate or re-model the system.
Stephen Gilmore
Mirco Tribastone
mirco.tribastone@imtlucca.it
2015-02-06T11:52:22Z
2015-02-06T14:06:29Z
http://eprints.imtlucca.it/id/eprint/2560
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2560
2015-02-06T11:52:22Z
A Markov model of a freerider in a BitTorrent P2P network
BitTorrent is today one of the largest P2P file-sharing systems available to Internet users, yet very little modelling effort has been dedicated to it so far. The goal of this paper is to develop an analytical model of a free-rider in a BitTorrent network. Unlike previous analytical models, which capture the behavior of the network as a whole, the proposed model is able to analyze performance from the user's perspective. The model is applied to a case study to evaluate performance in a real setting, and to obtain some insights into the influence of BitTorrent parameters on system performance.
Mario Barbera
Alfio Lombardo
Giovanni Schembra
Mirco Tribastone
mirco.tribastone@imtlucca.it
2015-02-06T11:43:48Z
2015-02-06T11:43:48Z
http://eprints.imtlucca.it/id/eprint/2559
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2559
2015-02-06T11:43:48Z
The PEPA Plug-in project
Mirco Tribastone
mirco.tribastone@imtlucca.it
2015-02-06T11:39:40Z
2015-07-24T12:32:00Z
http://eprints.imtlucca.it/id/eprint/2558
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2558
2015-02-06T11:39:40Z
Refined theory of packages
The fluid approximation for PEPA usually considers large populations of simple interacting sequential components characterised by small local state spaces. A natural question which arises is whether it is possible to extend this technique to composite processes with arbitrarily large local state spaces. In [1] the authors were able to give a positive answer for a certain class of models. The current paper enlarges this class.
Max Tschaikowski
max.tschaikowski@imtlucca.it
Mirco Tribastone
mirco.tribastone@imtlucca.it
2015-02-06T11:16:22Z
2015-02-06T11:16:22Z
http://eprints.imtlucca.it/id/eprint/2557
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2557
2015-02-06T11:16:22Z
Process-algebraic modelling of priority queueing networks
We consider a closed multiclass queueing network model in which each class receives a different priority level and jobs with lower priority are served only if there are no higher-priority jobs in the queue. Such systems do not enjoy a product-form solution, thus their analysis is typically carried out through approximate mean value analysis (AMVA) techniques. We formalise the problem in PEPA in a way amenable to differential analysis. Experimental results show that our approach is competitive with simulation and AMVA methods.
Giuliano Casale
Mirco Tribastone
mirco.tribastone@imtlucca.it
2015-02-06T10:35:18Z
2015-02-06T13:56:54Z
http://eprints.imtlucca.it/id/eprint/2556
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2556
2015-02-06T10:35:18Z
Differential analysis of PEPA models
Mirco Tribastone
mirco.tribastone@imtlucca.it
2015-02-06T10:14:24Z
2015-02-06T10:14:24Z
http://eprints.imtlucca.it/id/eprint/2555
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2555
2015-02-06T10:14:24Z
Bottom-up beats top-down hands down
In PEPA, the calculation of the transitions enabled by a process accounts for a large part of the time needed for the state-space exploration of the underlying Markov chain. Unlike other approaches based on recursion, we present a new technique that is iterative: it traverses the process's binary tree from the sequential components at the leaves up to the root. Empirical results show that this algorithm is faster than a similar implementation employing recursion in Java. Finally, a study on user-perceived performance compares our algorithm with those of other existing tools (ipc/Hydra and the PEPA Workbench).
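The iterative, bottom-up traversal can be sketched with an explicit stack in place of recursion; the tree shape and the "enabled actions" payload below are illustrative, not the PEPA tools' data structures:

```python
class Node:
    """Binary process tree: leaves carry sequential components (here,
    just a set of action names); internal nodes compose subtrees."""
    def __init__(self, actions=None, left=None, right=None):
        self.actions = set(actions or ())
        self.left, self.right = left, right

def enabled_actions(root):
    """Post-order traversal with an explicit stack: each child is
    processed before its parent, so internal nodes only combine
    results already computed for their subtrees (leaves-to-root)."""
    result = {}
    stack = [(root, False)]
    while stack:
        node, visited = stack.pop()
        if node.left is None and node.right is None:   # leaf
            result[id(node)] = set(node.actions)
        elif visited:                                  # children done
            result[id(node)] = result[id(node.left)] | result[id(node.right)]
        else:                                          # defer the node
            stack.append((node, True))
            stack.append((node.left, False))
            stack.append((node.right, False))
    return result[id(root)]
```

Here the set union stands in for whatever per-node computation the real algorithm performs; the point is the iteration order, not the payload.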
Mirco Tribastone
mirco.tribastone@imtlucca.it
2015-02-02T10:31:14Z
2015-11-02T13:06:43Z
http://eprints.imtlucca.it/id/eprint/2553
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2553
2015-02-02T10:31:14Z
The economy of attention in the age of (mis)information
In this work we present a thorough quantitative analysis of the consumption patterns of qualitatively different information on Facebook. Pages are categorized, according to their topics and the communities of interest they pertain to, into a) alternative information sources (diffusing topics that are neglected by science and mainstream media); b) online political activism; and c) mainstream media. We find similar information consumption patterns despite the very different nature of the contents. Then, we classify users according to their interaction patterns among the different topics and measure how they responded to the injection of 2,788 false items of information (parodistic imitations of alternative stories). We find that users prominently interacting with alternative information sources, i.e. those more exposed to unsubstantiated claims, are more prone to interact with intentional and parodistic false claims.
Alessandro Bessi
Antonio Scala
Luc Rossi
Qian Zhang
Walter Quattrociocchi
walter.quattrociocchi@imtlucca.it
2015-02-02T10:25:24Z
2015-02-02T10:25:24Z
http://eprints.imtlucca.it/id/eprint/2552
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2552
2015-02-02T10:25:24Z
Structural patterns of the occupy movement on Facebook
In this work we study a peculiar example of social organization on Facebook: the Occupy Movement -- i.e., an international protest movement against social and economic inequality organized online at the city level. We consider 179 US Facebook public pages during the period between September 2011 and February 2013. The dataset includes 618K active users and 753K posts that received about 5.2M likes and 1.1M comments. By labeling users according to their interaction patterns on pages -- e.g., a user is considered polarized if she has at least 95% of her likes on a specific page -- we find that activities are not locally coordinated by geographically close pages, but are driven by pages linked to major US cities that act as hubs within the various groups. Such a pattern is verified even by extracting the backbone structure -- i.e., filtering statistically relevant weight heterogeneities -- for both the pages-reshares and the pages-common-users networks.
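The polarization criterion described above (at least 95% of a user's likes on a single page) is straightforward to state in code; a hypothetical sketch with made-up page ids:

```python
from collections import Counter

def polarized_page(likes, threshold=0.95):
    """likes: list of page ids liked by one user. Returns the page the
    user is polarized towards (i.e. the page holding at least
    `threshold` of her likes), or None if no such page exists."""
    if not likes:
        return None
    page, count = Counter(likes).most_common(1)[0]
    return page if count / len(likes) >= threshold else None
```

For instance, a user with 19 of 20 likes on one page is labeled polarized towards it, while a 9-of-10 user is not (90% falls below the 95% cut).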
Michela Del Vicario
michela.delvicario@imtlucca.it
Qian Zhang
Alessandro Bessi
Fabiana Zollo
fabiana.zollo@imtlucca.it
Antonio Scala
Guido Caldarelli
guido.caldarelli@imtlucca.it
Walter Quattrociocchi
walter.quattrociocchi@imtlucca.it
2015-02-02T10:18:27Z
2016-04-07T10:13:21Z
http://eprints.imtlucca.it/id/eprint/2550
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2550
2015-02-02T10:18:27Z
Everyday the same picture: popularity and content diversity
Facebook is flooded by diverse and heterogeneous content, ranging from kittens to music and news, by way of satirical and funny stories. Each piece of that corpus reflects the heterogeneity of the underlying social background. On the Italian Facebook we have found an interesting case: a page with more than 40K followers that every day posts the same picture of Toto Cutugno, a popular Italian singer. In this work, we use such a page as a benchmark to study and model the effects of content heterogeneity on popularity. In particular, we use that page for a comparative analysis of information consumption patterns with respect to pages posting science and conspiracy news. In total, we analyze about 2M likes and 190K comments, made by approximately 340K and 65K users, respectively. We conclude the paper by introducing a model mimicking users' selection preferences that accounts for the heterogeneity of contents.
Alessandro Bessi
Fabiana Zollo
fabiana.zollo@imtlucca.it
Michela Del Vicario
michela.delvicario@imtlucca.it
Antonio Scala
Fabio Petroni
Bruno Gonçalves
Walter Quattrociocchi
walter.quattrociocchi@imtlucca.it
2015-02-02T09:53:13Z
2015-02-02T09:53:13Z
http://eprints.imtlucca.it/id/eprint/2549
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2549
2015-02-02T09:53:13Z
Unsupervised and supervised approaches to color space transformation for image coding
The linear transformation of input (typically RGB) data into a color space is important in image compression. Most schemes adopt fixed transforms to decorrelate the color channels. Energy-compaction transforms such as the Karhunen-Loève transform (KLT) entail a complexity increase. Here, we propose a new data-dependent transform (aKLT) that achieves compression performance comparable to the KLT at a fraction of the computational complexity. More importantly, we also consider an application-aware setting, in which a classifier analyzes reconstructed images at the receiver's end. In this context, KLT-based approaches may not be optimal, and transforms that maximize post-compression classifier performance are better suited. Relaxing energy-compactness constraints, we propose for the first time a transform which can be found offline by optimizing the Fisher discrimination criterion in a supervised fashion. In lieu of channel decorrelation, we obtain spatial decorrelation, using the same color transform as a rudimentary classifier to detect objects of interest in the input image without adding any computational cost. Combined with region-of-interest-capable encoders, such as JPEG 2000, we achieve higher savings by encoding these regions at a higher quality.
Massimo Minervini
massimo.minervini@imtlucca.it
Cristian Rusu
Sotirios A. Tsaftaris
sotirios.tsaftaris@imtlucca.it
2015-01-16T09:54:20Z
2015-01-16T09:54:20Z
http://eprints.imtlucca.it/id/eprint/2500
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2500
2015-01-16T09:54:20Z
Causal-consistent reversibility
Reversible computing allows one to execute programs both in the standard, forward direction, and backward, going back to past states. In a concurrent scenario, the correct notion of reversibility is causal-consistent reversibility: any action can be undone, provided that all its consequences (if any) are undone beforehand. In this paper we present an overview of the main approaches, results, and applications of causal-consistent reversibility.
Ivan Lanese
Claudio Antares Mezzina
claudio.mezzina@imtlucca.it
Francesco Tiezzi
francesco.tiezzi@imtlucca.it
2015-01-16T09:29:19Z
2015-01-16T09:32:56Z
http://eprints.imtlucca.it/id/eprint/2499
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2499
2015-01-16T09:29:19Z
A goal model for collective adaptive systems
Antonio Bucchiarone
Claudio Antares Mezzina
claudio.mezzina@imtlucca.it
Heorhi Raik
2015-01-16T09:19:23Z
2015-01-16T09:19:23Z
http://eprints.imtlucca.it/id/eprint/2498
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2498
2015-01-16T09:19:23Z
Collective adaptation in process-based systems
A collective adaptive system is composed of a set of heterogeneous, autonomous and self-adaptive entities that come into a collaboration with one another in order to improve the effectiveness with which they can accomplish their individual goals. In this paper, we offer a characterization of ensembles as the main concept around which systems that exhibit collective adaptability can be built. Our conceptualization of ensembles makes it possible to define a collective adaptive system as an emergent aggregation of autonomous and self-adaptive process-based elements. To elucidate our approach to ensembles and collective adaptation, we draw an example from a scenario in the urban mobility domain, describe an architecture that enables our approach, and show how it can address the problems posed by the motivating scenario.
Antonio Bucchiarone
Claudio Antares Mezzina
claudio.mezzina@imtlucca.it
Marco Pistore
Heorhi Raik
Giuseppe Valetto
2015-01-16T09:01:05Z
2015-01-16T09:01:05Z
http://eprints.imtlucca.it/id/eprint/2497
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2497
2015-01-16T09:01:05Z
Causal-consistent reversible debugging
Reversible debugging provides developers with a way to execute their applications both forward and backward, seeking the cause of an unexpected or undesired event. In a concurrent setting, reversing actions in the exact reverse of the order in which they were executed may lead to undoing many actions that are unrelated to the bug under analysis. On the other hand, undoing actions in an order that violates causal dependencies may lead to states that could not be reached in a forward execution. We propose an approach based on causal-consistent reversibility: each action can be reversed only if all its consequences have already been reversed. The main feature of the approach is that it allows the programmer to easily identify and undo exactly the actions that caused a given misbehavior, until the corresponding bug is reached. This paper's major contribution is the identification of appropriate primitives for causal-consistent reversible debugging and their prototype implementation in the CaReDeb tool. We also show how to apply CaReDeb to pinpoint common real-world concurrent bugs.
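The causal-consistent rule, undo an action only after undoing all its consequences, can be sketched as a consequence-first traversal of a dependency log; the representation below is a toy model of ours, not CaReDeb:

```python
def undo_with_consequences(target, live, deps):
    """Undo `target` causal-consistently: first recursively undo every
    not-yet-undone action that causally depends on it. `deps[a]` is
    the set of actions a depends on; `live` holds the actions not yet
    undone. Returns the order in which actions are undone."""
    order = []
    def visit(a):
        for b in sorted(live):                 # deterministic traversal
            if b != a and a in deps.get(b, set()) and b not in order:
                visit(b)                       # consequences go first
        if a not in order:
            order.append(a)
    visit(target)
    for a in order:
        live.discard(a)
    return order
```

On a chain where `w2` reads from `r1`, which reads from `w1`, asking to undo `w1` rolls back `w2`, then `r1`, then `w1`, never passing through a state unreachable by a forward run.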
Elena Giachino
Ivan Lanese
Claudio Antares Mezzina
claudio.mezzina@imtlucca.it
2015-01-16T08:52:19Z
2015-01-16T08:52:19Z
http://eprints.imtlucca.it/id/eprint/2496
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2496
2015-01-16T08:52:19Z
A conceptual framework for collective adaptive systems
In this paper we propose a conceptual framework to characterize Collective Adaptive Systems. Following the separation-of-concerns principle, we represent these systems as a composition of three components: execution, context, and adaptation. We give a formal definition of all their concepts, defining the corresponding semantics and pointing out the interactions among them. Moreover, we also formalize the main properties that these systems should have, abstracting from any precise specification language or model.
Antonio Bucchiarone
Annapaola Marconi
Claudio Antares Mezzina
claudio.mezzina@imtlucca.it
Marco Pistore
2015-01-16T08:49:08Z
2015-01-16T08:49:08Z
http://eprints.imtlucca.it/id/eprint/2495
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2495
2015-01-16T08:49:08Z
CAptLang: a language for context-aware and adaptable business processes
Run-time adaptability is a key feature of dynamic business environments, where processes need to be constantly refined and restructured to deal with context changes. In this paper, we present CAptLang, a language to model context-aware and adaptable business processes, whose main feature is the possibility of leaving the handling of extraordinary or improbable situations to run time. We present CAptLang with its formal syntax and semantics. Moreover, we show how its semantics has been used to guide the implementation of a Java-based business-process execution engine, a component of the ASTRO-CAptEvo adaptation framework.
Antonio Bucchiarone
Claudio Antares Mezzina
claudio.mezzina@imtlucca.it
Marco Pistore
2015-01-16T08:44:12Z
2015-01-16T08:44:12Z
http://eprints.imtlucca.it/id/eprint/2494
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2494
2015-01-16T08:44:12Z
Concurrent flexible reversibility
Concurrent reversibility has been studied in different areas, such as biological systems and dependable distributed systems. However, only “rigid” reversibility has been considered, which allows going back to a past state and restarting the exact same computation, possibly leading to divergence. In this paper, we present croll-π, a concurrent calculus featuring flexible reversibility, allowing the specification of alternatives to a computation to be used upon rollback. Alternatives in croll-π are attached to messages. We show the robustness of this mechanism by encoding more complex idioms for specifying flexible reversibility, and we illustrate the benefits of our approach by encoding a calculus of communicating transactions.
Ivan Lanese
Michael Lienhardt
Claudio Antares Mezzina
claudio.mezzina@imtlucca.it
Alan Schmitt
Jean-Bernard Stefani
2015-01-16T08:39:16Z
2015-01-16T08:39:16Z
http://eprints.imtlucca.it/id/eprint/2493
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2493
2015-01-16T08:39:16Z
On-the-fly adaptation of dynamic service-based systems: incrementality, reduction and reuse
In on-the-fly adaptation, adaptation activities are not explicitly represented at design time but are discovered and managed at run time, considering all aspects of the execution environment. In this paper we present a comprehensive framework for the on-the-fly adaptation of highly dynamic service-based systems. The framework relies on advanced context-aware adaptation techniques that allow for i) incremental handling of complex adaptation problems by interleaving problem solving and solution execution, ii) reduction of the complexity of each adaptation problem by minimizing the search space according to the specific execution context, and iii) reuse of adaptation solutions by learning from past executions. We evaluate the applicability of the proposed approach on a real-world scenario based on the operation of the Bremen sea port.
Antonio Bucchiarone
Annapaola Marconi
Claudio Antares Mezzina
claudio.mezzina@imtlucca.it
Marco Pistore
Heorhi Raik
2015-01-16T08:31:09Z
2015-01-16T08:31:09Z
http://eprints.imtlucca.it/id/eprint/2492
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2492
2015-01-16T08:31:09Z
Towards modeling and execution of Collective Adaptive Systems
Collective Adaptive Systems comprise large numbers of heterogeneous entities that can join and leave the system at any time depending on their own objectives. In the scope of pervasive computing, both physical and virtual entities may exist, e.g., buses and their passengers using mobile devices, as well as city-wide traffic coordination systems. In this paper we introduce a novel conceptual framework that enables Collective Adaptive Systems based on well-founded and widely accepted paradigms and technologies such as service orientation, distributed systems, context-aware computing, and adaptation of composite systems. Toward this goal, we also present an architecture that underpins the envisioned framework, discuss the current state of our implementation effort, and outline the open issues and challenges in the field.
Vasilios Andrikopoulos
Antonio Bucchiarone
Santiago Gómez Sáez
Dimka Karastoyanova
Claudio Antares Mezzina
claudio.mezzina@imtlucca.it
2015-01-15T13:24:00Z
2015-01-15T13:24:00Z
http://eprints.imtlucca.it/id/eprint/2491
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2491
2015-01-15T13:24:00Z
A reversible abstract machine and its space overhead
We study in this paper the cost of making a concurrent programming language reversible. More specifically, we take an abstract machine for a fragment of the Oz programming language and make it reversible. We show that the overhead of the reversible machine with respect to the original one in terms of space is at most linear in the number of execution steps. We also show that this bound is tight since some programs cannot be made reversible without storing a commensurate amount of information.
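The linear space bound can be seen in a toy undo-log store: each forward step appends exactly one history entry, so memory grows linearly with the number of executed steps. This sketch is purely illustrative; it is not the Oz abstract machine studied in the paper:

```python
_ABSENT = object()   # marker: the key did not exist before this step

class ReversibleStore:
    """A store where every assignment logs the value it overwrites,
    so any run can be undone step by step. The history grows by one
    entry per forward step -- linear in the number of steps."""
    def __init__(self):
        self.mem = {}
        self.history = []

    def assign(self, key, value):
        self.history.append((key, self.mem.get(key, _ABSENT)))
        self.mem[key] = value

    def undo(self):
        key, old = self.history.pop()
        if old is _ABSENT:
            del self.mem[key]      # the key was fresh: remove it
        else:
            self.mem[key] = old    # restore the overwritten value
```

The matching lower bound in the paper says this is essentially unavoidable: some programs must store a commensurate amount of information to stay reversible.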
Michael Lienhardt
Ivan Lanese
Claudio Antares Mezzina
claudio.mezzina@imtlucca.it
Jean-Bernard Stefani
2015-01-15T13:15:40Z
2015-01-15T13:15:40Z
http://eprints.imtlucca.it/id/eprint/2490
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2490
2015-01-15T13:15:40Z
Controlled reversibility and compensations
In this paper we report the main ideas of an ongoing thread of research that aims at exploiting reversibility mechanisms to define programming abstractions for dependable distributed systems. In particular, we discuss the issues posed by concurrency in the definition of controlled forms of reversibility. We also discuss the need to introduce compensations to deal with irreversible actions and to avoid repeating past errors.
Ivan Lanese
Claudio Antares Mezzina
claudio.mezzina@imtlucca.it
Jean-Bernard Stefani
2015-01-15T13:12:28Z
2015-01-15T13:12:28Z
http://eprints.imtlucca.it/id/eprint/2489
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2489
2015-01-15T13:12:28Z
Controlling reversibility in higher-order pi
We present in this paper a fine-grained rollback primitive for the higher-order π-calculus (HOπ), that builds on the reversibility apparatus of reversible HOπ [9]. The definition of a proper semantics for such a primitive is a surprisingly delicate matter because of the potential interferences between concurrent rollbacks. We define in this paper a high-level operational semantics which we prove sound and complete with respect to reversible HOπ backward reduction. We also define a lower-level distributed semantics, which is closer to an actual implementation of the rollback primitive, and we prove it to be fully abstract with respect to the high-level semantics.
Ivan Lanese
Claudio Antares Mezzina
claudio.mezzina@imtlucca.it
Alan Schmitt
Jean-Bernard Stefani
2015-01-15T13:06:00Z
2015-01-15T13:06:00Z
http://eprints.imtlucca.it/id/eprint/2488
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2488
2015-01-15T13:06:00Z
Reversing higher-order pi
The notion of reversible computation is attracting increasing interest because of its applications in diverse fields, in particular the study of programming abstractions for reliable systems. In this paper, we continue the study undertaken by Danos and Krivine on reversible CCS by defining a reversible higher-order π-calculus (HOπ). We prove that reversibility in our calculus is causally consistent and that one can encode faithfully reversible HOπ into a variant of HOπ.
Ivan Lanese
Claudio Antares Mezzina
claudio.mezzina@imtlucca.it
Jean-Bernard Stefani
2015-01-15T13:01:59Z
2015-01-15T13:01:59Z
http://eprints.imtlucca.it/id/eprint/2487
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2487
2015-01-15T13:01:59Z
Typing component-based communication systems
Building complex component-based software systems, for instance communication systems based on the Click, Coyote, Appia, or Dream frameworks, can lead to subtle assemblage errors. We present a novel type system and type inference algorithm that prevent interconnection and message-handling errors when assembling component-based communication systems. These errors are typically not captured by classical type systems of host programming languages such as Java or ML. We have implemented our approach by extending the architecture description language (ADL) toolset used by the Dream framework, and used it to check Dream-based communication systems.
Michael Lienhardt
Claudio Antares Mezzina
claudio.mezzina@imtlucca.it
Alan Schmitt
Jean-Bernard Stefani
2015-01-13T15:04:49Z
2015-02-18T12:06:10Z
http://eprints.imtlucca.it/id/eprint/2480
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2480
2015-01-13T15:04:49Z
Sparse solutions to the average consensus problem via L1-Norm regularization of the fastest mixing Markov-Chain problem
In the “consensus problem” on multi-agent systems, in which the states of the agents are “opinions”, the agents aim at reaching a common opinion (or “consensus state”) through local exchange of information. An important design problem is to choose the degree of interconnection of the subsystems so as to achieve a good trade-off between a small number of interconnections and a fast convergence to the consensus state, which is the average of the initial opinions under mild conditions. This paper addresses this problem through l1-norm regularized versions of the well-known fastest mixing Markov-chain problem, which are investigated theoretically. In particular, it is shown that such versions can be interpreted as “robust” forms of the fastest mixing Markov-chain problem. Theoretical results useful to guide the choice of the regularization parameters are also provided, together with a numerical example.
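The local-exchange dynamics can be sketched as repeated averaging x <- W x with a doubly stochastic weight matrix W, under which all opinions converge to the average of the initial values; the 3-agent weights below are an illustrative assumption, not one of the paper's optimized matrices:

```python
def consensus_step(W, x):
    """One round of local averaging: each agent replaces its opinion
    with a weighted mean of its own and its neighbours' opinions."""
    n = len(x)
    return [sum(W[i][j] * x[j] for j in range(n)) for i in range(n)]

# Symmetric, doubly stochastic weights on a 3-agent path graph.
W = [[0.50, 0.50, 0.00],
     [0.50, 0.25, 0.25],
     [0.00, 0.25, 0.75]]
x = [0.0, 1.0, 5.0]
for _ in range(200):
    x = consensus_step(W, x)
# every entry of x now approximates the average of the initial opinions
```

Sparsifying W (the paper's l1-regularization) removes interconnections at the cost of a smaller spectral gap, i.e. slower convergence of exactly this iteration.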
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Rita Morisi
rita.morisi@imtlucca.it
Alberto Bemporad
alberto.bemporad@imtlucca.it
2015-01-13T14:59:41Z
2015-01-13T14:59:41Z
http://eprints.imtlucca.it/id/eprint/2479
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2479
2015-01-13T14:59:41Z
A proximal alternating minimization method for L0-Regularized nonlinear optimization problems: application to state estimation
In this paper we consider the minimization of l0-regularized nonlinear optimization problems, where the objective function is the sum of a smooth convex term and the l0 quasi-norm of the decision variable. We introduce the class of coordinatewise minimizers and prove that any point in this class is a local minimum for our l0-regularized problem. Then, we devise a random proximal alternating minimization method, which has a simple iteration and is suitable for solving this class of optimization problems. Under convexity and coordinatewise Lipschitz gradient assumptions, we prove that any limit point of the sequence generated by our new algorithm belongs to the class of coordinatewise minimizers almost surely. We also show that the state estimation of dynamical systems with corrupted measurements can be modeled in our framework. Numerical experiments on state estimation of power systems, using IEEE bus test cases, show that our algorithm performs favorably on such problems.
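For the l0 term alone, the proximal step reduces to componentwise hard thresholding, a standard fact that proximal methods of this kind build on; a minimal sketch (the function name is ours, and this is only the prox step, not the paper's full alternating scheme):

```python
import math

def prox_l0(v, lam):
    """Proximal operator of lam * ||.||_0, solved componentwise:
    argmin_z 0.5*(z - v_i)**2 + lam*[z != 0].
    Keeping v_i costs lam; zeroing it costs 0.5*v_i**2, so the
    component survives only when |v_i| > sqrt(2*lam)."""
    thr = math.sqrt(2.0 * lam)
    return [vi if abs(vi) > thr else 0.0 for vi in v]
```

A full method alternates such thresholding steps with (randomly chosen) coordinate updates of the smooth term.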
Andrei-Mihai Patrascu
Ion Necoara
Panagiotis Patrinos
panagiotis.patrinos@imtlucca.it
2015-01-13T14:42:02Z
2015-01-13T14:42:02Z
http://eprints.imtlucca.it/id/eprint/2478
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2478
2015-01-13T14:42:02Z
A unified framework for solving a general class of conditional and robust set-membership estimation problems
In this paper, we present a unified framework for solving a general class of problems arising in the context of set-membership estimation/identification theory. More precisely, the paper aims at providing an original approach for the computation of optimal conditional and robust projection estimates in a nonlinear estimation setting, where the operator relating the data and the parameter to be estimated is assumed to be a generic multivariate polynomial function, and the uncertainties affecting the data are assumed to belong to semialgebraic sets. By noticing that the computation of both the conditional and the robust projection optimal estimators requires the solution to min-max optimization problems that share the same structure, we propose a unified two-stage approach based on semidefinite-relaxation techniques for solving such estimation problems. The key idea of the proposed procedure is to recognize that the optimal functional of the inner optimization problems can be approximated to any desired precision by a multivariate polynomial function by suitably exploiting recently proposed results in the field of parametric optimization. Two simulation examples are reported to show the effectiveness of the proposed approach.
Vito Cerone
Jean-Bernard Lasserre
Dario Piga
dario.piga@imtlucca.it
Diego Regruto
2015-01-13T14:34:09Z
2015-11-02T09:57:27Z
http://eprints.imtlucca.it/id/eprint/2477
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2477
2015-01-13T14:34:09Z
Characteristic polynomial assignment for plants with semialgebraic uncertainty: a robust diophantine equation approach
In this paper, we address the problem of robust characteristic polynomial assignment for LTI systems whose parameters are assumed to belong to a semialgebraic uncertainty region. The objective is to design a dynamic fixed-order controller in order to constrain the coefficients of the closed-loop characteristic polynomial within prescribed intervals. First, necessary conditions on the plant parameters for the existence of a robust controller are reviewed, and it is shown that such conditions are satisfied if and only if a suitable Sylvester matrix is nonsingular for all possible values of the uncertain plant parameters. The problem of checking such a robust nonsingularity condition is formulated in terms of a nonconvex optimization problem. Then, the set of all feasible robust controllers is sought through the solution to a suitable robust diophantine equation. Convex relaxation techniques based on sum-of-square decomposition of positive polynomials are used to efficiently solve the formulated optimization problems by means of semidefinite programming. The presented approach provides a generalization of the results previously proposed in the literature on the problem of assigning the characteristic polynomial in the presence of plant parametric uncertainty.
Vito Cerone
Dario Piga
dario.piga@imtlucca.it
Diego Regruto
2015-01-13T14:24:45Z
2015-01-13T14:24:45Z
http://eprints.imtlucca.it/id/eprint/2476
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2476
2015-01-13T14:24:45Z
A bias-corrected estimator for nonlinear systems with output-error type model structures
Parametric identification of linear time-invariant (LTI) systems with output-error (OE) type noise model structures has a well-established theoretical framework. Different algorithms, like instrumental-variables-based approaches or prediction error methods (PEMs), have been proposed in the literature to compute a consistent parameter estimate for linear OE systems. Although the prediction error method provides a consistent parameter estimate also for nonlinear output-error (NOE) systems, it requires computing the solution of a nonconvex optimization problem. Therefore, an accurate initialization of the numerical optimization algorithms is required, otherwise they may get stuck in a local minimum and, as a consequence, the computed estimate of the system might not be accurate. In this paper, we propose an approach to obtain, in a computationally efficient fashion, a consistent parameter estimate for output-error systems with polynomial nonlinearities. The performance of the method is demonstrated through a simulation example.
Dario Piga
dario.piga@imtlucca.it
Roland Tóth
2015-01-13T14:22:23Z
2015-01-13T14:22:23Z
http://eprints.imtlucca.it/id/eprint/2475
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2475
2015-01-13T14:22:23Z
Approximation of model predictive control laws for polynomial systems
A fast implementation of a given predictive controller for polynomial systems is introduced by approximating the optimal control law with a piecewise-constant function defined over a hypercube partition of the system state space. Such a state-space partition is computed so as to guarantee stability, an a priori fixed trajectory error, and fulfilment of input and state constraints. The presented approximation procedure is carried out by solving a set of nonconvex polynomial optimization problems, whose approximate solutions are computed by means of semidefinite relaxation techniques for semialgebraic problems.
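Once the partition and the constant control values are precomputed offline, the online evaluation is just a cell lookup; a sketch over a uniform hypercube grid (uniformity and the bounds are our simplifying assumptions, not the paper's partition):

```python
def cell_index(x, lo, hi, cells_per_dim):
    """Map a state x in the box [lo, hi]^n to the flat index of the
    hypercube cell containing it, clamping the upper boundary."""
    idx = 0
    for xi, l, h in zip(x, lo, hi):
        k = min(int((xi - l) / (h - l) * cells_per_dim),
                cells_per_dim - 1)
        idx = idx * cells_per_dim + k
    return idx
```

The online controller is then `u = table[cell_index(x, lo, hi, K)]` for a precomputed `table` of constant inputs, which is why the scheme is fast at run time.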
Massimo Canale
Vito Cerone
Dario Piga
dario.piga@imtlucca.it
Diego Regruto
2015-01-13T14:18:27Z
2015-01-13T14:18:27Z
http://eprints.imtlucca.it/id/eprint/2474
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2474
2015-01-13T14:18:27Z
An SDP approach for l0-minimization: application to ARX model segmentation
Minimizing the ℓ0-seminorm of a vector under convex constraints is a combinatorial (NP-hard) problem. Replacement of the ℓ0-seminorm with the ℓ1-norm is a commonly used approach to compute an approximate solution of the original ℓ0-minimization problem by means of convex programming. In the theory of compressive sensing, the condition that the sensing matrix satisfies the Restricted Isometry Property (RIP) is sufficient to guarantee that the solution of the ℓ1-approximated problem is equal to the solution of the original ℓ0-minimization problem. However, evaluating the conservativeness of the ℓ1-relaxation approach is recognized to be a difficult task when the RIP is not satisfied. In this paper, we present an alternative approach to minimize the ℓ0-seminorm of a vector under given constraints. In particular, we show that an ℓ0-minimization problem can be relaxed into a sequence of semidefinite programming problems, whose solutions are guaranteed to converge to the optimizer (if unique) of the original combinatorial problem even when the RIP is not satisfied. Segmentation of ARX models is then discussed in order to show, through a relevant problem in system identification, that the proposed approach outperforms the ℓ1-based relaxation in detecting piecewise-constant parameter changes in the estimated model.
Dario Piga
dario.piga@imtlucca.it
Roland Tóth
2015-01-13T14:12:42Z
2015-01-13T14:12:42Z
http://eprints.imtlucca.it/id/eprint/2473
2015-01-13T14:12:42Z
A convex relaxation approach to set-membership identification of LPV systems
Identification of linear parameter-varying models is considered in this paper, under the assumption that both the output and the scheduling parameter measurements are affected by bounded noise. First, the problem of computing parameter uncertainty intervals is formulated in terms of nonconvex optimization. Then, on the basis of the analysis of the regressor structure, we present an ad hoc convex relaxation scheme for computing parameter bounds by means of semidefinite optimization.
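In the simplest scalar case the mechanics of bounded-noise parameter bounds can be made explicit: for y_k = theta*u_k + e_k with |e_k| <= eps, each sample confines theta to an interval and the feasible parameter set is their intersection, so the exact uncertainty interval needs no relaxation. A pure-Python sketch with illustrative data; the LPV setting of the paper, with noisy scheduling measurements, leads instead to nonconvex problems:

```python
def parameter_interval(us, ys, eps):
    """Exact feasible interval for theta in y = theta*u + e, |e| <= eps,
    assuming all inputs u are strictly positive: each sample k gives
    (y_k - eps)/u_k <= theta <= (y_k + eps)/u_k, and the feasible set
    is the intersection of these intervals."""
    lo = max((y - eps) / u for u, y in zip(us, ys))
    hi = min((y + eps) / u for u, y in zip(us, ys))
    return lo, hi

# Data generated from theta = 2 with noise inside the bound (illustrative).
us = [1.0, 2.0, 4.0]
ys = [2.1, 3.9, 8.05]
lo, hi = parameter_interval(us, ys, eps=0.2)
print(lo, hi)
```

The printed interval contains the true value 2; with more samples it can only shrink, which is the set-membership guarantee the abstract refers to.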
Vito Cerone
Dario Piga
dario.piga@imtlucca.it
Diego Regruto
2015-01-13T14:08:50Z
2015-01-13T14:08:50Z
http://eprints.imtlucca.it/id/eprint/2472
2015-01-13T14:08:50Z
Fixed-order FIR approximation of linear systems from quantized input and output data
The problem of identifying a fixed-order FIR approximation of linear systems with unknown structure, assuming that both input and output measurements are subject to quantization, is dealt with in this paper. A fixed-order FIR model providing the best approximation of the input–output relationship is sought by minimizing the worst-case distance between the output of the true system and the modeled output, for all possible values of the input and output data consistent with their quantized measurements. The considered problem is first formulated in terms of robust optimization. Then, two different algorithms to compute the optimum of the formulated problem by means of linear programming techniques are presented. The effectiveness of the proposed approach is illustrated by means of a simulation example.
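As a point of contrast with the worst-case formulation, the naive baseline is to treat the quantized records as if they were exact and fit the FIR taps by ordinary least squares. The sketch below does exactly that on an illustrative 2-tap system (the quantizer step, the signals and the coefficients are assumptions; the paper's robust linear-programming approach, which accounts for all data consistent with the quantizer, is not implemented here):

```python
def quantize(v, step=0.5):
    # Uniform quantizer: only the quantized value is observed.
    return round(v / step) * step

# True system: a 2-tap FIR y_k = 0.8*u_k + 0.3*u_{k-1} (illustrative).
u = [0.9, -1.2, 0.4, 1.5, -0.7, 0.2, 1.1, -0.5]
y = [0.8 * u[k] + 0.3 * u[k - 1] for k in range(1, len(u))]

# Naive baseline: use quantized records as exact data and solve the
# 2x2 normal equations of least squares directly.
uq = [quantize(v) for v in u]
yq = [quantize(v) for v in y]
rows = [(uq[k], uq[k - 1]) for k in range(1, len(uq))]
s11 = sum(a * a for a, _ in rows)
s12 = sum(a * b for a, b in rows)
s22 = sum(b * b for _, b in rows)
t1 = sum(a * yv for (a, _), yv in zip(rows, yq))
t2 = sum(b * yv for (_, b), yv in zip(rows, yq))
det = s11 * s22 - s12 * s12
b0 = (s22 * t1 - s12 * t2) / det
b1 = (s11 * t2 - s12 * t1) / det
print(round(b0, 3), round(b1, 3))
```

With a coarse quantizer the least-squares estimate is biased by the quantization error; the worst-case formulation in the abstract instead bounds the approximation quality over every data sequence consistent with the observed quantized records.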
Vito Cerone
Dario Piga
dario.piga@imtlucca.it
Diego Regruto
2015-01-13T14:01:48Z
2015-01-13T14:01:48Z
http://eprints.imtlucca.it/id/eprint/2471
2015-01-13T14:01:48Z
Computational load reduction in bounded error identification of Hammerstein systems
In this technical note we present a procedure for the identification of Hammerstein systems from measurements affected by bounded noise. First, we show that computation of tight parameter bounds requires the solution to nonconvex optimization problems where the number of decision variables increases with the length of the experimental data sequence. Then, in order to reduce the computational burden of the identification problem, we propose a procedure to relax the formulated problem into a collection of polynomial optimization problems where the number of variables does not depend on the number of measurements. Advantages of the presented approach with respect to previously published results are discussed and highlighted by means of a simulation example.
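For reference, the block structure being identified can be written down directly: a static polynomial nonlinearity followed by an FIR linear block. The sketch below only simulates that structure with illustrative coefficients; it does not compute parameter bounds. In the bounded-error setting, each output sample additionally carries an unknown noise variable, which is what makes the number of decision variables grow with the data length in the tight-bound formulation:

```python
def hammerstein_sim(u, gamma, b):
    """Hammerstein model: static polynomial nonlinearity
    f(u) = gamma[0]*u + gamma[1]*u**2 + ... feeding an FIR block with
    taps b (coefficients here are illustrative, not from the paper)."""
    x = [sum(g * uk ** (i + 1) for i, g in enumerate(gamma)) for uk in u]
    return [sum(bj * x[k - j] for j, bj in enumerate(b) if k - j >= 0)
            for k in range(len(x))]

u = [1.0, -0.5, 0.25, 2.0]
y = hammerstein_sim(u, gamma=[1.0, 0.5], b=[1.0, 0.3])
print([round(v, 5) for v in y])
```

Note that the inner signal x is never measured; only u and a noisy version of y are available, which is why the identification problem is nonconvex in the block parameters.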
Vito Cerone
Dario Piga
dario.piga@imtlucca.it
Diego Regruto
2015-01-13T13:57:44Z
2015-01-13T14:35:23Z
http://eprints.imtlucca.it/id/eprint/2470
2015-01-13T13:57:44Z
Bounding the parameters of block-structured nonlinear feedback systems
In this paper, a procedure for set-membership identification of block-structured nonlinear feedback systems is presented. Nonlinear block parameter bounds are first computed by exploiting steady-state measurements. Then, given the uncertain description of the nonlinear block, bounds on the unmeasurable inner signal are computed. Finally, linear block parameter bounds are evaluated on the basis of output measurements and computed inner-signal bounds. The computation of both the nonlinear block parameters and the inner-signal bounds is formulated in terms of semialgebraic optimization and solved by means of suitable convex LMI relaxation techniques. The problem of linear block parameter evaluation is formulated in terms of a bounded errors-in-variables identification problem.
Vito Cerone
Dario Piga
dario.piga@imtlucca.it
Diego Regruto
2015-01-13T13:52:43Z
2015-01-13T14:35:42Z
http://eprints.imtlucca.it/id/eprint/2469
2015-01-13T13:52:43Z
Optimization of airborne wind energy generators
This paper presents novel results related to an innovative airborne wind energy technology, named Kitenergy, for the conversion of high-altitude wind energy into electricity. The research activities carried out in the last five years, including theoretical analyses, numerical simulations, and experimental tests, indicate that Kitenergy could bring forth a revolution in wind energy generation, providing renewable energy in large quantities at a lower cost than fossil energy. This work investigates three important theoretical aspects: the evaluation of the performance achieved by the employed control law, the optimization of the generator operating cycle, and the possibility of continuously generating a constant, maximal power output. These issues are tackled through the combined use of modeling, control, and optimization methods, which prove to be key technologies for a significant breakthrough in renewable energy generation.
Lorenzo Fagiano
Mario Milanese
Dario Piga
dario.piga@imtlucca.it
2015-01-13T13:40:47Z
2015-01-13T13:40:47Z
http://eprints.imtlucca.it/id/eprint/2468
2015-01-13T13:40:47Z
Bounded error identification of Hammerstein systems through sparse polynomial optimization
In this paper we present a procedure for the evaluation of bounds on the parameters of Hammerstein systems, from output measurements affected by bounded errors. The identification problem is formulated in terms of polynomial optimization, and relaxation techniques, based on linear matrix inequalities, are proposed to evaluate parameter bounds by means of convex optimization. The structured sparsity of the formulated identification problem is exploited to reduce the computational complexity of the convex relaxed problem. Analysis of convergence properties and computational complexity is reported.
Vito Cerone
Dario Piga
dario.piga@imtlucca.it
Diego Regruto
2015-01-13T13:28:37Z
2015-01-13T13:28:37Z
http://eprints.imtlucca.it/id/eprint/2467
2015-01-13T13:28:37Z
Set-Membership Error-in-variables identification through convex relaxation techniques
In this technical note, the set-membership error-in-variables identification problem is considered, that is, the identification of linear dynamic systems when both output and input measurements are corrupted by bounded noise. A new approach for the computation of parameter uncertainty intervals is presented. First, the identification problem is formulated in terms of nonconvex optimization. Then, relaxation techniques based on linear matrix inequalities are employed to evaluate parameter bounds by means of convex optimization. The inherent structured sparsity of the original identification problem is exploited to reduce the computational complexity of the relaxed problems. Finally, convergence properties and complexity of the proposed procedure are discussed. Advantages of the presented technique with respect to previously published results are discussed and shown by means of two simulated examples.
Vito Cerone
Dario Piga
dario.piga@imtlucca.it
Diego Regruto
2015-01-12T14:46:07Z
2015-01-12T14:46:07Z
http://eprints.imtlucca.it/id/eprint/2466
2015-01-12T14:46:07Z
Enforcing stability constraints in set-membership identification of linear dynamic systems
In this paper, we consider the identification of linear systems, a priori known to be stable, from input–output data corrupted by bounded noise. By taking explicitly into account a priori information on system stability, a formal definition of the feasible parameter set for a stable linear system is provided. On the basis of a detailed analysis of the geometrical structure of the feasible set, convex relaxation techniques are presented to solve nonconvex optimization problems arising in the computation of parameter uncertainty intervals. Properties of the computed relaxed bounds are discussed. A simulated example is presented to show the effectiveness of the proposed technique.
Vito Cerone
Dario Piga
dario.piga@imtlucca.it
Diego Regruto
2015-01-12T14:39:42Z
2015-01-13T14:49:53Z
http://eprints.imtlucca.it/id/eprint/2465
2015-01-12T14:39:42Z
Set-membership LPV model identification of vehicle lateral dynamics
Set-membership identification of a Linear Parameter Varying (LPV) model describing the vehicle lateral dynamics is addressed in the paper. The model structure, chosen as much as possible on the ground of physical insights into the vehicle lateral behavior, consists of two single-input single-output LPV models relating the steering angle to the yaw rate and to the sideslip angle. A set of experimental data obtained by performing a large number of maneuvers is used to identify the vehicle lateral dynamics model. Prior information on the error bounds on the output and the time-varying parameter measurements is taken into account. Comparison with other vehicle lateral dynamics models is discussed.
Vito Cerone
Dario Piga
dario.piga@imtlucca.it
Diego Regruto
2015-01-12T14:32:40Z
2015-01-12T14:32:40Z
http://eprints.imtlucca.it/id/eprint/2464
2015-01-12T14:32:40Z
Improved parameter bounds for set-membership EIV problems
In this paper, we consider the set-membership error-in-variables identification problem, that is, the identification of linear dynamic systems when output and input measurements are corrupted by bounded noise. A new approach for the computation of parameter uncertainty intervals is presented. First, the problem is formulated in terms of nonconvex optimization. Then, a relaxation procedure is proposed to compute parameter bounds by means of semidefinite programming techniques. Finally, accuracy of the estimate and computational complexity of the proposed algorithm are discussed. Advantages of the proposed technique with respect to previously published ones are discussed both theoretically and by means of a simulated example.
Vito Cerone
Dario Piga
dario.piga@imtlucca.it
Diego Regruto
2015-01-12T14:29:10Z
2015-01-12T14:29:10Z
http://eprints.imtlucca.it/id/eprint/2463
2015-01-12T14:29:10Z
High-Altitude wind power generation
The paper presents the innovative technology of high-altitude wind power generation, referred to as Kitenergy, which exploits the automatic flight of tethered airfoils (e.g., power kites) to extract energy from wind blowing between 200 and 800 m above the ground. The key points of this technology are described and the design of large-scale plants is investigated, in order to show that it has the potential to overcome the limits of current wind turbines and to provide large quantities of renewable energy at a cost competitive with fossil sources. Such claims are supported by the results obtained so far in the Kitenergy project, under way at Politecnico di Torino, Italy, including numerical simulations, prototype experiments, and wind data analyses.
Lorenzo Fagiano
Mario Milanese
Dario Piga
dario.piga@imtlucca.it
2015-01-12T13:20:30Z
2015-01-12T13:20:30Z
http://eprints.imtlucca.it/id/eprint/2461
2015-01-12T13:20:30Z
Shrinking complexity of scheduling dependencies in LS-SVM based LPV system identification
In the past years, Linear Parameter-Varying (LPV) identification has rapidly evolved from parametric identification methods to nonparametric methods allowing the relaxation of restrictive assumptions. For example, Least-Squares Support Vector Machines (LS-SVMs) offer an attractive way of estimating LPV models directly from data, without requiring the user to specify the functional dependencies of the model coefficients on the scheduling variable. These methods have also recently been extended to determine the model order automatically from data with the help of regularization. Nonetheless, despite all these recent improvements, LPV identification methods still require some strong a priori assumptions, such as i) whether the dependencies are static or dynamic, ii) which variables are to be considered as scheduling variables, or iii) that all coefficient functions of the underlying system depend on all scheduling variables. This prevents the complexity of the scheduling dependency of the model from being shrunk gradually and independently until an optimal bias-variance trade-off is found. In this paper, a novel reformulation of the LPV LS-SVM approach is proposed which, besides the nonparametric estimation of the coefficient functions, achieves data-driven coefficient complexity selection via convex optimization. The properties of the introduced approach are illustrated by a simulation study.
René Duijkers
Roland Tóth
Dario Piga
dario.piga@imtlucca.it
Vincent Laurain
2015-01-12T12:49:00Z
2015-01-12T12:49:00Z
http://eprints.imtlucca.it/id/eprint/2460
2015-01-12T12:49:00Z
LPV model order selection in an LS-SVM setting
In parametric identification of Linear Parameter-Varying (LPV) systems, the scheduling dependencies of the model coefficients are commonly parameterized in terms of linear combinations of a priori selected basis functions. Such functions need to be adequately chosen, e.g., on the basis of first principles or expert knowledge of the system, in order to capture the unknown dependencies of the model coefficient functions on the scheduling variable and, at the same time, to achieve a low variance of the model estimate by limiting the number of parameters to be identified. This problem, together with the well-known model order selection problem (in terms of number of input lags, output lags and input delay of the model structure) in system identification, can be interpreted as a trade-off between bias and variance of the resulting model estimate. The problem of basis function selection can be avoided by using a nonparametric estimator of the coefficient functions in terms of a recently proposed Least-Squares Support Vector Machine (LS-SVM) approach. However, the selection of the model order still appears to be an open problem in the identification of LPV systems via the LS-SVM method. In this paper, we propose a novel reformulation of the LPV LS-SVM approach which, besides the nonparametric estimation of the coefficient functions, achieves data-driven model order selection via convex optimization. The properties of the introduced approach are illustrated via a simulation example.
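The nonparametric ingredient can be sketched with a plain kernel-ridge/LS-SVM dual step: given samples of an unknown coefficient function at scheduling points p_k, solve (K + lam*I) alpha = y and reconstruct the function as a kernel expansion, with no basis functions chosen by the user. A bias-free simplification in pure Python; the kernel, its width, lam and the target function sin(p) are illustrative assumptions, and no model order selection is performed:

```python
import math

def rbf(a, b, sigma=0.5):
    # Gaussian (RBF) kernel on the scheduling variable.
    return math.exp(-((a - b) ** 2) / (2 * sigma ** 2))

def solve(A, y):
    """Gaussian elimination with partial pivoting for small dense systems."""
    n = len(A)
    M = [row[:] + [yi] for row, yi in zip(A, y)]
    for col in range(n):
        p = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[p] = M[p], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fit_lssvm(ps, ys, lam=1e-6):
    """Bias-free LS-SVM / kernel ridge dual: solve (K + lam*I) alpha = y."""
    K = [[rbf(pi, pj) + (lam if i == j else 0.0) for j, pj in enumerate(ps)]
         for i, pi in enumerate(ps)]
    return solve(K, ys)

# Samples of an unknown coefficient function a(p) = sin(p) (illustrative).
ps = [0.0, 0.5, 1.0, 1.5, 2.0]
alpha = fit_lssvm(ps, [math.sin(p) for p in ps])
a_hat = lambda p: sum(ak * rbf(p, pk) for ak, pk in zip(alpha, ps))
print(round(a_hat(0.75), 3))
```

The estimated coefficient function is a weighted sum of kernels centered at the data, which is exactly the sense in which the LS-SVM approach avoids a user-specified basis.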
Dario Piga
dario.piga@imtlucca.it
Roland Tóth
2015-01-12T12:41:33Z
2015-01-12T12:42:48Z
http://eprints.imtlucca.it/id/eprint/2459
2015-01-12T12:41:33Z
Direct data-driven control of linear parameter-varying systems
In many control applications, nonlinear plants can be modeled as linear parameter-varying (LPV) systems, whereby the dynamic behavior is assumed to be linear but dependent on some measurable signals, e.g., operating conditions. When a measured data set is available, LPV model identification can provide low-complexity linear models that can embed the underlying nonlinear dynamic behavior of the plant. For such models, powerful control synthesis tools are available, but the way the modeling error and the conservativeness of the embedding affect the control performance is still largely unknown. Therefore, it appears attractive to synthesize the controller directly from data without modeling the plant. In this paper, a novel data-driven synthesis scheme is proposed to lay the basic foundations of future research on this challenging problem. The effectiveness of the proposed approach is illustrated by a numerical example.
Simone Formentin
Dario Piga
dario.piga@imtlucca.it
Roland Tóth
Sergio M. Savaresi
2015-01-12T12:06:11Z
2015-01-12T12:06:11Z
http://eprints.imtlucca.it/id/eprint/2458
2015-01-12T12:06:11Z
SM identification of input-output LPV models with uncertain time-varying parameters
In this chapter, we consider the identification of single-input single-output linear parameter-varying models when both the output and the time-varying parameter measurements are affected by bounded noise. First, the problem of computing exact parameter uncertainty intervals is formulated in terms of semialgebraic optimization. Then, a suitable relaxation technique is presented to compute parameter bounds by means of convex optimization. Advantages of the presented approach with respect to previously published results are discussed.
Vito Cerone
Dario Piga
dario.piga@imtlucca.it
Diego Regruto
2015-01-12T11:47:05Z
2015-01-12T11:47:05Z
http://eprints.imtlucca.it/id/eprint/2457
2015-01-12T11:47:05Z
Bounded error identification of Hammerstein Systems with backlash
Actuators and sensors commonly used in control systems may exhibit a variety of nonlinear behaviours that may be responsible for undesirable phenomena such as delays and oscillations, which may severely limit both the static and the dynamic performance of the system under control (see, e.g., [22]). In particular, one of the most relevant nonlinearities affecting the performance of industrial machines is the backlash (see Figure 22.1), which commonly occurs in mechanical, hydraulic and magnetic components like bearings, gears and impact dampers (see, e.g., [17]). This nonlinearity, which can be classified as dynamic (i.e., with memory) and hard (i.e. non-differentiable), may arise from unavoidable manufacturing tolerances or sometimes may be deliberately incorporated into the system in order to describe lubrication and thermal expansion effects [3]. The interested reader is referred to [22] for real-life examples of systems with either input or output backlash nonlinearities.
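Since the chapter builds on the backlash nonlinearity, it helps to have its standard discrete-time play-operator model in front of us: with band half-width h, y_k = max(u_k - h, min(u_k + h, y_{k-1})), so the output freezes over a stroke of 2h whenever the input reverses direction. A pure-Python sketch with an illustrative input sequence (the half-width and the signal are assumptions, not data from the chapter):

```python
def backlash(u_seq, half_width, y0=0.0):
    """Discrete-time backlash (play) operator with band half-width h:
    the output stays constant while the input moves inside the band,
    and tracks the input offset by h once a band edge is engaged."""
    ys, y = [], y0
    for u in u_seq:
        y = max(u - half_width, min(u + half_width, y))
        ys.append(y)
    return ys

# Ramp up, then reverse: the output lags the input by the half-width and
# 'freezes' while the input retraces the 2*h dead band after the reversal.
u = [0.0, 0.5, 1.0, 1.5, 1.4, 1.2, 1.0, 0.5]
ys = backlash(u, half_width=0.2)
print(ys)
```

The memory in this recursion (the dependence on y_{k-1}) is what makes the nonlinearity dynamic and non-differentiable, the two properties the chapter highlights.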
Vito Cerone
Dario Piga
dario.piga@imtlucca.it
Diego Regruto
2015-01-12T11:36:39Z
2015-01-12T11:36:39Z
http://eprints.imtlucca.it/id/eprint/2456
2015-01-12T11:36:39Z
Frequency-Domain Least-Squares Support Vector Machines to deal with correlated errors when identifying linear time-varying systems
A Least-Squares Support Vector Machine (LS-SVM) estimator, formulated in the frequency domain, is proposed to identify linear time-varying dynamic systems. The LS-SVM aims at learning the structure of the time variation in a data-driven way. The frequency domain is chosen for its superior robustness w.r.t. correlated errors in the calibration of the hyperparameters of the model. The time-domain and the frequency-domain implementations are compared on a simulation example to show the effectiveness of the proposed approach. It is demonstrated that the time-domain formulation is misled during calibration because the noise on the estimation and calibration data sets is correlated. This is not the case for the frequency-domain implementation.
John Lataire
Dario Piga
dario.piga@imtlucca.it
Roland Tóth
2015-01-09T13:37:33Z
2015-01-09T13:37:33Z
http://eprints.imtlucca.it/id/eprint/2453
2015-01-09T13:37:33Z
Polytopic outer approximations of semialgebraic sets
This paper deals with the problem of finding a polytopic outer approximation P* of a compact semialgebraic set S ⊆ Rn. The computed polytope turns out to be an approximation of the linear hull of the set S. The evaluation of P* is reduced to the solution of a sequence of robust optimization problems with nonconvex functional, which are efficiently solved by means of convex relaxation techniques. Properties of the presented algorithm and its possible applications in the analysis, identification and control of uncertain systems are discussed.
Vito Cerone
Dario Piga
dario.piga@imtlucca.it
Diego Regruto
2015-01-09T13:32:04Z
2015-01-09T13:32:04Z
http://eprints.imtlucca.it/id/eprint/2452
2015-01-09T13:32:04Z
Fixed order LPV controller design for LPV models in input-output form
In this work, a new synthesis approach is proposed to design fixed-order H∞ controllers for linear parameter-varying (LPV) systems described by input-output (I/O) models with polynomial dependence on the scheduling variables. First, by exploiting a suitable technique for polytopic outer approximation of semialgebraic sets, the closed-loop system is equivalently rewritten as an LPV I/O model depending affinely on an augmented scheduling parameter vector constrained inside a polytope. Then, the problem is reformulated in terms of bilinear matrix inequalities (BMI) and solved by means of a suitable semidefinite relaxation technique.
Vito Cerone
Dario Piga
dario.piga@imtlucca.it
Diego Regruto
Roland Tóth
2015-01-09T12:49:50Z
2015-01-09T12:49:50Z
http://eprints.imtlucca.it/id/eprint/2451
2015-01-09T12:49:50Z
Bounded-error identification of linear systems with input and output backlash
In this paper we present a single-stage procedure for computing bounds on the parameters of linear systems with input and output backlash from output data corrupted by bounded measurement noise. By properly selecting a sequence of input/output measurements, the problem of evaluating parameter bounds is formulated as a collection of sparse nonconvex optimization problems. Convex-relaxation techniques are exploited to efficiently compute guaranteed bounds on system parameters by means of semidefinite programming.
Vito Cerone
Dario Piga
dario.piga@imtlucca.it
Diego Regruto
2015-01-09T12:25:17Z
2015-01-09T12:25:17Z
http://eprints.imtlucca.it/id/eprint/2450
2015-01-09T12:25:17Z
FIR approximation of linear systems from quantized records
In this paper we consider the problem of identifying a fixed-order FIR approximation of linear systems with unknown structure, assuming that both input and output measurements are subject to quantization. In particular, an FIR model of given order which provides the best approximation of the input-output relationship is sought by minimizing the worst-case distance between the output of the true system and the modeled output, for all possible values of the input and output data consistent with their quantized measurements. First, we show that the considered problem can be formulated in terms of robust optimization. Then, we present two different algorithms to compute the optimum of the formulated problem by means of linear programming techniques. The effectiveness of the proposed approach is illustrated by means of a simulation example.
Vito Cerone
Dario Piga
dario.piga@imtlucca.it
Diego Regruto
2015-01-09T12:12:01Z
2015-01-09T12:12:01Z
http://eprints.imtlucca.it/id/eprint/2449
2015-01-09T12:12:01Z
LPV identification of the glucose-insulin dynamics in Type I Diabetes
In this paper we address the problem of identifying a linear parameter varying (LPV) model of the glucose-insulin dynamics in Type I diabetic patients. First, the identification problem is formulated in the framework of bounded-error identification, then an algorithm for parameter bounds computation, based on semidefinite programming, is presented. The effectiveness of the proposed approach is tested in simulation by means of the widely adopted nonlinear Sorensen patient model.
Vito Cerone
Dario Piga
dario.piga@imtlucca.it
Diego Regruto
Sintayehu Berehanu
2015-01-09T11:59:20Z
2015-01-09T11:59:20Z
http://eprints.imtlucca.it/id/eprint/2448
2015-01-09T11:59:20Z
Input-Output LPV Model identification with guaranteed quadratic stability
The problem of identifying linear parameter-varying (LPV) systems, a priori known to be quadratically stable, is considered in the paper using an input-output model structure. To solve this problem, a novel constrained optimization-based algorithm is proposed which guarantees quadratic stability of the identified model. It is shown that this estimation objective corresponds to a nonconvex optimization problem, defined by a set of polynomial matrix inequalities (PMI), whose optimal solution can be approximated by means of suitable convex semidefinite relaxations. Applicability of such a relaxation-based estimation approach in the presence of either stochastic or deterministic bounded noise is discussed. A simulation example is also given to demonstrate the effectiveness of the resulting identification method.
Vito Cerone
Dario Piga
dario.piga@imtlucca.it
Diego Regruto
Roland Tóth
2015-01-09T11:36:20Z
2015-01-09T11:52:42Z
http://eprints.imtlucca.it/id/eprint/2446
2015-01-09T11:36:20Z
Minimal LPV state-space realization driven set-membership identification
Set-membership identification algorithms have been recently proposed to derive linear parameter-varying (LPV) models in input-output form, under the assumption that measurements of both the output and the scheduling signals are affected by bounded noise. In order to use the identified models for controller synthesis, linear time-invariant (LTI) realization theory is usually applied to derive a state-space model whose matrices depend statically on the scheduling signals, as required by most of the LPV control synthesis techniques. Unfortunately, application of the LTI realization theory leads to an approximate state-space description of the original LPV input-output model. In order to limit the effect of the realization error, a new set-membership algorithm for identification of input/output LPV models is proposed in the paper. A nonconvex optimization problem is formulated to select the model in the feasible set which minimizes a suitable measure of the state-space realization error. The solution of the identification problem is then derived by means of convex relaxation techniques.
Vito Cerone
Dario Piga
dario.piga@imtlucca.it
Diego Regruto
Roland Tóth
2015-01-09T11:31:37Z
2015-01-09T11:31:37Z
http://eprints.imtlucca.it/id/eprint/2445
2015-01-09T11:31:37Z
Set-membership identification of Hammerstein-Wiener systems
Set-membership identification of Hammerstein-Wiener models is addressed in the paper. First, it is shown that computation of tight parameter bounds requires the solutions to a number of nonconvex constrained polynomial optimization problems where the number of decision variables increases with the length of the experimental data sequence. Then, a suitable convex relaxation procedure is presented to significantly reduce the computational burden of the identification problem. A detailed discussion of the identification algorithm properties is reported. Finally, a simulated example is used to show the effectiveness and the computational tractability of the proposed approach.
Vito Cerone
Dario Piga
dario.piga@imtlucca.it
Diego Regruto
2015-01-09T11:26:33Z
2015-01-09T11:26:33Z
http://eprints.imtlucca.it/id/eprint/2444
2015-01-09T11:26:33Z
Fast implementation of model predictive control with guaranteed performance
A fast implementation of a given predictive controller for nonlinear systems is introduced through a piecewise constant approximate function defined over a hyper-cube partition of the system state space. Such a state partition is obtained by maximizing the hyper-cube volumes in order to guarantee, besides stability, an a priori fixed trajectory error as well as input and state constraint satisfaction. The presented approximation procedure is achieved by solving a set of nonconvex polynomial optimization problems, whose approximate solutions are computed by means of semidefinite relaxation techniques for semialgebraic problems.
Massimo Canale
Vito Cerone
Dario Piga
dario.piga@imtlucca.it
Diego Regruto
2015-01-09T11:12:20Z
2015-01-09T11:12:20Z
http://eprints.imtlucca.it/id/eprint/2443
2015-01-09T11:12:20Z
Computational burden reduction in set-membership Hammerstein system identification
Hammerstein system identification from measurements affected by bounded noise is considered in the paper. First, we show that computation of tight parameter bounds requires the solution to nonconvex optimization problems where the number of decision variables increases with the length of the experimental data sequence. Then, in order to reduce the computational burden of the identification problem, we propose a procedure to relax the previously formulated problem to a set of polynomial optimization problems where the number of variables does not depend on the size of the measurements sequence. Advantages of the presented approach with respect to previously published results are discussed.
Vito Cerone
Dario Piga
dario.piga@imtlucca.it
Diego Regruto
2015-01-09T10:28:51Z
2015-01-09T10:28:51Z
http://eprints.imtlucca.it/id/eprint/2442
2015-01-09T10:28:51Z
Convex relaxation techniques for set-membership identification of LPV systems
Set-membership identification of single-input single-output linear parameter varying models is considered in the paper under the assumption that both the output and the scheduling parameter measurements are affected by bounded noise. First, we show that the problem of computing the parameter uncertainty intervals requires the solutions to a number of nonconvex optimization problems. Then, on the basis of the analysis of the regressor structure, we present some ad hoc convex relaxation schemes to compute parameter bounds by means of semidefinite optimization. Advantages of the new techniques with respect to previously published results are discussed both theoretically and by means of simulations.
Vito Cerone
Dario Piga
dario.piga@imtlucca.it
Diego Regruto
2015-01-09T10:25:09Z
2015-01-09T10:25:09Z
http://eprints.imtlucca.it/id/eprint/2441
2015-01-09T10:25:09Z
Hammerstein systems parameters bounding through sparse polynomial optimization
A single-stage procedure for the evaluation of tight bounds on the parameters of Hammerstein systems from output measurements affected by bounded errors is presented. The identification problem is formulated in terms of polynomial optimization, and relaxation techniques based on linear matrix inequalities are proposed to evaluate parameter bounds by means of convex optimization. The structured sparsity of the identification problem is exploited to reduce the computational complexity of the convex relaxed problem. Convergence properties, complexity analysis and advantages of the proposed technique with respect to previously published ones are discussed.
Vito Cerone
Dario Piga
dario.piga@imtlucca.it
Diego Regruto
2015-01-09T10:00:06Z
2015-01-09T10:00:06Z
http://eprints.imtlucca.it/id/eprint/2440
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2440
2015-01-09T10:00:06Z
Bounding the parameters of linear systems with stability constraints
Identification of linear systems, a priori known to be stable, from input-output measurements corrupted by bounded noise is considered in the paper. A formal definition of the feasible parameter set is provided, taking explicitly into account prior information on system stability. On the basis of a detailed analysis of the geometrical structure of the feasible set, convex relaxation techniques are presented to solve the nonconvex optimization problems arising in the computation of parameter uncertainty intervals. Properties of the computed relaxed bounds are discussed. A simulated example is presented to show the effectiveness of the proposed technique.
Vito Cerone
Dario Piga
dario.piga@imtlucca.it
Diego Regruto
2015-01-08T14:09:38Z
2015-01-08T14:09:38Z
http://eprints.imtlucca.it/id/eprint/2438
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2438
2015-01-08T14:09:38Z
Control as a key technology for a radical innovation in wind energy generation
This paper is concerned with an innovative technology, denoted as Kitenergy, for the conversion of high-altitude wind energy into electricity. The research activities carried out in the last five years, including theoretical analyses, numerical simulations and experimental tests, indicate that Kitenergy could bring forth a revolution in wind energy generation, providing renewable energy in large quantities at lower cost than fossil energy. After an overview of the main features of the technology, this work investigates three important aspects: the evaluation of the performance achieved by the employed control law, the optimization of the generator operating cycle, and the possibility of continuously generating a constant, maximal power output. These issues are tackled through the combined use of advanced modeling, control and optimization methods, which prove to be key technologies for a significant breakthrough in renewable energy generation.
Mario Milanese
Lorenzo Fagiano
Dario Piga
dario.piga@imtlucca.it
2015-01-08T13:49:05Z
2015-01-08T13:49:05Z
http://eprints.imtlucca.it/id/eprint/2437
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2437
2015-01-08T13:49:05Z
Kitenergy: a radical innovation in wind energy generation
This paper presents an innovative technology for high-altitude wind power generation, indicated as Kitenergy, which exploits the automatic flight of tethered airfoils (e.g. power kites) to extract energy from wind blowing between 200 and 800 meters above the ground. The key points of such a technology are described, and the design of large-scale plants is investigated in order to show that Kitenergy has the potential to provide large quantities of renewable energy at costs competitive with fossil sources. Such claims are supported by the results obtained so far in the research activities under way at Politecnico di Torino, Italy, including numerical simulations, prototype experiments and wind data analyses.
Lorenzo Fagiano
Mario Milanese
Dario Piga
dario.piga@imtlucca.it
2015-01-08T13:24:07Z
2015-01-08T13:24:07Z
http://eprints.imtlucca.it/id/eprint/2436
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2436
2015-01-08T13:24:07Z
Set-membership identification of block-structured nonlinear feedback systems
In this paper a three-stage procedure for set-membership identification of block-structured nonlinear feedback systems is proposed. Nonlinear block parameter bounds are computed in the first stage by exploiting steady-state measurements. Then, given the uncertain description of the nonlinear block, bounds on the unmeasurable inner signal are computed in the second stage. Finally, linear block parameter bounds are computed in the third stage on the basis of output measurements and the computed inner-signal bounds. Computation of both the nonlinear block parameter bounds and the inner-signal bounds is formulated in terms of semialgebraic optimization and solved by means of suitable convex LMI relaxation techniques. Linear block parameters are bounded by solving a number of linear programming problems.
Vito Cerone
Dario Piga
dario.piga@imtlucca.it
Diego Regruto
2015-01-08T11:51:23Z
2015-01-08T11:51:23Z
http://eprints.imtlucca.it/id/eprint/2434
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2434
2015-01-08T11:51:23Z
Parameter bounds evaluation for linear systems with output backlash
In this paper a procedure is presented for deriving parameter bounds of linear systems with output backlash when the output measurement errors are bounded. First, using steady-state input/output data, the parameters of the backlash are bounded. Then, given the estimated uncertain backlash and the output measurements collected by exciting the system with a PRBS, bounds on the unmeasurable inner signal are computed. Finally, such bounds, together with the input sequence, are used to bound the parameters of the linear block.
Vito Cerone
Dario Piga
dario.piga@imtlucca.it
Diego Regruto
2015-01-08T11:08:23Z
2015-01-12T13:16:19Z
http://eprints.imtlucca.it/id/eprint/2433
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2433
2015-01-08T11:08:23Z
An instrumental Least Squares Support Vector Machine for system identification
Roland Tóth
Vincent Laurain
Dario Piga
dario.piga@imtlucca.it
2015-01-08T11:00:48Z
2015-01-08T11:00:48Z
http://eprints.imtlucca.it/id/eprint/2432
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2432
2015-01-08T11:00:48Z
Segmentation of ARX systems through SDP-relaxation techniques
Segmentation of ARX models can be formulated as a combinatorial minimization problem in terms of the ℓ0-norm of the parameter variations and the ℓ2-loss of the prediction error. A typical approach to compute an approximate solution to such a problem is based on ℓ1-relaxation. Unfortunately, evaluation of the level of accuracy of the ℓ1-relaxation in approximating the optimal solution of the original combinatorial problem is not easy to accomplish. In this poster, an alternative approach is proposed which provides an attractive solution for the ℓ0-norm minimization problem associated with segmentation of ARX models.
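For intuition, the combinatorial ℓ0 problem can be solved exactly in the simplest (order-zero) special case, where the model on each segment is just a constant and the penalty charges each parameter variation. The toy dynamic-programming sketch below (optimal partitioning, O(n²)) illustrates the ℓ0 objective only; it is not the poster's method or its ℓ1-relaxation:

```python
import numpy as np

def segment_l0(y, beta):
    """Exact l0 segmentation of a scalar signal into constant pieces:
    minimize (sum of squared residuals) + beta * (number of changepoints)
    by optimal-partitioning dynamic programming."""
    n = len(y)
    # cumulative sums give O(1) evaluation of each segment's squared error
    s1 = np.concatenate([[0.0], np.cumsum(y)])
    s2 = np.concatenate([[0.0], np.cumsum(np.asarray(y) ** 2)])

    def cost(i, j):  # squared error of fitting y[i:j] by its mean
        m = (s1[j] - s1[i]) / (j - i)
        return s2[j] - s2[i] - (j - i) * m ** 2

    F = np.full(n + 1, np.inf)   # F[t]: optimal objective for the prefix y[:t]
    F[0] = -beta                 # offsets the penalty charged to the first segment
    prev = np.zeros(n + 1, dtype=int)
    for t in range(1, n + 1):
        vals = [F[s] + cost(s, t) + beta for s in range(t)]
        prev[t] = int(np.argmin(vals))
        F[t] = vals[prev[t]]

    cps, t = [], n               # backtrack the changepoint locations
    while t > 0:
        t = prev[t]
        if t > 0:
            cps.append(t)
    return sorted(cps)

y = np.concatenate([np.zeros(20), np.full(20, 2.0)])
cps = segment_l0(y, 1.0)   # -> [20]: one changepoint, at the level shift
```

For ARX models of nonzero order the per-segment cost becomes a least-squares fit of the ARX parameters, which is exactly where the exponential growth of the exact problem — and hence the appeal of relaxations — comes from.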
Dario Piga
dario.piga@imtlucca.it
Roland Tóth
2015-01-08T10:57:52Z
2015-01-08T11:01:10Z
http://eprints.imtlucca.it/id/eprint/2431
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2431
2015-01-08T10:57:52Z
Dealing with correlated errors in Least-Squares Support Vector Machine Estimators
John Lataire
Dario Piga
dario.piga@imtlucca.it
Roland Tóth
2015-01-08T10:35:27Z
2015-01-08T10:55:43Z
http://eprints.imtlucca.it/id/eprint/2430
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2430
2015-01-08T10:35:27Z
Data-driven LPV modeling of continuous pulp digesters
In this technical report, the LPV-IO identification techniques described in Kauven et al. [2013]
(Chapter 5) are applied in order to estimate an LPV model of a continuous pulp digester. The pulp
digester simulator (described in Modén [2011]) has been provided by ABB for benchmark studies
as part of its participation in the EU project Autoprofit
Dario Piga
dario.piga@imtlucca.it
Roland Tóth
2015-01-08T10:31:58Z
2015-01-12T13:16:01Z
http://eprints.imtlucca.it/id/eprint/2429
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2429
2015-01-08T10:31:58Z
An instrumental Least Squares Support Vector Machine for nonlinear system identification: enforcing zero-centering constraints
Least-Squares Support Vector Machines (LS-SVMs), originating from stochastic learning theory, represent a promising approach to identifying nonlinear systems via nonparametric estimation of the nonlinearities in a computationally and stochastically attractive way. However, application of LS-SVMs in the identification context is formulated as a linear regression aiming at the minimization of the ℓ2 loss in terms of the prediction error. This formulation corresponds to a prejudice of an auto-regressive noise structure, which, especially in the nonlinear context, is often found to be too restrictive in practical applications. In [1], a novel Instrumental Variable (IV) based estimation is integrated into the LS-SVM approach, providing, under minor conditions, consistent identification of nonlinear systems in case of a noise modeling error. It is shown how the cost function of the LS-SVM is modified to achieve an IV-based solution. In this technical report, a detailed derivation of the results presented in Section 5.2 of [1] is given as supplementary material for interested readers.
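For readers unfamiliar with the baseline, a plain LS-SVM regressor (the ℓ2-loss formulation that the IV modification of [1] alters) reduces to a single linear KKT system in the bias and the dual weights. A minimal numpy sketch with an RBF kernel follows; the kernel width, regularization value and test function are illustrative choices, and the IV correction itself is not reproduced here:

```python
import numpy as np

def rbf(X, Z, sigma=1.0):
    """Gaussian (RBF) kernel matrix between row-wise sample sets X and Z."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def lssvm_fit(X, y, gamma=1e4, sigma=1.0):
    """Solve the (n+1)x(n+1) LS-SVM KKT system for bias b and dual weights alpha."""
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = rbf(X, X, sigma) + np.eye(n) / gamma  # ridge term from the l2 loss
    sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
    return sol[0], sol[1:]                            # b, alpha

def lssvm_predict(Xtr, b, alpha, Xte, sigma=1.0):
    return rbf(Xte, Xtr, sigma) @ alpha + b

# fit a smooth test function from samples
X = np.linspace(-3.0, 3.0, 50)[:, None]
y = np.sinc(X[:, 0])
b, alpha = lssvm_fit(X, y)
yhat = lssvm_predict(X, b, alpha, X)   # close to y on the training points
```

The IV-based variant replaces this least-squares objective so that consistency survives a misspecified noise model, as derived in the report.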
Vincent Laurain
Roland Tóth
Dario Piga
dario.piga@imtlucca.it
2015-01-08T10:09:00Z
2015-01-08T13:05:30Z
http://eprints.imtlucca.it/id/eprint/2428
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2428
2015-01-08T10:09:00Z
A convex relaxation approach to set-membership identification
Set-membership identification of dynamical systems is dealt with in this thesis. Differently from the stochastic framework, in the set-membership context the statistical description of the measurement noise is not available, and the only information on such an error is that its amplitude or energy is bounded. In the set-membership framework, the result of the estimation process is the set of all system parameter values consistent with the measured data, the assumed model structure and the a-priori assumptions on the measurement error. The problem of evaluating bounds on the system parameters belonging to the feasible parameter set can be formulated in terms of polynomial optimization problems, where the number of decision variables increases with the length of the experimental data sequence. Such problems are generally nonconvex and NP-hard. Therefore, standard nonlinear optimization tools cannot be used to compute parameter bounds, since they can get trapped in local minima and, as a consequence, the computed bounds are not guaranteed to contain the true values of the parameters, which is a key requirement in set-membership identification. In order to overcome such a problem, convex relaxation procedures based on the theory of moments are proposed to efficiently compute relaxed bounds which are guaranteed to contain the true values of the system parameters. Unfortunately, a direct application of the theory of moments to relaxing set-membership identification problems leads to semidefinite programming problems with a high computational burden, thus limiting, in practice, the use of such relaxation procedures to identification problems with a small number of measurements. The aim of the thesis is to derive a number of convex-relaxation-based algorithms that, by exploiting the peculiar properties of the considered identification problems, make it possible to perform bound computation even when the number of measurements is large.
In particular, errors-in-variables (EIV) identification of linear models, concerning identification of linear-time-invariant (LTI) systems based on noise-corrupted measurements of both input and output signals, is tackled through two different relaxation approaches. The first method, which is referred to as the dynamic-EIV approach, exploits the sparse structure of EIV problems in order to reduce the computational complexity of the semidefinite programming problems arising from theory-of-moment relaxations. The second technique, referred to as the semi-static-EIV approach, is based on a suitable handling of the constraints defining the feasible parameter set, and leads to polynomial optimization problems where the number of decision variables does not depend on the size of the measurement sequence. Thanks to that problem reformulation, theory-of-moment relaxations can be efficiently applied to compute bounds on system parameters also from large data sets. Identification of block-oriented nonlinear systems is also addressed. The considered model structures are: Hammerstein-Wiener systems; Hammerstein-like and Wiener-like structures with backlash nonlinearity; and block-structured nonlinear feedback systems. The semi-static-EIV approach is extended with suitable modifications to estimate the parameters of Hammerstein-Wiener models with static blocks described by polynomial functions. Then, a unified approach for set-membership identification of Hammerstein and Wiener models with backlash is discussed. By properly selecting a sequence of input/output measurements, the evaluation of parameter bounds is formulated in terms of polynomial optimization problems, and the structured sparsity of the formulated problems is exploited to reduce the computational complexity of theory-of-moment based relaxations. Finally, a two-stage method for identification of block-structured nonlinear feedback systems is presented.
Nonlinear block parameter bounds are first computed by using input/output data collected from the response of the system to square wave inputs. Then, by stimulating the system with a persistently exciting input signal, bounds on the unmeasurable inner-signal are evaluated, which are used, together with noise-corrupted measurements of the output signal, to formulate the identification of linear block parameters in terms of EIV problems that can be solved either through the dynamic or the semi-static-EIV approach. Then, an "ad hoc" convex relaxation scheme is presented to compute guaranteed bounds on the parameters of linear-parameter-varying (LPV) models in input/output (I/O) form, under the assumption that both the output and the scheduling parameter measurements are affected by bounded noise. The developed set-membership identification algorithms are used to derive an LPV model describing vehicle lateral dynamics based on a set of experimental data, and an LPV model to describe glucose-insulin dynamics for patients affected by Type I diabetes. Finally, the problem of identifying systems a-priori known to be stable is discussed. In particular, suitable relaxation-based algorithms are proposed to enforce BIBO stability and quadratic stability constraints for the cases of LTI and LPV systems, respectively. Applicability of the proposed techniques both in the stochastic and in the set-membership framework is discussed.
Dario Piga
dario.piga@imtlucca.it
2014-12-18T12:15:05Z
2015-11-02T11:27:17Z
http://eprints.imtlucca.it/id/eprint/2425
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2425
2014-12-18T12:15:05Z
Multicontrast MRI quantification of focal inflammation and degeneration in multiple sclerosis
Local microstructural pathology in multiple sclerosis patients might influence their clinical performance. This study applied multicontrast MRI to quantify inflammation and neurodegeneration in MS lesions. We explored the impact of MRI-based lesion pathology on cognition and disability. Methods. 36 relapsing-remitting MS subjects and 18 healthy controls underwent neurological, cognitive and behavioural examinations and 3 T MRI including (i) fluid attenuated inversion recovery, double inversion recovery, and magnetization-prepared gradient echo for lesion count; (ii) T1, T2, and T2* relaxometry and magnetisation transfer imaging for lesion tissue characterization. Lesions were classified according to the extent of inflammation/neurodegeneration. A generalized linear model assessed the contribution of lesion groups to clinical performances. Results. Four lesion groups were identified and characterized by (1) absence of significant alterations, (2) prevalent inflammation, (3) concomitant inflammation and microdegeneration, and (4) prevalent tissue loss. Groups 1, 3 and 4 correlated with general disability, executive function, verbal memory, and attention. Conclusion. Multicontrast MRI provides a new approach to infer the in vivo histopathology of plaques. Our results support evidence that neurodegeneration is the major determinant of patients' disability and cognitive dysfunction.
Guillaume Bonnier
Alexis Roche
David Romascano
Samanta Simioni
Djalel-Eddine Meskaldji
David Rotzinger
Ying-Chia Lin
yingchia.lin@imtlucca.it
Gloria Menegaz
Myriam Schluep
Renaud Du Pasquier
Tilman Johannes Sumpf
Jens Frahm
Jean-Philippe Thiran
Gunnar Krueger
Cristina Granziera
2014-12-18T11:34:55Z
2016-04-06T09:03:15Z
http://eprints.imtlucca.it/id/eprint/2424
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2424
2014-12-18T11:34:55Z
Quantitative analysis of myelin and axonal remodeling in the uninjured motor network after stroke
Objectives: Contralesional brain connectivity plasticity has previously been reported after stroke. This study aims at disentangling the biological mechanisms underlying connectivity plasticity in the uninjured motor network after an ischemic lesion. In particular, we measured generalized fractional anisotropy (GFA) and magnetization transfer ratio (MTR) to assess whether post-stroke connectivity remodeling depends on axonal and/or myelin changes. Materials and Methods: Diffusion Spectrum Imaging (DSI) and Magnetization Transfer MRI at 3T were performed in 10 patients in the acute phase and at one and six months after a stroke affecting motor cortical and/or subcortical areas. Ten age- and gender-matched healthy volunteers were scanned one month apart for longitudinal comparison. Clinical assessment was also performed in patients prior to MRI. In the contralesional hemisphere, average measures and tract-based quantitative analysis of GFA and MTR were performed to assess axonal integrity and myelination along motor connections, as well as their variations in time. Results and Conclusions: Mean and tract-based measures of MTR and GFA showed significant changes in a number of contralesional motor connections, confirming both axonal and myelin plasticity in our cohort of patients. Moreover, density-derived features (peak height, standard deviation-SD and skewness) of GFA and MTR along the tracts showed stronger correlation with clinical scores than mean values. These findings reveal the interplay between contralateral myelin and axonal remodeling after stroke.
Ying-Chia Lin
yingchia.lin@imtlucca.it
Alessandro Daducci
Djalel-Eddine Meskaldji
Jean-Philippe Thiran
Patrik Michel
Reto A Meuli
Gunnar Krueger
Gloria Menegaz
Cristina Granziera
2014-12-18T11:19:20Z
2016-04-06T09:02:53Z
http://eprints.imtlucca.it/id/eprint/2423
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2423
2014-12-18T11:19:20Z
Advanced MRI unravels the nature of tissue alterations in early multiple sclerosis
Introduction In patients with multiple sclerosis (MS), conventional magnetic resonance imaging (MRI) provides only limited insights into the nature of brain damage, with modest clinico-radiological correlation. In this study, we applied recent advances in MRI techniques to study brain microstructural alterations in early relapsing-remitting MS (RRMS) patients with minor deficits. Further, we investigated the potential use of advanced MRI to predict functional performances in these patients. Methods Brain relaxometry (T1, T2, T2*) and magnetization transfer MRI were performed at 3T in 36 RRMS patients and 18 healthy controls (HC). Multicontrast analysis was used to assess for microstructural alterations in normal-appearing (NA) tissue and lesions. A generalized linear model was computed to predict clinical performance in patients using multicontrast MRI data, conventional MRI measures as well as demographic and behavioral data as covariates. Results Quantitative T2 and T2* relaxometry were significantly increased in temporal normal-appearing white matter (NAWM) of patients compared to HC, indicating subtle microedema (P = 0.03 and 0.004). Furthermore, significant T1 and magnetization transfer ratio (MTR) variations in lesions (mean T1 z-score: 4.42 and mean MTR z-score: −4.09) suggested substantial tissue loss. Combinations of multicontrast and conventional MRI data significantly predicted cognitive fatigue (P = 0.01, Adj-R2 = 0.4), attention (P = 0.0005, Adj-R2 = 0.6), and disability (P = 0.03, Adj-R2 = 0.4). Conclusion Advanced MRI techniques at 3T unraveled the nature of brain tissue damage in early MS and substantially improved clinical-radiological correlations in patients with minor deficits, as compared to conventional measures of disease.
Guillaume Bonnier
Alexis Roche
David Romascano
Samanta Simioni
Djalel-Eddine Meskaldji
David Rotzinger
Ying-Chia Lin
yingchia.lin@imtlucca.it
Gloria Menegaz
Myriam Schluep
Renaud Du Pasquier
Tilman Johannes Sumpf
Jens Frahm
Jean-Philippe Thiran
Gunnar Krueger
Cristina Granziera
2014-12-18T11:09:56Z
2016-04-06T09:56:55Z
http://eprints.imtlucca.it/id/eprint/2422
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2422
2014-12-18T11:09:56Z
Multicontrast connectometry: a new tool to assess cerebellum alterations in early relapsing-remitting multiple sclerosis
Background: Cerebellar pathology occurs in late multiple sclerosis (MS) but little is known about cerebellar changes during early disease stages. In this study, we propose a new multicontrast “connectometry” approach to assess the structural and functional integrity of cerebellar networks and connectivity in early MS. Methods: We used diffusion spectrum and resting-state functional MRI (rs-fMRI) to establish the structural and functional cerebellar connectomes in 28 early relapsing-remitting MS patients and 16 healthy controls (HC). We performed multicontrast “connectometry” by quantifying multiple MRI parameters along the structural tracts (generalized fractional anisotropy-GFA, T1/T2 relaxation times and magnetization transfer ratio) and functional connectivity measures. Subsequently, we assessed multivariate differences in local connections and network properties between MS and HC subjects; finally, we correlated detected alterations with lesion load, disease duration, and clinical scores. Results: In MS patients, a subset of structural connections showed quantitative MRI changes suggesting loss of axonal microstructure and integrity (increased T1 and decreased GFA, P < 0.05). These alterations highly correlated with motor, memory and attention in patients, but were independent of cerebellar lesion load and disease duration. Neither network organization nor rs-fMRI abnormalities were observed at this early stage. Conclusion: Multicontrast cerebellar connectometry revealed subtle cerebellar alterations in MS patients, which were independent of conventional disease markers and highly correlated with patient function. Future work should assess the prognostic value of the observed damage. Hum Brain Mapp, 2014. © 2014 Wiley Periodicals, Inc.
David Romascano
Djalel-Eddine Meskaldji
Guillaume Bonnier
Samanta Simioni
David Rotzinger
Ying-Chia Lin
yingchia.lin@imtlucca.it
Gloria Menegaz
Alexis Roche
Myriam Schluep
Renaud Du Pasquier
Jonas Richiardi
Dimitri Van De Ville
Alessandro Daducci
Tilman Johannes Sumpf
Jens Fraham
Jean-Philippe Thiran
Gunnar Krueger
Cristina Granziera
2014-12-18T11:04:46Z
2014-12-18T11:04:46Z
http://eprints.imtlucca.it/id/eprint/2421
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2421
2014-12-18T11:04:46Z
Quantitative comparison of reconstruction methods for intra-voxel fiber recovery from diffusion MRI
Validation is arguably the bottleneck in the diffusion magnetic resonance imaging (MRI) community. This paper evaluates and compares 20 algorithms for recovering the local intra-voxel fiber structure from diffusion MRI data and is based on the results of the “HARDI reconstruction challenge” organized in the context of the “ISBI 2012” conference. Evaluated methods encompass a mixture of classical techniques well known in the literature, such as diffusion tensor, Q-Ball and diffusion spectrum imaging, algorithms inspired by the recent theory of compressed sensing, and also brand new approaches proposed for the first time at this contest. To quantitatively compare the methods under controlled conditions, two datasets with known ground-truth were synthetically generated, and two main criteria were used to evaluate the quality of the reconstructions in every voxel: correct assessment of the number of fiber populations and angular accuracy in their orientation. This comparative study investigates the behavior of every algorithm with varying experimental conditions and highlights strengths and weaknesses of each approach. This information can be useful not only for enhancing current algorithms and developing the next generation of reconstruction methods, but also for assisting physicians in the choice of the most adequate technique for their studies.
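Of the two evaluation criteria above, angular accuracy is straightforward to compute: fiber orientations are axial quantities (v and -v describe the same population), so the error is the angle between estimated and ground-truth directions with the sign folded out. A hypothetical helper, not taken from the challenge's own evaluation code:

```python
import numpy as np

def angular_error_deg(u, v):
    """Angle in degrees between two fiber orientations, ignoring sign
    (fibers are axial: v and -v describe the same population)."""
    u = u / np.linalg.norm(u)
    v = v / np.linalg.norm(v)
    c = min(1.0, abs(float(np.dot(u, v))))   # clip to guard against round-off
    return float(np.degrees(np.arccos(c)))

# antipodal vectors describe the same axis, so the error is zero
err = angular_error_deg(np.array([1.0, 0.0, 0.0]), np.array([-1.0, 0.0, 0.0]))
# err == 0.0
```

In multi-fiber voxels, each estimated direction is typically matched to the closest ground-truth direction before averaging such errors.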
Alessandro Daducci
Erick Jorge Canales-Rodriguez
Maxime Descoteaux
Eleftherios Garyfallidis
Yaniv Gur
Ying-Chia Lin
yingchia.lin@imtlucca.it
Merry Mani
Sylvain Merlet
Michael Paquette
Alonso Ramirez-Manzanares
Marco Reisert
Paulo Reis Rodrigues
Farshid Sepehrband
Emmanuel Caruyer
Jeiran Choupan
Rachid Deriche
Matthew Jacob
Gloria Menegaz
Vesna Prckovska
Mariano Rivera
Yves Wiaux
Jean-Philippe Thiran
2014-12-11T11:38:40Z
2014-12-11T11:38:40Z
http://eprints.imtlucca.it/id/eprint/2417
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2417
2014-12-11T11:38:40Z
Single-image super-resolution via linear mapping of interpolated self-examples
This paper presents a novel example-based single-image super-resolution procedure that upscales to high-resolution (HR) a given low-resolution (LR) input image without relying on an external dictionary of image examples. The dictionary instead is built from the LR input image itself, by generating a double pyramid of recursively scaled, and subsequently interpolated, images, from which self-examples are extracted. The upscaling procedure is multipass, i.e., the output image is constructed by means of gradual increases, and consists of learning special linear mapping functions on this double pyramid, as many as the number of patches in the current image to upscale. More precisely, for each LR patch, similar self-examples are found, and, from them, a linear function is learned to directly map it to its HR version. Iterative back projection is also employed to ensure consistency at each pass of the procedure. Extensive experiments and comparisons with other state-of-the-art methods, based both on external and internal dictionaries, show that our algorithm can produce visually pleasant upscalings, with sharp edges and well reconstructed details. Moreover, when considering objective metrics, such as peak signal-to-noise ratio and structural similarity, our method turns out to give the best performance.
Marco Bevilacqua
marco.bevilacqua@imtlucca.it
Aline Roumy
Christine Guillemot
Marie Line Alberi-Morel
2014-12-11T11:31:01Z
2014-12-11T11:31:01Z
http://eprints.imtlucca.it/id/eprint/2416
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2416
2014-12-11T11:31:01Z
Video super-resolution via sparse combinations of key-frame patches in a compression context
In this paper we present a super-resolution (SR) method for upscaling low-resolution (LR) video sequences that relies on the presence of periodic high-resolution (HR) key frames, and we validate it in the context of video compression. For a given LR intermediate frame, the HR details are retrieved patch-by-patch by taking sparse linear combinations of patches found in the neighboring key frames. The performance of the video SR algorithm is assessed in a scheme where only some key frames from an original HR sequence are directly encoded; the remaining intermediate frames are down-sampled to LR and encoded as well, with a possibly different quantization parameter. SR is then finally employed to upscale these frames. For comparison, we consider the best case where the whole original HR sequence is encoded. With respect to this case, our SR-based approach is shown to bring a certain gain at low bit-rates (consistent when all frames are encoded independently), i.e. when a poor encoding can actually benefit from the special processing of the intermediate frames, thus showing that video SR can be a useful tool in realistic scenarios.
Marco Bevilacqua
marco.bevilacqua@imtlucca.it
Aline Roumy
Christine Guillemot
Marie Line Alberi-Morel
2014-12-11T11:25:26Z
2014-12-11T11:34:03Z
http://eprints.imtlucca.it/id/eprint/2415
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2415
2014-12-11T11:25:26Z
K-WEB: Nonnegative dictionary learning for sparse image representations
This paper presents a new nonnegative dictionary learning method to decompose an input data matrix into a dictionary of nonnegative atoms and a representation matrix with a strict ℓ0-sparsity constraint. This constraint makes each input vector representable by a limited combination of atoms. The proposed method consists of two steps which are alternately iterated: a sparse coding and a dictionary update stage. As for the dictionary update, an original method is proposed, which we call K-WEB, as it involves the computation of k WEighted Barycenters. The algorithm so designed is shown to outperform other methods in the literature that address the same learning problem, in different applications and with both synthetic and “real” data, i.e. data coming from natural images.
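The sparse coding stage — representing each input vector with at most s atoms under a strict ℓ0 constraint — is typically approximated greedily. The sketch below uses orthogonal matching pursuit on a random unit-norm dictionary with synthetic data; it illustrates only a generic ℓ0-constrained coding step, not the K-WEB dictionary update or the paper's nonnegativity handling:

```python
import numpy as np

def omp(D, x, s):
    """Orthogonal matching pursuit: greedy approximation of
    min ||x - D @ w||_2 subject to ||w||_0 <= s (unit-norm atoms in D's columns)."""
    r, support = x.copy(), []
    for _ in range(s):
        support.append(int(np.argmax(np.abs(D.T @ r))))          # best-matching atom
        w_s, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)  # refit on the support
        r = x - D[:, support] @ w_s                              # orthogonal residual
    w = np.zeros(D.shape[1])
    w[support] = w_s
    return w

rng = np.random.default_rng(2)
D = rng.standard_normal((100, 200))
D /= np.linalg.norm(D, axis=0)          # unit-norm atoms
x = 3.0 * D[:, 5] - 2.0 * D[:, 17]      # an exactly 2-sparse signal
w = omp(D, x, 2)                        # recovers support {5, 17} exactly
```

Because the residual stays orthogonal to the selected atoms, OMP never re-picks an atom, and on this well-separated example it recovers the true support and reconstructs x exactly.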
Marco Bevilacqua
marco.bevilacqua@imtlucca.it
Aline Roumy
Christine Guillemot
Marie Line Alberi-Morel
2014-12-11T11:14:47Z
2014-12-11T11:33:33Z
http://eprints.imtlucca.it/id/eprint/2414
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2414
2014-12-11T11:14:47Z
Super-resolution using neighbor embedding of back-projection residuals
In this paper we present a novel algorithm for neighbor embedding based super-resolution (SR), using an external dictionary. In neighbor embedding based SR, the dictionary is trained from couples of high-resolution and low-resolution (LR) training images, and consists of pairs of patches: matching patches (m-patches), which are used to match the input image patches and contain only low-frequency content, and reconstruction patches (r-patches), which are used to generate the output image patches and actually bring the high-frequency details. We propose a novel training scheme, where the m-patches are extracted from enhanced back-projected interpolations of the LR images and the r-patches are extracted from the back-projection residuals. A procedure to further optimize the dictionary is followed, and finally nonnegative neighbor embedding is considered at the SR algorithm stage. We consider the various elements of the algorithm individually, and prove that each of them brings a gain in the final result. The complete algorithm is then compared to other state-of-the-art methods, and its competitiveness is shown.
Marco Bevilacqua
marco.bevilacqua@imtlucca.it
Aline Roumy
Christine Guillemot
Marie Line Alberi-Morel
2014-12-11T11:06:46Z
2014-12-11T11:33:08Z
http://eprints.imtlucca.it/id/eprint/2413
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2413
2014-12-11T11:06:46Z
Compact and coherent dictionary construction for example-based super-resolution
This paper presents a new method to construct a dictionary for example-based super-resolution (SR) algorithms. Example-based SR relies on a dictionary of correspondences of low-resolution (LR) and high-resolution (HR) patches. Having a fixed, prebuilt dictionary allows one to speed up the SR process; however, in order to perform well in most cases, large dictionaries with a wide variety of patches are needed. Moreover, LR and HR patches are often not coherent, i.e. local LR neighborhoods are not preserved in the HR space. Our dictionary learning method takes as input a large dictionary and gives as output a dictionary of a “sustainable” size, yet presenting comparable or even better performance. It first consists of a partitioning process, done according to a joint k-means procedure, which enforces the coherence between LR and HR patches by discarding those pairs for which no common cluster is found. Second, the clustered dictionary is used to extract some salient patches that will form the output set.
Marco Bevilacqua
marco.bevilacqua@imtlucca.it
Aline Roumy
Christine Guillemot
Marie Line Alberi-Morel
2014-12-11T11:00:13Z
2014-12-16T14:34:53Z
http://eprints.imtlucca.it/id/eprint/2412
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2412
2014-12-11T11:00:13Z
Low-complexity single-image super-resolution based on nonnegative neighbor embedding
This paper describes a single-image super-resolution (SR) algorithm based on nonnegative neighbor embedding. It belongs to the family of single-image example-based SR algorithms, since it uses a dictionary of low-resolution (LR) and high-resolution (HR) trained patch pairs to infer the unknown HR details. Each LR feature vector in the input image is expressed as the weighted combination of its K nearest neighbors in the dictionary; the corresponding HR feature vector is reconstructed under the assumption that the local LR embedding is preserved. Three key aspects are introduced in order to build a low-complexity competitive algorithm: (i) a compact but efficient representation of the patches (feature representation); (ii) an accurate estimation of the patches by their nearest neighbors (weight computation); (iii) a compact and already built (therefore external) dictionary, which allows a one-step upscaling. The neighbor embedding SR algorithm so designed is shown to give good visual results, comparable to other state-of-the-art methods, while presenting an appreciable reduction of the computational time.
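A minimal sketch of the reconstruction step described above, with a projected-gradient routine standing in for the nonnegative weight computation (all names and the solver choice are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def nonneg_weights(x, N, n_iter=500):
    # Projected-gradient nonnegative least squares:
    # min_w ||x - N^T w||^2 subject to w >= 0, with one neighbour per row of N.
    G = N @ N.T
    step = 1.0 / (np.linalg.norm(G, 2) + 1e-12)
    w = np.full(len(N), 1.0 / len(N))
    b = N @ x
    for _ in range(n_iter):
        w = np.maximum(0.0, w - step * (G @ w - b))
    return w

def ne_super_resolve(x_lr, dict_lr, dict_hr, k=3):
    """Reconstruct one HR patch: pick the K nearest LR atoms, compute
    nonnegative combination weights, and apply the same weights to the
    corresponding HR atoms (local-embedding-preservation assumption)."""
    d = ((dict_lr - x_lr) ** 2).sum(1)
    idx = np.argsort(d)[:k]
    w = nonneg_weights(x_lr, dict_lr[idx])
    return dict_hr[idx].T @ w
```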
Marco Bevilacqua
marco.bevilacqua@imtlucca.it
Aline Roumy
Christine Guillemot
Marie Line Alberi-Morel
2014-12-11T10:26:16Z
2014-12-11T11:31:46Z
http://eprints.imtlucca.it/id/eprint/2411
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2411
2014-12-11T10:26:16Z
Neighbor embedding based single-image super-resolution using Semi-Nonnegative Matrix Factorization
This paper describes a novel method for single-image super-resolution (SR) based on a neighbor embedding technique which uses Semi-Nonnegative Matrix Factorization (SNMF). Each low-resolution (LR) input patch is approximated by a linear combination of nearest neighbors taken from a dictionary. This dictionary stores low-resolution and corresponding high-resolution (HR) patches taken from natural images and is thus used to infer the HR details of the super-resolved image. The entire neighbor embedding procedure is carried out in a feature space. Features which are either the gradient values of the pixels or the mean-subtracted luminance values are extracted from the LR input patches, and from the LR and HR patches stored in the dictionary. The algorithm thus searches for the K nearest neighbors of the feature vector of the LR input patch and then computes the weights for approximating the input feature vector. The use of SNMF for computing the weights of the linear approximation is shown to have a more stable behavior than the use of LLE and to lead to significantly higher PSNR values for the super-resolved images.
Marco Bevilacqua
marco.bevilacqua@imtlucca.it
Aline Roumy
Christine Guillemot
Marie Line Alberi-Morel
2014-12-11T09:38:48Z
2014-12-16T14:35:43Z
http://eprints.imtlucca.it/id/eprint/2410
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2410
2014-12-11T09:38:48Z
Sparse reconstruction for compressed sensing using Stagewise Polytope Faces Pursuit
Compressed sensing, also known as compressive sampling, is an approach to measuring signals that have a sparse representation, which can reduce the number of measurements needed to reconstruct the signal. The signal reconstruction part requires efficient methods to perform sparse reconstruction, such as those based on linear programming. In this paper we present a method for sparse reconstruction which is an extension of our earlier polytope faces pursuit algorithm, based on the polytope geometry of the dual linear program. The new algorithm adds several basis vectors at each stage, in a similar way to the recent stagewise orthogonal matching pursuit (StOMP) algorithm. We demonstrate the application of the algorithm to some standard compressed sensing problems.
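The stagewise idea, sketched here in the style of StOMP rather than the paper's dual-polytope formulation, adds every sufficiently correlated atom per stage and then re-fits by least squares (threshold and names are illustrative assumptions):

```python
import numpy as np

def stagewise_pursuit(A, y, n_stages=5, thresh=2.0):
    """Stagewise greedy sparse recovery: at each stage, add every atom
    whose correlation with the residual exceeds a noise-scaled threshold,
    then re-fit by least squares on the enlarged support."""
    m, n = A.shape
    support = np.zeros(n, dtype=bool)
    x = np.zeros(n)
    r = y.copy()
    for _ in range(n_stages):
        c = A.T @ r                               # correlations with residual
        sigma = np.linalg.norm(r) / np.sqrt(m)    # rough residual scale
        new = np.abs(c) > thresh * sigma
        if not new.any():
            break
        support |= new
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x = np.zeros(n)
        x[support] = x_s
        r = y - A @ x
    return x
```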
Mark D. Plumbley
Marco Bevilacqua
marco.bevilacqua@imtlucca.it
2014-12-10T14:41:25Z
2014-12-10T14:41:25Z
http://eprints.imtlucca.it/id/eprint/2407
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2407
2014-12-10T14:41:25Z
Schroedinger-like PageRank equation and localization in the WWW
The World Wide Web is one of the most important communication systems we use in our everyday life. Despite its central role, the growth and the development of the WWW is not controlled by any central authority. This situation has created a huge ensemble of connections whose complexity can be fruitfully described and quantified by network theory. One important application that allows one to sort out the information present in these connections is the PageRank algorithm. Computation of this quantity is usually performed iteratively, at considerable computational cost. In this paper we show that the PageRank can be expressed in terms of a wave function obeying a Schroedinger-like equation. In particular, the topological disorder given by the imbalance of outgoing and incoming links between pages induces wave-function and potential structuring. This allows us to directly localize the pages with the largest score. Through this new representation we can now compute the PageRank without iterative techniques. For most of the cases of interest our method is faster than the original one. Our results also clarify the role of topology in the diffusion of information within complex networks. The whole approach opens the possibility of novel techniques inspired by quantum physics for the analysis of WWW properties.
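For reference, the classic iterative computation that the paper seeks to bypass can be sketched as a textbook power iteration (this is the standard algorithm, not the Schroedinger-like method itself):

```python
import numpy as np

def pagerank(adj, d=0.85, tol=1e-10, max_iter=1000):
    """Power-iteration PageRank on an adjacency matrix (adj[i, j] = 1 for
    a link i -> j); mass on dangling nodes is spread uniformly."""
    n = adj.shape[0]
    out_deg = adj.sum(1)
    p = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        spread = np.where(out_deg > 0, p / np.maximum(out_deg, 1), 0.0)
        dangling = p[out_deg == 0].sum() / n
        new = (1 - d) / n + d * (adj.T @ spread + dangling)
        if np.abs(new - p).sum() < tol:
            return new
        p = new
    return p
```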
Nicola Perra
Vinko Zlatic
Alessandro Chessa
alessandro.chessa@imtlucca.it
Claudio Conti
Debora Donato
Guido Caldarelli
guido.caldarelli@imtlucca.it
2014-12-02T15:15:35Z
2014-12-18T13:56:23Z
http://eprints.imtlucca.it/id/eprint/2384
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2384
2014-12-02T15:15:35Z
Mining communities in networks
Online social networks pose significant challenges to computer scientists, physicists, and sociologists alike, for their massive size, fast evolution, and uncharted potential for social computing. One particular problem that has interested us is community identification. Many algorithms based on various metrics have been proposed for identifying communities in networks [18, 24], but few algorithms scale to very large networks. Three recent community identification algorithms, namely CNM [16], Wakita [59], and Louvain [10], stand out for their scalability to a few millions of nodes. All of them use modularity as the metric of optimization. However, all three algorithms produce inconsistent communities every time the ordering of nodes fed to the algorithms changes.
We propose two quantitative metrics to represent the level of consistency across multiple runs of an algorithm: pairwise membership probability and consistency. Based on these two metrics, we propose a solution that improves the consistency without compromising the modularity. We demonstrate that our solution of using pairwise membership probabilities as link weights generates consistent communities within six or fewer cycles for most networks. However, our iterative, pairwise membership reinforcing approach does not deliver convergence for the Flickr, Orkut, and Cyworld networks as well as it does for the rest of the networks. Our approach is empirically driven and is yet to be shown to produce consistent output analytically. We leave further investigation into the topological structure and its impact on the consistency as future work.
In order to evaluate the quality of clustering, we have looked at 3 of the 48 communities identified in the AS graph. Surprisingly, all have either hierarchical, geographical, or topological interpretations to their groupings. Our preliminary evaluation of the quality of communities is promising. We plan to conduct more thorough evaluation of the communities and study network structures and their evolutions using our approach.
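One plausible reading of the two metrics can be sketched as follows; the exact definitions in the paper may differ, and both function names are assumptions:

```python
import numpy as np

def pairwise_membership(runs):
    """runs: list of label arrays, one per algorithm run.  Returns P where
    P[i, j] is the fraction of runs placing nodes i and j in the same
    community (the pairwise membership probability)."""
    runs = [np.asarray(r) for r in runs]
    n = len(runs[0])
    P = np.zeros((n, n))
    for labels in runs:
        P += (labels[:, None] == labels[None, :]).astype(float)
    return P / len(runs)

def consistency(runs):
    # A pair is 'consistent' when its co-membership is all-or-nothing;
    # averaging |2P - 1| over distinct pairs yields 1 for perfectly
    # repeatable clusterings (one plausible reading of the metric).
    P = pairwise_membership(runs)
    iu = np.triu_indices_from(P, k=1)
    return np.abs(2 * P[iu] - 1).mean()
```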
Haewoon Kwak
Yoonchan Choi
Young-Ho Eom
youngho.eom@imtlucca.it
Hawoong Jeong
Sue Moon
2014-12-02T15:12:14Z
2014-12-18T13:56:58Z
http://eprints.imtlucca.it/id/eprint/2383
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2383
2014-12-02T15:12:14Z
Comparison of online social relations in volume vs interaction
Online social networking services are among the most popular Internet services according to Alexa.com and have become a key feature in many Internet services. Users interact through various features of online social networking services: making friend relationships, sharing their photos, and writing comments. These friend relationships are expected to become a key to many other features in web services, such as recommendation engines, security measures, online search, and personalization issues. However, we have very limited knowledge on how much interaction actually takes place over friend relationships declared online. A friend relationship only marks the beginning of online interaction.
Does the interaction between users follow the declaration of a friend relationship? Does a user interact evenly or lopsidedly with friends? We venture to answer these questions in this work. We construct a network from comments written in guestbooks. A node represents a user and a directed edge a comment from one user to another. We call this network an activity network. Previous work on activity networks includes phone-call networks [34, 35] and MSN messenger networks [27]. To the best of our knowledge, this is the first attempt to compare the explicit friend relationship network and the implicit activity network.
We have analyzed structural characteristics of the activity network and compared them with the friends network. Though the activity network is weighted and directed, its structure is similar to the friend relationship network. We report that the in-degree and out-degree distributions are close to each other and that the social interaction through the guestbook is highly reciprocated. When we consider only those links in the activity network that are reciprocated, the degree correlation distribution exhibits much more pronounced assortativity than the friends network, placing it close to known social networks. The k-core analysis gives yet another piece of corroborating evidence that the friends network deviates from known social networks and has an unusually large number of highly connected cores.
We have delved into the weighted and directed nature of the activity network, and investigated the reciprocity, disparity, and network motifs. We also have observed that peer pressure to stay active online stops building up beyond a certain number of friends.
The activity network has shown topological characteristics similar to the friends network, but thanks to its directed and weighted nature, it has allowed us more in-depth analysis of user interaction.
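Two of the weighted/directed measurements mentioned above, reciprocity and disparity, can be sketched on a weight matrix W (a generic-textbook sketch; the paper's precise definitions may vary):

```python
import numpy as np

def reciprocity(W):
    # Fraction of directed links (i -> j) whose reverse link also exists.
    A = (np.asarray(W) > 0).astype(int)
    np.fill_diagonal(A, 0)
    links = A.sum()
    return (A * A.T).sum() / links if links else 0.0

def disparity(W):
    """Per-node disparity Y_i = sum_j (w_ij / s_i)^2 of outgoing weights:
    close to 1 when one friend receives nearly all comments, close to
    1/k_i when the k_i friends are addressed evenly."""
    W = np.asarray(W, dtype=float)
    s = W.sum(1, keepdims=True)
    with np.errstate(invalid="ignore", divide="ignore"):
        Y = np.nan_to_num((W / s) ** 2).sum(1)
    return Y
```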
Hyunwoo Chun
Haewoon Kwak
Young-Ho Eom
youngho.eom@imtlucca.it
Yong-Yeol Ahn
Sue Moon
Hawoong Jeong
2014-11-10T09:17:33Z
2014-11-10T09:17:33Z
http://eprints.imtlucca.it/id/eprint/2351
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2351
2014-11-10T09:17:33Z
Credit Default Swaps networks and systemic risk
Credit Default Swaps (CDS) spreads should reflect the default risk of the underlying corporate debt. However, it has been recognized that CDS spread time series did not anticipate but only followed the increasing risk of default before the financial crisis. In principle, the network of correlations among CDS spread time series could at least display some form of structural change to be used as an early warning of systemic risk. Here we study a set of 176 CDS time series of financial institutions from 2002 to 2011. Networks are constructed in various ways, some of which display structural change at the onset of the credit crisis of 2008, but never before. By taking these networks as a proxy of interdependencies among financial institutions, we run stress tests based on Group DebtRank. Systemic risk before 2008 increases only when incorporating a macroeconomic indicator reflecting the potential losses of financial assets associated with house prices in the US. This approach indicates a promising way to detect systemic instabilities.
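The first step, building a dependency network from the spread time series, might look like the following sketch (the 0.6 threshold and the thresholding rule are illustrative assumptions, not the paper's calibration):

```python
import numpy as np

def correlation_network(series, threshold=0.6):
    """Build an undirected dependency network from spread time series
    (one row per institution): link two institutions when the absolute
    Pearson correlation of their series exceeds the threshold."""
    C = np.corrcoef(series)
    A = (np.abs(C) >= threshold).astype(int)
    np.fill_diagonal(A, 0)
    return A
```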
Michelangelo Puliga
michelangelo.puliga@imtlucca.it
Guido Caldarelli
guido.caldarelli@imtlucca.it
Stefano Battiston
2014-11-05T10:29:31Z
2015-04-08T10:37:32Z
http://eprints.imtlucca.it/id/eprint/2334
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2334
2014-11-05T10:29:31Z
(edited by) Proceedings 7th Interaction and Concurrency Experience, ICE 2014 (Berlin, Germany, 6th June 2014)
This volume contains the proceedings of ICE 2014, the 7th Interaction and Concurrency Experience, which was held in Berlin, Germany on the 6th of June 2014 as a satellite event of DisCoTec 2014. The ICE procedure for paper selection allows PC members to interact, anonymously, with authors. During the review phase, each submitted paper is published on a Wiki and associated with a discussion forum whose access is restricted to the authors and to all the PC members not declaring a conflict of interests. The PC members post comments and questions that the authors reply to. Each paper was reviewed by three PC members, and altogether 8 papers (including 3 short papers) were accepted for publication. We were proud to host two invited talks, by Pavol Cerny and Kim Larsen, whose abstracts are included in this volume together with the regular papers.
Ivan Lanese
Alberto Lluch-Lafuente
Ana Sokolova
Hugo Torres Vieira
hugo.torresvieira@imtlucca.it
2014-10-22T09:53:27Z
2014-10-22T10:00:58Z
http://eprints.imtlucca.it/id/eprint/2331
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2331
2014-10-22T09:53:27Z
Stabilizing linear model predictive control under inexact numerical optimization
This note describes a model predictive control (MPC) formulation for discrete-time linear systems with hard constraints on control and state variables, under the assumption that the solution of the associated quadratic program is neither optimal nor satisfies the inequality constraints. This is common in embedded control applications, for which real-time constraints and limited computing resources dictate restrictions on the possible number of on-line iterations that can be performed within a sampling period. The proposed approach is rather general, in that it does not refer to a particular optimization algorithm, and is based on the definition of an alternative MPC problem that we assume can only be solved within bounded levels of suboptimality, and violation of the inequality constraints. By showing that the inexact solution is a feasible suboptimal one for the original problem, asymptotic or exponential stability is guaranteed for the closed-loop system. Based on the above general results, we focus on a specific dual accelerated gradient-projection method to obtain a stabilizing MPC law that only requires a predetermined maximum number of on-line iterations.
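The fixed-iteration-budget idea can be illustrated with a primal projected-gradient sketch on a box-constrained QP. Note that the paper uses a dual accelerated gradient-projection method; this toy only illustrates terminating after a predetermined number of iterations, and all names are assumptions:

```python
import numpy as np

def box_qp_pgd(H, g, lo, hi, n_iter=30):
    """Projected gradient on min 0.5 u'Hu + g'u subject to lo <= u <= hi,
    run for a fixed iteration budget, mimicking the real-time setting in
    which the MPC quadratic program is solved only to bounded
    suboptimality within a sampling period."""
    L = np.linalg.norm(H, 2)                 # Lipschitz constant of grad
    u = np.clip(np.zeros_like(g), lo, hi)
    for _ in range(n_iter):
        u = np.clip(u - (H @ u + g) / L, lo, hi)
    return u
```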
Matteo Rubagotti
Panagiotis Patrinos
panagiotis.patrinos@imtlucca.it
Alberto Bemporad
alberto.bemporad@imtlucca.it
2014-10-22T09:15:26Z
2014-10-22T09:15:26Z
http://eprints.imtlucca.it/id/eprint/2330
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2330
2014-10-22T09:15:26Z
Cabin heat thermal management in hybrid vehicles using model predictive control
This paper describes a Model Predictive Control (MPC) design for the thermal management of cabin heat in Hybrid Electric Vehicles (HEVs). Because the energy flow in recent energy-efficient vehicles is more complex than in conventional vehicles, the control degrees of freedom increase, as many components can achieve the same function of heating the cabin. This paper proposes an MPC strategy to distribute the workload between the available components in the vehicle, while achieving multiple objectives, such as fuel efficiency and heat-power reference tracking, and enforcing various constraints. First, a simplified linear dynamical model subject to linear time-varying (LTV) constraints is identified, based on high-fidelity simulations of a full nonlinear model. Then an MPC controller is designed to achieve multiple control objectives by manipulating different inputs. Simulation results indicate that the proposed approach is suitable for such multi-objective automotive control problems.
Hasan Esen
Tsutomu Tashiro
Daniele Bernardini
daniele.bernardini@imtlucca.it
Alberto Bemporad
alberto.bemporad@imtlucca.it
2014-10-22T08:30:45Z
2014-10-22T08:30:45Z
http://eprints.imtlucca.it/id/eprint/2329
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2329
2014-10-22T08:30:45Z
MPC for power systems dispatch based on stochastic optimization
In this paper we investigate the problem of optimal real-time power dispatch of an interconnection of conventional power generation plants, renewable resources and energy storage systems. The objective of the problem is to minimize imbalance costs and maximize the profit of the company managing the system whilst satisfying user demand. The managing company is able to trade energy on an electricity market. Energy prices on the market, user demand and intermittent generation from the renewable plants are considered stochastic processes. We show that under certain assumptions, the stochastic power dispatch problem over a finite horizon can be recast, under a proper choice for the feedback policies and for the disturbance set, into a stochastic optimization formulation but with deterministic constraints. We carry out a systematic study of stochastic optimization methods to solve this problem, in particular we analyze the stochastic gradient method. We also show that this problem can be approximated by a proper deterministic optimization problem using the sample average approximation method, which can then be solved by standard means.
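The sample average approximation step can be illustrated on a deliberately tiny one-variable dispatch toy: the expected imbalance cost is replaced by its average over demand scenarios and then minimized deterministically (all quantities and names are hypothetical, far simpler than the paper's stochastic program):

```python
import numpy as np

def saa_commit(demand_samples, buy_price, sell_price, grid):
    """Sample average approximation: pick the committed generation g
    (from a candidate grid) minimizing the average cost of buying
    shortfalls and dumping surpluses over the demand scenarios."""
    d = np.asarray(demand_samples, dtype=float)
    costs = [(buy_price * np.maximum(d - g, 0.0)
              + sell_price * np.maximum(g - d, 0.0)).mean() for g in grid]
    return grid[int(np.argmin(costs))]
```

As in the newsvendor problem, the optimal commitment tracks a quantile of demand set by the price asymmetry.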
Ion Necoara
Dragos Nicolae Clipici
Panagiotis Patrinos
panagiotis.patrinos@imtlucca.it
Alberto Bemporad
alberto.bemporad@imtlucca.it
2014-10-21T13:19:35Z
2014-10-22T08:17:50Z
http://eprints.imtlucca.it/id/eprint/2327
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2327
2014-10-21T13:19:35Z
Controlled drug administration by a fractional PID
Amiodarone is an antiarrhythmic drug that exhibits highly complex, non-exponential dynamics, whose controlled administration has important implications for its clinical use, especially for long-term therapies. Its pharmacokinetics has been accurately modelled using a fractional-order compartmental model. In this paper we design a fractional-order PID controller and we evaluate its dynamical characteristics in terms of the stability margins of the closed loop and the ability of the controlled system to attenuate various sources of noise and uncertainty.
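Digital implementations of fractional-order terms typically rely on the Grunwald-Letnikov discretisation, which can be sketched as follows (a generic textbook scheme, not the paper's specific controller design):

```python
import numpy as np

def gl_fractional_derivative(f, alpha, h):
    """Grunwald-Letnikov approximation of the order-alpha fractional
    derivative of uniformly sampled values f (step h), the standard
    discretisation behind digital fractional-order PID terms."""
    n = len(f)
    c = np.empty(n)
    c[0] = 1.0
    for k in range(1, n):                      # recursive binomial weights
        c[k] = c[k - 1] * (1.0 - (alpha + 1.0) / k)
    out = np.empty(n)
    for t in range(n):                         # causal convolution with weights
        out[t] = (c[: t + 1] * f[t::-1]).sum() / h ** alpha
    return out
```

For alpha = 1 the weights collapse to [1, -1, 0, ...] and the scheme reduces to the ordinary backward difference.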
Pantelis Sopasakis
pantelis.sopasakis@imtlucca.it
Haralambos Sarimveis
2014-10-21T13:08:59Z
2016-04-06T09:40:34Z
http://eprints.imtlucca.it/id/eprint/2326
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2326
2014-10-21T13:08:59Z
Water demand forecasting for the optimal operation of large-scale drinking water networks: the Barcelona case study
Drinking Water Networks (DWN) are large-scale multiple-input multiple-output systems with uncertain disturbances (such as the water demand from the consumers) and involve components of linear, non-linear and switching nature. Operating, safety and quality constraints make it important for the state and the input of such systems to be constrained to a given domain. Moreover, the operation of DWNs is driven by time-varying demands and involves a considerable consumption of electric energy and the exploitation of limited water resources. Hence, the management of these networks must be carried out optimally with respect to the use of available resources and infrastructure, whilst satisfying high service levels for the drinking water supply. To accomplish this task, this paper explores various methods for demand forecasting, such as Seasonal ARIMA, BATS and Support Vector Machines, and presents a set of statistically validated time series models. These models, integrated with the Model Predictive Control (MPC) strategy addressed in this paper, allow for accurate on-line forecasting and flow management of a DWN.
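Forecasting models such as those above are conventionally benchmarked against a seasonal-naive baseline, which simply repeats the last observed season (an illustrative baseline, not one of the paper's models; the 24-sample season assumes hourly data):

```python
import numpy as np

def seasonal_naive_forecast(demand, season=24, horizon=24):
    """Forecast the next `horizon` samples by repeating the last observed
    season of the demand series."""
    last = np.asarray(demand, dtype=float)[-season:]
    reps = int(np.ceil(horizon / season))
    return np.tile(last, reps)[:horizon]
```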
Ajay Kumar Sampathirao
Juan Manuel Grosso Pérez
Pantelis Sopasakis
pantelis.sopasakis@imtlucca.it
Carlos Ocampo-Martinez
Alberto Bemporad
alberto.bemporad@imtlucca.it
Vicenç Puig
2014-10-10T09:34:56Z
2015-04-08T10:37:32Z
http://eprints.imtlucca.it/id/eprint/2323
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2323
2014-10-10T09:34:56Z
Analysis of service oriented software systems with the conversation calculus
We overview some perspectives on the concept of service-based computing, and discuss the motivation of a small set of modeling abstractions for expressing and analyzing service based systems, which have led to the design of the Conversation Calculus. Distinguishing aspects of the Conversation Calculus are the adoption of a very simple, context sensitive, local message-passing communication mechanism, natural support for modeling multi-party conversations, and a novel mechanism for handling exceptional behavior. In this paper, written in a tutorial style, we review some Conversation Calculus based analysis techniques for reasoning about properties of service-based systems, mainly by going through a sequence of illustrating examples.
Luis Caires
Hugo Torres Vieira
hugo.torresvieira@imtlucca.it
2014-10-09T13:45:05Z
2015-04-08T10:37:32Z
http://eprints.imtlucca.it/id/eprint/2322
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2322
2014-10-09T13:45:05Z
Typing liveness in multiparty communicating systems
Session type systems are an effective tool to prove that communicating programs do not go wrong, ensuring that the participants of a session follow the protocols described by the types. In a previous work we introduced a typing discipline for the analysis of progress in binary sessions. In this paper we generalize the approach to multiparty sessions following the conversation type approach, while strengthening progress to liveness. We combine the usual session-like fidelity analysis with the liveness analysis and devise an original treatment of recursive types allowing us to address challenging configurations that are out of the reach of existing approaches.
Luca Padovani
Vasco Thudichum Vasconcelos
Hugo Torres Vieira
hugo.torresvieira@imtlucca.it
2014-10-09T13:30:52Z
2015-04-08T10:37:32Z
http://eprints.imtlucca.it/id/eprint/2320
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2320
2014-10-09T13:30:52Z
Typing progress in communication-centred systems
We present a type system for the analysis of progress in session-based communication centred systems. Our development is carried out in a minimal setting considering classic (binary) sessions, but building on and generalising previous work on progress analysis in the context of conversation types. Our contributions aim at underpinning forthcoming works on progress for session-typed systems, so as to support richer verification procedures based on a more foundational approach. Although this work does not target expressiveness, our approach already addresses challenging scenarios which are unaccounted for elsewhere in the literature, in particular systems that interleave communications on received session channels.
Hugo Torres Vieira
hugo.torresvieira@imtlucca.it
Vasco Thudichum Vasconcelos
2014-10-09T13:26:39Z
2015-04-08T10:37:32Z
http://eprints.imtlucca.it/id/eprint/2319
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2319
2014-10-09T13:26:39Z
A type system for flexible role assignment in multiparty communicating systems
Communication protocols in distributed systems often specify the roles of the parties involved in the communications, namely for enforcing security policies or task assignment purposes. Ensuring that implementations follow role-based protocol specifications is challenging, especially in scenarios found, e.g., in business processes and web applications, where multiple peers are involved, single peers impersonate several roles, or single roles are carried out by several peers. We present a type-based analysis for statically verifying role-based multi-party interactions, based on a simple π-calculus model and prior work on conversation types. Our main result ensures that well-typed systems follow the role-based protocols prescribed by the types, including systems where roles are flexibly assigned to processes.
Pedro Baltazar
Luis Caires
Vasco Thudichum Vasconcelos
Hugo Torres Vieira
hugo.torresvieira@imtlucca.it
2014-10-09T13:21:53Z
2015-04-08T10:37:32Z
http://eprints.imtlucca.it/id/eprint/2318
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2318
2014-10-09T13:21:53Z
SLMC: a tool for model checking concurrent systems against dynamical spatial logic specifications
The Spatial Logic Model Checker is a tool for verifying π-calculus systems against safety, liveness, and structural properties expressed in the spatial logic for concurrency of Caires and Cardelli. Model-checking is one of the most widely used techniques to check temporal properties of software systems. However, when the analysis focuses on properties related to resource usage, localities, interference, mobility, or topology, it is crucial to reason about spatial properties and structural dynamics. The SLMC is the only currently available tool that supports the combined analysis of behavioral and spatial properties of systems. The implementation, written in OCaml, is mature and robust, available in open source, and outperforms other tools for verifying systems modeled in the π-calculus.
Luis Caires
Hugo Torres Vieira
hugo.torresvieira@imtlucca.it
2014-10-09T13:14:13Z
2015-04-08T10:37:32Z
http://eprints.imtlucca.it/id/eprint/2317
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2317
2014-10-09T13:14:13Z
Typing dynamic roles in multiparty interaction
We present a type-based analysis for role-based multiparty interaction. Novel to our approach are the notions that a role specified in a protocol may be carried out by several parties, and that one party may assume different roles at different stages of the protocol. We build on Conversation Types by adding roles to protocol specifications. Systems are modeled in a π-calculus extended with labeled communication and role annotations. The main result shows that well-typed systems follow the role-based protocols prescribed by the types, addressing systems where roles have dynamic distributed implementations.
Pedro Baltazar
Vasco Thudichum Vasconcelos
Hugo Torres Vieira
hugo.torresvieira@imtlucca.it
2014-10-09T12:45:52Z
2015-04-08T10:37:32Z
http://eprints.imtlucca.it/id/eprint/2315
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2315
2014-10-09T12:45:52Z
Static analysis techniques for session-oriented calculi
In the Sensoria project, core calculi have been adopted as a linguistic means to model and analyze service-oriented applications. The present chapter reports on the static analysis techniques developed for the Sensoria session-oriented core calculi CaSPiS and CC. In particular, it presents a type system for client progress and control flow analysis in CaSPiS, and type systems for conversation fidelity and progress in CC. The chapter gives an overview of these techniques, summarizes the main results and presents the analysis of a common example taken from the Sensoria financial case study: the credit request scenario.
Lucia Acciai
Chiara Bodei
Michele Boreale
Roberto Bruni
Hugo Torres Vieira
hugo.torresvieira@imtlucca.it
2014-10-09T12:43:01Z
2015-04-08T10:37:32Z
http://eprints.imtlucca.it/id/eprint/2314
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2314
2014-10-09T12:43:01Z
Behavioral theory for session-oriented calculi
This chapter presents the behavioral theory of some of the Sensoria core calculi. We consider SSCC, μse and CC as representatives of the session-based approach and COWS as representative of the correlation-based one.
For SSCC, μse and CC the main point is the structure that the session/conversation mechanism creates in programs. We show how the differences between binary sessions, multiparty sessions and dynamic conversations are captured by different behavioral laws. We also exploit those laws to prove the correctness of program transformations.
For COWS the main point is that communication is prioritized (the best matching input captures the output), and this has a strong influence on the behavioral theory of COWS. In particular, we show that communication in COWS is neither purely synchronous nor purely asynchronous.
Ivan Lanese
Antonio Ravara
Hugo Torres Vieira
hugo.torresvieira@imtlucca.it
2014-10-09T12:31:27Z
2015-04-08T10:37:32Z
http://eprints.imtlucca.it/id/eprint/2313
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2313
2014-10-09T12:31:27Z
Advanced mechanisms for service combination and transactions
Languages and models for service-oriented applications usually include primitives and constructs for exception and compensation handling. Exception handling is used to react to unexpected events while compensation handling is used to undo previously completed activities. In this chapter we investigate the impact of exception and compensation handling in message-based process calculi and the related theories developed within Sensoria.
Carla Ferreira
Ivan Lanese
Antonio Ravara
Hugo Torres Vieira
hugo.torresvieira@imtlucca.it
Gianluigi Zavattaro
2014-10-09T11:59:22Z
2015-04-08T10:37:32Z
http://eprints.imtlucca.it/id/eprint/2311
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2311
2014-10-09T11:59:22Z
Tools and verification
This chapter presents different tools that have been developed inside the Sensoria project. Sensoria studied qualitative analysis techniques for verifying properties of service implementations with respect to their formal specifications. The tools presented in this chapter have been developed to carry out the analysis in an automated, or semi-automated, way.
We present four different tools, all developed during the Sensoria project, exploiting new techniques and calculi from the Sensoria project itself.
Massimo Bartoletti
Luis Caires
Ivan Lanese
Franco Mazzanti
Davide Sangiorgi
Hugo Torres Vieira
hugo.torresvieira@imtlucca.it
Roberto Zunino
2014-10-09T11:45:31Z
2015-04-08T10:37:32Z
http://eprints.imtlucca.it/id/eprint/2310
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2310
2014-10-09T11:45:31Z
Spatial logic model checker user’s guide: version 1.15
Luis Caires
Hugo Torres Vieira
hugo.torresvieira@imtlucca.it
2014-10-08T13:56:17Z
2015-04-08T10:37:32Z
http://eprints.imtlucca.it/id/eprint/2300
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2300
2014-10-08T13:56:17Z
A calculus for modeling and analyzing conversations in service-oriented computing
The service-oriented computing paradigm has motivated a large research effort in the past few years. On the one hand, the wide dissemination of Web-Service technology urged for the development of standards, tools and formal techniques that contributed for the design of more reliable systems. On the other hand, many of the problems presented in the study of service-oriented applications find an existing work basis in well-established research fields, as is the case of the study of interaction models that has been an active field of research in the last couple of decades. However, there are many new problems raised by the service-oriented computing paradigm in particular that call for new concepts, dedicated models and specialized formal analysis techniques. The work presented in this dissertation is inserted in such effort, with particular focus on the challenges involved in governing interaction in service-oriented applications. One of the main innovations introduced by the work presented here is the way in which multiparty interaction is handled. One reference field of research that addresses the specification and analysis of interaction of communication-centric systems is based on the notion of session. Essentially, a session characterizes the interaction between two parties, a client and a server,that exchange messages between them in a sequential and dual way. The notion of session is thus particularly adequate to model the client/server paradigm, however it fails to cope with interaction between several participants, a scenario frequently found in real service-oriented applications. 
The approach described in this dissertation improves on the state of the art in that it makes it possible to model and analyze systems where several parties interact, while retaining the fundamental flavor of session-based approaches, by relying on a novel notion of conversation: a simple extension of the notion of session that allows several parties to interact in a single medium of communication in a disciplined way, via labeled message passing. The contributions of this dissertation address the modeling and analysis of service-oriented applications in a rigorous way. First, we propose and study a formal model for service-oriented computing, the Conversation Calculus, which, building on the abstract notion of conversation, captures the interactions between several parties relative to the same service task using a single medium of communication. Second, we introduce formal analysis techniques, namely the conversation type system and the progress proof system, which can be used to ensure, in a provably correct way and at static verification time (before deploying such applications), that systems enjoy good properties such as “the prescribed protocols will be followed at runtime by all conversation participants” (conversation fidelity) and “the system will never run into a stuck state” (progress). We give substantial evidence that our approach is already effective enough to model and type sophisticated service-based systems at a fairly high level of abstraction. Examples of such systems include challenging scenarios involving simultaneous multiparty conversations, with concurrency and access to local resources, and conversations with a dynamically changing and unanticipated number of participants, which fall outside the scope of previous approaches.
Hugo Torres Vieira
hugo.torresvieira@imtlucca.it
2014-10-08T13:47:34Z
2015-04-08T10:37:32Z
http://eprints.imtlucca.it/id/eprint/2299
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2299
2014-10-08T13:47:34Z
Conversation types
We present a type theory for analyzing concurrent multiparty interactions as found in service-oriented computing. Our theory introduces a novel and flexible type structure, able to uniformly describe both the internal and the interface behavior of systems, referred to, respectively, as choreographies and contracts in web-services terminology. The notion of conversation builds on the fundamental concept of session, but generalizes it along directions so far unexplored; in particular, conversation types discipline interactions in conversations while accounting for the dynamic joining and leaving of an unanticipated number of participants. We prove that well-typed systems never violate the prescribed conversation constraints. We also present techniques to ensure progress of systems involving several interleaved conversations, a previously open problem.
Luis Caires
Hugo Torres Vieira
hugo.torresvieira@imtlucca.it
2014-10-08T13:38:03Z
2015-04-08T10:37:32Z
http://eprints.imtlucca.it/id/eprint/2298
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2298
2014-10-08T13:38:03Z
Conversation types
We present a type theory for analyzing concurrent multiparty interactions as found in service-oriented computing. Our theory introduces a novel and flexible type structure, able to uniformly describe both the internal and the interface behavior of systems, referred to, respectively, as choreographies and contracts in web-services terminology. The notion of conversation builds on the fundamental concept of session, but generalizes it along directions so far unexplored; in particular, conversation types discipline interactions in conversations while accounting for the dynamic joining and leaving of an unanticipated number of participants. We prove that well-typed systems never violate the prescribed conversation constraints. We also present techniques to ensure progress of systems involving several interleaved conversations, a previously open problem.
Luis Caires
Hugo Torres Vieira
hugo.torresvieira@imtlucca.it
2014-10-08T13:21:57Z
2015-04-08T10:37:32Z
http://eprints.imtlucca.it/id/eprint/2297
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2297
2014-10-08T13:21:57Z
A process calculus analysis of compensations
Conversations in service-oriented computation are frequently long-running. In such a setting, the traditional ACID properties of transactions cannot be reasonably implemented, and compensation mechanisms seem to provide convenient techniques to at least approximate them. In this paper, we investigate the representation and analysis of structured compensating transactions within a process calculus model, by embedding in the Conversation Calculus structured compensation programming abstractions inspired by those proposed by Butler, Ferreira, and Hoare. We prove the correctness of the embedding by developing a general notion of stateful model for structured compensations, together with related results, and by showing that the embedding induces such a model.
Luis Caires
Carla Ferreira
Hugo Torres Vieira
hugo.torresvieira@imtlucca.it
2014-10-08T13:13:49Z
2015-04-08T10:37:32Z
http://eprints.imtlucca.it/id/eprint/2296
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2296
2014-10-08T13:13:49Z
The conversation calculus: a model of service-oriented computation
We present a process-calculus model for expressing and analyzing service-based systems. Our approach addresses central features of the service-oriented computational model such as distribution, process delegation, communication and context sensitivity, and loose coupling. Distinguishing aspects of our model are the notion of conversation context, the adoption of context-sensitive, message-passing-based communication, and a simple yet expressive mechanism for handling exceptional behavior. We instantiate our model by extending a fragment of the π-calculus, illustrate its expressiveness by means of many examples, and study its basic behavioral theory; in particular, we establish that bisimilarity is a congruence.
Hugo Torres Vieira
hugo.torresvieira@imtlucca.it
Luis Caires
João C. Seco
2014-10-07T13:25:24Z
2015-04-08T10:37:32Z
http://eprints.imtlucca.it/id/eprint/2291
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2291
2014-10-07T13:25:24Z
The spatial logic model checker user's manual: version 1.0
Hugo Torres Vieira
hugo.torresvieira@imtlucca.it
Luis Caires
Rubens Viegas
2014-10-07T13:16:48Z
2015-04-08T10:37:32Z
http://eprints.imtlucca.it/id/eprint/2290
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2290
2014-10-07T13:16:48Z
The spatial logic model checker user's manual: version 0.9
Hugo Torres Vieira
hugo.torresvieira@imtlucca.it
Luis Caires
2014-09-02T09:44:03Z
2014-09-02T09:44:03Z
http://eprints.imtlucca.it/id/eprint/2273
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2273
2014-09-02T09:44:03Z
Douglas-Rachford splitting: complexity estimates and accelerated variants
We propose a new approach for analyzing the convergence of the Douglas-Rachford splitting method for solving convex composite optimization problems. The approach is based on a continuously differentiable function, the Douglas-Rachford Envelope (DRE), whose stationary points correspond to the solutions of the original (possibly nonsmooth) problem. The Douglas-Rachford splitting method is shown to be equivalent to a scaled gradient method on the DRE, so results from smooth unconstrained optimization can be employed to analyze its convergence, optimally choose the parameter γ, and derive an accelerated variant of Douglas-Rachford splitting.
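For reference, the classical Douglas-Rachford iteration that the DRE analysis builds on can be sketched as follows. This is a minimal NumPy illustration on a toy ℓ1-regularized problem; the problem instance, step size γ, and helper names are ours, not the paper's, and this is the basic method rather than the accelerated variant derived there.

```python
import numpy as np

def prox_l1(v, t):
    # Soft-thresholding: proximal operator of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_quad(v, t, b):
    # Proximal operator of t * 0.5 * ||x - b||^2
    return (v + t * b) / (1.0 + t)

def douglas_rachford(b, lam=0.5, gamma=1.0, iters=200):
    """Minimize lam*||x||_1 + 0.5*||x - b||^2 with the classical
    Douglas-Rachford splitting iteration."""
    z = np.zeros_like(b, dtype=float)
    for _ in range(iters):
        x = prox_quad(z, gamma, b)           # prox step on the smooth term
        y = prox_l1(2 * x - z, gamma * lam)  # reflected prox on the l1 term
        z = z + (y - x)                      # update of the governing sequence
    return x

# For this separable toy problem the minimizer is soft-thresholding of b,
# so douglas_rachford(np.array([1.5, -0.2, 0.8])) ≈ [1.0, 0.0, 0.3].
```

The fixed points of the `z` sequence correspond to minimizers of the sum, which is what the DRE's stationary points characterize in smooth terms.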
Panagiotis Patrinos
panagiotis.patrinos@imtlucca.it
Lorenzo Stella
lorenzo.stella@imtlucca.it
Alberto Bemporad
alberto.bemporad@imtlucca.it
2014-08-08T10:44:40Z
2014-08-08T10:44:40Z
http://eprints.imtlucca.it/id/eprint/2270
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2270
2014-08-08T10:44:40Z
Annotated image datasets of rosette plants
While image-based approaches to plant phenotyping are gaining momentum, benchmark data focusing on typical imaging situations and tasks in plant phenotyping are still lacking, making it difficult to compare existing methodologies. This report describes a benchmark dataset of raw and annotated images of plants. We describe the plant material, environmental conditions, and imaging setup and procedures, as well as the datasets from which this image selection stems. We also describe the annotation process, since all of these images have been manually segmented by experts, such that each leaf has its own label. Color images in the dataset show top-down views of young rosette plants. Two datasets show different genotypes of Arabidopsis, while another dataset shows tobacco (Nicotiana tabacum) under different treatments. A version of the dataset, also described in this report, is in the public domain at http://www.plant-phenotyping.org/CVPPP2014-dataset and can be used for the purpose of plant/leaf segmentation from background, with accompanying evaluation scripts. This version was used in the Leaf Segmentation Challenge (LSC) of the Computer Vision Problems in Plant Phenotyping (CVPPP 2014) workshop organized in conjunction with the 13th European Conference on Computer Vision (ECCV), in Zürich, Switzerland. We hope that the release of this and future datasets will invigorate the study of computer vision problems and the development of algorithms in the context of plant phenotyping. We also aim to provide the computer vision community with another interesting dataset on which new algorithmic developments can be evaluated.
Hanno Scharr
Massimo Minervini
massimo.minervini@imtlucca.it
Andreas Fischbach
Sotirios A. Tsaftaris
sotirios.tsaftaris@imtlucca.it
2014-08-04T11:23:26Z
2016-04-06T08:20:36Z
http://eprints.imtlucca.it/id/eprint/2268
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2268
2014-08-04T11:23:26Z
Comparing algorithms for automated vessel segmentation in computed tomography scans of the lung: the VESSEL12 study
The VESSEL12 (VESsel SEgmentation in the Lung) challenge objectively compares the performance of different algorithms to identify vessels in thoracic computed tomography (CT) scans. Vessel segmentation is fundamental in computer aided processing of data generated by 3D imaging modalities. As manual vessel segmentation is prohibitively time consuming, any real world application requires some form of automation. Several approaches exist for automated vessel segmentation, but judging their relative merits is difficult due to a lack of standardized evaluation. We present an annotated reference dataset containing 20 CT scans and propose nine categories to perform a comprehensive evaluation of vessel segmentation algorithms from both academia and industry. Twenty algorithms participated in the VESSEL12 challenge, held at International Symposium on Biomedical Imaging (ISBI) 2012. All results have been published at the VESSEL12 website http://vessel12.grand-challenge.org. The challenge remains ongoing and open to new participants. Our three contributions are: (1) an annotated reference dataset available online for evaluation of new algorithms; (2) a quantitative scoring system for objective comparison of algorithms; and (3) performance analysis of the strengths and weaknesses of the various vessel segmentation methods in the presence of various lung diseases.
Rina D. Rudyanto
Sjoerd Kerkstra
Eva M. van Rikxoort
Catalin Fetita
Pierre-Yves Brillet
Christophe Lefevre
Wenzhe Xue
Xiangjun Zhu
Jianming Liang
Ilkay Oksuz
ilkay.oksuz@imtlucca.it
Devrim Ünay
Kamuran Kadipasaoglu
Raúl San José Estépar
James C. Ross
George R. Washko
Juan-Carlos Prieto
Marcela Hernández Hoyos
Maciej Orkisz
Hans Meine
Markus Hüllebrand
Christina Stöcker
Fernando Lopez Mir
Valery Naranjo
Eliseo Villanueva
Marius Staring
Changyan Xiao
Berend C. Stoel
Anna Fabijanska
Erik Smistad
Anne C. Elster
Frank Lindseth
Amir Hossein Foruzan
Ryan Kiros
Karteek Popuri
Dana Cobzas
Daniel Jimenez-Carretero
Andres Santos
Maria J. Ledesma-Carbayo
Michael Helmberger
Martin Urschler
Michael Pienn
Dennis G.H. Bosboom
Arantza Campo
Mathias Prokop
Pim A. de Jong
Carlos Ortiz-de-Solorzano
Arrate Muñoz-Barrutia
Bram van Ginneken
2014-07-29T08:14:57Z
2014-07-29T08:14:57Z
http://eprints.imtlucca.it/id/eprint/2266
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2266
2014-07-29T08:14:57Z
Assessment of myocardial reactivity to controlled hypercapnia with free-breathing T2-prepared cardiac blood oxygen level–dependent MR imaging
Purpose: To examine whether controlled and tolerable levels of hypercapnia may be an alternative to adenosine, a routinely used coronary vasodilator, in healthy human subjects and animals. Materials and Methods: Human studies were approved by the institutional review board and were HIPAA compliant. Eighteen subjects had end-tidal partial pressure of carbon dioxide (PetCO2) increased by 10 mm Hg, and myocardial perfusion was monitored with myocardial blood oxygen level–dependent (BOLD) magnetic resonance (MR) imaging. Animal studies were approved by the institutional animal care and use committee. Anesthetized canines with (n = 7) and without (n = 7) induced stenosis of the left anterior descending artery (LAD) underwent vasodilator challenges with hypercapnia and adenosine. LAD coronary blood flow velocity and free-breathing myocardial BOLD MR responses were measured at each intervention. Appropriate statistical tests were performed to evaluate measured quantitative changes in all parameters of interest in response to changes in partial pressure of carbon dioxide.
Results: Changes in myocardial BOLD MR signal were equivalent to reported changes with adenosine (11.2% ± 10.6 [hypercapnia, 10 mm Hg] vs 12% ± 12.3 [adenosine]; P = .75). In intact canines, there was a sigmoidal relationship between BOLD MR response and PetCO2 with most of the response occurring over a 10 mm Hg span. BOLD MR (17% ± 14 [hypercapnia] vs 14% ± 24 [adenosine]; P = .80) and coronary blood flow velocity (21% ± 16 [hypercapnia] vs 26% ± 27 [adenosine]; P > .99) responses were similar to that of adenosine infusion. BOLD MR signal changes in canines with LAD stenosis during hypercapnia and adenosine infusion were not different (1% ± 4 [hypercapnia] vs 6% ± 4 [adenosine]; P = .12). Conclusion: Free-breathing T2-prepared myocardial BOLD MR imaging showed that hypercapnia of 10 mm Hg may provide a cardiac hyperemic stimulus similar to adenosine.
Hsin-Jung Yang
Roya Yumul
Richard Tang
Ivan Cokic
Michael Klein
Avinash Kali
Olivia Sobczyk
Behzad Sharif
Jun Tang
Xiaoming Bi
Sotirios A. Tsaftaris
sotirios.tsaftaris@imtlucca.it
Debiao Li
Antonio Hernandez Conte
Joseph A. Fisher
Rohan Dharmakumar
2014-07-16T12:09:17Z
2014-12-03T13:05:43Z
http://eprints.imtlucca.it/id/eprint/2260
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2260
2014-07-16T12:09:17Z
Stabilizing dynamic controllers for hybrid systems: a hybrid control Lyapunov function approach
This paper proposes a dynamic controller structure and a systematic design procedure for stabilizing discrete-time hybrid systems. The proposed approach is based on the concept of control Lyapunov functions (CLFs), which, when available, can be used to design a stabilizing state-feedback control law. In general, the construction of a CLF for hybrid dynamical systems involving both continuous and discrete states is extremely complicated, especially in the presence of non-trivial discrete dynamics. Therefore, we introduce the novel concept of a hybrid control Lyapunov function, which allows the compositional design of a discrete and a continuous part of the CLF, and we formally prove that the existence of a hybrid CLF guarantees the existence of a classical CLF. A constructive procedure is provided to synthesize a hybrid CLF, by expanding the dynamics of the hybrid system with a specific controller dynamics. We show that this synthesis procedure leads to a dynamic controller that can be implemented by a receding horizon control strategy, and that the associated optimization problem is numerically tractable for a fairly general class of hybrid systems, useful in real world applications. Compared to classical hybrid receding horizon control algorithms, the proposed approach typically requires a shorter prediction horizon to guarantee asymptotic stability of the closed-loop system, which yields a reduction of the computational burden, as illustrated through two examples.
Stefano Di Cairano
W.P.M.H. Heemels
Mircea Lazar
Alberto Bemporad
alberto.bemporad@imtlucca.it
2014-07-08T13:40:37Z
2014-07-08T13:40:37Z
http://eprints.imtlucca.it/id/eprint/2251
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2251
2014-07-08T13:40:37Z
An algorithm for PWL approximations of nonlinear functions
In this report we provide some technical details for some of the results that appeared in [Alessio et al. (2005)]. In the first section we provide the proof of continuity of the PPWA function computed with the "squaring the circle" algorithm presented at ACC 06. Then we analyze the complexity of that algorithm in terms of the desired level of accuracy in the approximation of the PPWA function.
Alessandro Alessio
Alberto Bemporad
alberto.bemporad@imtlucca.it
B. Addis
Alessandro Pasini
2014-07-03T10:18:42Z
2016-04-06T09:20:28Z
http://eprints.imtlucca.it/id/eprint/2241
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2241
2014-07-03T10:18:42Z
Standardized evaluation framework for evaluating coronary artery stenosis detection, stenosis quantification and lumen segmentation algorithms in computed tomography angiography
Though conventional coronary angiography (CCA) has been the standard of reference for diagnosing coronary artery disease in the past decades, computed tomography angiography (CTA) has rapidly emerged and is nowadays widely used in clinical practice. Here, we introduce a standardized evaluation framework to reliably evaluate and compare the performance of algorithms devised to detect and quantify coronary artery stenoses, and to segment the coronary artery lumen, in CTA data. The objective of this evaluation framework is to demonstrate the feasibility of dedicated algorithms to: (1) (semi-)automatically detect and quantify stenoses on CTA, in comparison with quantitative coronary angiography (QCA) and CTA consensus reading, and (2) (semi-)automatically segment the coronary lumen on CTA, in comparison with experts' manual annotations. A database consisting of 48 multicenter, multivendor cardiac CTA datasets with corresponding reference standards is described and made available. The algorithms of 11 research groups were quantitatively evaluated and compared. The results show that (1) some of the current stenosis detection/quantification algorithms may be used for triage or as a second reader in clinical practice, and that (2) automatic lumen segmentation is possible with a precision similar to that obtained by experts. The framework is open for new submissions through the website, at http://coronary.bigr.nl/stenoses/.
H.A. Kirişli
M. Schaap
C.T. Metz
A.S. Dharampal
W.B. Meijboom
S.L. Papadopoulou
A. Dedic
K. Nieman
M.A. de Graaf
M.F.L. Meijs
M.J. Cramer
A. Broersen
S. Cetin
A. Eslami
L. Flórez-Valencia
K.L. Lor
B. Matuszewski
I. Melki
B. Mohr
Ilkay Oksuz
ilkay.oksuz@imtlucca.it
R. Shahzad
C. Wang
P.H. Kitslaar
G. Unal
A. Katouzian
Maciej Orkisz
C.M. Chen
F. Precioso
L. Najman
S. Masood
Devrim Unay
L. van Vliet
R. Moreno
R. Goldenberg
E. Vuçini
G.P. Krestin
W.J. Niessen
T. van Walsum
2014-07-03T10:05:28Z
2014-07-03T10:05:28Z
http://eprints.imtlucca.it/id/eprint/2240
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2240
2014-07-03T10:05:28Z
Automated aortic supravalvular sinus detection in conventional computed tomography image
Valvular diseases are those in which one or more of the cardiac valves are affected. Treatment of valvular diseases often involves replacement or restoration of the affected valve(s). In such a surgical procedure, the medical expert performing the procedure can largely benefit from a patient-specific and dynamic valvular model containing information complementary to the 2D/3D static images. To this end, this study presents a novel automated supravalvular sinus detection method (to be used as a first step in aortic valve segmentation) on conventional contrast-enhanced, ECG-gated multislice CT data, together with its evaluation on 31 expert-annotated real cases. Results demonstrate highly accurate detection performance, with an average error below 1.12 mm.
Devrim Unay
Ibrahim Harmankaya
Ilkay Oksuz
ilkay.oksuz@imtlucca.it
Kamuran Kadipasaoglu
Rahmi Cubuk
Levent Celik
2014-07-03T10:00:27Z
2014-07-03T10:00:27Z
http://eprints.imtlucca.it/id/eprint/2239
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2239
2014-07-03T10:00:27Z
Region growing on frangi vesselness values in 3-D CTA data
In cardiac-related diagnostic methods, the shape and curvature of the coronary arteries are essential. Consequently, one of the most important requirements for Computer Aided Diagnosis (CAD) systems is automated segmentation of the vasculature. In this paper, we propose a new hybrid algorithm that segments the coronary arterial tree in CTA images by merging two methodologies, namely region growing and the Frangi vesselness approach. The algorithm first runs region growing on the Frangi vesselness values and subsequently optimizes the results over several threshold values. Comparison of the present results with the optimal results of existing segmentation algorithms reveals that the proposed approach outperforms its predecessors. The diagnostic accuracy of the algorithm will next be validated on the segmentation of coronary arteries from real CT data.
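The region-growing half of such a hybrid scheme can be sketched as follows. This is a minimal illustration, assuming the Frangi vesselness volume has already been computed (e.g. by an external Frangi filter); the function and parameter names are ours, not the authors' implementation.

```python
import numpy as np
from collections import deque

def region_grow(vesselness, seed, threshold):
    """Grow a 6-connected region from `seed`, accepting voxels whose
    Frangi vesselness value exceeds `threshold`. The vesselness volume
    is assumed precomputed."""
    mask = np.zeros(vesselness.shape, dtype=bool)
    if vesselness[seed] <= threshold:
        return mask                      # seed itself is not vessel-like
    mask[seed] = True
    queue = deque([seed])
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
               (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in offsets:
            n = (z + dz, y + dy, x + dx)
            if (all(0 <= c < s for c, s in zip(n, vesselness.shape))
                    and not mask[n] and vesselness[n] > threshold):
                mask[n] = True
                queue.append(n)
    return mask

# Toy volume with a bright "vessel" along one axis:
vol = np.zeros((5, 5, 5))
vol[2, 2, :] = 1.0
mask = region_grow(vol, (2, 2, 0), threshold=0.5)  # selects the 5 bright voxels
```

Sweeping `threshold` over several values, as the abstract describes, then amounts to rerunning the growth and scoring each resulting mask.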
Ilkay Oksuz
ilkay.oksuz@imtlucca.it
Devrim Unay
Kamuran Kadipasaoglu
2014-07-03T09:44:43Z
2015-05-29T10:25:43Z
http://eprints.imtlucca.it/id/eprint/2237
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2237
2014-07-03T09:44:43Z
Biharmonic density estimate - a scale space signature for deformable surfaces
A novel intrinsic geometric scale space formulation for 3D deformable surfaces, termed the Biharmonic Density Estimate (BDE), is proposed. The proposed BDE signature allows for multiscale surface feature-based representation of deformable 3D shapes for subsequent image and scene analysis. It is shown to provide an underlying theoretical framework for the concept of intrinsic geometric scale space, resulting in a highly descriptive characterization of both the local surface structure and the global metric of the 3D shape. The compactness and robustness of the proposed BDE signature are demonstrated via a series of experiments and a key-component detection application.
Anirban Mukhopadhyay
anirban.mukhopadhyay@imtlucca.it
Suchendra M. Bhandarkar
2014-07-03T09:34:35Z
2014-07-03T09:34:35Z
http://eprints.imtlucca.it/id/eprint/2236
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2236
2014-07-03T09:34:35Z
Analysis of surface folding patterns of diccols using the GPU-Optimized geodesic field estimate
Localization of cortical regions of interest (ROIs) in the human brain via analysis of Diffusion Tensor Imaging (DTI) data plays a pivotal role in basic and clinical neuroscience. In recent studies, 358 common cortical landmarks in the human brain, termed Dense Individualized and Common Connectivity-based Cortical Landmarks (DICCCOLs), have been identified. Each of these DICCCOL sites has been observed to possess fiber connection patterns that are consistent across individuals and populations and can be regarded as predictive of brain function. However, the regularity and variability of the cortical surface folding patterns at these DICCCOL sites have thus far not been investigated. This paper presents a novel approach, based on intrinsic surface geometry, for quantitative analysis of the regularity and variability of the cortical surface folding patterns with respect to the structural neural connectivity of the human brain. In particular, the Geodesic Field Estimate (GFE) is used to infer the relationship between the structural and connectional DTI features and the complex surface geometry of the human brain. A parallel algorithm, well suited for implementation on Graphics Processing Units (GPUs), is also proposed for efficient computation of the shortest geodesic paths between all cortical surface point pairs. Based on experimental results, a mathematical model for the morphological variability and regularity of the cortical folding patterns in the vicinity of the DICCCOL sites is proposed. It is envisioned that this model could potentially be applied in several human brain image registration and brain mapping applications.
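The all-pairs geodesic computation parallelizes well because every pairwise path relaxation is independent. A NumPy stand-in for that data-parallel pattern, via min-plus matrix squaring on an edge-length matrix, can be sketched as follows; this is an illustrative substitute for, not a reproduction of, the paper's GPU algorithm.

```python
import numpy as np

def all_pairs_geodesic(adj):
    """All-pairs shortest-path lengths on a mesh graph via repeated
    min-plus matrix squaring. `adj` holds edge lengths, np.inf for
    non-edges, and 0 on the diagonal. Each pass is fully data-parallel,
    which is the property a GPU kernel exploits; this NumPy version
    uses O(n^3) memory per pass and is only for small graphs."""
    d = adj.astype(float).copy()
    n, steps = d.shape[0], 1
    while steps < n:
        # min over k of d[i, k] + d[k, j]: relaxes paths of <= 2*steps edges
        d = np.minimum(d, (d[:, :, None] + d[None, :, :]).min(axis=1))
        steps *= 2
    return d
```

On a three-node path graph with edge lengths 1 and 2, for example, the returned distance between the two end points is 3.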
Anirban Mukhopadhyay
anirban.mukhopadhyay@imtlucca.it
Chul Woo Lim
Suchendra M. Bhandarkar
Hanbo Chen
Tianming Liu
Khaled Rasheed
Thiab Taha
2014-07-03T09:06:15Z
2014-07-03T09:36:26Z
http://eprints.imtlucca.it/id/eprint/2235
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2235
2014-07-03T09:06:15Z
Morphological analysis of the left ventricular endocardial surface and its clinical implications
The complex morphological structure of the left ventricular endocardial surface and its relation to the severity of arterial stenosis has not yet been thoroughly investigated due to the limitations of conventional imaging techniques. By exploiting the recent developments in Multirow-Detector Computed Tomography (MDCT) scanner technology, the complex endocardial surface morphology of the left ventricle is studied and the cardiac segments affected by coronary arterial stenosis localized via analysis of Computed Tomography (CT) image data obtained from a 320-MDCT scanner. The non-rigid endocardial surface data is analyzed using an isometry-invariant Bag-of-Words (BOW) feature-based approach. The clinical significance of the analysis in identifying, localizing and quantifying the incidence and extent of coronary artery disease is investigated. Specifically, the association between the incidence and extent of coronary artery disease and the alterations in the endocardial surface morphology is studied. The results of the proposed approach on 15 normal data sets, and 12 abnormal data sets exhibiting coronary artery disease with varying levels of severity are presented. Based on the characterization of the endocardial surface morphology using the Bag-of-Words features, a neural network-based classifier is implemented to test the effectiveness of the proposed morphological analysis approach. Experiments performed on a strict leave-one-out basis are shown to exhibit a distinct pattern in terms of classification accuracy within the cardiac segments where the incidence of coronary arterial stenosis is localized.
Anirban Mukhopadhyay
anirban.mukhopadhyay@imtlucca.it
Zhen Qian
Suchendra M. Bhandarkar
Tianming Liu
Sarah Rinehart
Szilard Voros
2014-07-03T08:46:34Z
2014-07-03T08:46:34Z
http://eprints.imtlucca.it/id/eprint/2234
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2234
2014-07-03T08:46:34Z
Non-rigid shape correspondence and description using geodesic field estimate distribution
Non-rigid shape description and analysis is an unsolved problem in computer graphics. Shape analysis is a fast-evolving research field due to the wide availability of 3D shape databases. Widely studied methods for this family of problems include the Gromov-Hausdorff distance [1], Bag-of-Features [2], and diffusion geometry [3]. The limitations of the Euclidean distance measure in the context of isometric deformation have made geodesic distance a de facto standard for describing a metric space for non-rigid shape analysis. In this work, we propose a novel geodesic field space-based approach to describe and analyze non-rigid shapes from a point correspondence perspective.
Austin T. New
Anirban Mukhopadhyay
anirban.mukhopadhyay@imtlucca.it
Hamid R. Arabnia
Suchendra M. Bhandarkar
2014-07-03T08:34:33Z
2014-07-03T09:36:48Z
http://eprints.imtlucca.it/id/eprint/2233
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2233
2014-07-03T08:34:33Z
Shape analysis of the left ventricular endocardial surface and its application in detecting coronary artery disease
Coronary artery disease is the leading cause of morbidity and mortality worldwide. The complex morphological structure of the ventricular endocardial surface has not yet been studied properly due to the limitations of conventional imaging techniques. With the recent developments in Multi-Detector Computed Tomography (MDCT) scanner technology, we propose to study, in this paper, the complex endocardial surface morphology of the left ventricle via analysis of Computed Tomography (CT) image data obtained from a 320 Multi-Detector CT scanner. The CT image data is analyzed using a 3D shape analysis approach and the clinical significance of the analysis in detecting coronary artery disease is investigated. Global and local 3D shape descriptors are adapted for the purpose of shape analysis of the left ventricular endocardial surface. In order to study the association between the incidence of coronary artery disease and the alteration of the endocardial surface structure, we present the results of our shape analysis approach on 5 normal data sets, and 6 abnormal data sets with obstructive coronary artery disease. Based on the morphological characteristics of the endocardial surface as quantified by the shape descriptors, we implement a Linear Discrimination Analysis (LDA)-based classification algorithm to test the effectiveness of our shape analysis approach. Experiments performed on a strict leave-one-out basis are shown to achieve a classification accuracy of 81.8%.
Anirban Mukhopadhyay
anirban.mukhopadhyay@imtlucca.it
Zhen Qian
Suchendra M. Bhandarkar
Tianming Liu
Szilard Voros
2014-07-01T11:13:05Z
2014-07-01T11:13:05Z
http://eprints.imtlucca.it/id/eprint/2226
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2226
2014-07-01T11:13:05Z
Proximal Newton methods for convex composite optimization
This paper proposes two proximal Newton methods for convex nonsmooth optimization problems in composite form. The algorithms are based on a new continuously differentiable exact penalty function, namely the Composite Moreau Envelope. The first algorithm is based on a standard line search strategy, whereas the second combines the global efficiency estimates of the corresponding first-order methods with fast asymptotic convergence rates. Furthermore, both are computationally attractive, since each Newton iteration requires the solution of a linear system of usually small dimension.
Panagiotis Patrinos
panagiotis.patrinos@imtlucca.it
Alberto Bemporad
alberto.bemporad@imtlucca.it
2014-07-01T11:02:11Z
2014-07-01T11:02:11Z
http://eprints.imtlucca.it/id/eprint/2225
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2225
2014-07-01T11:02:11Z
Forward-backward truncated Newton methods for convex composite optimization
This paper proposes two proximal Newton-CG methods for convex nonsmooth optimization problems in composite form. The algorithms are based on a reformulation of the original nonsmooth problem as the unconstrained minimization of a continuously differentiable function, namely the forward-backward envelope (FBE). The first algorithm is based on a standard line search strategy, whereas the second combines the global efficiency estimates of the corresponding first-order methods with fast asymptotic convergence rates. Furthermore, both are computationally attractive, since each Newton iteration requires the approximate solution of a linear system of usually small dimension.
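For context, the plain forward-backward (proximal gradient) iteration on which the FBE is built can be sketched as follows. The toy ℓ1-regularized least-squares instance and all names are ours; this is the underlying first-order method, not the truncated-Newton algorithms proposed in the paper.

```python
import numpy as np

def prox_l1(v, t):
    # Soft-thresholding: proximal operator of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def forward_backward(A, b, lam, iters=500):
    """Minimize 0.5*||A x - b||^2 + lam*||x||_1 by forward-backward
    splitting: a gradient (forward) step on the smooth term followed
    by a proximal (backward) step on the nonsmooth term."""
    gamma = 1.0 / np.linalg.norm(A, 2) ** 2  # step <= 1/L, L = Lipschitz const. of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)                    # forward (gradient) step
        x = prox_l1(x - gamma * grad, gamma * lam)  # backward (prox) step
    return x
```

The FBE reformulates this fixed-point scheme as the minimization of a single smooth function, which is what allows Newton-type acceleration.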
Panagiotis Patrinos
panagiotis.patrinos@imtlucca.it
Lorenzo Stella
lorenzo.stella@imtlucca.it
Alberto Bemporad
alberto.bemporad@imtlucca.it
2014-06-27T12:29:24Z
2014-06-27T12:29:24Z
http://eprints.imtlucca.it/id/eprint/2213
2014-06-27T12:29:24Z
Reputation-Based Composition of Social Web Services
Social Web Services (SWSs) constitute a novel paradigm of service-oriented computing, where Web services, just like humans, sign up in social networks that guarantee, e.g., better service discovery for users and faster replacement in case of service failures. In past work, composition of SWSs was mainly supported by specialised social networks of competitor services and cooperating ones. In this work, we continue this line of research by proposing a novel SWSs composition procedure driven by the SWSs' reputation. Making use of a well-known formal language and associated tools, we specify the composition steps and we prove that such a reputation-driven approach ensures better results, in terms of the overall quality of service of the compositions, than randomly selecting SWSs.
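The comparison the abstract describes, reputation-driven selection versus random selection, can be sketched in a few lines. Everything here (the registry encoding, the `compose` helper, mean reputation as the QoS proxy) is an illustrative assumption, not the paper's formal specification:

```python
import random

def compose(tasks, registry, by_reputation=True, rng=None):
    """Select one SWS per task: the highest-reputation one, or uniformly at random.

    registry maps each task to a list of (service_name, reputation) pairs.
    Returns the chosen service names and their mean reputation as a QoS proxy.
    """
    rng = rng or random.Random(0)
    chosen = []
    for task in tasks:
        candidates = registry[task]
        pick = (max(candidates, key=lambda s: s[1]) if by_reputation
                else rng.choice(candidates))
        chosen.append(pick)
    mean_rep = sum(rep for _, rep in chosen) / len(chosen)
    return [name for name, _ in chosen], mean_rep
```

Since the reputation-driven rule picks the per-task maximum, its composition's mean reputation dominates that of any random selection over the same registry.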
Alessandro Celestini
alessandro.celestini@imtlucca.it
Gianpiero Costantino
Rocco De Nicola
r.denicola@imtlucca.it
Zakaria Maamar
Fabio Martinelli
Marinella Petrocchi
Francesco Tiezzi
francesco.tiezzi@imtlucca.it
2014-06-27T12:18:56Z
2015-02-06T10:07:15Z
http://eprints.imtlucca.it/id/eprint/2212
2014-06-27T12:18:56Z
Dimming relations for the efficient analysis of concurrent systems via action abstraction
We study models of concurrency based on labelled transition systems where abstractions are induced by a partition of the action set. We introduce dimming relations which are able to relate two models if they can mimic each other by using actions from the same partition block. Moreover, we discuss the necessary requirements for guaranteeing compositional verification. We show how our new relations and results can be exploited when seemingly heterogeneous systems exhibit analogous behaviours manifested via different actions. Dimming relations make the models more homogeneous by collapsing such distinct actions into the same partition block. With our examples, we show how these abstractions may considerably reduce the state-space size, in some cases from exponential to polynomial complexity.
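The core abstraction, relabelling each action by its partition block so that distinct but analogous actions collapse together, can be sketched directly on a set-of-triples LTS encoding. The encoding and the `dim` helper are illustrative assumptions, not the paper's formal definitions:

```python
def dim(transitions, partition):
    """Relabel every action by its partition block, collapsing analogous actions.

    transitions: set of (source, action, target) triples.
    partition: iterable of frozensets of actions (the partition blocks).
    """
    block_of = {a: block for block in partition for a in block}
    return {(s, block_of[a], t) for (s, a, t) in transitions}
```

Transitions that differed only in the choice of action within a block become identical after dimming, which is the mechanism behind the state-space reductions mentioned in the abstract.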
Rocco De Nicola
r.denicola@imtlucca.it
Giulio Iacobelli
Mirco Tribastone
mirco.tribastone@imtlucca.it
2014-06-25T08:59:26Z
2014-06-25T08:59:26Z
http://eprints.imtlucca.it/id/eprint/2210
2014-06-25T08:59:26Z
Collective attention in the age of (mis)information
In this work we study, on a sample of 2.3 million individuals, how Facebook users consumed different information at the edge of political discussion and news during the last Italian electoral competition. Pages are categorized, according to their topics and the communities of interest they pertain to, into a) alternative information sources (diffusing topics that are neglected by science and mainstream media); b) online political activism; and c) mainstream media. We show that attention patterns are similar despite the different qualitative nature of the information, meaning that unsubstantiated claims (mainly conspiracy theories) reverberate for as long as other information. Finally, we categorize users according to their interaction patterns among the different topics and measure how a sample of this social ecosystem (1279 users) responded to the injection of 2788 false information posts. Our analysis reveals that users who predominantly interact with alternative information sources (i.e. those more exposed to unsubstantiated claims) are more prone to interact with false claims.
Delia Mocanu
Qian Zhang
Màrton Karsai
Walter Quattrociocchi
walter.quattrociocchi@imtlucca.it
2014-06-19T08:09:43Z
2014-09-02T09:28:54Z
http://eprints.imtlucca.it/id/eprint/2208
2014-06-19T08:09:43Z
Robust Model Predictive Control for optimal continuous drug administration
In this paper the Model Predictive Control (MPC) technology is used to tackle the optimal drug administration problem. The important advantage of MPC compared to other control technologies is that it explicitly takes the constraints of the system into account. In particular, for drug treatments of living organisms, MPC can guarantee satisfaction of the minimum toxic concentration (MTC) constraints. A whole-body physiologically-based pharmacokinetic (PBPK) model, formulated as a discrete-time state-space model, serves as the dynamic prediction model of the system. Only plasma concentrations are assumed to be measured online; the remaining states (drug concentrations in other organs and tissues) are estimated in real time by an artificial observer. The complete system (observer and MPC controller) is able to drive the drug concentration to the desired levels in the organs of interest, while satisfying the imposed constraints, even in the presence of modelling errors, disturbances and noise. A case study on a PBPK model with 7 compartments, constraints on 5 tissues and a variable drug concentration set-point illustrates the efficiency of the methodology in drug dosing control applications. The proposed methodology is also tested in an uncertain setting and proves successful in the presence of modelling errors and inaccurate measurements.
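The receding-horizon idea with hard concentration bounds can be illustrated on a toy scalar model. This is a drastically simplified sketch, not the paper's PBPK-based controller: the one-compartment dynamics, the discretized dose set, exhaustive search instead of a QP solver, and all parameter values are assumptions for illustration only:

```python
import itertools

def mpc_dose(x0, target, a=0.8, b=1.0, horizon=3,
             doses=(0.0, 0.5, 1.0), c_max=5.0):
    """Exhaustive-search MPC step for a scalar linear pharmacokinetic model.

    Returns the first dose of the admissible input sequence that minimizes
    squared tracking error while keeping the concentration below c_max.
    """
    best, best_cost = None, float("inf")
    for seq in itertools.product(doses, repeat=horizon):
        x, cost, feasible = x0, 0.0, True
        for u in seq:
            x = a * x + b * u          # one-compartment discrete-time dynamics
            if x > c_max:              # toxicity bound (constraint handling)
                feasible = False
                break
            cost += (x - target) ** 2
        if feasible and cost < best_cost:
            best, best_cost = seq[0], cost
    return best
```

Only the first dose of the optimizing sequence is applied; at the next sampling instant the optimization is repeated from the newly estimated state, which is what lets the scheme absorb disturbances and model errors.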
Pantelis Sopasakis
pantelis.sopasakis@imtlucca.it
Panagiotis Patrinos
panagiotis.patrinos@imtlucca.it
Haralambos Sarimveis
2014-06-16T13:17:54Z
2014-06-16T13:17:54Z
http://eprints.imtlucca.it/id/eprint/2207
2014-06-16T13:17:54Z
A uniform framework for modeling nondeterministic, probabilistic, stochastic, or mixed processes and their behavioral equivalences
Labeled transition systems are typically used as behavioral models of concurrent processes. Their labeled transitions define a one-step state-to-state reachability relation. This model can be generalized by modifying the transition relation to associate a state reachability distribution with any pair consisting of a source state and a transition label. The state reachability distribution is a function mapping each possible target state to a value that expresses the degree of one-step reachability of that state. Values are taken from a preordered set equipped with a minimum that denotes unreachability. By selecting suitable preordered sets, the resulting model, called {ULTraS} from Uniform Labeled Transition System, can be specialized to capture well-known models of fully nondeterministic processes (LTS), fully probabilistic processes (ADTMC), fully stochastic processes (ACTMC), and nondeterministic and probabilistic (MDP) or nondeterministic and stochastic (CTMDP) processes. This uniform treatment of different behavioral models extends to behavioral equivalences. They can be defined on {ULTraS} by relying on appropriate measure functions that express the degree of reachability of a set of states when performing multi-step computations. It is shown that the specializations of bisimulation, trace, and testing equivalences for the different classes of {ULTraS} coincide with the behavioral equivalences defined in the literature over traditional models except when nondeterminism and probability/stochasticity coexist; then new equivalences pop up.
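The generalized transition relation described above, mapping each (state, label) pair to a state-reachability distribution, can be encoded in a few lines, with different choices of degree set recovering the familiar models. The dictionary encoding and the `ultras_step` helper are illustrative assumptions, not the paper's formal definitions:

```python
def ultras_step(trans, state, label):
    """One-step state-reachability distribution for (state, label); {} if undefined."""
    return trans.get((state, label), {})

# Fully nondeterministic specialization (LTS-like): degrees from {False, True},
# with minimum False denoting unreachability.
lts = {("s0", "a"): {"s1": True, "s2": True}}

# Fully probabilistic specialization (DTMC-like): degrees from [0, 1],
# with minimum 0 denoting unreachability.
dtmc = {("s0", "a"): {"s1": 0.3, "s2": 0.7}}
```

Swapping the codomain (Booleans, probabilities, stochastic rates) is all that changes between the specializations, which is the uniformity the framework exploits when defining behavioral equivalences.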
Marco Bernardo
Rocco De Nicola
r.denicola@imtlucca.it
Michele Loreti
2014-05-13T09:07:10Z
2014-07-07T10:27:41Z
http://eprints.imtlucca.it/id/eprint/2196
2014-05-13T09:07:10Z
A multi-level geographical study of Italian political elections from Twitter Data
In this paper we present an analysis of the behavior of Italian Twitter users during national political elections. We monitor the volumes of the tweets related to the leaders of the various political parties and we compare them to the election results. Furthermore, we study the topics that are associated with the co-occurrence of two politicians in the same tweet. We cannot conclude, from a simple statistical analysis of tweet volume and its time evolution, that it is possible to precisely predict the election outcome (or at least not in our case study, which was characterized by a “too-close-to-call” scenario). On the other hand, we found that the volume of tweets and its change in time provide a very good proxy of the final results. We present this analysis both at the national level and at smaller levels, ranging from the regions composing the country to macro-areas (North, Center, South).
Guido Caldarelli
guido.caldarelli@imtlucca.it
Alessandro Chessa
alessandro.chessa@imtlucca.it
Fabio Pammolli
f.pammolli@imtlucca.it
Gabriele Pompa
gabriele.pompa@imtlucca.it
Michelangelo Puliga
michelangelo.puliga@imtlucca.it
Massimo Riccaboni
massimo.riccaboni@imtlucca.it
Gianni Riotta
2014-03-27T09:30:26Z
2016-04-05T12:01:10Z
http://eprints.imtlucca.it/id/eprint/2182
2014-03-27T09:30:26Z
COMT Genetic Reduction Produces Sexually Divergent Effects on Cortical Anatomy and Working Memory in Mice and Humans
Genetic variations in catechol-O-methyltransferase (COMT) that modulate cortical dopamine have been associated with pleiotropic behavioral effects in humans and mice. Recent data suggest that some of these effects may vary among sexes. However, the specific brain substrates underlying COMT sexual dimorphisms remain unknown. Here, we report that genetically driven reduction in COMT enzyme activity increased cortical thickness in the prefrontal cortex (PFC) and postero-parieto-temporal cortex of male, but not female adult mice and humans. Dichotomous changes in PFC cytoarchitecture were also observed: reduced COMT increased a measure of neuronal density in males, while reducing it in female mice. Consistent with the neuroanatomical findings, COMT-dependent sex-specific morphological brain changes were paralleled by divergent effects on PFC-dependent working memory in both mice and humans. These findings emphasize a specific sex–gene interaction that can modulate brain morphological substrates with influence on behavioral outcomes in healthy subjects and, potentially, in neuropsychiatric populations.
Sara Sannino
Alessandro Gozzi
Antonio Cerasa
Fabrizio Piras
Diego Scheggia
Francesca Manago
Mario Damiano
Alberto Galbusera
Lucy C. Erickson
Davide De Pietri Tonelli
Angelo Bifone
Sotirios A. Tsaftaris
sotirios.tsaftaris@imtlucca.it
Carlo Caltagirone
Daniel R. Weinberger
Gianfranco Spalletta
Francesco Papaleo
2014-03-10T12:55:15Z
2016-02-12T13:15:43Z
http://eprints.imtlucca.it/id/eprint/2180
2014-03-10T12:55:15Z
Reasoning (on) service component ensembles in rewriting logic
Programming autonomic systems with a massive number of heterogeneous components poses a number of challenges to language designers and software engineers and requires the integration of computational tools and reasoning tools. We present a general methodology for enriching SCEL, a recently introduced language for programming systems with massive numbers of components, with reasoning capabilities guaranteed by external reasoners. We show how the methodology can be instantiated by considering the Maude implementation of SCEL and a specific reasoner, Pirlo, implemented in Maude as well. Moreover, we show how the actual integration can benefit from the existing analytical tools of the Maude framework. In particular, we demonstrate our approach on a simple scenario consisting of a group of robots moving in an arena while aiming to minimise the number of collisions.
Lenz Belzner
Rocco De Nicola
r.denicola@imtlucca.it
Andrea Vandin
andrea.vandin@imtlucca.it
Martin Wirsing
2014-03-05T14:26:53Z
2014-03-05T14:26:53Z
http://eprints.imtlucca.it/id/eprint/2178
2014-03-05T14:26:53Z
A stochastic optimization approach to optimal bidding on Dutch ancillary services markets
The aim of this paper is to present a market design for trading capacity reserves (also called Ancillary Services, AS) and to introduce a strategy for the optimal bidding problem in such a scenario. In the deregulated market, the presence of several market participants, or Balance Responsible Parties (BRPs), entitled to trade energy, together with the increasing integration of renewable sources and price-elastic loads, shifts the focus to decentralized control and reliable forecasting techniques. The main feature of the considered market design is its double-sided nature: in addition to portfolio-based supply bids, BRPs are allowed to submit risk-limiting requests based on predictions of their stochastic production and load. Requesting capacity from the AS market corresponds to giving the market an estimate of the possible deviation from the daily production schedule, named E-Program, resulting from the day-ahead auction and from bilateral contracts. In this way each BRP is responsible for the balanced and safe operation of the electric grid. On the other hand, at each Program Time Unit (PTU) BRPs must also offer their available capacity in the form of bids. In this paper, a bidding strategy for the double-sided market is described, in which risk is minimized and all constraints are fulfilled. The algorithms devised are tested in a simulation environment and compared to the current practice, where the double-sided auction is not contemplated. Results in terms of expected imbalances and reliability are presented.
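The trade-off behind a risk-limiting capacity request, paying for reserve up front versus paying imbalance penalties for uncovered deviations, can be sketched as a scenario-based expected-cost minimization. The newsvendor-style formulation, the `optimal_capacity_bid` helper, and the price parameters are illustrative assumptions, not the paper's bidding strategy:

```python
def optimal_capacity_bid(deviations, probs, reserve_price, imbalance_price):
    """Choose the reserve capacity minimizing expected procurement-plus-imbalance cost.

    deviations: possible deviations from the E-Program (scenarios), with
    probabilities probs; the shortfall beyond the reserved capacity is
    settled at the imbalance price.
    """
    candidates = sorted(set(deviations) | {0.0})
    def expected_cost(c):
        shortfall = sum(p * max(d - c, 0.0) for d, p in zip(deviations, probs))
        return reserve_price * c + imbalance_price * shortfall
    return min(candidates, key=expected_cost)
```

When imbalance penalties dominate, the optimal request covers the worst-case deviation; when reserve is expensive relative to penalties, requesting nothing becomes optimal.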
Laura Puglia