IMT Institutional Repository: no search conditions; results ordered by Date Deposited, newest first.
2024-03-28T18:49:30Z
EPrints
http://eprints.imtlucca.it/images/logowhite.png
http://eprints.imtlucca.it/
2022-12-20T15:51:26Z
2023-01-02T13:42:36Z
http://eprints.imtlucca.it/id/eprint/4084
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/4084
2022-12-20T15:51:26Z
Value creation mechanisms of cloud computing: a conceptual framework
The management literature has analysed Cloud Computing mainly by focusing on the impact of its technical properties (e.g. accessibility, elasticity, scaling) on firms' dynamics, without explicitly addressing the dynamic generation of value streams. With this paper we fill this gap by linking the unexplored potential sources of value of Cloud Computing with the literature on business model value creation. We define a conceptual model that integrates the existing technical knowledge on Cloud Computing with the understudied value creation mechanisms, dynamically representing their interaction. Our approach is based on a mixed methodology built on three pillars:
1) systematic literature review of the properties of Cloud Computing with an impact on firms’ management in order to identify possible gaps, using value generation within business models as the unit of analysis;
2) multiple case studies to inductively derive the emerging properties using Gioia methodology, analysing 20 startups in the AWS business case repository;
3) dynamic representation between technical properties extracted by literature review and emergent properties, focusing on the value streams generation.
The results confirm that the leverage potential of Cloud Computing goes well beyond its technical advantages: it is deeply embedded in the business model system and enables different sources of value creation.
Leonardo Mazzoni
leonardo.mazzoni@imtlucca.it
Gabriele Costa
gabriele.costa@imtlucca.it
2018-03-12T09:18:45Z
2018-03-12T09:18:45Z
http://eprints.imtlucca.it/id/eprint/4009
Direct Optimal Control and Model Predictive Control
Mario Zanon
mario.zanon@imtlucca.it
Andrea Boccia
Vryan Gil S. Palma
Sonja Parenti
Ilaria Xausa
2018-03-12T08:57:17Z
2018-03-12T08:57:17Z
http://eprints.imtlucca.it/id/eprint/4047
Efficient Nonlinear Model Predictive Control Formulations for Economic Objectives with Aerospace and Automotive Applications
This thesis is concerned with optimal control techniques for optimal trajectory planning and real-time control and estimation. The framework of optimal control is a powerful tool which enjoys increasing popularity due to its applicability to a wide class of problems and its ability to deliver solutions to very complicated problems which cannot be intuitively solved.
The downside of optimal control is the computational burden required to compute the optimal solution. Due to recent algorithmic developments and increases in the computational power, this burden has been significantly reduced over the last decades. In order to guarantee effectiveness and reliability of the solver, three main components are necessary: fast and robust algorithms, a good problem formulation, and a mathematical model tailored to optimisation. Indeed, both the model and the optimal control problem can usually be formulated in many different ways, some of which are better suited for optimisation. In this thesis we are concerned with all three components, with a focus on the last two.
Concerning the problem formulation, we propose practical approaches for formulating optimal control, MPC and MHE problems in an optimisation-friendly fashion. Moreover, we analyse the stability properties of various MPC formulations, with a focus on so-called economic MPC, for which the stability theory is still developing.
On the algorithmic level, we review the literature on optimisation and optimal control and we prove that it is possible to tune tracking MPC formulations so as to locally obtain the same behaviour as economic MPC. The main advantages of tuned tracking MPC over economic MPC are that closed-loop stability is easier to guarantee and that efficient real-time algorithms remain applicable.
On the modelling side, we propose an approach for deriving models of reduced complexity and reduced nonlinearity for multibody mechanical systems. The use of nonminimal coordinates and DAE models enlarges the range of modelling possibilities and allows the control engineer to derive models which are better suited for optimisation. In order to provide an easy framework for the model derivation, we extend the Euler-Lagrange approach and we demonstrate how to implement the proposed approach in practice.
In order to demonstrate the effectiveness of the proposed techniques, we deploy them for two applications: tethered airplanes and autonomous vehicles. Both examples are characterised by fast nonlinear constrained dynamics for which simple controllers cannot be deployed.
Tethered airplanes are of particular interest because they are an emerging technology for wind energy production. In this thesis, we use optimal control to design trajectories which extract maximum energy from the airmass and compare single and dual-airfoil configurations. We moreover demonstrate the effectiveness of MPC and MHE for controlling the system in real time and apply the new tuning procedure for tracking MPC to show its ability to locally approximate economic MPC.
Mario Zanon
mario.zanon@imtlucca.it
2018-03-09T13:34:41Z
2018-03-09T13:34:41Z
http://eprints.imtlucca.it/id/eprint/4034
Modularities maximization in multiplex network analysis using Many-Objective Optimization
Nowadays, social network analysis receives considerable attention from academia, industry and governments. Practical applications such as community detection and centrality in economic networks have become main issues in this research area. Community detection in complex network analysis is mainly accomplished by the Louvain method, which seeks to find communities by heuristically finding a partitioning with maximal modularity. Traditionally, community detection has been applied to networks with homogeneous semantics, for instance indicating friendship between people or import-export relationships between countries. However, we increasingly deal with more complex networks, including so-called multiplex networks. In a multiplex network the set of nodes stays the same, while there are multiple sets of edges. In the analysis we would like to identify communities, but different edge sets give rise to different modularity-optimizing partitions into communities. We propose to view community detection in such multilayer networks as a many-objective optimization problem. To this end we apply evolutionary many-objective optimization and compute the Pareto fronts between different modularity layers. We then group the objective functions into clusters in order to better understand the relationship and dependence between different layers (conflict, indifference, complementarity). As a case study, we compute the Pareto fronts for model problems and for economic data sets in order to show how to find the network modularity trade-offs between different layers.
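The modularity trade-off at the heart of the abstract can be illustrated with a toy two-layer multiplex. The sketch below is ours, not the authors' code: node names and the two example layers are invented. It computes Newman modularity per layer and shows that a single partition scores well on one layer and poorly on the other, which is exactly why the layers form competing objectives:

```python
def modularity(edges, partition):
    """Newman modularity Q = (1/2m) sum_ij (A_ij - k_i k_j / 2m) delta(c_i, c_j)
    for an undirected, unweighted graph given as a list of edges."""
    m = len(edges)
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    adjacency = {frozenset(e) for e in edges}
    q = 0.0
    for i in deg:
        for j in deg:
            if partition[i] != partition[j]:
                continue
            a_ij = 1.0 if i != j and frozenset((i, j)) in adjacency else 0.0
            q += a_ij - deg[i] * deg[j] / (2.0 * m)
    return q / (2.0 * m)

# Two layers over the same four nodes: one partition cannot maximise
# modularity on both layers at once, hence the many-objective view.
layer_a = [("a", "b"), ("c", "d")]
layer_b = [("a", "c"), ("b", "d")]
grouping = {"a": 0, "b": 0, "c": 1, "d": 1}
q_a = modularity(layer_a, grouping)   # high on layer A
q_b = modularity(layer_b, grouping)   # negative on layer B
```

The mirror partition {a,c}/{b,d} would swap the two scores, so the two partitions are Pareto-incomparable; a many-objective optimizer would return both as points on the Pareto front.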
Asep Maulana
Valerio Gemmetto
Diego Garlaschelli
diego.garlaschelli@imtlucca.it
Iryna Yevesyeva
Michael Emmerich
2018-03-05T16:36:01Z
2018-03-05T16:36:01Z
http://eprints.imtlucca.it/id/eprint/3953
Programming of CAS Systems by Relying on Attribute-Based Communication
In most distributed systems, named connections (i.e., channels) are used as the means for programming interaction between communicating partners. These kinds of connections are low level and usually totally independent of the knowledge, the status and the capabilities (in one word, of the attributes) of the interacting partners. We have recently introduced a calculus, called AbC, in which interactions among agents are dynamically established by taking into account “connections” as determined by predicates over agent attributes. In this paper, we present a Java run-time environment that has been developed to support modeling and programming of collective adaptive systems by relying on the communication primitives of the AbC calculus. Systems are described as sets of parallel components; each component is equipped with a set of attributes, and communications among components take place in an implicit multicast fashion. By means of a number of examples, we also show how opportunistic behaviors, achieved by run-time attribute updates, can be exploited to express different communication and interaction patterns and to program challenging case studies.
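The attribute-based send can be mimicked in a few lines of ordinary code. The following Python sketch is purely illustrative: the class names, attributes and scenario are invented and do not reflect the Java API described in the paper. It delivers a message, in implicit multicast fashion, to every component whose attributes satisfy the sender's predicate:

```python
class Component:
    """A component exposing a set of attributes and collecting messages."""
    def __init__(self, **attributes):
        self.attributes = attributes
        self.inbox = []

class System:
    def __init__(self, components):
        self.components = components

    def send(self, sender, message, predicate):
        # Implicit multicast: the message reaches every *other* component
        # whose current attributes satisfy the sender's predicate.
        for c in self.components:
            if c is not sender and predicate(c.attributes):
                c.inbox.append(message)

explorer_ok = Component(role="explorer", battery=80)
explorer_low = Component(role="explorer", battery=20)
collector = Component(role="collector", battery=90)
net = System([explorer_ok, explorer_low, collector])
net.send(collector, "help",
         lambda a: a["role"] == "explorer" and a["battery"] > 50)
```

Only `explorer_ok` receives the message; updating a component's attributes at run time would change which predicates it satisfies, which is the "opportunistic behavior" the abstract mentions.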
Yehia Moustafa Abd Alrahman
yehia.abdalrahman@imtlucca.it
Rocco De Nicola
r.denicola@imtlucca.it
Michele Loreti
2018-03-05T16:26:29Z
2018-03-05T16:26:29Z
http://eprints.imtlucca.it/id/eprint/3952
Initial Algebra for a System of Right-Linear Functors
In 2003 we showed that right-linear systems of equations over regular expressions, when interpreted in a category of trees, have a solution whenever they enjoy a specific property that we called hierarchicity and that is instrumental to avoid critical mutual recursive definitions. In this note, we prove that a right-linear system of polynomial endofunctors on a cocartesian monoidal closed category which enjoys parameterized left list arithmeticity, has an initial algebra, provided it satisfies a property similar to hierarchicity.
Anna Labella
Rocco De Nicola
r.denicola@imtlucca.it
2018-03-05T16:19:42Z
2018-03-05T16:20:14Z
http://eprints.imtlucca.it/id/eprint/3951
Verifying Properties of Systems Relying on Attribute-Based Communication
AbC is a process calculus designed for describing collective adaptive systems, whose distinguishing feature is the communication mechanism relying on predicates over attributes exposed by components. A novel approach to the analysis of concurrent systems modelled as AbC terms is presented that relies on the UMC model checker, a tool based on modelling concurrent systems as communicating UML-like state machines. A structural translation from AbC specifications to the UMC internal format is provided and used as the basis for the analysis. Three different algorithmic solutions of the well-studied stable marriage problem are described in AbC and their translations are analysed with UMC. It is shown how the proposed approach can be exploited to identify emerging properties of systems and unwanted behaviour.
Rocco De Nicola
r.denicola@imtlucca.it
Tan Duong
Omar Inverso
Franco Mazzanti
2018-03-05T16:14:42Z
2018-03-05T16:14:42Z
http://eprints.imtlucca.it/id/eprint/3950
AErlang at Work
AErlang is an extension of the Erlang programming language which is enriched with attribute-based communication. In AErlang, the Erlang send and receive constructs are extended to permit partner selection by relying on predicates over sets of attributes. AErlang avoids the limitations of the Erlang point-to-point communication, making it possible to model some of the sophisticated interaction features often observed in modern systems, such as anonymity and adaptation. By using our prototype extension, we show how the extended communication pattern can capture non-trivial process interaction in a natural and intuitive way. We also sketch a modelling technique aimed at automatically verifying AErlang systems, and discuss how it can be used to check some key properties of the considered case study.
Rocco De Nicola
Tan Duong
Omar Inverso
Catia Trubiani
2018-03-05T16:07:40Z
2018-03-05T16:07:40Z
http://eprints.imtlucca.it/id/eprint/3949
AErlang: Empowering Erlang with Attribute-Based Communication
Attribute-based communication provides a novel mechanism to dynamically select groups of communicating entities by relying on predicates over their exposed attributes. In this paper, we embed the basic primitives for attribute-based communication into the functional concurrent language Erlang to obtain what we call AErlang, for attribute Erlang. To evaluate our prototype in terms of performance overhead and scalability we consider solutions of the Stable Marriage Problem based on predicates over attributes and on the classical preference lists, and use them to compare the runtime performance of AErlang with those of Erlang and X10. The outcome of the comparison shows that the overhead introduced by the new communication primitives is acceptable, and our prototype can compete performance-wise with an ad-hoc parallel solution in X10.
Rocco De Nicola
r.denicola@imtlucca.it
Tan Duong
Omar Inverso
Catia Trubiani
2018-03-05T16:02:13Z
2018-03-05T16:02:13Z
http://eprints.imtlucca.it/id/eprint/3948
Smart Contract Negotiation in Cloud Computing
A smart contract is the formalisation of an agreement, whose terms are automatically enforced by relying on a transaction protocol, while minimising the need for intermediaries. Such contracts not only specify the service and its quality but also the possible changes at runtime of the terms of agreement. Although smart contracts provide a great deal of flexibility, analysing their compatibility and reaching agreements with this level of dynamism is considerably more challenging, due to the freedom of clients and providers in formulating needs/offers. We introduce a formal language to specify interactions between offers and requests and present a methodology for the autonomous negotiation of smart contracts, which analyses the cost and the necessary changes for reaching an agreement. Moreover, we describe a set of experiments that provides insights on the relative cost of dynamism in negotiating smart contracts and compare the request/offer matching rates of our solution with those of related works.
Vincenzo Scoca
Rafael Brundo Uriarte
Rocco De Nicola
r.denicola@imtlucca.it
2018-03-05T15:44:57Z
2018-03-05T15:44:57Z
http://eprints.imtlucca.it/id/eprint/3946
Programming the Interactions of Collective Adaptive Systems by Relying on Attribute-based Communication
Collective adaptive systems are emerging computational systems consisting of a large number of interacting components and featuring complex behaviour. These systems are usually distributed, heterogeneous, decentralised and interdependent, and operate in dynamic and possibly unpredictable environments. Finding ways to understand and design these systems and, above all, to model the interactions of their components is a difficult but important endeavour. In this article we propose a language-based approach for programming the interactions of collective adaptive systems by relying on attribute-based communication, a paradigm that permits a group of partners to communicate by considering their run-time properties and capabilities. We introduce AbC, a foundational calculus for attribute-based communication, and show how its linguistic primitives can be used to program a complex and sophisticated variant of the well-known problem of stable allocation in content delivery networks; other interesting case studies from the realm of collective adaptive systems are considered as well. We also illustrate the expressive power of attribute-based communication by showing the natural encoding of other existing communication paradigms into AbC.
Yehia Moustafa Abd Alrahman
yehia.abdalrahman@imtlucca.it
Rocco De Nicola
r.denicola@imtlucca.it
Michele Loreti
2018-03-05T15:40:55Z
2018-03-05T15:40:55Z
http://eprints.imtlucca.it/id/eprint/3945
A Behavioural Theory for Interactions in Collective-Adaptive Systems
We propose a process calculus, named AbC, to study the behavioural theory of interactions in collective-adaptive systems by relying on attribute-based communication. An AbC system consists of a set of parallel components each of which is equipped with a set of attributes. Communication takes place in an implicit multicast fashion, and interaction among components is dynamically established by taking into account "connections" as determined by predicates over their attributes. The structural operational semantics of AbC is based on Labeled Transition Systems that are also used to define bisimilarity between components. Labeled bisimilarity is in full agreement with a barbed congruence, defined by simple basic observables and context closure. The introduced equivalence is used to study the expressiveness of AbC in terms of encoding broadcast channel-based interactions and to establish formal relationships between system descriptions at different levels of abstraction.
Yehia Moustafa Abd Alrahman
yehia.abdalrahman@imtlucca.it
Rocco De Nicola
r.denicola@imtlucca.it
Michele Loreti
2018-01-16T10:14:31Z
2018-01-16T10:14:31Z
http://eprints.imtlucca.it/id/eprint/3863
Uncertainty-aware demand management of water distribution networks in deregulated energy markets
We present an open-source solution for the operational control of drinking water distribution networks which accounts for the inherent uncertainty in water demand and electricity prices in the day-ahead market of a volatile deregulated economy. As increasingly more energy markets adopt this trading scheme, the operation of drinking water networks requires uncertainty-aware control approaches that mitigate the effect of volatility and result in an economic and safe operation of the network that meets the consumers’ need for uninterrupted water supply. We propose the use of scenario-based stochastic model predictive control: an advanced control methodology whose considerable computational cost is overcome by harnessing the parallelization capabilities of graphics processing units (GPUs) and by using a massively parallelizable algorithm based on the accelerated proximal gradient method.
Pantelis Sopasakis
Ajay Kumar Sampathirao
Alberto Bemporad
alberto.bemporad@imtlucca.it
Panagiotis Patrinos
panagiotis.patrinos@imtlucca.it
2017-10-31T09:00:18Z
2017-10-31T09:00:18Z
http://eprints.imtlucca.it/id/eprint/3806
Long-term EVA degradation simulation: climatic zones comparison and possible revision of accelerated tests
Mariacristina Gagliardi
mariacristina.gagliardi@imtlucca.it
Marco Paggi
marco.paggi@imtlucca.it
2017-09-28T06:31:59Z
2017-09-28T06:31:59Z
http://eprints.imtlucca.it/id/eprint/3805
Experimental characterization and numerical simulation of humidity-induced damage in PV cells
Mariacristina Gagliardi
mariacristina.gagliardi@imtlucca.it
Irene Berardone
irene.berardone@polito.it
Marco Paggi
marco.paggi@imtlucca.it
2017-09-28T06:24:42Z
2017-09-28T06:32:17Z
http://eprints.imtlucca.it/id/eprint/3804
Computational and experimental characterization of thermo-oxidative and corrosion phenomena in photovoltaic modules
Irene Berardone
irene.berardone@polito.it
Mariacristina Gagliardi
mariacristina.gagliardi@imtlucca.it
Pietro Lenarda
pietro.lenarda@imtlucca.it
Marco Paggi
marco.paggi@imtlucca.it
2017-09-26T09:19:39Z
2017-09-26T09:19:39Z
http://eprints.imtlucca.it/id/eprint/3765
EGAC: a genetic algorithm to compare chemical reaction networks
Discovering relations between chemical reaction networks (CRNs) is a relevant problem in computational systems biology: for model reduction, to explain if a given system can be seen as an abstraction of another one; and for model comparison, useful to establish an evolutionary path from simpler networks to more complex ones. This is also related to foundational issues in computer science regarding program equivalence, in light of the established interpretation of a CRN as a kernel programming language for concurrency. Criteria for deciding if two CRNs can be formally related have been developed recently, but these require that a candidate mapping be provided. Automatically finding candidate mappings is very hard in general, since the search space essentially consists of all possible partitions of a set. In this paper we tackle this problem by developing a genetic algorithm for a class of CRNs called influence networks, which can be used to model a variety of biological systems including cell-cycle switches and gene networks. An extensive numerical evaluation shows that our approach can successfully establish relations between influence networks from the literature which cannot be found by exact algorithms due to their large computational requirements.
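As a rough illustration of the search machinery, not of the EGAC algorithm itself (which operates over partition encodings of influence networks), a minimal genetic algorithm over bit-strings with tournament selection, one-point crossover and per-bit mutation looks like this; the OneMax fitness below is a stand-in for the CRN-mapping score:

```python
import random

def genetic_search(fitness, length, pop_size=30, generations=80,
                   mutation_rate=0.05, seed=1):
    """Minimal genetic algorithm over bit-strings: tournament selection,
    one-point crossover and per-bit flip mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        def tournament():
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = tournament(), tournament()
            cut = rng.randrange(1, length)           # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [bit ^ (rng.random() < mutation_rate) for bit in child]
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# OneMax as a stand-in fitness: count the ones in the bit-string.
best = genetic_search(sum, length=16)
```

In EGAC the genome would instead encode a candidate mapping between the two networks, and the fitness would score how close that mapping is to a formal relation.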
Stefano Tognazzi
Mirco Tribastone
mirco.tribastone@imtlucca.it
Max Tschaikowski
max.tschaikowski@imtlucca.it
Andrea Vandin
andrea.vandin@imtlucca.it
2017-09-04T14:28:17Z
2017-09-04T14:28:17Z
http://eprints.imtlucca.it/id/eprint/3777
Optimal Varicella immunization programs for both Varicella and Herpes Zoster Control
A main obstacle to the widespread adoption of varicella immunization in Europe has been the fear of a subsequent boom in natural herpes zoster, caused by the decline in the protective effect of natural immunity boosting due to reduced virus circulation. We apply optimal control to simple models of VZV transmission and reactivation to investigate the existence and feasibility of temporal paths of varicella childhood immunization that are optimal in controlling both varicella and zoster. We analyze the optimality system numerically, focusing on the role played by the structure of the cost functional, the relative cost of zoster versus varicella, and the length of the planning horizon. We show that optimal programs exist but will mostly be unfeasible in real public health contexts due to their complex temporal profiles. This complexity is the consequence of the intrinsically antagonistic nature of varicella immunization programs when aimed at controlling both varicella and herpes zoster. However, we show that gradually increasing, smooth (and thereby feasible) vaccination schedules can perform largely better than routine programs with constant vaccine uptake. Moreover, we show the optimal temporal profiles of feasible immunization programs targeting with priority the mitigation of the post-immunization natural zoster boom.
Monica Betta
monica.betta@imtlucca.it
Marco Laurino
Andrea Pugliese
Giorgio Guzzetta
Alberto Landi
Piero Manfredi
2017-08-08T09:06:01Z
2017-08-08T09:06:01Z
http://eprints.imtlucca.it/id/eprint/3766
ERODE: A Tool for the Evaluation and Reduction of Ordinary Differential Equations
We present ERODE, a multi-platform tool for the solution and exact reduction of systems of ordinary differential equations (ODEs). ERODE supports two recently introduced, complementary equivalence relations over ODE variables: forward differential equivalence yields a self-consistent aggregate system where each ODE gives the cumulative dynamics of the sum of the original variables in the respective equivalence class. Backward differential equivalence identifies variables that have identical solutions whenever starting from the same initial conditions. As a back-end, ERODE uses the well-known Z3 SMT solver to compute the largest equivalence that refines a given initial partition of ODE variables. In the special case of ODEs with polynomial derivatives of degree at most two (covering affine systems and elementary chemical reaction networks), it implements a more efficient partition-refinement algorithm in the style of Paige and Tarjan. ERODE comes with a rich development environment based on the Eclipse plug-in framework offering: (i) seamless project management; (ii) a fully-featured text editor; and (iii) importing-exporting capabilities.
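Backward differential equivalence can be demonstrated on a three-variable toy system (this example is ours, not taken from the ERODE distribution): in x' = -x + z, y' = -y + z, z' = -z, the variables x and y have identical derivatives whenever x = y, so from equal initial conditions they stay equal forever and can be aggregated into a single variable w:

```python
def euler(f, state, dt=1e-3, steps=2000):
    """Fixed-step forward-Euler integration of x' = f(x)."""
    x = list(state)
    for _ in range(steps):
        dx = f(x)
        x = [xi + dt * di for xi, di in zip(x, dx)]
    return x

# Full system: x' = -x + z, y' = -y + z, z' = -z, with x(0) = y(0).
full = lambda s: [-s[0] + s[2], -s[1] + s[2], -s[2]]
final_full = euler(full, [1.0, 1.0, 2.0])

# Reduced system after aggregating x and y into w (backward equivalence):
# w' = -w + z, z' = -z, with w(0) = x(0) = y(0).
reduced = lambda s: [-s[0] + s[1], -s[1]]
final_reduced = euler(reduced, [1.0, 2.0])
```

The trajectories of x, y and the aggregate w coincide, so analysing the two-variable system suffices; ERODE computes such aggregations symbolically rather than by simulation.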
Luca Cardelli
Mirco Tribastone
mirco.tribastone@imtlucca.it
Max Tschaikowski
max.tschaikowski@imtlucca.it
Andrea Vandin
andrea.vandin@imtlucca.it
2017-08-08T07:44:36Z
2017-08-08T07:44:36Z
http://eprints.imtlucca.it/id/eprint/3764
Fluid Analysis of Spatio-Temporal Properties of Agents in a Population Model
We consider large stochastic population models in which heterogeneous agents are interacting locally and moving in space. These models are very common, e.g. in the context of mobile wireless networks, crowd dynamics and traffic management, but they are typically very hard to analyze, even when space is discretized in a grid. Here we consider individual agents and look at their properties, e.g. quality of service metrics in mobile networks. Leveraging recent results on the combination of stochastic approximation with formal verification, and of fluid approximation of spatio-temporal population processes, we devise a novel mean-field based approach to check such behaviors, which requires the solution of a low-dimensional set of Partial Differential Equations and is shown to be much faster than simulation. We prove the correctness of the method and validate it on a mobile peer-to-peer network example.
Luca Bortolussi
Max Tschaikowski
max.tschaikowski@imtlucca.it
2017-08-07T10:21:11Z
2017-08-07T10:31:16Z
http://eprints.imtlucca.it/id/eprint/3762
Inferring monopartite projections of bipartite networks: an entropy-based approach
Bipartite networks are currently regarded as providing a major insight into the organization of many real-world systems, unveiling the mechanisms driving the interactions occurring between distinct groups of nodes. One of the most important issues encountered when modeling bipartite networks is devising a way to obtain a (monopartite) projection on the layer of interest, which preserves as much as possible the information encoded into the original bipartite structure. In the present paper we propose an algorithm to obtain statistically-validated projections of bipartite networks, according to which any two nodes sharing a statistically-significant number of neighbors are linked. Since assessing the statistical significance of node similarity requires a proper statistical benchmark, here we consider a set of four null models, defined within the exponential random graph framework. Our algorithm outputs a matrix of link-specific p-values, from which a validated projection is straightforwardly obtainable, upon running a multiple hypothesis testing procedure. Finally, we test our method on an economic network (i.e. the countries-products World Trade Web representation) and a social network (i.e. MovieLens, collecting the users’ ratings of a list of movies). In both cases non-trivial communities are detected: while projecting the World Trade Web on the countries layer reveals modules of similarly-industrialized nations, projecting it on the products layer allows communities characterized by an increasing level of complexity to be detected; in the second case, projecting MovieLens on the films layer allows clusters of movies whose affinity cannot be fully accounted for by genre similarity to be identified.
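The flavour of the validation step can be conveyed with a simpler null model than the entropy-based exponential random graph benchmarks used in the paper: if two nodes on the layer of interest have degrees d1 and d2 out of N possible neighbours on the opposite layer, the number of neighbours they share under random placement is hypergeometric, which yields a p-value for the observed overlap (illustrative sketch only, with made-up numbers):

```python
from math import comb

def shared_neighbors_pvalue(n_opposite, d1, d2, k):
    """P(X >= k) for X ~ Hypergeometric: the probability that two nodes
    with degrees d1 and d2 share at least k of n_opposite possible
    neighbours when their neighbourhoods are placed at random."""
    total = comb(n_opposite, d2)
    upper = min(d1, d2)
    return sum(comb(d1, i) * comb(n_opposite - d1, d2 - i)
               for i in range(k, upper + 1)) / total

p_strong = shared_neighbors_pvalue(20, 5, 5, 4)   # 4 of 5 neighbours shared
p_trivial = shared_neighbors_pvalue(20, 5, 5, 0)  # at least 0 shared: always
```

A link would then be kept in the validated projection only when its p-value survives the multiple hypothesis testing procedure mentioned in the abstract.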
Fabio Saracco
fabio.saracco@imtlucca.it
Mika J. Straka
mika.straka@imtlucca.it
Riccardo Di Clemente
Andrea Gabrielli
Guido Caldarelli
guido.caldarelli@imtlucca.it
Tiziano Squartini
tiziano.squartini@imtlucca.it
2017-08-04T11:50:27Z
2017-08-04T11:50:27Z
http://eprints.imtlucca.it/id/eprint/3760
A SOM-based Chan–Vese model for unsupervised image segmentation
Active Contour Models (ACMs) constitute an efficient energy-based image segmentation framework. They usually deal with the segmentation problem as an optimization problem, formulated in terms of a suitable functional, constructed in such a way that its minimum is achieved in correspondence with a contour that is a close approximation of the actual object boundary. However, for existing ACMs, handling images that contain objects characterized by many different intensities still represents a challenge. In this paper, we propose a novel ACM that combines, in a global and unsupervised way, the advantages of the Self-Organizing Map (SOM) within the level set framework of a state-of-the-art unsupervised global ACM, the Chan–Vese (C–V) model. We term our proposed model SOM-based Chan–Vese (SOMCV) active contour model. It works by explicitly integrating the global information coming from the weights (prototypes) of the neurons in a trained SOM to help choose whether to shrink or expand the current contour during the optimization process, which is performed in an iterative way. The proposed model can handle images that contain objects characterized by complex intensity distributions, and is at the same time robust to additive noise. Experimental results show the high accuracy of the segmentation results obtained by the SOMCV model on several synthetic and real images, when compared to the Chan–Vese model and other image segmentation models.
Mohammed M. Abdelsamea
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Mohamed Medhat Gaber
2017-08-04T11:47:46Z
2017-08-04T11:47:46Z
http://eprints.imtlucca.it/id/eprint/3759
LQG Online Learning
Optimal control theory and machine learning techniques are combined to formulate and solve in closed form an optimal control formulation of online learning from supervised examples with regularization of the updates. The connections with the classical linear quadratic Gaussian (LQG) optimal control problem, of which the proposed learning paradigm is a nontrivial variation as it involves random matrices, are investigated. The obtained optimal solutions are compared with the Kalman filter estimate of the parameter vector to be learned. It is shown that the proposed algorithm is less sensitive to outliers with respect to the Kalman estimate (thanks to the presence of the regularization term), thus providing smoother estimates with respect to time. The basic formulation of the proposed online learning framework refers to a discrete-time setting with a finite learning horizon and a linear model. Various extensions are investigated, including the infinite learning horizon and, via the so-called kernel trick, the case of nonlinear models.
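The Kalman-filter baseline mentioned in the abstract reduces, for a constant scalar parameter, to a recursive mean with a shrinking gain. The sketch below is our illustration with made-up noise levels, not the paper's formulation; it estimates a constant parameter from noisy observations:

```python
import random

def kalman_constant(observations, r, p0=100.0):
    """Scalar Kalman filter for theta_{t+1} = theta_t (a constant
    parameter) observed as y_t = theta_t + v_t, Var(v_t) = r."""
    theta, p = 0.0, p0
    for y in observations:
        gain = p / (p + r)             # Kalman gain
        theta += gain * (y - theta)    # innovation update
        p *= (1.0 - gain)              # posterior variance shrinks
    return theta

random.seed(0)
true_theta = 3.0
ys = [true_theta + random.gauss(0.0, 0.5) for _ in range(500)]
est = kalman_constant(ys, r=0.25)
```

With a large prior variance `p0` this coincides with the running sample mean; the paper's regularized updates instead damp the gain, trading some responsiveness for robustness to outliers.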
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Alberto Bemporad
alberto.bemporad@imtlucca.it
Marco Gori
Marcello Sanguineti
2017-08-04T11:42:06Z
2017-08-04T11:42:06Z
http://eprints.imtlucca.it/id/eprint/3758
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3758
2017-08-04T11:42:06Z
Graph-restricted Game Approach for Investigating Human Movement Qualities
A novel computational method for the analysis of expressive full-body movement qualities is introduced, which exploits concepts and tools from graph theory and game theory. The human skeletal structure is modeled as an undirected graph, where the joints are the vertices and the edge set contains both physical and non-physical links. Physical links correspond to connections between adjacent physical body joints (e.g., the forearm, which connects the elbow to the wrist). Non-physical links act as "bridges" between parts of the body not directly connected by the skeletal structure, but sharing very similar feature values. The edge weights depend on features obtained by using Motion Capture data. Then, a mathematical game is constructed over the graph structure, where the vertices represent the players and the edges represent communication channels between them. Hence, the body movement is modeled in terms of a game built on the graph structure. Since the vertices and the edges contribute to the overall quality of the movement, the adopted game-theoretical model is of a cooperative nature. A game-theoretical concept, called the Shapley value, is exploited as a centrality index to estimate the contribution of each vertex to a shared goal (e.g., to the way a particular movement quality is transferred among the vertices). The proposed method is applied to a data set of Motion Capture data of subjects performing expressive movements, recorded in the framework of the H2020-ICT-2015 EU Project WhoLoDance, Project no. 688865. Preliminary results are presented.
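The Shapley value used above as a centrality index can be computed exactly for small games by averaging marginal contributions over all join orders. The sketch below does this for a hypothetical toy graph game (coalition value = number of edges it contains), not the paper's skeletal graph or its movement-quality features.

```python
from itertools import permutations
from math import factorial

def shapley_values(players, value):
    """Exact Shapley values: average each player's marginal
    contribution over all orderings in which the players join the
    coalition (tractable only for small games)."""
    phi = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = set()
        for p in order:
            before = value(coalition)
            coalition.add(p)
            phi[p] += value(coalition) - before
    n_orders = factorial(len(players))
    return {p: v / n_orders for p, v in phi.items()}

# Hypothetical toy graph: vertex "b" is a hub with three neighbors.
edges = {("a", "b"), ("b", "c"), ("b", "d")}

def edges_inside(coalition):
    """A coalition's value = number of graph edges it fully contains."""
    return sum(1 for u, v in edges if u in coalition and v in coalition)

phi = shapley_values(["a", "b", "c", "d"], edges_inside)
```

For this edge-counting game the Shapley value of each vertex equals half its degree, so the hub "b" receives the largest share, which is exactly the "contribution to a shared goal" reading of the index described in the abstract.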
Ksenia Kolykhalova
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Marcello Sanguineti
Antonio Camurri
Gualtiero Volpe
2017-08-04T10:38:09Z
2018-03-08T16:56:04Z
http://eprints.imtlucca.it/id/eprint/3749
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3749
2017-08-04T10:38:09Z
Network reconstruction via density sampling
Reconstructing weighted networks from partial information is necessary in many important circumstances, e.g. for a correct estimation of systemic risk. It has been shown that, in order to achieve an accurate reconstruction, it is crucial to reliably replicate the empirical degree sequence, which is however unknown in many realistic situations. More recently, it has been found that the knowledge of the degree sequence can be replaced by the knowledge of the strength sequence, which is typically accessible, complemented by that of the total number of links, thus considerably relaxing the observational requirements. Here we further relax these requirements and devise a procedure that remains valid when even the total number of links is unavailable. We assume that, apart from the heterogeneity induced by the degree sequence itself, the network is homogeneous, so that its (global) link density can be estimated by sampling subsets of nodes with representative density. We show that the best way of sampling nodes is the random selection scheme, any other procedure being biased towards unrealistically large or small link densities. We then describe in detail our core technique for reconstructing both the topology and the link weights of the unknown network. When tested on real economic and financial data sets, our method achieves a remarkable accuracy and is very robust with respect to the sampled subsets, thus representing a reliable practical tool whenever the available topological information is restricted to small subsets of nodes.
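The density-estimation step advocated above can be sketched directly: draw node subsets uniformly at random and average the densities of the induced subgraphs. The graph below is a hypothetical toy network generated for illustration, not one of the paper's economic or financial data sets.

```python
import random

def link_density(adj, nodes):
    """Density of the subgraph induced by `nodes` (undirected, no loops)."""
    nodes = list(nodes)
    n = len(nodes)
    if n < 2:
        return 0.0
    links = sum(1 for i in range(n) for j in range(i + 1, n)
                if nodes[j] in adj[nodes[i]])
    return links / (n * (n - 1) / 2)

def sampled_density(adj, sample_size, n_samples, rng):
    """Estimate the global link density by averaging over induced
    subgraphs of uniformly random node subsets, i.e. the unbiased
    random selection scheme described in the abstract."""
    all_nodes = list(adj)
    estimates = [link_density(adj, rng.sample(all_nodes, sample_size))
                 for _ in range(n_samples)]
    return sum(estimates) / len(estimates)

# Hypothetical homogeneous (Erdos-Renyi-like) graph, for illustration only
rng = random.Random(42)
N, p = 60, 0.2
adj = {i: set() for i in range(N)}
for i in range(N):
    for j in range(i + 1, N):
        if rng.random() < p:
            adj[i].add(j)
            adj[j].add(i)

true_d = link_density(adj, range(N))
est_d = sampled_density(adj, sample_size=15, n_samples=200, rng=rng)
```

For a homogeneous graph the induced-subgraph density is an unbiased estimator of the global density, so the sampled estimate lands close to the true value; a non-uniform selection scheme (e.g. preferring high-degree nodes) would bias it upward, as the abstract warns.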
Tiziano Squartini
tiziano.squartini@imtlucca.it
Giulio Cimini
giulio.cimini@imtlucca.it
Andrea Gabrielli
Diego Garlaschelli
diego.garlaschelli@imtlucca.it
2017-07-20T10:44:54Z
2017-07-20T10:44:54Z
http://eprints.imtlucca.it/id/eprint/3725
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3725
2017-07-20T10:44:54Z
Predictive Control for Linear and Hybrid Systems
Model Predictive Control (MPC), the dominant advanced control approach in industry over the past twenty-five years, is presented comprehensively in this unique book. With a simple, unified approach, and with attention to real-time implementation, it covers predictive control theory including the stability, feasibility, and robustness of MPC controllers. The theory of explicit MPC, where the nonlinear optimal feedback controller can be calculated efficiently, is presented in the context of linear systems with linear constraints, switched linear systems, and, more generally, linear hybrid systems. Drawing upon years of practical experience and using numerous examples and illustrative applications, the authors discuss the techniques required to design predictive control laws, including algorithms for polyhedral manipulations, mathematical and multiparametric programming, and show how to validate the theoretical properties and implement predictive control policies. The most important algorithms feature in an accompanying free online MATLAB toolbox, which allows easy access to sample solutions. Predictive Control for Linear and Hybrid Systems is an ideal reference for graduate, postgraduate and advanced control practitioners interested in theory and/or implementation aspects of predictive control.
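The receding-horizon idea at the heart of MPC can be sketched in a few lines: at every step, optimize an input sequence over a short horizon against a model, apply only the first input, and repeat. The plant, weights, and crude finite-difference optimizer below are all hypothetical toys, not the book's algorithms or its MATLAB toolbox.

```python
def mpc_step(x0, a, b, horizon=3, q=1.0, r=0.1, iters=400, step=0.01):
    """One receding-horizon step: numerically minimize a quadratic
    cost over the input sequence, then return only its first element
    (toy sketch of the MPC principle, unconstrained scalar case)."""
    u = [0.0] * horizon

    def cost(seq):
        x, c = x0, 0.0
        for uk in seq:
            c += q * x * x + r * uk * uk
            x = a * x + b * uk          # prediction model x+ = a x + b u
        return c + q * x * x            # terminal state penalty

    for _ in range(iters):              # crude finite-difference descent
        base = cost(u)
        grad = []
        for k in range(horizon):
            u[k] += 1e-6
            grad.append((cost(u) - base) / 1e-6)
            u[k] -= 1e-6
        u = [uk - step * g for uk, g in zip(u, grad)]
    return u[0]

# Closed loop: the unstable scalar plant x+ = 1.2 x + u is regulated
# toward the origin by re-solving the horizon problem at every step.
a, b, x = 1.2, 1.0, 5.0
for _ in range(20):
    x = a * x + b * mpc_step(x, a, b)
```

Re-solving at each step is what distinguishes MPC from computing a single open-loop trajectory: the feedback comes from the repeated optimization, here stabilizing an open-loop-unstable plant.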
Francesco Borrelli
Alberto Bemporad
alberto.bemporad@imtlucca.it
Manfred Morari
2017-06-07T11:10:53Z
2017-06-07T11:10:53Z
http://eprints.imtlucca.it/id/eprint/3711
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3711
2017-06-07T11:10:53Z
Block Placement Strategies for Fault-Resilient Distributed Tuple Spaces: An Experimental Study - (Practical Experience Report)
The tuple space abstraction provides an easy-to-use programming paradigm for distributed applications. Intuitively, it behaves like a distributed shared memory, where applications write and read entries (tuples). When deployed over a wide area network, the tuple space needs to cope efficiently with faults of links and nodes. Erasure coding techniques are increasingly popular for dealing with such catastrophic events, in particular due to their storage efficiency with respect to replication. When a client writes a tuple into the system, the tuple is first striped into k blocks and encoded into n > k blocks, in a fault-redundant manner. Any k out of the n blocks are then sufficient to reconstruct and read the tuple. This paper presents several strategies for placing those blocks across the set of nodes of a wide area network, which together form the tuple space. We compare the performance trade-offs of the different placement strategies by means of simulations and a Python implementation of a distributed tuple space. Our results reveal important differences in the efficiency of the different strategies, for example in terms of block fetching latency, and show that having some knowledge of the underlying network graph topology is highly beneficial.
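The k-of-n property described above can be demonstrated with a minimal sketch: encode k data values as the coefficients of a degree-(k-1) polynomial and store its evaluations at n points, so that any k blocks determine the polynomial. Real deployments use Reed-Solomon codes over finite fields; this rational-arithmetic version is only an illustration of the principle, with hypothetical data.

```python
from fractions import Fraction

def encode(data, n):
    """Encode k data values into n > k blocks: treat the data as the
    coefficients of a degree-(k-1) polynomial and evaluate it at the
    points 1..n. Any k blocks then determine the polynomial uniquely."""
    k = len(data)
    return [(x, sum(Fraction(c) * Fraction(x) ** i for i, c in enumerate(data)))
            for x in range(1, n + 1)]

def decode(blocks, k):
    """Reconstruct the k coefficients from any k blocks by Lagrange
    interpolation, using exact arithmetic via Fraction."""
    pts = blocks[:k]
    coeffs = [Fraction(0)] * k
    for i, (xi, yi) in enumerate(pts):
        basis = [Fraction(1)]        # Lagrange basis polynomial for point i
        denom = Fraction(1)
        for j, (xj, _) in enumerate(pts):
            if j == i:
                continue
            denom *= xi - xj
            new = [Fraction(0)] * (len(basis) + 1)   # multiply basis by (x - xj)
            for d, c in enumerate(basis):
                new[d] -= c * xj
                new[d + 1] += c
            basis = new
        for d in range(k):
            coeffs[d] += yi * basis[d] / denom
    return [int(c) for c in coeffs]

tuple_data = [42, 17, 99]           # k = 3 data blocks (hypothetical tuple)
blocks = encode(tuple_data, n=5)    # n = 5 coded blocks, placed on 5 nodes
recovered = decode([blocks[4], blocks[1], blocks[2]], k=3)  # any 3 suffice
```

Compared with storing 5 full replicas, the coded scheme stores only n/k = 5/3 times the data while tolerating the loss of any n - k = 2 blocks, which is the storage-efficiency argument made in the abstract.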
Roberta Barbi
Vitaly Buravlev
Claudio Antares Mezzina
claudio.mezzina@imtlucca.it
Valerio Schiavoni
2017-05-08T12:27:18Z
2017-05-08T12:27:18Z
http://eprints.imtlucca.it/id/eprint/3697
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3697
2017-05-08T12:27:18Z
(edited by) Proceedings of the 31st Annual ACM Symposium on Applied Computing, SAC 2016, Special track on service-oriented architectures and programming (SOAP)
The SOAP track aims at bringing together researchers and practitioners having the common objective of transforming Service-Oriented Programming (SOP) into a mature discipline with both solid scientific foundations and mature software engineering development methodologies supported by dedicated tools. From the foundational point of view, many attempts to use formal methods for specification and verification in this setting have been made. Session correlation, service types, contract theories, and communication patterns are only a few examples of the aspects that have been investigated. Moreover, several formal models based upon automata, Petri nets and algebraic approaches have been developed. However, most of these approaches concentrate only on a few features of service-oriented systems in isolation, and a comprehensive approach is still lacking.
From the engineering point of view, there are open issues at many levels. Among others, at the system design level, both traditional approaches based on UML and approaches taking inspiration from Business Process Modelling, e.g. BPMN, are used. At the composition level, orchestration and choreography are continuously being improved both formally and practically, with an evident need for their integration in the development process. At the description and discovery level, there are two separate communities pushing respectively the semantic approach (like ontologies and OWL) and the syntactic one (like WSDL). In particular, the role of discovery engines and protocols is not clear. In this respect, adopted standards are still missing. UDDI looked to be a good candidate, but it is no longer pushed by the main corporations, and its wide adoption seems difficult. Furthermore, a recent implementation platform, the so-called REST services, is emerging and competing with classic Web Services. Finally, features like Quality of Service, security, and dependability need to be taken seriously into account.
SOAP in particular encouraged submissions on what SOP still needs in order to achieve the above goals.
The PC of SOAP 2016 was formed by:
• Farhad Arbab Leiden University and CWI, Amsterdam, NL
• Luís Barbosa University of Minho, Braga, PT
• Massimo Bartoletti Università di Cagliari, IT
• Maurice H. ter Beek ISTI-CNR, Pisa, IT (co-chair)
• Marcello M. Bersani Politecnico di Milano, IT
• Laura Bocchi University of Kent, UK
• Roberto Bruni Università di Pisa, IT
• Marco Carbone IT University of Copenhagen, DK
• Romain Demangeon Université Pierre et Marie Curie, FR
• Schahram Dustdar Vienna University of Technology, AT
• Alessandra Gorla IMDEA Software Institute, Madrid, ES
• Vasileios Koutavas Trinity College Dublin, IE
• Alberto Lluch Lafuente Technical University of Denmark, DK
• Manuel Mazzara Innopolis University, RU
• Hernán Melgratti University of Buenos Aires, AR (co-chair)
• Nicola Mezzetti University of Trento, IT
• Corrado Moiso Telecom Italia, IT
• Alberto Núñez Universidad Complutense de Madrid, ES
• Jorge A. Perez University of Groningen, NL
• Gustavo Petri Purdue University, USA
• António Ravara New University of Lisbon, PT
• Steve Ross-Talbot Cognizant Technology Solutions, UK
• Gwen Salaün Inria Grenoble - Rhône-Alpes, FR
• Francesco Tiezzi Università di Camerino, IT
• Hugo Torres Vieira IMT Lucca, IT (co-chair)
• Emilio Tuosto University of Leicester, UK
• Massimo Vecchio Università degli Studi eCampus, IT
• Peter Wong Travelex, UK
• Yongluan Zhou University of Southern Denmark, DK
SOAP 2016 received a total of 16 submissions. Each submission was reviewed by at least 4 PC members, the vast majority even by 5 PC members. All papers were subject to an animated general discussion among the PC members (with over 100 posts in the message boards). In the end, the PC decided to select only the following four papers for an oral presentation at the conference (an acceptance rate of 25%):
• JxActinium: a runtime manager for secure REST-ful COAP applications working over JXTA by Filippo Battaglia, Giancarlo Iannizzotto, and Lucia Lo Bello
• Improving QoS Delivered by WS-BPEL Scenario Adaptation through Service Execution Parallelization by Dionisis Margaris, Costas Vassilakis, and Panagiotis Georgiadis
• QoS-aware Adaptation for Complex Event Service by Feng Gao, Muhammad Ali, Edward Curry, and Alessandra Mileo
• Service functional testing automation with intelligent scheduling and planning by Lom Messan Hillah, Ariele-Paolo Maesano, Libero Maesano, Fabio De Rosa, Fabrice Kordon, and Pierre-Henri Wuillemin
We would like to thank the PC members, and a few external reviewers, for their detailed reports and the stimulating discussions during the reviewing phase; the authors of submitted papers, the session chairs and the attendees, for contributing to the success of the event; the providers of the START system, which was used to manage the submissions; and in particular all the organizers of SAC 2016, for their invitation to organize this track and for all their excellent assistance and support.
Maurice H. ter Beek
Hernán C. Melgratti
Hugo Torres Vieira
hugo.torresvieira@imtlucca.it
2017-05-08T12:26:53Z
2017-05-08T12:26:53Z
http://eprints.imtlucca.it/id/eprint/3696
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3696
2017-05-08T12:26:53Z
Preface for the special issue on Interaction and Concurrency Experience 2015
This special issue contains extended versions of selected papers from the 8th Interaction and Concurrency Experience workshop (ICE 2015). The workshop was held in Grenoble, France, on June 4-5th, 2015. ICE workshops form a series of international scientific meetings oriented to theoretical computer science researchers with special interest in models, verification, tools, and programming primitives for complex interactions.
The general scope of the venue includes theoretical and applied aspects of interactions and the synchronization mechanisms used among components of concurrent/distributed systems, related to several areas of computer science in the broad spectrum ranging from formal specification and analysis to studies inspired by emerging computational models.
The authors of the most prominent papers presented at ICE 2015 were invited to submit an extended version to this special issue. In order to guarantee the fairness and quality of the selection process, each submission received at least three reviews. The review process has ensured that the accepted articles significantly extend and improve the original workshop contributions.
We want to thank all the authors who contributed to this volume. We would like to thank all the members of the Program Committee of ICE, who helped us in the selection of the papers and who helped the authors to improve their contributions in several ways. Additional referees were involved in the review of the papers invited for this special issue and we thank their timely contributions. We would also like to thank the editors of JLAMP, for their support during the whole editorial process.
Ivan Lanese
Alberto Lluch Lafuente
Sophia Knight
Hugo Torres Vieira
hugo.torresvieira@imtlucca.it
2017-05-08T08:07:14Z
2017-05-08T08:07:14Z
http://eprints.imtlucca.it/id/eprint/3698
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3698
2017-05-08T08:07:14Z
Phenotiki: an open software and hardware platform for affordable and easy image-based phenotyping of rosette-shaped plants
Phenotyping is important to understand plant biology, but current solutions are costly, inflexible, or difficult to deploy. To solve this problem, we present Phenotiki, an affordable system for plant phenotyping that, relying on off-the-shelf parts, provides an easy-to-install and easy-to-maintain platform, offering an out-of-the-box experience for a well-established phenotyping need: imaging rosette-shaped plants. The accompanying software (with available source code) processes data originating from our device seamlessly and automatically. Our software relies on machine learning to devise robust algorithms, and includes an automated leaf count obtained from 2D images without the need for depth (3D) information. Our affordable device (~€200) can be deployed in growth chambers or greenhouses to acquire optical 2D images of up to approximately 60 adult Arabidopsis rosettes concurrently. Data from the device are processed remotely on a workstation or via a cloud application (based on CyVerse). In this paper, we present a proof-of-concept validation experiment on top-view images of 24 Arabidopsis plants in a combination of genotypes that has not been compared previously. Phenotypic analysis with respect to morphology, growth, color and leaf count has not been performed comprehensively before now. We confirm the findings of others on some of the extracted traits, showing that we can phenotype at reduced cost. We also perform extensive validations with external measurements and with higher fidelity equipment, and find no loss in statistical accuracy when we use the affordable setting that we propose. Device set-up instructions and analysis software are publicly available (http://phenotiki.com).
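A minimal example of image-based phenotyping of the kind described above is estimating projected rosette area by segmenting green pixels with the excess-green index 2G - R - B, a common baseline in plant imaging. This is an illustrative sketch on a synthetic image, not Phenotiki's actual machine-learning pipeline.

```python
def plant_area(image, excess_green_threshold=20):
    """Estimate projected plant area (in pixels) from a top-view RGB
    image using the excess-green index 2G - R - B: pixels above the
    threshold are counted as plant (illustrative baseline only)."""
    count = 0
    for row in image:
        for (r, g, b) in row:
            if 2 * g - r - b > excess_green_threshold:
                count += 1
    return count

# Synthetic 4x4 top-view image: a 2x2 green "rosette" on brown "soil"
soil, leaf = (120, 100, 80), (60, 160, 50)
img = [[leaf if 1 <= i <= 2 and 1 <= j <= 2 else soil for j in range(4)]
       for i in range(4)]
```

Tracking this pixel count over successive images gives a simple 2D growth curve, one of the morphology/growth traits the abstract mentions extracting without depth information.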
Massimo Minervini
Mario Valerio Giuffrida
valerio.giuffrida@imtlucca.it
Pierdomenico Perata
Sotirios A. Tsaftaris
2017-05-04T14:16:41Z
2017-05-04T14:16:41Z
http://eprints.imtlucca.it/id/eprint/3695
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3695
2017-05-04T14:16:41Z
Foundations of Session Types and Behavioural Contracts
Behavioural type systems, usually associated with concurrent or distributed computations, encompass concepts such as interfaces, communication protocols, and contracts, in addition to the traditional input/output operations. The behavioural type of a software component specifies its expected patterns of interaction using expressive type languages, so types can be used to determine automatically whether the component interacts correctly with other components. Two related important notions of behavioural types are those of session types and behavioural contracts. This article surveys the main accomplishments of the last 20 years within these two approaches.
Hans Huttel
Ivan Lanese
Vasco Thudichum Vasconcelos
Luis Caires
Marco Carbone
Pierre-Malo Deniélou
Dimitris Mostrous
Luca Padovani
Antonio Ravara
Emilio Tuosto
Hugo Torres Vieira
hugo.torresvieira@imtlucca.it
Gianluigi Zavattaro
2017-05-04T14:10:39Z
2017-05-04T14:10:39Z
http://eprints.imtlucca.it/id/eprint/3694
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3694
2017-05-04T14:10:39Z
Dynamic role authorization in multiparty conversations
Protocols in distributed settings usually rely on the interaction of several parties and often identify the roles involved in communications. Roles may have a behavioral interpretation, as they do not necessarily correspond to sites or physical devices. Notions of role authorization thus become necessary to consider settings in which, e.g., different sites may be authorized to act on behalf of a single role, or in which one site may be authorized to act on behalf of different roles. This flexibility must be equipped with ways of controlling the roles that the different parties are authorized to represent, including the challenging case in which role authorizations are determined only at runtime. We present a typed framework for the analysis of multiparty interaction with dynamic role authorization and delegation. Building on previous work on conversation types with role assignment, our formal model is based on an extension of the π-calculus in which the basic resources are channel–role pairs, which denote the right to interact along a given channel while representing the given role. To specify dynamic authorization control, our process model includes (1) a novel scoping construct for authorization domains, and (2) communication primitives for authorizations, which allow authorizations to act on a given channel to be passed around. An authorization error then corresponds to an action involving a channel and a role not enclosed by an appropriate authorization scope. We introduce a typing discipline that ensures that processes never reduce to authorization errors, including when parties dynamically acquire authorizations.
Silvia Ghilezan
Svetlana Jakšić
Jovanka Pantović
Jorge A. Pérez
Hugo Torres Vieira
hugo.torresvieira@imtlucca.it
2017-05-04T13:43:26Z
2017-05-04T13:43:26Z
http://eprints.imtlucca.it/id/eprint/3692
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3692
2017-05-04T13:43:26Z
(edited by) Proceedings 9th Interaction and Concurrency Experience, ICE 2016, Heraklion, Greece, 8-9 June 2016
This volume contains the proceedings of ICE 2016, the 9th Interaction and Concurrency Experience, which was held in Heraklion, Greece on the 8th and 9th of June 2016 as a satellite event of DisCoTec 2016. The ICE procedure for paper selection allows PC members to interact, anonymously, with authors. During the review phase, each submitted paper is published on a discussion forum whose access is restricted to the authors and to all the PC members not declaring a conflict of interest. The PC members post comments and questions that the authors reply to. For the first time, the 2016 edition of ICE included a feature targeting review transparency: reviews of accepted papers were made public on the workshop website and workshop participants in particular were able to access them during the workshop. Each paper was reviewed by three PC members, and altogether nine papers were accepted for publication (the workshop also featured three brief announcements which are not part of this volume). We were proud to host two invited talks, by Alexandra Silva and Uwe Nestmann. The abstracts of these two talks are included in this volume together with the regular papers.
Massimo Bartoletti
Ludovic Henrio
Sophia Knight
Hugo Torres Vieira
hugo.torresvieira@imtlucca.it
2017-05-04T13:42:01Z
2017-05-04T13:42:01Z
http://eprints.imtlucca.it/id/eprint/3693
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3693
2017-05-04T13:42:01Z
Preface for the special issue on Interaction and Concurrency Experience 2014
This special issue contains extended versions of selected papers from the 7th Interaction and Concurrency Experience workshop (ICE 2014). The workshop was held in Berlin (Germany) on June 6th, 2014. ICE workshops form a series of international scientific meetings oriented to theoretical computer science researchers with special interest in models, verification, tools, and programming primitives for complex interactions.
The general scope of the venue includes theoretical and applied aspects of interactions and the synchronization mechanisms used among components of concurrent/distributed systems, related to several areas of computer science in the broad spectrum ranging from formal specification and analysis to studies inspired by emerging computational models.
The authors of the most prominent papers presented at ICE 2014 were invited to submit an extended version to this special issue. In order to guarantee the fairness and quality of the selection process, each submission received at least three reviews. The review process has also ensured that the accepted articles significantly extend and improve the original workshop contributions.
This special issue features three articles:
• Declarative event based models of concurrency and refinement in psi-calculi, by Håkon Normann, Christian Johansen and Thomas Hildebrandt. In this paper the authors explore declarative event-based specifications open to runtime refinement, aiming at a declarative model with support for adaptation.
• Contracts as games on event structures, by Massimo Bartoletti, Tiziana Cimoli, G. Michele Pinna and Roberto Zunino. This work presents an event structure based interpretation of contracts, making it possible to study the rights and obligations of contract participants in a natural setting.
• Relating two automata-based models of orchestration and choreography, by Davide Basile, Pierpaolo Degano, Gian Luigi Ferrari and Emilio Tuosto. This paper compares local contract-based specifications coordinated by orchestrators with communicating machines that coordinate in a decentralized manner.
We want to thank all the authors who contributed to this volume. We would like to thank all the members of the Program Committee of ICE, who helped us in the selection of the papers and who helped the authors to improve their contributions in several ways. Additional referees were involved in the review of the papers invited for this special issue and we thank their timely contributions. We would also like to thank the editors of JLAMP, for their support during the whole editorial process.
Ivan Lanese
Alberto Lluch Lafuente
Ana Sokolova
Hugo Torres Vieira
hugo.torresvieira@imtlucca.it
2017-05-04T13:38:06Z
2017-05-04T13:38:06Z
http://eprints.imtlucca.it/id/eprint/3691
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3691
2017-05-04T13:38:06Z
(edited by) Proceedings 8th Interaction and Concurrency Experience, ICE 2015, Grenoble, France, 4-5th June 2015
This volume contains the proceedings of ICE 2015, the 8th Interaction and Concurrency Experience, which was held in Grenoble, France on the 4th and 5th of June 2015 as a satellite event of DisCoTec 2015. The ICE procedure for paper selection allows PC members to interact, anonymously, with authors. During the review phase, each submitted paper is published on a discussion forum with access restricted to the authors and to all the PC members not declaring a conflict of interest. The PC members post comments and questions to which the authors reply. Each paper was reviewed by three PC members, and altogether 9 papers, including 1 short paper, were accepted for publication (the workshop also featured 4 brief announcements which are not part of this volume). We were proud to host three invited talks, by Leslie Lamport (shared with the FRIDA workshop), Joseph Sifakis and Steve Ross-Talbot. The abstracts of the last two talks are included in this volume together with the regular papers.
Sophia Knight
Ivan Lanese
Alberto Lluch Lafuente
Hugo Torres Vieira
hugo.torresvieira@imtlucca.it
2017-05-04T13:35:38Z
2017-05-04T13:35:38Z
http://eprints.imtlucca.it/id/eprint/3690
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3690
2017-05-04T13:35:38Z
A Typed Model for Dynamic Authorizations
Security requirements in distributed software systems are inherently dynamic. In the case of authorization policies, resources are meant to be accessed only by authorized parties, but the authorization to access a resource may be dynamically granted/yielded. We describe ongoing work on a model for specifying communication and dynamic authorization handling. We build upon the pi-calculus so as to enrich communication-based systems with authorization specification and delegation; here authorizations regard channel usage and delegation refers to the act of yielding an authorization to another party. Our model includes: (i) a novel scoping construct for authorization, which allows authorization boundaries to be specified, and (ii) communication primitives for authorizations, which allow authorizations to act on a given channel to be passed around. An authorization error may consist in, e.g., performing an action along a name which is not under an appropriate authorization scope. We introduce a typing discipline that ensures that processes never reduce to authorization errors, even when authorizations are dynamically delegated.
Silvia Ghilezan
Svetlana Jakšić
Jovanka Pantović
Jorge A. Pérez
Hugo Torres Vieira
hugo.torresvieira@imtlucca.it
2017-04-18T08:57:40Z
2017-04-18T08:57:40Z
http://eprints.imtlucca.it/id/eprint/3689
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3689
2017-04-18T08:57:40Z
Modeling confirmation bias and polarization
Online users tend to select claims that adhere to their system of beliefs and to ignore dissenting information. Confirmation bias indeed plays a pivotal role in viral phenomena. Furthermore, the wide availability of content on the web fosters the aggregation of like-minded people, where debates tend to enforce group polarization. Such a configuration might alter the public debate and thus the formation of public opinion. In this paper we provide a mathematical model to study online social debates and the related polarization dynamics. We assume the basic updating rule of the Bounded Confidence Model (BCM) and develop two variations: (a) the Rewire with Bounded Confidence Model (RBCM), in which discordant links are broken until convergence is reached; and (b) the Unbounded Confidence Model, under which the interaction among discordant pairs of users is allowed even with a negative feedback, either with the rewiring step (RUCM) or without it (UCM). From numerical simulations we find that the new models (UCM and RUCM), unlike the BCM, are able to explain the coexistence of two stable final opinions, often observed in reality. Lastly, we present a mean-field approximation of the newly introduced models.
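The basic BCM updating rule assumed above can be sketched as a pairwise (Deffuant-style) compromise dynamic: two randomly chosen agents move toward each other only when their opinions already lie within the confidence bound. The parameters and initial opinions below are hypothetical; the paper's RBCM/UCM/RUCM variants add link rewiring and negative feedback on top of this baseline.

```python
import random

def bounded_confidence(opinions, epsilon=0.2, mu=0.5, steps=20000, seed=1):
    """Basic bounded-confidence dynamics: two randomly chosen agents
    compromise (each moves a fraction mu toward the other) only when
    their opinions differ by less than epsilon."""
    rng = random.Random(seed)
    x = list(opinions)
    for _ in range(steps):
        i, j = rng.sample(range(len(x)), 2)
        if abs(x[i] - x[j]) < epsilon:
            # both right-hand sides use the old values (tuple unpacking)
            x[i], x[j] = (x[i] + mu * (x[j] - x[i]),
                          x[j] + mu * (x[i] - x[j]))
    return x

# Two groups farther apart than epsilon never interact: the population
# polarizes into two internally coherent opinion clusters.
final = bounded_confidence([0.10, 0.12, 0.14, 0.90, 0.92, 0.94])
```

Because discordant pairs never interact under the basic rule, the two clusters simply freeze; it is precisely this limitation that motivates the rewiring and negative-feedback variants studied in the paper.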
Michela Del Vicario
michela.delvicario@imtlucca.it
Antonio Scala
Guido Caldarelli
guido.caldarelli@imtlucca.it
H. Eugene Stanley
Walter Quattrociocchi
walter.quattrociocchi@imtlucca.it
2017-04-18T08:31:54Z
2017-04-18T08:31:54Z
http://eprints.imtlucca.it/id/eprint/3685
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3685
2017-04-18T08:31:54Z
Inside the Echo Chamber
Despite optimistic talk about “collective intelligence,” the Web has helped create an echo chamber where misinformation thrives. Indeed, the viral spread of hoaxes, conspiracy theories, and other false or baseless information online is one of the most disturbing social trends of the early 21st century.
Social scientists are studying this echo chamber by applying computational methods to the traces people leave on Facebook, Twitter and other such outlets. Through this work, they have established that users happily embrace false information as long as it reinforces their preexisting beliefs.
Faced with complex global issues, people of all educational levels choose to believe compact—but false—explanations that clearly identify an object of blame. Unfortunately, attempts to debunk false beliefs seem only to reinforce them. Stopping the spread of misinformation is thus a problem with no apparent simple solutions.
Walter Quattrociocchi
walter.quattrociocchi@imtlucca.it
2017-03-22T10:06:42Z
2017-03-22T10:06:42Z
http://eprints.imtlucca.it/id/eprint/3681
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3681
2017-03-22T10:06:42Z
Reference trajectory planning under constraints and path tracking using linear time-varying model predictive control for agricultural machines
A method for the control of autonomously and slowly moving agricultural machinery is presented. Special emphasis is on offline reference trajectory generation tailored for high-precision closed-loop tracking within agricultural fields using linear time-varying model predictive control. When optimisation is carried out, high-level logistical processing can result in edgy reference paths for field coverage. Subsequent trajectory smoothing can consider specific actuator rate constraints and field geometry. The latter step is the subject of this paper. Focussing on forward motion only, the role of non-convexly shaped field geometry, repressed area minimisation and spraying gap avoidance is analysed. Three design methods for generating smooth reference trajectories are discussed: circle-segments, generalised elementary paths, and bi-elementary paths.
Mogens M. Graf Plessen
Alberto Bemporad
alberto.bemporad@imtlucca.it
2017-03-21T11:05:10Z
2017-09-21T14:56:38Z
http://eprints.imtlucca.it/id/eprint/3666
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3666
2017-03-21T11:05:10Z
Optimal design of low-frequency band gaps in anti-tetrachiral lattice meta-materials
The elastic wave propagation is investigated in a beam lattice material characterized by a square periodic cell with anti-tetrachiral microstructure. With reference to the Floquet-Bloch spectrum, focus is made on the band structure enrichments and modifications which can be achieved by equipping the cellular microstructure with tunable local resonators. By virtue of its composite mechanical nature, the so-built inertial meta-material gains enhanced capacities of passive frequency-band filtering. Indeed the number, placement and properties of the inertial resonators can be designed to open, shift and enlarge the band gaps between one or more pairs of consecutive branches in the frequency spectrum. In order to improve the meta-material performance, several nonlinear optimization problems are formulated. The largest among the band gap amplitudes in the low-frequency range is selected as suited objective function. Proper inequality constraints are introduced to restrict the admissible solutions within a compact set of mechanical and geometric parameters, including only physically realistic properties of both the lattice and the resonators. The optimization problems related to full and partial band gaps are solved by using a globally convergent version of the numerical method of moving asymptotes, combined with a quasi-Monte Carlo multi-start technique. The optimal solutions are numerically computed, discussed and compared from the qualitative and quantitative viewpoints, bringing to light the limits and potential of the meta-material performance. The clearest trends emerging from the numerical analyses are pointed out and interpreted from the physical viewpoint. Finally, some specific recommendations about the microstructural design of the meta-material are synthesized.
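The quasi-Monte Carlo multi-start strategy mentioned above can be sketched generically: launch a local optimizer from the points of a low-discrepancy (Halton) sequence covering the search box and keep the best local minimum. The objective below is a hypothetical two-well toy function, not the band-gap functional, and simple finite-difference descent stands in for the method of moving asymptotes.

```python
def halton(i, base):
    """i-th term of the 1-D Halton low-discrepancy sequence."""
    f, r = 1.0, 0.0
    while i > 0:
        f /= base
        r += f * (i % base)
        i //= base
    return r

def local_descent(f, x, y, step=0.05, iters=300):
    """Crude local minimizer: finite-difference gradient descent."""
    h = 1e-6
    for _ in range(iters):
        gx = (f(x + h, y) - f(x, y)) / h
        gy = (f(x, y + h) - f(x, y)) / h
        x, y = x - step * gx, y - step * gy
    return x, y, f(x, y)

def multistart_minimize(f, n_starts=32, lo=-2.0, hi=2.0):
    """Quasi-Monte Carlo multi-start: run the local optimizer from
    each point of a 2-D Halton sequence covering the box [lo, hi]^2
    and keep the best local minimum found."""
    best = None
    for i in range(1, n_starts + 1):
        x0 = lo + (hi - lo) * halton(i, 2)
        y0 = lo + (hi - lo) * halton(i, 3)
        cand = local_descent(f, x0, y0)
        if best is None or cand[2] < best[2]:
            best = cand
    return best

# Hypothetical multimodal objective: two global minima with value 0
# at (x, y) = (+1, 0) and (-1, 0), with a barrier at x = 0 that can
# trap a single badly placed descent run.
def two_well(x, y):
    return (x * x - 1) ** 2 + y * y

xb, yb, fb = multistart_minimize(two_well)
```

The Halton starts cover the admissible box more evenly than pseudo-random points, so at least one run lands in the basin of a global minimum even when other runs stall on the barrier, which is the rationale for combining a local method with a quasi-Monte Carlo multi-start.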
Andrea Bacigalupo
andrea.bacigalupo@imtlucca.it
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Marco Lepidi
Luigi Gambarotta
2017-03-21T10:56:30Z
2017-03-21T10:56:30Z
http://eprints.imtlucca.it/id/eprint/3664
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3664
2017-03-21T10:56:30Z
Design of acoustic metamaterials through nonlinear programming
The dispersive wave propagation in a periodic metamaterial with tetrachiral topology and inertial local resonators is investigated. The Floquet-Bloch spectrum of the metamaterial is compared with that of the tetrachiral beam lattice material without resonators. The resonators can be designed to open and shift frequency band gaps, that is, spectrum intervals in which harmonic waves do not propagate. Therefore, an optimal passive control of the frequency band structure can be pursued in the metamaterial. To this aim, a suitable constrained nonlinear optimization problem on a compact set of admissible geometrical and mechanical parameters is stated. According to functional requirements, the particular set of parameters which determines the largest low-frequency band gap between a pair of consecutive branches of the Floquet-Bloch spectrum is obtained. The optimization problem is successfully solved by means of a version of the method of moving asymptotes, combined with a quasi-Monte Carlo multi-start technique.
arXiv:1603.07717v2 [cond-mat.mtrl-sci]
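The optimisation strategy described in the abstract, a local nonlinear programming solver restarted from many well-spread initial points, can be sketched on a hypothetical one-dimensional objective. The double-well function below stands in for the band-gap amplitude, and plain gradient descent stands in for the method of moving asymptotes; none of these details come from the paper.

```python
# Multi-start local optimization: a toy stand-in for the paper's
# quasi-Monte Carlo multi-start strategy (hypothetical 1-D objective,
# not the actual band-gap amplitude function).

def f(x):
    # Double-well objective: local minima near x = +1 and x = -1;
    # the linear tilt makes the one near x = -1 the global minimum.
    return (x * x - 1.0) ** 2 + 0.3 * x

def grad(x):
    return 4.0 * x * (x * x - 1.0) + 0.3

def local_descent(x, step=0.01, iters=2000):
    # Plain gradient descent as the local solver (the paper uses a
    # globally convergent method of moving asymptotes instead).
    for _ in range(iters):
        x -= step * grad(x)
    return x

# Deterministic, evenly spread starts over the admissible set [-2, 2],
# mimicking a low-discrepancy (quasi-Monte Carlo) sample.
starts = [-2.0 + 4.0 * k / 7 for k in range(8)]
candidates = [local_descent(x0) for x0 in starts]
best = min(candidates, key=f)
```

On this toy problem the multi-start phase is what recovers the global minimizer near x = -1; a single descent started in the right half would stall in the shallower well near x = +1.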
Andrea Bacigalupo
andrea.bacigalupo@imtlucca.it
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Marco Lepidi
Luigi Gambarotta
2017-02-01T08:49:17Z
2017-02-01T08:49:17Z
http://eprints.imtlucca.it/id/eprint/3652
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3652
2017-02-01T08:49:17Z
Statistical shape modeling of the left ventricle: myocardial infarct classification challenge
Statistical shape modeling is a powerful tool for visualizing and quantifying geometric and functional patterns of the heart. After myocardial infarction (MI), the left ventricle typically remodels in response to physiological challenges. Several methods have been proposed in the literature to describe statistical shape changes. Which method best characterizes left ventricular remodeling after MI is an open research question. A better descriptor of remodeling is expected to provide a more accurate evaluation of disease status in MI patients. We therefore designed a challenge to test shape characterization in MI given a set of three-dimensional left ventricular surface points. The training set comprised 100 MI patients and 100 asymptomatic volunteers (AV). The challenge was initiated in 2015 at the Statistical Atlases and Computational Models of the Heart workshop, in conjunction with the MICCAI conference. The training set with labels was provided to participants, who were asked to submit the likelihood of MI from a different (validation) set of 200 cases (100 AV and 100 MI). Sensitivity, specificity, accuracy and area under the receiver operating characteristic curve were used as the outcome measures. The goals of this challenge were to (1) establish a common dataset for evaluating statistical shape modeling algorithms in MI, and (2) test whether statistical shape modeling provides additional information characterizing MI patients over standard clinical measures. Eleven groups with a wide variety of classification and feature extraction approaches participated in this challenge. All methods achieved excellent classification results with accuracies ranging from 0.83 to 0.98. The areas under the receiver operating characteristic curves were all above 0.90. Four methods showed significantly higher performance than standard clinical measures. The dataset and software for evaluation are available from the Cardiac Atlas Project website.
A. Suinesiaputra
P. Ablin
X. Alba
M. Alessandrini
J. Allen
W. Bai
S. Cimen
P. Claes
B. R. Cowan
J. D'hooge
N. Duchateau
J. Ehrhardt
A. F. Frangi
A. Gooya
V. Grau
K. Lekadir
A. Lu
A. Mukhopadhyay
Ilkay Oksuz
ilkay.oksuz@imtlucca.it
X. Pennec
M. Pereanez
C. Pinto
P. Piras
M. M. Rohe
D. Rueckert
M. Sermesant
K. Siddiqi
M. Tabassian
L. Teresi
S. A. Tsaftaris
M. Wilms
A. A. Young
X. Zhang
P. Medrano-Gracia
2017-02-01T08:36:30Z
2017-02-01T08:36:30Z
http://eprints.imtlucca.it/id/eprint/3651
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3651
2017-02-01T08:36:30Z
MRI-TRUS Image Synthesis with Application to Image-Guided Prostate Intervention
Accurate and robust fusion of pre-procedure magnetic resonance imaging (MRI) to intra-procedure trans-rectal ultrasound (TRUS) imaging is necessary for image-guided prostate cancer biopsy procedures. The current clinical standard for image fusion relies on non-rigid surface-based registration between semi-automatically segmented prostate surfaces in both the MRI and TRUS. This surface-based registration method does not take advantage of internal anatomical prostate structures, which have the potential to provide useful information for image registration. However, non-rigid, multi-modal intensity-based MRI-TRUS registration is challenging due to the highly non-linear intensity relationships between MRI and TRUS. In this paper, we present preliminary work using image synthesis to cast this problem into a mono-modal registration task by using a large database of over 100 clinical MRI-TRUS image pairs to learn a joint model of MR-TRUS appearance. Thus, given an MRI, we use this learned joint appearance model to synthesize the patient’s corresponding TRUS image appearance with which we could potentially perform mono-modal intensity-based registration. We present preliminary results of this approach.
John A. Onofrey
Ilkay Oksuz
ilkay.oksuz@imtlucca.it
Saradwata Sarkar
Rajesh Venkataraman
Lawrence H. Staib
Xenophon Papademetris
2017-01-26T14:46:16Z
2017-01-26T14:46:16Z
http://eprints.imtlucca.it/id/eprint/3645
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3645
2017-01-26T14:46:16Z
Stochastic gradient methods for stochastic model predictive control
We introduce a new stochastic gradient algorithm, SAAGA, and investigate its use for solving Stochastic MPC problems and multi-stage stochastic optimization programs in general. The method is particularly attractive for scenario-based formulations that involve a large number of scenarios, for which “batch” formulations may become inefficient due to high computational costs. Benefits of the method include cheap computations per iteration and fast convergence due to the sparsity of the proposed problem decomposition.
A. Themelis
S. Villa
Panagiotis Patrinos
Alberto Bemporad
alberto.bemporad@imtlucca.it
2017-01-26T14:36:39Z
2017-01-26T14:36:39Z
http://eprints.imtlucca.it/id/eprint/3643
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3643
2017-01-26T14:36:39Z
A Simple Effective Heuristic for Embedded Mixed-Integer Quadratic Programming
In this paper we propose a fast optimization algorithm for approximately minimizing convex quadratic functions over the intersection of affine and separable constraints (i.e., the Cartesian product of possibly nonconvex real sets). This problem class contains many NP-hard problems such as mixed-integer quadratic programming. Our heuristic is based on a variation of the alternating direction method of multipliers (ADMM), an algorithm for solving convex optimization problems. We discuss the favorable computational aspects of our algorithm, which allow it to run quickly even on very modest computational platforms such as embedded processors. We give several examples for which an approximate solution should be found very quickly, such as management of a hybrid-electric vehicle drivetrain. Our numerical experiments suggest that our method is very effective in finding a feasible point with small objective value; indeed, we see that in many cases, it finds the global solution.
Reza Takapoui
Nicholas Moehle
Stephen Boyd
Alberto Bemporad
alberto.bemporad@imtlucca.it
2017-01-26T14:29:19Z
2017-01-26T14:29:19Z
http://eprints.imtlucca.it/id/eprint/3642
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3642
2017-01-26T14:29:19Z
Solving Mixed-Integer Quadratic Programs via Nonnegative Least Squares
This paper proposes a new algorithm for solving Mixed-Integer Quadratic Programming (MIQP) problems. The algorithm is particularly tailored to solving small-scale MIQPs such as those that arise in embedded hybrid Model Predictive Control (MPC) applications. The approach combines branch and bound (B&B) with nonnegative least squares (NNLS), which is used to solve the Quadratic Programming (QP) relaxations. The QP algorithm extends a method recently proposed by the author for solving strictly convex QPs, by (i) handling equality and bilateral inequality constraints, (ii) warm starting, and (iii) exploiting easy-to-compute lower bounds on the optimal cost to reduce the number of QP iterations required to solve the relaxed problems. The proposed MIQP algorithm has a speed of execution that is comparable to state-of-the-art commercial MIQP solvers and is relatively simple to code, as it requires only basic arithmetic operations to solve least-squares problems.
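The NNLS building block at the core of the approach is a classical problem with off-the-shelf solvers; the sketch below only shows NNLS itself, via SciPy's Lawson-Hanson implementation, on made-up data. The B&B layer and the paper's QP-to-NNLS reduction are not reproduced here.

```python
import numpy as np
from scipy.optimize import nnls

# Nonnegative least squares: minimize ||A x - b||_2 subject to x >= 0.
# Illustrative data, not from the paper.
A = np.array([[1.0, 0.0],
              [0.0, 1.0]])
b = np.array([2.0, -1.0])

x, rnorm = nnls(A, b)
# The unconstrained least-squares solution would be (2, -1);
# the nonnegativity constraint clips the second coordinate to 0,
# leaving a residual norm of 1.
```

In the paper, solving each QP relaxation reduces to a problem of exactly this form, which is why only basic arithmetic operations are needed.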
Alberto Bemporad
alberto.bemporad@imtlucca.it
2017-01-26T14:17:52Z
2017-01-26T14:17:52Z
http://eprints.imtlucca.it/id/eprint/3641
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3641
2017-01-26T14:17:52Z
GPU-accelerated stochastic predictive control of drinking water networks
Ajay Kumar Sampathirao
Pantelis Sopasakis
Alberto Bemporad
alberto.bemporad@imtlucca.it
Panagiotis Patrinos
2017-01-26T14:11:24Z
2017-01-26T14:11:24Z
http://eprints.imtlucca.it/id/eprint/3640
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3640
2017-01-26T14:11:24Z
Spatial-based predictive control and geometric corridor planning for adaptive cruise control coupled with obstacle avoidance
M. Graf Plessen
Daniele Bernardini
Hasan Esen
Alberto Bemporad
alberto.bemporad@imtlucca.it
2017-01-24T13:21:02Z
2017-08-28T15:36:22Z
http://eprints.imtlucca.it/id/eprint/3638
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3638
2017-01-24T13:21:02Z
Optimal energy management of a small-size building via hybrid model predictive control
This paper presents the design of a Model Predictive Control (MPC) scheme to optimally manage the thermal and electrical subsystems of a small-size building (“smart house”), with the objective of minimizing the expense for buying energy from the grid, while keeping the room temperature within given time-varying bounds. The system, for which an experimental prototype has been built, includes PV panels, solar collectors, a battery pack, an electrical heater in a thermal storage tank, and two pumps on the solar collector and radiator hydraulic circuits. The presence of binary control inputs together with continuous ones naturally leads to using a hybrid dynamical model, and the MPC controller solves a mixed-integer linear program at each sampling instant, relying on weather forecast data for ambient temperature and solar irradiance. The procedure for controller design is reported with focus on the specific application, and the proposed method is successfully tested on the experimental site.
Albina Khakimova
Aliya Kusatayeva
Akmaral Shamshimova
Dana Sharipova
Alberto Bemporad
alberto.bemporad@imtlucca.it
Yakov Familiant
Almas Shintemirov
Viktor Ten
Matteo Rubagotti
2017-01-24T13:14:41Z
2017-01-24T13:14:41Z
http://eprints.imtlucca.it/id/eprint/3637
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3637
2017-01-24T13:14:41Z
From linear to nonlinear MPC: bridging the gap via the real-time iteration
Linear model predictive control (MPC) can be currently deployed at outstanding speeds, thanks to recent progress in algorithms for solving online the underlying structured quadratic programs. In contrast, nonlinear MPC (NMPC) requires the deployment of more elaborate algorithms, which require longer computation times than linear MPC. Nonetheless, computational speeds for NMPC comparable to those of MPC are now regularly reported, provided that the adequate algorithms are used. In this paper, we aim at clarifying the similarities and differences between linear MPC and NMPC. In particular, we focus our analysis on NMPC based on the real-time iteration (RTI) scheme, as this technique has been successfully tested and, in some applications, requires computational times that are only marginally larger than linear MPC. The goal of the paper is to promote the understanding of RTI-based NMPC within the linear MPC community.
Sébastien Gros
Mario Zanon
Rien Quirynen
Alberto Bemporad
alberto.bemporad@imtlucca.it
Moritz Diehl
2017-01-24T13:10:56Z
2017-01-24T13:16:05Z
http://eprints.imtlucca.it/id/eprint/3636
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3636
2017-01-24T13:10:56Z
A Lyapunov method for stability analysis of piecewise-affine systems over non-invariant domains
This paper analyses stability of discrete-time piecewise-affine systems, defined on possibly non-invariant domains, taking into account the possible presence of multiple dynamics in each of the polytopic regions of the system. An algorithm based on linear programming is proposed, in order to prove exponential stability of the origin and to find a positively invariant estimate of its region of attraction. The results are based on the definition of a piecewise-affine Lyapunov function, which is in general discontinuous on the boundaries of the regions. The proposed method is proven to lead to feasible solutions in a broader range of cases as compared to a previously proposed approach. Two numerical examples are shown, among which a case where the proposed method is applied to a closed-loop system, to which model predictive control was applied without a-priori guarantee of stability.
Matteo Rubagotti
Luca Zaccarian
Alberto Bemporad
alberto.bemporad@imtlucca.it
2017-01-24T13:07:45Z
2017-01-24T13:07:45Z
http://eprints.imtlucca.it/id/eprint/3635
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3635
2017-01-24T13:07:45Z
Optimal distributed task scheduling in volunteer clouds
The ever increasing demand for computational resources has shifted the computing paradigm towards solutions where less computation is performed locally. The most widely adopted approach nowadays is cloud computing. With the cloud, users can transparently access virtually infinite resources with the same ease as any other utility. Next to the cloud, the volunteer computing paradigm has gained attention in the last decade, where the spare resources on each personal machine are shared thanks to the users’ willingness to cooperate. Cloud and volunteer paradigms have recently been seen as companion technologies to better exploit local resources. Conversely, this scenario poses complex challenges in managing such a large-scale environment, as the resources available on each node and the online presence of the nodes are not known a priori. The complexity further increases in the presence of tasks that have an associated Service Level Agreement, specified e.g. through a deadline. Distributed management solutions have thus been advocated as the only approaches that are realistically applicable. In this paper, we propose a framework to allocate tasks according to different policies, defined by suitable optimization problems. Then, we provide a distributed optimization approach relying on the Alternating Direction Method of Multipliers (ADMM) for one of these policies, and we compare it with a centralized approach. Results show that, when a centralized approach cannot be adopted in a real environment, one can rely on the good suboptimal solutions found by the ADMM.
Stefano Sebastio
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Alberto Bemporad
alberto.bemporad@imtlucca.it
2016-11-30T10:37:34Z
2016-11-30T10:37:34Z
http://eprints.imtlucca.it/id/eprint/3606
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3606
2016-11-30T10:37:34Z
Machine Learning for Plant Phenotyping Needs Image Processing
We found the article by Singh et al. [1] extremely interesting because it introduces and showcases the utility of machine learning for high-throughput data-driven plant phenotyping. With this letter we aim to emphasize the role that image analysis and processing have in the phenotyping pipeline beyond what is suggested in [1], both in analyzing phenotyping data (e.g., to measure growth) and when providing effective feature extraction to be used by machine learning. Key recent reviews have shown that it is image analysis itself (what the authors of [1] consider as part of pre-processing) that has brought a renaissance in phenotyping [2].
Sotirios A. Tsaftaris
Massimo Minervini
massimo.minervini@imtlucca.it
Hanno Scharr
2016-10-10T15:08:08Z
2016-10-10T15:08:08Z
http://eprints.imtlucca.it/id/eprint/3583
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3583
2016-10-10T15:08:08Z
Real-time model predictive control based on dual gradient projection: Theory and fixed-point FPGA implementation
This paper proposes a method to design robust model predictive control (MPC) laws for discrete-time linear systems with hard mixed constraints on states and inputs, for the case in which only an inexact solution of the associated quadratic program is available because of real-time requirements. By using a recently proposed dual gradient-projection algorithm, it is proved that the discrepancy between the optimal control law and the obtained one is bounded even if the solver is implemented in fixed-point arithmetic. By defining an alternative MPC problem with tightened constraints, a feasible solution is obtained for the original MPC problem, which guarantees recursive feasibility and asymptotic stability of the closed-loop system with respect to a set including the origin, also in the presence of external disturbances. The proposed MPC law is implemented on a field-programmable gate array in order to show the practical applicability of the method.
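The gradient-projection iteration underlying the solver can be sketched in a few lines. This minimal version applies it directly to a box-constrained QP in floating point, whereas the paper works on the dual problem and in fixed-point arithmetic on an FPGA; the problem data are illustrative.

```python
import numpy as np

# Gradient projection for a box-constrained QP:
# minimize 0.5 x'Hx + f'x subject to lb <= x <= ub.
H = np.array([[2.0, 0.0], [0.0, 2.0]])  # Hessian (positive definite)
f = np.array([-2.0, -6.0])
lb, ub = np.zeros(2), 2.0 * np.ones(2)

L = np.linalg.eigvalsh(H).max()  # Lipschitz constant of the gradient
x = np.zeros(2)
for _ in range(100):
    # gradient step followed by projection onto the box
    x = np.clip(x - (1.0 / L) * (H @ x + f), lb, ub)
# the unconstrained minimizer is (1, 3); the box clips it to (1, 2)
```

Each iteration costs one matrix-vector product and a clipping operation, which is precisely what makes the scheme attractive for fixed-point embedded implementations.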
Matteo Rubagotti
Panagiotis Patrinos
Alberto Guiggiani
Alberto Bemporad
alberto.bemporad@imtlucca.it
2016-10-06T09:14:53Z
2016-10-06T09:16:01Z
http://eprints.imtlucca.it/id/eprint/3560
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3560
2016-10-06T09:14:53Z
Multiparty Testing Preorders
Variants of the must testing approach have been successfully applied in Service Oriented Computing for analysing the compliance between (contracts exposed by) clients and servers or, more generally, between two peers. It has however been argued that multiparty scenarios call for more permissive notions of compliance because partners usually do not have full coordination capabilities. We propose two new testing preorders, which are obtained by restricting the set of potential observers. For the first preorder, called uncoordinated, we allow only sets of parallel observers that use different parts of the interface of a given service and have no possibility of intercommunication. For the second preorder, that we call independent, we instead rely on parallel observers that perceive as silent all the actions that are not in the interface of interest. We have that the uncoordinated preorder is coarser than the classical must testing preorder and finer than the independent one. We also provide a characterisation in terms of decorated traces for both preorders: the uncoordinated preorder is defined in terms of must-sets and Mazurkiewicz traces while the independent one is described in terms of must-sets and classes of filtered traces that only contain designated visible actions.
Rocco De Nicola
r.denicola@imtlucca.it
Hernán Melgratti
2016-10-04T09:40:34Z
2016-10-04T09:40:34Z
http://eprints.imtlucca.it/id/eprint/3547
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3547
2016-10-04T09:40:34Z
A hierarchical consensus method for the approximation of the consensus state, based on clustering and spectral graph theory
A hierarchical method for the approximate computation of the consensus state of a network of agents is investigated. The method is motivated theoretically by spectral graph theory arguments. In the first phase, the graph is divided into a number of subgraphs with good spectral properties, i.e., a fast convergence toward the local consensus state of each subgraph. To find the subgraphs, suitable clustering methods are used. Then, an auxiliary graph is considered to determine the final approximation of the consensus state in the original network. A theoretical investigation is performed of the cases for which the hierarchical consensus method has a better performance guarantee than the non-hierarchical one (i.e., it requires a smaller number of iterations to guarantee a desired accuracy in the approximation of the consensus state of the original network). Moreover, numerical results demonstrate the effectiveness of the hierarchical consensus method for several case studies modeling real-world networks.
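The non-hierarchical baseline the paper compares against is the plain consensus iteration x ← Wx, whose convergence speed is governed by the spectral gap of the weight matrix W, which is the link to spectral graph theory. A minimal sketch on a 3-node path graph with Metropolis weights (an illustrative choice, not taken from the paper):

```python
import numpy as np

# Plain consensus iteration x <- W x on the path graph 0 - 1 - 2.
# Metropolis weights: w_ij = 1 / (1 + max(d_i, d_j)) for each edge,
# which makes W doubly stochastic, so states converge to the average.
W = np.array([[2/3, 1/3, 0.0],
              [1/3, 1/3, 1/3],
              [0.0, 1/3, 2/3]])

x = np.array([0.0, 3.0, 6.0])  # initial agent states; average is 3
for _ in range(200):
    x = W @ x
# every agent's state is now (numerically) the consensus value 3
```

The second-largest eigenvalue of this W is 2/3, so the error shrinks by that factor per iteration; the hierarchical method aims to obtain faster effective rates by running consensus on well-conditioned subgraphs first.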
Rita Morisi
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Alberto Bemporad
alberto.bemporad@imtlucca.it
2016-10-04T08:56:19Z
2016-10-04T08:56:19Z
http://eprints.imtlucca.it/id/eprint/3545
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3545
2016-10-04T08:56:19Z
Piecewise affine regression via recursive multiple least squares and multicategory discrimination
In nonlinear regression, choosing an adequate model structure is often a challenging problem. While simple models (such as linear functions) may not be able to capture the underlying relationship among the variables, over-parametrized models described by a large set of nonlinear basis functions tend to overfit the training data, leading to poor generalization on unseen data. Piecewise-affine (PWA) models can describe nonlinear and possibly discontinuous relationships while maintaining simple local affine regressor-to-output mappings, with extreme flexibility when the polyhedral partitioning of the regressor space is learned from data rather than fixed a priori. In this paper, we propose a novel and numerically very efficient two-stage approach for PWA regression based on a combined use of (i) recursive multi-model least-squares techniques for clustering and fitting linear functions to data, and (ii) linear multi-category discrimination, either offline (batch) via a Newton-like algorithm for computing a solution of unconstrained optimization problems with objective functions having a piecewise smooth gradient, or online (recursive) via averaged stochastic gradient descent.
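The multi-model idea can be illustrated with a deliberately crude alternation: assign each sample to the affine submodel with the smallest residual, then refit each submodel by batch least squares. The data (y = |x|, two affine pieces) and the initialization are made up, and the paper's actual scheme is recursive and adds a separate multicategory discrimination step to learn the polyhedral partition.

```python
import numpy as np

# Toy two-stage PWA regression: alternate (i) assigning samples to the
# best-fitting affine submodel and (ii) refitting each submodel.
x = np.linspace(-1.0, 1.0, 21)
y = np.abs(x)                                # two affine pieces
A = np.column_stack([x, np.ones_like(x)])    # affine regressors

models = [np.array([0.5, 0.1]), np.array([-0.5, -0.1])]  # (slope, intercept)
for _ in range(10):
    # residual of every sample under every submodel
    res = np.abs(np.column_stack([A @ m for m in models]) - y[:, None])
    label = np.argmin(res, axis=1)
    for j in range(2):  # refit each submodel on its assigned samples
        mask = label == j
        models[j] = np.linalg.lstsq(A[mask], y[mask], rcond=None)[0]

slopes = sorted(m[0] for m in models)  # recovers roughly -1 and +1
```

On this noise-free example the alternation recovers the two branches of y = |x| exactly; with noisy data and unknown partitions, the recursive least-squares and discrimination machinery of the paper becomes essential.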
Valentina Breschi
Dario Piga
dario.piga@imtlucca.it
Alberto Bemporad
alberto.bemporad@imtlucca.it
2016-10-04T08:36:36Z
2016-10-04T08:36:36Z
http://eprints.imtlucca.it/id/eprint/3543
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3543
2016-10-04T08:36:36Z
From linear to nonlinear MPC: bridging the gap via the real-time iteration
Linear model predictive control (MPC) can be currently deployed at outstanding speeds, thanks to recent progress in algorithms for solving online the underlying structured quadratic programs. In contrast, nonlinear MPC (NMPC) requires the deployment of more elaborate algorithms, which require longer computation times than linear MPC. Nonetheless, computational speeds for NMPC comparable to those of MPC are now regularly reported, provided that the adequate algorithms are used. In this paper, we aim at clarifying the similarities and differences between linear MPC and NMPC. In particular, we focus our analysis on NMPC based on the real-time iteration (RTI) scheme, as this technique has been successfully tested and, in some applications, requires computational times that are only marginally larger than linear MPC. The goal of the paper is to promote the understanding of RTI-based NMPC within the linear MPC community.
Sébastien Gros
Mario Zanon
Rien Quirynen
Alberto Bemporad
alberto.bemporad@imtlucca.it
Moritz Diehl
2016-09-22T16:17:07Z
2016-09-22T16:17:07Z
http://eprints.imtlucca.it/id/eprint/3542
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3542
2016-09-22T16:17:07Z
Data Science and Complex Networks. Real Case Studies with Python
This book provides a comprehensive yet short description of the basic concepts of Complex Network theory. In contrast to other books, the authors present these concepts through real case studies. The application topics span from food webs to the Internet, the World Wide Web and social networks, passing through the International Trade Web and financial time series. The final part is devoted to the definition and implementation of the most important network models.
The text provides information on the structure of the data and on the quality of available datasets. Furthermore it provides a series of codes to allow immediate implementation of what is theoretically described in the book. Readers already used to the concepts introduced in this book can learn the art of coding in Python by using the online material. To this purpose the authors have set up a dedicated web site where readers can download and test the codes. The whole project is aimed as a learning tool for scientists and practitioners, enabling them to begin working instantly in the field of Complex Networks.
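As a tiny taste of the book's hands-on approach, degrees and a degree histogram for a small undirected network can be computed in plain Python; the edge list below is invented, and the book's companion code works with real datasets and dedicated libraries.

```python
# Degree and degree distribution of a small undirected network.
edges = [("A", "B"), ("A", "C"), ("A", "D"), ("B", "C"), ("D", "E")]

degree = {}
for u, v in edges:
    degree[u] = degree.get(u, 0) + 1
    degree[v] = degree.get(v, 0) + 1

# degree distribution: how many nodes have degree k
hist = {}
for k in degree.values():
    hist[k] = hist.get(k, 0) + 1
```

Here node "A" is the hub with degree 3, three nodes have degree 2, and one is a leaf; the same two loops scale to the million-edge networks studied in the case studies.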
Guido Caldarelli
guido.caldarelli@imtlucca.it
Alessandro Chessa
alessandro.chessa@imtlucca.it
2016-09-16T08:51:39Z
2016-09-16T10:26:17Z
http://eprints.imtlucca.it/id/eprint/3541
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3541
2016-09-16T08:51:39Z
Finely-grained annotated datasets for image-based plant phenotyping
Image-based approaches to plant phenotyping are gaining momentum, providing fertile ground for several interesting vision tasks where fine-grained categorization is necessary, such as leaf segmentation among a variety of cultivars, and cultivar (or mutant) identification. However, benchmark data focusing on typical imaging situations and vision tasks are still lacking, making it difficult to compare existing methodologies. This paper describes a collection of benchmark datasets of raw and annotated top-view color images of rosette plants. We briefly describe plant material, imaging setup and procedures for different experiments: one with various cultivars of Arabidopsis and one with tobacco undergoing different treatments. We proceed to define a set of computer vision and classification tasks and provide accompanying datasets and annotations based on our raw data. We describe the annotation process performed by experts and discuss appropriate evaluation criteria. We also offer exemplary use cases and results on some tasks obtained with parts of these data. We hope with the release of this rigorous dataset collection to invigorate the development of algorithms in the context of plant phenotyping but also to provide new interesting datasets for the general computer vision community to experiment on. Data are publicly available at http://www.plant-phenotyping.org/datasets.
Massimo Minervini
massimo.minervini@imtlucca.it
Andreas Fischbach
Hanno Scharr
Sotirios A. Tsaftaris
2016-09-08T06:40:19Z
2016-09-08T06:40:19Z
http://eprints.imtlucca.it/id/eprint/3526
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3526
2016-09-08T06:40:19Z
Static VS Dynamic Reversibility in CCS
The notion of reversible computing is attracting interest because of its applications in diverse fields, in particular the study of programming abstractions for fault tolerant systems. Reversible CCS (RCCS), proposed by Danos and Krivine, enacts reversibility by means of memory stacks. Ulidowski and Phillips proposed a general method to reverse a process calculus given in a particular SOS format, by exploiting the idea of making all the operators of a calculus static. CCSK is then derived from CCS with this method. In this paper we show that RCCS is at least as expressive as CCSK.
Doriana Medic
Claudio Antares Mezzina
claudio.mezzina@imtlucca.it
2016-08-31T08:47:26Z
2016-08-31T08:47:26Z
http://eprints.imtlucca.it/id/eprint/3523
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3523
2016-08-31T08:47:26Z
Polarized User and Topic Tracking in Twitter
Digital traces of conversations in micro-blogging platforms and OSNs provide information about user opinion with a high degree of resolution. These information sources can be exploited to understand and monitor collective behaviours. In this work, we focus on polarisation classes, i.e., those topics that require the user to side exclusively with one position. The proposed method provides an iterative classification of users and keywords: first, polarised users are identified, then polarised keywords are discovered by monitoring the activities of previously classified users. This method thus allows tracking users and topics over time. We report several experiments conducted on two Twitter datasets during political election time-frames. We measure the user classification accuracy on a golden set of users, and analyse the relevance of the extracted keywords for the ongoing political discussion.
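One round of the iterative user/keyword classification can be sketched on toy data: seed users with known polarity score the keywords they use, and the remaining users are then classified from the net score of their words. All users, posts and keywords below are invented for illustration; the paper's method iterates this process over time on real Twitter data.

```python
# Toy polarized user and keyword classification (one iteration).
posts = {
    "u1": ["tax", "border"], "u2": ["tax", "jobs"],   # seeds: side A
    "u3": ["climate", "health"], "u4": ["health"],    # seeds: side B
    "u5": ["border", "jobs"], "u6": ["climate"],      # unlabeled users
}
seeds = {"u1": "A", "u2": "A", "u3": "B", "u4": "B"}

# Step 1: polarity score of each keyword from the seed users
score = {}
for user, side in seeds.items():
    for word in posts[user]:
        score[word] = score.get(word, 0) + (1 if side == "A" else -1)

# Step 2: classify the remaining users by the net score of their words
labels = dict(seeds)
for user in posts:
    if user not in labels:
        s = sum(score.get(w, 0) for w in posts[user])
        labels[user] = "A" if s > 0 else "B"
```

Newly labeled users can then refine the keyword scores in the next round, which is what lets the method track drifting vocabulary during an election campaign.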
Mauro Coletto
mauro.coletto@imtlucca.it
Claudio Lucchese
Salvatore Orlando
Raffaele Perego
2016-08-29T09:08:06Z
2016-08-29T09:08:06Z
http://eprints.imtlucca.it/id/eprint/3522
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3522
2016-08-29T09:08:06Z
A computational method to simulate thermo-oxidative degradation phenomena of poly(ethylene-co-vinyl acetate) used in photovoltaics
Mariacristina Gagliardi
mariacristina.gagliardi@imtlucca.it
Pietro Lenarda
pietro.lenarda@imtlucca.it
Marco Paggi
marco.paggi@imtlucca.it
2016-07-13T09:55:44Z
2016-07-13T09:55:44Z
http://eprints.imtlucca.it/id/eprint/3517
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3517
2016-07-13T09:55:44Z
Statistical Analysis of Probabilistic Models of Software Product Lines with Quantitative Constraints
We investigate the suitability of statistical model checking for the analysis of probabilistic models of software product lines with complex quantitative constraints and advanced feature installation options. Such models are specified in the feature-oriented language QFLan, a rich process algebra whose operational behaviour interacts with a store of constraints, neatly separating product configuration from product behaviour. The resulting probabilistic configurations and behaviour converge seamlessly in a semantics based on DTMCs, thus enabling quantitative analyses ranging from the likelihood of certain behaviour to the expected average cost of products. This is supported by a Maude implementation of QFLan, integrated with the SMT solver Z3 and the distributed statistical model checker MultiVeStA. Our approach is illustrated with a bikes product line case study.
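The core of statistical model checking is Monte Carlo estimation: simulate the probabilistic model many times and average an indicator of the property. The sketch below estimates a reachability probability on a hypothetical two-state DTMC, not the bikes case study, and stands in for what MultiVeStA automates at scale.

```python
import random

# Estimate P(reach "fail" within 5 steps) for a two-state chain:
# "ok" -> "fail" with probability 0.1 per step, "fail" absorbing.
random.seed(0)

def run(steps=5, p_fail=0.1):
    state = "ok"
    for _ in range(steps):
        if state == "ok" and random.random() < p_fail:
            state = "fail"
    return state == "fail"

n = 10_000
estimate = sum(run() for _ in range(n)) / n
# exact value: 1 - 0.9**5 = 0.40951
```

With 10,000 runs the estimate lands within about one percentage point of the exact value; statistical model checkers additionally control the number of runs to meet a requested confidence interval.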
M.H. ter Beek
Axel Legay
Alberto Lluch-Lafuente
alberto.lluch@imtlucca.it
Andrea Vandin
andrea.vandin@imtlucca.it
2016-07-13T09:43:28Z
2016-07-13T09:43:28Z
http://eprints.imtlucca.it/id/eprint/3516
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3516
2016-07-13T09:43:28Z
Quantitative Abstractions for Collective Adaptive Systems
Collective adaptive systems (CAS) consist of a large number of possibly heterogeneous entities evolving according to local interactions that may operate across multiple scales in time and space. The adaptation to changes in the environment, as well as the highly dispersed decision-making process, often leads to emergent behaviour that cannot be understood by simply analysing the objectives, properties, and dynamics of the individual entities in isolation.
As with most complex systems, modelling is a phase of crucial importance for the design of new CAS or the understanding of existing ones. Elsewhere in this volume the typical workflow of formal modelling, analysis, and evaluation of a CAS has been illustrated in detail. In this chapter we treat the problem of efficiently analysing large-scale CAS for quantitative properties. We review algorithms to automatically reduce the dimensionality of a CAS model while preserving modeller-defined state variables, with a focus on descriptions based on systems of ordinary differential equations. We illustrate the theory in a tutorial fashion, with running examples and a number of more substantial case studies ranging from crowd dynamics to epidemiology and biological systems.
Andrea Vandin
andrea.vandin@imtlucca.it
Mirco Tribastone
mirco.tribastone@imtlucca.it
2016-05-26T10:47:34Z
2016-05-26T10:47:34Z
http://eprints.imtlucca.it/id/eprint/3493
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3493
2016-05-26T10:47:34Z
Software Engineering for Collective Autonomic Systems: The ASCENS Approach
Dhaminda B. Abeywickrama
Jacques Combaz
Vojtěch Horký
Jaroslav Keznikl
Andrea Vandin
andrea.vandin@imtlucca.it
Emil Vassev
Jan Kofroň
Alberto Lluch Lafuente
Michele Loreti
Andrea Margheri
Philip Mayer
Giacoma Valentina Monreale
Ugo Montanari
Carlo Pinciroli
Petr Tůma
2016-05-26T10:39:24Z
2016-05-26T10:39:24Z
http://eprints.imtlucca.it/id/eprint/3492
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3492
2016-05-26T10:39:24Z
Quantitative Analysis of Probabilistic Models of Software Product Lines with Statistical Model Checking
We investigate the suitability of statistical model checking techniques for analysing quantitative properties of software product line models with probabilistic aspects. For this purpose, we enrich the feature-oriented language FLan with action rates, which specify the likelihood of exhibiting particular behaviour or of installing features at a specific moment or in a specific order. The enriched language (called PFLan) allows us to specify models of software product lines with probabilistic configurations and behaviour, e.g. by considering a PFLan semantics based on discrete-time Markov chains. The Maude implementation of PFLan is combined with the distributed statistical model checker MultiVeStA to perform quantitative analyses of a simple product line case study. The presented analyses include the likelihood of certain behaviour of interest (e.g. product malfunctioning) and the expected average cost of products.
Maurice H. ter Beek
maurice.terbeek@isti.cnr.it
Axel Legay
Alberto Lluch Lafuente
Andrea Vandin
andrea.vandin@imtlucca.it
2016-05-26T10:06:19Z
2016-05-26T10:06:19Z
http://eprints.imtlucca.it/id/eprint/3491
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3491
2016-05-26T10:06:19Z
Modelling and Analyzing Adaptive Self-assembly Strategies with Maude
Building adaptive systems with predictable emergent behavior is a challenging task and it is becoming a critical need. The research community has accepted the challenge by introducing approaches of various nature: from software architectures, to programming paradigms, to analysis techniques. We recently proposed a conceptual framework for adaptation centered around the role of control data. In this paper we show that it can be naturally realized in a reflective logical language like Maude by using the Reflective Russian Dolls model. Moreover, we exploit this model to specify and analyse a prominent example of an adaptive system: robot swarms equipped with obstacle-avoidance self-assembly strategies. The analysis exploits the statistical model checker PVeSTA.
Roberto Bruni
Andrea Corradini
Fabio Gadducci
Alberto Lluch Lafuente
Andrea Vandin
andrea.vandin@imtlucca.it
2016-05-23T11:46:35Z
2016-05-23T11:46:35Z
http://eprints.imtlucca.it/id/eprint/3490
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3490
2016-05-23T11:46:35Z
Simple outlier labeling based on quantile regression, with application to the steelmaking process
This paper introduces some methods for outlier identification in the regression setting, motivated by the analysis of steelmaking process data. The proposed methodology extends to the regression setting the boxplot rule, commonly used for outlier screening with univariate data. The focus here is on bivariate settings with a single covariate, but extensions are possible. The proposal is based on quantile regression, including an additional transformation parameter for selecting the best scale for linearity of the conditional quantiles. The resulting method is used to perform effective labeling of potential outliers, with a quite low computational complexity, allowing for simple implementation within statistical software as well as commonly used spreadsheets. Some simulation experiments have been carried out to study the swamping and masking properties of the proposal. The methodology is also illustrated by some real life examples, taking as the response variable the energy consumed in the melting process.
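The univariate boxplot rule that the paper generalizes can be mimicked in a crude, binned form: split the covariate range into bins and apply the quartile fences within each bin. This is only a rough stand-in for the paper's quantile-regression fences; the function name, bin count, and the conventional 1.5 multiplier are illustrative choices.

```python
from statistics import quantiles

def binned_boxplot_outliers(x, y, n_bins=3, k=1.5):
    """Flag indices of (x, y) points whose y falls outside the quartile
    fences [Q1 - k*IQR, Q3 + k*IQR] computed within each covariate bin
    (a crude, binned approximation of conditional-quantile fences)."""
    lo, hi = min(x), max(x)
    width = (hi - lo) / n_bins or 1.0
    bins = {}
    for i, (xi, yi) in enumerate(zip(x, y)):
        b = min(int((xi - lo) / width), n_bins - 1)
        bins.setdefault(b, []).append((i, yi))
    flagged = set()
    for pts in bins.values():
        ys = [yi for _, yi in pts]
        if len(ys) < 4:
            continue  # too few points to estimate quartiles reliably
        q1, _, q3 = quantiles(ys, n=4)
        iqr = q3 - q1
        for i, yi in pts:
            if yi < q1 - k * iqr or yi > q3 + k * iqr:
                flagged.add(i)
    return flagged
```

Replacing the per-bin quartiles with fitted conditional quantile curves, as the paper does, removes the dependence on an arbitrary binning.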
Ruggero Bellio
Mauro Coletto
mauro.coletto@imtlucca.it
2016-05-23T09:52:51Z
2016-05-23T09:52:51Z
http://eprints.imtlucca.it/id/eprint/3489
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3489
2016-05-23T09:52:51Z
Electoral predictions with Twitter: a machine-learning approach
Several studies have shown how to approximately predict public opinion, such as in political elections, by analyzing user activities on blogging platforms and online social networks. The task is challenging for several reasons: sample bias and automatic understanding of textual content are two of several non-trivial issues. In this work we study how Twitter can provide interesting insights concerning the primary elections of an Italian political party. State-of-the-art approaches rely on indicators based on tweet and user volumes, often including sentiment analysis. We investigate how to exploit and improve those indicators in order to reduce the bias of the Twitter user sample. We propose novel indicators and a novel content-based method. Furthermore, we study how a machine learning approach can learn correction factors for those indicators. Experimental results on Twitter data support the validity of the proposed methods and their improvement over the state of the art.
Mauro Coletto
mauro.coletto@imtlucca.it
Claudio Lucchese
Salvatore Orlando
Raffaele Perego
2016-05-23T09:41:18Z
2016-05-23T09:41:18Z
http://eprints.imtlucca.it/id/eprint/3488
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3488
2016-05-23T09:41:18Z
Misinformation in the loop: the emergence of narratives in Online Social Networks
The interlink between information and belief formation and revision is a fundamental aspect of social dynamics. The growth of knowledge fostered by a hyper-connected world, together with the unprecedented acceleration of scientific progress, has exposed individuals, governments and countries to an increasing level of complexity in explaining reality and its phenomena. Despite the enthusiastic rhetoric about so-called collective intelligence, conspiracy theories and other unsubstantiated claims find on the Web a natural medium for their diffusion. Cases in which these kinds of false information are used in political debates are far from unimaginable. In this work, we study the behavior of users supporting different (and opposite) worldviews – i.e. scientific and conspiracist thinking – who commented on the posts of the Facebook page of a large Italian political party that advocates direct democracy and e-Participation. We find that users supporting different narratives consume political information in a similar way. Moreover, by analyzing the composition of users active on the page in terms of commenting activity, we notice that almost one fifth of them are polarized consumers of conspiracy stories, and that these generate almost one third of the total comments on the page's posts.
Alessandro Bessi
Mauro Coletto
mauro.coletto@imtlucca.it
George Alexandru Davidescu
Antonio Scala
Walter Quattrociocchi
walter.quattrociocchi@imtlucca.it
2016-05-05T13:57:04Z
2016-05-05T13:57:04Z
http://eprints.imtlucca.it/id/eprint/3481
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3481
2016-05-05T13:57:04Z
Classification-aware distortion metric for HEVC intra coding
Increasingly many vision applications necessitate the transmission of acquired images and video to a remote location for automated processing. When the image data are consumed by analysis algorithms and possibly never seen by a human, tailoring compression to the application is beneficial from a bit rate perspective. We inject prior knowledge of the application in the encoder to make rate-distortion decisions based on an estimate of the accuracy that will be achieved when analyzing reconstructed image data. Focusing on classification (e.g., used for image segmentation), we propose a new application-aware distortion metric based on a geometric interpretation of classification error. We devise an implementation for the High Efficiency Video Coding standard, and derive optimal model parameters for the λ-domain rate control algorithm by curve fitting procedures. We evaluate our approach on time-lapse sequences from plant phenotyping experiments and cell fluorescence microscopy encoded in intra-only mode, observing a reduction in segmentation error across bit rates.
Massimo Minervini
massimo.minervini@imtlucca.it
Sotirios A. Tsaftaris
sotirios.tsaftaris@imtlucca.it
2016-04-19T07:48:41Z
2016-04-19T09:06:15Z
http://eprints.imtlucca.it/id/eprint/3346
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3346
2016-04-19T07:48:41Z
Reversibility in the higher-order π-calculus
The notion of reversible computation is attracting increasing interest because of its applications in diverse fields, in particular the study of programming abstractions for reliable systems. In this paper, we continue the study undertaken by Danos and Krivine on reversible CCS by defining a reversible higher-order π-calculus, called rhoπ. We prove that reversibility in our calculus is causally consistent and that the causal information used to support reversibility in rhoπ is consistent with the one used in the causal semantics of the π-calculus developed by Boreale and Sangiorgi. Finally, we show that one can faithfully encode rhoπ into a variant of higher-order π, substantially improving on the result we obtained in the conference version of this paper.
Ivan Lanese
Claudio Antares Mezzina
claudio.mezzina@imtlucca.it
Jean-Bernard Stefani
2016-04-13T09:40:21Z
2016-04-13T09:40:21Z
http://eprints.imtlucca.it/id/eprint/3444
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3444
2016-04-13T09:40:21Z
Scaling Size and Parameter Spaces in Variability-Aware Software Performance Models (T)
In software performance engineering, what-if scenarios, architecture optimization, capacity planning, run-time adaptation, and uncertainty management of realistic models typically require the evaluation of many instances. Effective analysis is however hindered by two orthogonal sources of complexity. The first is the infamous problem of state space explosion — the analysis of a single model becomes intractable with its size. The second is due to massive parameter spaces to be explored, but such that computations cannot be reused across model instances. In this paper, we efficiently analyze many queuing models with the distinctive feature of more accurately capturing variability and uncertainty of execution rates by incorporating general (i.e., non-exponential) distributions. Applying product-line engineering methods, we consider a family of models generated by a core that evolves into concrete instances by applying simple delta operations affecting both the topology and the model's parameters. State explosion is tackled by turning to a scalable approximation based on ordinary differential equations. The entire model space is analyzed in a family-based fashion, i.e., at once using an efficient symbolic solution of a super-model that subsumes every concrete instance. Extensive numerical tests show that this is orders of magnitude faster than a naive instance-by-instance analysis.
Matthias Kowal
Max Tschaikowski
max.tschaikowski@imtlucca.it
Mirco Tribastone
mirco.tribastone@imtlucca.it
Ina Schaefer
2016-04-13T09:23:42Z
2016-04-13T09:23:42Z
http://eprints.imtlucca.it/id/eprint/3441
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3441
2016-04-13T09:23:42Z
Forward and Backward Bisimulations for Chemical Reaction Networks
We present two quantitative behavioral equivalences over species of a chemical reaction network (CRN) with semantics based on ordinary differential equations. Forward CRN bisimulation identifies a partition where each equivalence class represents the exact sum of the concentrations of the species belonging to that class. Backward CRN bisimulation relates species that have identical solutions at all time points when starting from the same initial conditions. Both notions can be checked using only CRN syntactical information, i.e., by inspection of the set of reactions. We provide a unified algorithm that computes the coarsest refinement up to our bisimulations in polynomial time. Further, we give algorithms to compute quotient CRNs induced by a bisimulation. As an application, we find significant reductions in a number of models of biological processes from the literature. In two cases we allow the analysis of benchmark models which would otherwise be intractable due to their memory requirements.
Luca Cardelli
Mirco Tribastone
mirco.tribastone@imtlucca.it
Max Tschaikowski
max.tschaikowski@imtlucca.it
Andrea Vandin
andrea.vandin@imtlucca.it
2016-04-13T08:40:24Z
2016-04-13T08:40:24Z
http://eprints.imtlucca.it/id/eprint/3437
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3437
2016-04-13T08:40:24Z
Efficient Syntax-Driven Lumping of Differential Equations
We present an algorithm to compute exact aggregations of a class of systems of ordinary differential equations (ODEs). Our approach consists in an extension of Paige and Tarjan's seminal solution to the coarsest refinement problem by encoding an ODE system into a suitable discrete-state representation. In particular, we consider a simple extension of the syntax of elementary chemical reaction networks because (i) it can express ODEs with derivatives given by polynomials of degree at most two, which are relevant in many applications in natural sciences and engineering; and (ii) we can build on two recently introduced bisimulations, which yield two complementary notions of ODE lumping. Our algorithm computes the largest bisimulations in O(r·s·log s) time, where r is the number of monomials and s is the number of variables in the ODEs. Numerical experiments on real-world models from biochemistry, electrical engineering, and structural mechanics show that our prototype is able to handle ODEs with millions of variables and monomials, providing significant model reductions.
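The coarsest-refinement idea at the heart of the algorithm can be illustrated on a discrete toy: repeatedly split the blocks of a partition until all states in a block agree on which blocks their successors fall into. This naive sketch ignores the paper's ODE encoding and the bookkeeping that yields the O(r·s·log s) bound; the graph and function name are invented for the example.

```python
def coarsest_refinement(succ, initial_blocks):
    """Split blocks of a partition until every state in a block has the
    same tuple of successor blocks. A naive refinement loop; Paige and
    Tarjan's algorithm achieves the same fixpoint far more efficiently."""
    partition = [set(b) for b in initial_blocks]
    changed = True
    while changed:
        changed = False
        block_of = {s: i for i, b in enumerate(partition) for s in b}
        new_partition = []
        for block in partition:
            # signature of a state: sorted indices of its successor blocks
            sig = {}
            for s in block:
                key = tuple(sorted(block_of[t] for t in succ[s]))
                sig.setdefault(key, set()).add(s)
            if len(sig) > 1:
                changed = True  # the block was split
            new_partition.extend(sig.values())
        partition = new_partition
    return partition
```

On the four-state graph `{0: [1], 1: [], 2: [1], 3: [0]}`, starting from the trivial one-block partition, the loop separates the deadlocked state, then the state that can only reach non-deadlocked states, stabilising at the blocks {0, 2}, {3}, {1}.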
Luca Cardelli
Mirco Tribastone
mirco.tribastone@imtlucca.it
Max Tschaikowski
max.tschaikowski@imtlucca.it
Andrea Vandin
andrea.vandin@imtlucca.it
2016-04-13T08:31:40Z
2016-04-13T08:31:40Z
http://eprints.imtlucca.it/id/eprint/3436
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3436
2016-04-13T08:31:40Z
Noise Reduction in Complex Biological Switches
Cells operate in noisy molecular environments via complex regulatory networks. It is possible to understand how molecular counts are related to noise in specific networks, but it is not generally clear how noise relates to network complexity, because different levels of complexity also imply different overall number of molecules. For a fixed function, does increased network complexity reduce noise, beyond the mere increase of overall molecular counts? If so, complexity could provide an advantage counteracting the costs involved in maintaining larger networks. For that purpose, we investigate how noise affects multistable systems, where a small amount of noise could lead to very different outcomes; thus we turn to biochemical switches. Our method for comparing networks of different structure and complexity is to place them in conditions where they produce exactly the same deterministic function. We are then in a good position to compare their noise characteristics relatively to their identical deterministic traces. We show that more complex networks are better at coping with both intrinsic and extrinsic noise. Intrinsic noise tends to decrease with complexity, and extrinsic noise tends to have less impact. Our findings suggest a new role for increased complexity in biological networks, at parity of function.
Luca Cardelli
Attila Csikász-Nagy
Neil Dalchau
Mirco Tribastone
mirco.tribastone@imtlucca.it
Max Tschaikowski
max.tschaikowski@imtlucca.it
2016-04-13T08:26:32Z
2016-04-13T08:26:32Z
http://eprints.imtlucca.it/id/eprint/3435
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3435
2016-04-13T08:26:32Z
Symbolic Computation of Differential Equivalences
Ordinary differential equations (ODEs) are widespread in many natural sciences including chemistry, ecology, and systems biology, and in disciplines such as control theory and electrical engineering. Building on the celebrated molecules-as-processes paradigm, they have become increasingly popular in computer science, with high-level languages and formal methods such as Petri nets, process algebra, and rule-based systems that are interpreted as ODEs. We consider the problem of comparing and minimizing ODEs automatically. Influenced by traditional approaches in the theory of programming, we propose differential equivalence relations. We study them for a basic intermediate language, for which we have decidability results, that can be targeted by a class of high-level specifications. An ODE implicitly represents an uncountable state space, hence reasoning techniques cannot be borrowed from established domains such as probabilistic programs with finite-state Markov chain semantics. We provide novel symbolic procedures to check an equivalence and compute the largest one via partition refinement algorithms that use satisfiability modulo theories. We illustrate the generality of our framework by showing that differential equivalences include (i) well-known notions for the minimization of continuous-time Markov chains (lumpability), (ii) bisimulations for chemical reaction networks recently proposed by Cardelli et al., and (iii) behavioral relations for process algebra with ODE semantics. With a prototype implementation we are able to detect equivalences in biochemical models from the literature that cannot be reduced using competing automatic techniques.
Luca Cardelli
Mirco Tribastone
mirco.tribastone@imtlucca.it
Max Tschaikowski
max.tschaikowski@imtlucca.it
Andrea Vandin
andrea.vandin@imtlucca.it
2016-04-13T08:14:47Z
2016-04-13T08:14:47Z
http://eprints.imtlucca.it/id/eprint/3433
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3433
2016-04-13T08:14:47Z
Comparing Chemical Reaction Networks: A Categorical and Algorithmic Perspective
We study chemical reaction networks (CRNs) as a kernel language for concurrency models with semantics based on ordinary differential equations. We investigate the problem of comparing two CRNs, i.e., to decide whether the trajectories of a source CRN can be matched by a target CRN under an appropriate choice of initial conditions. Using a categorical framework, we extend and relate model-comparison approaches based on structural (syntactic) and on dynamical (semantic) properties of a CRN, proving their equivalence. Then, we provide an algorithm to compare CRNs, running linearly in time with respect to the cardinality of all possible comparisons. Finally, we apply our results to biological models from the literature.
Luca Cardelli
Mirco Tribastone
mirco.tribastone@imtlucca.it
Max Tschaikowski
max.tschaikowski@imtlucca.it
Andrea Vandin
andrea.vandin@imtlucca.it
2016-03-21T10:55:23Z
2016-03-21T10:55:23Z
http://eprints.imtlucca.it/id/eprint/3259
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3259
2016-03-21T10:55:23Z
Computational models for the in silico analysis of drug delivery from drug-eluting stents
Stents are tubular meshed devices implanted to restore the patency of a vessel occluded by an atherosclerotic plaque. These devices were introduced into clinical practice in the 1980s. The first stent implanted in a human coronary artery was the Wallstent [1], a self-expandable metallic device. The use of a stent to expand the vessel was introduced to overcome the main limitation of angioplasty, the elastic recoil of the vessel wall, yet it also caused the onset of a different pathology: intra-stent restenosis. This pathology results from injuries to the vessel wall after balloon inflation, as well as from the different fluid-dynamic regime established after stent implantation [2]. Intra-stent restenosis is caused by the abnormal growth of tissue within the stent meshes, leading to implant failure.
The common therapeutic approach to limit hyperplasia is the systemic administration of antimitotic and anti-inflammatory drugs. This treatment generally fails because effective dosing levels have toxic effects on patients. Since 2000, a new class of stents has been introduced to redress this problem: drug-eluting stents (DES), devices loaded with one or more active principles for the local administration of the drug, avoiding the systemic administration of massive doses. DES are metallic devices impregnated with a drug on their surface or coated with a thin polymeric layer containing the active principle.
Mariacristina Gagliardi
mariacristina.gagliardi@imtlucca.it
2016-03-21T09:12:52Z
2016-03-22T11:08:58Z
http://eprints.imtlucca.it/id/eprint/3244
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3244
2016-03-21T09:12:52Z
Experimental and computational study of mechanical and transport properties of a polymer coating for drug-eluting stents
Background: Experimental and computational characterizations in the preclinical development of biomedical devices are complementary and can significantly help in a thorough analysis of performance before clinical evaluation. Methodology: Here the mechanical and drug delivery properties of a polymer platform, prepared ad hoc to obtain coatings for drug-eluting stents, are reported; the polymer formulation and starting drug loading were varied to study the behavior of the platform, and a finite element model was constructed starting from experimental data. Results: Different platform formulations affected mechanical and drug transport properties, which can therefore be fine-tuned by varying the starting platform formulation. Finite element analysis allowed visualizing drug distribution maps over time in biological tissues for different commercial stents and polymer platform formulations.
Mariacristina Gagliardi
mariacristina.gagliardi@imtlucca.it
2016-03-21T08:41:12Z
2016-03-21T08:41:12Z
http://eprints.imtlucca.it/id/eprint/3240
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3240
2016-03-21T08:41:12Z
Parcellation-based connectome assessment by using structural and functional connectivity
Connectome analysis of the human brain structural and functional architecture provides a unique opportunity to understand the organization of brain networks. In this work, we investigate a novel large scale parcellation-based connectome, merging together information coming from resting state fMRI (rs-fMRI) data and diffusion tensor imaging (DTI) measurements.
Ying-Chia Lin
yingchia.lin@imtlucca.it
Tommaso Gili
Sotirios A. Tsaftaris
Andrea Gabrielli
Mariangela Iorio
Gianfranco Spalletta
Guido Caldarelli
guido.caldarelli@imtlucca.it
2016-03-21T08:41:04Z
2016-03-21T08:41:04Z
http://eprints.imtlucca.it/id/eprint/3239
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3239
2016-03-21T08:41:04Z
A cortical and sub-cortical parcellation clustering by intrinsic functional connectivity
Network analysis of resting-state fMRI (rsfMRI) has been widely utilized to investigate the functional architecture of the whole brain. Here we propose a robust parcellation method that first divides cortical and sub-cortical regions into sub-regions by clustering the rsfMRI data for each subject independently, and then merges those individual parcellations to obtain a global whole brain parcellation. To do so our method relies on majority voting (to merge parcellations of multiple subjects) and enforces spatial constraints within a hierarchical agglomerative clustering framework to define parcels that are spatially homogeneous.
Ying-Chia Lin
yingchia.lin@imtlucca.it
Tommaso Gili
Sotirios A. Tsaftaris
Andrea Gabrielli
Mariangela Iorio
Gianfranco Spalletta
Guido Caldarelli
guido.caldarelli@imtlucca.it
2016-03-14T13:02:02Z
2016-04-06T10:06:19Z
http://eprints.imtlucca.it/id/eprint/3223
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3223
2016-03-14T13:02:02Z
A Cortical and Sub-cortical Parcellation Clustering by Intrinsic Functional Connectivity
Network analysis of resting-state fMRI (rsfMRI) has been widely utilized to investigate the functional architecture of the whole brain. Such analysis can divide the brain into several discrete elements (nodes) connected by links (edges) representing the relation between two elements. The brain cortical and subcortical areas can be segmented or parcelled into several functional and/or structural regions. The connectome analysis of human-brain structure and functional connectivity provides a unique opportunity to understand the organisation of brain networks. However, such analyses require an appropriate definition of functional or structural nodes to efficiently represent cortical regions. In order to address this issue, here we propose a robust parcellation method based on resting-state fMRI, which can be generalized from the single-subject level to the multi-group one. Starting from the input data of a single subject, we construct multi-resolution graph elements and combine voting-based measurements to divide the cortical region into sub-regions, obtaining a whole-brain parcellation. Our parcellation relies on majority voting and enforces spatial constraints within a hierarchical agglomerative clustering framework to define parcels that are spatially homogeneous. We used rsfMRI data collected from 40 healthy subjects and showed that our proposed algorithm is able to compute stable and reproducible parcellations across the group of subjects at multiple resolution levels. We find that, even though previous methods ensure on average a larger overlap between parcels and regions of the AAL atlas, the method proposed herein reduces inter-subject variability, especially when the number of parcels increases. Our high-resolution parcels appear functionally more consistent and reliable, and can be a useful tool for future analyses aiming to match the functional and structural architecture of the brain.
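The majority-voting step can be pictured with a toy consensus computation: keep together only those voxel pairs that share a parcel in more than half of the subjects. This sketch omits the spatial constraints and the hierarchical agglomerative clustering of the actual method; the data layout, function name, and threshold are illustrative.

```python
from itertools import combinations

def consensus_pairs(parcellations, threshold=0.5):
    """Given per-subject parcellations (dicts mapping voxel -> parcel
    label), return the voxel pairs that share a parcel in more than
    `threshold` of the subjects (a toy majority-vote consensus)."""
    n = len(parcellations)
    voxels = sorted(parcellations[0])
    votes = {}
    for labels in parcellations:
        for i, j in combinations(voxels, 2):
            if labels[i] == labels[j]:
                votes[(i, j)] = votes.get((i, j), 0) + 1
    return {pair for pair, v in votes.items() if v / n > threshold}
```

In the full method, such co-occurrence evidence would feed an agglomerative clustering restricted to spatially adjacent voxels, so that the resulting parcels are contiguous.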
Ying-Chia Lin
yingchia.lin@imtlucca.it
Tommaso Gili
Sotirios A. Tsaftaris
sotirios.tsaftaris@imtlucca.it
Andrea Gabrielli
Mariangela Iorio
Gianfranco Spalletta
Guido Caldarelli
guido.caldarelli@imtlucca.it
2016-02-29T09:01:28Z
2016-03-01T10:41:59Z
http://eprints.imtlucca.it/id/eprint/3147
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3147
2016-02-29T09:01:28Z
On the approximation of the optimal control functions in stochastic optimal control problems
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Marcello Sanguineti
2016-02-29T09:01:04Z
2016-03-04T08:25:15Z
http://eprints.imtlucca.it/id/eprint/3148
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3148
2016-02-29T09:01:04Z
Symmetry and antisymmetry properties of optimal solutions to regression problems
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
2016-02-26T15:48:35Z
2016-03-04T08:26:07Z
http://eprints.imtlucca.it/id/eprint/3146
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3146
2016-02-26T15:48:35Z
A green policy to schedule tasks in a distributed cloud
Stefano Sebastio
stefano.sebastio@imtlucca.it
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
2016-02-26T15:47:40Z
2016-03-01T10:41:10Z
http://eprints.imtlucca.it/id/eprint/3145
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3145
2016-02-26T15:47:40Z
Binary and multi-class Parkinsonian disorders classification using Support Vector Machines with graph-based features
Rita Morisi
rita.morisi@imtlucca.it
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Nico Lanconelli
Stefano Zanigni
David Neil Manners
Claudia Testa
Stefania Evangelisti
Laura Ludovica Gramegna
Claudio Bianchini
Pietro Cortelli
Caterina Tonon
Raffaele Lodi
2016-02-26T15:44:37Z
2016-03-04T08:24:34Z
http://eprints.imtlucca.it/id/eprint/3144
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3144
2016-02-26T15:44:37Z
Optimal distributed task scheduling in volunteer clouds
Stefano Sebastio
stefano.sebastio@imtlucca.it
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Alberto Bemporad
alberto.bemporad@imtlucca.it
2016-02-26T15:43:29Z
2016-03-04T08:25:32Z
http://eprints.imtlucca.it/id/eprint/3143
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3143
2016-02-26T15:43:29Z
Transboundary pollution control and environmental absorption efficiency management
F. El Ouardighi
K. Kogan
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Marcello Sanguineti
2016-02-26T15:41:35Z
2016-10-04T09:03:37Z
http://eprints.imtlucca.it/id/eprint/3142
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3142
2016-02-26T15:41:35Z
Linear Quadratic Gaussian (LQG) online learning
Optimal control theory and machine learning techniques are combined to propose and solve in closed form an optimal control formulation of online learning from supervised examples. The connections with the classical Linear Quadratic Gaussian (LQG) optimal control problem, of which the proposed learning paradigm is a non-trivial variation as it involves random matrices, are investigated. The obtained optimal solutions are compared with the Kalman-filter estimate of the parameter vector to be learned. It is shown that the former enjoys larger smoothness and robustness to outliers, thanks to the presence of a regularization term. The basic formulation of the proposed online-learning framework refers to a discrete-time setting with a finite learning horizon and a linear model. Various extensions are investigated, including the infinite learning horizon and, via the so-called "kernel trick", the case of nonlinear models.
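The Kalman-filter baseline against which the optimal solutions are compared can be sketched in the simplest scalar case: estimating a constant parameter from noisy observations. The noise variance and initial values below are illustrative, and the paper's formulation, which involves random matrices, is considerably richer.

```python
def kalman_constant(observations, r, p0=1.0, x0=0.0):
    """Scalar Kalman filter estimating a constant parameter theta from
    noisy observations y_k = theta + v_k with Var(v_k) = r.
    State model: theta_{k+1} = theta_k (no process noise)."""
    x, p = x0, p0
    estimates = []
    for y in observations:
        k = p / (p + r)        # Kalman gain
        x = x + k * (y - x)    # measurement update
        p = (1 - k) * p        # error-covariance update
        estimates.append(x)
    return estimates
```

With no process noise and a diffuse prior (large `p0`), the recursion reduces to a recursively computed average of the observations; a regularization term of the kind discussed in the abstract would instead damp the filter's response to outlying observations.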
arXiv:1606.04272 [math.OC]
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Alberto Bemporad
alberto.bemporad@imtlucca.it
Marco Gori
Marcello Sanguineti
2016-02-26T15:40:04Z
2016-03-04T08:24:59Z
http://eprints.imtlucca.it/id/eprint/3141
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3141
2016-02-26T15:40:04Z
Symmetric and antisymmetric properties of solutions to kernel-based machine learning problems
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
2016-02-26T15:38:43Z
2016-03-01T10:42:48Z
http://eprints.imtlucca.it/id/eprint/3140
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3140
2016-02-26T15:38:43Z
Welfare effects of uniform and differential pricing schemes: an analysis through quadratic programming
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Fabio Pammolli
f.pammolli@imtlucca.it
Berna Tuncay
berna.tuncay@imtlucca.it
2016-02-26T15:28:19Z
2016-03-04T08:23:15Z
http://eprints.imtlucca.it/id/eprint/3137
2016-02-26T15:28:19Z
Congestion-aware forwarding strategies for intermittently connected networks
Marco Cello
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Mario Marchese
Marcello Sanguineti
2016-02-26T15:20:00Z
2016-02-26T15:20:00Z
http://eprints.imtlucca.it/id/eprint/3136
2016-02-26T15:20:00Z
Forwarding strategies for congestion control in intermittently connected networks
Marco Cello
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Mario Marchese
Marcello Sanguineti
2016-02-26T15:16:19Z
2016-02-26T15:16:19Z
http://eprints.imtlucca.it/id/eprint/3135
2016-02-26T15:16:19Z
A Machine-Learning Paradigm that Includes Pointwise Constraints
The classical framework of learning from examples is enhanced by the introduction of hard pointwise constraints, i.e., constraints on a finite set of examples that cannot be violated. They arise, e.g., when imposing coherent decisions of classifiers acting on different views of the same pattern. Constrained variational calculus is exploited to derive a representer theorem that provides a description of the functional structure of the solution. The general theory is applied to learning from hard linear pointwise constraints combined with classical supervised pairs and loss functions.
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Marco Gori
Stefano Melacci
Marcello Sanguineti
2016-02-26T15:08:57Z
2016-02-26T15:08:57Z
http://eprints.imtlucca.it/id/eprint/3134
2016-02-26T15:08:57Z
Average Packet Delivery Delay in Intermittently-Connected Networks
Delay/Disruption-Tolerant Networking (DTN) is addressed. It is a communication paradigm that enables communication over Intermittently-Connected Networks (ICNs), which are characterized by unpredictable or scheduled contacts among nodes, high latency, and high bit error rates. DTN exploits store-and-forward techniques in order to cope with intermittent link issues. A model is proposed to compute the average packet delivery delay in ICNs. We assume that the inter-meeting time as well as the contact time between any two nodes is an exponentially-distributed random variable. As a consequence, the behavior of the communication between each pair of nodes can be modeled by a two-state Continuous-Time Markov Chain (CTMC). It is assumed that the packet generation process at the source node follows a Poisson process, so the analysis can exploit the Poisson Arrivals See Time Averages (PASTA) property. Both the IP-like paradigm used by traditional TCP/IP protocols and DTN are considered. Numerical results and simulations are presented.
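As a rough illustration of the memorylessness argument (not the paper's CTMC model), one can simulate a single source-destination pair with exponential inter-contact times and Poisson packet generation; by PASTA, the mean store-and-forward wait should approach 1/λ. All names and parameters below are illustrative:

```python
import random

def mean_delivery_delay(contact_rate, packet_rate=1.0, horizon=20000.0, seed=0):
    """Monte Carlo estimate of the average delivery delay for one
    source-destination pair: contacts occur with exponential(contact_rate)
    inter-contact times, packets are generated by an independent Poisson
    process, and each packet is stored until the first contact after its
    generation (a minimal store-and-forward model)."""
    rng = random.Random(seed)
    contacts, t = [], 0.0
    while t < horizon:                      # contact epochs up to the horizon
        t += rng.expovariate(contact_rate)
        contacts.append(t)
    packets, t = [], 0.0
    while True:                             # packet generation epochs
        t += rng.expovariate(packet_rate)
        if t >= horizon:
            break
        packets.append(t)
    total, i = 0.0, 0
    for p in packets:                       # wait until the next contact
        while contacts[i] < p:
            i += 1
        total += contacts[i] - p
    return total / len(packets)
```

With contact rate λ = 0.5 the estimate comes out close to 1/λ = 2, consistent with the PASTA/memorylessness reasoning.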
Marco Cello
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Mario Marchese
Marcello Sanguineti
2016-02-26T15:06:47Z
2016-02-26T15:09:47Z
http://eprints.imtlucca.it/id/eprint/3133
2016-02-26T15:06:47Z
Supervised Learning from Regions and Box Kernels
A supervised learning paradigm is investigated, in which the data are represented by labeled regions of the input space. This learning model is motivated by real-world applications, such as problems of medical diagnosis and image categorization. The associated optimization framework entails the minimization of a functional obtained by introducing a loss function that involves the labeled regions. A regularization term expressed via differential operators, modeling smoothness properties of the desired input/output relationship, is included. It is shown that the optimization problem associated to supervised learning from regions has a unique solution, represented as a linear combination of kernel functions determined by the differential operators together with the regions themselves. The case of regions given by multi-dimensional intervals (i.e., “boxes”) is investigated as an interesting instance of learning from regions, which models prior knowledge expressed by logical propositions. The proposed approach covers as a particular case the classical learning context, which corresponds to the situation where regions degenerate to single points. Applications and numerical examples are discussed.
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Marco Gori
Stefano Melacci
Marcello Sanguineti
2016-02-26T15:00:45Z
2016-02-26T15:00:45Z
http://eprints.imtlucca.it/id/eprint/3132
2016-02-26T15:00:45Z
Evaluating flood hazard at the catchment scale via machine-learning techniques
Massimiliano Degiorgis
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Silvia Gorni
Rita Morisi
rita.morisi@imtlucca.it
Giorgio Roth
Marcello Sanguineti
Angela Celeste Taramasso
2016-02-26T14:43:14Z
2016-02-26T14:43:14Z
http://eprints.imtlucca.it/id/eprint/3131
2016-02-26T14:43:14Z
Dealing with mixed hard/soft constraints via Support constraint Machines
A learning paradigm is presented, which extends the classical framework of learning from examples by including hard pointwise constraints, i.e., constraints that cannot be violated. In applications, hard pointwise constraints may encode very precise prior knowledge coming from rules, applied, e.g., to a large collection of unsupervised examples. The classical learning framework corresponds to soft pointwise constraints, which can be violated at the cost of some penalization. The functional structure of the optimal solution is derived in terms of a set of “support constraints”, which generalize the classical concept of “support vectors”. They are at the basis of a novel learning paradigm that we call “Support Constraint Machines”. A case study and a numerical example are presented.
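The hard-versus-soft distinction can be illustrated on a one-dimensional toy problem (a hypothetical objective, not the paper's variational setting): minimize (w − 2)² under the hard constraint w ≤ 1, versus adding a soft quadratic penalty whose weight C is allowed to grow. As C → ∞ the soft solution approaches the hard one:

```python
def soft_solution(C, steps=2000):
    """Gradient descent on f(w) = (w - 2)^2 + C * max(0, w - 1)^2,
    the soft-penalized version of: minimize (w - 2)^2 subject to w <= 1.
    The step size shrinks with C to keep the iteration stable."""
    w, lr = 0.0, 0.4 / (1.0 + C)
    for _ in range(steps):
        grad = 2.0 * (w - 2.0)
        if w > 1.0:                  # penalty active only when the constraint is violated
            grad += 2.0 * C * (w - 1.0)
        w -= lr * grad
    return w
```

`soft_solution(0.0)` returns the unconstrained minimizer ≈ 2, while increasing C drives the solution toward the hard-constrained optimum w = 1, which is the limit behavior the hard-constraint formulation captures.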
Marcello Sanguineti
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Marco Gori
Stefano Melacci
2016-02-26T14:41:30Z
2016-02-26T14:41:30Z
http://eprints.imtlucca.it/id/eprint/3130
2016-02-26T14:41:30Z
A Two-Player Differential Game Model for the Management of Transboundary Pollution and Environmental Absorption
It is likely that the decentralized structure at the level of nations of decision-making processes related to polluting emissions will aggravate the decline in the efficiency of carbon sinks. A two-player differential game model of pollution is proposed. It accounts for a time-dependent environmental absorption efficiency and allows for the possibility of a switching of the biosphere from a carbon sink to a source. The impact of negative externalities from the transboundary pollution non-cooperative game wherein countries are dynamically involved is investigated. The differences in steady state between cooperative, open-loop, and Markov perfect Nash equilibria are studied. For the latter, two numerical methods for its approximation are compared.
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
F. El Ouardighi
K. Kogan
Marcello Sanguineti
2016-02-26T14:35:46Z
2016-02-29T08:31:00Z
http://eprints.imtlucca.it/id/eprint/3129
2016-02-26T14:35:46Z
Binary and multi-class classification of parkinsonian disorders with support vector machines based on quantitative brain MR and graph-based features
Laura Ludovica Gramegna
Claudia Testa
Rita Morisi
rita.morisi@imtlucca.it
Stefano Zanigni
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Nico Lanconelli
David Neil Manners
Stefania Evangelisti
Pietro Cortelli
Caterina Tonon
Raffaele Lodi
2016-02-26T13:30:28Z
2016-02-26T13:30:28Z
http://eprints.imtlucca.it/id/eprint/3128
2016-02-26T13:30:28Z
Online learning as an LQG optimal control problem with random matrices
In this paper, we combine optimal control theory and machine learning techniques to propose and solve an optimal control formulation of online learning from supervised examples, which are used to learn an unknown vector parameter modeling the relationship between the input examples and their outputs. We show some connections of the problem investigated with the classical LQG optimal control problem, of which the proposed problem is a non-trivial variation, as it involves random matrices. We also compare the optimal solution to the proposed problem with the Kalman-filter estimate of the parameter vector to be learned, demonstrating its greater smoothness and robustness to outliers. Extensions of the proposed online-learning framework are mentioned at the end of the paper.
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Alberto Bemporad
alberto.bemporad@imtlucca.it
Marco Gori
Rita Morisi
rita.morisi@imtlucca.it
Marcello Sanguineti
2016-02-26T13:24:09Z
2016-02-26T13:24:09Z
http://eprints.imtlucca.it/id/eprint/3127
2016-02-26T13:24:09Z
Learning with hard constraints as a limit case of learning with soft constraints
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Marco Gori
Stefano Melacci
Marcello Sanguineti
2016-02-26T13:11:06Z
2016-02-26T13:11:06Z
http://eprints.imtlucca.it/id/eprint/3126
2016-02-26T13:11:06Z
A SOM-based Chan–Vese model for unsupervised image segmentation
Active Contour Models (ACMs) constitute an efficient energy-based image segmentation framework. They usually deal with the segmentation problem as an optimization problem, formulated in terms of a suitable functional, constructed in such a way that its minimum is achieved in correspondence with a contour that is a close approximation of the actual object boundary. However, for existing ACMs, handling images that contain objects characterized by many different intensities still represents a challenge. In this paper, we propose a novel ACM that combines—in a global and unsupervised way—the advantages of the Self-Organizing Map (SOM) within the level set framework of a state-of-the-art unsupervised global ACM, the Chan–Vese (C–V) model. We term our proposed model SOM-based Chan–Vese (SOMCV) active contour model. It works by explicitly integrating the global information coming from the weights (prototypes) of the neurons in a trained SOM to help choose whether to shrink or expand the current contour during the iterative optimization process. The proposed model can handle images that contain objects characterized by complex intensity distributions, and is at the same time robust to additive noise. Experimental results show the high accuracy of the segmentation results obtained by the SOMCV model on several synthetic and real images, when compared to the Chan–Vese model and other image segmentation models.
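The role SOM prototypes play here, summarizing global intensity information that then guides the contour, can be sketched with a generic, minimal 1-D SOM on scalar intensities; this is an illustrative stand-in, not the SOMCV model itself:

```python
import random

def train_som(samples, n_units=4, epochs=20, lr0=0.5, seed=0):
    """Tiny 1-D Self-Organizing Map on scalar intensities: for each sample,
    the best-matching unit (BMU) and its topological neighbors are pulled
    toward the sample, with a learning rate that decays over the epochs.
    The final weights are the intensity prototypes."""
    rng = random.Random(seed)
    w = [rng.random() for _ in range(n_units)]
    for e in range(epochs):
        lr = lr0 * (1.0 - e / epochs)            # decaying learning rate
        radius = 1 if e < epochs // 2 else 0     # shrinking neighborhood
        for s in samples:
            bmu = min(range(n_units), key=lambda i: abs(w[i] - s))
            for i in range(n_units):
                if abs(i - bmu) <= radius:       # update BMU and neighbors
                    w[i] += lr * (s - w[i])
    return sorted(w)
```

On bimodal intensities the prototypes split between the two modes; a pixel can then be assigned to object or background by its nearest prototype, which is the kind of global cue SOMCV feeds into the contour evolution.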
Mohammed Abdelsamea
mohammed.abdelsamea@imtlucca.it
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Mohamed Medhat Gaber
2016-02-26T12:55:26Z
2016-02-26T12:55:26Z
http://eprints.imtlucca.it/id/eprint/3125
2016-02-26T12:55:26Z
On the Relationship between Variational Level Set-Based and SOM-Based Active Contours
Most Active Contour Models (ACMs) deal with the image segmentation problem as a functional optimization problem, as they work on dividing an image into several regions by optimizing a suitable functional. Among ACMs, variational level set methods have been used to build an active contour with the aim of modeling arbitrarily complex shapes. Moreover, they can also handle topological changes of the contours. Self-Organizing Maps (SOMs) have attracted the attention of many computer vision scientists, particularly in modeling an active contour based on the idea of utilizing the prototypes (weights) of a SOM to control the evolution of the contour. SOM-based models have been proposed in general with the aim of exploiting the specific ability of SOMs to learn the edge-map information via their topology preservation property, and of overcoming some drawbacks of other ACMs, such as trapping into local minima of the image energy functional to be minimized in such models. In this survey, we illustrate the main concepts of variational level set-based ACMs, SOM-based ACMs, and their relationship, and review in a comprehensive fashion the development of their state-of-the-art models from a machine learning perspective, with a focus on their strengths and weaknesses.
Mohammed Abdelsamea
mohammed.abdelsamea@imtlucca.it
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Mohamed Medhat Gaber
Eyad Elyan
2016-02-26T12:39:16Z
2016-02-26T12:39:16Z
http://eprints.imtlucca.it/id/eprint/3123
2016-02-26T12:39:16Z
On the Curse of Dimensionality in the Ritz Method
It is shown that the classical Ritz method of the calculus of variations suffers from the “curse of dimensionality,” i.e., an exponential growth, as a function of the number of variables, of the dimension a linear subspace needs in order to achieve a desired relative improvement in the accuracy of approximation of the optimal solution value. The proof is constructive and is obtained by exhibiting a family of infinite-dimensional optimization problems for which this happens, namely those with quadratic functional and spherical constraint. The results provide a theoretical motivation for the search of alternative solution methods, such as the so-called “extended Ritz method,” to deal with the curse of dimensionality.
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
2016-02-26T12:26:43Z
2017-03-21T10:32:36Z
http://eprints.imtlucca.it/id/eprint/3122
2016-02-26T12:26:43Z
Optimal design of auxetic hexachiral metamaterials with local resonators
A parametric beam lattice model is formulated to analyse the propagation properties of elastic in-plane waves in an auxetic material based on a hexachiral topology of the periodic cell, equipped with inertial local resonators. The Floquet-Bloch boundary conditions are imposed on a reduced order linear model in the only dynamically active degrees of freedom. Since the resonators can be designed to open and shift band gaps, an optimal design, focused on the largest possible gap in the low-frequency range, is achieved by solving a maximization problem in the bounded space of the significant geometrical and mechanical parameters. A local optimized solution, for the lowest pair of consecutive dispersion curves, is found by employing the globally convergent version of the Method of Moving Asymptotes, combined with Monte Carlo and quasi-Monte Carlo multi-start techniques.
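The optimization strategy described (a gradient-based local solver combined with Monte Carlo multi-start) can be sketched generically; the random hill-climbing refinement below is a simple stand-in for the Method of Moving Asymptotes, and all names and parameters are illustrative:

```python
import random

def multistart_maximize(f, bounds, n_starts=20, steps=200, seed=0):
    """Monte Carlo multi-start local search: draw random starting points
    in the box `bounds`, refine each by accepting random perturbations
    that improve the objective (standing in for a gradient-based local
    solver), and keep the best local optimum found."""
    rng = random.Random(seed)
    best_x, best_v = None, float("-inf")
    for _ in range(n_starts):
        x = [rng.uniform(lo, hi) for lo, hi in bounds]
        step = [(hi - lo) * 0.1 for lo, hi in bounds]
        v = f(x)
        for _ in range(steps):
            cand = [min(max(xi + rng.uniform(-s, s), lo), hi)
                    for xi, s, (lo, hi) in zip(x, step, bounds)]
            cv = f(cand)
            if cv > v:               # keep only improving moves
                x, v = cand, cv
        if v > best_v:
            best_x, best_v = x, v
    return best_x, best_v
```

Restarting from many random points is what gives robustness against the many local optima of band-gap objectives; the quasi-Monte Carlo variant replaces the random starts with a low-discrepancy sequence.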
Andrea Bacigalupo
andrea.bacigalupo@imtlucca.it
Marco Lepidi
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Luigi Gambarotta
2016-02-26T12:10:01Z
2016-02-26T12:10:01Z
http://eprints.imtlucca.it/id/eprint/3119
2016-02-26T12:10:01Z
Automatic Classification of Leading Interactions in a String Quartet
The aim of the present work is to analyze automatically the leading interactions between the musicians of a string quartet, using machine learning techniques applied to nonverbal features of the musicians' behavior, which are detected with the help of a motion capture system. We represent these interactions by a graph of influence of the musicians, which displays the relations "is following" and "is not following" with weighted directed arcs. The goal of the machine learning problem investigated is to assign weights to these arcs in an optimal way. Since only a subset of the available training examples is labeled, a semisupervised support vector machine is used, which is based on a linear kernel to limit its model complexity. Specific potential applications within the field of human-computer interaction are also discussed, such as e-learning, networked music performance, and social active listening.
Floriane Dardard
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Donald Glowinski
2016-02-12T13:32:18Z
2016-02-12T13:32:18Z
http://eprints.imtlucca.it/id/eprint/3068
2016-02-12T13:32:18Z
MultiVeStA: Statistical Model Checking for Discrete Event Simulators
The modeling, analysis and performance evaluation of large-scale systems are difficult tasks. An approach typically followed by engineers consists in performing simulations of system models to obtain statistical estimations of quantitative properties. Similarly, a technique used by computer scientists working on quantitative analysis is Statistical Model Checking (SMC), where rigorous mathematical languages (e.g., logics) are used to express properties, which are automatically estimated, again by simulating the model at hand. These property specification languages provide a formal, compact and elegant way to express properties without hard-coding them in the model definition. This paper presents MultiVeStA, a statistical analysis tool which can be easily integrated with discrete event simulators, enriching them with efficient distributed statistical analysis and SMC capabilities.
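The core SMC loop can be sketched in a few lines (a generic illustration of the technique, not MultiVeStA's actual API): run enough independent simulations for a Chernoff-Hoeffding bound to guarantee the requested accuracy, then report the empirical satisfaction probability:

```python
import math
import random

def smc_estimate(run_property, alpha=0.05, delta=0.05, seed=0):
    """Estimate the probability p that one simulation run satisfies a
    Boolean property. The Hoeffding bound gives a sample size n such that
    P(|estimate - p| > delta) <= alpha for i.i.d. runs."""
    n = math.ceil(math.log(2.0 / alpha) / (2.0 * delta ** 2))
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n) if run_property(rng))
    return hits / n

def ends_nonnegative(rng, steps=10):
    """Example 'simulator': a symmetric +/-1 random walk; the checked
    property is that the final position is nonnegative (true probability
    is about 0.623 for 10 steps)."""
    return sum(rng.choice((-1, 1)) for _ in range(steps)) >= 0
```

`smc_estimate(ends_nonnegative)` then returns an estimate within 0.05 of the true probability with 95% confidence, the same kind of guarantee an SMC tool automates on top of a discrete event simulator.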
Stefano Sebastio
stefano.sebastio@imtlucca.it
Andrea Vandin
andrea.vandin@imtlucca.it
2016-02-12T13:25:31Z
2016-04-06T09:37:22Z
http://eprints.imtlucca.it/id/eprint/3067
2016-02-12T13:25:31Z
Distributed statistical analysis of complex systems modeled through a chemical metaphor
The chemical-inspired programming approach is an emerging paradigm for defining the behavior of densely distributed and context-aware devices (e.g., in ecosystems of displays tailored to crowd steering, or to obtain profile-based coordinated visualization). Typically, the evolution of such systems cannot be easily predicted, making the availability of techniques and tools supporting prior-to-deployment analysis of paramount importance. Exact analysis techniques do not scale well as the complexity of systems grows; as a consequence, approximated techniques based on simulation have assumed a relevant role. This work presents a new simulation-based distributed analysis tool addressing the statistical analysis of such systems. The tool has been obtained by chaining two existing tools: MultiVeStA and Alchemist. The former is a recently proposed lightweight tool which allows one to enrich existing discrete event simulators with automated and distributed statistical analysis capabilities, while the latter is an efficient simulator for chemical-inspired computational systems. The tool is validated against a crowd steering scenario, and insights on the performance are provided by discussing how the analysis tasks scale on a multi-core architecture.
Danilo Pianini
Stefano Sebastio
stefano.sebastio@imtlucca.it
Andrea Vandin
andrea.vandin@imtlucca.it
2016-02-12T13:11:53Z
2016-02-12T13:11:53Z
http://eprints.imtlucca.it/id/eprint/3065
2016-02-12T13:11:53Z
The SCEL Language: Design, Implementation, Verification
SCEL (Service Component Ensemble Language) is a new language specifically designed to rigorously model and program autonomic components and their interaction, while supporting formal reasoning on their behaviors. SCEL brings together various programming abstractions that allow one to directly represent aggregations, behaviors and knowledge according to specific policies. It also naturally supports programming interaction, self-awareness, context-awareness, and adaptation. The solid semantic grounds of the language are exploited for developing logics, tools and methodologies for formal reasoning on system behavior to establish qualitative and quantitative properties of both the individual components and the overall systems.
Rocco De Nicola
r.denicola@imtlucca.it
Diego Latella
Alberto Lluch Lafuente
Michele Loreti
Andrea Margheri
Mieke Massink
Andrea Morichetta
Rosario Pugliese
Francesco Tiezzi
Andrea Vandin
andrea.vandin@imtlucca.it
2016-02-12T13:04:06Z
2016-04-06T07:58:07Z
http://eprints.imtlucca.it/id/eprint/3064
2016-02-12T13:04:06Z
Reconciling White-Box and Black-Box Perspectives on Behavioral Self-adaptation
This paper proposes to reconcile two perspectives on behavioral adaptation commonly taken at different stages of the engineering of autonomic computing systems. Requirements engineering activities often take a black-box perspective: A system is considered to be adaptive with respect to an environment whenever the system is able to satisfy its goals irrespective of environment perturbations. Modeling and programming engineering activities often take a white-box perspective: A system is equipped with suitable adaptation mechanisms and its behavior is classified as adaptive depending on whether the adaptation mechanisms are enacted or not. The proposed approach reconciles black- and white-box perspectives by proposing several notions of coherence between the adaptivity as observed by the two perspectives: These notions provide useful criteria for the system developer to assess and possibly modify the adaptation requirements, models and programs of an autonomic system.
Roberto Bruni
Andrea Corradini
Fabio Gadducci
Matthias Hölzl
Alberto Lluch Lafuente
Andrea Vandin
andrea.vandin@imtlucca.it
Martin Wirsing
2016-02-12T12:37:25Z
2016-02-12T12:37:25Z
http://eprints.imtlucca.it/id/eprint/3063
2016-02-12T12:37:25Z
Differential Bisimulation for a Markovian Process Algebra
Formal languages with semantics based on ordinary differential equations (ODEs) have emerged as a useful tool to reason about large-scale distributed systems. We present differential bisimulation, a behavioral equivalence developed as the ODE counterpart of bisimulations for languages with probabilistic or stochastic semantics. We study it in the context of a Markovian process algebra. Similarly to Markovian bisimulations yielding an aggregated Markov process in the sense of the theory of lumpability, differential bisimulation yields a partition of the ODEs underlying a process algebra term, whereby the sum of the ODE solutions of the same partition block is equal to the solution of a single (lumped) ODE. Differential bisimulation is defined in terms of two symmetries that can be verified only using syntactic checks. This enables the adaptation to a continuous-state semantics of proof techniques and algorithms for finite, discrete-state, labeled transition systems. For instance, we readily obtain a result of compositionality, and provide an efficient partition-refinement algorithm to compute the coarsest ODE aggregation of a model according to differential bisimulation.
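The lumping property can be checked numerically on a small example (an illustrative linear system, not one taken from the paper): two species degrading at the same rate K feed a third; a differential-bisimulation-style aggregation replaces x1 and x2 by their sum s, and the lumped ODE reproduces the original solutions exactly:

```python
def euler(deriv, x0, dt=1e-3, steps=5000):
    """Forward-Euler integration of x' = deriv(x), returning the final state."""
    x = list(x0)
    for _ in range(steps):
        d = deriv(x)
        x = [xi + dt * di for xi, di in zip(x, d)]
    return x

K, M = 1.0, 0.5  # illustrative rate constants

def full(state):
    """Original system: x1' = -K*x1, x2' = -K*x2, y' = K*(x1 + x2) - M*y."""
    x1, x2, y = state
    return [-K * x1, -K * x2, K * (x1 + x2) - M * y]

def lumped(state):
    """Aggregated system over the partition {x1, x2}, {y}, with s = x1 + x2."""
    s, y = state
    return [-K * s, K * s - M * y]
```

Integrating both systems from matching initial conditions, the sum x1 + x2 tracks s and y coincides in the two models, which is exactly the aggregation guarantee the behavioral equivalence provides for the ODE semantics.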
Giulio Iacobelli
Mirco Tribastone
mirco.tribastone@imtlucca.it
Andrea Vandin
andrea.vandin@imtlucca.it
2016-02-12T12:27:56Z
2016-02-12T12:27:56Z
http://eprints.imtlucca.it/id/eprint/3062
2016-02-12T12:27:56Z
A White Box Perspective on Behavioural Adaptation
We present a white-box conceptual framework for adaptation developed in the context of the EU Project ASCENS coordinated by Martin Wirsing. We called it CoDa, for Control Data Adaptation, since it is based on the notion of control data. CoDa promotes a neat separation between application and adaptation logic through a clear identification of the set of data that is relevant for the latter. The framework provides an original perspective from which we survey a representative set of approaches to adaptation, ranging from programming languages and paradigms to computational models and architectural solutions.
Roberto Bruni
Andrea Corradini
Fabio Gadducci
Alberto Lluch Lafuente
Andrea Vandin
andrea.vandin@imtlucca.it
2016-02-12T11:52:51Z
2016-02-12T12:08:13Z
http://eprints.imtlucca.it/id/eprint/3061
2016-02-12T11:52:51Z
Modelling and analyzing adaptive self-assembly strategies with Maude
Building adaptive systems with predictable emergent behavior is a difficult task and it is becoming a critical need. The research community has accepted the challenge by introducing approaches of various nature: from software architectures to programming paradigms and analysis techniques. Our white-box conceptual approach to adaptive systems based on the notion of control data promotes a clear distinction between the application and the adaptation logic. In this paper we propose a concrete instance of our approach based on (i) a neat identification of control data; (ii) a hierarchical architecture that provides the basic structure to separate the adaptation and application logics; (iii) computational reflection as the main mechanism to realize the adaptation logic; (iv) probabilistic rule-based specifications and quantitative verification techniques to specify and analyze the adaptation logic. We show that our solution can be naturally realized in Maude, a Rewriting Logic based framework, and illustrate our approach by specifying, validating and analyzing a prominent example of adaptive systems: robot swarms equipped with self-assembly strategies.
Roberto Bruni
Andrea Corradini
Fabio Gadducci
Alberto Lluch Lafuente
Andrea Vandin
andrea.vandin@imtlucca.it
2016-02-11T15:16:10Z
2016-04-06T10:06:34Z
http://eprints.imtlucca.it/id/eprint/3055
2016-02-11T15:16:10Z
Large-scale analysis of neuroimaging data on commercial clouds with content-aware resource allocation strategies
The combined use of mice that have genetic mutations (transgenic mouse models) of human pathology and advanced neuroimaging methods (such as magnetic resonance imaging) has the potential to radically change how we approach disease understanding, diagnosis and treatment. Morphological changes occurring in the brain of transgenic animals as a result of the interaction between environment and genotype can be assessed using advanced image analysis methods, an effort described as ‘mouse brain phenotyping’. However, the computational methods involved in the analysis of high-resolution brain images are demanding. While running such analysis on local clusters is possible, not all users have access to such infrastructure, and even for those that do, having additional computational capacity can be beneficial (e.g. to meet sudden high throughput demands). In this paper we use a commercial cloud platform for brain neuroimaging and analysis. We achieve a registration-based multi-atlas, multi-template anatomical segmentation, normally a lengthy effort, within a few hours. Naturally, performing such analyses on the cloud entails a monetary cost, and it is worthwhile identifying strategies that can allocate resources intelligently. In our context a critical aspect is the identification of how long each job will take. We propose a method that estimates the complexity of an image-processing task, a registration, using statistical moments and shape descriptors of the image content. We use this information to learn and predict the completion time of a registration. The proposed approach is easy to deploy, and could serve as an alternative for laboratories that may require instant access to large high-performance-computing infrastructures. To facilitate adoption from the community we publicly release the source code.
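The cost-prediction step, learning completion time from content descriptors of past jobs, reduces in its simplest form to fitting a regression; the sketch below uses a single toy descriptor and ordinary least squares, with all names illustrative rather than taken from the paper:

```python
def intensity_variance(pixels):
    """Toy content descriptor: the second central moment of the pixel
    intensities, standing in for the statistical moments and shape
    descriptors computed from the images."""
    n = len(pixels)
    mean = sum(pixels) / n
    return sum((p - mean) ** 2 for p in pixels) / n

def fit_line(features, times):
    """Ordinary least squares for time ~ a + b * feature, fitted on
    (descriptor, observed completion time) pairs from past jobs."""
    n = len(features)
    mx = sum(features) / n
    my = sum(times) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(features, times))
         / sum((x - mx) ** 2 for x in features))
    return my - b * mx, b  # intercept, slope

def predict(model, feature):
    """Predicted completion time for a new job's descriptor."""
    a, b = model
    return a + b * feature
```

Predicted times can then drive the allocation strategy, e.g. deciding how many cloud instances to provision so that expected cost and turnaround stay within budget.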
Massimo Minervini
massimo.minervini@imtlucca.it
Cristian Rusu
Mario Damiano
Valter Tucci
Angelo Bifone
Alessandro Gozzi
Sotirios A. Tsaftaris
sotirios.tsaftaris@imtlucca.it
2016-01-20T10:27:26Z
2016-04-06T10:06:49Z
http://eprints.imtlucca.it/id/eprint/3025
2016-01-20T10:27:26Z
Supervised Learning of Functional Maps for Infarct Classification
Our submission to the STACOM Challenge at MICCAI 2015 is based on the supervised learning of a functional map representation between the End Systole (ES) and End Diastole (ED) phases of the Left Ventricle (LV), for classifying infarcted LVs from healthy ones. The Laplace-Beltrami eigen-spectra of the LV surfaces at ES and ED, represented by their triangular meshes, are used to compute the functional maps. Multi-scale distortions induced by the mapping are further calculated by singular value decomposition of the functional map. During training, the information of whether an LV surface is healthy or diseased is known, and this information is used to train an SVM classifier on the singular values at multiple scales corresponding to the distorted areas, augmented with the surface area difference of the epicardium and endocardium meshes. At testing, similar augmented features are calculated and fed to the SVM model for classification. Promising results are obtained both in cross validation on the training data and on the testing data, which encourages us to believe that this algorithm will perform favourably in comparison to state-of-the-art methods.
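The distortion features come from the singular values of the (discrete) functional map. For intuition, here is a pure-Python computation of the singular values of a 2×2 map, a toy stand-in for the larger spectral matrices used in such pipelines: the identity map gives singular values (1, 1), and deviations from 1 quantify distortion:

```python
import math

def singular_values_2x2(a, b, c, d):
    """Singular values of the 2x2 matrix [[a, b], [c, d]] in closed form,
    via the eigenvalues of A^T A (trace/determinant formulas)."""
    t = a * a + b * b + c * c + d * d   # trace of A^T A
    det = a * d - b * c                 # det(A); det(A^T A) = det**2
    disc = math.sqrt(max(t * t - 4.0 * det * det, 0.0))
    s1 = math.sqrt((t + disc) / 2.0)
    s2 = math.sqrt(max((t - disc) / 2.0, 0.0))
    return s1, s2
```

A map that stretches one direction by 3 and shrinks the other to 0.5 yields singular values (3, 0.5); the abstract's pipeline feeds such multi-scale distortion values, augmented with surface-area differences, to an SVM.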
Anirban Mukhopadhyay
Ilkay Oksuz
ilkay.oksuz@imtlucca.it
Sotirios A. Tsaftaris
sotirios.tsaftaris@imtlucca.it
2016-01-20T10:15:32Z
2016-04-06T07:35:14Z
http://eprints.imtlucca.it/id/eprint/3024
2016-01-20T10:15:32Z
Reconstruction of DSC-MRI Data from Sparse Data Exploiting Temporal Redundancy and Contrast Localization
In order to assess brain perfusion, one of the available methods is the estimation of parameters such as cerebral blood flow (CBF), cerebral blood volume (CBV) and mean transit time (MTT) from Dynamic Susceptibility Contrast-MRI (DSC-MRI). This estimation requires both high temporal resolution, to capture the rapid tracer kinetics, and high spatial resolution, to detect small impairments and reliably discriminate boundaries. With this in mind, we propose a compressed sensing approach to decrease the acquisition time without sacrificing reconstruction quality, especially in the region affected by tracer passage. To this end we propose the utilization of an available TVL1-L2 minimization scheme with a novel additional term that introduces the information on the volume at baseline (no tracer). We show on simulated data the benefit of such a scheme, which is able to achieve an accurate reconstruction even at high acceleration (×16), with an RMSE of 2.8, 10 times lower than the error obtained with the original reconstruction.
Davide Boschetto
davide.boschetto@imtlucca.it
M. Castellaro
P. Di Prima
A. Bertoldo
Enrico Grisan
2016-01-20T09:41:10Z
2016-01-20T09:41:10Z
http://eprints.imtlucca.it/id/eprint/3023
2016-01-20T09:41:10Z
X3DMMS: An X3DOM Tool for Molecular and Material Sciences
We are presenting a virtual reality environment based on X3DOM technologies and aimed for enabling a researcher in the Molecular and Matter Sciences to set up the initial conditions of a simulation to be performed using the Dl-Poly software, through a virtual environment implemented in X3D. After having completed the definition of the molecular system to be studied in a very intuitive and user friendly way, the user can write out the Dl-Poly input files. In this way the crucial phase of the initial set up of the simulation is simplified and can be performed in a short time.
Although some technological drawbacks have been experienced in the current X3DOM implementation, we are confident that this approach, which definitely solves the "traditional" issues related to compatibility among different web browsers (plugins) and operating systems, represents a highway for the diffusion of X3D technologies in several application fields.
Fabiana Zollo
fabiana.zollo@imtlucca.it
Luca Caprini
Osvaldo Gervasi
Alessandro Costantini
2016-01-20T09:35:59Z
2016-01-20T09:35:59Z
http://eprints.imtlucca.it/id/eprint/3022
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3022
2016-01-20T09:35:59Z
User Interaction and Data Management for Large Scale Grid Applications
In this paper we present a model that combines the X3DMMS application with the GC3Pie execution framework, enabling the user to perform large-scale computations on distributed computing environments. Such an approach facilitates the management and preparation of the data required to define the input files for DL_POLY, a popular Molecular Dynamics (MD) package used for the study of molecular systems. The researcher can define in an intuitive way the initial configuration of the molecular system, making use of the X3DMMS virtual reality environment, and prepare the related MD-package-oriented input files. After having defined the initial conditions of the system, the researcher can carry out the required computations by using the GC3Pie workflow environment, which controls the execution of the calculation on a distributed computing infrastructure. To test the validity of the developed model, implemented on the EGI infrastructure, we present the results obtained for a propane bulk system, where the solvation process of propane inside the bulk has been investigated. The presented approach provides a reusable example for other laboratories or groups interested both in acting through virtual representations of the molecular systems and in porting their applications to distributed computing infrastructures.
Alessandro Costantini
Osvaldo Gervasi
Fabiana Zollo
fabiana.zollo@imtlucca.it
Luca Caprini
2016-01-20T08:59:10Z
2016-01-20T09:37:24Z
http://eprints.imtlucca.it/id/eprint/3021
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3021
2016-01-20T08:59:10Z
Viral Misinformation: The Role of Homophily and Polarization
Alessandro Bessi
Fabio Petroni
Michela Del Vicario
michela.delvicario@imtlucca.it
Fabiana Zollo
fabiana.zollo@imtlucca.it
Aris Anagnostopoulos
Antonio Scala
Guido Caldarelli
guido.caldarelli@imtlucca.it
Walter Quattrociocchi
walter.quattrociocchi@imtlucca.it
2016-01-19T15:59:42Z
2016-04-06T07:39:00Z
http://eprints.imtlucca.it/id/eprint/3018
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3018
2016-01-19T15:59:42Z
Semiautomatic detection of villi in confocal endoscopy for the evaluation of celiac disease
Celiac Disease (CD) is an immune-mediated enteropathy, diagnosed in clinical practice by intestinal biopsy and the concomitant presence of a positive celiac serology. Confocal Laser Endomicroscopy (CLE) allows skilled and trained experts to potentially perform in vivo virtual histology of small-bowel mucosa. In particular, it allows the qualitative evaluation of mucosal alterations such as a decrease in goblet cell density, the presence of villous atrophy or crypt hypertrophy. We present a semi-automatic method for villi detection from confocal endoscopy images, whose appearance changes in case of villous atrophy. Starting from a set of manual seeds, a first rough segmentation of the villi is obtained by means of mathematical morphology operations. A merge-and-split procedure is then performed to ensure that each seed gives rise to a distinct region in the final segmentation. A border refinement process is finally performed, evolving the shape of each region according to local gradient intensities. Mean and median Dice coefficients for 290 villi originating from 66 images, when compared to manually obtained ground truth, are 80.71 and 87.96, respectively.
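The seed-driven rough-segmentation step described above can be sketched as a conditional (geodesic) dilation of the manual seeds inside an intensity mask. The intensity-threshold criterion and all names below are illustrative assumptions, not the paper's actual pipeline:

```python
import numpy as np
from scipy import ndimage as ndi

def seeded_rough_segmentation(image, seeds, thresh):
    """Grow each manual seed into a rough region by conditional dilation:
    iterative morphological dilation restricted to a brightness mask.
    `seeds` is a boolean array with one True pixel per villus; the mask
    keeps pixels whose intensity exceeds `thresh` (hypothetical criterion)."""
    mask = image > thresh
    grown = seeds & mask
    while True:
        nxt = ndi.binary_dilation(grown) & mask   # dilate, stay inside mask
        if np.array_equal(nxt, grown):            # converged: no new pixels
            break
        grown = nxt
    # label connected components so each seed yields its own region
    labels, n = ndi.label(grown)
    return labels, n
```

In the paper's method, regions grown from different seeds that touch would then be handled by the merge-and-split step; the sketch stops at the rough segmentation.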
Davide Boschetto
davide.boschetto@imtlucca.it
H. Mirzaei
R.W.L. Leong
Giacomo Tarroni
Enrico Grisan
2016-01-19T15:31:34Z
2016-04-06T07:34:52Z
http://eprints.imtlucca.it/id/eprint/3017
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3017
2016-01-19T15:31:34Z
Detection and density estimation of goblet cells in confocal endoscopy for the evaluation of celiac disease
Celiac Disease (CD) is an immune-mediated enteropathy, diagnosed in clinical practice by intestinal biopsy and the concomitant presence of a positive celiac serology. Confocal Laser Endomicroscopy (CLE) allows skilled and trained experts to potentially perform in vivo virtual histology of small-bowel mucosa. In particular, it allows the qualitative evaluation of mucosal alterations such as a decrease in goblet cell density, the presence of villous atrophy or crypt hypertrophy. We present a semi-automatic computer-based method for the detection of goblet cells from confocal endoscopy images, whose density changes in case of pathological tissue. After a manual selection of a suitable region of interest, the candidate columnar and goblet cell centers are first detected, and the cellular architecture is estimated from their positions using a Voronoi diagram. The region within each Voronoi cell is then analyzed and classified as goblet cell or other. The results suggest that our method is able to detect and label goblet cells immersed in a columnar epithelium in a fast, reliable and automatic way. Accepting 0.44 false positives per image, we obtain a sensitivity of 90.3. Furthermore, estimated and real goblet cell densities are comparable (error: 9.7 ± 16.9, correlation: 87.2, R2 = 76).
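The Voronoi step above can be sketched as follows: detected cell centers induce a Voronoi partition, and per-cell geometric cues (here, cell area, as one plausible cue) can feed the goblet-vs-other classification. This is an illustrative sketch, not the paper's classifier:

```python
import numpy as np
from scipy.spatial import Voronoi

def cell_areas(centers):
    """Estimate the area of each finite Voronoi cell induced by the
    detected cell centers. Area is one simple cue for telling large
    goblet cells from columnar cells (hypothetical criterion)."""
    vor = Voronoi(centers)
    areas = {}
    for i, region_idx in enumerate(vor.point_region):
        region = vor.regions[region_idx]
        if len(region) == 0 or -1 in region:
            continue                      # open cell at the image border
        pts = vor.vertices[region]
        # order vertices by angle so the shoelace formula applies
        c = pts.mean(axis=0)
        order = np.argsort(np.arctan2(pts[:, 1] - c[1], pts[:, 0] - c[0]))
        pts = pts[order]
        x, y = pts[:, 0], pts[:, 1]
        areas[i] = 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))
    return areas
```

Border cells are unbounded and skipped here; in practice they would need clipping to the region of interest.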
Davide Boschetto
davide.boschetto@imtlucca.it
H. Mirzaei
R.W.L. Leong
Enrico Grisan
2016-01-19T15:25:22Z
2016-04-06T07:36:06Z
http://eprints.imtlucca.it/id/eprint/3016
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3016
2016-01-19T15:25:22Z
Baseline constrained reconstruction of DSC-MRI tracer kinetics from sparse fourier data
In order to assess brain perfusion, one of the available methods is the estimation of parameters such as cerebral blood flow (CBF), cerebral blood volume (CBV) and mean transit time (MTT) from Dynamic Susceptibility Contrast MRI (DSC-MRI). This estimation requires both high temporal and spatial resolution, to capture the rapid tracer kinetics, detect small impairments and reliably discriminate boundaries. With this in mind, we propose a compressed sensing approach to decrease the acquisition time without sacrificing the reconstruction, especially in the region affected by the tracer. Within the framework of a TV-L1-L2 minimization for solving the reconstruction from partial Fourier data, we introduce a novel baseline-constraining term weighting the difference of the reconstructed volume from the baseline in all regions where no perfusion is apparent. We show that the proposed reconstruction scheme is able to provide an accurate estimation of the tracer kinetics (the necessary step for estimating CBF, CBV and MTT) in the volume even at high acceleration (x16), with an RMSE of 11, a third of what is achievable without the baseline constraint.
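In symbols, the reconstruction described above can be sketched as the following minimization. The weights λ, the sparsifying transform Ψ and the mask W are our notational assumptions; the paper's exact formulation may differ:

```latex
\min_{x}\;
\tfrac{1}{2}\,\bigl\| \mathcal{F}_u\, x - y \bigr\|_2^2
\;+\; \lambda_{\mathrm{TV}}\,\mathrm{TV}(x)
\;+\; \lambda_{1}\,\bigl\| \Psi x \bigr\|_1
\;+\; \lambda_{B}\,\bigl\| W\,(x - x_{\mathrm{base}}) \bigr\|_2^2
```

where \(\mathcal{F}_u\) is the undersampled Fourier operator, \(y\) the acquired partial Fourier data, \(x_{\mathrm{base}}\) the baseline (pre-contrast) volume, and \(W\) a mask selecting the voxels where no perfusion is apparent, so that the last term penalizes deviations from the baseline only there.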
Davide Boschetto
davide.boschetto@imtlucca.it
P. Di Prima
M. Castellaro
A. Bertoldo
Enrico Grisan
2015-12-15T11:27:37Z
2016-03-18T10:39:28Z
http://eprints.imtlucca.it/id/eprint/2972
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2972
2015-12-15T11:27:37Z
A Concurrent SOM-Based Chan-Vese Model for Image Segmentation
Concurrent Self Organizing Maps (CSOMs) deal with the pattern classification problem in a parallel processing way, aiming to minimize a suitable objective function. Similarly, Active Contour Models (ACMs) (e.g., the Chan-Vese (CV) model) deal with the image segmentation problem as an optimization problem, by minimizing a suitable energy functional. Improving the effectiveness of ACMs is a real challenge in many computer vision applications. In this paper, we propose a novel regional ACM, which relies on a CSOM to approximate the foreground and background image intensity distributions in a supervised way, and to drive the active-contour evolution accordingly. We term our model Concurrent Self Organizing Map-based Chan-Vese (CSOM-CV) model. Its main idea is to concurrently integrate the global information extracted by a CSOM from a few supervised pixels into the level-set framework of the CV model to build an effective ACM. Experimental results show the effectiveness of CSOM-CV in segmenting synthetic and real images, when compared with the stand-alone CV and CSOM models.
Mohammed Abdelsamea
mohammed.abdelsamea@imtlucca.it
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Mohamed Medhat Gaber
2015-12-11T11:32:13Z
2015-12-11T11:32:13Z
http://eprints.imtlucca.it/id/eprint/2971
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2971
2015-12-11T11:32:13Z
Douglas-Rachford splitting: Complexity estimates and accelerated variants
We propose a new approach for analyzing convergence of the Douglas-Rachford splitting method for solving convex composite optimization problems. The approach is based on a continuously differentiable function, the Douglas-Rachford Envelope (DRE), whose stationary points correspond to the solutions of the original (possibly nonsmooth) problem. By proving the equivalence between the Douglas-Rachford splitting method and a scaled gradient method applied to the DRE, results from smooth unconstrained optimization are employed to analyze convergence properties of DRS, to tune the method and to derive an accelerated version of it.
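As a concrete instance of the splitting being analyzed, the basic DRS iteration for a composite problem min f(x) + g(x) can be run on a toy lasso problem, where both proximal maps are cheap. This is an illustrative sketch under our own choice of f, g and step size, not the paper's accelerated variant or its DRE machinery:

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding: prox of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def drs_lasso(A, b, lam, gamma=1.0, iters=500):
    """Douglas-Rachford splitting for min_x 0.5||Ax - b||^2 + lam||x||_1.
    prox of the quadratic term f is a linear solve; prox of the l1 term g
    is soft-thresholding."""
    n = A.shape[1]
    s = np.zeros(n)
    H = np.eye(n) + gamma * A.T @ A          # I + gamma A'A
    Atb = gamma * A.T @ b
    for _ in range(iters):
        x = np.linalg.solve(H, s + Atb)      # x = prox_{gamma f}(s)
        y = soft(2 * x - s, gamma * lam)     # y = prox_{gamma g}(2x - s)
        s = s + y - x                        # DRS update of the governing sequence
    return x
```

For A = I the fixed point can be checked by hand: x reduces to soft-thresholding of b, which the iteration reaches geometrically.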
Panagiotis Patrinos
panagiotis.patrinos@imtlucca.it
Lorenzo Stella
lorenzo.stella@imtlucca.it
Alberto Bemporad
alberto.bemporad@imtlucca.it
2015-12-04T08:57:09Z
2015-12-04T09:00:42Z
http://eprints.imtlucca.it/id/eprint/2967
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2967
2015-12-04T08:57:09Z
The Future of Cyber Security in Italy. A white paper describing the main challenges our country will have to face over the next five years
Roberto Baldoni
Rocco De Nicola
r.denicola@imtlucca.it
2015-12-03T14:50:46Z
2015-12-03T14:50:46Z
http://eprints.imtlucca.it/id/eprint/2963
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2963
2015-12-03T14:50:46Z
Social Determinants of Content Selection in the Age of (Mis)Information
Despite the enthusiastic rhetoric about so-called collective intelligence, conspiracy theories – e.g. global warming induced by chemtrails or the link between vaccines and autism – find on the Web a natural medium for their dissemination. Users preferentially consume information according to their system of beliefs, and the strife between users of opposite worldviews (e.g., scientific and conspiracist) may result in heated debates. In this work we provide a genuine example of information consumption on a set of 1.2 million Italian Facebook users. We show by means of a thorough quantitative analysis that information supporting different worldviews – i.e. scientific and conspiracist news – is consumed in a comparable way. Moreover, we measure the effect of 4,709 evidently false items of information (satirical versions of conspiracist stories) and 4,502 debunking memes (information aiming at countering unsubstantiated rumors) on users polarized towards conspiracy claims.
Alessandro Bessi
Guido Caldarelli
guido.caldarelli@imtlucca.it
Michela Del Vicario
michela.delvicario@imtlucca.it
Antonio Scala
Walter Quattrociocchi
walter.quattrociocchi@imtlucca.it
2015-12-03T13:38:45Z
2015-12-03T13:38:45Z
http://eprints.imtlucca.it/id/eprint/2962
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2962
2015-12-03T13:38:45Z
Differential Analysis of Interacting Automata with Immediate Actions
The stochastic modelling of software systems with activities whose durations are separated by many orders of magnitude typically leads to numerical complications, due to stiffness. To avoid explicit state-space generation (a prerequisite to tackling this problem via suitable manipulations or aggregations), in this paper we present an accurate and scalable fluid approximation. It is expressed as a compact piecewise-linear system of ordinary differential equations, which has discontinuous right-hand sides as a result of the incorporation of immediateness. We study the nature of this approximation in a general high-level framework of interacting automata. On a case study of client/server interaction, our approach is about two times faster than the analysis conducted on the stiff equations where immediate actions are explicitly modelled.
Luca Bortolussi
Mirco Tribastone
mirco.tribastone@imtlucca.it
2015-12-03T13:32:51Z
2015-12-03T13:32:51Z
http://eprints.imtlucca.it/id/eprint/2961
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2961
2015-12-03T13:32:51Z
Model-based Development and Performance Analysis for Evolving Manufacturing Systems
Manufacturing systems and their control software exhibit a large number of variants, which evolve over time in order to meet changing functional and non-functional requirements. To handle the resulting complexity, we propose a multi-perspective modeling approach with different viewpoints regarding workflow, architecture and component behavior. We combine it with delta modeling to seamlessly capture variability and evolution by the same means on each of the viewpoints. We show how the separation in different viewpoints enables early performance analysis as well as code generation. The approach is illustrated using a case study.
Matthias Kowal
Christian Prehofer
Ina Schaefer
Mirco Tribastone
mirco.tribastone@imtlucca.it
2015-11-30T15:16:11Z
2015-11-30T15:16:11Z
http://eprints.imtlucca.it/id/eprint/2941
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2941
2015-11-30T15:16:11Z
SLAC: A Formal Service-Level-Agreement Language for Cloud Computing
The need for mechanisms to automate and regulate the interaction among the parties involved in offered cloud services is exacerbated by the increasing number of providers and solutions enabling the cloud paradigm. This regulation needs to be defined through a contract, the so-called Service Level Agreement (SLA). We argue that current solutions for SLA specification cannot cope with the distinctive characteristics of clouds. Therefore, in this paper we define a language, named SLAC, devised for specifying SLAs for the cloud computing domain. The main differences with respect to existing specification languages are: SLAC is domain specific, its semantics are formally defined in order to avoid ambiguity, it supports the main cloud deployment models, and it enables the specification of multi-party agreements. Moreover, SLAC supports the business aspects of the domain, such as pricing schemes, business actions and metrics. Furthermore, SLAC comes with an open-source software framework which enables the specification, evaluation and enforcement of SLAs for clouds. We illustrate the potentialities and effectiveness of the SLAC language and its management framework by experimenting with an OpenNebula cloud system.
Rafael B. Uriarte
Francesco Tiezzi
Rocco De Nicola
r.denicola@imtlucca.it
2015-11-30T15:08:22Z
2015-11-30T15:08:22Z
http://eprints.imtlucca.it/id/eprint/2940
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2940
2015-11-30T15:08:22Z
Self-expression and Dynamic Attribute-Based Ensembles in SCEL
In the field of distributed autonomous computing, the current trend is to develop cooperating computational entities endowed with enhanced self-* properties. The expression self-* indicates the possibility for a component inside an ensemble, i.e. a set of collaborative autonomic components, to self-organize, heal (repair), optimize and configure with little or no human interaction. We focus on a self-* property called self-expression, defined as the ability to deploy run-time changes of the coordination pattern of the observed ensemble; the goal of the ensemble is to achieve adaptivity by meeting functional and non-functional requirements when specific tasks have to be completed. The purpose of this paper is to rigorously present the mechanisms involved whenever a change in the coordination pattern is needed, and the interactions that take place. To this aim, we use SCEL (Software Component Ensemble Language), a formal language for describing autonomic components and their interactions, featuring a highly dynamic and flexible way to form ensembles based on components’ attributes.
Giacomo Cabri
Nicola Capodieci
Luca Cesari
Rocco De Nicola
r.denicola@imtlucca.it
Rosario Pugliese
Francesco Tiezzi
Franco Zambonelli
2015-11-30T14:55:01Z
2015-11-30T14:55:01Z
http://eprints.imtlucca.it/id/eprint/2939
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2939
2015-11-30T14:55:01Z
Group-by-Group Probabilistic Bisimilarities and Their Logical Characterizations
We provide two interpretations, over nondeterministic and probabilistic processes, of PML, the probabilistic version of Hennessy-Milner logic used by Larsen and Skou to characterize bisimilarity of probabilistic processes without internal nondeterminism. We also exhibit two new bisimulation-based equivalences, which are in full agreement with the two different interpretations of PML. The new equivalences are coarser than the bisimilarity for nondeterministic and probabilistic processes proposed by Segala and Lynch, which instead is in agreement with a version of Hennessy-Milner logic extended with an additional probabilistic operator interpreted over state distributions rather than over individual states. The modal logic characterizations provided for the new equivalences thus offer a uniform framework for reasoning on purely nondeterministic processes, reactive probabilistic processes, and nondeterministic and probabilistic processes.
Marco Bernardo
Rocco De Nicola
r.denicola@imtlucca.it
Michele Loreti
2015-11-30T13:11:55Z
2015-11-30T13:11:55Z
http://eprints.imtlucca.it/id/eprint/2938
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2938
2015-11-30T13:11:55Z
Programming and Verifying Component Ensembles
A simplified version of the kernel language SCEL, that we call SCELlight, is introduced as a formalism for programming and verifying properties of so-called cyber-physical systems consisting of software-intensive ensembles of components, featuring complex intercommunications and interactions with humans and other systems. In order to validate the amenability of the language for verification purposes, we provide a translation of SCELlight specifications into Promela. We test the feasibility of the approach by formally specifying an application scenario, consisting of a collection of components offering a variety of services meeting different quality levels, and by using SPIN to verify that some desired behaviors are guaranteed.
Rocco De Nicola
r.denicola@imtlucca.it
Alberto Lluch Lafuente
Michele Loreti
Andrea Morichetta
Rosario Pugliese
Valerio Senni
Francesco Tiezzi
2015-11-30T12:56:35Z
2015-11-30T12:56:35Z
http://eprints.imtlucca.it/id/eprint/2937
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2937
2015-11-30T12:56:35Z
Introduction to “Rigorous Engineering of Autonomic Ensembles” – Track Introduction
Today’s software systems are becoming increasingly distributed and decentralized and have to adapt autonomously to dynamically changing, open-ended environments. Often their nodes partake in complex interactions with other nodes or with humans. We call these kinds of distributed, complex systems operating in open-ended and changing environments ensembles.
Martin Wirsing
Rocco De Nicola
r.denicola@imtlucca.it
Matthias Hölzl
2015-11-30T12:37:23Z
2016-04-06T07:56:18Z
http://eprints.imtlucca.it/id/eprint/2936
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2936
2015-11-30T12:37:23Z
A Life Cycle for the Development of Autonomic Systems: The E-mobility Showcase
Component ensembles are a promising way of building self-aware autonomic adaptive systems. This approach has been promoted by the EU project ASCENS, which develops the core idea of ensembles by providing rigorous semantics as well as models and methods for the whole development life cycle of an ensemble-based system. These methods specifically address adaptation, self-awareness, self-optimization, and continuous system evolution. In this paper, we demonstrate the key concepts and benefits of the ASCENS approach in the context of intelligent navigation of electric vehicles (e-Mobility), which itself is one of the three key case studies of the project.
Tomáš Bureš
Rocco De Nicola
r.denicola@imtlucca.it
Ilias Gerostathopoulos
Nicklas Hoch
Michal Kit
Nora Koch
Giacoma Valentina Monreale
Ugo Montanari
Rosario Pugliese
Nikola Serbedzija
Martin Wirsing
Franco Zambonelli
2015-11-24T15:47:03Z
2015-11-24T15:47:03Z
http://eprints.imtlucca.it/id/eprint/2930
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2930
2015-11-24T15:47:03Z
Robust model predictive control for discrete-time fractional-order systems
In this paper we propose a tube-based robust model predictive control scheme for fractional-order discrete-time systems of the Grünwald-Letnikov type with state and input constraints. We first approximate the infinite-dimensional fractional-order system by a finite-dimensional linear system, and show that the actual dynamics can be approximated arbitrarily tightly. We then use the approximate dynamics to design a tube-based model predictive controller which endows the controlled closed-loop system with robust stability properties.
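The Grünwald-Letnikov fractional difference underlying such systems can be sketched as follows (the notation, including the truncation horizon ν used to obtain the finite-dimensional approximation, is ours):

```latex
\Delta^{\alpha} x_k \;=\; \sum_{j=0}^{k} (-1)^{j} \binom{\alpha}{j}\, x_{k-j},
\qquad
\binom{\alpha}{j} \;=\; \frac{\alpha(\alpha-1)\cdots(\alpha-j+1)}{j!}
```

Since the binomial weights decay as \(j\) grows, truncating the memory to the last \(\nu\) samples yields a finite-dimensional linear system whose approximation error can be made arbitrarily small by increasing \(\nu\).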
Pantelis Sopasakis
pantelis.sopasakis@imtlucca.it
Sotiris Ntouskas
Haralambos Sarimveis
2015-11-24T13:00:34Z
2015-11-24T13:00:34Z
http://eprints.imtlucca.it/id/eprint/2931
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2931
2015-11-24T13:00:34Z
The eNanoMapper database for nanomaterial safety information
Background: The NanoSafety Cluster, a cluster of projects funded by the European Commission, identified the need for a computational infrastructure for toxicological data management of engineered nanomaterials (ENMs). Ontologies, open standards, and interoperable designs were envisioned to empower a harmonized approach to European research in nanotechnology. This setting provides a number of opportunities and challenges in the representation of nanomaterials data and the integration of ENM information originating from diverse systems. Within this cluster, eNanoMapper works towards supporting the collaborative safety assessment of ENMs by creating a modular and extensible infrastructure for data sharing, data analysis, and building computational toxicology models for ENMs.
Results: The eNanoMapper database solution builds on the previous experience of the consortium partners in supporting diverse data through flexible data storage, open source components and web services. We have recently described the design of the eNanoMapper prototype database along with a summary of challenges in the representation of ENM data and an extensive review of existing nano-related data models, databases, and nanomaterials-related entries in chemical and toxicogenomic databases. This paper continues with a focus on the database functionality exposed through its application programming interface (API), and its use in visualisation and modelling. Considering the preferred community practice of using spreadsheet templates, we developed a configurable spreadsheet parser facilitating user-friendly data preparation and data upload. We further present a web application able to retrieve the experimental data via the API and analyze it with multiple data preprocessing and machine learning algorithms.
Conclusion: We demonstrate how the eNanoMapper database is used to import and publish online ENM and assay data from several data sources, how the “representational state transfer” (REST) API enables building user-friendly interfaces and graphical summaries of the data, and how these resources facilitate the modelling of reproducible quantitative structure–activity relationships for nanomaterials (NanoQSAR).
Nina Jeliazkova
Haralambos Chomenides
Philip Doganis
Bengt Fadeel
Roland Grafström
Barry Hardy
Janna Hastings
Markus Hegi
Vedrin Jeliazkov
Nikolay Kochev
Pekka Kohonen
Cristian Munteanu
Haralambos Sarimveis
Bart Smeets
Pantelis Sopasakis
pantelis.sopasakis@imtlucca.it
Georgia Tsiliki
David Vorgrimmler
Egon Willighagen
2015-11-05T14:07:32Z
2015-11-05T14:07:32Z
http://eprints.imtlucca.it/id/eprint/2828
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2828
2015-11-05T14:07:32Z
SEDNAM - Socio-Economic Dynamics: Networks and Agent-Based Models - Introduction
Recent years have witnessed the increasing interest of physicists, mathematicians and computer scientists in socio-economic systems. In our view, the many reasons behind this can be summarized by observing that traditional approaches to disciplines such as sociology and economics have dramatically shown their limitations.
Serge Galam
Marco Alberto Javarone
Tiziano Squartini
tiziano.squartini@imtlucca.it
2015-10-28T15:00:56Z
2015-10-28T15:00:56Z
http://eprints.imtlucca.it/id/eprint/2788
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2788
2015-10-28T15:00:56Z
Debunking in a World of Tribes
Recently a simple military exercise on the Internet was perceived as the beginning of a new civil war in the US. Social media aggregate people around common interests, eliciting a collective framing of narratives and worldviews. However, the wide availability of user-provided content and the direct path between producers and consumers of information often foster confusion about causation, encouraging mistrust, rumors, and even conspiracy thinking. To counter such a trend, attempts to debunk are often undertaken. Here, we examine the effectiveness of debunking through a quantitative analysis of 54 million users over a time span of five years (Jan 2010 to Dec 2014). In particular, we compare how users interact with proven (scientific) and unsubstantiated (conspiracy-like) information on Facebook in the US. Our findings confirm the existence of echo chambers where users interact primarily with either conspiracy-like or scientific pages. Both groups interact similarly with the information within their echo chamber. We examine 47,780 debunking posts and find that attempts at debunking are largely ineffective. For one, only a small fraction of usual consumers of unsubstantiated information interact with the posts. Furthermore, we show that those few are often the most committed conspiracy users and, rather than internalizing debunking information, they often react to it negatively. Indeed, after interacting with debunking posts, users retain, or even increase, their engagement within the conspiracy echo chamber.
Fabiana Zollo
fabiana.zollo@imtlucca.it
Alessandro Bessi
Michela Del Vicario
michela.delvicario@imtlucca.it
Antonio Scala
Guido Caldarelli
guido.caldarelli@imtlucca.it
Louis Shekhtman
Shlomo Havlin
Walter Quattrociocchi
walter.quattrociocchi@imtlucca.it
2015-10-28T14:52:49Z
2016-05-04T09:46:22Z
http://eprints.imtlucca.it/id/eprint/2787
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2787
2015-10-28T14:52:49Z
A Quadratic Programming Algorithm Based on Nonnegative Least Squares with Applications to Embedded Model Predictive Control
This paper proposes an active set method based on nonnegative least squares (NNLS) to solve strictly convex quadratic programming (QP) problems, such as those that arise in Model Predictive Control (MPC). The main idea is to rephrase the QP problem as a Least Distance Problem (LDP) that is solved via an NNLS reformulation. While the method is rather general for solving strictly convex QPs subject to linear inequality constraints, it is particularly useful for embedded MPC because (i) it is very fast, compared to other existing state-of-the-art QP algorithms, (ii) it is very simple to code, requiring only basic arithmetic operations for computing LDL^T decompositions recursively to solve linear systems of equations, and (iii) contrary to iterative methods, it provides the solution or recognizes infeasibility in a finite number of steps.
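The QP-to-LDP-to-NNLS chain described above can be sketched in a few lines of dense linear algebra, following the classical Lawson-Hanson construction. This is a minimal sketch: variable names are ours, and the recursive LDL^T machinery of the actual solver is not reproduced:

```python
import numpy as np
from scipy.optimize import nnls
from scipy.linalg import cholesky, solve_triangular

def qp_via_nnls(Q, c, A, b):
    """Solve min 1/2 x'Qx + c'x  s.t.  A x <= b  (Q positive definite)
    by reduction to a Least Distance Problem solved with NNLS."""
    L = cholesky(Q, lower=True)                  # Q = L L'
    # Change of variables z = L'x + L^{-1}c turns the QP into
    # the LDP:  min ||z||  s.t.  G z >= h
    Linv_c = solve_triangular(L, c, lower=True)
    M = solve_triangular(L, A.T, lower=True).T   # A L^{-T}
    G = -M
    h = -(b + M @ Linv_c)
    # LDP via NNLS (Lawson-Hanson): min_{u>=0} ||E u - f||,
    # with E = [G'; h'] and f = e_{n+1}
    n = Q.shape[0]
    E = np.vstack([G.T, h])
    f = np.zeros(n + 1); f[-1] = 1.0
    u, _ = nnls(E, f)
    r = E @ u - f
    if abs(r[-1]) < 1e-12:
        raise ValueError("QP is infeasible")     # zero residual last entry
    z = -r[:n] / r[-1]
    # Back-substitute: x = L^{-T} (z - L^{-1} c)
    return solve_triangular(L.T, z - Linv_c, lower=False)
```

On a toy problem such as min ½||x||² subject to x₁ + x₂ ≥ 2, the reduction recovers the known minimizer (1, 1).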
Alberto Bemporad
alberto.bemporad@imtlucca.it
2015-10-22T13:41:13Z
2015-10-22T13:41:13Z
http://eprints.imtlucca.it/id/eprint/2779
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2779
2015-10-22T13:41:13Z
Distributed solution of stochastic optimal control problems on GPUs
Stochastic optimal control problems arise in many applications and are, in principle, large-scale, involving up to millions of decision variables. Their applicability in control applications is often limited by the availability of algorithms that can solve them efficiently and within the sampling time of the controlled system.
In this paper we propose a dual accelerated proximal gradient algorithm which is amenable to parallelization, and demonstrate that its GPU implementation affords high speed-up values (with respect to a CPU implementation) and greatly outperforms well-established commercial optimizers such as Gurobi.
Ajay Kumar Sampathirao
Pantelis Sopasakis
pantelis.sopasakis@imtlucca.it
Alberto Bemporad
alberto.bemporad@imtlucca.it
Panagiotis Patrinos
panagiotis.patrinos@imtlucca.it
2015-10-22T13:39:05Z
2016-05-04T10:15:53Z
http://eprints.imtlucca.it/id/eprint/2780
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2780
2015-10-22T13:39:05Z
Scenario-Based Model Predictive Operation Control of Islanded Microgrids
We propose a model predictive control (MPC) approach for the operation of islanded microgrids that takes into account the stochasticity of wind and load forecasts. In comparison to worst case approaches, the probability distribution of the prediction is used to optimize the operation of the microgrid, leading to less conservative solutions. Suitable models for time series forecast are derived and employed to create scenarios. These scenarios and the system measurements are used as inputs for a stochastic MPC, wherein a mixed-integer problem is solved to derive the optimal controls. In the provided case study, the stochastic MPC yields an increase of wind power generation and decrease of conventional generation.
Christian Hans
hans@control.tu-berlin.de
Pantelis Sopasakis
pantelis.sopasakis@imtlucca.it
Alberto Bemporad
alberto.bemporad@imtlucca.it
Raisch Jörg
raisch@control.tu-berlin.de
Carsten Reincke-Collon
2015-10-19T10:16:18Z
2016-02-26T11:47:36Z
http://eprints.imtlucca.it/id/eprint/2778
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2778
2015-10-19T10:16:18Z
Supervised and semi-supervised classifiers for the detection of flood-prone areas
Supervised and semi-supervised machine-learning techniques are applied and compared for the recognition of flood hazard. The learning goal consists in distinguishing between flood-exposed and marginal-risk areas. Kernel-based binary classifiers using six quantitative morphological features, derived from data stored in digital elevation models, are trained to model the relationship between morphology and flood hazard. According to the experimental outcomes, such classifiers are appropriate tools when one is interested in performing an initial low-cost detection of flood-exposed areas, to be possibly refined in successive steps by more time-consuming and costly investigations by experts. The use of these automatic classification techniques is valuable, e.g., in insurance applications, where one is interested in estimating the flood hazard of areas for which limited labeled information is available. The proposed machine-learning techniques are applied to the basin of the Italian Tanaro River. The experimental results show that, for this case study, semi-supervised methods outperform supervised ones when only a few labeled examples are used together with a much larger number of unsupervised ones (the number of labeled examples being the same in the two cases).
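The semi-supervised setting described above (few labels, many unlabeled points) can be sketched with a tiny numpy-only label-propagation classifier over a k-nearest-neighbour graph. This is an illustrative stand-in for the kernel-based semi-supervised methods compared in the paper; the data, graph construction and parameters are hypothetical:

```python
import numpy as np

def knn_label_propagation(X, y, k=7, iters=200):
    """Propagate the few available labels (y = -1 marks unlabeled points)
    over a k-nearest-neighbour graph until the unlabeled points inherit
    the label of the cluster they sit in."""
    n = len(X)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise sq. distances
    np.fill_diagonal(d2, np.inf)                          # exclude self-edges
    nbrs = np.argsort(d2, axis=1)[:, :k]                  # k nearest neighbours
    W = np.zeros((n, n))
    W[np.repeat(np.arange(n), k), nbrs.ravel()] = 1.0
    W = np.maximum(W, W.T)                                # symmetrize the graph
    P = W / W.sum(1, keepdims=True)                       # row-stochastic transition
    classes = np.unique(y[y >= 0])
    F = np.zeros((n, len(classes)))
    clamp = (y[y >= 0, None] == classes[None, :]).astype(float)
    F[y >= 0, :] = clamp
    for _ in range(iters):
        F = P @ F                                         # diffuse label mass
        F[y >= 0, :] = clamp                              # re-clamp known labels
    return classes[F.argmax(1)]
```

With two well-separated feature clusters and one labeled point per class, the propagation labels both clusters correctly, mirroring the paper's observation that unlabeled data can compensate for scarce labels.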
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Rita Morisi
rita.morisi@imtlucca.it
Giorgio Roth
Marcello Sanguineti
Angela Celeste Taramasso
2015-10-19T09:51:47Z
2016-04-05T12:19:09Z
http://eprints.imtlucca.it/id/eprint/2777
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2777
2015-10-19T09:51:47Z
Supervised Learning Modelization and Segmentation of Cardiac Scar in Delayed Enhanced MRI
Delayed Enhancement Magnetic Resonance Imaging can be used to non-invasively differentiate viable from non-viable myocardium within the Left Ventricle in patients suffering from myocardial diseases. Automated segmentation of scarred tissue can be used to accurately quantify the percentage of myocardium affected. This paper presents a method for cardiac scar detection and segmentation based on supervised learning and level set segmentation. First, a model of the appearance of scar tissue is trained using a Support Vector Machines classifier on image-derived descriptors. Based on the areas detected by the classifier, an accurate segmentation is then performed using level sets.
Laura Lara
Sergio Vera
Frederic Perez
Nico Lanconelli
Rita Morisi
rita.morisi@imtlucca.it
Bruno Donini
Dario Turco
Cristiana Corsi
Claudio Lamberti
Giovana Gavidia
Maurizio Bordone
Eduardo Soudah
Nick Curzen
James Rosengarten
John Morgan
Javier Herrero
Miguel A. González Ballester
2015-10-19T09:40:53Z
2016-04-06T08:50:40Z
http://eprints.imtlucca.it/id/eprint/2776
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2776
2015-10-19T09:40:53Z
Binary and Multi-class Parkinsonian Disorders Classification Using Support Vector Machines
This paper presents a method for automated Parkinsonian disorders classification using Support Vector Machines (SVMs). Magnetic Resonance quantitative markers are used as features to train SVMs with the aim of automatically diagnosing patients with different Parkinsonian disorders. Binary and multi-class classification problems are investigated and applied with the aim of automatically distinguishing between subjects with different forms of disorders. A ranking feature selection method is also used as a preprocessing step in order to assess the significance of the different features in diagnosing Parkinsonian disorders. In particular, it turns out that the features selected as the most meaningful ones reflect the opinions of clinicians about the most important markers in the diagnosis of these disorders. The results achieved in the classification phase are promising: in the two multi-class classification problems investigated, average accuracies of 81% and 90% are obtained, while in the binary scenarios taken into consideration, the accuracy is never less than 88%.
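The described pipeline, ranking-based feature selection followed by a multi-class SVM, can be sketched with scikit-learn on synthetic data; the markers, class shifts, and dimensions below are hypothetical and only the first two features carry class information:

```python
import numpy as np
from sklearn.feature_selection import f_classif
from sklearn.svm import SVC

rng = np.random.default_rng(2)

# Illustrative data: 3 classes of "patients", 5 MR-derived markers, where
# only markers 0 and 1 are informative (synthetic, not the study's data).
n_per_class, n_features = 60, 5
X, y = [], []
for c in range(3):
    block = rng.normal(0.0, 1.0, size=(n_per_class, n_features))
    block[:, :2] += 2.0 * c          # informative markers shift per class
    X.append(block)
    y += [c] * n_per_class
X, y = np.vstack(X), np.array(y)

# Ranking-based feature selection: keep the markers with highest F-score.
scores, _ = f_classif(X, y)
top2 = np.argsort(scores)[-2:]

# Multi-class SVM on the selected markers (one-vs-one under the hood).
clf = SVC(kernel="linear").fit(X[:, top2], y)
accuracy = clf.score(X[:, top2], y)
```

The ranking step recovers exactly the planted informative markers, mirroring the paper's observation that selected features matched clinical opinion.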
Rita Morisi
rita.morisi@imtlucca.it
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Nico Lanconelli
Stefano Zanigni
David Neil Manners
Claudia Testa
Stefania Evangelisti
Laura Ludovica Gramegna
Claudio Bianchini
Pietro Cortelli
Caterina Tonon
Raffaele Lodi
2015-10-19T09:31:34Z
2015-10-19T09:31:34Z
http://eprints.imtlucca.it/id/eprint/2775
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2775
2015-10-19T09:31:34Z
Semi-automated scar detection in delayed enhanced cardiac magnetic resonance images
Late enhancement cardiac magnetic resonance imaging (MRI) can precisely delineate myocardial scars. We present a semi-automated method for detecting scars in cardiac MRI. This method has the potential to improve routine clinical practice, since quantification is not currently offered due to time constraints. A first segmentation step was developed for extracting the target regions for potential scar and determining pre-candidate objects. Pattern recognition methods are then applied to the segmented images in order to detect the position of the myocardial scar. The database of late gadolinium enhancement (LE) cardiac MR images consists of 111 blocks of images acquired from 63 patients at the University Hospital Southampton NHS Foundation Trust (UK). At least one scar was present for each patient, and all the scars were manually annotated by an expert. A group of images (around one third of the entire set) was used for training the system, which was subsequently tested on all the remaining images. Four different classifiers were trained (Support Vector Machine (SVM), k-nearest neighbor (KNN), Bayesian and feed-forward neural network) and their performance was evaluated using Free-response Receiver Operating Characteristic (FROC) analysis. Feature selection was implemented for analyzing the importance of the various features. The proposed segmentation method allowed the region affected by the scar to be extracted correctly in 96% of the blocks of images. The SVM was shown to be the best classifier for our task, and our system reached an overall sensitivity of 80% with fewer than 7 false positives per patient. The method we present provides an effective tool for the detection of scars on cardiac MRI. This may be of value in clinical practice by permitting routine reporting of scar quantification.
Rita Morisi
rita.morisi@imtlucca.it
Bruno Donini
Nico Lanconelli
James Rosengarten
John Morgan
Stephen Harden
Nick Curzen
2015-10-19T09:22:53Z
2015-10-19T09:22:53Z
http://eprints.imtlucca.it/id/eprint/2774
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2774
2015-10-19T09:22:53Z
Sparse Solutions to the Average Consensus Problem via Various Regularizations of the Fastest Mixing Markov-Chain Problem
In the consensus problem on multi-agent systems, in which the states of the agents represent opinions, the agents aim at reaching a common opinion (or consensus state) through local exchange of information. An important design problem is to choose the degree of interconnection of the subsystems to achieve a good trade-off between a small number of interconnections and a fast convergence to the consensus state, which is the average of the initial opinions under mild conditions. This paper addresses this problem through l₁-norm and l₀-“pseudo-norm” regularized versions of the well-known Fastest Mixing Markov-Chain (FMMC) problem. We show that such versions can be interpreted as robust forms of the FMMC problem and provide results to guide the choice of the regularization parameter.
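The quantity at the heart of the FMMC problem is the second-largest eigenvalue modulus (SLEM) of the transition matrix, which governs how fast repeated local averaging contracts toward the average consensus state. A small numpy illustration on a 4-cycle (the graph and weights are ours, not the paper's):

```python
import numpy as np

def slem(P):
    """Second-largest eigenvalue modulus of a symmetric stochastic matrix:
    the quantity minimized by the Fastest Mixing Markov-Chain problem."""
    eig = np.sort(np.abs(np.linalg.eigvalsh(P)))
    return eig[-2]

# Symmetric random walk on a 4-cycle (illustrative example graph).
P = np.array([[0.5 , 0.25, 0.0 , 0.25],
              [0.25, 0.5 , 0.25, 0.0 ],
              [0.0 , 0.25, 0.5 , 0.25],
              [0.25, 0.0 , 0.25, 0.5 ]])

mu = slem(P)   # consensus error contracts roughly like mu**t

# Repeated averaging drives all states toward the average of the
# initial opinions (here 0.25).
x = np.array([1.0, 0.0, 0.0, 0.0])
for _ in range(50):
    x = P @ x
```

The regularized versions studied in the paper trade a slightly larger SLEM (slower mixing) for a sparser interconnection pattern.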
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Rita Morisi
rita.morisi@imtlucca.it
Alberto Bemporad
alberto.bemporad@imtlucca.it
2015-10-13T08:07:33Z
2015-10-13T08:08:43Z
http://eprints.imtlucca.it/id/eprint/2771
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2771
2015-10-13T08:07:33Z
Unsupervised Myocardial Segmentation for Cardiac MRI
Though unsupervised segmentation was a de-facto standard for cardiac MRI segmentation early on, the recent cardiac MRI segmentation literature has favored fully supervised techniques such as dictionary learning and atlas-based methods. However, the benefits of unsupervised techniques, e.g., no need for large amounts of training data and better potential for handling variability in anatomy and image contrast, are more evident with emerging cardiac MR modalities. For example, CP-BOLD is a new MRI technique that has been shown to detect ischemia without any contrast, at stress but also at rest conditions. Although CP-BOLD looks similar to standard CINE, changes in myocardial intensity patterns and shape across cardiac phases, due to the heart's motion, the BOLD effect, and artifacts, undermine the assumptions of fully supervised segmentation techniques, resulting in a significant drop in segmentation accuracy. In this paper, we present a fully unsupervised technique for segmenting myocardium from the background in both standard CINE MR and CP-BOLD MR. We combine appearance with motion information (obtained via optical flow) in a dictionary learning framework to sparsely represent important features in a low-dimensional space and separate myocardium from background accordingly. Our fully automated method learns background-only models, and a one-class classifier provides the myocardial segmentation. The advantages of the proposed technique are demonstrated on a dataset containing CP-BOLD MR and standard CINE MR image sequences acquired in baseline and ischemic conditions across 10 canine subjects, where our method outperforms state-of-the-art supervised segmentation techniques in CP-BOLD MR and performs on par for standard CINE MR.
Anirban Mukhopadhyay
Ilkay Oksuz
ilkay.oksuz@imtlucca.it
Marco Bevilacqua
Rohan Dharmakumar
Sotirios A. Tsaftaris
2015-10-13T08:04:03Z
2015-10-13T08:04:03Z
http://eprints.imtlucca.it/id/eprint/2770
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2770
2015-10-13T08:04:03Z
Dictionary Learning Based Image Descriptor for Myocardial Registration of CP-BOLD MR
Cardiac Phase-resolved Blood Oxygen-Level-Dependent (CP-BOLD) MRI is a new contrast agent- and stress-free imaging technique for the assessment of myocardial ischemia at rest. Precise registration among the cardiac phases in this cine-type acquisition is essential for automating the analysis of images from this technique, since it can potentially lead to better specificity of ischemia detection. However, inconsistency in myocardial intensity patterns and changes in myocardial shape due to the heart's motion lead to low registration performance for state-of-the-art methods. This low accuracy can be explained by the lack of distinguishable features in CP-BOLD and inappropriate metric definitions in current intensity-based registration frameworks. In this paper, sparse representations, defined by a discriminative dictionary learning approach for source and target images, are used to improve myocardial registration. The method combines appearance with Gabor and HOG features in a dictionary learning framework to sparsely represent features in a low-dimensional space. The sum of absolute differences of these distinctive sparse representations is used to define a similarity term in the registration framework. The proposed approach is validated on a dataset of CP-BOLD MR and standard CINE MR acquired in baseline and ischemic conditions across 10 canines.
Ilkay Oksuz
ilkay.oksuz@imtlucca.it
Anirban Mukhopadhyay
Marco Bevilacqua
Rohan Dharmakumar
Sotirios A. Tsaftaris
2015-10-09T11:09:10Z
2015-10-09T11:09:10Z
http://eprints.imtlucca.it/id/eprint/2768
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2768
2015-10-09T11:09:10Z
Effect of BOLD Contrast on Myocardial Registration
Cardiac phase-resolved Blood-Oxygen-Level-Dependent (CP-BOLD) MRI is a new approach for detecting ischemia at rest. Currently, disease assessment relies on segmental analysis and uses only a few images in the phase-resolved acquisition. It is expected that using all phases can permit pixel-level characterization of CP-BOLD MRI. In this study, state-of-the-art image registration techniques are evaluated on cardiac BOLD MRI data for the first time. The results show that cardiac phase-dependent variations in myocardial BOLD contrast in CP-BOLD images create a statistically significant decrease in accuracy compared to standard CINE MR images acquired under conditions of health and myocardial ischemia.
Ilkay Oksuz
ilkay.oksuz@imtlucca.it
Anirban Mukhopadhyay
Marco Bevilacqua
Hsin-Jung Yang
Rohan Dharmakumar
Sotirios A. Tsaftaris
2015-10-09T11:03:42Z
2015-10-09T12:31:26Z
http://eprints.imtlucca.it/id/eprint/2767
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2767
2015-10-09T11:03:42Z
Dictionary-based Support Vector Machines for Unsupervised Ischemia Detection at Rest with CP-BOLD Cardiac MRI
Cardiac Phase-resolved Blood-Oxygen-Level-Dependent (CP-BOLD) MRI has recently been demonstrated to detect ongoing myocardial ischemia at rest, taking advantage of spatio-temporal patterns in myocardial signal intensities, which are modulated by the presence of disease. However, this approach requires significant post-processing to detect the disease, and to this day only a few images of the acquisition are used, coupled with fixed thresholds, to establish biomarkers. We propose a threshold-free unsupervised approach, based on dictionary learning and one-class support vector machines, which can generate a probabilistic ischemia likelihood map.
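The one-class idea can be sketched as follows: train only on "healthy" feature vectors and read an ischemia likelihood off the signed distance to the learned boundary, with no fixed threshold. All data below are synthetic stand-ins for the paper's sparse-code features:

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(3)

# Illustrative per-pixel feature vectors: "healthy" training samples, plus
# a test set mixing healthy and intensity-shifted "ischemic" samples.
healthy_train = rng.normal(0.0, 1.0, size=(300, 4))
healthy_test = rng.normal(0.0, 1.0, size=(100, 4))
ischemic_test = rng.normal(3.0, 1.0, size=(100, 4))

# The one-class SVM learns the healthy region only; no ischemic labels
# and no fixed threshold are needed, matching the unsupervised setting.
oc = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05).fit(healthy_train)

# Larger distance outside the learned region -> higher ischemia likelihood.
score_h = -oc.decision_function(healthy_test)
score_i = -oc.decision_function(ischemic_test)
```

Passing the scores through a monotone squashing function would turn them into the probabilistic likelihood map the abstract mentions.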
Marco Bevilacqua
Anirban Mukhopadhyay
Ilkay Oksuz
ilkay.oksuz@imtlucca.it
Cristian Rusu
Rohan Dharmakumar
Sotirios A. Tsaftaris
2015-10-09T10:30:59Z
2015-10-09T11:10:32Z
http://eprints.imtlucca.it/id/eprint/2766
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2766
2015-10-09T10:30:59Z
Data Driven Feature Learning for Representation of Myocardial BOLD MR Images
Cardiac phase-dependent variations of myocardial signal intensities in Cardiac Phase-resolved Blood-Oxygen-Level-Dependent (CP-BOLD) MRI can be exploited for the identification of ischemic territories. This technique requires segmentation to isolate the myocardium. However, spatio-temporal variations of BOLD contrast prove challenging for existing automated myocardial segmentation techniques, because these were developed for acquisitions where contrast variations in the myocardium are minimal. Appropriate feature learning mechanisms are necessary to best represent appearance and texture in CP-BOLD data. Here we propose and validate a feature learning technique based on a multiscale dictionary model that learns to sparsely represent effective patterns under healthy and ischemic conditions.
Anirban Mukhopadhyay
Marco Bevilacqua
Ilkay Oksuz
ilkay.oksuz@imtlucca.it
Rohan Dharmakumar
Sotirios A. Tsaftaris
2015-09-16T11:12:59Z
2015-09-16T11:35:57Z
http://eprints.imtlucca.it/id/eprint/2748
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2748
2015-09-16T11:12:59Z
The significance of image compression in plant phenotyping applications
We are currently witnessing increasingly high throughput in image-based plant phenotyping experiments. The majority of imaging data are collected using complex automated procedures and are then post-processed to extract phenotyping-related information. In this article, we show that the image compression used in such procedures may compromise phenotyping results, and that this needs to be taken into account. We use three illuminating proof-of-concept experiments to demonstrate that compression (especially in the most common lossy JPEG form) affects measurements of plant traits and that the errors introduced can be high. We also systematically explore how compression affects measurement fidelity, quantified as effects on image quality, as well as errors in extracted plant visual traits. To do so, we evaluate a variety of image-based phenotyping scenarios, including size and colour of shoots, leaf and root growth. To show that even visual impressions can be used to assess compression effects, we use root system images as examples. Overall, we find that compression has a considerable effect on several types of analyses (whether visual or quantitative) and that proper care is necessary to ensure that this choice does not affect biological findings. In order to avoid or at least minimise introduced measurement errors, for each scenario we derive recommendations and provide guidelines on how to identify suitable compression options in practice. We also find that certain compression choices can offer beneficial returns in terms of reducing the amount of data storage without compromising phenotyping results. This may enable even higher-throughput experiments in the future.
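A toy version of the proof-of-concept idea: measure a simple trait (projected shoot area via thresholding) before and after a crude lossy "codec" (uniform quantisation standing in for JPEG) and observe the induced error. All image values are synthetic and chosen so that many plant pixels sit near the threshold:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic greyscale "plant image": a bright blob (plant) on darker soil,
# plus noise. Purely illustrative; the paper's experiments use real JPEG.
img = rng.normal(60.0, 8.0, size=(64, 64))
yy, xx = np.mgrid[:64, :64]
blob = (yy - 32) ** 2 + (xx - 32) ** 2 < 15 ** 2
img[blob] += 45.0                       # plant pixels near the threshold
img = np.clip(img, 0.0, 255.0)

def plant_area(image, threshold=100.0):
    """Trait: number of pixels brighter than the threshold."""
    return int((image > threshold).sum())

def lossy(image, step):
    """Crude lossy 'codec': uniform quantisation with the given step."""
    return np.round(image / step) * step

area_ref = plant_area(img)
errors = {step: abs(plant_area(lossy(img, step)) - area_ref)
          for step in (2, 16, 64)}
```

Even this crude model shows measurable trait errors appearing as soon as quantisation moves pixels across the decision threshold, which is the effect the paper quantifies for real codecs.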
Massimo Minervini
massimo.minervini@imtlucca.it
Hanno Scharr
Sotirios A. Tsaftaris
sotirios.tsaftaris@imtlucca.it
2015-09-16T11:03:23Z
2015-09-16T11:03:23Z
http://eprints.imtlucca.it/id/eprint/2746
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2746
2015-09-16T11:03:23Z
Data-driven feature learning for myocardial segmentation of CP-BOLD MRI
Cardiac Phase-resolved Blood Oxygen-Level-Dependent (CP-BOLD) MR is capable of diagnosing an ongoing ischemia by detecting changes in myocardial intensity patterns at rest, without any contrast or stress agents. Visualizing and detecting these changes require significant post-processing, including myocardial segmentation for isolating the myocardium. However, changes in myocardial intensity pattern and myocardial shape due to the heart's motion challenge automated standard CINE MR myocardial segmentation techniques, resulting in a significant drop in segmentation accuracy. We hypothesize that the main reason behind this phenomenon is the lack of discernible features. In this paper, a multiscale discriminative dictionary learning approach is proposed for supervised learning and sparse representation of the myocardium, to improve myocardial feature selection. The technique is validated on a challenging dataset of CP-BOLD MR and standard CINE MR acquired in baseline and ischemic conditions across 10 canine subjects. The proposed method significantly outperforms standard cardiac segmentation techniques, including segmentation via registration, level sets, and supervised methods for myocardial segmentation.
Anirban Mukhopadhyay
anirban.mukhopadhyay@imtlucca.it
Ilkay Oksuz
ilkay.oksuz@imtlucca.it
Marco Bevilacqua
marco.bevilacqua@imtlucca.it
Sotirios A. Tsaftaris
sotirios.tsaftaris@imtlucca.it
2015-09-04T10:26:42Z
2016-05-05T13:50:59Z
http://eprints.imtlucca.it/id/eprint/2745
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2745
2015-09-04T10:26:42Z
An interactive tool for semi-automated leaf annotation
High throughput plant phenotyping is emerging as a necessary step towards meeting agricultural demands of the future. Central to its success is the development of robust computer vision algorithms that analyze images and extract phenotyping information to be associated with genotypes and environmental conditions for identifying traits suitable for further development. Obtaining leaf-level quantitative data is important towards better understanding this interaction. While certain efforts have been made to obtain such information in an automated fashion, further innovations are necessary. In this paper we present an annotation tool that can be used to semi-automatically segment leaves in images of rosette plants. This tool, which is designed to run both stand-alone and in cloud-based environments, can be used to annotate data directly for the study of plant and leaf growth or to provide annotated datasets for learning-based approaches to extracting phenotypes from images. It relies on an interactive graph-based segmentation algorithm to propagate expert-provided priors (in the form of pixels) to the rest of the image, using the random walk formulation to find a good per-leaf segmentation. To evaluate the tool we use standardized datasets available from the LSC and LCC 2015 challenges, achieving an average leaf segmentation accuracy of almost 97% using scribbles as annotations. The tool and source code are publicly available at http://www.phenotiki.com and as a GitHub repository at https://github.com/phenotiki/LeafAnnotationTool.
Massimo Minervini
massimo.minervini@imtlucca.it
Mario Valerio Giuffrida
valerio.giuffrida@imtlucca.it
Sotirios A. Tsaftaris
sotirios.tsaftaris@imtlucca.it
2015-09-04T10:24:54Z
2016-05-05T13:48:56Z
http://eprints.imtlucca.it/id/eprint/2744
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2744
2015-09-04T10:24:54Z
Learning to Count Leaves in Rosette Plants
Counting the number of leaves in plants is important for plant phenotyping, since it can be used to assess plant growth stages. We propose a learning-based approach for counting leaves in rosette (model) plants. We relate image-based descriptors learned in an unsupervised fashion to leaf counts using a supervised regression model. To take advantage of the circular and coplanar arrangement of leaves, and also to introduce scale and rotation invariance, we learn features in a log-polar representation. Image patches extracted in this log-polar domain are provided to K-means, which builds a codebook in an unsupervised manner. Feature codes are obtained by projecting patches on the codebook using the triangle encoding, introducing both sparsity and a specifically designed representation. A global, per-plant image descriptor is obtained by pooling local features in specific regions of the image. Finally, we provide the global descriptors to a support vector regression framework to estimate the number of leaves in a plant. We evaluate our method on datasets of the Leaf Counting Challenge (LCC), containing images of Arabidopsis and tobacco plants. Experimental results show that on average we reduce the absolute counting error by 40% with respect to the winner of the 2014 edition of the challenge, a counting-via-segmentation method. When compared to state-of-the-art density-based approaches to counting, about 75% fewer counting errors are observed on Arabidopsis image data. Our findings suggest that it is possible to treat leaf counting as a regression problem, requiring as input only the total leaf count per training image.
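The counting-as-regression step can be sketched directly: map a per-plant descriptor to a leaf count with support vector regression. The descriptor below is a synthetic stand-in for the pooled log-polar features (one informative dimension plus nuisance dimensions; the real descriptors come from the codebook pipeline described above):

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(5)

# Hidden ground-truth leaf counts and an illustrative 3-dim descriptor
# whose first component correlates with the count.
counts = rng.integers(3, 15, size=120)
X = np.column_stack([
    counts + rng.normal(0, 0.5, size=counts.size),   # informative feature
    rng.normal(0, 1, size=counts.size),              # nuisance feature
    rng.normal(0, 1, size=counts.size),              # nuisance feature
])

# Support vector regression from descriptor to leaf count; only the total
# count per training image is needed as supervision, as in the paper.
reg = SVR(kernel="rbf", C=10.0).fit(X, counts)
pred = np.rint(reg.predict(X))
mean_abs_err = np.abs(pred - counts).mean()
```

Rounding the regressor's output to the nearest integer yields the final per-plant count estimate.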
Mario Valerio Giuffrida
valerio.giuffrida@imtlucca.it
Massimo Minervini
massimo.minervini@imtlucca.it
Sotirios A. Tsaftaris
sotirios.tsaftaris@imtlucca.it
2015-09-03T14:31:15Z
2015-11-02T09:40:23Z
http://eprints.imtlucca.it/id/eprint/2743
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2743
2015-09-03T14:31:15Z
Combining behavioural types with security analysis
Today’s software systems are highly distributed and interconnected, and they increasingly rely on communication to achieve their goals; due to their societal importance, security and trustworthiness are crucial aspects for the correctness of these systems. Behavioural types, which extend data types by describing also the structured behaviour of programs, are a widely studied approach to the enforcement of correctness properties in communicating systems. This paper offers a unified overview of proposals based on behavioural types which are aimed at the analysis of security properties.
Massimo Bartoletti
Ilaria Castellani
Pierre-Malo Deniélou
Mariangiola Dezani-Ciancaglini
Silvia Ghilezan
Jovanka Pantović
Jorge A. Pérez
Peter Thiemann
Bernardo Toninho
Hugo Torres Vieira
hugo.torresvieira@imtlucca.it
2015-09-03T08:15:31Z
2016-05-06T14:07:46Z
http://eprints.imtlucca.it/id/eprint/2740
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2740
2015-09-03T08:15:31Z
Computationally efficient data and application driven color transforms for the compression and enhancement of images and video
An important step in color image or video coding and enhancement is the linear transformation of input (typically RGB) data into a color space more suitable for compression, subsequent analysis, or visualization. The choice of this transform becomes even more critical when operating in distributed and low-computational-power environments, such as visual sensor networks or remote sensing. Data-driven transforms are rarely used due to increased complexity. Most schemes adopt fixed transforms to decorrelate the color channels, which are then processed independently. Here we propose two frameworks to find appropriate data-driven transforms in different settings. The first, named approximate Karhunen-Loève Transform (aKLT), performs comparably to the KLT at a fraction of the computational complexity, thus favoring adoption on sensors and resource-constrained devices. Furthermore, we consider an application-aware setting in which an expert system (e.g., a classifier) analyzes imaging data at the receiver's end. In a compression context, distortion may jeopardize the accuracy of the analysis. Since the KLT is not optimal in this setting, we investigate formulations that maximize post-compression expert system performance. Relaxing decorrelation and energy compactness constraints, a second transform can be obtained offline with supervised learning methods. Finally, we propose transforms that accommodate both constraints and are found using regularized optimization.
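A data-driven decorrelating transform of the kind discussed (the exact KLT) is just the eigendecomposition of the channel covariance; the sketch below shows energy compaction on synthetic correlated "RGB" data. The cheaper aKLT approximation proposed in the paper is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic correlated "RGB" pixels (columns R, G, B): real images have
# similarly correlated channels, which is what the KLT exploits.
base = rng.normal(0, 1, size=(5000, 1))
rgb = np.hstack([base + 0.1 * rng.normal(size=(5000, 1)) for _ in range(3)])

# Exact KLT: eigenvectors of the channel covariance give the decorrelating
# transform; signal energy compacts into the first component.
cov = np.cov(rgb, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
klt = rgb @ eigvecs[:, ::-1]          # components by decreasing energy

energy = klt.var(axis=0)
compaction = energy[0] / energy.sum() # fraction carried by 1st component
```

The computational cost the paper targets is exactly this covariance estimation and eigendecomposition, which the aKLT approximates.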
Massimo Minervini
massimo.minervini@imtlucca.it
Cristian Rusu
Sotirios A. Tsaftaris
sotirios.tsaftaris@imtlucca.it
2015-07-27T08:02:39Z
2015-07-27T08:02:39Z
http://eprints.imtlucca.it/id/eprint/2734
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2734
2015-07-27T08:02:39Z
Insensitivity to service-time distributions for fluid queueing models
We study fluid limits based on ordinary differential equations (ODEs) for Markovian queueing models where nonexponential service times are fit by appropriate Coxian distributions to match their first and second moments. We focus on a heavy-load regime, whereby the fluid limit of the queue-length process of the nonexponential queue estimates a bottleneck situation. Under this condition, we show that the ODE solution admits a steady state which is insensitive to the service-time distribution: The ODE steady state only depends on the mean service times. By contrast, the steady-state average performance measures computed by Markovian analysis are in general dependent on the higher-order moments of the service-time distribution. A numerical investigation shows that, given any two Markovian queueing models with Coxian-distributed service times with the same mean and different variance, the model with lower variance converges more rapidly to the (same) fluid limit than the one with higher variance.
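Fitting a nonexponential service time by a Coxian distribution matching its first two moments can be made concrete. A standard two-phase fit (a common textbook choice, not necessarily the one used in the paper) for target mean m and squared coefficient of variation c² ≥ 0.5 is μ₁ = 2/m, p = 1/(2c²), μ₂ = pμ₁; the sketch checks it by Monte Carlo:

```python
import numpy as np

def coxian2_fit(mean, scv):
    """Two-phase Coxian matching a given mean and squared coefficient of
    variation (scv >= 0.5): phase rates (mu1, mu2) and continuation
    probability p. A standard two-moment fit, illustrative only."""
    assert scv >= 0.5
    mu1 = 2.0 / mean
    p = 1.0 / (2.0 * scv)
    mu2 = p * mu1
    return mu1, mu2, p

# Fit a service time with mean 1 and SCV 2, then verify by simulation:
# phase 1 is exponential(mu1); with probability p a second exponential
# phase with rate mu2 is appended.
mu1, mu2, p = coxian2_fit(1.0, 2.0)
rng = np.random.default_rng(7)
n = 200_000
samples = rng.exponential(1.0 / mu1, n)
go_on = rng.random(n) < p
samples[go_on] += rng.exponential(1.0 / mu2, go_on.sum())
```

Two such fits sharing the mean but differing in SCV are exactly the pairs whose fluid limits the paper shows to reach the same steady state.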
Max Tschaikowski
max.tschaikowski@imtlucca.it
Mirco Tribastone
mirco.tribastone@imtlucca.it
2015-07-27T07:57:24Z
2015-07-27T07:57:24Z
http://eprints.imtlucca.it/id/eprint/2733
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2733
2015-07-27T07:57:24Z
A partial-differential approximation for spatial stochastic process algebra
We study a spatial framework for process algebra with ordinary differential equation (ODE) semantics. We consider an explicit mobility model over a 2D lattice where processes may walk to neighbouring regions independently, and interact with each other when they are in same region. The ODE system size will grow linearly with the number of regions, hindering the analysis in practice. Assuming an unbiased random walk, we introduce an approximation in terms of a system of reaction-diffusion partial differential equations, of size independent of the lattice granularity. Numerical tests on a spatial version of the generalised Lotka-Volterra model show high accuracy and very competitive runtimes against ODE solutions for fine-grained lattices.
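The passage from an unbiased lattice random walk to a diffusion term can be illustrated with the discrete heat equation: each step moves a fraction of the mass in a region to its four neighbours via the lattice Laplacian. The grid size, hop rate, and periodic boundary below are illustrative choices of ours:

```python
import numpy as np

# Unbiased random walk of process density on a 2D lattice, the regime in
# which a reaction-diffusion PDE approximation applies.
n = 20
density = np.zeros((n, n))
density[n // 2, n // 2] = 1.0          # all processes start in one region
rate = 0.1                              # per-step hop fraction

for _ in range(200):
    # Discrete Laplacian with periodic boundary (np.roll wraps around).
    lap = (np.roll(density, 1, 0) + np.roll(density, -1, 0) +
           np.roll(density, 1, 1) + np.roll(density, -1, 1) - 4 * density)
    density += rate / 4 * lap           # explicit heat-equation step

total = density.sum()                   # mass is conserved
spread = density.max()                  # peak flattens as mass diffuses
```

Refining the lattice while scaling the hop rate recovers the continuum diffusion term, which is why the PDE system's size is independent of the lattice granularity.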
Max Tschaikowski
max.tschaikowski@imtlucca.it
Mirco Tribastone
mirco.tribastone@imtlucca.it
2015-07-24T12:54:54Z
2015-07-24T12:54:54Z
http://eprints.imtlucca.it/id/eprint/2731
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2731
2015-07-24T12:54:54Z
A unified framework for differential aggregations in Markovian process algebra
Fluid semantics for Markovian process algebra have recently emerged as a computationally attractive approximate way of reasoning about the behaviour of stochastic models of large-scale systems. This interpretation is particularly convenient when sequential components characterised by small local state spaces are present in many independent copies. While the traditional Markovian interpretation causes state-space explosion, fluid semantics is independent of the multiplicities of the sequential components present in the model, just associating a single ordinary differential equation (ODE) with each local state. In this paper we analyse the case of a process algebra model inducing a large ODE system. Previous work, known as exact fluid lumpability, requires two symmetries: ODE aggregation is possible for processes that i) are isomorphic and that ii) are present with the same multiplicities. We first relax the latter requirement by introducing the notion of ordinary fluid lumpability, which yields an ODE system where the sum of the aggregated variables is preserved exactly. Then, we consider approximate variants of both notions of lumpability which make nearby processes symmetric after a perturbation of their parameters. We prove that small perturbations yield nearby differential trajectories. We carry out our study in the context of a process algebra that unifies two synchronisation semantics that are well studied in the literature, useful for the modelling of computer systems and chemical networks, respectively. In both cases, we provide numerical evidence which shows that, in practice, many heterogeneous processes can be aggregated with negligible errors.
Max Tschaikowski
max.tschaikowski@imtlucca.it
Mirco Tribastone
mirco.tribastone@imtlucca.it
2015-07-24T12:44:02Z
2016-04-13T08:34:53Z
http://eprints.imtlucca.it/id/eprint/2730
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2730
2015-07-24T12:44:02Z
Approximate reduction of heterogenous nonlinear models with differential hulls
We present a model reduction technique for a class of nonlinear ordinary differential equation (ODE) models of heterogeneous systems, where heterogeneity is expressed in terms of classes of state variables having the same dynamics structurally, but which are characterized by distinct parameters. To this end, we first build a system of differential inequalities that provides lower and upper bounds for each original state variable, but such that it is homogeneous in its parameters. Then, we use two methods for exact aggregation of ODEs to exploit this homogeneity, yielding a smaller model of size independent of the number of heterogeneous classes. We apply this technique to two case studies: a multiclass queuing network and a model of epidemics spread.
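The differential-hull idea can be sketched on the simplest heterogeneous family, dx_i/dt = -k_i x_i with distinct rates k_i: replacing the rates by their extremes yields a two-variable system that bounds every trajectory regardless of how many parameter classes there are. The rates, integrator, and step size below are our illustrative choices:

```python
import numpy as np

# Heterogeneous system: many exponential-decay variables with distinct
# rates k_i (structurally identical dynamics, different parameters).
rates = np.linspace(0.5, 1.5, 50)       # 50 illustrative parameter classes
dt, steps = 0.01, 300
x = np.ones_like(rates)                 # original heterogeneous state
lo = hi = 1.0                           # homogeneous bounding variables

for _ in range(steps):
    x = x + dt * (-rates * x)           # dx_i/dt = -k_i x_i (Euler)
    lo = lo + dt * (-rates.max() * lo)  # fastest decay -> lower bound
    hi = hi + dt * (-rates.min() * hi)  # slowest decay -> upper bound

# The two-variable hull sandwiches every heterogeneous trajectory,
# independently of the number of parameter classes.
```

Exact ODE aggregation is then applied to the homogeneous bounding system, which is where the size reduction comes from.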
Max Tschaikowski
max.tschaikowski@imtlucca.it
Mirco Tribastone
mirco.tribastone@imtlucca.it
2015-07-22T10:28:33Z
2015-09-22T08:14:50Z
http://eprints.imtlucca.it/id/eprint/2729
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2729
2015-07-22T10:28:33Z
On Expressiveness and Behavioural Theory of Attribute-based Communication
Attribute-based communication is an interesting alternative to broadcast and binary communication when providing abstract models for so-called Collective Adaptive Systems, which consist of a large number of interacting components that dynamically adjust and combine their behavior to achieve specific goals. A basic process calculus, named AbC, is introduced whose primary primitive for interaction is attribute-based communication. An AbC system consists of a set of parallel components, each of which is equipped with a set of attributes. Communication takes place in an implicit multicast fashion, and interactions among components are dynamically established by taking into account "connections" as determined by predicates over the attributes exposed by components. First, the syntax and the semantics of AbC are presented; then the expressiveness and effectiveness of the calculus are demonstrated both in terms of the ability to model scenarios featuring collaboration, reconfiguration, and adaptation, and of the possibility of encoding a process calculus for broadcasting channel-based communication and other communication paradigms. Behavioral equivalences for AbC are introduced for establishing formal relationships between different descriptions of the same system.
Yehia Moustafa Abd Alrahman
yehia.abdalrahman@imtlucca.it
Rocco De Nicola
r.denicola@imtlucca.it
Michele Loreti
2015-06-26T12:53:49Z
2015-06-30T08:21:38Z
http://eprints.imtlucca.it/id/eprint/2720
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2720
2015-06-26T12:53:49Z
Dynamic role authorization in multiparty conversations
Protocol specifications often identify the roles involved in communications. In multiparty protocols that involve task delegation it is often useful to consider settings in which different sites may act on behalf of a single role. It is then crucial to control the roles that the different parties are authorized to represent, including the case in which role authorizations are determined only at runtime. Building on previous work on conversation types with flexible role assignment, here we report initial results on a typed framework for the analysis of multiparty communications with dynamic role authorization and delegation. In the underlying process model, communication prefixes are annotated with role authorizations and authorizations can be passed around. We extend the conversation type system so as to statically distinguish processes that never incur authorization errors. The proposed static discipline guarantees that processes are always authorized to communicate on behalf of an intended role, also covering the case in which authorizations are dynamically passed around in messages.
Silvia Ghilezan
Svetlana Jakšić
Jovanka Pantović
Jorge A. Pérez
Hugo Torres Vieira
hugo.torresvieira@imtlucca.it
2015-06-26T12:53:33Z
2015-06-26T12:53:33Z
http://eprints.imtlucca.it/id/eprint/2721
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2721
2015-06-26T12:53:33Z
Extensionality of spatial observations in distributed systems
We discuss the tensions between intensionality and extensionality of spatial observations in distributed systems, showing that there are natural models where extensional observational equivalences may be characterized by spatial logics, including the composition and void operators. Our results support the claim that spatial observations do not need to be always considered intensional, even if expressive enough to talk about the structure of systems. For simplicity, our technical development is based on a minimalist process calculus, that already captures the main features of distributed systems, namely local synchronous communication, local computation, asynchronous remote communication, and partial failures.
Luis Caires
Hugo Torres Vieira
hugo.torresvieira@imtlucca.it
2015-06-26T12:53:17Z
2015-06-26T12:53:17Z
http://eprints.imtlucca.it/id/eprint/2722
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2722
2015-06-26T12:53:17Z
An observational model for spatial logics
Spatiality is an important aspect of distributed systems because their computations depend both on the dynamic behaviour and on the structure of their components. Spatial logics have been proposed as the formal device for expressing spatial properties of systems.
We define CCS∥, a CCS-like calculus whose semantics allows one to observe spatial aspects of systems on the top of which we define models of the spatial logic. Our alternative definition of models is proved equivalent to the standard one. Furthermore, logical equivalence is characterized in terms of the bisimilarity of CCS∥.
Emilio Tuosto
Hugo Torres Vieira
hugo.torresvieira@imtlucca.it
2015-06-26T12:52:56Z
2015-06-26T12:52:56Z
http://eprints.imtlucca.it/id/eprint/2723
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2723
2015-06-26T12:52:56Z
Checking for choreography conformance using spatial logic model-checking
We illustrate with a simple example how the Spatial Logic Model Checker can be used to check choreography conformance properties
Luis Caires
David Tavares Sousa
Hugo Torres Vieira
hugo.torresvieira@imtlucca.it
2015-06-25T12:52:14Z
2015-06-25T12:52:14Z
http://eprints.imtlucca.it/id/eprint/2719
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2719
2015-06-25T12:52:14Z
Type-based access control in data-centric systems
Data-centric multi-user systems, such as web applications, require flexible yet fine-grained data security mechanisms. Such mechanisms are usually enforced by a specially crafted security layer, which adds extra complexity and often leads to error prone coding, easily causing severe security breaches. In this paper, we introduce a programming language approach for enforcing access control policies to data in data-centric programs by static typing. Our development is based on the general concept of refinement type, but extended so as to address realistic and challenging scenarios of permission-based data security, in which policies dynamically depend on the database state, and flexible combinations of column- and row-level protection of data are necessary. We state and prove soundness and safety of our type system, stating that well-typed programs never break the declared data access control policies.
Luis Caires
Jorge A. Pérez
João C. Seco
Hugo Torres Vieira
hugo.torresvieira@imtlucca.it
Lúcio Ferrão
2015-06-25T12:47:44Z
2015-06-25T12:47:44Z
http://eprints.imtlucca.it/id/eprint/2717
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2717
2015-06-25T12:47:44Z
Automotive and finance case studies in the conversation calculus
We describe the encoding of the Car Break scenario of the SENSORIA Automotive case study and of the Credit Request scenario of the SENSORIA Finance case study using the Conversation Calculus (CSCC). These scenarios consist of an orchestration of services and service clients which are typefully encoded here in a modular way. Namely the latter scenario consists of a workflow involving different actors: a client willing to submit a credit request, a bank employee, and its supervisor. We show how the workflow is well described in the type assigned to the processes implementing it. We first informally describe the CSCC calculus, and then show how the two scenarios can be encoded using the CSCC calculus and the corresponding typing.
Luis Caires
João Costa Seco
Hugo Torres Vieira
hugo.torresvieira@imtlucca.it
2015-06-16T15:22:40Z
2015-06-16T15:22:40Z
http://eprints.imtlucca.it/id/eprint/2708
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2708
2015-06-16T15:22:40Z
Image Analysis: The New Bottleneck in Plant Phenotyping [Applications Corner]
Plant phenotyping is the identification of effects on the phenotype (i.e., the plant appearance and performance) as a result of genotype differences (i.e., differences in the genetic code) and the environmental conditions to which a plant has been exposed [1]–[3]. According to the Food and Agriculture Organization of the United Nations, large-scale experiments in plant phenotyping are a key factor in meeting the agricultural needs of the future to feed the world and provide biomass for energy, while using less water, land, and fertilizer under a constantly evolving environment due to climate change. Working on model plants (such as Arabidopsis), combined with remarkable advances in genotyping, has revolutionized our understanding of biology but has accelerated the need for precision and automation in phenotyping, favoring approaches that provide quantifiable phenotypic information that could be better used to link and find associations in the genotype [4]. While early on, the collection of phenotypes was manual, currently noninvasive, imaging-based methods are increasingly being utilized [5], [6]. However, the rate at which phenotypes are extracted in the field or in the lab is not matching the speed of genotyping and is creating a bottleneck.
Massimo Minervini
massimo.minervini@imtlucca.it
Hanno Scharr
Sotirios A. Tsaftaris
sotirios.tsaftaris@imtlucca.it
2015-06-15T09:27:55Z
2015-06-15T09:27:55Z
http://eprints.imtlucca.it/id/eprint/2707
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2707
2015-06-15T09:27:55Z
A uniform definition of stochastic process calculi
We introduce a unifying framework to provide the semantics of process algebras, including their quantitative variants useful for modeling quantitative aspects of behaviors. The unifying framework is then used to describe some of the most representative stochastic process algebras. This provides a general and clear support for an understanding of their similarities and differences. The framework is based on State to Function Labeled Transition Systems, FuTSs for short, that are state transition structures where each transition is a triple of the form (s,α,
Rocco De Nicola
r.denicola@imtlucca.it
Diego Latella
Michele Loreti
Mieke Massink
2015-06-15T09:14:09Z
2015-06-15T09:14:09Z
http://eprints.imtlucca.it/id/eprint/2706
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2706
2015-06-15T09:14:09Z
Relating strong behavioral equivalences for processes with nondeterminism and probabilities
We present a comparison of behavioral equivalences for nondeterministic and probabilistic processes whose activities are all observable. In particular, we consider trace-based, testing, and bisimulation-based equivalences. For each of them, we examine the discriminating power of three variants stemming from three approaches that differ for the way probabilities of events are compared when nondeterministic choices are resolved via schedulers. The first approach compares two resolutions with respect to the probability distributions of all considered events. The second approach requires that the probabilities of the set of events of a resolution be individually matched by the probabilities of the same events in possibly different resolutions. The third approach only compares the extremal probabilities of each event stemming from the different resolutions. The three approaches have very reasonable motivations and, when applied to fully nondeterministic processes or fully probabilistic processes, give rise to the classical well studied relations. We shall see that, for processes with nondeterminism and probability, they instead give rise to a much wider variety of behavioral relations, whose discriminating power is thoroughly investigated here in the case of deterministic schedulers.
Marco Bernardo
Rocco De Nicola
r.denicola@imtlucca.it
Michele Loreti
2015-06-15T08:55:40Z
2015-06-15T08:55:40Z
http://eprints.imtlucca.it/id/eprint/2705
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2705
2015-06-15T08:55:40Z
Editor's Note
Rocco De Nicola
r.denicola@imtlucca.it
2015-06-15T08:33:16Z
2015-06-15T08:33:16Z
http://eprints.imtlucca.it/id/eprint/2703
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2703
2015-06-15T08:33:16Z
Revisiting bisimilarity and its modal logic for nondeterministic and probabilistic processes
The logic PML is a probabilistic version of Hennessy–Milner logic introduced by Larsen and Skou to characterize bisimilarity over probabilistic processes without internal nondeterminism. In this paper, two alternative interpretations of PML over nondeterministic and probabilistic processes as models are considered, and two new bisimulation-based equivalences that are in full agreement with those interpretations are provided. The new equivalences include as coarsest congruences the two bisimilarities for nondeterministic and probabilistic processes proposed by Segala and Lynch. The latter equivalences are instead known to agree with two versions of Hennessy–Milner logic extended with an additional probabilistic operator interpreted over state distributions in place of individual states. The new interpretations of PML and the corresponding new bisimilarities are thus the first ones to offer a uniform framework for reasoning on processes that are purely nondeterministic or reactive probabilistic or that mix nondeterminism and probability in an alternating/nonalternating way.
Marco Bernardo
Rocco De Nicola
r.denicola@imtlucca.it
Michele Loreti
2015-06-15T08:33:05Z
2015-06-15T08:33:05Z
http://eprints.imtlucca.it/id/eprint/2704
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2704
2015-06-15T08:33:05Z
Revisiting Trace and Testing Equivalences for Nondeterministic and Probabilistic Processes
Two of the most studied extensions of trace and testing equivalences to nondeterministic and probabilistic processes induce distinctions that have been questioned and lack properties that are desirable. Probabilistic trace-distribution equivalence differentiates systems that can perform the same set of traces with the same probabilities, and is not a congruence for parallel composition. Probabilistic testing equivalence, which relies only on extremal success probabilities, is backward compatible with testing equivalences for restricted classes of processes, such as fully nondeterministic processes or generative/reactive probabilistic processes, only if specific sets of tests are admitted. In this paper, new versions of probabilistic trace and testing equivalences are presented for the general class of nondeterministic and probabilistic processes. The new trace equivalence is coarser because it compares execution probabilities of single traces instead of entire trace distributions, and turns out to be compositional. The new testing equivalence requires matching all resolutions of nondeterminism on the basis of their success probabilities, rather than comparing only extremal success probabilities, and considers success probabilities in a trace-by-trace fashion, rather than cumulatively on entire resolutions. It is fully backward compatible with testing equivalences for restricted classes of processes; as a consequence, the trace-by-trace approach uniformly captures the standard probabilistic testing equivalences for generative and reactive probabilistic processes. The paper discusses in full detail the new equivalences and provides a simple spectrum that relates them with existing ones in the setting of nondeterministic and probabilistic processes.
Marco Bernardo
Rocco De Nicola
r.denicola@imtlucca.it
Michele Loreti
2015-06-08T13:19:51Z
2015-06-08T13:19:51Z
http://eprints.imtlucca.it/id/eprint/2702
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2702
2015-06-08T13:19:51Z
Tail-scope: Using friends to estimate heavy tails of degree distributions in large-scale complex networks
Many complex networks in natural and social phenomena have often been characterized by heavy-tailed degree distributions. However, due to the rapidly growing size of network data and concerns about privacy issues in using these data, it becomes more difficult to analyze complete data sets. Thus, it is crucial to devise effective and efficient estimation methods for heavy tails of degree distributions in large-scale networks using only local information from a small fraction of sampled nodes. Here we propose a tail-scope method based on the local observational bias of the friendship paradox. We show that the tail-scope method outperforms uniform node sampling for estimating heavy tails of degree distributions, while the opposite tendency is observed in the range of small degrees. In order to take advantage of both sampling methods, we devise a hybrid method that successfully recovers the whole range of degree distributions. Our tail-scope method shows how structural heterogeneities of large-scale complex networks can be used to effectively reveal the network structure with only limited local information.
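The contrast between the two sampling schemes can be sketched as follows (an illustrative toy graph of ours, not the paper's data): uniform node sampling versus sampling a random neighbor of a random node, which, by the friendship paradox, is biased toward high-degree nodes and therefore observes the tail far more often.

```python
# Compare uniform node sampling with friendship-paradox (neighbor) sampling
# on a star graph: one hub of degree 50 hidden among 50 leaves of degree 1.
import random

def uniform_sample_degrees(adj, n_samples, rng):
    nodes = list(adj)
    return [len(adj[rng.choice(nodes)]) for _ in range(n_samples)]

def neighbor_sample_degrees(adj, n_samples, rng):
    # Pick a random node, then record the degree of one of its neighbors:
    # high-degree nodes are neighbors of many nodes, so they are over-sampled.
    nodes = list(adj)
    degrees = []
    while len(degrees) < n_samples:
        v = rng.choice(nodes)
        if adj[v]:                      # skip isolated nodes
            degrees.append(len(adj[rng.choice(sorted(adj[v]))]))
    return degrees

# Build the star: node 0 is the hub.
adj = {i: set() for i in range(51)}
for i in range(1, 51):
    adj[0].add(i)
    adj[i].add(0)

rng = random.Random(0)
uniform = uniform_sample_degrees(adj, 200, rng)
neighbor = neighbor_sample_degrees(adj, 200, rng)
# Neighbor sampling hits the hub (degree 50) in almost every draw; uniform
# sampling almost always sees leaves of degree 1.
```

This is only the local-bias ingredient; the paper's tail-scope estimator and the hybrid recovery of the full distribution are built on top of it.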
Young-Ho Eom
youngho.eom@imtlucca.it
Hang-Hyun Jo
2015-05-21T10:00:39Z
2016-04-07T09:49:34Z
http://eprints.imtlucca.it/id/eprint/2698
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2698
2015-05-21T10:00:39Z
Default Cascades in Complex Networks: Topology and Systemic Risk
The recent crisis has brought to the fore a crucial question that remains still open: what would be the optimal architecture of financial systems? We investigate the stability of several benchmark topologies in a simple default cascading dynamics in bank networks. We analyze the interplay of several crucial drivers, i.e., network topology, banks' capital ratios, market illiquidity, and random vs targeted shocks. We find that, in general, topology matters only – but substantially – when the market is illiquid. No single topology is always superior to others. In particular, scale-free networks can be both more robust and more fragile than homogeneous architectures. This finding has important policy implications. We also apply our methodology to a comprehensive dataset of an interbank market from 1999 to 2011.
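A bare-bones version of such a default cascade can be sketched as follows (illustrative balance sheets and network of ours, much simpler than the paper's model with capital ratios, illiquidity, and shock types): a bank defaults once its losses on exposures to already-defaulted counterparties exceed its capital, and defaults propagate until a fixed point is reached.

```python
# Toy default cascade on an interbank exposure network.
def cascade(exposures, capital, initially_defaulted):
    """exposures[i][j] = amount bank i is owed by bank j;
    capital[i] = loss-absorbing buffer of bank i."""
    defaulted = set(initially_defaulted)
    changed = True
    while changed:
        changed = False
        for i in exposures:
            if i in defaulted:
                continue
            # Losses from counterparties that have already defaulted.
            loss = sum(amount for j, amount in exposures[i].items()
                       if j in defaulted)
            if loss > capital[i]:
                defaulted.add(i)
                changed = True
    return defaulted

# A chain A -> B -> C of large exposures against thin capital buffers.
exposures = {
    "A": {"B": 5.0},
    "B": {"C": 5.0},
    "C": {},
}
capital = {"A": 2.0, "B": 2.0, "C": 2.0}
# Shock C: its default wipes out B (loss 5 > capital 2), which then wipes out A.
result = cascade(exposures, capital, {"C"})
```

On this chain the initial shock to C propagates to the whole system; with larger capital buffers the same shock would be absorbed, which is the kind of interplay the paper studies across topologies.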
Tarik Roukny
Hugues Bersini
Hugues Pirotte
Guido Caldarelli
guido.caldarelli@imtlucca.it
Stefano Battiston
2015-05-19T10:05:19Z
2015-05-19T10:05:19Z
http://eprints.imtlucca.it/id/eprint/2683
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2683
2015-05-19T10:05:19Z
Trend of Narratives in the Age of Misinformation
Social media enabled a direct path from producer to consumer of contents, changing the way users get informed, debate, and shape their worldviews. Such disintermediation weakened consensus on socially relevant issues in favor of rumors and mistrust, and fomented conspiracy thinking, e.g., chem-trails inducing global warming, the link between vaccines and autism, or the New World Order conspiracy.
In this work, we study through a thorough quantitative analysis how different conspiracy topics are consumed on the Italian Facebook. By means of a semi-automatic topic extraction strategy, we show that the most discussed contents semantically refer to four specific categories: environment, diet, health, and geopolitics. We find similar patterns by comparing users' activity (likes and comments) on posts belonging to different semantic categories. However, if we focus on the lifetime, i.e., the distance in time between the first and the last comment for each user, we notice a remarkable difference within narratives: users polarized on geopolitics are more persistent in commenting, whereas the least persistent are those focused on diet-related topics. Finally, we model users' mobility across various topics, finding that the more active a user is, the more likely they are to join all topics. Once inside a conspiracy narrative, users tend to embrace the overall corpus.
Alessandro Bessi
Fabiana Zollo
fabiana.zollo@imtlucca.it
Michela Del Vicario
michela.delvicario@imtlucca.it
Antonio Scala
Guido Caldarelli
guido.caldarelli@imtlucca.it
Walter Quattrociocchi
walter.quattrociocchi@imtlucca.it
2015-05-19T09:35:54Z
2015-10-28T14:47:41Z
http://eprints.imtlucca.it/id/eprint/2681
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2681
2015-05-19T09:35:54Z
Model Predictive Control for Linear Impulsive Systems
Linear impulsive control systems have been extensively studied with respect to their equilibrium points which, in most cases, are no other than the origin. However, the trajectory of an impulsive system cannot be stabilized to arbitrary desired points hindering their utilization in a great many applications. In this paper, we study the equilibrium of linear impulsive systems with respect to target-sets. We properly extend the notion of invariance and design stabilizing model predictive controllers (MPC). Finally, we apply the proposed methodology to control the intravenous bolus administration of Lithium.
Pantelis Sopasakis
pantelis.sopasakis@imtlucca.it
Panagiotis Patrinos
panagiotis.patrinos@imtlucca.it
Haralambos Sarimveis
Alberto Bemporad
alberto.bemporad@imtlucca.it
2015-05-18T15:45:57Z
2015-11-02T13:13:29Z
http://eprints.imtlucca.it/id/eprint/2680
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2680
2015-05-18T15:45:57Z
Identifying geographic clusters: A network analytic approach
In recent years there has been a growing interest in the role of networks and clusters in the global economy. Despite being a popular research topic in economics, sociology and urban studies, geographical clustering of human activity has often been studied by means of predetermined geographical units, such as administrative divisions and metropolitan areas. This approach is intrinsically time invariant and it does not allow one to differentiate between different activities. Our goal in this paper is to present a new methodology for identifying clusters that can be applied to different empirical settings. We use a graph approach based on k-shell decomposition to analyze world biomedical research clusters based on PubMed scientific publications. We identify research institutions and locate their activities in geographical clusters. Leading areas of scientific production and their top performing research institutions are consistently identified at different geographic scales.
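The k-shell decomposition the paper builds on is the standard iterative peeling procedure, sketched here in pure Python on a toy graph of ours (the PubMed-derived networks and the paper's pipeline are not reproduced): repeatedly remove all nodes of remaining degree at most k, assigning them to shell k.

```python
# Standard k-shell (k-core) decomposition by iterative peeling.
def k_shell_decomposition(adj):
    """Return {node: shell index} for an undirected graph given as
    {node: set(neighbors)}."""
    adj = {v: set(nbrs) for v, nbrs in adj.items()}  # local mutable copy
    shell = {}
    k = 0
    while adj:
        # Peel every node whose remaining degree is <= k, repeating because
        # each removal can drop a neighbor's degree below the threshold.
        peeled = True
        while peeled:
            peeled = False
            for v in [v for v, nbrs in adj.items() if len(nbrs) <= k]:
                shell[v] = k
                for u in adj[v]:
                    adj[u].discard(v)
                del adj[v]
                peeled = True
        k += 1
    return shell

# A triangle with a pendant chain: the triangle forms the 2-shell,
# the chain nodes fall in the 1-shell.
graph = {
    "a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b", "d"},
    "d": {"c", "e"}, "e": {"d"},
}
shells = k_shell_decomposition(graph)
```

High-shell nodes form the densely interconnected core, which is what makes the decomposition a natural cluster-detection primitive.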
Roberto Catini
roberto.catini@imtlucca.it
Dmytro Karamshuk
dmytro.karamshuk@imtlucca.it
Orion Penner
orion.penner@imtlucca.it
Massimo Riccaboni
massimo.riccaboni@imtlucca.it
2015-05-12T10:23:59Z
2015-05-12T10:23:59Z
http://eprints.imtlucca.it/id/eprint/2673
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2673
2015-05-12T10:23:59Z
Sloshing-aware attitude control of impulsively actuated spacecraft
Upper stages of launchers sometimes drift, with the main engine switched-off, for a longer period of time until re-ignition and subsequent payload release. During this period a large amount of propellant is still in the tank and the motion of the fluid (sloshing) has an impact on the attitude of the stage. For the flight phase the classical spring/damper or pendulum models cannot be applied. A more elaborate sloshing-aware model is described in the paper involving a time-varying inertia tensor.
Using principles of hybrid systems theory we model the minimum impulse bit (MIB) effect, that is, the minimum torque that can be exerted by the thrusters. We design a hybrid model predictive control scheme for the attitude control of a launcher during its long coasting period, aiming at minimising the actuation count of the thrusters.
Pantelis Sopasakis
pantelis.sopasakis@imtlucca.it
Daniele Bernardini
daniele.bernardini@imtlucca.it
Hans Strauch
Samir Bennani
Alberto Bemporad
alberto.bemporad@imtlucca.it
2015-05-04T15:19:39Z
2015-05-04T15:36:21Z
http://eprints.imtlucca.it/id/eprint/2664
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2664
2015-05-04T15:19:39Z
Causal-Consistent Reversibility in a Tuple-Based Language
Causal-consistent reversibility is a natural way of undoing concurrent computations. We study causal-consistent reversibility in the context of µKlaim, a formal coordination language based on distributed tuple spaces. We consider both uncontrolled reversibility, suitable to study the basic properties of the reversibility mechanism, and controlled reversibility based on a rollback operator, more suitable for programming applications. The causality structure of the language, and thus the definition of its reversible semantics, differs from all the reversible languages in the literature because of its generative communication paradigm. In particular, the reversible behavior of the µKlaim read primitive, reading a tuple without consuming it, cannot be matched using channel-based communication. We illustrate the reversible extensions of µKlaim on a simple, but realistic, application scenario.
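Why the read primitive needs its own treatment can be sketched with a toy history-logging tuple space (an illustration of ours, far simpler than µKlaim's causal-consistent semantics): consuming inputs and non-consuming reads require different compensations on rollback.

```python
# Toy tuple space with out / in / read and a rollback operator that undoes
# the last n actions by replaying a history log in reverse.
class ReversibleTupleSpace:
    def __init__(self):
        self.tuples = []      # multiset of tuples in the space
        self.history = []     # log of (action, tuple) pairs

    def out(self, t):
        self.tuples.append(t)
        self.history.append(("out", t))

    def inn(self, t):         # 'in' is a Python keyword
        self.tuples.remove(t)           # consumes the tuple
        self.history.append(("in", t))

    def read(self, t):
        assert t in self.tuples         # observes without consuming
        self.history.append(("read", t))
        return t

    def rollback(self, n_steps):
        for _ in range(n_steps):
            action, t = self.history.pop()
            if action == "out":
                self.tuples.remove(t)   # undo an output: remove the tuple
            elif action == "in":
                self.tuples.append(t)   # undo an input: restore the tuple
            # undoing a read leaves the space unchanged

ts = ReversibleTupleSpace()
ts.out(("temp", 21))
ts.read(("temp", 21))
ts.inn(("temp", 21))
ts.rollback(2)                 # undo the `in` and the `read`
```

The sketch ignores concurrency entirely; the paper's point is precisely that in a concurrent setting undo must respect causal dependencies, including those induced by non-consuming reads.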
Elena Giachino
Ivan Lanese
Claudio Antares Mezzina
claudio.mezzina@imtlucca.it
Francesco Tiezzi
2015-04-07T14:00:07Z
2015-04-07T14:00:07Z
http://eprints.imtlucca.it/id/eprint/2657
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2657
2015-04-07T14:00:07Z
A dual gradient-projection algorithm for model predictive control in fixed-point arithmetic
Although linear Model Predictive Control has gained increasing popularity for controlling dynamical systems subject to constraints, the main barrier that prevents its widespread use in embedded applications is the need to solve a Quadratic Program (QP) in real-time. This paper proposes a dual gradient projection (DGP) algorithm specifically tailored for implementation on fixed-point hardware. A detailed convergence rate analysis is presented in the presence of round-off errors due to fixed-point arithmetic. Based on these results, concrete guidelines are provided for selecting the minimum number of fractional and integer bits that guarantee convergence to a suboptimal solution within a pre-specified tolerance, therefore reducing the cost and power consumption of the hardware device.
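A floating-point sketch of the dual gradient projection iteration on a dense QP is given below (with illustrative data of ours; the paper's actual contribution, the fixed-point round-off analysis and bit-width selection, is not reproduced here). For min ½x'Qx + q'x subject to Ax ≤ b with Q positive definite, the dual ascent step is projected onto the nonnegative orthant.

```python
# Dual gradient projection for a strictly convex QP:
#   min 0.5 x'Qx + q'x  s.t.  Ax <= b.
import numpy as np

def dual_gradient_projection(Q, q, A, b, iters=500):
    Qinv = np.linalg.inv(Q)
    # Lipschitz constant of the dual gradient: spectral norm of A Q^{-1} A'.
    L = np.linalg.norm(A @ Qinv @ A.T, 2)
    lam = np.zeros(A.shape[0])
    for _ in range(iters):
        x = -Qinv @ (q + A.T @ lam)                   # primal minimizer for current duals
        lam = np.maximum(0.0, lam + (A @ x - b) / L)  # projected gradient ascent step
    return -Qinv @ (q + A.T @ lam), lam

# min (x0-1)^2 + (x1-1)^2  s.t.  x0 + x1 <= 1  ->  optimum at (0.5, 0.5).
Q = 2 * np.eye(2)
q = np.array([-2.0, -2.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
x, lam = dual_gradient_projection(Q, q, A, b)
```

Each iteration uses only matrix-vector products and a componentwise max, which is what makes the scheme amenable to the fixed-point hardware implementation the paper analyzes.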
Panagiotis Patrinos
panagiotis.patrinos@imtlucca.it
Alberto Guiggiani
alberto.guiggiani@imtlucca.it
Alberto Bemporad
alberto.bemporad@imtlucca.it
2015-04-07T13:53:35Z
2015-10-28T14:49:34Z
http://eprints.imtlucca.it/id/eprint/2656
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2656
2015-04-07T13:53:35Z
A multiparametric quadratic programming algorithm with polyhedral computations based on nonnegative least squares
Model Predictive Control (MPC) is one of the most successful techniques adopted in industry to control multivariable systems under constraints on input and output variables. To circumvent the main drawback of MPC, i.e., the need to solve a Quadratic Program (QP) on line to compute the control action, explicit MPC was proposed in the past to precompute the control law off line using multiparametric QP (mpQP). The resulting form of the MPC law is piecewise affine, which is extremely easy to code, can be computed online by simple arithmetic operations, and requires a maximum number of iterations that can be exactly determined a priori. On the other hand, the offline computations to solve the mpQP problem require detecting emptiness, full-dimensionality, and minimal hyperplane representations of polyhedra, and other computational geometric operations. While most of the existing methods solve such operations via linear programming, the approach proposed in this paper relies on a nonnegative least squares (NNLS) solver that is very simple to code, fast to execute, and provides solutions up to machine precision. In addition, the new approach exploits QP duality to identify and construct critical regions and to handle degeneracy issues.
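The NNLS building block the abstract refers to can be shown on a tiny generic instance (data of ours; the paper's reduction of polyhedral tests such as emptiness checks to NNLS is not reproduced here): solve min ‖Az − b‖ subject to z ≥ 0.

```python
# Nonnegative least squares: min ||Az - b||_2 subject to z >= 0 componentwise.
import numpy as np
from scipy.optimize import nnls

A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 1.0]])
b = np.array([2.0, 1.0, -1.0])

z, residual_norm = nnls(A, b)
# The unconstrained least-squares solution (2, -1) violates z >= 0, so the
# NNLS solution clamps the second variable at its bound: z = (1.5, 0).
```

Such a solver is simple to code and runs to machine precision, which is the property the paper exploits in place of linear programming for the geometric operations of mpQP.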
Alberto Bemporad
alberto.bemporad@imtlucca.it
2015-03-26T11:45:05Z
2015-03-26T11:45:05Z
http://eprints.imtlucca.it/id/eprint/2447
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2447
2015-03-26T11:45:05Z
Robust pole placement for plants with semialgebraic parametric uncertainty
In this paper we address the problem of robust pole placement for linear-time-invariant systems whose uncertain parameters are assumed to belong to a semialgebraic region. A dynamic controller is designed in order to constrain the coefficients of the closed-loop characteristic polynomial within prescribed intervals. Two main topics arising from the problem of robust pole placement are tackled by means of polynomial optimization. First, necessary conditions on the plant parameters for the existence of a robust controller are given. Then, the set of all admissible robust controllers is sought. Convex relaxation techniques based on sum-of-square decomposition of positive polynomials are used to efficiently solve the formulated optimization problems through semidefinite programming techniques.
Vito Cerone
Dario Piga
dario.piga@imtlucca.it
Diego Regruto
2015-03-26T11:36:44Z
2015-03-26T11:36:44Z
http://eprints.imtlucca.it/id/eprint/2439
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2439
2015-03-26T11:36:44Z
Set-membership EIV identification through LMI relaxation techniques
In this paper the Set-membership Error-In-Variables (EIV) identification problem is considered, that is the identification of linear dynamic systems when both the output and the input measurements are corrupted by bounded noise. A new approach for the computation of the Parameters Uncertainty Intervals (PUIs) is discussed. First the problem is formulated in terms of non-convex semi-algebraic optimization. Then, a Linear-Matrix-Inequalities relaxation technique is presented to compute parameters bounds by means of convex optimization. Finally, convergence properties and computational complexity of the given algorithms are discussed. Advantages of the proposed technique with respect to previously published ones are discussed both theoretically and by means of a simulated example.
Vito Cerone
Dario Piga
dario.piga@imtlucca.it
Diego Regruto
2015-03-11T11:18:06Z
2015-07-24T12:26:41Z
http://eprints.imtlucca.it/id/eprint/2633
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2633
2015-03-11T11:18:06Z
Supporting performance awareness in autonomous ensembles
The ASCENS project works with systems of self-aware, self-adaptive and self-expressive ensembles. Performance awareness represents a concern that cuts across multiple aspects of such systems, from the techniques to acquire performance information by monitoring, to the methods of incorporating such information into the design and decision-making processes. This chapter provides an overview of five project contributions – performance monitoring based on the DiSL instrumentation framework, measurement evaluation using the SPL formalism, performance modeling with fluid semantics, adaptation with DEECo and design with IRM-SA – all in the context of the cloud case study.
Lubomír Bulej
Tomáš Bureš
Ilias Gerostathopoulos
Vojtěch Horký
Jaroslav Keznikl
Lukáš Marek
Max Tschaikowski
max.tschaikowski@imtlucca.it
Mirco Tribastone
mirco.tribastone@imtlucca.it
Petr Tůma
2015-03-11T11:14:42Z
2015-03-11T11:14:42Z
http://eprints.imtlucca.it/id/eprint/2632
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2632
2015-03-11T11:14:42Z
Service composition for collective adaptive systems
Collective adaptive systems are large-scale resource-sharing systems which adapt to the demands of their users by redistributing resources to balance load or provide alternative services where the current provision is perceived to be insufficient. Smart transport systems are a primary example where real-time location tracking systems record the location availability of assets such as cycles for hire, or fleet vehicles such as buses, trains and trams. We consider the problem of an informed user optimising his journey using a composition of services offered by different service providers.
Stephen Gilmore
Jane Hillston
Mirco Tribastone
mirco.tribastone@imtlucca.it
2015-03-11T10:09:01Z
2015-03-11T10:09:01Z
http://eprints.imtlucca.it/id/eprint/2631
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2631
2015-03-11T10:09:01Z
A homage to Martin Wirsing
Martin Wirsing was born on Christmas Eve, December 24th, 1948, in Bayreuth, a Bavarian town which is famous for the annually celebrated Richard Wagner Festival. There he attended the Lerchenbühl School and the High-School “Christian Ernestinum”, where he followed the humanistic branch focusing on Latin and Ancient Greek. After that, from 1968 to 1974, Martin studied Mathematics at University Paris 7 and at Ludwig-Maximilians-Universität in Munich. In 1971 he obtained the Maitrise-en-Sciences Mathematiques at the University Paris 7 and, in 1974, he got the Diploma in Mathematics at LMU Munich.
Rocco De Nicola
r.denicola@imtlucca.it
Rolf Hennicker
2015-03-03T09:49:51Z
2015-03-03T09:49:51Z
http://eprints.imtlucca.it/id/eprint/2626
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2626
2015-03-03T09:49:51Z
CaSPiS: a calculus of sessions, pipelines and services
Service-oriented computing is calling for novel computational models and languages with well-disciplined primitives for client–server interaction, structured orchestration and unexpected events handling. We present CaSPiS, a process calculus where the conceptual abstractions of sessioning and pipelining play a central role for modelling service-oriented systems. CaSPiS sessions are two-sided, uniquely named and can be nested. CaSPiS pipelines permit orchestrating the flow of data produced by different sessions. The calculus is also equipped with operators for handling (unexpected) termination of the partner's side of a session. Several examples are presented to provide evidence of the flexibility of the chosen set of primitives. One key contribution is a fully abstract encoding of Misra et al.'s orchestration language Orc. Another main result shows that in CaSPiS it is possible to program a ‘graceful termination’ of nested sessions, which guarantees that no session is forced to hang forever after the loss of its partner.
Michele Boreale
Roberto Bruni
Rocco De Nicola
r.denicola@imtlucca.it
Michele Loreti
2015-03-03T09:41:50Z
2015-03-03T09:41:50Z
http://eprints.imtlucca.it/id/eprint/2625
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2625
2015-03-03T09:41:50Z
A formal approach to autonomic systems programming: the SCEL language
Software-intensive cyber-physical systems have to deal with massive numbers of components, featuring complex interactions among components and with humans and other systems. Often, they are designed to operate in open and non-deterministic environments, and to dynamically adapt to new requirements, technologies and external conditions. This class of systems has been named ensembles and new engineering techniques are needed to address the challenges of developing, integrating, and deploying them. In the paper, we briefly introduce SCEL (Software Component Ensemble Language), a kernel language that takes a holistic approach to programming autonomic computing systems and aims at providing programmers with a complete set of linguistic abstractions for programming the behavior of autonomic components and the formation of autonomic components ensembles, and for controlling the interaction among different components.
Rocco De Nicola
r.denicola@imtlucca.it
2015-02-23T11:14:29Z
2015-02-23T11:14:29Z
http://eprints.imtlucca.it/id/eprint/2622
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2622
2015-02-23T11:14:29Z
Foundations of Support Constraint Machines
The mathematical foundations of a new theory for the design of intelligent agents are presented. The proposed learning paradigm is centered around the concept of constraint, representing the interactions with the environment, and the parsimony principle. The classical regularization framework of kernel machines is naturally extended to the case in which the agents interact with a richer environment, where abstract granules of knowledge, compactly described by different linguistic formalisms, can be translated into the unified notion of constraint for defining the hypothesis set. Constrained variational calculus is exploited to derive general representation theorems that provide a description of the optimal body of the agent (i.e., the functional structure of the optimal solution to the learning problem), which is the basis for devising new learning algorithms. We show that regardless of the kind of constraints, the optimal body of the agent is a support constraint machine (SCM) based on representer theorems that extend classical results for kernel machines and provide new representations. In a sense, the expressiveness of constraints yields a semantic-based regularization theory, which strongly restricts the hypothesis set of classical regularization. Some guidelines to unify continuous and discrete computational mechanisms are given so as to accommodate in the same framework various kinds of stimuli, for example, supervised examples and logic predicates. The proposed view of learning from constraints incorporates classical learning from examples and extends naturally to the case in which the examples are subsets of the input space, which is related to learning propositional logic clauses.
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Marco Gori
Stefano Melacci
Marcello Sanguineti
2015-02-23T11:11:28Z
2015-02-23T11:11:28Z
http://eprints.imtlucca.it/id/eprint/2621
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2621
2015-02-23T11:11:28Z
Robust local–global SOM-based ACM
A novel active contour model (ACM) for image segmentation, driven by both local and global image-intensity information encoded by a self-organising map (SOM), is proposed. Experimental results demonstrate the robustness of the proposed model to the contour initialisation and to the additive noise, when compared with the state-of-the-art local and global ACMs. They also demonstrate its robustness to scene changes.
Mohammed Abdelsamea
mohammed.abdelsamea@imtlucca.it
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
2015-02-23T11:04:02Z
2015-02-23T11:04:02Z
http://eprints.imtlucca.it/id/eprint/2620
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2620
2015-02-23T11:04:02Z
An efficient Self-Organizing Active Contour model for image segmentation
Active Contour Models (ACMs) constitute a powerful energy-based minimization framework for image segmentation, based on the evolution of an active contour. Among ACMs, supervised ACMs are able to exploit the information extracted from supervised examples to guide the contour evolution. However, their applicability is limited by the accuracy of the probability models they use. As a consequence, effectiveness and efficiency remain the main practical challenges for supervised ACMs, especially when handling images containing regions characterized by intensity inhomogeneity. In this paper, to deal with such kinds of images, we propose a new supervised ACM, named the Self-Organizing Active Contour (SOAC) model, which combines a variational level set method (a specific kind of ACM) with the weights of the neurons of two Self-Organizing Maps (SOMs). Its main contribution is the development of a new ACM energy functional, optimized in such a way that the topological structure of the underlying image intensity distribution is preserved (using the two SOMs) in a parallel-processing and local way. The model has a supervised component, since training pixels associated with different regions are assigned to different SOMs. Experimental results show the superior efficiency and effectiveness of SOAC versus several existing ACMs.
Mohammed Abdelsamea
mohammed.abdelsamea@imtlucca.it
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Mohamed Medhat Gaber
2015-02-23T10:45:49Z
2015-11-02T13:02:11Z
http://eprints.imtlucca.it/id/eprint/2618
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2618
2015-02-23T10:45:49Z
Learning With Mixed Hard/Soft Pointwise Constraints
A learning paradigm is proposed and investigated, in which the classical framework of learning from examples is enhanced by the introduction of hard pointwise constraints, i.e., constraints imposed on a finite set of examples that cannot be violated. Such constraints arise, e.g., when requiring coherent decisions of classifiers acting on different views of the same pattern. The classical examples of supervised learning, which can be violated at the cost of some penalization (quantified by the choice of a suitable loss function), play the role of soft pointwise constraints. Constrained variational calculus is exploited to derive a representer theorem that provides a description of the functional structure of the optimal solution to the proposed learning paradigm. It is shown that such an optimal solution can be represented in terms of a set of support constraints, which generalize the concept of support vectors and open the door to a novel learning paradigm, called support constraint machines. The general theory is applied to derive the representation of the optimal solution to the problem of learning from hard linear pointwise constraints combined with soft pointwise constraints induced by supervised examples. In some cases, closed-form optimal solutions are obtained.
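The mixed hard/soft setting described above can be sketched as a constrained variational problem (the notation below is ours, a simplified illustration rather than the paper's exact formulation):

```latex
\min_{f \in \mathcal{H}} \;
  \lambda \, \|f\|_{\mathcal{H}}^{2}
  \;+\; \sum_{k=1}^{\ell} V\big(y_k, f(x_k)\big)
\quad \text{subject to} \quad
  \phi_i\big(f(z_i)\big) = 0, \qquad i = 1, \dots, m,
```

where the loss terms V encode the soft pointwise constraints given by the supervised pairs (x_k, y_k), the equalities φ_i are hard pointwise constraints at the points z_i, and the representer theorem expresses the optimizer as a kernel expansion over the resulting support constraints.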
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Marco Gori
Stefano Melacci
Marcello Sanguineti
2015-02-23T09:55:18Z
2015-02-23T09:55:18Z
http://eprints.imtlucca.it/id/eprint/2616
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2616
2015-02-23T09:55:18Z
Evaluation of the Average Packet Delivery Delay in Highly-Disrupted Networks: The DTN and IP-like Protocol Cases
Delay/Disruption-Tolerant Networking (DTN) represents an innovative communication paradigm that enables communication over Intermittently-Connected Networks (ICNs). ICNs are characterized by unpredictable or scheduled contacts among nodes, high latency, and high bit error rates. DTNs, unlike TCP/IP protocols, make use of store-and-forward techniques in order to cope with intermittent link issues. In this letter, a simple model is proposed to compute the average packet delivery delay in ICNs. Both the IP-like paradigm used by traditional TCP/IP protocols and DTN are considered. The results provide theoretical insights into the applications of these two approaches to ICNs. Numerical results and simulations are also presented.
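As a rough illustration of why store-and-forward helps under intermittent connectivity, the following sketch compares the two paradigms by Monte Carlo simulation on a toy two-hop slotted model (the model and parameters are ours, not the letter's analytical one):

```python
import random

def delivery_delay(p, mode, rng):
    """Slots until a packet crosses a two-hop path whose links are each
    independently up with probability p in every slot."""
    t = 0
    if mode == "dtn":  # store-and-forward: cross each hop as soon as its link is up
        for _ in range(2):
            t += 1
            while rng.random() >= p:
                t += 1
    else:  # IP-like: the packet advances only when both links are up in the same slot
        t += 1
        while not (rng.random() < p and rng.random() < p):
            t += 1
    return t

rng = random.Random(42)
p, n = 0.3, 20000
for mode in ("dtn", "ip"):
    avg = sum(delivery_delay(p, mode, rng) for _ in range(n)) / n
    print(mode, round(avg, 2))
# Geometric-trial argument: E[dtn] = 2/p (about 6.7 slots) vs E[ip] = 1/p^2 (about 11.1).
```

The simulated averages match the geometric-distribution means, showing the delay advantage of storing the packet at the intermediate node.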
Marco Cello
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Mario Marchese
Marcello Sanguineti
2015-02-23T09:44:36Z
2015-02-23T09:44:36Z
http://eprints.imtlucca.it/id/eprint/2615
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2615
2015-02-23T09:44:36Z
Exploiting the Shapley Value in the Estimation of the Position of a Point of Interest for a Group of Individuals
Concepts and tools from cooperative game theory are exploited to quantify the role played by each member of a team in estimating the position of an observed point of interest. The measure of importance known as the “Shapley value” is used to this end. From the theoretical point of view, we propose a specific form of the characteristic function for the class of cooperative games under investigation. In the numerical analysis, different configurations of a group of individuals are considered: all individuals looking at a mobile point of interest, one of them replaced with an artificially-generated one who looks exactly toward the point of interest, and directions of the heads replaced with randomly-generated directions. The corresponding experimental outcomes are compared.
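The Shapley value itself is standard; a minimal sketch of its direct computation for a small cooperative game (the toy characteristic function below is ours, not the paper's proposed one) is:

```python
from itertools import combinations
from math import factorial

def shapley_values(players, v):
    """Shapley value of each player for characteristic function v,
    where v maps frozensets of players to real payoffs."""
    n = len(players)
    phi = {}
    for i in players:
        others = [p for p in players if p != i]
        total = 0.0
        for k in range(len(others) + 1):
            for coalition in combinations(others, k):
                S = frozenset(coalition)
                # weight = |S|! (n - |S| - 1)! / n!  for the marginal contribution of i
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (v(S | {i}) - v(S))
        phi[i] = total
    return phi

# Toy 3-player game: a coalition's payoff is 1 if it has at least two members.
players = ["a", "b", "c"]
v = lambda S: 1.0 if len(S) >= 2 else 0.0
print(shapley_values(players, v))  # symmetry: each player gets 1/3
```

By efficiency, the values sum to the payoff of the grand coalition; by symmetry, the three interchangeable players each receive 1/3.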
Antonio Camurri
Floriane Dardard
Simone Ghisio
Donald Glowinski
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Marcello Sanguineti
2015-02-23T09:38:59Z
2015-05-19T09:17:36Z
http://eprints.imtlucca.it/id/eprint/2614
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2614
2015-02-23T09:38:59Z
Narrowing the Search for Optimal Call-Admission Policies Via a Nonlinear Stochastic Knapsack Model
Call admission control with two classes of users is investigated via a nonlinear stochastic knapsack model. The feasibility region represents the subset of the call space, where given constraints on the quality of service have to be satisfied. Admissible strategies are searched for within the class of coordinate-convex policies. Structural properties that the optimal policies belonging to such a class have to satisfy are derived. They are exploited to narrow the search for the optimal solution to the nonlinear stochastic knapsack problem that models call admission control. To illustrate the role played by these properties, the numbers of coordinate-convex policies by which they are satisfied are estimated. A graph-based algorithm to generate all such policies is presented.
Marco Cello
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Mario Marchese
Marcello Sanguineti
2015-02-18T14:47:11Z
2015-02-18T14:47:11Z
http://eprints.imtlucca.it/id/eprint/2611
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2611
2015-02-18T14:47:11Z
Learning as Constraint Reactions
A theory of learning is proposed, which naturally extends the classic regularization framework of kernel machines to the case in which the agent interacts with a richer environment, compactly described by the notion of constraint. Variational calculus is exploited to derive general representer theorems that give a description of the structure of the solution to the learning problem. It is shown that such a solution can be represented in terms of constraint reactions, which recall the corresponding notion in analytic mechanics. In particular, the derived representer theorems clearly show the extension of the classic kernel expansion on support vectors to an expansion on support constraints. As an application of the proposed theory, three examples are given, which illustrate the dimensional collapse to a finite-dimensional space of parameters. The constraint reactions are calculated for the classic collection of supervised examples, for the case of box constraints, and for the case of hard holonomic linear constraints mixed with supervised examples. Interestingly, this leads to representer theorems for which we can re-use the mathematical and algorithmic apparatus of kernel machines.
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Marco Gori
Stefano Melacci
Marcello Sanguineti
2015-02-18T14:33:29Z
2015-02-18T14:33:29Z
http://eprints.imtlucca.it/id/eprint/2610
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2610
2015-02-18T14:33:29Z
A Survey of SOM-Based Active Contour Models for Image Segmentation
Self-Organizing Maps (SOMs) have attracted the attention of many computer vision scientists, particularly when dealing with image segmentation as a contour extraction problem. The idea of utilizing the prototypes (weights) of a SOM to model an evolving contour has produced a new class of Active Contour Models (ACMs), known as SOM-based ACMs. Such models have been proposed in general with the aim of exploiting the specific ability of SOMs to learn the edge-map information via their topology preservation property, and of overcoming some drawbacks of other ACMs, such as becoming trapped in local minima of the image energy functional to be minimized in such models. In this survey paper, the main principles of SOMs and their application in modelling active contours are first highlighted. Then, we review existing SOM-based ACMs with a focus on their advantages and disadvantages in modelling the evolving contour via different kinds of SOMs. Finally, some current research directions are identified.
Mohammed Abdelsamea
mohammed.abdelsamea@imtlucca.it
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Mohamed Medhat Gaber
2015-02-11T14:35:31Z
2015-02-11T14:35:31Z
http://eprints.imtlucca.it/id/eprint/2607
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2607
2015-02-11T14:35:31Z
Blending randomness in closed queueing network models
Random environments are stochastic models used to describe events occurring in the environment a system operates in. The goal is to describe events that affect performance and reliability such as breakdowns, repairs, or temporary degradations of resource capacities due to exogenous factors. Despite having been studied for decades, models that include both random environments and queueing networks remain difficult to analyse. To cope with this problem, we introduce the blending algorithm, a novel approximation for closed queueing network models in random environments. The algorithm seeks to obtain the stationary solution of the model by iteratively evaluating the dynamics of the system in between state changes of the environment. To make the approach scalable, the computation relies on a fluid approximation of the queueing network model. A validation study on 1800 models shows that blending can save a significant amount of time compared to simulation, with an average accuracy that grows with the number of servers in each station. We also give an interpretation of this technique in terms of Laplace transforms and use this approach to determine convergence properties.
Giuliano Casale
Mirco Tribastone
mirco.tribastone@imtlucca.it
Peter G. Harrison
2015-02-11T14:31:46Z
2015-07-24T12:22:54Z
http://eprints.imtlucca.it/id/eprint/2606
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2606
2015-02-11T14:31:46Z
Tackling continuous state-space explosion in a Markovian process algebra
Fluid or mean-field methods are approximate analytical techniques which have proven effective in tackling the infamous state-space explosion problem which typically arises when modelling large-scale concurrent systems based on interleaving semantics. These methods are particularly suitable in situations which present large populations of simple interacting objects characterised by small local state spaces, since they require the analysis of a problem which is insensitive to the population sizes but is dependent only on the size of the local state spaces. This paper studies the case when the replicated objects are best described as composites which consist of smaller simple objects. A congenial formal modelling framework for situations of this kind may be given by stochastic process algebra. Using PEPA as a representative case, we find that fluid models with replicated copies of composite processes do not scale well with increasing population sizes, thus rendering intractable the analysis of the underlying system of ordinary differential equations (ODEs). We call this problem continuous state-space explosion, by analogy with its counterpart phenomenon in discrete state spaces. The main contribution of this paper is a result of equivalence that simplifies, in an exact way, the potentially massive ODE system arising in those circumstances to one whose size is independent from all the multiplicities in the model. As a byproduct, we find that these simplified ODEs turn out to characterise the fluid behaviour of a family of PEPA models whose elements cannot be related to each other through any known equivalence relation. A substantial numerical assessment investigates the relationship between the different underlying Markov chains and their unique fluid limit, demonstrating its generally good accuracy for all practical purposes.
Max Tschaikowski
max.tschaikowski@imtlucca.it
Mirco Tribastone
mirco.tribastone@imtlucca.it
2015-02-11T14:29:07Z
2015-07-24T12:22:10Z
http://eprints.imtlucca.it/id/eprint/2605
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2605
2015-02-11T14:29:07Z
Exact fluid lumpability in Markovian process algebra
Quantitative analysis by means of discrete-state stochastic processes is hindered by the well-known phenomenon of state-space explosion, whereby the size of the state space may have an exponential growth with the number of agents of the system under scrutiny. When the stochastic process underlies a Markovian process algebra model, this problem may be alleviated by suitable notions of behavioural equivalence that induce lumping at the underlying continuous-time Markov chain, establishing an exact relation between a potentially much smaller aggregated chain and the original one. For the analysis of massively parallel systems, however, lumping techniques may not be sufficient to yield a computationally tractable problem. Recently, much work has been directed towards forms of fluid techniques that provide a set of ordinary differential equations (ODEs) approximating the expected path of the stochastic process. Unfortunately, even fluid models of realistic systems may be too large for feasible analysis. This paper studies a behavioural relation for process algebra with fluid semantics, called projected label equivalence, which is shown to yield an exactly fluid lumpable model, i.e., an aggregated ODE system which can be related to the original one without any loss of information. Projected label equivalence relates sequential components of a process term. In general, for any two sequential components that are related in the fluid sense, nothing can be said about their relationship from the stochastic viewpoint. We define and study a notion of well-posedness which allows us to relate fluid lumpability to the stochastic notion of semi-isomorphism, which is a weaker version of the common notion of isomorphism between the doubly labelled transition systems at the basis of the Markovian interpretation.
Max Tschaikowski
max.tschaikowski@imtlucca.it
Mirco Tribastone
mirco.tribastone@imtlucca.it
2015-02-11T14:24:13Z
2015-02-11T14:24:13Z
http://eprints.imtlucca.it/id/eprint/2604
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2604
2015-02-11T14:24:13Z
Modelling exogenous variability in cloud deployments
Describing exogenous variability in the resources used by a cloud application leads to stochastic performance models that are difficult to solve. In this paper, we describe the blending algorithm, a novel approximation for queueing network models immersed in a random environment. Random environments are Markov chain-based descriptions of time-varying operational conditions that evolve independently of the system state; therefore they are natural descriptors for exogenous variability in a cloud deployment. The algorithm adopts the principle of solving a separate transient-analysis subproblem for each state of the random environment. Each subproblem is then approximated by a system of ordinary differential equations formulated according to a fluid limit theorem, making the approach scalable and computationally inexpensive. A validation study on several hundred models shows that blending can save up to two orders of magnitude of computational time compared to simulation, enabling efficient exploration of a decision space, which is useful in particular at design-time.
Giuliano Casale
Mirco Tribastone
mirco.tribastone@imtlucca.it
2015-02-11T14:20:40Z
2015-02-11T14:20:40Z
http://eprints.imtlucca.it/id/eprint/2603
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2603
2015-02-11T14:20:40Z
A fluid model for layered queueing networks
Layered queueing networks are a useful tool for the performance modeling and prediction of software systems that exhibit complex characteristics such as multiple tiers of service, fork/join interactions, and asynchronous communication. These features generally result in nonproduct form behavior, for which particularly efficient approximations based on mean value analysis (MVA) have been devised. This paper reconsiders the accuracy of such techniques by providing an interpretation of layered queueing networks as fluid models. Mediated by an automatic translation into a stochastic process algebra, PEPA, a network is associated with a set of ordinary differential equations (ODEs) whose size is insensitive to the population levels in the system under consideration. A substantial numerical assessment demonstrates that this approach significantly improves the quality of the approximation for typical performance indices such as utilization, throughput, and response time. Furthermore, backed by established theoretical results of asymptotic convergence, the error trend shows a monotonic decrease with larger population sizes, a behavior which is found to be in sharp contrast with that of approximate mean value analysis, whose error instead tends to increase.
Mirco Tribastone
mirco.tribastone@imtlucca.it
2015-02-11T14:17:18Z
2015-02-11T14:17:18Z
http://eprints.imtlucca.it/id/eprint/2602
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2602
2015-02-11T14:17:18Z
Fluid rewards for a stochastic process algebra
Reasoning about the performance of models of software systems typically entails the derivation of metrics such as throughput, utilization, and response time. If the model is a Markov chain, these are expressed as real functions of the chain, called reward models. The computational complexity of reward-based metrics is of the same order as the solution of the Markov chain, making the analysis infeasible when evaluating large-scale systems. In the context of the stochastic process algebra PEPA, the underlying continuous-time Markov chain has been shown to admit a deterministic (fluid) approximation as a solution of an ordinary differential equation, which effectively circumvents state-space explosion. This paper is concerned with approximating Markovian reward models for PEPA with fluid rewards, i.e., functions of the solution of the differential equation problem. It shows that (1) the Markovian reward models for typical metrics of performance enjoy asymptotic convergence to their fluid analogues, and that (2) via numerical tests, the approximation yields satisfactory accuracy in practice.
Mirco Tribastone
mirco.tribastone@imtlucca.it
Jie Ding
Stephen Gilmore
Jane Hillston
2015-02-11T14:14:02Z
2015-02-11T14:14:02Z
http://eprints.imtlucca.it/id/eprint/2601
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2601
2015-02-11T14:14:02Z
Scalable differential analysis of process algebra models
The exact performance analysis of large-scale software systems with discrete-state approaches is difficult because of the well-known problem of state-space explosion. This paper considers this problem with regard to the stochastic process algebra PEPA, presenting a deterministic approximation to the underlying Markov chain model based on ordinary differential equations. The accuracy of the approximation is assessed by means of a substantial case study of a distributed multithreaded application.
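A minimal sketch of the kind of deterministic approximation involved: a generic closed client-server model with PEPA-style minimum-based synchronisation, integrated with Euler's method (the model and all parameters are illustrative choices of ours, not the paper's case study):

```python
def fluid_trajectory(n_clients=100, n_servers=5, think=1.0, service=10.0,
                     t_end=5.0, dt=0.001):
    """Euler integration of the fluid (mean-field) ODE for a closed
    client-server model: clients think at rate `think`, then compete for
    servers that serve at rate `service`.  The joint service rate uses the
    minimum-based synchronisation of PEPA's fluid semantics."""
    c_think, c_wait = float(n_clients), 0.0   # thinking vs waiting clients
    for _ in range(int(t_end / dt)):
        # the shared action proceeds at the rate of the slower side
        rate = service * min(c_wait, float(n_servers))
        d_think = rate - think * c_think      # served clients return to thinking
        c_think += dt * d_think
        c_wait = n_clients - c_think          # populations are conserved
    return c_think, c_wait

c_think, c_wait = fluid_trajectory()
print(round(c_think, 1), round(c_wait, 1))
```

At equilibrium the think rate balances the saturated service rate (1.0 * c_think = 10.0 * 5), so the trajectory settles near 50 thinking and 50 waiting clients, with cost independent of the population size, which is the point of the differential approximation.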
Mirco Tribastone
mirco.tribastone@imtlucca.it
Stephen Gilmore
Jane Hillston
2015-02-11T14:10:28Z
2015-02-11T14:10:28Z
http://eprints.imtlucca.it/id/eprint/2600
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2600
2015-02-11T14:10:28Z
Stochastic process algebras: from individuals to populations
In this paper we report on progress in the use of stochastic process algebras for representing systems which contain many replications of components such as clients, servers and devices. Such systems have traditionally been difficult to analyse even when using high-level models because of the need to represent the vast range of their potential behaviour. Models of concurrent systems with many components very quickly exceed the storage capacity of computing devices even when efficient data structures are used to minimize the cost of representing each state. Here, we show how population-based models that make use of a continuous approximation of the discrete behaviour can be used to efficiently analyse the temporal behaviour of very large systems via their collective dynamics. This approach enables modellers to study problems that cannot be tackled with traditional discrete-state techniques such as continuous-time Markov chains.
Jane Hillston
Mirco Tribastone
mirco.tribastone@imtlucca.it
Stephen Gilmore
2015-02-11T14:06:11Z
2015-02-11T14:06:11Z
http://eprints.imtlucca.it/id/eprint/2599
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2599
2015-02-11T14:06:11Z
Non-functional properties in the model-driven development of service-oriented systems
Systems based on the service-oriented architecture (SOA) principles have become an important cornerstone of the development of enterprise-scale software applications. They are characterized by separating functions into distinct software units, called services, which can be published, requested and dynamically combined in the production of business applications. Service-oriented systems (SOSs) promise high flexibility, improved maintainability, and simple re-use of functionality. Achieving these properties requires an understanding not only of the individual artifacts of the system but also their integration. In this context, non-functional aspects play an important role and should be analyzed and modeled as early as possible in the development cycle. In this paper, we discuss modeling of non-functional aspects of service-oriented systems, and the use of these models for analysis and deployment. Our contribution in this paper is threefold. First, we show how services and service compositions may be modeled in UML by using a profile for SOA (UML4SOA) and how non-functional properties of service-oriented systems can be represented using the non-functional extension of UML4SOA (UML4SOA-NFP) and the MARTE profile. This enables modeling of performance, security and reliable messaging. Second, we discuss formal analysis of models which respect this design, in particular we consider performance estimates and reliability analysis using the stochastically timed process algebra PEPA as the underlying analytical engine. Last but not least, our models are the source for the application of deployment mechanisms which comprise model-to-model and model-to-text transformations implemented in the framework VIATRA. All techniques presented in this work are illustrated by a running example from an eUniversity case study.
Stephen Gilmore
László Gönczy
Nora Koch
Philip Mayer
Mirco Tribastone
mirco.tribastone@imtlucca.it
Dániel Varró
2015-02-11T13:51:46Z
2015-02-11T13:51:46Z
http://eprints.imtlucca.it/id/eprint/2598
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2598
2015-02-11T13:51:46Z
The PEPA eclipse plugin
The PEPA Eclipse Plug-in supports the creation and analysis of performance models, from small-scale Markov models to large-scale simulation studies and differential equation systems. Whichever form of analysis is used, models are expressed in a single high-level language for quantitative modelling, Performance Evaluation Process Algebra (PEPA).
Mirco Tribastone
mirco.tribastone@imtlucca.it
Adam Duguid
Stephen Gilmore
2015-02-11T13:46:09Z
2015-02-11T13:46:09Z
http://eprints.imtlucca.it/id/eprint/2597
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2597
2015-02-11T13:46:09Z
Scaling performance analysis using fluid-flow approximation
The fluid interpretation of the process calculus PEPA provides a very useful tool for the performance evaluation of large-scale systems because the tractability of the numerical solution does not depend upon the population levels of the system under study. This paper offers a tutorial on how to use this technique by analysing a case study of a service-oriented application to support an e-University infrastructure.
Mirco Tribastone
mirco.tribastone@imtlucca.it
Stephen Gilmore
2015-02-11T13:40:30Z
2015-02-11T13:40:30Z
http://eprints.imtlucca.it/id/eprint/2596
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2596
2015-02-11T13:40:30Z
Quantitative analysis of web services using SRMC
In this tutorial paper we present quantitative methods for analysing Web Services with the goal of understanding how they will perform under increased demand, or when asked to serve a larger pool of service subscribers. We use a process calculus called SRMC to model the service. We apply efficient analysis techniques to numerically evaluate our model. The process calculus and the numerical analysis are supported by a set of software tools which relieve the modeller of the burden of generating and evaluating a large family of related models. The methods are illustrated on a classical example of Web Service usage in a business-to-business scenario.
Allan Clark
Stephen Gilmore
Mirco Tribastone
mirco.tribastone@imtlucca.it
2015-02-11T13:33:24Z
2015-02-11T13:33:24Z
http://eprints.imtlucca.it/id/eprint/2595
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2595
2015-02-11T13:33:24Z
Stochastic process algebras
In this tutorial we give an introduction to stochastic process algebras and their use in performance modelling, with a focus on the PEPA formalism. A brief introduction is given to the motivations for extending classical process algebra with stochastic times and probabilistic choice. We then present an introduction to the modelling capabilities of the formalism and the tools available to support Markovian based analysis. The chapter is illustrated throughout by small examples, demonstrating the use of the formalism and the tools.
Allan Clark
Stephen Gilmore
Jane Hillston
Mirco Tribastone
mirco.tribastone@imtlucca.it
2015-02-10T15:32:31Z
2015-02-10T15:32:31Z
http://eprints.imtlucca.it/id/eprint/2594
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2594
2015-02-10T15:32:31Z
Behavioral relations in a process algebra for variants
Variant Process Algebra is designed for the formal behavioral modeling of software variation, as arises, for instance, in software product line engineering. Process terms are labelled with the sets of variants, i.e., specific products, where they are enabled. A multi-modal operational semantics enables two compositional forms of reasoning. The first is concerned with relating the behavior of a variant to that of the whole family. The second relates variants to each other, for instance to formally capture the intuitive idea that one variant is a conservative extension of another, in the sense that it adds more behavior without breaking any existing one. Sufficient conditions are given to establish such a relation statically, by means of syntactic checks on process terms.
Mirco Tribastone
mirco.tribastone@imtlucca.it
2015-02-10T15:26:40Z
2016-02-12T13:12:35Z
http://eprints.imtlucca.it/id/eprint/2593
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2593
2015-02-10T15:26:40Z
An analysis pathway for the quantitative evaluation of public transport systems
We consider the problem of evaluating quantitative service-level agreements in public services such as transportation systems. We describe the integration of quantitative analysis tools for data fitting, model generation, simulation, and statistical model-checking, creating an analysis pathway leading from system measurement data to verification results. We apply our pathway to the problem of determining whether public bus systems are delivering an appropriate quality of service as required by regulators. We exercise the pathway on service data obtained from Lothian Buses about the arrival and departure times of their buses on key bus routes through the city of Edinburgh. Although we include only that example in the present paper, our methods are sufficiently general to apply to other transport systems and other cities.
Stephen Gilmore
Mirco Tribastone
mirco.tribastone@imtlucca.it
Andrea Vandin
andrea.vandin@imtlucca.it
2015-02-10T15:06:34Z
2015-02-10T15:06:34Z
http://eprints.imtlucca.it/id/eprint/2592
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2592
2015-02-10T15:06:34Z
Family-based performance analysis of variant-rich software systems
We study models of software systems with variants that stem from a specific choice of configuration parameters with a direct impact on performance properties. Using UML activity diagrams with quantitative annotations, we model such systems as a product line. The efficiency of a product-based evaluation is typically low because each product must be analyzed in isolation, making it difficult to re-use computations across variants. Here, we propose a family-based approach based on symbolic computation. A numerical assessment on large activity diagrams shows that this approach can be up to three orders of magnitude faster than product-based analysis in large models, thus enabling computationally efficient exploration of large parameter spaces.
Matthias Kowal
Ina Schaefer
Mirco Tribastone
mirco.tribastone@imtlucca.it
2015-02-10T14:53:43Z
2015-07-24T12:27:34Z
http://eprints.imtlucca.it/id/eprint/2591
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2591
2015-02-10T14:53:43Z
Extended differential aggregations in process algebra for performance and biology
We study aggregations for ordinary differential equations induced by fluid semantics for Markovian process algebra which can capture the dynamics of performance models and chemical reaction networks. Whilst previous work has required perfect symmetry for exact aggregation, we present approximate fluid lumpability, which makes nearby processes perfectly symmetric after a perturbation of their parameters. We prove that small perturbations yield nearby differential trajectories. Numerically, we show that many heterogeneous processes can be aggregated with negligible errors.
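As a numerical illustration of the lumping idea (a made-up two-process example, not the paper's formal construction): two processes whose exit rates differ by a small perturbation EPS are aggregated into a single ODE variable, and the aggregate trajectory stays close to the sum of the exact ones, with an error that shrinks with EPS.

```python
# Sketch of approximate fluid lumpability on a hypothetical example:
# two near-symmetric processes lumped into one aggregate ODE variable.

def euler(f, x0, t_end, h=1e-3):
    """Fixed-step Euler integration of x' = f(x)."""
    x, t = list(x0), 0.0
    while t < t_end:
        x = [xi + h * di for xi, di in zip(x, f(x))]
        t += h
    return x

EPS = 0.01
R1, R2 = 1.0, 1.0 + EPS      # nearly symmetric exit rates

# Full model: x1' = -R1*x1, x2' = -R2*x2 (two near-identical processes).
full = euler(lambda x: [-R1 * x[0], -R2 * x[1]], [1.0, 1.0], 5.0)
# Lumped model: y = x1 + x2 evolved with the unperturbed rate only.
lumped = euler(lambda y: [-R1 * y[0]], [2.0], 5.0)

error = abs(full[0] + full[1] - lumped[0])
print(error)    # small; shrinks roughly linearly as EPS -> 0
```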
Max Tschaikowski
max.tschaikowski@imtlucca.it
Mirco Tribastone
mirco.tribastone@imtlucca.it
2015-02-10T14:44:18Z
2015-02-10T14:44:18Z
http://eprints.imtlucca.it/id/eprint/2590
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2590
2015-02-10T14:44:18Z
Fluid performability analysis of nested automata models
In this paper we present a class of nested automata for the modelling of performance, availability, and reliability of software systems with hierarchical structure, which we call systems of systems. Quantitative modelling provides valuable insight into the dynamic behaviour of software systems, allowing non-functional properties such as performance, dependability and availability to be assessed. However, the complexity of many systems challenges the feasibility of this approach as the required mathematical models grow too large to afford computationally efficient solution. In recent years it has been found that in some cases a fluid, or mean field, approximation can provide very good estimates whilst dramatically reducing the computational cost.
The systems of systems which we propose are hierarchically arranged automata in which influence may be exerted between siblings, between parents and children, and even from children to parents, allowing a wide range of complex dynamics to be captured. We show that, under mild conditions, systems of systems can be equipped with fluid approximation models which are several orders of magnitude more efficient to run than explicit state representations, whilst providing excellent estimates of performability measures. This is a significant extension of previous fluid approximation results, with valuable applications for software performance modelling.
Luca Bortolussi
Jane Hillston
Mirco Tribastone
mirco.tribastone@imtlucca.it
2015-02-10T14:33:03Z
2015-02-10T14:33:03Z
http://eprints.imtlucca.it/id/eprint/2589
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2589
2015-02-10T14:33:03Z
Efficient optimization of software performance models via parameter-space pruning
The optimization of a software performance model consists of searching for the configuration of the system's parameters that best meets given performance objectives. Unfortunately, for realistic scenarios, the cost of the optimization is typically high, leading to computational difficulties in the exploration of large parameter spaces. This paper proposes an approach to provably exact parameter-space pruning for a class of models of large-scale software systems analyzed with fluid techniques, efficient and scalable deterministic approximations of massively parallel stochastic models. We present a result of monotonicity of fluid solutions with respect to the model parameters, and employ it in the context of optimization programs with evolutionary algorithms by discarding candidate configurations a priori, i.e., without ever solving them, whenever they are proven to give lower fitness than other configurations. An extensive numerical validation shows that this approach yields an average twofold runtime speed-up compared to a baseline optimization algorithm that does not exploit monotonicity. Furthermore, we find that the optimal configuration is within a few percent of the true one obtained by stochastic simulation, whose solution is however orders of magnitude more expensive.
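The pruning argument can be sketched as follows (an illustrative random search with a toy monotone fitness, not the paper's fluid models or its evolutionary algorithm): assuming fitness is known to be non-decreasing in every parameter, a candidate that is componentwise dominated by an already-solved configuration can be discarded without ever solving it, and the optimum over all candidates is still found.

```python
# Sketch of monotonicity-based parameter-space pruning (toy example).
import random

def fitness(cfg):
    """Stand-in for an expensive model solution; monotone by construction."""
    return sum(cfg)

random.seed(0)
archive = []     # (config, fitness) pairs that were actually solved
pruned = 0
best = None
for _ in range(200):
    cfg = tuple(random.randint(0, 10) for _ in range(3))
    # Monotone non-decreasing fitness: if cfg is componentwise <= an
    # already-solved config, its fitness cannot exceed that config's,
    # so it is provably no better and can be skipped unevaluated.
    if any(all(c <= a for c, a in zip(cfg, e)) for e, _ in archive):
        pruned += 1
        continue
    f = fitness(cfg)
    archive.append((cfg, f))
    best = f if best is None else max(best, f)

print(best, pruned)
```

Every pruned candidate is dominated by a solved one with at least its fitness, so skipping it cannot lose the optimum.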
Mirco Tribastone
mirco.tribastone@imtlucca.it
2015-02-10T14:10:52Z
2015-02-10T14:10:52Z
http://eprints.imtlucca.it/id/eprint/2588
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2588
2015-02-10T14:10:52Z
Lumpability of fluid models with heterogeneous agent types
Fluid models have gained popularity in the performance modeling of computing systems and communication networks. When the model under study consists of many different types of agents, the size of the associated system of ordinary differential equations (ODEs) increases with the number of types, making the analysis more difficult. We study this problem for a class of models where heterogeneity is expressed as a perturbation of certain parameters of the ODE vector field. We provide an a-priori bound that relates the solutions of the original, heterogeneous model with that of an ODE system of smaller size which arises from aggregating system variables concerning different types of agents. By showing that this bound grows linearly with the intensity of the perturbation, we provide a formal justification to the intuitive possibility of neglecting small differences in agents' behavior as a means to reducing the dimensionality of the original system.
Giulio Iacobelli
Mirco Tribastone
mirco.tribastone@imtlucca.it
2015-02-10T14:06:19Z
2015-07-24T12:31:15Z
http://eprints.imtlucca.it/id/eprint/2587
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2587
2015-02-10T14:06:19Z
Exact fluid lumpability for Markovian process algebra
We study behavioural relations for process algebra with a fluid semantics given in terms of a system of ordinary differential equations (ODEs). We introduce label equivalence, a relation which is shown to induce an exactly lumped fluid model, a potentially smaller ODE system which can be exactly related to the original one. We show that, in general, for two processes that are related in the fluid sense nothing can be said about their relationship from a stochastic viewpoint. However, we identify a class of models for which label equivalence implies a correspondence, called semi-isomorphism, between their transition systems that are at the basis of the Markovian interpretation.
Max Tschaikowski
max.tschaikowski@imtlucca.it
Mirco Tribastone
mirco.tribastone@imtlucca.it
2015-02-10T14:02:29Z
2015-07-24T12:28:15Z
http://eprints.imtlucca.it/id/eprint/2586
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2586
2015-02-10T14:02:29Z
Generalised communication for interacting agents
Process algebras for quantitative evaluation are based on one of the following two mechanisms for communication: binary, where a channel is shared by exactly two agents, or multiway, where all agents sharing a channel must synchronise. In this paper we consider an intermediate form, which we call generalised communication, where only m agents out of n potentially available are involved in the communication. We study this in the context of the stochastic process algebra PEPA, whose syntax and semantics we conservatively extend. We give an intuitive interpretation in terms of bandwidth assignments to agents communicating over a shared medium. We validate this semantics using a real implementation of a simple peer-to-peer protocol, for which our performance model yields predictions with high accuracy. We prove a result of lumpability that exploits symmetries between identical communicating agents, yielding good scalability of the underlying continuous-time Markov chain (CTMC) with respect to increasing population levels. Furthermore, we present an algorithm that derives the lumped chain directly, without having to generate the full CTMC first.
Max Tschaikowski
max.tschaikowski@imtlucca.it
Mirco Tribastone
mirco.tribastone@imtlucca.it
2015-02-10T13:50:25Z
2015-02-10T13:50:25Z
http://eprints.imtlucca.it/id/eprint/2585
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2585
2015-02-10T13:50:25Z
Performance modeling of design patterns for distributed computation
In software engineering, design patterns are commonly used and represent robust solution templates to frequently occurring problems in software design and implementation. In this paper, we consider performance simulation for two design patterns for processing of parallel messaging. We develop continuous-time Markov chain models of two commonly used design patterns, Half-Sync/Half-Async and Leader/Followers, for their performance evaluation in multicore machines. We propose a unified modeling approach which contemplates a detailed description of the application-level logic and abstracts away from operating system calls and complex locking and networking application programming interfaces. By means of a validation study against implementations on a 16-core machine, we show that the models accurately predict peak throughputs and variation trends with increasing concurrency levels for a wide range of message processing workloads. We also discuss the limits of our models when memory-level internal contention is not captured.
Ronald Strebelow
Mirco Tribastone
mirco.tribastone@imtlucca.it
Christian Prehofer
2015-02-10T13:31:51Z
2015-02-10T13:31:51Z
http://eprints.imtlucca.it/id/eprint/2584
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2584
2015-02-10T13:31:51Z
Fluid limits of queueing networks with batches
This paper presents an analytical model for the performance prediction of queueing networks with batch services and batch arrivals, related to the fluid limit of a suitable single-parameter sequence of continuous-time Markov chains and interpreted as the deterministic approximation of the average behaviour of the stochastic process. Notably, the underlying system of ordinary differential equations exhibits discontinuities in the right-hand sides, which however are proven to yield a meaningful solution. A substantial numerical assessment is used to study the quality of the approximation and shows very good accuracy in networks with large job populations.
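A minimal sketch of such a discontinuous fluid limit (a hypothetical single batch-service station, not the paper's network model): with arrival rate LAM and batches of size B completing at rate MU, the fluid queue obeys x' = LAM - B*MU when non-empty, with a discontinuity at the empty-queue boundary that Euler integration handles by clamping.

```python
# Fluid batch-service queue with a discontinuous right-hand side
# (illustrative single station; LAM, MU, B are made-up parameters).

LAM, MU, B = 1.0, 0.3, 5      # arrival rate, batch completion rate, batch size

def rhs(x):
    # Filippov-style fix at x = 0: drain can never exceed the inflow there.
    drain = B * MU if x > 0 else min(LAM, B * MU)
    return LAM - drain

x, h = 10.0, 1e-3
for _ in range(30000):        # integrate up to t = 30
    x = max(0.0, x + h * rhs(x))

print(round(x, 3))            # drains toward 0 since LAM < B*MU
```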
Luca Bortolussi
Mirco Tribastone
mirco.tribastone@imtlucca.it
2015-02-09T11:30:06Z
2015-02-09T11:30:06Z
http://eprints.imtlucca.it/id/eprint/2583
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2583
2015-02-09T11:30:06Z
Fluid analysis of queueing in two-stage random environments
Many random environments lead to Markov processes where average-environment (AVG) and near-complete-decomposability (DEC) approximations suffer unacceptably large errors. This is problematic for queueing networks in particular, where state-space explosion hinders the application of numerical methods. In this paper we introduce blending, a novel fluid-based approximation for queueing models in random environments. The technique is first introduced here for random environments with two stages. Blending estimates the equilibrium of the model by iteratively evaluating transient-analysis sub-problems for each of the two stages. Each sub-problem is solved by means of a very small system of ordinary differential equations, making the approach scalable and simple to implement. Random environments supported by blending are either state-independent, as for models with breakdown and repair, or state-dependent, such as for Markov-modulated queues where the service phase changes only during busy periods. Comparative results with AVG and DEC approximations show that blending tackles the limitations of existing methods for evaluating queues in random environments.
Giuliano Casale
Mirco Tribastone
mirco.tribastone@imtlucca.it
2015-02-09T11:09:43Z
2015-02-09T11:09:43Z
http://eprints.imtlucca.it/id/eprint/2582
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2582
2015-02-09T11:09:43Z
ASCENS: engineering autonomic service-component ensembles
Today’s developers often face the demanding task of developing software for ensembles: systems with massive numbers of nodes, operating in open and non-deterministic environments with complex interactions, and the need to dynamically adapt to new requirements, technologies or environmental conditions without redeployment and without interruption of the system’s functionality. Conventional development approaches and languages do not provide adequate support for the problems posed by this challenge. The goal of the ASCENS project is to develop a coherent, integrated set of methods and tools to build software for ensembles. To this end we research foundational issues that arise during the development of these kinds of systems, and we build mathematical models that address them. Based on these theories we design a family of languages for engineering ensembles, formal methods that can handle the size, complexity and adaptivity required by ensembles, and software-development methods that provide guidance for developers. In this paper we provide an overview of several research areas of ASCENS: the SOTA approach to ensemble engineering and the underlying formal model called GEM, formal notions of adaptation and awareness, the SCEL language, quantitative analysis of ensembles, and finally software-engineering methods for ensembles.
Martin Wirsing
Matthias Hölzl
Mirco Tribastone
mirco.tribastone@imtlucca.it
Franco Zambonelli
2015-02-09T11:05:12Z
2015-02-09T11:05:12Z
http://eprints.imtlucca.it/id/eprint/2581
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2581
2015-02-09T11:05:12Z
Approximate mean value analysis of process algebra models
Studying the existence of product forms of performance models described with compositional techniques is of central importance since this may lead to particularly efficient solution methods. This paper considers a class of models in the stochastic process algebra PEPA which do not enjoy the exact product form solutions available in the literature. However, they can be interpreted as queueing networks with service vacations and multiple resource possession, which have been shown to admit accurate analytical approximations based on mean value analysis. Special attention is devoted to situations where the use of the competing approximate method based on ordinary differential equations may be questionable due to the presence of components with few replicas.
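As background for the mean-value-analysis approximations mentioned above, the textbook exact MVA recursion for a closed product-form network (a generic sketch with made-up service demands, not the paper's approximate scheme) iterates residence times, throughput, and queue lengths over the population:

```python
# Textbook exact MVA for a closed product-form queueing network with
# single-server FCFS stations (illustrative parameters).

def mva(demands, think_time, n_customers):
    """Return throughput and per-station mean queue lengths."""
    q = [0.0] * len(demands)                # Q_k(0) = 0
    x = 0.0
    for n in range(1, n_customers + 1):
        # Residence time: service demand inflated by queue seen on arrival.
        r = [d * (1.0 + qk) for d, qk in zip(demands, q)]
        x = n / (think_time + sum(r))       # system throughput X(n)
        q = [x * rk for rk in r]            # Little's law per station
    return x, q

X, Q = mva(demands=[0.2, 0.1], think_time=1.0, n_customers=10)
print(X)    # bounded above by the bottleneck limit 1/max(demands) = 5.0
```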
Mirco Tribastone
mirco.tribastone@imtlucca.it
2015-02-09T10:58:13Z
2015-02-09T10:58:13Z
http://eprints.imtlucca.it/id/eprint/2580
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2580
2015-02-09T10:58:13Z
Modular performance modelling for mobile applications
We propose a model-based approach to analysing the performance of mobile applications where physical mobility and state changes are modelled by graph transformations from which a model in the Performance Evaluation Process Algebra (PEPA) is derived. To fight scalability problems with state space generation we adopt a modular solution where the graph transformation system is decomposed into views, for which labelled transition systems (LTS) are generated separately and later synchronised in PEPA. We demonstrate that the result of this modular analysis is equivalent to that of the monolithic approach and evaluate practicality and scalability by means of a case study.
Niaz Arijo
Reiko Heckel
Mirco Tribastone
mirco.tribastone@imtlucca.it
Stephen Gilmore
2015-02-09T10:46:27Z
2015-02-09T10:46:27Z
http://eprints.imtlucca.it/id/eprint/2579
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2579
2015-02-09T10:46:27Z
Scalable performance evaluation of computer systems
The present paper provides an overview of recent and ongoing research conducted at the Chair of Programming and Software Engineering of LMU Munich on performance evaluation of large-scale computer systems.
Mirco Tribastone
mirco.tribastone@imtlucca.it
2015-02-09T10:29:56Z
2015-02-09T10:29:56Z
http://eprints.imtlucca.it/id/eprint/2578
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2578
2015-02-09T10:29:56Z
Large-scale modelling with the PEPA eclipse plug-in
We report on recent advances in the development of the PEPA Eclipse Plug-in, a software tool which supports a complete modelling workflow for the stochastic process algebra PEPA. The most notable improvements regard the implementation of the population-based semantics, which constitutes the basis for the aggregation of models for large state spaces. Analysis is supported either via an efficient stochastic simulation algorithm or through fluid approximation based on ordinary differential equations. In either case, the functionality is provided by a common graphical interface, which presents the user with a number of wizards that ease the specification of typical performance measures such as average response time or throughput. Behind the scenes, the engine for stochastic simulation has been extended in order to support both transient and steady-state simulation and to calculate confidence levels and correlations without resorting to external tools.
Mirco Tribastone
mirco.tribastone@imtlucca.it
Stephen Gilmore
2015-02-09T09:45:25Z
2015-02-09T09:45:25Z
http://eprints.imtlucca.it/id/eprint/2576
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2576
2015-02-09T09:45:25Z
Performance prediction of service-oriented systems with layered queueing networks
We present a method for the prediction of the performance of a service-oriented architecture during its early stage of development. The system under scrutiny is modelled with the UML and two profiles: UML4SOA for specifying the functional behaviour, and MARTE for the non-functional performance-related characterisation. By means of a case study, we show how such a model can be interpreted as a layered queueing network. This target technique has the advantage of employing, as constituent blocks, entities such as threads and processors, which arise very frequently in real deployment scenarios. Furthermore, the analytical methods for the solution of the performance model scale very well with increasing problem sizes, making it possible to efficiently evaluate the behaviour of large-scale systems.
Mirco Tribastone
mirco.tribastone@imtlucca.it
Philip Mayer
Martin Wirsing
2015-02-09T09:41:12Z
2015-02-09T09:41:12Z
http://eprints.imtlucca.it/id/eprint/2575
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2575
2015-02-09T09:41:12Z
Hybrid semantics for PEPA
In order to circumvent the problem of state-space explosion of large-scale Markovian models, the stochastic process algebra PEPA has been given a fluid semantics based on ordinary differential equations, treating all entities as continuous. However, low numbers of instances and/or relatively slow dynamics may make such approximation too coarse for some parts of the system. To deal with such situations, we propose a hybrid semantics lying between these two extremes, treating parts of the system as discrete and stochastic and others as continuous and deterministic. The underlying mathematical object for the quantitative evaluation is a stochastic hybrid automaton. A case study of a client/server system with breakdowns and repairs is used to discuss the accuracy and the cost of this hybrid analysis.
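A minimal sketch of one hybrid trajectory, in the spirit of a breakdown/repair case study (all rates are made up, and the continuous flow is solved in closed form rather than by a stochastic hybrid automaton): the server mode flips after exponentially distributed sojourns (discrete, stochastic), while the queue level follows a fluid ODE between jumps (continuous, deterministic).

```python
# One trajectory of a hypothetical stochastic hybrid client/server model.
import random

random.seed(1)
LAM, MU = 1.0, 2.0          # arrival rate; service rate while the server is up
FAIL, REPAIR = 0.2, 1.0     # breakdown and repair rates (discrete jumps)

t, x, up = 0.0, 0.0, True
while t < 100.0:
    sojourn = random.expovariate(FAIL if up else REPAIR)
    drift = LAM - (MU if up else 0.0)     # net fluid rate in this mode
    x = max(0.0, x + drift * sojourn)     # closed-form flow, clamped at empty queue
    t += sojourn
    up = not up                           # discrete stochastic mode switch

print(round(x, 3))   # queue level at the first jump past t = 100
```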
Luca Bortolussi
Vashti Galpin
Jane Hillston
Mirco Tribastone
mirco.tribastone@imtlucca.it
2015-02-09T09:31:35Z
2015-02-09T09:31:35Z
http://eprints.imtlucca.it/id/eprint/2574
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2574
2015-02-09T09:31:35Z
Scalable differential analysis of large process algebra models
This tutorial is concerned with the performance evaluation of hardware/software systems using ordinary differential equations which approximate large-scale continuous-time Markov processes derived from models described with the stochastic process algebra PEPA. The tutorial is divided into three parts. The first part illustrates the main theoretical results. The second part gives an overview of a software tool, the PEPA Eclipse Plug-in, which supports the differential analysis of PEPA. In the last part, this approach is related to other efficient analysis techniques in the literature. In particular, a comparison against layered queues is presented.
Mirco Tribastone
mirco.tribastone@imtlucca.it
2015-02-09T09:22:38Z
2015-02-09T09:22:38Z
http://eprints.imtlucca.it/id/eprint/2573
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2573
2015-02-09T09:22:38Z
Relating layered queueing networks and process algebra models
This paper presents a process-algebraic interpretation of the Layered Queueing Network model. The semantics of layered multi-class servers, resource contention, multiplicity of threads and processors are mapped into a model described in the stochastic process algebra PEPA. The accuracy of the translation is validated through a case study of a distributed computer system and the numerical results are used to discuss the relative strengths and weaknesses of the different forms of analysis available in both approaches, i.e., simulation, mean-value analysis, and differential approximation.
Mirco Tribastone
mirco.tribastone@imtlucca.it
2015-02-09T09:18:02Z
2015-02-09T09:19:12Z
http://eprints.imtlucca.it/id/eprint/2572
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2572
2015-02-09T09:18:02Z
Scalable analysis of scalable systems
We present a systematic method of analysing the scalability of large-scale systems. We construct a high-level model using the SRMC process calculus and generate variants of this using model transformation. The models are compiled into systems of ordinary differential equations and numerically integrated to predict non-functional properties such as responsiveness and scalability.
Allan Clark
Stephen Gilmore
Mirco Tribastone
mirco.tribastone@imtlucca.it
2015-02-06T13:54:13Z
2015-02-06T13:54:13Z
http://eprints.imtlucca.it/id/eprint/2570
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2570
2015-02-06T13:54:13Z
Automatic translation of UML sequence diagrams into PEPA models
The UML profile for modeling and analysis of real time and embedded systems (MARTE) provides a powerful, standardised framework for the specification of non-functional properties of UML models. In this paper we present an automatic procedure to derive PEPA process algebra models from sequence diagrams (SD) to carry out quantitative evaluation. PEPA has recently been enriched with a fluid-flow semantics facilitating the analysis of models of a scale and complexity which would defeat Markovian analysis.
Mirco Tribastone
mirco.tribastone@imtlucca.it
Stephen Gilmore
2015-02-06T13:47:52Z
2015-02-06T13:47:52Z
http://eprints.imtlucca.it/id/eprint/2569
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2569
2015-02-06T13:47:52Z
Automatic extraction of PEPA performance models from UML activity diagrams annotated with the MARTE profile
Recent trends in software engineering lean towards model-centric development methodologies, a context in which the UML plays a crucial role. To provide modellers with quantitative insights into their artifacts, the UML benefits from a framework for software performance evaluation provided by MARTE, the UML profile for model-driven development of Real Time and Embedded Systems. MARTE offers a rich semantics which is general enough to allow different quantitative analysis techniques to act as underlying performance engines. In the present paper we explore the use of the stochastic process algebra PEPA as one such engine, providing a procedure to systematically map activity diagrams onto PEPA models. Independent activity flows are translated into sequential automata which co-ordinate at the synchronisation points expressed by fork and join nodes of the activity. The PEPA performance model is interpreted against a Markovian semantics which allows the calculation of performance indices such as throughput and utilisation. We also discuss the implementation of a new software tool powered by the popular Eclipse platform which implements the fully automatic translation from MARTE-annotated UML activity diagrams to PEPA models.
Mirco Tribastone
mirco.tribastone@imtlucca.it
Stephen Gilmore
2015-02-06T13:43:58Z
2015-02-06T13:43:58Z
http://eprints.imtlucca.it/id/eprint/2568
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2568
2015-02-06T13:43:58Z
Partial evaluation of PEPA models for fluid-flow analysis
We present an application of partial evaluation to performance models expressed in the PEPA stochastic process algebra [1]. We partially evaluate the state-space of a PEPA model in order to remove uses of the cooperation and hiding operators and compile an arbitrary sub-model into a single sequential component. This transformation is applied to PEPA models which are not in the correct form for the application of the fluid-flow analysis for PEPA [2]. The result of the transformation is a PEPA model which is amenable to fluid-flow analysis but which is strongly equivalent [1] to the input PEPA model and so, by an application of Hillston’s theorem, performance results computed from one model are valid for the other. We apply the method to a Markovian model of a key distribution centre used to facilitate secure distribution of cryptographic session keys between remote principals communicating over an insecure network.
Allan Clark
Adam Duguid
Stephen Gilmore
Mirco Tribastone
mirco.tribastone@imtlucca.it
2015-02-06T13:39:18Z
2015-02-06T13:39:18Z
http://eprints.imtlucca.it/id/eprint/2567
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2567
2015-02-06T13:39:18Z
Service-level agreements for service-oriented computing
Service-oriented computing is dynamic. There may be many possible service instances available for binding, leading to uncertainty about where service requests will execute. We present a novel Markovian process calculus which allows the formal expression of uncertainty about binding as found in service-oriented computing. We show how to compute meaningful quantitative information about the quality of service provided in such a setting. These numerical results can be used to allow the expression of accurate service-level agreements about service-oriented computing.
Allan Clark
Stephen Gilmore
Mirco Tribastone
mirco.tribastone@imtlucca.it
2015-02-06T13:36:11Z
2015-02-06T13:36:11Z
http://eprints.imtlucca.it/id/eprint/2566
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2566
2015-02-06T13:36:11Z
Safety and response-time analysis of an automotive accident assistance service
In the present paper we assess both the safety properties and the response-time profile of a subscription service which provides medical assistance to drivers who are injured in vehicular collisions. We use both timed and untimed process calculi cooperatively to perform the required analysis. The formal analysis tools used are hosted on a high-level modelling platform with support for scripting and orchestration which enables users to build custom analysis processes from the general-purpose analysers which are hosted as services on the platform.
Ashok Argent-Katwala
Allan Clark
Howard Foster
Stephen Gilmore
Philip Mayer
Mirco Tribastone
mirco.tribastone@imtlucca.it
2015-02-06T13:31:04Z
2015-02-06T14:11:08Z
http://eprints.imtlucca.it/id/eprint/2565
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2565
2015-02-06T13:31:04Z
The PEPA Plug-in Project
We present a GUI-based tool supporting the stochastic process algebra PEPA with modules for performance evaluation through Markovian steady-state analysis, fluid flow analysis, and stochastic simulation.
Mirco Tribastone
mirco.tribastone@imtlucca.it
2015-02-06T12:10:47Z
2015-02-06T14:09:44Z
http://eprints.imtlucca.it/id/eprint/2563
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2563
2015-02-06T12:10:47Z
An analytical model of a BitTorrent peer
In this paper we propose a Markovian model of BitTorrent. Unlike previous works, which capture demographic dynamics, it focuses on the behavior of individual peers. To this end, we center our attention on a generic peer, called the tagged peer (TP); for each possible logical state of a BitTorrent peer-to-peer connection maintained by the TP, we consider a stochastic process which counts the number of such links, and characterize them according to their state. Validation is carried out and steady-state analysis is performed in order to illustrate how performance measures can be extracted from our model.
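The steady-state analysis underlying this kind of model reduces to solving the global balance equations pi Q = 0 with sum(pi) = 1; a generic sketch (a made-up 3-state generator, not the paper's connection-state model) is:

```python
# Steady-state distribution of a small CTMC by direct linear solution.

def ctmc_steady_state(Q):
    """Solve pi Q = 0 with sum(pi) = 1 via Gaussian elimination."""
    n = len(Q)
    # Transpose Q; replace the last (redundant) balance equation
    # with the normalisation constraint sum(pi) = 1.
    A = [[Q[j][i] for j in range(n)] for i in range(n)]
    A[-1] = [1.0] * n
    b = [0.0] * (n - 1) + [1.0]
    # Gaussian elimination with partial pivoting.
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    pi = [0.0] * n
    for r in range(n - 1, -1, -1):          # back substitution
        s = sum(A[r][c] * pi[c] for c in range(r + 1, n))
        pi[r] = (b[r] - s) / A[r][r]
    return pi

# Hypothetical connection states with made-up rates (rows sum to zero).
Q = [[-0.5, 0.5, 0.0],
     [0.3, -0.7, 0.4],
     [0.0, 0.6, -0.6]]
pi = ctmc_steady_state(Q)
print([round(p, 4) for p in pi])
```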
Mario Barbera
Alfio Lombardo
Giovanni Schembra
Mirco Tribastone
mirco.tribastone@imtlucca.it
2015-02-06T12:01:12Z
2015-02-06T12:01:12Z
http://eprints.imtlucca.it/id/eprint/2562
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2562
2015-02-06T12:01:12Z
Replicating web services for scalability
Web service instances are often replicated to allow service provision to scale to support larger population sizes of users. However, such systems are difficult to analyse because the scale and complexity inherent in the system itself poses challenges for accurate qualitative or quantitative modelling. We use two process calculi cooperatively in the analysis of an example Web service replicated across many servers. The SOCK calculus is used to model service-oriented aspects closely and the PEPA calculus is used to analyse the performance of the system under increasing load.
Mario Bravetti
Stephen Gilmore
Claudio Guidi
Mirco Tribastone
mirco.tribastone@imtlucca.it
2015-02-06T11:57:01Z
2015-02-06T12:01:45Z
http://eprints.imtlucca.it/id/eprint/2561
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2561
2015-02-06T11:57:01Z
Evaluating the scalability of a web service-based distributed e-learning and course management system
A growing concern of Web service providers is scalability. An implementation of a Web service may be able at present to support its user base, but how can a provider judge what will happen if that user base grows? We present a modelling approach based on process algebra which allows service providers to investigate how models of Web service execution scale with increasing client population sizes. The method has the benefit of allowing a simple model of the service to be scaled to realistic population sizes without the modeller needing to aggregate or re-model the system.
Stephen Gilmore
Mirco Tribastone
mirco.tribastone@imtlucca.it
2015-02-06T11:52:22Z
2015-02-06T14:06:29Z
http://eprints.imtlucca.it/id/eprint/2560
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2560
2015-02-06T11:52:22Z
A Markov model of a freerider in a BitTorrent P2P network
BitTorrent is today one of the largest P2P file-sharing systems for Internet users, yet very little modelling effort has been devoted to it so far. The goal of this paper is to develop an analytical model of a free-rider in a BitTorrent network. Unlike previous analytical models, which capture the behavior of the network as a whole, the proposed model is able to analyze performance from the user's perspective. The model is applied to a case study to evaluate performance in a real setting, and to obtain some insights into the influence of BitTorrent parameters on system performance.
Mario Barbera
Alfio Lombardo
Giovanni Schembra
Mirco Tribastone
mirco.tribastone@imtlucca.it
2015-02-06T11:43:48Z
2015-02-06T11:43:48Z
http://eprints.imtlucca.it/id/eprint/2559
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2559
2015-02-06T11:43:48Z
The PEPA Plug-in project
Mirco Tribastone
mirco.tribastone@imtlucca.it
2015-02-06T11:39:40Z
2015-07-24T12:32:00Z
http://eprints.imtlucca.it/id/eprint/2558
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2558
2015-02-06T11:39:40Z
Refined theory of packages
The fluid approximation for PEPA usually considers large populations of simple interacting sequential components characterised by small local state spaces. A natural question that arises is whether it is possible to extend this technique to composite processes with arbitrarily large local state spaces. In [1] the authors were able to give a positive answer for a certain class of models. The current paper enlarges this class.
Max Tschaikowski
max.tschaikowski@imtlucca.it
Mirco Tribastone
mirco.tribastone@imtlucca.it
2015-02-06T11:16:22Z
2015-02-06T11:16:22Z
http://eprints.imtlucca.it/id/eprint/2557
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2557
2015-02-06T11:16:22Z
Process-algebraic modelling of priority queueing networks
We consider a closed multiclass queueing network model in which each class receives a different priority level and jobs with lower priority are served only if there are no higher-priority jobs in the queue. Such systems do not enjoy a product-form solution, thus their analysis is typically carried out through approximate mean value analysis (AMVA) techniques. We formalise the problem in PEPA in a way amenable to differential analysis. Experimental results show that our approach is competitive with simulation and AMVA methods.
Giuliano Casale
Mirco Tribastone
mirco.tribastone@imtlucca.it
2015-02-06T10:35:18Z
2015-02-06T13:56:54Z
http://eprints.imtlucca.it/id/eprint/2556
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2556
2015-02-06T10:35:18Z
Differential analysis of PEPA models
Mirco Tribastone
mirco.tribastone@imtlucca.it
2015-02-06T10:14:24Z
2015-02-06T10:14:24Z
http://eprints.imtlucca.it/id/eprint/2555
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2555
2015-02-06T10:14:24Z
Bottom-up beats top-down hands down
In PEPA, the calculation of the transitions enabled by a process accounts for a large part of the time spent on the state-space exploration of the underlying Markov chain. Unlike other approaches based on recursion, we present a new technique that is iterative: it traverses the process's binary tree from the sequential components at the leaves up to the root. Empirical results show that this algorithm is faster than a similar implementation employing recursion in Java. Finally, a study on user-perceived performance compares our algorithm with those of other existing tools (ipc/Hydra and the PEPA Workbench).
Mirco Tribastone
mirco.tribastone@imtlucca.it
2015-02-02T10:31:14Z
2015-11-02T13:06:43Z
http://eprints.imtlucca.it/id/eprint/2553
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2553
2015-02-02T10:31:14Z
The economy of attention in the age of (mis)information
In this work we present a thorough quantitative analysis of the consumption patterns of qualitatively different information on Facebook. Pages are categorized, according to their topics and the communities of interest they pertain to, into a) alternative information sources (diffusing topics that are neglected by science and mainstream media); b) online political activism; and c) mainstream media. We find similar information consumption patterns despite the very different nature of the contents. Then, we classify users according to their interaction patterns among the different topics and measure how they responded to the injection of 2,788 items of false information (parodistic imitations of alternative stories). We find that users prominently interacting with alternative information sources, i.e. those more exposed to unsubstantiated claims, are more prone to interact with intentional and parodistic false claims.
Alessandro Bessi
Antonio Scala
Luc Rossi
Qian Zhang
Walter Quattrociocchi
walter.quattrociocchi@imtlucca.it
2015-02-02T10:25:24Z
2015-02-02T10:25:24Z
http://eprints.imtlucca.it/id/eprint/2552
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2552
2015-02-02T10:25:24Z
Structural patterns of the occupy movement on Facebook
In this work we study a peculiar example of social organization on Facebook: the Occupy movement, an international protest movement against social and economic inequality organized online at the city level. We consider 179 US Facebook public pages during the period between September 2011 and February 2013. The dataset includes 618K active users and 753K posts that received about 5.2M likes and 1.1M comments. By labeling users according to their interaction patterns on pages (e.g., a user is considered polarized if she has at least 95% of her likes on a specific page), we find that activities are not locally coordinated by geographically close pages, but are driven by pages linked to major US cities that act as hubs within the various groups. This pattern is confirmed even when extracting the backbone structure, i.e., filtering statistically relevant weight heterogeneities, for both the pages-reshares and the pages-common-users networks.
Michela Del Vicario
michela.delvicario@imtlucca.it
Qian Zhang
Alessandro Bessi
Fabiana Zollo
fabiana.zollo@imtlucca.it
Antonio Scala
Guido Caldarelli
guido.caldarelli@imtlucca.it
Walter Quattrociocchi
walter.quattrociocchi@imtlucca.it
2015-02-02T10:18:27Z
2016-04-07T10:13:21Z
http://eprints.imtlucca.it/id/eprint/2550
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2550
2015-02-02T10:18:27Z
Everyday the same picture: popularity and content diversity
Facebook is flooded with diverse and heterogeneous content, from kittens to music and news, passing through satirical and funny stories. Each piece of that corpus reflects the heterogeneity of the underlying social background. On Italian Facebook we found an interesting case: a page with more than 40K followers that every day posts the same picture of Toto Cutugno, a popular Italian singer. In this work, we use such a page as a benchmark to study and model the effects of content heterogeneity on popularity. In particular, we use that page for a comparative analysis of information consumption patterns with respect to pages posting science and conspiracy news. In total, we analyze about 2M likes and 190K comments, made by approximately 340K and 65K users, respectively. We conclude the paper by introducing a model mimicking users' selection preferences and accounting for the heterogeneity of contents.
Alessandro Bessi
Fabiana Zollo
fabiana.zollo@imtlucca.it
Michela Del Vicario
michela.delvicario@imtlucca.it
Antonio Scala
Fabio Petroni
Bruno Gonçalves
Walter Quattrociocchi
walter.quattrociocchi@imtlucca.it
2015-02-02T09:53:13Z
2015-02-02T09:53:13Z
http://eprints.imtlucca.it/id/eprint/2549
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2549
2015-02-02T09:53:13Z
Unsupervised and supervised approaches to color space transformation for image coding
The linear transformation of input (typically RGB) data into a color space is important in image compression. Most schemes adopt fixed transforms to decorrelate the color channels, while energy-compaction transforms such as the Karhunen-Loève transform (KLT) entail a complexity increase. Here, we propose a new data-dependent transform (aKLT) that achieves compression performance comparable to the KLT at a fraction of the computational complexity. More importantly, we also consider an application-aware setting, in which a classifier analyzes reconstructed images at the receiver's end. In this context, KLT-based approaches may not be optimal, and transforms that maximize post-compression classifier performance are better suited. Relaxing energy-compactness constraints, we propose for the first time a transform that can be found offline by optimizing the Fisher discrimination criterion in a supervised fashion. In lieu of channel decorrelation, we obtain spatial decorrelation by using the same color transform as a rudimentary classifier to detect objects of interest in the input image without adding any computational cost. When combined with region-of-interest-capable encoders such as JPEG 2000, we achieve higher savings by encoding these regions at higher quality.
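As an illustrative aside, the data-dependent decorrelation that the aKLT approximates can be sketched as a plain KLT over the three colour channels, i.e. projecting pixels onto the eigenvectors of the inter-channel covariance; the synthetic image below is assumed for illustration, and this is the baseline KLT, not the paper's aKLT or Fisher-criterion transform.

```python
import numpy as np

# Hedged sketch: channel-wise KLT. Build a synthetic image whose R, G, B
# channels are strongly correlated, then decorrelate them.
rng = np.random.default_rng(1)
base = rng.uniform(size=(64, 64))
img = np.stack([base, 0.8 * base + 0.1, 0.6 * base + 0.2], axis=-1)

pixels = img.reshape(-1, 3)
cov = np.cov(pixels, rowvar=False)       # 3x3 inter-channel covariance
eigvals, eigvecs = np.linalg.eigh(cov)   # KLT basis = eigenvectors
klt = pixels @ eigvecs                   # transformed (decorrelated) channels

print(np.round(np.cov(klt, rowvar=False), 6))  # off-diagonals vanish
```

After the transform the signal energy concentrates in the channel with the largest eigenvalue, which is what makes the representation attractive for compression.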
Massimo Minervini
massimo.minervini@imtlucca.it
Cristian Rusu
Sotirios A. Tsaftaris
sotirios.tsaftaris@imtlucca.it
2015-01-16T09:54:20Z
2015-01-16T09:54:20Z
http://eprints.imtlucca.it/id/eprint/2500
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2500
2015-01-16T09:54:20Z
Causal-consistent reversibility
Reversible computing allows one to execute programs both in the standard, forward direction, and backward, going back to past states. In a concurrent scenario, the correct notion of reversibility is causal-consistent reversibility: any action can be undone, provided that all its consequences (if any) are undone beforehand. In this paper we present an overview of the main approaches, results, and applications of causal-consistent reversibility.
Ivan Lanese
Claudio Antares Mezzina
claudio.mezzina@imtlucca.it
Francesco Tiezzi
francesco.tiezzi@imtlucca.it
2015-01-16T09:29:19Z
2015-01-16T09:32:56Z
http://eprints.imtlucca.it/id/eprint/2499
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2499
2015-01-16T09:29:19Z
A goal model for collective adaptive systems
Antonio Bucchiarone
Claudio Antares Mezzina
claudio.mezzina@imtlucca.it
Heorhi Raik
2015-01-16T09:19:23Z
2015-01-16T09:19:23Z
http://eprints.imtlucca.it/id/eprint/2498
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2498
2015-01-16T09:19:23Z
Collective adaptation in process-based systems
A collective adaptive system is composed of a set of heterogeneous, autonomous and self-adaptive entities that collaborate with one another in order to improve the effectiveness with which they can accomplish their individual goals. In this paper, we offer a characterization of ensembles as the main concept around which systems that exhibit collective adaptability can be built. Our conceptualization of ensembles enables us to define a collective adaptive system as an emergent aggregation of autonomous and self-adaptive process-based elements. To elucidate our approach to ensembles and collective adaptation, we draw an example from a scenario in the urban mobility domain, describe an architecture that enables that approach, and show how our approach can address the problems posed by the motivating scenario.
Antonio Bucchiarone
Claudio Antares Mezzina
claudio.mezzina@imtlucca.it
Marco Pistore
Heorhi Raik
Giuseppe Valetto
2015-01-16T09:01:05Z
2015-01-16T09:01:05Z
http://eprints.imtlucca.it/id/eprint/2497
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2497
2015-01-16T09:01:05Z
Causal-consistent reversible debugging
Reversible debugging provides developers with a way to execute their applications both forward and backward, seeking the cause of an unexpected or undesired event. In a concurrent setting, reversing actions in the exact reverse order in which they were executed may lead to undoing many actions that are unrelated to the bug under analysis. On the other hand, undoing actions in an order that violates causal dependencies may lead to states that could not be reached in a forward execution. We propose an approach based on causal-consistent reversibility: each action can be reversed if all its consequences have already been reversed. The main feature of the approach is that it allows the programmer to easily identify and undo exactly the actions that caused a given misbehavior, until the corresponding bug is reached. This paper's major contribution is the identification of the appropriate primitives for causal-consistent reversible debugging and their prototype implementation in the CaReDeb tool. We also show how to apply CaReDeb to identify common real-world concurrent bugs.
Elena Giachino
Ivan Lanese
Claudio Antares Mezzina
claudio.mezzina@imtlucca.it
2015-01-16T08:52:19Z
2015-01-16T08:52:19Z
http://eprints.imtlucca.it/id/eprint/2496
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2496
2015-01-16T08:52:19Z
A conceptual framework for collective adaptive systems
In this paper we propose a conceptual framework to characterize Collective Adaptive Systems. Following the separation of concerns, we represent these systems as a composition of three components, execution, context and adaptation, and we give a formal definition of all their concepts, defining their corresponding semantics and pointing out the interactions among them. Moreover, we also formalize the main properties that these systems should have, abstracting from any precise specification language or model.
Antonio Bucchiarone
Annapaola Marconi
Claudio Antares Mezzina
claudio.mezzina@imtlucca.it
Marco Pistore
2015-01-16T08:49:08Z
2015-01-16T08:49:08Z
http://eprints.imtlucca.it/id/eprint/2495
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2495
2015-01-16T08:49:08Z
CAptLang: a language for context-aware and adaptable business processes
Run-time adaptability is a key feature of dynamic business environments, where processes need to be constantly refined and restructured to deal with context changes. In this paper, we present CAptLang, a language for modelling context-aware and adaptable business processes, whose main feature is the possibility of deferring the handling of extraordinary or improbable situations to run time. We present CAptLang with its formal syntax and semantics. Moreover, we show how its semantics has been used to guide the implementation of a Java-based business process execution engine, a component of the ASTRO-CAptEvo adaptation framework.
Antonio Bucchiarone
Claudio Antares Mezzina
claudio.mezzina@imtlucca.it
Marco Pistore
2015-01-16T08:44:12Z
2015-01-16T08:44:12Z
http://eprints.imtlucca.it/id/eprint/2494
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2494
2015-01-16T08:44:12Z
Concurrent flexible reversibility
Concurrent reversibility has been studied in different areas, such as biological systems and dependable distributed systems. However, only “rigid” reversibility has been considered, which allows one to go back to a past state and restart the exact same computation, possibly leading to divergence. In this paper, we present croll-π, a concurrent calculus featuring flexible reversibility, allowing the specification of alternatives to a computation to be used upon rollback. Alternatives in croll-π are attached to messages. We show the robustness of this mechanism by encoding more complex idioms for specifying flexible reversibility, and we illustrate the benefits of our approach by encoding a calculus of communicating transactions.
Ivan Lanese
Michael Lienhardt
Claudio Antares Mezzina
claudio.mezzina@imtlucca.it
Alan Schmitt
Jean-Bernard Stefani
2015-01-16T08:39:16Z
2015-01-16T08:39:16Z
http://eprints.imtlucca.it/id/eprint/2493
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2493
2015-01-16T08:39:16Z
On-the-fly adaptation of dynamic service-based systems: incrementality, reduction and reuse
In on-the-fly adaptation, adaptation activities are not explicitly represented at design time but are discovered and managed at run time, considering all aspects of the execution environment. In this paper we present a comprehensive framework for the on-the-fly adaptation of highly dynamic service-based systems. The framework relies on advanced context-aware adaptation techniques that allow for i) incremental handling of complex adaptation problems by interleaving problem solving and solution execution, ii) reduction of the complexity of each adaptation problem by minimizing the search space according to the specific execution context, and iii) reuse of adaptation solutions by learning from past executions. We evaluate the applicability of the proposed approach on a real-world scenario based on the operation of the Bremen sea port.
Antonio Bucchiarone
Annapaola Marconi
Claudio Antares Mezzina
claudio.mezzina@imtlucca.it
Marco Pistore
Heorhi Raik
2015-01-16T08:31:09Z
2015-01-16T08:31:09Z
http://eprints.imtlucca.it/id/eprint/2492
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2492
2015-01-16T08:31:09Z
Towards modeling and execution of Collective Adaptive Systems
Collective Adaptive Systems comprise large numbers of heterogeneous entities that can join and leave the system at any time depending on their own objectives. In the scope of pervasive computing, both physical and virtual entities may exist, e.g., buses and their passengers using mobile devices, as well as city-wide traffic coordination systems. In this paper we introduce a novel conceptual framework that enables the construction of Collective Adaptive Systems based on well-founded and widely accepted paradigms and technologies like service orientation, distributed systems, context-aware computing and adaptation of composite systems. Toward achieving this goal, we also present an architecture that underpins the envisioned framework, discuss the current state of our implementation effort, and outline the open issues and challenges in the field.
Vasilios Andrikopoulos
Antonio Bucchiarone
Santiago Gómez Sáez
Dimka Karastoyanova
Claudio Antares Mezzina
claudio.mezzina@imtlucca.it
2015-01-15T13:24:00Z
2015-01-15T13:24:00Z
http://eprints.imtlucca.it/id/eprint/2491
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2491
2015-01-15T13:24:00Z
A reversible abstract machine and its space overhead
We study in this paper the cost of making a concurrent programming language reversible. More specifically, we take an abstract machine for a fragment of the Oz programming language and make it reversible. We show that the overhead of the reversible machine with respect to the original one in terms of space is at most linear in the number of execution steps. We also show that this bound is tight since some programs cannot be made reversible without storing a commensurate amount of information.
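As an illustrative aside, the linear space overhead the abstract refers to can be sketched with a toy reversible store: each executed step logs just enough information to undo it, so the history grows linearly with the number of steps. This is an assumed toy example, not the Oz abstract machine of the paper.

```python
# Hedged sketch: a reversible assignment store. Each assignment logs the
# overwritten value (or None if the variable was fresh), so the history
# grows by one entry per executed step.
store = {"x": 0}
history = []  # one (variable, previous value) entry per step

def assign(var, value):
    history.append((var, store.get(var)))  # save what we overwrite
    store[var] = value

def undo():
    var, old = history.pop()
    if old is None:
        del store[var]      # the variable did not exist before
    else:
        store[var] = old

assign("x", 1)
assign("x", 2)
assign("y", 5)
undo()   # removes y
undo()   # restores x to 1
print(store)  # {'x': 1}
```

The tightness result in the abstract says this linearity cannot be avoided in general: some programs genuinely need to remember a commensurate amount of information to run backward.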
Michael Lienhardt
Ivan Lanese
Claudio Antares Mezzina
claudio.mezzina@imtlucca.it
Jean-Bernard Stefani
2015-01-15T13:15:40Z
2015-01-15T13:15:40Z
http://eprints.imtlucca.it/id/eprint/2490
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2490
2015-01-15T13:15:40Z
Controlled reversibility and compensations
In this paper we report the main ideas of an ongoing thread of research that aims at exploiting reversibility mechanisms to define programming abstractions for dependable distributed systems. In particular, we discuss the issues posed by concurrency in the definition of controlled forms of reversibility. We also discuss the need to introduce compensations in order to deal with irreversible actions and to avoid repeating past errors.
Ivan Lanese
Claudio Antares Mezzina
claudio.mezzina@imtlucca.it
Jean-Bernard Stefani
2015-01-15T13:12:28Z
2015-01-15T13:12:28Z
http://eprints.imtlucca.it/id/eprint/2489
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2489
2015-01-15T13:12:28Z
Controlling reversibility in higher-order pi
We present in this paper a fine-grained rollback primitive for the higher-order π-calculus (HOπ), that builds on the reversibility apparatus of reversible HOπ [9]. The definition of a proper semantics for such a primitive is a surprisingly delicate matter because of the potential interferences between concurrent rollbacks. We define in this paper a high-level operational semantics which we prove sound and complete with respect to reversible HOπ backward reduction. We also define a lower-level distributed semantics, which is closer to an actual implementation of the rollback primitive, and we prove it to be fully abstract with respect to the high-level semantics.
Ivan Lanese
Claudio Antares Mezzina
claudio.mezzina@imtlucca.it
Alan Schmitt
Jean-Bernard Stefani
2015-01-15T13:06:00Z
2015-01-15T13:06:00Z
http://eprints.imtlucca.it/id/eprint/2488
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2488
2015-01-15T13:06:00Z
Reversing higher-order pi
The notion of reversible computation is attracting increasing interest because of its applications in diverse fields, in particular the study of programming abstractions for reliable systems. In this paper, we continue the study undertaken by Danos and Krivine on reversible CCS by defining a reversible higher-order π-calculus (HOπ). We prove that reversibility in our calculus is causally consistent and that one can encode faithfully reversible HOπ into a variant of HOπ.
Ivan Lanese
Claudio Antares Mezzina
claudio.mezzina@imtlucca.it
Jean-Bernard Stefani
2015-01-15T13:01:59Z
2015-01-15T13:01:59Z
http://eprints.imtlucca.it/id/eprint/2487
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2487
2015-01-15T13:01:59Z
Typing component-based communication systems
Building complex component-based software systems, for instance communication systems based on the Click, Coyote, Appia, or Dream frameworks, can lead to subtle assemblage errors. We present a novel type system and type inference algorithm that prevent interconnection and message-handling errors when assembling component-based communication systems. These errors are typically not captured by classical type systems of host programming languages such as Java or ML. We have implemented our approach by extending the architecture description language (ADL) toolset used by the Dream framework, and used it to check Dream-based communication systems.
Michael Lienhardt
Claudio Antares Mezzina
claudio.mezzina@imtlucca.it
Alan Schmitt
Jean-Bernard Stefani
2015-01-13T15:04:49Z
2015-02-18T12:06:10Z
http://eprints.imtlucca.it/id/eprint/2480
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2480
2015-01-13T15:04:49Z
Sparse solutions to the average consensus problem via L1-Norm regularization of the fastest mixing Markov-Chain problem
In the “consensus problem” on multi-agent systems, in which the states of the agents are “opinions”, the agents aim at reaching a common opinion (or “consensus state”) through local exchange of information. An important design problem is to choose the degree of interconnection of the subsystems so as to achieve a good trade-off between a small number of interconnections and a fast convergence to the consensus state, which is the average of the initial opinions under mild conditions. This paper addresses this problem through l1-norm regularized versions of the well-known fastest mixing Markov-chain problem, which are investigated theoretically. In particular, it is shown that such versions can be interpreted as “robust” forms of the fastest mixing Markov-chain problem. Theoretical results useful to guide the choice of the regularization parameters are also provided, together with a numerical example.
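As an illustrative aside, the average-consensus dynamics underlying the fastest-mixing-Markov-chain design can be sketched with repeated local averaging under a symmetric doubly stochastic weight matrix; the 3-agent matrix and opinions below are an assumed toy example, not the paper's l1-regularized design.

```python
import numpy as np

# Hedged sketch: average consensus x <- W x with a symmetric doubly
# stochastic W (a Markov-chain transition matrix). All entries converge
# to the average of the initial opinions.
W = np.array([[0.50, 0.25, 0.25],
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])   # rows and columns sum to 1

x = np.array([1.0, 5.0, 9.0])        # initial opinions; average is 5.0
for _ in range(100):
    x = W @ x                         # each agent averages its neighbours

print(x)  # all entries are (numerically) 5.0
```

The convergence speed is governed by the second-largest eigenvalue modulus of W, which is exactly the quantity the fastest-mixing-Markov-chain problem minimizes; the paper's l1 regularization additionally pushes edge weights to zero, sparsifying the interconnection.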
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Rita Morisi
rita.morisi@imtlucca.it
Alberto Bemporad
alberto.bemporad@imtlucca.it
2015-01-13T14:59:41Z
2015-01-13T14:59:41Z
http://eprints.imtlucca.it/id/eprint/2479
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2479
2015-01-13T14:59:41Z
A proximal alternating minimization method for L0-Regularized nonlinear optimization problems: application to state estimation
In this paper we consider the minimization of l0-regularized nonlinear optimization problems, where the objective function is the sum of a smooth convex term and the l0 quasi-norm of the decision variable. We introduce the class of coordinatewise minimizers and prove that any point in this class is a local minimum for our l0-regularized problem. Then, we devise a random proximal alternating minimization method, which has a simple iteration and is suitable for solving this class of optimization problems. Under convexity and coordinatewise Lipschitz gradient assumptions, we prove that any limit point of the sequence generated by our new algorithm belongs to the class of coordinatewise minimizers almost surely. We also show that the state estimation of dynamical systems with corrupted measurements can be modeled in our framework. Numerical experiments on state estimation of power systems, using IEEE bus test cases, show that our algorithm performs favorably on such problems.
Andrei - Mihai Patrascu
Ion Necoara
Panagiotis Patrinos
panagiotis.patrinos@imtlucca.it
2015-01-13T14:42:02Z
2015-01-13T14:42:02Z
http://eprints.imtlucca.it/id/eprint/2478
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2478
2015-01-13T14:42:02Z
A unified framework for solving a general class of conditional and robust set-membership estimation problems
In this paper, we present a unified framework for solving a general class of problems arising in the context of set-membership estimation/identification theory. More precisely, the paper aims at providing an original approach for the computation of optimal conditional and robust projection estimates in a nonlinear estimation setting, where the operator relating the data and the parameter to be estimated is assumed to be a generic multivariate polynomial function, and the uncertainties affecting the data are assumed to belong to semialgebraic sets. By noticing that the computation of both the conditional and the robust projection optimal estimators requires the solution to min-max optimization problems that share the same structure, we propose a unified two-stage approach based on semidefinite-relaxation techniques for solving such estimation problems. The key idea of the proposed procedure is to recognize that the optimal functional of the inner optimization problems can be approximated to any desired precision by a multivariate polynomial function by suitably exploiting recently proposed results in the field of parametric optimization. Two simulation examples are reported to show the effectiveness of the proposed approach.
Vito Cerone
Jean-Bernard Lasserre
Dario Piga
dario.piga@imtlucca.it
Diego Regruto
2015-01-13T14:34:09Z
2015-11-02T09:57:27Z
http://eprints.imtlucca.it/id/eprint/2477
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2477
2015-01-13T14:34:09Z
Characteristic polynomial assignment for plants with semialgebraic uncertainty: a robust diophantine equation approach
In this paper, we address the problem of robust characteristic polynomial assignment for LTI systems whose parameters are assumed to belong to a semialgebraic uncertainty region. The objective is to design a dynamic fixed-order controller in order to constrain the coefficients of the closed-loop characteristic polynomial within prescribed intervals. First, necessary conditions on the plant parameters for the existence of a robust controller are reviewed, and it is shown that such conditions are satisfied if and only if a suitable Sylvester matrix is nonsingular for all possible values of the uncertain plant parameters. The problem of checking such a robust nonsingularity condition is formulated in terms of a nonconvex optimization problem. Then, the set of all feasible robust controllers is sought through the solution to a suitable robust diophantine equation. Convex relaxation techniques based on sum-of-square decomposition of positive polynomials are used to efficiently solve the formulated optimization problems by means of semidefinite programming. The presented approach provides a generalization of the results previously proposed in the literature on the problem of assigning the characteristic polynomial in the presence of plant parametric uncertainty.
Vito Cerone
Dario Piga
dario.piga@imtlucca.it
Diego Regruto
2015-01-13T14:24:45Z
2015-01-13T14:24:45Z
http://eprints.imtlucca.it/id/eprint/2476
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2476
2015-01-13T14:24:45Z
A bias-corrected estimator for nonlinear systems with output-error type model structures
Parametric identification of linear time-invariant (LTI) systems with output-error (OE) type noise model structures has a well-established theoretical framework. Different algorithms, like instrumental-variables-based approaches or prediction error methods (PEMs), have been proposed in the literature to compute a consistent parameter estimate for linear OE systems. Although the prediction error method provides a consistent parameter estimate also for nonlinear output-error (NOE) systems, it requires computing the solution of a nonconvex optimization problem. Therefore, an accurate initialization of the numerical optimization algorithms is required, otherwise they may get stuck in a local minimum and, as a consequence, the computed estimate of the system might not be accurate. In this paper, we propose an approach to obtain, in a computationally efficient fashion, a consistent parameter estimate for output-error systems with polynomial nonlinearities. The performance of the method is demonstrated through a simulation example.
Dario Piga
dario.piga@imtlucca.it
Roland Tóth
2015-01-13T14:22:23Z
2015-01-13T14:22:23Z
http://eprints.imtlucca.it/id/eprint/2475
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2475
2015-01-13T14:22:23Z
Approximation of model predictive control laws for polynomial systems
A fast implementation of a given predictive controller for polynomial systems is introduced by approximating the optimal control law with a piecewise constant function defined over a hyper-cube partition of the system state space. Such a state-space partition is computed in order to guarantee stability, an a priori fixed trajectory error as well as input and state constraints fulfilment. The presented approximation procedure is achieved by solving a set of nonconvex polynomial optimization problems, whose approximate solutions are computed by means of semidefinite relaxation techniques for semialgebraic problems.
Massimo Canale
Vito Cerone
Dario Piga
dario.piga@imtlucca.it
Diego Regruto
2015-01-13T14:18:27Z
2015-01-13T14:18:27Z
http://eprints.imtlucca.it/id/eprint/2474
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2474
2015-01-13T14:18:27Z
An SDP approach for l0-minimization: application to ARX model segmentation
Minimizing the ℓ0-seminorm of a vector under convex constraints is a combinatorial (NP-hard) problem. Replacement of the ℓ0-seminorm with the ℓ1-norm is a commonly used approach to compute an approximate solution of the original ℓ0-minimization problem by means of convex programming. In the theory of compressive sensing, the condition that the sensing matrix satisfies the Restricted Isometry Property (RIP) is sufficient to guarantee that the solution of the ℓ1-approximated problem equals the solution of the original ℓ0-minimization problem. However, evaluating the conservativeness of ℓ1-relaxation approaches is recognized to be a difficult task when the RIP is not satisfied. In this paper, we present an alternative approach to minimize the ℓ0-seminorm of a vector under given constraints. In particular, we show that an ℓ0-minimization problem can be relaxed into a sequence of semidefinite programming problems, whose solutions are guaranteed to converge to the optimizer (if unique) of the original combinatorial problem even when the RIP is not satisfied. Segmentation of ARX models is then discussed in order to show, through a relevant problem in system identification, that the proposed approach outperforms the ℓ1-based relaxation in detecting piecewise constant parameter changes in the estimated model.
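As an illustrative aside, the ℓ1 surrogate that this paper's SDP approach is compared against can be sketched with iterative soft-thresholding (ISTA) on an ℓ1-regularized least-squares problem; the random data and parameters below are assumed for illustration, and this is the baseline relaxation, not the paper's semidefinite programming method.

```python
import numpy as np

# Hedged sketch: ISTA for min_x 0.5*||Ax - b||^2 + lam*||x||_1,
# the standard convex (l1) surrogate for l0-minimization.
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))
x_true = np.zeros(10)
x_true[[2, 7]] = [3.0, -2.0]            # sparse ground truth
b = A @ x_true                           # noiseless measurements

lam = 0.1
step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / largest squared singular value
x = np.zeros(10)
for _ in range(2000):
    z = x - step * (A.T @ (A @ x - b))           # gradient step
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft-threshold

print(np.round(x, 2))  # estimate concentrates on the true sparse support
```

When conditions like the RIP hold, this relaxation recovers the ℓ0 solution; the abstract's point is precisely about what to do when such guarantees are unavailable.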
Dario Piga
dario.piga@imtlucca.it
Roland Tóth
2015-01-13T14:12:42Z
2015-01-13T14:12:42Z
http://eprints.imtlucca.it/id/eprint/2473
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2473
2015-01-13T14:12:42Z
A convex relaxation approach to set-membership identification of LPV systems
Identification of linear parameter varying models is considered in this paper, under the assumption that both the output and the scheduling parameter measurements are affected by bounded noise. First, the problem of computing parameter uncertainty intervals is formulated in terms of nonconvex optimization. Then, on the basis of the analysis of the regressor structure, we present an ad hoc convex relaxation scheme for computing parameter bounds by means of semidefinite optimization.
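The noisy scheduling measurements are what makes this problem nonconvex. In the simpler special case where the regressors are exactly known and only the output noise is bounded, the feasible parameter set is a polytope and each uncertainty bound reduces to a linear program; a minimal sketch of that special case, on synthetic data:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
N, eps = 60, 0.1
theta_true = np.array([0.7, -0.3])
Phi = rng.standard_normal((N, 2))              # exactly known regressors
y = Phi @ theta_true + rng.uniform(-eps, eps, N)

# feasible set D = {theta : |y - Phi theta| <= eps}, a polytope; the
# uncertainty interval of theta_i is [min, max] of theta_i over D
A_ub = np.vstack([Phi, -Phi])
b_ub = np.concatenate([y + eps, eps - y])
free = [(None, None)] * 2
intervals = []
for i in range(2):
    c = np.zeros(2)
    c[i] = 1.0
    lo = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=free).fun
    hi = -linprog(-c, A_ub=A_ub, b_ub=b_ub, bounds=free).fun
    intervals.append((lo, hi))                 # guaranteed to contain theta_true[i]
```

When the scheduling signal is itself uncertain, the constraints become polynomial in the unknowns and this LP structure is lost, which is where the paper's convex relaxation scheme comes in.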
Vito Cerone
Dario Piga
dario.piga@imtlucca.it
Diego Regruto
2015-01-13T14:08:50Z
2015-01-13T14:08:50Z
http://eprints.imtlucca.it/id/eprint/2472
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2472
2015-01-13T14:08:50Z
Fixed-order FIR approximation of linear systems from quantized input and output data
The problem of identifying a fixed-order FIR approximation of linear systems with unknown structure, assuming that both input and output measurements are subject to quantization, is dealt with in this paper. A fixed-order FIR model providing the best approximation of the input-output relationship is sought by minimizing the worst-case distance between the output of the true system and the modeled output, for all possible values of the input and output data consistent with their quantized measurements. The considered problem is first formulated in terms of robust optimization. Then, two different algorithms to compute the optimum of the formulated problem by means of linear programming techniques are presented. The effectiveness of the proposed approach is illustrated by means of a simulation example.
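The output-quantized special case already shows the robust-LP structure: a quantized record q pins the true output to the interval [q - Δ/2, q + Δ/2], and the worst case over that interval is linear in the FIR coefficients. A sketch under the simplifying assumption that the input is exactly known (the paper treats quantized inputs as well), on a synthetic system:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(2)
T, n_g, delta = 120, 4, 0.25                   # samples, FIR length, quantizer step
u = rng.standard_normal(T)                     # input assumed exactly known here
g_true = np.array([0.8, 0.4, -0.2, 0.1])
y = np.convolve(u, g_true)[:T]
q = delta * np.round(y / delta)                # quantized output records
lo, hi = q - delta / 2, q + delta / 2          # outputs consistent with q

# FIR regressor matrix: y_hat = U g
U = np.column_stack([np.concatenate([np.zeros(k), u[:T - k]]) for k in range(n_g)])

# min_g max_t max_{y in [lo, hi]} |y_t - (U g)_t|  as an LP in z = [g; gamma]:
#   (U g)_t - gamma <= lo_t   and   -(U g)_t - gamma <= -hi_t
c = np.concatenate([np.zeros(n_g), [1.0]])
A_ub = np.block([[U, -np.ones((T, 1))], [-U, -np.ones((T, 1))]])
b_ub = np.concatenate([lo, -hi])
res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * n_g + [(0, None)])
g_hat, gamma = res.x[:n_g], res.x[-1]          # gamma: guaranteed worst-case error
```

The optimal gamma can never drop below Δ/2 (the interval half-width), which is the irreducible price of quantization in this worst-case sense.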
Vito Cerone
Dario Piga
dario.piga@imtlucca.it
Diego Regruto
2015-01-13T14:01:48Z
2015-01-13T14:01:48Z
http://eprints.imtlucca.it/id/eprint/2471
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2471
2015-01-13T14:01:48Z
Computational load reduction in bounded error identification of Hammerstein systems
In this technical note we present a procedure for the identification of Hammerstein systems from measurements affected by bounded noise. First, we show that computation of tight parameter bounds requires the solution to nonconvex optimization problems where the number of decision variables increases with the length of the experimental data sequence. Then, in order to reduce the computational burden of the identification problem, we propose a procedure to relax the formulated problem into a collection of polynomial optimization problems where the number of variables does not depend on the number of measurements. Advantages of the presented approach with respect to previously published results are discussed and highlighted by means of a simulation example.
Vito Cerone
Dario Piga
dario.piga@imtlucca.it
Diego Regruto
2015-01-13T13:57:44Z
2015-01-13T14:35:23Z
http://eprints.imtlucca.it/id/eprint/2470
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2470
2015-01-13T13:57:44Z
Bounding the parameters of block-structured nonlinear feedback systems
In this paper, a procedure for set-membership identification of block-structured nonlinear feedback systems is presented. Nonlinear block parameter bounds are first computed by exploiting steady-state measurements. Then, given the uncertain description of the nonlinear block, bounds on the unmeasurable inner signal are computed. Finally, linear block parameter bounds are evaluated on the basis of output measurements and computed inner-signal bounds. The computation of both the nonlinear block parameters and the inner-signal bounds is formulated in terms of semialgebraic optimization and solved by means of suitable convex LMI relaxation techniques. The problem of linear block parameter evaluation is formulated in terms of a bounded errors-in-variables identification problem.
Vito Cerone
Dario Piga
dario.piga@imtlucca.it
Diego Regruto
2015-01-13T13:52:43Z
2015-01-13T14:35:42Z
http://eprints.imtlucca.it/id/eprint/2469
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2469
2015-01-13T13:52:43Z
Optimization of airborne wind energy generators
This paper presents novel results related to an innovative airborne wind energy technology, named Kitenergy, for the conversion of high-altitude wind energy into electricity. The research activities carried out in the last five years, including theoretical analyses, numerical simulations, and experimental tests, indicate that Kitenergy could bring forth a revolution in wind energy generation, providing renewable energy in large quantities at a lower cost than fossil energy. This work investigates three important theoretical aspects: the evaluation of the performance achieved by the employed control law, the optimization of the generator operating cycle, and the possibility of continuously generating a constant and maximal power output. These issues are tackled through the combined use of modeling, control, and optimization methods, which prove to be key technologies for a significant breakthrough in renewable energy generation.
Lorenzo Fagiano
Mario Milanese
Dario Piga
dario.piga@imtlucca.it
2015-01-13T13:40:47Z
2015-01-13T13:40:47Z
http://eprints.imtlucca.it/id/eprint/2468
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2468
2015-01-13T13:40:47Z
Bounded error identification of Hammerstein systems through sparse polynomial optimization
In this paper we present a procedure for the evaluation of bounds on the parameters of Hammerstein systems, from output measurements affected by bounded errors. The identification problem is formulated in terms of polynomial optimization, and relaxation techniques, based on linear matrix inequalities, are proposed to evaluate parameter bounds by means of convex optimization. The structured sparsity of the formulated identification problem is exploited to reduce the computational complexity of the convex relaxed problem. Analysis of convergence properties and computational complexity is reported.
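For a Hammerstein structure with a polynomial static block and an FIR linear block, the output is linear in the products of the two blocks' parameters, which is what makes polynomial-optimization formulations natural. As a baseline only, the classical over-parameterisation trick yields a least-squares point estimate (not the guaranteed bounds the paper computes); the system below is synthetic:

```python
import numpy as np

rng = np.random.default_rng(3)
T = 200
u = rng.uniform(-1, 1, T)
f = lambda v: 2.0 * v + 0.5 * v ** 2           # static polynomial nonlinearity
g = np.array([1.0, 0.6, -0.3])                 # FIR linear block
y = np.convolve(f(u), g)[:T]                   # noise-free Hammerstein output

# over-parameterised least squares: y_t = sum_{j,k} theta_{jk} u_{t-j}^k,
# where theta_{jk} = g_j * c_k couples the two blocks' parameters
cols = []
for j in range(3):
    uj = np.concatenate([np.zeros(j), u[:T - j]])
    for k in (1, 2):
        cols.append(uj ** k)
Phi = np.column_stack(cols)
theta = np.linalg.lstsq(Phi, y, rcond=None)[0]
```

The rank-one coupling theta_{jk} = g_j * c_k is exactly the bilinear constraint that the bounded-error formulation has to handle, and that the paper's sparse polynomial optimization exploits.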
Vito Cerone
Dario Piga
dario.piga@imtlucca.it
Diego Regruto
2015-01-13T13:28:37Z
2015-01-13T13:28:37Z
http://eprints.imtlucca.it/id/eprint/2467
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2467
2015-01-13T13:28:37Z
Set-Membership Error-in-variables identification through convex relaxation techniques
In this technical note, the set-membership error-in-variables identification problem is considered, that is, the identification of linear dynamic systems when both output and input measurements are corrupted by bounded noise. A new approach for the computation of parameter uncertainty intervals is presented. First, the identification problem is formulated in terms of nonconvex optimization. Then, relaxation techniques based on linear matrix inequalities are employed to evaluate parameter bounds by means of convex optimization. The inherent structured sparsity of the original identification problems is exploited to reduce the computational complexity of the relaxed problems. Finally, convergence properties and complexity of the proposed procedure are discussed. Advantages of the presented technique with respect to previously published results are discussed and shown by means of two simulated examples.
Vito Cerone
Dario Piga
dario.piga@imtlucca.it
Diego Regruto
2015-01-12T14:46:07Z
2015-01-12T14:46:07Z
http://eprints.imtlucca.it/id/eprint/2466
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2466
2015-01-12T14:46:07Z
Enforcing stability constraints in set-membership identification of linear dynamic systems
In this paper, we consider the identification of linear systems, a priori known to be stable, from input–output data corrupted by bounded noise. By taking explicitly into account a priori information on system stability, a formal definition of the feasible parameter set for a stable linear system is provided. On the basis of a detailed analysis of the geometrical structure of the feasible set, convex relaxation techniques are presented to solve nonconvex optimization problems arising in the computation of parameter uncertainty intervals. Properties of the computed relaxed bounds are discussed. A simulated example is presented to show the effectiveness of the proposed technique.
Vito Cerone
Dario Piga
dario.piga@imtlucca.it
Diego Regruto
2015-01-12T14:39:42Z
2015-01-13T14:49:53Z
http://eprints.imtlucca.it/id/eprint/2465
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2465
2015-01-12T14:39:42Z
Set-membership LPV model identification of vehicle lateral dynamics
Set-membership identification of a Linear Parameter Varying (LPV) model describing the vehicle lateral dynamics is addressed in the paper. The model structure, chosen as far as possible on the basis of physical insight into the vehicle lateral behavior, consists of two single-input single-output LPV models relating the steering angle to the yaw rate and to the sideslip angle. A set of experimental data obtained by performing a large number of maneuvers is used to identify the vehicle lateral dynamics model. Prior information on the error bounds on the output and the time-varying parameter measurements is taken into account. Comparison with other vehicle lateral dynamics models is discussed.
Vito Cerone
Dario Piga
dario.piga@imtlucca.it
Diego Regruto
2015-01-12T14:32:40Z
2015-01-12T14:32:40Z
http://eprints.imtlucca.it/id/eprint/2464
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2464
2015-01-12T14:32:40Z
Improved parameter bounds for set-membership EIV problems
In this paper, we consider the set-membership error-in-variables identification problem, that is, the identification of linear dynamic systems when output and input measurements are corrupted by bounded noise. A new approach for the computation of parameter uncertainty intervals is presented. First, the problem is formulated in terms of nonconvex optimization. Then, a relaxation procedure is proposed to compute parameter bounds by means of semidefinite programming techniques. Finally, accuracy of the estimate and computational complexity of the proposed algorithm are discussed. Advantages of the proposed technique with respect to previously published ones are discussed both theoretically and by means of a simulated example.
Vito Cerone
Dario Piga
dario.piga@imtlucca.it
Diego Regruto
2015-01-12T14:29:10Z
2015-01-12T14:29:10Z
http://eprints.imtlucca.it/id/eprint/2463
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2463
2015-01-12T14:29:10Z
High-Altitude wind power generation
The paper presents the innovative technology of high-altitude wind power generation, indicated as Kitenergy, which exploits the automatic flight of tethered airfoils (e.g., power kites) to extract energy from wind blowing between 200 and 800 m above the ground. The key points of this technology are described and the design of large scale plants is investigated, in order to show that it has the potential to overcome the limits of current wind turbines and to provide large quantities of renewable energy, at a cost competitive with fossil sources. Such claims are supported by the results obtained so far in the Kitenergy project, under way at Politecnico di Torino, Italy, including numerical simulations, prototype experiments, and wind data analyses.
Lorenzo Fagiano
Mario Milanese
Dario Piga
dario.piga@imtlucca.it
2015-01-12T13:20:30Z
2015-01-12T13:20:30Z
http://eprints.imtlucca.it/id/eprint/2461
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2461
2015-01-12T13:20:30Z
Shrinking complexity of scheduling dependencies in LS-SVM based LPV system identification(I)
In the past years, Linear Parameter-Varying (LPV) identification has rapidly evolved from parametric identification methods to nonparametric methods allowing the relaxation of restrictive assumptions. For example, Least-Squares Support Vector Machines (LS-SVMs) offer an attractive way of estimating LPV models directly from data, without requiring the user to specify the functional dependencies of the model coefficients on the scheduling variable. These methods have also recently been extended to automatically determine the model order directly from data with the help of regularization. Nonetheless, despite all these recent improvements, LPV identification methods still require some strong a priori assumptions, such as i) whether the dependencies are static or dynamic, ii) which variables are to be considered as scheduling variables, or iii) that all coefficient functions of the underlying system depend on all scheduling variables. This prevents the complexity of the scheduling dependency of the model from being shrunk gradually and independently until an optimal bias-variance trade-off is found. In this paper, a novel reformulation of the LPV LS-SVM approach is proposed which, besides the nonparametric estimation of the coefficient functions, achieves data-driven coefficient complexity selection via convex optimization. The properties of the introduced approach are illustrated by a simulation study.
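The LS-SVM starting point can be sketched for a single coefficient function: with y_t = a(p_t) u_t + noise, a kernel on the scheduling variable multiplied by u_i u_j gives a ridge-regression estimate of a(·) without choosing basis functions. This is a generic sketch on synthetic signals (bias term and the paper's complexity-selection mechanism are omitted; kernel width and regularization are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(4)
N, sig, ridge = 200, 0.1, 1e-4                 # samples, RBF width, regularization
p = rng.uniform(0, 1, N)                       # scheduling variable
u = rng.standard_normal(N)                     # input signal
a_true = lambda s: np.sin(2 * np.pi * s)       # unknown coefficient function
y = a_true(p) * u + 0.01 * rng.standard_normal(N)

# RBF kernel on the scheduling variable, multiplied by u_i * u_j as in the
# LPV LS-SVM formulation; dual coefficients from a single linear solve
rbf = lambda s, t: np.exp(-(s[:, None] - t[None, :]) ** 2 / (2 * sig ** 2))
K = (u[:, None] * u[None, :]) * rbf(p, p)
alpha = np.linalg.solve(K + ridge * np.eye(N), y)

def a_hat(s):
    # nonparametric estimate of the coefficient function at scheduling point s
    return (rbf(np.atleast_1d(np.asarray(s, float)), p) * u[None, :]) @ alpha
```

With several scheduling variables, one such kernel term per dependency appears in the sum, and the paper's contribution is to shrink those terms individually via convex optimization instead of fixing them a priori.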
René Duijkers
Roland Tóth
Dario Piga
dario.piga@imtlucca.it
Vincent Laurain
2015-01-12T12:49:00Z
2015-01-12T12:49:00Z
http://eprints.imtlucca.it/id/eprint/2460
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2460
2015-01-12T12:49:00Z
LPV model order selection in an LS-SVM setting
In parametric identification of Linear Parameter-Varying (LPV) systems, the scheduling dependencies of the model coefficients are commonly parameterized in terms of linear combinations of a-priori selected basis functions. Such functions need to be adequately chosen, e.g., on the basis of first principles or expert knowledge of the system, in order to capture the unknown dependencies of the model coefficient functions on the scheduling variable and, at the same time, to achieve a low variance of the model estimate by limiting the number of parameters to be identified. This problem, together with the well-known model order selection problem (in terms of number of input lags, output lags and input delay of the model structure) in system identification, can be interpreted as a trade-off between bias and variance of the resulting model estimate. The problem of basis function selection can be avoided by using a nonparametric estimator of the coefficient functions in terms of a recently proposed Least-Squares Support Vector Machine (LS-SVM) approach. However, the selection of the model order still appears to be an open problem in the identification of LPV systems via the LS-SVM method. In this paper, we propose a novel reformulation of the LPV LS-SVM approach which, besides the nonparametric estimation of the coefficient functions, achieves data-driven model order selection via convex optimization. The properties of the introduced approach are illustrated via a simulation example.
Dario Piga
dario.piga@imtlucca.it
Roland Tóth
2015-01-12T12:41:33Z
2015-01-12T12:42:48Z
http://eprints.imtlucca.it/id/eprint/2459
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2459
2015-01-12T12:41:33Z
Direct data-driven control of linear parameter-varying systems
In many control applications, nonlinear plants can be modeled as linear parameter-varying (LPV) systems, in which the dynamic behavior is assumed to be linear but also dependent on some measurable signals, e.g., operating conditions. When a measured data set is available, LPV model identification can provide low complexity linear models that can embed the underlying nonlinear dynamic behavior of the plant. For such models, powerful control synthesis tools are available, but the way the modeling error and the conservativeness of the embedding affect the control performance is still largely unknown. Therefore, it appears to be attractive to directly synthesize the controller from data without modeling the plant. In this paper, a novel data-driven synthesis scheme is proposed to lay the basic foundations of future research on this challenging problem. The effectiveness of the proposed approach is illustrated by a numerical example.
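The virtual-reference idea underlying direct data-driven design can be sketched in the LTI special case: from open-loop data, build the reference the closed loop should have tracked, then fit the controller by least squares, with no plant model in between. All numbers below (plant, reference model, controller class) are illustrative assumptions, not the paper's LPV scheme:

```python
import numpy as np

rng = np.random.default_rng(7)
T = 300
u = rng.standard_normal(T)                     # open-loop experiment input
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.9 * y[t - 1] + 0.5 * u[t - 1]     # "unknown" plant: only data are used

# desired closed loop M: y_t = 0.6 y_{t-1} + 0.4 r_{t-1}  ->  virtual reference r
r = np.zeros(T)
r[:-1] = (y[1:] - 0.6 * y[:-1]) / 0.4
e = r - y                                      # virtual tracking error

# controller class u_t = u_{t-1} + th1*e_t + th2*e_{t-1}; fit th by least squares
# (last sample trimmed: r[T-1] is undefined by the shift above)
du = u[1:T - 1] - u[:T - 2]
Phi = np.column_stack([e[1:T - 1], e[:T - 2]])
th = np.linalg.lstsq(Phi, du, rcond=None)[0]
```

Here the ideal matching controller happens to lie in the chosen class, so the fit is exact; the open questions the abstract raises concern precisely what happens when it does not, and when the plant is LPV rather than LTI.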
Simone Formentin
Dario Piga
dario.piga@imtlucca.it
Roland Tóth
Sergio M. Savaresi
2015-01-12T12:06:11Z
2015-01-12T12:06:11Z
http://eprints.imtlucca.it/id/eprint/2458
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2458
2015-01-12T12:06:11Z
SM identification of input-output LPV models with uncertain time-varying parameters
In this chapter, we consider the identification of single-input single-output linear-parameter-varying models when both the output and the time-varying parameter measurements are affected by bounded noise. First, the problem of computing exact parameter uncertainty intervals is formulated in terms of semialgebraic optimization. Then, a suitable relaxation technique is presented to compute parameter bounds by means of convex optimization. Advantages of the presented approach with respect to previously published results are discussed.
Vito Cerone
Dario Piga
dario.piga@imtlucca.it
Diego Regruto
2015-01-12T11:47:05Z
2015-01-12T11:47:05Z
http://eprints.imtlucca.it/id/eprint/2457
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2457
2015-01-12T11:47:05Z
Bounded error identification of Hammerstein Systems with backlash
Actuators and sensors commonly used in control systems may exhibit a variety of nonlinear behaviours that may be responsible for undesirable phenomena such as delays and oscillations, which may severely limit both the static and the dynamic performance of the system under control (see, e.g., [22]). In particular, one of the most relevant nonlinearities affecting the performance of industrial machines is the backlash (see Figure 22.1), which commonly occurs in mechanical, hydraulic and magnetic components like bearings, gears and impact dampers (see, e.g., [17]). This nonlinearity, which can be classified as dynamic (i.e., with memory) and hard (i.e. non-differentiable), may arise from unavoidable manufacturing tolerances or sometimes may be deliberately incorporated into the system in order to describe lubrication and thermal expansion effects [3]. The interested reader is referred to [22] for real-life examples of systems with either input or output backlash nonlinearities.
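The backlash (play) operator described above can be simulated directly, which is useful for generating the input-output records that such identification methods operate on. A minimal sketch of the classical play operator; the slope `m` and half-width `ell` are illustrative values:

```python
import numpy as np

def backlash(v, m=1.0, ell=0.2, w0=0.0):
    """Classical backlash (play) operator: the output tracks the input along
    the lines m*(v - ell) and m*(v + ell), and holds its value inside the band
    (the memory that makes the nonlinearity dynamic and non-differentiable)."""
    w = np.empty_like(v)
    prev = w0
    for t, vt in enumerate(v):
        up, dn = m * (vt - ell), m * (vt + ell)
        prev = min(max(prev, up), dn)          # clamp the held value into [up, dn]
        w[t] = prev
    return w

v = np.sin(np.linspace(0, 4 * np.pi, 400))     # slowly varying test input
w = backlash(v)                                # hysteresis loop in the (v, w) plane
```

Plotting w against v traces the familiar parallelogram-shaped hysteresis loop; on a rising input the output sits on the lower line m*(v - ell), so the peaks of w are clipped by the band half-width.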
Vito Cerone
Dario Piga
dario.piga@imtlucca.it
Diego Regruto
2015-01-12T11:36:39Z
2015-01-12T11:36:39Z
http://eprints.imtlucca.it/id/eprint/2456
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2456
2015-01-12T11:36:39Z
Frequency-Domain Least-Squares Support Vector Machines to deal with correlated errors when identifying linear time-varying systems
A Least-Squares Support Vector Machine (LS-SVM) estimator, formulated in the frequency domain, is proposed to identify linear time-varying dynamic systems. The LS-SVM aims at learning the structure of the time variation in a data-driven way. The frequency domain is chosen for its superior robustness with respect to correlated errors in the calibration of the hyperparameters of the model. The time-domain and the frequency-domain implementations are compared on a simulation example to show the effectiveness of the proposed approach. It is demonstrated that the time-domain formulation is misled during calibration, because the noise on the estimation and calibration data sets is correlated. This is not the case for the frequency-domain implementation.
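The decorrelation property the abstract exploits is generic and easy to check numerically: noise that is strongly correlated between adjacent time samples becomes asymptotically uncorrelated between distinct DFT bins. A synthetic illustration (not the paper's estimator), using MA(1) coloured noise:

```python
import numpy as np

rng = np.random.default_rng(6)
R, N = 400, 512                                  # realisations, record length
e = rng.standard_normal((R, N))
v = e.copy()
v[:, 1:] += 0.9 * e[:, :-1]                      # MA(1) noise: correlated in time

# lag-1 correlation of the time-domain samples (clearly non-zero, approx. 0.5)
r_time = np.mean(v[:, :-1] * v[:, 1:]) / np.mean(v ** 2)

# correlation between two distinct DFT bins (vanishes as N grows)
V = np.fft.rfft(v, axis=1)
k, l = 40, 41
r_freq = abs(np.mean(V[:, k] * np.conj(V[:, l]))) / np.sqrt(
    np.mean(np.abs(V[:, k]) ** 2) * np.mean(np.abs(V[:, l]) ** 2))
```

A hyperparameter-calibration criterion evaluated on the DFT coefficients therefore sees (nearly) uncorrelated residuals even when the time-domain noise is coloured, which is the robustness argument made in the abstract.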
John Lataire
Dario Piga
dario.piga@imtlucca.it
Roland Tóth
2015-01-09T13:37:33Z
2015-01-09T13:37:33Z
http://eprints.imtlucca.it/id/eprint/2453
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2453
2015-01-09T13:37:33Z
Polytopic outer approximations of semialgebraic sets
This paper deals with the problem of finding a polytopic outer approximation P* of a compact semialgebraic set S ⊆ R^n. The computed polytope turns out to be an approximation of the linear hull of the set S. The evaluation of P* is reduced to the solution of a sequence of robust optimization problems with nonconvex functionals, which are efficiently solved by means of convex relaxation techniques. Properties of the presented algorithm and its possible applications in the analysis, identification and control of uncertain systems are discussed.
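The construction can be viewed through support functions: for directions c_1, ..., c_m, the polytope P* = {x : c_i'x <= h_S(c_i)} with h_S(c) = max over S of c'x contains S. Below, h_S is only estimated by dense sampling, which gives lower bounds on the support values; the paper's convex relaxations instead return guaranteed upper bounds, which is what makes the resulting polytope provably outer. The set S is an illustrative example:

```python
import numpy as np

rng = np.random.default_rng(5)
g = lambda x: x[:, 0] ** 4 + x[:, 1] ** 2 - 1.0   # S = {x : g(x) <= 0}
X = rng.uniform(-1.5, 1.5, (200000, 2))           # dense sampling of a bounding box
S = X[g(X) <= 0]                                  # samples that fall inside S

m = 8
angles = 2 * np.pi * np.arange(m) / m
C = np.column_stack([np.cos(angles), np.sin(angles)])  # directions c_i
h = (S @ C.T).max(axis=0)                              # sampled support values h_S(c_i)

# the polytope {x : C x <= h} contains every sample of S by construction
inside = np.all(S @ C.T <= h + 1e-12, axis=1)
```

Increasing the number of directions m tightens the polytope toward the linear hull of S, at the cost of one support-value computation (one robust optimization problem, in the paper's setting) per direction.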
Vito Cerone
Dario Piga
dario.piga@imtlucca.it
Diego Regruto
2015-01-09T13:32:04Z
2015-01-09T13:32:04Z
http://eprints.imtlucca.it/id/eprint/2452
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2452
2015-01-09T13:32:04Z
Fixed order LPV controller design for LPV models in input-output form
In this work, a new synthesis approach is proposed to design fixed-order H∞ controllers for linear parameter-varying (LPV) systems described by input-output (I/O) models with polynomial dependence on the scheduling variables. First, by exploiting a suitable technique for polytopic outer approximation of semi-algebraic sets, the closed loop system is equivalently rewritten as an LPV I/O model depending affinely on an augmented scheduling parameter vector constrained inside a polytope. Then, the problem is reformulated in terms of bilinear matrix inequalities (BMI) and solved by means of a suitable semidefinite relaxation technique.
Vito Cerone
Dario Piga
dario.piga@imtlucca.it
Diego Regruto
Roland Tóth
2015-01-09T12:49:50Z
2015-01-09T12:49:50Z
http://eprints.imtlucca.it/id/eprint/2451
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2451
2015-01-09T12:49:50Z
Bounded-error identification of linear systems with input and output backlash
In this paper we present a single-stage procedure for computing bounds on the parameters of linear systems with input and output backlash from output data corrupted by bounded measurement noise. By properly selecting a sequence of input/output measurements, the problem of evaluating parameter bounds is formulated as a collection of sparse nonconvex optimization problems. Convex-relaxation techniques are exploited to efficiently compute guaranteed bounds on system parameters by means of semidefinite programming.
Vito Cerone
Dario Piga
dario.piga@imtlucca.it
Diego Regruto
2015-01-09T12:25:17Z
2015-01-09T12:25:17Z
http://eprints.imtlucca.it/id/eprint/2450
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2450
2015-01-09T12:25:17Z
FIR approximation of linear systems from quantized records
In this paper we consider the problem of identifying a fixed-order FIR approximation of linear systems with unknown structure, assuming that both input and output measurements are subject to quantization. In particular, a FIR model of given order which provides the best approximation of the input-output relationship is sought by minimizing the worst-case distance between the output of the true system and the modeled output, for all possible values of the input and output data consistent with their quantized measurements. First, we show that the considered problem can be formulated in terms of robust optimization. Then, we present two different algorithms to compute the optimum of the formulated problem by means of linear programming techniques. The effectiveness of the proposed approach is illustrated by means of a simulation example.
Vito Cerone
Dario Piga
dario.piga@imtlucca.it
Diego Regruto
2015-01-09T12:12:01Z
2015-01-09T12:12:01Z
http://eprints.imtlucca.it/id/eprint/2449
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2449
2015-01-09T12:12:01Z
LPV identification of the glucose-insulin dynamics in Type I Diabetes
In this paper we address the problem of identifying a linear parameter varying (LPV) model of the glucose-insulin dynamics in Type I diabetic patients. First, the identification problem is formulated in the framework of bounded-error identification, then an algorithm for parameter bounds computation, based on semidefinite programming, is presented. The effectiveness of the proposed approach is tested in simulation by means of the widely adopted nonlinear Sorensen patient model.
Vito Cerone
Dario Piga
dario.piga@imtlucca.it
Diego Regruto
Sintayehu Berehanu
2015-01-09T11:59:20Z
2015-01-09T11:59:20Z
http://eprints.imtlucca.it/id/eprint/2448
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2448
2015-01-09T11:59:20Z
Input-Output LPV Model identification with guaranteed quadratic stability
The problem of identifying linear parameter-varying (LPV) systems, a priori known to be quadratically stable, is considered in the paper using an input-output model structure. To solve this problem, a novel constrained optimization-based algorithm is proposed which guarantees quadratic stability of the identified model. It is shown that this estimation objective corresponds to a nonconvex optimization problem, defined by a set of polynomial matrix inequalities (PMI), whose optimal solution can be approximated by means of suitable convex semidefinite relaxations. Applicability of such a relaxation-based estimation approach in the presence of either stochastic or deterministic bounded noise is discussed. A simulation example is also given to demonstrate the effectiveness of the resulting identification method.
Vito Cerone
Dario Piga
dario.piga@imtlucca.it
Diego Regruto
Roland Tóth
2015-01-09T11:36:20Z
2015-01-09T11:52:42Z
http://eprints.imtlucca.it/id/eprint/2446
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2446
2015-01-09T11:36:20Z
Minimal LPV state-space realization driven set-membership identification
Set-membership identification algorithms have been recently proposed to derive linear parameter-varying (LPV) models in input-output form, under the assumption that both measurements of the output and the scheduling signals are affected by bounded noise. In order to use the identified models for controller synthesis, linear time-invariant (LTI) realization theory is usually applied to derive a state-space model whose matrices depend statically on the scheduling signals, as required by most of the LPV control synthesis techniques. Unfortunately, application of the LTI realization theory leads to an approximate state-space description of the original LPV input-output model. In order to limit the effect of the realization error, a new set-membership algorithm for identification of input/output LPV models is proposed in the paper. A suitable nonconvex optimization problem is formulated to select the model in the feasible set which minimizes a suitable measure of the state-space realization error. The solution of the identification problem is then derived by means of convex relaxation techniques.
Vito Cerone
Dario Piga
dario.piga@imtlucca.it
Diego Regruto
Roland Tóth
2015-01-09T11:31:37Z
2015-01-09T11:31:37Z
http://eprints.imtlucca.it/id/eprint/2445
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2445
2015-01-09T11:31:37Z
Set-membership identification of Hammerstein-Wiener systems
Set-membership identification of Hammerstein-Wiener models is addressed in the paper. First, it is shown that computation of tight parameter bounds requires the solutions to a number of nonconvex constrained polynomial optimization problems where the number of decision variables increases with the length of the experimental data sequence. Then, a suitable convex relaxation procedure is presented to significantly reduce the computational burden of the identification problem. A detailed discussion of the identification algorithm properties is reported. Finally, a simulated example is used to show the effectiveness and the computational tractability of the proposed approach.
Vito Cerone
Dario Piga
dario.piga@imtlucca.it
Diego Regruto
2015-01-09T11:26:33Z
2015-01-09T11:26:33Z
http://eprints.imtlucca.it/id/eprint/2444
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2444
2015-01-09T11:26:33Z
Fast implementation of model predictive control with guaranteed performance
A fast implementation of a given predictive controller for nonlinear systems is introduced through a piecewise constant approximate function defined over a hyper-cube partition of the system state space. Such a state partition is obtained by maximizing the hyper-cube volumes in order to guarantee, besides stability, an a priori fixed trajectory error as well as input and state constraint satisfaction. The presented approximation procedure is achieved by solving a set of nonconvex polynomial optimization problems, whose approximate solutions are computed by means of semidefinite relaxation techniques for semialgebraic problems.
Massimo Canale
Vito Cerone
Dario Piga
dario.piga@imtlucca.it
Diego Regruto
2015-01-09T11:12:20Z
2015-01-09T11:12:20Z
http://eprints.imtlucca.it/id/eprint/2443
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2443
2015-01-09T11:12:20Z
Computational burden reduction in set-membership Hammerstein system identification
Hammerstein system identification from measurements affected by bounded noise is considered in the paper. First, we show that computation of tight parameter bounds requires the solution to nonconvex optimization problems where the number of decision variables increases with the length of the experimental data sequence. Then, in order to reduce the computational burden of the identification problem, we propose a procedure to relax the previously formulated problem to a set of polynomial optimization problems where the number of variables does not depend on the size of the measurements sequence. Advantages of the presented approach with respect to previously published results are discussed.
Vito Cerone
Dario Piga
dario.piga@imtlucca.it
Diego Regruto
2015-01-09T10:28:51Z
2015-01-09T10:28:51Z
http://eprints.imtlucca.it/id/eprint/2442
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2442
2015-01-09T10:28:51Z
Convex relaxation techniques for set-membership identification of LPV systems
Set-membership identification of single-input single-output linear parameter varying models is considered in the paper under the assumption that both the output and the scheduling parameter measurements are affected by bounded noise. First, we show that the problem of computing the parameter uncertainty intervals requires the solutions to a number of nonconvex optimization problems. Then, on the basis of the analysis of the regressor structure, we present some ad hoc convex relaxation schemes to compute parameter bounds by means of semidefinite optimization. Advantages of the new techniques with respect to previously published results are discussed both theoretically and by means of simulations.
Vito Cerone
Dario Piga
dario.piga@imtlucca.it
Diego Regruto
2015-01-09T10:25:09Z
2015-01-09T10:25:09Z
http://eprints.imtlucca.it/id/eprint/2441
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2441
2015-01-09T10:25:09Z
Hammerstein systems parameters bounding through sparse polynomial optimization
A single-stage procedure for the evaluation of tight bounds on the parameters of Hammerstein systems from output measurements affected by bounded errors is presented. The identification problem is formulated in terms of polynomial optimization, and relaxation techniques based on linear matrix inequalities are proposed to evaluate parameter bounds by means of convex optimization. The structured sparsity of the identification problem is exploited to reduce the computational complexity of the convex relaxed problem. Convergence properties, complexity analysis and advantages of the proposed technique with respect to previously published ones are discussed.
Vito Cerone
Dario Piga
dario.piga@imtlucca.it
Diego Regruto
2015-01-09T10:00:06Z
2015-01-09T10:00:06Z
http://eprints.imtlucca.it/id/eprint/2440
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2440
2015-01-09T10:00:06Z
Bounding the parameters of linear systems with stability constraints
Identification of linear systems, a priori known to be stable, from input-output measurements corrupted by bounded noise is considered in the paper. A formal definition of the feasible parameter set is provided, taking explicitly into account prior information on system stability. On the basis of a detailed analysis of the geometrical structure of the feasible set, convex relaxation techniques are presented to solve the nonconvex optimization problems arising in the computation of parameter uncertainty intervals. Properties of the computed relaxed bounds are discussed. A simulated example is presented to show the effectiveness of the proposed technique.
Vito Cerone
Dario Piga
dario.piga@imtlucca.it
Diego Regruto
2015-01-08T14:09:38Z
2015-01-08T14:09:38Z
http://eprints.imtlucca.it/id/eprint/2438
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2438
2015-01-08T14:09:38Z
Control as a key technology for a radical innovation in wind energy generation
This paper is concerned with an innovative technology, denoted as Kitenergy, for the conversion of high-altitude wind energy into electricity. The research activities carried out in the last five years, including theoretical analyses, numerical simulations and experimental tests, indicate that Kitenergy could bring forth a revolution in wind energy generation, providing renewable energy in large quantities at lower cost than fossil energy. After an overview of the main features of the technology, this work investigates three important aspects: the evaluation of the performance achieved by the employed control law, the optimization of the generator operating cycle, and the possibility of continuously generating a constant and maximal power output. These issues are tackled through the combined use of advanced modeling, control and optimization methods, which turn out to be key technologies for a significant breakthrough in renewable energy generation.
Mario Milanese
Lorenzo Fagiano
Dario Piga
dario.piga@imtlucca.it
2015-01-08T13:49:05Z
2015-01-08T13:49:05Z
http://eprints.imtlucca.it/id/eprint/2437
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2437
2015-01-08T13:49:05Z
Kitenergy: a radical innovation in wind energy generation
This paper presents an innovative technology for high-altitude wind power generation, referred to as Kitenergy, which exploits the automatic flight of tethered airfoils (e.g. power kites) to extract energy from wind blowing between 200 and 800 meters above the ground. The key points of such a technology are described and the design of large-scale plants is investigated, in order to show that Kitenergy technology has the potential to provide large quantities of renewable energy at a cost competitive with fossil sources. Such claims are supported by the results obtained so far in the research activities under way at Politecnico di Torino, Italy, including numerical simulations, prototype experiments and wind data analyses.
Lorenzo Fagiano
Mario Milanese
Dario Piga
dario.piga@imtlucca.it
2015-01-08T13:24:07Z
2015-01-08T13:24:07Z
http://eprints.imtlucca.it/id/eprint/2436
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2436
2015-01-08T13:24:07Z
Set-membership identification of block-structured nonlinear feedback systems
In this paper a three-stage procedure for set-membership identification of block-structured nonlinear feedback systems is proposed. Nonlinear block parameter bounds are computed in the first stage by exploiting steady-state measurements. Then, given the uncertain description of the nonlinear block, bounds on the unmeasurable inner signal are computed in the second stage. Finally, linear block parameter bounds are computed in the third stage on the basis of output measurements and the computed inner-signal bounds. Computation of both the nonlinear block parameters and the inner-signal bounds is formulated in terms of semialgebraic optimization and solved by means of suitable convex LMI relaxation techniques. Linear block parameters are bounded by solving a number of linear programming problems.
Vito Cerone
Dario Piga
dario.piga@imtlucca.it
Diego Regruto
2015-01-08T11:51:23Z
2015-01-08T11:51:23Z
http://eprints.imtlucca.it/id/eprint/2434
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2434
2015-01-08T11:51:23Z
Parameter bounds evaluation for linear systems with output backlash
In this paper a procedure is presented for deriving parameter bounds of linear systems with output backlash when the output measurement errors are bounded. First, using steady-state input/output data, the parameters of the backlash are bounded. Then, given the estimated uncertain backlash and the output measurements collected by exciting the system with a PRBS, bounds on the unmeasurable inner signal are computed. Finally, such bounds, together with the input sequence, are used for bounding the parameters of the linear block.
Vito Cerone
Dario Piga
dario.piga@imtlucca.it
Diego Regruto
2015-01-08T11:08:23Z
2015-01-12T13:16:19Z
http://eprints.imtlucca.it/id/eprint/2433
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2433
2015-01-08T11:08:23Z
An instrumental Least Squares Support Vector Machine for system identification
Roland Tóth
Vincent Laurain
Dario Piga
dario.piga@imtlucca.it
2015-01-08T11:00:48Z
2015-01-08T11:00:48Z
http://eprints.imtlucca.it/id/eprint/2432
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2432
2015-01-08T11:00:48Z
Segmentation of ARX systems through SDP-relaxation techniques
Segmentation of ARX models can be formulated as a combinatorial minimization problem in terms of the ℓ0-norm of the parameter variations and the ℓ2-loss of the prediction error. A typical approach to compute an approximate solution to such a problem is based on ℓ1-relaxation. Unfortunately, evaluating the level of accuracy of the ℓ1-relaxation in approximating the optimal solution of the original combinatorial problem is not easy to accomplish. In this poster, an alternative approach is proposed which provides an attractive solution for the ℓ0-norm minimization problem associated with segmentation of ARX models.
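For short data records, the combinatorial ℓ0 problem mentioned above can be minimized exactly by dynamic programming over segment boundaries. The following is a minimal illustrative sketch for a generic linear-regression model (hypothetical data and penalty; not the poster's implementation):

```python
import numpy as np

def segment_arx(phi, y, lam):
    """Exact l0-segmentation by dynamic programming: minimize the sum of
    per-segment least-squares costs plus lam per additional segment.
    phi: (T, d) regressor matrix, y: (T,) output. Returns segment starts."""
    T = len(y)
    # seg_cost[i, j]: least-squares cost of one constant parameter on t = i..j
    seg_cost = np.full((T, T), np.inf)
    for i in range(T):
        for j in range(i, T):
            P, Y = phi[i:j + 1], y[i:j + 1]
            theta, *_ = np.linalg.lstsq(P, Y, rcond=None)
            r = Y - P @ theta
            seg_cost[i, j] = r @ r
    best = np.full(T + 1, np.inf)   # best[t]: optimal cost of y[:t]
    best[0] = -lam                  # the first segment incurs no jump penalty
    prev = np.zeros(T + 1, dtype=int)
    for t in range(1, T + 1):
        for s in range(t):
            c = best[s] + lam + seg_cost[s, t - 1]
            if c < best[t]:
                best[t], prev[t] = c, s
    bounds, t = [], T               # backtrack the segment boundaries
    while t > 0:
        bounds.append(int(prev[t]))
        t = prev[t]
    return sorted(bounds)
```

On noise-free data with a single parameter jump, the recovered segment starts coincide with the true change point; the O(T²) table is what makes this exact approach impractical for long records, motivating the relaxations discussed in the poster.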
Dario Piga
dario.piga@imtlucca.it
Roland Tóth
2015-01-08T10:57:52Z
2015-01-08T11:01:10Z
http://eprints.imtlucca.it/id/eprint/2431
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2431
2015-01-08T10:57:52Z
Dealing with correlated errors in Least-Squares Support Vector Machine Estimators
John Lataire
Dario Piga
dario.piga@imtlucca.it
Roland Tóth
2015-01-08T10:35:27Z
2015-01-08T10:55:43Z
http://eprints.imtlucca.it/id/eprint/2430
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2430
2015-01-08T10:35:27Z
Data-driven LPV modeling of continuous pulp digesters
In this technical report, the LPV-IO identification techniques described in Kauven et al. [2013] (Chapter 5) are applied in order to estimate an LPV model of a continuous pulp digester. The pulp digester simulator (described in Modén [2011]) has been provided by ABB for benchmark studies as part of its participation in the EU project Autoprofit.
Dario Piga
dario.piga@imtlucca.it
Roland Tóth
2015-01-08T10:31:58Z
2015-01-12T13:16:01Z
http://eprints.imtlucca.it/id/eprint/2429
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2429
2015-01-08T10:31:58Z
An instrumental Least Squares Support Vector Machine for nonlinear system identification: enforcing zero-centering constraints
Least-Squares Support Vector Machines (LS-SVMs), originating from stochastic learning theory, represent a promising approach to identify nonlinear systems via nonparametric estimation of nonlinearities in a computationally and stochastically attractive way. However, application of LS-SVMs in the identification context is formulated as a linear regression aiming at the minimization of the ℓ2 loss in terms of the prediction error. This formulation corresponds to a prejudice of an auto-regressive noise structure, which, especially in the nonlinear context, is often found to be too restrictive in practical applications. In [1], a novel Instrumental Variable (IV) based estimation is integrated into the LS-SVM approach, providing, under minor conditions, consistent identification of nonlinear systems in case of a noise modeling error. It is shown how the cost function of the LS-SVM is modified to achieve an IV-based solution.
In this technical report, a detailed derivation of the results presented in Section 5.2 of [1] is given as supplementary material for interested readers.
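For context, the baseline LS-SVM regression that the IV modification builds on reduces to solving a single linear (KKT) system in the dual variables. A minimal sketch with an RBF kernel follows (standard LS-SVM only, not the IV-based estimator of [1]; hyperparameter values are illustrative):

```python
import numpy as np

def rbf_kernel(A, B, sigma):
    """Gaussian RBF kernel matrix between the rows of A and B."""
    d2 = ((A[:, None] - B[None]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def lssvm_fit(X, y, gamma=1e4, sigma=0.5):
    """Standard LS-SVM regression: solve the dual KKT system
    [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]."""
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = rbf_kernel(X, X, sigma) + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
    return sol[0], sol[1:]          # bias b, dual weights alpha

def lssvm_predict(Xtr, b, alpha, Xte, sigma=0.5):
    """Evaluate the LS-SVM model at new points Xte."""
    return rbf_kernel(Xte, Xtr, sigma) @ alpha + b
```

The entire fit is one dense linear solve, which is what makes the LS-SVM formulation computationally attractive compared to quadratic-programming SVMs.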
Vincent Laurain
Roland Tóth
Dario Piga
dario.piga@imtlucca.it
2015-01-08T10:09:00Z
2015-01-08T13:05:30Z
http://eprints.imtlucca.it/id/eprint/2428
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2428
2015-01-08T10:09:00Z
A convex relaxation approach to set-membership identification
Set-membership identification of dynamical systems is dealt with in this thesis. Differently from the stochastic framework, in the set-membership context the statistical description of the measurement noise is not available and the only information on such an error is that its amplitude or energy is bounded. In the framework of set-membership identification, the result of the estimation process is the set of all system parameter values consistent with the measured data, the assumed model structure and the a priori assumptions on the measurement error. The problem of evaluating bounds on system parameters belonging to the feasible parameter set can be formulated in terms of polynomial optimization problems, where the number of decision variables increases with the length of the experimental data sequence. Such problems are generally nonconvex and NP-hard. Therefore, standard nonlinear optimization tools cannot be used to compute parameter bounds, since they can get trapped in local minima and, as a consequence, the computed bounds are not guaranteed to contain the true values of the parameters, which is a key requirement in set-membership identification. In order to overcome such a problem, convex relaxation procedures based on the theory of moments are proposed to efficiently compute relaxed bounds which are guaranteed to contain the true values of the system parameters. Unfortunately, a direct application of the theory of moments in relaxing set-membership identification problems leads to semidefinite programming problems with a high computational burden, thus limiting, in practice, the use of such relaxation procedures to identification problems with a small number of measurements. The aim of the thesis is to derive a number of convex-relaxation-based algorithms that, exploiting the peculiar properties of the considered identification problems, make it possible to perform bound computation also when the number of measurements is large.
In particular, errors-in-variables (EIV) identification of linear models, concerning identification of linear-time-invariant (LTI) systems based on noise-corrupted measurements of both input and output signals, is tackled through two different relaxation approaches. The first method, which is referred to as the dynamic-EIV approach, exploits the sparse structure of EIV problems in order to reduce the computational complexity of the semidefinite programming problems arising from theory-of-moments relaxations. The second technique, referred to as the semi-static-EIV approach, is based on a suitable handling of the constraints defining the feasible parameter set, and leads to polynomial optimization problems where the number of decision variables does not depend on the size of the measurement sequence. Thanks to this problem reformulation, theory-of-moments relaxations can be efficiently applied to compute bounds on system parameters also from large data sets. Identification of block-oriented nonlinear systems is also addressed. The considered model structures are: Hammerstein-Wiener systems; Hammerstein-like and Wiener-like structures with backlash nonlinearity; and block-structured nonlinear feedback systems. The semi-static-EIV approach is extended, with suitable modifications, to estimate the parameters of Hammerstein-Wiener models with static blocks described by polynomial functions. Then, a unified approach for set-membership identification of Hammerstein and Wiener models with backlash is discussed. By properly selecting a sequence of input/output measurements, the evaluation of parameter bounds is formulated in terms of polynomial optimization problems, and the structured sparsity of the formulated problems is exploited to reduce the computational complexity of theory-of-moments-based relaxations. Finally, a two-stage method for identification of block-structured nonlinear feedback systems is presented.
Nonlinear block parameter bounds are first computed by using input/output data collected from the response of the system to square wave inputs. Then, by stimulating the system with a persistently exciting input signal, bounds on the unmeasurable inner signal are evaluated, which are used, together with noise-corrupted measurements of the output signal, to formulate the identification of the linear block parameters in terms of EIV problems that can be solved either through the dynamic-EIV or the semi-static-EIV approach. Then, an "ad hoc" convex relaxation scheme is presented to compute guaranteed bounds on the parameters of linear-parameter-varying (LPV) models in input/output (I/O) form, under the assumption that both the output and the scheduling parameter measurements are affected by bounded noise. The developed set-membership identification algorithms are used to derive an LPV model describing vehicle lateral dynamics based on a set of experimental data, and an LPV model describing glucose-insulin dynamics for patients affected by Type I diabetes. Finally, the problem of identifying systems a priori known to be stable is discussed. In particular, suitable relaxation-based algorithms are proposed to enforce BIBO stability and quadratic stability constraints for the cases of LTI and LPV systems, respectively. Applicability of the proposed techniques both in the stochastic and in the set-membership framework is discussed.
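In the simplest setting underlying the thesis, a model that is linear in the parameters with bounded output noise has a polytopic feasible parameter set, and exact parameter uncertainty intervals follow from two linear programs per parameter; the polynomial-optimization and relaxation machinery above addresses the harder cases where this structure is lost. A minimal sketch of the linear case, with illustrative data:

```python
import numpy as np
from scipy.optimize import linprog

def parameter_uncertainty_intervals(Phi, y, eps):
    """Set-membership bounds for y = Phi @ theta + e with |e_t| <= eps.
    The feasible set D = {theta : |y - Phi @ theta| <= eps} is a polytope;
    each interval endpoint is the min/max of one coordinate over D (an LP)."""
    T, d = Phi.shape
    # stack |y - Phi theta| <= eps as A_ub @ theta <= b_ub
    A_ub = np.vstack([Phi, -Phi])
    b_ub = np.concatenate([y + eps, -(y - eps)])
    intervals = []
    for k in range(d):
        c = np.zeros(d)
        c[k] = 1.0
        lo = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * d)
        hi = linprog(-c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * d)
        intervals.append((lo.fun, -hi.fun))
    return intervals
```

When the regressors themselves are noise-corrupted (the EIV case) or the model is nonlinear in the parameters, these constraints become polynomial rather than linear, which is exactly where the theory-of-moments relaxations of the thesis come in.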
Dario Piga
dario.piga@imtlucca.it
2014-12-18T12:15:05Z
2015-11-02T11:27:17Z
http://eprints.imtlucca.it/id/eprint/2425
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2425
2014-12-18T12:15:05Z
Multicontrast MRI quantification of focal inflammation and degeneration in multiple sclerosis
Local microstructural pathology in multiple sclerosis patients might influence their clinical performance. This study applied multicontrast MRI to quantify inflammation and neurodegeneration in MS lesions. We explored the impact of MRI-based lesion pathology on cognition and disability. Methods. 36 relapsing-remitting MS subjects and 18 healthy controls underwent neurological, cognitive, and behavioural examinations and 3 T MRI including (i) fluid attenuated inversion recovery, double inversion recovery, and magnetization-prepared gradient echo for lesion count; (ii) T1, T2, and T2* relaxometry and magnetisation transfer imaging for lesion tissue characterization. Lesions were classified according to the extent of inflammation/neurodegeneration. A generalized linear model assessed the contribution of lesion groups to clinical performance. Results. Four lesion groups were identified, characterized by (1) absence of significant alterations, (2) prevalent inflammation, (3) concomitant inflammation and microdegeneration, and (4) prevalent tissue loss. Groups 1, 3, and 4 correlated with general disability, executive function, verbal memory, and attention. Conclusion. Multicontrast MRI provides a new approach to infer in vivo histopathology of plaques. Our results support evidence that neurodegeneration is the major determinant of patients' disability and cognitive dysfunction.
Guillaume Bonnier
Alexis Roche
David Romascano
Samanta Simioni
Djalel-Eddine Meskaldji
David Rotzinger
Ying-Chia Lin
yingchia.lin@imtlucca.it
Gloria Menegaz
Myriam Schluep
Renaud Du Pasquier
Tilman Johannes Sumpf
Jens Frahm
Jean-Philippe Thiran
Gunnar Krueger
Cristina Granziera
2014-12-18T11:34:55Z
2016-04-06T09:03:15Z
http://eprints.imtlucca.it/id/eprint/2424
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2424
2014-12-18T11:34:55Z
Quantitative analysis of myelin and axonal remodeling in the uninjured motor network after stroke
Objectives: Contralesional brain connectivity plasticity was previously reported after stroke. This study aims at disentangling the biological mechanisms underlying connectivity plasticity in the uninjured motor network after an ischemic lesion. In particular, we measured generalized fractional anisotropy (GFA) and magnetization transfer ratio (MTR) to assess whether post-stroke connectivity remodeling depends on axonal and/or myelin changes. Materials and Methods: Diffusion Spectrum Imaging (DSI) and Magnetization Transfer MRI at 3T were performed in 10 patients in the acute phase, and at one and six months after a stroke affecting motor cortical and/or subcortical areas. Ten age- and gender-matched healthy volunteers were scanned one month apart for longitudinal comparison. Clinical assessment was also performed in patients prior to MRI. In the contralesional hemisphere, average measures and tract-based quantitative analysis of GFA and MTR were performed to assess axonal integrity and myelination along motor connections, as well as their variations in time. Results and Conclusions: Mean and tract-based measures of MTR and GFA showed significant changes in a number of contralesional motor connections, confirming both axonal and myelin plasticity in our cohort of patients. Moreover, density-derived features (peak height, standard deviation (SD), and skewness) of GFA and MTR along the tracts correlated with clinical scores more strongly than mean values did. These findings reveal the interplay between contralateral myelin and axonal remodeling after stroke.
Ying-Chia Lin
yingchia.lin@imtlucca.it
Alessandro Daducci
Djalel-Eddine Meskaldji
Jean-Philippe Thiran
Patrik Michel
Reto A Meuli
Gunnar Krueger
Gloria Menegaz
Cristina Granziera
2014-12-18T11:19:20Z
2016-04-06T09:02:53Z
http://eprints.imtlucca.it/id/eprint/2423
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2423
2014-12-18T11:19:20Z
Advanced MRI unravels the nature of tissue alterations in early multiple sclerosis
Introduction In patients with multiple sclerosis (MS), conventional magnetic resonance imaging (MRI) provides only limited insights into the nature of brain damage, with modest clinico-radiological correlation. In this study, we applied recent advances in MRI techniques to study brain microstructural alterations in early relapsing-remitting MS (RRMS) patients with minor deficits. Further, we investigated the potential use of advanced MRI to predict functional performance in these patients. Methods Brain relaxometry (T1, T2, T2*) and magnetization transfer MRI were performed at 3T in 36 RRMS patients and 18 healthy controls (HC). Multicontrast analysis was used to assess microstructural alterations in normal-appearing (NA) tissue and lesions. A generalized linear model was computed to predict clinical performance in patients using multicontrast MRI data, conventional MRI measures, as well as demographic and behavioral data as covariates. Results Quantitative T2 and T2* relaxometry were significantly increased in the temporal normal-appearing white matter (NAWM) of patients compared to HC, indicating subtle microedema (P = 0.03 and 0.004). Furthermore, significant T1 and magnetization transfer ratio (MTR) variations in lesions (mean T1 z-score: 4.42 and mean MTR z-score: −4.09) suggested substantial tissue loss. Combinations of multicontrast and conventional MRI data significantly predicted cognitive fatigue (P = 0.01, Adj-R2 = 0.4), attention (P = 0.0005, Adj-R2 = 0.6), and disability (P = 0.03, Adj-R2 = 0.4). Conclusion Advanced MRI techniques at 3T unraveled the nature of brain tissue damage in early MS and substantially improved clinical-radiological correlations in patients with minor deficits, as compared to conventional measures of disease.
Guillaume Bonnier
Alexis Roche
David Romascano
Samanta Simioni
Djalel-Eddine Meskaldji
David Rotzinger
Ying-Chia Lin
yingchia.lin@imtlucca.it
Gloria Menegaz
Myriam Schluep
Renaud Du Pasquier
Tilman Johannes Sumpf
Jens Frahm
Jean-Philippe Thiran
Gunnar Krueger
Cristina Granziera
2014-12-18T11:09:56Z
2016-04-06T09:56:55Z
http://eprints.imtlucca.it/id/eprint/2422
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2422
2014-12-18T11:09:56Z
Multicontrast connectometry: a new tool to assess cerebellum alterations in early relapsing-remitting multiple sclerosis
Background: Cerebellar pathology occurs in late multiple sclerosis (MS) but little is known about cerebellar changes during early disease stages. In this study, we propose a new multicontrast “connectometry” approach to assess the structural and functional integrity of cerebellar networks and connectivity in early MS. Methods: We used diffusion spectrum and resting-state functional MRI (rs-fMRI) to establish the structural and functional cerebellar connectomes in 28 early relapsing-remitting MS patients and 16 healthy controls (HC). We performed multicontrast “connectometry” by quantifying multiple MRI parameters along the structural tracts (generalized fractional anisotropy-GFA, T1/T2 relaxation times and magnetization transfer ratio) and functional connectivity measures. Subsequently, we assessed multivariate differences in local connections and network properties between MS and HC subjects; finally, we correlated detected alterations with lesion load, disease duration, and clinical scores. Results: In MS patients, a subset of structural connections showed quantitative MRI changes suggesting loss of axonal microstructure and integrity (increased T1 and decreased GFA, P < 0.05). These alterations highly correlated with motor, memory and attention in patients, but were independent of cerebellar lesion load and disease duration. Neither network organization nor rs-fMRI abnormalities were observed at this early stage. Conclusion: Multicontrast cerebellar connectometry revealed subtle cerebellar alterations in MS patients, which were independent of conventional disease markers and highly correlated with patient function. Future work should assess the prognostic value of the observed damage. Hum Brain Mapp, 2014. © 2014 Wiley Periodicals, Inc.
David Romascano
Djalel-Eddine Meskaldji
Guillaume Bonnier
Samanta Simioni
David Rotzinger
Ying-Chia Lin
yingchia.lin@imtlucca.it
Gloria Menegaz
Alexis Roche
Myriam Schluep
Renaud Du Pasquier
Jonas Richiardi
Dimitri Van De Ville
Alessandro Daducci
Tilman Johannes Sumpf
Jens Fraham
Jean-Philippe Thiran
Gunnar Krueger
Cristina Granziera
2014-12-18T11:04:46Z
2014-12-18T11:04:46Z
http://eprints.imtlucca.it/id/eprint/2421
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2421
2014-12-18T11:04:46Z
Quantitative comparison of reconstruction methods for intra-voxel fiber recovery from diffusion MRI
Validation is arguably the bottleneck in the diffusion magnetic resonance imaging (MRI) community. This paper evaluates and compares 20 algorithms for recovering the local intra-voxel fiber structure from diffusion MRI data and is based on the results of the “HARDI reconstruction challenge” organized in the context of the “ISBI 2012” conference. Evaluated methods encompass a mixture of classical techniques well known in the literature, such as diffusion tensor, Q-Ball and diffusion spectrum imaging, algorithms inspired by the recent theory of compressed sensing, and also brand new approaches proposed for the first time at this contest. To quantitatively compare the methods under controlled conditions, two datasets with known ground truth were synthetically generated, and two main criteria were used to evaluate the quality of the reconstructions in every voxel: correct assessment of the number of fiber populations and angular accuracy in their orientation. This comparative study investigates the behavior of every algorithm with varying experimental conditions and highlights the strengths and weaknesses of each approach. This information can be useful not only for enhancing current algorithms and developing the next generation of reconstruction methods, but also for assisting physicians in the choice of the most adequate technique for their studies.
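The second evaluation criterion, angular accuracy, reduces in each voxel to matching estimated directions against ground-truth ones up to sign, since a fiber orientation v is indistinguishable from -v. A minimal sketch of such a metric (an assumed greedy matching for illustration; the contest's exact scoring code is not reproduced here):

```python
import numpy as np

def angular_error_deg(est, gt):
    """Mean angular error (degrees) between estimated and ground-truth
    fiber directions, greedily matched without reuse, treating v and -v
    as the same fiber (antipodal symmetry). est, gt: (n, 3) arrays."""
    est = est / np.linalg.norm(est, axis=1, keepdims=True)
    gt = gt / np.linalg.norm(gt, axis=1, keepdims=True)
    errors, used = [], set()
    for g in gt:
        cos = np.abs(est @ g)       # |dot| handles the sign ambiguity
        cos[list(used)] = -1.0      # each estimate matches at most once
        j = int(np.argmax(cos))
        used.add(j)
        errors.append(np.degrees(np.arccos(np.clip(cos[j], -1.0, 1.0))))
    return float(np.mean(errors))
```

A flipped but otherwise correct direction therefore scores zero error, which is the intended behavior for fiber orientations.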
Alessandro Daducci
Erick Jorge Canales-Rodriguez
Maxime Descoteaux
Eleftherios Garyfallidis
Yaniv Gur
Ying-Chia Lin
yingchia.lin@imtlucca.it
Merry Mani
Sylvain Merlet
Michael Paquette
Alonso Ramirez-Manzanares
Marco Reisert
Paulo Reis Rodrigues
Farshid Sepehrband
Emmanuel Caruyer
Jeiran Choupan
Rachid Deriche
Matthew Jacob
Gloria Menegaz
Vesna Prckovska
Mariano Rivera
Yves Wiaux
Jean-Philippe Thiran
2014-12-11T11:38:40Z
2014-12-11T11:38:40Z
http://eprints.imtlucca.it/id/eprint/2417
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2417
2014-12-11T11:38:40Z
Single-image super-resolution via linear mapping of interpolated self-examples
This paper presents a novel example-based single-image super-resolution procedure that upscales a given low-resolution (LR) input image to high resolution (HR) without relying on an external dictionary of image examples. The dictionary instead is built from the LR input image itself, by generating a double pyramid of recursively scaled, and subsequently interpolated, images, from which self-examples are extracted. The upscaling procedure is multipass, i.e., the output image is constructed by means of gradual increases, and consists of learning special linear mapping functions on this double pyramid, as many as the number of patches in the current image to upscale. More precisely, for each LR patch, similar self-examples are found, and, based on them, a linear function is learned that directly maps it into its HR version. Iterative back projection is also employed to ensure consistency at each pass of the procedure. Extensive experiments and comparisons with other state-of-the-art methods, based both on external and on internal dictionaries, show that our algorithm can produce visually pleasant upscalings, with sharp edges and well-reconstructed details. Moreover, when considering objective metrics, such as peak signal-to-noise ratio and structural similarity, our method turns out to give the best performance.
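The per-patch learning step described above, fitting a linear function from a patch's nearest LR self-examples to their HR counterparts, can be sketched as a ridge-regularized least-squares fit (the regularization and patch matrices here are illustrative assumptions; the pyramid construction and multipass upscaling are omitted):

```python
import numpy as np

def patch_linear_map(lr_examples, hr_examples, lr_patch, reg=1e-3):
    """Learn a ridge-regularized linear map M such that hr ≈ M @ lr from
    self-example pairs, then apply it to one LR input patch.
    lr_examples: (n, dl), hr_examples: (n, dh), lr_patch: (dl,)."""
    X, Y = lr_examples, hr_examples
    # closed-form ridge solution: M = Y^T X (X^T X + reg I)^{-1}
    A = X.T @ X + reg * np.eye(X.shape[1])
    M = Y.T @ X @ np.linalg.inv(A)
    return M @ lr_patch
```

When the LR-to-HR relation among the self-examples is truly linear, the learned map recovers it exactly up to the small ridge bias.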
Marco Bevilacqua
marco.bevilacqua@imtlucca.it
Aline Roumy
Christine Guillemot
Marie Line Alberi-Morel
2014-12-11T11:31:01Z
2014-12-11T11:31:01Z
http://eprints.imtlucca.it/id/eprint/2416
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2416
2014-12-11T11:31:01Z
Video super-resolution via sparse combinations of key-frame patches in a compression context
In this paper we present a super-resolution (SR) method for upscaling low-resolution (LR) video sequences that relies on the presence of periodic high-resolution (HR) key frames, and validate it in the context of video compression. For a given LR intermediate frame, the HR details are retrieved patch-by-patch by taking sparse linear combinations of patches found in the neighboring key frames. The performance of the video SR algorithm is assessed in a scheme where only some key frames from an original HR sequence are directly encoded; the remaining intermediate frames are down-sampled to LR and encoded as well, with a possibly different quantization parameter. SR is then finally employed to upscale these frames. For comparison, we consider the best case where the whole original HR sequence is encoded. With respect to this case, our SR-based approach is shown to bring a gain at low bit-rates (a consistent gain when all frames are encoded independently), i.e. when a poor encoding can actually benefit from the special processing of the intermediate frames, thus showing that video SR can be a useful tool in realistic scenarios.
Marco Bevilacqua
marco.bevilacqua@imtlucca.it
Aline Roumy
Christine Guillemot
Marie Line Alberi-Morel
2014-12-11T11:25:26Z
2014-12-11T11:34:03Z
http://eprints.imtlucca.it/id/eprint/2415
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2415
2014-12-11T11:25:26Z
K-WEB: Nonnegative dictionary learning for sparse image representations
This paper presents a new nonnegative dictionary learning method to decompose an input data matrix into a dictionary of nonnegative atoms and a representation matrix with a strict ℓ0-sparsity constraint. This constraint makes each input vector representable by a limited combination of atoms. The proposed method consists of two steps which are alternately iterated: a sparse coding and a dictionary update stage. As for the dictionary update, an original method is proposed, which we call K-WEB, as it involves the computation of k WEighted Barycenters. The algorithm so designed is shown to outperform other methods in the literature that address the same learning problem, in different applications, both with synthetic and with “real” data, i.e. data coming from natural images.
Marco Bevilacqua
marco.bevilacqua@imtlucca.it
Aline Roumy
Christine Guillemot
Marie Line Alberi-Morel
2014-12-11T11:14:47Z
2014-12-11T11:33:33Z
http://eprints.imtlucca.it/id/eprint/2414
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2414
2014-12-11T11:14:47Z
Super-resolution using neighbor embedding of back-projection residuals
In this paper we present a novel algorithm for neighbor embedding based super-resolution (SR), using an external dictionary. In neighbor embedding based SR, the dictionary is trained from couples of high-resolution and low-resolution (LR) training images, and consists of pairs of patches: matching patches (m-patches), which are used to match the input image patches and contain only low-frequency content, and reconstruction patches (r-patches), which are used to generate the output image patches and actually bring the high-frequency details. We propose a novel training scheme, where the m-patches are extracted from enhanced back-projected interpolations of the LR images and the r-patches are extracted from the back-projection residuals. A procedure to further optimize the dictionary is followed, and finally nonnegative neighbor embedding is considered at the SR algorithm stage. We evaluate the various elements of the algorithm individually, and show that each of them brings a gain in the final result. The complete algorithm is then compared to other state-of-the-art methods, and its competitiveness is shown.
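The back-projection step used above, both to enhance the interpolated LR images and to form the residuals, can be sketched as follows. Block-average downscaling and nearest-neighbor upscaling stand in for the paper's actual resampling operators (an assumption for illustration):

```python
import numpy as np

def downscale(img, s):
    """Block-average downscaling of a 2-D image by integer factor s."""
    h, w = img.shape
    return img.reshape(h // s, s, w // s, s).mean(axis=(1, 3))

def upscale(img, s):
    """Nearest-neighbor upscaling of a 2-D image by integer factor s."""
    return np.kron(img, np.ones((s, s)))

def back_project(hr, lr, s, iters=5):
    """Iterative back projection: push the HR estimate towards consistency
    with the observed LR image, i.e. downscale(hr) ≈ lr."""
    for _ in range(iters):
        err = lr - downscale(hr, s)   # residual in the LR domain
        hr = hr + upscale(err, s)     # project it back onto the HR grid
    return hr
```

With these particular operators the LR residual is cancelled after one iteration; with more realistic blur-and-decimate operators, several iterations are needed, which is why the procedure is iterative.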
Marco Bevilacqua
marco.bevilacqua@imtlucca.it
Aline Roumy
Christine Guillemot
Marie Line Alberi-Morel
2014-12-11T11:06:46Z
2014-12-11T11:33:08Z
http://eprints.imtlucca.it/id/eprint/2413
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2413
2014-12-11T11:06:46Z
Compact and coherent dictionary construction for example-based super-resolution
This paper presents a new method to construct a dictionary for example-based super-resolution (SR) algorithms. Example-based SR relies on a dictionary of correspondences of low-resolution (LR) and high-resolution (HR) patches. Having a fixed, prebuilt dictionary speeds up the SR process; however, in order to perform well in most cases, we need large dictionaries with a wide variety of patches. Moreover, LR and HR patches often are not coherent, i.e. local LR neighborhoods are not preserved in the HR space. Our dictionary learning method takes as input a large dictionary and gives as output a dictionary of “sustainable” size, yet with comparable or even better performance. It firstly consists of a partitioning process, done according to a joint k-means procedure, which enforces the coherence between LR and HR patches by discarding those pairs for which we do not find a common cluster. Secondly, the clustered dictionary is used to extract some salient patches that will form the output set.
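The partitioning step can be sketched by clustering the LR and HR patches separately and keeping only pairs whose (LR-cluster, HR-cluster) combination is shared with other pairs; this co-occurrence criterion is a simplified stand-in for the paper's joint k-means procedure, not its actual implementation:

```python
import numpy as np

def kmeans(X, k, iters=20):
    """Plain Lloyd's k-means with deterministic farthest-first
    initialization; returns cluster labels for the rows of X."""
    centers = [X[0]]
    for _ in range(k - 1):
        d = np.min([((X - c) ** 2).sum(1) for c in centers], axis=0)
        centers.append(X[np.argmax(d)])
    centers = np.array(centers, dtype=float)
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

def coherent_pairs(lr, hr, k):
    """Keep indices of LR/HR patch pairs whose (LR-cluster, HR-cluster)
    label combination is shared by at least one other pair: a simplified
    coherence check in the spirit of the joint k-means partitioning."""
    combo = list(zip(kmeans(lr, k).tolist(), kmeans(hr, k).tolist()))
    return [i for i, c in enumerate(combo) if combo.count(c) > 1]
```

Pairs whose HR patch lands far from where its LR neighborhood would predict end up in a rare label combination and are discarded, which is the coherence-enforcing effect the paper aims for.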
Marco Bevilacqua
marco.bevilacqua@imtlucca.it
Aline Roumy
Christine Guillemot
Marie Line Alberi-Morel
2014-12-11T11:00:13Z
2014-12-16T14:34:53Z
http://eprints.imtlucca.it/id/eprint/2412
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2412
2014-12-11T11:00:13Z
Low-complexity single-image super-resolution based on nonnegative neighbor embedding
This paper describes a single-image super-resolution (SR) algorithm based on nonnegative neighbor embedding. It belongs to the family of single-image example-based SR algorithms, since it uses a dictionary of low-resolution (LR) and high-resolution (HR) trained patch pairs to infer the unknown HR details. Each LR feature vector in the input image is expressed as the weighted combination of its K nearest neighbors in the dictionary; the corresponding HR feature vector is reconstructed under the assumption that the local LR embedding is preserved. Three key aspects are introduced in order to build a low-complexity competitive algorithm: (i) a compact but efficient representation of the patches (feature representation); (ii) an accurate estimation of the patches by their nearest neighbors (weight computation); (iii) a compact and already built (therefore external) dictionary, which allows a one-step upscaling. The neighbor embedding SR algorithm so designed is shown to give good visual results, comparable to other state-of-the-art methods, while presenting an appreciable reduction of the computational time.
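A minimal sketch of the reconstruction step, assuming a nonnegative variant of least-squares weights (clip-and-renormalize here, purely for illustration; the paper's weight computation may differ):

```python
import numpy as np

def nonneg_neighbor_embed(x_lr, m_patches, r_patches, K=3):
    """Reconstruct a high-frequency patch by nonnegative neighbor embedding.

    x_lr      : (d,) low-frequency input feature vector
    m_patches : (N, d) matching patches (low-frequency content)
    r_patches : (N, p) paired reconstruction patches (high-frequency details)
    """
    # 1. Find the K nearest m-patches to the input feature vector.
    dists = np.linalg.norm(m_patches - x_lr, axis=1)
    idx = np.argsort(dists)[:K]

    # 2. Least-squares weights approximating x_lr from its neighbors,
    #    clipped to be nonnegative and renormalized to sum to one
    #    (an illustrative nonnegativity scheme, not the paper's solver).
    w, *_ = np.linalg.lstsq(m_patches[idx].T, x_lr, rcond=None)
    w = np.clip(w, 0.0, None)
    if w.sum() > 0:
        w /= w.sum()

    # 3. Transfer the same weights to the paired r-patches: the local LR
    #    embedding is assumed preserved in the HR space.
    return w @ r_patches[idx]
```

Step 3 is the embedding assumption stated in the abstract: weights computed in the LR feature space are reused verbatim on the HR side.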
Marco Bevilacqua
marco.bevilacqua@imtlucca.it
Aline Roumy
Christine Guillemot
Marie Line Alberi-Morel
2014-12-11T10:26:16Z
2014-12-11T11:31:46Z
http://eprints.imtlucca.it/id/eprint/2411
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2411
2014-12-11T10:26:16Z
Neighbor embedding based single-image super-resolution using Semi-Nonnegative Matrix Factorization
This paper describes a novel method for single-image super-resolution (SR) based on a neighbor embedding technique which uses Semi-Nonnegative Matrix Factorization (SNMF). Each low-resolution (LR) input patch is approximated by a linear combination of nearest neighbors taken from a dictionary. This dictionary stores low-resolution and corresponding high-resolution (HR) patches taken from natural images and is thus used to infer the HR details of the super-resolved image. The entire neighbor embedding procedure is carried out in a feature space. Features which are either the gradient values of the pixels or the mean-subtracted luminance values are extracted from the LR input patches, and from the LR and HR patches stored in the dictionary. The algorithm thus searches for the K nearest neighbors of the feature vector of the LR input patch and then computes the weights for approximating the input feature vector. The use of SNMF for computing the weights of the linear approximation is shown to have a more stable behavior than the use of LLE, and to lead to significantly higher PSNR values for the super-resolved images.
Marco Bevilacqua
marco.bevilacqua@imtlucca.it
Aline Roumy
Christine Guillemot
Marie Line Alberi-Morel
2014-12-11T09:38:48Z
2014-12-16T14:35:43Z
http://eprints.imtlucca.it/id/eprint/2410
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2410
2014-12-11T09:38:48Z
Sparse reconstruction for compressed sensing using Stagewise Polytope Faces Pursuit
Compressed sensing, also known as compressive sampling, is an approach to measuring signals that have a sparse representation, which can reduce the number of measurements needed to reconstruct the signal. The signal reconstruction part requires efficient methods to perform sparse reconstruction, such as those based on linear programming. In this paper we present a method for sparse reconstruction which is an extension of our earlier polytope faces pursuit algorithm, based on the polytope geometry of the dual linear program. The new algorithm adds several basis vectors at each stage, in a similar way to the recent stagewise orthogonal matching pursuit (StOMP) algorithm. We demonstrate the application of the algorithm to some standard compressed sensing problems.
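For contrast with the polytope-based method, a StOMP-flavored stagewise pursuit can be sketched as follows. The thresholding rule and stage count are illustrative choices, not the paper's dual-LP construction.

```python
import numpy as np

def stagewise_pursuit(A, y, n_stages=5, t=0.5):
    """StOMP-style sparse reconstruction sketch: at each stage, add to the
    support all columns of A whose correlation with the current residual
    exceeds a fraction t of the largest correlation, then refit by least
    squares on the enlarged support. (The paper's algorithm instead works
    on the polytope geometry of the dual linear program.)
    """
    n = A.shape[1]
    support, r = set(), y.copy()
    x = np.zeros(n)
    for _ in range(n_stages):
        c = A.T @ r                                   # correlations with residual
        support |= {j for j in range(n) if abs(c[j]) > t * np.abs(c).max()}
        S = sorted(support)
        x_s, *_ = np.linalg.lstsq(A[:, S], y, rcond=None)
        x = np.zeros(n)
        x[S] = x_s                                    # refit on the support
        r = y - A @ x
        if np.linalg.norm(r) < 1e-10:                 # exact recovery reached
            break
    return x
```

Adding several atoms per stage is what makes the approach "stagewise": far fewer least-squares refits are needed than in one-atom-at-a-time matching pursuit.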
Mark D. Plumbley
Marco Bevilacqua
marco.bevilacqua@imtlucca.it
2014-12-10T14:41:25Z
2014-12-10T14:41:25Z
http://eprints.imtlucca.it/id/eprint/2407
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2407
2014-12-10T14:41:25Z
Schroedinger-like PageRank equation and localization in the WWW
The World Wide Web is one of the most important communication systems we use in our everyday life. Despite its central role, the growth and the development of the WWW is not controlled by any central authority. This situation has created a huge ensemble of connections whose complexity can be fruitfully described and quantified by network theory. One important application that allows one to sort out the information present in these connections is the PageRank algorithm. This quantity is usually computed iteratively, at considerable computational cost. In this paper we show that the PageRank can be expressed in terms of a wave function obeying a Schroedinger-like equation. In particular, the topological disorder given by the unbalance of outgoing and ingoing links between pages induces wave-function and potential structuring. This allows us to directly localize the pages with the largest score. Through this new representation we can now compute the PageRank without iterative techniques. For most of the cases of interest our method is faster than the original one. Our results also clarify the role of topology in the diffusion of information within complex networks. The whole approach opens the possibility of novel techniques inspired by quantum physics for the analysis of WWW properties.
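The iterative baseline the paper aims to bypass is the standard power-iteration PageRank, sketched here for reference (the damping factor and dangling-node handling are the usual conventions, not taken from the paper):

```python
import numpy as np

def pagerank(adj, d=0.85, tol=1e-12):
    """Standard iterative PageRank: repeatedly apply the Google matrix
    until the score vector converges. adj[i, j] = 1 if page i links to j.
    """
    n = adj.shape[0]
    out = adj.sum(axis=1, keepdims=True)
    out[out == 0] = n                       # avoid division by zero below
    # Row-stochastic transition matrix; dangling pages (no out-links)
    # are treated as linking uniformly to every page.
    T = np.where(adj.sum(axis=1, keepdims=True) == 0, 1.0 / n, adj / out)
    p = np.full(n, 1.0 / n)                 # uniform initial score
    while True:
        p_new = (1 - d) / n + d * (T.T @ p)
        if np.abs(p_new - p).sum() < tol:   # stop when scores stabilize
            return p_new
        p = p_new
```

Each pass costs a matrix-vector product, and many passes are needed; the paper's point is that the wave-function representation localizes the top-scoring pages without this iteration.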
Nicola Perra
Vinko Zlatic
Alessandro Chessa
alessandro.chessa@imtlucca.it
Claudio Conti
Debora Donato
Guido Caldarelli
guido.caldarelli@imtlucca.it
2014-12-02T15:15:35Z
2014-12-18T13:56:23Z
http://eprints.imtlucca.it/id/eprint/2384
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2384
2014-12-02T15:15:35Z
Mining communities in networks
Online social networks pose significant challenges to computer scientists, physicists, and sociologists alike, for their massive size, fast evolution, and uncharted potential for social computing. One particular problem that has interested us is community identification. Many algorithms based on various metrics have been proposed for communities in networks [18, 24], but few algorithms scale to very large networks. Three recent community identification algorithms, namely CNM [16], Wakita [59], and Louvain [10], stand out for their scalability to a few millions of nodes. All of them use modularity as the metric of optimization. However, all three algorithms produce inconsistent communities every time the ordering of the nodes given to the algorithms changes.
We propose two quantitative metrics to represent the level of consistency across multiple runs of an algorithm: pairwise membership probability and consistency. Based on these two metrics, we propose a solution that improves the consistency without compromising the modularity. We demonstrate that our solution to use pairwise membership probabilities as link weights generates consistent communities within six or fewer cycles for most networks. However, our iterative, pairwise membership reinforcing approach does not deliver convergence for the Flickr, Orkut, and Cyworld networks as well as it does for the rest of the networks. Our approach is empirically driven and is yet to be shown to produce consistent output analytically. We leave further investigation into the topological structure and its impact on the consistency as future work.
In order to evaluate the quality of clustering, we have looked at 3 of the 48 communities identified in the AS graph. Surprisingly, all have either hierarchical, geographical, or topological interpretations to their groupings. Our preliminary evaluation of the quality of communities is promising. We plan to conduct more thorough evaluation of the communities and study network structures and their evolutions using our approach.
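The pairwise membership probability can be sketched as the co-clustering frequency over runs; this is a hedged reading of the metric, not necessarily the paper's exact definition.

```python
from itertools import combinations

def pairwise_membership(runs):
    """For each node pair, the fraction of runs in which the two nodes
    land in the same community. `runs` is a list of label sequences, one
    per run of the community identification algorithm.
    """
    n = len(runs[0])
    prob = {}
    for i, j in combinations(range(n), 2):
        prob[(i, j)] = sum(r[i] == r[j] for r in runs) / len(runs)
    return prob
```

Pairs with probability near 0 or 1 are consistently separated or consistently grouped; probabilities near 0.5 flag the unstable assignments that reordering the nodes exposes, and it is these probabilities that the proposed solution feeds back as link weights.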
Haewoon Kwak
Yoonchan Choi
Young-Ho Eom
youngho.eom@imtlucca.it
Hawoong Jeong
Sue Moon
2014-12-02T15:12:14Z
2014-12-18T13:56:58Z
http://eprints.imtlucca.it/id/eprint/2383
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2383
2014-12-02T15:12:14Z
Comparison of online social relations in volume vs interaction
Online social networking services are among the most popular Internet services according to Alexa.com and have become a key feature in many Internet services. Users interact through various features of online social networking services: making friend relationships, sharing their photos, and writing comments. These friend relationships are expected to become a key to many other features in web services, such as recommendation engines, security measures, online search, and personalization issues. However, we have very limited knowledge on how much interaction actually takes place over friend relationships declared online. A friend relationship only marks the beginning of online interaction.
Does the interaction between users follow the declaration of friend relationship? Does a user interact evenly or lopsidedly with friends? We venture to answer these questions in this work. We construct a network from comments written in guestbooks. A node represents a user, and a directed edge a comment from one user to another. We call this network an activity network. Previous work on activity networks includes phone-call networks [34, 35] and MSN messenger networks [27]. To the best of our knowledge, this is the first attempt to compare the explicit friend relationship network and the implicit activity network.
We have analyzed structural characteristics of the activity network and compared them with the friends network. Though the activity network is weighted and directed, its structure is similar to the friend relationship network. We report that the in-degree and out-degree distributions are close to each other and the social interaction through the guestbook is highly reciprocated. When we consider only those links in the activity network that are reciprocated, the degree correlation distribution exhibits much more pronounced assortativity than the friends network and places it close to known social networks. The k-core analysis gives yet another piece of corroborating evidence that the friends network deviates from the known social network and has an unusually large number of highly connected cores.
We have delved into the weighted and directed nature of the activity network, and investigated the reciprocity, disparity, and network motifs. We also have observed that peer pressure to stay active online stops building up beyond a certain number of friends.
The activity network has shown topological characteristics similar to the friends network, but thanks to its directed and weighted nature, it has allowed us more in-depth analysis of user interaction.
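Reciprocity, one of the directed measures examined above, can be computed directly from the edge list (this is the generic definition, assumed here to match the paper's usage):

```python
def reciprocity(edges):
    """Share of directed edges whose reverse edge also exists: the measure
    used to show that guestbook interaction is highly reciprocated.
    `edges` is an iterable of (source, target) pairs.
    """
    es = set(edges)                          # deduplicate parallel edges
    return sum((v, u) in es for u, v in es) / len(es)
```

A value near 1 means comments are largely answered in kind; restricting the activity network to reciprocated links is what sharpened the assortativity result reported above.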
Hyunwoo Chun
Haewoon Kwak
Young-Ho Eom
youngho.eom@imtlucca.it
Yong-Yeol Ahn
Sue Moon
Hawoong Jeong
2014-11-10T09:17:33Z
2014-11-10T09:17:33Z
http://eprints.imtlucca.it/id/eprint/2351
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2351
2014-11-10T09:17:33Z
Credit Default Swaps networks and systemic risk
Credit Default Swaps (CDS) spreads should reflect default risk of the underlying corporate debt. Actually, it has been recognized that CDS spread time series did not anticipate but only followed the increasing risk of default before the financial crisis. In principle, the network of correlations among CDS spread time series could at least display some form of structural change to be used as an early warning of systemic risk. Here we study a set of 176 CDS time series of financial institutions from 2002 to 2011. Networks are constructed in various ways, some of which display structural change at the onset of the credit crisis of 2008, but never before. By taking these networks as a proxy of interdependencies among financial institutions, we run stress tests based on Group DebtRank. Systemic risk before 2008 increases only when incorporating a macroeconomic indicator reflecting the potential losses of financial assets associated with house prices in the US. This approach indicates a promising way to detect systemic instabilities.
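One of the simplest of the network constructions mentioned, a thresholded correlation network, can be sketched as follows (the threshold value is an arbitrary illustration, not the paper's choice):

```python
import numpy as np

def correlation_network(returns, threshold=0.6):
    """Build an undirected network from CDS-spread return series: nodes
    are institutions, and an edge joins two institutions whose return
    correlation exceeds the threshold. `returns` has one row per
    institution, one column per time step.
    """
    C = np.corrcoef(returns)                 # pairwise correlation matrix
    n = C.shape[0]
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if C[i, j] > threshold]
```

Tracking how the edge set of such a network changes through time is the kind of structural-change signal the paper probes as a candidate early warning.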
Michelangelo Puliga
michelangelo.puliga@imtlucca.it
Guido Caldarelli
guido.caldarelli@imtlucca.it
Stefano Battiston
2014-11-05T10:29:31Z
2015-04-08T10:37:32Z
http://eprints.imtlucca.it/id/eprint/2334
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2334
2014-11-05T10:29:31Z
(edited by) Proceedings 7th Interaction and Concurrency Experience, ICE 2014 (Berlin, Germany, 6th June 2014)
This volume contains the proceedings of ICE 2014, the 7th Interaction and Concurrency Experience, which was held in Berlin, Germany on the 6th of June 2014 as a satellite event of DisCoTec 2014. The ICE procedure for paper selection allows PC members to interact, anonymously, with authors. During the review phase, each submitted paper is published on a Wiki and associated with a discussion forum whose access is restricted to the authors and to all the PC members not declaring a conflict of interests. The PC members post comments and questions that the authors reply to. Each paper was reviewed by three PC members, and altogether 8 papers (including 3 short papers) were accepted for publication. We were proud to host two invited talks, by Pavol Cerny and Kim Larsen, whose abstracts are included in this volume together with the regular papers.
Ivan Lanese
Alberto Lluch-Lafuente
Ana Sokolova
Hugo Torres Vieira
hugo.torresvieira@imtlucca.it
2014-10-22T09:53:27Z
2014-10-22T10:00:58Z
http://eprints.imtlucca.it/id/eprint/2331
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2331
2014-10-22T09:53:27Z
Stabilizing linear model predictive control under inexact numerical optimization
This note describes a model predictive control (MPC) formulation for discrete-time linear systems with hard constraints on control and state variables, under the assumption that the solution of the associated quadratic program is neither optimal nor feasible with respect to the inequality constraints. This is common in embedded control applications, for which real-time constraints and limited computing resources dictate restrictions on the possible number of on-line iterations that can be performed within a sampling period. The proposed approach is rather general, in that it does not refer to a particular optimization algorithm, and is based on the definition of an alternative MPC problem that we assume can only be solved within bounded levels of suboptimality and violation of the inequality constraints. By showing that the inexact solution is a feasible suboptimal one for the original problem, asymptotic or exponential stability is guaranteed for the closed-loop system. Based on the above general results, we focus on a specific dual accelerated gradient-projection method to obtain a stabilizing MPC law that only requires a predetermined maximum number of on-line iterations.
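The fixed-iteration-budget situation can be illustrated with a primal projected-gradient sketch for a box-constrained QP. The paper actually analyzes a dual accelerated gradient-projection method with general inequality constraints; this simplification only shows the "stop after a fixed number of iterations, return a feasible but possibly suboptimal point" pattern.

```python
import numpy as np

def box_qp_pgm(H, f, lb, ub, max_iters=20):
    """Projected gradient for  min 0.5 z'Hz + f'z  s.t.  lb <= z <= ub,
    capped at a fixed iteration budget as in embedded MPC: the iterate
    returned after max_iters steps is always feasible (the projection
    enforces the bounds) but possibly suboptimal, which is exactly the
    situation the paper's stability analysis covers.
    """
    L = np.linalg.norm(H, 2)               # Lipschitz constant of the gradient
    z = np.clip(np.zeros_like(f), lb, ub)  # feasible starting point
    for _ in range(max_iters):
        z = np.clip(z - (H @ z + f) / L, lb, ub)
    return z
```

Because every iterate is projected back into the box, real-time code can safely truncate the loop when the sampling period expires.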
Matteo Rubagotti
Panagiotis Patrinos
panagiotis.patrinos@imtlucca.it
Alberto Bemporad
alberto.bemporad@imtlucca.it
2014-10-22T09:15:26Z
2014-10-22T09:15:26Z
http://eprints.imtlucca.it/id/eprint/2330
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2330
2014-10-22T09:15:26Z
Cabin heat thermal management in hybrid vehicles using model predictive control
This paper describes a Model Predictive Control (MPC) design for the thermal management of cabin heat in Hybrid Electric Vehicles (HEVs). Compared to conventional vehicles, the energy flow in recent energy-efficient vehicles is more complex, which increases the control degrees of freedom, as many components can achieve the same functionality of heating the cabin. This paper proposes an MPC strategy to distribute the workload between available components in the vehicle, while achieving multiple objectives, such as fuel efficiency and heat-power reference tracking, and enforcing various constraints. First, a simplified linear dynamical model subject to linear time-varying (LTV) constraints is identified, based on high-fidelity simulations on a full nonlinear model. Then an MPC controller is designed to achieve multiple control objectives by manipulating different inputs. Simulation results indicate that the proposed approach is suitable for such multi-objective automotive control problems.
Hasan Esen
Tsutomu Tashiro
Daniele Bernardini
daniele.bernardini@imtlucca.it
Alberto Bemporad
alberto.bemporad@imtlucca.it
2014-10-22T08:30:45Z
2014-10-22T08:30:45Z
http://eprints.imtlucca.it/id/eprint/2329
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2329
2014-10-22T08:30:45Z
MPC for power systems dispatch based on stochastic optimization
In this paper we investigate the problem of optimal real-time power dispatch of an interconnection of conventional power generation plants, renewable resources and energy storage systems. The objective of the problem is to minimize imbalance costs and maximize the profit of the company managing the system whilst satisfying user demand. The managing company is able to trade energy on an electricity market. Energy prices on the market, user demand and intermittent generation from the renewable plants are considered stochastic processes. We show that under certain assumptions, the stochastic power dispatch problem over a finite horizon can be recast, under a proper choice for the feedback policies and for the disturbance set, into a stochastic optimization formulation but with deterministic constraints. We carry out a systematic study of stochastic optimization methods to solve this problem, in particular we analyze the stochastic gradient method. We also show that this problem can be approximated by a proper deterministic optimization problem using the sample average approximation method, which can then be solved by standard means.
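The sample average approximation idea, replacing the expectation by an average over drawn scenarios and then solving the resulting deterministic problem, can be sketched on a toy one-variable dispatch. The cost model and the grid search are illustrative only, not the paper's formulation.

```python
import numpy as np

def saa_dispatch(price_samples, demand_samples, capacity):
    """Toy sample average approximation (SAA) step: the 'dispatch' is a
    single generation level g in [0, capacity] minimizing the sampled
    average cost of imbalance |demand - g| priced at the market price.
    Averaging over drawn scenarios turns the stochastic problem into a
    deterministic one, which here is solved by a crude grid search.
    """
    def avg_cost(g):
        return np.mean(price_samples * np.abs(demand_samples - g))
    grid = np.linspace(0.0, capacity, 201)   # deterministic stand-in solver
    return grid[np.argmin([avg_cost(g) for g in grid])]
```

As the number of scenarios grows, the SAA minimizer approaches the minimizer of the true expected cost, which is what justifies solving the approximated deterministic problem by standard means.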
Ion Necoara
Dragos Nicolae Clipici
Panagiotis Patrinos
panagiotis.patrinos@imtlucca.it
Alberto Bemporad
alberto.bemporad@imtlucca.it
2014-10-21T13:19:35Z
2014-10-22T08:17:50Z
http://eprints.imtlucca.it/id/eprint/2327
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2327
2014-10-21T13:19:35Z
Controlled drug administration by a fractional PID
Amiodarone is an antiarrhythmic drug that exhibits highly complex and non-exponential dynamics whose controlled administration has important implications for its clinical use especially for long-term therapies. Its pharmacokinetics has been accurately modelled using a fractional-order compartmental model. In this paper we design a fractional-order PID controller and we evaluate its dynamical characteristics in terms of the stability margins of the closed loop and the ability of the controlled system to attenuate various sources of noise and uncertainty.
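A discrete fractional-derivative building block, such as one would need to implement the fractional terms of a PI^λD^μ controller, can be sketched with the Grünwald-Letnikov weights. This discretization is an assumption for illustration; the paper does not prescribe a particular scheme.

```python
import numpy as np

def gl_fractional_diff(x, alpha, dt=1.0):
    """Grünwald-Letnikov approximation of the order-alpha fractional
    derivative of a sampled signal x. For alpha = 1 it reduces to the
    first difference; for alpha = 0 it returns the signal unchanged.
    """
    n = len(x)
    # Recursive binomial-type GL weights: w_0 = 1, w_k = w_{k-1}(1 - (alpha+1)/k)
    w = np.ones(n)
    for k in range(1, n):
        w[k] = w[k - 1] * (1.0 - (alpha + 1.0) / k)
    # y[t] = dt^{-alpha} * sum_k w_k * x[t-k]  (convolution with the weights)
    return np.array([w[:t + 1] @ x[t::-1] for t in range(n)]) / dt**alpha
```

The slowly decaying weights are the discrete signature of the memory effects that make amiodarone's non-exponential pharmacokinetics a natural fit for fractional-order modelling.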
Pantelis Sopasakis
pantelis.sopasakis@imtlucca.it
Haralambos Sarimveis
2014-10-21T13:08:59Z
2016-04-06T09:40:34Z
http://eprints.imtlucca.it/id/eprint/2326
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2326
2014-10-21T13:08:59Z
Water demand forecasting for the optimal operation of large-scale drinking water networks: the Barcelona case study
Drinking Water Networks (DWN) are large-scale multiple-input multiple-output systems with uncertain disturbances (such as the water demand from the consumers) and involve components of linear, non-linear and switching nature. Operating, safety and quality constraints make it important for the state and the input of such systems to be confined to a given domain. Moreover, DWNs' operation is driven by time-varying demands and involves a considerable consumption of electric energy and the exploitation of limited water resources. Hence, the management of these networks must be carried out optimally with respect to the use of available resources and infrastructure, whilst satisfying high service levels for the drinking water supply. To accomplish this task, this paper explores various methods for demand forecasting, such as Seasonal ARIMA, BATS and Support Vector Machines, and presents a set of statistically validated time series models. These models, integrated with the Model Predictive Control (MPC) strategy addressed in this paper, allow for accurate on-line forecasting and flow management of a DWN.
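A seasonal-naive baseline, the kind of sanity check against which statistically validated models such as Seasonal ARIMA, BATS or SVM would be judged, can be sketched as follows (`seasonal_naive_forecast` is a hypothetical helper, not from the paper):

```python
import numpy as np

def seasonal_naive_forecast(demand, period=24, horizon=24):
    """Seasonal-naive baseline for water demand: predict each future step
    with the value observed one seasonal period earlier (e.g. the same
    hour yesterday for hourly data with period=24).
    """
    demand = np.asarray(demand, dtype=float)
    reps = -(-horizon // period)          # ceiling division
    last_cycle = demand[-period:]         # most recent full seasonal cycle
    return np.tile(last_cycle, reps)[:horizon]
```

A candidate model that cannot beat this baseline on held-out demand data adds no forecasting value to the MPC layer, whatever its in-sample fit.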
Ajay Kumar Sampathirao
Juan Manuel Grosso Pérez
Pantelis Sopasakis
pantelis.sopasakis@imtlucca.it
Carlos Ocampo-Martinez
Alberto Bemporad
alberto.bemporad@imtlucca.it
Vicenç Puig
2014-10-10T09:34:56Z
2015-04-08T10:37:32Z
http://eprints.imtlucca.it/id/eprint/2323
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2323
2014-10-10T09:34:56Z
Analysis of service oriented software systems with the conversation calculus
We overview some perspectives on the concept of service-based computing, and discuss the motivation for a small set of modeling abstractions for expressing and analyzing service-based systems, which have led to the design of the Conversation Calculus. Distinguishing aspects of the Conversation Calculus are the adoption of a very simple, context-sensitive, local message-passing communication mechanism, natural support for modeling multi-party conversations, and a novel mechanism for handling exceptional behavior. In this paper, written in a tutorial style, we review some Conversation Calculus based analysis techniques for reasoning about properties of service-based systems, mainly by going through a sequence of illustrative examples.
Luis Caires
Hugo Torres Vieira
hugo.torresvieira@imtlucca.it
2014-10-09T13:45:05Z
2015-04-08T10:37:32Z
http://eprints.imtlucca.it/id/eprint/2322
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2322
2014-10-09T13:45:05Z
Typing liveness in multiparty communicating systems
Session type systems are an effective tool to prove that communicating programs do not go wrong, ensuring that the participants of a session follow the protocols described by the types. In a previous work we introduced a typing discipline for the analysis of progress in binary sessions. In this paper we generalize the approach to multiparty sessions following the conversation type approach, while strengthening progress to liveness. We combine the usual session-like fidelity analysis with the liveness analysis and devise an original treatment of recursive types allowing us to address challenging configurations that are out of the reach of existing approaches.
Luca Padovani
Vasco Thudichum Vasconcelos
Hugo Torres Vieira
hugo.torresvieira@imtlucca.it
2014-10-09T13:30:52Z
2015-04-08T10:37:32Z
http://eprints.imtlucca.it/id/eprint/2320
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2320
2014-10-09T13:30:52Z
Typing progress in communication-centred systems
We present a type system for the analysis of progress in session-based, communication-centred systems. Our development is carried out in a minimal setting considering classic (binary) sessions, but building on and generalising previous work on progress analysis in the context of conversation types. Our contributions aim at underpinning forthcoming work on progress for session-typed systems, so as to support richer verification procedures based on a more foundational approach. Although this work does not target expressiveness, our approach already addresses challenging scenarios which are unaccounted for elsewhere in the literature, in particular systems that interleave communications on received session channels.
Hugo Torres Vieira
hugo.torresvieira@imtlucca.it
Vasco Thudichum Vasconcelos
2014-10-09T13:26:39Z
2015-04-08T10:37:32Z
http://eprints.imtlucca.it/id/eprint/2319
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2319
2014-10-09T13:26:39Z
A type system for flexible role assignment in multiparty communicating systems
Communication protocols in distributed systems often specify the roles of the parties involved in the communications, namely for enforcing security policies or task assignment purposes. Ensuring that implementations follow role-based protocol specifications is challenging, especially in scenarios found, e.g., in business processes and web applications, where multiple peers are involved, single peers impersonate several roles, or single roles are carried out by several peers. We present a type-based analysis for statically verifying role-based multi-party interactions, based on a simple π-calculus model and prior work on conversation types. Our main result ensures that well-typed systems follow the role-based protocols prescribed by the types, including systems where roles are flexibly assigned to processes.
Pedro Baltazar
Luis Caires
Vasco Thudichum Vasconcelos
Hugo Torres Vieira
hugo.torresvieira@imtlucca.it
2014-10-09T13:21:53Z
2015-04-08T10:37:32Z
http://eprints.imtlucca.it/id/eprint/2318
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2318
2014-10-09T13:21:53Z
SLMC: a tool for model checking concurrent systems against dynamical spatial logic specifications
The Spatial Logic Model Checker is a tool for verifying π-calculus systems against safety, liveness, and structural properties expressed in the spatial logic for concurrency of Caires and Cardelli. Model-checking is one of the most widely used techniques to check temporal properties of software systems. However, when the analysis focuses on properties related to resource usage, localities, interference, mobility, or topology, it is crucial to reason about spatial properties and structural dynamics. The SLMC is the only currently available tool that supports the combined analysis of behavioral and spatial properties of systems. The implementation, written in OCAML, is mature and robust, available in open source, and outperforms other tools for verifying systems modeled in π-calculus.
Luis Caires
Hugo Torres Vieira
hugo.torresvieira@imtlucca.it
2014-10-09T13:14:13Z
2015-04-08T10:37:32Z
http://eprints.imtlucca.it/id/eprint/2317
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2317
2014-10-09T13:14:13Z
Typing dynamic roles in multiparty interaction
We present a type-based analysis for role-based multiparty interaction. Novel to our approach are the notions that a role specified in a protocol may be carried out by several parties, and that one party may assume different roles at different stages of the protocol. We build on Conversation Types by adding roles to protocol specifications. Systems are modeled in the π-calculus extended with labeled communication and role annotations. The main result shows that well-typed systems follow the role-based protocols prescribed by the types, addressing systems where roles have dynamic distributed implementations.
Pedro Baltazar
Vasco Thudichum Vasconcelos
Hugo Torres Vieira
hugo.torresvieira@imtlucca.it
2014-10-09T12:45:52Z
2015-04-08T10:37:32Z
http://eprints.imtlucca.it/id/eprint/2315
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2315
2014-10-09T12:45:52Z
Static analysis techniques for session-oriented calculi
In the Sensoria project, core calculi have been adopted as a linguistic means to model and analyze service-oriented applications. The present chapter reports on the static analysis techniques developed for the Sensoria session-oriented core calculi CaSPiS and CC. In particular, it presents a type system for client progress and control flow analysis in CaSPiS and type systems for conversation fidelity and progress in CC. The chapter gives an overview of these techniques, summarizes the main results and presents the analysis of a common example taken from the Sensoria financial case study: the credit request scenario.
Lucia Acciai
Chiara Bodei
Michele Boreale
Roberto Bruni
Hugo Torres Vieira
hugo.torresvieira@imtlucca.it
2014-10-09T12:43:01Z
2015-04-08T10:37:32Z
http://eprints.imtlucca.it/id/eprint/2314
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2314
2014-10-09T12:43:01Z
Behavioral theory for session-oriented calculi
This chapter presents the behavioral theory of some of the Sensoria core calculi. We consider SSCC, μse and CC as representatives of the session-based approach and COWS as representative of the correlation-based one.
For SSCC, μse and CC the main point is the structure that the session/conversation mechanism creates in programs. We show how the differences between binary sessions, multiparty sessions and dynamic conversations are captured by different behavioral laws. We also exploit those laws for proving the correctness of program transformations.
For COWS the main point is that communication is prioritized (the best matching input captures the output), and this has a strong influence on the behavioral theory of COWS. In particular, we show that communication in COWS is neither purely synchronous nor purely asynchronous.
Ivan Lanese
Antonio Ravara
Hugo Torres Vieira
hugo.torresvieira@imtlucca.it
2014-10-09T12:31:27Z
2015-04-08T10:37:32Z
http://eprints.imtlucca.it/id/eprint/2313
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2313
2014-10-09T12:31:27Z
Advanced mechanisms for service combination and transactions
Languages and models for service-oriented applications usually include primitives and constructs for exception and compensation handling. Exception handling is used to react to unexpected events while compensation handling is used to undo previously completed activities. In this chapter we investigate the impact of exception and compensation handling in message-based process calculi and the related theories developed within Sensoria.
Carla Ferreira
Ivan Lanese
Antonio Ravara
Hugo Torres Vieira
hugo.torresvieira@imtlucca.it
Gianluigi Zavattaro
2014-10-09T11:59:22Z
2015-04-08T10:37:32Z
http://eprints.imtlucca.it/id/eprint/2311
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2311
2014-10-09T11:59:22Z
Tools and verification
This chapter presents different tools that have been developed inside the Sensoria project. Sensoria studied qualitative analysis techniques for verifying properties of service implementations with respect to their formal specifications. The tools presented in this chapter have been developed to carry out the analysis in an automated, or semi-automated, way.
We present four different tools, all developed during the Sensoria project, exploiting new techniques and calculi from the Sensoria project itself.
Massimo Bartoletti
Luis Caires
Ivan Lanese
Franco Mazzanti
Davide Sangiorgi
Hugo Torres Vieira
hugo.torresvieira@imtlucca.it
Roberto Zunino
2014-10-09T11:45:31Z
2015-04-08T10:37:32Z
http://eprints.imtlucca.it/id/eprint/2310
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2310
2014-10-09T11:45:31Z
Spatial logic model checker user’s guide : version 1.15
Luis Caires
Hugo Torres Vieira
hugo.torresvieira@imtlucca.it
2014-10-08T13:56:17Z
2015-04-08T10:37:32Z
http://eprints.imtlucca.it/id/eprint/2300
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2300
2014-10-08T13:56:17Z
A calculus for modeling and analyzing conversations in service-oriented computing
The service-oriented computing paradigm has motivated a large research effort in the past few years. On the one hand, the wide dissemination of Web-Service technology urged the development of standards, tools and formal techniques that contributed to the design of more reliable systems. On the other hand, many of the problems presented in the study of service-oriented applications find an existing basis in well-established research fields, as is the case of the study of interaction models, which has been an active field of research in the last couple of decades. However, there are many new problems raised by the service-oriented computing paradigm in particular that call for new concepts, dedicated models and specialized formal analysis techniques. The work presented in this dissertation is part of this effort, with particular focus on the challenges involved in governing interaction in service-oriented applications. One of the main innovations introduced by the work presented here is the way in which multiparty interaction is handled. One reference field of research that addresses the specification and analysis of interaction of communication-centric systems is based on the notion of session. Essentially, a session characterizes the interaction between two parties, a client and a server, that exchange messages between them in a sequential and dual way. The notion of session is thus particularly adequate to model the client/server paradigm; however, it fails to cope with interaction between several participants, a scenario frequently found in real service-oriented applications.
The approach described in this dissertation improves on the state of the art in that it makes it possible to model and analyze systems where several parties interact, while retaining the fundamental flavor of session-based approaches, by relying on a novel notion of conversation: a simple extension of the notion of session that allows several parties to interact in a single medium of communication in a disciplined way, via labeled message passing. The contributions of this dissertation address the modeling and analysis of service-oriented applications in a rigorous way. First, we propose and study a formal model for service-oriented computing, the Conversation Calculus, which, building on the abstract notion of conversation, captures the interactions between several parties relative to the same service task using a single medium of communication. Second, we introduce formal analysis techniques, namely the conversation type system and the progress proof system, that can be used to ensure, in a provably correct way and at static verification time (before deploying such applications), that systems enjoy good properties such as “the prescribed protocols will be followed at runtime by all conversation participants” (conversation fidelity) and “the system will never run into a stuck state” (progress). We give substantial evidence that our approach is already effective enough to model and type sophisticated service-based systems at a fairly high level of abstraction. Examples of such systems include challenging scenarios involving simultaneous multiparty conversations, with concurrency and access to local resources, and conversations with a dynamically changing and unanticipated number of participants, which fall outside the scope of previous approaches.
Hugo Torres Vieira
hugo.torresvieira@imtlucca.it
2014-10-08T13:47:34Z
2015-04-08T10:37:32Z
http://eprints.imtlucca.it/id/eprint/2299
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2299
2014-10-08T13:47:34Z
Conversation types
We present a type theory for analyzing concurrent multiparty interactions as found in service-oriented computing. Our theory introduces a novel and flexible type structure, able to uniformly describe both the internal and the interface behavior of systems, referred to, respectively, as choreographies and contracts in web-services terminology. The notion of conversation builds on the fundamental concept of session, but generalizes it along previously unexplored directions; in particular, conversation types discipline interactions in conversations while accounting for the dynamic join and leave of an unanticipated number of participants. We prove that well-typed systems never violate the prescribed conversation constraints. We also present techniques to ensure progress of systems involving several interleaved conversations, a previously open problem.
Luis Caires
Hugo Torres Vieira
hugo.torresvieira@imtlucca.it
2014-10-08T13:38:03Z
2015-04-08T10:37:32Z
http://eprints.imtlucca.it/id/eprint/2298
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2298
2014-10-08T13:38:03Z
Conversation types
We present a type theory for analyzing concurrent multiparty interactions as found in service-oriented computing. Our theory introduces a novel and flexible type structure, able to uniformly describe both the internal and the interface behavior of systems, referred to, respectively, as choreographies and contracts in web-services terminology. The notion of conversation builds on the fundamental concept of session, but generalizes it along previously unexplored directions; in particular, conversation types discipline interactions in conversations while accounting for the dynamic join and leave of an unanticipated number of participants. We prove that well-typed systems never violate the prescribed conversation constraints. We also present techniques to ensure progress of systems involving several interleaved conversations, a previously open problem.
Luis Caires
Hugo Torres Vieira
hugo.torresvieira@imtlucca.it
2014-10-08T13:21:57Z
2015-04-08T10:37:32Z
http://eprints.imtlucca.it/id/eprint/2297
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2297
2014-10-08T13:21:57Z
A process calculus analysis of compensations
Conversations in service-oriented computation are frequently long running. In such a setting, traditional ACID properties of transactions cannot be reasonably implemented, and compensation mechanisms seem to provide convenient techniques to, at least, approximate them. In this paper, we investigate the representation and analysis of structured compensating transactions within a process calculus model, by embedding in the Conversation Calculus certain structured compensation programming abstractions inspired by the ones proposed by Butler, Ferreira, and Hoare. We prove the correctness of the embedding after developing a general notion of stateful model for structured compensations, proving related results, and showing that the embedding induces such a model.
Luis Caires
Carla Ferreira
Hugo Torres Vieira
hugo.torresvieira@imtlucca.it
2014-10-08T13:13:49Z
2015-04-08T10:37:32Z
http://eprints.imtlucca.it/id/eprint/2296
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2296
2014-10-08T13:13:49Z
The conversation calculus: a model of service-oriented computation
We present a process-calculus model for expressing and analyzing service-based systems. Our approach addresses central features of the service-oriented computational model such as distribution, process delegation, communication and context sensitivity, and loose coupling. Distinguishing aspects of our model are the notion of conversation context, the adoption of a context-sensitive, message-passing-based communication, and of a simple yet expressive mechanism for handling exceptional behavior. We instantiate our model by extending a fragment of the π-calculus, illustrate its expressiveness by means of many examples, and study its basic behavioral theory; in particular, we establish that bisimilarity is a congruence.
Hugo Torres Vieira
hugo.torresvieira@imtlucca.it
Luis Caires
João C. Seco
2014-10-07T13:25:24Z
2015-04-08T10:37:32Z
http://eprints.imtlucca.it/id/eprint/2291
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2291
2014-10-07T13:25:24Z
The spatial logic model checker user's manual: version 1.0
Hugo Torres Vieira
hugo.torresvieira@imtlucca.it
Luis Caires
Rubens Viegas
2014-10-07T13:16:48Z
2015-04-08T10:37:32Z
http://eprints.imtlucca.it/id/eprint/2290
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2290
2014-10-07T13:16:48Z
The spatial logic model checker user's manual: version 0.9
Hugo Torres Vieira
hugo.torresvieira@imtlucca.it
Luis Caires
2014-09-02T09:44:03Z
2014-09-02T09:44:03Z
http://eprints.imtlucca.it/id/eprint/2273
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2273
2014-09-02T09:44:03Z
Douglas-Rachford splitting: complexity estimates and accelerated variants
We propose a new approach for analyzing convergence of the Douglas-Rachford splitting method for solving convex composite optimization problems. The approach is based on a continuously differentiable function, the Douglas-Rachford Envelope (DRE), whose stationary points correspond to the solutions of the original (possibly nonsmooth) problem. The Douglas-Rachford splitting method is shown to be equivalent to a scaled gradient method on the DRE, and so results from smooth unconstrained optimization are employed to analyze its convergence, to optimally choose the parameter γ, and to derive an accelerated variant of Douglas-Rachford splitting.
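The Douglas-Rachford iteration analyzed above can be sketched on a toy composite problem. This is a minimal sketch, assuming f(x) = ||x||₁ and g(x) = ½||x − b||² (an illustrative instance chosen because both proximal mappings have closed forms, not the paper's setting); the step γ and iteration count are likewise assumptions:

```python
# Minimal sketch of Douglas-Rachford splitting for min_x f(x) + g(x).
# Toy instance (assumed): f(x) = ||x||_1, g(x) = 0.5*||x - b||^2.

def soft_threshold(v, t):
    # prox of t*||.||_1: shrink each entry toward zero by t
    return [max(abs(vi) - t, 0.0) * (1.0 if vi > 0 else -1.0) for vi in v]

def douglas_rachford(b, gamma=1.0, iters=200):
    s = [0.0] * len(b)                                   # DR auxiliary variable
    for _ in range(iters):
        y = soft_threshold(s, gamma)                     # y = prox_{gamma*f}(s)
        r = [2.0 * yi - si for yi, si in zip(y, s)]      # reflection step
        z = [(ri + gamma * bi) / (1.0 + gamma)           # z = prox_{gamma*g}(r)
             for ri, bi in zip(r, b)]
        s = [si + zi - yi for si, zi, yi in zip(s, z, y)]
    return y                                             # y tends to the minimizer
```

For b = [3.0, 0.5, -2.0] the minimizer of ||x||₁ + ½||x − b||² is the soft-thresholded vector [2.0, 0.0, -1.0], which the iteration recovers.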
Panagiotis Patrinos
panagiotis.patrinos@imtlucca.it
Lorenzo Stella
lorenzo.stella@imtlucca.it
Alberto Bemporad
alberto.bemporad@imtlucca.it
2014-08-08T10:44:40Z
2014-08-08T10:44:40Z
http://eprints.imtlucca.it/id/eprint/2270
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2270
2014-08-08T10:44:40Z
Annotated image datasets of rosette plants
While image-based approaches to plant phenotyping are gaining momentum, benchmark data focusing on typical imaging situations and tasks in plant phenotyping are still lacking, making it difficult to compare existing methodologies. This report describes a benchmark dataset of raw and annotated images of plants. We describe the plant material, environmental conditions, and imaging setup and procedures, as well as the datasets where this image selection stems from. We also describe the annotation process, since all of these images have been manually segmented by experts, such that each leaf has its own label. Color images in the dataset show top-down views of young rosette plants. Two datasets show different genotypes of Arabidopsis while another dataset shows tobacco (Nicotiana tabacum) under different treatments. A version of the dataset, described also in this report, is in the public domain at http://www.plant-phenotyping.org/CVPPP2014-dataset and can be used for the purpose of plant/leaf segmentation from background, with accompanying evaluation scripts. This version was used in the Leaf Segmentation Challenge (LSC) of the Computer Vision Problems in Plant Phenotyping (CVPPP 2014) workshop organized in conjunction with the 13th European Conference on Computer Vision (ECCV), in Zürich, Switzerland. We hope with the release of this, and future, dataset(s) to invigorate the study of computer vision problems and the development of algorithms in the context of plant phenotyping. We also aim to provide to the computer vision community another interesting dataset on which new algorithmic developments can be evaluated.
Hanno Scharr
Massimo Minervini
massimo.minervini@imtlucca.it
Andreas Fischbach
Sotirios A. Tsaftaris
sotirios.tsaftaris@imtlucca.it
2014-08-04T11:23:26Z
2016-04-06T08:20:36Z
http://eprints.imtlucca.it/id/eprint/2268
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2268
2014-08-04T11:23:26Z
Comparing algorithms for automated vessel segmentation in computed tomography scans of the lung: the VESSEL12 study
The VESSEL12 (VESsel SEgmentation in the Lung) challenge objectively compares the performance of different algorithms to identify vessels in thoracic computed tomography (CT) scans. Vessel segmentation is fundamental in computer aided processing of data generated by 3D imaging modalities. As manual vessel segmentation is prohibitively time consuming, any real world application requires some form of automation. Several approaches exist for automated vessel segmentation, but judging their relative merits is difficult due to a lack of standardized evaluation. We present an annotated reference dataset containing 20 CT scans and propose nine categories to perform a comprehensive evaluation of vessel segmentation algorithms from both academia and industry. Twenty algorithms participated in the VESSEL12 challenge, held at the International Symposium on Biomedical Imaging (ISBI) 2012. All results have been published at the VESSEL12 website http://vessel12.grand-challenge.org. The challenge remains ongoing and open to new participants. Our three contributions are: (1) an annotated reference dataset available online for evaluation of new algorithms; (2) a quantitative scoring system for objective comparison of algorithms; and (3) performance analysis of the strengths and weaknesses of the various vessel segmentation methods in the presence of various lung diseases.
Rina D. Rudyanto
Sjoerd Kerkstra
Eva M. van Rikxoort
Catalin Fetita
Pierre-Yves Brillet
Christophe Lefevre
Wenzhe Xue
Xiangjun Zhu
Jianming Liang
Ilkay Oksuz
ilkay.oksuz@imtlucca.it
Devrim Ünay
Kamuran Kadipasaoglu
Raúl San José Estépar
James C. Ross
George R. Washko
Juan-Carlos Prieto
Marcela Hernández Hoyos
Maciej Orkisz
Hans Meine
Markus Hüllebrand
Christina Stöcker
Fernando Lopez Mir
Valery Naranjo
Eliseo Villanueva
Marius Staring
Changyan Xiao
Berend C. Stoel
Anna Fabijanska
Erik Smistad
Anne C. Elster
Frank Lindseth
Amir Hossein Foruzan
Ryan Kiros
Karteek Popuri
Dana Cobzas
Daniel Jimenez-Carretero
Andres Santos
Maria J. Ledesma-Carbayo
Michael Helmberger
Martin Urschler
Michael Pienn
Dennis G.H. Bosboom
Arantza Campo
Mathias Prokop
Pim A. de Jong
Carlos Ortiz-de-Solorzano
Arrate Muñoz-Barrutia
Bram van Ginneken
2014-07-29T08:14:57Z
2014-07-29T08:14:57Z
http://eprints.imtlucca.it/id/eprint/2266
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2266
2014-07-29T08:14:57Z
Assessment of myocardial reactivity to controlled hypercapnia with free-breathing T2-prepared cardiac blood oxygen level–dependent MR imaging
Purpose: To examine whether controlled and tolerable levels of hypercapnia may be an alternative to adenosine, a routinely used coronary vasodilator, in healthy human subjects and animals. Materials and Methods: Human studies were approved by the institutional review board and were HIPAA compliant. Eighteen subjects had end-tidal partial pressure of carbon dioxide (PetCO2) increased by 10 mm Hg, and myocardial perfusion was monitored with myocardial blood oxygen level–dependent (BOLD) magnetic resonance (MR) imaging. Animal studies were approved by the institutional animal care and use committee. Anesthetized canines with (n = 7) and without (n = 7) induced stenosis of the left anterior descending artery (LAD) underwent vasodilator challenges with hypercapnia and adenosine. LAD coronary blood flow velocity and free-breathing myocardial BOLD MR responses were measured at each intervention. Appropriate statistical tests were performed to evaluate measured quantitative changes in all parameters of interest in response to changes in partial pressure of carbon dioxide.
Results: Changes in myocardial BOLD MR signal were equivalent to reported changes with adenosine (11.2% ± 10.6 [hypercapnia, 10 mm Hg] vs 12% ± 12.3 [adenosine]; P = .75). In intact canines, there was a sigmoidal relationship between BOLD MR response and PetCO2 with most of the response occurring over a 10 mm Hg span. BOLD MR (17% ± 14 [hypercapnia] vs 14% ± 24 [adenosine]; P = .80) and coronary blood flow velocity (21% ± 16 [hypercapnia] vs 26% ± 27 [adenosine]; P > .99) responses were similar to that of adenosine infusion. BOLD MR signal changes in canines with LAD stenosis during hypercapnia and adenosine infusion were not different (1% ± 4 [hypercapnia] vs 6% ± 4 [adenosine]; P = .12). Conclusion: Free-breathing T2-prepared myocardial BOLD MR imaging showed that hypercapnia of 10 mm Hg may provide a cardiac hyperemic stimulus similar to adenosine.
Hsin-Jung Yang
Roya Yumul
Richard Tang
Ivan Cokic
Michael Klein
Avinash Kali
Olivia Sobczyk
Behzad Sharif
Jun Tang
Xiaoming Bi
Sotirios A. Tsaftaris
sotirios.tsaftaris@imtlucca.it
Debiao Li
Antonio Hernandez Conte
Joseph A. Fisher
Rohan Dharmakumar
2014-07-16T12:09:17Z
2014-12-03T13:05:43Z
http://eprints.imtlucca.it/id/eprint/2260
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2260
2014-07-16T12:09:17Z
Stabilizing dynamic controllers for hybrid systems: a hybrid control Lyapunov function approach
This paper proposes a dynamic controller structure and a systematic design procedure for stabilizing discrete-time hybrid systems. The proposed approach is based on the concept of control Lyapunov functions (CLFs), which, when available, can be used to design a stabilizing state-feedback control law. In general, the construction of a CLF for hybrid dynamical systems involving both continuous and discrete states is extremely complicated, especially in the presence of non-trivial discrete dynamics. Therefore, we introduce the novel concept of a hybrid control Lyapunov function, which allows the compositional design of a discrete and a continuous part of the CLF, and we formally prove that the existence of a hybrid CLF guarantees the existence of a classical CLF. A constructive procedure is provided to synthesize a hybrid CLF, by expanding the dynamics of the hybrid system with a specific controller dynamics. We show that this synthesis procedure leads to a dynamic controller that can be implemented by a receding horizon control strategy, and that the associated optimization problem is numerically tractable for a fairly general class of hybrid systems, useful in real world applications. Compared to classical hybrid receding horizon control algorithms, the proposed approach typically requires a shorter prediction horizon to guarantee asymptotic stability of the closed-loop system, which yields a reduction of the computational burden, as illustrated through two examples.
Stefano Di Cairano
W.P.M.H. Heemels
Mircea Lazar
Alberto Bemporad
alberto.bemporad@imtlucca.it
2014-07-08T13:40:37Z
2014-07-08T13:40:37Z
http://eprints.imtlucca.it/id/eprint/2251
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2251
2014-07-08T13:40:37Z
An algorithm for PWL approximations of nonlinear functions
In this report we provide technical details for some of the results that appeared in [Alessio et al. (2005)]. In the first section we provide the proof of continuity of the PPWA function computed with the “squaring the circle” algorithm stated in ACC 06. Then, we analyze the complexity of the previous algorithm in terms of the desired level of accuracy in the approximation of the PPWA function.
Alessandro Alessio
Alberto Bemporad
alberto.bemporad@imtlucca.it
B. Addis
Alessandro Pasini
2014-07-03T10:18:42Z
2016-04-06T09:20:28Z
http://eprints.imtlucca.it/id/eprint/2241
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2241
2014-07-03T10:18:42Z
Standardized evaluation framework for evaluating coronary artery stenosis detection, stenosis quantification and lumen segmentation algorithms in computed tomography angiography
Though conventional coronary angiography (CCA) has been the standard of reference for diagnosing coronary artery disease in the past decades, computed tomography angiography (CTA) has rapidly emerged, and is nowadays widely used in clinical practice. Here, we introduce a standardized evaluation framework to reliably evaluate and compare the performance of the algorithms devised to detect and quantify coronary artery stenoses, and to segment the coronary artery lumen in CTA data. The objective of this evaluation framework is to demonstrate the feasibility of dedicated algorithms to: (1) (semi-)automatically detect and quantify stenosis on CTA, in comparison with quantitative coronary angiography (QCA) and CTA consensus reading, and (2) (semi-)automatically segment the coronary lumen on CTA, in comparison with experts’ manual annotations. A database consisting of 48 multicenter multivendor cardiac CTA datasets with corresponding reference standards is described and made available. The algorithms from 11 research groups were quantitatively evaluated and compared. The results show that (1) some of the current stenosis detection/quantification algorithms may be used for triage or as a second-reader in clinical practice, and that (2) automatic lumen segmentation is possible with a precision similar to that obtained by experts. The framework is open for new submissions through the website, at http://coronary.bigr.nl/stenoses/.
H.A. Kirişli
M. Schaap
C.T. Metz
A.S. Dharampal
W.B. Meijboom
S.L. Papadopoulou
A. Dedic
K. Nieman
M.A. de Graaf
M.F.L. Meijs
M.J. Cramer
A. Broersen
S. Cetin
A. Eslami
L. Flórez-Valencia
K.L. Lor
B. Matuszewski
I. Melki
B. Mohr
Ilkay Oksuz
ilkay.oksuz@imtlucca.it
R. Shahzad
C. Wang
P.H. Kitslaar
G. Unal
A. Katouzian
Maciej Orkisz
C.M. Chen
F. Precioso
L. Najman
S. Masood
Devrim Unay
L. van Vliet
R. Moreno
R. Goldenberg
E. Vuçini
G.P. Krestin
W.J. Niessen
T. van Walsum
2014-07-03T10:05:28Z
2014-07-03T10:05:28Z
http://eprints.imtlucca.it/id/eprint/2240
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2240
2014-07-03T10:05:28Z
Automated aortic supravalvular sinus detection in conventional computed tomography image
Valvular diseases are those where one or more of the cardiac valves are affected. Treatment of valvular diseases often involves replacement or restoration of the affected valve(s). In such a surgical procedure, the medical expert performing the procedure can largely benefit from a patient-specific and dynamic valvular model containing information complementary to the 2D/3D static images. To this end, in this study a novel automated supravalvular sinus detection method (to be used as a first step in aortic valve segmentation) on conventional contrast-enhanced ECG-gated multislice CT data and its evaluation on 31 expert-annotated real cases are presented. Results demonstrate a highly accurate detection performance with an average error below 1.12 mm.
Devrim Unay
Ibrahim Harmankaya
Ilkay Oksuz
ilkay.oksuz@imtlucca.it
Kamuran Kadipasaoglu
Rahmi Cubuk
Levent Celik
2014-07-03T10:00:27Z
2014-07-03T10:00:27Z
http://eprints.imtlucca.it/id/eprint/2239
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2239
2014-07-03T10:00:27Z
Region growing on frangi vesselness values in 3-D CTA data
In cardiac diagnostic methods, the shape and curvature of the coronary arteries is essential. Consequently, one of the most important requirements for Computer Aided Diagnosis (CAD) systems is automated segmentation of the vasculature. In this paper, we propose a new hybrid algorithm, which segments the coronary arterial tree in CTA images by merging two methodologies, namely region growing and the Frangi approach. The algorithm first runs region growing on Frangi vesselness values and subsequently optimizes the results with several threshold values. Comparison of the present results with the optimal results of existing segmentation algorithms reveals that the proposed approach outperforms its predecessors. The diagnostic accuracy of the algorithm will next be validated on the segmentation of coronary arteries from real CT data.
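The first stage of the hybrid scheme above, growing a region over precomputed vesselness values, can be sketched as follows. This is a minimal sketch on an assumed toy 2-D grid with an assumed threshold, using 4-connectivity; the actual method operates on 3-D CTA volumes with Frangi filter responses and wider neighbourhoods:

```python
from collections import deque

def region_grow(vesselness, seed, threshold):
    # Grow a region from `seed` over cells whose (precomputed) vesselness
    # value meets `threshold`. BFS over a 2-D grid with 4-connectivity;
    # a 3-D implementation would add the remaining neighbour offsets.
    rows, cols = len(vesselness), len(vesselness[0])
    segmented, frontier = set(), deque([seed])
    while frontier:
        r, c = frontier.popleft()
        if (r, c) in segmented or not (0 <= r < rows and 0 <= c < cols):
            continue
        if vesselness[r][c] < threshold:                 # reject low-vesselness cells
            continue
        segmented.add((r, c))
        frontier.extend([(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)])
    return segmented
```

Re-running the growth for several threshold values, as the abstract describes, then amounts to calling `region_grow` with each candidate threshold and scoring the resulting segmentations.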
Ilkay Oksuz
ilkay.oksuz@imtlucca.it
Devrim Unay
Kamuran Kadipasaoglu
2014-07-03T09:44:43Z
2015-05-29T10:25:43Z
http://eprints.imtlucca.it/id/eprint/2237
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2237
2014-07-03T09:44:43Z
Biharmonic density estimate - a scale space signature for deformable surfaces
A novel intrinsic geometric scale space formulation for 3D deformable surfaces, termed the Biharmonic Density Estimate (BDE), is proposed. The proposed BDE signature allows for multiscale surface feature-based representation of deformable 3D shapes for subsequent image and scene analysis. It is shown to provide an underlying theoretical framework for the concept of intrinsic geometric scale space, resulting in a highly descriptive characterization of both the local surface structure and the global metric of the 3D shape. The compactness and robustness of the proposed BDE signature are demonstrated via a series of experiments and a key components detection application.
Anirban Mukhopadhyay
anirban.mukhopadhyay@imtlucca.it
Suchendra M. Bhandarkar
2014-07-03T09:34:35Z
2014-07-03T09:34:35Z
http://eprints.imtlucca.it/id/eprint/2236
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2236
2014-07-03T09:34:35Z
Analysis of surface folding patterns of diccols using the GPU-Optimized geodesic field estimate
Localization of cortical regions of interest (ROIs) in the human brain via analysis of Diffusion Tensor Imaging (DTI) data plays a pivotal role in basic and clinical neuroscience. In recent studies, 358 common cortical landmarks in the human brain, termed Dense Individualized and Common Connectivity-based Cortical Landmarks (DICCCOLs), have been identified. Each of these DICCCOL sites has been observed to possess fiber connection patterns that are consistent across individuals and populations and can be regarded as predictive of brain function. However, the regularity and variability of the cortical surface folding patterns at these DICCCOL sites have, thus far, not been investigated. This paper presents a novel approach, based on intrinsic surface geometry, for quantitative analysis of the regularity and variability of the cortical surface folding patterns with respect to the structural neural connectivity of the human brain. In particular, the Geodesic Field Estimate (GFE) is used to infer the relationship between the structural and connectional DTI features and the complex surface geometry of the human brain. A parallel algorithm, well suited for implementation on Graphics Processing Units (GPUs), is also proposed for efficient computation of the shortest geodesic paths between all cortical surface point pairs. Based on experimental results, a mathematical model for the morphological variability and regularity of the cortical folding patterns in the vicinity of the DICCCOL sites is proposed. It is envisioned that this model could potentially be applied in several human brain image registration and brain mapping applications.
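The all-pairs geodesic computation mentioned above reduces, per source vertex, to a shortest-path search over the mesh graph, which a GPU implementation can run for many sources in parallel. A minimal single-source sketch on an assumed toy adjacency structure (edge weights stand in for mesh edge lengths; none of this is derived from DTI data):

```python
import heapq

def geodesic_distances(adj, source):
    # Dijkstra over a surface mesh graph: `adj` maps each vertex to a
    # list of (neighbour, edge_length) pairs; returns shortest-path
    # distances from `source`, approximating geodesic distance.
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):    # skip stale heap entries
            continue
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist
```

Running this once per surface vertex yields the all-pairs table; the parallel algorithm in the paper targets exactly that embarrassingly parallel outer loop.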
Anirban Mukhopadhyay
anirban.mukhopadhyay@imtlucca.it
Chul Woo Lim
Suchendra M. Bhandarkar
Hanbo Chen
Tianming Liu
Khaled Rasheed
Thiab Taha
2014-07-03T09:06:15Z
2014-07-03T09:36:26Z
http://eprints.imtlucca.it/id/eprint/2235
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2235
2014-07-03T09:06:15Z
Morphological analysis of the left ventricular endocardial surface and its clinical implications
The complex morphological structure of the left ventricular endocardial surface and its relation to the severity of arterial stenosis has not yet been thoroughly investigated due to the limitations of conventional imaging techniques. By exploiting the recent developments in Multirow-Detector Computed Tomography (MDCT) scanner technology, the complex endocardial surface morphology of the left ventricle is studied and the cardiac segments affected by coronary arterial stenosis localized via analysis of Computed Tomography (CT) image data obtained from a 320-MDCT scanner. The non-rigid endocardial surface data is analyzed using an isometry-invariant Bag-of-Words (BOW) feature-based approach. The clinical significance of the analysis in identifying, localizing and quantifying the incidence and extent of coronary artery disease is investigated. Specifically, the association between the incidence and extent of coronary artery disease and the alterations in the endocardial surface morphology is studied. The results of the proposed approach on 15 normal data sets, and 12 abnormal data sets exhibiting coronary artery disease with varying levels of severity are presented. Based on the characterization of the endocardial surface morphology using the Bag-of-Words features, a neural network-based classifier is implemented to test the effectiveness of the proposed morphological analysis approach. Experiments performed on a strict leave-one-out basis are shown to exhibit a distinct pattern in terms of classification accuracy within the cardiac segments where the incidence of coronary arterial stenosis is localized.
Anirban Mukhopadhyay
anirban.mukhopadhyay@imtlucca.it
Zhen Qian
Suchendra M. Bhandarkar
Tianming Liu
Sarah Rinehart
Szilard Voros
2014-07-03T08:46:34Z
2014-07-03T08:46:34Z
http://eprints.imtlucca.it/id/eprint/2234
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2234
2014-07-03T08:46:34Z
Non-rigid shape correspondence and description using geodesic field estimate distribution
Non-rigid shape description and analysis is an unsolved problem in computer graphics. Shape analysis is a fast evolving research field due to the wide availability of 3D shape databases. Widely studied methods for this family of problems include the Gromov-Hausdorff distance [1], Bag-of-Features [2] and diffusion geometry [3]. The limitations of the Euclidean distance measure in the context of isometric deformation have made geodesic distance a de-facto standard for describing a metric space for non-rigid shape analysis. In this work, we propose a novel geodesic field space-based approach to describe and analyze non-rigid shapes from a point correspondence perspective.
Austin T. New
Anirban Mukhopadhyay
anirban.mukhopadhyay@imtlucca.it
Hamid R. Arabnia
Suchendra M. Bhandarkar
2014-07-03T08:34:33Z
2014-07-03T09:36:48Z
http://eprints.imtlucca.it/id/eprint/2233
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2233
2014-07-03T08:34:33Z
Shape analysis of the left ventricular endocardial surface and its application in detecting coronary artery disease
Coronary artery disease is the leading cause of morbidity and mortality worldwide. The complex morphological structure of the ventricular endocardial surface has not yet been studied properly due to the limitations of conventional imaging techniques. With the recent developments in Multi-Detector Computed Tomography (MDCT) scanner technology, we propose to study, in this paper, the complex endocardial surface morphology of the left ventricle via analysis of Computed Tomography (CT) image data obtained from a 320 Multi-Detector CT scanner. The CT image data is analyzed using a 3D shape analysis approach and the clinical significance of the analysis in detecting coronary artery disease is investigated. Global and local 3D shape descriptors are adapted for the purpose of shape analysis of the left ventricular endocardial surface. In order to study the association between the incidence of coronary artery disease and the alteration of the endocardial surface structure, we present the results of our shape analysis approach on 5 normal data sets, and 6 abnormal data sets with obstructive coronary artery disease. Based on the morphological characteristics of the endocardial surface as quantified by the shape descriptors, we implement a Linear Discrimination Analysis (LDA)-based classification algorithm to test the effectiveness of our shape analysis approach. Experiments performed on a strict leave-one-out basis are shown to achieve a classification accuracy of 81.8%.
Anirban Mukhopadhyay
anirban.mukhopadhyay@imtlucca.it
Zhen Qian
Suchendra M. Bhandarkar
Tianming Liu
Szilard Voros
2014-07-01T11:13:05Z
2014-07-01T11:13:05Z
http://eprints.imtlucca.it/id/eprint/2226
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2226
2014-07-01T11:13:05Z
Proximal Newton methods for convex composite optimization
This paper proposes two proximal Newton methods for convex nonsmooth optimization problems in composite form. The algorithms are based on a new continuously differentiable exact penalty function, namely the Composite Moreau Envelope. The first algorithm is based on a standard line search strategy, whereas the second one combines the global efficiency estimates of the corresponding first-order methods with fast asymptotic convergence rates. Furthermore, they are computationally attractive since each Newton iteration requires the solution of a linear system of usually small dimension.
Panagiotis Patrinos
panagiotis.patrinos@imtlucca.it
Alberto Bemporad
alberto.bemporad@imtlucca.it
2014-07-01T11:02:11Z
2014-07-01T11:02:11Z
http://eprints.imtlucca.it/id/eprint/2225
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2225
2014-07-01T11:02:11Z
Forward-backward truncated Newton methods for convex composite optimization
This paper proposes two proximal Newton-CG methods for convex nonsmooth optimization problems in composite form. The algorithms are based on a reformulation of the original nonsmooth problem as the unconstrained minimization of a continuously differentiable function, namely the forward-backward envelope (FBE). The first algorithm is based on a standard line search strategy, whereas the second one combines the global efficiency estimates of the corresponding first-order methods with fast asymptotic convergence rates. Furthermore, they are computationally attractive since each Newton iteration requires the approximate solution of a linear system of usually small dimension.
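The envelope above is built on the basic forward-backward step x⁺ = prox_{γg}(x − γ∇f(x)); the following sketches that first-order iteration, not the Newton-CG methods themselves. Assumed toy instance: f(x) = ½||x − b||² and g = λ||x||₁, with b, λ and the step γ chosen purely for illustration:

```python
# Forward-backward (proximal gradient) iteration underlying the FBE.
# Toy instance (assumed): f(x) = 0.5*||x - b||^2, so grad_f(x) = x - b,
# and g = lam*||x||_1, whose prox is soft-thresholding.

def soft(v, t):
    # prox of t*||.||_1 applied elementwise
    return [max(abs(vi) - t, 0.0) * (1.0 if vi > 0 else -1.0) for vi in v]

def forward_backward(b, lam=0.5, gamma=0.5, iters=100):
    x = [0.0] * len(b)
    for _ in range(iters):
        grad = [xi - bi for xi, bi in zip(x, b)]              # forward (gradient) step
        x = soft([xi - gamma * gi for xi, gi in zip(x, grad)],
                 gamma * lam)                                 # backward (proximal) step
    return x
```

For b = [2.0, -0.2] and λ = 0.5 the iteration converges to the soft-thresholded minimizer [1.5, 0.0]; the paper's contribution is to accelerate exactly this kind of scheme with curvature information from the FBE.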
Panagiotis Patrinos
panagiotis.patrinos@imtlucca.it
Lorenzo Stella
lorenzo.stella@imtlucca.it
Alberto Bemporad
alberto.bemporad@imtlucca.it
2014-06-27T12:29:24Z
2014-06-27T12:29:24Z
http://eprints.imtlucca.it/id/eprint/2213
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2213
2014-06-27T12:29:24Z
Reputation-Based Composition of Social Web Services
Social Web Services (SWSs) constitute a novel paradigm of service-oriented computing, where Web services, just like humans, sign up in social networks that guarantee, e.g., better service discovery for users and faster replacement in case of service failures. In past work, composition of SWSs was mainly supported by specialised social networks of competitor and cooperating services. In this work, we continue this line of research by proposing a novel SWSs composition procedure driven by the SWSs' reputation. Making use of a well-known formal language and associated tools, we specify the composition steps and we prove that such a reputation-driven approach ensures better results, in terms of the overall quality of service of the compositions, than randomly selecting SWSs.
Alessandro Celestini
alessandro.celestini@imtlucca.it
Gianpiero Costantino
Rocco De Nicola
r.denicola@imtlucca.it
Zakaria Maamar
Fabio Martinelli
Marinella Petrocchi
Francesco Tiezzi
francesco.tiezzi@imtlucca.it
2014-06-27T12:18:56Z
2015-02-06T10:07:15Z
http://eprints.imtlucca.it/id/eprint/2212
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2212
2014-06-27T12:18:56Z
Dimming relations for the efficient analysis of concurrent systems via action abstraction
We study models of concurrency based on labelled transition systems where abstractions are induced by a partition of the action set. We introduce dimming relations which are able to relate two models if they can mimic each other by using actions from the same partition block. Moreover, we discuss the necessary requirements for guaranteeing compositional verification. We show how our new relations and results can be exploited when seemingly heterogeneous systems exhibit analogous behaviours manifested via different actions. Dimming relations make the models more homogeneous by collapsing such distinct actions into the same partition block. With our examples, we show how these abstractions may considerably reduce the state-space size, in some cases from exponential to polynomial complexity.
Rocco De Nicola
r.denicola@imtlucca.it
Giulio Iacobelli
Mirco Tribastone
mirco.tribastone@imtlucca.it
2014-06-25T08:59:26Z
2014-06-25T08:59:26Z
http://eprints.imtlucca.it/id/eprint/2210
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2210
2014-06-25T08:59:26Z
Collective attention in the age of (mis)information
In this work we study, on a sample of 2.3 million individuals, how Facebook users consumed different information at the edge of political discussion and news during the last Italian electoral competition. Pages are categorized, according to their topics and the communities of interest they pertain to, into a) alternative information sources (diffusing topics that are neglected by science and mainstream media); b) online political activism; and c) mainstream media. We show that attention patterns are similar despite the different qualitative nature of the information, meaning that unsubstantiated claims (mainly conspiracy theories) reverberate for as long as other information. Finally, we categorize users according to their interaction patterns among the different topics and measure how a sample of this social ecosystem (1279 users) responded to the injection of 2788 false information posts. Our analysis reveals that users who interact mainly with alternative information sources (i.e. those more exposed to unsubstantiated claims) are more prone to interact with false claims.
Delia Mocanu
Qian Zhang
Màrton Karsai
Walter Quattrociocchi
walter.quattrociocchi@imtlucca.it
2014-06-19T08:09:43Z
2014-09-02T09:28:54Z
http://eprints.imtlucca.it/id/eprint/2208
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2208
2014-06-19T08:09:43Z
Robust Model Predictive Control for optimal continuous drug administration
In this paper the Model Predictive Control (MPC) technology is used for tackling the optimal drug administration problem. The important advantage of MPC compared to other control technologies is that it explicitly takes into account the constraints of the system. In particular, for drug treatments of living organisms, MPC can guarantee satisfaction of the minimum toxic concentration (MTC) constraints. A whole-body physiologically-based pharmacokinetic (PBPK) model serves as the dynamic prediction model of the system after it is formulated as a discrete-time state-space model. Only plasma measurements are assumed to be available online. The rest of the states (drug concentrations in other organs and tissues) are estimated in real time by designing an artificial observer. The complete system (observer and MPC controller) is able to drive the drug concentration to the desired levels at the organs of interest, while satisfying the imposed constraints, even in the presence of modeling errors, disturbances and noise. A case study on a PBPK model with 7 compartments, constraints on 5 tissues and a variable drug concentration set-point illustrates the efficiency of the methodology in drug dosing control applications. The proposed methodology is also tested in an uncertain setting and proves successful in the presence of modeling errors and inaccurate measurements.
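A minimal sketch of the receding-horizon idea described above, on a hypothetical one-compartment model with an upper concentration bound standing in for the MTC constraint; the model, its parameters, and the grid search over doses are illustrative assumptions, not the paper's PBPK model or its optimizer.

```python
import numpy as np

# Illustrative one-compartment model: c+ = a*c + b*u (concentration c, dose u).
a, b_gain = 0.9, 0.5
c_max = 1.0        # stand-in for the minimum-toxic-concentration bound
c_ref = 0.8        # desired therapeutic level
doses = np.linspace(0.0, 1.0, 101)   # admissible dose grid

def mpc_step(c, horizon=5):
    # Return the first dose of the best constant-dose plan over the horizon,
    # discarding plans that violate the toxicity bound: a crude stand-in for
    # the constrained optimisation an MPC solver would perform each step.
    best_u, best_cost = 0.0, float("inf")
    for u in doses:
        cost, ck, feasible = 0.0, c, True
        for _ in range(horizon):
            ck = a * ck + b_gain * u
            if ck > c_max:
                feasible = False
                break
            cost += (ck - c_ref) ** 2
        if feasible and cost < best_cost:
            best_u, best_cost = u, cost
    return best_u

c = 0.0
for _ in range(30):                   # closed-loop (receding-horizon) simulation
    c = a * c + b_gain * mpc_step(c)
```

Because every applied dose is the first step of a plan that keeps the predicted trajectory below c_max, the closed-loop concentration never exceeds the bound while being driven toward the reference.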
Pantelis Sopasakis
pantelis.sopasakis@imtlucca.it
Panagiotis Patrinos
panagiotis.patrinos@imtlucca.it
Haralambos Sarimveis
2014-06-16T13:17:54Z
2014-06-16T13:17:54Z
http://eprints.imtlucca.it/id/eprint/2207
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2207
2014-06-16T13:17:54Z
A uniform framework for modeling nondeterministic, probabilistic, stochastic, or mixed processes and their behavioral equivalences
Labeled transition systems are typically used as behavioral models of concurrent processes. Their labeled transitions define a one-step state-to-state reachability relation. This model can be generalized by modifying the transition relation to associate a state reachability distribution with any pair consisting of a source state and a transition label. The state reachability distribution is a function mapping each possible target state to a value that expresses the degree of one-step reachability of that state. Values are taken from a preordered set equipped with a minimum that denotes unreachability. By selecting suitable preordered sets, the resulting model, called ULTraS from Uniform Labeled Transition System, can be specialized to capture well-known models of fully nondeterministic processes (LTS), fully probabilistic processes (ADTMC), fully stochastic processes (ACTMC), and nondeterministic and probabilistic (MDP) or nondeterministic and stochastic (CTMDP) processes. This uniform treatment of different behavioral models extends to behavioral equivalences. They can be defined on ULTraS by relying on appropriate measure functions that express the degree of reachability of a set of states when performing multi-step computations. It is shown that the specializations of bisimulation, trace, and testing equivalences for the different classes of ULTraS coincide with the behavioral equivalences defined in the literature over traditional models except when nondeterminism and probability/stochasticity coexist; then new equivalences pop up.
Marco Bernardo
Rocco De Nicola
r.denicola@imtlucca.it
Michele Loreti
2014-05-13T09:07:10Z
2014-07-07T10:27:41Z
http://eprints.imtlucca.it/id/eprint/2196
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2196
2014-05-13T09:07:10Z
A multi-level geographical study of Italian political elections from Twitter Data
In this paper we present an analysis of the behavior of Italian Twitter users during national political elections. We monitor the volumes of the tweets related to the leaders of the various political parties and we compare them to the elections results. Furthermore, we study the topics that are associated with the co-occurrence of two politicians in the same tweet. We cannot conclude, from a simple statistical analysis of tweet volume and their time evolution, that it is possible to precisely predict the election outcome (or at least not in our case of study that was characterized by a “too-close-to-call” scenario). On the other hand, we found that the volume of tweets and their change in time provide a very good proxy of the final results. We present this analysis both at a national level and at smaller levels, ranging from the regions composing the country to macro-areas (North, Center, South).
Guido Caldarelli
guido.caldarelli@imtlucca.it
Alessandro Chessa
alessandro.chessa@imtlucca.it
Fabio Pammolli
f.pammolli@imtlucca.it
Gabriele Pompa
gabriele.pompa@imtlucca.it
Michelangelo Puliga
michelangelo.puliga@imtlucca.it
Massimo Riccaboni
massimo.riccaboni@imtlucca.it
Gianni Riotta
2014-03-27T09:30:26Z
2016-04-05T12:01:10Z
http://eprints.imtlucca.it/id/eprint/2182
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2182
2014-03-27T09:30:26Z
COMT Genetic Reduction Produces Sexually Divergent Effects on Cortical Anatomy and Working Memory in Mice and Humans
Genetic variations in catechol-O-methyltransferase (COMT) that modulate cortical dopamine have been associated with pleiotropic behavioral effects in humans and mice. Recent data suggest that some of these effects may vary among sexes. However, the specific brain substrates underlying COMT sexual dimorphisms remain unknown. Here, we report that genetically driven reduction in COMT enzyme activity increased cortical thickness in the prefrontal cortex (PFC) and postero-parieto-temporal cortex of male, but not female adult mice and humans. Dichotomous changes in PFC cytoarchitecture were also observed: reduced COMT increased a measure of neuronal density in males, while reducing it in female mice. Consistent with the neuroanatomical findings, COMT-dependent sex-specific morphological brain changes were paralleled by divergent effects on PFC-dependent working memory in both mice and humans. These findings emphasize a specific sex–gene interaction that can modulate brain morphological substrates with influence on behavioral outcomes in healthy subjects and, potentially, in neuropsychiatric populations.
Sara Sannino
Alessandro Gozzi
Antonio Cerasa
Fabrizio Piras
Diego Scheggia
Francesca Manago
Mario Damiano
Alberto Galbusera
Lucy C. Erickson
Davide De Pietri Tonelli
Angelo Bifone
Sotirios A. Tsaftaris
sotirios.tsaftaris@imtlucca.it
Carlo Caltagirone
Daniel R. Weinberger
Gianfranco Spalletta
Francesco Papaleo
2014-03-10T12:55:15Z
2016-02-12T13:15:43Z
http://eprints.imtlucca.it/id/eprint/2180
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2180
2014-03-10T12:55:15Z
Reasoning (on) service component ensembles in rewriting logic
Programming autonomic systems with a massive number of heterogeneous components poses a number of challenges to language designers and software engineers and requires the integration of computational tools and reasoning tools. We present a general methodology to enrich SCEL, a recently introduced language for programming systems with massive numbers of components, with reasoning capabilities that are guaranteed by external reasoners. We show how the methodology can be instantiated by considering the Maude implementation of SCEL and a specific reasoner, Pirlo, implemented in Maude as well. Moreover, we show how the actual integration can benefit from the existing analytical tools of the Maude framework. In particular, we demonstrate our approach by considering a simple scenario consisting of a group of robots moving in an arena, aiming at minimising the number of collisions.
Lenz Belzner
Rocco De Nicola
r.denicola@imtlucca.it
Andrea Vandin
andrea.vandin@imtlucca.it
Martin Wirsing
2014-03-05T14:26:53Z
2014-03-05T14:26:53Z
http://eprints.imtlucca.it/id/eprint/2178
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2178
2014-03-05T14:26:53Z
A stochastic optimization approach to optimal bidding on dutch ancillary services markets
The aim of this paper is to present a market design for trading capacity reserves (also called Ancillary Services, AS) and to introduce a strategy for the optimal bidding problem in such a scenario. In the deregulated market, the presence of several market participants or Balance Responsible Parties (BRPs) entitled to trade energy, together with the increasing integration of renewable sources and price-elastic loads, shifts the focus to decentralized control and reliable forecast techniques. The main feature of the considered market design is its double-sided nature. In addition to portfolio-based supply bids, BRPs are allowed to submit risk-limiting requests based on predictions of their stochastic production and load. Requesting capacity from the AS market corresponds to giving the market an estimate of the possible deviation from the daily production schedule, named E-Program, resulting from the day-ahead auction and from bilateral contracts. In this way each BRP is responsible for the balanced and safe operation of the electric grid. On the other hand, at each Program Time Unit (PTU) BRPs must also offer their available capacity in the form of bids. In this paper, a bidding strategy for the double-sided market is described, where the risk is minimized and all the constraints are fulfilled. The algorithms devised are tested in a simulation environment and compared to the current practice, where the double-sided auction is not contemplated. Results in terms of expected imbalances and reliability are presented.
Laura Puglia
Alberto Bemporad
alberto.bemporad@imtlucca.it
Andrej Jokic
Ana Virag
2014-03-05T14:18:21Z
2014-03-05T14:26:18Z
http://eprints.imtlucca.it/id/eprint/2177
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2177
2014-03-05T14:18:21Z
Reliability and efficiency for market parties in power systems
In this paper we present control strategies for solving the problems of risk-averse bidding on the electricity markets, focusing on the Day-Ahead and Ancillary Services market, and of optimal real-time power dispatch from the point of view of a market participant, or Balance Responsible Party (BRP). For what concerns the bidding problem, the proposed algorithms are based on two-stage stochastic programming and are aimed at finding the optimal allocation of production between the day-ahead exchange market and the ancillary services market. For the real-time power dispatch problem, we devised a two-level hierarchical control strategy, where the upper-level computes economically optimal power set-points for the generators, and the lower level tracks them while considering constraints and dynamical models of the plant. Simulation results based on realistic data modeling the Dutch transmission network are shown to evaluate the effectiveness of the approach.
Laura Puglia
Panagiotis Patrinos
panagiotis.patrinos@imtlucca.it
Daniele Bernardini
daniele.bernardini@imtlucca.it
Alberto Bemporad
alberto.bemporad@imtlucca.it
2014-03-05T13:21:13Z
2014-03-05T14:06:56Z
http://eprints.imtlucca.it/id/eprint/2173
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2173
2014-03-05T13:21:13Z
An accelerated dual gradient-projection algorithm for embedded linear model predictive control
This paper proposes a dual fast gradient-projection method for solving quadratic programming problems that arise in model predictive control of linear systems subject to general polyhedral constraints on inputs and states. The proposed algorithm is well suited for embedded control applications in that: 1) it is extremely simple and easy to code; 2) the number of iterations to reach a given accuracy in terms of optimality and feasibility of the primal solution can be tightly estimated; and 3) the computational cost per iteration increases only linearly with the prediction horizon.
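A hedged sketch of the dual gradient-projection scheme the abstract describes, on a toy QP: the dual gradient is evaluated through the unconstrained primal minimiser and a Nesterov-type acceleration is applied. The concrete formulas below are a textbook simplification under assumed problem data, not the paper's exact algorithm.

```python
import numpy as np

def dual_gradient_projection(Q, q, A, b, iters=200):
    # Accelerated projected-gradient ascent on the dual of
    #   min 0.5 z'Qz + q'z  s.t.  A z <= b   (Q positive definite).
    Qinv = np.linalg.inv(Q)
    L = np.linalg.norm(A @ Qinv @ A.T, 2)      # Lipschitz constant of the dual gradient
    y = w = np.zeros(len(b))
    t = 1.0
    for _ in range(iters):
        z = -Qinv @ (q + A.T @ w)              # primal minimiser for current multipliers
        y_next = np.maximum(0.0, w + (A @ z - b) / L)    # projected dual gradient step
        t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        w = y_next + ((t - 1.0) / t_next) * (y_next - y)  # Nesterov momentum
        y, t = y_next, t_next
    return -Qinv @ (q + A.T @ y), y

# Tiny instance: min 0.5*x^2 subject to x >= 1 (written as -x <= -1); optimum x = 1.
z_opt, _ = dual_gradient_projection(np.array([[1.0]]), np.array([0.0]),
                                    np.array([[-1.0]]), np.array([-1.0]))
```

Note the appeal for embedded use mirrors the abstract's point 1): the iteration is a handful of matrix-vector products and a componentwise max, with no matrix factorizations inside the loop once Qinv is precomputed.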
Panagiotis Patrinos
panagiotis.patrinos@imtlucca.it
Alberto Bemporad
alberto.bemporad@imtlucca.it
2014-02-27T09:26:50Z
2014-02-27T09:26:50Z
http://eprints.imtlucca.it/id/eprint/2152
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2152
2014-02-27T09:26:50Z
Stochastically timed predicate-based communication primitives for autonomic computing
Predicate-based communication allows components of a system to send messages and requests, in a multicast fashion, to ensembles of components that are determined at execution time through the evaluation of a predicate. Predicate-based communication can greatly simplify the programming of autonomous and adaptive systems. We present a stochastically timed extension of the Software Component Ensemble Language (SCEL) that was introduced in previous work. Such an extension raises a number of non-trivial design and formal semantics issues, with different options as possible solutions at different levels of abstraction. We discuss four of these options. We provide formal semantics and an illustration of the use of the language by modeling a variant of a bike-sharing system, together with some preliminary analysis of the system's performance.
Diego Latella
Michele Loreti
Mieke Massink
Valerio Senni
valerio.senni@imtlucca.it
2014-01-27T09:22:07Z
2014-01-27T09:22:07Z
http://eprints.imtlucca.it/id/eprint/2127
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2127
2014-01-27T09:22:07Z
A Computational Field Framework for Collaborative Task Execution in Volunteer Clouds
The increasing diffusion of cloud technologies offers new opportunities for distributed and collaborative computing. Volunteer clouds are a prominent example, where participants join and leave the platform and collaborate by sharing computational resources. The high complexity, dynamism and unpredictability of such scenarios call for decentralized self-* approaches. We present in this paper a framework for the design and evaluation of self-adaptive collaborative task execution strategies in volunteer clouds. As a byproduct, we propose a novel strategy based on the Ant Colony Optimization paradigm, that we validate through simulation-based statistical analysis over Google workload data.
Stefano Sebastio
stefano.sebastio@imtlucca.it
Michele Amoretti
Alberto Lluch-Lafuente
alberto.lluch@imtlucca.it
2014-01-24T14:01:24Z
2014-12-11T13:27:37Z
http://eprints.imtlucca.it/id/eprint/2125
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2125
2014-01-24T14:01:24Z
Self-healing networks: redundancy and structure
We introduce the concept of self-healing in the field of complex networks modelling; in particular, self-healing capabilities are implemented through distributed communication protocols that exploit redundant links to recover the connectivity of the system. We then analyze the effect of the level of redundancy on the resilience to multiple failures; in particular, we measure the fraction of nodes still served for increasing levels of network damages. Finally, we study the effects of redundancy under different connectivity patterns—from planar grids, to small-world, up to scale-free networks—on healing performances. Small-world topologies show that introducing some long-range connections in planar grids greatly enhances the resilience to multiple failures with performances comparable to the case of the most resilient (and least realistic) scale-free structures. Obvious applications of self-healing are in the important field of infrastructural networks like gas, power, water, oil distribution systems.
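The redundancy experiment can be sketched as follows (an illustrative toy, not the authors' protocol): serve a planar grid from one source, fail random nodes, then activate dormant redundant links and compare the fraction of surviving nodes still served.

```python
from collections import deque
import random

def bfs_served(nodes, edges, source, failed):
    # Fraction of surviving nodes still reachable from the supply source.
    adj = {node: [] for node in nodes}
    for u, v in edges:
        if u not in failed and v not in failed:
            adj[u].append(v)
            adj[v].append(u)
    seen, queue = {source}, deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                queue.append(v)
    alive = {node for node in nodes if node not in failed}
    return len(seen & alive) / len(alive)

random.seed(0)
n = 10                                     # n x n planar grid
nodes = [(i, j) for i in range(n) for j in range(n)]
grid = [((i, j), (i + 1, j)) for i in range(n - 1) for j in range(n)] + \
       [((i, j), (i, j + 1)) for i in range(n) for j in range(n - 1)]
# Redundant diagonal links: normally dormant, activated after failures.
redundant = [((i, j), (i + 1, j + 1)) for i in range(n - 1) for j in range(n - 1)]
failed = set(random.sample([v for v in nodes if v != (0, 0)], 15))
before = bfs_served(nodes, grid, (0, 0), failed)
after = bfs_served(nodes, grid + redundant, (0, 0), failed)
```

Since activating redundant links only adds edges, the served fraction can never decrease; the interesting quantity in the paper's spirit is how much it increases for a given level of redundancy and damage.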
Walter Quattrociocchi
walter.quattrociocchi@imtlucca.it
Guido Caldarelli
guido.caldarelli@imtlucca.it
Antonio Scala
2014-01-24T13:57:52Z
2014-01-24T13:57:52Z
http://eprints.imtlucca.it/id/eprint/2124
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2124
2014-01-24T13:57:52Z
Influence of media on collective debates
The information system (TV, newspapers, blogs, social network platforms) and its inner dynamics play a fundamental role in the evolution of collective debates and thus in public opinion. In this work we address such a process, focusing on how the current inner strategies of the information system (competition, customer satisfaction), once combined with gossip, may affect opinion dynamics. A reinforcement effect is particularly evident in social network platforms where several incompatible cultures coexist (e.g., for or against the existence of chemical trails and reptilians, the new world order conspiracy and so forth). We introduce a computational model of opinion dynamics which accounts for the coexistence of media and gossip as separated but interdependent mechanisms influencing the evolution of opinions. Individuals may change their opinions under the contemporary pressure of the information supplied by the media and the opinions of their social contacts. We stress the effect of the media communication patterns by considering both the simple case where each medium mimics the behavior of the most successful one (in order to maximize the audience) and the case where there is polarization, and thus competition, among media-reported information (in order to preserve and satisfy their segmented audiences). Finally, we first model the information cycle as in the case of traditional mainstream media (i.e., when every medium knows about the format of all the others) and then, to account for the effect of the Internet, on more complex connectivity patterns (as in the case of web-based information). We show that multiple and polarized information sources lead to stable configurations where several distant opinions coexist.
Walter Quattrociocchi
walter.quattrociocchi@imtlucca.it
Guido Caldarelli
guido.caldarelli@imtlucca.it
Antonio Scala
2014-01-24T13:38:10Z
2014-01-24T13:38:10Z
http://eprints.imtlucca.it/id/eprint/2123
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2123
2014-01-24T13:38:10Z
Opinions within media, power and gossip
Despite the increasing diffusion of Internet technology, TV remains the principal medium of communication. People's perceptions, knowledge, beliefs and opinions about matters of fact get (in)formed through the information reported on by the mass media. However, a single source of information (and consensus) could be a potential cause of anomalies in the structure and evolution of a society. Hence, as the information available (and the way it is reported) is fundamental for our perceptions and opinions, the definition of conditions allowing for good information to be disseminated is a pressing challenge. In this paper, starting from a report on the last Italian political campaign in 2008, we derive a socio-cognitive computational model of opinion dynamics where agents get informed by different sources of information. Then, a what-if analysis, performed through simulations on the model's parameter space, is shown. In particular, the scenario implemented includes three main streams of information acquisition, differing in both the contents and the perceived reliability of the messages spread. Agents' internal opinion is updated either by accessing one of the information sources, namely media and experts, or by exchanging information with one another. They are also endowed with cognitive mechanisms to accept, reject or partially consider the acquired information.
Walter Quattrociocchi
walter.quattrociocchi@imtlucca.it
Rosaria Conte
Elena Lodi
2014-01-24T13:34:36Z
2014-01-24T13:38:31Z
http://eprints.imtlucca.it/id/eprint/2122
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2122
2014-01-24T13:34:36Z
Exploiting reputation in distributed virtual environments
The cognitive research on reputation has shown several interesting properties that can improve both the quality of services and the security in distributed electronic environments. In this paper, the impact of reputation on decision-making under scarcity of information will be shown. First, a cognitive theory of reputation will be presented, then a selection of simulation experimental results from different studies will be discussed. Such results concern the benefits of reputation when agents need to find out good sellers in a virtual market-place under uncertainty and informational cheating.
Walter Quattrociocchi
walter.quattrociocchi@imtlucca.it
Rosaria Conte
2014-01-24T13:29:01Z
2014-01-24T13:29:01Z
http://eprints.imtlucca.it/id/eprint/2121
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2121
2014-01-24T13:29:01Z
Rooting opinions in the minds: a cognitive model and a formal account of opinions and their dynamics
The study of opinions, their formation and change, is one of the defining topics addressed by social psychology, but in recent years other disciplines, like computer science and complexity science, have tried to deal with this issue. Despite the flourishing of different models and theories in both fields, several key questions still remain unanswered. Understanding how opinions change and how they are affected by social influence are challenging issues requiring a thorough analysis of opinions per se, but also of the way in which they travel between agents' minds and are modulated by these exchanges. To account for the two-faceted nature of opinions, which are mental entities undergoing complex social processes, we outline a preliminary model in which a cognitive theory of opinions is put forward and paired with a formal description of opinions and of their spreading among minds. Furthermore, investigating social influence also implies the necessity to account for the way in which people change their minds as a consequence of interacting with other people, and the need to explain the higher or lower persistence of such changes.
Francesca Giardini
Walter Quattrociocchi
walter.quattrociocchi@imtlucca.it
Rosaria Conte
2014-01-24T13:21:39Z
2014-01-24T13:21:39Z
http://eprints.imtlucca.it/id/eprint/2120
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2120
2014-01-24T13:21:39Z
Understanding opinions. A cognitive and formal account
The study of opinions, their formation and change, is one of the defining topics addressed by social psychology, but in recent years other disciplines, such as computer science and complexity science, have addressed this challenge. Despite the flourishing of different models and theories in both fields, several key questions still remain unanswered. The aim of this paper is to challenge the current theories on opinion by putting forward a cognitively grounded model where opinions are described as specific mental representations whose main properties are outlined. A comparison with reputation will also be presented.
Francesca Giardini
Walter Quattrociocchi
walter.quattrociocchi@imtlucca.it
Rosaria Conte
2014-01-24T13:16:33Z
2014-01-24T13:16:33Z
http://eprints.imtlucca.it/id/eprint/2119
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2119
2014-01-24T13:16:33Z
Emergence through selection: the evolution of a scientific challenge
One of the most interesting scientific challenges nowadays deals with the analysis and the understanding of complex networks' dynamics and how their processes lead to emergence according to the interactions among their components. In this paper we approach the definition of new methodologies for the visualization and the exploration of the dynamics at play in real dynamic social networks. We present a recently introduced formalism called TVG (for time-varying graphs), which was initially developed to model and analyze highly-dynamic and infrastructure-less communication networks. As an application context, we chose the case of scientific communities by analyzing a portion of the ArXiv repository (ten years of publications in physics). The analysis presented in the paper passes through different data transformations aimed at providing different perspectives on the scientific community and its evolution. On a first level we discuss the dataset by means of both a static and a temporal analysis of the citation and co-authorship networks. Afterward, since scientific communities are at the same time communities of practice (through co-authorship) and a citation represents a deliberative selection pointing out the relevance of a work in its scientific domain, we introduce a new transformation aimed at capturing the interdependencies between collaboration patterns and citation effects, and how they drive the evolution of a goal-oriented system such as Science. Finally, we show how, through the TVG formalism and derived indicators, it is possible to capture the interaction patterns behind the emergence (selection) of a sub-community among others, as a goal-driven preferential attachment toward a set of authors among whom there are some key scientists (Nobel laureates) acting as attractors on the community.
Walter Quattrociocchi
walter.quattrociocchi@imtlucca.it
Frederic Amblard
2014-01-24T13:10:24Z
2014-01-24T13:10:24Z
http://eprints.imtlucca.it/id/eprint/2118
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2118
2014-01-24T13:10:24Z
Time-varying graphs and social network analysis: temporal indicators and metrics
Most instruments (formalisms, concepts, and metrics) for social network analysis fail to capture the networks' dynamics. Typical systems exhibit different scales of dynamics, ranging from the fine-grain dynamics of interactions (which recently led researchers to consider temporal versions of distance, connectivity, and related indicators), to the evolution of network properties over longer periods of time. This paper proposes a general approach to study that evolution for both atemporal and temporal indicators, based respectively on sequences of static graphs and sequences of time-varying graphs that cover successive time-windows. All the concepts and indicators, some of which are new, are expressed using a time-varying graph formalism.
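A small sketch of one temporal indicator of this kind, reachability by journeys over a sequence of snapshot graphs; the representation (a list of undirected edge lists, one per time-window) is an assumption made for the example, not the paper's formalism.

```python
def temporal_reachable(snapshots, source):
    # Nodes reachable from `source` by a journey: hops must use edges of
    # successive snapshots in temporal order (one hop per snapshot at most,
    # staying put is allowed); edges are undirected pairs.
    reached = {source}
    for edges in snapshots:            # snapshots ordered in time
        new = set()
        for u, v in edges:
            if u in reached:
                new.add(v)
            if v in reached:
                new.add(u)
        reached |= new
    return reached

# Edge a-b exists only at time 0 and b-c only at time 1, so c is temporally
# reachable from a, while a is NOT reachable from c: the required edges
# would have to appear in the reverse temporal order.
snaps = [[("a", "b")], [("b", "c")]]
```

The asymmetry in the example is exactly what static indicators miss: in the union graph a and c are mutually connected, yet temporal reachability is not symmetric.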
Nicola Santoro
Walter Quattrociocchi
walter.quattrociocchi@imtlucca.it
Paola Flocchini
Arnaud Casteigts
Frederic Amblard
2014-01-24T11:55:50Z
2015-03-03T09:34:45Z
http://eprints.imtlucca.it/id/eprint/2117
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2117
2014-01-24T11:55:50Z
A formal approach to autonomic systems programming: the SCEL Language
The autonomic computing paradigm has been proposed to cope with the size, complexity and dynamism of contemporary software-intensive systems. The challenge for language designers is to devise appropriate abstractions and linguistic primitives to deal with the large dimension of systems, and with their need to adapt to changes of the working environment and to evolving requirements. We propose a set of programming abstractions that permit representing behaviors, knowledge and aggregations according to specific policies, and support programming context-awareness, self-awareness and adaptation. Based on these abstractions, we define SCEL (Software Component Ensemble Language), a kernel language whose solid semantic foundations also lay the basis for formal reasoning on autonomic systems' behavior. To show the expressiveness and effectiveness of SCEL's design, we present a Java implementation of the proposed abstractions and show how it can be exploited for programming a robotics scenario that is used as a running example for describing the features and potential of our approach.
Rocco De Nicola
r.denicola@imtlucca.it
Michele Loreti
Rosario Pugliese
Francesco Tiezzi
francesco.tiezzi@imtlucca.it
2014-01-24T11:48:20Z
2014-01-24T13:10:57Z
http://eprints.imtlucca.it/id/eprint/2116
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2116
2014-01-24T11:48:20Z
Multicolored dynamos on toroidal meshes
The problem of detecting, in a graph, a minimum-size set of nodes (target set) able to "activate" a prescribed number of vertices is called the target set selection problem (TSS), proposed by Kempe, Kleinberg, and Tardos. In the TSS setting, nodes have two possible states (active or non-active) and the threshold triggering the activation of a node is given by the number of its active neighbors. When dealing with fault tolerance in majority-based systems, the two states denote faulty or non-faulty nodes and the threshold is given by the state of the majority of a node's neighbors; there, the main effort was in determining the distributions of initial faults leading the entire system to a faulty behavior. Such an activation pattern, also known as a dynamic monopoly (or, shortly, dynamo), was introduced by Peleg in 1996. In this paper we extend the TSS problem's setting by representing nodes' states with a "multicolored" set. The extended version of the problem can be described as follows: let G be a simple connected graph where every node is assigned a color from a finite ordered set C = {1, . . ., k} of colors. At each local time step, each node can recolor itself, depending on the local configuration, with the color held by the majority of its neighbors. Given G, we study the initial distributions of colors leading the system to a k-monochromatic configuration on toroidal meshes, focusing on the minimum number of initially k-colored nodes. We find upper and lower bounds on the size of a dynamo, and then characterize special classes of dynamos by means of a new approach based on recoloring patterns.
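The majority-recoloring rule at the heart of this model can be sketched as follows. This is an illustrative toy implementation, not the paper's code: the grid representation, helper names, and the synchronous update are our assumptions, using the 4-neighborhood of a toroidal mesh.

```python
from collections import Counter

def torus_neighbors(i, j, rows, cols):
    """4-neighborhood of cell (i, j) on a toroidal (wrap-around) mesh."""
    return [((i - 1) % rows, j), ((i + 1) % rows, j),
            (i, (j - 1) % cols), (i, (j + 1) % cols)]

def recolor_step(grid):
    """One synchronous round: a node adopts a color iff that color is held
    by a strict majority of its 4 neighbors; otherwise it keeps its own."""
    rows, cols = len(grid), len(grid[0])
    new = [row[:] for row in grid]
    for i in range(rows):
        for j in range(cols):
            counts = Counter(grid[x][y]
                             for x, y in torus_neighbors(i, j, rows, cols))
            color, freq = counts.most_common(1)[0]
            if 2 * freq > 4:          # strict majority of the 4 neighbors
                new[i][j] = color
    return new

# A single off-color node on an otherwise monochromatic torus is
# recolored by its neighbors' majority in one round.
grid = [[1] * 4 for _ in range(4)]
grid[2][2] = 0
grid = recolor_step(grid)
```

A dynamo, in these terms, is an initial coloring from which repeated application of `recolor_step` drives the whole torus to one color.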
Sara Brunetti
Elena Lodi
Walter Quattrociocchi
walter.quattrociocchi@imtlucca.it
2014-01-23T09:29:16Z
2014-01-23T09:29:16Z
http://eprints.imtlucca.it/id/eprint/2114
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2114
2014-01-23T09:29:16Z
Social networks and spatial distribution
In most agent-based social simulation models, the organisation of the agent population matters. The topology in which agents interact, be it spatially structured or a social network, can have an important impact on the results obtained in social simulation. Unfortunately, the necessary data about the target system is often lacking, so one has to use models in order to reproduce realistic spatial distributions of the population and/or realistic social networks among the agents. In this chapter we identify the main issues concerning this point and describe several models of social networks or of spatial distributions that can be integrated into agent-based simulations, going a step beyond the use of a purely random model. In each case we identify several output measures that allow quantifying their impact.
Frederic Amblard
Walter Quattrociocchi
walter.quattrociocchi@imtlucca.it
2014-01-23T09:20:49Z
2014-01-23T09:20:49Z
http://eprints.imtlucca.it/id/eprint/2113
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2113
2014-01-23T09:20:49Z
Minimum weight dynamo and fast opinion spreading
We consider the following multi-level opinion spreading model on networks. Initially, each node gets a weight from the set {0,…,k − 1}, where the weight stands for an individual's conviction about a new idea or product. Then, proceeding in rounds, each node updates its weight according to the weights of its neighbors. We are interested in the initial assignments of weights leading each node to reach the value k − 1 (e.g., unanimous maximum-level acceptance) within a given number of rounds. We determine lower bounds on the sum of the initial weights of the nodes under the irreversible simple majority rule, where a node increases its weight if and only if the majority of its neighbors have a weight higher than its own. Moreover, we provide constructive tight upper bounds for some classes of regular topologies: rings, tori, and cliques.
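The irreversible simple majority rule can be illustrated on a ring as follows. This is a hedged sketch: the abstract only says a node increases its weight iff a majority of its neighbors hold a higher weight, so the increment-by-one update, the cap at k − 1, and all names here are our assumptions.

```python
def irreversible_majority_step(weights, k):
    """One synchronous round on a ring: node i increases its weight
    (here by one, capped at k - 1; the exact increment is an assumption)
    iff a strict majority of its two neighbors hold a higher weight."""
    n = len(weights)
    new = list(weights)
    for i in range(n):
        nbrs = [weights[(i - 1) % n], weights[(i + 1) % n]]
        higher = sum(1 for w in nbrs if w > weights[i])
        if 2 * higher > len(nbrs):    # on a ring: both neighbors
            new[i] = min(weights[i] + 1, k - 1)
    return new

# Alternating 0/2 assignment with k = 3: the zeros are pulled up one
# level per round until the ring reaches unanimous maximum level 2.
w = [0, 2, 0, 2, 0, 2]
w = irreversible_majority_step(w, 3)
w = irreversible_majority_step(w, 3)
```

The irreversibility (weights never decrease) is what makes the sum of initial weights a natural cost measure for such assignments.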
Sara Brunetti
Gennaro Cordasco
Luisa Gargano
Elena Lodi
Walter Quattrociocchi
walter.quattrociocchi@imtlucca.it
2014-01-23T09:16:45Z
2014-01-23T09:16:45Z
http://eprints.imtlucca.it/id/eprint/2112
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2112
2014-01-23T09:16:45Z
Selection in scientific networks
One of the most pressing and interesting current scientific challenges is the analysis and understanding of complex network dynamics. In particular, a major trend is the definition of new frameworks for the analysis, exploration and detection of the dynamics at play in real dynamic networks. In this paper we focus on scientific communities, targeting the social side of science through a descriptive approach that aims at identifying the social determinants behind the emergence and resilience of scientific communities. We consider that scientific communities are at the same time communities of practice (through co-authorship) and representations in scientists' minds, since references to other scientists' works are not merely objective links to relevant work but also reveal social objects that one manipulates and refers to. Our analysis therefore focuses on the coexistence of co-authorship and citation dynamics and on how their interplay affects the shape, strength and stability of scientific systems. Such an analysis, performed through the time-varying graphs (TVG) formalism and derived metrics, concerns the evolution of a scientific network extracted from a portion of the arXiv repository covering a period of 10 years of publications in physics. We detect an example of how the selection process of citations may drive the co-authorship network from a sparser and disconnected structure to a dense and homogeneous one.
Walter Quattrociocchi
walter.quattrociocchi@imtlucca.it
Frederic Amblard
Eugenia Galeota
2014-01-23T09:13:08Z
2014-01-23T09:13:08Z
http://eprints.imtlucca.it/id/eprint/2111
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2111
2014-01-23T09:13:08Z
Time-varying graphs and dynamic networks
The past few years have seen intensive research efforts carried out in some apparently unrelated areas of dynamic systems – delay-tolerant networks, opportunistic-mobility networks and social networks – obtaining closely related insights. Indeed, the concepts discovered in these investigations can be viewed as parts of the same conceptual universe, and the formal models proposed so far to express some specific concepts are the components of a larger formal description of this universe. The main contribution of this paper is to integrate the vast collection of concepts, formalisms and results found in the literature into a unified framework, which we call time-varying graphs (TVGs). Using this framework, it is possible to express directly in the same formalism not only the concepts common to all those different areas, but also those specific to each. Based on this definitional work, employing both existing results and original observations, we present a hierarchical classification of TVGs; each class corresponds to a significant property examined in the distributed computing literature. We then examine how TVGs can be used to study the evolution of network properties, and propose different techniques, depending on whether the indicators for these properties are atemporal (as in the majority of existing studies) or temporal. Finally, we briefly discuss the introduction of randomness in TVGs.
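The core TVG idea can be made concrete with a minimal executable sketch (our illustrative code, not the paper's formalism): each edge carries a presence function, and reachability must follow journeys, i.e. time-respecting paths, which are inherently asymmetric. Here we assume discrete time and a fixed traversal latency of one step.

```python
class TVG:
    """Toy time-varying graph: each undirected edge has a set of discrete
    time steps at which it is present (its presence function)."""

    def __init__(self):
        self.presence = {}          # frozenset({u, v}) -> set of times

    def add_edge(self, u, v, times):
        self.presence[frozenset((u, v))] = set(times)

    def reachable(self, src, dst, horizon):
        """Temporal reachability: is there a journey (time-respecting
        path, latency 1 per hop) from src to dst before `horizon`?"""
        reached = {src: 0}          # node -> earliest arrival time
        changed = True
        while changed:
            changed = False
            for edge, times in self.presence.items():
                u, v = tuple(edge)
                for a, b in ((u, v), (v, u)):
                    if a in reached:
                        t = next((t for t in sorted(times)
                                  if reached[a] <= t < horizon), None)
                        if t is not None and reached.get(b, horizon + 1) > t + 1:
                            reached[b] = t + 1
                            changed = True
        return dst in reached

g = TVG()
g.add_edge("a", "b", {2})
g.add_edge("b", "c", {1})
# Journeys are time-respecting: c can reach a (c-b at time 1, b-a at 2),
# but a cannot reach c, since a-b appears only after b-c has vanished.
```

This asymmetry of journeys is precisely why atemporal indicators (distance, connectivity) need temporal counterparts in the TVG framework.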
Arnaud Casteigts
Paola Flocchini
Walter Quattrociocchi
walter.quattrociocchi@imtlucca.it
Nicola Santoro
2014-01-21T16:03:32Z
2014-01-21T16:03:32Z
http://eprints.imtlucca.it/id/eprint/2110
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2110
2014-01-21T16:03:32Z
Causality in collective filtering
In this paper, we describe a proposal for improving the practice of web-based collective filtering, in particular as regards discussions and the selection of policy issues, based on the intuitive concept of causality. Causality, especially when presented in visual form, is well suited to the task: it is intuitive to understand and to use and, at the same time, rich enough to create a semantic network between the representations of real-world facts. We give some examples of the suggested system workflow and present guidelines for its implementation.
Mario Paolucci
Stefano Picascia
Walter Quattrociocchi
walter.quattrociocchi@imtlucca.it
2014-01-21T15:51:19Z
2014-01-21T15:51:19Z
http://eprints.imtlucca.it/id/eprint/2108
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2108
2014-01-21T15:51:19Z
Opinions manipulation: Media, power and gossip
Despite the increasing diffusion of Internet technology, TV remains the principal medium of communication. People's perceptions, knowledge, beliefs and opinions about matters of fact get (in)formed through the information reported by the media.
However, a single source of information (and consensus) can be a potential cause of anomalies in the structure and evolution of a society.
Hence, since the information available (and the way it is reported) is fundamental for our perceptions and opinions, defining the conditions that allow good information to be disseminated is a pressing challenge. In this paper, starting from a report on the 2008 Italian political campaign, we derive a socio-cognitive computational model of opinion dynamics in which agents get informed by different sources of information. We then present a what-if analysis performed through simulations over the model's parameter space. In particular, the implemented scenario includes three main streams of information acquisition, differing in both the content and the perceived reliability of the messages spread. Agents' internal opinions are updated either by accessing one of the information sources, namely media and experts, or by exchanging information with one another. Agents are also endowed with cognitive mechanisms to accept, reject or partially consider the acquired information.
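The accept/reject/partially-consider mechanism can be caricatured in a few lines. This is a hypothetical sketch only: the [0, 1] opinion scale, the distance thresholds, and the reliability-weighted partial update are our assumptions, not the actual specification of the model.

```python
def update_opinion(opinion, message, reliability, threshold=0.3):
    """Toy cognitive filter on a [0, 1] opinion scale: accept a message
    outright when it is close to the current opinion, weight it by the
    source's perceived reliability when moderately distant, and reject
    it otherwise. All thresholds are illustrative."""
    distance = abs(message - opinion)
    if distance <= threshold:
        return message                                      # accept
    if distance <= 2 * threshold:
        return opinion + reliability * (message - opinion)  # partial
    return opinion                                          # reject
```

Under such a rule, the perceived reliability of media versus experts versus peers directly modulates how far an agent's opinion moves per exposure, which is what the parameter-space what-if analysis explores.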
Walter Quattrociocchi
walter.quattrociocchi@imtlucca.it
Rosaria Conte
Elena Lodi
2014-01-21T15:43:18Z
2014-01-21T15:43:18Z
http://eprints.imtlucca.it/id/eprint/2107
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2107
2014-01-21T15:43:18Z
Taste and trust
Although taste and trust are concepts on clearly distinct ontological levels, they are strongly interrelated in several contexts. For instance, when assessing trust, e.g. through a trust network, it is important to understand the role that personal taste plays in order to correctly interpret potentially value-dependent trust recommendations and conclusions, and thereby provide a sound basis for decision-making. This paper explores the relationship between taste and trust in the analysis of semantic trust networks.
Audun Jøsang
Walter Quattrociocchi
walter.quattrociocchi@imtlucca.it
Dino Karabeg
2014-01-21T15:37:10Z
2014-01-21T15:37:10Z
http://eprints.imtlucca.it/id/eprint/2106
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2106
2014-01-21T15:37:10Z
On the temporal analysis of scientific network evolution
In this paper we approach the definition of new methodologies for the visualization and exploration of social networks and their dynamics. We present a recently introduced formalism called TVG (for time-varying graphs), initially developed to model and analyze highly dynamic and infrastructure-less communication networks, together with TVG-derived metrics. As an application context, we chose the case of scientific communities, analyzing a portion of the arXiv repository (ten years of publications in physics). We discuss the dataset by means of both static and temporal analyses of the citation and co-authorship networks. Afterward, since we consider that scientific communities are at the same time communities of practice (through co-authorship) and that a citation represents a deliberate selection of one work among others, we introduce a new transformation to capture the co-existence of citation effects and collaboration behaviors.
Frederic Amblard
Arnaud Casteigts
Paola Flocchini
Walter Quattrociocchi
walter.quattrociocchi@imtlucca.it
Nicola Santoro
2014-01-20T14:18:47Z
2014-01-20T14:18:56Z
http://eprints.imtlucca.it/id/eprint/2101
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2101
2014-01-20T14:18:47Z
Dynamic monopolies in colored tori
Information diffusion has been modeled as the spread of information within a group through a process of social influence, where the diffusion is driven by the so-called influential network. Such a process, which has been intensively studied under the name of viral marketing, aims to select a good initial set of individuals that will promote a new idea (or message) by spreading the "rumor" within the entire social network through word-of-mouth. Several studies have used the linear threshold model, where the group is represented by a graph, nodes have two possible states (active, non-active), and the threshold triggering the adoption (activation) of a new idea by a node is given by the number of its active neighbors. The problem of detecting in a graph a minimal set of nodes able to activate the entire network is called target set selection (TSS). In this paper we extend TSS by allowing nodes to have more than two colors. The multicolored version of TSS can be described as follows: let G be a torus where every node is assigned a color from a finite set of colors. At each local time step, each node can recolor itself, depending on the local configuration, with the color held by the majority of its neighbors. We study the initial distributions of colors leading the system to a monochromatic configuration of color k, focusing on the minimum number of initially k-colored nodes. We conclude the paper by providing the time complexity of achieving the monochromatic configuration.
Sara Brunetti
Elena Lodi
Walter Quattrociocchi
walter.quattrociocchi@imtlucca.it
2014-01-20T13:46:26Z
2014-01-20T13:46:26Z
http://eprints.imtlucca.it/id/eprint/2100
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2100
2014-01-20T13:46:26Z
Time-Varying graphs and dynamic networks
The past decade has seen intensive research efforts on highly dynamic wireless and mobile networks (variously called delay-tolerant, disruption-tolerant, challenged, opportunistic, etc.) whose essential feature is the possible absence of end-to-end communication routes at any instant. As part of these efforts, a number of important concepts have been identified, based on new meanings of distance and connectivity. The main contribution of this paper is to review and integrate the collection of these concepts, formalisms, and related results found in the literature into a unified coherent framework, called TVG (for time-varying graphs). Besides this definitional work, we connect the various assumptions through a hierarchy of classes of TVGs defined with respect to properties with algorithmic significance in distributed computing. One of these classes coincides with the family of dynamic graphs over which population protocols are defined. We examine the (strict) inclusion hierarchy among the classes. The paper also provides a quick review of recent stochastic models for dynamic networks that aim to enable analytical investigation of their dynamics.
Arnaud Casteigts
Paola Flocchini
Walter Quattrociocchi
walter.quattrociocchi@imtlucca.it
Nicola Santoro
2014-01-20T11:20:51Z
2014-01-20T11:20:51Z
http://eprints.imtlucca.it/id/eprint/2098
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2098
2014-01-20T11:20:51Z
Simulating opinion dynamics in heterogeneous communication systems
Since the information available is fundamental for our perceptions and opinions, we are interested in understanding the conditions that allow good information to be disseminated. This paper explores opinion dynamics by means of multi-agent-based simulations in which agents get informed by different sources of information. The implemented scenario includes three main streams of information acquisition, differing in both the content and the perceived reliability of the messages spread. Agents' internal opinions are updated either by accessing one of the information sources, namely media and experts, or by exchanging information with one another. Agents are also endowed with cognitive mechanisms to accept, reject or partially consider the acquired information. We expect both peer-to-peer communication and reliable information sources to reduce biased perceptions and to inhibit information cheating possibly performed by the media, as stated by agenda-setting theory. In the paper, after briefly presenting the hypotheses and the model, we specify the simulation design and discuss the results with respect to the hypotheses. Some considerations and ideas for future studies conclude the paper.
Walter Quattrociocchi
walter.quattrociocchi@imtlucca.it
Rosaria Conte
Elena Lodi
2014-01-20T10:23:48Z
2014-01-20T10:23:48Z
http://eprints.imtlucca.it/id/eprint/2097
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2097
2014-01-20T10:23:48Z
Dealing with Interaction for Complex Systems Modelling and Prediction
The increasing complexity of problems in system modeling calls for a new epistemological approach, one able to provide a representation that, on the one hand, models complex phenomena with the support of mathematical and computational instruments and, on the other hand, captures the global description of the system. This article presents a methodology for modeling complex dynamical systems that extends the supervised learning paradigm. The theoretical aspects of our methodology are introduced, and two different, heterogeneous case studies are then presented.
Walter Quattrociocchi
walter.quattrociocchi@imtlucca.it
Daniela Latorre
Elena Lodi
Mirco Nanni
2014-01-17T13:49:15Z
2014-01-17T13:49:15Z
http://eprints.imtlucca.it/id/eprint/2095
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2095
2014-01-17T13:49:15Z
Advanced features in Bayesian reputation systems
Bayesian reputation systems are quite flexible and can relatively easily be adapted to different types of applications and environments. The purpose of this paper is to provide a concise overview of the rich set of features that characterizes Bayesian reputation systems. In particular, we demonstrate the importance of base rates during bootstrapping, for handling rating scarcity, and for expressing long-term trends.
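The role of the base rate can be illustrated with the expected-value formula commonly cited for Jøsang's beta reputation model (the parameter names below are ours): with r positive and s negative ratings, non-informative prior weight W, and base rate a, the score is E = (r + W·a) / (r + s + W). With no ratings, the score falls back to the base rate, which is exactly why base rates matter during bootstrapping.

```python
def reputation_score(r, s, base_rate, prior_weight=2.0):
    """Expected value of the beta reputation model with a base rate:
    E = (r + W * a) / (r + s + W), where r/s count positive/negative
    ratings, a is the base rate, and W is the non-informative prior
    weight (conventionally 2)."""
    return (r + prior_weight * base_rate) / (r + s + prior_weight)

# Bootstrapping: with no ratings, the score equals the base rate.
fresh = reputation_score(0, 0, base_rate=0.5)
# Rating scarcity: a single positive rating only nudges the score,
# rather than jumping it to 1.0.
one_vote = reputation_score(1, 0, base_rate=0.5)
```

The same formula also supports long-term trends if older ratings are decayed before being summed into r and s, though the decay scheme is outside this sketch.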
Audun Jøsang
Walter Quattrociocchi
walter.quattrociocchi@imtlucca.it
2014-01-17T12:13:16Z
2014-01-17T12:13:16Z
http://eprints.imtlucca.it/id/eprint/2094
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2094
2014-01-17T12:13:16Z
On the effects of informational cheating on social evaluations: image and reputation through gossip
Multi-agent-based simulation is an emerging scientific approach naturally equipped with instruments for coping with complex systems, in particular socio-cognitive complex systems. In this paper, a simulation-based exploration of the effect of false information on the formation of social evaluations is presented. We perform simulation experiments on the Repage platform, a computational system allowing agents to communicate and acquire both direct (image) and indirect, unchecked (reputation) information. Informational cheating, when the number of liars becomes substantial, is shown to seriously affect the quality achievement obtained through reputation. In the paper, after a brief introduction of the theoretical background, the hypotheses and the market scenario are presented, and the simulation results are discussed with respect to the agents' decision-making process, focusing on uncertainty, the spreading of false information, and the quality of contracts.
Walter Quattrociocchi
walter.quattrociocchi@imtlucca.it
Mario Paolucci
Rosaria Conte
2014-01-17T12:02:57Z
2014-01-17T12:02:57Z
http://eprints.imtlucca.it/id/eprint/2093
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2093
2014-01-17T12:02:57Z
Image and reputation coping differently with massive informational cheating
Multi-agent-based simulation is an emerging scientific approach naturally equipped with instruments for coping with complex systems, in particular socio-cognitive complex systems. In this paper, a simulation-based exploration of the effect of false information on the formation of social evaluations is presented. We perform simulation experiments on the RepAge platform, a computational system allowing agents to communicate and acquire both direct (image) and indirect, unchecked (reputation) information. Informational cheating, when the number of liars becomes substantial, is shown to seriously affect the quality achievement obtained through reputation. In the paper, after a brief introduction of the theoretical background, the hypotheses and the market scenario are presented, and the simulation results are discussed with respect to the agents' decision-making process, focusing on uncertainty, the spreading of false information, and the quality of contracts.
Walter Quattrociocchi
walter.quattrociocchi@imtlucca.it
Mario Paolucci
Rosaria Conte
2014-01-17T11:52:05Z
2014-01-17T11:52:05Z
http://eprints.imtlucca.it/id/eprint/2092
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2092
2014-01-17T11:52:05Z
Reputation and uncertainty: a fairly optimistic society when cheating is total
In an uncertain world, humans or artificial agents, in order to cope with uncertainty, need to communicate and share information to increase the number of their experiences and, consequently, their chances of success. The information shared by agents in a society can be of different natures: accepted evaluations (Image) or reported voices (Reputation). In this work we model a simulated context in which information acquisition is strongly affected by false information; in the experiments presented, performed on the RepAge platform, a computational module for the management of reputational information, agents can lie or report others' lies. We explore the effect of informational cheating under extreme settings, studying, on the one hand, the effect of cheating on quality achievement when the society contains a large number of liars and, on the other hand, the dynamics of belief formation and revision when the informational domain is not reliable. The accuracy of information affects the market in relation to its trustworthiness: if social information is not reliable, communication loses its importance.
Walter Quattrociocchi
walter.quattrociocchi@imtlucca.it
Mario Paolucci
2014-01-17T11:42:46Z
2014-01-17T11:42:46Z
http://eprints.imtlucca.it/id/eprint/2091
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2091
2014-01-17T11:42:46Z
On the effects of reputation in the Internet of Services
The Internet of Services is a term used to describe open global computing infrastructures in which an increasing number of services are made available to users through the Internet. Due to this openness, the quality of the services offered can vary considerably from the users' point of view, and users have to concentrate on choosing the right ones. Choosing a good service thereby depends on a user's direct experience (Image) and on the ability to acquire information (Reputation), which can be used to update the user's own evaluations. In this work, we present a set of simulation runs exploring the effect of reputation on service delivery in a service network where information is asymmetrically distributed.
Stefan König
Tina Balke
Walter Quattrociocchi
walter.quattrociocchi@imtlucca.it
Mario Paolucci
Torsten Eymann
2014-01-15T15:44:12Z
2014-01-15T15:44:12Z
http://eprints.imtlucca.it/id/eprint/2090
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2090
2014-01-15T15:44:12Z
Minimum weight multicolor dynamos
Sara Brunetti
Gennaro Cordasco
Luisa Gargano
Elena Lodi
Walter Quattrociocchi
walter.quattrociocchi@imtlucca.it
2014-01-15T15:28:43Z
2014-01-15T15:28:43Z
http://eprints.imtlucca.it/id/eprint/2089
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2089
2014-01-15T15:28:43Z
Bruce Edmonds, Cesareo Hernandez, Klaus Troitzsch (eds): Social simulation technologies, advances and new discoveries
Walter Quattrociocchi
walter.quattrociocchi@imtlucca.it
2014-01-15T15:23:51Z
2014-01-15T15:44:56Z
http://eprints.imtlucca.it/id/eprint/2088
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2088
2014-01-15T15:23:51Z
Cognition in information evaluation: the effect of reputation in decision making and learning strategies for discovering good sellers in a base market
Walter Quattrociocchi
walter.quattrociocchi@imtlucca.it
Mario Paolucci
2014-01-15T14:25:25Z
2014-01-15T14:25:25Z
http://eprints.imtlucca.it/id/eprint/2087
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2087
2014-01-15T14:25:25Z
Reputation and uncertainty reduction: simulating partner selection
Computer simulations of the impact of reputation in ideal marketplaces are presented. The study concentrates on the link between individual-level choices (partner selection) and system-level performance, measured in terms of the average quality of goods. Partner selection is based on two different information exchange settings. In L1, information conveys only an agent's evaluation of a target (Image). In L2, it also transmits what the informer has heard, but not necessarily checked, about a target (Reputation). The results show that in L2 the system exhibits a higher tolerance for informational uncertainty.
Walter Quattrociocchi
walter.quattrociocchi@imtlucca.it
Mario Paolucci
Rosaria Conte
2014-01-15T09:06:50Z
2014-01-15T09:06:50Z
http://eprints.imtlucca.it/id/eprint/2086
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2086
2014-01-15T09:06:50Z
A Language-Based Approach to Autonomic Computing
SCEL is a new language specifically designed to model autonomic components and their interactions. It brings together various programming abstractions that make it possible to directly represent knowledge, behaviors and aggregations according to specific policies. It also naturally supports programming self-awareness, context-awareness, and adaptation. In this paper, we first present the design principles, syntax and operational semantics of SCEL. Then, we show how a dialect can be defined by appropriately instantiating the features of the language we left open to deal with different application domains, and we use this dialect to model a simple, yet illustrative, example application. Finally, we demonstrate that adaptation can be naturally expressed in SCEL.
Rocco De Nicola
r.denicola@imtlucca.it
GianLuigi Ferrari
Michele Loreti
Rosario Pugliese
2013-12-12T13:23:05Z
2013-12-12T13:23:05Z
http://eprints.imtlucca.it/id/eprint/2058
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2058
2013-12-12T13:23:05Z
Combining Declarative and Procedural Views in the Specification and Analysis of Product Families
We introduce the feature-oriented language FLan as a proof of concept for specifying both declarative aspects of product families, namely constraints on their features, and procedural aspects, namely design processes and run-time behaviour. FLan is inspired by the concurrent constraint programming paradigm. A store of constraints allows one to specify in a declarative way all common constraints on features, including cross-tree constraints as known from feature models. A standard yet rich set of process-algebraic operators allows one to specify in a procedural way the configuration and behaviour of products. There is a close interaction between both views: (i) the execution of a process is constrained by its store to forbid undesired configurations; (ii) a process can query a store to resolve design and behavioural choices; (iii) a process can update the store, for instance to add new features. An implementation in the Maude framework allows for a variety of formal automated analyses of product families specified in FLan, ranging from consistency checking to model checking.
Maurice H. ter Beek
maurice.terbeek@isti.cnr.it
Alberto Lluch-Lafuente
alberto.lluch@imtlucca.it
Marinella Petrocchi
marinella.petrocchi@iit.cnr.it
2013-12-12T13:11:42Z
2016-07-13T10:29:47Z
http://eprints.imtlucca.it/id/eprint/2056
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2056
2013-12-12T13:11:42Z
Modelling and analyzing adaptive self-assembling strategies with Maude
Building adaptive systems with predictable emergent behavior is a challenging task, and it is becoming a critical need. The research community has taken up the challenge by introducing approaches of various kinds: from software architectures to programming paradigms to analysis techniques. We recently proposed a conceptual framework for adaptation centered around the role of control data. In this paper we show that it can be naturally realized in a reflective logical language like Maude by using the Reflective Russian Dolls model. Moreover, we exploit this model to specify, validate and analyse a prominent example of an adaptive system: robot swarms equipped with self-assembly strategies. The analysis exploits the statistical model checker PVeStA.
Roberto Bruni
Andrea Corradini
Fabio Gadducci
Alberto Lluch-Lafuente
alberto.lluch@imtlucca.it
Andrea Vandin
andrea.vandin@imtlucca.it
2013-12-12T13:11:24Z
2013-12-12T13:11:24Z
http://eprints.imtlucca.it/id/eprint/2057
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2057
2013-12-12T13:11:24Z
Constraint Design Rewriting
We propose an algebraic approach to the design and transformation of constraint networks, inspired by Architectural Design Rewriting. The approach can be understood as (i) an extension of ADR with constraints, and (ii) an application of ADR to the design of reconfigurable constraint networks. The main idea is to consider classes of constraint networks as algebras whose operators are used to denote constraint networks with terms. Constraint network transformations such as constraint propagations are specified with rewrite rules exploiting the network’s structure provided by terms.
Roberto Bruni
Alberto Lluch-Lafuente
alberto.lluch@imtlucca.it
Ugo Montanari
2013-11-20T16:11:59Z
2015-11-02T11:30:13Z
http://eprints.imtlucca.it/id/eprint/1839
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1839
2013-11-20T16:11:59Z
Towards a formal approach to mobile cloud computing
Mobile cloud computing (MCC) is an emerging paradigm that transparently provides support for demanding tasks on resource-constrained mobile devices by relying on integration with remote cloud services. Research in this field is starting to tackle the multiple conceptual and technical challenges (e.g., how and when to offload) that are hindering the full realization of MCC. The NAM framework is a general tool for describing networks of hardware and software autonomic entities, providing or consuming services or resources, that can be applied to MCC scenarios. In this paper, we focus on NAM's features related to the key aspects of MCC, in particular those concerning code mobility capabilities and autonomic offloading strategies. Our first contribution is the definition of a restricted set of mobility actions supporting MCC. The second contribution is a formal semantics for those actions, which allows us to better understand the behavior of MCC systems and paves the way for the application of formal reasoning techniques. As an outcome, we also derive a more precise formalization of the core NAM features, which may contribute to further development of that framework and the related middleware.
Michele Amoretti
Alessandro Grazioli
Francesco Zanichelli
Valerio Senni
valerio.senni@imtlucca.it
Francesco Tiezzi
francesco.tiezzi@imtlucca.it
2013-11-20T11:52:55Z
2014-12-11T13:31:29Z
http://eprints.imtlucca.it/id/eprint/1919
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1919
2013-11-20T11:52:55Z
Neuroimaging Evidence of Major Morpho-Anatomical and Functional Abnormalities in the BTBR T+TF/J Mouse Model of Autism
BTBR T+tf/J (BTBR) mice display prominent behavioural deficits analogous to the defining symptoms of autism, a feature that has prompted a widespread use of the model in preclinical autism research. Because neuro-behavioural traits are described with respect to reference populations, multiple investigators have examined and described the behaviour of BTBR mice against that exhibited by C57BL/6J (B6), a mouse line characterised by high sociability and low self-grooming. In an attempt to probe the translational relevance of this comparison for autism research, we used Magnetic Resonance Imaging (MRI) to map in both strains multiple morpho-anatomical and functional neuroimaging readouts that have been extensively used in patient populations. Diffusion tensor tractography confirmed previous reports of callosal agenesis and lack of hippocampal commissure in BTBR mice, and revealed a concomitant rostro-caudal reorganisation of major cortical white matter bundles. Intact inter-hemispheric tracts were found in the anterior commissure, ventro-medial thalamus, and in a strain-specific white matter formation located above the third ventricle. BTBR also exhibited decreased fronto-cortical, occipital and thalamic gray matter volume and widespread reductions in cortical thickness with respect to control B6 mice. Foci of increased gray matter volume and thickness were observed in the medial prefrontal and insular cortex. Mapping of resting-state brain activity using cerebral blood volume weighted fMRI revealed reduced cortico-thalamic function together with foci of increased activity in the hypothalamus and dorsal hippocampus of BTBR mice. Collectively, our results show pronounced functional and structural abnormalities in the brain of BTBR mice with respect to control B6 mice. The large and widespread white and gray matter abnormalities observed do not appear to be representative of the neuroanatomical alterations typically observed in autistic patients.
The presence of reduced fronto-cortical metabolism is of potential translational relevance, as this feature recapitulates previously-reported clinical observations.
Luca Dodero
Mario Damiano
Alberto Galbusera
Angelo Bifone
Sotirios A. Tsaftaris
sotirios.tsaftaris@imtlucca.it
Maria Luisa Scattoni
Alessandro Gozzi
2013-11-20T11:44:18Z
2014-01-29T10:06:41Z
http://eprints.imtlucca.it/id/eprint/1918
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1918
2013-11-20T11:44:18Z
Application-Aware approach to compression and transmission of H.264 Encoded Video for automated and centralized transportation surveillance
In this paper, we present a transportation video coding and wireless transmission system specifically tailored to automated vehicle tracking applications. By taking into account the video characteristics and the lossy nature of the wireless channels, we propose video preprocessing and error control approaches to enhance tracking performance while conserving bandwidth resources and computational power at the transmitter. Compared with current state-of-the-art H.264-based implementations, our system is shown to yield over 80% bitrate savings for comparable tracking accuracy.
Zhaofu Chen
Sotirios A. Tsaftaris
sotirios.tsaftaris@imtlucca.it
Eren Soyak
Aggelos K. Katsaggelos
2013-11-20T11:11:10Z
2013-11-20T11:11:10Z
http://eprints.imtlucca.it/id/eprint/1917
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1917
2013-11-20T11:11:10Z
Explicit Shift-Invariant Dictionary Learning
In this letter we give efficient solutions to the construction of structured dictionaries for sparse representations. We study circulant and Toeplitz structures and give fast algorithms based on least squares solutions. We take advantage of explicit circulant structures and we apply the resulting algorithms to shift-invariant learning scenarios. Synthetic experiments and comparisons with state-of-the-art methods show the superiority of the proposed methods.
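A minimal sketch of the circulant structure this letter exploits (illustrative only, not the authors' algorithm): a shift-invariant dictionary whose columns are the cyclic shifts of a single atom is a circulant matrix, so products with it reduce to FFT-based circular convolution, which is what makes least-squares-style updates fast.

```python
import numpy as np

n = 8
rng = np.random.default_rng(1)
atom = rng.standard_normal(n)

# circulant dictionary: every column is a cyclic shift of the single atom
C = np.column_stack([np.roll(atom, k) for k in range(n)])

x = rng.standard_normal(n)

# dense product vs FFT-based product: circulant matrices are diagonalized
# by the DFT, so C @ x equals the circular convolution of atom and x
dense = C @ x
fast = np.real(np.fft.ifft(np.fft.fft(atom) * np.fft.fft(x)))
print(np.allclose(dense, fast))  # → True
```
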
Cristian Rusu
cristian.rusu@imtlucca.it
Bogdan Dumitrescu
Sotirios A. Tsaftaris
sotirios.tsaftaris@imtlucca.it
2013-11-20T11:04:43Z
2013-11-20T11:04:43Z
http://eprints.imtlucca.it/id/eprint/1916
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1916
2013-11-20T11:04:43Z
Detecting Myocardial Ischemia at Rest With Cardiac Phase-Resolved Blood Oxygen Level-Dependent Cardiovascular Magnetic Resonance
Background: Fast, noninvasive identification of ischemic territories at rest (prior to tissue-specific changes) and assessment of functional status can be valuable in the management of severe coronary artery disease. This study investigated the utility of cardiac phase-resolved Blood-Oxygen-Level-Dependent (CP-BOLD) CMR in detecting myocardial ischemia at rest secondary to severe coronary artery stenosis.
Methods and Results: CP-BOLD, standard-cine, and T2-weighted images were acquired in canines (n=11) at baseline and within 20 minutes of ischemia induction (severe LAD stenosis) at rest. Following 3 hours of ischemia, LAD stenosis was removed and T2-weighted and late-gadolinium-enhancement (LGE) images were acquired. From standard-cine and CP-BOLD images, End-Systolic (ES) and End-Diastolic (ED) myocardium were segmented. Affected and remote sections of the myocardium were identified from post-reperfusion LGE images. S/D, the quotient of mean ES and ED signal intensities (on CP-BOLD and standard-cine), was computed for affected and remote segments at baseline and ischemia. Ejection fraction (EF) and segmental wall-thickening (sWT) were derived from CP-BOLD images at baseline and ischemia. On CP-BOLD images: S/D was greater than 1 (remote and affected territories) at baseline; S/D was diminished only in affected territories during ischemia and the findings were statistically significant (ANOVA, post-hoc p<0.01). The dependence of S/D on ischemia was not observed in standard-cine images. Computer simulations confirmed the experimental findings. ROC analysis showed that S/D identifies affected regions with similar performance (AUC:0.87) as EF (AUC:0.89) and sWT (AUC:0.75).
Conclusions: Preclinical studies and computer simulations showed that CP-BOLD CMR could be useful in detecting myocardial ischemia at rest. Patient studies are needed for clinical translation.
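For readers unfamiliar with the S/D index described above, a minimal sketch of the computation follows. The arrays are synthetic stand-ins for segmented CP-BOLD intensities, and `sd_ratio` is a hypothetical helper, not code from the study:

```python
import numpy as np

def sd_ratio(es: np.ndarray, ed: np.ndarray) -> float:
    """S/D: quotient of mean end-systolic and mean end-diastolic intensity."""
    return float(es.mean() / ed.mean())

# synthetic per-segment intensities: in remote tissue ES signal exceeds ED
# (S/D > 1); in affected tissue the ES signal is diminished (S/D < 1)
rng = np.random.default_rng(0)
remote_es, remote_ed = rng.normal(110, 5, 200), rng.normal(100, 5, 200)
affected_es, affected_ed = rng.normal(95, 5, 200), rng.normal(100, 5, 200)

print(sd_ratio(remote_es, remote_ed))      # remote segment: above 1
print(sd_ratio(affected_es, affected_ed))  # affected segment: below 1
```
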
Sotirios A. Tsaftaris
sotirios.tsaftaris@imtlucca.it
Xiangzhi Zhou
Richard Tang
Debiao Li
Rohan Dharmakumar
2013-11-20T10:52:50Z
2013-11-20T10:52:50Z
http://eprints.imtlucca.it/id/eprint/1915
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1915
2013-11-20T10:52:50Z
Active contour model driven by Globally Signed Region Pressure Force
One of the most popular and widely used global active contour models (ACM) is the region-based ACM, which relies on the assumption of homogeneous intensity in the regions of interest. As a result, more often than not, when images violate this assumption the performance of this method is limited. Thus, handling images that contain foreground objects characterized by multiple intensity classes presents a challenge. In this paper, we propose a novel active contour model based on a new Signed Pressure Force (SPF) function which we term Globally Signed Region Pressure Force (GSRPF). It is designed to incorporate, in a global fashion, the skewness of the intensity distribution of the region of interest (ROI). It can accurately modulate the signs of the pressure force inside and outside the contour, it can handle images with multiple intensity classes in the foreground, it is robust to additive noise, and it offers high efficiency and rapid convergence. The proposed GSRPF is robust to contour initialization and has the ability to stop the curve evolution close to even ill-defined (weak) edges. Our model provides a parameter-free environment to allow minimum user intervention, and offers both local and global segmentation properties. Experimental results on several synthetic and real images demonstrate the high accuracy of the segmentation results in comparison to other methods adopted from the literature.
Mohammed Abdelsamea
mohammed.abdelsamea@imtlucca.it
Sotirios A. Tsaftaris
sotirios.tsaftaris@imtlucca.it
2013-11-13T15:00:42Z
2014-01-28T15:19:46Z
http://eprints.imtlucca.it/id/eprint/1890
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1890
2013-11-13T15:00:42Z
Learning Relatedness Measures for Entity Linking
Entity Linking is the task of detecting, in text documents, relevant mentions of entities of a given knowledge base. To this end, entity-linking algorithms use several signals and features extracted from the input text or from the knowledge base. The most important of such features is entity relatedness. Indeed, we argue that these algorithms benefit from maximizing the relatedness among the relevant entities selected for annotation, since this minimizes disambiguation errors.
The definition of an effective relatedness function is thus a crucial point in any entity-linking algorithm. In this paper we address the problem of learning high-quality entity relatedness functions. First, we formalize the problem of learning entity relatedness as a learning-to-rank problem. We propose a methodology to create reference datasets on the basis of manually annotated data. Finally, we show that our machine-learned entity relatedness function performs better than other relatedness functions previously proposed, and, more importantly, improves the overall performance of different state-of-the-art entity-linking algorithms.
Diego Ceccarelli
diego.ceccarelli@imtlucca.it
Claudio Lucchese
claudio.lucchese@isti.cnr.it
Salvatore Orlando
salvatore.orlando@isti.cnr.it
Raffaele Perego
raffaele.perego@isti.cnr.it
Salvatore Trani
salvatore.trani@isti.cnr.it
2013-11-13T15:00:09Z
2013-11-13T15:00:09Z
http://eprints.imtlucca.it/id/eprint/1889
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1889
2013-11-13T15:00:09Z
Dexter: an open source framework for entity linking
We introduce Dexter, an open source framework for entity linking. The entity linking task aims at identifying all the small text fragments in a document referring to an entity contained in a given knowledge base, e.g., Wikipedia. The annotation is usually organized in three tasks. Given an input document, the first task consists in discovering the fragments that could refer to an entity. Since a mention could refer to multiple entities, it is necessary to perform a disambiguation step, where the correct entity is selected among the candidates. Finally, discovered entities are ranked by some measure of relevance. Many entity linking algorithms have been proposed, but unfortunately only a few authors have released the source code or some APIs. As a result, evaluating the performance of a method on a single subtask, or comparing different techniques, is difficult today. In this work we present a new open framework, called Dexter, which implements some popular algorithms and provides all the tools needed to develop any entity linking technique. We believe that a shared framework is fundamental to perform fair comparisons and improve the state of the art.
Diego Ceccarelli
diego.ceccarelli@imtlucca.it
Claudio Lucchese
claudio.lucchese@isti.cnr.it
Raffaele Perego
raffaele.perego@isti.cnr.it
Salvatore Orlando
salvatore.orlando@isti.cnr.it
Salvatore Trani
salvatore.trani@isti.cnr.it
2013-11-13T14:55:15Z
2013-11-13T14:55:15Z
http://eprints.imtlucca.it/id/eprint/1910
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1910
2013-11-13T14:55:15Z
On Suggesting Entities as Web Search Queries
The Web of Data is growing in popularity and dimension, and named entity exploitation is gaining importance in many research fields. In this paper, we explore the use of entities that can be extracted from a query log to enhance query recommendation. In particular, we extend a state-of-the-art recommendation algorithm to take into account the semantic information associated with submitted queries. Our novel method generates highly related and diversified suggestions that we assess by means of a new evaluation technique. The manually annotated dataset used for performance comparisons has been made available to the research community to favor the repeatability of experiments.
Diego Ceccarelli
diego.ceccarelli@imtlucca.it
Sergiu Gordea
Claudio Lucchese
Franco Maria Nardini
Raffaele Perego
2013-11-13T14:40:45Z
2013-11-13T15:01:45Z
http://eprints.imtlucca.it/id/eprint/1909
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1909
2013-11-13T14:40:45Z
Twitter anticipates bursts of requests for Wikipedia articles
Most of the tweets that users exchange on Twitter make implicit mentions of named-entities, which in turn can be mapped to corresponding Wikipedia articles using proper Entity Linking (EL) techniques. Some of those become trending entities on Twitter due to a long-lasting or a sudden effect on the volume of tweets where they are mentioned. We argue that the set of trending entities discovered from Twitter may help predict the volume of requests for the related Wikipedia articles. To validate this claim, we apply an EL technique to extract trending entities from a large dataset of public tweets. Then, we analyze the time series derived from the hourly trending score (i.e., an index of popularity) of each entity as measured by Twitter and Wikipedia, respectively. Our results reveal that Twitter actually leads Wikipedia by one or more hours.
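The lead/lag claim above can be checked on toy data with a plain cross-correlation scan over candidate lags; the `lead_hours` helper below is a hypothetical illustration on synthetic hourly series, not the analysis used in the paper:

```python
import numpy as np

def lead_hours(twitter: np.ndarray, wikipedia: np.ndarray, max_lag: int = 12) -> int:
    """Lag (hours) maximizing the correlation of the Twitter series shifted
    forward against the Wikipedia series; positive means Twitter leads."""
    t = (twitter - twitter.mean()) / twitter.std()
    w = (wikipedia - wikipedia.mean()) / wikipedia.std()

    def corr(lag: int) -> float:
        if lag > 0:
            return float(t[:-lag] @ w[lag:]) / len(w[lag:])
        if lag < 0:
            return float(t[-lag:] @ w[:lag]) / len(w[:lag])
        return float(t @ w) / len(t)

    return max(range(-max_lag, max_lag + 1), key=corr)

# toy hourly series: the Wikipedia burst trails the Twitter burst by 2 hours
twitter = np.zeros(48); twitter[10:14] = 1.0
wikipedia = np.zeros(48); wikipedia[12:16] = 1.0
print(lead_hours(twitter, wikipedia))  # → 2
```
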
Gabriele Tolomei
gabriele.tolomei@isti.cnr.it
Salvatore Orlando
salvatore.orlando@isti.cnr.it
Diego Ceccarelli
diego.ceccarelli@imtlucca.it
Claudio Lucchese
claudio.lucchese@isti.cnr.it
2013-11-08T10:45:37Z
2014-07-07T10:29:45Z
http://eprints.imtlucca.it/id/eprint/1896
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1896
2013-11-08T10:45:37Z
SapRete: saperi in rete per il recupero delle competenze logico matematiche e scientifiche
Susanna Setzu
Alessandro Chessa
alessandro.chessa@imtlucca.it
Michelangelo Puliga
michelangelo.puliga@imtlucca.it
Maria Polo
Maria Cristina Mereu
2013-11-07T13:17:48Z
2013-11-07T13:17:48Z
http://eprints.imtlucca.it/id/eprint/1888
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1888
2013-11-07T13:17:48Z
Group Recommendation with Automatic Identification of Users Communities
Recommender systems usually propose items to single users. However, in some domains like Mobile IPTV or Satellite Systems it might be impossible to generate a program schedule for each user, because of bandwidth limitations. A few approaches were proposed to generate group recommendations. However, these approaches assume that groups of users already exist, and no recommender system is able to detect intrinsic user communities. This paper describes an algorithm that detects groups of users whose preferences are similar and predicts recommendations for such groups. Groups of different granularities are generated through a modularity-based Community Detection algorithm, making it possible for a content provider to explore the trade-off between the level of personalization of the recommendations and the number of channels. Experimental results show that the quality of group recommendations increases linearly with the number of groups created.
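A rough sketch of the pipeline described above, assuming a hypothetical rating matrix and substituting plain connected components of a thresholded similarity graph for the paper's modularity-based community detector (named here as a simplification):

```python
import numpy as np

# toy user-channel rating matrix (rows: users 0..5); hypothetical values
ratings = np.array([
    [5, 4, 0, 0],
    [4, 5, 1, 0],
    [5, 5, 0, 1],
    [0, 1, 5, 4],
    [1, 0, 4, 5],
    [0, 0, 5, 5],
], dtype=float)

# similarity graph: connect users whose rating vectors have cosine similarity > 0.5
n = len(ratings)
adj = {u: [] for u in range(n)}
for i in range(n):
    for j in range(i + 1, n):
        sim = ratings[i] @ ratings[j] / (np.linalg.norm(ratings[i]) * np.linalg.norm(ratings[j]))
        if sim > 0.5:
            adj[i].append(j)
            adj[j].append(i)

def components(adj):
    """Connected components of the similarity graph (stand-in for
    modularity-based community detection)."""
    seen, comps = set(), []
    for start in adj:
        if start in seen:
            continue
        stack, comp = [start], set()
        while stack:
            u = stack.pop()
            if u not in comp:
                comp.add(u)
                stack.extend(adj[u])
        seen |= comp
        comps.append(sorted(comp))
    return comps

groups = components(adj)

# recommend to each detected group the channel with the highest mean rating
for members in groups:
    print(members, "->", int(ratings[members].mean(axis=0).argmax()))
```
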
Ludovico Boratto
Salvatore Carta
Alessandro Chessa
alessandro.chessa@imtlucca.it
Maurizio Agelli
M. Laura Clemente
2013-11-05T11:14:58Z
2013-11-05T11:14:58Z
http://eprints.imtlucca.it/id/eprint/1856
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1856
2013-11-05T11:14:58Z
Block Orthonormal Overcomplete Dictionary Learning
In the field of sparse representations, the overcomplete dictionary learning problem is of crucial importance and has a growing pool of applications. In this paper we present an iterative dictionary learning algorithm, based on the singular value decomposition, that efficiently constructs unions of orthonormal bases. The key innovations described in this paper, which positively affect the running time of the learning procedures, are the way in which the sparse representations are computed (data are reconstructed in a single orthonormal base, avoiding slow sparse approximation algorithms), the way the bases in the union are used and updated individually, and the way the union itself is expanded by looking at the worst reconstructed data items. The numerical experiments show conclusively the speedup induced by our method when compared to previous works, for the same target representation error.
Cristian Rusu
cristian.rusu@imtlucca.it
Bogdan Dumitrescu
2013-10-28T12:04:50Z
2016-03-18T10:44:41Z
http://eprints.imtlucca.it/id/eprint/1850
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1850
2013-10-28T12:04:50Z
The Autonomic Cloud: A Vision of Voluntary, Peer-2-Peer Cloud Computing
Autonomic computing - that is, the development of software and hardware systems featuring a certain degree of self-awareness and self-adaptability - is a field with many application areas and many technical difficulties. In this paper, we explore the idea of an autonomic cloud in the form of a platform-as-a-service computing infrastructure which, contrary to the usual practice, does not consist of a well-maintained set of reliable high-performance computers, but instead is formed by a loose collection of voluntarily provided heterogeneous nodes which are connected in a peer-to-peer manner. Such an infrastructure must deal with network resilience, data redundancy, and failover mechanisms for executing applications. We discuss possible solutions and methods that help in developing such (and similar) systems. The described approaches are developed in the EU project ASCENS.
Philip Mayer
Annabelle Klarl
Rolf Hennicker
Mariachiara Puviani
Francesco Tiezzi
francesco.tiezzi@imtlucca.it
Rosario Pugliese
Tomáš Bureš
2013-10-28T12:04:06Z
2014-06-16T10:42:48Z
http://eprints.imtlucca.it/id/eprint/1852
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1852
2013-10-28T12:04:06Z
Linguistic abstractions for programming and policing autonomic computing systems
We introduce PSCEL, a new language for developing autonomic software components capable of adapting their behaviour to react to external stimuli and environment changes. The application logic generating the computational behaviour of system components is defined in a procedural style, by the programming constructs, while the adaptation logic is defined in a declarative style, by the policing constructs. The interplay between these two kinds of constructs makes it possible to dynamically produce and enforce adaptation actions. To show PSCEL's practical applicability and effectiveness, we employ it in a Cloud Computing case study.
Andrea Margheri
Rosario Pugliese
Francesco Tiezzi
francesco.tiezzi@imtlucca.it
2013-10-28T12:03:11Z
2015-11-02T09:54:00Z
http://eprints.imtlucca.it/id/eprint/1849
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1849
2013-10-28T12:03:11Z
Developing and Enforcing Policies for Access Control, Resource Usage, and Adaptation: A Practical Approach
Policy-based software architectures are nowadays widely exploited to regulate different aspects of systems’ behavior, such as access control, resource usage, and adaptation. Several languages and technologies have been proposed, e.g., the standard XACML. However, developing real-world systems using such approaches is still a tricky task, as they are complex and error-prone. To overcome such difficulties, we advocate the use of FACPL, a formal policy language inspired by, but simpler than, XACML. FACPL has an intuitive syntax, a mathematical semantics, and easy-to-use software tools supporting policy development and enforcement. We illustrate the potential and effectiveness of our approach through a case study from the Cloud computing domain.
Andrea Margheri
Massimiliano Masi
Rosario Pugliese
Francesco Tiezzi
francesco.tiezzi@imtlucca.it
2013-10-28T11:54:52Z
2014-01-29T14:43:50Z
http://eprints.imtlucca.it/id/eprint/1854
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1854
2013-10-28T11:54:52Z
Special issue on Automated Specification and Verification of Web Systems (Editorial)
Laura Kovács
Rosario Pugliese
Josep Silva
Francesco Tiezzi
francesco.tiezzi@imtlucca.it
2013-10-28T11:52:37Z
2016-04-06T09:39:46Z
http://eprints.imtlucca.it/id/eprint/1851
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1851
2013-10-28T11:52:37Z
Formalising adaptation patterns for autonomic ensembles
Autonomic behavior and self-adaptation in software can be supported by several architectural design patterns. In this paper we illustrate how some of the component- and ensemble-level adaptation patterns proposed in the literature can be rendered in SCEL, a formalism devised for modeling autonomic systems. Specifically, we present a compositional approach: first we show how a single generic component is modelled in SCEL, then we show that each pattern is rendered as the (parallel) composition of the SCEL terms corresponding to the involved components (and, possibly, to their environment). Notably, the SCEL terms corresponding to the patterns differ from each other only in the definition of the predicates identifying the targets of attribute-based communication. This enables autonomic ensembles to dynamically change the pattern in use by simply updating components' predicate definitions, as illustrated by means of a case study from the robotics domain.
Luca Cesari
Rocco De Nicola
r.denicola@imtlucca.it
Rosario Pugliese
Mariachiara Puviani
Francesco Tiezzi
francesco.tiezzi@imtlucca.it
Franco Zambonelli
2013-10-25T08:56:20Z
2014-06-12T10:09:34Z
http://eprints.imtlucca.it/id/eprint/1844
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1844
2013-10-25T08:56:20Z
Constrained Model Predictive Control Based on Reduced-Order Models
The need for reduced-order approximations of dynamical systems emerges naturally in model-based control of very large-scale systems, such as those arising from the discretisation of partial differential equation models. The controller based on the reduced-order model, when in closed loop with the large-scale system, ought to enjoy certain properties, first and foremost stability, but also satisfaction of state constraints and recursive computability of the control law in the case of constrained control. In this paper we introduce a new approach to the design of model predictive controllers that meet the aforementioned requirements while keeping the on-line complexity essentially the same as that of the low-dimensional approximate model.
Pantelis Sopasakis
pantelis.sopasakis@imtlucca.it
Daniele Bernardini
daniele.bernardini@imtlucca.it
Alberto Bemporad
alberto.bemporad@imtlucca.it
2013-10-25T08:37:29Z
2013-10-25T08:37:29Z
http://eprints.imtlucca.it/id/eprint/1846
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1846
2013-10-25T08:37:29Z
JAQPOT RESTful Web Services: An Implementation of the OpenTox Application Programming Interface for On-line Prediction of Toxicological Properties
Pantelis Sopasakis
pantelis.sopasakis@imtlucca.it
Haralambos Sarimveis
2013-10-25T08:36:07Z
2013-10-25T08:36:07Z
http://eprints.imtlucca.it/id/eprint/1845
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1845
2013-10-25T08:36:07Z
ToxOtis: A Java Interface to the OpenTox Predictive Toxicology Network
Pantelis Sopasakis
pantelis.sopasakis@imtlucca.it
Haralambos Sarimveis
2013-10-25T08:30:43Z
2014-06-16T10:17:42Z
http://eprints.imtlucca.it/id/eprint/1841
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1841
2013-10-25T08:30:43Z
MPC for Sampled-Data Linear Systems: guaranteeing continuous-time positive invariance
Model Predictive Controllers (MPC) designed for sampled-data systems can be shown to violate the constraints in continuous time. A reformulation of the initial problem guarantees constraint satisfaction throughout the intersample period. Polytopic inclusions of the continuous trajectory are used in this paper to establish additional constraints, leading to a linearly constrained quadratic optimization problem. Continuous-time asymptotic stability and continuous-time positive invariance are proven for the reformulated problem.
Pantelis Sopasakis
pantelis.sopasakis@imtlucca.it
Panagiotis Patrinos
panagiotis.patrinos@imtlucca.it
Haralambos Sarimveis
2013-10-16T14:28:49Z
2014-03-05T13:29:47Z
http://eprints.imtlucca.it/id/eprint/1835
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1835
2013-10-16T14:28:49Z
Low-complexity piecewise-affine virtual sensors: theory and design
This paper is focused on the theoretical development and the hardware implementation of low-complexity piecewise-affine direct virtual sensors for the estimation of unmeasured variables of interest of nonlinear systems. The direct virtual sensor is designed directly from measured inputs and outputs of the system and does not require a dynamical model. The proposed approach allows one to design estimators which mitigate the effect of the so-called ‘curse of dimensionality’ of simplicial piecewise-affine functions, and can be therefore applied to relatively high-order systems, enjoying convergence and optimality properties. An automatic toolchain is also presented to generate the VHDL code describing the digital circuit implementing the virtual sensor, starting from the set of measured input and output data. The proposed methodology is applied to generate an FPGA implementation of the virtual sensor for the estimation of vehicle lateral velocity, using a hardware-in-the-loop setting.
Matteo Rubagotti
Tomaso Poggi
Albert Oliveri
Carlo A. Pascucci
Alberto Bemporad
alberto.bemporad@imtlucca.it
Marco Storace
2013-10-16T14:17:19Z
2013-10-16T14:17:19Z
http://eprints.imtlucca.it/id/eprint/1834
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1834
2013-10-16T14:17:19Z
(edited by) Coordination Models and Languages
15th International Conference, COORDINATION 2013, Held as Part of the 8th International Federated Conference on Distributed Computing Techniques, DisCoTec 2013, Florence, Italy, June 3-5, 2013. Proceedings
Rocco De Nicola
r.denicola@imtlucca.it
Christine Julien
2013-10-14T08:31:48Z
2013-10-14T08:31:48Z
http://eprints.imtlucca.it/id/eprint/1619
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1619
2013-10-14T08:31:48Z
Controlling Polyvariance for Specialization-based Verification
Program specialization has been proposed as a means of improving constraint-based analysis of infinite state reactive systems. In particular, safety properties can be specified by constraint logic programs encoding (backward or forward) reachability algorithms. These programs are then transformed, before their use for checking safety, by specializing them with respect to the initial states (in the case of backward reachability) or with respect to the unsafe states (in the case of forward reachability). By using the specialized reachability programs, we can considerably increase the number of successful verifications. An important feature of specialization algorithms is so-called polyvariance, that is, the number of specialized variants of the same predicate that are introduced by specialization. Depending on this feature, the specialization time, the size of the specialized program, and the number of successful verifications may vary. We present a specialization framework which is more general than previous proposals and provides control on polyvariance. We demonstrate, through experiments on several infinite state reactive systems, that by a careful choice of the degree of polyvariance we can design specialization-based verification procedures that are both efficient and precise.
Fabio Fioravanti
Alberto Pettorossi
Maurizio Proietti
Valerio Senni
valerio.senni@imtlucca.it
2013-10-14T08:29:30Z
2014-01-21T13:19:07Z
http://eprints.imtlucca.it/id/eprint/1832
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1832
2013-10-14T08:29:30Z
Proving theorems by program transformation
In this paper we present an overview of the unfold/fold proof method, a method for proving theorems about programs, based on program transformation. As a metalanguage for specifying programs and program properties we adopt constraint logic programming (CLP), and we present a set of transformation rules (including the familiar unfolding and folding rules) which preserve the semantics of CLP programs. Then, we show how program transformation strategies can be used, similarly to theorem proving tactics, for guiding the application of the transformation rules and inferring the properties to be proved. We work out three examples: (i) the proof of predicate equivalences, applied to the verification of equality between CCS processes, (ii) the proof of first order formulas via an extension of the quantifier elimination method, and (iii) the proof of temporal properties of infinite state concurrent systems, by using a transformation strategy that performs program specialization.
Fabio Fioravanti
Alberto Pettorossi
Maurizio Proietti
Valerio Senni
valerio.senni@imtlucca.it
2013-10-04T10:56:22Z
2013-10-04T11:09:42Z
http://eprints.imtlucca.it/id/eprint/1828
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1828
2013-10-04T10:56:22Z
New Encoding Schemes with Infofuses
Kyeng Min Park
Choongik Kim
Samuel W. Thomas III
Hyo Jae Yoon
Greg Morrison
greg.morrison@imtlucca.it
L. Mahadevan
George M. Whitesides
2013-10-03T07:57:29Z
2014-01-21T14:28:14Z
http://eprints.imtlucca.it/id/eprint/1812
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1812
2013-10-03T07:57:29Z
Efficient Generation of Test Data Structures using Constraint Logic Programming and Program Transformation
The goal of Bounded-Exhaustive Testing (BET) is the automatic generation of all test cases satisfying a given invariant, within a given size bound. When the test cases have a complex structure, the development of correct and efficient generators becomes a very challenging task. In this paper we use Constraint Logic Programming (CLP) to systematically develop generators of structurally complex test data structures.
We follow a declarative approach which allows us to separate the issue of (i) defining the test data structure in terms of its properties, from that of (ii) efficiently generating data structure instances. This separation helps establish the correctness of the developed test case generators. We rely on a symbolic representation and we take advantage of efficient search strategies provided by CLP systems for generating test instances.
Through a running example taken from the literature on BET, we illustrate our test generation framework and we show that CLP allows us to develop easily understandable and efficient test generators.
Additionally, we propose a program transformation technique whose goal is to make the evaluation of these CLP-based generators much more efficient and we demonstrate its effectiveness on a number of complex test data structures.
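The bounded-exhaustive idea can be sketched outside CLP as well; the following Python enumerator (a hypothetical stand-in for the paper's CLP-based generators, not its running example) produces every binary search tree over bounded key sets, with the BST invariant enforced by construction:

```python
from dataclasses import dataclass
from typing import Iterator, Optional, Tuple

@dataclass
class Node:
    key: int
    left: "Optional[Node]"
    right: "Optional[Node]"

def all_bsts(keys: Tuple[int, ...]) -> Iterator[Optional[Node]]:
    """Enumerate every BST over the given sorted keys: the search-tree
    invariant holds by construction, since each chosen root splits the keys."""
    if not keys:
        yield None
        return
    for i in range(len(keys)):
        for left in all_bsts(keys[:i]):
            for right in all_bsts(keys[i + 1:]):
                yield Node(keys[i], left, right)

# bounded-exhaustive suite: all BSTs with at most 3 nodes
suite = [t for n in range(4) for t in all_bsts(tuple(range(1, n + 1)))]
print(len(suite))  # → 9 (1 empty + 1 + 2 + 5 trees, the Catalan counts)
```
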
Fabio Fioravanti
Maurizio Proietti
Valerio Senni
valerio.senni@imtlucca.it
2013-09-18T09:46:51Z
2015-05-29T11:31:15Z
http://eprints.imtlucca.it/id/eprint/1799
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1799
2013-09-18T09:46:51Z
Estimation of Scribble Placement for Painting Colorization
Image colorization has been a topic of interest since the mid-1970s, and several algorithms have been proposed that, given a grayscale image and color scribbles (hints), produce a colorized image. Recently, this approach has been introduced in the field of art conservation and cultural heritage, where B&W photographs of paintings at previous stages have been colorized. However, the questions of what the minimum number of necessary scribbles is and where they should be placed in an image remain unexplored. Here we address this limitation using an iterative algorithm that provides insights into the relationship between locally and globally important scribbles. Given a color image, we randomly select scribbles and attempt to color the grayscale version of the original. We define a scribble contribution measure based on the reconstruction error. We demonstrate our approach using a widely used colorization algorithm and images from a Picasso painting and the peppers test image. We show that areas isolated by thick brushstrokes or areas with high textural variation are locally important but contribute very little to the overall representation accuracy. We also find that, for the case of Picasso, on average 10% scribble coverage is enough and that flat areas can be represented by a few scribbles. The proposed method can be used verbatim to test any colorization algorithm.
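The contribution measure described above can be sketched as a leave-one-out computation (a toy illustration: the nearest-scribble colorizer and the (y, x, value) scribble format below are assumptions standing in for the real colorization algorithm used in the paper).

```python
import numpy as np

def colorize(gray_shape, scribbles):
    """Toy colorizer: each pixel takes the value of its nearest scribble.
    Stands in for a real scribble-based colorization algorithm.
    scribbles: list of (row, col, value) tuples, at least one required."""
    h, w = gray_shape
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.array([(y, x) for y, x, _ in scribbles])
    vals = np.array([v for _, _, v in scribbles], dtype=float)
    d = (ys[..., None] - pts[:, 0]) ** 2 + (xs[..., None] - pts[:, 1]) ** 2
    return vals[np.argmin(d, axis=-1)]

def scribble_contributions(image, scribbles):
    """Contribution of each scribble = increase in reconstruction error
    (MSE against the known color image) when that scribble is removed."""
    base = np.mean((colorize(image.shape, scribbles) - image) ** 2)
    out = []
    for i in range(len(scribbles)):
        rest = scribbles[:i] + scribbles[i + 1:]
        err = np.mean((colorize(image.shape, rest) - image) ** 2)
        out.append(err - base)
    return out
```

Scribbles whose removal barely changes the reconstruction are locally redundant; globally important ones produce a large error increase.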
Cristian Rusu
cristian.rusu@imtlucca.it
Sotirios A. Tsaftaris
sotirios.tsaftaris@imtlucca.it
2013-09-18T09:31:59Z
2016-07-13T10:47:00Z
http://eprints.imtlucca.it/id/eprint/1798
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1798
2013-09-18T09:31:59Z
MultiVeStA: Statistical Model Checking for Discrete Event Simulators
The modeling, analysis and performance evaluation of large-scale systems are difficult tasks. Due to the size and complexity of the considered systems, an approach typically followed by engineers consists in performing simulations of systems models to obtain statistical estimations of quantitative properties. Similarly, a technique used by computer scientists working on quantitative analysis is Statistical Model Checking (SMC), where rigorous mathematical languages (typically logics) are used to express systems properties of interest. Such properties can then be automatically estimated by tools performing simulations of the model at hand. These property specifications languages, often not popular among engineers, provide a formal, compact and elegant way to express systems properties without needing to hard-code them in the model definition. This paper presents MultiVeStA, a statistical analysis tool which can be easily integrated with existing discrete event simulators, enriching them with efficient distributed statistical analysis and SMC capabilities.
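The statistical core that such a tool automates can be sketched in a few lines: estimate the expected value of a property by repeated simulation, stopping once a target confidence interval is reached. This is a simplified sequential scheme under assumptions (normal approximation, fixed z-quantiles, batch size), not MultiVeStA's actual analysis or API.

```python
import math
import random

def smc_estimate(sim, alpha=0.05, delta=0.1, batch=100, max_runs=100_000):
    """Estimate E[sim()] by simulation, stopping when the two-sided
    confidence interval at level alpha has width at most delta.
    sim: zero-argument function returning one numeric observation."""
    z = {0.05: 1.96, 0.01: 2.576}[alpha]   # assumed supported levels
    n, s, s2 = 0, 0.0, 0.0
    while n < max_runs:
        for _ in range(batch):
            x = sim()
            n += 1
            s += x
            s2 += x * x
        mean = s / n
        var = max(s2 / n - mean * mean, 0.0)   # sample variance (biased)
        half = z * math.sqrt(var / n)          # CI half-width
        if 2 * half <= delta:
            return mean, half
    return s / n, half
```

A statistical model checker wraps this loop around a simulator and a property evaluated on each trace, so the model itself never hard-codes the property.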
Stefano Sebastio
stefano.sebastio@imtlucca.it
Andrea Vandin
andrea.vandin@imtlucca.it
2013-09-17T13:13:13Z
2013-09-17T13:13:13Z
http://eprints.imtlucca.it/id/eprint/1785
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1785
2013-09-17T13:13:13Z
Suboptimal Solutions to Dynamic Optimization Problems: Extended Ritz Method Versus Approximate Dynamic Programming
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Marcello Sanguineti
Riccardo Zoppoli
2013-09-17T13:12:54Z
2013-09-17T13:12:54Z
http://eprints.imtlucca.it/id/eprint/1783
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1783
2013-09-17T13:12:54Z
Exploiting Structural Results in Approximate Dynamic Programming
Efforts to cope with the curse of dimensionality in Dynamic Programming (DP) follow two main directions: i) problem simplification via simpler models and ii) use of smart approximators for the cost-to-go and/or policy functions. Here we focus on ii). We consider: a) structural properties of the cost-to-go and/or policy functions (to restrict approximation to certain function classes and to choose the approximators); b) suitable norms of the approximation error (to estimate how it propagates through the stages). These ingredients can be combined to develop efficient approximate DP algorithms.
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Marcello Sanguineti
Riccardo Zoppoli
2013-09-17T13:12:13Z
2013-09-17T13:12:13Z
http://eprints.imtlucca.it/id/eprint/1792
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1792
2013-09-17T13:12:13Z
Value and Policy Function Approximations in Infinite-Horizon Optimization Problems
Suboptimal solutions to infinite-horizon dynamic optimization problems with continuous state are considered. An underlying dynamical system determining the state transition between each stage and the next one is modelled via the constraints (x_t, x_{t+1}) ∈ D, t = 0, 1, …, where X is the set to which the state vector belongs and D ⊆ X × X is a correspondence. An error analysis is performed for two cases: approximation of the value function and approximation of the optimal policy function. Structural properties of the dynamic optimization problems are derived, allowing one to restrict a priori the approximation to families of functions characterized by certain smoothness properties. The two approximation approaches are compared and the respective pros and cons are highlighted.
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Marcello Sanguineti
2013-09-17T13:11:58Z
2013-09-17T13:11:58Z
http://eprints.imtlucca.it/id/eprint/1796
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1796
2013-09-17T13:11:58Z
The Weight-Decay Technique in Learning from Data: An Optimization Point of View
The technique known as "weight decay" in the literature on learning from data is investigated using tools from regularization theory. Weight-decay regularization is compared with Tikhonov's regularization of the learning problem and with a mixed regularized learning technique. The accuracies of suboptimal solutions to weight-decay learning are estimated for connectionistic models with a priori fixed numbers of computational units.
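For a model that is linear in its parameters, the weight-decay objective has a closed-form minimizer, which makes the connection to Tikhonov regularization concrete. A minimal sketch (the connectionistic models in the paper are nonlinear; this linear case only illustrates the penalty term):

```python
import numpy as np

def weight_decay_fit(X, y, gamma):
    """Minimize ||X w - y||^2 + gamma * ||w||^2 for a linear model.
    Closed-form Tikhonov-type solution: w = (X'X + gamma I)^{-1} X'y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + gamma * np.eye(d), X.T @ y)
```

As gamma grows, the penalty shrinks the parameter norm, trading data fit for smaller weights, which is the trade-off the accuracy estimates quantify.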
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Marcello Sanguineti
2013-09-17T13:11:35Z
2013-09-17T13:11:35Z
http://eprints.imtlucca.it/id/eprint/1786
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1786
2013-09-17T13:11:35Z
Suboptimal Solutions to Network Team Optimization Problems
Smoothness of the solutions to network team optimization problems with statistical information structure is investigated. Suboptimal solutions expressed as linear combinations of elements from sets of basis functions containing adjustable parameters are considered. Estimates of their accuracy are derived for basis functions represented by sinusoids with variable frequencies and phases and Gaussians with variable centers and widths.
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Marcello Sanguineti
2013-09-17T13:11:21Z
2013-09-17T13:11:21Z
http://eprints.imtlucca.it/id/eprint/1782
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1782
2013-09-17T13:11:21Z
Structural Properties of Stochastic Dynamic Concave Optimization Problems and Approximations of the Value and Optimal Policy Functions
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Marcello Sanguineti
2013-09-17T13:11:08Z
2013-09-17T13:11:08Z
http://eprints.imtlucca.it/id/eprint/1777
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1777
2013-09-17T13:11:08Z
Smoothness and Approximation of Optimal Decision Strategies in Team Optimization Problems
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Marcello Sanguineti
2013-09-17T13:10:48Z
2013-09-17T13:10:48Z
http://eprints.imtlucca.it/id/eprint/1776
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1776
2013-09-17T13:10:48Z
Regularization and Suboptimal Solutions in Learning from Data
Supervised learning from data is investigated from an optimization viewpoint. Ill-posedness issues of the learning problem are discussed and its Tikhonov, Ivanov, Phillips, and Miller regularizations are analyzed. Theoretical features of the optimization problems associated with these regularization techniques and their use in learning tasks are considered. Weight-decay learning is investigated, too. Exploiting properties of the functionals to be minimized in the various regularized problems, estimates are derived on the accuracy of suboptimal solutions formed by linear combinations of n-tuples of computational units, for values of n smaller than the number of data.
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Marcello Sanguineti
2013-09-17T13:10:30Z
2013-09-17T13:10:30Z
http://eprints.imtlucca.it/id/eprint/1770
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1770
2013-09-17T13:10:30Z
Lipschitz Continuity of the Solutions to Team Optimization Problems Revisited
Sufficient conditions for the existence and Lipschitz continuity of optimal strategies for static team optimization problems are studied. Revised statements and proofs of some results in "Kim K.H., Roush F.W., Team Theory. Ellis Horwood Limited Publishers, Chichester, UK, 1987" are presented.
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Marcello Sanguineti
2013-09-17T13:10:16Z
2013-09-17T13:10:16Z
http://eprints.imtlucca.it/id/eprint/1768
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1768
2013-09-17T13:10:16Z
Information Complexity of Functional Optimization Problems and Their Approximation Schemes
Functional optimization is investigated using tools from information-based complexity. In such optimization problems, a functional has to be minimized with respect to admissible solutions belonging to an infinite-dimensional space of functions. This context models tasks arising in optimal control, systems identification, machine learning, time-series analysis, etc. The solution via variable-basis approximation schemes, which provide a sequence of nonlinear programming problems approximating the original functional one, is considered. Also for such problems, the information complexity is estimated.
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Marcello Sanguineti
2013-09-17T13:09:59Z
2013-09-17T13:09:59Z
http://eprints.imtlucca.it/id/eprint/1761
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1761
2013-09-17T13:09:59Z
Error Bounds for Suboptimal Solutions to Kernel Principal Component Analysis
Suboptimal solutions to kernel principal component analysis are considered. Such solutions take on the form of linear combinations of all n-tuples of kernel functions centered on the data, where n is a positive integer smaller than the cardinality m of the data sample. Their accuracy in approximating the optimal solution, obtained in general for n = m, is estimated. The analysis made in Gnecco and Sanguineti (Comput Optim Appl 42:265–287, 2009) is extended. The estimates derived therein for the approximation of the first principal axis are improved and extensions to the successive principal axes are derived.
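The kind of suboptimal solution the abstract analyzes, a first principal axis restricted to the span of n < m kernel functions, can be sketched as a generalized Rayleigh-quotient maximization (an illustrative computation under assumptions; the paper derives error bounds rather than an algorithm).

```python
import numpy as np

def centered_gram(X, kernel):
    """Gram matrix of the kernel on X, centered in feature space."""
    K = np.array([[kernel(a, b) for b in X] for a in X])
    return K - K.mean(0) - K.mean(1)[:, None] + K.mean()

def captured_variance(K, idx):
    """Largest variance of the data projections onto a direction lying in
    span{ k(x_i, .) : i in idx }.  With idx = all m points this is exact
    kernel PCA; with n < m points it is the suboptimal solution whose
    accuracy the paper bounds."""
    Kn = K[np.ix_(idx, idx)]      # Gram matrix of the chosen centers
    Km = K[idx]                   # centers vs. all data
    A = Km @ Km.T / len(K)
    # maximize the generalized Rayleigh quotient c'Ac / c'Kn c:
    # its maximum is the top eigenvalue of pinv(Kn) A
    vals = np.linalg.eigvals(np.linalg.pinv(Kn) @ A)
    return float(np.max(vals.real))
```

By construction the variance captured with a subset of centers never exceeds the optimal (full-data) value; the gap is what the error bounds control.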
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Marcello Sanguineti
2013-09-17T13:09:43Z
2013-09-17T13:09:43Z
http://eprints.imtlucca.it/id/eprint/1759
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1759
2013-09-17T13:09:43Z
Editorial for the special issue: “Mathematical problems in engineering, aerospace and sciences”
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Marcello Sanguineti
2013-09-17T13:09:23Z
2013-09-17T13:09:23Z
http://eprints.imtlucca.it/id/eprint/1788
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1788
2013-09-17T13:09:23Z
Suboptimal Solutions to Team Optimization Problems with Statistical Information Structure
Network team optimization problems with statistical information structure are investigated. A team of n Decision Makers (DMs), each having at its disposal some information (obtained, e.g., by measurement devices or by exit polls) and various possible decisions, coordinate their efforts to achieve a common goal, expressed via a team utility function. Decisions are generated by the DMs via strategies, on the basis of the available information y1, …, yn that each of them has and in the presence of uncertainties in the "state of the external world" x (which the DMs do not control). Such uncertainties are modeled via a joint probability density p(x, y1, …, yn). For these problems, optimal solutions in closed form can be derived only in special cases, so a methodology of approximate solution is proposed. Suboptimal solutions are searched for, taking the form of linear combinations of elements from sets of basis functions, possibly with adjustable "inner" parameters. Upper bounds on the accuracy of such suboptimal solutions are obtained. The estimates are expressed in dependence on the number of trigonometric and Gaussian basis functions. The trade-off between the level of decentralization and the smoothness assumptions on the utility function and the probability density, required to derive the upper bounds, is investigated. Numerical results are presented for an instance of the network team optimization problem under study, which models optimal production in a multidivisional firm.
Mauro Gaggero
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Marcello Sanguineti
2013-09-17T13:07:38Z
2013-09-17T13:07:38Z
http://eprints.imtlucca.it/id/eprint/1773
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1773
2013-09-17T13:07:38Z
On Call Admission Control with Nonlinearly Constrained Feasibility Regions
A simple criterion is proposed to improve suboptimal coordinate-convex policies in Call Admission Control problems with nonlinearly constrained feasibility regions. To test the criterion, numerical simulation results are given.
Finally, some structural properties of the optimal coordinate-convex policies are proven, which do not depend on a complete knowledge of the nonlinear boundary of the feasibility region.
Marco Cello
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Mario Marchese
Marcello Sanguineti
2013-09-17T13:07:22Z
2013-09-17T13:07:22Z
http://eprints.imtlucca.it/id/eprint/1781
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1781
2013-09-17T13:07:22Z
Structural Properties of Optimal Coordinate-Convex Policies for CAC with Nonlinearly-Constrained Feasibility Regions
Necessary optimality conditions for Call Admission Control (CAC) problems with nonlinearly-constrained feasibility regions and two classes of users are derived. The policies are restricted to the class of coordinate-convex policies. Two kinds of structural properties of the optimal policies and their robustness with respect to changes of the feasibility region are investigated: 1) general properties not depending on the revenue ratio associated with the two classes of users and 2) more specific properties depending on such a ratio. The results allow one to narrow the search for the optimal policies to a suitable subset of the set of coordinate-convex policies.
Mario Marchese
Marco Cello
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Marcello Sanguineti
2013-09-17T13:06:59Z
2013-09-17T13:06:59Z
http://eprints.imtlucca.it/id/eprint/1767
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1767
2013-09-17T13:06:59Z
Functional Optimization in OR Problems with Very Large Numbers of Variables
Functional optimization, or "infinite-dimensional programming", investigates the minimization (or maximization) of functionals with respect to admissible solutions belonging to infinite-dimensional spaces of functions. In OR applications, such functions may express, e.g.:
- releasing policies in water-resources management;
- exploration strategies in stochastic graphs;
- routing strategies in telecommunication networks;
- input/output mappings in learning from data, etc.
Infinite dimension makes inapplicable many tools used in mathematical programming, and variational methods provide closed-form solutions only in particular cases. Suboptimal solutions can be sought via "linear approximation schemes", i.e., linear combinations of fixed basis functions (e.g., polynomial expansions): the functional problem is reduced to optimization of the coefficients of the linear combinations ("Ritz method"). Most often, admissible solutions are functions dependent on many variables, related, e.g., to:
- reservoirs in water-resources management;
- nodes of a communication network;
- items in inventory problems;
- freeway sections in traffic management.
Unfortunately, linear schemes may be computationally inefficient because of the "curse of dimensionality": the number of basis functions necessary to obtain a desired accuracy may grow very fast with the number of variables. This motivates the "Extended Ritz Method" (ERIM), based on nonlinear approximation schemes formed by linear combinations of computational units containing "inner" parameters, which make the schemes nonlinear and are optimized (together with the coefficients of the combinations) via nonlinear programming algorithms. Experimental results show that this approach achieves surprisingly good performance. We present recent theoretical results that give insight into the possibility of coping with the curse of dimensionality in functional optimization via the ERIM, when admissible solutions depend on very large numbers of variables.
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Marcello Sanguineti
Riccardo Zoppoli
2013-09-17T13:06:33Z
2013-09-17T13:06:33Z
http://eprints.imtlucca.it/id/eprint/1789
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1789
2013-09-17T13:06:33Z
Team Optimization Problems with Lipschitz Continuous Strategies
Sufficient conditions for the existence and Lipschitz continuity of optimal strategies for static team optimization problems are studied. Revised statements and proofs of some results appeared in the literature are presented. Their extensions are discussed. As an example of application, optimal production in a multidivisional firm is considered.
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Marcello Sanguineti
2013-09-17T13:06:11Z
2013-09-17T13:06:11Z
http://eprints.imtlucca.it/id/eprint/1778
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1778
2013-09-17T13:06:11Z
Some Comparisons of Complexity in Dictionary-Based and Linear Computational Models
Neural networks provide a more flexible approximation of functions than traditional linear regression. In the latter, one can only adjust the coefficients in linear combinations of fixed sets of functions, such as orthogonal polynomials or Hermite functions, while for neural networks, one may also adjust the parameters of the functions which are being combined. However, some useful properties of linear approximators (such as uniqueness, homogeneity, and continuity of best approximation operators) are not satisfied by neural networks. Moreover, optimization of parameters in neural networks becomes more difficult than in linear regression. Experimental results suggest that these drawbacks of neural networks are offset by substantially lower model complexity, allowing accuracy of approximation even in high-dimensional cases. We give some theoretical results comparing requirements on model complexity for two types of approximators, the traditional linear ones and so called variable-basis types, which include neural networks, radial, and kernel models. We compare upper bounds on worst-case errors in variable-basis approximation with lower bounds on such errors for any linear approximator. Using methods from nonlinear approximation and integral representations tailored to computational units, we describe some cases where neural networks outperform any linear approximator.
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Věra Kůrková
Marcello Sanguineti
2013-09-17T13:05:48Z
2013-09-17T13:05:48Z
http://eprints.imtlucca.it/id/eprint/1766
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1766
2013-09-17T13:05:48Z
Functional Optimization by Variable-Basis Approximation Schemes
This is a summary of the author’s PhD thesis, supervised by Marcello Sanguineti and defended on April 2, 2009 at Università degli Studi di Genova. The thesis is written in English and a copy is available from the author upon request. Functional optimization problems arising in Operations Research are investigated. In such problems, a cost functional Φ has to be minimized over an admissible set S of d-variable functions. As, in general, closed-form solutions cannot be derived, suboptimal solutions are searched for, having the form of variable-basis functions, i.e., elements of the set span_n G of linear combinations of at most n elements from a set G of computational units. Upper bounds on inf_{f ∈ S ∩ span_n G} Φ(f) − inf_{f ∈ S} Φ(f) are obtained. Conditions are derived under which the estimates do not exhibit the so-called “curse of dimensionality” in the number n of computational units when the number d of variables grows. The problems considered include dynamic optimization, team optimization, and supervised learning from data.
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
2013-09-17T13:05:26Z
2013-09-17T13:05:26Z
http://eprints.imtlucca.it/id/eprint/1779
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1779
2013-09-17T13:05:26Z
A Stochastic Knapsack Problem with Nonlinear Capacity Constraint
There exist various generalizations and stochastic variants of the NP-hard 0/1 knapsack problem [1,2]. The following model is considered here. A knapsack of capacity C is given, together with K classes of objects. The stochastic nature comes into play since, in contrast to the classical knapsack, the objects belonging to each class become available randomly. The inter-arrival times are exponentially distributed with means depending on the class and on the state of the knapsack. Each object has a sojourn time independent of the sojourn times of the other objects and described by a class-dependent distribution. The other difference with respect to the classical model is the following generalization. For k = 1, …, K, let nk be the number of objects of class k that are currently inside the knapsack; then, the portion of the knapsack occupied by them is given by a nonlinear function bk(nk). When included in the knapsack, an object from class k generates revenue at a positive rate rk. The objects can be placed into the knapsack as long as the sum of their sizes does not exceed the capacity C. The problem consists in finding a policy that maximizes the average revenue, by accepting or rejecting the arriving objects depending on the current state of the knapsack. A-priori knowledge of structural properties of the (unknown) optimal policies is useful to find satisfactorily accurate suboptimal policies. The family of coordinate-convex policies is considered here. In this context, structural properties of the optimal policies are investigated. New insights into a criterion proposed in [3] to improve coordinate-convex policies are discussed, and the greedy algorithm presented in [5] is further developed. Applications in Call Admission Control (CAC) for telecommunication networks are discussed. In this case, the objects are requests for connections coming from K different classes of users, each with an associated bandwidth requirement and a distribution of its duration.
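The coordinate-convexity restriction on policies can be made concrete in a few lines (an illustrative two-class sketch; the occupancy functions b1, b2 and the bounds below are assumptions, not the paper's instances).

```python
from itertools import product

def is_coordinate_convex(region):
    """A region of admissible states (n1, n2) is coordinate-convex iff
    removing one object of either class from an admissible state yields
    another admissible state (K = 2 classes for illustration)."""
    region = set(region)
    for n1, n2 in region:
        if n1 > 0 and (n1 - 1, n2) not in region:
            return False
        if n2 > 0 and (n1, n2 - 1) not in region:
            return False
    return True

def feasible_states(C, b1, b2, n_max=20):
    """All states whose nonlinear occupancy b1(n1) + b2(n2) fits in C."""
    return {(n1, n2)
            for n1, n2 in product(range(n_max + 1), repeat=2)
            if b1(n1) + b2(n2) <= C}
```

Since the occupancy functions are nondecreasing, the feasibility region itself is coordinate-convex; a candidate policy's acceptance region must be a coordinate-convex subset of it.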
Marco Cello
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Mario Marchese
Marcello Sanguineti
2013-09-17T13:05:08Z
2013-09-17T13:05:08Z
http://eprints.imtlucca.it/id/eprint/1772
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1772
2013-09-17T13:05:08Z
Optimality Conditions For A Nonlinear Stochastic Knapsack Problem
We investigate a nonlinear stochastic knapsack problem with application in Call Admission Control (CAC) with two classes of users, preliminarily studied in [1,2]. Among the possible stochastic nonlinear generalizations [3,4] of the NP-hard 0/1 knapsack problem, we consider the following model. One has a knapsack of capacity C and K classes of objects. The objects belonging to each class become available randomly. The inter-arrival times are exponentially distributed with means depending on the class and on the state of the knapsack. The sojourn time of each object is independent of the others and described by a class-dependent distribution. When included in the knapsack, an object from class k generates revenue at a positive rate rk. The occupied portion of the knapsack is given by a nonlinear function bk(nk), where, for k = 1, …, K, nk is the number of objects of class k currently inside. The objects can be inserted as long as the sum of their sizes does not exceed the capacity C. The stochastic nonlinear 0/1-programming problem consists in deciding either acceptance or rejection of the arriving objects, depending on the current state of the knapsack, so as to maximize the average revenue. The functions used to generate such decisions are called "policies". We focus on coordinate-convex policies. We provide an algorithm which generates all coordinate-convex policies satisfying three different necessary conditions for optimality. Then we derive exact expressions for the cardinalities of these three sets of policies. Finally, we give conditions under which these cardinalities are significantly smaller than the cardinality of the set of all coordinate-convex policies.
Marco Cello
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Mario Marchese
Marcello Sanguineti
2013-09-17T13:04:27Z
2013-09-17T13:04:27Z
http://eprints.imtlucca.it/id/eprint/1793
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1793
2013-09-17T13:04:27Z
An Application to Two-Hop Forwarding of a Model of Buffer Occupancy in ICNs
An application of the model proposed in Cello et al., "A Model of Buffer Occupancy in ICNs", IEEE Communications Letters, to appear, is investigated. Such a model provides a relationship in the z-domain between the discrete probability densities of the buffer state occupancies of the nodes in the network and the sizes of the arriving bulks. Under a class of two-hop forwarding strategies, expressions are obtained for the average buffer occupancy and its standard deviation.
Marco Cello
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Mario Marchese
Marcello Sanguineti
2013-09-17T13:03:47Z
2015-02-18T11:55:48Z
http://eprints.imtlucca.it/id/eprint/1762
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1762
2013-09-17T13:03:47Z
Evaluation of Individual Contributions in a Group Estimate of the Position of a Moving Point of Common Interest
This paper presents a feasibility study on the application of cooperative game theory tools, such as the Shapley value, to quantitatively measure non-verbal interaction inside a group of people. More precisely, a method is tested to evaluate individual contributions in the estimate of the position of a mobile point of interest. Various social aspects are evaluated, like synchronization between people and the group's topology, together with computational aspects of the evaluation of the Shapley value, like efficiency and accuracy. The results show that the main factors that influence the contribution of each individual to the team's utility include the displacement of the players and the type of movement performed by the point of interest. Possible applications in a musical context are also described.
Davide Punta
Giulio Puri
Fabio Tollini
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
2013-09-17T13:03:19Z
2013-09-17T13:03:19Z
http://eprints.imtlucca.it/id/eprint/1760
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1760
2013-09-17T13:03:19Z
Editorial: A Successful Change From TNN to TNNLS and a Very Successful Year
This issue marks the first anniversary issue of IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS after it changed its name from IEEE TRANSACTIONS ON NEURAL NETWORKS. I am happy to report that we had a great year! The number of new submissions in a year exceeded 1,000 for the first time in the history of TNN/TNNLS. IEEE TNN had a very successful development for 22 years from 1990 to 2011, and we have good reasons to believe that IEEE TNNLS will have many more years of successful growth.
Derong Liu
Charles Anderson
Ahmad Taher Azar
Giorgio Battistelli
Eduardo Bayro-Corrochano
Cristiano Cervellera
David Elizondo
Maurizio Filippone
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Xiaolin Hu
Tingwen Huang
Weifeng Liu
Wenlian Lu
Ana Maria Madureira
Igor Skrjanc
Thomas Villmann
Jonathan Wu
Shengli Xie
Dong Xu
2013-09-17T13:02:44Z
2014-03-05T15:34:07Z
http://eprints.imtlucca.it/id/eprint/1794
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1794
2013-09-17T13:02:44Z
A Theoretical Framework for Supervised Learning from Regions
Supervised learning is investigated in the case where the data are represented not only by labeled points but also by labeled regions of the input space. In the limit case, such regions degenerate to single points and the proposed approach reduces to the classical learning context. The adopted framework entails the minimization of a functional obtained by introducing a loss function that involves such regions. An additive regularization term is expressed via differential operators that model the smoothness properties of the desired input/output relationship. Representer theorems are given, proving that the optimization problem associated with learning from labeled regions has a unique solution, which takes on the form of a linear combination of kernel functions determined by the differential operators together with the regions themselves. As a relevant situation, the case of regions given by multi-dimensional intervals (i.e., "boxes") is investigated, which models prior knowledge expressed by logical propositions.
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Marco Gori
Stefano Melacci
Marcello Sanguineti
2013-09-17T13:01:44Z
2013-09-17T13:01:44Z
http://eprints.imtlucca.it/id/eprint/1790
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1790
2013-09-17T13:01:44Z
Towards Automated Analysis of Joint Music Performance in the Orchestra
Preliminary results from a study of expressivity and of non-verbal social signals in small groups of users are presented. Music is selected as the experimental test-bed since it is a clear example of an interactive and social activity, where affective non-verbal communication plays a fundamental role. In this experiment the orchestra is adopted as a social group characterized by a clear leader (the conductor) and two groups of musicians (the first and second violin sections). It is shown how a reduced set of simple movement features - head movements - can be sufficient to explain the difference in the behavior of the first violin section between two performance conditions, characterized by different eye contact between the two violin sections and between the first section and the conductor.
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Leonardo Badino
Antonio Camurri
Alessandro D’Ausilio
Luciano Fadiga
Donald Glowinski
Marcello Sanguineti
Giovanna Varni
Gualtiero Volpe
2013-09-17T13:01:15Z
2014-01-29T10:53:38Z
http://eprints.imtlucca.it/id/eprint/1764
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1764
2013-09-17T13:01:15Z
Expressive Non-Verbal Interaction in String Quartet
The present study investigates expressive non-verbal interaction in a musical context, starting from behavioral features extracted at the individual and group level. We define four features related to head movement and direction that may help gain insight into the expressivity and cohesion of the performance. Our preliminary findings, obtained from the analysis of a string quartet recorded in ecological settings, show that these features may help in distinguishing between two types of performance: (a) a concert-like condition where all musicians aim at performing at their best, and (b) a perturbed one where the 1st violinist devises alternative interpretations of the music score without discussing them with the other musicians.
Donald Glowinski
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Antonio Camurri
Stefano Piana
2013-09-17T13:00:41Z
2015-02-18T11:45:06Z
http://eprints.imtlucca.it/id/eprint/1784
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1784
2013-09-17T13:00:41Z
Suboptimal Policies for Stochastic N-Stage Optimization Problems: Accuracy Analysis and a Case Study from Optimal Consumption
Dynamic Programming formally solves stochastic optimization problems with an objective that is additive over a finite number of stages. However, it provides closed-form solutions only in particular cases. In general, one has to resort to approximate methodologies. In this chapter, suboptimal solutions are searched for by approximating the decision policies via linear combinations of Gaussian and sigmoidal functions containing adjustable parameters, to be optimized together with the coefficients of the combinations. These approximation schemes correspond to Gaussian radial-basis-function networks and sigmoidal feedforward neural networks, respectively. The accuracies of the suboptimal solutions are investigated by estimating the error propagation through the stages. As a case study, we address a multidimensional problem of optimal consumption under uncertainty, modeled as a stochastic optimization task with an objective that is additive over a finite number of stages. In the classical one-dimensional context, a consumer aims at maximizing over a given time horizon the discounted expected value of consumption of a good, where the expectation is taken with respect to a stochastic interest rate. The consumer has an initial wealth and at each time period earns an income, modeled as an exogenous input. We consider a multidimensional framework, in which there are d>1 consumers that aim at maximizing a social utility function. First we provide conditions that allow one to apply our estimates to such a problem; then we present a numerical analysis.
Mauro Gaggero
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Marcello Sanguineti
2013-09-17T13:00:19Z
2015-02-18T11:38:24Z
http://eprints.imtlucca.it/id/eprint/1780
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1780
2013-09-17T13:00:19Z
Approximate dynamic programming for stochastic N-stage optimization with application to optimal consumption under uncertainty
Stochastic optimization problems with an objective function that is additive over a finite number of stages are addressed. Although Dynamic Programming allows one to formally solve such problems, closed-form solutions can be derived only in particular cases. The search for suboptimal solutions via two approaches is addressed: approximation of the value functions and approximation of the optimal decision policies. The approximations take on the form of linear combinations of basis functions containing adjustable parameters to be optimized together with the coefficients of the combinations. Two kinds of basis functions are considered: Gaussians with varying centers and widths and sigmoids with varying weights and biases. The accuracies of such suboptimal solutions are investigated via estimates of the error propagation through the stages. Upper bounds are derived on the differences between the optimal value of the objective functional and its suboptimal values corresponding to the use at each stage of approximate value functions and approximate policies. Conditions under which the number of basis functions required for a desired approximation accuracy does not grow “too fast” with respect to the dimensions of the state and random vectors are provided. As an example of application, a multidimensional problem of optimal consumption under uncertainty is investigated, where consumers aim at maximizing a social utility function. Numerical simulations are provided, emphasizing computational pros and cons of the two approaches (i.e., value-function approximation and optimal-policy approximation) using the above-mentioned two kinds of basis functions. To investigate the dependencies of the performances on dimensionality, the numerical analysis is performed for various numbers of consumers. In the simulations, discretization techniques exploiting low-discrepancy sequences are used. 
Both theoretical and numerical results give insights into the possibility of coping with the curse of dimensionality in stochastic optimization problems whose decision strategies depend on large numbers of variables.
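As a rough illustration of the basis-function approximation idea described above, the following sketch fits a linear combination of Gaussian radial basis functions to a stand-in one-dimensional value function by least squares. Everything here is a toy assumption: the target function, the grid, and the fixed centers and widths (the paper optimizes these inner parameters together with the coefficients).

```python
import numpy as np

def gaussian_rbf_design(x, centers, widths):
    """Design matrix: column j is exp(-(x - c_j)^2 / (2 w_j^2))."""
    return np.exp(-(x[:, None] - centers[None, :]) ** 2
                  / (2 * widths[None, :] ** 2))

def fit_rbf(x, y, centers, widths):
    """Least-squares coefficients of the Gaussian RBF combination."""
    G = gaussian_rbf_design(x, centers, widths)
    coef, *_ = np.linalg.lstsq(G, y, rcond=None)
    return coef

# Toy stand-in for a stage value function: V(x) = log(1 + x) on [0, 5].
x = np.linspace(0.0, 5.0, 200)
y = np.log1p(x)
centers = np.linspace(0.0, 5.0, 12)   # fixed here; tunable in the paper
widths = np.full(12, 0.6)
coef = fit_rbf(x, y, centers, widths)
approx = gaussian_rbf_design(x, centers, widths) @ coef
max_err = float(np.max(np.abs(approx - y)))
```

With the centers and widths held fixed, the fit reduces to linear least squares; tuning them as well, as in the paper, makes the scheme a genuinely nonlinear (variable-basis) approximator.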
Mauro Gaggero
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Marcello Sanguineti
2013-09-17T12:59:48Z
2014-01-29T10:28:09Z
http://eprints.imtlucca.it/id/eprint/1765
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1765
2013-09-17T12:59:48Z
Flood hazard assessment via threshold binary classifiers: case study of the Tanaro River basin
This contribution deals with the identification of flood hazards at the catchment scale. The aim is to distinguish flood-exposed areas from marginal risk ones, and to extend available information on flood hazards to cover the whole catchment. Threshold binary classifiers based on six selected quantitative morphological features, derived from data stored in digital elevation models (DEMs), are used to investigate the relationships between morphology and the flooding hazard, as described in flood hazard maps. Results show that threshold binary classifier techniques should be taken into account when one is interested in an initial low-cost detection of flood-exposed areas. This may be needed, for example, in applications related to the insurance market, in which one is interested in estimating the flood hazard of specific areas for which limited information is available, or whenever a first flood hazard delineation is required to further address detailed investigations for flood mapping purposes. The method described in the paper has been tested on the basin of the Tanaro River. Results present a high degree of accuracy: indeed, the best classifier correctly identifies about 91% of flood-exposed areas, whereas the percentage of the areas exposed to marginal risk that are incorrectly classified as flood-exposed areas is about 16%.
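A minimal sketch of the threshold-binary-classifier idea, on synthetic data: a single hypothetical morphological feature (e.g., an elevation-derived quantity), with the threshold chosen to minimize the sum of the false negative and false positive rates. The feature values and labels below are invented for illustration; the paper uses six DEM-derived features.

```python
# Hypothetical data: (morphological feature value, label), where
# label 1 = flood-exposed and 0 = marginal risk.
samples = [(0.5, 1), (1.2, 1), (2.0, 1), (0.8, 1),
           (3.5, 0), (4.0, 0), (5.2, 0), (2.8, 0)]

def rates(samples, threshold):
    """True- and false-positive rates of the rule 'feature < threshold'."""
    pos = sum(1 for _, y in samples if y == 1)
    neg = len(samples) - pos
    tp = sum(1 for f, y in samples if y == 1 and f < threshold)
    fp = sum(1 for f, y in samples if y == 0 and f < threshold)
    return tp / pos, fp / neg

def cost(t):
    """Sum of false negative and false positive rates at threshold t."""
    tpr, fpr = rates(samples, t)
    return (1 - tpr) + fpr

# Sweep candidate thresholds and keep the first one minimizing the cost.
best = min((t / 10 for t in range(0, 60)), key=cost)
tpr, fpr = rates(samples, best)
```

On this toy sample the classes are separable, so the selected threshold attains a true-positive rate of 1 and a false-positive rate of 0; on real DEM data the two error rates trade off along an ROC curve.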
Massimiliano Degiorgis
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Silvia Gorni
Giorgio Roth
Marcello Sanguineti
Angela Celeste Taramasso
2013-09-17T12:59:15Z
2015-02-18T13:43:14Z
http://eprints.imtlucca.it/id/eprint/1791
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1791
2013-09-17T12:59:15Z
Types of Leadership in a String Quartet
Floriane Dardard
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Donald Glowinski
2013-09-17T12:57:04Z
2014-01-29T10:11:03Z
http://eprints.imtlucca.it/id/eprint/1774
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1774
2013-09-17T12:57:04Z
Optimality conditions for coordinate-convex policies in CAC with nonlinear feasibility boundaries
Optimality conditions for Call Admission Control (CAC) problems with nonlinearly constrained feasibility regions and K classes of users are derived. The adopted model is a generalized stochastic knapsack, with exponentially distributed interarrival times of the objects. Call admission strategies are restricted to the family of Coordinate-Convex (CC) policies. For K=2 classes of users, both general structural properties of the optimal CC policies and structural properties that depend on the revenue ratio are investigated. Then, the analysis is extended to the case K>2. The theoretical results are exploited to narrow the set of admissible solutions to the associated knapsack problem, i.e., the set of CC policies to which an optimal one belongs. With respect to results available in the literature, less restrictive conditions on the optimality of the complete-sharing policy are obtained. To illustrate the role played by the theoretical results on the combinatorial CAC problem, simulation results are presented, which show how the number of candidate optimal CC policies dramatically decreases as the derived optimality conditions are imposed.
Marco Cello
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Mario Marchese
Marcello Sanguineti
2013-09-17T12:56:17Z
2013-09-17T12:56:17Z
http://eprints.imtlucca.it/id/eprint/1775
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1775
2013-09-17T12:56:17Z
Optimality Conditions for Coordinate-Convex Policies in Call Admission Control via a Generalized Stochastic Knapsack Model
We derive optimality conditions for Coordinate-Convex (CC) policies in Call Admission Control with K classes of users and nonlinearly-constrained feasibility regions. We adopt a generalized stochastic knapsack model, with random inter-arrival times and sojourn times. General structural properties of the optimal CC policies and properties that depend on the revenue ratio are investigated. Both theoretical and numerical results show that, by exploiting the proposed analysis, the number of candidate optimal CC policies dramatically decreases.
Marco Cello
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Mario Marchese
Marcello Sanguineti
2013-09-17T12:55:31Z
2015-02-18T12:01:09Z
http://eprints.imtlucca.it/id/eprint/1769
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1769
2013-09-17T12:55:31Z
Learning with Hard Constraints
A learning paradigm is proposed, in which one has both classical supervised examples and constraints that cannot be violated, called here “hard constraints”, such as those enforcing the probabilistic normalization of a density function or imposing coherent decisions of the classifiers acting on different views of the same pattern. In contrast, supervised examples can be violated at the cost of some penalization (quantified by the choice of a suitable loss function) and so play the roles of “soft constraints”. Constrained variational calculus is exploited to derive a representation theorem which provides a description of the “optimal body of the agent”, i.e. the functional structure of the solution to the proposed learning problem. It is shown that the solution can be represented in terms of a set of “support constraints”, thus extending the well-known notion of “support vectors”.
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Marco Gori
Stefano Melacci
Marcello Sanguineti
2013-09-17T09:51:50Z
2013-09-17T10:36:05Z
http://eprints.imtlucca.it/id/eprint/1795
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1795
2013-09-17T09:51:50Z
Learning Computationally Efficient Approximations of Complex Image Segmentation Metrics
Image segmentation metrics have been extensively used in the literature to compare segmentation algorithms among each other, or relative to a ground-truth segmentation. Some metrics are easy to compute (e.g., Dice, Jaccard), others are more accurate (e.g., the Hausdorff distance) and may reflect local topology, but they are computationally demanding. While certain attempts have been made to create computationally efficient implementations of such complex metrics, in this paper we approach this problem from a radically different viewpoint. We construct approximations of a complex metric (e.g., the Hausdorff distance), combining a small number of computationally lightweight metrics in a linear regression model. We also consider feature selection, using sparsity inducing strategies, to restrict the number of metrics employed significantly, without penalizing the predictive power of the model. We demonstrate our methodology with image data from plant phenotyping experiments. We find that a linear model can effectively approximate the Hausdorff distance using even a few features. Our approach can find many applications, but is largely expected to benefit distributed sensing scenarios where the sensor has low computational capacity, whereas centralized processing units have higher computational capabilities.
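The regression idea above can be sketched on synthetic data: stand-in "cheap" metrics are drawn at random, and a synthetic "expensive" metric is assumed to be a noisy linear blend of them, so an ordinary least-squares fit recovers the blend. The sparsity-inducing feature selection used in the paper is omitted here.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
# Stand-ins for three computationally cheap per-image metrics (hypothetical).
cheap = rng.uniform(0.0, 1.0, size=(n, 3))
# Assumed ground truth: the expensive metric is a linear blend plus noise.
expensive = (cheap @ np.array([2.0, -1.0, 0.5]) + 0.3
             + 0.01 * rng.standard_normal(n))

# Ordinary least squares with an intercept column.
X = np.hstack([cheap, np.ones((n, 1))])
w, *_ = np.linalg.lstsq(X, expensive, rcond=None)
rmse = float(np.sqrt(np.mean((X @ w - expensive) ** 2)))
```

In the distributed-sensing scenario the abstract mentions, only the cheap metrics and the fitted weights `w` would need to live on the low-power sensor.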
Massimo Minervini
massimo.minervini@imtlucca.it
Cristian Rusu
cristian.rusu@imtlucca.it
Sotirios A. Tsaftaris
sotirios.tsaftaris@imtlucca.it
2013-09-17T08:23:42Z
2013-09-17T08:23:42Z
http://eprints.imtlucca.it/id/eprint/1758
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1758
2013-09-17T08:23:42Z
On Spectral Windows In Supervised Learning From Data
For Tikhonov regularization in supervised learning from data, the effect on the regularized solution of a joint perturbation of the regression function and the data is investigated. Spectral windows in the finite-sample and population cases are compared via probabilistic estimates of the differences between regularized solutions.
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Marcello Sanguineti
2013-09-17T08:12:18Z
2013-09-17T08:12:18Z
http://eprints.imtlucca.it/id/eprint/1757
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1757
2013-09-17T08:12:18Z
Dynamic Programming and Value-Function Approximation with Application to Optimal Consumption
Sequential decision problems are considered, where a reward additive over a number of stages has to be maximized. Instances arise in scheduling fleets of vehicles, allocating resources, selling assets, optimizing transportation or telecommunication networks, inventory forecasting, financial planning, etc. At each stage, Dynamic Programming (DP) introduces the value function, which gives the value of the reward to be incurred at the next stage, as a function of the state at the current stage. The solution is formally obtained via recursive equations. However, closed-form solutions can be derived only in particular cases. We investigate how DP and suitable approximations of the value functions can be combined, providing a methodology to face high-dimensional sequential decision problems. Approximations of the value functions are considered, expressed as linear combinations of basis functions obtained from a "mother function" (e.g., the Gaussian) by varying some "inner parameters" (e.g., variance and center coordinates) [1-5]. The accuracies of such suboptimal solutions are estimated. It is shown that one can cope with the "curse of dimensionality" in value-function approximation (i.e., an exponential growth of the number of basis functions required to guarantee a desired solution accuracy). The theoretical analysis is applied to a multidimensional version of the optimal consumption problem. (In the classical version, a consumer aims at maximizing the discounted value of the consumption of a good, given a time horizon, a sequence of interest rates, an initial wealth, and an income earned at each stage. Here, more consumers are considered.) The proposed approximation scheme is compared with classical linear approximators, i.e., linear combinations of a-priori fixed basis functions. It is shown via simulations that our approach provides a better solution accuracy, the number of computational units being the same as in fixed-basis approximation.
Mauro Gaggero
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Marcello Sanguineti
Riccardo Zoppoli
2013-09-17T08:11:59Z
2013-09-17T08:11:59Z
http://eprints.imtlucca.it/id/eprint/1754
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1754
2013-09-17T08:11:59Z
Computationally Efficient Approximation Schemes for Functional Optimization
Angelo Alessandri
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Marcello Sanguineti
2013-09-17T08:11:38Z
2013-09-17T08:11:38Z
http://eprints.imtlucca.it/id/eprint/1755
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1755
2013-09-17T08:11:38Z
Decentralized Optimization Problems with Cooperating Decision Makers
Mauro Gaggero
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Marcello Sanguineti
2013-09-17T08:11:13Z
2013-09-17T08:11:13Z
http://eprints.imtlucca.it/id/eprint/1756
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1756
2013-09-17T08:11:13Z
Deriving Approximation Error Bounds via Rademacher’s Complexity and Learning Theory
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Marcello Sanguineti
2013-09-17T07:56:24Z
2013-09-17T07:56:24Z
http://eprints.imtlucca.it/id/eprint/1752
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1752
2013-09-17T07:56:24Z
Classifiers for the Detection of Flood Prone Areas from Remote Sensed Elevation Data
Massimiliano Degiorgis
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Silvia Gorni
Giorgio Roth
Marcello Sanguineti
Angela Celeste Taramasso
2013-09-17T07:44:28Z
2013-09-17T07:44:28Z
http://eprints.imtlucca.it/id/eprint/1751
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1751
2013-09-17T07:44:28Z
Can Dictionary-Based Computational Models Outperform the Best Linear Ones?
Approximation capabilities of two types of computational models are explored: dictionary-based models (i.e., linear combinations of n-tuples of basis functions computable by units belonging to a set called “dictionary”) and linear ones (i.e., linear combinations of n fixed basis functions). The two models are compared in terms of approximation rates, i.e., speeds of decrease of approximation errors for a growing number n of basis functions. Proofs of upper bounds on approximation rates by dictionary-based models are inspected, to show that for individual functions they do not imply estimates for dictionary-based models that do not hold also for some linear models. Instead, the possibility of getting faster approximation rates by dictionary-based models is demonstrated for worst-case errors in approximation of suitable sets of functions. For such sets, even geometric upper bounds hold.
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Věra Kůrková
Marcello Sanguineti
2013-09-17T07:43:57Z
2013-09-17T07:43:57Z
http://eprints.imtlucca.it/id/eprint/1753
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1753
2013-09-17T07:43:57Z
Classifiers for the Detection of Flood-Prone Areas Using Remote Sensed Elevation Data
A technique is presented for the identification of the areas subject to flooding hazard. Starting from remote sensed elevation data and existing flood hazard maps – usually available for limited areas – the relationships between selected quantitative morphologic features and the flooding hazard are first identified and then used to extend the hazard information to the entire catchment. This is performed through techniques of pattern classification, such as linear classifiers based on quantitative morphologic features, and support vector machines with linear and Gaussian kernels. The experiment starts by discriminating between flood-prone areas and marginal hazard areas. Multiclass classifiers are subsequently used to graduate the hazard. Their designs amount to solving suitable optimization problems. Several performance measures are considered in comparing the different classifiers, such as the area under the receiver operating characteristic curve and the sum of the false positive and false negative rates. The procedure has been validated for the Tanaro basin, a tributary to the major Italian river, the Po. Results show a high reliability: the classifier properly identifies 93% of flood-prone areas, and only 14% of the areas subject to a marginal hazard are improperly assigned. An increase of this latter value up to 19% is detected when the same structure is applied for hazard graduation. Results derived from the application to different catchments seem to qualitatively indicate the ability of the classifier to perform well also outside the calibration region. Pattern classification techniques should be considered when the identification of flood-prone areas and hazard grading is required for large regions (e.g., for civil protection or insurance purposes) or when a first identification is needed (e.g., to address further detailed flood-mapping activities).
Massimiliano Degiorgis
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Silvia Gorni
Giorgio Roth
Marcello Sanguineti
Angela Celeste Taramasso
2013-09-17T07:43:42Z
2013-09-17T07:43:42Z
http://eprints.imtlucca.it/id/eprint/1750
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1750
2013-09-17T07:43:42Z
Approximation Structures with Moderate Complexity in Functional Optimization and Dynamic Programming
Connections between function approximation and classes of functional optimization problems, whose admissible solutions may depend on a large number of variables, are investigated. The insights obtained in this context are exploited to analyze families of nonlinear approximation schemes containing tunable parameters and enjoying the following property: when they are used to approximate the (unknown) solutions to optimization problems, the number of parameters required to guarantee a desired accuracy grows at most polynomially with respect to the number of variables in admissible solutions. Both sigmoidal neural networks and networks with kernel units are considered as approximation structures to which the analysis applies. Finally, it is shown how the approach can be applied for the solution of finite-horizon optimal control problems via approximate dynamic programming enhancing the potentialities of recent developments in nonlinear approximation in the framework of the solution of sequential decision problems with continuous state spaces.
Mauro Gaggero
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Thomas Parisini
Marcello Sanguineti
Riccardo Zoppoli
2013-09-17T07:43:09Z
2015-02-18T11:35:48Z
http://eprints.imtlucca.it/id/eprint/1748
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1748
2013-09-17T07:43:09Z
Approximation and Estimation Bounds for Subsets of Reproducing Kernel Kreǐn Spaces
Reproducing kernel Kreĭn spaces are used in learning from data via kernel methods when the kernel is indefinite. In this paper, a characterization of a subset of the unit ball in such spaces is provided. Conditions are given, under which upper bounds on the estimation error and the approximation error can be applied simultaneously to such a subset. Finally, it is shown that the hyperbolic-tangent kernel and other indefinite kernels satisfy such conditions.
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
2013-09-17T07:42:09Z
2013-09-17T07:42:09Z
http://eprints.imtlucca.it/id/eprint/1745
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1745
2013-09-17T07:42:09Z
Analysis of Music Ensemble Performance as a Test-bed for Social Interaction: Methods from Operations Research and Preliminary Results
Leonardo Badino
Antonio Camurri
Alessandro D'Ausilio
Luciano Fadiga
Donald Glowinski
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Marcello Sanguineti
Giovanna Varni
Gualtiero Volpe
2013-09-17T07:41:22Z
2013-09-17T07:41:22Z
http://eprints.imtlucca.it/id/eprint/1743
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1743
2013-09-17T07:41:22Z
Accuracy of Approximations of Solutions to Fredholm Equations by Kernel Methods
Approximate solutions to inhomogeneous Fredholm integral equations of the second kind by radial and kernel networks are investigated. Upper bounds are derived on errors in approximation of solutions of these equations by networks with increasing model complexity. The bounds are obtained using results from nonlinear approximation theory. The results are applied to networks with Gaussian and kernel units and illustrated by numerical simulations.
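A hedged numerical sketch related to the setting above: the classical Nyström discretization of a Fredholm equation of the second kind (not the paper's kernel-network error bounds), on a test case with a known solution.

```python
import numpy as np

# Fredholm equation of the second kind: f(x) - lam * ∫_0^1 K(x,t) f(t) dt = g(x).
# Nyström sketch: replace the integral by a quadrature sum on a node grid and
# solve the resulting linear system for f at the nodes.
def nystrom_solve(kernel, g, lam, nodes, weights):
    K = kernel(nodes[:, None], nodes[None, :])
    A = np.eye(len(nodes)) - lam * K * weights[None, :]
    return np.linalg.solve(A, g(nodes))

# Test case with a known solution: for K(x,t) = x*t and lam = 1,
# ∫_0^1 x*t*t dt = x/3, so g(x) = 2x/3 makes f(x) = x the exact solution.
nodes = np.linspace(0.0, 1.0, 201)
weights = np.full(201, 1 / 200.0)        # composite trapezoid rule
weights[0] *= 0.5
weights[-1] *= 0.5
f_hat = nystrom_solve(lambda x, t: x * t, lambda x: 2 * x / 3,
                      1.0, nodes, weights)
err = float(np.max(np.abs(f_hat - nodes)))   # compare with f(x) = x
```

The discretization error here is governed by the trapezoid rule; the paper instead bounds the error of approximating the solution by radial and kernel networks of increasing model complexity.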
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Věra Kůrková
Marcello Sanguineti
2013-09-17T07:40:56Z
2013-09-17T07:40:56Z
http://eprints.imtlucca.it/id/eprint/1747
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1747
2013-09-17T07:40:56Z
Approximate Dynamic Programming by Variable-Basis Schemes: Error Analysis and Numerical Results
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Marcello Sanguineti
Mauro Gaggero
2013-09-17T07:40:37Z
2013-09-17T07:40:37Z
http://eprints.imtlucca.it/id/eprint/1744
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1744
2013-09-17T07:40:37Z
Accuracy of Suboptimal Solutions to Kernel Principal Component Analysis
For Principal Component Analysis in Reproducing Kernel Hilbert Spaces (KPCA), optimization over sets containing only linear combinations of all n-tuples of kernel functions is investigated, where n is a positive integer smaller than the number of data. Upper bounds on the accuracy in approximating the optimal solution, achievable without restrictions on the number of kernel functions, are derived. The rates of decrease of the upper bounds for increasing number n of kernel functions are given by the summation of two terms, one proportional to n^(-1/2) and the other to n^(-1), and depend on the maximum eigenvalue of the Gram matrix of the kernel with respect to the data. Primal and dual formulations of KPCA are considered. The estimates provide insights into the effectiveness of sparse KPCA techniques, aimed at reducing the computational costs of expansions in terms of kernel units.
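A minimal sketch of the restriction studied above, under simplifying assumptions (uncentered kernel PCA, Gaussian kernel, a random subset of n kernel units): the variance captured by the best component expressed with n < N kernel functions is compared with the full-expansion optimum.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((60, 2))
N = len(X)

def gram(A, B, sigma=1.0):
    """Gaussian-kernel Gram matrix between rows of A and rows of B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

# Full (uncentered) KPCA: the top component uses all N kernel units and its
# captured variance equals the largest Gram eigenvalue divided by N.
K = gram(X, X)
full_var = np.linalg.eigvalsh(K)[-1] / N

# Suboptimal variant: restrict the expansion to n < N kernel units (a random
# subset of data points) and solve the resulting generalized eigenproblem
# max_a (a' M a) / (a' B a), with M the projected-variance form and B the
# norm form on the subset.
n = 15
S = rng.choice(N, size=n, replace=False)
M = K[:, S].T @ K[:, S] / N
B = K[np.ix_(S, S)]
L = np.linalg.cholesky(B + 1e-10 * np.eye(n))
C = np.linalg.solve(L, np.linalg.solve(L, M).T).T   # L^{-1} M L^{-T}
sub_var = float(np.linalg.eigvalsh(C)[-1])
```

Since the restricted expansion spans a subspace of the full one, `sub_var` can never exceed `full_var`; the paper quantifies how fast the gap shrinks as n grows.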
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Marcello Sanguineti
2013-09-17T07:40:01Z
2013-09-17T07:40:01Z
http://eprints.imtlucca.it/id/eprint/1746
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1746
2013-09-17T07:40:01Z
Approximate Dynamic Programming by Value-Function Approximation via Variable-Basis Schemes
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Marcello Sanguineti
2013-09-17T07:39:34Z
2013-09-17T07:39:34Z
http://eprints.imtlucca.it/id/eprint/1749
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1749
2013-09-17T07:39:34Z
Approximation Error Bounds via Rademacher's Complexity
Approximation properties of some connectionistic models, commonly used to construct approximation schemes for optimization problems with multivariable functions as admissible solutions, are investigated. Such models are made up of linear combinations of computational units with adjustable parameters. The relationship between model complexity (number of computational units) and approximation error is investigated using tools from Statistical Learning Theory, such as Talagrand's inequality, fat-shattering dimension, and Rademacher's complexity. For some families of multivariable functions, estimates of the approximation accuracy of models with certain computational units are derived in dependence on the Rademacher's complexities of the families. The estimates improve previously available ones, which were expressed in terms of the VC dimension and derived by exploiting union-bound techniques. The results are applied to approximation schemes with certain radial-basis functions as computational units, for which it is shown that the estimates do not exhibit the curse of dimensionality with respect to the number of variables.
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Marcello Sanguineti
2013-09-16T09:43:14Z
2013-09-16T12:02:59Z
http://eprints.imtlucca.it/id/eprint/1730
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1730
2013-09-16T09:43:14Z
Learning with Boundary Conditions
Kernel machines traditionally arise from an elegant formulation based on measuring the smoothness of the admissible solutions by the norm in the reproducing kernel Hilbert space (RKHS) generated by the chosen kernel. It was pointed out that they can be formulated in a related functional framework, in which the Green’s function of suitable differential operators is thought of as a kernel. In this letter, our own picture of this intriguing connection is given by emphasizing some relevant distinctions between these different ways of measuring the smoothness of admissible solutions. In particular, we show that for some kernels, there is no associated differential operator. The crucial relevance of boundary conditions is especially emphasized, which is in fact the truly distinguishing feature of the approach based on differential operators. We provide a general solution to the problem of learning from data and boundary conditions and illustrate the significant role played by boundary conditions with examples. It turns out that the degree of freedom that arises in the traditional formulation of kernel machines is indeed a limitation, which is partly overcome when incorporating the boundary conditions. This likely holds true in many real-world applications in which there is prior knowledge about the expected behavior of classifiers and regressors on the boundary.
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Marco Gori
Marcello Sanguineti
2013-09-16T09:25:24Z
2013-09-16T12:02:59Z
http://eprints.imtlucca.it/id/eprint/1728
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1728
2013-09-16T09:25:24Z
Dynamic Programming and Value-Function Approximation in Sequential Decision Problems: Error Analysis and Numerical Results
Value-function approximation is investigated for the solution via Dynamic Programming (DP) of continuous-state sequential N-stage decision problems, in which the reward to be maximized has an additive structure over a finite number of stages. Conditions that guarantee smoothness properties of the value function at each stage are derived. These properties are exploited to approximate such functions by means of certain nonlinear approximation schemes, which include splines of suitable order and Gaussian radial-basis networks with variable centers and widths. The accuracies of suboptimal solutions obtained by combining DP with these approximation tools are estimated. The results provide insights into the successful performances appeared in the literature about the use of value-function approximators in DP. The theoretical analysis is applied to a problem of optimal consumption, with simulation results illustrating the use of the proposed solution methodology. Numerical comparisons with classical linear approximators are presented.
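The DP-plus-approximation scheme can be sketched for a toy one-dimensional consumption problem, using piecewise-linear interpolation on a wealth grid as a simple stand-in for the spline and Gaussian radial-basis approximators analyzed in the paper; all parameter values below are illustrative.

```python
import numpy as np

# Finite-horizon optimal consumption via backward Dynamic Programming:
# maximize sum_t beta^t log(c_t), with wealth dynamics
# w_{t+1} = R * (w_t + income - c_t).
T, R, beta, income = 5, 1.05, 0.95, 1.0   # horizon, gross interest, discount
grid = np.linspace(0.0, 20.0, 401)        # wealth grid
V = np.zeros_like(grid)                   # terminal value function V_T ≡ 0

for t in range(T - 1, -1, -1):
    V_next, V = V, np.empty_like(grid)
    for i, w in enumerate(grid):
        c = np.linspace(1e-6, w + income, 200)   # feasible consumption levels
        w_next = R * (w + income - c)            # next-stage wealth
        # Bellman update: immediate log-utility plus discounted continuation
        # value, read off the interpolated approximation of V_{t+1}
        # (np.interp clamps beyond the grid edges).
        V[i] = np.max(np.log(c) + beta * np.interp(w_next, grid, V_next))
```

Each backward pass stores only the gridded approximation of the stage value function; the papers above replace this interpolation step with variable-basis schemes and bound the resulting error propagation through the stages.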
Mauro Gaggero
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Marcello Sanguineti
2013-09-16T09:09:08Z
2013-09-16T12:02:59Z
http://eprints.imtlucca.it/id/eprint/1726
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1726
2013-09-16T09:09:08Z
Suboptimal Solutions to Team Optimization Problems with Stochastic Information Structure
Existence, uniqueness, and approximations of smooth solutions to team optimization problems with stochastic information structure are investigated. Suboptimal strategies made up of linear combinations of basis functions containing adjustable parameters are considered. Estimates of their accuracies are derived by combining properties of the unknown optimal strategies with tools from nonlinear approximation theory. The estimates are obtained for basis functions corresponding to sinusoids with variable frequencies and phases, Gaussians with variable centers and widths, and sigmoidal ridge functions. The theoretical results are applied to a problem of optimal production in a multidivisional firm, for which numerical simulations are presented.
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Marcello Sanguineti
Mauro Gaggero
2013-09-13T12:36:29Z
2013-09-16T12:02:59Z
http://eprints.imtlucca.it/id/eprint/1725
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1725
2013-09-13T12:36:29Z
A Model of Buffer Occupancy for ICNs
In this letter, an analytical framework to model nodes in Intermittently Connected Networks (ICNs) is proposed. A relationship is derived in the z-domain between the discrete probability densities of their buffer state occupancies and the sizes of the arriving bulks. Under a fixed epidemic-routing-based forwarding strategy, expressions are obtained for the average buffer occupancy and its standard deviation with immediate protocol advantages.
Marco Cello
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Mario Marchese
Marcello Sanguineti
2013-09-13T12:25:47Z
2013-09-16T12:02:59Z
http://eprints.imtlucca.it/id/eprint/1724
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1724
2013-09-13T12:25:47Z
New insights into Witsenhausen’s counterexample
The accuracies of certain suboptimal solutions to the famous and still unsolved optimization problem known as “Witsenhausen’s counterexample” are investigated. The differences between the corresponding suboptimal values of the Witsenhausen functional and its optimum are estimated, too. The results give insights into the effectiveness of certain approaches proposed in the literature to face this hard optimization problem and into numerical results obtained by some researchers.
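For reference, Witsenhausen's counterexample is commonly stated as the following two-stage LQG problem with non-classical information (standard formulation from the literature; the abstract does not spell it out):

```latex
\begin{aligned}
x_1 &= x_0 + u_1, & u_1 &= \gamma_1(x_0),\\
x_2 &= x_1 - u_2, & u_2 &= \gamma_2(x_1 + v),\\
J(\gamma_1,\gamma_2) &= k^2\,\mathbb{E}\!\left[u_1^2\right] + \mathbb{E}\!\left[x_2^2\right],
\end{aligned}
```

where $x_0 \sim \mathcal{N}(0,\sigma^2)$ and $v \sim \mathcal{N}(0,1)$ are independent. The second controller observes the first-stage state only through noise, which is what makes the optimal strategies nonlinear and the problem still unsolved.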
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Marcello Sanguineti
2013-09-13T12:03:55Z
2013-09-16T12:02:59Z
http://eprints.imtlucca.it/id/eprint/1723
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1723
2013-09-13T12:03:55Z
A Comparison between Fixed-Basis and Variable-Basis Schemes for Function Approximation and Functional Optimization
Fixed-basis and variable-basis approximation schemes are compared for the problems of function approximation and functional optimization (also known as infinite programming). Classes of problems are investigated for which variable-basis schemes with sigmoidal computational units perform better than fixed-basis ones, in terms of the minimum number of computational units needed to achieve a desired error in function approximation or approximate optimization. Previously known bounds on the accuracy are extended, with better rates, to families of
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
2013-09-13T11:34:55Z
2013-09-16T12:03:00Z
http://eprints.imtlucca.it/id/eprint/1719
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1719
2013-09-13T11:34:55Z
On a Variational Norm Tailored to Variable-Basis Approximation Schemes
A variational norm associated with sets of computational units and used in function approximation, learning from data, and infinite-dimensional optimization is investigated. For sets G_K obtained by varying a vector y of parameters in a fixed-structure computational unit K(·,y) (e.g., the set of Gaussians with free centers and widths), upper and lower bounds on the G_K-variation norms of functions having certain integral representations are given, in terms of the ℓ1-norms of the weighting functions in such representations. Families of functions for which the two norms are equal are described.
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Marcello Sanguineti
2013-09-13T11:30:21Z
2013-09-16T12:02:59Z
http://eprints.imtlucca.it/id/eprint/1718
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1718
2013-09-13T11:30:21Z
CAC with Nonlinearly-Constrained Feasibility Regions
Two criteria are proposed to characterize and improve suboptimal coordinate-convex (c.c.) policies in Call Admission Control (CAC) problems with nonlinearly-constrained feasibility regions. Then, a structural property of the optimal c.c. policies is derived. This is expressed in terms of constraints on the relative positions of successive corner points.
Marco Cello
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Mario Marchese
Marcello Sanguineti
2013-09-13T10:33:43Z
2013-09-16T12:03:00Z
http://eprints.imtlucca.it/id/eprint/1716
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1716
2013-09-13T10:33:43Z
Minimizing Sequences for a Family of Functional Optimal Estimation Problems
Rates of convergence are derived for approximate solutions to optimization problems associated with the design of state estimators for nonlinear dynamic systems. Such problems consist in minimizing the functional given by the worst-case ratio between the ℒp-norm of the estimation error and the sum of the ℒp-norms of the disturbances acting on the dynamic system. The state estimator depends on an innovation function, which is searched for as a minimizer of the functional over a subset of a suitably-defined functional space. In general, no closed-form solutions are available for these optimization problems. Following the approach proposed in (J. Optim. Theory Appl. 134:445–466, 2007), suboptimal solutions are searched for over linear combinations of basis functions containing some parameters to be optimized. The accuracies of such suboptimal solutions are estimated in terms of the number of basis functions. The estimates hold for families of approximators used in applications, such as splines of suitable orders.
Angelo Alessandri
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Marcello Sanguineti
2013-09-13T10:30:44Z
2013-09-16T12:03:00Z
http://eprints.imtlucca.it/id/eprint/1715
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1715
2013-09-13T10:30:44Z
Suboptimal Solutions to Dynamic Optimization Problems via Approximations of the Policy Functions
The approximation of the optimal policy functions is investigated for dynamic optimization problems with an objective that is additive over a finite number of stages. The distance between optimal and suboptimal values of the objective functional is estimated, in terms of the errors in approximating the optimal policy functions at the various stages. Smoothness properties are derived for such functions and exploited to choose the approximating families. The approximation error is measured in the supremum norm, in such a way to control the error propagation from stage to stage. Nonlinear approximators corresponding to Gaussian radial-basis-function networks with adjustable centers and widths are considered. Conditions are defined, guaranteeing that the number of Gaussians (hence, the number of parameters to be adjusted) does not grow “too fast” with the dimension of the state vector. The results help to mitigate the curse of dimensionality in dynamic optimization. An example of application is given and the use of the estimates is illustrated via a numerical simulation.
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Marcello Sanguineti
2013-09-13T10:27:37Z
2013-09-16T12:03:00Z
http://eprints.imtlucca.it/id/eprint/1714
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1714
2013-09-13T10:27:37Z
Estimates of Variation with Respect to a Set and Applications to Optimization Problems
A variational norm that plays a role in functional optimization and learning from data is investigated. For sets of functions obtained by varying some parameters in fixed-structure computational units (e.g., Gaussians with variable centers and widths), upper bounds on the variational norms associated with such units are derived. The results are applied to functional optimization problems arising in nonlinear approximation by variable-basis functions and in learning from data. They are also applied to the construction of minimizing sequences by an extension of the Ritz method.
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Marcello Sanguineti
2013-09-13T10:17:39Z
2013-09-16T12:03:00Z
http://eprints.imtlucca.it/id/eprint/1713
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1713
2013-09-13T10:17:39Z
Regularization Techniques and Suboptimal Solutions to Optimization Problems in Learning from Data
Various regularization techniques are investigated in supervised learning from data. Theoretical features of the associated optimization problems are studied, and sparse suboptimal solutions are searched for. Rates of approximate optimization are estimated for sequences of suboptimal solutions formed by linear combinations of n-tuples of computational units, and statistical learning bounds are derived. As hypothesis sets, reproducing kernel Hilbert spaces and their subsets are considered.
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Marcello Sanguineti
2013-09-13T09:19:51Z
2013-09-16T12:03:00Z
http://eprints.imtlucca.it/id/eprint/1707
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1707
2013-09-13T09:19:51Z
Estimates of the Approximation Error Using Rademacher Complexity: Learning Vector-Valued Functions
For certain families of multivariable vector-valued functions to be approximated, the accuracy of approximation schemes made up of linear combinations of computational units containing adjustable parameters is investigated. Upper bounds on the approximation error are derived that depend on the Rademacher complexities of the families. The estimates exploit possible relationships among the components of the multivariable vector-valued functions. All such components are approximated simultaneously in such a way to use, for a desired approximation accuracy, fewer computational units than those required by componentwise approximation. An application to -stage optimization problems is discussed.
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Marcello Sanguineti
2013-09-12T11:13:50Z
2016-07-13T10:47:18Z
http://eprints.imtlucca.it/id/eprint/1697
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1697
2013-09-12T11:13:50Z
Statistical analysis of chemical computational systems with MULTIVESTA and ALCHEMIST
The chemical-oriented approach is an emerging paradigm for programming the behaviour of densely distributed and context-aware devices (e.g. in ecosystems of displays tailored to crowd steering, or to obtain profile-based coordinated visualization). Typically, the evolution of such systems cannot be easily predicted, which makes the availability of techniques and tools supporting prior-to-deployment analysis of paramount importance. Exact analysis techniques do not scale well as the complexity of systems grows; as a consequence, approximated techniques based on simulation have assumed a relevant role. This work presents a new simulation-based distributed tool addressing the statistical analysis of such systems, obtained by chaining two existing tools: MultiVeStA and Alchemist. The former is a recently proposed lightweight tool that enriches existing discrete event simulators with distributed statistical analysis capabilities, while the latter is an efficient simulator for chemical-oriented computational systems. The tool is validated against a crowd steering scenario, and insights on performance are provided by discussing how the analysis tasks scale when distributed on a multi-core architecture.
Danilo Pianini
Stefano Sebastio
stefano.sebastio@imtlucca.it
Andrea Vandin
andrea.vandin@imtlucca.it
2013-09-12T11:06:41Z
2013-09-16T12:03:00Z
http://eprints.imtlucca.it/id/eprint/1696
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1696
2013-09-12T11:06:41Z
Bounds for Approximate Solutions of Fredholm Integral Equations Using Kernel Networks
Approximation of solutions of integral equations by networks with kernel units is investigated theoretically. Upper bounds are derived on the speed of decrease of the error in approximating solutions of Fredholm integral equations by kernel networks with increasing numbers of units. The estimates are obtained for Gaussian and degenerate kernels.
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Věra Kůrková
Marcello Sanguineti
2013-09-12T10:45:37Z
2013-09-16T12:03:00Z
http://eprints.imtlucca.it/id/eprint/1693
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1693
2013-09-12T10:45:37Z
Computationally Efficient Approximation Schemes for Functional Optimization
Approximation schemes for functional optimization problems with admissible solutions dependent on a large number d of variables are investigated. Suboptimal solutions are considered, expressed as linear combinations of n-tuples from a basis set. The uses of fixed-basis and variable-basis approximation are compared. In the latter, simple computational units with adjustable parameters are exploited. Conditions are discussed, under which the number n of basis functions required to guarantee a desired accuracy does not grow “fast” with the number d of variables in admissible solutions, thus mitigating the “curse of dimensionality”. As an example of application, an optimization-based approach to fault diagnosis for nonlinear stochastic systems is presented. Numerical results for a complex instance of the fault-diagnosis problem are given.
Angelo Alessandri
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Marcello Sanguineti
2013-09-11T14:00:57Z
2013-09-16T12:02:59Z
http://eprints.imtlucca.it/id/eprint/1685
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1685
2013-09-11T14:00:57Z
A Generalized Stochastic Knapsack Problem with Application in Call Admission Control
Marco Cello
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Mario Marchese
Marcello Sanguineti
2013-09-10T15:08:56Z
2013-09-16T12:03:00Z
http://eprints.imtlucca.it/id/eprint/1669
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1669
2013-09-10T15:08:56Z
Some Comparisons of Model Complexity in Linear and Neural-Network Approximation
Capabilities of linear and neural-network models are compared from the point of view of requirements on the growth of model complexity with an increasing accuracy of approximation. Upper bounds on worst-case errors in approximation by neural networks are compared with lower bounds on these errors in linear approximation. The bounds are formulated in terms of singular numbers of certain operators induced by computational units and high-dimensional volumes of the domains of the functions to be approximated.
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Věra Kůrková
Marcello Sanguineti
2013-09-10T14:59:37Z
2013-09-16T12:03:00Z
http://eprints.imtlucca.it/id/eprint/1668
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1668
2013-09-10T14:59:37Z
On call admission control with nonlinearly constrained feasibility regions
A simple criterion is proposed to improve suboptimal coordinate-convex policies in Call Admission Control problems with nonlinearly constrained feasibility regions. To test the criterion, numerical simulation results are given. Finally, some structural properties of the optimal coordinate-convex policies are proven, which do not depend on a complete knowledge of the nonlinear boundary of the feasibility region.
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Marco Cello
Marcello Sanguineti
2013-09-10T14:56:16Z
2013-09-16T12:03:00Z
http://eprints.imtlucca.it/id/eprint/1667
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1667
2013-09-10T14:56:16Z
Suboptimal solutions to team optimization problems with statistical information structure
Network team optimization problems with statistical information structure are investigated. A team of n Decision Makers (DMs), each having at their disposal some information (obtained, e.g., by measurement devices or by exit polls) and various possibilities of decisions, coordinate their efforts to achieve a common goal, expressed via a team utility function. Decisions are generated by the DMs via strategies, on the basis of the available information y1, . . . , yn that each of them has and in the presence of uncertainties in the "state of the external world" x (which the DMs do not control). Such uncertainties are modeled via a joint probability density p(x, y1, . . . , yn). For these problems, optimal solutions in closed form can be derived only in special cases, so a methodology of approximate solution is proposed. Suboptimal solutions are searched for, taking the form of linear combinations of elements from sets of basis functions, possibly with adjustable "inner" parameters. Upper bounds on the accuracy of such suboptimal solutions are obtained. The estimates are expressed in terms of the number of trigonometric and Gaussian basis functions. The trade-off between the level of decentralization and the smoothness assumptions on the utility function and the probability density, required to derive the upper bounds, is investigated. Numerical results are presented for an instance of the network team optimization problem under study, which models optimal production in a multidivisional firm.
Marcello Sanguineti
Mauro Gaggero
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
2013-09-10T14:42:52Z
2013-09-16T12:03:00Z
http://eprints.imtlucca.it/id/eprint/1666
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1666
2013-09-10T14:42:52Z
Smooth Optimal Decision Strategies for Static Team Optimization Problems and Their Approximations
Sufficient conditions for the existence and uniqueness of smooth optimal decision strategies for static team optimization problems with statistical information structure are derived. Approximation methods and algorithms to derive suboptimal solutions based on the obtained results are investigated. The application to network team optimization problems is discussed.
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Marcello Sanguineti
2013-08-09T08:02:03Z
2015-02-06T10:09:37Z
http://eprints.imtlucca.it/id/eprint/1655
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1655
2013-08-09T08:02:03Z
Dimming Relations for the Efficient Analysis of Concurrent Systems via Action Abstraction
We study models of concurrency based on labelled transition systems where abstractions are induced by a partition of the action set. We introduce dimming relations, i.e., notions of behavioural equivalence which are able to relate two models if they can match each other's actions whenever they are in the same partition block. We show applicability to a number of situations of practical interest which are apparently heterogeneous but exhibit similar behaviors although manifested via different actions. Dimming relations make the models more homogeneous by collapsing such distinct actions into the same partition block. With our examples, we show how these abstractions permit reducing the state-space complexity from exponential to polynomial in the number of concurrent processes.
Rocco De Nicola
r.denicola@imtlucca.it
Giulio Iacobelli
Mirco Tribastone
mirco.tribastone@imtlucca.it
2013-08-05T08:53:53Z
2014-07-01T12:39:39Z
http://eprints.imtlucca.it/id/eprint/1653
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1653
2013-08-05T08:53:53Z
Stochastic MPC with learning for driver-predictive vehicle control and its application to HEV energy management
This paper develops an approach for driver-aware vehicle control based on stochastic model predictive control with learning (SMPCL). The framework combines the on-board learning of a Markov chain that represents the driver behavior, a scenario-based approach for stochastic optimization, and quadratic programming. By using quadratic programming, SMPCL can handle, in general, larger state dimension models than stochastic dynamic programming, and can reconfigure in real-time for accommodating changes in driver behavior. The SMPCL approach is demonstrated in the energy management of a series hybrid electrical vehicle, aimed at improving fuel efficiency while enforcing constraints on battery state of charge and power. The SMPCL controller allocates the power from the battery and the engine to meet the driver power request. A Markov chain that models the power request dynamics is learned in real-time to improve the prediction capabilities of model predictive control (MPC). By exploiting the learned pattern of the driver behavior, the proposed approach outperforms conventional model predictive control and shows performance close to MPC with full knowledge of future driver power request in standard and real-world driving cycles.
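The on-board learning step can be sketched as maximum-likelihood estimation of a Markov transition matrix over quantized driver power requests, from which scenarios for the stochastic optimization are sampled. Bin count, request sequence, and all names below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

# Estimate a Markov chain of quantized power requests from observed
# transitions (maximum likelihood: normalized transition counts).
n_bins = 4
counts = np.zeros((n_bins, n_bins))
requests = [0, 1, 1, 2, 3, 2, 1, 0, 1, 2]       # made-up quantized requests
for prev, cur in zip(requests, requests[1:]):
    counts[prev, cur] += 1

T = np.full((n_bins, n_bins), 1.0 / n_bins)     # unvisited rows stay uniform
visited = counts.sum(axis=1) > 0
T[visited] = counts[visited] / counts[visited].sum(axis=1, keepdims=True)

# Sample one future request scenario from the learned chain, as a
# scenario-based stochastic MPC would.
rng = np.random.default_rng(1)
scenario = [1]
for _ in range(5):
    scenario.append(int(rng.choice(n_bins, p=T[scenario[-1]])))
```

In an on-board setting the counts would be updated incrementally at each sampling time, so the chain tracks changes in driver behavior.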
Stefano Di Cairano
Daniele Bernardini
daniele.bernardini@imtlucca.it
Alberto Bemporad
alberto.bemporad@imtlucca.it
Ilya Kolmanovsky
2013-06-20T08:03:30Z
2013-06-20T08:03:30Z
http://eprints.imtlucca.it/id/eprint/1620
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1620
2013-06-20T08:03:30Z
Applying Mean-field Approximation to Continuous Time Markov Chains
The mean-field analysis technique is used to analyse systems with a large number of components, to determine their emergent deterministic behaviour and how this behaviour changes when parameters are perturbed. The computer science performance modelling and analysis community has found the mean-field method useful for modelling large-scale computer and communication networks. Applying mean-field analysis from the computer science perspective requires the following major steps: (1) describing how the agent populations evolve by means of a system of differential equations, (2) finding the emergent deterministic behaviour of the system by solving such differential equations, and (3) analysing properties of this behaviour either by relying on simulation or by using logics. Depending on the system under analysis, performing these steps may become challenging, and modifications of the general idea are often needed. In this tutorial we use illustrating examples to discuss how the mean-field method is applied in different application areas, starting from the classical technique and moving to cases where additional steps are required, such as systems with local communication. Finally, we illustrate the application of simulation and fluid model checking analysis techniques.
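Steps (1) and (2) of the recipe above can be illustrated on a toy example of our own (not one from the tutorial): mean-field ODEs for an epidemic-style two-population model, integrated with the forward Euler method:

```python
def mean_field(b=2.0, r=1.0, s0=0.99, i0=0.01, dt=1e-3, T=10.0):
    """Forward-Euler integration of the mean-field ODEs
    ds/dt = -b*s*i,  di/dt = b*s*i - r*i,
    where s and i are the fractions of susceptible and infected agents
    (toy model; rates b, r are illustrative)."""
    s, i = s0, i0
    for _ in range(int(T / dt)):
        ds, di = -b * s * i, b * s * i - r * i
        s, i = s + dt * ds, i + dt * di
    return s, i

s, i = mean_field()
```

Step (3) would then check properties of the trajectory (s(t), i(t)), e.g. by simulation of the underlying CTMC or by fluid model checking.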
Anna Kolesnichenko
Alireza Pourranjabar
Valerio Senni
valerio.senni@imtlucca.it
2013-06-10T08:42:23Z
2013-06-10T08:42:23Z
http://eprints.imtlucca.it/id/eprint/1609
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1609
2013-06-10T08:42:23Z
Stagewise K-SVD to Design Efficient Dictionaries for Sparse Representations
The problem of training a dictionary for sparse representations from a given dataset is receiving a lot of attention mainly due to its applications in the fields of coding, classification and pattern recognition. One of the open questions is how to choose the number of atoms in the dictionary: if the dictionary is too small then the representation errors are big and if the dictionary is too big then using it becomes computationally expensive. In this letter, we solve the problem of computing efficient dictionaries of reduced size by a new design method, called Stagewise K-SVD, which is an adaptation of the popular K-SVD algorithm. Since K-SVD performs very well in practice, we use K-SVD steps to gradually build dictionaries that fulfill an imposed error constraint. The conceptual simplicity of the method makes it easy to apply, while the numerical experiments highlight its efficiency for different overcomplete dictionaries.
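The stagewise idea, growing the dictionary until an imposed error constraint is met, can be sketched as follows. This is our schematic illustration, not the authors' algorithm: for brevity it uses 1-sparse representations, in which case the K-SVD atom update reduces to the dominant left singular vector of the signals assigned to each atom, and the training data are synthetic.

```python
import numpy as np

# Synthetic training set: noisy signed copies of 5 hidden unit-norm atoms.
rng = np.random.default_rng(0)
true_D = rng.standard_normal((8, 5))
true_D /= np.linalg.norm(true_D, axis=0)
labels = rng.integers(0, 5, 200)
signs = rng.choice([-1.0, 1.0], 200)
Y = true_D[:, labels] * signs + 0.05 * rng.standard_normal((8, 200))

def ksvd_1sparse(Y, n_atoms, iters=15):
    """K-SVD with sparsity 1: alternate atom assignment and SVD atom update."""
    D = Y[:, rng.choice(Y.shape[1], n_atoms, replace=False)].copy()
    D /= np.linalg.norm(D, axis=0)
    for _ in range(iters):
        idx = np.argmax(np.abs(D.T @ Y), axis=0)      # 1-sparse coding
        for k in range(n_atoms):
            Sk = Y[:, idx == k]
            if Sk.size:
                U, _, _ = np.linalg.svd(Sk, full_matrices=False)
                D[:, k] = U[:, 0]                      # atom update
    idx = np.argmax(np.abs(D.T @ Y), axis=0)
    coef = (D[:, idx] * Y).sum(axis=0)
    err = np.linalg.norm(Y - D[:, idx] * coef)         # Frobenius residual
    return D, err

# Stagewise loop: grow the dictionary until the error constraint is met.
target = 3.0
for n_atoms in range(2, 20):
    D, err = ksvd_1sparse(Y, n_atoms)
    if err <= target:
        break
```

The actual Stagewise K-SVD works with general sparsity levels and grows the dictionary by reusing K-SVD steps; the loop above only conveys the grow-until-feasible structure.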
Cristian Rusu
cristian.rusu@imtlucca.it
Bogdan Dumitrescu
2013-05-17T13:45:01Z
2013-09-03T08:26:04Z
http://eprints.imtlucca.it/id/eprint/1588
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1588
2013-05-17T13:45:01Z
SPoT: Representing the Social, Spatial, and Temporal Dimensions of Human Mobility with a Unifying Framework
Modeling human mobility is crucial in the analysis and simulation of opportunistic networks, where contacts are exploited as opportunities for peer-to-peer message forwarding. The current approach to human mobility modeling has been based on continuously modifying models, trying to embed in them the mobility properties (e.g., visiting patterns to locations or specific distributions of inter-contact times) as they came up from trace analysis. As a consequence, with these models it is difficult, if not impossible, to modify the features of mobility or to control the exact shape of mobility metrics (e.g., modifying the distribution of inter-contact times). For these reasons, in this paper we propose a mobility framework rather than a mobility model, with the explicit goal of providing a flexible and controllable tool for modeling mathematically and generating simulatively different possible features of human mobility. Our framework, named SPoT, is able to incorporate the three dimensions - spatial, social, and temporal - of human mobility. SPoT does so by mapping the different social communities of the network onto different locations, which community members visit with a configurable temporal pattern. In order to characterize the temporal patterns of user visits to locations and the relative positioning of locations based on their shared users, we analyze the traces of real user movements extracted from three location-based online social networks (Gowalla, Foursquare, and Altergeo). We observe that a Bernoulli process effectively approximates user visits to locations in the majority of cases, and that locations that share many common users visiting them frequently tend to be located close to each other. In addition, we use these traces to test the flexibility of the framework, and we show that SPoT is able to accurately reproduce the mobility behavior observed in traces. Finally, relying on the Bernoulli assumption for arrival processes, we provide a thorough mathematical analysis of the controllability of the framework, deriving the conditions under which heavy-tailed and exponentially-tailed aggregate inter-contact times (often observed in real traces) emerge.
Dmytro Karamshuk
dmytro.karamshuk@imtlucca.it
Chiara Boldrini
Marco Conti
Andrea Passarella
2013-05-16T13:56:29Z
2016-07-13T10:47:56Z
http://eprints.imtlucca.it/id/eprint/1584
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1584
2013-05-16T13:56:29Z
A Conceptual Framework for Adaptation
We present a white-box conceptual framework for adaptation. We called it CODA, for COntrol Data Adaptation, since it is based on the notion of control data. CODA promotes a neat separation between application and adaptation logic through a clear identification of the set of data that is relevant for the latter. The framework provides an original perspective from which we survey a representative set of approaches to adaptation ranging from programming languages and paradigms, to computational models and architectural solutions.
Roberto Bruni
Andrea Corradini
Fabio Gadducci
Alberto Lluch-Lafuente
alberto.lluch@imtlucca.it
Andrea Vandin
andrea.vandin@imtlucca.it
2013-05-16T13:06:14Z
2016-07-13T09:48:45Z
http://eprints.imtlucca.it/id/eprint/1581
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1581
2013-05-16T13:06:14Z
A Conceptual Framework for Adaptation
In this position paper we present a conceptual vision of adaptation, a key feature of autonomic systems. We put some stress on the role of control data and argue how some of the programming paradigms and models used for adaptive systems match with our conceptual framework.
Roberto Bruni
Andrea Corradini
Fabio Gadducci
Alberto Lluch-Lafuente
alberto.lluch@imtlucca.it
Andrea Vandin
andrea.vandin@imtlucca.it
2013-05-16T13:05:40Z
2016-07-13T10:47:36Z
http://eprints.imtlucca.it/id/eprint/1583
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1583
2013-05-16T13:05:40Z
Adaptation is a Game
Control data variants of game models such as Interface Automata are suitable for the design and analysis of self-adaptive systems.
Roberto Bruni
Andrea Corradini
Fabio Gadducci
Alberto Lluch-Lafuente
alberto.lluch@imtlucca.it
Andrea Vandin
andrea.vandin@imtlucca.it
2013-05-03T11:39:43Z
2013-05-03T11:39:43Z
http://eprints.imtlucca.it/id/eprint/1564
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1564
2013-05-03T11:39:43Z
A tool for rapid development of WS-BPEL applications
WS-BPEL is imposing itself as a standard for the orchestration of web services. However, there are still some well-known difficulties that make programming in WS-BPEL a tricky task. In this paper, we present BliteC, a software tool we have developed for supporting rapid and easy development of WS-BPEL applications. BliteC translates service orchestrations written in Blite, a formal language inspired by but simpler than WS-BPEL, into readily executable WS-BPEL programs. We illustrate our approach by means of a few practical programming examples.
Luca Cesari
Rosario Pugliese
Francesco Tiezzi
francesco.tiezzi@imtlucca.it
2013-05-03T11:38:21Z
2013-05-03T11:38:21Z
http://eprints.imtlucca.it/id/eprint/1567
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1567
2013-05-03T11:38:21Z
Proceedings of 7th International Workshop on Automated Specification and Verification of Web Systems (WWV 2011).
This volume contains the final and revised versions of the papers presented at the 7th International Workshop on Automated Specification and Verification of Web Systems (WWV 2011). The workshop was held in Reykjavik, Iceland, on June 9, 2011, as part of DisCoTec 2011. The aim of the WWV workshop series is to provide an interdisciplinary forum to facilitate the cross-fertilization and the advancement of hybrid methods that exploit concepts and tools drawn from Rule-based programming, Software engineering, Formal methods and Web-oriented research. Nowadays, indeed, many companies and institutions have turned their Web sites into interactive, completely-automated, Web-based applications for, e.g., e-business, e-learning, e-government, and e-health. The increased complexity and the explosive growth of Web systems have made their design and implementation a challenging task. Systematic, formal approaches to their specification and verification make it possible to address the problems of this specific domain by means of automated and effective techniques and tools. In response to this year's call for papers, we received 9 paper submissions. The Program Committee of WWV 2011 collected three reviews for each paper and held an electronic discussion leading to the selection of 7 papers for presentation at the workshop. In addition to the selected papers, the scientific programme included an invited lecture by Elie Najm.
Laura Kovács
Rosario Pugliese
Francesco Tiezzi
francesco.tiezzi@imtlucca.it
2013-05-03T11:36:21Z
2013-05-03T11:36:21Z
http://eprints.imtlucca.it/id/eprint/1568
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1568
2013-05-03T11:36:21Z
The Sensoria Approach Applied to the Finance Case Study
This chapter provides an effective implementation of (part of) the Sensoria approach, specifically modelling and formal analysis of service-oriented software based on mathematically founded techniques. The ‘Finance case study’
is used as a test bed for demonstrating the feasibility and effectiveness of the use of the process calculus COWS and some of its related analysis techniques and tools. In particular, we report the results of an application of a temporal logic and its model checker for expressing and checking functional properties of services and a type system for guaranteeing confidentiality properties of services.
Stefania Gnesi
Rosario Pugliese
Francesco Tiezzi
francesco.tiezzi@imtlucca.it
2013-05-02T14:11:09Z
2013-05-02T14:11:09Z
http://eprints.imtlucca.it/id/eprint/1566
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1566
2013-05-02T14:11:09Z
Proceedings of 8th International Workshop on Automated Specification and Verification of Web Systems (WWV 2012)
This volume contains the final and revised versions of the papers presented at the 8th International Workshop on Automated Specification and Verification of Web Systems (WWV 2012). The workshop was held in Stockholm, Sweden, on June 16, 2012, as part of DisCoTec 2012.
WWV is a yearly workshop that aims at providing an interdisciplinary forum to facilitate the cross-fertilization and the advancement of hybrid methods that exploit concepts and tools drawn from Rule-based programming, Software engineering, Formal methods and Web-oriented research. WWV has a reputation for being a lively, friendly forum for presenting and discussing work in progress. The proceedings have been produced after the symposium to allow the authors to incorporate the feedback gathered during the event in the published papers.
All papers submitted to the workshop were reviewed by at least three Program Committee members or external referees. The Program Committee held an electronic discussion leading to the acceptance of all papers for presentation at the workshop. In addition to the presentation of the contributed papers, the scientific programme included the invited talks by two outstanding speakers: Rocco De Nicola (IMT, Institute for Advanced Studies Lucca, Italy) and José Luiz Fiadeiro (Royal Holloway, United Kingdom).
Josep Silva
Francesco Tiezzi
francesco.tiezzi@imtlucca.it
2013-05-02T14:08:18Z
2013-05-02T14:08:18Z
http://eprints.imtlucca.it/id/eprint/1562
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1562
2013-05-02T14:08:18Z
A Calculus for Orchestration of Web Services
Service-oriented computing, an emerging paradigm for distributed computing based on the use of services, is calling for the development of tools and techniques to build safe and trustworthy systems, and to analyse their behaviour. Therefore, many researchers have proposed to use process calculi, a cornerstone of current foundational research on specification and analysis of concurrent, reactive, and distributed systems. In this paper, we follow this approach and introduce COWS, a process calculus expressly designed for specifying and combining service-oriented applications, while modelling their dynamic behaviour. We show that COWS can model all the phases of the life cycle of service-oriented applications, such as publication, discovery, negotiation, orchestration, deployment, reconfiguration and execution. We illustrate the specification style that COWS supports by means of a large case study from the automotive domain and a number of more specific examples drawn from it.
Rosario Pugliese
Francesco Tiezzi
francesco.tiezzi@imtlucca.it
2013-05-02T14:05:09Z
2013-05-02T14:05:09Z
http://eprints.imtlucca.it/id/eprint/1577
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1577
2013-05-02T14:05:09Z
e-Health for Rural Areas in Developing Countries: Lessons from the Sebokeng Experience
We report the experience gained in an e-Health project in the Gauteng province, in South Africa. A Proof-of-Concept of the project has already been installed in 3 clinics in the Sebokeng township. The project is now going to be applied to 300 clinics in the whole province. This extension of the Proof-of-Concept can, however, give rise to security flaws because of the inclusion of rural areas with unreliable Internet connection. We address this problem and propose a safe solution.
Massimiliano Masi
Rosario Pugliese
Francesco Tiezzi
francesco.tiezzi@imtlucca.it
2013-05-02T13:50:27Z
2013-05-02T13:51:02Z
http://eprints.imtlucca.it/id/eprint/1561
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1561
2013-05-02T13:50:27Z
Security Analysis of Standards-Driven Communication Protocols for Healthcare Scenarios
The importance of the Electronic Health Record (EHR), which stores all healthcare-related data belonging to a patient, has been recognised in recent years by governments, institutions and industry. Initiatives like the Integrating the Healthcare Enterprise (IHE) have been developed for the
definition of standard methodologies for secure and interoperable EHR exchanges among clinics and hospitals. Using the requisites specified by these initiatives, many large scale projects have been set up for enabling healthcare professionals to handle patients’ EHRs. The success of applications developed in these contexts crucially depends on ensuring such security properties as
confidentiality, authentication, and authorization.
In this paper, we first propose a communication
protocol, based on the IHE specifications, for authenticating healthcare professionals and assuring
patients’ safety. By means of a formal analysis
carried out by using the specification language
COWS and the model checker CMC, we reveal a security flaw in the protocol, thus demonstrating that simply adopting international standards does not guarantee the absence of such flaws. We then propose how to emend the IHE
specifications and modify the protocol accordingly.
Finally, we show how to tailor our protocol for application to more critical scenarios with no assumptions on the communication channels. To demonstrate feasibility and effectiveness of our protocols we have fully implemented them.
Massimiliano Masi
Rosario Pugliese
Francesco Tiezzi
francesco.tiezzi@imtlucca.it
2013-05-02T13:36:28Z
2013-05-02T13:36:28Z
http://eprints.imtlucca.it/id/eprint/1575
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1575
2013-05-02T13:36:28Z
Formalisation and Implementation of the XACML Access Control Mechanism
We propose a formal account of XACML, an OASIS standard adhering to the Policy Based Access Control model for the specification and enforcement of access control policies. To clarify all ambiguous and intricate aspects of XACML, we provide it with a more manageable alternative syntax and with a solid semantic ground. This lays the basis
for developing tools and methodologies which allow software engineers to easily and precisely regulate access to resources using policies. To demonstrate feasibility and effectiveness of our approach, we provide a software tool, supporting the specification and evaluation of policies and access requests, whose implementation fully relies on our formal development.
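The policy evaluation machinery described in this abstract can be illustrated with a toy XACML-style example. The rule structure, attribute names, and the permit-overrides combining algorithm below are generic illustrations invented for this sketch; they do not reproduce FACPL's actual syntax or semantics.

```python
# Toy XACML-style policy evaluation with a permit-overrides combining
# algorithm. Rule structure and names are invented for illustration.

def evaluate_rule(rule, request):
    """Return 'permit', 'deny', or 'not-applicable' for one rule."""
    if all(request.get(k) == v for k, v in rule["target"].items()):
        return rule["effect"]
    return "not-applicable"

def evaluate_policy(rules, request):
    """Permit-overrides: any applicable permit wins; otherwise deny if
    any rule denies; otherwise the policy does not apply."""
    decisions = [evaluate_rule(r, request) for r in rules]
    if "permit" in decisions:
        return "permit"
    if "deny" in decisions:
        return "deny"
    return "not-applicable"

policy = [
    {"target": {"role": "doctor", "action": "read"}, "effect": "permit"},
    {"target": {"action": "write"}, "effect": "deny"},
]

print(evaluate_policy(policy, {"role": "doctor", "action": "read"}))  # permit
print(evaluate_policy(policy, {"role": "nurse", "action": "write"}))  # deny
```

A formalisation like the one in the paper pins down exactly such combining behaviour, so that an implementation can be checked against it.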
Massimiliano Masi
Rosario Pugliese
Francesco Tiezzi
francesco.tiezzi@imtlucca.it
2013-05-02T13:30:10Z
2013-05-02T13:30:10Z
http://eprints.imtlucca.it/id/eprint/1563
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1563
2013-05-02T13:30:10Z
Using formal methods to develop WS-BPEL applications
In recent years, WS-BPEL has become a de facto standard language for orchestration of Web Services. However, there are still some well-known difficulties that make programming
in WS-BPEL a tricky task. In this paper, we first point out major loose points of the WS-BPEL specification by means of many examples, some of which are also exploited to test and compare the behaviour of three of the best-known freely available WS-BPEL engines. We show that, as a matter of fact, these engines implement different semantics, which undermines the portability of WS-BPEL programs across different platforms. Then we introduce Blite, a prototypical orchestration language equipped with a formal operational semantics, which is closely inspired by, but simpler than, WS-BPEL. Indeed, Blite is designed around some of WS-BPEL's distinctive features, like partner links, process termination, message correlation, long-running business transactions and compensation handlers. Finally, we present BliteC, a software tool supporting rapid and easy development of WS-BPEL applications via translation of service orchestrations written in Blite into executable WS-BPEL programs. We illustrate our approach by means of a running example borrowed from the official specification of WS-BPEL.
Alessandro Lapadula
Rosario Pugliese
Francesco Tiezzi
francesco.tiezzi@imtlucca.it
2013-05-02T13:25:15Z
2013-05-02T13:25:15Z
http://eprints.imtlucca.it/id/eprint/1576
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1576
2013-05-02T13:25:15Z
Modeling adaptation with a tuple-based coordination language
In recent years, it has been argued that systems and applications, in order to deal with their increasing complexity, should be able to adapt their behavior according to new requirements or environment conditions. In this paper, we present a preliminary investigation aiming at studying how coordination languages and formal methods can contribute to a better understanding, implementation and usage of the mechanisms and techniques for adaptation currently proposed in the literature. Our study relies on the formal coordination language Klaim as a common framework for modeling some adaptation techniques, namely the MAPE-K loop, aspect- and context-oriented programming.
Edmond Gjondrekaj
Michele Loreti
Rosario Pugliese
Francesco Tiezzi
francesco.tiezzi@imtlucca.it
2013-05-02T13:21:35Z
2013-05-02T13:21:35Z
http://eprints.imtlucca.it/id/eprint/1559
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1559
2013-05-02T13:21:35Z
Modeling Adaptation with Klaim
In recent years, it has been argued that systems and applications, in order to deal with their increasing complexity, should be able to adapt their behavior according to new requirements or environment conditions. In this paper, we present an investigation aiming at studying how coordination languages and formal methods can contribute to a better understanding, implementation and use of the mechanisms and techniques for adaptation currently proposed in the literature. Our study relies on the formal coordination language Klaim as a common framework for modeling some well-known adaptation techniques: the IBM MAPE-K loop, the Accord component-based framework for architectural adaptation, and the aspect- and context-oriented programming paradigms. We illustrate our approach through a simple example concerning a data repository equipped with an automated cache mechanism.
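The MAPE-K loop mentioned in this abstract, applied to its running example of a data repository with an automated cache, can be sketched as follows. The thresholds, the knowledge structure, and the adaptation step are invented for illustration; the paper itself models this in the coordination language Klaim, not in executable Python.

```python
# Toy MAPE-K adaptation loop for a cache-equipped data repository,
# loosely inspired by the example in the abstract. All numbers and
# field names are invented for illustration.

class CachedRepository:
    """Managed element: a data repository with an automated cache."""
    def __init__(self, cache_size):
        self.cache_size = cache_size
        self.hits = 0
        self.lookups = 0

    def hit_rate(self):
        return self.hits / self.lookups if self.lookups else 1.0

def mape_k_step(repo, knowledge):
    rate = repo.hit_rate()                       # Monitor the element
    degraded = rate < knowledge["min_hit_rate"]  # Analyze against goals
    plan = ("grow", knowledge["step"]) if degraded else ("keep", 0)  # Plan
    if plan[0] == "grow":                        # Execute the adaptation
        repo.cache_size += plan[1]
    return plan

repo = CachedRepository(cache_size=100)
repo.lookups, repo.hits = 100, 40                # simulate a 40% hit rate
knowledge = {"min_hit_rate": 0.8, "step": 50}    # the "K" of MAPE-K
print(mape_k_step(repo, knowledge), repo.cache_size)  # ('grow', 50) 150
```

The point of the paper is that such monitor/analyze/plan/execute stages can be given a common formal footing, so different adaptation styles become comparable.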
Edmond Gjondrekaj
Michele Loreti
Rosario Pugliese
Francesco Tiezzi
francesco.tiezzi@imtlucca.it
2013-05-02T13:09:59Z
2013-05-02T13:09:59Z
http://eprints.imtlucca.it/id/eprint/1560
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1560
2013-05-02T13:09:59Z
A Logical Verification Methodology for Service-Oriented Computing
We introduce a logical verification methodology for checking behavioural properties of service-oriented computing systems. Service properties are described by means of SocL, a branching-time temporal logic that we have specifically designed to express in an effective way distinctive aspects of services, such as acceptance of a request, provision of a response, and correlation among service requests and responses. Our approach allows service properties to be expressed in such a way that
they can be independent of service domains and specifications. We show an instantiation of our general methodology that uses the formal language COWS to conveniently specify services and the expressly developed software tool CMC to assist the user in the task of verifying SocL formulae over service specifications. We demonstrate feasibility and effectiveness of our methodology by means of the specification and the analysis of a case study in the automotive domain.
Alessandro Fantechi
Stefania Gnesi
Alessandro Lapadula
Franco Mazzanti
Rosario Pugliese
Francesco Tiezzi
francesco.tiezzi@imtlucca.it
2013-05-02T12:47:10Z
2013-05-02T12:47:10Z
http://eprints.imtlucca.it/id/eprint/1574
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1574
2013-05-02T12:47:10Z
Towards Model-Driven Development of Access Control Policies for Web Applications
We introduce a UML-based notation for graphically modeling systems' security aspects in a simple and intuitive way, and a model-driven process that transforms graphical specifications of access control policies into XACML. These XACML policies are then translated into FACPL, a policy language with a formal semantics, and the resulting policies are evaluated by means of a Java-based software tool.
Marianne Busch
Nora Koch
Massimiliano Masi
Rosario Pugliese
Francesco Tiezzi
francesco.tiezzi@imtlucca.it
2013-05-02T12:39:11Z
2013-05-02T12:39:11Z
http://eprints.imtlucca.it/id/eprint/1579
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1579
2013-05-02T12:39:11Z
A standard-driven communication protocol for disconnected clinics in rural areas
The importance of the Electronic Health Record (EHR), which stores all healthcare-related data belonging to a patient, has been recognized in recent years by governments, institutions, and industry. Initiatives like Integrating the Healthcare Enterprise (IHE) have been developed for the definition of standard methodologies for secure and interoperable EHR exchanges among clinics and hospitals. Using the requisites specified by these initiatives, many large-scale projects have been set up to enable healthcare professionals to handle patients' EHRs. Applications deployed in these settings are often considered safety-critical, thus ensuring such security properties as confidentiality, authentication, and authorization is crucial for their success. In this paper, we propose a communication protocol, based on the IHE specifications, for authenticating healthcare professionals and assuring patients' safety in settings where no network connection is available, such as in rural areas of some developing countries. We define a specific threat model, driven by the experience of use cases covered by international projects, and prove that an intruder cannot cause damages to the safety of patients and their data by performing any of the attacks falling within this threat model. To demonstrate the feasibility and effectiveness of our protocol, we have fully implemented it.
Massimiliano Masi
Rosario Pugliese
Francesco Tiezzi
francesco.tiezzi@imtlucca.it
2013-05-02T12:26:20Z
2013-05-02T12:26:20Z
http://eprints.imtlucca.it/id/eprint/1578
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1578
2013-05-02T12:26:20Z
Orchestrating Tuple-based Languages
The World Wide Web can be thought of as a global computing architecture supporting the deployment of distributed networked applications. Currently, such applications can be programmed by resorting mainly to two distinct paradigms: one devised for orchestrating distributed services, and the other designed for coordinating distributed (possibly mobile) agents. In this paper, the issue of designing a programming language aiming at reconciling orchestration and coordination is investigated. Taking as starting point the orchestration calculus Orc and the tuple-based coordination language Klaim, a new formalism is introduced combining concepts and primitives of the original calculi.
To demonstrate feasibility and effectiveness of the proposed approach, a prototype implementation of the new formalism is described and it is then used to tackle a case study dealing with a simplified but realistic electronic marketplace, where a number of on-line stores allow client
applications to access information about their goods and to place orders.
Rocco De Nicola
r.denicola@imtlucca.it
Andrea Margheri
Francesco Tiezzi
francesco.tiezzi@imtlucca.it
2013-05-02T12:21:09Z
2013-05-02T12:21:09Z
http://eprints.imtlucca.it/id/eprint/1573
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1573
2013-05-02T12:21:09Z
Towards a Formal Verification Methodology for Collective Robotic Systems
Edmond Gjondrekaj
Michele Loreti
Rosario Pugliese
Francesco Tiezzi
francesco.tiezzi@imtlucca.it
Carlo Pinciroli
Manuele Brambilla
Mauro Birattari
Marco Dorigo
2013-05-02T12:12:57Z
2014-01-29T11:37:12Z
http://eprints.imtlucca.it/id/eprint/1572
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1572
2013-05-02T12:12:57Z
On a Formal and User-friendly Linguistic Approach to Access Control of Electronic Health Data
The importance of the exchange of Electronic Health Records (EHRs) between hospitals has been recognized by governments and institutions. Due to the sensitivity of the data exchanged, only mature standards and implementations can be chosen to operate. This exchange process is of course under the control of the patient, who decides who has the right to access her personal healthcare data and who does not, by giving her personal privacy consent. Patients' privacy consent is regulated by local legislations, which can vary frequently from region to region. The technology implementing such privacy aspects must be highly adaptable, often resulting in complex security scenarios that cannot be easily managed by patients and software designers. To overcome such security problems, we advocate the use of a linguistic approach that relies on languages for expressing policies with solid mathematical foundations. Our approach is based on FACPL, a policy language we have intentionally designed by taking inspiration from OASIS XACML, the de facto standard used in all projects covering secure EHR transmission protected by patients' privacy consent. FACPL can express policies similar to those expressible by XACML but, differently from XACML, it has an intuitive syntax, a formal semantics and easy-to-use software tools supporting policy development and enforcement. In this paper, we present the potentialities of our approach and outline ongoing work.
Andrea Margheri
Massimiliano Masi
Rosario Pugliese
Francesco Tiezzi
francesco.tiezzi@imtlucca.it
2013-05-02T12:10:24Z
2014-01-29T09:50:32Z
http://eprints.imtlucca.it/id/eprint/1571
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1571
2013-05-02T12:10:24Z
Specifying and analysing reputation systems with coordination languages
Reputation systems are nowadays widely used to support decision making in networked systems. Parties in such systems rate each other and use shared ratings to compute reputation scores that drive their interactions. The existence of reputation systems with remarkable differences calls for formal approaches to their analysis. We present a verification methodology for reputation systems that is based on the use of the coordination language Klaim and related analysis tools. First, we define a parametric Klaim specification of a reputation system that can be instantiated with different reputation models. Then, we consider a stochastic specification obtained by assigning actions random (exponentially distributed) durations. The resulting specification enables quantitative analysis of properties of the considered system. Feasibility and effectiveness of our proposal are demonstrated by reporting on the analysis of two reputation models.
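As background for the kind of reputation model such an analysis instantiates, a standard choice is the beta model, in which a party with a positive and b negative ratings gets the expected reputation (a + 1) / (a + b + 2). The rating data below are invented for illustration; the paper's specifications express such scoring in Klaim rather than in standalone code.

```python
# Minimal beta-distribution reputation score: with `positive` and
# `negative` past ratings, the expected reputation is the mean of a
# Beta(positive + 1, negative + 1) distribution. Ratings are invented.

def beta_reputation(positive, negative):
    return (positive + 1) / (positive + negative + 2)

ratings = [1, 1, 0, 1]      # 1 = satisfactory interaction, 0 = not
score = beta_reputation(sum(ratings), len(ratings) - sum(ratings))
print(round(score, 3))      # 0.667
```

The stochastic Klaim specification then lets one ask quantitative questions about how fast such scores converge under given interaction rates.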
Alessandro Celestini
alessandro.celestini@imtlucca.it
Rocco De Nicola
r.denicola@imtlucca.it
Francesco Tiezzi
francesco.tiezzi@imtlucca.it
2013-05-02T12:05:44Z
2014-01-28T16:20:09Z
http://eprints.imtlucca.it/id/eprint/1569
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1569
2013-05-02T12:05:44Z
Network-aware evaluation environment for reputation systems
Parties of reputation systems rate each other and use ratings to compute reputation scores that drive their interactions. When deciding which reputation model to deploy in a network environment, it is important to find the
most suitable model and to determine its right initial configuration. This calls for an engineering approach for describing, implementing and evaluating reputation systems while taking into account specific aspects of both the reputation systems and the networked environment where they will run. We present a software tool (NEVER) for network-aware evaluation of reputation systems and their rapid prototyping through experiments performed according to user-specified parameters.
Alessandro Celestini
alessandro.celestini@imtlucca.it
Rocco De Nicola
r.denicola@imtlucca.it
Francesco Tiezzi
francesco.tiezzi@imtlucca.it
2013-04-30T14:12:58Z
2013-04-30T14:12:58Z
http://eprints.imtlucca.it/id/eprint/1557
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1557
2013-04-30T14:12:58Z
An arrival-based framework for human mobility modeling
Modeling human mobility is crucial in the performance analysis and simulation of mobile ad hoc networks, where contacts are exploited as opportunities for peer-to-peer message forwarding. The current approach to human mobility modeling has been based on continuously modifying models, trying to embed in them the newest features of mobility properties (e.g., visiting patterns to locations or inter-contact times) as they came up from trace analysis. As a consequence, typically these models are neither flexible (i.e., features of mobility cannot be changed without changing the model) nor controllable (i.e., the exact shape of mobility properties cannot be controlled directly). In order to take into account the above requirements, in this paper we propose a mobility framework whose goal is, starting from the stochastic process describing the arrival patterns of users to locations, to generate pairwise inter-contact times and aggregate inter-contact times featuring a predictable probability distribution. We validate the proposed framework by means of simulations. In addition, assuming that the arrival process of users to locations can be described by a Bernoulli process, we mathematically derive a closed form for the pairwise and aggregate inter-contact times, proving the controllability of the proposed approach in this case.
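The Bernoulli special case mentioned at the end of this abstract can be illustrated by a small simulation: if two users independently visit a meeting place with probability p per time slot, a contact occurs with probability p squared per slot, so pairwise inter-contact times are geometrically distributed with mean 1/p². The parameters below are invented for illustration and the sketch does not reproduce the paper's full arrival framework.

```python
# Toy simulation of pairwise inter-contact times under Bernoulli
# arrivals: both users must be present in the same slot for a contact.
import random

def inter_contact_times(p, slots, seed=0):
    """Record gaps between slots in which both users visit the place."""
    rng = random.Random(seed)
    times, last = [], None
    for t in range(slots):
        if rng.random() < p and rng.random() < p:   # both users arrive
            if last is not None:
                times.append(t - last)
            last = t
    return times

ict = inter_contact_times(p=0.3, slots=100_000)
print(sum(ict) / len(ict))   # close to 1 / 0.3**2 ≈ 11.1
```

This is exactly the controllability claim in the abstract: choosing the arrival process fixes the shape of the inter-contact time distribution in advance.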
Dmytro Karamshuk
dmytro.karamshuk@imtlucca.it
Chiara Boldrini
Marco Conti
Andrea Passarella
2013-04-30T13:43:46Z
2013-04-30T13:44:36Z
http://eprints.imtlucca.it/id/eprint/1556
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1556
2013-04-30T13:43:46Z
Human mobility models for opportunistic networks
Mobile ad hoc networks enable communications between clouds of mobile devices without the need for a preexisting infrastructure. One of their most interesting evolutions are opportunistic networks, whose goal is to also enable communication in disconnected environments, where the general absence of an end-to-end path between the sender and the receiver impairs communication when legacy MANET networking protocols are used. The key idea of OppNets is that the mobility of nodes helps the delivery of messages, because it may connect, asynchronously in time, otherwise disconnected subnetworks. This is especially true for networks whose nodes are mobile devices (e.g., smartphones and tablets) carried by human users, which is the typical OppNets scenario. In such a network where the movements of the communicating devices mirror those of their owners, finding a route between two disconnected devices implies uncovering habits in human movements and patterns in their connectivity (frequencies of meetings, average duration of a contact, etc.), and exploiting them to predict future encounters. Therefore, there is a challenge in studying human mobility, specifically in its application to OppNets research. In this article we review the state of the art in the field of human mobility analysis and present a survey of mobility models. We start by reviewing the most considerable findings regarding the nature of human movements, which we classify along the spatial, temporal, and social dimensions of mobility. We discuss the shortcomings of the existing knowledge about human movements and extend it with the notion of predictability and patterns. We then survey existing approaches to mobility modeling and fit them into a taxonomy that provides the basis for a discussion on open problems and further directions for research on modeling human mobility.
Dmytro Karamshuk
dmytro.karamshuk@imtlucca.it
Chiara Boldrini
Marco Conti
Andrea Passarella
2013-04-19T11:54:44Z
2014-03-10T10:42:41Z
http://eprints.imtlucca.it/id/eprint/1553
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1553
2013-04-19T11:54:44Z
Revisiting bisimilarity and its modal logic for nondeterministic and probabilistic processes
We consider PML, the probabilistic version of Hennessy-Milner logic introduced by Larsen and Skou to characterize bisimilarity over probabilistic processes without internal nondeterminism. We provide two different interpretations for PML by considering nondeterministic and probabilistic processes as models, and we exhibit two new bisimulation-based equivalences that are in full agreement with those interpretations. Our new equivalences include
as coarsest congruences the two bisimilarities for nondeterministic and probabilistic processes proposed by Segala and Lynch. The latter equivalences are instead in agreement with two versions of Hennessy-Milner logic extended with an additional probabilistic operator
interpreted over state distributions rather than over individual states. Thus, our new interpretations of PML and the corresponding new bisimilarities offer a uniform framework for reasoning on processes that are purely nondeterministic or reactive probabilistic or are mixing nondeterminism and probability in an alternating/non-alternating way.
Marco Bernardo
Rocco De Nicola
r.denicola@imtlucca.it
Michele Loreti
2013-04-12T13:32:30Z
2013-04-12T13:32:30Z
http://eprints.imtlucca.it/id/eprint/1542
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1542
2013-04-12T13:32:30Z
Quantitative Multirun Security under Active Adversaries
We study the security of probabilistic programs under the assumption that an active adversary controls part of the program's inputs, and the program can be run several times. The adversary's targets are the high, confidential inputs to the program. We model the program behaviour as an information-theoretic channel and define a notion of quantitative multi-run leakage. We characterize in a simple way both the asymptotic multi-run leakage and its exponential growth rate, depending on the number of runs; the characterization is given in terms of the program's channel matrix. We then study the case where a declassification policy is specified: we define a measure of the degree of violation of the policy and characterize its asymptotic multi-run behaviour, thus allowing for a combined analysis of what and how much information is leaked. We finally study the case where a user is faced with the task of assessing the undue influence of an active adversary on a deployed program or system, of which only a (black-box) specification is available.
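A worked example of the kind of channel-matrix computation involved: with a uniform prior and the secret fixed across runs, a min-entropy-style posterior vulnerability after n independent runs is the sum over observation sequences of the maximum joint probability over secrets, and leakage is the log ratio against the prior vulnerability. The matrix and this particular measure are illustrative assumptions, not necessarily the exact definitions used in the paper.

```python
# Multi-run leakage from a channel matrix C[x][y] (rows: secret inputs,
# columns: observables), uniform prior, secret unchanged across runs.
import itertools, math

def multirun_leakage(C, n):
    """log2(V_n / V_0) with V_n = sum over y1..yn of
    max_x prior * prod C[x][yi], and V_0 = prior (uniform)."""
    rows, cols = len(C), len(C[0])
    prior = 1.0 / rows
    V = sum(
        max(prior * math.prod(C[x][y] for y in obs) for x in range(rows))
        for obs in itertools.product(range(cols), repeat=n)
    )
    return math.log2(V / prior)

C = [[0.8, 0.2],   # a slightly leaky binary channel (invented numbers)
     [0.3, 0.7]]
for n in (1, 2, 4):
    print(n, round(multirun_leakage(C, n), 3))   # leakage grows with n
```

Repeated runs let the adversary accumulate evidence, which is why the growth rate in the number of runs is the quantity the paper characterizes.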
Michele Boreale
Francesca Pampaloni
francesca.pampaloni@imtlucca.it
2013-03-21T08:05:21Z
2014-03-05T15:50:54Z
http://eprints.imtlucca.it/id/eprint/1531
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1531
2013-03-21T08:05:21Z
Prioritisation of polybrominated diphenyl ethers (PBDEs) by using the QSPR-THESAURUS web tool
The prioritisation of chemical compounds is important for the identification of those chemicals that represent the highest threat to the environment. As part of the CADASTER project (http://www.cadaster.eu), we developed an online web tool, which allows the calculation of the environmental
risk of chemical compounds from a web interface. The environmental fate of compounds in the aquatic environment is assessed by using the SimpleBox model, while adverse effects on the aquatic environment are assessed by the Species Sensitivity Distribution approach. The main purpose of this web tool is to exemplify the use of quantitative structure–activity relationships (QSARs) to support risk assessment. A case study of QSAR integrated risk assessment of 209 polybrominated diphenyl ethers (PBDEs) demonstrates
the treatment and influence of uncertainty in the predicted physicochemical and toxicity parameters in probabilistic risk assessment.
Igor V. Tetko
Pantelis Sopasakis
pantelis.sopasakis@imtlucca.it
Prakash Kunwar
Stefan Brandmaier
Sergii Novotarskyi
Larisa Charochkina
Volodymyr Prokopenko
Willie J.G.M. Peijnenburg
2013-03-19T08:19:38Z
2013-04-19T12:43:33Z
http://eprints.imtlucca.it/id/eprint/1537
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1537
2013-03-19T08:19:38Z
Network-aware Evaluation Environment for Reputation Systems
Parties of reputation systems rate each other and use ratings to compute reputation scores that drive their interactions. When deciding which reputation model to deploy in a network environment, it is important to find the
most suitable model and to determine its right initial configuration. This calls for an engineering approach for describing, implementing and evaluating reputation
systems while taking into account specific aspects of both the reputation systems and the networked environment where they will run. We present a software tool (NEVER) for network-aware evaluation of reputation systems and their rapid prototyping through experiments performed according to user-specified parameters. To demonstrate effectiveness of NEVER, we analyse reputation models based on the beta distribution and the maximum likelihood estimation.
Alessandro Celestini
alessandro.celestini@imtlucca.it
Rocco De Nicola
r.denicola@imtlucca.it
Francesco Tiezzi
francesco.tiezzi@imtlucca.it
2013-03-18T07:40:52Z
2016-02-12T13:26:47Z
http://eprints.imtlucca.it/id/eprint/1535
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1535
2013-03-18T07:40:52Z
Adaptable transition systems
We present an essential model of adaptable transition systems inspired by white-box approaches to adaptation and based on foundational models of component-based systems. The key feature of adaptable transition systems is control propositions, which impose a clear separation between ordinary, functional behaviours and adaptive ones. We instantiate our approach on interface automata, yielding adaptable interface automata, but it may be instantiated on other foundational models of component-based systems as well. We discuss how control propositions can be exploited in the specification and analysis of adaptive systems, focusing on various notions proposed in the literature, like adaptability, control loops, and control synthesis.
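The control-proposition idea can be rendered in a toy form: a transition system whose transitions are split into ordinary and adaptive ones, where one simple notion of adaptability is the reachability of an adaptive transition. The states, labels, and the reachability check below are invented for illustration; the paper works with interface automata, not plain labelled graphs.

```python
# Toy adaptable transition system: transitions whose labels carry a
# control proposition are "adaptive"; we check whether adaptation is
# reachable from the initial state. All names are invented.

def reachable_adaptive(transitions, start, is_adaptive):
    """Search for a reachable transition with an adaptive label."""
    seen, frontier = {start}, [start]
    while frontier:
        s = frontier.pop()
        for (src, label, dst) in transitions:
            if src == s:
                if is_adaptive(label):
                    return True
                if dst not in seen:
                    seen.add(dst)
                    frontier.append(dst)
    return False

# Ordinary behaviour: serve requests; adaptive behaviour: switch mode.
ts = [("idle", "request", "busy"),
      ("busy", "reply", "idle"),
      ("busy", "adapt:switch_mode", "idle")]
print(reachable_adaptive(ts, "idle", lambda l: l.startswith("adapt:")))  # True
```

Separating the two kinds of transitions is what lets standard analyses (reachability, control synthesis) be re-read as statements about adaptation.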
Roberto Bruni
Andrea Corradini
Fabio Gadducci
Alberto Lluch-Lafuente
alberto.lluch@imtlucca.it
Andrea Vandin
andrea.vandin@imtlucca.it
2013-03-07T14:02:16Z
2013-03-12T14:58:11Z
http://eprints.imtlucca.it/id/eprint/1530
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1530
2013-03-07T14:02:16Z
Stagewise K-SVD to Design Efficient Dictionaries for Sparse Representations
The problem of training a dictionary for sparse representations from a given dataset is receiving a lot of attention mainly due to its applications in the fields of coding, classification and pattern recognition. One of the open questions is how to choose the number of atoms in the dictionary: if the dictionary is too small then the representation errors are big and if the dictionary is too big then using it becomes computationally expensive. In this letter, we solve the problem of computing efficient dictionaries of reduced size by a new design method, called Stagewise K-SVD, which is an adaptation of the popular K-SVD algorithm. Since K-SVD performs very well in practice, we use K-SVD steps to gradually build dictionaries that fulfill an imposed error constraint. The conceptual simplicity of the method makes it easy to apply, while the numerical experiments highlight its efficiency for different overcomplete dictionaries.
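The stagewise idea (grow the dictionary with K-SVD-style updates until an imposed error constraint is met) can be sketched in a drastically simplified form. To stay short, the sketch restricts sparse coding to 1-sparse representations, so the K-SVD atom update reduces to an SVD of the signals assigned to each atom; all sizes, tolerances, and data are invented, and this is not the authors' actual algorithm.

```python
# Toy "stagewise" dictionary design: start with one atom, run a few
# K-SVD-style sweeps, and add atoms until the relative representation
# error drops below a target. 1-sparse coding only; invented parameters.
import numpy as np

def one_sparse_code(D, Y):
    """Assign each column of Y to its most correlated atom of D."""
    return np.abs(D.T @ Y).argmax(axis=0)

def ksvd_update(D, Y, assign):
    """Each atom becomes the leading left singular vector of the
    signals currently assigned to it (the 1-sparse K-SVD update)."""
    for k in range(D.shape[1]):
        cols = Y[:, assign == k]
        if cols.size:
            U, _, _ = np.linalg.svd(cols, full_matrices=False)
            D[:, k] = U[:, 0]
    return D

def stagewise_ksvd(Y, tol, max_atoms=20, sweeps=5):
    D = Y[:, [0]] / np.linalg.norm(Y[:, 0])        # start with one atom
    while True:
        for _ in range(sweeps):
            D = ksvd_update(D, Y, one_sparse_code(D, Y))
        assign = one_sparse_code(D, Y)
        A = D[:, assign]                           # atom used per signal
        Yhat = A * (A * Y).sum(axis=0)             # 1-sparse approximations
        rel_err = np.linalg.norm(Y - Yhat) / np.linalg.norm(Y)
        if rel_err <= tol or D.shape[1] >= max_atoms:
            return D, rel_err
        worst = np.linalg.norm(Y - Yhat, axis=0).argmax()   # grow: seed
        D = np.hstack([D, Y[:, [worst]] / np.linalg.norm(Y[:, worst])])

rng = np.random.default_rng(1)
dirs = rng.standard_normal((8, 3))                 # 3 true directions in R^8
dirs /= np.linalg.norm(dirs, axis=0)
Y = dirs[:, rng.integers(0, 3, 200)] * rng.uniform(1, 2, 200)
Y += 0.01 * rng.standard_normal(Y.shape)
D, err = stagewise_ksvd(Y, tol=0.05)
print(D.shape[1], round(err, 3))                   # a small dictionary suffices
```

The design choice illustrated is the one argued for in the abstract: rather than fixing the dictionary size in advance, the error constraint decides when to stop adding atoms.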
Cristian Rusu
cristian.rusu@imtlucca.it
Bogdan Dumitrescu
2013-03-07T13:48:00Z
2013-03-12T14:58:11Z
http://eprints.imtlucca.it/id/eprint/1529
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1529
2013-03-07T13:48:00Z
Iterative reweighted l1 design of sparse FIR filters
Sparse FIR filters have lower implementation complexity than full filters, while keeping a good performance level. This paper describes a new method for designing 1D and 2D sparse filters in the minimax sense using a mixture of reweighted l1 minimization and greedy iterations. The combination proves to be quite efficient; after the reweighted l1 minimization stage introduces zero coefficients in bulk, a small number of greedy iterations serve to eliminate a few extra coefficients. Experimental results and a comparison with the latest methods show that the proposed method performs very well in terms of both running speed and quality of the solutions obtained.
Cristian Rusu
cristian.rusu@imtlucca.it
Bogdan Dumitrescu
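The reweighted-l1 stage can be illustrated on a generic sparse least-squares problem; the paper's minimax (Chebyshev) formulation and the final greedy clean-up stage are omitted here. A hedged sketch, with `lam` and `eps` as assumed tuning parameters:

```python
import numpy as np

def soft(x, t):
    """Elementwise soft-thresholding operator."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def reweighted_l1(A, b, lam=0.1, outer=5, inner=100, eps=1e-3):
    """Reweighted-l1 sketch: repeatedly solve a weighted LASSO by ISTA,
    with weights 1/(|x_i| + eps) so that small coefficients are pushed
    to exactly zero on the next pass."""
    L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    w = np.ones_like(x)
    for _ in range(outer):
        for _ in range(inner):
            # ISTA step with per-coefficient thresholds lam * w / L.
            x = soft(x - A.T @ (A @ x - b) / L, lam * w / L)
        w = 1.0 / (np.abs(x) + eps)    # re-weight: penalize small entries more
    return x
```

The reweighting loop is what "introduces zero coefficients in bulk": after each pass, near-zero coefficients receive very large weights and are eliminated.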
2013-03-07T13:30:21Z
2013-03-12T14:58:11Z
http://eprints.imtlucca.it/id/eprint/1528
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1528
2013-03-07T13:30:21Z
Fast design of efficient dictionaries for sparse representations
One of the central issues in the field of sparse representations is the design of overcomplete dictionaries with a fixed sparsity level from a given dataset. This article describes a fast and efficient procedure for the design of such dictionaries. The method implements the following ideas: a reduction technique is applied to the initial dataset to speed up the upcoming procedure; the actual training procedure runs a more sophisticated iterative expanding procedure based on K-SVD steps. Numerical experiments on image data show the effectiveness of the proposed design strategy.
Cristian Rusu
cristian.rusu@imtlucca.it
2013-03-07T13:20:08Z
2013-03-12T14:58:11Z
http://eprints.imtlucca.it/id/eprint/1527
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1527
2013-03-07T13:20:08Z
Clustering before training large datasets - Case study: K-SVD
Training and using overcomplete dictionaries has been the subject of many developments in the area of signal processing and sparse representations. The main idea is to train a dictionary that is able to achieve good sparse representations of the items contained in a given dataset. The most popular approach is the K-SVD algorithm, and in this paper we study its application to large datasets. The main interest is to speed up the training procedure while keeping the representation errors close to some specific values. This goal is reached by using a clustering procedure, called here T-mindot, which reduces the size of the dataset but keeps the most representative data items and a measure of their importance. Experimental simulations compare the running times and representation errors of the training method with and without the clustering procedure, and they clearly show how effective T-mindot is.
Cristian Rusu
cristian.rusu@imtlucca.it
2013-03-07T13:07:47Z
2013-03-12T14:58:11Z
http://eprints.imtlucca.it/id/eprint/1525
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1525
2013-03-07T13:07:47Z
Clustering large datasets - bounds and applications with K-SVD
This article presents a clustering method called T-mindot that is used to reduce the dimension of datasets in order to diminish the running time of training algorithms. The T-mindot method is applied before the K-SVD algorithm in the context of sparse representations for the design of overcomplete dictionaries. Simulations run on image data show the efficiency of the proposed method, which leads to a substantial reduction of the execution time of K-SVD while keeping the representation performance of the dictionaries designed using the original dataset.
Cristian Rusu
cristian.rusu@imtlucca.it
2013-03-07T12:49:58Z
2013-03-12T14:58:11Z
http://eprints.imtlucca.it/id/eprint/1524
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1524
2013-03-07T12:49:58Z
Classification of music genres using sparse representations in overcomplete dictionaries
This paper presents a simple, but efficient and robust, method for music genre classification that utilizes sparse representations in overcomplete dictionaries. The training step involves creating dictionaries, using the K-SVD algorithm, in which data corresponding to a particular music genre has a sparse representation. In the classification step, the Orthogonal Matching Pursuit (OMP) algorithm is used to separate feature vectors that consist only of Linear Predictive Coding (LPC) coefficients. The paper analyses in detail a popular case study from the literature, the ISMIR 2004 database. Using the presented method, the correct classification rate over the 6 music genres is 85.59%, a result comparable with the best results published so far.
Cristian Rusu
cristian.rusu@imtlucca.it
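The classification scheme above can be sketched as classification by smallest representation residual: one dictionary per genre, and a test feature vector is assigned to the genre whose dictionary reconstructs it best. In this hypothetical sketch, plain least squares stands in for the OMP solver and the K-SVD-trained dictionaries used in the paper:

```python
import numpy as np

def classify_by_residual(dictionaries, feature):
    """Assign `feature` to the class whose dictionary represents it
    with the smallest least-squares residual (stand-in for OMP)."""
    errors = []
    for D in dictionaries:
        coeffs, *_ = np.linalg.lstsq(D, feature, rcond=None)
        errors.append(np.linalg.norm(feature - D @ coeffs))
    return int(np.argmin(errors))
```
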
2013-03-07T11:29:24Z
2014-01-08T10:27:41Z
http://eprints.imtlucca.it/id/eprint/1523
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1523
2013-03-07T11:29:24Z
Revisiting Trace and Testing Equivalences for Nondeterministic and Probabilistic Processes
Two of the most studied extensions of trace and testing equivalences to nondeterministic and probabilistic processes induce distinctions that have been questioned and lack desirable properties. Probabilistic trace-distribution equivalence differentiates systems that can perform the same set of traces with the same probabilities, and is not a congruence for parallel composition. Probabilistic testing equivalence, which relies only on extremal success probabilities, is backward compatible with testing equivalences for restricted classes of processes, such as fully nondeterministic processes or generative/reactive probabilistic processes, only if specific sets of tests are admitted. In this paper, new versions of probabilistic trace and testing equivalences are presented for the general class of nondeterministic and probabilistic processes. The new trace equivalence is coarser because it compares execution probabilities of single traces instead of entire trace distributions, and turns out to be compositional. The new testing equivalence requires matching all resolutions of nondeterminism on the basis of their success probabilities, rather than comparing only extremal success probabilities, and considers success probabilities in a trace-by-trace fashion, rather than cumulatively on entire resolutions. It is fully backward compatible with testing equivalences for restricted classes of processes; as a consequence, the trace-by-trace approach uniformly captures the standard probabilistic testing equivalences for generative and reactive probabilistic processes. The paper discusses the new equivalences in full detail and provides a simple spectrum that relates them with existing ones in the setting of nondeterministic and probabilistic processes.
Marco Bernardo
Rocco De Nicola
r.denicola@imtlucca.it
Michele Loreti
2013-03-06T13:57:58Z
2014-06-16T10:27:39Z
http://eprints.imtlucca.it/id/eprint/1519
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1519
2013-03-06T13:57:58Z
A novel visualization tool for art history and conservation: automated colorization of black and white archival photographs of works of art
This paper describes the use of a customized algorithm for the colorization of historical black and white photographs documenting earlier states of paintings. This study specifically focuses on Pablo Picasso's mid-century Mediterranean masterpiece La Joie de Vivre, 1946 (Musée Picasso, Antibes, France). The custom-designed algorithm allows computer-controlled spreading of color information on a digital image of black and white historical photographs to obtain accurate color renditions. Expert observation of the present state of the painting, coupled with stratigraphic information from cross sections allows the attribution of color information to selected pixels in the digitized images. The algorithm uses the localized color information and the grayscale intensities of the black and white historical photographs to formulate a set of equations for the missing color values of the remaining pixels. The computational resolution of such equations allows an accurate colorization that preserves brushwork and shading. This new method is proposed as a valuable alternative to the use of commercial software to apply flat areas of color, which is currently the most common practice for colorization efforts in the conservation community. Availability of such colorized images enhances the art-historical understanding of the works and might lead to better-informed treatment.
Sotirios A. Tsaftaris
sotirios.tsaftaris@imtlucca.it
Francesca Casadio
J.L. Andral
Aggelos K. Katsaggelos
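The colorization procedure above (known color at a few pixels, grayscale intensities guiding how color spreads to the rest) can be illustrated on a toy 1-D signal. This is a hedged, Levin-style analogue, not the paper's custom algorithm: the Gaussian intensity weights with parameter `sigma` and the Jacobi-style propagation loop are assumptions, and the real method solves a 2-D system over an entire image:

```python
import numpy as np

def colorize_1d(gray, known, sigma=0.1, iters=500):
    """Propagate scribbled color values across a 1-D grayscale signal:
    each unknown pixel is repeatedly replaced by a weighted average of
    its neighbours, with weights favouring similar grayscale intensity."""
    n = len(gray)
    color = np.zeros(n)
    for i, v in known.items():
        color[i] = v                       # pin the user-provided colors
    for _ in range(iters):
        new = color.copy()
        for i in range(n):
            if i in known:
                continue
            nbrs = [j for j in (i - 1, i + 1) if 0 <= j < n]
            # Similar grayscale intensity -> large weight.
            w = np.array([np.exp(-(gray[i] - gray[j]) ** 2 / (2 * sigma ** 2))
                          for j in nbrs])
            new[i] = w @ color[nbrs] / w.sum()
        color = new
    return color
```

Because the weights collapse across intensity edges, color fills each flat region from its scribble without bleeding across boundaries, which is how brushwork and shading are preserved.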
2013-03-06T11:22:15Z
2013-03-12T09:32:21Z
http://eprints.imtlucca.it/id/eprint/1518
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1518
2013-03-06T11:22:15Z
Detecting ACS and Identifying Acute Ischemic Territories with Cardiac Phase-Resolved BOLD MRI at Rest
Sotirios A. Tsaftaris
sotirios.tsaftaris@imtlucca.it
Xiangzhi Zhou
Richard Tang
J. Min
Debiao Li
Rohan Dharmakumar
2013-03-06T10:56:25Z
2013-03-12T09:32:21Z
http://eprints.imtlucca.it/id/eprint/1517
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1517
2013-03-06T10:56:25Z
Chronic Iron Deposition following Acute Hemorrhagic Myocardial Infarction: A Cardiovascular Magnetic Resonance Study
Introduction - Intramyocardial hemorrhage frequently occurs in large reperfused myocardial infarctions (MI). However, its long-term fate remains unexplored.
Hypothesis - We hypothesize that intramyocardial hemorrhage, secondary to reperfused MI, results in chronic iron deposition within infarcted territories.
Methods - We studied 15 patients by Cardiovascular Magnetic Resonance (CMR) T2* mapping (1.5T) on day 3 and 6 months after successful percutaneous coronary intervention for first STEMI. Using the same CMR protocol, we also studied 20 canines, on days 3 and 56 post ischemia-reperfusion injury, of which 3 animals received sham procedures. Subsequently, canine hearts were explanted, imaged ex-vivo, and samples of hemorrhagic infarcts (Hemo+), non-hemorrhagic infarcts (Hemo-), remote and sham myocardium were isolated, sectioned and mass spectrometry was performed.
Results - Eleven patients had Hemo+ (verified by T2* CMR on day 3) and their scar tissue T2* values remained significantly lower after 6 months, when compared to Hemo- and remote myocardium (Fig 1; p<0.001). In canines, Hemo+ territories showed a significant T2* reduction compared to the other groups (Fig 2; p<0.001). Mean iron content ([Fe]) of Hemo+ on day 56 was 10-fold greater than that observed in control groups (p<0.001), while no differences were observed among the control groups (p=0.14). A strong linear relationship was observed between log(T2*) and -log([Fe]) (R2 = 0.74; p<0.001) on day 56.
Conclusion - Hemorrhagic MI leads to chronic iron depositions within the infarct zones. Consequences of chronic iron deposition within the scar tissue remain to be investigated.
Avinash Kali
Andreas Kumar
Ivan Cokic
Sotirios A. Tsaftaris
sotirios.tsaftaris@imtlucca.it
Matthias G Friedrich
Rohan Dharmakumar
2013-03-06T10:33:04Z
2016-03-18T10:52:59Z
http://eprints.imtlucca.it/id/eprint/1516
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1516
2013-03-06T10:33:04Z
Tracking-optimal error control schemes for H.264 compressed video for vehicle surveillance
In this paper we present a transportation video coding and transmission system specifically tailored to automated vehicle tracking applications. By taking into account the video characteristics and the lossy nature of the wireless channels, we propose error control approaches to enhance tracking accuracy. The proposed system is shown to give performance improvement over the current state-of-the-art system and yields bitrate savings of up to 60%.
Zhaofu Chen
Eren Soyak
Sotirios A. Tsaftaris
sotirios.tsaftaris@imtlucca.it
Aggelos K. Katsaggelos
2013-03-06T10:07:58Z
2013-03-12T09:32:21Z
http://eprints.imtlucca.it/id/eprint/1515
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1515
2013-03-06T10:07:58Z
Mouse neuroimaging phenotyping in the cloud
The combined use of mice that have genetic mutations (transgenic mouse models) of human pathology and advanced neuroimaging methods (such as MRI) has the potential to radically change how we approach disease understanding, diagnosis and treatment. Morphological changes occurring in the brain of transgenic animals as a result of the interaction between environment and genotype, can be assessed using advanced image analysis methods, an effort described as “mouse brain phenotyping”. However, the computational methods required for the analysis of high-resolution brain images are demanding. In this paper, we propose a computationally effective cloud-based implementation of morphometric analysis of high-resolution mouse brain datasets. We show that the proposed approach is highly scalable and suited for a variety of methods for MR-based brain phenotyping. The proposed approach is easy to deploy, and could become an alternative for laboratories that may require instant access to large high performance computing infrastructure.
Massimo Minervini
massimo.minervini@imtlucca.it
Mario Damiano
Valter Tucci
Angelo Bifone
Alessandro Gozzi
Sotirios A. Tsaftaris
sotirios.tsaftaris@imtlucca.it
2013-03-06T09:58:23Z
2016-04-05T12:10:18Z
http://eprints.imtlucca.it/id/eprint/1514
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1514
2013-03-06T09:58:23Z
Neuroimaging Evidence of Major Morpho-Anatomical and Functional Abnormalities in the BTBR T+TF/J Mouse Model of Autism
Luca Dodero
Francesco Sforazzini
Alberto Galbusera
Mario Damiano
Sotirios A. Tsaftaris
sotirios.tsaftaris@imtlucca.it
Angelo Bifone
Maria Luisa Scattoni
Alessandro Gozzi
2013-03-06T09:26:36Z
2013-03-12T09:32:21Z
http://eprints.imtlucca.it/id/eprint/1513
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1513
2013-03-06T09:26:36Z
Acute reperfusion intramyocardial hemorrhage leads to regional chronic iron deposition in the heart
Intramyocardial hemorrhage commonly occurs in large reperfused myocardial infarctions. However, its long-term fate remains unexplored. We hypothesized that acute reperfusion intramyocardial hemorrhage leads to chronic iron deposition.
Avinash Kali
Ivan Cokic
Andreas Kumar
Sotirios A. Tsaftaris
sotirios.tsaftaris@imtlucca.it
Richard Tang
Matthias G Friedrich
Rohan Dharmakumar
2013-03-06T09:20:24Z
2013-03-12T09:32:21Z
http://eprints.imtlucca.it/id/eprint/1512
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1512
2013-03-06T09:20:24Z
Acute Hemorrhagic Myocardial Infarction Leads to Localized Chronic Iron Deposition: A CMR Study
Avinash Kali
Ivan Cokic
Andreas Kumar
Sotirios A. Tsaftaris
sotirios.tsaftaris@imtlucca.it
Richard Tang
Matthias G Friedrich
Rohan Dharmakumar
2013-03-06T08:41:56Z
2013-03-12T09:32:21Z
http://eprints.imtlucca.it/id/eprint/1511
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1511
2013-03-06T08:41:56Z
Active Contour Model driven by Globally Signed Region Pressure Force
Mohammed Abdelsamea
mohammed.abdelsamea@imtlucca.it
Sotirios A. Tsaftaris
sotirios.tsaftaris@imtlucca.it
2013-03-06T08:36:51Z
2013-09-17T10:35:52Z
http://eprints.imtlucca.it/id/eprint/1510
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1510
2013-03-06T08:36:51Z
Application-Aware Image Compression for Low Cost and Distributed Plant Phenotyping
Plant phenotyping investigates how a plant's genome, interacting with the environment, affects the observable traits of a plant (phenome). It is becoming increasingly important in our quest towards efficient and sustainable agriculture. While sequencing the genome is becoming increasingly efficient, acquiring phenotype information has remained largely of low throughput, since high-throughput solutions are costly and not widespread. A distributed approach could provide a low-cost solution, offering high accuracy and throughput. A sensor of low computational power acquires time-lapse images of plants and sends them to an analysis system with higher computational and storage capacity (e.g., a service running on a cloud infrastructure). However, such a system requires the transmission of imaging data from sensor to receiver, which necessitates their lossy compression to reduce bandwidth requirements. In this paper, we propose an application-aware image compression approach where the sensor is aware of its context (i.e., imaging plants) and takes advantage of the feedback from the receiver to focus bitrate on regions of interest (ROI). We use JPEG 2000 with ROI coding, and thus remain standard compliant, and offer a solution that is low cost and has low computational requirements. We evaluate our solution on several images from Arabidopsis thaliana phenotyping experiments, and we show that, for both traditional metrics (such as PSNR) and application-aware metrics, the proposed solution achieves a 70% reduction in bitrate at equivalent performance.
Massimo Minervini
massimo.minervini@imtlucca.it
Sotirios A. Tsaftaris
sotirios.tsaftaris@imtlucca.it
2013-03-05T15:03:04Z
2013-03-12T09:32:21Z
http://eprints.imtlucca.it/id/eprint/1509
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1509
2013-03-05T15:03:04Z
Chronic Manifestation of Post-Reperfusion Intramyocardial Hemorrhage as Regional Iron Deposition: A Cardiovascular MR Study with Ex-vivo Validation
Background—Intramyocardial hemorrhage frequently accompanies large reperfused myocardial infarctions. However, its influence on the make-up and the ensuing effect on the infarcted tissue during the chronic phase remain unexplored.
Methods and Results—Patients (n = 15; 3 women), recruited after successful PCI for first ST-elevation myocardial infarction, underwent Cardiovascular Magnetic Resonance (CMR) imaging on day 3 and month 6 post-PCI. Patients with hemorrhagic (Hemo+) infarctions, as determined by T2* CMR on day 3 (n = 11), showed persistent T2* losses co-localized with scar tissue on the follow-up scans, suggesting chronic iron deposition. T2* values of Hemo+ territories were significantly lower than those of non-hemorrhagic (Hemo-) and remote territories (p<0.001); however, T2* values of Hemo- and remote territories were not different (p=0.51). Canines (n = 20), subjected to ischemia-reperfusion (I/R) injury (n = 14), underwent CMR on days 3 and 56 post I/R injury. Similarly, sham-operated animals (Shams; n = 3) were imaged using CMR at similar time points. Subsequently, hearts were explanted, imaged ex-vivo, and samples of Hemo+, Hemo-, remote and Sham myocardium were isolated and stained. The extent of iron deposition ([Fe]) within each sample was measured using mass spectrometry. Hemo+ infarcts showed significant T2* losses compared to the other (control) groups (p<0.001), and Perl's stain confirmed localized iron deposition. Mean [Fe] of Hemo+ was nearly an order of magnitude greater than that of the control groups (p<0.001), but no significant differences were observed among the control groups. A strong linear relationship was observed between log(T2*) and -log([Fe]) (R2=0.7; p<0.001). The monoclonal antibody Mac387 stains, along with Perl's stains, showed preferential localization of newly recruited macrophages at the site of chronic iron deposition.
Conclusions—Hemorrhagic myocardial infarction can lead to iron depositions within the infarct zones, which can be a source of prolonged inflammatory burden in the chronic phase of myocardial infarction.
Avinash Kali
Andreas Kumar
Ivan Cokic
Richard Tang
Sotirios A. Tsaftaris
sotirios.tsaftaris@imtlucca.it
Matthias G Friedrich
Rohan Dharmakumar
2013-03-05T14:57:29Z
2013-03-12T09:32:21Z
http://eprints.imtlucca.it/id/eprint/1508
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1508
2013-03-05T14:57:29Z
Detecting Myocardial Ischemia at Rest with Cardiac Phase-Resolved BOLD CMR
Background—Fast, noninvasive identification of ischemic territories at rest (prior to tissue-specific changes) and assessment of functional status can be valuable in the management of severe coronary artery disease. This study investigated the utility of cardiac phase-resolved Blood-Oxygen-Level-Dependent (CP-BOLD) CMR in detecting myocardial ischemia at rest secondary to severe coronary artery stenosis.
Methods and Results—CP-BOLD, standard-cine, and T2-weighted images were acquired in canines (n=11) at baseline and within 20 minutes of ischemia induction (severe LAD stenosis) at rest. Following 3 hours of ischemia, LAD stenosis was removed and T2-weighted and late-gadolinium-enhancement (LGE) images were acquired. From standard-cine and CP-BOLD images, End-Systolic (ES) and End-Diastolic (ED) myocardium were segmented. Affected and remote sections of the myocardium were identified from post-reperfusion LGE images. S/D, the quotient of mean ES and ED signal intensities (on CP-BOLD and standard-cine), was computed for affected and remote segments at baseline and ischemia. Ejection fraction (EF) and segmental wall-thickening (sWT) were derived from CP-BOLD images at baseline and ischemia. On CP-BOLD images: S/D was greater than 1 (remote and affected territories) at baseline; S/D was diminished only in affected territories during ischemia and the findings were statistically significant (ANOVA, post-hoc p<0.01). The dependence of S/D on ischemia was not observed in standard-cine images. Computer simulations confirmed the experimental findings. ROC analysis showed that S/D identifies affected regions with similar performance (AUC:0.87) as EF (AUC:0.89) and sWT (AUC:0.75).
Conclusions—Preclinical studies and computer simulations showed that CP-BOLD CMR could be useful in detecting myocardial ischemia at rest. Patient studies are needed for clinical translation.
Sotirios A. Tsaftaris
sotirios.tsaftaris@imtlucca.it
Xiangzhi Zhou
Richard Tang
Debiao Li
Rohan Dharmakumar
2013-03-05T13:49:40Z
2014-08-08T10:37:54Z
http://eprints.imtlucca.it/id/eprint/1505
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1505
2013-03-05T13:49:40Z
Image based plant phenotyping with incremental learning and active contours
Plant phenotyping investigates how a plant's genome, interacting with the environment, affects the observable traits of a plant (phenome). It is becoming increasingly important in our quest towards efficient and sustainable agriculture. While sequencing the genome is becoming increasingly efficient, acquiring phenotype information has remained largely of low throughput. Current solutions for automated image-based plant phenotyping rely either on semi-automated or manual analysis of the imaging data, or on expensive and proprietary software which accompanies costly hardware infrastructure. While some attempts have been made to create software applications that enable the analysis of such images in an automated fashion, most solutions are tailored to particular acquisition scenarios and restrictions on experimental design. In this paper we propose and test a method for the segmentation and the automated analysis of time-lapse plant images from phenotyping experiments in a general laboratory setting that can adapt to scene variability. The method involves minimal user interaction, necessary to establish the statistical experiments that may follow. At every time instance (i.e., a digital photograph), it segments the plants in images that contain many specimens of the same species. For accurate plant segmentation we propose a vector-valued level set formulation that incorporates features of color intensity, local texture, and prior knowledge. Prior knowledge is incorporated using a plant appearance model implemented with Gaussian mixture models, which incrementally utilizes information from previously segmented instances. The proposed approach is tested on Arabidopsis plant images acquired with a static camera capturing many subjects at the same time. Our validation with ground truth segmentations and comparisons with state-of-the-art methods in the literature shows that the proposed method is able to handle images with complicated and changing backgrounds in an automated fashion. An accuracy of 96.7% (Dice similarity coefficient) was observed, which was higher than that of the other methods used for comparison. While here it was tested on a single plant species, the fact that we do not employ shape-driven models and do not rely on fully supervised classification (trained on a large dataset) increases the ease of deployment of the proposed solution for the study of different plant species in a variety of laboratory settings. Our solution will be accompanied by an easy-to-use graphical user interface and, to facilitate adoption, we will make the software available to the scientific community.
Massimo Minervini
massimo.minervini@imtlucca.it
Mohammed Abdelsamea
mohammed.abdelsamea@imtlucca.it
Sotirios A. Tsaftaris
sotirios.tsaftaris@imtlucca.it
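The incrementally updated appearance model above can be sketched in simplified form. This is a hedged stand-in: a single Gaussian over pixel colors replaces the paper's Gaussian mixture model, and the class names and scoring rule are assumptions for illustration only:

```python
import numpy as np

class AppearanceModel:
    """Single-Gaussian stand-in for a Gaussian-mixture plant appearance
    model, updated incrementally from pixels of previously segmented
    instances (hypothetical sketch)."""

    def __init__(self):
        self.samples = []

    def update(self, pixels):
        """Add segmented plant pixels (n, 3 color array) to the model."""
        self.samples.append(np.asarray(pixels, dtype=float))

    def score(self, pixels):
        """Higher score = more plant-like, via Mahalanobis distance."""
        data = np.vstack(self.samples)
        mean = data.mean(axis=0)
        cov = np.cov(data, rowvar=False) + 1e-6 * np.eye(data.shape[1])
        diff = np.asarray(pixels, dtype=float) - mean
        inv = np.linalg.inv(cov)
        # Negative squared Mahalanobis distance to the color mean.
        return -np.einsum('ij,jk,ik->i', diff, inv, diff)
```

In the paper this prior-knowledge term feeds into a vector-valued level set formulation; here the score alone just separates plant-like colors from background.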
2013-02-28T12:15:01Z
2013-02-28T12:15:01Z
http://eprints.imtlucca.it/id/eprint/1494
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1494
2013-02-28T12:15:01Z
Serendipitous Fuzzy Item Recommendation with ProfileMatcher
In this paper an approach to serendipitous item recommendation is outlined. The model used for this task is an extension of ProfileMatcher, which is based on fuzzy metadata describing both users and the items to be recommended. To address the task of recommending serendipitous resources, a priori knowledge of the relations occurring among metadata values is injected into the recommendation process. This is achieved using fuzzy graphs to model similarity relations among the elements of the fuzzy sets describing the metadata. Experiments have been carried out on the MovieLens dataset to show the impact of serendipity injection on the item recommendation process.
Danilo Dell’Agnello
Anna Fanelli
Corrado Mencar
Massimo Minervini
massimo.minervini@imtlucca.it
2013-02-26T14:19:14Z
2015-03-03T09:50:55Z
http://eprints.imtlucca.it/id/eprint/1493
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1493
2013-02-26T14:19:14Z
CaSPiS: A Calculus of Sessions, Pipelines and Services
Service-oriented computing is calling for novel computational models and languages with well-disciplined primitives for client-server interaction, structured orchestration and handling of unexpected events. We present CaSPiS, a process calculus where the conceptual abstractions of sessioning and pipelining play a central role for modelling service-oriented systems. CaSPiS sessions are two-sided, uniquely named and can be nested. CaSPiS pipelines permit orchestrating the flow of data produced by different sessions. The calculus is also equipped with operators for handling (unexpected) termination of the partner’s side of a session. Several examples are presented to provide evidence of the flexibility of the chosen set of primitives. One key contribution is a fully abstract encoding of Misra et al.’s orchestration language Orc. Another main result shows that in CaSPiS it is possible to program a “graceful termination” of nested sessions, which guarantees that no session is forced to hang forever after the loss of its partner.
Michele Boreale
Roberto Bruni
Rocco De Nicola
r.denicola@imtlucca.it
Michele Loreti
2013-02-20T10:52:50Z
2013-02-20T10:52:50Z
http://eprints.imtlucca.it/id/eprint/1487
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1487
2013-02-20T10:52:50Z
Approximate Explicit MPC on Simplicial Partitions with Guaranteed Stability for Constrained Linear Systems
This paper proposes an approximate explicit model predictive control design approach for regulating linear time-invariant systems subject to both state and control constraints. The proposed control law is implemented as a piecewise-affine function defined on a regular simplicial partition, and has two main positive features. First, the regularity of the simplicial partition allows a very efficient implementation of the control law on digital circuits, with computation performed in tens of nanoseconds. Second, the asymptotic stability of the closed-loop system is enforced a priori by design.
Matteo Rubagotti
Davide Barcelli
Alberto Bemporad
alberto.bemporad@imtlucca.it
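A piecewise-affine control law on a simplicial partition can be evaluated by locating the simplex containing the state and interpolating with barycentric coordinates. This is an illustrative software sketch of that evaluation step (the paper implements the lookup on digital circuits); the example partition and vertex control values below are hypothetical:

```python
import numpy as np

def barycentric_coords(simplex, x):
    """Barycentric coordinates of x w.r.t. a simplex given as a
    (d+1, d) array of vertices."""
    V = np.asarray(simplex, dtype=float)
    A = np.vstack([V.T, np.ones(V.shape[0])])   # stack coords + affine row
    return np.linalg.solve(A, np.append(x, 1.0))

def eval_pwa(simplices, vertex_controls, x):
    """Evaluate a PWA law that interpolates control values given at the
    vertices of each simplex in the partition."""
    for simplex, u in zip(simplices, vertex_controls):
        lam = barycentric_coords(simplex, x)
        if np.all(lam >= -1e-9):                # x lies in this simplex
            return lam @ np.asarray(u, dtype=float)
    raise ValueError("x outside the partition")
```

Because the coordinates are affine in x, the interpolated law is affine on each simplex and continuous across shared faces, matching the piecewise-affine structure described above.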
2013-02-20T10:41:16Z
2013-02-20T10:41:16Z
http://eprints.imtlucca.it/id/eprint/1486
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1486
2013-02-20T10:41:16Z
Simple and Certifiable Quadratic Programming Algorithms for Embedded Linear Model Predictive Control
In this paper we review a dual fast gradient-projection approach to solving quadratic programming (QP) problems, recently proposed in [Patrinos and Bemporad, 2012], that is particularly useful for embedded model predictive control (MPC) of linear systems subject to linear constraints on inputs and states. We show that the method has a computational effort comparable with several other existing QP solvers typically used in MPC; in addition, it is extremely easy to code, requires only basic and easily parallelizable arithmetic operations, and the number of iterations it needs to reach a given accuracy, in terms of optimality and feasibility of the primal solution, can be estimated quite tightly by solving an off-line mixed-integer linear programming problem. This research was largely motivated by ongoing research activities on embedded MPC for aerospace systems carried out in collaboration with the European Space Agency.
Alberto Bemporad
alberto.bemporad@imtlucca.it
Panagiotis Patrinos
panagiotis.patrinos@imtlucca.it
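The basic arithmetic behind gradient-projection QP solvers is easy to see on a primal, box-constrained problem. The cited method is a dual fast gradient scheme with accelerated steps; this simplified sketch only illustrates the gradient step + projection pattern that makes such solvers attractive for embedded use:

```python
import numpy as np

def box_qp_pg(Q, c, lb, ub, iters=500):
    """Projected gradient for min 0.5 x'Qx + c'x  s.t.  lb <= x <= ub.
    Only basic operations: a matrix-vector product and a clip."""
    L = np.linalg.norm(Q, 2)              # Lipschitz constant of the gradient
    x = np.clip(np.zeros_like(c), lb, ub)
    for _ in range(iters):
        grad = Q @ x + c
        x = np.clip(x - grad / L, lb, ub)  # gradient step, then projection
    return x
```

Every iteration costs one matrix-vector product plus elementwise clipping, which is why iteration bounds translate directly into certifiable worst-case execution times.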
2013-02-20T10:24:10Z
2014-07-01T12:51:19Z
http://eprints.imtlucca.it/id/eprint/1485
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1485
2013-02-20T10:24:10Z
A numerical algorithm for nonlinear L2-gain optimal control with application to vehicle yaw stability control
This paper is concerned with an L2-gain optimal control approach for coordinating active front steering and differential braking to improve vehicle yaw stability and cornering control. The vehicle dynamics with respect to the tire slip angles is formulated, and disturbances are added to the front and rear cornering force characteristics, modelling, for instance, variability in road friction. The mathematical model results in an input-affine nonlinear system. A numerical algorithm based on the conjugate gradient method to solve the L2-gain optimal control problem is presented. The proposed algorithm, which has a backward-in-time structure, directly finds the feedback control and the "worst case" disturbance variables. Simulations of the controller in closed loop with the nonlinear vehicle model are shown and discussed.
Vladimir Milic
Stefano Di Cairano
Josip Kasac
Alberto Bemporad
alberto.bemporad@imtlucca.it
Zeljko Situm
2013-02-15T08:10:34Z
2013-03-12T14:57:38Z
http://eprints.imtlucca.it/id/eprint/1467
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1467
2013-02-15T08:10:34Z
Development of a methodology for the computation of a near-optimal explicit control law for nonlinear systems that are subject to constraints coupling fuzzy model predictive control and multi-parametric programming
Pantelis Sopasakis
pantelis.sopasakis@imtlucca.it
Panagiotis Patrinos
panagiotis.patrinos@imtlucca.it
Haralambos Sarimveis
2013-02-14T10:04:07Z
2013-03-12T14:57:38Z
http://eprints.imtlucca.it/id/eprint/1480
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1480
2013-02-14T10:04:07Z
Model Predictive Control for Linear Impulsive Systems
Linear Impulsive Control Systems have been extensively studied with respect to their equilibrium points which, in most cases, are none other than the origin. However, the trajectory of the system cannot be stabilized to arbitrary desired points, which imposes a significant restriction on their utilization in various applications such as drug administration. In this paper, we study the equilibrium of Linear Impulsive Systems in light of target sets instead of the standard equilibrium-point approach. We properly extend the notion of invariant sets, which is crucial in designing asymptotically stable Model Predictive Controllers (MPC).
Pantelis Sopasakis
pantelis.sopasakis@imtlucca.it
Panagiotis Patrinos
panagiotis.patrinos@imtlucca.it
Haralambos Sarimveis
Alberto Bemporad
alberto.bemporad@imtlucca.it
2013-02-14T08:45:32Z
2013-03-12T14:57:38Z
http://eprints.imtlucca.it/id/eprint/1476
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1476
2013-02-14T08:45:32Z
Consensus QSAR modeling and domain of applicability: an integrated approach
Consensus modelling is a term that has been used in many scientific disciplines to define methods by which a group of individuals can come to an agreement. The QSAR community has used this term for methodologies that aggregate the predictions of several QSAR models to arrive at a single prediction. Literature reports on the validity of consensus modelling approaches are quite conflicting. Many publications present advantages of consensus models: more accurate QSAR models, greater confidence in predictions, regulatory significance, and improved robustness. Several other references, however, have criticized consensus modelling for its complexity, lack of portability, transparency and mechanistic interpretation, and for not showing significant improvements over single QSAR models.
Many consensus QSAR models that have appeared in the literature use a naive approach that calculates the average value among all the individual model predictions. More sophisticated methods consider only the models for which the compound to be predicted falls into their domain of applicability. Alternative consensus modelling methods consider the individual model predictions as attributes in an overall multiple linear regression model, where the model coefficients play the role of weights. This way, the contribution of each individual model in the overall prediction is weighted.
In this work, we present a new approach integrating three basic components in the process of building a QSAR model: variable selection, regression/classification, and domain of applicability. In particular, the proposed method requires a single wrapper variable selection method, a single method for defining the domain of applicability, and several regression/classification algorithms, depending on the type of the problem. The wrapper variable selection method is applied separately to each QSAR algorithm and produces a QSAR model that uses a certain subset of features. In general, different sets of features are selected by the various QSAR models that are generated. Thus, for each QSAR model, a different domain of applicability is defined by applying the domain-of-applicability method to the respective set of descriptors. For a new compound, the proposed method first calculates the individual QSAR models' predictions. It then checks, for each model, whether the compound falls into its domain of applicability. If it does not, the model is not taken into account in the calculation of the aggregated prediction. If it does, a weight is produced depending on the location of the compound inside the domain of applicability; the weight decreases as the compound approaches the boundaries of the domain. The weights are finally normalized so that they sum to 1, and the normalized weights are used to produce the final aggregated prediction. The results of applying the method to QSAR problems illustrate its advantages and limitations.
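The weighted-consensus aggregation described above can be sketched in a few lines. This is a minimal illustrative version: the predictions, the domain-of-applicability flags, and the "distance to boundary" scores are hypothetical stand-ins, not the paper's actual implementation.

```python
def consensus_predict(predictions, in_domain, boundary_distance):
    """Aggregate individual QSAR model predictions for one compound.

    predictions       : per-model predictions (hypothetical values)
    in_domain         : True if the compound falls in that model's
                        domain of applicability
    boundary_distance : non-negative score; larger means the compound
                        sits deeper inside the model's domain
    """
    # Discard models whose domain of applicability does not cover the compound.
    kept = [(p, d) for p, ok, d in
            zip(predictions, in_domain, boundary_distance) if ok]
    if not kept:
        raise ValueError("compound lies outside every model's domain")
    # Lower weight near the boundary, higher weight deep inside the domain;
    # weights are normalized so they sum to 1.
    total = sum(d for _, d in kept)
    return sum(p * (d / total) for p, d in kept)
```

For example, with three models where the third is out of domain, only the first two contribute, weighted by their boundary distances.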
Georgia Melagraki
Pantelis Sopasakis
pantelis.sopasakis@imtlucca.it
Antreas Afantitis
Haralambos Sarimveis
2013-02-14T08:31:40Z
2013-03-12T14:57:38Z
http://eprints.imtlucca.it/id/eprint/1475
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1475
2013-02-14T08:31:40Z
Decibell: A novel approach to the ORM software in Java
DeciBell is a free, open-source tool developed to tackle, in a uniform and structured way, the problem of Java and SQL cooperation (available at http://github.com/hampos/DeciBell). In DeciBell, Java classes are related to relational database entities automatically and in a way that is transparent as far as the background operations are concerned. Thus, non-expert users can work exclusively on Java code, while expert ones are able to focus on the more algorithmic aspects of the problem they are trying to solve rather than waste effort on trivial database management issues. In contrast to existing ORM programs, DeciBell does not require any configuration files or composite query structures, but only a proper annotation of certain fields of the classes. This annotation is carried out by means of Java Annotations, a modern trend in Java programming. Among its facilities, DeciBell supports primary keys (single and multiple), foreign keys, constraints, and one-to-one, one-to-many, and many-to-many relations, all using pure Java predicates and no SQL or other query languages.
Haralambos Chomenides
Pantelis Sopasakis
pantelis.sopasakis@imtlucca.it
Haralambos Sarimveis
2013-02-13T07:50:17Z
2013-02-13T07:50:17Z
http://eprints.imtlucca.it/id/eprint/1473
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1473
2013-02-13T07:50:17Z
The rendezvous dynamics under linear quadratic optimal control
This paper investigates the dynamics of networks of systems achieving rendezvous under linear quadratic optimal control. While the dynamics of rendezvous have been studied extensively for the symmetric case, where all systems have exactly the same dynamics (such as simple integrators), this paper investigates the rendezvous dynamics for the general case, where the dynamics of the systems may differ. We show that the rendezvous is stable and that the post-rendezvous dynamics of the network of systems is entirely defined by the common eigenvalues with common eigenvectors. The approach is also extended to the case of constraints on systems' states, inputs, and outputs.
Stefano Di Cairano
Carlo A. Pascucci
Alberto Bemporad
alberto.bemporad@imtlucca.it
2013-02-13T07:46:55Z
2013-02-13T07:46:55Z
http://eprints.imtlucca.it/id/eprint/1472
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1472
2013-02-13T07:46:55Z
Stability analysis of discrete-time piecewise-affine systems over non-invariant domains
This paper analyzes stability of discrete-time piecewise-affine systems defined on non-invariant domains. An algorithm based on linear programming is proposed, in order to prove the exponential stability of the origin and to find a positively invariant estimate of the region of attraction. The theoretical results are based on the definition of a piecewise-affine, possibly discontinuous, Lyapunov function. The proposed method presents a relatively low computational burden, and is proven to lead to feasible solutions in a broader range of cases with respect to a previously proposed approach.
Matteo Rubagotti
Luca Zaccarian
Alberto Bemporad
alberto.bemporad@imtlucca.it
2013-02-12T12:11:05Z
2013-02-12T12:11:05Z
http://eprints.imtlucca.it/id/eprint/1471
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1471
2013-02-12T12:11:05Z
Piecewise affine direct virtual sensors with Reduced Complexity
In this paper, a piecewise-affine direct virtual sensor is proposed for the estimation of unmeasured outputs of nonlinear systems whose dynamical model is unknown. In order to overcome the lack of a model, the virtual sensor is designed directly from measured inputs and outputs. The proposed approach generalizes a previous contribution, allowing one to design lower-complexity estimators. Indeed, the reduced-complexity approach strongly reduces the effect of the so-called "curse of dimensionality", and can be applied to relatively high-order systems, while enjoying all the convergence and optimality properties of the original approach.
Matteo Rubagotti
Tomaso Poggi
Alberto Bemporad
alberto.bemporad@imtlucca.it
Marco Storace
2013-02-12T12:03:40Z
2013-02-12T12:03:40Z
http://eprints.imtlucca.it/id/eprint/1470
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1470
2013-02-12T12:03:40Z
An accelerated dual gradient-projection algorithm for linear model predictive control
This paper proposes a dual fast gradient-projection method for solving quadratic programming problems that arise in linear model predictive control with general polyhedral constraints on inputs and states. The proposed algorithm is quite suitable for embedded control applications in that: (1) it is extremely simple and easy to code; (2) the number of iterations to reach a given accuracy in terms of optimality and feasibility of the primal solution can be estimated quite tightly; (3) the computational cost per iteration increases only linearly with the prediction horizon; and (4) the algorithm is also applicable to linear time-varying (LTV) model predictive control problems, with an extra on-line computational effort that is still linear with the prediction horizon.
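The iteration family this work builds on can be illustrated on a toy problem. The sketch below runs a generic Nesterov-accelerated gradient-projection loop on a small box-constrained QP, min 0.5·x'Qx + c'x subject to lo ≤ x ≤ hi; it is not the paper's dual algorithm (which applies the same scheme to the dual of the MPC QP), and the step-size bound and momentum rule are standard textbook choices, not values from the paper.

```python
def solve_box_qp(Q, c, lo, hi, iters=500):
    """Accelerated gradient projection for min 0.5*x'Qx + c'x, lo <= x <= hi."""
    n = len(c)
    # Step size 1/L, with L an upper bound on the largest eigenvalue of the
    # symmetric matrix Q (row-sum bound).
    L = max(sum(abs(v) for v in row) for row in Q)
    x = [0.0] * n
    y = list(x)
    for k in range(iters):
        grad = [sum(Q[i][j] * y[j] for j in range(n)) + c[i] for i in range(n)]
        x_prev = x
        # Gradient step followed by projection onto the box constraints.
        x = [min(max(y[i] - grad[i] / L, lo[i]), hi[i]) for i in range(n)]
        beta = k / (k + 3)  # standard momentum coefficient
        y = [x[i] + beta * (x[i] - x_prev[i]) for i in range(n)]
    return x
```

On a diagonal QP with the unconstrained minimizer outside the box, the iterate converges to the projection-adjusted optimum.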
Panagiotis Patrinos
panagiotis.patrinos@imtlucca.it
Alberto Bemporad
alberto.bemporad@imtlucca.it
2013-01-24T15:10:05Z
2013-03-12T14:57:38Z
http://eprints.imtlucca.it/id/eprint/1468
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1468
2013-01-24T15:10:05Z
A Model Predictive Control Approach for Optimal Drug Administration
The barriers between systems engineering and medicine are slowly eroding, as it has recently become evident that medicine has much to gain from systems technology. In particular, the drug administration problem can be cast as a control engineering problem, where the objective is to keep the drug concentration at certain organs in the body close to desired set-points. A number of constraints render the problem rather challenging. For example, hard constraints may be posed on the drug concentration in blood, because a concentration higher than a certain limit may render the drug effects adverse and toxic. In this paper we show that a popular method for tackling chemical engineering control problems can be used for determining the optimal drug administration. Specifically, Model Predictive Control (MPC) technology is used to take optimal decisions regarding the drug concentration in the human body, while incorporating constraints on both the drug concentration and the drug infusion rate.
Haralambos Sarimveis
Pantelis Sopasakis
pantelis.sopasakis@imtlucca.it
Antreas Afantitis
Georgia Melagraki
2013-01-24T10:34:13Z
2013-03-12T14:57:38Z
http://eprints.imtlucca.it/id/eprint/1466
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1466
2013-01-24T10:34:13Z
Formulation and solution of an optimal control problem where the input values are restricted on a finite set
Pantelis Sopasakis
pantelis.sopasakis@imtlucca.it
Haralambos Sarimveis
2013-01-24T10:16:26Z
2013-03-12T14:57:38Z
http://eprints.imtlucca.it/id/eprint/1465
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1465
2013-01-24T10:16:26Z
Forecasting of the technological and market evolution of QDs using an ARIMA stochastic process
Pantelis Sopasakis
pantelis.sopasakis@imtlucca.it
A. Golnas
N. Apratzanis
C.A. Charitides
2013-01-24T09:27:55Z
2013-03-12T14:57:38Z
http://eprints.imtlucca.it/id/eprint/1463
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1463
2013-01-24T09:27:55Z
An integer programming approach for optimal drug dose computation
In this paper, we study the problem of determining the optimal drug administration strategy when only a finite number of different dosages are available, a lower bound is posed on the time intervals between two consecutive doses, and drug concentrations should not exceed the toxic concentration levels. The presence of only binary variables leads to the adoption of an integer programming (IP) scheme for the formulation and solution of the drug dose optimal control problem. The proposed method is extended to account for the stochastic formulation of the optimal control problem, so that it can be used in practical applications where large populations of patients are to be treated. A Finite Impulse Response (FIR) model derived from experimental pharmacokinetic data is employed to correlate the administered drug dose with the concentration–time profiles of the drug in the compartments (organs) of the body.
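The constraint structure described above can be illustrated with a tiny brute-force version of the problem: binary dose decisions per time slot, an FIR model mapping the dose sequence to concentrations, and feasibility checks for both the minimum-interval and toxicity constraints. The FIR coefficients and limits below are invented for illustration, and enumeration stands in for the paper's integer-programming formulation.

```python
from itertools import product

FIR = [1.0, 0.6, 0.3, 0.1]   # hypothetical impulse response of one unit dose
C_MAX = 1.5                  # hypothetical toxic concentration limit
MIN_GAP = 2                  # at least 2 slots between consecutive doses

def concentrations(doses):
    """FIR model: concentration is the convolution of doses with FIR."""
    n = len(doses)
    return [sum(FIR[k] * doses[t - k]
                for k in range(len(FIR)) if 0 <= t - k < n)
            for t in range(n)]

def feasible_schedules(horizon):
    """Enumerate binary dose schedules satisfying both constraints."""
    out = []
    for doses in product([0, 1], repeat=horizon):
        times = [t for t, d in enumerate(doses) if d]
        gap_ok = all(b - a >= MIN_GAP for a, b in zip(times, times[1:]))
        if gap_ok and max(concentrations(list(doses)), default=0.0) <= C_MAX:
            out.append(doses)
    return out
```

Dosing at slots 0 and 2 respects the two-slot gap and keeps the peak concentration below the limit, while dosing in consecutive slots violates the interval constraint.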
Pantelis Sopasakis
pantelis.sopasakis@imtlucca.it
Haralambos Sarimveis
2013-01-17T10:06:03Z
2014-01-29T13:58:23Z
http://eprints.imtlucca.it/id/eprint/1460
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1460
2013-01-17T10:06:03Z
Two-time-scale MPC for economically optimal real-time operation of balance responsible parties
European electrical networks are evolving towards a distributed system in which the number of power plants is growing and, in particular, green plants based on renewable energy sources (RES), such as wind and solar, are increasing. The integration of RES leads to energy imbalance, due to the difficulty of predicting their production. This paper proposes a two-time-scale Hierarchical Model Predictive Control (HMPC) strategy for the real-time optimal control of Balance Responsible Parties (BRPs) in power systems with high penetration of RES. The proposed control strategy is able to handle ramp-rate constraints efficiently and results in reduced generation and imbalance costs, thanks to real-time economic optimization of power setpoints.
Panagiotis Patrinos
panagiotis.patrinos@imtlucca.it
Daniele Bernardini
daniele.bernardini@imtlucca.it
Alessandro Maffei
Andrej Jokic
Alberto Bemporad
alberto.bemporad@imtlucca.it
2012-12-19T11:16:32Z
2014-01-29T08:58:38Z
http://eprints.imtlucca.it/id/eprint/1457
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1457
2012-12-19T11:16:32Z
When entities meet query recommender systems: semantic search shortcuts
The Web of Data is growing in popularity and dimension, and entities are gaining importance in many research fields. In this paper, we explore the use of entities that can be extracted from a query log to enhance query recommendation. In particular, we use a large query log recorded by the Europeana portal, a central access point to the descriptions of more than 20 million cultural heritage objects, and we extend a state-of-the-art query recommendation algorithm to take into account the semantic information associated with the submitted queries. Our novel method generates highly related and diversified suggestions. We assess it by means of a new evaluation technique. The manually annotated dataset used for performance comparisons has been made available to the research community to favor the repeatability of the experiments.
Diego Ceccarelli
diego.ceccarelli@imtlucca.it
Sergiu Gordea
Claudio Lucchese
Franco Maria Nardini
Raffaele Perego
2012-12-19T11:09:10Z
2013-03-12T14:57:00Z
http://eprints.imtlucca.it/id/eprint/1456
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1456
2012-12-19T11:09:10Z
You should read this! let me explain you why: explaining news recommendations to users
Recommender systems have become ubiquitous in content-based web applications, from news to shopping sites. Nonetheless, an aspect that has been largely overlooked so far in the recommender system literature is that of automatically building explanations for a particular recommendation. This paper focuses on the news domain and proposes to enhance the effectiveness of news recommender systems by adding, to each recommendation, an explanatory statement that helps the user to better understand if, and why, the item can be of interest to her. We consider the news recommender system as a black box and generate different types of explanations employing pieces of information associated with the news. In particular, we engineer text-based, entity-based, and usage-based explanations, and make use of Markov Logic Networks to rank the explanations on the basis of their effectiveness. The assessment of the model is conducted via a user study on a dataset of news read consecutively by actual users. Experiments show that news recommender systems can greatly benefit from our explanation module.
Roi Blanco
Diego Ceccarelli
diego.ceccarelli@imtlucca.it
Claudio Lucchese
Raffaele Perego
Fabrizio Silvestri
2012-12-19T10:57:28Z
2013-03-12T14:57:00Z
http://eprints.imtlucca.it/id/eprint/1454
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1454
2012-12-19T10:57:28Z
Introducing RDF Graph Summary with Application to Assisted SPARQL Formulation
One of the reasons for the slow adoption of SPARQL is the complexity in query formulation due to data diversity. The principal barrier a user faces when trying to formulate a query is that he generally has no information about the underlying structure and vocabulary of the data. In this paper, we address this problem at the maximum scale we can think of: providing assistance in formulating SPARQL queries over the entire Sindice data collection - 15 billion triples and counting coming from more than 300K datasets. We present a method to help users in formulating complex SPARQL queries across multiple heterogeneous data sources. Even if the structure and vocabulary of the data sources are unknown to the user, the user is able to quickly and easily formulate his queries. Our method is based on a summary of the data graph and assists the user during an interactive query formulation by recommending possible structural query elements.
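A toy version of summary-based assistance conveys the idea: from a small "graph summary" mapping entity classes to their outgoing predicates, recommend the structural query elements a user may still add to a partially written query. The summary contents here are fabricated for illustration; the real system summarizes a data graph of billions of triples.

```python
# Hypothetical graph summary: class -> set of outgoing predicates observed
# in the data. In the real system this is computed from the data graph.
SUMMARY = {
    "Person": {"name", "knows", "worksFor"},
    "Organization": {"name", "located"},
}

def recommend_predicates(entity_class, already_used):
    """Suggest predicates for this class that the query does not use yet."""
    return sorted(SUMMARY.get(entity_class, set()) - set(already_used))
```

For a query about a `Person` that already constrains `name`, the assistant would offer the remaining predicates observed for that class.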
Stephane Campinas
Thomas E. Perry
Diego Ceccarelli
diego.ceccarelli@imtlucca.it
Renaud Delbru
Giovanni Tummarello
2012-12-19T10:42:21Z
2013-03-12T14:57:00Z
http://eprints.imtlucca.it/id/eprint/1453
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1453
2012-12-19T10:42:21Z
Improving Europeana Search Experience Using Query Logs
Europeana is a long-term project funded by the European Commission with the goal of making Europe's cultural and scientific heritage accessible to the public. Since 2008, about 1500 institutions have contributed to Europeana, enabling people to explore the digital resources of Europe's museums, libraries, and archives. The huge amount of collected multi-lingual, multi-media data is made available today through the Europeana portal, a search engine allowing users to explore such content through textual queries. One of the most important techniques for enhancing users' search experience in large information spaces is the exploitation of the knowledge contained in query logs. In this paper we present a characterization of the Europeana query log, showing statistics on common behavioral patterns of the Europeana users. Our analysis highlights some significant differences between the Europeana query log and the historical data collected in general-purpose Web search engine logs. In particular, we find that both the query and the search-session distributions behave differently. Finally, we use this information to design a query recommendation technique aimed at enhancing the functionality of the Europeana portal.
Diego Ceccarelli
diego.ceccarelli@imtlucca.it
Sergiu Gordea
Claudio Lucchese
Franco Maria Nardini
Gabriele Tolomei
2012-12-19T10:23:55Z
2013-03-12T14:57:00Z
http://eprints.imtlucca.it/id/eprint/1451
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1451
2012-12-19T10:23:55Z
Discovering Europeana users’ search behavior
Europeana is a strategic project funded by the European Commission with the goal of making Europe's cultural and scientific heritage accessible to the public. ASSETS is a two-year Best Practice Network co-funded by the CIP PSP Programme to improve performance, accessibility and usability of the Europeana search engine. Here we present a characterization of the Europeana logs by showing statistics on common behavioural patterns of the Europeana users.
Diego Ceccarelli
diego.ceccarelli@imtlucca.it
Sergiu Gordea
Claudio Lucchese
Franco Maria Nardini
Raffaele Perego
Gabriele Tolomei
2012-12-19T09:37:52Z
2013-03-12T14:57:00Z
http://eprints.imtlucca.it/id/eprint/1450
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1450
2012-12-19T09:37:52Z
The Sindice-2011 Dataset for Entity-Oriented Search in the Web of Data
The task of entity retrieval becomes increasingly prevalent as more and more (semi-)structured information about objects becomes available on the Web in the form of documents embedding metadata (RDF, RDFa, Microformats, and others). However, research and development in that direction depend on (1) the availability of a representative corpus of entities that are found on the Web, and (2) the availability of an entity-oriented search infrastructure for experimenting with new retrieval models. In this paper, we introduce the Sindice-2011 data collection, which is derived from data collected by the Sindice semantic search engine. The data collection (available at http://data.sindice.com/trec2011/) is especially designed to support research in the domain of web entity retrieval. We describe how the corpus is organised, discuss statistics of the data collection, and introduce a search infrastructure to foster research and development.
Stephane Campinas
Diego Ceccarelli
diego.ceccarelli@imtlucca.it
Thomas E. Perry
Renaud Delbru
Krisztian Balog
Giovanni Tummarello
2012-12-19T09:15:48Z
2013-03-12T14:57:01Z
http://eprints.imtlucca.it/id/eprint/1449
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1449
2012-12-19T09:15:48Z
Caching query-biased snippets for efficient retrieval
Web search engines' result pages contain references to the top-k documents relevant to the query submitted by a user. Each document is represented by a title, a snippet, and a URL. Snippets, i.e. short sentences showing the portions of the document relevant to the query, help users to select the most interesting results. The snippet generation process is very expensive, since it may require accessing a number of documents for each issued query. We assert that caching, a popular technique used to enhance performance at various levels of computing systems, can be very effective in this context. We design and experiment with several cache organizations, and we introduce the concept of supersnippet, that is, the set of sentences in a document that are most likely to answer future queries. We show that supersnippets can be built by exploiting query logs and that, in our experiments, a supersnippet cache answers up to 62% of the requests, remarkably outperforming other caching approaches.
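The supersnippet idea can be sketched as a small cache: per document, remember the sentences that answered past queries, and serve future queries from that set when their terms are covered. The data structures and term matching below are deliberately simplistic stand-ins for the actual cache organizations studied in the paper.

```python
class SupersnippetCache:
    """Toy supersnippet cache: doc_id -> sentences that matched past queries."""

    def __init__(self):
        self._super = {}  # doc_id -> set of cached sentences

    def record(self, doc_id, query, document_sentences):
        """After generating a snippet the expensive way, remember the
        sentences that shared terms with the query."""
        terms = set(query.lower().split())
        hits = {s for s in document_sentences
                if terms & set(s.lower().split())}
        self._super.setdefault(doc_id, set()).update(hits)

    def lookup(self, doc_id, query):
        """Return cached sentences matching the query, or None on a miss
        (a miss falls back to the expensive snippet generator)."""
        terms = set(query.lower().split())
        hits = [s for s in self._super.get(doc_id, ())
                if terms & set(s.lower().split())]
        return hits or None
```

A miss triggers ordinary snippet generation followed by `record`, so the supersnippet grows to cover the sentences most likely to answer future queries.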
Diego Ceccarelli
diego.ceccarelli@imtlucca.it
Claudio Lucchese
Salvatore Orlando
Raffaele Perego
Fabrizio Silvestri
2012-12-14T08:37:47Z
2014-01-24T14:13:11Z
http://eprints.imtlucca.it/id/eprint/1446
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1446
2012-12-14T08:37:47Z
Metamodel variability in robust simulation-optimization: a bootstrap analysis
Metamodels are often used in simulation-optimization for the design and management of complex systems, enabling the integration of discipline-dependent analysis into the overall decision process. These metamodels yield insight into the relationship between responses and decision variables, providing fast analysis tools in place of the more expensive computer simulations. The combined use of stochastic simulation experiments and metamodels introduces a source of uncertainty in the decision process that we refer to as metamodel variability. To quantify this variability, we combine validation and bootstrapping techniques. The rationale behind the method relies on the fact that, after the validation process, the relative validation errors are small, indicating that the metamodels give an adequate approximation; bootstrapping these errors then allows us to quantify the metamodels' variability in an acceptable way. The method has the advantage of being general and can be used with different kinds of metamodels and validation techniques. The resulting methodology is illustrated through some examples using regression and Kriging metamodels.
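The bootstrap step described above can be sketched as follows: resample the relative validation errors of a fitted metamodel and re-apply them to one of its predictions to obtain an empirical spread. The error values and the percentile interval below are illustrative assumptions, not the paper's validated setup.

```python
import random

def bootstrap_prediction_interval(prediction, rel_errors, n_boot=2000, seed=0):
    """Quantify metamodel variability by bootstrapping relative errors.

    prediction : a metamodel prediction (hypothetical value)
    rel_errors : relative validation errors observed for the metamodel
    """
    rng = random.Random(seed)
    # Each bootstrap replicate perturbs the prediction by one resampled
    # relative validation error.
    reps = sorted(prediction * (1.0 + rng.choice(rel_errors))
                  for _ in range(n_boot))
    # Crude 90% percentile interval from the bootstrap distribution.
    return reps[int(0.05 * n_boot)], reps[int(0.95 * n_boot)]
```

Small validation errors yield a tight interval around the prediction, which is exactly the "adequate approximation" premise the method relies on.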
Gabriella Dellino
gabriella.dellino@imtlucca.it
Carlo Meloni
2012-11-20T10:42:17Z
2014-01-29T14:32:00Z
http://eprints.imtlucca.it/id/eprint/1429
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1429
2012-11-20T10:42:17Z
A cooperative approach for distributed task execution in autonomic clouds
Virtualization and distributed computing are two key pillars that guarantee scalability of applications deployed in the Cloud. In Autonomous Cooperative Cloud-based Platforms, autonomous computing nodes cooperate to offer a PaaS Cloud for the deployment of user applications. Each node must allocate the necessary resources for customer applications to be executed with certain QoS guarantees. If the QoS of an application cannot be guaranteed a node has mainly two options: to allocate more resources (if it is possible) or to rely on the collaboration of other nodes. Making a decision is not trivial since it involves many factors (e.g. the cost of setting up virtual machines, migrating applications, discovering collaborators). In this paper we present a model of such scenarios and experimental results validating the convenience of cooperative strategies over selfish ones, where nodes do not help each other. We describe the architecture of the platform of autonomous clouds and the main features of the model, which has been implemented and evaluated in the DEUS discrete-event simulator. From the experimental evaluation, based on workload data from the Google Cloud Backend, we can conclude that (modulo our assumptions and simplifications) the performance of a volunteer cloud can be compared to that of a Google Cluster.
Michele Amoretti
Alberto Lluch-Lafuente
alberto.lluch@imtlucca.it
Stefano Sebastio
stefano.sebastio@imtlucca.it
2012-10-15T08:09:17Z
2012-10-15T08:09:17Z
http://eprints.imtlucca.it/id/eprint/1401
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1401
2012-10-15T08:09:17Z
MOBY-DIC: A MATLAB Toolbox for Circuit-Oriented Design of Explicit MPC
This paper describes a MATLAB toolbox for the integrated design of Model Predictive Control (MPC) state-feedback control laws and the digital circuits implementing them. Explicit MPC laws can be designed using optimal and sub-optimal formulations, directly taking into account the specifications of the digital circuit implementing the control law (such as latency and size), together with the usual control specifications (stability, performance, constraint satisfaction). Tools for a-posteriori stability analysis of the closed-loop system, and for the simulation of the circuit in Simulink, are also included in the toolbox.
Alberto Oliveri
Davide Barcelli
Alberto Bemporad
alberto.bemporad@imtlucca.it
Bart Genuit
W.P.M.H. Heemels
Tomaso Poggi
Matteo Rubagotti
Marco Storace
2012-10-04T08:28:37Z
2012-10-04T08:28:37Z
http://eprints.imtlucca.it/id/eprint/1387
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1387
2012-10-04T08:28:37Z
Model predictive control applications for planetary rovers
Model Predictive Control (MPC) is a well-known method for the control of processes with low or moderate dynamics, as found in power or chemical plants. Within space applications, the typical domain of MPC is spacecraft attitude and orbit control. MPC for the control of planetary rovers is a quite new technology and was recently investigated in the frame of the RobMPC project, under ESA contract. In this context, the Robust-MPC approach was applied to three layers of the rover control hierarchy dealing with medium- to high-dynamics control tasks: 1) guidance, 2) trajectory control, and 3) wheel traction and steering control. The selected reference rover is ESA's four-wheel EGP rover, with rear-axle steering and a mass of approximately 800 kg. The MPC control design flow is based on the MPCSofT Toolbox for MATLAB, a novel toolbox developed within the RobMPC project. The MPCSofT toolbox provides an environment for the design and simulation of MPC controllers, based on a quite general class of linear time-varying (LTV) models, constraints, and quadratic costs, possibly equipped with integral action to increase robustness. As MPC prediction models are easily specified by the user in Embedded MATLAB code, C code can be automatically generated within the MATLAB/Simulink environment for immediate rapid prototyping. The highest control level is shared between the nominal path planner (computed offline) and the MPC guidance function. When the rover slips outside the safety corridor around the nominal path, the guidance function continuously builds obstacle-free optimal contingency paths to bring the vehicle back to the nominal path, without the need to stop the rover to compute a new nominal path. The LTV model included in the MPC optimization engine is used to reconstruct the guidance path from the computed optimal sequence of actions. The MPC trajectory control acts on the velocity vector of the vehicle in order to keep the vehicle within the nominal (guidance) path. This level takes into account the non-holonomic characteristics of the rover and implements a kinematic LTV model of the vehicle. The lowest MPC level is dedicated to traction and steering control. This layer controls the steering angle and wheel velocity coordination, and typically replaces the Ackermann control. Here, the MPC solution is based on a multi-body system model of the rover, including the wheel-soil interaction dynamics. It is implemented as a stepwise LTI-class problem with corresponding online linearization of the model. The paper introduces the architecture of the entire control hierarchy together with selected details of the MPC-specific implementation. The performance and robustness analyses are presented based on the results of comprehensive Monte Carlo simulations. A profiling of the code gives an outlook regarding the readiness state in terms of controller implementation on space-qualified computer hardware.
Giovanni Binet
Rainer Krenn
Alberto Bemporad
alberto.bemporad@imtlucca.it
2012-09-26T14:49:47Z
2013-03-07T12:56:25Z
http://eprints.imtlucca.it/id/eprint/1385
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1385
2012-09-26T14:49:47Z
Modular Termination and Combinability for Superposition Modulo Counter Arithmetic
Modularity is a highly desirable property in the development of satisfiability procedures. In this paper we are interested in using a dedicated superposition calculus to develop satisfiability procedures for (unions of) theories sharing counter arithmetic. In the first place, we are concerned with the termination of this calculus for theories representing data structures and their extensions. To this purpose, we prove a modularity result for termination which allows us to use our superposition calculus as a satisfiability procedure for combinations of data structures. In addition, we present a general combinability result that permits us to use our satisfiability procedures within a non-disjoint combination method à la Nelson-Oppen without loss of completeness. This latter result is useful whenever data structures are combined with theories for which superposition is not applicable, like theories of arithmetic.
Christophe Ringeissen
Valerio Senni
valerio.senni@imtlucca.it
2012-09-26T13:50:03Z
2013-03-07T12:56:25Z
http://eprints.imtlucca.it/id/eprint/1384
2012-09-26T13:50:03Z
Reachability Analysis via Specialization of Constraint Logic Programs
Fabio Fioravanti
Alberto Pettorossi
Maurizio Proietti
Valerio Senni
valerio.senni@imtlucca.it
2012-09-26T13:42:59Z
2013-03-07T12:56:25Z
http://eprints.imtlucca.it/id/eprint/1383
2012-09-26T13:42:59Z
Verifying parameterized protocols by transforming stratified logic programs
We propose a method for the specification and the automated verification of temporal properties of parameterized protocols. Our method is based on logic programming and program transformation. We specify the properties of parameterized protocols by using an extension of stratified logic programs. This extension allows premises of
clauses to contain first order formulas over arrays of parameterized length. A property of a given protocol is proved by applying suitable unfold/fold transformations to the specification of that protocol. We demonstrate our
method by proving that the parameterized Peterson's protocol among N processes, for any N ≥ 2, ensures the mutual exclusion property.
Alberto Pettorossi
Maurizio Proietti
Valerio Senni
valerio.senni@imtlucca.it
2012-09-26T13:05:53Z
2013-03-07T12:56:25Z
http://eprints.imtlucca.it/id/eprint/1382
2012-09-26T13:05:53Z
Transformational Verification of Parameterized Protocols Using Array Formulas
We propose a method for the specification and the automated verification of temporal properties of parameterized protocols. Our method is based on logic programming and program transformation. We specify the properties of parameterized protocols by using an extension of stratified logic programs. This extension allows premises of clauses to contain first order formulas over arrays of parameterized length. A property of a given protocol is proved by applying suitable unfold/fold transformations to the specification of that protocol. We demonstrate our method by proving that the parameterized Peterson’s protocol among N processes, for any N ≥ 2, ensures the mutual exclusion property.
Alberto Pettorossi
Maurizio Proietti
Valerio Senni
valerio.senni@imtlucca.it
2012-09-26T12:54:23Z
2013-03-07T12:56:25Z
http://eprints.imtlucca.it/id/eprint/1381
2012-09-26T12:54:23Z
Proofs of Program Properties via Unfold/Fold Transformations of Constraint Logic Programs
In the literature there are various papers which illustrate the relationship between the unfold/fold program transformation techniques and the proofs of program properties, both in the case of logic programs and in the case of functional programs. In this paper we illustrate that relationship in the case of constraint logic programs. We build upon results already presented, where we have considered logic programs with locally stratified negation. The constraint logic programming paradigm significantly extends the logic programming paradigm by allowing some of the atoms to denote constraints in a suitably chosen constraint domain. By using those constraints it is often possible to get simple and direct formulations of problem solutions.
Alberto Pettorossi
Maurizio Proietti
Valerio Senni
valerio.senni@imtlucca.it
2012-09-26T12:42:46Z
2013-03-07T12:56:25Z
http://eprints.imtlucca.it/id/eprint/1380
2012-09-26T12:42:46Z
Proving Properties of Constraint Logic Programs by Eliminating Existential Variables
We propose a method for proving first order properties of constraint logic programs which manipulate finite lists of real numbers. Constraints are linear equations and inequations over reals. Our method consists in converting any given first order formula into a stratified constraint logic program and then applying a suitable unfold/fold transformation strategy that preserves the perfect model. Our strategy is based on the elimination of existential variables, that is, variables which occur in the body of a clause and not in its head. Since, in general, the first order properties of the class of programs we consider are undecidable, our strategy is necessarily incomplete. However, experiments show that it is powerful enough to prove several non-trivial program properties.
Alberto Pettorossi
Maurizio Proietti
Valerio Senni
valerio.senni@imtlucca.it
2012-09-25T12:27:54Z
2013-03-07T12:56:25Z
http://eprints.imtlucca.it/id/eprint/1379
2012-09-25T12:27:54Z
Automatic Correctness Proofs for Logic Program Transformations
The many approaches which have been proposed in the literature for proving the correctness of unfold/fold program transformations consist in associating suitable well-founded orderings with the proof trees of the atoms belonging to the least Herbrand models of the programs. In practice, these orderings are given by ‘clause measures’, that is, measures associated with the clauses of the programs to be transformed. In the unfold/fold transformation systems proposed so far, clause measures are fixed in advance, independently of the transformations to be proved correct. In this paper we propose a method for the automatic generation of the clause measures which, instead, takes into account the particular program transformation at hand. During the transformation process we construct a system of linear equations and inequations whose unknowns are the clause measures to be found, and the correctness of the transformation is guaranteed by the satisfiability of that system. Through some examples we show that our method is able to establish in a fully automatic way the correctness of program transformations which, by using other methods, are proved correct at the expense of fixing sophisticated clause measures.
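The constraint-solving step described in the abstract can be pictured with a toy sketch. The measures, the two constraints, and the search bound below are made up for illustration; in the paper the linear system is generated from an actual sequence of transformation steps.

```python
# Hypothetical example: clause measures m1, m2, m3 are unknown nonnegative
# integers, and each transformation step contributes a linear (in)equation.
from itertools import product

def satisfiable(constraints, n_vars, bound=10):
    """Brute-force search for a nonnegative integer solution up to `bound`."""
    for assignment in product(range(bound + 1), repeat=n_vars):
        if all(c(assignment) for c in constraints):
            return assignment
    return None  # no witness within the bound

# Made-up constraints over (m1, m2, m3):
constraints = [
    lambda m: m[0] > m[1],       # e.g. an unfolding step: measure strictly decreases
    lambda m: m[1] >= m[2] + 1,  # e.g. a folding step: measure bounded below
]

solution = satisfiable(constraints, n_vars=3)
print(solution)  # → (2, 1, 0): a witness, so the (toy) transformation is correct
```

Satisfiability of the system yields concrete clause measures, which is exactly the correctness certificate the abstract describes; here a naive enumeration stands in for a real linear solver.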
Alberto Pettorossi
Maurizio Proietti
Valerio Senni
valerio.senni@imtlucca.it
2012-09-25T12:03:29Z
2013-03-07T12:56:25Z
http://eprints.imtlucca.it/id/eprint/1378
2012-09-25T12:03:29Z
Program Transformation for Development, Verification, and Synthesis of Software
In this paper we briefly describe the use of the program transformation methodology for the development of correct
and efficient programs. We will consider, in particular,
the transformation and development of constraint logic programs.
Alberto Pettorossi
Maurizio Proietti
Valerio Senni
valerio.senni@imtlucca.it
2012-09-25T11:29:19Z
2013-03-07T12:56:25Z
http://eprints.imtlucca.it/id/eprint/1377
2012-09-25T11:29:19Z
Transformation techniques for constraint logic programs with applications to protocol verification
The contribution of this thesis consists in the extension of the techniques for the transformation of constraint logic programs and the development of methods for the application of these techniques to the proof of temporal properties of parameterized protocols. In particular, we first introduce a method for proving automatically the total correctness of an unfold/fold transformation by solving linear equations and inequations over the natural numbers. We also propose a transformation-based method for proving first order properties of constraint logic programs which manipulate finite lists of real or rational numbers. Then, we extend the standard folding transformation rule by introducing two variants of this rule. The first variant combines the folding rule with the clause splitting rule for obtaining a more powerful folding rule. The second variant is tailored to the elimination of the existential variables occurring in a clause. For the standard folding rule and its two variants we develop the corresponding algorithms for automating their application. Finally, we propose a program transformation framework for proving temporal properties of parameterized protocols. Using this framework we encode the protocols and the temporal properties we want to prove as logic programs, and then we use the unfold/fold transformation technique for proving whether or not the properties hold.
Valerio Senni
valerio.senni@imtlucca.it
2012-09-25T11:25:06Z
2013-03-07T12:56:25Z
http://eprints.imtlucca.it/id/eprint/1376
2012-09-25T11:25:06Z
Folding Transformation Rules for Constraint Logic Programs
We consider the folding transformation rule for constraint
logic programs. We propose an algorithm for applying the folding rule in the case where the constraints are linear equations and inequations over the rational or the real numbers. Basically, our algorithm consists in reducing a rule application to the solution of one or more systems
of linear equations and inequations. We also introduce two variants of the folding transformation rule. The first variant combines the folding rule with the clause splitting rule, and the second variant eliminates the existential variables of a clause, that is, those variables which occur in the body of the clause and not in its head. Finally, we present the algorithms for applying these variants of the folding rule.
Valerio Senni
valerio.senni@imtlucca.it
Alberto Pettorossi
Maurizio Proietti
2012-09-24T13:49:23Z
2013-03-07T12:56:25Z
http://eprints.imtlucca.it/id/eprint/1375
2012-09-24T13:49:23Z
A Folding Algorithm for Eliminating Existential Variables from Constraint Logic Programs
The existential variables of a clause in a constraint logic program are the variables which occur in the body of the clause and not in its head. The elimination of these variables is a transformation technique which is often used for improving program efficiency and verifying program properties. We consider a folding transformation rule which ensures the elimination of existential variables and we propose an algorithm for applying this rule in the case where the constraints are linear inequations over rational or real numbers. The algorithm combines techniques for matching terms modulo equational theories and techniques for solving systems of linear inequations. We show that an implementation of our folding algorithm performs well in practice.
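The definition opening the abstract can be stated directly in code. In this minimal sketch a clause is represented as a (head, body) pair of atoms, with Prolog-style uppercase variable names; the representation and names are hypothetical, not the paper's.

```python
# Sketch of the definition: the existential variables of a clause are the
# variables occurring in its body but not in its head.
def variables(atom):
    """Collect the variables (uppercase names, Prolog-style) of an atom."""
    _, args = atom
    return {a for a in args if a[:1].isupper()}

def existential_variables(head, body):
    """Variables occurring in the body of a clause and not in its head."""
    head_vars = variables(head)
    body_vars = set().union(*(variables(atom) for atom in body)) if body else set()
    return body_vars - head_vars

# p(X) :- q(X, Y), r(Y, Z).  --  Y and Z are existential
head = ("p", ["X"])
body = [("q", ["X", "Y"]), ("r", ["Y", "Z"])]
print(sorted(existential_variables(head, body)))  # → ['Y', 'Z']
```

These are the variables the folding rule of the paper is designed to eliminate; the real algorithm additionally matches terms modulo equational theories and solves linear inequations, which this sketch does not attempt.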
Valerio Senni
valerio.senni@imtlucca.it
Alberto Pettorossi
Maurizio Proietti
2012-09-24T13:28:00Z
2013-03-07T12:56:25Z
http://eprints.imtlucca.it/id/eprint/1373
2012-09-24T13:28:00Z
Transformational Verification of Linear Temporal Logic
We present a new method for verifying Linear Temporal
Logic (LTL) properties of finite state reactive systems based on logic programming and program transformation. We encode a finite state system and an LTL property which we want to verify as a logic program on infinite lists. Then we apply a verification method consisting of two steps. In the first step we transform the logic program that encodes the given system and the given property into a new program belonging to the class of the so-called linear monadic ω-programs (which are stratified, linear recursive programs defining nullary predicates or unary predicates on infinite lists). This transformation is performed by applying rules that preserve correctness. In the second step we verify the property of interest by using suitable proof rules for linear monadic ω-programs. These proof rules can be encoded as a logic program which always terminates, if evaluated by using tabled resolution. Although our method uses standard
program transformation techniques, the computational complexity of the derived verification algorithm is essentially the same as that of the Lichtenstein-Pnueli algorithm [9], which uses sophisticated ad-hoc techniques.
Alberto Pettorossi
Maurizio Proietti
Valerio Senni
valerio.senni@imtlucca.it
2012-09-24T13:13:27Z
2013-03-07T12:56:25Z
http://eprints.imtlucca.it/id/eprint/1372
2012-09-24T13:13:27Z
Deciding Full Branching Time Logic by Program Transformation
We present a method, based on logic program transformation, for verifying Computation Tree Logic (CTL*) properties of finite state reactive systems. The finite state systems and the CTL* properties we want to verify are encoded as logic programs on infinite lists. Our verification method consists of two steps. In the first step we transform the logic program that encodes the given system and the given property into a monadic ω-program, that is, a stratified program defining nullary or unary predicates on infinite lists. This transformation is performed by applying unfold/fold rules that preserve the perfect model of the initial program. In the second step we verify the property of interest by using a proof method for monadic ω-programs.
Alberto Pettorossi
Maurizio Proietti
Valerio Senni
valerio.senni@imtlucca.it
2012-09-24T12:48:57Z
2013-03-07T12:56:25Z
http://eprints.imtlucca.it/id/eprint/1371
2012-09-24T12:48:57Z
A Folding Rule for Eliminating Existential Variables from Constraint Logic Programs
The existential variables of a clause in a constraint logic program are the variables which occur in the body of the clause and not in its head. The elimination of these variables is a transformation technique which is often used for improving program efficiency and verifying program properties. We consider a folding transformation rule which ensures the elimination of existential variables and we propose an algorithm for applying this rule in the case where the constraints are linear inequations over rational or real numbers. The algorithm combines techniques for matching terms modulo equational theories and techniques for solving systems of linear inequations. Through some examples we show that an implementation of our folding algorithm has a good performance in practice.
Valerio Senni
valerio.senni@imtlucca.it
Alberto Pettorossi
Maurizio Proietti
2012-09-18T15:24:04Z
2013-03-07T12:56:25Z
http://eprints.imtlucca.it/id/eprint/1364
2012-09-18T15:24:04Z
The Transformational Approach to Program Development
We present an overview of the program transformation techniques which have been proposed over the past twenty-five years in the context of logic programming. We consider the approach based on rules and strategies. First, we present the transformation rules and we address the issue of their correctness. Then, we present the transformation strategies and, through some examples, we illustrate their use for improving program efficiency via the elimination of unnecessary variables, the reduction of nondeterminism, and the use of program specialization. We also describe the use of the transformation methodology for the synthesis of logic programs from first-order specifications. Finally, we illustrate some transformational techniques for verifying first-order properties of logic programs and their application to model checking for finite and infinite state concurrent systems.
Alberto Pettorossi
Maurizio Proietti
Valerio Senni
valerio.senni@imtlucca.it
2012-09-18T15:15:57Z
2013-03-07T12:56:25Z
http://eprints.imtlucca.it/id/eprint/1362
2012-09-18T15:15:57Z
Transformations of logic programs on infinite lists
We consider an extension of logic programs, called ω-programs, that can be used to define predicates over infinite lists. ω-programs allow us to specify properties of the infinite behavior of reactive systems and, in general, properties of infinite sequences of events. The semantics of ω-programs is an extension of the perfect model semantics. We present variants of the familiar unfold/fold rules which can be used for transforming ω-programs. We show that these new rules are correct, that is, their application preserves the perfect model semantics. Then we outline a general methodology based on program transformation for verifying properties of ω-programs. We demonstrate the power of our transformation-based verification methodology by proving some properties of Büchi automata and ω-regular languages.
Alberto Pettorossi
Maurizio Proietti
Valerio Senni
valerio.senni@imtlucca.it
2012-09-18T14:38:17Z
2013-03-07T12:56:25Z
http://eprints.imtlucca.it/id/eprint/1361
2012-09-18T14:38:17Z
Generalization Strategies for the Verification of Infinite State Systems
We present a comparative evaluation of some generalization
strategies which are applied by a method for the automated verification of infinite state reactive systems. The verification method is based on (1) the specialization of the constraint logic program which encodes the system with respect to the initial state and the property to be verified, and (2) a bottom-up evaluation of the specialized program. The generalization strategies are used during the program specialization phase for controlling when and how to perform generalization. Selecting a good generalization strategy is not a trivial task because it must guarantee the
termination of the specialization phase itself, and it should strike a good balance between precision and performance. Indeed, a coarse generalization strategy may prevent one from proving the properties of interest, while an unnecessarily precise strategy may lead to high verification times. We
perform an experimental evaluation of various generalization strategies on several infinite state systems and properties to be verified.
Fabio Fioravanti
Alberto Pettorossi
Maurizio Proietti
Valerio Senni
valerio.senni@imtlucca.it
2012-09-18T13:56:51Z
2013-03-07T12:56:25Z
http://eprints.imtlucca.it/id/eprint/1359
2012-09-18T13:56:51Z
A Transformation Strategy for Verifying Logic Programs on Infinite Lists
We consider an extension of the class of logic programs,
called ω-programs, that can be used to define predicates over infinite lists. The ω-programs allow us to specify properties of the infinite behaviour of reactive systems and, in general, properties of infinite sequences of events. The semantics of ω-programs is an extension of the
perfect model semantics. We present a general methodology based on an extension of the unfold/fold transformation rules which can be used for verifying properties of ω-programs. Then we propose a strategy for
the mechanical application of those rules and we demonstrate the power of that strategy by proving some properties of ω-regular languages and Büchi automata.
Alberto Pettorossi
Maurizio Proietti
Valerio Senni
valerio.senni@imtlucca.it
2012-09-18T13:23:48Z
2013-03-07T12:56:25Z
http://eprints.imtlucca.it/id/eprint/1358
2012-09-18T13:23:48Z
Program Specialization for Verifying Infinite State Systems: An Experimental Evaluation
We address the problem of the automated verification of temporal properties of infinite state reactive systems. We present some improvements of a verification method based on the specialization of constraint logic programs (CLP). First, we reformulate the verification method as a two-phase procedure: (1) in the first phase a CLP specification of an infinite state system is specialized with respect to the initial state of the system and the temporal property to be verified, and (2) in the second phase the specialized program is evaluated by using a bottom-up strategy. In this paper we propose some new strategies for performing program specialization during the first phase. We evaluate the effectiveness of these new strategies, as well as that of some old strategies, by presenting the results of experiments performed on several infinite state systems and temporal properties. Finally, we compare the implementation of our specialization-based verification method with various constraint-based model checking tools. The experimental results show that our method is effective and competitive with respect to the methods used in those other tools.
Fabio Fioravanti
Alberto Pettorossi
Maurizio Proietti
Valerio Senni
valerio.senni@imtlucca.it
2012-09-18T13:12:39Z
2013-03-07T12:56:25Z
http://eprints.imtlucca.it/id/eprint/1357
2012-09-18T13:12:39Z
Program transformation for development, verification, and synthesis of programs
This paper briefly describes the use of the program transformation methodology for the development of correct and efficient programs. In particular, we will refer to the case of constraint logic programs and, through some examples, we will show how by program transformation, one can improve, synthesize, and verify programs.
Fabio Fioravanti
Alberto Pettorossi
Maurizio Proietti
Valerio Senni
valerio.senni@imtlucca.it
2012-09-18T12:52:48Z
2013-03-07T12:56:25Z
http://eprints.imtlucca.it/id/eprint/1356
2012-09-18T12:52:48Z
Improving Reachability Analysis of Infinite State Systems by Specialization
We consider infinite state reactive systems specified by using linear constraints over the integers, and we address the problem of verifying safety properties of these systems by applying reachability analysis techniques. We propose a method based on program specialization, which improves the effectiveness of the backward and forward reachability analyses. For backward reachability our method consists in: (i) specializing the reactive system with respect to the initial states, and then (ii) applying to the specialized system a reachability analysis that works backwards from the unsafe states. For forward reachability our method works as for backward reachability, except that the role of the initial states and the unsafe states are interchanged. We have implemented our method using the MAP transformation system and the ALV verification system. Through various experiments performed on several infinite state systems, we have shown that our specialization-based verification technique considerably increases the number of successful verifications without significantly degrading the time performance.
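The backward phase can be pictured on a finite toy graph. The paper works symbolically, with linear constraints over the integers standing for infinite sets of states; the explicit states and transitions below are invented purely for illustration.

```python
# Toy backward reachability as a fixpoint: start from the unsafe states and
# repeatedly add predecessors until nothing changes.
def backward_reachable(transitions, unsafe):
    """Return all states from which some unsafe state is reachable."""
    reach = set(unsafe)
    changed = True
    while changed:
        changed = False
        for (src, dst) in transitions:
            if dst in reach and src not in reach:
                reach.add(src)
                changed = True
    return reach

# Made-up transition relation; "s2" is the unsafe state.
transitions = {("s0", "s1"), ("s1", "s2"), ("s3", "s4")}
print(sorted(backward_reachable(transitions, {"s2"})))  # → ['s0', 's1', 's2']
# The system is safe iff no initial state occurs in this set.
```

Specialization, in the paper's setting, reshapes the symbolic program before this analysis runs, so that the fixpoint computation converges more often; the toy fixpoint itself is unchanged by that step.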
Fabio Fioravanti
Alberto Pettorossi
Maurizio Proietti
Valerio Senni
valerio.senni@imtlucca.it
2012-09-18T12:20:53Z
2013-03-07T12:56:25Z
http://eprints.imtlucca.it/id/eprint/1355
2012-09-18T12:20:53Z
Modular Termination and Combinability for Superposition Modulo Counter Arithmetic
Modularity is a highly desirable property in the development of satisfiability procedures. In this paper we are interested in using a dedicated superposition calculus to develop satisfiability procedures for (unions of) theories sharing counter arithmetic. In the first place, we are concerned with the termination of this calculus for theories representing data structures and their extensions. To this purpose, we prove a modularity result for termination which allows us to use our superposition calculus as a satisfiability procedure for combinations of data structures. In addition, we present a general combinability result that permits us to use our satisfiability procedures into a non-disjoint combination method à la Nelson-Oppen without loss of completeness. This latter result is useful whenever data structures are combined with theories for which superposition is not applicable, like theories of arithmetic.
Christophe Ringeissen
Valerio Senni
valerio.senni@imtlucca.it
2012-09-18T12:09:21Z
2013-03-07T12:56:25Z
http://eprints.imtlucca.it/id/eprint/1354
2012-09-18T12:09:21Z
Controlling Polyvariance for Specialization-based Verification
We present some extensions of a method for verifying safety
properties of infinite state reactive systems. Safety properties are specified by constraint logic programs encoding (backward or forward) reachability algorithms. These programs are transformed, before their use for checking safety, by specializing them with respect to the initial states (in the case of backward reachability) or with respect to the unsafe states (in the case of forward reachability). In particular, we present a specialization
strategy which is more general than previous proposals and we show, through some experiments performed on several infinite state reactive systems, that by using the specialized reachability programs obtained by our new strategy, we considerably increase the number of successful
verifications. Then we show that the specialization time, the size of the specialized program, and the number of successful verifications may vary, depending on the polyvariance introduced by the specialization, that is,
the set of specialized predicates which have been introduced. Finally, we propose a general framework for controlling polyvariance and we use our set of examples of infinite state reactive systems to compare in an experimental way various control strategies one may apply in practice.
Fabio Fioravanti
Alberto Pettorossi
Maurizio Proietti
Valerio Senni
valerio.senni@imtlucca.it
2012-09-18T10:43:10Z
2013-03-07T12:56:25Z
http://eprints.imtlucca.it/id/eprint/1353
2012-09-18T10:43:10Z
Constraint-Based Correctness Proofs for Logic Program Transformations
Many approaches proposed in the literature for proving the correctness of unfold/fold transformations of logic programs make use of measures associated with program clauses. When from a program P1 we derive a program P2 by applying a sequence of transformations, suitable conditions on the measures of the clauses in P2 guarantee that the transformation of P1 into P2 is correct, that is, P1 and P2 have the same least Herbrand model. In the approaches proposed so far, clause measures are fixed in advance, independently of the transformations to be proved correct. In this paper we propose a method for the automatic generation of clause measures which, instead, takes into account the particular program transformation at hand. During the application of a sequence of transformations we construct a system of linear equalities and inequalities
over nonnegative integers whose unknowns are the clause measures to be found, and the correctness of the transformation is guaranteed by the satisfiability of that system. Through some examples we show that our method is more powerful and practical than other methods proposed in the literature. In particular, we are able to establish in a fully automatic way the correctness of program transformations which, by using other methods, are proved correct at the expense of fixing in advance sophisticated clause measures.
Alberto Pettorossi
Maurizio Proietti
Valerio Senni
valerio.senni@imtlucca.it
2012-09-14T15:24:30Z
2016-07-13T10:52:33Z
http://eprints.imtlucca.it/id/eprint/1350
2012-09-14T15:24:30Z
State space c-reductions for concurrent systems in rewriting logic
We present c-reductions, a simple, flexible and very general state space reduction technique that exploits an equivalence relation on states which is a bisimulation. Reduction is achieved by a canonizer function, which maps each state into a not necessarily unique canonical representative of its equivalence class. The approach contains symmetry reduction, name reuse, and name abstraction as special cases, and exploits the expressiveness of rewriting logic and its realization in Maude to automate c-reductions and to seamlessly integrate model checking and the discharging of correctness proof obligations. The performance of the approach has been validated over a set of representative case studies.
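The canonizer idea can be sketched in a few lines. In this made-up example the states of two symmetric processes are tuples, and sorting the tuple acts as the canonizer that picks one representative per symmetry class; the paper automates this within rewriting logic and Maude, not Python.

```python
# Sketch: exploration stores only canonical representatives of visited states.
def explore(initial, step, canonize=lambda s: s):
    """Search the state space, recording one representative per class."""
    seen = {canonize(initial)}
    frontier = [initial]
    while frontier:
        state = frontier.pop()
        for succ in step(state):
            c = canonize(succ)
            if c not in seen:
                seen.add(c)
                frontier.append(succ)
    return seen

# Two symmetric counters, each incrementable up to 2 (made-up system).
def step(state):
    return [tuple(min(v + 1, 2) if i == j else v for i, v in enumerate(state))
            for j in range(len(state))]

full = explore((0, 0), step)                                   # all 9 pairs
reduced = explore((0, 0), step, canonize=lambda s: tuple(sorted(s)))
print(len(full), len(reduced))  # → 9 6: canonization shrinks the state space
```

Because sorting identifies states that differ only by swapping the two symmetric counters, and swapping is a bisimulation here, the reduced exploration preserves the properties being checked while visiting fewer states.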
Alberto Lluch-Lafuente
alberto.lluch@imtlucca.it
José Meseguer
Andrea Vandin
andrea.vandin@imtlucca.it
2012-09-14T14:53:15Z
2013-06-20T10:52:52Z
http://eprints.imtlucca.it/id/eprint/1349
2012-09-14T14:53:15Z
Generalization strategies for the verification of infinite state systems
We present a method for the automated verification of temporal properties of infinite state systems. Our verification method is based on the specialization of constraint logic programs (CLP) and works in two phases: (1) in the first phase, a CLP specification of an infinite state system is specialized with respect to the initial state of the system and the temporal property to be verified, and (2) in the second phase, the specialized program is evaluated by using a bottom-up strategy. The effectiveness of the method strongly depends on the generalization strategy which is applied during the program specialization phase. We consider several generalization strategies obtained by combining techniques already known in the field of program analysis and program transformation, and we also introduce some new strategies. Then, through many verification experiments, we evaluate the effectiveness of the generalization strategies we have considered. Finally, we compare the implementation of our specialization-based verification method to other constraint-based model checking tools. The experimental results show that our method is competitive with the methods used by those other tools.
Fabio Fioravanti
Alberto Pettorossi
Maurizio Proietti
Valerio Senni
valerio.senni@imtlucca.it
2012-09-14T14:18:02Z
2013-03-07T12:56:25Z
http://eprints.imtlucca.it/id/eprint/1348
2012-09-14T14:18:02Z
Generation of Test Data Structures Using Constraint Logic Programming
The goal of Bounded-Exhaustive Testing (BET) is the automatic generation of all the test cases satisfying a given invariant, within a given bound. When the input has a complex structure, the development of correct and efficient generators becomes a very challenging task. In this paper we use Constraint Logic Programming (CLP) to systematically develop generators of structurally complex test data. Similarly to filtering-based test generation, we follow a declarative approach which allows us to separate the issue of (i) defining the test structure and invariant, from that of (ii) generating admissible test input instances. This separation helps improve the correctness of the developed test case generators. However, in contrast with filtering approaches, we rely on a symbolic representation and we take advantage of efficient search strategies provided by CLP systems for generating test instances. Through some experiments on examples taken from the literature on BET, we show that CLP, by combining the use of constraints and recursion, allows one to write intuitive and easily understandable test generators. We also show that these generators can be much more efficient than those built using ad-hoc filtering-based test generation tools like Korat.
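A rough Python analogue of the contrast drawn in the abstract (the paper itself uses CLP; the invariant "sorted list" and the bounds below are illustrative): a filtering generator enumerates every candidate and discards the invalid ones, while a constraint-driven generator extends only admissible prefixes, as a CLP system would.

```python
# Bounded-exhaustive generation of all sorted lists over {0,1,2}, length <= 3.
from itertools import product

BOUND = 3
DOMAIN = range(3)

def by_filtering():
    """Enumerate every candidate, then filter by the invariant."""
    for n in range(BOUND + 1):
        for xs in product(DOMAIN, repeat=n):
            if list(xs) == sorted(xs):
                yield list(xs)

def by_construction(prefix=None, lo=0):
    """Extend only admissible prefixes: each element is >= the previous one."""
    prefix = [] if prefix is None else prefix
    yield prefix
    if len(prefix) < BOUND:
        for v in range(lo, len(DOMAIN)):
            yield from by_construction(prefix + [v], v)

filtered = sorted(map(tuple, by_filtering()))
constructed = sorted(map(tuple, by_construction()))
print(len(constructed), filtered == constructed)  # → 20 True
```

Both generators produce the same 20 structures, but the constructive one never builds an unsorted candidate; this is the efficiency gap, here in miniature, that the paper measures against filtering-based tools such as Korat.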
Valerio Senni
valerio.senni@imtlucca.it
Fabio Fioravanti
2012-09-14T14:10:59Z
2013-03-07T12:56:25Z
http://eprints.imtlucca.it/id/eprint/1347
2012-09-14T14:10:59Z
Constraint-based correctness proofs for logic program transformations
Many approaches proposed in the literature for proving the correctness of unfold/fold transformations of logic programs make use of measures associated with program clauses. When from a program P1 we derive a program P2 by applying a sequence of transformations, suitable conditions on the measures of the clauses in P2 guarantee that the transformation of P1 into P2 is correct, that is, P1 and P2 have the same least Herbrand model. In the approaches proposed so far, clause measures are fixed in advance, independently of the transformations to be proved correct. In this paper we propose a method for the automatic generation of clause measures which, instead, takes into account the particular program transformation at hand. During the application of a sequence of transformations we construct a system of linear equalities and inequalities over nonnegative integers whose unknowns are the clause measures to be found, and the correctness of the transformation is guaranteed by the satisfiability of that system. Through some examples we show that our method is more powerful and practical than other methods proposed in the literature. In particular, we are able to establish in a fully automatic way the correctness of program transformations which, by using other methods, are proved correct at the expense of fixing in advance sophisticated clause measures.
Alberto Pettorossi
Maurizio Proietti
Valerio Senni
valerio.senni@imtlucca.it
2012-09-13T10:58:51Z
2013-03-07T12:56:25Z
http://eprints.imtlucca.it/id/eprint/1345
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1345
2012-09-13T10:58:51Z
Improving Reachability Analysis of Infinite State Systems by Specialization
We consider infinite state reactive systems specified by using linear constraints over the integers, and we address the problem of verifying safety properties of these systems by applying reachability analysis techniques. We propose a method based on program specialization, which improves the effectiveness of the backward and forward reachability analyses. For backward reachability our method consists in: (i) specializing the reactive system with respect to the initial states, and then (ii) applying to the specialized system the reachability analysis that works backwards from the unsafe states. For reasons of efficiency, during specialization we make use of a relaxation from integers to reals. In particular, we test the satisfiability or entailment of constraints over the real numbers, while preserving the reachability properties of the reactive systems when constraints are interpreted over the integers. For forward reachability our method works as for backward reachability, except that the roles of the initial states and the unsafe states are interchanged. We have implemented our method using the MAP transformation system and the ALV verification system. Through various experiments performed on several infinite state systems, we have shown that our specialization-based verification technique considerably increases the number of successful verifications without a significant degradation of the time performance.
Fabio Fioravanti
Alberto Pettorossi
Maurizio Proietti
Valerio Senni
valerio.senni@imtlucca.it
2012-09-13T10:40:49Z
2013-03-07T12:56:24Z
http://eprints.imtlucca.it/id/eprint/1344
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1344
2012-09-13T10:40:49Z
Specification and Validation of Algorithms Generating Planar Lehman Words
Alain Giorgetti
Valerio Senni
valerio.senni@imtlucca.it
2012-09-13T10:29:29Z
2013-03-07T12:56:25Z
http://eprints.imtlucca.it/id/eprint/1343
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1343
2012-09-13T10:29:29Z
Using Real Relaxations during Program Specialization
We propose a program specialization technique for locally stratified CLP(ℤ) programs, that is, logic programs with linear constraints over the set ℤ of the integer numbers. For reasons of efficiency our technique makes use of a relaxation from integers to reals. We reformulate the familiar unfold/fold transformation rules for CLP programs so that: (i) the applicability conditions of the rules are based on the satisfiability or entailment of constraints over the set ℝ of the real numbers, and (ii) every application of the rules transforms a given program into a new program with the same perfect model constructed over ℤ. Then, we introduce a strategy which applies the transformation rules for specializing CLP(ℤ) programs with respect to a given query. Finally, we show that our specialization strategy can be applied for verifying properties of infinite state reactive systems specified by constraints over ℤ.
Fabio Fioravanti
Alberto Pettorossi
Maurizio Proietti
Valerio Senni
valerio.senni@imtlucca.it
2012-09-11T08:24:52Z
2014-01-13T10:32:53Z
http://eprints.imtlucca.it/id/eprint/1342
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1342
2012-09-11T08:24:52Z
Global networks of trade and bits
Considerable efforts have been made in recent years to produce detailed topologies of the Internet, but so far these data have been overlooked by economists. In this paper, we suggest that such information could be used to characterize both the size of the digital economy and outsourcing at country level. We analyse the topological structure of the network of trade in digital services (trade in bits) and compare it with the more traditional flow of manufactured goods across countries. To perform meaningful comparisons across networks with different characteristics, we define a stochastic benchmark for the number of connections among each country-pair, based on the hypergeometric distribution. Original data are filtered so that we only focus on the strongest, i.e. statistically significant, links. We find that trade in bits displays a sparser and less hierarchical network structure, which is more similar to trade in high-skill manufactured goods than total trade. Moreover, distance plays a more prominent role in shaping the network of international trade in physical goods than trade in digital services.
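A hypergeometric null model of this kind can be sketched as follows (an illustrative Python reconstruction, not the authors' code; the population parameters and the 0.05 cutoff are placeholder values): a country-pair link is kept only when the observed number of connections is unlikely under random mixing.

```python
from math import comb

def hypergeom_pmf(k, N, K, n):
    """P[X = k] when drawing n items without replacement from a
    population of N containing K 'successes'."""
    return comb(K, k) * comb(N - K, n - k) / comb(N, n)

def pvalue_at_least(k, N, K, n):
    """Right-tail probability P[X >= k]: chance of seeing k or more
    connections between a country pair under the random benchmark."""
    return sum(hypergeom_pmf(j, N, K, n) for j in range(k, min(K, n) + 1))

# keep the link only if the observed count is statistically significant
significant = pvalue_at_least(8, N=50, K=10, n=10) < 0.05
assert significant  # 8 of 10 draws hitting 10 successes in 50 is very unlikely
```

Filtering each pair this way yields the backbone of statistically significant links on which the network comparison is carried out.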
Massimo Riccaboni
massimo.riccaboni@imtlucca.it
Alessandro Rossi
Stefano Schiavo
2012-09-10T09:20:40Z
2012-09-10T09:20:40Z
http://eprints.imtlucca.it/id/eprint/1341
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1341
2012-09-10T09:20:40Z
A Game-Theoretic Analysis of Grid Job Scheduling
The Computational Grid is a well-established platform that promises to provide a vast range of heterogeneous resources for high performance computing. Efficient and effective resource management and Grid job scheduling are key requirements in order to optimize the use of the resources and to take full advantage of Grid systems. In this paper, we study the job scheduling problem in the Computational Grid by using a game-theoretic approach. Grid resources are usually owned by different organizations which may have different and possibly conflicting concerns. Thus it is a crucial objective to analyze potential scenarios where selfish or cooperative behaviors of organizations impact heavily on global Grid efficiency. To this purpose, we formulate a repeated non-cooperative job scheduling game, whose players are Grid sites and whose strategies are scheduling algorithms. We exploit the concept of Nash equilibrium to express a situation in which no player can gain any profit by unilaterally changing its strategy. We extend and complement our previous work by showing whether, under certain circumstances, each investigated strategy is a Nash equilibrium or not. In the negative case we give a counter-example, in the positive case we either give a formal proof or motivate our conjecture by experimental results supported by simulations and exhaustive search.
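The Nash-equilibrium condition used above, that no player gains by a unilateral deviation, can be checked mechanically for a finite two-player game. A minimal sketch (the payoff matrices below are an illustrative example, not data from the paper):

```python
def is_nash_2p(A, B, i, j):
    """A[i][j] is the row player's payoff, B[i][j] the column player's.
    (i, j) is a Nash equilibrium iff neither player can improve its own
    payoff by unilaterally switching to another strategy."""
    row_ok = all(A[k][j] <= A[i][j] for k in range(len(A)))
    col_ok = all(B[i][l] <= B[i][j] for l in range(len(B[0])))
    return row_ok and col_ok

# Prisoner's dilemma: strategy 0 = cooperate, 1 = defect
A = [[3, 0], [5, 1]]               # row player's payoffs
B = [[3, 5], [0, 1]]               # column player's payoffs
assert is_nash_2p(A, B, 1, 1)      # mutual defection is the Nash equilibrium
assert not is_nash_2p(A, B, 0, 0)  # mutual cooperation is not
```

The paper's setting is richer (repeated play, strategies that are whole scheduling algorithms), so equilibria there are established by proof or by simulation rather than by a simple matrix check, but the deviation test is the same.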
Maria Grazia Buscemi
m.buscemi@imtlucca.it
Ugo Montanari
Sonia Taneja
2012-09-06T12:57:20Z
2012-09-06T12:57:20Z
http://eprints.imtlucca.it/id/eprint/1340
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1340
2012-09-06T12:57:20Z
Numerical algorithm for nonlinear state feedback ℌ∞ optimal control problem
In this paper, a numerical algorithm based on the conjugate gradient method is presented for solving a finite-horizon min-max optimization problem arising in the ℌ∞ control of nonlinear systems. The feedback control and disturbance variables are formulated as a linear combination of basis functions. The proposed algorithm, which has a backward-in-time structure, directly finds very accurate approximations of these feedbacks. Benchmark examples with analytic solutions are provided to demonstrate the effectiveness of the proposed algorithm.
Vladimir Milic
Alberto Bemporad
alberto.bemporad@imtlucca.it
Josip Kasac
Zeljko Situm
2012-09-04T09:57:02Z
2012-09-04T09:57:02Z
http://eprints.imtlucca.it/id/eprint/1339
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1339
2012-09-04T09:57:02Z
Nonnegative Matrix Factorizations Performing Object Detection and Localization
We study the problem of detecting and localizing objects in still, gray-scale images making use of the part-based representation provided by nonnegative matrix factorizations. Nonnegative matrix factorization is an emerging example of subspace methods, able to extract interpretable parts from a set of template image objects and then to use them additively for describing individual objects. In this paper, we present a prototype system based on several nonnegative factorization algorithms, which differ in the additional properties imposed on the nonnegative representation of the data, in order to investigate whether any additional constraint produces better results in general object detection via nonnegative matrix factorizations.
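The baseline factorization underlying all such variants can be sketched with the standard Lee-Seung multiplicative updates for the Frobenius objective (a generic sketch, not the paper's specific algorithms; matrix sizes, rank and iteration count are illustrative):

```python
import numpy as np

def nmf(V, r, iters=200, eps=1e-9):
    """Factor a nonnegative matrix V (pixels x images) as V ~ W H with
    W, H entrywise nonnegative, via Lee-Seung multiplicative updates
    minimizing ||V - W H||_F. Columns of W act as additive 'parts'."""
    rng = np.random.default_rng(0)
    m, n = V.shape
    W = rng.random((m, r)) + eps
    H = rng.random((r, n)) + eps
    for _ in range(iters):
        # multiplicative updates preserve nonnegativity by construction
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

V = np.random.default_rng(1).random((20, 15))  # 15 tiny 'images' of 20 pixels
W, H = nmf(V, r=5)
assert (W >= 0).all() and (H >= 0).all()
```

The variants studied in the paper add further constraints (e.g. sparseness or locality) on top of this nonnegative representation.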
Gabriella Casalino
Nicoletta Del Buono
Massimo Minervini
massimo.minervini@imtlucca.it
2012-09-04T09:02:27Z
2013-03-05T15:09:06Z
http://eprints.imtlucca.it/id/eprint/1336
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1336
2012-09-04T09:02:27Z
Channel protection for H.264 compression in transportation video surveillance applications
The compression of video and subsequent partial loss of the compressed bitstream can dramatically reduce the accuracy of automated tracking algorithms. This is problematic for centralized applications such as transportation surveillance systems, where remotely captured and compressed video is transmitted over lossy wireless links to a central location for tracking. We propose a low-complexity method for protecting compressed video against channel loss such that the tracking accuracy of decoded and concealed video is maximized. Our algorithm leverages a previous method of video processing that removes components of low tracking interest before compression to minimize bitrate, and uses some of the bitrate savings to introduce redundancy into the transmitted bitstream to reduce the probability of information loss. We show using a common tracker and loss concealment algorithm that our system allows for up to 100% increased tracking accuracy at a given bitrate, or 90% bitrate savings for comparable tracking quality.
Eren Soyak
Sotirios A. Tsaftaris
sotirios.tsaftaris@imtlucca.it
Aggelos K. Katsaggelos
2012-07-30T11:14:52Z
2016-04-07T09:29:17Z
http://eprints.imtlucca.it/id/eprint/1328
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1328
2012-07-30T11:14:52Z
Web Search Queries Can Predict Stock Market Volumes
We live in a computerized and networked society where many of our actions leave a digital trace and affect other people’s actions. This has led to the emergence of a new data-driven research field: mathematical methods of computer science, statistical physics and sociometry provide insights on a wide range of disciplines ranging from social science to human mobility. A recent important discovery is that search engine traffic (i.e., the number of requests submitted by users to search engines on the www) can be used to track and, in some cases, to anticipate the dynamics of social phenomena. Successful examples include unemployment levels, car and home sales, and epidemics spreading. A few recent works have applied this approach to stock prices and market sentiment. However, it remains unclear if trends in financial markets can be anticipated by the collective wisdom of on-line users on the web. Here we show that daily trading volumes of stocks traded in NASDAQ-100 are correlated with daily volumes of queries related to the same stocks. In particular, query volumes anticipate in many cases peaks of trading by one day or more. Our analysis is carried out on a unique dataset of queries, submitted to an important web search engine, which enables us to also investigate user behavior. We show that the query volume dynamics emerges from the collective but seemingly uncoordinated activity of many users. These findings contribute to the debate on the identification of early warnings of financial systemic risk, based on the activity of users of the www.
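The "queries anticipate trading" claim boils down to a lagged correlation test, which can be sketched as follows (a hypothetical illustration on synthetic data; the paper's actual query and NASDAQ-100 datasets are not reproduced here):

```python
import numpy as np

def lagged_corr(query, trading, lag):
    """Pearson correlation between query volume on day t-lag and trading
    volume on day t; a positive lag means queries lead trading."""
    q, t = np.asarray(query, float), np.asarray(trading, float)
    if lag > 0:
        q, t = q[:-lag], t[lag:]
    return np.corrcoef(q, t)[0, 1]

# synthetic example: trading echoes the previous day's queries plus noise
rng = np.random.default_rng(0)
queries = rng.random(200)
trading = np.concatenate(([0.5], queries[:-1])) + 0.05 * rng.random(200)
assert lagged_corr(queries, trading, lag=1) > lagged_corr(queries, trading, lag=0)
```

In this toy series the one-day-lagged correlation dominates the contemporaneous one, which is the signature of anticipation the paper looks for in real data.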
Ilaria Bordino
Stefano Battiston
Guido Caldarelli
guido.caldarelli@imtlucca.it
Matthieu Cristelli
Antti Ukkonen
Ingmar Weber
2012-07-24T13:28:47Z
2013-04-19T12:42:25Z
http://eprints.imtlucca.it/id/eprint/1323
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1323
2012-07-24T13:28:47Z
A uniform framework for modelling nondeterministic, probabilistic, stochastic, or mixed processes and their behavioral equivalences
Labeled transition systems are typically used as behavioral models of concurrent processes, with the labeled transitions defining a one-step state-to-state reachability relation. This model can be generalized by modifying the transition relation to associate a state reachability distribution, rather than a single target state, with any pair of source state and transition label. The state reachability distribution becomes a function mapping each possible target state to a value that expresses the degree of one-step reachability of that state. Values are taken from a preordered set equipped with a minimum that denotes unreachability. By selecting suitable preordered sets, the resulting model, called ULTraS from Uniform Labeled Transition System, can be specialized to capture well-known models of fully nondeterministic processes (LTS), fully probabilistic processes (ADTMC), fully stochastic processes (ACTMC), and of nondeterministic and probabilistic (MDP) or nondeterministic and stochastic (CTMDP) processes. This uniform treatment of different behavioral models extends to behavioral equivalences. These can be defined on ULTraS by relying on appropriate measure functions that express the degree of reachability of a set of states when performing single-step or multi-step computations. It is shown that the specializations of bisimulation, trace, and testing equivalences for the different classes of ULTraS coincide with the behavioral equivalences defined in the literature over traditional models.
Marco Bernardo
Rocco De Nicola
r.denicola@imtlucca.it
Michele Loreti
2012-07-24T13:23:26Z
2013-04-19T12:42:07Z
http://eprints.imtlucca.it/id/eprint/1322
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1322
2012-07-24T13:23:26Z
A uniform definition of stochastic process calculi
We introduce a unifying framework to provide the semantics of process algebras, including their quantitative variants useful for modeling quantitative aspects of behaviors. The unifying framework is then used to describe some of the most representative stochastic process algebras. This provides a general and clear support for an understanding of their similarities and differences. The framework is based on State to Function Labeled Transition Systems, FuTSs for short, that are state-transition structures where each transition is a triple of the form (s, α, P). The first and the second components are the source state, s, and the label, α, of the transition, while the third component is the continuation function, P, associating a value of a suitable type to each state s′. For example, in the case of stochastic process algebras the value of the continuation function on s′ represents the rate of the negative exponential distribution characterizing the duration/delay of the action performed to reach state s′ from s. We first provide the semantics of a simple formalism used to describe Continuous-Time Markov Chains, then we model a number of process algebras that permit parallel composition of models according to the two main interaction paradigms (multiparty and one-to-one synchronization). Finally, we deal with formalisms where actions and rates are kept separate and address the issues related to the coexistence of stochastic, probabilistic, and non-deterministic behaviors. For each formalism, we establish the formal correspondence between the FuTSs semantics and its original semantics.
Rocco De Nicola
r.denicola@imtlucca.it
Diego Latella
Michele Loreti
Mieke Massink
2012-07-11T08:08:58Z
2014-01-29T14:51:34Z
http://eprints.imtlucca.it/id/eprint/1317
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1317
2012-07-11T08:08:58Z
Towards the specification and verification of modal properties for structured systems
System specification formalisms should come with suitable property specification languages and effective verification tools. We sketch a framework for the verification of quantified temporal properties of systems with dynamically evolving structure. We consider visual specification formalisms like graph transformation systems (GTS), where program states are modelled as graphs and the program behavior is specified by graph transformation rules. The state space of a GTS can be represented as a graph transition system (GTrS), i.e. a transition system with states and transitions labelled, respectively, with a graph, and with a partial morphism representing the evolution of state components. Unfortunately, GTrSs are prohibitively large or infinite even for simple systems, making verification intractable and hence calling for appropriate abstraction techniques.
Andrea Vandin
andrea.vandin@
2012-06-29T12:28:49Z
2016-07-13T09:49:50Z
http://eprints.imtlucca.it/id/eprint/1293
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1293
2012-06-29T12:28:49Z
State space c-reductions for concurrent systems in rewriting logic
We present c-reductions, a simple, flexible and very general state space reduction technique that exploits an equivalence relation on states that is a bisimulation. Reduction is achieved by a canonizer function, which maps each state into a (not necessarily unique) canonical representative of its equivalence class. The approach subsumes symmetry reduction, name reuse, and name abstraction as special cases, and exploits the expressiveness of rewriting logic and its realization in Maude to automate c-reductions and to seamlessly integrate model checking and the discharging of correctness proof obligations. The performance of the approach has been validated over a set of representative case studies.
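The canonizer idea can be sketched outside rewriting logic as well; the following hypothetical Python fragment (not the Maude implementation) shows the symmetry-reduction special case, where states that are permutations of one another are bisimilar and the sorted tuple serves as the canonical representative:

```python
def canonize(state):
    """Canonizer for full process symmetry: pick the sorted tuple as the
    unique representative of each permutation equivalence class."""
    return tuple(sorted(state))

def reduced_reachable(init, step):
    """Explore the state space, storing only canonical representatives."""
    seen, frontier = {canonize(init)}, [canonize(init)]
    while frontier:
        s = frontier.pop()
        for t in step(s):
            c = canonize(t)
            if c not in seen:
                seen.add(c)
                frontier.append(c)
    return seen

# two symmetric processes, each cycling through local states 0 -> 1 -> 2 -> 0
step = lambda s: [tuple((v + 1) % 3 if i == j else v for i, v in enumerate(s))
                  for j in range(len(s))]
assert len(reduced_reachable((0, 0), step)) < 9  # 9 = unreduced 3 x 3 space
```

Here the reduced space has 6 states (multisets of two local states) instead of 9; with more symmetric processes the savings grow factorially.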
Alberto Lluch-Lafuente
alberto.lluch@imtlucca.it
José Meseguer
Andrea Vandin
andrea.vandin@imtlucca.it
2012-06-29T11:15:48Z
2014-01-29T14:27:01Z
http://eprints.imtlucca.it/id/eprint/1291
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1291
2012-06-29T11:15:48Z
Evaluating the performance of model transformation styles in Maude
Rule-based programming has been shown to be very successful in many application areas. Two prominent examples are the specification of model transformations in model driven development approaches and the definition of structured operational semantics of formal languages. General rewriting frameworks such as Maude are flexible enough to allow the programmer to adopt and mix various rule styles. The choice between styles can be biased by the programmer’s background. For instance, experts in visual formalisms might prefer graph-rewriting styles, while experts in semantics might prefer structurally inductive rules. This paper evaluates the performance of different rule styles on a significant benchmark taken from the literature on model transformation. Depending on the actual transformation being carried out, our results show that different rule styles can offer drastically different performances. We point out the situations from which each rule style benefits to offer a valuable set of hints for choosing one style over the other.
Roberto Bruni
Alberto Lluch-Lafuente
alberto.lluch@imtlucca.it
2012-06-29T11:10:58Z
2016-07-13T09:49:16Z
http://eprints.imtlucca.it/id/eprint/1292
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1292
2012-06-29T11:10:58Z
Exploiting over- and under-approximations for infinite-state counterpart models
Software systems with dynamic topology are often infinite-state. Paradigmatic examples are those modeled as graph transformation systems (GTSs) with rewrite rules that allow an unbounded creation of items. For such systems, verification can become intractable, thus calling for the development of approximation techniques that may ease the verification at the cost of losing precision and completeness. Both over- and under-approximations have been considered in the literature, respectively offering more and fewer behaviors than the original system. At the same time, properties of the system may be either preserved or reflected by a given approximation. In this paper we propose a general notion of approximation that captures some of the existing approaches for GTSs. Formulae are specified by a generic quantified modal logic, one that also generalizes many specification logics adopted in the literature for GTSs. We also propose a type system to denote part of the formulae as either reflected or preserved, together with a technique that exploits under- and over-approximations to reason about typed as well as untyped formulae.
Alberto Lluch-Lafuente
alberto.lluch@imtlucca.it
Fabio Gadducci
Andrea Vandin
andrea.vandin@imtlucca.it
2012-05-07T09:12:53Z
2012-05-07T09:12:53Z
http://eprints.imtlucca.it/id/eprint/1266
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1266
2012-05-07T09:12:53Z
Revisiting Trace and Testing Equivalences for Nondeterministic and Probabilistic Processes
One of the most studied extensions of testing theory to nondeterministic and probabilistic processes yields unrealistic probability estimates that give rise to two anomalies. First, probabilistic testing equivalence does not imply probabilistic trace equivalence. Second, probabilistic testing equivalence differentiates processes that perform the same sequence of actions with the same probability but make internal choices in different moments and thus, when applied to processes without probabilities, does not coincide with classical testing equivalence. In this paper, new versions of probabilistic trace and testing equivalences are presented for nondeterministic and probabilistic processes that resolve the two anomalies. Instead of focussing only on suprema and infima of the set of success probabilities of resolutions of interaction systems, our testing equivalence matches all the resolutions on the basis of the success probabilities of their identically labeled computations. A simple spectrum is provided to relate the new relations with existing ones. It is also shown that, with our approach, the standard probabilistic testing equivalences for generative and reactive probabilistic processes can be retrieved.
Marco Bernardo
Rocco De Nicola
r.denicola@imtlucca.it
Michele Loreti
2012-04-26T10:50:14Z
2012-07-06T12:20:13Z
http://eprints.imtlucca.it/id/eprint/1264
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1264
2012-04-26T10:50:14Z
Assessment of non-centralised model predictive control techniques for electrical power networks
Model predictive control (MPC) is one of the few advanced control methodologies that have proven to be very successful in real-life applications. An attractive feature of MPC is its capability of explicitly taking state and input constraints into account. Recently, there has been an increasing interest in the usage of MPC schemes to control electrical power networks. The major obstacle for implementation lies in the large scale of these systems, which is prohibitive for a centralised approach. In this article, we therefore assess and compare the suitability of several non-centralised predictive control schemes for power balancing, to provide valuable insights that can contribute to the successful implementation of non-centralised MPC in the real-life electrical power system.
Ralph M. Hermans
Andrej Jokic
Mircea Lazar
Alessandro Alessio
Paul Van den bosch
Ian Hiskens
Alberto Bemporad
alberto.bemporad@imtlucca.it
2012-04-18T12:49:49Z
2013-03-12T09:32:21Z
http://eprints.imtlucca.it/id/eprint/1263
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1263
2012-04-18T12:49:49Z
Myocardial Blood-Oxygen-Level-Dependent Magnetic Resonance Imaging with Balanced Steady-State Free Precession Imaging Approaches
The current state of myocardial Blood-Oxygen-Level-Dependent (BOLD) MRI with balanced steady-state free precession (SSFP) approaches is reviewed. Initial studies forming the basis for SSFP-based detection of oxygenation changes beginning with whole blood studies, progressing through controlled studies that consider microcirculatory changes in oxygenation in skeletal muscle and kidney, culminating in basic myocardial studies are outlined. The theoretical basis to observe signal changes and the mechanisms that facilitate such observations are elucidated. Methods to overcome limitations in sensitivity are described.
Rohan Dharmakumar
Sotirios A. Tsaftaris
sotirios.tsaftaris@imtlucca.it
Debiao Li
2012-04-13T09:21:05Z
2014-01-24T14:13:48Z
http://eprints.imtlucca.it/id/eprint/1261
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1261
2012-04-13T09:21:05Z
Contractual Testing
Variants of the must-testing approach have been successfully applied in Service Oriented Computing for capturing compliance between (contracts exposed by) a client and a service and for characterising safe replacement, namely the fact that compliance is preserved when a service exposing a ’smaller’ contract is replaced by another one with a ’larger’ contract. Nevertheless, in multi-party interactions, partners often lack full coordination capabilities. Such a scenario calls for less discriminating notions of testing in which observers are, e.g., descriptions of uncoordinated multiparty contexts or contexts that are unable to observe the complete behaviour of the process under test. In this paper we propose an extended notion of the must preorder, called the contractual preorder, according to which contracts are compared on the basis of their ability to pass only the tests belonging to a given set. We show the generality of our framework by proving that the preorders induced by existing notions of compliance in a distributed setting are instances of the contractual preorder when restricting to suitable sets of observers.
Maria Grazia Buscemi
m.buscemi@imtlucca.it
Rocco De Nicola
r.denicola@imtlucca.it
Hernán C. Melgratti
2012-04-04T09:31:53Z
2012-04-04T09:31:53Z
http://eprints.imtlucca.it/id/eprint/1257
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1257
2012-04-04T09:31:53Z
Synthesis of low-complexity stabilizing piecewise affine controllers: a control-Lyapunov function approach
Explicit model predictive controllers computed exactly by multi-parametric optimization techniques often lead to piecewise affine (PWA) state feedback controllers with highly complex and irregular partitionings of the feasible set. In many cases complexity prohibits the implementation of the resulting MPC control law for fast or large-scale systems. This paper presents a new approach to synthesize low-complexity PWA controllers on regular partitionings that enable fast on-line implementation with low memory requirements. Based on a PWA control-Lyapunov function, which can be obtained as the optimal cost for a constrained linear system corresponding to a stabilizing MPC setup, the synthesis procedure for the low-complexity control law boils down to local linear programming (LP) feasibility problems, which guarantee stability, constraint satisfaction, and certain performance requirements. Initially, the PWA controllers are computed on a fixed regular partitioning. However, we also present an automatic refinement procedure to refine the partitioning where necessary in order to satisfy the design specifications. A numerical example shows the effectiveness of the novel approach.
Liang Lu
W.P.M.H. Heemels
Alberto Bemporad
alberto.bemporad@imtlucca.it
2012-04-04T08:41:10Z
2012-04-04T08:41:10Z
http://eprints.imtlucca.it/id/eprint/1254
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1254
2012-04-04T08:41:10Z
Decentralized linear time-varying model predictive control of a formation of unmanned aerial vehicles
This paper proposes a hierarchical MPC approach to stabilization and autonomous navigation of a formation of unmanned aerial vehicles (UAVs), under constraints on motor thrusts, angles and positions, and under collision avoidance constraints. Each vehicle is of quadcopter type and is stabilized by a local linear time-invariant (LTI) MPC controller at the lower level of the control hierarchy around commanded desired set-points. These are generated at the higher level and at a slower sampling rate by a linear time-varying (LTV) MPC controller per vehicle, based on a simplified dynamical model of the stabilized UAV and a novel algorithm for convex under-approximation of the feasible space. Formation flying is obtained by running the above decentralized scheme in accordance with a leader-follower approach. The performance of the hierarchical control scheme is assessed through simulations, and compared to previous work in which a hybrid MPC scheme is used for planning paths on-line.
Alberto Bemporad
alberto.bemporad@imtlucca.it
Claudio Rocchi
2012-03-28T13:02:05Z
2012-04-03T07:49:05Z
http://eprints.imtlucca.it/id/eprint/1252
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1252
2012-03-28T13:02:05Z
An integer linear programming approach for radio-based localization of shipping containers in the presence of incomplete proximity information
The most advanced solutions that are currently adopted in ports and terminals use technologies based on radio frequency identification (RFID) and the Global Positioning System (GPS) to identify and localize shipping containers in the yard. Nevertheless, because of the limitations of these solutions, the position of containers is still affected by errors, and it cannot be determined in real time. In this paper, a nonconventional approach is presented: Each container is equipped with nodes that use wireless communication to detect neighbor containers and to send proximity information to a base station. At the base station, geometrical constraints and proximity data are combined to determine the positions of containers. Missing information due to faulty nodes is tolerated by modeling geometrical constraints as an integer linear programming problem. Numerical simulations show that most of the containers can be localized, even when the number of nodes that are affected by faults is on the order of 30%.
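The constraint model can be illustrated on a toy instance; the paper formulates it as an integer linear program, whereas the hypothetical Python sketch below substitutes exhaustive search over a tiny one-dimensional yard (slot layout, container labels and adjacency rule are all illustrative):

```python
from itertools import permutations

def localize(n_slots, proximity):
    """Toy analogue of the constraint model: assign containers 0..n-1 to
    yard slots 0..n_slots-1 so that every reported proximity pair ends
    up in neighbouring slots (|slot_a - slot_b| == 1). Missing reports
    from faulty nodes are simply absent constraints, so they reduce
    information without making the model infeasible."""
    n = len({c for pair in proximity for c in pair})
    for assign in permutations(range(n_slots), n):
        if all(abs(assign[a] - assign[b]) == 1 for a, b in proximity):
            return dict(enumerate(assign))
    return None  # no slot assignment satisfies all proximity reports

# containers 0-1 and 1-2 reported adjacent; the 0-2 node is faulty and silent
sol = localize(n_slots=3, proximity=[(0, 1), (1, 2)])
assert sol is not None and abs(sol[0] - sol[1]) == 1
```

An ILP formulation scales this idea to realistic yards by encoding the same adjacency constraints as linear inequalities over integer position variables.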
Stefano Abbate
stefano.abbate@alumni.imtlucca.it
Marco Avvenuti
Paolo Corsini
Barbara Panicucci
Mauro Passacantando
Alessio Vecchio
2012-03-28T12:54:38Z
2012-04-03T07:52:37Z
http://eprints.imtlucca.it/id/eprint/1251
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1251
2012-03-28T12:54:38Z
Localization of Shipping Containers in Ports and Terminals Using Wireless Sensor Networks
The most advanced logistics solutions that are currently adopted in ports and terminals use RFID- and GPS-based technologies to identify and localize shipping containers in the yard. Nevertheless, because of the limits of these techniques, the position of containers is still affected by errors, or it cannot be determined in real time. We propose a non-conventional approach where the position of containers can be continuously determined by means of a wireless sensor network. Each container is equipped with a number of nodes that use wireless communication to detect neighbor containers. At the base station, geometrical constraints and proximity data are combined to determine the relative positions of containers.
Stefano Abbate
stefano.abbate@alumni.imtlucca.it
Marco Avvenuti
Paolo Corsini
Alessio Vecchio
2012-03-28T10:58:54Z
2012-04-03T07:51:16Z
http://eprints.imtlucca.it/id/eprint/1247
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1247
2012-03-28T10:58:54Z
Developing cognitive decline baseline for normal ageing from sleep-EEG monitoring using wireless neurosensor devices
Sleep has a well-organized and consistent structure and hence can be a valuable instrument for investigating cognitive decline with ageing, and other neurological disorders. The abnormality in brain function can be observed through changes in sleep patterns and brain signals in electroencephalograph (EEG) recordings. In this study, EEGs are captured through different sleep stages from different age groups, to develop a baseline for normal healthy ageing. Threshold values are defined and extracted from sleep spindles' amplitude and frequency characteristics. These values can then be compared with abnormal EEGs in progressive neurodegenerative subjects to identify the progression of the disease leading to decline in cognition and related mobility problems. Cognitive decline indicators derived from this preliminary study will be useful to build intelligence into non-invasive wireless monitoring systems.
Janet Light
Xiaoyi Li
Stefano Abbate
stefano.abbate@alumni.imtlucca.it
2012-03-28T10:49:35Z
2012-04-03T07:50:44Z
http://eprints.imtlucca.it/id/eprint/1246
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1246
2012-03-28T10:49:35Z
MIMS: A Minimally Invasive Monitoring Sensor Platform
This paper describes a minimally invasive sensor platform for active and passive monitoring of human movements and physiological signals. Such a system is needed in cases where 24×7 monitoring is required, as in older adults with cognitive impairment, dementia and Alzheimer's disease. The passive monitoring systems used today are useful only in detecting events after they happen; the accuracy and speed of detection is questionable. The noninvasive nature of such systems does not bring trade-off benefits to early detection and prevention of emergency incidents. We compare some existing sensor platforms and present our monitoring approach using minimally invasive wearable sensor device(s). With a Minimally Invasive Monitoring Sensor (MIMS), using advanced intelligent systems, we analyze the physiological signal data preceding potential emergency events in order to predict them quickly. The Virtual Hub is the core component of MIMS, which acts as a gateway between a monitored person and her/his caregivers, as well as a shared access point between active and passive sensing devices. Some preliminary results are presented here from our sleep-related fall study using two heterogeneous sensor systems.
Stefano Abbate
stefano.abbate@alumni.imtlucca.it
Marco Avvenuti
Janet Light
2012-03-27T09:53:56Z
2014-07-28T12:21:38Z
http://eprints.imtlucca.it/id/eprint/1244
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1244
2012-03-27T09:53:56Z
Quantitative information flow, with a view
We put forward a general model intended for assessment of system security against passive eavesdroppers, both quantitatively (how much information is leaked) and qualitatively (what properties are leaked). To this purpose, we extend information hiding systems (IHS), a model where the secret-observable relation is represented as a noisy channel, with views: basically, partitions of the state space. Given a view W and n independent observations of the system, one is interested in the probability that a Bayesian adversary wrongly predicts the class of W the underlying secret belongs to. We offer results that allow one to easily characterise the behaviour of this error probability as a function of the number of observations, in terms of the channel matrices defining the IHS and the view W. In particular, we provide expressions for the limit value as n → ∞, show by tight bounds that convergence is exponential, and also characterise the rate of convergence to predefined error thresholds. We then show a few instances of statistical attacks that can be assessed by a direct application of our model: attacks against modular exponentiation that exploit timing leaks, against anonymity in mix-nets and against privacy in sparse datasets.
Michele Boreale
Francesca Pampaloni
francesca.pampaloni@imtlucca.it
Michela Paolini
michela.paolini@alumni.imtlucca.it
2012-03-27T09:38:21Z
2014-07-28T12:21:19Z
http://eprints.imtlucca.it/id/eprint/1243
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1243
2012-03-27T09:38:21Z
Asymptotic information leakage under one-try attacks
We study the asymptotic behaviour of (a) information leakage and (b) adversary’s error probability in information hiding systems modelled as noisy channels. Specifically, we assume the attacker can make a single guess after observing n independent executions of the system, throughout which the secret information is kept fixed. We show that the asymptotic behaviour of quantities (a) and (b) can be determined in a simple way from the channel matrix. Moreover, simple and tight bounds on them as functions of n show that the convergence is exponential. We also discuss feasible methods to evaluate the rate of convergence. Our results cover both the Bayesian case, where a prior probability distribution on the secrets is assumed known to the attacker, and the maximum-likelihood case, where the attacker does not know such distribution. In the Bayesian case, we identify the distributions that maximize the leakage. We consider both the min-entropy setting studied by Smith and the additive form recently proposed by Braun et al., and show the two forms do agree asymptotically. Next, we extend these results to a more sophisticated eavesdropping scenario, where the attacker can perform a (noisy) observation at each state of the computation and the systems are modelled as hidden Markov models.
Michele Boreale
Francesca Pampaloni
francesca.pampaloni@imtlucca.it
Michela Paolini
michela.paolini@alumni.imtlucca.it
2012-03-06T13:33:24Z
2013-09-30T12:27:16Z
http://eprints.imtlucca.it/id/eprint/1221
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1221
2012-03-06T13:33:24Z
A dynamic obstacle avoidance strategy for a mobile robot based on sliding mode control
In this paper, a dynamic obstacle avoidance strategy for mobile robots is proposed. The strategy consists of two key elements: an on-line reference generator and a control scheme to make the robot track the reference signals so as to reach a pre-specified goal point. To generate the online reference signals, a harmonic potential field for dynamic environments is exploited. The potential field is modified on-line, in order to make the robot avoid collisions with obstacles that move along trajectories not known a priori, with time-varying speed. The proposed multi-level sliding mode controller is capable of making the robot track the prescribed reference signals determined by the trajectory generator. The simulation results confirm the good performance of this approach.
Antonella Ferrara
Matteo Rubagotti
2012-03-06T12:06:46Z
2013-09-30T12:27:36Z
http://eprints.imtlucca.it/id/eprint/1216
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1216
2012-03-06T12:06:46Z
A nonlinear model predictive control scheme with multirate integral sliding mode
In this paper, a hierarchical multirate control scheme for nonlinear discrete-time systems is proposed, composed of a robust model predictive controller (MPC) and a multirate integral sliding mode (MISM) controller. In particular, the MISM controller acts at a faster sampling rate than the MPC controller, and reduces the effect of model uncertainties and external disturbances, in order to obtain, at the next sampling instant of the MPC controller, a value of the system state that is as close as possible to the nominal one. To obtain this result, the control variable is composed of two parts: one generated by the MPC controller, and the other by the MISM controller. The a priori reduction of the disturbance terms turns out to be very useful in order to improve the convergence properties of the MPC controller.
Matteo Rubagotti
Davide Martino Raimondo
Colin Neil Jones
Lalo Magni
Antonella Ferrara
Manfred Morari
2012-03-05T13:49:24Z
2013-09-30T12:30:10Z
http://eprints.imtlucca.it/id/eprint/1213
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1213
2012-03-05T13:49:24Z
Sliding mode observers for sensorless control of current-fed induction motors
This paper presents the use of a higher order sliding mode scheme for sensorless control of induction motors. The second order sub-optimal control law is based on a reduced-order model of the motor, and produces the references for a current regulated PWM inverter. A nonlinear observer structure, based on Lyapunov theory and on different sliding mode techniques (first order, sub-optimal and super-twisting) generates the velocity and rotor flux estimates necessary for the controller, based only on the measurements of phase voltages and currents. The proposed control scheme and observers are tested on an experimental setup, showing a satisfactory performance.
Daniele Bullo
Antonella Ferrara
Matteo Rubagotti
2012-03-05T11:06:43Z
2012-04-04T09:21:01Z
http://eprints.imtlucca.it/id/eprint/1212
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1212
2012-03-05T11:06:43Z
A multi-stage stochastic optimization approach to optimal bidding on energy markets
One of the most challenging tasks for an energy producer is represented by the optimal bidding on energy markets. Each eligible plant has to submit bids for the spot market one day before the delivery time and bids for the ancillary services provision. Allocating the optimal amount of energy, jointly minimizing the risk and maximizing profits is not a trivial task, since one has to face several sources of stochasticity, such as the high volatility of energy prices and the uncertainty of the production, due to the deregulation and to the growing importance of renewable sources. In this paper the optimal bidding problem is formulated as a multi-stage optimization problem to be solved in a receding horizon fashion, where at each time step a risk measure is minimized in order to obtain optimal quantities to bid on the day ahead market, while reserving the remaining production to the ancillary market. Simulation results show the optimal bid profile for a trading day, based on stochastic models identified from historical data series from the Italian energy market.
Laura Puglia
Daniele Bernardini
daniele.bernardini@imtlucca.it
Alberto Bemporad
alberto.bemporad@imtlucca.it
2012-03-05T10:58:40Z
2013-02-12T12:12:49Z
http://eprints.imtlucca.it/id/eprint/1211
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1211
2012-03-05T10:58:40Z
Stability and invariance analysis of uncertain PWA systems based on linear programming
This paper analyzes stability of discrete-time uncertain piecewise-affine systems whose dynamics are defined on a bounded set χ that is not necessarily invariant. The objective is to prove the uniform asymptotic stability of the origin and to find an invariant domain of attraction. This goal is attained by defining a suitable extended dynamics (which is partially fictitious), and by using a numerical procedure based on linear programming. The theoretical results are based on the definition of a piecewise-affine, possibly discontinuous, Lyapunov function.
Sergio Trimboli
Matteo Rubagotti
Alberto Bemporad
alberto.bemporad@imtlucca.it
2012-03-02T15:49:32Z
2013-09-30T12:32:05Z
http://eprints.imtlucca.it/id/eprint/1210
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1210
2012-03-02T15:49:32Z
Second-order sliding-mode control of a mobile robot based on a harmonic potential field
This paper deals with the problem of controlling an autonomous wheeled vehicle which must move in its operative space and reach a prescribed goal point while avoiding collisions with obstacles. To comply with the nonholonomic nature of the system, a gradient-tracking approach is followed, so that a reference velocity and orientation are suitably generated during the vehicle motion. To track such references, two control laws are designed by suitably transforming the system model into a couple of auxiliary second-order uncertain systems, relying on which second-order sliding modes can be enforced. As a result, the control objective is attained by means of a continuous control law, so that the problems due to the so-called chattering effect, such as possible actuator wear or the induction of vibrations, typically associated with the use of sliding-mode control, are circumvented.
Antonella Ferrara
Matteo Rubagotti
2012-03-02T15:42:26Z
2013-09-30T12:25:51Z
http://eprints.imtlucca.it/id/eprint/1209
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1209
2012-03-02T15:42:26Z
A sub-optimal second order sliding mode controller for systems with saturating actuators
In this paper, the problem of the possible saturation of the continuous control variable in the sub-optimal second order sliding mode controller applied to relative degree one systems with saturating actuators is addressed. It is proved that during the sliding phase, if basic assumptions are made, the continuous control variable never saturates, while, during the reaching phase, the presence of saturating actuators can make the steering of the sliding variable to zero in finite time not always guaranteed. In the present paper, the original algorithm is modified in order to solve this problem: a new strategy is proposed, which proves to be able to steer the sliding variable to zero in a finite time in spite of the presence of saturating actuators.
Antonella Ferrara
Matteo Rubagotti
2012-03-02T15:30:33Z
2013-09-30T12:33:14Z
http://eprints.imtlucca.it/id/eprint/1208
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1208
2012-03-02T15:30:33Z
Robust model predictive control with integral sliding mode in continuous-time sampled-data nonlinear systems
This paper proposes a control strategy for nonlinear constrained continuous-time uncertain systems which combines robust model predictive control (MPC) with sliding mode control (SMC). In particular, the so-called Integral SMC approach is used to produce a control action aimed at reducing the difference between the nominal predicted dynamics of the closed-loop system and the actual one. In this way, the MPC strategy can be designed on a system with a reduced uncertainty. In order to prove the stability of the overall control scheme, some general regional input-to-state practical stability results for continuous-time systems are proved.
Matteo Rubagotti
Davide Martino Raimondo
Antonella Ferrara
Lalo Magni
2012-03-02T15:10:30Z
2013-09-30T12:37:32Z
http://eprints.imtlucca.it/id/eprint/1207
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1207
2012-03-02T15:10:30Z
Time-optimal sliding-mode control of a mobile robot in a dynamic environment
In this study, an original strategy to control a mobile robot in a dynamic environment is presented. The strategy consists of two main elements. The first is the method for the online trajectory generation based on harmonic potential fields, capable of generating velocity and orientation references, which extends classical results on harmonic potential fields for static environments to the case when a moving obstacle with unknown motion is present. The second is the design of sliding-mode controllers capable of making the controlled variables of the robot track both the velocity and the orientation references in minimum finite time.
Matteo Rubagotti
Marco L. Della Vedova
Antonella Ferrara
2012-03-02T14:54:31Z
2013-09-30T12:37:55Z
http://eprints.imtlucca.it/id/eprint/1206
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1206
2012-03-02T14:54:31Z
Integral sliding mode control for nonlinear systems with matched and unmatched perturbations
We consider the problem of designing an integral sliding mode controller to reduce the disturbance terms that act on nonlinear systems with state-dependent drift and input matrix. The general case of both matched and unmatched disturbances affecting the system is addressed. It is proved that the definition of a suitable sliding manifold and the generation of sliding modes upon it guarantees the minimization of the effect of the disturbance terms, which takes place when the matched disturbances are completely rejected and the unmatched ones are not amplified. A simulation of the proposed technique, applied to a dynamically feedback linearized unicycle, illustrates its effectiveness, even in the presence of nonholonomic constraints.
Matteo Rubagotti
Antonio Estrada
Fernando Castanos
Antonella Ferrara
Leonid Fridman
2012-03-02T14:47:20Z
2013-09-30T12:38:13Z
http://eprints.imtlucca.it/id/eprint/1205
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1205
2012-03-02T14:47:20Z
High-Speed piecewise affine virtual sensors
This paper proposes piecewise affine (PWA) virtual sensors for the estimation of unmeasured variables of nonlinear systems with unknown dynamics. The estimation functions are designed directly from measured inputs and outputs and have two important features. First, they enjoy convergence and optimality properties, based on classical results on parametric identification. Second, the PWA structure is based on a simplicial partition of the measurement space and allows one to implement the virtual sensor very effectively on a digital circuit. Due to the low cost of the required hardware for the implementation of such a particular structure and to the very high sampling frequencies that can be achieved, the approach is applicable to a wide range of industrial problems.
Tomaso Poggi
Matteo Rubagotti
Alberto Bemporad
alberto.bemporad@imtlucca.it
Marco Storace
2012-02-29T16:10:39Z
2012-02-29T16:10:39Z
http://eprints.imtlucca.it/id/eprint/1200
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1200
2012-02-29T16:10:39Z
Model predictive control with delay compensation for air-to-fuel ratio control
To meet increasingly stringent emission regulations, modern internal combustion engines require highly accurate control of the air-to-fuel ratio. The performance of the conventional air-to-fuel ratio feedback loop is limited by the combustion delay between fuel injection and engine exhaust, and by the transport delay for the exhaust gas to propagate to the air-to-fuel ratio sensor location. The combined delay is variable, since it depends on engine speed and airflow. Drivability, fuel economy and emission requirements result in constraints on the deviations of the air-to-fuel ratio, stored oxygen in the three-way catalyst, and fuel injection. This paper proposes an approach for air-to-fuel ratio control based on Model Predictive Control (MPC). The approach systematically handles both variable time delays and pointwise-in-time constraints. A delay-free model is considered first, which takes into account the dynamic relations between the injected fuel and the air-to-fuel ratio and the dynamics of the oxygen stored in the catalyst. For the delay-free model, the explicit MPC law is computed. Delay compensation is obtained by estimating the delay online from engine operating conditions, and feeding the MPC law with the state predicted ahead over the time interval of the estimated delay. The predicted state is computed by combining measurement filtering with forward iterations of the nonlinear dynamic equations of the model. The achieved performance in tracking the air-to-fuel ratio and the oxygen storage setpoints while enforcing the constraints is demonstrated in simulation using real data profiles.
Sergio Trimboli
Stefano Di Cairano
Alberto Bemporad
alberto.bemporad@imtlucca.it
Ilya Kolmanovsky
2012-02-27T11:58:34Z
2012-02-27T11:58:34Z
http://eprints.imtlucca.it/id/eprint/1194
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1194
2012-02-27T11:58:34Z
The fractal properties of internet
In this paper we show that the Internet web, from a user’s perspective, manifests robust scaling properties of the type P(n) ∝ n^−τ, where n is the size of the basin connected to a given point, P represents the density of probability of finding a basin of size n connected, and τ = 1.9 ± 0.1 is a characteristic universal exponent. The connections between users and providers are studied and modeled as branches of a world-spanning tree. This scale-free structure is the result of the spontaneous growth of the web, but is not necessarily the optimal one for efficient transport. We introduce an appropriate figure of merit and suggest that a planning of few big links, acting as information highways, may noticeably increase the efficiency of the net without affecting its robustness.
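A toy way to reproduce broad basin-size statistics is to grow a random tree and measure subtree sizes; the uniform-attachment rule below is an illustrative stand-in for the traced Internet data (for this model the subtree-size distribution decays roughly as n^−2, in the same regime as the measured τ ≈ 1.9).

```python
import numpy as np

# Grow a random tree: node i attaches to a uniformly random earlier node.
rng = np.random.default_rng(1)
n = 10000
parent = np.zeros(n, dtype=int)
for i in range(1, n):
    parent[i] = rng.integers(0, i)

# Basin of a node = size of its subtree (everything drained through it).
basin = np.ones(n, dtype=int)
for i in range(n - 1, 0, -1):  # children have larger indices than parents
    basin[parent[i]] += basin[i]
```

The root's basin is the whole tree, leaves have basin 1, and a histogram of `basin` exhibits the heavy tail that the scaling law describes.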
Riccardo Marchetti
Guido Caldarelli
guido.caldarelli@imtlucca.it
Luciano Pietronero
2012-02-27T09:59:42Z
2012-10-31T10:31:13Z
http://eprints.imtlucca.it/id/eprint/1189
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1189
2012-02-27T09:59:42Z
Networks: a very short introduction
From ecosystems to Facebook, from the Internet to the global financial market, some of the most important and familiar natural systems and social phenomena are based on a networked structure. It is impossible to understand the spread of an epidemic, a computer virus, large-scale blackouts, or massive extinctions without taking into account the network structure that underlies all these phenomena.
In this Very Short Introduction, Guido Caldarelli and Michele Catanzaro discuss the nature and variety of networks, using everyday examples from society, technology, nature, and history to explain and understand the science of network theory. They show the ubiquitous role of networks; how networks self-organize; why the rich get richer; and how networks can spontaneously collapse. They conclude by highlighting how the findings of complex network theory have very wide and important applications in genetics, ecology, communications, economics, and sociology.
Guido Caldarelli
guido.caldarelli@imtlucca.it
Michele Catanzaro
2012-02-24T10:07:42Z
2013-11-20T14:03:52Z
http://eprints.imtlucca.it/id/eprint/1164
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1164
2012-02-24T10:07:42Z
The fractal properties of Internet
In this paper we show that the Internet web, from a user's perspective, manifests robust scaling properties of the type P(n) ∝ n^−τ, where n is the size of the basin connected to a given point, P represents the density of probability of finding n points downhill and τ = 1.9 ± 0.1 is a characteristic universal exponent. This scale-free structure is a result of the spontaneous growth of the web, but is not necessarily the optimal one for efficient transport. We introduce an appropriate figure of merit and suggest that a planning of few big links, acting as information highways, may noticeably increase the efficiency of the net without affecting its robustness.
Guido Caldarelli
guido.caldarelli@imtlucca.it
Riccardo Marchetti
Luciano Pietronero
2012-02-22T10:59:34Z
2012-02-22T10:59:34Z
http://eprints.imtlucca.it/id/eprint/1145
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1145
2012-02-22T10:59:34Z
Growing dynamics of Internet providers
In this paper we present a model for the growth and evolution of Internet providers. The model reproduces the data observed for the Internet connection as probed by tracing routes from different computers. This problem represents a paramount case study for growth processes in general, but can also help in understanding the properties of the Internet. Our main result is that this network can be reproduced by a self-organized interaction between users and providers that can rearrange in time. This model can then be considered as a prototype model for the class of phenomena of aggregation processes in social networks.
Andrea Capocci
Guido Caldarelli
guido.caldarelli@imtlucca.it
Riccardo Marchetti
Luciano Pietronero
2012-02-21T10:52:51Z
2014-12-05T09:27:10Z
http://eprints.imtlucca.it/id/eprint/1137
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1137
2012-02-21T10:52:51Z
Scale-Free networks from varying vertex intrinsic fitness
A new mechanism leading to scale-free networks is proposed in this Letter. It is shown that, in many cases of interest, the connectivity power-law behavior is neither related to dynamical properties nor to preferential attachment. Assigning a quenched fitness value x_i to every vertex, and drawing links among vertices with a probability depending on the fitnesses of the two involved sites, gives rise to what we call a good-get-richer mechanism, in which sites with larger fitness are more likely to become hubs (i.e., to be highly connected).
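The good-get-richer mechanism can be sketched directly: draw a quenched fitness x_i per vertex and link pairs with a probability depending on both fitnesses. The exponential fitness distribution and the product linking rule below are one common choice, not necessarily those analysed in the Letter.

```python
import numpy as np

# Quenched fitness per vertex; link i-j with probability x_i * x_j / x_max^2.
rng = np.random.default_rng(0)
n = 2000
x = rng.exponential(1.0, size=n)

p = np.outer(x, x) / x.max() ** 2           # linking probabilities in [0, 1]
adj = np.triu(rng.random((n, n)) < p, k=1)  # undirected, no self-loops
degree = adj.sum(axis=0) + adj.sum(axis=1)
```

The highest-fitness vertices reliably end up among the most connected hubs, even though no growth dynamics or preferential attachment is involved: the degree heterogeneity is inherited entirely from the static fitness values.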
Guido Caldarelli
guido.caldarelli@imtlucca.it
Andrea Capocci
Paolo De Los Rios
Miguel A. Muñoz
2012-02-21T10:37:14Z
2012-02-27T10:37:03Z
http://eprints.imtlucca.it/id/eprint/1136
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1136
2012-02-21T10:37:14Z
Multi-layer model for the web graph
This paper studies stochastic graph models of the WebGraph. We present a new model that describes the WebGraph as an ensemble of different regions generated by independent stochastic processes (in the spirit of a recent paper by Dill et al. [VLDB 2001]). Models such as the Copying Model [17] and Evolving Networks Model [3] are simulated and compared on several relevant measures such as degree and clique distribution.
Luigi Laura
Stefano Leonardi
Guido Caldarelli
guido.caldarelli@imtlucca.it
Paolo De Los Rios
2012-02-20T13:54:38Z
2018-03-08T17:08:28Z
http://eprints.imtlucca.it/id/eprint/1135
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1135
2012-02-20T13:54:38Z
Universal scaling relations in food webs
The structure of ecological communities is usually represented by food webs. In these webs, we describe species by means of vertices connected by links representing the predations. We can therefore study different webs by considering the shape (topology) of these networks. Comparing food webs by searching for regularities is of fundamental importance, because universal patterns would reveal common principles underlying the organization of different ecosystems. However, features observed in small food webs are different from those found in large ones. Furthermore, food webs (except in isolated cases) do not share general features with other types of network (including the Internet, the World Wide Web and biological webs). These features are a small-world character and a scale-free (power-law) distribution of the degree (the number of links per vertex). Here we propose to describe food webs as transportation networks by extending to them the concept of allometric scaling (how branching properties change with network size). We then decompose food webs into spanning trees and loop-forming links. We show that, whereas the number of loops varies significantly across real webs, spanning trees are characterized by universal scaling relations.
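The spanning-tree quantities behind allometric scaling can be sketched as follows: A_i is the size of the subtree rooted at vertex i, and C_i is the sum of A_j over that subtree, so a chain gives C(A) ~ A²/2 (inefficient transport) while a star gives C(A) ~ 2A (efficient). The tiny trees below are illustrative.

```python
import numpy as np

# A_i: subtree size; C_i: sum of A_j over the subtree rooted at i.
def allometric(parent):
    n = len(parent)
    A = np.ones(n, dtype=int)
    for i in range(n - 1, 0, -1):  # children carry larger indices than parents
        A[parent[i]] += A[i]
    C = A.copy()
    for i in range(n - 1, 0, -1):
        C[parent[i]] += C[i]
    return A, C

A_chain, C_chain = allometric(list(range(-1, 9)))  # chain: parent[i] = i - 1
A_star, C_star = allometric([-1] + [0] * 9)        # star: all hang off the root
```

For the 10-node chain the root has A = 10 and C = 55; for the 10-node star, A = 10 and C = 19. Fitting C ~ A^η across a real web's spanning tree places it between these two extremes.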
Diego Garlaschelli
diego.garlaschelli@imtlucca.it
Guido Caldarelli
guido.caldarelli@imtlucca.it
Luciano Pietronero
2012-02-20T13:38:39Z
2018-03-08T17:08:41Z
http://eprints.imtlucca.it/id/eprint/1133
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1133
2012-02-20T13:38:39Z
Food web structure and the evolution of complex networks
In addition to traditional properties such as the degree distribution P(k), in this work we propose two other useful quantities that can help in characterizing the topology of food webs quantitatively, namely the allometric scaling relations C(A) and the branch size distribution P(A) which are defined on the spanning tree of the webs. These quantities, whose use has proved relevant in characterizing other different networks appearing in nature (such as river basins, Internet, and vascular systems), are related (in the context of food webs) to the efficiency in the resource transfer and to the stability against species removal. We present the analysis of the data for both real food webs and numerical simulations of a growing network model. Our results allow us to conclude that real food webs display a high degree of both efficiency and stability due to the evolving character of their topology.
Guido Caldarelli
guido.caldarelli@imtlucca.it
Diego Garlaschelli
diego.garlaschelli@imtlucca.it
Luciano Pietronero
2012-02-15T16:25:31Z
2012-02-21T13:54:41Z
http://eprints.imtlucca.it/id/eprint/1130
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1130
2012-02-15T16:25:31Z
Quantitative description and modeling of real networks
We present data analysis and modeling of two particular cases of study in the field of growing networks. We analyze a World Wide Web data set and authorship collaboration networks in order to check the presence of correlations in the data. The results are reproduced with good agreement through a suitable modification of the standard Albert-Barabási model of network growth. In particular, intrinsic relevance of sites plays a role in determining the future degree of the vertex.
Andrea Capocci
Guido Caldarelli
guido.caldarelli@imtlucca.it
Paolo De Los Rios
2012-02-15T16:02:44Z
2012-02-15T16:02:44Z
http://eprints.imtlucca.it/id/eprint/1127
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1127
2012-02-15T16:02:44Z
Structure of cycles and local ordering in complex networks
We study the properties of quantities aimed at the characterization of grid-like ordering in complex networks. These quantities are based on the global and local behavior of cycles of order four, which are the minimal structures able to identify rectangular clustering. The analysis of data from real networks reveals the ubiquitous presence of a statistically high level of grid-like ordering that is non-trivially correlated with the local degree properties. These observations provide new insights on the hierarchical structure of complex networks.
Guido Caldarelli
guido.caldarelli@imtlucca.it
Romualdo Pastor-Satorras
Alessandro Vespignani
2012-02-15T15:54:46Z
2012-02-15T15:54:46Z
http://eprints.imtlucca.it/id/eprint/1126
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1126
2012-02-15T15:54:46Z
Preface on "Applications of Networks"
Guido Caldarelli
guido.caldarelli@imtlucca.it
Ayşe Erzan
Alessandro Vespignani
2012-02-15T15:49:49Z
2012-02-15T15:49:49Z
http://eprints.imtlucca.it/id/eprint/1125
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1125
2012-02-15T15:49:49Z
Virtual Round Table on ten leading questions for network research
The following discussion is an edited summary of the public debate started during the conference "Growing Networks and Graphs in Statistical Physics, Finance, Biology and Social Systems" held in Rome in September 2003. Draft documents were circulated electronically among experts in the field, and additions and follow-ups to the original discussion have been included. Among the scientists participating in the discussion, L.A.N. Amaral, A. Barrat, A.L. Barabasi, G. Caldarelli, P. De Los Rios, A. Erzan, B. Kahng, R. Mantegna, J.F.F. Mendes, R. Pastor-Satorras, and A. Vespignani are acknowledged for their contributions and editing.
Luis A. N. Amaral
Alain Barrat
Guido Caldarelli
guido.caldarelli@imtlucca.it
Albert-László Barabási
Paolo De Los Rios
Ayşe Erzan
Byungnam Kahng
Rosario Nunzio Mantegna
Josè F. F. Mendes
Romualdo Pastor-Satorras
Alessandro Vespignani
2012-02-14T14:36:31Z
2013-11-21T09:03:40Z
http://eprints.imtlucca.it/id/eprint/1121
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1121
2012-02-14T14:36:31Z
Preferential exchange: strengthening connections in complex networks
Many social, technological, and biological interactions involve network relationships whose outcome intimately depends on the structure of the network and on the strengths of the connections. Yet, although much information is now available concerning the structure of many networks, the strengths are more difficult to measure. Here we show that, for one particular social network, namely the e-mail network, a suitable measure of the strength of the connections is available. We also propose a simple mechanism, based on positive feedback and reciprocity, that can explain the observed behavior and that hints toward specific dynamics of formation and reinforcement of network connections. Network data from contexts other than the social sciences indicate that power-law, and generally broad, distributions of the connection strength are ubiquitous, and the proposed mechanism has a wide range of applicability.
Guido Caldarelli
guido.caldarelli@imtlucca.it
Fabrizio Coccetti
Paolo De Los Rios
2012-02-14T13:19:55Z
2012-02-14T13:19:55Z
http://eprints.imtlucca.it/id/eprint/1116
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1116
2012-02-14T13:19:55Z
Social network growth with assortative mixing
Networks representing social systems display specific features that set them apart from biological and technological ones. In particular, the number of links attached to a node is positively correlated with that of its nearest neighbours. We develop a model that reproduces this feature, starting from microscopic mechanisms of growth. The statistical properties arising from the simulations are in good agreement with those of the real-world social networks of scientists co-authoring papers in condensed matter physics. Moreover, the model highlights the determining role of correlations in shaping the network's topology.
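The degree correlation targeted by the model can be quantified directly. A minimal sketch of the standard assortativity coefficient (the Pearson correlation of the degrees at the two ends of each edge), with a small illustrative graph:

```python
from statistics import mean

def degree_assortativity(edges):
    """Pearson correlation between the degrees at the two ends of each edge;
    positive values indicate the assortative mixing typical of social networks."""
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    # consider each edge in both directions so the measure is symmetric
    xs = [deg[u] for u, v in edges] + [deg[v] for u, v in edges]
    ys = [deg[v] for u, v in edges] + [deg[u] for u, v in edges]
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var if var else 0.0

# a perfectly assortative toy graph: a triangle of degree-2 nodes plus pendant pairs
triangle = [(0, 1), (0, 2), (1, 2)]
pendants = [(3, 4), (5, 6)]
r = degree_assortativity(triangle + pendants)
```

Here every edge joins nodes of equal degree, so the coefficient is exactly 1; co-authorship networks such as the one in the paper typically give positive but smaller values.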
Michele Catanzaro
Guido Caldarelli
guido.caldarelli@imtlucca.it
Luciano Pietronero
2012-02-13T14:30:32Z
2013-11-21T09:04:26Z
http://eprints.imtlucca.it/id/eprint/1114
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1114
2012-02-13T14:30:32Z
Detecting communities in large networks
We develop an algorithm to detect community structure in complex networks. The algorithm is based on spectral methods and takes into account weights and link orientation. Since the method efficiently detects clustered nodes in large networks even when these are not sharply partitioned, it turns out to be especially suitable for the analysis of social and information networks. We test the algorithm on a large-scale data set from a psychological experiment on word association. In this case, it proves successful both in clustering words and in uncovering mental association patterns.
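A much-simplified illustration of the spectral idea (the sign pattern of the Fiedler vector of the graph Laplacian splits an undirected, unweighted graph into two communities); the paper's algorithm additionally handles weights and link orientation:

```python
import numpy as np

def spectral_bipartition(adj):
    """Split a graph into two communities using the sign of the Fiedler
    vector, i.e. the eigenvector of the second-smallest eigenvalue of
    the combinatorial Laplacian L = D - A."""
    A = np.asarray(adj, dtype=float)
    L = np.diag(A.sum(axis=1)) - A
    vals, vecs = np.linalg.eigh(L)   # eigenvalues returned in ascending order
    fiedler = vecs[:, 1]
    return fiedler >= 0

# two triangles (nodes 0-2 and 3-5) joined by a single bridge edge
A = np.zeros((6, 6))
for u, v in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    A[u, v] = A[v, u] = 1
labels = spectral_bipartition(A)
```

The sign split recovers the two triangles as the two communities, even though the graph is connected and not sharply partitioned.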
Andrea Capocci
Vito D. P. Servedio
Guido Caldarelli
guido.caldarelli@imtlucca.it
Francesca Colaiori
2012-02-13T14:15:03Z
2012-02-13T14:15:03Z
http://eprints.imtlucca.it/id/eprint/1113
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1113
2012-02-13T14:15:03Z
Communities Detection in Large Networks
We develop an algorithm to detect community structure in complex networks. The algorithm is based on spectral methods and takes into account weights and link orientations. Since the method efficiently detects clustered nodes in large networks even when these are not sharply partitioned, it turns out to be especially suitable for the analysis of social and information networks. We test the algorithm on a large-scale data set from a psychological experiment on word association. In this case, it proves successful both in clustering words and in uncovering mental association patterns.
Andrea Capocci
Vito D. P. Servedio
Guido Caldarelli
guido.caldarelli@imtlucca.it
Francesca Colaiori
2012-02-13T13:48:45Z
2018-03-08T17:07:31Z
http://eprints.imtlucca.it/id/eprint/1112
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1112
2012-02-13T13:48:45Z
Food-web topology: universal scaling in food-web structure? (reply)
Although Camacho and Arenas raise potentially interesting points, we believe that some of their arguments are flawed or undermined by poor statistics, and therefore that they do not invalidate our result.
Diego Garlaschelli
diego.garlaschelli@imtlucca.it
Guido Caldarelli
guido.caldarelli@imtlucca.it
Luciano Pietronero
2012-02-03T15:01:41Z
2013-11-21T09:06:06Z
http://eprints.imtlucca.it/id/eprint/1111
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1111
2012-02-03T15:01:41Z
Loops structure of the Internet at the autonomous system level
We present here a study of the clustering and loops in a graph of the Internet at the autonomous system level. We show that, even if the whole structure changes with time, the statistical distributions of loops of order 3, 4, and 5 remain stable during the evolution. Moreover, we bring evidence that Internet graphs show characteristic Markovian signatures, since the structure is very well described by two-point correlations between the degrees of the vertices. This proves that the Internet belongs to a class of networks in which the two-point correlation is sufficient to describe the whole local (and thus global) structure. Data are also compared with existing Internet models.
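Counting short loops, as in the analysis above, can be done by brute-force enumeration on small graphs. A minimal sketch for loops of order 3 and 4 (exhaustive enumeration, feasible only for small graphs, unlike the Internet-scale data of the paper):

```python
from itertools import combinations

def count_short_loops(edges):
    """Count loops of order 3 (triangles) and order 4 (cycles through four
    distinct vertices, each counted once) in an undirected graph."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    nodes = sorted(adj)
    triangles = sum(1 for a, b, c in combinations(nodes, 3)
                    if b in adj[a] and c in adj[a] and c in adj[b])
    squares = 0
    for a, b, c, d in combinations(nodes, 4):
        # the three distinct cyclic orderings of four vertices
        for p, q, r, s in ((a, b, c, d), (a, b, d, c), (a, c, b, d)):
            if q in adj[p] and r in adj[q] and s in adj[r] and p in adj[s]:
                squares += 1
    return triangles, squares

# a small test graph: two triangles sharing the edge 0-2, closing one square
tri, sq = count_short_loops([(0, 1), (1, 2), (2, 0), (2, 3), (3, 0)])
```
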
Ginestra Bianconi
Guido Caldarelli
guido.caldarelli@imtlucca.it
Andrea Capocci
2012-02-03T14:29:51Z
2014-12-18T16:02:51Z
http://eprints.imtlucca.it/id/eprint/1109
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1109
2012-02-03T14:29:51Z
Preferential attachment in the growth of social networks: the internet encyclopedia Wikipedia
We present an analysis of the statistical properties and growth of the free on-line encyclopedia Wikipedia. By describing topics as vertices and the hyperlinks between them as edges, we can represent this encyclopedia as a directed graph. The topological properties of this graph are in close analogy with those of the World Wide Web, despite the very different growth mechanism. In particular, we measure scale-invariant distributions of the in- and out-degrees, and we are able to reproduce these features by means of a simple statistical model. As a major consequence, Wikipedia growth can be described by local rules such as the preferential attachment mechanism, even though users, who are responsible for its evolution, can act globally on the network.
Andrea Capocci
Vito D. P. Servedio
Francesca Colaiori
Luciana Buriol
Debora Donato
Stefano Leonardi
Guido Caldarelli
guido.caldarelli@imtlucca.it
2012-02-01T11:21:00Z
2013-11-06T10:33:36Z
http://eprints.imtlucca.it/id/eprint/1093
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1093
2012-02-01T11:21:00Z
Complex Networks: from Biology to Information Technology - Preface
Alain Barrat
Stefano Boccaletti
Guido Caldarelli
guido.caldarelli@imtlucca.it
Alessandro Chessa
alessandro.chessa@imtlucca.it
Vito Latora
Adilson E. Motter
2012-01-26T14:09:16Z
2014-12-18T15:37:28Z
http://eprints.imtlucca.it/id/eprint/1085
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1085
2012-01-26T14:09:16Z
Random hypergraphs and their applications
In the last few years we have witnessed the emergence, primarily in online communities, of new types of social networks that require for their representation more complex graph structures than have been employed in the past. One example is the folksonomy, a tripartite structure of users, resources, and tags (labels collaboratively applied by the users to the resources in order to impart meaningful structure on an otherwise undifferentiated database). Here we propose a mathematical model of such tripartite structures that represents them as random hypergraphs. We show that it is possible to calculate many properties of this model exactly in the limit of large network size, and we compare the results against observations of a real folksonomy, that of the online photography website Flickr. We show that in some cases the model matches the properties of the observed network well, while in others there are significant differences, which we find to be attributable to the practice of multiple tagging, i.e., the application by a single user of many tags to one resource or one tag to many resources.
Gourab Ghoshal
Vinko Zlatic
Guido Caldarelli
guido.caldarelli@imtlucca.it
M.E.J. Newman
2012-01-26T13:53:54Z
2014-12-18T15:35:35Z
http://eprints.imtlucca.it/id/eprint/1084
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1084
2012-01-26T13:53:54Z
Hypergraph topological quantities for tagged social networks
Recent years have witnessed the emergence of a new class of social networks, which require us to move beyond previously employed representations of complex graph structures. A notable example is that of the folksonomy, an online process where users collaboratively apply tags to resources to impart structure to an otherwise undifferentiated database. In a recent paper, we proposed a mathematical model that represents these structures as tripartite hypergraphs and defined basic topological quantities of interest. In this paper, we extend our model by defining additional quantities such as edge distributions, vertex similarity and correlations, as well as clustering. We then empirically measure these quantities on two real-life folksonomies, the popular online photo-sharing site Flickr and the bookmarking site CiteULike. We find that these systems share similar qualitative features with the majority of complex networks that have been previously studied. We propose that the quantities and methodology described here can be used as a standard tool in measuring the structure of tagged networks.
Vinko Zlatic
Gourab Ghoshal
Guido Caldarelli
guido.caldarelli@imtlucca.it
2012-01-26T10:49:27Z
2013-11-20T15:59:36Z
http://eprints.imtlucca.it/id/eprint/1083
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1083
2012-01-26T10:49:27Z
PageRank equation and localization in the WWW
We show that the PageRank in a network can be represented as the solution of a differential equation discretized over a directed graph. By exploiting a formal relationship with the time-independent Schrödinger equation, it is possible to interpret hub formation and related phenomena as a wave-like localization process in the presence of disorder and trapping potentials. The result opens new perspectives in the physics of networks, with interdisciplinary connections, and paves the way for the application of various mathematical techniques to the analysis of self-organization in structured systems. Applications are envisaged in the World Wide Web, traffic, social and biological networks.
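For reference, the quantity in question: a minimal power-iteration PageRank. This is the standard algebraic computation, not the wave-localization formulation proposed in the paper, and the toy link structure is purely illustrative:

```python
def pagerank(links, damping=0.85, iters=100):
    """Power-iteration PageRank on a directed graph given as
    {node: [outgoing neighbors]}; dangling nodes spread rank uniformly."""
    nodes = set(links) | {v for outs in links.values() for v in outs}
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}
    for _ in range(iters):
        new = {v: (1 - damping) / n for v in nodes}
        for u in nodes:
            outs = links.get(u, [])
            if outs:
                share = damping * rank[u] / len(outs)
                for v in outs:
                    new[v] += share
            else:                      # dangling node: distribute uniformly
                for v in nodes:
                    new[v] += damping * rank[u] / n
        rank = new
    return rank

# a hub-like toy web: two pages link to "hub", which links back to one of them
r = pagerank({"a": ["hub"], "b": ["hub"], "hub": ["a"]})
```

The rank concentrates on the hub, the localization phenomenon the paper reinterprets in wave-mechanical terms.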
Nicola Perra
Vinko Zlatic
Alessandro Chessa
alessandro.chessa@imtlucca.it
Claudio Conti
Debora Donato
Guido Caldarelli
guido.caldarelli@imtlucca.it
2012-01-26T10:01:25Z
2012-01-26T10:55:18Z
http://eprints.imtlucca.it/id/eprint/1081
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1081
2012-01-26T10:01:25Z
A PageRank-based preferential attachment model for the evolution of the World Wide Web
We propose a model of network growth aimed at mimicking the evolution of the World Wide Web. To this end, we take as a key quantity in the network evolution the centrality, or importance, of a vertex as measured by its PageRank. Using a preferential attachment rule and a rewiring procedure based on this quantity, we can reproduce most of the topological properties of the system.
P. Giammatteo
Debora Donato
Vinko Zlatic
Guido Caldarelli
guido.caldarelli@imtlucca.it
2012-01-25T13:12:31Z
2012-01-25T13:12:31Z
http://eprints.imtlucca.it/id/eprint/1077
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1077
2012-01-25T13:12:31Z
Scale-free networks: complex webs in nature and technology
A variety of different social, natural and technological systems can be described by the same mathematical framework. This holds from the Internet to food webs and to boards of company directors. In all these situations, a graph of the elements of the system and their interconnections displays a universal feature: there are only a few elements with many connections and many elements with few connections. This book reports the experimental evidence for these ‘scale-free networks’ and provides students and researchers with a corpus of theoretical results and algorithms to analyse and understand these features. The content and exposition of this book make it a clear textbook for beginners and a reference book for experts.
Guido Caldarelli
guido.caldarelli@imtlucca.it
2012-01-20T10:45:29Z
2012-01-25T13:10:14Z
http://eprints.imtlucca.it/id/eprint/1076
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1076
2012-01-20T10:45:29Z
(edited by) Large scale structure and dynamics of complex networks: from information technology to finance and natural science
This book is the culmination of three years of research effort on a multidisciplinary project in which physicists, mathematicians, computer scientists and social scientists worked together to arrive at a unifying picture of complex networks. The contributed chapters form a reference for the various problems in data analysis, visualization and modeling of complex networks.
Guido Caldarelli
guido.caldarelli@imtlucca.it
Alessandro Vespignani
2012-01-20T09:04:42Z
2012-01-20T09:32:03Z
http://eprints.imtlucca.it/id/eprint/1073
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1073
2012-01-20T09:04:42Z
Behavioral Equivalences
Behavioral equivalences serve to establish in which cases two reactive (possibly concurrent) systems offer similar interaction capabilities relative to other systems representing their operating environment. Behavioral equivalences have been developed mainly in the context of process algebras, mathematically rigorous languages that have been used for describing and verifying properties of concurrent communicating systems. By relying on the so-called structural operational semantics (SOS), labelled transition systems are associated with each term of a process algebra. Behavioral equivalences are used to abstract from unwanted details and identify those labelled transition systems that react “similarly” to external experiments. Due to the large number of properties which may be relevant in the analysis of concurrent systems, many different theories of equivalences have been proposed in the literature. The main contenders consider those systems equivalent that (i) perform the same sequences of actions, or (ii) perform the same sequences of actions and after each sequence are ready to accept the same sets of actions, or (iii) perform the same sequences of actions and after each sequence exhibit, recursively, the same behavior. This approach leads to many different equivalences that preserve significantly different properties of systems.
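Contender (i), trace equivalence, is easy to make concrete. A minimal sketch comparing bounded trace sets of two labelled transition systems, using the classic pair a.(b + c) versus a.b + a.c (state and action names are illustrative):

```python
def traces(lts, state, depth):
    """All action sequences of length <= depth from `state` in an LTS
    given as {state: [(action, next_state), ...]}."""
    result = {()}
    if depth > 0:
        for action, nxt in lts.get(state, []):
            for t in traces(lts, nxt, depth - 1):
                result.add((action,) + t)
    return result

# classic example: a.(b + c)  versus  a.b + a.c
P = {"p0": [("a", "p1")], "p1": [("b", "p2"), ("c", "p3")]}
Q = {"q0": [("a", "q1"), ("a", "q2")], "q1": [("b", "q3")], "q2": [("c", "q4")]}

trace_equivalent = traces(P, "p0", 3) == traces(Q, "q0", 3)
```

The two processes have identical traces, yet after performing a the first is ready to accept both b and c while the second is not; equivalences of type (ii) and (iii) therefore distinguish them, illustrating how the contenders preserve different properties.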
Rocco De Nicola
r.denicola@imtlucca.it
2012-01-20T08:52:55Z
2012-01-20T09:28:17Z
http://eprints.imtlucca.it/id/eprint/1072
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1072
2012-01-20T08:52:55Z
Process Algebras
Process Algebras are mathematically rigorous languages with well defined semantics that permit describing and verifying properties of concurrent communicating systems.
They can be seen as models of processes, regarded as agents that act and interact continuously with other similar agents and with their common environment. The agents may be real-world objects (even people), or they may be artifacts, embodied perhaps in computer hardware or software systems.
Many different approaches (operational, denotational, algebraic) are taken for describing the meaning of processes. However, the operational approach is the reference one. By relying on the so-called Structural Operational Semantics (SOS), labelled transition systems are built and composed by using the different operators of the many different process algebras. Behavioral equivalences are used to abstract from unwanted details and identify those systems that react similarly to external experiments.
Rocco De Nicola
r.denicola@imtlucca.it
2012-01-09T14:07:35Z
2014-07-01T13:34:46Z
http://eprints.imtlucca.it/id/eprint/1051
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1051
2012-01-09T14:07:35Z
Stabilizing model predictive control of stochastic constrained linear systems
This paper investigates stochastic stabilization procedures based on quadratic and piecewise linear Lyapunov functions for discrete-time linear systems affected by multiplicative disturbances and subject to linear constraints on inputs and states. A stochastic model predictive control (SMPC) design approach is proposed to optimize closed-loop performance while enforcing constraints. Conditions for stochastic convergence and robust constraint fulfillment of the closed-loop system are enforced by solving linear matrix inequality problems offline. Performance is optimized online using multi-stage stochastic optimization based on the enumeration of scenarios, which amounts to solving a quadratic program subject to either quadratic or linear constraints. In the latter case, an explicit form is computable to ease the implementation of the proposed SMPC law. The approach can deal with a very general class of stochastic disturbance processes with discrete probability distribution. The effectiveness of the proposed SMPC formulation is shown on a numerical example and compared to traditional MPC schemes.
Daniele Bernardini
daniele.bernardini@imtlucca.it
Alberto Bemporad
alberto.bemporad@imtlucca.it
2012-01-09T13:58:30Z
2012-01-09T13:58:30Z
http://eprints.imtlucca.it/id/eprint/1050
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1050
2012-01-09T13:58:30Z
Hierarchical and decentralised model predictive control of drinking water networks: application to Barcelona case study
A hierarchical and decentralised model predictive control (DMPC) strategy for drinking water networks (DWN) is proposed. The DWN is partitioned into a set of subnetworks using a partitioning algorithm that makes use of the topology of the network, historic information about actuator usage, and heuristics. A suboptimal DMPC strategy is derived, consisting of a set of MPC controllers whose prediction models are plant partitions and where each element solves its control problem in a hierarchical order. A comparative simulation study between centralised MPC (CMPC) and DMPC approaches is developed using a case study consisting of an aggregate version of the Barcelona DWN. Results show the effectiveness of the proposed DMPC approach in terms of the scalability of computations, with an acceptable loss of performance in all the considered scenarios.
Carlos Ocampo-Martinez
Davide Barcelli
Vicenç Puig
Alberto Bemporad
alberto.bemporad@imtlucca.it
2012-01-09T11:57:49Z
2016-07-13T10:50:07Z
http://eprints.imtlucca.it/id/eprint/1049
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1049
2012-01-09T11:57:49Z
Counterpart Semantics for a Second-Order mu-Calculus
Quantified μ-calculi combine the fix-point and modal operators of temporal logics with (existential and universal) quantifiers, and they allow for reasoning about the possible behaviour of individual components within a software system. In this paper we introduce a novel approach to the semantics of such calculi: we consider a class of labelled transition systems called counterpart models as the semantic domain, where states are algebras and transitions are defined by counterpart relations (a family of partial homomorphisms) between states. Formulae are then interpreted over sets of state assignments (families of partial substitutions, associating formula variables to state components). Our proposal allows us to model and reason about the creation and deletion of components, as well as the merging of components. Moreover, it avoids the limitations of existing approaches, which usually enforce restrictions on the transition relation: the resulting semantics is streamlined and intuitively appealing, yet general enough to cover most of the alternative proposals we are aware of. The paper is rounded off with some considerations about expressiveness and decidability aspects.
Fabio Gadducci
Alberto Lluch-Lafuente
alberto.lluch@imtlucca.it
Andrea Vandin
andrea.vandin@imtlucca.it
2011-12-06T10:39:31Z
2011-12-06T10:39:31Z
http://eprints.imtlucca.it/id/eprint/1034
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1034
2011-12-06T10:39:31Z
An explicit optimal control approach for mean-risk dynamic portfolio allocation
Panagiotis Patrinos
panagiotis.patrinos@imtlucca.it
Haralambos Sarimveis
2011-12-06T10:31:11Z
2011-12-06T10:31:11Z
http://eprints.imtlucca.it/id/eprint/1033
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1033
2011-12-06T10:31:11Z
Robust optimal control: calculation of the explicit control law combining dynamic programming and multiparametric optimization
Panagiotis Patrinos
panagiotis.patrinos@imtlucca.it
Haralambos Sarimveis
2011-12-06T10:15:20Z
2013-03-12T14:57:39Z
http://eprints.imtlucca.it/id/eprint/1032
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1032
2011-12-06T10:15:20Z
Explicit control for nonlinear constrained systems combining fuzzy model predictive control and multiparametric programming
Pantelis Sopasakis
pantelis.sopasakis@imtlucca.it
Panagiotis Patrinos
panagiotis.patrinos@imtlucca.it
Haralambos Sarimveis
2011-12-05T15:49:54Z
2013-03-12T14:57:38Z
http://eprints.imtlucca.it/id/eprint/1031
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1031
2011-12-05T15:49:54Z
Physiologically based pharmacokinetic modeling and predictive control: an integrated approach for optimal drug administration
The barriers between systems engineering and medicine are slowly eroding, as it has recently become evident that medicine has a lot to gain from systems technology. In particular, the drug administration problem can be cast as a control engineering problem, where the objective is to keep the drug concentration at certain organs in the body close to desired set-points. A number of constraints render the problem rather challenging. For example, hard constraints may be posed on drug concentration, because if it exceeds an upper limit, the effects of the drug are adverse and toxic.
In this paper we show that a popular method in control engineering can be used for determining the optimal drug administration. Specifically, Model Predictive Control (MPC) technology can be adopted for making optimal decisions regarding the regulation of drug concentration in the human body, while posing constraints on both drug concentration and drug infusion rate.
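A toy illustration of the idea, not the paper's physiologically based model: receding-horizon selection of an infusion rate for a one-compartment linear model with a hard bound on concentration, using exhaustive search over a small input grid instead of the QP solved by a real MPC implementation (all model parameters are illustrative):

```python
import itertools

def mpc_dose(c0, setpoint, a=0.9, b=0.5, horizon=3,
             u_grid=(0.0, 0.5, 1.0), c_max=6.0):
    """Receding-horizon infusion-rate choice for the one-compartment
    linear model c[k+1] = a*c[k] + b*u[k]. Enumerates every input
    sequence on a small grid (a toy substitute for QP-based MPC) and
    returns the first input of the cheapest sequence keeping c <= c_max."""
    best_u, best_cost = 0.0, float("inf")
    for seq in itertools.product(u_grid, repeat=horizon):
        c, cost, feasible = c0, 0.0, True
        for u in seq:
            c = a * c + b * u
            if c > c_max:          # hard toxicity constraint violated
                feasible = False
                break
            cost += (c - setpoint) ** 2
        if feasible and cost < best_cost:
            best_u, best_cost = seq[0], cost
    return best_u

# closed-loop simulation: only the first input of each plan is applied
c, history = 0.0, []
for _ in range(20):
    u = mpc_dose(c, setpoint=4.0)
    c = 0.9 * c + 0.5 * u
    history.append(c)
```

The concentration is driven toward the set-point while never crossing the toxicity bound, which is the essence of constrained MPC for drug administration.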
Pantelis Sopasakis
pantelis.sopasakis@imtlucca.it
Panagiotis Patrinos
panagiotis.patrinos@imtlucca.it
Stefania Giannikou
Haralambos Sarimveis
2011-12-05T14:19:43Z
2013-03-12T14:57:38Z
http://eprints.imtlucca.it/id/eprint/1030
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1030
2011-12-05T14:19:43Z
Stochastic model predictive control for constrained networked control systems with random time delay
In this paper the continuous-time stochastic constrained optimal control problem is formulated for the class of networked control systems, assuming that time delays follow a discrete-time, finite Markov chain. Polytopic overapproximations of the system's trajectories are employed to produce a polyhedral inner approximation of the non-convex constraint set resulting from imposing the constraints in continuous time. The problem is cast in a Markov jump linear systems (MJLS) framework and a stochastic MPC controller is calculated explicitly, offline, by coupling dynamic programming with parametric piecewise quadratic (PWQ) optimization. The calculated control law leads to stochastic stability of the closed-loop system in the mean-square sense and respects the state and input constraints in continuous time.
Panagiotis Patrinos
panagiotis.patrinos@imtlucca.it
Pantelis Sopasakis
pantelis.sopasakis@imtlucca.it
Haralambos Sarimveis
2011-12-05T11:40:26Z
2011-12-05T11:40:26Z
http://eprints.imtlucca.it/id/eprint/1027
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1027
2011-12-05T11:40:26Z
A two-stage evolutionary algorithm for variable selection in the development of RBF neural network models
In many modeling problems that are based on input–output data, information about a plethora of variables is available. In these cases, the proper selection of explanatory variables is very critical for the success of the produced model, since it eliminates noisy variables and possible correlations, reduces the size of the model and accomplishes more accurate predictions. Many variable selection procedures have been proposed in the literature, but most of them consider only linear models. In this work, we present a novel methodology for variable selection in nonlinear modeling, which combines the advantages of several artificial intelligence technologies. More specifically, the Radial Basis Function (RBF) neural network architecture serves as the nonlinear modeling tool, by exploiting the simplicity of its topology and the fast fuzzy means training algorithm. The proper variables are selected in two stages using a multi-objective optimization approach: in the first stage, a specially designed genetic algorithm minimizes the prediction error over a monitoring data set, while in the second stage a simulated annealing technique aims at the reduction of the number of explanatory variables. The efficiency of the proposed method is illustrated through its application to a number of benchmark problems.
Alex Alexandridis
Panagiotis Patrinos
panagiotis.patrinos@imtlucca.it
Haralambos Sarimveis
George Tsekouras
2011-12-05T11:00:57Z
2011-12-05T11:00:57Z
http://eprints.imtlucca.it/id/eprint/1025
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1025
2011-12-05T11:00:57Z
Dynamic modeling and control of supply chain systems: a review
Supply chains are complicated dynamical systems triggered by customer demands. Proper selection of equipment, machinery, buildings and transportation fleets is a key component for the success of such systems. However, efficiency of supply chains mostly depends on management decisions, which are often based on intuition and experience. Due to the increasing complexity of supply chain systems (which is the result of changes in customer preferences, the globalization of the economy and the fierce competition among companies), these decisions are often far from optimum. Another factor that causes difficulties in decision making is that different stages in supply chains are often supervised by different groups of people with different managing philosophies. From the early 1950s it became evident that a rigorous framework for analyzing the dynamics of supply chains and taking proper decisions could substantially improve the performance of the systems. Due to the resemblance of supply chains to engineering dynamical systems, control theory has provided a solid background for building such a framework. During the last half century many mathematical tools emerging from the control literature have been applied to the supply chain management problem. These tools vary from classical transfer function analysis to highly sophisticated control methodologies, such as model predictive control (MPC) and neuro-dynamic programming. The aim of this paper is to provide a review of this effort. The reader will find representative references to many alternative control philosophies and identify the advantages, weaknesses and complexities of each one. The bottom line of this review is that a joint co-operation between control experts and supply chain managers has the potential to introduce more realism to the dynamical models and develop improved supply chain management policies.
Haralambos Sarimveis
Panagiotis Patrinos
panagiotis.patrinos@imtlucca.it
Chris D. Tarantilis
Chris T. Kiranoudis
2011-12-05T09:47:20Z
2013-03-12T14:57:38Z
http://eprints.imtlucca.it/id/eprint/1022
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1022
2011-12-05T09:47:20Z
A global piecewise smooth Newton method for fast large-scale model predictive control
In this paper, the strictly convex quadratic program (QP) arising in model predictive control (MPC) for constrained linear systems is reformulated as a system of piecewise affine equations. A regularized piecewise smooth Newton method with exact line search on a convex, differentiable, piecewise-quadratic merit function is proposed for the solution of the reformulated problem. The algorithm has considerable merits when applied to MPC over standard active set or interior point algorithms. Its performance is tested and compared against state-of-the-art QP solvers on a series of benchmark problems. The proposed algorithm is orders of magnitude faster, especially for large-scale problems and long horizons. For example, for the challenging crude distillation unit model of Pannocchia, Rawlings, and Wright (2007) with 252 states, 32 inputs, and 90 outputs, the average running time of the proposed approach is 1.57 ms.
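For contrast, a deliberately simple solver for the same problem class: projected gradient descent on a box-constrained strictly convex QP. This only illustrates the problem shape; the paper's piecewise smooth Newton method is a far faster algorithm, and the example numbers below are illustrative:

```python
import numpy as np

def box_qp(Q, c, lo, hi, iters=500):
    """Minimize 0.5*x'Qx + c'x subject to lo <= x <= hi by projected
    gradient descent: a much simpler scheme than the regularized
    piecewise smooth Newton method of the paper, shown here only to
    exhibit the QP class arising in linear MPC."""
    Q, c = np.asarray(Q, float), np.asarray(c, float)
    x = np.clip(np.zeros_like(c), lo, hi)
    step = 1.0 / np.linalg.norm(Q, 2)   # 1/L for the quadratic's gradient
    for _ in range(iters):
        x = np.clip(x - step * (Q @ x + c), lo, hi)
    return x

# tiny example: unconstrained minimizer is (2, -3); the box clips x2 to -1
Q = np.array([[2.0, 0.0], [0.0, 1.0]])
c = np.array([-4.0, 3.0])
x = box_qp(Q, c, lo=-1.0, hi=3.0)
```

The clipping step is exactly the projection onto the feasible box, which is what makes simple bound constraints cheap compared with general polyhedral ones.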
Panagiotis Patrinos
panagiotis.patrinos@imtlucca.it
Haralambos Sarimveis
Pantelis Sopasakis
pantelis.sopasakis@imtlucca.it
2011-11-22T14:16:35Z
2011-12-20T12:00:25Z
http://eprints.imtlucca.it/id/eprint/1020
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1020
2011-11-22T14:16:35Z
Style-Based architectural reconfigurations
We introduce Architectural Design Rewriting (ADR), an approach to the design of reconfigurable software architectures whose key features are: (i) rule-based approach (over graphs); (ii) hierarchical design; (iii) algebraic presentation; and (iv) inductively-defined reconfigurations. Architectures are modelled by graphs whose edges and nodes represent components and connection ports. Architectures are designed hierarchically by a set of edge replacement rules that fix the architectural style. Depending on their reading, productions allow: (i) top-down design by refinement, (ii) bottom-up typing of actual architectures, and (iii) well-formed composition of architectures. The key idea is to encode style proofs as terms and to exploit such information at run-time for guiding reconfigurations. The main advantages of ADR are that: (i) instead of reasoning on flat architectures, ADR specifications provide a convenient hierarchical structure, by exploiting the architectural classes introduced by the style, (ii) complex reconfiguration schemes can be defined inductively, and (iii) style-preservation is guaranteed.
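The edge-replacement idea can be sketched in a few lines. The encoding below is a hypothetical toy, not the ADR formalism: an architecture is a set of labelled edges over named nodes, and a production refines one edge into a subgraph glued on the edge's attachment nodes (placeholders `p0`, `p1`, …), with internal nodes renamed fresh.

```python
# Toy edge-replacement step in the spirit of style-based refinement
# (illustrative encoding only; names and conventions are made up).

def apply_production(graph, target, production, fresh):
    """graph: set of (label, nodes) edges; target: the edge to refine;
    production: edges over placeholder nodes 'p0', 'p1', ... plus internal
    names; fresh: renaming function for the internal nodes."""
    assert target in graph
    mapping = {f"p{i}": n for i, n in enumerate(target[1])}
    out = set(graph) - {target}
    for label, nodes in production:
        out.add((label, tuple(mapping.get(n, fresh(n)) for n in nodes)))
    return out
```

Read top-down, the rule refines an abstract `ClientServer` component into a `Client` and a `Server` joined by a fresh wire node; read bottom-up, the same rule types the refined graph as a well-formed `ClientServer`.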
Roberto Bruni
Alberto Lluch-Lafuente
alberto.lluch@imtlucca.it
Ugo Montanari
Emilio Tuosto
2011-11-22T13:59:55Z
2011-12-20T12:00:25Z
http://eprints.imtlucca.it/id/eprint/1019
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1019
2011-11-22T13:59:55Z
Architectural design rewriting as an architecture description language
Architectural Design Rewriting (ADR) is a declarative rule-based approach for the design of dynamic software architectures. The key features that make ADR a suitable and expressive framework are the algebraic presentation of graph-based structures and the use of conditional rewrite rules. These features enable the modelling of, e.g., hierarchical design, inductively defined reconfigurations and ordinary computation. Here, we promote ADR as an Architectural Description Language.
Roberto Bruni
Alberto Lluch-Lafuente
alberto.lluch@imtlucca.it
Ugo Montanari
Emilio Tuosto
2011-11-22T13:39:08Z
2011-12-20T12:00:25Z
http://eprints.imtlucca.it/id/eprint/1018
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1018
2011-11-22T13:39:08Z
Action planning for graph transition systems
Graphs are suitable modeling formalisms for software and hardware systems involving aspects such as communication, object orientation, concurrency, mobility and distribution. State spaces of such systems can be represented by graph transition systems, which are basically transition systems whose states and transitions represent graphs and graph morphisms. In this paper, we propose the modeling of graph transition systems in PDDL and the application of heuristic search planning for their analysis. We consider different heuristics and present experimental results.
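Heuristic search over a graph transition system can be sketched as A* over hashable graph states. In the toy below (all problem data invented for illustration) a state is a set of edges, each transition adds or removes one edge, and the heuristic counts edge differences to the goal graph; it is admissible here because every transition changes exactly one edge.

```python
import heapq
import itertools

def astar(start, goal_test, successors, h):
    """A* over an explicitly generated transition system; states are hashable."""
    tie = itertools.count()                     # tiebreaker for the heap
    heap = [(h(start), next(tie), start, [start])]
    g_best = {start: 0}
    while heap:
        _, _, s, path = heapq.heappop(heap)
        if goal_test(s):
            return path
        g = len(path) - 1
        for t in successors(s):
            if g + 1 < g_best.get(t, float("inf")):
                g_best[t] = g + 1
                heapq.heappush(heap, (g + 1 + h(t), next(tie), t, path + [t]))
    return None

# graph states: frozensets of edges; each transition flips one edge
universe = [("a", "b"), ("b", "c"), ("c", "a"), ("a", "c")]

def successors(s):
    return (s.symmetric_difference({e}) for e in universe)

goal = frozenset({("a", "b"), ("b", "c"), ("c", "a")})
h = lambda s: len(s.symmetric_difference(goal))   # admissible edge-difference count
```

Real planners would instead compile the rewrite rules to PDDL actions, but the search skeleton is the same.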
Stefan Edelkamp
Shahid Jabbar
Alberto Lluch-Lafuente
alberto.lluch@imtlucca.it
2011-11-22T13:30:17Z
2011-12-20T12:00:25Z
http://eprints.imtlucca.it/id/eprint/1017
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1017
2011-11-22T13:30:17Z
Symmetry reduction and heuristic search for error detection in model checking
The state explosion problem is the main limitation of model checking. Symmetries in the system being verified can be exploited in order to avoid this problem by defining an equivalence (symmetry) relation on the states of the system, which induces a semantically equivalent quotient system of smaller size. On the other hand, heuristic search algorithms can be applied to improve the bug-finding capabilities of model checking. Such algorithms use heuristic functions to guide the exploration. Best-first search is used for accelerating the search, while A* guarantees optimal error trails if combined with admissible estimates. We analyze some aspects of combining both approaches, concentrating on the problem of finding the optimal path to the equivalence class of a given error state. Experimental results evaluate our approach.
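The quotient construction can be illustrated with fully interchangeable processes, where sorting the vector of local states yields a canonical orbit representative. The system below (n processes each cycling through three local states) is an invented toy, not an example from the abstract; it shows the state-count reduction from 3^n states to the number of multisets.

```python
def reachable(start, succ, canon=lambda s: s):
    """Explicit-state exploration; canon maps each state to the chosen
    representative of its symmetry equivalence class."""
    seen = {canon(start)}
    frontier = [canon(start)]
    while frontier:
        s = frontier.pop()
        for t in succ(s):
            c = canon(t)
            if c not in seen:
                seen.add(c)
                frontier.append(c)
    return seen

# n fully symmetric processes, each cycling through local states 0, 1, 2
def succ(s):
    return (s[:i] + ((s[i] + 1) % 3,) + s[i + 1:] for i in range(len(s)))

# under full process symmetry, the sorted local-state vector is canonical
canon = lambda s: tuple(sorted(s))
```

With four processes the full system has 3^4 = 81 reachable states while the quotient has only the 15 multisets of size four over three local states.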
Alberto Lluch-Lafuente
alberto.lluch@imtlucca.it
2011-11-22T12:19:23Z
2011-12-20T12:00:25Z
http://eprints.imtlucca.it/id/eprint/1016
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1016
2011-11-22T12:19:23Z
Protocol verification with heuristic search
We present an approach to reconcile explicit state model checking and heuristic directed search and provide experimental evidence that the model checking problem for concurrent systems, such as communications protocols, can be solved more efficiently, since finding a state violating a property can be understood as a directed search problem. In our work we combine the expressive power and implementation efficiency of the SPIN model checker with the HSF heuristic search workbench, yielding the HSF-SPIN tool that we have implemented. We start off from the A* algorithm and some of its derivatives and define heuristics for various system properties that guide the search so that it finds error states faster. In this paper we focus on safety properties and provide heuristics for invariant and assertion violation and deadlock detection. We provide experimental results for applying HSF-SPIN to two toy protocols and one real world protocol, the CORBA GIOP protocol.
Stefan Edelkamp
Alberto Lluch-Lafuente
alberto.lluch@imtlucca.it
Stefan Leue
2011-11-22T11:25:48Z
2011-12-20T12:00:25Z
http://eprints.imtlucca.it/id/eprint/1013
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1013
2011-11-22T11:25:48Z
Abstraction in directed model checking
Abstraction is one of the most important techniques for coping with large and infinite state spaces in model checking and for reducing the verification effort. The abstract system is smaller than the original one, and if the abstract system satisfies a correctness specification, so does the concrete one. However, abstractions may introduce behavior violating the specification that is not present in the original system. This paper bypasses this problem by proposing the combination of abstraction with heuristic search to improve error detection. The abstract system is explored in order to create a database that stores the exact distances from abstract states to the set of abstract error states. To check whether the abstract behavior is present in the original system, efficient exploration algorithms exploit the database for guidance.
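The distance database can be sketched as a backward breadth-first search over the abstract transition relation; h(s) = db[alpha(s)] is then an admissible heuristic for the concrete search, since the abstraction only adds behavior. The counter example at the end is an invented illustration.

```python
from collections import deque

def pattern_db(abs_states, abs_succ, abs_errors):
    """Exact distance from every abstract state to the nearest abstract error
    state, computed by backward BFS over the abstract transition relation."""
    pred = {s: [] for s in abs_states}
    for s in abs_states:
        for t in abs_succ(s):
            pred[t].append(s)
    dist = {e: 0 for e in abs_errors}
    queue = deque(abs_errors)
    while queue:
        s = queue.popleft()
        for p in pred[s]:
            if p not in dist:
                dist[p] = dist[s] + 1
                queue.append(p)
    return dist
```

For instance, abstracting a two-counter system to its first counter x in 0..7 with increments only, and taking x == 7 as the abstract error, gives db[x] = 7 - x, a lower bound on the concrete error distance.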
Stefan Edelkamp
Alberto Lluch-Lafuente
alberto.lluch@imtlucca.it
2011-11-18T14:02:09Z
2014-07-01T14:45:43Z
http://eprints.imtlucca.it/id/eprint/1009
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1009
2011-11-18T14:02:09Z
Optimization-based AFC (automatic flatness control) in cold tandem rolling: an integrated flatness optimization approach for the whole tandem mill
Cold tandem mills have the purpose of reducing the thickness of flat steel by means of consecutive rolling stands. This type of process is widely deployed in order to supply a wide variety of industries, from food processing to automotive manufacturing. In recent years, the production of steel (and of other metals, like copper and aluminium) by cold rolling has been the subject of research efforts to reach ultra-thin gauges and to advance production performance together with the quality of the material.
Alberto Bemporad
alberto.bemporad@imtlucca.it
Daniele Bernardini
daniele.bernardini@imtlucca.it
Andrea Spinelli
Francesco Alessandro Cuzzola
2011-11-17T14:49:51Z
2014-01-24T14:27:13Z
http://eprints.imtlucca.it/id/eprint/1007
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1007
2011-11-17T14:49:51Z
Optimization-based feedback control of flatness in a cold tandem rolling
For the problem of feedback control of flatness in a cold tandem rolling mill, this paper proposes control techniques based on quadratic optimization. Three different strategies are proposed and compared: a centralized solution based on a global QP problem that decides the commands to all the actuators, and two decentralized solutions where each actuator command is optimized locally. All schemes are based on a global exchange of information about the commands generated at the previous time step at each stand, required by a synchronizing device that compensates for the numerous delays present in the mill.
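The centralized-versus-decentralized contrast can be sketched on a static least-squares model of flatness error, e = A u - r. The gain matrix A, the target r, and the simple information-exchange scheme below are all made-up illustrations (the real mill involves delays and many actuators); the decentralized loop is a Jacobi-style iteration in which each actuator re-optimizes only its own command using the commands the others broadcast at the previous step.

```python
import numpy as np

# hypothetical 2-actuator flatness model e = A u - r (data invented)
A = np.array([[1.0, 0.4], [0.4, 1.0]])
r = np.array([1.0, 0.5])

# centralized: one global quadratic program over all actuator commands
u_central = np.linalg.solve(A, r)

# decentralized: each actuator minimizes ||A u - r||^2 over its own command,
# holding the other commands at their previously broadcast values
u = np.zeros(2)
for _ in range(50):
    u_prev = u.copy()
    for i in range(2):
        others = A @ u_prev - A[:, i] * u_prev[i]   # effect of the other commands
        u[i] = A[:, i] @ (r - others) / (A[:, i] @ A[:, i])
```

Here the coupling is mild enough that the decentralized iteration converges to the centralized optimum; with stronger cross-gains the gap between the schemes becomes visible.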
Alberto Bemporad
alberto.bemporad@imtlucca.it
Daniele Bernardini
daniele.bernardini@imtlucca.it
Francesco Alessandro Cuzzola
Andrea Spinelli
2011-11-17T14:36:26Z
2011-11-17T14:36:26Z
http://eprints.imtlucca.it/id/eprint/1006
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1006
2011-11-17T14:36:26Z
Energy-aware robust model predictive control with feedback from multiple noisy wireless sensors
Wireless Sensor Networks (WSNs) are becoming fundamental components of modern control systems. Although WSNs provide tremendous advantages in versatility, their use poses new issues in the design of the control system; in particular, the discharge of sensor-node batteries, which is mainly due to radio communications, must be taken into account. In a previous work, for the case of a single wireless measurement device and no measurement noise, we provided a general transmission strategy for communication between controller and sensors and an energy-aware robust Model Predictive Control (MPC) algorithm that achieve a profitable trade-off between transmission rate (battery energy savings) and loss of closed-loop system performance. In this paper we extend the approach by taking into account unknown but bounded noise on state measurements, and by considering a local area network of multiple wireless sensors measuring the state vector for disturbance rejection.
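The transmission-rate/performance trade-off can be sketched with an event-triggered rule: the sensor transmits only when the plant state drifts too far from the controller's model-based estimate. Everything below (the scalar plant, the deadbeat-style feedback, the noise level and thresholds) is an illustrative assumption, not the paper's MPC design.

```python
import numpy as np

def run(threshold, steps=200, seed=0):
    """Toy event-triggered loop for a scalar plant x+ = a x + u + w: the
    sensor spends battery to transmit x only when it deviates from the
    controller's estimate x_hat by more than `threshold`."""
    rng = np.random.default_rng(seed)
    a, x, x_hat, sent, cost = 0.9, 1.0, 1.0, 0, 0.0
    for _ in range(steps):
        if abs(x - x_hat) > threshold:   # event: transmit the measurement
            x_hat, sent = x, sent + 1
        u = -a * x_hat                   # feedback computed from the estimate
        x = a * x + u + rng.normal(0.0, 0.05)
        x_hat = a * x_hat + u            # controller's open-loop prediction
        cost += x * x
    return sent, cost
```

Sweeping the threshold traces out the trade-off curve: a larger threshold saves transmissions (battery energy) at the price of a higher regulation cost.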
Daniele Bernardini
daniele.bernardini@imtlucca.it
Alberto Bemporad
alberto.bemporad@imtlucca.it
2011-11-15T14:50:30Z
2011-11-15T14:50:30Z
http://eprints.imtlucca.it/id/eprint/1003
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1003
2011-11-15T14:50:30Z
Model predictive control of stochastic and networked systems
Daniele Bernardini
daniele.bernardini@imtlucca.it
2011-10-25T10:01:35Z
2013-03-12T09:32:21Z
http://eprints.imtlucca.it/id/eprint/972
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/972
2011-10-25T10:01:35Z
Tracking-Optimized quantization for H.264 compression in transportation video surveillance applications
Eren Soyak
Sotirios A. Tsaftaris
sotirios.tsaftaris@imtlucca.it
Aggelos K. Katsaggelos
2011-10-06T09:22:40Z
2011-10-06T09:28:54Z
http://eprints.imtlucca.it/id/eprint/910
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/910
2011-10-06T09:22:40Z
SoSL: a service-oriented stochastic logic
The Temporal Mobile Stochastic Logic (MoSL) has been introduced in previous works by the authors for formulating properties of systems specified in StoKlaim, a Markovian extension of Klaim. The main purpose of MoSL is to address key functional aspects of network-aware programming, such as distribution awareness, mobility and security, and to guarantee their integration with performance and dependability guarantees. In this paper we present SoSL, a variant of MoSL designed for dealing with specific features of Service-Oriented Computing (SOC). We also show how SoSL formulae can be model-checked against system descriptions expressed in MarCaSPiS, a process calculus designed for addressing quantitative aspects of SOC. In order to perform actual model checking, we rely on a dedicated front-end that uses existing state-based stochastic model checkers, e.g. the Markov Reward Model Checker (MRMC).
Rocco De Nicola
r.denicola@imtlucca.it
Diego Latella
Michele Loreti
Mieke Massink
2011-10-06T09:13:07Z
2011-10-06T09:28:54Z
http://eprints.imtlucca.it/id/eprint/909
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/909
2011-10-06T09:13:07Z
Core calculi for service-oriented computing
Core calculi have been adopted in the Sensoria project with three main aims. First of all, they have been used to clarify and formally define the basic concepts that characterize the Sensoria approach to the modeling of service-oriented applications. Second, they are the formal models on which the Sensoria analysis techniques have been developed. Finally, they have been used to drive the implementation of the prototypes of the Sensoria languages for programming actual service-based systems. This chapter reports on the Sensoria core calculi, presenting their syntax and intuitive semantics, and describing their main features by means of a common running example, namely a Credit Request scenario taken from the Sensoria Finance case study.
Luis Caires
Rocco De Nicola
r.denicola@imtlucca.it
Rosario Pugliese
Vasco Thudichum Vasconcelos
Gianluigi Zavattaro
2011-09-13T09:52:21Z
2011-09-27T11:09:23Z
http://eprints.imtlucca.it/id/eprint/863
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/863
2011-09-13T09:52:21Z
A formal support to business and architectural design for service-oriented systems
Architectural Design Rewriting (ADR) is an approach for the design of software architectures developed within Sensoria by reconciling graph transformation and process calculi techniques. The key feature that makes ADR a suitable and expressive framework is the algebraic handling of structured graphs, which improves the support for specification, analysis and verification of service-oriented architectures and applications. We show how ADR is used as a formal ground for high-level modelling languages and approaches developed within Sensoria.
Roberto Bruni
Howard Foster
Alberto Lluch-Lafuente
alberto.lluch@imtlucca.it
Ugo Montanari
Emilio Tuosto
2011-09-13T09:42:21Z
2011-09-27T11:09:23Z
http://eprints.imtlucca.it/id/eprint/862
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/862
2011-09-13T09:42:21Z
Hierarchical models for service-oriented systems
We present our approach to the denotation and representation of hierarchical graphs: a suitable algebra of hierarchical graphs and two domains of interpretations. Each domain of interpretation focuses on a particular perspective of the graph hierarchy: the top view (nested boxes) is based on a notion of embedded graphs while the side view (tree hierarchy) is based on gs-graphs. Our algebra can be understood as a high-level language for describing such graphical models, which are well suited for defining graphical representations of service-oriented systems where nesting (e.g. sessions, transactions, locations) and linking (e.g. shared channels, resources, names) are key aspects.
Roberto Bruni
Andrea Corradini
Fabio Gadducci
Alberto Lluch-Lafuente
alberto.lluch@imtlucca.it
Ugo Montanari
2011-09-13T09:24:38Z
2016-07-13T10:48:51Z
http://eprints.imtlucca.it/id/eprint/861
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/861
2011-09-13T09:24:38Z
A Lewisian approach to the verification of adaptive systems
Many software artifacts like software architectures or distributed programs are characterized by a high level of dynamism, involving changes in their structure or behaviour as a response to external stimuli or as the result of programmed reconfigurations. When reasoning on such adaptive systems, one is interested not only in proving properties of their global behaviour, like system correctness, but also in the evolution of the single components. For instance, when analysing the well-known stable marriage problem one would like to know whether a solution ensures that “two females never claim to be married with the same male”. To enable automatic reasoning, two main things are needed: models for the software artifacts and logic-based languages for describing their properties. One of the most successful and versatile models for such artifacts is graphs. Regarding the property specification languages, variants of quantified temporal logics have been proposed, which combine the modal operators of temporal logics with monadic second-order logic for graphs. Unfortunately, the semantical models for such logics are not clearly cut, due to the possibility of interleaving modal operators and quantifiers in formulae like ∃x.◊ψ, where x is quantified in a world but ψ states properties about x in a reachable world or state where it does not necessarily exist or even have the same identity. The issue is known in the quantified temporal logic literature as trans-world identity [1, 3]. A typical solution follows the so-called “Kripke semantics” approach: roughly, a set of universal items is chosen, and its elements are used to form each state. This solution is the most widely adopted, and it underlies all the proposals we are aware of. However, Kripke-like solutions do not fit well with the merging, deletion and creation of components, nor do they allow for an easy inclusion of evolution relations possibly forming cycles: if the value of an open formula is a set of states, how can one account, e.g., for an element that is first deleted and then added again? This problem is often solved by restricting the class of admissible evolution relations, which forces a reformulation of the state transition relation modeling the system evolution, hampering the intuitive meaning of the logic. In [2, 5] we presented an alternative approach, inspired by counterpart theory [4]. The key point of Lewis's proposal is the notion of counterpart, which is a consequence of his refusal to interpret the relation of trans-world sameness as strict identity. In our approach we exploit counterpart relations, i.e. (partial) functions among states, explicitly relating elements of different states. Our solution avoids some limitations of the existing approaches, in particular in what regards the treatment of the possible merging and reuse of components. Moreover, the resulting semantics is streamlined and intuitively appealing, yet general enough to cover most of the alternatives we are aware of.
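The counterpart idea can be given a small executable reading. The encoding below is an illustrative toy, not the formal semantics: each world carries a set of components, each transition carries a partial function relating a component to its counterpart in the next world, and merging is simply two components sharing a counterpart.

```python
# Toy counterpart model (hypothetical encoding): worlds w0 -> w1 -> w2,
# where components a and b of w0 are merged into ab, and c is created in w2.
worlds = {"w0": {"a", "b"}, "w1": {"ab"}, "w2": {"ab", "c"}}
steps = [
    ("w0", "w1", {"a": "ab", "b": "ab"}),  # merge: both map to the same item
    ("w1", "w2", {"ab": "ab"}),            # ab survives; c has no counterpart
]

def counterparts(world, item, steps):
    """All (world, item) pairs reachable by following counterpart functions."""
    found = {(world, item)}
    frontier = [(world, item)]
    while frontier:
        w, x = frontier.pop()
        for src, dst, cp in steps:
            if src == w and x in cp and (dst, cp[x]) not in found:
                found.add((dst, cp[x]))
                frontier.append((dst, cp[x]))
    return found
```

Note that a and b have the same counterpart in w1, and that a component deleted on one step simply has no image, two situations that a fixed universal domain of items handles awkwardly.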
Fabio Gadducci
Alberto Lluch-Lafuente
alberto.lluch@imtlucca.it
Andrea Vandin
andrea.vandin@imtlucca.it
2011-09-12T14:13:05Z
2016-07-13T09:45:10Z
http://eprints.imtlucca.it/id/eprint/860
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/860
2011-09-12T14:13:05Z
Towards a Maude tool for model checking temporal graph properties
We present our prototypical tool for the verification of graph transformation systems. The major novelty of our tool is that it provides a model checker for temporal graph properties based on counterpart semantics for quantified μ-calculi. Our tool can be considered an instantiation of our approach to counterpart semantics, which allows for a neat handling of creation, deletion and merging in systems with dynamic structure. Our implementation is based on the object-based machinery of Maude, which provides the basics to deal with attributed graphs. Graph transformation systems are specified with term rewrite rules. The model checker evaluates logical formulae of a second-order modal μ-calculus in the automatically generated counterpart model (a sort of unfolded graph transition system) of the graph transformation system under study. The result of evaluating a formula is a set of assignments for each state, associating node variables to actual nodes.
Alberto Lluch-Lafuente
alberto.lluch@imtlucca.it
Andrea Vandin
andrea.vandin@imtlucca.it
2011-09-12T13:50:29Z
2011-09-27T11:09:23Z
http://eprints.imtlucca.it/id/eprint/859
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/859
2011-09-12T13:50:29Z
On structured model-driven transformations
Structural aspects play a key role in the model-driven development of software systems. Effective techniques and tools must therefore be based on suitable representation formalisms that facilitate the specification, manipulation and analysis of the structure of models. Graphical and algebraic approaches have been shown to be very successful for such purposes: 1) graphs offer a natural representation of topological structures, 2) algebras offer a natural representation of compositional structures, 3) both graphs and algebras can be manipulated in a declarative way by means of rule-based techniques, 4) they allow for a layered presentation of models that enables compositional techniques and favours scalability. Most of the existing approaches represent such layering in a plain manner by overlapping the intra- and the inter-layered structure. It has been shown that some layering structures can be conveniently represented by an explicit hierarchical structure, enabling structurally inductive manipulations of the resulting models. Moreover, providing an inductive presentation of the structure facilitates the compositional analysis and verification of models. In this paper we compare and reconcile some recent approaches and synthesise them into an algebraic and graph-based formalism for representing and manipulating models with inductively defined hierarchical structure.
Roberto Bruni
Alberto Lluch-Lafuente
alberto.lluch@imtlucca.it
Ugo Montanari
2011-09-08T13:30:49Z
2013-03-05T15:33:17Z
http://eprints.imtlucca.it/id/eprint/849
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/849
2011-09-08T13:30:49Z
Tracking-optimal pre- and post-processing for H.264 compression in traffic video surveillance applications
The compression of video can reduce the accuracy of automated tracking algorithms. This is problematic for centralized applications such as transportation surveillance systems, where remotely captured and compressed video is transmitted to a central location for tracking. In typical systems, the majority of communications bandwidth is spent on representing events such as capture noise or local changes to lighting. We propose a pre- and post-processing algorithm that identifies and removes such events of low tracking interest, significantly reducing the bitrate required to transmit remotely captured video while maintaining comparable tracking accuracy. Using the H.264/AVC video coding standard and a commonly used state-of-the-art tracker we show that our algorithm allows for up to 90% bitrate savings while maintaining comparable tracking accuracy.
Eren Soyak
Sotirios A. Tsaftaris
sotirios.tsaftaris@imtlucca.it
Aggelos K. Katsaggelos
2011-09-07T14:17:53Z
2013-03-05T15:34:20Z
http://eprints.imtlucca.it/id/eprint/833
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/833
2011-09-07T14:17:53Z
Content-aware H.264 encoding for traffic video tracking applications
The compression of video can reduce the accuracy of tracking algorithms, which is problematic for centralized applications that rely on remotely captured and compressed video for input. We show the effects of high compression on the features commonly used in real-time video object tracking. We propose a computationally efficient Region of Interest (ROI) extraction method, which is used during standard-compliant H.264 encoding to concentrate bitrate on regions in video most likely to contain objects of tracking interest (vehicles). This algorithm is shown to significantly increase tracking accuracy, which is measured by employing a commonly used automatic tracker.
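A low-complexity ROI extraction step can be sketched with block-level temporal differencing. The rule below (8x8 blocks, a mean-absolute-difference threshold) is a hypothetical stand-in for the paper's method: flagged blocks would then be encoded with a lower quantization parameter so that bitrate concentrates on likely vehicles.

```python
import numpy as np

def roi_mask(prev, curr, thresh=12.0, block=8):
    """Flag blocks whose mean absolute temporal difference exceeds a threshold.
    Returns a (rows/block, cols/block) boolean map; True = region of tracking
    interest (assumed thresholds and block size are illustrative)."""
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16)).astype(float)
    h, w = diff.shape
    diff = diff[:h - h % block, :w - w % block]        # crop to whole blocks
    means = diff.reshape(h // block, block, w // block, block).mean(axis=(1, 3))
    return means > thresh
```

Because the decision is a per-block mean of pixel differences, it needs no motion estimation and fits the low processing power available at a remote camera node.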
Eren Soyak
Sotirios A. Tsaftaris
sotirios.tsaftaris@imtlucca.it
Aggelos K. Katsaggelos
2011-09-05T15:07:57Z
2013-03-05T15:46:14Z
http://eprints.imtlucca.it/id/eprint/823
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/823
2011-09-05T15:07:57Z
Restoration of the cantilever bowing distortion in Atomic Force Microscopy
Due to the mechanics of the Atomic Force Microscope (AFM), there is a curvature distortion (bowing effect) present in the acquired images. At present, flattening such images requires human intervention to manually segment object data from the background, which is time consuming and highly inaccurate. In this paper, an automated algorithm to flatten lines from AFM images is presented. The proposed method classifies the data into objects and background, and fits convex lines in an iterative fashion. Results on real images from DNA-wrapped carbon nanotubes (DNA-CNTs) and synthetic experiments are presented, demonstrating the effectiveness of the proposed algorithm in increasing the resolution of the surface topography. In addition, a link between the flattening problem and MRI inhomogeneity (shading) is given, and the proposed method is compared to an entropy-based MRI inhomogeneity correction method.
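The iterate-classify-fit loop can be sketched per scan line. This is a simplified sketch under assumed choices (a polynomial background model and a 2-sigma residual classifier), not the paper's exact method: points well above the current background fit are relabelled as objects and excluded from the next fit.

```python
import numpy as np

def flatten_line(line, degree=2, iters=5):
    """Iteratively fit a polynomial background to an AFM scan line and
    subtract it; points far above the fit are treated as objects (peaks)
    and excluded from subsequent fits."""
    x = np.arange(line.size)
    background = np.ones(line.size, dtype=bool)
    fit = np.zeros(line.size)
    for _ in range(iters):
        coeffs = np.polyfit(x[background], line[background], degree)
        fit = np.polyval(coeffs, x)
        resid = line - fit
        # keep as background everything not far above the fit
        background = resid < 2.0 * resid[background].std() + 1e-9
    return line - fit
```

On a synthetic bowed line with a raised object, the loop removes the object from the fit within one or two iterations, so the subtracted background follows the bowing rather than the object.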
Sotirios A. Tsaftaris
sotirios.tsaftaris@imtlucca.it
Jana Zujovic
Aggelos K. Katsaggelos
2011-09-05T14:51:27Z
2013-03-05T15:46:02Z
http://eprints.imtlucca.it/id/eprint/821
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/821
2011-09-05T14:51:27Z
Automated line flattening of Atomic Force Microscopy images
In this paper, an automated algorithm to flatten lines from Atomic Force Microscopy (AFM) images is presented. Due to the mechanics of the AFM, there is a curvature distortion (bowing effect) present in the acquired images. At present, flattening such images requires human intervention to manually segment object data from the background, which is time consuming and highly inaccurate. The proposed method classifies the data into objects and background, and fits convex lines in an iterative fashion. Results on real images from DNA wrapped carbon nanotubes (DNA-CNTs) and synthetic experiments are presented, demonstrating the effectiveness of the proposed algorithm in increasing the resolution of the surface topography.
Sotirios A. Tsaftaris
sotirios.tsaftaris@imtlucca.it
Jana Zujovic
Aggelos K. Katsaggelos
2011-09-05T14:43:12Z
2013-03-05T15:45:15Z
http://eprints.imtlucca.it/id/eprint/820
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/820
2011-09-05T14:43:12Z
Local feature extraction for video copy detection in a database
In this paper a new content-based copy identification method for video sequences is presented. It is robust to a number of image transformations and particularly robust to compression artifacts. A scale- and rotation-invariant local image descriptor for corner points in detected key frames is proposed, based on a generalized Radon transform. In addition, a distance similarity metric is used that fuses intensity and geometry information to compare key frames extracted using a scene detection algorithm. Furthermore, to achieve low querying computational complexity, a dynamic programming (DP) approach is employed. Experimental results demonstrate the effectiveness of our approach.
Ehsan Maani
Sotirios A. Tsaftaris
sotirios.tsaftaris@imtlucca.it
Aggelos K. Katsaggelos
2011-08-12T11:08:23Z
2013-03-05T15:46:28Z
http://eprints.imtlucca.it/id/eprint/818
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/818
2011-08-12T11:08:23Z
Automated tracking of a passive endomyocardial stiletto catheter with dephased FLAPS MRI: a feasibility study
Automated tracking of a passive stiletto catheter for regenerative myocardial therapy in the MR environment may improve the accuracy of the procedure. We report the successful implementation of automated computer-assisted tracking for this purpose in a controlled phantom study.
Ioannis Koktzoglou
Sotirios A. Tsaftaris
sotirios.tsaftaris@imtlucca.it
Sven Zuehlsdorff
Debiao Li
Aggelos K. Katsaggelos
Rohan Dharmakumar
2011-08-12T10:37:17Z
2013-03-05T15:46:54Z
http://eprints.imtlucca.it/id/eprint/817
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/817
2011-08-12T10:37:17Z
Automated tracking of a passive intramyocardial needle with off-resonance MRI: a feasibility study
Direct intramyocardial therapies aimed at treating myocardial regions affected by severe ischemia may benefit from CMR-guided interventional procedures. Although interventional MR approaches using active devices are considered to be the method of choice, potential tissue heating and altered mechanical properties are some of their limitations. Methods that have the capacity to visualize MR-compatible passive devices may overcome many of these obstacles. Recently, an off-resonance-based real-time positive contrast method (FLAPS) was used to visualize the passage of an intramyocardial needle (PIN) through the aorta and into the heart of swine [1,2]. We envision this procedure may benefit from computer assisted strategies that track the needle's location throughout the MR procedure. However, the feasibility of real-time automated tracking of a PIN has not been established.
Ioannis Koktzoglou
Sotirios A. Tsaftaris
sotirios.tsaftaris@imtlucca.it
Sven Zuehlsdorff
Debiao Li
Aggelos K. Katsaggelos
Rohan Dharmakumar
2011-08-12T09:27:34Z
2013-03-05T15:47:33Z
http://eprints.imtlucca.it/id/eprint/812
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/812
2011-08-12T09:27:34Z
DNA as a medium for storing digital signals
Motivated by the storage capacity and efficiency of the DNA molecule, in this paper we propose to utilize DNA molecules to store digital signals. We show that hybridization of DNA molecules can be used as a similarity criterion for retrieving digital signals encoded and stored in a DNA database. Since retrieval is achieved through hybridization of query and data carrying DNA molecules, we present a mathematical model to estimate hybridization efficiency (also known as selectivity annealing). We show that selectivity annealing is inversely proportional to the mean squared error (MSE) of the encoded signal values. In addition, we show that the concentration of the molecules plays the same role as the decision threshold employed in digital signal matching algorithms. Finally, similarly to the digital domain, we define a DNA signal-to-noise ratio (SNR) measure to assess the performance of the DNA-based retrieval scheme. Simulations are presented to validate our arguments.
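The retrieval-by-hybridization idea can be made concrete with a deliberately crude toy encoding (not the paper's codeword design): quantize each sample to four levels, map levels to bases, and score a data strand against the Watson-Crick complement of the query strand by counting base-paired positions.

```python
# Toy DNA retrieval sketch: assumed 2-bit quantization and a direct
# level-to-base map; real codeword design imposes many more constraints.
COMP = {"A": "T", "T": "A", "C": "G", "G": "C"}
BASES = "ACGT"

def encode(samples):
    """Map 2-bit samples (values 0..3) to a DNA strand."""
    return "".join(BASES[s] for s in samples)

def hybridization_score(data, query):
    """Count positions where the data strand base-pairs with the probe
    (the complement of the query strand); equal samples pair, others do not."""
    probe = "".join(COMP[b] for b in encode(query))
    return sum(COMP[p] == d for p, d in zip(probe, encode(data)))
```

The score falls as the two sample sequences diverge, a toy analogue of the abstract's inverse relation between annealing selectivity and the MSE of the encoded values.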
Sotirios A. Tsaftaris
sotirios.tsaftaris@imtlucca.it
Vassily Hatzimanikati
Aggelos K. Katsaggelos
2011-08-12T09:20:11Z
2013-03-05T15:47:59Z
http://eprints.imtlucca.it/id/eprint/811
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/811
2011-08-12T09:20:11Z
DNA hybridization as a similarity criterion for querying digital signals stored in DNA databases
We demonstrate via simulation that hybridization of DNA molecules can be used as a similarity criterion for retrieving digital signals encoded and stored in a synthesized DNA database. After introducing some necessary DNA terminology, we briefly explain how digital signals are transformed to DNA sequences. Since retrieval is achieved through hybridization of query and data carrying DNA molecules, we present a mathematical model to estimate hybridization efficiency (also known as selectivity annealing). We show that selectivity annealing is inversely proportional to the mean squared error (MSE) of the encoded signal values. In addition, we show that the concentration of the molecules plays the same role as the decision threshold employed in digital signal matching algorithms. Finally, similar to the digital domain, we define a DNA signal-to-noise ratio (SNR) measure to assess the performance of the DNA-based retrieval scheme. Simulations are presented to validate our arguments.
Sotirios A. Tsaftaris
sotirios.tsaftaris@imtlucca.it
Vassily Hatzimanikati
Aggelos K. Katsaggelos
2011-08-11T13:53:47Z
2013-03-05T15:48:48Z
http://eprints.imtlucca.it/id/eprint/810
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/810
2011-08-11T13:53:47Z
On designing DNA databases for the storage and retrieval of digital signals
In this paper we propose a procedure for the storage and retrieval of digital signals utilizing DNA. Digital signals are encoded in DNA sequences that satisfy among other constraints the Noise Tolerance Constraint (NTC) that we have previously introduced. NTC takes into account the presence of noise in digital signals by exploiting the annealing between non-perfect complementary sequences. We discuss various issues arising from the development of DNA-based database solutions (i) in vitro (in test tubes, or other materials) for short-term storage and (ii) in vivo (inside organisms) for long-term storage. We discuss the benefits and drawbacks of each scheme and its effects on the codeword design problem and performance. We also propose a new way of constructing the database elements such that a short-term database can be converted into a long term one and vice versa without the need for a re-synthesis. The latter improves efficiency and reduces the cost of a long-term database.
Sotirios A. Tsaftaris
sotirios.tsaftaris@imtlucca.it
Aggelos K. Katsaggelos
2011-08-11T13:47:12Z
2013-03-05T15:49:00Z
http://eprints.imtlucca.it/id/eprint/809
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/809
2011-08-11T13:47:12Z
A new codeword design algorithm for DNA based storage and retrieval of digital signals
Sotirios A. Tsaftaris
sotirios.tsaftaris@imtlucca.it
Aggelos K. Katsaggelos
2011-08-11T13:04:07Z
2013-03-05T15:51:08Z
http://eprints.imtlucca.it/id/eprint/805
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/805
2011-08-11T13:04:07Z
Fast compressed domain watermarking of MPEG multiplexed streams
In this paper, a new technique for watermarking of MPEG compressed video streams is proposed. The watermarking scheme operates directly in the domain of MPEG multiplexed streams. Perceptual models are used during the embedding process in order to preserve the quality of the video. The watermark is embedded in the compressed domain and is detected without the use of the original video sequence. Experimental evaluation demonstrates that the proposed scheme is able to withstand a variety of attacks. The resulting watermarking system is very fast and reliable, and is suitable for copyright protection and real-time content authentication applications.
Dimitrios Simitopoulos
Sotirios A. Tsaftaris
sotirios.tsaftaris@imtlucca.it
Nikolaos Boulgouris
Michael Strintzis
2011-08-11T12:27:52Z
2013-03-05T15:08:41Z
http://eprints.imtlucca.it/id/eprint/803
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/803
2011-08-11T12:27:52Z
Low-complexity tracking-aware H.264 video compression for transportation surveillance
In centralized transportation surveillance systems, video is captured and compressed at low processing power remote nodes and transmitted to a central location for processing. Such compression can reduce the accuracy of centrally run automated object tracking algorithms. In typical systems, the majority of communications bandwidth is spent on encoding temporal pixel variations such as acquisition noise or local changes to lighting. We propose a tracking-aware, H.264-compliant compression algorithm that removes temporal components of low tracking interest and optimizes the quantization of frequency coefficients, particularly those that most influence trackers, significantly reducing bitrate while maintaining comparable tracking accuracy. We utilize tracking accuracy as our compression criterion in lieu of mean squared error metrics. Our proposed system is designed with low processing power and memory requirements in mind, and as such can be deployed on remote nodes. Using H.264/AVC video coding and a commonly used state-of-the-art tracker, we show that our algorithm allows for over 90% bitrate savings while maintaining comparable tracking accuracy.
Eren Soyak
Sotirios A. Tsaftaris
sotirios.tsaftaris@imtlucca.it
Aggelos K. Katsaggelos
2011-08-11T10:07:57Z
2013-03-05T15:46:40Z
http://eprints.imtlucca.it/id/eprint/795
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/795
2011-08-11T10:07:57Z
The not so digital future of digital signal processing [point of view]
The purpose of this paper is to consider possibilities of digital signal processing outside the semiconductor or electronic domain.
Sotirios A. Tsaftaris
sotirios.tsaftaris@imtlucca.it
Aggelos K. Katsaggelos
2011-08-11T08:37:32Z
2011-09-27T13:11:38Z
http://eprints.imtlucca.it/id/eprint/794
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/794
2011-08-11T08:37:32Z
Uniform labeled transition systems for nondeterministic, probabilistic, and stochastic process calculi
Labeled transition systems are typically used to represent the behavior of nondeterministic processes, with labeled transitions defining a one-step state-to-state reachability relation. This model has been recently made more general by modifying the transition relation in such a way that it associates with any source state and transition label a reachability distribution, i.e., a function mapping each possible target state to a value of some domain that expresses the degree of one-step reachability of that target state. In this extended abstract, we show how the resulting model, called ULTraS from Uniform Labeled Transition System, can be naturally used to give semantics to a fully nondeterministic, a fully probabilistic, and a fully stochastic variant of a CSP-like process language.
Marco Bernardo
Rocco De Nicola
r.denicola@imtlucca.it
Michele Loreti
2011-08-10T13:58:34Z
2013-03-05T15:47:46Z
http://eprints.imtlucca.it/id/eprint/792
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/792
2011-08-10T13:58:34Z
Joint source-channel coding for wireless object-based video communications utilizing data hiding
In recent years, joint source-channel coding for multimedia communications has gained increased popularity. However, very limited work has been conducted to address the problem of joint source-channel coding for object-based video. In this paper, we propose a data hiding scheme that improves the error resilience of object-based video by adaptively embedding the shape and motion information into the texture data. Within a rate-distortion theoretical framework, the source coding, channel coding, data embedding, and decoder error concealment are jointly optimized based on knowledge of the transmission channel conditions. Our goal is to achieve the best video quality as expressed by the minimum total expected distortion. The optimization problem is solved using Lagrangian relaxation and dynamic programming. The performance of the proposed scheme is tested using simulations of a Rayleigh-fading wireless channel, and the algorithm is implemented based on the MPEG-4 verification model. Experimental results indicate that the proposed hybrid source-channel coding scheme significantly outperforms methods without data hiding or unequal error protection.
Haohong Wang
Sotirios A. Tsaftaris
sotirios.tsaftaris@imtlucca.it
Aggelos K. Katsaggelos
2011-08-10T13:16:49Z
2013-03-05T15:49:53Z
http://eprints.imtlucca.it/id/eprint/788
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/788
2011-08-10T13:16:49Z
Fast watermarking of MPEG-1/2 streams using compressed-domain perceptual embedding and a generalized correlator detector
A novel technique is proposed for watermarking of MPEG-1 and MPEG-2 compressed video streams. The proposed scheme is applied directly in the domain of MPEG-1 system streams and MPEG-2 program streams (multiplexed streams). Perceptual models are used during the embedding process in order to avoid degradation of the video quality. The watermark is detected without the use of the original video sequence. A modified correlation-based detector is introduced that applies nonlinear preprocessing before correlation. Experimental evaluation demonstrates that the proposed scheme is able to withstand several common attacks. The resulting watermarking system is very fast and therefore suitable for copyright protection of compressed video.
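The generalized correlator described above applies a nonlinearity before correlating with the watermark sequence. A minimal sketch using a sign (hard-limiting) nonlinearity as the preprocessing step; the specific nonlinearity, threshold, and names here are illustrative assumptions, not the paper's exact detector:

```python
import numpy as np

def detect_watermark(coeffs, watermark, threshold=3.0):
    """Correlation detector with nonlinear (sign) preprocessing.
    coeffs: received, possibly watermarked, coefficients
    watermark: +/-1 pseudo-random sequence used at embedding
    Returns (detection statistic, binary decision)."""
    x = np.sign(np.asarray(coeffs, float))        # nonlinear preprocessing
    w = np.asarray(watermark, float)
    c = float(np.dot(x, w)) / np.sqrt(len(w))     # normalized correlation
    return c, c > threshold
```

Blind detection is possible because the statistic is computed against the pseudo-random sequence alone, without the original video.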
Dimitrios Simitopoulos
Sotirios A. Tsaftaris
sotirios.tsaftaris@imtlucca.it
Nikolaos Boulgouris
Alexia Briassouli
Michael Strintzis
2011-08-09T13:38:06Z
2013-03-05T15:50:07Z
http://eprints.imtlucca.it/id/eprint/785
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/785
2011-08-09T13:38:06Z
Digital watermarking for the copyright protection of compressed video
In this chapter, a new technique for the watermarking of MPEG-1 and MPEG-2 compressed video streams is proposed. The watermarking scheme operates directly in the domain of MPEG-1 system streams and MPEG-2 program streams (multiplexed streams). Perceptual models are used during the embedding process in order to preserve video quality. The watermark is embedded in the compressed domain and is detected without the use of the original video sequence. Experimental evaluation demonstrates that the proposed scheme is able to withstand a variety of attacks. The resulting watermarking system is very fast and reliable, and is suitable for the copyright protection of video content.
Dimitrios Simitopoulos
Sotirios A. Tsaftaris
sotirios.tsaftaris@imtlucca.it
Nikolaos Boulgouris
Georgios Triantafyllidis
Michael Strintzis
2011-08-03T13:33:34Z
2012-03-05T10:25:09Z
http://eprints.imtlucca.it/id/eprint/766
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/766
2011-08-03T13:33:34Z
Appliance operation scheduling for electricity consumption optimization
This paper concerns the problem of optimally scheduling a set of appliances at the end-user premises. The user's energy fee varies over time and, in the context of smart grids, the user may receive a reward from an energy aggregator for reducing consumption during certain time intervals. In a household, the problem is to decide when to schedule the operation of the appliances so as to balance a number of goals, namely overall cost, climatic comfort level, and timeliness. We devise a model accounting for a typical household user, and present computational results showing that it can be efficiently solved in real-life instances.
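The cost term of such a scheduling problem can be sketched by exhaustively searching for the cheapest feasible start slot under a time-varying tariff; the comfort and timeliness goals of the full model are omitted, and all names are illustrative:

```python
def schedule_appliance(tariff, duration, window):
    """Pick the start slot minimizing energy cost for an appliance that
    must run 'duration' consecutive slots and finish within the inclusive
    slot range window = (first, last). tariff: per-slot price list.
    Returns (best_start, best_cost). Illustrative cost-only sketch."""
    lo, hi = window
    best_cost, best_start = min(
        (sum(tariff[t:t + duration]), t)
        for t in range(lo, hi - duration + 2)   # t + duration - 1 <= hi
    )
    return best_start, best_cost
```

In the full model, each appliance would additionally carry comfort and timeliness penalties, turning the search into a mixed-integer program over all appliances jointly.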
Alessandro Agnetis
Gabriella Dellino
gabriella.dellino@imtlucca.it
Paolo Detti
Gianluca De Pascale
Giacomo Innocenti
Antonio Vicino
2011-08-03T10:52:17Z
2011-08-04T07:30:21Z
http://eprints.imtlucca.it/id/eprint/765
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/765
2011-08-03T10:52:17Z
Kriging metamodels in design optimization: an automotive engineering application
Gabriella Dellino
gabriella.dellino@imtlucca.it
Paolo Lino
Carlo Meloni
Alessandro Rizzo
2011-08-03T10:42:23Z
2011-08-04T07:30:21Z
http://eprints.imtlucca.it/id/eprint/764
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/764
2011-08-03T10:42:23Z
Modeling and simulation study for the design of controlled IPMC actuators
Gabriella Dellino
gabriella.dellino@imtlucca.it
Paolo Lino
Carlo Meloni
Alessandro Rizzo
Paolo Di Giamberardino
Andrea Usai
2011-08-02T10:17:52Z
2011-08-04T07:30:21Z
http://eprints.imtlucca.it/id/eprint/763
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/763
2011-08-02T10:17:52Z
Models for the design and optimization of CNG injection systems
Gabriella Dellino
gabriella.dellino@imtlucca.it
Paolo Lino
Carlo Meloni
Alessandro Rizzo
2011-08-02T10:15:15Z
2011-08-08T08:41:01Z
http://eprints.imtlucca.it/id/eprint/762
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/762
2011-08-02T10:15:15Z
Optimization issues in modeling IPMC devices
Claudia Bonomo
Gabriella Dellino
gabriella.dellino@imtlucca.it
Luigi Fortuna
Pietro Giannone
Salvatore Graziani
Paolo Lino
Carlo Meloni
Alessandro Rizzo
2011-08-02T09:22:36Z
2011-08-04T07:30:21Z
http://eprints.imtlucca.it/id/eprint/760
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/760
2011-08-02T09:22:36Z
Multidisciplinary design optimization of a pressure controller for CNG injection systems
In this work, the multidisciplinary design optimization (MDO) methodology is applied to a case arising in automotive engineering, in which the design optimization of the mechanical and control features of a system is carried out simultaneously with an evolutionary-algorithm-based method. The system under study is the regulator of the injection pressure of an innovative Common Rail system for Compressed Natural Gas (CNG) automotive engines, whose engineering design involves several practical and numerical difficulties. To tackle such a situation, this paper proposes a constrained multi-objective optimization method that pursues Pareto-optimality on the basis of fitness functions capturing domain-specific design aspects as well as static and dynamic objectives. The proposed scheme provides ways to incorporate the designer's specific knowledge, from interactive actions to simulation-based analysis or surrogate-assisted evolution. The computational experiments show the ability of the method to find a relevant and satisfactory set of efficient solutions.
Gabriella Dellino
gabriella.dellino@imtlucca.it
Paolo Lino
Carlo Meloni
Alessandro Rizzo
2011-08-02T09:13:43Z
2011-08-04T07:30:21Z
http://eprints.imtlucca.it/id/eprint/759
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/759
2011-08-02T09:13:43Z
Performance evaluation of the evolution control in design optimization assisted by Kriging surrogates
Gabriella Dellino
gabriella.dellino@imtlucca.it
Carlo Meloni
Paolo Lino
Alessandro Rizzo
2011-08-01T10:24:59Z
2011-08-04T07:30:21Z
http://eprints.imtlucca.it/id/eprint/747
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/747
2011-08-01T10:24:59Z
Kriging metamodel management in the design optimization of a CNG injection system
This paper deals with the use of Kriging metamodels in multi-objective engineering design optimization. The metamodel management issue to find the tradeoff between accuracy and efficiency is addressed. A comparative analysis of different strategies is conducted for a case study devoted to the design of a component of the injection system for Compressed Natural Gas (CNG) engines. The computational results are reported and analyzed for a performance assessment conducted with a data envelopment analysis approach.
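A minimal Kriging metamodel can be sketched as Gaussian-process interpolation with a Gaussian correlation function; real Kriging tools also estimate the correlation parameters, which are fixed here for brevity, and all names are illustrative:

```python
import numpy as np

def kriging_fit_predict(X, y, Xnew, theta=1.0, nugget=1e-8):
    """Minimal Kriging (Gaussian-process) metamodel sketch.
    Correlation R_ij = exp(-theta * ||x_i - x_j||^2); predictor is
    r(x)' R^{-1} y (simple kriging with zero trend). X, Xnew: 2-D
    arrays of sample points; y: observed responses."""
    X, Xnew = np.atleast_2d(X), np.atleast_2d(Xnew)
    y = np.ravel(y)

    def corr(A, B):
        # pairwise squared distances, then Gaussian correlation
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-theta * d2)

    R = corr(X, X) + nugget * np.eye(len(X))   # nugget for conditioning
    w = np.linalg.solve(R, y)
    return corr(Xnew, X) @ w
```

Because the correlation matrix is interpolatory, the metamodel reproduces the expensive simulator at the sampled designs and smoothly interpolates in between, which is what makes it attractive as a cheap surrogate in design loops.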
Gabriella Dellino
gabriella.dellino@imtlucca.it
Paolo Lino
Carlo Meloni
Alessandro Rizzo
2011-07-29T10:53:35Z
2012-07-09T09:26:08Z
http://eprints.imtlucca.it/id/eprint/744
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/744
2011-07-29T10:53:35Z
Decentralized hierarchical multi-rate control of constrained linear systems
This paper proposes a decentralized hierarchical multi-rate control scheme for large-scale dynamically-coupled linear systems subject to linear constraints on input and state variables. At the lower level, a set of decentralized and independent linear controllers stabilizes the process, without taking care of the constraints. Each controller receives reference signals from its own upper-level controller, that runs at a lower sampling frequency. By optimally constraining the magnitude and rate of variation of the reference signals to each lower-level controller, quantitative criteria are provided for selecting the ratio between the sampling rates of the upper and lower layers of control at each location, in a way that closed-loop stability is preserved and the fulfillment of the prescribed constraints is guaranteed.
Alberto Bemporad
alberto.bemporad@imtlucca.it
Davide Barcelli
Giulio Ripaccioli
2011-07-29T10:53:29Z
2012-07-09T09:25:27Z
http://eprints.imtlucca.it/id/eprint/743
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/743
2011-07-29T10:53:29Z
Decentralized hybrid model predictive control of a formation of unmanned aerial vehicles
This paper proposes a hierarchical MPC strategy for autonomous navigation of a formation of unmanned aerial vehicles (UAVs) of quadcopter type under obstacle and collision avoidance constraints. Each vehicle is stabilized by a lower-level local linear MPC controller around a desired position, that is generated, at a slower sampling rate, by a hybrid MPC controller per vehicle. Such an upper control layer is based on a hybrid dynamical model of the UAV in closed-loop with its linear MPC controller and of its surrounding environment (i.e., the other UAVs and obstacles). The resulting decentralized scheme controls the formation based on a leader-follower approach. The performance of the hierarchical control scheme is assessed through simulations and comparisons with other path planning strategies, showing the ability of linear MPC to handle the strong couplings among the dynamical variables of each quadcopter under motor voltage and angle/position constraints, and the flexibility of the decentralized hybrid MPC scheme in planning the desired paths on-line.
Alberto Bemporad
alberto.bemporad@imtlucca.it
Claudio Rocchi
2011-07-29T10:53:22Z
2012-07-06T13:14:36Z
http://eprints.imtlucca.it/id/eprint/742
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/742
2011-07-29T10:53:22Z
Explicit hybrid model predictive control: discontinuous piecewise-affine approximation and FPGA implementation
In this paper we introduce a digital architecture implementing the explicit solution of a switched model predictive control problem. Given a mixed-logic dynamical system, we derive an explicit controller in the form of a possibly discontinuous piecewise-affine function. This function is then approximated by resorting to piecewise-affine simplicial functions, which can be implemented on a circuit by extending the representation capabilities of a previously proposed architecture to evaluate the control action. The architecture has been implemented on FPGA and validated on a benchmark example related to an air conditioning system.
Tomaso Poggi
Sergio Trimboli
Alberto Bemporad
alberto.bemporad@imtlucca.it
Marco Storace
2011-07-29T10:52:40Z
2011-11-17T11:01:57Z
http://eprints.imtlucca.it/id/eprint/738
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/738
2011-07-29T10:52:40Z
Stochastic model predictive control with driver behavior learning for improved powertrain control
In this paper we advocate the use of stochastic model predictive control (SMPC) for improving the performance of powertrain control algorithms, by optimally controlling the complex system composed of driver and vehicle. While the powertrain is modeled as the deterministic component of the dynamics, the driver behavior is represented as a stochastic system which affects the vehicle dynamics. Since stochastic MPC is based on online numerical optimization, the driver model can be learned online, hence allowing the control algorithm to adapt to different drivers and drivers' behaviors. The proposed technique is evaluated in two applications: adaptive cruise control, where the driver behavioral model is used to predict the leading vehicle dynamics, and series hybrid electric vehicle (SHEV) energy management, where the driver model is used to predict the future power requests.
M. Bichi
Giulio Ripaccioli
Stefano Di Cairano
Daniele Bernardini
daniele.bernardini@imtlucca.it
Alberto Bemporad
alberto.bemporad@imtlucca.it
Ilya Kolmanovsky
2011-07-29T10:52:30Z
2011-08-05T12:20:39Z
http://eprints.imtlucca.it/id/eprint/736
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/736
2011-07-29T10:52:30Z
Hierarchical multi-rate control design for constrained linear systems
This paper proposes a hierarchical multi-rate control design approach to linear systems subject to linear constraints on input and output variables. At the lower level, a linear controller stabilizes the open-loop process without considering the constraints. A higher-level controller commands reference signals at a lower sampling frequency so as to enforce linear constraints on the variables of the process. By optimally constraining the magnitude and the rate of variation of the reference signals applied to the lower control layer, we provide quantitative criteria for selecting the ratio between the sampling rates of the upper and lower layers to preserve closed-loop stability without violating the prescribed constraints.
Davide Barcelli
Alberto Bemporad
alberto.bemporad@imtlucca.it
Giulio Ripaccioli
2011-07-29T10:49:00Z
2011-08-04T07:29:05Z
http://eprints.imtlucca.it/id/eprint/737
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/737
2011-07-29T10:49:00Z
Synthesis of stabilizing model predictive controllers via canonical piecewise affine approximations
This paper proposes the use of canonical piecewise affine (PWA) functions for the approximation of linear MPC controllers over a regular simplicial partition of a given set of states of interest. Analysis tools based on the construction of PWA Lyapunov functions are provided for certifying the asymptotic stability of the resulting closed-loop system. The main advantage of the proposed controller synthesis approach is that the resulting stabilizing approximate MPC controller can be implemented on chip with sampling times in the order of tens of nanoseconds.
Alberto Bemporad
alberto.bemporad@imtlucca.it
Alberto Oliveri
Tomaso Poggi
Marco Storace
2011-07-29T10:18:42Z
2011-11-17T11:05:36Z
http://eprints.imtlucca.it/id/eprint/735
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/735
2011-07-29T10:18:42Z
Synthesis of networked switching linear decentralized controllers
This paper proposes an approach based on linear matrix inequalities for synthesizing a set of decentralized regulators for discrete-time linear systems subject to input and state constraints. Measurements and command signals are exchanged over a sensor/actuator network, in which some links are subject to packet dropout. The resulting closed-loop system is guaranteed to asymptotically reach the origin, even if every local actuator can exploit only a (possibly time-varying) subset of state measurements. A model of packet dropout based on a finite-state Markov chain is also considered to exploit available knowledge about the stochastic nature of the network. For such model, a set of decentralized switching linear controllers is synthesized that guarantees mean-square stability of the overall controlled process under packet dropout and soft input and state constraints.
Davide Barcelli
Daniele Bernardini
daniele.bernardini@imtlucca.it
Alberto Bemporad
alberto.bemporad@imtlucca.it
2011-07-29T10:18:25Z
2011-11-16T11:50:12Z
http://eprints.imtlucca.it/id/eprint/734
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/734
2011-07-29T10:18:25Z
A model predictive control approach for stochastic networked control systems
In this paper we present a stochastic model predictive control (SMPC) approach for networked control systems (NCSs) that are subject to time-varying sampling intervals and time-varying transmission delays. These network-induced uncertain parameters are assumed to be described by random processes, having a bounded support and an arbitrary continuous probability density function. Assuming that the controlled plant can be modeled as a linear system, we present a SMPC formulation based on scenario enumeration and quadratic programming that optimizes a stochastic performance index and provides closed-loop stability in the mean-square sense. Simulation results are shown to demonstrate the performance of the proposed approach.
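The scenario-enumeration idea can be illustrated on a one-step, single-input special case, where the expected quadratic cost over the enumerated disturbance scenarios has a closed-form minimizer; this is a sketch, not the paper's receding-horizon QP with mean-square stability guarantees, and all names are illustrative:

```python
import numpy as np

def smpc_one_step(A, B, x, scenarios, probs, r=1.0):
    """One-step scenario-enumeration sketch for x+ = A x + B u + w.
    Minimizes E[||x+||^2] + r*u^2 over the enumerated disturbance
    scenarios w_i with probabilities p_i. Setting the derivative to
    zero gives the closed form u* = -B'(A x + E[w]) / (B'B + r)."""
    A, B = np.atleast_2d(A), np.atleast_2d(B)
    x = np.atleast_1d(x)
    wbar = sum(p * np.atleast_1d(w) for p, w in zip(probs, scenarios))
    Bv = B.ravel()                      # single-input case
    return -float(Bv @ (A @ x + wbar)) / (float(Bv @ Bv) + r)
```

In the paper's setting the same expectation-over-scenarios cost is optimized over a horizon as a quadratic program, re-solved at each sampling instant.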
Daniele Bernardini
daniele.bernardini@imtlucca.it
M.C.F. Donkers
Alberto Bemporad
alberto.bemporad@imtlucca.it
W.P.M.H. Heemels
2011-07-29T10:18:09Z
2011-08-05T12:30:43Z
http://eprints.imtlucca.it/id/eprint/733
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/733
2011-07-29T10:18:09Z
Decentralized model predictive control of drinking water networks using an automatic subsystem decomposition approach
This paper proposes an automatic model decomposition approach for decentralized model predictive control (DMPC) of drinking water networks (DWNs). For a given DWN, the proposed algorithm partitions the network in a set of subnetworks by taking advantage of the topology of the network, of the information about the use of actuators, and of system management heuristics. The derived suboptimal DMPC strategy results in a hierarchical solution with a set of MPC controllers used for each partition. A comparative study between centralized MPC (CMPC) and DMPC approaches is described for the considered case study, which consists of an aggregate version of the Barcelona DWN. Results on several simulation scenarios show the effectiveness of the proposed DMPC approach in terms of reduced computational burden, obtained at the cost of an acceptable loss of performance.
Davide Barcelli
Carlos Ocampo-Martinez
Vicenç Puig Cayuela
Alberto Bemporad
alberto.bemporad@imtlucca.it
2011-07-28T09:52:57Z
2012-04-03T07:18:16Z
http://eprints.imtlucca.it/id/eprint/729
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/729
2011-07-28T09:52:57Z
Model predictive idle speed control: design, analysis, and experimental evaluation
Idle speed control is a landmark application of feedback control in automotive vehicles that continues to be of significant interest to automotive industry practitioners, since improved idle performance and robustness translate into better fuel economy, emissions and drivability. In this paper, we develop a model predictive control (MPC) strategy for regulating the engine speed to the idle speed set-point by actuating the electronic throttle and the spark timing. The MPC controller coordinates the two actuators according to a specified cost function, while explicitly taking into account constraints on the control and requirements on the acceptable engine speed range, e.g., to avoid engine stalls. Following a process proposed here for the implementation of MPC in automotive applications, an MPC controller is obtained with excellent performance and robustness as demonstrated in actual vehicle tests. In particular, the MPC controller performs better than an existing baseline controller in the vehicle, and is robust to changes in operating conditions and to different types of disturbances. It is also shown that the MPC computational complexity is well within the capability of a production electronic control unit and that the improved performance achieved by the MPC controller can translate into fuel economy improvements.
Stefano Di Cairano
Diana Yanakiev
Alberto Bemporad
alberto.bemporad@imtlucca.it
Ilya Kolmanovsky
Davor Hrovat
2011-07-28T09:52:43Z
2011-08-04T07:29:05Z
http://eprints.imtlucca.it/id/eprint/728
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/728
2011-07-28T09:52:43Z
Model-predictive control of discrete hybrid stochastic automata
This paper focuses on optimal and receding horizon control of a class of hybrid dynamical systems, called Discrete Hybrid Stochastic Automata (DHSA), whose discrete-state transitions depend on both deterministic and stochastic events. A finite-time optimal control approach “optimistically” determines the trajectory that provides the best tradeoff between tracking performance and the probability that the trajectory is actually executed, under possible chance constraints. The approach is also robustified, less optimistically, to ensure that the system satisfies a set of constraints for all possible realizations of the stochastic events, or alternatively for those having enough probability of realizing. Sufficient conditions for asymptotic convergence in probability are given for the receding-horizon implementation of the optimal control solution. The effectiveness of the suggested stochastic hybrid control techniques is shown on a case study in supply chain management.
Alberto Bemporad
alberto.bemporad@imtlucca.it
Stefano Di Cairano
2011-07-28T09:52:00Z
2012-07-06T12:30:20Z
http://eprints.imtlucca.it/id/eprint/730
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/730
2011-07-28T09:52:00Z
Ultra-fast stabilizing model predictive control via canonical piecewise affine approximations
This paper investigates the use of canonical piecewise affine (PWA) functions for approximation and fast implementation of linear MPC controllers. The control law is approximated in an optimal way over a regular simplicial partition of a given set of states of interest. The stability properties of the resulting closed-loop system are analyzed by constructing a suitable PWA Lyapunov function. The main advantage of the proposed approach to the implementation of MPC controllers is that the resulting stabilizing approximate MPC controller can be implemented on chip with sampling times in the order of tens of nanoseconds.
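On a one-dimensional simplicial partition, evaluating a canonical PWA function amounts to locating the active simplex (interval) and interpolating the stored vertex values via barycentric coordinates, which is the core operation such hardware architectures implement. A sketch with illustrative names:

```python
import numpy as np

def pwa_eval(breakpoints, values, x):
    """Evaluate a continuous piecewise-affine function given its values
    at the vertices of a 1-D simplicial partition (a sorted grid of
    breakpoints): locate the active interval, then interpolate with
    the barycentric coordinate. Illustrative sketch."""
    bp = np.asarray(breakpoints, float)
    v = np.asarray(values, float)
    i = np.clip(np.searchsorted(bp, x) - 1, 0, len(bp) - 2)
    lam = (x - bp[i]) / (bp[i + 1] - bp[i])   # barycentric coordinate
    return (1 - lam) * v[i] + lam * v[i + 1]
```

Since only a table lookup and one interpolation are needed per sample, this evaluation maps naturally to fixed-point circuits with nanosecond-scale latency, which is what enables the sampling times quoted above.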
Alberto Bemporad
alberto.bemporad@imtlucca.it
Alberto Oliveri
Tomaso Poggi
Marco Storace
2011-07-27T12:52:01Z
2012-01-09T13:41:47Z
http://eprints.imtlucca.it/id/eprint/726
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/726
2011-07-27T12:52:01Z
Energy-aware robust model predictive control based on noisy wireless sensors
Wireless sensor networks (WSNs) are becoming fundamental components of modern control systems due to their flexibility, ease of deployment and low cost. However, the energy-constrained nature of WSNs poses new issues in control design; in particular the discharge of batteries of sensor nodes, which is mainly due to radio communications, must be taken into account. In this paper we present a novel transmission strategy for communication between controller and sensors which is intended to minimize the data exchange over the wireless channel. Moreover, we propose an energy-aware control technique for constrained linear systems based on explicit model predictive control (MPC), providing closed-loop stability in the presence of disturbances. The presented control schemes are compared to traditional MPC techniques. The results show the effectiveness of the proposed energy-aware approach, which achieves a profitable trade-off between energy savings and closed-loop performance.
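A transmission strategy that reduces data exchange over the wireless channel can be sketched as a simple event-triggered policy: a node transmits only when its measurement deviates sufficiently from the last transmitted value. This is an illustrative policy, not the paper's exact strategy:

```python
def transmit_decisions(measurements, threshold):
    """Event-triggered transmission sketch: send a new measurement only
    when it differs from the last transmitted value by more than
    'threshold', trading estimation accuracy for radio energy.
    Returns the list of transmitted values."""
    sent, last = [], None
    for m in measurements:
        if last is None or abs(m - last) > threshold:
            sent.append(m)
            last = m
    return sent
```

The untransmitted samples appear to the controller as bounded uncertainty on the state, which is why the companion control design must be robust to disturbances of at most 'threshold' magnitude.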
Daniele Bernardini
daniele.bernardini@imtlucca.it
Alberto Bemporad
alberto.bemporad@imtlucca.it
2011-07-27T12:51:25Z
2011-08-05T11:12:37Z
http://eprints.imtlucca.it/id/eprint/725
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/725
2011-07-27T12:51:25Z
Decentralized model predictive control of dynamically coupled linear systems
This paper proposes a decentralized model predictive control (DMPC) scheme for large-scale dynamical processes subject to input constraints. The global model of the process is approximated as the decomposition of several (possibly overlapping) smaller models used for local predictions. The degree of decoupling among submodels is a tuning knob of the approach: the less coupled the submodels, the lighter the computational burden and the load for transmitting shared information, but also the lower the degree of cooperation among the decentralized controllers and hence the overall performance of the control system. Sufficient criteria for analyzing asymptotic closed-loop stability are provided for input constrained open-loop asymptotically stable systems and for unconstrained open-loop unstable systems, under possible intermittent lack of communication of measurement data between controllers. The DMPC approach is also extended to asymptotic tracking of output set-points and rejection of constant measured disturbances. The effectiveness of the approach is shown on a relatively large-scale simulation example on decentralized temperature control based on wireless sensor feedback.
Alessandro Alessio
Davide Barcelli
Alberto Bemporad
alberto.bemporad@imtlucca.it
2011-07-27T12:50:39Z
2011-08-04T07:29:05Z
http://eprints.imtlucca.it/id/eprint/724
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/724
2011-07-27T12:50:39Z
(edited by) Networked Control Systems
There has been a recent surge in research activities related to networked control of large-scale systems. These "cyber-physical" systems can be found throughout society, from industrial production plants, to water and energy distribution networks and transportation systems. They are characterized by tight coordination of a pervasive sensing infrastructure, distributed computing elements, and the physical world. Developed from work presented at the 3rd WIDE PhD School on Networked Control Systems held in Siena in July 2009, Networked Control Systems contains tutorial introductions to key research topics in the area of networked control.
Alberto Bemporad
alberto.bemporad@imtlucca.it
W.P.M.H. Heemels
Mikael Johansson
2011-07-27T10:59:59Z
2011-08-08T08:08:51Z
http://eprints.imtlucca.it/id/eprint/424
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/424
2011-07-27T10:59:59Z
Predictive control for hybrid systems and its application to process control
K Asano
Koji Tsuda
Alberto Bemporad
alberto.bemporad@imtlucca.it
Manfred Morari
2011-07-27T10:51:01Z
2011-08-04T07:29:08Z
http://eprints.imtlucca.it/id/eprint/558
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/558
2011-07-27T10:51:01Z
Approximate multiparametric convex programming
Alberto Bemporad
alberto.bemporad@imtlucca.it
Carlo Filippi
2011-07-27T10:14:44Z
2014-07-03T13:45:30Z
http://eprints.imtlucca.it/id/eprint/437
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/437
2011-07-27T10:14:44Z
Modeling and control of hybrid dynamical systems: the hybrid toolbox for MATLAB
Alberto Bemporad
alberto.bemporad@imtlucca.it
2011-07-27T09:42:01Z
2011-08-05T12:45:52Z
http://eprints.imtlucca.it/id/eprint/624
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/624
2011-07-27T09:42:01Z
Further switched systems
Mixed logical dynamical systems and linear complementarity systems are representations of switched systems, which under the conditions described here are equivalent to the model used in Chapter 4. They are particularly useful for model-predictive control. The equivalences of several hybrid system models show that different models, which are suitable for specific analysis and design problems and have been investigated in detail, cover the same class of hybrid systems. The analysis of the well-posedness of the models leads to conditions on the model equations under which a unique solution exists.
Alberto Bemporad
alberto.bemporad@imtlucca.it
M.Kanat Çamlıbel
W.P.M.H. Heemels
Arjan J. Van der Schaft
J.M. Schumacher
Bart De Schutter
2011-07-27T09:22:02Z
2014-07-16T12:56:08Z
http://eprints.imtlucca.it/id/eprint/620
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/620
2011-07-27T09:22:02Z
Constraint fulfilment in feedback control via predictive reference management
The problem of satisfying input and state-dependent inequality constraints in feedback control systems is addressed. The proposed solution is based on predicting the evolution of the constrained vector and, accordingly, selecting online the future reference based on both the current state and the desired set-point changes. The achievable performance is first investigated via simulations, and compared with that obtained via a receding horizon controller which uses an online mathematical programming solver. Finally, an analysis is carried out to establish the stability and offset-free properties of the method.
Alberto Bemporad
alberto.bemporad@imtlucca.it
Edoardo Mosca
2011-07-27T09:22:00Z
2014-01-28T09:18:20Z
http://eprints.imtlucca.it/id/eprint/621
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/621
2011-07-27T09:22:00Z
Constraint fulfilment in control systems via predictive reference management
The problem of satisfying input and state-dependent inequality constraints in feedback control systems is addressed. The proposed solution is based on predicting the evolution of the constrained vector and, accordingly, selecting online the future reference based on both the current state and the desired set-point changes. An analysis is presented to establish the stability and offset-free properties of the method when embodied in an LQ-regulated system. Finally, simulations are used to evaluate the achievable performance.
Alberto Bemporad
alberto.bemporad@imtlucca.it
Edoardo Mosca
2011-07-27T09:21:58Z
2014-01-28T09:15:20Z
http://eprints.imtlucca.it/id/eprint/464
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/464
2011-07-27T09:21:58Z
On the stabilizing property of SIORHC
It is shown that the stabilizing property of SIORHC (stabilizing I/O receding horizon control) holds for general stabilizable discrete-time linear plants, irrespective of whether the plant has poles at the origin.
Alberto Bemporad
alberto.bemporad@imtlucca.it
Luigi Chisci
Edoardo Mosca
2011-07-27T09:21:54Z
2014-07-16T13:00:07Z
http://eprints.imtlucca.it/id/eprint/622
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/622
2011-07-27T09:21:54Z
Reference management predictive control
Alberto Bemporad
alberto.bemporad@imtlucca.it
Edoardo Mosca
2011-07-27T09:20:47Z
2014-07-16T12:59:48Z
http://eprints.imtlucca.it/id/eprint/591
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/591
2011-07-27T09:20:47Z
Nonlinear predictive reference governor for constrained control systems
This paper presents a new methodology for solving control problems where hard constraints on the state and/or the inputs of the system are present. This is achieved by adding to the control architecture a command governor which prefilters the reference to be tracked, taking into account the current value of the state and aiming at optimizing a tracking performance index. The overall system is proved to be asymptotically stable, and feasibility is ensured by a weak condition on the initial state. Although the approach applies to both nonlinear and linear loops, a complete solution is developed for the latter. The resulting online computational burden turns out to be moderate, and the related operations are executable with current low-priced computing hardware.
Alberto Bemporad
alberto.bemporad@imtlucca.it
Edoardo Mosca
2011-07-27T09:20:41Z
2014-01-28T09:15:39Z
http://eprints.imtlucca.it/id/eprint/480
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/480
2011-07-27T09:20:41Z
Filtraggio predittivo del riferimento per il controllo di sistemi vincolati
Alberto Bemporad
alberto.bemporad@imtlucca.it
Edoardo Mosca
2011-07-27T09:20:39Z
2014-07-16T13:07:37Z
http://eprints.imtlucca.it/id/eprint/597
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/597
2011-07-27T09:20:39Z
Robust nonlinear reference filtering for constrained linear systems with uncertain impulse/step responses
A method based on conceptual tools of predictive control is described for solving tracking problems wherein pointwise-in-time input and/or state inequality constraints and model uncertainties are present. It consists of adding to a primal compensated system a nonlinear device called predictive reference filter (PRF) which manipulates the desired trajectory in order to fulfill the prescribed constraints. Provided that an admissibility condition on the initial state is satisfied, the control scheme is proved to fulfill the constraints and be asymptotically stable for all the systems whose impulse-response and step-response descriptions lie within given uncertainty ranges
Alberto Bemporad
alberto.bemporad@imtlucca.it
Edoardo Mosca
2011-07-27T09:20:37Z
2014-07-16T13:05:32Z
http://eprints.imtlucca.it/id/eprint/596
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/596
2011-07-27T09:20:37Z
Local incremental planning for a car-like robot navigating among obstacles
We present a local approach for planning the motion of a car-like robot navigating among obstacles, suitable for sensor-based implementation. The nonholonomic nature of the robot kinematics is explicitly taken into account. The strategy is to modify the output of a generic local holonomic planner, so as to provide commands that realize the desired motion in a least-squares sense. A feedback action tends to align the vehicle with the local force field. In order to avoid motion stops away from the desired goal, various force fields are considered and compared by simulation.
Alberto Bemporad
alberto.bemporad@imtlucca.it
Alessandro De Luca
Giuseppe Oriolo
2011-07-27T09:20:35Z
2014-07-16T13:04:04Z
http://eprints.imtlucca.it/id/eprint/594
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/594
2011-07-27T09:20:35Z
A predictive reference governor for constrained control systems
Alberto Bemporad
alberto.bemporad@imtlucca.it
Alessandro Casavola
Edoardo Mosca
2011-07-27T09:20:32Z
2014-07-16T13:03:47Z
http://eprints.imtlucca.it/id/eprint/595
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/595
2011-07-27T09:20:32Z
A nonlinear command governor for constrained control systems
Alberto Bemporad
alberto.bemporad@imtlucca.it
Alessandro Casavola
Edoardo Mosca
2011-07-27T09:20:28Z
2014-07-16T14:09:27Z
http://eprints.imtlucca.it/id/eprint/599
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/599
2011-07-27T09:20:28Z
On-line path parameterization for manipulators with input/state constraints
This paper addresses the problem of satisfying input/state constraints for robotic systems tracking a given geometric path. According to a prediction of the evolution of the robot from the current state, a discrete-time device called Path Governor (PG) generates on line a suitable time-parameterization of the path to be tracked, by solving at fixed intervals a constrained scalar optimization problem. Higher-level switching commands are also taken into account by simply associating a different optimization criterion to each mode of operation.
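The scalar optimization the path governor solves at each sample can be illustrated with a deliberately simple stand-in (the path, speed limit, and grid search below are our assumptions, not the paper's formulation): advance the path parameter as much as possible while the commanded Cartesian speed stays within its bound.

```python
import numpy as np

# Path-governor-flavoured sketch: a point tracks p(s) = (cos s, sin s).
# At each sample the governor picks the largest path-parameter advance ds
# whose commanded speed ||p(s+ds) - p(s)|| / dt respects v_max.
# All numbers here are illustrative.

dt, v_max = 0.1, 0.5
p = lambda s: np.array([np.cos(s), np.sin(s)])

s, speeds = 0.0, []
for _ in range(100):
    # constrained scalar optimization, solved here by a simple grid search
    ds = max(d for d in np.linspace(0.0, 0.2, 201)
             if np.linalg.norm(p(s + d) - p(s)) / dt <= v_max)
    speeds.append(np.linalg.norm(p(s + ds) - p(s)) / dt)
    s += ds

print(s, max(speeds))
```

The governor slows the parameterization exactly when the geometry would otherwise violate the speed constraint; in the paper the same scalar decision is driven by a prediction of the full robot dynamics.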
Alberto Bemporad
alberto.bemporad@imtlucca.it
Tzyh-Jong Tarn
2011-07-27T09:19:44Z
2014-07-16T14:00:50Z
http://eprints.imtlucca.it/id/eprint/601
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/601
2011-07-27T09:19:44Z
Predictive control via set-membership state estimation for constrained linear systems with disturbances
Alberto Bemporad
Andrea Garulli
2011-07-27T09:19:42Z
2014-07-16T13:59:41Z
http://eprints.imtlucca.it/id/eprint/602
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/602
2011-07-27T09:19:42Z
Wall-following controllers for sonar-based mobile robots
For mobile robots equipped with incremental encoders and one sonar sensor, this paper presents wall-following controllers that achieve global convergence, as well as the fulfillment of constraints on the orientation of the sonar and the velocities of the wheels. A sensor fusion approach for the estimation of the robot's coordinates is adopted by designing an extended Kalman filter that combines ultrasonic and odometric data.
Alberto Bemporad
alberto.bemporad@imtlucca.it
Mauro Di Marco
Alberto Tesi
2011-07-27T09:19:40Z
2014-07-16T13:56:49Z
http://eprints.imtlucca.it/id/eprint/598
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/598
2011-07-27T09:19:40Z
Ottimizzazione in linea del set-point per processi industriali soggetti a saturazioni o vincoli sullo stato
This paper addresses the problem of controlling industrial processes subject to constraints on the state and/or input variables. The proposed solution consists of modifying the reference set-point on line, when necessary, so as to guarantee process evolutions that satisfy the imposed constraints. The idea of modifying only the reference set-point has the advantage that it can be implemented without altering the structure of the existing inner control loops. The proposed solution achieves constraint satisfaction and good set-point tracking performance, while requiring only modest computing power that is widely available on low-cost commercial hardware.
Alberto Bemporad
alberto.bemporad@imtlucca.it
Alessandro Casavola
Edoardo Mosca
2011-07-27T09:19:38Z
2014-07-16T13:56:27Z
http://eprints.imtlucca.it/id/eprint/626
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/626
2011-07-27T09:19:38Z
Reference governors: on-line set-point optimization techniques for constraint fulfillment
This dissertation presents a control technique to cope with set-point tracking problems in the presence of input and/or state constraints. The main idea consists of feeding to a conventional controller artificial set-points, which are calculated in real time by manipulating the desired reference trajectory. For this reason, the resulting control tool is called reference governor (RG). Set-point manipulation is performed on-line through an optimization procedure, which attempts to minimize a performance index evaluated by predicting the future evolution of the system. The RG is a high-level intelligent control module which supervises conventional controller operation, by "smoothing out" the reference trajectory when abrupt set-point changes would lead to constraint violation. The proposed control scheme is computationally light, easily implementable on low-cost hardware, and general enough to cope systematically with different constrained tracking problems. We develop here the theory and present simulation results of reference governors for linear, nonlinear, uncertain, robotic, and teleoperated control systems.
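A minimal sketch of the reference-governor loop, under our own assumptions (an oscillatory precompensated second-order system, a single output constraint, and a grid search over constant virtual references in place of the thesis's optimization):

```python
import numpy as np

# Reference-governor sketch: the precompensated loop x+ = A x + B g, y = C x
# must satisfy y <= 1 at all times. Each step, the governor applies the
# largest constant virtual reference g <= r whose predicted trajectory
# remains feasible over N steps. System and numbers are illustrative.

A = np.array([[1.4, -0.85], [1.0, 0.0]])   # underdamped closed loop
B = np.array([1.0, 0.0])
C = np.array([0.45, 0.0])                  # unit dc gain: y_ss = g
r, N = 1.0, 60                             # desired set-point, horizon

def feasible(x, g):
    # predict N steps under constant reference g; check y <= 1 throughout
    for _ in range(N):
        x = A @ x + B * g
        if C @ x > 1.0:
            return False
    return True

x, ys = np.zeros(2), []
for _ in range(200):
    # largest admissible g on a grid (bisection would also work)
    g = max((gc for gc in np.linspace(0.0, r, 101) if feasible(x, gc)),
            default=0.0)
    x = A @ x + B * g
    ys.append(C @ x)

print(max(ys), ys[-1])
```

The governor initially holds the reference back because a full step would overshoot the constraint, then releases it gradually as the transient dies out: exactly the "smoothing out" behavior the abstract describes.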
Alberto Bemporad
alberto.bemporad@imtlucca.it
2011-07-27T09:19:36Z
2014-07-16T13:56:02Z
http://eprints.imtlucca.it/id/eprint/600
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/600
2011-07-27T09:19:36Z
Control of constrained nonlinear systems via reference management
For a broad class of nonlinear continuous-time systems this paper addresses the problem of satisfying input and/or state hard constraints. The approach consists of adding a reference governor to a primal compensated nonlinear system. This is a predictive discrete-time device which, taking into account the current value of the state, filters the desired reference trajectory in such a way that a nonlinear primal compensated control system can operate in a stable way with satisfactory tracking performance and no constraint violation. The resulting hybrid system is proved to fulfil the constraints, as well as stability and tracking requirements, and the related computational burden turns out to be moderate and executable with current computing hardware
Alberto Bemporad
alberto.bemporad@imtlucca.it
2011-07-27T09:19:34Z
2014-07-16T13:52:23Z
http://eprints.imtlucca.it/id/eprint/445
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/445
2011-07-27T09:19:34Z
Nonlinear control of constrained linear systems via predictive reference management
A method based on conceptual tools of predictive control is described for solving set-point tracking problems wherein pointwise-in-time input and/or state inequality constraints are present. It consists of adding to a primal compensated system a nonlinear device, called command governor (CG), whose action is based on the current state, set-point, and prescribed constraints. The CG selects at any time a virtual sequence among a family of linearly parameterized command sequences, by solving a convex constrained quadratic optimization problem, and feeds the primal system according to a receding horizon control philosophy. The overall system is proved to fulfill the constraints, be asymptotically stable, and exhibit an offset-free tracking behavior, provided that an admissibility condition on the initial state is satisfied. Though the CG can be tailored for the application at hand by appropriately choosing the available design knobs, the required online computational load for the usual case of affine constraints is well tempered by the related relatively simple convex quadratic programming problem
Alberto Bemporad
alberto.bemporad@imtlucca.it
Alessandro Casavola
Edoardo Mosca
2011-07-27T09:19:32Z
2014-07-16T14:14:51Z
http://eprints.imtlucca.it/id/eprint/605
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/605
2011-07-27T09:19:32Z
Reducing conservativeness in predictive control of constrained systems with disturbances
Predictive controllers which are able to guarantee constraint fulfilment in the presence of input disturbances, typically based on min-max formulations, often suffer excessive conservativeness. One of the main reasons for this is that the control action is based on the open-loop prediction of the evolution of the system, because the uncertainty due to the disturbance grows as time proceeds on the prediction horizon. On the other hand, such an effect can be moderated by adopting a closed-loop prediction. In this paper, closed-loop prediction is achieved by including a free feedback matrix gain in the set of optimization variables. This allows one to balance computational burden and reduction of conservativeness
Alberto Bemporad
alberto.bemporad@imtlucca.it
2011-07-27T09:19:30Z
2014-07-16T14:13:47Z
http://eprints.imtlucca.it/id/eprint/604
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/604
2011-07-27T09:19:30Z
Predictive control of teleoperated constrained systems with unbounded communication delays
We present a control technique which allows the teleoperation of systems subject to input/state constraints through transmission channels with unbounded time-delays, such as Internet TCP/IP connections. The main idea is based on the fact that predictive controllers provide, as a by-product, command sequences which can be executed as emergency maneuvers whenever the communication channel is broken by excessive time-delays. We show how this idea can be exploited by equipping the predictive controller with additional control logic which enables synchronization between the plant, the predictive controller, and the human operator.
Alberto Bemporad
alberto.bemporad@imtlucca.it
2011-07-27T09:19:26Z
2014-07-16T14:11:36Z
http://eprints.imtlucca.it/id/eprint/465
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/465
2011-07-27T09:19:26Z
Fulfilling hard constraints in uncertain linear systems by reference managing
A method based on conceptual tools of predictive control is described for tackling tracking problems of uncertain linear systems wherein pointwise-in-time input and/or state inequality constraints are present. The method consists of adding to a primal compensated system a nonlinear device called predictive reference filter which manipulates the desired reference in order to fulfill the prescribed constraints. Provided that an admissibility condition on the initial state is satisfied, the control scheme is proved to fulfill the constraints, as well as stability and set-point tracking requirements, for all systems whose impulse/step responses lie within given uncertainty ranges.
Alberto Bemporad
Edoardo Mosca
2011-07-27T09:18:23Z
2014-07-16T14:11:17Z
http://eprints.imtlucca.it/id/eprint/481
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/481
2011-07-27T09:18:23Z
A predictive reference governor for constrained control systems
A method based on conceptual tools of predictive control is described for solving tracking problems wherein pointwise-in-time input and/or state inequality constraints are present. It consists of adding to a primal compensated system a nonlinear device called reference governor (RG) whose action is based on the current state, set-point, and prescribed constraints. The aim of the RG device is that of modifying, when necessary, the reference in such a way that the constraints are enforced and the primal compensated system maintains its linear behavior. The RG action is computed on-line by solving, at each sampling time, a constrained quadratic programming problem that usually requires low computational times also for systems of relatively high order. The overall system is proved to fulfill the constraints, be asymptotically stable, and exhibit an offset-free tracking behaviour, provided that an admissibility condition on the initial state is satisfied.
Alberto Bemporad
alberto.bemporad@imtlucca.it
Alessandro Casavola
Edoardo Mosca
2011-07-27T09:18:20Z
2014-07-16T14:10:13Z
http://eprints.imtlucca.it/id/eprint/467
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/467
2011-07-27T09:18:20Z
A predictive controller with artificial Lyapunov function for linear systems with input/state constraints
This paper copes with the problem of satisfying input and/or state hard constraints in set-point tracking problems. Stability is guaranteed by synthesizing a Lyapunov quadratic function for the system, and by imposing that the terminal state lies within a level set of the function. Procedures to maximize the volume of such an ellipsoidal set are provided, and interior-point methods to solve the on-line optimization are considered.
Alberto Bemporad
alberto.bemporad@imtlucca.it
2011-07-27T09:18:17Z
2014-07-16T14:27:00Z
http://eprints.imtlucca.it/id/eprint/500
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/500
2011-07-27T09:18:17Z
Verification of hybrid systems via mathematical programming
This paper proposes a novel approach to the verification of hybrid systems based on linear and mixed-integer linear programming. Models are described using the Mixed Logical Dynamical (MLD) formalism introduced in [5]. The proposed technique is demonstrated on a verification case study for an automotive suspension system.
Alberto Bemporad
alberto.bemporad@imtlucca.it
Manfred Morari
2011-07-27T09:18:15Z
2014-07-16T14:26:41Z
http://eprints.imtlucca.it/id/eprint/518
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/518
2011-07-27T09:18:15Z
Robust model predictive control: a survey
This paper gives an overview of robustness in Model Predictive Control (MPC). After reviewing the basic concepts of MPC, we survey the uncertainty descriptions considered in the MPC literature, and the techniques proposed for robust constraint handling, stability, and performance. The key concept of “closed-loop prediction” is discussed at length. The paper concludes with some comments on future research directions.
Alberto Bemporad
alberto.bemporad@imtlucca.it
Manfred Morari
2011-07-27T09:18:12Z
2011-08-08T08:26:41Z
http://eprints.imtlucca.it/id/eprint/608
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/608
2011-07-27T09:18:12Z
A framework for control, fault detection, state estimation and verification of hybrid systems
This paper presents a modeling formalism for hybrid systems which allows one to formulate and solve several practical problems, such as control, formal verification, state estimation, and fault detection. As an extension to previous work, we report a technique that allows one to reduce the number of auxiliary binary variables in the modeling phase.
Domenico Mignone
Alberto Bemporad
alberto.bemporad@imtlucca.it
Manfred Morari
2011-07-27T09:18:11Z
2014-07-16T14:26:17Z
http://eprints.imtlucca.it/id/eprint/606
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/606
2011-07-27T09:18:11Z
An efficient branch and bound algorithm for state estimation and control of hybrid systems
This paper presents a new Branch and Bound tree exploring strategy for solving Mixed Integer Quadratic Programs (MIQP) involving time evolutions of linear hybrid systems. In particular, we refer to the Mixed Logical Dynamical (MLD) models introduced by Bemporad and Morari, 1999, where the hybrid system is described by linear equations/inequalities involving continuous and integer variables. For the optimizations required by the controller synthesis and state estimation of MLD systems, the proposed algorithm reduces the average number of node explorations during the search of a global minimum. It also provides good local minima after a short number of steps of the Branch and Bound procedure.
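The branch-and-bound mechanics can be shown on a deliberately tiny separable MIQP of our own choosing (the paper's MLD problems have coupled quadratics, but the tree search is the same idea): each node solves the box relaxation in closed form to get a lower bound, prunes against the incumbent, and branches on the most fractional free variable.

```python
import itertools
import numpy as np

# Branch-and-bound sketch for: minimize sum_i (x_i - t_i)^2 over binary x.
# The toy problem data below are illustrative assumptions.

t = np.array([0.2, 0.9, 0.55, 0.4])
n = len(t)

def relax(fixed):
    # closed-form [0,1]-box relaxation: free variables clipped to [0,1]
    x = np.array([fixed.get(i, min(max(ti, 0.0), 1.0))
                  for i, ti in enumerate(t)])
    return float(((x - t) ** 2).sum()), x

best_cost, best_x, nodes = np.inf, None, 0
stack = [dict()]
while stack:
    fixed = stack.pop()
    nodes += 1
    lb, x = relax(fixed)
    if lb >= best_cost:
        continue                            # prune: cannot beat the incumbent
    free = [i for i in range(n) if i not in fixed]
    if not free:
        best_cost, best_x = lb, x           # integer-feasible leaf
        continue
    i = max(free, key=lambda j: min(x[j], 1.0 - x[j]))  # most fractional
    stack += [{**fixed, i: 0.0}, {**fixed, i: 1.0}]

# sanity check against exhaustive enumeration of all 2^n binaries
brute = min(float(((np.array(b) - t) ** 2).sum())
            for b in itertools.product([0.0, 1.0], repeat=n))
print(best_cost, nodes)
```

The exploration-order refinements the paper proposes aim precisely at shrinking `nodes` (and at finding good incumbents early) on the much larger MIQPs arising from MLD control and estimation.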
Alberto Bemporad
alberto.bemporad@imtlucca.it
Domenico Mignone
Manfred Morari
2011-07-27T09:18:08Z
2014-07-16T14:26:00Z
http://eprints.imtlucca.it/id/eprint/609
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/609
2011-07-27T09:18:08Z
Moving horizon estimation for hybrid systems and fault detection
An approach for fault detection and state estimation of hybrid systems is presented. The method relies on the modeling framework for hybrid systems introduced by Bemporad and Morari (1999). This framework considers interacting propositional logic, automata, continuous dynamics and constraints. The proposed approach is illustrated by considering the fault detection problem of the three-tank benchmark system
Alberto Bemporad
alberto.bemporad@imtlucca.it
Domenico Mignone
Manfred Morari
2011-07-27T09:18:05Z
2014-07-16T14:22:27Z
http://eprints.imtlucca.it/id/eprint/611
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/611
2011-07-27T09:18:05Z
Observability and controllability of piecewise affine and hybrid systems
We prove, in a constructive way, the equivalence between hybrid and piecewise affine systems. By focusing our investigation on the latter class, we show through counter-examples that observability and controllability properties cannot be easily deduced from those of the component linear subsystems. Instead, we propose practical numerical tests based on mixed-integer linear programming.
Alberto Bemporad
alberto.bemporad@imtlucca.it
Giancarlo Ferrari-Trecate
Manfred Morari
2011-07-27T09:18:01Z
2014-07-16T14:20:22Z
http://eprints.imtlucca.it/id/eprint/447
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/447
2011-07-27T09:18:01Z
Control of systems integrating logic, dynamics, and constraints
This paper proposes a framework for modeling and controlling systems described by interdependent physical laws, logic rules, and operating constraints, denoted as mixed logical dynamical (MLD) systems. These are described by linear dynamic equations subject to linear inequalities involving real and integer variables. MLD systems include linear hybrid systems, finite state machines, some classes of discrete event systems, constrained linear systems, and nonlinear systems which can be approximated by piecewise linear functions. A predictive control scheme is proposed which is able to stabilize MLD systems on desired reference trajectories while fulfilling operating constraints, and possibly take into account previous qualitative knowledge in the form of heuristic rules. Due to the presence of integer variables, the resulting on-line optimization procedures are solved through mixed integer quadratic programming (MIQP), for which efficient solvers have been recently developed. Some examples and a simulation case study on a complex gas supply system are reported.
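The core MLD trick, mixing linear dynamics with logic through integer variables, can be checked numerically on a one-dimensional piecewise affine system of our own (the big-M constants and the example are ours, not the paper's): a binary delta encodes the active region, an auxiliary variable z = delta*x linearizes the product, and the update becomes a single linear equation.

```python
import numpy as np

# MLD encoding sketch for the PWA system (example data are illustrative):
#   x+ = 0.8 x + u   if x >= 0,     x+ = -0.8 x + u   otherwise.
# With delta = [x >= 0] and z = delta*x, the update is x+ = 1.6 z - 0.8 x + u,
# and delta, z are tied to x by big-M linear inequalities.

M = 10.0  # big-M bound, valid while |x| <= M

def mld_constraints_hold(x, delta, z):
    # [delta = 1] <-> [x >= 0], plus the z = delta*x linearization
    return (x <= M * delta + 1e-9 and x >= -M * (1 - delta) - 1e-9 and
            abs(z) <= M * delta + 1e-9 and abs(z - x) <= M * (1 - delta) + 1e-9)

def pwa_step(x, u):
    return 0.8 * x + u if x >= 0 else -0.8 * x + u

rng = np.random.default_rng(0)
for _ in range(1000):
    x, u = rng.uniform(-5, 5, size=2)
    delta = 1.0 if x >= 0 else 0.0
    z = delta * x
    assert mld_constraints_hold(x, delta, z)
    assert abs((1.6 * z - 0.8 * x + u) - pwa_step(x, u)) < 1e-12
print("MLD encoding matches the PWA dynamics on all samples")
```

Because everything is now linear in (x, u, delta, z), a predictive controller over this model reduces to the MIQP mentioned in the abstract.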
Alberto Bemporad
alberto.bemporad@imtlucca.it
Manfred Morari
2011-07-27T09:16:44Z
2014-07-16T14:19:56Z
http://eprints.imtlucca.it/id/eprint/496
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/496
2011-07-27T09:16:44Z
Predictive path parameterization for constrained robot control
For robotic systems tracking a given geometric path, the paper addresses the problem of satisfying input and state constraints. According to a prediction of the evolution of the robot from the current state, a discrete-time device called a path governor generates online a suitable time-parameterization of the path to be tracked, by solving at fixed intervals a constrained scalar look-ahead optimization problem. Higher level switching commands are also taken into account by simply associating a different optimization criterion to each mode of operation. Experimental results are reported for a three-degree-of-freedom PUMA 560 manipulator subject to absolute position error, Cartesian velocity, and motor voltage constraints
Alberto Bemporad
alberto.bemporad@imtlucca.it
Tzyh-Jong Tarn
Ning Xi
2011-07-27T09:16:40Z
2014-07-17T12:22:02Z
http://eprints.imtlucca.it/id/eprint/574
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/574
2011-07-27T09:16:40Z
Performance analysis of piecewise linear systems and model predictive control systems
Bemporad and Morari (1999) provided a tool for obtaining the explicit solution of constrained model predictive control (MPC) problems by showing that the control law is a continuous piecewise affine (PWA) function of the state vector. Therefore, the feedback interconnection between the MPC controller and a linear system, or a PWA system (e.g., a PWA approximation of a nonlinear system), is a PWA system. For discrete-time PWA and hybrid systems, the present authors (2000) presented an algorithm for verification/reachability analysis. In this paper, we formulate the performance analysis problem of closed-loop PWA systems (including MPC feedback loops where the prediction model and the plant model could be different) as a reachability analysis problem, and use our algorithm to obtain a tool for characterizing (i) the set of states for which the evolution is feasible, (ii) the domain of stability, and (iii) the performance of the closed-loop system.
Alberto Bemporad
alberto.bemporad@imtlucca.it
Fabio Danilo Torrisi
Manfred Morari
2011-07-27T09:16:38Z
2014-07-17T12:21:35Z
http://eprints.imtlucca.it/id/eprint/499
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/499
2011-07-27T09:16:38Z
Optimization-based verification and stability characterization of piecewise affine and hybrid systems
In this paper, we formulate the problem of characterizing the stability of a piecewise affine (PWA) system as a verification problem. The basic idea is to take the whole of ℝ^n as the set of initial conditions, and check that all the trajectories go to the origin. More precisely, we test for semi-global stability by restricting the set of initial conditions to an (arbitrarily large) bounded set X(0), and label as “asymptotically stable in T steps” the trajectories that enter an invariant set around the origin within a finite time T, or as “unstable in T steps” the trajectories which enter a set X_inst of (very large) states. Subsets of X(0) leading to none of the two previous cases are labeled as “non-classifiable in T steps”. The domain of asymptotic stability in T steps is a subset of the domain of attraction of an equilibrium point, and has the practical meaning of collecting the initial conditions from which the settling time to a specified set around the origin is smaller than T. In addition, it can be computed algorithmically in finite time. Such an algorithm requires the computation of reach sets, in a similar fashion to what has been proposed for the verification of hybrid systems. In this paper we present a substantial extension of the verification algorithm presented in [6] for the stability characterization of PWA systems, based on linear and mixed-integer linear programming. As a result, given a set of initial conditions we are able to determine its partition into subsets of trajectories which are asymptotically stable, unstable, or non-classifiable in T steps.
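The three-way labeling can be illustrated with a brute-force simulation stand-in for the paper's reach-set computation (the one-dimensional PWA map, the ball, and the box below are our illustrative choices):

```python
# Finite-time stability labeling: mark an initial state "stable in T steps"
# if its trajectory enters a small invariant ball, "unstable in T steps" if
# it leaves a large box, otherwise "non-classifiable in T steps".
# Example map and thresholds are illustrative assumptions.

def pwa(x):
    # contracting near the origin, expanding outside |x| > 1
    return 0.5 * x if abs(x) <= 1.0 else 1.2 * x

def label(x0, T=50, ball=1e-3, box=1e3):
    x = x0
    for _ in range(T):
        if abs(x) <= ball:
            return "stable"
        if abs(x) >= box:
            return "unstable"
        x = pwa(x)
    return "non-classifiable"

labels = {x0: label(x0) for x0 in (-2.0, -0.5, 0.0, 0.5, 2.0)}
print(labels)
```

The paper replaces this pointwise simulation with set-valued reach computations over polyhedra, which is what lets it partition a whole set X(0) rather than a grid of samples.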
Alberto Bemporad
alberto.bemporad@imtlucca.it
Fabio Danilo Torrisi
Manfred Morari
2011-07-27T09:16:35Z
2014-07-17T12:22:57Z
http://eprints.imtlucca.it/id/eprint/482
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/482
2011-07-27T09:16:35Z
On-line optimization via off-line parametric optimization tools
In this paper, on-line optimization problems with a quadratic performance criteria and linear constraints are formulated as multi-parametric quadratic programs, where the input and state variables, corresponding to a plant, are treated as optimization variables and parameters, respectively. The solution of such problems is given by (i) a complete set of profiles of all the optimal inputs to the plant as a function of state variables, and (ii) the regions in the space of state variables where these functions remain optimal. It is shown that these profiles are linear and the corresponding regions are described by linear inequalities. An algorithm for obtaining these profiles and corresponding regions of optimality is also presented. The key feature of the proposed approach is that the on-line optimization problem is solved off-line via parametric programming techniques, hence, at each time interval (i) no optimization solver is called on-line, (ii) simple function evaluations are required for obtaining the optimal inputs to the plant for the current state of the plant.
Efstratios N. Pistikopoulos
Vivek Dua
Nikolaos A. Bozinis
Alberto Bemporad
alberto.bemporad@imtlucca.it
Manfred Morari
2011-07-27T09:16:33Z
2014-07-17T12:21:08Z
http://eprints.imtlucca.it/id/eprint/569
2011-07-27T09:16:33Z
The explicit solution of model predictive control via multiparametric quadratic programming
The control based on online optimization, popularly known as model predictive control (MPC), has long been recognized as the winning alternative for constrained systems. The main limitation of MPC is, however, its online computational complexity. For discrete-time linear time-invariant systems with constraints on inputs and states, we develop an algorithm to determine explicitly the state feedback control law associated with MPC, and show that it is piecewise linear and continuous. The controller inherits all the stability and performance properties of MPC, but the online computation is reduced to a simple linear function evaluation instead of the expensive quadratic program. The new technique is expected to enlarge the scope of applicability of MPC to small-size/fast-sampling applications which cannot be covered satisfactorily with anti-windup schemes
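The mechanism this abstract describes can be sketched in a few lines. The following is an illustrative toy (the partition, gains, and region bounds are invented for a scalar state, not taken from the paper): offline, the mp-QP solution yields polyhedral regions with one affine law each; online, the controller only locates the region containing the current state and evaluates that law.

```python
import numpy as np

# Illustrative sketch of an explicit MPC law: a piecewise affine function
# over a polyhedral partition of the state space. Regions and gains below
# are assumptions for a 1-D state, not the output of an actual mp-QP solve.
# Each region i is the polyhedron {x : H_i x <= k_i}; inside it the
# optimal input is u = F_i x + g_i.
regions = [
    (np.array([[1.0], [-1.0]]), np.array([1.0, 0.0]),   # 0 <= x <= 1
     np.array([[-0.5]]), np.array([0.0])),
    (np.array([[1.0], [-1.0]]), np.array([0.0, 1.0]),   # -1 <= x <= 0
     np.array([[-0.2]]), np.array([0.0])),
]

def explicit_mpc(x):
    """Online step: locate the region containing x, then a single affine
    function evaluation replaces solving a QP at each sampling time."""
    for H, k, F, g in regions:
        if np.all(H @ x <= k + 1e-9):
            return F @ x + g
    raise ValueError("state outside the explored part of the partition")
```

The linear scan over regions is the naive lookup; the binary-search-tree evaluation discussed in later entries of this list reduces that cost to logarithmic in the number of regions.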
Alberto Bemporad
alberto.bemporad@imtlucca.it
Manfred Morari
Vivek Dua
Efstratios N. Pistikopoulos
2011-07-27T09:16:32Z
2014-07-17T12:20:00Z
http://eprints.imtlucca.it/id/eprint/517
2011-07-27T09:16:32Z
Predictive control of constrained hybrid systems
This paper proposes a framework for modeling and controlling systems described by interdependent physical laws, logic rules, and operating constraints, denoted as Mixed Logical Dynamical (MLD) systems. These are described by linear dynamic equations subject to linear inequalities involving real and integer variables. MLD systems include linear hybrid systems, finite state machines, some classes of discrete event systems, constrained linear systems, and nonlinear systems which can be approximated by piecewise linear functions. A predictive control scheme is proposed which is able to stabilize MLD systems on desired reference trajectories while fulfilling operating constraints, and possibly take into account previous qualitative knowledge in the form of heuristic rules. Due to the presence of integer variables, the resulting on-line optimization procedures are solved through Mixed Integer Quadratic Programming (MIQP), for which efficient solvers have been recently developed. Some examples and a simulation case study on a complex gas supply system are reported.
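A minimal sketch of the MLD modeling idea, on a toy scalar system with assumed coefficients (not from the paper): the logic condition [x >= 0] is captured by a binary variable delta, and the dynamics switch accordingly. In the full MLD framework such if-then-else relations are converted into linear inequalities over real and integer variables, so that predictive control reduces to an MIQP.

```python
# Toy MLD-style model (coefficients a1, a2 are illustrative assumptions):
#   delta = 1  <=>  x >= 0
#   x(t+1) = (a1*delta + a2*(1 - delta)) * x(t) + u(t)
# Here the logic is simulated directly; an MLD formulation would encode
# the same relations as mixed-integer linear inequalities.

def mld_step(x, u, a1=0.5, a2=-0.5):
    delta = 1 if x >= 0 else 0
    return (a1 * delta + a2 * (1 - delta)) * x + u

# Open-loop simulation from x(0) = 1 with zero input:
x = 1.0
for _ in range(3):
    x = mld_step(x, 0.0)
```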
Alberto Bemporad
alberto.bemporad@imtlucca.it
Manfred Morari
2011-07-27T09:16:29Z
2014-07-17T12:22:25Z
http://eprints.imtlucca.it/id/eprint/613
2011-07-27T09:16:29Z
Multi-objective prioritisation and reconfiguration for the control of constrained hybrid systems
In many applications, the control objectives and constraints can be assigned a hierarchy of levels of priority. Often a disturbance or a fault occurs, resulting in some constraints or objectives being violated. Inadequate handling of this situation might result in component or even system-wide failures. This paper presents several methods for handling a large class of multi-objective formulations and prioritisations for model predictive control of hybrid systems, using the new mixed logic dynamical (MLD) framework. A new method, which does not require logic variables for prioritising soft constraints, is also presented
Eric C. Kerrigan
Alberto Bemporad
alberto.bemporad@imtlucca.it
Domenico Mignone
Manfred Morari
Jan M. Maciejowski
2011-07-27T09:16:26Z
2014-07-17T12:19:29Z
http://eprints.imtlucca.it/id/eprint/570
2011-07-27T09:16:26Z
Robust simulation of nonlinear electronic circuits
This paper proposes robust simulation of piecewise linear systems as a tool for the analysis of nonlinear electronic circuits. Rather than computing the evolution of a single trajectory, robust simulation computes the evolution from a set of initial conditions in the state space, for all forcing input signals within a given class. We describe here a tool to perform this analysis using mathematical programming. Among various applications, the tool makes it possible to estimate the domain of attraction of equilibria, and to determine whether some design specifications, expressed in terms of reachability of subsets of the state space, are met. A test of the tool on Chua’s circuit is presented.
Alberto Bemporad
alberto.bemporad@imtlucca.it
L. Giovanardi
Fabio Danilo Torrisi
2011-07-27T09:16:24Z
2014-07-17T12:19:02Z
http://eprints.imtlucca.it/id/eprint/575
2011-07-27T09:16:24Z
Performance driven reachability analysis for optimal scheduling and control of hybrid systems
We deal with the optimal control problem for piecewise linear and hybrid systems by using a computational approach based on performance-driven reachability analysis. The idea consists of coupling a reach-set exploration algorithm, essentially based on a repetitive use of linear programming, to a quadratic programming solver which selectively drives the exploration. In particular, an upper bound on the optimal cost is continually updated during the procedure, and used as a criterion to discern non-optimal evolutions and to prevent their exploration. The result is an efficient strategy of branch-and-bound nature, which is especially attractive for solving long-horizon hybrid optimal control and scheduling problems
Alberto Bemporad
alberto.bemporad@imtlucca.it
L. Giovanardi
Fabio Danilo Torrisi
2011-07-27T09:11:24Z
2014-07-17T12:17:57Z
http://eprints.imtlucca.it/id/eprint/612
2011-07-27T09:11:24Z
Model predictive control: a multi-parametric programming approach
In this paper, linear model predictive control problems are formulated as multi-parametric quadratic programs, where the control variables are treated as optimization variables and the state variables as parameters. It is shown that the control variables are affine functions of the state variables and each of these affine functions is valid in a certain polyhedral region in the space of state variables. An approach for deriving the explicit expressions of all the affine functions and their corresponding polyhedral regions is presented. The key advantage of this approach is that the control actions are computed off-line: the on-line computation simply reduces to a function evaluation problem.
Alberto Bemporad
alberto.bemporad@imtlucca.it
Nikolaos A. Bozinis
Vivek Dua
Manfred Morari
Efstratios N. Pistikopoulos
2011-07-27T09:11:21Z
2014-07-17T12:17:38Z
http://eprints.imtlucca.it/id/eprint/568
2011-07-27T09:11:21Z
Piecewise linear optimal controllers for hybrid systems
We propose a procedure for synthesizing piecewise linear optimal controllers for discrete-time hybrid systems. A stabilizing controller is obtained by designing a model predictive controller, which is based on the minimization of a weighted l1/∞-norm of the tracking error and the input trajectories over a finite horizon. The control law is obtained by solving a multiparametric mixed-integer linear program, which avoids solving mixed-integer programs online. As the resulting control law is piecewise affine, online computation is drastically reduced to a simple linear function evaluation
Alberto Bemporad
alberto.bemporad@imtlucca.it
Francesco Borrelli
Manfred Morari
2011-07-27T09:11:16Z
2014-07-17T12:16:28Z
http://eprints.imtlucca.it/id/eprint/573
2011-07-27T09:11:16Z
Explicit solution of LP-based model predictive control
For discrete-time linear time-invariant systems with constraints on inputs and states, we develop an algorithm to determine explicitly, as a function of the initial state, the solution to optimal control problems that can be formulated using a linear program. In particular, we focus our attention on a receding horizon control scheme where the performance criterion is based on a mixed 1/∞-norm. We show that the optimal control profile is a piecewise linear and continuous function of the initial state. Thus, when the optimal control problem is solved at each time step according to a moving horizon scheme, the online computation of the resultant model predictive controller is reduced to a simple linear function evaluation, instead of the typical expensive linear program required up to now. The proposed technique has both theoretical and practical advantages, and is attractive for a wide range of applications where low online computational complexity is a crucial requirement
Alberto Bemporad
alberto.bemporad@imtlucca.it
Francesco Borrelli
Manfred Morari
2011-07-27T09:11:14Z
2014-07-17T12:15:37Z
http://eprints.imtlucca.it/id/eprint/448
2011-07-27T09:11:14Z
Sonar-based wall-following control of mobile robots
In this paper, the wall-following problem for low-velocity mobile robots, equipped with incremental encoders and one sonar sensor, is considered. A robust observer-based controller, which takes into account explicit constraints on the orientation of the sonar sensor with respect to the wall and the velocity of the wheels, is designed. The feedback controller provides convergence and fulfillment of the constraints, once an estimate of the position of the mobile robot, is available. Such an estimate is given by an Extended Kalman Filter (EKF), which is designed via a sensor fusion approach merging the velocity signals from the encoders and the distance measurements from the sonar. Some experimental tests are reported to discuss the robustness of the control scheme.
Alberto Bemporad
alberto.bemporad@imtlucca.it
Mauro Di Marco
Alberto Tesi
2011-07-27T09:11:12Z
2014-07-17T12:15:01Z
http://eprints.imtlucca.it/id/eprint/446
2011-07-27T09:11:12Z
Observability and controllability of piecewise affine and hybrid systems
We prove, in a constructive way, the equivalence between piecewise affine systems and a broad class of hybrid systems described by interacting linear dynamics, automata, and propositional logic. By focusing our investigation on the former class, we show through counterexamples that observability and controllability properties cannot be easily deduced from those of the component linear subsystems. Instead, we propose practical numerical tests based on mixed-integer linear programming.
Alberto Bemporad
alberto.bemporad@imtlucca.it
Giancarlo Ferrari-Trecate
Manfred Morari
2011-07-27T09:11:10Z
2014-07-17T12:48:08Z
http://eprints.imtlucca.it/id/eprint/519
2011-07-27T09:11:10Z
A hybrid approach to traction control
In this paper we describe a hybrid model and an optimization-based control strategy for solving a traction control problem currently under investigation at Ford Research Laboratories. We show through simulations on a model and a realistic set of parameters that good and robust performance is achieved. Furthermore, the resulting optimal controller is a piecewise linear function of the measurements that can be implemented on low-cost control hardware.
Francesco Borrelli
Alberto Bemporad
alberto.bemporad@imtlucca.it
Michael Fodor
Davor Hrovat
2011-07-27T09:11:08Z
2014-07-17T12:49:10Z
http://eprints.imtlucca.it/id/eprint/582
2011-07-27T09:11:08Z
Discrete-time hybrid modeling and verification
For hybrid systems described by interconnections of linear dynamical systems and logic devices, we recently (A. Bemporad et al., 2000, 2001) proposed mixed logical-dynamical (MLD) systems and the language HYSDEL (HYbrid System DEscription Language) as a modeling tool. For MLD models, we developed a reachability analysis algorithm which combines forward reach-set computation and feasibility analysis of trajectories by linear and mixed-integer linear programming. In this paper, the versatility of the overall analysis tool is illustrated in the verification of an automotive cruise control system for a car with a robotized manual gear shift
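The forward reach-set step at the heart of this verification algorithm can be illustrated on a deliberately tiny example. The sketch below (system, gain, and input bounds are all assumptions) propagates an interval of states for a scalar linear system with a bounded input; the paper works instead with polyhedral sets and mixed-integer linear programming.

```python
# Toy forward reach-set propagation: x(t+1) = a*x(t) + u(t),
# u(t) in [u_min, u_max], with the state set kept as an interval.
# All numbers are illustrative, not from the verified cruise-control model.

def reach_step(lo, hi, a=0.9, u_min=-0.1, u_max=0.1):
    # image of [lo, hi] under x -> a*x, inflated by the input set
    ends = (a * lo, a * hi)
    return min(ends) + u_min, max(ends) + u_max

lo, hi = -1.0, 1.0
for _ in range(5):
    lo, hi = reach_step(lo, hi)   # here [-1, 1] maps into itself: an invariant set
```

When the reach set stops growing, as it does here, the iteration has certified an invariant set, which is the kind of fact a safety verification exploits.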
Fabio Danilo Torrisi
Alberto Bemporad
alberto.bemporad@imtlucca.it
2011-07-27T09:09:10Z
2014-07-17T12:47:36Z
http://eprints.imtlucca.it/id/eprint/585
2011-07-27T09:09:10Z
Identification of hybrid systems via mixed-integer programming
This paper addresses the problem of identification of hybrid dynamical systems, focusing on hinging hyperplanes and Wiener piecewise affine autoregressive exogenous models. In particular, we provide algorithms based on mixed-integer linear or quadratic programming which are guaranteed to converge to a global optimum
Alberto Bemporad
alberto.bemporad@imtlucca.it
Jacob Roll
Lennart Ljung
2011-07-27T09:09:07Z
2014-07-17T12:47:14Z
http://eprints.imtlucca.it/id/eprint/577
2011-07-27T09:09:07Z
Optimization-based hybrid control tools
The paper discusses a framework for modeling, analyzing and controlling systems whose behavior is governed by interdependent physical laws, logic rules, and operating constraints, denoted as Mixed Logical Dynamical (MLD) systems. They are described by linear dynamic equations subject to linear inequalities involving real and integer variables. MLD models are equivalent to various other system descriptions like Piecewise Affine (PWA) systems and Linear Complementarity (LC) systems. They have the advantage, however, that many problems of system analysis (like reachability/controllability, observability, and verification) and many problems of synthesis (like controller design and filter design) can be readily expressed as mixed integer linear or quadratic programs, for which many commercial software packages exist. In this paper we first recall MLD models and the modeling language HYSDEL (HYbrid Systems DEscription Language). Subsequently, we illustrate the use of Model Predictive Control (MPC) based on mixed-integer programming for hybrid MLD models, and the use of multiparametric programming for obtaining explicitly the equivalent piecewise linear control form of MPC. The eventual practical success of these methods will depend on progress in the development of the various optimization algorithms and tools so that problems of realistic size can be tackled
Alberto Bemporad
alberto.bemporad@imtlucca.it
Manfred Morari
2011-07-27T09:09:04Z
2014-07-17T12:48:28Z
http://eprints.imtlucca.it/id/eprint/583
2011-07-27T09:09:04Z
On the equivalence of classes of hybrid dynamical models
We establish equivalences among five classes of hybrid systems, that we have encountered in previous research: mixed logical dynamical systems, linear complementarity systems, extended linear complementarity systems, piecewise affine systems, and max-min-plus-scaling systems. These results are of paramount importance for transferring properties and tools from one class to another
W.P.M.H. Heemels
Bart De Schutter
Alberto Bemporad
alberto.bemporad@imtlucca.it
2011-07-27T09:09:02Z
2014-07-17T12:46:56Z
http://eprints.imtlucca.it/id/eprint/584
2011-07-27T09:09:02Z
On hybrid systems and closed-loop MPC systems
The following five classes of hybrid systems were proved by W.P.M.H. Heemels et al. (2001) to be equivalent: linear complementarity (LC) systems, extended linear complementarity (ELC) systems, mixed logical-dynamical (MLD) systems, piecewise affine (PWA) systems and max-min-plus-scaling (MMPS) systems. Some of the equivalences were obtained under additional assumptions, such as boundedness of system variables. In this paper, for closed-loop linear or hybrid plants with model-predictive control (MPC) based on a linear model and fulfilling linear constraints on the input and state variables, we provide a simple and direct proof that the closed-loop system (cl-MPC) is a subclass of any of the former five classes of hybrid systems. This result opens up the use of tools developed for hybrid systems (such as stability, robust stability and safety analysis tools) to study the closed-loop properties of MPC
Alberto Bemporad
alberto.bemporad@imtlucca.it
W.P.M.H. Heemels
Bart De Schutter
2011-07-27T09:08:58Z
2014-07-17T12:46:26Z
http://eprints.imtlucca.it/id/eprint/586
2011-07-27T09:08:58Z
Suboptimal explicit MPC via approximate multiparametric quadratic programming
Algorithms for solving multiparametric quadratic programming (mp-QP) were proposed in Bemporad et al. (2001) and Tondel et al. (2001) for computing explicit model predictive control (MPC) laws. The reason for this interest is that the solution to mp-QP is a piecewise affine function of the state vector and thus it is easily implementable on-line. The main drawback of solving mp-QP exactly is that whenever the number of linear constraints involved in the optimization problem increases, the number of polyhedral cells in the piecewise affine partition of the parameter space may increase exponentially. We address the problem of finding approximate solutions to mp-QP, where the degree of approximation is arbitrary and allows a trade off between optimality and a smaller number of cells in the piecewise affine solution
Alberto Bemporad
alberto.bemporad@imtlucca.it
Carlo Filippi
2011-07-27T09:08:55Z
2014-07-09T14:43:59Z
http://eprints.imtlucca.it/id/eprint/580
2011-07-27T09:08:55Z
Efficient on-line computation of constrained optimal control
For discrete-time linear time-invariant systems with constraints on inputs and outputs, the constrained finite-time optimal controller can be obtained explicitly as a piecewise-affine function of the initial state via multi-parametric programming. By exploiting the properties of the value function, we present two algorithms that efficiently perform the online evaluation of the explicit optimal control law both in terms of storage demands and computational complexity. The algorithms are particularly effective when used for model-predictive control (MPC) where an open-loop constrained finite-time optimal control problem has to be solved at each sampling time
Francesco Borrelli
Mato Baotic
Alberto Bemporad
alberto.bemporad@imtlucca.it
Manfred Morari
2011-07-27T09:08:51Z
2014-07-17T12:40:36Z
http://eprints.imtlucca.it/id/eprint/576
2011-07-27T09:08:51Z
Optimal piecewise-linear control of dry clutch engagement
Based on a discrete-time second order state-space dynamic model of the powertrain system, a piecewise feedback control for the dry clutch engagement process is proposed. The engine speed and the clutch disk speed are assumed to be measurable and the control input is the normal engaging force applied to the disks. The controller is designed by minimizing a quadratic performance index subject to constraints on the normal force, normal force derivative, and engine speed. The resulting Model Predictive Controller (MPC) is shown to consist of a piecewise linear feedback control: the state space can be divided into several regions, such that in each region an off-line computed linear controller must be implemented. The explicit piecewise linear form of the MPC law is obtained by using a multiparametric programming solver and can be tuned so that fast engagement, small friction losses and smooth lock-up are achieved. The paper reports numerical results, carried out by a Simulink/MPC Toolbox simulation scheme and a realistic set of parameters, showing the good performance of the closed-loop system.
Alberto Bemporad
alberto.bemporad@imtlucca.it
Francesco Borrelli
Luigi Glielmo
Francesco Vasca
2011-07-27T09:08:46Z
2014-07-17T12:39:43Z
http://eprints.imtlucca.it/id/eprint/458
2011-07-27T09:08:46Z
Discrete-time hybrid modeling and verification of the batch evaporator process benchmark
For hybrid systems described by interconnections of linear discrete-time dynamical systems, automata, and propositional logic rules, we recently proposed the Mixed Logical Dynamical (MLD) systems formalism and the language HYSDEL (Hybrid System Description Language) as a modeling tool. For MLD models, we developed a reachability analysis algorithm which combines forward reach set computation and feasibility analysis of trajectories by linear and mixed-integer linear programming. In this paper the versatility of the overall analysis tool is illustrated on the batch evaporator benchmark process.
Alberto Bemporad
alberto.bemporad@imtlucca.it
Fabio Danilo Torrisi
Manfred Morari
2011-07-27T09:08:06Z
2014-07-17T12:38:20Z
http://eprints.imtlucca.it/id/eprint/457
2011-07-27T09:08:06Z
Equivalence of hybrid dynamical models
This paper establishes equivalences among five classes of hybrid systems: mixed logical dynamical (MLD) systems, linear complementarity (LC) systems, extended linear complementarity (ELC) systems, piecewise affine (PWA) systems, and max-min-plus-scaling (MMPS) systems. Some of the equivalences are established under (rather mild) additional assumptions. These results are of paramount importance for transferring theoretical properties and tools from one class to another, with the consequence that for the study of a particular hybrid system that belongs to any of these classes, one can choose the most convenient hybrid modeling framework.
W.P.M.H. Heemels
Bart De Schutter
Alberto Bemporad
alberto.bemporad@imtlucca.it
2011-07-27T09:08:04Z
2011-08-08T08:13:22Z
http://eprints.imtlucca.it/id/eprint/547
2011-07-27T09:08:04Z
Computation and approximation of piecewise affine control via binary search tree
We present an algorithm for generating a binary search tree that allows efficient computation of piecewise affine (PWA) functions defined on a polyhedral partition. This is useful for PWA control approaches, such as explicit model predictive control (MPC), as it allows the controller to be implemented on-line with small computational effort. The computation time is logarithmic in the number of regions in the PWA partition. A method for generating an approximate PWA function based on a binary search tree is also presented, giving further simplification of PWA control.
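The evaluation side of this scheme is simple enough to sketch. The hand-built tree below is an illustration for a 1-D partition (the partition, the laws, and the split are assumptions, and a real implementation would construct the tree automatically from the polyhedral partition): internal nodes test a hyperplane a^T x <= b, leaves store the affine law, so locating the active region costs one comparison per tree level instead of a scan over all regions.

```python
import numpy as np

# Toy binary search tree for evaluating a PWA control law.
class Node:
    def __init__(self, a=None, b=0.0, left=None, right=None, law=None):
        self.a, self.b = a, b              # split hyperplane a^T x <= b
        self.left, self.right = left, right
        self.law = law                     # (F, g) at a leaf: u = F x + g

# 1-D partition of [-1, 1]: u = -x for x <= 0, u = -2x for x > 0
tree = Node(a=np.array([1.0]), b=0.0,
            left=Node(law=(np.array([[-1.0]]), np.array([0.0]))),
            right=Node(law=(np.array([[-2.0]]), np.array([0.0]))))

def evaluate(node, x):
    """Descend to the leaf whose region contains x, then apply its law."""
    while node.law is None:
        node = node.left if node.a @ x <= node.b else node.right
    F, g = node.law
    return F @ x + g
```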
Petter Tondel
Tor Arne Johansen
Alberto Bemporad
alberto.bemporad@imtlucca.it
2011-07-27T09:08:02Z
2011-08-08T08:11:46Z
http://eprints.imtlucca.it/id/eprint/588
2011-07-27T09:08:02Z
Scheduling of hybrid systems: multi product batch plant
The paper proposes a solution to a class of scheduling problems where the goal is to minimize the schedule (production) time. The algorithm, which takes into account a model of a hybrid system described as an MLD (mixed logical dynamical) system, is based on performance-driven reachability analysis. The algorithm abstracts the behavior of the hybrid system by building a tree of evolution. Nodes of the tree represent reachable states of the process, and branches connect two nodes if a transition exists between the corresponding states. To each node a cost function value is associated, and based on this value the tree exploration is driven. As soon as the tree is explored, the global solution to the scheduling problem is obtained.
Bostjan Potocnik
Alberto Bemporad
alberto.bemporad@imtlucca.it
Fabio Danilo Torrisi
Gasper Music
Borut Zupancic
2011-07-27T09:08:00Z
2011-08-04T07:29:09Z
http://eprints.imtlucca.it/id/eprint/590
2011-07-27T09:08:00Z
An iterative algorithm for the optimal control of continuous-time switched linear systems
For continuous-time switched linear systems, this paper proposes an approach for solving infinite-horizon optimal control problems where the decision variables are the switching instants and the sequence of operating modes. The procedure iterates between a "master" procedure that finds an optimal switching sequence of modes, and a "slave" procedure that finds the optimal switching instants. The effectiveness of the approach is shown through simple simulation examples.
Alberto Bemporad
alberto.bemporad@imtlucca.it
Alessandro Giua
Carla Seatzu
2011-07-27T09:07:45Z
2011-08-04T07:29:09Z
http://eprints.imtlucca.it/id/eprint/524
2011-07-27T09:07:45Z
On the optimal control law for linear discrete time hybrid systems
In this paper we study the solution to optimal control problems for discrete-time linear hybrid systems. First, we prove that the closed form of the state-feedback solution to finite-time optimal control based on quadratic or linear norm performance criteria is a time-varying piecewise affine feedback control law. Then, we give an insight into the structure of the optimal state-feedback solution and of the value function. Finally, we briefly describe how the optimal control law can be computed by means of multiparametric programming.
Alberto Bemporad
alberto.bemporad@imtlucca.it
Francesco Borrelli
Manfred Morari
2011-07-27T09:05:50Z
2011-08-08T08:07:22Z
http://eprints.imtlucca.it/id/eprint/486
2011-07-27T09:05:50Z
On-line optimization via off-line parametric optimization tools
In this paper, model predictive control (MPC) based optimization problems with a quadratic performance criterion and linear constraints are formulated as multi-parametric quadratic programs (mp-QP), where the input and state variables, corresponding to a plant model, are treated as optimization variables and parameters, respectively. The solution of such problems is given by (i) a complete set of profiles of all the optimal inputs to the plant as a function of the state variables, and (ii) the regions in the space of state variables where these functions remain optimal. It is shown that these profiles are linear and the corresponding regions are described by linear inequalities. An algorithm for obtaining these profiles and corresponding regions of optimality is also presented. The key feature of the proposed approach is that the on-line optimization problem is solved off-line via parametric programming techniques. Hence (i) no optimization solver is called on-line, and (ii) only simple function evaluations are required to obtain the optimal inputs to the plant for the current state of the plant.
Efstratios N. Pistikopoulos
Vivek Dua
Nikolaos A. Bozinis
Alberto Bemporad
alberto.bemporad@imtlucca.it
Manfred Morari
2011-07-27T09:05:47Z
2014-07-17T12:52:10Z
http://eprints.imtlucca.it/id/eprint/473
2011-07-27T09:05:47Z
On hybrid systems and closed-loop MPC systems
The following five classes of hybrid systems were recently proven to be equivalent: linear complementarity, extended linear complementarity, mixed logical dynamical systems, piecewise affine systems and max-min-plus-scaling systems. Some of the equivalences were obtained under additional assumptions, such as boundedness of certain system variables. In this paper, for linear or hybrid plants in closed loop with a model predictive control (MPC) controller based on a linear model fulfilling linear constraints on input and state variables and utilizing a quadratic cost criterion, we provide a simple and direct proof that the closed-loop system is a subclass of any of the former five classes of hybrid systems. This result is of extreme importance, as it opens up the use of tools developed for the mentioned hybrid model classes, such as (robust) stability and safety analysis tools, to study closed-loop properties of MPC
Alberto Bemporad
alberto.bemporad@imtlucca.it
W.P.M.H. Heemels
Bart De Schutter
2011-07-27T09:05:45Z
2014-07-17T12:51:46Z
http://eprints.imtlucca.it/id/eprint/449
2011-07-27T09:05:45Z
On the equivalence of linear complementarity problems
We show that the Extended Linear Complementarity Problem (ELCP) can be recast as a standard Linear Complementarity Problem (LCP) provided that the surplus variables or the feasible set of the ELCP are bounded. Since many extensions of the LCP are special cases of the ELCP, this implies that these extensions can be rewritten as an LCP as well. Our equivalence proof is constructive and leads to three possible numerical solution methods for a given ELCP: regular ELCP algorithms, mixed integer linear programming algorithms, and regular LCP algorithms.
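The target form of this reduction is the standard LCP: find z such that w = M z + q, w >= 0, z >= 0, and w^T z = 0. A minimal sketch of that definition, with an illustrative M, q, and a candidate solution (all invented for the example, not from the paper):

```python
import numpy as np

# Verify whether a candidate z solves the standard LCP
#   w = M z + q,  w >= 0,  z >= 0,  w^T z = 0.

def is_lcp_solution(M, q, z, tol=1e-9):
    w = M @ z + q
    return bool(np.all(w >= -tol) and np.all(z >= -tol) and abs(w @ z) <= tol)

M = np.array([[2.0, 1.0], [1.0, 2.0]])
q = np.array([-1.0, -1.0])
z = np.array([1.0 / 3.0, 1.0 / 3.0])   # here w = M z + q = 0, so complementarity holds
```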
Bart De Schutter
W.P.M.H. Heemels
Alberto Bemporad
alberto.bemporad@imtlucca.it
2011-07-27T09:05:43Z
2014-07-17T12:51:18Z
http://eprints.imtlucca.it/id/eprint/589
2011-07-27T09:05:43Z
L2 anti-windup via receding horizon optimal control
The nonlinear L2 anti-windup framework introduced by Teel and Kapoor (1997) reduces the anti-windup synthesis problem to a state feedback synthesis problem for linear systems with input saturation and input matched L2 disturbances. In this paper, such a state feedback is synthesized using receding horizon optimal control techniques, and its equivalent piecewise affine closed-form is computed using the techniques of Bemporad et al. (2002). The properties of the resulting anti-windup compensation scheme are analyzed in the paper, and its performance is investigated through a simulation example.
Alberto Bemporad
alberto.bemporad@imtlucca.it
Andrew R. Teel
Luca Zaccarian
2011-07-27T09:05:40Z
2014-07-17T12:50:59Z
http://eprints.imtlucca.it/id/eprint/550
2011-07-27T09:05:40Z
A master-slave algorithm for the optimal control of continuous-time switched affine systems
For continuous-time switched affine systems, this paper proposes an approach for solving infinite-horizon optimal control problems where the decision variables are the switching instants and the sequence of operating modes. The procedure iterates between a "master" procedure that finds an optimal switching sequence of modes, and a "slave" procedure that finds the optimal switching instants.
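The "slave" step alone can be illustrated on a scalar example (all numbers, the mode sequence, and the grid search are assumptions for the sketch; the paper treats the general case and also iterates a "master" step over mode sequences): for the fixed sequence mode 1 (an affine drift, xdot = 3) followed by mode 2 (stable, xdot = -x), choose the switching instant tau in [0, T] minimizing the integral of x(t)^2.

```python
import math

def cost(tau, x0=-1.0, T=2.0):
    # mode 1 on [0, tau]: x(t) = x0 + 3 t; mode 2 on [tau, T]: exponential decay
    x_tau = x0 + 3.0 * tau                        # state when the switch occurs
    J1 = ((x0 + 3.0 * tau)**3 - x0**3) / 9.0      # integral of x^2 in mode 1
    J2 = x_tau**2 * (1.0 - math.exp(-2.0 * (T - tau))) / 2.0  # in mode 2
    return J1 + J2

# Crude grid search over the switching instant; a real implementation
# would exploit gradient information as in the paper's slave procedure.
tau_star = min((0.01 * k for k in range(201)), key=cost)
```

The optimum here lies in the interior, near tau = 1/3: the drift steers x from -1 exactly to the origin before the stable mode takes over, which is why neither boundary choice is best.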
Alberto Bemporad
alberto.bemporad@imtlucca.it
Alessandro Giua
Carla Seatzu
2011-07-27T09:05:38Z
2014-07-17T12:50:41Z
http://eprints.imtlucca.it/id/eprint/549
2011-07-27T09:05:38Z
Synthesis of state-feedback optimal controllers for switched linear systems
The paper deals with the optimal control of switched piecewise linear autonomous systems, where the objective is to minimize a performance index over an infinite time horizon. We assume that the switching sequence has a finite length: the unknown switching times and the switching sequence are the optimization parameters. We also assume that a cost may be associated to each switch. The optimal control for this class of systems takes the form of a state feedback, i.e., it is possible to identify a set of regions of the state space such that an optimal switch should occur if and only if the present state belongs to one of them. We show how the tables containing these regions can be computed off-line through a numerical procedure.
Alberto Bemporad
alberto.bemporad@imtlucca.it
Alessandro Giua
Carla Seatzu
2011-07-27T09:05:32Z
2011-08-05T14:09:02Z
http://eprints.imtlucca.it/id/eprint/459
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/459
2011-07-27T09:05:32Z
An algorithm for multi-parametric quadratic programming and explicit MPC solutions
Explicit solutions to constrained linear model predictive control problems can be obtained by solving multi-parametric quadratic programs (mp-QP) where the parameters are the components of the state vector. We study the properties of the polyhedral partition of the state space induced by the multi-parametric piecewise affine solution and propose a new mp-QP solver. Compared to existing algorithms, our approach adopts a different exploration strategy for subdividing the parameter space, avoiding unnecessary partitioning and QP problem solving, with a significant improvement of efficiency.
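Once the mp-QP is solved offline, the online controller only has to locate the region of the polyhedral partition containing the current state and apply that region's affine law. A minimal sketch in pure Python, with hypothetical one-dimensional region data (not from the paper), using a plain linear scan over regions:

```python
# Evaluate an explicit PWA control law u = F_i x + g_i over polyhedral
# regions {x : A_i x <= b_i} by linear scan.  All data are illustrative.

def in_region(A, b, x):
    """Check A x <= b componentwise for one polyhedral region."""
    return all(sum(a_ij * x_j for a_ij, x_j in zip(row, x)) <= b_i
               for row, b_i in zip(A, b))

def pwa_eval(regions, x):
    """regions: list of (A, b, F, g); return u = F x + g for the region
    containing x, or None if x lies outside the partition."""
    for A, b, F, g in regions:
        if in_region(A, b, x):
            return [sum(f_ij * x_j for f_ij, x_j in zip(row, x)) + g_i
                    for row, g_i in zip(F, g)]
    return None

# Two 1-D regions: u = -x for x <= 0, u = -2x for x >= 0.
regions = [
    ([[1.0]],  [0.0], [[-1.0]], [0.0]),
    ([[-1.0]], [0.0], [[-2.0]], [0.0]),
]
print(pwa_eval(regions, [-3.0]))  # -> [3.0]
print(pwa_eval(regions, [2.0]))   # -> [-4.0]
```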
Petter Tondel
Tor Arne Johansen
Alberto Bemporad
alberto.bemporad@imtlucca.it
2011-07-27T09:04:28Z
2011-08-05T14:08:37Z
http://eprints.imtlucca.it/id/eprint/460
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/460
2011-07-27T09:04:28Z
Evaluation of piecewise affine control via binary search tree
We present an algorithm for generating a binary search tree that allows efficient computation of piecewise affine (PWA) functions defined on a polyhedral partition. This is useful for PWA control approaches, such as explicit model predictive control, as it allows the controller to be implemented online with small computational effort. The computation time is logarithmic in the number of regions in the PWA partition.
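The point-location idea can be sketched as follows: each inner node of the tree tests the sign of one hyperplane, so evaluation descends a path of length logarithmic in the number of regions instead of scanning them all. The hand-built one-dimensional tree below is purely illustrative:

```python
# Sketch of tree-based point location for a PWA law: each inner node
# tests one hyperplane a.x <= b; leaves hold the affine gain (F, g).
# The tree is hand-built for a toy 1-D partition.

def dot(a, x):
    return sum(ai * xi for ai, xi in zip(a, x))

def tree_eval(node, x):
    """Descend nodes ('node', a, b, left, right) to a leaf ('leaf', F, g),
    then return u = F x + g."""
    while node[0] == 'node':
        _, a, b, left, right = node
        node = left if dot(a, x) <= b else right
    _, F, g = node
    return [dot(row, x) + gi for row, gi in zip(F, g)]

# Partition of the line at x = 0: u = -x for x <= 0, u = -2x for x > 0.
tree = ('node', [1.0], 0.0,
        ('leaf', [[-1.0]], [0.0]),    # branch for x <= 0
        ('leaf', [[-2.0]], [0.0]))    # branch for x > 0

print(tree_eval(tree, [-3.0]))  # -> [3.0]
print(tree_eval(tree, [2.0]))   # -> [-4.0]
```

Building a balanced tree over a real polyhedral partition is the hard part the paper addresses; the evaluation loop itself stays this simple.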
Petter Tondel
Tor Arne Johansen
Alberto Bemporad
alberto.bemporad@imtlucca.it
2011-07-27T09:04:23Z
2011-08-05T14:08:12Z
http://eprints.imtlucca.it/id/eprint/556
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/556
2011-07-27T09:04:23Z
Optimal control of uncertain piecewise affine/mixed logical dynamical systems
This paper proposes an approach to extend the mixed logical dynamical modelling framework for synthesizing robust optimal control actions for constrained piecewise affine systems subject to bounded additive input disturbances. Rather than using closed-loop dynamic programming arguments, robustness is achieved here with an open-loop optimization strategy, such that the optimal control sequence optimizes nominal performance while robustly guaranteeing that safety/performance constraints are respected. The proposed approach is based on the robust mode control concept, which enforces the control input to generate trajectories such that the mode of the system, at each time instant, is independent of the disturbances.
Miguel Pedro Silva
Alberto Bemporad
alberto.bemporad@imtlucca.it
Miguel Ayala Botto
José Sa da Costa
2011-07-27T09:04:20Z
2016-05-11T11:07:22Z
http://eprints.imtlucca.it/id/eprint/487
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/487
2011-07-27T09:04:20Z
Hybrid modeling and optimal control of an asphalt base process
Bostjan Potocnik
Alberto Bemporad
alberto.bemporad@imtlucca.it
Fabio Danilo Torrisi
Gasper Music
Borut Zupancic
2011-07-27T09:04:08Z
2011-08-05T14:07:13Z
http://eprints.imtlucca.it/id/eprint/552
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/552
2011-07-27T09:04:08Z
Receding-horizon control of LTI systems with quantized inputs
This paper deals with the stabilization problem for a particular class of hybrid systems, namely discrete-time linear systems subject to a uniform (a priori fixed) quantization of the control set. Results of our previous work on the subject provided a description of minimal (in a specific sense) invariant sets that could be rendered maximally attractive under any quantized feedback strategy. In this paper, we consider the design of stabilizing laws that optimize a given cost index on the state and input evolution on a finite, receding horizon. Application of Model Predictive Control techniques for the solution of similar hybrid control problems through Mixed Logical Dynamical reformulations can provide a stabilizing control law, provided that the feasibility hypotheses are met. Here, we discuss precisely the shortest horizon length and the smallest invariant terminal set for which a stabilizing MPC scheme can be guaranteed. The final paper will provide an example and simulations of the application of the control scheme to a practical quantized control problem.
Bruno Picasso
Stefania Pancanti
Alberto Bemporad
alberto.bemporad@imtlucca.it
Antonio Bicchi
2011-07-27T09:04:05Z
2011-08-04T07:29:08Z
http://eprints.imtlucca.it/id/eprint/554
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/554
2011-07-27T09:04:05Z
Set membership identification of piecewise affine models
This paper addresses the problem of identification of piecewise affine (PWA) models, which involves the joint estimation of both the parameters of the affine submodels and the partition of the PWA map from data. Following ideas from set-membership identification, the key approach is to characterize the model by its maximum allowed prediction error, which is used as a tuning knob for trading off between prediction accuracy and model complexity. At initialization, the proposed procedure for PWA identification exploits a technique for partitioning an infeasible system of linear inequalities into a (possibly minimum) number of feasible subsystems. This provides both an initial clustering of the datapoints and a guess of the number of required submodels, which therefore is not fixed a priori. A refinement procedure is then applied in order to improve both data classification and parameter estimation. The partition of the PWA map is finally estimated by considering multicategory classification techniques.
Alberto Bemporad
alberto.bemporad@imtlucca.it
Andrea Garulli
Simone Paoletti
Antonio Vicino
2011-07-27T09:04:02Z
2011-08-04T07:29:08Z
http://eprints.imtlucca.it/id/eprint/553
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/553
2011-07-27T09:04:02Z
Optimal state-feedback quadratic regulation of linear hybrid automata
For linear hybrid automata, namely switched linear autonomous systems whose mode of operation is determined by a controlled automaton, in this paper we address the problem of optimal control, where the objective is to minimize a quadratic performance index over an infinite time horizon. The quantities to be optimized are the sequence of switching times and the sequence of modes (or "locations"), under the following constraints: the sequence of modes has a finite length; the discrete dynamics of the automaton restricts the possible switches from a given location to the next, with a cost associated to each switch; the time interval between two consecutive switching times is greater than a fixed quantity. We show how a state-feedback solution can be computed off-line through a numerical procedure.
Alberto Bemporad
alberto.bemporad@imtlucca.it
Daniele Corona
Alessandro Giua
Carla Seatzu
2011-07-27T09:04:00Z
2011-08-05T14:04:27Z
http://eprints.imtlucca.it/id/eprint/450
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/450
2011-07-27T09:04:00Z
A geometric algorithm for multi-parametric linear programming
We propose a novel algorithm for solving multiparametric linear programming problems. Rather than visiting different bases of the associated LP tableau, we follow a geometric approach based on the direct exploration of the parameter space. The resulting algorithm has computational advantages, namely the simplicity of its implementation in a recursive form and an efficient handling of primal and dual degeneracy. Illustrative examples describe the approach throughout the paper. The algorithm is used to solve finite-time constrained optimal control problems for discrete-time linear dynamical systems.
Francesco Borrelli
Alberto Bemporad
alberto.bemporad@imtlucca.it
Manfred Morari
2011-07-27T09:03:57Z
2011-08-04T07:29:08Z
http://eprints.imtlucca.it/id/eprint/501
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/501
2011-07-27T09:03:57Z
Hybrid control of an automotive robotized gearbox for reduction of consumptions and emissions
This paper describes the application of hybrid modeling and receding horizon optimal control techniques for supervising an automotive robotized gearbox, with the goal of reducing consumptions and emissions, a problem that is currently under investigation at Fiat Research Center (CRF). We show that the dynamic behavior of the vehicle can be easily approximated and captured by the hybrid model, and through simulations on standard speed patterns that a good closed loop performance can be achieved. The synthesized control law can be implemented on automotive hardware as a piecewise affine function of the measured and estimated quantities.
Alberto Bemporad
alberto.bemporad@imtlucca.it
Pandeli Borodani
Massimo Mannelli
2011-07-27T09:02:50Z
2011-08-04T07:29:08Z
http://eprints.imtlucca.it/id/eprint/502
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/502
2011-07-27T09:02:50Z
A SAT-based hybrid solver for optimal control of hybrid systems
Combinatorial optimization over continuous and integer variables was proposed recently as a useful tool for solving complex optimal control problems for linear hybrid dynamical systems formulated in discrete-time. Current approaches are based on mixed-integer linear or quadratic programming (MIP), which provides the solution after solving a sequence of relaxed standard linear (or quadratic) programs (LP, QP). An MIP formulation has the drawback of requiring conversion of the discrete/logic part of the hybrid problem into mixed-integer inequalities. Although this operation can be done automatically, most of the original discrete structure of the problem is lost during the conversion. Moreover, the efficiency of the MIP solver mainly relies upon the tightness of the continuous LP/QP relaxations. In this paper we attempt to overcome such difficulties by combining MIP and techniques for solving constraint satisfaction problems into a “hybrid” solver, taking advantage of SAT solvers for dealing efficiently with satisfiability of logic constraints. We detail how to model the hybrid dynamics so that the optimal control problem can be solved by the hybrid MIP+SAT solver, and show that the achieved performance is superior to the one achieved by commercial MIP solvers.
Alberto Bemporad
alberto.bemporad@imtlucca.it
Nicolò Giorgetti
2011-07-27T09:02:47Z
2011-08-04T07:29:08Z
http://eprints.imtlucca.it/id/eprint/471
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/471
2011-07-27T09:02:47Z
Suboptimal Explicit Receding Horizon Control via Approximate Multiparametric Quadratic Programming
Algorithms for solving multiparametric quadratic programming (MPQP) were recently proposed in Refs. 1–2 for computing explicit receding horizon control (RHC) laws for linear systems subject to linear constraints on input and state variables. The reason for this interest is that the solution to MPQP is a piecewise affine function of the state vector and thus it is easily implementable online. The main drawback of solving MPQP exactly is that, whenever the number of linear constraints involved in the optimization problem increases, the number of polyhedral cells in the piecewise affine partition of the parameter space may increase exponentially. In this paper, we address the problem of finding approximate solutions to MPQP, where the degree of approximation is arbitrary and allows one to trade off between optimality and the number of cells in the piecewise affine solution. We provide analytic formulas for bounding the errors on the optimal value and the optimizer, and for guaranteeing that the resulting suboptimal RHC law provides closed-loop stability and constraint fulfillment.
Alberto Bemporad
alberto.bemporad@imtlucca.it
Carlo Filippi
2011-07-27T09:02:45Z
2011-08-05T14:03:07Z
http://eprints.imtlucca.it/id/eprint/555
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/555
2011-07-27T09:02:45Z
An efficient algorithm for computing the state feedback optimal control law for discrete time hybrid systems
Francesco Borrelli
Mato Baotic
Alberto Bemporad
alberto.bemporad@imtlucca.it
Manfred Morari
2011-07-27T09:02:43Z
2011-08-04T07:29:08Z
http://eprints.imtlucca.it/id/eprint/472
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/472
2011-07-27T09:02:43Z
Min-max control of constrained uncertain discrete-time linear systems
For discrete-time uncertain linear systems with constraints on inputs and states, we develop an approach to determine state feedback controllers based on a min-max control formulation. Robustness is achieved against additive norm-bounded input disturbances and/or polyhedral parametric uncertainties in the state-space matrices. We show that the finite-horizon robust optimal control law is a continuous piecewise affine function of the state vector and can be calculated by solving a sequence of multiparametric linear programs. When the optimal control law is implemented in a receding horizon scheme, only a piecewise affine function needs to be evaluated on line at each time step. The technique computes the robust optimal feedback controller for a rather general class of systems with modest computational effort without needing to resort to gridding of the state-space.
Alberto Bemporad
alberto.bemporad@imtlucca.it
Francesco Borrelli
Manfred Morari
2011-07-27T09:02:41Z
2011-08-04T07:29:08Z
http://eprints.imtlucca.it/id/eprint/483
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/483
2011-07-27T09:02:41Z
Corrigendum to: "The explicit linear quadratic regulator for constrained systems" [Automatica 38(1) (2002) 3-20]
We apologize that Example 7.1 as published in Bemporad, Morari, Dua, and Pistikopoulos (2002) is incorrect due to a miscalculation of the weight matrix P on the terminal state.
Alberto Bemporad
alberto.bemporad@imtlucca.it
Manfred Morari
Vivek Dua
Efstratios N. Pistikopoulos
2011-07-27T09:02:38Z
2011-08-05T14:01:58Z
http://eprints.imtlucca.it/id/eprint/557
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/557
2011-07-27T09:02:38Z
Further results on multiparametric quadratic programming
In this paper we extend results on strictly convex multiparametric quadratic programming (mpQP) to the convex case. An efficient method for computing the mpQP solution is provided. We give a fairly complete description of the mpQP solver, focusing on implementational issues such as degeneracy handling.
Petter Tondel
Tor Arne Johansen
Alberto Bemporad
alberto.bemporad@imtlucca.it
2011-07-27T09:02:37Z
2011-08-04T07:29:08Z
http://eprints.imtlucca.it/id/eprint/559
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/559
2011-07-27T09:02:37Z
Logic-based hybrid solvers for optimal control of hybrid systems
Combinatorial optimization over continuous and integer variables was proposed recently as a useful tool for solving complex optimal control problems for linear hybrid dynamical systems formulated in discrete-time. Current approaches are based on mixed-integer linear/quadratic programming (MIP), which provides the solution after solving a sequence of relaxed standard linear (or quadratic) programs (LP, QP). An MIP formulation has the drawback of requiring conversion of the discrete/logic part of the hybrid problem into mixed-integer inequalities. Although this operation can be done automatically, most of the original discrete structure of the problem is lost during the conversion. Moreover, the efficiency of the MIP solver mainly relies upon the tightness of the continuous LP/QP relaxations. In this paper we attempt to overcome such difficulties by combining MIP and constraint programming (CP) techniques into a "hybrid" solver, taking advantage of CP for dealing efficiently with satisfiability of logic constraints. We detail how to model the hybrid dynamics so that the optimal control problem can be solved by the hybrid MIP+CP solver, and show on a case study that the achieved performance is superior to the one achieved by pure MIP solvers.
Alberto Bemporad
alberto.bemporad@imtlucca.it
Nicolò Giorgetti
2011-07-27T09:02:33Z
2011-08-04T07:29:08Z
http://eprints.imtlucca.it/id/eprint/560
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/560
2011-07-27T09:02:33Z
Multiparametric nonlinear integer programming and explicit quantized optimal control
This paper deals with multiparametric nonlinear integer programming problems where the optimization variables belong to a finite set and where the cost function and the constraints depend in an arbitrary nonlinear fashion on the optimization variables and in a linear fashion on the parameters. We examine the main theoretical properties of the optimizer and of the optimum as a function of the parameters, and propose a solution algorithm. The methodology is employed to investigate properties of quantized optimal control laws and optimal performance, and to obtain their explicit representation as a function of the state vector.
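Since the optimization variables belong to a finite set, the baseline against which explicit quantized control laws are compared is plain enumeration. A toy one-step sketch (the dynamics, cost, and quantized set below are invented for illustration):

```python
# Illustrative sketch of quantized optimal control: with a finite input
# set, a one-step optimum can be found by exhaustive enumeration.

U = [-1.0, -0.5, 0.0, 0.5, 1.0]    # quantized control set

def step_cost(x, u):
    x_next = x + u                  # toy scalar dynamics x+ = x + u
    return x_next ** 2 + 0.1 * u ** 2   # stage cost on state and input

def quantized_opt(x):
    """Return the cost-minimizing input among the finite set U."""
    return min(U, key=lambda u: step_cost(x, u))

print(quantized_opt(0.8))   # picks the quantized step that best cancels x
```

The multiparametric approach of the paper replaces this online enumeration with an explicit, precomputed representation of the optimizer as a function of the state.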
Alberto Bemporad
alberto.bemporad@imtlucca.it
2011-07-27T09:02:31Z
2011-08-04T07:29:08Z
http://eprints.imtlucca.it/id/eprint/563
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/563
2011-07-27T09:02:31Z
Model predictive control - new tools for design and evaluation
A new version of the model predictive control toolbox for MATLAB is described. Major improvements include more flexible modeling of plant and disturbance characteristics, and support for design and simulation involving nonlinear (Simulink) models.
Alberto Bemporad
alberto.bemporad@imtlucca.it
N. Lawrence Ricker
James Gareth Owen
2011-07-27T09:02:29Z
2014-01-24T14:29:28Z
http://eprints.imtlucca.it/id/eprint/628
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/628
2011-07-27T09:02:29Z
Model Predictive Control Toolbox™ User’s Guide
Alberto Bemporad
alberto.bemporad@imtlucca.it
Manfred Morari
N. Lawrence Ricker
2011-07-27T08:54:00Z
2011-08-05T13:56:12Z
http://eprints.imtlucca.it/id/eprint/562
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/562
2011-07-27T08:54:00Z
Stabilizing receding horizon control of piecewise linear systems: An LMI approach
Receding horizon control has recently been used for regulating discrete-time Piecewise Affine (PWA) systems. One obstruction to implementation is guaranteeing closed-loop stability a priori, an issue that has only been addressed marginally in the literature. In this paper we present an extension of the terminal cost method for guaranteeing stability in receding horizon control to the class of unconstrained Piecewise Linear (PWL) systems. A linear matrix inequality set-up is developed to calculate the terminal weight matrix and the auxiliary feedback gains that ensure stability for quadratic cost based receding horizon control. It is shown that the PWL state-feedback control law employed in the stability proof globally asymptotically stabilizes the origin of the PWL system. The additional conditions needed to extend these results to constrained PWA systems are also pointed out. The implementation of the proposed method is illustrated by an example.
Mircea Lazar
W.P.M.H. Heemels
Siep Weiland
Alberto Bemporad
alberto.bemporad@imtlucca.it
2011-07-27T08:53:56Z
2011-08-04T07:29:08Z
http://eprints.imtlucca.it/id/eprint/525
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/525
2011-07-27T08:53:56Z
SAT-based branch & bound and optimal control of hybrid dynamical systems
A classical hybrid MIP-CSP approach for solving problems having a logical part and a mixed integer programming part is presented. A Branch and Bound procedure combines an MIP and a SAT solver to determine the optimal solution of a general class of optimization problems. The procedure explores the search tree, by solving at each node a linear relaxation and a satisfiability problem, until all integer variables of the linear relaxation are set to an integer value in the optimal solution. When all integer variables are fixed the procedure switches to the SAT solver, which tries to extend the solution taking into account logical constraints. If this is impossible, a "no-good" cut is generated and added to the linear relaxation. We show that the class of problems we consider turns out to be very useful for solving complex optimal control problems for linear hybrid dynamical systems formulated in discrete-time. We describe how to model the "hybrid" dynamics so that the optimal control problem can be solved by the hybrid MIP+SAT solver, and show that the achieved performance is superior to the one achieved by commercial MIP solvers.
Alberto Bemporad
alberto.bemporad@imtlucca.it
Nicolò Giorgetti
2011-07-27T08:53:53Z
2011-08-04T07:29:08Z
http://eprints.imtlucca.it/id/eprint/504
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/504
2011-07-27T08:53:53Z
A greedy approach to identification of piecewise affine models
This paper addresses the problem of identification of piecewise affine (PWA) models. This problem involves the estimation from data of both the parameters of the affine submodels and the partition of the PWA map. The procedure that we propose for PWA identification exploits a greedy strategy for partitioning an infeasible system of linear inequalities into a minimum number of feasible subsystems: this provides an initial clustering of the datapoints. Then a refinement procedure is applied repeatedly to the estimated clusters in order to improve both the data classification and the parameter estimation. The partition of the PWA map is finally estimated by considering pairwise the clusters of regression vectors, and by finding a separating hyperplane for each of such pairs. We show that our procedure does not require the number of affine submodels to be fixed a priori; it is instead automatically estimated from the data.
Alberto Bemporad
alberto.bemporad@imtlucca.it
Andrea Garulli
Simone Paoletti
Antonio Vicino
2011-07-27T08:53:46Z
2011-08-04T07:29:08Z
http://eprints.imtlucca.it/id/eprint/463
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/463
2011-07-27T08:53:46Z
Efficient conversion of mixed logical dynamical systems into an equivalent piecewise affine form
For hybrid systems described by switched linear difference equations, linear threshold conditions, automata, and propositional logic conditions, described in mixed logical dynamical form, this note describes two algorithms for transforming such systems into an equivalent piecewise affine form, where equivalent means that for the same initial conditions and input sequences the trajectories of the system are identical. The proposed techniques exploit ideas from mixed-integer programming and multiparametric programming.
Alberto Bemporad
alberto.bemporad@imtlucca.it
2011-07-27T08:53:44Z
2011-08-05T13:58:48Z
http://eprints.imtlucca.it/id/eprint/469
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/469
2011-07-27T08:53:44Z
Identification of piecewise affine systems via mixed-integer programming
This paper addresses the problem of identification of hybrid dynamical systems, by focusing the attention on hinging hyperplanes and Wiener piecewise affine autoregressive exogenous models, in which the regressor space is partitioned into polyhedra with affine submodels for each polyhedron. In particular, we provide algorithms based on mixed-integer linear or quadratic programming which are guaranteed to converge to a global optimum. For the special case where the estimation data only seldom switches between the different submodels, we also suggest a way of trading off between optimality and complexity by using a change detection approach.
Jacob Roll
Alberto Bemporad
alberto.bemporad@imtlucca.it
Lennart Ljung
2011-07-27T08:53:26Z
2011-08-05T13:58:02Z
http://eprints.imtlucca.it/id/eprint/461
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/461
2011-07-27T08:53:26Z
HYSDEL - A tool for generating computational hybrid models
This paper presents a computational framework for modeling hybrid systems in discrete-time. We introduce the class of discrete hybrid automata (DHA) and show its relation with several other existing model paradigms: piecewise affine systems, mixed logical dynamical systems, (extended) linear complementarity systems, min-max-plus-scaling systems. We present HYSDEL (hybrid systems description language), a high-level modeling language for DHA, and a set of tools for translating DHA into any of the former hybrid models. Such a multimodeling capability of HYSDEL is particularly appealing for exploiting a large number of available analysis and synthesis techniques, each one developed for a particular class of hybrid models. An automotive example shows the modeling capabilities of HYSDEL and how the different models allow the use of several computational tools.
Fabio Danilo Torrisi
Alberto Bemporad
alberto.bemporad@imtlucca.it
2011-07-27T08:53:24Z
2011-08-04T07:29:08Z
http://eprints.imtlucca.it/id/eprint/462
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/462
2011-07-27T08:53:24Z
Anti-windup synthesis via sampled-data piecewise affine optimal control
Discrete-time receding horizon optimal control is employed in model-based anti-windup augmentation. The optimal control formulation enables designs that minimize the mismatch between the unconstrained closed-loop response with a given controller and the constrained closed-loop response with anti-windup augmentation. Recently developed techniques for off-line computation of the constrained linear regulator's solution, which is piecewise affine, facilitate implementation. The resulting sampled-data, anti-windup closed-loop system's properties are established and its performance is demonstrated on a simulation example.
Alberto Bemporad
alberto.bemporad@imtlucca.it
Andrew R. Teel
Luca Zaccarian
2011-07-27T08:53:22Z
2011-08-05T13:57:21Z
http://eprints.imtlucca.it/id/eprint/561
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/561
2011-07-27T08:53:22Z
Robust optimal control of linear hybrid systems: An MLD approach
A methodology for synthesizing robust optimal input trajectories for constrained linear hybrid systems subject to bounded additive disturbances is presented. The computed control sequence optimizes nominal performance while robustly guaranteeing that safety/performance constraints are respected. Specifically, for hybrid systems representable in the piecewise affine form, robustness is achieved with an open-loop optimization strategy based on the mixed logical dynamical modelling framework.
Miguel Pedro Silva
Miguel Ayala Botto
Luis Pina
Alberto Bemporad
alberto.bemporad@imtlucca.it
José Sa da Costa
2011-07-27T08:47:42Z
2011-08-05T13:54:34Z
http://eprints.imtlucca.it/id/eprint/564
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/564
2011-07-27T08:47:42Z
A dynamic programming approach for determining the explicit solution of MPC controllers
Recently multi-parametric methods have been applied with success to model predictive control (MPC) schemes. In this paper we propose a novel method for linear systems to obtain the explicit description of the control law that is based on dynamic programming and exploits the structure of the MPC formulation.
David Muñoz de la Peña
Teodoro Alamo
Alberto Bemporad
alberto.bemporad@imtlucca.it
Eduardo F. Camacho
2011-07-27T08:47:40Z
2011-08-05T13:54:55Z
http://eprints.imtlucca.it/id/eprint/565
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/565
2011-07-27T08:47:40Z
Robust explicit MPC based on approximate multi-parametric convex programming
Many robust model predictive control (MPC) schemes require the online solution of a convex program, which can be computationally demanding. For deterministic MPC schemes, multi-parametric programming was successfully applied to move most computations offline. In this paper we adopt a general approximate multi-parametric algorithm recently suggested for convex problems and propose to apply it to a classical robust MPC scheme. This approach enables one to implement a robust MPC controller in real time for systems with polytopic uncertainty, ensuring robust constraint satisfaction and robust convergence to a given bounded set.
David Muñoz de la Peña
Alberto Bemporad
alberto.bemporad@imtlucca.it
Carlo Filippi
2011-07-27T08:47:28Z
2011-08-05T13:53:02Z
http://eprints.imtlucca.it/id/eprint/533
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/533
2011-07-27T08:47:28Z
A decomposition algorithm for feedback min-max model predictive control
An algorithm for solving feedback min-max model predictive control for discrete time uncertain linear systems with constraints is presented in the paper. The algorithm solves the corresponding multi-stage min-max linear optimization problem. It is based on applying recursively a decomposition technique to solve the min-max problem via a sequence of low complexity linear programs. It is proved that the algorithm converges to the optimal solution in finite time. Simulation results are provided to compare the proposed algorithm with other approaches.
David Muñoz de la Peña
Alberto Bemporad
alberto.bemporad@imtlucca.it
Teodoro Alamo
2011-07-27T08:47:26Z
2011-08-05T13:52:41Z
http://eprints.imtlucca.it/id/eprint/534
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/534
2011-07-27T08:47:26Z
Stochastic programming applied to model predictive control
Many robust model predictive control (MPC) schemes are based on min-max optimization, that is, the future control input trajectory is chosen as the one which minimizes the cost under the worst disturbance realization. In this paper we take a different route to solve MPC problems under uncertainty. Disturbances are modelled as random variables and the expected value of the performance index is minimized. The resulting MPC scheme can be solved using Stochastic Programming (SP), for which several efficient solution techniques are available. We show that this formulation guarantees robust constraint fulfillment and that the expected value of the optimum cost function of the closed loop system decreases at each time step.
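The expected-value idea can be illustrated with a scenario-based sketch: instead of minimizing against the worst disturbance, average the cost over sampled disturbance realizations. The scalar system, scenarios, and input grid below are toy assumptions, not the paper's formulation:

```python
# Minimal illustration of expected-value MPC over one step: pick the
# input minimizing the average cost over disturbance scenarios.

SCENARIOS = [-0.2, 0.0, 0.2]    # equally likely disturbance samples

def expected_cost(x, u):
    """Average stage cost of x+ = x + u + w over the scenarios."""
    costs = [(x + u + w) ** 2 + 0.1 * u ** 2 for w in SCENARIOS]
    return sum(costs) / len(costs)

def best_input(x):
    """Grid search for the expected-cost-minimizing input in [-2, 2]."""
    grid = [i / 10 - 2.0 for i in range(41)]
    return min(grid, key=lambda u: expected_cost(x, u))

print(best_input(1.0))   # close to the analytic optimum -1/1.1
```

A genuine SP formulation optimizes a full input trajectory with recourse over a scenario tree; this one-step grid search only conveys the averaging idea.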
David Muñoz de la Peña
Alberto Bemporad
alberto.bemporad@imtlucca.it
Teodoro Alamo
2011-07-27T08:47:24Z
2011-08-05T13:52:14Z
http://eprints.imtlucca.it/id/eprint/526
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/526
2011-07-27T08:47:24Z
Infinity norms as Lyapunov functions for model predictive control of constrained PWA systems
In this paper we develop a priori stabilization conditions for infinity norm based hybrid MPC in the terminal cost and constraint set fashion. Closed-loop stability is achieved using infinity norm inequalities that guarantee that the value function corresponding to the MPC cost is a Lyapunov function of the controlled system. We show that Lyapunov asymptotic stability can be achieved even though the MPC value function may be discontinuous. One of the advantages of this hybrid MPC scheme is that the terminal constraint set can be directly obtained as a sublevel set of the calculated terminal cost, which is also a local piecewise linear Lyapunov function. This yields a new method to obtain positively invariant sets for PWA systems.
Mircea Lazar
W.P.M.H. Heemels
Siep Weiland
Alberto Bemporad
alberto.bemporad@imtlucca.it
Octavian Pastravanu
2011-07-27T08:47:22Z
2011-08-05T13:51:39Z
http://eprints.imtlucca.it/id/eprint/530
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/530
2011-07-27T08:47:22Z
On the stability and robustness of non-smooth nonlinear MPC
This paper considers discrete-time nonlinear, possibly discontinuous, systems in closed-loop with Model Predictive
Controllers (MPC). The aim of the paper is to provide a priori sufficient conditions for asymptotic stability in
the Lyapunov sense and robust stability, while allowing for both the system dynamics and the value function of the MPC cost (the usual candidate Lyapunov function in MPC) to be discontinuous functions of the state. The motivation for this work lies in the recent development of MPC for hybrid systems, which are inherently discontinuous and nonlinear systems. As an application of the general theory, it is shown that Lyapunov stability is achieved in hybrid MPC. For a particular class of piecewise affine systems, a modified MPC set-up is proposed, which is proven to be robust to small additive disturbances via an input-to-state stability argument.
Mircea Lazar
W.P.M.H. Heemels
Alberto Bemporad
alberto.bemporad@imtlucca.it
Siep Weiland
2011-07-27T08:45:21Z
2011-08-04T07:29:07Z
http://eprints.imtlucca.it/id/eprint/529
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/529
2011-07-27T08:45:21Z
Passivity analysis and passification of discrete-time hybrid systems
This paper proposes several (sufficient) criteria based on the numerical solution of systems of linear matrix inequalities (LMIs) for proving the passivity of discrete-time hybrid systems in piecewise affine form, and for the synthesis of switched linear control laws that enforce passivity.
Alberto Bemporad
alberto.bemporad@imtlucca.it
Gianni Bianchini
Filippo Brogi
Federico Barbagli
2011-07-27T08:45:16Z
2011-08-05T13:48:46Z
http://eprints.imtlucca.it/id/eprint/528
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/528
2011-07-27T08:45:16Z
On the stability of quadratic forms based model predictive control of constrained PWA systems
In this paper we investigate the stability of discrete-time PWA systems in closed-loop with quadratic cost based Model Predictive Controllers (MPC) and we derive a priori sufficient conditions for Lyapunov asymptotic stability.
We prove that Lyapunov stability can be achieved for the
closed-loop system even though the considered Lyapunov
function and the system dynamics may be discontinuous.
The stabilization conditions are derived using a terminal
cost and constraint set method. An S-procedure technique
is employed to reduce conservativeness of the stabilization
conditions, and a linear matrix inequality set-up is developed in order to calculate the terminal cost. A new algorithm for computing piecewise polyhedral positively invariant sets for PWA systems is also presented. In this manner, the on-line optimization problem associated with MPC leads to a mixed-integer quadratic programming problem, which can be solved by standard optimization tools.
Mircea Lazar
W.P.M.H. Heemels
Siep Weiland
Alberto Bemporad
alberto.bemporad@imtlucca.it
2011-07-27T08:45:13Z
2016-04-06T10:27:19Z
http://eprints.imtlucca.it/id/eprint/523
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/523
2011-07-27T08:45:13Z
Hybrid model predictive control application towards optimal semi-active suspension
The optimal control problem of a quarter-car semi-active suspension has been studied in the past.
Considering that a quarter-car semi-active suspension can
either be modeled as a linear system with a state-dependent
constraint on the control (actuator force) input, or as a bilinear system with a control (variable damping coefficient) saturation, the seemingly simple problem poses several interesting questions and challenges. Does the optimal control law derived from the corresponding unconstrained system, i.e. “clipped-optimal”, remain optimal for the constrained case? If the optimal control law of the constrained system does deviate from its unconstrained counterpart, how different are they? What is the structure of the optimal control law? In this paper, we attempt to answer some of the above questions by utilizing the recent development in model predictive control (MPC) of hybrid dynamical systems. The constrained quarter-car semi-active suspension is modeled as a switching affine system, where the switching is determined
by the activation of passivity constraints, force saturation, and maximum power dissipation limits. Theoretically, over an infinite prediction horizon the MPC controller corresponds to the exact optimal controller. The performance of different finite-horizon hybrid MPC controllers is tested in simulation using mixed-integer quadratic programming.
Then, for short-horizon MPC controllers, we derive the
explicit optimal control law and show that the optimal
control is piecewise affine in state. In particular, we show
that for horizon equal to one the explicit MPC control law
corresponds to clipped LQR. We will compare the derived
optimal control law to various semi-active control laws in
the literature including the well-known “clipped-optimal”.
We will evaluate their corresponding performances for both
a deterministic shock input case and a stochastic random
disturbances case through simulations.
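The “clipped-optimal” baseline discussed above can be sketched on a toy double integrator standing in for the quarter-car model (all numbers are illustrative): compute the unconstrained LQR gain from the discrete Riccati equation, then saturate the resulting input.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Toy double integrator standing in for the quarter-car model (illustrative only)
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q, R = np.eye(2), np.array([[1.0]])

# Unconstrained LQR gain from the discrete algebraic Riccati equation
P = solve_discrete_are(A, B, Q, R)
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

x = np.array([1.0, 0.0])
for _ in range(200):
    u = np.clip(-K @ x, -1.0, 1.0)   # "clipped-optimal": saturate the LQR input
    x = A @ x + B @ u
```

The paper's hybrid MPC controllers handle the passivity and dissipation constraints explicitly instead of clipping after the fact; this sketch only reproduces the baseline they compare against.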
Nicolò Giorgetti
Alberto Bemporad
alberto.bemporad@imtlucca.it
H. E. Tseng
Davor Hrovat
2011-07-27T08:45:09Z
2011-08-05T13:46:22Z
http://eprints.imtlucca.it/id/eprint/452
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/452
2011-07-27T08:45:09Z
Dynamic programming for constrained optimal control of discrete-time linear hybrid systems
In this paper we study the solution to optimal control problems for constrained discrete-time linear hybrid systems based on quadratic or linear performance criteria. The aim of the paper is twofold. First, we give basic theoretical results on the structure of the optimal state-feedback solution and of the value function. Second, we describe how the state-feedback optimal control law can be constructed by combining multiparametric programming and dynamic programming.
Francesco Borrelli
Mato Baotic
Alberto Bemporad
alberto.bemporad@imtlucca.it
Manfred Morari
2011-07-27T08:45:06Z
2011-08-05T13:45:04Z
http://eprints.imtlucca.it/id/eprint/532
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/532
2011-07-27T08:45:06Z
Bounded model checking of hybrid dynamical systems
Bounded model checking (BMC) has recently emerged as a very powerful methodology for the verification of purely discrete systems. Given a horizon of interest, bounded model checking verifies whether all finite-horizon trajectories satisfy a temporal logic formula by first translating the problem to a large satisfiability (SAT) problem and then relying on extremely powerful state-of-the-art SAT solvers for a counterexample or a certification of safety. In this paper we consider the problem of bounded model checking for a general class of discrete-time hybrid systems. Critical to our approach is the abstraction of continuous trajectories under discrete observations with a purely discrete system that captures the same discrete sequences. Bounded model checking can then be applied to the purely discrete, abstracted system. The performance of our approach is illustrated by verifying temporal properties of a hybrid model of an electronic height controller.
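On the abstracted discrete system, the bounded check amounts to asking whether any path with at most k transitions reaches a bad state. A brute-force enumeration conveys the idea (a tiny made-up transition system stands in for the SAT encoding actually used in BMC):

```python
def bmc(trans, init, bad, k):
    """Search all paths with at most k transitions for one reaching a bad state.
    Returns a counterexample path, or None if the bounded check passes
    (a brute-force stand-in for the SAT encoding used in BMC)."""
    frontier = [[s] for s in init]
    for _ in range(k + 1):
        next_frontier = []
        for path in frontier:
            if path[-1] in bad:
                return path
            for s in trans.get(path[-1], []):
                next_frontier.append(path + [s])
        frontier = next_frontier
    return None

# Tiny abstracted system: from 0 we can loop or go to 1; from 1 we hit the bad state 2
trans = {0: [0, 1], 1: [2], 2: [2]}
```

With horizon k = 1 the check passes, while k = 2 exposes the counterexample [0, 1, 2]; the SAT-based approach answers the same question without enumerating paths explicitly.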
Nicolò Giorgetti
George J. Pappas
Alberto Bemporad
alberto.bemporad@imtlucca.it
2011-07-27T08:45:04Z
2014-03-05T13:44:04Z
http://eprints.imtlucca.it/id/eprint/536
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/536
2011-07-27T08:45:04Z
Event-driven optimal control of integral continuous-time hybrid automata
This paper proposes an event-driven optimal control strategy for hybrid systems composed of continuous-time integral dynamics with inputs and of finite state machines triggered by endogenous and exogenous events. Endogenous events are caused by continuous states or continuous inputs crossing certain linear thresholds or by the elapse of time intervals, while exogenous events are forced by changes of binary and continuous inputs. The advantages of the proposed strategy are the reduction of the amount of computation required to solve optimal control problems, and the reduction of approximation errors typical of discrete-time approaches. We examine several performance objectives and constraints that lead to mixed-integer linear or quadratic optimization problems and we exemplify the approach on a simple example.
Alberto Bemporad
alberto.bemporad@imtlucca.it
Stefano Di Cairano
Jorge Júlvez
2011-07-27T08:45:02Z
2011-08-05T13:50:39Z
http://eprints.imtlucca.it/id/eprint/531
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/531
2011-07-27T08:45:02Z
Model predictive control of hybrid systems with applications to supply chain management
Hybrid systems are dynamical systems whose behavior is determined by the interaction of continuous and discrete dynamics. Such systems arise in many real contexts, including automotive systems, chemical processes, communication networks, and supply chain management. A supply chain, whose goal is to transform ideas and raw materials into delivered products and services, is an example of a heterogeneous interconnection between continuous dynamics (inventory levels, material flows, etc.) and discrete dynamics (connection graphs, precedences, priorities, etc.). In general, in order to maximize a certain benefit or minimize certain costs, we have to optimally control all the heterogeneous components of the hybrid system. Model predictive control (MPC) is a well-known technique used in industry to (sub)optimally control dynamical processes, and is usually based on linear models.
This paper presents an overview of MPC techniques for hybrid systems. After giving a brief introduction to hybrid system models, model predictive control, and standard computation techniques, the paper summarizes recent results in using symbolic techniques and event-based formulations that exploit the particular structure of the hybrid process to come up with improved numerical computation schemes. The
concepts are illustrated through application examples in centralized management of supply chains.
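A minimal sketch of MPC-style optimization for a supply chain, on a toy single-product inventory problem with linear costs (all numbers are illustrative; the paper's models also include the discrete dynamics this sketch omits):

```python
from scipy.optimize import linprog

# Toy single-product supply chain over horizon N = 3 (illustrative numbers):
# inventory x[k+1] = x[k] + u[k] - d[k], orders u[k] >= 0, inventory x[k] >= 0,
# unit order cost 1 and per-step holding cost 0.1. Condensing gives a small LP.
x0, d, N = 2.0, [1.0, 2.0, 1.0], 3

# x[k+1] >= 0  <=>  -(u_0 + ... + u_k) <= x0 - (d_0 + ... + d_k)
A_ub, b_ub, cum_d = [], [], 0.0
for k in range(N):
    cum_d += d[k]
    A_ub.append([-1.0 if j <= k else 0.0 for j in range(N)])
    b_ub.append(x0 - cum_d)

# u_j is counted in inventories x[j+1], ..., x[N], hence N - j holding charges
c = [1.0 + 0.1 * (N - j) for j in range(N)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0.0, None)] * N, method="highs")
orders = res.x                      # optimal order plan, here [0, 1, 1]
```

In a receding-horizon implementation only the first order would be issued and the LP re-solved at the next period with updated inventory and demand forecasts.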
Alberto Bemporad
alberto.bemporad@imtlucca.it
Stefano Di Cairano
Nicolò Giorgetti
2011-07-27T08:44:59Z
2011-08-04T07:29:07Z
http://eprints.imtlucca.it/id/eprint/503
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/503
2011-07-27T08:44:59Z
Optimal control of discrete hybrid stochastic automata
This paper focuses on hybrid systems whose discrete state transitions depend on both deterministic and stochastic events. For such systems, after introducing a suitable hybrid model called Discrete Hybrid Stochastic Automaton (DHSA), different finite-time optimal control approaches are examined: (1) Stochastic Hybrid Optimal Control (SHOC), that “optimistically” determines the trajectory providing the best trade-off between the tracking performance and the probability that stochastic events realize as expected, under specified chance constraints; (2) Robust Hybrid Optimal Control (RHOC) that, in addition, less optimistically, ensures that the system remains within a specified safety region for all possible realizations of stochastic events. Sufficient conditions for the asymptotic convergence of the state vector are given for receding-horizon implementations of the above schemes. The proposed approaches are exemplified on a simple benchmark problem in production system management.
Alberto Bemporad
alberto.bemporad@imtlucca.it
Stefano Di Cairano
2011-07-27T08:44:57Z
2011-08-04T07:29:07Z
http://eprints.imtlucca.it/id/eprint/535
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/535
2011-07-27T08:44:57Z
Passivity analysis of discrete-time hybrid systems using piecewise polynomial storage functions
This paper proposes some sufficient criteria based on the computation of polynomial and piecewise polynomial storage functions for checking passivity of discrete-time hybrid systems in piecewise affine or piecewise polynomial form. The computation of such storage functions is performed by means of convex optimization techniques via the sum of squares decomposition of multivariate polynomials.
Alberto Bemporad
alberto.bemporad@imtlucca.it
Gianni Bianchini
Filippo Brogi
Graziano Chesi
2011-07-27T08:43:54Z
2013-09-13T09:50:06Z
http://eprints.imtlucca.it/id/eprint/492
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/492
2011-07-27T08:43:54Z
An algorithm for approximate multiparametric convex programming
For multiparametric convex nonlinear programming problems we propose a recursive algorithm for approximating, within a given suboptimality tolerance, the value function and an optimizer as functions of the parameters. The approximate solution is expressed as a piecewise affine function over a simplicial partition of a subset of the feasible parameters, and it is organized over a tree structure for efficiency of evaluation. Adaptations of the algorithm to deal with multiparametric semidefinite programming and multiparametric geometric programming are provided and exemplified. The approach is relevant for real-time implementation of several optimization-based feedback control strategies.
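In one parameter dimension the recursive idea reduces to interval bisection: keep the chord over a simplex (here an interval) if it approximates the convex value function within the prescribed tolerance, otherwise split. A minimal sketch, with an illustrative quadratic standing in for the true multiparametric value function:

```python
def approx_pwa(f, a, b, tol, pieces=None):
    """Piecewise-linear approximation of a convex f on [a, b]:
    keep the chord if its midpoint gap is within tol, else bisect."""
    if pieces is None:
        pieces = []
    m = 0.5 * (a + b)
    gap = 0.5 * (f(a) + f(b)) - f(m)   # chord error at the midpoint (>= 0 for convex f)
    if gap <= tol:
        pieces.append((a, b))
    else:
        approx_pwa(f, a, m, tol, pieces)
        approx_pwa(f, m, b, tol, pieces)
    return pieces

pieces = approx_pwa(lambda p: p * p, 0.0, 1.0, tol=0.01)
# 8 uniform pieces: for p**2 the chord gap on an interval of length L is L**2 / 4
```

The paper's algorithm does this over simplicial partitions in higher dimension, certifies the suboptimality bound rigorously, and stores the pieces in a tree for fast online evaluation.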
Alberto Bemporad
alberto.bemporad@imtlucca.it
Carlo Filippi
2011-07-27T08:43:43Z
2011-08-05T13:38:38Z
http://eprints.imtlucca.it/id/eprint/538
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/538
2011-07-27T08:43:43Z
Squaring the circle: An algorithm for generating polyhedral invariant sets from ellipsoidal ones
This paper presents a new (geometrical) approach to the computation of polyhedral positively invariant sets for general (possibly discontinuous) nonlinear systems, possibly affected by disturbances. Given a β-contractive ellipsoidal set E, the key idea is to construct a polyhedral set that lies between the ellipsoidal sets βE and E. A proof that the resulting polyhedral set is positively invariant (and contractive under an additional assumption) is given, and a new algorithm is developed to construct the desired polyhedral set. An advantage of the proposed method is that the problem of computing polyhedral invariant sets is formulated as a number of quadratic programming (QP) problems. The number of QP problems is guaranteed to be finite and therefore, the algorithm has finite termination. An important application of the proposed algorithm is the computation of polyhedral terminal constraint sets for model predictive control based on quadratic costs.
Mircea Lazar
Alessandro Alessio
Alberto Bemporad
alberto.bemporad@imtlucca.it
W.P.M.H. Heemels
2011-07-27T08:41:35Z
2011-08-05T13:35:14Z
http://eprints.imtlucca.it/id/eprint/540
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/540
2011-07-27T08:41:35Z
Feasible mode enumeration and cost comparison for explicit quadratic model predictive control of hybrid systems
For hybrid systems in piecewise affine (PWA) form, this paper presents a new methodology for computing the solution, defined over a set of (possibly overlapping) polyhedra, of the finite-time constrained optimal control problem based on quadratic costs. First, feasible mode sequences are determined via backward reachability analysis, and multiparametric quadratic programming is employed to determine candidate polyhedral regions of the solution and the corresponding value functions and optimal control gains. Then, the value functions associated with overlapping regions are compared in order to discard those regions whose associated control law is never optimal. The comparison problem is, in general, nonconvex and is tackled here as a DC (Difference of Convex functions) programming problem.
Alessandro Alessio
Alberto Bemporad
alberto.bemporad@imtlucca.it
2011-07-27T08:40:49Z
2011-08-04T07:29:07Z
http://eprints.imtlucca.it/id/eprint/544
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/544
2011-07-27T08:40:49Z
Model predictive control design: new trends and tools
Model-based design is well recognized in industry as a systematic approach to the development, evaluation, and implementation of feedback controllers. Model predictive control (MPC) is a particular branch of model-based design: a dynamical model of the open-loop process is explicitly used to construct an optimization problem aimed at achieving the prescribed system's performance under specified restrictions on input and output variables. The solution of the optimization problem provides the feedback control action, and can be either computed by embedding a numerical solver in the real-time control code, or pre-computed off-line and evaluated through a lookup table of linear feedback gains. This paper reviews the basic ideas of MPC design, from the traditional linear MPC setup based on quadratic programming to more advanced explicit and hybrid MPC, and highlights available software tools for the design, evaluation, code generation, and deployment of MPC controllers in real-time hardware platforms.
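The quadratic-programming setup mentioned above can be sketched in condensed form on a toy model (illustrative numbers; a real MPC design would use a dedicated QP solver rather than the general-purpose bounded solver used here):

```python
import numpy as np
from scipy.optimize import minimize

# Toy linear MPC for a discretized double integrator (all numbers illustrative).
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
N = 5                                  # prediction horizon
x0 = np.array([1.0, 0.0])

# Condensed prediction: stacking x_1 .. x_N gives  X = Phi x0 + Gamma U
Phi = np.vstack([np.linalg.matrix_power(A, k + 1) for k in range(N)])
Gamma = np.zeros((2 * N, N))
for i in range(N):
    for j in range(i + 1):
        Gamma[2 * i:2 * i + 2, j] = (np.linalg.matrix_power(A, i - j) @ B).ravel()

def cost(U):
    X = Phi @ x0 + Gamma @ U
    return X @ X + 0.1 * (U @ U)       # quadratic cost with Q = I, R = 0.1 I

# Box-constrained QP on the inputs, solved here with a general bounded solver
res = minimize(cost, np.zeros(N), method="L-BFGS-B", bounds=[(-1.0, 1.0)] * N)
u0 = res.x[0]                          # first input, applied in receding horizon
```

Only the first input is applied; the problem is re-solved at the next sample from the new state, which is exactly the receding-horizon mechanism the paper reviews.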
Alberto Bemporad
alberto.bemporad@imtlucca.it
2011-07-27T08:40:46Z
2011-08-05T13:23:15Z
http://eprints.imtlucca.it/id/eprint/455
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/455
2011-07-27T08:40:46Z
Optimal control of continuous-time switched affine systems
This paper deals with optimal control of switched piecewise affine autonomous systems, where the objective is to minimize a performance index over an infinite time horizon. We assume that the switching sequence has a finite length, and that the decision variables are the switching instants and the sequence of operating modes. We present two different approaches for solving such an optimal control problem. The first approach iterates between a procedure that finds an optimal switching sequence of modes, and a procedure that finds the optimal switching instants. The second approach is inspired by dynamic programming and identifies the regions of the state space where an optimal mode switch should occur, therefore providing a state feedback control law.
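The role of the switching instants as decision variables can be seen on a toy two-mode scalar system with a hand-built cost (our own illustration, not either of the paper's two algorithms):

```python
from math import exp
from scipy.optimize import minimize_scalar

# Toy problem (illustrative): mode 1 is fast but costly (rate a1 = -2, power price r = 0.5),
# mode 2 just holds the state (a2 = 0). One switch at time tau on [0, T], x(0) = 1.
T, r = 2.0, 0.5

def J(tau):
    # integral of x(t)^2 + r over [0, tau] (mode 1), plus x(tau)^2 * (T - tau) (mode 2)
    return (1.0 - exp(-4.0 * tau)) / 4.0 + r * tau + (T - tau) * exp(-4.0 * tau)

res = minimize_scalar(J, bounds=(0.0, T), method="bounded")
tau_star = res.x   # interior optimum near 0.6: run the fast mode just long enough
```

With a fixed mode sequence the cost is a smooth function of the switching instant, so a scalar search suffices; the paper's first approach alternates this step with a search over mode sequences.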
Carla Seatzu
Daniele Corona
Alessandro Giua
Alberto Bemporad
alberto.bemporad@imtlucca.it
2011-07-27T08:40:43Z
2012-03-30T10:34:59Z
http://eprints.imtlucca.it/id/eprint/495
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/495
2011-07-27T08:40:43Z
Logic-based solution methods for optimal control of hybrid systems
Combinatorial optimization over continuous and integer variables is a useful tool for solving complex optimal control problems of hybrid dynamical systems formulated in discrete-time. Current approaches are based on mixed-integer linear (or quadratic) programming (MIP), which provides the solution after solving a sequence of relaxed linear (or quadratic) programs. MIP formulations require the translation of the discrete/logic part of the hybrid problem into mixed-integer inequalities. Although this operation can be done automatically, most of the original symbolic structure of the problem (e.g., transition functions of finite state machines, logic constraints, symbolic variables, etc.) is lost during the conversion, with a consequent loss of computational performance. In this paper, we attempt to overcome such a difficulty by combining numerical techniques for solving convex programming problems with symbolic techniques for solving constraint satisfaction problems (CSP). The resulting "hybrid" solver proposed here takes advantage of CSP solvers for dealing with satisfiability of logic constraints very efficiently. We propose a suitable model of the hybrid dynamics and a class of optimal control problems that embrace both symbolic and continuous variables/functions, and that are tailored to the use of the new hybrid solver. The superiority in terms of computational performance with respect to commercial MIP solvers is shown on a centralized supply chain management problem with uncertain forecast demand.
Alberto Bemporad
alberto.bemporad@imtlucca.it
Nicolò Giorgetti
2011-07-27T08:39:11Z
2011-08-05T13:16:49Z
http://eprints.imtlucca.it/id/eprint/508
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/508
2011-07-27T08:39:11Z
Suboptimal model predictive control of hybrid systems based on mode-switching constraints
Model predictive control (MPC) is recognized as a very versatile and effective way of controlling constrained hybrid dynamical systems in closed-loop. The main drawback of hybrid MPC is the heavy computation burden of the associated on-line mixed-integer optimization. Explicit MPC solutions overcome such a problem by rewriting the control law in piecewise affine form, but are limited to relatively simple hybrid control problem setups. This paper presents an alternative approach for reducing the complexity of computations by suitably constraining the mode sequence over the prediction horizon, so that on-line optimization is solved more quickly. While tracking performance of the feedback loop may be affected because of the suboptimality of the approach, closed-loop stability is guaranteed. The effectiveness of the method is demonstrated by an example.
Ari Ingimundarson
Carlos Ocampo-Martinez
Alberto Bemporad
alberto.bemporad@imtlucca.it
2011-07-27T08:39:08Z
2011-08-04T07:29:07Z
http://eprints.imtlucca.it/id/eprint/545
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/545
2011-07-27T08:39:08Z
A wireless magneto-resistive sensor network for real-time vehicle detection
This work describes a prototype wireless sensor network for vehicle detection developed at the University of Siena in collaboration with the Italian highways society Autostrade S.p.A. Each wireless sensor node is composed of an in-house designed electronic board driving a 2-axis Honeywell HMC1002 magneto-resistive sensor interfaced to a Telos rev.b (Moteiv Corporation) mote, and of a Matlab/Simulink interface for collecting and processing sensor data in (soft) real-time.
Alberto Bemporad
alberto.bemporad@imtlucca.it
F. Gentile
A. Mecocci
Francesco Molendi
F. Rossi
2011-07-27T08:39:02Z
2011-08-04T07:29:06Z
http://eprints.imtlucca.it/id/eprint/520
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/520
2011-07-27T08:39:02Z
Hybrid model predictive control based on wireless sensor feedback: an experimental study
This paper presents the design and the experimental validation of model predictive control (MPC) of a hybrid dynamical process based on measurements collected by a wireless sensor network. The proposed setup is the prototype of an industrial application in which a remote station controls the process via wireless network links. The experimental platform is a laboratory process consisting of four infrared lamps, controlled in pairs by two on/off switches, and of a transport belt, where moving parts equipped with wireless sensors are heated by the lamps. By approximating the stationary heat spatial distribution as a piecewise affine function of the position along the belt, the resulting plant model is a hybrid dynamical system. The control architecture is based on the reference governor approach: the process is actuated by a local controller, while a hybrid MPC algorithm running on a remote base station sends optimal belt velocity set-points and lamp on/off commands over a network link exploiting the information received through the wireless network. A discrete-time hybrid model of the process is used for the hybrid MPC algorithm and for the state estimator.
Alberto Bemporad
alberto.bemporad@imtlucca.it
Stefano Di Cairano
Erik Henriksson
Karl Henrik Johansson
2011-07-27T08:39:00Z
2014-07-08T13:09:40Z
http://eprints.imtlucca.it/id/eprint/497
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/497
2011-07-27T08:39:00Z
(edited by) Hybrid Systems: Computation and Control, 10th International Workshop, HSCC 2007, Pisa, Italy, April 3-5, 2007. Proceedings
Alberto Bemporad
alberto.bemporad@imtlucca.it
Antonio Bicchi
Giorgio Buttazzo
2011-07-27T08:38:55Z
2014-01-20T15:13:54Z
http://eprints.imtlucca.it/id/eprint/506
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/506
2011-07-27T08:38:55Z
Moving target detection and tracking in wireless sensor networks
Stefano Di Cairano
Alberto Bemporad
alberto.bemporad@imtlucca.it
Angelita Caldelli
2011-07-27T08:36:24Z
2011-08-05T12:57:10Z
http://eprints.imtlucca.it/id/eprint/514
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/514
2011-07-27T08:36:24Z
A control Lyapunov approach to predictive control of hybrid systems
In this paper we consider the stabilization of hybrid systems with both continuous and discrete dynamics via predictive control. To deal with the presence of discrete dynamics we adopt a “hybrid” control Lyapunov function approach, which consists of using two different functions.
A Lyapunov-like function is designed to ensure finite-time convergence of the discrete state to a target value, while asymptotic stability of the continuous state is guaranteed via a classical local control Lyapunov function. We show that by combining these two functions in a proper manner it is no longer necessary that the control Lyapunov function for the continuous dynamics decreases at each time step. This leads to a significant reduction of conservativeness in contrast with classical Lyapunov based predictive control. Furthermore, the proposed approach also leads
to a reduction of the horizon length needed for recursive feasibility with respect to standard predictive control approaches.
Stefano Di Cairano
Mircea Lazar
Alberto Bemporad
alberto.bemporad@imtlucca.it
W.P.M.H. Heemels
2011-07-27T08:36:21Z
2011-08-05T12:53:50Z
http://eprints.imtlucca.it/id/eprint/515
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/515
2011-07-27T08:36:21Z
Discrete and hybrid stochastic state estimation algorithms for networked control systems
Networked control systems enable flexible system operation and reduce the cost of installation and maintenance, potentially at the price of increased uncertainty due to information exchange over the network. We focus on the problem of information loss in terms of packet drops, which are modelled as stochastic events that depend on the current state of the network. To design reliable control systems, the state of the network must be estimated online, together with the state of the controlled process. This paper proposes various approaches to discrete and hybrid stochastic estimation of network and process states, where the network is modelled as a Markov chain and the packet drop probability depends on the states of the Markov chain. The proposed techniques are evaluated on simulations and experimental data.
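The discrete part of the estimation problem can be sketched as a Bayesian (HMM) filter on a two-state network model, where each packet reception or drop updates the belief over the Markov chain (all probabilities here are made up for illustration):

```python
import numpy as np

# Illustrative two-state network model (good = 0, bad = 1); numbers are made up.
T = np.array([[0.9, 0.1],    # P(next state | current = good)
              [0.3, 0.7]])   # P(next state | current = bad)
obs_lik = {1: np.array([0.95, 0.2]),   # P(packet received | state)
           0: np.array([0.05, 0.8])}   # P(packet dropped  | state)

def filter_step(belief, received):
    """One step of the discrete Bayesian (HMM) filter: predict, then correct."""
    pred = T.T @ belief                 # time update through the Markov chain
    post = obs_lik[received] * pred     # measurement update from the drop/receive event
    return post / post.sum()

belief = np.array([0.5, 0.5])
for y in [1, 1, 0, 0, 0]:               # two receptions followed by three drops
    belief = filter_step(belief, y)
```

After a run of drops the belief concentrates on the bad network state; the hybrid estimators in the paper run this discrete filter jointly with continuous process-state estimation.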
Stefano Di Cairano
K.H. Johansson
Alberto Bemporad
alberto.bemporad@imtlucca.it
Richard M. Murray
2011-07-27T08:36:19Z
2012-04-26T10:50:02Z
http://eprints.imtlucca.it/id/eprint/616
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/616
2011-07-27T08:36:19Z
Assessment of decentralized model predictive control techniques for power networks
Model predictive control (MPC) is one of the few advanced control methodologies that have proven to be very successful in real-life control applications. MPC has the capability to guarantee optimality with respect to a desired performance cost function, while explicitly taking constraints into account. Recently, there has been an increasing interest in the usage of MPC schemes to control power networks. The major obstacle for implementation lies in the large scale of power networks, which is prohibitive for a centralized approach. In this paper we critically assess and compare the suitability of three model predictive control schemes for controlling power networks. These techniques are analyzed with respect to the following relevant characteristics: the performance of the closed-loop system, which is evaluated and compared to the performance achieved with the classical automatic generation control (AGC) structure; the decentralized implementation, which is investigated in terms of size of the models used for prediction, required measurements and data communication, type of cost function and the computational time required by each algorithm to obtain the control action. Based on the investigated properties mentioned above, the study presented in this paper provides valuable insights that can contribute to the successful decentralized implementation of MPC in real-life electrical power networks.
Armand Damoiseaux
Andrej Jokic
Mircea Lazar
Alessandro Alessio
Paul Van den bosch
Ian Hiskens
Alberto Bemporad
alberto.bemporad@imtlucca.it
2011-07-27T08:36:17Z
2011-10-12T14:23:48Z
http://eprints.imtlucca.it/id/eprint/614
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/614
2011-07-27T08:36:17Z
Energy-aware robust model predictive control based on wireless sensor feedback
Flexibility, ease of deployment and of spatial reconfiguration, and low cost make wireless sensor networks (WSNs) a fundamental component of modern networked control systems. However, due to the energy-constrained nature of WSNs, the transmission rate of the sensor nodes is a critical aspect to take into account in control design. This paper makes two main contributions. First, a general transmission strategy for communication between controller and sensors is proposed. Then, a scenario with a controller and a wireless node providing measures is investigated, and two energy-aware control schemes based on explicit model predictive control (MPC) are presented. We consider both nominal and robust control in the presence of disturbances, and convergence properties are given for the latter. The proposed control schemes are tested and compared to traditional MPC techniques. The results show the effectiveness of the proposed energy-aware approach, which achieves a profitable trade-off between energy consumption of wireless sensors and loss in system performance.
Daniele Bernardini
daniele.bernardini@imtlucca.it
Alberto Bemporad
alberto.bemporad@imtlucca.it
2011-07-27T08:35:38Z
2011-08-05T12:47:58Z
http://eprints.imtlucca.it/id/eprint/617
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/617
2011-07-27T08:35:38Z
Convergence properties of dynamic agents consensus networks with broken links
Convergence properties of distributed consensus protocols on networks of dynamical agents have been analyzed by combinations of algebraic graph theory and control theory tools under certain assumptions, such as strong connectivity. Strong connectivity can be regarded as the requirement that the information of each agent propagates to all the others, possibly with intermediate steps and manipulations. However, because of network failures or malicious attacks, it is possible that this assumption no longer holds, so that some agents are only receiving or only transmitting information from other subsets of agents. In this case, strong connectivity is replaced by weak connectivity. We analyze the convergence properties of distributed consensus on directed graphs with weakly connected components. We show conditions for which the agreement is reached, and, for the cases in which such conditions do not hold, we provide bounds on the residual disagreement.
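The weakly connected case can be illustrated with a toy row-stochastic iteration in which one agent only transmits (an illustrative three-agent graph, not the paper's general analysis): agreement is still reached, pinned to the transmitting agent's value.

```python
import numpy as np

# Weakly connected digraph: agent 0 only transmits (a "leader"), agents 1, 2 listen.
# Row-stochastic update x <- W x; agreement is reached at agent 0's value.
W = np.array([[1.0, 0.0, 0.0],
              [0.5, 0.5, 0.0],
              [0.0, 0.5, 0.5]])

x = np.array([1.0, 0.0, 0.0])
for _ in range(200):
    x = W @ x    # each agent averages its own value with what it receives
```

When several components only receive from disagreeing sources, no such limit exists; the paper's bounds quantify the residual disagreement in that case.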
Stefano Di Cairano
Alessandro Pasini
Alberto Bemporad
alberto.bemporad@imtlucca.it
Richard M. Murray
2011-07-27T08:35:37Z
2012-03-02T15:26:02Z
http://eprints.imtlucca.it/id/eprint/476
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/476
2011-07-27T08:35:37Z
Passivity analysis and passification of discrete-time hybrid systems
For discrete-time hybrid systems in piecewise affine or piecewise polynomial (PWP) form, this note proposes sufficient passivity analysis and synthesis criteria based on the computation of piecewise quadratic or PWP storage functions. By exploiting linear matrix inequality techniques and sum-of-squares decomposition methods, passivity analysis and synthesis of passifying controllers can be carried out through standard semidefinite programming packages, providing a tool that is particularly important for the stability analysis of interconnected heterogeneous dynamical systems.
Alberto Bemporad
alberto.bemporad@imtlucca.it
Gianni Bianchini
Filippo Brogi
2011-07-27T08:35:35Z
2011-08-05T12:49:32Z
http://eprints.imtlucca.it/id/eprint/477
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/477
2011-07-27T08:35:35Z
Efficient on-line computation of constrained optimal control
For discrete-time linear time-invariant systems with constraints on inputs and outputs, the constrained finite-time optimal controller can be obtained explicitly as a piecewise-affine function of the initial state via multi-parametric programming. By exploiting the properties of the value function, we present two algorithms that efficiently perform the online evaluation of the explicit optimal control law in terms of both storage demands and computational complexity. The algorithms are particularly effective when used for model predictive control (MPC), where an open-loop constrained finite-time optimal control problem has to be solved at each sampling time.
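A minimal sketch of the online evaluation step is a point location over the polyhedral partition followed by an affine gain. The region data and gains below are an illustrative toy, not from the paper:

```python
import numpy as np

def eval_pwa_control(regions, x, tol=1e-9):
    """Evaluate an explicit piecewise-affine control law: locate the
    polyhedral region {z : H z <= k} containing x, then apply its
    affine gain u = F x + g."""
    for H, k, F, g in regions:
        if np.all(H @ x <= k + tol):
            return F @ x + g
    raise ValueError("state outside the explored partition")

# Toy 1-state partition: u = -2x on [-1, 0], u = -x on [0, 1]
regions = [
    (np.array([[1.0], [-1.0]]), np.array([0.0, 1.0]),
     np.array([[-2.0]]), np.array([0.0])),
    (np.array([[-1.0], [1.0]]), np.array([0.0, 1.0]),
     np.array([[-1.0]]), np.array([0.0])),
]
```

The linear scan above is the naive baseline; the paper's two algorithms exploit properties of the value function precisely to avoid enumerating all regions at each sampling time.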
Francesco Borrelli
Mato Baotic
Alberto Bemporad
alberto.bemporad@imtlucca.it
Manfred Morari
2011-07-27T08:34:45Z
2014-07-08T12:41:47Z
http://eprints.imtlucca.it/id/eprint/618
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/618
2011-07-27T08:34:45Z
Stability conditions for decentralized model predictive control under packet drop communication
We propose a decentralized model predictive control (MPC) design approach for possibly large-scale processes whose structure may not be dynamically decoupled. The decoupling assumption only appears in the prediction models used by the different MPC control agents. In [1] we presented a sufficient criterion for analyzing a posteriori the asymptotic stability of the process model in closed loop with the set of decentralized MPC controllers. The communication model among neighboring MPC controllers was assumed to be faultless, so that each MPC could successfully receive the information about the states of its corresponding submodel. Here we present a sufficient condition ensuring stability of the overall closed-loop system when a certain number of packets containing state measurements may be lost.
Alessandro Alessio
Alberto Bemporad
alberto.bemporad@imtlucca.it
2011-07-27T08:34:07Z
2013-02-20T10:16:40Z
http://eprints.imtlucca.it/id/eprint/432
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/432
2011-07-27T08:34:07Z
Hierarchical and hybrid model predictive control of quadcopter air vehicles
This paper proposes a hierarchical hybrid MPC approach to design feedback control functions for stabilization and autonomous navigation of unmanned air vehicles. After formulating the nonlinear dynamical equations of a "quadcopter" air vehicle, a linear MPC controller is designed to stabilize the vehicle around commanded desired set-points. These are generated at a slower sampling rate by a hybrid MPC controller at the upper control layer, based on a hybrid dynamical model of the UAV and of its surrounding environment, with the overall goal of controlling the vehicle to a target set-point while avoiding obstacles. The performance of the complete hierarchical control scheme is assessed through simulations and visualization in a virtual 3D environment, showing the ability of linear MPC to handle the strong couplings among the dynamical variables of the quadcopter under various torque and angle/position constraints, and the flexibility of hybrid MPC in planning the desired trajectory on-line.
Alberto Bemporad
alberto.bemporad@imtlucca.it
Carlo A. Pascucci
Claudio Rocchi
2011-07-27T08:34:04Z
2011-08-04T07:29:06Z
http://eprints.imtlucca.it/id/eprint/442
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/442
2011-07-27T08:34:04Z
Multiobjective model predictive control
This paper proposes a novel model predictive control (MPC) scheme based on multiobjective optimization. At each sampling time, the MPC control action is chosen among the set of Pareto optimal solutions based on a time-varying, state-dependent decision criterion. Compared to standard single-objective MPC formulations, such a criterion allows one to take into account several, often irreconcilable, control specifications, such as high bandwidth (closed-loop promptness) when the state vector is far away from the equilibrium and low bandwidth (good noise rejection properties) near the equilibrium. After recasting the optimization problem associated with the multiobjective MPC controller as a multiparametric multiobjective linear or quadratic program, we show that it is possible to compute each Pareto optimal solution as an explicit piecewise affine function of the state vector and of the vector of weights to be assigned to the different objectives in order to get that particular Pareto optimal solution. Furthermore, we provide conditions for selecting Pareto optimal solutions so that the MPC control loop is asymptotically stable, and show the effectiveness of the approach in simulation examples.
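The parametric dependence of each Pareto optimal solution on the weight vector can already be seen in a scalar toy problem with two quadratic objectives, solved in closed form. This toy is illustrative only, not the paper's formulation:

```python
def pareto_point(w1, w2):
    """Minimizer of the weighted sum w1*(u-1)**2 + w2*(u+1)**2.
    Setting the derivative to zero gives 2*w1*(u-1) + 2*w2*(u+1) = 0,
    i.e. u = (w1 - w2) / (w1 + w2); sweeping the weights traces the
    whole Pareto set, here the interval [-1, 1]."""
    return (w1 - w2) / (w1 + w2)
```

Each choice of weights selects one Pareto optimal solution explicitly, mirroring (in miniature) the explicit dependence on the weight vector that the multiparametric formulation provides.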
Alberto Bemporad
alberto.bemporad@imtlucca.it
David Muñoz de la Peña
2011-07-27T08:34:02Z
2011-11-17T14:32:55Z
http://eprints.imtlucca.it/id/eprint/435
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/435
2011-07-27T08:34:02Z
Multiobjective model predictive control based on convex piecewise affine costs
This paper proposes a novel model predictive control (MPC) scheme based on multiobjective optimization. At each sampling time, the MPC control action is chosen among the set of Pareto optimal solutions based on a time-varying and state-dependent decision criterion. After recasting the optimization problem associated with the multiobjective MPC controller as a multiparametric multiobjective linear problem, we show that it is possible to compute each Pareto optimal solution as an explicit piecewise affine function of the state vector and of the vector of weights to be assigned to the different objectives in order to get that particular Pareto optimal solution. Furthermore, we provide conditions for selecting Pareto optimal solutions so that the MPC control loop is asymptotically stable, and show the effectiveness of the approach in simulation examples.
Alberto Bemporad
alberto.bemporad@imtlucca.it
David Muñoz de la Peña
2011-07-27T08:32:34Z
2014-03-05T13:43:45Z
http://eprints.imtlucca.it/id/eprint/475
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/475
2011-07-27T08:32:34Z
Event-driven optimization-based control of hybrid systems with integral continuous-time dynamics
In this paper we introduce a class of continuous-time hybrid dynamical systems called integral continuous-time hybrid automata (icHA) for which we propose an event-driven optimization-based control strategy. Events include both external actions applied to the system and changes of continuous dynamics (mode switches). The icHA formalism subsumes a number of hybrid dynamical systems with practical interest, e.g., linear hybrid automata. Different cost functions, including minimum-time and minimum-effort criteria, and constraints are examined in the event-driven optimal control formulation. This is translated into a finite-dimensional mixed-integer optimization problem, in which the event instants and the corresponding values of the control input are the optimization variables. As a consequence, the proposed approach has the advantage of automatically adjusting the attention of the controller to the frequency of event occurrence in the hybrid process. A receding horizon control scheme exploiting the event-based optimal control formulation is proposed as a feedback control strategy and proved to ensure either finite-time or asymptotic convergence of the closed-loop.
Stefano Di Cairano
Alberto Bemporad
alberto.bemporad@imtlucca.it
Jorge Júlvez
2011-07-27T08:31:54Z
2011-08-05T12:39:58Z
http://eprints.imtlucca.it/id/eprint/607
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/607
2011-07-27T08:31:54Z
Automotive control
Luca Benvenuti
Andrea Balluchi
Alberto Bemporad
alberto.bemporad@imtlucca.it
Stefano Di Cairano
Bengt Johansson
Rolf Johansson
Alberto Sangiovanni Vincentelli
Per Tunest
2011-07-27T08:31:52Z
2011-08-05T12:34:42Z
http://eprints.imtlucca.it/id/eprint/431
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/431
2011-07-27T08:31:52Z
Decentralized model predictive control of dynamically-coupled linear systems: tracking under packet loss
For large-scale processes whose dynamics can be represented as the interaction of several dynamically-coupled linear subsystems, this paper proposes a decentralized model predictive control (MPC) design approach for set-point tracking under input constraints and possible loss of information packets. Following earlier results in (Alessio and Bemporad, 2007 and 2008), the global model of the process is approximated as the decomposition of several (possibly overlapping) smaller models used for local predictions. We present sufficient criteria for asymptotic tracking of output set-points and rejection of constant measured disturbances when the overall process is in closed loop with the set of decentralized MPC controllers, under possible intermittent lack of communication of measurement data between controllers. The effectiveness of the approach is shown in a simulation example on distributed temperature control in the passenger area of a railcar.
Davide Barcelli
Alberto Bemporad
alberto.bemporad@imtlucca.it
2011-07-27T08:31:50Z
2011-08-05T12:33:40Z
http://eprints.imtlucca.it/id/eprint/511
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/511
2011-07-27T08:31:50Z
A survey on explicit model predictive control
Explicit model predictive control (MPC) addresses the problem of removing one of the main drawbacks of MPC, namely the need to solve a mathematical program on line to compute the control action. This computation prevents the application of MPC in several contexts, either because the computer technology needed to solve the optimization problem within the sampling time is too expensive or simply infeasible, or because the computer code implementing the numerical solver causes software certification concerns, especially in safety-critical applications.
Explicit MPC allows one to solve the optimization problem off-line for a given range of operating conditions of interest. By exploiting multiparametric programming techniques, explicit MPC computes the optimal control action off line as an “explicit” function of the state and reference vectors, so that on-line operations reduce to a simple function evaluation. Such a function is piecewise affine in most cases, so that the MPC controller maps into a lookup table of linear gains.
In this paper we survey the main contributions on explicit MPC appeared in the scientific literature. After recalling the basic concepts and problem formulations of MPC, we review the main approaches to solve explicit MPC problems, including a novel and simple suboptimal practical approach to reduce the complexity of the explicit form. The paper concludes with some comments on future research directions.
Alessandro Alessio
Alberto Bemporad
alberto.bemporad@imtlucca.it
2011-07-27T08:30:07Z
2011-08-04T07:29:06Z
http://eprints.imtlucca.it/id/eprint/426
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/426
2011-07-27T08:30:07Z
On the synthesis of piecewise affine control laws
Piecewise affine (PWA) control laws offer an attractive solution to real-time control of linear, nonlinear and hybrid systems. In this paper we provide a compact exposition of the existing state-of-the-art methods for the synthesis of PWA control laws using optimization-based methods.
Alberto Bemporad
alberto.bemporad@imtlucca.it
W.P.M.H. Heemels
Mircea Lazar
2011-07-27T08:29:39Z
2011-11-17T11:26:40Z
http://eprints.imtlucca.it/id/eprint/427
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/427
2011-07-27T08:29:39Z
A stochastic model predictive control approach for series hybrid electric vehicle power management
This paper illustrates the use of stochastic model predictive control (SMPC) for power management in vehicles equipped with advanced hybrid powertrains. Hybrid vehicles use two or more distinct power sources for propulsion, and their complex powertrain architecture requires the coordination of all the subsystems to achieve target performances in terms of fuel consumption, driveability, component life-time, exhaust emissions. Many control strategies have been presented and successfully applied, mainly based on heuristics or rules and tuned on certain reference drive cycles. To take into account that cycles are not exactly known a priori in driving routine, this paper proposes a stochastic approach for the power management problem. We focus on a series hybrid electric vehicle (HEV), which combines an internal combustion engine and an electric motor. The power demand from the driver is modeled as a Markov chain estimated on several driving cycles and used to generate scenarios in the SMPC law. Simulation results over a standard driving cycle are presented to demonstrate the effectiveness of the proposed stochastic approach and compared with other deterministic approaches.
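The Markov-chain demand model and scenario generation can be sketched as follows. State indices stand for quantized power-demand levels; the sequence and state count are illustrative, not drive-cycle data from the paper:

```python
import numpy as np

def estimate_markov_chain(seq, n_states):
    """Maximum-likelihood transition matrix from an observed sequence
    of quantized power-demand levels (row-normalized counts)."""
    counts = np.zeros((n_states, n_states))
    for a, b in zip(seq[:-1], seq[1:]):
        counts[a, b] += 1.0
    rows = counts.sum(axis=1, keepdims=True)
    rows[rows == 0.0] = 1.0  # leave unvisited states as all-zero rows
    return counts / rows

def sample_scenario(T, s0, horizon, rng):
    """Draw one demand scenario over the prediction horizon from chain T,
    as used to generate scenarios for the SMPC law."""
    path, s = [s0], s0
    for _ in range(horizon):
        s = int(rng.choice(len(T), p=T[s]))
        path.append(s)
    return path
```

In the SMPC scheme, several such sampled scenarios would be weighted by their probabilities inside the finite-horizon optimization at each sampling time.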
Giulio Ripaccioli
Daniele Bernardini
daniele.bernardini@imtlucca.it
Stefano Di Cairano
Alberto Bemporad
alberto.bemporad@imtlucca.it
Ilya Kolmanovsky
2011-07-27T08:29:32Z
2011-11-17T11:10:29Z
http://eprints.imtlucca.it/id/eprint/428
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/428
2011-07-27T08:29:32Z
Stability analysis of stochastic networked control systems
In this paper, we study the stability of Networked Control Systems (NCSs) that are subject to time-varying transmission intervals, time-varying transmission delays, packet dropouts and communication constraints. Communication constraints impose that, per transmission, only one sensor or actuator node can access the network and send its information. Which node is given access to the network at a transmission time is orchestrated by a so-called network protocol. This paper considers NCSs in which the transmission intervals and transmission delays are described by a random process having a continuous probability density function (PDF). By focusing on linear plants and controllers and on periodic and quadratic protocols, we present a modelling framework for NCSs based on stochastic discrete-time switched linear systems. Stability (in the mean-square sense) of these systems is analysed using convex overapproximations and a finite number of linear matrix inequalities. On a benchmark example of a batch reactor, we illustrate the effectiveness of the developed theory.
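For the simpler case of i.i.d. (rather than protocol- or Markov-driven) switching, mean-square stability of a stochastic switched linear system admits a compact spectral test, shown below as a simplified stand-in for the paper's LMI-based conditions:

```python
import numpy as np

def mean_square_stable(As, probs):
    """Mean-square stability of x_{k+1} = A_i x_k with i.i.d. mode
    i ~ probs: stable iff the spectral radius of
    sum_i p_i * kron(A_i, A_i) is strictly below 1, since that matrix
    propagates the second moment E[x x']."""
    M = sum(p * np.kron(A, A) for p, A in zip(probs, As))
    return bool(np.max(np.abs(np.linalg.eigvals(M))) < 1.0)
```

The paper's setting is richer (continuous delay distributions, network protocols), which is why it resorts to convex overapproximations and LMIs rather than this closed-form criterion.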
M.C.F. Donkers
W.P.M.H. Heemels
Daniele Bernardini
daniele.bernardini@imtlucca.it
Alberto Bemporad
alberto.bemporad@imtlucca.it
Vsevolod Shneer
2011-07-27T08:29:22Z
2016-04-06T10:25:58Z
http://eprints.imtlucca.it/id/eprint/425
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/425
2011-07-27T08:29:22Z
Steering vehicle control by switched model predictive control
We propose a switching Model Predictive Control (MPC) strategy to control vehicle steering by actuating active front steering (AFS) and electronic stability control (ESC). After describing the piecewise affine prediction model used for MPC design, where the nonlinearities arise from the relation between sideslip angles and tire forces, a switching MPC strategy is implemented, where different local MPC controllers are used depending on the current tire force conditions. The designed controller maintains most of the benefits of a previously designed hybrid model predictive controller, but it has lower complexity and allows more flexible design. The controller stability is verified and the controller behavior during challenging step steering maneuvers is tested in closed-loop simulations against a nonlinear vehicle model.
Stefano Di Cairano
H. E. Tseng
Daniele Bernardini
daniele.bernardini@imtlucca.it
Alberto Bemporad
alberto.bemporad@imtlucca.it
2011-07-27T08:29:14Z
2012-07-27T07:16:41Z
http://eprints.imtlucca.it/id/eprint/439
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/439
2011-07-27T08:29:14Z
Model predictive control tuning by controller matching
The effectiveness of model predictive control (MPC) in dealing with input and state constraints during transient operations is well known. However, in contrast with several linear control techniques, closed-loop frequency-domain properties such as sensitivities and robustness to small perturbations are usually not taken into account in the MPC design. This technical note considers the problem of tuning an MPC controller that behaves as a given linear controller when the constraints are not active (e.g., for perturbations around the equilibrium that remain within the given input and state bounds), therefore inheriting the small-signal properties of the linear control design, and that still optimally deals with constraints during transients. We provide two methods for selecting the MPC weight matrices so that the resulting MPC controller behaves as the given linear controller, therefore solving the posed inverse problem of controller matching, and is globally asymptotically stable.
Stefano Di Cairano
Alberto Bemporad
alberto.bemporad@imtlucca.it
2011-07-27T08:29:08Z
2011-11-17T11:47:21Z
http://eprints.imtlucca.it/id/eprint/436
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/436
2011-07-27T08:29:08Z
Energy-aware robust model predictive control with feedback from multiple noisy wireless sensors
Flexibility, ease of deployment and of spatial reconfiguration, and low cost make wireless sensor networks (WSNs) a fundamental component of modern networked control systems. However, due to the energy-constrained nature of WSNs, the transmission rate of the sensor nodes is a critical aspect to take into account in control design. This paper makes two main contributions. First, a general transmission strategy for communication between controller and sensors is proposed. Then, a scenario with a controller and a wireless node providing measurements is investigated, and two energy-aware control schemes based on explicit model predictive control (MPC) are presented. We consider both nominal and robust control in the presence of disturbances, and convergence properties are given for the latter. The proposed control schemes are tested and compared to traditional MPC techniques. The results show the effectiveness of the proposed energy-aware approach, which achieves a profitable trade-off between the energy consumption of the wireless sensors and the loss in system performance.
Daniele Bernardini
daniele.bernardini@imtlucca.it
Alberto Bemporad
alberto.bemporad@imtlucca.it
2011-07-08T12:40:57Z
2012-03-02T15:23:59Z
http://eprints.imtlucca.it/id/eprint/438
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/438
2011-07-08T12:40:57Z
Equivalent piecewise affine models of linear hybrid automata
In this technical note we examine the relationship between linear hybrid automata (LHA) and piecewise affine (PWA) systems. While a LHA is an autonomous non-deterministic model, a PWA system is a deterministic model with inputs. The key idea is to model the uncertainty associated with LHA transitions as input disturbances in a PWA model; by extending continuous-time PWA models to include the dynamics of discrete states and resets, we show in a constructive way that a LHA can be equivalently represented as a PWA system, where equivalent means that the two systems generate the same trajectories. Besides filling in a missing theoretical link between the LHA and PWA modelling frameworks, the result has the practical advantage of extending the use of several existing control-theoretic tools developed for PWA models to a wider class of hybrid systems.
Stefano Di Cairano
Alberto Bemporad
alberto.bemporad@imtlucca.it
2011-07-08T11:23:17Z
2011-08-04T07:29:07Z
http://eprints.imtlucca.it/id/eprint/542
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/542
2011-07-08T11:23:17Z
An equivalence result between linear hybrid automata and piecewise affine systems
In this paper we examine the relationship between linear hybrid automata (LHA) and piecewise affine (PWA) systems. While a LHA is an autonomous non-deterministic model, a PWA system is a deterministic model with inputs. By extending continuous-time PWA models to include the dynamics of discrete states and resets, we show in a constructive way that a LHA can be equivalently represented as a PWA system, where equivalent means that the two systems generate the same trajectories. The key idea is to model the uncertainty associated with LHA transitions as an additional vector of input disturbances in the corresponding PWA model. By linking the LHA modelling framework (popular in computer science) with the PWA modelling framework (popular in systems science), our equivalence result allows one to extend the use of several existing control-theoretic tools (for stability analysis, optimal control, etc.) developed for PWA models to a much wider class of hybrid systems.
Stefano Di Cairano
Alberto Bemporad
alberto.bemporad@imtlucca.it
2011-07-07T14:09:25Z
2011-08-04T07:29:06Z
http://eprints.imtlucca.it/id/eprint/441
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/441
2011-07-07T14:09:25Z
Hybrid model predictive control based on wireless sensor feedback: An experimental study
Design and experimental validation of model predictive control (MPC) of a hybrid dynamical laboratory process with wireless sensors is presented. The laboratory process consists of four infrared lamps, controlled in pairs by two on/off switches, and of a transport belt, where moving parts equipped with wireless sensors are heated by the lamps. The process, which is motivated by heating processes in the plastic and printing industry, presents interesting hybrid dynamics. By approximating the stationary heat spatial distribution as a piecewise affine function of the position along the belt, the resulting plant model is a hybrid dynamical system. The control architecture is based on the reference governor approach: the process is actuated by a local controller, while a hybrid MPC algorithm running on a remote base station sends optimal belt velocity setpoints and lamp on/off commands over a wireless link, exploiting the sensor information received through the wireless network. A discrete-time hybrid model of the process is used for the hybrid MPC algorithm and for the state estimator. The physical modelling of the process and the hybrid MPC algorithm are presented in detail, together with the hardware and software architectures. The experimental results show that the presented theoretical framework is well suited for control of the new laboratory process, and that the process can be used as a prototype system for evaluating hybrid and networked control strategies.
Alberto Bemporad
alberto.bemporad@imtlucca.it
Stefano Di Cairano
Erik Henriksson
Karl Henrik Johansson
2011-07-07T13:46:12Z
2011-11-17T10:55:29Z
http://eprints.imtlucca.it/id/eprint/440
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/440
2011-07-07T13:46:12Z
Optimization-based feedback control of flatness in a cold tandem rolling mill
For the problem of automatic flatness control (AFC) in cold tandem mills, this paper proposes control techniques based on quadratic optimization and delay compensation. Three different strategies are presented and compared: a centralized solution based on a global quadratic programming (QP) problem that decides the commands to all the actuators, and two decentralized solutions where each actuator command is optimized locally. All schemes are based on a global exchange of information about the commands generated at the previous time step at each stand to compensate for the numerous delays present in the mill. Control algorithms are tested in simulation considering a tandem mill with five stands as a benchmark, and results are shown to demonstrate the performance of the proposed schemes.
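The core step of the centralized strategy, deciding all actuator commands from one quadratic program, reduces in the unconstrained case to a single linear solve. The matrices below are illustrative toys, not mill data:

```python
import numpy as np

def centralized_flatness_qp(Q, c):
    """Unconstrained version of the centralized QP:
    minimize 0.5 * u'Qu + c'u jointly over all actuator commands.
    The optimum solves Q u = -c; the paper's constrained version
    would add actuator limits on top of this."""
    return np.linalg.solve(Q, -c)

Q = np.array([[2.0, 0.0], [0.0, 4.0]])  # per-actuator weighting (illustrative)
c = np.array([-2.0, -8.0])
u_star = centralized_flatness_qp(Q, c)
```

The decentralized variants would instead solve one such small problem per actuator, exchanging the previous-step commands to compensate for the mill's delays.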
Alberto Bemporad
alberto.bemporad@imtlucca.it
Daniele Bernardini
daniele.bernardini@imtlucca.it
Francesco Alessandro Cuzzola
Andrea Spinelli
2011-07-06T09:22:43Z
2011-07-11T14:37:42Z
http://eprints.imtlucca.it/id/eprint/705
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/705
2011-07-06T09:22:43Z
Semigroups of Semi-copulas and Evolution of Dependence at Increase of Age
We consider a pair of exchangeable lifetimes X, Y and the families of conditional survival functions F_t(x, y) of (X − t, Y − t) given (X > t, Y > t). We analyze some properties of dependence and of ageing for F_t(x, y) and some relations among them.
Rachele Foschi
rachele.foschi@imtlucca.it
Fabio Spizzichino
2011-06-21T14:19:10Z
2011-08-04T07:29:06Z
http://eprints.imtlucca.it/id/eprint/623
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/623
2011-06-21T14:19:10Z
Tools for modeling, simulation, control, and verification of piecewise affine systems
Alberto Bemporad
alberto.bemporad@imtlucca.it
Stefano Di Cairano
Giancarlo Ferrari-Trecate
Michal Kvasnica
Manfred Morari
Simone Paoletti
2011-06-21T10:13:44Z
2011-08-05T12:57:37Z
http://eprints.imtlucca.it/id/eprint/615
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/615
2011-06-21T10:13:44Z
An MPC design flow for automotive control and applications to idle speed regulation
This paper describes the steps of a model predictive control (MPC) design procedure developed for a broad class of control problems in automotive engineering. The design flow starts by deriving a linearized discrete-time prediction model from an existing simulation model, augmenting it with integral action or output disturbance models to ensure offset-free steady-state properties, and tuning the resulting MPC controller in simulation. Explicit MPC tools are employed to synthesize the controller to quickly assess controller complexity, local stability of the closed-loop dynamics, and for rapid prototype testing. Then, the controller is fine-tuned by refining the linear prediction model through identification from experimental data, and by adjusting from observed experimental performance the values of weights and noise covariances for filter design. The idle speed control (ISC) problem is used in this paper to exemplify the design flow and our vehicle implementation results are reported.
Stefano Di Cairano
Diana Yanakiev
Alberto Bemporad
alberto.bemporad@imtlucca.it
Ilya Kolmanovsky
Davor Hrovat
2011-06-16T09:04:57Z
2011-07-11T14:36:24Z
http://eprints.imtlucca.it/id/eprint/420
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/420
2011-06-16T09:04:57Z
Linear-Time and May-Testing in a Probabilistic Reactive Setting
We consider reactive probabilistic labelled transition systems (rplts), a model where internal choices are refined by probabilistic choices. In this setting, we study the relationship between linear-time and may-testing semantics, where an angelic view of nondeterminism is taken. Building on the model of d-trees of Cleaveland et al., we first introduce a clean model of probabilistic may-testing, based on simple concepts from measure theory. In particular, we define a probability space where statements of the form “p may pass test o” naturally correspond to measurable events. We then obtain an observer-independent characterization of the may-testing preorder, based on comparing the probability of sets of traces, rather than of individual traces. This entails that may-testing is strictly finer than linear-time semantics. Next, we characterize the may-testing preorder in terms of the probability of satisfying safety properties, expressed as languages of infinite trees rather than traces. We then identify a significant subclass of rplts where linear and may-testing semantics do coincide: these are the separated rplts, where actions are partitioned into probabilistic and nondeterministic ones, and at each state only one type is available.
Lucia Acciai
Michele Boreale
Rocco De Nicola
r.denicola@imtlucca.it
2011-06-16T07:49:09Z
2011-07-11T14:36:27Z
http://eprints.imtlucca.it/id/eprint/378
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/378
2011-06-16T07:49:09Z
Three Logics for Branching Bisimulation (Extended Abstract)
Rocco De Nicola
r.denicola@imtlucca.it
Frits W. Vaandrager
2011-06-16T07:31:36Z
2011-07-11T14:36:27Z
http://eprints.imtlucca.it/id/eprint/387
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/387
2011-06-16T07:31:36Z
Net Theory and Application - Response
Rocco De Nicola
r.denicola@imtlucca.it
2011-06-16T07:28:23Z
2011-07-11T14:36:28Z
http://eprints.imtlucca.it/id/eprint/396
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/396
2011-06-16T07:28:23Z
Communication Through Message Passing or Shared Memory: A Formal Comparison
Rocco De Nicola
r.denicola@imtlucca.it
Alberto Matelli
Ugo Montanari
2011-06-15T14:43:17Z
2011-07-11T14:35:29Z
http://eprints.imtlucca.it/id/eprint/402
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/402
2011-06-15T14:43:17Z
An accessible verification environment for UML models of services
Service-Oriented Architectures (SOAs) provide methods and technologies for modelling, programming and deploying software applications that can run over globally available network infrastructures. Current software engineering technologies for SOAs, however, remain at the descriptive level and lack rigorous foundations enabling formal analysis of service-oriented models and software. To support automated verification of service properties by relying on mathematically founded techniques, we have developed a software tool that we called Venus (Verification ENvironment for UML models of Services). Our tool takes as an input service models specified by UML 2.0 activity diagrams according to the UML4SOA profile, while its theoretical bases are the process calculus COWS and the temporal logic SocL. A key feature of Venus is that it provides access to verification functionalities also to those users not familiar with formal methods. Indeed, the tool works by first automatically translating UML4SOA models and natural language statements of service properties into, respectively, COWS terms and SocL formulae, and then by automatically model checking the formulae over the COWS terms. In this paper we present the tool, its architecture and its enabling technologies by also illustrating the verification of a classical ‘travel agency’ scenario.
Federico Banti
Rosario Pugliese
Francesco Tiezzi
francesco.tiezzi@imtlucca.it
2011-06-15T14:39:12Z
2011-07-11T14:35:29Z
http://eprints.imtlucca.it/id/eprint/401
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/401
2011-06-15T14:39:12Z
A WSDL-based type system for asynchronous WS-BPEL processes
We tackle the problem of providing rigorous formal foundations to current software engineering technologies for web services, and especially to WSDL and WS-BPEL, two of the most used XML-based standard languages for web services. We focus on a simplified fragment of WS-BPEL sufficiently expressive to model asynchronous interactions among web services in a network context. We present this language as a process calculus-like formalism, that we call ws-calculus, for which we define an operational semantics and a type system. The semantics provides a precise operational model of programs, while the type system forces a clean programming discipline for integrating collaborating services. We prove that the operational semantics of ws-calculus and the type system are ‘sound’ and apply our approach to some illustrative examples. We expect that our formal development can be used to make the relationship between WS-BPEL programs and the associated WSDL documents precise and to support verification of their conformance.
Alessandro Lapadula
Rosario Pugliese
Francesco Tiezzi
francesco.tiezzi@imtlucca.it
2011-06-15T13:46:06Z
2011-07-11T14:35:30Z
http://eprints.imtlucca.it/id/eprint/410
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/410
2011-06-15T13:46:06Z
A Formal Account of WS-BPEL
We introduce Blite, a lightweight language for web services orchestration designed around some of WS-BPEL's peculiar features, like partner links, process termination, message correlation, long-running business transactions and compensation handlers. Blite's formal presentation helps clarify some ambiguous aspects of the WS-BPEL specification, which have led to engines implementing different semantics and have thus undermined portability of WS-BPEL programs over different platforms. We illustrate the main features of Blite by means of many examples, some of which are also exploited to test and compare the behaviour of three of the best-known free WS-BPEL engines.
Alessandro Lapadula
Rosario Pugliese
Francesco Tiezzi
francesco.tiezzi@imtlucca.it
2011-06-15T13:40:09Z
2014-10-08T09:40:50Z
http://eprints.imtlucca.it/id/eprint/409
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/409
2011-06-15T13:40:09Z
From architectural to behavioural specification of services
Many efforts are currently devoted to provide software developers with methods and techniques that can endow service-oriented computing with systematic and accountable engineering practices. To this purpose, a number of languages and calculi have been proposed within the Sensoria project that address different levels of abstraction of the software engineering process. Here, we report on two such languages and the way they can be formally related within an integrated approach that can lead to verifiable development of service components from more abstract architectural models of business activities.
Laura Bocchi
Jose Luiz Fiadeiro
Alessandro Lapadula
Rosario Pugliese
Francesco Tiezzi
francesco.tiezzi@imtlucca.it
2011-06-15T13:31:10Z
2014-10-08T09:31:57Z
http://eprints.imtlucca.it/id/eprint/408
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/408
2011-06-15T13:31:10Z
A symbolic semantics for a calculus for service-oriented computing
We introduce a symbolic characterisation of the operational semantics of COWS, a formal language for specifying and combining service-oriented applications, while modelling their dynamic behaviour. This alternative semantics avoids infinite representations of COWS terms due to the value-passing nature of communication in COWS and is more amenable for automatic manipulation by analytical tools, such as e.g. equivalence and model checkers. We illustrate our approach through a ‘translation service’ scenario.
Rosario Pugliese
Francesco Tiezzi
francesco.tiezzi@imtlucca.it
Nobuko Yoshida
2011-06-15T13:25:13Z
2014-10-08T09:43:28Z
http://eprints.imtlucca.it/id/eprint/407
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/407
2011-06-15T13:25:13Z
Specification and analysis of SOC systems using COWS: a finance case study
Service-oriented computing, an emerging paradigm for distributed computing based on the use of services, is calling for the development of tools and techniques to build safe and trustworthy systems, and to analyse their behaviour. Therefore many researchers have proposed to use process calculi, a cornerstone of current foundational research on specification and analysis of concurrent and distributed systems.
We illustrate this approach by focussing on COWS, a process calculus expressly designed for specifying and combining services, while modelling their dynamic behaviour. We present the calculus and one of the analysis techniques it enables, that is based on the temporal logic SocL and the associated model checker CMC. We demonstrate applicability of our tools by means of a large case study, from the financial domain, which is first specified in COWS, and then analysed by using SocL to express many significant properties and CMC to verify them.
Federico Banti
Alessandro Lapadula
Rosario Pugliese
Francesco Tiezzi
francesco.tiezzi@imtlucca.it
2011-06-15T13:03:05Z
2011-07-11T14:35:30Z
http://eprints.imtlucca.it/id/eprint/405
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/405
2011-06-15T13:03:05Z
On Observing Dynamic Prioritised Actions in SOC
We study the impact on observational semantics for SOC of priority mechanisms which combine dynamic priority with local pre-emption. We define manageable notions of strong and weak labelled bisimilarities for COWS, a process calculus for SOC, and provide alternative characterisations in terms of open barbed bisimilarities. These semantics show that COWS’s priority mechanisms partially recover the capability to observe receive actions (that could not be observed in a purely asynchronous setting) and that high priority primitives for termination impose specific conditions on the bisimilarities.
Rosario Pugliese
Francesco Tiezzi
francesco.tiezzi@imtlucca.it
Nobuko Yoshida
2011-06-15T12:52:15Z
2011-07-11T14:35:30Z
http://eprints.imtlucca.it/id/eprint/406
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/406
2011-06-15T12:52:15Z
On Secure Implementation of an IHE XUA-Based Protocol for Authenticating Healthcare Professionals
The importance of the Electronic Health Record (EHR) has been addressed in recent years by governments and institutions. Many large-scale projects have been funded with the aim of allowing healthcare professionals to consult patients' data. Properties such as confidentiality, authentication and authorization are key to the success of these projects. The Integrating the Healthcare Enterprise (IHE) initiative promotes the coordinated use of established standards for authenticated and secure EHR exchanges among clinics and hospitals. In particular, the IHE integration profile named XUA permits attesting user identities by relying on SAML assertions, i.e. XML documents containing authentication statements. In this paper, we provide a formal model for the secure issuance of such an assertion. We first specify the scenario using the process calculus COWS and then analyse it using the model checker CMC. Our analysis reveals a potential flaw in the XUA profile when using a SAML assertion in an unprotected network. We then suggest a solution for this flaw, and model check and implement this solution to show that it is secure and feasible.
Massimiliano Masi
Rosario Pugliese
Francesco Tiezzi
francesco.tiezzi@imtlucca.it
2011-06-15T12:10:14Z
2011-07-11T14:35:30Z
http://eprints.imtlucca.it/id/eprint/404
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/404
2011-06-15T12:10:14Z
A tool for rapid development of WS-BPEL applications
We present BliteC, a software tool we have developed to support rapid and easy development of WS-BPEL applications. BliteC translates service orchestrations written in Blite, a formal language inspired by, but simpler than, WS-BPEL, into executable WS-BPEL programs. We illustrate our approach by means of an example borrowed from the official specification of WS-BPEL.
Luca Cesari
Alessandro Lapadula
Rosario Pugliese
Francesco Tiezzi
francesco.tiezzi@imtlucca.it
2011-06-15T11:51:52Z
2011-07-11T14:35:30Z
http://eprints.imtlucca.it/id/eprint/403
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/403
2011-06-15T11:51:52Z
A criterion for separating process calculi
We introduce a new criterion, replacement freeness, to discern the relative expressiveness of process calculi. Intuitively, a calculus is strongly replacement free if replacing, within an enclosing context, a process that cannot perform any visible action by an arbitrary process never inhibits the capability of the resulting process to perform a visible action. We prove that there exists no compositional and interaction sensitive encoding of a not strongly replacement free calculus into any strongly replacement free one. We then define a weaker version of replacement freeness, by only considering replacement of closed processes, and prove that, if we additionally require the encoding to preserve name independence, it is not even possible to encode a non replacement free calculus into a weakly replacement free one. As a consequence of our encodability results, we get that many calculi equipped with priority are not replacement free and hence are not encodable into mainstream calculi like CCS and pi-calculus, that instead are strongly replacement free. We also prove that variants of pi-calculus with match among names, pattern matching or polyadic synchronization are only weakly replacement free, hence they are separated both from process calculi with priority and from mainstream calculi.
Federico Banti
Rosario Pugliese
Francesco Tiezzi
francesco.tiezzi@imtlucca.it
2011-06-15T08:43:00Z
2011-07-11T14:35:30Z
http://eprints.imtlucca.it/id/eprint/419
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/419
2011-06-15T08:43:00Z
A WSDL-Based Type System for WS-BPEL
We tackle the problem of providing rigorous formal foundations to current software engineering technologies for web services. We focus on two of the most used XML-based languages for web services: WSDL and WS-BPEL. To this aim, first we select an expressive subset of WS-BPEL, with special concern for modeling the interactions among web service instances in a network context, and define its operational semantics. We call ws-calculus the resulting formalism. Then, we put forward a rigorous typing discipline that formalizes the relationship existing between ws-calculus terms and the associated WSDL documents and supports verification of their compliance. We prove that the type system and the operational semantics of ws-calculus are ‘sound’ and apply our approach to an example application involving three interacting web services.
Alessandro Lapadula
Rosario Pugliese
Francesco Tiezzi
francesco.tiezzi@imtlucca.it
2011-06-15T08:37:37Z
2011-07-11T14:35:30Z
http://eprints.imtlucca.it/id/eprint/418
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/418
2011-06-15T08:37:37Z
COWS: A Timed Service-Oriented Calculus
COWS (Calculus for Orchestration of Web Services) is a foundational language for Service-Oriented Computing that combines in an original way a number of ingredients borrowed from well-known process calculi, e.g. asynchronous communication, polyadic synchronization, pattern matching, protection, delimited receiving and killing activities, while resulting different from any of them. In this paper, we extend COWS with timed orchestration constructs, thereby obtaining a language capable of completely formalizing the semantics of WS-BPEL, the ‘de facto’ standard language for orchestration of web services. We present the semantics of the extended language and illustrate its peculiarities and expressiveness by means of several examples.
Alessandro Lapadula
Rosario Pugliese
Francesco Tiezzi
francesco.tiezzi@imtlucca.it
2011-06-15T08:30:11Z
2011-07-11T14:35:30Z
http://eprints.imtlucca.it/id/eprint/417
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/417
2011-06-15T08:30:11Z
Regulating Data Exchange in Service Oriented Applications
We define a type system for COWS, a formalism for specifying and combining services, while modelling their dynamic behaviour. Our types permit expressing policies that constrain data exchanges in terms of sets of service partner names attachable to each single datum. Service programmers explicitly write only the annotations necessary to specify the desired policies for communicable data, while a type inference system (statically) derives the minimal additional annotations that ensure consistency of the services' initial configuration. The language's dynamic semantics then performs only very simple checks to authorize or block communication. We prove that the type system and the operational semantics are sound. As a consequence, we obtain the following data protection property: services always comply with the policies regulating the exchange of data among interacting services. We illustrate our approach through a simplified but realistic scenario for a service-based electronic marketplace.
Alessandro Lapadula
Rosario Pugliese
Francesco Tiezzi
francesco.tiezzi@imtlucca.it
2011-06-15T08:00:24Z
2011-07-11T14:35:30Z
http://eprints.imtlucca.it/id/eprint/416
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/416
2011-06-15T08:00:24Z
A Calculus for Orchestration of Web Services
We introduce COWS (Calculus for Orchestration of Web Services), a new foundational language for SOC whose design has been influenced by WS-BPEL, the de facto standard language for orchestration of web services. COWS combines in an original way a number of ingredients borrowed from well-known process calculi, e.g. asynchronous communication, polyadic synchronization, pattern matching, protection, delimited receiving and killing activities, while resulting different from any of them. Several examples illustrate COWS's peculiarities and show its expressiveness both for modelling imperative and orchestration constructs, e.g. web services, flow graphs, fault and compensation handlers, and for encoding other process and orchestration languages.
Alessandro Lapadula
Rosario Pugliese
Francesco Tiezzi
francesco.tiezzi@imtlucca.it
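The pattern-matching communication that the abstract above names as one of COWS's ingredients can be sketched informally. This is a toy illustration only, not COWS's actual matching rules: the `match` function and the `?`-prefix convention for variables are hypothetical.

```python
# Toy sketch of pattern-matching receive: a template is a tuple whose
# '?'-prefixed string fields are variables to bind; all other fields
# must match the incoming message exactly (as in correlation).
def match(pattern, message):
    """Return a substitution (dict) if pattern matches message, else None."""
    if len(pattern) != len(message):
        return None
    subst = {}
    for p, m in zip(pattern, message):
        if isinstance(p, str) and p.startswith("?"):
            subst[p[1:]] = m          # bind a variable field
        elif p != m:
            return None               # a ground field must match exactly
    return subst

# A receive <order, ?id, ?amount> matches an incoming <order, 7, 100> ...
print(match(("order", "?id", "?amount"), ("order", 7, 100)))  # {'id': 7, 'amount': 100}
# ... but matching fails when a ground (correlation) field differs.
print(match(("order", 7, "?amount"), ("order", 8, 100)))      # None
```

Ground fields acting as correlation values is what lets several instances of the same service tell their messages apart.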
2011-06-14T15:00:55Z
2014-10-08T09:29:01Z
http://eprints.imtlucca.it/id/eprint/414
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/414
2011-06-14T15:00:55Z
Service discovery and negotiation with COWS
To provide formal foundations to current (web) services technologies, we put forward using COWS, a process calculus for specifying, combining and analysing services, as a uniform formalism for modelling all the relevant phases of the life cycle of service-oriented applications, such as publication, discovery, negotiation, deployment and execution. In this paper, we show that constraints and operations on them can be smoothly incorporated in COWS, and propose a disciplined way to model multisets of constraints and to manipulate them through appropriate interaction protocols. We thereby demonstrate that QoS requirement specifications, SLA achievements, and the phases of dynamic service discovery and negotiation can also be comfortably modelled in COWS. We illustrate our approach through a scenario for a service-based web hosting provider.
Alessandro Lapadula
Rosario Pugliese
Francesco Tiezzi
francesco.tiezzi@imtlucca.it
2011-06-14T14:20:42Z
2016-04-06T07:58:24Z
http://eprints.imtlucca.it/id/eprint/413
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/413
2011-06-14T14:20:42Z
SENSORIA Patterns: Augmenting Service Engineering with Formal Analysis, Transformation and Dynamicity
The IST-FET Integrated Project Sensoria is developing a novel comprehensive approach to the engineering of service-oriented software systems where foundational theories, techniques and methods are fully integrated into pragmatic software engineering processes. The techniques and tools of Sensoria encompass the whole software development cycle, from business and architectural design, to quantitative and qualitative analysis of system properties, and to transformation and code generation. The Sensoria approach also takes into account reconfiguration of service-oriented architectures (SOAs) and re-engineering of legacy systems.
In this paper we give first a short overview of Sensoria and then present a pattern language for augmenting service engineering with formal analysis, transformation and dynamicity. The patterns are designed to help software developers choose appropriate tools and techniques to develop service-oriented systems with support from formal methods. They support the whole development process, from the modelling stage to deployment activities and give an overview of many of the research areas pursued in the Sensoria project.
Martin Wirsing
Matthias Hölzl
Lucia Acciai
Federico Banti
Allan Clark
Alessandro Fantechi
Stephen Gilmore
Stefania Gnesi
László Gönczy
Nora Koch
Alessandro Lapadula
Philip Mayer
Franco Mazzanti
Rosario Pugliese
Andreas Schroeder
Francesco Tiezzi
francesco.tiezzi@imtlucca.it
Mirco Tribastone
mirco.tribastone@imtlucca.it
Dániel Varró
2011-06-14T14:08:44Z
2011-07-11T14:35:30Z
http://eprints.imtlucca.it/id/eprint/412
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/412
2011-06-14T14:08:44Z
A Model Checking Approach for Verifying COWS Specifications
We introduce a logical verification framework for checking functional properties of service-oriented applications formally specified using the service specification language COWS. The properties are described by means of SocL, a logic specifically designed to capture peculiar aspects of services. Service behaviours are abstracted in terms of Doubly Labelled Transition Systems, which are used as the interpretation domain for SocL formulae. We also illustrate the SocL model checker at work on a bank service scenario specified in COWS.
Alessandro Fantechi
Stefania Gnesi
Alessandro Lapadula
Franco Mazzanti
Rosario Pugliese
Francesco Tiezzi
francesco.tiezzi@imtlucca.it
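The kind of action-based reachability question a model checker answers over a labelled transition system can be sketched as follows. This is a minimal illustration under stated assumptions: `can_eventually` and the example LTS are hypothetical, and SocL/CMC handle far richer properties than this.

```python
def can_eventually(lts, start, action):
    """Check whether some run from `start` can perform `action`
    (an EF-style reachability check over a labelled transition system)."""
    seen, stack = set(), [start]
    while stack:
        state = stack.pop()
        if state in seen:
            continue
        seen.add(state)
        for act, target in lts.get(state, []):
            if act == action:
                return True           # a transition with the action is reachable
            stack.append(target)
    return False

# States map to lists of (action, successor) pairs; a toy bank scenario.
bank = {
    "idle":    [("request", "pending")],
    "pending": [("grant", "idle"), ("refuse", "idle")],
}
print(can_eventually(bank, "idle", "grant"))   # True
print(can_eventually(bank, "idle", "charge"))  # False
```

The `seen` set guarantees termination on cyclic transition systems, which is why the loop through "idle" above does not diverge.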
2011-06-14T13:55:41Z
2011-07-11T14:35:30Z
http://eprints.imtlucca.it/id/eprint/411
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/411
2011-06-14T13:55:41Z
Specifying and Analysing SOC Applications with COWS
COWS is a recently defined process calculus for specifying and combining service-oriented applications, while modelling their dynamic behaviour. Since its introduction, a number of methods and tools have been devised to analyse COWS specifications, like e.g. a type system to check confidentiality properties, a logic and a model checker to express and check functional properties of services. In this paper, by means of a case study in the area of automotive systems, we demonstrate that COWS, with some mild linguistic additions, can model all the phases of the life cycle of service-oriented applications, such as publication, discovery, negotiation, orchestration, deployment, reconfiguration and execution. We also provide a flavour of the properties that can be analysed by using the tools mentioned above.
Alessandro Lapadula
Rosario Pugliese
Francesco Tiezzi
francesco.tiezzi@imtlucca.it
2011-06-14T13:34:30Z
2011-07-11T14:36:27Z
http://eprints.imtlucca.it/id/eprint/384
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/384
2011-06-14T13:34:30Z
A Distributed Operational Semantics for CCS Based on Condition/Event Systems
A new set of inference rules for the guarded version of Milner’s Calculus of Communicating Systems is proposed. They not only describe the actions agents may perform when in a given state, but also say which parts of the agents move when the global state changes. From the transition relation a particular Petri Net, namely a Condition/Event system called ΣCCS, is immediately derived. Our construction gives a semantics which is consistent with the interleaving semantics of CCS and exhibits full parallelism. The proof consists of relating the case graph of ΣCCS with the original and with the multiset (step) transition systems of the calculus.
Pierpaolo Degano
Rocco De Nicola
r.denicola@imtlucca.it
Ugo Montanari
2011-06-14T13:29:28Z
2011-07-11T14:36:27Z
http://eprints.imtlucca.it/id/eprint/364
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/364
2011-06-14T13:29:28Z
Testing Equivalence for Mobile Processes
The impact of applying the testing approach to a calculus of processes with dynamic communication topology is investigated. A proof system is introduced that consists of two groups of laws: those for strong observational equivalence and those needed to deal with invisible actions. Soundness and completeness of this proof system w.r.t. a testing preorder are shown. A fully abstract denotational model for the language is presented that takes advantage of reductions of processes to normal forms.
Michele Boreale
Rocco De Nicola
r.denicola@imtlucca.it
2011-06-14T12:26:11Z
2014-01-15T10:32:07Z
http://eprints.imtlucca.it/id/eprint/349
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/349
2011-06-14T12:26:11Z
Interactive Mobile Agents in X-Klaim
Mobile agents are processes which can migrate and execute on new hosts. Mobility is a key concept for network programming; it has stimulated much research on new programming languages and paradigms. X-KLAIM is an experimental programming language, inspired by the Linda paradigm, where mobile agents and their interaction strategies can be naturally programmed. A prototype implementation of X-KLAIM is presented, together with a few examples introducing the new programming style.
Lorenzo Bettini
Rocco De Nicola
r.denicola@imtlucca.it
Rosario Pugliese
GianLuigi Ferrari
2011-06-14T12:07:35Z
2011-07-11T14:36:27Z
http://eprints.imtlucca.it/id/eprint/359
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/359
2011-06-14T12:07:35Z
Concurrency: Theory and Practice
Concurrency theory is concerned with the modeling and verification of concurrent systems, while concurrency practice promulgates the application of concurrency theory to concurrent systems of practical interest. We assert that a strong interplay between concurrency theory and practice is essential for the continued development of both fields.
Rocco De Nicola
r.denicola@imtlucca.it
Scott A. Smolka
2011-06-14T10:58:45Z
2011-07-11T14:36:26Z
http://eprints.imtlucca.it/id/eprint/353
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/353
2011-06-14T10:58:45Z
Basic Observables for Processes
We propose a general approach to define behavioural preorders over process terms by considering the pre-congruences induced by three basic observables. These observables provide information about the initial communication capabilities of processes and about their possibility of engaging in an infinite internal chattering. We show that some of the observables-based pre-congruences do correspond to behavioral preorders long studied in the literature. The coincidence proofs shed light on the differences between the must preorder of De Nicola and Hennessy and the fair/should preorder of Cleaveland and Natarajan and of Brinksma, Rensink and Vogler, and on the rôle played in their definition by tests for internal chattering.
Michele Boreale
Rocco De Nicola
r.denicola@imtlucca.it
Rosario Pugliese
2011-06-14T10:53:56Z
2011-07-11T14:36:27Z
http://eprints.imtlucca.it/id/eprint/356
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/356
2011-06-14T10:53:56Z
A Process Algebra Based on LINDA
The problem of comparing and analyzing the relationships between distributed programs written in the same concurrent programming language is addressed. It arises each time one wants to establish program correctness with respect to a notion of "being an approximation of". We define a testing scenario for PAL, a process algebra which is obtained by embedding the Linda primitives for interprocess communication in a CSP like process description language. We present a proof system for PAL processes which is sound and complete with respect to the behavioural relation and illustrate how it works by giving a small example.
Rocco De Nicola
r.denicola@imtlucca.it
Rosario Pugliese
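The Linda primitives for interprocess communication referred to in this abstract can be sketched with a minimal in-memory tuple space. This is an illustration only, not PAL itself: the `TupleSpace` class and the `None`-as-wildcard convention are hypothetical choices.

```python
import threading

class TupleSpace:
    """Minimal tuple space with Linda-style out (deposit), rd (read), in (take)."""
    def __init__(self):
        self._tuples = []
        self._cond = threading.Condition()

    def out(self, tup):
        # Deposit a tuple and wake any blocked readers.
        with self._cond:
            self._tuples.append(tup)
            self._cond.notify_all()

    def _match(self, template, tup):
        # A template field of None acts as a wildcard.
        return len(template) == len(tup) and all(
            t is None or t == v for t, v in zip(template, tup))

    def rd(self, template):
        # Blocking, non-destructive read of a matching tuple.
        with self._cond:
            while True:
                for tup in self._tuples:
                    if self._match(template, tup):
                        return tup
                self._cond.wait()

    def in_(self, template):
        # Blocking, destructive read (Linda's `in`; renamed to avoid the keyword).
        with self._cond:
            while True:
                for tup in self._tuples:
                    if self._match(template, tup):
                        self._tuples.remove(tup)
                        return tup
                self._cond.wait()

ts = TupleSpace()
ts.out(("job", 42))
print(ts.rd(("job", None)))   # ('job', 42) -- tuple is still in the space
print(ts.in_(("job", None)))  # ('job', 42) -- tuple is now removed
```

The blocking `rd`/`in_` operations are what make Linda-style coordination asynchronous: producers and consumers never address each other directly, only the shared space.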
2011-06-14T10:33:12Z
2014-01-15T10:32:45Z
http://eprints.imtlucca.it/id/eprint/354
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/354
2011-06-14T10:33:12Z
Locality Based Linda: Programming with Explicit Localities
In this paper we investigate the issue of defining a programming calculus which supports programming with explicit localities. We introduce a language which embeds the asynchronous Linda communication paradigm extended with explicit localities in a process calculus. We consider multiple tuple spaces that are distributed over a collection of sites and use localities to distribute/retrieve tuples and processes over/from these sites. The operational semantics of the language turns out to be useful for discussing the language design, e.g. the effects of scoping disciplines over mobile agents which maintain their connections to the located tuple spaces while moving along sites. The flexibility of the language is illustrated by a few examples.
Rocco De Nicola
r.denicola@imtlucca.it
GianLuigi Ferrari
Rosario Pugliese
2011-06-14T10:29:06Z
2011-07-11T14:36:27Z
http://eprints.imtlucca.it/id/eprint/381
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/381
2011-06-14T10:29:06Z
Using the Axiomatic Presentation of Behavioural Equivalences for Manipulating CCS Specifications
An interactive system for proving properties of CCS specifications is described. This system allows users to take advantage of all three views of CCS semantics (the transitions, the operationally defined equivalences and the axioms) and to define their own verification strategies for moving from one view to another. The system relies on term rewriting techniques and manipulates only the symbolic representation of specifications without resorting to any other kind of internal representation.
Rocco De Nicola
r.denicola@imtlucca.it
Paola Inverardi
Monica Nesi
2011-06-14T10:23:47Z
2014-01-15T10:30:02Z
http://eprints.imtlucca.it/id/eprint/336
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/336
2011-06-14T10:23:47Z
Mobile Applications in X-KLAIM
Networking has turned computers from isolated data processors into powerful communication and elaboration devices, called global computers; an illustrative example is the World–Wide Web. Global computers are rapidly evolving towards programmability. The new scenario has called for new programming languages and paradigms centered around the notions of mobility and location awareness. In this paper, we briefly present X-KLAIM, an experimental programming language for global computers, and show a few programming examples.
Lorenzo Bettini
Rocco De Nicola
r.denicola@imtlucca.it
GianLuigi Ferrari
Rosario Pugliese
2011-06-13T14:32:57Z
2011-07-11T14:36:27Z
http://eprints.imtlucca.it/id/eprint/360
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/360
2011-06-13T14:32:57Z
On Four Partial Ordering Semantics for a Process Calculus
Three of the rewriting systems used by Degano, De Nicola and Montanari to provide Milner's CCS with a causality based semantics are compared by using also a fourth intermediate one. These rewriting systems have been used to associate Petri nets, Labelled Event Structures and structured sets of partial orderings to CCS terms. It is proved that the four rewriting systems yield computations from which the same causality relations among the executed actions can be extracted, thus it is established that the four partial ordering transitional semantics do coincide.
Flavio Corradini
Rocco De Nicola
r.denicola@imtlucca.it
2011-06-13T14:27:24Z
2011-07-11T14:36:27Z
http://eprints.imtlucca.it/id/eprint/357
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/357
2011-06-13T14:27:24Z
Algebraic Characterizations of Decorated Trace Equivalences over Tree-Like Structures
A possible approach to studying behavioural equivalences in labelled transition systems is that of characterizing them in terms of homomorphic transformations. This characterization permits relying on algebraic techniques for proving systems properties and reduces equivalence checking of two systems to studying the relationships among the elements of their structures. Different algebraic characterizations of bisimulation-based equivalences in terms of particular transition systems homomorphisms have been proposed in the literature. Here we show, by an example, that trace-based equivalences are not locally characterizable and thus that the above results cannot be extended to these equivalences. However, similar results can be obtained if we confine ourselves to restricted classes of transition systems. Here, the algebraic characterizations of three well known decorated-trace equivalences (ready trace, ready and failure equivalence) for tree-like structures are presented.
Xiao Jun Chen
Rocco De Nicola
r.denicola@imtlucca.it
2011-06-13T14:23:35Z
2011-07-11T14:36:26Z
http://eprints.imtlucca.it/id/eprint/347
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/347
2011-06-13T14:23:35Z
Possible Worlds for Process Algebras
A non-deterministic process is viewed as a set of deterministic ones: its possible worlds. Each world represents a particular “solution” of non-determinism. Under this view of non-determinism as underspecification, nondeterministic processes are specifications, and the possible worlds represent the model space and thus the set of possible implementations. Then, refinement is inclusion of sets of possible worlds and can be used for stepwise specifications. This notion of refinement naturally induces new preorders (and equivalences) for processes that we characterize denotationally, operationally and axiomatically for a basic process algebra with nil, prefix and choice.
Rocco De Nicola
r.denicola@imtlucca.it
Simone Veglioni
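The possible-worlds view of refinement described in the abstract above reduces to set inclusion, which can be sketched directly. This is a toy illustration under stated assumptions: the world names and the `refines` function are hypothetical, and real worlds are deterministic processes, not strings.

```python
# A nondeterministic specification is modelled as the set of its possible
# deterministic resolutions ("worlds"); an implementation is a (smaller)
# set of worlds, and refinement is just set inclusion.
def refines(implementation, specification):
    """impl refines spec iff every world of impl is also a world of spec."""
    return implementation <= specification

spec = {"retry-then-fail", "retry-then-succeed", "fail-fast"}
impl = {"retry-then-succeed"}           # one concrete resolution of the choices

print(refines(impl, spec))              # True: a legal implementation
print(refines({"ignore-errors"}, spec)) # False: a world the spec never allowed
```

Stepwise refinement then corresponds to shrinking the set of worlds, since set inclusion is transitive.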
2011-06-13T14:02:16Z
2011-07-11T14:36:27Z
http://eprints.imtlucca.it/id/eprint/370
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/370
2011-06-13T14:02:16Z
An Action-Based Framework for Verifying Logical and Behavioural Properties of Concurrent Systems
A system is described which supports proving both behavioural and logical properties of concurrent systems; these are specified by means of a process algebra and its associated logic. The logic is an action-based version of the branching-time logic CTL, which we call ACTL. It is interpreted over transition-labelled structures, while CTL is interpreted over state-labelled ones. At the core of the system are two existing tools, AUTO and EMC. The first builds the labelled transition system corresponding to a term of a process algebra and permits proof of equivalence and simplification of terms, while the second checks the validity of CTL logical formulae. The integration is realized by means of two translation functions: from the action-based branching-time logic ACTL to CTL, and from transition-labelled to state-labelled structures. The correctness of the integration is guaranteed by the proof that the two translation functions, when coupled, preserve satisfiability of logical formulae.
Rocco De Nicola
r.denicola@imtlucca.it
Alessandro Fantechi
Stefania Gnesi
Gioia Ristori
2011-06-13T13:56:57Z
2011-07-11T14:36:26Z
http://eprints.imtlucca.it/id/eprint/358
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/358
2011-06-13T13:56:57Z
Testing Semantics of Asynchronous Distributed Programs
The algebraic approach to concurrency is tuned for modelling operational semantics and supporting formal verification of distributed imperative programs with asynchronous communications. We consider an imperative process algebra, IPAL, which is obtained by embedding the Linda primitives for interprocess communication in a CSP-like process description language enriched with a construct for assignment. We set up a testing scenario and present a proof system for IPAL which is sound and complete with respect to the induced behavioural relations.
Rocco De Nicola
r.denicola@imtlucca.it
Rosario Pugliese
2011-06-13T13:10:36Z
2011-07-11T14:36:27Z
http://eprints.imtlucca.it/id/eprint/369
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/369
2011-06-13T13:10:36Z
A Completeness Theorem for Nondeterministic Kleene Algebras
A generalization of Kleene Algebras (structures with +, ·, *, 0 and 1 operators) is considered to take into account possible nondeterminism expressed by the + operator. It is shown that essentially the same complete axiomatization of Salomaa is obtained, except for the elimination of the distribution law P·(Q + R) = P·Q + P·R and the idempotence law P + P = P. The main result is that an algebra obtained from a suitable category of labelled trees plays the same role as the algebra of regular events. The algebraic semantics and the axiomatization are then extended by adding the Ω and parallel operators, and the whole set of laws is used as a touchstone for starting a discussion over the laws for deadlock, termination and divergence proposed for models of concurrent systems.
Rocco De Nicola
r.denicola@imtlucca.it
Anna Labella
2011-06-13T13:03:19Z
2011-07-11T14:36:27Z
http://eprints.imtlucca.it/id/eprint/365
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/365
2011-06-13T13:03:19Z
Three Logics for Branching Bisimulation
Three temporal logics are introduced that induce on labeled transition systems the same identifications as branching bisimulation, a behavioral equivalence that aims at ignoring invisible transitions while preserving the branching structure of systems. The first logic is an extension of Hennessy-Milner Logic with an “until” operator. The second one is another extension of Hennessy-Milner Logic, which exploits the power of backward modalities. The third logic is CTL* without the next-time operator. A relevant side-effect of the last characterization is that it sets a bridge between the state- and action-based approaches to the semantics of concurrent systems.
Rocco De Nicola
r.denicola@imtlucca.it
Frits W. Vaandrager
2011-06-13T12:56:03Z
2011-07-11T14:36:27Z
http://eprints.imtlucca.it/id/eprint/363
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/363
2011-06-13T12:56:03Z
Fully Abstract Models for Nondeterministic Regular Expressions
Regular expressions and Kleene Algebras have been a direct inspiration for many constructs and axiomatizations for concurrency models. These, however, put a different stress on nondeterminism. With concurrent interpretations in mind, we study the effect of removing the idempotence law X+X=X and distribution law X·(Y+Z)=X·Y+X·Z from Kleene Algebras. We propose an operational semantics that is sound and complete w.r.t. the new set of axioms and is fully abstract w.r.t. a denotational semantics based on trees. The operational semantics is based on labelled transition systems that keep track of the performed choices and on a preorder relation (we call it resource simulation) that also takes into account the number of states reachable via every action. An important property we exhibit is that resource bisimulation equivalence can be obtained as the kernel of resource simulation.
Flavio Corradini
Rocco De Nicola
r.denicola@imtlucca.it
Anna Labella
2011-06-13T12:52:47Z
2011-07-11T14:36:27Z
http://eprints.imtlucca.it/id/eprint/389
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/389
2011-06-13T12:52:47Z
CCS is an (Augmented) Contact Free C/E System
A new class of Petri Nets, called Augmented Condition/Event Systems, is defined by slightly relaxing the condition for enabling events. One system from this class, called ΣCCS, is used to give a new operational semantics to Milner's Calculus of Communicating Systems. The set of CCS agents together with the traditional, interleaving based, derivation relation is proved isomorphic to the case graph of ΣCCS (when single transitions only are considered). Our achievement is twofold: first, we provide CCS with a semantics which is able to describe concurrency and causal dependencies between the actions the various agents can perform; second, we guarantee an adequate linguistic level for the particular class of Petri Nets which can be defined through CCS operators.
Pierpaolo Degano
Rocco De Nicola
r.denicola@imtlucca.it
Ugo Montanari
2011-06-13T12:48:41Z
2011-07-11T14:36:26Z
http://eprints.imtlucca.it/id/eprint/346
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/346
2011-06-13T12:48:41Z
Models of Nondeterministic Regular Expressions
Nondeterminism is a direct outcome of interactions and is, therefore, a central ingredient for modelling concurrent systems. Trees are very useful for modelling nondeterministic behaviour. We aim at a tree-based interpretation of regular expressions and study the effect of removing the idempotence law X+X=X and the distribution law X•(Y+Z)=X•Y+X•Z from Kleene algebras. We show that the free model of the new set of axioms is a class of trees labelled over A. We also equip regular expressions with a two-level behavioural semantics. The basic level is described in terms of a class of labelled transition systems that are detailed enough to describe the number of equal actions a system can perform from a given state. The abstract level is based on a so-called resource bisimulation preorder that permits ignoring uninteresting details of transition systems. The three proposed interpretations of regular expressions (algebraic, denotational, and behavioural) are proven to coincide. When dealing with infinite behaviours, we rely on a simple version of ω-induction and obtain a complete proof system also for the full language of nondeterministic regular expressions.
Flavio Corradini
Rocco De Nicola
r.denicola@imtlucca.it
Anna Labella
2011-06-13T12:42:11Z
2014-10-07T14:38:28Z
http://eprints.imtlucca.it/id/eprint/350
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/350
2011-06-13T12:42:11Z
Tree morphisms and bisimulations
A category of (action labelled) trees is defined that can be used to model unfolding of labelled transition systems and to study behavioural relations over them. In this paper we study five different equivalences based on bisimulation for our model. One, which we call resource bisimulation, amounts essentially to tree isomorphism. Another, its weak counterpart, permits abstracting from silent actions while preserving the tree structure. The other three are the well-known strong, branching and weak bisimulation equivalences. For all bisimulations but weak, canonical representatives are constructed and it is shown that they can be obtained via enriched functors over our categories of trees, with and without silent actions. Weak equivalence is more problematic: a canonical minimal representative for it cannot be defined by quotienting our trees. The common framework helps in understanding the relationships between the various equivalences, and the results provide support to the claim that branching bisimulation is the natural generalization of strong bisimulation to systems with silent moves and that resource and weak resource bisimulation are of interest in their own right.
Rocco De Nicola
r.denicola@imtlucca.it
Anna Labella
2011-06-13T12:28:21Z
2011-07-11T14:36:26Z
http://eprints.imtlucca.it/id/eprint/348
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/348
2011-06-13T12:28:21Z
Asynchronous Observations of Processes
We study may and must testing-based preorders in an asynchronous setting. In particular, we provide some full abstraction theorems that offer alternative characterizations of these preorders in terms of context closure w.r.t. basic observables and in terms of traces and acceptance sets. These characterizations throw light on the asymmetry between input and output actions in asynchronous interactions and on the difference between synchrony and asynchrony.
Michele Boreale
Rocco De Nicola
r.denicola@imtlucca.it
Rosario Pugliese
2011-06-13T10:40:49Z
2011-07-11T14:36:27Z
http://eprints.imtlucca.it/id/eprint/366
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/366
2011-06-13T10:40:49Z
A Process Algebraic View of Input/Output Automata
Input/output automata are a widely used formalism for the specification and verification of concurrent algorithms. Unfortunately, they lack an algebraic characterization, a formalization which has been fundamental for the success of theories like CSP, CCS and ACP. We present a many-sorted algebra for I/O automata that takes into account notions such as interface, input enabling, and local control. It is sufficiently expressive for representing all finitely branching transition systems; hence, all I/O automata with a finitely branching transition relation. Our presentation includes a complete axiomatization of the external trace preorder relation over recursion-free processes with input and output.
Rocco De Nicola
r.denicola@imtlucca.it
Roberto Segala
2011-06-13T10:06:26Z
2011-07-11T14:36:27Z
http://eprints.imtlucca.it/id/eprint/371
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/371
2011-06-13T10:06:26Z
Universal Axioms for Bisimulations
Node-labelled graphs, called observation structures, are introduced as a basic model of concurrent distributed systems and as a framework for dealing with observational equivalences over them. In the special case of observation trees, the nodes represent the computations of a system and are labelled by what is observed out of them. The labelling function parametrically maps into different observation domains, e.g., sequences of actions, partial orderings, mixed orderings, … A language for denoting observation trees is proposed and congruences over its terms are defined as strong, rooted branching and rooted weak bisimulations. A new bisimulation, called jumping bisimulation, is also defined; it arises naturally in the framework of state-labelled structures. Sound and complete axiomatizations, which are independent of the chosen observations, are exhibited for all bisimulations. It is claimed that most of the bisimulation-based congruences known in the literature, for a given process description language, can be recast in terms of congruences on observation structures, by carefully choosing both the bisimulation and the observation domain. Thus, the process of defining the extensional semantics of a process description language can be factorized into a few stages, for each of which several alternatives with clean rationales are available.
Pierpaolo Degano
Rocco De Nicola
r.denicola@imtlucca.it
Ugo Montanari
2011-06-13T09:11:14Z
2011-07-11T14:36:27Z
http://eprints.imtlucca.it/id/eprint/386
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/386
2011-06-13T09:11:14Z
Extensional Equivalences for Transition Systems
Various notions of systems equivalence based on the reactions of systems to stimuli from the outside world are presented and compared. These notions have been proposed in the literature to allow abstraction from unwanted details in models of concurrent and communicating systems. The equivalences, already defined for different theories of concurrency, will be compared by adapting their definitions to labelled transition systems, a model which underlies many others. In the presentation of each equivalence, the aspects of system behaviours which are ignored and the identifications which are forced will be stressed. It will be shown that many equivalences, although defined very differently by following different intuitions about systems behaviour, turn out to be the same or to differ only in minor detail for a large class of transition systems.
Rocco De Nicola
r.denicola@imtlucca.it
2011-06-13T09:08:57Z
2011-07-11T14:36:27Z
http://eprints.imtlucca.it/id/eprint/385
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/385
2011-06-13T09:08:57Z
CCS without tau's
The main point of this paper is that one can develop an adequate version of CCS which does not use the special combinator τ for internal actions. Instead, the choice operator +, whose semantics is somewhat unclear, is replaced by two new choice operators, ⊕ and [], representing internal and external nondeterminism respectively. The operational semantics of the resulting language is simpler and the definition of testing preorders is significantly cleaner. The essential features of the original calculus are kept; this is shown by defining a translation from CCS to the new language which preserves testing preorders.
Rocco De Nicola
r.denicola@imtlucca.it
Matthew Hennessy
2011-06-13T09:04:16Z
2014-01-15T10:33:06Z
http://eprints.imtlucca.it/id/eprint/377
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/377
2011-06-13T09:04:16Z
Observational Logics and Concurrency Models
The aim of this paper is to examine some basic topics of true concurrency from the viewpoint of program logics. In particular, logical characterizations of observational (bisimulation) equivalences based on partial ordering observations are studied. To date, in contrast with the interleaving approach, such equivalences have been almost exclusively studied from the operational standpoint. We shall show that they can be defined in a logical setting and that standard modal and temporal techniques can also be applied to true concurrency models. As a result, the interleaving and the partial ordering views of concurrency are reconciled within a logical setting.
Rocco De Nicola
r.denicola@imtlucca.it
GianLuigi Ferrari
2011-06-10T14:15:52Z
2011-07-11T14:36:27Z
http://eprints.imtlucca.it/id/eprint/380
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/380
2011-06-10T14:15:52Z
A Partial Ordering Semantics for CCS
A new operational semantics for “pure” CCS is proposed that considers the parallel operator as a first class one, and permits a description of the calculus in terms of partial orderings. The new semantics (also for unguarded agents) is given in the SOS style via the partial ordering derivation relation. CCS agents are decomposed into sets of sequential subagents. The new derivations relate sets of subagents, and describe their actions and the causal dependencies among them. The computations obtained by composing partial ordering derivations are “observed” either as interleavings or partial orderings of events. Interleavings coincide with Milner's many step derivations, and “linearizations” of partial orderings are all and only interleavings. Abstract semantics are obtained by introducing two relations of observational equivalence and congruence that preserve concurrency. These relations are finer than Milner's in that they distinguish interleaving of sequential nondeterministic agents from their concurrent execution.
Pierpaolo Degano
Rocco De Nicola
r.denicola@imtlucca.it
Ugo Montanari
2011-06-10T13:38:29Z
2011-07-11T14:36:27Z
http://eprints.imtlucca.it/id/eprint/368
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/368
2011-06-10T13:38:29Z
Distribution and Locality of Concurrent Systems
A new semantics for process description languages that discriminates according to the distribution in space of processes is proposed. The semantics is based on a set of distributed transition rules that record spatial information and on a notion of equivalence that discriminates according to which actions processes can perform and where these actions are performed. The new semantics is proven to coincide with the locality equivalence of Boudol, Castellani, Hennessy and Kiehn. Over the latter, it has the advantage of not requiring explicit introduction of an (infinite) space of locations; this makes the new equivalence amenable to a mechanical treatment in the same vein as the classical bisimulation-based equivalences. Indeed, we propose a polynomial time algorithm for checking locality equivalence of processes.
Flavio Corradini
Rocco De Nicola
r.denicola@imtlucca.it
2011-06-10T13:33:28Z
2011-07-11T14:36:27Z
http://eprints.imtlucca.it/id/eprint/374
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/374
2011-06-10T13:33:28Z
An action based framework for verifying logical and behavioural properties of concurrent systems
A system is described which supports proofs of both behavioural and logical properties of concurrent systems; these are specified by means of a process algebra and its associated logic. The logic is an action-based version of the branching time logic CTL which we call ACTL; it is interpreted over transition-labelled structures while CTL is interpreted over state-labelled ones. The core of the system consists of two existing tools, AUTO and EMC. The first builds the labelled transition system corresponding to a term of a process algebra and permits proof of equivalence and simplification of terms, while the second checks the validity of CTL logical formulae. The integration is realized by means of two translation functions, from the action-based branching time logic ACTL to CTL and from transition-labelled to state-labelled structures. The correctness of the integration is guaranteed by the proof that the two functions, when coupled, preserve satisfiability of logical formulae.
Rocco De Nicola
r.denicola@imtlucca.it
Alessandro Fantechi
Stefania Gnesi
Gioia Ristori
2011-06-10T13:21:13Z
2011-07-11T14:36:27Z
http://eprints.imtlucca.it/id/eprint/362
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/362
2011-06-10T13:21:13Z
Verifying hardware components within JACK
JACK (the acronym for Just Another Concurrency Kit) is a workbench integrating a set of verification tools for concurrent system specifications, supported by a graphical interface offering facilities to use these tools separately or in combination. The environment offers several functionalities to support the design, analysis and verification of systems specified using process algebras. In this paper we use JACK to formally specify the hardware components of a buffer system. Then we verify, by using the checking capabilities of JACK, the correctness of the specification with respect to some safety requirements, expressed in the action based temporal logic ACTL.
Alessandro Fantechi
Rocco De Nicola
r.denicola@imtlucca.it
Stefania Gnesi
Salvatore Larosa
Gioia Ristori
2011-06-10T12:33:03Z
2011-07-11T14:36:27Z
http://eprints.imtlucca.it/id/eprint/376
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/376
2011-06-10T12:33:03Z
Back and Forth Bisimulations
This paper is concerned with bisimulation relations which do not only require related agents to simulate each other's behavior in the direction of the arrows, but also to simulate each other when going back in history. First it is demonstrated that the back and forth variant of strong bisimulation leads to the same equivalence as the ordinary notion of strong bisimulation. Then it is shown that the back and forth variant of Milner's observation equivalence is different from (and finer than) observation equivalence. In fact we prove that it coincides with the branching bisimulation equivalence of Van Glabbeek & Weijland. Also the back and forth variants of branching, η and delay bisimulation lead to branching bisimulation equivalence. The notion of back and forth bisimulation moreover leads to characterizations of branching bisimulation in terms of abstraction homomorphisms and in terms of Hennessy-Milner logic with backward modalities. In our view these results support the claim that branching bisimulation is a natural and important notion.
Rocco De Nicola
r.denicola@imtlucca.it
Ugo Montanari
Frits W. Vaandrager
2011-06-10T12:28:33Z
2014-01-15T10:32:29Z
http://eprints.imtlucca.it/id/eprint/352
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/352
2011-06-10T12:28:33Z
Coordinating Mobile Agents via Blackboards and Access Rights
LLinda (Locality based Linda) is a variant of Linda which supports a programming paradigm where agents can migrate from one computing environment to another. In this paper, we define a type system for LLinda that permits statically checking access rights violations of mobile agents. Types are used to describe processes' intentions (read, write, execute, ...) relative to the different localities they are willing to interact with or they want to migrate to. The type system is used to determine the operations that processes want to perform at each locality, to check whether they comply with the declared intentions and whether they have the necessary rights to perform the intended operations at the specific localities.
Rocco De Nicola
r.denicola@imtlucca.it
GianLuigi Ferrari
Rosario Pugliese
2011-06-10T12:20:53Z
2014-01-15T10:23:37Z
http://eprints.imtlucca.it/id/eprint/351
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/351
2011-06-10T12:20:53Z
KLAIM: A Kernel Language for Agents Interaction and Mobility
We investigate the issue of designing a kernel programming language for mobile computing and describe KLAIM, a language that supports a programming paradigm where processes, like data, can be moved from one computing environment to another. The language consists of a core Linda with multiple tuple spaces and of a set of operators for building processes. KLAIM naturally supports programming with explicit localities. Localities are first-class data (they can be manipulated like any other data), but the language provides coordination mechanisms to control the interaction protocols among located processes. The formal operational semantics is useful for discussing the design of the language and provides guidelines for implementations. KLAIM is equipped with a type system that statically checks access rights violations of mobile agents. Types are used to describe the intentions (read, write, execute, etc.) of processes in relation to the various localities. The type system is used to determine the operations that processes want to perform at each locality, and to check whether they comply with the declared intentions and whether they have the necessary rights to perform the intended operations at the specific localities. Via a series of examples, we show that many mobile code programming paradigms can be naturally implemented in our kernel language. We also present a prototype implementation of KLAIM in Java.
Rocco De Nicola
r.denicola@imtlucca.it
GianLuigi Ferrari
Rosario Pugliese
2011-06-09T10:24:51Z
2011-07-11T14:36:26Z
http://eprints.imtlucca.it/id/eprint/355
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/355
2011-06-09T10:24:51Z
Locality Based Semantics for Process Algebras
A general framework proposed by Degano, De Nicola and Montanari has proved fruitful for defining, in a natural way, non-interleaving semantics based on causality for process description languages. The framework relies on a decomposition function, used to obtain from a parallel term the set of its sequential processes, and on a set of distributed transition rules carrying information about the actions processes can perform and their location. In this paper we show that semantics discriminating according to the space distribution of processes can also be formulated in a natural way within this framework. Two new semantics are proposed. The first one is based on an alternative characterization of the locality equivalence of Boudol, Castellani, Hennessy and Kiehn. Over the latter, our equivalence has the advantage of not requiring explicit introduction of an (infinite) space of locations; this makes it amenable to a mechanical treatment in the same vein as the classical bisimulation-based equivalences. The second semantics is proposed via a direct generalization of Castellani and Hennessy's distributed equivalence to languages with global scoping operators.
Flavio Corradini
Rocco De Nicola
r.denicola@imtlucca.it
2011-06-09T10:03:21Z
2011-07-11T14:36:26Z
http://eprints.imtlucca.it/id/eprint/342
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/342
2011-06-09T10:03:21Z
Proof Techniques for Cryptographic Processes
Contextual equivalences for cryptographic process calculi can be used to reason about correctness of protocols, but their definition suffers from quantification over all possible contexts. Here, we focus on two such equivalences, may-testing and barbed equivalence, and investigate tractable proof methods for them. To this aim, we develop an `environment-sensitive' labelled transition system, where transitions are constrained by the knowledge the environment has of names and keys. On top of the new transition system, a trace equivalence and a co-inductive weak bisimulation equivalence are defined, both of which avoid quantification over contexts. Our main results are soundness of trace semantics and of weak bisimulation with respect to may-testing and barbed equivalence, respectively. This leads to more direct proof methods for equivalence checking. The use of such methods is illustrated via a few examples concerning implementation of secure channels by means of encrypted public channels. We also consider a variant of the labelled transition system that gives completeness, but is less handy to use.
Michele Boreale
Rocco De Nicola
r.denicola@imtlucca.it
Rosario Pugliese
2011-06-09T09:55:06Z
2014-01-15T10:23:07Z
http://eprints.imtlucca.it/id/eprint/338
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/338
2011-06-09T09:55:06Z
Types for access control
Klaim is an experimental programming language that supports a programming paradigm where both processes and data can be moved across different computing environments. The language relies on the use of explicit localities, and on allocation environments that associate logical localities to physical sites. This paper presents the mathematical foundations of the Klaim type system; this system permits statically checking the access rights violations of mobile agents. Types are used to describe the intentions (read, write, execute, ...) of processes relative to the different localities that they are willing to interact with, or that they want to migrate to. Type checking then determines whether processes comply with the declared intentions, and whether they have been assigned the necessary rights to perform the intended operations at the specified localities. The Klaim type system encompasses both subtyping and recursively defined types. The former occurs naturally when considering hierarchies of access rights, while the latter are needed to model migration of recursive processes.
Rocco De Nicola
r.denicola@imtlucca.it
GianLuigi Ferrari
Rosario Pugliese
Betti Venneri
2011-06-09T09:45:20Z
2011-07-11T14:36:26Z
http://eprints.imtlucca.it/id/eprint/344
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/344
2011-06-09T09:45:20Z
A finite axiomatization of nondeterministic regular expressions
An alternative (tree-based) semantics for a class of regular expressions is proposed that assigns a central rôle to the + operator and thus to nondeterminism and nondeterministic choice. For the new semantics a consistent and complete axiomatization is obtained from the original axiomatization of regular expressions by Salomaa and by Kozen by dropping the idempotence law for + and the distribution law of • over +.
Flavio Corradini
Rocco De Nicola
r.denicola@imtlucca.it
Anna Labella
2011-06-09T09:24:10Z
2011-07-11T14:36:26Z
http://eprints.imtlucca.it/id/eprint/337
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/337
2011-06-09T09:24:10Z
Linda-based applicative and imperative process algebras
The classical algebraic approach to the specification and verification of concurrent systems is tuned to distributed programs that rely on asynchronous communications and permit explicit data exchange. An applicative process algebra, obtained by embedding the Linda primitives for interprocess communication in a CCS/CSP-like language, and an imperative one, obtained from the applicative variant by adding a construct for explicit assignment of values to variables, are introduced. The testing framework is used to define behavioural equivalences for both languages and sound and complete proof systems for them are described together with a fully abstract denotational model (namely, a variant of Strong Acceptance Trees).
Rocco De Nicola
r.denicola@imtlucca.it
Rosario Pugliese
2011-06-09T09:20:22Z
2011-07-11T14:36:26Z
http://eprints.imtlucca.it/id/eprint/335
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/335
2011-06-09T09:20:22Z
Process Algebraic Analysis of Cryptographic Protocols
Recent approaches to the analysis of crypto-protocols build on concepts which are well-established in the field of process algebras, such as labelled transition systems (lts) and observational semantics. We outline some recent work in this direction that stems from using cryptographic versions of the pi-calculus -- most notably Abadi and Gordon's spi-calculus -- as protocol description languages. We show the impact of these approaches on a specific example, a simplified version of the Kerberos protocol.
Michele Boreale
Rocco De Nicola
r.denicola@imtlucca.it
Rosario Pugliese
2011-06-09T08:43:49Z
2011-07-11T14:36:26Z
http://eprints.imtlucca.it/id/eprint/334
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/334
2011-06-09T08:43:49Z
Proving the Correctness of Optimising Destructive and Non-destructive Reads over Tuple Spaces
In this paper we describe the proof of an optimisation that can be applied to tuple space based run-time systems (as used in Linda). The optimisation allows, under certain circumstances, for a tuple that has been destructively removed from a shared tuple space (for example, by a Linda in) to be returned as the result for a non-destructive read (for example, a Linda rd) for a different process. The optimisation has been successfully used in a prototype run-time system.
Rocco De Nicola
r.denicola@imtlucca.it
Rosario Pugliese
Antony I. T. Rowstron
2011-06-09T08:19:03Z
2011-07-11T14:36:26Z
http://eprints.imtlucca.it/id/eprint/331
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/331
2011-06-09T08:19:03Z
Divergence in testing and readiness semantics
Many variants of must-testing semantics have been put forward that are equally sensitive to deadlock, but differ in the stress they put on divergence, i.e. on the possibility for systems to engage in infinite internal computations. Safe-testing is one such variant, that naturally pops up when studying the behavioural pre-congruences induced by certain basic observables. Here, we study the relationship between safe-testing and Olderog's readiness semantics, a semantics induced by a natural process logic. We show that safe-testing is finer than readiness, and coincides with a refinement of readiness obtained by tuning Olderog's definition. For both safe-testing and the original readiness semantics we propose simple complete axiomatizations, which permit a fuller appreciation of their similarities and differences.
Michele Boreale
Rocco De Nicola
r.denicola@imtlucca.it
Rosario Pugliese
2011-06-08T14:11:04Z
2011-07-11T14:36:27Z
http://eprints.imtlucca.it/id/eprint/361
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/361
2011-06-08T14:11:04Z
A Symbolic Semantics for the pi-Calculus
We use symbolic transition systems as a basis for providing the π-calculus with an alternative semantics. The latter is more amenable to automatic manipulation and sheds light on the logical differences among different forms of bisimulation over algebras of name-passing processes. Symbolic transitions have the form [formula], where φ is a boolean combination of equalities on names that has to hold for the transition to take place, and α is a standard π-calculus action. On top of the symbolic transition system, a symbolic bisimulation is defined that captures the standard ones. Finally, a sound and complete proof system is introduced for symbolic bisimulation.
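A schematic reading of the condition side of such a transition, in my own formulation: the boolean combination φ is flattened here to a conjunction of name equalities, and a concrete transition exists under a substitution exactly when the substitution satisfies φ.

```python
# phi: list of (name, name) equalities; sigma: substitution on names,
# given as a dict (names absent from sigma map to themselves).
def satisfies(phi, sigma):
    return all(sigma.get(a, a) == sigma.get(b, b) for a, b in phi)
```

For example, the condition [x = y] holds under a substitution fusing x and y to the same name, and fails under one that keeps them distinct.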
Michele Boreale
Rocco De Nicola
r.denicola@imtlucca.it
2011-06-08T13:32:07Z
2011-07-11T14:36:26Z
http://eprints.imtlucca.it/id/eprint/320
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/320
2011-06-08T13:32:07Z
Formalizing Properties of Mobile Agent Systems
The widespread diffusion of the Internet has stimulated the introduction of new programming paradigms and languages that model interactions among hosts by means of mobile agents and that are centered around the notion of location awareness. In this paper we show how to use formal tools, specifically a modal logic, for formalizing properties of mobile agent systems. We concentrate on one of these new languages, Klaim, and we use it to specify a system that permits maintaining the software installed on several heterogeneous computers distributed over a network by taking advantage of the mobile agent paradigm.
Lorenzo Bettini
Rocco De Nicola
r.denicola@imtlucca.it
Michele Loreti
2011-06-08T13:28:05Z
2011-07-11T14:36:26Z
http://eprints.imtlucca.it/id/eprint/319
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/319
2011-06-08T13:28:05Z
Nondeterministic regular expressions as solutions of equational systems
We define the class of the linear systems whose solution is expressible as a tuple of nondeterministic regular expressions when they are interpreted as trees of actions rather than as sets of sequences. We precisely characterize those systems that have a regular expression as "canonical" solution, and show that any regular expression can be obtained as a canonical solution of a system of the defined class.
Rocco De Nicola
r.denicola@imtlucca.it
Anna Labella
2011-06-06T15:06:07Z
2014-01-15T10:29:16Z
http://eprints.imtlucca.it/id/eprint/317
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/317
2011-06-06T15:06:07Z
The Klaim Project: Theory and Practice
Klaim (Kernel Language for Agents Interaction and Mobility) is an experimental language specifically designed to program distributed systems consisting of several mobile components that interact through multiple distributed tuple spaces. Klaim primitives allow programmers to distribute and retrieve data and processes to and from the nodes of a net. Moreover, localities are first-class citizens that can be dynamically created and communicated over the network. Components, both stationary and mobile, can explicitly refer to and control the spatial structures of the network.
This paper reports the experiences in the design and development of Klaim. Its main purpose is to outline the theoretical foundations of the main features of Klaim and its programming model. We also present a modal logic that permits reasoning about behavioural properties of systems, and various type systems that help in controlling agent movements and actions. Extensions of the language in the direction of object-oriented programming are also discussed, together with a description of the implementation efforts that have led to the current prototypes.
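The core ingredients just listed can be caricatured in a few lines. This is an illustrative sketch, not Klaim syntax: a net of named localities, each holding its own tuple space; operations are indexed by the locality they act on, and localities are ordinary values that can travel inside tuples.

```python
class Net:
    def __init__(self):
        self.sites = {}

    def new_loc(self, name):
        # dynamically create a node with an empty tuple space;
        # the locality is returned as a plain, first-class value
        self.sites[name] = []
        return name

    def out(self, loc, tup):
        # place data at a (possibly remote) node
        self.sites[loc].append(tup)

    def in_(self, loc, template):
        # withdraw the first tuple at `loc` matching the template
        # (None acts as a wildcard field)
        for tup in self.sites[loc]:
            if len(tup) == len(template) and all(
                    p is None or p == v for p, v in zip(template, tup)):
                self.sites[loc].remove(tup)
                return tup
        return None
```

Storing a locality inside a tuple, as in `net.out(l1, ("service", l2))`, mirrors the idea that localities are data that can be communicated over the network.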
Lorenzo Bettini
Viviana Bono
Rocco De Nicola
r.denicola@imtlucca.it
GianLuigi Ferrari
Daniele Gorla
Michele Loreti
Eugenio Moggi
Rosario Pugliese
Emilio Tuosto
Betti Venneri
2011-06-06T14:21:20Z
2011-07-11T14:36:27Z
http://eprints.imtlucca.it/id/eprint/395
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/395
2011-06-06T14:21:20Z
Testing Equivalence for Processes
Given a set of processes and a set of tests on these processes, we show how to define in a natural way three different equivalences on processes. These equivalences are applied to a particular language, CCS. We give associated complete proof systems and fully abstract models. These models have a simple representation in terms of trees.
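One simple concretisation of the may side of testing (my formulation, under strong simplifying assumptions): viewing a process as its set of finite traces, a test is "may-passed" if some computation exhibits it, and two processes are equated when all tests agree.

```python
def may_pass(traces, test):
    # some computation of the process exhibits the test as a prefix
    return any(t[:len(test)] == test for t in traces)

def may_equivalent(traces1, traces2, tests):
    # the processes are indistinguishable by every test in the set
    return all(may_pass(traces1, t) == may_pass(traces2, t) for t in tests)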
Rocco De Nicola
r.denicola@imtlucca.it
Matthew Hennessy
2011-06-06T14:02:38Z
2014-01-15T10:31:44Z
http://eprints.imtlucca.it/id/eprint/343
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/343
2011-06-06T14:02:38Z
Types as Specifications of Access Policies
Mobility is a key concept for network programming; it has stimulated much research about new programming languages and paradigms. In the design of programming languages for mobile agents, i.e. processes which can migrate and execute on new hosts, the integration of security mechanisms is a major challenge. This paper presents the security mechanisms of the programming language Klaim (a Kernel Language for Agents Interaction and Mobility). The language, by making use of a capability-based type system, provides direct support for expressing and enforcing policies that control access to resources and data.
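A hypothetical rendering of the capability-based check described above (the names and shape of the data are mine, not the paper's type system): a process type maps localities to the capabilities (e.g. read, out, eval) the process intends to use there, and the check accepts only intentions covered by the rights granted for each locality.

```python
def complies(intentions, rights):
    """intentions, rights: dicts mapping locality -> set of capabilities.

    A process complies when, at every locality it mentions, its intended
    capabilities are a subset of the rights it has been assigned there.
    """
    return all(caps <= rights.get(loc, set())
               for loc, caps in intentions.items())
```

Subtyping on access rights corresponds here to plain set inclusion: a process granted {"read", "out"} at a locality also passes any check requiring only {"read"}.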
Rocco De Nicola
r.denicola@imtlucca.it
GianLuigi Ferrari
Rosario Pugliese
2011-06-06T13:51:59Z
2011-07-11T14:36:27Z
http://eprints.imtlucca.it/id/eprint/394
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/394
2011-06-06T13:51:59Z
A Complete Set of Axioms for a Theory of Communicating Sequential Processes
Rocco De Nicola
r.denicola@imtlucca.it
2011-06-06T13:48:29Z
2011-07-11T14:36:26Z
http://eprints.imtlucca.it/id/eprint/341
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/341
2011-06-06T13:48:29Z
A Theory of "May" Testing for Asynchronous Languages
Asynchronous communication mechanisms are usually a basic ingredient of distributed systems and protocols. For these systems, asynchronous may-based testing seems to be exactly what is needed to capture safety and certain security properties. We study may testing equivalence focusing on the asynchronous versions of CCS and π-calculus. We start from an operational testing preorder and provide finitary and fully abstract trace-based interpretations for it, together with complete inequational axiomatizations. The results throw light on the differences between synchronous and asynchronous systems and on the weaker testing power of asynchronous observations.
Michele Boreale
Rocco De Nicola
r.denicola@imtlucca.it
Rosario Pugliese
2011-06-06T13:38:51Z
2011-07-11T14:36:26Z
http://eprints.imtlucca.it/id/eprint/340
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/340
2011-06-06T13:38:51Z
Graded Modalities and Resource Bisimulation
The logical characterization of the strong and the weak (ignoring silent actions) versions of resource bisimulation are studied. The temporal logics we introduce are variants of Hennessy-Milner Logics that use graded modalities instead of the classical box and diamond operators. The considered strong bisimulation induces an equivalence that, when applied to labelled transition systems, permits identifying all and only those systems that give rise to isomorphic unfoldings. Strong resource bisimulation has been used to provide nondeterministic interpretation of finite regular expressions and new axiomatizations for them. Here we generalize this result to its weak variant.
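One possible reading of a graded modality over an LTS (illustrative; the encoding is assumed, not taken from the paper): the graded diamond ⟨a⟩_k φ holds in a state with at least k distinct a-successors satisfying φ, refining the classical diamond, which is the case k = 1.

```python
def graded_diamond(succ, state, action, k, phi):
    """succ: dict mapping (state, action) -> set of successor states;
    phi: predicate on states. Counts distinct satisfying successors."""
    return len({s for s in succ.get((state, action), set()) if phi(s)}) >= k
```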
Flavio Corradini
Rocco De Nicola
r.denicola@imtlucca.it
Anna Labella
2011-06-06T13:36:03Z
2011-07-11T14:36:26Z
http://eprints.imtlucca.it/id/eprint/339
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/339
2011-06-06T13:36:03Z
Coordination and Access Control of Mobile Agents
Klaim (a Kernel Language for Agents Interaction and Mobility) [1] is an experimental programming language specifically designed for programming mobile agents that supports a programming paradigm where both processes and data can be moved across different computing environments. The language relies on the use of explicit localities, and on allocation environments that associate logical localities to physical sites. The language consists of core Linda with multiple located tuple spaces and of a set of process operators, borrowed from Milner’s CCS. Klaim tuple spaces and processes are distributed over different localities, which are considered as first-class data. Linda operations are indexed with the locations of the tuple space they operate on. This allows programmers to distribute/retrieve data and processes over/from different nodes directly. Programmers share their control with what we call the net coordinators. Net coordinators describe the distributed infrastructure necessary for managing physical distribution of processes, allocation policies, and agents mobility. Klaim provides direct support for expressing and enforcing security policies that control access to resources and data. In particular, Klaim uses types to protect resources and data and to establish policies for access control. The type system guarantees that the operations that processes intend to perform at various network sites comply with the processes’ access rights [2, 3]. Types are used to describe the intentions (read, write, execute,...) of processes relative to the different localities that they are willing to interact with, or that they want to migrate to. Type checking then determines whether processes comply with the declared intentions, and whether they have been assigned the necessary rights to perform the intended operations at the specified localities. The Klaim type system encompasses both subtyping and recursively defined types. 
The former occurs naturally when considering hierarchies of access rights, while the latter are needed to model migration of recursive processes. We are currently working on extending both the language and the type system by introducing types for tuples (record types), notions of multi-level security (by structuring localities into levels of security), and public or shared keys to model dynamic transmission of access rights. Other ongoing research considers the extension of the language to deal with open systems and with hierarchical nets. The interested reader is referred to [4] for written material about our project, for related software (a Java implementation of Klaim is available), and for forthcoming additional written documentation.
Rocco De Nicola
r.denicola@imtlucca.it
2011-06-06T13:31:20Z
2011-07-11T14:36:25Z
http://eprints.imtlucca.it/id/eprint/309
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/309
2011-06-06T13:31:20Z
Mobile Distributed Programming in X-Klaim
Network-aware computing has called for new programming languages that exploit the mobility paradigm as a basic interaction mechanism. In this paper we present X-Klaim, an experimental programming language specifically designed to program distributed systems composed of several components interacting through multiple distributed tuple spaces and mobile code. The language consists of a set of coordination primitives inspired by Linda, a set of operators for building processes borrowed from process algebras, and a few classical constructs for sequential programming. X-Klaim naturally supports programming with explicit localities, which are first-class data that can be manipulated like any other data, and with coordination primitives that permit controlling interactions among located processes. Via a series of examples, we show that many mobile code programming paradigms can be naturally implemented by means of the considered language.
Lorenzo Bettini
Rocco De Nicola
r.denicola@imtlucca.it
2011-06-06T12:46:30Z
2011-07-11T14:36:25Z
http://eprints.imtlucca.it/id/eprint/306
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/306
2011-06-06T12:46:30Z
Languages and Process Calculi for Network Aware Programming - Short Summary -
We describe motivations and background behind the design of Klaim, a process description language that has proved to be suitable for describing a wide range of applications distributed over wide area networks with agents and code mobility. We argue that a drawback of Klaim is that it is neither a programming language, nor a process calculus. We then outline the two research directions we have recently pursued. On the one hand we have evolved Klaim to a full-fledged language for highly distributed mobile programming. On the other hand we have distilled the language to a number of simple calculi that we have used to define new semantic theories and equivalences and to test the impact of new operators for network aware programming.
Rocco De Nicola
r.denicola@imtlucca.it
2011-06-06T12:42:54Z
2011-07-11T14:36:25Z
http://eprints.imtlucca.it/id/eprint/303
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/303
2011-06-06T12:42:54Z
A Flexible and Modular Framework for Implementing Infrastructures for Global Computing
We present a Java software framework for building infrastructures to support the development of applications for systems where mobility and network awareness are key issues. The framework is particularly useful to develop run-time support for languages oriented towards global computing. It enables platform designers to customize communication protocols and network architectures and guarantees transparency of name management and code mobility in distributed environments. The key features are illustrated by means of a couple of simple case studies.
Lorenzo Bettini
Rocco De Nicola
r.denicola@imtlucca.it
Daniele Falassi
Marc Lacoste
Michele Loreti
2011-06-06T09:56:52Z
2014-01-15T10:24:01Z
http://eprints.imtlucca.it/id/eprint/302
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/302
2011-06-06T09:56:52Z
A Process Calculus for QoS-Aware Applications
The definition of suitable abstractions and models for identifying, understanding and managing Quality of Service (QoS) constraints is a challenging issue of the Service Oriented Computing paradigm. In this paper we introduce a process calculus where QoS attributes are first class objects. We identify a minimal set of primitives that allow capturing in an abstract way the ability to control and coordinate services in presence of QoS constraints.
Rocco De Nicola
r.denicola@imtlucca.it
GianLuigi Ferrari
Ugo Montanari
Rosario Pugliese
Emilio Tuosto
2011-06-06T09:50:50Z
2011-07-11T14:36:26Z
http://eprints.imtlucca.it/id/eprint/327
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/327
2011-06-06T09:50:50Z
Translating Strong Mobility into Weak Mobility
Mobile agents are software objects that can be transmitted over the net together with data and code, or can autonomously migrate to a remote computer and execute automatically on arrival. However, many frameworks and languages for mobile agents only provide weak mobility: agents do not resume their execution from the instruction following the migration action; instead, they are always restarted from a given point. In this paper we present a purely syntactic translation process for transforming programs that use strong mobility into programs that rely only on weak mobility, while preserving the original semantics. This transformation applies to programs written in a procedural language and can be adapted to other languages, like Java, that provide means to send data and code, but not the execution state. It has actually been exploited for implementing our language for mobile agents, X-Klaim, which has linguistic constructs for strong mobility.
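A rough sketch of the underlying idea, with names and structure assumed rather than taken from the paper: under weak mobility an agent always restarts from a single entry point, so a translation can ship the label of the next code block as plain data; the entry point then dispatches on that label, simulating resumption after the migration action without transferring execution state.

```python
def make_weak_agent(blocks, resume_label):
    """blocks: dict label -> function(state) -> (next_label, state).

    Returns the single entry point of a weakly mobile agent; it jumps
    to `resume_label` and threads the state through successive blocks
    until a block returns None as the next label."""
    def entry(state):
        label = resume_label
        while label is not None:
            label, state = blocks[label](state)
        return state
    return entry
```

Restarting the agent on the destination host with `resume_label` set to the block following the migration reproduces the behaviour a strongly mobile agent would have had on arrival.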
Lorenzo Bettini
Rocco De Nicola
r.denicola@imtlucca.it
2011-06-06T09:48:21Z
2011-07-11T14:36:26Z
http://eprints.imtlucca.it/id/eprint/326
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/326
2011-06-06T09:48:21Z
Klava: a Java package for distributed and mobile applications
Highly distributed networks have now become a common infrastructure for wide-area distributed applications whose key design principle is network awareness, namely the ability to deal with dynamic changes of the network environment. Network-aware computing has called for new programming languages that exploit the mobility paradigm as a basic interaction mechanism. In this paper we present the architecture of KLAVA, an experimental Java package for distributed applications and code mobility. We describe how KLAVA permits code mobility by relying on Java and present a few distributed applications that exploit mobile code programmed in KLAVA.
Lorenzo Bettini
Rocco De Nicola
r.denicola@imtlucca.it
Rosario Pugliese
2011-06-06T09:41:26Z
2011-07-11T14:36:26Z
http://eprints.imtlucca.it/id/eprint/325
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/325
2011-06-06T09:41:26Z
An Equational Axiomatization of Bisimulation over Regular Expressions
We provide a finite equational axiomatization for bisimulation equivalence of the nondeterministic interpretation of regular expressions. Our axiomatization is heavily based on the one by Salomaa, which provided an implicative axiomatization for a large subset of regular expressions, namely all those that satisfy the non-empty word property (i.e. without 1 summands at the top level) in *-contexts. Our restriction is similar: it essentially amounts to recursively requiring that the non-empty word property be satisfied not just at the top level but at any depth. We also discuss the impact on the axiomatization of different interpretations of the 0 term, interpreted either as a null process or as a deadlock.
Flavio Corradini
Rocco De Nicola
r.denicola@imtlucca.it
Anna Labella
2011-06-06T09:03:49Z
2011-09-19T10:00:54Z
http://eprints.imtlucca.it/id/eprint/323
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/323
2011-06-06T09:03:49Z
AGILE: Software Architecture for Mobility
Architecture-based approaches have been promoted as a means of controlling the complexity of system construction and evolution, in particular for providing systems with the agility required to operate in turbulent environments and to adapt very quickly to changes in the enterprise world. Recent technological advances in communication and distribution have made mobility an additional factor of complexity, one for which current architectural concepts and techniques can hardly be used. The AGILE project is developing an architectural approach in which mobility aspects can be modelled explicitly and mapped on the distribution and communication topology made available at the physical levels. The whole approach is developed over a uniform mathematical framework based on graph-oriented techniques that support sound methodological principles, formal analysis, and refinement. This paper describes the AGILE project and some of the results gained during the first project year.
Luis Filipe Andrade
Paolo Baldan
Hubert Baumeister
Roberto Bruni
Andrea Corradini
Rocco De Nicola
r.denicola@imtlucca.it
Jose Luiz Fiadeiro
Fabio Gadducci
Stefania Gnesi
Piotr Hoffman
Nora Koch
Piotr Kosiuczenko
Alessandro Lapadula
Diego Latella
Antonia Lopes
Michele Loreti
Mieke Massink
Franco Mazzanti
Ugo Montanari
Cristóvão Oliveira
Rosario Pugliese
Andrzej Tarlecki
Michel Wermelinger
Martin Wirsing
Artur Zawlocki
2011-06-06T08:53:15Z
2011-07-11T14:36:26Z
http://eprints.imtlucca.it/id/eprint/322
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/322
2011-06-06T08:53:15Z
Software update via mobile agent based programming
We describe a system that permits maintaining the software installed on several heterogeneous computers distributed over a network by taking advantage of the mobile agent paradigm. The applications are installed and updated only on the central server. When a new release of an application is installed on the server, agents are scattered along the network to update the application on the clients. To build a prototype system we use X-KLAIM, a programming language specifically designed to program distributed systems composed of several components interacting through multiple tuple spaces and mobile code.
Lorenzo Bettini
Rocco De Nicola
r.denicola@imtlucca.it
Michele Loreti
2011-06-06T08:45:50Z
2011-07-11T14:36:26Z
http://eprints.imtlucca.it/id/eprint/329
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/329
2011-06-06T08:45:50Z
Proof Techniques for Cryptographic Processes
Contextual equivalences for cryptographic process calculi, like the spi-calculus, can be used to reason about correctness of protocols, but their definition suffers from quantification over all possible contexts. Here, we focus on two such equivalences, namely may-testing and barbed equivalence, and investigate tractable proof methods for them. To this aim, we design an enriched labelled transition system, where transitions are constrained by the knowledge the environment has of names and keys. The new transition system is then used to define a trace equivalence and a weak bisimulation equivalence that avoid quantification over contexts. Our main results are soundness and completeness of trace and weak bisimulation equivalence with respect to may-testing and barbed equivalence, respectively. They lead to more direct proof methods for equivalence checking. The use of these methods is illustrated with a few examples concerning implementation of secure channels and verification of protocol correctness.
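The environment-knowledge idea above can be pictured as a fixpoint computation; this is a deliberately simplified rendering of a standard notion, not the paper's transition system. Messages observed on the network enlarge what the environment knows, but a ciphertext's payload is learnt only once its key is already known.

```python
def knowledge_closure(known, ciphertexts):
    """known: initial set of names/keys the observer holds;
    ciphertexts: iterable of (payload, key) pairs seen on the net.
    Iterates to a fixpoint: learning one key may unlock further payloads."""
    known = set(known)
    changed = True
    while changed:
        changed = False
        for payload, key in ciphertexts:
            if key in known and payload not in known:
                known.add(payload)
                changed = True
    return known
```

Constraining transitions by such a knowledge set is what lets trace and bisimulation equivalences avoid quantifying over all attacker contexts.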
Michele Boreale
Rocco De Nicola
r.denicola@imtlucca.it
Rosario Pugliese
2011-06-06T08:38:52Z
2011-07-11T14:36:25Z
http://eprints.imtlucca.it/id/eprint/288
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/288
2011-06-06T08:38:52Z
Basic Observables for Probabilistic May Testing
The definition of behavioural preorders over process terms as the maximal (pre-)congruences induced by basic observables has proven to be a useful technique to define various preorders and equivalences in the non-probabilistic setting. In this paper, we consider probabilistic observables to define an observational semantics for a probabilistic process calculus. The resulting pre-congruence is proven to coincide with a probabilistic may preorder, which, in turn, corresponds to a natural probabilistic extension of the may testing preorder of De Nicola and Hennessy.
Maria Carla Palmeri
Rocco De Nicola
r.denicola@imtlucca.it
Mieke Massink
2011-06-06T08:24:16Z
2011-07-11T14:36:25Z
http://eprints.imtlucca.it/id/eprint/283
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/283
2011-06-06T08:24:16Z
Multiple-Labelled Transition Systems for nominal calculi and their logics
Action-labelled transition systems (LTSs) have proved to be a fundamental model for describing and proving properties of concurrent systems. In this paper we introduce Multiple-Labelled Transition Systems (MLTSs) as generalisations of LTSs that enable us to deal with system features that are becoming increasingly important when considering languages and models for network-aware programming. MLTSs enable us to describe not only the actions that systems can perform but also their usage of resources and their handling (creation, revelation, ...) of names; these are essential for modelling changing evaluation environments. We also introduce MoMo, a logic inspired by Hennessy–Milner Logic and the μ-calculus, that enables us to consider state properties in a distributed environment and the impact of actions and movements over the different sites. MoMo operators are interpreted over MLTSs, and both MLTSs and MoMo are used to provide a semantic framework to describe two basic calculi for mobile computing, namely μKlaim and the asynchronous π-calculus.
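One possible concretisation of a single MLTS step, with field names assumed for illustration: a transition carries, besides the usual action, the resources the step uses and the names it creates or reveals along the way.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class MLTSTransition:
    source: str
    action: str
    target: str
    resources: frozenset = frozenset()   # resources used by the step
    new_names: frozenset = frozenset()   # names created or revealed
```

An ordinary LTS transition is recovered as the special case where both extra components are empty.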
Rocco De Nicola
r.denicola@imtlucca.it
Michele Loreti
2011-06-06T08:15:00Z
2014-10-08T09:16:27Z
http://eprints.imtlucca.it/id/eprint/289
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/289
2011-06-06T08:15:00Z
Multi labelled transition systems: a semantic framework for nominal calculi
Action-labelled transition systems (LTS) have proved to be a fundamental model for describing and proving properties of concurrent systems. In this paper, Multiple Labelled Transition Systems (MLTS) are introduced as generalizations of LTS that also permit dealing with system features that are becoming more and more important when considering languages and models for network-aware programming. MLTS permit describing not only the actions systems can perform but also their usage of resources and their handling (creation, revelation, ...) of names. To show the adequacy of our proposal, we show how MLTS can be used to describe the operational semantics of one of the most studied calculi for mobility: the asynchronous π-calculus.
Rocco De Nicola
r.denicola@imtlucca.it
Michele Loreti
2011-06-06T08:06:32Z
2014-10-07T14:45:22Z
http://eprints.imtlucca.it/id/eprint/328
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/328
2011-06-06T08:06:32Z
XKlaim and Klava: Programming Mobile Code
Highly distributed networks have now become a common infrastructure for a new kind of wide-area distributed applications whose key design principle is network awareness, namely the ability to deal with dynamic changes of the network environment. Network-aware computing has called for new programming languages that exploit the mobility paradigm as the basic interaction mechanism. In this paper we present the Klaim (Kernel Language for Agent Interaction and Mobility) framework for programming mobile code applications, namely the X-Klaim programming language and the Java-based run-time system Klava. In particular, we illustrate how Klava handles mobile code. Finally, we show an example implemented using this framework.
Lorenzo Bettini
Rocco De Nicola
r.denicola@imtlucca.it
Rosario Pugliese
2011-06-03T14:57:15Z
2011-07-11T14:36:25Z
http://eprints.imtlucca.it/id/eprint/308
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/308
2011-06-03T14:57:15Z
Formal modeling and quantitative analysis of KLAIM-based mobile systems
KLAIM is an experimental language designed for modeling and programming distributed systems composed of mobile components, where distribution awareness and dynamic system architecture configuration are key issues. In this paper we propose STOcKLAIM, a STOchastic extension of cKLAIM, the core subset of KLAIM. cKLAIM includes process distribution, process mobility, and asynchronous communication. The extension makes it possible to integrate the modeling of quantitative aspects of mobile systems---e.g. performance---with the functional specification of such systems. We present a formal operational semantics of STOcKLAIM, which associates a labelled transition system to each STOcKLAIM network, and a translation to Continuous Time Markov Chains for quantitative analysis. We also show how STOcKLAIM can be used by means of a simple example, i.e. the modeling of the spreading of a virus.
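A back-of-the-envelope view of the CTMC target of such a translation (illustrative only; this is the standard race condition for exponential rates, not the paper's construction): every enabled action carries an exponential rate, the sojourn time in a state is exponential in the sum of the rates, and each action wins the race with probability rate/total.

```python
def ctmc_state(rates):
    """rates: dict mapping action -> exponential rate (> 0).

    Returns the exit rate, the mean sojourn time, and the probability
    with which each competing action is the one that fires."""
    total = sum(rates.values())
    return {
        "exit_rate": total,
        "mean_sojourn": 1.0 / total,
        "jump_probs": {a: r / total for a, r in rates.items()},
    }
```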
Rocco De Nicola
r.denicola@imtlucca.it
Diego Latella
Mieke Massink
2011-06-03T14:23:10Z
2011-07-11T14:36:26Z
http://eprints.imtlucca.it/id/eprint/321
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/321
2011-06-03T14:23:10Z
A Java Middleware for Guaranteeing Privacy of Distributed Tuple Spaces
The tuple space communication model, such as the one used in Linda, provides great flexibility for modeling concurrent, distributed and mobile processes. In a distributed setting with mobile agents, particular attention is needed for protecting sites and information. We have designed and developed a Java middleware, Klava, for implementing distributed tuple spaces and operations to support agent interaction and mobility. In this paper, we extend the Klava middleware with cryptographic primitives that enable encryption and decryption of tuple fields. We describe the actual implementation of the new primitives and provide a few examples. The proposed extension is general enough to be applied to similar Java frameworks using multiple distributed tuple spaces and possibly dealing with mobility.
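The field-level idea can be illustrated in miniature. This is a toy only: it is not the Klava API, and the XOR keystream below is a stand-in for a real cipher. Sensitive fields are wrapped before being emitted into the tuple space and unwrapped by readers holding the key, while the tuple's other fields stay in the clear for matching.

```python
def _xor(data: bytes, key: bytes) -> bytes:
    # toy symmetric transform: applying it twice restores the input
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

class EncField:
    """A tuple field stored in encrypted form."""
    def __init__(self, plaintext: str, key: bytes):
        self.blob = _xor(plaintext.encode(), key)

    def decrypt(self, key: bytes) -> str:
        return _xor(self.blob, key).decode()
```

A producer would then emit a tuple such as `("order", EncField("secret", key))`: the tag field remains matchable, while only key holders can recover the payload.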
Lorenzo Bettini
Rocco De Nicola
r.denicola@imtlucca.it
2011-06-01T09:33:19Z
2011-07-11T14:36:26Z
http://eprints.imtlucca.it/id/eprint/324
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/324
2011-06-01T09:33:19Z
Trace and Testing Equivalence on Asynchronous Processes
We study trace and may-testing equivalences in the asynchronous versions of CCS and π-calculus. We start from the operational definition of the may-testing preorder and provide finitary and fully abstract trace-based characterizations for it, along with a complete inequational proof system. We also touch upon two variants of this theory by first considering a more demanding equivalence notion (must-testing) and then a richer version of asynchronous CCS. The results throw light on the difference between synchronous and asynchronous communication and on the weaker testing power of asynchronous observations.
Michele Boreale
Rocco De Nicola
r.denicola@imtlucca.it
Rosario Pugliese
2011-06-01T09:24:50Z
2011-07-11T14:36:26Z
http://eprints.imtlucca.it/id/eprint/332
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/332
2011-06-01T09:24:50Z
A Modal Logic for KLAIM
Klaim is an experimental programming language that supports a programming paradigm where both processes and data can be moved across different computing environments. The language relies on the use of explicit localities, and on allocation environments that associate logical localities to physical sites. This paper presents a temporal logic for specifying properties of Klaim programs. The logic is inspired by Hennessy-Milner Logic (HML) and the ν-calculus, but has novel features that permit dealing with state properties to describe the effect of actions over the different sites. The logic is equipped with a consistent and complete proof system that enables one to prove properties of mobile systems.
Rocco De Nicola
r.denicola@imtlucca.it
Michele Loreti
2011-05-31T12:34:16Z
2011-07-11T14:36:26Z
http://eprints.imtlucca.it/id/eprint/330
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/330
2011-05-31T12:34:16Z
Algebraic characterizations of trace and decorated trace equivalences over tree-like structures
Behavioural equivalences of labelled transition systems are characterized in terms of homomorphic transformations. This permits relying on algebraic techniques for proving systems properties and reduces equivalence checking of two systems to studying the relationships among the elements of their structures. Different algebraic characterizations of bisimulation-based equivalences in terms of particular transition system homomorphisms have been proposed in the literature. Here, it is shown that trace and decorated trace equivalences can neither be characterized in terms of transition system homomorphisms, nor be defined locally, i.e., only in terms of action sequences of bounded length and of root-preserving maps. However, results similar to those for bisimulation can be obtained for restricted classes of transition systems. For tree-like systems, we present the algebraic characterizations of trace equivalence and of three well-known decorated trace equivalences, namely ready, ready trace equivalence and failure.
Xiao Jun Chen
Rocco De Nicola
r.denicola@imtlucca.it
2011-05-31T12:27:28Z
2011-07-11T14:36:27Z
http://eprints.imtlucca.it/id/eprint/379
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/379
2011-05-31T12:27:28Z
Action versus State based Logics for Transition Systems
A temporal logic based on actions rather than on states is presented and interpreted over labelled transition systems. It is proved that it has essentially the same power as CTL*, a temporal logic interpreted over Kripke structures. The relationship between the two logics is established by introducing two mappings from Kripke structures to labelled transition systems and vice versa, and two transformation functions between the two logics which preserve truth. A branching time version of the action based logic is also introduced. This new logic for transition systems can play an important role as an intermediate between Hennessy-Milner Logic and the modal μ-calculus. It is sufficiently expressive to describe safety and liveness properties but permits model checking in linear time.
Rocco De Nicola
r.denicola@imtlucca.it
Frits W. Vaandrager
2011-05-31T12:07:34Z
2014-01-15T10:28:52Z
http://eprints.imtlucca.it/id/eprint/318
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/318
2011-05-31T12:07:34Z
A Formal Basis for Reasoning on Programmable QoS
The explicit management of Quality of Service (QoS) of network connectivity, such as working cost, transaction support, and security, is a key requirement for the development of novel wide area network applications. In this paper, we introduce a foundational model for the specification of QoS attributes at application level. The model handles QoS attributes as semantic constraints within a graphical calculus for mobility. In our approach, QoS attributes are related to the programming abstractions and are exploited to select, configure and dynamically modify the underlying system-oriented QoS mechanisms.
Rocco De Nicola
r.denicola@imtlucca.it
GianLuigi Ferrari
Ugo Montanari
Rosario Pugliese
Emilio Tuosto
2011-05-31T10:06:10Z
2011-07-11T14:36:27Z
http://eprints.imtlucca.it/id/eprint/383
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/383
2011-05-31T10:06:10Z
Partial orderings descriptions and observations of nondeterministic concurrent processes
A methodology is introduced for defining truly concurrent semantics of processes as equivalence classes of Labelled Event Structures (LES). The construction of a LES providing the operational semantics of systems consists of three main steps. First, systems are decomposed into sets of sequential processes and a set of rewriting rules is introduced which describe both the actions sequential processes may perform and their causal relation. Then, the rewriting rules are used to build an occurrence net. Finally, the required event structure is easily derived from the occurrence net. As a test case, a partial ordering operational semantics is introduced first for a subset of Milner's CCS and then for the whole calculus. The proposed semantics are consistent with the original interleaving semantics of the calculus and are able to capture all and only the parallelism present in its multiset semantics. In order to obtain more abstract semantic definitions, new notions of observational equivalence on Labelled Event Structures are introduced that preserve both concurrency and nondeterminism.
Pierpaolo Degano
Rocco De Nicola
r.denicola@imtlucca.it
Ugo Montanari
2011-05-27T14:23:50Z
2016-04-06T07:58:40Z
http://eprints.imtlucca.it/id/eprint/294
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/294
2011-05-27T14:23:50Z
SENSORIA Process Calculi for Service-Oriented Computing
The IST-FET Integrated Project Sensoria aims at developing a novel comprehensive approach to the engineering of service-oriented software systems where foundational theories, techniques and methods are fully integrated in a pragmatic software engineering approach. Process calculi and logical methods serve as the main mathematical basis of the Sensoria approach. In this paper we first give a short overview of Sensoria and then focus on process calculi for service-oriented computing. The Service Centered Calculus SCC is a general purpose calculus which enriches traditional process calculi with an explicit notion of session; the Service Oriented Computing Kernel SOCK is inspired by the Web services protocol stack and consists of three layers for service description, service engines, and the service network; Performance Evaluation Process Algebra (PEPA) is an expressive formal language for modelling distributed systems which we use for quantitative analysis of services. The calculi and the analysis techniques are illustrated by a case study in the area of distributed e-learning systems.
Martin Wirsing
Rocco De Nicola
r.denicola@imtlucca.it
Stephen Gilmore
Matthias Hölzl
Roberto Lucchi
Mirco Tribastone
mirco.tribastone@imtlucca.it
Gianluigi Zavattaro
2011-05-27T14:08:45Z
2011-07-11T14:36:25Z
http://eprints.imtlucca.it/id/eprint/316
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/316
2011-05-27T14:08:45Z
Formulae Meet Programs Over the Net: A Framework for Correct Network Aware Programming
A general framework for network aware programming is presented that consists of a language for programming mobile applications, a logic for specifying properties of the applications and an automatic tool for verifying such properties. The framework is based on X-KLAIM, eXtended KLAIM, an experimental programming language specifically designed to program distributed systems composed of several components interacting through multiple tuple spaces and mobile code. The proposed logic is a modal logic inspired by Hennessy-Milner logic and is interpreted over the same labelled structures used for the operational semantics of X-KLAIM. The automatic verification tool is based on a complete proof system that has been previously developed for the logic.
Lorenzo Bettini
Rocco De Nicola
r.denicola@imtlucca.it
Michele Loreti
2011-05-27T13:30:28Z
2011-07-11T14:36:25Z
http://eprints.imtlucca.it/id/eprint/301
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/301
2011-05-27T13:30:28Z
Global Computing in a Dynamic Network of Tuple Spaces
We present a calculus inspired by Klaim whose main features are: explicit process distribution and node interconnections, remote operations, process mobility and asynchronous communication through distributed tuple spaces. We first introduce a basic language where connections are reliable and immutable; then, we enrich it with two more advanced features for global computing, i.e. failures and dynamically evolving connections. In each setting, we use our formalisms to specify some non-trivial global computing applications and exploit the semantic theory based on an observational equivalence to equationally establish properties of the considered case-studies.
Rocco De Nicola
r.denicola@imtlucca.it
Daniele Gorla
Rosario Pugliese
2011-05-27T12:47:58Z
2011-07-11T14:36:25Z
http://eprints.imtlucca.it/id/eprint/304
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/304
2011-05-27T12:47:58Z
Pattern Matching over a Dynamic Network of Tuple Spaces
In this paper, we present recent work carried out on μKlaim, a core calculus that retains most of the features of Klaim: explicit process distribution, remote operations, process mobility and asynchronous communication via distributed tuple spaces. Communication in μKlaim is based on a simple form of pattern matching that enables the withdrawal of matching tuples from shared data spaces and binds the matched variables within the continuation process. Pattern matching is orthogonal to the underlying computational paradigm of μKlaim, but affects its expressive power. After presenting the basic pattern matching mechanism, inherited from Klaim, we discuss a number of variants that are easy to implement and test, by means of simple examples, the expressive power of the resulting variants of the language.
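The basic mechanism the abstract describes can be sketched as follows: `out` adds a tuple to a shared space and `inp` (Klaim's destructive `in`) withdraws the first tuple matching a template, binding its formal fields. The encoding of formals as `("?", name)`, the field names, and the non-blocking behaviour on a failed match are all assumptions of this sketch, not the calculus's definitions.

```python
tuple_space = []

def out(tup):
    """Add a tuple to the shared space."""
    tuple_space.append(tup)

def match(template, tup):
    """Return variable bindings if `tup` matches `template`, else None."""
    if len(template) != len(tup):
        return None
    bindings = {}
    for field, value in zip(template, tup):
        if isinstance(field, tuple) and field[0] == "?":
            bindings[field[1]] = value      # formal field: bind the value
        elif field != value:
            return None                     # actual field: must be equal
    return bindings

def inp(template):
    """Withdraw a matching tuple (destructive read, like Klaim's `in`)."""
    for tup in tuple_space:
        bindings = match(template, tup)
        if bindings is not None:
            tuple_space.remove(tup)
            return bindings
    return None  # no match: a real implementation would block instead

out(("temp", "room1", 21))
print(inp(("temp", "room1", ("?", "t"))))  # {'t': 21}
```

The variants discussed in the paper tweak exactly the `match` function above, which is why the abstract calls pattern matching orthogonal to the rest of the calculus.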
Rocco De Nicola
r.denicola@imtlucca.it
Daniele Gorla
Rosario Pugliese
2011-05-26T13:08:13Z
2011-07-11T14:36:25Z
http://eprints.imtlucca.it/id/eprint/287
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/287
2011-05-26T13:08:13Z
Session Centered Calculi for Service Oriented Computing
Within the European project SENSORIA, we are developing formalisms for service description that lay the mathematical basis for analysing and experimenting with component interactions, for combining services and for formalising crucial aspects of service level agreements. One of the outcomes of this study is pSCC, a process calculus with explicit primitives for service definition and invocation. Central to pSCC are the notions of session and pipelining. Sessions are two-sided and can be equipped with protocols executed by each side during an interaction; they permit interaction patterns that are more structured than the simple one-way and request-response ones. Pipelining permits the exchange of values among sessions. The calculus is also equipped with operators for handling (unexpected) session closures that permit programming smooth propagation of session closures to partners and subsessions, so as to avoid states with dangling or orphan sessions. In the talk we will present SCC and discuss other alternatives that are (or have been) considered within the project.
Rocco De Nicola
r.denicola@imtlucca.it
2011-05-26T12:40:31Z
2011-07-11T14:36:27Z
http://eprints.imtlucca.it/id/eprint/375
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/375
2011-05-26T12:40:31Z
Action and State-based Logics for Process Algebras
Process algebras are generally recognized as a convenient tool for describing concurrent systems at different levels of abstraction. They rely on a small set of basic operators which correspond to primitive notions on concurrent systems and on one or more notions of behavioural equivalence or preorder. The operators are used to build complex systems from more elementary ones. The behavioural equivalences are used to study the relationships between different descriptions of the same system at different levels of abstractions and thus to perform part of the analysis.
Rocco De Nicola
r.denicola@imtlucca.it
2011-05-26T12:22:23Z
2011-07-11T14:36:25Z
http://eprints.imtlucca.it/id/eprint/285
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/285
2011-05-26T12:22:23Z
Semantic subtyping for the pi-calculus
Subtyping relations for the π-calculus are usually defined in a syntactic way, by means of structural rules. We propose a semantic characterisation of channel types and use it to derive a subtyping relation. The type system we consider includes read-only and write-only channel types, as well as boolean combinations of types. A set-theoretic interpretation of types is provided, in which boolean combinations of types are interpreted as the corresponding set-theoretic operations. Subtyping is defined as inclusion of the interpretations. We prove decidability of the subtyping relation and sketch the subtyping algorithm.
In order to fully exploit the type system, we define a variant of the π-calculus where communication is subjected to pattern matching that performs dynamic typecase.
Giuseppe Castagna
Rocco De Nicola
r.denicola@imtlucca.it
Daniele Varacca
2011-05-26T12:16:24Z
2013-05-03T12:37:08Z
http://eprints.imtlucca.it/id/eprint/284
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/284
2011-05-26T12:16:24Z
TAPAs: A Tool for the Analysis of Process Algebras
Process algebras are formalisms for modelling concurrent systems that permit mathematical reasoning with respect to a set of desired properties. TAPAs is a tool that can be used to support the use of process algebras to specify and analyze concurrent systems. It does not aim at guaranteeing high performances, but has been developed as a support to teaching. Systems are described as process algebras terms that are then mapped to labelled transition systems (LTSs). Properties are verified either by checking equivalence of concrete and abstract systems descriptions, or by model checking temporal formulae over the obtained LTS. A key feature of TAPAs, that makes it particularly suitable for teaching, is that it maintains a consistent double representation of each system both as a term and as a graph. Another useful didactical feature is the exhibition of counterexamples in case equivalences are not verified or the proposed formulae are not satisfied.
Francesco Calzolai
Rocco De Nicola
r.denicola@imtlucca.it
Michele Loreti
Francesco Tiezzi
francesco.tiezzi@imtlucca.it
2011-05-26T10:16:54Z
2014-03-03T11:59:47Z
http://eprints.imtlucca.it/id/eprint/282
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/282
2011-05-26T10:16:54Z
Provably Correct Implementations of Services
A number of formalisms have been defined to support the specification and analysis of service oriented applications. These formalisms have been equipped with tools (types or logics) to guarantee the correct behavior of the specified services. Due to the semantic gap between the specification formalism and the programming languages of service oriented overlay computers, a critical issue is guaranteeing that correctness is preserved when running the specified systems over available implementations. We have defined a service oriented abstract machine, equipped with a formal structural semantics, that can be used to implement service specification formalisms. We use our abstract machine to implement different service oriented formalisms that have been recently proposed, each posing specific challenges that we can address successfully. By exploiting the SOS semantics of the abstract machine and those of the considered service oriented formalisms, we prove that our implementations are correct (sound and complete). We also discuss possible implementations of other formalisms.
Roberto Bruni
Rocco De Nicola
r.denicola@imtlucca.it
Michele Loreti
Leonardo Gaetano Mezzina
2011-05-26T10:11:50Z
2011-07-11T14:36:25Z
http://eprints.imtlucca.it/id/eprint/281
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/281
2011-05-26T10:11:50Z
Sessions and Pipelines for Structured Service Programming
Service-oriented computing is calling for novel computational models and languages with primitives for client-server interaction, orchestration and unexpected events handling. We present CaSPiS, a process calculus where the notions of session and pipelining play a central role. Sessions are two-sided and can be equipped with protocols executed by each side. Pipelining permits orchestrating the flow of data produced by different sessions. The calculus is also equipped with operators for handling (unexpected) termination of the partner’s side of a session. Several examples are presented to provide evidence for the flexibility of the chosen set of primitives. Our main result shows that in CaSPiS it is possible to program a “graceful termination” of nested sessions, which guarantees that no session is forced to hang forever after the loss of its partner.
Michele Boreale
Roberto Bruni
Rocco De Nicola
r.denicola@imtlucca.it
Michele Loreti
2011-05-26T09:13:57Z
2011-07-11T14:36:25Z
http://eprints.imtlucca.it/id/eprint/280
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/280
2011-05-26T09:13:57Z
Ugo Montanari in a Nutshell
Ugo was born in Besana Brianza in 1943, where his parents had moved to escape from the Milan bombings during the Second World War. Immediately after the war he went back to Milan, where he completed all his studies. Ugo got his Laurea degree in Electronic Engineering from the Politecnico di Milano in 1966, three years before the first Laurea curriculum and seventeen years before the first PhD curriculum in Computer Science started in Pisa.
Rocco De Nicola
r.denicola@imtlucca.it
Pierpaolo Degano
José Meseguer
2011-05-26T09:05:07Z
2011-07-11T14:36:25Z
http://eprints.imtlucca.it/id/eprint/278
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/278
2011-05-26T09:05:07Z
From Flow Logic to Static Type Systems for Coordination Languages
Coordination languages are often used to describe open-ended systems. This makes it challenging to develop tools for guaranteeing security of the coordinated systems and correctness of their interaction. Successful approaches to this problem have been based on type systems with dynamic checks; therefore, the correctness properties cannot be statically enforced. By contrast, static analysis approaches based on Flow Logic usually guarantee properties statically. In this paper we show how to combine these two approaches to obtain a static type system for describing secure access to tuple spaces and safe process migration for a dialect of the language Klaim.
Rocco De Nicola
r.denicola@imtlucca.it
Daniele Gorla
Rene Rydhof Hansen
Flemming Nielson
Hanne Riis Nielson
Christian W. Probst
Rosario Pugliese
2011-05-25T13:27:57Z
2011-07-11T14:36:25Z
http://eprints.imtlucca.it/id/eprint/279
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/279
2011-05-25T13:27:57Z
Implementing Session Centered Calculi
Recently, specific attention has been devoted to the development of service oriented process calculi. Besides the foundational aspects, it is also interesting to have prototype implementations for them in order to assess usability and to minimize the gap between theory and practice. Typically, these implementations are done in Java, taking advantage of its mechanisms supporting network applications. However, most of the recurrent features of service oriented applications are re-implemented from scratch. In this paper we show how to implement a service oriented calculus, CaSPiS (Calculus of Services with Pipelines and Sessions), using the Java framework IMC, where recurrent mechanisms for network applications are already provided. By using the session oriented and pattern matching communication mechanisms provided by IMC, it is relatively simple to implement all CaSPiS abstractions in Java and thus to write the Java implementation of a CaSPiS process with little effort.
Lorenzo Bettini
Rocco De Nicola
r.denicola@imtlucca.it
Michele Loreti
2011-05-25T13:18:28Z
2014-10-08T09:33:54Z
http://eprints.imtlucca.it/id/eprint/276
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/276
2011-05-25T13:18:28Z
MarCaSPiS: a Markovian Extension of a Calculus for Services
Service Oriented Computing (SOC) is a design paradigm that has evolved from earlier paradigms including object-orientation and component-based software engineering. Important features of services are compositionality, context-independence, encapsulation and re-usability. To support the formal design and analysis of SOC applications, a number of Service Oriented Calculi have recently been proposed. Most of them are based on process algebras enriched with primitives specific to service orientation, such as operators for manipulating semi-structured data, mechanisms for describing safe client-service interactions, constructors for composing possibly unreliable services, and techniques for service query and discovery. In this paper we show a versatile technique for the definition of the Structural Operational Semantics of MarCaSPiS, a Markovian extension of one such calculus, namely the Calculus of Sessions and Pipelines, CaSPiS. The semantics deals in an elegant way with a stochastic version of two-party synchronisation, typical of a service-oriented approach, and with the problem of transition multiplicity, while preserving highly desirable mathematical properties such as associativity and commutativity of parallel composition.
We also show how the proposed semantics can be naturally used for defining a bisimulation-based behavioural equivalence for MarCaSPiS terms that induces the same equalities as those obtained via Strong Markovian Equivalence.
Rocco De Nicola
r.denicola@imtlucca.it
Diego Latella
Michele Loreti
Mieke Massink
2011-05-25T13:11:34Z
2011-07-11T14:36:25Z
http://eprints.imtlucca.it/id/eprint/291
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/291
2011-05-25T13:11:34Z
Basic observables for a calculus for global computing
We develop the semantic theory of a foundational language for modelling applications over global computers whose interconnection structure can be explicitly manipulated. Together with process distribution, process mobility and remote asynchronous communication through distributed data repositories, the language has primitives for explicitly modelling inter-node connections and for dynamically activating and deactivating them. For the proposed language, we define natural notions of extensional observations and study their closure under operational reductions and/or language contexts to obtain barbed congruence and may testing equivalence. We then focus on barbed congruence and provide an alternative characterisation in terms of a labelled bisimulation. To test practical usability of the semantic theory, we model a system of communicating mobile devices and use the introduced proof techniques to verify one of its key properties.
Rocco De Nicola
r.denicola@imtlucca.it
Daniele Gorla
Rosario Pugliese
2011-05-25T12:54:48Z
2014-10-08T09:20:45Z
http://eprints.imtlucca.it/id/eprint/290
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/290
2011-05-25T12:54:48Z
Implementing a distributed mobile calculus using the IMC framework
In the last decade, many calculi for modelling distributed mobile code have been proposed. To assess their merits and encourage use, implementations of the calculi have often been proposed. These implementations usually consist of a limited part dealing with mechanisms that are specific of the proposed calculus and of a significantly larger part handling recurrent mechanisms that are common to many calculi. Nevertheless, also the "classic" parts are often re-implemented from scratch. In this paper we show how to implement a well established representative of the family of mobile calculi, the distributed π-calculus, by using a Java middleware (called IMC - Implementing Mobile Calculi) where recurrent mechanisms of distributed and mobile systems are already implemented. By means of the case study, we illustrate a methodology to accelerate the development of prototype implementations while concentrating only on the features that are specific of the calculus under consideration and relying on the common framework for all the recurrent mechanisms like network connections, code mobility, name handling, etc.
Lorenzo Bettini
Rocco De Nicola
r.denicola@imtlucca.it
Daniele Falassi
Michele Loreti
2011-05-25T12:14:27Z
2011-07-11T14:36:25Z
http://eprints.imtlucca.it/id/eprint/307
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/307
2011-05-25T12:14:27Z
Semantic Subtyping for the π-Calculus
Subtyping relations for the π-calculus are usually defined in a syntactic way, by means of structural rules. We propose a semantic characterisation of channel types and use it to derive a subtyping relation. The type system we consider includes read-only and write-only channel types, as well as Boolean combinations of types. A set-theoretic interpretation of types is provided, in which Boolean combinations are interpreted as the corresponding set-theoretic operations. Subtyping is defined as inclusion of the interpretations. We prove the decidability of the subtyping relation and sketch the subtyping algorithm. In order to fully exploit the type system, we define a variant of the π-calculus where communication is subjected to pattern matching that performs dynamic typecase.
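The core idea of the abstract — interpret types as sets of values, read boolean combinations as set operations, and define subtyping as inclusion of interpretations — can be illustrated with a toy model. The finite value universe, the base types `Int`/`Nat`/`Neg`, and the tuple syntax for type expressions are all inventions of this sketch; the paper works with channel types over a far richer semantic domain.

```python
UNIVERSE = frozenset(range(-3, 4))           # a tiny value universe
INTERP = {                                    # base types as sets of values
    "Int": UNIVERSE,
    "Nat": frozenset(v for v in UNIVERSE if v >= 0),
    "Neg": frozenset(v for v in UNIVERSE if v < 0),
}

def interp(ty):
    """Interpret a type expression (a name or a boolean combination,
    written as ("or", s, t), ("and", s, t), ("not", t)) as a set."""
    if isinstance(ty, str):
        return INTERP[ty]
    op = ty[0]
    if op == "or":
        return interp(ty[1]) | interp(ty[2])
    if op == "and":
        return interp(ty[1]) & interp(ty[2])
    if op == "not":
        return UNIVERSE - interp(ty[1])
    raise ValueError(op)

def subtype(s, t):
    """Semantic subtyping: s <: t iff the interpretation of s
    is included in the interpretation of t."""
    return interp(s) <= interp(t)

print(subtype("Nat", "Int"))                  # True
print(subtype("Int", ("or", "Nat", "Neg")))   # True
```

In the enumerable toy model inclusion is trivially checkable; the decidability result in the paper is precisely what makes the same idea workable when interpretations are infinite.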
Giuseppe Castagna
Rocco De Nicola
r.denicola@imtlucca.it
Daniele Varacca
2011-05-25T12:02:56Z
2011-07-11T14:36:25Z
http://eprints.imtlucca.it/id/eprint/305
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/305
2011-05-25T12:02:56Z
Basic Observables for a Calculus for Global Computing
We introduce a foundational language for modelling applications over global computers whose interconnection structure can be explicitly manipulated. Together with process distribution, mobility, remote operations and asynchronous communication through distributed data spaces, the language provides constructs for explicitly modelling inter-node connections and for dynamically establishing and removing them. For the proposed language, we define natural notions of extensional observations and study their closure under operational reductions and/or language contexts to obtain barbed congruence and may testing equivalence. For such equivalences, we provide alternative characterizations in terms of a labelled bisimulation and a trace equivalence that can be used for actual proofs.
Rocco De Nicola
r.denicola@imtlucca.it
Daniele Gorla
Rosario Pugliese
2011-05-25T10:30:52Z
2011-07-11T14:36:25Z
http://eprints.imtlucca.it/id/eprint/275
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/275
2011-05-25T10:30:52Z
Rate-Based Transition Systems for Stochastic Process Calculi
A variant of Rate Transition Systems (RTS), proposed by Klin and Sassone, is introduced and used as the basic model for defining stochastic behaviour of processes. The transition relation used in our variant associates to each process, for each action, the set of possible futures paired with a measure indicating their rates. We show how RTS can be used for providing the operational semantics of stochastic extensions of classical formalisms, namely CSP and CCS. We also show that our semantics for stochastic CCS guarantees associativity of parallel composition. Similarly, in contrast with the original definition by Priami, we argue that a semantics for stochastic π-calculus can be provided that guarantees associativity of parallel composition.
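A sketch, under assumptions, of the flavour of model the abstract describes: each process maps an action to a function assigning rates to possible successor states, and for interleaved parallel composition either side moves while the other stays put, with rates carried over (and summed when two derivations reach the same composed state). The processes `P`, `Q` and their rate table are invented for illustration; this is not the authors' formal definition.

```python
def rates(process, action):
    """Next-state function of a process: action -> {successor: rate}.
    A hypothetical two-process example, hardcoded for the sketch."""
    table = {
        ("P", "a"): {"P'": 2.0},
        ("Q", "a"): {"Q'": 3.0},
    }
    return table.get((process, action), {})

def parallel_rates(p, q, action):
    """Interleaving composition: either component performs the action
    while the other is unchanged; coinciding targets accumulate rates."""
    result = {}
    for p2, r in rates(p, action).items():
        result[(p2, q)] = result.get((p2, q), 0.0) + r
    for q2, r in rates(q, action).items():
        result[(p, q2)] = result.get((p, q2), 0.0) + r
    return result

print(parallel_rates("P", "Q", "a"))  # both interleaved moves, each at its rate
```

Accumulating rates on coinciding targets, rather than keeping duplicate transitions, is what sidesteps the transition-multiplicity problem and makes parallel composition associative.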
Rocco De Nicola
r.denicola@imtlucca.it
Diego Latella
Michele Loreti
Mieke Massink
2011-05-25T10:17:05Z
2011-07-11T14:36:24Z
http://eprints.imtlucca.it/id/eprint/274
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/274
2011-05-25T10:17:05Z
On a Uniform Framework for the Definition of Stochastic Process Languages
In this paper we show how Rate Transition Systems (RTSs) can be used as a unifying framework for the definition of the semantics of stochastic process algebras. RTSs facilitate the compositional definition of such semantics exploiting operators on the next state functions which are the functional counterpart of classical process algebra operators. We apply this framework to representative fragments of major stochastic process calculi namely TIPP, PEPA and IML and show how they solve the issue of transition multiplicity in a simple and elegant way. We, moreover, show how RTSs help describing different languages, their differences and their similarities. For each calculus, we also show the formal correspondence between the RTSs semantics and the standard SOS one.
Rocco De Nicola
r.denicola@imtlucca.it
Diego Latella
Michele Loreti
Mieke Massink
2011-05-25T09:18:22Z
2014-10-08T08:18:50Z
http://eprints.imtlucca.it/id/eprint/296
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/296
2011-05-25T09:18:22Z
Towards a logic for performance and mobility
Klaim is an experimental language designed for modeling and programming distributed systems composed of mobile components where distribution awareness and dynamic system architecture configuration are key issues. StocKlaim [R. De Nicola, D. Latella, and M. Massink. Formal modeling and quantitative analysis of KLAIM-based mobile systems. In ACM Symposium on Applied Computing (SAC). ACM Press, 2005. Also available as Technical Report 2004-TR-25; CNR/ISTI, 2004] is a Markovian extension of the core subset of Klaim which includes process distribution, process mobility, asynchronous communication, and site creation. In this paper, MoSL, a temporal logic for StocKlaim is proposed which addresses and integrates the issues of distribution awareness and mobility and those concerning stochastic behaviour of systems. The satisfiability relation is formally defined over labelled Markov chains. A large fragment of the proposed logic can be translated to action-based CSL for which efficient model-checkers exist. This way, such model-checkers can be used for the verification of StocKlaim models against MoSL properties. An example application is provided in the present paper.
Rocco De Nicola
r.denicola@imtlucca.it
Pieter Katoen
Diego Latella
Mieke Massink
2011-05-25T09:13:57Z
2011-07-11T14:36:25Z
http://eprints.imtlucca.it/id/eprint/295
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/295
2011-05-25T09:13:57Z
SCC: A Service Centered Calculus
We seek for a small set of primitives that might serve as a basis for formalising and programming service oriented applications over global computers. As an outcome of this study we introduce here SCC, a process calculus that features explicit notions of service definition, service invocation and session handling. Our proposal has been influenced by Orc, a programming model for structured orchestration of services, but the SCC’s session handling mechanism allows for the definition of structured interaction protocols, more complex than the basic request-response provided by Orc. We present syntax and operational semantics of SCC and a number of simple but nontrivial programming examples that demonstrate flexibility of the chosen set of primitives. A few encodings are also provided to relate our proposal with existing ones.
Michele Boreale
Roberto Bruni
Luis Caires
Rocco De Nicola
r.denicola@imtlucca.it
Ivan Lanese
Michele Loreti
Francisco Martins
Ugo Montanari
Antonio Ravara
Davide Sangiorgi
Vasco Thudichum Vasconcelos
Gianluigi Zavattaro
2011-05-24T14:53:28Z
2011-07-11T14:36:27Z
http://eprints.imtlucca.it/id/eprint/388
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/388
2011-05-24T14:53:28Z
Testing Equivalences for Event Structures
A flexible abstraction mechanism for models of concurrency, which allows systems which "look the same" to be considered equivalent, is proposed. Using three classes of atomic observations (sequences of actions, sequences of multisets of actions and partial orderings of actions) different information on the causal and temporal structure of Event Structures, a basic model of parallelism, is captured. As a result, three different semantic models for concurrent systems are obtained. These models can be used as the basis for defining interleaving, multisets or partial ordering semantics of concurrent systems. The common framework used to define the models allows us to study the relationship between these three traditional approaches to the semantics of concurrent communicating systems.
Luca Aceto
Alessandro Fantechi
Rocco De Nicola
r.denicola@imtlucca.it
2011-05-24T14:46:17Z
2011-07-11T14:36:27Z
http://eprints.imtlucca.it/id/eprint/393
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/393
2011-05-24T14:46:17Z
Testing Equivalences for Processes
Given a set of processes and a set of tests on these processes we show how to define in a natural way three different equivalences on processes. These equivalences are applied to a particular language CCS. We give associated complete proof systems and fully abstract models. These models have a simple representation in terms of trees.
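One of the equivalences alluded to, may testing, can be given a very small operational reading (an assumed encoding, not the paper's formal setup): represent a test by a set of "successful" action sequences, and say a process may pass the test if at least one of its traces is successful.

```python
def traces(lts, state):
    """Finite action sequences from `state` in an acyclic LTS
    given as {state: [(action, next_state), ...]}."""
    result = {()}
    for action, nxt in lts.get(state, []):
        result |= {(action,) + t for t in traces(lts, nxt)}
    return result

def may_pass(lts, start, successful):
    """May testing: some computation of process-in-parallel-with-test
    succeeds; here abstracted as trace-set intersection."""
    return bool(traces(lts, start) & successful)

# a.b + a.c and a.(b + c): indistinguishable under may testing,
# though must testing (every computation succeeds) separates them.
p = {"p0": [("a", "p1"), ("a", "p2")], "p1": [("b", "p3")], "p2": [("c", "p4")]}
q = {"q0": [("a", "q1")], "q1": [("b", "q2"), ("c", "q3")]}
test = {("a", "b")}

print(may_pass(p, "p0", test), may_pass(q, "q0", test))  # True True
```

The tree models mentioned at the end of the abstract play the role that the explicit trace sets play in this sketch.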
Matthew Hennessy
Rocco De Nicola
r.denicola@imtlucca.it
2011-05-24T14:35:02Z
2011-07-11T14:36:27Z
http://eprints.imtlucca.it/id/eprint/392
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/392
2011-05-24T14:35:02Z
Models and Operators for Nondeterministic Processes
Rocco De Nicola
r.denicola@imtlucca.it
2011-05-24T14:28:46Z
2011-07-11T14:36:27Z
http://eprints.imtlucca.it/id/eprint/391
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/391
2011-05-24T14:28:46Z
Two Complete Axiom Systems for a Theory of Communicating Sequential Processes
In C. A. R. Hoare, S. D. Brookes, and A. W. Roscoe (1984, J. Assoc. Comput. Mach. 31(3), 560) an abstract version of Hoare's CSP is defined and a denotational semantics based on the possible failures of processes is given for it. This semantics induces a natural preorder on processes. We define this preorder formally and prove that it can be characterized as the smallest relation satisfying a particular set of axioms. The characterization sheds light on problems arising from the way divergence and underspecification are handled. After small changes to the semantic domains we propose a new semantics which is closer to the operational intuitions and suggests a possible solution to the above problems. Finally we give an axiomatic characterization for the equivalence induced by the new semantics, which leads to fully abstract models in the sense of Scott.
Rocco De Nicola
r.denicola@imtlucca.it
2011-05-24T14:02:06Z
2011-07-11T14:36:27Z
http://eprints.imtlucca.it/id/eprint/390
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/390
2011-05-24T14:02:06Z
Partial ordering derivations for CCS
In this paper we extend CCS transitions, labelled by strings, to concurrent histories, i.e. to transitions labelled by partial orderings. The two notions are linked by a theorem which shows that the strings can be obtained by taking all interleavings compatible with the partial orderings.
Pierpaolo Degano
Rocco De Nicola
r.denicola@imtlucca.it
Ugo Montanari
2011-05-24T13:21:52Z
2011-07-11T14:36:27Z
http://eprints.imtlucca.it/id/eprint/397
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/397
2011-05-24T13:21:52Z
Testing Equivalences and Fully Abstract Models for Communicating Processes
Rocco De Nicola
r.denicola@imtlucca.it
2011-05-24T12:59:37Z
2011-07-11T14:36:25Z
http://eprints.imtlucca.it/id/eprint/315
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/315
2011-05-24T12:59:37Z
A modal logic for mobile agents
Klaim is an experimental programming language that supports a programming paradigm where both processes and data can be moved across different computing environments. The language relies on the use of explicit localities. This paper presents a temporal logic for specifying properties of Klaim programs. The logic is inspired by Hennessy-Milner Logic (HML) and the μ-calculus, but has novel features that permit dealing with state properties and impact of actions and movements over the different sites. The logic is equipped with a complete proof system that enables one to prove properties of mobile systems.
Michele Loreti
Rocco De Nicola
r.denicola@imtlucca.it
2011-05-24T12:52:30Z
2011-07-11T14:36:25Z
http://eprints.imtlucca.it/id/eprint/314
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/314
2011-05-24T12:52:30Z
A Software Framework for Rapid Prototyping of Run-Time Systems for Mobile Calculi
We describe the architecture and the implementation of the Mikado software framework, that we call IMC (Implementing Mobile Calculi). The framework aims at providing the programmer with primitives to design and implement run-time systems for distributed process calculi. The paper describes the four main components of abstract machines for mobile calculi (node topology, naming and binding, communication protocols and mobility) that have been implemented as Java packages. The paper also contains the description of a prototype implementation of a run-time system for the Distributed Pi-Calculus relying on the presented framework.
Lorenzo Bettini
Rocco De Nicola
r.denicola@imtlucca.it
Daniele Falassi
Marc Lacoste
Luis M. B. Lopes
Licinio Oliveira
Herve Paulino
Vasco Thudichum Vasconcelos
2011-05-24T10:30:56Z
2011-07-11T14:36:25Z
http://eprints.imtlucca.it/id/eprint/299
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/299
2011-05-24T10:30:56Z
On the expressive power of KLAIM-based calculi
We study the expressive power of variants of KLAIM, an experimental language with programming primitives for network-aware programming that combines the process algebra approach with the coordination-oriented one. KLAIM has proved to be suitable for programming a wide range of distributed applications with agents and code mobility, and has been implemented on top of a runtime system written in Java. In this paper, the expressivity of its constructs is tested by distilling from it a few, more and more foundational, languages and by studying the encoding of each of them into a simpler one. The expressive power of the considered calculi is finally tested by comparing one of them with the asynchronous π-calculus.
Rocco De Nicola
r.denicola@imtlucca.it
Daniele Gorla
Rosario Pugliese
2011-05-24T10:15:37Z
2011-07-11T14:36:25Z
http://eprints.imtlucca.it/id/eprint/298
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/298
2011-05-24T10:15:37Z
Confining data and processes in global computing applications
A programming notation is introduced that can be used for protecting secrecy and integrity of data in global computing applications. The approach is based on the explicit annotation of data and network nodes. Data are tagged with information about the allowed movements; network nodes are tagged with information about the nodes that can send data and spawn processes to them. The annotations are used to confine movements of data and processes. The approach is illustrated by applying it to three paradigmatic calculi for global computing, namely cKlaim (a calculus at the basis of Klaim), a distributed version of the π-calculus, and the Mobile Ambients calculus. For all of these formalisms, it is shown that their semantics guarantees that computations proceed only while respecting confinement constraints. Namely, it is proven that, after successful static type checking, data can reside at and cross only authorised nodes. "Local" formulations of this property where only relevant subnets type check are also presented. Finally, the theory is tested by using it to model secure behaviours of a UNIX-like multiuser system.
Rocco De Nicola
r.denicola@imtlucca.it
Daniele Gorla
Rosario Pugliese
2011-05-24T09:53:16Z
2014-10-08T08:57:48Z
http://eprints.imtlucca.it/id/eprint/297
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/297
2011-05-24T09:53:16Z
From Process Calculi to Klaim and Back
We briefly describe the motivations and the background behind the design of Klaim, a process description language that has proved to be suitable for describing a wide range of distributed applications with agents and code mobility. We argue that a drawback of Klaim is that it is neither a programming language, nor a process calculus. We then outline the two research directions we have pursued more recently. On the one hand we have evolved Klaim to a full-fledged language for distributed mobile programming. On the other hand we have distilled the language into a number of simple calculi that we have used to define new semantic theories and equivalences and to test the impact of new operators for network aware programming.
Rocco De Nicola
r.denicola@imtlucca.it
2011-05-24T09:48:51Z
2011-07-11T14:36:25Z
http://eprints.imtlucca.it/id/eprint/313
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/313
2011-05-24T09:48:51Z
MoMo: A Modal Logic for Reasoning About Mobility
A temporal logic is proposed as a tool for specifying properties of Klaim programs. Klaim is an experimental programming language that supports a programming paradigm where both processes and data can be moved across different computing environments. The language relies on the use of explicit localities. The logic is inspired by Hennessy-Milner Logic (HML) and the μ–calculus, but has novel features that permit dealing with state properties and impact of actions and movements over the different sites. The logic is equipped with a sound and complete tableaux based proof system.
Rocco De Nicola
r.denicola@imtlucca.it
Michele Loreti
2011-05-24T09:09:12Z
2014-10-07T14:54:12Z
http://eprints.imtlucca.it/id/eprint/311
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/311
2011-05-24T09:09:12Z
On the Expressive Power of Klaim-based Calculi
In this work, we study the expressive power of variants of Klaim, an experimental language with programming primitives for global computing that combines the process algebra approach with the coordination-oriented one. Klaim has proved to be suitable for programming a wide range of distributed applications with agents and code mobility, and has been implemented on top of a runtime system based on Java. The expressivity of its constructs is tested by distilling from it some (more and more foundational) calculi and studying the encoding of each of the considered languages into a simpler one. An encoding of the asynchronous π-calculus into one of these calculi is also presented.
Rocco De Nicola
r.denicola@imtlucca.it
Daniele Gorla
Rosario Pugliese
2011-05-24T09:03:36Z
2011-07-11T14:36:25Z
http://eprints.imtlucca.it/id/eprint/310
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/310
2011-05-24T09:03:36Z
Types in concurrency
Rocco De Nicola
r.denicola@imtlucca.it
Davide Sangiorgi
2011-05-24T08:47:29Z
2011-07-11T14:36:24Z
http://eprints.imtlucca.it/id/eprint/273
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/273
2011-05-24T08:47:29Z
From Flow Logic to static type systems for coordination languages
Coordination languages are often used to describe open-ended systems. This makes it challenging to develop tools for guaranteeing the security of the coordinated systems and the correctness of their interaction. Successful approaches to this problem have been based on type systems with dynamic checks; therefore, the correctness properties cannot be statically enforced. By contrast, static analysis approaches based on Flow Logic usually guarantee properties statically. In this paper, we show how the insights from the Flow Logic approach can be used to construct a type system for statically ensuring secure access to tuple spaces and safe process migration for an extension of the language Klaim.
Rocco De Nicola
r.denicola@imtlucca.it
Daniele Gorla
Rene Rydhof Hansen
Flemming Nielson
Hanne Riis Nielson
Christian W. Probst
Rosario Pugliese
2011-05-23T15:10:18Z
2012-07-06T10:03:20Z
http://eprints.imtlucca.it/id/eprint/272
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/272
2011-05-23T15:10:18Z
Tree-functors, determinacy and bisimulations
We study the functorial characterisation of bisimulation-based equivalences over a categorical model of labelled trees. We show that in a setting where all labels are visible, strong bisimilarity can be characterised in terms of enriched functors by relying on the reflection of paths with their factorisations. For an enriched functor F, this notion requires that a path (an internal morphism in our framework) π going from F(A) to C corresponds to a path p going from A to K, with F(K) = C, such that every possible factorisation of π can be lifted to an appropriate factorisation of p. This last property corresponds to a Conduché property for enriched functors, and a very rigid formulation of it has been used by Lawvere to characterise the determinacy of physical systems. We also consider the setting where some labels are not visible, and provide characterisations for weak and branching bisimilarity. Both equivalences are still characterised in terms of enriched functors that reflect paths with their factorisations: for branching bisimilarity, the property is the same as the one used to characterise strong bisimilarity when all labels are visible; for weak bisimilarity, a weaker form of path factorisation lifting is needed. This fact can be seen as evidence that strong and branching bisimilarity are strictly related and that, unlike weak bisimilarity, they preserve process determinacy in the sense of Milner.
Rocco De Nicola
r.denicola@imtlucca.it
Daniele Gorla
Anna Labella
2011-05-23T14:22:04Z
2011-07-11T14:36:24Z
http://eprints.imtlucca.it/id/eprint/271
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/271
2011-05-23T14:22:04Z
Uniform Labeled Transition Systems for Nondeterministic, Probabilistic, and Stochastic Processes
Rate transition systems (RTS) are a special kind of transition systems introduced for defining the stochastic behavior of processes and for associating continuous-time Markov chains with process terms. The transition relation assigns to each process, for each action, the set of possible futures paired with a measure indicating the rates at which they are reached. RTS have been shown to be a uniform model for providing an operational semantics to many stochastic process algebras. In this paper, we define Uniform Labeled TRAnsition Systems (ULTraS) as a generalization of RTS that can be exploited to uniformly describe also nondeterministic and probabilistic variants of process algebras. We then present a general notion of behavioral relation for ULTraS that can be instantiated to capture bisimulation and trace equivalences for fully nondeterministic, fully probabilistic, and fully stochastic processes.
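The abstract above describes the RTS transition relation as assigning, to each state and action, a set of possible futures paired with rates. As a minimal illustrative sketch (not from the paper; the states, actions and rates below are invented), such a relation can be encoded as a map from state-action pairs to rate-weighted target states, from which the associated continuous-time Markov chain quantities follow:

```python
# Hypothetical encoding of a rate transition system (RTS): for each
# (state, action) pair, a map from reachable target states to rates.
rts = {
    ("s0", "a"): {"s1": 2.0, "s2": 3.0},  # action a leaves s0 at total rate 5
    ("s1", "b"): {"s2": 1.0},
}

def exit_rate(state, action):
    """Total rate at which `state` is left via `action`; in the associated
    CTMC this governs the exponentially distributed sojourn time."""
    return sum(rts.get((state, action), {}).values())

def next_state_probability(state, action, target):
    """Probability that the jump from `state` via `action` lands in `target`:
    the target's rate divided by the total exit rate."""
    total = exit_rate(state, action)
    return rts[(state, action)].get(target, 0.0) / total if total else 0.0
```

Setting all rates to 1 (or to probabilities summing to 1) recovers, informally, the nondeterministic and probabilistic instances that ULTraS generalises over.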
Marco Bernardo
Rocco De Nicola
r.denicola@imtlucca.it
Michele Loreti
2011-05-23T14:16:45Z
2011-07-11T14:36:25Z
http://eprints.imtlucca.it/id/eprint/293
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/293
2011-05-23T14:16:45Z
Model checking mobile stochastic logic
The Temporal Mobile Stochastic Logic (MoSL) has been introduced in previous work by the authors for formulating properties of systems specified in StoKlaim, a Markovian extension of Klaim. The main purpose of MoSL is to address key functional aspects of global computing such as distribution awareness, mobility, and security and their integration with performance and dependability guarantees. In this paper, we present MoSL+, an extension of MoSL, which incorporates some basic features of the Modal Logic for MObility (MoMo), a logic specifically designed for dealing with resource management and mobility aspects of concurrent behaviours. We also show how MoSL+ formulae can be model-checked against StoKlaim specifications. For this purpose, we show how existing state-based stochastic model-checkers, like e.g. the Markov Reward Model Checker (MRMC), can be exploited by using a front-end for StoKlaim that performs appropriate pre-processing of MoSL+ formulae. The proposed approach is illustrated by modelling and verifying a sample system.
Rocco De Nicola
r.denicola@imtlucca.it
Pieter Katoen
Diego Latella
Michele Loreti
Mieke Massink
2011-05-23T13:25:55Z
2011-07-11T14:36:25Z
http://eprints.imtlucca.it/id/eprint/292
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/292
2011-05-23T13:25:55Z
Global computing in a dynamic network of tuple spaces
We present tKlaim (Topological Klaim), a process description language that retains the main features of Klaim (process distribution and mobility, remote and asynchronous communication through distributed data spaces) but extends it with new constructs to flexibly model the interconnection structure underlying a network and its evolution in time. We show how tKlaim can be used to model a number of interesting distributed applications and how system correctness can be guaranteed, even in the presence of failures, by exploiting observational equivalences to study the relationships between descriptions of systems at different levels of abstraction.
Rocco De Nicola
r.denicola@imtlucca.it
Daniele Gorla
Rosario Pugliese
2011-05-18T10:31:57Z
2011-07-11T14:34:34Z
http://eprints.imtlucca.it/id/eprint/147
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/147
2011-05-18T10:31:57Z
A Graph Syntax for Processes and Services
We propose a class of hierarchical graphs equipped with a simple algebraic syntax as a convenient way to describe configurations in languages with inherently hierarchical features such as sessions, fault-handling scopes or transactions. The graph syntax can be seen as an intermediate representation language, that facilitates the encoding of structured specifications and, in particular, of process calculi, since it provides primitives for nesting, name restriction and parallel composition. The syntax is based on an algebraic presentation that faithfully characterises families of hierarchical graphs, meaning that each term of the language uniquely identifies an equivalence class of graphs (modulo graph isomorphism). Proving soundness and completeness of an encoding (i.e. proving that structurally equivalent processes are mapped to isomorphic graphs) is then facilitated and can be done by structural induction. Summing up, the graph syntax facilitates the definition of faithful encodings, yet allowing a precise visual representation. We illustrate our work with an application to a workflow language and a service-oriented calculus.
Roberto Bruni
Fabio Gadducci
Alberto Lluch-Lafuente
alberto.lluch@imtlucca.it
2011-05-18T10:27:27Z
2016-04-06T07:57:34Z
http://eprints.imtlucca.it/id/eprint/146
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/146
2011-05-18T10:27:27Z
A Service-Oriented UML Profile with Formal Support
We present a UML Profile for the description of service oriented applications. The profile focuses on style-based design and reconfiguration aspects at the architectural level. Moreover, it has formal support in terms of an approach called Architectural Design Rewriting, which enables formal analysis of the UML specifications. We show how our prototypical implementation can be used to analyse and verify properties of a service oriented application.
Roberto Bruni
Matthias Hölzl
Nora Koch
Alberto Lluch-Lafuente
alberto.lluch@imtlucca.it
Philip Mayer
Ugo Montanari
Andreas Schroeder
Martin Wirsing
2011-05-18T09:02:33Z
2011-07-11T14:34:34Z
http://eprints.imtlucca.it/id/eprint/145
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/145
2011-05-18T09:02:33Z
An Algebra of Hierarchical Graphs and its Application to Structural Encoding
We define an algebraic theory of hierarchical graphs, whose axioms characterise graph isomorphism: two terms are equated exactly when they represent the same graph. Our algebra can be understood as a high-level language for describing graphs with a node-sharing, embedding structure, and it is then well suited for defining graphical representations of software models where nesting and linking are key aspects. In particular, we propose the use of our graph formalism as a convenient way to describe configurations in process calculi equipped with inherently hierarchical features such as sessions, locations, transactions, membranes or ambients. The graph syntax can be seen as an intermediate representation language that facilitates the encodings of algebraic specifications, since it provides primitives for nesting, name restriction and parallel composition. In addition, proving soundness and correctness of an encoding (i.e. proving that structurally equivalent processes are mapped to isomorphic graphs) becomes easier, as it can be done by induction over the graph syntax.
Roberto Bruni
Fabio Gadducci
Alberto Lluch-Lafuente
alberto.lluch@imtlucca.it
2011-05-18T08:54:26Z
2014-10-07T14:41:43Z
http://eprints.imtlucca.it/id/eprint/171
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/171
2011-05-18T08:54:26Z
Trail-directed model checking
HSF-SPIN is a Promela model checker based on heuristic search strategies. It utilizes heuristic estimates in order to direct the search for finding software bugs in concurrent systems. As a consequence, HSF-SPIN is able to find shorter trails than blind depth-first search.
This paper contributes an extension to the paradigm of directed model checking to shorten already established, unacceptably long error trails. This approach has been implemented in HSF-SPIN. For selected benchmark and industrial communication protocols, experimental evidence is given that trail-directed model checking effectively shortcuts existing witness paths.
Stefan Edelkamp
Alberto Lluch-Lafuente
alberto.lluch@imtlucca.it
Stefan Leue
2011-05-18T08:46:12Z
2011-07-11T14:34:34Z
http://eprints.imtlucca.it/id/eprint/148
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/148
2011-05-18T08:46:12Z
A Formalisation of Adaptable Pervasive Flows
Adaptable Pervasive Flows is a novel workflow-based paradigm for the design and execution of pervasive applications, where dynamic workflows situated in the real world are able to modify their execution in order to adapt to changes in their environment. In this paper, we study a formalisation of such flows by means of a formal flow language. More precisely, we define APFoL (Adaptable Pervasive Flow Language) and formalise its textual notation by encoding it in Blite, a formalisation of WS-BPEL. The encoding in Blite equips the language with a formal semantics and enables the use of automated verification techniques. We illustrate the approach with an example of a Warehouse Case Study.
Antonio Bucchiarone
Alberto Lluch-Lafuente
alberto.lluch@imtlucca.it
Annapaola Marconi
Marco Pistore
2011-05-17T15:01:44Z
2014-10-08T09:39:01Z
http://eprints.imtlucca.it/id/eprint/150
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/150
2011-05-17T15:01:44Z
On symbolic semantics for name-decorated contexts
In several regards, many of the recently proposed computational paradigms are open-ended, i.e. they may comprise components whose behaviour is not or cannot be fully specified. For instance, applications can be distributed across different administration domains that do not fully disclose their internal business processes to each other, or the dynamics of the system may allow reconfigurations and dynamic bindings whose specification is not available at design time. While a large set of mature design and analysis techniques for closed systems has been developed, lifting them to the open case is not always straightforward. Some existing approaches in the process calculi community are based on the need to prove properties for components that may hold in any, or significantly many, execution environments. Dually, frameworks describing the dynamics of systems with unspecified components have also been presented. In this paper we lay out some preliminary ideas on how to extend a symbolic semantics model for open systems in order to deal with name-based calculi. Moreover, we also discuss how the use of a simple type system based on name decoration for unknown components can improve the expressiveness of the framework. The approach is illustrated on a simple, paradigmatic calculus of web crawlers, which can be understood as a term representation of a simple class of graphs.
Andrea Bracciali
Roberto Bruni
Alberto Lluch-Lafuente
alberto.lluch@imtlucca.it
2011-05-17T14:56:11Z
2014-10-08T09:36:46Z
http://eprints.imtlucca.it/id/eprint/151
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/151
2011-05-17T14:56:11Z
Hierarchical design rewriting with Maude
Architectural Design Rewriting (ADR) is a rule-based approach for the design of dynamic software architectures. The key features that make ADR a suitable and expressive framework are the algebraic presentation and the use of conditional rewrite rules. These features enable, e.g. hierarchical (top-down, bottom-up or composition-based) design and inductively-defined reconfigurations. The contribution of this paper is twofold: we define Hierarchical Design Rewriting (HDR) and present our prototypical tool support. HDR is a flavour of ADR that exploits the concept of hierarchical graph to deal with system specifications combining both symbolic and interpreted parts. Our prototypical implementation is based on Maude and its presentation serves several purposes. First, we show that HDR is not only a well-founded formal approach but also a tool-supported framework for the design and analysis of software architectures. Second, our illustration tailored to a particular algebra of designs and a particular scenario traces a general methodology for the reuse and exploitation of ADR concepts in other scenarios.
Roberto Bruni
Alberto Lluch-Lafuente
alberto.lluch@imtlucca.it
Ugo Montanari
2011-05-17T14:37:17Z
2011-07-11T14:34:35Z
http://eprints.imtlucca.it/id/eprint/170
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/170
2011-05-17T14:37:17Z
Directed Explicit Model Checking with HSF-SPIN
We present the explicit state model checker HSF-SPIN, which is based on the model checker SPIN and its Promela modeling language. HSF-SPIN incorporates directed search algorithms for checking safety and a large class of LTL-specified liveness properties. We start off from the A* algorithm and define heuristics to accelerate the search into the direction of a specified failure situation. Next we propose an improved nested depth-first search algorithm that exploits the structure of Promela Never-Claims. As a result of both improvements, counterexamples will be shorter and the explored part of the state space will be smaller than with classical approaches, allowing larger state spaces to be analyzed. We evaluate the impact of the new heuristics and algorithms on a set of protocol models, some of which are real-world industrial protocols.
Stefan Edelkamp
Alberto Lluch-Lafuente
alberto.lluch@imtlucca.it
Stefan Leue
2011-05-17T14:26:42Z
2011-07-11T14:34:34Z
http://eprints.imtlucca.it/id/eprint/154
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/154
2011-05-17T14:26:42Z
Graphical Encoding of a Spatial Logic for the pi-Calculus
This paper extends our graph-based approach to the verification of spatial properties of π-calculus specifications. The mechanism is based on an encoding for mobile calculi where each process is mapped into a graph (with interfaces) such that the denotation is fully abstract with respect to the usual structural congruence, i.e., two processes are equivalent exactly when the corresponding encodings yield isomorphic graphs. Behavioral and structural properties of π-calculus processes expressed in a spatial logic can then be verified on the graphical encoding of a process rather than on its textual representation. In this paper we introduce a modal logic for graphs and define a translation of spatial formulae such that a process verifies a spatial formula exactly when its graphical representation verifies the translated modal graph formula.
Fabio Gadducci
Alberto Lluch-Lafuente
alberto.lluch@imtlucca.it
2011-05-17T14:08:06Z
2011-07-11T14:34:34Z
http://eprints.imtlucca.it/id/eprint/155
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/155
2011-05-17T14:08:06Z
Towards Model Checking Spatial Properties with SPIN
We present an approach for the verification of spatial properties with Spin. We first extend one of Spin's main property specification mechanisms, i.e., the linear-time temporal logic LTL, with spatial connectives that allow us to restrict reasoning about the behaviour of a system to only some of its components. For instance, one can express whether the system can reach a certain state from which a subset of processes can evolve alone until some property is fulfilled. We give a model checking algorithm for the logic and propose how Spin can be minimally extended to include the algorithm. We also discuss potential improvements to mitigate the exponential complexity introduced by spatial connectives. Finally, we present some experiments that compare our Spin extension with a spatial model checker for the π-calculus.
Alberto Lluch-Lafuente
alberto.lluch@imtlucca.it
2011-05-17T13:54:29Z
2011-07-11T14:34:35Z
http://eprints.imtlucca.it/id/eprint/167
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/167
2011-05-17T13:54:29Z
Directed explicit-state model checking in the validation of communication protocols
The success of model checking is largely based on its ability to efficiently locate errors in software designs. If an error is found, a model checker produces a trail that shows how the error state can be reached, which greatly facilitates debugging. However, while current model checkers find error states efficiently, the counterexamples are often unnecessarily lengthy, which hampers error explanation. This is due to the use of naive search algorithms in the state space exploration. In this paper we present approaches to the use of heuristic search algorithms in explicit-state model checking. We present the class of A * directed search algorithms and propose heuristics together with bitstate compression techniques for the search of safety property violations. We achieve great reductions in the length of the error trails, and in some instances render problems analyzable by exploring a much smaller number of states than standard depth-first search. We then suggest an improvement of the nested depth-first search algorithm and show how it can be used together with A * to improve the search for liveness property violations. Our approach to directed explicit-state model checking has been implemented in a tool set called HSF-SPIN. We provide experimental results from the protocol validation domain using HSF-SPIN.
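The abstract above describes A* directed search for short error trails. A minimal sketch of the idea (not HSF-SPIN code; the function names and toy state space are invented) expands states in order of f = g + h, so that with an admissible heuristic the first error state popped yields a shortest counterexample:

```python
import heapq

def astar_error_trail(initial, successors, is_error, h):
    # Expand states in order of f = g + h(s): g is the trail length so far,
    # h estimates the remaining distance to an error state. With an admissible
    # (non-overestimating) h, the first error popped has a shortest trail.
    open_heap = [(h(initial), 0, initial, [initial])]
    closed = {}  # best known g-value per expanded state
    while open_heap:
        _, g, state, trail = heapq.heappop(open_heap)
        if is_error(state):
            return trail
        if state in closed and closed[state] <= g:
            continue  # already expanded with an equal or shorter trail
        closed[state] = g
        for nxt in successors(state):
            heapq.heappush(open_heap, (g + 1 + h(nxt), g + 1, nxt, trail + [nxt]))
    return None  # no error state reachable

# Toy state space with an error state (5); h = 0 degenerates to uniform-cost search.
graph = {0: [1, 2], 1: [3], 2: [4], 3: [5], 4: [5], 5: []}
trail = astar_error_trail(0, lambda s: graph[s], lambda s: s == 5, lambda s: 0)
```

A blind depth-first search on the same graph could return any trail ending in the error state; the point of the directed variant is the ordering by f, which here guarantees a minimal-length trail.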
Stefan Edelkamp
Stefan Leue
Alberto Lluch-Lafuente
alberto.lluch@imtlucca.it
2011-05-13T13:17:12Z
2011-07-11T14:34:34Z
http://eprints.imtlucca.it/id/eprint/152
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/152
2011-05-13T13:17:12Z
Partial-order reduction for general state exploring algorithms
Partial-order reduction is one of the main techniques used to tackle the combinatorial state explosion problem occurring in explicit-state model checking of concurrent systems. The reduction is performed by exploiting the independence of concurrently executed events, which allows portions of the state space to be pruned. An important condition for the soundness of partial-order-based reduction algorithms is a condition that prevents indefinite ignoring of actions when pruning the state space. This condition is commonly known as the cycle proviso. In this paper, we present a new version of this proviso, which is applicable to a general search algorithm skeleton that we refer to as the general state exploring algorithm (GSEA). GSEA maintains a set of open states from which states are iteratively selected for expansion and moved to a closed set of states. Depending on the data structure used to represent the open set, GSEA can be instantiated as a depth-first, a breadth-first, or a directed search algorithm such as Best-First Search or A*. The proviso is characterized by reference to the open and closed set of states of the search algorithm. As a result, it can be computed in an efficient manner during the search based on local information. We implemented partial-order reduction for GSEA based on our proposed proviso in the tool HSF-SPIN, an extension of the explicit-state model checker SPIN for directed model checking. We evaluate the state space reduction achieved by partial-order reduction using the proposed proviso by comparing it on a set of benchmark problems to the use of other provisos. We also compare the use of breadth-first search (BFS) and A*, two algorithms ensuring that counterexamples of minimal length will be found, together with the proviso that we propose.
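The GSEA skeleton described above can be sketched as follows (an illustrative reconstruction, not the HSF-SPIN implementation; names and the toy state space are invented). The data structure chosen for the open set instantiates the skeleton as depth-first, breadth-first, or a directed search such as A*:

```python
import heapq
from collections import deque

def gsea(initial, successors, is_goal, strategy="bfs", h=lambda s: 0):
    # The open set's data structure fixes the exploration order:
    # a stack gives DFS, a queue gives BFS, a priority queue gives A*.
    if strategy == "astar":
        open_set = [(h(initial), 0, initial, [initial])]
        pop = lambda: heapq.heappop(open_set)
        push = lambda g, s, t: heapq.heappush(open_set, (g + h(s), g, s, t))
    else:
        open_set = deque([(0, 0, initial, [initial])])
        pop = open_set.pop if strategy == "dfs" else open_set.popleft
        push = lambda g, s, t: open_set.append((0, g, s, t))
    closed = set()  # states already expanded (moved out of the open set)
    while open_set:
        _, g, state, trail = pop()
        if is_goal(state):
            return trail
        if state in closed:
            continue
        closed.add(state)
        for nxt in successors(state):
            if nxt not in closed:
                push(g + 1, nxt, trail + [nxt])
    return None

# Toy state space: a goal state (5) reachable from the initial state (0).
graph = {0: [1, 2], 1: [3], 2: [4], 3: [5], 4: [5], 5: []}
```

A cycle proviso in the sense of the abstract would be checked at expansion time against these same open and closed sets, which is what makes it computable from local information during the search.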
Dragan Bosnacki
Stefan Leue
Alberto Lluch-Lafuente
alberto.lluch@imtlucca.it
2011-05-13T13:13:01Z
2011-07-11T14:34:34Z
http://eprints.imtlucca.it/id/eprint/153
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/153
2011-05-13T13:13:01Z
Graph-Based Design and Analysis of Dynamic Software Architectures
We illustrate two ways to address the specification, modelling and analysis of dynamic software architectures using: i) ordinary typed graph transformation techniques implemented in Alloy; ii) a process algebraic presentation of graph transformation implemented in Maude. The two approaches are compared by showing how different aspects can be tackled, including representation issues, modelling phases, property specification and analysis.
Roberto Bruni
Antonio Bucchiarone
Stefania Gnesi
Dan Hirsch
Alberto Lluch-Lafuente
alberto.lluch@imtlucca.it
2011-05-13T13:07:05Z
2011-07-11T14:34:35Z
http://eprints.imtlucca.it/id/eprint/169
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/169
2011-05-13T13:07:05Z
Partial Order Reduction in Directed Model Checking
Partial order reduction is a very successful technique for avoiding the state explosion problem that is inherent to explicit state model checking of asynchronous concurrent systems. It exploits the commutativity of concurrently executed transitions in interleaved system runs in order to reduce the size of the explored state space. Directed model checking, on the other hand, addresses the state explosion problem by using guided search techniques during state space exploration. As a consequence, shorter error trails are found and less search effort is required than when using standard depth-first or breadth-first search. We analyze how to combine directed model checking with partial order reduction methods and give experimental results on how the combination of both techniques performs.
Alberto Lluch-Lafuente
alberto.lluch@imtlucca.it
Stefan Edelkamp
Stefan Leue
2011-05-13T12:54:53Z
2011-07-11T14:34:35Z
http://eprints.imtlucca.it/id/eprint/168
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/168
2011-05-13T12:54:53Z
Partial-order reduction and trail improvement in directed model checking
In this paper we present work on trail improvement and partial-order reduction in the context of directed explicit-state model checking. Directed explicit-state model checking employs directed heuristic search algorithms such as A* or best-first search to improve the error-detection capabilities of explicit-state model checking. We first present the use of directed explicit-state model checking to improve the length of already established error trails. Second, we show that partial-order reduction, which aims at reducing the size of the state space by exploiting the commutativity of concurrent transitions in asynchronous systems, can coexist well with directed explicit-state model checking. Finally, we illustrate how to mitigate the excessive length of error trails produced by partial-order reduction in explicit-state model checking. In this context we also propose a combination of heuristic search and partial-order reduction to improve the length of already provided counterexamples.
Stefan Edelkamp
Stefan Leue
Alberto Lluch-Lafuente
alberto.lluch@imtlucca.it
2011-04-12T09:19:59Z
2011-07-11T14:34:34Z
http://eprints.imtlucca.it/id/eprint/156
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/156
2011-04-12T09:19:59Z
Service Oriented Architectural Design
We propose Architectural Design Rewriting (ADR), an approach to formalise the development and reconfiguration of software architectures based on term-rewriting. An architectural style consists of a set of architectural elements and operations called productions which define the well-formed compositions of architectures. Roughly, a term built out of such ingredients constitutes the proof that a design was constructed according to the style, and the value of the term is the constructed software architecture. A main advantage of ADR is that it naturally supports style-preserving reconfigurations. The usefulness of our approach is shown by applying ADR to SRML, an emergent paradigm inspired by the Service Component Architecture. We model the complex operation that composes several SRML modules into a single one by means of suitable rewrite rules. Our approach guarantees that the resulting module respects SRML’s metamodel.
Roberto Bruni
Alberto Lluch-Lafuente
alberto.lluch@imtlucca.it
Ugo Montanari
Emilio Tuosto
2011-03-31T14:52:50Z
2011-07-11T14:34:34Z
http://eprints.imtlucca.it/id/eprint/157
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/157
2011-03-31T14:52:50Z
Heuristic Search for the Analysis of Graph Transition Systems
Graphs are suitable modeling formalisms for software and hardware systems involving aspects such as communication, object orientation, concurrency, mobility and distribution. State spaces of such systems can be represented by graph transition systems, which are basically transition systems whose states and transitions represent graphs and graph morphisms. Heuristic search is a successful Artificial Intelligence technique for solving exploration problems implicitly present in games, planning, and formal verification. Heuristic search exploits information about the problem being solved to guide the exploration process. The main benefits are significant reductions in the search effort and the size of solutions. We propose the application of heuristic search for the analysis of graph transition systems. We define algorithms and heuristics and present experimental results.
Stefan Edelkamp
Shahid Jabbar
Alberto Lluch-Lafuente
alberto.lluch@imtlucca.it
2011-03-31T14:41:48Z
2011-07-11T14:34:34Z
http://eprints.imtlucca.it/id/eprint/149
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/149
2011-03-31T14:41:48Z
Ten virtues of structured graphs
This paper extends the invited talk by the first author about the virtues of structured graphs. The motivation behind the talk and this paper lies in our experience with the development of ADR, a formal approach for the design of style-conformant, reconfigurable software systems. ADR is based on hierarchical graphs with interfaces and it has been conceived in an attempt to reconcile software architectures and process calculi by means of graphical methods. We have tried to write an ADR-agnostic paper where we raise some drawbacks of flat, unstructured graphs for the design and analysis of software systems and we argue that hierarchical, structured graphs can alleviate such drawbacks.
Roberto Bruni
Alberto Lluch-Lafuente
alberto.lluch@imtlucca.it
2011-03-31T14:06:37Z
2011-07-11T14:34:34Z
http://eprints.imtlucca.it/id/eprint/158
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/158
2011-03-31T14:06:37Z
Partial-Order Reduction for General State Exploring Algorithms
An important component of partial-order based reduction algorithms is the condition that prevents action ignoring, commonly known as the cycle proviso. In this paper we give a new version of this proviso that is applicable to a general search algorithm skeleton also known as the General State Expanding Algorithm (GSEA). GSEA maintains a set of open (visited but not expanded) states from which states are iteratively selected for exploration and moved to a closed set of states (visited and expanded). Depending on the open set data structure used, GSEA can be instantiated as depth-first, breadth-first, or a directed search algorithm. The proviso is characterized by reference to the open and closed set of states in GSEA. As a result the proviso can be computed in an efficient manner during the search based on local information. We implemented partial-order reduction for GSEA based on our proposed proviso in the tool HSF-SPIN, which is an extension of the model checker SPIN for directed model checking. We evaluate the state space reduction achieved by partial-order reduction according to the proviso that we propose by comparing it on a set of benchmark problems to other reduction approaches. We also compare the use of breadth-first search and A*, two algorithms ensuring that counterexamples of minimal length will be found, together with the proviso that we propose.
Dragan Bosnacki
Stefan Leue
Alberto Lluch-Lafuente
alberto.lluch@imtlucca.it
2011-03-31T13:40:00Z
2011-07-11T14:34:34Z
http://eprints.imtlucca.it/id/eprint/141
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/141
2011-03-31T13:40:00Z
Exploiting the Hierarchical Structure of Rule-Based Specifications for Decision Planning
Rule-based specifications have been very successful as a declarative approach in many domains, due to the handy yet solid foundations offered by rule-based machineries like term and graph rewriting. Realistic problems, however, call for suitable techniques to guarantee scalability. For instance, many domains exhibit a hierarchical structure that can be exploited conveniently. This is particularly evident for composition associations of models. We propose an explicit representation of such structured models and a methodology that exploits it for the description and analysis of model- and rule-based systems. The approach is presented in the framework of rewriting logic and its efficient implementation in the rewrite engine Maude and is illustrated with a case study.
Alberto Lluch-Lafuente
alberto.lluch@imtlucca.it
Roberto Bruni
Artur Boronat
Ugo Montanari
Generoso Paolillo
2011-03-31T10:55:12Z
2011-07-11T14:34:34Z
http://eprints.imtlucca.it/id/eprint/142
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/142
2011-03-31T10:55:12Z
On GS-Monoidal Theories for Graphs with Nesting
We propose a sound and complete axiomatisation of a class of graphs with nesting and either locally or globally restricted nodes. Such graphs allow one to represent explicitly and at the right level of abstraction some relevant topological and logical features of models and systems, including nesting, hierarchies, sharing of resources, and pointers or links. We also provide an encoding of the proposed algebra into terms of a gs-monoidal theory, and through these into a suitable class of well-scoped term graphs, showing that this encoding is sound and complete with respect to the axioms of the algebra.
Roberto Bruni
Andrea Corradini
Fabio Gadducci
Alberto Lluch-Lafuente
alberto.lluch@imtlucca.it
Ugo Montanari
2011-03-31T10:44:27Z
2016-07-13T09:46:04Z
http://eprints.imtlucca.it/id/eprint/143
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/143
2011-03-31T10:44:27Z
Counterpart semantics for a second-order mu-calculus
We propose a novel approach to the semantics of quantified μ-calculi, considering models where states are algebras; the evolution relation is given by a counterpart relation (a family of partial homomorphisms), allowing for the creation, deletion, and merging of components; and formulas are interpreted over sets of state assignments (families of substitutions, associating formula variables to state components). Our proposal avoids the limitations of existing approaches, usually enforcing restrictions of the evolution relation: the resulting semantics is a streamlined and intuitively appealing one, yet it is general enough to cover most of the alternative proposals we are aware of.
Fabio Gadducci
Alberto Lluch-Lafuente
alberto.lluch@imtlucca.it
Andrea Vandin
andrea.vandin@imtlucca.it
2011-03-31T10:44:09Z
2011-07-11T14:34:34Z
http://eprints.imtlucca.it/id/eprint/144
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/144
2011-03-31T10:44:09Z
An Algebra of Hierarchical Graphs
We define an algebraic theory of hierarchical graphs, whose axioms characterise graph isomorphism: two terms are equated exactly when they represent the same graph. Our algebra can be understood as a high-level language for describing graphs with a node-sharing, embedding structure, and it is then well suited for defining graphical representations of software models where nesting and linking are key aspects.
Roberto Bruni
Fabio Gadducci
Alberto Lluch-Lafuente
alberto.lluch@imtlucca.it
2011-03-24T11:00:40Z
2015-02-11T14:37:46Z
http://eprints.imtlucca.it/id/eprint/166
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/166
2011-03-24T11:00:40Z
Quantitative mu-calculus and CTL defined over constraint semirings
Model checking and temporal logics are boolean. The answer to the model checking question "does a system satisfy a property?" is either true or false, and properties expressed in temporal logics are defined over boolean propositions. While this classic approach is enough to specify and verify boolean temporal properties, it does not allow reasoning about quantitative aspects of systems. Some quantitative extensions of temporal logics have already been proposed, especially in the context of probabilistic systems. They allow answering questions like "with which probability does a system satisfy a property?"
We present a generalization of two well-known temporal logics: CTL and the [mu]-calculus. Both extensions are defined over c-semirings, an algebraic structure that captures quantitative aspects like quality of service or soft constraints. Basically, a c-semiring consists of a domain, an additive operation and a multiplicative operation, which satisfy some properties. We present the semantics of the extended logics over transition systems, where a formula is interpreted as a mapping from the set of states to the domain of the c-semiring, and show that the usual connection between CTL and [mu]-calculus does not hold in general. In addition, we reason about the complexity of computing the logics and illustrate some applications of our framework, including boolean model checking.
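The c-semiring structure named in the abstract can be made concrete with a minimal sketch. The class and the `ex` operator below are illustrative, not the paper's formalization: a formula denotes a map from states to the semiring domain, so a quantitative next-step modality folds the additive operation over successor values.

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass(frozen=True)
class CSemiring:
    plus: Callable[[Any, Any], Any]   # additive op: combines alternatives
    times: Callable[[Any, Any], Any]  # multiplicative op: combines steps
    zero: Any                         # unit of plus
    one: Any                          # unit of times

# Boolean instance: recovers classical (true/false) model checking.
BOOL = CSemiring(lambda a, b: a or b, lambda a, b: a and b, False, True)
# Tropical instance: a cost/QoS reading where "best" means cheapest.
TROPICAL = CSemiring(min, lambda a, b: a + b, float("inf"), 0)

def ex(sr, transitions, phi):
    """Quantitative EX phi: at state s, fold the additive operation over
    phi's value at each successor; phi maps states to the semiring domain."""
    def value(s):
        v = sr.zero
        for t in transitions.get(s, ()):
            v = sr.plus(v, phi(t))
        return v
    return value
```

With `BOOL` this is the usual "some successor satisfies phi"; with `TROPICAL` it yields the cheapest value reachable in one step, which is the sense in which a formula is interpreted as a mapping from states to the c-semiring domain rather than to true/false.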
Alberto Lluch-Lafuente
alberto.lluch@imtlucca.it
Ugo Montanari
2011-03-24T10:41:56Z
2014-10-07T14:51:25Z
http://eprints.imtlucca.it/id/eprint/165
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/165
2011-03-24T10:41:56Z
Quantitative mu-calculus and CTL Based on Constraint Semirings
Model checking and temporal logics are boolean. The answer to the model checking question "does a system satisfy a property?" is either true or false, and properties expressed in temporal logics are defined over boolean propositions. While this classic approach is enough to specify and verify boolean temporal properties, it does not allow reasoning about quantitative aspects of systems. Some quantitative extensions of temporal logics have already been proposed, especially in the context of probabilistic systems. They allow answering questions like "with which probability does a system satisfy a property?"
We present a generalization of two well-known temporal logics: CTL and the [mu]-calculus. Both extensions are defined over c-semirings, an algebraic structure that captures many problems and that has been proposed as a general framework for soft constraint satisfaction problems (CSP). Basically, a c-semiring consists of a domain, an additive operation and a multiplicative operation, which satisfy some properties. We present the semantics of the extended logics over transition systems, where a formula is interpreted as a mapping from the set of states to the domain of the c-semiring, and show that the usual connection between CTL and [mu]-calculus does not hold in general. In addition, we reason about the feasibility of computing the logics and illustrate some applications of our framework, including boolean model checking.
Alberto Lluch-Lafuente
alberto.lluch@imtlucca.it
Ugo Montanari
2011-03-24T10:30:42Z
2011-07-11T14:34:35Z
http://eprints.imtlucca.it/id/eprint/164
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/164
2011-03-24T10:30:42Z
Using Linear Temporal Model Checking for Goal-Oriented Policy Refinement Frameworks
Policy refinement is meant to derive lower-level policies from higher-level ones so that these more specific policies are better suited for use in different execution environments. Although it has been recognized as crucial, it has received relatively little attention. We present a policy refinement framework grounded in goal-elaboration methodologies and reactive systems analysis. Through linear-time model checking, we obtain system trace executions aimed at fulfilling lower-level goals refined with the KAOS goal-elaboration method. From system executions, we abstract managed entities, conditions and actions to encode the refined policies. We present our framework and provide a refinement scenario applied to the DiffServ QoS management domain.
Javier Rubio-Loyola
Joan Serrat
Paris Flegkas
George Pavlou
Alberto Lluch-Lafuente
alberto.lluch@imtlucca.it
2011-03-24T10:18:31Z
2011-07-11T14:34:35Z
http://eprints.imtlucca.it/id/eprint/163
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/163
2011-03-24T10:18:31Z
Cost-Algebraic Heuristic Search
Heuristic search is used to efficiently solve the single-node shortest path problem in weighted graphs. In practice, however, one is not only interested in finding a short path, but an optimal path, according to a certain cost notion. We propose an algebraic formalism that captures many cost notions, like typical Quality of Service attributes. We thus generalize A*, the popular heuristic search algorithm, for solving the optimal-path problem. The paper provides an answer to a fundamental question for AI search, namely to which general notion of cost heuristic search algorithms can be applied. We prove correctness of the algorithms and provide experimental results that validate the feasibility of the approach.
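The generalization the abstract describes can be illustrated by parameterizing A* over an abstract cost combination. This sketch is not the paper's cost algebra and `algebraic_astar` is an invented name; it assumes a totally ordered cost set with an isotone `combine` and an admissible heuristic. With `combine = +` it is ordinary A*; with `combine = max` it minimizes the worst edge on a path (a typical QoS-style cost notion):

```python
import heapq

def algebraic_astar(start, goal, successors, h, combine, identity):
    """A* over an abstract cost notion: `combine` accumulates edge costs
    (+ for path length, max for bottleneck cost). Costs are assumed
    totally ordered, with `combine` isotone and h admissible."""
    frontier = [(combine(identity, h(start)), identity, start)]
    best = {}
    while frontier:
        _, g, state = heapq.heappop(frontier)
        if state == goal:
            return g
        if state in best and not g < best[state]:
            continue                  # an equal-or-better cost was expanded
        best[state] = g
        for nxt, w in successors(state):
            g2 = combine(g, w)
            heapq.heappush(frontier, (combine(g2, h(nxt)), g2, nxt))
    return None
```

The point of the algebraic view is exactly this: the same search code answers both "shortest path" and "best bottleneck" questions, because only the cost operations change.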
Stefan Edelkamp
Shahid Jabbar
Alberto Lluch-Lafuente
alberto.lluch@imtlucca.it
2011-03-24T09:48:30Z
2014-10-07T15:04:23Z
http://eprints.imtlucca.it/id/eprint/162
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/162
2011-03-24T09:48:30Z
Graphical Verification of a Spatial Logic for the pi-calculus
The paper introduces a novel approach to the verification of spatial properties for finite [pi]-calculus specifications. The mechanism is based on a recently proposed graphical encoding for mobile calculi: Each process is mapped into a (ranked) graph, such that the denotation is fully abstract with respect to the usual structural congruence (i.e., two processes are equivalent exactly when the corresponding encodings yield the same graph). Spatial properties for reasoning about the behavior and the structure of pi-calculus processes are then expressed in a logic introduced by Caires, and they are verified on the graphical encoding of a process, rather than on its textual representation. More precisely, the graphical presentation allows for a simple and easy-to-implement verification algorithm based on the graphical encoding (returning true if and only if a given process verifies a given spatial formula).
Fabio Gadducci
Alberto Lluch-Lafuente
alberto.lluch@imtlucca.it
2011-03-24T09:18:25Z
2014-10-07T14:57:13Z
http://eprints.imtlucca.it/id/eprint/161
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/161
2011-03-24T09:18:25Z
A logic for application level QoS
Service Oriented Computing (SOC) has been proposed as a paradigm to describe computations of applications on wide area distributed systems. Awareness of Quality of Service (QoS) is emerging as a new exigency in both design and implementation of SOC applications.
We do not refer to QoS aspects related to low-level performance and focus on those high-level non-functional features perceived by end-users as application dependent requirements, e.g., the price of a given service, or the payment mode, or else the availability of a resource (e.g., a file in a given format).
In this paper we present a logic which includes mechanisms to consider the three main dimensions of systems, namely their structure, behaviour and QoS aspects. The evaluation of a formula is a value of a constraint-semiring and not just a boolean value expressing whether or not the formula holds. This permits expressing not only topological and temporal properties but also QoS properties of systems.
The logic is interpreted on SHReQ, a formal framework for specifying systems that handles abstract high-level QoS aspects combining Synchronised Hyperedge Replacement with constraint-semirings.
Dan Hirsch
Alberto Lluch-Lafuente
alberto.lluch@imtlucca.it
Emilio Tuosto
2011-03-09T14:42:08Z
2014-10-07T14:04:04Z
http://eprints.imtlucca.it/id/eprint/160
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/160
2011-03-09T14:42:08Z
A Logic for Graphs with QoS
We introduce a simple graph logic that supports specification of Quality of Service (QoS) properties of applications. The idea is that we are not only interested in representing whether two sites are connected, but we want to express the QoS level of the connection. The evaluation of a formula in the graph logic is a value of a suitable algebraic structure, a c-semiring, representing the QoS level of the formula and not just a boolean value expressing whether or not the formula holds. We present some examples and briefly discuss the expressiveness and complexity of our logic.
GianLuigi Ferrari
Alberto Lluch-Lafuente
alberto.lluch@imtlucca.it
2011-03-08T11:04:35Z
2011-07-11T14:33:43Z
http://eprints.imtlucca.it/id/eprint/181
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/181
2011-03-08T11:04:35Z
Pi-Calculus Early Observational Equivalence: A First Order Coalgebraic Model
In this paper, we propose a compositional coalgebraic semantics of the pi-calculus based on a novel approach for lifting calculi with structural axioms to coalgebraic models. We equip the transition system of the calculus with permutations, parallel composition and restriction operations, thus obtaining a bialgebra. No prefix operation is introduced, relying instead on a clause format defining the transitions of recursively defined processes. The unique morphism to the final bialgebra induces a bisimilarity relation which coincides with observational equivalence and which is a congruence with respect to the operations. The permutation algebra is enriched with a name extrusion operator delta à la De Bruijn, that shifts any name to the successor and generates a new name in the first variable. As a consequence, in the axioms and in the SOS rules there is no need to refer to the support, i.e., the set of significant names, and, thus, the model turns out to be first order.
Maria Grazia Buscemi
m.buscemi@imtlucca.it
Ugo Montanari
2011-03-08T10:56:15Z
2011-07-11T14:33:43Z
http://eprints.imtlucca.it/id/eprint/179
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/179
2011-03-08T10:56:15Z
A Compositional Coalgebraic Model of Monadic Fusion Calculus
We propose a compositional coalgebraic semantics of the Fusion calculus of Parrow and Victor in the version with explicit fusions by Gardner and Wischik. We follow a recent approach developed by the same authors and previously applied to the pi-calculus for lifting calculi with structural axioms to bialgebraic models. In our model, the unique morphism to the final bialgebra induces a bisimilarity relation which coincides with hyperequivalence and which is a congruence with respect to the operations. Interestingly enough, the explicit fusion approach allows us to exploit for the Fusion calculus essentially the same algebraic structure used for the pi-calculus.
Maria Grazia Buscemi
m.buscemi@imtlucca.it
Ugo Montanari
2011-03-08T10:51:00Z
2014-01-24T14:16:45Z
http://eprints.imtlucca.it/id/eprint/180
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/180
2011-03-08T10:51:00Z
Constraints for Service Contracts
This paper focuses on client-service interactions distinguishing between three phases: negotiate, commit and execute. The participants negotiate their behaviours, and if an agreement is reached they commit and start an execution which is guaranteed to respect the interaction scheme agreed upon. These ideas are materialised through a calculus of contracts enriched with semiring-based constraints, which allow clients to choose services and to interact with them in a safe way. A concrete representation of these constraints with logic programs and logic program combinations is straightforward, thus reducing constraint solution (and consequently the establishment of a contract) to the execution of a logic program.
Maria Grazia Buscemi
m.buscemi@imtlucca.it
Mario Coppo
Mariangiola Dezani-Ciancaglini
Ugo Montanari
2011-03-07T14:27:54Z
2011-07-11T14:34:34Z
http://eprints.imtlucca.it/id/eprint/159
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/159
2011-03-07T14:27:54Z
A Temporal Graph Logic for Verification of Graph Transformation Systems
We extend our approach for verifying properties of graph transformation systems using suitable abstractions. In the original approach properties are specified as formulae of a propositional temporal logic whose atomic predicates are monadic second-order graph formulae. We generalize this aspect by considering more expressive logics, where edge quantifiers and temporal modalities can be interleaved, a feature which allows, e.g., to trace the history of objects in time. This requires the use of graph transition systems, a generalization of transition systems where states and transitions are mapped to graphs and graph morphisms, respectively, and of a corresponding notion of abstraction. After characterizing fragments of the logic which can be safely checked on the approximations, we show how the verification of the logic over graph transformation systems can be reduced to the verification of a logic over suitably defined Petri nets.
Paolo Baldan
Andrea Corradini
Barbara König
Alberto Lluch-Lafuente
alberto.lluch@imtlucca.it
2011-03-07T10:33:12Z
2014-01-24T14:18:26Z
http://eprints.imtlucca.it/id/eprint/137
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/137
2011-03-07T10:33:12Z
Constraints for Service Contracts
Maria Grazia Buscemi
m.buscemi@imtlucca.it
Mario Coppo
Mariangiola Dezani-Ciancaglini
Ugo Montanari
2011-03-07T08:51:43Z
2012-07-06T13:25:33Z
http://eprints.imtlucca.it/id/eprint/174
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/174
2011-03-07T08:51:43Z
A Presheaf Environment for the Explicit Fusion Calculus
Name passing calculi are nowadays one of the preferred formalisms for the specification of concurrent and distributed systems with a dynamically evolving topology. Despite their widespread adoption as a theoretical tool, though, they still face some unresolved semantic issues, since the standard operational, denotational and logical methods often proved inadequate to reason about these formalisms. A domain which has been successfully employed for languages with asymmetric communication, like the π-calculus, are presheaf categories based on (injective) relabellings, such as SetI. Calculi with symmetric binding, in the spirit of the fusion calculus, give rise to novel research challenges. In this work we examine the explicit fusion calculus, and propose to model its syntax and semantics using the presheaf category SetE, where E is the category of equivalence relations and equivalence preserving morphisms.
Filippo Bonchi
Maria Grazia Buscemi
m.buscemi@imtlucca.it
Vincenzo Ciancia
Fabio Gadducci
2011-03-03T11:38:59Z
2011-07-11T14:33:42Z
http://eprints.imtlucca.it/id/eprint/115
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/115
2011-03-03T11:38:59Z
Contracts for Abstract Processes in Service Composition
Contracts are a well-established approach for describing and analyzing behavioral aspects of web service compositions. The theory of contracts comes equipped with a notion of compatibility between clients and servers that ensures that every possible interaction between compatible clients and servers will complete successfully. It is generally agreed that real applications often require the ability to expose just partial descriptions of their behaviors, which are usually known as abstract processes. We propose a formal characterization of abstraction as an extension of the usual symbolic bisimulation and we recover the notion of abstraction in the context of contracts.
Maria Grazia Buscemi
m.buscemi@imtlucca.it
Hernán C. Melgratti
2011-03-03T11:02:37Z
2011-07-11T14:33:43Z
http://eprints.imtlucca.it/id/eprint/121
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/121
2011-03-03T11:02:37Z
A Survey of Constraint-based Programming Paradigms
Constraints support a programming style featuring declarative description and effective solving of several classes of problems. Unlike basic primitives of other programming languages, constraints do not specify computing operations, but rather the properties of a solution to be found. In this paper, we give a survey of the main formalisms based on constraints: Constraint Satisfaction Problems, Constraint Logic Programming and Concurrent Constraint Programming. We outline recent extensions of these approaches and we discuss ongoing trends of research.
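The declarative style the survey describes can be made concrete with a toy backtracking solver for a Constraint Satisfaction Problem. This is a minimal sketch (the helper `solve_csp` is invented here, not taken from the survey): the caller states properties a solution must satisfy, not the computing steps that find it.

```python
def solve_csp(variables, domains, constraints, assignment=None):
    """Minimal backtracking CSP solver. Each constraint is a predicate
    over a partial assignment (returning True when it cannot yet judge),
    so callers describe solutions declaratively."""
    if assignment is None:
        assignment = {}
    if len(assignment) == len(variables):
        return assignment                 # every variable bound: a solution
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        candidate = {**assignment, var: value}
        if all(c(candidate) for c in constraints):
            solution = solve_csp(variables, domains, constraints, candidate)
            if solution is not None:
                return solution
    return None                           # no consistent value for var
```

For example, a difference constraint that tolerates partial assignments is `lambda a: 'x' not in a or 'y' not in a or a['x'] != a['y']`; the same declarative reading underlies the logic-programming and concurrent-constraint formalisms the survey covers.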
Maria Grazia Buscemi
m.buscemi@imtlucca.it
Ugo Montanari
2011-03-03T10:21:08Z
2014-10-08T09:13:23Z
http://eprints.imtlucca.it/id/eprint/126
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/126
2011-03-03T10:21:08Z
A compositional coalgebraic model of a fragment of fusion calculus
This work is a further step in exploring the labelled transitions and bisimulations of fusion calculi. We follow the approach developed by Turi and Plotkin for lifting transition systems with a syntactic structure to bialgebras and, thus, we provide a compositional model of the fusion calculus with explicit fusions. In such a model, the bisimilarity relation induced by the unique morphism to the final coalgebra coincides with fusion hyperequivalence and it is a congruence with respect to the operations of the calculus. The key novelty in our work is to give an account of explicit fusions through labelled transitions. In this short essay, we focus on a fragment of the fusion calculus without recursion and replication.
Maria Grazia Buscemi
m.buscemi@imtlucca.it
Ugo Montanari
2011-03-02T15:12:27Z
2013-10-28T11:55:26Z
http://eprints.imtlucca.it/id/eprint/135
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/135
2011-03-02T15:12:27Z
A Compositional Coalgebraic Model of Fusion Calculus
This paper is a further step in exploring the labelled transitions and bisimulations of fusion calculi. We follow a recent theory by the same authors and previously applied to the pi-calculus for lifting calculi with structural axioms to bialgebras and, thus, we provide a compositional model of the fusion calculus with explicit fusions. In such a model, the bisimilarity relation induced by the unique morphism to the final coalgebra coincides with fusion hyperequivalence and it is a congruence with respect to the operations of the calculus. The key novelty in our work is that we give an account of explicit fusions through labelled transitions. Interestingly enough, this approach allows us to exploit for the fusion calculus essentially the same algebraic structure used for the pi-calculus.
Maria Grazia Buscemi
m.buscemi@imtlucca.it
Ugo Montanari
2011-03-02T14:55:46Z
2011-07-11T14:33:43Z
http://eprints.imtlucca.it/id/eprint/124
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/124
2011-03-02T14:55:46Z
Transactional Service Level Agreement
Several models based on process calculi have addressed the definition of linguistic primitives for handling long running transactions and Service Level Agreements (SLAs) in service oriented applications. Nevertheless, the approaches that have appeared in the literature treat these aspects as independent features. We claim that transactional mechanisms are relevant for programming multi-step SLA negotiations and, hence, that it is worth investigating the interplay between such formal approaches. In this paper we propose a process calculus, the committed cc-pi, that combines two proposals: (i) the cc-pi calculus, accounting for SLA negotiation, and (ii) cJoin, a model of long running transactions. We provide both a small-step and a big-step operational semantics of committed cc-pi as labelled transition systems, and we prove a correspondence result.
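The transactional flavour of a multi-step negotiation — commit only when every party agrees, roll back otherwise — can be sketched in a few lines of Python. This is a hypothetical illustration, not the committed cc-pi semantics: stores are plain sets of constraint labels, and the step functions `offer`, `accept`, and `refuse` are invented.

```python
def negotiate(steps, store=None):
    """Run constraint-adding negotiation steps; commit only if all
    succeed, otherwise roll back to the initial store."""
    store = set(store or ())
    snapshot = set(store)              # saved for the abort case
    for step in steps:
        new = step(store)
        if new is None:                # a party refuses: abort
            return snapshot, False
        store |= new                   # record the agreed constraints
    return store, True                 # every party agreed: commit

offer = lambda s: {'price<=10'}                                  # proposer
accept = lambda s: {'latency<=5'} if 'price<=10' in s else None  # counterpart
refuse = lambda s: None                                          # always aborts

agreed, ok = negotiate([offer, accept])
failed, ko = negotiate([offer, refuse])
```

Here `negotiate([offer, accept])` commits with both constraints in the store, while a run containing `refuse` aborts and restores the initial empty store.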
Maria Grazia Buscemi
m.buscemi@imtlucca.it
Hernán C. Melgratti
2011-03-02T14:44:00Z
2011-07-11T14:33:43Z
http://eprints.imtlucca.it/id/eprint/123
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/123
2011-03-02T14:44:00Z
Constraint-Based Policy Negotiation and Enforcement for Telco Services
Telco services are evolving in several respects: for instance, services may combine different telecommunication features (messaging, multi-media, etc.) and may be activated and controlled by applications deployed in a 3rd party domain. Telco infrastructures are following this trend by adopting service oriented architecture solutions, e.g. for composing services and for introducing uniform interaction models among services. In a SOA-based system, the capabilities, requirements and general features of services can be expressed in terms of policies. Such policies are negotiated in order to define a service level agreement among the involved parties. In this paper we show how to specify, negotiate, and enforce policies for Telco services by using a constraint-based model, the cc-pi calculus. This language extends concurrent constraint programming with synchronous communication and local names, and with the notion of soft constraints, which generalise classical constraints to represent preference levels. In the cc-pi calculus, policies are expressed as soft constraints and the parties involved in the negotiation as communicating processes. The model allows the specification of complex scenarios in which policy negotiations and validations can be arbitrarily nested.
Maria Grazia Buscemi
m.buscemi@imtlucca.it
Laura Ferrari
Corrado Moiso
Ugo Montanari
2011-03-02T14:20:15Z
2011-07-11T14:33:43Z
http://eprints.imtlucca.it/id/eprint/122
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/122
2011-03-02T14:20:15Z
CC-Pi: A Constraint-Based Language for Specifying Service Level Agreements
Service Level Agreements are a key issue in Service Oriented Computing. SLA contracts specify client requirements and service guarantees, with emphasis on Quality of Service (cost, performance, availability, etc.). In this work we propose a simple model of contracts for QoS and SLAs that also allows us to study mechanisms for resource allocation and for joining different SLA requirements. Our language combines two basic programming paradigms: name-passing calculi and concurrent constraint programming (cc programming). Specifically, we extend cc programming by adding synchronous communication and by providing a treatment of names, in terms of restriction and structural axioms, closer to nominal calculi than to variables with existential quantification. In the resulting framework, SLA requirements are constraints that can be generated either by a single party or by the synchronisation of two agents. Moreover, restricting the scope of names allows for local stores of constraints, which may become global as a consequence of synchronisations. Our approach relies on a system of named constraints that equips classical constraints with a suitable algebraic structure, providing a richer mechanism of constraint combination. We give reduction-preserving translations of both cc programming and the calculus of explicit fusions.
Maria Grazia Buscemi
m.buscemi@imtlucca.it
Ugo Montanari
2011-03-02T11:26:57Z
2011-07-11T14:33:43Z
http://eprints.imtlucca.it/id/eprint/120
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/120
2011-03-02T11:26:57Z
Open Bisimulation for the Concurrent Constraint Pi-Calculus
The concurrent constraint pi-calculus (cc-pi calculus) has been introduced as a model for concluding Service Level Agreements. The cc-pi calculus combines the synchronous communication paradigm of process calculi with the constraint handling mechanism of concurrent constraint programming. While in the original presentation of the calculus a reduction semantics has been proposed, in this work we investigate the abstract semantics of cc-pi processes. First, we define a labelled transition system of the calculus and a notion of open bisimilarity à la pi-calculus that is proved to be a congruence. Next, we give a symbolic characterisation of bisimulation and we prove that the two semantics coincide. Essentially, two processes are open bisimilar if they have the same stores of constraints — this can be statically checked — and if their moves can be mutually simulated. A key idea of the symbolic transition system is to have ‘contextual’ labels, i.e. labels specifying that a process can evolve only in the presence of certain constraints. Finally, we show that the polyadic Explicit Fusions calculus introduced by Gardner and Wischik can be translated into monadic cc-pi and that such a translation preserves open bisimilarity. The mapping exploits fusions and tuple unifications as constraints.
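The condition that "their moves can be mutually simulated" can be illustrated with a naive greatest-fixpoint bisimilarity check on a finite labelled transition system. This is a generic textbook sketch: the constraint stores and contextual labels of the symbolic semantics are deliberately omitted, and the state names are invented.

```python
def bisimilar(states, trans, p, q):
    """Greatest-fixpoint bisimilarity on a finite LTS. `trans` maps a
    state to a set of (label, successor) pairs."""
    rel = {(a, b) for a in states for b in states}

    def matched(a, b):
        # every move of a is answered by b within rel, and vice versa
        fwd = all(any(l2 == l1 and (s1, s2) in rel
                      for (l2, s2) in trans.get(b, set()))
                  for (l1, s1) in trans.get(a, set()))
        bwd = all(any(l1 == l2 and (s1, s2) in rel
                      for (l1, s1) in trans.get(a, set()))
                  for (l2, s2) in trans.get(b, set()))
        return fwd and bwd

    changed = True
    while changed:                     # refine until a fixpoint
        changed = False
        for pair in list(rel):
            if pair in rel and not matched(*pair):
                rel.discard(pair)
                changed = True
    return (p, q) in rel
```

For instance, two states that each loop forever on the same label are bisimilar, while a deadlocked state is distinguished from both.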
Maria Grazia Buscemi
m.buscemi@imtlucca.it
Ugo Montanari
2011-03-02T11:04:47Z
2011-07-11T14:33:42Z
http://eprints.imtlucca.it/id/eprint/118
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/118
2011-03-02T11:04:47Z
Abstract Processes in Orchestration Languages
Orchestrators are descriptions at the implementation level and may contain sensitive information that should be kept private. Consequently, orchestration languages come equipped with a notion of abstract processes, which enable interaction among parties while hiding private information. An interesting question is whether an abstract process accurately describes the behavior of a concrete process, so as to ensure that some particular property is preserved when composing services. In this paper we focus on compliance, i.e., the correct interaction of two orchestrators, and we introduce two definitions of abstraction: one in terms of traces, called trace-based abstraction, and the other as a generalization of symbolic bisimulation, called simulation-based abstraction. We show that simulation-based abstraction is strictly more refined than trace-based abstraction and that simulation-based abstraction behaves well with respect to compliance.
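On finite acyclic transition systems, trace-based abstraction boils down to trace-set inclusion. A minimal sketch follows, with invented orchestrator states and action labels; the finer simulation-based abstraction is not captured here.

```python
def traces(trans, state):
    """All (prefixes of) traces from `state`; `trans` maps a state to a
    list of (label, next_state) pairs. Assumes the system is acyclic."""
    result = {()}
    for label, nxt in trans.get(state, []):
        result |= {(label,) + t for t in traces(trans, nxt)}
    return result

concrete = {'c0': [('login', 'c1')], 'c1': [('pay', 'c2')]}
abstract = {'a0': [('login', 'a1')], 'a1': [('pay', 'a2'), ('cancel', 'a3')]}

# The abstract process admits every trace of the concrete one.
is_abstraction = traces(concrete, 'c0') <= traces(abstract, 'a0')
```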
Maria Grazia Buscemi
m.buscemi@imtlucca.it
Hernán C. Melgratti
2011-03-01T16:14:40Z
2011-07-11T14:33:42Z
http://eprints.imtlucca.it/id/eprint/117
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/117
2011-03-01T16:14:40Z
Toward a Game-Theoretic Model of Grid Systems
The Computational Grid is a promising platform that provides a vast range of heterogeneous resources for high performance computing. To take full advantage of Grid systems, efficient and effective resource management and Grid job scheduling are key requirements. In particular, conflicts may arise in resource management and job scheduling, as Grid resources are usually owned by different organizations with different goals. In this paper, we study the job scheduling problem in the Computational Grid by analyzing it with game-theoretic approaches. We consider a hierarchical job scheduling model, formulated as a repeated non-cooperative game among Grid sites, which may have selfish concerns. We exploit the concept of Nash equilibrium as a stable solution for our game, which is ultimately convenient for every player.
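A flavour of the equilibrium argument can be given with best-response dynamics in a toy load-balancing game: each job is owned by a selfish player who keeps migrating it to the cheapest site until no move pays off. This is a generic congestion-game sketch under invented job weights, not the hierarchical repeated game studied in the paper.

```python
def best_response_equilibrium(jobs, n_sites):
    """jobs: list of job weights. Each job's owner selfishly moves it to
    the site minimising the resulting load; iteration stops at a pure
    Nash equilibrium (one exists: this is a potential game)."""
    assign = [0] * len(jobs)           # start with every job on site 0

    def load(site):
        return sum(w for w, s in zip(jobs, assign) if s == site)

    changed = True
    while changed:
        changed = False
        for i, w in enumerate(jobs):
            cur = assign[i]
            best = min(range(n_sites),
                       key=lambda s: load(s) + (w if s != cur else 0))
            if best != cur and load(best) + w < load(cur):
                assign[i] = best       # a profitable unilateral deviation
                changed = True
    return assign
```

At the returned assignment, no single job can lower its own site's load by moving — the defining property of a Nash equilibrium.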
Maria Grazia Buscemi
m.buscemi@imtlucca.it
Ugo Montanari
Sonia Taneja
2011-03-01T16:05:39Z
2011-07-11T14:33:42Z
http://eprints.imtlucca.it/id/eprint/116
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/116
2011-03-01T16:05:39Z
Adaptive Fuzzy-valued Service Selection
Service composition concerns both the integration of heterogeneous distributed applications and the dynamic selection of services. QoS-aware selection enables a service requester with certain QoS requirements to classify services according to their QoS guarantees. In this paper we present a method that allows for a fuzzy-valued description of QoS parameters. Fuzzy sets are suited to specify both the QoS preferences raised by a service requester, such as 'response time must be as low as possible and cannot be more than 1000 ms', and the approximate estimates a provider can make of the QoS capabilities of its services, like 'availability is roughly between 95% and 99%'. We propose a matchmaking procedure based on a fuzzy-valued similarity measure that, given the specifications of the QoS parameters of the requester and the providers, selects the most appropriate service among several functionally-equivalent ones. We also devise a method for the dynamic update of service offers by means of runtime monitoring of the actual QoS performance.
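The matchmaking idea can be sketched with triangular membership functions and a grid-sampled intersection height as the similarity measure. Both the measure and the parameter values below are illustrative stand-ins, not the paper's actual definitions.

```python
def triangular(a, b, c):
    """Membership function of a triangular fuzzy number peaking at b."""
    def mu(x):
        if a < x <= b:
            return (x - a) / (b - a)
        if b < x < c:
            return (c - x) / (c - b)
        return 1.0 if x == b else 0.0
    return mu

def similarity(mu1, mu2, lo, hi, steps=1000):
    """Height of the intersection of two fuzzy sets, sampled on a grid."""
    return max(min(mu1(lo + (hi - lo) * k / steps),
                   mu2(lo + (hi - lo) * k / steps))
               for k in range(steps + 1))

# Requester: response time as low as possible, at most 1000 ms.
want = triangular(0, 0, 1000)
# Providers' rough estimates of their own response times.
fast = triangular(100, 300, 500)
slow = triangular(900, 1100, 1300)
best = max([fast, slow], key=lambda mu: similarity(want, mu, 0, 1500))
```

The provider whose fuzzy estimate overlaps the requester's preference most strongly is selected — here, the faster one.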
Davide Bacciu
Maria Grazia Buscemi
m.buscemi@imtlucca.it
Lusine Mkrtchyan
2011-03-01T11:23:40Z
2011-07-11T14:33:42Z
http://eprints.imtlucca.it/id/eprint/114
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/114
2011-03-01T11:23:40Z
QoS Negotiation in Service Composition
Service composition in Service Oriented Computing concerns not only the integration of heterogeneous distributed applications but also the dynamic selection of services. Quality of Service (QoS) plays a key role in service composition, as services providing the same functionalities can be differentiated according to their QoS guarantees. At subscription time, a service requester and a provider may sign a contract recording the QoS of the supplied service. The cc-pi calculus has been introduced as a constraint-based model of QoS contracts. In this work we propose a variant of the cc-pi calculus in which the alternatives in a choice, rather than being selected non-deterministically, have a dynamic priority. Basically, a guard c_j:π_j in a choice is enabled if the constraint c_j is entailed by the store of constraints and the prefix π_j can be consumed. Moreover, the j-th branch can be selected not only if the corresponding guard c_j:π_j is enabled but also if c_j is weaker than the constraints c_i of the other enabled alternatives. We prove that our choice operator is more general than a choice operator with static priority. Finally, we exploit some examples to show that our prioritised calculus allows arbitrarily complex QoS negotiations and that a static form of priority is strictly less expressive than ours.
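Modelling constraints as sets of required atomic tokens, with both entailment and "weaker than" read as set inclusion, the branch-selection rule can be sketched as below. This is a deliberately crude stand-in for the named-constraint semantics, with invented guard contents, and it ignores prefix consumption.

```python
def select_branch(store, branches):
    """branches: list of (guard, action) pairs, guards being sets of
    atomic tokens. A guard is enabled when the store entails it
    (guard <= store); among enabled guards, only the weakest
    (inclusion-minimal) ones are selectable."""
    enabled = [i for i, (c, _) in enumerate(branches) if c <= store]
    return [i for i in enabled
            if not any(branches[j][0] < branches[i][0] for j in enabled)]

store = {'a', 'b', 'c'}
branches = [({'a', 'b'}, 'p1'),   # enabled, but not the weakest
            ({'a'}, 'p2'),        # enabled and weakest: selectable
            ({'d'}, 'p3')]        # not entailed by the store
```

`select_branch(store, branches)` returns `[1]`: the second alternative wins because its guard is strictly weaker than the first's.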
Maria Grazia Buscemi
m.buscemi@imtlucca.it
Ugo Montanari
2011-03-01T11:14:20Z
2011-07-11T14:33:43Z
http://eprints.imtlucca.it/id/eprint/134
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/134
2011-03-01T11:14:20Z
High-Level Petri Nets as Type Theories in the Join Calculus
We study the expressiveness of the join calculus by comparison with (generalised, coloured) Petri nets and using tools from type theory. More precisely, we consider four classes of nets of increasing expressiveness, ∏_i, introduce a hierarchy of type systems of decreasing strictness, Δ_i, i = 0, ..., 3, and prove that a join process is typeable according to Δ_i if and only if it is (strictly equivalent to) a net of class ∏_i. In detail, ∏_0 and ∏_1 contain, respectively, usual place/transition and coloured Petri nets, while ∏_2 and ∏_3 propose two natural notions of high-level nets accounting for dynamic reconfiguration and process creation, called reconfigurable and dynamic Petri nets, respectively.
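The base class ∏_0 — plain place/transition nets — has a compact multiset-rewriting reading, sketched below with the standard firing rule. The typed join-calculus correspondence itself is not modelled, and the place names are invented.

```python
from collections import Counter

def fire(marking, pre, post):
    """Fire a transition with preset `pre` and postset `post` under
    `marking` (all multisets over places), or return None if the
    transition is not enabled (preset not covered by the marking)."""
    if any(marking[p] < n for p, n in pre.items()):
        return None
    out = Counter(marking)
    out.subtract(pre)                  # consume the preset tokens
    out.update(post)                   # produce the postset tokens
    return +out                        # drop zero-count places

m0 = Counter({'buffer': 2, 'idle': 1})
m1 = fire(m0, {'buffer': 1, 'idle': 1}, {'busy': 1})
```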
Maria Grazia Buscemi
m.buscemi@imtlucca.it
Vladimiro Sassone
2011-03-01T10:58:52Z
2011-07-11T14:33:43Z
http://eprints.imtlucca.it/id/eprint/133
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/133
2011-03-01T10:58:52Z
Experimenting with STA, a Tool for Automatic Analysis of Security Protocols
We present STA (Symbolic Trace Analyzer), a tool for the analysis of security protocols. STA relies on symbolic techniques that avoid explicit construction of the whole, possibly infinite, state-space of protocols. This results in accurate protocol modeling, increased efficiency and more direct formalization, when compared to finite-state techniques. We illustrate the use of STA by analyzing the well-known asymmetric Needham-Schroeder protocol. We discuss the results of this analysis and contrast them with previous work based on finite-state model checking.
Michele Boreale
Maria Grazia Buscemi
m.buscemi@imtlucca.it
2011-03-01T10:35:11Z
2011-07-11T14:33:43Z
http://eprints.imtlucca.it/id/eprint/132
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/132
2011-03-01T10:35:11Z
A Framework for the Analysis of Security Protocols
Properties of security protocols such as authentication and secrecy are often verified by explicitly generating an operational model of the protocol and then searching for insecure states. However, message exchange between the intruder and the honest participants induces a form of state explosion that makes the model infinite in principle. Building on previous work on symbolic semantics, we propose a general framework for the automatic analysis of security protocols that make use of a variety of crypto-functions. We start from a base language akin to the spi-calculus, equipped with a set of generic cryptographic primitives. We propose a symbolic operational semantics that relies on unification and provides finite and effective protocol models. Next, we give a method to carry out trace analysis directly on the symbolic model. Under certain conditions on the given cryptographic primitives, our method is proven complete for the considered class of properties.
Michele Boreale
Maria Grazia Buscemi
m.buscemi@imtlucca.it
2011-03-01T10:28:27Z
2011-07-11T14:33:43Z
http://eprints.imtlucca.it/id/eprint/131
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/131
2011-03-01T10:28:27Z
A First Order Coalgebraic Model of pi-Calculus Early Observational Equivalence
In this paper, we propose a compositional coalgebraic semantics of the π-calculus based on a novel approach for lifting calculi with structural axioms to coalgebraic models. We equip the transition system of the calculus with permutations, parallel composition and restriction operations, thus obtaining a bialgebra. No prefix operation is introduced, relying instead on a clause format defining the transitions of recursively defined processes. The unique morphism to the final bialgebra induces a bisimilarity relation which coincides with observational equivalence and which is a congruence with respect to the operations. The permutation algebra is enriched with a name extrusion operator δ à la De Bruijn, which shifts any name to its successor and generates a new name in the first variable x_0. As a consequence, in the axioms and in the SOS rules there is no need to refer to the support, i.e., the set of significant names, and, thus, the model turns out to be first order.
Maria Grazia Buscemi
m.buscemi@imtlucca.it
Ugo Montanari
2011-03-01T10:03:05Z
2011-07-11T14:33:43Z
http://eprints.imtlucca.it/id/eprint/130
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/130
2011-03-01T10:03:05Z
Symbolic Analysis of Crypto-Protocols Based on Modular Exponentiation
Automatic methods developed so far for the analysis of security protocols only model a limited set of cryptographic primitives (often, only encryption and concatenation) and abstract away from low-level features of cryptographic algorithms. This paper is an attempt towards closing this gap. We propose a symbolic technique and a decision method for the analysis of protocols based on modular exponentiation, such as Diffie-Hellman key exchange. We introduce a protocol description language along with its semantics. Then, we propose a notion of symbolic execution and, based on it, a verification method. We prove that the method is sound and complete with respect to the language semantics.
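Diffie-Hellman key exchange, the motivating example, fits in a few lines of Python using built-in modular exponentiation. The tiny textbook parameters below are for illustration only and offer no security.

```python
import secrets

def dh_keypair(p, g):
    """Pick a private exponent x and return (x, public = g^x mod p)."""
    x = secrets.randbelow(p - 2) + 1
    return x, pow(g, x, p)

p, g = 23, 5                       # toy group parameters (insecure)
a_priv, a_pub = dh_keypair(p, g)   # Alice's keys
b_priv, b_pub = dh_keypair(p, g)   # Bob's keys
shared_a = pow(b_pub, a_priv, p)   # Alice computes (g^b)^a mod p
shared_b = pow(a_pub, b_priv, p)   # Bob computes   (g^a)^b mod p
assert shared_a == shared_b        # both sides derive the same key
```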
Michele Boreale
Maria Grazia Buscemi
m.buscemi@imtlucca.it
2011-03-01T09:55:50Z
2011-07-11T14:33:43Z
http://eprints.imtlucca.it/id/eprint/129
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/129
2011-03-01T09:55:50Z
D-Fusion: A Distinctive Fusion Calculus
We study the relative expressive power of Fusion and pi-calculus. Fusion is commonly regarded as a generalisation of pi-calculus. Actually, we prove that there is no uniform fully abstract embedding of pi-calculus into Fusion. This fact motivates the introduction of a new calculus, D-Fusion, with two binders, λ and ν. We show that D-Fusion is strictly more expressive than both pi-calculus and Fusion. The expressiveness gap is further clarified by the existence of a fully abstract encoding of mixed guarded choice into the choice-free fragment of D-Fusion.
Michele Boreale
Maria Grazia Buscemi
m.buscemi@imtlucca.it
Ugo Montanari
2011-03-01T09:27:17Z
2011-07-11T14:33:43Z
http://eprints.imtlucca.it/id/eprint/128
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/128
2011-03-01T09:27:17Z
A Method for Symbolic Analysis of Security Protocols
In security protocols, message exchange between the intruder and honest participants induces a form of state explosion which makes protocol models infinite. We propose a general method for automatic analysis of security protocols based on the notion of frame, essentially a rewrite system plus a set of distinguished terms called messages. Frames are intended to model generic crypto-systems. Based on frames, we introduce a process language akin to Abadi and Fournet's applied pi. For this language, we define a symbolic operational semantics that relies on unification and provides finite and effective protocol models. Next, we give a method to carry out trace analysis directly on the symbolic model. We spell out a regularity condition on the underlying frame, which guarantees completeness of our method for the considered class of properties, including secrecy and various forms of authentication. We show how to instantiate our method to some of the most common crypto-systems, including shared- and public-key encryption, hashing and Diffie–Hellman key exchange.
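The unification step underlying the symbolic semantics can be sketched with textbook first-order unification — a generic algorithm, not the paper's frame-specific machinery. By the convention used here, capitalised strings are variables, lowercase strings are atoms, and tuples are compound terms.

```python
def unify(t1, t2, subst=None):
    """Most general unifier of t1 and t2, or None if none exists."""
    subst = dict(subst or {})

    def is_var(t):
        return isinstance(t, str) and t[:1].isupper()

    def walk(t):                       # chase variable bindings
        while is_var(t) and t in subst:
            t = subst[t]
        return t

    def occurs(v, t):
        t = walk(t)
        if t == v:
            return True
        return isinstance(t, tuple) and any(occurs(v, u) for u in t)

    def uni(a, b):
        a, b = walk(a), walk(b)
        if a == b:
            return True
        for x, y in ((a, b), (b, a)):
            if is_var(x):
                if occurs(x, y):       # occurs check: no cyclic terms
                    return False
                subst[x] = y
                return True
        if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
            return all(uni(u, v) for u, v in zip(a, b))
        return False

    return subst if uni(t1, t2) else None
```

For example, the symbolic pattern `('enc', 'X', 'k')` unifies with a concrete message `('enc', 'm', 'k')`, binding the variable X to m.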
Michele Boreale
Maria Grazia Buscemi
m.buscemi@imtlucca.it
2011-03-01T08:48:14Z
2011-07-11T14:33:43Z
http://eprints.imtlucca.it/id/eprint/127
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/127
2011-03-01T08:48:14Z
A General Name Binding Mechanism
We study fusion and binding mechanisms in name passing process calculi. To this purpose, we introduce the U-Calculus, a process calculus with no I/O polarities and a unique form of binding. The latter can be used both to control the scope of fusions and to handle new name generation. This is achieved by means of a simple form of typing: each bound name x is annotated with a set of exceptions, that is, names that cannot be fused to x. The new calculus is proven to be more expressive than the pi-calculus and the Fusion calculus separately. In the U-Calculus, the syntactic nesting of name binders has a semantic meaning, which cannot be overcome by the ordering of name extrusions at runtime. Thanks to this mixture of static and dynamic ordering of names, the U-Calculus admits a form of labelled bisimulation which is a congruence. This property yields a substantial improvement with respect to previous proposals by the same authors aimed at unifying the above two languages. The additional expressiveness of the U-Calculus is also explored by providing a uniform encoding of mixed guarded choice into the choice-free sub-calculus.
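The exception-set annotation admits a one-line reading: a fusion of x and y is legal only when neither name occurs in the other's exception set. A hypothetical Python sketch with invented names:

```python
def can_fuse(x, y, exceptions):
    """exceptions: dict mapping a bound name to the set of names it may
    not be fused with (its annotation)."""
    return (y not in exceptions.get(x, set())
            and x not in exceptions.get(y, set()))

# x is bound with exception set {z}: fusing x with z is forbidden,
# while fusing x with any other name remains allowed.
exceptions = {'x': {'z'}}
```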
Michele Boreale
Maria Grazia Buscemi
m.buscemi@imtlucca.it
Ugo Montanari