IMT Institutional Repository: no conditions; results ordered by Date Deposited.
2022-05-21T02:57:17Z
http://eprints.imtlucca.it/
2018-03-12T11:01:45Z
http://eprints.imtlucca.it/id/eprint/4017
Regulation of Differential Drive Robots using Continuous Time MPC without Stabilizing Constraints or Costs
In this paper, model predictive control (MPC) of differential drive robots is considered. We solve the set point stabilization problem without incorporating stabilizing constraints and/or costs in the MPC scheme. In particular, we extend recent results obtained in a discrete time setting to the continuous time domain. To this end, so-called swaps and replacements are introduced in order to validate a growth condition on the value function and, thus, to rigorously prove asymptotic stability of the MPC closed loop for nonholonomic robots.
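As an illustrative aside (not the authors' continuous-time scheme), the flavour of MPC with a purely running cost, i.e. no terminal constraint and no terminal cost, can be sketched for a discrete-time unicycle; the control grid, horizon, weights, and initial state below are arbitrary assumptions:

```python
import itertools, math

def step(state, u, dt=0.1):
    # Unicycle (differential drive) kinematics, explicit Euler
    x, y, th = state
    v, w = u
    return (x + dt * v * math.cos(th), y + dt * v * math.sin(th), th + dt * w)

def stage_cost(state, u):
    # Purely a running cost: no terminal cost or terminal constraint
    x, y, th = state
    return x**2 + y**2 + 0.1 * th**2 + 0.01 * (u[0]**2 + u[1]**2)

CONTROLS = [(v, w) for v in (-1.0, 0.0, 1.0) for w in (-1.0, 0.0, 1.0)]

def mpc_step(state, horizon=3):
    # Exhaustive search over all control sequences of length `horizon`
    best, best_u0 = float("inf"), None
    for seq in itertools.product(CONTROLS, repeat=horizon):
        s, cost = state, 0.0
        for u in seq:
            cost += stage_cost(s, u)
            s = step(s, u)
        if cost < best:
            best, best_u0 = cost, seq[0]
    return best_u0  # apply only the first control (receding horizon)

state = (-1.0, 0.4, 0.0)
for _ in range(40):
    state = step(state, mpc_step(state))
print(math.hypot(state[0], state[1]))  # distance to the set point shrinks
```

Exhaustive enumeration stands in for a proper OCP solver and is only viable for tiny control sets; it merely illustrates the receding-horizon loop without stabilizing ingredients.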
Karl Worthmann
Mohamed W. Mehrez
Mario Zanon
mario.zanon@imtlucca.it
George K.I. Mann
Raymond G. Gosine
Moritz Diehl
2018-03-12T09:25:10Z
http://eprints.imtlucca.it/id/eprint/4014
A compression algorithm for real-time distributed nonlinear MPC
Model Predictive Control (MPC) requires the online solution of an Optimal Control Problem (OCP) at each sampling time. Efficient online algorithms such as the Real-Time Iteration (RTI) scheme have been developed for real-time MPC implementations, even for fast nonlinear dynamic systems. The RTI framework is based on direct Multiple Shooting (MS) for centralized systems. Distributed Multiple Shooting (DMS) is an MS-based OCP discretization strategy for distributed systems. Many fast dynamic systems can be described as connected subsystems, and in order to exploit this structure, a DMS-based RTI scheme has been developed and implemented in the ACADO code generation tool. A novel technique called compression is proposed to reduce the dimensions of the convex subproblem while exploiting the coupling structure. The performance of the presented scheme is illustrated on a nontrivial example from the literature, where a speedup by a factor of 11 in simulation time and a factor of 6 in total computation time is shown over the classical RTI scheme.
Rien Quirynen
Mario Zanon
mario.zanon@imtlucca.it
Attila Kozma
Moritz Diehl
2018-03-12T09:23:35Z
http://eprints.imtlucca.it/id/eprint/4015
Baumgarte stabilisation over the SO(3) rotation group for control
Representations of the SO(3) rotation group are crucial for airborne and aerospace applications. Euler angles are a popular representation in many applications, but they yield models with singular dynamics. This issue is addressed via non-singular representations, which operate in dimensions higher than 3. Unit quaternions and the Direction Cosine Matrix are the best-known non-singular representations, and are favoured in challenging aeronautic and aerospace applications. All non-singular representations yield invariants in the model dynamics, i.e. a set of nonlinear algebraic conditions that must be fulfilled by the model initial conditions and that remain fulfilled over time. However, due to numerical integration errors, these conditions tend to be violated when standard integrators are used, making the model inconsistent with the physical reality. This issue poses some challenges when non-singular representations are deployed in optimal control. In this paper, we propose a simple technique to address the issue for classical integration schemes, formally establish its properties, and illustrate it on the optimal control of a satellite.
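The invariant-drift problem and a Baumgarte-style remedy can be sketched for the unit-norm invariant of quaternions; the feedback gain, the explicit Euler integrator, and the rates below are illustrative assumptions, not the paper's exact scheme:

```python
import math

def qmul(a, b):
    # Hamilton product of two quaternions (w, x, y, z)
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def qdot(q, omega, gamma):
    # Kinematics q' = 1/2 q (x) (0, omega), plus a Baumgarte-style feedback
    # term -gamma*(|q|^2 - 1)*q that makes the unit-norm invariant attractive
    d = qmul(q, (0.0,) + omega)
    c = gamma * (sum(x*x for x in q) - 1.0)
    return tuple(0.5*di - c*qi for di, qi in zip(d, q))

def integrate(gamma, dt=0.01, T=10.0):
    q, omega = (1.0, 0.0, 0.0, 0.0), (0.3, 0.4, 0.5)
    for _ in range(int(T/dt)):  # explicit Euler: cheap but drift-prone
        q = tuple(qi + dt*di for qi, di in zip(q, qdot(q, omega, gamma)))
    return abs(math.sqrt(sum(x*x for x in q)) - 1.0)

drift_plain = integrate(gamma=0.0)   # invariant violated over time
drift_stab = integrate(gamma=10.0)   # violation stays small
print(drift_plain, drift_stab)
```

The stabilizing term makes the unit-norm manifold attractive, so the integration error is corrected along the way instead of accumulating.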
Sebastien Gros
Mario Zanon
mario.zanon@imtlucca.it
Moritz Diehl
2018-03-12T09:20:18Z
http://eprints.imtlucca.it/id/eprint/4013
Estimation of uncertain ARX models with ellipsoidal parameter variability
Adeleh Mohammadi
Moritz Diehl
Mario Zanon
mario.zanon@imtlucca.it
2018-03-12T08:57:17Z
http://eprints.imtlucca.it/id/eprint/4047
Efficient Nonlinear Model Predictive Control Formulations for Economic Objectives with Aerospace and Automotive Applications
This thesis is concerned with optimal control techniques for optimal trajectory planning and real-time control and estimation. The framework of optimal control is a powerful tool which enjoys increasing popularity due to its applicability to a wide class of problems and its ability to deliver solutions to very complicated problems which cannot be intuitively solved.
The downside of optimal control is the computational burden required to compute the optimal solution. Due to recent algorithmic developments and increases in computational power, this burden has been significantly reduced over the last decades. In order to guarantee effectiveness and reliability of the solver, three main components are necessary: fast and robust algorithms, a good problem formulation, and a mathematical model tailored to optimisation. Indeed, both the model and the optimal control problem can usually be formulated in many different ways, some of which are better suited for optimisation. In this thesis we are concerned with all three components, with a focus on the last two.
Concerning the problem formulation, we propose practical approaches for formulating optimal control, MPC and MHE problems in an optimisation-friendly fashion. Moreover, we analyse the stability properties of various MPC formulations, with a focus on so-called economic MPC, for which the stability theory is still developing.
On the algorithmic level, we review the literature on optimisation and optimal control, and we prove that it is possible to tune tracking MPC formulations so as to locally obtain the same behaviour as economic MPC. The main advantages of tuned tracking MPC over economic MPC are that closed-loop stability is easier to guarantee and that efficient real-time algorithms remain applicable.
On the modelling side, we propose an approach for deriving models of reduced complexity and reduced nonlinearity for multibody mechanical systems. The use of nonminimal coordinates and DAE models enlarges the range of modelling possibilities and allows the control engineer to derive models which are better suited for optimisation. In order to provide an easy framework for the model derivation, we extend the Euler-Lagrange approach and we demonstrate how to implement the proposed approach in practice.
In order to demonstrate the effectiveness of the proposed techniques, we deploy them for two applications: tethered airplanes and autonomous vehicles. Both examples are characterised by fast nonlinear constrained dynamics for which simple controllers cannot be deployed.
Tethered airplanes are of particular interest because they are an emerging technology for wind energy production. In this thesis, we use optimal control to design trajectories which extract maximum energy from the airmass and compare single and dual-airfoil configurations. We moreover demonstrate the effectiveness of MPC and MHE for controlling the system in real time and apply the new tuning procedure for tracking MPC to show its ability to locally approximate economic MPC.
Mario Zanon
mario.zanon@imtlucca.it
2017-09-28T15:29:25Z
http://eprints.imtlucca.it/id/eprint/3813
A two-scale constitutive parameters identification procedure for elasto-plastic fracture
Constitutive parameters identification for elasto-plastic fracture is a complex problem due to the interplay between two forms of material nonlinearity, viz. plasticity and cohesive fracture. In the present study we examine this problem in relation to copper specimens coated with silver, used in photovoltaic modules as electrical conductors. Uniaxial tensile tests on un-notched and notched specimens are performed with a tensile stage inside a scanning electron microscope, monitoring crack growth for each imposed far-field displacement. Parameter identification is then performed by considering an elasto-plastic constitutive relation with isotropic hardening for the continuum and a polynomial cohesive zone model (CZM) with two free parameters. For a better numerical-experimental fit, a four-parameter CZM should be used to independently control the CZM stiffness and the fracture energy. To do so effectively, a constrained optimization procedure with a two-scale objective function is outlined.
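The identification step can be conveyed with a hedged toy: a two-parameter polynomial traction-separation law (a Tvergaard-style cubic, an assumption, not the paper's actual model) fitted to synthetic data by exhaustive search, standing in for the constrained optimization procedure:

```python
# Hedged toy: identify the two free parameters of a polynomial cohesive
# zone model (peak traction t_max, critical opening d_c) from synthetic
# traction-separation data. The law and all numbers are illustrative.
def traction(d, t_max, d_c):
    if d >= d_c:
        return 0.0          # fully debonded beyond the critical opening
    r = d / d_c
    return (27.0 / 4.0) * t_max * r * (1.0 - r) ** 2

# Synthetic "measurements" generated with true parameters (2.0, 1.0)
openings = [0.05 * i for i in range(1, 20)]
data = [traction(d, 2.0, 1.0) for d in openings]

def sse(t_max, d_c):
    # Sum-of-squares misfit between the candidate law and the data
    return sum((traction(d, t_max, d_c) - y) ** 2 for d, y in zip(openings, data))

# Identification by exhaustive grid search over both parameters
grid = [0.1 * i for i in range(5, 31)]
best = min(((t, c) for t in grid for c in grid), key=lambda p: sse(*p))
print(best)  # recovers (2.0, 1.0), since the truth lies on the grid
```

A real identification would use a gradient-based constrained solver and noisy experimental data; the grid search only illustrates the misfit-minimization idea.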
Valerio Carollo
valerio.carollo@imtlucca.it
Claudia Borri
claudia.borri@imtlucca.it
Marco Paggi
marco.paggi@imtlucca.it
2017-09-18T12:42:34Z
http://eprints.imtlucca.it/id/eprint/3790
A lumped mass beam model for the wave propagation in anti-tetrachiral periodic lattices
The engineered class of periodic anti-tetrachiral materials is mainly characterized by the unusual macroscopic property of a negative Poisson's ratio. The auxetic behavior of the material depends on the geometric and elastic features of the microstructure. In particular, the material symmetries of the periodic cell govern the quadratic or orthotropic symmetry of the first-order elastic tensor (i.e. auxetic quadratic or auxetic orthotropy). Under the assumption of uniform mass density and elastic properties, one or the other case can be realized by a square or rectangular microstructure, respectively. A beam lattice model with lumped masses is employed to analyse the effects of different, usually small-valued, geometric and elastic parameters on the high- and low-frequency dispersion curves and band gaps characterizing the free wave propagation.
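As a much simpler stand-in for such lumped-mass dispersion analysis (a textbook 1D diatomic chain, not the anti-tetrachiral lattice itself; stiffness and masses are illustrative), the acoustic and optical branches and the band gap between them can be computed in closed form:

```python
import math

def branches(k, K=1.0, m1=1.0, m2=2.0, a=1.0):
    # Dispersion relation of a 1D diatomic lumped-mass chain:
    # omega^2 = K*s -/+ K*sqrt(s^2 - 4 sin^2(k a/2)/(m1 m2)), s = 1/m1 + 1/m2
    s = 1.0/m1 + 1.0/m2
    r = math.sqrt(s*s - 4.0*math.sin(k*a/2.0)**2/(m1*m2))
    return math.sqrt(K*(s - r)), math.sqrt(K*(s + r))  # acoustic, optical

ks = [math.pi * i / 100.0 for i in range(101)]   # Brillouin zone [0, pi/a]
acoustic = [branches(k)[0] for k in ks]
optical = [branches(k)[1] for k in ks]
band_gap = min(optical) - max(acoustic)
print(band_gap)
```

A positive gap appears whenever the two masses differ, the 1D counterpart of the band gaps studied for the lattice.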
Andrea Bacigalupo
andrea.bacigalupo@imtlucca.it
Marco Lepidi
2017-09-18T09:41:05Z
http://eprints.imtlucca.it/id/eprint/3785
On the Statics of the Dome of the Basilica of S. Maria Assunta in Carignano, Genoa
The paper deals with the dome of the Basilica of S. Maria Assunta in Carignano, Genoa, designed by Galeazzo Alessi and built in the sixteenth century, whose meridian cracking, rather common in masonry domes, calls for an assessment of the dome. In order to set out a general procedure for the assessment of such structures, limit analysis approaches are here discussed and compared. On the basis of classic limit analysis, local (dome only) and global (dome-drum system) collapse mechanisms are considered, accounting for the different behaviour of the various structural elements (lantern, shells of the dome, drum, colonnade). A static approach (safe theorem) and a kinematic approach are applied to the structure by means of equilibrium limit conditions and kinematically admissible collapse mechanisms. Comparisons between the obtained results are carried out so as to: (i) discuss a general approach to the assessment of dome-drum systems based on both numerical tools and standard limit analysis approaches; (ii) provide a first appraisal of the dome.
Andrea Bacigalupo
andrea.bacigalupo@imtlucca.it
Antonio Brencich
Luigi Gambarotta
2017-09-04T14:28:17Z
http://eprints.imtlucca.it/id/eprint/3777
Optimal Varicella immunization programs for both Varicella and Herpes Zoster Control
A main obstacle to the widespread adoption of varicella immunization in Europe has been the fear of a subsequent boom in natural herpes zoster, caused by the decline in the protective effect of natural immunity boosting due to reduced virus circulation. We apply optimal control to simple models of VZV transmission and reactivation to investigate the existence and feasibility of temporal paths of varicella childhood immunization that are optimal in controlling both varicella and zoster. We analyze the optimality system numerically, focusing on the role played by the structure of the cost functional, the relative cost of zoster versus varicella, and the length of the planning horizon. We show that optimal programs exist but will mostly be unfeasible in real public health contexts due to their complex temporal profiles. This complexity is the consequence of the intrinsically antagonistic nature of varicella immunization programs aimed at controlling both varicella and herpes zoster. However, we show that gradually increasing, smooth (and thereby feasible) vaccination schedules can perform far better than routine programs with constant vaccine uptake. Moreover, we show the optimal temporal profiles of feasible immunization programs targeting with priority the mitigation of the post-immunization natural zoster boom.
Monica Betta
monica.betta@imtlucca.it
Marco Laurino
Andrea Pugliese
Giorgio Guzzetta
Alberto Landi
Piero Manfredi
2017-09-04T14:16:28Z
2017-09-04T14:17:39Z
http://eprints.imtlucca.it/id/eprint/3776
Perspectives on optimal control of Varicella and Herpes Zoster by mass routine varicella vaccination: the effects of Immunity Boosting
Monica Betta
monica.betta@imtlucca.it
Marco Laurino
A. Pugliese
Giorgio Guzzetta
Alberto Landi
Piero Manfredi
2017-09-04T13:48:18Z
http://eprints.imtlucca.it/id/eprint/3772
A Classification method for eye movements direction during REM sleep trained on wake electro-oculographic recordings
Rapid eye movements (REMs) are a peculiar and intriguing aspect of REM sleep, even though their physiological function remains unclear. In this work, a new automatic tool was developed, aimed at a complete description of REM activity during the night, both in terms of timing of occurrence and in terms of directional properties. A classification stage, assigning each movement detected during the night to its main direction, was added to our procedure for REM detection and ocular artifact removal. A supervised classifier was constructed, using as training and validation sets EOG data recorded during voluntary saccades of five healthy volunteers. Different classification methods were tested and compared. The additional information about REM directional characteristics provided by the procedure would represent a valuable tool for a deeper investigation into the physiological origin and functional meaning of REMs.
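A hedged toy version of the classification step (a nearest-centroid rule on two synthetic EOG deflection features; the features, labels, and numbers are all illustrative assumptions, not the paper's classifiers or data):

```python
# Toy direction classifier for saccades from two EOG-like features:
# (horizontal deflection, vertical deflection). Training points are
# synthetic stand-ins for the voluntary-saccade recordings.
train = {
    "left":  [(-1.0, 0.1), (-0.8, -0.1), (-1.2, 0.0)],
    "right": [(1.0, 0.0), (0.9, 0.2), (1.1, -0.2)],
    "up":    [(0.1, 1.0), (-0.1, 0.9), (0.0, 1.1)],
    "down":  [(0.0, -1.0), (0.2, -0.9), (-0.2, -1.1)],
}
centroids = {
    label: tuple(sum(c) / len(pts) for c in zip(*pts))
    for label, pts in train.items()
}

def classify(x, y):
    # Nearest centroid in squared Euclidean distance
    return min(centroids,
               key=lambda l: (x - centroids[l][0])**2 + (y - centroids[l][1])**2)

print(classify(-0.9, 0.05), classify(0.1, -1.2))
```

Any supervised classifier could replace the nearest-centroid rule; the point is only the train-then-label pipeline.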
Monica Betta
monica.betta@imtlucca.it
Marco Laurino
Angelo Gemignani
Alberto Landi
D. Menicucci
2017-09-04T13:44:51Z
http://eprints.imtlucca.it/id/eprint/3771
Procoagulant control strategies for the human blood clotting process
This paper compares two drug control strategies for hemophilia A. To emulate blood clotting and the pathological condition of hemophilia, a mathematical model composed of 14 ordinary differential equations is considered. We adopt a variable-structure nonlinear PID approach and a model predictive control approach to control the dosage of the procoagulant factor used in the treatment of hemophiliac patients. The two control actions are sampled for practical application. Finally, we discuss and compare the results of the two control approaches, introducing a suitable control index (eINR).
Marco Laurino
Tommaso Menara
Alessandro Stella
Monica Betta
monica.betta@imtlucca.it
Alberto Landi
2017-05-04T13:38:06Z
http://eprints.imtlucca.it/id/eprint/3691
(edited by) Proceedings of the 8th Interaction and Concurrency Experience, ICE 2015, Grenoble, France, 4-5 June 2015
This volume contains the proceedings of ICE 2015, the 8th Interaction and Concurrency Experience, which was held in Grenoble, France on the 4th and 5th of June 2015 as a satellite event of DisCoTec 2015. The ICE procedure for paper selection allows PC members to interact, anonymously, with authors. During the review phase, each submitted paper is published on a discussion forum with access restricted to the authors and to all the PC members not declaring a conflict of interest. The PC members post comments and questions to which the authors reply. Each paper was reviewed by three PC members, and altogether 9 papers, including 1 short paper, were accepted for publication (the workshop also featured 4 brief announcements which are not part of this volume). We were proud to host three invited talks, by Leslie Lamport (shared with the FRIDA workshop), Joseph Sifakis and Steve Ross-Talbot. The abstracts of the last two talks are included in this volume together with the regular papers.
Sophia Knight
Ivan Lanese
Alberto Lluch Lafuente
Hugo Torres Vieira
hugo.torresvieira@imtlucca.it
2017-05-04T13:35:38Z
http://eprints.imtlucca.it/id/eprint/3690
A Typed Model for Dynamic Authorizations
Security requirements in distributed software systems are inherently dynamic. In the case of authorization policies, resources are meant to be accessed only by authorized parties, but the authorization to access a resource may be dynamically granted/yielded. We describe ongoing work on a model for specifying communication and dynamic authorization handling. We build upon the pi-calculus so as to enrich communication-based systems with authorization specification and delegation; here authorizations regard channel usage, and delegation refers to the act of yielding an authorization to another party. Our model includes: (i) a novel scoping construct for authorization, which allows the specification of authorization boundaries, and (ii) communication primitives for authorizations, which allow authorizations to act on a given channel to be passed around. An authorization error may consist, e.g., in performing an action along a name that is not under an appropriate authorization scope. We introduce a typing discipline that ensures that processes never reduce to authorization errors, even when authorizations are dynamically delegated.
Silvia Ghilezan
Svetlana Jakšić
Jovanka Pantović
Jorge A. Pérez
Hugo Torres Vieira
hugo.torresvieira@imtlucca.it
2017-01-26T14:36:39Z
http://eprints.imtlucca.it/id/eprint/3643
A Simple Effective Heuristic for Embedded Mixed-Integer Quadratic Programming
In this paper we propose a fast optimization algorithm for approximately minimizing convex quadratic functions over the intersection of affine and separable constraints (i.e., the Cartesian product of possibly nonconvex real sets). This problem class contains many NP-hard problems such as mixed-integer quadratic programming. Our heuristic is based on a variation of the alternating direction method of multipliers (ADMM), an algorithm for solving convex optimization problems. We discuss the favorable computational aspects of our algorithm, which allow it to run quickly even on very modest computational platforms such as embedded processors. We give several examples for which an approximate solution should be found very quickly, such as management of a hybrid-electric vehicle drivetrain. Our numerical experiments suggest that our method is very effective in finding a feasible point with small objective value; indeed, we see that in many cases it finds the global solution.
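The core iteration can be sketched on a tiny binary QP (a hedged toy with fixed P, q and penalty parameter; it shows the three ADMM steps, with rounding as the projection onto the nonconvex set, not the authors' full implementation):

```python
# minimize 0.5 x^T P x + q^T x  subject to  x in {0,1}^2
P = [[2.0, 0.5], [0.5, 2.0]]
q = [-3.0, 1.0]
rho = 1.0

def solve2(M, b):
    # Direct solve of a 2x2 linear system M x = b (Cramer's rule)
    det = M[0][0]*M[1][1] - M[0][1]*M[1][0]
    return [(b[0]*M[1][1] - M[0][1]*b[1]) / det,
            (M[0][0]*b[1] - b[0]*M[1][0]) / det]

M = [[P[0][0] + rho, P[0][1]], [P[1][0], P[1][1] + rho]]
x, z, u = [0.0, 0.0], [0.0, 0.0], [0.0, 0.0]
for _ in range(50):
    # x-update: minimize the quadratic plus the augmented penalty (convex)
    rhs = [rho*(z[i] - u[i]) - q[i] for i in range(2)]
    x = solve2(M, rhs)
    # z-update: project x + u onto the nonconvex set {0,1}^2 by rounding
    z = [1.0 if x[i] + u[i] > 0.5 else 0.0 for i in range(2)]
    # dual update
    u = [u[i] + x[i] - z[i] for i in range(2)]

def obj(v):
    return 0.5*sum(v[i]*P[i][j]*v[j] for i in range(2) for j in range(2)) \
           + sum(q[i]*v[i] for i in range(2))

print(z, obj(z))  # on this instance the heuristic lands on the global optimum
```

Only the z-update changes relative to convex ADMM: the projection is onto a nonconvex set, which is exactly why the method becomes a heuristic without global guarantees.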
Reza Takapoui
Nicholas Moehle
Stephen Boyd
Alberto Bemporad
alberto.bemporad@imtlucca.it
2017-01-26T14:29:19Z
http://eprints.imtlucca.it/id/eprint/3642
Solving Mixed-Integer Quadratic Programs via Nonnegative Least Squares
This paper proposes a new algorithm for solving Mixed-Integer Quadratic Programming (MIQP) problems. The algorithm is particularly tailored to solving small-scale MIQPs such as those that arise in embedded hybrid Model Predictive Control (MPC) applications. The approach combines branch and bound (B&B) with nonnegative least squares (NNLS), which are used to solve Quadratic Programming (QP) relaxations. The QP algorithm extends a method recently proposed by the author for solving strictly convex QPs by (i) handling equality and bilateral inequality constraints, (ii) warm starting, and (iii) exploiting easy-to-compute lower bounds on the optimal cost to reduce the number of QP iterations required to solve the relaxed problems. The proposed MIQP algorithm has a speed of execution comparable to state-of-the-art commercial MIQP solvers and is relatively simple to code, as it requires only basic arithmetic operations to solve least-squares problems.
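The B&B skeleton can be conveyed on a toy binary QP (hedged: coordinate descent on box relaxations stands in for the paper's NNLS-based QP solver, and the instance is arbitrary):

```python
# minimize 0.5 x^T P x + q^T x over x in {0,1}^3, by branch and bound.
# Relaxations are box-constrained QPs, solved here by coordinate descent
# (a stand-in for the NNLS-based QP solver of the paper).
P = [[3.0, 0.5, 0.2], [0.5, 2.0, 0.3], [0.2, 0.3, 2.5]]
q = [-2.0, 1.0, -1.5]
n = 3

def obj(x):
    return 0.5*sum(x[i]*P[i][j]*x[j] for i in range(n) for j in range(n)) \
           + sum(q[i]*x[i] for i in range(n))

def relax(lo, hi, iters=200):
    # Coordinate descent on the box [lo, hi]: exact per-coordinate minimizer,
    # iterated until (numerically) converged for this tiny strongly convex QP
    x = [(lo[i] + hi[i]) / 2.0 for i in range(n)]
    for _ in range(iters):
        for i in range(n):
            g = q[i] + sum(P[i][j]*x[j] for j in range(n) if j != i)
            x[i] = min(hi[i], max(lo[i], -g / P[i][i]))
    return x

best = {"x": None, "f": float("inf")}

def bnb(lo, hi):
    lb = obj(relax(lo, hi))           # relaxation value bounds the subtree
    if lb >= best["f"] - 1e-9:
        return                        # prune: cannot improve the incumbent
    free = [i for i in range(n) if lo[i] < hi[i]]
    if not free:                      # leaf: all variables fixed
        best["x"], best["f"] = lo[:], lb
        return
    i = free[0]                       # branch on the first free variable
    for v in (0.0, 1.0):
        l, h = lo[:], hi[:]
        l[i] = h[i] = v
        bnb(l, h)

bnb([0.0]*n, [1.0]*n)
print(best["x"], best["f"])
```

Warm starting and cheap lower bounds, the paper's key refinements, would slot into `relax` and the pruning test respectively.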
Alberto Bemporad
alberto.bemporad@imtlucca.it
2016-10-06T16:05:16Z
http://eprints.imtlucca.it/id/eprint/3580
Power Trading Coordination in Smart Grids Using Dynamic Learning and Coalitional Game Theory
In traditional power distribution models, consumers acquire power from the central distribution unit, while “micro-grids” in a smart power grid can also trade power between themselves. In this paper, we investigate the problem of power trading coordination among such micro-grids. Each micro-grid has a surplus or a deficit quantity of power to transfer or to acquire, respectively. An algorithm based on coalitional game theory is devised to form a set of coalitions. The coordination among micro-grids determines the amount of power to transfer over each transmission line so that the supplier micro-grids and the central distribution unit serve all micro-grids in demand, with the purpose of minimizing the amount of power dissipated during generation and transfer. We propose two dynamic learning processes: one to form a coalition structure and one to provide the formed coalitions with the highest power saving. Numerical results show that the power dissipated in the proposed cooperative smart grid is only 10% of that in traditional power distribution networks.
Farshad Shams
Mirco Tribastone
mirco.tribastone@imtlucca.it
2016-10-06T15:50:02Z
http://eprints.imtlucca.it/id/eprint/3579
A unified framework for differential aggregations in Markovian process algebra
Fluid semantics for Markovian process algebra have recently emerged as a computationally attractive approximate way of reasoning about the behaviour of stochastic models of large-scale systems. This interpretation is particularly convenient when sequential components characterised by small local state spaces are present in many independent copies. While the traditional Markovian interpretation causes state-space explosion, fluid semantics is independent of the multiplicities of the sequential components present in the model, just associating a single ordinary differential equation (ODE) with each local state. In this paper we analyse the case of a process algebra model inducing a large ODE system. Previous work, known as exact fluid lumpability, requires two symmetries: ODE aggregation is possible for processes that i) are isomorphic and that ii) are present with the same multiplicities. We first relax the latter requirement by introducing the notion of ordinary fluid lumpability, which yields an ODE system where the sum of the aggregated variables is preserved exactly. Then, we consider approximate variants of both notions of lumpability which make nearby processes symmetric after a perturbation of their parameters. We prove that small perturbations yield nearby differential trajectories. We carry out our study in the context of a process algebra that unifies two synchronisation semantics that are well studied in the literature, useful for the modelling of computer systems and chemical networks, respectively. In both cases, we provide numerical evidence which shows that, in practice, many heterogeneous processes can be aggregated with negligible errors.
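The flavour of sum-preserving ODE aggregation can be conveyed with a hedged toy: three isomorphic components (same rates, different initial values, all numbers illustrative) whose variables are lumped into their sum, which evolves autonomously as a single ODE:

```python
# Full model: one ODE variable per component, identical rates a, b.
# Aggregated model: a single ODE for the sum s = x1 + x2 + x3.
a, b, dt, steps = 1.0, 0.5, 0.001, 5000

x = [0.1, 0.2, 0.3]            # three isomorphic components
s = sum(x)                     # one aggregated variable
for _ in range(steps):         # explicit Euler on both models
    x = [xi + dt * (-a * xi + b) for xi in x]
    s = s + dt * (-a * s + 3 * b)

print(abs(sum(x) - s))  # the aggregate preserves the sum (up to roundoff)
```

One equation replaces three regardless of how many copies are present, which is precisely the computational appeal of the lumped fluid model.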
Max Tschaikowski
max.tschaikowski@imtlucca.it
Mirco Tribastone
mirco.tribastone@imtlucca.it
2016-10-06T15:36:19Z
http://eprints.imtlucca.it/id/eprint/3578
A Proactive Approach for Runtime Self-adaptation Based on Queueing Network Fluid Analysis
Complex software systems are required to adapt dynamically to changing workloads and scenarios while guaranteeing a set of performance objectives. This is not a trivial task, since run-time variability makes the process of devising the needed resources challenging for software designers. In this context, self-adaptation is a promising technique that works towards the specification of the most suitable system configuration, such that the system behavior is preserved while meeting performance requirements. In this paper we propose a proactive approach based on queueing networks that allows self-adaptation by predicting performance flaws and devising the most suitable allocation of system resources. The queueing network model represents the system behavior and embeds the input parameters (e.g., workload) observed at run-time. We rely on fluid approximation to speed up the analysis of transient dynamics for performance indices. To support our approach we developed a tool that automatically generates simulation and fluid analysis code from a high-level description of the queueing network. An illustrative example is provided to demonstrate the effectiveness of our approach.
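A minimal sketch of such a fluid approximation (a closed two-station network with illustrative rates, not the paper's model or tool): each station's queue length becomes a continuous variable driven by service-rate differences, so transient behaviour comes from integrating ODEs instead of simulating discrete events:

```python
# Fluid approximation of a closed two-station queueing network:
# station i serves at rate mu[i] * min(x[i], srv[i]), where srv[i] is its
# number of servers; jobs cycle between the two stations.
mu = [1.0, 2.0]           # service rates (illustrative values)
srv = [1.0, 1.0]          # one server per station
x = [50.0, 0.0]           # 50 jobs, all initially queued at station 1
dt = 0.01

def rates(x):
    out1 = mu[0] * min(x[0], srv[0])   # throughput of station 1
    out2 = mu[1] * min(x[1], srv[1])   # throughput of station 2
    return [out2 - out1, out1 - out2]  # flow balance at each station

for _ in range(int(200 / dt)):         # integrate to (near) steady state
    f = rates(x)
    x = [x[i] + dt * f[i] for i in range(2)]

print(x)  # the slower station 1 is the bottleneck: almost all jobs queue there
```

The total population is conserved by construction, and the steady state exposes the bottleneck directly, which is the kind of prediction a proactive adaptation policy can act on.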
Emilio Incerto
Mirco Tribastone
mirco.tribastone@imtlucca.it
Catia Trubiani
2016-10-06T15:10:59Z
http://eprints.imtlucca.it/id/eprint/3577
Probabilistic Forecasts of Bike-Sharing Systems for Journey Planning
We study the problem of making forecasts about the future availability of bicycles in stations of a bike-sharing system (BSS). This is relevant in order to make recommendations guaranteeing that the probability that a user will be able to make a journey is sufficiently high. To do this we use probabilistic predictions obtained from a time-inhomogeneous queueing-theoretical model of a BSS. The model is parametrized and successfully validated using historical data from the Vélib' BSS of the City of Paris.
We develop a critique of the standard root-mean-square error (RMSE), commonly adopted in bike-sharing research as an index of prediction accuracy, because it does not account for the stochasticity inherent in the real system. Instead we introduce a new metric based on scoring rules. We evaluate the average score of our model against classical predictors used in the literature and show that these are outperformed by our model for prediction horizons of up to a few hours. We also argue that, in general, measuring the current number of available bikes is only relevant for prediction horizons of up to a few hours.
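The point of the critique can be sketched with a hedged toy (synthetic counts, not Vélib' data; the log score is used as a representative proper scoring rule, not necessarily the paper's metric): two forecasters with the same point forecast are indistinguishable by RMSE, yet a scoring rule separates them by how well they capture the distribution:

```python
import math

obs = [3, 4, 5, 4, 2, 6, 4, 3, 5, 4]   # illustrative bike counts

def poisson_pmf(y, lam):
    return math.exp(-lam) * lam**y / math.factorial(y)

# Two probabilistic forecasters with the SAME point forecast (mean 4):
# A predicts Poisson(4); B predicts a uniform distribution on {0,...,8}.
pmf_a = lambda y: poisson_pmf(y, 4.0)
pmf_b = lambda y: 1.0 / 9.0 if 0 <= y <= 8 else 0.0

def rmse(point, data):
    return math.sqrt(sum((point - y)**2 for y in data) / len(data))

def mean_log_score(pmf, data):   # proper scoring rule; higher is better
    return sum(math.log(pmf(y)) for y in data) / len(data)

print(rmse(4.0, obs))                                    # same for A and B
print(mean_log_score(pmf_a, obs), mean_log_score(pmf_b, obs))  # A wins
```

RMSE sees only the point forecast, so it cannot reward the forecaster whose predictive distribution matches the data's dispersion.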
Nicolas Gast
Guillaume Massonnet
Daniel Reijsbergen
Mirco Tribastone
mirco.tribastone@imtlucca.it
2016-10-06T10:13:39Z
http://eprints.imtlucca.it/id/eprint/3570
Twitlang(er): Interactions Modeling Language (and Interpreter) for Twitter
Online social networks are widespread means to enact interactive collaboration among people by, e.g., planning events, diffusing information, and enabling discussions. Twitter provides one of the most illustrative examples of how people can effectively interact without resorting to traditional communication media. For example, the platform has acted as a unique medium for reliable communication in emergencies and for organising cooperative mass actions. This use of Twitter in a cooperative, possibly critical, setting calls for a more precise awareness of the dynamics regulating message spreading. To this aim, in this paper, we propose Twitlang, a formal language to model interactions among Twitter accounts. The operational semantics associated with the language allows users to clearly and precisely determine the effects of actions performed by Twitter accounts, such as posting, retweeting, replying to or deleting tweets. The language is implemented in the form of a Maude interpreter, Twitlanger, which takes a language term as an input and, automatically or interactively, explores the computations arising from the term. By relying on this interpreter, automatic verification of communication properties of Twitter accounts can be carried out via the analysis tools provided by the Maude framework. We illustrate the benefits of our executable formalisation by means of a few simple, yet typical, examples of Twitter interactions, whose effects are somehow subtle.
Rocco De Nicola
r.denicola@imtlucca.it
Alessandro Maggi
Marinella Petrocchi
Angelo Spognardi
Francesco Tiezzi
2016-10-06T10:09:46Z
http://eprints.imtlucca.it/id/eprint/3569
Replica-Based High-Performance Tuple Space Computing
We present the tuple-based coordination language RepliKlaim, which enriches Klaim with primitives for replica-aware coordination. Our overall goal is to offer suitable solutions to the challenging problems of data distribution and locality in large-scale high-performance computing. In particular, RepliKlaim allows the programmer to specify and coordinate the replication of shared data items and the desired consistency properties. The programmer can hence exploit such flexible mechanisms to adapt data distribution and locality to the needs of the application, so as to improve performance in terms of concurrency and data access. We investigate issues related to replica consistency, provide an operational semantics that guides the implementation of the language, and discuss the main synchronization mechanisms of our prototypical run-time framework. Finally, we provide a performance analysis, which includes scenarios where replica-based specifications and relaxed consistency provide significant performance gains.
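The strong/weak consistency trade-off can be illustrated with a hedged toy replica-aware store (this Python class is an invented stand-in, not RepliKlaim's semantics or API):

```python
# Toy replica-aware tuple store: an item can be replicated at several
# localities; "strong" puts update every replica before returning, while
# "weak" puts update one replica and propagate later (reads may be stale).
class ReplicatedSpace:
    def __init__(self, localities):
        self.store = {loc: {} for loc in localities}
        self.pending = []          # deferred propagations for weak puts

    def put_strong(self, key, value, replicas):
        for loc in replicas:       # all replicas updated atomically
            self.store[loc][key] = value

    def put_weak(self, key, value, replicas):
        first, rest = replicas[0], replicas[1:]
        self.store[first][key] = value
        self.pending.append((key, value, rest))

    def sync(self):                # apply the deferred propagations
        for key, value, rest in self.pending:
            for loc in rest:
                self.store[loc][key] = value
        self.pending = []

    def read(self, loc, key):
        return self.store[loc].get(key)

space = ReplicatedSpace(["n1", "n2"])
space.put_strong("x", 1, ["n1", "n2"])
space.put_weak("y", 2, ["n1", "n2"])
stale = space.read("n2", "y")      # None: weak put not yet propagated
space.sync()
print(stale, space.read("n2", "y"))
```

Weak puts return sooner at the price of transient staleness, the same trade-off the paper's performance analysis quantifies.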
Marina Andrić
Rocco De Nicola
r.denicola@imtlucca.it
Alberto Lluch Lafuente
2016-10-06T10:03:31Z
http://eprints.imtlucca.it/id/eprint/3568
On Integrating Social and Sensor Networks for Emergency Management
The 2010 earthquake in Haiti is often referred to as the turning point that changed the way social media can be used during disasters. The development of strategies, technologies and tools to enhance user collaboration around disasters has become an emerging field, and their integration with appropriate sensor networks presents itself as an effective solution to drive decision making in emergency management.
In this paper, we present a review of existing disaster management systems and their underlying strategies and technologies, and identify the limitations of the tools that implement them. We then propose an architecture for disaster management that integrates the mining of social networks and the use of sensor networks as two complementary technologies to overcome the limitations of the current emergency management tools.
Farshad Shams
Antonio Cerone
Rocco De Nicola
r.denicola@imtlucca.it
2016-10-06T10:00:06Z
2016-10-06T10:00:06Z
http://eprints.imtlucca.it/id/eprint/3567
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3567
2016-10-06T10:00:06Z
Global Protocol Implementations via Attribute-Based Communication
Several type systems have been developed to address the conformance between specifications and implementations, where types are specifications and type-checking ensures the conformance relation. In this paper, we take a different perspective and assume that programming takes place only at the specification level, by using a type language that captures protocols of interaction. Specifications provide the global interaction scheme and lay the basis for an automatic (provably correct) generation of implementations. The latter is obtained by a translation into a rich formalism that relies on attribute-based communication, whose expressiveness permits modeling in a natural way the symmetric link between message recipient and emitter.
Rocco De Nicola
r.denicola@imtlucca.it
Claudio Antares Mezzina
claudio.mezzina@imtlucca.it
Hugo Torres Vieira
hugo.torresvieira@imtlucca.it
2016-10-06T09:53:53Z
2016-10-06T09:53:53Z
http://eprints.imtlucca.it/id/eprint/3566
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3566
2016-10-06T09:53:53Z
A Calculus for Attribute-based Communication
The notion of attribute-based communication seems promising to model and analyse systems with huge numbers of interacting components that dynamically adjust and combine their behaviour to achieve specific goals. A basic process calculus, named AbC, is introduced, whose primitive construct is exactly attribute-based communication, and its impact on the above-mentioned kind of systems is considered. An AbC system consists of a set of parallel components, each of which is equipped with a set of attributes. Communication takes place in a broadcast fashion, and communication links among components are dynamically established by taking into account interdependences determined by predicates over attributes. First, the syntax and the reduction semantics of AbC are presented; then its expressiveness and effectiveness are demonstrated by modelling two scenarios from the realm of TV streaming channels. An example of how well-established process calculi could be encoded into AbC is given by considering the translation into AbC of a prototypical π-calculus process.
Yehia Abd Alrahman
Rocco De Nicola
r.denicola@imtlucca.it
Michele Loreti
Francesco Tiezzi
Roberto Vigo
2016-10-06T09:49:51Z
2016-10-06T09:49:51Z
http://eprints.imtlucca.it/id/eprint/3565
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3565
2016-10-06T09:49:51Z
Domain-specific queries and Web search personalization: some investigations
Major search engines deploy personalized Web results to enhance users' experience, by showing them data supposed to be relevant to their interests. Even if this process may bring benefits to users while browsing, it also raises concerns on the selection of the search results. In particular, users may be unknowingly trapped by search engines in protective information bubbles, called "filter bubbles", which can have the undesired effect of separating users from information that does not fit their preferences. This paper builds on early results on the quantification of personalization over Google search query results. Inspired by previous works, we have carried out experiments consisting of search queries performed by a battery of Google accounts with differently prepared profiles. By matching query results, we quantify the level of personalization according to the topics of the queries and the profiles of the accounts. This work reports initial results and is a first step towards a more extensive investigation to measure Web search personalization.
Van Tien Hoang
Angelo Spognardi
Francesco Tiezzi
Marinella Petrocchi
Rocco De Nicola
r.denicola@imtlucca.it
2016-10-06T09:36:33Z
2016-10-06T09:36:33Z
http://eprints.imtlucca.it/id/eprint/3564
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3564
2016-10-06T09:36:33Z
CARMA: Collective Adaptive Resource-sharing Markovian Agents
In this paper we present CARMA, a language recently defined to support specification and analysis of collective adaptive systems. CARMA is a stochastic process algebra equipped with linguistic constructs specifically developed for modelling and programming systems that can operate in open-ended and unpredictable environments. This class of systems is typically composed of a huge number of interacting agents that dynamically adjust and combine their behaviour to achieve specific goals. A CARMA model, termed a collective, consists of a set of components, each of which exhibits a set of attributes. To model dynamic aggregations, which are sometimes referred to as ensembles, CARMA provides communication primitives that are based on predicates over the exhibited attributes. These predicates are used to select the participants in a communication. Two communication mechanisms are provided in the CARMA language: multicast-based and unicast-based. In this paper, we first introduce the basic principles of CARMA and then we show how our language can be used to support specification with a simple but illustrative example of a socio-technical collective adaptive system.
Luca Bortolussi
Rocco De Nicola
r.denicola@imtlucca.it
Vashti Galpin
Stephen Gilmore
Jane Hillston
Diego Latella
Michele Loreti
Mieke Massink
2016-10-06T09:20:44Z
2016-10-06T09:20:44Z
http://eprints.imtlucca.it/id/eprint/3561
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3561
2016-10-06T09:20:44Z
Replicating Data for Better Performances in X10
Linguistic primitives for replica-aware coordination offer suitable solutions to the challenging problems of data distribution and locality in large-scale high-performance computing. The data replication mechanisms that had previously been designed to extend Klaim with replicated tuples are now used to experiment with X10, a parallel programming language primarily targeting clusters of multi-core processors linked in a large-scale system via high-performance networks. Our approach aims at allowing the programmer to specify and coordinate the replication of shared data items by taking into account the desired consistency properties. The programmer can hence exploit such flexible mechanisms to adapt data distribution and locality to the needs of the application, in order to improve performance in terms of concurrency and data access. We investigate issues related to replica consistency and provide a performance analysis, which includes scenarios where replica-based specifications and relaxed consistency provide significant performance gains.
Marina Andrić
Rocco De Nicola
r.denicola@imtlucca.it
Alberto Lluch Lafuente
2016-10-04T10:15:14Z
2016-10-04T10:15:14Z
http://eprints.imtlucca.it/id/eprint/3548
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3548
2016-10-04T10:15:14Z
Concurrent enhancement of percolation and synchronization in adaptive networks
Co-evolutionary adaptive mechanisms are not only ubiquitous in nature, but also beneficial for the functioning of a variety of systems. We here consider an adaptive network of oscillators with a stochastic, fitness-based, rule of connectivity, and show that it self-organizes from fragmented and incoherent states to connected and synchronized ones. The synchronization and percolation are associated to abrupt transitions, and they are concurrently (and significantly) enhanced as compared to the non-adaptive case. Finally we provide evidence that only partial adaptation is sufficient to determine these enhancements. Our study, therefore, indicates that inclusion of simple adaptive mechanisms can efficiently describe some emergent features of networked systems' collective behaviors, and suggests also self-organized ways to control synchronization and percolation in natural and social systems.
Young-Ho Eom
youngho.eom@imtlucca.it
Stefano Boccaletti
Guido Caldarelli
guido.caldarelli@imtlucca.it
2016-07-13T09:55:44Z
2016-07-13T09:55:44Z
http://eprints.imtlucca.it/id/eprint/3517
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3517
2016-07-13T09:55:44Z
Statistical Analysis of Probabilistic Models of Software Product Lines with Quantitative Constraints
We investigate the suitability of statistical model checking for the analysis of probabilistic models of software product lines with complex quantitative constraints and advanced feature installation options. Such models are specified in the feature-oriented language QFLan, a rich process algebra whose operational behaviour interacts with a store of constraints, neatly separating product configuration from product behaviour. The resulting probabilistic configurations and behaviour converge seamlessly in a semantics based on DTMCs, thus enabling quantitative analyses ranging from the likelihood of certain behaviour to the expected average cost of products. This is supported by a Maude implementation of QFLan, integrated with the SMT solver Z3 and the distributed statistical model checker MultiVeStA. Our approach is illustrated with a bikes product line case study.
M.H. ter Beek
Axel Legay
Alberto Lluch-Lafuente
alberto.lluch@imtlucca.it
Andrea Vandin
andrea.vandin@imtlucca.it
2016-05-26T10:47:34Z
2016-05-26T10:47:34Z
http://eprints.imtlucca.it/id/eprint/3493
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3493
2016-05-26T10:47:34Z
Software Engineering for Collective Autonomic Systems: The ASCENS Approach
Dhaminda B. Abeywickrama
Jacques Combaz
Jaroslav Horký
Andrea Vandin
andrea.vandin@imtlucca.it
Emil Vassev
Jan Kofroň
Alberto Lluch Lafuente
Michele Loreti
Andrea Margheri
Philip Mayer
Giacoma Valentina Monreale
Ugo Montanari
Carlo Pinciroli
Petr Tůma
2016-05-26T10:39:24Z
2016-05-26T10:39:24Z
http://eprints.imtlucca.it/id/eprint/3492
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3492
2016-05-26T10:39:24Z
Quantitative Analysis of Probabilistic Models of Software Product Lines with Statistical Model Checking
We investigate the suitability of statistical model checking techniques for analysing quantitative properties of software product line models with probabilistic aspects. For this purpose, we enrich the feature-oriented language FLAN with action rates, which specify the likelihood of exhibiting particular behaviour or of installing features at a specific moment or in a specific order. The enriched language (called PFLAN) allows us to specify models of software product lines with probabilistic configurations and behaviour, e.g. by considering a PFLAN semantics based on discrete-time Markov chains. The Maude implementation of PFLAN is combined with the distributed statistical model checker MultiVeStA to perform quantitative analyses of a simple product line case study. The presented analyses include the likelihood of certain behaviour of interest (e.g. product malfunctioning) and the expected average cost of products.
Maurice H. ter Beek
maurice.terbeek@isti.cnr.it
Axel Legay
Alberto Lluch Lafuente
Andrea Vandin
andrea.vandin@imtlucca.it
2016-05-23T11:46:35Z
2016-05-23T11:46:35Z
http://eprints.imtlucca.it/id/eprint/3490
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3490
2016-05-23T11:46:35Z
Simple outlier labeling based on quantile regression, with application to the steelmaking process
This paper introduces some methods for outlier identification in the regression setting, motivated by the analysis of steelmaking process data. The proposed methodology extends to the regression setting the boxplot rule, commonly used for outlier screening with univariate data. The focus here is on bivariate settings with a single covariate, but extensions are possible. The proposal is based on quantile regression, including an additional transformation parameter for selecting the best scale for linearity of the conditional quantiles. The resulting method is used to perform effective labeling of potential outliers, with a quite low computational complexity, allowing for simple implementation within statistical software as well as commonly used spreadsheets. Some simulation experiments have been carried out to study the swamping and masking properties of the proposal. The methodology is also illustrated by some real life examples, taking as the response variable the energy consumed in the melting process.
Ruggero Bellio
Mauro Coletto
mauro.coletto@imtlucca.it
2016-05-23T09:52:51Z
2016-05-23T09:52:51Z
http://eprints.imtlucca.it/id/eprint/3489
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3489
2016-05-23T09:52:51Z
Electoral predictions with Twitter: a machine-learning approach
Several studies have shown how to approximately predict public opinion, such as in political elections, by analyzing user activities in blogging platforms and on-line social networks. The task is challenging for several reasons; sample bias and automatic understanding of textual content are two of several non-trivial issues.
In this work we study how Twitter can provide some interesting insights concerning the primary elections of an Italian political party. State-of-the-art approaches rely on indicators based on tweet and user volumes, often including sentiment analysis. We investigate how to exploit and improve those indicators in order to reduce the bias of the Twitter users sample. We propose novel indicators and a novel content-based method. Furthermore, we study how a machine learning approach can learn correction factors for those indicators. Experimental results on Twitter data support the validity of the proposed methods and their improvement over the state of the art.
Mauro Coletto
mauro.coletto@imtlucca.it
Claudio Lucchese
Salvatore Orlando
Raffaele Perego
2016-05-05T13:57:04Z
2016-05-05T13:57:04Z
http://eprints.imtlucca.it/id/eprint/3481
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3481
2016-05-05T13:57:04Z
Classification-aware distortion metric for HEVC intra coding
Increasingly many vision applications necessitate the transmission of acquired images and video to a remote location for automated processing. When the image data are consumed by analysis algorithms and possibly never seen by a human, tailoring compression to the application is beneficial from a bit rate perspective. We inject prior knowledge of the application in the encoder to make rate-distortion decisions based on an estimate of the accuracy that will be achieved when analyzing reconstructed image data. Focusing on classification (e.g., used for image segmentation), we propose a new application-aware distortion metric based on a geometric interpretation of classification error. We devise an implementation for the High Efficiency Video Coding standard, and derive optimal model parameters for the λ-domain rate control algorithm by curve fitting procedures. We evaluate our approach on time-lapse sequences from plant phenotyping experiments and cell fluorescence microscopy encoded in intra-only mode, observing a reduction in segmentation error across bit rates.
Massimo Minervini
massimo.minervini@imtlucca.it
Sotirios A. Tsaftaris
sotirios.tsaftaris@imtlucca.it
2016-04-13T09:40:21Z
2016-04-13T09:40:21Z
http://eprints.imtlucca.it/id/eprint/3444
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3444
2016-04-13T09:40:21Z
Scaling Size and Parameter Spaces in Variability-Aware Software Performance Models (T)
In software performance engineering, what-if scenarios, architecture optimization, capacity planning, run-time adaptation, and uncertainty management of realistic models typically require the evaluation of many instances. Effective analysis is however hindered by two orthogonal sources of complexity. The first is the infamous problem of state space explosion — the analysis of a single model becomes intractable with its size. The second is due to massive parameter spaces to be explored, but such that computations cannot be reused across model instances. In this paper, we efficiently analyze many queuing models with the distinctive feature of more accurately capturing variability and uncertainty of execution rates by incorporating general (i.e., non-exponential) distributions. Applying product-line engineering methods, we consider a family of models generated by a core that evolves into concrete instances by applying simple delta operations affecting both the topology and the model's parameters. State explosion is tackled by turning to a scalable approximation based on ordinary differential equations. The entire model space is analyzed in a family-based fashion, i.e., at once using an efficient symbolic solution of a super-model that subsumes every concrete instance. Extensive numerical tests show that this is orders of magnitude faster than a naive instance-by-instance analysis.
Matthias Kowal
Max Tschaikowski
max.tschaikowski@imtlucca.it
Mirco Tribastone
mirco.tribastone@imtlucca.it
Ina Schaefer
2016-04-13T09:23:42Z
2016-04-13T09:23:42Z
http://eprints.imtlucca.it/id/eprint/3441
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3441
2016-04-13T09:23:42Z
Forward and Backward Bisimulations for Chemical Reaction Networks
We present two quantitative behavioral equivalences over species of a chemical reaction network (CRN) with semantics based on ordinary differential equations. Forward CRN bisimulation identifies a partition where each equivalence class represents the exact sum of the concentrations of the species belonging to that class. Backward CRN bisimulation relates species that have identical solutions at all time points when starting from the same initial conditions. Both notions can be checked using only CRN syntactical information, i.e., by inspection of the set of reactions. We provide a unified algorithm that computes the coarsest refinement up to our bisimulations in polynomial time. Further, we give algorithms to compute quotient CRNs induced by a bisimulation. As an application, we find significant reductions in a number of models of biological processes from the literature. In two cases we allow the analysis of benchmark models which would be otherwise intractable due to their memory requirements.
Luca Cardelli
Mirco Tribastone
mirco.tribastone@imtlucca.it
Max Tschaikowski
max.tschaikowski@imtlucca.it
Andrea Vandin
andrea.vandin@imtlucca.it
2016-04-04T09:28:35Z
2016-04-04T09:28:35Z
http://eprints.imtlucca.it/id/eprint/3352
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3352
2016-04-04T09:28:35Z
Effect of Punch Diameters on Shear Extrusion of 6063 Aluminium Alloy
This paper reports the effect of punch diameters on the shear extrusion of 6063 Aluminium alloy. During the shear extrusion process, Aluminium billets of diameter 30 mm and height 25 mm were inserted in a die hole, and punches of diameter 12 mm, 14 mm, 16 mm and 18 mm respectively were brought into contact to perform the shear operation. The setup was mounted under a hydraulic press with a maximum capacity of 600 kN. This work is aimed at selecting the optimum punch diameter for shear extrusion using local groundnut oil as the lubricant. Different extrusion pressures were measured: the punch with a diameter of 18 mm gives the highest load of 77.7 kN, while the punch with a diameter of 12 mm gives the lowest load of 51.2 kN. The results indicate that an increase in punch diameter led to an increase in the height of the extrudates, which in turn reduces the induced stress.
Mutiu F. Erinosho
Saheed Olalekan Ojo
saheed.ojo@imtlucca.it
Joseph S. Ajiboye
Esther T. Akinlabi
2016-04-04T09:20:10Z
2016-04-05T11:19:16Z
http://eprints.imtlucca.it/id/eprint/3351
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3351
2016-04-04T09:20:10Z
Experimental Analysis for Lubricant and Punch Selection in Shear Extrusion of Aa-6063
Shear extrusion is a forming process based on combined backward cup-forward rod extrusion. This extrusion process is attractive due to its potential to achieve severe plastic deformation, thus enabling texture and microstructural control of materials. Furthermore, the economic potential of shear extrusion for mass production and for the production of complex shapes provides for numerous applications in the automotive, transportation, aerospace and other industries. However, a trending challenge in the use of this method for complex shapes is the design and selection of tools to achieve a high-quality product. This paper presents an in-depth experimental study of the shear extrusion of AA-6063, using variables which affect the forming load as well as the quality of the product. It is concluded from the load-displacement and stress plots that a punch with a large diameter and a small punch land is desirable for easy forming of the material during shear extrusion. Analysis of the effect of lubricants on deformation load and stress shows that palm oil remains the best of the four lubricants examined, since it gives the minimum load obtained during shear extrusion.
Saheed Olalekan Ojo
saheed.ojo@imtlucca.it
Mutiu F. Erinosho
Joseph S. Ajiboye
2016-03-21T09:24:26Z
2016-03-21T09:24:26Z
http://eprints.imtlucca.it/id/eprint/3247
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3247
2016-03-21T09:24:26Z
Polymeric PEGylated nanoparticles as drug carriers: How preparation and loading procedures influence functional properties
The application of emerging nanotechnologies in medicine has shown significant potential in recent years for the improvement of therapies. In particular, polymeric nanocarriers are currently tested to evaluate their capability to reduce side effects, to increase the residence time in the body and also to obtain a controlled release over time. In the present work a novel polymeric nanocarrier was developed and optimized to obtain, with the same chemical formulation, three different typologies of nanocarriers: dense nanospheres loaded with an active molecule (1) during nanoparticle formation and (2) after the preparation, and (3) hollow nanocapsules to increase the starting drug payload. The synthetic materials considered were PEGylated acrylic copolymers; folic acid was used as a model of a hydrophobic drug. The main aim is to develop an optimized nanocarrier for the transport and the enhanced release of poorly water-soluble drugs. © 2014 Wiley Periodicals, Inc. J. Appl. Polym. Sci. 2015, 132, 41310.
Mariacristina Gagliardi
mariacristina.gagliardi@imtlucca.it
2016-03-21T09:18:54Z
2016-03-21T09:20:12Z
http://eprints.imtlucca.it/id/eprint/3246
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3246
2016-03-21T09:18:54Z
Chemical synthesis of a biodegradable PEGylated copolymer from ε-caprolactone and γ-valerolactone: evaluation of reaction and functional properties
This paper reports the chemical synthesis of methoxy poly(ethyleneglycol)-block-poly(ε-caprolactone-co-4-hydroxyvalerate) from ε-caprolactone and γ-valerolactone, a five-membered ring rarely used in chemical synthesis due to its low reactivity. This procedure enabled production of copolymers with controlled ratios of repeating units and molecular weights, as demonstrated by GPC, FT-IR and NMR characterization. Copolymer degradation rate was found to depend on macromolecular composition, and finely tuneable in a wide range of values. Similarly, hydrophilicity was dependent on γ-valerolactone content, and could be accurately controlled by varying the composition of the reaction feed. Importantly, this copolymer showed lower levels of acidic degradation products than other biodegradable polymers, thus resulting in improved biocompatibility. These encouraging results demonstrate the feasibility of the chemical synthesis of a novel and versatile material with interesting properties that fill a gap in the range of commercially available biodegradable polymers.
Mariacristina Gagliardi
mariacristina.gagliardi@imtlucca.it
Federica Michele
Barbara Mazzolai
Angelo Bifone
2016-03-21T09:16:01Z
2016-03-21T09:19:49Z
http://eprints.imtlucca.it/id/eprint/3245
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3245
2016-03-21T09:16:01Z
Molecularly imprinted polymeric micro- and nano-particles for the targeted delivery of active molecules
Molecular imprinting (MI) represents a strategy to introduce a ‘molecular memory’ in a polymeric system obtaining materials with specific recognition properties. MI particles can be used as drug delivery systems providing a targeted release and thus reducing the side effects. The introduction of molecular recognition properties on a polymeric drug carrier represents a challenge in the development of targeted delivery systems to increase their efficiency. This review will summarize the limited number of drug delivery MI particles described in the literature along with an overview of potential solutions for a larger exploitation of MI particles as targeted drug delivery carriers. Molecularly imprinted drug carriers can be considered interesting candidates to significantly improve the efficiency of a controlled drug treatment.
Mariacristina Gagliardi
mariacristina.gagliardi@imtlucca.it
Barbara Mazzolai
2016-03-21T09:12:52Z
2016-03-22T11:08:58Z
http://eprints.imtlucca.it/id/eprint/3244
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3244
2016-03-21T09:12:52Z
Experimental and computational study of mechanical and transport properties of a polymer coating for drug-eluting stents
Background: Experimental and computational characterizations in the preclinical development of biomedical devices are complementary and can significantly help in a thorough analysis of the performances before clinical evaluation. Methodology: Here the mechanical and drug delivery properties of a polymer platform, prepared ad hoc to obtain coatings for drug-eluting stents, are reported; polymer formulation and starting drug loading were varied to study the behavior of the platform; a finite element model was constructed starting from experimental data. Results: Different platform formulations affected mechanical and drug transport properties; these properties can be fine-tuned by varying the starting platform formulation. Finite element analysis allowed visualizing drug distribution maps over time in biological tissues for different commercial stents and polymer platform formulations.
Mariacristina Gagliardi
mariacristina.gagliardi@imtlucca.it
2016-03-15T09:31:37Z
2016-03-15T09:31:37Z
http://eprints.imtlucca.it/id/eprint/3234
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3234
2016-03-15T09:31:37Z
Interfacial Cracks in Piezoelectric Bimaterials: an approach based on Weight Functions and Boundary Integral Equations
The focus of this paper is on the analysis of a semi-infinite crack lying along a perfect interface in a piezoelectric bimaterial with arbitrary loading on the crack faces. Making use of the extended Stroh formalism for piezoelectric materials combined with the Riemann-Hilbert formulation, general expressions are obtained for both symmetric and skew-symmetric weight functions associated with plane crack problems at the interface between dissimilar anisotropic piezoelectric media. The effect of the coupled electrical fields is incorporated in the derived original expressions for the weight function matrices. These matrices are used together with Betti's reciprocity identity in order to obtain singular integral equations relating the extended displacement and traction fields to the loading acting on the crack faces. In order to study the variation of the piezoelectric effect, two different poling directions are considered. Examples are shown for both poling directions with a number of mechanical and electrical loadings applied to the crack faces.
Lewis Pryce
Lorenzo Morini
lorenzo.morini@imtlucca.it
D. Andreeva
A. Zagnetko
2016-03-14T13:02:02Z
2016-04-06T10:06:19Z
http://eprints.imtlucca.it/id/eprint/3223
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3223
2016-03-14T13:02:02Z
A Cortical and Sub-cortical Parcellation Clustering by Intrinsic Functional Connectivity
Network analysis of resting-state fMRI (rsfMRI) has been widely utilized to investigate the functional architecture of the whole brain. Such analysis can divide the brain into several discrete elements (nodes) connected by links (edges) representing the relation between two elements. The brain cortical and subcortical areas can be segmented or parcelled into several functional and/or structural regions. The connectome analysis of human-brain structure and functional connectivity provides a unique opportunity to understand the organisation of brain networks. However, such analyses require an appropriate definition of functional or structural nodes to efficiently represent cortical regions. In order to address this issue, here we propose a robust parcellation method based on resting-state fMRI, which can be generalized from the single-subject level to the multi-group one. Considering the input data of a single subject, we construct multi-resolution graph elements and combine voting-based measurements to divide the cortical region into sub-regions, in order to obtain the whole-brain parcellation. Our parcellation relies on a majority vote and poses spatial constraints within a hierarchical agglomerative clustering framework to define parcels that are spatially homogeneous. We used rsfMRI data collected from 40 healthy subjects and we showed that our proposed algorithm is able to compute stable and reproducible parcellations across the group of subjects at multi-resolution level. We find that, even though previous methods ensure on average larger overlap between parcels and regions in the AAL atlas, the method proposed herein reduces inter-subject variability, especially when the number of parcels increases. Our high-resolution parcels seem to be functionally more consistent and reliable and can be a useful tool for future analyses that aim to match the functional and structural architecture of the brain.
Ying-Chia Lin
yingchia.lin@imtlucca.it
Tommaso Gili
Sotirios A. Tsaftaris
sotirios.tsaftaris@imtlucca.it
Andrea Gabrielli
Mariangela Iorio
Gianfranco Spalletta
Guido Caldarelli
guido.caldarelli@imtlucca.it
2016-03-11T12:15:25Z
2016-03-11T12:15:25Z
http://eprints.imtlucca.it/id/eprint/3215
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3215
2016-03-11T12:15:25Z
Boundary integral formulation for interfacial cracks in thermodiffusive bimaterials
An original boundary integral formulation is proposed for the problem of a semi-infinite crack at the interface between two dissimilar elastic materials in the presence of heat flows and mass diffusion. Symmetric and skew-symmetric weight function matrices are used together with a generalized Betti’s reciprocity theorem in order to derive a system of integral equations that relate the applied loading, the temperature and mass concentration fields, the heat and mass fluxes on the fracture surfaces and the resulting crack opening. The obtained integral identities can have many relevant applications, such as for the modelling of crack and damage processes at the interface between different components in electrochemical energy devices characterized by multi-layered structures (solid oxide fuel cells and lithium ions batteries).
Lorenzo Morini
lorenzo.morini@imtlucca.it
Andrea Piccolroaz
2016-02-26T15:00:45Z
2016-02-26T15:00:45Z
http://eprints.imtlucca.it/id/eprint/3132
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3132
2016-02-26T15:00:45Z
Evaluating flood hazard at the catchment scale via machine-learning techniques
Massimiliano Degiorgis
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Silvia Gorni
Rita Morisi
rita.morisi@imtlucca.it
Giorgio Roth
Marcello Sanguineti
Angela Celeste Taramasso
2016-02-26T14:43:14Z
2016-02-26T14:43:14Z
http://eprints.imtlucca.it/id/eprint/3131
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3131
2016-02-26T14:43:14Z
Dealing with mixed hard/soft constraints via Support constraint Machines
A learning paradigm is presented which extends the classical framework of learning from examples by including hard pointwise constraints, i.e., constraints that cannot be violated. In applications, hard pointwise constraints may encode very precise prior knowledge coming from rules, applied, e.g., to a large collection of unsupervised examples. The classical learning framework corresponds to soft pointwise constraints, which can be violated at the cost of some penalization. The functional structure of the optimal solution is derived in terms of a set of “support constraints”, which generalize the classical concept of “support vectors”. They are at the basis of a novel learning paradigm, which we call “Support Constraint Machines”. A case study and a numerical example are presented.
Marcello Sanguineti
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Marco Gori
Stefano Melacci
2016-02-26T14:41:30Z
2016-02-26T14:41:30Z
http://eprints.imtlucca.it/id/eprint/3130
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3130
2016-02-26T14:41:30Z
A Two-Player Differential Game Model for the Management of Transboundary Pollution and Environmental Absorption
It is likely that the decentralized, nation-level structure of decision-making processes related to polluting emissions will aggravate the decline in the efficiency of carbon sinks. A two-player differential game model of pollution is proposed. It accounts for a time-dependent environmental absorption efficiency and allows for the possibility of the biosphere switching from a carbon sink to a source. The impact of the negative externalities arising from the transboundary-pollution non-cooperative game in which countries are dynamically involved is investigated. The differences in steady state between cooperative, open-loop, and Markov perfect Nash equilibria are studied. For the latter, two numerical methods for its approximation are compared.
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
F. El Ouardighi
K. Kogan
Marcello Sanguineti
2016-02-26T13:30:28Z
2016-02-26T13:30:28Z
http://eprints.imtlucca.it/id/eprint/3128
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3128
2016-02-26T13:30:28Z
Online learning as an LQG optimal control problem with random matrices
In this paper, we combine optimal control theory and machine learning techniques to propose and solve an optimal control formulation of online learning from supervised examples, which are used to learn an unknown vector parameter modeling the relationship between the input examples and their outputs. We show some connections between the problem investigated and the classical LQG optimal control problem, of which the proposed problem is a non-trivial variation, as it involves random matrices. We also compare the optimal solution of the proposed problem with the Kalman-filter estimate of the parameter vector to be learned, demonstrating its greater smoothness and robustness to outliers. Extensions of the proposed online-learning framework are mentioned at the end of the paper.
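As an illustration of the Kalman-filter baseline this abstract compares against, here is a minimal sketch of online Kalman-filter estimation of a static parameter vector from streaming supervised examples. The dimensions, noise levels, and the absence of a process-noise term are hypothetical choices, not the paper's setup.

```python
import numpy as np

# Online Kalman-filter estimation of an unknown parameter vector w
# from streaming supervised examples y_t = x_t @ w + noise.
rng = np.random.default_rng(1)
d = 3
w_true = np.array([1.0, -2.0, 0.5])

w_hat = np.zeros(d)          # state estimate
P = np.eye(d) * 10.0         # estimate covariance
R = 0.01                     # measurement-noise variance

for _ in range(500):
    x = rng.standard_normal(d)             # random regressor (random matrix row)
    y = x @ w_true + rng.normal(0, np.sqrt(R))
    # Kalman measurement update (the state is static, so no prediction step)
    S = x @ P @ x + R                      # innovation variance
    K = P @ x / S                          # Kalman gain
    w_hat = w_hat + K * (y - x @ w_hat)    # corrected estimate
    P = P - np.outer(K, x) @ P             # covariance update

print(np.round(w_hat, 2))
```

After a few hundred examples the estimate settles near the true parameter vector; the randomness of the regressors `x` is what connects this simple recursion to the random-matrix flavour of the LQG variation discussed in the abstract.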
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Alberto Bemporad
alberto.bemporad@imtlucca.it
Marco Gori
Rita Morisi
rita.morisi@imtlucca.it
Marcello Sanguineti
2016-02-26T13:11:06Z
2016-02-26T13:11:06Z
http://eprints.imtlucca.it/id/eprint/3126
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3126
2016-02-26T13:11:06Z
A SOM-based Chan–Vese model for unsupervised image segmentation
Active Contour Models (ACMs) constitute an efficient energy-based image segmentation framework. They usually deal with the segmentation problem as an optimization problem, formulated in terms of a suitable functional, constructed in such a way that its minimum is achieved in correspondence with a contour that is a close approximation of the actual object boundary. However, for existing ACMs, handling images that contain objects characterized by many different intensities still represents a challenge. In this paper, we propose a novel ACM that combines—in a global and unsupervised way—the advantages of the Self-Organizing Map (SOM) within the level set framework of a state-of-the-art unsupervised global ACM, the Chan–Vese (C–V) model. We term our proposed model the SOM-based Chan–Vese (SOMCV) active contour model. It works by explicitly integrating the global information coming from the weights (prototypes) of the neurons in a trained SOM to help decide whether to shrink or expand the current contour during the optimization process, which is performed iteratively. The proposed model can handle images that contain objects characterized by complex intensity distributions, and is at the same time robust to additive noise. Experimental results show the high accuracy of the segmentation results obtained by the SOMCV model on several synthetic and real images, when compared to the Chan–Vese model and other image segmentation models.
Mohammed Abdelsamea
mohammed.abdelsamea@imtlucca.it
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Mohamed Medhat Gaber
2016-02-26T12:55:26Z
2016-02-26T12:55:26Z
http://eprints.imtlucca.it/id/eprint/3125
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3125
2016-02-26T12:55:26Z
On the Relationship between Variational Level Set-Based and SOM-Based Active Contours
Most Active Contour Models (ACMs) deal with the image segmentation problem as a functional optimization problem, as they work on dividing an image into several regions by optimizing a suitable functional. Among ACMs, variational level set methods have been used to build an active contour with the aim of modeling arbitrarily complex shapes. Moreover, they can also handle topological changes of the contours. Self-Organizing Maps (SOMs) have attracted the attention of many computer vision scientists, particularly in modeling an active contour based on the idea of utilizing the prototypes (weights) of a SOM to control the evolution of the contour. SOM-based models have been proposed in general with the aim of exploiting the specific ability of SOMs to learn the edge-map information via their topology preservation property, and of overcoming some drawbacks of other ACMs, such as becoming trapped in local minima of the image energy functional to be minimized in such models. In this survey, we illustrate the main concepts of variational level set-based ACMs and SOM-based ACMs, as well as their relationship, and review in a comprehensive fashion the development of their state-of-the-art models from a machine learning perspective, with a focus on their strengths and weaknesses.
Mohammed Abdelsamea
mohammed.abdelsamea@imtlucca.it
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Mohamed Medhat Gaber
Eyad Elyan
2016-02-12T13:11:53Z
2016-02-12T13:11:53Z
http://eprints.imtlucca.it/id/eprint/3065
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3065
2016-02-12T13:11:53Z
The SCEL Language: Design, Implementation, Verification
SCEL (Service Component Ensemble Language) is a new language specifically designed to rigorously model and program autonomic components and their interaction, while supporting formal reasoning on their behaviors. SCEL brings together various programming abstractions that allow one to directly represent aggregations, behaviors and knowledge according to specific policies. It also naturally supports programming interaction, self-awareness, context-awareness, and adaptation. The solid semantic grounds of the language are exploited for developing logics, tools and methodologies for formal reasoning on system behavior to establish qualitative and quantitative properties of both the individual components and the overall systems.
Rocco De Nicola
r.denicola@imtlucca.it
Diego Latella
Alberto Lluch Lafuente
Michele Loreti
Andrea Margheri
Mieke Massink
Andrea Morichetta
Rosario Pugliese
Francesco Tiezzi
Andrea Vandin
andrea.vandin@imtlucca.it
2016-02-12T13:04:06Z
2016-04-06T07:58:07Z
http://eprints.imtlucca.it/id/eprint/3064
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3064
2016-02-12T13:04:06Z
Reconciling White-Box and Black-Box Perspectives on Behavioral Self-adaptation
This paper proposes to reconcile two perspectives on behavioral adaptation commonly taken at different stages of the engineering of autonomic computing systems. Requirements engineering activities often take a black-box perspective: A system is considered to be adaptive with respect to an environment whenever the system is able to satisfy its goals irrespective of environment perturbations. Modeling and programming engineering activities often take a white-box perspective: A system is equipped with suitable adaptation mechanisms and its behavior is classified as adaptive depending on whether the adaptation mechanisms are enacted or not. The proposed approach reconciles black- and white-box perspectives by proposing several notions of coherence between the adaptivity as observed by the two perspectives: These notions provide useful criteria for the system developer to assess and possibly modify the adaptation requirements, models and programs of an autonomic system.
Roberto Bruni
Andrea Corradini
Fabio Gadducci
Matthias Hölzl
Alberto Lluch Lafuente
Andrea Vandin
andrea.vandin@imtlucca.it
Martin Wirsing
2016-02-12T12:37:25Z
2016-02-12T12:37:25Z
http://eprints.imtlucca.it/id/eprint/3063
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3063
2016-02-12T12:37:25Z
Differential Bisimulation for a Markovian Process Algebra
Formal languages with semantics based on ordinary differential equations (ODEs) have emerged as a useful tool to reason about large-scale distributed systems. We present differential bisimulation, a behavioral equivalence developed as the ODE counterpart of bisimulations for languages with probabilistic or stochastic semantics. We study it in the context of a Markovian process algebra. Similarly to Markovian bisimulations yielding an aggregated Markov process in the sense of the theory of lumpability, differential bisimulation yields a partition of the ODEs underlying a process algebra term, whereby the sum of the ODE solutions of the same partition block is equal to the solution of a single (lumped) ODE. Differential bisimulation is defined in terms of two symmetries that can be verified only using syntactic checks. This enables the adaptation to a continuous-state semantics of proof techniques and algorithms for finite, discrete-state, labeled transition systems. For instance, we readily obtain a result of compositionality, and provide an efficient partition-refinement algorithm to compute the coarsest ODE aggregation of a model according to differential bisimulation.
Giulio Iacobelli
Mirco Tribastone
mirco.tribastone@imtlucca.it
Andrea Vandin
andrea.vandin@imtlucca.it
2016-02-12T12:27:56Z
2016-02-12T12:27:56Z
http://eprints.imtlucca.it/id/eprint/3062
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3062
2016-02-12T12:27:56Z
A White Box Perspective on Behavioural Adaptation
We present a white-box conceptual framework for adaptation developed in the context of the EU Project ASCENS coordinated by Martin Wirsing. We called it CoDa, for Control Data Adaptation, since it is based on the notion of control data. CoDa promotes a neat separation between application and adaptation logic through a clear identification of the set of data that is relevant for the latter. The framework provides an original perspective from which we survey a representative set of approaches to adaptation, ranging from programming languages and paradigms to computational models and architectural solutions.
Roberto Bruni
Andrea Corradini
Fabio Gadducci
Alberto Lluch Lafuente
Andrea Vandin
andrea.vandin@imtlucca.it
2016-02-12T11:52:51Z
2016-02-12T12:08:13Z
http://eprints.imtlucca.it/id/eprint/3061
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3061
2016-02-12T11:52:51Z
Modelling and analyzing adaptive self-assembly strategies with Maude
Building adaptive systems with predictable emergent behavior is a difficult task and it is becoming a critical need. The research community has accepted the challenge by introducing approaches of various nature: from software architectures to programming paradigms and analysis techniques. Our white-box conceptual approach to adaptive systems based on the notion of control data promotes a clear distinction between the application and the adaptation logic. In this paper we propose a concrete instance of our approach based on (i) a neat identification of control data; (ii) a hierarchical architecture that provides the basic structure to separate the adaptation and application logics; (iii) computational reflection as the main mechanism to realize the adaptation logic; (iv) probabilistic rule-based specifications and quantitative verification techniques to specify and analyze the adaptation logic. We show that our solution can be naturally realized in Maude, a Rewriting Logic based framework, and illustrate our approach by specifying, validating and analyzing a prominent example of adaptive systems: robot swarms equipped with self-assembly strategies.
Roberto Bruni
Andrea Corradini
Fabio Gadducci
Alberto Lluch Lafuente
Andrea Vandin
andrea.vandin@imtlucca.it
2016-02-12T11:43:04Z
2016-02-12T11:43:04Z
http://eprints.imtlucca.it/id/eprint/3060
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3060
2016-02-12T11:43:04Z
Software Engineering for Collective Autonomic Systems: The ASCENS Approach
The ASCENS project deals with designing systems as ensembles of adaptive components. Among the outputs of the ASCENS project are multiple tools that address particular issues in designing the ensembles, ranging from support for early stage formal modeling to runtime environment for executing and monitoring ensemble implementations. The goal of this chapter is to provide a compact description of the individual tools, which is supplemented by additional downloadable material on the project website.
Dhaminda B. Abeywickrama
Jacques Combaz
Jaroslav Keznikl
Jan Kofroň
Vojtěch Horký
Andrea Vandin
andrea.vandin@imtlucca.it
Emil Vassev
2016-02-12T11:30:36Z
2016-04-05T11:54:38Z
http://eprints.imtlucca.it/id/eprint/3058
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3058
2016-02-12T11:30:36Z
Design and simulation of an optimized e-linac based neutron source for BNCT research
The paper focuses on the study of a novel photo-neutron source for BNCT preclinical research based on medical electron Linacs. Previous studies by the authors already demonstrated the possibility of obtaining a mixed thermal and epithermal neutron flux of the order of 10⁷ cm⁻² s⁻¹. This paper investigates possible Linac modifications and a new photo-converter design to raise the neutron flux above 5 × 10⁷ cm⁻² s⁻¹, while also reducing the gamma contamination.
E. Durisi
Katia Alikaniotis
Oscar Borla
F. Bragato
Mauro Costagli
Gianrossano Giannini
Valeria Monti
L. Visca
Gianna Vivaldo
gianna.vivaldo@imtlucca.it
Alba Zanini
2016-02-11T15:16:10Z
2016-04-06T10:06:34Z
http://eprints.imtlucca.it/id/eprint/3055
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3055
2016-02-11T15:16:10Z
Large-scale analysis of neuroimaging data on commercial clouds with content-aware resource allocation strategies
The combined use of mice that have genetic mutations (transgenic mouse models) of human pathology and advanced neuroimaging methods (such as magnetic resonance imaging) has the potential to radically change how we approach disease understanding, diagnosis and treatment. Morphological changes occurring in the brain of transgenic animals as a result of the interaction between environment and genotype can be assessed using advanced image analysis methods, an effort described as ‘mouse brain phenotyping’. However, the computational methods involved in the analysis of high-resolution brain images are demanding. While running such analysis on local clusters is possible, not all users have access to such infrastructure, and even for those that do, having additional computational capacity can be beneficial (e.g. to meet sudden high throughput demands). In this paper we use a commercial cloud platform for brain neuroimaging and analysis. We achieve a registration-based multi-atlas, multi-template anatomical segmentation, normally a time-consuming effort, within a few hours. Naturally, performing such analyses on the cloud entails a monetary cost, and it is worthwhile identifying strategies that can allocate resources intelligently. In our context a critical aspect is the identification of how long each job will take. We propose a method that estimates the complexity of an image-processing task, a registration, using statistical moments and shape descriptors of the image content. We use this information to learn and predict the completion time of a registration. The proposed approach is easy to deploy, and could serve as an alternative for laboratories that may require instant access to large high-performance-computing infrastructures. To facilitate adoption by the community we publicly release the source code.
Massimo Minervini
massimo.minervini@imtlucca.it
Cristian Rusu
Mario Damiano
Valter Tucci
Angelo Bifone
Alessandro Gozzi
Sotirios A. Tsaftaris
sotirios.tsaftaris@imtlucca.it
2016-01-20T15:45:46Z
2016-04-06T09:41:29Z
http://eprints.imtlucca.it/id/eprint/3027
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3027
2016-01-20T15:45:46Z
Crack propagation in graphene
The crack initiation and growth mechanisms in a 2D graphene lattice structure are studied based on molecular dynamics simulations. Crack growth in an initial edge crack model in the arm-chair and the zig-zag lattice configurations of graphene is considered. The influence of the time step on the post-yielding behaviour of graphene is studied. Based on the results, a time step of 0.1 fs is recommended for consistent and accurate simulation of crack propagation. The effect of temperature on crack propagation in graphene is also studied, considering adiabatic and isothermal conditions. Total energy and stress fields are analyzed. A systematic study of the bond stretching and bond reorientation phenomena is performed, which shows that the crack propagates after significant bond elongation and rotation in graphene. The variation of the crack speed with the change in crack length is estimated.
Pattabhi R. Budarapu
pattabhi.budarapu@imtlucca.it
Brahmanandam Javvaji
V. K. Sutrakar
D. Roy Mahapatra
Goangseup Zi
Timon Rabczuk
2016-01-20T08:59:10Z
2016-01-20T09:37:24Z
http://eprints.imtlucca.it/id/eprint/3021
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3021
2016-01-20T08:59:10Z
Viral Misinformation: The Role of Homophily and Polarization
Alessandro Bessi
Fabio Petroni
Michela Del Vicario
michela.delvicario@imtlucca.it
Fabiana Zollo
fabiana.zollo@imtlucca.it
Aris Anagnostopoulos
Antonio Scala
Guido Caldarelli
guido.caldarelli@imtlucca.it
Walter Quattrociocchi
walter.quattrociocchi@imtlucca.it
2016-01-19T15:59:42Z
2016-04-06T07:39:00Z
http://eprints.imtlucca.it/id/eprint/3018
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3018
2016-01-19T15:59:42Z
Semiautomatic detection of villi in confocal endoscopy for the evaluation of celiac disease
Celiac Disease (CD) is an immune-mediated enteropathy, diagnosed in the clinical practice by intestinal biopsy and the concomitant presence of a positive celiac serology. Confocal Laser Endomicroscopy (CLE) allows skilled and trained experts to potentially perform in vivo virtual histology of small-bowel mucosa. In particular, it allows the qualitative evaluation of mucosa alterations such as a decrease in goblet cell density, the presence of villous atrophy, or crypt hypertrophy. We present a semi-automatic method for the detection of villi, whose appearance changes in the case of villous atrophy, from confocal endoscopy images. Starting from a set of manual seeds, a first rough segmentation of the villi is obtained by means of mathematical morphology operations. A merge and split procedure is then performed, to ensure that each seed gives rise to a different region in the final segmentation. A border refinement process is finally performed, evolving the shape of each region according to local gradient intensities. Mean and median Dice coefficients, computed against manually obtained ground truth for 290 villi from 66 images, are 80.71 and 87.96, respectively.
Davide Boschetto
davide.boschetto@imtlucca.it
H. Mirzaei
R.W.L. Leong
Giacomo Tarroni
Enrico Grisan
2016-01-19T15:31:34Z
2016-04-06T07:34:52Z
http://eprints.imtlucca.it/id/eprint/3017
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3017
2016-01-19T15:31:34Z
Detection and density estimation of goblet cells in confocal endoscopy for the evaluation of celiac disease
Celiac Disease (CD) is an immune-mediated enteropathy, diagnosed in the clinical practice by intestinal biopsy and the concomitant presence of a positive celiac serology. Confocal Laser Endomicroscopy (CLE) allows skilled and trained experts to potentially perform in vivo virtual histology of small-bowel mucosa. In particular, it allows the qualitative evaluation of mucosa alterations such as a decrease in goblet cell density, the presence of villous atrophy, or crypt hypertrophy. We present a semi-automatic computer-based method for the detection of goblet cells, whose density changes in the case of pathological tissue, from confocal endoscopy images. After a manual selection of a suitable region of interest, the candidate columnar and goblet cell centers are first detected and the cellular architecture is estimated from their position using a Voronoi diagram. The region within each Voronoi cell is then analyzed and classified as goblet cell or other. The results suggest that our method is able to detect and label goblet cells immersed in a columnar epithelium in a fast, reliable and automatic way. Accepting 0.44 false positives per image, we obtain a sensitivity value of 90.3. Furthermore, estimated and real goblet cell densities are comparable (error: 9.7 ± 16.9, correlation: 87.2, R2 = 76).
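The Voronoi-diagram step described in this abstract can be sketched as follows. The synthetic cell centres and the per-cell area feature are chosen purely for illustration; this is not the authors' pipeline.

```python
import numpy as np
from scipy.spatial import Voronoi

# Candidate cell centres (e.g. the output of a blob detector); the
# coordinates here are synthetic.
rng = np.random.default_rng(2)
centres = rng.uniform(0, 100, size=(30, 2))

# The Voronoi diagram assigns to each centre the image region closer to
# it than to any other centre, giving a proxy for cellular architecture.
vor = Voronoi(centres)

def cell_area(region):
    """Shoelace area of a bounded Voronoi cell given its vertex indices."""
    pts = vor.vertices[region]
    x, y = pts[:, 0], pts[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))

# Areas of the bounded cells (cells touching infinity, marked with -1,
# are skipped) -- a hypothetical feature one might feed to a
# goblet-vs-columnar classifier.
areas = [cell_area(vor.regions[r])
         for r in vor.point_region
         if -1 not in vor.regions[r] and len(vor.regions[r]) > 0]
print(len(areas), round(float(np.mean(areas)), 1))
```

In the actual method, each Voronoi cell would delimit the image patch analyzed for one candidate cell; here the area merely stands in for whatever per-cell features the classifier consumes.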
Davide Boschetto
davide.boschetto@imtlucca.it
H. Mirzaei
R.W.L. Leong
Enrico Grisan
2015-12-04T08:57:09Z
2015-12-04T09:00:42Z
http://eprints.imtlucca.it/id/eprint/2967
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2967
2015-12-04T08:57:09Z
Il Futuro della Cyber Security in Italia. Un libro bianco per raccontare le principali sfide che il nostro Paese dovrà affrontare nei prossimi cinque anni [The Future of Cyber Security in Italy: a white paper on the main challenges our country will have to face in the next five years]
Roberto Baldoni
Rocco De Nicola
r.denicola@imtlucca.it
2015-11-24T15:47:03Z
2015-11-24T15:47:03Z
http://eprints.imtlucca.it/id/eprint/2930
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2930
2015-11-24T15:47:03Z
Robust model predictive control for discrete-time fractional-order systems
In this paper we propose a tube-based robust model predictive control scheme for fractional-order discrete-time systems of the Grünwald–Letnikov type with state and input constraints. We first approximate the infinite-dimensional fractional-order system by a finite-dimensional linear system and show that the actual dynamics can be approximated arbitrarily tightly. We then use the approximate dynamics to design a tube-based model predictive controller which endows the controlled closed-loop system with robust stability properties.
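A minimal sketch of the truncated Grünwald–Letnikov difference underlying such finite-dimensional approximations is shown below. The scalar system, the parameters α, a, b, and the memory length L are hypothetical; the paper's tube-based controller is not reproduced here.

```python
import numpy as np
from scipy.special import binom

# Grünwald–Letnikov fractional difference: Δ^α x_k = Σ_j (-1)^j C(α,j) x_{k-j}.
# Truncating the sum at L past samples yields the finite-dimensional linear
# approximation on which an MPC design can then operate.
alpha, a, b = 0.7, -0.5, 1.0   # hypothetical system parameters
L = 20                          # truncated memory length

# GL coefficients c_j = (-1)^j C(alpha, j); note c_0 = 1.
c = np.array([(-1) ** j * binom(alpha, j) for j in range(L + 1)])

def step(history, u):
    """One step of Δ^α x_{k+1} = a x_k + b u_k under truncated GL memory."""
    # Δ^α x_{k+1} = x_{k+1} + Σ_{j>=1} c_j x_{k+1-j}
    mem = sum(c[j] * history[-j] for j in range(1, min(L, len(history)) + 1))
    return a * history[-1] + b * u - mem

x = [1.0]
for k in range(50):
    x.append(step(x, u=0.0))   # free response from x_0 = 1
print(round(x[-1], 4))
```

Enlarging L tightens the approximation of the infinite-memory fractional dynamics, which is the trade-off the abstract's "arbitrarily tight" approximation result formalizes.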
Pantelis Sopasakis
pantelis.sopasakis@imtlucca.it
Sotiris Ntouskas
Haralambos Sarimveis
2015-11-24T13:00:34Z
2015-11-24T13:00:34Z
http://eprints.imtlucca.it/id/eprint/2931
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2931
2015-11-24T13:00:34Z
The eNanoMapper database for nanomaterial safety information
Background: The NanoSafety Cluster, a cluster of projects funded by the European Commission, identified the need for a computational infrastructure for toxicological data management of engineered nanomaterials (ENMs). Ontologies, open standards, and interoperable designs were envisioned to empower a harmonized approach to European research in nanotechnology. This setting provides a number of opportunities and challenges in the representation of nanomaterials data and the integration of ENM information originating from diverse systems. Within this cluster, eNanoMapper works towards supporting the collaborative safety assessment for ENMs by creating a modular and extensible infrastructure for data sharing, data analysis, and building computational toxicology models for ENMs.
Results: The eNanoMapper database solution builds on the previous experience of the consortium partners in supporting diverse data through flexible data storage, open source components and web services. We have recently described the design of the eNanoMapper prototype database along with a summary of challenges in the representation of ENM data and an extensive review of existing nano-related data models, databases, and nanomaterials-related entries in chemical and toxicogenomic databases. This paper continues with a focus on the database functionality exposed through its application programming interface (API), and its use in visualisation and modelling. Considering the preferred community practice of using spreadsheet templates, we developed a configurable spreadsheet parser facilitating user friendly data preparation and data upload. We further present a web application able to retrieve the experimental data via the API and analyze it with multiple data preprocessing and machine learning algorithms.
Conclusion: We demonstrate how the eNanoMapper database is used to import and publish online ENM and assay data from several data sources, how the “representational state transfer” (REST) API enables building user friendly interfaces and graphical summaries of the data, and how these resources facilitate the modelling of reproducible quantitative structure–activity relationships for nanomaterials (NanoQSAR).
Nina Jeliazkova
Haralambos Chomenides
Philip Doganis
Bengt Fadeel
Roland Grafström
Barry Hardy
Janna Hastings
Markus Hegi
Vedrin Jeliazkov
Nikolay Kochev
Pekka Kohonen
Cristian Munteanu
Haralambos Sarimveis
Bart Smeets
Pantelis Sopasakis
pantelis.sopasakis@imtlucca.it
Georgia Tsiliki
David Vorgrimmler
Egon Willighagen
2015-11-16T09:10:37Z
2015-11-16T09:10:37Z
http://eprints.imtlucca.it/id/eprint/2900
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2900
2015-11-16T09:10:37Z
Memory Kernel in the Expertise of Chess Players
In this work we investigate a mechanism for the emergence of long-range time correlations observed in a chronologically ordered database of chess games. We analyze a modified Yule-Simon preferential growth process proposed by Cattuto et al., which includes memory effects by means of a probabilistic kernel. According to the Hurst exponent of different time series constructed from the record of games, artificially generated databases from the model exhibit similar long-range correlations. In addition, the inter-event time frequency distribution is well reproduced by the model for realistic parameter values. In particular, we find the inter-event time distribution properties to be correlated with the expertise of the chess players through the memory kernel extension. Our work provides new information about the strategies implemented by players with different levels of expertise, showing an interesting example of how popularity and long-range correlations build up together during a collective learning process.
Ana L. Schaigorodsky
alschaigorodsky@gmail.com
Juan I. Perotti
juanignacio.perotti@imtlucca.it
Orlando V. Billoni
2015-11-16T09:04:49Z
2015-11-16T09:04:49Z
http://eprints.imtlucca.it/id/eprint/2899
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2899
2015-11-16T09:04:49Z
Correlated bursts and the role of memory range
Inhomogeneous temporal processes in natural and social phenomena have been described in terms of bursts: rapidly occurring events within short time periods, alternating with long periods of low activity. In addition to the analysis of heavy-tailed inter-event time distributions, higher-order correlations between inter-event times, called correlated bursts, have been studied only recently. As the possible mechanisms underlying such correlated bursts are far from fully understood, we devise a simple model for correlated bursts using a self-exciting point process with variable memory range. Here the probability that a new event occurs is determined by a memory function that is the sum of decaying memories of past events. In order to incorporate the noise and/or limited memory capacity of systems, we apply two memory-loss mechanisms, namely either a fixed number or a variable number of memories. Using theoretical analysis and numerical simulations, we find that an excessive memory effect may lead to a Poissonian process, implying that there exists an intermediate range of memory effect that generates correlated bursts of magnitude comparable to empirical findings. Our results hence provide a deeper understanding of how long-range memory affects correlated bursts.
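The self-exciting point process with a fixed-capacity memory can be sketched in a few lines. This is a discrete-time simplification with hypothetical parameters, not the paper's exact model.

```python
import numpy as np

# Discrete-time sketch of a self-exciting point process: the event
# probability at each step is a base rate plus a sum of decaying memories
# of past events, with a fixed memory capacity (oldest events forgotten).
rng = np.random.default_rng(3)
base, excite, decay, capacity = 0.01, 0.2, 0.9, 5
memory, events = [], []

for t in range(20000):
    # memory function: sum of exponentially decayed kicks from past events
    rate = base + excite * sum(decay ** (t - s) for s in memory)
    if rng.random() < min(rate, 1.0):
        events.append(t)
        memory.append(t)
        if len(memory) > capacity:      # fixed-capacity memory loss
            memory.pop(0)

iet = np.diff(events)                    # inter-event times
print(len(events), round(float(np.mean(iet)), 1))
```

Each event raises the rate of subsequent events, so events cluster into bursts; shrinking `capacity` or `decay` weakens the memory effect and pushes the process toward the Poissonian limit the abstract describes.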
Hang-Hyun Jo
Juan I. Perotti
juanignacio.perotti@imtlucca.it
Kimmo Kaski
János Kertész
2015-11-10T11:31:09Z
2016-09-13T09:43:51Z
http://eprints.imtlucca.it/id/eprint/2868
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2868
2015-11-10T11:31:09Z
The direct, not V1-mediated, functional influence between the thalamus and middle temporal complex in the human brain is modulated by the speed of visual motion
The main visual pathway that conveys motion information to the middle temporal complex (hMT+) originates from the primary visual cortex (V1), which, in turn, receives spatial and temporal features of the perceived stimuli from the lateral geniculate nucleus (LGN). In addition, visual motion information reaches hMT+ directly from the thalamus, bypassing V1, through a direct pathway. We aimed at elucidating whether this direct route between LGN and hMT+ represents a ‘fast lane’ reserved to high-speed motion, as proposed previously, or is merely involved in processing motion information irrespective of speed. We evaluated functional magnetic resonance imaging (fMRI) responses elicited by moving visual stimuli and applied connectivity analyses to investigate the effect of motion speed on the causal influence between LGN and hMT+, independent of V1, using Conditional Granger Causality (CGC) in the presence of slow and fast visual stimuli. Our results showed that at least part of the visual motion information from LGN reaches hMT+, bypassing V1, in response to both slow and fast motion speeds of the perceived stimuli. We also investigated whether motion speeds have different effects on the connections between LGN and functional subdivisions within hMT+: direct connections between LGN and MT-proper carry mainly slow motion information, while connections between LGN and MST carry mainly fast motion information. The existence of a parallel pathway that connects the LGN directly to hMT+ in response to both slow and fast speeds may explain why MT and MST can still respond in the presence of V1 lesions.
Anna Gaglianese
Mauro Costagli
Kenichi Ueno
Emiliano Ricciardi
emiliano.ricciardi@imtlucca.it
Giulio Bernardi
Pietro Pietrini
pietro.pietrini@imtlucca.it
Kang Cheng
2015-11-10T11:01:36Z
2016-09-13T09:44:33Z
http://eprints.imtlucca.it/id/eprint/2863
2015-11-10T11:01:36Z
A topographical organization for action representation in the human brain
How the human brain combines distinct motor features into a unique finalized action remains undefined. Previous models proposed that the distinct features of a motor act are hierarchically organized in separate, but functionally interconnected, cortical areas. Here, we hypothesized that distinct patterns across a wide expanse of cortex may actually subserve a topographically organized coding of different categories of actions that represents, at a higher cognitive level and independently from the distinct motor features, the action and its final aim as a whole. Using functional magnetic resonance imaging and pattern classification approaches on the neural responses of 14 right-handed individuals passively watching short movies of hand-performed tool-mediated, transitive, and meaningful intransitive actions, we were able to discriminate with high accuracy and characterize the category-specific response patterns. Actions are distinctively coded in distributed and overlapping neural responses within an action-selective network comprising frontal, parietal, lateral occipital, and ventrotemporal regions. This functional organization, which we named action topography, subserves a higher-level and more abstract representation of finalized actions and has the capacity to provide unique representations for multiple categories of actions. Hum Brain Mapp 36:3832–3844, 2015. © 2015 Wiley Periodicals, Inc.
Giacomo Handjaras
Giulio Bernardi
Francesca Benuzzi
Paolo Nichelli
Pietro Pietrini
pietro.pietrini@imtlucca.it
Emiliano Ricciardi
emiliano.ricciardi@imtlucca.it
2015-11-10T10:57:50Z
2016-09-13T09:43:34Z
http://eprints.imtlucca.it/id/eprint/2862
2015-11-10T10:57:50Z
Spatial imagery relies on a sensory independent, though sensory sensitive, functional organization within the parietal cortex: A fMRI study of angle discrimination in sighted and congenitally blind individuals
Although vision offers distinctive information for space representation, individuals who have lacked vision since birth often show perceptual and representational skills comparable to those found in sighted individuals. However, congenitally blind individuals may show impaired spatial analysis when engaging with ‘visual’ spatial features (e.g., perspective or angle representation) or complex spatial mental abilities. In the present study, we measured behavioral and brain responses using functional magnetic resonance imaging in sighted and congenitally blind individuals during spatial imagery based on a modified version of the mental clock task (i.e., angle discrimination) and a simple recognition control condition, as conveyed across distinct sensory modalities: visual (sighted individuals only), tactile, and auditory. Blind individuals were significantly less accurate during the auditory task, but comparable to sighted individuals during the tactile task. As expected, both groups showed common neural activations in intraparietal and superior parietal regions across visual and non-visual spatial perception and imagery conditions, indicating the more abstract, sensory-independent functional organization of these cortical areas, a property that we named supramodality. At the same time, however, comparisons of brain responses and functional connectivity patterns across experimental conditions also demonstrated a functional lateralization that correlated with the distinct behavioral performance of blind and sighted individuals. Specifically, blind individuals relied more on right parietal regions, mainly in tactile and less in auditory spatial processing, whereas in sighted individuals spatial representation across modalities relied more on left parietal regions. In conclusion, intraparietal and superior parietal regions subserve supramodal spatial representations in sighted and congenitally blind individuals. Differences in their recruitment across non-visual spatial processing in sighted and blind individuals may be related to distinctive behavioral performance and/or mental strategies adopted when dealing with the same spatial representation as conveyed through different sensory modalities.
Daniela Bonino
Emiliano Ricciardi
emiliano.ricciardi@imtlucca.it
Giulio Bernardi
Lorenzo Sani
Claudio Gentili
Tomaso Vecchi
Pietro Pietrini
pietro.pietrini@imtlucca.it
2015-11-10T10:35:05Z
2016-09-13T09:44:05Z
http://eprints.imtlucca.it/id/eprint/2860
2015-11-10T10:35:05Z
Proneness to social anxiety modulates neural complexity in the absence of exposure: A resting state fMRI study using Hurst exponent
To test the hypothesis that brain activity is modulated by trait social anxiety, we measured the Hurst Exponent (HE), an index of complexity in time series, in healthy individuals at rest in the absence of any social trigger. Functional magnetic resonance imaging (fMRI) time series were recorded in 36 subjects at rest. All volunteers were healthy, without any psychiatric, medical, or neurological disorder. Subjects completed the Liebowitz Social Anxiety Scale (LSAS) and the Brief Fear of Negative Evaluation (BFNE) scale to assess social anxiety and thoughts in social contexts. We also obtained the fractional Amplitude of Low Frequency Fluctuations (fALFF) of the BOLD signal as an independent control measure for HE data. BFNE scores correlated positively with HE in the posterior cingulate/precuneus, while LSAS scores correlated positively with HE in the precuneus, the inferior parietal sulci, and the parahippocampus. Results from fALFF were highly consistent with those obtained using LSAS and BFNE to predict HE. Overall, our data indicate that spontaneous brain activity is influenced by the degree of social anxiety, on a continuum and in the absence of social stimuli. These findings suggest that social anxiety is a trait characteristic that shapes brain activity and predisposes to different reactions in social contexts.
Claudio Gentili
Nicola Vanello
Ioana Cristea
Daniel David
Emiliano Ricciardi
emiliano.ricciardi@imtlucca.it
Pietro Pietrini
pietro.pietrini@imtlucca.it
2015-11-10T10:32:22Z
2016-09-13T09:43:21Z
http://eprints.imtlucca.it/id/eprint/2859
2015-11-10T10:32:22Z
Neural and Behavioral Correlates of Extended Training during Sleep Deprivation in Humans: Evidence for Local, Task-Specific Effects
Recent work has demonstrated that behavioral manipulations targeting specific cortical areas during prolonged wakefulness lead to a region-specific homeostatic increase in theta activity (5–9 Hz), suggesting that theta waves could represent transient neuronal OFF periods (local sleep). In awake rats, the occurrence of an OFF period in a brain area relevant for behavior results in performance errors. Here we investigated the potential relationship between local sleep events and negative behavioral outcomes in humans. Volunteers participated in two prolonged wakefulness experiments (24 h), each including 12 h of practice with either a driving simulation (DS) game or a battery of tasks based on executive functions (EFs). Multiple high-density EEG recordings were obtained during each experiment, both in quiet rest conditions and during execution of two behavioral tests, a response inhibition test and a motor test, aimed at assessing changes in impulse control and visuomotor performance, respectively. In addition, fMRI examinations obtained at 12 h intervals were used to investigate changes in inter-regional connectivity. The EF experiment was associated with a reduced efficiency in impulse control, whereas DS led to a relative impairment in visuomotor control. A specific spatial and temporal correlation was observed between EEG theta waves occurring in task-related areas and deterioration of behavioral performance. The fMRI connectivity analysis indicated that performance impairment might partially depend on a breakdown in connectivity determined by a “network overload.” The present results demonstrate an association between theta waves during wakefulness and performance errors and may contribute to explaining behavioral impairments under conditions of sleep deprivation/restriction.
Giulio Bernardi
Francesca Siclari
Xiaoqian Yu
Corinna Zennig
Michele Bellesi
Emiliano Ricciardi
emiliano.ricciardi@imtlucca.it
Chiara Cirelli
Maria Felice Ghilardi
Pietro Pietrini
pietro.pietrini@imtlucca.it
Giulio Tononi
2015-11-02T14:43:10Z
2015-11-02T14:43:10Z
http://eprints.imtlucca.it/id/eprint/2803
2015-11-02T14:43:10Z
An Accurate Thermoviscoelastic Rheological Model for Ethylene Vinyl Acetate Based on Fractional Calculus
The thermoviscoelastic rheological properties of ethylene vinyl acetate (EVA) used to embed solar cells have to be accurately described to assess the deformation and the stress state of photovoltaic (PV) modules and their durability. In the present work, considering the stress as dependent on a noninteger derivative of the strain, a two-parameter model is proposed to approximate the power-law relation between the relaxation modulus and time for a given temperature level. Experimental validation with EVA uniaxial relaxation data at different constant temperatures proves the great advantage of the proposed approach over classical rheological models based on exponential solutions.
Marco Paggi
marco.paggi@imtlucca.it
Alberto Sapora
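As an illustrative aside (not the authors' implementation): a fractional-calculus relaxation model of the kind described in the abstract above implies a power-law relation E(t) = C·t^(−α) between relaxation modulus and time at a fixed temperature, so its two parameters can be identified from relaxation data by linear least squares in log-log space. A minimal pure-Python sketch with hypothetical names:

```python
import math

def fit_power_law(times, moduli):
    """Fit E(t) = C * t**(-alpha) by linear least squares in log-log space."""
    xs = [math.log(t) for t in times]
    ys = [math.log(E) for E in moduli]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    # slope of log E vs log t gives -alpha; intercept gives log C
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    alpha = -slope
    C = math.exp(my - slope * mx)
    return C, alpha
```

For noise-free synthetic data generated with C = 100 and α = 0.3, the fit recovers the parameters exactly, since the model is linear in log-log coordinates.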
2015-11-02T14:37:29Z
2017-03-27T14:28:03Z
http://eprints.imtlucca.it/id/eprint/2802
2015-11-02T14:37:29Z
Topological characterization of antireflective and hydrophobic rough surfaces: are random process theory and fractal modeling applicable?
The random process theory (RPT) has been widely applied to predict the joint probability distribution functions (PDFs) of asperity heights and curvatures of rough surfaces. A check of the predictions of RPT against the actual statistics of numerically generated random fractal surfaces and of real rough surfaces has been only partially undertaken. The present experimental and numerical study provides a critical comparison on this matter, offering insight into the capabilities and limitations of applying RPT and fractal modeling to antireflective and hydrophobic rough surfaces, two important types of textured surfaces. A multi-resolution experimental campaign using a confocal profilometer with different lenses is carried out, and comprehensive software for the statistical description of rough surfaces is developed. It is found that the topology of the analyzed textured surfaces cannot be fully described according to RPT and fractal modeling. The following complexities emerge: (i) the presence of cut-offs or bi-fractality in the power-law power-spectral density (PSD) functions; (ii) a more pronounced shift of the PSD with changing resolution than expected from fractal modeling; (iii) inaccuracy of the RPT in describing the joint PDFs of asperity heights and curvatures of textured surfaces; (iv) lack of resolution-invariance of joint PDFs of textured surfaces in the case of special surface treatments, not accounted for by fractal modeling.
Claudia Borri
claudia.borri@imtlucca.it
Marco Paggi
marco.paggi@imtlucca.it
2015-10-26T12:20:25Z
2015-10-29T14:19:05Z
http://eprints.imtlucca.it/id/eprint/2782
2015-10-26T12:20:25Z
Benefits and challenges of using smart meters for advancing residential water demand modeling and management: a review
Over the last two decades, water smart metering programs have been launched in a number of medium to large cities worldwide to monitor water consumption nearly continuously at the single-household level. The availability of data at such high spatial and temporal resolution has advanced the ability to characterize, model, and, ultimately, design user-oriented residential water demand management strategies. Research to date has focused on one or more of these aspects, but with limited integration between the specialized methodologies developed so far. This manuscript is the first comprehensive review of the literature in this quickly evolving water research domain. The paper contributes a general framework for the classification of residential water demand modeling studies, which allows revising consolidated approaches, describing emerging trends, and identifying potential future developments. In particular, the future challenges posed by growing population demands, constrained sources of water supply, and climate change impacts are expected to require increasingly integrated procedures for effectively supporting residential water demand modeling and management in several countries across the world.
Andrea Cominola
Matteo Giuliani
Dario Piga
dario.piga@imtlucca.it
Andrea Castelletti
Andrea Emilio Rizzoli
2015-10-22T13:41:13Z
2015-10-22T13:41:13Z
http://eprints.imtlucca.it/id/eprint/2779
2015-10-22T13:41:13Z
Distributed solution of stochastic optimal control problems on GPUs
Stochastic optimal control problems arise in many applications and are, in principle, large-scale, involving up to millions of decision variables. Their use in control applications is often limited by the availability of algorithms that can solve them efficiently and within the sampling time of the controlled system.
In this paper we propose a dual accelerated proximal gradient algorithm which is amenable to parallelization and demonstrate that its GPU implementation affords high speed-up values (with respect to a CPU implementation) and greatly outperforms well-established commercial optimizers such as Gurobi.
Ajay Kumar Sampathirao
Pantelis Sopasakis
pantelis.sopasakis@imtlucca.it
Alberto Bemporad
alberto.bemporad@imtlucca.it
Panagiotis Patrinos
panagiotis.patrinos@imtlucca.it
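For illustration only (not the authors' GPU implementation): the core iteration of an accelerated proximal gradient method of the kind mentioned in the abstract above can be sketched in a few lines. The sketch below applies FISTA-style acceleration to a generic smooth gradient plus a proximal operator; all names are hypothetical.

```python
def soft_threshold(v, t):
    # proximal operator of t*|x| (scalar soft-thresholding)
    return max(v - t, 0.0) if v > 0 else min(v + t, 0.0)

def fista(grad_f, L, prox, x0, iters=200):
    """Accelerated proximal gradient (FISTA-style).

    grad_f: gradient of the smooth term; L: a Lipschitz constant of grad_f;
    prox(v, step): proximal operator of the nonsmooth term.
    """
    x = y = x0
    tk = 1.0
    for _ in range(iters):
        x_new = prox(y - grad_f(y) / L, 1.0 / L)
        # Nesterov momentum update
        t_new = (1 + (1 + 4 * tk * tk) ** 0.5) / 2
        y = x_new + (tk - 1) / t_new * (x_new - x)
        x, tk = x_new, t_new
    return x
```

For example, minimizing 0.5·(x − 3)² + 0.5·|x| with `grad_f = lambda y: y - 3.0`, `L = 1.0`, and `prox = lambda v, s: soft_threshold(v, 0.5 * s)` converges to x = 2.5.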
2015-10-22T13:39:05Z
2016-05-04T10:15:53Z
http://eprints.imtlucca.it/id/eprint/2780
2015-10-22T13:39:05Z
Scenario-Based Model Predictive Operation Control of Islanded Microgrids
We propose a model predictive control (MPC) approach for the operation of islanded microgrids that takes into account the stochasticity of wind and load forecasts. In comparison to worst-case approaches, the probability distribution of the prediction is used to optimize the operation of the microgrid, leading to less conservative solutions. Suitable models for time series forecasting are derived and employed to create scenarios. These scenarios and the system measurements are used as inputs for a stochastic MPC, wherein a mixed-integer problem is solved to derive the optimal controls. In the provided case study, the stochastic MPC yields an increase in wind power generation and a decrease in conventional generation.
Christian Hans
hans@control.tu-berlin.de
Pantelis Sopasakis
pantelis.sopasakis@imtlucca.it
Alberto Bemporad
alberto.bemporad@imtlucca.it
Jörg Raisch
raisch@control.tu-berlin.de
Carsten Reincke-Collon
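As a purely illustrative sketch (the paper derives scenarios from fitted time series models; the Gaussian point-forecast assumption and all names below are hypothetical): generating wind/load scenarios around a forecast for use in a scenario-based stochastic MPC can look like this.

```python
import random

def sample_scenarios(forecast_mean, forecast_std, n_scenarios, seed=0):
    """Draw scenario trajectories around a point forecast.

    forecast_mean/forecast_std: per-step forecast and uncertainty over the horizon.
    Returns n_scenarios sampled trajectories (Gaussian perturbation assumption).
    """
    rng = random.Random(seed)  # seeded for reproducibility
    return [[rng.gauss(m, s) for m, s in zip(forecast_mean, forecast_std)]
            for _ in range(n_scenarios)]
```

Each sampled trajectory would then enter the mixed-integer MPC problem as one realization of the uncertain wind and load profiles.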
2015-10-19T09:40:53Z
2016-04-06T08:50:40Z
http://eprints.imtlucca.it/id/eprint/2776
2015-10-19T09:40:53Z
Binary and Multi-class Parkinsonian Disorders Classification Using Support Vector Machines
This paper presents a method for automated Parkinsonian disorders classification using Support Vector Machines (SVMs). Magnetic resonance quantitative markers are used as features to train SVMs with the aim of automatically diagnosing patients with different Parkinsonian disorders. Binary and multi-class classification problems are investigated with the aim of automatically distinguishing subjects with different forms of the disorders. A ranking feature selection method is also used as a preprocessing step in order to assess the significance of the different features in diagnosing Parkinsonian disorders. In particular, it turns out that the features selected as the most meaningful reflect those the clinicians regard as the most important markers in the diagnosis of these disorders. The results achieved in the classification phase are promising: in the two multi-class classification problems investigated, average accuracies of 81% and 90% are obtained, while in the binary scenarios taken into consideration the accuracy is never less than 88%.
Rita Morisi
rita.morisi@imtlucca.it
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Nico Lanconelli
Stefano Zanigni
David Neil Manners
Claudia Testa
Stefania Evangelisti
Laura Ludovica Gramegna
Claudio Bianchini
Pietro Cortelli
Caterina Tonon
Raffaele Lodi
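The abstract above mentions a ranking feature selection step without naming the specific criterion, so the following is only one common choice, sketched for illustration (Fisher score for binary labels; all names are hypothetical, and this is not the paper's code):

```python
def fisher_scores(X, y):
    """Rank features for binary labels by Fisher score: (m1 - m0)**2 / (v0 + v1)."""
    n_features = len(X[0])
    scores = []
    for j in range(n_features):
        g0 = [row[j] for row, lab in zip(X, y) if lab == 0]
        g1 = [row[j] for row, lab in zip(X, y) if lab == 1]
        m0 = sum(g0) / len(g0)
        m1 = sum(g1) / len(g1)
        v0 = sum((v - m0) ** 2 for v in g0) / len(g0)
        v1 = sum((v - m1) ** 2 for v in g1) / len(g1)
        # small epsilon guards against zero within-class variance
        scores.append((m1 - m0) ** 2 / (v0 + v1 + 1e-12))
    ranking = sorted(range(n_features), key=lambda j: -scores[j])
    return ranking, scores
```

Features with large between-class mean separation relative to within-class spread rank first; the top-ranked subset would then feed the SVM training stage.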
2015-10-19T09:31:34Z
2015-10-19T09:31:34Z
http://eprints.imtlucca.it/id/eprint/2775
2015-10-19T09:31:34Z
Semi-automated scar detection in delayed enhanced cardiac magnetic resonance images
Late enhancement cardiac magnetic resonance imaging (MRI) can precisely delineate myocardial scars. We present a semi-automated method for detecting scars in cardiac MRI. This method has the potential to improve routine clinical practice, since quantification is not currently offered due to time constraints. A first segmentation step was developed for extracting the target regions for potential scar and determining pre-candidate objects. Pattern recognition methods are then applied to the segmented images in order to detect the position of the myocardial scar. The database of late gadolinium enhancement (LE) cardiac MR images consists of 111 blocks of images acquired from 63 patients at the University Hospital Southampton NHS Foundation Trust (UK). At least one scar was present for each patient, and all the scars were manually annotated by an expert. A group of images (around one third of the entire set) was used for training the system, which was subsequently tested on all the remaining images. Four different classifiers were trained (Support Vector Machine (SVM), k-nearest neighbor (KNN), Bayesian, and feed-forward neural network) and their performance was evaluated using Free-response Receiver Operating Characteristic (FROC) analysis. Feature selection was implemented for analyzing the importance of the various features. The proposed segmentation method allowed the region affected by the scar to be extracted correctly in 96% of the blocks of images. The SVM was shown to be the best classifier for our task, and our system reached an overall sensitivity of 80% with fewer than 7 false positives per patient. The method we present provides an effective tool for detection of scars on cardiac MRI. This may be of value in clinical practice by permitting routine reporting of scar quantification.
Rita Morisi
rita.morisi@imtlucca.it
Bruno Donini
Nico Lanconelli
James Rosengarden
John Morgan
Stephen Harden
Nick Curzen
2015-10-19T09:22:53Z
2015-10-19T09:22:53Z
http://eprints.imtlucca.it/id/eprint/2774
2015-10-19T09:22:53Z
Sparse Solutions to the Average Consensus Problem via Various Regularizations of the Fastest Mixing Markov-Chain Problem
In the consensus problem on multi-agent systems, in which the states of the agents represent opinions, the agents aim at reaching a common opinion (or consensus state) through local exchange of information. An important design problem is to choose the degree of interconnection of the subsystems to achieve a good trade-off between a small number of interconnections and fast convergence to the consensus state, which is the average of the initial opinions under mild conditions. This paper addresses this problem through l₁-norm and l₀-“pseudo-norm” regularized versions of the well-known Fastest Mixing Markov-Chain (FMMC) problem. We show that such versions can be interpreted as robust forms of the FMMC problem and provide results to guide the choice of the regularization parameter.
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Rita Morisi
rita.morisi@imtlucca.it
Alberto Bemporad
alberto.bemporad@imtlucca.it
2015-10-13T08:07:33Z
2015-10-13T08:08:43Z
http://eprints.imtlucca.it/id/eprint/2771
2015-10-13T08:07:33Z
Unsupervised Myocardial Segmentation for Cardiac MRI
Though unsupervised segmentation was a de facto standard for cardiac MRI segmentation early on, the recent cardiac MRI segmentation literature has favored fully supervised techniques such as dictionary learning and atlas-based methods. But the benefits of unsupervised techniques, e.g., no need for large amounts of training data and better potential for handling variability in anatomy and image contrast, are more evident with emerging cardiac MR modalities. For example, CP-BOLD is a new MRI technique that has been shown to detect ischemia without any contrast at stress but also at rest conditions. Although CP-BOLD looks similar to standard CINE, changes in myocardial intensity patterns and shape across cardiac phases (due to the heart’s motion, the BOLD effect, and artifacts) affect the underlying mechanisms of fully supervised segmentation techniques, resulting in a significant drop in segmentation accuracy. In this paper, we present a fully unsupervised technique for segmenting myocardium from the background in both standard CINE MR and CP-BOLD MR. We combine appearance with motion information (obtained via optical flow) in a dictionary learning framework to sparsely represent important features in a low-dimensional space and separate myocardium from background accordingly. Our fully automated method learns background-only models, and a one-class classifier provides the myocardial segmentation. The advantages of the proposed technique are demonstrated on a dataset containing CP-BOLD MR and standard CINE MR image sequences acquired in baseline and ischemic condition across 10 canine subjects, where our method outperforms state-of-the-art supervised segmentation techniques in CP-BOLD MR and performs on par for standard CINE MR.
Anirban Mukhopadhyay
Ilkay Oksuz
ilkay.oksuz@imtlucca.it
Marco Bevilacqua
Rohan Dharmakumar
Sotirios A. Tsaftaris
2015-10-13T08:04:03Z
2015-10-13T08:04:03Z
http://eprints.imtlucca.it/id/eprint/2770
2015-10-13T08:04:03Z
Dictionary Learning Based Image Descriptor for Myocardial Registration of CP-BOLD MR
Cardiac Phase-resolved Blood Oxygen-Level-Dependent (CP-BOLD) MRI is a new contrast agent- and stress-free imaging technique for the assessment of myocardial ischemia at rest. Precise registration among the cardiac phases in this cine-type acquisition is essential for automating the analysis of images from this technique, since it can potentially lead to better specificity of ischemia detection. However, inconsistency in myocardial intensity patterns and changes in myocardial shape due to the heart’s motion lead to low registration performance for state-of-the-art methods. This low accuracy can be explained by the lack of distinguishable features in CP-BOLD and inappropriate metric definitions in current intensity-based registration frameworks. In this paper, sparse representations, defined by a discriminative dictionary learning approach for source and target images, are used to improve myocardial registration. This method combines appearance with Gabor and HOG features in a dictionary learning framework to sparsely represent features in a low-dimensional space. The sum of absolute differences of these distinctive sparse representations is used to define a similarity term in the registration framework. The proposed approach is validated on a dataset of CP-BOLD MR and standard CINE MR acquired in baseline and ischemic condition across 10 canines.
Ilkay Oksuz
ilkay.oksuz@imtlucca.it
Anirban Mukhopadhyay
Marco Bevilacqua
Rohan Dharmakumar
Sotirios A. Tsaftaris
2015-10-09T11:09:10Z
2015-10-09T11:09:10Z
http://eprints.imtlucca.it/id/eprint/2768
2015-10-09T11:09:10Z
Effect of BOLD Contrast on Myocardial Registration
Cardiac phase-resolved Blood-Oxygen-Level-Dependent (CP-BOLD) MRI is a new approach for detecting ischemia at rest. Currently, disease assessment relies on segmental analysis and uses only a few images in the phase-resolved acquisition. It is expected that using all phases can permit pixel-level characterization of CP-BOLD MRI. In this study, state-of-the-art image registration techniques are evaluated on cardiac BOLD MRI data for the first time. The results show that cardiac phase-dependent variations in myocardial BOLD contrast in CP-BOLD images create a statistically significant decrease in accuracy compared to standard CINE MR images acquired under conditions of health and myocardial ischemia.
Ilkay Oksuz
ilkay.oksuz@imtlucca.it
Anirban Mukhopadhyay
Marco Bevilacqua
Hsin-Jung Yang
Rohan Dharmakumar
Sotirios A. Tsaftaris
2015-10-09T10:30:59Z
2015-10-09T11:10:32Z
http://eprints.imtlucca.it/id/eprint/2766
2015-10-09T10:30:59Z
Data Driven Feature Learning for Representation of Myocardial BOLD MR Images
Cardiac phase-dependent variations of myocardial signal intensities in Cardiac Phase-resolved Blood-Oxygen-Level-Dependent (CP-BOLD) MRI can be exploited for the identification of ischemic territories. This technique requires segmentation to isolate the myocardium. However, spatio-temporal variations of BOLD contrast prove challenging for existing automated myocardial segmentation techniques, because they were developed for acquisitions where contrast variations in the myocardium are minimal. Appropriate feature learning mechanisms are necessary to best represent appearance and texture in CP-BOLD data. Here we propose and validate a feature learning technique based on a multiscale dictionary model that learns to sparsely represent effective patterns under healthy and ischemic conditions.
Anirban Mukhopadhyay
Marco Bevilacqua
Ilkay Oksuz
ilkay.oksuz@imtlucca.it
Rohan Dharmakumar
Sotirios A. Tsaftaris
2015-10-08T08:06:56Z
2017-03-27T14:23:19Z
http://eprints.imtlucca.it/id/eprint/2764
2015-10-08T08:06:56Z
Evolution of the free volume between rough surfaces in contact
The free volume enclosed between rough surfaces in contact governs the fluid/gas transport properties across networks of cracks and the leakage/percolation phenomena in seals. In this study, a fundamental insight into the evolution of the free volume depending on the mean plane separation, on the real contact area, and on the applied pressure is gained in reference to fractal surfaces whose contact response is solved using the boundary element method. Particular attention is paid to the effect of the surface fractal dimension and of the surface resolution on the predicted results. The free volume domains corresponding to different threshold levels are found to display fractal spatial distributions, and bounds to their fractal dimensions are theoretically derived. A compact formula based on the probability distribution function of the free volumes is proposed to interpret the numerically observed trends.
Marco Paggi
marco.paggi@imtlucca.it
Qi-Chang He
2015-10-08T08:03:17Z
2015-10-08T08:04:30Z
http://eprints.imtlucca.it/id/eprint/2763
2015-10-08T08:03:17Z
Optimization algorithms for the solution of the frictionless normal contact between rough surfaces
This paper revisits the fundamental equations for the solution of the frictionless unilateral normal contact problem between a rough rigid surface and a linear elastic half-plane using the boundary element method (BEM). After recasting the resulting Linear Complementarity Problem (LCP) as a convex quadratic program (QP) with nonnegative constraints, different optimization algorithms are compared for its solution: (i) a Greedy method, based on different solvers for the unconstrained linear system (Conjugate Gradient CG, Gauss–Seidel, Cholesky factorization), (ii) a constrained CG algorithm, (iii) the Alternating Direction Method of Multipliers (ADMM), and (iv) the Non-Negative Least Squares (NNLS) algorithm, possibly warm-started by accelerated gradient projection steps or taking advantage of a loading history. The latter method is two orders of magnitude faster than the Greedy CG method and one order of magnitude faster than the constrained CG algorithm. Finally, we propose another type of warm start based on a refined criterion for the identification of the initial trial contact domain that can be used in conjunction with all the previous optimization algorithms. This method, called cascade multi-resolution (CMR), takes advantage of physical considerations regarding the scaling of the contact predictions by changing the surface resolution. The method is very efficient and accurate when applied to real or numerically generated rough surfaces, provided that their power spectral density function is of power-law type, as in the case of self-affine fractal surfaces.
Alberto Bemporad
alberto.bemporad@imtlucca.it
Marco Paggi
marco.paggi@imtlucca.it
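As illustration only (not the paper's solvers): the contact problem above reduces to a convex QP with nonnegativity constraints, minimize 0.5·xᵀQx − bᵀx subject to x ≥ 0, and the simplest member of the gradient projection family mentioned in the abstract can be sketched as follows. All names are hypothetical.

```python
def projected_gradient_qp(Q, b, iters=500, step=None):
    """Minimize 0.5 x'Qx - b'x subject to x >= 0 by gradient projection."""
    n = len(b)
    if step is None:
        # crude step size: inverse of a row-sum bound on the largest eigenvalue of Q
        step = 1.0 / max(sum(abs(v) for v in row) for row in Q)
    x = [0.0] * n
    for _ in range(iters):
        grad = [sum(Q[i][j] * x[j] for j in range(n)) - b[i] for i in range(n)]
        # gradient step followed by projection onto the nonnegative orthant
        x = [max(0.0, x[i] - step * grad[i]) for i in range(n)]
    return x
```

In the contact setting, Q would be the BEM influence matrix and the components of x the nonnegative contact pressures; the accelerated and warm-started variants studied in the paper build on this basic iteration.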
2015-10-08T08:01:12Z
2015-10-08T08:02:01Z
http://eprints.imtlucca.it/id/eprint/2762
2015-10-08T08:01:12Z
An anisotropic large displacement cohesive zone model for fibrillar and crazing interfaces
A new cohesive zone model is proposed to describe fracture of interfaces with a microstructure made of fibrils with statistically distributed in-plane and out-of-plane orientations. The elementary force–displacement relation of each fibril is considered to obey the peeling theory of a tape, although other refined constitutive relations could be invoked for the adhesive constitutive response without loss of generality. The proposed consistent 2D and 3D interface finite element formulations for large displacements account for both the mechanical and the geometrical tangent stiffness matrices, required for implicit solution schemes. After a preliminary discussion on model parameter identification, it is shown that tailoring the spatial density of fibrils at different orientations can be a way to realize innovative interfaces enhancing adhesion or decohesion, depending on the need. For instance, it is possible to realize microstructured adhesives that facilitate debonding of the glass cover in photovoltaic modules to simplify recycling. Moreover, the use of probability distribution functions describing the density of fibrils at different orientations is a very effective approach for modeling the anisotropy in the mechanical bonding between paper tissues and for simulating the complex process of crazing in amorphous polymers.
Marco Paggi
marco.paggi@imtlucca.it
José Reinoso
2015-10-08T07:55:10Z
2015-10-08T08:03:01Z
http://eprints.imtlucca.it/id/eprint/2761
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2761
2015-10-08T07:55:10Z
Numerical modelling and validation of thermally-induced spalling
In order to reduce the silicon consumption in the production of crystalline silicon solar cells, the improvement of sawing techniques or the use of a kerf-less process are possible solutions. This study focuses on a particular kerf-less technique based on thermally-induced spalling of thin silicon layers joined to aluminum. Via a controlled temperature variation we demonstrate that it is possible to drive an initially sharp crack, introduced by laser, into the silicon substrate and obtain the detachment of ultra-thin silicon layers. A numerical approach based on the finite element method (FEM) and Linear Elastic Fracture Mechanics (LEFM) is herein proposed to compute the Stress Intensity Factors (SIFs) that characterize the stress field at the crack tip and predict crack propagation of an initial notch, depending on the geometry of the specimen and on the boundary conditions. We propose a parametric study to evaluate the dependence of the crack path on the following parameters: (i) the distance between the notch and the aluminum-silicon interface, (ii) the thickness of the stressor (aluminum) layer, and (iii) the applied load. The results for the cooling process analyzed here show that ΔT > 43 K and a ratio λ=0.65 between the thickness of the stressor layer and the distance of the initial notch from the interface are suitable values to achieve a steady-state propagation in the case of a ratio λ0=0.115 between the in-plane thickness of the silicon substrate and the aluminum thickness, a value typically used in applications.
Irene Berardone
Sarah Kajari-Schröder
R Niepelt
J Hensen
V Steckenreiter
Marco Paggi
marco.paggi@imtlucca.it
2015-09-16T11:12:59Z
2015-09-16T11:35:57Z
http://eprints.imtlucca.it/id/eprint/2748
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2748
2015-09-16T11:12:59Z
The significance of image compression in plant phenotyping applications
We are currently witnessing increasingly high throughput in image-based plant phenotyping experiments. The majority of imaging data are collected using complex automated procedures and are then post-processed to extract phenotyping-related information. In this article, we show that the image compression used in such procedures may compromise phenotyping results and that this needs to be taken into account. We use three illuminating proof-of-concept experiments that demonstrate that compression (especially in the most common lossy JPEG form) affects measurements of plant traits and that the errors introduced can be high. We also systematically explore how compression affects measurement fidelity, quantified as effects on image quality, as well as errors in extracted plant visual traits. To do so, we evaluate a variety of image-based phenotyping scenarios, including size and colour of shoots, leaf and root growth. To show that even visual impressions can be used to assess compression effects, we use root system images as examples. Overall, we find that compression has a considerable effect on several types of analyses (whether visual or quantitative) and that proper care is necessary to ensure that this choice does not affect biological findings. In order to avoid or at least minimise introduced measurement errors, for each scenario, we derive recommendations and provide guidelines on how to identify suitable compression options in practice. We also find that certain compression choices can offer beneficial returns in terms of reducing the amount of data storage without compromising phenotyping results. This may enable even higher throughput experiments in the future.
Massimo Minervini
massimo.minervini@imtlucca.it
Hanno Scharr
Sotirios A. Tsaftaris
sotirios.tsaftaris@imtlucca.it
2015-09-16T11:03:23Z
2015-09-16T11:03:23Z
http://eprints.imtlucca.it/id/eprint/2746
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2746
2015-09-16T11:03:23Z
Data-driven feature learning for myocardial segmentation of CP-BOLD MRI
Cardiac Phase-resolved Blood Oxygen-Level-Dependent (CP-BOLD) MR is capable of diagnosing ongoing ischemia by detecting changes in myocardial intensity patterns at rest, without any contrast or stress agents. Visualizing and detecting these changes require significant post-processing, including myocardial segmentation for isolating the myocardium. However, changes in myocardial intensity pattern and myocardial shape due to the heart's motion challenge automated standard CINE MR myocardial segmentation techniques, resulting in a significant drop in segmentation accuracy. We hypothesize that the main reason behind this phenomenon is the lack of discernible features. In this paper, a multiscale discriminative dictionary learning approach is proposed for supervised learning and sparse representation of the myocardium, to improve myocardial feature selection. The technique is validated on a challenging dataset of CP-BOLD MR and standard CINE MR acquired in baseline and ischemic conditions across 10 canine subjects. The proposed method significantly outperforms standard cardiac segmentation techniques, including segmentation via registration, level sets and supervised methods for myocardial segmentation.
Anirban Mukhopadhyay
anirban.mukhopadhyay@imtlucca.it
Ilkay Oksuz
ilkay.oksuz@imtlucca.it
Marco Bevilacqua
marco.bevilacqua@imtlucca.it
Sotirios A. Tsaftaris
sotirios.tsaftaris@imtlucca.it
2015-09-03T14:31:15Z
2015-11-02T09:40:23Z
http://eprints.imtlucca.it/id/eprint/2743
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2743
2015-09-03T14:31:15Z
Combining behavioural types with security analysis
Today’s software systems are highly distributed and interconnected, and they increasingly rely on communication to achieve their goals; due to their societal importance, security and trustworthiness are crucial aspects for the correctness of these systems. Behavioural types, which extend data types by describing also the structured behaviour of programs, are a widely studied approach to the enforcement of correctness properties in communicating systems. This paper offers a unified overview of proposals based on behavioural types which are aimed at the analysis of security properties.
Massimo Bartoletti
Ilaria Castellani
Pierre-Malo Deniélou
Mariangiola Dezani-Ciancaglini
Silvia Ghilezan
Jovanka Pantović
Jorge A. Pérez
Peter Thiemann
Bernardo Toninho
Hugo Torres Vieira
hugo.torresvieira@imtlucca.it
2015-09-03T08:15:31Z
2016-05-06T14:07:46Z
http://eprints.imtlucca.it/id/eprint/2740
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2740
2015-09-03T08:15:31Z
Computationally efficient data and application driven color transforms for the compression and enhancement of images and video
An important step in color image or video coding and enhancement is the linear transformation of input (typically RGB) data into a color space more suitable for compression, subsequent analysis, or visualization. The choice of this transform becomes even more critical when operating in distributed and low-computational power environments, such as visual sensor networks or remote sensing. Data-driven transforms are rarely used due to increased complexity. Most schemes adopt fixed transforms to decorrelate the color channels which are then processed independently. Here we propose two frameworks to find appropriate data-driven transforms in different settings. The first, named approximate Karhunen-Loève Transform (aKLT), performs comparably to the KLT at a fraction of the computational complexity, thus favoring adoption on sensors and resource-constrained devices. Furthermore, we consider an application-aware setting in which an expert system (e.g., a classifier) analyzes imaging data at the receiver's end. In a compression context, distortion may jeopardize the accuracy of the analysis. Since the KLT is not optimal in this setting, we investigate formulations that maximize post-compression expert system performance. Relaxing decorrelation and energy compactness constraints, a second transform can be obtained offline with supervised learning methods. Finally, we propose transforms that accommodate both constraints, and are found using regularized optimization.
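For context, the exact KLT that the aKLT approximates amounts to an eigendecomposition of the channel covariance matrix (an illustrative sketch on synthetic RGB samples, not the paper's reduced-complexity algorithm):

```python
# Hedged sketch: the exact Karhunen-Loeve Transform (KLT) of RGB data,
# i.e. the eigenbasis of the 3x3 channel covariance matrix.
import numpy as np

rng = np.random.default_rng(1)
pixels = rng.random((1000, 3))             # N x 3 synthetic "RGB" samples
centered = pixels - pixels.mean(axis=0)

cov = np.cov(centered, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)     # eigenvalues in ascending order
T = eigvecs[:, ::-1].T                     # rows = principal axes, descending

decorrelated = centered @ T.T
cov_out = np.cov(decorrelated, rowvar=False)
# The transformed channels are (numerically) decorrelated.
assert np.allclose(cov_out, np.diag(np.diag(cov_out)), atol=1e-10)
```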
Massimo Minervini
massimo.minervini@imtlucca.it
Cristian Rusu
Sotirios A. Tsaftaris
sotirios.tsaftaris@imtlucca.it
2015-07-24T12:54:54Z
2015-07-24T12:54:54Z
http://eprints.imtlucca.it/id/eprint/2731
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2731
2015-07-24T12:54:54Z
A unified framework for differential aggregations in Markovian process algebra
Fluid semantics for Markovian process algebra have recently emerged as a computationally attractive approximate way of reasoning about the behaviour of stochastic models of large-scale systems. This interpretation is particularly convenient when sequential components characterised by small local state spaces are present in many independent copies. While the traditional Markovian interpretation causes state-space explosion, fluid semantics is independent from the multiplicities of the sequential components present in the model, just associating a single ordinary differential equation (ODE) with each local state. In this paper we analyse the case of a process algebra model inducing a large ODE system. Previous work, known as exact fluid lumpability, requires two symmetries: ODE aggregation is possible for processes that i) are isomorphic and that ii) are present with the same multiplicities. We first relax the latter requirement by introducing the notion of ordinary fluid lumpability, which yields an ODE system where the sum of the aggregated variables is preserved exactly. Then, we consider approximate variants of both notions of lumpability which make nearby processes symmetric after a perturbation of their parameters. We prove that small perturbations yield nearby differential trajectories. We carry out our study in the context of a process algebra that unifies two synchronisation semantics that are well studied in the literature, useful for the modelling of computer systems and chemical networks, respectively. In both cases, we provide numerical evidence which shows that, in practice, many heterogeneous processes can be aggregated with negligible errors.
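The core idea of aggregating symmetric ODE variables can be illustrated on a toy system (our own minimal sketch, not the paper's process-algebra machinery): m identical components obeying x_i' = -k*x_i can be lumped into a single equation for their sum, which is preserved exactly.

```python
# Hedged toy sketch of ODE aggregation: five identical components
# x_i' = -k*x_i are lumped into one ODE for z = sum(x_i).
import numpy as np
from scipy.integrate import solve_ivp

k = 0.7
x0 = np.array([1.0, 2.0, 0.5, 3.0, 1.5])  # heterogeneous initial conditions

full = solve_ivp(lambda t, x: -k * x, (0.0, 4.0), x0,
                 rtol=1e-10, atol=1e-12)
lumped = solve_ivp(lambda t, z: -k * z, (0.0, 4.0), [x0.sum()],
                   rtol=1e-10, atol=1e-12)

# The single aggregated variable reproduces the sum of all trajectories.
assert np.isclose(full.y[:, -1].sum(), lumped.y[0, -1], atol=1e-6)
```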
Max Tschaikowski
max.tschaikowski@imtlucca.it
Mirco Tribastone
mirco.tribastone@imtlucca.it
2015-07-22T10:28:33Z
2015-09-22T08:14:50Z
http://eprints.imtlucca.it/id/eprint/2729
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2729
2015-07-22T10:28:33Z
On Expressiveness and Behavioural Theory of Attribute-based Communication
Attribute-based communication is an interesting alternative to broadcast and binary communication when providing abstract models for so-called Collective Adaptive Systems, which consist of a large number of interacting components that dynamically adjust and combine their behavior to achieve specific goals. A basic process calculus, named AbC, is introduced whose primary primitive for interaction is attribute-based communication. An AbC system consists of a set of parallel components, each of which is equipped with a set of attributes. Communication takes place in an implicit multicast fashion, and interactions among components are dynamically established by taking into account "connections" as determined by predicates over the attributes exposed by components. First, the syntax and the semantics of AbC are presented; then the expressiveness and effectiveness of the calculus are demonstrated both in terms of the ability to model scenarios featuring collaboration, reconfiguration, and adaptation, and of the possibility of encoding a process calculus for broadcasting channel-based communication and other communication paradigms. Behavioral equivalences for AbC are introduced for establishing formal relationships between different descriptions of the same system.
Yehia Moustafa Abd Alrahman
yehia.abdalrahman@imtlucca.it
Rocco De Nicola
r.denicola@imtlucca.it
Michele Loreti
2015-06-29T14:12:27Z
2015-06-29T14:12:27Z
http://eprints.imtlucca.it/id/eprint/2724
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2724
2015-06-29T14:12:27Z
Auxetic anti-tetrachiral materials: equivalent elastic properties and frequency band-gaps
A comprehensive characterization of the novel class of anti-tetrachiral cellular solids, both considering the static and the dynamic response, is provided in the paper. The heterogeneous material is characterized by a periodic microstructure made of equi-spaced rings each interconnected by four ligaments. In the most general case, rings and ligaments are surrounded by a softer matrix and the rings can be filled by a different material. First, the first order linear elastic homogenized constitutive response is estimated resorting to two different microstructural models: a discrete model, in which the ligaments are modeled as beams and the presence of the matrix is neglected and the equivalent elastic properties are evaluated through a simplified analytical approach, and a more detailed continuous model, where the actual properties of matrix, ligaments and rings, varying in the 2D domain, are considered and the first order computational homogenization is adopted. Special attention is given to the dependence of the 2D overall Cauchy-type elastic constants on the mechanical and geometrical parameters characterizing the microstructure. The results, indeed, show the existence of large variations in the linear elastic constants and degree of anisotropy. A comparison with available experimental results confirms the validity of the analytical and numerical approaches adopted. Finally, the rigorous Floquet–Bloch approach is applied to the periodic cell of the cellular solid to evaluate the dispersion of propagation waves along the orthotropic axes in the framework of elasticity and to detect band gaps characterizing the material. A numerical approach, based on the first order computational homogenization, is also adopted and the rigorous and approximate solutions are compared.
Andrea Bacigalupo
andrea.bacigalupo@imtlucca.it
Maria Laura De Bellis
2015-06-23T12:59:23Z
2017-03-27T14:19:42Z
http://eprints.imtlucca.it/id/eprint/2709
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2709
2015-06-23T12:59:23Z
A power and energy procedure in operating photovoltaic systems to quantify the losses according to the causes
Recently, following the high feed-in tariffs in Italy, retroactive cuts in energy payments have generated economic concern about several grid-connected photovoltaic (PV) systems with poor performance. The procedure proposed in this paper suggests some rules for determining the sources of losses and thus minimizing poor performance in energy production. The on-site field inspection, the identification of the irradiance sensors as close as possible to the PV system, and the assessment of energy production are three preliminary steps which do not require experimental tests. The fourth step is to test the arrays of PV modules on-site. The fifth step is to test only the PV strings or single modules belonging to arrays with poor performance (e.g., I–V mismatch). The sixth step is to use the thermographic camera and electroluminescence at the PV-module level. The seventh step is to monitor the DC racks of each inverter, or the individual inverter if equipped with only one Maximum Power Point Tracker (MPPT). Experimental results on real PV systems show the effectiveness of this procedure.
F. Spertino
filippo.spertino@polito.it
A. Ciocia
P. Di Leo
R. Tommasini
Irene Berardone
irene.berardone@polito.it
Mauro Corrado
mauro.corrado@polito.it
Andrea Infuso
andrea.infuso@polito.it
Marco Paggi
marco.paggi@imtlucca.it
2015-06-16T15:22:40Z
2015-06-16T15:22:40Z
http://eprints.imtlucca.it/id/eprint/2708
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2708
2015-06-16T15:22:40Z
Image Analysis: The New Bottleneck in Plant Phenotyping [Applications Corner]
Plant phenotyping is the identification of effects on the phenotype (i.e., the plant appearance and performance) as a result of genotype differences (i.e., differences in the genetic code) and the environmental conditions to which a plant has been exposed [1]–[3]. According to the Food and Agriculture Organization of the United Nations, large-scale experiments in plant phenotyping are a key factor in meeting the agricultural needs of the future to feed the world and provide biomass for energy, while using less water, land, and fertilizer under a constantly evolving environment due to climate change. Working on model plants (such as Arabidopsis), combined with remarkable advances in genotyping, has revolutionized our understanding of biology but has accelerated the need for precision and automation in phenotyping, favoring approaches that provide quantifiable phenotypic information that could be better used to link and find associations in the genotype [4]. While early on the collection of phenotypes was manual, noninvasive imaging-based methods are now increasingly being utilized [5], [6]. However, the rate at which phenotypes are extracted in the field or in the lab is not matching the speed of genotyping and is creating a bottleneck.
Massimo Minervini
massimo.minervini@imtlucca.it
Hanno Scharr
Sotirios A. Tsaftaris
sotirios.tsaftaris@imtlucca.it
2015-06-15T08:33:16Z
2015-06-15T08:33:16Z
http://eprints.imtlucca.it/id/eprint/2703
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2703
2015-06-15T08:33:16Z
Revisiting bisimilarity and its modal logic for nondeterministic and probabilistic processes
The logic PML is a probabilistic version of Hennessy–Milner logic introduced by Larsen and Skou to characterize bisimilarity over probabilistic processes without internal nondeterminism. In this paper, two alternative interpretations of PML over nondeterministic and probabilistic processes as models are considered, and two new bisimulation-based equivalences that are in full agreement with those interpretations are provided. The new equivalences include as coarsest congruences the two bisimilarities for nondeterministic and probabilistic processes proposed by Segala and Lynch. The latter equivalences are instead known to agree with two versions of Hennessy–Milner logic extended with an additional probabilistic operator interpreted over state distributions in place of individual states. The new interpretations of PML and the corresponding new bisimilarities are thus the first ones to offer a uniform framework for reasoning on processes that are purely nondeterministic or reactive probabilistic or that mix nondeterminism and probability in an alternating/nonalternating way.
Marco Bernardo
Rocco De Nicola
r.denicola@imtlucca.it
Michele Loreti
2015-05-29T08:09:32Z
2015-05-29T08:09:32Z
http://eprints.imtlucca.it/id/eprint/2700
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2700
2015-05-29T08:09:32Z
Dictionary learning for unsupervised identification of ischemic territories in CP-BOLD Cardiac MRI at rest
Marco Bevilacqua
marco.bevilacqua@imtlucca.it
Cristian Rusu
Rohan Dharmakumar
Sotirios A. Tsaftaris
sotirios.tsaftaris@imtlucca.it
2015-05-29T08:04:45Z
2015-05-29T08:04:45Z
http://eprints.imtlucca.it/id/eprint/2699
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2699
2015-05-29T08:04:45Z
Whole-heart, free-breathing, three-dimensional myocardial BOLD MRI at 3T with simultaneous 13N-ammonia PET in canines
Hsin-Jung Yang
Damini Dey
Jane Sykes
John Butler
Avinash Kali
Ivan Cokic
Behzad Sharif
Debiao Li
Sotirios A. Tsaftaris
sotirios.tsaftaris@imtlucca.it
Piotr Slomka
Frank S. Prato
Rohan Dharmakumar
2015-05-19T09:35:54Z
2015-10-28T14:47:41Z
http://eprints.imtlucca.it/id/eprint/2681
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2681
2015-05-19T09:35:54Z
Model Predictive Control for Linear Impulsive Systems
Linear impulsive control systems have been extensively studied with respect to their equilibrium points, which, in most cases, coincide with the origin. However, the trajectory of an impulsive system cannot be stabilized to arbitrary desired points, hindering the use of such systems in a great many applications. In this paper, we study the equilibrium of linear impulsive systems with respect to target-sets. We properly extend the notion of invariance and design stabilizing model predictive controllers (MPC). Finally, we apply the proposed methodology to control the intravenous bolus administration of lithium.
Pantelis Sopasakis
pantelis.sopasakis@imtlucca.it
Panagiotis Patrinos
panagiotis.patrinos@imtlucca.it
Haralambos Sarimveis
Alberto Bemporad
alberto.bemporad@imtlucca.it
2015-05-12T10:23:59Z
2015-05-12T10:23:59Z
http://eprints.imtlucca.it/id/eprint/2673
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2673
2015-05-12T10:23:59Z
Sloshing-aware attitude control of impulsively actuated spacecraft
Upper stages of launchers sometimes drift, with the main engine switched off, for a long period of time until re-ignition and subsequent payload release. During this period a large amount of propellant is still in the tank, and the motion of the fluid (sloshing) has an impact on the attitude of the stage. For this flight phase the classical spring/damper or pendulum models cannot be applied; a more elaborate sloshing-aware model involving a time-varying inertia tensor is described in the paper. Using principles of hybrid systems theory we model the minimum impulse bit (MIB) effect, that is, the minimum torque that can be exerted by the thrusters. We design a hybrid model predictive control scheme for the attitude control of a launcher during its long coasting period, aiming at minimising the actuation count of the thrusters.
Pantelis Sopasakis
pantelis.sopasakis@imtlucca.it
Daniele Bernardini
daniele.bernardini@imtlucca.it
Hans Strauch
Samir Bennani
Alberto Bemporad
alberto.bemporad@imtlucca.it
2015-05-04T15:19:39Z
2015-05-04T15:36:21Z
http://eprints.imtlucca.it/id/eprint/2664
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2664
2015-05-04T15:19:39Z
Causal-Consistent Reversibility in a Tuple-Based Language
Causal-consistent reversibility is a natural way of undoing concurrent computations. We study causal-consistent reversibility in the context of µKlaim, a formal coordination language based on distributed tuple spaces. We consider both uncontrolled reversibility, suitable to study the basic properties of the reversibility mechanism, and controlled reversibility based on a rollback operator, more suitable for programming applications. The causality structure of the language, and thus the definition of its reversible semantics, differs from all the reversible languages in the literature because of its generative communication paradigm. In particular, the reversible behavior of µKlaim read primitive, reading a tuple without consuming it, cannot be matched using channel-based communication. We illustrate the reversible extensions of µKlaim on a simple, but realistic, application scenario.
Elena Giachino
Ivan Lanese
Claudio Antares Mezzina
claudio.mezzina@imtlucca.it
Francesco Tiezzi
2015-04-07T14:00:07Z
2015-04-07T14:00:07Z
http://eprints.imtlucca.it/id/eprint/2657
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2657
2015-04-07T14:00:07Z
A dual gradient-projection algorithm for model predictive control in fixed-point arithmetic
Although linear Model Predictive Control has gained increasing popularity for controlling dynamical systems subject to constraints, the main barrier that prevents its widespread use in embedded applications is the need to solve a Quadratic Program (QP) in real-time. This paper proposes a dual gradient projection (DGP) algorithm specifically tailored for implementation on fixed-point hardware. A detailed convergence rate analysis is presented in the presence of round-off errors due to fixed-point arithmetic. Based on these results, concrete guidelines are provided for selecting the minimum number of fractional and integer bits that guarantee convergence to a suboptimal solution within a pre-specified tolerance, therefore reducing the cost and power consumption of the hardware device.
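The basic DGP iteration can be sketched as follows, in plain floating point rather than the fixed-point arithmetic analyzed in the paper (the QP data are an illustrative toy problem, not from the paper):

```python
# Hedged sketch of dual gradient projection (DGP) for a toy QP
#   min 0.5 x'Qx + c'x   s.t.  A x <= b
# (floating point only; the paper's fixed-point analysis is not modeled).
import numpy as np

Q = np.diag([2.0, 2.0])
c = np.array([-2.0, -5.0])
A = np.array([[1.0, 2.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.array([2.0, 0.0, 0.0])

Qinv = np.linalg.inv(Q)
L = np.linalg.norm(A @ Qinv @ A.T, 2)   # Lipschitz constant of the dual gradient
y = np.zeros(len(b))                    # dual iterate (Lagrange multipliers)
for _ in range(500):
    x = -Qinv @ (c + A.T @ y)           # minimizer of the Lagrangian in x
    y = np.maximum(0.0, y + (A @ x - b) / L)  # projected gradient ascent step

x = -Qinv @ (c + A.T @ y)
assert np.all(A @ x <= b + 1e-6)        # primal feasibility up to tolerance
assert np.allclose(x, [0.2, 0.9], atol=1e-4)  # known optimizer of this QP
```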
Panagiotis Patrinos
panagiotis.patrinos@imtlucca.it
Alberto Guiggiani
alberto.guiggiani@imtlucca.it
Alberto Bemporad
alberto.bemporad@imtlucca.it
2015-04-07T13:53:35Z
2015-10-28T14:49:34Z
http://eprints.imtlucca.it/id/eprint/2656
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2656
2015-04-07T13:53:35Z
A multiparametric quadratic programming algorithm with polyhedral computations based on nonnegative least squares
Model Predictive Control (MPC) is one of the most successful techniques adopted in industry to control multivariable systems under constraints on input and output variables. To circumvent the main drawback of MPC, i.e., the need to solve a Quadratic Program (QP) on line to compute the control action, explicit MPC was proposed in the past to precompute the control law off line using multiparametric QP (mpQP). The resulting form of the MPC law is piecewise affine, which is extremely easy to code, can be computed online by simple arithmetic operations, and requires a maximum number of iterations that can be exactly determined a priori. On the other hand, the offline computations to solve the mpQP problem require detecting emptiness, full-dimensionality, and minimal hyperplane representations of polyhedra, and other computational geometric operations. While most of the existing methods solve such operations via linear programming, the approach proposed in this paper relies on a nonnegative least squares (NNLS) solver that is very simple to code, fast to execute, and provides solutions up to machine precision. In addition, the new approach exploits QP duality to identify and construct critical regions and to handle degeneracy issues.
Alberto Bemporad
alberto.bemporad@imtlucca.it
2015-03-27T10:40:25Z
2015-03-27T12:54:13Z
http://eprints.imtlucca.it/id/eprint/2648
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2648
2015-03-27T10:40:25Z
A meshless adaptive multiscale method for fracture
The paper presents a multiscale method for crack propagation. The coarse region is modelled by the differential reproducing kernel particle method. Fracture in the coarse scale region is modelled with the Phantom node method. A molecular statics approach is employed in the fine scale, where crack propagation is modelled naturally by breaking of bonds. The triangular lattice corresponds to the lattice structure of the (111) plane of an FCC crystal in the fine scale region. The Lennard–Jones potential is used to model the atom–atom interactions. The coupling between the coarse scale and fine scale is realized through ghost atoms. The ghost atom positions are interpolated from the coarse scale solution and enforced as boundary conditions on the fine scale. The fine scale region is adaptively refined and coarsened as the crack propagates. The centrosymmetry parameter is used to detect the crack tip location. The method is implemented in two dimensions. The results are compared to pure atomistic simulations and show excellent agreement.
Shih-Wei Yang
Pattabhi R. Budarapu
pattabhi.budarapu@imtlucca.it
D. Roy Mahapatra
Stéphane P.A. Bordas
Goangseup Zi
Timon Rabczuk
2015-03-27T10:35:00Z
2015-03-27T12:54:13Z
http://eprints.imtlucca.it/id/eprint/2646
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2646
2015-03-27T10:35:00Z
Parametric studies on buckling of thin walled channel beams
The lateral buckling analysis of cold-formed thin walled beams subjected to pure bending moments has been performed. The critical buckling loads are estimated based on an optimization criterion. The estimated critical buckling stresses are compared with published results and show excellent agreement. The effect of the beam length, the radius and thickness of the flanges, and the length of the extended open flanges on the critical buckling stresses has been studied for several combinations of the geometric parameters of the beam. Among the three beams, the critical buckling moments for the beam with the extended open flanges are found to be the maximum. However, considering the material and manufacturing costs, beams with rounded cross section are efficient in resisting the buckling loads.
Y.B. SudhirSastry
Y. Krishna
Pattabhi R. Budarapu
pattabhi.budarapu@imtlucca.it
2015-03-27T10:32:33Z
2015-03-27T12:54:13Z
http://eprints.imtlucca.it/id/eprint/2645
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2645
2015-03-27T10:32:33Z
Buckling analysis of thin wall stiffened composite panels
We present the pre- and post-buckling analysis of stiffened composite panels based on finite element models. Individual buckling studies are conducted on the stiffened composite panel made of woven fabric CFC/epoxy, E-glass/epoxy and Kevlar/epoxy composites. Straight, T-shaped and I-shaped stiffeners are considered to stiffen the panel. The panel is fabricated with 8 layers and the stiffeners are made of 16 layers, of equal thickness, arranged in different orientations. The panel is subjected to a uniform axial compression load of 10 kN. The distribution of the buckling stresses and the buckling loads with different panel and stiffener combinations are estimated for three different layup sequences. The variation of the buckling stresses and the buckling loads from the numerical model are compared with the experiment. The results are in excellent agreement.
Y.B. SudhirSastry
Pattabhi R. Budarapu
pattabhi.budarapu@imtlucca.it
N. Madhavi
Y. Krishna
2015-03-27T10:18:27Z
2015-03-27T12:54:13Z
http://eprints.imtlucca.it/id/eprint/2643
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2643
2015-03-27T10:18:27Z
Directionality of sound radiation from rectangular panels
In this paper, the directionality of sound radiated from a rectangular panel, attached with masses/springs and set in a baffle, is studied. The attachment of masses/springs is done based on the receptance method. The receptance method is used to generate new mode shapes and natural frequencies of the coupled system, in terms of the old mode shapes and natural frequencies. The Rayleigh integral is then used to compute the sound field. The point mass/spring locations are arbitrary, but chosen with the objective of attaining a unique directionality. The excitation frequency to a large degree decides the sound field variations. However, the size of the masses and the locations of the masses/springs do influence the new mode shapes and hence the sound field. The problem is more complex when the number of masses/springs is increased and/or their values are made different. The receptance method is demonstrated through a steel plate with attached point masses in the first example. In the second and third examples, the present method is applied to estimate the sound field from a composite panel with attached springs and masses, respectively. The layup sequence of the composite panel considered in the examples corresponds to the multifunctional structure battery material system, used in the micro air vehicle (MAV) (Thomas and Qidwai, 2005). The demonstrated receptance method does give a reasonable estimate of the new modes.
Pattabhi R. Budarapu
pattabhi.budarapu@imtlucca.it
T.S.S. Narayana
B. Rammohan
Timon Rabczuk
2015-03-26T11:47:08Z
2016-06-30T12:29:59Z
http://eprints.imtlucca.it/id/eprint/2637
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2637
2015-03-26T11:47:08Z
An Instrumental Least Squares Support Vector Machine for Nonlinear System Identification
Least-Squares Support Vector Machines (LS-SVMs), originating from Statistical Learning and Reproducing Kernel Hilbert Space (RKHS) theories, represent a promising approach to identify nonlinear systems via nonparametric estimation of the involved nonlinearities in a computationally and stochastically attractive way. However, the application of LS-SVMs and other RKHS variants in the identification context is formulated as a regularized linear regression aiming at the minimization of the l2-loss of the prediction error. This formulation corresponds to the assumption of an auto-regressive noise structure, which is often found to be too restrictive in practical applications. In this paper, Instrumental Variable (IV) based estimation is integrated into the LS-SVM approach, providing, under minor conditions, consistent identification of nonlinear systems with respect to the noise modeling error. It is shown how the cost function of the LS-SVM is modified to achieve an IV-based solution. Although a practically applicable choice of the instrumental variable is proposed for the derived approach, the optimal choice of this instrument, in terms of the variance associated with the estimates, remains an open problem. The effectiveness of the proposed IV-based LS-SVM scheme is also demonstrated by a Monte Carlo simulation example.
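The instrumental-variable idea at the core of this approach can be illustrated in a plain linear errors-in-variables setting (a minimal sketch, not the paper's LS-SVM formulation; all names, rates and values are illustrative). When the regressor is observed with noise, ordinary least squares is biased, while an instrument correlated with the true regressor but not with the measurement noise restores consistency:

```python
import numpy as np

# Sketch of the instrumental-variable (IV) idea in a scalar linear setting.
# True system: y = theta * x; we only observe x corrupted by noise, so OLS
# is biased towards zero, while the IV estimate remains consistent.
rng = np.random.default_rng(0)
n, theta = 100_000, 2.0

x_true = rng.normal(size=n)                        # latent noise-free regressor
x_meas = x_true + rng.normal(scale=0.5, size=n)    # noisy observation of x
y = theta * x_true + rng.normal(scale=0.1, size=n)
z = x_true + rng.normal(scale=0.5, size=n)         # instrument: independent noise

theta_ols = (x_meas @ y) / (x_meas @ x_meas)       # biased (attenuated) estimate
theta_iv = (z @ y) / (z @ x_meas)                  # consistent IV estimate

print(theta_ols, theta_iv)
```

Here the attenuation of the OLS estimate (roughly the factor Var(x)/(Var(x) + noise variance)) is exactly the kind of bias the IV correction removes.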
Vincent Laurain
Roland Tóth
Dario Piga
dario.piga@imtlucca.it
Wei Xing Zheng
2015-03-11T11:18:06Z
2015-07-24T12:26:41Z
http://eprints.imtlucca.it/id/eprint/2633
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2633
2015-03-11T11:18:06Z
Supporting performance awareness in autonomous ensembles
The ASCENS project works with systems of self-aware, self-adaptive and self-expressive ensembles. Performance awareness represents a concern that cuts across multiple aspects of such systems, from the techniques to acquire performance information by monitoring, to the methods of incorporating such information into the design and decision-making processes. This chapter provides an overview of five project contributions – performance monitoring based on the DiSL instrumentation framework, measurement evaluation using the SPL formalism, performance modeling with fluid semantics, adaptation with DEECo and design with IRM-SA – all in the context of the cloud case study.
Lubomír Bulej
Tomáš Bureš
Ilias Gerostathopoulos
Vojtěch Horký
Jaroslav Keznikl
Lukáš Marek
Max Tschaikowski
max.tschaikowski@imtlucca.it
Mirco Tribastone
mirco.tribastone@imtlucca.it
Petr Tůma
2015-03-11T11:14:42Z
2015-03-11T11:14:42Z
http://eprints.imtlucca.it/id/eprint/2632
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2632
2015-03-11T11:14:42Z
Service composition for collective adaptive systems
Collective adaptive systems are large-scale resource-sharing systems which adapt to the demands of their users by redistributing resources to balance load or provide alternative services where the current provision is perceived to be insufficient. Smart transport systems are a primary example, where real-time location tracking systems record the location and availability of assets such as cycles for hire, or fleet vehicles such as buses, trains and trams. We consider the problem of an informed user optimising his journey using a composition of services offered by different service providers.
Stephen Gilmore
Jane Hillston
Mirco Tribastone
mirco.tribastone@imtlucca.it
2015-03-11T10:09:01Z
2015-03-11T10:09:01Z
http://eprints.imtlucca.it/id/eprint/2631
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2631
2015-03-11T10:09:01Z
A homage to Martin Wirsing
Martin Wirsing was born on Christmas Eve, December 24th, 1948, in Bayreuth, a Bavarian town famous for the annually celebrated Richard Wagner Festival. There he attended the Lerchenbühl School and the High School “Christian Ernestinum”, where he followed the humanistic branch focusing on Latin and Ancient Greek. After that, from 1968 to 1974, Martin studied Mathematics at University Paris 7 and at Ludwig-Maximilians-Universität in Munich. In 1971 he obtained the Maîtrise-en-Sciences Mathématiques at University Paris 7 and, in 1974, the Diploma in Mathematics at LMU Munich.
Rocco De Nicola
r.denicola@imtlucca.it
Rolf Hennicker
2015-03-03T09:49:51Z
2015-03-03T09:49:51Z
http://eprints.imtlucca.it/id/eprint/2626
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2626
2015-03-03T09:49:51Z
CaSPiS: a calculus of sessions, pipelines and services
Service-oriented computing is calling for novel computational models and languages with well-disciplined primitives for client–server interaction, structured orchestration and unexpected events handling. We present CaSPiS, a process calculus where the conceptual abstractions of sessioning and pipelining play a central role for modelling service-oriented systems. CaSPiS sessions are two-sided, uniquely named and can be nested. CaSPiS pipelines permit orchestrating the flow of data produced by different sessions. The calculus is also equipped with operators for handling (unexpected) termination of the partner's side of a session. Several examples are presented to provide evidence of the flexibility of the chosen set of primitives. One key contribution is a fully abstract encoding of Misra et al.'s orchestration language Orc. Another main result shows that in CaSPiS it is possible to program a ‘graceful termination’ of nested sessions, which guarantees that no session is forced to hang forever after the loss of its partner.
Michele Boreale
Roberto Bruni
Rocco De Nicola
r.denicola@imtlucca.it
Michele Loreti
2015-03-03T09:41:50Z
2015-03-03T09:41:50Z
http://eprints.imtlucca.it/id/eprint/2625
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2625
2015-03-03T09:41:50Z
A formal approach to autonomic systems programming: the SCEL language
Software-intensive cyber-physical systems have to deal with massive numbers of components, featuring complex interactions among components and with humans and other systems. Often, they are designed to operate in open and non-deterministic environments, and to dynamically adapt to new requirements, technologies and external conditions. This class of systems has been named ensembles, and new engineering techniques are needed to address the challenges of developing, integrating, and deploying them. In this paper, we briefly introduce SCEL (Software Component Ensemble Language), a kernel language that takes a holistic approach to programming autonomic computing systems and aims at providing programmers with a complete set of linguistic abstractions for programming the behavior of autonomic components and the formation of ensembles of autonomic components, and for controlling the interaction among different components.
Rocco De Nicola
r.denicola@imtlucca.it
2015-03-02T09:41:51Z
2015-03-02T09:41:51Z
http://eprints.imtlucca.it/id/eprint/2624
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2624
2015-03-02T09:41:51Z
A convex feasibility approach to anytime model predictive control
This paper proposes to decouple performance optimization and enforcement of asymptotic convergence in Model Predictive Control (MPC), so that convergence to a given terminal set is achieved independently of how much performance is optimized at each sampling step. By embedding an explicit decrease condition in the MPC constraints, and thanks to a novel and very easy-to-implement convex feasibility solver proposed in the paper, it is possible to run an outer performance optimization algorithm on top of the feasibility solver, optimizing for an amount of time that depends on the CPU resources available within the current sampling step (possibly going open-loop at a given sampling step in the extreme case in which no resources are available), while still guaranteeing convergence to the terminal set. While the MPC setup and the solver proposed in the paper can deal with quite general classes of functions, we highlight the synthesis method and show numerical results for the case of linear MPC with ellipsoidal and polyhedral terminal sets.
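The paper's solver is not reproduced here, but the flavor of a convex feasibility problem can be sketched with the classical method of alternating projections (POCS) onto two simple convex sets; the sets and values below are illustrative, not taken from the paper:

```python
import numpy as np

# Convex feasibility by alternating projections (POCS): find a point in the
# intersection of the unit ball and the halfspace {x : a.x >= b}.
# Each projection is cheap and closed-form; iterating them converges to a
# feasible point when the intersection is nonempty.

def proj_ball(x):
    """Euclidean projection onto the unit ball."""
    n = np.linalg.norm(x)
    return x if n <= 1.0 else x / n

def proj_halfspace(x, a, b):
    """Euclidean projection onto the halfspace a.x >= b."""
    viol = b - a @ x
    return x if viol <= 0 else x + viol * a / (a @ a)

a, b = np.array([1.0, 1.0]), 1.0
x = np.array([5.0, -4.0])          # infeasible starting point
for _ in range(200):
    x = proj_ball(proj_halfspace(x, a, b))

print(x, a @ x)                    # x is (approximately) in both sets
```

For this geometry the iterates settle near the point where the halfspace boundary meets the unit circle, so both constraints hold up to numerical tolerance.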
Alberto Bemporad
alberto.bemporad@imtlucca.it
Daniele Bernardini
daniele.bernardini@imtlucca.it
Panagiotis Patrinos
panagiotis.patrinos@imtlucca.it
2015-02-23T11:14:29Z
2015-02-23T11:14:29Z
http://eprints.imtlucca.it/id/eprint/2622
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2622
2015-02-23T11:14:29Z
Foundations of Support Constraint Machines
The mathematical foundations of a new theory for the design of intelligent agents are presented. The proposed learning paradigm is centered around the concept of constraint, representing the interactions with the environment, and the parsimony principle. The classical regularization framework of kernel machines is naturally extended to the case in which the agents interact with a richer environment, where abstract granules of knowledge, compactly described by different linguistic formalisms, can be translated into the unified notion of constraint for defining the hypothesis set. Constrained variational calculus is exploited to derive general representation theorems that provide a description of the optimal body of the agent (i.e., the functional structure of the optimal solution to the learning problem), which is the basis for devising new learning algorithms. We show that regardless of the kind of constraints, the optimal body of the agent is a support constraint machine (SCM) based on representer theorems that extend classical results for kernel machines and provide new representations. In a sense, the expressiveness of constraints yields a semantic-based regularization theory, which strongly restricts the hypothesis set of classical regularization. Some guidelines to unify continuous and discrete computational mechanisms are given so as to accommodate in the same framework various kinds of stimuli, for example, supervised examples and logic predicates. The proposed view of learning from constraints incorporates classical learning from examples and extends naturally to the case in which the examples are subsets of the input space, which is related to learning propositional logic clauses.
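As background for the extension described above, the classical kernel-machine setting it generalizes can be sketched with kernel ridge regression, whose solution is, by the representer theorem, a kernel expansion over the training points (a minimal illustrative sketch; the data, kernel width and regularization value are made up):

```python
import numpy as np

# Classical kernel regularization: kernel ridge regression with an RBF
# kernel. The representer theorem guarantees the optimal solution is
# f(x) = sum_i alpha_i k(x, x_i) over the training points.
rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(40, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.1, size=40)  # noisy samples of sin

def rbf(A, B, gamma=1.0):
    """RBF (Gaussian) kernel matrix between row sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

lam = 1e-2
K = rbf(X, X)
# Representer coefficients: (K + lam I) alpha = y
alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)

X_test = np.array([[0.5]])
pred = rbf(X_test, X) @ alpha   # should be close to sin(0.5) ~ 0.48
print(pred)
```

The paper's point is that richer constraints (logic predicates, granules of knowledge) can be folded into the same variational machinery, generalizing the kernel expansion above to an expansion over support constraints.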
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Marco Gori
Stefano Melacci
Marcello Sanguineti
2015-02-23T11:11:28Z
2015-02-23T11:11:28Z
http://eprints.imtlucca.it/id/eprint/2621
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2621
2015-02-23T11:11:28Z
Robust local–global SOM-based ACM
A novel active contour model (ACM) for image segmentation, driven by both local and global image-intensity information encoded by a self-organising map (SOM), is proposed. Experimental results demonstrate the robustness of the proposed model to the contour initialisation and to the additive noise, when compared with the state-of-the-art local and global ACMs. They also demonstrate its robustness to scene changes.
Mohammed Abdelsamea
mohammed.abdelsamea@imtlucca.it
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
2015-02-23T11:04:02Z
2015-02-23T11:04:02Z
http://eprints.imtlucca.it/id/eprint/2620
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2620
2015-02-23T11:04:02Z
An efficient Self-Organizing Active Contour model for image segmentation
Active Contour Models (ACMs) constitute a powerful energy-based minimization framework for image segmentation, based on the evolution of an active contour. Among ACMs, supervised ACMs are able to exploit the information extracted from supervised examples to guide the contour evolution. However, their applicability is limited by the accuracy of the probability models they use. As a consequence, effectiveness and efficiency are among the main real challenges for supervised ACMs, especially when handling images containing regions characterized by intensity inhomogeneity. In this paper, to deal with such images, we propose a new supervised ACM, named the Self-Organizing Active Contour (SOAC) model, which combines a variational level set method (a specific kind of ACM) with the weights of the neurons of two Self-Organizing Maps (SOMs). Its main contribution is the development of a new ACM energy functional, optimized in such a way that the topological structure of the underlying image intensity distribution is preserved – using the two SOMs – in a parallel-processing and local way. The model has a supervised component, since training pixels associated with different regions are assigned to different SOMs. Experimental results show the superior efficiency and effectiveness of SOAC versus several existing ACMs.
Mohammed Abdelsamea
mohammed.abdelsamea@imtlucca.it
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Mohamed Medhat Gaber
2015-02-23T10:45:49Z
2015-11-02T13:02:11Z
http://eprints.imtlucca.it/id/eprint/2618
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2618
2015-02-23T10:45:49Z
Learning With Mixed Hard/Soft Pointwise Constraints
A learning paradigm is proposed and investigated, in which the classical framework of learning from examples is enhanced by the introduction of hard pointwise constraints, i.e., constraints imposed on a finite set of examples that cannot be violated. Such constraints arise, e.g., when requiring coherent decisions of classifiers acting on different views of the same pattern. The classical examples of supervised learning, which can be violated at the cost of some penalization (quantified by the choice of a suitable loss function) play the role of soft pointwise constraints. Constrained variational calculus is exploited to derive a representer theorem that provides a description of the functional structure of the optimal solution to the proposed learning paradigm. It is shown that such an optimal solution can be represented in terms of a set of support constraints, which generalize the concept of support vectors and open the doors to a novel learning paradigm, called support constraint machines. The general theory is applied to derive the representation of the optimal solution to the problem of learning from hard linear pointwise constraints combined with soft pointwise constraints induced by supervised examples. In some cases, closed-form optimal solutions are obtained.
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Marco Gori
Stefano Melacci
Marcello Sanguineti
2015-02-23T10:03:14Z
2015-05-19T09:19:02Z
http://eprints.imtlucca.it/id/eprint/2617
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2617
2015-02-23T10:03:14Z
Expressive non-verbal interaction in a string quartet: an analysis through head movements
The present study investigates expressive non-verbal interaction in the musical context, starting from behavioral features extracted at the individual and group levels. Four groups of features are defined, related to head movement and direction, which may help gain insight into the expressivity and cohesion of the performance, discriminating between different performance conditions. The features are then evaluated both at a global scale and at a local scale. The findings obtained from the analysis of a string quartet recorded in an ecological setting show that using these features, alone or in combination, may help distinguish between two types of performance: (a) a concert-like condition, where all musicians aim at performing at their best, and (b) a perturbed one, where the 1st violinist devises alternative interpretations of the music score without discussing them with the other musicians. In the global data analysis, the discriminative power of the features is investigated through statistical tests. Then, in the local data analysis, a larger amount of data is used to exploit more sophisticated machine learning techniques to select suitable subsets of the features, which are then used to train an SVM classifier to perform binary classification. Interestingly, the features whose discriminative power is evaluated as large (respectively, small) in the global analysis are evaluated similarly in the local analysis. When used together, the 22 features defined in the paper prove efficient for classification, with about 90% of the examples not used in the training phase successfully classified. Similar results are obtained considering only a subset of 15 features.
Donald Glowinski
Floriane Dardard
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Stefano Piana
Antonio Camurri
2015-02-23T09:38:59Z
2015-05-19T09:17:36Z
http://eprints.imtlucca.it/id/eprint/2614
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2614
2015-02-23T09:38:59Z
Narrowing the Search for Optimal Call-Admission Policies Via a Nonlinear Stochastic Knapsack Model
Call admission control with two classes of users is investigated via a nonlinear stochastic knapsack model. The feasibility region represents the subset of the call space, where given constraints on the quality of service have to be satisfied. Admissible strategies are searched for within the class of coordinate-convex policies. Structural properties that the optimal policies belonging to such a class have to satisfy are derived. They are exploited to narrow the search for the optimal solution to the nonlinear stochastic knapsack problem that models call admission control. To illustrate the role played by these properties, the numbers of coordinate-convex policies by which they are satisfied are estimated. A graph-based algorithm to generate all such policies is presented.
Marco Cello
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Mario Marchese
Marcello Sanguineti
2015-02-18T14:47:11Z
2015-02-18T14:47:11Z
http://eprints.imtlucca.it/id/eprint/2611
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2611
2015-02-18T14:47:11Z
Learning as Constraint Reactions
A theory of learning is proposed, which naturally extends the classic regularization framework of kernel machines to the case in which the agent interacts with a richer environment, compactly described by the notion of constraint. Variational calculus is exploited to derive general representer theorems that give a description of the structure of the solution to the learning problem. It is shown that such a solution can be represented in terms of constraint reactions, which recall the corresponding notion in analytic mechanics. In particular, the derived representer theorems clearly show the extension of the classic kernel expansion on support vectors to the expansion on support constraints. As an application of the proposed theory, three examples are given, which illustrate the dimensional collapse to a finite-dimensional space of parameters. The constraint reactions are calculated for the classic collection of supervised examples, for the case of box constraints, and for the case of hard holonomic linear constraints mixed with supervised examples. Interestingly, this leads to representer theorems for which we can re-use the kernel machine mathematical and algorithmic apparatus.
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Marco Gori
Stefano Melacci
Marcello Sanguineti
2015-02-10T14:44:18Z
2015-02-10T14:44:18Z
http://eprints.imtlucca.it/id/eprint/2590
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2590
2015-02-10T14:44:18Z
Fluid performability analysis of nested automata models
In this paper we present a class of nested automata for the modelling of performance, availability, and reliability of software systems with hierarchical structure, which we call systems of systems. Quantitative modelling provides valuable insight into the dynamic behaviour of software systems, allowing non-functional properties such as performance, dependability and availability to be assessed. However, the complexity of many systems challenges the feasibility of this approach as the required mathematical models grow too large to afford computationally efficient solution. In recent years it has been found that in some cases a fluid, or mean field, approximation can provide very good estimates whilst dramatically reducing the computational cost.
The systems of systems which we propose are hierarchically arranged automata in which influence may be exerted between siblings, between parents and children, and even from children to parents, allowing a wide range of complex dynamics to be captured. We show that, under mild conditions, systems of systems can be equipped with fluid approximation models which are several orders of magnitude more efficient to run than explicit state representations, whilst providing excellent estimates of performability measures. This is a significant extension of previous fluid approximation results, with valuable applications for software performance modelling.
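The fluid (mean-field) idea underlying this work can be sketched in the simplest possible setting: a large population of identical two-state automata whose ON-fraction is tracked by a single ODE instead of an exponentially large state space (a hedged sketch; the model, names and rates are illustrative, not the paper's nested automata):

```python
# Fluid approximation sketch: a population of two-state automata
# (OFF -> ON at rate k_on, ON -> OFF at rate k_off). Instead of a Markov
# chain over all population states, the ON-fraction x follows the single
# mean-field ODE  dx/dt = k_on * (1 - x) - k_off * x,
# integrated here with explicit Euler.
k_on, k_off = 2.0, 1.0
x = 0.0            # initial fraction of components in the ON state
dt = 1e-3
for _ in range(10_000):   # integrate up to t = 10
    x += dt * (k_on * (1 - x) - k_off * x)

steady = k_on / (k_on + k_off)   # analytic fixed point = 2/3
print(x, steady)
```

The point mirrors the abstract's claim: the fluid model is orders of magnitude cheaper than the explicit state representation while closely tracking the population-level measure.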
Luca Bortolussi
Jane Hillston
Mirco Tribastone
mirco.tribastone@imtlucca.it
2015-02-09T08:41:42Z
2015-02-09T08:41:42Z
http://eprints.imtlucca.it/id/eprint/2571
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2571
2015-02-09T08:41:42Z
LPV system identification under noise corrupted scheduling and output signal observations
Most of the approaches available in the literature for the identification of Linear Parameter-Varying (LPV) systems rely on the assumption that only the measurements of the output signal are corrupted by noise, while the observations of the scheduling variable are considered to be noise-free. However, in practice this turns out to be an unrealistic assumption in most cases, as the scheduling variable is often related to a measured signal and, thus, is inherently affected by measurement noise. In this paper, it is shown that neglecting the noise on the scheduling signal, which corresponds to an errors-in-variables problem, can lead to a significant bias on the estimated parameters. Consequently, in order to overcome this corruptive phenomenon affecting the practical use of data-driven LPV modeling, we present an identification scheme to compute a consistent estimate of LPV Input/Output (IO) models from noisy output and scheduling signal observations. A simulation example is provided to demonstrate the effectiveness of the proposed methodology.
Dario Piga
dario.piga@imtlucca.it
Pepijn Cox
p.b.cox@tue.nl
Roland Tóth
r.toth@tue.nl
Vincent Laurain
vincent.laurain@univ-lorraine.fr
2015-01-28T13:35:19Z
2016-04-04T09:05:08Z
http://eprints.imtlucca.it/id/eprint/2547
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2547
2015-01-28T13:35:19Z
Model order reduction applied to heat conduction in photovoltaic modules
Modeling of physical systems may be a challenging task when it requires solving large sets of numerical equations. This is the case for photovoltaic (PV) systems, which contain many PV modules, each module containing several silicon cells. The determination of the temperature field in the modules leads to large-scale systems, which may be computationally expensive to solve. In this context, Model Order Reduction (MOR) techniques can be used to approximate the full system dynamics with a compact model that is much faster to solve. Among the several available MOR approaches, in this work we consider the Discrete Empirical Interpolation Method (DEIM), which we apply with a suitably modified formulation specifically designed for handling the nonlinear terms present in the equations governing the thermal behavior of PV modules. The results show that the proposed DEIM technique is able to significantly reduce the system size, while retaining full control of the accuracy of the solution.
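The projection step common to this family of MOR methods can be sketched with a plain POD (SVD-based) reduction of a linear system: snapshots of the full trajectory are compressed onto a low-dimensional basis, yielding a small reduced operator. This is generic background for the family DEIM belongs to, not the paper's DEIM formulation, and the system below is a made-up stable random model:

```python
import numpy as np

# POD sketch: reduce x' = A x (dimension n) to dimension r by projecting
# onto the dominant left singular vectors of a snapshot matrix.
rng = np.random.default_rng(2)
n, r = 200, 5
A = -np.eye(n) + 0.1 * rng.normal(size=(n, n)) / np.sqrt(n)  # stable system
x = rng.normal(size=n)

# Collect snapshots of the full trajectory with explicit Euler.
dt, snaps = 1e-2, []
for _ in range(300):
    snaps.append(x.copy())
    x = x + dt * (A @ x)
S = np.array(snaps).T                       # n x 300 snapshot matrix

U, _, _ = np.linalg.svd(S, full_matrices=False)
V = U[:, :r]                                # POD basis (n x r)
A_r = V.T @ A @ V                           # r x r reduced operator

# Relative error of representing the snapshots in the reduced basis.
err = np.linalg.norm(S - V @ (V.T @ S)) / np.linalg.norm(S)
print(A_r.shape, err)
```

For a decaying trajectory like this one, a handful of basis vectors captures almost all of the snapshot energy, which is what makes the reduced model so much cheaper to solve; DEIM additionally handles nonlinear terms, which plain POD does not address.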
Saheed Olalekan Ojo
saheed.ojo@imtlucca.it
Stefano Grivet-Talocia
Marco Paggi
marco.paggi@imtlucca.it
2015-01-28T13:31:29Z
2015-01-28T13:31:29Z
http://eprints.imtlucca.it/id/eprint/2546
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2546
2015-01-28T13:31:29Z
Nonlinear fracture dynamics of laminates with finite thickness adhesives
Finite thickness interfaces, such as structural adhesives, are often simplified from the modeling point of view by introducing ideal cohesive zone models that do not take the finite thickness properties into account in the evaluation of the interface stiffness and inertia. In the present work, the nonlinear dynamic response of such layered systems is numerically investigated by the finite element method. The weak form of the dynamic equilibrium is written by including not only the contribution of cohesive interfaces, related to the virtual work exerted by the cohesive tractions for the corresponding relative displacements, but also the work done by the dynamic forces of the finite thickness interfaces resulting from their inertia properties. A fully implicit solution scheme, both in space and in time, is exploited, and the numerical results for the double cantilever beam test show that the role of finite thickness properties is not negligible as far as the crack growth kinetics and the dynamic strength increase factor are concerned.
Mauro Corrado
mauro.corrado@polito.it
Marco Paggi
marco.paggi@imtlucca.it
2014-12-18T11:34:55Z
2016-04-06T09:03:15Z
http://eprints.imtlucca.it/id/eprint/2424
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2424
2014-12-18T11:34:55Z
Quantitative analysis of myelin and axonal remodeling in the uninjured motor network after stroke
Objectives: Contralesional brain connectivity plasticity was previously reported after stroke. This study aims at disentangling the biological mechanisms underlying connectivity plasticity in the uninjured motor network after an ischemic lesion. In particular, we measured generalized fractional anisotropy (GFA) and magnetization transfer ratio (MTR) to assess whether post-stroke connectivity remodeling depends on axonal and/or myelin changes. Materials and Methods: Diffusion Spectrum Imaging (DSI) and Magnetization Transfer MRI at 3T were performed in 10 patients in the acute phase, at one month and at six months after a stroke affecting motor cortical and/or subcortical areas. Ten age- and gender-matched healthy volunteers were scanned one month apart for longitudinal comparison. Clinical assessment was also performed in patients prior to MRI. In the contralesional hemisphere, average measures and tract-based quantitative analysis of GFA and MTR were performed to assess axonal integrity and myelination along motor connections, as well as their variations in time. Results and Conclusions: Mean and tract-based measures of MTR and GFA showed significant changes in a number of contralesional motor connections, confirming both axonal and myelin plasticity in our cohort of patients. Moreover, density-derived features (peak height, standard deviation (SD) and skewness) of GFA and MTR along the tracts showed stronger correlation with clinical scores than the mean values. These findings reveal the interplay between contralateral myelin and axonal remodeling after stroke.
Ying-Chia Lin
yingchia.lin@imtlucca.it
Alessandro Daducci
Djalel-Eddine Meskaldji
Jean-Philippe Thiran
Patrik Michel
Reto A Meuli
Gunnar Krueger
Gloria Menegaz
Cristina Granziera
2014-12-18T11:09:56Z
2016-04-06T09:56:55Z
http://eprints.imtlucca.it/id/eprint/2422
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2422
2014-12-18T11:09:56Z
Multicontrast connectometry: a new tool to assess cerebellum alterations in early relapsing-remitting multiple sclerosis
Background: Cerebellar pathology occurs in late multiple sclerosis (MS) but little is known about cerebellar changes during early disease stages. In this study, we propose a new multicontrast “connectometry” approach to assess the structural and functional integrity of cerebellar networks and connectivity in early MS. Methods: We used diffusion spectrum and resting-state functional MRI (rs-fMRI) to establish the structural and functional cerebellar connectomes in 28 early relapsing-remitting MS patients and 16 healthy controls (HC). We performed multicontrast “connectometry” by quantifying multiple MRI parameters along the structural tracts (generalized fractional anisotropy (GFA), T1/T2 relaxation times and magnetization transfer ratio) and functional connectivity measures. Subsequently, we assessed multivariate differences in local connections and network properties between MS and HC subjects; finally, we correlated the detected alterations with lesion load, disease duration, and clinical scores. Results: In MS patients, a subset of structural connections showed quantitative MRI changes suggesting loss of axonal microstructure and integrity (increased T1 and decreased GFA, P < 0.05). These alterations were highly correlated with motor, memory and attention performance in patients, but were independent of cerebellar lesion load and disease duration. Neither network organization nor rs-fMRI abnormalities were observed at this early stage. Conclusion: Multicontrast cerebellar connectometry revealed subtle cerebellar alterations in MS patients, which were independent of conventional disease markers and highly correlated with patient function. Future work should assess the prognostic value of the observed damage. Hum Brain Mapp, 2014. © 2014 Wiley Periodicals, Inc.
David Romascano
Djalel-Eddine Meskaldji
Guillaume Bonnier
Samanta Simioni
David Rotzinger
Ying-Chia Lin
yingchia.lin@imtlucca.it
Gloria Menegaz
Alexis Roche
Myriam Schluep
Renaud Du Pasquier
Jonas Richiardi
Dimitri Van De Ville
Alessandro Daducci
Tilman Johannes Sumpf
Jens Fraham
Jean-Philippe Thiran
Gunnar Krueger
Cristina Granziera
2014-03-27T09:30:26Z
2016-04-05T12:01:10Z
http://eprints.imtlucca.it/id/eprint/2182
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2182
2014-03-27T09:30:26Z
COMT Genetic Reduction Produces Sexually Divergent Effects on Cortical Anatomy and Working Memory in Mice and Humans
Genetic variations in catechol-O-methyltransferase (COMT) that modulate cortical dopamine have been associated with pleiotropic behavioral effects in humans and mice. Recent data suggest that some of these effects may vary among sexes. However, the specific brain substrates underlying COMT sexual dimorphisms remain unknown. Here, we report that genetically driven reduction in COMT enzyme activity increased cortical thickness in the prefrontal cortex (PFC) and postero-parieto-temporal cortex of male, but not female adult mice and humans. Dichotomous changes in PFC cytoarchitecture were also observed: reduced COMT increased a measure of neuronal density in males, while reducing it in female mice. Consistent with the neuroanatomical findings, COMT-dependent sex-specific morphological brain changes were paralleled by divergent effects on PFC-dependent working memory in both mice and humans. These findings emphasize a specific sex–gene interaction that can modulate brain morphological substrates with influence on behavioral outcomes in healthy subjects and, potentially, in neuropsychiatric populations.
Sara Sannino
Alessandro Gozzi
Antonio Cerasa
Fabrizio Piras
Diego Scheggia
Francesca Manago
Mario Damiano
Alberto Galbusera
Lucy C. Erickson
Davide De Pietri Tonelli
Angelo Bifone
Sotirios A. Tsaftaris
sotirios.tsaftaris@imtlucca.it
Carlo Caltagirone
Daniel R. Weinberger
Gianfranco Spalletta
Francesco Papaleo
2013-12-12T13:11:42Z
2016-07-13T10:29:47Z
http://eprints.imtlucca.it/id/eprint/2056
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2056
2013-12-12T13:11:42Z
Modelling and analyzing adaptive self-assembling strategies with Maude
Building adaptive systems with predictable emergent behavior is a challenging task and it is becoming a critical need. The research community has accepted the challenge by introducing approaches of various nature: from software architectures, to programming paradigms, to analysis techniques. We recently proposed a conceptual framework for adaptation centered around the role of control data. In this paper we show that it can be naturally realized in a reflective logical language like Maude by using the Reflective Russian Dolls model. Moreover, we exploit this model to specify, validate and analyse a prominent example of adaptive system: robot swarms equipped with self-assembly strategies. The analysis exploits the statistical model checker PVeStA.
Roberto Bruni
Andrea Corradini
Fabio Gadducci
Alberto Lluch-Lafuente
alberto.lluch@imtlucca.it
Andrea Vandin
andrea.vandin@imtlucca.it