IMT Institutional Repository (EPrints, http://eprints.imtlucca.it/): no conditions; results ordered by date deposited. Feed generated 2020-02-20.

Deposited 2018-03-12 · http://eprints.imtlucca.it/id/eprint/4031
Orbit control for a power generating airfoil based on nonlinear MPC
The Airborne Wind Energy paradigm proposes to generate energy by flying a tethered airfoil across the wind flow. An essential problem posed by Airborne Wind Energy is the control of the tethered airfoil's trajectory during power generation. Tethered flight is a fast, strongly nonlinear, unstable and constrained process, motivating control approaches based on fast Nonlinear Model Predictive Control (NMPC). In this paper, a computationally efficient 6-DOF control model for a high-performance, large-scale, rigid airfoil is proposed. A control scheme based on receding-horizon NMPC is applied to the proposed model to track reference trajectories. To make real-time application of NMPC possible, a Real-Time Iteration scheme is proposed and its performance investigated.
Authors: Sébastien Gros, Mario Zanon (mario.zanon@imtlucca.it), Moritz Diehl

Deposited 2018-03-12 · http://eprints.imtlucca.it/id/eprint/4027
Nonlinear MPC and MHE for Mechanical Multi-Body Systems with Application to Fast Tethered Airplanes
Mechanical applications often require a high control frequency to cope with fast dynamics. The control frequency of a nonlinear model predictive controller depends strongly on the symbolic complexity of the equations modeling the system.
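As an aside, the receding-horizon idea behind Nonlinear Model Predictive Control can be illustrated with a toy sketch: a hypothetical 1-D double integrator and a brute-force input search, nothing like the authors' 6-DOF airfoil model or Real-Time Iteration scheme.

```python
# Toy receding-horizon control loop (illustrative sketch only; the paper's
# controller is far more sophisticated). Plant: double integrator x' = v,
# v' = u. At each step we pick, by grid search, the constant input that
# minimizes predicted tracking error over a short horizon, apply it once,
# and re-solve from the new state.

def simulate(x, v, u, dt, steps):
    """Roll the double integrator forward under a constant input u."""
    for _ in range(steps):
        x += v * dt
        v += u * dt
    return x, v

def mpc_step(x, v, r, dt=0.1, horizon=10):
    """Choose the input minimizing predicted cost over the horizon."""
    candidates = [i * 0.1 for i in range(-20, 21)]  # u in [-2, 2]
    def cost(u):
        xf, vf = simulate(x, v, u, dt, horizon)
        return (xf - r) ** 2 + 0.1 * vf ** 2        # track r, damp velocity
    return min(candidates, key=cost)

x, v = 0.0, 0.0
for _ in range(100):           # closed loop: apply first input, then re-solve
    u = mpc_step(x, v, 1.0)
    x += v * 0.1
    v += u * 0.1
```

After the loop the state has settled near the reference position r = 1.0; the key point is that only the first input of each open-loop plan is ever applied.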
The symbolic complexity of the model equations for multi-body mechanical systems can often be dramatically reduced by using representations based on non-minimal coordinates, which result in index-3 differential-algebraic equations (DAEs). This paper proposes a general procedure for treating multi-body mechanical systems efficiently in the context of MHE and NMPC using non-minimal coordinate representations, and reports the computational times that can be achieved on a tethered airplane system using code generation.
Authors: Sébastien Gros, Mario Zanon (mario.zanon@imtlucca.it), Milan Vukov, Moritz Diehl

Deposited 2018-03-06 · http://eprints.imtlucca.it/id/eprint/3963
L'imprenditorialità nell'azienda lapidea. Rilevanza e caratteri delle radici territoriali nelle strategie competitive
The volume examines the characteristics of the marble supply chain, and of the stone-working firm in particular, before turning to an analysis of possible competitive strategies. The development dynamics of the firms of the Apuo-Versilian district offer interesting clues about the relevance of the territory's strong productive character in the formation of strategic identity: the evidence appears to indicate a necessary marriage of industry and artistic craftsmanship, together with an awareness of this on the part of the public policy maker. The resulting development model highlights paths for enhancing intangible assets that build on territorial rootedness, strategic authenticity, and the craftsmanship of industrial production.
Authors: Nicola Lattanzi (nicola.lattanzi@imtlucca.it), G. Vitali

Deposited 2018-03-06 · http://eprints.imtlucca.it/id/eprint/3961
Il patrimonio della famiglia tra Family Business e Family Office
Author: Nicola Lattanzi (nicola.lattanzi@imtlucca.it)

Deposited 2018-03-06 · http://eprints.imtlucca.it/id/eprint/3960
I modelli concettuali e le condizioni di funzionamento delle aziende famigliari
Authors: Nicola Lattanzi (nicola.lattanzi@imtlucca.it), A. Morelli

Deposited 2018-03-06 · http://eprints.imtlucca.it/id/eprint/3959
Azienda, uomo e neuroscienze. Processi decisionali e pensiero strategico
The book addresses the change in the relationship between the firm and its environment, focusing on the determinants of the relationship between tangible and intangible assets. In the contemporary context, technology has created new spatio-temporal dimensions within which the conditions for a firm's viability have changed in form and content. Knowledge and time are now regarded as strategic factors, but what most captures attention is the constant reference to the centrality of the human being in the governance of the firm: each individual possesses a cognitive capacity that defines the uniqueness and originality of his or her thinking. It is the human being who stands as a genuine cognitive resource, one that can now be better understood and interpreted, in both its endogenous and exogenous dimensions, through the discipline of neuroscience.
This does not mean abandoning the view of the firm as an objective institution with the human being at its service, but rather widening the field of observation and study to a new, neuroscientific perspective capable of completing, integrating, and reconfiguring the knowledge factor itself.
Author: Nicola Lattanzi (nicola.lattanzi@imtlucca.it)

Deposited 2018-03-02 · http://eprints.imtlucca.it/id/eprint/3927
Uno sguardo sull'incisione sarda contemporanea
Author: Silvia Massa (silvia.massa@imtlucca.it)

Deposited 2018-03-02 · http://eprints.imtlucca.it/id/eprint/3926
Graffiare oltre le apparenze: i bulini di Marco Innocenzi
Author: Silvia Massa (silvia.massa@imtlucca.it)

Deposited 2018-02-16 · http://eprints.imtlucca.it/id/eprint/3914
Global Administrative Law: the Casebook (edited volume)
Editors: Sabino Cassese, Bruno Carotti, Lorenzo Casini (lorenzo.casini@imtlucca.it), Eleonora Cavalieri, Euan McDonald

Deposited 2018-01-24 · http://eprints.imtlucca.it/id/eprint/3881
Redistribution and the Notion of Social Status
We study the impact of redistributive policies when agents can signal their social status by spending on a conspicuous good. Our focus is on how the shape of the status function (i.e., how social status is computed and evaluated) can affect the equilibrium outcome of the model, and in particular the relationship between inequality and wasteful conspicuous consumption. We find that if status depends in an ordinal way on individuals' relative standing in terms of economic resources, then redistributing resources from the rich to the poor increases social waste, because it forces the rich to spend more on conspicuous consumption in order to differentiate themselves from the poor. If, instead, status depends in a cardinal way on individuals' relative standing, then a redistribution of resources in favor of the poor can reduce social waste. This is possible because under cardinal status there is an additional effect: a lesser degree of inequality decreases the value of social status and hence reduces the incentives to engage in wasteful social competition. If this second effect is stronger than the first, social waste falls. In this case a Pareto improvement is also possible, but it requires, in addition, that the rich save enough on costly signaling to compensate for the losses due to the reduction of their economic resources.
Authors: Ennio Bilancini (ennio.bilancini@imtlucca.it), Leonardo Boncinelli

Deposited 2018-01-24 · http://eprints.imtlucca.it/id/eprint/3880
The desirability of pay-as-you-go pensions when relative consumption matters and returns are stochastic
Under concerns for relative consumption, a PAYG system becomes more attractive because it insures pensioners against the risk of being outperformed, but it becomes potentially less effective in hedging the risks associated with financial markets. The net effect is ambiguous.
Authors: Ennio Bilancini (ennio.bilancini@imtlucca.it), Massimo D'Antoni

Deposited 2018-01-24 · http://eprints.imtlucca.it/id/eprint/3879
Long-run welfare under externalities in consumption, leisure, and production: A case for happy degrowth vs.
unhappy growth
In this paper we contribute to the debate on the relationship between growth and well-being by examining an endogenous growth model in which we allow for externalities in consumption, leisure, and production. We analyze three regimes: a decentralized economy where each household makes isolated choices without considering their external effects; a planned economy where a myopic planner recognizes production externalities but fails to recognize both leisure and consumption externalities; and a planned economy with a fully informed planner. We first compare the balanced growth paths under the three regimes and then numerically investigate the transition to the optimal balanced growth path. We provide a number of findings. First, in a decentralized economy growth or labor (or both) are greater than in the regime with a fully informed planner, and hence are sub-optimal from a welfare standpoint. Second, a myopic intervention that overlooks consumption and leisure externalities leads to more growth and labor than in both the decentralized and the fully informed regimes. Third, we provide a case for happy degrowth: a transition to the optimal balanced growth path that is associated with a downscaling of production, a reduction in private consumption, and an ongoing increase in leisure and well-being.
Authors: Ennio Bilancini (ennio.bilancini@imtlucca.it), Simone D'Alessandro

Deposited 2017-09-29 · http://eprints.imtlucca.it/id/eprint/3818
Simulazione numerica e confronto con misure sperimentali del processo di raffreddamento di barriere termiche
Authors: Claudia Borri (claudia.borri@imtlucca.it), Alessio Fossati, Alessandro Lavacchi, Tiberio Bacci, Ugo Bardi

Deposited 2017-09-04 · http://eprints.imtlucca.it/id/eprint/3775
Algoritmo per il riconoscimento, classificazione e rimozione degli artefatti da movimenti oculari dal segnale EEG durante il sonno REM
Authors: Monica Betta (monica.betta@imtlucca.it), Danilo Menicucci, Marco Laurino, A. Piarulli, Alberto Landi, Angelo Gemignani

Deposited 2017-08-04 · http://eprints.imtlucca.it/id/eprint/3748
Optimal Encoding of Interval Timing in Expert Percussionists
We measured temporal reproduction in human subjects with various levels of musical expertise: expert drummers, string musicians, and non-musicians. While the duration reproduction of the non-percussionists showed a characteristic central tendency, or regression to the mean, drummers responded veridically. Furthermore, when the stimuli were auditory tones rather than flashes, all subjects responded veridically.
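The central-tendency (regression-to-the-mean) pattern described here can be illustrated with a minimal Gaussian Bayesian estimator; the numbers below are made up for illustration and are not the authors' fitted model.

```python
# Sketch of central tendency as Bayesian fusion (illustrative only): the
# reproduced duration is a precision-weighted average of the noisy sensory
# measurement and a prior centred on the mean duration of the sample.

def reproduce(measured, prior_mean, sigma_meas, sigma_prior):
    """Posterior mean for a Gaussian prior combined with a Gaussian likelihood."""
    w = sigma_prior**2 / (sigma_prior**2 + sigma_meas**2)  # weight on the data
    return w * measured + (1.0 - w) * prior_mean

# A low-noise observer (think: drummer) responds almost veridically to a
# 0.8 s stimulus; a high-noise observer regresses toward the 0.6 s prior mean.
precise = reproduce(0.8, prior_mean=0.6, sigma_meas=0.02, sigma_prior=0.1)
noisy   = reproduce(0.8, prior_mean=0.6, sigma_meas=0.10, sigma_prior=0.1)
```

With equal measurement and prior widths the weight is 0.5, so the noisy observer reproduces 0.7 s, halfway between stimulus and prior mean, while the precise observer stays close to 0.8 s.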
The behavior of all three groups in both modalities is well explained by a Bayesian model that seeks to minimize reproduction errors by incorporating a central-tendency prior, a probability density function centered at the mean duration of the sample. We separately measured temporal precision thresholds with a bisection task; thresholds were twice as low in drummers as in the other two groups. These estimates of temporal precision, together with an adaptable Bayesian prior, predict well the reproduction results and the central-tendency strategy under all conditions and for all subject groups. These results highlight the efficiency and flexibility of the sensorimotor mechanisms that estimate temporal duration.
Authors: G. M. Cicchini, R. Arrighi, Luca Cecchetti (luca.cecchetti@imtlucca.it), M. Giusti, D. C. Burr

Deposited 2017-08-04 · http://eprints.imtlucca.it/id/eprint/3747
Memory and anatomical change in severe non-missile traumatic brain injury: ∼1 vs. ∼8 years follow-up
In previous studies, we investigated a group of subjects who had suffered a severe non-missile traumatic brain injury (nmTBI) without macroscopic focal lesions, and we found brain atrophy involving the hippocampus, fornix, corpus callosum, optic chiasm, and optic radiations. Memory test scores correlated mainly with fornix volumes. In the present study, we re-examined 11 of these nmTBI subjects approximately 8 years later. High-spatial-resolution T1-weighted magnetic resonance images of the brain (1 mm³) and standardised memory tests were performed once more, in order to compare brain morphology and memory performance originally assessed 3–13 months after head injury (first study) and after 8–10 years (present study).
An overall improvement in memory test performance was demonstrated at the latest assessment, indicating that cognitive recovery in severe nmTBI subjects had not been completed within 3–13 months post-injury. Notably, the volumes of the fornix and the hippocampus were significantly reduced relative to normal controls, but these volumes did not differ appreciably between the nmTBI subjects' first (∼1 year) and second (∼8 year) scans. By contrast, a clear reduction in the volume of the corpus callosum can be observed after ∼1 year, and a further significant reduction is evident after ∼8 years, indicating that neural degeneration in severe nmTBI continues long after the head trauma and affects specific structures rather than the brain as a whole.
Authors: Francesco Tomaiuolo, Umberto Bivona, Jason P. Lerch, Margherita Di Paola, Giovanni A. Carlesimo, Paola Ciurli, Mariella Matteis, Luca Cecchetti (luca.cecchetti@imtlucca.it), Antonio Forcina, Daniela Silvestro, Eva Azicnuda, Umberto Sabatini, Dina Di Giacomo, Carlo Caltagirone, Michael Petrides, Rita Formisano

Deposited 2016-05-26 · http://eprints.imtlucca.it/id/eprint/3491
Modelling and Analyzing Adaptive Self-assembly Strategies with Maude
Building adaptive systems with predictable emergent behavior is a challenging task, and it is becoming a critical need. The research community has accepted the challenge by introducing approaches of various natures: from software architectures, to programming paradigms, to analysis techniques. We recently proposed a conceptual framework for adaptation centered around the role of control data. In this paper we show that it can be naturally realized in a reflective logical language like Maude by using the Reflective Russian Dolls model.
Moreover, we exploit this model to specify and analyse a prominent example of an adaptive system: robot swarms equipped with obstacle-avoidance self-assembly strategies. The analysis exploits the statistical model checker PVesta.
Authors: Roberto Bruni, Andrea Corradini, Fabio Gadducci, Alberto Lluch Lafuente, Andrea Vandin (andrea.vandin@imtlucca.it)

Deposited 2016-04-07 · http://eprints.imtlucca.it/id/eprint/3390
Competitors' communities and taxonomy of products according to export fluxes
In this paper we use Complex Network Theory to quantitatively characterize and synthetically describe the complexity of trade between nations. In particular, we focus our attention on export fluxes. Starting from the bipartite countries-products network defined by export fluxes, we define two complementary graphs projecting the original network onto countries and products respectively. In both cases we define a distance matrix amongst countries and amongst products; specifically, two countries are similar if they export similar products. This relationship can be quantified by building the Minimum Spanning Tree and the Minimum Spanning Forest from the distance matrices for products and countries. Through this simple and scalable method we are also able to carry out a community analysis. It has not gone unnoticed that in this way we can produce an effective categorization for products, providing several advantages with respect to the traditional COMTRADE classifications.
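The Minimum Spanning Tree construction mentioned in this abstract can be sketched as follows; the distance matrix below is a toy, hypothetical example, not the COMTRADE-derived data the paper uses.

```python
# Prim's algorithm over a dense symmetric distance matrix (illustrative sketch;
# the paper builds its distances from countries' export baskets).

def minimum_spanning_tree(dist):
    """Return MST edges (i, j) for a dense symmetric distance matrix."""
    n = len(dist)
    in_tree = {0}                       # grow the tree from node 0
    edges = []
    while len(in_tree) < n:
        best = None                     # cheapest edge leaving the tree
        for i in in_tree:
            for j in range(n):
                if j not in in_tree and (best is None or dist[i][j] < best[0]):
                    best = (dist[i][j], i, j)
        _, i, j = best
        edges.append((i, j))
        in_tree.add(j)
    return edges

# Four toy "countries": 0-1 and 2-3 export similar baskets (small distance);
# the two pairs are far from each other.
d = [[0.0, 0.1, 0.9, 0.8],
     [0.1, 0.0, 0.7, 0.9],
     [0.9, 0.7, 0.0, 0.2],
     [0.8, 0.9, 0.2, 0.0]]
mst = minimum_spanning_tree(d)
```

Cutting the longest MST edge (here the 0.7 link) splits the tree into the two similarity communities, which is essentially the forest-based community detection the abstract alludes to.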
Finally, the forests of countries allow for the detection of competitors' communities and for the analysis of the evolution of these communities.
Authors: Matthieu Cristelli, Andrea Tacchella, Andrea Gabrielli, Luciano Pietronero, Antonio Scala, Guido Caldarelli (guido.caldarelli@imtlucca.it)

Deposited 2016-03-22 · http://eprints.imtlucca.it/id/eprint/3279
Modulation of MMP-9/TIMP activity in preventing cardiac disfunction through a combination of molecularly imprinting technology and biodegradable microfabricated systems
Authors: Caterina Cristallini, Niccoletta Barbani, E. Bellotti, F. Manetti, Elisabetta Rosellini, Mariacristina Gagliardi (mariacristina.gagliardi@imtlucca.it), E. Del Gaudio, F. Tricoli, S. Mantero

Deposited 2016-03-21 · http://eprints.imtlucca.it/id/eprint/3253
In vitro haematic proteins adsorption and cytocompatibility study on acrylic copolymer to realise coatings for drug-eluting stents
In the present paper, a preliminary in vitro analysis of the biocompatibility of newly synthesised acrylic copolymers is reported. In particular, with the aim of obtaining coatings for drug-eluting stents, blood protein adsorption and cytocompatibility were studied. For the protein adsorption tests, bovine serum albumin and bovine plasma fibrinogen were considered. Cytocompatibility was tested using the C2C12 cell line as a model, analysing the behaviour of the polymeric matrices and of drug-eluting systems obtained by loading the polymeric matrices with paclitaxel, an anti-mitotic drug, in order to evaluate the efficacy of a pharmacological treatment locally administered from these materials. Results showed that the amount of albumin adsorbed was greater than the amount of fibrinogen (in the ranges 70–85 and 10–22, respectively), which is favourable in terms of haemocompatibility.
Cell culture tests showed good adhesion properties and relatively poor proliferation. In addition, a strong effect related to drug elution and a correlation with the macromolecular composition were detected. In this preliminary analysis, the tested materials showed good characteristics and can be considered candidates for coatings for drug-eluting stents.
Author: Mariacristina Gagliardi (mariacristina.gagliardi@imtlucca.it)

Deposited 2016-03-21 · http://eprints.imtlucca.it/id/eprint/3252
Polymeric nanocarriers for controlled and enhanced delivery of therapeutic agents to the CNS
Polymeric nanocarriers are versatile structures that can be engineered to obtain high drug loading, good delivery yields and tunable release kinetics. Moreover, the particle surface can be modified for selective targeting of organs or tissues. In particular, polymeric nanocarriers can be conjugated with functional groups promoting translocation through the blood–brain barrier, thus providing a promising system for delivering therapeutic agents and/or diagnostic probes to the brain. Here we review recent literature on the preparation and characterization of polymeric nanoparticles as potential agents for drug delivery to the CNS, with an emphasis on materials chemistry and functionalization strategies for improved selectivity and delivery.
Finally, we underline the immunotoxicological aspects of this class of nanostructured materials in view of potential clinical applications.
Authors: Mariacristina Gagliardi (mariacristina.gagliardi@imtlucca.it), Giuseppe Bardi, Angelo Bifone

Deposited 2016-03-15 · http://eprints.imtlucca.it/id/eprint/3230
A versatile ray-tracing code for studying rf wave propagation in toroidal magnetized plasmas
A new ray-tracing code named C3PO has been developed to study the propagation of arbitrary electromagnetic radio-frequency (rf) waves in magnetized toroidal plasmas. Its structure is designed for maximum flexibility regarding the choice of coordinate system and dielectric model. The versatility of this code makes it particularly suitable for integrated modeling systems. Using a coordinate system that reflects the nested structure of magnetic flux surfaces in tokamaks, fast and accurate calculations inside the plasma separatrix can be performed using analytical derivatives of a spline-Fourier interpolation of the axisymmetric toroidal MHD equilibrium. Applications to the reverse-field-pinch magnetic configuration are also included. The effects of 3D perturbations of the axisymmetric toroidal MHD equilibrium, due to the discreteness of the magnetic coil system or to plasma fluctuations, are also studied in an original quasi-optical approach. Using a Runge–Kutta–Fehlberg method to solve the set of ordinary differential equations, the ray-tracing code is extensively benchmarked against analytical models and other codes for lower hybrid and electron cyclotron waves.
Authors: Y. Peysson, Joan Decker, Lorenzo Morini (lorenzo.morini@imtlucca.it)

Deposited 2016-03-14 · http://eprints.imtlucca.it/id/eprint/3229
Stroh formalism in analysis of skew-symmetric and symmetric weight functions for interfacial cracks
The focus of the article is the analysis of skew-symmetric weight matrix functions for interfacial cracks in two-dimensional anisotropic solids. It is shown that the Stroh formalism proves to be an efficient approach to this challenging task. Conventionally, the weight functions, both symmetric and skew-symmetric, can be identified as non-trivial singular solutions of the homogeneous boundary value problem for a solid with a crack. For a semi-infinite crack, the problem can be reduced to solving a matrix Wiener-Hopf functional equation. Instead, the Stroh matrix representation of displacements and tractions, combined with a Riemann-Hilbert formulation, is used here to obtain an algebraic eigenvalue problem that is solved in closed form. The proposed general method is applied to the case of quasi-static semi-infinite crack propagation between two dissimilar orthotropic media: explicit expressions for the weight matrix functions are evaluated and then used to compute the complex stress intensity factor corresponding to an asymmetric load acting on the crack faces.
Authors: Lorenzo Morini (lorenzo.morini@imtlucca.it), Enrico Radi, Alexander Movchan, Natalia Movchan

Deposited 2016-03-08 · http://eprints.imtlucca.it/id/eprint/3191
Hybrid public-private bodies within global private regimes: The World Anti-Doping Agency (WADA)
Authors: Lorenzo Casini (lorenzo.casini@imtlucca.it), Giulia Mannucci

Deposited 2016-03-08 · http://eprints.imtlucca.it/id/eprint/3190
A hybrid public-private regime: the Internet Corporation for Assigned Names and Numbers (ICANN) and the governance of the internet
Authors: Bruno Carotti, Lorenzo Casini (lorenzo.casini@imtlucca.it)

Deposited 2016-03-08 · http://eprints.imtlucca.it/id/eprint/3189
Beyond the State: the emergence of global administration
Author: Lorenzo Casini (lorenzo.casini@imtlucca.it)

Deposited 2016-03-08 · http://eprints.imtlucca.it/id/eprint/3188
Le prospettive della globalizzazione
Authors: Lorenzo Casini (lorenzo.casini@imtlucca.it), Francesco Albisinni, Eleonora Cavalieri

Deposited 2016-03-08 · http://eprints.imtlucca.it/id/eprint/3187
Domestic public authorities within global networks: institutional and procedural design, accountability, and review
Author: Lorenzo Casini (lorenzo.casini@imtlucca.it)

Deposited 2016-03-08 · http://eprints.imtlucca.it/id/eprint/3186
Public regulation of global indicators
Authors: Sabino Cassese, Lorenzo Casini (lorenzo.casini@imtlucca.it)

Deposited 2016-03-08 · http://eprints.imtlucca.it/id/eprint/3185
The Making of a Lex Sportiva by the Court of Arbitration for Sport
The purpose of this paper is to examine the structure and functions of the Court of Arbitration for Sport (CAS) in order to highlight a number of problems concerning judicial activities at the global level more generally. Section 2 will outline CAS' organization and functions, from its inception to the present day.
Section 3 will focus on the role of the CAS in making a lex sportiva, taking into account three different functions: the development of common legal principles; the interpretation of global norms and the influence on sports law-making; and the harmonization of global sports law. Section 4 will consider the relationships between the CAS and public authorities (both public administrations and domestic courts), in order to verify the extent to which the CAS and its judicial system are self-contained and autonomous from States. Lastly, Section 5 will address the importance of creating bodies like the CAS in the global arena, and will identify the main challenges raised by this form of transnational judicial activity. The analysis of the CAS and its role as law-maker, in fact, allows us to shed light on broader global governance trends affecting areas such as the institutional design of global regimes, with specific regard to the separation of powers and the emergence of judicial activities.
Author: Lorenzo Casini (lorenzo.casini@imtlucca.it)

Deposited 2016-03-08 · http://eprints.imtlucca.it/id/eprint/3184
La disciplina degli indicatori globali
In recent decades, numerous indicators have been adopted at the international and global level, for example by the World Bank, the World Health Organization (WHO), and the United Nations Development Programme (UNDP). These instruments now dominate strategic fields such as financial market governance, development policy, and the protection of human rights. Following this enormous diffusion, an ever-growing number of public decisions are now taken on the basis of indicators.
Consequently, especially when such decisions concern the protection and management of public goods and services, the question of how to regulate the production and use of these instruments has become very important, and it is tied to problems of legitimacy and accountability. The paper examines the problems arising from the regulation of global indicators. Indicators may indeed have an intrinsically "normative" nature, but not all of them require dedicated regulation to ensure adequate forms of legitimacy and accountability: whether an ad hoc regime is needed depends on various factors, such as the type of indicator, the characteristics of the body that produces it, and the nature of the users involved. It is therefore useful to provide a taxonomy of the different types of indicators, in order to distinguish the cases in which they serve to increase the accountability of States and international organizations from those in which the indicators themselves require oversight by public authorities.
Authors: Sabino Cassese, Lorenzo Casini (lorenzo.casini@imtlucca.it)

Deposited 2016-02-08 · http://eprints.imtlucca.it/id/eprint/3038
La globalizzazione giuridica dei beni culturali
Cultural property offers a significant yet ambiguous example of the development of global regulatory regimes beyond the State. On the one hand, traditional international law instruments do not seem to ensure an adequate level of protection for cultural heritage; securing such protection requires procedures, norms, and standards produced by global institutions, both public (such as Unesco) and private (such as the International Council of Museums). On the other hand, a comprehensive global regulatory regime to complement the law of cultural property is still to be achieved.
Instead, more regimes are being established, depending on the kind of properties and public interests at stake. Moreover, the huge cultural bias that dominates the debate about cultural property accentuates the "clash of civilizations" that already underlies the debate about global governance. The analysis of the relationship between globalization and cultural property, therefore, sheds light on broader global governance trends and helps highlight the points of weakness and strength in the adoption of administrative law techniques at the global level.Lorenzo Casinilorenzo.casini@imtlucca.it2016-02-01T10:34:01Z2016-09-14T10:21:17Zhttp://eprints.imtlucca.it/id/eprint/3036This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/30362016-02-01T10:34:01ZValorizzazione del patrimonio culturale pubblico: il prestito e l'esportazione di beni culturaliThe article examines the Italian legal regime in the field of lending and exporting movable cultural property. This sector is governed not only by EU and national legislation, but also by transnational norms, principles, and best practices. The analysis highlights that the current Italian system should be reformed in order to enhance cultural property through new mechanisms of lending. This may ensure more incomes to the cultural institutions and it may also contribute to cultural heritage protection.Lorenzo Casinilorenzo.casini@imtlucca.it2016-02-01T10:32:12Z2016-09-14T10:21:17Zhttp://eprints.imtlucca.it/id/eprint/3035This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/30352016-02-01T10:32:12ZOltre la mitologia giuridica dei beni culturaliThe article focuses on the Italian legal regime of cultural property in order to detect its limits and to suggest possible reforms. In particular, the author deals with three legal myths that, according to him, represent the main obstacles to ameliorate this strategic field. The first one is the Pandora vase, i.e. 
the tendency to expand beyond any reasonable limit the notion of cultural property. The second myth is the ambiguous Chimera, i.e. the shifting and misleading concept of "enhancement" (valorizzazione). The third one is the Sisyphus punishment, which seems to afflict the Italian Ministry for Cultural Heritage and its never-ending reform process. Against these three myths, remedies can be found in a better differentiation of cultural property definitions, administrative tasks, and institutional models.Lorenzo Casinilorenzo.casini@imtlucca.it2016-02-01T10:28:02Z2016-09-14T10:21:16Zhttp://eprints.imtlucca.it/id/eprint/3034This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/30342016-02-01T10:28:02ZIl Tribunale arbitrale dello sportThis paper seeks to examine the structure and functions of the Court of Arbitration for Sport (CAS), so as to highlight, more generally, a number of problems concerning judicial activities at the global level. Section 1 outlines the CAS’ organization and functions, from its inception to the present day. Section 2 focuses on the role of the CAS in creating a lex sportiva, taking into account three different functions: the development of common legal principles; the interpretation of global norms and its influence on sports lawmaking; and the harmonization of global sports law. Section 3 considers the relationships between the CAS and public authorities (both public administrations and domestic courts) to verify the extent to which the CAS and its judicial system are self-contained and autonomous from States. Lastly, Section 4 addresses the importance of creating bodies like the CAS in the global arena, and identifies the main challenges raised by this form of transnational judicial activity.
Indeed, the analysis of the CAS and its role as «law-maker» allows us to shed light on broader global governance trends that affect areas such as the institutional design of global regimes, and, more specifically, the separation of powers and the emergence of judicial activities.Lorenzo Casinilorenzo.casini@imtlucca.it2016-02-01T10:20:18Z2016-09-14T10:21:17Zhttp://eprints.imtlucca.it/id/eprint/3033This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/30332016-02-01T10:20:18ZTowards global administrative systems? The case of sportLorenzo Casinilorenzo.casini@imtlucca.it2015-11-23T13:37:39Z2017-07-18T09:53:36Zhttp://eprints.imtlucca.it/id/eprint/2927This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/29272015-11-23T13:37:39ZTheatre Management: A Debate over the Istanbul City Municipal TheatreYesim Tonga Uriarteyesim.tonga@imtlucca.it2015-11-23T12:58:24Z2017-07-18T09:53:11Zhttp://eprints.imtlucca.it/id/eprint/2926This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/29262015-11-23T12:58:24ZChallenges for Cultural Tourism: Conservation and Sustainable DevelopmentPaula Jimena Matiz LopezYesim Tonga Uriarteyesim.tonga@imtlucca.it2015-11-18T11:03:28Z2016-09-13T09:51:45Zhttp://eprints.imtlucca.it/id/eprint/2919This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/29192015-11-18T11:03:28ZFMRI Compatible Sensing Glove for Hand Gesture MonitoringHere we describe and validate a fabric sensing glove for hand finger movement monitoring. After a quick calibration procedure, and by suitably processing the outputs of the glove, it is possible to estimate hand joint angles in real time. Moreover, we tested the fMRI compatibility of the glove and ran a pilot fMRI experiment on the neural correlates of handshaking during human-to-human and human-to-robot interactions.
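The calibration step described in this abstract — mapping raw glove sensor outputs to joint angles — can be illustrated with a toy least-squares fit. The linear sensor model and all numbers below are illustrative assumptions, not the authors' actual procedure:

```python
# Sketch: calibrate a sensing glove by fitting a linear map from raw
# sensor readings to known joint angles (illustrative assumption: each
# sensor responds approximately linearly to its joint angle).

def fit_linear(readings, angles):
    """Closed-form least-squares fit of angle = slope * reading + intercept."""
    n = len(readings)
    mean_r = sum(readings) / n
    mean_a = sum(angles) / n
    cov = sum((r - mean_r) * (a - mean_a) for r, a in zip(readings, angles))
    var = sum((r - mean_r) ** 2 for r in readings)
    slope = cov / var
    intercept = mean_a - slope * mean_r
    return slope, intercept

def estimate_angle(reading, slope, intercept):
    """Real-time estimate of a joint angle from one raw sensor reading."""
    return slope * reading + intercept

if __name__ == "__main__":
    # Calibration poses: known angles (degrees) vs. recorded sensor outputs.
    angles = [0.0, 30.0, 60.0, 90.0]
    readings = [0.10, 0.40, 0.70, 1.00]  # synthetic, perfectly linear sensor
    slope, intercept = fit_linear(readings, angles)
    print(round(estimate_angle(0.55, slope, intercept), 1))  # -> 45.0
```

In practice one such map (or a multivariate version) would be fitted per joint during the quick calibration phase, after which estimation is a single multiply-add per sample.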
Here we describe how the glove can be used to monitor correct task execution and to improve modeling of the expected hemodynamic responses during fMRI experimental paradigms.Nicola VanelloValentina HartwigEnzo Pasquale ScilingoDaniela BoninoEmiliano Ricciardiemiliano.ricciardi@imtlucca.itAlessandro TognettiPietro Pietrinipietro.pietrini@imtlucca.itDanilo De RossiLuigi LandiniAntonio Bicchi2015-11-18T10:59:48Z2016-09-13T09:49:55Zhttp://eprints.imtlucca.it/id/eprint/2918This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/29182015-11-18T10:59:48ZEvidence of a direct influence between the thalamus and hMT+ independent of V1 in the human brain as measured by fMRIIn the present study we employed Conditional Granger Causality (CGC) and Coherence analysis to investigate whether visual motion-related information reaches the human middle temporal complex (hMT+) directly from the Lateral Geniculate Nucleus (LGN) of the thalamus, bypassing the primary visual cortex (V1). Ten healthy human volunteers underwent brain scan examinations by functional magnetic resonance imaging (fMRI) during two optic flow experiments. In addition to the classical LGN-V1-hMT+ pathway, our results showed a significant direct influence of the blood oxygenation level dependent (BOLD) signal recorded in LGN over that in hMT+, not mediated by V1 activity, which strongly supports the existence of a bilateral pathway that connects LGN directly to hMT+ and serves visual motion processing. Furthermore, we evaluated the relative latencies among areas functionally connected in the processing of visual motion. Using LGN as a reference region, hMT+ exhibited a statistically significant earlier peak of activation as compared to V1. In conclusion, our findings suggest the co-existence of an alternative route that directly links LGN to hMT+, bypassing V1.
This direct pathway may play a significant functional role in the faster detection of motion and may help explain the persistence of unconscious motion detection in individuals with severe destruction of primary visual cortex (blindsight).Anna GaglianeseMauro CostagliGiulio BernardiEmiliano Ricciardiemiliano.ricciardi@imtlucca.itPietro Pietrinipietro.pietrini@imtlucca.it2015-11-18T10:56:40Z2016-09-13T09:49:20Zhttp://eprints.imtlucca.it/id/eprint/2917This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/29172015-11-18T10:56:40ZCovert brand recognition engages
emotion-specific brain networksConsumer goods’ brands have become a major driver of consumers’ choice: they have acquired symbolic, relational and even social properties that add substantial cultural and affective value to goods and services. Therefore, measuring the role of brands in consumers’ cognitive and affective processes would be very helpful to better understand economic decision making. This work aimed at finding the neural correlates of the automatic, spontaneous emotional response to brands, showing how deeply consumption symbols are integrated within the cognitive and affective processes of individuals. Functional magnetic resonance imaging (fMRI) was measured during a visual oddball paradigm consisting of the presentation of scrambled pictures as frequent stimuli, colored squares as targets, and brands and emotional pictures (selected from the International Affective Picture System [IAPS]) as emotionally salient distractors. Affective rating of brands was assessed individually after scanning by a validated questionnaire. Results showed that, similarly to IAPS pictures, brands activated a well-defined emotional network, including the amygdala and dorsolateral prefrontal cortex, highly specific to affective valence. In conclusion, this work
identified the neural correlates of brands within cognitive and affective processes of consumers.Silvia CasarottoEmiliano Ricciardiemiliano.ricciardi@imtlucca.itS. RomaniDaniele DalliPietro Pietrinipietro.pietrini@imtlucca.it2015-11-18T10:45:00Z2016-09-13T09:48:25Zhttp://eprints.imtlucca.it/id/eprint/2916This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/29162015-11-18T10:45:00ZTouching Motion: rTMS on the Human Middle Temporal Complex Interferes with Tactile Speed PerceptionBrain functional and psychophysical studies have clearly demonstrated that visual motion perception relies on the activity of the middle temporal complex (hMT+). However, recent studies have shown that hMT+ seems to be also activated during tactile motion perception, suggesting that this visual extrastriate area is involved in the processing and integration of motion, irrespective of the sensory modality. In the present study, we used repetitive transcranial magnetic stimulation (rTMS) to assess whether hMT+ plays a causal role in tactile motion processing. Blindfolded participants detected changes in the speed of a grid of tactile moving points with their finger (i.e. tactile modality). The experiment included three different conditions: a control condition with no TMS and two TMS conditions, i.e. hMT+-rTMS and posterior parietal cortex (PPC)-rTMS. Accuracies were significantly impaired during hMT+-rTMS but not in the other two conditions (No-rTMS or PPC-rTMS); moreover, thresholds for detecting speed changes were significantly higher in the hMT+-rTMS condition with respect to the control TMS conditions.
These findings provide stronger evidence that the activity of the hMT+ area is involved in tactile speed processing, which may be consistent with the hypothesis of a supramodal role for that cortical region in motion processing.Demis BassoAndrea PavanEmiliano Ricciardiemiliano.ricciardi@imtlucca.itSabrina FagioliTomaso VecchiCarlo MiniussiPietro Pietrinipietro.pietrini@imtlucca.it2015-11-18T10:41:55Z2016-09-13T09:50:51Zhttp://eprints.imtlucca.it/id/eprint/2915This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/29152015-11-18T10:41:55ZThe neural mechanisms of reliability weighted integration of shape information from vision and touchBehaviourally, humans have been shown to integrate multisensory information in a statistically optimal fashion by averaging the individual unisensory estimates according to their relative reliabilities. This form of integration is optimal in that it yields the most reliable (i.e. least variable) multisensory percept. The present study investigates the neural mechanisms underlying integration of visual and tactile shape information at the macroscopic scale of the regional BOLD response. Observers discriminated the shapes of ellipses that were presented bimodally (visual–tactile) or visually alone. A 2 × 5 factorial design manipulated (i) the presence vs. absence of tactile shape information and (ii) the reliability of the visual shape information (five levels). We then investigated whether regional activations underlying tactile shape discrimination depended on the reliability of visual shape. Indeed, in primary somatosensory cortices (bilateral BA2) and the superior parietal lobe the responses to tactile shape input were increased when the reliability of visual shape information was reduced. Conversely, tactile inputs suppressed visual activations in the right posterior fusiform gyrus, when the visual signal was blurred and unreliable.
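The reliability-weighted averaging described in this abstract has a simple closed form: each unisensory estimate is weighted by its inverse variance (precision). A minimal numerical sketch — the estimates and variances below are made-up values, not data from the study:

```python
# Sketch of reliability-weighted (inverse-variance) cue combination:
# the multisensory estimate is the precision-weighted average of the
# unisensory estimates, and its variance is lower than either input's.

def integrate(est_v, var_v, est_t, var_t):
    """Optimally combine a visual and a tactile estimate."""
    w_v = 1.0 / var_v          # precision (reliability) of vision
    w_t = 1.0 / var_t          # precision (reliability) of touch
    combined = (w_v * est_v + w_t * est_t) / (w_v + w_t)
    combined_var = 1.0 / (w_v + w_t)
    return combined, combined_var

if __name__ == "__main__":
    # Hypothetical shape estimates (e.g. ellipse aspect ratio) per modality.
    est, var = integrate(1.20, 0.04, 1.00, 0.01)
    print(round(est, 3), round(var, 3))  # -> 1.04 0.008
```

Note how the combined estimate is pulled toward the more reliable (tactile) cue, and the combined variance is below both unisensory variances — the "most reliable percept" property the abstract refers to.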
Somatosensory and visual cortices may sustain integration of visual and tactile shape information either via direct connections from visual areas or top-down effects from higher order parietal areas.Hannah B. HelbigMarc O. ErnstEmiliano Ricciardiemiliano.ricciardi@imtlucca.itPietro Pietrinipietro.pietrini@imtlucca.itAxel ThielscherKatja M. MayerJohannes SchultzUta Noppeney2015-11-18T10:34:31Z2017-08-04T10:19:56Zhttp://eprints.imtlucca.it/id/eprint/2913This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/29132015-11-18T10:34:31ZSpatial processing in the human dorsal pathway relies on supramodal functional connectivity mapsLuca Cecchettiluca.cecchetti@imtlucca.itGiacomo HandjarasGiulio BernardiDaniela BoninoEmiliano Ricciardiemiliano.ricciardi@imtlucca.itPietro Pietrinipietro.pietrini@imtlucca.it2015-11-18T10:30:20Z2016-09-13T09:51:22Zhttp://eprints.imtlucca.it/id/eprint/2912This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/29122015-11-18T10:30:20ZCholinergic enhancement reduces functional connectivity and BOLD variability in visual extrastriate cortex during selective attentionEmiliano Ricciardiemiliano.ricciardi@imtlucca.itGiacomo HandjarasGiulio BernardiPietro Pietrinipietro.pietrini@imtlucca.itMaura L. 
Furey2015-11-18T10:18:00Z2016-09-13T09:50:23Zhttp://eprints.imtlucca.it/id/eprint/2911This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/29112015-11-18T10:18:00ZVentral and Dorsal Stream Dissociation During Action Recognition in the Human BrainGiacomo HandjarasGiulio BernardiPietro Pietrinipietro.pietrini@imtlucca.itEmiliano Ricciardiemiliano.ricciardi@imtlucca.it2015-11-17T11:42:10Z2017-03-27T12:43:14Zhttp://eprints.imtlucca.it/id/eprint/2910This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/29102015-11-17T11:42:10ZScreening for C9ORF72 repeat expansion in FTLDIn the present study we aimed to determine the prevalence of C9ORF72 GGGGCC hexanucleotide expansion in our cohort of 53 frontotemporal lobar degeneration (FTLD) patients and 174 neurologically normal controls. We identified the hexanucleotide repeat, in the pathogenic range, in 4 out of 53 patients (2 with behavioural-variant frontotemporal dementia (bvFTD) and 2 with FTD-amyotrophic lateral sclerosis (FTD-ALS)) and in 1 neurologically normal control. Interestingly, 2 of the C9ORF72 expansion carriers also carried 2 novel missense mutations in GRN (Y294C) and in PSEN-2 (I146V). Further, 1 of the C9ORF72 expansion carriers, for whom pathology was available, showed amyloid plaques and tangles in addition to TAR (trans-activation response) DNA-binding protein (TDP)-43 pathology. In summary, our findings suggest that the hexanucleotide expansion is probably associated with ALS, FTD, or FTD-ALS and occasional comorbid conditions such as Alzheimer's disease. These findings are novel and need to be cautiously interpreted and most importantly replicated in larger numbers of samples.Raffaele FerrariKin MokJorge H. MorenoStephanie CosentinoJill GoldmanPietro Pietrinipietro.pietrini@imtlucca.itRichard MayeuxMichael C. TierneyDimitrios KapogiannisGregory A. JichaJill R. MurrellBernardino GhettiEric M. WassermannJordan GrafmanJohn HardyEdward D.
HueyParastoo Momeni2015-11-17T11:25:36Z2016-09-13T09:48:43Zhttp://eprints.imtlucca.it/id/eprint/2907This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/29072015-11-17T11:25:36ZExpertise modulates brain activity during passive driving: a study in professional and naïve driversGiulio BernardiEmiliano Ricciardiemiliano.ricciardi@imtlucca.itGiacomo HandjarasFerdinando FranzoniFabio GalettaGino SantoroPietro Pietrinipietro.pietrini@imtlucca.it2015-11-16T15:41:42Z2015-11-16T15:41:42Zhttp://eprints.imtlucca.it/id/eprint/2904This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/29042015-11-16T15:41:42ZLocalized O6-plane solutions with Romans massOrientifold solutions have an unphysical region around their source; for the O6, the singularity is resolved in M-theory by the Atiyah-Hitchin metric. Massive IIA, however, does not admit an eleven-dimensional lift, and one wonders what happens to the O6 there. In this paper, we find evidence for the existence of localized (unsmeared) O6 solutions in presence of Romans mass, in the context of four-dimensional compactifications. As a first step, we show that for generic supersymmetric compactifications, the Bianchi identity for the F 4 RR field follows from constancy of F 0. Using this, we find a procedure to deform any O6-D6 Minkowski compactification at first order in F 0. For a single O6, some of the symmetries of the massless solution are broken, but what is left is still enough to obtain a system of ODEs with as many variables as equations. 
Numerical analysis indicates that Romans mass makes the unphysical region disappear.Fabio Saraccofabio.saracco@imtlucca.itAlessandro Tomasiello2015-11-11T10:34:47Z2017-08-04T10:19:38Zhttp://eprints.imtlucca.it/id/eprint/2891This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/28912015-11-11T10:34:47ZBrain modeling of noun representations in sighted and blind individualsGiacomo HandjarasEmiliano Ricciardiemiliano.ricciardi@imtlucca.itA. LenciAndrea LeoLuca Cecchettiluca.cecchetti@imtlucca.itG. MarottaPietro Pietrinipietro.pietrini@imtlucca.it2015-11-11T09:50:24Z2015-11-11T09:50:24Zhttp://eprints.imtlucca.it/id/eprint/2889This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/28892015-11-11T09:50:24ZSensory Deprivation and Brain PlasticityMaurice PtitoRon KupersSteve LomberPietro Pietrinipietro.pietrini@imtlucca.it2015-11-10T13:47:06Z2015-11-10T13:47:06Zhttp://eprints.imtlucca.it/id/eprint/2888This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/28882015-11-10T13:47:06ZIl cervello violentoPietro Pietrinipietro.pietrini@imtlucca.it2015-11-10T13:34:50Z2015-11-10T13:34:50Zhttp://eprints.imtlucca.it/id/eprint/2886This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/28862015-11-10T13:34:50ZArtisticaMente: cervello, creatività, giudizio estetico = Artistically Minded: the brain, creativity and aesthetic judgmentPietro Pietrinipietro.pietrini@imtlucca.itMario Guazzelli2015-11-10T13:21:36Z2016-09-13T09:51:04Zhttp://eprints.imtlucca.it/id/eprint/2884This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/28842015-11-10T13:21:36ZIncreased BOLD Variability in the Parietal Cortex and Enhanced Parieto-Occipital Connectivity during Tactile Perception in Congenitally Blind IndividualsPrevious studies in early blind individuals posited a possible role of parieto-occipital connections in conveying nonvisual information to the visual occipital 
cortex. As a consequence of blindness, parietal areas would thus become able to integrate a greater amount of multimodal information than in sighted individuals. To verify this hypothesis, we compared fMRI-measured BOLD signal temporal variability, an index of efficiency in functional information integration, in congenitally blind and sighted individuals during tactile spatial discrimination and motion perception tasks. In both tasks, the BOLD variability analysis revealed many cortical regions with a significantly greater variability in the blind as compared to sighted individuals, with an overlapping cluster located in the left inferior parietal/anterior intraparietal cortex. A functional connectivity analysis using this region as seed showed stronger correlations in both tasks with occipital areas in the blind as compared to sighted individuals. As BOLD variability reflects neural integration and processing efficiency, these cross-modal plastic changes in the parietal cortex, even if described in a limited sample, reinforce the hypothesis that this region may play an important role in processing nonvisual information in blind subjects and act as a hub in the cortico-cortical pathway from somatosensory cortex to the reorganized occipital areas.Andrea LeoGiulio BernardiGiacomo HandjarasDaniela BoninoEmiliano Ricciardiemiliano.ricciardi@imtlucca.itPietro Pietrinipietro.pietrini@imtlucca.it2015-11-10T13:11:49Z2018-03-06T13:22:31Zhttp://eprints.imtlucca.it/id/eprint/2882This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/28822015-11-10T13:11:49ZContinuità nel pensiero strategico e longevità economicaNicola Lattanzinicola.lattanzi@imtlucca.itGiuseppina RotaPietro Pietrinipietro.pietrini@imtlucca.it2015-11-10T13:05:06Z2016-09-13T09:51:33Zhttp://eprints.imtlucca.it/id/eprint/2881This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/28812015-11-10T13:05:06ZWhere the brain appreciates the final state of an event: 
The neural correlates of telicityIn this study we investigated whether the human brain distinguishes between telic events that necessarily entail a specified endpoint (e.g., reaching), and atelic events with no delimitation or final state (e.g., chasing). We used functional magnetic resonance imaging to explore the patterns of neural response associated with verbs denoting telic and atelic events, and found that the left posterior middle temporal gyrus (pMTG), an area consistently engaged by verb processing tasks, showed a significantly higher activation for telic compared with atelic verbs. These results provide the first evidence that the human brain appreciates whether events lead to an end or a change of state. Moreover, they provide an explanation for the long-debated question of which verb properties modulate neural activity in the left pMTG, as they indicate that, independently of any other semantic property, verb processing and event knowledge in this area are specifically related to the representation of telicity.Domenica RomagnoGiuseppina RotaEmiliano Ricciardiemiliano.ricciardi@imtlucca.itPietro Pietrinipietro.pietrini@imtlucca.it2015-11-10T13:02:15Z2016-09-13T09:50:07Zhttp://eprints.imtlucca.it/id/eprint/2879This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/28792015-11-10T13:02:15ZEmotional dysregulation in social anxiety: insights from an fMRI resting-state studyThe relationship between Social Phobia (SP) and subclinical social anxiety (SA), as well as with normal shyness, is not completely defined. We used the Hurst Exponent (HE) to test the hypothesis that, even in individuals who are not socially anxious, relevant regions for the neurobiology of SP will display a relation between Social Anxiety levels as measured by psychological scales and HE of the BOLD signal. Resting-state fMRI time series were recorded in 26 subjects (12 F; mean age ± SD = 26 ± 3).
All the subjects were drug-free and did not report any psychiatric disorder in their anamnesis. Each subject completed the following scales: Brief Fear of Negative Evaluation (BFNE), Interaction Anxiousness Scale (IAS), Liebowitz Social Anxiety Scale (LSAS), Social Anxiety Spectrum Self-Report (SHI-SR) and State-Trait Anxiety Scale. The Hurst exponent was estimated by using the discrete second-order derivative approach and its relationship with SA was tested in the whole brain and in regions known to be involved in SP. LSAS score predicted the HE in the anterior cingulate cortex (ACC), amygdala, cerebellum (all negatively) and precuneus (positively). ROI analysis showed an inverse correlation between LSAS and SHI-SR scores and HE in the amygdala and a direct correlation between IAS and BFNE scores in the precuneus. Our results suggest that the brain pattern of spontaneous activity is influenced by the degree of SA on a continuum in relevant regions for reappraisal and emotional regulation. We discuss our results in the framework of available knowledge on SA including the Clark and Wells (1995) model of SA and the etiologic theories on emotional dysregulation in SA.Claudio GentiliNicola VanelloIoana CristeaEmiliano Ricciardiemiliano.ricciardi@imtlucca.itDaniel DavidPietro Pietrinipietro.pietrini@imtlucca.itMario Guazzelli2015-11-06T11:17:03Z2015-11-06T11:17:03Zhttp://eprints.imtlucca.it/id/eprint/2845This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/28452015-11-06T11:17:03ZMeasuring Quality, Reputation and Trust in Online CommunitiesIn the Internet era the information overload and the challenge to detect quality content have raised the issue of how to rank both resources and users in online communities.
In this paper we develop a general ranking method that can simultaneously evaluate users’ reputation and objects’ quality in an iterative procedure, and that exploits the trust relationships and social acquaintances of users as an additional source of information. We test our method on two real online communities, the EconoPhysics forum and the Last.fm music catalogue, and determine how different variants of the algorithm influence the resultant ranking. We show the benefits of considering trust relationships, and identify the variant of the algorithm best suited to common situations.Hao LiaoGiulio Ciminigiulio.cimini@imtlucca.itMatúš Medo2015-11-06T11:14:47Z2015-11-06T11:14:47Zhttp://eprints.imtlucca.it/id/eprint/2844This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/28442015-11-06T11:14:47ZRemoving spurious interactions in complex networksIdentifying and removing spurious links in complex networks is meaningful for many real applications and is crucial for improving the reliability of network data, which, in turn, can lead to a better understanding of the highly interconnected nature of various social, biological, and communication systems. In this paper, we study the features of different simple spurious link elimination methods, revealing that they may lead to the distortion of networks’ structural and dynamical properties. Accordingly, we propose a hybrid method that combines a similarity-based index with edge-betweenness centrality.
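A minimal sketch of such a hybrid filter, with Jaccard neighbourhood similarity standing in for the similarity index and a brute-force edge-betweenness computation — all choices here are illustrative and suitable only for toy graphs, not the authors' actual algorithm:

```python
# Sketch of a hybrid spurious-link filter: score every edge by combining a
# neighbourhood-similarity index (Jaccard, an illustrative choice) with
# normalized edge-betweenness centrality, then drop the lowest-scoring
# edges while keeping the network connected.
from collections import deque
from itertools import combinations

def make_graph(edges):
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    return adj

def jaccard(adj, u, v):
    nu, nv = adj[u] - {v}, adj[v] - {u}
    union = nu | nv
    return len(nu & nv) / len(union) if union else 0.0

def shortest_paths(adj, s, t):
    """Enumerate all shortest s-t paths (BFS distances + backward walk)."""
    dist, queue = {s: 0}, deque([s])
    while queue:
        x = queue.popleft()
        for y in adj[x]:
            if y not in dist:
                dist[y] = dist[x] + 1
                queue.append(y)
    if t not in dist:
        return []
    paths, stack = [], [[t]]
    while stack:
        path = stack.pop()
        if path[0] == s:
            paths.append(path)
            continue
        for y in adj[path[0]]:
            if dist.get(y) == dist[path[0]] - 1:
                stack.append([y] + path)
    return paths

def edge_scores(adj, lam=1.0):
    """score(e) = jaccard(e) + lam * betweenness(e) / max betweenness."""
    bet = {frozenset((u, v)): 0.0 for u in adj for v in adj[u]}
    for s, t in combinations(sorted(adj), 2):
        paths = shortest_paths(adj, s, t)
        for p in paths:  # each pair distributes one unit over its paths
            for a, b in zip(p, p[1:]):
                bet[frozenset((a, b))] += 1.0 / len(paths)
    max_b = max(bet.values()) or 1.0
    return {e: jaccard(adj, *tuple(e)) + lam * b / max_b
            for e, b in bet.items()}

def connected(adj):
    seen, queue = set(), deque([next(iter(adj))])
    while queue:
        x = queue.popleft()
        if x not in seen:
            seen.add(x)
            queue.extend(adj[x])
    return len(seen) == len(adj)

def remove_spurious(adj, k, lam=1.0):
    """Delete the k lowest-scoring edges that do not disconnect the graph."""
    scores = edge_scores(adj, lam)
    for e in sorted(scores, key=scores.get):
        if k == 0:
            break
        u, v = tuple(e)
        adj[u].discard(v); adj[v].discard(u)
        if connected(adj):
            k -= 1
        else:  # the edge was a bridge: restore it
            adj[u].add(v); adj[v].add(u)
    return adj
```

The betweenness term protects bridges (which score high) from deletion, while low-similarity edges with little shortest-path traffic are pruned first — matching the "eliminate spurious links while leaving the network connected" goal stated next.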
We show that our method can effectively eliminate the spurious interactions while leaving the network connected and preserving the network's functionalities.An ZengGiulio Ciminigiulio.cimini@imtlucca.it2015-11-06T11:09:48Z2015-11-06T11:09:48Zhttp://eprints.imtlucca.it/id/eprint/2843This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/28432015-11-06T11:09:48ZEnhancing topology adaptation in information-sharing social networksThe advent of the Internet and World Wide Web has led to unprecedented growth of the available information. People usually cope with the information overload by following a limited number of sources which best fit their interests. It has thus become important to address issues like who gets followed and how to allow people to discover new and better information sources. In this paper we conduct an empirical analysis of different online social networking sites and draw inspiration from its results to present different source selection strategies in an adaptive model for social recommendation. We show that local search rules which enhance the typical topological features of real social communities give rise to network configurations that are globally optimal. These rules create networks which are effective in information diffusion and resemble structures resulting from real social systems.Giulio Ciminigiulio.cimini@imtlucca.itDuanbing ChenMatúš MedoLinyuan LüYi-Cheng ZhangTao Zhou2015-11-05T11:48:40Z2018-03-08T17:03:03Zhttp://eprints.imtlucca.it/id/eprint/2817This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/28172015-11-05T11:48:40ZThe Role of Distances in the World Trade WebIn the economic literature, geographic distances are considered fundamental factors to be included in any theoretical model whose aim is the quantification of the trade between countries. Quantitatively, distances enter into the so-called gravity models that successfully predict the weight of non-zero trade flows.
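The gravity models just mentioned predict bilateral trade flows from economic masses and distance, in analogy with Newtonian gravity. A minimal sketch — the GDP and distance figures are arbitrary illustrative numbers, and G and gamma would be fitted to data in practice:

```python
# Sketch of a trade gravity model: the expected flow between countries i, j
# scales with the product of their GDPs and decays with distance,
#   T_ij = G * GDP_i * GDP_j / d_ij ** gamma

def gravity_flow(gdp_i, gdp_j, distance, G=1.0, gamma=1.0):
    """Predicted trade flow between two countries (illustrative constants)."""
    return G * gdp_i * gdp_j / distance ** gamma

if __name__ == "__main__":
    # Doubling the distance halves the predicted flow when gamma = 1.
    near = gravity_flow(3.0, 2.0, 1000.0)
    far = gravity_flow(3.0, 2.0, 2000.0)
    print(near / far)  # -> 2.0
```

As the next sentence notes, this functional form fits the *weights* of existing flows well but says nothing about which flows are zero, which motivates the exponential-random-graph approach taken in the paper.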
However, it has been recently shown that gravity models fail to reproduce the binary topology of the World Trade Web. In this paper a different approach is presented: the formalism of exponential random graphs is used and the distances are treated as constraints, to be imposed on a previously chosen ensemble of graphs. Then, the information encoded in the geographical distances is used to explain the binary structure of the World Trade Web, by testing it on the degree-degree correlations and the reciprocity structure. This leads to the definition of a novel null model that combines spatial and non-spatial effects. The effectiveness of spatial constraints is compared to that of non-spatial ones by means of the Akaike Information Criterion and the Bayesian Information Criterion. Although it is commonly believed that the World Trade Web is strongly dependent on distances, what emerges from our analysis is that distances do not play a crucial role in shaping the World Trade Web binary structure and that the information encoded in the reciprocity is far more useful in explaining the observed patterns.Francesco PiccioloFranco RuzzenentiRiccardo BasosiTiziano Squartinitiziano.squartini@imtlucca.itDiego Garlaschellidiego.garlaschelli@imtlucca.it2015-11-05T11:38:15Z2018-03-08T17:03:33Zhttp://eprints.imtlucca.it/id/eprint/2815This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/28152015-11-05T11:38:15ZComplex Networks Approach to the Italian Photovoltaic Energy Distribution SystemOne problem in the study of the Italian electric energy supply scenario is determining the ability of
photovoltaic production to provide a constant and stable energy background over space and time. Knowing how the photovoltaic energy produced in a given node diffuses over the power grid is of crucial importance for designing a smart grid able to handle load peaks. Approached here from a complex-systems point of view, the network of energy supply can be represented by a graph in which nodes are Italian municipalities and edges cross the administrative boundaries from a municipality to its first neighbours. Using datasets from ISTAT, GSE and ENEA, the node production and attraction of photovoltaic energy have been estimated with high accuracy. The attraction index was built using demographic data, in accordance with average per capita energy consumption data. Moreover, the energy produced in each node could be determined using data on the installed photovoltaic power and on local solar radiation. The available energy on each node was calculated by running a distributive model assuming that the share of energy produced in one node that diffuses to each of its first neighbours is proportional to the attraction index of the latter. Therefore the available energy at each node is the sum of many contributions, coming from topological paths involving all the other nodes across the network. The availability of cross-temporal data on the photovoltaic power installed on the Italian territory
also makes it possible to understand the evolution of the available photovoltaic energy landscape over time.Luca ValoriGiovanni Luca GiannuzziTiziano Squartinitiziano.squartini@imtlucca.itDiego Garlaschellidiego.garlaschelli@imtlucca.itRiccardo Basosi2015-11-05T11:24:01Z2018-03-08T17:03:45Zhttp://eprints.imtlucca.it/id/eprint/2814This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/28142015-11-05T11:24:01ZTriadic Motifs and Dyadic Self-Organization in the World Trade NetworkIn self-organizing networks, topology and dynamics coevolve in a continuous feedback, without exogenous driving. The World Trade Network (WTN) is one of the few empirically well documented examples of self-organizing networks: its topology depends on the GDP of world countries, which in turn depends on the structure of trade. Therefore, understanding the WTN topological properties deviating from randomness provides direct empirical information about the structural effects of self-organization. Here, using an analytical pattern-detection method we have recently proposed, we study the occurrence of triadic ‘motifs’ (three-vertex subgraphs) in the WTN between 1950 and 2000. We find that motifs are not explained by the in- and out-degree sequences alone, but are completely explained once the numbers of reciprocal edges are also taken into account.
This implies that the self-organization process underlying the evolution of the WTN is almost completely encoded in the dyadic structure, which strongly depends on reciprocity.Tiziano Squartinitiziano.squartini@imtlucca.itDiego Garlaschellidiego.garlaschelli@imtlucca.it2015-03-26T11:45:05Z2015-03-26T11:45:05Zhttp://eprints.imtlucca.it/id/eprint/2447This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/24472015-03-26T11:45:05ZRobust pole placement for plants with semialgebraic parametric uncertaintyIn this paper we address the problem of robust pole placement for linear time-invariant systems whose uncertain parameters are assumed to belong to a semialgebraic region. A dynamic controller is designed in order to constrain the coefficients of the closed-loop characteristic polynomial within prescribed intervals. Two main topics arising from the problem of robust pole placement are tackled by means of polynomial optimization. First, necessary conditions on the plant parameters for the existence of a robust controller are given. Then, the set of all admissible robust controllers is sought. Convex relaxation techniques based on sum-of-squares decomposition of positive polynomials are used to efficiently solve the formulated optimization problems through semidefinite programming techniques.Vito CeroneDario Pigadario.piga@imtlucca.itDiego Regruto2015-02-11T14:17:18Z2015-02-11T14:17:18Zhttp://eprints.imtlucca.it/id/eprint/2602This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/26022015-02-11T14:17:18ZFluid rewards for a stochastic process algebraReasoning about the performance of models of software systems typically entails the derivation of metrics such as throughput, utilization, and response time. If the model is a Markov chain, these are expressed as real functions of the chain, called reward models.
The computational complexity of reward-based metrics is of the same order as the solution of the Markov chain, making the analysis infeasible when evaluating large-scale systems. In the context of the stochastic process algebra PEPA, the underlying continuous-time Markov chain has been shown to admit a deterministic (fluid) approximation as a solution of an ordinary differential equation, which effectively circumvents state-space explosion. This paper is concerned with approximating Markovian reward models for PEPA with fluid rewards, i.e., functions of the solution of the differential equation problem. It shows that (1) the Markovian reward models for typical metrics of performance enjoy asymptotic convergence to their fluid analogues, and that (2) via numerical tests, the approximation yields satisfactory accuracy in practice.Mirco Tribastonemirco.tribastone@imtlucca.itJie DingStephen GilmoreJane Hillston2015-02-11T14:14:02Z2015-02-11T14:14:02Zhttp://eprints.imtlucca.it/id/eprint/2601This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/26012015-02-11T14:14:02ZScalable differential analysis of process algebra modelsThe exact performance analysis of large-scale software systems with discrete-state approaches is difficult because of the well-known problem of state-space explosion. This paper considers this problem with regard to the stochastic process algebra PEPA, presenting a deterministic approximation to the underlying Markov chain model based on ordinary differential equations. 
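A toy version of such a fluid (ODE) approximation can be sketched for a population of identical two-state agents; the flip rates, population size, and forward-Euler integration are illustrative assumptions, not the PEPA semantics used in the paper.

```python
# A sketch of the fluid approximation idea: instead of a CTMC over all
# configurations of N agents, track the expected population x(t) of
# agents in local state 1, with flip rates a (1 -> 0) and b (0 -> 1).
# Values and the forward-Euler step size are illustrative choices.
N, a, b = 1000.0, 2.0, 1.0
x, dt = 0.0, 0.001
for _ in range(20000):             # integrate dx/dt = b*(N - x) - a*x
    x += dt * (b * (N - x) - a * x)

equilibrium = b * N / (a + b)      # analytical fixed point of the ODE
print(round(x, 3), round(equilibrium, 3))
```

The single ODE replaces a chain whose state space grows exponentially in N, which is the sense in which the fluid semantics circumvents state-space explosion.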
The accuracy of the approximation is assessed by means of a substantial case study of a distributed multithreaded application.Mirco Tribastonemirco.tribastone@imtlucca.itStephen GilmoreJane Hillston2015-02-11T14:10:28Z2015-02-11T14:10:28Zhttp://eprints.imtlucca.it/id/eprint/2600This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/26002015-02-11T14:10:28ZStochastic process algebras: from individuals to populationsIn this paper we report on progress in the use of stochastic process algebras for representing systems which contain many replications of components such as clients, servers and devices. Such systems have traditionally been difficult to analyse even when using high-level models because of the need to represent the vast range of their potential behaviour. Models of concurrent systems with many components very quickly exceed the storage capacity of computing devices even when efficient data structures are used to minimize the cost of representing each state. Here, we show how population-based models that make use of a continuous approximation of the discrete behaviour can be used to efficiently analyse the temporal behaviour of very large systems via their collective dynamics. This approach enables modellers to study problems that cannot be tackled with traditional discrete-state techniques such as continuous-time Markov chains. Jane HillstonMirco Tribastonemirco.tribastone@imtlucca.itStephen Gilmore2015-02-10T14:06:19Z2015-07-24T12:31:15Zhttp://eprints.imtlucca.it/id/eprint/2587This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/25872015-02-10T14:06:19ZExact fluid lumpability for Markovian process algebraWe study behavioural relations for process algebra with a fluid semantics given in terms of a system of ordinary differential equations (ODEs). 
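The contrast drawn above between tracking individuals and tracking populations can be made concrete with a simple count; the two-local-state assumption is an illustrative simplification.

```python
# Illustration of the state-space explosion argument: N identical
# two-state agents have 2**N distinct configurations when tracked
# individually, but only N + 1 population-level states (the number of
# agents currently in state 1).
def individual_states(n):
    return 2 ** n

def population_states(n):
    return n + 1

for n in (10, 20, 100):
    print(n, individual_states(n), population_states(n))
```

At N = 100 the individual view already has more states than can be enumerated, while the population view has 101.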
We introduce label equivalence, a relation which is shown to induce an exactly lumped fluid model, a potentially smaller ODE system which can be exactly related to the original one. We show that, in general, for two processes that are related in the fluid sense nothing can be said about their relationship from a stochastic viewpoint. However, we identify a class of models for which label equivalence implies a correspondence, called semi-isomorphism, between the transition systems that underlie the Markovian interpretation.Max Tschaikowskimax.tschaikowski@imtlucca.itMirco Tribastonemirco.tribastone@imtlucca.it2015-02-10T14:02:29Z2015-07-24T12:28:15Zhttp://eprints.imtlucca.it/id/eprint/2586This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/25862015-02-10T14:02:29ZGeneralised communication for interacting agentsProcess algebras for quantitative evaluation are based on one of the following two mechanisms for communication: binary, where a channel is shared by exactly two agents, or multiway, where all agents sharing a channel must synchronise. In this paper we consider an intermediate form which we call generalised communication, where only m agents out of n potentially available are involved in the communication. We study this in the context of the stochastic process algebra PEPA, of which we conservatively extend the syntax and semantics. We give an intuitive interpretation in terms of bandwidth assignments to agents communicating over a shared medium. We validate this semantics using a real implementation of a simple peer-to-peer protocol, for which our performance model yields predictions with high accuracy. We prove a result of lumpability that exploits symmetries between identical communicating agents, yielding good scalability of the underlying continuous-time Markov chain (CTMC) with respect to increasing population levels. 
Furthermore, we present an algorithm that derives the lumped chain directly, without having to generate the full CTMC first.Max Tschaikowskimax.tschaikowski@imtlucca.itMirco Tribastonemirco.tribastone@imtlucca.it2015-02-10T13:50:25Z2015-02-10T13:50:25Zhttp://eprints.imtlucca.it/id/eprint/2585This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/25852015-02-10T13:50:25ZPerformance modeling of design patterns for distributed computationIn software engineering, design patterns are commonly used and represent robust solution templates to frequently occurring problems in software design and implementation. In this paper, we consider performance simulation for two design patterns for processing of parallel messaging. We develop continuous-time Markov chain models of two commonly used design patterns, Half-Sync/Half-Async and Leader/Followers, for their performance evaluation in multicore machines. We propose a unified modeling approach which contemplates a detailed description of the application-level logic and abstracts away from operating system calls and complex locking and networking application programming interfaces. By means of a validation study against implementations on a 16-core machine, we show that the models accurately predict peak throughputs and variation trends with increasing concurrency levels for a wide range of message processing workloads. 
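The idea of a lumped chain over symmetric agent populations, mentioned above, can be cross-checked on a tiny instance; the rates, the independence of the agents, and the brute-force verification are illustrative assumptions, not the paper's derivation algorithm.

```python
# A sketch (made-up rates) of lumping a CTMC of n identical two-state
# agents: aggregating configurations by the number k of agents in state 1
# yields a birth-death chain with up-rate (n-k)*b and down-rate k*a,
# without enumerating the full 2**n state space.
from itertools import product

n, a, b = 3, 2.0, 1.0   # a: rate 1 -> 0, b: rate 0 -> 1 (assumptions)

def lumped_rates(k):
    """(birth, death) rates of the lumped chain from population level k."""
    return (n - k) * b, k * a

# Cross-check against the full chain: the total outgoing rate from any
# configuration with k ones must equal the lumped chain's total rate.
for config in product((0, 1), repeat=n):
    k = sum(config)
    up, down = lumped_rates(k)
    full_rate = sum(b if s == 0 else a for s in config)
    assert abs(full_rate - (up + down)) < 1e-12
print("lumping consistent for n =", n)
```

The lumped description has n + 1 states regardless of n, which is the scalability gain the lumpability result formalises.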
We also discuss the limits of our models when memory-level internal contention is not captured.Ronald StrebelowMirco Tribastonemirco.tribastone@imtlucca.itChristian Prehofer2015-02-10T13:31:51Z2015-02-10T13:31:51Zhttp://eprints.imtlucca.it/id/eprint/2584This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/25842015-02-10T13:31:51ZFluid limits of queueing networks with batchesThis paper presents an analytical model for the performance prediction of queueing networks with batch services and batch arrivals, related to the fluid limit of a suitable single-parameter sequence of continuous-time Markov chains and interpreted as the deterministic approximation of the average behaviour of the stochastic process. Notably, the underlying system of ordinary differential equations exhibits discontinuities in the right-hand sides, which however are proven to yield a meaningful solution. A substantial numerical assessment is used to study the quality of the approximation and shows very good accuracy in networks with large job populations.Luca BortolussiMirco Tribastonemirco.tribastone@imtlucca.it2015-01-21T08:37:39Z2015-01-21T08:37:39Zhttp://eprints.imtlucca.it/id/eprint/2536This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/25362015-01-21T08:37:39ZEffects of layered accretion on the mechanics of masonry structuresMasonry constructions are built up in successive layers of bricks or blocks that may have considerable effect on the deformation and equilibrium of these structures when they are statically indeterminate and when gravity loads are predominant. This problem is analyzed by referring to a thick arch that reaches its final shape by means of a continuous deposition of heavy brick layers in stress free condition. 
The brick/block units of the accreting layer are assumed to have a negligible size in comparison to the structural size, and the resulting continuous deposition is described by taking into account their possible sliding on the current extrados at the instant of deposition. The kinematics of the growing body is described by superposing the displacement resulting from the continuing addition of heavy layers onto the initial displacement of the considered point when it is attached to the current extrados, i.e., on the accreting layer. The two corresponding strain tensor fields do not satisfy the equations of compatibility, while the total strain field turns out to be compatible. The stress field is the cumulative effect of the incremental stresses induced by the weight of the layers added during the growing process, together with residual stresses, which are shown to be independent of the initial strain field.
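The cumulative build-up of stress from successive stress-free depositions can be illustrated in a trivially determinate one-dimensional analogue; the unit weight, layer thickness, and layer count are made-up values, and, unlike the statically indeterminate arch studied in the paper, in this determinate case the incremental and final-domain stresses coincide.

```python
# Illustrative 1D analogue of layered accretion (statically determinate,
# so incremental and monolithic loading agree here, unlike the arch in
# the paper): a column grown by depositing m stress-free layers of
# thickness h and unit weight gamma; the compressive stress at the base
# accumulates the weight of everything deposited above it.
gamma, h, m = 20.0, 0.5, 6     # assumed unit weight, layer thickness, layers

stress_incremental = 0.0
for _ in range(m):             # each new layer adds its own weight below
    stress_incremental += gamma * h

stress_monolithic = gamma * h * m   # gravity applied to the final column at once
print(stress_incremental, stress_monolithic)
```

The interesting effects reported in the paper arise precisely where this equality breaks down, i.e., when redundancy makes the stress state depend on the construction history.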
Two examples are analyzed to show the effects of the growing process on the stress field and the properties of the strain field are discussed. The first example concerns a masonry wall supported at periodic points while the second example concerns a segmental multileaf thick arch in which the growing process begins from a thin arch resting on its own weight. In both cases a remarkable increase of the stress field is observed in comparison to the solution where the gravity loads are applied on the final domain.Andrea Bacigalupoandrea.bacigalupo@imtlucca.itLuigi Gambarotta2015-01-20T15:37:29Z2015-01-20T15:37:29Zhttp://eprints.imtlucca.it/id/eprint/2533This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/25332015-01-20T15:37:29ZComputational two-scale homogenization of periodic masonry: Characteristic lengths and dispersive waves The equations of motion of a second-order continuum equivalent to the periodic masonry made of deformable bricks and mortar are obtained and the overall elastic moduli and the inertial properties are evaluated through a homogenization technique derived from the variational-asymptotic approach proposed by Smyshlyaev and Cherednichenko 23. The computational method consists in solving two sequences of cell problems in the standard format of vanishing body forces and prescribed boundary displacements. In the first step the classical first-order homogenization is carried out by solving four cell problems; the second step concerns the second-order homogenization and involves the solution of six additional cell problems. The equations of motion and the wave equation are specialized to the case of centro-symmetric periodic cells and orthotropic material at the macro-scale, conditions that are common in brick masonry. The characteristic lengths and dispersive elastic waves are obtained. The special cases of characteristic lengths and wave propagation along the orthotropy axes are studied. 
In the examples, running bond and English bond masonry are analyzed by varying the stiffness mismatch between the brick and the mortar. In all cases, the obtained characteristic lengths associated with the shear and extensional strains turn out to be a fraction of the periodic cell size and become zero for vanishing stiffness mismatch between the brick and the mortar. For both of the masonry bonds considered here, the characteristic lengths associated with the shear strain are about an order of magnitude larger than those associated with the extensional strain. The characteristic lengths along the direction parallel to the mortar joints prevail over those along the normal direction. In particular, small characteristic lengths are obtained along the direction normal to the bed mortar joints for both the running bond and the English bond masonry. The wave propagation along the orthotropy axes in both the running bond and English bond masonry is analyzed by considering wavelengths that are multiples of the periodic cell size. Dispersive waves propagating along the orthotropy direction parallel to the mortar joints are characterized by velocities that differ quite markedly from the corresponding ones in the classical continuum, and this difference increases with the stiffness mismatch between the brick and mortar. Conversely, along the direction perpendicular to the mortar joints the velocity of the shear waves is approximately equal to that in the classical equivalent continuum. These findings show the qualitative similarity of the mechanical behavior of masonry with layered materials. 
Andrea Bacigalupoandrea.bacigalupo@imtlucca.itLuigi Gambarotta2015-01-20T14:16:03Z2015-01-20T14:17:13Zhttp://eprints.imtlucca.it/id/eprint/2528This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/25282015-01-20T14:16:03ZHigh-continuity multi-scale static and dynamic modelling of periodic materialsThe equations of motion of a second-order continuum representative of a classical heterogeneous periodic material are derived through a variational-asymptotic homogenization technique, and the overall elastic moduli and inertial properties are evaluated. The proposed approach is an extension of a dynamic homogenization method developed by the Authors [9] and [10], which aims to improve the accuracy of the description of the overall inertial terms and of the dispersion functions. This procedure is applied to the case of elastic layered materials with two orthotropic phases having an orthotropy axis parallel to the layers. To evaluate the reliability of the model, the dispersion functions obtained here are compared with those from the analytical model applied to the heterogeneous material [1, 2], and with those obtained by the Authors in the previous approach [9].Andrea Bacigalupoandrea.bacigalupo@imtlucca.itLuigi Gambarotta2015-01-20T14:00:32Z2015-01-20T14:15:24Zhttp://eprints.imtlucca.it/id/eprint/2527This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/25272015-01-20T14:00:32ZStrain localization analysis of layered materials with debonding interfaces by a second-order homogenization approachThe paper is focused on the multiscale modeling of shear banding in a two-phase
linear elastic periodically layered material with damaging interfaces. A layered two-dimensional strip is considered under transverse shear and is assumed to have a finite length along the direction of the layers and an infinite extension along the direction perpendicular to the layering. The structural system has been analysed as a second-gradient continuum in order to incorporate size effects due to the material inhomogeneities and to regularize the softening response due to the interface debonding. The multi-scale approach is based on a second-order homogenization procedure proposed by the Authors, here specialized to the simple case of layered materials. Two problems are analysed, differing in the boundary conditions at the strip edges. The first case considers free warping of the edges, with a classical homogeneous response in the elastic regime followed by a localization process as a result of a bifurcation, in analogy to Chambon’s approach. In the second model warping is inhibited at the edges and damage propagates from the center of the specimen. In both cases the model parameters depend directly on the material microstructure, so that both the extension of the shear band and the occurrence of snap-back in the post-peak phase are given in terms of the constitutive parameters and geometry of the phases.Andrea Bacigalupoandrea.bacigalupo@imtlucca.itLuigi Gambarotta2015-01-20T13:43:46Z2015-01-20T13:43:46Zhttp://eprints.imtlucca.it/id/eprint/2526This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/25262015-01-20T13:43:46ZSecond grade modeling for the strain localization analysis of layered materials with damaging interfacesA second-order computational homogenization procedure for heterogeneous materials with
periodic microstructure is applied to the analysis of a layered strip with a damaging interface subjected to simple shear. The second-gradient model is applied in a strain localization analysis, and localization limiters depending on the geometry and mechanical parameters of the layered material are obtained.Andrea Bacigalupoandrea.bacigalupo@imtlucca.itLuigi Gambarotta2015-01-20T09:13:57Z2015-01-20T09:13:57Zhttp://eprints.imtlucca.it/id/eprint/2517This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/25172015-01-20T09:13:57ZSecond-gradient computational homogenization of periodic materialsAndrea Bacigalupoandrea.bacigalupo@imtlucca.itLuigi Gambarotta2015-01-20T09:00:41Z2015-01-20T09:00:41Zhttp://eprints.imtlucca.it/id/eprint/2516This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/25162015-01-20T09:00:41ZStrain localization analysis of layered materials with soft interfaces based on a second-order homogenization approachAndrea Bacigalupoandrea.bacigalupo@imtlucca.itLuigi Gambarotta2015-01-20T08:54:59Z2015-01-20T09:01:03Zhttp://eprints.imtlucca.it/id/eprint/2515This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/25152015-01-20T08:54:59ZA computational high-continuity approach to the multi-scale static and dynamic modeling of materials with periodic microstructureAndrea Bacigalupoandrea.bacigalupo@imtlucca.itLuigi Gambarotta2015-01-15T13:24:00Z2015-01-15T13:24:00Zhttp://eprints.imtlucca.it/id/eprint/2491This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/24912015-01-15T13:24:00ZA reversible abstract machine and its space overheadWe study in this paper the cost of making a concurrent programming language reversible. More specifically, we take an abstract machine for a fragment of the Oz programming language and make it reversible. We show that the overhead of the reversible machine with respect to the original one in terms of space is at most linear in the number of execution steps. 
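The linear space overhead of reversibility can be illustrated with a toy interpreter; the register-assignment language, the undo-log representation, and the variable names are illustrative assumptions, not the concurrent Oz abstract machine of the paper.

```python
# A toy sketch of making execution reversible with linear space overhead
# (illustrative; the paper treats a concurrent Oz abstract machine, not
# this toy assignment language): each step pushes exactly one undo
# record, so the history grows by a constant amount per execution step.
def run(program, env):
    history = []                          # one entry per step -> linear space
    for name, value in program:           # each step overwrites one variable
        history.append((name, env.get(name)))
        env[name] = value
    return history

def undo(env, history):
    for name, old in reversed(history):   # replay the log backwards
        if old is None:
            del env[name]
        else:
            env[name] = old

env = {}
hist = run([("x", 1), ("y", 2), ("x", 7)], env)
assert env == {"x": 7, "y": 2} and len(hist) == 3
undo(env, hist)
print(env)   # back to the initial (empty) environment
```

The tightness result quoted next says that, in general, one cannot do asymptotically better than such a per-step log.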
We also show that this bound is tight since some programs cannot be made reversible without storing a commensurate amount of information.Michael LienhardtIvan LaneseClaudio Antares Mezzinaclaudio.mezzina@imtlucca.itJean-Bernard Stefani2015-01-13T15:58:46Z2015-01-13T15:58:46Zhttp://eprints.imtlucca.it/id/eprint/2481This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/24812015-01-13T15:58:46ZLa Sicilia di Luigi LanziEmanuele Pellegriniemanuele.pellegrini@imtlucca.it2015-01-13T13:52:43Z2015-01-13T14:35:42Zhttp://eprints.imtlucca.it/id/eprint/2469This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/24692015-01-13T13:52:43ZOptimization of airborne wind energy generatorsThis paper presents novel results related to an innovative airborne wind energy technology, named Kitenergy, for the conversion of high-altitude wind energy into electricity. The research activities carried out in the last five years, including theoretical analyses, numerical simulations, and experimental tests, indicate that Kitenergy could bring forth a revolution in wind energy generation, providing renewable energy in large quantities at a lower cost than fossil energy. This work investigates three important theoretical aspects: the evaluation of the performance achieved by the employed control law, the optimization of the generator operating cycle, and the possibility to generate continuously a constant and maximal power output. 
These issues are tackled through the combined use of modeling, control, and optimization methods that prove to be key technologies for a significant breakthrough in renewable energy generation.Lorenzo FagianoMario MilaneseDario Pigadario.piga@imtlucca.it2015-01-13T13:40:47Z2015-01-13T13:40:47Zhttp://eprints.imtlucca.it/id/eprint/2468This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/24682015-01-13T13:40:47ZBounded error identification of Hammerstein systems through sparse polynomial optimization In this paper we present a procedure for the evaluation of bounds on the parameters of Hammerstein systems, from output measurements affected by bounded errors. The identification problem is formulated in terms of polynomial optimization, and relaxation techniques, based on linear matrix inequalities, are proposed to evaluate parameter bounds by means of convex optimization. The structured sparsity of the formulated identification problem is exploited to reduce the computational complexity of the relaxed convex problem. Analysis of convergence properties and computational complexity is reported. Vito CeroneDario Pigadario.piga@imtlucca.itDiego Regruto2015-01-13T13:28:37Z2015-01-13T13:28:37Zhttp://eprints.imtlucca.it/id/eprint/2467This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/24672015-01-13T13:28:37ZSet-Membership Error-in-variables identification through convex relaxation techniques In this technical note, the set membership error-in-variables identification problem is considered, that is, the identification of linear dynamic systems when both output and input measurements are corrupted by bounded noise. A new approach for the computation of parameter uncertainty intervals is presented. First, the identification problem is formulated in terms of nonconvex optimization. Then, relaxation techniques based on linear matrix inequalities are employed to evaluate parameter bounds by means of convex optimization. 
The inherent structured sparsity of the original identification problems is exploited to reduce the computational complexity of the relaxed problems. Finally, convergence properties and complexity of the proposed procedure are discussed. Advantages of the presented technique with respect to previously published results are discussed and shown by means of two simulated examples.Vito CeroneDario Pigadario.piga@imtlucca.itDiego Regruto2015-01-12T12:06:11Z2015-01-12T12:06:11Zhttp://eprints.imtlucca.it/id/eprint/2458This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/24582015-01-12T12:06:11ZSM identification of input-output LPV models with uncertain time-varying parametersIn this chapter, we consider the identification of single-input single-output linear-parameter-varying models when both the output and the time-varying parameter measurements are affected by bounded noise. First, the problem of computing exact parameter uncertainty intervals is formulated in terms of semialgebraic optimization. Then, a suitable relaxation technique is presented to compute parameter bounds by means of convex optimization. Advantages of the presented approach with respect to previously published results are discussed.Vito CeroneDario Pigadario.piga@imtlucca.itDiego Regruto2015-01-09T13:37:33Z2015-01-09T13:37:33Zhttp://eprints.imtlucca.it/id/eprint/2453This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/24532015-01-09T13:37:33ZPolytopic outer approximations of semialgebraic setsThis paper deals with the problem of finding a polytopic outer approximation P* of a compact semialgebraic set S ⊆ Rn. The computed polytope turns out to be an approximation of the linear hull of the set S. The evaluation of P* is reduced to the solution of a sequence of robust optimization problems with nonconvex functionals, which are efficiently solved by means of convex relaxation techniques. 
Properties of the presented algorithm and its possible applications in the analysis, identification and control of uncertain systems are discussed.Vito CeroneDario Pigadario.piga@imtlucca.itDiego Regruto2015-01-09T13:32:04Z2015-01-09T13:32:04Zhttp://eprints.imtlucca.it/id/eprint/2452This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/24522015-01-09T13:32:04ZFixed order LPV controller design for LPV models in input-output formIn this work, a new synthesis approach is proposed to design fixed-order H∞ controllers for linear parameter-varying (LPV) systems described by input-output (I/O) models with polynomial dependence on the scheduling variables. First, by exploiting a suitable technique for polytopic outer approximation of semi-algebraic sets, the closed loop system is equivalently rewritten as an LPV I/O model depending affinely on an augmented scheduling parameter vector constrained inside a polytope. Then, the problem is reformulated in terms of bilinear matrix inequalities (BMI) and solved by means of a suitable semidefinite relaxation technique.Vito CeroneDario Pigadario.piga@imtlucca.itDiego RegrutoRoland Tóth2015-01-09T12:49:50Z2015-01-09T12:49:50Zhttp://eprints.imtlucca.it/id/eprint/2451This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/24512015-01-09T12:49:50ZBounded-error identification of linear systems with input and output backlashIn this paper we present a single-stage procedure for computing bounds on the parameters of linear systems with input and output backlash from output data corrupted by bounded measurement noise. By properly selecting a sequence of input/output measurements, the problem of evaluating parameter bounds is formulated as a collection of sparse nonconvex optimization problems. 
Convex-relaxation techniques are exploited to efficiently compute guaranteed bounds on system parameters by means of semidefinite programming.Vito CeroneDario Pigadario.piga@imtlucca.itDiego Regruto2015-01-09T12:25:17Z2015-01-09T12:25:17Zhttp://eprints.imtlucca.it/id/eprint/2450This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/24502015-01-09T12:25:17ZFIR approximation of linear systems from quantized recordsIn this paper we consider the problem of identifying a fixed-order FIR approximation of linear systems with unknown structure, assuming that both input and output measurements are subject to quantization. In particular, a FIR model of given order which provides the best approximation of the input-output relationship is sought by minimizing the worst-case distance between the output of the true system and the modeled output, for all possible values of the input and output data consistent with their quantized measurements. First, we show that the considered problem can be formulated in terms of robust optimization. Then, we present two different algorithms to compute the optimum of the formulated problem by means of linear programming techniques. The effectiveness of the proposed approach is illustrated by means of a simulation example.Vito CeroneDario Pigadario.piga@imtlucca.itDiego Regruto2015-01-09T12:12:01Z2015-01-09T12:12:01Zhttp://eprints.imtlucca.it/id/eprint/2449This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/24492015-01-09T12:12:01ZLPV identification of the glucose-insulin dynamics in Type I DiabetesIn this paper we address the problem of identifying a linear parameter varying (LPV) model of the glucose-insulin dynamics in Type I diabetic patients. First, the identification problem is formulated in the framework of bounded-error identification, then an algorithm for parameter bounds computation, based on semidefinite programming, is presented. 
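The worst-case reasoning over quantized records described above can be illustrated with plain interval arithmetic; the FIR coefficients, the recorded inputs, and the quantization step below are made-up values, and this is interval propagation, not the linear-programming formulation of the paper.

```python
# A sketch of worst-case reasoning under quantization (illustrative
# interval arithmetic, not the paper's robust-LP algorithms): each
# quantized input is only known to lie within half a quantization step q
# of its recorded value, so an FIR output y = sum(h[i] * u[k-i]) is
# bounded by propagating those intervals through the filter.
h = [0.5, 0.3, 0.2]          # assumed FIR coefficients
u_q = [1.0, 2.0, -1.0]       # quantized input records (most recent first)
q = 0.1                      # assumed quantization step

nominal = sum(hi * ui for hi, ui in zip(h, u_q))
radius = sum(abs(hi) for hi in h) * (q / 2)
print(nominal - radius, nominal + radius)   # guaranteed output interval
```

The paper's algorithms optimise over all data consistent with the quantized records rather than just bounding one output, but the interval picture conveys what "consistent with their quantized measurements" means.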
The effectiveness of the proposed approach is tested in simulation by means of the widely adopted nonlinear Sorensen patient model.Vito CeroneDario Pigadario.piga@imtlucca.itDiego RegrutoSintayehu Berehanu2015-01-09T11:59:20Z2015-01-09T11:59:20Zhttp://eprints.imtlucca.it/id/eprint/2448This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/24482015-01-09T11:59:20ZInput-Output LPV Model identification with guaranteed quadratic stabilityThe problem of identifying linear parameter-varying (LPV) systems, a-priori known to be quadratically stable, is considered in the paper using an input-output model structure. To solve this problem, a novel constrained optimization-based algorithm is proposed which guarantees quadratic stability of the identified model. It is shown that this estimation objective corresponds to a nonconvex optimization problem, defined by a set of polynomial matrix inequalities (PMI), whose optimal solution can be approximated by means of suitable convex semidefinite relaxations. Applicability of such relaxation-based estimation approach in the presence of either stochastic or deterministic bounded noise is discussed. A simulation example is also given to demonstrate the effectiveness of the resulting identification method.Vito CeroneDario Pigadario.piga@imtlucca.itDiego RegrutoRoland Tóth2015-01-09T11:36:20Z2015-01-09T11:52:42Zhttp://eprints.imtlucca.it/id/eprint/2446This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/24462015-01-09T11:36:20ZMinimal LPV state-space realization driven set-membership identificationSet-membership identification algorithms have been recently proposed to derive linear parameter-varying (LPV) models in input-output form, under the assumption that both measurements of the output and the scheduling signals are affected by bounded noise. 
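The feasible-parameter-set idea underlying these bounded-error formulations can be sketched for a scalar gain; the noise bound and data pairs are made-up toy values, and the scalar case admits an exact intersection of intervals rather than the convex relaxations needed in the papers above.

```python
# A minimal sketch of the set-membership principle for a scalar model
# y = theta * u + e with |e| <= eps (toy data, not the LPV setting):
# each sample constrains theta to an interval, and the feasible
# parameter set is the intersection of all such intervals.
eps = 0.2
data = [(1.0, 1.1), (2.0, 1.9), (-1.0, -0.9)]   # assumed (u, y) pairs

lo, hi = float("-inf"), float("inf")
for u, y in data:
    bounds = sorted(((y - eps) / u, (y + eps) / u))  # handles u < 0
    lo, hi = max(lo, bounds[0]), min(hi, bounds[1])
print(lo, hi)   # every theta in [lo, hi] is consistent with all the data
```

With more parameters the feasible set becomes a semialgebraic region, which is where the polynomial-optimization and relaxation machinery of these papers comes in.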
In order to use the identified models for controller synthesis, linear time-invariant (LTI) realization theory is usually applied to derive a state-space model whose matrices depend statically on the scheduling signals, as required by most of the LPV control synthesis techniques. Unfortunately, application of the LTI realization theory leads to an approximate state-space description of the original LPV input-output model. In order to limit the effect of the realization error, a new set-membership algorithm for identification of input/output LPV models is proposed in the paper. A suitable nonconvex optimization problem is formulated to select the model in the feasible set which minimizes a suitable measure of the state-space realization error. The solution of the identification problem is then derived by means of convex relaxation techniques.Vito CeroneDario Pigadario.piga@imtlucca.itDiego RegrutoRoland Tóth2015-01-08T11:00:48Z2015-01-08T11:00:48Zhttp://eprints.imtlucca.it/id/eprint/2432This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/24322015-01-08T11:00:48ZSegmentation of ARX systems through SDP-relaxation techniquesSegmentation of ARX models can be formulated as a combinatorial minimization problem in terms of the ℓ0-norm of the parameter variations and the ℓ2-loss of the prediction error. A typical approach to compute an approximate solution to such a problem is based on ℓ1-relaxation. Unfortunately, evaluating the level of accuracy of the ℓ1-relaxation in approximating the optimal solution of the original combinatorial problem is not easy to accomplish. In this poster, an alternative approach is proposed which provides an attractive solution for the ℓ0-norm minimization problem associated with segmentation of ARX models.Dario Pigadario.piga@imtlucca.itRoland Tóth2015-01-08T10:57:52Z2015-01-08T11:01:10Zhttp://eprints.imtlucca.it/id/eprint/2431This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/24312015-01-08T10:57:52ZDealing with correlated errors in Least-Squares Support Vector Machine EstimatorsJohn LataireDario Pigadario.piga@imtlucca.itRoland Tóth2015-01-08T10:09:00Z2015-01-08T13:05:30Zhttp://eprints.imtlucca.it/id/eprint/2428This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/24282015-01-08T10:09:00ZA convex relaxation approach to set-membership identificationSet-membership identification of dynamical systems is dealt with in this thesis. Differently from the stochastic framework, in the set-membership context the statistical description of the measurement noise is not available and the only information on such an error is that its amplitude or energy is bounded. In the framework of Set-membership identification, the result of the estimation process is the set of all system parameter values consistent with measured data, assumed model structure and a-priori assumptions on the measurement error. The problem of evaluating bounds on system parameters belonging to the feasible parameter set can be formulated in terms of polynomial optimization problems, where the number of decision variables increases with the length of the experimental data sequence. Such problems are generally nonconvex and NP-hard. Therefore, standard nonlinear optimization tools cannot be used to compute parameter bounds, since they can get trapped in local minima and, as a consequence, the computed bounds are not guaranteed to contain the true values of parameters, which is a key requirement in set-membership identification. 
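The ℓ0-penalized segmentation objective described in the ARX-segmentation poster above can be solved exactly on small piecewise-constant toy data by dynamic programming over changepoints; the signal, penalty value, and DP formulation are illustrative assumptions, distinct from the poster's SDP-relaxation approach.

```python
# A sketch of the l0-penalized segmentation objective: fit a
# piecewise-constant signal by minimizing the l2 loss plus a penalty
# lam for every parameter jump, solved exactly here by an O(n^2)
# dynamic program over changepoints (illustrative toy, not the
# poster's SDP relaxation).
def sse(seg):
    m = sum(seg) / len(seg)
    return sum((v - m) ** 2 for v in seg)

def segment(y, lam):
    n = len(y)
    best = [0.0] * (n + 1)          # best[i]: optimal cost of y[:i]
    cut = [0] * (n + 1)             # cut[i]: start of the last segment
    for i in range(1, n + 1):
        best[i], cut[i] = min(
            (best[j] + sse(y[j:i]) + (lam if j > 0 else 0.0), j)
            for j in range(i)
        )
    cuts, i = [], n                 # backtrack the chosen changepoints
    while i > 0:
        if cut[i] > 0:
            cuts.append(cut[i])
        i = cut[i]
    return sorted(cuts)

y = [0.0, 0.1, -0.1, 5.0, 5.1, 4.9]   # toy data with one jump at index 3
print(segment(y, lam=1.0))             # detected changepoint: [3]
```

The DP scales quadratically in the data length, which motivates the relaxation route for realistic ARX records.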
In order to overcome such a problem, convex relaxation procedures based on the theory of moments are proposed to efficiently compute relaxed bounds which are guaranteed to contain the true values of the system parameters. Unfortunately, a direct application of the theory of moments to relaxing set-membership identification problems leads to semidefinite programming problems with a high computational burden, thus limiting, in practice, the use of such relaxation procedures to identification problems with a small number of measurements. The aim of the thesis is to derive a number of convex-relaxation-based algorithms that, by exploiting the peculiar properties of the considered identification problems, make it possible to perform bound computation also when the number of measurements is large. In particular, errors-in-variables (EIV) identification of linear models, concerning identification of linear time-invariant (LTI) systems based on noise-corrupted measurements of both input and output signals, is tackled through two different relaxation approaches. The first method, referred to as the dynamic-EIV approach, exploits the sparse structure of EIV problems in order to reduce the computational complexity of the semidefinite programming problems arising from theory-of-moments relaxations. The second technique, referred to as the semi-static-EIV approach, is based on a suitable handling of the constraints defining the feasible parameter set, and leads to polynomial optimization problems where the number of decision variables does not depend on the size of the measurement sequence. Thanks to this problem reformulation, theory-of-moments relaxations can be efficiently applied to compute bounds on the system parameters also from large data sets. Identification of block-oriented nonlinear systems is also addressed.
The considered model structures are: Hammerstein-Wiener systems; Hammerstein-like and Wiener-like structures with backlash nonlinearity; and block-structured nonlinear feedback systems. The semi-static-EIV approach is extended, with suitable modifications, to estimate the parameters of Hammerstein-Wiener models with static blocks described by polynomial functions. Then, a unified approach for set-membership identification of Hammerstein and Wiener models with backlash is discussed. By properly selecting a sequence of input/output measurements, the evaluation of parameter bounds is formulated in terms of polynomial optimization problems, and the structured sparsity of the formulated problems is exploited to reduce the computational complexity of theory-of-moments-based relaxations. Furthermore, a two-stage method for identification of block-structured nonlinear feedback systems is presented. Nonlinear block parameter bounds are first computed by using input/output data collected from the response of the system to square-wave inputs. Then, by stimulating the system with a persistently exciting input signal, bounds on the unmeasurable inner signal are evaluated; these are used, together with noise-corrupted measurements of the output signal, to formulate the identification of the linear block parameters in terms of EIV problems that can be solved through either the dynamic- or the semi-static-EIV approach. In addition, an "ad hoc" convex relaxation scheme is presented to compute guaranteed bounds on the parameters of linear parameter-varying (LPV) models in input/output (I/O) form, under the assumption that both the output and the scheduling parameter measurements are affected by bounded noise. The developed set-membership identification algorithms are used to derive an LPV model describing vehicle lateral dynamics from a set of experimental data, and an LPV model describing glucose-insulin dynamics for patients affected by Type I diabetes.
Finally, the problem of identifying systems a-priori known to be stable is discussed. In particular, suitable relaxation-based algorithms are proposed to enforce BIBO stability and quadratic stability constraints for the cases of LTI and LPV systems, respectively. Applicability of the proposed techniques both in the stochastic and in the set-membership framework is discussed.
Dario Piga (dario.piga@imtlucca.it)

Cicognara e Sismondi (http://eprints.imtlucca.it/id/eprint/2427, deposited 2014-12-19)
Emanuele Pellegrini (emanuele.pellegrini@imtlucca.it)

Between history and art history: Roscoe's Medici lives (http://eprints.imtlucca.it/id/eprint/2426, deposited 2014-12-19)
Emanuele Pellegrini (emanuele.pellegrini@imtlucca.it)

Low-complexity single-image super-resolution based on nonnegative neighbor embedding (http://eprints.imtlucca.it/id/eprint/2412, deposited 2014-12-11)
This paper describes a single-image super-resolution (SR) algorithm based on nonnegative neighbor embedding. It belongs to the family of single-image example-based SR algorithms, since it uses a dictionary of low-resolution (LR) and high-resolution (HR) trained patch pairs to infer the unknown HR details. Each LR feature vector in the input image is expressed as the weighted combination of its K nearest neighbors in the dictionary; the corresponding HR feature vector is reconstructed under the assumption that the local LR embedding is preserved. Three key aspects are introduced in order to build a low-complexity competitive algorithm: (i) a compact but efficient representation of the patches (feature representation); (ii) an accurate estimation of the patches by their nearest neighbors (weight computation); (iii) a compact and already built (therefore external) dictionary, which allows a one-step upscaling. The neighbor embedding SR algorithm so designed is shown to give good visual results, comparable to other state-of-the-art methods, while presenting an appreciable reduction of the computational time.
Marco Bevilacqua (marco.bevilacqua@imtlucca.it), Aline Roumy, Christine Guillemot, Marie Line Alberi-Morel

Neighbor embedding based single-image super-resolution using Semi-Nonnegative Matrix Factorization (http://eprints.imtlucca.it/id/eprint/2411, deposited 2014-12-11)
This paper describes a novel method for single-image super-resolution (SR) based on a neighbor embedding technique which uses Semi-Nonnegative Matrix Factorization (SNMF). Each low-resolution (LR) input patch is approximated by a linear combination of nearest neighbors taken from a dictionary. This dictionary stores low-resolution and corresponding high-resolution (HR) patches taken from natural images and is thus used to infer the HR details of the super-resolved image. The entire neighbor embedding procedure is carried out in a feature space. Features which are either the gradient values of the pixels or the mean-subtracted luminance values are extracted from the LR input patches, and from the LR and HR patches stored in the dictionary. The algorithm thus searches for the K nearest neighbors of the feature vector of the LR input patch and then computes the weights for approximating the input feature vector.
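The core neighbor-embedding step shared by these two papers can be sketched in a few lines. This is illustrative only: the dictionary here is random, and plain least squares stands in for the nonnegative/SNMF weight solvers the papers actually use.

```python
import numpy as np

# Illustrative neighbor-embedding step (random stand-in dictionary): express an
# LR input patch as a weighted combination of its K nearest dictionary patches,
# then transfer the same weights to the paired HR patches. The papers enforce
# nonnegativity on the weights (via NNLS / SNMF); plain least squares is used
# here for brevity.

rng = np.random.default_rng(0)
D_lr = rng.standard_normal((50, 9))    # 50 LR dictionary patches (3x3, flattened)
D_hr = rng.standard_normal((50, 36))   # paired HR patches (6x6, flattened)
x = rng.standard_normal(9)             # LR input patch (feature vector)

K = 5
idx = np.argsort(np.linalg.norm(D_lr - x, axis=1))[:K]   # K nearest neighbors
w, *_ = np.linalg.lstsq(D_lr[idx].T, x, rcond=None)      # embedding weights
y_hr = w @ D_hr[idx]                                     # reconstructed HR patch

print(y_hr.shape)   # (36,)
```

The "local LR embedding is preserved" assumption is exactly the reuse of `w`, computed in LR space, to combine the HR patches.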
The use of SNMF for computing the weights of the linear approximation is shown to have a more stable behavior than the use of LLE and to lead to significantly higher PSNR values for the super-resolved images.
Marco Bevilacqua (marco.bevilacqua@imtlucca.it), Aline Roumy, Christine Guillemot, Marie Line Alberi-Morel

Smart random walkers: the cost of knowing the path (http://eprints.imtlucca.it/id/eprint/2399, deposited 2014-12-04)
In this work we study the problem of targeting signals in networks, using entropy information measurements to quantify the cost of targeting. We introduce a penalization rule that imposes a restriction on long paths and therefore focuses the signal on the target. By this scheme we move continuously from fully random walkers to walkers biased towards the target. We find that the optimal degree of penalization is mainly determined by the topology of the network. By analyzing several examples, we find that a small amount of penalization considerably reduces the typical walk length, and from this we conclude that a network can be efficiently navigated with restricted information.
Juan I. Perotti (juanignacio.perotti@imtlucca.it), Orlando V. Billoni

Climate variability and amplification revealed from indicators in the Gulf of Taranto (http://eprints.imtlucca.it/id/eprint/2363, deposited 2014-11-10)
A well-dated, high-resolution core (GT90-3), extracted from the Gallipoli Terrace in the Ionian Sea, is used to deduce information about climate variability during the last millennia, and in particular before 1000 AD, where few
proxy records are available. We present the foraminiferal δ18O record measured in this core, covering the last 2200 years. Its spectral analysis, performed by several advanced methods, reveals highly significant oscillatory components with periods of about 600, 350, 200, 125 and 11 years. These components are also discussed in comparison with those deduced from other archives, concluding that the overall trend and the 200-yr component together are very likely temperature-driven. Concerning the decadal range, on the contrary, the situation is not so clear, and salinity and circulation effects probably cannot be completely neglected.
Gianna Vivaldo (gianna.vivaldo@imtlucca.it)

Natural variability and anthropogenic effects in a Central Mediterranean core (http://eprints.imtlucca.it/id/eprint/2362, deposited 2014-11-10)
We evaluate the contribution of natural variability to the modern decrease in foraminiferal δ18O by relying on a 2200-yr-long, high-resolution record of oxygen isotopic ratio from a Central Mediterranean sediment core. Pre-industrial values are used to train and test two sets of algorithms that are able to forecast the natural variability in δ18O over the last 150 yr. These algorithms are based on autoregressive models and neural networks, respectively; they are applied separately to each of the δ18O series' significant variability components, rather than to the complete series. The separate components are extracted by singular-spectrum analysis and have narrow-band spectral content, which reduces the forecast error.
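The component-wise autoregressive forecasting idea can be sketched on synthetic data: fit an AR model to a narrow-band oscillatory component on a "pre-industrial" training segment, then extrapolate it over a held-out window. Everything below (signal, order, window lengths) is illustrative, not the paper's setup.

```python
import numpy as np

# Sketch of the forecasting idea (synthetic data, not the core record): fit an
# AR(p) model by least squares to a narrow-band component, then run it forward
# recursively, as done for each SSA-extracted component.

rng = np.random.default_rng(1)
t = np.arange(400)
component = np.sin(2 * np.pi * t / 40) + 0.02 * rng.standard_normal(400)

p, n_train = 10, 340
train = component[:n_train]

# Least-squares AR(p) fit: x[k] ~ sum_j a[j] * x[k-1-j]
X = np.column_stack([train[p - 1 - j:n_train - 1 - j] for j in range(p)])
a, *_ = np.linalg.lstsq(X, train[p:], rcond=None)

# Recursive (free-run) forecast over the held-out window
hist = list(train[-p:])
forecast = []
for _ in range(len(component) - n_train):
    nxt = float(np.dot(a, hist[::-1][:p]))   # most recent sample first
    forecast.append(nxt)
    hist.append(nxt)

err = np.sqrt(np.mean((np.array(forecast) - component[n_train:]) ** 2))
print(round(err, 3))   # forecast RMSE over the held-out window
```

Because the component is narrow-band, the free-run forecast remains accurate over many steps; on a broadband series the same AR fit would lose track quickly, which is the motivation for forecasting components separately.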
By comparing the sum of the predicted low-frequency components with its actual values during the Industrial Era, we deduce that the natural contribution to these components of the modern δ18O variation decreased gradually, until it reached roughly 40% as early as the end of the 1970s.
Silvia Alessio, Gianna Vivaldo (gianna.vivaldo@imtlucca.it), Carla Taricco, Michael Ghil

Almahata Sitta meteorite: gamma-activity measurements at Monte dei Cappuccini Laboratory in Torino (http://eprints.imtlucca.it/id/eprint/2361, deposited 2014-11-10)
The asteroid 2008 TC3 was telescopically seen prior to entering Earth's atmosphere
and was predicted to fall in Sudan on October 7, 2008, as it actually happened. Subsequently, many fragments were collected from the Nubian desert. At the Monte dei Cappuccini Laboratory (IFSI, INAF) in Torino, using a selective gamma spectrometer, we measured gamma rays from fragment #15, one of the largest retrieved, a ureilite of mass 75 g. Six cosmogenic radionuclides have been measured (46Sc, 57Co, 54Mn, 22Na, 60Co and 26Al). The 60Co and 26Al activities allowed us to deduce that the fragment was located at a depth of 41 ± 14 cm inside the 1.5–2 m radius asteroid. Moreover, the 22Na activity is slightly greater than expected on the basis of the average cosmic ray flux; this could be ascribed to the prolonged solar minimum preceding the meteorite fall.
Carla Taricco, Narendra Bhandari, Paolo Colombetti, Alberto Romero, Gianna Vivaldo (gianna.vivaldo@imtlucca.it), Neeharika Sinha, Peter Jenniskens, Muawia H. Shaddad

Analysis of service oriented software systems with the conversation calculus (http://eprints.imtlucca.it/id/eprint/2323, deposited 2014-10-10)
We overview some perspectives on the concept of service-based computing, and discuss the motivation of a small set of modeling abstractions for expressing and analyzing service-based systems, which have led to the design of the Conversation Calculus. Distinguishing aspects of the Conversation Calculus are the adoption of a very simple, context-sensitive, local message-passing communication mechanism, natural support for modeling multi-party conversations, and a novel mechanism for handling exceptional behavior. In this paper, written in a tutorial style, we review some Conversation Calculus based analysis techniques for reasoning about properties of service-based systems, mainly by going through a sequence of illustrating examples.
Luis Caires, Hugo Torres Vieira (hugo.torresvieira@imtlucca.it)

SLMC: a tool for model checking concurrent systems against dynamical spatial logic specifications (http://eprints.imtlucca.it/id/eprint/2318, deposited 2014-10-09)
The Spatial Logic Model Checker is a tool for verifying π-calculus systems against safety, liveness, and structural properties expressed in the spatial logic for concurrency of Caires and Cardelli. Model checking is one of the most widely used techniques to check temporal properties of software systems.
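As a toy illustration of what model checking does at its core (SLMC itself operates on π-calculus processes and spatial logic, which is far richer), here is an explicit-state safety check: exhaustively explore a transition system and verify that a property holds in every reachable state. The lock example and all names are made up.

```python
from collections import deque

# Toy explicit-state model checking (not SLMC): breadth-first exploration of a
# transition system, verifying a safety property in every reachable state.

def check_safety(init, transitions, safe):
    seen, frontier = {init}, deque([init])
    while frontier:
        s = frontier.popleft()
        if not safe(s):
            return False, s          # counterexample state
        for t in transitions.get(s, []):
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return True, None

# Two processes competing for a lock; states are (p1, p2) over
# {"idle", "wait", "crit"}. Safety property: mutual exclusion.
transitions = {
    ("idle", "idle"): [("wait", "idle"), ("idle", "wait")],
    ("wait", "idle"): [("crit", "idle"), ("wait", "wait")],
    ("idle", "wait"): [("idle", "crit"), ("wait", "wait")],
    ("wait", "wait"): [("crit", "wait"), ("wait", "crit")],
    ("crit", "idle"): [("idle", "idle")],
    ("idle", "crit"): [("idle", "idle")],
    ("crit", "wait"): [("idle", "wait")],
    ("wait", "crit"): [("wait", "idle")],
}
ok, bad = check_safety(("idle", "idle"), transitions,
                       safe=lambda s: s != ("crit", "crit"))
print(ok)   # True: ("crit", "crit") is unreachable in this model
```

Spatial logics add to this picture the ability to constrain the *structure* of states (parallel composition, name restriction), not only their reachability.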
However, when the analysis focuses on properties related to resource usage, localities, interference, mobility, or topology, it is crucial to reason about spatial properties and structural dynamics. The SLMC is the only currently available tool that supports the combined analysis of behavioral and spatial properties of systems. The implementation, written in OCaml, is mature and robust, available in open source, and outperforms other tools for verifying systems modeled in the π-calculus.
Luis Caires, Hugo Torres Vieira (hugo.torresvieira@imtlucca.it)

I Farmaci Oncologici in Italia: innovazione e sostenibilità economica [Oncology drugs in Italy: innovation and economic sustainability] (http://eprints.imtlucca.it/id/eprint/2243, deposited 2014-07-03)
High-cost innovative drugs give us a concrete taste of the dilemmas future healthcare will face if the governance reforms of this complex chapter of public spending are not completed (not only federalism, but also accounting and reporting systems, cost-sharing schemes, and the screening of therapeutic practices to encourage best practice, etc.). The stark trade-off between the sustainability of spending and citizens' demand for care is, for high-cost drugs, already a lived reality in every hospital. In the first part of the report, Pammolli, Riccaboni and Salerno describe the characteristics of the sector, providing its main current and prospective figures, also in international comparison. In the second part, the authors examine the regulatory framework that currently governs oncology drugs in Italy. What emerges is a system with shadows and approximations, with different and often incompatible approaches between Aifa and the Regions, between one Region and another, and even between local health authorities and hospitals within the same Region. It is a poorly transparent arrangement, certainly not ready to manage in a positive and programmatic way the gap between available resources and the necessity and urgency of therapies. The concluding part of the report puts forward some policy proposals, distinguishing between those that can be implemented in the short term and those requiring longer implementation times. Both types of intervention should be launched as soon as possible and, ideally, pursued in parallel.
Fabio Pammolli (f.pammolli@imtlucca.it), Massimo Riccaboni (massimo.riccaboni@imtlucca.it), Nicola C. Salerno

Morphological analysis of the left ventricular endocardial surface and its clinical implications (http://eprints.imtlucca.it/id/eprint/2235, deposited 2014-07-03)
The complex morphological structure of the left ventricular endocardial surface and its relation to the severity of arterial stenosis has not yet been thoroughly investigated, due to the limitations of conventional imaging techniques. By exploiting recent developments in Multirow-Detector Computed Tomography (MDCT) scanner technology, the complex endocardial surface morphology of the left ventricle is studied, and the cardiac segments affected by coronary arterial stenosis are localized via analysis of Computed Tomography (CT) image data obtained from a 320-MDCT scanner. The non-rigid endocardial surface data is analyzed using an isometry-invariant Bag-of-Words (BOW) feature-based approach. The clinical significance of the analysis in identifying, localizing and quantifying the incidence and extent of coronary artery disease is investigated. Specifically, the association between the incidence and extent of coronary artery disease and alterations in the endocardial surface morphology is studied.
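The bag-of-words step underlying this analysis can be sketched generically: quantize local descriptors against a codebook and summarize the surface as a normalized histogram of codeword counts. The descriptors below are random stand-ins; in the paper they are isometry-invariant surface features, which is where the invariance of the representation comes from.

```python
import numpy as np

# Sketch of the bag-of-words step (synthetic descriptors, illustrative sizes):
# assign each local descriptor to its nearest codeword, then represent the
# whole surface as a normalized histogram over the codebook.

rng = np.random.default_rng(2)
codebook = rng.standard_normal((16, 8))       # 16 codewords, 8-dim descriptors
descriptors = rng.standard_normal((500, 8))   # descriptors sampled on a surface

# squared distance of every descriptor to every codeword, then nearest codeword
d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
words = d2.argmin(axis=1)

bow = np.bincount(words, minlength=len(codebook)).astype(float)
bow /= bow.sum()                              # normalized BoW feature vector
print(bow.shape)   # (16,)
```

Such fixed-length histograms are what a downstream classifier (a neural network, in the paper) consumes, regardless of how many points each surface has.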
The results of the proposed approach on 15 normal data sets and 12 abnormal data sets, exhibiting coronary artery disease with varying levels of severity, are presented. Based on the characterization of the endocardial surface morphology using the Bag-of-Words features, a neural network-based classifier is implemented to test the effectiveness of the proposed morphological analysis approach. Experiments performed on a strict leave-one-out basis are shown to exhibit a distinct pattern in terms of classification accuracy within the cardiac segments where the incidence of coronary arterial stenosis is localized.
Anirban Mukhopadhyay (anirban.mukhopadhyay@imtlucca.it), Zhen Qian, Suchendra M. Bhandarkar, Tianming Liu, Sarah Rinehart, Szilard Voros

Non-rigid shape correspondence and description using geodesic field estimate distribution (http://eprints.imtlucca.it/id/eprint/2234, deposited 2014-07-03)
Non-rigid shape description and analysis is an unsolved problem in computer graphics. Shape analysis is a fast-evolving research field due to the wide availability of 3D shape databases. Widely studied methods for this family of problems include the Gromov-Hausdorff distance [1], Bag-of-Features [2] and diffusion geometry [3]. The limitations of the Euclidean distance measure in the context of isometric deformation have made geodesic distance a de-facto standard for describing a metric space for non-rigid shape analysis. In this work, we propose a novel geodesic field space-based approach to describe and analyze non-rigid shapes from a point correspondence perspective.
Austin T. New, Anirban Mukhopadhyay (anirban.mukhopadhyay@imtlucca.it), Hamid R. Arabnia, Suchendra M. Bhandarkar

An economic and financial exploratory (http://eprints.imtlucca.it/id/eprint/2211, deposited 2014-06-26)
This paper describes the vision of a European Exploratory for economics and finance using an interdisciplinary consortium of economists, natural scientists, computer scientists and engineers, who will combine their expertise to address the enormous challenges of the 21st century. This academic public facility is intended for economic modelling, investigating all aspects of risk and stability, improving financial technology, and evaluating proposed regulatory and taxation changes. The European Exploratory for economics and finance will be constituted as a network of infrastructure, observatories, data repositories, services and facilities, and will foster the creation of a new cross-disciplinary research community of social scientists, complexity scientists and computing (ICT) scientists collaborating to investigate major issues in economics and finance. It is also conceived as a cradle for training and for collaboration with the private sector, to spur spin-offs and job creation in Europe in the finance and economic sectors. The Exploratory will allow social scientists and regulators, as well as policy makers and the private sector, to conduct realistic investigations with real economic, financial and social data.
The Exploratory will (i) continuously monitor and evaluate the status of the economies of countries in their various components, (ii) use, extend and develop a large variety of methods, including data mining, process mining, computational and artificial intelligence and other computer science and complexity science techniques, coupled with economic theory and econometrics, and (iii) provide the framework and infrastructure to perform what-if analysis, scenario evaluations and computational, laboratory, field and web experiments to inform decision makers and help develop innovative policy, market and regulation designs.
Silvano Cincotti, Didier Sornette, Philip Treleaven, Stefano Battiston, Guido Caldarelli (guido.caldarelli@imtlucca.it), Cars H. Hommes, Alan Kirman

Minimum weight dynamo and fast opinion spreading (http://eprints.imtlucca.it/id/eprint/2113, deposited 2014-01-23)
We consider the following multi-level opinion spreading model on networks. Initially, each node gets a weight from the set {0,…,k − 1}, where such a weight stands for the individual's conviction of a new idea or product. Then, proceeding in rounds, each node updates its weight according to the weights of its neighbors. We are interested in the initial assignments of weights leading each node to get the value k − 1 (e.g., unanimous maximum-level acceptance) within a given number of rounds. We determine lower bounds on the sum of the initial weights of the nodes under the irreversible simple majority rule, where a node increases its weight if and only if the majority of its neighbors have a weight higher than its own.
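The irreversible simple majority rule just described is easy to simulate. The sketch below runs synchronous rounds on a ring (every node has two neighbors, so a strict majority means both); the initial assignments are illustrative, chosen to show one seeding that drives every node to level k − 1 and one that does not.

```python
# Sketch of the irreversible simple-majority update on a ring (illustrative
# parameters): in each synchronous round, a node raises its weight by one when
# a strict majority of its neighbors currently has a higher weight.

def evolve(weights, k, rounds):
    n = len(weights)
    w = list(weights)
    for _ in range(rounds):
        nxt = list(w)
        for i in range(n):
            nbrs = [w[(i - 1) % n], w[(i + 1) % n]]
            if sum(1 for v in nbrs if v > w[i]) > len(nbrs) // 2:
                nxt[i] = min(w[i] + 1, k - 1)   # irreversible: never decreases
        w = nxt
    return w

k = 3
# Alternating seeds at level k-1 = 2: every 0-node has two higher neighbors,
# so the whole ring reaches unanimous maximum level in k-1 rounds.
final = evolve([2, 0, 2, 0, 2, 0, 2, 0], k, rounds=2)
print(final)   # → [2, 2, 2, 2, 2, 2, 2, 2]
```

A single seed, by contrast, spreads nowhere on a ring (no 0-node ever sees two higher neighbors), which is the flavor of the lower bounds on initial weight the paper establishes.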
Moreover, we provide constructive tight upper bounds for some classes of regular topologies: rings, tori, and cliques.
Sara Brunetti, Gennaro Cordasco, Luisa Gargano, Elena Lodi, Walter Quattrociocchi (walter.quattrociocchi@imtlucca.it)

Selection in scientific networks (http://eprints.imtlucca.it/id/eprint/2112, deposited 2014-01-23)
One of the most pressing and interesting current scientific challenges concerns the analysis and understanding of complex network dynamics. In particular, a major trend is the definition of new frameworks for the analysis, exploration and detection of the dynamics at play in real dynamic networks. In this paper we focus on scientific communities, targeting the social part of science through a descriptive approach that aims at identifying the social determinants behind the emergence and resilience of scientific communities. We consider that scientific communities are, through co-authorship, communities of practice, and that they also exist as representations in scientists' minds, since references to other scientists' works are not merely objective links to relevant work but also reveal social objects that one manipulates and refers to. In fact, our analysis focuses on the coexistence of co-authorship and citation dynamics and on how their interplay affects the shape, strength and stability of scientific systems. Such an analysis, performed through the time-varying graphs (TVG) formalism and derived metrics, concerns the evolution of a scientific network extracted from a portion of the arXiv repository covering a period of 10 years of publications in physics.
We detect an example of how the selection process of citations may affect the shape of the co-authorship network, from a sparser and disconnected structure to a dense and homogeneous one.
Walter Quattrociocchi (walter.quattrociocchi@imtlucca.it), Frederic Amblard, Eugenia Galeota

Time-varying graphs and dynamic networks (http://eprints.imtlucca.it/id/eprint/2111, deposited 2014-01-23)
The past few years have seen intensive research efforts carried out in some apparently unrelated areas of dynamic systems – delay-tolerant networks, opportunistic-mobility networks and social networks – obtaining closely related insights. Indeed, the concepts discovered in these investigations can be viewed as parts of the same conceptual universe, and the formal models proposed so far to express some specific concepts are the components of a larger formal description of this universe. The main contribution of this paper is to integrate the vast collection of concepts, formalisms and results found in the literature into a unified framework, which we call time-varying graphs (TVGs). Using this framework, it is possible to express directly in the same formalism not only the concepts common to all those different areas, but also those specific to each. Based on this definitional work, employing both existing results and original observations, we present a hierarchical classification of TVGs; each class corresponds to a significant property examined in the distributed computing literature. We then examine how TVGs can be used to study the evolution of network properties, and propose different techniques, depending on whether the indicators for these properties are atemporal (as in the majority of existing studies) or temporal.
Finally, we briefly discuss the introduction of randomness in TVGs.
Arnaud Casteigts, Paola Flocchini, Walter Quattrociocchi (walter.quattrociocchi@imtlucca.it), Nicola Santoro

Benjamin's optical unconscious: the motion in photography as the interstice of cinematic time (http://eprints.imtlucca.it/id/eprint/2102, deposited 2014-01-20)
This paper focuses on the concept of the optical unconscious as it emerges in Walter Benjamin's "A Small History of Photography" and "The Work of Art in the Age of Mechanical Reproduction". In particular, I intend to analyze the origins of Benjamin's concept in László Moholy-Nagy's work "Painting Photography Film". According to Benjamin, just like cinematographic stills, photography makes explicit the part of movement that is not present in movement and renders it visible; it adds the ruffling, the tiny details, the half-hidden movement to the moment, and thus makes visible the space-time fragment – as Benjamin writes – «when a person steps out» ("A Small History of Photography"). Following the premise represented by the notion of the optical unconscious, the first argument I seek to make is that a new organization of the perceptible world appeared not in the 1920s and 30s, but rather can be found thoroughly intertwined with the very historical origins of the photo-cinematographic tools, which originate from and are functional to a new conception of objectivity (and, hence, of naturalism and realism) that emerged from the birth of biology as an experimental science, and therefore from the birth of the concept of life and a new conception of the body. As a matter of fact, I argue that biology's ascendancy over natural history, through a process whose documentable origins trace to the end of the eighteenth century, constituted the context that enabled and fostered the invention of photo-cinematographic techniques. Through some examples from the works of Étienne-Jules Marey and Thomas Alva Edison, I therefore propose a second hypothesis, closely linked to the first: cinema and, even earlier, photography were created precisely in an interstice produced by the short-circuit between an invisible referent and forms of representation. Moreover, this short-circuit is nowhere as apparent as in the debate surrounding the depiction of movement that began in the second half of the nineteenth century, a debate that developed specifically in the field of physiology but went on to involve the fields of art as well.
Linda Bertelli (linda.bertelli@imtlucca.it)

Special issue on fracture and contact mechanics for interface problems (http://eprints.imtlucca.it/id/eprint/2076, deposited 2013-12-16)
Marco Paggi (marco.paggi@imtlucca.it), Peter Wriggers, Alberto Carpinteri

Special issue on computational methods for interface mechanical problems (http://eprints.imtlucca.it/id/eprint/2074, deposited 2013-12-16)
Marco Paggi (marco.paggi@imtlucca.it), Alberto Carpinteri, Peter Wriggers

Structural integrity of hierarchical composites (http://eprints.imtlucca.it/id/eprint/2046, deposited 2013-12-04)
Interface mechanical problems are of paramount importance in engineering and materials science.
Traditionally, due to the complexity of modelling their mechanical behaviour, interfaces are often treated as defects and their features are not explored. In this study, a different approach is illustrated, where the interfaces play an active role in the design of innovative hierarchical composites and are fundamental for their structural integrity. Numerical examples regarding cutting tools made of hierarchical cellular polycrystalline materials are proposed, showing that tailoring the interface properties at the different scales is the way to achieve superior mechanical responses that cannot be obtained using standard materials.
Marco Paggi (marco.paggi@imtlucca.it)

Alain-Philippe Segonds: Bibliographie (http://eprints.imtlucca.it/id/eprint/2038, deposited 2013-12-04)
Stefano Gattei (stefano.gattei@imtlucca.it)

Ricordo di Alain Segonds [In memory of Alain Segonds] (http://eprints.imtlucca.it/id/eprint/2027, deposited 2013-12-03)
Stefano Gattei (stefano.gattei@imtlucca.it)

Modelling strain localization by cohesive/overlapping zones in tension/compression: Brittleness size effects and scaling in material properties (http://eprints.imtlucca.it/id/eprint/2015, deposited 2013-12-03)
The present paper is a state-of-the-art review of the research carried out at the Politecnico di Torino during the last two decades on the modelling of strain localization. Introducing the elementary cohesive/overlapping models in tension/compression, it will be shown that it is possible to gain deep insight into the ductile-to-brittle transition and into the scaling of the material properties usually detected when testing quasi-brittle material specimens or structures at different size-scales.
Alberto Carpinteri, Marco Paggi (marco.paggi@imtlucca.it)

Crack propagation in honeycomb cellular materials: a computational approach (http://eprints.imtlucca.it/id/eprint/2012, deposited 2013-12-03)
Computational models based on the finite element method and on linear or nonlinear fracture mechanics are herein proposed to study the mechanical response of functionally designed cellular components.
It is demonstrated that, via a suitable tailoring of the properties of interfaces present in the meso- and micro-structures, the tensile strength can be substantially increased as compared to that of a standard polycrystalline material. Moreover, numerical examples regarding the structural response of these components when subjected to loading conditions typical of cutting operations are provided. As a general trend, the occurrence of tortuous crack paths is highly favorable: stable crack propagation can be achieved in case of critical crack growth, whereas an increased fatigue life can be obtained for a sub-critical crack propagation.Marco Paggimarco.paggi@imtlucca.it2013-12-03T14:06:17Z2014-10-09T09:20:24Zhttp://eprints.imtlucca.it/id/eprint/2010This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/20102013-12-03T14:06:17ZStiffness and strength of hierarchical polycrystalline materials with imperfect interfaces In this study we investigate the effect of imperfect (not perfectly bonded) interfaces on the stiffness and strength of hierarchical polycrystalline materials. As a case study we consider a honeycomb cellular polycrystal used for drilling and cutting tools. The conclusions of the analysis are, however, general and applicable to any material with structural hierarchy. Regarding the stiffness, generalized expressions for the Voigt and Reuss estimates of the bounds to the effective elastic modulus of heterogeneous materials are derived. The generalizations regard two aspects that are not included in the standard Reuss and Voigt estimates. The first novelty consists in considering finite thickness interfaces between the constituents undergoing damage up to final debonding. The second generalization consists of interfaces not perpendicular or parallel to the loading direction, i.e., when isostress or isostrain conditions are not satisfied. 
In this case, approximate expressions for the effective elastic modulus are obtained by means of a computational homogenization approach. In the second part of the paper, the homogenized response of a representative volume element (RVE) of the honeycomb cellular polycrystalline material with one or two levels of hierarchy is numerically investigated. This is carried out using the cohesive zone model (CZM) for finite thickness interfaces recently proposed by the authors and implemented in the finite element program FEAP. From tensile tests we find that the interface nonlinearity significantly contributes to the deformability of the material. Increasing the number of hierarchical levels, the deformability increases. The RVE is tested in two different directions and, due to different orientations of the interfaces and Mixed Mode deformation, anisotropy in stiffness and strength is observed. Stiffness anisotropy is amplified by increasing the number of hierarchical levels. Finally, the interaction between interfaces at different hierarchical levels is numerically characterized. A condition for scale separation, which corresponds to the independence of the material tensile strength from the properties of the interfaces in the second level, is established. When this condition is fulfilled, the material microstructure at the second level can be efficiently replaced by an effective homogeneous continuum with a homogenized stress–strain response. From the engineering point of view, the proposed criterion of scale separation suggests how to design the optimal microstructure of a hierarchical level to maximize the material tensile strength. An interpretation of this phenomenon according to the concept of flaw tolerance is finally presented. 
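The classical Voigt and Reuss estimates that the paper generalizes can be sketched as follows (a minimal illustration: the phase fractions and moduli are invented for the example, and the paper's finite-thickness, damageable interfaces are not modeled):

```python
# Classical Voigt (isostrain, upper) and Reuss (isostress, lower) bounds on
# the effective elastic modulus of a multi-phase material. The paper's
# generalizations (finite-thickness damageable interfaces, arbitrary interface
# orientations) are NOT included; this is only the textbook starting point.

def voigt_bound(fractions, moduli):
    """Isostrain estimate: E_V = sum_i f_i * E_i."""
    return sum(f * e for f, e in zip(fractions, moduli))

def reuss_bound(fractions, moduli):
    """Isostress estimate: 1 / E_R = sum_i f_i / E_i."""
    return 1.0 / sum(f / e for f, e in zip(fractions, moduli))

# Hypothetical two-phase aggregate: 90% stiff grains, 10% compliant interface phase.
fractions = [0.9, 0.1]
moduli = [400.0, 40.0]  # GPa
print(voigt_bound(fractions, moduli))  # 364.0 GPa (upper bound)
print(reuss_bound(fractions, moduli))  # ~210.5 GPa (lower bound)
```

Any admissible effective modulus, including those of the generalized estimates derived in the paper, must fall between these two values.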
Marco Paggimarco.paggi@imtlucca.itPeter Wriggers2013-12-03T14:02:14Z2014-10-09T09:20:24Zhttp://eprints.imtlucca.it/id/eprint/2008This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/20082013-12-03T14:02:14ZA cohesive crack model coupled with damage for interface fatigue problemsA semi-analytical formulation based on the cohesive crack model is proposed to describe the phenomenon of fatigue crack growth along an interface. Since the process of material separation under cyclic loading is physically governed by cumulative damage, the material deterioration due to fatigue is taken into account in terms of degradation of the interfacial cohesive properties. More specifically, the damage increment is determined by the current separation and a history variable. The damage variable is introduced into the constitutive cohesive crack law in order to capture the history-dependent property of fatigue. Parametric studies are presented to understand the influences of the two parameters entering the damage evolution law. An application to a pre-cracked double-cantilever beam is discussed. The model is validated by experimental data. Finally, the effect of using different shapes of the cohesive crack law is illustratedBaoming GongMarco Paggimarco.paggi@imtlucca.itAlberto Carpinteri2013-12-03T13:51:11Z2014-10-09T09:20:24Zhttp://eprints.imtlucca.it/id/eprint/2006This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/20062013-12-03T13:51:11ZSize-scale effects on interaction diagrams for reinforced concrete columns The use of N–M interaction diagrams is well established in the design of reinforced concrete columns, when the second order effects can be neglected. According to the stress–strain constitutive laws usually adopted to compute the resistant domains, complex phenomena such as size effects and concrete confinement cannot be considered in practical applications. 
On the other hand, several experimental findings, and some analytical models available in the literature, emphasize the influence of such effects. In the present paper, a numerical approach based on the integrated Cohesive/Overlapping Crack Model is applied to compute the interaction diagrams. Compared to classical approaches, different constitutive laws are assumed for concrete in compression and tension, based on Nonlinear Fracture Mechanics models, and a step-by-step analysis is performed instead of limit state analysis. The proposed model permits the size and the confinement effects to be predicted, in agreement with the experimental results. Moreover, the obtained results completely agree with previous extensive applications of the model to plain concrete specimens subjected to uniaxial compression and reinforced concrete beams in bending. Alberto CarpinteriMauro CorradoGiusemaria GosoMarco Paggimarco.paggi@imtlucca.it2013-11-12T14:09:37Z2014-12-10T14:32:52Zhttp://eprints.imtlucca.it/id/eprint/1901This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/19012013-11-12T14:09:37ZSpatial Correlations in Attribute CommunitiesCommunity detection is an important tool for exploring and classifying the properties of large complex networks and should be of great help for spatial networks. Indeed, in addition to their location, nodes in spatial networks can have attributes such as the language for individuals, or any other socio-economical feature that we would like to identify in communities. We discuss in this paper a crucial aspect not considered in previous studies: the possible existence of correlations between space and attributes. Introducing a simple toy model in which both space and node attributes are considered, we discuss the effect of space-attribute correlations on the results of various community detection methods proposed for spatial networks in this paper and in previous studies. 
When space is irrelevant, our model is equivalent to the stochastic block model, which has been shown to display a detectability/non-detectability transition. In the regime where space dominates the link formation process, most methods can fail to recover the communities, an effect which is particularly marked when space-attribute correlations are strong. In this latter case, community detection methods which remove the spatial component of the network can miss a large part of the community structure and can lead to incorrect results.Federica CerinaVincenzo De LeoMarc BarthélemyAlessandro Chessaalessandro.chessa@imtlucca.it2013-11-12T13:57:12Z2013-11-12T13:57:12Zhttp://eprints.imtlucca.it/id/eprint/1900This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/19002013-11-12T13:57:12ZCommunity structure in large-scale cortical networks during motor acts The purpose of the present work is to evaluate the community structure of the cortical network subserving the neurophysiologic processes in simple motor acts. To this end, we studied the topological properties of the functional brain connectivity in the frequency domain. The functional networks were estimated by means of the imaginary coherence from a dataset of high-resolution EEG recordings (4094 cortical sources) in a group of healthy subjects (n = 10) during a finger extension task. The analysis of the community structure was addressed through a particular detection algorithm that optimizes the modularity, a function related to the level of internal clustering inside the communities in the network. The principal results indicate that the cortical network changes its structural organization during the motor execution with respect to a baseline condition. Notably in the Beta band (12.5–30 Hz), the level of intra-module connectivity decreases, while inter-module connectivity increases, reflecting the need for a neural integration of distant regions. 
Notably, this distributed interaction involves anatomical regions belonging to both hemispheres, including pre-motor and primary motor areas in the frontal and central part of the cortex as well as parietal associative regions, which are related to the planning, selection and execution of actions. Fabrizio De Vico FallaniAlessandro Chessaalessandro.chessa@imtlucca.itMiguel ValenciaMario ChavezLaura AstolfiFebo CincottiDonatella MattiaFabio Babiloni2013-11-05T14:36:25Z2013-11-05T14:36:25Zhttp://eprints.imtlucca.it/id/eprint/1862This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/18622013-11-05T14:36:25ZThe ‘Invisible Role’ of Business Groups is made EvidentBusiness Groups collect and coordinate legally autonomous firms operating both within and across national borders. They represent a lion's share of value added generation on a world scale, and yet they have received little attention in the economics literature, probably due to a lack of detailed data. In Altomonte and Rungi (2013) we exploited a unique own-built dataset of proprietary linkages to find that: a) Business Groups are present in both developing and developed countries, adapting their organization according to the peculiarities of the hosting environment; b) within Business Groups, choices of integration of production activities are not independent of choices of management coordination; c) ultimately, choices of management coordination turn out to be important drivers of productivity and dominate choices of vertical integration. 
More generally, we argue that the data tell us that the adoption of different organizational structures at the firm level can in part explain the endurance of productivity gaps across industries and countries, and that the phenomenon of Business Groups becomes even more important after the emergence of Global Value Chains.Armando Rungiarmando.rungi@imtlucca.it2013-11-05T13:55:29Z2013-11-05T15:02:17Zhttp://eprints.imtlucca.it/id/eprint/1860This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/18602013-11-05T13:55:29ZMal d’Afrique: Abundance of Natural Resources and Growth FailureArmando Rungiarmando.rungi@imtlucca.itSilvia Merler2013-11-05T13:46:17Z2013-11-05T13:46:17Zhttp://eprints.imtlucca.it/id/eprint/1859This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/18592013-11-05T13:46:17ZLe start-up come fenomeno regionaleArmando Rungiarmando.rungi@imtlucca.it2013-10-04T11:04:59Z2013-10-04T11:04:59Zhttp://eprints.imtlucca.it/id/eprint/1829This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/18292013-10-04T11:04:59ZDiscovering Communities through FriendshipWe introduce a new method for detecting communities of arbitrary size in an undirected weighted network. Our approach is based on tracing the path of closest‐friendship between nodes in the network using the recently proposed Generalized Erdős Numbers. This method does not require the choice of any arbitrary parameters or null models, and does not suffer from a system‐size resolution limit. Our closest‐friend community detection is able to accurately reconstruct the true network structure for a large number of real-world and artificial benchmarks, and can be adapted to study the multi‐level structure of hierarchical communities as well. We also use the closeness between nodes to develop a degree of robustness for each node, which can assess how robustly that node is assigned to its community. 
To test the efficacy of these methods, we deploy them on a variety of well-known benchmarks, a hierarchically structured artificial benchmark with a known community and robustness structure, as well as real‐world networks of coauthorships between the faculty at a major university and the network of citations of articles published in Physical Review. In all cases, microcommunities, hierarchy of the communities, and variable node robustness are all observed, providing insights into the structure of the network.Greg Morrisongreg.morrison@imtlucca.itL. Mahadevan2013-10-04T10:35:35Z2013-10-04T11:05:22Zhttp://eprints.imtlucca.it/id/eprint/1826This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/18262013-10-04T10:35:35ZRobust error correction in infofusesAn infofuse is a combustible fuse in which information is encoded through the patterning of metallic salts, with transmission in the optical range simply associated with burning. The constraints, advantages and unique error statistics of physical chemistry require us to rethink coding and decoding schemes for these systems. We take advantage of the non-binary nature of our signal with a single bit representing one of N=7 states to produce a code that, using a single or pair of intensity thresholds, allows the recovery of the intended signal with an arbitrarily high recovery probability, given reasonable assumptions about the distribution of errors in the system. An analysis of our experiments with infofuses shows that the code presented is consistent with these schemes, and encouraging for the field of chemical communication and infochemistry given the vast permutations and combinations of allowable non-binary signals. Greg Morrisongreg.morrison@imtlucca.itSamuel W. Thomas IIIChristopher N. LaFrattaJian GuoManuel A. PalaciosSameer SonkusaleDavid R. WaltGeorge M. WhitesidesL. 
Mahadevan2013-09-30T11:51:44Z2013-09-30T11:59:38Zhttp://eprints.imtlucca.it/id/eprint/1809This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/18092013-09-30T11:51:44ZDynamic Hedging of Life Insurance ReservesLuca Regisluca.regis@imtlucca.it2013-09-27T13:02:28Z2013-09-30T11:58:44Zhttp://eprints.imtlucca.it/id/eprint/1808This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/18082013-09-27T13:02:28ZDelta–Gamma hedging of mortality and interest rate risk One of the major concerns of life insurers and pension funds is the increasing longevity of their beneficiaries. This paper studies the hedging problem of annuity cash flows when mortality and interest rates are stochastic. We first propose a Delta–Gamma hedging technique for mortality risk. The risk factor against which to hedge is the difference between the actual mortality intensity in the future and its “forecast” today, the forward intensity. We specialize the hedging technique first to the case in which mortality intensities are affine, then to Ornstein–Uhlenbeck and Feller processes, providing actuarial justifications for this selection. We show that, without imposing no arbitrage, we can get equivalent probability measures under which the HJM condition for no arbitrage is satisfied. Lastly, we extend our results to the presence of both interest rate and mortality risk. We provide a UK-calibrated example of Delta–Gamma hedging of both mortality and interest rate risk. Elisa LucianoLuca Regisluca.regis@imtlucca.itElena Vigna2013-09-27T12:56:57Z2013-09-27T12:56:57Zhttp://eprints.imtlucca.it/id/eprint/1807This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/18072013-09-27T12:56:57ZGood and bad banksIn the recent financial crisis, reorganizations of distressed financial institutions following the good bank and bad bank model were discussed. 
In the context of a structural framework and under perfect information, we analyze endogenous capital structure choices of an arrangement constituted by a large regulated unit which manages the more secure assets of a bank and a smaller division - possibly unregulated - which gathers the more risky and volatile ones. We question whether such an arrangement is a priori optimal and whether financial institutions have private incentives to set up different risk-classes of assets in separate entities. We investigate the effect of intra-group guarantees on optimal leverage and expected default costs. Numerical results show that these guarantees can enhance group value and limit default costs when the firm separates its more secure from its more risky assets in regulated entities.Luca Regisluca.regis@imtlucca.it2013-09-27T12:18:01Z2013-09-27T12:18:01Zhttp://eprints.imtlucca.it/id/eprint/1804This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/18042013-09-27T12:18:01ZDemographic risk transfer: is it worth for annuity providers? ICER Working PaperLongevity risk transfer is a popular choice for annuity providers such as pension funds. This paper formalizes the trade-off between the cost and the risk relief of such choice, when the annuity provider uses value-at-risk to assess risk. Using first-order approximations we show that, if the transfer is fairly priced and the aim of the fund is to maximize returns, the funds' alternatives can be represented in the plane expected return-VaR. 
We build a risk-return frontier, along which the optimal transfer choices of the fund are located, and calibrate it to the 2010 UK annuity and bond market.Elisa LucianoLuca Regisluca.regis@imtlucca.it2013-09-17T13:05:08Z2013-09-17T13:05:08Zhttp://eprints.imtlucca.it/id/eprint/1772This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/17722013-09-17T13:05:08ZOptimality Conditions For A Nonlinear Stochastic Knapsack ProblemWe investigate a nonlinear stochastic knapsack problem with application in Call Admission Control (CAC) with two classes of users, preliminarily studied in [1,2]. Among possible stochastic nonlinear generalizations [3,4] of the NP-hard 0/1 knapsack problem, we consider the following model. One has a knapsack of capacity C and K classes of objects. The objects belonging to each class become available randomly. The inter-arrival times are
exponentially distributed with means depending on the class and on the state of the knapsack. The sojourn time of each object is independent of the others and is described by a class-dependent distribution. When included in the knapsack,
an object from class k generates revenue at a positive rate rk. The occupied portion of the knapsack is given by a nonlinear function bk(nk), where, for k = 1, …, K, nk is the number of objects of class k currently inside. The objects can be inserted as long as the sum of their sizes does not exceed the capacity C.
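The knapsack occupancy model just described can be sketched as follows (a minimal illustration: the capacity C and the nonlinear occupancy functions bk are invented for the example, not taken from the paper):

```python
# State of the stochastic knapsack: n_k objects of class k occupy b_k(n_k)
# units of the capacity C. An arriving class-k object can be admitted only
# if the resulting total occupancy still fits. The b_k below are illustrative
# (class 0 linear, class 1 concave).
import math

C = 10.0
b = [lambda n: float(n),            # b_0(n_0) = n_0
     lambda n: 2.0 * math.sqrt(n)]  # b_1(n_1) = 2 * sqrt(n_1)

def occupancy(state):
    """Total occupied portion of the knapsack for state (n_0, ..., n_{K-1})."""
    return sum(b_k(n_k) for b_k, n_k in zip(b, state))

def can_admit(state, k):
    """True if an arriving class-k object fits within the capacity C."""
    new_state = list(state)
    new_state[k] += 1
    return occupancy(new_state) <= C

print(can_admit((4, 4), 0))  # (5, 4): 5 + 2*sqrt(4) = 9 <= 10 -> True
print(can_admit((8, 4), 0))  # (9, 4): 9 + 2*sqrt(4) = 13 > 10 -> False
```

A coordinate-convex policy can then be represented as a downward-closed set of admissible states, with acceptance decisions generated by membership tests of this kind.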
The stochastic nonlinear 0/1-programming problem consists in deciding whether to accept or reject each arriving object, depending on the current state of the knapsack, so as to maximize the average revenue. The functions used to generate such decisions are called "policies". We focus on coordinate-convex policies. We provide an algorithm which generates all coordinate-convex policies satisfying three different necessary conditions for optimality. Then we derive exact expressions for the cardinalities of these three sets of policies. Finally, we give conditions under which these cardinalities
are significantly smaller than the cardinality of the set of all coordinate-convex policies.Marco CelloGiorgio Gneccogiorgio.gnecco@imtlucca.itMario MarcheseMarcello Sanguineti2013-09-17T13:04:27Z2013-09-17T13:04:27Zhttp://eprints.imtlucca.it/id/eprint/1793This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/17932013-09-17T13:04:27ZAn Application to Two-Hop Forwarding of a Model of Buffer Occupancy in ICNsAn application of the model proposed in Cello et al., "A Model of Buffer Occupancy in ICNs" (IEEE Communications Letters, to appear) is investigated. Such a model provides a relationship in the z-domain between the discrete probability densities of the buffer state occupancies of the nodes in the network and the sizes of the arriving bulks. Under a class of two-hop forwarding strategies, expressions are obtained for the average buffer occupancy and its standard deviation.Marco CelloGiorgio Gneccogiorgio.gnecco@imtlucca.itMario MarcheseMarcello Sanguineti2013-09-17T08:12:18Z2013-09-17T08:12:18Zhttp://eprints.imtlucca.it/id/eprint/1757This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/17572013-09-17T08:12:18ZDynamic Programming And Value-Function Approximation With
Application To Optimal ConsumptionSequential decision problems are considered, where a reward additive over a number of stages has to be maximized. Instances arise in scheduling fleets of vehicles,
allocating resources, selling assets, optimizing transportation or telecommunication networks, inventory forecasting, financial planning, etc. At each stage, Dynamic Programming (DP) introduces the value function, which gives the value of the reward to be incurred at the next stage, as a function of the state at the current stage. The solution is formally obtained via recursive equations. However, closed-form solutions can be derived only in particular cases. We investigate how DP and suitable approximations of the value functions can be combined, providing a methodology to face high-dimensional sequential
decision problems. Approximations of the value functions are considered, expressed as linear combinations of basis functions obtained from a "mother function" (e.g., the Gaussian), by varying some "inner parameters" (e.g., variance and center coordinates) [1-5]. The accuracies of such suboptimal solutions are estimated. It is shown that
one can cope with the "curse of dimensionality" in value-function approximation (i.e., an exponential growth of the number of basis functions required to guarantee a desired solution accuracy). The theoretical analysis is applied to a multidimensional version of the optimal consumption problem. (In the classical version, a consumer aims at maximizing the discounted value of the consumption of a good, given a time horizon, a sequence of interest rates, an initial wealth, and an income earned at each stage. Here, multiple consumers are considered.) The proposed approximation scheme is compared with classical linear approximators, i.e., linear combinations of a-priori
fixed basis functions. It is shown via simulations that our approach provides better solution accuracy, for the same number of computational units as in fixed-basis approximation.Mauro GaggeroGiorgio Gneccogiorgio.gnecco@imtlucca.itMarcello SanguinetiRiccardo Zoppoli2013-09-17T07:56:24Z2013-09-17T07:56:24Zhttp://eprints.imtlucca.it/id/eprint/1752This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/17522013-09-17T07:56:24ZClassifiers for the Detection of Flood Prone Areas from Remote Sensed Elevation DataMassimiliano DegiorgisGiorgio Gneccogiorgio.gnecco@imtlucca.itSilvia GorniGiorgio RothMarcello SanguinetiAngela Celeste Taramasso2013-09-17T07:43:57Z2013-09-17T07:43:57Zhttp://eprints.imtlucca.it/id/eprint/1753This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/17532013-09-17T07:43:57ZClassifiers for the Detection of Flood-Prone Areas Using Remote Sensed Elevation DataA technique is presented for the identification of the areas subject to flooding hazard. Starting from remote sensed elevation data and existing flood hazard maps – usually available for limited areas – the relationships between selected quantitative morphologic features and the flooding hazard are first identified and then used to extend the hazard information to the entire catchment. This is performed through techniques of pattern classification, such as linear classifiers based on quantitative morphologic features, and support vector machines with linear and Gaussian kernels. The experiment starts by discriminating between flood-prone areas and marginal hazard areas. Multiclass classifiers are subsequently used to graduate the hazard. Their designs amount to solving suitable optimization problems. Several performance measures are considered in comparing the different classifiers, such as the area under the receiver operating characteristics curve, and the sum of the false positive and false negative rates. 
The procedure has been validated for the Tanaro basin, a tributary to the major Italian river, the Po. Results show a high reliability: the classifier properly identifies 93% of flood-prone areas, and only 14% of the areas subject to a marginal hazard are improperly assigned. An increase of this latter value up to 19% is detected when the same structure is applied for hazard graduation. Results derived from the application to different catchments seem to qualitatively indicate the ability of the classifier to perform well also outside the calibration region. Pattern classification techniques should be considered when the identification of flood-prone areas and hazard grading is required for large regions (e.g., for civil protection or insurance purposes) or when a first identification is needed (e.g., to address further detailed flood-mapping activities). Massimiliano DegiorgisGiorgio Gneccogiorgio.gnecco@imtlucca.itSilvia GorniGiorgio RothMarcello SanguinetiAngela Celeste Taramasso2013-09-17T07:43:42Z2013-09-17T07:43:42Zhttp://eprints.imtlucca.it/id/eprint/1750This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/17502013-09-17T07:43:42ZApproximation Structures with Moderate Complexity in Functional Optimization and Dynamic ProgrammingConnections between function approximation and classes of functional optimization problems, whose admissible solutions may depend on a large number of variables, are investigated. The insights obtained in this context are exploited to analyze families of nonlinear approximation schemes containing tunable parameters and enjoying the following property: when they are used to approximate the (unknown) solutions to optimization problems, the number of parameters required to guarantee a desired accuracy grows at most polynomially with respect to the number of variables in admissible solutions. 
Both sigmoidal neural networks and networks with kernel units are considered as approximation structures to which the analysis applies. Finally, it is shown how the approach can be applied for the solution of finite-horizon optimal control problems via approximate dynamic programming enhancing the potentialities of recent developments in nonlinear approximation in the framework of the solution of sequential decision problems with continuous state spaces.Mauro GaggeroGiorgio Gneccogiorgio.gnecco@imtlucca.itThomas ParisiniMarcello SanguinetiRiccardo Zoppoli2013-09-17T07:41:22Z2013-09-17T07:41:22Zhttp://eprints.imtlucca.it/id/eprint/1743This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/17432013-09-17T07:41:22ZAccuracy of Approximations of Solutions to Fredholm Equations by Kernel Methods Approximate solutions to inhomogeneous Fredholm integral equations of the second kind by radial and kernel networks are investigated. Upper bounds are derived on errors in approximation of solutions of these equations by networks with increasing model complexity. The bounds are obtained using results from nonlinear approximation theory. The results are applied to networks with Gaussian and kernel units and illustrated by numerical simulations. Giorgio Gneccogiorgio.gnecco@imtlucca.itVěra KůrkováMarcello Sanguineti2013-09-16T09:09:08Z2013-09-16T12:02:59Zhttp://eprints.imtlucca.it/id/eprint/1726This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/17262013-09-16T09:09:08ZSuboptimal Solutions to Team Optimization Problems with Stochastic Information StructureExistence, uniqueness, and approximations of smooth solutions to team optimization problems with stochastic information structure are investigated. Suboptimal strategies made up of linear combinations of basis functions containing adjustable parameters are considered. 
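A variable-basis strategy of the kind considered above can be sketched as follows (a minimal illustration: Gaussian basis functions with adjustable centers and widths as the "inner parameters"; all numeric values are invented):

```python
# Suboptimal strategy as a linear combination of Gaussian basis functions,
# where the centers c_i and widths s_i are adjustable inner parameters and
# the w_i are the outer linear coefficients. In a fixed-basis scheme only
# the w_i would be tuned; here all three parameter sets are free.
import math

def gaussian_strategy(x, weights, centers, widths):
    """f(x) = sum_i w_i * exp(-((x - c_i) / s_i) ** 2)."""
    return sum(w * math.exp(-((x - c) / s) ** 2)
               for w, c, s in zip(weights, centers, widths))

# Two Gaussian units with illustrative parameters:
value = gaussian_strategy(0.0, [1.0, 0.5], [0.0, 2.0], [1.0, 0.5])
print(value)  # ~1.0 (the second unit is far from x = 0 and contributes ~0)
```

Fitting such a strategy amounts to optimizing the weights, centers, and widths jointly, which is what gives variable-basis schemes their approximation-rate advantage over fixed bases.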
Estimates of their accuracies are derived by combining properties of the unknown optimal strategies with tools from nonlinear approximation theory. The estimates are obtained for basis functions corresponding to sinusoids with variable frequencies and phases, Gaussians with variable centers and widths, and sigmoidal ridge functions. The theoretical results are applied to a problem of optimal production in a multidivisional firm, for which numerical simulations are presented.Giorgio Gneccogiorgio.gnecco@imtlucca.itMarcello SanguinetiMauro Gaggero2013-09-13T12:36:29Z2013-09-16T12:02:59Zhttp://eprints.imtlucca.it/id/eprint/1725This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/17252013-09-13T12:36:29ZA Model of Buffer Occupancy for ICNsIn this letter, an analytical framework to model nodes in Intermittently Connected Networks (ICNs) is proposed. A relationship is derived in the z-domain between the discrete probability densities of their buffer state occupancies and the sizes of the arriving bulks. Under a fixed epidemic-routing-based forwarding strategy, expressions are obtained for the average buffer occupancy and its standard deviation with immediate protocol advantages.Marco CelloGiorgio Gneccogiorgio.gnecco@imtlucca.itMario MarcheseMarcello Sanguineti2013-09-13T12:25:47Z2013-09-16T12:02:59Zhttp://eprints.imtlucca.it/id/eprint/1724This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/17242013-09-13T12:25:47ZNew insights into Witsenhausen’s counterexampleThe accuracies of certain suboptimal solutions to the famous and still unsolved optimization problem known as “Witsenhausen’s counterexample” are investigated. The differences between the corresponding suboptimal values of the Witsenhausen functional and its optimum are estimated, too. 
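For concreteness, Witsenhausen's problem reads: x0 ~ N(0, s^2); the first controller sees x0 and sets x1 = x0 + u1 at cost k^2*u1^2; the second sees y = x1 + N(0, 1) and pays (x1 - u2)^2. The sketch below evaluates by Monte Carlo one simple suboptimal strategy (u1 = 0 with a linear MMSE second stage), chosen here purely for illustration; it is not necessarily among the strategies analyzed in the paper:

```python
# Monte Carlo evaluation of the Witsenhausen functional
# J = E[k^2 * u1^2 + (x1 - u2)^2] for the benchmark strategy
# u1 = 0, u2 = s^2 / (s^2 + 1) * y (linear MMSE estimate of x1 from y).
# Theory gives J = s^2 / (s^2 + 1) for this strategy.
import random

def witsenhausen_cost(k, s, n=200_000, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x0 = rng.gauss(0.0, s)
        u1 = 0.0                          # first-stage strategy: do nothing
        x1 = x0 + u1
        y = x1 + rng.gauss(0.0, 1.0)      # noisy observation of x1
        u2 = (s * s / (s * s + 1.0)) * y  # linear MMSE estimate of x1
        total += k * k * u1 * u1 + (x1 - u2) ** 2
    return total / n

# Witsenhausen's classical parameters k = 0.2, s = 5: J is ~25/26 = 0.9615 here,
# far above the cost achieved by the nonlinear (quantizing) strategies
# proposed in the literature.
print(witsenhausen_cost(0.2, 5.0))
```

Comparing such Monte Carlo estimates across candidate strategies is the kind of numerical evidence the accuracy bounds above help to interpret.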
The results give insights into the effectiveness of certain approaches proposed in the literature to face this hard optimization problem and into numerical results obtained by some researchers.Giorgio Gneccogiorgio.gnecco@imtlucca.itMarcello Sanguineti2013-09-13T12:03:55Z2013-09-16T12:02:59Zhttp://eprints.imtlucca.it/id/eprint/1723This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/17232013-09-13T12:03:55ZA Comparison between Fixed-Basis and Variable-Basis Schemes for Function Approximation and Functional OptimizationFixed-basis and variable-basis approximation schemes are compared for the problems of function approximation and functional optimization (also known as infinite programming). Classes of problems are investigated for which variable-basis schemes with sigmoidal computational
units perform better than fixed-basis ones, in terms of the minimum number of computational units needed to achieve a desired error in function approximation or approximate optimization. Previously known bounds on the accuracy are extended, with better rates, to families of Giorgio Gneccogiorgio.gnecco@imtlucca.it2013-07-17T14:21:09Z2014-01-24T14:14:27Zhttp://eprints.imtlucca.it/id/eprint/1647This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/16472013-07-17T14:21:09ZSocial security in a two-country model with myopic agentsIn a standard two-period overlapping generations model, two symmetric countries are involved, each with a PAYG pension system. This paper focuses on how myopic households may affect the optimal pension policy under both non-cooperative and cooperative schemes. It distinguishes between the case in which the pension authority considers only the current welfare of the living generations and that in which it considers their lifetime welfare. Assuming perfect international capital mobility, international cooperation among national pension authorities boosts capital accumulation when the international authority only considers the current welfare of living generations, even though the presence of myopic individuals lowers the welfare gain from cooperation. When the lifetime welfare is considered, international cooperation may depress capital accumulation; however, with enough myopes in the economy, international cooperation again boosts capital accumulation. The size of the PAYG system increases with the number of myopic individuals in the cooperative equilibrium. 
In the non-cooperative equilibrium, this relationship only holds when each national authority considers the lifetime wellbeing of the living generations.Xue Wenxue.wen@imtlucca.it2013-07-17T10:41:54Z2014-01-24T14:12:06Zhttp://eprints.imtlucca.it/id/eprint/1646This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/16462013-07-17T10:41:54ZA Probabilistic Voting Model of Social Security: The Role of Myopic AgentsThis paper investigates the political incentives for the design of social security policy in competitive democracies with both far-sighted and myopic households. The social security scheme depends on both a payroll tax rate, which determines the size of the pension, and a Bismarckian factor, which represents its redistributive component. By considering a probabilistic voting setting of electoral competition, we analyze the political game between office-seeking politicians and self-interested citizens.
Politicians can win the election by targeting the voters in each group, trading off the generosity and the degree of redistribution of the public pension system. In the
political equilibrium, the contribution rate is U-shaped with respect to the Bismarckian factor. Moreover, the equilibrium Bismarckian factor unambiguously decreases with the proportion of myopic agents, whereas the equilibrium payroll tax rate curve is U-shaped with respect to the proportion of myopic agents.Xue Wenxue.wen@imtlucca.it2013-07-10T12:13:36Z2013-07-10T12:13:36Zhttp://eprints.imtlucca.it/id/eprint/1642This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/16422013-07-10T12:13:36ZEnvironmental Kuznets Curve and Air Pollution in city of London: Evidence from New Panel Smoothing Transition RegressionsThe purpose of the paper is to test empirically the existence of the environmental Kuznets curve (EKC), using existing and new Panel Smoothing Transition Regressions (PSTR) in the city of London. More specifically, two new PSTR are proposed, based on the Gaussian and the Generalized Bell functions used in Fuzzy Logic. Moreover, two air pollutants are examined using social data from the British Household Panel Survey. The air pollutants are carbon monoxide (CO) and sulphur dioxide (SO2). In particular, the paper uses three-regime smoothing transition regressions: because the definition of two-regime smoothing regressions is not very clear when three income classes exist, three-regime smoothing regressions are proposed, the three regimes being low, middle and high income. In the case of CO, a negative relationship between air emissions and income is reported in the low and high income classes, while middle income households present a positive relation. On the contrary, regarding SO2, individuals and households with middle income pollute more, followed by the high income class, while a negative association is found for low income.
Therefore, the EKC should be examined for various air pollutants at the micro-economic level too, because the patterns derived from the estimations vary across air pollutants. Eleftherios Giovaniseleftherios.giovanis@imtlucca.it2013-07-10T10:59:17Z2013-07-10T10:59:17Zhttp://eprints.imtlucca.it/id/eprint/1641This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/16412013-07-10T10:59:17ZStudy of Discrete Choice Models and Adaptive Neuro-Fuzzy Inference System in the Prediction of Economic Crisis Periods in USAIn this study two approaches are applied for the prediction of the economic recession or expansion periods in the USA. The first approach includes Logit and Probit models and
the second is an Adaptive Neuro-Fuzzy Inference System (ANFIS) with Gaussian and Generalized Bell membership functions. The in-sample period 1950-2006 is examined
and the forecasting performance of the two approaches is evaluated during the out-of sample period 2007-2010. The estimation results show that the ANFIS model outperforms
the Logit and Probit models. This indicates that the neuro-fuzzy model provides a better and more reliable signal of whether or not a financial crisis will take place.Eleftherios Giovaniseleftherios.giovanis@imtlucca.it2013-06-10T12:22:16Z2013-06-11T12:02:47Zhttp://eprints.imtlucca.it/id/eprint/1614This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/16142013-06-10T12:22:16ZIl nome della cosa. Classificare, schedare, discriminareIn this monographic issue of the historical journal «Zapruder», Balestracci and Ricciardi present the practice of modern nation-states of sorting, classifying and also discriminating against all social, cultural and political phenomena that threaten the social, political and moral order of the nation. The issue covers various European cases from the 19th and 20th centuries.Fiammetta Balestraccifiammetta.balestracci@imtlucca.itFerrucci Ricciardi2013-06-10T12:16:40Z2013-06-10T12:16:40Zhttp://eprints.imtlucca.it/id/eprint/1613This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/16132013-06-10T12:16:40Z[review of] Franco Milanesi, Ribelli e borghesi. Nazionalbolscevismo e rivoluzione conservatrice 1914-1933Fiammetta Balestraccifiammetta.balestracci@imtlucca.it2013-06-10T11:09:11Z2013-06-10T11:09:11Zhttp://eprints.imtlucca.it/id/eprint/1610This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/16102013-06-10T11:09:11Z[review of] Marzia Ponso, Una storia particolare. «Sonderweg» tedesco e identità europeaFiammetta Balestraccifiammetta.balestracci@imtlucca.it2013-05-31T08:42:21Z2013-05-31T08:42:21Zhttp://eprints.imtlucca.it/id/eprint/1605This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/16052013-05-31T08:42:21ZAntitrust?, Grazie, abbiamo altri impegni!The paper critically analyzes the enforcement practice of the competition authority as related to the use of commitments decisions to conclude an antitrust investigation.
In this perspective, and in order to draw conclusions about the use, suitability and appropriateness of this legal instrument, a quantitative and qualitative analysis is carried out, paying specific attention to its most relevant variables: the adoption rate, the comparison with EU practice, the typology of commitments and their consistency with the ‘concerns’ detected in the preliminary investigation phases, the rate of presentation and acceptance, and finally the efficiency of this process in leading to the formal conclusion of the case as compared to the ordinary procedure. On the basis of the findings of this analysis, which covers the period since the enactment of the legal rule, the change in the authority's enforcement practice induced by an excessive expansion of the scope of commitments decisions is then assessed, with a view to suggesting a discontinuity.Andrea Giannaccaria.giannaccari@imtlucca.it2013-05-29T09:10:47Z2013-09-09T11:06:27Zhttp://eprints.imtlucca.it/id/eprint/1593This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/15932013-05-29T09:10:47ZMargaret Thatcher e i paradossi di una leadership liberaleMargaret Thatcher's years in government (4 May 1979 - 28 November 1990) represent one of the most radical examples of change in the political culture of a democratic country, to the point that many commentators refer to it as a genuine revolution.
A first striking element is that it was a revolution that changed the country's institutional framework only marginally, or rather did not formally change the British constitution and institutions. At the end of Thatcher's time in government, the formal political institutions, institutional practices, electoral law and party system had not changed. What had changed, however, was the country as a whole: the economy had changed, the welfare state had changed, and above all the political culture of the two main parties and of much of the electorate had changed.
An interesting question to ask, then, is how this change was possible and what its “ingredients” were. What was the role of the politician Margaret Thatcher, of her leadership? How much was due to luck, and how much to the characteristics of Anglo-Saxon political institutions? Was Thatcher the sole, conscious creator of such a change (an idea that has led some commentators to speak of “Thatcherism”, as if it were a new political ideology), or was she its co-author, if not also, in part, herself a product of that process of change? Was it she who made a more or less unscrupulous “political use” of history, or at least of a certain interpretation of historical facts, or was she herself the product of a genuine historical trend that was unfolding in Great Britain independently of her person?
Seen in this light, the experience of Thatcherism appears not only an interesting historical and political event, but also an ideal litmus test for asking how political change in a liberal direction is possible in mature democracies, and for observing how such change can sometimes appear paradoxical when analysed along the double track of political theory and practice.Antonio Masalaa.masala@imtlucca.it2013-05-17T08:25:25Z2013-11-21T11:43:09Zhttp://eprints.imtlucca.it/id/eprint/1585This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/15852013-05-17T08:25:25ZLanguages cool as they expand: Allometric scaling and the decreasing need for new words We analyze the occurrence frequencies of over 15 million words recorded in millions of books published during the past two centuries in seven different languages. For all languages and chronological subsets of the data we confirm that two scaling regimes characterize the word frequency distributions, with only the more common words obeying the classic Zipf law. Using corpora of unprecedented size, we test the allometric scaling relation between the corpus size and the vocabulary size of growing languages to demonstrate a decreasing marginal need for new words, a feature that is likely related to the underlying correlations between words. We calculate the annual growth fluctuations of word use, which show a decreasing trend as the corpus size increases, indicating a slowdown in linguistic evolution following language expansion. This “cooling pattern” forms the basis of a third statistical regularity, which, unlike the Zipf and Heaps laws, is dynamical in nature.Alexander M. Petersenalexander.petersen@imtlucca.itJoel TenenbaumShlomo HavlinH.
Eugene StanleyMatjaz Perc2013-05-16T13:06:14Z2016-07-13T09:48:45Zhttp://eprints.imtlucca.it/id/eprint/1581This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/15812013-05-16T13:06:14ZA Conceptual Framework for AdaptationIn this position paper we present a conceptual vision of adaptation, a key feature of autonomic systems. We put some stress on the role of control data and argue how some of the programming paradigms and models used for adaptive systems match with our conceptual framework.Roberto BruniAndrea CorradiniFabio GadducciAlberto Lluch-Lafuentealberto.lluch@imtlucca.itAndrea Vandinandrea.vandin@imtlucca.it2013-05-14T08:41:54Z2016-10-04T15:31:34Zhttp://eprints.imtlucca.it/id/eprint/1580This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/15802013-05-14T08:41:54ZAnalysis of a Hurst parameter estimator based on the modified Allan variance In order to estimate the Hurst parameter of Internet
traffic data, a log-regression estimator based on the so-called modified Allan variance (MAVAR) has recently been proposed. Simulations have shown that this estimator achieves higher accuracy and better confidence when compared with another commonly used method based on wavelet analysis. Here we link it to the wavelet setting and stress why a different analysis is required for the two approaches. We then focus on the asymptotic analysis of the MAVAR log-regression estimator and provide new formulas for the related confidence intervals. By numerical evaluation, we analyze these formulas and make a comparison
among three suitable choices of the regression weights, also optimizing over different choices of the data progression.Alessandra BianchiStefano BregniIrene Crimaldiirene.crimaldi@imtlucca.itMarco Ferrari2013-05-02T14:11:09Z2013-05-02T14:11:09Zhttp://eprints.imtlucca.it/id/eprint/1566This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/15662013-05-02T14:11:09ZProceedings of 8th International Workshop on Automated Specification and Verification of Web Systems (WWV 2012)This volume contains the final and revised versions of the papers presented at the 8th International Workshop on Automated Specification and Verification of Web Systems (WWV 2012). The workshop was held in Stockholm, Sweden, on June 16, 2012, as part of DisCoTec 2012.
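As an aside to the MAVAR item above: the modified Allan variance at the core of that log-regression estimator can be sketched in a few lines. This is a minimal sketch using the standard MAVAR definition for a uniformly sampled series; the toy signals and the unit sampling period tau0 are invented for illustration.

```python
def mavar(x, n, tau0=1.0):
    """Modified Allan variance at observation interval tau = n * tau0:
    MAvar(n) = sum_j [ sum_{i=j}^{j+n-1} (x[i+2n] - 2*x[i+n] + x[i]) ]^2
               / (2 * n**4 * tau0**2 * M),  with M = len(x) - 3n + 1 windows."""
    N = len(x)
    M = N - 3 * n + 1
    if M < 1:
        raise ValueError("series too short for this averaging factor n")
    total = 0.0
    for j in range(M):
        # Averaged second difference over the window starting at j.
        inner = sum(x[i + 2 * n] - 2 * x[i + n] + x[i] for i in range(j, j + n))
        total += inner * inner
    return total / (2.0 * n ** 4 * tau0 ** 2 * M)

# Sanity checks: the second difference annihilates a linear drift, and a
# quadratic series x_i = i^2 gives exactly MAvar(n) = 2 * n^2.
line = [3.0 * i + 1.0 for i in range(200)]
quad = [float(i * i) for i in range(200)]
print(mavar(line, 5))   # prints 0.0
print(mavar(quad, 5))   # prints 50.0
```

A log-regression of MAvar(n) against n over several observation intervals then estimates the power-law exponent of the variance; the mapping from that exponent to the Hurst parameter and the confidence intervals for the slope are the subject of the paper itself and are not reproduced here.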
WWV is a yearly workshop that aims at providing an interdisciplinary forum to facilitate the cross-fertilization and the advancement of hybrid methods that exploit concepts and tools drawn from Rule-based programming, Software engineering, Formal methods and Web-oriented research. WWV has a reputation for being a lively, friendly forum for presenting and discussing work in progress. The proceedings have been produced after the symposium to allow the authors to incorporate the feedback gathered during the event in the published papers.
All papers submitted to the workshop were reviewed by at least three Program Committee members or external referees. The Program Committee held an electronic discussion leading to the acceptance of all papers for presentation at the workshop. In addition to the presentation of the contributed papers, the scientific programme included the invited talks by two outstanding speakers: Rocco De Nicola (IMT, Institute for Advanced Studies Lucca, Italy) and José Luiz Fiadeiro (Royal Holloway, United Kingdom). Josep SilvaFrancesco Tiezzifrancesco.tiezzi@imtlucca.it2013-05-02T14:08:18Z2013-05-02T14:08:18Zhttp://eprints.imtlucca.it/id/eprint/1562This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/15622013-05-02T14:08:18ZA Calculus for Orchestration of Web ServicesService-oriented computing, an emerging paradigm for distributed computing based on the use of services, is calling for the development of tools and techniques to build safe and trustworthy systems, and to analyse their behaviour. Therefore, many researchers have proposed to use process calculi, a cornerstone of current foundational research on specification and analysis of concurrent, reactive, and distributed systems. In this paper, we follow this approach and introduce COWS, a process calculus expressly designed for specifying and combining service-oriented applications, while modelling their dynamic behaviour. We show that COWS can model all the phases of the life cycle of service-oriented applications, such as publication, discovery, negotiation, orchestration, deployment, reconfiguration and execution.
We illustrate the specification style that COWS supports by means of a large case study from the automotive domain and a number of more specific examples drawn from it.Rosario PuglieseFrancesco Tiezzifrancesco.tiezzi@imtlucca.it2013-05-02T14:05:09Z2013-05-02T14:05:09Zhttp://eprints.imtlucca.it/id/eprint/1577This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/15772013-05-02T14:05:09Ze-Health for Rural Areas in Developing Countries: Lessons from the Sebokeng ExperienceWe report the experience gained in an e-Health project in
the Gauteng province, in South Africa. A Proof-of-Concept of the project has already been installed in 3 clinics in the Sebokeng township. The project is now going to be applied to 300 clinics in the whole province. This extension of the Proof-of-Concept can, however, give rise to security
flaws because of the inclusion of rural areas with unreliable Internet connection. We address this problem and propose a safe solution.Massimiliano MasiRosario PuglieseFrancesco Tiezzifrancesco.tiezzi@imtlucca.it2013-05-02T13:50:27Z2013-05-02T13:51:02Zhttp://eprints.imtlucca.it/id/eprint/1561This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/15612013-05-02T13:50:27ZSecurity Analysis of Standards-Driven Communication Protocols for Healthcare ScenariosThe importance of the Electronic Health Record (EHR), which stores all healthcare-related data belonging to a patient, has been recognised in recent years by governments, institutions and industry. Initiatives like the Integrating the Healthcare Enterprise (IHE) have been developed for the
definition of standard methodologies for secure and interoperable EHR exchanges among clinics and hospitals. Using the requisites specified by these initiatives, many large scale projects have been set up for enabling healthcare professionals to handle patients’ EHRs. The success of applications developed in these contexts crucially depends on ensuring such security properties as
confidentiality, authentication, and authorization.
In this paper, we first propose a communication
protocol, based on the IHE specifications, for authenticating healthcare professionals and assuring
patients’ safety. By means of a formal analysis
carried out by using the specification language
COWS and the model checker CMC, we reveal a security flaw in the protocol, thus demonstrating that simply adopting the international standards does not guarantee the absence of such flaws. We then propose how to emend the IHE
specifications and modify the protocol accordingly.
Finally, we show how to tailor our protocol for application to more critical scenarios with no assumptions on the communication channels. To demonstrate the feasibility and effectiveness of our protocols, we have fully implemented them.Massimiliano MasiRosario PuglieseFrancesco Tiezzifrancesco.tiezzi@imtlucca.it2013-05-02T13:36:28Z2013-05-02T13:36:28Zhttp://eprints.imtlucca.it/id/eprint/1575This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/15752013-05-02T13:36:28ZFormalisation and Implementation of the XACML Access Control MechanismWe propose a formal account of XACML, an OASIS standard adhering to the Policy Based Access Control model for the specification and enforcement of access control policies. To clarify all ambiguous and intricate aspects of XACML, we provide it with a more manageable alternative syntax and with a solid semantic ground. This lays the basis
for developing tools and methodologies which allow software engineers to easily and precisely regulate access to resources using policies. To demonstrate feasibility and effectiveness of our approach, we provide a software tool, supporting the specification and evaluation of policies and access requests, whose implementation fully relies on our formal development.Massimiliano MasiRosario PuglieseFrancesco Tiezzifrancesco.tiezzi@imtlucca.it2013-05-02T13:30:10Z2013-05-02T13:30:10Zhttp://eprints.imtlucca.it/id/eprint/1563This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/15632013-05-02T13:30:10ZUsing formal methods to develop WS-BPEL applicationsIn recent years, WS-BPEL has become a de facto standard language for orchestration of Web Services. However, there are still some well-known difficulties that make programming
in WS-BPEL a tricky task. In this paper, we first point out the major loose points of the WS-BPEL specification by means of many examples, some of which are also exploited to test and compare the behaviour of three of the best-known freely available WS-BPEL engines. We show that, as a matter of fact, these engines implement different semantics, which undermines the portability of WS-BPEL programs across different platforms. Then we introduce Blite, a prototypical orchestration language equipped with a formal
operational semantics, which is closely inspired by, but simpler than, WS-BPEL. Indeed, Blite is designed around some of WS-BPEL's distinctive features, such as partner links, process termination, message correlation, long-running business transactions and compensation handlers. Finally, we present BliteC, a software tool supporting rapid and easy development of WS-BPEL applications via translation of service orchestrations written in Blite into executable WS-BPEL programs. We illustrate our approach by means of a running example borrowed from the official specification of WS-BPEL.Alessandro LapadulaRosario PuglieseFrancesco Tiezzifrancesco.tiezzi@imtlucca.it2013-05-02T13:25:15Z2013-05-02T13:25:15Zhttp://eprints.imtlucca.it/id/eprint/1576This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/15762013-05-02T13:25:15ZModeling adaptation with a tuple-based coordination languageIn recent years, it has been argued that systems and applications, in order to deal with their increasing complexity, should be able to adapt their behavior according to new requirements or environment conditions. In this paper, we present a preliminary investigation aiming at studying how coordination languages and formal methods can contribute to a better understanding, implementation and usage of the mechanisms and techniques for adaptation currently proposed in the literature.
Our study relies on the formal coordination language Klaim as a common framework for modeling some adaptation techniques, namely the MAPE-K loop, aspect- and context-oriented programming.Edmond GjondrekajMichele LoretiRosario PuglieseFrancesco Tiezzifrancesco.tiezzi@imtlucca.it2013-05-02T13:21:35Z2013-05-02T13:21:35Zhttp://eprints.imtlucca.it/id/eprint/1559This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/15592013-05-02T13:21:35ZModeling Adaptation with KlaimIn recent years, it has been argued that systems and applications, in order to deal with their increasing complexity, should be able to adapt their behavior according to new requirements or environment conditions. In this paper, we present an investigation aiming at studying how coordination languages and formal methods can contribute to a better understanding, implementation and use of the mechanisms and techniques for adaptation currently proposed in the literature. Our study relies on the formal coordination language Klaim as a common framework for modeling some well-known adaptation techniques: the IBM MAPE-K loop, the Accord component-based framework for architectural adaptation, and the aspect- and context-oriented programming paradigms. We illustrate our approach through a simple example concerning a data repository equipped with an automated cache mechanism.Edmond GjondrekajMichele LoretiRosario PuglieseFrancesco Tiezzifrancesco.tiezzi@imtlucca.it2013-05-02T13:09:59Z2013-05-02T13:09:59Zhttp://eprints.imtlucca.it/id/eprint/1560This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/15602013-05-02T13:09:59ZA Logical Verification Methodology for Service-Oriented ComputingWe introduce a logical verification methodology for checking behavioural properties of service-oriented computing systems. 
Service properties are described by means of SocL, a branching-time temporal logic that we have specifically designed to express effectively the distinctive aspects of services, such as acceptance of a request, provision of a response, and correlation among service requests and responses. Our approach allows service properties to be expressed in such a way that
they can be independent of service domains and specifications. We show an instantiation of our general methodology that uses the formal language COWS to conveniently specify services and the expressly developed software tool CMC to assist the user in the task of verifying SocL formulae over service specifications. We demonstrate feasibility and effectiveness of our methodology by means of the specification and the analysis of a case study in the automotive domain.Alessandro FantechiStefania GnesiAlessandro LapadulaFranco MazzantiRosario PuglieseFrancesco Tiezzifrancesco.tiezzi@imtlucca.it2013-05-02T12:47:10Z2013-05-02T12:47:10Zhttp://eprints.imtlucca.it/id/eprint/1574This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/15742013-05-02T12:47:10ZTowards Model-Driven Development of Access Control Policies for Web ApplicationsWe introduce a UML-based notation for graphically modeling
systems’ security aspects in a simple and intuitive
way and a model-driven process that transforms graphical
specifications of access control policies into XACML. These XACML policies are then translated into FACPL, a policy
language with a formal semantics, and the resulting policies
are evaluated by means of a Java-based software tool.Marianne BushNora KochMassimiliano MasiRosario PuglieseFrancesco Tiezzifrancesco.tiezzi@imtlucca.it2013-05-02T12:26:20Z2013-05-02T12:26:20Zhttp://eprints.imtlucca.it/id/eprint/1578This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/15782013-05-02T12:26:20ZOrchestrating Tuple-based LanguagesThe World Wide Web can be thought of as a global computing architecture supporting the deployment of distributed networked applications. Currently, such applications can be programmed by resorting mainly to two distinct paradigms: one devised for orchestrating distributed services, and the other designed for coordinating distributed (possibly mobile) agents. In this paper, the issue of designing a programming language aiming at reconciling orchestration and coordination is investigated. Taking as a starting point the orchestration calculus Orc and the tuple-based coordination language Klaim, a new formalism is introduced combining concepts and primitives of the original calculi.
To demonstrate feasibility and effectiveness of the proposed approach, a prototype implementation of the new formalism is described and it is then used to tackle a case study dealing with a simplified but realistic electronic marketplace, where a number of on-line stores allow client
applications to access information about their goods and to place orders.Rocco De Nicolar.denicola@imtlucca.itAndrea MargheriFrancesco Tiezzifrancesco.tiezzi@imtlucca.it2013-05-02T12:21:09Z2013-05-02T12:21:09Zhttp://eprints.imtlucca.it/id/eprint/1573This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/15732013-05-02T12:21:09ZTowards a Formal Verification Methodology for Collective Robotic SystemsWe introduce a UML-based notation for graphically modeling
Edmond GjondrekajMichele LoretiRosario PuglieseFrancesco Tiezzifrancesco.tiezzi@imtlucca.itCarlo PinciroliManuele BrambillaMauro BirattariMarco Dorigo2013-04-30T14:12:58Z2013-04-30T14:12:58Zhttp://eprints.imtlucca.it/id/eprint/1557This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/15572013-04-30T14:12:58ZAn arrival-based framework for human mobility modelingModeling human mobility is crucial in the performance analysis and simulation of mobile ad hoc networks, where contacts are exploited as opportunities for peer-to-peer message forwarding. The current approach to human mobility modeling has been based on continuously modifying models, trying to embed in them the newest features of mobility properties (e.g., visiting patterns to locations or inter-contact times) as they came up from trace analysis. As a consequence, typically these models are neither flexible (i.e., features of mobility cannot be changed without changing the model) nor controllable (i.e., the exact shape of mobility properties cannot be controlled directly). In order to take into account the above requirements, in this paper we propose a mobility framework whose goal is, starting from the stochastic process describing the arrival patterns of users to locations, to generate pairwise inter-contact times and aggregate inter-contact times featuring a predictable probability distribution. We validate the proposed framework by means of simulations.
In addition, assuming that the arrival process of users to locations can be described by a Bernoulli process, we mathematically derive a closed form for the pairwise and aggregate inter-contact times, proving the controllability of the proposed approach in this case.Dmytro Karamshukdmytro.karamshuk@imtlucca.itChiara BoldriniMarco ContiAndrea Passarella2013-04-17T10:07:59Z2013-04-17T10:07:59Zhttp://eprints.imtlucca.it/id/eprint/1546This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/15462013-04-17T10:07:59ZLe Surréalisme et la villeThe essay traces the fascination of the Surrealists for the modern city - as embodied, in particular, by Paris - to the influence of Baudelaire, Nietzsche and de Chirico, illustrating the changes that Surrealism engendered in modern perceptions of the urban space as metaphorical of human consciousness on both individual and collective levels.Silvia Loretisilvia.loreti@imtlucca.it2013-04-12T13:32:30Z2013-04-12T13:32:30Zhttp://eprints.imtlucca.it/id/eprint/1542This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/15422013-04-12T13:32:30ZQuantitative Multirun Security under Active AdversariesWe study the security of probabilistic programs under the assumption that an active adversary controls part of the program's inputs, and the program can be run several times. The adversary's targets are the high, confidential inputs to the program. We model the program behaviour as an information-theoretic channel and define a notion of quantitative multi-run leakage. We characterize in a simple way both the asymptotic multi-run leakage and its exponential growth rate as a function of the number of runs; the characterization is given in terms of the program's channel matrix.
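The channel-matrix characterization just stated can be made concrete with a toy computation. This is a sketch under the assumptions that leakage is measured as min-entropy leakage, the prior over secrets is uniform, and the n runs are independent; the 2x2 channel matrix below is invented for illustration.

```python
import itertools
import math

# Invented channel matrix of a toy program: C[s][o] = P(observable o | secret s).
C = [[0.8, 0.2],
     [0.3, 0.7]]

def min_entropy_leakage(C, n):
    """Min-entropy leakage (in bits) after n independent runs, uniform prior:
    log2( sum over n-tuples of observables of max_s prod_i C[s][o_i] )."""
    n_secrets, n_obs = len(C), len(C[0])
    total = 0.0
    for obs in itertools.product(range(n_obs), repeat=n):
        total += max(math.prod(C[s][o] for o in obs) for s in range(n_secrets))
    return math.log2(total)

# Leakage grows with the number of runs and saturates at log2(#secrets) = 1 bit.
for n in (1, 2, 4, 8):
    print(n, min_entropy_leakage(C, n))
```

With two secrets the multi-run leakage is non-decreasing in n and bounded by 1 bit; the rate at which it approaches that bound is the kind of asymptotic quantity the abstract characterizes via the channel matrix (the exact growth-rate formula is the paper's own and is not reproduced here).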
We then study the case where a declassification policy is specified: we define a measure of the degree of violation of the policy and characterize its asymptotic multi-run behaviour, thus allowing for a combined analysis of what and how much information is leaked. We finally study the case where a user is faced with the task of assessing the undue influence of an active adversary on a deployed program or system, of which only a (black-box) specification is available.Michele BorealeFrancesca Pampalonifrancesca.pampaloni@imtlucca.it2013-03-19T15:20:14Z2014-12-02T09:48:25Zhttp://eprints.imtlucca.it/id/eprint/1520This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/15202013-03-19T15:20:14ZFrom motion to emotion. An ancient Greek iconography between literal and symbolic interpretations. Maria Luisa Catonimarialuisa.catoni@imtlucca.it2013-03-07T14:02:16Z2013-03-12T14:58:11Zhttp://eprints.imtlucca.it/id/eprint/1530This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/15302013-03-07T14:02:16ZStagewise K-SVD to Design Efficient Dictionaries for Sparse RepresentationsThe problem of training a dictionary for sparse representations from a given dataset is receiving a lot of attention mainly due to its applications in the fields of coding, classification and pattern recognition. One of the open questions is how to choose the number of atoms in the dictionary: if the dictionary is too small then the representation errors are big, and if the dictionary is too big then using it becomes computationally expensive. In this letter, we solve the problem of computing efficient dictionaries of reduced size by a new design method, called Stagewise K-SVD, which is an adaptation of the popular K-SVD algorithm. Since K-SVD performs very well in practice, we use K-SVD steps to gradually build dictionaries that fulfill an imposed error constraint.
The conceptual simplicity of the method makes it easy to apply, while the numerical experiments highlight its efficiency for different overcomplete dictionaries.Cristian Rusucristian.rusu@imtlucca.itBogdan Dumitrescu2013-03-07T13:48:00Z2013-03-12T14:58:11Zhttp://eprints.imtlucca.it/id/eprint/1529This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/15292013-03-07T13:48:00ZIterative reweighted l1 design of sparse FIR filtersSparse FIR filters have lower implementation complexity than full filters, while keeping a good performance level. This paper describes a new method for designing 1D and 2D sparse filters in the minimax sense using a mixture of reweighted l1 minimization and greedy iterations. The combination proves to be quite efficient; after the reweighted l1 minimization stage introduces zero coefficients in bulk, a small number of greedy iterations serve to eliminate a few extra coefficients. Experimental results and a comparison with the latest methods show that the proposed method performs very well both in running speed and in the quality of the solutions obtained.Cristian Rusucristian.rusu@imtlucca.itBogdan Dumitrescu2013-03-07T13:30:21Z2013-03-12T14:58:11Zhttp://eprints.imtlucca.it/id/eprint/1528This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/15282013-03-07T13:30:21ZFast design of efficient dictionaries for sparse representationsOne of the central issues in the field of sparse representations is the design of overcomplete dictionaries with a fixed sparsity level from a given dataset. This article describes a fast and efficient procedure for the design of such dictionaries. The method implements the following ideas: a reduction technique is applied to the initial dataset to speed up the subsequent training; the training itself then runs a more sophisticated iterative expansion procedure based on K-SVD steps.
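As background for the K-SVD-based designs above, one atom update of the K-SVD algorithm can be sketched as follows; this is a minimal NumPy sketch of the rank-1 SVD update at the core of K-SVD, with variable names and shapes that are ours, not taken from the papers.

```python
import numpy as np

def ksvd_atom_update(D, Gamma, X, k):
    """One K-SVD atom update: refit atom k of dictionary D and its
    coefficients (row k of Gamma) via a rank-1 SVD of the residual,
    restricted to the signals in X that actually use atom k."""
    omega = np.nonzero(Gamma[k, :])[0]          # signals using atom k
    if omega.size == 0:
        return D, Gamma
    # residual without atom k's contribution, on the support only
    E = X[:, omega] - D @ Gamma[:, omega] + np.outer(D[:, k], Gamma[k, omega])
    U, s, Vt = np.linalg.svd(E, full_matrices=False)
    D[:, k] = U[:, 0]                           # new unit-norm atom
    Gamma[k, omega] = s[0] * Vt[0, :]           # updated coefficients
    return D, Gamma
```

Because the rank-1 SVD factor is the best rank-1 approximation of the residual, the overall representation error never increases under this update, which is what lets Stagewise K-SVD grow a dictionary until an imposed error constraint is met.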
Numerical experiments on image data show the effectiveness of the proposed design strategy.Cristian Rusucristian.rusu@imtlucca.it2013-03-07T13:20:08Z2013-03-12T14:58:11Zhttp://eprints.imtlucca.it/id/eprint/1527This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/15272013-03-07T13:20:08ZClustering before training large datasets - Case study: K-SVDTraining and using overcomplete dictionaries have been the subject of many developments in the area of signal processing and sparse representations. The main idea is to train a dictionary that is able to achieve good sparse representations of the items contained in a given dataset. The most popular approach is the K-SVD algorithm, and in this paper we study its application to large datasets. The main interest is to speed up the training procedure while keeping the representation errors close to some specific values. This goal is reached by using a clustering procedure, called here T-mindot, which reduces the size of the dataset but keeps the most representative data items and a measure of their importance. Experimental simulations compare the running times and representation errors of the training method with and without the clustering procedure, and they clearly show how effective T-mindot is.Cristian Rusucristian.rusu@imtlucca.it2013-03-07T13:07:47Z2013-03-12T14:58:11Zhttp://eprints.imtlucca.it/id/eprint/1525This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/15252013-03-07T13:07:47ZClustering large datasets - bounds and applications with K-SVDThis article presents a clustering method called T-mindot that is used to reduce the dimension of datasets in order to diminish the running time of the training algorithms. The T-mindot method is applied before the K-SVD algorithm in the context of sparse representations for the design of
overcomplete dictionaries. Simulations on image data show the efficiency of the proposed method, which leads to a substantial reduction of the execution time of K-SVD while preserving the representation performance of dictionaries designed from the original dataset.Cristian Rusucristian.rusu@imtlucca.it2013-03-06T11:22:15Z2013-03-12T09:32:21Zhttp://eprints.imtlucca.it/id/eprint/1518This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/15182013-03-06T11:22:15ZDetecting ACS and Identifying Acute Ischemic Territories with Cardiac Phase-Resolved BOLD MRI at RestSotirios A. Tsaftarissotirios.tsaftaris@imtlucca.itXiangzhi ZhouRichard TangJ. MinDebiao LiRohan Dharmakumar2013-03-06T10:56:25Z2013-03-12T09:32:21Zhttp://eprints.imtlucca.it/id/eprint/1517This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/15172013-03-06T10:56:25ZChronic Iron Deposition following Acute Hemorrhagic Myocardial Infarction: A Cardiovascular Magnetic Resonance StudyIntroduction - Intramyocardial hemorrhage frequently occurs in large reperfused myocardial infarctions (MI). However, its long-term fate remains unexplored.
Hypothesis - We hypothesize that intramyocardial hemorrhage, secondary to reperfused MI, results in chronic iron deposition within infarcted territories.
Methods - We studied 15 patients by Cardiovascular Magnetic Resonance (CMR) T2* mapping (1.5T) on day 3 and 6 months after successful percutaneous coronary intervention for first STEMI. Using the same CMR protocol, we also studied 20 canines on days 3 and 56 post ischemia-reperfusion injury, of which 3 animals received sham procedures. Subsequently, canine hearts were explanted and imaged ex-vivo, and samples of hemorrhagic infarcts (Hemo+), non-hemorrhagic infarcts (Hemo-), remote and sham myocardium were isolated and sectioned, and mass spectrometry was performed.
Results - Eleven patients had Hemo+ (verified by T2* CMR on day 3) and their scar tissue T2* values remained significantly lower after 6 months, when compared to Hemo- and remote myocardium (Fig 1; p<0.001). In canines, Hemo+ territories showed a significant T2* reduction compared to the other groups (Fig 2; p<0.001). Mean iron content ([Fe]) of Hemo+ on day 56 was 10-fold greater than that observed in control groups (p<0.001), while no differences were observed among the control groups (p=0.14). A strong linear relationship was observed between log(T2*) and -log([Fe]) (R2 = 0.74; p<0.001) on day 56. Conclusion - Hemorrhagic MI leads to chronic iron depositions within the infarct zones. Consequences of chronic iron deposition within the scar tissue remain to be investigated. Avinash KaliAndreas KumarIvan CokicSotirios A. Tsaftarissotirios.tsaftaris@imtlucca.itMatthias G FriedrichRohan Dharmakumar2013-03-06T10:33:04Z2016-03-18T10:52:59Zhttp://eprints.imtlucca.it/id/eprint/1516This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/15162013-03-06T10:33:04ZTracking-optimal error control schemes for H.264 compressed video for vehicle surveillanceIn this paper we present a transportation video coding and transmission system specifically tailored to automated vehicle tracking applications. By taking into account the video characteristics and the lossy nature of the wireless channels, we propose error control approaches to enhance tracking accuracy. The proposed system is shown to give performance improvement over the current state-of-the-art system and yields bitrate savings of up to 60.Zhaofu ChenEren SoyakSotirios A. Tsaftarissotirios.tsaftaris@imtlucca.itAggelos K. 
Katsaggelos2013-03-06T10:07:58Z2013-03-12T09:32:21Zhttp://eprints.imtlucca.it/id/eprint/1515This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/15152013-03-06T10:07:58ZMouse neuroimaging phenotyping in the cloudThe combined use of mice that have genetic mutations (transgenic mouse models) of human pathology and advanced neuroimaging methods (such as MRI) has the potential to radically change how we approach disease understanding, diagnosis and treatment. Morphological changes occurring in the brain of transgenic animals as a result of the interaction between environment and genotype can be assessed using advanced image analysis methods, an effort described as “mouse brain phenotyping”. However, the computational methods required for the analysis of high-resolution brain images are demanding. In this paper, we propose a computationally effective cloud-based implementation of morphometric analysis of high-resolution mouse brain datasets. We show that the proposed approach is highly scalable and suited for a variety of methods for MR-based brain phenotyping. The proposed approach is easy to deploy, and could become an alternative for laboratories that may require instant access to a large high-performance computing infrastructure.Massimo Minervinimassimo.minervini@imtlucca.itMario DamianoValter TucciAngelo BifoneAlessandro GozziSotirios A. Tsaftarissotirios.tsaftaris@imtlucca.it2013-03-05T14:57:29Z2013-03-12T09:32:21Zhttp://eprints.imtlucca.it/id/eprint/1508This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/15082013-03-05T14:57:29ZDetecting Myocardial Ischemia at Rest with Cardiac Phase-Resolved BOLD CMRBackground—Fast, noninvasive identification of ischemic territories at rest (prior to tissue-specific changes) and assessment of functional status can be valuable in the management of severe coronary artery disease.
This study investigated the utility of cardiac phase-resolved Blood-Oxygen-Level-Dependent (CP-BOLD) CMR in detecting myocardial ischemia at rest secondary to severe coronary artery stenosis.
Methods and Results—CP-BOLD, standard-cine, and T2-weighted images were acquired in canines (n=11) at baseline and within 20 minutes of ischemia induction (severe LAD stenosis) at rest. Following 3 hours of ischemia, the LAD stenosis was removed, and T2-weighted and late-gadolinium-enhancement (LGE) images were acquired. From standard-cine and CP-BOLD images, End-Systolic (ES) and End-Diastolic (ED) myocardium were segmented. Affected and remote sections of the myocardium were identified from post-reperfusion LGE images. S/D, the quotient of mean ES and ED signal intensities (on CP-BOLD and standard-cine), was computed for affected and remote segments at baseline and ischemia. Ejection fraction (EF) and segmental wall-thickening (sWT) were derived from CP-BOLD images at baseline and ischemia. On CP-BOLD images: S/D was greater than 1 (remote and affected territories) at baseline; S/D was diminished only in affected territories during ischemia, and the findings were statistically significant (ANOVA, post-hoc p<0.01). The dependence of S/D on ischemia was not observed in standard-cine images. Computer simulations confirmed the experimental findings. ROC analysis showed that S/D identifies affected regions with similar performance (AUC:0.87) as EF (AUC:0.89) and sWT (AUC:0.75).
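The two quantities this analysis rests on, the S/D index and the ROC area under the curve, are straightforward to compute; a minimal sketch with hypothetical signal values (not the study's data) follows.

```python
def sd_ratio(es_signal, ed_signal):
    """S/D index: mean end-systolic over mean end-diastolic signal intensity."""
    return (sum(es_signal) / len(es_signal)) / (sum(ed_signal) / len(ed_signal))

def roc_auc(pos_scores, neg_scores):
    """AUC as the probability that a positive outscores a negative (ties count 1/2)."""
    wins = sum((p > n) + 0.5 * (p == n) for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))
```

An S/D above 1 then flags myocardium whose ES intensity exceeds its ED intensity, and the AUC quantifies how well the index separates affected from remote segments.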
Conclusions—Preclinical studies and computer simulations showed that CP-BOLD CMR could be useful in detecting myocardial ischemia at rest. Patient studies are needed for clinical translation. Sotirios A. Tsaftarissotirios.tsaftaris@imtlucca.itXiangzhi ZhouRichard TangDebiao LiRohan Dharmakumar2013-02-25T11:11:30Z2013-05-29T12:59:46Zhttp://eprints.imtlucca.it/id/eprint/1491This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/14912013-02-25T11:11:30ZLa Spagna oltre l'ostacoloThis book analyses the last years of the Franco regime and, in particular, describes the different projects that circulated within the country among the political families collaborating with the regime. Falangists on one side and Catholics on the other put forward many political projects of reform in order to cope with the inevitable end of the Franco regime. They devised hypotheses of reform concerning the role to be played by Franco's successor, the rules to be followed in the succession process itself, and the person to be appointed. They also addressed how to keep the institutional laws unchanged, how to insert the country into the international economic environment, which countries to strengthen bilateral relations with in order to overcome the regime's isolation, and how to deal with supranational and international organizations such as the EEC and NATO. The several proposals made in the last years of the regime had a great impact even during the following period.
The book then describes which reform proposals were actually implemented, how they were reshaped and updated after Franco's death, and how they merged with the projects coming from the older anti-Francoist platforms.Maria Elena Cavallarom.cavallaro@imtlucca.it2013-02-25T08:29:00Z2013-02-25T08:29:00Zhttp://eprints.imtlucca.it/id/eprint/1489This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/14892013-02-25T08:29:00ZCrisi e rinascita del liberalismo classicoAfter a long crisis that had not only tarnished its image but, above all, distorted its foundations, liberalism was reborn, starting in the 1940s, as a political theory capable of giving a strong and innovative answer to the problems of society. The aim of this work is therefore to understand what the political philosophy of classical liberalism consists of, and in what sense its rebirth, which at times coincides with the rediscovery of forgotten roots, also constitutes an answer to the problem of the “good political order” and a search for the ways in which to pursue it. In reconstructing this path, which starts from certain critiques of liberalism as co-responsible for the ills of “modernity”, the key steps are the analysis of the crisis and transformation of liberal theory, the reflection on the nature and causes of totalitarianism, and the clarification of the controversial link between liberalism and democracy. Starting from these themes, the exponents of classical liberalism outline a model of ‘spontaneous order’ which, by refusing to identify politics with the state and with collective choices, comes to question the role of coercion in political life.Antonio Masalaa.masala@imtlucca.it2013-02-20T10:52:50Z2013-02-20T10:52:50Zhttp://eprints.imtlucca.it/id/eprint/1487This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/14872013-02-20T10:52:50ZApproximate Explicit MPC on Simplicial Partitions with Guaranteed Stability for Constrained Linear SystemsThis paper proposes an approximate explicit model predictive control design approach for regulating linear time-invariant systems subject to both state and control constraints. The proposed control law is implemented as a piecewise-affine function defined on a regular simplicial partition, and has two main positive features. First, the regularity of the simplicial partition allows a very efficient implementation of the control law on digital circuits, with computation performed in tens of nanoseconds. Second, the asymptotic stability of the closed-loop system is enforced a priori by design.Matteo RubagottiDavide BarcelliAlberto Bemporadalberto.bemporad@imtlucca.it2013-02-20T10:41:16Z2013-02-20T10:41:16Zhttp://eprints.imtlucca.it/id/eprint/1486This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/14862013-02-20T10:41:16ZSimple and Certifiable Quadratic Programming Algorithms for Embedded Linear Model Predictive ControlIn this paper we review a dual fast gradient-projection approach to solving quadratic programming (QP) problems recently proposed in [Patrinos and Bemporad, 2012] that is particularly useful for embedded model predictive control (MPC) of linear systems subject to linear constraints on inputs and states.
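On-line evaluation of a piecewise-affine control law on a simplicial partition, as in the explicit MPC abstract above, amounts to locating the simplex containing the state and interpolating the vertex inputs barycentrically. The sketch below is a minimal illustration with a linear search and our own data layout, not the paper's digital-circuit implementation.

```python
import numpy as np

def pwa_control(x, simplices, vertex_inputs):
    """Evaluate a PWA control law on a simplicial partition: find the
    simplex containing x, then interpolate the vertex inputs with
    barycentric coordinates (so the law is affine on each simplex)."""
    for verts, u_verts in zip(simplices, vertex_inputs):
        V = np.asarray(verts, dtype=float)              # (n+1) vertices in R^n
        # solve sum_i lam_i * v_i = x with sum_i lam_i = 1
        A = np.vstack([V.T, np.ones(len(verts))])
        lam = np.linalg.lstsq(A, np.append(x, 1.0), rcond=None)[0]
        if np.all(lam >= -1e-9):                        # x lies in this simplex
            return float(lam @ np.asarray(u_verts))
    raise ValueError("x outside the partitioned domain")
```

The regularity of the partition is precisely what lets hardware replace this linear search with an index computation, which is how the nanosecond-scale evaluation reported in the abstract becomes possible.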
We show that the method's computational effort is in line with that of several other existing QP solvers typically used in MPC; in addition, it is extremely easy to code, requires only basic and easily parallelizable arithmetic operations, and the number of iterations it needs to reach a given accuracy, in terms of optimality and feasibility of the primal solution, can be estimated quite tightly by solving an off-line mixed-integer linear programming problem. This research was largely motivated by ongoing research activities on embedded MPC for aerospace systems carried out in collaboration with the European Space Agency.Alberto Bemporadalberto.bemporad@imtlucca.itPanagiotis Patrinospanagiotis.patrinos@imtlucca.it2013-02-20T10:24:10Z2014-07-01T12:51:19Zhttp://eprints.imtlucca.it/id/eprint/1485This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/14852013-02-20T10:24:10ZA numerical algorithm for nonlinear L2-gain optimal control with application to vehicle yaw stability controlThis paper is concerned with an L2-gain optimal control approach for coordinating active front steering and differential braking to improve vehicle yaw stability and cornering control. The vehicle dynamics with respect to the tire slip angles are formulated, and disturbances are added to the front and rear cornering force characteristics to model, for instance, variability in road friction. The mathematical model results in an input-affine nonlinear system. A numerical algorithm based on the conjugate gradient method for solving the L2-gain optimal control problem is presented. The proposed algorithm, which has a backward-in-time structure, directly finds the feedback control and the "worst case" disturbance variables.
Simulations of the controller in closed-loop with the nonlinear vehicle model are shown and discussed.Vladimir MilicStefano Di CairanoJosip KasacAlberto Bemporadalberto.bemporad@imtlucca.itZeljko Situm2013-02-14T10:04:07Z2013-03-12T14:57:38Zhttp://eprints.imtlucca.it/id/eprint/1480This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/14802013-02-14T10:04:07ZModel Predictive Control for Linear Impulsive SystemsLinear Impulsive Control Systems have been extensively studied with respect to their equilibrium points, which in most cases are none other than the origin. However, the trajectory of the system cannot be stabilized to arbitrary desired points, which imposes a significant restriction on their utilization in various applications such as drug administration. In this paper, we study the equilibrium of Linear Impulsive Systems in light of target-sets instead of the standard equilibrium point approach. We properly extend the notion of invariant sets, which is crucial in designing asymptotically stable Model Predictive Controllers (MPC).Pantelis Sopasakispantelis.sopasakis@imtlucca.itPanagiotis Patrinospanagiotis.patrinos@imtlucca.itHaralambos SarimveisAlberto Bemporadalberto.bemporad@imtlucca.it2013-02-13T07:50:17Z2013-02-13T07:50:17Zhttp://eprints.imtlucca.it/id/eprint/1473This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/14732013-02-13T07:50:17ZThe rendezvous dynamics under linear quadratic optimal controlThis paper investigates the dynamics of networks of systems achieving rendezvous under linear quadratic optimal control. While the dynamics of rendezvous were studied extensively for the symmetric case, where all systems have exactly the same dynamics (such as simple integrators), this paper investigates the rendezvous dynamics for the general case when the dynamics of the systems may be different.
We show that the rendezvous is stable and that the post-rendezvous dynamics of the network of systems are entirely defined by the eigenvalues common to the systems, together with the associated common eigenvectors, as seen in the output image. The approach is also extended to the case of constraints on system states, inputs, and outputs.Stefano Di CairanoCarlo A. PascucciAlberto Bemporadalberto.bemporad@imtlucca.it2013-02-13T07:46:55Z2013-02-13T07:46:55Zhttp://eprints.imtlucca.it/id/eprint/1472This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/14722013-02-13T07:46:55ZStability analysis of discrete-time piecewise-affine systems over non-invariant domainsThis paper analyzes stability of discrete-time piecewise-affine systems defined on non-invariant domains. An algorithm based on linear programming is proposed, in order to prove the exponential stability of the origin and to find a positively invariant estimate of the region of attraction. The theoretical results are based on the definition of a piecewise-affine, possibly discontinuous, Lyapunov function. The proposed method presents a relatively low computational burden, and is proven to lead to feasible solutions in a broader range of cases with respect to a previously proposed approach.Matteo RubagottiLuca ZaccarianAlberto Bemporadalberto.bemporad@imtlucca.it2013-02-12T12:11:05Z2013-02-12T12:11:05Zhttp://eprints.imtlucca.it/id/eprint/1471This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/14712013-02-12T12:11:05ZPiecewise affine direct virtual sensors with Reduced ComplexityIn this paper, a piecewise-affine direct virtual sensor is proposed for the estimation of unmeasured outputs of nonlinear systems whose dynamical model is unknown. In order to overcome the lack of a model, the virtual sensor is designed directly from measured inputs and outputs. The proposed approach generalizes a previous contribution, allowing one to design lower-complexity estimators.
Indeed, the reduced-complexity approach strongly reduces the effect of the so-called "curse of dimensionality", and can be applied to relatively high-order systems, while enjoying all the convergence and optimality properties of the original approach.Matteo RubagottiTomaso PoggiAlberto Bemporadalberto.bemporad@imtlucca.itMarco Storace2013-02-12T12:03:40Z2013-02-12T12:03:40Zhttp://eprints.imtlucca.it/id/eprint/1470This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/14702013-02-12T12:03:40ZAn accelerated dual gradient-projection algorithm for linear model predictive controlThis paper proposes a dual fast gradient-projection method for solving quadratic programming problems that arise in linear model predictive control with general polyhedral constraints on inputs and states. The proposed algorithm is quite suitable for embedded control applications in that: (1) it is extremely simple and easy to code; (2) the number of iterations to reach a given accuracy in terms of optimality and feasibility of the primal solution can be estimated quite tightly; (3) the computational cost per iteration increases only linearly with the prediction horizon; and (4) the algorithm is also applicable to linear time-varying (LTV) model predictive control problems, with an extra on-line computational effort that is still linear with the prediction horizon.Panagiotis Patrinospanagiotis.patrinos@imtlucca.itAlberto Bemporadalberto.bemporad@imtlucca.it2013-01-24T09:27:55Z2013-03-12T14:57:38Zhttp://eprints.imtlucca.it/id/eprint/1463This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/14632013-01-24T09:27:55ZAn integer programming approach for optimal drug dose computationIn this paper, we study the problem of determining the optimal drug administration strategy when only a finite number of different dosages are available, a lower bound is posed on the time intervals between two consecutive doses, and drug concentrations should not 
exceed the toxic concentration levels. The presence of only binary variables leads to the adoption of an integer programming (IP) scheme for the formulation and solution of the drug dose optimal control problem. The proposed method is extended to account for the stochastic formulation of the optimal control problem, so that it can be used in practical applications where large populations of patients are to be treated. A Finite Impulse Response (FIR) model derived from experimental pharmacokinetic data is employed to correlate the administered drug dose with the concentration–time profiles of the drug in the compartments (organs) of the body.Pantelis Sopasakispantelis.sopasakis@imtlucca.itHaralambos Sarimveis2013-01-17T10:06:03Z2014-01-29T13:58:23Zhttp://eprints.imtlucca.it/id/eprint/1460This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/14602013-01-17T10:06:03ZTwo-time-scale MPC for economically optimal real-time operation of balance responsible partiesEuropean electrical networks are evolving towards a distributed system where the number of power plants is growing and also the green plants based on renewable energy sources (RES) like wind and solar are increasing. Integration of RES leads to energy imbalance, due to the difficulty to predict their production. This paper proposes a two-time-scale Hierarchical Model Predictive Control (HMPC) strategy for real-time optimal control of Balance Responsible Parties (BRPs) in power systems with high penetration of renewable energy sources (RES). 
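The binary dosing problem described in the integer-programming abstract above can be brute-forced for toy horizons, which makes the structure of the formulation easy to see. The sketch below is ours: the FIR coefficients, objective, and constraint values are hypothetical, and a real instance would use an IP solver rather than enumeration.

```python
from itertools import product

def best_dose_schedule(h, horizon, dose, c_max, min_gap, target):
    """Enumerate binary dose schedules u[t] in {0,1}: the FIR-predicted
    concentration c[t] = sum_k h[k]*dose*u[t-k] must stay below c_max,
    doses must be at least min_gap steps apart, and total exposure
    should be closest to a (hypothetical) target."""
    best, best_u = float("inf"), None
    for u in product((0, 1), repeat=horizon):
        times = [t for t, ut in enumerate(u) if ut]
        # minimum-interval constraint between consecutive doses
        if any(t2 - t1 < min_gap for t1, t2 in zip(times, times[1:])):
            continue
        c = [sum(h[k] * dose * u[t - k] for k in range(min(t + 1, len(h))))
             for t in range(horizon)]
        if max(c, default=0.0) > c_max:                 # toxicity bound
            continue
        score = abs(sum(c) - target)                    # illustrative objective
        if score < best:
            best, best_u = score, u
    return best_u
```

With binary variables the feasible set is finite, which is exactly what the abstract exploits by casting the problem as an integer program; the stochastic extension replaces the FIR prediction with expectations over a patient population.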
The proposed control strategy is able to handle ramp-rate constraints efficiently and results in reduced generation and imbalance costs due to real-time economic optimization of power setpoints.Panagiotis Patrinospanagiotis.patrinos@imtlucca.itDaniele Bernardinidaniele.bernardini@imtlucca.itAlessandro MaffeiAndrej JokicAlberto Bemporadalberto.bemporad@imtlucca.it2012-12-19T11:09:10Z2013-03-12T14:57:00Zhttp://eprints.imtlucca.it/id/eprint/1456This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/14562012-12-19T11:09:10ZYou should read this! let me explain you why: explaining news recommendations to usersRecommender systems have become ubiquitous in content-based web applications, from news to shopping sites. Nonetheless, an aspect that has been largely overlooked so far in the recommender system literature is that of automatically building explanations for a particular recommendation. This paper focuses on the news domain, and proposes to enhance the effectiveness of news recommender systems by adding, to each recommendation, an explanatory statement that helps the user to better understand if, and why, the item can be of interest to her. We consider the news recommender system as a black box, and generate different types of explanations employing pieces of information associated with the news. In particular, we engineer text-based, entity-based, and usage-based explanations, and make use of Markov Logic Networks to rank the explanations on the basis of their effectiveness. The assessment of the model is conducted via a user study on a dataset of news read consecutively by actual users. Experiments show that news recommender systems can greatly benefit from our explanation module.Roi BlancoDiego Ceccarellidiego.ceccarelli@imtlucca.itClaudio LuccheseRaffaele PeregoFabrizio Silvestri2012-12-19T10:57:28Z2013-03-12T14:57:00Zhttp://eprints.imtlucca.it/id/eprint/1454This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/14542012-12-19T10:57:28ZIntroducing RDF Graph Summary with Application to Assisted SPARQL FormulationOne of the reasons for the slow adoption of SPARQL is the complexity of query formulation due to data diversity. The principal barrier a user faces when trying to formulate a query is that he generally has no information about the underlying structure and vocabulary of the data. In this paper, we address this problem at the maximum scale we can think of: providing assistance in formulating SPARQL queries over the entire Sindice data collection - 15 billion triples and counting, coming from more than 300K datasets. We present a method to help users in formulating complex SPARQL queries across multiple heterogeneous data sources. Even if the structure and vocabulary of the data sources are unknown to the user, the user is able to quickly and easily formulate his queries. Our method is based on a summary of the data graph and assists the user during an interactive query formulation by recommending possible structural query elements.Stephane CampinasThomas E. PerryDiego Ceccarellidiego.ceccarelli@imtlucca.itRenaud DelbruGiovanni Tummarello2012-12-14T08:37:47Z2014-01-24T14:13:11Zhttp://eprints.imtlucca.it/id/eprint/1446This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/14462012-12-14T08:37:47ZMetamodel variability in robust simulation-optimization: a bootstrap analysisMetamodels are often used in simulation-optimization for the design and management of complex systems, enabling the integration of discipline-dependent analysis into the
overall decision process. These metamodels yield insight into the relationship between responses and decision variables, providing fast analysis tools in place of the more expensive computer simulations. The combined use of stochastic simulation experiments and metamodels introduces a source of uncertainty in the decision process that we refer to as metamodel variability. To quantify this variability, we combine validation and bootstrapping techniques. The rationale behind the method relies on the fact that, after the validation process, the relative validation errors are small, indicating that the metamodels give an adequate approximation, and bootstrapping these errors makes it possible to quantify the metamodels' variability in an acceptable way. The method has the advantage of being general and can be used with different kinds of metamodels and validation techniques. The resulting methodology is illustrated through some examples using regression and Kriging metamodels.Gabriella Dellinogabriella.dellino@imtlucca.itCarlo Meloni2012-12-14T08:37:25Z2012-12-14T08:37:25Zhttp://eprints.imtlucca.it/id/eprint/1447This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/14472012-12-14T08:37:25ZOperations management e sanità: un sistema di supporto alle decisioni per la programmazione della chirurgia elettivaThe operating theater is the most critical resource of the surgical process. From a lean perspective, the planning of surgical activities must maintain an alignment between demand and supply. Therefore, it is necessary to find the right trade-off between the need to keep organizational complexity low and the need to give appropriate precedence to surgeries in view of their priority class and waiting times. In this paper we introduce decisional models to determine the Master Surgical Schedule (MSS) and the detailed surgical case assignment.
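The bootstrap step described in the metamodel-variability abstract above can be sketched very compactly: resample the validation errors around a metamodel prediction to obtain a spread for that prediction. The function name, the 95% level, and the interface are illustrative assumptions, not taken from the paper.

```python
import random

def bootstrap_variability(cv_errors, prediction, n_boot=2000, seed=0):
    """Bootstrap cross-validation residuals of a metamodel to get an
    approximate interval around one of its predictions (a simple
    percentile-style quantification of metamodel variability)."""
    rng = random.Random(seed)
    resampled = [prediction + rng.choice(cv_errors) for _ in range(n_boot)]
    resampled.sort()
    lo = resampled[int(0.025 * n_boot)]
    hi = resampled[int(0.975 * n_boot)]
    return lo, hi
```

The appeal of the approach, as the abstract notes, is that it is agnostic to the metamodel type: the same resampling applies whether the residuals come from a regression or a Kriging metamodel.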
Experimental results carried out on the operating theater of the San Giuseppe hospital in Empoli (Italy) show that introducing even small amounts of flexibility in the MSS makes it possible to improve operating room usage and quality of service.Alessandro AgnetisAlberto CoppiMatteo CorsiniGabriella Dellinogabriella.dellino@imtlucca.itCarlo MeloniMarco Pranzo2012-12-14T08:26:02Z2014-01-29T13:54:33Zhttp://eprints.imtlucca.it/id/eprint/1444This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/14442012-12-14T08:26:02ZMetamodel variability analysis combining bootstrapping and validation techniquesResearch on metamodel-based optimization has received considerably increasing interest in recent years, and has found successful applications in solving computationally expensive problems. The joint use of computer simulation experiments and metamodels introduces a source of uncertainty that we refer to as metamodel variability. To analyze and quantify this variability, we apply bootstrapping to residuals derived as prediction errors computed from cross-validation. The proposed method can be used with different types of metamodels, especially when limited knowledge of the parameters' distribution is available or when only a limited computational budget is allowed. Our preliminary experiments based on the robust version of
the EOQ model show encouraging results.Gabriella Dellinogabriella.dellino@imtlucca.itCarlo Meloni2012-11-30T08:27:56Z2012-11-30T08:27:56Zhttp://eprints.imtlucca.it/id/eprint/1439This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/14392012-11-30T08:27:56ZSimulation-Optimization in Modeling Ionic Polymer-Metal Composites ActuatorsThe increasing pressure on the development time of new materials and devices has changed the modelling and design process over the years. In the past, they mainly consisted of experimentation and physical prototyping. Clearly, it is hard to incorporate changes in finished prototypes, while producing a variety of different prototypes at once may be very expensive. To this aim, computer simulation models such as circuit design models and continuous system simulation models are widely used in engineering modelling, design and analysis. The studies towards a better understanding of complex systems require quantitative model development, making optimisation and experimental data fitting tools indispensable. In this framework, the modelling of ionic polymer-metal composites (IPMCs) is studied. In particular, this paper deals with simulation-optimisation issues arising in the model calibration of a particular IPMC-based actuator in air. We consider a non-linear dynamical model of the device, with lumped parameters, able to estimate the IPMC actuator absorbed current, together with the mechanical quantities of interest, which, in the case under study, are the free deflection and/or the blocked force. Two optimisation problems have been formulated, focusing on different stages of the model parameters identification. 
The strategies adopted to solve the problems allow us to achieve some promising, although preliminary, results.Gabriella Dellinogabriella.dellino@imtlucca.itPaolo LinoCarlo MeloniAlessandro RizzoClaudia BonomoLuigi FortunaPietro GiannoneSalvatore Graziani2012-11-30T08:25:08Z2012-11-30T08:25:08Zhttp://eprints.imtlucca.it/id/eprint/1438This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/14382012-11-30T08:25:08ZDynamic Objectives Aggregation Methods for Evolutionary Portfolio Optimization. A computational studyThis paper proposes a study of different dynamic objectives aggregation methods (DOAMs) in the context of a multi-objective evolutionary approach to portfolio optimisation. Since the incorporation of chaotic rules or behaviour in population-based optimisation algorithms has been shown to possibly enhance their searching ability, this study also considers and evaluates some chaotic rules in the dynamic weights generation process. The ability of the DOAMs to solve the portfolio rebalancing problem is investigated by conducting a computational study on a set of instances based on real data. The portfolio model considers a set of realistic constraints and entails the simultaneous optimisation of the portfolio risk, the expected return and the transaction cost.Gabriella Dellinogabriella.dellino@imtlucca.itMariagrazia FedeleCarlo Meloni2012-11-30T08:16:21Z2012-11-30T08:16:21Zhttp://eprints.imtlucca.it/id/eprint/1442This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/14422012-11-30T08:16:21ZAn efficient decomposition approach for surgical planningThis talk presents an efficient decomposition approach to surgical planning.
Given a set of surgical waiting lists (one for each discipline) and an operating theater, the problem is to decide the room-to-discipline assignment for the next planning period (Master Surgical Schedule), and the surgical cases to be performed (Surgical Case Assignment), with the objective of optimizing a score related to priority and current waiting time of the cases. While in general MSS and SCA may be concurrently found by solving a complex integer programming problem, we propose an effective decomposition algorithm which does not require expensive or sophisticated computational resources, and is therefore suitable for implementation in any real-life setting.
Our decomposition approach consists in first producing a number of subsets of surgical cases for each discipline (potential OR sessions), and then selecting a subset of them. The surgical cases in the selected potential sessions are then discarded, and only the structure of the MSS is retained. A detailed surgical case assignment is then devised by filling the resulting MSS with cases from the waiting lists, via an exact optimization model.
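Although the abstract refers to exact optimization models, the two-stage idea just described can be illustrated with a small self-contained sketch; all data, names, and the greedy refill step below are hypothetical stand-ins for illustration, not the authors' method:

```python
from itertools import combinations

# Toy instance (illustrative only): a "case" is (id, duration, score),
# where the score blends priority and current waiting time.
waiting_lists = {
    "ortho":  [("o1", 3, 9), ("o2", 2, 7), ("o3", 2, 4)],
    "cardio": [("c1", 4, 8), ("c2", 3, 6), ("c3", 1, 3)],
}
SESSION_LEN = 5   # hours available in one OR session
N_SESSIONS = 2    # OR sessions available in the planning period

def candidate_sessions(cases, capacity):
    """Stage 1a: enumerate subsets of cases that fit one OR session."""
    return [s for r in range(1, len(cases) + 1)
            for s in combinations(cases, r)
            if sum(c[1] for c in s) <= capacity]

# Stage 1b: score each discipline by its best candidate session and keep
# only the room-to-discipline structure (the MSS); cases are discarded.
best_score = {}
for disc, cases in waiting_lists.items():
    best_score[disc] = max(sum(c[2] for c in s)
                           for s in candidate_sessions(cases, SESSION_LEN))
mss = sorted(best_score, key=best_score.get, reverse=True)[:N_SESSIONS]

# Stage 2: refill the retained MSS from the waiting lists (a greedy
# stand-in for the exact surgical case assignment model).
plan = {}
for disc in mss:
    chosen, load = [], 0
    for cid, dur, score in sorted(waiting_lists[disc], key=lambda c: -c[2]):
        if load + dur <= SESSION_LEN:
            chosen.append(cid)
            load += dur
    plan[disc] = chosen

print(plan)
```

The point of the decomposition is visible even at this scale: candidate sessions are enumerated per discipline, only the MSS structure survives stage 1, and the detailed assignment is recomputed in stage 2.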
The quality of the plan obtained is assessed by comparing it with the plan obtained by solving the exact integrated formulation for MSS and SCA. Nine different scenarios are considered, for various operating theater sizes and management policies. The results on instances concerning a medium-size hospital show that the decomposition method produces solutions comparable with those of the exact method in much less computation time.Alessandro AgnetisAlberto CoppiMatteo CorsiniGabriella Dellinogabriella.dellino@imtlucca.itCarlo MeloniMarco Pranzo2012-11-29T16:39:01Z2012-11-29T16:39:01Zhttp://eprints.imtlucca.it/id/eprint/1436This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/14362012-11-29T16:39:01ZRobust optimization in simulation: Taguchi and Krige combinedOptimization of simulated systems is the goal of many methods, but most methods assume known environments. We, however, develop a "robust" methodology that accounts for uncertain environments. Our methodology uses Taguchi's view of the uncertain world but replaces his statistical techniques by design and analysis of simulation experiments based on Kriging (Gaussian process model); moreover, we use bootstrapping to quantify the variability in the estimated Kriging metamodels. In addition, we combine Kriging with nonlinear programming, and we estimate the Pareto frontier. We illustrate the resulting methodology through economic order quantity (EOQ) inventory models. Our results suggest that robust optimization requires order quantities that differ from the classic EOQ. We also compare our results with results we previously obtained using response surface methodology instead of Kriging. Gabriella Dellinogabriella.dellino@imtlucca.itJack P.C.
KleijnenCarlo Meloni2012-11-28T13:24:24Z2012-11-29T13:17:37Zhttp://eprints.imtlucca.it/id/eprint/1434This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/14342012-11-28T13:24:24ZAsymptotic Normality of a Hurst Parameter Estimator Based on the Modified Allan VarianceIn order to estimate the memory parameter of Internet traffic data, a log-regression estimator based on the so-called modified Allan variance (MAVAR) has recently been proposed. Simulations have shown that this estimator achieves higher accuracy and better confidence when compared with other methods. In this paper we present a rigorous study of the MAVAR log-regression estimator. In particular, under the assumption that the signal process is a fractional Brownian motion, we prove that it is consistent and asymptotically normally distributed. Finally, we discuss its connection with wavelet estimators.Alessandra BianchiMassimo CampaninoIrene Crimaldiirene.crimaldi@imtlucca.it2012-11-27T08:30:44Z2012-11-27T08:30:44Zhttp://eprints.imtlucca.it/id/eprint/1430This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/14302012-11-27T08:30:44ZAn Economic Analysis of Judicial CareersThe aim of this paper is to analyze from an economic perspective the effects of the judicial careers arrangement on the outcome of trials. The institutional organization of judicial careers follows two distinct ideal systems. One is characterized by the fact that public prosecutor and judge belong to the same professional body, as magistrates, while the other one is characterized by the separation of the judiciary from prosecutors. We model this feature of the judicial system as a continuum variable and explain why this choice can be appropriate. We find that a more unified system of judicial careers leads to fewer distortions in the process preceding the trial, while it introduces more distortions during the trial.
We find the optimal degree of separation of judicial careers and provide some comparative statics results.Paolo PolidoriDésirée TeobaldelliDavide Ticchidavide.ticchi@imtlucca.it2012-11-13T09:44:21Z2014-06-30T08:55:41Zhttp://eprints.imtlucca.it/id/eprint/1427This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/14272012-11-13T09:44:21ZUncertainty and the politics of employment protectionThis paper investigates the social preferences over labor market flexibility, in a general equilibrium model of dynamic labor demand. We demonstrate that how the economy responds
to productivity shocks depends on the power of labor to extract rents and on the status quo level of the firing cost. In particular, we show that when the firing cost is initially relatively low, a transition to a rigid labor market is favored by all the employed workers with idiosyncratic productivity below some threshold value. Conversely, when the status quo level of the firing cost is relatively high, the preservation of a rigid labor market is favored by the employed with intermediate productivity, whereas all other workers favor more flexibility. A more volatile environment, and a lower rate of productivity growth, i.e., "bad times," increase the political support for more labor market rigidity only where labor appropriates relatively large rents. The coming of better economic conditions does not necessarily favor the demise of high firing costs in rigid high-rents economies, because "good times" cut down the support for flexibility among the least productive employed workers. The model described provides some new insights on the comparative dynamics of labor market institutions in the U.S. and in Europe over the last few decades, shedding new light both on the reasons for the original build-up of "Eurosclerosis" and on the reasons for its relative persistence until the present day.Andrea Vindigniandrea.vindigni@imtlucca.itSimone ScottiCristina Tealdicristina.tealdi@imtlucca.it2012-10-24T09:30:20Z2012-11-13T14:35:48Zhttp://eprints.imtlucca.it/id/eprint/1425This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/14252012-10-24T09:30:20ZLong term evaluation of operating theater planning policiesThis paper addresses Operating Room (OR) planning policies in elective surgery.
In particular, we investigate long-term policies for determining the Master Surgical Schedule (MSS) throughout the year, analyzing the tradeoff between organizational simplicity, favored by an MSS that does not change completely every week, and quality of the service offered to the patients, favored by an MSS that dynamically adapts to the current state of waiting lists, the latter objective being related to a lean approach to hospital management. Surgical cases are selected from the waiting lists according to several parameters, including surgery duration, waiting time and priority class of the operations. We apply the proposed models to the operating theater of a public, medium-size hospital in Empoli, Italy, using Integer Linear Programming formulations, and analyze the scalability of the approach on larger hospitals. The simulations point out that introducing a very limited degree of variability in the MSS in terms of OR session assignment can largely pay off in terms of resource efficiency and due date performance.Alessandro AgnetisAlberto CoppiMatteo CorsiniGabriella Dellinogabriella.dellino@imtlucca.itCarlo MeloniMarco Pranzo2012-10-22T13:00:17Z2014-04-30T14:46:45Zhttp://eprints.imtlucca.it/id/eprint/1420This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/14202012-10-22T13:00:17ZTechnology and the Era of the Mass ArmyWe investigate how technology has influenced the size of armies. During the nineteenth century the development of the railroad made it possible to field and support mass armies, significantly increasing the observed size of military forces. During the late twentieth century
further advances in technology made it possible to deliver explosive force from a distance and with precision, making mass armies less desirable. We find strong support for our
technological account using a new data set covering thirteen great powers between 1600 and 2000. Contrary to what is so often suggested, we find little evidence that the French Revolution was a watershed in terms of levels of mobilization.Massimiliano Gaetano OnoratoKenneth ScheveDavid Stasavage2012-10-22T07:27:04Z2016-04-07T09:28:16Zhttp://eprints.imtlucca.it/id/eprint/1422This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/14222012-10-22T07:27:04ZA Network Analysis of Countries’ Export Flows: Firm Grounds for the Building Blocks of the EconomyIn this paper we analyze the bipartite network of countries and products from UN data on country production. We define the country-country and product-product projected networks and introduce a novel method of filtering information based on elements’ similarity. As a result we find that country clustering reveals unexpected socio-geographic links among the most competing countries. On the same footing, the product clustering can be efficiently used for a bottom-up classification of produced goods. Furthermore, we mathematically reformulate the “reflections method” introduced by Hidalgo and Hausmann as a fixpoint problem; this formulation highlights some conceptual weaknesses of the approach. To overcome this issue, we introduce an alternative methodology (based on biased Markov chains) that makes it possible to rank countries in a conceptually consistent way.
Our analysis uncovers a strong non-linear interaction between the diversification of a country and the ubiquity of its products, thus suggesting the possible need to move towards more efficient and direct non-linear fixpoint algorithms to rank countries and products in the global market.Guido Caldarelliguido.caldarelli@imtlucca.itMatthieu CristelliAndrea GabrielliLuciano PietroneroAntonio ScalaAndrea Tacchella2012-10-17T14:40:01Z2012-10-17T15:30:59Zhttp://eprints.imtlucca.it/id/eprint/1409This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/14092012-10-17T14:40:01Z[review of] Ute Frevert et al., Gefühlswissen. Eine lexikalische Spurensuche in der Moderne Fiammetta Balestraccifiammetta.balestracci@imtlucca.it2012-10-16T12:55:08Z2016-04-07T09:23:37Zhttp://eprints.imtlucca.it/id/eprint/1403This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/14032012-10-16T12:55:08ZA New Metrics for Countries' Fitness and Products' ComplexityClassical economic theories prescribe specialization of countries' industrial production. Inspection of the country databases of exported products shows that this is not the case: successful countries are extremely diversified, in analogy with biosystems evolving in a competitive dynamical environment. The challenge is assessing quantitatively the non-monetary competitive advantage of diversification, which represents the hidden potential for development and growth. Here we develop a new statistical approach based on coupled non-linear maps, whose fixed point defines a new metrics for the country Fitness and product Complexity. We show that a non-linear iteration is necessary to bound the complexity of products by the fitness of the less competitive countries exporting them. We show that, given the paradigm of economic complexity, the correct and simplest approach to measure the competitiveness of countries is the one presented in this work.
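The coupled non-linear maps described here are, in essence, an iteration between country fitness and product complexity. A minimal sketch, assuming a binary country-product export matrix and normalization to unit mean at every step (the example matrix is illustrative, not the paper's data), could look like this:

```python
import numpy as np

# Toy binary export matrix M (rows: countries, cols: products).
M = np.array([
    [1, 1, 1, 1],   # a highly diversified country
    [1, 1, 0, 0],
    [0, 1, 0, 0],   # a country exporting one ubiquitous product
], dtype=float)

def fitness_complexity(M, n_iter=50):
    """Iterate the coupled maps: fitness F of countries rewards
    diversification; complexity Q of products is penalized when the
    product is exported by low-fitness countries."""
    F = np.ones(M.shape[0])
    Q = np.ones(M.shape[1])
    for _ in range(n_iter):
        F_new = M @ Q                    # sum of complexities exported
        Q_new = 1.0 / (M.T @ (1.0 / F))  # bounded by least fit exporters
        F = F_new / F_new.mean()         # normalize to unit mean
        Q = Q_new / Q_new.mean()
    return F, Q

F, Q = fitness_complexity(M)
print(np.argsort(-F))  # countries ranked by decreasing fitness
```

The harmonic-mean-like update for Q is what makes the iteration non-linear: a single high-fitness exporter cannot inflate a product's complexity if low-fitness countries also export it.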
Furthermore, our metric appears to be economically well-grounded.Andrea TacchellaMatthieu CristelliGuido Caldarelliguido.caldarelli@imtlucca.itAndrea GabrielliLuciano Pietronero2012-10-15T08:09:17Z2012-10-15T08:09:17Zhttp://eprints.imtlucca.it/id/eprint/1401This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/14012012-10-15T08:09:17ZMOBY-DIC: A MATLAB Toolbox for Circuit-Oriented Design of Explicit MPCThis paper describes a MATLAB Toolbox for the integrated design of Model Predictive Control (MPC) state-feedback control laws and the digital circuits implementing
them. Explicit MPC laws can be designed using optimal and sub-optimal formulations, directly taking into account the specifications of the digital circuit implementing the control law (such as latency and size), together with the usual control specifications (stability, performance,
constraint satisfaction). Tools for a posteriori stability analysis of the closed-loop system, and for the simulation of the circuit in Simulink, are also included in the toolbox.Alberto OliveriDavide BarcelliAlberto Bemporadalberto.bemporad@imtlucca.itBart GenuitW.P.M.H. HeemelsTomaso PoggiMatteo RubagottiMarco Storace2012-10-04T08:28:37Z2012-10-04T08:28:37Zhttp://eprints.imtlucca.it/id/eprint/1387This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/13872012-10-04T08:28:37ZModel predictive control applications for planetary roversModel Predictive Control (MPC) is a well-known method for control of processes with low or moderate dynamics as found in power or chemical plants. Within space applications the typical domain of MPC is spacecraft attitude and orbit control. MPC for control of planetary rovers is a quite new technology and was recently investigated in the framework of the RobMPC project, under ESA contract. In this context the Robust-MPC approach was applied to three layers of the rover
control hierarchy dealing with medium to high dynamics control tasks: 1) guidance, 2) trajectory control and 3) wheel traction and steering control. The selected reference rover is ESA’s four-wheel EGP rover with rear axle steering and a mass of approximately 800 kg. The MPC control design flow is based on the MPCSofT Toolbox for MATLAB, a novel toolbox developed within the RobMPC project. The MPCSofT toolbox provides an environment for design and simulation of
MPC controllers, based on a quite general class of linear
time-varying models, constraints, and quadratic costs, possibly equipped with integral action to increase robustness. As MPC prediction models are easily specified by the user in Embedded MATLAB code, C code can be automatically generated within the MATLAB/Simulink environment for immediate rapid prototyping. The highest control level is shared between the nominal path planner (computed offline) and the MPC guidance function. When the rover slips outside the safety corridor around the nominal path, the guidance function continuously builds obstacle-free optimal contingency paths to bring the vehicle back to the nominal path, without the need to stop the rover to compute a new nominal path. The LTV model included in the MPC optimization engine is used to reconstruct the guidance path from the computed optimal sequence of actions. The MPC trajectory control acts on the velocity vector of the vehicle in order to keep the vehicle within the nominal (guidance) path. This level takes into account the non-holonomic characteristics of the rover and
implements a kinematic LTV model of the vehicle. The lowest MPC level is dedicated to traction and steering control. This layer controls the steering angle and wheel velocity coordination and typically replaces the Ackermann control. Here, the MPC solution is based on a multi-body system model of the rover including the wheel-soil interaction dynamics. It is implemented as a stepwise LTI class problem with corresponding online linearization of the model. The paper will introduce the architecture of the entire control hierarchy together with selected details of the MPC-specific implementation. The performance and
robustness analyses are presented based on results of comprehensive Monte Carlo simulations. A profiling of the code will give an outlook regarding the readiness state in terms of controller implementation on space-qualified
computer hardware.Giovanni BinetRainer KrennAlberto Bemporadalberto.bemporad@imtlucca.it2012-09-24T08:45:24Z2012-09-24T08:45:24Zhttp://eprints.imtlucca.it/id/eprint/1370This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/13702012-09-24T08:45:24ZThe Emersion Effect: an analysis on labor tax evasion in ItalyWe analyze how different policy interventions may incentivize emersion from undeclared work. We use Italian data over the period 1998-2003 to investigate whether the 2003 Italian labor market reform was able to achieve its objective of reducing the share of the shadow economy. We develop a search and matching model, à la Mortensen, on the basis of our empirical investigation to determine the right mix of policy interventions which may be effective in generating an emersion effect. Our preliminary findings show that differentiated forms of taxation and enforcement might create a good combination of incentives to achieve a significant reduction in undeclared work.Edoardo Di PortoLeandro EliaCristina Tealdicristina.tealdi@imtlucca.it2012-09-24T08:26:07Z2012-09-24T08:26:07Zhttp://eprints.imtlucca.it/id/eprint/1369This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/13692012-09-24T08:26:07ZUsing Networks To Understand Medical Data: The Case of Class III MalocclusionsA system of elements that interact or regulate each other can be represented by a mathematical object called a network. While network analysis has been successfully applied to high-throughput biological systems, less has been done regarding its application in more applied fields of medicine; here we show an application based on standard medical diagnostic data. We apply network analysis to Class III malocclusion, one of the orofacial anomalies most difficult to understand and treat.
We hypothesize that different interactions of the skeletal components can contribute to pathological disequilibrium; in order to test this hypothesis, we apply network analysis to 532 Class III young female patients. The topology of the Class III malocclusion obtained by network analysis shows a strong co-occurrence of abnormal skeletal features. The pattern of these occurrences influences the vertical and horizontal balance of disharmony in skeletal form and position. Patients with more unbalanced orthodontic phenotypes show a preponderance of the pathological skeletal nodes and minor relevance of adaptive dentoalveolar equilibrating nodes. Furthermore, by applying Power Graphs analysis we identify some functional modules among orthodontic nodes. These modules correspond to groups of tightly inter-related features and presumably constitute the key regulators of plasticity and the sites of unbalance of the growing dentofacial Class III system. The data of the present study show that, at their most basic abstraction level, the orofacial characteristics can be represented as graphs using nodes to represent orthodontic characteristics, and edges to represent their various types of interactions. The applications of this mathematical model could improve the interpretation of the quantitative, patient-specific information, and help to better target therapy. Last but not least, the methodology we have applied in analyzing orthodontic features can be applied easily to other fields of medical science.Antonio ScalaPietro AuconiMarco ScazzocchioGuido Caldarelliguido.caldarelli@imtlucca.itJames A.
McNamaraLorenzo Franchi2012-09-19T09:58:53Z2016-04-07T09:01:21Zhttp://eprints.imtlucca.it/id/eprint/1367This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/13672012-09-19T09:58:53ZThe Longevity of RankingsGuido Caldarelliguido.caldarelli@imtlucca.it2012-09-19T09:32:42Z2012-09-19T09:32:42Zhttp://eprints.imtlucca.it/id/eprint/1366This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/13662012-09-19T09:32:42ZProgress in the physics of complex networksGuido Caldarelliguido.caldarelli@imtlucca.itGiorgio KaniadakisAntonio M. Scarfone2012-09-19T09:16:35Z2012-09-19T09:16:35Zhttp://eprints.imtlucca.it/id/eprint/1365This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/13652012-09-19T09:16:35ZCompetitors’ communities and taxonomy of products according to export fluxesIn this paper we use Complex Network Theory to quantitatively characterize and synthetically describe the complexity of trade between nations. In particular, we focus our attention on export fluxes. Starting from the bipartite countries-products network defined by export fluxes, we define two complementary graphs projecting the original network on countries and products respectively. We define, in both cases, a distance matrix amongst countries and products. Specifically, two countries are similar if they export similar products. This relationship can be quantified by building the Minimum Spanning Tree and the Minimum Spanning Forest from the distance matrices for products and countries. Through this simple and scalable method we are also able to carry out a community analysis. It has not gone unnoticed that in this way we can produce an effective categorization of products, providing several advantages with respect to traditional COMTRADE classifications.
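The construction described in this abstract, a similarity-derived distance matrix followed by a Minimum Spanning Tree, can be illustrated on toy data. The Jaccard-style similarity and the tiny export matrix below are assumptions for illustration, not necessarily the measures used in the paper:

```python
import numpy as np

# Hypothetical binary export matrix (rows: countries, cols: products).
M = np.array([
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [0, 0, 1, 1],
])

# Two countries are close if they export similar baskets: here, distance
# d = 1 - Jaccard similarity of their product sets (illustrative choice).
n = M.shape[0]
D = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        inter = np.sum((M[i] == 1) & (M[j] == 1))
        union = np.sum((M[i] == 1) | (M[j] == 1))
        D[i, j] = 1.0 - inter / union

# Prim's algorithm: grow the Minimum Spanning Tree over the distances.
in_tree = {0}
edges = []
while len(in_tree) < n:
    i, j = min(((i, j) for i in in_tree for j in range(n) if j not in in_tree),
               key=lambda e: D[e[0], e[1]])
    edges.append((i, j))
    in_tree.add(j)

print(edges)  # tree edges linking countries with similar export baskets
```

Cutting the longest edges of such a tree yields the Minimum Spanning Forest, whose connected components give the communities mentioned in the abstract.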
Finally, the forests of countries allow for the detection of competitors’ communities and for the analysis of the evolution of these communities.Matthieu CristelliAndrea TacchellaAndrea GabrielliLuciano PietroneroAntonio ScalaGuido Caldarelliguido.caldarelli@imtlucca.it2012-09-14T15:24:30Z2016-07-13T10:52:33Zhttp://eprints.imtlucca.it/id/eprint/1350This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/13502012-09-14T15:24:30ZState space c-reductions for concurrent systems in rewriting logicWe present c-reductions, a simple, flexible and very general state space reduction technique that exploits an equivalence relation on states that is a bisimulation. Reduction is achieved by a canonizer function, which maps each state into a not necessarily unique canonical representative of its equivalence class. The approach contains symmetry reduction, name reuse and name abstraction as special cases, and exploits the expressiveness of rewriting logic and its realization in Maude to automate c-reductions and to seamlessly integrate model checking and the discharging of correctness proof obligations. The performance of the approach has been validated over a set of representative case studies.Alberto Lluch-Lafuentealberto.lluch@imtlucca.itJosé MeseguerAndrea Vandinandrea.vandin@imtlucca.it2012-09-14T14:18:02Z2013-03-07T12:56:25Zhttp://eprints.imtlucca.it/id/eprint/1348This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/13482012-09-14T14:18:02ZGeneration of Test Data Structures Using Constraint Logic ProgrammingThe goal of Bounded-Exhaustive Testing (BET) is the automatic generation of all the test cases satisfying a given invariant, within a given bound. When the input has a complex structure, the development of correct and efficient generators becomes a very challenging task. In this paper we use Constraint Logic Programming (CLP) to systematically develop generators of structurally complex test data.
Similarly to filtering-based test generation, we follow a declarative approach which allows us to separate the issue of (i) defining the test structure and invariant, from that of (ii) generating admissible test input instances. This separation helps improve the correctness of the developed test case generators. However, in contrast with filtering approaches, we rely on a symbolic representation and we take advantage of efficient search strategies provided by CLP systems for generating test instances. Through some experiments on examples taken from the literature on BET, we show that CLP, by combining the use of constraints and recursion, allows one to write intuitive and easily understandable test generators. We also show that these generators can be much more efficient than those built using ad-hoc filtering-based test generation tools like Korat.Valerio Sennivalerio.senni@imtlucca.itFabio Fioravanti2012-09-14T14:10:59Z2013-03-07T12:56:25Zhttp://eprints.imtlucca.it/id/eprint/1347This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/13472012-09-14T14:10:59ZConstraint-based correctness proofs for logic program transformationsMany approaches proposed in the literature for proving the correctness of unfold/fold transformations of logic programs make use of measures associated with program clauses. When from a program P1 we derive a program P2 by applying a sequence of transformations, suitable conditions on the measures of the clauses in P2 guarantee that the transformation of P1 into P2 is correct, that is, P1 and P2 have the same least Herbrand model. In the approaches proposed so far, clause measures are fixed in advance, independently of the transformations to be proved correct. In this paper we propose a method for the automatic generation of clause measures which, instead, takes into account the particular program transformation at hand.
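The contrast drawn here with generate-and-filter approaches can be illustrated outside CLP as well. The sketch below is a Python stand-in (not the paper's CLP generators) that mimics constraint propagation: the binary-search-tree invariant is carried as a key interval during construction, so only admissible structures are ever built:

```python
# Bounded-exhaustive generation of binary search trees. Instead of
# enumerating all trees and filtering out non-BSTs, the (lo, hi) bounds
# propagate the ordering invariant into every recursive call, in the
# spirit of constraint-based (CLP-style) generation.

def bsts(size, lo, hi):
    """Yield every binary search tree with `size` nodes whose keys lie
    in the integer interval [lo, hi]; a tree is (left, key, right)."""
    if size == 0:
        yield None
        return
    for root in range(lo, hi + 1):
        for left_size in range(size):
            for left in bsts(left_size, lo, root - 1):
                for right in bsts(size - 1 - left_size, root + 1, hi):
                    yield (left, root, right)

trees = list(bsts(3, 1, 3))
print(len(trees))  # → 5, the Catalan number C_3
```

Every tree yielded satisfies the invariant by construction, so no candidate is ever generated and then discarded, which is exactly the efficiency argument made against filtering-based tools.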
During the application of a sequence of transformations we construct a system of linear equalities and inequalities over nonnegative integers whose unknowns are the clause measures to be found, and the correctness of the transformation is guaranteed by the satisfiability of that system. Through some examples we show that our method is more powerful and practical than other methods proposed in the literature. In particular, we are able to establish in a fully automatic way the correctness of program transformations which, by using other methods, are proved correct at the expense of fixing in advance sophisticated clause measures.Alberto PettorossiMaurizio ProiettiValerio Sennivalerio.senni@imtlucca.it2012-09-13T10:58:51Z2013-03-07T12:56:25Zhttp://eprints.imtlucca.it/id/eprint/1345This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/13452012-09-13T10:58:51ZImproving Reachability Analysis of Infinite State Systems by Specialization We consider infinite state reactive systems specified by using linear constraints over the integers, and we address the problem of verifying safety properties of these systems by applying reachability analysis techniques. We propose a method based on program specialization, which improves the effectiveness of the backward and forward reachability analyses. For backward reachability our method consists in: (i) specializing the reactive system with respect to the initial states, and then (ii) applying to the specialized system the reachability analysis that works backwards from the unsafe states. For reasons of efficiency, during specialization we make use of a relaxation from integers to reals. In particular, we test the satisfiability or entailment of constraints over the real numbers, while preserving the reachability properties of the reactive systems when constraints are interpreted over the integers. 
For forward reachability our method works as for backward reachability, except that the roles of the initial states and the unsafe states are interchanged. We have implemented our method using the MAP transformation system and the ALV verification system. Through various experiments performed on several infinite state systems, we have shown that our specialization-based verification technique considerably increases the number of successful verifications without a significant degradation of the time performance.Fabio FioravantiAlberto PettorossiMaurizio ProiettiValerio Sennivalerio.senni@imtlucca.it2012-09-13T10:40:49Z2013-03-07T12:56:24Zhttp://eprints.imtlucca.it/id/eprint/1344This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/13442012-09-13T10:40:49ZSpecification and Validation of Algorithms Generating Planar Lehman WordsAlain GiorgettiValerio Sennivalerio.senni@imtlucca.it2012-09-13T10:29:29Z2013-03-07T12:56:25Zhttp://eprints.imtlucca.it/id/eprint/1343This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/13432012-09-13T10:29:29ZUsing Real Relaxations during Program SpecializationWe propose a program specialization technique for locally stratified CLP(ℤ) programs, that is, logic programs with linear constraints over the set ℤ of the integer numbers. For reasons of efficiency our technique makes use of a relaxation from integers to reals. We reformulate the familiar unfold/fold transformation rules for CLP programs so that: (i) the applicability conditions of the rules are based on the satisfiability or entailment of constraints over the set ℝ of the real numbers, and (ii) every application of the rules transforms a given program into a new program with the same perfect model constructed over ℤ. Then, we introduce a strategy which applies the transformation rules for specializing CLP(ℤ) programs with respect to a given query.
Finally, we show that our specialization strategy can be applied for verifying properties of infinite state reactive systems specified by constraints over ℤ.Fabio FioravantiAlberto PettorossiMaurizio ProiettiValerio Sennivalerio.senni@imtlucca.it2012-09-10T09:20:40Z2012-09-10T09:20:40Zhttp://eprints.imtlucca.it/id/eprint/1341This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/13412012-09-10T09:20:40ZA Game-Theoretic Analysis of Grid Job SchedulingComputational Grid is a well-established platform that promises to provide a vast range of heterogeneous resources for high performance computing. Efficient and effective resource management and Grid job scheduling are key requirements in order to optimize the use of the resources and to take full advantage of Grid systems. In this paper, we study the job scheduling problem in Computational Grid by using a game-theoretic approach. Grid resources are usually owned by different organizations which may have different and possibly conflicting concerns. Thus it is a crucial objective to analyze potential scenarios where selfish or cooperative behaviors of organizations impact heavily on global Grid efficiency. To this purpose, we formulate a repeated non-cooperative job scheduling game, whose players are Grid sites and whose strategies are scheduling algorithms. We exploit the concept of Nash equilibrium to express a situation in which no player can gain any profit by unilaterally changing its strategy. We extend and complement our previous work by showing whether, under certain circumstances, each investigated strategy is a Nash equilibrium or not.
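The unilateral-deviation check at the heart of a Nash-equilibrium argument can be sketched in a few lines. The toy game below is purely illustrative (the two strategy names and the payoffs are invented, not the scheduling strategies or utilities studied in the paper): a profile is a Nash equilibrium when neither player can improve its payoff by deviating alone.

```python
from itertools import product

# Hypothetical 2-player normal-form game (invented payoffs, prisoner's-
# dilemma-like). payoff[s1][s2] -> (utility of player 1, utility of player 2).
payoff = {
    "FCFS":   {"FCFS": (2, 2), "Greedy": (0, 3)},
    "Greedy": {"FCFS": (3, 0), "Greedy": (1, 1)},
}

def is_nash(s1, s2):
    """A profile (s1, s2) is a Nash equilibrium if no player gains by
    unilaterally switching to another strategy."""
    u1, u2 = payoff[s1][s2]
    if any(payoff[d][s2][0] > u1 for d in payoff):      # player 1 deviates
        return False
    if any(payoff[s1][d][1] > u2 for d in payoff[s1]):  # player 2 deviates
        return False
    return True

equilibria = [p for p in product(payoff, repeat=2) if is_nash(*p)]
print(equilibria)  # → [('Greedy', 'Greedy')]
```

Brute-force enumeration like this only works for small finite games; the paper's repeated-game setting additionally requires proofs or simulation over strategy histories.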
In the negative case we give a counter-example; in the positive case we either give a formal proof or motivate our conjecture by experimental results supported by simulations and exhaustive search.Maria Grazia Buscemim.buscemi@imtlucca.itUgo MontanariSonia Taneja2012-09-06T12:57:20Z2012-09-06T12:57:20Zhttp://eprints.imtlucca.it/id/eprint/1340This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/13402012-09-06T12:57:20ZNumerical algorithm for nonlinear state feedback ℌ∞ optimal control problemIn this paper, a numerical algorithm based on the conjugate gradient method to solve a finite-horizon min-max optimization problem arising in the ℌ∞ control of nonlinear systems is presented. The feedback control and disturbance variables are formulated as a linear combination of basis functions. The proposed algorithm, which has a backward-in-time structure, directly finds very accurate approximations of these feedbacks. Benchmark examples with analytic solutions are provided to demonstrate the effectiveness of the proposed algorithm.Vladimir MilicAlberto Bemporadalberto.bemporad@imtlucca.itJosip KasacZeljko Situm2012-09-04T09:57:02Z2012-09-04T09:57:02Zhttp://eprints.imtlucca.it/id/eprint/1339This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/13392012-09-04T09:57:02ZNonnegative Matrix Factorizations Performing Object Detection and LocalizationWe study the problem of detecting and localizing objects in still, gray-scale images making use of the part-based representation provided by nonnegative matrix factorizations. Nonnegative matrix factorization represents an emerging example of subspace methods, which is able to extract interpretable parts from a set of template image objects and then to additively use them for describing individual objects. In this paper, we present a prototype system based on some nonnegative factorization algorithms,
which differ in the additional properties added to the nonnegative representation of data, in order to investigate whether any additional constraint produces better results in general object detection via nonnegative matrix factorizations.Gabriella CasalinoNicoletta Del BuonoMassimo Minervinimassimo.minervini@imtlucca.it2012-08-07T07:49:34Z2016-04-07T08:28:50Zhttp://eprints.imtlucca.it/id/eprint/1331This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/13312012-08-07T07:49:34ZDebtRank: Too Central to Fail? Financial Networks, the FED and Systemic RiskSystemic risk, here meant as the risk of default of a large portion of the financial system, depends on the network of financial exposures among institutions. However, there is no widely accepted methodology to determine the systemically important nodes in a network. To fill this gap, we introduce DebtRank, a novel measure of systemic impact inspired by feedback centrality. As an application, we analyse a new and unique dataset on the USD 1.2 trillion FED emergency loans program to global financial institutions during 2008–2010. We find that a group of 22 institutions, which received most of the funds, forms a strongly connected graph where each of the nodes becomes systemically important at the peak of the crisis. Moreover, a systemic default could have been triggered even by small dispersed shocks.
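The feedback-centrality idea behind a measure like DebtRank can be sketched as iterative distress propagation over a weighted exposure network. Everything below is an illustrative assumption (a tiny invented exposure matrix and shock, a simplified "no echo" update), not the paper's calibrated FED-loan data or exact algorithm.

```python
# Hypothetical exposure network: W[i][j] is the relative impact of
# node i's distress on node j (invented numbers, 3 institutions).
W = [
    [0.0, 0.5, 0.2],
    [0.3, 0.0, 0.4],
    [0.1, 0.6, 0.0],
]

def propagate(h0, W, rounds=10):
    """Spread distress levels h in [0, 1] along weighted links; each
    node passes on only distress it has not yet propagated (no echo)."""
    n = len(h0)
    h = list(h0)
    fresh = list(h0)                       # distress not yet passed on
    for _ in range(rounds):
        incoming = [sum(W[i][j] * fresh[i] for i in range(n)) for j in range(n)]
        nxt = [min(1.0, h[j] + incoming[j]) for j in range(n)]
        fresh = [nxt[j] - h[j] for j in range(n)]
        h = nxt
    return h

impact = propagate([0.3, 0.0, 0.0], W)     # shock node 0 only
print([round(x, 3) for x in impact])       # distress reaches every node
```

The sum of final distress minus the initial shock gives a crude systemic-impact score of the shocked node; ranking nodes by that score is the feedback-centrality intuition.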
The results suggest that the debate on too-big-to-fail institutions should include the even more serious issue of too-central-to-fail.Stefano BattistonMichelangelo Puligamichelangelo.puliga@imtlucca.itRahul KaushikPaolo TascaGuido Caldarelliguido.caldarelli@imtlucca.it2012-07-30T11:14:52Z2016-04-07T09:29:17Zhttp://eprints.imtlucca.it/id/eprint/1328This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/13282012-07-30T11:14:52ZWeb Search Queries Can Predict Stock Market VolumesWe live in a computerized and networked society where many of our actions leave a digital trace and affect other people’s actions. This has led to the emergence of a new data-driven research field: mathematical methods of computer science, statistical physics and sociometry provide insights on a wide range of disciplines ranging from social science to human mobility. A recent important discovery is that search engine traffic (i.e., the number of requests submitted by users to search engines on the www) can be used to track and, in some cases, to anticipate the dynamics of social phenomena. Successful examples include unemployment levels, car and home sales, and epidemic spreading. A few recent works have applied this approach to stock prices and market sentiment. However, it remains unclear if trends in financial markets can be anticipated by the collective wisdom of on-line users on the web. Here we show that daily trading volumes of stocks traded in NASDAQ-100 are correlated with daily volumes of queries related to the same stocks. In particular, query volumes anticipate in many cases peaks of trading by one day or more. Our analysis is carried out on a unique dataset of queries, submitted to an important web search engine, which enables us to also investigate user behavior. We show that the query volume dynamics emerges from the collective but seemingly uncoordinated activity of many users.
These findings contribute to the debate on the identification of early warnings of financial systemic risk, based on the activity of users of the www.Ilaria BordinoStefano BattistonGuido Caldarelliguido.caldarelli@imtlucca.itMatthieu CristelliAntti UkkonenIngmar Weber2012-07-26T07:51:58Z2014-07-28T09:35:47Zhttp://eprints.imtlucca.it/id/eprint/1326This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/13262012-07-26T07:51:58ZStatistical agent based modelization of the phenomenon of drug abuseWe introduce a statistical agent based model to describe the phenomenon of drug abuse and its dynamical evolution at the individual and global level. The agents are heterogeneous with respect to their intrinsic inclination to drugs, to their budget attitude and social environment. The various levels of drug use were inspired by the professional description of the phenomenon and this permits a direct comparison with all available data. We show that certain elements are very important in starting drug use, for example rare events in personal experiences which occasionally allow the barrier to drug use to be overcome. The analysis of how the system reacts to perturbations is very important to understand its key elements and it provides strategies for effective policy making.
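An agent-based mechanism of this kind can be sketched as a threshold model: heterogeneous agents begin using when social pressure plus a rare personal shock exceeds their intrinsic barrier. All parameters and the update rule below are an invented toy under those assumptions, not the authors' calibrated model.

```python
import random

random.seed(1)

N, STEPS, SHOCK_P = 1000, 50, 0.02           # invented parameters
barrier = [random.random() for _ in range(N)]  # heterogeneous intrinsic barriers
using = [False] * N

for _ in range(STEPS):
    social = sum(using) / N                   # global social pressure feedback
    for i in range(N):
        if not using[i]:
            shock = random.random() < SHOCK_P  # rare personal event
            pressure = social + (0.5 if shock else 0.0)
            if pressure > barrier[i]:
                using[i] = True

print(f"fraction of users after {STEPS} steps: {sum(using) / N:.2f}")
```

The feedback term `social` is what lets rare individual shocks seed a collective dynamic, mirroring the abstract's point that rare personal events are pivotal in starting use.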
The present model represents the first step of a realistic description of this phenomenon and can be easily generalized in various directions.Riccardo Di Clementericcardo.diclemente@alumni.imtlucca.itLuciano Pietronero2012-07-24T13:28:47Z2013-04-19T12:42:25Zhttp://eprints.imtlucca.it/id/eprint/1323This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/13232012-07-24T13:28:47ZA uniform framework for modelling nondeterministic, probabilistic, stochastic, or mixed processes and their behavioral equivalencesLabeled transition systems are typically used as behavioral models of concurrent processes, and the labeled transitions define a one-step state-to-state reachability relation. This model can be generalized by modifying the transition relation to associate a state reachability distribution, rather than a single target state, with any pair of source state and transition label. The state reachability distribution becomes a function mapping each possible target state to a value that expresses the degree of one-step reachability of that state. Values are taken from a preordered set equipped with a minimum that denotes unreachability. By selecting suitable preordered sets, the resulting model, called ULTraS from Uniform Labeled Transition System, can be specialized to capture well-known models of fully nondeterministic processes (LTS), fully
probabilistic processes (ADTMC), fully stochastic processes (ACTMC), and of nondeterministic and probabilistic (MDP) or nondeterministic and stochastic (CTMDP) processes. This uniform treatment of different behavioral models extends to behavioral equivalences. These can be defined on ULTraS by relying on appropriate measure functions that express the degree of reachability of a set of states when performing
single-step or multi-step computations. It is shown that the specializations of bisimulation, trace, and testing
equivalences for the different classes of ULTraS coincide with the behavioral equivalences defined in the literature over traditional models.Marco BernardoRocco De Nicolar.denicola@imtlucca.itMichele Loreti2012-07-24T13:23:26Z2013-04-19T12:42:07Zhttp://eprints.imtlucca.it/id/eprint/1322This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/13222012-07-24T13:23:26ZA uniform definition of stochastic process calculiWe introduce a unifying framework to provide the semantics of process algebras, including their quantitative variants useful for modeling quantitative aspects of behaviors. The unifying framework is then used to describe some of the most representative stochastic process algebras. This
provides a general and clear support for an understanding of their similarities and differences. The framework is based on State to Function Labeled Transition Systems, FuTSs for short, that are state-transition structures where each transition is a triple of the form (s, α, P). The first and the second components are the source state, s, and the label, α, of the transition, while the third component is the continuation function, P, associating a value of a suitable type to each state s′. For example, in the case of stochastic process algebras the value of the continuation function on s′ represents the rate of the negative exponential distribution characterizing the duration/delay of the action performed to reach state s′ from s. We first provide the semantics of a simple formalism used to describe Continuous-Time Markov Chains, then we model a number of process algebras that permit parallel composition of models according to the two main interaction paradigms (multiparty and one-to-one synchronization). Finally, we deal with formalisms where actions and rates are kept separate and address the issues related to the coexistence of stochastic, probabilistic, and non-deterministic behaviors. For each formalism, we establish the formal correspondence between the FuTSs semantics and its original semantics.Rocco De Nicolar.denicola@imtlucca.itDiego LatellaMichele LoretiMieke Massink2012-07-20T09:51:02Z2012-07-23T13:40:35Zhttp://eprints.imtlucca.it/id/eprint/1320This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/13202012-07-20T09:51:02ZCan persistent Epstein–Barr virus infection induce chronic fatigue syndrome as a Pavlov reflex of the immune response? Chronic fatigue syndrome is a protracted illness condition (lasting even years) appearing with strong flu symptoms and systemic deficiencies of the immune system.
Here, by means of statistical mechanics techniques, we study the most widely accepted picture for its genesis, namely a persistent acute mononucleosis infection, and we show how such infection may drive the immune system towards an out-of-equilibrium metastable state displaying chronic activation of both humoral and cellular responses (a state of full inflammation without a direct ‘causes–effect’ reason). By exploiting a bridge with a neural scenario, we mirror killer lymphocytes TK and B cells to neurons, and helper lymphocytes TH to synapses, hence showing that the immune system may experience the Pavlov conditional reflex phenomenon: if the exposure to a stimulus (Epstein–Barr virus antigens) lasts for too long, strong internal correlations among B, TK and TH may develop, ultimately resulting in a persistent activation even though the stimulus itself is removed. These outcomes are corroborated by several experimental findings. Elena AgliariAdriano BarraKristian Vidal Gervasikristian.gervasi@imtlucca.itFrancesco Guerra2012-06-29T12:28:49Z2016-07-13T09:49:50Zhttp://eprints.imtlucca.it/id/eprint/1293This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/12932012-06-29T12:28:49ZState space c-reductions for concurrent systems in rewriting logicWe present c-reductions, a simple, flexible and very general state space reduction technique that exploits an equivalence relation on states that is a bisimulation. Reduction is achieved by a canonizer function, which maps each state into a not necessarily unique canonical representative of its equivalence class. The approach contains symmetry reduction, name reuse, and name abstraction as special cases, and exploits the expressiveness of rewriting logic and its realization in Maude
to automate c-reductions and to seamlessly integrate model checking and the discharging of correctness proof obligations. The performance of the approach has been validated over a set of representative case studies.Alberto Lluch-Lafuentealberto.lluch@imtlucca.itJosé MeseguerAndrea Vandinandrea.vandin@imtlucca.it2012-06-29T11:10:58Z2016-07-13T09:49:16Zhttp://eprints.imtlucca.it/id/eprint/1292This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/12922012-06-29T11:10:58ZExploiting over- and under-approximations for infinite-state counterpart modelsSoftware systems with dynamic topology are often infinite-state. Paradigmatic examples are those modeled as graph transformation systems (GTSs) with rewrite rules that allow an unbounded creation of items. For such systems, verification can become intractable, thus calling for the development of approximation techniques that may ease
the verification at the cost of losing precision and completeness. Both over- and under-approximations have been considered in the literature, respectively offering more and less behaviors than the original system. At the same time, properties of the system may be either preserved or
reflected by a given approximation. In this paper we propose a general notion of approximation that captures some of the existing approaches for GTSs. Formulae are specified by a generic quantified modal logic, one that also generalizes many specification logics adopted in the literature for GTSs. We also propose a type system to denote part of the formulae as either reflected or preserved, together with a technique that exploits
under- and over-approximations to reason about typed as well as untyped formulae.Alberto Lluch-Lafuentealberto.lluch@imtlucca.itFabio GadducciAndrea Vandinandrea.vandin@imtlucca.it2012-06-28T09:16:53Z2012-06-28T09:24:51Zhttp://eprints.imtlucca.it/id/eprint/1287This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/12872012-06-28T09:16:53ZIl National Audit Office e il Public Accounts Committee: ancora lezioni da Westminster?The mainstream description of the UK's institutions holds that the parliament plays little role in law and policy-making processes and in the control of government activities. However, the revitalization of an old institution, the National Audit Office, and the new relations built with the Public Accounts Committee, along with other relevant institutional changes, have heavily modified the relative relevance of these public institutions in the last decades. Today we observe that in the Westminster parliament there is a new culture of public scrutiny, based on the evaluation of public performances, and that this is reflected in an improved role for the parliament itself. Martino Bianchimartino.bianchi@imtlucca.it2012-06-26T12:26:10Z2013-03-12T09:32:21Zhttp://eprints.imtlucca.it/id/eprint/1285This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/12852012-06-26T12:26:10ZIschemic extent as a biomarker for characterizing severity of coronary artery stenosis with blood oxygen-sensitive MRIPurpose: To investigate whether a statistical analysis of myocardial blood-oxygen-level-dependent (mBOLD) signal intensities can lead to the identification and quantification of the ischemic area supplied by the culprit artery. Materials and Methods: Cardiac BOLD images were acquired in a canine model (n = 9) with controllable LCX stenosis at rest and during adenosine infusion on a 1.5T clinical scanner.
Statistical distributions of myocardial pixel-intensities derived from BOLD images were used to compute an area metric (ischemic extent, IE). True myocardial perfusion was estimated from microsphere analysis. IE was compared against a standard metric (segment-intensity-response, SIR). Additional animals (n = 3) were used to investigate the feasibility of the approach for identifying ischemic territories due to LAD stenosis from mBOLD images. Results: Regression analyses showed that IE and myocardial flow ratio between rest and adenosine infusion (MFR) were exponentially related (R² > 0.70, P < 0.001, for end-systole and end-diastole), while SIR and MFR were linearly related at end-systole (R² = 0.51, P < 0.04) and unrelated at end-diastole (R² ≈ 0, P = 0.91). Receiver-operating-characteristic analysis showed that IE was superior to SIR for detecting critical stenosis (MFR ≤ 2) at end-systole and end-diastole. Feasibility studies on LAD narrowing demonstrated that the proposed approach could also identify oxygenation changes in the LAD territories. Conclusion: The proposed evaluation of cardiac BOLD magnetic resonance imaging (MRI) offers marked improvement in sensitivity and specificity for detecting critical coronary stenosis at 1.5T compared to the mean segmental intensity approach. Patient studies are now warranted to determine its clinical utility. Sotirios A. Tsaftarissotirios.tsaftaris@imtlucca.itRichard TangXiangzhi ZhouDebiao LiRohan Dharmakumar2012-06-07T10:17:05Z2012-06-07T10:17:05Zhttp://eprints.imtlucca.it/id/eprint/1283This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/12832012-06-07T10:17:05ZAbsorptive Capacity and Efficiency: A Comparative Stochastic Frontier Approach Using Sectoral DataIn this paper, we investigate differences in and determinants of technical efficiency across three groups of OECD, Asian and Latin American countries.
As technical efficiency determines the capacity with which countries absorb technology produced abroad, these differences
are important to understand differences in growth and productivity across countries, especially for developing countries which depend to a large extent on foreign technology. Using a stochastic frontier framework and data for 22 manufacturing sectors for 1996-2005, we find notable differences in technical efficiency between the three country groups we examine. We then investigate the effect of human capital and domestic R&D, proxied by the
stock of patents, on technical efficiency. We find that while human capital always has a strongly positive effect on efficiency, an increase in the stock of patents has positive effects on efficiency in high-tech sectors, but negative effects in low-tech sectors.Letizia Montinariletizia.montinari@imtlucca.itMichael Rochlitzmichael.rochlitz@imtlucca.it2012-05-22T13:59:24Z2012-07-05T10:09:36Zhttp://eprints.imtlucca.it/id/eprint/1279This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/12792012-05-22T13:59:24ZDo Eco-Innovations Harm Productivity Growth through Crowding Out? Results of an Extended CDM Model for ItalyThis paper discusses the results for Italy of a CDM model (Crepon et al, 1998) further extended with the objective of evaluating drivers and productivity effects of environmental innovations. The particular nature of environmental innovations, especially as regards the need of
government intervention to create market opportunities, is likely to affect the way in which they are pursued (the innovation equation within the CDM model) and their effect on productivity (the productivity equation). Here I test two main hypotheses: (i) to what extent do polluting firms rely on their own innovations to improve their environmental performance? (ii) does the pursuit of environmental innovations reduce the likelihood of obtaining other profitable innovations (crowding out)? Results, based
on administrative data (AIDA by Bureau van Dijk and patent data from PATSTAT) show that the innovation efforts of polluting firms and sectors are significantly biased towards environmental innovations and that environmental innovations tend to crowd out other more profitable
(at least in the short run) innovations.Giovanni Maringiovanni.marin@alumni.imtlucca.it2012-05-22T13:54:41Z2012-07-05T10:10:04Zhttp://eprints.imtlucca.it/id/eprint/1278This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/12782012-05-22T13:54:41ZClosing the gap? Dynamic analyses of emission efficiency and sector productivity in EuropeThis paper investigates the patterns of emission efficiency (value added per emission) growth of 23 manufacturing sectors in 12 European countries with a focus on five emissions (CO2, NOx, NMVOC, SOx and CO). Emission efficiency growth is expected to be triggered by improvements in the efficiency of frontier countries through the diffusion of better technologies to laggard countries. This effect is likely to differ according to the distance from the frontier country. Finally, the role of productivity patterns (Total Factor Productivity) and energy
prices dynamics is assessed. Results based on the European NAMEA (National Accounting Matrix including Environmental Accounts) further merged with sector accounts highlight significant spillovers from leaders in emission efficiency and a general tendency to converge for laggard countries and
sectors (except for NMVOC emission efficiency). Energy prices weakly induce improvements in emission efficiency, with the effect being generally stronger for sectors and countries farther away from the emission efficiency frontier. Finally, total factor productivity (TFP) is strongly correlated with emission efficiency while the distance from TFP frontier significantly harms emission efficiency growth.Giovanni Maringiovanni.marin@alumni.imtlucca.it2012-05-18T10:24:14Z2012-05-18T10:24:14Zhttp://eprints.imtlucca.it/id/eprint/1277This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/12772012-05-18T10:24:14ZLe Diable est un bon connaisseur de l'art de séduire par les images. Entretien avec Marco BelpolitiLinda Bertellilinda.bertelli@imtlucca.it2012-05-17T08:37:29Z2012-10-10T09:17:45Zhttp://eprints.imtlucca.it/id/eprint/1275This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/12752012-05-17T08:37:29ZDall'intuizione alla figura. Il "discorso sul metodo" bergsonianoAlthough there are many studies focused on the role of images in Bergson’s work, few of them try to outline the relationship between the notion of image (as it emerges from Matter and memory and other essays) and his use of images understood as figures of speech. In order to outline Bergson’s hypothesis on language, this paper tries to delineate this relationship by means of the notions of
metaphor and metonymy, advancing the hypothesis that the connection is possible only if we consider as metonymies some figures of speech in his texts usually considered as metaphors.Linda Bertellilinda.bertelli@imtlucca.it2012-05-15T10:59:51Z2012-05-15T10:59:51Zhttp://eprints.imtlucca.it/id/eprint/1268This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/12682012-05-15T10:59:51ZAgeing and risk aspects in predictive inference based on proportional Hazard ModelsProportional Hazard Models arise from a straightforward generalization of the simple case of conditionally i.i.d., exponentially distributed random variables and, in a sense, can be considered as the idealized models in the statistical analysis of failure and survival data for
lifetimes. For these reasons, they have been extensively studied in the literature. Despite the richness of related contributions, there are still special aspects of these models that are worth focusing on. In this discussion paper we aim to present some contributions, in the framework of a Bayesian approach and by using some very basic notions of stochastic ordering.Rachele Foschirachele.foschi@imtlucca.itFabio Spizzichino2012-05-08T08:52:38Z2012-05-09T07:24:25Zhttp://eprints.imtlucca.it/id/eprint/1267This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/12672012-05-08T08:52:38ZInteractions between ageing and risk properties in the analysis of burn-in problemsSeveral relevant problems in reliability can be looked at as problems of risk management and of decisions in the face of uncertainty. However, in this frame, the so-called burn-in problem can be seen as a problem of risk taking par excellence. In this paper, we point out in particular some aspects concerning interactions between the probabilistic model for lifetimes and considerations of an economic kind. As one of the features of our work, we hinge on some unexplored connections between ageing properties of a one-dimensional survival function Ḡ and risk-aversion-type properties of the function u(t) = bḠ(t), b > 0, when the latter is seen as a utility function. Rachele Foschirachele.foschi@imtlucca.itFabio Spizzichino2012-05-07T09:12:53Z2012-05-07T09:12:53Zhttp://eprints.imtlucca.it/id/eprint/1266This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/12662012-05-07T09:12:53ZRevisiting Trace and Testing Equivalences for Nondeterministic and Probabilistic ProcessesOne of the most studied extensions of testing theory to nondeterministic and probabilistic processes yields unrealistic probability estimations that give rise to two anomalies. First, probabilistic testing equivalence does not imply probabilistic trace equivalence.
Second, probabilistic testing equivalence differentiates processes that perform the same sequence of actions with the same probability but make internal choices in different moments and thus, when applied to processes without probabilities, does not coincide with classical testing equivalence. In this paper, new versions of probabilistic trace and testing equivalences are presented for nondeterministic and probabilistic processes that resolve the two anomalies. Instead of focussing only on suprema and infima of the set of success probabilities of resolutions of interaction systems, our testing equivalence matches all the resolutions on the basis of the success probabilities of their identically labeled computations. A simple spectrum is provided to relate the new relations with existing ones. It is also shown that, with our approach, the standard probabilistic testing equivalences for generative and reactive probabilistic processes can be retrieved.Marco BernardoRocco De Nicolar.denicola@imtlucca.itMichele Loreti2012-04-26T11:07:06Z2012-04-26T11:07:06Zhttp://eprints.imtlucca.it/id/eprint/1265This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/12652012-04-26T11:07:06ZA Trick of the (Pareto) TailSeveral economic phenomena are found to follow an approximate Pareto distribution, at least in the upper tail. The debate is well established for the distribution of wealth and business firms, and has recently been particularly animated with respect to city sizes. In this paper we contribute to this stream of the literature by showing that the power-law tail emerges upon aggregation, and this holds true across three different domains: cities, firms and trade flows. We explore different mechanisms that could give rise to this effect, from mere sample size to correlation among the number of constituent parts of aggregate entities and their size, to the aggregation rule, and discuss their impact on the Pareto tail. 
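Testing for a genuine Pareto tail typically starts from an estimate of the tail exponent; the Hill estimator is one standard tool in this debate. The sketch below applies it to synthetic Pareto draws (invented data, not the city-size, firm, or trade-flow datasets of the paper).

```python
import math
import random

random.seed(7)

# Synthetic sample from an exact Pareto law with known tail exponent,
# so the estimator can be checked against the truth.
alpha_true = 1.2
data = [random.paretovariate(alpha_true) for _ in range(5000)]

def hill(sample, k):
    """Hill estimate of the tail index from the k largest observations:
    k / sum_{i<=k} (ln X_(i) - ln X_(k+1))."""
    xs = sorted(sample, reverse=True)[: k + 1]
    logs = [math.log(x) for x in xs]
    return k / sum(logs[i] - logs[k] for i in range(k))

print(round(hill(data, 500), 2))  # close to alpha_true = 1.2
```

For real data the estimate varies with the cutoff k, and with few observations in the tail (the paper's point about US city sizes) no choice of k yields a statistically reliable exponent.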
Using multiple statistical tests we show that it is impossible to prove the existence of a genuine Pareto tail for the US city size distribution because of the smallness of the number of observations. Furthermore, the presence of a positive power-law relationship between the number of units (products, establishments) comprised in each firm and their average size is key to explain why the size distribution of business firms displays a power-law tail. Conversely, we do not find any Pareto tail for trade flows. The paper casts new light on the mechanisms through which idiosyncratic shocks do not average out upon aggregation, so that individual shocks are not washed away in economic aggregates, as the central limit theorem would predict, but can even be magnified.Marco BeeMassimo Riccabonimassimo.riccaboni@imtlucca.itStefano Schiavo2012-04-26T10:50:14Z2012-07-06T12:20:13Zhttp://eprints.imtlucca.it/id/eprint/1264This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/12642012-04-26T10:50:14ZAssessment of non-centralised model predictive control techniques for electrical power networks Model predictive control (MPC) is one of the few advanced control methodologies that have proven to be very successful in real-life applications. An attractive feature of MPC is its capability of explicitly taking state and input constraints into account. Recently, there has been an increasing interest in the usage of MPC schemes to control electrical power networks. The major obstacle for implementation lies in the large scale of these systems, which is prohibitive for a centralised approach. In this article, we therefore assess and compare the suitability of several non-centralised predictive control schemes for power balancing, to provide valuable insights that can contribute to the successful implementation of non-centralised MPC in the real-life electrical power system. Ralph M. 
HermansAndrej JokicMircea LazarAlessandro AlessioPaul Van den boschIan HiskensAlberto Bemporadalberto.bemporad@imtlucca.it2012-04-18T12:49:49Z2013-03-12T09:32:21Zhttp://eprints.imtlucca.it/id/eprint/1263This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/12632012-04-18T12:49:49ZMyocardial Blood-Oxygen-Level-Dependent Magnetic Resonance Imaging with Balanced Steady-State Free Precession Imaging ApproachesThe current state of myocardial Blood-Oxygen-Level-Dependent (BOLD) MRI with balanced steady-state free precession (SSFP) approaches is reviewed. Initial studies forming the basis for SSFP-based detection of oxygenation changes beginning with whole blood studies, progressing through controlled studies that consider microcirculatory changes in oxygenation in skeletal muscle and kidney, culminating in basic myocardial studies are outlined. The theoretical basis to observe signal changes and the mechanisms that facilitate such observations are elucidated. Methods to overcome limitations in sensitivity are described.Rohan DharmakumarSotirios A. Tsaftarissotirios.tsaftaris@imtlucca.itDebiao Li2012-04-13T15:22:01Z2012-04-13T15:22:01Zhttp://eprints.imtlucca.it/id/eprint/1262This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/12622012-04-13T15:22:01ZA Theory of Political EntrenchmentWe develop a theory of endogenous political entrenchment in a simple two-party dynamic model of income redistribution with probabilistic voting. A partially self-interested
left-wing party may implement (entrenchment) policies reducing the income of its own constituency, the lower class, in order to consolidate its future political power. Such policies increase the net gain that low-skill agents obtain from income redistribution, which only the Left (but not the Right) can credibly commit to provide, and therefore may help offset a potential future aggregate ideological shock averse to the left-wing party. We
demonstrate that political entrenchment by the Left occurs only if incumbency rents are sufficiently high and that low-skill citizens may vote for this party even though they rationally expect the adoption of these policies. We also discuss the case where the left-wing party may have the incentive to ex-ante commit to not pursue entrenchment policies once in power. Finally, we show that, in a more general framework, the entrenchment policies can also be implemented by the right-wing party. The comparative statics analyzes the effects of state capacity, a positive bias of voters for one party and income inequality on
the incentives of the incumbent party to pursue entrenchment policies. The importance of our theory for constitutionally legislated term limits is also discussed. The theory sheds light on why left-wing parties or politicians often support liberal immigration policies of
unskilled workers, are sometimes in favor of free trade with less developed economies and of globalization more generally, or fail to reform plainly "dysfunctional" public educational systems damaging the lower classes.Gilles Saint-PaulDavide Ticchidavide.ticchi@imtlucca.itAndrea Vindigniandrea.vindigni@imtlucca.it2012-04-13T09:21:05Z2014-01-24T14:13:48Zhttp://eprints.imtlucca.it/id/eprint/1261This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/12612012-04-13T09:21:05ZContractual TestingVariants of the must-testing approach have been successfully applied in Service Oriented Computing for capturing compliance between (contracts exposed by) a client and a service and for characterising safe replacement, namely
the fact that compliance is preserved when a service exposing a 'smaller' contract is replaced by another one with a 'larger' contract. Nevertheless, in multi-party
interactions, partners often lack full coordination capabilities. Such a scenario calls for less discriminating notions of testing in which observers are, e.g., the
description of uncoordinated multiparty contexts or contexts that are unable to observe the complete behaviour of the process under test. In this paper we propose an extended notion of must preorder, called the contractual preorder, under which contracts are compared by their ability to pass only the tests belonging to a given set. We show the generality of our framework by proving that preorders induced by existing notions of compliance in a distributed setting are instances of the contractual preorder when restricting to suitable sets of observers.Maria Grazia Buscemim.buscemi@imtlucca.itRocco De Nicolar.denicola@imtlucca.itHernán C. Melgratti2012-04-02T07:22:55Z2012-04-02T07:22:55Zhttp://eprints.imtlucca.it/id/eprint/1253This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/12532012-04-02T07:22:55ZStatistical Laws Governing Fluctuations in Word Use from Word Birth to Word DeathWe analyze the dynamic properties of 10^7 words recorded in English, Spanish and Hebrew over the period 1800–2008 in order to gain insight into the coevolution of language and culture. We report language independent patterns useful as benchmarks for theoretical models of language evolution. A significantly decreasing (increasing) trend in the birth (death) rate of words indicates a recent shift in the selection laws governing word use. For new words, we observe a peak in the growth-rate fluctuations around 40 years after introduction, consistent with the typical entry time into standard dictionaries and the human generational
timescale. Pronounced changes in the dynamics of language during periods of war show that word correlations, occurring across time and between words, are largely influenced by coevolutionary social, technological, and political factors. We quantify cultural memory by analyzing the long-term correlations in the use of individual words using detrended fluctuation analysis.Alexander M. Petersenalexander.petersen@imtlucca.itJoel TenenbaumShlomo HavlinH. Eugene Stanley2012-03-28T13:02:05Z2012-04-03T07:49:05Zhttp://eprints.imtlucca.it/id/eprint/1252This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/12522012-03-28T13:02:05ZAn integer linear programming approach for radio-based localization of shipping containers in the presence of incomplete proximity informationThe most advanced solutions that are currently adopted in ports and terminals use technologies based on radio frequency identification (RFID) and the Global Positioning System (GPS) to identify and localize shipping containers in the yard. Nevertheless, because of the limitations of these solutions, the position of containers is still affected by errors, and it cannot be determined in real time. In this paper, a nonconventional approach is presented: Each container is equipped with nodes that use wireless communication to detect neighbor containers and to send proximity information to a base station. At the base station, geometrical constraints and proximity data are combined to determine the positions of containers. Missing information due to faulty nodes is tolerated by modeling geometrical constraints as an integer linear programming problem.
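As a toy stand-in for the integer-programming formulation just described (not the paper's actual model), one can recover container positions from pairwise proximity reports by exhaustive search; the row-of-slots grid, the container names and the proximity set below are all invented for illustration:

```python
from itertools import permutations

# Toy version of the localization problem: place containers in a row of
# slots so that every reported proximity pair ends up in adjacent slots.
containers = ["A", "B", "C", "D"]
slots = range(4)
proximity = {("A", "B"), ("B", "C")}  # partial information: A saw B, B saw C

def consistent(assign):
    # assign maps container -> slot; each reported pair must be adjacent
    return all(abs(assign[u] - assign[v]) == 1 for u, v in proximity)

solutions = []
for p in permutations(slots):
    assign = dict(zip(containers, p))
    if consistent(assign):
        solutions.append(assign)

print(len(solutions))  # with incomplete proximity data, several layouts remain
```

A real instance replaces the exhaustive search with an ILP solver, which is what makes the approach scale to a full yard.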
Numerical simulations show that most of the containers can be localized, even when the number of nodes that are affected by faults is on the order of 30.Stefano Abbatestefano.abbate@alumni.imtlucca.itMarco AvvenutiPaolo CorsiniBarbara PanicucciMauro PassacantandoAlessio Vecchio2012-03-28T10:49:35Z2012-04-03T07:50:44Zhttp://eprints.imtlucca.it/id/eprint/1246This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/12462012-03-28T10:49:35ZMIMS: A Minimally Invasive Monitoring Sensor PlatformThis paper describes a minimally invasive sensor platform for active and passive monitoring of human movements and physiological signals. Such a system is needed in cases where 24 × 7 monitoring is required, as in older adults with cognitive impairment, dementia and Alzheimer's disease. The passive monitoring systems used today are useful only in detecting events after they happen; the accuracy and speed of detection is questionable. The noninvasive nature of such systems does not bring trade-off benefits to early detection and prevention of emergency incidents. We compare some existing sensor platforms and present our monitoring approach using minimally invasive wearable sensor device(s). With a Minimally Invasive Monitoring Sensor (MIMS), using advanced intelligent systems, we analyze the physiological signal data preceding potential emergency events in order to predict them quickly. The Virtual Hub is the core component of MIMS, which acts as a gateway between a monitored person and her/his caregivers, as well as a shared access point between active and passive sensing devices.
Some preliminary results are presented here from our sleep-related fall study using two heterogeneous sensor systems.Stefano Abbatestefano.abbate@alumni.imtlucca.itMarco AvvenutiJanet Light2012-03-26T10:42:48Z2012-07-05T10:09:12Zhttp://eprints.imtlucca.it/id/eprint/1240This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/12402012-03-26T10:42:48ZLinking NAMEA and Input output for "consumption vs. production perspective" analyses: Evidence on emission efficiency and aggregation biases using the Italian and Spanish environmental accountsWe integrate input output and NAMEA tables for Spain and Italy in 1995, 2000 and 2005, in order to address the hot policy issue of sustainable consumption and production. A comparison of production and consumption perspectives may have relevant policy implications. We deal with the domestic technology assumption and primarily the aggregation bias that may result when calculating indirect emissions using different sector aggregations in the analyses (e.g. 16, 30, 50). Extended Input Output Analysis provides analyses of the emissions embodied in domestic consumption and domestic production by considering the structure of intermediate inputs and environmental efficiency in each production sector. Our empirical findings show that different sectoral aggregation significantly biases the amount of emissions for the consumption perspective, though differently in the two countries. Italy surprisingly shows consumption/production ratios around or lower than one, but in line with some major work at EU level.
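The Extended Input Output Analysis used above rests on the Leontief inverse, embodied emissions e = f (I − A)^(−1) y. A minimal numerical sketch for a hypothetical two-sector economy (the coefficient matrix A, emission intensities f and final demand y are all invented figures):

```python
# Leontief machinery behind EE-IOA, for an invented 2-sector economy.
A = [[0.1, 0.2],   # interindustry technical coefficient matrix
     [0.3, 0.1]]
f = [2.0, 0.5]     # direct emission intensities per unit of gross output
y = [100.0, 50.0]  # final demand by sector

# Invert I - A with the closed-form 2x2 formula.
a, b = 1 - A[0][0], -A[0][1]
c, d = -A[1][0], 1 - A[1][1]
det = a * d - b * c
L = [[d / det, -b / det], [-c / det, a / det]]  # Leontief inverse (I - A)^-1

x = [L[0][0] * y[0] + L[0][1] * y[1],  # gross output required by y
     L[1][0] * y[0] + L[1][1] * y[1]]
embodied = f[0] * x[0] + f[1] * x[1]   # total emissions embodied in final demand
print(round(embodied, 2))
```

The aggregation bias discussed in the abstract arises because collapsing sectors changes A and f jointly, so the same y can yield different embodied totals at different aggregation levels.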
Our results thus suggest that special attention must be paid when interpreting EE-IOA country estimates of embodied emissions, both those in domestic final demand and those directly associated with the production sectors, when the sectoral aggregation is as coarse as in some recent similar studies.Giovanni Maringiovanni.marin@alumni.imtlucca.itMassimiliano MazzantiAnna Montini2012-03-26T07:46:36Z2016-04-07T09:48:48Zhttp://eprints.imtlucca.it/id/eprint/1239This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/12392012-03-26T07:46:36ZRobustness and assortativity for diffusion-like processes in scale-free networksBy analysing the diffusive dynamics of epidemics and of distress in complex networks, we study the effect of assortativity on the robustness of the networks. We first determine by spectral analysis the thresholds above which epidemics/failures can spread; we then calculate the slowest diffusional times. Our results show that disassortative networks exhibit a higher epidemiological threshold and are therefore easier to immunize, while in assortative networks there is a longer time for intervention before epidemic/failure spreads. Moreover, we study by computer simulations the sandpile cascade model, a diffusive model of distress propagation (financial contagion). We show that, while assortative networks are more prone to the propagation of epidemic/failures, degree-targeted immunization policies increase their resilience to systemic risk.Gregorio D'AgostinoAntonio ScalaVinko ZlaticGuido Caldarelliguido.caldarelli@imtlucca.it2012-03-15T11:12:41Z2012-03-15T11:12:41Zhttp://eprints.imtlucca.it/id/eprint/1234This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/12342012-03-15T11:12:41ZHow much flexibility do we really need?Short-term contracts have been deployed rapidly across Europe since the mid-1990s.
The objective of this paper is to investigate both theoretically and empirically the effects of short-term contracts on individual welfare. By comparing the economy pre- and post-reform, we study the evolution of firms' and workers' dynamics, we identify the determinants behind the firms' decision to hire short-term, and we
quantify the change in welfare for different categories of workers. We find that more productive workers fare better, while junior and less productive workers pay the cost of higher turnover and lower wages, confirming the presence of a dual economy. The study of potential policy interventions allows us to conclude that the longer the short-term contracts, the better the labor market outcomes. In addition, the comparison of the models pre- and post-reform with an American-style economy with a unique
flexible contract seems to suggest that flexibility has positive effects on the labor market for junior workers, but
not necessarily on the one for senior workers.Cristina Tealdicristina.tealdi@imtlucca.it2012-03-13T12:56:40Z2012-09-05T15:37:49Zhttp://eprints.imtlucca.it/id/eprint/1233This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/12332012-03-13T12:56:40ZWarfare, Fiscal Capacity, and PerformanceWe exploit differences in casualties sustained in pre-modern wars to estimate the impact of fiscal capacity on economic performance. In the past, states fought different amounts of external conflicts, of various lengths and magnitudes. To raise the revenues to wage wars, states made fiscal innovations, which persisted and helped to shape current fiscal institutions. Economic historians claim that greater fiscal capacity was the key long-run institutional change brought about by historical conflicts. Using casualties sustained in pre-modern wars to instrument for current fiscal institutions, we estimate substantial impacts of fiscal capacity on GDP per worker. The results are robust to a broad range of specifications, controls, and sub-samples.
Mark Dinceccom.dincecco@imtlucca.itMauricio Prado2012-03-09T11:37:29Z2012-03-09T11:37:29Zhttp://eprints.imtlucca.it/id/eprint/1232This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/12322012-03-09T11:37:29ZRagghianti e il pungolo dell’azioneEmanuele Pellegriniemanuele.pellegrini@imtlucca.it2012-03-02T14:47:20Z2013-09-30T12:38:13Zhttp://eprints.imtlucca.it/id/eprint/1205This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/12052012-03-02T14:47:20ZHigh-Speed piecewise affine virtual sensorsThis paper proposes piecewise affine (PWA) virtual sensors for the estimation of unmeasured variables of nonlinear systems with unknown dynamics. The estimation functions are designed directly from measured inputs and outputs and have two important features. First, they enjoy convergence and optimality properties, based on classical results on parametric identification. Second, the PWA structure is based on a simplicial partition of the measurement space and allows one to implement very effectively the virtual sensor on a digital circuit. Due to the low cost of the required hardware for the implementation of such a particular structure and to the very high sampling frequencies that can be achieved, the approach is applicable to a wide range of industrial problems.Tomaso PoggiMatteo RubagottiAlberto Bemporadalberto.bemporad@imtlucca.itMarco Storace2012-03-02T14:37:36Z2015-05-12T13:23:25Zhttp://eprints.imtlucca.it/id/eprint/1203This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/12032012-03-02T14:37:36ZModeling and control of an airbrake electro-hydraulic smart actuatorIn this paper, an accurate model of an airbrake electro-hydraulic smart actuator is obtained by physical considerations, and then different control strategies (variable-gain proportional control, PT1 control with switching integrator, and second order sub-optimal sliding mode control) are proposed and analyzed. 
This application is innovative in the avionic field, and is one of the first attempts to realize a fly-by-wire system for airbrakes, oriented to its immediate employment and installation on current aircraft. The project was carried out with the participation of the Italian Ministry of Defense, and was commissioned to MAG, a leading provider of integrated systems and aviation services for aerospace.Matteo RubagottiMarco CarminatiGiampiero ClementeRiccardo GrassettiAntonella Ferrara2012-03-02T12:06:48Z2012-03-02T12:06:48Zhttp://eprints.imtlucca.it/id/eprint/1202This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/12022012-03-02T12:06:48ZThe Value of Failures in Pharmaceutical R&DWe build a cumulative innovation model in which both success and failure provide valuable information for future research. To test this learning mechanism, we use a dataset covering outcomes of world-wide R&D projects in the pharmaceutical industry, and proxy knowledge flows with forward citations received by patents associated with each project. Empirical results confirm theoretical predictions that patents associated with successfully completed projects (i.e., leading to drug launch on the market) receive more citations than those associated with failed (terminated) projects, which in turn are cited more often than patents lacking clinical or preclinical information. We therefore offer evidence of the value of failures as research inputs in (pharmaceutical) innovation.Jing-Yuan Chioujy.chiou@imtlucca.itLaura MagazziniFabio Pammollif.pammolli@imtlucca.itMassimo Riccabonimassimo.riccaboni@imtlucca.it2012-03-02T11:32:00Z2012-04-19T09:52:31Zhttp://eprints.imtlucca.it/id/eprint/1201This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/12012012-03-02T11:32:00ZLearning from failures or failing to learn? Lessons from pharmaceutical R&DInnovation is a trial and error process in which both successes and failures contribute to knowledge creation and
destruction. In this paper we test theoretical predictions about the role of failures in new product development on
private and public knowledge and interfirm knowledge transfer. We analyse the outcomes of world-wide R&D
projects in the pharmaceutical industry, and proxy knowledge flows with forward citations received by patents
associated with each project. We find that patents covering successfully completed projects (i.e., leading to drug
launch on the market) receive more citations than those associated with failed (terminated) projects, which in turn
are cited more often than patents lacking clinical or preclinical information. Failures by specialized firms are cited more frequently than those of generalist companies. We therefore offer evidence of the value of failures as research inputs in (pharmaceutical) innovation.Laura MagazziniFabio Pammollif.pammolli@imtlucca.itMassimo Riccabonimassimo.riccaboni@imtlucca.it2012-02-29T16:10:39Z2012-02-29T16:10:39Zhttp://eprints.imtlucca.it/id/eprint/1200This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/12002012-02-29T16:10:39ZModel predictive control with delay compensation for air-to-fuel ratio controlTo meet increasingly stringent emission regulations, modern internal combustion engines require highly accurate control of the air-to-fuel ratio. The performance of the conventional air-to-fuel ratio feedback loop is limited by the combustion delay between fuel injection and engine exhaust, and by the transport delay for the exhaust gas to propagate to the air-to-fuel ratio sensor location. The combined delay is variable, since it depends on engine speed and airflow. Drivability, fuel economy and emission requirements result in constraints on the deviations of the air-to-fuel ratio, stored oxygen in the three-way catalyst, and fuel injection. This paper proposes an approach for air-to-fuel ratio control based on Model Predictive Control (MPC). The approach systematically handles both variable time delays and pointwise-in-time constraints. A delay-free model is considered first, which takes into account the dynamic relations between the injected fuel and the air-to-fuel ratio and the dynamics of the oxygen stored in the catalyst. For the delay-free model, the explicit MPC law is computed. Delay compensation is obtained by estimating the delay online from engine operating conditions, and feeding the MPC law with the state predicted ahead over the time interval of the estimated delay.
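The delay-compensation idea just stated, forward-iterating the model over the estimated delay before evaluating the control law, can be sketched as follows; the scalar dynamics f, the input plan and the delay value below are invented for illustration and are not the paper's engine model:

```python
import math

def f(x, u):
    # toy nonlinear discrete-time dynamics (invented stand-in for the model)
    return 0.9 * x + 0.1 * math.tanh(u)

def predict_ahead(x_filtered, planned_inputs, d):
    """Iterate the model d steps forward from the filtered state estimate."""
    x = x_filtered
    for k in range(d):
        x = f(x, planned_inputs[k])
    return x

x_hat = 0.5                 # filtered estimate of the current state
u_plan = [1.0, 1.0, 0.5]    # inputs already commanded but not yet acting
d = 3                       # delay, in samples, estimated from speed/airflow
x_pred = predict_ahead(x_hat, u_plan, d)
# the control law would now be evaluated at x_pred rather than x_hat
print(round(x_pred, 4))
```

Evaluating the precomputed explicit law at the predicted state is what lets a fixed delay-free controller cope with an operating-point-dependent delay.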
The predicted state is computed by combining measurement filtering with forward iterations of the nonlinear dynamic equations of the model. The achieved performance in tracking the air-to-fuel ratio and the oxygen storage setpoints while enforcing the constraints is demonstrated in simulation using real data profiles. Sergio TrimboliStefano Di CairanoAlberto Bemporadalberto.bemporad@imtlucca.itIlya Kolmanovsky2012-02-27T13:30:59Z2018-03-08T17:03:15Zhttp://eprints.imtlucca.it/id/eprint/1196This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/11962012-02-27T13:30:59ZNetworks with arbitrary edge multiplicitiesOne of the main characteristics of real-world networks is their large clustering. Clustering is one aspect of a more general but much less studied structural organization of networks, i.e. edge multiplicity, defined as the number of triangles in which edges, rather than vertices, participate. Here we show that the multiplicity distribution of real networks is in many cases scale free, and in general very broad. Thus, besides the fact that in real networks the number of edges attached to vertices often has a scale-free distribution, we find that the number of triangles attached to edges can have a scale-free distribution as well. We show that current models, even when they generate clustered networks, systematically fail to reproduce the observed multiplicity distributions. 
We therefore propose a generalized model that can reproduce networks with arbitrary distributions of vertex degrees and edge multiplicities, and study many of its properties analytically.Vinko ZlaticDiego Garlaschellidiego.garlaschelli@imtlucca.itGuido Caldarelliguido.caldarelli@imtlucca.it2012-02-27T09:59:42Z2012-10-31T10:31:13Zhttp://eprints.imtlucca.it/id/eprint/1189This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/11892012-02-27T09:59:42ZNetworks: a very short introductionFrom ecosystems to Facebook, from the Internet to the global financial market, some of the most important and familiar natural systems and social phenomena are based on a networked structure. It is impossible to understand the spread of an epidemic, a computer virus, large-scale blackouts, or massive extinctions without taking into account the network structure that underlies all these phenomena.
In this Very Short Introduction, Guido Caldarelli and Michele Catanzaro discuss the nature and variety of networks, using everyday examples from society, technology, nature, and history to explain and understand the science of network theory. They show the ubiquitous role of networks; how networks self-organize; why the rich get richer; and how networks can spontaneously collapse. They conclude by highlighting how the findings of complex network theory have very wide and important applications in genetics, ecology, communications, economics, and sociology.Guido Caldarelliguido.caldarelli@imtlucca.itMichele Catanzaro2012-01-16T09:35:24Z2013-10-10T08:34:21Zhttp://eprints.imtlucca.it/id/eprint/1053This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/10532012-01-16T09:35:24ZPersistence and Uncertainty in the Academic CareerRecent shifts in the business structure of universities and a bottleneck in the supply of tenure track positions are two issues that threaten to change the longstanding patronage system in academia. Understanding how institutional changes within academia may affect the overall potential of science requires a better quantitative understanding of how careers evolve over time. Since knowledge spillovers, cumulative advantage, and collaboration are distinctive features of the academic profession, the employment relationship should be designed to account for these factors. We quantify the impact of these factors in the production n_i(t) of a given scientist i by analyzing the longitudinal career data of 300 scientists and compare our results with 21,156 sports careers comprising a non-academic labor force. The increase in the typical size of scientific collaborations has led to the increasingly difficult task of allocating funding and assigning recognition. 
We use measures of the scientific collaboration radius, which can change dramatically over the course of a career, to provide insight into the role of collaboration in production efficiency. We introduce a model of proportional growth to provide insight into the complex relation between knowledge spillovers, competition, and uncertainty at the individual scale. Our model shows that high competition levels can make careers vulnerable to “sudden death” termination relatively early in the career as a result of negative production fluctuations and not necessarily due to lack of individual persistence.Alexander M. Petersenalexander.petersen@imtlucca.itMassimo Riccabonimassimo.riccaboni@imtlucca.itH. Eugene StanleyFabio Pammollif.pammolli@imtlucca.it2012-01-09T14:07:35Z2014-07-01T13:34:46Zhttp://eprints.imtlucca.it/id/eprint/1051This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/10512012-01-09T14:07:35ZStabilizing model predictive control of stochastic constrained linear systemsThis paper investigates stochastic stabilization procedures based on quadratic and piecewise linear Lyapunov functions for discrete-time linear systems affected by multiplicative disturbances and subject to linear constraints on inputs and states. A stochastic model predictive control (SMPC) design approach is proposed to optimize closed-loop performance while enforcing constraints. Conditions for stochastic convergence and robust constraint fulfillment of the closed-loop system are enforced by solving linear matrix inequality problems offline. Performance is optimized online using multi-stage stochastic optimization based on enumeration of scenarios, which amounts to solving a quadratic program subject to either quadratic or linear constraints. In the latter case, an explicit form is computable to ease the implementation of the proposed SMPC law.
The approach can deal with a very general class of stochastic disturbance processes with discrete probability distribution. The effectiveness of the proposed SMPC formulation is shown on a numerical example and compared to traditional MPC schemes.Daniele Bernardinidaniele.bernardini@imtlucca.itAlberto Bemporadalberto.bemporad@imtlucca.it2012-01-09T13:58:30Z2012-01-09T13:58:30Zhttp://eprints.imtlucca.it/id/eprint/1050This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/10502012-01-09T13:58:30ZHierarchical and decentralised model predictive control of drinking water networks: application to Barcelona case studyA hierarchical and decentralised model predictive control (DMPC) strategy for drinking water networks (DWN) is proposed. The DWN is partitioned into a set of subnetworks using a partitioning algorithm that makes use of the topology of the network, historic information about the actuator usage and heuristics. A suboptimal DMPC strategy was derived, which consists of a set of MPC controllers, whose prediction model is a plant partition, where each element solves its control problem in a hierarchical order. A comparative simulation study between centralised MPC (CMPC) and DMPC approaches is developed using a case study, which consists of an aggregate version of the Barcelona DWN. Results have shown the effectiveness of the proposed DMPC approach in terms of the scalability of computations with an admissible loss of performance in all the considered scenarios.
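The receding-horizon principle shared by the MPC entries above can be sketched in a few lines; the scalar tank model, the gridded input set and the setpoint below are invented, and the brute-force search over candidate inputs merely stands in for the quadratic programs solved in the actual controllers:

```python
def simulate(x, u):
    return 0.8 * x + u  # toy scalar tank dynamics (invented)

def mpc_step(x, setpoint, horizon=5):
    # Evaluate each candidate input held constant over the horizon
    # (a crude form of move blocking) and keep the cheapest one.
    candidates = [i / 10 - 1.0 for i in range(21)]  # u constrained to [-1, 1]
    best_u, best_cost = None, float("inf")
    for u in candidates:
        xk, cost = x, 0.0
        for _ in range(horizon):
            xk = simulate(xk, u)
            cost += (xk - setpoint) ** 2  # quadratic tracking cost
        if cost < best_cost:
            best_u, best_cost = u, cost
    return best_u

x, setpoint = 0.0, 3.0
for _ in range(30):
    # closed loop: apply only the first planned move, then re-plan
    x = simulate(x, mpc_step(x, setpoint))
print(round(x, 2))
```

The input constraint is respected by construction, since only admissible candidates are searched; this is the constraint-handling feature the abstracts credit MPC with.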
Carlos Ocampo-MartinezDavide BarcelliVicenç PuigAlberto Bemporadalberto.bemporad@imtlucca.it2012-01-09T11:57:49Z2016-07-13T10:50:07Zhttp://eprints.imtlucca.it/id/eprint/1049This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/10492012-01-09T11:57:49ZCounterpart Semantics for a Second-Order mu-CalculusQuantified μ-calculi combine the fix-point and modal operators of temporal logics with (existential and universal) quantifiers, and they allow for reasoning about the possible behaviour of individual components within a software system. In this paper we introduce a novel approach to the semantics of such calculi: we consider a sort of labeled transition systems called counterpart models as semantic domain, where states are algebras and transitions are defined by counterpart relations (a family of partial homomorphisms) between states. Then, formulae are interpreted over sets of state assignments (families of partial substitutions, associating formula variables to state components). Our proposal allows us to model and reason about the creation and deletion of components, as well as the merging of components. Moreover, it avoids the limitations of existing approaches, usually enforcing restrictions of the transition relation: the resulting semantics is a streamlined and intuitively appealing one, yet it is general enough to cover most of the alternative proposals we are aware of. The paper is rounded up with some considerations about expressiveness and decidability aspects.Fabio GadducciAlberto Lluch-Lafuentealberto.lluch@imtlucca.itAndrea Vandinandrea.vandin@imtlucca.it2011-12-22T09:22:41Z2016-07-13T10:49:38Zhttp://eprints.imtlucca.it/id/eprint/1048This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/10482011-12-22T09:22:41ZModelling and analyzing adaptive self-assembling strategies with MaudeBuilding adaptive systems with predictable emergent behavior is a challenging task and it is becoming a critical need. 
The research community has accepted the challenge by introducing approaches of various kinds: from software architectures, to programming paradigms, to analysis techniques. We recently proposed a conceptual framework for adaptation centered around the role of control data. In this paper we show that it can be naturally realized in a reflective logical language like Maude by using the Reflective Russian Dolls model. Moreover, we exploit this model to specify and analyse a prominent example of adaptive system: robot swarms equipped with obstacle-avoidance self-assembly strategies. The analysis exploits the statistical model checker PVesta.Roberto BruniAndrea CorradiniFabio GadducciAlberto Lluch-Lafuentealberto.lluch@imtlucca.itAndrea Vandinandrea.vandin@imtlucca.it2011-11-16T11:52:29Z2012-05-09T10:30:57Zhttp://eprints.imtlucca.it/id/eprint/1005This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/10052011-11-16T11:52:29ZStability analysis of stochastic networked control systemsIn this paper, we study the stability of Networked Control Systems (NCSs) that are subject to time-varying transmission intervals, time-varying transmission delays, packet dropouts and communication constraints. The transmission intervals and transmission delays are described by a sequence of continuous random variables. The complexity that the continuous character of these random variables introduces is overcome using a novel convex overapproximation technique that preserves the available probabilistic information. By focusing on linear plants and controllers, we present a modelling framework for NCSs based on discrete-time linear switched and parameter-varying systems. Stability (in the mean-square) of these systems is analysed using a new stochastic computational technique, resulting in a finite number of linear matrix inequalities. We illustrate the developed theory on the benchmark example of a batch reactor.M.C.F. DonkersW.P.M.H.
HeemelsDaniele Bernardinidaniele.bernardini@imtlucca.itAlberto Bemporadalberto.bemporad@imtlucca.itVsevolod Shneer2011-07-28T09:52:57Z2012-04-03T07:18:16Zhttp://eprints.imtlucca.it/id/eprint/729This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/7292011-07-28T09:52:57ZModel predictive idle speed control: design, analysis, and experimental evaluationIdle speed control is a landmark application of feedback control in automotive vehicles that continues to be of significant interest to automotive industry practitioners, since improved idle performance and robustness translate into better fuel economy, emissions and drivability. In this paper, we develop a model predictive control (MPC) strategy for regulating the engine speed to the idle speed set-point by actuating the electronic throttle and the spark timing. The MPC controller coordinates the two actuators according to a specified cost function, while explicitly taking into account constraints on the control and requirements on the acceptable engine speed range, e.g., to avoid engine stalls. Following a process proposed here for the implementation of MPC in automotive applications, an MPC controller is obtained with excellent performance and robustness as demonstrated in actual vehicle tests. In particular, the MPC controller performs better than an existing baseline controller in the vehicle, is robust to changes in operating conditions, and to different types of disturbances. 
It is also shown that the MPC computational complexity is well within the capability of production electronic control unit and that the improved performance achieved by the MPC controller can translate into fuel economy improvements.Stefano Di CairanoDiana YanakievAlberto Bemporadalberto.bemporad@imtlucca.itIlya KolmanovskyDavor Hrovat2011-07-27T12:52:01Z2012-01-09T13:41:47Zhttp://eprints.imtlucca.it/id/eprint/726This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/7262011-07-27T12:52:01ZEnergy-aware robust model predictive control based on noisy wireless sensors
Wireless sensor networks (WSNs) are becoming fundamental components of modern control systems due to their flexibility, ease of deployment and low cost. However, the energy-constrained nature of WSNs poses new issues in control design; in particular, the discharge of the batteries of sensor nodes, which is mainly due to radio communications, must be taken into account. In this paper we present a novel transmission strategy for communication between controller and sensors which is intended to minimize the data exchange over the wireless channel. Moreover, we propose an energy-aware control technique for constrained linear systems based on explicit model predictive control (MPC), providing closed-loop stability in the presence of disturbances. The presented control schemes are compared to traditional MPC techniques. The results show the effectiveness of the proposed energy-aware approach, which achieves a profitable trade-off between energy savings and closed-loop performance.
Authors: Daniele Bernardini (daniele.bernardini@imtlucca.it), Alberto Bemporad (alberto.bemporad@imtlucca.it)

À la recherche d’une politique européenne alternative : le sénateur Michel Debré et ses interlocuteurs britanniques, 1948-1958
URL: http://eprints.imtlucca.it/id/eprint/192
Deposited: 2011-03-23
During the Fourth Republic, the political reflection and activity of Michel Debré, who sat in the Council of the Republic from 1948 to 1958, focused above all on the question of Europe and its construction. Although convinced of the need to organize Europe, Debré progressively distanced himself from European construction as it was actually taking shape, and ended up fiercely opposing the treaties proposed by Western diplomacies. Since, in his view, France no longer had a foreign policy, and since what could influence French policy was pressure from, and above all the attitude of, England, he tried, with some success, to establish a solid dialogue with members of Her Majesty's Government (first and foremost Duncan Sandys and Julian Amery) in order to bring it to modify its European positions and to influence France's foreign policy.
Author: Lucia Bonfreschi (lucia.bonfreschi@imtlucca.it)

Il néo-libéralisme del partito gollista tra il 1981 e il 1986: tra strategia del leader e “normalizzazione” del gollismo
URL: http://eprints.imtlucca.it/id/eprint/190
Deposited: 2011-03-10
Author: Lucia Bonfreschi (lucia.bonfreschi@imtlucca.it)

A Presheaf Environment for the Explicit Fusion Calculus
URL: http://eprints.imtlucca.it/id/eprint/174
Deposited: 2011-03-07
Name-passing calculi are nowadays among the preferred formalisms for the specification of concurrent and distributed systems with a dynamically evolving topology. Despite their widespread adoption as a theoretical tool, though, they still face some unresolved semantic issues, since the standard operational, denotational and logical methods have often proved inadequate for reasoning about these formalisms. A domain which has been successfully employed for languages with asymmetric communication, like the π-calculus, is that of presheaf categories based on (injective) relabellings, such as Set^I. Calculi with symmetric binding, in the spirit of the fusion calculus, give rise to novel research challenges. In this work we examine the explicit fusion calculus, and propose to model its syntax and semantics using the presheaf category Set^E, where E is the category of equivalence relations and equivalence-preserving morphisms.
Authors: Filippo Bonchi, Maria Grazia Buscemi (m.buscemi@imtlucca.it), Vincenzo Ciancia, Fabio Gadducci

La conquista di Felipe González nel PSOE
URL: http://eprints.imtlucca.it/id/eprint/108
Deposited: 2011-02-25
The article deals with the rise of Felipe González within the Spanish Socialist Party (PSOE). It describes his path and role at the beginning of his political activism, when he belonged to the Sevillian federation, and how, from the mid-1960s to the mid-1970s, he came to be followed and supported by several other regional federations. It then examines his most difficult relationships inside the party, with the Basque and Madrid federations: these essentially wanted the party to remain limited to the membership and electorate of reference it had gathered during the Civil War, whereas González favoured enlarging the party's base, in both membership and electorate, defending the shift from a blue-collar to a white-collar constituency. Finally, the article cites the most important speeches the future leader delivered at the party's various congresses, and analyses in detail the congress that elected him, and the one that confirmed him, as secretary general, identified as the turning point in the recent history of the Spanish Socialist Party.
Author: Maria Elena Cavallaro (m.cavallaro@imtlucca.it)