IMT Institutional Repository: No conditions. Results ordered by Date Deposited (descending).
2019-06-17T21:23:10Z
EPrints
http://eprints.imtlucca.it/images/logowhite.png
http://eprints.imtlucca.it/
2018-01-24T12:07:11Z
2018-01-24T12:07:11Z
http://eprints.imtlucca.it/id/eprint/3883
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3883
2018-01-24T12:07:11Z
On the rationalizability of observed consumers’ choices when preferences depend on budget sets and (potentially) on anything else
We prove that defining consumers’ preferences over budget sets is both necessary and sufficient to make every fully informative and finite set of observed consumption choices rationalizable by a collection of preferences which are transitive, complete, and monotone with respect to own consumption. Our finding has two important theoretical consequences. First, assuming that preferences depend on budget sets is illegitimate under the scientific commitments of revealed preference theory. Second, as long as consumers’ preferences are not defined over budget sets, we can assume that preferences depend on observable objects other than own consumption without compromising the logical possibility of rejecting the model against observation. We point out, however, that despite this logical possibility, in practice it can be almost impossible to reject a model where preferences are defined over objects that depend on budget sets. As an example, we show that if preferences are defined over the consumption choices of other individuals, then rationalization fails only in cases of negligible practical interest.
Ennio Bilancini
ennio.bilancini@imtlucca.it
2018-01-24T12:03:32Z
2018-01-24T12:03:32Z
http://eprints.imtlucca.it/id/eprint/3882
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3882
2018-01-24T12:03:32Z
Agricultural institutions, industrialization and growth: The case of New Zealand and Uruguay in 1870–1940
In this paper we apply a model of early industrialization to the case of New Zealand and Uruguay in 1870–1940. We show how differences in agricultural institutions may have produced different development paths in two countries which were similar in many respects. While in New Zealand the active role of the Crown in regulating the land market facilitated access to land, in Uruguay land was seized by a small group of large landowners. Our model shows that land concentration may have negatively influenced industrialization and growth by impeding the formation of a large group of middle-income landowners and, as a consequence, the development of a domestic demand for basic manufactures. We support this view with a comparative analysis of agricultural institutions and industrial development in New Zealand and Uruguay.
Jorge Álvarez
Ennio Bilancini
ennio.bilancini@imtlucca.it
Simone D'Alessandro
Gabriel Porcile
2017-09-29T08:56:06Z
2017-09-29T08:56:06Z
http://eprints.imtlucca.it/id/eprint/3819
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3819
2017-09-29T08:56:06Z
Net energy return (EROEI) of Thermal Barrier Coatings in turbine engines
Claudia Borri
claudia.borri@imtlucca.it
Ugo Bardi
Carlo Giolli
Andrea Giorgetti
S. Meneghetti
J. Nocivelli
A. Scrivani
2016-03-22T09:46:58Z
2016-03-22T09:46:58Z
http://eprints.imtlucca.it/id/eprint/3284
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3284
2016-03-22T09:46:58Z
Applicazione dell'analisi reologica allo sviluppo di nuovi sistemi polimerici iniettabili e termoreversibili per l'ingegnerizzazione del tessuto cardiaco
Caterina Cristallini
Mariacristina Gagliardi
mariacristina.gagliardi@imtlucca.it
2016-03-22T09:44:23Z
2016-04-06T07:46:02Z
http://eprints.imtlucca.it/id/eprint/3283
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3283
2016-03-22T09:44:23Z
Release of anti-restenotic drugs from macromolecular materials useful for covering vascular stents
Giulio D. Guerra
Caterina Cristallini
Niccoletta Barbani
Mariacristina Gagliardi
mariacristina.gagliardi@imtlucca.it
2016-03-22T09:42:54Z
2016-04-06T07:45:29Z
http://eprints.imtlucca.it/id/eprint/3282
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3282
2016-03-22T09:42:54Z
New bioartificial microstructures in combination with molecularly imprinted nanoparticles for the treatment of myocardial infarction
Caterina Cristallini
Mariacristina Gagliardi
mariacristina.gagliardi@imtlucca.it
Giulio D. Guerra
S. Russo Fiorillo
Niccoletta Barbani
2016-03-22T09:39:22Z
2016-03-22T09:39:22Z
http://eprints.imtlucca.it/id/eprint/3281
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3281
2016-03-22T09:39:22Z
Novel biodegradable, biomimetic and functionalised polymer scaffolds to prevent expansion of post-infarct left ventricular remodelling
Caterina Cristallini
Mariacristina Gagliardi
mariacristina.gagliardi@imtlucca.it
Niccoletta Barbani
S. Russo Fiorillo
A. Bonaretti
2016-03-22T09:36:13Z
2016-03-22T09:37:25Z
http://eprints.imtlucca.it/id/eprint/3280
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3280
2016-03-22T09:36:13Z
Experimental and computational study of the dual drug release from polymeric stent coatings
Mariacristina Gagliardi
mariacristina.gagliardi@imtlucca.it
L. Schiavello
Caterina Cristallini
2016-03-21T10:55:23Z
2016-03-21T10:55:23Z
http://eprints.imtlucca.it/id/eprint/3259
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3259
2016-03-21T10:55:23Z
Computational models for the in silico analysis of drug delivery from drug-eluting stents
Stents are tubular meshed devices implanted to restore the patency of a vessel occluded by an atherosclerotic plaque. These devices were introduced into clinical practice in the 1980s. The first stent implanted in a human coronary artery was the Wallstent [1], a self-expandable metallic device. The use of a stent to expand the vessel was introduced to overcome the main limitation of angioplasty, the elastic recoil of the vessel wall, yet it also caused the onset of a different pathology: intra-stent restenosis. This pathology results from injuries to the vessel wall after balloon inflation, as well as from the different fluid-dynamic regime established after stent implantation [2]. Intra-stent restenosis is caused by the abnormal growth of tissue within the stent meshes, leading to implant failure.
The common therapeutic approach to limit hyperplasia is the systemic administration of antimitotic and anti-inflammatory drugs. This treatment generally fails because effective dosing levels have toxic effects on patients. Since 2000, a new class of stents, the drug-eluting stent (DES), has been introduced to address this problem: devices loaded with one or more active principles for the local administration of the drug, avoiding the systemic administration of massive doses. DES are metallic devices impregnated with a drug on their surface or coated with a thin polymeric layer containing the active principle.
Mariacristina Gagliardi
mariacristina.gagliardi@imtlucca.it
2016-03-21T10:45:55Z
2016-04-06T07:44:40Z
http://eprints.imtlucca.it/id/eprint/3257
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3257
2016-03-21T10:45:55Z
Controlled Release of the Anti-cancer Drug Paclitaxel from Bioresorbable Poly(ester-ether-ester) Microspheres
The release of the anti-cancer drug paclitaxel (PTX) from microspheres of the bioresorbable poly(ε-caprolactone-oxyethylene-ε-caprolactone) tri-block copolymer was studied. The microspheres, both loaded with PTX and unloaded, were prepared by the emulsion-evaporation technique, then characterized by SEM, AFM, total reflection and spotlight FT-IR spectroscopy, and DSC. The quantities of PTX released were measured by HPLC. The results showed a slow and very regular release, which fits very well the Peppas equation, Mt/M∞ = k · t^n, where Mt is the amount of solute released at time t, M∞ is the amount of drug released at the plateau condition, k is the Peppas kinetic constant and n the diffusion order.
Giulio D. Guerra
Mariacristina Gagliardi
mariacristina.gagliardi@imtlucca.it
Niccoletta Barbani
Caterina Cristallini
2016-03-21T09:59:36Z
2016-04-06T07:42:12Z
http://eprints.imtlucca.it/id/eprint/3256
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3256
2016-03-21T09:59:36Z
Bioresorbable microspheres as devices for the controlled release of paclitaxel
The release of the anti-cancer drug paclitaxel (PTX) from microspheres of both a bioresorbable poly(ε-caprolactone-oxyethylene-ε-caprolactone) tri-block copolymer and of polyurethanes containing, as soft segments, either copolymers with the same composition and different molecular weights or poly(ε-caprolactone) diol was studied. The microspheres, both loaded with PTX and unloaded, were prepared by the emulsion-evaporation technique, then characterized by SEM and DSC. The quantities of PTX released were measured by HPLC. The results showed slow and very regular releases, which fit very well the Peppas equation, Mt/M∞ = k · t^n, where Mt is the amount of solute released at time t, M∞ is the amount of drug released at the plateau condition, k is the Peppas kinetic constant and n the diffusion order. Most n values are consistent with non-Fickian release mechanisms, with the exception of two less hydrophilic polyurethanes.
Giulio D. Guerra
Caterina Cristallini
Niccoletta Barbani
Mariacristina Gagliardi
mariacristina.gagliardi@imtlucca.it
2016-03-21T09:53:21Z
2016-04-06T07:42:52Z
http://eprints.imtlucca.it/id/eprint/3255
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3255
2016-03-21T09:53:21Z
Chitosan-Based Macromolecular Biomaterials for the Regeneration of Chondroskeletal and Nerve Tissue
The use of materials containing the biocompatible and bioresorbable biopolymer poly(β-(1→4))-2-amino-2-deoxy-β-D-glucan, containing some N-acetyl-glucosamine units (chitosan, CHI), and/or its derivatives, to fabricate devices for the regeneration of bone, cartilage and nerve tissue was reviewed. The CHI-containing devices for bone and cartilage regeneration and healing were tested mainly for in vitro cell adhesion and proliferation and for insertion into animals; only the use of CHI in dental surgery has reached clinical application. Regarding nerve tissue, only one surgical repair has been reported: a 35 mm-long defect in the median nerve of the right arm at elbow level, repaired with an artificial nerve graft comprising an outer microporous conduit of CHI and internal oriented filaments of poly(glycolic acid). Consequently, although many positive results have been obtained, much work remains to be done, especially to move CHI-based devices from in vitro and animal experimentation to clinical application.
Giulio D. Guerra
Niccoletta Barbani
Mariacristina Gagliardi
mariacristina.gagliardi@imtlucca.it
Elisabetta Rosellini
Caterina Cristallini
2016-03-21T09:48:43Z
2016-03-21T09:48:43Z
http://eprints.imtlucca.it/id/eprint/3254
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3254
2016-03-21T09:48:43Z
Novel biodegradable, biomimetic and functionalised polymer scaffolds to prevent expansion of post-infarct left ventricular remodelling
Over the past decade, a large number of strategies and technologies have been developed to reduce heart failure progression. Among these, cardiac tissue engineering is one of the most promising. The aim of this study is to develop a 3D scaffold to treat cardiac failure. A new tri-block copolymer, obtained from δ-valerolactone and polyoxyethylene, was synthesised under high vacuum without catalyst. Copolymer/gelatine blends were microfabricated to obtain an ECM-like geometry. The structures were studied from morphological, mechanical, degradation and biological standpoints. To prevent left ventricular remodelling, the constructs were biofunctionalised with nanoparticles molecularly imprinted against the matrix metalloproteinase MMP-9. Results showed that the materials are able to reproduce the ECM structure with high resolution, the mechanical properties were on the order of MPa, similar to those of the native myocardium, and cell viability was verified. The nanoparticles showed the capability to rebind MMP-9 (specific rebinding 18.67) and to be permanently immobilised on the scaffold surface.
Caterina Cristallini
Mariacristina Gagliardi
mariacristina.gagliardi@imtlucca.it
Niccoletta Barbani
Daniela Giannessi
Giulio D. Guerra
2016-03-15T08:52:08Z
2016-03-18T11:01:35Z
http://eprints.imtlucca.it/id/eprint/3232
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3232
2016-03-15T08:52:08Z
Contribution of Tore Supra in preparation of ITER
Tore Supra routinely addresses the physics and technology of very long-duration plasma discharges, thus providing valuable information on critical issues of long pulse operation of ITER. A new ITER-relevant lower hybrid current drive (LHCD) launcher has allowed coupling to the plasma a power level of 2.7 MW for 78 s, corresponding to a power density close to the design value foreseen for an ITER LHCD system. In accordance with expectations, long distance (10 cm) power coupling has been obtained. Successive stationary states of the plasma current profile have been controlled in real time, featuring (i) control of sawteeth with varying plasma parameters, (ii) obtaining and sustaining a 'hot core' plasma regime, (iii) recovery from a voluntarily triggered deleterious magnetohydrodynamic regime. The scrape-off layer (SOL) parameters and power deposition have been documented during the L-mode ramp-up phase, a crucial point for ITER before the X-point formation. Disruption mitigation studies have been conducted with massive gas injection, evidencing the difference between He and Ar and the possible role of the q = 2 surface in limiting the gas penetration. ICRF-assisted wall conditioning in the presence of magnetic field has been investigated, culminating in the demonstration that this conditioning scheme allows one to recover normal operation after disruptions. The effect of the magnetic field ripple on the intrinsic plasma rotation has been studied, showing the competition between turbulent transport processes and ripple toroidal friction. During dedicated dimensionless experiments, the effect of varying the collisionality on turbulence wavenumber spectra has been documented, giving new insight into the turbulence mechanism.
Turbulence measurements have also allowed quantitative comparison of experimental results with predictions from 5D gyrokinetic codes: numerical results simultaneously match the magnitude of effective heat diffusivity, the rms values of density fluctuations and the wavenumber spectra. A clear correlation between the electron temperature gradient and impurity transport in the very core of the plasma has been observed, strongly suggesting the existence of a threshold above which transport is dominated by turbulent electron modes. The dynamics of edge turbulent fluctuations have been studied by correlating data from fast imaging cameras and Langmuir probes, yielding a coherent picture of the transport processes involved in the SOL. Corrections were made to this article on 6 January 2012. Some of the letters in the text were missing.
B. Saoutic
J. Abiteboul
L. Allegretti
S. Allfrey
J.M. Ané
T. Aniel
A. Argouarch
J.F. Artaud
M.H. Aumenier
S. Balme
V. Basiuk
O. Baulaigue
P. Bayetti
A. Bécoulet
M. Bécoulet
M.S. Benkadda
F. Benoit
G. Berger-by
J.M. Bernard
B. Bertrand
P. Beyer
A. Bigand
J. Blum
D. Boilson
G. Bonhomme
H. Bottollier-Curtet
C. Bouchand
F. Bouquey
C. Bourdelle
S. Bourmaud
C. Brault
S. Brémond
C. Brosset
J. Bucalossi
Y. Buravand
P. Cara
V. Catherine-Dumont
A. Casati
M. Chantant
M. Chatelier
G. Chevet
D. Ciazynski
G. Ciraolo
F. Clairet
M. Coatanea-Gouachet
L. Colas
L. Commin
E. Corbel
Y. Corre
X. Courtois
R. Dachicourt
M. Dapena Febrer
M. Davi Joanny
R. Daviot
H. De Esch
Joan Decker
P. Decool
P. Delaporte
E. Delchambre
E. Delmas
L. Delpech
C. Desgranges
P. Devynck
T. Dittmar
L. Doceul
D. Douai
H. Dougnac
J.L. Duchateau
B. Dugué
N. Dumas
R. Dumont
A. Durocher
F.X. Duthoit
A. Ekedahl
D. Elbeze
M. El Khaldi
F. Escourbiac
F. Faisse
G. Falchetto
M. Farge
J.L. Farjon
M. Faury
N. Fedorczak
C. Fenzi-Bonizec
M. Firdaouss
Y. Frauel
X. Garbet
J. Garcia
J.L. Gardarein
L. Gargiulo
P. Garibaldi
E. Gauthier
O. Gaye
A. Géraud
M. Geynet
P. Ghendrih
I. Giacalone
S. Gibert
C. Gil
G. Giruzzi
M. Goniche
V. Grandgirard
C. Grisolia
G. Gros
A. Grosman
R. Guigon
D. Guilhem
B. Guillerminet
R. Guirlet
J. Gunn
O. Gurcan
S. Hacquin
J.C. Hatchressian
P. Hennequin
C. Hernandez
P. Hertout
S. Heuraux
J. Hillairet
G.T. Hoang
C. Honore
M. Houry
T. Hutter
P. Huynh
G. Huysmans
F. Imbeaux
E. Joffrin
J. Johner
L. Jourd'Heuil
Y.S. Katharria
D. Keller
S.H. Kim
M. Kocan
M. Kubic
B. Lacroix
V. Lamaison
G. Latu
Y. Lausenaz
C. Laviron
F. Leroux
L. Letellier
M. Lipa
X. Litaudon
T. Loarer
P. Lotte
S. Madeleine
P. Magaud
P. Maget
R. Magne
L. Manenc
Y. Marandet
G. Marbach
J.L. Maréchal
L. Marfisi
C. Martin
G. Martin
V. Martin
A. Martinez
J.P. Martins
R. Masset
D. Mazon
N. Mellet
L. Mercadier
A. Merle
D. Meshcheriakov
O. Meyer
L. Million
M. Missirlian
P. Mollard
V. Moncada
P. Monier-Garbet
D. Moreau
P. Moreau
Lorenzo Morini
lorenzo.morini@imtlucca.it
M. Nannini
M. Naiim Habib
E. Nardon
H. Nehme
C. Nguyen
S. Nicollet
R. Nouilletas
T. Ohsako
M. Ottaviani
S. Pamela
H. Parrat
P. Pastor
A.L. Pecquet
B. Pégourié
Y. Peysson
I. Porchy
C. Portafaix
M. Preynas
M. Prou
J.M. Raharijaona
N. Ravenel
C. Reux
P. Reynaud
M. Richou
H. Roche
P. Roubin
R. Sabot
F. Saint-Laurent
S. Salasca
F. Samaille
A. Santagiustina
Y. Sarazin
A. Semerok
J. Schlosser
M. Schneider
M. Schubert
F. Schwander
J.L. Ségui
G. Selig
P. Sharma
J. Signoret
A. Simonin
S. Song
E. Sonnendruker
F. Sourbier
P. Spuig
P. Tamain
M. Tena
J.M. Theis
D. Thouvenin
A. Torre
J.M. Travère
E. Tsitrone
J.C. Vallet
E. Van Der Plas
A. Vatry
J.M. Verger
L. Vermare
F. Villecroze
D. Villegas
R. Volpe
K. Vulliez
J. Wagrez
T. Wauters
L. Zani
D. Zarzoso
X.L. Zou
2016-03-15T08:47:07Z
2016-03-15T08:47:07Z
http://eprints.imtlucca.it/id/eprint/3231
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3231
2016-03-15T08:47:07Z
RF current drive and plasma fluctuations
The role played by electron density fluctuations near the plasma edge on rf current drive in tokamaks is assessed quantitatively. For this purpose, a general framework for incorporating density fluctuations in existing modelling tools has been developed. It is valid when rf power absorption takes place far from the fluctuating region of the plasma. The ray-tracing formalism is modified in order to take into account time-dependent perturbations of the density, while the Fokker–Planck solver remains unchanged. The evolution of the electron distribution function in time and space under the competing effects of collisions and quasilinear diffusion by rf waves is determined consistently with the time scale of fluctuations described as a statistical process. Using the ray-tracing code C3PO and the 3D linearized relativistic bounce-averaged Fokker–Planck solver LUKE, the effect of electron density fluctuations on the current driven by the lower hybrid (LH) and the electron cyclotron (EC) waves is estimated quantitatively. A thin fluctuating layer characterized by electron drift wave turbulence at the plasma edge is considered. The effect of fluctuations on the LH wave propagation is equivalent to a random scattering process with a broadening of the poloidal mode spectrum proportional to the level of the perturbation. However, in the multipass regime, the LH current density profile remains sensitive to the ray chaotic behaviour, which is not averaged by fluctuations. The effect of large amplitude fluctuations on the EC driven current is found to be similar to an anomalous radial transport of the fast electrons. The resulting lower current drive efficiency and broader current profile are in
Yves Peysson
Joan Decker
Lorenzo Morini
lorenzo.morini@imtlucca.it
S. Coda
2016-01-20T09:41:10Z
2016-01-20T09:41:10Z
http://eprints.imtlucca.it/id/eprint/3023
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3023
2016-01-20T09:41:10Z
X3DMMS: An X3DOM Tool for Molecular and Material Sciences
We present a virtual reality environment, based on X3DOM technologies, aimed at enabling researchers in the molecular and material sciences to set up the initial conditions of a simulation to be performed with the Dl-Poly software, through a virtual environment implemented in X3D. After completing the definition of the molecular system to be studied in a very intuitive and user-friendly way, the user can write out the Dl-Poly input files. In this way the crucial phase of the initial set-up of the simulation is simplified and can be performed in a short time.
Although some technological drawbacks have been experienced in the current X3DOM implementation, we are confident that this approach, which definitively solves the "traditional" issues of compatibility among different web browsers (plugins) and operating systems, represents a highway for the diffusion of X3D technologies in several application fields.
Fabiana Zollo
fabiana.zollo@imtlucca.it
Luca Caprini
Osvaldo Gervasi
Alessandro Costantini
2016-01-15T11:25:45Z
2016-09-14T10:21:17Z
http://eprints.imtlucca.it/id/eprint/3015
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3015
2016-01-15T11:25:45Z
The new Italian public law scholarship
Lorenzo Casini
lorenzo.casini@imtlucca.it
Sabino Cassese
Giulio Napolitano
2016-01-15T11:21:07Z
2016-09-14T10:21:17Z
http://eprints.imtlucca.it/id/eprint/3014
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3014
2016-01-15T11:21:07Z
Equalisation and compensation mechanisms in the new Rome urban development plan
The adoption of equalisation and compensation mechanisms within the urban plan for Rome proved to be quite controversial. The regional law contains almost no provisions governing such planning tools, and the regulations set out by the plan introduce a somewhat complex system. The present paper, after a brief presentation of the history of land ownership and development within the city, will focus on the main features of the equalisation and compensation practices which aim at governing the distribution of development rights among landowners and developers.
Lorenzo Casini
lorenzo.casini@imtlucca.it
2016-01-15T11:14:57Z
2016-09-14T10:21:17Z
http://eprints.imtlucca.it/id/eprint/3013
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3013
2016-01-15T11:14:57Z
The Making of a Lex Sportiva by the Court of Arbitration for Sport
The purpose of this paper is to examine the structure and functions of the Court of Arbitration for Sport (CAS), in order to highlight a number of problems concerning judicial activities at the global level more generally. Section 1 will outline CAS’ organization and functions, from its inception to the present date. In particular, this section will show how the history of the CAS is reminiscent of a famous German novel based on a biblical saga, “Joseph and his brothers” by Thomas Mann. Put briefly, the CAS was originally the “favourite son” of the Olympic movement’s founding fathers; it subsequently became the target of its envious “brothers” - i.e. the International Federations and other sporting arbitration institutions - which viewed the CAS as a dangerous enemy; ultimately, the CAS defeated its opponents, gained independence and brought normative harmonization, thereby becoming “the Nourisher” (Der Ernährer) of global sports law. Section 2 will focus on the role of CAS in making a lex sportiva, and it will take into account three different functions: the development of common legal principles; the interpretation of global norms and the influence on sports law-making; and the harmonization of global sports law. Section 3 will consider the relationships between the CAS and public authorities (both public administrations and domestic courts), in order to verify the extent to which the CAS and its judicial system are self-contained and autonomous from States. Lastly, section 4 will address the importance of creating bodies like CAS in the global arena, and it will identify the main challenges raised by this form of transnational judicial activity. The analysis of CAS and its role as law-maker, in fact, allows us to shed light on broader global governance trends affecting areas such as the institutional design of global regimes, with specific regard to the separation of powers and the emergence of judicial activities.
Lorenzo Casini
lorenzo.casini@imtlucca.it
2016-01-15T10:02:01Z
2016-09-14T10:21:17Z
http://eprints.imtlucca.it/id/eprint/3011
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3011
2016-01-15T10:02:01Z
“Italian Hours”: the globalization of cultural property law
Nothing in Rome helps your fancy to a more vigorous backward flight than to lounge on a sunny day over the railing which guards the great central researches. It “says” more things to you than you can repeat to see the past, the ancient world, as you stand there, bodily turned up with the spade and transformed from an immaterial, inaccessible fact of time into a matter of soils and surfaces. —Henry James, Italian Hours, “A Roman Holiday,” 1909
Cultural property offers a significant yet ambiguous example of the development of global regulatory regimes beyond the State. On the one hand, traditional international law instruments do not seem to ensure an adequate level of protection for cultural heritage; securing such protection requires procedures, norms, and standards produced by global institutions, both public (such as UNESCO) and private (such as the International Council of Museums). On the other hand, a comprehensive global regulatory regime to complement the law of cultural property is still to be achieved. Instead, more regimes are being established, depending on the kind of properties and public interests at stake. Moreover, the huge cultural bias that dominates the debate about cultural property accentuates the “clash of civilizations” that already underlies the debate about global governance. The analysis of the relationship between globalization and cultural property, therefore, sheds light on broader global governance trends and helps highlight the points of weakness and strength in the adoption of administrative law techniques at the global level.
Lorenzo Casini
lorenzo.casini@imtlucca.it
2016-01-15T08:57:43Z
2016-09-14T10:21:17Z
http://eprints.imtlucca.it/id/eprint/3007
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/3007
2016-01-15T08:57:43Z
Programmazione, decisione e localizzazione degli impianti e delle infrastrutture strategiche. Proposte di riforma delle regole e delle procedure
Franco Bassanini
Lorenzo Casini
lorenzo.casini@imtlucca.it
Christian Iaione
2015-12-04T14:16:51Z
2016-09-13T09:53:46Z
http://eprints.imtlucca.it/id/eprint/2969
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2969
2015-12-04T14:16:51Z
The nature of consciousness in the visually-deprived brain
Vision plays a central role in how we represent and interact with the world around us. The primacy of vision is structurally embedded in cortical organization, as about one-third of the cortical surface in primates is involved in visual processes. Consequently, the loss of vision, either at birth or later in life, affects brain organization and the way the world is perceived and acted upon. In this paper, we address a number of issues on the nature of consciousness in people deprived of vision. Do brains from sighted and blind individuals differ, and how? How does the brain of someone who has never had any visual perception form an image of the external world? What is the subjective correlate of activity in the visual cortex of a subject who has never seen in life? More generally, what can we learn about the functional development of the human brain in physiological conditions by studying blindness? We discuss findings from animal research as well as from recent psychophysical and functional brain imaging studies in sighted and blind individuals that shed some new light on the answers to these questions.
Ron Kupers
Pietro Pietrini
pietro.pietrini@imtlucca.it
Emiliano Ricciardi
emiliano.ricciardi@imtlucca.it
Maurice Ptito
2015-12-01T12:42:36Z
2016-09-13T09:53:21Z
http://eprints.imtlucca.it/id/eprint/2942
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2942
2015-12-01T12:42:36Z
Conditional granger causality analysis of fMRI data shows a direct connection from LGN to hMT+ bypassing V1
The human middle temporal complex (hMT+) is devoted to motion perception. To determine whether motion-related neural information may reach hMT+ directly from the thalamus, by-passing the primary visual cortex (V1), we measured effective connectivity in an optic flow fMRI experiment in humans. Conditional Granger Causality analysis was employed to measure direct influences between the lateral geniculate nucleus (LGN) and hMT+, discarding indirect effects mediated by V1. Results indicated the existence of a bilateral alternative pathway for visual motion processing that allows for a direct flow of information from LGN to hMT+. This direct link may play a role in blindsight.
Anna Gaglianese
Mauro Costagli
Giulio Bernardi
Lorenzo Sani
Emiliano Ricciardi
emiliano.ricciardi@imtlucca.it
Pietro Pietrini
pietro.pietrini@imtlucca.it
2015-11-24T12:56:57Z
2016-09-13T09:53:34Z
http://eprints.imtlucca.it/id/eprint/2934
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2934
2015-11-24T12:56:57Z
Functional inhibition of the human middle temporal cortex affects non-visual motion perception: a repetitive transcranial magnetic stimulation study during tactile speed discrimination
The visual motion-responsive middle temporal complex (hMT+) is activated during tactile and aural motion discrimination in both sighted and congenitally blind individuals, suggesting a supramodal organization of this area. Specifically, non-visual motion processing has been found to activate the more anterior portion of the hMT+. In the present study, repetitive transcranial magnetic stimulation (rTMS) was used to determine whether this more anterior portion of hMT+ truly plays a functional role in tactile motion processing. Sixteen blindfolded, young, healthy volunteers were asked to detect changes in the rotation velocity of a random Braille-like dot pattern by using the index or middle finger of their right hand. rTMS was applied for 600 ms (10 Hz, 110% motor threshold), 200 ms after the stimulus onset with a figure-of-eight coil over either the anterior portion of hMT+ or a midline parieto-occipital site (as a control). Accuracy and reaction times were significantly impaired only when TMS was applied on hMT+, but not on the control area. These results indicate that the recruitment of hMT+ is necessary for tactile motion processing, and thus corroborate the hypothesis of a ‘supramodal’ functional organization for this sensory motion processing area.
Emiliano Ricciardi
emiliano.ricciardi@imtlucca.it
Demis Basso
Lorenzo Sani
Daniela Bonino
Tomaso Vecchi
Pietro Pietrini
pietro.pietrini@imtlucca.it
Carlo Miniussi
2015-11-24T12:38:55Z
2016-09-13T09:52:11Z
http://eprints.imtlucca.it/id/eprint/2933
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2933
2015-11-24T12:38:55Z
New light from the dark: what blindness can teach us about brain function
Purpose of review: In this review, we discuss findings from recent brain imaging studies that shed new light on our understanding of the role of visual experience in the development of the brain's morphological and functional architecture in humans. To what extent is vision truly necessary to ‘see’ the world around us?
Recent findings: Congenitally blind and sighted individuals show comparable cognitive and social performance. Findings from structural and functional brain studies in both sighted and congenitally blind individuals have shown the existence of supramodal brain regions able to process external information regardless of the sensory modality through which such information is acquired. This more abstract nature of functional cortical organization may enable congenitally blind individuals to acquire knowledge of, form mental representations of and interact effectively with an external world that they have never seen.
Summary: Altogether, findings from both behavioural and imaging studies indicate that brain functional organization is to a large extent independent of visual experience and able to process information in a supramodal fashion. The study of the blind brain is a very powerful approach to understanding not only the cross-modal plastic, adaptive modifications that occur in the ‘visual’ regions but primarily the functional architecture of the human brain itself.
Emiliano Ricciardi
emiliano.ricciardi@imtlucca.it
Pietro Pietrini
pietro.pietrini@imtlucca.it
2015-11-13T12:16:11Z
2016-09-13T09:52:01Z
http://eprints.imtlucca.it/id/eprint/2896
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2896
2015-11-13T12:16:11Z
Decomposing metaphor processing at the cognitive and neural level through functional magnetic resonance imaging
Prior neuroimaging studies on metaphor comprehension have tended to focus on the role of the right hemisphere, without reaching consensus and leaving aside the functional architecture of this process. The present work aimed to break down metaphor comprehension into its functional components. The study rationale is two-fold: on the one hand, the large-scale network model emerging in cognitive neuroscience led us to consider metaphor as supported by a distributed and bilateral network; on the other hand, we drew on the accounts of figurative language put forward in pragmatics and cognitive science to postulate a decomposition of such a network into multiple sub-systems. During scanning, participants implicitly processed metaphorical (familiar and unfamiliar) and non-metaphorical passages, while being explicitly engaged in an adjective-matching task to be performed after reading the target passages. Several regions showed greater activity for metaphors than for non-metaphors, including the left and right inferior frontal gyrus, right superior temporal gyrus, left angular gyrus, and anterior cingulate. This pattern of activations, markedly bilateral, can be decomposed into circumscribed functional sub-systems mediating different aspects of metaphor resolution, as foreseen in the pragmatic and cognitive literature: (a) the conceptual/pragmatic machinery in the bilateral inferior frontal gyrus and the left angular gyrus, which supports the integration of linguistic material and world knowledge in context; (b) the attentional component in the anterior cingulate and prefrontal areas, which monitors and filters for the relevant aspects of context and for the appropriate meanings; (c) the Theory of Mind system along the right superior temporal sulcus, which deals with the recognition of speakers’ communicative intentions and is more extensively activated by unfamiliar metaphors. The results have several implications for the field of neuropragmatics, especially on the neuropsychological side and for the right hemisphere hypothesis.
Valentina Bambini
Claudio Gentili
Emiliano Ricciardi
emiliano.ricciardi@imtlucca.it
Pier Marco Bertinetto
Pietro Pietrini
pietro.pietrini@imtlucca.it
2015-11-11T16:19:51Z
2016-09-13T09:52:23Z
http://eprints.imtlucca.it/id/eprint/2895
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2895
2015-11-11T16:19:51Z
Distinct neural systems involved in agency and animacy detection
We designed an fMRI experiment comparing the perception of human faces and robotic faces producing emotional expressions. The purpose of our experiment was to investigate the engagement of different parts of the social brain by viewing these animate and inanimate agents. Both human and robotic face expressions evoked activity in face-responsive regions in the fusiform gyrus and STS and in the putative human mirror neuron system. These results suggest that these areas mediate the perception of agency, independently of whether the agents are living or not. By contrast, human faces evoked stronger activity than robotic faces in the medial pFC and the anterior temporal cortex, areas associated with the representation of others' mental states (theory of mind), whereas robotic faces evoked stronger activity in areas associated with the perception of objects and mechanical movements. Our data demonstrate that the representation of the distinction between animate and inanimate agents involves areas that participate in the attribution of mental states.
Maria Ida Gobbini
Claudio Gentili
Emiliano Ricciardi
emiliano.ricciardi@imtlucca.it
Claudia Bellucci
Pericle Salvini
Cecilia Laschi
Mario Guazzelli
Pietro Pietrini
pietro.pietrini@imtlucca.it
2015-11-11T15:46:35Z
2015-11-11T15:46:35Z
http://eprints.imtlucca.it/id/eprint/2894
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2894
2015-11-11T15:46:35Z
From molecules to mind and back…
Pietro Pietrini
pietro.pietrini@imtlucca.it
2015-11-06T11:07:28Z
2015-11-06T11:07:28Z
http://eprints.imtlucca.it/id/eprint/2842
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2842
2015-11-06T11:07:28Z
Temporal Effects in the Growth of Networks
We show that to explain the growth of the citation network by preferential attachment (PA), one has to accept that individual nodes exhibit heterogeneous fitness values that decay with time. While previous PA-based models assumed either heterogeneity or decay in isolation, we propose a simple analytically treatable model that combines these two factors. Depending on the input assumptions, the resulting degree distribution shows an exponential, log-normal or power-law decay, which makes the model an apt candidate for modeling a wide range of real systems.
Matúš Medo
Giulio Cimini
giulio.cimini@imtlucca.it
Stanislao Gualdi
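The growth mechanism described in this abstract lends itself to a short simulation. The sketch below is an illustrative agent-based reading of the model, not the authors' analytical treatment: each new node links preferentially to existing nodes in proportion to degree times fitness, with fitness decaying exponentially in node age. The exponential decay form, the timescale `theta`, and uniform initial fitness are assumptions made for illustration; the paper treats several decay forms analytically.

```python
import math
import random

def grow_network(n_nodes, links_per_node=3, theta=10.0, seed=0):
    """Grow a citation-style network where a new node attaches to
    existing node i with probability proportional to
    degree_i * fitness_i * exp(-age_i / theta)."""
    rng = random.Random(seed)
    degree = [1]              # one seed node; offset 1 avoids zero attractiveness
    fitness = [rng.random()]  # heterogeneous (here: uniform) initial fitness
    birth = [0]
    for t in range(1, n_nodes):
        # attractiveness combines heterogeneity (fitness) and decay (age)
        weights = [degree[i] * fitness[i] * math.exp(-(t - birth[i]) / theta)
                   for i in range(t)]
        targets = rng.choices(range(t), weights=weights,
                              k=min(links_per_node, t))
        for j in targets:     # sampling with replacement: a crude sketch choice
            degree[j] += 1
        degree.append(1)
        fitness.append(rng.random())
        birth.append(t)
    return degree
```

With a large `theta` attachment is dominated by heterogeneous fitness alone; with fast decay, recently created nodes capture most new links. Combining the two factors is what lets the resulting degree distribution interpolate between exponential, log-normal and power-law shapes, as the abstract states.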
2015-11-06T11:04:00Z
2015-11-06T11:04:00Z
http://eprints.imtlucca.it/id/eprint/2841
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2841
2015-11-06T11:04:00Z
Emergence of Scale-Free Leadership Structure in Social Recommender Systems
The study of the organization of social networks is important for the understanding of opinion formation, rumor spreading, and the emergence of trends and fashion. This paper reports an empirical analysis of networks extracted from four leading sites with social functionality (Delicious, Flickr, Twitter and YouTube) and shows that they all display a scale-free leadership structure. To reproduce this feature, we propose an adaptive network model driven by social recommending. Artificial agent-based simulations of this model highlight a “good get richer” mechanism in which users with broad interests and good judgment are likely to become popular leaders for the others. Simulations also indicate that the studied social recommendation mechanism can gradually improve the user experience by adapting to the tastes of its users. Finally, we outline implications for real online resource-sharing systems.
Tao Zhou
Matúš Medo
Giulio Cimini
giulio.cimini@imtlucca.it
Zi-Ke Zhang
Yi-Cheng Zhang
2015-11-06T11:01:14Z
2015-11-06T11:01:14Z
http://eprints.imtlucca.it/id/eprint/2840
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2840
2015-11-06T11:01:14Z
Effective mechanism for social recommendation of news
Recommender systems represent an important tool for news distribution on the Internet. In this work we modify a recently proposed social recommendation model to cope with the absence of explicit user ratings on news items. The model consists of a network of users which continually adapts in order to achieve efficient news traffic. To optimize the network’s topology we propose different stochastic algorithms that are scalable with respect to the network’s size. Agent-based simulations reveal the features and the performance of these algorithms. To overcome the resultant drawbacks of each method we introduce two improved algorithms and show that they can optimize the network’s topology almost as fast and effectively as other non-scalable methods that make use of much more information.
Dong Wei
Tao Zhou
Giulio Cimini
giulio.cimini@imtlucca.it
Pei Wu
Weiping Liu
Yi-Cheng Zhang
2015-11-06T10:56:49Z
2016-04-06T10:36:53Z
http://eprints.imtlucca.it/id/eprint/2839
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2839
2015-11-06T10:56:49Z
Heterogeneity, quality, and reputation in an adaptive recommendation model
Recommender systems help people cope with the problem of information overload. A recently proposed adaptive news recommender model [M. Medo, Y.-C. Zhang, T. Zhou, Europhys. Lett. 88, 38005 (2009)] is based on epidemic-like spreading of news in a social network. By means of agent-based simulations we study a “good get richer” feature of the model and determine which attributes are necessary for a user to play a leading role in the network. We further investigate the filtering efficiency of the model as well as its robustness against malicious and spamming behaviour. We show that incorporating user reputation in the recommendation process can substantially improve the outcome.
Giulio Cimini
giulio.cimini@imtlucca.it
Matúš Medo
Tao Zhou
Dong Wei
Yi-Cheng Zhang
2015-11-06T10:51:58Z
2015-11-06T10:51:58Z
http://eprints.imtlucca.it/id/eprint/2838
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2838
2015-11-06T10:51:58Z
Dynamics of movie competition and popularity spreading in recommender systems
We introduce a simple model to study movie competition in recommender systems. Movies of heterogeneous quality compete against each other through viewers’ reviews and generate interesting dynamics at the box office. By assuming mean-field interactions between the competing movies, we show that the runaway effect of popularity spreading is triggered when a movie surpasses the average review score, leading to box-office hits: popularity rises and peaks before fading out. The average review score thus characterizes the critical movie quality necessary for the transition from box-office bombs to blockbusters. The major factors affecting the critical review score are examined. By iterating the mean-field dynamical equations, we obtain qualitative agreement with simulations and real systems in the dynamical box-office forms, revealing the significant role of competition in understanding box-office dynamics.
C. H. Yeung
Giulio Cimini
giulio.cimini@imtlucca.it
C.-H. Jin
2015-11-05T11:16:04Z
2018-03-08T17:04:19Z
http://eprints.imtlucca.it/id/eprint/2813
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2813
2015-11-05T11:16:04Z
Randomizing world trade. II. A weighted network analysis
Based on the misleading expectation that weighted network properties always offer a more complete description than purely topological ones, current economic models of the International Trade Network (ITN) generally aim at explaining local weighted properties, not local binary ones. Here we complement our analysis of the binary projections of the ITN by considering its weighted representations. We show that, unlike the binary case, all possible weighted representations of the ITN (directed and undirected, aggregated and disaggregated) cannot be traced back to local country-specific properties, which are therefore of limited informativeness. Our two papers show that traditional macroeconomic approaches systematically fail to capture the key properties of the ITN. In the binary case, they do not focus on the degree sequence and hence cannot characterize or replicate higher-order properties. In the weighted case, they generally focus on the strength sequence, but the knowledge of the latter is not enough in order to understand or reproduce indirect effects.
Tiziano Squartini
tiziano.squartini@imtlucca.it
Giorgio Fagiolo
Diego Garlaschelli
diego.garlaschelli@imtlucca.it
2015-11-05T11:14:29Z
2018-03-08T17:04:07Z
http://eprints.imtlucca.it/id/eprint/2812
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2812
2015-11-05T11:14:29Z
Randomizing world trade. I. A binary network analysis
The international trade network (ITN) has received renewed multidisciplinary interest due to recent advances in network theory. However, it is still unclear whether a network approach conveys additional, nontrivial information with respect to traditional international-economics analyses that describe world trade only in terms of local (first-order) properties. In this and in a companion paper, we employ a recently proposed randomization method to assess in detail the role that local properties have in shaping higher-order patterns of the ITN in all its possible representations (binary or weighted, directed or undirected, aggregated or disaggregated by commodity) and across several years. Here we show that, remarkably, the properties of all binary projections of the network can be completely traced back to the degree sequence, which is therefore maximally informative. Our results imply that explaining the observed degree sequence of the ITN, which has not received particular attention in economic theory, should instead become one of the main focuses of models of trade.
Tiziano Squartini
tiziano.squartini@imtlucca.it
Giorgio Fagiolo
Diego Garlaschelli
diego.garlaschelli@imtlucca.it
2015-11-05T11:11:57Z
2015-11-05T11:11:57Z
http://eprints.imtlucca.it/id/eprint/2811
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2811
2015-11-05T11:11:57Z
Surface modification of titanium alloy with the Ti3Al + TiB2/TiN composite coatings
Laser cladding of the Ti3Al + TiB2 pre-placed alloy powder on the Ti–6Al–4V alloy in a nitrogen protective atmosphere can form the Ti3Al + TiB2/TiN composite coating, which dramatically improves the wear resistance of the Ti–6Al–4V alloy surface. In this study, the Ti3Al + TiB2/TiN composite coatings on the Ti–6Al–4V alloy have been investigated by means of X-ray diffraction, SEM and energy-dispersive spectrometry. It was found that there is a metallurgical bond between the Ti3Al + TiB2/TiN composite coating and the substrate. The microhardness of the Ti3Al + TiB2/TiN composite coatings was 3–4 times higher than that of the Ti–6Al–4V alloy because of the action of the Ti3Al + TiB2/TiN hard phases and grain-refinement strengthening. Moreover, the wear mass losses of the Ti3Al + TiB2/TiN composite coatings were much lower than those of the substrate.
J. N. Li
C. Z. Chen
B. B. Cui
Tiziano Squartini
tiziano.squartini@imtlucca.it
2015-11-05T11:00:59Z
2018-03-08T17:04:31Z
http://eprints.imtlucca.it/id/eprint/2810
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2810
2015-11-05T11:00:59Z
Analytical maximum-likelihood method to detect patterns in real networks
In order to detect patterns in real networks, randomized graph ensembles that preserve only part of the topology of an observed network are systematically used as fundamental null models. However, generating such ensembles is still problematic. Existing approaches are either computationally demanding and beyond analytic control, or analytically accessible but highly approximate. Here, we propose a solution to this long-standing problem by introducing a fast method that allows one to obtain expectation values and standard deviations of any topological property analytically, for any binary, weighted, directed or undirected network. Remarkably, the time required to obtain the expectation value of any property analytically across the entire graph ensemble is as short as that required to compute the same property using the adjacency matrix of the single original network. Our method reveals that the null behavior of various correlation properties is different from what was previously believed, and is highly sensitive to the particular network considered. Moreover, our approach shows that important structural properties (such as the modularity used in community detection problems) are currently based on incorrect expressions, and provides the exact quantities that should replace them.
Tiziano Squartini
tiziano.squartini@imtlucca.it
Diego Garlaschelli
diego.garlaschelli@imtlucca.it
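As a concrete illustration of the kind of computation this method enables, the sketch below fits the binary undirected configuration model by solving the maximum-likelihood conditions: find hidden variables x_i such that the expected degree x_i Σ_{j≠i} x_j/(1 + x_i x_j) matches each observed degree k_i. The resulting matrix p then gives the connection probabilities of the null model analytically. The damped fixed-point solver is an assumed, simplified implementation for illustration, not the authors' code.

```python
import numpy as np

def fit_binary_configuration_model(k, n_iter=5000, tol=1e-10):
    """Solve the ML conditions of the undirected binary configuration
    model: x_i * sum_{j != i} x_j / (1 + x_i x_j) = k_i for all i.
    Returns the hidden variables x and the probability matrix p,
    where p[i, j] = x_i x_j / (1 + x_i x_j) is the null-model
    probability of a link between i and j."""
    k = np.asarray(k, dtype=float)
    x = k / np.sqrt(k.sum() + 1.0)       # standard sparse-limit starting point
    for _ in range(n_iter):
        xx = np.outer(x, x)
        m = x[None, :] / (1.0 + xx)      # m[i, j] = x_j / (1 + x_i x_j)
        s = m.sum(axis=1) - np.diag(m)   # exclude the self-loop term j = i
        x_new = k / s
        if np.max(np.abs(x_new - x)) < tol:
            x = x_new
            break
        x = 0.5 * (x + x_new)            # damping for robust convergence
    p = np.outer(x, x) / (1.0 + np.outer(x, x))
    np.fill_diagonal(p, 0.0)
    return x, p
```

Once p is available, the expectation of any topological property that is a function of the adjacency matrix can be evaluated analytically by replacing each entry a_ij with p_ij, which is the speed-up the abstract describes: no sampling of randomized graphs is needed.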
2015-11-05T10:55:36Z
2015-11-05T10:55:36Z
http://eprints.imtlucca.it/id/eprint/2809
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2809
2015-11-05T10:55:36Z
Phase constituents and microstructure of laser cladding Al2O3/Ti3Al reinforced ceramic layer on titanium alloy
Laser cladding of the Fe3Al + TiB2/Al2O3 pre-placed alloy powder on the Ti–6Al–4V alloy can form a Ti3Al/Fe3Al + TiB2/Al2O3 ceramic layer, which can greatly increase the wear resistance of the titanium alloy. In this study, the Ti3Al/Fe3Al + TiB2/Al2O3 ceramic layer has been investigated by means of electron probe, X-ray diffraction, scanning electron microscopy and micro-analysis. In the cladding process, Al2O3 can react with TiB2, leading to the formation of Ti3Al and B. This principle can be used to improve the Fe3Al + TiB2 laser-cladded coating: it was found that, with the addition of Al2O3, the microstructure and micro-hardness of the coating were markedly improved owing to the action of the Al–Ti–B system and hard phases.
Jianing Li
Chuanzhong Chen
Zhaoqing Lin
Tiziano Squartini
tiziano.squartini@imtlucca.it
2015-06-25T12:52:14Z
2015-06-25T12:52:14Z
http://eprints.imtlucca.it/id/eprint/2719
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2719
2015-06-25T12:52:14Z
Type-based access control in data-centric systems
Data-centric multi-user systems, such as web applications, require flexible yet fine-grained data security mechanisms. Such mechanisms are usually enforced by a specially crafted security layer, which adds extra complexity and often leads to error-prone coding, easily causing severe security breaches. In this paper, we introduce a programming-language approach for enforcing access control policies to data in data-centric programs by static typing. Our development is based on the general concept of refinement type, extended so as to address realistic and challenging scenarios of permission-based data security, in which policies dynamically depend on the database state and flexible combinations of column- and row-level protection of data are necessary. We state and prove soundness and safety of our type system, guaranteeing that well-typed programs never break the declared data access control policies.
Luis Caires
Jorge A. Pérez
João C. Seco
Hugo Torres Vieira
hugo.torresvieira@imtlucca.it
Lúcio Ferrão
2015-02-11T14:06:11Z
2015-02-11T14:06:11Z
http://eprints.imtlucca.it/id/eprint/2599
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2599
2015-02-11T14:06:11Z
Non-functional properties in the model-driven development of service-oriented systems
Systems based on the service-oriented architecture (SOA) principles have become an important cornerstone of the development of enterprise-scale software applications. They are characterized by separating functions into distinct software units, called services, which can be published, requested and dynamically combined in the production of business applications. Service-oriented systems (SOSs) promise high flexibility, improved maintainability, and simple re-use of functionality. Achieving these properties requires an understanding not only of the individual artifacts of the system but also their integration. In this context, non-functional aspects play an important role and should be analyzed and modeled as early as possible in the development cycle. In this paper, we discuss modeling of non-functional aspects of service-oriented systems, and the use of these models for analysis and deployment. Our contribution in this paper is threefold. First, we show how services and service compositions may be modeled in UML by using a profile for SOA (UML4SOA) and how non-functional properties of service-oriented systems can be represented using the non-functional extension of UML4SOA (UML4SOA-NFP) and the MARTE profile. This enables modeling of performance, security and reliable messaging. Second, we discuss formal analysis of models which respect this design, in particular we consider performance estimates and reliability analysis using the stochastically timed process algebra PEPA as the underlying analytical engine. Last but not least, our models are the source for the application of deployment mechanisms which comprise model-to-model and model-to-text transformations implemented in the framework VIATRA. All techniques presented in this work are illustrated by a running example from an eUniversity case study.
Stephen Gilmore
László Gönczy
Nora Koch
Philip Mayer
Mirco Tribastone
mirco.tribastone@imtlucca.it
Dániel Varró
2015-02-11T13:46:09Z
2015-02-11T13:46:09Z
http://eprints.imtlucca.it/id/eprint/2597
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2597
2015-02-11T13:46:09Z
Scaling performance analysis using fluid-flow approximation
The fluid interpretation of the process calculus PEPA provides a very useful tool for the performance evaluation of large-scale systems because the tractability of the numerical solution does not depend upon the population levels of the system under study. This paper offers a tutorial on how to use this technique by analysing a case study of a service-oriented application to support an e-University infrastructure.
Mirco Tribastone
mirco.tribastone@imtlucca.it
Stephen Gilmore
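The fluid-flow technique this tutorial describes can be illustrated with a small hypothetical client/server model: component populations evolve by ordinary differential equations, and a shared action proceeds at a rate bounded by the slower side, in the style of the min-based fluid semantics used for PEPA. The model structure, rates and Euler integration below are illustrative assumptions, not the paper's e-University case study.

```python
def simulate_client_server(n_clients=100, n_servers=10, r_think=1.0,
                           r_serve=2.0, r_log=4.0, dt=0.001, t_end=20.0):
    """Fluid approximation of a client/server system: clients cycle
    thinking -> waiting -> thinking, servers cycle idle -> logging ->
    idle, and the shared 'serve' action moves one waiting client and
    one idle server simultaneously. Explicit Euler integration."""
    ct, cw = float(n_clients), 0.0   # clients: thinking / waiting
    si, sb = float(n_servers), 0.0   # servers: idle / logging
    for _ in range(int(t_end / dt)):
        serve = r_serve * min(cw, si)   # synchronised action: min-based rate
        d_ct = -r_think * ct + serve    # think: ct -> cw; serve returns cw -> ct
        d_si = -serve + r_log * sb      # serve: si -> sb; log returns sb -> si
        ct += dt * d_ct
        cw -= dt * d_ct                 # ct + cw is conserved
        si += dt * d_si
        sb -= dt * d_si                 # si + sb is conserved
    return ct, cw, si, sb
```

The cost of solving these four ODEs is independent of the population sizes, which is the point of the technique: the same analysis via an explicit state space would have to enumerate every configuration of 100 clients and 10 servers.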
2015-02-09T11:30:06Z
2015-02-09T11:30:06Z
http://eprints.imtlucca.it/id/eprint/2583
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2583
2015-02-09T11:30:06Z
Fluid analysis of queueing in two-stage random environments
A large number of random environments lead to Markov processes for which average-environment (AVG) and near-complete-decomposability (DEC) approximations suffer unacceptably large errors. This is problematic for queueing networks in particular, where state-space explosion hinders the application of numerical methods. In this paper we introduce blending, a novel fluid-based approximation for queueing models in random environments. The technique is first introduced here for random environments with two stages. Blending estimates the equilibrium of the model by iteratively evaluating transient-analysis subproblems for each of the two stages. Each subproblem is solved by means of a very small system of ordinary differential equations, making the approach scalable and simple to implement. Random environments supported by blending are either state-independent, as for models with breakdown and repair, or state-dependent, as for Markov-modulated queues where the service phase changes only during busy periods. Comparative results with AVG and DEC approximations show that blending overcomes the limitations of existing methods for evaluating queues in random environments.
Giuliano Casale
Mirco Tribastone
mirco.tribastone@imtlucca.it
2015-02-09T11:05:12Z
2015-02-09T11:05:12Z
http://eprints.imtlucca.it/id/eprint/2581
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2581
2015-02-09T11:05:12Z
Approximate mean value analysis of process algebra models
Studying the existence of product forms of performance models described with compositional techniques is of central importance since this may lead to particularly efficient solution methods. This paper considers a class of models in the stochastic process algebra PEPA which do not enjoy the exact product form solutions available in the literature. However, they can be interpreted as queueing networks with service vacations and multiple resource possession, which have been shown to admit accurate analytical approximations based on mean value analysis. Special attention is devoted to situations where the use of the competing approximate method based on ordinary differential equations may be questionable due to the presence of components with few replicas.
Mirco Tribastone
mirco.tribastone@imtlucca.it
2015-02-09T10:58:13Z
2015-02-09T10:58:13Z
http://eprints.imtlucca.it/id/eprint/2580
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2580
2015-02-09T10:58:13Z
Modular performance modelling for mobile applications
We propose a model-based approach to analysing the performance of mobile applications where physical mobility and state changes are modelled by graph transformations from which a model in the Performance Evaluation Process Algebra (PEPA) is derived. To fight scalability problems with state space generation we adopt a modular solution where the graph transformation system is decomposed into views, for which labelled transition systems (LTS) are generated separately and later synchronised in PEPA. We demonstrate that the result of this modular analysis is equivalent to that of the monolithic approach and evaluate practicality and scalability by means of a case study.
Niaz Arijo
Reiko Heckel
Mirco Tribastone
mirco.tribastone@imtlucca.it
Stephen Gilmore
2015-02-09T10:46:27Z
2015-02-09T10:46:27Z
http://eprints.imtlucca.it/id/eprint/2579
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2579
2015-02-09T10:46:27Z
Scalable performance evaluation of computer systems
The present paper provides an overview of recent and ongoing research conducted at the Chair of Programming and Software Engineering of LMU Munich on performance evaluation of large-scale computer systems.
Mirco Tribastone
mirco.tribastone@imtlucca.it
2015-02-09T10:29:56Z
2015-02-09T10:29:56Z
http://eprints.imtlucca.it/id/eprint/2578
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2578
2015-02-09T10:29:56Z
Large-scale modelling with the PEPA eclipse plug-in
We report on recent advances in the development of the PEPA Eclipse Plug-in, a software tool which supports a complete modelling workflow for the stochastic process algebra PEPA. The most notable improvements regard the implementation of the population-based semantics, which constitutes the basis for the aggregation of models for large state spaces. Analysis is supported either via an efficient stochastic simulation algorithm or through fluid approximation based on ordinary differential equations. In either case, the functionality is provided by a common graphical interface, which presents the user with a number of wizards that ease the specification of typical performance measures such as average response time or throughput. Behind the scenes, the engine for stochastic simulation has been extended in order to support both transient and steady-state simulation and to calculate confidence levels and correlations without resorting to external tools.
Mirco Tribastone
mirco.tribastone@imtlucca.it
Stephen Gilmore
2015-02-06T11:39:40Z
2015-07-24T12:32:00Z
http://eprints.imtlucca.it/id/eprint/2558
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2558
2015-02-06T11:39:40Z
Refined theory of packages
The fluid approximation for PEPA usually considers large populations of simple interacting sequential components characterised by small local state spaces. A natural question which arises is whether it is possible to extend this technique to composite processes with arbitrarily large local state spaces. In [1] the authors were able to give a positive answer for a certain class of models. The current paper enlarges this class.
Max Tschaikowski
max.tschaikowski@imtlucca.it
Mirco Tribastone
mirco.tribastone@imtlucca.it
2015-01-21T08:29:54Z
2015-01-21T08:29:54Z
http://eprints.imtlucca.it/id/eprint/2535
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2535
2015-01-21T08:29:54Z
A simplified evaluation of the influence of the bond pattern on the brickwork limit strength
The influence of the bond pattern on the in-plane limit strength of masonry is analyzed through a simplified procedure based on the application of the safe theorem of limit analysis to the unit cell that generates the whole masonry by periodic repetition. The limit strength domains of running bond, English bond and herringbone bond masonry are obtained with different orientations of the mortar bed joints with respect to the principal directions of the average stress. The effects of different brick geometries are analyzed and comparisons between strength properties of different masonry patterns are made.
Andrea Bacigalupo
andrea.bacigalupo@imtlucca.it
Andrea Cavicchi
Luigi Gambarotta
2015-01-20T15:18:55Z
2015-01-20T15:18:55Z
http://eprints.imtlucca.it/id/eprint/2532
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2532
2015-01-20T15:18:55Z
Non-local computational homogenization of periodic masonry
Micro-polar and second-order homogenization procedures for periodic elastic masonry have been implemented to include geometric and material length scales in the constitutive equation. From the evaluation of the numerical response of the unit cell representative of the masonry to properly prescribed displacement boundary conditions related to homogeneous macro-strain fields, the elastic moduli of the higher-order continua are obtained on the basis of an extended Hill–Mandel macro-homogeneity condition. Elastic moduli and internal lengths for running bond masonry are obtained in the cases of Cosserat and second-order homogenization. To evaluate these results, a shear layer problem representative of a masonry wall subjected to a uniform horizontal displacement at points on the top is analyzed as a micro-polar and as a second-order continuum, and the results are compared with those of the reference heterogeneous model. From this analysis the second-order homogenization appears to provide better results than the micro-polar homogenization.
Andrea Bacigalupo
andrea.bacigalupo@imtlucca.it
Luigi Gambarotta
2015-01-20T11:31:59Z
2015-01-20T11:31:59Z
http://eprints.imtlucca.it/id/eprint/2525
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2525
2015-01-20T11:31:59Z
High continuity second-order homogenization of in-plane loaded periodic masonry
In this paper the second-order homogenization of periodic masonry based on a computational analysis of the unit cell representative of the masonry wall is derived. The multi-scale approach is based on an appropriate representation of the micro-displacement field as the superposition of a local macroscopic displacement field, represented in a polynomial form related to the macro-displacement field, and an unknown micro-fluctuation field accounting for the effects of the heterogeneities. By this approach a continuous micro-displacement field is obtained, i.e. in each unit cell and across the interfaces between adjacent unit cells. The computational procedure is applied in two steps: the first corresponds to the standard homogenization, while the second is a second-order homogenization based on the results of the first. Two numerical examples are presented concerning running bond and English bond masonry. For both masonry patterns the overall elastic moduli of the second-order model and the corresponding characteristic lengths are obtained; the effects on the characteristic lengths of the stiffness mismatch between the brick phase and the mortar phase are considered. Moreover, the wave propagation in the homogenized medium is considered and dispersive waves are obtained. It is shown that remarkable differences in the phase and group velocities between the first-order and the second-order homogenized models are obtained for wavelengths shorter than ten times the average brick unit size.
Andrea Bacigalupo
andrea.bacigalupo@imtlucca.it
Luigi Gambarotta
2015-01-20T10:55:09Z
2015-01-20T10:55:09Z
http://eprints.imtlucca.it/id/eprint/2524
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2524
2015-01-20T10:55:09Z
Non-local modelling of blocky rock masses with periodic jointed structure
Andrea Bacigalupo
andrea.bacigalupo@imtlucca.it
Luigi Gambarotta
2015-01-20T10:38:01Z
2015-01-20T10:38:01Z
http://eprints.imtlucca.it/id/eprint/2523
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2523
2015-01-20T10:38:01Z
A simplified evaluation of the influence of the bond pattern on the brickwork limit strength
Andrea Bacigalupo
andrea.bacigalupo@imtlucca.it
Andrea Cavicchi
Luigi Gambarotta
2015-01-19T14:35:56Z
2015-01-19T14:35:56Z
http://eprints.imtlucca.it/id/eprint/2509
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2509
2015-01-19T14:35:56Z
Omogeneizzazione multi-scala di materiali a microstruttura periodica mediante sviluppi asintotici dell'energia di deformazione
Andrea Bacigalupo
andrea.bacigalupo@imtlucca.it
2015-01-19T14:32:46Z
2015-01-19T14:32:46Z
http://eprints.imtlucca.it/id/eprint/2508
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2508
2015-01-19T14:32:46Z
Multi-scale modelling of periodic masonry: size effects, characteristic lengths and dispersive waves
Andrea Bacigalupo
andrea.bacigalupo@imtlucca.it
Luigi Gambarotta
2015-01-19T13:25:17Z
2015-01-19T13:25:17Z
http://eprints.imtlucca.it/id/eprint/2503
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2503
2015-01-19T13:25:17Z
Un approccio multi-scala per la determinazione delle proprietà meccaniche di materiali eterogenei a micro-struttura periodica
[…] mechanical behavior of heterogeneous materials on the basis of suitable homogenization techniques (asymptotic [1] and computational [2]). The heterogeneous medium, modeled at the micro-scale as a Cauchy continuum, is associated with a homogeneous one whose macro-scale behavior can be concisely described by non-local continua. It is known that in computational approaches the local displacement field obtained by localization, imposing higher-order macro-strains on the unit cell, is in general neither continuous at the interface between adjacent unit cells nor sufficiently regular to guarantee the anti-periodicity of the tractions on the boundary [3]. To overcome this problem, a simple non-local (multipolar, i.e. second-order) homogenization technique is developed in steps, based on a suitable definition of the down-scaling in which a particular structure of the micro-displacement perturbation, expressed in terms of the macro-strains, is superimposed on the macro-displacement. The perturbation functions, which depend on the properties of the microstructure, are determined through the successive solution of cell problems. The structure of the local displacement, directly related to that used in asymptotic techniques, yields in the localization displacement and stress fields that are suitably regular on the boundary of adjacent cells and periodic at the micro-scale. The elastic constants of the equivalent homogeneous continuum are determined through an energy equivalence at the two scales over a representative portion of the heterogeneous material. In the case of homogenization into a second-order continuum (or into a Koiter-type micropolar one), the asymptotic expansion of the micro-scale strain energy in terms of the characteristic size of the unit cell is truncated at the second order. In particular, the elastic constants thus obtained are invariant with respect to any translation of the chosen unit cell, and when the microstructure becomes vanishingly small the resulting internal lengths are identically zero, showing the absence of non-local effects at the macroscopic scale. The up-scaling is defined by solving a suitable minimization problem, approximating the macro-displacement and the macro-strains with a complete polynomial form of fixed order. A computational simplification of the proposed homogenization model is obtained by expressing the macro-displacement directly in the down-scaling as a second-order Taylor polynomial and solving the cell problems in terms of the local displacement [3].
Andrea Bacigalupo
andrea.bacigalupo@imtlucca.it
Luigi Gambarotta
2015-01-15T13:12:28Z
2015-01-15T13:12:28Z
http://eprints.imtlucca.it/id/eprint/2489
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2489
2015-01-15T13:12:28Z
Controlling reversibility in higher-order pi
We present in this paper a fine-grained rollback primitive for the higher-order π-calculus (HOπ), that builds on the reversibility apparatus of reversible HOπ [9]. The definition of a proper semantics for such a primitive is a surprisingly delicate matter because of the potential interferences between concurrent rollbacks. We define in this paper a high-level operational semantics which we prove sound and complete with respect to reversible HOπ backward reduction. We also define a lower-level distributed semantics, which is closer to an actual implementation of the rollback primitive, and we prove it to be fully abstract with respect to the high-level semantics.
Ivan Lanese
Claudio Antares Mezzina
claudio.mezzina@imtlucca.it
Alan Schmitt
Jean-Bernard Stefani
2015-01-12T14:46:07Z
2015-01-12T14:46:07Z
http://eprints.imtlucca.it/id/eprint/2466
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2466
2015-01-12T14:46:07Z
Enforcing stability constraints in set-membership identification of linear dynamic systems
In this paper, we consider the identification of linear systems, a priori known to be stable, from input–output data corrupted by bounded noise. By taking explicitly into account a priori information on system stability, a formal definition of the feasible parameter set for a stable linear system is provided. On the basis of a detailed analysis of the geometrical structure of the feasible set, convex relaxation techniques are presented to solve nonconvex optimization problems arising in the computation of parameter uncertainty intervals. Properties of the computed relaxed bounds are discussed. A simulated example is presented to show the effectiveness of the proposed technique.
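The feasible-set idea in the abstract above can be illustrated on the simplest possible case, a scalar AR(1) model with bounded noise; the toy data, noise bound, and closed-form interval arithmetic below are illustrative assumptions, not the authors' algorithm (which handles general linear systems via convex relaxation):

```python
# Sketch: exact feasible parameter interval for y[t+1] = a*y[t] + e[t],
# |e[t]| <= eps, intersected with the a priori stability constraint |a| < 1.
# Each data pair contributes (y[t+1]-eps)/y[t] <= a <= (y[t+1]+eps)/y[t].

def feasible_interval(y, eps, stab=1.0):
    lo, hi = -stab, stab                  # start from the stability prior
    for t in range(len(y) - 1):
        if y[t] == 0:
            continue                      # sample carries no information on a
        l = (y[t + 1] - eps) / y[t]
        h = (y[t + 1] + eps) / y[t]
        if y[t] < 0:                      # dividing by a negative flips bounds
            l, h = h, l
        lo, hi = max(lo, l), min(hi, h)
    return lo, hi

# toy data generated with a = 0.5 and no noise
y = [1.0, 0.5, 0.25, 0.125]
lo, hi = feasible_interval(y, eps=0.05)   # lo ~ 0.45, hi ~ 0.55
```

Even with noise-free data the interval has positive width, reflecting the set-membership viewpoint: the method returns guaranteed parameter bounds rather than a point estimate.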
Vito Cerone
Dario Piga
dario.piga@imtlucca.it
Diego Regruto
2015-01-12T14:39:42Z
2015-01-13T14:49:53Z
http://eprints.imtlucca.it/id/eprint/2465
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2465
2015-01-12T14:39:42Z
Set-membership LPV model identification of vehicle lateral dynamics
Set-membership identification of a Linear Parameter Varying (LPV) model describing the vehicle lateral dynamics is addressed in the paper. The model structure, chosen as far as possible on the ground of physical insight into the vehicle lateral behavior, consists of two single-input single-output LPV models relating the steering angle to the yaw rate and to the sideslip angle. A set of experimental data obtained by performing a large number of maneuvers is used to identify the vehicle lateral dynamics model. Prior information on the error bounds on the output and the time-varying parameter measurements is taken into account. Comparison with other vehicle lateral dynamics models is discussed.
Vito Cerone
Dario Piga
dario.piga@imtlucca.it
Diego Regruto
2015-01-09T11:31:37Z
2015-01-09T11:31:37Z
http://eprints.imtlucca.it/id/eprint/2445
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2445
2015-01-09T11:31:37Z
Set-membership identification of Hammerstein-Wiener systems
Set-membership identification of Hammerstein-Wiener models is addressed in the paper. First, it is shown that computation of tight parameter bounds requires the solutions to a number of nonconvex constrained polynomial optimization problems where the number of decision variables increases with the length of the experimental data sequence. Then, a suitable convex relaxation procedure is presented to significantly reduce the computational burden of the identification problem. A detailed discussion of the identification algorithm properties is reported. Finally, a simulated example is used to show the effectiveness and the computational tractability of the proposed approach.
Vito Cerone
Dario Piga
dario.piga@imtlucca.it
Diego Regruto
2015-01-09T11:26:33Z
2015-01-09T11:26:33Z
http://eprints.imtlucca.it/id/eprint/2444
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2444
2015-01-09T11:26:33Z
Fast implementation of model predictive control with guaranteed performance
A fast implementation of a given predictive controller for nonlinear systems is introduced through a piecewise constant approximate function defined over a hyper-cube partition of the system state space. Such a state partition is obtained by maximizing the hyper-cube volumes in order to guarantee, besides stability, an a priori fixed trajectory error as well as satisfaction of input and state constraints. The presented approximation procedure is achieved by solving a set of nonconvex polynomial optimization problems, whose approximate solutions are computed by means of semidefinite relaxation techniques for semialgebraic problems.
Massimo Canale
Vito Cerone
Dario Piga
dario.piga@imtlucca.it
Diego Regruto
2015-01-09T11:12:20Z
2015-01-09T11:12:20Z
http://eprints.imtlucca.it/id/eprint/2443
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2443
2015-01-09T11:12:20Z
Computational burden reduction in set-membership Hammerstein system identification
Hammerstein system identification from measurements affected by bounded noise is considered in the paper. First, we show that computation of tight parameter bounds requires the solution to nonconvex optimization problems where the number of decision variables increases with the length of the experimental data sequence. Then, in order to reduce the computational burden of the identification problem, we propose a procedure to relax the previously formulated problem to a set of polynomial optimization problems where the number of variables does not depend on the size of the measurements sequence. Advantages of the presented approach with respect to previously published results are discussed.
Vito Cerone
Dario Piga
dario.piga@imtlucca.it
Diego Regruto
2015-01-09T10:28:51Z
2015-01-09T10:28:51Z
http://eprints.imtlucca.it/id/eprint/2442
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2442
2015-01-09T10:28:51Z
Convex relaxation techniques for set-membership identification of LPV systems
Set-membership identification of single-input single-output linear parameter varying models is considered in the paper under the assumption that both the output and the scheduling parameter measurements are affected by bounded noise. First, we show that the problem of computing the parameter uncertainty intervals requires the solutions to a number of nonconvex optimization problems. Then, on the basis of the analysis of the regressor structure, we present some ad hoc convex relaxation schemes to compute parameter bounds by means of semidefinite optimization. Advantages of the new techniques with respect to previously published results are discussed both theoretically and by means of simulations.
Vito Cerone
Dario Piga
dario.piga@imtlucca.it
Diego Regruto
2015-01-09T10:25:09Z
2015-01-09T10:25:09Z
http://eprints.imtlucca.it/id/eprint/2441
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2441
2015-01-09T10:25:09Z
Hammerstein systems parameters bounding through sparse polynomial optimization
A single-stage procedure for the evaluation of tight bounds on the parameters of Hammerstein systems from output measurements affected by bounded errors is presented. The identification problem is formulated in terms of polynomial optimization, and relaxation techniques based on linear matrix inequalities are proposed to evaluate parameter bounds by means of convex optimization. The structured sparsity of the identification problem is exploited to reduce the computational complexity of the convex relaxed problem. Convergence properties, complexity analysis and advantages of the proposed technique with respect to previously published ones are discussed.
Vito Cerone
Dario Piga
dario.piga@imtlucca.it
Diego Regruto
2014-12-18T10:24:10Z
2014-12-18T10:24:10Z
http://eprints.imtlucca.it/id/eprint/2419
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2419
2014-12-18T10:24:10Z
Un viaggio in Abruzzo di Adolfo e Lionello Venturi
Both Adolfo Venturi and his son Lionello made extensive use of sketchbooks during their travels in Italy, Europe and America, in order to record artworks preserved in the museums and private collections that they visited. A series of 15 sketchbooks of the same dimension and form is split between the Venturi archives in Rome and Pisa. Even though many pages are missing, the 15 sketchbooks record a long journey made together by Adolfo and Lionello, from Italy through Europe (Germany, France), probably written around 1904-1905. The handwriting of both Adolfo and Lionello is recognizable across the units: some are written by Adolfo, some by Lionello and some by both father and son. Two of them are dedicated to Abruzzo, one preserved in Pisa and one in Rome, probably belonging originally to the same unit before it was dismembered. This article discusses the Venturis' travel to Abruzzo and its possible dating, and publishes the whole part preserved in Rome, entirely dedicated to Tagliacozzo and written by Lionello Venturi alone. This makes it possible to add new material for comparing the artwork descriptions of the young Lionello with those of his father Adolfo, that is, of the new and the old generations of art historians.
Emanuele Pellegrini
emanuele.pellegrini@imtlucca.it
2014-12-03T11:12:58Z
2014-12-18T13:55:51Z
http://eprints.imtlucca.it/id/eprint/2387
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2387
2014-12-03T11:12:58Z
Consistent community identification in complex networks
We have found that known community identification algorithms produce inconsistent communities when the node ordering changes at input. We propose two metrics to quantify the level of consistency across multiple runs of an algorithm: pairwise membership probability and consistency. Based on these two metrics, we address the consistency problem without compromising the modularity. Our solution uses pairwise membership probabilities as link weights and generates consistent communities within six or fewer cycles. It offers a new tool in the study of community structures and their evolutions.
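The pairwise membership probability used above as a link weight can be sketched in a few lines; the toy label assignments below are hypothetical, not data from the paper:

```python
from itertools import combinations

def pairwise_membership(runs):
    """Fraction of runs in which each node pair shares a community.

    runs: list of dicts mapping node -> community label, one dict per run."""
    nodes = sorted(runs[0])
    prob = {}
    for i, j in combinations(nodes, 2):
        same = sum(1 for r in runs if r[i] == r[j])
        prob[(i, j)] = same / len(runs)
    return prob

# three runs of a (hypothetical) community detection algorithm
runs = [
    {"a": 0, "b": 0, "c": 1},
    {"a": 0, "b": 1, "c": 1},
    {"a": 0, "b": 0, "c": 1},
]
p = pairwise_membership(runs)   # p[("a", "b")] == 2/3
```

Re-running the detection algorithm with these probabilities as link weights pulls frequently co-assigned nodes together, which is the mechanism by which the partitions converge to a consistent result.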
Haewoon Kwak
Sue Moon
Young-Ho Eom
youngho.eom@imtlucca.it
Yoonchan Choi
Hawoong Jeong
2014-12-02T15:39:26Z
2014-12-18T13:55:36Z
http://eprints.imtlucca.it/id/eprint/2386
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2386
2014-12-02T15:39:26Z
Characterizing and modeling citation dynamics
Citation distributions are crucial for the analysis and modeling of the activity of scientists. We investigated bibliometric data of papers published in journals of the American Physical Society, searching for the type of function which best describes the observed citation distributions. We used the goodness of fit with Kolmogorov-Smirnov statistics for three classes of functions: log-normal, simple power law and shifted power law. The shifted power law turns out to be the most reliable hypothesis for all citation networks we derived, which correspond to different time spans. We find that citation dynamics is characterized by bursts, usually occurring within a few years since publication of a paper, and the burst size spans several orders of magnitude. We also investigated the microscopic mechanisms for the evolution of citation networks, by proposing a linear preferential attachment with time dependent initial attractiveness. The model successfully reproduces the empirical citation distributions and accounts for the presence of citation bursts as well.
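The attachment rule described in the abstract can be sketched as follows; the exponential decay and the parameter values of the initial attractiveness are illustrative assumptions, not the fitted form from the paper:

```python
import math
import random

def grow_citation_network(n_papers, a0=5.0, tau0=3.0, seed=0):
    """Each new paper cites one earlier paper i with probability
    proportional to k_i + A(t - i), where k_i is i's current citation
    count and A(tau) = a0 * exp(-tau / tau0) is a time-dependent
    (decaying) initial attractiveness."""
    rng = random.Random(seed)
    cites = [0]                           # citation count per paper
    for t in range(1, n_papers):
        weights = [cites[i] + a0 * math.exp(-(t - i) / tau0) for i in range(t)]
        target = rng.choices(range(t), weights=weights)[0]
        cites[target] += 1
        cites.append(0)
    return cites

cites = grow_citation_network(200)        # 199 citations over 200 papers
```

The decaying term gives each recent paper a window in which it can attract citations before the rich-get-richer term dominates, which is what produces burst-like dynamics shortly after publication.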
Young-Ho Eom
youngho.eom@imtlucca.it
Santo Fortunato
2014-12-02T15:33:39Z
2014-12-18T13:56:05Z
http://eprints.imtlucca.it/id/eprint/2385
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2385
2014-12-02T15:33:39Z
How citation boosts promote scientific paradigm shifts and Nobel Prizes
Nobel Prizes are commonly seen to be among the most prestigious achievements of our times. Based on mining several million citations, we quantitatively analyze the processes driving paradigm shifts in science. We find that groundbreaking discoveries of Nobel Prize Laureates and other famous scientists are not only acknowledged by many citations of their landmark papers. Surprisingly, they also boost the citation rates of their previous publications. Given that innovations must outcompete the rich-gets-richer effect for scientific citations, it turns out that they can make their way only through citation cascades. A quantitative analysis reveals how and why they happen. Science appears to behave like a self-organized critical system, in which citation cascades of all sizes occur, from continuous scientific progress all the way up to scientific revolutions, which change the way we see our world. Measuring the “boosting effect” of landmark papers, our analysis reveals how new ideas and new players can make their way and finally triumph in a world dominated by established paradigms. The underlying “boost factor” is also useful to discover scientific breakthroughs and talents much earlier than through classical citation analysis, which by now has become a widespread method to measure scientific excellence, influencing scientific careers and the distribution of research funds. Our findings reveal patterns of collective social behavior, which are also interesting from an attention economics perspective. Understanding the origin of scientific authority may therefore ultimately help to explain how social influence comes about and why the value of goods depends so strongly on the attention they attract.
Amin Mazloumian
Young-Ho Eom
youngho.eom@imtlucca.it
Dirk Helbing
Sergi Lozano
Santo Fortunato
2014-10-09T13:14:13Z
2015-04-08T10:37:32Z
http://eprints.imtlucca.it/id/eprint/2317
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2317
2014-10-09T13:14:13Z
Typing dynamic roles in multiparty interaction
We present a type-based analysis for role-based multiparty interaction. Novel to our approach are the notions that a role specified in a protocol may be carried out by several parties, and that one party may assume different roles at different stages of the protocol. We build on Conversation Types by adding roles to protocol specifications. Systems are modeled in the π-calculus extended with labeled communication and role annotations. The main result shows that well-typed systems follow the role-based protocols prescribed by the types, addressing systems where roles have dynamic distributed implementations.
Pedro Baltazar
Vasco Thudichum Vasconcelos
Hugo Torres Vieira
hugo.torresvieira@imtlucca.it
2014-10-09T12:45:52Z
2015-04-08T10:37:32Z
http://eprints.imtlucca.it/id/eprint/2315
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2315
2014-10-09T12:45:52Z
Static analysis techniques for session-oriented calculi
In the Sensoria project, core calculi have been adopted as a linguistic means to model and analyze service-oriented applications. The present chapter reports on the static analysis techniques developed for the Sensoria session-oriented core calculi CaSPiS and CC. In particular, it presents a type system for client progress and control flow analysis in CaSPiS, and type systems for conversation fidelity and progress in CC. The chapter gives an overview of these techniques, summarizes the main results and presents the analysis of a common example taken from the Sensoria financial case study: the credit request scenario.
Lucia Acciai
Chiara Bodei
Michele Boreale
Roberto Bruni
Hugo Torres Vieira
hugo.torresvieira@imtlucca.it
2014-10-09T12:43:01Z
2015-04-08T10:37:32Z
http://eprints.imtlucca.it/id/eprint/2314
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2314
2014-10-09T12:43:01Z
Behavioral theory for session-oriented calculi
This chapter presents the behavioral theory of some of the Sensoria core calculi. We consider SSCC, μse and CC as representatives of the session-based approach and COWS as representative of the correlation-based one.
For SSCC, μse and CC the main point is the structure that the session/conversation mechanism creates in programs. We show how the differences between binary sessions, multiparty sessions and dynamic conversations are captured by different behavioral laws. We also exploit those laws to prove the correctness of program transformations.
For COWS the main point is that communication is prioritized (the best matching input captures the output), and this has a strong influence on the behavioral theory of COWS. In particular, we show that communication in COWS is neither purely synchronous nor purely asynchronous.
Ivan Lanese
Antonio Ravara
Hugo Torres Vieira
hugo.torresvieira@imtlucca.it
2014-10-09T12:31:27Z
2015-04-08T10:37:32Z
http://eprints.imtlucca.it/id/eprint/2313
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2313
2014-10-09T12:31:27Z
Advanced mechanisms for service combination and transactions
Languages and models for service-oriented applications usually include primitives and constructs for exception and compensation handling. Exception handling is used to react to unexpected events while compensation handling is used to undo previously completed activities. In this chapter we investigate the impact of exception and compensation handling in message-based process calculi and the related theories developed within Sensoria.
Carla Ferreira
Ivan Lanese
Antonio Ravara
Hugo Torres Vieira
hugo.torresvieira@imtlucca.it
Gianluigi Zavattaro
2014-10-09T11:59:22Z
2015-04-08T10:37:32Z
http://eprints.imtlucca.it/id/eprint/2311
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2311
2014-10-09T11:59:22Z
Tools and verification
This chapter presents different tools that have been developed inside the Sensoria project. Sensoria studied qualitative analysis techniques for verifying properties of service implementations with respect to their formal specifications. The tools presented in this chapter have been developed to carry out the analysis in an automated, or semi-automated, way.
We present four different tools, all developed during the Sensoria project, exploiting new techniques and calculi from the Sensoria project itself.
Massimo Bartoletti
Luis Caires
Ivan Lanese
Franco Mazzanti
Davide Sangiorgi
Hugo Torres Vieira
hugo.torresvieira@imtlucca.it
Roberto Zunino
2014-07-03T08:34:33Z
2014-07-03T09:36:48Z
http://eprints.imtlucca.it/id/eprint/2233
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2233
2014-07-03T08:34:33Z
Shape analysis of the left ventricular endocardial surface and its application in detecting coronary artery disease
Coronary artery disease is the leading cause of morbidity and mortality worldwide. The complex morphological structure of the ventricular endocardial surface has not yet been studied properly due to the limitations of conventional imaging techniques. With the recent developments in Multi-Detector Computed Tomography (MDCT) scanner technology, we propose to study, in this paper, the complex endocardial surface morphology of the left ventricle via analysis of Computed Tomography (CT) image data obtained from a 320 Multi-Detector CT scanner. The CT image data is analyzed using a 3D shape analysis approach and the clinical significance of the analysis in detecting coronary artery disease is investigated. Global and local 3D shape descriptors are adapted for the purpose of shape analysis of the left ventricular endocardial surface. In order to study the association between the incidence of coronary artery disease and the alteration of the endocardial surface structure, we present the results of our shape analysis approach on 5 normal data sets, and 6 abnormal data sets with obstructive coronary artery disease. Based on the morphological characteristics of the endocardial surface as quantified by the shape descriptors, we implement a Linear Discrimination Analysis (LDA)-based classification algorithm to test the effectiveness of our shape analysis approach. Experiments performed on a strict leave-one-out basis are shown to achieve a classification accuracy of 81.8%.
Anirban Mukhopadhyay
anirban.mukhopadhyay@imtlucca.it
Zhen Qian
Suchendra M. Bhandarkar
Tianming Liu
Szilard Voros
2014-02-27T09:05:41Z
2014-02-27T09:05:41Z
http://eprints.imtlucca.it/id/eprint/2146
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2146
2014-02-27T09:05:41Z
Argomentare per immagini. L'incisione in antiporta di Tabulae Rudolphinae (1627)
Stefano Gattei
stefano.gattei@imtlucca.it
2014-01-24T13:38:10Z
2014-01-24T13:38:10Z
http://eprints.imtlucca.it/id/eprint/2123
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2123
2014-01-24T13:38:10Z
Opinions within media, power and gossip
Despite the increasing diffusion of Internet technology, TV remains the principal medium of communication. People's perceptions, knowledge, beliefs and opinions about matters of fact are (in)formed by the information reported by the mass media. However, a single source of information (and consensus) can be a cause of anomalies in the structure and evolution of a society. Hence, as the available information (and the way it is reported) is fundamental for our perceptions and opinions, defining the conditions under which good information can be disseminated is a pressing challenge. In this paper, starting from a report on the 2008 Italian political campaign, we derive a socio-cognitive computational model of opinion dynamics in which agents are informed by different sources of information. A what-if analysis, performed through simulations over the model's parameter space, is then presented. In particular, the implemented scenario includes three main streams of information acquisition, differing in both the contents and the perceived reliability of the messages spread. Agents' internal opinions are updated either by accessing one of the information sources, namely media and experts, or by exchanging information with one another. Agents are also endowed with cognitive mechanisms to accept, reject or partially consider the acquired information.
Walter Quattrociocchi
walter.quattrociocchi@imtlucca.it
Rosaria Conte
Elena Lodi
2014-01-24T13:34:36Z
2014-01-24T13:38:31Z
http://eprints.imtlucca.it/id/eprint/2122
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2122
2014-01-24T13:34:36Z
Exploiting reputation in distributed virtual environments
The cognitive research on reputation has shown several interesting properties that can improve both the quality of services and the security in distributed electronic environments. In this paper, the impact of reputation on decision-making under scarcity of information is shown. First, a cognitive theory of reputation is presented; then a selection of simulation results from different studies is discussed. These results concern the benefits of reputation when agents need to identify good sellers in a virtual marketplace under uncertainty and informational cheating.
Walter Quattrociocchi
walter.quattrociocchi@imtlucca.it
Rosaria Conte
2014-01-24T13:29:01Z
2014-01-24T13:29:01Z
http://eprints.imtlucca.it/id/eprint/2121
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2121
2014-01-24T13:29:01Z
Rooting opinions in the minds: a cognitive model and a formal account of opinions and their dynamics
The study of opinions, their formation and change, is one of the defining topics addressed by social psychology, but in recent years other disciplines, such as computer science and complexity science, have tried to deal with this issue. Despite the flourishing of different models and theories in both fields, several key questions remain unanswered. Understanding how opinions change and how they are affected by social influence are challenging issues requiring a thorough analysis not only of opinions per se but also of the way in which they travel between agents' minds and are modulated by these exchanges. To account for the two-faceted nature of opinions, which are mental entities undergoing complex social processes, we outline a preliminary model in which a cognitive theory of opinions is put forward and paired with a formal description of opinions and of their spreading among minds. Furthermore, investigating social influence also implies the need to account for the way in which people change their minds as a consequence of interacting with other people, and to explain the higher or lower persistence of such changes.
Francesca Giardini
Walter Quattrociocchi
walter.quattrociocchi@imtlucca.it
Rosaria Conte
2014-01-24T13:21:39Z
2014-01-24T13:21:39Z
http://eprints.imtlucca.it/id/eprint/2120
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2120
2014-01-24T13:21:39Z
Understanding opinions. A cognitive and formal account
The study of opinions, their formation and change, is one of the defining topics addressed by social psychology, but in recent years other disciplines, such as computer science and complexity science, have taken up this challenge. Despite the flourishing of different models and theories in both fields, several key questions remain unanswered. The aim of this paper is to challenge current theories of opinion by putting forward a cognitively grounded model in which opinions are described as specific mental representations whose main properties are laid out. A comparison with reputation is also presented.
Francesca Giardini
Walter Quattrociocchi
walter.quattrociocchi@imtlucca.it
Rosaria Conte
2014-01-24T13:16:33Z
2014-01-24T13:16:33Z
http://eprints.imtlucca.it/id/eprint/2119
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2119
2014-01-24T13:16:33Z
Emergence through selection: the evolution of a scientific challenge
One of the most interesting current scientific challenges is the analysis and understanding of the dynamics of complex networks, and of how the interactions among their components lead to emergence. In this paper we approach the definition of new methodologies for the visualization and exploration of the dynamics at play in real dynamic social networks. We present a recently introduced formalism called TVG (for time-varying graphs), which was initially developed to model and analyze highly-dynamic and infrastructure-less communication networks. As an application context, we chose the case of scientific communities, analyzing a portion of the ArXiv repository (ten years of publications in physics). The analysis presented in the paper passes through different data transformations aimed at providing different perspectives on the scientific community and its evolution.
On a first level we discuss the dataset by means of both a static and a temporal analysis of the citation and co-authorship networks. Afterward, considering that scientific communities are at the same time communities of practice (through co-authorship) and that a citation represents a deliberate selection pointing out the relevance of a work in its scientific domain, we introduce a new transformation aimed at capturing the interdependencies between collaboration patterns and citation effects, and how they drive the evolution of a goal-oriented system such as science.
Finally, we show how, through the TVG formalism and derived indicators, it is possible to capture the interaction patterns behind the emergence (selection) of a sub-community among others, as a goal-driven preferential attachment toward a set of authors that includes some key scientists (Nobel laureates) acting as attractors on the community.
Walter Quattrociocchi
walter.quattrociocchi@imtlucca.it
Frederic Amblard
2014-01-24T13:10:24Z
2014-01-24T13:10:24Z
http://eprints.imtlucca.it/id/eprint/2118
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2118
2014-01-24T13:10:24Z
Time-varying graphs and social network analysis: temporal indicators and metrics
Most instruments for social network analysis - formalisms, concepts, and metrics - fail to capture network dynamics. Typical systems exhibit different scales of dynamics, ranging from the fine-grained dynamics of interactions (which recently led researchers to consider temporal versions of distance, connectivity, and related indicators) to the evolution of network properties over longer periods of time. This paper proposes a general approach to studying that evolution for both atemporal and temporal indicators, based respectively on sequences of static graphs and on sequences of time-varying graphs that cover successive time windows. All the concepts and indicators, some of which are new, are expressed in a time-varying graph formalism.
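To make the notion of a temporal indicator over snapshot sequences concrete, here is a minimal, hypothetical sketch (not the authors' code): a time-varying graph represented as a sequence of edge sets, with an earliest-arrival ("foremost") reachability computation, one of the temporal counterparts of distance mentioned above.

```python
# Hypothetical sketch: a time-varying graph as a sequence of snapshot
# edge sets, with an earliest-arrival ("foremost") reachability check.

def foremost_arrival(snapshots, source, target):
    """Index of the first snapshot at which `target` becomes reachable
    from `source` via a time-respecting journey; one sweep of the edge
    set per snapshot. Returns None if the target is never reached."""
    reached = {source}
    if target == source:
        return 0
    for t, edges in enumerate(snapshots):
        for u, v in edges:
            if u in reached:
                reached.add(v)
            if v in reached:
                reached.add(u)
        if target in reached:
            return t
    return None

# Edge a-b exists only at time 0, edge b-c only at time 2,
# so a reaches c at time 2 even though no static path persists.
snaps = [{("a", "b")}, set(), {("b", "c")}]
print(foremost_arrival(snaps, "a", "c"))  # -> 2
```

A static analysis of the union graph would simply report a and c as connected; the temporal indicator makes the dependence on the ordering of interactions explicit.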
Nicola Santoro
Walter Quattrociocchi
walter.quattrociocchi@imtlucca.it
Paola Flocchini
Arnaud Casteigts
Frederic Amblard
2014-01-21T16:03:32Z
2014-01-21T16:03:32Z
http://eprints.imtlucca.it/id/eprint/2110
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2110
2014-01-21T16:03:32Z
Causality in collective filtering
In this paper, we describe a proposal for improving the practice of web-based collective filtering, in particular the discussion and selection of policy issues, based on the intuitive concept of causality. Causality, especially when presented in visual form, is well suited to the task: it is intuitive to understand and use and, at the same time, rich enough to create a semantic network between representations of real-world facts. We give some examples of the suggested system workflow and present guidelines for its implementation.
Mario Paolucci
Stefano Picascia
Walter Quattrociocchi
walter.quattrociocchi@imtlucca.it
2014-01-21T15:51:19Z
2014-01-21T15:51:19Z
http://eprints.imtlucca.it/id/eprint/2108
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2108
2014-01-21T15:51:19Z
Opinions manipulation: Media, power and gossip
Despite the increasing diffusion of Internet technology, TV remains the principal medium of communication. People's perceptions, knowledge, beliefs and opinions about matters of fact get (in)formed through the information reported by the media.
However, a single source of information (and consensus) can be a cause of anomalies in the structure and evolution of a society.
Hence, as the information available (and the way it is reported) is fundamental for our perceptions and opinions, defining the conditions that allow good information to be disseminated is a pressing challenge. In this paper, starting from a report on the 2008 Italian political campaign, we derive a socio-cognitive computational model of opinion dynamics in which agents are informed by different sources of information. A what-if analysis, performed through simulations over the model's parameter space, is then presented. In particular, the implemented scenario includes three main streams of information acquisition, differing in both the contents and the perceived reliability of the messages spread. Agents' internal opinions are updated either by accessing one of the information sources, namely media and experts, or by exchanging information with one another. Agents are also endowed with cognitive mechanisms to accept, reject or partially consider the acquired information.
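The kind of update loop the abstract describes can be illustrated with a minimal, hypothetical bounded-confidence-style sketch; the rule and all parameter values below are illustrative assumptions, not the authors' socio-cognitive model.

```python
# Minimal, hypothetical sketch: agents mix peer exchange with an
# external "media" source. All parameters are made up for illustration.
import random

def simulate(n=50, steps=2000, media=0.8, trust=0.3, eps=0.25, seed=0):
    rng = random.Random(seed)
    ops = [rng.random() for _ in range(n)]          # opinions in [0, 1]
    for _ in range(steps):
        i = rng.randrange(n)
        if rng.random() < trust:                    # consult the medium
            ops[i] += 0.5 * (media - ops[i])
        else:                                       # peer exchange
            j = rng.randrange(n)
            if abs(ops[i] - ops[j]) < eps:          # accept only if close
                ops[i] += 0.5 * (ops[j] - ops[i])
    return ops

final = simulate()
print(min(final), max(final))  # opinions cluster near the media value
```

Varying `trust` and `eps` over a grid is the simplest form of the what-if analysis over the parameter space that the abstract mentions.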
Walter Quattrociocchi
walter.quattrociocchi@imtlucca.it
Rosaria Conte
Elena Lodi
2014-01-21T15:43:18Z
2014-01-21T15:43:18Z
http://eprints.imtlucca.it/id/eprint/2107
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2107
2014-01-21T15:43:18Z
Taste and trust
Although taste and trust are concepts on clearly distinct ontological levels, they are strongly interrelated in several contexts. For instance, when assessing trust, e.g. through a trust network, it is important to understand the role that personal taste plays, in order both to correctly interpret potentially value-dependent trust recommendations and conclusions and to provide a sound basis for decision-making. This paper aims at exploring the relationship between taste and trust in the analysis of semantic trust networks.
Audun Jøsang
Walter Quattrociocchi
walter.quattrociocchi@imtlucca.it
Dino Karabeg
2014-01-21T15:37:10Z
2014-01-21T15:37:10Z
http://eprints.imtlucca.it/id/eprint/2106
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2106
2014-01-21T15:37:10Z
On the temporal analysis of scientific network evolution
In this paper we approach the definition of new methodologies for the visualization and exploration of social networks and their dynamics. We present a recently introduced formalism called TVG (for time-varying graphs), initially developed to model and analyze highly dynamic and infrastructure-less communication networks, together with TVG-derived metrics. As an application context, we chose the case of scientific communities, analyzing a portion of the arXiv repository (ten years of publications in physics). We discuss the dataset by means of both static and temporal analysis of the citation and co-authorship networks. Afterward, considering that scientific communities are at the same time communities of practice (through co-authorship) and that a citation represents a deliberative selection of a work among others, we introduce a new transformation to capture the co-existence of citation effects and collaboration behaviors.
Frederic Amblard
Arnaud Casteigts
Paola Flocchini
Walter Quattrociocchi
walter.quattrociocchi@imtlucca.it
Nicola Santoro
2014-01-20T14:18:47Z
2014-01-20T14:18:56Z
http://eprints.imtlucca.it/id/eprint/2101
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2101
2014-01-20T14:18:47Z
Dynamic monopolies in colored tori
Information diffusion has been modeled as the spread of a piece of information within a group through a process of social influence, where the diffusion is driven by the so-called influential network. Such a process, which has been intensively studied under the name of viral marketing, aims to select a good initial set of individuals who will promote a new idea (or message) by spreading the "rumor" within the entire social network through word of mouth. Several studies have used the linear threshold model, where the group is represented by a graph, nodes have two possible states (active, non-active), and a node adopts (activates on) a new idea when the number of its active neighbors reaches a threshold. The problem of finding a minimum set of nodes able to activate the entire network is called target set selection (TSS). In this paper we extend TSS by allowing nodes to have more than two colors. The multicolored version of TSS can be described as follows: let G be a torus where every node is assigned a color from a finite set of colors. At each local time step, each node can recolor itself, depending on the local configuration, with the color held by the majority of its neighbors. We study the initial distributions of colors leading the system to a monochromatic configuration of color k, focusing on the minimum number of initially k-colored nodes. We conclude the paper by providing the time complexity to achieve the monochromatic configuration.
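The recoloring dynamics described above can be sketched in a few lines; this is a hypothetical illustration on a 1D ring (the simplest torus) with a radius-2 neighbourhood, not the paper's construction.

```python
# Hypothetical illustration of majority recoloring on a 1D ring with a
# radius-2 neighbourhood; the paper's tori and color sets are more general.
from collections import Counter

def step(colors):
    """Synchronous update: a node adopts the color held by a strict
    majority of its four ring neighbours, otherwise keeps its own."""
    n = len(colors)
    new = []
    for i in range(n):
        neigh = [colors[(i + d) % n] for d in (-2, -1, 1, 2)]
        c, cnt = Counter(neigh).most_common(1)[0]
        new.append(c if cnt > 2 else colors[i])
    return new

def run_until_monochromatic(colors, max_steps=100):
    """Return (final color, steps taken), or (None, max_steps)."""
    for t in range(max_steps):
        if len(set(colors)) == 1:
            return colors[0], t
        colors = step(colors)
    return None, max_steps

# Five 1-colored nodes out of six: the lone 0 is outvoted in one step.
print(run_until_monochromatic([1, 1, 1, 1, 1, 0]))  # -> (1, 1)
```

Counting, over many initial configurations, how many initially k-colored nodes are needed for `run_until_monochromatic` to end in color k is a brute-force version of the question the paper studies analytically.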
Sara Brunetti
Elena Lodi
Walter Quattrociocchi
walter.quattrociocchi@imtlucca.it
2014-01-20T13:46:26Z
2014-01-20T13:46:26Z
http://eprints.imtlucca.it/id/eprint/2100
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2100
2014-01-20T13:46:26Z
Time-varying graphs and dynamic networks
The past decade has seen intensive research efforts on highly dynamic wireless and mobile networks (variously called delay-tolerant, disruption-tolerant, challenged, opportunistic, etc.) whose essential feature is the possible absence of end-to-end communication routes at any instant. As part of these efforts, a number of important concepts have been identified, based on new meanings of distance and connectivity. The main contribution of this paper is to review and integrate the collection of these concepts, formalisms, and related results found in the literature into a unified coherent framework, called TVG (for time-varying graphs). Besides this definitional work, we connect the various assumptions through a hierarchy of classes of TVGs defined with respect to properties with algorithmic significance in distributed computing. One of these classes coincides with the family of dynamic graphs over which population protocols are defined. We examine the (strict) inclusion hierarchy among the classes. The paper also provides a quick review of recent stochastic models for dynamic networks that aim to enable analytical investigation of their dynamics.
Arnaud Casteigts
Paola Flocchini
Walter Quattrociocchi
walter.quattrociocchi@imtlucca.it
Nicola Santoro
2013-12-16T14:00:01Z
2014-10-09T09:20:24Z
http://eprints.imtlucca.it/id/eprint/2072
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2072
2013-12-16T14:00:01Z
Stiffness and strength of hierarchical polycrystalline materials with imperfect interfaces
In this paper, considering a cohesive zone model (CZM) for finite thickness interfaces recently proposed by the authors, the stiffness of polycrystalline materials with imperfect interfaces is characterized. Generalized expressions for the Voigt and Reuss estimates of the effective elastic modulus of the composite are derived to interpret the numerical results. Considering a polycrystalline material with a hierarchical microstructure, the interaction between interfaces at the different hierarchical levels is numerically investigated. A condition for scale separation, which suggests how to design the optimal microstructure to maximize the material tensile strength, is determined. An original interpretation of this phenomenon based on the concept of flaw tolerance is finally proposed.
Marco Paggi
marco.paggi@imtlucca.it
Peter Wriggers
2013-12-16T13:43:56Z
2014-10-09T09:20:24Z
http://eprints.imtlucca.it/id/eprint/2071
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2071
2013-12-16T13:43:56Z
A nonlocal cohesive zone model for studying crack propagation in mechanical systems with finite thickness interfaces
Finite thickness regions between heterogeneous material constituents are often simplified as zero-thickness interfaces. The cohesive zone model (CZM) is then employed, establishing a constitutive relation between tractions and displacement discontinuities. The shape of the CZM is usually chosen to be as simple as possible for numerical reasons, rather than being physically meaningful; the reliability and the predictive capabilities of these models are therefore a serious concern. In this contribution, the complex nonlinear damage phenomena occurring in finite thickness interface regions are modeled using damage mechanics. The derived nonlinear relation between cohesive tractions and anelastic displacements is then reinterpreted as a new nonlocal CZM. Depending on the ductility of the material, different shapes of the CZM can be recovered, from linear and bilinear softening curves, typical of brittle materials, to bell-shaped curves typical of ductile materials. It is also shown that the parameters of the damage law can be tuned according to molecular dynamics simulations. The implementation of the proposed nonlocal CZM in the finite element method is then presented. Special attention is given to the numerical treatment of the related nonlocality and to the computation of the tangent stiffness matrix to be used in the Newton-Raphson method for the solution of the nonlinear boundary value problem. The numerical model is applied to polycrystalline materials, and it is shown that the nonlocal CZM is able to reproduce realistic statistical distributions of Mode I fracture energies as a consequence of the interface thickness distribution. Finally, we demonstrate that the relation between interface thickness and grain size can also be used to explain the grain size effects on the material tensile strength, namely the Hall-Petch law and its inversion at the nanoscale.
Marco Paggi
marco.paggi@imtlucca.it
Peter Wriggers
2013-12-04T10:15:40Z
2013-12-04T10:16:15Z
http://eprints.imtlucca.it/id/eprint/2017
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2017
2013-12-04T10:15:40Z
(Eds.) Karl Popper oggi: una riflessione multidisciplinare
Stefano Gattei
stefano.gattei@imtlucca.it
Andrea Borghini
2013-12-04T10:10:37Z
2013-12-04T10:10:37Z
http://eprints.imtlucca.it/id/eprint/2037
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2037
2013-12-04T10:10:37Z
Verità e relativismi. Alcune note a margine di un dibattito italiano
Stefano Gattei
stefano.gattei@imtlucca.it
2013-12-04T10:05:34Z
2013-12-04T10:12:01Z
http://eprints.imtlucca.it/id/eprint/2036
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2036
2013-12-04T10:05:34Z
Presentazione
Stefano Gattei
stefano.gattei@imtlucca.it
2013-12-03T15:38:26Z
2013-12-03T15:38:26Z
http://eprints.imtlucca.it/id/eprint/2025
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2025
2013-12-03T15:38:26Z
Alain-Philippe Segonds, 1942-2011
Stefano Gattei
stefano.gattei@imtlucca.it
2013-12-02T13:24:56Z
2014-10-09T09:20:24Z
http://eprints.imtlucca.it/id/eprint/2005
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2005
2013-12-02T13:24:56Z
Numerical modelling of intergranular fracture in polycrystalline materials and grain size effects
In this paper, the phenomenon of intergranular fracture in polycrystalline materials is investigated using a nonlinear fracture mechanics approach. The nonlocal cohesive zone model (CZM) for finite thickness interfaces recently proposed by the present authors is used to describe the phenomenon of grain boundary separation. From the modelling point of view, considering the dependency of the grain boundary thickness on the grain size observed in polycrystals, a distribution of interface thicknesses is obtained. Since the shape and the parameters of the nonlocal CZM depend on the interface thickness, a distribution of interface fracture energies results from the randomness of the material microstructure. Using these data, fracture mechanics simulations are performed and the homogenized stress-strain curves of 2D representative volume elements (RVEs) are computed. Failure is the result of a diffuse microcrack pattern leading to a main macroscopic crack after coalescence, in good agreement with experimental observation. Finally, testing microstructures characterized by different average grain sizes, the computed peak stresses are found to depend on the grain size, in agreement with the trend expected from the Hall-Petch law.
Marco Paggi
marco.paggi@imtlucca.it
Peter Wriggers
2013-12-02T13:20:31Z
2014-10-09T09:20:24Z
http://eprints.imtlucca.it/id/eprint/2004
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2004
2013-12-02T13:20:31Z
Thermomechanical deformations in photovoltaic laminates
Recent experimental results based on the digital image correlation technique (U. Eitner, M. Köntges, R. Brendel, Solar Energy Mater. Solar Cells, 2010, 94, 1346–1351) show that the gap between solar cells embedded into a standard photovoltaic laminate varies with temperature. The variation of this gap is an important quantity to assess the integrity of the electric connection between solar cells when exposed to service conditions. In this paper, the thermo-elastic deformations in photovoltaic laminates are analytically investigated by developing different approximate models based on the multilayered beam theory. It is found that the temperature-dependent thermo-elastic properties of the encapsulating polymer layer are responsible for the deviation from linearity experimentally observed in the diagram relating the gap variation to the temperature. The contribution of the different material constituents to the homogenized elastic modulus and thermal expansion coefficient of the composite system is also properly quantified through the definition of weight factors of practical engineering use.
Marco Paggi
marco.paggi@imtlucca.it
Sarah Kajari-Schröder
Ulrich Eitner
2013-12-02T13:11:20Z
2014-10-09T09:20:24Z
http://eprints.imtlucca.it/id/eprint/2003
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2003
2013-12-02T13:11:20Z
Seismic analysis of concrete gravity dams: nonlinear fracture mechanics models and size-scale effects
The phenomenon of interface crack propagation in concrete gravity dams under seismic loading is herein addressed. This problem is particularly important from the engineering point of view: besides Mixed-Mode crack growth in concrete, dam failure is often the result of crack propagation along the rock-concrete interface at the dam foundation. To analyze such a problem, the generalized interface constitutive law recently proposed by the first author is used to properly model the phenomenon of crack closing and reopening at the interface. A damage variable is also introduced in the cohesive zone formulation in order to predict crack propagation under repeated loadings. Special attention is given to the complexity resulting from the solution of the nonlinear dynamic problem and to the choice of the interface constitutive parameters, taking into account the important size-scale effects observed in these cyclopean structures. Numerical examples show the capabilities of the proposed approach when applied to concrete gravity dams.
Marco Paggi
marco.paggi@imtlucca.it
Giuseppe Ferro
Franco Braga
2013-12-02T13:06:05Z
2014-10-09T09:20:24Z
http://eprints.imtlucca.it/id/eprint/2000
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/2000
2013-12-02T13:06:05Z
An analytical model based on strain localisation for the study of size-scale and slenderness effects in uniaxial compression tests
In this paper, an analytical model based on the concept of strain localisation is proposed for the analysis and prediction of the response of quasi-brittle materials in uniaxial compression tests, such as mortar, plain concrete with different compression strengths, as well as fibre-reinforced concrete. The proposed approach, referred to as Overlapping Crack Model, relies only on a pair of material constitutive laws, in close analogy with the Cohesive Crack Model: a stress–strain relationship describing the pre-peak behaviour of the material and a stress–interpenetration relationship for the description of the post-peak response. In the paper it will be shown how the stress–interpenetration relationship can be deduced from experimental data and how it depends on the compression strength and on the crushing energy of the tested materials. A wide comparison between the stress–displacement curves predicted by the proposed model and those experimentally found in the literature will show the effectiveness of the present approach to capture both stable softening or sharp snap-back post-peak branches by varying the slenderness or the size-scale of the tested samples.
Alberto Carpinteri
Mauro Corrado
Marco Paggi
marco.paggi@imtlucca.it
2013-12-02T12:56:31Z
2014-10-09T09:20:24Z
http://eprints.imtlucca.it/id/eprint/1999
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1999
2013-12-02T12:56:31Z
Contact conductance of rough surfaces composed of modified RMD patches
The dependence of the contact conductance of self-affine rough surfaces on the applied pressure is studied using the electric-mechanical analogy, which relates the contact conductance to the normal stiffness. Following dimensional analysis arguments, an efficient dimensionless formulation is proposed which minimizes the number of dimensionless variables governing the phenomenon. Assuming incomplete similarity in the dimensionless pressure, a power-law dependence between contact conductance and mean pressure is proposed. This is confirmed by earlier semi-empirical correlations, which are recovered as special cases of the proposed formulation. To compute the exponent β of the power law, and relate it to the morphological properties of the surfaces, we numerically test self-affine rough surfaces composed of random midpoint displacement (RMD) patches. Such patches are generated using a modified RMD algorithm in order to decouple the effect of the long wavelength cut-off from that due to microscale roughness. Numerical results show that the long wavelength cut-off has an important effect on the contact conductance, whereas the sampling interval and the fractal dimension are less important. The effect of elastic interaction between asperities has also been quantified and significantly influences the predicted power-law exponent β.
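The RMD construction mentioned above can be conveyed with a one-dimensional sketch; this fragment is illustrative only, since the paper works with 2D patches and a modified algorithm.

```python
# Illustrative 1D sketch of random midpoint displacement (RMD); the
# 2D patch construction used in the paper is analogous but modified.
import random

def rmd_profile(levels, hurst, sigma=1.0, seed=0):
    """Self-affine profile with 2**levels + 1 points: at each level the
    midpoints are perturbed with a standard deviation reduced by the
    factor 2**hurst, which controls the fractal dimension 2 - hurst."""
    rng = random.Random(seed)
    pts = [0.0, 0.0]                 # fixed endpoints
    s = sigma
    for _ in range(levels):
        s /= 2 ** hurst              # roughness decays with scale
        nxt = []
        for a, b in zip(pts, pts[1:]):
            nxt += [a, (a + b) / 2 + rng.gauss(0.0, s)]
        nxt.append(pts[-1])
        pts = nxt
    return pts

prof = rmd_profile(levels=8, hurst=0.7)
print(len(prof))  # -> 257 points
```

Generating many such profiles with different seeds, and contacting them against a rigid plane, is the simplest route to the kind of numerical experiment the abstract describes.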
Marco Paggi
marco.paggi@imtlucca.it
J.R. Barber
2013-12-02T12:40:31Z
2014-10-09T09:20:24Z
http://eprints.imtlucca.it/id/eprint/1998
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1998
2013-12-02T12:40:31Z
Dimensional analysis and fractal modeling of fatigue crack growth
In the present paper, generalized Paris and Wöhler equations are derived according to dimensional analysis and incomplete similarity concepts. They provide a rational interpretation to a majority of empirical power-law criteria used in fatigue. In particular, they are able to model the effects of the grain size, of the initial crack length, as well as of the size-scale of the tested specimen on the crack growth rate and on the fatigue life. Regarding the important issue of crack-size dependencies of the Paris’ coefficient C and of the fatigue threshold, an independent approach, based on the application of fractal geometry concepts, is proposed to model such an anomalous behavior. As a straightforward consequence of the fractality of the crack surfaces, the fractal approach provides scaling laws fully consistent with those determined from dimensional analysis arguments. The proposed scaling laws are applied to relevant experimental data related to the crack-size and to the structural-size dependencies of the fatigue parameters in metals and in quasi-brittle materials. Finally, paying attention to the limit points defining the range of validity of the classical Wöhler and Paris power-law relationships, correlations between the so-called cyclic or fatigue properties are proposed, giving a rational explanation to the experimental trends observed in the material property charts.
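As a concrete reference point for the power-law relationships the paper generalizes, here is a hedged sketch of the classical Paris law da/dN = C(ΔK)^m with ΔK = YΔσ√(πa); the parameter values are made up, and the code shows only the baseline law, not the generalized equations derived in the paper.

```python
# Hedged sketch of the classical Paris law da/dN = C * (dK)**m with
# dK = Y * dsigma * sqrt(pi * a); all numbers below are made up.
import math

def paris_life(a0, ac, C, m, dsigma, Y=1.0, steps=100_000):
    """Cycles to grow a crack from a0 to ac (midpoint-rule integration)."""
    da = (ac - a0) / steps
    N, a = 0.0, a0
    for _ in range(steps):
        dK = Y * dsigma * math.sqrt(math.pi * (a + da / 2))
        N += da / (C * dK ** m)
        a += da
    return N

def paris_life_closed(a0, ac, C, m, dsigma, Y=1.0):
    """Closed-form fatigue life, valid for m != 2."""
    k = C * (Y * dsigma * math.sqrt(math.pi)) ** m
    p = 1 - m / 2
    return (ac ** p - a0 ** p) / (k * p)

# Made-up values: lengths in mm, stresses in MPa, C in matching units.
args = dict(a0=1.0, ac=25.0, C=1e-12, m=3.0, dsigma=100.0)
print(round(paris_life(**args)), round(paris_life_closed(**args)))
```

Note the strong sensitivity of the predicted life to the initial crack length a0, which is precisely the crack-size dependence whose anomalies the abstract addresses.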
Alberto Carpinteri
Marco Paggi
marco.paggi@imtlucca.it
2013-12-02T12:35:40Z
2014-10-09T09:20:24Z
http://eprints.imtlucca.it/id/eprint/1997
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1997
2013-12-02T12:35:40Z
Contact mechanics of microscopically rough surfaces with graded elasticity
The well-known Greenwood and Williamson contact theory for microscopically homogeneous rough surfaces is generalized by considering functionally graded elastic rough surfaces. In particular, two distinct cases giving rise to a non-constant Young’s modulus with depth are considered: (I) an initially plane layered (or graded) solid which is non-uniformly eroded, so that the final product is a rough surface with asperities having an elastic modulus depending on the height; (II) an initially homogeneous rough surface which receives a surface treatment or a chemical degradation which modify the elastic properties of the asperities as a function of the depth from the exposed surface. These Functionally Graded Surfaces (FGS) can be observed both in biological systems and in mechanical components. The effects of graded elasticity on the relationship between real contact area versus applied load, and on the plasticity index are quantified and illustrated with numerical examples. It will be shown that the contact response may differ up to one order of magnitude with respect to that of a homogeneous surface. Comparison between Case I and Case II also shows that, for special surface properties, the two types of grading can provide the same mechanical response.
Marco Paggi
marco.paggi@imtlucca.it
Giorgio Zavarise
2013-12-02T12:12:14Z
2014-10-09T09:20:24Z
http://eprints.imtlucca.it/id/eprint/1996
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1996
2013-12-02T12:12:14Z
A nonlocal cohesive zone model for finite thickness interfaces – Part II: FE implementation and application to polycrystalline materials
Numerical aspects of the nonlocal cohesive zone model (CZM) presented in Part I are discussed in this companion paper. They include the FE implementation of the proposed nonlocal CZM in the framework of zero-thickness interface elements and the numerical treatment of the related nonlocality. In particular, a Newton–Raphson method, combined with a series expansion to obtain tentative values for the cohesive tractions, is used to efficiently compute the tangent stiffness matrix and the residual vector of the interface elements. Then, numerical applications to polycrystalline materials are proposed, focusing on the constitutive modelling of the finite thickness interfaces between the grains. It will be shown that the parameters of the nonlocal CZM (shape, peak stress, fracture energy) depend on the thickness of the interface. The CZM is able to produce statistical distributions of Mode I fracture energies consistent with those assumed a priori in stochastic fracture mechanics studies. The statistical variability of fracture parameters, originating from the natural variability of the interface thicknesses, has an important influence on the crack patterns observed from simulated tensile tests. Finally, we show that the relation between interface thickness and grain size can be used to explain the grain-size effects on the material tensile strength. In particular, considering a sublinear relation between the interface thickness and the grain diameter at the microscale, the nonlocal CZM is able to recover the Hall–Petch law. Therefore, the proposed model suggests that an inverse relation between the interface thickness and the grain size would lead to an inversion of the Hall–Petch law as well. This new interpretation seems to be confirmed by experimental data at the nanoscale, where the inversion of the Hall–Petch law coincides with the anomalous increase of the interface thickness by reducing the grain size.
Marco Paggi
marco.paggi@imtlucca.it
Peter Wriggers
2013-12-02T12:05:06Z
2014-10-09T09:20:24Z
http://eprints.imtlucca.it/id/eprint/1995
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1995
2013-12-02T12:05:06Z
A nonlocal cohesive zone model for finite thickness interfaces – Part I: Mathematical formulation and validation with molecular dynamics
A nonlocal cohesive zone model is derived taking into account the properties of finite thickness interfaces. The functional expression of the stress–separation relationship, which bridges the gap between continuum damage mechanics and nonlinear fracture mechanics, depends on the complex failure phenomena affecting the material microstructure of the interface region. More specifically, the shape of the nonlocal cohesive zone model is found to depend on the damage evolution. Damage, in its turn, is a function of dissipative mechanisms occurring at lower length scales, such as dislocation motion, breaking of interatomic bonds, and the formation of free surfaces and microvoids, which are usually analyzed by molecular dynamics. Hence, the relationship between the parameters of the damage law and the outcome of molecular dynamics simulations available in the literature is also established. The proposed nonlocal cohesive zone model therefore also provides the proper mathematical framework for interpreting molecular dynamics-based stress–separation relationships, which are typically nonlocal since they always refer to a finite number of atom layers.
Marco Paggi
marco.paggi@imtlucca.it
Peter Wriggers
2013-12-02T12:02:08Z
2016-04-06T09:38:43Z
http://eprints.imtlucca.it/id/eprint/1994
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1994
2013-12-02T12:02:08Z
A dimensional analysis interpretation to grain size and loading frequency dependencies of the Paris and Wöhler curves
In this paper, a mathematical model based on dimensional analysis and incomplete self-similarity is proposed for the interpretation of the grain size and loading frequency effects on the Paris and Wöhler regimes in metals. In particular, it is demonstrated that these effects correspond to a violation of the physical similitude hypothesis underlying the simplest Paris and Wöhler power-law fatigue relationships. As a consequence, generalized representations of fatigue have to be invoked. From the physical point of view, the incomplete similarity behaviour can be regarded as the result of the multiscale character of the problem, where the crack length and the grain size are the two interacting length scales. Moreover, it will be shown that the relationship between strength and grain size (Hall-Petch relationship) also has to be considered in order to consistently interpret the two opposite effects of the grain size on the Paris and Wöhler regimes within a unified framework. The incomplete similarity exponents are suitably quantified according to experimental results for aluminum, copper, titanium and nickel. The derived scaling laws are expected to be of paramount importance today, especially after the advent of ultrafine-grained materials, which offer unique mechanical properties owing to their fine microstructure.
Oleg Plekhov
poa@icmm.ru
Marco Paggi
marco.paggi@imtlucca.it
O. Naimark
Alberto Carpinteri
2013-12-02T11:46:18Z
2014-10-09T09:20:24Z
http://eprints.imtlucca.it/id/eprint/1992
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1992
2013-12-02T11:46:18Z
Singular harmonic problems at a wedge vertex: mathematical analogies between elasticity, diffusion, electromagnetism, and fluid dynamics
Multimaterial wedges are frequently observed in composite materials. They consist of two or more sectors of dissimilar materials joined together, whose interfaces converge at the same vertex. Due to the mismatch in material properties such as Young’s modulus, thermal conductivity, dielectric permittivity, or magnetic permeability, these geometrical configurations can lead to singular fields at the junction vertex. This paper discusses mathematical analogies, focused on singular harmonic problems, between the antiplane shear problem in elasticity due to mode III loading or torsion, the steady-state heat transfer problem, and the diffraction of waves in electromagnetism. In the case of a single-material wedge, a mathematical analogy between elasticity and fluid dynamics is also outlined. The proposed unified mathematical formulation is particularly convenient for the identification of common types of singularities (of power-law or logarithmic type), the definition of a standardized method to solve nonlinear eigenvalue problems, and the determination of common geometrical and material configurations allowing the relief or removal of different singularities.
Alberto Carpinteri
Marco Paggi
marco.paggi@imtlucca.it
2013-12-02T11:32:14Z
2014-10-09T09:20:24Z
http://eprints.imtlucca.it/id/eprint/1991
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1991
2013-12-02T11:32:14Z
Modelling fatigue in quasi-brittle materials with incomplete self-similarity concepts
In this study, a generalized Barenblatt and Botvina dimensional analysis approach to fatigue crack growth is proposed in order to highlight and explain the deviations from the classical power-law equations used to characterize the fatigue behaviour of quasi-brittle materials. According to this theoretical approach, the microstructural-size (related to the volumetric content of fibers in fiber-reinforced concrete), the crack-size, and the size-scale effects on the Paris’ law and on the Wöhler equation are presented within a unified mathematical framework. Relevant experimental results taken from the literature are used to confirm the theoretical trends and to determine the values of the incomplete self-similarity exponents. All this information is expected to be useful for the design of experiments, since the role of the different dimensionless numbers governing the phenomenon of fatigue is herein elucidated. Finally, a numerical model based on damage mechanics and nonlinear fracture mechanics is proposed for the prediction of uniaxial S–N curves, showing how to efficiently use the information gained from dimensional analysis and how the shape of the S–N curves is influenced by the parameters of the damage model.
Marco Paggi
marco.paggi@imtlucca.it
2013-11-12T12:18:51Z
2013-11-12T12:18:51Z
http://eprints.imtlucca.it/id/eprint/1899
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1899
2013-11-12T12:18:51Z
Spatial Complex Network Analysis and Accessibility Indicators: the Case of Municipal Commuting in Sardinia, Italy
This paper presents a contribution to the modelling of accessibility indicators for commuters moving through the municipalities of Sardinia, Italy. Spatial complex network analysis is integrated into the construction of accessibility measures: one of the most relevant outcomes of the first tool, the detection of shortest road paths and distances, is adopted as an input for the second in modelling accessibility indicators. Instead of the Euclidean distances often adopted in the literature, shortest road distances are chosen, since commuting implies movements that are usually repeated daily and very likely subject, even unconsciously, to space- and time-minimization strategies.
In particular, two commuter accessibility indicators are constructed, one based on travel cost and one on a spatial interaction model with an impedance function calibrated in exponential and in power form. The accessibility indicators are compared with each other and with relevant socio-economic and infrastructure characteristics of Sardinia.
In addition, their spatial distribution and their different implications for decision-making and planning are described. The travel-cost-based accessibility indicator has a municipal spatial distribution strongly influenced by the main road infrastructure of the island. By contrast, the spatial-interaction-based accessibility indicators are more reliable in their capacity to confirm the leading socio-economic role of the municipalities in the metropolitan area of the capital, Cagliari.
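The two impedance forms mentioned in the abstract, exponential and power, can be sketched for a generic spatial-interaction accessibility indicator A_i = Σ_j O_j f(c_ij). The opportunity weights, the distance matrix, and the calibration parameter beta below are illustrative, not data from the paper.

```python
import math

def accessibility(i, opportunities, dist, beta=0.1, form="exp"):
    """Spatial-interaction accessibility of zone i: A_i = sum_j O_j * f(c_ij),
    where c_ij is the shortest road distance and f is the impedance function,
    either exponential exp(-beta*c) or power c**(-beta)."""
    total = 0.0
    for j, O_j in enumerate(opportunities):
        if j == i:
            continue  # a zone's accessibility sums over the other zones
        c = dist[i][j]
        f = math.exp(-beta * c) if form == "exp" else c ** (-beta)
        total += O_j * f
    return total
```

The exponential form discounts distant opportunities much more steeply than the power form, which is why the calibration of beta matters for the resulting municipal rankings.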
Andrea De Montis
Simone Caschili
Alessandro Chessa
alessandro.chessa@imtlucca.it
2013-11-12T11:54:55Z
2013-11-12T11:54:55Z
http://eprints.imtlucca.it/id/eprint/1898
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1898
2013-11-12T11:54:55Z
Time evolution of complex networks: commuting systems in insular Italy
The aim of this paper is to study the dynamics of the commuting system of two insular regions of Italy, Sardinia and Sicily, inspected as complex networks. The authors refer to a 20-year time period and take into account three census data sets about the work and study-driven inter-municipal origin-destination movements of residential inhabitants in 1981, 1991 and 2001. Since it is likely that the number of municipalities (in this case, the vertices of the system) does not display sharp variations, the authors direct the study to the variation of the properties emerging through both a topological and a weighted network representation of commuting in the time periods indicated.
Andrea De Montis
Simone Caschili
Alessandro Chessa
alessandro.chessa@imtlucca.it
2013-11-12T11:26:16Z
2013-11-12T11:26:16Z
http://eprints.imtlucca.it/id/eprint/1897
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1897
2013-11-12T11:26:16Z
Disentangling the proteome: re-evaluations of topological insights from yeast protein interaction networks
To understand living cells one must study them as systems rather than as collections of individual molecules. The abstract representation of intracellular systems as "networks" is fruitful because it makes it possible to study these systems as a whole, ignoring the details of individual components while retaining the complexity of their interactions. This chapter reviews the discoveries made by applying approaches from the science of complex networks to Protein Interaction Networks, i.e. undirected networks in which the nodes represent proteins and pairs are connected by edges if the proteins physically interact. Over the last decade the experimental techniques for measuring protein interactions have improved greatly and large numbers of new protein interactions have been elucidated. Therefore, along with the reviewed concepts and discoveries, we provide a re-evaluation of several previous conclusions by analyzing a set of high-quality networks from the organism S. cerevisiae (baker's yeast), based on recent experimental data. These interaction networks are obtained from three distinct experimental methodologies: 1) literature curation (LC; combining low-throughput experiments), 2) affinity purification followed by mass spectrometry (AP-MS), and 3) the yeast two-hybrid system (Y2H). Through the analysis of these new high-quality networks we wish to demonstrate which of the previous conclusions (some dating back almost a decade) still hold as of 2010, and to highlight the differences between Protein Interaction Networks obtained by different experimental techniques. Indeed, we find very distinct topological properties in these different networks, in accordance with other papers that have reported contradictory results when analyzing different datasets. Previous conclusions mainly hold for the new high-quality data from Y2H experiments.
We end with a discussion of which experimental technique provides the most relevant interaction data for the purpose of constructing wiring diagrams of the proteome, i.e. Protein Interaction Networks.
Vincenzo De Leo
Francesco Ricci
Nicola Soranzo
Alessandro Chessa
alessandro.chessa@imtlucca.it
Alberto de la Fuente
2013-11-08T10:45:37Z
2014-07-07T10:29:45Z
http://eprints.imtlucca.it/id/eprint/1896
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1896
2013-11-08T10:45:37Z
SapRete: networked knowledge for the recovery of logical-mathematical and scientific skills
Susanna Setzu
Alessandro Chessa
alessandro.chessa@imtlucca.it
Michelangelo Puliga
michelangelo.puliga@imtlucca.it
Maria Polo
Maria Cristina Mereu
2013-11-08T10:18:45Z
2013-11-08T10:18:45Z
http://eprints.imtlucca.it/id/eprint/1895
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1895
2013-11-08T10:18:45Z
Integrating the electric grid and the commuter network through a 'Vehicle to Grid' concept: a Complex Networks Theory approach
Alfonso Damiano
Guido Caldarelli
guido.caldarelli@imtlucca.it
Alessandro Chessa
alessandro.chessa@imtlucca.it
Antonio Scala
2013-11-05T14:18:55Z
2013-11-05T14:18:55Z
http://eprints.imtlucca.it/id/eprint/1861
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1861
2013-11-05T14:18:55Z
The global value chains of multinational groups in Italy
Armando Rungi
armando.rungi@imtlucca.it
2013-10-04T10:56:22Z
2013-10-04T11:09:42Z
http://eprints.imtlucca.it/id/eprint/1828
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1828
2013-10-04T10:56:22Z
New Encoding Schemes with Infofuses
Kyeng Min Park
Choongik Kim
Samuel W. Thomas III
Hyo Jae Yoon
Greg Morrison
greg.morrison@imtlucca.it
L. Mahadevan
George M. Whitesides
2013-10-04T10:29:32Z
2014-12-05T09:20:01Z
http://eprints.imtlucca.it/id/eprint/1825
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1825
2013-10-04T10:29:32Z
Compaction and tensile forces determine the accuracy of folding landscape parameters from single molecule pulling experiments
We establish a framework for assessing whether the transition state location of a biopolymer, which can be inferred from single molecule pulling experiments, corresponds to the ensemble of structures that have equal probability of reaching either the folded or unfolded states (Pfold=0.5). Using results for the forced unfolding of a RNA hairpin, an exactly soluble model, and an analytic theory, we show that Pfold is solely determined by s, an experimentally measurable molecular tensegrity parameter, which is a ratio of the tensile force and a compaction force that stabilizes the folded state. Applications to folding landscapes of DNA hairpins and a leucine zipper with two barriers provide a structural interpretation of single molecule experimental data. Our theory can be used to assess whether molecular extension is a good reaction coordinate using measured free energy profiles.
Greg Morrison
greg.morrison@imtlucca.it
Changbong Hyeon
Michael Hinczewski
D. Thirumalai
2013-10-04T10:20:55Z
2013-10-04T10:20:55Z
http://eprints.imtlucca.it/id/eprint/1824
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1824
2013-10-04T10:20:55Z
Asymmetric network connectivity using weighted harmonic averages
We propose a non-metric measure of the "closeness" felt between two nodes in an undirected, weighted graph, using a simple weighted harmonic average of connectivity: a real-valued Generalized Erdős Number (GEN). While our measure is developed with a collaborative network in mind, the approach can be of use in a variety of artificial and real-world networks. We are able to distinguish between network topologies that standard distance metrics view as identical, and use our measure to study some simple analytically tractable networks. We show how this might be used to look at asymmetry in authorship networks such as those that inspired the integer Erdős numbers in mathematical coauthorships. We also show the utility of our approach by devising a ratings scheme that we apply to the data from the Netflix Prize, finding a significant improvement over a baseline.
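The weighted harmonic average at the heart of the measure can be sketched directly; the numerical example is illustrative, not data from the paper.

```python
def weighted_harmonic_average(values, weights):
    """Weighted harmonic average H = (sum_k w_k) / (sum_k w_k / x_k).
    Strong ties (large w) to 'close' nodes (small x) pull H down sharply,
    unlike an arithmetic mean.  In a closeness measure built this way, node i
    averages over i's own neighbourhood while node j averages over j's, which
    is the source of the asymmetry discussed in the abstract."""
    num = sum(weights)
    den = sum(w / x for w, x in zip(weights, values))
    return num / den
```

For equal weights the harmonic average of 1 and 3 is 1.5, i.e. it leans toward the smaller (closer) value, whereas the arithmetic mean would be 2.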
Greg Morrison
greg.morrison@imtlucca.it
L. Mahadevan
2013-09-27T12:43:42Z
2013-09-30T11:58:30Z
http://eprints.imtlucca.it/id/eprint/1805
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1805
2013-09-27T12:43:42Z
A Bayesian copula model for Claims Reserving
Luca Regis
luca.regis@imtlucca.it
2013-09-27T12:10:43Z
2013-09-27T12:10:43Z
http://eprints.imtlucca.it/id/eprint/1803
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1803
2013-09-27T12:10:43Z
A Bayesian copula model for stochastic claims reserving
We present a full Bayesian model for assessing the reserve requirement of multiline non-life insurance companies. Bayesian models for claims reserving make it possible to account for expert knowledge in the evaluation of outstanding loss liabilities, allowing the use of additional information at low cost. This paper combines a standard Bayesian approach for estimating the marginal distributions of the single lines of business of a non-life insurance company with a Bayesian copula procedure for estimating aggregate reserves. The model we present allows a company to "mix" its own assessment of the dependence between lines of business with market-wide estimates provided by regulators. We illustrate results for the single lines of business and compare standard copula aggregation, for different copula choices, with the Bayesian copula approach.
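The copula-aggregation step (not the full Bayesian estimation) can be sketched with a Gaussian copula and lognormal marginal reserves; with these marginals the copula draw reduces to exponentiating correlated normals. All parameter values and the two-line setup are illustrative.

```python
import math
import numpy as np

def aggregate_reserve_samples(mu, sigma, rho, n=100_000, seed=0):
    """Sample the aggregate reserve of two Lines of Business whose marginal
    reserves are lognormal(mu_k, sigma_k) and whose dependence is a Gaussian
    copula with correlation rho."""
    rng = np.random.default_rng(seed)
    cov = [[sigma[0] ** 2, rho * sigma[0] * sigma[1]],
           [rho * sigma[0] * sigma[1], sigma[1] ** 2]]
    z = rng.multivariate_normal(mu, cov, size=n)  # correlated normals
    reserves = np.exp(z)            # marginal lognormal reserves per LoB
    return reserves.sum(axis=1)     # aggregate reserve per scenario
```

A high quantile of the returned sample, e.g. `np.quantile(total, 0.995)`, is the kind of aggregate reserve requirement such a model targets; the dependence parameter rho is where own-assessments and regulator estimates would be mixed.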
Luca Regis
luca.regis@imtlucca.it
2013-09-27T11:46:40Z
2013-09-27T11:46:40Z
http://eprints.imtlucca.it/id/eprint/1801
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1801
2013-09-27T11:46:40Z
Natural Delta Gamma Hedging of Longevity and Interest Rate Risk. ICER Working Paper
The paper presents closed-form Delta and Gamma hedges for annuities and death assurances in the presence of both longevity and interest-rate risk. Longevity risk is modeled through an extension of the classical Gompertz law, while interest-rate risk is modeled via a Hull-White process. We theoretically provide natural hedging strategies, also considering contracts written on different generations. We provide an example calibrated to the UK population and bond market, compute longevity exposures, and explicitly calculate Delta-Gamma hedges. Reinsurance is needed in order to set up portfolios that are Delta-Gamma neutral to both longevity and interest-rate risk.
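The Gompertz building block of the longevity model can be sketched as a survival probability; the paper's actual extension of the Gompertz law and its stochastic dynamics are not reproduced here, and all parameter values below are illustrative. The flat discount rate stands in for the Hull-White term structure.

```python
import math

def gompertz_survival(t, lam0=0.0005, g=0.09):
    """Survival probability over horizon t under a deterministic Gompertz
    mortality intensity lambda(s) = lam0 * exp(g * s):
    S(t) = exp(-(lam0 / g) * (exp(g * t) - 1))."""
    return math.exp(-(lam0 / g) * (math.exp(g * t) - 1.0))

def annuity_value(T, r=0.02, lam0=0.0005, g=0.09):
    """Present value of a unit annual survival-contingent annuity up to T,
    discounted at a flat rate r (a stand-in for a stochastic short rate)."""
    return sum(math.exp(-r * t) * gompertz_survival(t, lam0, g)
               for t in range(1, T + 1))
```

Longevity Delta and Gamma would then be first and second derivatives of values like `annuity_value` with respect to the mortality-intensity risk factor.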
Elisa Luciano
Luca Regis
luca.regis@imtlucca.it
Elena Vigna
2013-09-17T13:07:22Z
2013-09-17T13:07:22Z
http://eprints.imtlucca.it/id/eprint/1781
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1781
2013-09-17T13:07:22Z
Structural Properties of Optimal Coordinate-Convex Policies for CAC with Nonlinearly-Constrained Feasibility Regions
Necessary optimality conditions for Call Admission Control (CAC) problems with nonlinearly-constrained feasibility regions and two classes of users are derived. The policies are restricted to the class of coordinate-convex policies. Two kinds of structural properties of the optimal policies and their robustness with respect to changes of the feasibility region are investigated: 1) general properties not depending on the revenue ratio associated with the two classes of users and 2) more specific properties depending on such a ratio. The results allow one to narrow the search for the optimal policies to a suitable subset of the set of coordinate-convex policies.
Mario Marchese
Marco Cello
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Marcello Sanguineti
2013-09-17T13:06:59Z
2013-09-17T13:06:59Z
http://eprints.imtlucca.it/id/eprint/1767
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1767
2013-09-17T13:06:59Z
Functional Optimization in OR Problems with Very Large Numbers of Variables
Functional optimization, or "infinite-dimensional programming", investigates the minimization (or maximization) of functionals over admissible solutions belonging to infinite-dimensional spaces of functions. In OR applications, such functions may express, e.g.,
-releasing policies in water-resources management;
-exploration strategies in stochastic graphs;
-routing strategies in telecommunication networks;
-input/output mappings in learning from data, etc.
The infinite dimension makes many tools used in mathematical programming inapplicable, and variational methods provide closed-form solutions only in particular cases. Suboptimal solutions can be sought via "linear approximation schemes", i.e., linear combinations of fixed basis functions (e.g., polynomial expansions): the functional problem is reduced to optimizing the coefficients of the linear combinations ("Ritz method"). Most often, admissible solutions are functions of many variables, related, e.g., to
-reservoirs in water-resources management;
-nodes of a communication network;
-items in inventory problems;
-freeway sections in traffic management.
Unfortunately, linear schemes may be computationally inefficient because of the "curse of dimensionality": the number of basis functions necessary to obtain a desired accuracy may grow very fast with the number of variables. This motivates the "Extended Ritz Method" (ERIM), based on nonlinear approximation schemes formed by linear combinations of computational units containing "inner" parameters; these parameters make the schemes nonlinear and are optimized, together with the coefficients of the combinations, via nonlinear programming algorithms. Experimental results show that this approach achieves surprisingly good performance. We present recent theoretical results that give insights into the possibility of coping with the curse of dimensionality in functional optimization via the ERIM, when admissible solutions depend on very large numbers of variables.
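The linear "Ritz" reduction described above, replacing a functional problem with optimization over the coefficients of fixed basis functions, can be sketched for the simplest functional, a discretized L2 distance to a target function. The polynomial basis, the grid, and the target are illustrative choices, not the paper's setting.

```python
import numpy as np

def ritz_linear_fit(target, n_basis, grid):
    """Ritz method with the fixed polynomial basis {1, x, ..., x**(n-1)}:
    the functional Phi(f) = ||f - target||_2 (evaluated on a grid) is
    minimized over linear combinations of the basis, which reduces to a
    least-squares problem in the coefficients."""
    A = np.vander(grid, n_basis, increasing=True)  # basis evaluated on grid
    coeffs, *_ = np.linalg.lstsq(A, target(grid), rcond=None)
    residual = np.linalg.norm(A @ coeffs - target(grid))
    return coeffs, residual
```

In the ERIM the basis functions themselves contain tunable "inner" parameters, so this single least-squares step is replaced by full nonlinear programming over both inner parameters and coefficients.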
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Marcello Sanguineti
Riccardo Zoppoli
2013-09-17T13:06:33Z
2013-09-17T13:06:33Z
http://eprints.imtlucca.it/id/eprint/1789
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1789
2013-09-17T13:06:33Z
Team Optimization Problems with Lipschitz Continuous Strategies
Sufficient conditions for the existence and Lipschitz continuity of optimal strategies for static team optimization problems are studied. Revised statements and proofs of some results that appeared in the literature are presented, and their extensions are discussed. As an example of application, optimal production in a multidivisional firm is considered.
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Marcello Sanguineti
2013-09-17T13:06:11Z
2013-09-17T13:06:11Z
http://eprints.imtlucca.it/id/eprint/1778
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1778
2013-09-17T13:06:11Z
Some Comparisons of Complexity in Dictionary-Based and Linear Computational Models
Neural networks provide a more flexible approximation of functions than traditional linear regression. In the latter, one can only adjust the coefficients in linear combinations of fixed sets of functions, such as orthogonal polynomials or Hermite functions, while for neural networks one may also adjust the parameters of the functions being combined. However, some useful properties of linear approximators (such as uniqueness, homogeneity, and continuity of best-approximation operators) are not satisfied by neural networks. Moreover, optimization of parameters in neural networks is more difficult than in linear regression. Experimental results suggest that these drawbacks of neural networks are offset by substantially lower model complexity, allowing accurate approximation even in high-dimensional cases. We give some theoretical results comparing requirements on model complexity for two types of approximators: the traditional linear ones and so-called variable-basis types, which include neural networks and radial and kernel models. We compare upper bounds on worst-case errors in variable-basis approximation with lower bounds on such errors for any linear approximator. Using methods from nonlinear approximation and integral representations tailored to computational units, we describe some cases where neural networks outperform any linear approximator.
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Věra Kůrková
Marcello Sanguineti
2013-09-17T13:05:48Z
2013-09-17T13:05:48Z
http://eprints.imtlucca.it/id/eprint/1766
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1766
2013-09-17T13:05:48Z
Functional Optimization by Variable-Basis Approximation Schemes
This is a summary of the author’s PhD thesis, supervised by Marcello Sanguineti and defended on April 2, 2009 at Università degli Studi di Genova. The thesis is written in English and a copy is available from the author upon request. Functional optimization problems arising in Operations Research are investigated. In such problems, a cost functional Φ has to be minimized over an admissible set S of d-variable functions. As, in general, closed-form solutions cannot be derived, suboptimal solutions are searched for, having the form of variable-basis functions, i.e., elements of the set span_n G of linear combinations of at most n elements from a set G of computational units. Upper bounds on inf_{f ∈ S ∩ span_n G} Φ(f) − inf_{f ∈ S} Φ(f) are obtained. Conditions are derived under which the estimates do not exhibit the so-called “curse of dimensionality” in the number n of computational units when the number d of variables grows. The problems considered include dynamic optimization, team optimization, and supervised learning from data.
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
2013-09-17T13:05:26Z
2013-09-17T13:05:26Z
http://eprints.imtlucca.it/id/eprint/1779
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1779
2013-09-17T13:05:26Z
A Stochastic Knapsack Problem with Nonlinear Capacity Constraint
There exist various generalizations and stochastic variants of the NP-hard 0/1 knapsack problem [1,2]. The following model is considered here. A knapsack of capacity C is given, together with K classes of objects. The stochastic nature comes into play since, in contrast to the classical knapsack, the objects belonging to each class become available randomly. The inter-arrival times are exponentially distributed with means depending on the class and on the state of the knapsack. Each object has a sojourn time independent of the sojourn times of the other objects and described by a class-dependent distribution.
The other difference with respect to the classical model is the following generalization. For k = 1, ..., K, let n_k be the number of objects of class k currently inside the knapsack; then the portion of the knapsack occupied by them is given by a nonlinear function b_k(n_k). When included in the knapsack, an object from class k generates revenue at a positive rate r_k. Objects can be placed into the knapsack as long as the sum of their sizes does not exceed the capacity C. The problem consists in finding a policy that maximizes the average revenue by accepting or rejecting the arriving objects depending on the current state of the knapsack. A priori knowledge of structural properties of the (unknown) optimal policies is useful for finding satisfactorily accurate suboptimal policies. The family of coordinate-convex policies is considered here, and structural properties of the optimal policies within it are investigated. New insights into a criterion proposed in [3] to improve coordinate-convex policies are discussed, and the greedy algorithm presented in [5] is further developed. Applications to Call Admission Control (CAC) for telecommunication networks are discussed. In this case, the objects are connection requests coming from K different classes of users, each with an associated bandwidth requirement and a distribution of its duration.
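The nonlinear capacity constraint sum_k b_k(n_k) <= C can be written down directly; the two occupancy functions below are illustrative placeholders (a linear one, and a sublinear one modelling a multiplexing-style packing gain), not the paper's specification.

```python
def occupied(state, b_funcs):
    """Total knapsack occupancy for state (n_1, ..., n_K): sum_k b_k(n_k)."""
    return sum(b(n) for b, n in zip(b_funcs, state))

def admissible(state, b_funcs, C):
    """A state is admissible iff the nonlinear occupancy does not exceed C."""
    return occupied(state, b_funcs) <= C

# Illustrative classes: linear occupancy, and a sublinear b_2(n) = 3 * n**0.8
# under which additional class-2 objects pack progressively "tighter".
b_funcs = (lambda n: 2.0 * n, lambda n: 3.0 * n ** 0.8)
```

A coordinate-convex policy would then admit an arriving object only if the resulting state stays inside a coordinate-convex subset of the admissible region, not merely inside the region itself.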
Marco Cello
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Mario Marchese
Marcello Sanguineti
2013-09-17T07:44:28Z
2013-09-17T07:44:28Z
http://eprints.imtlucca.it/id/eprint/1751
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1751
2013-09-17T07:44:28Z
Can Dictionary-Based Computational Models Outperform the Best Linear Ones?
Approximation capabilities of two types of computational models are explored: dictionary-based models (i.e., linear combinations of n-tuples of basis functions computable by units belonging to a set called “dictionary”) and linear ones (i.e., linear combinations of n fixed basis functions). The two models are compared in terms of approximation rates, i.e., speeds of decrease of approximation errors for a growing number n of basis functions. Proofs of upper bounds on approximation rates by dictionary-based models are inspected, to show that for individual functions they do not imply estimates for dictionary-based models that do not hold also for some linear models. Instead, the possibility of getting faster approximation rates by dictionary-based models is demonstrated for worst-case errors in approximation of suitable sets of functions. For such sets, even geometric upper bounds hold.
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Věra Kůrková
Marcello Sanguineti
2013-09-17T07:40:56Z
2013-09-17T07:40:56Z
http://eprints.imtlucca.it/id/eprint/1747
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1747
2013-09-17T07:40:56Z
Approximate Dynamic Programming by Variable-Basis Schemes: Error Analysis and Numerical Results
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Marcello Sanguineti
Mauro Gaggero
2013-09-13T11:34:55Z
2013-09-16T12:03:00Z
http://eprints.imtlucca.it/id/eprint/1719
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1719
2013-09-13T11:34:55Z
On a Variational Norm Tailored to Variable-Basis Approximation Schemes
A variational norm associated with sets of computational units and used in function approximation, learning from data, and infinite-dimensional optimization is investigated. For sets G_K obtained by varying a vector y of parameters in a fixed-structure computational unit K(·, y) (e.g., the set of Gaussians with free centers and widths), upper and lower bounds on the G_K-variation norms of functions having certain integral representations are given in terms of the L1-norms of the weighting functions in such representations. Families of functions for which the two norms are equal are described.
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Marcello Sanguineti
2013-09-13T11:30:21Z
2013-09-16T12:02:59Z
http://eprints.imtlucca.it/id/eprint/1718
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1718
2013-09-13T11:30:21Z
CAC with Nonlinearly-Constrained Feasibility Regions
Two criteria are proposed to characterize and improve suboptimal coordinate-convex (c.c.) policies in Call Admission Control (CAC) problems with nonlinearly-constrained feasibility regions. Then, a structural property of the optimal c.c. policies is derived. This is expressed in terms of constraints on the relative positions of successive corner points.
Marco Cello
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Mario Marchese
Marcello Sanguineti
2013-09-12T11:06:41Z
2013-09-16T12:03:00Z
http://eprints.imtlucca.it/id/eprint/1696
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1696
2013-09-12T11:06:41Z
Bounds for Approximate Solutions of Fredholm Integral Equations Using Kernel Networks
The approximation of solutions of integral equations by networks with kernel units is investigated theoretically. Upper bounds are derived on the speed of decrease of the errors in approximating solutions of Fredholm integral equations by kernel networks with increasing numbers of units. The estimates are obtained for Gaussian and degenerate kernels.
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Věra Kůrková
Marcello Sanguineti
2013-09-11T14:00:57Z
2013-09-16T12:02:59Z
http://eprints.imtlucca.it/id/eprint/1685
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1685
2013-09-11T14:00:57Z
A Generalized Stochastic Knapsack Problem with Application in Call Admission Control
Marco Cello
Giorgio Gnecco
giorgio.gnecco@imtlucca.it
Mario Marchese
Marcello Sanguineti
2013-07-10T10:42:01Z
2013-07-10T10:42:01Z
http://eprints.imtlucca.it/id/eprint/1640
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1640
2013-07-10T10:42:01Z
Ordinary Least Squares and Genetic Algorithms Optimization in Smoothing Transition Autoregressive (STAR) Models
In this paper we propose and examine additional membership functions for Smoothing Transition Autoregressive (STAR) models, and we propose least squares with genetic algorithm optimization in order to find the optimal parameters of the fuzzy membership functions. More specifically, we present the hyperbolic tangent, Gaussian, and generalized bell functions. The motivation is that STAR models follow a fuzzy-logic approach, so more transition functions should be tested. Numerical applications to S&P 500 and FTSE 100 stock returns and to the unemployment rate are presented, and MATLAB routines are provided.
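The three membership (transition) functions named in the abstract can be written down directly, together with a minimal two-regime STAR skeleton; all parameter values are illustrative, and the paper's actual routines are in MATLAB.

```python
import math

def logistic_tanh(s, gamma, c):
    """Hyperbolic-tangent transition, rescaled to lie in (0, 1)."""
    return 0.5 * (1.0 + math.tanh(gamma * (s - c)))

def gaussian(s, gamma, c):
    """Gaussian transition, equal to 1 at the location s = c."""
    return math.exp(-gamma * (s - c) ** 2)

def generalized_bell(s, a, b, c):
    """Generalized bell function, equal to 1 at s = c; a sets the width
    and b the steepness of the shoulders."""
    return 1.0 / (1.0 + abs((s - c) / a) ** (2 * b))

def star_forecast(y_lag, phi1, phi2, G):
    """Two-regime STAR(1): a smooth mixture of two AR(1) regimes,
    weighted by the transition value G in [0, 1]."""
    return (1.0 - G) * phi1 * y_lag + G * phi2 * y_lag
```

With G computed from a transition variable (e.g. a lagged return), the model interpolates smoothly between the phi1 and phi2 regimes instead of switching abruptly.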
Eleftherios Giovanis
eleftherios.giovanis@imtlucca.it
2013-07-10T09:46:59Z
2013-07-10T09:46:59Z
http://eprints.imtlucca.it/id/eprint/1639
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1639
2013-07-10T09:46:59Z
Application of Adaptive Neuro-Fuzzy Inference System in Interest Rates Effects on Stock Returns
In the current study we examine the effects of interest rate changes on the common stock returns of the Greek banking sector. We examine a Generalized Autoregressive Conditional Heteroskedasticity (GARCH) process and an Adaptive Neuro-Fuzzy Inference System (ANFIS). We find that the changes in interest rates, based on the GARCH model, are insignificant for common stock returns during the period we examine. On the other hand, with ANFIS we can extract the rules, and in each case the effects can be positive or negative depending on the conditions and the firing rules of the inputs, information that cannot be retrieved with traditional econometric modelling. Furthermore, we examine the forecasting performance of both models and conclude that ANFIS outperforms the GARCH model in both the in-sample and out-of-sample periods.
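The GARCH side of the comparison can be sketched as the standard GARCH(1,1) conditional-variance recursion (the ANFIS side, with its learned rule base, is not reproduced here); the parameter values are illustrative.

```python
def garch11_variances(returns, omega=1e-6, alpha=0.08, beta=0.9):
    """GARCH(1,1) recursion: sigma2_t = omega + alpha*r_{t-1}**2 + beta*sigma2_{t-1},
    initialized at the unconditional variance omega / (1 - alpha - beta)."""
    sigma2 = omega / (1.0 - alpha - beta)
    out = [sigma2]
    for r in returns:
        sigma2 = omega + alpha * r * r + beta * sigma2
        out.append(sigma2)
    return out
```

A large return shock raises the next conditional variance, while a run of zero returns lets it decay back; interest-rate effects would enter such a model through exogenous regressors in the mean or variance equation.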
Eleftherios Giovanis
eleftherios.giovanis@imtlucca.it
2013-07-10T09:32:33Z
2013-07-10T09:32:33Z
http://eprints.imtlucca.it/id/eprint/1638
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1638
2013-07-10T09:32:33Z
Application of a Modified Generalized Regression Neural Networks Algorithm in Economics and Finance
In this paper we propose a modified Generalized Regression Neural Network Autoregressive model (GRNN-AR) for S&P 500 and FTSE 100 index returns, as well as for the gross domestic product growth rates of Italy, the USA, and the UK. We compare the forecasts with Generalized Autoregressive Conditional Heteroskedasticity (GARCH) and Autoregressive Integrated Moving Average (ARIMA) models. The results indicate that the GRNN significantly outperforms the conventional econometric models and can be an efficient alternative tool for forecasting. The MATLAB algorithm we propose is provided in the appendix for further applications, suggestions, modifications, and improvements.
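The core of a GRNN is Nadaraya-Watson kernel regression over the training set; a minimal sketch of the predictor follows, with an illustrative bandwidth (the paper's actual modified algorithm is in MATLAB and is not reproduced).

```python
import math

def grnn_predict(x, train_x, train_y, sigma=0.5):
    """Generalized Regression Neural Network prediction: a kernel-weighted
    average of training targets, yhat = sum_i y_i K_i / sum_i K_i with
    Gaussian kernels K_i = exp(-(x - x_i)**2 / (2 * sigma**2))."""
    weights = [math.exp(-((x - xi) ** 2) / (2.0 * sigma ** 2))
               for xi in train_x]
    return sum(w * y for w, y in zip(weights, train_y)) / sum(weights)
```

In the autoregressive (GRNN-AR) variant, x is a lagged observation and the training pairs are (y_{t-1}, y_t), giving a one-step-ahead forecast.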
Eleftherios Giovanis
eleftherios.giovanis@imtlucca.it
2013-06-10T12:12:48Z
2013-06-10T12:12:48Z
http://eprints.imtlucca.it/id/eprint/1612
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1612
2013-06-10T12:12:48Z
[review of] Gunilla Budde, Eckart Conze and Cornelia Rauh (eds.), Bürgertum nach dem bürgerlichen Zeitalter. Leitbilder und Praxis seit 1945
Fiammetta Balestracci
fiammetta.balestracci@imtlucca.it
2013-06-10T11:15:56Z
2013-06-10T11:15:56Z
http://eprints.imtlucca.it/id/eprint/1611
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1611
2013-06-10T11:15:56Z
[review of] Emma Scaramuzza (ed.), Politica e amicizia. Relazioni, conflitti e differenze di genere (1860-1915)
Fiammetta Balestracci
fiammetta.balestracci@imtlucca.it
2013-05-31T08:44:51Z
2013-05-31T08:44:51Z
http://eprints.imtlucca.it/id/eprint/1599
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1599
2013-05-31T08:44:51Z
News of the world: legge di Murphy e diritti TV alla Corte di Giustizia
The paper critically analyses the ECJ decision interpreting several references for a preliminary ruling submitted by the High Court of Justice of England and Wales, concerning the selling and exploitation of TV rights to sporting events between the English league, the FAPL, and broadcasters such as BSkyB. Particular emphasis is placed on the competition issues raised by the territorial exclusivity clauses commonly used in the agreements between the parties, as well as on the tension between completing the European common market and protecting intellectual property rights.
Andrea Giannaccari
a.giannaccari@imtlucca.it
2013-05-03T11:38:21Z
2013-05-03T11:38:21Z
http://eprints.imtlucca.it/id/eprint/1567
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1567
2013-05-03T11:38:21Z
Proceedings of 7th International Workshop on Automated Specification and Verification of Web Systems (WWV 2011).
This volume contains the final and revised versions of the papers presented at the 7th International Workshop on Automated Specification and Verification of Web Systems (WWV 2011). The workshop was held in Reykjavik, Iceland, on June 9, 2011, as part of DisCoTec 2011. The aim of the WWV workshop series is to provide an interdisciplinary forum to facilitate cross-fertilization and the advancement of hybrid methods that exploit concepts and tools drawn from rule-based programming, software engineering, formal methods, and Web-oriented research. Nowadays, many companies and institutions have turned their Web sites into interactive, fully automated, Web-based applications for, e.g., e-business, e-learning, e-government, and e-health. The increased complexity and explosive growth of Web systems have made their design and implementation a challenging task. Systematic, formal approaches to their specification and verification make it possible to address the problems of this specific domain with automated and effective techniques and tools. In response to this year's call for papers, we received 9 submissions. The Program Committee of WWV 2011 collected three reviews for each paper and held an electronic discussion leading to the selection of 7 papers for presentation at the workshop. In addition to the selected papers, the scientific programme included an invited lecture by Elie Najm.
Laura Kovács
Rosario Pugliese
Francesco Tiezzi
francesco.tiezzi@imtlucca.it
2013-05-03T11:36:21Z
2013-05-03T11:36:21Z
http://eprints.imtlucca.it/id/eprint/1568
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1568
2013-05-03T11:36:21Z
The Sensoria Approach Applied to the Finance Case Study
This chapter provides an effective implementation of (part of) the Sensoria approach, specifically the modelling and formal analysis of service-oriented software based on mathematically founded techniques. The ‘Finance case study’ is used as a test bed for demonstrating the feasibility and effectiveness of the process calculus COWS and some of its related analysis techniques and tools. In particular, we report the results of applying a temporal logic and its model checker to express and check functional properties of services, and a type system to guarantee confidentiality properties of services.
Stefania Gnesi
Rosario Pugliese
Francesco Tiezzi
francesco.tiezzi@imtlucca.it
2013-05-02T12:39:11Z
2013-05-02T12:39:11Z
http://eprints.imtlucca.it/id/eprint/1579
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1579
2013-05-02T12:39:11Z
A standard-driven communication protocol for disconnected clinics in rural areas
The importance of the Electronic Health Record (EHR), which stores all healthcare-related data belonging to a patient, has been recognized in recent years by governments, institutions, and industry. Initiatives like Integrating the Healthcare Enterprise (IHE) have been developed for the definition of standard methodologies for secure and interoperable EHR exchanges among clinics and hospitals. Using the requisites specified by these initiatives, many large-scale projects have been set up to enable healthcare professionals to handle patients' EHRs. Applications deployed in these settings are often considered safety-critical, thus ensuring security properties such as confidentiality, authentication, and authorization is crucial for their success. In this paper, we propose a communication protocol, based on the IHE specifications, for authenticating healthcare professionals and assuring patients' safety in settings where no network connection is available, such as in rural areas of some developing countries. We define a specific threat model, driven by the experience of use cases covered by international projects, and prove that an intruder cannot damage the safety of patients or their data by performing any of the attacks falling within this threat model. To demonstrate the feasibility and effectiveness of our protocol, we have fully implemented it.
Massimiliano Masi
Rosario Pugliese
Francesco Tiezzi
francesco.tiezzi@imtlucca.it
2013-04-30T13:43:46Z
2013-04-30T13:44:36Z
http://eprints.imtlucca.it/id/eprint/1556
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1556
2013-04-30T13:43:46Z
Human mobility models for opportunistic networks
Mobile ad hoc networks enable communications between clouds of mobile devices without the need for a preexisting infrastructure. One of their most interesting evolutions is opportunistic networks, whose goal is to also enable communication in disconnected environments, where the general absence of an end-to-end path between the sender and the receiver impairs communication when legacy MANET networking protocols are used. The key idea of OppNets is that the mobility of nodes helps the delivery of messages, because it may connect, asynchronously in time, otherwise disconnected subnetworks. This is especially true for networks whose nodes are mobile devices (e.g., smartphones and tablets) carried by human users, which is the typical OppNets scenario. In such a network, where the movements of the communicating devices mirror those of their owners, finding a route between two disconnected devices implies uncovering habits in human movements and patterns in their connectivity (frequency of meetings, average duration of a contact, etc.), and exploiting them to predict future encounters. Therefore, there is a challenge in studying human mobility, specifically in its application to OppNets research. In this article we review the state of the art in the field of human mobility analysis and present a survey of mobility models. We start by reviewing the most significant findings regarding the nature of human movements, which we classify along the spatial, temporal, and social dimensions of mobility. We discuss the shortcomings of the existing knowledge about human movements and extend it with the notions of predictability and patterns. We then survey existing approaches to mobility modeling and fit them into a taxonomy that provides the basis for a discussion on open problems and further directions for research on modeling human mobility.
Dmytro Karamshuk
dmytro.karamshuk@imtlucca.it
Chiara Boldrini
Marco Conti
Andrea Passarella
2013-03-07T12:49:58Z
2013-03-12T14:58:11Z
http://eprints.imtlucca.it/id/eprint/1524
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1524
2013-03-07T12:49:58Z
Classification of music genres using sparse representations in overcomplete dictionaries
This paper presents a simple but efficient and robust method for music genre classification that utilizes sparse representations in overcomplete dictionaries. In the training step, dictionaries are created with the K-SVD algorithm such that data from a particular music genre has a sparse representation in its dictionary. In the classification step, the Orthogonal Matching Pursuit (OMP) algorithm is used to classify feature vectors that consist only of Linear Predictive Coding (LPC) coefficients. The paper analyses in detail a popular case study from the literature, the ISMIR 2004 database. Using the presented method, the correct classification rate over the 6 music genres is 85.59%, a result comparable with the best results published so far.
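The classification step described above, sparse-coding a feature vector against each genre's dictionary and picking the genre with the smallest reconstruction residual, can be sketched as follows. This is an illustrative toy (a greedy OMP over hypothetical dictionaries), not the authors' K-SVD/LPC pipeline:

```python
import numpy as np

def omp(D, x, k):
    """Greedy Orthogonal Matching Pursuit: sparse-code x over dictionary D
    (columns = atoms), selecting at most k atoms."""
    residual = x.astype(float).copy()
    support, coef = [], np.zeros(D.shape[1])
    for _ in range(k):
        # pick the atom most correlated with the current residual
        idx = int(np.argmax(np.abs(D.T @ residual)))
        if idx not in support:
            support.append(idx)
        # re-fit coefficients on the selected support by least squares
        sol, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        coef[:] = 0.0
        coef[support] = sol
        residual = x - D @ coef
    return coef

def classify(dictionaries, x, k=2):
    """Assign x to the class whose dictionary yields the smallest
    reconstruction residual after sparse coding."""
    errors = {genre: np.linalg.norm(x - D @ omp(D, x, k))
              for genre, D in dictionaries.items()}
    return min(errors, key=errors.get)
```

In the paper the dictionaries come from K-SVD training on LPC feature vectors; here they are stand-ins.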
Cristian Rusu
cristian.rusu@imtlucca.it
2013-02-28T12:15:01Z
2013-02-28T12:15:01Z
http://eprints.imtlucca.it/id/eprint/1494
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1494
2013-02-28T12:15:01Z
Serendipitous Fuzzy Item Recommendation with ProfileMatcher
In this paper an approach to serendipitous item recommendation is outlined. The model used for this task is an extension of ProfileMatcher, which is based on fuzzy metadata describing both the user and the items to be recommended. To address the task of recommending serendipitous resources, a priori knowledge of the relations among metadata values is injected into the recommendation process. This is achieved using fuzzy graphs to model similarity relations among the elements of the fuzzy sets describing the metadata. Experiments on the MovieLens data set show the impact of serendipity injection on the item recommendation process.
Danilo Dell’Agnello
Anna Fanelli
Corrado Mencar
Massimo Minervini
massimo.minervini@imtlucca.it
2013-02-20T09:41:55Z
2013-02-20T09:41:55Z
http://eprints.imtlucca.it/id/eprint/1484
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1484
2013-02-20T09:41:55Z
The Puzzle of Job Search and Housing Tenure: a Reconciliation of Theory and Empirical Evidence
This paper attempts to reconcile the empirical evidence with the argument in favour of a positive effect of homeownership on exit rates from unemployment, known as "Oswald's thesis". While theory suggests that homeowners experience more difficulty than renters in exiting unemployment due to lower residential mobility, the empirical literature has typically found shorter unemployment durations for homeowners. Taking into account some of the reasons for the falsification of Oswald's thesis, we provide evidence which supports it. First, in a theoretical model of endogenous job search we show that homeowners' higher moving costs imply unambiguously lower search intensity and lower job finding rates, even though an opposite effect operates for jobs which do not require a move. Then, in the empirical analysis we use data drawn from the British Household Panel Survey to compare job search intensity measures by housing tenure. We find that, controlling for housing costs and for different residential statuses, non-employed outright owners have a markedly lower attachment to the labour market than renters, and that this effect is even more evident when they are compared to private renters.
Andrea Morescalchi
andrea.morescalchi@imtlucca.it
2013-02-20T09:36:06Z
2013-02-20T09:36:06Z
http://eprints.imtlucca.it/id/eprint/1483
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1483
2013-02-20T09:36:06Z
Housing Tenure and Individual Labour Market Outcomes. An Empirical Assessment Based on the UK Labour Force Survey
We analyse the impact of housing tenure on labour market outcomes using individual data from the UK Labour Force Survey. In defining residential status, we distinguish between outright owners and mortgage-holders, and between social and private renters. We estimate both a binary model for the probability of being unemployed and a hazard model for exits out of unemployment. In both models we test for endogeneity of housing tenure. In the binary model, exogeneity is rejected, so we perform endogenous multinomial treatment-effects estimation. In the hazard model we find no evidence of unobserved heterogeneity, so estimates are performed assuming exogeneity. Results show that mortgage-holders have the lowest probability of being unemployed and the highest job finding rates, while social renters exhibit the worst performance. Whether private renters perform better than outright owners remains a matter of debate: while we have no evidence in favour of this claim, the evidence in favour of the opposite is only modest.
Andrea Morescalchi
andrea.morescalchi@imtlucca.it
2013-02-20T09:15:19Z
2013-02-20T09:15:19Z
http://eprints.imtlucca.it/id/eprint/1482
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1482
2013-02-20T09:15:19Z
Housing Tenure and Job Search Behaviour. A Different Analysis of the Impact of the UK Jobseeker’s Allowance
This paper investigates the relation between job search effort and housing tenure by focussing on the impact of the UK Jobseeker's Allowance reform introduced in 1996. Theory suggests that a tightening of job search requirements, as implied by this reform, increases movements off benefit among the non-employed with low search intensity, and that the size of this effect varies with housing tenure. Average Treatment Effect estimates confirm that the impact of the reform on the claimant outflow rate is related to housing tenure.
Francesco Arzilli
Andrea Morescalchi
andrea.morescalchi@imtlucca.it
2012-12-19T10:42:21Z
2013-03-12T14:57:00Z
http://eprints.imtlucca.it/id/eprint/1453
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1453
2012-12-19T10:42:21Z
Improving Europeana Search Experience Using Query Logs
Europeana is a long-term project funded by the European Commission with the goal of making Europe's cultural and scientific heritage accessible to the public. Since 2008, about 1500 institutions have contributed to Europeana, enabling people to explore the digital resources of Europe's museums, libraries, and archives. The huge amount of collected multilingual, multimedia data is made available today through the Europeana portal, a search engine allowing users to explore such content through textual queries. One of the most important techniques for enhancing users' search experience in large information spaces is the exploitation of the knowledge contained in query logs. In this paper we present a characterization of the Europeana query log, showing statistics on common behavioural patterns of Europeana users. Our analysis highlights some significant differences between the Europeana query log and the historical data collected in general-purpose Web search engine logs. In particular, we find that both the query and the search session distributions behave differently. Finally, we use this information to design a query recommendation technique aimed at enhancing the functionality of the Europeana portal.
Diego Ceccarelli
diego.ceccarelli@imtlucca.it
Sergiu Gordea
Claudio Lucchese
Franco Maria Nardini
Gabriele Tolomei
2012-12-19T10:23:55Z
2013-03-12T14:57:00Z
http://eprints.imtlucca.it/id/eprint/1451
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1451
2012-12-19T10:23:55Z
Discovering Europeana users’ search behavior
Europeana is a strategic project funded by the European Commission with the goal of making Europe's cultural and scientific heritage accessible to the public. ASSETS is a two-year Best Practice Network co-funded by the CIP PSP Programme to improve performance, accessibility and usability of the Europeana search engine. Here we present a characterization of the Europeana logs by showing statistics on common behavioural patterns of the Europeana users.
Diego Ceccarelli
diego.ceccarelli@imtlucca.it
Sergiu Gordea
Claudio Lucchese
Franco Maria Nardini
Raffaele Perego
Gabriele Tolomei
2012-12-19T09:37:52Z
2013-03-12T14:57:00Z
http://eprints.imtlucca.it/id/eprint/1450
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1450
2012-12-19T09:37:52Z
The Sindice-2011 Dataset for Entity-Oriented Search in the Web of Data
The task of entity retrieval becomes increasingly prevalent as more and more (semi-)structured information about objects is available on the Web in the form of documents embedding metadata (RDF, RDFa, Microformats, and others). However, research and development in that direction depend on (1) the availability of a representative corpus of entities found on the Web, and (2) the availability of an entity-oriented search infrastructure for experimenting with new retrieval models. In this paper, we introduce the Sindice-2011 data collection, which is derived from data collected by the Sindice semantic search engine. The data collection (available at http://data.sindice.com/trec2011/) is especially designed to support research in the domain of Web entity retrieval. We describe how the corpus is organised, discuss statistics of the data collection, and introduce a search infrastructure to foster research and development.
Stephane Campinas
Diego Ceccarelli
diego.ceccarelli@imtlucca.it
Thomas E. Perry
Renaud Delbru
Krisztian Balog
Giovanni Tummarello
2012-12-19T09:15:48Z
2013-03-12T14:57:01Z
http://eprints.imtlucca.it/id/eprint/1449
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1449
2012-12-19T09:15:48Z
Caching query-biased snippets for efficient retrieval
Web search engines' result pages contain references to the top-k documents relevant to the query submitted by a user. Each document is represented by a title, a snippet, and a URL. Snippets, i.e. short sentences showing the portions of the document relevant to the query, help users select the most interesting results. The snippet generation process is very expensive, since it may require accessing a number of documents for each issued query. We assert that caching, a popular technique used to enhance performance at various levels of computing systems, can be very effective in this context. We design and experiment with several cache organizations, and we introduce the concept of a supersnippet, that is, the set of sentences in a document that are most likely to answer future queries. We show that supersnippets can be built by exploiting query logs, and that in our experiments a supersnippet cache answers up to 62% of the requests, remarkably outperforming other caching approaches.
Diego Ceccarelli
diego.ceccarelli@imtlucca.it
Claudio Lucchese
Salvatore Orlando
Raffaele Perego
Fabrizio Silvestri
2012-10-17T15:08:48Z
2012-10-17T15:30:22Z
http://eprints.imtlucca.it/id/eprint/1411
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1411
2012-10-17T15:08:48Z
[review of] Maria Casalini, Famiglie comuniste. Ideologie e vita quotidiana nell’Italia degli anni Cinquanta
Fiammetta Balestracci
fiammetta.balestracci@imtlucca.it
2012-10-17T15:00:08Z
2012-10-17T15:02:36Z
http://eprints.imtlucca.it/id/eprint/1410
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1410
2012-10-17T15:00:08Z
Democrazia Proletaria: a party's challenge in the face of movements and violence [review of: William Gambetta, Democrazia proletaria: la nuova sinistra tra piazze e Palazzi]
Fiammetta Balestracci
fiammetta.balestracci@imtlucca.it
2012-10-12T10:42:48Z
2013-06-11T12:03:33Z
http://eprints.imtlucca.it/id/eprint/1391
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1391
2012-10-12T10:42:48Z
Zwischen ideologischer Diversifikation und politisch-kulturellem Pragmatismus. Die Beziehung zwischen der Partito Comunista Italiano und der SED (1968-1989)
In this essay the author examines the cultural and political relationships between the Italian Communist Party and the Socialist Unity Party (SED) of East Germany during the weakening of the Soviet system and of communist ideology.
Fiammetta Balestracci
fiammetta.balestracci@imtlucca.it
2012-09-26T14:49:47Z
2013-03-07T12:56:25Z
http://eprints.imtlucca.it/id/eprint/1385
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1385
2012-09-26T14:49:47Z
Modular Termination and Combinability for Superposition Modulo Counter Arithmetic
Modularity is a highly desirable property in the development of satisfiability procedures. In this paper we are interested in using a dedicated superposition calculus to develop satisfiability procedures for (unions of) theories sharing counter arithmetic. In the first place, we are concerned with the termination of this calculus for theories representing data structures and their extensions. To this purpose, we prove a modularity result for termination which allows us to use our superposition calculus as a satisfiability procedure for combinations of data structures. In addition, we present a general combinability result that permits us to use our satisfiability procedures within a non-disjoint combination method à la Nelson-Oppen without loss of completeness. This latter result is useful whenever data structures are combined with theories for which superposition is not applicable, like theories of arithmetic.
Christophe Ringeissen
Valerio Senni
valerio.senni@imtlucca.it
2012-09-26T13:50:03Z
2013-03-07T12:56:25Z
http://eprints.imtlucca.it/id/eprint/1384
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1384
2012-09-26T13:50:03Z
Reachability Analysis via Specialization of Constraint Logic Programs
Fabio Fioravanti
Alberto Pettorossi
Maurizio Proietti
Valerio Senni
valerio.senni@imtlucca.it
2012-09-18T13:23:48Z
2013-03-07T12:56:25Z
http://eprints.imtlucca.it/id/eprint/1358
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1358
2012-09-18T13:23:48Z
Program Specialization for Verifying Infinite State Systems: An Experimental Evaluation
We address the problem of the automated verification of temporal properties of infinite state reactive systems. We present some improvements of a verification method based on the specialization of constraint logic programs (CLP). First, we reformulate the verification method as a two-phase procedure: (1) in the first phase a CLP specification of an infinite state system is specialized with respect to the initial state of the system and the temporal property to be verified, and (2) in the second phase the specialized program is evaluated by using a bottom-up strategy. In this paper we propose some new strategies for performing program specialization during the first phase. We evaluate the effectiveness of these new strategies, as well as that of some old strategies, by presenting the results of experiments performed on several infinite state systems and temporal properties. Finally, we compare the implementation of our specialization-based verification method with various constraint-based model checking tools. The experimental results show that our method is effective and competitive with respect to the methods used in those other tools.
Fabio Fioravanti
Alberto Pettorossi
Maurizio Proietti
Valerio Senni
valerio.senni@imtlucca.it
2012-09-18T13:12:39Z
2013-03-07T12:56:25Z
http://eprints.imtlucca.it/id/eprint/1357
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1357
2012-09-18T13:12:39Z
Program transformation for development, verification, and synthesis of programs
This paper briefly describes the use of the program transformation methodology for the development of correct and efficient programs. In particular, we refer to the case of constraint logic programs and, through some examples, we show how program transformation can be used to improve, synthesize, and verify programs.
Fabio Fioravanti
Alberto Pettorossi
Maurizio Proietti
Valerio Senni
valerio.senni@imtlucca.it
2012-09-18T12:52:48Z
2013-03-07T12:56:25Z
http://eprints.imtlucca.it/id/eprint/1356
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1356
2012-09-18T12:52:48Z
Improving Reachability Analysis of Infinite State Systems by Specialization
We consider infinite state reactive systems specified by using linear constraints over the integers, and we address the problem of verifying safety properties of these systems by applying reachability analysis techniques. We propose a method based on program specialization, which improves the effectiveness of backward and forward reachability analyses. For backward reachability our method consists in: (i) specializing the reactive system with respect to the initial states, and then (ii) applying to the specialized system a reachability analysis that works backwards from the unsafe states. For forward reachability our method works in the same way, except that the roles of the initial states and the unsafe states are interchanged. We have implemented our method using the MAP transformation system and the ALV verification system. Through various experiments performed on several infinite state systems, we have shown that our specialization-based verification technique considerably increases the number of successful verifications without significantly degrading the time performance.
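The backward reachability analysis at the core of the method iterates pre-images of the unsafe states to a fixpoint and then checks whether an initial state was reached. A toy finite-state sketch of that fixpoint (the paper works on infinite state systems via CLP specialization; this only illustrates the underlying iteration):

```python
def backward_reach(transitions, unsafe, init):
    """Toy backward reachability on a finite transition relation:
    grow the set of states that can reach an unsafe state by repeatedly
    adding pre-images, then report whether every initial state is safe."""
    reach = set(unsafe)
    changed = True
    while changed:
        changed = False
        for src, dst in transitions:
            if dst in reach and src not in reach:
                reach.add(src)   # src can reach an unsafe state in one step
                changed = True
    return not (reach & set(init))   # True means the system is safe
```

Forward reachability is the mirror image: start from the initial states, iterate post-images, and test intersection with the unsafe states.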
Fabio Fioravanti
Alberto Pettorossi
Maurizio Proietti
Valerio Senni
valerio.senni@imtlucca.it
2012-09-18T12:20:53Z
2013-03-07T12:56:25Z
http://eprints.imtlucca.it/id/eprint/1355
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1355
2012-09-18T12:20:53Z
Modular Termination and Combinability for Superposition Modulo Counter Arithmetic
Modularity is a highly desirable property in the development of satisfiability procedures. In this paper we are interested in using a dedicated superposition calculus to develop satisfiability procedures for (unions of) theories sharing counter arithmetic. In the first place, we are concerned with the termination of this calculus for theories representing data structures and their extensions. To this purpose, we prove a modularity result for termination which allows us to use our superposition calculus as a satisfiability procedure for combinations of data structures. In addition, we present a general combinability result that permits us to use our satisfiability procedures within a non-disjoint combination method à la Nelson-Oppen without loss of completeness. This latter result is useful whenever data structures are combined with theories for which superposition is not applicable, like theories of arithmetic.
Christophe Ringeissen
Valerio Senni
valerio.senni@imtlucca.it
2012-09-18T12:09:21Z
2013-03-07T12:56:25Z
http://eprints.imtlucca.it/id/eprint/1354
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1354
2012-09-18T12:09:21Z
Controlling Polyvariance for Specialization-based Verification
We present some extensions of a method for verifying safety properties of infinite state reactive systems. Safety properties are specified by constraint logic programs encoding (backward or forward) reachability algorithms. These programs are transformed, before their use for checking safety, by specializing them with respect to the initial states (in the case of backward reachability) or with respect to the unsafe states (in the case of forward reachability). In particular, we present a specialization strategy which is more general than previous proposals, and we show, through experiments performed on several infinite state reactive systems, that by using the specialized reachability programs obtained by our new strategy we considerably increase the number of successful verifications. We then show that the specialization time, the size of the specialized program, and the number of successful verifications may vary depending on the polyvariance introduced by the specialization, that is, the set of specialized predicates which have been introduced. Finally, we propose a general framework for controlling polyvariance and we use our set of examples of infinite state reactive systems to compare experimentally various control strategies one may apply in practice.
Fabio Fioravanti
Alberto Pettorossi
Maurizio Proietti
Valerio Senni
valerio.senni@imtlucca.it
2012-09-18T10:43:10Z
2013-03-07T12:56:25Z
http://eprints.imtlucca.it/id/eprint/1353
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1353
2012-09-18T10:43:10Z
Constraint-Based Correctness Proofs for Logic Program Transformations
Many approaches proposed in the literature for proving the correctness of unfold/fold transformations of logic programs make use of measures associated with program clauses. When from a program P1 we derive a program P2 by applying a sequence of transformations, suitable conditions on the measures of the clauses in P2 guarantee that the transformation of P1 into P2 is correct, that is, P1 and P2 have the same least Herbrand model. In the approaches proposed so far, clause measures are fixed in advance, independently of the transformations to be proved correct. In this paper we propose a method for the automatic generation of clause measures which, instead, takes into account the particular program transformation at hand. During the application of a sequence of transformations we construct a system of linear equalities and inequalities over the nonnegative integers whose unknowns are the clause measures to be found; the correctness of the transformation is then guaranteed by the satisfiability of that system. Through some examples we show that our method is more powerful and practical than other methods proposed in the literature. In particular, we are able to establish in a fully automatic way the correctness of program transformations which, using other methods, are proved correct only at the expense of fixing sophisticated clause measures in advance.
Alberto Pettorossi
Maurizio Proietti
Valerio Senni
valerio.senni@imtlucca.it
2012-09-04T09:32:46Z
2013-03-05T15:10:06Z
http://eprints.imtlucca.it/id/eprint/1333
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1333
2012-09-04T09:32:46Z
Tracking-Optimized Quantization for H.264 Compression in Transportation Video Surveillance Applications
We propose a tracking-aware system that removes video components of low tracking interest and optimizes the quantization, during compression, of the frequency coefficients that most influence trackers, significantly reducing bitrate while maintaining comparable tracking accuracy. We utilize tracking accuracy as our compression criterion in lieu of mean squared error metrics. The process of optimizing quantization tables suitable for automated tracking can be executed online or offline. The online implementation initializes the encoding procedure for a specific scene, but introduces delay. The offline procedure, on the other hand, produces globally optimal quantization tables, where the optimization is performed over a collection of video sequences. Our proposed system is designed with low processing power and memory requirements in mind, and as such can be deployed on remote nodes. Using H.264/AVC video coding and a commonly used state-of-the-art tracker, we show that, while maintaining comparable tracking accuracy, our system allows for over 50% bitrate savings on top of existing savings from previous work.
Eren Soyak
Sotirios A. Tsaftaris
sotirios.tsaftaris@imtlucca.it
Aggelos K. Katsaggelos
2012-09-04T09:02:27Z
2013-03-05T15:09:06Z
http://eprints.imtlucca.it/id/eprint/1336
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1336
2012-09-04T09:02:27Z
Channel protection for H.264 compression in transportation video surveillance applications
The compression of video and subsequent partial loss of the compressed bitstream can dramatically reduce the accuracy of automated tracking algorithms. This is problematic for centralized applications such as transportation surveillance systems, where remotely captured and compressed video is transmitted over lossy wireless links to a central location for tracking. We propose a low-complexity method for protecting compressed video against channel loss such that the tracking accuracy of decoded and concealed video is maximized. Our algorithm leverages a previous method of video processing that removes components of low tracking interest before compression to minimize bitrate, and uses some of the bitrate savings to introduce redundancy into the transmitted bitstream to reduce the probability of information loss. We show using a common tracker and loss concealment algorithm that our system allows for up to 100% increased tracking accuracy at a given bitrate, or 90% bitrate savings for comparable tracking quality.
Eren Soyak
Sotirios A. Tsaftaris
sotirios.tsaftaris@imtlucca.it
Aggelos K. Katsaggelos
2012-07-03T12:03:34Z
2012-07-03T12:03:34Z
http://eprints.imtlucca.it/id/eprint/1309
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1309
2012-07-03T12:03:34Z
Sequence alignment, mutual information, and dissimilarity measures for constructing phylogenies
Background:
Existing sequence alignment algorithms use heuristic scoring schemes based on biological expertise, which cannot be used as objective distance metrics. As a result, one relies on crude measures, like the p- or log-det distances, or makes explicit, and often too simplistic, a priori assumptions about sequence evolution. Information theory provides an alternative, in the form of mutual information (MI). MI is, in principle, an objective and model-independent similarity measure, but it is not widely used in this context and no algorithm for extracting MI from a given alignment (without assuming an evolutionary model) is known. MI can be estimated without alignments, by concatenating and zipping sequences, but so far this has only produced estimates with uncontrolled errors, despite the fact that the normalized compression distance based on it has shown promising results.
Results:
We describe a simple approach to get robust estimates of MI from global pairwise alignments. Our main result uses algorithmic (Kolmogorov) information theory, but we show that similar results can also be obtained from Shannon theory. For animal mitochondrial DNA our approach uses the alignments made by popular global alignment algorithms to produce MI estimates that are strikingly close to estimates obtained from the alignment-free methods mentioned above. We point out that, because it is not additive, the normalized compression distance is not an optimal metric for phylogenetics, but we propose a simple modification that overcomes this issue. We test several versions of our MI-based distance measures on a large number of randomly chosen quartets and demonstrate that they all perform better than traditional measures like the Kimura or log-det (resp. paralinear) distances.
Conclusions:
Several versions of MI-based distances outperform conventional distances in distance-based phylogeny. Even a simplified version based on single-letter Shannon entropies, which can be easily incorporated in existing software packages, gave superior results throughout the entire animal kingdom. But we see the main virtue of our approach as more general: for example, it can also help to judge the relative merits of different alignment algorithms, by estimating the significance of specific alignments. It strongly suggests that information theory concepts can be exploited further in sequence analysis.
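The normalized compression distance discussed above has a standard form (Cilibrasi and Vitányi): NCD(x, y) = (C(xy) − min(C(x), C(y))) / max(C(x), C(y)), where C(·) denotes compressed length. A minimal sketch using zlib as the compressor (our illustration, not the authors' implementation):

```python
import zlib

def c(s: bytes) -> int:
    # Compressed length as a practical proxy for Kolmogorov complexity
    return len(zlib.compress(s, 9))

def ncd(x: bytes, y: bytes) -> float:
    # Normalized compression distance: small for similar sequences,
    # close to 1 for unrelated ones
    cx, cy, cxy = c(x), c(y), c(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)
```

With a real compressor the distance never reaches exactly zero even for identical inputs, which is one practical manifestation of the estimation errors mentioned in the abstract.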
Orion Penner
orion.penner@imtlucca.it
Peter Grassberger
Maya Paczuski
2012-07-02T13:31:12Z
2013-04-16T14:20:56Z
http://eprints.imtlucca.it/id/eprint/1299
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1299
2012-07-02T13:31:12Z
The Gilliérons and the Greek Bronze Age
Silvia Loreti
silvia.loreti@imtlucca.it
2012-06-29T11:15:48Z
2014-01-29T14:27:01Z
http://eprints.imtlucca.it/id/eprint/1291
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1291
2012-06-29T11:15:48Z
Evaluating the performance of model transformation styles in Maude
Rule-based programming has been shown to be very successful in many application areas. Two prominent examples are the specification of model transformations in model-driven development approaches and the definition of structured operational semantics of formal languages. General rewriting frameworks such as Maude are flexible enough to allow the programmer to adopt and mix various rule styles. The choice between styles can be biased by the programmer’s background. For instance, experts in visual formalisms might prefer graph-rewriting styles, while experts in semantics might prefer structurally inductive rules. This paper evaluates the performance of different rule styles on a significant benchmark taken from the literature on model transformation. Depending on the actual transformation being carried out, our results show that different rule styles can offer drastically different performance. We point out the situations in which each rule style is beneficial, offering a valuable set of hints for choosing one style over another.
Roberto Bruni
Alberto Lluch-Lafuente
alberto.lluch@imtlucca.it
2012-06-28T13:36:24Z
2013-04-16T14:20:56Z
http://eprints.imtlucca.it/id/eprint/1289
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1289
2012-06-28T13:36:24Z
Modern Narcissus: the lingering reflections of myth in modern art
Why has myth continued to fascinate modern artists, and why the myth of Narcissus, with its modern association with narcissism? This article considers the relationship between the Narcissus myth and the lineage of modern art that runs from Symbolism to surrealism through the polymorphous prism of the Greco-Roman Pantheon to which Narcissus belongs. The article offers an interpretation of the role of mythology in modern art that moves beyond psychoanalysis to incorporate the longer span of the art-historical tradition. Addressing issues of aesthetics, gender and sexuality, the following account highlights Narcissus's double nature as an erotic myth that comprises both identity formation and intersubjectivity, as enacted in the field of representation. The myths associated with Narcissus in the history of Western art will help us reconsider his role as a powerful figure capable of activating that slippage between word and image, identity and sociability, representation and reality which was celebrated by the Symbolists and formed the centre of the surrealists' social-aesthetic project.
Silvia Loreti
silvia.loreti@imtlucca.it
2012-06-28T13:18:56Z
2013-04-16T14:20:56Z
http://eprints.imtlucca.it/id/eprint/1288
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1288
2012-06-28T13:18:56Z
A timely call: modern representation awakened by antiquity. De Chirico, Picasso and the classical vision
Silvia Loreti
silvia.loreti@imtlucca.it
2012-05-16T10:17:49Z
2015-05-27T10:21:34Z
http://eprints.imtlucca.it/id/eprint/1269
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1269
2012-05-16T10:17:49Z
Étienne-Jules Marey, Henri Bergson e brevi storie di fotografia: rappresentazione e rappresentabilità del movimento
Linda Bertelli
linda.bertelli@imtlucca.it
2012-04-04T09:31:53Z
2012-04-04T09:31:53Z
http://eprints.imtlucca.it/id/eprint/1257
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1257
2012-04-04T09:31:53Z
Synthesis of low-complexity stabilizing piecewise affine controllers: a control-Lyapunov function approach
Explicit model predictive controllers computed exactly by multi-parametric optimization techniques often lead to piecewise affine (PWA) state feedback controllers with highly complex and irregular partitionings of the feasible set. In many cases complexity prohibits the implementation of the resulting MPC control law for fast or large-scale systems. This paper presents a new approach to synthesize low-complexity PWA controllers on regular partitionings that enable fast on-line implementation with low memory requirements. Based on a PWA control-Lyapunov function, which can be obtained as the optimal cost for a constrained linear system corresponding to a stabilizing MPC setup, the synthesis procedure for the low-complexity control law boils down to local linear programming (LP) feasibility problems, which guarantee stability, constraint satisfaction, and certain performance requirements. Initially, the PWA controllers are computed on a fixed regular partitioning. However, we also present an automatic refinement procedure to refine the partitioning where necessary in order to satisfy the design specifications. A numerical example shows the effectiveness of the novel approach.
Liang Lu
W.P.M.H. Heemels
Alberto Bemporad
alberto.bemporad@imtlucca.it
2012-04-04T09:18:53Z
2012-04-04T09:18:53Z
http://eprints.imtlucca.it/id/eprint/1256
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1256
2012-04-04T09:18:53Z
An upper Riemann-Stieltjes approach to stochastic design problems
In this paper we study a class of stochastic design problems formulated in terms of general inequality conditions on expectations. These inequalities can be used to express various mean square or almost sure stabilization conditions for stochastic systems. In contrast with existing probabilistic methods that only solve such problems with a certain probability (degree of confidence), we propose a novel method that provides a full guarantee that the constructed solution truly solves the original problem. The main idea of our method is based on overapproximating the expectations by suitably constructed upper Riemann-Stieltjes sums and imposing the inequalities on these sums instead. Besides the full guarantee on the constructed solution, the method offers three other advantages. First, it applies to arbitrary probability distributions. Second, under rather mild conditions we can derive a “converse theorem” that states that if the original problem is solvable, our method will find a solution by sufficiently refining the upper Riemann-Stieltjes sums. Finally, we will show that convexity of the function used in the expectation can be exploited to obtain convex design conditions in our approach.
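The core overapproximation idea can be illustrated with a minimal sketch (our own simplified construction, not the paper's general method): for a nondecreasing f, an upper Riemann-Stieltjes sum over a partition of the support of X is guaranteed to be at least E[f(X)], so imposing an inequality on the sum is conservative with respect to the original expectation constraint.

```python
import numpy as np

# Sketch under simplifying assumptions: f nondecreasing, scalar X with
# known CDF. The sup of f on each cell [g_i, g_{i+1}] is then f(g_{i+1}),
# weighted by the probability mass of the cell.
def upper_rs_sum(f, cdf, grid):
    masses = np.diff([cdf(g) for g in grid])
    return float(np.sum(f(grid[1:]) * masses))

f = lambda x: x**2
cdf = lambda x: min(max(x, 0.0), 1.0)   # X ~ Uniform(0, 1)
grid = np.linspace(0.0, 1.0, 101)
upper = upper_rs_sum(f, cdf, grid)      # overapproximates E[X^2] = 1/3
```

Refining the grid tightens the overapproximation, mirroring the refinement argument behind the converse theorem mentioned above.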
W.P.M.H. Heemels
Alberto Bemporad
alberto.bemporad@imtlucca.it
2012-04-04T08:58:38Z
2012-04-04T08:58:38Z
http://eprints.imtlucca.it/id/eprint/1255
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1255
2012-04-04T08:58:38Z
Stochastic MPC for real-time market-based optimal power dispatch
We formulate the problem of dynamic, real-time optimal power dispatch for electric power systems consisting of conventional power generators, intermittent generators from renewable sources, energy storage systems and price-inelastic loads. The generation company managing the power system can place bids on the real-time energy market (the so-called regulating market) in order to balance its loads and/or to make profit. Prices, demands and intermittent power injections are considered to be stochastic processes and the goal is to compute power injections for the conventional power generators, charge and discharge levels for the storage units and exchanged power with the rest of the grid that minimize operating and trading costs. We propose a scenario-based stochastic model predictive control algorithm to solve the real-time market-based optimal power dispatch problem.
Panagiotis Patrinos
panagiotis.patrinos@imtlucca.it
Sergio Trimboli
Alberto Bemporad
alberto.bemporad@imtlucca.it
2012-04-04T08:41:10Z
2012-04-04T08:41:10Z
http://eprints.imtlucca.it/id/eprint/1254
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1254
2012-04-04T08:41:10Z
Decentralized linear time-varying model predictive control of a formation of unmanned aerial vehicles
This paper proposes a hierarchical MPC approach to stabilization and autonomous navigation of a formation of unmanned aerial vehicles (UAVs), under constraints on motor thrusts, angles and positions, and under collision avoidance constraints. Each vehicle is of quadcopter type and is stabilized by a local linear time-invariant (LTI) MPC controller at the lower level of the control hierarchy around commanded desired set-points. These are generated at the higher level and at a slower sampling rate by a linear time-varying (LTV) MPC controller per vehicle, based on a simplified dynamical model of the stabilized UAV and a novel algorithm for convex under-approximation of the feasible space. Formation flying is obtained by running the above decentralized scheme in accordance with a leader-follower approach. The performance of the hierarchical control scheme is assessed through simulations, and compared to previous work in which a hybrid MPC scheme is used for planning paths on-line.
Alberto Bemporad
alberto.bemporad@imtlucca.it
Claudio Rocchi
2012-03-28T11:23:03Z
2012-04-03T07:52:02Z
http://eprints.imtlucca.it/id/eprint/1249
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1249
2012-03-28T11:23:03Z
Recognition of false alarms in fall detection systems
Falls are a major cause of hospitalization and injury-related deaths among the elderly population. The detrimental effects of falls, as well as the negative impact on health services costs, have led to a great interest in fall detection systems by the health-care industry. The most promising approaches are those based on a wearable device that monitors the movements of the patient, recognizes a fall and triggers an alarm. Unfortunately such techniques suffer from the problem of false alarms: some activities of daily living are erroneously reported as falls, thus reducing the confidence of the user. This paper presents a novel approach for improving the detection accuracy which is based on the idea of identifying specific movement patterns in the acceleration data. Using a single accelerometer, our system can recognize these patterns and use them to distinguish activities of daily living from real falls; thus the number of false alarms is reduced.
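As a toy illustration of pattern-based detection (not the authors' algorithm; the thresholds below are hypothetical), a candidate fall can be flagged when a free-fall-like dip in acceleration magnitude is followed, within a short window, by a large impact peak:

```python
import numpy as np

# Illustrative sketch: mag is acceleration magnitude in g, sampled at fs Hz.
# Thresholds (dip, peak) and window length are hypothetical values.
def candidate_fall(mag, fs, dip=0.6, peak=2.5, window_s=1.0):
    w = int(window_s * fs)
    # indices where the magnitude drops below the free-fall threshold
    for i in np.flatnonzero(mag < dip):
        # look for an impact peak shortly after the dip
        if mag[i:i + w].max() > peak:
            return True
    return False
```

Activities of daily living such as walking keep the magnitude near 1 g and so do not exhibit the dip-then-peak pattern, which is the intuition behind using patterns rather than a single threshold.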
Stefano Abbate
stefano.abbate@alumni.imtlucca.it
Marco Avvenuti
Guglielmo Cola
Paolo Corsini
Janet Light
Alessio Vecchio
2012-03-28T11:05:58Z
2012-04-03T07:51:39Z
http://eprints.imtlucca.it/id/eprint/1248
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1248
2012-03-28T11:05:58Z
Estimation of energy consumption in wireless sensor networks using TinyOS 2.x
Run-time monitoring of energy consumption in wireless sensor networks is a necessary step for the production of energy efficient applications. The demo will show a software system that helps the developer to profile applications based on TinyOS in terms of energy consumption.
Stefano Abbate
stefano.abbate@alumni.imtlucca.it
Marco Avvenuti
Alessandro Biondi
Alessio Vecchio
2012-03-28T10:58:54Z
2012-04-03T07:51:16Z
http://eprints.imtlucca.it/id/eprint/1247
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1247
2012-03-28T10:58:54Z
Developing cognitive decline baseline for normal ageing from sleep-EEG monitoring using wireless neurosensor devices
Sleep has a well-organized and consistent structure and hence can be a valuable instrument for investigating cognitive decline with ageing, and other neurological disorders. The abnormality in brain function can be observed by changes in sleep patterns and brain signals in electroencephalograph (EEG) recordings. In this study, EEGs are captured through different sleep stages from different age groups, to develop a baseline for normal healthy ageing. Threshold values are defined and extracted from sleep-spindles' amplitude and frequency characteristics. These values can then be compared with abnormal EEGs in progressive neurodegenerative subjects to identify the progression of the disease leading to decline in cognition and related mobility problems. Cognitive decline indicators derived from this preliminary study will be useful to build intelligence in non-invasive wireless monitoring systems.
Janet Light
Xiaoyi Li
Stefano Abbate
stefano.abbate@alumni.imtlucca.it
2012-03-27T09:53:56Z
2014-07-28T12:21:38Z
http://eprints.imtlucca.it/id/eprint/1244
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1244
2012-03-27T09:53:56Z
Quantitative information flow, with a view
We put forward a general model intended for assessment of system security against passive eavesdroppers, both quantitatively (how much information is leaked) and qualitatively (what properties are leaked). To this purpose, we extend information hiding systems (ihs), a model where the secret-observable relation is represented as a noisy channel, with views: basically, partitions of the state-space. Given a view W and n independent observations of the system, one is interested in the probability that a Bayesian adversary wrongly predicts the class of W the underlying secret belongs to. We offer results that allow one to easily characterise the behaviour of this error probability as a function of the number of observations, in terms of the channel matrices defining the ihs and the view W. In particular, we provide expressions for the limit value as n → ∞, show by tight bounds that convergence is exponential, and also characterise the rate of convergence to predefined error thresholds. We then show a few instances of statistical attacks that can be assessed by a direct application of our model: attacks against modular exponentiation that exploit timing leaks, against anonymity in mix-nets and against privacy in sparse datasets.
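For small systems the error probability in question can be computed exactly by enumerating observation sequences. The sketch below uses our own notation (not the paper's formalism): a prior over secrets, a channel matrix C[s, o], and a view assigning each secret to a class.

```python
import itertools
import numpy as np

# Exact Bayes error of an adversary guessing which class of the view W
# the secret belongs to, after n independent observations through the
# channel C[s, o]. Exponential in n; for illustration only.
def bayes_view_error(prior, C, view, n):
    classes = sorted(set(view))
    view = np.array(view)
    success = 0.0
    for obs in itertools.product(range(C.shape[1]), repeat=n):
        # joint probability of this observation sequence with each secret
        p_joint = prior * np.prod(C[:, list(obs)], axis=1)
        # MAP guess over view classes: the adversary picks the most
        # probable class given the observations
        success += max(p_joint[view == w].sum() for w in classes)
    return 1.0 - success
```

Running this for growing n on a fixed noisy channel shows the exponential decay of the error probability that the paper characterises analytically.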
Michele Boreale
Francesca Pampaloni
francesca.pampaloni@imtlucca.it
Michela Paolini
michela.paolini@alumni.imtlucca.it
2012-03-27T09:38:21Z
2014-07-28T12:21:19Z
http://eprints.imtlucca.it/id/eprint/1243
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1243
2012-03-27T09:38:21Z
Asymptotic information leakage under one-try attacks
We study the asymptotic behaviour of (a) information leakage and (b) adversary’s error probability in information hiding systems modelled as noisy channels. Specifically, we assume the attacker can make a single guess after observing n independent executions of the system, throughout which the secret information is kept fixed. We show that the asymptotic behaviour of quantities (a) and (b) can be determined in a simple way from the channel matrix. Moreover, simple and tight bounds on them as functions of n show that the convergence is exponential. We also discuss feasible methods to evaluate the rate of convergence. Our results cover both the Bayesian case, where a prior probability distribution on the secrets is assumed known to the attacker, and the maximum-likelihood case, where the attacker does not know such distribution. In the Bayesian case, we identify the distributions that maximize the leakage. We consider both the min-entropy setting studied by Smith and the additive form recently proposed by Braun et al., and show the two forms do agree asymptotically. Next, we extend these results to a more sophisticated eavesdropping scenario, where the attacker can perform a (noisy) observation at each state of the computation and the systems are modelled as hidden Markov models.
Michele Boreale
Francesca Pampaloni
francesca.pampaloni@imtlucca.it
Michela Paolini
michela.paolini@alumni.imtlucca.it
2012-03-09T11:27:59Z
2012-03-09T11:27:59Z
http://eprints.imtlucca.it/id/eprint/1231
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1231
2012-03-09T11:27:59Z
Definizione e definizioni di sistema museale: legittimità e retorica
Emanuele Pellegrini
emanuele.pellegrini@imtlucca.it
2012-03-05T13:49:24Z
2013-09-30T12:30:10Z
http://eprints.imtlucca.it/id/eprint/1213
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1213
2012-03-05T13:49:24Z
Sliding mode observers for sensorless control of current-fed induction motors
This paper presents the use of a higher order sliding mode scheme for sensorless control of induction motors. The second order sub-optimal control law is based on a reduced-order model of the motor, and produces the references for a current regulated PWM inverter. A nonlinear observer structure, based on Lyapunov theory and on different sliding mode techniques (first order, sub-optimal and super-twisting) generates the velocity and rotor flux estimates necessary for the controller, based only on the measurements of phase voltages and currents. The proposed control scheme and observers are tested on an experimental setup, showing a satisfactory performance.
Daniele Bullo
Antonella Ferrara
Matteo Rubagotti
2012-03-05T11:06:43Z
2012-04-04T09:21:01Z
http://eprints.imtlucca.it/id/eprint/1212
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1212
2012-03-05T11:06:43Z
A multi-stage stochastic optimization approach to optimal bidding on energy markets
One of the most challenging tasks for an energy producer is the optimal bidding on energy markets. Each eligible plant has to submit bids for the spot market one day before the delivery time and bids for the provision of ancillary services. Allocating the optimal amount of energy while jointly minimizing risk and maximizing profit is not a trivial task, since one has to face several sources of stochasticity, such as the high volatility of energy prices and the uncertainty of production, due to deregulation and to the growing importance of renewable sources. In this paper the optimal bidding problem is formulated as a multi-stage optimization problem to be solved in a receding horizon fashion, where at each time step a risk measure is minimized in order to obtain optimal quantities to bid on the day-ahead market, while reserving the remaining production for the ancillary market. Simulation results show the optimal bid profile for a trading day, based on stochastic models identified from historical data series from the Italian energy market.
Laura Puglia
Daniele Bernardini
daniele.bernardini@imtlucca.it
Alberto Bemporad
alberto.bemporad@imtlucca.it
2012-03-05T10:58:40Z
2013-02-12T12:12:49Z
http://eprints.imtlucca.it/id/eprint/1211
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1211
2012-03-05T10:58:40Z
Stability and invariance analysis of uncertain PWA systems based on linear programming
This paper analyzes stability of discrete-time uncertain piecewise-affine systems whose dynamics are defined on a bounded set χ that is not necessarily invariant. The objective is to prove the uniform asymptotic stability of the origin and to find an invariant domain of attraction. This goal is attained by defining a suitable extended dynamics (which is partially fictitious), and by using a numerical procedure based on linear programming. The theoretical results are based on the definition of a piecewise-affine, possibly discontinuous, Lyapunov function.
Sergio Trimboli
Matteo Rubagotti
Alberto Bemporad
alberto.bemporad@imtlucca.it
2012-03-02T15:30:33Z
2013-09-30T12:33:14Z
http://eprints.imtlucca.it/id/eprint/1208
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1208
2012-03-02T15:30:33Z
Robust model predictive control with integral sliding mode in continuous-time sampled-data nonlinear systems
This paper proposes a control strategy for nonlinear constrained continuous-time uncertain systems which combines robust model predictive control (MPC) with sliding mode control (SMC). In particular, the so-called Integral SMC approach is used to produce a control action aimed at reducing the difference between the nominal predicted dynamics of the closed-loop system and the actual one. In this way, the MPC strategy can be designed on a system with a reduced uncertainty. In order to prove the stability of the overall control scheme, some general regional input-to-state practical stability results for continuous-time systems are proved.
Matteo Rubagotti
Davide Martino Raimondo
Antonella Ferrara
Lalo Magni
2012-03-02T15:10:30Z
2013-09-30T12:37:32Z
http://eprints.imtlucca.it/id/eprint/1207
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1207
2012-03-02T15:10:30Z
Time-optimal sliding-mode control of a mobile robot in a dynamic environment
In this study, an original strategy to control a mobile robot in a dynamic environment is presented. The strategy consists of two main elements. The first is the method for the online trajectory generation based on harmonic potential fields, capable of generating velocity and orientation references, which extends classical results on harmonic potential fields for the case of static environments to the case when the presence of a moving obstacle with unknown motion is considered. The second is the design of sliding-mode controllers capable of making the controlled variables of the robot track both the velocity and the orientation references in a finite minimum time.
Matteo Rubagotti
Marco L. Della Vedova
Antonella Ferrara
2012-03-02T14:54:31Z
2013-09-30T12:37:55Z
http://eprints.imtlucca.it/id/eprint/1206
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1206
2012-03-02T14:54:31Z
Integral sliding mode control for nonlinear systems with matched and unmatched perturbations
We consider the problem of designing an integral sliding mode controller to reduce the disturbance terms that act on nonlinear systems with state-dependent drift and input matrix. The general case of both matched and unmatched disturbances affecting the system is addressed. It is proved that the definition of a suitable sliding manifold and the generation of sliding modes upon it guarantee the minimization of the effect of the disturbance terms, which takes place when the matched disturbances are completely rejected and the unmatched ones are not amplified. A simulation of the proposed technique, applied to a dynamically feedback linearized unicycle, illustrates its effectiveness, even in the presence of nonholonomic constraints.
Matteo Rubagotti
Antonio Estrada
Fernando Castanos
Antonella Ferrara
Leonid Fridman
2012-02-27T13:18:13Z
2012-02-27T13:18:13Z
http://eprints.imtlucca.it/id/eprint/1195
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1195
2012-02-27T13:18:13Z
A network approach to orthodontic diagnosis
Background – Network analysis, a recent advancement in complexity science, enables understanding of the properties of complex biological processes characterized by the interaction, adaptive regulation, and coordination of a large number of participating components.
Objective – We applied network analysis to orthodontics to detect and visualize the most interconnected clinical, radiographic, and functional data pertaining to the orofacial system.
Materials and Methods – The sample consisted of 104 individuals from 7 to 13 years of age in the mixed dentition phase without previous orthodontic intervention. The subjects were divided according to skeletal class; their clinical, radiographic, and functional features were represented as vertices (nodes) and links (edges) connecting them.
Results – Class II subjects exhibited few highly connected orthodontic features (hubs), while Class III patients showed a more compact network structure characterized by strong co-occurrence of normal and abnormal clinical, functional, and radiological features. Restricting our analysis to the highest correlations, we identified critical peculiarities of Class II and Class III malocclusions.
Conclusions – The topology of the dentofacial system obtained by network analysis could allow orthodontists to visually evaluate and anticipate the co-occurrence of auxological anomalies during individual craniofacial growth and possibly localize reactive sites for a therapeutic approach to malocclusion.
Pietro Auconi
Guido Caldarelli
guido.caldarelli@imtlucca.it
Antonio Scala
Gaetano Ierardo
Antonella Polimeni
2012-01-26T09:06:19Z
2013-11-21T11:14:11Z
http://eprints.imtlucca.it/id/eprint/1079
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1079
2012-01-26T09:06:19Z
Population Dynamics On Complex Food Webs
In this work we analyze the topological and dynamical properties of a simple model of complex food webs, namely the niche model. In order to underline competition among species, we introduce "prey" and "predators" weighted overlap graphs derived from the niche model and compare synthetic food webs with real data. Doing so, we find new tests for the goodness of synthetic food web models and indicate a possible direction of improvement for existing ones. We then exploit the weighted overlap graphs to define a competition kernel for Lotka–Volterra population dynamics and find that for such a model the stability of a food web decreases with its ecological complexity.
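As a minimal illustration of the population-dynamics side, here is a generic competitive Lotka–Volterra system with a hand-picked competition matrix (not the niche-model-derived kernel of the paper):

```python
import numpy as np

# Competitive Lotka-Volterra dynamics dx_i/dt = x_i * (r_i - (A @ x)_i),
# integrated with forward Euler. A plays the role of a competition kernel;
# here A is a hypothetical 2-species example, not the paper's construction.
def simulate_lv(x0, r, A, dt=0.01, steps=5000):
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x += dt * x * (r - A @ x)
    return x
```

With intraspecific competition stronger than interspecific (diagonal of A dominant), the two species settle at the stable interior equilibrium A⁻¹r; strengthening the off-diagonal entries destabilizes coexistence, which is the qualitative stability-versus-complexity effect the abstract refers to.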
Gian Marco Palamara
Vinko Zlatic
Antonio Scala
Guido Caldarelli
guido.caldarelli@imtlucca.it
2012-01-20T09:04:42Z
2012-01-20T09:32:03Z
http://eprints.imtlucca.it/id/eprint/1073
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1073
2012-01-20T09:04:42Z
Behavioral Equivalences
Behavioral equivalences serve to establish in which cases two reactive (possibly concurrent) systems offer similar interaction capabilities relative to other systems representing their operating environment. Behavioral equivalences have been mainly developed in the context of process algebras, mathematically rigorous languages that have been used for describing and verifying properties of concurrent communicating systems. By relying on the so-called structural operational semantics (SOS), labelled transition systems are associated to each term of a process algebra. Behavioral equivalences are used to abstract from unwanted details and identify those labelled transition systems that react “similarly” to external experiments. Due to the large number of properties which may be relevant in the analysis of concurrent systems, many different theories of equivalences have been proposed in the literature. The main contenders consider those systems equivalent that (i) perform the same sequences of actions, or (ii) perform the same sequences of actions and after each sequence are ready to accept the same sets of actions, or (iii) perform the same sequences of actions and after each sequence exhibit, recursively, the same behavior. This approach leads to many different equivalences that preserve significantly different properties of systems.
Rocco De Nicola
r.denicola@imtlucca.it
2012-01-20T08:52:55Z
2012-01-20T09:28:17Z
http://eprints.imtlucca.it/id/eprint/1072
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1072
2012-01-20T08:52:55Z
Process Algebras
Process Algebras are mathematically rigorous languages with well defined semantics that permit describing and verifying properties of concurrent communicating systems.
They can be seen as models of processes, regarded as agents that act and interact continuously with other similar agents and with their common environment. The agents may be real-world objects (even people), or they may be artifacts, embodied perhaps in computer hardware or software systems.
Many different approaches (operational, denotational, algebraic) are taken for describing the meaning of processes. However, the operational approach is the reference one. By relying on the so-called Structural Operational Semantics (SOS), labelled transition systems are built and composed by using the different operators of the many different process algebras. Behavioral equivalences are used to abstract from unwanted details and identify those systems that react similarly to external
experiments.
Rocco De Nicola
r.denicola@imtlucca.it
2012-01-20T08:32:48Z
2012-01-20T08:32:48Z
http://eprints.imtlucca.it/id/eprint/1071
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1071
2012-01-20T08:32:48Z
Statistical regularities in the rank-citation profile of scientists
Recent science of science research shows that scientific impact measures for journals and individual articles
have quantifiable regularities across both time and discipline. However, little is known about the scientific
impact distribution at the scale of an individual scientist. We analyze the aggregate production and impact
using the rank-citation profile c_i(r) of 200 distinguished professors and 100 assistant professors. For the
entire range of paper rank r, we fit each c_i(r) to a common distribution function. Since two scientists with
equivalent Hirsch h-index can have significantly different c_i(r) profiles, our results demonstrate the utility of
the β_i scaling parameter in conjunction with h_i for quantifying individual publication impact. We show that
the total number of citations C_i tallied from a scientist's N_i papers scales as C_i ∼ h_i^{β_i}. Such statistical regularities in the input-output patterns of scientists can be used as benchmarks for theoretical models of career progress.
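As a hedged illustration (not the paper's fitting procedure), the quantities entering the scaling relation C_i ∼ h_i^{β_i} can be computed from a rank-citation profile; the profile values below are invented for illustration:

```python
# Illustrative sketch: compute the Hirsch h-index and total citations C
# from a rank-ordered citation profile c(r).  The profile is hypothetical.
def h_index(citations):
    """Largest h such that at least h papers have >= h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    while h < len(ranked) and ranked[h] >= h + 1:
        h += 1
    return h

profile = [50, 30, 22, 15, 9, 7, 4, 2, 1, 0]  # hypothetical c(r), r = 1..N
h = h_index(profile)   # two profiles with equal h can still differ in shape
C = sum(profile)       # total citations, expected to scale as C ~ h**beta
```

The point made in the abstract is visible here: C aggregates the whole profile, so two scientists with the same h can have very different C, which is why β_i carries information beyond h_i.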
Alexander M. Petersen
alexander.petersen@imtlucca.it
H. Eugene Stanley
Sauro Succi
2011-12-14T15:06:51Z
2014-12-04T11:46:29Z
http://eprints.imtlucca.it/id/eprint/1041
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1041
2011-12-14T15:06:51Z
Pareto versus lognormal: a maximum entropy test
It is commonly found that distributions that seem to be lognormal over a broad range change to a power-law (Pareto) distribution for the last few percentiles. The distributions of many physical, natural, and social events (earthquake size, species abundance, income and wealth, as well as file, city, and firm sizes) display this structure. We present a test for the occurrence of power-law tails in statistical distributions based on maximum entropy. This methodology allows one to identify the true data-generating process even when it is neither lognormal nor Pareto. The maximum entropy approach is then compared with other widely used methods and applied to different levels of aggregation of complex systems. Our results provide support for the theory that distributions with lognormal body and Pareto tail can be generated as mixtures of lognormally distributed units.
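For a concrete tail diagnostic, the sketch below applies the Hill estimator, a standard alternative to the maximum entropy test described above (it does not implement the paper's test), to exact Pareto quantiles:

```python
# Hedged sketch: Hill estimator for the Pareto tail index alpha, applied to
# exact Pareto(alpha = 2) quantiles as a sanity check.  This is a standard
# tail diagnostic, not the maximum entropy test of the paper.
import math

def hill_estimator(data, k):
    """Hill estimate of the tail index from the k largest observations."""
    xs = sorted(data, reverse=True)[: k + 1]
    logs = [math.log(x) for x in xs]
    return k / sum(logs[i] - logs[k] for i in range(k))

# Pareto(alpha = 2) quantiles: x_q = (1 - q) ** (-1/2) on a uniform grid
sample = [(1 - q / 1000) ** (-0.5) for q in range(1000)]
alpha_hat = hill_estimator(sample, k=100)  # close to the true alpha = 2
```

On data with a lognormal body and Pareto tail, such an estimate is only meaningful in the upper percentiles, which is precisely the regime the abstract describes.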
Marco Bee
Massimo Riccaboni
massimo.riccaboni@imtlucca.it
Stefano Schiavo
2011-12-13T14:45:12Z
2011-12-13T14:45:12Z
http://eprints.imtlucca.it/id/eprint/1040
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1040
2011-12-13T14:45:12Z
A few special cases: scientific creativity and network dynamics in the field of rare diseases
We develop a model of scientific creativity and test it in the field of rare diseases. Our model is based on the results of an in-depth case study of the Rett Syndrome. Archival analysis, bibliometric techniques and expert surveys are combined with network analysis to identify the most creative scientists. First, we compare alternative measures of generative and combinatorial creativity. Then, we generalize our results in a stochastic model of socio-semantic network evolution. The model predictions are tested with an extended set of rare diseases. We find that new scientific collaborations among experts in a field enhance combinatorial creativity. By contrast, high entry rates of novices are negatively related to generative creativity. By expanding the set of useful concepts, creative scientists gain in centrality. At the same time, by increasing their centrality in the scientific community, scientists can replicate and generalize their results, thus contributing to a scientific paradigm.
M. Laura Frigotto
Massimo Riccaboni
massimo.riccaboni@imtlucca.it
2011-12-05T15:49:54Z
2013-03-12T14:57:38Z
http://eprints.imtlucca.it/id/eprint/1031
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1031
2011-12-05T15:49:54Z
Physiologically based pharmacokinetic modeling and predictive control: an integrated approach for optimal drug administration
The barriers between systems engineering and medicine are slowly eroding as recently it has become evident that medicine has a lot to gain from systems technology. In particular, the drug administration problem can be cast as a control engineering problem, where the objective is to keep the drug concentration at certain organs in the body close to desired set-points. A number of constraints render the problem rather challenging. For example, hard constraints may be posed on drug concentration, because if it exceeds an upper limit, the effects of the drug are adverse and toxic.
In this paper we show that a popular method in control engineering can be used for determining the optimal drug administration. Specifically, the Model Predictive Control (MPC) technology can be adopted for taking optimal decisions regarding regulation of drug concentration in the human body, while posing constraints on both drug concentration and drug infusion rate.
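A toy sketch of the idea (hypothetical one-compartment kinetics, not the paper's physiologically based model or its full MPC formulation): each infusion rate is chosen to track a concentration set-point while enforcing bounds on both the rate and the predicted concentration:

```python
# Hypothetical one-compartment model c' = -k*c + u, discretized with step dt.
# A one-step MPC-style rule picks the infusion rate u to reach the set-point,
# then clips it to respect the rate bound and a hard concentration bound.
def mpc_step(c, target, k=0.1, dt=1.0, u_max=2.0, c_max=6.0):
    u = (target - c) / dt + k * c              # rate that would hit the target
    u = min(max(u, 0.0), u_max)                # infusion-rate constraint
    if c + dt * (-k * c + u) > c_max:          # hard bound on predicted level
        u = (c_max - c) / dt + k * c
    return max(u, 0.0)

c, target, traj = 0.0, 5.0, []
for _ in range(50):
    u = mpc_step(c, target)
    c = c + 1.0 * (-0.1 * c + u)               # simulate one step of the plant
    traj.append(c)
```

The concentration ramps up as fast as the rate bound allows and then settles at the set-point without ever crossing the toxicity bound, which is the qualitative behavior the paper's constrained MPC is designed to guarantee.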
Pantelis Sopasakis
pantelis.sopasakis@imtlucca.it
Panagiotis Patrinos
panagiotis.patrinos@imtlucca.it
Stefania Giannikou
Haralambos Sarimveis
2011-12-05T14:19:43Z
2013-03-12T14:57:38Z
http://eprints.imtlucca.it/id/eprint/1030
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1030
2011-12-05T14:19:43Z
Stochastic model predictive control for constrained networked control systems with random time delay
In this paper the continuous-time stochastic constrained optimal control problem is formulated for the class of networked control systems, assuming that time delays follow a discrete-time, finite Markov chain. Polytopic overapproximations of the system's trajectories are employed to produce a polyhedral inner approximation of the non-convex constraint set resulting from imposing the constraints in continuous time. The problem is cast in a Markov jump linear systems (MJLS) framework and a stochastic MPC controller is calculated explicitly, offline, by coupling dynamic programming with parametric piecewise quadratic (PWQ) optimization. The calculated control law leads to stochastic stability of the closed-loop system in the mean-square sense and respects the state and input constraints in continuous time.
Panagiotis Patrinos
panagiotis.patrinos@imtlucca.it
Pantelis Sopasakis
pantelis.sopasakis@imtlucca.it
Haralambos Sarimveis
2011-12-05T09:47:20Z
2013-03-12T14:57:38Z
http://eprints.imtlucca.it/id/eprint/1022
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1022
2011-12-05T09:47:20Z
A global piecewise smooth Newton method for fast large-scale model predictive control
In this paper, the strictly convex quadratic program (QP) arising in model predictive control (MPC) for constrained linear systems is reformulated as a system of piecewise affine equations. A regularized piecewise smooth Newton method with exact line search on a convex, differentiable, piecewise-quadratic merit function is proposed for the solution of the reformulated problem. The algorithm has considerable advantages over standard active-set or interior-point algorithms when applied to MPC. Its performance is tested and compared against state-of-the-art QP solvers on a series of benchmark problems. The proposed algorithm is orders of magnitude faster, especially for large-scale problems and long horizons. For example, for the challenging crude distillation unit model of Pannocchia, Rawlings, and Wright (2007) with 252 states, 32 inputs, and 90 outputs, the average running time of the proposed approach is 1.57 ms.
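To illustrate the reformulation idea in miniature (under simplifying assumptions, and not the paper's algorithm): for a box-constrained QP the optimality conditions can be written as the piecewise affine fixed-point equation x = clip(x − (Qx + q), l, u), solved here by plain projected-gradient iteration rather than the regularized piecewise smooth Newton method:

```python
# Toy sketch: min 0.5 x'Qx + q'x  s.t.  l <= x <= u.  The KKT conditions are
# equivalent to the piecewise affine equation x = clip(x - (Qx + q), l, u);
# we find its fixed point by damped projected-gradient iteration.
import numpy as np

def solve_box_qp(Q, q, l, u, step=0.1, iters=2000):
    x = np.clip(np.zeros_like(q), l, u)
    for _ in range(iters):
        x = np.clip(x - step * (Q @ x + q), l, u)  # iterate the PWA map
    return x

Q = np.array([[2.0, 0.0], [0.0, 1.0]])
q = np.array([-4.0, 1.0])
l = np.array([0.0, 0.0])
u = np.array([1.5, 1.5])
x = solve_box_qp(Q, q, l, u)  # unconstrained minimum (2, -1), clipped to the box
```

The iteration converges to (1.5, 0), the box-constrained optimum. Newton-type methods on the same piecewise affine residual exploit its structure to converge in far fewer iterations, which is the source of the speedups reported in the abstract.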
Panagiotis Patrinos
panagiotis.patrinos@imtlucca.it
Haralambos Sarimveis
Pantelis Sopasakis
pantelis.sopasakis@imtlucca.it
2011-12-05T09:38:04Z
2011-12-05T09:38:04Z
http://eprints.imtlucca.it/id/eprint/1021
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1021
2011-12-05T09:38:04Z
Convex parametric piecewise quadratic optimization: theory and algorithms
In this paper we study the problem of parametric minimization of convex piecewise quadratic functions. Our study provides a unifying framework for convex parametric quadratic and linear programs. Furthermore, it extends parametric optimization algorithms to problems with piecewise quadratic cost functions, paving the way for new applications of parametric optimization in explicit dynamic programming and optimal control with quadratic stage cost.
Panagiotis Patrinos
panagiotis.patrinos@imtlucca.it
Haralambos Sarimveis
2011-11-18T14:02:09Z
2014-07-01T14:45:43Z
http://eprints.imtlucca.it/id/eprint/1009
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1009
2011-11-18T14:02:09Z
Optimization-based AFC automatic flatness control in cold tandem rolling: an integrated flatness optimization approach for the whole tandem mill
Cold tandem mills have the purpose of reducing the thickness of flat steel by means of consecutive rolling stands. This type of process is widely deployed in order to supply a wide variety of industries, from food processing to automotive manufacturing. In recent years, the production of steel (and other metals, such as copper and aluminium) by cold rolling has been the subject of research efforts to reach ultra-thin gauges and to advance production performance together with the quality of the material.
Alberto Bemporad
alberto.bemporad@imtlucca.it
Daniele Bernardini
daniele.bernardini@imtlucca.it
Andrea Spinelli
Francesco Alessandro Cuzzola
2011-11-15T14:50:30Z
2011-11-15T14:50:30Z
http://eprints.imtlucca.it/id/eprint/1003
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/1003
2011-11-15T14:50:30Z
Model predictive control of stochastic and networked systems
Daniele Bernardini
daniele.bernardini@imtlucca.it
2011-10-31T11:23:14Z
2011-11-17T14:44:57Z
http://eprints.imtlucca.it/id/eprint/975
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/975
2011-10-31T11:23:14Z
A central limit theorem and its applications to multicolor randomly reinforced urns
Let (X_n) be a sequence of integrable real random variables, adapted to a filtration (G_n). Define C_n = √n {(1/n)∑_{k=1}^n X_k − E(X_{n+1} | G_n)} and D_n = √n {E(X_{n+1} | G_n) − Z}, where Z is the almost-sure limit of E(X_{n+1} | G_n) (assumed to exist). Conditions for (C_n, D_n) → N(0, U) × N(0, V) stably are given, where U and V are certain random variables. In particular, under such conditions, we obtain √n {(1/n)∑_{k=1}^n X_k − Z} = C_n + D_n → N(0, U + V) stably. This central limit theorem has natural applications to Bayesian statistics and urn problems. The latter are investigated, paying special attention to multicolor randomly reinforced urns.
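A minimal simulation sketch (not from the paper) of a two-color randomly reinforced urn, in which both (1/n)∑_{k=1}^n X_k and the predictive probability E(X_{n+1} | G_n) approach the same random limit Z:

```python
# Two-color Pólya-type randomly reinforced urn.  X_k indicates the color drawn
# at step k; the current proportion of color-1 balls is E(X_{k+1} | G_k), and
# it converges almost surely to a random limit Z.
import random

def polya_urn(n_draws, reinforcement=1, seed=0):
    rng = random.Random(seed)
    ones, total = 1, 2             # start with one ball of each color
    draws = []
    for _ in range(n_draws):
        x = 1 if rng.random() < ones / total else 0
        draws.append(x)
        ones += reinforcement * x  # reinforce the drawn color
        total += reinforcement
    return draws, ones / total     # draw indicators and predictive probability

draws, prop = polya_urn(10_000)
```

For large n the empirical mean of the draws and the predictive probability agree up to a term of order 1/√n, which is exactly the fluctuation C_n that the theorem controls.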
Patrizia Berti
Irene Crimaldi
irene.crimaldi@imtlucca.it
Luca Pratelli
Pietro Rigo
2011-10-25T10:01:35Z
2013-03-12T09:32:21Z
http://eprints.imtlucca.it/id/eprint/972
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/972
2011-10-25T10:01:35Z
Tracking-Optimized quantization for H.264 compression in transportation video surveillance applications
Eren Soyak
Sotirios A. Tsaftaris
sotirios.tsaftaris@imtlucca.it
Aggelos K. Katsaggelos
2011-10-24T14:39:23Z
2011-10-24T14:39:23Z
http://eprints.imtlucca.it/id/eprint/970
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/970
2011-10-24T14:39:23Z
Immagine sull’immagine: la lettura delle opere d’arte tra cinema e cibernetica
Emanuele Pellegrini
emanuele.pellegrini@imtlucca.it
2011-10-24T14:24:53Z
2011-10-24T14:24:53Z
http://eprints.imtlucca.it/id/eprint/969
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/969
2011-10-24T14:24:53Z
Le opere e le fonti: la Toscana tra medioevi e rinascenze
Emanuele Pellegrini
emanuele.pellegrini@imtlucca.it
2011-10-24T14:09:09Z
2014-12-18T10:18:52Z
http://eprints.imtlucca.it/id/eprint/968
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/968
2011-10-24T14:09:09Z
Il viaggio e la memoria: i taccuini di Adolfo Venturi
Emanuele Pellegrini
emanuele.pellegrini@imtlucca.it
2011-10-17T14:10:29Z
2011-10-17T14:10:29Z
http://eprints.imtlucca.it/id/eprint/942
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/942
2011-10-17T14:10:29Z
1954-1964 : un decennio e due commissioni d’indagine per il patrimonio culturale
Emanuele Pellegrini
emanuele.pellegrini@imtlucca.it
2011-10-06T09:22:40Z
2011-10-06T09:28:54Z
http://eprints.imtlucca.it/id/eprint/910
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/910
2011-10-06T09:22:40Z
SoSL: a service-oriented stochastic logic
The Temporal Mobile Stochastic Logic (MoSL) has been introduced in previous works by the authors for formulating properties of systems specified in StoKlaim, a Markovian extension of Klaim. The main purpose of MoSL is to address key functional aspects of network-aware programming, such as distribution awareness, mobility and security, and to guarantee their integration with performance and dependability guarantees. In this paper we present SoSL, a variant of MoSL designed for dealing with specific features of Service-Oriented Computing (SOC). We also show how SoSL formulae can be model-checked against system descriptions expressed in MarCaSPiS, a process calculus designed for addressing quantitative aspects of SOC. In order to perform actual model checking, we rely on a dedicated front-end that uses existing state-based stochastic model checkers, such as the Markov Reward Model Checker (MRMC).
Rocco De Nicola
r.denicola@imtlucca.it
Diego Latella
Michele Loreti
Mieke Massink
2011-10-06T09:13:07Z
2011-10-06T09:28:54Z
http://eprints.imtlucca.it/id/eprint/909
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/909
2011-10-06T09:13:07Z
Core calculi for service-oriented computing
Core calculi have been adopted in the Sensoria project with three main aims. First, they have been used to clarify and formally define the basic concepts that characterize the Sensoria approach to the modeling of service-oriented applications. Second, they are the formal models on which the Sensoria analysis techniques have been developed. Finally, they have been used to drive the implementation of the prototypes of the Sensoria languages for programming actual service-based systems. This chapter reports on the Sensoria core calculi, presenting their syntax and intuitive semantics, and describing their main features by means of a common running example, namely a Credit Request scenario taken from the Sensoria Finance case study.
Luis Caires
Rocco De Nicola
r.denicola@imtlucca.it
Rosario Pugliese
Vasco Thudichum Vasconcelos
Gianluigi Zavattaro
2011-09-22T10:54:25Z
2013-02-20T09:44:31Z
http://eprints.imtlucca.it/id/eprint/899
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/899
2011-09-22T10:54:25Z
How do employment contract reforms affect welfare? Theory and evidence
Short-term employment contracts have been deployed rapidly across the European Union (EU) in the past two decades. Characterized by a high degree of flexibility, they were thought to be the solution to persistent labor market rigidities and high unemployment rates. The objective of this paper is to investigate both theoretically and empirically the effects of introducing short-term employment contracts to the labor market, and to draw conclusions regarding the change in welfare for different categories of people. Data from the Italian labor market show that workers hired on a short-term basis are mostly young, female, inexperienced, less educated, and poorly qualified. Short-term contracts, which are associated with lower wages, often come in sequences. Labor force participation has increased in particular among older workers. Such changes in labor force composition and transition patterns can be explained by a search model with worker heterogeneity and differentiated contracts. In steady state, a pooling equilibrium of less and more productive workers exists when only permanent contracts are available. In the presence of short-term contracts, a separating equilibrium allocates less and more productive workers towards different career paths. Through model calibration it is possible to quantify the change in welfare for different categories of workers. Moreover, within a multi-state duration framework, the model is estimated with the Heckman and Singer non-parametric maximum likelihood (NPMLE) estimation procedure. One of the major findings is that inexperienced workers are worse off after the reforms. However, after the accumulation of some work experience, they have the opportunity to compensate for their losses, if they are more productive. Less productive workers, even though provided with higher chances to work, are the ones paying the cost of higher turnover and lower wages.
Cristina Tealdi
cristina.tealdi@imtlucca.it
2011-09-19T10:53:29Z
2011-10-04T09:52:51Z
http://eprints.imtlucca.it/id/eprint/892
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/892
2011-09-19T10:53:29Z
Warfare, Taxation, and Political Change: Evidence from the Italian Risorgimento
We examine the relationships between warfare, taxation, and political change in the context of the political unification of the Italian peninsula. Using a comprehensive new database, we argue that external and internal threat environments had significant implications for the demand for military strength, which in turn had important ramifications for fiscal policy and the likelihood of constitutional reform and related improvements in the provision of non-military public services. Our analytic narrative complements recent theoretical and econometric works about state capacity. By emphasizing public finances, we also uncover novel insights about the forces underlying state formation in Italy.
Mark Dincecco
m.dincecco@imtlucca.it
Giovanni Federico
giovanni.federico@eui.eu
Andrea Vindigni
andrea.vindigni@imtlucca.it
2011-09-13T13:52:18Z
2014-12-02T09:54:20Z
http://eprints.imtlucca.it/id/eprint/870
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/870
2011-09-13T13:52:18Z
Mimesis and motion in classical antiquity
Maria Luisa Catoni
marialuisa.catoni@imtlucca.it
2011-09-13T13:30:40Z
2014-12-02T09:48:46Z
http://eprints.imtlucca.it/id/eprint/869
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/869
2011-09-13T13:30:40Z
Etiche e forme : l'architettura del simposio
Maria Luisa Catoni
marialuisa.catoni@imtlucca.it
2011-09-13T09:52:21Z
2011-09-27T11:09:23Z
http://eprints.imtlucca.it/id/eprint/863
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/863
2011-09-13T09:52:21Z
A formal support to business and architectural design for service-oriented systems
Architectural Design Rewriting (ADR) is an approach for the design of software architectures developed within Sensoria by reconciling graph transformation and process calculi techniques. The key feature that makes ADR a suitable and expressive framework is the algebraic handling of structured graphs, which improves the support for specification, analysis and verification of service-oriented architectures and applications. We show how ADR is used as a formal ground for high-level modelling languages and approaches developed within Sensoria.
Roberto Bruni
Howard Foster
Alberto Lluch-Lafuente
alberto.lluch@imtlucca.it
Ugo Montanari
Emilio Tuosto
2011-09-13T09:42:21Z
2011-09-27T11:09:23Z
http://eprints.imtlucca.it/id/eprint/862
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/862
2011-09-13T09:42:21Z
Hierarchical models for service-oriented systems
We present our approach to the denotation and representation of hierarchical graphs: a suitable algebra of hierarchical graphs and two domains of interpretations. Each domain of interpretation focuses on a particular perspective of the graph hierarchy: the top view (nested boxes) is based on a notion of embedded graphs while the side view (tree hierarchy) is based on gs-graphs. Our algebra can be understood as a high-level language for describing such graphical models, which are well suited for defining graphical representations of service-oriented systems where nesting (e.g. sessions, transactions, locations) and linking (e.g. shared channels, resources, names) are key aspects.
Roberto Bruni
Andrea Corradini
Fabio Gadducci
Alberto Lluch-Lafuente
alberto.lluch@imtlucca.it
Ugo Montanari
2011-09-13T09:24:38Z
2016-07-13T10:48:51Z
http://eprints.imtlucca.it/id/eprint/861
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/861
2011-09-13T09:24:38Z
A Lewisian approach to the verification of adaptive systems
Many software artifacts like software architectures or distributed programs are characterized by a high level of
dynamism involving changes in their structure or behaviour as a response to external stimuli or as the result of
programmed reconfigurations. When reasoning about such adaptive systems one is interested not only in proving properties of their global behaviour, like system correctness, but also of the evolution of the single components. For instance, when analysing the well-known stable marriage problem one would like to know whether a solution ensures that “two females never claim to be married to the same male”. To enable automatic reasoning, two main things are needed: models for the software artifacts and logic-based languages for describing their properties. Among the most successful and versatile models for such artifacts are graphs. Regarding the property specification languages, variants of quantified temporal logics have been proposed, which combine the modal operators of temporal logics with monadic second-order logic for graphs. Unfortunately, the semantical models for such logics are not clearly cut, due to the possibility of interleaving modal operators and quantifiers in formulae like ∃x.◊ψ, where x is quantified in a world but ψ states properties about x in a reachable world or state where it does not necessarily exist or even have the same identity. The issue is known in the quantified temporal logic
literature as trans-world identity [1, 3]. A typical solution follows the so-called “Kripke semantics” approach: roughly, a set of universal items is chosen, and its elements are used to form each state. This solution is the most widely adopted, and it underlies all the proposals we are aware of. Kripke-like solutions, however, do not fit well with the merging, deletion and creation of components, nor do they allow for an easy inclusion of evolution relations possibly forming cycles: if the value of an open formula is a set of states, how does one account, e.g., for an element that is first deleted and then added again? This problem is often solved by restricting the class of admissible evolution relations, which forces a reformulation of the state transition relation modeling the system evolution, hampering the intuitive meaning of the logic. In [2, 5] we presented an alternative approach, inspired by counterpart theory [4]. The key point of Lewis's proposal is the notion of counterpart, which is a consequence of his refusal to interpret the relation of trans-world sameness as
strict identity. In our approach we exploit counterpart relations, i.e. (partial) functions among states, explicitly relating elements of different states. Our solution avoids some limitations of the existing approaches, in particular as regards the treatment of the possible merging and reuse of components. Moreover, the resulting semantics is streamlined and intuitively appealing, yet general enough to cover most of the alternatives we are aware of.
Fabio Gadducci
Alberto Lluch-Lafuente
alberto.lluch@imtlucca.it
Andrea Vandin
andrea.vandin@imtlucca.it
2011-09-12T14:13:05Z
2016-07-13T09:45:10Z
http://eprints.imtlucca.it/id/eprint/860
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/860
2011-09-12T14:13:05Z
Towards a Maude tool for model checking temporal graph properties
We present our prototypical tool for the verification of graph transformation systems. The major novelty of our tool is that it provides a model checker for temporal graph properties based on counterpart semantics for quantified μ-calculi. Our tool can be considered as an instantiation of our approach to counterpart semantics, which allows for a neat handling of creation, deletion and merging in systems
with dynamic structure. Our implementation is based on the object-based machinery of Maude, which provides the basics to deal with attributed graphs. Graph transformation
systems are specified with term rewrite rules. The model checker evaluates logical formulae of the second-order modal μ-calculus in the automatically generated CounterpartModel (a sort of unfolded graph transition system) of the graph transformation system under study. The result of evaluating a formula is a set of assignments for each state, associating node variables to actual nodes.
Alberto Lluch-Lafuente
alberto.lluch@imtlucca.it
Andrea Vandin
andrea.vandin@imtlucca.it
2011-09-12T13:50:29Z
2011-09-27T11:09:23Z
http://eprints.imtlucca.it/id/eprint/859
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/859
2011-09-12T13:50:29Z
On structured model-driven transformations
Structural aspects play a key role in the model-driven development of software systems. Effective techniques and tools must therefore be based on suitable representation formalisms that facilitate the specification, manipulation and analysis of the structure of models. Graphical and algebraic approaches have been shown to be very successful for such purposes: 1) graphs offer a natural representation of topological structures, 2) algebras offer a natural representation of compositional structures, 3) both graphs and algebras can be manipulated in a declarative way by means of rule-based techniques, 4) they allow for a layered presentation of models that enables compositional techniques and favours scalability. Most of the existing approaches represent such layering in a plain manner by overlapping the intra- and the inter-layered structure. It has been shown that some layering structures can be conveniently represented by an explicit hierarchical structure, thus enabling structurally inductive manipulations of the resulting models. Moreover, providing an inductive presentation of the structure facilitates the compositional analysis and verification of models. In this paper we compare and reconcile some recent approaches and synthesise them into an algebraic and graph-based formalism for representing and manipulating models with inductively defined hierarchical structure.
Roberto Bruni
Alberto Lluch-Lafuente
alberto.lluch@imtlucca.it
Ugo Montanari
2011-09-09T12:33:12Z
2013-03-05T15:29:25Z
http://eprints.imtlucca.it/id/eprint/853
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/853
2011-09-09T12:33:12Z
Detecting myocardial ischemia at rest with cardiac phase-resolved BOLD MRI: early findings
Vasodilatory stress is the standard paradigm for probing myocardial oxygenation (O2) changes due to coronary artery stenosis on the basis of BOLD MRI (1-3). However, since vasodilation is typically achieved with provocative stress, approaches that can identify the presence of stenosis on the basis of microvascular alterations at rest are highly desirable. It is known that myocardial blood volume (MBV) varies throughout the cardiac cycle; MBV increases during diastole and decreases during systole (4,5). It has also been shown that changes in MBV lead to increased O2
extraction by cardiomyocytes (6). Thus, MBV and O2 are expected to vary at different parts of the cardiac cycle. In particular, in diastole, it is expected that MBV and O2 extraction are maximal, while in systole, MBV and O2 extraction are minimal. In addition, as MBV increases, even at a stable level of O2, the number of deoxygenated hemoglobin molecules within a voxel increases, causing a proportionate elevation in the local magnetic field
inhomogeneities (7). Moreover, with increasing grade of stenosis, the MBV in the myocardial territory supplied by a stenotic artery increases in systole (8-11). Thus, the relative MBV and O2 changes between systole and diastole are expected to be different between myocardial territories supplied by healthy and stenotic coronary arteries. Moreover, it is also known that T1 of myocardium is dependent on MBV and that the apparent T2 is dependent on
blood O2. Since SSFP signals are approximately T2/T1 weighted, it is hypothesized that cardiac phase-resolved BOLD SSFP (CP-BOLD) (12) signal intensities at systole and diastole may reflect changes in MBV and blood O2. In addition, since stenosis leads to an increase in systolic MBV and is accompanied by a reduction in blood O2, it is hypothesized that systolic and diastolic CP-BOLD signal intensities may be used to detect the ischemic
territories at resting states. These hypotheses were tested with simulations and canine experiments.
Sotirios A. Tsaftaris
sotirios.tsaftaris@imtlucca.it
Veronica Rundell
Xiangzhi Zhou
Ying Liu
Richard Tang
Debiao Li
Rohan Dharmakumar
2011-09-09T12:28:14Z
2013-03-05T15:30:15Z
http://eprints.imtlucca.it/id/eprint/852
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/852
2011-09-09T12:28:14Z
An area-based imaging biomarker for the characterization of coronary artery stenosis with blood oxygen-sensitive MRI
BOLD MRI may be used for detecting myocardial oxygenation changes secondary to coronary artery stenosis (1-3). Under
pharmacological stress, the myocardial bed supplied by the stenotic coronary artery appears hypointense relative to healthy regions in BOLD images. Manual windowing (to visualize signal changes) and segmentation according to the American Heart Association’s (AHA) recommendation are often used to characterize the BOLD effect. However, current approaches for analyzing BOLD changes are suboptimal for detecting critical stenosis (reduction in perfusion reserve below 2:1). The purpose of this study is to test the hypothesis that ARREAS (Area-based biomaRker for chaRactErizing coronAry Stenosis), an area-based statistical approach relying on the differences between rest and stress images, can characterize BOLD changes in end-systole and end-diastole with exquisite sensitivity and specificity. This hypothesis was tested in a canine model.
Sotirios A. Tsaftaris
sotirios.tsaftaris@imtlucca.it
Richard Tang
Xiangzhi Zhou
Debiao Li
Rohan Dharmakumar
2011-09-08T13:52:30Z
2013-03-05T15:29:49Z
http://eprints.imtlucca.it/id/eprint/851
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/851
2011-09-08T13:52:30Z
An area-based imaging biomarker for characterizing coronary artery stenosis with myocardial BOLD MRI
BOLD MRI may be used for detecting myocardial oxygenation changes secondary to coronary artery stenosis. However, current approaches for analyzing BOLD changes are suboptimal for detecting critical stenosis.
Sotirios A. Tsaftaris
sotirios.tsaftaris@imtlucca.it
Richard Tang
Xiangzhi Zhou
Debiao Li
Rohan Dharmakumar
2011-09-08T13:34:01Z
2013-03-05T15:32:18Z
http://eprints.imtlucca.it/id/eprint/850
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/850
2011-09-08T13:34:01Z
A fully-automated statistical method for characterization of flow artifact presence in cardiac MRI
Flow artifacts in MR images can appear as ghosts within and outside the body cavity. Current approaches for optimizing sequences for suppressing such artifacts rely on expert scoring or on semi-automated methods for evaluation.
Sotirios A. Tsaftaris
sotirios.tsaftaris@imtlucca.it
Xiangzhi Zhou
Rohan Dharmakumar
2011-09-08T10:12:47Z
2013-03-05T15:32:32Z
http://eprints.imtlucca.it/id/eprint/841
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/841
2011-09-08T10:12:47Z
Unsupervised and reproducible image-based identification of cardiac phases in cine SSFP MRI
A critical component in computing quantitative diagnostic metrics, such as ejection fraction, as well as in image segmentation and registration, is the accurate identification of the end-systolic (ES) and end-diastolic (ED) frames in cardiac cine MRI. Reliable identification of ES is also important in cardiac phase-resolved myocardial blood-oxygen-level-dependent (BOLD) MRI studies (1). An assessment of changes in myocardial oxygenation requires BOLD images to be collected at rest and stress, which is typically induced with intravenous infusion of adenosine. ES images at both states are compared to assess the presence of coronary artery stenosis. To increase reproducibility and eliminate variability it is desirable to automate this procedure.
Most automated methods relying on trigger times do not account for anatomical correspondence, while methods based on identifying the minimum and maximum of the blood pool area in the left ventricle (LV) chamber are computationally intensive, susceptible to noise, and require prior localization and segmentation of the LV. The purpose of this work is to develop automated methods to facilitate the robust and reproducible evaluation of cardiac cine MRI studies.
Sotirios A. Tsaftaris
sotirios.tsaftaris@imtlucca.it
Xiangzhi Zhou
Richard Tang
Rohan Dharmakumar
2011-09-08T09:58:03Z
2011-10-07T08:20:22Z
http://eprints.imtlucca.it/id/eprint/840
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/840
2011-09-08T09:58:03Z
The sustainability of European health care systems: beyond income and aging
During the last 30 years, health care expenditure (HCE) has been growing much more rapidly than GDP in OECD countries. In this paper, we review the determinants of HCE dynamics in Europe, taking into account the role of income, aging population, technological progress, female labor participation and public budgetary variables.
We show that HCE is a multifaceted phenomenon where demographic, social, economic, technological and institutional factors all play an important role. The comparison of total, public and private HCE reveals an imbalance of European welfare toward the care of the elderly. European governments should increasingly rely on pluralistic systems to balance sustainability and access and equilibrate the distribution of resources across the functions of the public welfare system.
Fabio Pammolli
f.pammolli@imtlucca.it
Massimo Riccaboni
massimo.riccaboni@imtlucca.it
Laura Magazzini
2011-09-07T12:58:03Z
2014-01-24T14:19:09Z
http://eprints.imtlucca.it/id/eprint/825
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/825
2011-09-07T12:58:03Z
Innovation and corporate dynamics: a theoretical framework
We provide a detailed analysis of a generalized proportional growth model (GPGM) of innovation and corporate dynamics that encompasses the Gibrat’s Law of Proportionate Effect and the Simon growth process as particular instances. The predictions of the model are derived in terms of (i) firm size distribution, (ii) the distribution of firm growth rates, and (iii-iv) the relationships between firm size and the mean and variance of firm growth rates. We test the model against data from the worldwide pharmaceutical industry and find its predictions to be in good agreement with empirical evidence on all four dimensions.
Jakub Growiec
Fabio Pammolli
f.pammolli@imtlucca.it
Massimo Riccaboni
massimo.riccaboni@imtlucca.it
2011-08-11T12:27:52Z
2013-03-05T15:08:41Z
http://eprints.imtlucca.it/id/eprint/803
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/803
2011-08-11T12:27:52Z
Low-complexity tracking-aware H.264 video compression for transportation surveillance
In centralized transportation surveillance systems, video is captured and compressed at low processing power remote nodes and transmitted to a central location for processing. Such compression can reduce the accuracy of centrally run automated object tracking algorithms. In typical systems, the majority of communications bandwidth is spent on encoding temporal pixel variations such as acquisition noise or local changes to lighting. We propose a tracking-aware, H.264-compliant compression algorithm that removes temporal components of low tracking interest and optimizes the quantization of frequency coefficients, particularly those that most influence trackers, significantly reducing bitrate while maintaining comparable tracking accuracy. We utilize tracking accuracy as our compression criterion in lieu of mean squared error metrics. Our proposed system is designed with low processing power and memory requirements in mind, and as such can be deployed on remote nodes. Using H.264/AVC video coding and a commonly used state-of-the-art tracker we show that our algorithm allows for over 90% bitrate savings while maintaining comparable tracking accuracy.
Eren Soyak
Sotirios A. Tsaftaris
sotirios.tsaftaris@imtlucca.it
Aggelos K. Katsaggelos
2011-08-11T12:19:51Z
2013-03-05T15:13:55Z
http://eprints.imtlucca.it/id/eprint/802
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/802
2011-08-11T12:19:51Z
Colorizing a masterpiece [Applications Corner]
The purpose of this article is to discuss the process of colorizing a historical artifact: a black-and-white archival photo of Bathers by a River, 1909-1917, by Henri Matisse (Art Institute of Chicago 1953.158), taken in November 1913, when the artist was still working on the painting and showing it in a significantly different state compared to the one seen today. Historical accounts describe a painting that was originally a more naturalistic, pastoral image; but over the course of several years, and under the influence of Cubism and the circumstances of World War I, Matisse radically revised his monumental canvas (measuring 260 × 392 cm). Matisse later considered Bathers by a River to be one of the five most pivotal works of his career. Historical photographs unearthed by archival research depict the painting at various stages. The painting, along with these historical photographs, our colorized image, and other works documenting the experimental nature of the artist's output during a period that has been studied very little until now, has been the centerpiece of a recent exhibit, Matisse: Radical Invention 1913-1917, which was at the Art Institute of Chicago and the Museum of Modern Art in New York City in March-October 2010.
Sotirios A. Tsaftaris
sotirios.tsaftaris@imtlucca.it
Kristin H. Lister
Inge Fiedler
Francesca Casadio
Aggelos K. Katsaggelos
2011-08-11T12:06:29Z
2013-03-05T15:14:19Z
http://eprints.imtlucca.it/id/eprint/801
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/801
2011-08-11T12:06:29Z
Anomalous video event detection using spatiotemporal context
Compared to other anomalous video event detection approaches that analyze object trajectories only, we propose a context-aware method to detect anomalies. By tracking all moving objects in the video, three different levels of spatiotemporal contexts are considered, i.e., point anomaly of a video object, sequential anomaly of an object trajectory, and co-occurrence anomaly of multiple video objects. A hierarchical data mining approach is proposed. At each level, frequency-based analysis is performed to automatically discover regular rules of normal events. Events deviating from these rules are identified as anomalies. The proposed method is computationally efficient and can infer complex rules. Experiments on real traffic video validate that the detected video anomalies are hazardous or illegal according to traffic regulations.
Fan Jiang
Junsong Yuan
Sotirios A. Tsaftaris
sotirios.tsaftaris@imtlucca.it
Aggelos K. Katsaggelos
2011-08-11T11:13:25Z
2013-03-05T15:32:50Z
http://eprints.imtlucca.it/id/eprint/800
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/800
2011-08-11T11:13:25Z
T2-weighted STIR imaging of myocardial edema associated with ischemia-reperfusion injury: the influence of proton density effect on image contrast
To investigate the contribution of proton density (PD) in T2-STIR based edema imaging in the setting of acute myocardial infarction (AMI). Materials and Methods: Canines (n = 5), subjected to full occlusion of the left anterior descending artery for 3 hours, underwent serial magnetic resonance imaging (MRI) studies 2 hours postreperfusion (day 0) and on day 2. During each study, T1 and T2 maps, STIR (TE = 7.1 msec and 64 msec) and late gadolinium enhancement (LGE) images were acquired. Using T1 and T2 maps, relaxation and PD contributions to myocardial edema contrast (EC) in STIR images at both TEs were calculated. Results: Edematous territories showed a significant increase in PD (20.3 ± 14.3%, P < 0.05) relative to healthy territories. The contributions of T1 changes and T2 or PD changes toward EC were in opposite directions. A one-tailed t-test confirmed that the mean T2- and PD-based EC at both TEs were greater than zero. EC from STIR images at TE = 7.1 msec was dominated by PD rather than T2 effects (94.3 ± 11.3% vs. 17.6 ± 2.5%, P < 0.05), while at TE = 64 msec, T2 effects were significantly greater than PD effects (90.8 ± 20.3% vs. 12.5 ± 11.9%, P < 0.05). The contribution from PD in standard STIR acquisitions (TE = 64 msec) was significantly higher than 0 (P < 0.05). Conclusion: In addition to T2-weighting, edema detection in the setting of AMI with T2-weighted STIR imaging has a substantial contribution from PD changes, likely stemming from increased free-water content within the affected tissue. This suggests that imaging approaches that take advantage of both PD and T2 effects may provide the optimal sensitivity for detecting myocardial edema.
Xiangzhi Zhou
Veronica Rundell
Ying Liu
Richard Tang
Rachel Klein
Saurabh Shah
Sven Zuehlsdorff
Sotirios A. Tsaftaris
sotirios.tsaftaris@imtlucca.it
Debiao Li
Rohan Dharmakumar
2011-08-11T08:37:32Z
2011-09-27T13:11:38Z
http://eprints.imtlucca.it/id/eprint/794
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/794
2011-08-11T08:37:32Z
Uniform labeled transition systems for nondeterministic, probabilistic, and stochastic process calculi
Labeled transition systems are typically used to represent the behavior of nondeterministic processes, with labeled transitions defining a one-step state-to-state reachability relation. This model has been recently made more general by modifying the transition relation in such a way that it associates with any source state and transition label a reachability distribution, i.e., a function mapping each possible target state to a value of some domain that expresses the degree of one-step reachability of that target state. In this extended abstract, we show how the resulting model, called ULTraS from Uniform Labeled Transition System, can be naturally used to give semantics to a fully nondeterministic, a fully probabilistic, and a fully stochastic variant of a CSP-like process language.
Marco Bernardo
Rocco De Nicola
r.denicola@imtlucca.it
Michele Loreti
2011-08-08T13:04:26Z
2011-09-27T13:12:23Z
http://eprints.imtlucca.it/id/eprint/768
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/768
2011-08-08T13:04:26Z
Emergence and persistence of inefficient states
We present a theory of the emergence and persistence of inefficient states based on patronage politics. The society consists of rich and poor. The rich are initially in power, but expect to transition to democracy, which will choose redistributive policies. Taxation requires the employment of bureaucrats. By choosing an inefficient state structure, the rich may be able to use patronage and capture democratic politics, so reducing the amount of redistribution in democracy. Moreover, the inefficient state creates its own constituency and tends to persist over time. Intuitively, an inefficient state structure creates more rents for bureaucrats than would an efficient one. When the poor come to power in democracy, they will reform the structure of the state to make it more efficient so that higher taxes can be collected at lower cost and with lower rents for bureaucrats. Anticipating this, when the society starts out with an inefficient organization of the state, bureaucrats support the rich, who set lower taxes but also provide rents to bureaucrats. We find that the rich–bureaucrats coalition may also expand the size of bureaucracy excessively so as to generate enough political support. The model shows that an equilibrium with an inefficient state is more likely to arise when there is greater income inequality, when bureaucratic rents take intermediate values, and when individuals are sufficiently forward-looking.
Daron Acemoglu
Davide Ticchi
davide.ticchi@imtlucca.it
Andrea Vindigni
andrea.vindigni@imtlucca.it
2011-08-03T13:33:34Z
2012-03-05T10:25:09Z
http://eprints.imtlucca.it/id/eprint/766
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/766
2011-08-03T13:33:34Z
Appliance operation scheduling for electricity consumption optimization
This paper concerns the problem of optimally scheduling a set of appliances at the end-user premises. The user's energy fee varies over time, and moreover, in the context of smart grids, the user may receive a reward from an energy aggregator if he/she reduces consumption during certain time intervals. In a household, the problem is to decide when to schedule the operation of the appliances, in order to meet a number of goals, namely overall costs, climatic comfort level and timeliness. We devise a model accounting for a typical household user, and present computational results showing that it can be efficiently solved in real-life instances.
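The core scheduling decision in this abstract can be illustrated with a minimal sketch: for a single appliance under a time-varying tariff, exhaustively compare feasible start times and pick the cheapest. The tariff values, the two-hour job, and the unit-power assumption are hypothetical and not from the paper's model, which also weighs comfort and timeliness.

```python
# Illustrative sketch only: brute-force start-time selection for one
# appliance under an hourly tariff (hypothetical numbers, unit power).

def best_start(tariff, duration):
    """Return the start hour minimizing total energy cost for a job
    that runs `duration` consecutive hours at unit power."""
    costs = {
        start: sum(tariff[start:start + duration])
        for start in range(len(tariff) - duration + 1)
    }
    return min(costs, key=costs.get)

# Hourly prices over a 6-hour window (hypothetical values).
tariff = [0.30, 0.25, 0.10, 0.10, 0.20, 0.35]

# A 2-hour job (e.g., a washing-machine cycle) is cheapest at hour 2.
print(best_start(tariff, 2))  # -> 2
```

The paper's formulation handles many appliances, rewards, and comfort constraints jointly; this sketch only shows the cost-versus-start-time trade-off for one device.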
Alessandro Agnetis
Gabriella Dellino
gabriella.dellino@imtlucca.it
Paolo Detti
Gianluca De Pascale
Giacomo Innocenti
Antonio Vicino
2011-08-01T10:24:49Z
2011-08-04T07:30:21Z
http://eprints.imtlucca.it/id/eprint/748
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/748
2011-08-01T10:24:49Z
Dynamic objectives aggregation methods in multi-objective evolutionary optimization
Several approaches for solving multi-objective optimization problems entail a form of scalarization of the objectives. This chapter proposes a study of different dynamic objectives aggregation methods in the context of evolutionary algorithms. These methods are mainly based on both weighted sum aggregations and curvature variations. Since the incorporation of chaotic rules or behaviour in population-based optimization algorithms has been shown to possibly enhance their searching ability, this study proposes to introduce and evaluate also some chaotic rules in the dynamic weights generation process. A comparison analysis is presented on the basis of a campaign of computational experiments on a set of benchmark problems from the literature.
Gabriella Dellino
gabriella.dellino@imtlucca.it
Mariagrazia Fedele
Carlo Meloni
2011-07-29T11:00:09Z
2012-07-09T09:42:48Z
http://eprints.imtlucca.it/id/eprint/745
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/745
2011-07-29T11:00:09Z
A stochastic model predictive control approach to dynamic option hedging with transaction costs
This paper proposes a stochastic model predictive control (SMPC) approach to hedging derivative contracts (such as plain vanilla and exotic options) in the presence of transaction costs. The methodology is based on the minimization of a stochastic measure of the hedging error predicted for the next trading date. Three different measures are proposed to determine the optimal composition of the replicating portfolio. The first measure is a combination of variance and expected value of the hedging error, leading to a quadratic program (QP) to solve at each trading date; the second measure is the conditional value at risk (CVaR), a common index used in finance quantifying the average loss over a subset of worst-case realizations, leading to a linear programming (LP) formulation; the third approach is of min-max type and attempts at minimizing the largest possible hedging error, also leading to a (smaller scale) linear program. The hedging performance obtained by the three different measures is tested and compared in simulation on a European call and a barrier option.
Alberto Bemporad
alberto.bemporad@imtlucca.it
Laura Puglia
Tommaso Gabbriellini
2011-07-29T10:53:35Z
2012-07-09T09:26:08Z
http://eprints.imtlucca.it/id/eprint/744
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/744
2011-07-29T10:53:35Z
Decentralized hierarchical multi-rate control of constrained linear systems
This paper proposes a decentralized hierarchical multi-rate control scheme for large-scale dynamically-coupled linear systems subject to linear constraints on input and state variables. At the lower level, a set of decentralized and independent linear controllers stabilizes the process, without taking care of the constraints. Each controller receives reference signals from its own upper-level controller, that runs at a lower sampling frequency. By optimally constraining the magnitude and rate of variation of the reference signals to each lower-level controller, quantitative criteria are provided for selecting the ratio between the sampling rates of the upper and lower layers of control at each location, in a way that closed-loop stability is preserved and the fulfillment of the prescribed constraints is guaranteed.
Alberto Bemporad
alberto.bemporad@imtlucca.it
Davide Barcelli
Giulio Ripaccioli
2011-07-29T10:53:29Z
2012-07-09T09:25:27Z
http://eprints.imtlucca.it/id/eprint/743
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/743
2011-07-29T10:53:29Z
Decentralized hybrid model predictive control of a formation of unmanned aerial vehicles
This paper proposes a hierarchical MPC strategy for autonomous navigation of a formation of unmanned aerial vehicles (UAVs) of quadcopter type under obstacle and collision avoidance constraints. Each vehicle is stabilized by a lower-level local linear MPC controller around a desired position, that is generated, at a slower sampling rate, by a hybrid MPC controller per vehicle. Such an upper control layer is based on a hybrid dynamical model of the UAV in closed-loop with its linear MPC controller and of its surrounding environment (i.e., the other UAVs and obstacles). The resulting decentralized scheme controls the formation based on a leader-follower approach. The performance of the hierarchical control scheme is assessed through simulations and comparisons with other path planning strategies, showing the ability of linear MPC to handle the strong couplings among the dynamical variables of each quadcopter under motor voltage and angle/position constraints, and the flexibility of the decentralized hybrid MPC scheme in planning the desired paths on-line.
Alberto Bemporad
alberto.bemporad@imtlucca.it
Claudio Rocchi
2011-07-29T10:53:22Z
2012-07-06T13:14:36Z
http://eprints.imtlucca.it/id/eprint/742
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/742
2011-07-29T10:53:22Z
Explicit hybrid model predictive control: discontinuous piecewise-affine approximation and FPGA implementation
In this paper we introduce a digital architecture implementing the explicit solution of a switched model predictive control problem. Given a mixed-logic dynamical system, we derive an explicit controller in the form of a possibly discontinuous piecewise-affine function. This function is then approximated by resorting to piecewise-affine simplicial functions, which can be implemented on a circuit by extending the representation capabilities of a previously proposed architecture to evaluate the control action. The architecture has been implemented on FPGA and validated on a benchmark example related to an air conditioning system.
Tomaso Poggi
Sergio Trimboli
Alberto Bemporad
alberto.bemporad@imtlucca.it
Marco Storace
2011-07-29T10:53:06Z
2012-07-09T09:32:53Z
http://eprints.imtlucca.it/id/eprint/741
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/741
2011-07-29T10:53:06Z
Hybrid dynamic optimization for cruise speed control
The cruise control problem of transferring the speed of a vehicle between two values in a fixed interval of time using a predefined sequence of gears is solved in this paper. This is a hybrid dynamic optimization problem since the control variables include both a continuous variable (fuel flow) and a discrete variable (the gear to apply at each instant). The solution is given in the form of a hybrid optimal control algorithm that computes the optimal switching times between gears using Dynamic Programming and the optimal fuel profile between successive gear boundaries using a gradient algorithm to approximate the optimum conditions. In order to reduce the search of the optimal switching times to a search in a finite dimension graph, a procedure based on a changing grid is used. The algorithm is illustrated by a simulation using a diesel one-dimensional car model.
Tiago Jorge
Joao M. Lemos
Miguel Barão
Alberto Bemporad
alberto.bemporad@imtlucca.it
2011-07-29T10:52:58Z
2013-09-30T12:29:47Z
http://eprints.imtlucca.it/id/eprint/740
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/740
2011-07-29T10:52:58Z
Stability and invariance analysis of approximate explicit MPC based on PWA Lyapunov functions
For piecewise affine (PWA) systems whose dynamics are only defined in a bounded and possibly non-invariant set X, this paper proposes a numerical approach to analyze the stability of the origin and to find a region of attraction. The approach relies on introducing fake dynamics outside X and on synthesizing a piecewise affine and possibly discontinuous Lyapunov function on a larger bounded set containing X by solving a linear program. The existence of a solution proves that the origin is an asymptotically stable equilibrium of the original PWA system and determines a region of attraction contained in X. The procedure is particularly useful in practical applications for analyzing a posteriori the stability properties of approximate explicit model predictive control laws defined over a bounded set X of states, and to determine whether, for a given set of initial states, the closed-loop system evolves within the domain X where the control law is defined.
Matteo Rubagotti
Sergio Trimboli
Daniele Bernardini
daniele.bernardini@imtlucca.it
Alberto Bemporad
alberto.bemporad@imtlucca.it
2011-07-28T09:52:43Z
2011-08-04T07:29:05Z
http://eprints.imtlucca.it/id/eprint/728
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/728
2011-07-28T09:52:43Z
Model-predictive control of discrete hybrid stochastic automata
This paper focuses on optimal and receding horizon control of a class of hybrid dynamical systems, called Discrete Hybrid Stochastic Automata (DHSA), whose discrete-state transitions depend on both deterministic and stochastic events. A finite-time optimal control approach “optimistically” determines the trajectory that provides the best tradeoff between tracking performance and the probability of the trajectory to actually execute, under possible chance constraints. The approach is also robustified, less optimistically, to ensure that the system satisfies a set of constraints for all possible realizations of the stochastic events, or alternatively for those having enough probability to realize. Sufficient conditions for asymptotic convergence in probability are given for the receding-horizon implementation of the optimal control solution. The effectiveness of the suggested stochastic hybrid control techniques is shown on a case study in supply chain management.
Alberto Bemporad
alberto.bemporad@imtlucca.it
Stefano Di Cairano
2011-07-28T09:52:00Z
2012-07-06T12:30:20Z
http://eprints.imtlucca.it/id/eprint/730
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/730
2011-07-28T09:52:00Z
Ultra-fast stabilizing model predictive control via canonical piecewise affine approximations
This paper investigates the use of canonical piecewise affine (PWA) functions for approximation and fast implementation of linear MPC controllers. The control law is approximated in an optimal way over a regular simplicial partition of a given set of states of interest. The stability properties of the resulting closed-loop system are analyzed by constructing a suitable PWA Lyapunov function. The main advantage of the proposed approach to the implementation of MPC controllers is that the resulting stabilizing approximate MPC controller can be implemented on chip with sampling times in the order of tens of nanoseconds.
Alberto Bemporad
alberto.bemporad@imtlucca.it
Alberto Oliveri
Tomaso Poggi
Marco Storace
2011-07-27T12:51:25Z
2011-08-05T11:12:37Z
http://eprints.imtlucca.it/id/eprint/725
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/725
2011-07-27T12:51:25Z
Decentralized model predictive control of dynamically coupled linear systems
This paper proposes a decentralized model predictive control (DMPC) scheme for large-scale dynamical processes subject to input constraints. The global model of the process is approximated as the decomposition of several (possibly overlapping) smaller models used for local predictions. The degree of decoupling among submodels represents a tuning knob of the approach: the less coupled the submodels, the lighter the computational burden and the load for transmission of shared information, but the lower the degree of cooperativeness of the decentralized controllers and the overall performance of the control system. Sufficient criteria for analyzing asymptotic closed-loop stability are provided for input constrained open-loop asymptotically stable systems and for unconstrained open-loop unstable systems, under possible intermittent lack of communication of measurement data between controllers. The DMPC approach is also extended to asymptotic tracking of output set-points and rejection of constant measured disturbances. The effectiveness of the approach is shown on a relatively large-scale simulation example on decentralized temperature control based on wireless sensor feedback.
Alessandro Alessio
Davide Barcelli
Alberto Bemporad
alberto.bemporad@imtlucca.it
2011-07-04T09:21:27Z
2013-11-21T11:38:01Z
http://eprints.imtlucca.it/id/eprint/267
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/267
2011-07-04T09:21:27Z
Methods for detrending success metrics to account for inflationary and deflationary factors
Time-dependent economic, technological, and social factors can artificially inflate or deflate quantitative measures for career success. Here we develop and test a statistical method for normalizing career success metrics across time-dependent factors. In particular, this method addresses the long-standing question: how do we compare the career achievements of professional athletes from different historical eras? Developing an objective approach will be of particular importance over the next decade as Major League Baseball (MLB) players from the steroids era become eligible for Hall of Fame induction. Some experts are calling for asterisks (*) to be placed next to the career statistics of athletes found guilty of using performance enhancing drugs (PED). Here we address this issue, as well as the general problem of comparing statistics from distinct eras, by detrending the seasonal statistics of professional baseball players. We detrend player statistics by normalizing achievements to seasonal averages, which accounts for changes in relative player ability resulting from a range of factors. Our methods are general, and can be extended to various arenas of competition where time-dependent factors play a key role. For five statistical categories, we compare the probability density function (pdf) of detrended career statistics to the pdf of raw career statistics calculated for all player careers in the 90-year period 1920–2009. We find that the functional form of these pdfs is stationary under detrending. This stationarity implies that the statistical regularity observed in the right-skewed distributions for longevity and success in professional sports arises from both the wide range of intrinsic talent among athletes and the underlying nature of competition. We fit the pdfs for career success by the Gamma distribution in order to calculate objective benchmarks based on extreme statistics which can be used for the identification of extraordinary careers.
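The detrending step described in this abstract — normalizing each seasonal achievement by that season's average — can be sketched in a few lines. The data values and player labels below are invented for illustration and are not the paper's dataset.

```python
# A minimal sketch of detrending by seasonal averages: each raw seasonal
# statistic is divided by the league average for that season, putting
# achievements from different eras on a common scale. Data are invented.

def detrend(stats_by_season):
    """stats_by_season: {season: {player: raw_stat}}
    Returns the same structure with each stat divided by its season's mean."""
    out = {}
    for season, stats in stats_by_season.items():
        season_avg = sum(stats.values()) / len(stats)
        out[season] = {player: s / season_avg for player, s in stats.items()}
    return out

raw = {
    1927: {"A": 60, "B": 20},   # low-scoring era (hypothetical totals)
    2001: {"C": 73, "D": 37},   # inflated era (hypothetical totals)
}
norm = detrend(raw)
# After detrending, "A" and "C" are both measured in units of their
# own era's average, so their careers can be compared directly.
```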
Alexander M. Petersen
alexander.petersen@imtlucca.it
Orion Penner
orion.penner@imtlucca.it
H. Eugene Stanley
2011-07-04T09:20:58Z
2013-11-21T11:30:22Z
http://eprints.imtlucca.it/id/eprint/421
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/421
2011-07-04T09:20:58Z
Quantitative and empirical demonstration of the Matthew effect in a study of career longevity
The Matthew effect refers to the adage written some two-thousand years ago in the Gospel of St. Matthew: "For to all those who have, more will be given." Even two millennia later, this idiom is used by sociologists to qualitatively describe the dynamics of individual progress and the interplay between status and reward. Quantitative studies of professional careers are traditionally limited by the difficulty in measuring progress and the lack of data on individual careers. However, in some professions, there are well-defined metrics that quantify career longevity, success, and prowess, which together contribute to the overall success rating for an individual employee. Here we demonstrate testable evidence of the age-old Matthew "rich get richer" effect, wherein the longevity and past success of an individual lead to a cumulative advantage in further developing his or her career. We develop an exactly solvable stochastic career progress model that quantitatively incorporates the Matthew effect and validate our model predictions for several competitive professions. We test our model on the careers of 400,000 scientists using data from six high-impact journals and further confirm our findings by testing the model on the careers of more than 20,000 athletes in four sports leagues. Our model highlights the importance of early career development, showing that many careers are stunted by the relative disadvantage associated with inexperience.
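The "rich get richer" mechanism in this abstract lends itself to a toy Monte Carlo illustration: each round a career either advances, with a probability that grows with its current length, or terminates. The parameter values below are illustrative assumptions, not the paper's calibrated model.

```python
# Toy simulation of a Matthew-effect career model (illustrative only):
# the probability of advancing rises with tenure, so early survival
# compounds into a cumulative advantage and a right-skewed longevity pdf.
import random

def simulate_career(base=0.5, boost=0.01, cap=0.99, rng=random):
    """Return career longevity; advance probability grows with tenure."""
    longevity = 0
    while rng.random() < min(cap, base + boost * longevity):
        longevity += 1
    return longevity

random.seed(0)
lengths = [simulate_career() for _ in range(10_000)]
# Right-skewed outcome: most careers terminate early, while a few
# accumulate enough of a head start to last far longer.
```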
Alexander M. Petersen
alexander.petersen@imtlucca.it
Woo-Sung Jung
Jae-Suk Yang
H. Eugene Stanley
2011-07-04T09:18:53Z
2011-09-27T13:26:08Z
http://eprints.imtlucca.it/id/eprint/694
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/694
2011-07-04T09:18:53Z
Applications of Statistical Physics to the Social and Economic Sciences
This thesis applies statistical physics concepts and methods to quantitatively analyze
socioeconomic systems. For each system we combine theoretical models and
empirical data analysis in order to better understand the real-world system in relation
to the complex interactions between the underlying human agents. This thesis is
separated into three parts: (i) response dynamics in financial markets, (ii) dynamics
of career trajectories, and (iii) a stochastic opinion model with quenched disorder.
In Part I we quantify the response of U.S. markets to financial shocks, which
perturb markets and trigger “herding behavior” among traders. We use concepts
from earthquake physics to quantify the decay of volatility shocks after the “main
shock.” We also find, surprisingly, that we can make quantitative statements even
before the main shock. In order to analyze market behavior before as well as after
“anticipated news” we use Federal Reserve interest-rate announcements, which are
regular events that are also scheduled in advance.
In Part II we analyze the statistical physics of career longevity. We construct
a stochastic model for career progress which has two main ingredients: (a) random
forward progress in the career and (b) random termination of the career. We incorporate
the rich-get-richer (Matthew) effect into ingredient (a), meaning that forward progress
becomes easier the farther along one is in the career. We verify the model predictions
by analyzing data on 400,000 scientific careers and 20,000 professional
sports careers. Our model highlights the importance of early career development,
showing that many careers are stunted by the relative disadvantage associated with
inexperience.
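Ingredients (a) and (b) can be illustrated with a minimal simulation. The progress-rate form and all parameter values below (e.g. `progress_base`, `saturation`, `hazard`) are illustrative assumptions, not the exactly solvable model specified in the thesis:

```python
import random

def simulate_career(progress_base=0.5, saturation=10.0, hazard=0.03,
                    max_steps=10_000, rng=random):
    """Simulate one career: each period the agent either terminates with a
    constant hazard (ingredient b) or attempts forward progress, which
    succeeds with a probability that grows with accumulated position
    (a rich-get-richer rate, ingredient a)."""
    position = 0
    for step in range(1, max_steps + 1):
        if rng.random() < hazard:          # random termination
            return step, position
        # assumed Matthew-effect rate: rises from progress_base toward 1
        p = progress_base + (1 - progress_base) * position / (position + saturation)
        if rng.random() < p:               # random forward progress
            position += 1
    return max_steps, position

rng = random.Random(42)
careers = [simulate_career(rng=rng) for _ in range(5_000)]
longevities = sorted(length for length, _ in careers)
# Many careers end early, stunted by inexperience, while a minority of long
# survivors accumulate most of the total progress.
print("median longevity:", longevities[len(longevities) // 2])
print("longest career:  ", longevities[-1])
```

Running the sketch with different `hazard` values shows how sensitive longevity is to conditions in the earliest career stages.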
In Part III we analyze a stochastic two-state spin model representing a system of
voters embedded on a network. We investigate the role played in consensus formation by “zealots”, agents whose opinion is fixed over time. Our main result is the
unexpected finding that it is the number and not the density of zealots which determines
the steady-state opinion polarization. We compare our findings with results
for United States Presidential elections.
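The Part III model can be sketched as a small agent-based simulation. The complete-graph topology, parameter values, and function name below are illustrative assumptions, not the thesis's exact setup:

```python
import random

def voter_model_with_zealots(n_free=200, zealots_up=5, zealots_down=5,
                             updates=100_000, seed=0):
    """Voter dynamics on a complete graph: at each update a randomly chosen
    free voter copies the opinion (+1/-1) of a random other agent; zealots
    never change opinion. Returns the magnetization of the free voters."""
    rng = random.Random(seed)
    free = [rng.choice((1, -1)) for _ in range(n_free)]
    fixed = [1] * zealots_up + [-1] * zealots_down
    n_total = n_free + len(fixed)
    for _ in range(updates):
        i = rng.randrange(n_free)              # only free voters update
        j = rng.randrange(n_total - 1)         # any agent other than i
        if j >= i:
            j += 1
        free[i] = free[j] if j < n_free else fixed[j - n_free]
    return sum(free) / n_free                  # opinion polarization in [-1, 1]

m = voter_model_with_zealots()
print(f"magnetization with balanced zealots: {m:+.3f}")
```

Sweeping `n_free` while holding the zealot counts fixed is one way to probe the number-versus-density finding numerically.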
Alexander M. Petersen
alexander.petersen@imtlucca.it
2011-06-30T14:27:56Z
2011-08-31T14:40:38Z
http://eprints.imtlucca.it/id/eprint/632
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/632
2011-06-30T14:27:56Z
The productivity crisis in pharmaceutical R&D
Advances in the understanding of the molecular basis of diseases have expanded the number of plausible therapeutic targets for the development of innovative agents in recent decades. However, although investment in pharmaceutical research and development (R&D) has increased substantially in this time, the lack of a corresponding increase in the output in terms of new drugs being approved indicates that therapeutic innovation has become more challenging. Here, using a large database that contains information on R&D projects for more than 28,000 compounds investigated since 1990, we examine the decline of R&D productivity in pharmaceuticals in the past two decades and its determinants. We show that this decline is associated with an increasing concentration of R&D investments in areas in which the risk of failure is high, which correspond to unmet therapeutic needs and unexploited biological mechanisms. We also investigate the potential variations in productivity with regard to the regional location of companies and find that although companies based in the United States and Europe differ in the composition of their R&D portfolios, there is no evidence of any productivity gap.
Fabio Pammolli
f.pammolli@imtlucca.it
Laura Magazzini
Massimo Riccaboni
massimo.riccaboni@imtlucca.it
2011-06-30T14:23:40Z
2011-08-31T14:40:38Z
http://eprints.imtlucca.it/id/eprint/669
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/669
2011-06-30T14:23:40Z
La distribuzione dei farmaci tra progetti di riforma e incertezze
The paper provides a detailed description of the legal and regulatory framework that governs Italian pharmacies and the retail distribution of pharmaceuticals, as well as the government’s recent attempts to reform and modernize the system. The picture to emerge is a clear example of an over-regulated market closed to competition; in fact, while the declared intention is to protect public health and citizens, the regulatory machinery appears designed precisely to favor the interests of incumbent pharmacies and pharmacists. In conclusion, the paper proposes some policy guidelines, also based on the considerations, suggestions and provisions that both the Italian Antitrust Authority and the European Commission have been developing over the past few years.
Fabio Pammolli
f.pammolli@imtlucca.it
Nicola C. Salerno
2011-06-30T14:23:32Z
2011-08-31T14:40:38Z
http://eprints.imtlucca.it/id/eprint/670
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/670
2011-06-30T14:23:32Z
Sistemi sanitari regionali alla sfida del federalismo: una proposta per il finanziamento federalista della sanità
The decree on the standardization of health-care funding needs, delegated by Law no. 42/2009, has recently been published in the Gazzetta Ufficiale (Legislative Decree no. 68 of 6 May 2011, in GU no. 109 of 12 May 2011). It is an essential building block of the federalism now taking shape, with health spending accounting for 75-80% of regional budgets.
This paper starts from the decree's salient features and proposes a concrete solution for its operational implementation. Its key points are three: (1) top-down financing; (2) standards set on the most virtuous Regions (which the State and the Regions are called upon to identify); (3) allocation of the National Health Fund (Fsn) according to simple rules.
Fabio Pammolli
f.pammolli@imtlucca.it
Nicola C. Salerno
2011-06-16T09:04:57Z
2011-07-11T14:36:24Z
http://eprints.imtlucca.it/id/eprint/420
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/420
2011-06-16T09:04:57Z
Linear-Time and May-Testing in a Probabilistic Reactive Setting
We consider reactive probabilistic labelled transition systems (rplts), a model where internal choices are refined by probabilistic choices. In this setting, we study the relationship between linear-time and may-testing semantics, where an angelic view of nondeterminism is taken. Building on the model of d-trees of Cleaveland et al., we first introduce a clean model of probabilistic may-testing, based on simple concepts from measure theory. In particular, we define a probability space where statements of the form “p may pass test o” naturally correspond to measurable events. We then obtain an observer-independent characterization of the may-testing preorder, based on comparing the probability of sets of traces, rather than of individual traces. This entails that may-testing is strictly finer than linear-time semantics. Next, we characterize the may-testing preorder in terms of the probability of satisfying safety properties, expressed as languages of infinite trees rather than traces. We then identify a significant subclass of rplts where linear and may-testing semantics do coincide: these are the separated rplts, where actions are partitioned into probabilistic and nondeterministic ones, and at each state only one type is available.
Lucia Acciai
Michele Boreale
Rocco De Nicola
r.denicola@imtlucca.it
2011-06-15T14:43:17Z
2011-07-11T14:35:29Z
http://eprints.imtlucca.it/id/eprint/402
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/402
2011-06-15T14:43:17Z
An accessible verification environment for UML models of services
Service-Oriented Architectures (SOAs) provide methods and technologies for modelling, programming and deploying software applications that can run over globally available network infrastructures. Current software engineering technologies for SOAs, however, remain at the descriptive level and lack rigorous foundations enabling formal analysis of service-oriented models and software. To support automated verification of service properties by relying on mathematically founded techniques, we have developed a software tool that we called Venus (Verification ENvironment for UML models of Services). Our tool takes as input service models specified by UML 2.0 activity diagrams according to the UML4SOA profile, while its theoretical bases are the process calculus COWS and the temporal logic SocL. A key feature of Venus is that it makes its verification functionality accessible also to users who are not familiar with formal methods. Indeed, the tool works by first automatically translating UML4SOA models and natural language statements of service properties into, respectively, COWS terms and SocL formulae, and then by automatically model checking the formulae over the COWS terms. In this paper we present the tool, its architecture and its enabling technologies by also illustrating the verification of a classical ‘travel agency’ scenario.
Federico Banti
Rosario Pugliese
Francesco Tiezzi
francesco.tiezzi@imtlucca.it
2011-06-15T14:39:12Z
2011-07-11T14:35:29Z
http://eprints.imtlucca.it/id/eprint/401
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/401
2011-06-15T14:39:12Z
A WSDL-based type system for asynchronous WS-BPEL processes
We tackle the problem of providing rigorous formal foundations to current software engineering technologies for web services, and especially to WSDL and WS-BPEL, two of the most used XML-based standard languages for web services. We focus on a simplified fragment of WS-BPEL sufficiently expressive to model asynchronous interactions among web services in a network context. We present this language as a process calculus-like formalism, that we call ws-calculus, for which we define an operational semantics and a type system. The semantics provides a precise operational model of programs, while the type system forces a clean programming discipline for integrating collaborating services. We prove that the operational semantics of ws-calculus and the type system are ‘sound’ and apply our approach to some illustrative examples. We expect that our formal development can be used to make the relationship between WS-BPEL programs and the associated WSDL documents precise and to support verification of their conformance.
Alessandro Lapadula
Rosario Pugliese
Francesco Tiezzi
francesco.tiezzi@imtlucca.it
2011-05-25T14:56:05Z
2011-07-11T14:34:34Z
http://eprints.imtlucca.it/id/eprint/399
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/399
2011-05-25T14:56:05Z
Internet e Web 2.0
The Internet and the web are by now part of everyday reality. Terms such as social network, blog, post, and web 2.0 are used every day by millions of people.
But what are the actual mechanisms behind these innovative technologies? What is the semantic web? And wikis? How does a search engine work? The volume answers these and many other questions effectively and exhaustively.
Thanks to the clarity of its treatment, the scientific rigor of its exposition, and its rich didactic apparatus, Internet e web 2.0 is an essential text for anyone wishing to build and strengthen their computing skills.
Alberto Lluch-Lafuente
alberto.lluch@imtlucca.it
Marco Righi
2011-03-30T10:15:25Z
2012-07-09T11:03:37Z
http://eprints.imtlucca.it/id/eprint/232
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/232
2011-03-30T10:15:25Z
Il caso IAMA al Consiglio di Stato: indietro tutta!
Case comment on Consiglio di Stato, Section VI, judgment of 29 December 2010, no. 9565, Autorità garante della concorrenza e del mercato v. San Paolo Imi, et al.
Andrea Giannaccari
a.giannaccari@imtlucca.it
2011-03-18T14:22:12Z
2014-01-24T14:20:37Z
http://eprints.imtlucca.it/id/eprint/205
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/205
2011-03-18T14:22:12Z
Margaret Thatcher and the New Political Culture in the United Kingdom
Antonio Masala
a.masala@imtlucca.it
2011-03-10T11:24:54Z
2012-07-11T12:22:38Z
http://eprints.imtlucca.it/id/eprint/201
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/201
2011-03-10T11:24:54Z
Liberalismo e multiculturalismo: convivenza o conflitto?
Antonio Masala
a.masala@imtlucca.it
2011-03-10T09:59:01Z
2012-07-11T12:26:41Z
http://eprints.imtlucca.it/id/eprint/195
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/195
2011-03-10T09:59:01Z
Diritto naturale o evoluzionismo? Problemi aperti nella Scuola Austriaca
Antonio Masala
a.masala@imtlucca.it
Carlo Cordasco
Raimondo Cubeddu
2011-03-08T10:51:00Z
2014-01-24T14:16:45Z
http://eprints.imtlucca.it/id/eprint/180
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/180
2011-03-08T10:51:00Z
Constraints for Service Contracts
This paper focuses on client-service interactions distinguishing between three phases: negotiate, commit and execute. The participants negotiate their behaviours, and if an agreement is reached they commit and start an execution which is guaranteed to respect the interaction scheme agreed upon. These ideas are materialised through a calculus of contracts enriched with semiring-based constraints, which allow clients to choose services and to interact with them in a safe way. A concrete representation of these constraints with logic programs and logic program combinations is straightforward, thus reducing constraint solution (and consequently the establishment of a contract) to the execution of a logic program.
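As a rough illustration of how semiring-based constraints support safe client-service interaction, the sketch below uses the fuzzy c-semiring ([0,1], max, min). All names, preference functions, and latency values are invented for the example and are far simpler than the constraint calculus of the paper:

```python
# Fuzzy c-semiring: preference levels in [0,1], combined with min (both
# constraints must hold) and compared with max (prefer the better level).

def combine(*levels):          # semiring "times": conjunction of constraints
    return min(levels, default=1.0)

def best(*levels):             # semiring "plus": choose the preferred level
    return max(levels, default=0.0)

# A client and two hypothetical services each expose a soft constraint on
# response time, mapping an offered latency (ms) to a preference level.
client = lambda ms: 1.0 if ms <= 100 else (0.5 if ms <= 200 else 0.0)
service_a = lambda ms: 0.9 if ms >= 150 else 0.2   # cheap but slow
service_b = lambda ms: 0.8 if ms >= 50 else 0.0    # faster

def contract_level(service, latencies):
    """Best joint satisfaction level over the candidate latencies."""
    return best(*(combine(client(ms), service(ms)) for ms in latencies))

candidates = [60, 120, 180, 250]
offers = {"A": contract_level(service_a, candidates),
          "B": contract_level(service_b, candidates)}
chosen = max(offers, key=offers.get)
print(offers, "->", chosen)    # the client commits to the best offer
```

The semiring operations play the role of the negotiate phase: a contract is established only at the level where both parties' constraints are jointly satisfied.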
Maria Grazia Buscemi
m.buscemi@imtlucca.it
Mario Coppo
Mariangiola Dezani-Ciancaglini
Ugo Montanari
2011-03-07T10:33:12Z
2014-01-24T14:18:26Z
http://eprints.imtlucca.it/id/eprint/137
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/137
2011-03-07T10:33:12Z
Constraints for Service Contracts
Maria Grazia Buscemi
m.buscemi@imtlucca.it
Mario Coppo
Mariangiola Dezani-Ciancaglini
Ugo Montanari
2011-03-03T11:38:59Z
2011-07-11T14:33:42Z
http://eprints.imtlucca.it/id/eprint/115
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/115
2011-03-03T11:38:59Z
Contracts for Abstract Processes in Service Composition
Contracts are a well-established approach for describing and analyzing behavioral aspects of web service
compositions. The theory of contracts comes equipped with a notion of compatibility between
clients and servers that ensures that every possible interaction between compatible clients and servers
will complete successfully. It is generally agreed that real applications often require the ability to expose
just partial descriptions of their behaviors, which are usually known as abstract processes. We
propose a formal characterization of abstraction as an extension of the usual symbolic bisimulation
and we recover the notion of abstraction in the context of contracts.
Maria Grazia Buscemi
m.buscemi@imtlucca.it
Hernán C. Melgratti
2011-03-01T11:23:40Z
2011-07-11T14:33:42Z
http://eprints.imtlucca.it/id/eprint/114
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/114
2011-03-01T11:23:40Z
QoS Negotiation in Service Composition
Service composition in Service Oriented Computing concerns not only integration of heterogeneous distributed applications but also dynamic selection of services. Quality of Service (QoS) plays a key role in service composition, as services providing the same functionalities can be differentiated according to their QoS guarantees. At subscription time, a service requester and a provider may sign a contract recording the QoS of the supplied service. The cc-pi calculus has been introduced as a constraint-based model of QoS contracts. In this work we propose a variant of the cc-pi calculus in which the alternatives in a choice, rather than being selected non-deterministically, have a dynamic priority. Basically, a guard c_j : π_j in a choice is enabled if the constraint c_j is entailed by the store of constraints and the prefix π_j can be consumed. Moreover, for the j-th branch to be selected, not only must the guard c_j : π_j be enabled, but c_j must also be weaker than the constraints c_i of the other enabled alternatives. We prove that our choice operator is more general than a choice operator with static priority. Finally, we exploit some examples to show that our prioritised calculus allows arbitrarily complex QoS negotiations and that a static form of priority is strictly less expressive than ours.
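The selection rule for the prioritised choice can be rendered as a toy sketch, with constraints modelled as sets of atoms and entailment as set inclusion purely for illustration (the cc-pi calculus works over named constraint systems, not sets):

```python
# A branch is enabled when its guard constraint is entailed by the store,
# and among enabled branches we select one whose constraint is weakest,
# i.e. entailed by every other enabled guard. Here a constraint c is
# entailed by the store when c ⊆ store, and c is weaker than d when c ⊆ d.

def enabled(branches, store):
    return [(c, action) for c, action in branches if c <= store]

def select(branches, store):
    en = enabled(branches, store)
    # weakest enabled guard: a subset of every other enabled guard
    weakest = [(c, a) for c, a in en if all(c <= d for d, _ in en)]
    return weakest[0] if weakest else None

store = frozenset({"auth", "paid", "premium"})
branches = [
    (frozenset({"auth", "premium"}), "fast-lane"),   # stronger guard
    (frozenset({"auth"}), "standard"),               # weaker guard wins
    (frozenset({"admin"}), "debug"),                 # not entailed: disabled
]
print(select(branches, store))
```

With this store both the first and second guards are enabled, but the weaker `{"auth"}` guard takes priority, so the "standard" branch is selected.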
Maria Grazia Buscemi
m.buscemi@imtlucca.it
Ugo Montanari
2011-02-22T15:56:58Z
2012-07-06T13:33:49Z
http://eprints.imtlucca.it/id/eprint/87
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/87
2011-02-22T15:56:58Z
Modeling Electoral Coordination: Parties and Legislative Lists in Uruguay
During each electoral period, the strategic interaction between voters and political elites determines the number of viable candidates in a district. In this paper, we implement a hierarchical seemingly unrelated regression model to explain electoral coordination at the district level in Uruguay as a function of district magnitude, previous electoral outcomes and electoral regime. Elections in this country are particularly useful to test for institutional effects on the coordination process due to the large variations in district magnitude, to the simultaneity of presidential and legislative races held under different rules, and to the reforms implemented during the period under consideration. We find that district magnitude and electoral history heuristics have substantial effects on the number of competing and voted-for parties and lists. Our modeling approach uncovers important interaction effects between the demand and supply side of the political market that were often overlooked in previous research.
Ines Levin
Gabriel Katz
g.katz@imtlucca.it
2011-02-22T15:54:52Z
2014-01-24T14:19:21Z
http://eprints.imtlucca.it/id/eprint/80
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/80
2011-02-22T15:54:52Z
A Statistical Model of Abstention under Compulsory Voting
Invalid voting and electoral absenteeism are two important sources of abstention in compulsory voting systems. Previous studies in this area have not considered the correlation between both variables and ignored the compositional nature of the data, potentially leading to unfeasible results and discarding helpful information from an inferential standpoint. In order to overcome these problems, this paper develops a statistical model that accounts for the compositional and hierarchical structure of the data and addresses robustness concerns raised by the use of small samples that are typical in the literature. The model is applied to analyze invalid voting and electoral absenteeism in Brazilian legislative elections between 1945 and 2006 via MCMC simulations. The results show considerable differences in the determinants of both forms of non-voting; while invalid voting was strongly positively related both to political protest and to the existence of important informational barriers to voting, the influence of these variables on absenteeism is less evident. Comparisons based on posterior simulations indicate that the model developed in this paper fits the dataset better than several alternative modeling approaches and leads to different substantive conclusions regarding the effect of different predictors on both sources of abstention.
Gabriel Katz
g.katz@imtlucca.it
2011-02-22T14:56:25Z
2011-07-11T14:25:31Z
http://eprints.imtlucca.it/id/eprint/82
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/82
2011-02-22T14:56:25Z
Assessing the Impact of Alternative Voting Technologies on Multi-Party Elections: Design Features, Heuristic Processing and Voter Choice
This paper analyzes the influence of alternative voting technologies on electoral outcomes in multi-party systems. Using data from a field experiment conducted during the 2005 legislative election in Argentina, we examine the role of information effects associated with alternative voting devices on the support for the competing parties. We find that differences in the type of information displayed and how it was presented across devices favored some parties to the detriment of others. The impact of voting technologies was found to be larger than in two-party systems, and could lead to changes in election results. We conclude that authorities in countries moving to adopt new voting systems should carefully take the potential partisan advantages induced by different technologies into account when evaluating their implementation.
Gabriel Katz
g.katz@imtlucca.it
R. Michael Alvarez
Ernesto Calvo
Marcelo Escolar
Julia Pomares
2011-02-22T14:54:24Z
2011-07-11T14:25:31Z
http://eprints.imtlucca.it/id/eprint/81
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/81
2011-02-22T14:54:24Z
The Impact of New Technologies on Voter Confidence in Latin America: Evidence from E-voting Experiments in Argentina and Colombia
We analyze trust in electronic voting in Latin America using data from two field experiments conducted in Argentina and Colombia. We find that voters generally exhibit high levels of confidence in e-voting, although this depends on individual characteristics such as age and education as well as on the particular type of technology used. We contrast our findings with those from industrialized democracies and show that conclusions derived from American and European e-voting experiences cannot be directly extrapolated to the Latin American context. Overall, our results suggest that e-voting could provide an attractive alternative to traditional voting procedures in the region.
R. Michael Alvarez
Gabriel Katz
g.katz@imtlucca.it
Julia Pomares
2011-02-22T14:39:24Z
2011-09-27T13:25:17Z
http://eprints.imtlucca.it/id/eprint/88
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/88
2011-02-22T14:39:24Z
Measurement Error and Dynamic Nonlinear Models: (Over)Estimating the Effect of Habit
Estimates from non-linear models are known to be inconsistent when the dependent variable
is misclassified. Although methods have been developed to correct this inconsistency in static
non-linear models, no correction exists for dynamic non-linear models. This is a serious omission
from the literature. Since the lagged dependent variable is an explanatory variable in dynamic
models, any inconsistency that arises from misclassification of the dependent variable in a static
non-linear model will be magnified when that model is made dynamic. Here, we demonstrate
this fact using the habitual voting literature and develop a parametric model to correct for this
inconsistency. We find that, on average, estimates of habitual voting are approximately twice as
large when using survey respondents' self-reports versus official records of their turnout decisions.
When we apply our corrected model to respondents' self-reports, however, the estimates of
habitual voting are significantly closer to those provided by the official records.
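The mechanism can be illustrated with a small simulation; the data-generating process below (a two-period turnout panel with "liars" who always report voting) is a hypothetical stand-in, not the paper's parametric correction model:

```python
import random

def simulate_panel(n=20_000, p0=0.4, habit=0.2, liar_share=0.15, seed=1):
    """Two-period turnout panel. True turnout exhibits a habit effect;
    'liars' always self-report having voted, so over-reporting persists
    across periods and the misclassified lag induces spurious persistence."""
    rng = random.Random(seed)
    rows = []
    for _ in range(n):
        liar = rng.random() < liar_share
        y1 = rng.random() < 0.5                  # period-1 true turnout
        y2 = rng.random() < (p0 + habit * y1)    # true habit effect
        report = lambda y: y or liar             # over-reporting only
        rows.append((y1, y2, report(y1), report(y2)))
    return rows

def habit_estimate(pairs):
    """Difference in period-2 turnout rates by period-1 turnout status."""
    p = lambda cond: (sum(y2 for y1, y2 in pairs if y1 == cond)
                      / sum(1 for y1, _ in pairs if y1 == cond))
    return p(True) - p(False)

rows = simulate_panel()
true_est = habit_estimate([(y1, y2) for y1, y2, _, _ in rows])
obs_est = habit_estimate([(r1, r2) for _, _, r1, r2 in rows])
print(f"habit from true records:  {true_est:.3f}")
print(f"habit from self-reports:  {obs_est:.3f}")
```

Because misreporters appear as habitual voters in both periods, the self-report estimate exceeds the true-record estimate, in the same direction as the overestimation the paper documents.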
Gabriel Katz
g.katz@imtlucca.it
James Melton
james.melton@imtlucca.it
2011-02-16T15:18:50Z
2014-01-29T14:24:08Z
http://eprints.imtlucca.it/id/eprint/79
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/79
2011-02-16T15:18:50Z
The Design of Preferential Trade Agreements
We identified a total of 690 negotiated trade agreements between 1945 and 2009, of which we have coded 404 agreements for which treaty texts and appendices were available. We aim to have a database for about 550 agreements by 2012. We have coded agreements for a total of 10 broad sectors of cooperation, encompassing market access, services, investments, intellectual property rights, competition, public procurement, standards, trade remedies, non-trade issues, and dispute settlement. For each of these sectors, we have coded a significant number of items, meaning that we have about 100 data points for each agreement. The resulting DESTA database is, to the best of our knowledge, by far the most complete in terms of agreements and sectors covered. This dataset fills a crucial gap in the field by providing a fine-grained measurement of the design of PTAs. Among other things, we think that DESTA will be of relevance for the literatures on the signing of PTAs; the legalization of international relations; the rational design of international institutions; the diffusion of policies; the political and economic effects of trade agreements; power relations between states; and forum shopping in international politics. This working paper describes the DESTA data set and provides selected descriptive statistics. The overview puts emphasis on variation in design over time and across regions.
Leonardo Baccini
leonardo.baccini@imtlucca.it
Andreas Duer
Manfred Elsig
Karolina Milewicz
2011-02-16T12:06:17Z
2014-01-24T14:17:37Z
http://eprints.imtlucca.it/id/eprint/78
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/78
2011-02-16T12:06:17Z
Institutions, Information, and Trade Policy in Times of Crisis
The paper examines the role of international institutions in preventing the rise of protectionism in times of crisis. Economic crisis exacerbates uncertainty in the conduct of commercial relations and thus makes it more likely for countries to resort to "beggar-thy-neighbor" trade policies. The historical record of the Great Depression supports this argument, where global trade suffered a downward spiral as governments pursued protectionist trade policies as a response to domestic pressures. This paper argues that the current era of globalization is distinguishable from its earlier counterparts by the presence of an extensive network of international institutions, which serve as conveyors of information that help to mitigate the information problem that prevails in prisoner's dilemma settings. Specifically, international institutions such as the WTO, preferential trade agreements (PTAs) and other international economic organizations increase the flow of information among countries. In doing so, they alleviate coordination problems as well as facilitate the detection of violations in commitments to maintaining a liberal trade regime. We suggest that this mechanism may explain why the current crisis is not replicating the pattern of the Great Depression. Moreover, we explore the combined effect of membership in international organizations and political variables, the latter including democracy, veto players, partisanship of government, and government effectiveness. We test this argument using a newly-compiled dataset of trade policies during the current economic crisis and membership in international organizations. The paper finds strong support for the informational role of international institutions as a key factor preventing the rise of protectionism in times of crisis. Conversely, there is mixed evidence that the combined effect of international organizations and domestic political variables matters in explaining protectionism during this crisis.
Leonardo Baccini
leonardo.baccini@imtlucca.it
Soo Yeon Kim
2011-02-16T11:52:03Z
2014-01-24T14:17:11Z
http://eprints.imtlucca.it/id/eprint/77
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/77
2011-02-16T11:52:03Z
Democratization, New Leaders, and the Need for Economic Reform: Can Preferential Trading Agreements Help?
Can international institutions help leaders commit to economic reform? In this article, we examine
how leaders use preferential trading agreements with major powers (European Union and
the United States) to promote liberal economic policies. We argue that under democratization,
new leaders benefit the most from credible commitment. Using original data on treaty negotiations,
our empirical analysis shows that under democratization, leader change greatly increases
the probability that the government of a developing country begins treaty negotiations. We
also demonstrate that preferential trading agreements are accompanied by liberalization in different
sectors of the economy, and this effect is most pronounced if it follows a leader change.
These findings support the notion that international institutions enable credible commitment to
economic reform.
Leonardo Baccini
leonardo.baccini@imtlucca.it
Johannes Urpelainen
2011-02-15T15:33:37Z
2011-07-11T13:50:58Z
http://eprints.imtlucca.it/id/eprint/22
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/22
2011-02-15T15:33:37Z
Democratization and Trade Policy: An Empirical Analysis of Developing Countries
I show that the process of democratization in developing countries constitutes an important factor in the formation of preferential trade agreements. Specifically, democratizing developing countries are more likely to form a preferential trade agreement with richer countries, whereas there is little evidence that democratic transition affects the probability of a developing country joining a preferential trade agreement with other developing countries. This result follows naturally from median voter preferences and the Heckscher–Ohlin and Stolper–Samuelson theorems. Put simply, the median voter gains from trading with the richer states and loses from trading with the other poor states. Since preferential trade agreements allow countries to waive the most-favored nation principle, the need for both trade openness and protectionism against competitors might explain why preferential trade agreements constitute one of the main features of the current wave of globalization. I quantitatively test this hypothesis using a newly compiled dataset that covers 135 developing countries from 1990 to 2007. An important implication of this article is that it could be more challenging than expected to combine domestic political equality with international economic equality.
Leonardo Baccini
leonardo.baccini@imtlucca.it
2011-02-15T15:30:02Z
2012-07-06T13:39:20Z
http://eprints.imtlucca.it/id/eprint/39
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/39
2011-02-15T15:30:02Z
On the Evasion of Executive Term Limits
Executive term limits are pre-commitments through which the polity restricts its ability to retain a popular executive down the road. But in recent years, many presidents around the world have chosen to remain in office even after their initial maximum term in office has expired. They have
largely done so by amending the constitution, or sometimes by replacing it entirely. The practice of revising higher law for the sake of a particular incumbent raises intriguing issues that touch ultimately on the normative justification for term limits in the first place. This article reviews the normative debate over term limits and identifies the key claims of proponents and opponents. It
introduces the idea of characterizing term limits as a variety of default rule to be overcome if sufficient political support is apparent. It then turns to the historical evidence in order to assess the probability of attempts (both successful and unsuccessful) to evade term limits. It finds that, notwithstanding some high profile cases, term limits are observed with remarkable frequency.
The final section considers alternative institutional designs that might accomplish some of the goals of term limits, but finds that none is likely to provide a perfect substitute. Term limits have the advantage of clarity, making them relatively easy constitutional rules to enforce, and they should be considered an effective part of the arsenal of democratic institutions.
Tom Ginsburg
James Melton
james.melton@imtlucca.it
Zachary Elkins
2011-02-15T14:25:18Z
2011-07-11T13:50:58Z
http://eprints.imtlucca.it/id/eprint/23
This item is in the repository with the URL: http://eprints.imtlucca.it/id/eprint/23
2011-02-15T14:25:18Z
The New Regionalism and Policy Interdependency
Since 1990, the number of preferential trade agreements has increased rapidly. The argument in this article explains this phenomenon, known as the new regionalism, as a result of competition for market access; exporters facing trade diversion because of their exclusion from a preferential trade agreement concluded by foreign countries push their governments into signing an agreement with the country in which their exports are threatened. The argument is tested in a quantitative analysis of the proliferation of preferential trade agreements among 167 countries between 1990 and 2007. The finding that competition for market access is a major driving force of the new regionalism is a contribution to the literature on regionalism and to broader debates about global economic regulation.
Leonardo Baccini
leonardo.baccini@imtlucca.it
Andreas Duer
2011-02-10T16:01:08Z
2014-01-24T14:15:07Z
http://eprints.imtlucca.it/id/eprint/50
2011-02-10T16:01:08Z
Paying Positive to Go Negative: Advertisers' Competition and Media Reports
This paper analyzes a two-sided market for news in which advertisers may pay a media outlet to conceal negative information about the quality of their own product (paying positive to avoid negative) and/or to disclose negative information about the quality of their competitors' products (paying positive to go negative). We show that whether or not advertisers have negative consequences for the accuracy of news reports ultimately depends on the extent of correlation among advertisers' products. Specifically, the lower the correlation among the qualities of the advertisers' products, the (weakly) higher the accuracy of the media outlet's reports. Moreover, when advertisers' products are correlated, a higher degree of competition in the market for the advertisers' products may decrease the accuracy of the media outlet's reports.
Andrea Blasco
Paolo Pin
Francesco Sobbrio
f.sobbrio@imtlucca.it
2011-02-07T11:02:12Z
2011-09-30T08:00:57Z
http://eprints.imtlucca.it/id/eprint/11
2011-02-07T11:02:12Z
Political Transformations and Public Finances: Europe, 1650-1913
How did today's rich states first establish modern fiscal systems? To answer this question, this book examines the evolution of political regimes and public finances in Europe over the long term. The book argues that the emergence of efficient fiscal institutions was the result of two fundamental political transformations that resolved long-standing problems of fiscal fragmentation and absolutism: states gained fiscal strength through centralization and restricted rulers' power through parliamentary limits, enabling them to gather large tax revenues and channel funds toward public services with positive economic benefits. Using a novel combination of descriptive, case-study, and statistical methods, the book pursues this argument through a systematic investigation of a new panel database that spans eleven countries and four centuries. The book's findings are significant for our understanding of economic history and have important consequences for current policy debates.
Mark Dincecco
m.dincecco@imtlucca.it