IMT Institutional Repository (results ordered by Date Deposited)

Block Placement Strategies for Fault-Resilient Distributed Tuple Spaces: An Experimental Study (Practical Experience Report)
Deposited: 2017-06-07
http://eprints.imtlucca.it/id/eprint/3711

The tuple space abstraction provides an easy-to-use programming paradigm
for distributed applications. Intuitively, it behaves like a distributed shared
memory, where applications write and read entries (tuples). When deployed over
a wide area network, the tuple space needs to efficiently cope with faults of links
and nodes. Erasure coding techniques are increasingly popular to deal with such
catastrophic events, in particular due to their storage efficiency with respect to
replication. When a client writes a tuple into the system, it is first striped into
k blocks and encoded into n > k blocks, in a fault-redundant manner. Then, any
k out of the n blocks are sufficient to reconstruct and read the tuple. This paper
presents several strategies to place those blocks across the set of nodes of a
wide area network, which together form the tuple space. We present the performance
trade-offs of different placement strategies by means of simulations and a
Python implementation of a distributed tuple space. Our results reveal important
differences in the efficiency of the different strategies, for example in terms of
block fetching latency, and that having some knowledge of the underlying network
graph topology is highly beneficial.

Roberta Barbi, Vitaly Buravlev, Claudio Antares Mezzina (claudio.mezzina@imtlucca.it), Valerio Schiavoni

Tuple Spaces Implementations and Their Efficiency
Deposited: 2016-06-28
http://eprints.imtlucca.it/id/eprint/3506

Among the paradigms for parallel and distributed computing, the one popularized by Linda and based on tuple spaces is the least used, despite being intuitive and easy to understand and use. A tuple space is a repository of tuples, where processes can add, withdraw, or read tuples by means of atomic operations. Tuples may contain different values, and processes can inspect the content of a tuple via pattern matching. The lack of a reference implementation for this paradigm has prevented its widespread adoption. In this paper, we first carry out an extensive analysis of the state-of-the-art implementations and summarise their characteristics. We then select three implementations of the tuple space paradigm and compare their performance on three case studies that stress different aspects of computing, such as communication, data manipulation, and CPU usage. After reasoning on the strengths and weaknesses of the three implementations, we conclude with some recommendations for future work towards building an effective implementation of the tuple space paradigm.

Vitaly Buravlev, Rocco De Nicola (r.denicola@imtlucca.it), Claudio Antares Mezzina (claudio.mezzina@imtlucca.it)
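The k-of-n scheme described in the first abstract (stripe a tuple into k blocks, encode them into n > k blocks, reconstruct from any k) can be illustrated with a toy XOR parity code where k = 2 and n = 3. The helpers below are an illustrative sketch under those assumed parameters, not the paper's implementation, which would use a general erasure code over a wide area network.

```python
# Toy (n=3, k=2) erasure code: two data blocks plus one XOR parity block.
# Any 2 of the 3 blocks suffice to reconstruct the tuple, so the loss of
# any single node holding a block is tolerated. Illustrative only.

def encode(data: bytes):
    """Stripe data into k=2 equal-size blocks and append an XOR parity block."""
    half = (len(data) + 1) // 2
    b1 = data[:half].ljust(half, b"\0")   # pad so both blocks have equal length
    b2 = data[half:].ljust(half, b"\0")
    parity = bytes(x ^ y for x, y in zip(b1, b2))
    return [b1, b2, parity], len(data)    # keep original length for trimming

def decode(blocks, length):
    """Reconstruct the tuple from any 2 of the 3 blocks (missing one is None)."""
    b1, b2, parity = blocks
    if b1 is None:                        # recover a lost data block via parity
        b1 = bytes(x ^ y for x, y in zip(b2, parity))
    elif b2 is None:
        b2 = bytes(x ^ y for x, y in zip(b1, parity))
    return (b1 + b2)[:length]             # drop padding

tuple_bytes = b"('sensor', 42)"
blocks, length = encode(tuple_bytes)
blocks[0] = None                          # simulate losing one block
assert decode(blocks, length) == tuple_bytes
```

A real deployment would use a general (n, k) code such as Reed-Solomon, where n and k are tunable; the placement strategies studied in the paper then decide which nodes of the network graph receive each of the n blocks.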