Books in the Synthesis Lectures on Distributed Computing Theory series

  • by Leonid Barenboim
    559,-

    The focus of this monograph is on symmetry breaking problems in the message-passing model of distributed computing. In this model a communication network is represented by an n-vertex graph G = (V,E), whose vertices host autonomous processors. The processors communicate over the edges of G in discrete rounds. The goal is to devise algorithms that use as few rounds as possible. A typical symmetry-breaking problem is the problem of graph coloring. Denote by Δ the maximum degree of G. While coloring G with Δ + 1 colors is trivial in the centralized setting, the problem becomes much more challenging in the distributed one. One can also compromise on the number of colors, if this allows for more efficient algorithms. Other typical symmetry-breaking problems are the problems of computing a maximal independent set (MIS) and a maximal matching (MM). The study of these problems dates back to the very early days of distributed computing. The founding fathers of distributed computing laid firm foundations for the area of distributed symmetry breaking already in the eighties. In particular, they showed that all these problems can be solved in randomized logarithmic time. Also, Linial showed that an O(Δ²)-coloring can be computed very efficiently deterministically. However, fundamental questions were left open for decades. In particular, it is not known whether MIS or (Δ + 1)-coloring can be solved in deterministic polylogarithmic time. Moreover, until recently it was not known whether in deterministic polylogarithmic time one can color a graph with significantly fewer than Δ² colors. Additionally, it was open (and remains open to some extent) whether one can have sublogarithmic randomized algorithms for the symmetry breaking problems. Recently, significant progress was achieved in the study of these questions. More efficient deterministic and randomized (Δ + 1)-coloring algorithms were achieved. Deterministic Δ^(1+o(1))-coloring algorithms with polylogarithmic running time were devised. Improved (and often sublogarithmic-time) randomized algorithms were devised. Drastically improved lower bounds were given. Wide families of graphs in which these problems are solvable much faster than on general graphs were identified. The objective of our monograph is to cover most of these developments, and as a result to provide a treatise on the theoretical foundations of distributed symmetry breaking in the message-passing model. We hope that our monograph will stimulate further progress in this exciting area.
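
    As a toy illustration (not taken from the monograph itself), here is a minimal simulation of the classic randomized trial-and-error scheme behind many (Δ + 1)-coloring algorithms: in each synchronous round, every uncolored vertex proposes a random color from its remaining palette and keeps it if no neighbor made the same proposal. All names are illustrative.

```python
import random

def randomized_coloring(adj):
    """Synchronous trial-and-error (Delta+1)-coloring: each uncolored vertex
    proposes a random color from its remaining palette and keeps it if no
    neighbor proposed the same color. adj maps a vertex to its neighbor set."""
    delta = max(len(nbrs) for nbrs in adj.values())
    palette = {v: set(range(delta + 1)) for v in adj}   # Delta + 1 colors suffice
    color = {}
    while len(color) < len(adj):
        # One communication round: every uncolored vertex announces a proposal.
        proposal = {v: random.choice(sorted(palette[v]))
                    for v in adj if v not in color}
        for v, c in proposal.items():
            if all(proposal.get(u) != c for u in adj[v]):
                color[v] = c                      # no conflict: the color sticks
                for u in adj[v]:
                    palette[u].discard(c)         # neighbors shrink their palettes
    return color

# Demo on a 4-cycle (Delta = 2, so at most 3 colors are ever needed).
g = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
col = randomized_coloring(g)
assert all(col[v] != col[u] for v in g for u in g[v])
```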

  • by Neeraj Mittal, Sahil Dhoked & Wojciech Golab
    519,-

    This book discusses recent research on designing efficient fault-tolerant synchronization mechanisms for concurrent processes using the relatively new persistent memory technology, which combines the low latency benefits of DRAM with the persistence of magnetic disks. The authors include all of the major contributions published to date, and also convey some perspective regarding how the problem itself is evolving. The results are described at a high level to enable readers to gain a quick and thorough understanding of the recoverable mutual exclusion (RME) problem and its nuances, as well as the various solutions that have been designed to solve the problem under a variety of important conditions, and how they compare to each other.

  • by Ashish Choudhury
    729,-

    This book focuses on multi-party computation (MPC) protocols in the passive corruption model (also known as the semi-honest or honest-but-curious model). The authors present seminal possibility and feasibility results in this model and include formal security proofs. Even though the passive corruption model may seem very weak, achieving security against such a benign form of adversary turns out to be non-trivial and demands sophisticated and highly advanced techniques. MPC is a fundamental concept, both in cryptography and in distributed computing. At a very high level, an MPC protocol allows a set of mutually distrusting parties with private inputs to jointly and securely perform any computation on their inputs. Examples of such computations include, but are not limited to, privacy-preserving data mining, secure e-auctions, private set intersection, and privacy-preserving machine learning. MPC protocols emulate the role of an imaginary, centralized trusted third party (TTP) that collects the inputs of the parties, performs the desired computation, and publishes the result. Due to its powerful abstraction, the MPC problem has been widely studied over the last four decades.
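
    A minimal sketch of one of the simplest MPC instances in the semi-honest model, secure summation via additive secret sharing (not a protocol from this book; the modulus and all names are illustrative):

```python
import random

P = 2**61 - 1          # public prime modulus; all arithmetic is mod P

def share(secret, n):
    """Split secret into n additive shares summing to it mod P; any n-1
    of them together are uniformly random and reveal nothing."""
    parts = [random.randrange(P) for _ in range(n - 1)]
    parts.append((secret - sum(parts)) % P)
    return parts

# Each of three parties shares its private input, sending share j to
# party j; each party then publishes only the sum of the shares it holds.
inputs = [12, 7, 30]
n = len(inputs)
dealt = [share(x, n) for x in inputs]
published = [sum(dealt[i][j] for i in range(n)) % P for j in range(n)]
# Everyone can reconstruct the sum of the inputs, and nothing but the sum.
assert sum(published) % P == sum(inputs) % P
```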

  • by Vincent Gramoli
    795,-

    Providing a shared memory abstraction in distributed systems is a powerful tool that can simplify the design and implementation of software systems for networked platforms. Emulating shared atomic memory in distributed systems is an active area of research and development.

  • by Michel Raynal
    769,-

    Theory is what remains true when technology is changing. It is therefore important to know and master the basic concepts and theoretical tools that underlie the design of the systems we use today and the systems we will use tomorrow. This means that, given a computing model, we need to know what can and cannot be done in that model. Considering systems built on top of an asynchronous read/write shared memory prone to process crashes, this monograph presents and develops fundamental notions such as universal constructions, consensus numbers, distributed recursivity, the power of the BG simulation, and what can be done when one has to cope with process anonymity and/or memory anonymity. Numerous distributed algorithms are presented, whose aim is to help the reader better understand the power and the subtleties of the notions presented. In addition, the reader can appreciate the simplicity and beauty of some of these algorithms.

  • by Dimitris Sakavalas
    795,-

    As the structure of contemporary communication networks grows more complex, practical networked distributed systems become prone to component failures. Fault-tolerant consensus in message-passing systems allows participants in the system to agree on a common value despite the malfunction or misbehavior of some components. It is a task of fundamental importance for distributed computing, due to its numerous applications. We summarize studies on the topological conditions that determine the feasibility of consensus, mainly focusing on directed networks and the case of restricted topology knowledge at each participant. Recently, significant efforts have been devoted to fully characterizing the underlying communication networks in which variations of fault-tolerant consensus can be achieved. Although the deduction of analogous topological conditions for undirected networks of known topology shortly followed the introduction of the problem, their extension to the directed network case has proven to be a highly non-trivial task. Moreover, global knowledge restrictions, inherent in modern large-scale networks, require more elaborate arguments concerning the locality of distributed computations. In this work, we present the techniques and ideas used to resolve these issues. Recent studies indicate a number of parameters that affect the topological conditions under which consensus can be achieved, namely, the fault model, the degree of system synchrony (synchronous vs. asynchronous), the type of agreement (exact vs. approximate), the level of topology knowledge, and the algorithm class used (general vs. iterative). We outline the feasibility and impossibility results for various combinations of the above parameters, extensively illustrating the relation between network topology and consensus.

  • by Karine Altisen
    729,-

    This book aims to be a comprehensive and pedagogical introduction to the concept of self-stabilization, introduced by Edsger Wybe Dijkstra in 1973. Self-stabilization characterizes the ability of a distributed algorithm to converge within finite time to a configuration from which its behavior is correct (i.e., satisfies a given specification), regardless of the arbitrary initial configuration of the system. This arbitrary initial configuration may be the result of the occurrence of a finite number of transient faults. Hence, self-stabilization is actually considered a versatile non-masking fault tolerance approach, since it recovers from the effect of any finite number of such faults in a unified manner. Another major interest of such an automatic recovery method comes from the difficulty of resetting malfunctioning devices in a large-scale (and so, geographically spread) distributed system (the Internet, peer-to-peer networks, and delay-tolerant networks are examples of such distributed systems). Furthermore, self-stabilization is usually recognized as a lightweight property to achieve fault tolerance as compared to other classical fault tolerance approaches. Indeed, the overhead, both in terms of time and space, of state-of-the-art self-stabilizing algorithms is commonly small. This makes self-stabilization very attractive for distributed systems equipped with processes with low computational and memory capabilities, such as wireless sensor networks. After more than 40 years of existence, self-stabilization is now sufficiently established as an important field of research in theoretical distributed computing to justify its teaching in advanced research-oriented graduate courses. This book is an initiation course, which consists of the formal definition of self-stabilization and its related concepts, followed by a deep review and study of classical (simple) algorithms, commonly used proof schemes and design patterns, as well as landmark results from the self-stabilizing community. As is common in the self-stabilizing area, in this book we focus on the proofs of correctness and the analytical complexity of the studied distributed self-stabilizing algorithms. Finally, we underline that most of the algorithms studied in this book are actually dedicated to the high-level atomic-state model, which is the most commonly used computational model in the self-stabilizing area. However, in the last chapter, we present general techniques to achieve self-stabilization in the low-level message-passing model, as well as example algorithms.
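
    For a concrete taste of the concept, here is a small simulation, under the assumptions of a central daemon and K = n states, of Dijkstra's 1973 K-state token ring, the very first self-stabilizing algorithm; all identifiers are illustrative:

```python
import random

def privileged(s, i, K):
    # Machine 0 is privileged when it equals its predecessor (s[-1] wraps);
    # every other machine is privileged when it differs from its predecessor.
    return s[i] == s[i - 1] if i == 0 else s[i] != s[i - 1]

def fire(s, i, K):
    # Firing a privileged machine passes the single token along the ring.
    s[i] = (s[0] + 1) % K if i == 0 else s[i - 1]

n = K = 5                                      # K >= n guarantees stabilization
s = [random.randrange(K) for _ in range(n)]    # arbitrary, possibly corrupted start
for _ in range(10 * n * n):                    # central daemon: one move at a time
    candidates = [i for i in range(n) if privileged(s, i, K)]
    fire(s, random.choice(candidates), K)      # some machine is always privileged
# Legitimate configurations have exactly one privilege: one circulating token.
assert sum(privileged(s, i, K) for i in range(n)) == 1
```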

  • by Gadi Taubenfeld
    949,-

    Computers and computer networks are among the most incredible inventions of the 20th century, having an ever-expanding role in our daily lives by enabling complex human activities in areas such as entertainment, education, and commerce. One of the most challenging problems in computer science for the 21st century is to improve the design of distributed systems, where computing devices have to work together as a team to achieve common goals. In this book, I have tried to gently introduce the general reader to some of the most fundamental issues and classical results of computer science underlying the design of algorithms for distributed systems, so that the reader can get a feel for the nature of this exciting and fascinating field called distributed computing. The book will appeal to the educated layperson and requires no computer-related background. I strongly suspect that most computer-knowledgeable readers, too, will be able to learn something new.

  • by Roderick Bloem
    715,-

    While the classic model checking problem is to decide whether a finite system satisfies a specification, the goal of parameterized model checking is to decide, given a family of finite systems S(n) parameterized by n, whether, for all n, the system S(n) satisfies a specification. In this book we consider the important case of S(n) being a concurrent system, where the number of replicated processes depends on the parameter n but each process is independent of n. Examples are cache coherence protocols, networks of finite-state agents, and systems that solve mutual exclusion or scheduling problems. Further examples are abstractions of systems, where the processes of the original systems actually depend on the parameter. The literature in this area has studied a wealth of computational models based on a variety of synchronization and communication primitives, including token passing, broadcast, and guarded transitions. Often, different terminology is used in the literature, and results are based on implicit assumptions. In this book, we introduce a computational model that unites the central synchronization and communication primitives of many models, and unveils hidden assumptions from the literature. We survey existing decidability and undecidability results, and give a systematic view of the basic problems in this exciting research area.

  • by Hagit Attiya
    409,-

    To understand the power of distributed systems, it is necessary to understand their inherent limitations: what problems cannot be solved in particular systems, or without sufficient resources (such as time or space). This book presents key techniques for proving such impossibility results and applies them to a variety of different problems in a variety of different system models. Insights gained from these results are highlighted, aspects of a problem that make it difficult are isolated, features of an architecture that make it inadequate for solving certain problems efficiently are identified, and different system models are compared.

  • by Roberto Segala, Frits Vaandrager, Dilsun Kaynar et al.
    525,-

    This monograph presents the Timed Input/Output Automaton (TIOA) modeling framework, a basic mathematical framework to support description and analysis of timed (computing) systems. Timed systems are systems in which desirable correctness or performance properties of the system depend on the timing of events, not just on the order of their occurrence. Timed systems are employed in a wide range of domains including communications, embedded systems, real-time operating systems, and automated control. Many applications involving timed systems have strong safety, reliability, and predictability requirements, which make it important to have methods for systematic design of systems and rigorous analysis of timing-dependent behavior. The TIOA framework also supports description and analysis of timed distributed algorithms -- distributed algorithms whose correctness and performance depend on the relative speeds of processors, accuracy of local clocks, or communication delay bounds. Such algorithms arise, for example, in traditional and wireless communications, networks of mobile devices, and shared-memory multiprocessors. The need to prove rigorous theoretical results about timed distributed algorithms makes it important to have a suitable mathematical foundation. An important feature of the TIOA framework is its support for decomposing timed system descriptions. In particular, the framework includes a notion of external behavior for a timed I/O automaton, which captures its discrete interactions with its environment. The framework also defines what it means for one TIOA to implement another, based on an inclusion relationship between their external behavior sets, and defines notions of simulations, which provide sufficient conditions for demonstrating implementation relationships. The framework includes a composition operation for TIOAs, which respects external behavior, and a notion of receptiveness, which implies that a TIOA does not block the passage of time. The TIOA framework also defines the notion of a property and what it means for a property to be a safety or a liveness property. It includes results that capture common proof methods for showing that automata satisfy properties. Table of Contents: Introduction / Mathematical Preliminaries / Describing Timed System Behavior / Timed Automata / Operations on Timed Automata / Properties for Timed Automata / Timed I/O Automata / Operations on Timed I/O Automata / Conclusions and Future Work

  • by Paola Flocchini
    635,-

    The study of what can be computed by a team of autonomous mobile robots, originally started in robotics and AI, has become increasingly popular in theoretical computer science (especially in distributed computing), where it is now an integral part of the investigations on computability by mobile entities. The robots are identical computational entities located and able to move in a spatial universe; they operate without explicit communication and are usually unable to remember the past; they are extremely simple, with limited resources, and individually quite weak. However, collectively the robots are capable of performing complex tasks, and form a system with desirable fault-tolerant and self-stabilizing properties. The research has been concerned with the computational aspects of such systems. In particular, the focus has been on the minimal capabilities that the robots should have in order to solve a problem. This book focuses on the recent algorithmic results in the field of distributed computing by oblivious mobile robots (unable to remember the past). After introducing the computational model with its nuances, we focus on basic coordination problems: pattern formation, gathering, scattering, leader election, as well as on dynamic tasks such as flocking. For each of these problems, we provide a snapshot of the state of the art, reviewing the existing algorithmic results. In doing so, we outline solution techniques, and we analyze the impact of the different assumptions on the robots' computability power. Table of Contents: Introduction / Computational Models / Gathering and Convergence / Pattern Formation / Scatterings and Coverings / Flocking / Other Directions

  • by Marko Vukolic
    559,-

    A quorum system is a collection of subsets of nodes, called quorums, with the property that each pair of quorums has a non-empty intersection. Quorum systems are the key mathematical abstraction for ensuring consistency in fault-tolerant and highly available distributed computing. Critical for many applications since the early days of distributed computing, quorum systems have evolved from simple majorities of a set of processes to complex hierarchical collections of sets, tailored for general adversarial structures. The initial non-empty intersection property has been refined many times to account for, e.g., a stronger (Byzantine) adversarial model, latency considerations, or better availability. This monograph is an overview of the evolution and refinement of quorum systems, with emphasis on their role in two fundamental applications: distributed read/write storage and consensus. Table of Contents: Introduction / Preliminaries / Classical Quorum Systems / Classical Quorum-Based Emulations / Byzantine Quorum Systems / Latency-efficient Quorum Systems / Probabilistic Quorum Systems
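
    As a minimal sketch of the defining property (not taken from the monograph), the simplest quorum system consists of all strict majorities of the node set, and any two such quorums necessarily intersect:

```python
from itertools import combinations

def majority_quorums(nodes):
    """The simplest quorum system: all subsets holding a strict majority."""
    q = len(nodes) // 2 + 1
    return [set(c) for c in combinations(sorted(nodes), q)]

nodes = {"a", "b", "c", "d", "e"}
quorums = majority_quorums(nodes)
# The defining property: every pair of quorums intersects, so a write
# acknowledged by one quorum is visible to any later read from another.
assert all(q1 & q2 for q1 in quorums for q2 in quorums)
```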

  • by Jennifer Welch
    525,-

    Link reversal is a versatile algorithm design technique that has been used in numerous distributed algorithms for a variety of problems. The common thread in these algorithms is that the distributed system is viewed as a graph, with vertices representing the computing nodes and edges representing some other feature of the system (for instance, point-to-point communication channels or a conflict relationship). Each algorithm assigns a virtual direction to the edges of the graph, producing a directed version of the original graph. As the algorithm proceeds, the virtual directions of some of the links in the graph change in order to accomplish some algorithm-specific goal. The criterion for changing link directions is based on information that is local to a node (such as the node having no outgoing links) and thus this approach scales well, a feature that is desirable for distributed algorithms. This monograph presents, in a tutorial way, a representative sampling of the work on link-reversal-based distributed algorithms. The algorithms considered solve routing, leader election, mutual exclusion, distributed queueing, scheduling, and resource allocation. The algorithms can be roughly divided into two types, those that assume a more abstract graph model of the networks, and those that take into account more realistic details of the system. In particular, these more realistic details include the communication between nodes, which may be through asynchronous message passing, and possible changes in the graph, for instance, due to movement of the nodes. We have not attempted to provide a comprehensive survey of all the literature on these topics. Instead, we have focused in depth on a smaller number of fundamental papers, whose common thread is that link reversal provides a way for nodes in the system to observe their local neighborhoods, take only local actions, and yet cause global problems to be solved. We conjecture that future interesting uses of link reversal are yet to be discovered. Table of Contents: Introduction / Routing in a Graph: Correctness / Routing in a Graph: Complexity / Routing and Leader Election in a Distributed System / Mutual Exclusion in a Distributed System / Distributed Queueing / Scheduling in a Graph / Resource Allocation in a Distributed System / Conclusion
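
    As a hedged sketch of the core idea (the Gafni-Bertsekas full reversal rule, simulated synchronously; all names are illustrative), a node other than the destination that has no outgoing links reverses all of its incident links, and repeating this makes the graph destination-oriented:

```python
DEST = "D"   # the destination; routing aims every node's links toward it

def full_reversal_round(out):
    """One synchronous round of full link reversal: every node other than
    the destination with no outgoing link reverses all incident links.
    out[v] is the set of neighbors that v's links currently point toward."""
    sinks = {v for v in out if v != DEST and not out[v]}
    for v in sinks:                      # sinks are never adjacent, so the
        for u in out:                    # reversals do not interfere
            if v in out[u]:
                out[u].discard(v)        # incoming link (u -> v) ...
                out[v].add(u)            # ... becomes (v -> u)
    return sinks

# A 4-node example where node "c" starts as a wrong-way sink.
out = {"a": {"b"}, "b": {"c"}, "c": set(), "D": {"c"}}
while full_reversal_round(out):
    pass
# Termination: the graph is destination-oriented, i.e. only D is a sink.
assert all(out[v] for v in out if v != DEST)
```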

  • by Chryssis Georgiou
    559,-

    Cooperative network supercomputing is becoming increasingly popular for harnessing the power of the global Internet computing platform. A typical Internet supercomputer consists of a master computer or server and a large number of computers called workers, performing computation on behalf of the master. Despite the simplicity and benefits of a single master approach, as the scale of such computing environments grows, it becomes unrealistic to assume the existence of the infallible master that is able to coordinate the activities of multitudes of workers. Large-scale distributed systems are inherently dynamic and are subject to perturbations, such as failures of computers and network links, thus it is also necessary to consider fully distributed peer-to-peer solutions. We present a study of cooperative computing with the focus on modeling distributed computing settings, algorithmic techniques enabling one to combine efficiency and fault-tolerance in distributed systems, and the exposition of trade-offs between efficiency and fault-tolerance for robust cooperative computing. The focus of the exposition is on the abstract problem, called Do-All, and formulated in terms of a system of cooperating processors that together need to perform a collection of tasks in the presence of adversity. Our presentation deals with models, algorithmic techniques, and analysis. Our goal is to present the most interesting approaches to algorithm design and analysis leading to many fundamental results in cooperative distributed computing. The algorithms selected for inclusion are among the most efficient that additionally serve as good pedagogical examples. Each chapter concludes with exercises and bibliographic notes that include a wealth of references to related work and relevant advanced results. Table of Contents: Introduction / Distributed Cooperation and Adversity / Paradigms and Techniques / Shared-Memory Algorithms / Message-Passing Algorithms / The Do-All Problem in Other Settings / Bibliography / Authors' Biographies
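
    A toy sketch of the Do-All setting under strong simplifying assumptions (an atomically readable shared task board and crash-prone workers; not an algorithm from the book):

```python
import random

def do_all(num_tasks, num_workers, crash_prob=0.1):
    """Idealized Do-All: workers sweep the tasks in private random orders,
    skipping tasks already marked done on an atomically readable shared
    board; any worker may crash, but at least one always survives."""
    done = [False] * num_tasks
    order = [random.sample(range(num_tasks), num_tasks)
             for _ in range(num_workers)]
    alive = set(range(num_workers))
    work = 0
    while not all(done):
        for w in list(alive):
            if len(alive) > 1 and random.random() < crash_prob:
                alive.discard(w)            # the adversary crashes this worker
                continue
            for t in order[w]:
                if not done[t]:             # atomic check-then-do, so no task
                    done[t] = True          # is ever performed twice here
                    work += 1
                    break
    return work

# With an atomic board, total work equals num_tasks; in the book's models,
# where the board cannot be read atomically, bounding the redundant work
# is exactly what makes the problem interesting.
assert do_all(20, 4) == 20
```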

  • by Othon Michail
    559,-

    Wireless sensor networks are about to become part of everyday life. Homes and workplaces capable of self-controlling and adapting air-conditioning for different temperature and humidity levels, sleepless forests ready to detect and react in case of a fire, vehicles able to avoid sudden obstacles or to self-organize routes to avoid congestion, and so on, will probably be commonplace in the very near future. Mobility plays a central role in such systems, and so does passive mobility, that is, mobility of the network stemming from the environment itself. The population protocol model was an intellectual invention aiming to describe such systems in a minimalistic and analysis-friendly way. Taking as a starting point the inherent limitations but also the fundamental achievements of the population protocol model, we try in this monograph to present some realistic and practical enhancements that give birth to some new and surprisingly powerful (for this kind of system) computational models. Table of Contents: Population Protocols / The Computational Power of Population Protocols / Enhancing the Model / Mediated Population Protocols and Symmetry / Passively Mobile Machines that Use Restricted Space / Conclusions and Open Research Directions / Acronyms / Authors' Biographies
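
    To make the model concrete, here is a minimal simulation (not from the monograph) of the textbook population protocol for leader election, whose only non-trivial transition is (L, L) -> (L, F) when two leaders interact:

```python
import random

def leader_election(n, interactions=10**5):
    """Population protocol for leader election: every agent starts as a
    leader (L); whenever two leaders meet, the responder demotes itself
    to follower (F). A fair scheduler picks random ordered pairs."""
    state = ["L"] * n
    for _ in range(interactions):
        i, j = random.sample(range(n), 2)   # initiator i meets responder j
        if state[i] == "L" and state[j] == "L":
            state[j] = "F"                  # apply the single transition rule
    return state.count("L")

# With high probability, enough random interactions leave exactly one leader.
assert leader_election(50) == 1
```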

  • by Rachid Guerraoui
    559,-

    Transactional memory (TM) is an appealing paradigm for concurrent programming on shared memory architectures. With a TM, threads of an application communicate, and synchronize their actions, via in-memory transactions. Each transaction can perform any number of operations on shared data, and then either commit or abort. When the transaction commits, the effects of all its operations become immediately visible to other transactions; when it aborts, however, those effects are entirely discarded. Transactions are atomic: programmers get the illusion that every transaction executes all its operations instantaneously, at some single and unique point in time. Yet, a TM runs transactions concurrently to leverage the parallelism offered by modern processors. The aim of this book is to provide theoretical foundations for transactional memory. This includes defining a model of a TM, as well as answering precisely when a TM implementation is correct, what kind of properties it can ensure, what are the power and limitations of a TM, and what inherent trade-offs are involved in designing a TM algorithm. While the focus of this book is on the fundamental principles, its goal is to capture the common intuition behind the semantics of TMs and the properties of existing TM implementations. Table of Contents: Introduction / Shared Memory Systems / Transactional Memory: A Primer / TM Correctness Issues / Implementing a TM / Further Reading / Opacity / Proving Opacity: An Example / Opacity vs. Atomicity / Further Reading / The Liveness of a TM / Lock-Based TMs / Obstruction-Free TMs / General Liveness of TMs / Further Reading / Conclusions
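
    A deliberately simplified, single-threaded sketch of the commit/abort semantics described above (buffered writes plus commit-time validation of the read-set); a real TM would synchronize concurrent threads, and all names here are illustrative:

```python
class TVar:
    """A transactional variable: a value plus a version number."""
    def __init__(self, value):
        self.value, self.version = value, 0

class Transaction:
    """Toy software TM: reads record the version they saw, writes are
    buffered, and commit validates the read-set before applying anything,
    so a transaction's effects appear all at once or not at all."""
    def __init__(self):
        self.reads, self.writes = {}, {}
    def read(self, tv):
        if tv in self.writes:
            return self.writes[tv]           # read-your-own-writes
        self.reads[tv] = tv.version
        return tv.value
    def write(self, tv, value):
        self.writes[tv] = value
    def commit(self):
        # Abort if anything we read has since changed (a real TM would
        # take locks here; this sketch assumes one thread runs at a time).
        if any(tv.version != v for tv, v in self.reads.items()):
            return False
        for tv, value in self.writes.items():
            tv.value, tv.version = value, tv.version + 1
        return True

# Atomic transfer between two accounts, retried until the commit succeeds.
a, b = TVar(100), TVar(0)
while True:
    tx = Transaction()
    tx.write(a, tx.read(a) - 25)
    tx.write(b, tx.read(b) + 25)
    if tx.commit():
        break
assert (a.value, b.value) == (75, 25)
```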

  • by Michel Raynal
    559,-

    Understanding distributed computing is not an easy task. This is due to the many facets of uncertainty one has to cope with and master in order to produce correct distributed software. A previous book, Communication and Agreement Abstractions for Fault-tolerant Asynchronous Distributed Systems (published by Morgan & Claypool, 2010), was devoted to the problems created by crash failures in asynchronous message-passing systems. The present book focuses on the way to cope with the uncertainty created by process failures (crashes, omission failures, and Byzantine behavior) in synchronous message-passing systems (i.e., systems whose progress is governed by the passage of time). To that end, the book considers fundamental problems that distributed synchronous processes have to solve. These fundamental problems concern agreement among processes (if processes are unable to agree in one way or another in the presence of failures, no non-trivial problem can be solved). They are consensus, interactive consistency, k-set agreement, and non-blocking atomic commit. Being able to solve these basic problems efficiently with provable guarantees allows application designers to give a precise meaning to the words "cooperate" and "agree" despite failures, and to write distributed synchronous programs with properties that can be stated and proved. Hence, the aim of the book is to present a comprehensive view of agreement problems, algorithms that solve them, and associated computability bounds in synchronous message-passing distributed systems. Table of Contents: List of Figures / Synchronous Model, Failure Models, and Agreement Problems / Consensus and Interactive Consistency in the Crash Failure Model / Expedite Decision in the Crash Failure Model / Simultaneous Consensus Despite Crash Failures / From Consensus to k-Set Agreement / Non-Blocking Atomic Commit in Presence of Crash Failures / k-Set Agreement Despite Omission Failures / Consensus Despite Byzantine Failures / Byzantine Consensus in Enriched Models
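
    As a hedged illustration of the flavor of these algorithms (the classic (f+1)-round flooding consensus for crash failures, simulated with randomized crashes; not necessarily the presentation used in the book):

```python
import random

def flooding_consensus(inputs, f):
    """Classic synchronous consensus tolerating up to f crash failures:
    every process relays all values it has seen for f+1 rounds. A process
    may crash mid-broadcast (reaching only some receivers), but with at
    most f crashes some round is crash-free, after which all survivors
    hold the same set and decide its minimum value."""
    n = len(inputs)
    seen = [{v} for v in inputs]
    crash_round = {p: random.randint(0, f)
                   for p in random.sample(range(n), f)}
    for r in range(f + 1):
        msgs = []
        for p in range(n):
            if p in crash_round and crash_round[p] < r:
                continue                     # already crashed, sends nothing
            if p in crash_round and crash_round[p] == r:
                # crashes mid-broadcast: reaches only a random subset
                receivers = random.sample(range(n), random.randint(0, n))
            else:
                receivers = range(n)
            msgs.append((set(seen[p]), receivers))
        for payload, receivers in msgs:      # deliver synchronously
            for q in receivers:
                seen[q] |= payload
    survivors = [p for p in range(n) if p not in crash_round]
    return {min(seen[p]) for p in survivors}

# Five processes, at most two crashes: all survivors decide the same value.
assert len(flooding_consensus([3, 1, 4, 1, 5], f=2)) == 1
```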

  • by Michel Raynal
    719,-

    Understanding distributed computing is not an easy task. This is due to the many facets of uncertainty one has to cope with and master in order to produce correct distributed software. Considering the uncertainty created by asynchrony and process crash failures in the context of message-passing systems, the book focuses on the main abstractions that one has to understand and master in order to be able to produce software with guaranteed properties. These fundamental abstractions are the communication abstractions that allow the processes to communicate consistently (namely, the register abstraction and the reliable broadcast abstraction), and the consensus agreement abstraction that allows them to cooperate despite failures. As they give a precise meaning to the words "communicate" and "agree" despite asynchrony and failures, these abstractions allow distributed programs to be designed with properties that can be stated and proved. Impossibility results are associated with these abstractions. Hence, in order to circumvent these impossibilities, the book relies on the failure detector approach, and, consequently, that approach to fault-tolerance is central to the book. Table of Contents: List of Figures / The Atomic Register Abstraction / Implementing an Atomic Register in a Crash-Prone Asynchronous System / The Uniform Reliable Broadcast Abstraction / Uniform Reliable Broadcast Abstraction Despite Unreliable Channels / The Consensus Abstraction / Consensus Algorithms for Asynchronous Systems Enriched with Various Failure Detectors / Constructing Failure Detectors
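
    As a small sketch of one of these abstractions (eager reliable broadcast, where a process relays a message to everyone before delivering it; a simplified simulation, not the book's algorithm):

```python
import random

def reliable_broadcast(n, f):
    """Eager reliable broadcast, minimally sketched: on first receipt, a
    process relays the message to everyone before delivering it. Here the
    sender crashes mid-broadcast (its sends reach a random subset) and
    f-1 more processes are also faulty; still, either every correct
    process delivers the message or none does."""
    sender = 0
    crashed = {sender} | set(random.sample(range(1, n), f - 1))
    pending = list(random.sample(range(n), random.randint(0, n)))  # partial send
    delivered = set()
    while pending:
        p = pending.pop()
        if p not in crashed and p not in delivered:
            pending.extend(range(n))     # relay first, then deliver
            delivered.add(p)
    return delivered

n, f = 6, 2
got = reliable_broadcast(n, f)
assert len(got) in (0, n - f)            # all-or-none among correct processes
```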
