Synchronicity
Next: Longevity | Return to Qualities of Communication Protocols
In the most general sense, synchronization deals with the coordinated execution of actions in a distributed system and the state information necessary for that coordination. This broad definition encompasses, but is not limited to, the common view among programmers that synchronous communications occur when the sender of a message stops and waits for a response from the message receiver.^{[1]} However, this is not the only way to achieve synchronization. Other common mechanisms include logical clocks,^{[2]}^{[3]} vector clocks,^{[4]}^{[5]} vector timestamps,^{[6]} optimistic concurrency controls,^{[7]} and timing signals.
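As an illustration of one of these mechanisms, the following is a minimal sketch of Lamport logical clocks,^{[2]} which coordinate event ordering without any process stopping to wait. The class and method names are illustrative, not drawn from any particular library.

```python
class Process:
    """A process that stamps events with a Lamport logical clock."""

    def __init__(self):
        self.clock = 0  # local logical clock

    def local_event(self):
        # Any internal event advances the local clock.
        self.clock += 1
        return self.clock

    def send(self):
        # A send is also an event; the outgoing message carries
        # the sender's current timestamp.
        self.clock += 1
        return self.clock

    def receive(self, msg_timestamp):
        # On receipt, advance past both the local clock and the
        # message's timestamp, preserving happened-before order.
        self.clock = max(self.clock, msg_timestamp) + 1
        return self.clock

p, q = Process(), Process()
t = p.send()      # p's clock becomes 1; t = 1 travels with the message
q.local_event()   # q's clock becomes 1
q.receive(t)      # q's clock becomes max(1, 1) + 1 = 2
```

The key property is that if event a happened before event b, then a's timestamp is smaller than b's; no process ever blocks waiting for another, which is why such clocks support looser forms of synchronization than stop-and-wait messaging.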
To evaluate the synchronization requirements for ACPs, we consider: (a) what actions need to be coordinated, (b) where those actions will be executed, and (c) what kind of state information is needed to achieve the desired coordination. A distributed system may perform many different tasks comprising numerous operations, but rarely do all of them have to be fully coordinated. In fact, the more independent the individual operations are, the more a system can maximize concurrency and increase throughput. From a coordination perspective, where the operations take place is actually more important than what the operations do. For example, if all of the actions occur in just one process, then that process may not need to know anything about the state of any other process. Once developers know what operations have to be coordinated and where they will execute, they can consider what local or global state information the coordination logic will need.
To rank synchronicity for ACP patterns, we will use the following definitions:
- C is a conversation involving a closed set of processes, C.P = {p_{1},…,p_{n}}, and a set of messages, C.M = {m_{1}, … ,m_{n}}, such that sender(m_{i}) ∈ C.P ∧ receivers(m_{i}) ⊆ C.P for 1≤i≤n where sender(m_{i}) is the process that sent message m_{i} and receivers(m_{i}) is the set of processes that received m_{i}.
- A is a set of operations, {a_{1}, … ,a_{n}} that run on C.P and whose execution requires coordination, e.g., ordering, simultaneous execution, etc.
- h(a) is the host process for operation a, where a ∈ A and h(a) ∈ C.P.
- s(a) is the state information that h(a) needs to coordinate a’s execution with the rest of the operations in A.
- H(A) is the set of host processes for all operations in A.
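The definitions above can be translated into a short, hedged Python sketch; the class and variable names (Message, Conversation, host) are illustrative, not part of the formal model.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Message:
    sender: str            # sender(m), intended to be a process in C.P
    receivers: frozenset   # receivers(m), intended to satisfy ⊆ C.P

@dataclass
class Conversation:
    P: frozenset                           # C.P, the closed set of processes
    M: list = field(default_factory=list)  # C.M, the set of messages

    def is_closed(self):
        # Check sender(m) ∈ C.P ∧ receivers(m) ⊆ C.P for every m in C.M.
        return all(m.sender in self.P and m.receivers <= self.P
                   for m in self.M)

def H(A, h):
    """H(A): the set of host processes for all operations in A,
    given the host map h (i.e., h[a] = h(a))."""
    return {h[a] for a in A}

C = Conversation(P=frozenset({"p1", "p2", "p3"}),
                 M=[Message("p1", frozenset({"p2", "p3"}))])
host = {"a1": "p1", "a2": "p2"}     # h(a) for two hypothetical operations

print(C.is_closed())                # True: the conversation is closed
print(H(host.keys(), host))         # {'p1', 'p2'}, so |H(A)| = 2
```

Computing |H(A)| this way is exactly the quantity the ranking rubric below turns on: whether the coordinated operations are hosted by one process or several.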
Below is an informal 3-point rubric for ranking synchronicity for CommDP patterns using these definitions. We believe a more rigorous ranking system would have value beyond the categorization of ACP patterns, but its full definition is outside the scope and purpose of this page.
Rank/Criteria
3. The problem (P) addressed by the pattern deals with situations where |H(A)|>1 and the solution (S) can guarantee that for all a ∈ A, h(a) receives s(a) via messages, m_{i} ∈ C.M, in time to do the prescribed coordination.
2. P deals with situations where |H(A)| = 1 and S can guarantee that for all a ∈ A, h(a) receives s(a) via messages, m_{i} ∈ C.M, in time to do the prescribed coordination.
1. P is not concerned with synchronicity, e.g., |A| = 0, and S does not limit synchronicity.
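The rubric can be captured as a small function. Note that whether a solution "can guarantee" timely delivery of s(a) is a design judgment, not something mechanically computable, so this sketch takes it as a boolean input; the function name and the fallback branch are our own additions, not part of the rubric.

```python
def synchronicity_rank(num_hosts, num_ops, delivers_state_in_time):
    """Informal CommDP synchronicity rubric.

    num_hosts              -- |H(A)|, hosts of the coordinated operations
    num_ops                -- |A|, number of operations needing coordination
    delivers_state_in_time -- whether the solution guarantees each h(a)
                              receives s(a) via C.M in time (a judgment call)
    """
    if num_ops == 0:
        return 1  # P is not concerned with synchronicity
    if delivers_state_in_time:
        # Rank 3 for multi-host coordination, rank 2 for single-host.
        return 3 if num_hosts > 1 else 2
    # Hypothetical fallback: the informal rubric leaves this case open.
    return 1

print(synchronicity_rank(num_hosts=2, num_ops=3, delivers_state_in_time=True))   # 3
print(synchronicity_rank(num_hosts=1, num_ops=3, delivers_state_in_time=True))   # 2
print(synchronicity_rank(num_hosts=0, num_ops=0, delivers_state_in_time=False))  # 1
```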
References
- ↑ M. Burrows, “The Chubby Lock Service for Loosely-coupled Distributed Systems,” in Proceedings of the 7th Symposium on Operating Systems Design and Implementation, Berkeley, CA, USA, 2006, pp. 335–350.
- ↑ L. Lamport, “Time, Clocks, and the Ordering of Events in a Distributed System,” Commun ACM, vol. 21, no. 7, pp. 558–565, Jul. 1978.
- ↑ M. Raynal, “About Logical Clocks for Distributed Systems,” SIGOPS Oper Syst Rev, vol. 26, no. 1, pp. 41–48, Jan. 1992.
- ↑ F. Mattern, “Virtual time and global states of distributed systems,” in Parallel and Distributed Algorithms, 1989, pp. 215–226.
- ↑ C. Fidge, “Logical Time in Distributed Computing Systems,” Computer, vol. 24, no. 8, pp. 28–33, Aug. 1991.
- ↑ G. Coulouris, J. Dollimore, T. Kindberg, and G. Blair, Distributed Systems: Concepts and Design, 5th ed. Boston: Pearson, 2011.
- ↑ H. T. Kung and J. T. Robinson, “On Optimistic Methods for Concurrency Control,” ACM Trans Database Syst, vol. 6, no. 2, pp. 213–226, Jun. 1981.