Term
What are the characteristics of the Central Server Algorithm? |
|
Definition
- Server keeps track of a token - Permission to enter critical region
- Process requests token from server and awaits reply
- Server grants token if it has it, otherwise queues request
- A process can enter critical region if it gets the token, otherwise it waits
- On exiting critical section, process sends release message to server
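The steps above can be sketched as a minimal single-threaded simulation of the server side (the class and method names here are illustrative, not part of the original description):

```python
from collections import deque

class CentralServer:
    """Token manager: grants the single token, queues waiting requests."""

    def __init__(self):
        self.token_free = True      # server holds the token initially
        self.queue = deque()        # requests waiting for the token
        self.granted = []           # log of grants, in service order

    def request(self, pid):
        """A process asks for the token; grant it or queue the request."""
        if self.token_free:
            self.token_free = False
            self.granted.append(pid)
        else:
            self.queue.append(pid)

    def release(self, pid):
        """On exit, the token returns; pass it to the next queued process."""
        if self.queue:
            # token goes straight to the longest-waiting process
            self.granted.append(self.queue.popleft())
        else:
            self.token_free = True

server = CentralServer()
server.request("A")    # A gets the token immediately
server.request("B")    # B is queued
server.request("C")    # C is queued behind B
server.release("A")    # token passes to B
server.release("B")    # token passes to C
print(server.granted)  # ['A', 'B', 'C']
```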
|
|
|
Term
Show the central server algorithm |
|
Definition
|
|
Term
What is the correctness of the Central Server Algorithm? |
|
Definition
- Correctness
- Achieves safety - if no processes crash
- Achieves liveness - assuming server has enough storage to queue N processes
- Does not guarantee Happened-before ordering:
- Process A sends a request for entry, then sends message m to B
- On receipt of m, B sends its own request for entry.
- To satisfy happened-before ordering, A's request should be granted before B's
- However, due to message propagation delay, B's request may arrive at the server before A's, and the two are serviced in the opposite order
|
|
|
Term
What is the performance of the Central Server Algorithm? |
|
Definition
- Enter overhead: Two messages (request and grant)
- Enter delay: Time between request and grant
- Exit overhead: One message (release)
- Exit delay: none
- Synchronization delay: Between release and grant
- Centralized server is the bottleneck
|
|
|
Term
What are the characteristics of the ring-based algorithm? |
|
Definition
- Processes are arranged in a logical ring; this may be unrelated to the physical configuration
- pi sends messages to p((i + 1) mod N)
- When process holds the token, it can enter, otherwise waits
- When process releases token (exit) it sends to its neighbour
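Token circulation can be sketched as follows (a sketch assuming each waiting process enters the critical region once and forwards the token on exit; `ring_schedule` is an illustrative name):

```python
def ring_schedule(n, wants, token_start=0, steps=100):
    """Pass the token around the ring p0 -> p1 -> ... -> p(n-1) -> p0.
    `wants` is the set of process indices that want the critical region
    once; returns the order in which they enter."""
    entered = []
    holder = token_start
    pending = set(wants)
    for _ in range(steps):
        if holder in pending:
            entered.append(holder)    # holder enters, exits, then forwards
            pending.discard(holder)
        if not pending:
            break
        holder = (holder + 1) % n     # token moves to the next neighbour
    return entered

# Entry order follows ring position relative to the token,
# not the time at which processes started waiting:
print(ring_schedule(4, wants={1, 3}))                 # [1, 3]
print(ring_schedule(4, wants={1, 3}, token_start=2))  # [3, 1]
```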
|
|
|
Term
What is the correctness of the ring-based algorithm? |
|
Definition
- Achieves safety - assuming reliable communication and no crashes
- Achieves liveness - finite processes and finite time in critical region
- Ordering is not ensured, as order of access to critical region is based purely on the arrangement of the nodes
|
|
|
Term
What is the performance of the ring-based algorithm? |
|
Definition
- Bandwidth: high consumption, as the token circulates continually
- Enter overhead: 0 to N messages
- Enter delay: duration of 0 to N messages
- Exit overhead: 1 message
- Exit delay: none
- Synchronization delay: time for transmission of 0 to N messages
|
|
|
Term
Show the ring-based algorithm |
|
Definition
|
|
Term
What are the principles of the algorithm using multicast and logical clocks? |
|
Definition
- Processes multicast their request messages
- A process can only enter the critical region once all processes reply
- Globally ordered timestamps are included in all request messages
- Each process has a state
- RELEASED - They reply
- HELD - They wait until they exit, then reply
- WANTED - They compare the incoming request's timestamp with their own request's, and reply only if their own timestamp is higher (the incoming request has priority); otherwise they queue the request
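The per-state reply rule can be sketched as a small decision function (a sketch; `reply_now` and the representation of requests as `(timestamp, pid)` tuples are assumptions, with timestamp ties broken by process id):

```python
RELEASED, WANTED, HELD = "RELEASED", "WANTED", "HELD"

def reply_now(state, own_req, incoming_req):
    """Decide whether the receiver replies immediately to a request.
    Requests are (timestamp, pid) pairs; tuple comparison gives a
    total order. own_req is the receiver's own (T, pj) when it is
    WANTED, else None."""
    if state == HELD:
        return False                        # defer reply until exit
    if state == WANTED and own_req < incoming_req:
        return False                        # own request has priority: queue theirs
    return True                             # RELEASED, or they asked first: reply

assert reply_now(RELEASED, None, (5, "pi")) is True
assert reply_now(HELD, None, (5, "pi")) is False
assert reply_now(WANTED, (3, "pj"), (5, "pi")) is False  # receiver asked first
assert reply_now(WANTED, (7, "pj"), (5, "pi")) is True   # sender asked first
```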
|
|
|
Term
Show Ricart and Agrawala's algorithm for multicast and logical clocks |
|
Definition
On initialization
    state := RELEASED;
To enter the section
    state := WANTED;
    Multicast REQUEST to all processes;
    T := request's timestamp;
    Wait until (number of replies received = (N - 1));
    state := HELD;
On receipt of a request <Ti, pi> at pj (i != j)
    if (state = HELD or (state = WANTED and (T, pj) < (Ti, pi)))
    then
        queue request from pi without replying;
    else
        reply immediately to pi;
To exit the critical section
    state := RELEASED;
    reply to any queued requests; |
|
|
Term
Show the multicast and logical clock algorithm |
|
Definition
|
|
Term
What are the characteristics of correctness for the multicast and logical clocks algorithm? |
|
Definition
- Safety is ensured - assuming reliable communication and no crashes
- Liveness is ensured thanks to finite processes and finite time in critical region
- Ordering is also maintained due to the totally ordered timestamps
|
|
|
Term
What are the characteristics of performance for the multicast and logical clocks algorithm? |
|
Definition
- Bandwidth: No unnecessary circulating tokens
- Entry overhead: 2(N - 1)
- Entry delay: Delay of multicast request + reply from each process
- Exit overhead: 0 to N - 1 messages (However many requests you got while in critical region)
- Exit delay: none
- Synchronization delay: 1 message
|
|
|
Term
What are the principles of Maekawa's voting algorithm? |
|
Definition
Consists of 'voting': instead of needing replies from all peers, a process only needs replies from a subset of peers. Peers vote for one another to enter the critical section.
- We don't need all N - 1 replies, only replies from a subset of peers
- Consider processes A, B, X:
- A needs the replies from A and X
- B needs the replies from B and X
- If X can only reply to (vote for) one process at a time, then
- A and B cannot have a reply from X at the same time
- Mutex between A and B hinges on X
- Place processes in overlapping groups
- Members in overlap groups control mutex
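The classic grid construction shows one way to build such overlapping groups (a sketch assuming N is a perfect square; `grid_voting_sets` is an illustrative name):

```python
import math

def grid_voting_sets(n):
    """Arrange n processes (a perfect square) in a k x k grid; the
    voting set of a process is its whole row plus its whole column.
    Any two such sets share at least one member, which is what
    enforces mutual exclusion."""
    k = math.isqrt(n)
    assert k * k == n, "this sketch assumes n is a perfect square"
    sets = []
    for p in range(n):
        row, col = divmod(p, k)
        members = {row * k + c for c in range(k)} | {r * k + col for r in range(k)}
        sets.append(members)
    return sets

V = grid_voting_sets(9)   # 3x3 grid, |Vi| = 2*3 - 1 = 5
print(sorted(V[0]))       # p0's set: row {0,1,2} + column {0,3,6} -> [0, 1, 2, 3, 6]
# every pair of voting sets overlaps:
assert all(V[i] & V[j] for i in range(9) for j in range(9))
```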
|
|
|
Term
What are the formal constraints of Maekawa's algorithm? |
|
Definition
- For all i, j: Vi intersection Vj != empty set (any two voting sets overlap)
- pi is contained in its own voting set Vi
- |Vi| = K (every voting set has the same size)
- Each process is contained in M voting sets
|
|
Term
What is the optimal solution for K in Maekawa's Algorithm? |
|
Definition
The total number of messages is minimised when K is approximately sqrt(N), with each process contained in M = K voting sets
|
|
Term
What is an overview of Maekawa's algorithm? |
|
Definition
- A group of processes Vi for each process pi
- The groups for any given pair of processes overlap
- To enter the critical region
- pi sends REQUESTs to all processes in Vi
- Waits for REPLYs (VOTEs) from processes in Vi
- To exit the critical region pi
- Sends RELEASEs to all processes in Vi
|
|
|
Term
Show the full Maekawa's algorithm |
|
Definition
|
|
Term
Show an example of how processes could be split into groups |
|
Definition
|
|
Term
Does Maekawa's algorithm achieve safety? |
|
Definition
If it were possible for two processes pi and pj to enter the critical section at the same time, then the processes in Vi intersection Vj (which is non-empty) would have to have voted for both. The algorithm allows a process to make at most one vote between successive receipts of a release message, so this situation is impossible. It therefore achieves the safety property. |
|
|
Term
Does Maekawa's algorithm achieve Liveness? |
|
Definition
No - the basic algorithm is deadlock prone. Consider three processes with V1 = {p1, p2}, V2 = {p2, p3}, V3 = {p3, p1}: if all three request concurrently, each can win its own vote (p1 votes for p1, p2 for p2, p3 for p3), leaving each waiting for a reply it will never receive. The algorithm can be adapted so that processes queue outstanding requests in happened-before order, which makes it deadlock free
|
|
Term
Does Maekawa's algorithm achieve Happened-before order? |
|
Definition
It requires that processes queue outstanding requests in happened-before order, so that the happened-before ordering requirement is satisfied. |
|
|
Term
What is the performance of the Maekawa's algorithm? |
|
Definition
- Entry overhead: K requests + K replies
- Exit overhead: K releases
- Entry delay: delay between request and getting all replies
- Exit delay: none
- Synchronization delay: 2 messages
|
|
|
Term
Compare the central server, ring-based, multicast and logical clocks, and Maekawa (with and without timestamps) algorithms |
|
Definition
Algorithm               Enter overhead           Exit overhead      Synchronization delay
Central server          2 messages               1 message          between release and grant
Ring-based              0 to N messages          1 message          0 to N messages
Multicast + clocks      2(N - 1) messages        0 to N - 1         1 message
Maekawa                 K requests + K replies   K releases         2 messages
|
|