Term
What is the equation for Amdahl's Law? |
|
Definition
Speedup = 1 / ((1 - P) + P/N) |
|
|
Term
What does P stand for in the equation for Amdahl's Law? |
|
Definition
Fraction (percentage) of the code that can be parallelized |
|
|
Term
What does N stand for in the equation for Amdahl's Law? |
|
Definition
Number of processors (parallel execution units) |
|
|
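The Amdahl cards above can be checked numerically. A minimal Java sketch (class and method names are illustrative):

```java
// Illustrative helper for Amdahl's Law: Speedup = 1 / ((1 - P) + P/N)
public class AmdahlDemo {
    // p: fraction of the program that can be parallelized (0..1)
    // n: number of processors
    static double speedup(double p, int n) {
        return 1.0 / ((1.0 - p) + p / n);
    }

    public static void main(String[] args) {
        // With 95% parallel code, returns diminish fast: the speedup is
        // capped at 1 / (1 - 0.95) = 20x no matter how large n gets.
        System.out.println(speedup(0.95, 8));    // ~5.93
        System.out.println(speedup(0.95, 1000)); // ~19.63
    }
}
```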
Term
What is the Liveness property? |
|
Definition
Something good will happen eventually |
|
|
Term
What is the Safety property? |
|
Definition
Something bad will never happen |
|
|
Term
Describe Peterson's algorithm. |
|
Definition
Works for only 2 threads. Interested thread raises its flag and sets itself as the victim. Spin while the other thread's flag is raised and you ARE the victim. You get the lock once the other thread's flag is down, or the other thread has become the victim. |
|
|
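A runnable Java sketch of the card above (class name and demo sizes are illustrative; AtomicBoolean and volatile supply the visibility Peterson's algorithm needs under the Java memory model):

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class Peterson {
    // flag[i]: thread i is interested; victim: the thread that defers
    private final AtomicBoolean[] flag = { new AtomicBoolean(), new AtomicBoolean() };
    private volatile int victim;

    public void lock(int i) {          // i is 0 or 1
        int j = 1 - i;
        flag[i].set(true);             // raise my flag
        victim = i;                    // set myself as the victim
        while (flag[j].get() && victim == i) { } // spin: other interested AND I'm the victim
    }

    public void unlock(int i) {
        flag[i].set(false);
    }

    static int counter = 0;

    // Two threads hammer a shared counter; with mutual exclusion
    // the final count is exactly 2 * 10000.
    static int runDemo() {
        counter = 0;
        final Peterson lock = new Peterson();
        Thread[] ts = new Thread[2];
        for (int id = 0; id < 2; id++) {
            final int i = id;
            ts[id] = new Thread(() -> {
                for (int k = 0; k < 10000; k++) {
                    lock.lock(i);
                    counter++;         // critical section
                    lock.unlock(i);
                }
            });
        }
        for (Thread t : ts) t.start();
        for (Thread t : ts) {
            try { t.join(); } catch (InterruptedException e) { }
        }
        return counter;
    }

    public static void main(String[] args) {
        System.out.println(runDemo()); // 20000
    }
}
```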
Term
Describe Filter algorithm. |
|
Definition
Works for n >= 2 threads. Loop through levels 1 to n-1. At each level, raise your flag to indicate interest in the level and set yourself as the level's victim. Spin while some other thread is at the same or a higher level and you are the victim of the current level. Advance to the next level once no other thread is at the same or a higher level, or another thread has become the victim of the current level. |
|
|
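A Java sketch of the Filter lock described above (illustrative names; AtomicIntegerArray provides the per-element volatile semantics the algorithm assumes):

```java
import java.util.concurrent.atomic.AtomicIntegerArray;

public class Filter {
    private final AtomicIntegerArray level;   // level[t]: highest level thread t occupies
    private final AtomicIntegerArray victim;  // victim[L]: last thread to enter level L
    private final int n;

    public Filter(int n) {
        this.n = n;
        level = new AtomicIntegerArray(n);    // all threads start at level 0 (not interested)
        victim = new AtomicIntegerArray(n);
    }

    public void lock(int me) {
        for (int L = 1; L < n; L++) {         // climb levels 1 .. n-1
            level.set(me, L);                 // raise flag for this level
            victim.set(L, me);                // set myself as the level's victim
            boolean conflict = true;
            // spin while someone else is at my level or higher AND I'm the victim
            while (conflict && victim.get(L) == me) {
                conflict = false;
                for (int k = 0; k < n; k++) {
                    if (k != me && level.get(k) >= L) { conflict = true; break; }
                }
            }
        }
    }

    public void unlock(int me) {
        level.set(me, 0);                     // drop out of all levels
    }

    static int counter = 0;

    static int runDemo() {                    // 3 threads x 4000 increments = 12000
        counter = 0;
        final Filter lock = new Filter(3);
        Thread[] ts = new Thread[3];
        for (int id = 0; id < 3; id++) {
            final int me = id;
            ts[id] = new Thread(() -> {
                for (int k = 0; k < 4000; k++) {
                    lock.lock(me);
                    counter++;                // critical section
                    lock.unlock(me);
                }
            });
        }
        for (Thread t : ts) t.start();
        for (Thread t : ts) {
            try { t.join(); } catch (InterruptedException e) { }
        }
        return counter;
    }

    public static void main(String[] args) {
        System.out.println(runDemo()); // 12000
    }
}
```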
Term
Describe Bakery algorithm. |
|
Definition
Raise your flag when interested in the lock, then take a ticket: 1 + the current highest label. Spin while another thread has its flag up and an earlier ticket, i.e. a lower label, or an equal label with a lower thread id (ties are broken lexicographically on the [label, id] pair). |
|
|
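A Java sketch of the Bakery lock described above (illustrative names; flags are stored as ints in an AtomicIntegerArray for volatile semantics):

```java
import java.util.concurrent.atomic.AtomicIntegerArray;

public class Bakery {
    private final AtomicIntegerArray flag;   // 1 = interested
    private final AtomicIntegerArray label;  // ticket numbers
    private final int n;

    public Bakery(int n) {
        this.n = n;
        flag = new AtomicIntegerArray(n);
        label = new AtomicIntegerArray(n);
    }

    public void lock(int me) {
        flag.set(me, 1);                          // raise my flag
        int max = 0;                              // take a ticket: current max + 1
        for (int k = 0; k < n; k++) max = Math.max(max, label.get(k));
        label.set(me, max + 1);
        for (int k = 0; k < n; k++) {
            if (k == me) continue;
            // spin while k has its flag up and an earlier (label, id) pair
            while (flag.get(k) == 1 &&
                   (label.get(k) < label.get(me) ||
                    (label.get(k) == label.get(me) && k < me))) { }
        }
    }

    public void unlock(int me) {
        flag.set(me, 0);                          // lower my flag
    }

    static int counter = 0;

    static int runDemo() {                        // 3 threads x 4000 = 12000
        counter = 0;
        final Bakery lock = new Bakery(3);
        Thread[] ts = new Thread[3];
        for (int id = 0; id < 3; id++) {
            final int me = id;
            ts[id] = new Thread(() -> {
                for (int k = 0; k < 4000; k++) {
                    lock.lock(me);
                    counter++;                    // critical section
                    lock.unlock(me);
                }
            });
        }
        for (Thread t : ts) t.start();
        for (Thread t : ts) {
            try { t.join(); } catch (InterruptedException e) { }
        }
        return counter;
    }

    public static void main(String[] args) {
        System.out.println(runDemo()); // 12000
    }
}
```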
Term
When is a history linearizable? |
|
Definition
When each method call can be assigned a linearization point between its invocation and response such that the calls, ordered by those points, form a correct sequential execution. The real-time order of non-overlapping calls is preserved. |
|
|
Term
When is a history sequentially consistent? |
|
Definition
When the method calls can be reordered into a correct sequential execution that preserves each thread's program order. A thread's calls may be shifted along the timeline relative to other threads, but the order of invocations within a thread is fixed. |
|
|
Term
Is a history that is linearizable always sequentially consistent? |
|
Definition
Yes. Linearizability is the stronger property: every linearizable history is sequentially consistent. |
|
|
Term
Is a history that is not linearizable always sequentially consistent? |
|
Definition
Not always. Linearizability is the stronger property, so a non-linearizable history may still be sequentially consistent, but it may also satisfy neither. |
|
|
Term
Is a history that is sequentially consistent always linearizable? |
|
Definition
No. Sequential consistency is the weaker property: it does not require the real-time order of non-overlapping calls to be preserved, so a sequentially consistent history need not be linearizable. |
|
|
Term
What does a safe register return when write/read overlap? |
|
Definition
Any value within the range of the register type. i.e. 0 or 1 in boolean |
|
|
Term
What does a regular register return when write/read overlap? |
|
Definition
Either the old value or new value. |
|
|
Term
What does an atomic register return when write/read overlap? |
|
Definition
Either the old or the new value, as if the register were linearizable: once some read returns the new value, no later read may return the old one. |
|
|
Term
Describe a SRSW safe boolean register. |
|
Definition
Single memory location holding a boolean value with one writer and one reader. |
|
|
Term
Describe a MRSW safe boolean register. |
|
Definition
Use an array of SRSW safe boolean registers. Each index corresponds to a reader thread's own SRSW safe boolean register. When a write happens, the value is written into each index of the SRSW. Think: each reader has a single location where they do a read from, a write needs to write the value into each readers' location. It's possible that a reader can come in before all of the writes have completed. The reader simply looks at its special location and returns the value. Note: this works for multi-valued registers too! |
|
|
Term
Describe a MRSW regular boolean register. |
|
Definition
Uses an array of MRSW safe boolean registers. Instead of a read returning any value within the range of the register, limit the possibilities to only the old or new value. Problem occurs when value doesn't change with a write i.e. old value is 0, new value is 0 too. If a value changes, write it. If it's the same as the previous value, don't do anything. That's what makes it a regular. Note: this doesn't work for multi-valued registers. |
|
|
Term
Describe a MRSW (non-boolean) regular register. |
|
Definition
Uses an array of MRSW regular boolean registers. Instead of each register corresponding to a reader, the array represents the value in unary: a 1 at the index corresponding to the value. To write 5: set index 5 to 1, then clear the lower indices going right to left --> [0][0][0][0][0][1]. The reader scans from left to right and returns the index of the first 1, which is guaranteed to be either the old value or the new value. |
|
|
Term
Describe a SRSW atomic register. |
|
Definition
The writer writes a [value | timestamp] pair and the reader reads it. The reader remembers the last pair it returned, so on each read it can compare timestamps and return whichever pair is newer, ensuring a stale value is never returned after a fresh one. |
|
|
Term
Describe a MRSW atomic register. |
|
Definition
Uses an NxN array of SRSW atomic registers for N readers. The writer writes the new [value | timestamp] pair into each index of the NxN array, going column by column. Each column is associated with a reader. When the reader wants to read, it reads the pairs in its row and looks for the most recent timestamp. Once found, it writes that pair into its column, so that if the writer fell asleep mid-write, other readers will still get the up-to-date value the previous reader found. |
|
|
Term
Describe a MRMW atomic register. |
|
Definition
Uses an array of MRSW atomic registers, one index per writer. When a writer wants to write, it reads all of the timestamps in the array, takes the max and adds 1 to form its [value | timestamp] pair, then writes the pair to its index. When a reader wants to read, it goes through all of the pairs and takes the value from the pair with the maximum timestamp (ties broken by writer index). |
|
|
Term
Describe the Test-and-Set lock. |
|
Definition
Spin on the value returned by state.getAndSet(true); the lock is acquired when it returns false. getAndSet returns the current value and stores the new one (true) in a single atomic step. |
|
|
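A Java sketch of the Test-and-Set lock (illustrative class name and demo sizes):

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class TASLock {
    private final AtomicBoolean state = new AtomicBoolean(false);

    public void lock() {
        // getAndSet(true) returns the old value and stores true atomically;
        // a false return means we flipped the lock from free to held.
        while (state.getAndSet(true)) { }
    }

    public void unlock() {
        state.set(false);
    }

    static int counter = 0;

    static int runDemo() {                  // 4 threads x 5000 = 20000
        counter = 0;
        final TASLock lock = new TASLock();
        Thread[] ts = new Thread[4];
        for (int i = 0; i < 4; i++) {
            ts[i] = new Thread(() -> {
                for (int k = 0; k < 5000; k++) {
                    lock.lock();
                    counter++;              // critical section
                    lock.unlock();
                }
            });
        }
        for (Thread t : ts) t.start();
        for (Thread t : ts) {
            try { t.join(); } catch (InterruptedException e) { }
        }
        return counter;
    }

    public static void main(String[] args) {
        System.out.println(runDemo()); // 20000
    }
}
```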
Term
What is the main problem with the Test-and-Set lock? |
|
Definition
The constant writes cause a bottleneck with multiple threads. |
|
|
Term
Describe the Test-and-Test-and-Set lock. |
|
Definition
Split the Test-and-Set lock into two parts: lurk and pounce. The constant writing with the Test-and-Set lock caused a bottleneck so we'll reduce the writes by reading to see if we even have a shot. Spin while the read returns true. Once it is false, pounce. Do getAndSet(true), if it returns false we've got the lock, otherwise continue reading. |
|
|
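The lurk-and-pounce structure above in Java (illustrative names; only lock() differs from the Test-and-Set lock):

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class TTASLock {
    private final AtomicBoolean state = new AtomicBoolean(false);

    public void lock() {
        while (true) {
            while (state.get()) { }             // lurk: read-only spin, cache-friendly
            if (!state.getAndSet(true)) return; // pounce: one atomic write attempt
            // lost the race: go back to lurking
        }
    }

    public void unlock() {
        state.set(false);
    }

    static int counter = 0;

    static int runDemo() {                      // 4 threads x 5000 = 20000
        counter = 0;
        final TTASLock lock = new TTASLock();
        Thread[] ts = new Thread[4];
        for (int i = 0; i < 4; i++) {
            ts[i] = new Thread(() -> {
                for (int k = 0; k < 5000; k++) {
                    lock.lock();
                    counter++;                  // critical section
                    lock.unlock();
                }
            });
        }
        for (Thread t : ts) t.start();
        for (Thread t : ts) {
            try { t.join(); } catch (InterruptedException e) { }
        }
        return counter;
    }

    public static void main(String[] args) {
        System.out.println(runDemo()); // 20000
    }
}
```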
Term
Describe the Backoff lock. |
|
Definition
The Test-and-Test-and-Set still has a bottleneck - as soon as a thread unlocks, the contending threads all do getAndSet. The Backoff lock introduces a delay to the thread so that all the threads don't storm the getAndSet call. Still spin on the state.get, but once it turns false the thread pounces. The losing threads sleep for some random time between 0 and the current maximum. The current maximum is doubled after each sleep until it reaches the actual maximum. |
|
|
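A Java sketch of the Backoff lock (illustrative names; the millisecond delay bounds are arbitrary choices, not canonical values):

```java
import java.util.Random;
import java.util.concurrent.atomic.AtomicBoolean;

public class BackoffLock {
    static final int MIN_DELAY_MS = 1, MAX_DELAY_MS = 32; // illustrative bounds
    private final AtomicBoolean state = new AtomicBoolean(false);

    public void lock() {
        Random rand = new Random();
        int limit = MIN_DELAY_MS;
        while (true) {
            while (state.get()) { }              // lurk until the lock looks free
            if (!state.getAndSet(true)) return;  // pounce
            try {                                 // lost: sleep a random time < limit
                Thread.sleep(rand.nextInt(limit) + 1);
            } catch (InterruptedException ignored) { }
            limit = Math.min(MAX_DELAY_MS, 2 * limit); // double up to the actual max
        }
    }

    public void unlock() {
        state.set(false);
    }

    static int counter = 0;

    static int runDemo() {                       // 4 threads x 2000 = 8000
        counter = 0;
        final BackoffLock lock = new BackoffLock();
        Thread[] ts = new Thread[4];
        for (int i = 0; i < 4; i++) {
            ts[i] = new Thread(() -> {
                for (int k = 0; k < 2000; k++) {
                    lock.lock();
                    counter++;                   // critical section
                    lock.unlock();
                }
            });
        }
        for (Thread t : ts) t.start();
        for (Thread t : ts) {
            try { t.join(); } catch (InterruptedException e) { }
        }
        return counter;
    }

    public static void main(String[] args) {
        System.out.println(runDemo()); // 8000
    }
}
```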
Term
Describe the Anderson Queue lock. |
|
Definition
Works as a queue like standing in line. When locking, get the tail location and increment for next thread. Spin on "predecessor's" flag until it gives the green light (unlocking sets flag to false). Think: constantly checking the status of the person in front of you until you realize it's your turn. |
|
|
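A Java sketch of the Anderson queue lock (illustrative names; the flag array capacity must be at least the maximum number of concurrent threads, which is the pre-allocated memory the cons card below mentions):

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicIntegerArray;

public class ALock {
    private final AtomicInteger tail = new AtomicInteger(0);
    private final AtomicIntegerArray flags;      // 1 = this slot has the green light
    private final int size;                      // must be >= max concurrent threads
    private final ThreadLocal<Integer> mySlot = new ThreadLocal<>();

    public ALock(int capacity) {
        size = capacity;
        flags = new AtomicIntegerArray(capacity);
        flags.set(0, 1);                         // first arrival gets in immediately
    }

    public void lock() {
        int slot = tail.getAndIncrement() % size; // take a place in line
        mySlot.set(slot);
        while (flags.get(slot) == 0) { }          // spin until my slot goes green
    }

    public void unlock() {
        int slot = mySlot.get();
        flags.set(slot, 0);                      // reset my slot for reuse
        flags.set((slot + 1) % size, 1);         // green-light the next in line
    }

    static int counter = 0;

    static int runDemo() {                       // 4 threads x 5000 = 20000
        counter = 0;
        final ALock lock = new ALock(8);
        Thread[] ts = new Thread[4];
        for (int i = 0; i < 4; i++) {
            ts[i] = new Thread(() -> {
                for (int k = 0; k < 5000; k++) {
                    lock.lock();
                    counter++;                   // critical section
                    lock.unlock();
                }
            });
        }
        for (Thread t : ts) t.start();
        for (Thread t : ts) {
            try { t.join(); } catch (InterruptedException e) { }
        }
        return counter;
    }

    public static void main(String[] args) {
        System.out.println(runDemo()); // 20000
    }
}
```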
Term
What are the cons of using the Anderson lock? |
|
Definition
- Only better than Backoff when there are multiple threads.
- Requires a lot of pre-allocated memory.
- Doesn't work well with uncached architectures.
|
|
|
Term
What are the pros of the Anderson lock? |
|
Definition
First-come first-served and is easy to implement. |
|
|
Term
Describe the CLH lock. |
|
Definition
Implements a "virtual" linked list of QNodes in order to reuse memory. Same standing-in-line analogy as Anderson: constantly check the status of the person in front of you to know when it's your turn. To lock, set myNode.locked = true and do getAndSet(myNode) on the tail, which returns your predecessor's node; spin while the predecessor's node is locked. To unlock, set myNode.locked = false and adopt the predecessor's node as your own for next time (recycling memory). |
|
|
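A Java sketch of the CLH lock described above (illustrative names; ThreadLocal holds each thread's node and predecessor):

```java
import java.util.concurrent.atomic.AtomicReference;

public class CLHLock {
    static class QNode { volatile boolean locked; }

    private final AtomicReference<QNode> tail = new AtomicReference<>(new QNode());
    private final ThreadLocal<QNode> myNode = ThreadLocal.withInitial(QNode::new);
    private final ThreadLocal<QNode> myPred = new ThreadLocal<>();

    public void lock() {
        QNode node = myNode.get();
        node.locked = true;                    // announce that I want the lock
        QNode pred = tail.getAndSet(node);     // splice into the virtual list
        myPred.set(pred);
        while (pred.locked) { }                // spin on my predecessor's flag
    }

    public void unlock() {
        QNode node = myNode.get();
        node.locked = false;                   // green light for my successor
        myNode.set(myPred.get());              // recycle the predecessor's node
    }

    static int counter = 0;

    static int runDemo() {                     // 4 threads x 5000 = 20000
        counter = 0;
        final CLHLock lock = new CLHLock();
        Thread[] ts = new Thread[4];
        for (int i = 0; i < 4; i++) {
            ts[i] = new Thread(() -> {
                for (int k = 0; k < 5000; k++) {
                    lock.lock();
                    counter++;                 // critical section
                    lock.unlock();
                }
            });
        }
        for (Thread t : ts) t.start();
        for (Thread t : ts) {
            try { t.join(); } catch (InterruptedException e) { }
        }
        return counter;
    }

    public static void main(String[] args) {
        System.out.println(runDemo()); // 20000
    }
}
```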
Term
What are the cons of the CLH lock? |
|
Definition
- Doesn't work for uncached NUMA architectures.
|
|
|
Term
What are the pros of the CLH lock? |
|
Definition
- First-come first-served.
- Uses small, constant-sized memory space.
|
|
|
Term
Describe the MCS lock. |
|
Definition
Uses an explicit LinkedList. Think: standing in line but instead of constantly checking the status of the person in front of you, hang out until that person nudges you. |
|
|
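A Java sketch of the MCS lock described above (illustrative names): each thread spins on its own node until the predecessor "nudges" it by clearing the locked flag.

```java
import java.util.concurrent.atomic.AtomicReference;

public class MCSLock {
    static class QNode {
        volatile boolean locked;
        volatile QNode next;
    }

    private final AtomicReference<QNode> tail = new AtomicReference<>(null);
    private final ThreadLocal<QNode> myNode = ThreadLocal.withInitial(QNode::new);

    public void lock() {
        QNode node = myNode.get();
        QNode pred = tail.getAndSet(node);     // join the explicit list
        if (pred != null) {
            node.locked = true;
            pred.next = node;                  // tell predecessor where to nudge
            while (node.locked) { }            // wait for the nudge (local spin)
        }
    }

    public void unlock() {
        QNode node = myNode.get();
        if (node.next == null) {
            // no visible successor: try to swing tail back to null
            if (tail.compareAndSet(node, null)) return;
            while (node.next == null) { }      // successor is mid-enqueue; wait
        }
        node.next.locked = false;              // nudge the successor
        node.next = null;                      // clean up for reuse
    }

    static int counter = 0;

    static int runDemo() {                     // 4 threads x 5000 = 20000
        counter = 0;
        final MCSLock lock = new MCSLock();
        Thread[] ts = new Thread[4];
        for (int i = 0; i < 4; i++) {
            ts[i] = new Thread(() -> {
                for (int k = 0; k < 5000; k++) {
                    lock.lock();
                    counter++;                 // critical section
                    lock.unlock();
                }
            });
        }
        for (Thread t : ts) t.start();
        for (Thread t : ts) {
            try { t.join(); } catch (InterruptedException e) { }
        }
        return counter;
    }

    public static void main(String[] args) {
        System.out.println(runDemo()); // 20000
    }
}
```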
Term
What are the cons of the MCS lock? |
|
Definition
- Relying on the predecessor's nudge could be bad in the case that the predecessor's thread dies - waiting for a nudge that'll never come.
|
|
|
Term
What are the pros of the MCS lock? |
|
Definition
- First-come first-served.
- Spin on local memory to reduce writes due to dirtied memory.
|
|
|
Term
Describe coarse-grained synchronization. |
|
Definition
When a thread wants to use all or part of an object [node on a tree], lock the whole object [the tree]. |
|
|
Term
Describe fine-grained synchronization. |
|
Definition
Rather than locking the whole list, use hand-over-hand locking to traverse and make changes. |
|
|
Term
Describe optimistic synchronization. |
|
Definition
Traverse the list without locking in search of the node(s) of interest. Once found, lock the predecessor and current nodes, then do a 2-step validation: 1) make sure the predecessor is still reachable from the head, 2) make sure the two locked nodes still point to each other. If validation fails (the list was altered), release the locks and start over. |
|
|
Term
Describe lazy synchronization. |
|
Definition
Add a mark bit to each node indicating whether it has been logically deleted. Remove = mark the node (logical delete), then unlink it (physical delete). Validation no longer requires re-traversing the list as in optimistic synchronization: just check that neither locked node is marked and that the predecessor still points to the current node. |
|
|
Term
Describe lock-free synchronization. |
|
Definition
Avoid locks entirely: make changes with atomic compareAndSet (CAS) operations and retry on failure. In the list, each node's next pointer carries a mark bit (AtomicMarkableReference); remove logically marks the node, then physically unlinks it with CAS. Guarantees that some thread always makes progress, even if others are delayed. |
|
|
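As a concrete example of the lock-free approach, a minimal Treiber stack sketch in Java (illustrative structure, not the list algorithm from the cards above): every update is a read-compute-CAS loop, so no thread ever holds a lock, and a failed CAS means some other thread's CAS succeeded.

```java
import java.util.concurrent.atomic.AtomicReference;

public class LockFreeStack<T> {
    private static class Node<T> {
        final T value;
        Node<T> next;
        Node(T value) { this.value = value; }
    }

    private final AtomicReference<Node<T>> top = new AtomicReference<>();

    public void push(T value) {
        Node<T> node = new Node<>(value);
        while (true) {
            Node<T> old = top.get();       // read the current top
            node.next = old;               // prepare the new state
            if (top.compareAndSet(old, node)) return; // CAS it in; retry on interference
        }
    }

    public T pop() {
        while (true) {
            Node<T> old = top.get();
            if (old == null) return null;  // empty stack
            if (top.compareAndSet(old, old.next)) return old.value;
        }
    }

    public static void main(String[] args) {
        LockFreeStack<Integer> s = new LockFreeStack<>();
        s.push(1); s.push(2); s.push(3);
        System.out.println(s.pop()); // 3
        System.out.println(s.pop()); // 2
    }
}
```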