Term
Multiprogramming |
|
Definition
The management of multiple processes within a uniprocessor system |
|
|
Term
Multiprocessing |
|
Definition
The management of multiple processes within a multiprocessor |
|
|
Term
Distributed processing |
|
Definition
The management of multiple processes executing on multiple, distributed computer systems.
E.g., clusters |
|
|
Term
Concurrency design issues |
|
Definition
Managing the interaction of all of these processes
encompasses a host of design issues, including
• communication among processes,
• sharing of and competing for resources (such as memory, files, and I/O access),
• synchronization of the activities of multiple processes, and
• allocation of processor time to processes.
|
|
|
Term
Difficulties of Concurrency |
|
Definition
•Sharing of global resources
•Optimally managing the allocation of resources
•Difficult to locate programming errors as results are not deterministic and reproducible.
|
|
|
Term
Sharing time |
|
Definition
Multiprogramming was invented to allow processing time to be dynamically shared among a number of active applications. |
|
|
Term
Extension of modular design |
|
Definition
As an extension of the principles of modular design and structured programming, some applications can be effectively programmed as a set of concurrent processes. |
|
|
Term
Operating system structure
|
|
Definition
The same structuring advantages apply to systems programs: operating systems are themselves often implemented as a set of processes or threads. |
|
|
Term
Race condition |
|
Definition
Multiple processes or threads read and write data items
–They do so in such a way that the final result depends on the order of execution of the processes.
•The output depends on who finishes the race last (see the sketch below).
|
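A minimal sketch of a race condition in C, assuming POSIX threads (the card names no particular API; the counter, thread count, and iteration count are illustrative):

#include <pthread.h>
#include <stdio.h>

static long counter = 0;                  /* shared data item */

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 1000000; i++)
        counter++;                        /* unprotected read-modify-write */
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    /* Often prints less than 2000000: increments are lost depending on how
     * the two threads' load/add/store sequences interleave. */
    printf("counter = %ld\n", counter);
    return 0;
}

Build with something like cc -pthread; the varying output from run to run is the non-deterministic, hard-to-reproduce behaviour the "Difficulties of Concurrency" card describes.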
|
|
Term
|
Definition
•Only one process at a time is allowed in the critical section for a resource
•A process that halts in its noncritical section must do so without interfering with other processes
•No deadlock or starvation |
|
|
Term
•What design and management issues are raised by the existence of concurrency?
The 4 OS issues
|
|
Definition
•The OS must
–Keep track of various processes
–Allocate and de-allocate resources
–Protect the data and resources against interference by other processes.
–Ensure that the results a process produces are independent of its relative processing speed
|
|
|
Term
Processes unaware of each other:
|
|
Definition
• Independent processes that are not intended to work together.
• E.g., multiprogramming of multiple independent processes.
• Although the processes are not working together, the OS needs to be concerned about competition for resources.
• E.g., two independent applications may both want to access the same disk, file, or printer. |
|
|
Term
Processes indirectly aware of each other:
|
|
Definition
• Processes that are not necessarily aware of each other by their respective process IDs but that share access to some object, such as an I/O buffer.
Such processes exhibit cooperation in sharing the common object |
|
|
Term
Processes directly aware of each other:
|
|
Definition
• Processes that are able to communicate with each other by process ID and that are designed to work jointly on some activity.
Again, such processes exhibit cooperation. |
|
|
Term
Critical section |
|
Definition
A section of code within a process that requires access to shared resources and that must not be executed while another process is in a corresponding section of code. |
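A minimal sketch of the idea, assuming POSIX threads (not named on the card): the lines between lock and unlock form the critical section for the shared balance, and the mutex keeps two threads from executing it at the same time.

#include <pthread.h>
#include <stdio.h>

static long balance = 0;                               /* shared resource */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *deposit(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);                     /* enter critical section */
        balance += 1;                                  /* access the shared resource */
        pthread_mutex_unlock(&lock);                   /* leave critical section */
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, deposit, NULL);
    pthread_create(&t2, NULL, deposit, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("balance = %ld\n", balance);                /* reliably 200000 */
    return 0;
}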
|
|
Term
Critical resource |
|
Definition
During the course of execution, each process will be sending commands to the I/O device, receiving status information, sending data, and/or receiving data. We will refer to such a resource as a ________. |
|
|
Term
Deadlock |
|
Definition
•A set of processes is deadlocked when each process in the set is blocked awaiting an event that can only be triggered by another blocked process in the set
–Typically involves processes competing for the same set of resources
•No efficient solution
•In the two-process case, neither will release the resource it already owns until it has acquired the other resource and performed the function requiring both resources |
|
|
Term
Starvation |
|
Definition
a situation in which a runnable process is overlooked indefinitely by the scheduler; although it is able to proceed, it is never chosen.
The OS may grant access to resources to a number of processes while neglecting another |
|
|
Term
Need for mutual exclusion.
|
|
Definition
•Suppose two or more processes require access to a single nonsharable resource, such as a printer.
• During the course of execution, each process will be sending commands to the I/O device, receiving status information, sending data, and/or receiving data. |
|
|
Term
6 requirements for mutual exclusion |
|
Definition
1. Mutual exclusion must be enforced: only one process at a time is allowed into its critical section, among all processes that have critical sections for the same resource or shared object.
2. A process that halts in its noncritical section must do so without interfering with other processes.
3. It must not be possible for a process requiring access to a critical section to be delayed indefinitely: no deadlock or starvation.
4. When no process is in a critical section, any process that requests entry to its critical section must be permitted to enter without delay.
5. No assumptions are made about relative process speeds or the number of processes.
6. A process remains inside its critical section for a finite time only.
|
|
|
Term
Interrupt disabling |
|
Definition
–A process runs until it invokes an operating system service or until it is interrupted
–Disabling interrupts guarantees mutual exclusion
–Will not work in a multiprocessor architecture |
|
|
Term
Semaphore |
|
Definition
–An integer value used for signalling among processes.
•Only three operations may be performed on a semaphore, all of which are atomic:
–initialize,
–decrement (semWait), and
–increment (semSignal). |
|
|
Term
3 operations that are defined on semaphores? |
|
Definition
1. A semaphore may be initialized to a nonnegative integer value.
2. The semWait operation decrements the semaphore value.
•If the value becomes negative, then the process executing the semWait is blocked.
• Otherwise, the process continues execution.
3. The semSignal operation increments the semaphore value.
• If the resulting value is less than or equal to zero, then a process blocked by a semWait operation, if any, is unblocked (see the sketch below).
|
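A sketch of these three operations with the semantics described above (the value may go negative, and its magnitude then counts the waiters). The pthread mutex, condition variable, and the wakeups bookkeeping are implementation choices for illustration, not part of the definition.

#include <pthread.h>

typedef struct {
    int value;                     /* may go negative: -value = number of waiters */
    int wakeups;                   /* wakeups handed out by semSignal, not yet consumed */
    pthread_mutex_t lock;
    pthread_cond_t  queue;         /* stands in for the semaphore's waiting queue */
} sem_s;

void semInit(sem_s *s, int initial)        /* 1. initialize to a nonnegative value */
{
    s->value = initial;
    s->wakeups = 0;
    pthread_mutex_init(&s->lock, NULL);
    pthread_cond_init(&s->queue, NULL);
}

void semWait(sem_s *s)                     /* 2. decrement; block if the value goes negative */
{
    pthread_mutex_lock(&s->lock);
    s->value--;
    if (s->value < 0) {
        while (s->wakeups == 0)            /* wait until a semSignal hands out a wakeup */
            pthread_cond_wait(&s->queue, &s->lock);
        s->wakeups--;
    }
    pthread_mutex_unlock(&s->lock);
}

void semSignal(sem_s *s)                   /* 3. increment; unblock one waiter if value <= 0 */
{
    pthread_mutex_lock(&s->lock);
    s->value++;
    if (s->value <= 0) {
        s->wakeups++;
        pthread_cond_signal(&s->queue);
    }
    pthread_mutex_unlock(&s->lock);
}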
|
|
Term
Semaphore queue |
|
Definition
used to hold processes waiting on the semaphore |
|
|
Term
Binary semaphore |
|
Definition
A semaphore that takes on only the values 0 and 1. |
|
|
Term
Mutex |
|
Definition
Similar to a binary semaphore.
•A key difference between the two is that the process that locks the mutex (sets the value to zero) must be the one to unlock it (sets the value to 1).
• In contrast, it is possible for one process to lock a binary semaphore and for another to unlock it (see the sketch below).
|
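A minimal sketch of the ownership difference using POSIX primitives (an illustrative choice; the card names no API): only the thread that locked the mutex releases it, while a semaphore initialized to 1 may legitimately be posted by a different thread.

#include <pthread.h>
#include <semaphore.h>

pthread_mutex_t mtx = PTHREAD_MUTEX_INITIALIZER;
sem_t bin;                              /* used as a binary semaphore */

void *releaser(void *arg)
{
    (void)arg;
    sem_post(&bin);                     /* legal: another thread may release it */
    /* pthread_mutex_unlock(&mtx) here would be an error: this thread does
     * not own the mutex, so only the locking thread may unlock it. */
    return NULL;
}

int main(void)
{
    pthread_t t;
    sem_init(&bin, 0, 1);               /* value 1: "locked" by taking it to 0 */

    pthread_mutex_lock(&mtx);           /* this thread owns the mutex ...   */
    sem_wait(&bin);                     /* ... and has taken the semaphore  */

    pthread_create(&t, NULL, releaser, NULL);
    pthread_join(t, NULL);              /* the other thread posted the semaphore */

    pthread_mutex_unlock(&mtx);         /* the owner must release the mutex itself */
    sem_destroy(&bin);
    return 0;
}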
|
|
Term
Fatal region |
|
Definition
The gray-shaded area can be referred to as a __________, applies to paths 3 and 4.
If an execution path enters this __________, then deadlock is inevitable.
Only occurs if joint progress of the two processes creates a path that enters ______.
|
|
|
Term
Reusable resource |
|
Definition
A resource that can be safely used by only one process at a time and is not depleted by that use.
•Such as:
–Processors, I/O channels, main and secondary memory, devices, and data structures such as files, databases, and semaphores
•Deadlock occurs if each process holds one resource and requests the other (see the sketch below)
|
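A minimal sketch of that last point, with two pthread mutexes standing in for the reusable resources (an illustrative choice): each thread holds one and requests the other, so with unlucky timing neither can proceed.

#include <pthread.h>
#include <stdio.h>

pthread_mutex_t resA = PTHREAD_MUTEX_INITIALIZER;   /* reusable resource A */
pthread_mutex_t resB = PTHREAD_MUTEX_INITIALIZER;   /* reusable resource B */

void *p1(void *arg)                 /* acquires A, then requests B */
{
    (void)arg;
    pthread_mutex_lock(&resA);
    pthread_mutex_lock(&resB);      /* blocks forever if p2 already holds B */
    pthread_mutex_unlock(&resB);
    pthread_mutex_unlock(&resA);
    return NULL;
}

void *p2(void *arg)                 /* acquires B, then requests A: the opposite order */
{
    (void)arg;
    pthread_mutex_lock(&resB);
    pthread_mutex_lock(&resA);
    pthread_mutex_unlock(&resA);
    pthread_mutex_unlock(&resB);
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, p1, NULL);
    pthread_create(&t2, NULL, p2, NULL);
    pthread_join(t1, NULL);         /* may never return: a circular wait has formed */
    pthread_join(t2, NULL);
    puts("no deadlock on this run");
    return 0;
}

Acquiring the two resources in the same order in both threads would remove the circular wait and, with it, the possibility of deadlock.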
|
|
Term
Consumable resource |
|
Definition
A resource that can be created (produced) and destroyed (consumed).
•Such as interrupts, signals, messages, and information in I/O buffers
•Deadlock may occur if a Receive message is blocking
•May take a rare combination of events to cause deadlock
|
|
|
Term
Resource Allocation Graphs |
|
Definition
•Directed graph that depicts a state of the system of resources and processes
A graph edge directed from a process to a resource indicates a resource that has been requested by the process but not yet granted.
Within a resource node, a dot is shown for each instance of that resource.
A graph edge directed from a reusable resource node dot to a process indicates a request that has been granted
|
|
|
Term
4 conditions for deadlock |
|
Definition
•Mutual exclusion
–Only one process may use a resource at a time
•Hold-and-wait
–A process may hold allocated resources while awaiting assignment of others
•No pre-emption
–No resource can be forcibly removed from a process holding it
•Circular wait
–A closed chain of processes exists, such that each process holds at least one resource needed by the next process in the chain
|
|
|
Term
Two approaches to deadlock avoidance |
|
Definition
1. Process initiation denial - do not start a process if its demands might lead to deadlock
2. Resource allocation denial - do not grant an incremental resource request to a process if this allocation might lead to deadlock |
|
|
Term
Deadlock prevention |
|
Definition
The strategy of simply designing the system in such a way that the possibility of deadlock is excluded |
|
|
Term
Deadlock avoidance |
|
Definition
•A decision is made dynamically whether the current resource allocation request will, if granted, potentially lead to a deadlock
•Requires knowledge of future process requests
|
|
|
Term
Process initiation denial |
|
Definition
Do not start a process if its demands might lead to deadlock.
•A process is only started if the maximum claim of all current processes plus those of the new process can be met.
•Not optimal:
–Assumes the worst: that all processes will make their maximum claims together (see the admission-test sketch below).
|
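A minimal sketch of that admission test (all names and resource counts are illustrative): the new process is started only if, for every resource, the total amount can cover the summed maximum claims of the current processes plus the newcomer's claim.

#include <stdbool.h>
#include <stdio.h>

/* Admit the candidate only if the worst case (all maximum claims made at
 * once, including the candidate's) can still be satisfied. */
bool can_start(const int total[], const int claim_sum[],
               const int new_claim[], int nres)
{
    for (int r = 0; r < nres; r++)
        if (claim_sum[r] + new_claim[r] > total[r])
            return false;               /* the worst case could not be met */
    return true;
}

int main(void)
{
    int total[]     = { 10, 5, 7 };     /* total units of each resource type */
    int claim_sum[] = {  7, 4, 5 };     /* summed maximum claims of current processes */
    int new_claim[] = {  2, 1, 2 };     /* candidate process's maximum claim */
    printf("%s\n", can_start(total, claim_sum, new_claim, 3) ? "start" : "deny");
    return 0;
}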
|
|
Term
Resource Allocation Denial
|
|
Definition
–Do not grant an incremental resource request to a process if this allocation might lead to deadlock
•Referred to as the banker’s algorithm
•Consider a system with a fixed number of resources
–State of the system is the current allocation of resources to processes
–Safe state is one where there is at least one sequence that does not result in deadlock
–Unsafe state is a state that is not safe |
|
|
Term
Safe state |
|
Definition
A state in which there is at least one sequence of resource allocations that does not result in deadlock.
•Example: a system consisting of four processes and three resources.
•Allocations are made to processes (see the safety-check sketch below)
|
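A sketch of the safety test behind resource allocation denial (the banker's algorithm): a state is safe if some ordering lets every process finish using the available resources plus whatever finishing processes return. The 4x3 sizes echo the four-process, three-resource example above, but the matrices themselves are illustrative, not the textbook's figures.

#include <stdbool.h>
#include <stdio.h>

#define NPROC 4
#define NRES  3

bool is_safe(int alloc[NPROC][NRES], int claim[NPROC][NRES], int avail[NRES])
{
    int  work[NRES];                          /* resources currently available */
    bool finished[NPROC] = { false };

    for (int r = 0; r < NRES; r++)
        work[r] = avail[r];

    for (int done = 0; done < NPROC; ) {
        bool progress = false;
        for (int p = 0; p < NPROC; p++) {
            if (finished[p])
                continue;
            bool can_run = true;              /* can p's remaining need be met now? */
            for (int r = 0; r < NRES; r++)
                if (claim[p][r] - alloc[p][r] > work[r]) {
                    can_run = false;
                    break;
                }
            if (can_run) {                    /* p can finish and return its allocation */
                for (int r = 0; r < NRES; r++)
                    work[r] += alloc[p][r];
                finished[p] = true;
                progress = true;
                done++;
            }
        }
        if (!progress)
            return false;                     /* no process can finish: unsafe state */
    }
    return true;                              /* a full completion sequence exists */
}

int main(void)
{
    int claim[NPROC][NRES] = { {3,2,2}, {6,1,3}, {3,1,4}, {4,2,2} };  /* maximum needs */
    int alloc[NPROC][NRES] = { {1,0,0}, {5,1,1}, {2,1,1}, {0,0,2} };  /* current allocation */
    int avail[NRES]        = { 1, 1, 2 };
    puts(is_safe(alloc, claim, avail) ? "safe" : "unsafe");           /* prints "safe" */
    return 0;
}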
|
|
Term
Deadlock detection |
|
Definition
–Resource requests are granted whenever possible.
–Regularly check for deadlock
|
|
|
Term
Deadlock recovery approaches |
|
Definition
•Abort all deadlocked processes
•Back up each deadlocked process to some previously defined checkpoint, and restart all processes
–Risk of deadlock recurring
•Successively abort deadlocked processes until deadlock no longer exists
•Successively preempt resources until deadlock no longer exists |
|
|
Term
Resource classes in an integrated deadlock strategy |
|
Definition
1. Swappable space - blocks of memory on secondary storage for use in swapping processes
2. Process resources - assignable devices, such as tape drives, and files
3. Main memory - assignable to processes in pages or segments
4. Internal resources - such as I/O channels |
|
|
Term
Relocation |
|
Definition
•The programmer does not know where the program will be placed in memory when it is executed,
–it may be swapped to disk and return to main memory at a different location (relocated)
•Memory references must be translated to the actual physical memory address
|
|
|
Term
Protection |
|
Definition
•Processes should not be able to reference memory locations in another process without permission
•Impossible to check absolute addresses at compile time
•Must be checked at run time
|
|
|
Term
Sharing |
|
Definition
•Allow several processes to access the same portion of memory
•Better to allow each process access to the same copy of the program rather than have its own separate copy
|
|
|
Term
Logical organization |
|
Definition
•Memory is organized linearly (usually)
•Programs are written in modules
–Modules can be written and compiled independently
•Different degrees of protection given to modules (read-only, execute-only)
•Share modules among processes
•Segmentation helps here
|
|
|
Term
Physical organization |
|
Definition
•Cannot leave the programmer with the responsibility to manage memory
•Memory available for a program plus its data may be insufficient
–Overlaying allows various modules to be assigned the same region of memory but is time consuming to program
•Programmer does not know how much space will be available
|
|
|
Term
Fixed partitioning |
|
Definition
•Equal-size partitions
–Any process whose size is less than or equal to the partition size can be loaded into an available partition
•The operating system can swap a process out of a partition
–If none are in a ready or running state
|
|
|
Term
Internal fragmentation |
|
Definition
When there is wasted space internal to a partition due to the fact that the block of data loaded is smaller than the partition |
|
|
Term
Placement with fixed partitioning |
|
Definition
•Equal-size
–Placement is trivial (no options)
•Unequal-size
–Can assign each process to the smallest partition within which it will fit
–Queue for each partition
–Processes are assigned in such a way as to minimize wasted memory within a partition
|
|
|
Term
Dynamic partitioning |
|
Definition
•Partitions are of variable length and number
•Process is allocated exactly as much memory as required
|
|
|
Term
External fragmentation |
|
Definition
As time goes on, memory becomes more and more fragmented, and memory utilization declines.
•Memory external to all processes is fragmented
•Can be resolved using compaction |
|
|
Term
Compaction |
|
Definition
–OS moves processes so that they are contiguous
–Time consuming and wastes CPU time |
|
|
Term
Best-fit |
|
Definition
–Chooses the block that is closest in size to the request
–Worst performer overall
–Since the smallest block is found for the process, the smallest amount of fragmentation is left
–Memory compaction must be done more often |
|
|
Term
First-fit |
|
Definition
–Scans memory from the beginning and chooses the first available block that is large enough
–Fastest
–May have many processes loaded in the front end of memory that must be searched over when trying to find a free block (see the sketch below) |
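A minimal sketch of both placement policies over a list of free-block sizes (the hole sizes and the request are illustrative): first-fit stops at the first block large enough, best-fit scans everything for the tightest fit.

#include <stdio.h>

int first_fit(const int hole[], int n, int request)
{
    for (int i = 0; i < n; i++)
        if (hole[i] >= request)
            return i;                   /* first block that is large enough */
    return -1;                          /* no block fits */
}

int best_fit(const int hole[], int n, int request)
{
    int best = -1;
    for (int i = 0; i < n; i++) {       /* must scan the whole list */
        if (hole[i] < request)
            continue;                   /* too small for the request */
        if (best == -1)
            best = i;
        else if (hole[i] < hole[best])
            best = i;                   /* tighter fit found */
    }
    return best;
}

int main(void)
{
    int holes[] = { 300, 60, 120, 200 };            /* free-block sizes, e.g. in KB */
    printf("first-fit: %d  best-fit: %d\n",
           first_fit(holes, 4, 100),                /* index 0: the 300 KB hole */
           best_fit(holes, 4, 100));                /* index 2: the 120 KB hole */
    return 0;
}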
|
|
Term
Logical address |
|
Definition
–Reference to a memory location independent of the current assignment of data to memory. |
|
|
Term
Physical address |
|
Definition
The absolute address or actual location in main memory |
|
|
Term
Relative address |
|
Definition
–Address expressed as a location relative to some known point. |
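A minimal sketch of how the three kinds of address relate at run time (the base, limit, and address values are illustrative): a relative address is checked against the process's bounds and added to a base register holding its current load point to give the physical address.

#include <stdio.h>

/* Returns the physical (absolute) address for a relative address, or ~0u
 * if the reference falls outside the process's allocated region. */
unsigned relocate(unsigned base, unsigned limit, unsigned relative)
{
    if (relative >= limit)
        return ~0u;                     /* protection violation */
    return base + relative;             /* actual location in main memory */
}

int main(void)
{
    /* Process currently loaded at 0x4000 with a 0x1000-byte region; if it is
     * swapped back in at a different base, only the register value changes. */
    printf("0x%x\n", relocate(0x4000, 0x1000, 0x02A0));   /* prints 0x42a0 */
    return 0;
}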
|
|
Term
Paging |
|
Definition
•Partition memory into small, equal, fixed-size chunks (frames) and divide each process into chunks of the same size (pages)
|
|
|
Term
Page table |
|
Definition
•Operating system maintains a _________ for each process
–Contains the frame location for each page in the process
–Memory addresses consist of a page number and an offset within the page (see the translation sketch below)
|
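A minimal sketch of the translation the card describes (the 1 KB page size and the tiny table are illustrative): the logical address is split into a page number and an offset, and the page table supplies the frame that holds that page.

#include <stdio.h>

#define PAGE_SIZE 1024u                          /* bytes per page and per frame */

/* Returns the physical address, or ~0u if the page number is out of range. */
unsigned translate(unsigned logical, const unsigned frame_of_page[], unsigned npages)
{
    unsigned page   = logical / PAGE_SIZE;       /* page number */
    unsigned offset = logical % PAGE_SIZE;       /* offset within the page */
    if (page >= npages)
        return ~0u;
    return frame_of_page[page] * PAGE_SIZE + offset;
}

int main(void)
{
    unsigned page_table[] = { 5, 2, 7 };         /* page i resides in frame page_table[i] */
    /* Logical address 1500 = page 1, offset 476 -> frame 2 -> 2*1024 + 476 = 2524. */
    printf("%u\n", translate(1500, page_table, 3));
    return 0;
}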
|
|
Term
Segmentation |
|
Definition
•A program can be subdivided into segments
–Segments may vary in length
–There is a maximum segment length
•Addressing consist of two parts
–a segment number and
–an offset
•Segmentation is similar to dynamic partitioning (see the translation sketch below)
|
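A minimal sketch of the two-part addressing described above (the segment table contents are illustrative): the segment number selects a base and a length, the offset is checked against the length, and base plus offset gives the physical address.

#include <stdio.h>

typedef struct {
    unsigned base;                      /* where the segment starts in memory */
    unsigned length;                    /* segment length, used for the bounds check */
} segment_t;

unsigned seg_translate(const segment_t table[], unsigned nsegs,
                       unsigned seg, unsigned offset)
{
    if (seg >= nsegs)
        return ~0u;                     /* no such segment */
    if (offset >= table[seg].length)
        return ~0u;                     /* offset beyond the segment length */
    return table[seg].base + offset;
}

int main(void)
{
    segment_t table[] = { { 0x1000, 0x0400 }, { 0x8000, 0x1000 } };
    printf("0x%x\n", seg_translate(table, 2, 1, 0x0123));   /* prints 0x8123 */
    return 0;
}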
|
|
Term
Buffer overflow |
|
Definition
A condition at an interface under which more input can be placed into a buffer or data holding area than the capacity allocated, overwriting other information. Attackers exploit such a condition to crash a system or to insert specially crafted code that allows them to gain control of the system.
Can occur as a result of a programming error when a process attempts to store data beyond the limits of a fixed-size buffer and consequently overwrites adjacent memory locations (see the sketch below) |
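A minimal sketch of that programming error (buffer size, string, and the neighbouring variable are illustrative): the unchecked copy writes past the end of the fixed-size buffer, which is undefined behaviour and may corrupt adjacent data.

#include <stdio.h>
#include <string.h>

int main(void)
{
    char buffer[8];                       /* fixed-size buffer */
    int  adjacent = 42;                   /* may sit next to buffer on the stack */

    /* strcpy performs no bounds checking, so a 20-character string overruns
     * the 8-byte buffer and overwrites whatever lies beyond it. */
    strcpy(buffer, "AAAAAAAAAAAAAAAAAAAA");

    printf("adjacent = %d\n", adjacent);  /* may no longer print 42, or may crash */
    return 0;
}

Exploits replace the overflowing bytes with carefully chosen data, which is why the card frames this as a security issue as well as a programming error.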
|
|
Term
Two characteristics of paging and segmentation are the keys to this breakthrough in memory management: |
|
Definition
1) Memory references are logical addresses dynamically translated into physical addresses at run time
–A process may be swapped in and out of main memory occupying different regions at different times during execution
2) A process may be broken up into pieces that do not need to be located contiguously in main memory
|
|
|