Term
Compare: Unix signal with Signaling semaphore |
|
Definition
A UNIX signal is used to notify a process of an event
Unlike a signaling semaphore, there is no need to wait for the signal; the process just registers a handler |
|
|
Term
Define: Binary Semaphore |
Definition
Semaphore with an integer value of 0/1 or TRUE/FALSE. Initialized to 1 for mutex applications or 0 for signalling applications |
|
|
Term
Define: Busy Waiting
Why is it undesirable?
How does an OS avoid busy waiting? |
|
Definition
Polling; a spinning technique where a process repeatedly checks whether a condition is true. Undesirable because the CPU is locked into that loop and can't perform other actions. To avoid busy waiting, the OS blocks the process, moving it from active to blocked when a wait is needed |
|
|
Term
Define: Condition Variable (Synchronization)
What does a CV wait() do?
Why is a CV not a semaphore? |
|
Definition
A synchronization primitive upon which two operations are defined (wait and signal)
Wait() blocks the calling thread and releases the mutex on the monitor; the monitor's mutex is what ensures only 1 thread can be in a critical section at a time
A CV is not a semaphore because it has no associated "value" |
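A minimal sketch of the wait/signal pattern using POSIX condition variables (pthread_cond_wait really does atomically release the mutex while blocking); the names ready, consumer_wait and producer_signal are illustrative, not from the cards:

#include <pthread.h>

pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t cv = PTHREAD_COND_INITIALIZER;
int ready = 0;                        /* the condition itself; the CV has no value */

void consumer_wait(void) {
    pthread_mutex_lock(&m);
    while (!ready)                    /* re-test: a CV remembers nothing */
        pthread_cond_wait(&cv, &m);   /* releases m, blocks, re-acquires m on wakeup */
    /* ... use the shared state under m ... */
    pthread_mutex_unlock(&m);
}

void producer_signal(void) {
    pthread_mutex_lock(&m);
    ready = 1;
    pthread_cond_signal(&cv);         /* lost if no thread is waiting - hence the flag */
    pthread_mutex_unlock(&m);
}

Note how the flag ready, not the CV, carries the state; this is exactly the "no associated value" point above.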
|
|
Term
Define: Counting (General) Semaphore |
|
Definition
Semaphore with an integer value ranging between 0 and positive upper bound. Initial value might represent number of units of an available corresponding resource. |
|
|
Term
Define: Critical Sections |
|
Definition
Sections of potentially concurrent code that access potentially shared data |
|
|
Term
Define: Interprocess Communication (IPC) |
Definition
Mechanisms by which processes may communicate with one another |
|
|
Term
Define: Interrupt |
Definition
A signal delivered to a core that causes it (via hardware) to save its state and begin executing at the given address |
|
|
Term
Define: Interrupt Handler |
|
Definition
A routine, part of the OS.
When an interrupt occurs, control is transferred to the corresponding interrupt handler, which takes some action in response to the condition that caused the interrupt |
|
|
Term
Define: Kernel |
Definition
Core part of the OS which provides the main programming abstractions, built directly on the hardware |
|
|
Term
Define: Kernel level threads |
|
Definition
Threads that are created using system calls
Operate independently within address space, can block each other
More expensive. |
|
|
Term
Define: Mutual Exclusion |
Definition
A mechanism by which it is ensured that only one of a group of concurrent processes or threads will execute certain code at a time. |
|
|
Term
Define: Non-blocking receive |
|
Definition
Allows a process to continue executing after calling the receive function (in IPC) |
|
|
Term
Define: Process |
Definition
The most common programming abstraction provided by the OS to enable concurrency. An address space containing a single thread of execution. |
|
|
Term
Define: Race Condition |
Definition
Two concurrently running programs can access and modify the same variable X; different results may occur depending on the order of access to X |
|
|
Term
|
Definition
Synchronization variable supporting two atomic operations (P/wait and V/signal) |
|
|
Term
Define: Semaphore Spin Lock
What are the pros and cons of a Semaphore spin lock? |
|
Definition
Process spins while waiting for lock.
Pros: useful in parallel programming
Cons: potential starvation/indefinite postponement, low efficiency |
|
|
Term
Define: Starvation (Indefinite Postponement) |
Definition
When a process is never given a time slice |
|
|
Term
Define: User Level Threads |
|
Definition
Created and manipulated using set of library routines.
OS does not know they exist, cannot be scheduled independently.
A system call by one blocks entire set.
Cheaper |
|
|
Term
Define: Virtual Memory |
Definition
A memory management technique that divides a process image into pieces and loads the pieces into any free area in memory as they are required by the running process |
|
|
Term
Describe: Bounded capacity (IPC) |
|
Definition
Link has finite capacity; when the buffer is full, the sender must wait |
|
|
Term
Describe: Unbounded capacity (IPC) |
|
Definition
Queue has (potentially) infinite length, so the sender never waits |
|
Term
Describe: Zero capacity (IPC) |
|
Definition
Link cannot have messages waiting
Sender must block until message received
Fully synchronous |
|
|
Term
Explain how CPU support of mutual exclusion works |
|
Definition
The CPU can read the contents of a variable, test it, and set it to something else, all in one atomic execution cycle (e.g., test-and-set). |
|
|
Term
Explain the "Bounded Buffer" (Producer-Consumer) problem |
|
Definition
Two processes (producer and consumer) share a common fixed-size buffer used as a queue.
The producer's job is to generate a piece of data, put it into the buffer, and start again.
The consumer consumes the data one piece at a time. Problem: make sure the producer will not try to add data to the buffer if it is full, and the consumer won't try to remove data from an empty buffer. |
|
|
Term
Explain the difference between a condition variable and a binary semaphore wait() |
|
Definition
Binary Semaphore: waits only if the semaphore has the value 0
Condition Variable: always waits.
Condition variable cannot be directly used to provide mutual exclusion. |
|
|
Term
Explain the dining philosophers problem |
|
Definition
N philosophers at a circular table, with not enough forks for all to eat at once (one fork between each pair). Forks cannot be passed around or taken from a neighbour. When a philosopher has two forks, they eat, then put the forks down |
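A minimal sketch with one binary semaphore per fork, using POSIX semaphores (the identifiers forks and philosopher are illustrative). Note this naive version can deadlock if every philosopher grabs their left fork at once - a circular wait; see the deadlock cards later in this set:

#include <semaphore.h>
#define N 5

sem_t forks[N];                          /* each initialised to 1: sem_init(&forks[i], 0, 1) */

void philosopher(int i) {
    for (;;) {
        /* think */
        sem_wait(&forks[i]);             /* pick up left fork */
        sem_wait(&forks[(i + 1) % N]);   /* pick up right fork */
        /* eat */
        sem_post(&forks[(i + 1) % N]);   /* put down right fork */
        sem_post(&forks[i]);             /* put down left fork */
    }
}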
|
|
Term
Explain the steps in dispatching a thread to active state |
|
Definition
1. Call scheduler to find P
2. Change P's state to active
3. Set up hardware to be ready to run P
4. Load the core's register set with the saved registers for P (from its PCB), except the PCR
5. Load and initiate the interrupt timer
6. Load P's PCR (control transfers to P) |
|
|
Term
Explain: Asymmetric Addressing
Why might this be ideal for a client-server environment? |
|
Definition
Only the sender names the recipient.
send(P,message) - Send to P
receive(&id,message) - Receive from any process; id is the sender's identity
The server does not need to know the name of a specific client to receive a message; it only needs the sender's id if a reply must be sent. |
|
|
Term
Explain: Direct Messaging |
|
Definition
Synchronous communication: sender blocks until the message is received; receiver blocks until a message is available
Asynchronous communication: send() non-blocking; receive() blocking or non-blocking |
|
|
Term
Explain: Exponential averaging |
|
Definition
A way to estimate CPU burst length by assuming it will be similar to previous bursts |
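The usual form of the estimate (standard textbook notation, not from the card): est(n+1) = a * t(n) + (1 - a) * est(n), where t(n) is the length of the most recent measured burst, est(n) is the previous estimate, and 0 <= a <= 1 weights recent history. For example, with a = 0.5, a previous estimate of 10 ms and a measured burst of 6 ms give 0.5*6 + 0.5*10 = 8 ms as the next estimate.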
|
|
Term
Explain: FCFS/FIFO scheduling algorithm |
|
Definition
New processes are inserted at the tail of the ready queue; the process to be executed is removed from the head. When a process blocks for I/O, it goes to a waiting queue, and returns to the end of the ready queue when the I/O is done. Relative importance of jobs is based on arrival time (a poor choice) |
|
|
Term
Explain: Non-pre-emptive scheduling |
|
Definition
Process is never forced to give up control; it only does so voluntarily when it is not using the CPU, is waiting for I/O, or has finished. |
|
|
Term
Explain: Pre-emptive scheduling |
|
Definition
OS forces process to give up control |
|
|
Term
Explain: Shortest Job First algorithm
What are the pros/cons? |
|
Definition
Selects the job with the shortest expected processing time first
Processes with shorter CPU bursts are executed before those with longer ones
Pros: Less wait time than FIFO
Cons: Estimates not always accurate, longer processes may starve |
|
|
Term
Explain: Shortest Remaining Time First scheduling algorithm |
|
Definition
Pre-emptive shortest job first
Compares the currently running process to new ones. If a new one has a shorter estimate, context switch |
|
|
Term
Explain: Symmetric Addressing |
|
Definition
Processes explicitly name the receiver and sender of a message
send(P,message): Send to P
receive(Q,message): Receive from Q |
|
|
Term
Explain: indirect communication |
|
Definition
Messages sent to mailboxes
Two processes can communicate if they address same mailbox
Mailbox associated with many senders/receivers |
|
|
Term
Define: Aging (scheduling) |
Definition
In scheduling, the longer a process waits, the more its priority increases |
|
|
Term
How are hybrid threads created? |
|
Definition
Map user level threads that we want to operate concurrently to different kernel level threads |
|
|
Term
How are threads and processes different? |
|
Definition
A process is a unit of execution with a collection of resources: an address space containing code, in which there is a single thread, the active part of the process. A thread is a unit of execution alone.
A thread has its own PC, register set, and stack, but shares code and data with other threads in the same address space. A thread is cheaper to create since the resources need not be duplicated, and switching between threads is cheaper |
|
|
Term
How are threads and processes similar? |
|
Definition
Can be in one of various states
Execute sequentially (within an address space, and share the CPU)
Issue system calls to the OS to get work done |
|
|
Term
How can we automate the process of locking and unlocking? |
|
Definition
Make the shared data encapsulated in an object, automatically create lock with object, automatically acquire lock on method entry and release on method exit |
|
|
Term
How do "Guard Variables" work? |
|
Definition
Guard variable used like a flag to determine whether or not critical section is currently accessing shared item associated with guard variable |
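A minimal sketch of a guard variable driven by an atomic test-and-set, using C11 atomics (lock and unlock are illustrative names):

#include <stdatomic.h>

atomic_flag guard = ATOMIC_FLAG_INIT;    /* the guard variable, initially clear */

void lock(void) {
    /* test-and-set returns the OLD value: spin until we are the one who set it */
    while (atomic_flag_test_and_set(&guard))
        ;                                /* busy wait */
}

void unlock(void) {
    atomic_flag_clear(&guard);           /* release: clear the guard */
}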
|
|
Term
How does RR scheduling reduce the penalty in FCFS? |
|
Definition
Preempts running processes periodically
The CPU preempts the current job when the reserved quantum (time slice) ends; the job is put at the end of the ready queue if not yet completed |
|
|
Term
How is pessimistic concurrency control "better" than optimistic concurrency control? |
|
Definition
Pessimistic is better if collisions are likely, while optimistic is better if they are not. |
|
|
Term
How might we stop a running process to regain control? |
|
Definition
A hardware timer associated with each core is set by the OS before a user process is dispatched; when the timer expires, a time-out interrupt is generated |
|
|
Term
What are the steps involved in an OS handling a timer routine? |
|
Definition
1. Copy saved state, including the PCR, into the process's PCB
2. Change P's state to ready
3. Transfer control to dispatcher/scheduler |
|
|
Term
How would increasing the speed for one process by dedicating a core to it have a negative impact on other scheduling goals? |
|
Definition
Efficiency is decreased: the dedicated core sits idle whenever the process waits
Fairness to other processes is impacted |
|
|
Term
How would you implement a counting semaphore using binary semaphores? |
|
Definition
One BS to manage access to semaphore data structure, another to manage waiting for actual resource |
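A common textbook construction along those lines, in the P/V pseudocode style used elsewhere in these cards (BinSem and the function names are assumptions):

int count = N;        /* the counting semaphore's value; N = initial units */
BinSem mutex = 1;     /* binary semaphore protecting count */
BinSem gate = 0;      /* binary semaphore on which processes wait */

P_counting() {
    P(mutex);
    count--;
    if (count < 0) { V(mutex); P(gate); }  /* no units left: wait at the gate */
    else           { V(mutex); }
}

V_counting() {
    P(mutex);
    count++;
    if (count <= 0) V(gate);               /* someone is waiting: let one through */
    V(mutex);
}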
|
|
Term
How would you use a semaphore for managing multiple instances of a resource? |
|
Definition
Initialize semaphore to number of instances (number of processes allowed to concurrently share a machine's memory) |
|
|
Term
How would you use a semaphore for mutual exclusion? |
|
Definition
Initialize the semaphore to one
Example: all processes share the same semaphore S and variable x:
int x; Semaphore S = 1;
while(true)
{
P(S);
x += 1;
V(S);
} |
|
|
Term
How would you use a semaphore for signalling between cooperating processes? |
|
Definition
Initialize semaphore to zero (A process raising semaphore to 1 sends signal that can allow other process to do something) |
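A minimal POSIX sketch of this signalling use (worker and waiter are illustrative names; the semaphore is created with sem_init(&done, 0, 0)):

#include <semaphore.h>

sem_t done;                /* initialised to 0 for signalling */

void worker(void) {
    /* ... produce a result ... */
    sem_post(&done);       /* raise to 1: "result is ready" */
}

void waiter(void) {
    sem_wait(&done);       /* blocks until worker posts */
    /* ... consume the result ... */
}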
|
|
Term
In what case is the sender aware of the status of the message it has sent? |
|
Definition
Zero-capacity (synchronous) |
|
|
Term
In which circumstances can a new process be selected to run?
Which cases are non-pre-emptive scheduling? |
|
Definition
1. Process switches from running to waiting due to system call
2. Process terminates
3. Process switches from running to ready
4. Process switches from waiting to ready and might take a core away from running process
1 & 2 are non-pre-emptive |
|
|
Term
In which ways can implementations of message passing differ? |
|
Definition
Form of communication: messages direct to recipients or through an intermediary
Synchronization: blocking (synchronous) vs non-blocking (asynchronous) send/receive
Buffering: how and where messages are stored, capacity of storage
Error Handling: dealing with exceptional conditions |
|
|
Term
Locks/mutexes usually come with two associated operations: ________ and _______, which are just like ____ and _____ without the need for initialization. |
|
Definition
Lock/acquire
Unlock/free
P
V |
|
|
Term
Name and describe the two types of IPC |
|
Definition
Shared memory IPC - using shared data structures
Message Passing IPC - procedures send and receive messages |
|
|
Term
Name some of the attractive properties of semaphores |
|
Definition
Simple
Support many processes per shared datum
Can use different semaphores for different data
Can permit multiple processes into a critical section at once, if desired |
|
|
Term
Name some of the downfalls of semaphores |
|
Definition
Unstructured
Unenforced
Don't support data abstraction |
|
|
Term
Name the 5 characteristics any solution that provides mutual exclusion should have |
|
Definition
1. Mutual Exclusion
2. Progress - a process wishing to enter its CS must do so eventually
3. Fault Tolerance - a process failing outside its CS should not impede others accessing their CSs
4. No Assumptions about speed or number of processes
5. Efficiency - a process should stay in its CS for a short time without blocking |
|
|
Term
What information is typically kept in a Process Control Block (PCB)? |
Definition
PID
Saved Registers
Process State
Scheduling Priority
Owner
CPU seconds used
Memory allocated
Open File List
Parent PID |
|
|
Term
Process information is kept in a data structure commonly referred to as a _______ |
|
Definition
Process Control Block (PCB) |
|
|
Term
What does a process consist of? |
Definition
Program to be executed
Address space (region of isolated memory)
Some state (where program is running, what its register contents/status/etc are)
Unique process identifier |
|
|
Term
Two groups of information the OS must know about processes in order to manage them |
|
Definition
Process execution state information (contents of CPU registers, program counter, etc)
Process Control Information (resources held, access privileges, memory allocated, etc) |
|
|
Term
What are some common errors that may occur in transmission of messages? |
|
Definition
Process terminates
Lost/delayed messages
Scrambled/misordered messages |
|
|
Term
What are the OS's responsibilities when it comes to processes? |
|
Definition
Track and maintain execution state
Isolate processes from one another |
|
|
Term
What are the downsides of Dekker's Algorithm? |
|
Definition
Only works for 2 processes; can be generalized for N processes, but N must be fixed and known, and as N increases the algorithm becomes far too complicated and expensive. |
|
|
Term
What are the important criteria to consider in short term scheduling? |
|
Definition
Priority/Importance of work
Fairness
Deadlines - hard or soft real-time deadlines
Consistency/predictability |
|
|
Term
What are the key issues when scheduling processes? Why are these sometimes contradictory? |
|
Definition
Speed
Fairness
Efficiency
Each user wants to complete ASAP (speed), all users must receive a reasonable share of time on CPU (fairness) and the OS wants to use the CPU as much as possible (efficiency) |
|
|
Term
What are the pros and cons of the CAS algorithm? |
|
Definition
Pros: no delay in entering critical sections
Cons: must recompute F (costly?) if processes collide |
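A minimal sketch of the optimistic recompute-and-retry loop using C11's compare-and-swap (here F is just old + 1; optimistic_increment is an illustrative name):

#include <stdatomic.h>

atomic_int x;

void optimistic_increment(void) {
    int old = atomic_load(&x);
    /* if another process changed x since we read it, the CAS fails,
       old is refreshed with the current value, and we recompute F */
    while (!atomic_compare_exchange_weak(&x, &old, old + 1))
        ;
}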
|
|
Term
What are the pros and cons of the TAS algorithm? |
|
Definition
Pro: one guard variable for each shared item, supports N processes, processes unaware of N
Cons: busy waiting |
|
|
Term
What are the three general levels of scheduling and when/where is each used? |
|
Definition
Long term: accepts requests to create new processes to run programs
Medium term: swaps processes between main memory and secondary memory
Short term: selects new running processes when the currently running process stops |
|
|
Term
What are the two types that semaphores come in? |
|
Definition
Binary and counting (general) semaphores |
|
Term
What do we have to do to avoid busy waiting with semaphores? |
|
Definition
We have to be able to block processes (as opposed to polling) and then re-enable them when waiting is over. Also: maintain a queue of processes |
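A sketch of the classic blocking implementation, in the cards' P/V style (Queue, enqueue, dequeue, block and wakeup are assumed OS-internal helpers, and the bodies of P and V must themselves execute atomically, e.g. with interrupts disabled):

typedef struct { int value; Queue q; } Semaphore;  /* q holds blocked processes */

void P(Semaphore *s) {
    s->value--;
    if (s->value < 0) {
        enqueue(&s->q, current_process);
        block();                     /* active -> blocked: no spinning */
    }
}

void V(Semaphore *s) {
    s->value++;
    if (s->value <= 0)
        wakeup(dequeue(&s->q));      /* move one waiter back to ready */
}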
|
|
Term
What does an operating system do? |
Definition
Provides an abstraction of the machine's hardware
Acts as an interface between machine and application programs
Manages resources so they can be used effectively and safely shared among users.
Provides services to application programs |
|
|
Term
What does the execution of a TRAP cause? |
|
Definition
Causes the core to go through the same sequence as when an interrupt occurs, except the handler (trap handler) is different from an interrupt handler and is part of the OS. |
|
|
Term
What happened in the 1940's that was relevant to OS history? |
|
Definition
No OSs - raw hardware - anything the machine did was coded into the application program. |
|
|
Term
What happened in the 1950's that was relevant to OS history? |
|
Definition
Simple executives: the OS provided common services, like I/O drivers, for a single running program |
|
|
Term
What happened in the 1960's that was relevant to OS history? |
|
Definition
First GP-OSs (batch based)
OS allowed multiple programs to run together, sharing hardware
No interactivity (batch) |
|
|
Term
What happened in the 1970's that was relevant to OS history? |
|
Definition
Real GP-OSs (batch and interactive; first Unix OS)
OS allowed many programs to run together, sharing hardware
Interactive and batch |
|
|
Term
What happened in the 1980's that was relevant to OS history? |
|
Definition
Personal OSs, single user
Evolution of GP-OSs for larger machines
Introduction of single-user OSs for PC machines |
|
|
Term
What happened in the 1990's that was relevant to OS history? |
|
Definition
GP-OSs on personal machines
Linux and the like on PC-type machines |
|
|
Term
What is a solution to the producer-consumer problem? |
|
Definition
Three semaphores:
- Binary semaphore mutex to ensure only one process manipulates the buffer at a time
- Empty/Full counting semaphores to count the number of empty and full buffer slots
- Producer waits and queues if there are no empty slots, then waits and queues for buffer access
- Consumer waits and queues if there is no data, then waits and queues for buffer access
- Alternatively, use a monitor to maintain a separate count of the number of items in the buffer |
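A minimal sketch of that three-semaphore solution with POSIX semaphores (SLOTS, produce and consume are illustrative; setup: sem_init(&mutex,0,1); sem_init(&empty,0,SLOTS); sem_init(&full,0,0)):

#include <semaphore.h>
#define SLOTS 10

int buffer[SLOTS];
int in = 0, out = 0;
sem_t mutex, empty, full;

void produce(int item) {
    sem_wait(&empty);                /* wait for an empty slot */
    sem_wait(&mutex);                /* one process in the buffer at a time */
    buffer[in] = item; in = (in + 1) % SLOTS;
    sem_post(&mutex);
    sem_post(&full);                 /* signal: one more full slot */
}

int consume(void) {
    sem_wait(&full);                 /* wait for data */
    sem_wait(&mutex);
    int item = buffer[out]; out = (out + 1) % SLOTS;
    sem_post(&mutex);
    sem_post(&empty);                /* signal: one more empty slot */
    return item;
}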
|
|
Term
What is the difference between TAS and CAS? |
|
Definition
TAS provides pessimistic concurrency control while CAS provides optimistic concurrency control. |
|
|
Term
What is the goal of short term scheduling? What are some factors in this goal? |
|
Definition
Optimize system performance
CPU utilization
Throughput- # processes run per unit time
Total service time - time from submission to completion
Job Waiting Time - time spent in ready queue waiting to run
Job Response Time - time to delivery of first results |
|
|
Term
What is this type of "release" called?
V(semaphore): { semaphore++ } |
|
Definition
|
|
Term
What is this type of "waiting" probe/test called? P(semaphore): { while(semaphore==0); semaphore -- } |
|
Definition
Busy waiting (spinning/polling) |
|
Term
What problems would exist if we did not use OS's? |
|
Definition
Exposed to low level hardware complexities
No standardized target for device manufacturers to write drivers for
Would have to manage sharing of resources |
|
|
Term
What states may a process be in? |
|
Definition
Ready
Blocked - waiting for something to happen
Suspended |
|
|
Term
What structure permits us to share data structures? |
|
Definition
Threads - threads in the same process share its address space, so they can share data structures directly |
|
Term
When is a UNIX signal "pending"? Is a UNIX signal asynch or synch? |
|
Definition
When it has been generated but not delivered
Asynch |
|
|
Term
Where are message queues mapped? What does this allow processes to do? |
|
Definition
Locations in the file system
Allows unrelated processes to share a queue |
|
|
Term
Which two components does the OS typically provide, related to scheduling? |
|
Definition
Scheduler - picks from ready processes/threads to assign them available cores
Dispatcher - responsible for causing the process/thread selected by the scheduler to be made active |
|
|
Term
Why are threads potentially dangerous? |
|
Definition
Since they share an address space, they are not protected from one another. |
|
|
Term
Why can system calls not be implemented like a subroutine call? |
|
Definition
The OS and processes do not share a stack |
|
|
Term
Why is OS kernel support needed to support better mutual exclusion? |
|
Definition
Avoid busy waits
Block the process, not leave it running, occupying a valuable core doing useless work. |
|
|
Term
Why is sharing of data not easily done between processes that have their own address spaces? |
|
Definition
Complex system calls are required |
|
|
Term
Why is synchronization easy for unrelated processes? |
|
Definition
We simply isolate them in separate address spaces; they can't share data, so no races can occur and no synchronization is needed. |
|
|
Term
Why is the critical issue with RR the length of the time quantum? |
|
Definition
Too short - CPU spends more time context switching (overhead)
Too long - interactive processes suffer |
|
|
Term
Why isn't all concurrently executing code part of a critical section? |
|
Definition
Because then we would have to execute everything mutually exclusively and there would be no concurrency |
|
|
Term
Why might we need more than just mutex when using monitors? |
|
Definition
When a process wants to insert into a full buffer, it must wait.
It must be awoken when something is removed, making space for it to insert.
When a process wants to remove from an empty buffer, it must wait until the producer puts something in and signals it.
Signaling is needed, not just mutual exclusion |
|
|
Term
Why must UNIX signals be only one-way? |
|
Definition
1. They use a single integer, no mechanism for a reply.
2. They are asynch, there is no execution path back to the signaller |
|
|
Term
Why must each thread have its own stack?
What would happen if threads shared a stack? |
|
Definition
Each thread needs its own stack to follow its own independent course of execution.
Example of a shared stack: thread 1 calls a subroutine, then thread 2 calls a subroutine. When thread 1 returns, it uses thread 2's activation record to find its return address, because that record is on top of the stack. |
|
|
Term
Why must semaphores be built up in software? |
|
Definition
There are no existing hardware implementations of P/wait and V/free operations |
|
|
Term
Label the states A-F in the process state transition diagram |
Definition
A - New
B - Stopped
C - Active
D - Ready
E - Blocked
F - Exiting |
|
|
Term
FCFS performs better for ______ processes and tends to favour _______ jobs. FCFS is unsuitable for _______ jobs. |
|
Definition
Long
CPU-bound
Interactive (time-sharing) |
|
Term
How does a MLQ scheme handle mixed processes? |
|
Definition
Having a separate ready queue for each class of process, using a different algorithm for each |
|
|
Term
What is a multi-level feedback queue? |
|
Definition
Variation of MLQ, processes are not permanently assigned to a queue when they enter the system. If a process exhausts quantum, moved to another queue with a longer quantum but lower priority. |
|
|
Term
What is Fair Share scheduling? |
|
Definition
Goal is to ensure each user gets an equal share: each of K users is entitled to 1/Kth of the available CPU. |
|
|
Term
How does the F-S algorithm avoid unfairness? |
|
Definition
CPU use tracked over a period of time, used to adjust share. Over time, changes are self-correcting, starvation unlikely. |
|
|
Term
Define: Deadlock |
Definition
Permanent blocking of a set of processes that either compete for resources or communicate with each other |
|
|
Term
What three conditions are necessary for deadlock? |
|
Definition
Mutual exclusion
Hold and wait (process may hold resources while waiting)
No preemption
(Given these three, deadlock actually occurs when a fourth condition, circular wait, arises: a closed chain of processes where each holds one resource needed by the next) |
|
|
Term
What are the two types of resources, and which can be involved in deadlock? |
|
Definition
Reusable (CPU, memory, files)
Consumable (interrupts, signals)
Either/both can be involved |
|
|
Term
What are the characteristics of a reusable resource? |
|
Definition
Fixed number in system
Not depleted by use
Used by only 1 process/thread at a time
A resource can be released by a process only if it was previously acquired |
|
|
Term
What are some characteristics of consumable resources? |
|
Definition
Unlimited number in system
Producer processes create resource units (a resource can be released without being acquired)
Consumer processes acquire resource units (destroyed by use; the consumer does not release) |
|
|
Term
Why does a cycle in a graph not necessarily mean deadlock? |
|
Definition
Some process may release
Some resources may contain multiple units
Graph may be reducible |
|
|
Term
A cycle is a ______ condition for deadlock, but not a __________ one unless every resource is ___________ |
|
Definition
Necessary
Sufficient
Single |
|
|
Term
Under what conditions does a cycle in a RAG guarantee deadlock? |
Definition
A cycle with no non-cycle outgoing RAG path from any involved node. A process blocks as soon as it waits for one resource, processes can wait for only one resource at a time. |
|
|
Term
What are the four strategies to deal with deadlock? |
|
Definition
Prevent, avoid, detect (and recover), or ignore it |
|
Term
How might you prevent deadlock? |
|
Definition
Directly - prevent circular wait
Indirectly - prevent occurrence of one of the three conditions (mutex, hold/wait, no pre-emption) |
|
|
Term
How can we prevent Hold and Wait, and why shouldn't we? |
|
Definition
Require a process to request all its resources at once; the process is blocked until all can be granted simultaneously. Gives poor system performance, as processes are blocked when they could be making progress with partial resources, which sit idle until used. |
|
|
Term
What is the problem with using pre-emption to prevent deadlock? |
|
Definition
Who should we pre-empt? The decision is based on process priority. Only makes sense for resources whose state can easily be saved/restored. Involves rollback, undoing work the process started, which is not always possible |
|
|
Term
Why is directly preventing deadlock difficult? |
|
Definition
Define a linear ordering of resources. A process holding resources may only request new ones that occur later in the ordering. Tends to deny resources unnecessarily |
|
|
Term
How might we avoid deadlock, and what strategies are there? |
|
Definition
Allow the three necessary conditions, but make resource allocation decisions such that deadlock is never reached. Strategies: process initiation denial, resource allocation denial |
|
|
Term
Describe: Secondary Storage |
|
Definition
Non-volatile repository for user/system data/programs
Holds most information in the system; used for storing programs, data, and temporary storage of virtual memory pages
May be in many forms (text, raw data, VM pages, etc) |
|
|
Term
Define: File |
Definition
A named collection of related information, with two views:
Logical view: how users see the file (a linear sequence of contiguous bytes)
Physical view: how the file is stored on secondary storage (not necessarily contiguous) |
|
|
Term
What is the difference between a file and a data structure in memory? |
|
Definition
Files are persistent: long lasting, and should survive system failure/reboot/shutdown
Files are destined to be moved around and accessed by different programs/users
Most data structures are private |
|
|
Term
What attributes do files have? |
|
Definition
File name, owner/creator, type, location on disk, organization, access permissions, date/time of creation/modification/last access, file size, assorted additional information |
|
|
Term
In what ways might we access information stored in a file? |
|
Definition
Sequential: access in order, one record after another
Direct (random): access in any order, skipping uninteresting records
Keyed: access in any order, based on particular key values |
|
|
Term
Define: Directory |
Definition
Logically a "table" that can be searched for information about other files, usually fundamental way of ordering them, almost always files as well.
Entries contain info about file
Entries created when files they describe are created/removed/deleted |
|
|
Term
What are the two common directory structures? |
|
Definition
Single level (flat file system): shared by all users
Multi level (hierarchical file system): often has a (sub)tree for each user |
|
|
Term
A general approach to the issue of file sharing is to provide ______________________ to files for each of a set of operations. We permit specific users to perform one or more of these operations, and specific rights using an ___________________ |
|
Definition
Controlled access
Access control list (ACL) |
|
|
Term
What does a file system do? |
|
Definition
Integral part of OS that manages info on 2ndary storage
Provides a mapping between logical and physical views of a file via a set of services and an interface.
Hides much of the device-specific aspects of file manipulation from users |
|
|
Term
Name/Describe the three levels of file abstraction |
|
Definition
File relative: used to address files at a high level
Volume (partition) relative: used by the device-independent part of the file system; the partition is viewed as an array of sectors
Drive relative: lowest level |
|
|
Term
What are three common file allocations techniques? |
|
Definition
Contiguous
Chained (linked)
Indexed |
|
|
Term
Describe: Contiguous Allocation |
|
Definition
Allocate disk space as a collection of adjacent/contiguous blocks and keep track of unused disk space
Advantages: easy access (sequential and random), simple, few seeks
Disadvantages: external fragmentation, may not know file size in advance |
|
|
Term
Describe: Chained/linked allocation |
|
Definition
Space allocation similar to page frame allocation; mark allocated blocks as in-use
Adv: no external fragmentation, files can grow easily
Disadv: Lots of seeking, random access hard |
|
|
Term
Describe: Indexed Allocation |
|
Definition
Allocate array of pointers (index) during creation, fill array as new blocks assigned.
Adv: small internal fragmentation, easy seq/rand access
Disadv: Lots of seeks, hard size limitations |
|
|
Term
What techniques do file systems use to manage free disk block lists? |
|
Definition
Bit vectors/maps
Linked lists/chains
Indexing |
|
|
Term
Explain Bit Map Free Space Management |
|
Definition
Use 1 bit per fixed size free area on disk
Bitmap is stored on disk at a well known address. A 1 bit indicates a free area while a 0 indicates an allocated block |
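A minimal sketch of scanning such a bitmap, following the card's convention that a 1 bit means free (find_free_block is an illustrative name):

/* returns the first free block number, or -1 if none */
int find_free_block(unsigned char *bitmap, int nblocks) {
    for (int b = 0; b < nblocks; b++)
        if (bitmap[b / 8] & (1 << (b % 8)))   /* bit set => block b is free */
            return b;
    return -1;
}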
|
|
Term
Explain Chained Free Space Management |
|
Definition
Chained pointers stored in each free disk block.
Pointer to the start of the list maintained at a well-known location.
Links not necessarily ordered by starting block address.
Can store a length along with each pointer, indicating the contiguous blocks in that region of the chain, to support contiguous file allocation. |
|
|
Term
Explain Index Free Space Management |
|
Definition
Maintain index pointing to all free areas
Index of free regions maintained at well known location
Index would likely be ordered by size of free region to permit easy searching |
|
|
Term
What methods are used to find free space? |
|
Definition
Bitmaps - search the bitmap for a sufficiently long run of consecutive 1 bits
Index - Store number of available blocks in each index entry and order index by size
Chained - Traverse chain looking for fit (chain circular to support next fit) |
|
|
Term
Describe some "Other" file systme issues |
|
Definition
Disk blocking: multiple sectors per block for efficiency
Disk Quotas: limit space per user, costly to track use
Reliability: backup/restore, file system (consistency) check
Performance: block/buffer caches; access in memory is faster than on disk, but we must be careful about when changes are written back |
|
|
Term
What does the UNIX disk layout look like? |
|
Definition
Boot block contains (bootstrap) code to boot OS
Super block contains critical info about layout of FS, such as number of inodes and number of disk blocks
Each inode entry contains file attributes, except name. First inode (0) points to block containing root directory of FS |
|
|
Term
What changes have been made to UNIX basic strategies since disks have grown larger? |
|
Definition
Super block replicated across disk to improve reliability on crash
Inode table and data blocks divided up into cylinder groups (containing replica of super block)
Cylinder group is a group of contiguous cylinders on disk
By localizing access to cylinder group, seeking reduced/performance improved |
|
|
Term
Name/describe the three levels of indirection in accessing a file |
|
Definition
File descriptor table: one per process, keeps track of open files for the process
Open File table: one per system, keeps track of all files currently open by processes
Inode table: one per disk volume/partition, keeps track of files on the partition |
|
|
Term
Describe the FCFS/FIFO disk scheduling strategy |
|
Definition
Process disk requests in the order they are received, without considering location; simple and fair but inefficient |
|
|
Term
What different types of addresses are there? |
|
Definition
Logical: what we use in programs
Relative: address expressed relative to some known location (usually the start of the program)
Physical: actual address in memory |
|
|
Term
What is Binding and what entities are involved? |
|
Definition
Translation logical->physical address, how it occurs determines relocatability of code.
Involves: linker/loader, programmer/compiler, OS/hardware |
|
|
Term
What is the purpose of the linker and loader? |
|
Definition
Linker: produces a load module; takes a set of object modules, resolves references between modules, and joins them
Loader: Puts load module into memory
How these two work determines relocatability |
|
|
Term
What happens (relative to binding) at programming, compile and load time? |
|
Definition
Programming time: programmer computes all physical addresses, code that is NOT relocatable
Compile time: compiler/assembler converts logical addresses into physical addresses, NOT relocatable
Load time: compiler converts logical to relative addresses, loader converts to physical. Relocatable when loaded initially, but must be loaded back to the same location if swapped out |
|
|
Term
What happens (relative to binding) at run time with a dynamic loader? |
|
Definition
Loader performs NO translation of relative addresses; translation is performed in hardware when an address is referenced. Produces modules that are relocatable with swapping |
|
|
Term
What are some relocation and protection mechanisms?
How do we translate a relative address? |
|
Definition
Base register: holds the absolute address of the start of the code
Bounds register: holds the absolute address of the end of the data
To translate relative addr:
add to base, compare with bounds, out of bounds address traps to OS |
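In software form, what the hardware does on every reference might look like this sketch (translate and trap_to_os are illustrative names):

unsigned translate(unsigned rel, unsigned base, unsigned bounds) {
    unsigned phys = base + rel;      /* add to base */
    if (phys > bounds)               /* compare with bounds */
        trap_to_os();                /* out-of-bounds reference traps to OS */
    return phys;
}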
|
|
Term
Describe memory allocation in single-program (mono-programming) systems |
Definition
Memory allocation trivial; no relocation needed because user programs are always loaded one at a time into a pre-determined location.
Linker produces same load address for every user program |
|
|
Term
Describe: Overlays |
Definition
Program was organized into tree-like structure of object modules called overlays to maximize memory, when there were no nice ways of managing "virtual" memory.
Root overlay was always loaded in memory and controlled which sub-tree was loaded at various times
New sub trees were overlayed on existing ones, one program at a time |
|
|
Term
Why is swapping needed/useful? |
|
Definition
Keeping non-running programs in memory is wasteful
Brings flexibility, even to systems with fixed partitions, because the operating system can always make room for high-priority jobs, no matter what! |
|
|
Term
Describe the responsibilities of a swapper |
|
Definition
Selection of a process to swap out (criteria: suspended/blocked state, low priority, time spent in memory)
Selection of a process to swap in (criteria: time spent swapped out, priority)
Allocation and management of swap space on the swapping device
Swap space may be system-wide or dedicated to specific users/processes |
|
|
Term
Name some schemes that have been devised to support multi programming |
|
Definition
Fixed partitioning
Dynamic partitioning
Buddy system
Paging
Segmentation
Virtual Memory |
|
|
Term
Describe Fixed Partitioning |
|
Definition
Divide memory into static, equal-size partitions
Simplest scheme: load image into any available partition
Limitations: limits # processes in main memory, images larger than the partition size have to use overlays, unused memory within a partition is wasted
Can refine by having varying-size partitions, but then we need queues, etc.
An executable prepared to run in one partition may not be able to run in another without being re-linked |
|
|
Term
Describe Dynamic Partitioning |
|
Definition
Partitions of varying size/number
Partitions are the same size as images (no internal fragmentation)
As processes are created/destroyed/swapped, gaps develop between partitions (external fragmentation)
Over time much memory is lost to external fragmentation
Must reclaim memory by moving processes to combine fragments (compaction, time consuming) |
|
|
Term
Name/describe some placement algorithms |
|
Definition
First fit: use the first block large enough (best performance)
Best fit: use the smallest block large enough (worst performance)
Worst fit: use the largest block
Next fit: same as first fit, but start looking from the last block allocated |
|
|
Term
Describe the Buddy System |
|
Definition
Compromise between fixed/dynamic partitioning
Allocate blocks that are a power of 2 in size
Allocate the smallest block that holds the image
(a block of size 2^i may be split into 2 blocks of size 2^(i-1); continue splitting until a block of the needed size is obtained)
OS maintains a list of unallocated blocks; if there's a right-size block, use it, otherwise split a larger one
When 2 buddy blocks of size 2^i become free, merge them
Some internal fragmentation; merging takes the place of compaction |
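A sketch of the size rounding (MIN_BLOCK and buddy_block_size are illustrative names). A handy consequence of the power-of-two scheme is that the buddy of a block at address a with size s is at address a XOR s, which makes merging cheap:

unsigned buddy_block_size(unsigned size) {
    unsigned b = MIN_BLOCK;          /* assumed smallest block size, e.g. 4096 */
    while (b < size)
        b <<= 1;                     /* keep doubling: 2^i -> 2^(i+1) */
    return b;                        /* smallest power of 2 that holds the image */
}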
|
|
Term
Describe: Paging |
Definition
Dividing memory into small, fixed-size pieces
Process divided into pages, main memory divided into frames (page size = frame size)
When a process is allocated memory, place its pages into frames
Small internal fragmentation, no external fragmentation
Possible because of relative addressing |
|
|
Term
How does the OS handle address translation in paging? |
|
Definition
OS has a page table, one entry per page of the process, listing the frame holding that page
A relative address has a page # and an offset within the page |
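As a sketch, assuming 4 KiB pages so the low 12 bits are the offset (PAGE_BITS and translate are illustrative names):

#define PAGE_BITS 12
#define OFFSET_MASK ((1u << PAGE_BITS) - 1)

unsigned translate(unsigned rel, unsigned *page_table) {
    unsigned page   = rel >> PAGE_BITS;     /* page # part of the relative address */
    unsigned offset = rel & OFFSET_MASK;    /* offset within the page */
    unsigned frame  = page_table[page];     /* one entry per page of the process */
    return (frame << PAGE_BITS) | offset;   /* frame # replaces page # */
}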
|
|
Term
Describe: Segmentation |
Definition
Program divided into segments of different sizes
System specifies a maximum segment size
Segments are under programmer control
Allows related code to be loaded as a single unit
Similar to dynamic partitioning |
|
|
Term
Using the segment number, how do we get the physical address? |
|
Definition
Check offset <= segment length
Add offset to starting addr in table |
|
|
Term
Define: Resident Set |
Definition
The pieces of a program that are in memory.
The process executes as long as all references are in the resident set |
|
|
Term
What happens when a reference is NOT in the resident set? |
|
Definition
OS generates an interrupt (page fault)
Process is blocked
OS schedules a disk read to bring the required piece into memory
After the read is done, the process is moved to the ready state |
|
|
Term
What are the pros of Virtual Memory? |
|
Definition
More processes may be in memory
Better processor utilization/performance
No limit to process size; a process can be bigger than all of main memory
Transparent to the programmer; OS/hardware manage bringing pieces into memory |
|
|
Term
Define: Principle of Locality |
|
Definition
The idea that in the short term, programs tend to execute within a small area of code/data.
Permits a process to execute when only partially loaded |
|
|
Term
Why must the OS be careful about how it brings a new piece into memory? |
|
Definition
If it replaces another piece that will be needed soon, it will have to bring the replaced piece in again |
|
|
Term
Define: Thrashing |
Definition
When OS spends most of its time servicing page faults |
|
|
Term
How does address translation use control bits? |
|
Definition
If the P (present) bit is set, the page is in memory; otherwise, page fault
If the M (modified) bit is set, the OS must save the frame to disk before it can use the frame to store another page |
|
|
Term
What is a possible table storage scheme? |
|
Definition
Base address of the page table held in a register
Add the base address to the page # in the relative address to access the page table entry
Read the frame # from the page table and combine it with the offset from the relative address to get the physical address |
|
|
Term
What does a hierarchical page table accomplish? |
|
Definition
Takes advantage of locality to minimize amount of page table in memory |
|
|
Term
How is a hierarchical page table designed? |
|
Definition
Each part will occupy one frame
Root level page table (always in memory) contains pointers to page tables at next level
Leaf level points to pages
Typically one root and few user level page tables need to be resident
Address translation now requires 2 memory accesses to get frame # |
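A sketch of the two-level lookup, assuming a 32-bit address split 10/10/12 as on classic x86 (translate2 is an illustrative name):

unsigned translate2(unsigned rel, unsigned **root) {
    unsigned dir    = rel >> 22;            /* top 10 bits: index into root table */
    unsigned page   = (rel >> 12) & 0x3FF;  /* next 10 bits: index into leaf table */
    unsigned offset = rel & 0xFFF;          /* low 12 bits: offset in page */
    unsigned *leaf  = root[dir];            /* memory access 1: find leaf page table */
    unsigned frame  = leaf[page];           /* memory access 2: find frame # */
    return (frame << 12) | offset;
}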
|
|
Term
Describe: Inverted Page Table |
|
Definition
One entry per page FRAME
Adv: not having to store unnecessary page table entries for pages that are not allocated to space in real memory
Problems: how do you know what is currently mapped, how do you find the entry for a page? |
|
|
Term
Describe: Translation Lookaside Buffer (TLB) |
|
Definition
Special cache memory, stores recently used page table entries
H/W first checks TLB for needed entry, if found (hit) get frame # from TLB
Else, (miss) read entry from page table, add to TLB
Associative memory: all cache locations are searched in parallel |
|
|
Term
Compare/Contrast smaller vs larger page sizes |
|
Definition
Small pros: less internal fragmentation, can have more pages in memory, easier to get locality
Small cons: larger page table
Larger pros: more efficient disk I/O
Larger Cons: fewer pages in memory, pages will contain addresses outside current locality |
|
|
Term
Define: Working Set |
Definition
Actual set of pages required by current locality.
Should be same as resident set to minimize page faults |
|
|
Term
What are the advantages of segmentation? |
|
Definition
Supports separate compilation
Supports data sharing/protection
Supports dynamic data structures |
|
|
Term
How is Paging & Segmentation advantageous? |
|
Definition
No external fragmentation
Fixed page size allows for good memory management
Dynamic segment sizes
Modularity
Sharing/protection |
|
|
Term
How can we support both paging & segmentation? |
|
Definition
Relative addr has segment # (index into segment table), page # (indexes into page table)
Each segment has own page table
Hardware support vital |
|
|
Term
What issues do we need to consider in modern OS paging/segmentation? |
|
Definition
Size of memory
Speed of memory (main/secondary)
Size/# processes
Behavior of programs |
|
|
Term
What do memory management policies consist of? |
|
Definition
Fetch
Placement
Replacement
Resident Set
Cleansing
Load |
|
|
Term
What two alternatives do we have for the fetch policy? |
|
Definition
Demand paging: bring page into memory when referenced
Pros: No unneeded pages brought in
Cons: Lots of page faults when process starts
Prepaging: bring in several pages before needed
Pros: more efficient disk I/O
Cons: too many unneeded pages in memory, does not improve performance |
|
|
Term
What issues do we have with the placement policy? |
|
Definition
Placement is only an issue with pure segmentation
External fragmentation
Same issues as seen in dynamic partitioning |
|
|
Term
Name the 4 basic replacement policy algorithms |
|
Definition
Optimal (OPT)
Least recently used (LRU)
First In First Out (FIFO)
Clock (CLOCK) |
|
|
Term
Describe the OPT replacement policy |
|
Definition
Select the page that won't be referenced for the longest time
Requires future knowledge, so not possible in practice
Gives the best possible results; useful for comparison |
|
|
Term
Describe the LRU replacement policy |
|
Definition
Select page that hasn't been used for longest time
Good performer, hard to implement
Have to record reference time of page each time referenced
Lots of overhead |
|
|
Term
Describe the FIFO replacement policy |
|
Definition
Replace page that has been in memory longest
Pages placed in Queue when enter memory
Replace page at front
Simple, bad performance
No way of reorganizing pages that are used heavily |
|
|
Term
Describe the CLOCK replacement policy |
|
Definition
Approximates LRU, for each frame stores use bit, set when frame first loaded and referenced.
Frames kept in circular buffer, when page replaced set pointer to NEXT frame in buffer |
|
|
Term
How do we find a frame for a replacement in the CLOCK algorithm? |
|
Definition
Start search at pointer
Examine use bit, if cleared stop and use, else clear and advance to next frame
If all frames used, go all the way around and use start frame |
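A sketch of that sweep (NFRAMES, use_bit, hand and clock_victim are illustrative names; a full implementation would also map frames back to pages). Note that after one full sweep all use bits have been cleared, so the frame the search started at gets used, as the card says:

#define NFRAMES 64
int use_bit[NFRAMES];
int hand = 0;                        /* the clock pointer */

int clock_victim(void) {
    for (;;) {
        if (use_bit[hand] == 0) {    /* cleared: stop and use this frame */
            int victim = hand;
            hand = (hand + 1) % NFRAMES;   /* pointer left at the NEXT frame */
            return victim;
        }
        use_bit[hand] = 0;           /* give the page a second chance */
        hand = (hand + 1) % NFRAMES;
    }
}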
|
|
Term
Describe the Page Buffering replacement strategy |
|
Definition
Maintain small # candidate frames for replacement, simple FIFO
Free page list contains unmodified candidates
Modified page list contains candidates that need to be saved
When frame needed, take from free page list
To reduce I/O, pages in modified list saved in groups
If candidate page referenced before needed, remove from candidate list |
|
|
Term
What are the two policies in resident set management? |
|
Definition
Fixed allocation: size of the resident set is fixed at load time
Variable: size may vary over the process's life; a process with a high fault rate can get more pages, one with a low fault rate may give up pages
More powerful, but more overhead |
|
|
Term
What must our resident set management policy consider in regards to where we get a replacement frame? |
|
Definition
Fixed/variable size
Local/global scope |
|
|
Term
What are the three combinations of size/scope in resident set management policies? |
|
Definition
Fixed/local - simple, resident set may not be optimal
Variable/global - simple, process causing fault gets new page
Variable/local - OS periodically evaluates processes and reallocates pages |
|
|
Term
What are the two cleaning policies? |
|
Definition
Demand: write a page out only when it is going to be replaced. Associated with a page fault; the process then has to wait for 2 I/O operations: clean and fetch
Precleaning: write pages out in batches. Many pages will likely become dirty again before being replaced |
|
|
Term
What is the problem with having too many and too few processes? |
|
Definition
Too few: memory filled with blocked processes most of the time, system busy swapping
Too many: resident sets too small, thrashing |
|
|
Term
How do we determine the best level of load control? |
|
Definition
L=S
L = time between page faults
S = time to service fault
50% criteria, 50% utilization of paging hardware |
|
|
Term
Define: Monitor |
Definition
A programming language construct that provides an abstract, high level view of synchronization |
|
|
Term
Define: Wait-For Graph |
Definition
Take the basic RAG and remove the resource nodes, collapsing the edges; we are left with a graph where processes wait on other processes. |
|
|