Term
Phase 1 of operating systems |
|
Definition
Hardware expensive, humans cheap. Only one user used the console at a time: a person would load a card deck in and wait, and the machine would read the cards, process them, and give the result back hours later. This was modified into batch processing, where one machine strictly did all input, a person carried those jobs to another machine that did all the processing, and another person carried that batch to a third machine that did all the output. This allowed more jobs to be run back to back. This phase eventually led to multiprogramming, where several programs run at the same time, and was the beginning of scheduling and interrupts |
|
|
Term
Phase 2 of operating systems |
|
Definition
Hardware cheap, humans expensive. This phase started out with multiple people using multiple terminals connected to one machine running the operating system. Introduced timer interrupts so that a program that runs for too long gets preempted and rescheduled by the OS |
|
|
Term
Phase 3 of operating systems |
|
Definition
Hardware very cheap, humans very expensive. This is the introduction of personal computing, which eventually led to the idea that since hardware is so cheap, you can give each consumer a bunch of computers |
|
|
Term
What does an operating system do? |
|
Definition
Manages shared resources, provides services, and provides protection for processes |
|
|
Term
Asynchronous |
Definition
Means you can move on to another task before the process is done. Describes the return from an interrupt |
|
|
Term
Synchronous |
Definition
Means you wait for something to finish before moving on to another task. Describes the return from an exception or system call |
|
|
Term
Dual-mode operation |
Definition
The OS has a user mode and a kernel mode. A bit in the processor status register tells you which mode you're in: 0 = kernel, 1 = user |
|
|
Term
Where does the OS reside? |
Definition
The OS resides below the API and above the HAL. The order pretty much goes: applications, OS, hardware. This makes sense because the OS connects the two |
|
|
Term
Process |
Definition
An instance of a program (program + execution state). Each process has its own CPU registers, stack, and address space |
|
|
Term
How does control transfer from a user program to the OS? |
Definition
A user program causes an exception by doing something illegal, gets interrupted, or makes a system call. A user program can also create a new process or switch to a different process |
|
|
Term
Interrupt vector |
Definition
Holds the handlers for all interrupts, exceptions, and system calls and is used to determine what action the OS should take |
|
|
Term
Saving the State of the Interrupted Process |
|
Definition
The kernel pushes registers onto the exception stack (not the user-level stack!!!) before the handler runs. Once the interrupt handler is running, the rest of the program state is pushed onto the stack. On return, the stack is popped to restore the state |
|
|
Term
System call |
Definition
A request by a user-level process to call a function in the kernel. This is done using a trap instruction. When the OS is done performing the service, it executes an RTI instruction. Ex: read() |
|
|
Term
Example of a privileged instruction |
|
Definition
Accessing I/O devices, disabling interrupts |
|
|
Term
What does a process look like in memory |
|
Definition
At the top it has the stack, which grows down; below that, a heap that grows up; then the static data segment; and then the code. The static data segment holds global variables and variables declared as static |
|
|
Term
What does a process include? |
Definition
Has the code, PC, SP, values of CPU registers, PID, etc |
|
|
Term
Process life cycle |
Definition
New goes to Ready. Ready goes to Running. Running can go to Terminated, Ready, or Blocked. Blocked goes to Ready.
New: OS is setting up the process. Ready: process is waiting to be scheduled so it can run. Running: process is running. Blocked: process is waiting for an event to happen. Terminated: process is being destroyed |
|
|
Term
Process Control Block (PCB) |
|
Definition
What the OS uses to keep track of a process. It contains the process state, PID, PC, SP, etc. |
|
|
Term
fork() |
Definition
Splits the process into two: the parent process that called the fork and a child process that is an exact copy of the parent. You'll know you're the parent if the returned PID is not 0; otherwise you're the child. Both copies continue executing at the same line. Every fork call doubles the number of processes, so n fork calls create 2^n processes (it will depend on if statements, though) |
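A minimal sketch of the fork pattern in Python, assuming a POSIX system where os.fork is available (the function name fork_demo and the exit code 7 are just illustrative):

```python
import os

def fork_demo():
    pid = os.fork()
    if pid == 0:
        # child: fork returned 0 here
        os._exit(7)                     # exit with a recognizable status code
    else:
        # parent: fork returned the child's (nonzero) PID
        _, status = os.waitpid(pid, 0)  # reap the child
        return os.WEXITSTATUS(status)

print(fork_demo())  # 7
```

Both processes run the same code after the fork; only the return value of fork tells them apart.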
|
|
Term
exec() |
Definition
Loads a new program into an existing process (often one just created by fork). Overwrites everything except the PID. Essentially, it's the SAME process as before but running a different program |
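The classic fork + exec + wait pattern can be sketched in Python, assuming a POSIX system with the standard `true` and `false` utilities on the PATH (the helper name spawn is illustrative):

```python
import os

def spawn(argv):
    pid = os.fork()
    if pid == 0:
        os.execvp(argv[0], argv)   # replaces the child's program image; PID is unchanged
        os._exit(127)              # reached only if exec itself failed
    _, status = os.waitpid(pid, 0) # parent waits for the child to terminate
    return os.WEXITSTATUS(status)

print(spawn(["true"]))   # 0
```

This is essentially what a shell does for every command you type.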
|
|
Term
wait() |
Definition
Forces the parent to wait for its child to terminate. |
|
|
Term
exit() |
Definition
Terminates a process. If the process has a parent, it stores its return value, becomes a zombie, and waits for the parent to request the value |
|
|
Term
kill() |
Definition
Specifically called by the parent to terminate a child by sending it a signal |
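A sketch of a parent killing a child with a signal, assuming a POSIX system:

```python
import os
import signal
import time

pid = os.fork()
if pid == 0:
    while True:                      # child just idles until it's killed
        time.sleep(0.05)
else:
    os.kill(pid, signal.SIGTERM)     # parent sends the termination signal
    _, status = os.waitpid(pid, 0)   # reap the child
    print(os.WIFSIGNALED(status))    # True: the child died from a signal
```

The child never exits on its own; its termination is forced entirely from outside by the signal.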
|
|
Term
The shell |
Definition
An example of a parent/child relationship. For every command you type, the shell forks a new process and execs that command |
|
|
Term
Context switch |
Definition
Change from one process to another: switch to the kernel; the kernel saves the process state in the current process's PCB; the kernel chooses a new process to run; the kernel loads the new process's state from its PCB; the new process runs |
|
|
Term
OS runs the scheduler when |
|
Definition
A process switches from running to blocked, a process is created or terminated, or an interrupt occurs |
|
|
Term
Preemptive scheduling |
Definition
Scheduler runs during interrupts |
|
|
Term
Throughput |
Definition
Number of processes completed in a unit of time |
|
|
Term
Turnaround time |
Definition
Length of time to run a process from initialization to completion |
|
|
Term
Response time |
Definition
Time between issuing a command and getting a result |
|
|
Term
CPU utilization |
Definition
Amount of time CPU is busy |
|
|
Term
Round robin scheduling |
Definition
Gives each job a time slice and after each time slice, moves the running process to the back of the ready queue |
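A deterministic sketch of this policy (the function name round_robin and the job dictionary are illustrative; burst times are in abstract time units):

```python
from collections import deque

def round_robin(bursts, quantum):
    """bursts: {job name: remaining time}; returns the completion order."""
    ready = deque(bursts.items())
    order = []
    while ready:
        name, left = ready.popleft()     # run the job at the front of the ready queue
        left -= min(quantum, left)       # it runs for at most one time slice
        if left == 0:
            order.append(name)           # finished
        else:
            ready.append((name, left))   # unfinished: back of the ready queue
    return order

print(round_robin({"A": 3, "B": 1, "C": 2}, 1))  # ['B', 'C', 'A']
```

Short jobs finish early under round robin, which is why it gives good response time.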
|
|
Term
Multilevel Feedback Queues |
|
Definition
There are multiple queues representing different priorities, and the jobs in the highest-priority queue run first using round robin. This can lead to starvation if there's a job that takes too long, so jobs that don't finish within their time slice drop to a lower queue, while a job that finishes early moves up a queue |
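A simplified sketch of the demotion half of this policy (jobs only drop down; promotion back up is omitted; the function name mlfq and the example jobs are illustrative):

```python
from collections import deque

def mlfq(jobs, quanta):
    """jobs: {name: burst time}; quanta: time slice for each priority level
    (index 0 is highest priority). Returns the completion order."""
    queues = [deque() for _ in quanta]
    for item in jobs.items():
        queues[0].append(item)               # everyone starts at the top
    order = []
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)  # highest nonempty queue
        name, left = queues[level].popleft()
        left -= min(quanta[level], left)     # run for this level's time slice
        if left == 0:
            order.append(name)
        else:
            # didn't finish in time: drop to a lower-priority queue
            queues[min(level + 1, len(queues) - 1)].append((name, left))
    return order

print(mlfq({"short": 1, "long": 5}, [1, 2, 4]))  # ['short', 'long']
```

Short interactive jobs stay in the high-priority queues while CPU-bound jobs sink, which approximates shortest-job-first without knowing burst times in advance.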
|
|
Term
Thread |
Definition
An execution stream within a SINGLE PROCESS. Each process can have multiple threads. Each thread has its own CPU registers and stack, but other than that, they share the same address space |
|
|
Term
Thread life cycle |
Definition
Is the EXACT SAME as the process life cycle |
|
|
Term
Thread Control Block (TCB) |
|
Definition
Similar to the PCB, except it contains a pointer to the PCB. Holds the stack pointer, PC, TID, and register values |
|
|
Term
Thread context switch |
Definition
Is a lot cheaper than process context switch because it doesn't have to switch address spaces |
|
|
Term
Kernel-level threads |
Definition
Threads that the OS knows about. Every process has at least 1 kernel thread (1-to-1 relationship). Requires a small context switch |
|
|
Term
User-level threads |
Definition
A thread the OS does not know about, because the OS only schedules the process, not the threads within it (many-to-1 model). Requires no context switch between threads. Managed by the user through library calls. Disadvantage: when one user-level thread blocks on a system call, all of them block |
|
|
Term
Race condition |
Definition
A race condition happens when you get different results based on the order of scheduling |
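A sketch of the classic lost-update race and its fix (the function name add and the counts are illustrative; in CPython an unlocked `counter += 1` can still lose updates because it compiles to a separate read, add, and write):

```python
import threading

counter = 0
lock = threading.Lock()

def add(n):
    global counter
    for _ in range(n):
        # without the lock, "counter += 1" is a racy read-modify-write:
        # two threads can read the same value and one update is lost
        with lock:
            counter += 1

threads = [threading.Thread(target=add, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # always 400000 with the lock; unpredictable without it
```

The lock makes the increment atomic with respect to the other threads, so the result no longer depends on the scheduling order.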
|
|
Term
Busy waiting |
Definition
Consuming CPU resources without doing any work |
|
|
Term
Atomic operation |
Definition
An operation that cannot be interrupted. More specifically, an operation during which no context switch can occur; context switches happen not only on interrupts but on system calls and exceptions as well |
|
|
Term
Critical section |
Definition
A piece of code that only one thread can execute at a time |
|
|
Term
4 Properties of Correctness for critical section |
|
Definition
1. Safety: only one thread in the critical section 2. Liveness: if no threads are executing a critical section, and a thread wishes to enter a critical section, that thread must be guaranteed to eventually enter the critical section 3. Bounded waiting: if a thread wishes to enter a critical section, then there exists a bound on the number of other threads that may enter the critical section before that thread does 4. Failure atomicity: it's okay for a thread to die in the critical section |
|
|
Term
Semaphore |
Definition
A more general version of a lock. Each semaphore has a value and a queue to hold waiting threads |
|
|
Term
Monitor |
Definition
Protects shared data. Has the property that it guarantees mutual exclusion. Condition variables release the lock temporarily. Must acquire the lock at the beginning of every function and release it at the end. Has only one lock |
|
|
Term
Binary semaphore |
Definition
Is just a lock. Has two values. Initial value is always free |
|
|
Term
Counting semaphore |
Definition
Represents a resource with many units available. Initial count is typically the number of resources |
|
|
Term
Semaphore down (P) |
Definition
Equivalent to wait. Decrements the semaphore value. If the value is >= 0, return; else, wait |
|
|
Term
Semaphore up (V) |
Definition
Equivalent to signal. Increments the semaphore value and wakes up a thread that's waiting on the semaphore. Essentially, if the value is negative, there are no resources available, so the thread will have to wait |
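A sketch of a counting semaphore guarding a resource with two units, using Python's threading.Semaphore (note: Python's semaphore blocks at zero rather than going negative, but acquire/release behave like down/up; the worker and the peak counter are illustrative):

```python
import threading

units = threading.Semaphore(2)   # a resource with 2 identical units
guard = threading.Lock()         # protects the bookkeeping below
in_use = 0
peak = 0

def worker():
    global in_use, peak
    units.acquire()              # "down" / P: take a unit, block if none left
    with guard:
        in_use += 1
        peak = max(peak, in_use) # record how many units were ever held at once
    with guard:
        in_use -= 1
    units.release()              # "up" / V: return the unit, wake a waiter

threads = [threading.Thread(target=worker) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(peak)  # never exceeds 2
```

However the five workers interleave, the semaphore guarantees at most two hold a unit at any instant.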
|
|
Term
Condition variable |
Definition
Enables threads to wait for changes to shared data Has 3 ops: wait, signal, broadcast |
|
|
Term
Comparing Semaphores and Monitors |
|
Definition
Condition variables don't have history while semaphores do, because semaphores keep track of their values. Results from semaphore ops are the same regardless of order of execution, while condition variable results aren't.
sema down is analogous to wait; sema up is analogous to signal |
|
|
Term
signal() vs broadcast() |
Definition
signal() is preferable when at most one waiting thread can make progress; broadcast() is preferable when multiple waiting threads may be able to make progress |
|
|
Term
Synchronization programming rules |
Definition
Always do things the same way. Always synchronize with locks and condition variables. Always acquire the lock at the beginning of a function and release it at the end. Always hold the lock when operating on a condition variable. Always wait in a while loop. Never sleep |
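The rules above can be sketched with a bounded-buffer producer/consumer (the names buf, produce, and consume are illustrative; Python's threading.Condition bundles the lock and the condition variable):

```python
import threading
from collections import deque

buf = deque()
CAP = 3
cond = threading.Condition()     # one lock + a condition variable

def produce(item):
    with cond:                   # always hold the lock around the condition variable
        while len(buf) >= CAP:   # always wait in a while loop: re-check the predicate
            cond.wait()          # releases the lock while sleeping, reacquires on wakeup
        buf.append(item)
        cond.notify_all()        # broadcast: waiters re-check their predicate

def consume():
    with cond:
        while not buf:
            cond.wait()
        item = buf.popleft()
        cond.notify_all()
        return item

results = []
def consumer_loop():
    for _ in range(5):
        results.append(consume())

consumer = threading.Thread(target=consumer_loop)
consumer.start()
for i in range(5):
    produce(i)
consumer.join()
print(results)  # [0, 1, 2, 3, 4]
```

The while loop matters: a woken thread may find the predicate false again (another thread got there first), so it must re-check before proceeding.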
|
|
Term
Deadlock |
Definition
Occurs when two or more threads are waiting for an event that can only be generated by those same threads. Deadlock implies starvation, but starvation doesn't imply deadlock |
|
|
Term
Necessary Conditions for Deadlock |
|
Definition
All 4 have to occur: 1. Bounded resources: a finite number of threads or processes can use a resource, and resources are finite. 2. Hold and wait: at least one thread or process holds a resource and is waiting for other resources to become available, while a different thread holds those resources. 3. No preemption: a thread or process only releases a resource voluntarily; another thread, process, or the OS cannot force the thread or process to release the resource. 4. Circular wait |
|
|
Term
Deadlock prevention |
Definition
Just break one of the four conditions. The easiest is to establish an order on the resources to prevent circular waiting. |
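A sketch of the lock-ordering approach (the lock names and task function are illustrative): every thread acquires the locks in the same fixed global order, so a cycle of waiters can never form.

```python
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()
done = []

def task(name):
    # every thread takes the locks in the same global order (a, then b),
    # so no thread can hold b while waiting for a: circular wait is impossible
    with lock_a:
        with lock_b:
            done.append(name)

threads = [threading.Thread(target=task, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(done))
```

If one thread instead took lock_b first and another took lock_a first, each could end up holding one lock while waiting forever for the other.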
|
|
Term
Resource allocation graph |
Definition
If there is no cycle, there's no deadlock |
|
|
Term
If the graph has a cycle, deadlock might exist. Why wouldn't it? |
|
Definition
If there are multiple instances of a resource and any instance of a resource involved in the cycle is held by a thread/process that is not in the cycle, progress might be made when that thread/process releases the resource |
|
|
Term
Two-phase locking |
Definition
Requires that the thread: acquire all locks it will need, make the necessary changes, then commit and release the locks |
|
|
Term
Transactions |
Definition
Group actions together so that they are atomic, serializable, and durable |
|
|
Term
Durability |
Definition
Durability is the quality that once an action happens, it sticks. You can achieve durability by executing an action and deciding whether to commit or roll back based on the results |
|
|