Term
Logical/virtual address space |
|
Definition
Collection of addresses that the process can access (this is the process’s view) |
|
|
Term
Physical address space |
|
Definition
Collection of memory addresses supported by the hardware |
|
|
Term
|
Definition
A chunk of memory assigned to a process |
|
|
Term
Uniprogramming |
|
Definition
One process executes at a time. The process executes in a contiguous section of memory. The maximum address available is memory size minus OS size. For a simple example, if your total memory is 2400 bytes and the OS takes 200 bytes, then the max address is 2200 bytes |
|
|
Term
3 aspects of multiple programs sharing memory |
|
Definition
Transparency, safety, efficiency |
|
|
Term
Transparency with multiple programs |
|
Definition
We want multiple processes to coexist in memory and no process should be aware that memory is shared |
|
|
Term
Safety with multiple programs |
|
Definition
Processes shouldn't be able to corrupt each other |
|
|
Term
Efficiency with multiple programs |
|
Definition
Performance of the CPU and memory shouldn't be degraded badly due to sharing |
|
|
Term
|
Definition
With this concept, the OS is placed at the highest memory addresses and the process starts at address 0. When the OS loads the process, it allocates a contiguous block of memory for it. If no such block fits, the OS waits for a process to terminate.
The smallest physical address of the process is the base (relocation) address and the largest physical address the process can access is the limit address. |
|
|
Term
Static relocation |
|
Definition
The OS adjusts the process's addresses to match its location in memory. But then the process cannot be moved once it starts executing |
|
|
Term
Dynamic relocation |
|
Definition
Remember this is done in parallel: hardware adds the base register to the virtual address to get the physical address, while hardware compares the virtual address with the limit register. If the address is not less than the limit, an exception occurs |
|
|
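The base-and-limit check above can be sketched in Python (a toy model; `relocate` and its parameters are illustrative, not real MMU behavior, which does both steps at once in hardware):

```python
def relocate(virtual_addr, base, limit):
    """Dynamic relocation: add the base register, check against the limit.

    Toy model for illustration -- real hardware performs both in parallel.
    """
    if virtual_addr >= limit:        # address must be strictly less than the limit
        raise MemoryError("address exception: out of bounds")
    return base + virtual_addr       # physical = base + virtual

# A process loaded at base 4000 with limit 1200 (legal addresses 0..1199):
print(relocate(100, base=4000, limit=1200))  # -> 4100
```

Because the process only ever sees addresses 0..limit-1, the OS can move it by changing the base register, which is why protection and relocation are easy under this scheme.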
Term
Advantages to Dynamic Relocation |
|
Definition
Protection is easy. The OS can easily move a process during execution |
|
|
Term
Disadvantages to Dynamic Relocation |
|
Definition
Sharing is hard. Requires contiguous allocation |
|
|
Term
Two types of wasted space |
|
Definition
External fragmentation, internal fragmentation |
|
|
Term
External fragmentation |
|
Definition
Unused memory between allocated blocks. Say you allocate memory at address 400 and then allocate memory at address 200, but the memory between addresses 300 and 400 stays unused |
|
|
Term
Internal fragmentation |
|
Definition
Unused memory within a unit of allocation |
|
|
Term
First Fit Memory Allocation |
|
Definition
Allocates the first available free block that is large enough. The free list is sorted by address. On deallocation, you have to check whether the freed block can be merged with adjacent free blocks
Simple to implement but has external fragmentation: a free block of, say, 2 bytes may never get allocated because every request is much larger |
|
|
Term
Best Fit Memory Allocation |
|
Definition
Get the smallest available free block that is large enough. The free list is sorted by size
Simple, but still has external fragmentation |
|
|
Term
Worst Fit Memory Allocation |
|
Definition
Get the largest available free block. The free list is sorted by size
Still has external fragmentation |
|
|
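The first-fit policy above can be sketched as a toy Python model (the `(addr, size)` free-list representation is an assumption for illustration, not a real allocator):

```python
def first_fit(free_list, size):
    """First fit: scan the address-sorted free list and take the first block
    that is big enough, splitting off any remainder.

    free_list holds (addr, size) tuples -- a simplified model."""
    for i, (addr, blk_size) in enumerate(free_list):
        if blk_size >= size:
            if blk_size == size:
                free_list.pop(i)                               # perfect fit
            else:
                free_list[i] = (addr + size, blk_size - size)  # keep the tail free
            return addr
    return None  # no block is large enough

free = [(200, 100), (400, 50), (600, 300)]
print(first_fit(free, 80))  # -> 200, leaving (280, 20) on the free list
```

The leftover `(280, 20)` block illustrates the external-fragmentation problem: it may never satisfy another request.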
Term
2 ways to eliminate fragmentation |
|
Definition
Compaction: relocate programs to combine holes. Swapping: roll processes out to disk and reclaim their memory |
|
|
Term
|
Definition
allows the total memory being used by all processes to exceed the amount of physical memory available |
|
|
Term
Overlays |
|
Definition
Back in the day, if a program was too big to fit into memory, it was divided into overlays that were swapped in and out |
|
|
Term
Virtual memory |
|
Definition
The process's view of memory. Can be larger than physical memory. Divides the process address space into pages |
|
|
Term
Paging |
|
Definition
The scheme in which a process's virtual address space is divided into fixed-size pages. The address space is stored on disk. Physical memory is viewed as equal-sized frames. Pages are moved into frames in memory |
|
|
Term
Frames |
|
Definition
Physical memory divided into equal sized blocks |
|
|
Term
Physical address format |
|
Definition
First bits are the frame number, last bits are the offset |
|
|
Term
Virtual address format |
|
Definition
First bits are the page number, last bits are the offset |
|
|
Term
True or false: Pages are contiguous in virtual address space but don't have to be mapped to contiguous frames in physical memory |
|
Definition
True |
|
|
Term
Page table |
|
Definition
Keeps track of the mapping of pages to page frames. Each entry in a page table holds a valid bit and the frame number |
|
|
Term
Simple virtual address translation steps |
|
Definition
Given a virtual address, extract the page number and use it to index the page table, yielding the page table entry (PTE). If the entry is valid (a hit), extract the frame number from the PTE and concatenate it with the offset extracted from the virtual address. This forms the physical address |
|
|
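The translation steps above can be sketched in Python (a toy model; the 4096-byte page size and the dict-based page table are assumptions for illustration):

```python
PAGE_SIZE = 4096  # assumed page size; the offset is the low 12 bits

def translate_page(vaddr, page_table):
    """Page number = high bits, offset = low bits. The frame number from the
    PTE replaces the page number; the offset passes through unchanged."""
    page = vaddr // PAGE_SIZE
    offset = vaddr % PAGE_SIZE
    frame = page_table[page]          # KeyError here models an unmapped page
    return frame * PAGE_SIZE + offset

pt = {0: 5, 1: 2}                     # page 0 -> frame 5, page 1 -> frame 2
print(translate_page(4100, pt))       # page 1, offset 4, frame 2 -> 8196
```

Note that only the page number changes during translation; the offset within the page is identical in the virtual and physical address.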
Term
True or false: Each process has its own page table |
|
Definition
True |
|
|
Term
Reference bit |
|
Definition
Indicates if the page was referenced recently |
|
|
Term
How do we improve virtual address to physical address speed? |
|
Definition
Use translation lookaside buffers (TLBs!) |
|
|
Term
Translation lookaside buffer (TLB) |
|
Definition
A cache that holds recently used virtual-to-physical address translations |
|
|
Term
What Happens on a TLB Miss? |
|
Definition
Software-managed TLB: switch to the kernel, which walks the page table, translates, and loads the new TLB entry. The TLB is flushed on a context switch
Hardware-managed TLB: the hardware does the lookup in the page table and fills the TLB itself; an exception handler is called only if the page is not present. The TLB might still be flushed on a context switch |
|
|
Term
Possible performance issues with TLB paging |
|
Definition
Space! The page table can be very large, depending on the address-space size and the size of pages |
|
|
Term
Possible performance issues with normal paging |
|
Definition
Speed - requires two memory references |
|
|
Term
How to deal with large page tables |
|
Definition
Multi-level page tables and inverted page tables |
|
|
Term
Virtual address translation steps with TLB (p. 792) |
|
Definition
The lower bits of the page number are used as the TLB index; the higher bits of the page number are used as the TLB tag. You use the index to find the right set and then use the tag to find the right entry. If it's valid, the frame number is returned. If not, you go through the normal virtual-to-physical translation and the PTE is stored in the TLB |
|
|
Term
Multi-level page tables |
|
Definition
Adds additional levels of indirection to the page table by sub-dividing the page number into k parts, where each part indexes one level
The first part selects an entry in the first-level table; that entry points to the next table. The second part selects an entry in that table, which points to the next table, and so on, until the kth table gives you the frame number |
|
|
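A k-level lookup can be sketched as a toy Python model (the 2-bit index fields and dict-based tables are assumptions for illustration):

```python
def multilevel_lookup(page_number, root, bits_per_level=(2, 2)):
    """Split the page number into k index fields; each index selects an entry
    in the current table, which points at the next table. The last level
    holds the frame number. Toy model using nested dicts as tables."""
    indices = []
    for bits in reversed(bits_per_level):
        indices.append(page_number & ((1 << bits) - 1))  # peel off low field
        page_number >>= bits
    table = root
    for idx in reversed(indices):
        table = table[idx]            # descend one level; last step is the frame
    return table

# Two levels, 2 bits each: page 0b0110 -> level-1 index 0b01, level-2 index 0b10
root = {0b01: {0b10: 7}}
print(multilevel_lookup(0b0110, root))  # -> frame 7
```

The space saving comes from only materializing inner tables for regions of the address space that are actually used.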
Term
Inverted page table |
|
Definition
Also called page registers. Each frame is associated with an entry. The entry tells you whether the frame is occupied, which page occupies it, and the protection bits |
|
|
Term
True or false: Page size == frame size |
|
Definition
True |
|
|
Term
Virtual address translation for inverted page table procedure |
|
Definition
Instead of searching entire page table you can use a hash table to look up pages. Hash function will look up the corresponding page entry (if it's occupied) |
|
|
Term
|
Definition
Its code, data, and stack
Some of it is resident in memory and the rest is stored on disk |
|
|
Term
Page fault |
|
Definition
References to pages not mapped into memory |
|
|
Term
Page fault handling steps |
|
Definition
Interrupt handler runs. OS blocks the running process. OS issues a read of the unmapped page. OS runs another process in the meantime. After the read completes, OS maps the missing page into memory. OS restarts the faulting process |
|
|
Term
Is there external fragmentation in paging? |
|
Definition
No, but there is internal fragmentation (unused space within a process's last page) |
|
|
Term
Demand paging |
|
Definition
OS loads a page the first time it is referenced |
|
|
Term
Pre-paging simple definition |
|
Definition
OS guesses in advance which pages the process will need and pre-loads them |
|
|
Term
|
Definition
A process arrives needing k pages. If k frames are free, the OS allocates all k. The OS puts the first page in a frame and the frame number in the page table, flushes the TLB, and starts the process. During execution, the TLB gets updated |
|
|
Term
Temporal locality |
|
Definition
If a process accesses an item in memory, it will tend to reference that same item again |
|
|
Term
Spatial locality |
|
Definition
If a process accesses an item in memory, it will tend to reference an adjacent item soon |
|
|
Term
FIFO page replacement |
|
Definition
Throws out the oldest page |
|
|
Term
Optimal (MIN) page replacement |
|
Definition
Look into the future and throw out the page that won't be referenced for the longest time |
|
|
Term
LRU page replacement |
|
Definition
Throw out the page that hasn't been used in the longest time
One way to implement this is to keep a time stamp for each page from when it was last accessed |
|
|
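One way to sketch LRU in Python (a simulation, not a kernel implementation; `OrderedDict` ordering stands in for the per-page time stamps):

```python
from collections import OrderedDict

def lru_faults(refs, num_frames):
    """Simulate LRU: each reference moves the page to the 'most recent' end;
    a fault with full memory evicts the least recently used page."""
    frames = OrderedDict()
    faults = 0
    for page in refs:
        if page in frames:
            frames.move_to_end(page)        # refresh its "time stamp"
        else:
            faults += 1
            if len(frames) == num_frames:
                frames.popitem(last=False)  # evict the least recently used
            frames[page] = True
    return faults

print(lru_faults([1, 2, 3, 1, 4], 3))  # -> 4 (the reference to 4 evicts 2)
```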
Term
Clock page replacement |
|
Definition
Pages sit in a circular list swept by a clock hand. On a fault, look at the reference bit under the hand: if it's 1, change it to 0 and advance; if it's 0, replace that page. So you only move the clock pointer when you change a 1 to a 0 |
|
|
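A single clock eviction can be sketched as a toy Python model (the list-based frame table and `clock_replace` helper are assumptions for illustration):

```python
def clock_replace(frames, ref_bits, hand):
    """One eviction with the clock algorithm: sweep from `hand`, clearing
    reference bits that are 1, and evict at the first 0 bit. Returns the
    index of the evicted frame and the new hand position."""
    n = len(frames)
    while True:
        if ref_bits[hand]:
            ref_bits[hand] = 0          # second chance: clear the bit, move on
            hand = (hand + 1) % n
        else:
            victim = hand
            hand = (hand + 1) % n       # the hand moves past the victim
            return victim, hand

bits = [1, 0, 1]
print(clock_replace(['A', 'B', 'C'], bits, 0))  # -> (1, 2): frame B is evicted
```

Because a page's reference bit is cleared rather than the page being evicted on the first pass, recently used pages survive one full sweep of the hand.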
Term
Second Chance Page Replacement |
|
Definition
Checks both the referenced and modified (dirty) bits, written as (referenced, modified), to decide which page to replace. It replaces when it finds a (0, 0) pair. (0, 1): write the page back and clear the modified bit. (1, 0): clear the referenced bit |
|
|
Term
Local page replacement |
|
Definition
Only considers pages owned by the faulting process |
|
|
Term
Global page replacement |
|
Definition
Considers all pages, not just the ones owned by the faulting process |
|
|
Term
Working set |
|
Definition
basically the set of pages the process is using (the ones recently referenced) |
|
|
Term
Working Set Page Replacement |
|
Definition
Keep track of the pages referenced in the last T references (the working-set window)
This works quite differently from the other algorithms
A page stays in the working set while it is being referenced or is still within its time window, and drops out once it expires
Page faults occur when a page is referenced while it is not in the working set |
|
|
Term
Thrashing |
|
Definition
Occurs when pages are tossed out while they are still in use. Essentially, you're constantly swapping pages out because you don't have enough memory |
|
|
Term
Degree of multiprogramming |
|
Definition
refers to the number of processes that can reside in memory at one time |
|
|
Term
|
Definition
Translations are time consuming. Requires hardware support to be efficient |
|
|
Term
Two approaches to memory management |
|
Definition
Explicit memory management: the program has to manage memory itself (e.g., C)
Automatic memory management (garbage collection): the program allocates memory but the system reclaims it (e.g., Java) |
|
|
Term
Explicit Memory Management |
|
Definition
The user requests memory and requests deallocation. The runtime system identifies an appropriate location for the allocation and frees allocations on request |
|
|
Term
Bump allocation |
|
Definition
Bytes are allocated and the pointer is moved just past the allocation. Allocations are contiguous |
|
|
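Bump allocation can be sketched as a toy Python model (addresses are plain integers and `BumpAllocator` is illustrative, not a real runtime allocator):

```python
class BumpAllocator:
    """Bump allocation: keep one pointer; each allocation returns the current
    pointer and bumps it just past the new object, so allocations are
    contiguous. Toy model over a fixed-size range of addresses."""
    def __init__(self, heap_size):
        self.ptr = 0
        self.end = heap_size

    def alloc(self, nbytes):
        if self.ptr + nbytes > self.end:
            return None                # out of space: time for GC or failure
        addr = self.ptr
        self.ptr += nbytes             # bump past the allocation
        return addr

a = BumpAllocator(100)
print(a.alloc(30), a.alloc(30), a.alloc(50))  # -> 0 30 None
```

Note there is no free list at all: freeing individual objects is impossible, which is why bump allocation pairs naturally with copying/compacting collectors.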
Term
A free block in memory has: |
|
Definition
pointer to the next free block, size of the free block, and the space. The address pointing to the beginning of the space is what gets returned to the user |
|
|
Term
free() |
|
Definition
Gets a pointer to a block passed in. The function puts this block on the free list and, if needed, coalesces it with neighbors |
|
|
Term
Giving a block that's a perfect fit to the user |
|
Definition
Remove the block from the free list, update the previous free block's pointer to point to the next free block, and return a pointer to the "space" section of the block |
|
|
Term
Giving a block that's too big for the user |
|
Definition
Divide the free block into two. Give the user a piece of the requested size and keep the remainder on the free list |
|
|
Term
Keep in mind that when you're freeing stuff from the heap, those pointers are still pointing to those memory locations. It's just that what they're pointing to is considered garbage |
|
Definition
|
|
Term
Exact-fit binning |
|
Definition
Divide the free list by chunk size into bins
So the first bin will have all the blocks of size 1, the third bin will have all the blocks of size 3 and the last bin will have all the larger free chunks. This way once you find that bin, you can quickly get a free block |
|
|
Term
Range binning |
|
Definition
Just like the exact fit binning strategy except each bin is a small range of sizes. You have a smaller number of bins with this strategy |
|
|
Term
The challenge of garbage collection |
|
Definition
Identifying what is garbage and what is not |
|
|
Term
Garbage |
|
Definition
Any object that the program can't reach |
|
|
Term
Two methods to find reachable objects |
|
Definition
Tracing and reference counting |
|
|
Term
Reference counting |
|
Definition
Count the number of references to an object. If it's 0, the object is garbage |
|
|
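Reference counting can be sketched as a toy Python model (`RefCounted` is illustrative; note that plain reference counting cannot reclaim cycles of objects that reference each other):

```python
class RefCounted:
    """Each object tracks how many references point at it; when the count
    drops to zero, the object is garbage. Toy model for illustration."""
    def __init__(self):
        self.count = 0

    def add_ref(self):
        self.count += 1

    def release(self):
        self.count -= 1
        return self.count == 0   # True means the object is now garbage

obj = RefCounted()
obj.add_ref(); obj.add_ref()
print(obj.release())  # -> False (one reference remains)
print(obj.release())  # -> True  (count hit zero: garbage)
```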
Term
Tracing |
|
Definition
Mark all objects reachable from the globals, stack, and registers as live. Also mark any objects that can be reached from these objects as live. Any objects not marked are considered dead |
|
|
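The mark phase can be sketched in Python (a toy model; the dict mapping each object to its referents stands in for the heap):

```python
def mark_reachable(roots, edges):
    """Tracing: mark everything reachable from the roots (globals, stack,
    registers); anything left unmarked is garbage.
    `edges` maps object -> list of objects it references."""
    live = set()
    stack = list(roots)
    while stack:
        obj = stack.pop()
        if obj not in live:
            live.add(obj)
            stack.extend(edges.get(obj, []))  # follow outgoing references
    return live

heap = {'a': ['b'], 'b': [], 'c': ['a'], 'd': []}
print(mark_reachable(['a'], heap))  # -> {'a', 'b'}; 'c' and 'd' are garbage
```

Note that `c` references a live object (`a`) but is itself unreachable from the roots, so it is still garbage; reachability only flows outward from the roots.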
Term
3 types of garbage collectors |
|
Definition
Mark-sweep, mark-compact, semi-space |
|
|
Term
Mark-sweep collector |
|
Definition
Allocation takes blocks one by one off a free list organized by size using binning and places objects in the heap. As soon as the heap fills up, GC starts. Mark phase: find the live objects. Sweep phase: put dead objects' space back on the free list
Has poor locality but very fast collection |
|
|
Term
Mark-compact collector |
|
Definition
Bump allocation + trace + compact
Objects are allocated using the bump allocation. Once the heap fills up, a trace is done and live objects are compacted towards the beginning of the heap.
Has good locality but expensive collection |
|
|
Term
Semi-space collector |
|
Definition
The heap is divided into a "from" space and a "to" space
You bump-allocate in the from-space until it fills up. Then, starting from the roots, each reachable object is copied into the to-space, a forwarding pointer to its new copy is left at its old location in the from-space, and references are updated as objects are copied. Once everything live has been copied and all pointers updated, the from-space is freed, allocation continues in the to-space, and the two spaces swap roles for the next collection
Has good locality but is not space efficient (only half the heap is usable at a time) |
|
|
Term
Generational hypothesis |
|
Definition
Young objects die more quickly than older ones |
|
|
Term
Generational Heap Organization |
|
Definition
Divide the heap into two spaces: young and old (plus a reserve "from" space). Start allocating into the young space. When the young space fills up, move the live objects to the old space and clear the young space. Repeat until the old space fills up. When it does, collect the live objects from both spaces and copy them into the reserve space, which then takes over as the old space |
|
|
Term
Taxonomy of Design Choices of GCs |
|
Definition
Incrementality, composability, concurrency, parallelism, distribution |
|
|
Term
Concurrency (GC) |
|
Definition
Mutator and GC operate concurrently |
|
|
Term
Parallelism (GC) |
|
Definition
Concurrency among multiple GC threads |
|
|
Term
Bus |
|
Definition
Allows a device to communicate with the CPU |
|
|
Term
Components of I/O hardware |
|
Definition
Bus, device port, controller, the device itself |
|
|
Term
Device port |
|
Definition
Has 4 registers: status, control, data-in, data-out |
|
|
Term
Controller |
|
Definition
Receives commands from the system bus, translates them, and reads/writes data from and to the system bus |
|
|
Term
Device |
|
Definition
Performs input, output, or both. Example: keyboard |
|
|
Term
Three ways for communication to take place between I/O devices and their controller |
|
Definition
Polling (Programmed I/O) Interrupts Direct Memory Access |
|
|
Term
Polling (Programmed I/O) |
|
Definition
The CPU busy-waits until the status of the I/O device is idle. The CPU then sets the command register and sets the status to command-ready. The controller reacts to this status, changes it to busy, and performs the command. After success, the controller changes the status back to idle
Advantage: handles data quickly |
|
|
Term
Interrupt-driven I/O |
|
Definition
The device interrupts the CPU when it's done with an I/O operation
The CPU determines which device caused the interrupt. If the last command was an input operation, it gets the data. The CPU then starts the next operation for that device (simple interrupt handling) |
|
|
Term
Precise interrupt |
|
Definition
The PC is saved in a known place. All instructions before the one pointed to by the PC have fully executed. No instruction after the PC has been executed. The execution state of the instruction at the PC is known |
|
|
Term
Imprecise interrupt |
|
Definition
Any interrupt that is not precise |
|
|
Term
Direct Memory Access (I/O) |
|
Definition
Instead of the device using its own controller, it uses the DMA controller to write directly to memory. The CPU tells the DMA controller the source and destination of the transfer. The DMA controller does the transfer and interrupts the CPU when the entire transfer is complete. This performs better than the CPU doing the transfer itself because the CPU can go off and continue to do other work |
|
|
Term
Device buffering |
|
Definition
Devices contain a small buffer where they can temporarily store data before transferring to/from the CPU Optimizes speed |
|
|
Term
Disk cache |
|
Definition
Improves disk performance by reducing the number of disk accesses. Keeps recently used disk blocks in main memory |
|
|
Term
Write-through |
|
Definition
Write to all levels of memory that contains the disk block |
|
|
Term
Write-back |
|
Definition
Only write to the fastest memory containing the block |
|
|
Term
I/O Services provided by the OS |
|
Definition
I/O scheduling, access control, device drivers |
|
|
Term
I/O software layers (top to bottom) |
|
Definition
User process, device-independent software, device drivers, interrupt handlers, hardware |
|
|
Term
Steps of a read request (with DMA) |
|
Definition
The user requests a read. The OS checks whether the data is already in the buffer. If not, the device driver tells the DMA controller what to do and blocks itself. The DMA controller transfers the data and interrupts the CPU when complete. The CPU transfers the data to the user process and puts the process on the ready queue |
|
|
Term
Secondary storage |
|
Definition
Can last a very long time and are very large and cheap |
|
|
Term
Two types of secondary storage |
|
Definition
Magnetic disks, flash memory |
|
|
Term
Tracks |
|
Definition
concentric rings on disk split into sectors. Think of tracks like the lanes on a track field. Each lane is a different track |
|
|
Term
Sector |
|
Definition
the minimum unit of transfer from the disk |
|
|
Term
Arm assembly |
|
Definition
the arms and read/write heads in a disk pack (set of platters) |
|
|
Term
Total read time for a disk: |
|
Definition
seek time + rotation time + transfer time |
|
|
Term
Seek time |
|
Definition
The time it takes for head to move to appropriate track |
|
|
Term
Rotation time |
|
Definition
Time it takes for the right sector to appear under head |
|
|
Term
Transfer time |
|
Definition
Time it takes to move the bytes from disk to memory |
|
|
Term
Head switch time |
|
Definition
time to move from a track on one surface to the same track on a different surface |
|
|
Term
Surface transfer time |
|
Definition
time to transfer one or more sectors to/from surface after head reads/writes the first sector |
|
|
Term
Host transfer time |
|
Definition
time to transfer data between host memory and disk buffer |
|
|
Term
How to calculate a read request |
|
Definition
Figure out the seek time (should be given). Figure out the rotation time (has to be calculated; on average it is half a full rotation). Transfer time may be given but is usually so small that it can be left out. Add these three times together and multiply by the number of reads |
|
|
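The arithmetic above can be sketched in Python (the numbers are illustrative; the formula follows the card: seek + half a rotation + transfer, times the number of reads):

```python
def read_time(seek_ms, rpm, transfer_ms=0.0, num_reads=1):
    """Average time for random single-sector reads:
    seek + average rotational delay (half a rotation) + transfer,
    multiplied by the number of reads. Toy helper for the card's formula."""
    full_rotation_ms = 60_000 / rpm      # one revolution, in milliseconds
    rotation_ms = full_rotation_ms / 2   # on average, half a rotation
    return (seek_ms + rotation_ms + transfer_ms) * num_reads

# 7200 RPM: one rotation = 8.33 ms, so the average rotational delay is ~4.17 ms
print(read_time(seek_ms=9.0, rpm=7200, num_reads=1))  # ~13.17 ms
```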
Term
FIFO Disk Head Scheduling |
|
Definition
Starting with the track the head is on, you just move to the track in the order given and add up how many tracks you move at a time |
|
|
Term
SSTF Disk Head Scheduling |
|
Definition
Moves the head based on shortest seek time first. Essentially you look at the track the head is currently on and move to the track closest to you |
|
|
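SSTF can be sketched as a toy Python simulation (`sstf_order` is illustrative; ties are broken toward the lower-numbered track):

```python
def sstf_order(start, requests):
    """Shortest Seek Time First: repeatedly service the pending request
    closest to the current head position. Returns (service order, total
    tracks moved). Simplified model of the policy."""
    pending = sorted(requests)
    head, order, moved = start, [], 0
    while pending:
        nxt = min(pending, key=lambda t: abs(t - head))  # closest track wins
        moved += abs(nxt - head)
        head = nxt
        order.append(nxt)
        pending.remove(nxt)
    return order, moved

print(sstf_order(50, [10, 55, 40, 80]))  # -> ([55, 40, 10, 80], 120)
```

The example also shows SSTF's weakness: the request at track 80 waits while nearer requests keep winning, which can starve far-away requests under a steady load.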
Term
SCAN Disk Head Scheduling |
|
Definition
Moves the head in one direction and then reverses. You have to specify which direction the head travels first |
|
|
Term
C-SCAN Disk Head Scheduling |
|
Definition
move the head in one direction until you reach the edge of the disk (regardless if there are requested tracks or not) and then reset to the opposite edge (ignoring the requests in the middle) |
|
|
Term
C-Look Disk Head Scheduling |
|
Definition
Move the head in one direction until no more requests and then reset back to the opposite edge |
|
|
Term
Disk partitioning |
|
Definition
Used to improve performance. Partitions are collections of cylinders, so the disk is divided "vertically"; each partition acts like a smaller disk |
|
|
Term
|
Definition
Allocate blocks on the disk that have temporal locality |
|
|
Term
Ways to improve performance on disks |
|
Definition
Partitioning Interleaving Buffering Flash Storage |
|
|
Term
Read-ahead |
|
Definition
Read blocks from disk ahead of user's request and place in a buffer Reduces the number of seeks |
|
|
Term
Advantages of flash storage |
|
Definition
Less power More resistant to physical damage No moving parts |
|
|
Term
Flash wear-out |
|
Definition
Over time it stops reliably storing data (flash cells wear out after a limited number of writes) |
|
|
Term
Linux page fault handling |
|
Definition
Segmentation fault: trying to access a non-existent page. Normal page fault: trying to read a page that is valid but unmapped. Protection exception: violating permissions, such as writing to a read-only area |
|
|
Term
Memory mapping |
|
Definition
associating virtual memory areas with disk objects |
|
|
Term
Linux organizes virtual memory as a collection of areas |
|
Definition
|
|
Term
Shared objects |
|
Definition
multiple processes can map the same object to physical memory |
|
|
Term
Private copy on write (COW) objects |
|
Definition
If two processes map a COW object, the PTEs in private areas are flagged as read-only. An instruction writing to a private page triggers a protection fault, and a new read/write copy of the page is created |
|
|
Term
mmap |
|
Definition
A function to implement user-level memory mapping. Returns a pointer to the start of the mapped area |
|
|
Term
Deadlock |
|
Definition
When two or more threads or processes are waiting for an event that can only be generated by these same threads or processes |
|
|
Term
Deadlock implies starvation but... |
|
Definition
starvation doesn't imply deadlock |
|
|
Term
Necessary conditions for deadlock |
|
Definition
Bounded resources, hold and wait, no preemption, circular wait |
|
|
Term
Deadlock avoidance/prevention |
|
Definition
check resource requests and possible availability to prevent deadlock |
|
|
Term
Deadlock detection |
|
Definition
Tries to find instances of deadlock and recover from them. An example is scanning the resource allocation graph looking for cycles |
|
|
Term
Banker's algorithm |
|
Definition
Algorithm to avoid deadlock. You have a certain number of resources but you allow processes to declare a need for more than what is currently free. This is fine as long as each process promises to eventually give its resources back.
Safe state: enough resources are available so that some process can run to completion. Unsafe state: may lead to a deadlock, so never allow this state to exist |
|
|
Term
Deadlock prevention |
|
Definition
You just need to break one of the 4 necessary conditions, for example by imposing a global order on all the locks |
|
|