Term
In uniprocessor scheduling, the most important criterion is CPU utilization efficiency. |
|
Definition
|
|
Term
First Come First Served policy is biased against short processes. |
|
Definition
|
|
Term
First Come First Served policy is biased against IO bound processes. |
|
Definition
|
|
Term
Round Robin policy is biased against long processes. |
|
Definition
|
|
Term
Round Robin policy is biased against short processes. |
|
Definition
|
|
Term
Round Robin policy is biased against IO bound processes. |
|
Definition
|
|
Term
Virtual Round Robin policy is not biased against IO bound processes. |
|
Definition
|
|
Term
In Virtual Round Robin policy, the scheduler picks a process from the auxiliary queue if it is non-empty. |
|
Definition
|
|
Term
In Virtual Round Robin policy, a process remembers how much of its last time quantum it was not able to use. |
|
Definition
|
|
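As a sketch of the two Virtual Round Robin cards above, the snippet below models the auxiliary-queue preference and the remembered leftover quantum. The names (`vrr_pick`, `vrr_charge`) and the 4-tick quantum are illustrative assumptions, not taken from any particular implementation.

```python
from collections import deque

QUANTUM = 4  # assumed quantum length, in ticks

def vrr_pick(aux_queue, ready_queue):
    # The auxiliary queue (processes whose I/O just completed) is
    # preferred whenever it is non-empty.
    if aux_queue:
        return aux_queue.popleft()
    return ready_queue.popleft()

def vrr_charge(proc, used_ticks):
    # A process remembers how much of its last quantum it could not
    # use; when dispatched from the auxiliary queue, it runs only for
    # that leftover amount.
    proc["leftover"] = max(proc.get("leftover", QUANTUM) - used_ticks, 0)
    return proc
```

For example, a process that blocks for I/O after using 1 tick of a fresh quantum would sit on the auxiliary queue with a leftover of 3 ticks.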
Term
Shortest Process Next policy is biased against long processes. |
|
Definition
|
|
Term
Shortest Remaining Time policy is biased against long processes. |
|
Definition
|
|
Term
Highest Response Ratio Next policy is biased against long processes. |
|
Definition
|
|
Term
Feedback scheduling is biased against long processes if the scheduler always picks a process from the non-empty highest priority queue. |
|
Definition
|
|
Term
Fairshare scheduling allows a process to forget about its past CPU usage. |
|
Definition
|
|
Term
Fairshare scheduling penalizes a process for the CPU usage by other processes in its group. |
|
Definition
|
|
Term
CPU utilization efficiency is not as critical a factor for multiprocessor scheduling as it is for uniprocessor scheduling. |
|
Definition
|
|
Term
It does not matter whether a process always executes on the same CPU or not. |
|
Definition
|
|
Term
Load sharing is geared towards efficiently utilizing each CPU. |
|
Definition
|
|
Term
Gang scheduling explicitly ensures that all threads of a process/gang run together. |
|
Definition
|
|
Term
A CPU might stay idle for long times under gang scheduling. |
|
Definition
|
|
Term
Determinism in a real-time system is measured by how quickly the system can recognize a received interrupt. |
|
Definition
|
|
Term
Responsiveness in a real-time system is measured by how quickly the system can process a received interrupt (after recognizing it). |
|
Definition
|
|
Term
RMS guarantees meeting as many deadlines as EDF. |
|
Definition
|
|
Term
RMS is based on converting the periodicity of a real-time task into its priority. |
|
Definition
|
|
Term
In RMS, the processes are scheduled as per their priority. |
|
Definition
|
|
Term
In EDF, the processes are scheduled in the order of their deadlines. |
|
Definition
|
|
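The contrast in the RMS and EDF cards above can be sketched in a few lines: RMS fixes priorities statically from task periods, while EDF picks dynamically by absolute deadline. The helper names `rms_priority_order` and `edf_next` are assumptions for illustration.

```python
def rms_priority_order(tasks):
    # RMS: static priorities, shorter period => higher priority.
    return sorted(tasks, key=lambda t: t["period"])

def edf_next(ready):
    # EDF: at each scheduling point, dispatch the ready task with the
    # earliest absolute deadline.
    return min(ready, key=lambda t: t["deadline"])
```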
Term
In Linux, the non-real time processes are characterized as either interactive or batch. |
|
Definition
|
|
Term
In Linux, a higher priority process can preempt a lower priority process from the CPU right away. |
|
Definition
|
|
Term
In Linux, the time quantum assigned to a process depends on its static priority. |
|
Definition
|
|
Term
In Linux, the real-time processes can have priority levels between 0 and 99 and are scheduled as per either FIFO or round-robin discipline. |
|
Definition
|
|
Term
In Linux, the non-real time processes can have static/dynamic priority between 100 and 139. |
|
Definition
|
|
Term
In Linux, the dynamic priority of a process can vary in the range (+5,-5) around the static priority. |
|
Definition
|
|
Term
In Linux, the dynamic priority of a process decreases (in numerical value) with the increase in the average sleep time. |
|
Definition
|
|
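A sketch of how the static/dynamic priority cards fit together, assuming the classic O(1)-scheduler formula in which a bonus between 0 and 10, derived from the average sleep time, shifts the dynamic priority by up to 5 in either direction around the static priority:

```python
def dynamic_priority(static_prio, bonus):
    # bonus in 0..10 grows with average sleep time; a larger bonus
    # lowers the numeric priority (i.e. favours the process).
    # The result is clamped to the non-real-time range [100, 139].
    return max(100, min(static_prio - bonus + 5, 139))
```

With `static_prio = 120`, the dynamic priority ranges from 115 (maximum bonus) to 125 (no bonus), i.e. the (+5, -5) band from the earlier card.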
Term
A process with static priority 100 is considered interactive when its average sleep time exceeds 200ms. |
|
Definition
|
|
Term
In Linux, a process with static priority 139 is never considered interactive. |
|
Definition
|
|
Term
Linux uses a virtual round robin policy where an expired process (the process that has finished its time quantum) does not get executed until all processes have expired. |
|
Definition
|
|
Term
In Linux, real-time processes do not expire. |
|
Definition
|
|
Term
In Linux, interactive processes generally get a fresh time quantum as soon as they finish their current one. |
|
Definition
|
|
Term
In Linux, a runqueue refers to the set of TASK_RUNNING processes currently bound to a particular CPU. |
|
Definition
|
|
Term
In Linux, the next process to run on a CPU is the one at the head of the highest priority non-empty active queue. |
|
Definition
|
|
Term
In Linux, an idle CPU migrates some processes from a busy CPU. |
|
Definition
|
|
Term
In Linux, multiple levels of scheduling domains allow CPUs to be grouped as per their "proximity". |
|
Definition
|
|
Term
An idle CPU looks for work starting with CPUs in its lowest level scheduling domain. |
|
Definition
|
|
Term
Fixed sized partitions suffer from internal fragmentation. |
|
Definition
|
|
Term
Dynamic partitions suffer from external fragmentation. |
|
Definition
|
|
Term
First fit algorithm is better than next fit and best fit in terms of the frequency of compaction. |
|
Definition
|
|
Term
Buddy system eliminates both internal and external fragmentation. |
|
Definition
|
|
Term
Buddy system amortizes the cost of partition creation and deletion. |
|
Definition
|
|
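The amortized cost of partition creation and deletion in the buddy system comes from the fact that a block's buddy is located with a single bit flip, so splitting and coalescing are cheap. A minimal sketch (the function name and byte-addressed blocks are assumptions):

```python
def buddy_of(addr, order, min_block=1):
    # The buddy of a block of size min_block * 2**order differs from
    # it only in the bit corresponding to the block size.
    return addr ^ (min_block << order)
```

For example, with blocks of size 8, the block at address 0 and the block at address 8 are buddies of each other.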
Term
Logical to physical address translation in a segmentation based system involves adding the offset to the beginning address of the segment. |
|
Definition
|
|
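A minimal sketch of this translation, assuming a segment table that maps a segment number to a (base, limit) pair, with the usual bounds check before the addition:

```python
def translate_segmented(seg_table, seg_num, offset):
    # seg_table maps segment number -> (base address, limit).
    base, limit = seg_table[seg_num]
    if offset >= limit:
        raise MemoryError("segment limit violation")
    return base + offset  # physical address = segment base + offset
```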
Term
Logical to physical address translation in a paging based system involves replacing the page number with the corresponding frame number. |
|
Definition
|
|
Term
Virtual memory allows a process to execute even if the complete process image is not in main memory. |
|
Definition
|
|
Term
Resident set refers to the part of process image in main memory. |
|
Definition
|
|
Term
Locality of reference means that the information a process will use in near future is likely to be near the information the process accessed in near past. |
|
Definition
|
|
Term
Keeping only a part of the process in main memory won't work in environments where the locality of reference does not hold. |
|
Definition
|
|
Term
Thrashing refers to processes having too little memory and hence issuing too many page faults. |
|
Definition
|
|
Term
The root page table in a multilevel page table always needs to be in memory. |
|
Definition
|
|
Term
A memory reference might involve 3 page faults if a 3-level page table is being used. |
|
Definition
|
|
Term
Inverted page table uses chain of frames to support multiple pages mapping to the same frame number. |
|
Definition
|
|
Term
Translation lookaside buffer is a fast cache for recently used page table entries. |
|
Definition
|
|
Term
TLB uses associative memory. |
|
Definition
|
|
Term
Large page size means smaller page tables. |
|
Definition
|
|
Term
Large page size means more internal fragmentation. |
|
Definition
|
|
Term
Small page size allows a process to keep only immediately useful information in memory. |
|
Definition
|
|
Term
Prepaging refers to bringing pages in memory in anticipation that they will be used. |
|
Definition
|
|
Term
In general, it does not matter which frame a particular page sits in. |
|
Definition
|
|
Term
The best page to replace is the one least likely to be used in near future. |
|
Definition
|
|
Term
LRU page replacement policy is based on the assumption that the least recently used page is the one least likely to be used in near future. |
|
Definition
|
|
Term
The clock page replacement policy is based on LRU policy with page timestamps compressed to just one bit. |
|
Definition
|
|
Term
In a clock based page replacement policy, the best page to replace is the one with used and modified bits zero. |
|
Definition
|
|
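A sketch of the sweep the clock-policy cards describe, tracking only the use bit (the modified bit is ignored for brevity); representing frames as `[page, use_bit]` pairs is an assumption of this sketch:

```python
def clock_replace(frames, hand):
    # frames: list of [page, use_bit]. Sweep from `hand`, giving each
    # used page a second chance by clearing its use bit, until a page
    # with use bit 0 is found; that page is the victim.
    n = len(frames)
    while True:
        if frames[hand][1] == 0:
            return hand, (hand + 1) % n  # victim index, new hand
        frames[hand][1] = 0              # second chance
        hand = (hand + 1) % n
```

If every frame has its use bit set, the first full sweep clears them all and the hand's starting frame becomes the victim, so the loop always terminates.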
Term
When a page is selected to be replaced, its frame is usually added to the list of free frames and the incoming page actually sits in a different frame. |
|
Definition
|
|
Term
Use of the scheme described in the previous question allows a replaced page to be quickly reclaimed if it is needed in near future. |
|
Definition
|
|
Term
In variable allocation, local scope policies for determining the resident set size, the working set refers to the set of pages referenced by the process in the recent past. |
|
Definition
|
|
Term
In variable allocation, local scope policies for determining the resident set size, the goal is to make resident set same as the working set. |
|
Definition
|
|
Term
In page fault frequency algorithm to determine the resident set size, the resident set can balloon up whenever there is a shift to a new locality. |
|
Definition
|
|
Term
In variable interval sampled working set strategy, only a certain number of page faults are allowed to occur between sampling instants (when pages with use bit 0 are kicked out). |
|
Definition
|
|
Term
Variable interval sampled working set strategy avoids the resident-set ballooning associated with the page fault frequency algorithm. |
|
Definition
|
|
Term
Precleaning refers to writing dirty pages back to hard disk while they are still in main memory. |
|
Definition
|
|
Term
Precleaning will be useless if the cleaned up pages become dirty again while they are still in main memory. |
|
Definition
|
|
Term
The level of multiprogramming can be dynamically adjusted based on the page fault rate the system is experiencing. |
|
Definition
|
|
Term
Regular paging in 80x86 architecture divides the 32-bit logical address in 3 parts: 10-bit directory, 10-bit table, and 12-bit offset. |
|
Definition
|
|
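The 10/10/12 split can be checked with a few shifts and masks; the function name is illustrative:

```python
def split_x86_linear(addr):
    # 80x86 regular 4 KB paging: 10-bit directory index, 10-bit table
    # index, 12-bit page offset.
    offset = addr & 0xFFF
    table = (addr >> 12) & 0x3FF
    directory = (addr >> 22) & 0x3FF
    return directory, table, offset
```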
Term
In 80x86 architecture, the PCD flag in a page table entry specifies whether the page can be put in a CPU cache or not. |
|
Definition
|
|
Term
In 80x86 architecture, the PWT flag in a page table entry specifies whether the page follows write-back policy or write-through policy if it is put in a cache. |
|
Definition
|
|
Term
Write back policy means that the page is modified only in cache and not in memory when it is written to. |
|
Definition
|
|
Term
Write through policy means that, when a page in cache is modified, both in-memory and in-cache copies are modified. |
|
Definition
|
|
Term
Cache coherence problem deals with making sure that copies of a page in different CPU caches stay synchronized. |
|
Definition
|
|
Term
Extended paging in 80x86 architecture allows pages to be 4 MB in size. |
|
Definition
|
|
Term
In 80x86 architecture, basic paging uses 2-level page tables whereas extended paging uses single level page tables. |
|
Definition
|
|
Term
Physical Address Extension (PAE) allows an 80x86 machine to use up to 64 GB of memory. |
|
Definition
|
|
Term
PAE is based on the use of 36 bit physical addresses. |
|
Definition
|
|
Term
In PAE, the logical addresses are still 32 bits. |
|
Definition
|
|
Term
In PAE, each page table entry is 8-bytes long. |
|
Definition
|
|
Term
In PAE, the directory and table part of a logical address are 9-bits each. |
|
Definition
|
|
Term
In PAE, each process can access up to 16 GB of memory. |
|
Definition
|
|
Term
PAE with 4KB pages uses a 3-level page table with a 2-bit indicator of the entry in the Page Directory Pointer Table at the top level. |
|
Definition
|
|
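The corresponding split of a 32-bit linear address under PAE with 4 KB pages, as a sketch: a 2-bit Page Directory Pointer Table index at the top, then 9-bit directory and table indices, then the 12-bit offset.

```python
def split_pae_linear(addr):
    # PAE with 4 KB pages: 2-bit PDPT index, 9-bit directory index,
    # 9-bit table index, 12-bit page offset.
    offset = addr & 0xFFF
    table = (addr >> 12) & 0x1FF
    directory = (addr >> 21) & 0x1FF
    pdpt = (addr >> 30) & 0x3
    return pdpt, directory, table, offset
```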
Term
The largest page size available with PAE is 2 MB because only 20 bits are available in a logical address to indicate offset within a page. |
|
Definition
|
|
Term
In PAE, the cr3 register specifies the Page Directory Pointer Table used by a particular process. |
|
Definition
|
|
Term
In 64-bit 80x86 architecture, only 48 bits (out of total 64) are actually used for physical addresses. |
|
Definition
|
|
Term
64-bit 80x86 architecture uses 4-level page tables. |
|
Definition
|
|
Term
Linux adopts a common paging model that fits both 32-bit and 64-bit architectures. |
|
Definition
|
|
Term
Linux uses 4-level page tables. |
|
Definition
|
|
Term
For 32-bit architectures with no Physical Address Extension, Linux allocates 0 bits each to the Page Upper Directory and the Page Middle Directory fields in a logical address, thereby effectively using 2-level page tables. |
|
Definition
|
|
Term
For 32-bit architectures with the Physical Address Extension enabled, Linux uses 3-level page tables by eliminating the Page Upper Directory. |
|
Definition
|
|
Term
Programmed IO can lead to very poor CPU utilization. |
|
Definition
|
|
Term
Interrupt driven IO could be very inefficient if large amounts of data need to be transferred. |
|
Definition
|
|
Term
DMA based IO involves CPU only at the beginning and the end of an IO operation. |
|
Definition
|
|
Term
A DMA module could control just one IO device or several. |
|
Definition
|
|
Term
DMA modules could be connected in many configurations either having their own separate bus to communicate with IO device(s) or sharing the system bus with the CPU. |
|
Definition
|
|
Term
Reading from an IO device directly into a frame in process space may be problematic since the frame might be taken away from the process. |
|
Definition
|
|
Term
Using two or more IO buffers to read information from IO device could help reduce the IO wait times for a process. |
|
Definition
|
|
Term
Smaller disk size means smaller seek time. |
|
Definition
|
|
Term
If the disk rotates faster, the disk access time would be reduced. |
|
Definition
|
|
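A sketch of why faster rotation reduces access time: the average rotational delay is half a revolution, so doubling the RPM halves that component. The decomposition into seek, rotational delay, and transfer time is the usual textbook one; the parameter names are not tied to any specific disk.

```python
def disk_access_time(seek_ms, rpm, bytes_to_read, transfer_rate_mb_s):
    # Average rotational delay: half a revolution, in milliseconds.
    rotational_delay_ms = (60_000 / rpm) / 2
    # Transfer time for the requested bytes, in milliseconds.
    transfer_ms = bytes_to_read / (transfer_rate_mb_s * 1_000_000) * 1000
    return seek_ms + rotational_delay_ms + transfer_ms
```

For a 7200 RPM disk, half a revolution takes about 4.17 ms; at 15000 RPM it drops to about 2 ms, shrinking total access time accordingly.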
Term
Disk scheduling policies aim to minimize the seek times for various disk read/write requests. |
|
Definition
|
|
Term
FIFO is the best disk scheduling policy. |
|
Definition
|
|
Term
In SCAN and C-SCAN disk scheduling policies, the read/write head moves in just one direction until it reaches the very end. |
|
Definition
|
|
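A sketch of the service order under SCAN: requests ahead of the head in the current direction are served in order, then the head reverses. This simplified version reverses at the last pending request (strictly speaking the LOOK variant), since travelling all the way to the physical edge changes the timing but not the order in which requests are served; the function name and defaults are assumptions.

```python
def scan_order(requests, head, direction="up"):
    # Elevator-style service order: sweep one way, then the other.
    up = sorted(r for r in requests if r >= head)
    down = sorted((r for r in requests if r < head), reverse=True)
    return up + down if direction == "up" else down + up
```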