Term
|
Definition
| allowed devices to access data in memory using channel processors, overlapping I/O with CPU computation. A type of on-line operation. |
|
|
Term
|
Definition
| have two or more processors in close communication, sharing the computer bus and sometimes the clock, memory, and peripheral devices. Advantages: increased throughput, economy of scale (cheap for the benefits), and increased reliability (one processor going down doesn't halt the system). |
|
|
Term
|
Definition
| when a block is read into memory, it is stored in the buffer. |
|
|
Term
|
Definition
| operations which have to be performed with the file system unmounted. |
|
|
Term
|
Definition
| operations during normal system operations, could slow down performance. |
|
|
Term
|
Definition
| the instructions for the system are in ASCII format. Groups similar jobs together and minimizes I/O time. A type of off-line operation; the first step in OS development. |
|
|
Term
|
Definition
| provides direct communication between the user and the system, using an input device. Type of on-line operation. |
|
|
Term
|
Definition
| the CPU executes multiple jobs by switching among them, but the switches occur so frequently that users can interact with each program while it's running. |
|
|
Term
|
Definition
| used when rigid time requirements have been placed on the operation of a processor or the flow of data. Can be rate-monotonic (priority proportional to rate, i.e. 1/period) or deadline-driven (priority by deadline = release time + period). |
|
|
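A toy sketch of rate-monotonic priority assignment (the task names and periods below are made up for illustration): under RM, the task with the shortest period has the highest rate and therefore the highest priority.

```python
# Sketch: rate-monotonic priority ordering (assumed task set).
# Priority is proportional to rate (1/period), so a shorter
# period means a higher priority.

def rm_order(tasks):
    """Return task names from highest to lowest RM priority."""
    # tasks: dict mapping name -> period; smaller period wins
    return sorted(tasks, key=lambda name: tasks[name])

tasks = {"sensor": 10, "control": 20, "logger": 100}
print(rm_order(tasks))  # ['sensor', 'control', 'logger']
```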
Term
| virtual machine emulation |
|
Definition
| the idea is to abstract the hardware of a single computer into several different execution environments, thereby creating the illusion that each separate execution environment is running its own private computer |
|
|
Term
|
Definition
| remote directories are visible from a local machine. |
|
|
Term
|
Definition
| a process's progress information, which includes the values of the CPU registers, the process state, and memory-management information. |
|
|
Term
|
Definition
| increases CPU utilization by organizing jobs (code and data) so that the CPU always has one to execute. Logical concurrency, driven by I/O using interrupts. |
|
|
Term
|
Definition
| either asymmetric (one processor does I/O and the other runs user code) or symmetric (each processor is self-scheduling and has its own OS). Physical concurrency. |
|
|
Term
|
Definition
| interface to the services made available by an operating system; the mechanism to enter the kernel, used by software. Operating-system procedures/functions provided via traps. |
|
|
Term
|
Definition
| the block contains a pointer to the source of a transfer, a pointer to the destination of the transfer, and a count of the number of bytes to be transferred. The CPU writes the address of this command block to the DMA controller and then goes on with other work. The DMA controller proceeds to operate the memory bus directly, placing addresses on the bus to perform transfers without the help of the main CPU. |
|
|
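The command block above can be sketched as a tiny simulation; the field names and the flat byte-array "memory" are illustrative, not from any real controller.

```python
from dataclasses import dataclass

# Sketch of the command block the CPU hands to a DMA controller
# (field names are made up for illustration).

@dataclass
class DMACommandBlock:
    source: int       # address of the transfer source
    destination: int  # address of the transfer destination
    count: int        # number of bytes to transfer

def dma_transfer(memory, block):
    """Simulate the controller copying bytes without CPU help."""
    for i in range(block.count):
        memory[block.destination + i] = memory[block.source + i]

mem = list(b"hello world....")
dma_transfer(mem, DMACommandBlock(source=0, destination=6, count=5))
print(bytes(mem[6:11]))  # b'hello'
```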
Term
|
Definition
| uses a general-purpose processor to watch status bits and to feed data into a controller register one byte at a time. |
|
|
Term
|
Definition
| a sequential program. single unit of work for OS. |
|
|
Term
|
Definition
| a hardware mechanism that causes the CPU to stop the currently running process and run another. |
|
|
Term
|
Definition
| a software-generated interrupt caused by an error or by a specific request from a user program that an operating-system service be performed. |
|
|
Term
|
Definition
| time device busy/observation interval |
|
|
Term
|
Definition
| the number of processes that complete over the observation interval |
|
|
Term
|
Definition
| a process which generates I/O requests infrequently, using more of its time doing computations |
|
|
Term
|
Definition
| a process which spends more of its time doing I/O than it spends doing computations |
|
|
Term
|
Definition
| invoked by the control-card interpreter to load system programs and application programs into memory |
|
|
Term
|
Definition
| the OS is split up into different levels, the bottom one being the hardware and the top one being the user interface |
|
|
Term
|
Definition
| structures the OS by removing all nonessential components from the kernel and implementing them as system and user-level programs. |
|
|
Term
|
Definition
| remote file systems allow a computer to mount one or more file systems from one or more remote machines. The machine containing the files is the server, and the machine seeking access to the files is the client. |
|
|
Term
|
Definition
| a process's representation in the operating system. It includes: the process state, program counter, CPU registers, CPU-scheduling info, memory-management info, accounting info, and I/O status info. |
|
|
Term
|
Definition
| list which contains the processes that are residing in main memory and are ready and waiting to execute. |
|
|
Term
|
Definition
| a list of processes which cannot execute until another process makes them eligible. Created by the medium-term scheduler |
|
|
Term
|
Definition
| a list of processes waiting for some event to complete; they are not eligible for execution until woken up by the completion of the event. |
|
|
Term
|
Definition
| selects from among the processes that are ready to execute and allocates the CPU to one of them. |
|
|
Term
|
Definition
| the OS is just one big program. Each program thinks that it's running on a single OS with a single CPU. |
|
|
Term
|
Definition
| the linker takes shared code in libraries and puts them together to run. |
|
|
Term
|
Definition
| the illusion of processes running simultaneously on a computer |
|
|
Term
|
Definition
| hardware allows for multiple computations to happen at the same time |
|
|
Term
|
Definition
| when a process is removed from memory (and from active contention for the CPU) to be swapped. Later, it can be reintroduced into memory, and its execution can be continued where it left off. |
|
|
Term
|
Definition
| takes processes from a mass-storage device and loads them into memory for execution. |
|
|
Term
|
Definition
| when the CPU switches from one process to another, the system must perform a state save of the current process and a state restore of the different process. Pure overhead, but far faster than I/O |
|
|
Term
|
Definition
| saves the state of all processor registers. |
|
|
Term
|
Definition
| saves the state of the memory registers |
|
|
Term
|
Definition
| a process can be interrupted and moved from the running state to the ready queue, or from the waiting queue to the ready queue |
|
|
Term
|
Definition
| scheduling done with preemptive processes (the processes being scheduled can be cut off) |
|
|
Term
|
Definition
| once the CPU has been allocated to a process, the process keeps the CPU until it releases the CPU either by termination or by switching to the waiting state |
|
|
Term
|
Definition
| processes in multiprogramming which can affect or be affected by one another |
|
|
Term
|
Definition
| preemptive scheduling which gives a certain amount of time to each process in the CPU |
|
|
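The round-robin policy above can be sketched with a tiny simulation; the process names, burst times, and quantum are assumed for illustration.

```python
from collections import deque

# Minimal round-robin simulation (assumed bursts, quantum of 2).

def round_robin(bursts, quantum):
    """Return a dict mapping each process to its completion time."""
    ready = deque(bursts.items())   # (name, remaining burst time)
    clock, done = 0, {}
    while ready:
        name, remaining = ready.popleft()
        run = min(quantum, remaining)   # run for at most one quantum
        clock += run
        if remaining - run > 0:
            ready.append((name, remaining - run))  # back of the queue
        else:
            done[name] = clock
    return done

print(round_robin({"P1": 4, "P2": 3, "P3": 1}, quantum=2))
# {'P3': 5, 'P1': 7, 'P2': 8}
```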
Term
|
Definition
| the amount of time given to a process to run in a round-robin scheduling algorithm |
|
|
Term
|
Definition
| the appearance that each of n processes has its own processor running at 1/n the speed of the real processor |
|
|
Term
|
Definition
| giving certain processes more importance and therefore a higher likelihood of getting CPU time next. |
|
|
Term
|
Definition
| a technique of gradually increasing the priority of processes that wait in the system for a long time. |
|
|
Term
|
Definition
| when the processes in the ready queue must wait for a large CPU-bound process to be done. |
|
|
Term
|
Definition
| the time from the submission of a request until the first response is produced (time of first response − submission time) |
|
|
Term
|
Definition
| the sum of the periods spent waiting in the ready queue |
|
|
Term
|
Definition
| the interval from the time of submission of a process to the time of completion |
|
|
Term
|
Definition
| the process that requests the CPU first is allocated to the CPU first |
|
|
Term
|
Definition
| scheduling policy: when the CPU is available, it is assigned to the process that has the smallest next CPU burst; it selects the job that is closest to finishing. |
|
|
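A minimal sketch comparing average waiting time under this policy and first-come-first-served; it assumes all jobs arrive at time 0, and the burst times are made up for illustration.

```python
# Sketch: average waiting time, FCFS order vs shortest-burst-first
# (all jobs assumed to arrive at time 0).

def avg_waiting(bursts):
    clock, total = 0, 0
    for b in bursts:
        total += clock   # this job waited `clock` units so far
        clock += b
    return total / len(bursts)

bursts = [6, 8, 7, 3]
fcfs = avg_waiting(bursts)          # run in submission order
sjf = avg_waiting(sorted(bursts))   # shortest burst first
print(fcfs, sjf)  # 10.25 7.0
```

Sorting by burst length minimizes the average waiting time, which is why no other policy beats it on that metric.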
Term
|
Definition
| a region of memory that is shared by cooperating processes. processes can then exchange information by reading and writing data to the shared region. |
|
|
Term
|
Definition
| the current activity of the process: new, running, waiting, ready, terminated |
|
|
Term
|
Definition
| continuously looping without doing any actual computation |
|
|
Term
|
Definition
| when a process is in a loop, reading the status register over and over until the busy bit becomes clear. |
|
|
Term
|
Definition
| a segment of code in which the process may be changing common variables, updating a table, writing a file, etc., and in which no other process is allowed to execute these changes (mutually exclusive manner). |
|
|
Term
|
Definition
| at least one resource must be held in a non-sharable mode; that is, only one process at a time can use the resource. If another process requests that resource, the requesting process must be delayed until the resource has been released. |
|
|
Term
|
Definition
| when a process is waiting for a condition to be met in order to run, but this condition is never met, it is deadlocked. |
|
|
Term
|
Definition
| when a process that is ready to run but waiting for the CPU can be considered blocked; it never gets CPU time |
|
|
Term
|
Definition
| a semaphore that can only have a value of either 0 or 1. Initial value of 1. |
|
|
Term
|
Definition
| a semaphore which can have any value between 0 and positive infinity |
|
|
Term
|
Definition
| communication between processes takes place by means of messages exchanged between the cooperating processes. Communication is explicit; synchronization is implicit. |
|
|
Term
|
Definition
| multiple senders send to one receiver through ports. |
|
|
Term
|
Definition
| in message passing, these objects are there to hold messages sent by the sender and can be sent to multiple receivers. |
|
|
Term
|
Definition
| sending a message to all the processes which could receive it |
|
|
Term
|
Definition
| sending a message to multiple receivers, but not all |
|
|
Term
| buffered asynchronous operation |
|
Definition
| an operation between multiple processes in which the order in which the processes run doesn't matter, because the messages passed between them are held in a buffer: an ordered list of messages. |
|
|
Term
|
Definition
| abstracts out the send/await-reply paradigm into a 'procedure call'. This can be done by providing a stub that hides the details of remote communication |
|
|
Term
|
Definition
| packing and unpacking of messages in client-server mechanisms |
|
|
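Marshalling can be sketched as packing a procedure name and its arguments into bytes for the wire and unpacking them on the other side; JSON is chosen purely for illustration, as real RPC systems use their own encodings.

```python
import json

# Sketch: marshal a call into bytes, unmarshal it at the receiver.

def marshal(proc, *args):
    """Pack a procedure name and arguments into a wire message."""
    return json.dumps({"proc": proc, "args": list(args)}).encode()

def unmarshal(wire_bytes):
    """Recover the procedure name and arguments from the wire."""
    msg = json.loads(wire_bytes.decode())
    return msg["proc"], msg["args"]

wire = marshal("add", 2, 3)
proc, args = unmarshal(wire)
print(proc, args)  # add [2, 3]
```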
Term
|
Definition
| the service software running on a single machine |
|
|
Term
|
Definition
| a process that can invoke a service using a set of operations |
|
|
Term
|
Definition
| packages up the remote procedure, but it's really taking the data away from the producer. |
|
|
Term
|
Definition
| the domain of memory accessible to a process |
|
|
Term
|
Definition
| the set of all logical addresses generated by a program |
|
|
Term
|
Definition
| same as logical address space: the set of all logical addresses generated by a program |
|
|
Term
|
Definition
| the set of all physical addresses corresponding to a logical address space. |
|
|
Term
|
Definition
| an address generated by the CPU. |
|
|
Term
|
Definition
| an address seen by the memory-address register of the memory |
|
|
Term
|
Definition
| disks/drums in spooling systems allowed for I/O and computation to be done on the same machine. |
|
|
Term
|
Definition
| in spooling, devices could now access data in memory using channel processors. They connect card, drums/disks, printers, etc to the main machine. |
|
|
Term
|
Definition
| when user programs and the OS share memory, the OS must be protected from user programs. This means that there is 'user mode' and a 'system/protected/supervisor mode' |
|
|
Term
|
Definition
| fork creates a new execution context for the child process; join brings the two contexts together. In UNIX, fork returns 0 to the child and the child's PID to the parent. Allows for parallelism and security from mistakes. |
|
|
Term
|
Definition
| different 'processes' that share memory state but have separate processor states. They lack fault tolerance: they can mess each other up. They use system calls to synchronize. |
|
|
Term
|
Definition
| taking a process from the ready queue into running. |
|
|
Term
|
Definition
| the queue created by the long-term scheduler; list of processes to enter the ready queue |
|
|
Term
|
Definition
| the point at which a process enters either the ready, waiting or running queue, after long-term scheduling. |
|
|
Term
|
Definition
| a system in which each process gets enough CPU time to avoid the convoy effect |
|
|
Term
|
Definition
| the notion of whether or not a process can be interrupted by I/O or another process. If it cannot, it is non-preemptive; if it can, it is preemptive |
|
|
Term
| optimal processor scheduling |
|
Definition
| no other scheduling policy is better than this one, though it is not the best at keeping the processor busy at all times. |
|
|
Term
| multilevel feedback queues |
|
Definition
| n priority levels, each priority level has its own round robin, own quantum, which decreases with higher priority. Processes are demoted a priority level if they're still executing when a quantum expires |
|
|
Term
| producer/consumer systems |
|
Definition
| a producer creates data which the consumer will need. The system is correct if all the data produced by the producer is eventually consumed by the consumer, the consumer consumes a given data item only once, and the consumer consumes only data items produced by the producer. The buffer provides a means of synchronization and communication. |
|
|
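A minimal sketch of this scheme using a bounded queue as the buffer; the buffer size, item count, and `None` sentinel are all assumptions for the example.

```python
import queue
import threading

# Sketch: one producer and one consumer sharing a bounded buffer.
# Every produced item is consumed exactly once, in order.

buf = queue.Queue(maxsize=4)  # the buffer synchronizes both sides
consumed = []

def producer():
    for i in range(10):
        buf.put(i)        # blocks if the buffer is full
    buf.put(None)         # sentinel: no more data (assumption)

def consumer():
    while True:
        item = buf.get()  # blocks if the buffer is empty
        if item is None:
            break
        consumed.append(item)

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()
print(consumed)  # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```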
Term
|
Definition
| manipulates the hardware so that a certain process's critical section cannot be disrupted. When interrupts are disabled, none can occur; when enabled, they can. Makes the section non-preemptive |
|
|
Term
|
Definition
| can't be interrupted or seem to be interrupted |
|
|
Term
|
Definition
| an abstract data type: a non-negative integer variable with two operations: down, which decrements sem by 1 if sem > 0 and otherwise waits until sem > 0; and up, which increments sem by 1. Both operations are atomic |
|
|
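The abstract data type above can be sketched in user code; this builds atomicity on a `threading.Condition` rather than on hardware, so it is an illustration of the semantics, not a real kernel semaphore.

```python
import threading

# Sketch: a counting semaphore with atomic down/up operations,
# built on a Condition for illustration.

class Semaphore:
    def __init__(self, value=0):
        self._value = value            # non-negative integer
        self._cond = threading.Condition()

    def down(self):
        with self._cond:               # the lock provides atomicity
            while self._value == 0:    # otherwise wait till sem > 0
                self._cond.wait()
            self._value -= 1

    def up(self):
        with self._cond:
            self._value += 1
            self._cond.notify()        # wake one waiter, if any

sem = Semaphore(2)
sem.down(); sem.down()   # both succeed immediately
sem.up()
sem.down()               # succeeds again after the up
print(sem._value)        # 0
```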
Term
| condition synchronization |
|
Definition
| awaiting the development of a specific state within the computation |
|
|
Term
|
Definition
| if a process is not preemptible, then it is indivisible, and therefore atomic |
|
|
Term
|
Definition
| a way to provide mutual exclusion. Performs a LOAD, COMPARE, and STORE in one indivisible operation. |
|
|
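A spinlock built on this primitive can be sketched as follows; since Python has no user-level atomic instruction, a host `threading.Lock` stands in for the hardware's indivisibility, which is the key assumption here.

```python
import threading

# Sketch: a spinlock on a simulated atomic test-and-set.
# The inner Lock stands in for the hardware's one-instruction
# LOAD/STORE indivisibility.

class SpinLock:
    def __init__(self):
        self._flag = False
        self._atomic = threading.Lock()  # simulates hardware atomicity

    def test_and_set(self):
        with self._atomic:
            old = self._flag    # LOAD the old value
            self._flag = True   # STORE True unconditionally
            return old          # caller compares against False

    def acquire(self):
        while self.test_and_set():  # spin until old value was False
            pass

    def release(self):
        self._flag = False

counter = 0
lock = SpinLock()

def worker():
    global counter
    for _ in range(1000):
        lock.acquire()
        counter += 1        # the critical section
        lock.release()

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)  # 4000
```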
Term
|
Definition
| take the role of semaphores for multiple producers and multiple consumers. |
|
|
Term
|
Definition
|
|
Term
|
Definition
| collect related shared objects into a module. Define data operations; calls to monitor entries are guaranteed to be mutually exclusive. Use condition variables: wait blocks the caller on a condition-specific queue; signal wakes up a waiter if one exists; empty indicates whether any process is currently waiting. |
|
|
Term
|
Definition
| if some processes need another process in order to continue, this needed process is put in the urgent queue so that it can enter the ready queue quicker. |
|
|
Term
|
Definition
| the Mesa monitor changes the Hoare monitor's signal to notify, which is just a hint that the condition has been met, not a guarantee. Does not have condition variables. |
|
|
Term
|
Definition
| when a low-priority process delays a high-priority process because of Hoare semantics. Example: a low-priority consumer does not allow a high-priority producer to keep producing. |
|
|
Term
|
Definition
| writers write to shared memory; readers read from it. Multiple readers may be reading simultaneously, but only one writer may be active at a time, and reading and writing cannot proceed simultaneously. Make sure readers don't starve writers and vice versa. |
|
|
Term
| blocking/nonblocking synchronization |
|
Definition
| blocking: sender waits until its message is received. receiver waits if no message is available. Non-blocking: send operation 'immediately' returns, receiver operation returns if no message is available. |
|
|
Term
|
Definition
| Java. Provides mutual exclusion. Uses wait and notify synchronization with Mesa semantics, without condition variables. Methods can be synchronized, notifyAll wakes up all waiting threads, and wait can take a timeout parameter. |
|
|
Term
|
Definition
| a blocking operation, meaning a sender cannot send until the receiver receives. |
|
|
Term
|
Definition
| non-blocking operation, meaning sender can keep sending, receive only waits when there are no messages available. |
|
|
Term
|
Definition
| implicit means you can send to all receivers available, explicit means just to one. |
|
|
Term
|
Definition
| when a process gets moved in memory, requires the program to be edited each time |
|
|
Term
|
Definition
| moving a job from one part of memory to another. |
|
|
Term
|
Definition
| placing a job in memory: mapping it from logical memory to physical memory. |
|
|
Term
|
Definition
| connects the program to the logical addresses. |
|
|
Term
|
Definition
| when the memory is partitioned into parts whose size cannot change |
|
|
Term
|
Definition
| the bottom memory address of the partition given to the specific job. |
|
|
Term
|
Definition
| how many blocks of memory over the base register are allocated to the job (size of the partition) |
|
|
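The base and limit registers above combine into a simple translation rule; this sketch uses made-up register values for illustration.

```python
# Sketch: address translation with base and limit registers.
# A logical address is valid only if it falls below the limit;
# the physical address is then base + logical.

def translate(logical, base, limit):
    if not (0 <= logical < limit):
        raise MemoryError("addressing error: outside partition")
    return base + logical

print(translate(100, base=30000, limit=12000))  # 30100
try:
    translate(13000, base=30000, limit=12000)   # beyond the limit
except MemoryError as e:
    print(e)  # addressing error: outside partition
```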
Term
|
Definition
| unused memory between units of allocation |
|
|
Term
|
Definition
| unused memory within a unit of allocation |
|
|
Term
|
Definition
|
|
Term
|
Definition
|
|
Term
|
Definition
| puts the job in the first spot of memory where it fits |
|
|
Term
|
Definition
| puts the job in the space of memory which is closest to the amount of space it requires |
|
|
Term
|
Definition
| gives the program the biggest available partition. Best allocation strategy, since it allows for bigger holes, more available spaces for the next process |
|
|
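The three placement strategies above (first fit, best fit, worst fit) can be sketched over an assumed free list of holes, each given as a (start, size) pair:

```python
# Sketch: hole-selection strategies over an assumed free list.

def first_fit(holes, size):
    """First hole big enough, in list order."""
    return next((h for h in holes if h[1] >= size), None)

def best_fit(holes, size):
    """Smallest hole that still fits (tightest fit)."""
    fits = [h for h in holes if h[1] >= size]
    return min(fits, key=lambda h: h[1], default=None)

def worst_fit(holes, size):
    """Biggest available hole, leaving the largest leftover."""
    fits = [h for h in holes if h[1] >= size]
    return max(fits, key=lambda h: h[1], default=None)

holes = [(0, 100), (200, 500), (800, 300)]
print(first_fit(holes, 250))  # (200, 500)
print(best_fit(holes, 250))   # (800, 300)
print(worst_fit(holes, 250))  # (200, 500)
```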
Term
|
Definition
| to relocate programs to join holes together into bigger holes |
|
|
Term
|
Definition
| medium term scheduling: preempt processes and reclaim their memory. |
|
|
Term
|
Definition
| if a process calls on two processes, replace the one that finishes first with the other, or others |
|
|
Term
|
Definition
| hides all physical aspects of memory from users. Memory is a logically unbounded virtual address space of 2^n bytes. Only portions of virtual address space are in physical memory at any one time. |
|
|
Term
|
Definition
| physical memory is partitioned into equal-sized page frames. Memory addresses are then treated as a pair (page number, page offset): a virtual address uses log2(number of pages) bits for the page number and log2(frame size) bits for the offset. Virtual pages are mapped to frames |
|
|
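The (page number, offset) split can be sketched with bit operations; the page size, page-table contents, and address below are toy values.

```python
# Sketch: virtual-to-physical translation with a toy page table
# (256-byte pages, so the low 8 bits are the offset; all values
# are assumptions for illustration).

OFFSET_BITS = 8                        # 256-byte pages
page_table = {0: 5, 1: 2, 2: 7, 3: 0}  # page number -> frame number

def translate(vaddr):
    page = vaddr >> OFFSET_BITS                 # high bits: page number
    offset = vaddr & ((1 << OFFSET_BITS) - 1)   # low bits: offset
    frame = page_table[page]                    # look up the frame
    return (frame << OFFSET_BITS) | offset      # same offset, new frame

print(hex(translate(0x013A)))  # page 1 -> frame 2, so 0x23a
```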
Term
|
Definition
| the number of the partition in the physical address space |
|
|
Term
|
Definition
| maps virtual pages to physical frames |
|
|
Term
| translation lookaside buffer (TLB) |
|
Definition
| a hardware cache of recent virtual-to-physical page translations |
|
|
Term
|
Definition
| when a non-mapped page is referenced. To handle the fault: block the running process, initiate disk I/O to read in the unmapped page, resume/initiate some other process; when the disk I/O completes, map the missing page into memory and restart the faulted process |
|
|
Term
| effective memory access time (EAT) |
|
Definition
| (memory access time × probability of a page hit) + (page fault service time × probability of a page fault). Reminder: P(page hit) + P(page fault) = 1. |
|
|
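The EAT formula above in code; the access time, fault service time, and fault probability are made-up example values.

```python
# Sketch: effective access time from the formula above
# (times in nanoseconds, probabilities assumed).

def eat(mem_access_ns, fault_service_ns, p_fault):
    p_hit = 1 - p_fault   # the two probabilities sum to 1
    return mem_access_ns * p_hit + fault_service_ns * p_fault

# 200 ns memory access, 8 ms fault service, 1-in-1000 fault rate
print(eat(200, 8_000_000, 0.001))  # ~8199.8 ns
```

Even a tiny fault probability dominates the effective time, which is why page-fault rates must be kept extremely low.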