Interview Questions

Operating System

Explain the popular multiprocessor thread-scheduling strategies.

Ans: 

1. Load Sharing: Processes are not assigned to a particular processor. A global queue of ready threads is maintained, and each processor, when idle, selects a thread from this queue. Note that load balancing refers to a scheme where work is allocated to processors on a more permanent basis. (A minimal sketch of this idea appears after this list.)

2. Gang Scheduling: A set of related threads is scheduled to run on a set of processors at the same time, on a one-to-one basis. Closely related threads or processes may be scheduled this way to reduce synchronization blocking and minimize process switching. Group scheduling predated this strategy.


3. Dedicated Processor Assignment: Provides implicit scheduling defined by the assignment of threads to processors. For the duration of program execution, each program is allocated a set of processors equal in number to the number of threads in the program. Processors are chosen from the available pool.


4. Dynamic Scheduling: The number of threads in a program can be altered during the course of execution.
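For illustration, here is a minimal load-sharing sketch in C using POSIX threads. It assumes a toy setup: a single global queue of ready jobs protected by a mutex, with each worker thread standing in for an idle processor that pulls the next job when it becomes free, so no job is bound to a particular processor. The names (NWORKERS, NJOBS, worker) are invented for this example and are not from any particular system.

```c
#include <pthread.h>
#include <stdio.h>

#define NWORKERS 4
#define NJOBS    16

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static int next_job = 0;                 /* head of the global ready queue */

static void *worker(void *arg)
{
    long id = (long)arg;
    for (;;) {
        pthread_mutex_lock(&lock);
        if (next_job >= NJOBS) {         /* queue empty: this processor goes idle */
            pthread_mutex_unlock(&lock);
            return NULL;
        }
        int job = next_job++;            /* idle processor takes the next ready job */
        pthread_mutex_unlock(&lock);

        printf("processor %ld runs job %d\n", id, job);
    }
}

int main(void)
{
    pthread_t tid[NWORKERS];
    for (long i = 0; i < NWORKERS; i++)
        pthread_create(&tid[i], NULL, worker, (void *)i);
    for (int i = 0; i < NWORKERS; i++)
        pthread_join(tid[i], NULL);
    return 0;
}
```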

What is busy waiting?

Ans: 

The repeated execution of a loop of code while waiting for an event to occur is called busy-waiting. The CPU is not engaged in any real productive activity during this period, and the process does not progress toward completion.
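A small C sketch of busy-waiting, assuming a second thread that eventually sets a shared flag (the names event_occurred and signaller are made up for the example). The main thread spins in a loop, repeatedly testing the flag, which keeps the CPU occupied without doing useful work.

```c
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

static atomic_bool event_occurred = false;

/* Another thread signals the event after a short delay. */
static void *signaller(void *arg)
{
    (void)arg;
    sleep(1);
    atomic_store(&event_occurred, true);
    return NULL;
}

int main(void)
{
    pthread_t t;
    pthread_create(&t, NULL, signaller, NULL);

    /* Busy-waiting: the loop keeps the CPU occupied doing nothing useful
     * until the flag flips; the process makes no progress toward completion. */
    while (!atomic_load(&event_occurred))
        ;  /* spin */

    printf("event observed\n");
    pthread_join(t, NULL);
    return 0;
}
```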

What are the stipulations of C2 level security?

Ans: 

C2 level security provides for:
1. Discretionary Access Control.
2. Identification and Authentication.
3. Auditing.
4. Resource reuse.

What is meant by arm-stickiness?

Ans:  If one or a few processes have a high access rate to data on one track of a storage disk, they may monopolize the device by repeated requests to that track. This generally happens with most common device-scheduling algorithms (LIFO, SSTF, C-SCAN, etc.). High-density multisurface disks are more likely to be affected by this than low-density ones.
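A toy C sketch of how this can happen under SSTF (shortest seek time first). One process keeps re-requesting a track near the arm's current position, so its requests are always "closest" and a pending request for a distant track keeps being postponed. All track numbers here are invented for illustration.

```c
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int head = 50;              /* current arm position */
    int far_request = 180;      /* pending request from another process */
    int hot_track = 52;         /* track the "sticky" process keeps hitting */

    for (int round = 1; round <= 10; round++) {
        /* SSTF: pick whichever pending request is closest to the head */
        if (abs(hot_track - head) <= abs(far_request - head)) {
            printf("round %2d: service track %d (track %d still waiting)\n",
                   round, hot_track, far_request);
            head = hot_track;   /* the sticky process immediately re-requests */
        } else {
            printf("round %2d: service far track %d\n", round, far_request);
            head = far_request;
            break;
        }
    }
    return 0;
}
```

Running this shows the far request never being serviced within the simulated rounds: the arm "sticks" to the busy track.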

What is cycle stealing?

Ans:  We encounter cycle stealing in the context of Direct Memory Access (DMA). Either the DMA controller can use the data bus when the CPU does not need it, or it may force the CPU to temporarily suspend operation. The latter technique is called cycle stealing. Note that cycle stealing can be done only at specific break points in an instruction cycle.

When is a system in safe state?

Ans:  The set of dispatchable processes is in a safe state if there exists at least one temporal order in which all processes can be run to completion without resulting in a deadlock.
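The standard way to test this is the safety check used by the banker's algorithm: the system is safe if some order exists in which every process can obtain its remaining need and run to completion, releasing what it holds. Below is a minimal C sketch; the matrices (available, allocation, need) hold made-up values for illustration.

```c
#include <stdbool.h>
#include <stdio.h>

#define P 3   /* processes */
#define R 2   /* resource types */

int available[R]     = {3, 2};
int allocation[P][R] = {{1, 0}, {2, 1}, {0, 1}};
int need[P][R]       = {{2, 2}, {1, 1}, {3, 1}};

bool is_safe(void)
{
    int work[R];
    bool finished[P] = {false};
    for (int r = 0; r < R; r++) work[r] = available[r];

    for (int done = 0; done < P; ) {
        bool progressed = false;
        for (int p = 0; p < P; p++) {
            if (finished[p]) continue;
            bool can_run = true;
            for (int r = 0; r < R; r++)
                if (need[p][r] > work[r]) { can_run = false; break; }
            if (can_run) {                         /* p can run to completion... */
                for (int r = 0; r < R; r++)
                    work[r] += allocation[p][r];   /* ...and release what it holds */
                finished[p] = true;
                progressed = true;
                done++;
            }
        }
        if (!progressed) return false;             /* no runnable process left: unsafe */
    }
    return true;                                   /* an order exists: safe state */
}

int main(void)
{
    printf("system is %s\n", is_safe() ? "in a safe state" : "not in a safe state");
    return 0;
}
```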

What is the resident set and working set of a process?

Ans:  The resident set is that portion of the process image that is actually in real memory at a particular instant. The working set is that subset of the resident set that is actually needed for execution. (Relate this to the variable-window-size method for swapping techniques.)
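As a rough illustration, a common textbook definition takes the working set at time t to be the distinct pages referenced in the last DELTA references. The C sketch below computes it over a made-up page-reference string; DELTA and the reference string are assumptions for the example, not from the source.

```c
#include <stdio.h>

#define DELTA 4   /* working-set window, measured in references */

int main(void)
{
    int refs[] = {1, 2, 1, 3, 4, 2, 4, 4, 5, 1};   /* page-reference string */
    int n = sizeof refs / sizeof refs[0];

    /* working set at time t = distinct pages in refs[t-DELTA+1 .. t] */
    for (int t = DELTA - 1; t < n; t++) {
        int seen[16] = {0}, size = 0;
        for (int i = t - DELTA + 1; i <= t; i++) {
            if (!seen[refs[i]]) { seen[refs[i]] = 1; size++; }
        }
        printf("t=%d working-set size=%d\n", t, size);
    }
    return 0;
}
```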

What is the Translation Lookaside Buffer (TLB)?

Ans:  In a cached system, the base addresses of the last few referenced pages are maintained in a set of registers called the TLB, which aids in faster lookup. The TLB contains those page-table entries that have been most recently used. Normally, each virtual memory reference causes two physical memory accesses: one to fetch the appropriate page-table entry, and one to fetch the desired data. With a TLB in between, this is reduced to just one physical memory access in the case of a TLB hit.
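A simplified C simulation of that lookup, assuming a single-level page table, a tiny fully associative TLB, and round-robin replacement (all sizes, table contents, and names are invented for illustration). On a hit the frame is found without touching the page table; on a miss the page-table entry is fetched and cached in the TLB.

```c
#include <stdio.h>

#define TLB_SIZE 4
#define PAGES    16
#define PAGE_SZ  4096

struct tlb_entry { int valid, page, frame; };

static struct tlb_entry tlb[TLB_SIZE];
static int page_table[PAGES] = {7, 3, 0, 9, 5, 1, 2, 8,
                                4, 6, 10, 11, 12, 13, 14, 15};
static int next_victim = 0;                       /* simple round-robin replacement */

int translate(unsigned vaddr)
{
    int page = vaddr / PAGE_SZ, offset = vaddr % PAGE_SZ;

    for (int i = 0; i < TLB_SIZE; i++)            /* TLB hit: only one memory access */
        if (tlb[i].valid && tlb[i].page == page)
            return tlb[i].frame * PAGE_SZ + offset;

    int frame = page_table[page];                 /* TLB miss: extra access for the PTE */
    tlb[next_victim] = (struct tlb_entry){1, page, frame};
    next_victim = (next_victim + 1) % TLB_SIZE;
    return frame * PAGE_SZ + offset;
}

int main(void)
{
    printf("0x%x -> 0x%x\n", 0x2123u, (unsigned)translate(0x2123));
    printf("0x%x -> 0x%x\n", 0x2fffu, (unsigned)translate(0x2fff));  /* same page: hit */
    return 0;
}
```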

What are the typical elements of a process image?

Ans:
1. User data: The modifiable part of the user space. May include program data, a user stack area, and programs that may be modified.
2. User program: The instructions to be executed.
3. System stack: Each process has one or more LIFO stacks associated with it, used to store parameters and calling addresses for procedure and system calls.
4. Process Control Block (PCB): Information needed by the OS to control the process (a small illustrative sketch follows below).
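As a rough idea of what a PCB holds, here is a hedged C struct sketch; the fields and their names are illustrative only and do not reflect any particular operating system's actual layout.

```c
/* Illustrative-only sketch of typical PCB contents. */
struct pcb {
    int   pid;               /* process identifier */
    int   state;             /* e.g. ready, running, blocked */
    void *program_counter;   /* where execution resumes */
    void *stack_pointer;     /* top of the system stack */
    void *page_table;        /* memory-management information */
    int   priority;          /* scheduling information */
    int   open_files[16];    /* accounting / I/O status information */
};
```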

What are turnaround time and response time?

Ans:  Turnaround time is the interval between the submission of a job and its completion. Response time is the interval between the submission of a request and the first response to that request.
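A short worked example in C with made-up submission, first-response, and completion times, just to show the two formulas (turnaround = completion - submission, response = first response - submission):

```c
#include <stdio.h>

struct job { const char *name; int submit, first_response, complete; };

int main(void)
{
    struct job jobs[] = {
        {"A", 0, 2, 10},
        {"B", 1, 5,  8},
        {"C", 3, 4, 12},
    };
    for (int i = 0; i < 3; i++) {
        printf("%s: turnaround=%d response=%d\n", jobs[i].name,
               jobs[i].complete - jobs[i].submit,          /* completion - submission */
               jobs[i].first_response - jobs[i].submit);   /* first response - submission */
    }
    return 0;
}
```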