Wednesday, 6 July 2016

Operating System Interview Questions

Kernel Synchronization

What are the differences between mutex and semaphore?

A mutex is a locking mechanism used to synchronize access to a resource. Only one task (a thread or a process, depending on the OS abstraction) can acquire the mutex at a time. This means there is ownership associated with a mutex: only the owner can release the lock (mutex).
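A minimal sketch with POSIX threads (the counter and the worker function are made up for the example; pthread_mutex_lock/unlock are the standard calls):

#include <pthread.h>
#include <stdio.h>

static int shared_counter = 0;                            /* the shared resource      */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;  /* protects shared_counter  */

static void *worker(void *arg)
{
    pthread_mutex_lock(&lock);      /* this thread now owns the mutex       */
    shared_counter++;               /* critical section: one task at a time */
    pthread_mutex_unlock(&lock);    /* only the owner may release it        */
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %d\n", shared_counter);   /* always 2 */
    return 0;
}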

A semaphore is a signaling mechanism (“I am done, you can carry on” kind of signal). For example, if you are listening to songs (think of it as one task) on your mobile and your friend calls you at the same time, an interrupt is triggered, and the interrupt service routine (ISR) signals the call-processing task to wake up.
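A sketch of that signaling pattern with a POSIX semaphore (here an ordinary thread stands in for the ISR, which is an assumption for the example; sem_wait/sem_post are the standard calls):

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

static sem_t call_event;            /* starts at 0: nothing to process yet */

/* Stands in for the ISR: it only signals, it never "owns" the semaphore. */
static void *interrupt_source(void *arg)
{
    sem_post(&call_event);          /* "a call arrived, carry on" */
    return NULL;
}

/* Call-processing task: sleeps until it is signalled. */
static void *call_task(void *arg)
{
    sem_wait(&call_event);          /* blocks until sem_post() */
    printf("handling incoming call\n");
    return NULL;
}

int main(void)
{
    pthread_t isr, task;
    sem_init(&call_event, 0, 0);
    pthread_create(&task, NULL, call_task, NULL);
    pthread_create(&isr, NULL, interrupt_source, NULL);
    pthread_join(task, NULL);
    pthread_join(isr, NULL);
    sem_destroy(&call_event);
    return 0;
}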

What is a race condition?

A race condition occurs in a multi-threaded environment when more than one thread tries to access (read, modify, write) a shared resource at the same time, so the result depends on the order in which the accesses happen. For example, suppose two processes each read a shared memory value and then flip it. Without a race, the sequence looks like this:

Process 1      Process 2      Memory Value
Read value                    0
Flip value                    1
               Read value     1
               Flip value     0

If a race condition occurred causing these two processes to overlap, the sequence could potentially look more like this:

Process 1      Process 2      Memory Value
Read value                    0
               Read value     0
Flip value                    1
               Flip value     1
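A minimal C sketch of the same race shown in the tables (the flag name and loop count are invented for the example; the final value depends on how the scheduler interleaves the two threads):

#include <pthread.h>
#include <stdio.h>

static int flag = 0;    /* the shared memory value from the tables above */

/* Each thread reads the value and flips it: a read-modify-write that is
   NOT atomic, so both threads can read the same value before either writes. */
static void *flip(void *arg)
{
    for (int i = 0; i < 100000; i++)
        flag = !flag;
    return NULL;
}

int main(void)
{
    pthread_t p1, p2;
    pthread_create(&p1, NULL, flip, NULL);
    pthread_create(&p2, NULL, flip, NULL);
    pthread_join(p1, NULL);
    pthread_join(p2, NULL);
    /* With an even total number of flips the value "should" be 0,
       but overlapping read/flip steps can leave it at 1. */
    printf("flag = %d\n", flag);
    return 0;
}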

How to avoid a race condition?

Use a synchronization mechanism (a mutex, a semaphore, or an atomic operation) so that only one task at a time can perform the read-modify-write on the shared resource.
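For instance, continuing the race sketch above, guarding the flip with a mutex is one possible fix (a semaphore or an atomic operation would also work):

static pthread_mutex_t flag_lock = PTHREAD_MUTEX_INITIALIZER;

static void *flip_safe(void *arg)      /* drop-in replacement for flip() above */
{
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&flag_lock);    /* only one thread in the read-modify-write */
        flag = !flag;
        pthread_mutex_unlock(&flag_lock);
    }
    return NULL;
}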

What is a critical region?

In concurrent programming, a critical section is a part of a multi-process program that may not be concurrently executed by more than one of the program's processes. In other words, it is a piece of a program that requires mutual exclusion of access.


What are atomic operations?    
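An atomic operation completes as a single indivisible step, so no other task can ever observe it half-done. A short sketch with C11 <stdatomic.h> (the counter name is made up for the example):

#include <stdatomic.h>
#include <stdio.h>

static atomic_int hits = 0;       /* all updates below are indivisible */

void record_hit(void)
{
    atomic_fetch_add(&hits, 1);   /* atomic read-modify-write: no lock needed */
}

int main(void)
{
    record_hit();
    record_hit();
    printf("hits = %d\n", atomic_load(&hits));   /* 2 */
    return 0;
}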
                                           
What are the differences between a general purpose OS and an RTOS?

What are the characteristics of an RTOS?

What is the difference between a hard real time system and a soft real time system?
Hard real-time means you must absolutely hit every deadline. Very few systems have this requirement. Some examples are nuclear systems, some medical applications such as pacemakers, a large number of defense applications, avionics, etc.
Firm/soft real-time systems can miss some deadlines, but performance will eventually degrade if too many are missed. A good example is the sound system in your computer: if you miss a few bits, no big deal, but miss too many and the output eventually becomes unusable. Similarly for seismic sensors: missing a few data points is no big deal, but you have to catch most of them to make sense of the data. More importantly, nobody is going to die if these systems miss a deadline.
What is priority inversion and how do you solve it?
Priority inversion is a problem, not a solution. The typical example is a low priority process acquiring a resource that a high priority process needs, and then being preempted by a medium priority process, so the high priority process is blocked on the resource while the medium priority one finishes (effectively being executed with a lower priority).
The high priority task doesn't run 100% of the time, and when it isn't running, a low priority task can come along and grab a mutex. In the middle of that work, a medium priority task comes along and preempts the low priority task. Then the high priority task wakes up and wants the mutex that the low priority task still owns, but it can't acquire it, so the high priority task can't get any further.

Disabling all interrupts to protect critical sections
When disabling interrupts is used to prevent priority inversion, there are only two priorities: preemptible, and interrupts disabled. With no third priority, inversion is impossible.
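In the Linux kernel a critical section of this kind typically looks something like the sketch below (local_irq_save()/local_irq_restore() are the real kernel helpers, but the function and the data it protects are invented for the example; note that this only masks interrupts on the local CPU, so on SMP it is normally paired with a spinlock):

#include <linux/irqflags.h>

static int shared_state;              /* invented example data                     */

static void update_shared_state(int v)
{
    unsigned long flags;

    local_irq_save(flags);            /* mask interrupts on this CPU               */
    shared_state = v;                 /* critical section: no ISR can preempt here */
    local_irq_restore(flags);         /* restore the previous interrupt state      */
}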
What is priority inheritance?
Under the policy of priority inheritance, whenever a high priority task has to wait for a resource shared with an executing low priority task, the low priority task is temporarily assigned the priority of the highest-priority waiting task for the duration of its own use of the shared resource.
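POSIX exposes this policy through a mutex attribute; a sketch, assuming the platform supports _POSIX_THREAD_PRIO_INHERIT (error handling omitted):

#include <pthread.h>

static pthread_mutex_t shared_lock;

static int init_pi_mutex(void)
{
    pthread_mutexattr_t attr;
    int rc;

    pthread_mutexattr_init(&attr);
    /* The owner of shared_lock inherits the priority of the highest-priority
       task that is blocked waiting for it. */
    pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);
    rc = pthread_mutex_init(&shared_lock, &attr);
    pthread_mutexattr_destroy(&attr);
    return rc;
}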

