Main memory and the registers built into the processor itself are the only storage that the CPU can access directly.
Registers built into the CPU are generally accessible within one cycle of the CPU clock.
A base and a limit register define a logical address space. We can provide memory protection by using two registers, usually a base and a limit.
The base register is now called a relocation register. The value in the relocation register is added to every address generated by a user process at the time the address is sent to memory.
The memory-mapping hardware converts logical addresses into physical addresses.
The run-time mapping from virtual to physical addresses is done by a hardware device called the memory-management unit (MMU).
For efficient CPU utilization, we want the execution time for each process to be long relative to the swap time.
The relocation register contains the value of the smallest physical address; the limit register contains the range of logical addresses.
The MMU maps the logical address dynamically by adding the value in the relocation register.
When the CPU scheduler selects a process for execution, the dispatcher loads the relocation and limit registers with the correct values as part of the context switch. Because every address generated by the CPU is checked against these registers, we can protect both the operating system and the other users' programs and data from being modified by this running process.
If a device driver (or other operating-system service) is not commonly used, we do not want to keep the code and data in memory, as we might be able to use that space for other purposes. Such code is sometimes called transient operating-system code; it comes and goes as needed.
Given N allocated blocks, another 0.5N blocks will be lost to fragmentation. That is, one-third of memory may be unusable! This property is known as the 50-percent rule.
A situation like this, where several processes access and manipulate the same data concurrently and the outcome of the execution depends on the particular order in which the access takes place, is called a race condition. To guard against the race condition above, we need to ensure that only one process at a time can be manipulating the variable counter. To make such a guarantee, we require that the processes be synchronized in some way.
How does a mutex work?
As mentioned before, the data of a mutex is simply an integer in memory. Its value starts at 0, meaning that it is unlocked. If you wish to lock the mutex, you check whether it is zero and then assign one. The mutex is now locked and you are its owner.
The trick is that the test-and-set operation has to be atomic. If two threads happened to read 0 at the exact same time, both would write 1 and think they own the mutex. Without CPU support there is no way to implement a mutex in user space: this operation must be atomic with respect to the other threads. Fortunately, CPUs have an instruction called "compare-and-set" (also known as "compare-and-swap" or "test-and-set") which does exactly this. It takes the address of the integer and two integer values: a compare value and a set value. If the compare value matches the current value of the integer, the integer is replaced with the set value. In C-style code this might look like this:
/* Returns the value at *to_compare before the swap (atomic, hardware-backed). */
int compare_set( int * to_compare, int compare, int set );

int mutex_value = 0;  /* 0 = unlocked */
int result = compare_set( &mutex_value, 0, 1 );
if( result == 0 ) { /* the old value was 0, so we got the lock */ }
The caller determines what happened from the return value, which is the value that was at the pointer prior to the swap. If this value is equal to the compare value, the caller knows the set was successful. If the value is different, the caller knows the value was not changed. When the piece of code is done with the mutex, it can simply set the value back to 0. This makes up the very basic part of our mutex.
template <typename Lock>
class LockGuard
{
public:
    explicit LockGuard(Lock& resource) : m_lock(resource)
    {
        m_lock.acquire();   // acquire on construction
    }
    ~LockGuard()
    {
        m_lock.release();   // release automatically at end of scope
    }
private:
    // Not copyable: declared but never defined (pre-C++11 idiom).
    LockGuard(const LockGuard&);
    LockGuard& operator=(const LockGuard&);

    Lock& m_lock;
};
POSIX (/ˈpɒzɪks/ poz-iks), an acronym for "Portable Operating System Interface",[1] is a family of standards specified by the IEEE for maintaining compatibility between operating systems.
A Thread Group is a set of threads all executing inside the same process. They all share the same
memory, and thus can access the same global variables, same heap memory, same set of file descriptors, etc. All these
threads execute in parallel (i.e. using time slices, or if the system has several processors, then really in parallel).
The advantage of using a thread group over using a process group is that context switching between threads is much
faster than context switching between processes (context switching means that the system switches from running one
thread or process, to running another thread or process).
Difference between mutex and binary semaphore:
A mutex can be released only by the thread that acquired it, while a semaphore can be signaled from any other thread (or process), so semaphores are better suited to some synchronization problems, like producer-consumer.
On Windows, binary semaphores are more like event objects than mutexes.