The main focus of this book is on the Linux kernel, its design and development. The author kicks off the book by giving a brief introduction to Linux: its history, origin, versions, community and setup. He mentions a lot of things in the book (some of them quite fascinating) that added to my knowledge of operating systems. 1. Among the things that make the latest version of the Linux kernel different from others is the usage of GNU C rather than strict ANSI C. The kernel uses extensions provided by GNU C like branch annotations, inline assembly and inline functions. All these help the GNU compiler (GCC) make code optimizations. 2. Threads and processes are modeled the same! The general notion of a thread being a lightweight process, much more efficient than a process, is defied by the Linux community. We know that threads share code and data, whereas it is not that straightforward to share data among processes. Threads are computationally less expensive than processes, their creation and reaping being cheaper. According to the author, process creation times in Linux compare favorably with process or even thread creation on other operating systems. The author does not throw much light on the benchmark results.
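To make the thread-as-process model concrete, here is a minimal userspace sketch (mine, not from the book): a child created with the clone system call and the CLONE_VM | CLONE_FILES | CLONE_FS | CLONE_SIGHAND flags behaves like a thread, so a write it makes to a global variable is visible to its parent.

```c
#define _GNU_SOURCE
#include <sched.h>
#include <signal.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>

static int shared_counter = 0;

/* Runs in the child; because CLONE_VM shares the address space, this
 * write is visible to the parent, just as between two pthreads. */
static int child_fn(void *arg) {
    (void)arg;
    shared_counter = 42;
    return 0;
}

/* Create a thread-like child with clone(), sharing memory, open files,
 * filesystem info and signal handlers; reap it and return what it wrote. */
int run_clone_demo(void) {
    const size_t stack_size = 1024 * 1024;
    char *stack = malloc(stack_size);
    if (!stack)
        return -1;
    int flags = CLONE_VM | CLONE_FILES | CLONE_FS | CLONE_SIGHAND | SIGCHLD;
    /* Stacks grow down on most architectures, so pass the top of the block. */
    pid_t pid = clone(child_fn, stack + stack_size, flags, NULL);
    if (pid < 0) {
        free(stack);
        return -1;
    }
    waitpid(pid, NULL, 0);
    free(stack);
    return shared_counter;
}
```

Dropping CLONE_VM from the flag set would give the child a copied address space, and the parent would still see 0 — that flag set, not the task abstraction, is what distinguishes a "thread" from a "process" here.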
He concludes that the only thing to be concerned about is sharing among different processes in Linux. This is achieved by the clone system call, which makes sharing of resources like the VM (address space), open files, filesystem information, signal handlers, etc. possible. 3. Completely Fair Scheduler. Linux handles scheduling of processes by assigning them nice values (inversely proportional to priority) and real-time priorities. In earlier versions of Linux, a constant-time (O(1)) scheduler was introduced, which calculated the timeslice (based on nice values) in constant time and introduced per-processor run queues.
But this design gave poor performance for interactive processes. The author demonstrates this with the example of a text editor (I/O-bound and interactive) and a video encoder (CPU-bound), where the goal is to have the interactive text editor preempt the video encoder on an I/O event and get the CPU. The limitations of directly mapping nice values to timeslices are: variable context-switching behavior; a unit difference in nice values yielding very different timeslice ratios depending on the starting nice value; the inability to assign an absolute timeslice; and timeslices being coupled to timer ticks.
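The unit-difference problem is easy to see with a toy linear mapping (the numbers below are illustrative, roughly in the spirit of the old scheduler's ~100 ms at nice 0 down to ~5 ms at nice 19; they are not the kernel's actual table):

```c
/* Toy linear nice -> timeslice mapping in the spirit of the old O(1)
 * scheduler; the exact values are illustrative, not the kernel's. */
int timeslice_ms(int nice) {
    return 100 - 5 * nice;   /* nice 0 -> 100 ms ... nice 19 -> 5 ms */
}
```

Two tasks at nice 0 and 1 get 100 ms and 95 ms (a near 50/50 split of the CPU), but two tasks at nice 18 and 19 get 10 ms and 5 ms (a 2:1 split) even though the nice difference is identical in both cases — relative behavior depends on the absolute starting value.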
In the new Completely Fair Scheduler (CFS), processes are assigned a proportion of the CPU to use, and a process is context-switched out if any other process of equal priority has consumed less CPU. CFS calculates the proportion of the CPU assigned to a process by weighing its priority against the CPU allocation of the other runnable processes in the system at a given time. Moreover, nice values now have a geometric effect on CPU allocation. CFS thus yields constant fairness but a variable switching rate.
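The geometric effect can be sketched as follows (my own simplification: the kernel actually uses a precomputed weight table, but the design intent is that each nice step scales a task's weight by roughly 1.25):

```c
/* Sketch of CFS-style geometric weighting: each nice step scales the
 * task's weight by ~1.25. The real kernel uses a precomputed table;
 * this loop avoids libm and captures the same ratio. */
double nice_weight(int nice) {
    double w = 1024.0;
    int steps = nice < 0 ? -nice : nice;
    for (int i = 0; i < steps; i++)
        w = (nice < 0) ? w * 1.25 : w / 1.25;
    return w;
}

/* Fraction of the CPU task `which` receives among `n` runnable tasks. */
double cpu_share(const int *nices, int n, int which) {
    double total = 0.0;
    for (int i = 0; i < n; i++)
        total += nice_weight(nices[i]);
    return nice_weight(nices[which]) / total;
}
```

Because the weighting is geometric, only relative nice values matter: tasks at nice (0, 5) split the CPU in the same proportion as tasks at nice (10, 15), fixing the old scheduler's dependence on the absolute starting nice value.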
CFS does not schedule perfectly fairly if the system has a very large number of processes, since timeslices would shrink toward zero and context-switching overhead would dominate; Linux therefore imposes a restriction on the minimum amount of time quantum (minimum granularity) allocated to a process. In Linux, there is no process context switch after we acquire a spin lock in the kernel. This is unfair to processes (which might be high priority) that do not want that lock. At least this approach does not disable interrupts. 4. The interrupt handling approach in the Linux kernel is very interesting. It breaks down the ISRs (interrupt service routines) into two pieces depending upon their work.
It recognizes the top half of the ISR as time-critical, performing important functions like acknowledging interrupts. The bottom half is for long processing jobs. The author gives the example of a network card driver, where packets can be dropped if the network buffer is not copied into memory and becomes full, whereas the job of processing the packets can be delayed until some later time. An eye-opening thing to learn was that interrupt handlers cannot sleep: they are not associated with a process context, so sleeping would block the current (interrupted) process, and rescheduling requires a backing process.
Most of the Linux design decisions are based on this fact! 5. Bottom halves and deferring work. Softirqs, tasklets and work queues were the important concepts dealt with, which perform the bottom half of the ISR, each used for a different purpose. Softirqs, by definition, should be used as bottom halves when we are looking for more concurrency/performance, because softirqs can run simultaneously on any processor; even two of the same type can run concurrently.
Softirqs are limited in number, as registered softirqs are statically determined at compile time and cannot be changed later. Tasklets are built on top of softirqs and are dynamically created. Two different tasklets can run concurrently on different processors, but two tasklets of the same type cannot run simultaneously. Thus they provide a good trade-off between performance and ease of use. If there is a need to sleep in the bottom half, kernel threads that maintain work queues can be made to do the deferred work.
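A userspace analogy of the work-queue idea (my own sketch using pthreads; the kernel's real interface, e.g. schedule_work(), differs): a dedicated worker thread drains a job list, and because it runs in full process context it is free to block while waiting.

```c
#include <pthread.h>

/* Userspace analogy of a kernel work queue: a worker thread pulls
 * deferred jobs off a list; since it has process context it can sleep. */
#define MAX_JOBS 16

typedef void (*work_fn)(void *);
typedef struct { work_fn fn; void *arg; } work_t;

static work_t jobs[MAX_JOBS];
static int head, tail, shutting_down;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t cond = PTHREAD_COND_INITIALIZER;

/* Enqueue deferred work (the analogue of what a top half would do). */
void queue_work(work_fn fn, void *arg) {
    pthread_mutex_lock(&lock);
    jobs[tail % MAX_JOBS] = (work_t){ fn, arg };
    tail++;
    pthread_cond_signal(&cond);
    pthread_mutex_unlock(&lock);
}

void queue_shutdown(void) {
    pthread_mutex_lock(&lock);
    shutting_down = 1;
    pthread_cond_broadcast(&cond);
    pthread_mutex_unlock(&lock);
}

/* The worker: sleeps on the condition variable when idle and drains
 * the queue otherwise -- the userspace counterpart of a kernel thread. */
void *worker(void *unused) {
    (void)unused;
    for (;;) {
        pthread_mutex_lock(&lock);
        while (head == tail && !shutting_down)
            pthread_cond_wait(&cond, &lock);   /* the worker sleeps here */
        if (head == tail && shutting_down) {
            pthread_mutex_unlock(&lock);
            break;
        }
        work_t w = jobs[head % MAX_JOBS];
        head++;
        pthread_mutex_unlock(&lock);
        w.fn(w.arg);  /* run the deferred job in thread (process) context */
    }
    return NULL;
}
```

The pthread_cond_wait call is exactly what a softirq or tasklet handler could never do: it puts the caller to sleep until more work arrives.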
Kernel threads are schedulable and run in process context; hence, they can sleep. Context-switching overhead is involved with them, of course. An interesting thing to learn was that heavy softirq processing should also be handed off to kernel threads (the ksoftirqd threads): otherwise user space would run more or less depending on the softirq load, which can lead to either of them starving the other. 6. Kernel synchronization. A significant point to note is that spin locks can be used in interrupt handlers, whereas semaphores cannot, because they sleep.
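The contrast is easy to see in a userspace sketch of a spin lock (my own, using C11 atomics, not the kernel's spinlock_t): a contended waiter just burns cycles and never invokes the scheduler, which is what makes spinning legal in interrupt context while a sleeping semaphore is not.

```c
#include <stdatomic.h>

/* Minimal test-and-set spin lock (C11 atomics; not the kernel's
 * spinlock_t). A waiter busy-waits instead of sleeping, so there is
 * no scheduler involvement at all. */
typedef struct { atomic_flag locked; } spin_t;

void spin_lock(spin_t *s) {
    while (atomic_flag_test_and_set_explicit(&s->locked, memory_order_acquire))
        ;  /* busy-wait: no sleep, no context switch */
}

void spin_unlock(spin_t *s) {
    atomic_flag_clear_explicit(&s->locked, memory_order_release);
}
```

The flip side, as the book notes, is that spinning wastes the CPU of every waiter, so spin locks are only sensible for very short critical sections.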
Sequence locks (seqlocks) were a new thing that I got to know about, used for atomic read and write operations on shared data. A seqlock keeps a sequence count on the object accessed and is similar in concept to load-linked/store-conditional instructions. Seqlocks are used to manage the 64-bit jiffies value in Linux, which holds the timer tick count. The big kernel lock is not well explained. The book says that "it was created to ease the transition from Linux's original SMP implementation to fine-grained locking," but it does not explain how it did that and leaves the reader confused.
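The seqlock read/write protocol can be sketched in user space (my own simplified version: the kernel's seqlock_t also embeds a spin lock to serialize multiple writers, whereas this sketch assumes a single writer, like the timer tick updating jiffies):

```c
#include <stdatomic.h>

/* Simplified userspace seqlock: the sequence count is odd while a
 * write is in progress, so readers can detect and retry torn reads
 * of the 64-bit value. Single writer assumed. */
typedef struct {
    atomic_uint seq;            /* odd while a write is in progress */
    unsigned long long value;   /* e.g. a 64-bit jiffies-like counter */
} seq64_t;

void seq_write(seq64_t *s, unsigned long long v) {
    atomic_fetch_add(&s->seq, 1);   /* count goes odd: write begins */
    s->value = v;
    atomic_fetch_add(&s->seq, 1);   /* count goes even: write done */
}

/* Lock-free read: retry if a writer was active or intervened, much
 * like a load-linked/store-conditional retry loop. */
unsigned long long seq_read(seq64_t *s) {
    unsigned int start;
    unsigned long long v;
    do {
        do {
            start = atomic_load(&s->seq);
        } while (start & 1);        /* writer in progress: wait it out */
        v = s->value;
    } while (atomic_load(&s->seq) != start);  /* changed? read again */
    return v;
}
```

Readers never block the writer, which suits data like jiffies that is written on every timer tick but read from many places.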