© 2026 LIBREUNI PROJECT

Process and Thread Management

If the kernel is the conductor of an orchestra, then Processes and Threads are the musicians. Managing these units of execution is one of the most complex tasks an operating system performs. A modern computer might have hundreds of programs “running” at once, even though it only has a handful of physical CPU cores. The illusion of simultaneous execution is maintained through clever scheduling and context switching.

What is a Process?

A Process is a program in execution. It is more than just the machine code (the “Text” segment); it includes the current activity, as represented by:

  • Program Counter (PC): The address of the next instruction to execute.
  • Stack: Temporary data (function parameters, local variables, return addresses).
  • Data Section: Global variables.
  • Heap: Memory allocated dynamically at runtime.
  • Resources: Open files, network sockets, handles to I/O devices.

The Process Control Block (PCB)

To manage a process, the OS maintains a data structure called the Process Control Block (PCB). Think of this as the “manifest” for the process. When the OS stops one process to start another, it saves the current CPU registers and state into the PCB so it can resume exactly where it left off later.
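A real PCB is far more elaborate and kernel-specific, but a simplified C sketch gives the flavor (every field name here is illustrative, not taken from any particular kernel):

```c
#include <stdint.h>

typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } proc_state;

typedef struct pcb {
    int         pid;              /* unique process identifier */
    proc_state  state;            /* where the process is in its lifecycle */
    uint64_t    program_counter;  /* saved PC: the next instruction to resume at */
    uint64_t    registers[16];    /* saved general-purpose register contents */
    uint64_t    stack_pointer;    /* top of this process's stack */
    int         open_files[16];   /* handles to files, sockets, devices */
    struct pcb *next;             /* link for the scheduler's ready queue */
} pcb;
```

When a process is paused, the CPU's registers are copied into these saved-state fields; when it is resumed, they are copied back out.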

The Process Lifecycle

A process moves through various states during its life.

[State diagram: New → Ready (created/admitted) → Running (scheduler dispatch); Running → Ready on an interrupt (timeout); Running → Waiting on an I/O or event wait; Waiting → Ready on I/O or event completion; Running → Terminated on exit or error.]
  1. New: The process is being created.
  2. Ready: The process is waiting to be assigned to a processor.
  3. Running: Instructions are being executed on a CPU core.
  4. Waiting (Blocked): The process is waiting for some event to occur (like a keystroke or a disk read).
  5. Terminated: The process has finished execution.
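The legal moves between these states can be encoded as a small check (a sketch; real kernels enforce this implicitly in their scheduler code rather than with an explicit table):

```c
typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } proc_state;

/* Returns 1 if the lifecycle allows moving from `from` to `to`, else 0. */
int can_transition(proc_state from, proc_state to) {
    switch (from) {
    case NEW:     return to == READY;       /* admitted */
    case READY:   return to == RUNNING;     /* scheduler dispatch */
    case RUNNING: return to == READY        /* interrupt (timeout) */
                      || to == WAITING      /* I/O or event wait */
                      || to == TERMINATED;  /* exit or error */
    case WAITING: return to == READY;       /* I/O or event completion */
    default:      return 0;                 /* terminated is final */
    }
}
```

Note that a Waiting process never jumps straight back to Running: it must re-enter the Ready queue and be dispatched again.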

Context Switching

The act of stopping the current process, saving its state (the PCB), and loading the state of a new process is called a Context Switch. This is “pure overhead”—while the CPU is context switching, it isn’t doing any useful work for the user. System designers strive to minimize this overhead, but it is the price we pay for multitasking.
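In miniature, a context switch is just copying register state between the CPU and two PCBs. The toy model below shows the core idea (real switches are hand-written assembly routines that also handle memory mappings and kernel stacks):

```c
typedef struct { unsigned long pc, sp, regs[8]; } cpu_context;
typedef struct { int pid; cpu_context ctx; } pcb;

cpu_context cpu;  /* stand-in for the machine's actual registers */

void context_switch(pcb *prev, pcb *next) {
    prev->ctx = cpu;   /* save the outgoing process's state into its PCB */
    cpu = next->ctx;   /* restore the incoming process's saved state;
                          execution now resumes where `next` left off */
}
```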

Threads: Lightweight Processes

A Thread is a “lightweight” unit of execution within a process.

  • Processes provide Isolation: Process A cannot read Process B’s memory.
  • Threads provide Efficiency: All threads within a single process share the same memory space and resources.

Why use Threads?

Imagine a web browser:

  • One thread handles the user interface (buttons, scrolling).
  • One thread downloads a large image.
  • One thread renders the HTML text.

If these were separate processes, they would struggle to share the data of the webpage efficiently. By being threads, they can all access the same global variables and buffers directly.

The Downside of Threads

The “shared memory” that makes threads efficient also makes them dangerous. If two threads try to increment a counter at the exact same time, a Race Condition can occur, leading to corrupted data. Developers must use synchronization tools like Mutexes (Mutual Exclusion locks) and Semaphores to manage this.

CPU Scheduling

How does the OS decide which “Ready” process gets to run next? This is the job of the Scheduler.

Common Scheduling Algorithms:

  1. First-Come, First-Served (FCFS): Simple but inefficient. A long process can “clog” the system (the Convoy Effect).
  2. Shortest Job Next (SJN): Best for minimizing average wait time, but it’s hard to predict how long a job will take.
  3. Round Robin (RR): Each process gets a small unit of CPU time (a “time slice”). If it doesn’t finish, it’s moved to the back of the queue. This is the basis for modern interactive systems.
  4. Priority Scheduling: Tasks like audio processing or mouse movement get higher priority than background tasks like indexing files.

Inter-Process Communication (IPC)

Even though processes are isolated, they often need to talk to each other.

  • Pipes: A simple way to “pipe” the output of one process into the input of another (e.g., ls | grep .txt in Linux).
  • Shared Memory: Two processes map a segment of physical memory into their own address spaces. This is the fastest method.
  • Message Passing: The kernel provides a queue where processes can leave messages for each other.

Managing processes and threads is a balancing act between responsiveness (the user interface shouldn’t lag) and throughput (as much work as possible should get done). In the next module, we’ll see how the OS provides the memory “sandbox” that these processes live in.
