
© 2026 LIBREUNI PROJECT


Memory Management

Memory is arguably the most precious resource in a computer. Every instruction executed by the CPU must be fetched from memory, and every piece of data processed must reside there. In the early days of computing, programs were limited by the physical amount of RAM installed. Today, the Operating System uses a sophisticated layer of abstraction called Virtual Memory to provide each process with its own private, massive address space.

The Problem: Fragmentation and Protection

Initially, OSs used Contiguous Allocation: a program was loaded into a single block of memory. This led to two major problems:

  1. External Fragmentation: Small “holes” of free memory appeared between programs, but they were too small to hold a new program, even if the total free memory was sufficient.
  2. Protection: One malicious (or buggy) program could easily write to the memory address of another program, crashing it.

The Solution: Virtual Memory

Virtual Memory separates the Logical Addresses used by the programmer from the Physical Addresses of the RAM hardware.

Hardware Support: The MMU

The Memory Management Unit (MMU) is a hardware component in the CPU that translates virtual addresses to physical addresses on-the-fly.

Figure: The CPU core issues a virtual address (e.g., 0x1234); the MMU performs a page table lookup and forwards the corresponding physical address (e.g., 0x9ABC) to physical RAM, where the actual data resides.

Paging: The Modern Approach

Modern systems use a technique called Paging. Memory is divided into fixed-size blocks:

  • Pages: Virtual memory blocks (e.g., 4 KB).
  • Frames: Physical RAM blocks (identically sized).

The OS maintains a Page Table for each process. This table is a “map” that tells the MMU: “Virtual Page 7 is currently located in Physical Frame 42.”
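This lookup can be sketched in a few lines of Python. The 4 KB page size, the mapping of virtual page 7 to physical frame 42, and the `translate` helper are illustrative assumptions, not a real MMU interface:

```python
PAGE_SIZE = 4096  # 4 KB pages, as in the example above

# Hypothetical page table for one process: virtual page number -> physical frame number
page_table = {0: 3, 1: 17, 7: 42}

def translate(virtual_addr):
    """Split a virtual address into (page number, offset) and translate it."""
    page = virtual_addr // PAGE_SIZE    # which virtual page
    offset = virtual_addr % PAGE_SIZE   # position within that page
    frame = page_table[page]            # the MMU's page-table lookup
    return frame * PAGE_SIZE + offset   # physical address
```

Because the page size is a power of two, real hardware performs this split with a simple shift and mask rather than division; the offset is carried through unchanged.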

Advantages of Paging:

  • No External Fragmentation: Since any page can be placed in any available frame, free memory never collects into unusable holes; the only waste is a little internal fragmentation inside each process’s last, partially filled page.
  • Isolation: Every process has its own page table. Process A’s “Address 0x100” maps to Physical Frame 50, while Process B’s “Address 0x100” maps to Physical Frame 90. They can never touch each other’s data.
  • Shared Libraries: If two processes use the same library (like msvcrt.dll or libc.so), the OS can map their virtual pages to the same physical frame to save space.

Paging Out and Swapping

What happens when you run out of RAM?

  1. The Page Fault: When a program tries to access a virtual page that isn’t in RAM, the MMU triggers a “Page Fault” interrupt.
  2. Swapping: The OS pauses the program, finds a physical frame that hasn’t been used lately, and writes its contents to the disk (the “Swap File” or “Page File”).
  3. Loading: The OS then reads the requested data from the disk into that now-free RAM frame.
  4. Resuming: The OS updates the Page Table and tells the program to try again.

This is why your computer slows down when you have too many tabs open: the system spends most of its time moving data between the fast RAM and the slow SSD/HDD, a condition known as thrashing.
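The fault–swap–load–resume cycle above can be simulated as a minimal Python sketch, assuming a toy machine with only two physical frames and a least-recently-used (LRU) eviction policy:

```python
from collections import OrderedDict

NUM_FRAMES = 2                         # toy machine: only two physical frames

resident = OrderedDict()               # virtual page -> frame, kept in LRU order
swap = {}                              # pages evicted to the "swap file"
free_frames = [0, 1]
faults = 0

def access(page):
    """Touch a virtual page, handling a page fault if it is not resident."""
    global faults
    if page in resident:               # already in RAM: refresh its LRU position
        resident.move_to_end(page)
        return resident[page]
    faults += 1                        # 1. page fault: the page is not in RAM
    if free_frames:
        frame = free_frames.pop()
    else:                              # 2. swap out the least recently used page
        victim, frame = resident.popitem(last=False)
        swap[victim] = "contents written to the swap file"
    resident[page] = frame             # 3. load the page into the freed frame
    return frame                       # 4. page table updated; program resumes

for p in [0, 1, 0, 2]:                 # the access to page 2 evicts page 1
    access(p)
```

Note that the earlier access to page 0 refreshes its LRU position, so page 1, not page 0, is the victim when page 2 arrives.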

Segmentation (The Historical Rival)

While paging uses fixed-size blocks, Segmentation uses variable-sized blocks based on the logic of the program (e.g., a “Code Segment,” a “Data Segment,” and a “Stack Segment”).

  • Advantage: It’s more aligned with how programmers think. You can set permissions on a segment (e.g., “the Code Segment is read-only”).
  • Disadvantage: It suffers from External Fragmentation.
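A segment table can be modeled as a small base/limit map with per-segment permissions; the segment names, sizes, and the `seg_translate` helper below are hypothetical:

```python
# Hypothetical segment table: name -> (base address, limit, writable)
segments = {
    "code":  (0x0000, 0x2000, False),  # read-only, as described above
    "data":  (0x2000, 0x1000, True),
    "stack": (0x3000, 0x1000, True),
}

def seg_translate(seg, offset, write=False):
    """Translate (segment, offset) to an address, enforcing bounds and permissions."""
    base, limit, writable = segments[seg]
    if offset >= limit:
        raise MemoryError("segmentation fault: offset out of bounds")
    if write and not writable:
        raise MemoryError("protection fault: segment is read-only")
    return base + offset
```

The bounds check is where the classic “segmentation fault” gets its name: any offset past the segment’s limit is rejected before it reaches memory.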

Modern Reality: Most modern systems (such as x86-64) use a flat memory model: segmentation is largely vestigial, and paging provides segment-like protections (read-only, no-execute) at the page level.

Memory Protection and Security

Memory management is also a security feature.

  • NX Bit (No-eXecute): The OS marks the “Data” and “Stack” pages as non-executable. This prevents hackers from performing “Buffer Overflow” attacks where they inject code into a data buffer and try to run it.
  • ASLR (Address Space Layout Randomization): The OS “shuffles” where the various parts of a program are loaded in virtual memory every time it starts. This makes it much harder for an attacker to predict where a specific function (like system()) is located.
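The ASLR idea can be illustrated with a toy Python model: each “run” picks fresh, page-aligned random base addresses for its regions. The address range and region names here are made up for illustration, not a real OS layout:

```python
import random

PAGE = 0x1000  # 4 KB alignment, matching the page size used throughout

def randomized_layout(seed=None):
    """Pick page-aligned random base addresses for each region (toy ASLR)."""
    rng = random.Random(seed)
    return {region: rng.randrange(0x10000, 0x7fff0000, PAGE)
            for region in ("code", "heap", "stack", "libc")}

# Two "runs" of the same program get different layouts:
run_a = randomized_layout(seed=1)
run_b = randomized_layout(seed=2)
```

An attacker who learned where `libc` sat during one run gains nothing for the next run, which is the entire point of the technique.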

Performance: The TLB

Looking up the Page Table for every single memory access would be too slow. To solve this, CPUs have a tiny, super-fast cache called the TLB (Translation Lookaside Buffer). It stores the most recent virtual-to-physical translations. A “TLB Hit” resolves in about a nanosecond, while a “TLB Miss” forces a walk of the page table in RAM, which can cost tens to hundreds of cycles.
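The TLB behaves like a small LRU cache sitting in front of the page table. A minimal Python model, assuming a hypothetical 4-entry TLB and a made-up page-to-frame mapping:

```python
from collections import OrderedDict

TLB_SIZE = 4
page_table = {p: p + 100 for p in range(64)}  # hypothetical page -> frame map

tlb = OrderedDict()                    # small LRU cache of recent translations
hits = misses = 0

def lookup(page):
    """Translate a page number, caching recent translations in the TLB."""
    global hits, misses
    if page in tlb:                    # TLB hit: fast path
        hits += 1
        tlb.move_to_end(page)
        return tlb[page]
    misses += 1                        # TLB miss: walk the page table in RAM
    frame = page_table[page]
    if len(tlb) >= TLB_SIZE:
        tlb.popitem(last=False)        # evict the least recently used entry
    tlb[page] = frame
    return frame

for p in [1, 2, 1, 1, 3]:             # locality: repeated pages hit in the TLB
    lookup(p)
```

Because real programs exhibit strong locality of reference, even a cache of a few dozen entries achieves hit rates well above 90%, which is why paging is affordable at all.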

Understanding memory management is the key to understanding why “8GB of RAM” might feel fast on one OS and slow on another. It’s not just about how much you have, but how intelligently the OS shuffles it.
