
© 2026 LIBREUNI PROJECT


Singular Value Decomposition (SVD)


The Singular Value Decomposition (SVD) is arguably the most important result in applied linear algebra. While eigendecomposition works only for special square matrices, SVD works for any matrix—tall, wide, square, singular, or non-singular. It provides a way to “see” the underlying structure of data by decomposing it into its most significant components.

The Geometric Idea

Every matrix A represents a linear map. SVD says that any such map can be broken down into three simple steps:

  1. A rotation (or reflection) in the input space (Vᵀ).
  2. A scaling along the principal axes (Σ).
  3. A rotation (or reflection) in the output space (U).

Mathematically: A = UΣVᵀ

  • V: Its columns are the “right singular vectors.” They form an orthonormal basis of the input space.
  • Σ: A diagonal matrix of “singular values” σ₁ ≥ σ₂ ≥ … ≥ σₙ ≥ 0. These tell you the “strength” or “gain” of the matrix in each direction.
  • U: Its columns are the “left singular vectors.” They form an orthonormal basis of the output space.
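The three factors can be computed and checked directly. A minimal sketch with NumPy’s `np.linalg.svd` (the matrix here is an arbitrary example, not from the text):

```python
import numpy as np

# A rectangular (tall) matrix: eigendecomposition does not apply, but SVD does.
A = np.array([[3.0, 1.0],
              [1.0, 3.0],
              [1.0, 1.0]])

# full_matrices=False gives the "economy" SVD: U is 3x2, S holds 2 values, Vt is 2x2.
U, S, Vt = np.linalg.svd(A, full_matrices=False)

# Singular values come back sorted in decreasing order: sigma_1 >= sigma_2 >= ... >= 0.
assert np.all(S[:-1] >= S[1:])

# Columns of U and rows of Vt are orthonormal.
assert np.allclose(U.T @ U, np.eye(2))
assert np.allclose(Vt @ Vt.T, np.eye(2))

# Rotate (Vt), scale (S), rotate (U) reproduces A exactly.
assert np.allclose(U @ np.diag(S) @ Vt, A)
print("singular values:", S)
```

Note that NumPy returns Vᵀ (here `Vt`) rather than V, so the reconstruction is `U @ np.diag(S) @ Vt` with no extra transpose.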

Data Compression: The Best Low-Rank Approximation

The real magic of SVD is the Eckart-Young Theorem. It states that if you want the best possible “summary” of a matrix A using only k dimensions (where k is less than the rank of A), the answer is to keep only the k largest singular values and their corresponding singular vectors.

Aₖ = σ₁u₁v₁ᵀ + σ₂u₂v₂ᵀ + ⋯ + σₖuₖvₖᵀ

This is how image compression and noise reduction work. By throwing away small singular values, we lose “noise” or “unimportant detail” but keep the overall structure.
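This truncation is easy to try numerically. A minimal sketch (the synthetic matrix and noise level are illustrative assumptions): we build a matrix that is essentially rank 2 plus small noise, then keep only the two largest singular values.

```python
import numpy as np

rng = np.random.default_rng(0)

# A matrix that is "really" rank 2, plus small additive noise.
B = rng.standard_normal((50, 2)) @ rng.standard_normal((2, 40))
A = B + 0.01 * rng.standard_normal((50, 40))

U, S, Vt = np.linalg.svd(A, full_matrices=False)

# Eckart-Young: keeping the k largest singular values gives the best
# rank-k approximation (in Frobenius and spectral norms).
k = 2
A_k = U[:, :k] @ np.diag(S[:k]) @ Vt[:k, :]

# The first two singular values dominate; the rest are noise-sized.
print("top singular values:", S[:4])
print("relative error of rank-2 approximation:",
      np.linalg.norm(A - A_k) / np.linalg.norm(A))
```

Discarding the small singular values here removes mostly noise, so the rank-2 matrix Aₖ stays very close to A while needing far fewer numbers to store.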


The Pseudoinverse: Solving the Solvable and Unsolvable

When a matrix A is not invertible (e.g., it is not square or is singular), we can still “solve” Ax = b using the Moore-Penrose Pseudoinverse A†.

Using SVD, the pseudoinverse is trivial to compute: A† = VΣ†Uᵀ, where Σ† is formed by transposing Σ and replacing every non-zero σᵢ with 1/σᵢ.

The solution x = A†b is the “best” solution in two senses:

  1. It minimizes the error ‖Ax − b‖² (Least Squares).
  2. If there are many such solutions, it picks the one with the smallest norm ‖x‖.
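The recipe A† = VΣ†Uᵀ translates directly to code. A minimal sketch for an overdetermined system (the specific A and b are made-up illustrative data), cross-checked against NumPy’s built-in `pinv` and `lstsq`:

```python
import numpy as np

# An overdetermined system: 4 equations, 2 unknowns -> no exact solution.
A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0],
              [1.0, 4.0]])
b = np.array([1.0, 2.0, 2.0, 4.0])

# Pseudoinverse via SVD: invert only the non-zero singular values.
U, S, Vt = np.linalg.svd(A, full_matrices=False)
S_dagger = np.where(S > 1e-12, 1.0 / S, 0.0)  # guard against sigma_i = 0
A_dagger = Vt.T @ np.diag(S_dagger) @ U.T

x = A_dagger @ b

# Matches NumPy's built-in pseudoinverse and its least-squares solver.
assert np.allclose(A_dagger, np.linalg.pinv(A))
assert np.allclose(x, np.linalg.lstsq(A, b, rcond=None)[0])
print("least-squares solution:", x)
```

The `1e-12` cutoff is a common practical choice (an assumption here, not a universal constant): treating tiny singular values as zero keeps 1/σᵢ from blowing up on nearly singular matrices.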

Exercises

In an SVD decomposition A = UΣVᵀ, what do the values in Σ represent?

If a matrix has singular values [100, 50, 0.01, 0.0001], which singular values should we keep for a good low-rank approximation?

What is the relationship between singular values and the eigenvalues of AᵀA?
