Algorithms / Approximation Algorithms
Least You Need to Know: Approximation Algorithms, Ratios, and Provable Near-Optimality
Approximation algorithms matter when exact optimization is computationally hard. The central ideas are the approximation ratio, how it differs for minimization and maximization, and how proofs compare the algorithm's output against an unknown optimum using a structural bound.
The least you need to know
- An `alpha`-approximation algorithm for a minimization problem returns a solution whose cost is at most `alpha` times the optimum, where `alpha >= 1`.
- For maximization, the guarantee is that the returned value is at least `OPT / alpha`, i.e., within a multiplicative factor `alpha` of the optimum.
- A proof of approximation quality never needs `OPT` itself; it compares the algorithm's solution to an efficiently computable lower bound on `OPT` (for minimization) or upper bound on `OPT` (for maximization).
- Greedy or relaxation-based methods often yield the approximation certificate.
- Approximation is valuable when exact optimization is NP-hard.
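The two guarantee directions above can be written as one-line checks. A minimal sketch (the function names are my own, chosen for illustration):

```python
def meets_min_guarantee(alg_cost, opt_cost, alpha):
    # Minimization: the algorithm's cost may exceed OPT,
    # but by at most a multiplicative factor of alpha.
    return alg_cost <= alpha * opt_cost

def meets_max_guarantee(alg_value, opt_value, alpha):
    # Maximization: the algorithm's value may fall short of OPT,
    # but never below OPT / alpha.
    return alg_value >= opt_value / alpha
```

Note the flipped inequality: for minimization the factor multiplies `OPT`, for maximization it divides `OPT`.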
Key notation
- `OPT`: the optimal objective value.
- `alpha`-approximation: an algorithm with a provable multiplicative-factor guarantee of `alpha`.
- ratio: how far the algorithm's answer can be from the optimum in the worst case.
Tiny worked example
- A common proof for **Vertex Cover** uses a maximal matching.
- Picking both endpoints of every matched edge gives a cover: if some edge were left uncovered, it could have been added to the matching, contradicting maximality.
- If the matching has `m` edges, those edges share no endpoints, so any vertex cover must use at least one distinct vertex per matched edge — at least `m` vertices in total — while the algorithm picks `2m`.
- Hence the algorithm's cover has size at most `2 * OPT`: a factor-2 approximation.
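The argument above translates directly into code. A minimal sketch (the function name and edge-list representation are my own): iterating over edges and taking any edge whose endpoints are both still free builds a maximal matching implicitly, and the chosen endpoints form the cover.

```python
def approx_vertex_cover(edges):
    """2-approximation for Vertex Cover via a greedy maximal matching.

    edges: iterable of (u, v) pairs. Any edge whose endpoints are both
    still uncovered joins the matching; both its endpoints join the cover.
    """
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            # Edge is in the maximal matching: take both endpoints.
            cover.add(u)
            cover.add(v)
    return cover
```

On the 4-cycle `[(0, 1), (1, 2), (2, 3), (3, 0)]` the matching is `(0, 1)` and `(2, 3)`, giving a cover of size 4 while `OPT = 2` — the factor-2 bound is tight for this algorithm.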
Common mistakes
- Students often flip the inequality direction between minimization and maximization.
- A heuristic is not automatically an approximation algorithm unless there is a proof.
- The ratio statement is about worst-case guarantee, not about average performance on friendly inputs.
How to recognize this kind of problem
- The prompt asks for a factor guarantee instead of an exact optimum.
- An NP-hard optimization problem is paired with a greedy or LP-based strategy.
- You are asked to compare an algorithm's answer to a bound on the optimum.