Greedy Algorithm
A greedy algorithm makes the locally optimal choice at each step — the decision that looks best right now — without considering future consequences. It's fast and simple, but only produces the globally optimal solution for problems with the greedy-choice property.
Explanation
A greedy algorithm never backtracks: once a choice is made, it's final. At each step, it picks the option that maximizes (or minimizes) some local criterion, hoping these local optima accumulate into a global optimum. This is a bet, and it only pays off for problems with the greedy-choice property, meaning a globally optimal solution can always be built by making locally optimal choices.

Problems where greedy works:

- Interval scheduling: maximize the number of non-overlapping intervals by always picking the one that ends earliest.
- Huffman coding: build an optimal prefix-free code by always merging the two least-frequent symbols.
- Dijkstra's shortest path: always process the unvisited node with the smallest known distance.
- Prim's and Kruskal's minimum spanning trees: always add the cheapest edge that doesn't create a cycle.
- The fractional knapsack problem: take items in order of value-to-weight ratio.

Problems where greedy fails:

- The 0/1 knapsack: you can't take a fraction of an item, so taking the highest value-to-weight item first doesn't always work.
- Coin change with arbitrary denominations: greedy works for standard US coins but fails for, say, denominations {1, 3, 4} when making 6. Greedy gives 4+1+1 = 3 coins, but optimal is 3+3 = 2 coins.
- The traveling salesman problem: always visiting the nearest unvisited city can produce tours far from optimal.

Proving a greedy algorithm correct requires either an exchange argument (show that swapping any non-greedy choice with the greedy choice improves or maintains the solution quality) or showing the problem has a matroid structure.
Code Example
```javascript
// Greedy: interval scheduling (maximum non-overlapping intervals)
// Sort by end time, always take the interval that ends earliest
function maxIntervals(intervals) {
  // Sort by end time (the greedy choice)
  intervals.sort((a, b) => a[1] - b[1]);
  let count = 0;
  let lastEnd = -Infinity;
  for (const [start, end] of intervals) {
    if (start >= lastEnd) { // non-overlapping with last chosen
      count++;
      lastEnd = end;
    }
  }
  return count;
}

console.log(maxIntervals([[1,3],[2,4],[3,5],[4,6]])); // 2
```
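The fractional knapsack mentioned above is another case where the greedy choice (highest value-to-weight ratio first) is provably optimal, because any fraction of an item can be taken. A minimal sketch; the item values, weights, and capacity below are illustrative, not from the text:

```javascript
// Fractional knapsack: greedy by value-to-weight ratio IS optimal here,
// because we may take any fraction of an item.
function fractionalKnapsack(items, capacity) {
  // Greedy choice: highest value per unit weight first
  const sorted = [...items].sort(
    (a, b) => b.value / b.weight - a.value / a.weight
  );
  let total = 0;
  for (const { value, weight } of sorted) {
    if (capacity <= 0) break;
    const take = Math.min(weight, capacity); // take as much as fits
    total += (value / weight) * take;
    capacity -= take;
  }
  return total;
}

console.log(fractionalKnapsack(
  [{ value: 60, weight: 10 }, { value: 100, weight: 20 }, { value: 120, weight: 30 }],
  50
)); // 240
```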
```javascript
// Greedy FAILS: coin change with {1,3,4}, target 6
// Greedy takes 4, then 1,1 = 3 coins
// Optimal: 3,3 = 2 coins
function greedyCoins(coins, amount) {
  coins.sort((a, b) => b - a); // largest first
  let count = 0;
  for (const coin of coins) {
    while (amount >= coin) { amount -= coin; count++; }
  }
  return amount === 0 ? count : -1;
}

console.log(greedyCoins([1,3,4], 6)); // 3 (wrong, optimal is 2)
```
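For contrast, here is a minimal dynamic-programming sketch that does find the true minimum for arbitrary denominations, by considering every coin at every amount rather than committing to the largest:

```javascript
// DP coin change: dp[a] = minimum coins needed to make amount a.
// Unlike greedy, this explores all choices, so it is correct for any denominations.
function dpCoins(coins, amount) {
  const dp = new Array(amount + 1).fill(Infinity);
  dp[0] = 0; // zero coins make amount 0
  for (let a = 1; a <= amount; a++) {
    for (const coin of coins) {
      if (coin <= a && dp[a - coin] + 1 < dp[a]) {
        dp[a] = dp[a - coin] + 1;
      }
    }
  }
  return dp[amount] === Infinity ? -1 : dp[amount];
}

console.log(dpCoins([1, 3, 4], 6)); // 2 (3 + 3)
```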
Why It Matters for Engineers
Greedy algorithms are fast and simple, which makes them attractive — and dangerous. The critical skill is knowing when greedy produces optimal results and when it doesn't. Many common bugs in optimization code come from applying greedy thinking to problems that actually require dynamic programming. Greedy algorithms also appear in system design: load balancers use greedy scheduling (assign each incoming request to the server with the fewest connections), caches use greedy eviction policies (LRU, LFU), and compilers use greedy register allocation. Understanding the conditions under which greedy succeeds builds the judgment to apply it correctly.
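The least-connections policy mentioned above is itself a greedy choice: each request goes to whichever server currently looks best. A minimal sketch; the server objects and names are illustrative, not from any real load balancer:

```javascript
// Least-connections load balancing: greedily pick the server with the
// fewest active connections for each incoming request.
function pickServer(servers) {
  // Greedy choice: minimum current connection count
  return servers.reduce((best, s) =>
    s.connections < best.connections ? s : best
  );
}

const servers = [
  { name: "a", connections: 3 },
  { name: "b", connections: 1 },
  { name: "c", connections: 2 },
];

const chosen = pickServer(servers);
chosen.connections++; // the request is now being served there
console.log(chosen.name); // "b"
```

Like all greedy policies, this one optimizes locally: it balances each request in isolation, which works well in practice but does not guarantee a globally optimal assignment over time.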