Time Complexity
Time complexity is a measure of how the number of operations an algorithm performs grows as a function of its input size, expressed using Big O notation to describe the worst-case growth rate.
Explanation
Time complexity analysis counts the fundamental operations an algorithm performs (comparisons, assignments, arithmetic) as a function of input size n. We express the count as a function of n, then simplify with Big O by dropping constants and lower-order terms; what remains is the complexity class.

Analyzing loops: a single loop that iterates n times is O(n). Two nested loops, each iterating n times, give O(n²). A loop that halves its remaining work each iteration (binary search's while loop) is O(log n). A function that recurses on half the input (merge sort's divide step) and does O(n) work (the merge step) at each of O(log n) levels is O(n log n).

Best case, worst case, and average case: time complexity can differ based on input. Quicksort is O(n log n) on average but O(n²) in the worst case. Binary search is O(1) in the best case (the first probe hits) and O(log n) in the worst. We usually care about the worst case for correctness guarantees and the average case for practical performance. Amortized analysis handles data structures where individual operations vary in cost: a dynamic array push is O(1) amortized because the occasional O(n) doubling is spread across n O(1) pushes.

Practical implications: for n = 1,000,000 elements, O(1) is 1 operation, O(log n) ≈ 20 operations, O(n) = 10^6, O(n log n) ≈ 2 × 10^7, and O(n²) = 10^12. At 10^9 operations per second, the O(n²) algorithm takes ~17 minutes versus ~0.02 seconds for O(n log n). Choosing the right algorithm is usually the single most impactful performance optimization.
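The halving pattern described above can be made concrete by counting loop iterations directly. This is a minimal sketch; countHalvingSteps is a hypothetical helper name used only for illustration, not a standard API.

```javascript
// Count how many times a loop runs when its range halves each iteration,
// as in binary search's while loop.
function countHalvingSteps(n) {
  let steps = 0;
  let size = n;
  while (size > 1) {
    size = Math.floor(size / 2); // the remaining range halves every pass
    steps++;
  }
  return steps; // ≈ log2(n)
}

// For n = 1,000,000 the loop runs 19 times, matching the
// "O(log n) ≈ 20 ops" figure above.
console.log(countHalvingSteps(1_000_000)); // 19
```

Doubling n adds only one more iteration, which is exactly what O(log n) growth means in practice.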
Code Example
// Determining time complexity: step by step

// O(n²) — nested loops, both depending on n
function pairsSum(arr, target) {
  for (let i = 0; i < arr.length; i++) {       // n iterations
    for (let j = i + 1; j < arr.length; j++) { // up to n iterations each
      if (arr[i] + arr[j] === target) return [i, j];
    }
  }
  return null;
} // O(n²)
// O(n) — same problem with a hash map
function pairsSumFast(arr, target) {
  const seen = new Map();
  for (let i = 0; i < arr.length; i++) { // n iterations
    const complement = target - arr[i];
    if (seen.has(complement)) return [seen.get(complement), i];
    seen.set(arr[i], i); // O(1) average per lookup/insert
  }
  return null;
} // O(n) time — trading O(n) extra space for the speedup
// O(n log n) — sort then two-pointer scan
function pairsSumSorted(arr, target) {
  const sorted = [...arr].sort((a, b) => a - b); // O(n log n); copy to avoid mutating the input
  let lo = 0, hi = sorted.length - 1;
  while (lo < hi) { // O(n) — pointers only move toward each other
    const sum = sorted[lo] + sorted[hi];
    if (sum === target) return [lo, hi]; // note: indices refer to the sorted copy
    if (sum < target) lo++;
    else hi--;
  }
  return null;
} // O(n log n) overall, dominated by the sort
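The amortized O(1) push mentioned in the Explanation can also be demonstrated by counting copies across doublings. DynArray below is a hypothetical illustration, not JavaScript's built-in Array (which manages its own growth internally).

```javascript
// Sketch of amortized analysis: count element copies made by a doubling
// dynamic array across n pushes.
class DynArray {
  constructor() {
    this.data = new Array(1);
    this.length = 0;
    this.copies = 0; // total elements moved during resizes
  }
  push(x) {
    if (this.length === this.data.length) {
      const bigger = new Array(this.data.length * 2); // O(n) doubling
      for (let i = 0; i < this.length; i++) bigger[i] = this.data[i];
      this.copies += this.length;
      this.data = bigger;
    }
    this.data[this.length++] = x; // O(1) in the common case
  }
}

const a = new DynArray();
for (let i = 0; i < 1024; i++) a.push(i);
// Doublings happen at sizes 1, 2, 4, ..., 512, so total copies are
// 1 + 2 + ... + 512 = 1023 < 2n: the amortized cost per push is O(1).
console.log(a.copies); // 1023
```

The occasional expensive push is paid for by the cheap pushes around it, which is the sense in which "O(1) amortized" differs from "O(1) every time."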
Why It Matters for Engineers
Time complexity analysis is the tool that lets you choose between algorithms before you waste time implementing the wrong one. The "two-sum" problem above is a perfect example: O(n²), O(n log n), and O(n) solutions all exist. The O(n) hash map solution is clearly best for large inputs, but you need time complexity knowledge to see why. In code reviews, identifying "this loop is O(n²) — here's why and how to fix it" is a high-value contribution. It prevents performance bugs from reaching production and demonstrates senior-level engineering judgment. This is a skill that directly translates to higher-quality code reviews and architectural decisions.
Related Terms
Big O Notation · Space Complexity · Binary Search · Dynamic Programming
Learn This In Practice
Go deeper with the full module on Beyond Vibe Code.
Algorithms Fundamentals →