Doubly Linked List
A doubly linked list is a linked list in which each node has two pointers: next (to the following node) and prev (to the preceding node), enabling traversal in both directions and O(1) deletion given only a reference to the node.
Explanation
In a singly linked list, you can only traverse forward. If you need to delete a node and hold only a reference to it (not its predecessor), you must walk from the head to find the predecessor — O(n). A doubly linked list solves this by giving every node a prev pointer, making deletion O(1) given just the node reference. With head and tail pointers maintained, it also supports O(1) insertion and removal at both ends (removal at the tail requires prev, which a singly linked list lacks) and O(1) backward traversal. These properties make doubly linked lists the standard implementation for deques (double-ended queues), LRU caches, and browser history.

The LRU (Least Recently Used) cache is the canonical doubly linked list application: you maintain a doubly linked list ordered by recency (most recently used at the head, least recently used at the tail) combined with a hash map from key to node. On get, you move the accessed node to the head in O(1). On put, if capacity is exceeded, you remove the tail node (the least recently used) in O(1). The hash map gives O(1) lookup; the doubly linked list gives O(1) reordering.

JavaScript's built-in Map preserves insertion order and can simulate an LRU cache (delete and re-insert a key to mark it as most recently used), which is why raw doubly linked lists are rarely written in JavaScript application code — but understanding the underlying structure explains why Map's insertion-order semantics make it suitable for this.
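The Map-based simulation mentioned above can be sketched as follows. This is an illustrative sketch, not code from the article; the class and method names are assumptions chosen to mirror the standard LRU interface.

```javascript
// Hypothetical Map-based LRU sketch: Map iterates keys in insertion order,
// so the first key is always the least recently used.
class MapLRU {
  constructor(capacity) {
    this.capacity = capacity;
    this.map = new Map();
  }

  get(key) {
    if (!this.map.has(key)) return -1;
    const val = this.map.get(key);
    // delete + re-insert moves the key to the "most recent" end
    this.map.delete(key);
    this.map.set(key, val);
    return val;
  }

  put(key, val) {
    if (this.map.has(key)) this.map.delete(key);
    this.map.set(key, val);
    if (this.map.size > this.capacity) {
      // first key in iteration order is the least recently used
      const oldest = this.map.keys().next().value;
      this.map.delete(oldest);
    }
  }
}
```

Note that Map's delete and set are O(1) on average, so this matches the asymptotic behavior of the doubly-linked-list version while being far less code.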
Code Example
```javascript
// Doubly linked list node + LRU Cache
class DLLNode {
  constructor(key, val) {
    this.key = key;
    this.val = val;
    this.prev = null;
    this.next = null;
  }
}

class LRUCache {
  constructor(capacity) {
    this.capacity = capacity;
    this.map = new Map();
    // sentinel head and tail simplify edge cases
    this.head = new DLLNode(0, 0);
    this.tail = new DLLNode(0, 0);
    this.head.next = this.tail;
    this.tail.prev = this.head;
  }

  // unlink a node from the list in O(1)
  #remove(node) {
    node.prev.next = node.next;
    node.next.prev = node.prev;
  }

  // splice a node in right after the sentinel head (most recently used)
  #insertFront(node) {
    node.next = this.head.next;
    node.prev = this.head;
    this.head.next.prev = node;
    this.head.next = node;
  }

  get(key) {
    if (!this.map.has(key)) return -1;
    const node = this.map.get(key);
    this.#remove(node);
    this.#insertFront(node);
    return node.val;
  }

  put(key, val) {
    if (this.map.has(key)) this.#remove(this.map.get(key));
    const node = new DLLNode(key, val);
    this.map.set(key, node);
    this.#insertFront(node);
    if (this.map.size > this.capacity) {
      const lru = this.tail.prev; // least recently used
      this.#remove(lru);
      this.map.delete(lru.key);
    }
  }
}
```
Why It Matters for Engineers
The LRU cache is one of the most common system design patterns: browser caches, CDN edge caches, database buffer pools, and CPU caches all use LRU or LRU-adjacent eviction policies. Understanding the doubly linked list + hash map combination that makes LRU O(1) explains why these caches can look up, reorder, and evict in constant time. Doubly linked list problems (reverse a DLL, flatten a multilevel DLL, implement an LRU cache) appear regularly in technical interviews at top companies. The LRU cache in particular is considered a "medium-hard" problem that tests both data structure knowledge and pointer manipulation skill.
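The "reverse a DLL" problem mentioned above has a short pointer-swapping solution. This is a sketch under assumed names (Node, reverse); the key idea is that swapping prev and next on every node reverses the list in one pass.

```javascript
// Minimal node for illustration; the names here are assumptions, not a library API
class Node {
  constructor(val) {
    this.val = val;
    this.prev = null;
    this.next = null;
  }
}

// Reverse a doubly linked list by swapping prev/next on every node.
// Returns the new head (the old tail). O(n) time, O(1) extra space.
function reverse(head) {
  let node = head;
  let newHead = head;
  while (node !== null) {
    [node.prev, node.next] = [node.next, node.prev]; // swap the two pointers
    newHead = node;
    node = node.prev; // prev now points at the node that was originally next
  }
  return newHead;
}
```

After the swap, advancing via node.prev walks the original forward direction, which is why the loop still visits every node exactly once.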
Related Terms
Linked List · Hash Table · Queue · Cache
Learn This In Practice
Go deeper with the full module on Beyond Vibe Code.
Data Structures Fundamentals →