
146. LRU Cache

Medium · Design · Hash Table · Linked List · Doubly-Linked List

Problem Description

You need to design a data structure that implements an LRU (Least Recently Used) cache. An LRU cache has a fixed capacity and removes the least recently used item when it needs to make space for a new item.

The cache should support two operations:

  1. get(key): Retrieve the value associated with a given key

    • If the key exists in the cache, return its value
    • If the key doesn't exist, return -1
    • When you access a key, it becomes the most recently used item
  2. put(key, value): Insert or update a key-value pair

    • If the key already exists, update its value
    • If the key doesn't exist, add the new key-value pair
    • When the cache reaches its capacity limit, remove the least recently used item before adding the new one
    • Both inserting and updating make the key the most recently used item

Important constraint: Both get and put operations must run in O(1) average time complexity.

The key insight is that "recently used" means any operation (get or put) on an item makes it the most recent. Items that haven't been accessed for the longest time are considered least recently used and will be evicted first when the cache is full.

For example, if you have a cache with capacity 2:

  • put(1, 1) → cache: {1=1}
  • put(2, 2) → cache: {1=1, 2=2}
  • get(1) returns 1 and makes key 1 most recent
  • put(3, 3) → evicts key 2 (least recent), cache: {1=1, 3=3}

Intuition

To achieve O(1) time complexity for both operations, we need to think about what data structures give us constant-time access and updates.

A hash table immediately comes to mind for O(1) lookups - we can store key-value pairs and retrieve them instantly. However, a hash table alone doesn't maintain any ordering information. We need to track which items were used recently and which weren't.

For maintaining order, we could consider using an array or list where the most recently used item is at one end and the least recently used is at the other. But here's the problem: when we access an item in the middle, we'd need to move it to the front, which takes O(n) time in an array.

This is where the doubly linked list becomes crucial. With a doubly linked list:

  • We can remove a node from anywhere in the list in O(1) time (if we have a reference to the node)
  • We can add a node to the head or tail in O(1) time
  • The order of nodes naturally represents the usage order

The key insight is combining these two data structures:

  • Hash table: Maps keys to node references in the linked list, giving us O(1) access to any node
  • Doubly linked list: Maintains the usage order, with the head being most recently used and tail being least recently used

When we get or put an existing key, we:

  1. Find the node instantly via the hash table (O(1))
  2. Remove it from its current position in the list (O(1))
  3. Add it to the head of the list (O(1))

When the cache is full and we need to evict, we simply remove the tail node (O(1)).

The dummy head and tail nodes in the implementation are a clever trick to avoid null checks and edge cases when the list is empty or has only one element. They act as sentinels, making the add and remove operations uniform regardless of the list state.
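
The sentinel idea can be seen in isolation. Below is a minimal sketch (class names here are illustrative, not part of the solution later in this article) of a doubly linked list framed by dummy head and tail nodes, where both remove and add-to-head are unconditional pointer updates with no null checks:

```python
class _Node:
    def __init__(self, key=0, value=0):
        self.key, self.value = key, value
        self.prev = self.next = None

class _SentinelList:
    """Doubly linked list framed by dummy head/tail sentinels."""
    def __init__(self):
        self.head, self.tail = _Node(), _Node()
        self.head.next = self.tail
        self.tail.prev = self.head

    def remove(self, node):
        # No null checks needed: every real node has both neighbors.
        node.prev.next = node.next
        node.next.prev = node.prev

    def add_to_head(self, node):
        node.next = self.head.next
        node.prev = self.head
        self.head.next = node
        node.next.prev = node

lst = _SentinelList()
a, b = _Node(1, 10), _Node(2, 20)
lst.add_to_head(a)   # list: a
lst.add_to_head(b)   # list: b, a
lst.remove(a)        # works even though a is the last real node
lst.add_to_head(a)   # list: a, b
```

Because the sentinels are always present, `remove` and `add_to_head` behave identically whether the list holds zero, one, or many real nodes.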

Solution Approach

The implementation uses a hash table combined with a doubly linked list to achieve O(1) operations.

Data Structures:

  1. Node Class: Represents each cache entry with:

    • key and val: Store the key-value pair
    • prev and next: Pointers to maintain the doubly linked list
  2. LRUCache Class maintains:

    • cache: A hash table (dictionary) mapping keys to their corresponding nodes
    • head and tail: Dummy sentinel nodes to simplify list operations
    • size: Current number of items in the cache
    • capacity: Maximum allowed items

Key Helper Methods:

  1. remove_node(node): Removes a node from its current position in the list

    node.prev.next = node.next
    node.next.prev = node.prev

    This bypasses the node by connecting its neighbors directly to each other.

  2. add_to_head(node): Adds a node right after the dummy head

    node.next = self.head.next
    node.prev = self.head
    self.head.next = node
    node.next.prev = node

    This inserts the node between the head and the first actual node.

Main Operations:

  1. get(key):

    • Check if key exists in the hash table
    • If not found, return -1
    • If found, move the node to the head (marking it as most recently used):
      • Remove it from current position
      • Add it to the head
    • Return the node's value
  2. put(key, value):

    • If key exists: Update the existing node
      • Remove it from current position
      • Update its value
      • Move it to the head
    • If key doesn't exist: Create a new node
      • Create new node with the key-value pair
      • Add it to the hash table
      • Add it to the head of the list
      • Increment size
      • If capacity exceeded: Evict the LRU item
        • The LRU item is at tail.prev (the node just before the dummy tail)
        • Remove it from the hash table
        • Remove it from the linked list
        • Decrement size

The dummy head and tail nodes eliminate edge cases:

  • The list is never truly empty (always has head and tail)
  • No null checks needed when adding or removing nodes
  • The actual cache items exist between head and tail

This design ensures all operations maintain O(1) time complexity while keeping track of usage order efficiently.


Example Walkthrough

Let's walk through a concrete example with an LRU cache of capacity 2 to see how the data structure works internally.

Initial State:

  • Cache capacity: 2
  • Hash table: {} (empty)
  • Linked list: [head] ↔ [tail] (only dummy nodes)

Operation 1: put(1, 10)

  • Create new node with key=1, value=10
  • Add to hash table: {1: node1}
  • Insert node after head: [head] ↔ [1,10] ↔ [tail]
  • Size becomes 1

Operation 2: put(2, 20)

  • Create new node with key=2, value=20
  • Add to hash table: {1: node1, 2: node2}
  • Insert node after head: [head] ↔ [2,20] ↔ [1,10] ↔ [tail]
  • Size becomes 2
  • Note: Node 2 is now most recently used (closest to head)

Operation 3: get(1) → returns 10

  • Look up key 1 in hash table - found!
  • Remove node1 from its current position: [head] ↔ [2,20] ↔ [tail]
  • Add node1 after head: [head] ↔ [1,10] ↔ [2,20] ↔ [tail]
  • Return value 10
  • Note: Node 1 is now most recently used

Operation 4: put(3, 30) (capacity exceeded!)

  • Create new node with key=3, value=30
  • Add to hash table: {1: node1, 2: node2, 3: node3}
  • Insert node3 after head: [head] ↔ [3,30] ↔ [1,10] ↔ [2,20] ↔ [tail]
  • Size becomes 3, which exceeds capacity 2 → need to evict
  • LRU item is at tail.prev, which is node2 (key=2)
  • Remove node2 from hash table: {1: node1, 3: node3}
  • Remove node2 from list: [head] ↔ [3,30] ↔ [1,10] ↔ [tail]
  • Size back to 2

Operation 5: get(2) → returns -1

  • Look up key 2 in hash table - not found!
  • Return -1 (key 2 was evicted earlier)

Final State:

  • Hash table: {1: node1, 3: node3}
  • Linked list: [head] ↔ [3,30] ↔ [1,10] ↔ [tail]
  • Most recently used: key 3
  • Least recently used: key 1

This example demonstrates how:

  • The hash table provides O(1) lookups
  • The linked list maintains usage order (head = most recent, tail = least recent)
  • Every access (get or put) moves items to the head
  • Eviction always happens at the tail when capacity is exceeded
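
This trace can be checked mechanically. The sketch below is a condensed version of the hash-map-plus-doubly-linked-list design (simplified, hypothetical names; the full documented implementation appears in the next section) that replays the five operations:

```python
class _Node:
    def __init__(self, key=0, value=0):
        self.key, self.value = key, value
        self.prev = self.next = None

class MiniLRU:
    """Condensed LRU cache: hash map + doubly linked list with sentinels."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.cache = {}
        self.head, self.tail = _Node(), _Node()   # dummy sentinels
        self.head.next, self.tail.prev = self.tail, self.head

    def _remove(self, node):
        node.prev.next, node.next.prev = node.next, node.prev

    def _add_to_head(self, node):
        node.next, node.prev = self.head.next, self.head
        self.head.next = node
        node.next.prev = node

    def get(self, key):
        if key not in self.cache:
            return -1
        node = self.cache[key]
        self._remove(node)
        self._add_to_head(node)
        return node.value

    def put(self, key, value):
        if key in self.cache:
            node = self.cache[key]
            self._remove(node)
            node.value = value
            self._add_to_head(node)
            return
        node = _Node(key, value)
        self.cache[key] = node
        self._add_to_head(node)
        if len(self.cache) > self.capacity:
            lru = self.tail.prev
            self.cache.pop(lru.key)
            self._remove(lru)

# Replay the five operations from the walkthrough
c = MiniLRU(2)
c.put(1, 10)
c.put(2, 20)
assert c.get(1) == 10    # Operation 3
c.put(3, 30)             # Operation 4: evicts key 2
assert c.get(2) == -1    # Operation 5
assert c.head.next.key == 3 and c.tail.prev.key == 1   # final order
```

The final assertions match the final state above: key 3 is most recently used (next to head), key 1 least recently used (next to tail).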

Solution Implementation

Python:

class Node:
    """Doubly linked list node for LRU Cache implementation."""

    def __init__(self, key: int = 0, value: int = 0) -> None:
        """
        Initialize a node with key-value pair.

        Args:
            key: The key for cache lookup
            value: The value associated with the key
        """
        self.key = key
        self.value = value
        self.prev: Node | None = None
        self.next: Node | None = None


class LRUCache:
    """
    Least Recently Used (LRU) Cache implementation.

    Uses a combination of HashMap and Doubly Linked List for O(1) operations.
    Most recently used items are kept at the head of the list.
    Least recently used items are at the tail and get evicted when capacity is exceeded.
    """

    def __init__(self, capacity: int) -> None:
        """
        Initialize the LRU cache with given capacity.

        Args:
            capacity: Maximum number of key-value pairs the cache can hold
        """
        self.capacity = capacity
        self.size = 0
        self.cache: dict[int, Node] = {}  # HashMap for O(1) lookup

        # Create dummy head and tail nodes to simplify edge cases
        self.head = Node()  # Dummy head node
        self.tail = Node()  # Dummy tail node

        # Connect dummy nodes
        self.head.next = self.tail
        self.tail.prev = self.head

    def get(self, key: int) -> int:
        """
        Get the value of the key if it exists in the cache.
        Move the accessed node to head (mark as most recently used).

        Args:
            key: The key to look up

        Returns:
            The value associated with the key, or -1 if not found
        """
        if key not in self.cache:
            return -1

        # Move the accessed node to head (most recently used)
        node = self.cache[key]
        self._remove_node(node)
        self._add_to_head(node)

        return node.value

    def put(self, key: int, value: int) -> None:
        """
        Add or update a key-value pair in the cache.
        If key exists, update its value and move to head.
        If adding new key exceeds capacity, remove least recently used item.

        Args:
            key: The key to add or update
            value: The value to associate with the key
        """
        if key in self.cache:
            # Update existing node
            node = self.cache[key]
            self._remove_node(node)
            node.value = value
            self._add_to_head(node)
        else:
            # Create new node
            new_node = Node(key, value)
            self.cache[key] = new_node
            self._add_to_head(new_node)
            self.size += 1

            # Check if capacity is exceeded
            if self.size > self.capacity:
                # Remove least recently used node (tail.prev)
                lru_node = self.tail.prev
                self.cache.pop(lru_node.key)
                self._remove_node(lru_node)
                self.size -= 1

    def _remove_node(self, node: Node) -> None:
        """
        Remove a node from its current position in the doubly linked list.

        Args:
            node: The node to remove from the list
        """
        # Connect the previous and next nodes directly
        node.prev.next = node.next
        node.next.prev = node.prev

    def _add_to_head(self, node: Node) -> None:
        """
        Add a node right after the dummy head (mark as most recently used).

        Args:
            node: The node to add to the head of the list
        """
        # Insert node between head and head.next
        node.next = self.head.next
        node.prev = self.head
        self.head.next = node
        node.next.prev = node


# Your LRUCache object will be instantiated and called as such:
# obj = LRUCache(capacity)
# param_1 = obj.get(key)
# obj.put(key,value)

Java:

import java.util.HashMap;
import java.util.Map;

/**
 * Node class for doubly linked list implementation
 * Each node stores a key-value pair and pointers to previous and next nodes
 */
class Node {
    int key;
    int value;
    Node previous;
    Node next;

    /**
     * Default constructor for sentinel nodes
     */
    Node() {
    }

    /**
     * Constructor with key and value parameters
     * @param key   The key for this node
     * @param value The value associated with the key
     */
    Node(int key, int value) {
        this.key = key;
        this.value = value;
    }
}

/**
 * LRU (Least Recently Used) Cache implementation
 * Uses a HashMap for O(1) lookup and a doubly linked list for O(1) removal/addition
 * Most recently used items are kept at the head, least recently used at the tail
 */
class LRUCache {
    private int currentSize;
    private int maxCapacity;
    private Node dummyHead;  // Sentinel node for head
    private Node dummyTail;  // Sentinel node for tail
    private Map<Integer, Node> cacheMap;  // HashMap for O(1) access to nodes

    /**
     * Initialize the LRU cache with given capacity
     * @param capacity Maximum number of key-value pairs the cache can hold
     */
    public LRUCache(int capacity) {
        this.maxCapacity = capacity;
        this.currentSize = 0;

        // Initialize sentinel nodes
        this.dummyHead = new Node();
        this.dummyTail = new Node();

        // Connect sentinel nodes
        dummyHead.next = dummyTail;
        dummyTail.previous = dummyHead;

        // Initialize the cache map
        this.cacheMap = new HashMap<>();
    }

    /**
     * Get the value associated with the key
     * Move the accessed node to head (mark as most recently used)
     * @param key The key to look up
     * @return The value if key exists, -1 otherwise
     */
    public int get(int key) {
        // Check if key exists in cache
        if (!cacheMap.containsKey(key)) {
            return -1;
        }

        // Get the node from cache
        Node targetNode = cacheMap.get(key);

        // Move node to head (mark as most recently used)
        removeNodeFromList(targetNode);
        addNodeToHead(targetNode);

        return targetNode.value;
    }

    /**
     * Add or update a key-value pair in the cache
     * @param key   The key to insert or update
     * @param value The value to associate with the key
     */
    public void put(int key, int value) {
        // Case 1: Key already exists - update value and move to head
        if (cacheMap.containsKey(key)) {
            Node existingNode = cacheMap.get(key);

            // Remove from current position
            removeNodeFromList(existingNode);

            // Update value
            existingNode.value = value;

            // Move to head (mark as most recently used)
            addNodeToHead(existingNode);
        }
        // Case 2: New key - create new node and add to cache
        else {
            // Create new node
            Node newNode = new Node(key, value);

            // Add to cache map
            cacheMap.put(key, newNode);

            // Add to head of linked list
            addNodeToHead(newNode);

            // Increment size
            currentSize++;

            // Check if capacity exceeded
            if (currentSize > maxCapacity) {
                // Remove least recently used node (from tail)
                Node lruNode = dummyTail.previous;

                // Remove from cache map
                cacheMap.remove(lruNode.key);

                // Remove from linked list
                removeNodeFromList(lruNode);

                // Decrement size
                currentSize--;
            }
        }
    }

    /**
     * Remove a node from its current position in the doubly linked list
     * @param node The node to remove
     */
    private void removeNodeFromList(Node node) {
        // Update previous node's next pointer
        node.previous.next = node.next;

        // Update next node's previous pointer
        node.next.previous = node.previous;
    }

    /**
     * Add a node right after the dummy head (mark as most recently used)
     * @param node The node to add at head
     */
    private void addNodeToHead(Node node) {
        // Set node's pointers
        node.next = dummyHead.next;
        node.previous = dummyHead;

        // Update dummy head's next node
        dummyHead.next = node;

        // Update the previous pointer of what was the first node
        node.next.previous = node;
    }
}

/**
 * Your LRUCache object will be instantiated and called as such:
 * LRUCache obj = new LRUCache(capacity);
 * int param_1 = obj.get(key);
 * obj.put(key,value);
 */

C++:

#include <unordered_map>
using namespace std;

class LRUCache {
private:
    // Doubly linked list node structure
    struct Node {
        int key;
        int value;
        Node* prev;
        Node* next;

        Node(int k, int v) : key(k), value(v), prev(nullptr), next(nullptr) {}
    };

    int currentSize;                           // Current number of elements in cache
    int maxCapacity;                           // Maximum capacity of cache
    Node* dummyHead;                           // Dummy head node for easier list manipulation
    Node* dummyTail;                           // Dummy tail node for easier list manipulation
    unordered_map<int, Node*> keyToNode;       // Hash map for O(1) key lookup

    // Remove a node from its current position in the doubly linked list
    void removeNode(Node* node) {
        node->prev->next = node->next;
        node->next->prev = node->prev;
    }

    // Add a node right after the dummy head (making it the most recently used)
    void addToHead(Node* node) {
        node->next = dummyHead->next;
        node->prev = dummyHead;
        dummyHead->next->prev = node;
        dummyHead->next = node;
    }

public:
    // Initialize LRU cache with given capacity
    LRUCache(int capacity) : currentSize(0), maxCapacity(capacity) {
        // Create dummy head and tail nodes to simplify edge cases
        dummyHead = new Node(0, 0);
        dummyTail = new Node(0, 0);
        dummyHead->next = dummyTail;
        dummyTail->prev = dummyHead;
    }

    // Free every remaining node, including the sentinels
    ~LRUCache() {
        Node* current = dummyHead;
        while (current != nullptr) {
            Node* nextNode = current->next;
            delete current;
            current = nextNode;
        }
    }

    // Get value for given key, return -1 if not found
    // Move accessed node to head (mark as most recently used)
    int get(int key) {
        // Check if key exists in cache
        if (keyToNode.find(key) == keyToNode.end()) {
            return -1;
        }

        // Move node to head (mark as most recently used)
        Node* node = keyToNode[key];
        removeNode(node);
        addToHead(node);

        return node->value;
    }

    // Insert or update key-value pair in cache
    void put(int key, int value) {
        // Case 1: Key already exists - update value and move to head
        if (keyToNode.find(key) != keyToNode.end()) {
            Node* node = keyToNode[key];
            removeNode(node);
            node->value = value;
            addToHead(node);
        }
        // Case 2: New key - create new node and add to cache
        else {
            Node* newNode = new Node(key, value);
            keyToNode[key] = newNode;
            addToHead(newNode);
            currentSize++;

            // If capacity exceeded, remove least recently used node (tail)
            if (currentSize > maxCapacity) {
                Node* lruNode = dummyTail->prev;
                keyToNode.erase(lruNode->key);
                removeNode(lruNode);
                delete lruNode;  // Free memory
                currentSize--;
            }
        }
    }
};

/**
 * Your LRUCache object will be instantiated and called as such:
 * LRUCache* obj = new LRUCache(capacity);
 * int param_1 = obj->get(key);
 * obj->put(key,value);
 */

TypeScript:

// Node structure for doubly linked list
interface Node {
    key: number;
    val: number;
    prev: Node | null;
    next: Node | null;
}

// Global variables for LRU cache
let size: number;
let capacity: number;
let head: Node;
let tail: Node;
let cache: Map<number, Node>;

// Initialize the LRU cache with given capacity
function LRUCache(cap: number): void {
    size = 0;
    capacity = cap;

    // Create dummy head and tail nodes for easier manipulation
    head = { key: 0, val: 0, prev: null, next: null };
    tail = { key: 0, val: 0, prev: null, next: null };

    // Connect head and tail
    head.next = tail;
    tail.prev = head;

    // Initialize hash map for O(1) access
    cache = new Map<number, Node>();
}

// Get value of the key if it exists, otherwise return -1
function get(key: number): number {
    // Key doesn't exist in cache
    if (!cache.has(key)) {
        return -1;
    }

    // Key exists - move node to front (most recently used)
    const node = cache.get(key)!;
    removeNode(node);
    addToHead(node);

    return node.val;
}

// Update value if key exists, otherwise add new key-value pair
function put(key: number, value: number): void {
    if (cache.has(key)) {
        // Update existing key - move to front
        const node = cache.get(key)!;
        removeNode(node);
        node.val = value;
        addToHead(node);
    } else {
        // Add new key-value pair
        const newNode: Node = {
            key: key,
            val: value,
            prev: null,
            next: null
        };

        cache.set(key, newNode);
        addToHead(newNode);
        size++;

        // Remove least recently used item if capacity exceeded
        if (size > capacity) {
            const lruNode = tail.prev!;
            cache.delete(lruNode.key);
            removeNode(lruNode);
            size--;
        }
    }
}

// Remove node from its current position in the linked list
function removeNode(node: Node): void {
    if (!node) return;

    // Update pointers to bypass this node
    node.prev!.next = node.next;
    node.next!.prev = node.prev;
}

// Add node right after the head (most recently used position)
function addToHead(node: Node): void {
    // Insert node between head and head.next
    node.next = head.next;
    node.prev = head;
    head.next!.prev = node;
    head.next = node;
}

/**
 * Your LRUCache object will be instantiated and called as such:
 * LRUCache(capacity)
 * var param_1 = get(key)
 * put(key,value)
 */


Time and Space Complexity

Time Complexity: O(1) for both get and put operations.

  • The get operation performs the following steps:

    • Checking if key exists in the cache dictionary: O(1)
    • Removing the node from its current position: O(1) (updating 2 pointers)
    • Adding the node to the head: O(1) (updating 4 pointers)
  • The put operation performs the following steps:

    • Checking if key exists in the cache dictionary: O(1)
    • If key exists: remove node O(1), update value, add to head O(1)
    • If key doesn't exist: create new node, add to dictionary O(1), add to head O(1)
    • If capacity exceeded: remove from dictionary O(1), remove tail node O(1)

All operations involve only pointer manipulations and dictionary operations, which are constant time.

Space Complexity: O(capacity)

  • The cache dictionary stores at most capacity key-node pairs
  • The doubly linked list contains at most capacity nodes (plus 2 sentinel nodes)
  • Each node stores a constant amount of data (key, value, and two pointers)
  • Total space used is proportional to the capacity of the cache
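
As a sanity check on these bounds, Python's standard library offers the same asymptotic behavior out of the box: collections.OrderedDict maintains insertion order and supports O(1) move_to_end and popitem (internally it too uses a doubly linked list). An LRU cache can therefore also be sketched as follows; interviewers usually expect the explicit hash-map-plus-linked-list design above, so treat this as a reference point rather than the intended answer:

```python
from collections import OrderedDict

class LRUCache:
    """LRU cache built on OrderedDict: last item = most recently used."""
    def __init__(self, capacity: int) -> None:
        self.capacity = capacity
        self.data: OrderedDict[int, int] = OrderedDict()

    def get(self, key: int) -> int:
        if key not in self.data:
            return -1
        self.data.move_to_end(key)          # mark as most recently used
        return self.data[key]

    def put(self, key: int, value: int) -> None:
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)   # evict least recently used

cache = LRUCache(2)
cache.put(1, 1)
cache.put(2, 2)
assert cache.get(1) == 1
cache.put(3, 3)                             # evicts key 2
assert cache.get(2) == -1
```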

Common Pitfalls

1. Forgetting to Update the Hash Map When Evicting LRU Item

One of the most common mistakes is removing the LRU node from the linked list but forgetting to remove it from the hash map. This leads to the hash map containing stale references to nodes that no longer exist in the list.

Incorrect Implementation:

def put(self, key: int, value: int) -> None:
    if key not in self.cache:
        new_node = Node(key, value)
        self.cache[key] = new_node
        self._add_to_head(new_node)
        self.size += 1
      
        if self.size > self.capacity:
            lru_node = self.tail.prev
            self._remove_node(lru_node)  # Only removes from list
            # FORGOT: self.cache.pop(lru_node.key)
            self.size -= 1

Why it's problematic:

  • The hash map will still contain the evicted key
  • Future get() calls might return nodes that aren't in the list
  • Memory leak as the hash map grows unbounded
  • Breaks the invariant that hash map size equals list size

Correct Solution: Always remove from both data structures when evicting:

if self.size > self.capacity:
    lru_node = self.tail.prev
    self.cache.pop(lru_node.key)  # Remove from hash map FIRST
    self._remove_node(lru_node)   # Then remove from list
    self.size -= 1

2. Not Moving Existing Nodes to Head on Update

Another frequent error is updating the value of an existing key without moving it to the head of the list, failing to mark it as recently used.

Incorrect Implementation:

def put(self, key: int, value: int) -> None:
    if key in self.cache:
        # Just update the value without moving the node
        self.cache[key].value = value
        return  # WRONG: Node stays in its current position

Why it's problematic:

  • Violates the LRU principle - updating a value counts as "using" it
  • The updated item might get evicted soon even though it was just accessed
  • Leads to incorrect cache behavior

Correct Solution: Always move updated nodes to the head:

if key in self.cache:
    node = self.cache[key]
    self._remove_node(node)  # Remove from current position
    node.value = value       # Update value
    self._add_to_head(node)  # Move to head as most recently used

3. Incorrect Pointer Updates in Add/Remove Operations

Pointer manipulation in doubly linked lists is error-prone. A common mistake is updating pointers in the wrong order or missing some updates.

Incorrect _add_to_head Implementation:

def _add_to_head(self, node: Node) -> None:
    # WRONG ORDER: Breaks the chain
    self.head.next = node
    node.next = self.head.next  # This now points to itself!
    node.prev = self.head
    node.next.prev = node

Correct Solution: Update pointers in the right sequence:

def _add_to_head(self, node: Node) -> None:
    # Save the current first node
    node.next = self.head.next  # Point to current first
    node.prev = self.head       # Point back to head
    self.head.next = node        # Head points to new node
    node.next.prev = node        # Old first points back to new node

4. Not Handling the Case When Capacity is 0

Some implementations fail when initialized with capacity 0, leading to crashes or undefined behavior.

Solution: Add a guard clause in the put method:

def put(self, key: int, value: int) -> None:
    if self.capacity == 0:
        return  # Cannot store anything
    # ... rest of the implementation

5. Memory Leak from Circular References

Without properly clearing node references when removing them, Python's garbage collector might not free memory due to circular references between nodes.

Better Practice:

def _remove_node(self, node: Node) -> None:
    node.prev.next = node.next
    node.next.prev = node.prev
    # Clear references to help garbage collection
    node.prev = None
    node.next = None

These pitfalls highlight the importance of maintaining consistency between the hash map and linked list, properly managing node positions based on usage, and careful pointer manipulation.
