146. LRU Cache
Problem Description
The problem asks us to design a Least Recently Used (LRU) cache data structure with a specified maximum capacity. The defining aspect of an LRU cache is how it decides which items to discard: when the cache reaches its capacity limit, the least recently accessed item is removed to make room for a new one.
The LRU cache must support two operations:
- get(key): Returns the value associated with the provided key. If the key is not in the cache, it returns -1. If the key exists in the cache, it is marked as recently used.
- put(key, value): Inserts a key-value pair into the cache. If the key already exists, its current value is updated with the new value and the key is marked as recently used. If the key doesn't exist, the new key-value pair is added. If adding a new item would exceed the cache's capacity, the least recently used item is evicted first.
The challenge is to implement both get and put with O(1) time complexity, ensuring that these operations are performed very quickly, which is critical for a performant cache system.
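To make the required behavior concrete, here is a short usage sketch in Python. It assumes an LRUCache class exposing exactly the get/put interface described above (such as the one developed later in this article):

# Assumes an LRUCache class with the get/put interface described above
# (for example, the Python implementation later in this article).
cache = LRUCache(2)       # capacity of 2
cache.put(1, 1)           # cache holds {1: 1}
cache.put(2, 2)           # cache holds {1: 1, 2: 2}
print(cache.get(1))       # prints 1; key 1 becomes the most recently used
cache.put(3, 3)           # capacity exceeded: evicts key 2 (least recently used)
print(cache.get(2))       # prints -1, since key 2 was evicted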
Intuition
To achieve O(1) time complexity for both get and put, we use a hash map (a dictionary in Python) to store the keys and their associated nodes, since hash map access has an average time complexity of O(1). Each value in the hash map points to a node in a doubly-linked list, where each node represents a cache entry holding a key and a value. The doubly-linked list efficiently maintains the ordering of the nodes by recency of use.
Intuitively, whenever an item is accessed with get or updated/added with put, it is moved to the head of the linked list, since it is now the most recently used item. When the cache exceeds its capacity and an item must be evicted, we remove the node at the tail end of the list, as it is the least recently used item.
The solution involves the following operations:
- add_to_head(node): Inserts the node right after the dummy head node of the doubly-linked list.
- remove_node(node): Removes the node from its current position in the list.
- move_to_head(node): Combines remove_node and add_to_head to move an existing node to the head of the list.
- remove_tail(): Removes and returns the node right before the dummy tail node, which is the least recently used item.
In the put method, if the key doesn't already exist, we create a new node and add it to both the hash map and the linked list; if the size then exceeds the capacity, the tail node is removed from both the linked list and the hash map. If the key does exist, we update the node's value and move the node to the head of the list. The get method checks whether the key is in the hash map and, if found, moves the node to the head of the list and returns its value.
By combining the constant-time lookups of the hash map with the ordered structure of the doubly-linked list, both get and put run in O(1) average time, fulfilling the requirements of an LRU cache.
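Before the full approach, here is a minimal, illustrative Python sketch of the doubly-linked list with dummy head and tail sentinels and the pointer rewiring behind "move to head" (this is not the final solution, and the names are chosen purely for illustration). Because every real node always sits between two live nodes, removal never needs any null checks:

class SketchNode:
    # One cache entry in the doubly-linked list (illustrative only)
    def __init__(self, key=0, value=0):
        self.key = key
        self.value = value
        self.prev = None
        self.next = None

# Dummy head and tail sentinels; real entries live between them,
# most recently used next to head, least recently used next to tail.
head, tail = SketchNode(), SketchNode()
head.next, tail.prev = tail, head

def add_to_head(node):
    # Splice the node in right after the dummy head (most recently used position)
    node.prev, node.next = head, head.next
    head.next.prev = node
    head.next = node

def remove_node(node):
    # Unlink the node from wherever it currently sits; its neighbors are never None
    node.prev.next = node.next
    node.next.prev = node.prev

# "Touching" an entry means unlinking it and re-inserting it at the head
entry = SketchNode(1, 1)
add_to_head(entry)
remove_node(entry)
add_to_head(entry)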
Solution Approach
To implement the LRU cache, the solution makes use of two primary data structures: a hash map and a doubly-linked list. The hash map allows quick access to the nodes of the list, which in turn represent each key-value pair in the cache. The doubly-linked list is necessary to maintain the order of elements by their recency of use.
Here is a step-by-step walkthrough of the implementation:
- Node class: This inner class represents an element in the doubly-linked list, containing the key, the value, and pointers to the previous and next nodes.
- LRUCache.__init__(capacity: int): The constructor initializes the LRUCache with a specific capacity. It also initializes the hash map (named cache), a dummy head node and a dummy tail node for the doubly-linked list, and a size variable to keep track of the current number of elements in the cache. The head and tail are linked to each other, forming an empty doubly-linked list.
- LRUCache.get(key: int) -> int: This method attempts to retrieve the value for the given key.
  - If the key is not present in the hash map, it returns -1.
  - If the key exists, the corresponding node is fetched from the hash map, moved to the head of the linked list using move_to_head() to mark it as recently used, and its value is returned.
- LRUCache.put(key: int, value: int) -> None: This method updates the value of the key if it exists, or inserts a new key-value pair into the cache.
  - If the key exists, update the node's value and use move_to_head() to mark it as recently used.
  - If the key does not exist, create a new node, add it to the hash map, and insert it at the head of the list with add_to_head(). Increase the size counter by 1.
  - If, after insertion, the cache size exceeds its capacity, the least recently used element, which sits right before the tail, is removed through remove_tail(), and its key is removed from the hash map. Decrease the size counter by 1.
- LRUCache.move_to_head(node): This utility method places a node at the head of the doubly-linked list. It calls remove_node(node) followed by add_to_head(node).
- LRUCache.remove_node(node): Removes a node from the list by updating the pointers of its neighboring nodes, effectively extracting it from its current position.
- LRUCache.add_to_head(node): Places a node right after the dummy head node, making it the new first element of the list.
- LRUCache.remove_tail(): Removes the last non-dummy node (the tail's previous node), which represents the least recently used cache item, and returns it.
The combination of these methods, built on a hash map and a doubly-linked list, ensures that both get and put achieve the required O(1) average time complexity.
Example Walkthrough
Let's walk through a small example to illustrate how the LRU cache operates under the solution approach described above.
Suppose we initialize an LRU cache with a capacity of 2 and perform the following sequence of put and get operations:
- put(1, 1): The cache is empty, so we create a new node with the key-value pair (1, 1) and place it at the head of the doubly-linked list. The cache now contains {[1,1]}.
- put(2, 2): There is no existing node with key 2, so we create a new node and add it at the head, pushing the previous head node behind it. The cache now looks like {[2,2], [1,1]}.
- get(1): Key 1 exists, so the corresponding node is moved to the head, as it is now the most recently used. After retrieving the value, which is 1, the cache order is {[1,1], [2,2]}.
- put(3, 3): There is no key 3 in the cache, and adding it would exceed the cache's capacity. So we remove the node at the tail, which is currently key 2, since it is the least recently used. Then we create a new node for key 3 and add it at the head. The cache now holds {[3,3], [1,1]}.
- get(2): Key 2 was just evicted, so it is no longer in the cache. The get operation returns -1.
- put(4, 4): Again, adding a new node for key 4 would exceed the capacity, so we remove the least recently used item, which is now key 1. The cache is updated to {[4,4], [3,3]}.
- get(1): Key 1 was evicted in the previous step, so get returns -1.
- get(3): Key 3 is present, so its value, 3, is returned, and the node is moved to the head, making the cache order {[3,3], [4,4]}.
Through each step, the put and get methods dynamically adjust the order of the doubly-linked list to ensure the most recently used items are nearest to the head, and the least recently used are near the tail, ready to be evicted when the capacity is exceeded. The hash map provides constant-time access to the nodes in the list during these operations.
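The walkthrough above can be checked directly in code. This small driver assumes the Python LRUCache class from the Solution Implementation section below and asserts the results described in each step:

# Assumes the Python LRUCache class defined in the Solution Implementation below.
cache = LRUCache(2)
cache.put(1, 1)               # cache: {[1,1]}
cache.put(2, 2)               # cache: {[2,2], [1,1]}
assert cache.get(1) == 1      # cache: {[1,1], [2,2]}
cache.put(3, 3)               # evicts key 2 -> {[3,3], [1,1]}
assert cache.get(2) == -1     # key 2 was evicted
cache.put(4, 4)               # evicts key 1 -> {[4,4], [3,3]}
assert cache.get(1) == -1     # key 1 was evicted
assert cache.get(3) == 3      # cache: {[3,3], [4,4]}
print("walkthrough verified")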
Solution Implementation
Python

class Node:
    def __init__(self, key=0, value=0):
        # Initialize a new node with a key, a value, and prev/next pointers
        self.key = key
        self.value = value
        self.prev = None
        self.next = None


class LRUCache:
    def __init__(self, capacity: int):
        # Initialize the LRU cache with a given capacity
        self.cache = {}           # maps keys to their nodes
        self.head = Node()        # dummy head node
        self.tail = Node()        # dummy tail node
        self.capacity = capacity  # maximum capacity of the cache
        self.size = 0             # current size of the cache

        # Link the head and tail dummy nodes
        self.head.next = self.tail
        self.tail.prev = self.head

    def get(self, key: int) -> int:
        # Retrieve the value of the node with the given key, if present
        if key not in self.cache:
            return -1  # key not found
        node = self.cache[key]
        self.move_to_head(node)
        return node.value

    def put(self, key: int, value: int) -> None:
        # Insert a new key-value pair or update the value of an existing key
        if key in self.cache:
            # Key exists: update the value and mark it as recently used
            node = self.cache[key]
            node.value = value
            self.move_to_head(node)
        else:
            # Create a new node for the key-value pair
            node = Node(key, value)
            self.cache[key] = node
            self.add_to_head(node)

            self.size += 1
            if self.size > self.capacity:
                # Capacity exceeded: remove the least recently used item
                removed_node = self.remove_tail()
                del self.cache[removed_node.key]
                self.size -= 1

    def move_to_head(self, node):
        # Move a node to the front (head) of the list
        self.remove_node(node)
        self.add_to_head(node)

    def remove_node(self, node):
        # Remove a node from the doubly linked list
        node.prev.next = node.next
        node.next.prev = node.prev

    def add_to_head(self, node):
        # Add a node right after the dummy head node
        node.prev = self.head
        node.next = self.head.next
        self.head.next.prev = node
        self.head.next = node

    def remove_tail(self):
        # Remove and return the last real node (before the dummy tail node)
        node = self.tail.prev
        self.remove_node(node)
        return node


# Example of usage:
# cache = LRUCache(capacity=2)
# cache.put(1, 1)
# cache.put(2, 2)
# print(cache.get(1))  # returns 1
# cache.put(3, 3)      # evicts key 2
# print(cache.get(2))  # returns -1 (not found)
Java

import java.util.HashMap;
import java.util.Map;

// Definition of a doubly linked list node
class DoublyLinkedNode {
    int key;
    int value;
    DoublyLinkedNode prev;
    DoublyLinkedNode next;

    // Constructor for creating a new node with a key and value
    public DoublyLinkedNode(int key, int value) {
        this.key = key;
        this.value = value;
    }
}

// LRUCache class implementing the LRU caching mechanism
public class LRUCache {
    private final Map<Integer, DoublyLinkedNode> cacheMap; // Hash map storing key-node pairs for O(1) access
    private final DoublyLinkedNode head; // Dummy head of the doubly linked list
    private final DoublyLinkedNode tail; // Dummy tail of the doubly linked list
    private final int capacity;          // Maximum capacity of the cache
    private int size;                    // Current size of the cache

    // Constructor to initialize the LRU cache
    public LRUCache(int capacity) {
        this.capacity = capacity;
        this.cacheMap = new HashMap<>();
        this.size = 0;
        head = new DoublyLinkedNode(0, 0);
        tail = new DoublyLinkedNode(0, 0);
        head.next = tail; // Initially, the dummy head points to the dummy tail
        tail.prev = head; // and the dummy tail points back to the dummy head
    }

    // Retrieves the value of the node associated with the given key
    public int get(int key) {
        if (!cacheMap.containsKey(key)) {
            return -1;
        }
        DoublyLinkedNode node = cacheMap.get(key);
        moveToHead(node); // Move the accessed node to the head to maintain LRU order
        return node.value;
    }

    // Adds a new key-value pair to the cache or updates the value if the key already exists
    public void put(int key, int value) {
        if (cacheMap.containsKey(key)) {
            DoublyLinkedNode node = cacheMap.get(key);
            node.value = value;
            moveToHead(node); // Move the updated node to the head
        } else {
            DoublyLinkedNode newNode = new DoublyLinkedNode(key, value);
            cacheMap.put(key, newNode);
            addToHead(newNode); // Add the new node to the head of the linked list
            size++;
            if (size > capacity) {
                DoublyLinkedNode removed = removeTail(); // Remove the least recently used node
                cacheMap.remove(removed.key);            // Also remove it from the map
                size--;
            }
        }
    }

    // Moves a node to the head of the doubly linked list (marks it as recently used)
    private void moveToHead(DoublyLinkedNode node) {
        removeNode(node); // Remove the node from its current position
        addToHead(node);  // Add it back at the head
    }

    // Removes a node from the doubly linked list
    private void removeNode(DoublyLinkedNode node) {
        node.prev.next = node.next;
        node.next.prev = node.prev;
    }

    // Adds a node right after the dummy head node (marks it as recently used)
    private void addToHead(DoublyLinkedNode node) {
        node.next = head.next;
        node.prev = head;
        head.next.prev = node;
        head.next = node;
    }

    // Removes and returns the tail node (the least recently used node)
    private DoublyLinkedNode removeTail() {
        DoublyLinkedNode res = tail.prev;
        removeNode(res);
        return res;
    }
}

// The usage of the LRUCache class follows the same interface as the other solutions.
C++

#include <unordered_map>

// Node structure representing a doubly linked list node
struct Node {
    int key;
    int value;
    Node* prev;
    Node* next;

    // Default constructor for the sentinel (dummy) nodes
    Node()
        : key(0)
        , value(0)
        , prev(nullptr)
        , next(nullptr) {}

    // Constructor initializing a node with a specified key and value
    Node(int key, int value)
        : key(key)
        , value(value)
        , prev(nullptr)
        , next(nullptr) {}
};

// LRUCache class representing a Least Recently Used (LRU) cache
class LRUCache {
public:
    // Constructor to create a cache with a specific capacity
    LRUCache(int capacity)
        : capacity(capacity)
        , size(0) {
        head = new Node();  // Sentinel head node
        tail = new Node();  // Sentinel tail node
        head->next = tail;  // Connect head to tail
        tail->prev = head;  // Connect tail to head
    }

    // Returns the value of the node with the given key, if it exists
    int get(int key) {
        if (!cache.count(key)) return -1;  // Key not found, return -1
        Node* node = cache[key];
        moveToHead(node);  // Move the node to the head to mark it as recently used
        return node->value;
    }

    // Adds or updates the value of the node with the given key
    void put(int key, int value) {
        if (cache.count(key)) {  // If the key is found, update the node
            Node* node = cache[key];
            node->value = value;
            moveToHead(node);  // Move the node to the head as it is recently used
        } else {  // Key not found, insert a new node
            Node* node = new Node(key, value);
            cache[key] = node;  // Add it to the map
            addToHead(node);    // Add it to the doubly linked list as the most recently used
            ++size;

            // If over capacity, remove the least recently used item
            if (size > capacity) {
                node = removeTail();
                cache.erase(node->key);  // Remove it from the cache map
                delete node;             // Delete the node to prevent a memory leak
                --size;
            }
        }
    }

private:
    std::unordered_map<int, Node*> cache;  // Cache storage
    Node* head;    // Sentinel head of the doubly linked list
    Node* tail;    // Sentinel tail of the doubly linked list
    int capacity;  // The capacity of the cache
    int size;      // The current size of the cache

    // Move a node to the head of the doubly linked list
    void moveToHead(Node* node) {
        removeNode(node);
        addToHead(node);
    }

    // Remove a node from the doubly linked list
    void removeNode(Node* node) {
        node->prev->next = node->next;
        node->next->prev = node->prev;
    }

    // Insert a node right after the head node
    void addToHead(Node* node) {
        node->next = head->next;
        node->prev = head;
        head->next->prev = node;
        head->next = node;
    }

    // Remove and return the tail node (the least recently used node)
    Node* removeTail() {
        Node* node = tail->prev;
        removeNode(node);
        return node;
    }
};

/**
 * Your LRUCache object will be instantiated and called as such:
 * LRUCache* obj = new LRUCache(capacity);
 * int param_1 = obj->get(key);
 * obj->put(key, value);
 */
TypeScript

// Capacity and storage for the LRU cache
let lruCapacity: number;
let lruMap: Map<number, number>;

/**
 * Initializes the LRU cache with a specified capacity.
 * @param capacity - the number of items the cache can hold
 */
function initializeLRUCache(capacity: number): void {
    lruCapacity = capacity;
    lruMap = new Map();
}

/**
 * Retrieves the value associated with the given key and updates the cache to reflect recent access.
 * Returns -1 if the key is not present in the cache.
 * @param key - the key whose associated value is to be returned
 * @returns the value associated with the key, or -1 if the key does not exist
 */
function getFromLRUCache(key: number): number {
    if (lruMap.has(key)) {
        const val = lruMap.get(key)!;
        // Re-insert the key so it moves to the end, marking it as recently accessed
        lruMap.delete(key);
        lruMap.set(key, val);
        return val;
    }
    return -1;
}

/**
 * Inserts or updates the value associated with the key in the cache.
 * If the cache exceeds its capacity, the least recently used (LRU) entry is removed.
 * @param key - the key with which the specified value is to be associated
 * @param value - the value to be associated with the specified key
 */
function putIntoLRUCache(key: number, value: number): void {
    // If the key is already in the cache, delete it so re-insertion updates the access order
    if (lruMap.has(key)) {
        lruMap.delete(key);
    }

    // Insert the key with its new value at the most-recent end of the access order
    lruMap.set(key, value);

    // If the cache size exceeds its capacity, evict the LRU item
    if (lruMap.size > lruCapacity) {
        // The first key in the Map is the least recently used key
        const lruKey = lruMap.keys().next().value;
        lruMap.delete(lruKey);
    }
}

// Example of usage:
// initializeLRUCache(2);            // Initialize the cache with a capacity of 2
// console.log(getFromLRUCache(2));  // Logs -1 (not found)
// putIntoLRUCache(2, 6);            // Store the value 6 with key 2
// console.log(getFromLRUCache(2));  // Logs 6
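The TypeScript version above relies on the fact that a Map iterates its keys in insertion order, so deleting and re-inserting a key moves it to the "most recent" end. For comparison only (this is not part of the original solutions), the same insertion-order idea can be sketched in Python with collections.OrderedDict:

from collections import OrderedDict

class OrderedDictLRUCache:
    # Illustrative alternative: an insertion-order map instead of an explicit linked list.
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.data = OrderedDict()

    def get(self, key: int) -> int:
        if key not in self.data:
            return -1
        self.data.move_to_end(key)         # mark the key as most recently used
        return self.data[key]

    def put(self, key: int, value: int) -> None:
        if key in self.data:
            self.data.move_to_end(key)     # refresh recency before updating the value
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)  # evict the least recently used entry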
Time and Space Complexity
Time Complexity:
- __init__: Setting up the LRUCache object takes O(1) time, as it simply initializes a few variables and links the head and tail dummy nodes.
- get: This method runs in O(1) time, assuming the hash table lookup (self.cache[key]) is O(1) on average, as is the nature of hash tables. Moving the node to the head of the list is also O(1), since it only changes a few pointers.
- put: The put method also runs in O(1) average time. When the key already exists in the cache, updating the value and moving the node to the head is O(1). When the key does not exist, creating a new node and adding it at the head is O(1). If the cache exceeds its capacity, removing the tail node and deleting its key from the hash table is also O(1).
- move_to_head: This helper simply removes a node and adds it at the head; both operations are O(1).
- remove_node: The remove_node helper is O(1), since it only involves pointer updates.
- add_to_head: Similarly, add_to_head is O(1), because it hooks the new node in right after the dummy head node.
- remove_tail: Removing the tail node is O(1), as it takes constant time to change a few pointers and return the last node.
Space Complexity:
- The space complexity of the LRUCache is O(capacity), because the cache stores at most capacity nodes. Each node holds a key, a value, and two pointers, so the space per node is constant and the total space grows linearly with the capacity.