2630. Memoize II


Problem Description

The problem requires us to enhance a given function fn by adding memoization. Memoization is an optimization technique that improves performance by storing the results of expensive function calls and returning the cached result when the same inputs occur again.

The goal is to ensure that fn, once memoized, does not compute the result again for inputs it has already processed. Instead, it should return the previously computed result. It is important to note that inputs are considered identical when they are strictly equal to each other (===): primitives must match in both value and type, and objects must be the very same reference.
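
For instance, once memoize is implemented as described below, caching behaves as in this small sketch (memoSum and nums are illustrative names):

// A hypothetical memoized function taking an array argument.
const memoSum = memoize((arr: number[]) => arr.reduce((acc, x) => acc + x, 0));

const nums = [1, 2, 3];
memoSum(nums);      // computed: this reference has not been seen before
memoSum(nums);      // cached: the very same reference, so the inputs are === identical
memoSum([1, 2, 3]); // computed again: a different array object, even though its contents match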

Intuition

To memoize a function, we need two main components: a cache to store previously computed values and a way to uniquely identify each set of inputs.

The provided solution creates a memoized version of the fn function. Here's the thought process behind it:

  1. Define a cache mechanism. The solution uses two maps: objectIndexMap and cache.

    • objectIndexMap is used to assign a unique index to every distinct object.
    • cache is where the actual function outputs are stored, mapped by a unique key representing the function arguments.
  2. Generate unique keys for function arguments. Since the arguments could be objects, and we need a primitive value as a key for caching, the approach abstracts every input using an index representation. It assigns a unique index to every unique object input and then creates a string from these indices, which represents the combination of arguments.

  3. Wrap the original function fn with a new function that intercepts calls to fn. For every call:

    • Generate a unique key for the arguments provided.
    • Check if the result is already present in the cache. If it's there, return the cached result.
    • If the result is not cached, call fn with the original arguments, store the result in the cache, and then return the computed result.

This approach ensures that if the memoized function is called again with the same combination of arguments (based on strict equality), it does not recompute the result but instantly returns the cached result, thereby saving time on subsequent calls with identical inputs.
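
As a concrete illustration, assuming the TypeScript implementation presented later in this article and purely illustrative names, the generated keys would look roughly like this:

const memoized = memoize((a: any, b: any) => /* some expensive computation */ a);
const obj = { x: 1 };

memoized(1, 'a');      // key "1,a" -> computed and cached
memoized(1, 'a');      // key "1,a" -> cache hit
memoized(obj, 2);      // key "0,2" -> obj receives index 0; computed and cached
memoized(obj, 2);      // key "0,2" -> same reference, same index -> cache hit
memoized({ x: 1 }, 2); // key "1,2" -> a new object gets a new index, so this is computed again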

Solution Approach

The solution uses the following algorithms, data structures, and patterns:

  1. Closures: The memoize function returns a new function that maintains a closure over objectIndexMap and cache. This allows the new function to have persistent private data (caches) that are accessible across multiple invocations.

  2. Map Object: Two Map objects are used.

    • cache stores the results of the function calls with a unique string key for each set of parameters.
    • objectIndexMap maps objects to unique indices. This is needed because the cache key is a single string built from all of the arguments, and an object cannot be embedded in that string in a way that preserves reference identity, so a stable index stands in for it.
  3. Unique Key Generation: Each set of parameters needs to be uniquely identified to decide if the function output for those parameters is already cached. To achieve that uniqueness:

    • We assign a unique index to every distinct object encountered as a parameter via the getObjectIndex function. For non-object parameters that can be directly compared by value (like numbers, strings, etc.), their "index" is effectively the value itself.
    • The unique key for the cache map is generated by concatenating the indices of the parameters with commas, ensuring a string that represents the parameters' combination.
  4. Memoization Logic: When the returned function is called, a key is generated to represent the parameters. The function then does the following:

    • Checks cache to see if the key exists, indicating that the function has been called before with these parameters. If so, the cached value is returned, preventing the recalculation.
    • If the key does not exist in the cache, it calls fn with the given parameters, stores the result in cache using the generated key, and returns the result.

The algorithm is efficient because it avoids redundant computation by storing function results that have already been computed, using only the memory needed to store unique calls. The use of a Map object for caching provides constant-time retrieval, making the look-up process fast.
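
One practical consequence of the closure-based design, shown in the small usage sketch below (it assumes the memoize function implemented later in this article), is that each call to memoize produces an independently cached wrapper, because each wrapper closes over its own pair of Maps:

let calls = 0;
const square = (n: number) => { calls++; return n * n; };

const memoA = memoize(square);
const memoB = memoize(square);

memoA(4); // computed: calls === 1
memoA(4); // cache hit: calls is still 1
memoB(4); // computed again: calls === 2, because memoB has its own private cache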

Here's a walkthrough of the implementation steps in code:

  • Define the memoize function which takes the function fn as an argument.
  • Inside memoize, initialize the objectIndexMap and cache maps.
  • Define the getObjectIndex function, which assigns and retrieves a unique index for object parameters.
  • Return a new function from memoize that wraps the original function fn.
  • In the wrapped function, generate a key based on the parameters using getObjectIndex.
  • Check if the key exists in cache. If it does, return the stored value.
  • If the key does not exist, compute the result using fn, store it in cache, and return it.

This memoize function can then be applied to any function fn to create a memoized version of it, thereby optimizing it for scenarios where the function is likely to be called multiple times with the same set of parameters.
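
For example, here is a brief usage sketch (with illustrative names) showing the wrapper working unchanged on a function that mixes an object argument with a primitive one:

let lookups = 0;
const getField = (record: Record<string, number>, field: string) => {
    lookups++;
    return record[field];
};

const memoGetField = memoize(getField);
const user = { age: 30, score: 12 };

memoGetField(user, 'age');   // computed: lookups === 1
memoGetField(user, 'age');   // cache hit: lookups is still 1
memoGetField(user, 'score'); // different key, so computed: lookups === 2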

Example Walkthrough

Let's walk through an example by applying the memoization technique to a simple function — a Fibonacci sequence generator. The Fibonacci function fib is well-known for its efficiency issues with recursive calls because it recalculates the same values multiple times.

Here's the fib function. For the intermediate results to be cached as well, its recursive calls are routed through the memoized wrapper memoizeFib (created in the next step) rather than calling fib directly:

function fib(n) {
    if (n === 0 || n === 1) {
        return n;
    }
    // Recurse through the memoized wrapper so intermediate results are cached too.
    return memoizeFib(n - 1) + memoizeFib(n - 2);
}

Now, let's apply the memoize function to fib to create a memoized version:

const memoizeFib = memoize(fib);

Next, we'll see how memoization optimizes the function calls. Suppose we call memoizeFib(5):

  1. The function checks if the result for 5 is in the cache. It's not, so it computes fib(5).
  2. To compute fib(5), it needs fib(4) and fib(3). These calls go through the memoized function as well.
  3. Recursive calls continue until they hit the base cases fib(0) and fib(1), which return 0 and 1, respectively. Each computed value gets stored in the cache.
  4. It then combines cached results to return fib(5) without additional redundant calls.
  5. The result is stored in the cache with the key '5'.

Now, if we call memoizeFib(5) again:

  1. The function generates the key '5'.
  2. It finds the key in the cache and returns the stored result immediately without recomputation.

Here are the specific steps the memoization follows for fib:

  • When memoizeFib(5) is called for the first time, the cache is empty, so it calculates and stores the result for each unique call of fib (i.e., fib(5), fib(4), fib(3), fib(2), fib(1), and fib(0)).
  • Calculating fib(5) requests fib(3) twice: once directly and once while computing fib(4). The first request computes and caches the result; the second immediately returns the cached result.
  • The memoization saves computation time by storing the result of each unique fib(n) so all subsequent calls for these n values will be returned instantly from the cache.

Applying the memoize function to fib drastically improves its performance, especially for large values of n, because the number of computations drops from exponential to linear in the input size.
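
To make that concrete, here is a small sketch (assuming the memoize function from this article, with a call counter added to fib) comparing how often the underlying computation actually runs:

let calls = 0;

function fib(n: number): number {
    calls++;
    if (n === 0 || n === 1) {
        return n;
    }
    return memoizeFib(n - 1) + memoizeFib(n - 2);
}

const memoizeFib = memoize(fib);

memoizeFib(10);
console.log(calls); // 11: fib executes once per distinct n in 0..10
// A plain recursive fib that calls itself directly would make 177 calls for fib(10).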

This walkthrough illustrates the core idea of memoization and how it can be applied to optimize a computationally expensive recursive function.

Solution Implementation

from typing import Any, Callable, Dict, List

# Define the type for a function that can take any number of any type of parameters and return any type.
# In Python, we use 'Callable' from the 'typing' module to represent such a function.
FunctionWithAnyParams = Callable[..., Any]

def memoize(fn: FunctionWithAnyParams) -> FunctionWithAnyParams:
    """
    Utility function to memoize another function, caching its results based on the arguments passed.
    This can optimize the performance of functions with expensive computations when called with the same arguments.
    """
    # Maps id(object) -> unique index for each distinct (by identity) object argument.
    # Keying by id() avoids requiring the objects themselves to be hashable.
    object_index_map: Dict[int, int] = {}
    # Keep references to the indexed objects so their ids are not reused after garbage collection.
    indexed_objects: List[Any] = []
    # Cache to store results based on a unique key for the function arguments.
    cache: Dict[str, Any] = {}

    def get_object_index(obj: Any) -> int:
        """
        Helper function to obtain a unique index for an object, compared by identity.
        If it encounters the same object multiple times, it returns the same index.
        """
        obj_id = id(obj)
        if obj_id not in object_index_map:
            object_index_map[obj_id] = len(object_index_map)
            indexed_objects.append(obj)  # hold a reference so the id stays stable
        return object_index_map[obj_id]

    def memoized_function(*params: Any) -> Any:
        """
        Wrapper that checks the cache before calling the original function.
        """
        # Construct the cache key based on unique indices for the object arguments.
        # For non-object parameters, their value is used directly in the cache key.
        key = ','.join(str(get_object_index(param) if isinstance(param, (dict, list)) else param)
                       for param in params)
        # Check the cache; if the result is not cached yet, compute and cache it.
        if key not in cache:
            # This calls the original function with the unpacked parameters.
            cache[key] = fn(*params)
        # Return the cached result.
        return cache[key]

    return memoized_function


# Example of how to use:
# call_count = 0

# def add(a, b):
#     global call_count
#     call_count += 1
#     return a + b

# memoized_add = memoize(add)
# print(memoized_add(2, 3))  # returns 5, the result is computed.
# print(memoized_add(2, 3))  # returns 5, the result is retrieved from cache.
# print(call_count)  # outputs 1, because the function was actually called only once.

import java.util.HashMap;
import java.util.Map;

// Functional interface for a function that can take any number of parameters of any type and returns any type.
@FunctionalInterface
interface FunctionWithAnyParams {
    Object apply(Object... params);
}

public class MemoizationUtil {

    // Utility function to memoize another function, caching its results based on the arguments passed.
    // This can optimize the performance of functions with expensive computations when called with the same arguments.
    public static FunctionWithAnyParams memoize(FunctionWithAnyParams fn) {
        // Map to keep track of a unique index for each distinct object (array) argument.
        Map<Object, Integer> objectIndexMap = new HashMap<>();
        // Cache to store results based on a unique key for the function arguments.
        Map<String, Object> cache = new HashMap<>();

        // Return a function that wraps the passed function with memoization.
        return (params) -> {
            // Construct the cache key based on unique indices for the function arguments.
            final StringBuilder keyBuilder = new StringBuilder();
            for (Object param : params) {
                if (param instanceof Object[]) {
                    // Arrays compare by reference, so map each distinct array to a stable index.
                    // The '#' prefix keeps indices from colliding with literal numeric parameters.
                    keyBuilder.append('#').append(objectIndexMap.computeIfAbsent(param, k -> objectIndexMap.size()));
                } else {
                    keyBuilder.append(param == null ? "null" : param.toString());
                }
                keyBuilder.append(',');
            }
            String key = keyBuilder.toString();

            // Check the cache; if the result is not cached yet, compute and cache it.
            if (!cache.containsKey(key)) {
                // This passes the parameters on and calls the original function.
                cache.put(key, fn.apply(params));
            }
            // Return the cached result.
            return cache.get(key);
        };
    }

    // Example of how to use:
    public static void main(String[] args) {
        // Sample usage of the memoize function.
        int[] callCount = {0};
        FunctionWithAnyParams memoizedAdd = memoize((params) -> {
            callCount[0]++;
            return (int) params[0] + (int) params[1];
        });

        // Compute the result of adding 2 and 3. The result is computed since it's not cached yet.
        System.out.println(memoizedAdd.apply(2, 3));  // Outputs 5, the first time the result is computed.
        // Add 2 and 3 again. This time the result is retrieved from the cache.
        System.out.println(memoizedAdd.apply(2, 3));  // Outputs 5, the result is retrieved from cache.
        // The number of times the underlying function has actually been called.
        System.out.println(callCount[0]);             // Outputs 1, because the underlying function ran only once.
    }
}

#include <functional>  // for std::function
#include <map>         // for std::map
#include <memory>      // for std::shared_ptr
#include <sstream>     // for std::stringstream
#include <string>      // for std::string
#include <vector>      // for std::vector

// Function type that takes a list of string parameters and returns a string value.
using FunctionWithAnyParams = std::function<std::string(std::vector<std::string>)>;

// Utility function to memoize another function, caching its results based on the arguments passed.
// Since this C++ port works with string parameters only, the cache key is built directly from the
// values and no object-to-index map is needed.
FunctionWithAnyParams memoize(FunctionWithAnyParams fn) {
    // Cache to store results based on a unique key for the function arguments.
    // A shared_ptr keeps the cache alive after memoize returns and shares it among
    // all copies of the returned std::function.
    auto cache = std::make_shared<std::map<std::string, std::string>>();

    // Return a lambda wrapped around the passed function with memoization.
    return [fn, cache](std::vector<std::string> params) -> std::string {
        // Construct the cache key by joining the parameters with commas.
        std::stringstream keyStream;
        for (const auto& param : params) {
            keyStream << param << ",";
        }
        std::string key = keyStream.str();

        // Check the cache; if the result is not cached yet, compute and cache it.
        auto it = cache->find(key);
        if (it == cache->end()) {
            // This calls the original function with the parameters.
            it = cache->emplace(key, fn(params)).first;
        }

        // Return the cached result.
        return it->second;
    };
}

// Example usage:
/*
#include <iostream>

int callCount = 0;
auto memoizedAdd = memoize([&callCount](std::vector<std::string> params) -> std::string {
    callCount++;
    int a = std::stoi(params[0]);
    int b = std::stoi(params[1]);
    return std::to_string(a + b);
});

std::cout << memoizedAdd({"2", "3"}) << std::endl; // Outputs "5", the function computes the result.
std::cout << memoizedAdd({"2", "3"}) << std::endl; // Outputs "5", the result is retrieved from the cache.
std::cout << callCount << std::endl; // Outputs "1", the function is actually called only once.
*/

// Type definition for a function that can take any number of any type of parameters and returns any type.
type FunctionWithAnyParams = (...params: any[]) => any;

// Utility function to memoize another function to cache its results based on the arguments passed.
// This can optimize the performance of functions with expensive computations when called with the same arguments.
function memoize(fn: FunctionWithAnyParams): FunctionWithAnyParams {
    // Map to keep track of a unique index for each unique object argument.
    const objectIndexMap: Map<unknown, number> = new Map();
    // Cache to store results based on a unique key for the function arguments.
    const cache: Map<string, any> = new Map();

    // Helper function to obtain a unique index for an object.
    // If it encounters the same object multiple times, it returns the same index.
    const getObjectIndex = (obj: unknown): number => {
        if (!objectIndexMap.has(obj)) {
            objectIndexMap.set(obj, objectIndexMap.size);
        }
        return objectIndexMap.get(obj)!; // Using non-null assertion because we know the key exists in the map.
    };

    // Return a function that wraps the passed function with memoization.
    return function (...params: any[]): any {
        // Construct the cache key based on unique indices for the function arguments.
        const key = params.map(param => typeof param === 'object' ? getObjectIndex(param) : param).join(',');
        // Check the cache; if the result is not cached yet, compute and cache it.
        if (!cache.has(key)) {
            // This spreads the parameters and calls the original function.
            cache.set(key, fn(...params));
        }
        // Return the cached result.
        return cache.get(key)!; // Using non-null assertion because we set the value before or it's already cached.
    };
}

// Example of how to use:
// let callCount = 0;
// const memoizedAdd = memoize(function (a, b) {
//     callCount += 1;
//     return a + b;
// });
// memoizedAdd(2, 3) // returns 5, the result is computed.
// memoizedAdd(2, 3) // returns 5, the result is retrieved from cache.
// console.log(callCount) // outputs 1, because the function was actually called only once.

Time and Space Complexity

The time complexity and space complexity of the code are as follows:

Time Complexity

For function memoize:

  • The getObjectIndex function has a time complexity of O(1) per call due to the constant-time access of Maps.
  • memoize generates a key by iterating over the input parameters and calling getObjectIndex for object parameters. If there are n parameters, this process has a time complexity of O(n).
  • Checking the cache is a constant-time Map lookup. Whether fn actually has to be executed depends on:
    • The number of unique parameter combinations passed (m), which is the number of unique keys, and
    • The time complexity of the function fn itself (let's denote this as T(fn)).

Hence, the time complexity can vary depending on whether the data is present in the cache:

  • First call with new parameters: O(n) + T(fn).
  • Subsequent calls with the same parameters: O(n), since it will retrieve the result directly from the cache.

Space Complexity

  • The objectIndexMap keeps growing as new unique objects are passed, which leads to a space complexity of O(u), where u is the number of unique object parameters seen over all calls.
  • Similarly, the cache map stores the results for each unique parameter combination. Thus, it has a space complexity of O(m), where m is the number of unique parameter combinations.

The overall space complexity is O(u + m).

In conclusion, the final overall time complexity is dependent on the specific use case, particularly the complexity of fn and the unique parameter combinations. The space complexity is a combination of unique objects encountered and unique stored results.
