2623. Memoize


Problem Description

In the provided problem, you are asked to create a version of a function that is memoized. What this means is that when you have a function that computes a value based on some given input(s), you want to keep track of those inputs and the result of the computation. If the function is ever called again with the same input(s), instead of recomputing the result, you should return the previously computed and stored value. This approach saves time by avoiding redundant calculations, particularly for functions that might be called frequently with the same arguments or involve expensive computation.

The problem assumes there are three specific example functions you might want to memoize:

  1. A sum function that takes two integers a and b, and returns their sum.
  2. A fib function that computes the n-th Fibonacci number, which is defined as fib(n) = fib(n - 1) + fib(n - 2) with the base cases fib(0) = 0 and fib(1) = 1.
  3. A factorial function that computes n!, the factorial of n, which multiplies all integers from n down to 1.

The goal is to write a memoization wrapper function that applies to any function, including sum, fib, and factorial.
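For concreteness, the three example functions might be written as follows (a TypeScript sketch; the problem defines their behavior, while the exact signatures here are ours):

```typescript
// 1. sum: returns the sum of two integers.
function sum(a: number, b: number): number {
  return a + b;
}

// 2. fib: the n-th Fibonacci number, with fib(0) = 0 and fib(1) = 1.
function fib(n: number): number {
  return n <= 1 ? n : fib(n - 1) + fib(n - 2);
}

// 3. factorial: n!, the product of all integers from n down to 1 (0! = 1).
function factorial(n: number): number {
  return n <= 1 ? 1 : n * factorial(n - 1);
}

console.log(sum(2, 3), fib(10), factorial(5)); // 5 55 120
```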

Intuition

To solve this problem, we need to build a function that takes another function as its input and returns a new, memoized version of that function. This memoized function will remember (or "memoize") the results of previous function calls.

How do we remember these results? We use a cache, which in this code is simply an object (cache) that will store previous arguments and their computed results as key-value pairs. Each time the memoized function is called, we check if the arguments have been seen before by looking them up in the cache. If the arguments exist in the cache, we immediately return the stored result, bypassing the computation altogether.

If the arguments haven't been seen before, we proceed with the computation by calling the original function and storing the result in the cache before returning it.

The solution leverages closures in JavaScript: the returned, memoized function has access to the cache object even after the memoize function has finished executing. Closures are functions that retain access to the outer function scope in which they were created, meaning the cache will exist for as long as the memoized function does, allowing it to retain memory of all previous calls.
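Because closures carry the whole design, a minimal sketch may help (the makeCounter example is our own illustration, not part of the problem):

```typescript
// The inner function keeps access to `count` even after
// makeCounter has finished executing -- that is a closure.
function makeCounter(): () => number {
  let count = 0;          // lives in makeCounter's scope
  return () => ++count;   // captures `count` from that scope
}

const counter = makeCounter();
console.log(counter()); // 1
console.log(counter()); // 2 -- `count` persisted between calls
```

The cache in memoize survives for exactly the same reason: the returned function closes over it.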

The memoize function employs a simple but effective strategy to avoid recomputation. By following this memoization pattern, we ensure that the original function can operate more efficiently.

Solution Approach

The memoize function is a higher-order function. This means it's a function that takes another function as its argument and returns a new function. The returned function is the memoized version of the original function. The solution leverages two key concepts: closures and associative arrays (JavaScript objects in this case).

Here's a step-by-step breakdown of the implementation:

  • A cache object is created inside the memoize function using {}. This object will store previous computations.
  • The memoize function returns an anonymous function that captures the cache and the original fn.
  • When this anonymous function is called with a set of arguments (...args), it first checks if those arguments already exist as a key in the cache.
    • In JavaScript, object keys are strings, so args would be implicitly coerced into a string representation. A robust solution, however, requires a more reliable way to serialize arguments, especially to handle edge cases and non-primitive arguments such as objects.
  • If the cache contains the result, the stored result is returned, avoiding the need to call the original function.
  • If the result is not in the cache, the function uses the spread operator (...args) to pass all the arguments to the original function fn.
  • The result of fn(...args) is computed and stored in the cache, with args as the key.
  • Finally, the computed result is returned.
Data Structures:
  • An associative array (object) is used as the cache to store key-result pairs. Here, the key is the string representation of the arguments list.
Algorithm:
  • Caching/Memoization: The algorithm checks the cache to retrieve results of previous computations and stores new results as they are computed.
Patterns:
  • Higher-order function: memoize is a function that takes a function and returns a function.
  • Closure: The returned memoized function retains access to the cache from memoize's scope even after memoize has finished execution.

This memoization algorithm is simple and effective for functions with discrete, countable inputs. However, one should note that it does not account for potential issues such as the size of the cache growing indefinitely or the need for a more complex serialization approach for non-primitive arguments.
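To see why the serialization caveat matters, compare implicit string coercion with an explicit JSON.stringify key (a small illustration of the failure mode):

```typescript
// Implicit coercion collides for different argument types:
console.log(String([1, 2]));     // "1,2"
console.log(String(["1", "2"])); // "1,2" -- same key for different arguments!

// JSON.stringify keeps the two argument lists distinct:
console.log(JSON.stringify([1, 2]));     // "[1,2]"
console.log(JSON.stringify(["1", "2"])); // '["1","2"]'
```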


Example Walkthrough

Let us walk through a small example to illustrate the solution approach using the fib function, which computes the n-th Fibonacci number.

Suppose we want to compute fib(5) using a naive recursive function. This computation would involve the following calls:

fib(5)
→ fib(4) + fib(3)
→ (fib(3) + fib(2)) + (fib(2) + fib(1))
→ ((fib(2) + fib(1)) + (fib(1) + fib(0))) + ((fib(1) + fib(0)) + fib(1))
→ (((fib(1) + fib(0)) + fib(1)) + (fib(1) + fib(0))) + ((fib(1) + fib(0)) + fib(1))
→ ... and so on

Notice that fib(2) and fib(3) are being computed multiple times. This is where memoization comes in handy.

Here's how we apply the memoize function to fib:

  1. Create an instance of memoized fib by passing fib into memoize.

    let memoizedFib = memoize(fib);
  2. Invoke memoizedFib(5) and observe how it executes:

    • The function checks if 5 is a key in the cache. It's not, so it computes fib(5).
    • In the process, memoizedFib makes calls to memoizedFib(4) and memoizedFib(3).
    • memoizedFib(4) finds no cached result for 4, so it calls memoizedFib(3) and memoizedFib(2), then caches and returns their sum.
    • Similarly, memoizedFib(3) finds no cached result for 3, so it calls memoizedFib(2) and memoizedFib(1), then caches and returns their sum.
    • The first call to memoizedFib(2) sees that 2 is not in the cache, computes fib(2), and caches it; the second call to memoizedFib(2) returns the cached value directly.
    • memoizedFib(1) and memoizedFib(0) are base cases, so they return 1 and 0 respectively.
    • All intermediate results are now cached.
  3. Any subsequent call to memoizedFib with previously computed arguments, like memoizedFib(3) or memoizedFib(4), will immediately return the cached result instead of recomputing.

Using this memoized function, we have significantly reduced the number of computations. While computing fib(5) would have taken many recursive calls without memoization, with memoization, each Fibonacci number from 0 to 5 is calculated only once. All subsequent calls look up the precomputed value in the cache, returning the result in constant time. This change greatly improves the efficiency when dealing with larger input values.
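The walkthrough above assumes that the recursive calls are routed through the memoized wrapper rather than the raw fib; otherwise the inner calls would bypass the cache. A sketch that wires this up and counts the real computations (the call counter is our instrumentation):

```typescript
type Fn = (...params: any[]) => any;

function memoize(fn: Fn): Fn {
  const cache: Record<string, any> = {};
  return (...args: any[]) => {
    const key = JSON.stringify(args);
    if (key in cache) return cache[key];
    const result = fn(...args);
    cache[key] = result;
    return result;
  };
}

let calls = 0;

// The recursion calls memoizedFib itself, so intermediate results hit the cache.
const memoizedFib: Fn = memoize((n: number): number => {
  calls++;
  return n <= 1 ? n : memoizedFib(n - 1) + memoizedFib(n - 2);
});

console.log(memoizedFib(5)); // 5
console.log(calls);          // 6 -- one real computation per n in 0..5
console.log(memoizedFib(5)); // 5, served from the cache
console.log(calls);          // still 6
```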

Solution Implementation

from typing import Callable, Any, Dict

# Define the type signature for a generic function that can take any number
# and type of parameters and return any type of value.
Fn = Callable[..., Any]

# Function to memoize another function to save on repeat computations.
def memoize(fn: Fn) -> Fn:
    # This cache dictionary will store function call results, keyed by a
    # unique representation of their arguments.
    cache: Dict[str, Any] = {}

    # Define a wrapper function that checks the cache before executing the
    # original function.
    def memoized_function(*args: Any) -> Any:
        # Serialize the arguments into a string to use as a cache key.
        # (The args tuple is itself hashable and could be used directly as
        # the key; a string is used here to mirror the other languages.)
        key = str(args)

        # Check if the result is present in the cache for the given arguments.
        if key in cache:
            # Return the cached result.
            return cache[key]

        # If the result is not cached, call the function with the provided arguments.
        result = fn(*args)

        # Cache the result associated with the serialized arguments.
        cache[key] = result

        # Return the newly computed result.
        return result

    # Return the new function that will use the cache.
    return memoized_function


# Example usage:

# Initialize a variable to track the number of times the function is actually called.
call_count = 0

# Define a function that will be memoized.
# It simply adds two numbers and increments the call count.
def add(a: int, b: int) -> int:
    global call_count
    call_count += 1
    return a + b

# Create a memoized version of the 'add' function.
memoized_add = memoize(add)

# Execute the memoized function with the same arguments multiple times.
memoized_add(2, 3)  # Returns 5 and increments `call_count` to 1.
memoized_add(2, 3)  # Returns 5 without incrementing `call_count`, since the result is cached.

# Log the number of times the function was called.
print(call_count)  # Expected to log: 1
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

// This interface represents a generic function that can take any number of
// arguments of type T and return a value of type R.
@FunctionalInterface
interface Fn<T, R> {
    R apply(T... args);
}

// A class that provides a static method to memoize a function.
// The memoize method returns a new function that stores previous results
// to save on repeat computations.
class Memoizer {

    // Takes a function as an argument and returns a memoized version of it.
    public static <T, R> Fn<T, R> memoize(Fn<T, R> fn) {
        // Cache of function call results, keyed by a string representation
        // of the arguments.
        final Map<String, R> cache = new HashMap<>();

        // The lambda receives the varargs as an array (lambda parameters
        // cannot be declared with the varargs syntax).
        return args -> {
            // Convert the arguments to a string to use as a unique cache key.
            String key = Arrays.deepToString(args);

            // If the cache contains the result for these arguments, return it.
            if (cache.containsKey(key)) {
                return cache.get(key);
            }

            // Otherwise call the function and store the result in the cache.
            R result = fn.apply(args);
            cache.put(key, result);
            return result;
        };
    }
}

/**
 * Example usage (only one class per file may be public, so this file
 * would be named Example.java):
 */
public class Example {

    // Tracks the number of times the underlying function is actually called.
    private static int callCount = 0;

    public static void main(String[] args) {

        // Adds two numbers and increments the call count on each real invocation.
        Fn<Integer, Integer> add = numbers -> {
            callCount++;
            return numbers[0] + numbers[1];
        };

        // Memoize 'add' to avoid repeating calculations for the same arguments.
        Fn<Integer, Integer> memoizedAdd = Memoizer.memoize(add);

        // Calling twice with the same arguments only runs 'add' once.
        System.out.println(memoizedAdd.apply(2, 3)); // 5; callCount becomes 1
        System.out.println(memoizedAdd.apply(2, 3)); // 5 from cache; callCount stays 1

        // Log the number of times the original function was called.
        System.out.println(callCount); // Expected to print: 1
    }
}
#include <any>
#include <functional>
#include <iostream>
#include <memory>
#include <sstream>
#include <string>
#include <unordered_map>
#include <vector>

// Define the type signature for a generic function: any number of std::any
// arguments, std::any return value. (C++17's std::any is used here; hashing
// arbitrary std::any values is not possible in general, so the cache key is
// built by serializing the arguments instead.)
typedef std::function<std::any(std::vector<std::any>)> Fn;

// Build a string cache key from the arguments. For this example the helper
// handles int arguments only; a robust implementation would serialize other
// argument types as well.
std::string makeKey(const std::vector<std::any>& args) {
    std::ostringstream oss;
    for (const std::any& arg : args) {
        oss << std::any_cast<int>(arg) << ',';
    }
    return oss.str();
}

// A function to memoize another function to save on repeat computations.
Fn memoize(Fn func) {
    // The cache lives on the heap and is shared by all copies of the returned
    // lambda, so it safely outlives the memoize() call itself.
    auto cache = std::make_shared<std::unordered_map<std::string, std::any>>();

    // Return a lambda that captures the cache and the function to memoize.
    return [func, cache](std::vector<std::any> args) -> std::any {
        // Serialize the arguments to use as a unique cache key.
        std::string key = makeKey(args);

        // Check if the result is present in the cache for the given arguments.
        auto it = cache->find(key);
        if (it != cache->end()) {
            // Return the cached result.
            return it->second;
        }

        // If the result is not cached, call the function with the arguments.
        std::any result = func(args);

        // Cache the result associated with the serialized arguments.
        (*cache)[key] = result;

        // Return the newly computed result.
        return result;
    };
}

/**
 * Example usage:
 */

// Tracks the number of times the function is actually called.
int callCount = 0;

// Adds two int arguments and increments the call count.
// In a robust implementation, we would include type checks and error handling.
Fn add = [](std::vector<std::any> params) -> std::any {
    callCount++;
    return std::any_cast<int>(params.at(0)) + std::any_cast<int>(params.at(1));
};

int main() {
    // Create a memoized version of the 'add' function.
    Fn memoizedAdd = memoize(add);

    // Execute the memoized function with the same arguments multiple times.
    std::vector<std::any> args{2, 3};
    memoizedAdd(args); // Returns 5 and increments callCount to 1.
    memoizedAdd(args); // Returns 5 from the cache; callCount stays 1.

    // Log the number of times the function was called.
    std::cout << "Function was called " << callCount << " times." << std::endl; // 1
    return 0;
}
// Define the type signature for a generic function that can take any number and type of parameters
// and returns any type of value.
type Fn = (...params: any[]) => any;

// A function to memoize another function to save on repeat computations.
// It takes a function as an argument and returns a new function that keeps track of the results
// for given arguments and returns the cached result for subsequent calls with the same arguments.
function memoize(fn: Fn): Fn {
  // This cache object will store function call results, keyed by a unique representation of their arguments.
  const cache: Record<string, any> = {};

  // Returns a new function that checks the cache before executing the original function.
  return function (...args: any[]) {
    // Serialize the arguments into a string to use as a cache key, because objects/arrays as keys are
    // compared by identity and not by value, which would result in incorrect caching.
    const key = JSON.stringify(args);

    // Check if the result is present in cache for the given arguments.
    if (key in cache) {
      // Return the cached result.
      return cache[key];
    }

    // If the result is not cached, call the function with the provided arguments.
    const result = fn(...args);

    // Cache the result associated with the serialized arguments.
    cache[key] = result;

    // Return the newly computed result.
    return result;
  };
}

/**
 * Example usage:
 */

// Initialize a variable to track the number of times the function is actually called.
let callCount = 0;

// Define a function that will be memoized.
// It simply adds two numbers and increments the call count.
const add = (a: number, b: number): number => {
  callCount += 1;
  return a + b;
};

// Create a memoized version of the 'add' function.
const memoizedAdd = memoize(add);

// Execute the memoized function with the same arguments multiple times.
memoizedAdd(2, 3); // Should return 5 and increment `callCount` to 1.
memoizedAdd(2, 3); // Should return 5 without incrementing `callCount`, since the result is cached.

// Log the number of times the function was called.
console.log(callCount); // Expected to log: 1

Time and Space Complexity

Time Complexity

The time complexity of the memoization function depends on several factors:

  1. The complexity of the original function fn that is being memoized. If the time complexity of fn is O(f(n)), then for the first time that any unique set of arguments is passed to the memoized function, the time complexity will also be O(f(n)).

  2. The efficiency of the cache lookup. JavaScript object property access by string key is O(1) on average, so checking the cache and retrieving a stored result is O(1) for every subsequent call with the same arguments.

  3. The transformation of arguments into a cache key. A naive implementation that used the arguments directly as keys would rely on JavaScript's implicit string coercion, which cannot distinguish between argument lists such as [1, 2] and ["1", "2"] and could therefore mix up cache entries. The implementations above avoid this by serializing the arguments explicitly (str(args), Arrays.deepToString, JSON.stringify), and the cost of that serialization contributes to the overall time complexity; it can vary greatly depending on the serialization method used.

Considering a proper serialization step with a complexity of O(s(n)) where s(n) depends on the size and nature of the arguments, then for any new call with unique arguments, the time complexity would be O(f(n) + s(n)). For repeated calls with the same arguments post-caching, it becomes O(1).

Thus, the average time complexity depends on how often the memoized function is called with new versus previously seen arguments.
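To make the trade-off concrete, one can count actual invocations for a naive versus a memoized recursive Fibonacci (our own instrumentation, reusing the memoize pattern shown in the implementations above):

```typescript
// Count real invocations for naive vs memoized recursive Fibonacci.
let naiveCalls = 0;
function naiveFib(n: number): number {
  naiveCalls++;
  return n <= 1 ? n : naiveFib(n - 1) + naiveFib(n - 2);
}

type Fn = (...params: any[]) => any;
function memoize(fn: Fn): Fn {
  const cache: Record<string, any> = {};
  return (...args: any[]) => {
    const key = JSON.stringify(args);
    if (key in cache) return cache[key];
    const result = fn(...args);
    cache[key] = result;
    return result;
  };
}

let memoCalls = 0;
const memoFib: Fn = memoize((n: number): number => {
  memoCalls++;
  return n <= 1 ? n : memoFib(n - 1) + memoFib(n - 2);
});

naiveFib(10);
memoFib(10);
console.log(naiveCalls); // 177 -- grows exponentially with n
console.log(memoCalls);  // 11  -- each value in 0..10 computed once
```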

Space Complexity

The space complexity is primarily affected by the cache storage. For every unique set of arguments, a new entry is created in the cache.

  1. If n represents the number of unique calls with different arguments to the memoized function, and each result takes up space O(r(n)), then the space complexity for storing the results in the cache will be O(n * r(n)).

  2. The space complexity for storing the keys in the cache depends on the representation of the argument sets as keys. If proper serialization is used and it takes O(k(n)) space per argument set, the total space required for the keys would be O(n * k(n)).

Therefore, the overall space complexity of the memoization function is O(n * (r(n) + k(n))). Note that this could become quite significant for a large number of unique arguments or large argument sizes.

Reminder: the reliability of the cache hinges on the key scheme. Relying on implicit string coercion of the arguments can cause distinct argument lists to collide, and even JSON.stringify cannot represent values such as functions, undefined, or circular structures. To ensure reliable operation for arbitrary inputs, a robust serialization method should be used that uniquely and consistently represents argument sets as cache keys.
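The unbounded cache growth noted above can be mitigated with a size-capped cache. Below is a sketch with simple first-in-first-out eviction (the memoizeBounded name, the cap, and the FIFO policy are our choices; a production cache would usually use an LRU policy instead):

```typescript
type Fn = (...params: any[]) => any;

// Memoize with a capacity limit: once maxSize entries are cached,
// the oldest entry is evicted before a new one is stored.
function memoizeBounded(fn: Fn, maxSize: number): Fn {
  const cache = new Map<string, any>();
  return (...args: any[]) => {
    const key = JSON.stringify(args);
    if (cache.has(key)) return cache.get(key);
    const result = fn(...args);
    if (cache.size >= maxSize) {
      // A Map iterates in insertion order, so the first key is the oldest.
      const oldestKey = cache.keys().next().value;
      if (oldestKey !== undefined) cache.delete(oldestKey);
    }
    cache.set(key, result);
    return result;
  };
}

let realCalls = 0;
const square = memoizeBounded((x: number) => { realCalls++; return x * x; }, 2);

square(1); square(2); square(3); // the entry for 1 is evicted here
square(1);                       // recomputed: a 4th real call
console.log(realCalls); // 4
```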

