2623. Memoize
Problem Description
You need to create a memoization wrapper function that takes any function as input and returns a memoized version of it. A memoized function caches its results based on the input arguments, so when called with the same inputs again, it returns the cached result instead of recalculating.
The function should handle three specific types of input functions:
- `sum(a, b)`: takes two integers and returns their sum `a + b`. Important note: the order of arguments matters, so `(3, 2)` and `(2, 3)` are treated as different inputs and should have separate cache entries.
- `fib(n)`: calculates the Fibonacci number for a given integer `n`. Returns `1` if `n <= 1`, otherwise returns `fib(n - 1) + fib(n - 2)`.
- `factorial(n)`: calculates the factorial of an integer `n`. Returns `1` if `n <= 1`, otherwise returns `factorial(n - 1) * n`.
The memoization mechanism should:
- Check if the result for given arguments already exists in the cache
- If yes, return the cached value without calling the original function
- If no, call the original function, store the result in the cache, and return it
The provided solution uses a JavaScript object as a cache, where the arguments array is used as the key (which gets implicitly converted to a string). When the memoized function is called, it first checks if the arguments exist in the cache. If found, it returns the cached result immediately. Otherwise, it executes the original function, stores the result in the cache, and returns it.
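As a minimal sketch of this contract, here is an illustrative Python version that uses the argument tuple directly as the cache key (the full solutions below stringify the arguments instead; `sum_fn` and the call counter are just for demonstration):

```python
def memoize(fn):
    """Return a version of fn that caches results by argument tuple."""
    cache = {}

    def memoized(*args):
        if args in cache:          # tuple of positional args as the key
            return cache[args]
        result = fn(*args)
        cache[args] = result
        return result

    return memoized


call_count = 0

def sum_fn(a, b):
    global call_count
    call_count += 1
    return a + b

memoized_sum = memoize(sum_fn)
print(memoized_sum(3, 2))  # 5, computed
print(memoized_sum(3, 2))  # 5, served from the cache
print(memoized_sum(2, 3))  # 5, computed again: (2, 3) is a distinct key
print(call_count)          # 2
```

Note how order matters: `(3, 2)` and `(2, 3)` occupy separate cache entries, exactly as the problem requires.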
Intuition
The key insight here is recognizing that many functions, especially recursive ones like Fibonacci and factorial, end up computing the same values multiple times. For example, when calculating `fib(5)`, we need `fib(4)` and `fib(3)`. But `fib(4)` itself needs `fib(3)` and `fib(2)`. Notice how `fib(3)` gets calculated twice? This redundancy grows exponentially as `n` increases.
Memoization solves this by trading space for time - we store previously computed results so we never have to recalculate them. Think of it like taking notes while solving a math problem: once you've calculated something, you write it down so you can just look it up if you need it again.
The natural data structure for this is a hash map (or JavaScript object), where we can store key-value pairs. The challenge is determining what to use as the key. Since functions can have multiple arguments, we need a way to uniquely identify each combination of arguments.
In JavaScript, when we use an array as an object key like `cache[args]`, the array gets automatically converted to a string. For example, `[2, 3]` becomes `"2,3"`. This works for our use case because:
- It creates a unique string for each combination of arguments
- The order is preserved, so `[2, 3]` and `[3, 2]` create different keys: `"2,3"` vs `"3,2"`
- It handles single arguments naturally (`[5]` becomes `"5"`)
The wrapper function pattern (returning a new function that has access to the cache through closure) allows us to maintain state between function calls while keeping the interface clean. Each memoized function gets its own cache, isolated from others, which is exactly what we want.
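The cache-per-closure point can be seen directly in a small Python sketch (the `.cache` attribute is attached here purely so the sketch can inspect the caches; it is not part of the solution):

```python
def memoize(fn):
    cache = {}  # a fresh cache per memoize() call, captured by the closure

    def memoized(*args):
        if args not in cache:
            cache[args] = fn(*args)
        return cache[args]

    memoized.cache = cache  # exposed only for inspection in this sketch
    return memoized


m_add = memoize(lambda a, b: a + b)
m_mul = memoize(lambda a, b: a * b)

m_add(2, 3)
m_mul(2, 3)

print(m_add.cache)  # {(2, 3): 5}, completely independent of...
print(m_mul.cache)  # {(2, 3): 6}, ...this one
```

Even though both functions were called with the same arguments, each memoized wrapper keeps its own isolated cache.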
Solution Approach
The implementation uses a closure pattern to create a memoized version of any function. Let's walk through the solution step by step:
1. Function Signature

```typescript
function memoize(fn: Fn): Fn
```

The function takes a function `fn` as input and returns a new function with the same signature but with memoization capability.
2. Cache Storage

```typescript
const cache: Record<string, any> = {};
```

We create an empty object to serve as our cache. Its keys are strings, so the type is `Record<string, any>`. This object stores key-value pairs where:
- Key: the stringified version of the arguments array
- Value: the result of calling `fn` with those arguments
3. Return a Wrapper Function

```typescript
return function (...args) {
    // memoization logic
}
```

We return a new function that accepts any number of arguments using the rest parameter `...args`. This wrapper function has access to both the original function `fn` and the `cache` object through closure.
4. Cache Lookup

```typescript
if (args in cache) {
    return cache[args];
}
```

When the wrapper function is called, we first check if we've seen these arguments before. The `args` array is automatically converted to a string when used as an object key. For example:
- `memoizedSum(2, 3)` → `args = [2, 3]` → key = `"2,3"`
- `memoizedFib(5)` → `args = [5]` → key = `"5"`

If the key exists in our cache, we immediately return the cached value without executing the original function.
5. Compute and Cache

```typescript
const result = fn(...args);
cache[args] = result;
return result;
```

If the arguments haven't been seen before:
- Call the original function with the spread arguments: `fn(...args)`
- Store the result in the cache using the arguments as the key
- Return the result
Why This Works:
- The string conversion of arrays preserves order, so `(2, 3)` and `(3, 2)` map to different cache keys (`"2,3"` vs `"3,2"`)
- Each memoized function maintains its own cache through closure
- The solution is generic enough to work with any function signature
- Once a value is cached, subsequent calls with the same arguments have O(1) lookup time
This approach is particularly effective for recursive functions like Fibonacci, where the same subproblems are solved multiple times. With memoization, each unique input is computed only once, dramatically improving performance from exponential to linear time complexity.
Example Walkthrough
Let's walk through how memoization works with a Fibonacci example, calculating `fib(4)`:

Initial Setup:

```typescript
function fib(n) {
    if (n <= 1) return 1;
    return fib(n - 1) + fib(n - 2);
}

const memoizedFib = memoize(fib);
```
First Call: `memoizedFib(4)`
- Check cache for key `"4"`: Not found
- Execute `fib(4)`, which needs `fib(3) + fib(2)`
- This triggers recursive calls:

Recursive Call: `memoizedFib(3)`
- Check cache for key `"3"`: Not found
- Execute `fib(3)`, which needs `fib(2) + fib(1)`

Recursive Call: `memoizedFib(2)`
- Check cache for key `"2"`: Not found
- Execute `fib(2)`, which needs `fib(1) + fib(0)`

Base Cases:
- `memoizedFib(1)`: Cache miss → compute result = 1 → `cache["1"] = 1`
- `memoizedFib(0)`: Cache miss → compute result = 1 → `cache["0"] = 1`

Unwinding:
- `memoizedFib(2)` returns 1 + 1 = 2 → `cache["2"] = 2`
- `memoizedFib(1)`: Cache hit! → return `cache["1"] = 1` (no recomputation)
- `memoizedFib(3)` returns 2 + 1 = 3 → `cache["3"] = 3`
- `memoizedFib(2)`: Cache hit! → return `cache["2"] = 2` (no recomputation)
- `memoizedFib(4)` returns 3 + 2 = 5 → `cache["4"] = 5`

Final Cache State:
`cache = { "0": 1, "1": 1, "2": 2, "3": 3, "4": 5 }`

Second Call: `memoizedFib(4)`
- Check cache for key `"4"`: Found!
- Return `cache["4"] = 5` immediately (no function execution)
Notice how `memoizedFib(2)` and `memoizedFib(1)` were called multiple times during the recursion but only computed once. The second time they were needed, we retrieved them from the cache instantly. This transforms the exponential time complexity O(2^n) into linear O(n).
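The walkthrough above assumes the recursive calls also go through the memoized wrapper; in code, that means `fib` must recurse via the wrapper rather than calling itself directly. A Python sketch of this (with `fib(0) = fib(1) = 1`, as in this problem; the `calls` counter is only for demonstration):

```python
def memoize(fn):
    cache = {}

    def memoized(*args):
        if args not in cache:
            cache[args] = fn(*args)
        return cache[args]

    return memoized


calls = 0

def fib(n):
    global calls
    calls += 1
    if n <= 1:
        return 1
    # Recurse through the memoized wrapper so subproblems are cached
    return memoized_fib(n - 1) + memoized_fib(n - 2)

memoized_fib = memoize(fib)

print(memoized_fib(4))  # 5
print(calls)            # 5: fib ran exactly once each for n = 4, 3, 2, 1, 0
memoized_fib(4)
print(calls)            # still 5: the second call is a pure cache hit
```

If `fib` called itself directly instead of `memoized_fib`, only the top-level result would be cached and the inner recursion would remain exponential.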
Solution Implementation
Python

```python
from typing import Any, Callable
import json


def memoize(fn: Callable[..., Any]) -> Callable[..., Any]:
    """
    Creates a memoized version of the given function.
    Caches results based on input arguments to avoid redundant computations.

    Args:
        fn: The function to be memoized

    Returns:
        A memoized version of the input function
    """
    # Cache dictionary to store computed results with stringified arguments as keys
    cache = {}

    def memoized_function(*args: Any) -> Any:
        """
        Wrapper function that checks the cache before computing.

        Args:
            *args: Variable arguments to pass to the original function

        Returns:
            The result of the function call (either cached or newly computed)
        """
        # Convert the arguments tuple to a string for use as the cache key,
        # using json.dumps to serialize the arguments (similar to JSON.stringify)
        key = json.dumps(args)

        # Return the cached result if found
        if key in cache:
            return cache[key]

        # Compute the result if not in the cache
        result = fn(*args)

        # Store the result in the cache for future use
        cache[key] = result

        return result

    return memoized_function


# Example usage:
# call_count = 0
#
# def add_function(a, b):
#     global call_count
#     call_count += 1
#     return a + b
#
# memoized_fn = memoize(add_function)
# print(memoized_fn(2, 3))  # 5
# print(memoized_fn(2, 3))  # 5 (retrieved from cache)
# print(call_count)         # 1 (function only called once)
```
Java

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

/**
 * Memoization utility class that provides caching functionality for functions.
 * Caches results based on input arguments to avoid redundant computations.
 */
public class Memoizer {

    /**
     * Functional interface for a function that accepts variable arguments and returns an Object.
     * This interface represents any function that can be memoized.
     */
    @FunctionalInterface
    public interface Fn {
        Object apply(Object... params);
    }

    /**
     * Creates a memoized version of the given function.
     * The memoized function caches results based on input arguments to avoid redundant computations.
     *
     * @param fn The function to be memoized
     * @return A memoized version of the input function
     */
    public static Fn memoize(Fn fn) {
        // Cache map to store computed results with stringified arguments as keys
        final Map<String, Object> cache = new HashMap<>();

        // Return a new function that wraps the original with caching logic
        return new Fn() {
            @Override
            public Object apply(Object... args) {
                // Convert the arguments array to a string for use as the cache key,
                // using Arrays.deepToString to handle nested arrays properly
                String key = Arrays.deepToString(args);

                // Return the cached result if found
                if (cache.containsKey(key)) {
                    return cache.get(key);
                }

                // Compute the result if not in the cache
                Object result = fn.apply(args);

                // Store the result in the cache for future use
                cache.put(key, result);

                return result;
            }
        };
    }

    /**
     * Example usage demonstration:
     *
     * public static void main(String[] args) {
     *     final int[] callCount = {0};
     *     Fn memoizedFn = memoize((Object... params) -> {
     *         callCount[0] += 1;
     *         return (Integer) params[0] + (Integer) params[1];
     *     });
     *
     *     System.out.println(memoizedFn.apply(2, 3)); // 5
     *     System.out.println(memoizedFn.apply(2, 3)); // 5 (retrieved from cache)
     *     System.out.println("Call count: " + callCount[0]); // 1 (function only called once)
     * }
     */
}
```
C++

```cpp
#include <any>
#include <functional>
#include <memory>
#include <sstream>
#include <string>
#include <unordered_map>
#include <vector>

// Type definition for a function that accepts a vector of any type and returns any type
using Fn = std::function<std::any(const std::vector<std::any>&)>;

/**
 * Converts a vector of any type to a string representation for use as a cache key
 * @param args - Vector of arguments to be converted
 * @return String representation of the arguments
 */
std::string argsToString(const std::vector<std::any>& args) {
    std::stringstream ss;
    for (size_t i = 0; i < args.size(); ++i) {
        // Try to convert each argument to a string based on its type
        if (args[i].type() == typeid(int)) {
            ss << std::any_cast<int>(args[i]);
        } else if (args[i].type() == typeid(double)) {
            ss << std::any_cast<double>(args[i]);
        } else if (args[i].type() == typeid(std::string)) {
            ss << std::any_cast<std::string>(args[i]);
        } else if (args[i].type() == typeid(bool)) {
            ss << std::any_cast<bool>(args[i]);
        }
        // Add a delimiter between arguments
        if (i < args.size() - 1) {
            ss << ",";
        }
    }
    return ss.str();
}

/**
 * Creates a memoized version of the given function.
 * Caches results based on input arguments to avoid redundant computations.
 *
 * @param fn - The function to be memoized
 * @return A memoized version of the input function
 */
Fn memoize(Fn fn) {
    // Shared pointer to the cache to ensure it persists across lambda copies
    auto cache = std::make_shared<std::unordered_map<std::string, std::any>>();

    return [cache, fn](const std::vector<std::any>& args) -> std::any {
        // Convert the arguments vector to a string for use as the cache key
        std::string key = argsToString(args);

        // Return the cached result if found
        if (cache->find(key) != cache->end()) {
            return (*cache)[key];
        }

        // Compute the result if not in the cache
        std::any result = fn(args);

        // Store the result in the cache for future use
        (*cache)[key] = result;

        return result;
    };
}

/**
 * Example usage:
 * int callCount = 0;
 * auto memoizedFn = memoize([&callCount](const std::vector<std::any>& args) -> std::any {
 *     callCount += 1;
 *     int a = std::any_cast<int>(args[0]);
 *     int b = std::any_cast<int>(args[1]);
 *     return a + b;
 * });
 *
 * std::vector<std::any> params = {2, 3};
 * std::any result1 = memoizedFn(params); // 5
 * std::any result2 = memoizedFn(params); // 5 (retrieved from cache)
 * std::cout << callCount << std::endl;   // 1 (function only called once)
 */
```
TypeScript

```typescript
// Type definition for a function that accepts any parameters and returns any type
type Fn = (...params: any[]) => any;

/**
 * Creates a memoized version of the given function.
 * Caches results based on input arguments to avoid redundant computations.
 *
 * @param fn - The function to be memoized
 * @returns A memoized version of the input function
 */
function memoize(fn: Fn): Fn {
    // Cache object to store computed results with stringified arguments as keys
    const cache: Record<string, any> = {};

    return function (...args: any[]): any {
        // Convert the arguments array to a string for use as the cache key
        const key: string = JSON.stringify(args);

        // Return the cached result if found
        if (key in cache) {
            return cache[key];
        }

        // Compute the result if not in the cache
        const result: any = fn(...args);

        // Store the result in the cache for future use
        cache[key] = result;

        return result;
    };
}

/**
 * Example usage:
 * let callCount = 0;
 * const memoizedFn = memoize(function (a, b) {
 *     callCount += 1;
 *     return a + b;
 * });
 * memoizedFn(2, 3); // 5
 * memoizedFn(2, 3); // 5 (retrieved from cache)
 * console.log(callCount); // 1 (function only called once)
 */
```
Time and Space Complexity

Time Complexity:
- Cache lookup: `O(k)` where `k` is the number of arguments, as JavaScript converts the `args` array to a string key when using it with the `in` operator
- Cache insertion: `O(k)` for the same reason (array-to-string conversion)
- Function execution: `O(f)` where `f` is the time complexity of the original function `fn`
- Overall per call:
  - First call with unique arguments: `O(k + f)`
  - Subsequent calls with the same arguments: `O(k)`

Space Complexity:
- Cache storage: `O(n * m)` where `n` is the number of unique argument combinations cached and `m` is the average size of the stored results
- Closure scope: `O(1)` for maintaining a reference to the original function
- Per-call stack space: `O(k)` for storing the arguments array
- Overall: `O(n * m + k)`
Note on Implementation Issue:
The implicit-conversion approach described in the Solution Approach (using `args` directly as a key, so `cache[args]` relies on `toString()`) has a bug: `[1,2]` and `["1,2"]` both stringify to `"1,2"` and would incorrectly share a cache entry. A proper implementation should use `JSON.stringify(args)`, as the TypeScript solution above does, or another serialization method for the cache key.
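The collision can be reproduced in Python by emulating JavaScript's implicit array-to-string conversion (`js_style_key` is an emulation written for this demonstration, not code from any of the solutions):

```python
import json


def js_style_key(args):
    # Emulates JavaScript's Array.prototype.toString: [1, 2] -> "1,2"
    return ",".join(str(a) for a in args)


print(js_style_key((1, 2)))    # "1,2"
print(js_style_key(("1,2",)))  # "1,2"  -- same key, collision!

print(json.dumps([1, 2]))      # [1, 2]
print(json.dumps(["1,2"]))     # ["1,2"]  -- distinct keys, no collision
```

JSON serialization keeps the type information (quotes around strings, brackets around nesting), which is why `JSON.stringify` avoids the collision that plain string coercion creates.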
Common Pitfalls

1. Unhashable Arguments Problem

The most significant pitfall in the provided solution is that it uses `json.dumps()` to serialize arguments as cache keys. This approach fails when arguments contain unhashable or non-JSON-serializable objects:

Problem Example:

```python
def process_data(data_dict):
    return sum(data_dict.values())

memoized_fn = memoize(process_data)
memoized_fn({1, 2, 3})    # TypeError: Object of type set is not JSON serializable
memoized_fn(lambda x: x)  # TypeError: Object of type function is not JSON serializable
```

Solution: Use a more robust key-generation strategy that handles various data types. Note that simply assigning `key = args` never raises, so the code must actually attempt to hash the tuple for the `try`/`except` to work:

```python
def memoize(fn: Callable[..., Any]) -> Callable[..., Any]:
    cache = {}

    def memoized_function(*args: Any) -> Any:
        # Try to create a hashable key from the arguments
        try:
            hash(args)       # raises TypeError if any argument is unhashable
            key = args       # hashable arguments: use the tuple directly
        except TypeError:
            key = str(args)  # unhashable arguments: fall back to a string representation

        if key in cache:
            return cache[key]
        result = fn(*args)
        cache[key] = result
        return result

    return memoized_function
```
2. Memory Leak with Unbounded Cache

The cache grows indefinitely without any mechanism to limit its size, potentially causing memory issues in long-running applications:

Problem Example:

```python
# This will create millions of cache entries
memoized_sum = memoize(lambda a, b: a + b)
for i in range(10_000_000):
    memoized_sum(i, i + 1)  # Each unique pair creates a new cache entry
```

Solution: Implement an LRU (Least Recently Used) cache with a maximum size:

```python
from functools import lru_cache

def memoize(fn: Callable[..., Any]) -> Callable[..., Any]:
    # Use the built-in LRU cache with a maximum size
    @lru_cache(maxsize=128)
    def memoized_function(*args):
        return fn(*args)
    return memoized_function


# Or implement the limit manually with OrderedDict
from collections import OrderedDict

def memoize_with_limit(fn: Callable[..., Any], maxsize: int = 128) -> Callable[..., Any]:
    cache = OrderedDict()

    def memoized_function(*args: Any) -> Any:
        key = str(args)
        if key in cache:
            # Move to the end (most recently used)
            cache.move_to_end(key)
            return cache[key]
        result = fn(*args)
        cache[key] = result
        # Remove the oldest item if the cache exceeds the maximum size
        if len(cache) > maxsize:
            cache.popitem(last=False)
        return result

    return memoized_function
```
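To see the size bound in action, here is a small usage sketch of the `OrderedDict` pattern, redefined so the snippet runs standalone (the `.cache` attribute is attached only so the sketch can inspect eviction):

```python
from collections import OrderedDict


def memoize_with_limit(fn, maxsize=128):
    cache = OrderedDict()

    def memoized(*args):
        key = str(args)
        if key in cache:
            cache.move_to_end(key)     # mark as most recently used
            return cache[key]
        result = fn(*args)
        cache[key] = result
        if len(cache) > maxsize:
            cache.popitem(last=False)  # evict the least recently used entry
        return result

    memoized.cache = cache  # exposed only for inspection in this sketch
    return memoized


m = memoize_with_limit(lambda a, b: a + b, maxsize=2)
m(1, 1)
m(2, 2)
m(3, 3)                            # third insert evicts the (1, 1) entry

print(len(m.cache))                # 2
print(str((1, 1)) in m.cache)      # False: evicted
print(str((3, 3)) in m.cache)      # True
```

With `maxsize=2`, the cache never holds more than two entries, so memory stays bounded no matter how many distinct inputs arrive.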
3. Mutable Arguments Causing Incorrect Cache Behavior

When arguments include mutable objects (lists, dictionaries), changes to these objects after caching can cause problems. With identity-based keys (caching on the object itself), a mutated argument still maps to the old entry and returns a stale result; with serialized keys such as `json.dumps`, the mutation produces a new key instead, so the stale entry is never hit again but lingers in the cache as garbage:

Problem Example:

```python
def sum_list(lst):
    return sum(lst)

# Suppose memoize keys the cache on object identity, e.g. id(lst)
memoized_sum = memoize(sum_list)
my_list = [1, 2, 3]
print(memoized_sum(my_list))  # 6 (cached)
my_list.append(4)             # Modify the list
print(memoized_sum(my_list))  # Still returns 6 (incorrect!)
```

Solution: Use value-based, immutable representations of mutable arguments as keys:

```python
import json

def memoize(fn: Callable[..., Any]) -> Callable[..., Any]:
    cache = {}

    def memoized_function(*args: Any) -> Any:
        # Create an immutable, value-based representation of the arguments
        immutable_args = []
        for arg in args:
            if isinstance(arg, (list, dict)):
                # Convert to a canonical string representation
                immutable_args.append(json.dumps(arg, sort_keys=True))
            else:
                immutable_args.append(arg)
        key = tuple(immutable_args)

        if key in cache:
            return cache[key]
        result = fn(*args)
        cache[key] = result
        return result

    return memoized_function
```
These solutions address the major pitfalls while maintaining the core memoization functionality and ensuring correctness across different use cases.