3180. Maximum Total Reward Using Operations I


Problem Description

You are given an integer array rewardValues of length n, representing the values of rewards.

Initially, your total reward x is 0, and all indices are unmarked. You are allowed to perform the following operation any number of times:

  • Choose an unmarked index i from the range [0, n - 1].
  • If rewardValues[i] is greater than your current total reward x, then add rewardValues[i] to x (i.e., x = x + rewardValues[i]), and mark the index i.

Return an integer denoting the maximum total reward you can collect by performing the operations optimally.
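
To make the operation concrete, here is a small, illustrative simulation (the helper name simulate and the sample pick orders are ours, not part of the problem statement): it applies the rule for a given order of indices and skips any pick whose value does not exceed the current total.

from typing import List


def simulate(rewardValues: List[int], order: List[int]) -> int:
    """Apply the operation in the given index order; skip picks that violate the rule."""
    x = 0
    marked = set()
    for i in order:
        # An index may be used at most once, and only while its value exceeds the current total x.
        if i not in marked and rewardValues[i] > x:
            x += rewardValues[i]
            marked.add(i)
    return x


# Different orders reach different totals, which is why the order of picks matters:
print(simulate([2, 9, 5, 3], [0, 2, 1]))  # 2 + 5 + 9 = 16
print(simulate([2, 9, 5, 3], [3, 2, 1]))  # 3 + 5 + 9 = 17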

Intuition

To solve this problem efficiently, we need to figure out a way to maximize the total reward x by leveraging the constraints given. The core idea is to exploit the fact that we can only add reward values greater than the current total reward x.

  1. Sort the Rewards: First, sorting rewardValues allows us to later find, using binary search, the smallest value that can be added to x.

  2. Use Dynamic Programming (DFS + Memoization): We define a function dfs(x) that returns the maximal reward that can be obtained when the current total reward is x. Using memoization reduces redundant calculations.

  3. Binary Search Efficiency: For each current x, perform a binary search to quickly locate the smallest index i in rewardValues satisfying rewardValues[i] > x. This lets us focus solely on feasible reward candidates.

  4. Recursive Solution Exploration: Starting from index i, consider each candidate value v. Choosing v raises the total to x + v and is worth v + dfs(x + v); taking the maximum over all candidates keeps the exploration optimal.

By combining sorting, efficient searching, recursive decision making, and memoization, the solution accumulates the maximum possible total reward.
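
Before optimizing, it helps to see the raw search space. The sketch below (brute_force and go are hypothetical helpers, exponential in n and for intuition only) tries every valid pick order; the sorted, memoized solution developed next collapses all orders that reach the same total x into a single state.

from typing import List


def brute_force(rewardValues: List[int]) -> int:
    """Try every valid sequence of picks; exponential, for intuition only."""

    def go(x: int, remaining: List[int]) -> int:
        best = 0
        for idx, v in enumerate(remaining):
            if v > x:  # the operation is allowed only when v exceeds the current total
                best = max(best, v + go(x + v, remaining[:idx] + remaining[idx + 1:]))
        return best

    return go(0, rewardValues)


print(brute_force([2, 9, 5, 3]))  # 17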

Solution Approach

Sorting + Memoization + Binary Search

The solution uses a combination of sorting, memoization, and binary search to efficiently collect the maximum reward.

  1. Sorting: Begin by sorting the rewardValues array. Sorting allows us to quickly identify the next feasible reward value using binary search, thereby optimizing our decision-making process.

  2. Memoization: We implement memoization using a cache to store already computed results of the function dfs(x). This avoids redundant calculations and significantly enhances performance, especially for overlapping subproblems characteristic of dynamic programming.

  3. Binary Search: For each current reward x, use bisect_right to perform a binary search on the sorted rewardValues array. This search finds the smallest index i where rewardValues[i] is greater than x.

  4. DFS (Depth-First Search) Recursion: Define the recursive function dfs(x):

    • Perform a binary search to find the smallest index i where rewardValues[i] > x.
    • Iterate over all reward values starting from index i.
    • For each reward value v that can be added, calculate v + dfs(x + v) and determine the maximum possible reward.
    • Return this maximum calculated reward, storing results in the cache for future access.

Here's how the implementation ties everything together:

from bisect import bisect_right
from functools import cache
from typing import List


class Solution:
    def maxTotalReward(self, rewardValues: List[int]) -> int:
        @cache  # memoize the best result for each total x
        def dfs(x: int) -> int:
            # first index whose value is strictly greater than x
            i = bisect_right(rewardValues, x)
            ans = 0
            for v in rewardValues[i:]:
                ans = max(ans, v + dfs(x + v))
            return ans

        rewardValues.sort()
        return dfs(0)

The use of sorting allows for an ordered approach to selecting rewards, memoization avoids recalculations, and the binary search accelerates the identification of optimal choices, effectively combining to solve the problem.
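
As a quick, illustrative check (assuming the imports shown with the class above are in scope), the method can be called directly on the array used in the walkthrough below:

solution = Solution()
print(solution.maxTotalReward([2, 9, 5, 3]))  # prints 17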

Example Walkthrough

Let's go through the solution approach using a small example to illustrate the steps clearly.

Example

Consider the rewardValues array: [2, 9, 5, 3].

  1. Sorting:

    • First, sort the array to get [2, 3, 5, 9]. This helps in efficiently identifying the next potential reward value to add to our total reward x.
  2. Initial Setup:

    • Start with x = 0. This is our initial reward, and we need to maximize it by choosing unmarked indices with values greater than x.
  3. Memoization:

    • Implement a memoization function dfs(x) to avoid redundant calculations. We'll store results for specific x values to reuse them when needed.
  4. Binary Search:

    • Use binary search (bisect_right) to find the smallest index i where the reward value is greater than the current x. For x = 0, the smallest reward value greater than 0 is 2.
  5. Recursive DFS Calculation:

    • From dfs(0), every value is a candidate, since all of them exceed 0.
    • Taking 2 first gives x = 2; the candidates greater than 2 are 3, 5, and 9. The best continuation is 5 then 9 (x becomes 2, then 7, then 16), because taking 3 next (x = 5) rules out 5 and leaves only 9, for a total of 2 + 3 + 9 = 14.
    • Taking 3 first gives x = 3; then 5 > 3 raises x to 8, and 9 > 8 raises x to 17.
    • Intermediate results such as dfs(5) = 9 are memoized, so different branches that reach the same total reuse them instead of recomputing.
  6. Maximize Reward:

    • dfs(0) compares every starting choice (2, 3, 5, or 9) using the memoized subresults.
    • The maximum total reward for this example is 3 + 5 + 9 = 17, which is what dfs(0) returns.

This structured approach allows for efficiently reaching the optimal solution through a combination of sorting, binary search, and recursive exploration with memoization.
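
To see the memoization at work, the traced wrapper below (max_total_reward_traced is a hypothetical helper, not part of the solution) records every distinct total x that dfs actually evaluates for [2, 9, 5, 3]. The state x = 5 is reached both via 2 + 3 and by taking 5 first, yet it is computed only once.

from bisect import bisect_right
from functools import cache
from typing import List, Tuple


def max_total_reward_traced(rewardValues: List[int]) -> Tuple[int, List[int]]:
    values = sorted(rewardValues)
    evaluated = []  # each distinct total x is appended exactly once thanks to @cache

    @cache
    def dfs(x: int) -> int:
        evaluated.append(x)
        i = bisect_right(values, x)
        return max((v + dfs(x + v) for v in values[i:]), default=0)

    return dfs(0), evaluated


best, states = max_total_reward_traced([2, 9, 5, 3])
print(best)             # 17
print(states.count(5))  # 1: the state x = 5 is shared between two branches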

Solution Implementation

from typing import List
from functools import lru_cache
from bisect import bisect_right

class Solution:
    def maxTotalReward(self, rewardValues: List[int]) -> int:
        # Use lru_cache instead of cache for caching results of dfs
        @lru_cache(None)
        def dfs(x: int) -> int:
            # Find the first index in rewardValues where the value is greater than x
            i = bisect_right(rewardValues, x)

            # Initialize the maximum reward sum as 0
            max_reward = 0

            # Iterate through the reward values starting from index i
            for v in rewardValues[i:]:
                # Recursively calculate maximum rewards and find the best possible combination
                max_reward = max(max_reward, v + dfs(x + v))

            return max_reward

        # Ensure the rewardValues are sorted before processing
        rewardValues.sort()

        # Start recursion from x = 0 and return the maximum reward
        return dfs(0)

import java.util.Arrays;

class Solution {
    private int[] rewardValues;  // Array to hold the reward values
    private Integer[] memoization;  // Array for memoization to store intermediate results

    /*
     * Method to find the maximum total reward that can be achieved
     * by selecting a subset of reward values.
     */
    public int maxTotalReward(int[] rewardValues) {
        this.rewardValues = rewardValues;

        // Sort the array to enable binary search for optimal next steps
        Arrays.sort(this.rewardValues);

        // Initialize the memoization array
        int maxElement = this.rewardValues[this.rewardValues.length - 1];
        memoization = new Integer[maxElement << 1];  // One slot per reachable total, which stays below 2 * maxElement

        // Start the depth-first search from a total reward of 0
        return dfs(0);
    }

    /*
     * Helper method to perform depth-first search and calculate the maximum reward.
     */
    private int dfs(int current) {
        // Check if the result for the current state is already computed
        if (memoization[current] != null) {
            return memoization[current];
        }

        // Perform a binary search to find the smallest value greater than 'current'
        int index = Arrays.binarySearch(rewardValues, current + 1);
        index = index < 0 ? -index - 1 : index;

        int maxReward = 0;  // Maximum reward obtainable from the current state

        // Iterate through the remaining reward values to compute possible rewards
        for (; index < rewardValues.length; ++index) {
            // Recursively calculate the reward for the next state
            maxReward = Math.max(maxReward, rewardValues[index] + dfs(current + rewardValues[index]));
        }

        // Store the computed maximum reward in the memoization array
        return memoization[current] = maxReward;
    }
}

#include <vector>
#include <algorithm>
#include <functional>
using namespace std;

class Solution {
public:
    int maxTotalReward(vector<int>& rewardValues) {
        // Sort the reward values in non-decreasing order
        sort(rewardValues.begin(), rewardValues.end());

        // Memoization table with one slot per reachable total; the accumulated sum
        // always stays below twice the largest value. -1 marks "not computed yet".
        vector<int> maxReward(rewardValues.back() * 2, -1);

        // Define a recursive lambda function for memoized depth-first search
        function<int(int)> dfs = [&](int currentSum) {
            // If the result for the currentSum is already computed, return it
            if (maxReward[currentSum] != -1) {
                return maxReward[currentSum];
            }

            // Find the first reward value greater than the currentSum
            auto it = upper_bound(rewardValues.begin(), rewardValues.end(), currentSum);
            int maxResult = 0;

            // Explore further options to maximize the reward
            for (; it != rewardValues.end(); ++it) {
                // Calculate the potential reward by including the current rewardValue
                int rewardValue = *it;
                maxResult = max(maxResult, rewardValue + dfs(currentSum + rewardValue));
            }

            // Memoize the result for the currentSum
            return maxReward[currentSum] = maxResult;
        };

        // Start the depth-first search from a sum of zero
        return dfs(0);
    }
};

// Function to calculate the maximum total reward from rewardValues array
function maxTotalReward(rewardValues: number[]): number {
    // Sort the rewards in ascending order to facilitate binary search
    rewardValues.sort((a, b) => a - b);

    // Binary search function to find the index of the smallest number greater than 'x'
    const search = (x: number): number => {
        let left = 0;
        let right = rewardValues.length;
        while (left < right) {
            const mid = (left + right) >> 1;
            if (rewardValues[mid] > x) {
                right = mid;
            } else {
                left = mid + 1;
            }
        }
        return left;
    };

    // Array to memoize the results of subproblems, initialized with -1
    const memoization: number[] = Array(rewardValues.at(-1)! * 2).fill(-1);

    // Recursive depth-first search function with memoization to calculate maximum reward
    const dfs = (x: number): number => {
        // If already computed, return the stored result
        if (memoization[x] !== -1) {
            return memoization[x];
        }

        let maxReward = 0;
        // Start from the first index where rewardValues[i] > x and try to collect rewards
        for (let i = search(x); i < rewardValues.length; ++i) {
            maxReward = Math.max(maxReward, rewardValues[i] + dfs(x + rewardValues[i]));
        }

        // Store the result in the memoization array and return it
        return (memoization[x] = maxReward);
    };

    // Start the search from a total reward of 0
    return dfs(0);
}

Time and Space Complexity

The time complexity of the given code is O(n \times (\log n + M)), where n is the length of the rewardValues array and M is twice the maximum value in rewardValues. This bound arises as follows:

  1. Sorting the rewardValues array takes O(n \log n).
  2. The number of distinct states for dfs(x) is at most M: the last value v added satisfies v > x, so the final total x + v < 2v \leq 2 \times \max(rewardValues). For the walkthrough array [2, 9, 5, 3], for example, M = 18 and the total never exceeds 17.
  3. Each state is evaluated only once thanks to memoization, and evaluating it costs one bisect_right in O(\log n) plus a scan over at most n candidate values.

The space complexity is O(M), dominated by the cache that stores one dfs result per reachable total, which is bounded by twice the maximum reward value.
