
3180. Maximum Total Reward Using Operations I

Problem Description

You start with a total reward of x = 0 and are given an array rewardValues of length n. All indices in the array are initially unmarked.

You can perform the following operation any number of times:

  • Select any unmarked index i from range [0, n-1]
  • If rewardValues[i] > x (current total reward), then:
    • Add rewardValues[i] to your total reward: x = x + rewardValues[i]
    • Mark index i as used

Your goal is to find the maximum total reward you can collect by choosing the optimal sequence of operations.

Key constraints:

  • You can only collect a reward at index i if its value is strictly greater than your current total reward
  • Each index can be used at most once (once marked, it cannot be used again)
  • You can perform operations in any order

Example walkthrough: If rewardValues = [1, 6, 3, 2]:

  • Start with x = 0
  • Choose index 0 (value 1): Since 1 > 0, collect it. Now x = 1
  • Choose index 2 (value 3): Since 3 > 1, collect it. Now x = 4
  • Choose index 1 (value 6): Since 6 > 4, collect it. Now x = 10
  • Cannot choose index 3 (value 2) since 2 ≤ 10
  • Maximum total reward = 10

The problem asks you to return the maximum possible total reward achievable through optimal selection of rewards.
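
For small inputs, the optimum can be cross-checked by brute force. The helper below is an illustrative sketch, not part of the intended solution: it tries every ordering of indices and collects greedily whenever the rule allows, which is enough to realize the optimal sequence (any optimal pick order can be placed first in some permutation, and extra picks only increase the total).

from itertools import permutations

def brute_force(rewardValues):
    best = 0
    for order in permutations(range(len(rewardValues))):
        x = 0
        for i in order:
            if rewardValues[i] > x:  # collection rule: strictly greater than the current total
                x += rewardValues[i]
        best = max(best, x)
    return best

print(brute_force([1, 6, 3, 2]))  # 10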


Intuition

The key insight is that once we sort the array, we can make smarter decisions about which rewards to collect. If we have rewards [1, 2, 3, 6], collecting smaller rewards first allows us to potentially collect more rewards later.

Think about it this way: if our current total is x, we can only collect rewards that are greater than x. Once we collect a reward v, our new total becomes x + v, which means we can now only collect rewards greater than x + v. This creates a natural progression where we want to be strategic about the order.

Why sorting helps: After sorting, we know that all rewards up to a certain point are too small to collect (they're ≤ our current total), and we only need to consider the rewards after it. This is where binary search becomes useful: bisect_right quickly finds the first reward greater than our current total.

The recursive structure: At any state with total reward x, we face a choice: which available reward should we collect next? Each choice leads to a new state with a different total. We want to explore all possible choices and pick the one that gives us the maximum final reward. This naturally leads to a recursive solution:

  • For current total x, find all collectible rewards (those > x)
  • Try collecting each one and recursively solve for the new total
  • Return the maximum among all choices

Why memoization is crucial: We might reach the same total reward x through different paths. For example, collecting rewards [1, 2] gives total 3, and so does collecting just [3]. Without memoization, we'd recalculate the optimal solution from total 3 multiple times. The @cache decorator remembers the result for each x value we've seen before.

The formula becomes: dfs(x) = max(v + dfs(x + v)) over all rewards v > x, with dfs(x) = 0 when no such reward exists. Starting from dfs(0) gives us the maximum total reward possible.
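
Before adding any optimization, this recurrence can be sketched directly. Here is a minimal, unsorted version for illustration (the helper name max_total_reward_naive is ours); the full solution below adds sorting and binary search:

from functools import cache

def max_total_reward_naive(rewardValues):
    @cache
    def dfs(x: int) -> int:
        # Try every still-collectible reward; max()'s default handles the "no moves" base case
        return max((v + dfs(x + v) for v in rewardValues if v > x), default=0)

    return dfs(0)

print(max_total_reward_naive([1, 6, 3, 2]))  # 10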


Solution Approach

The solution implements the recursive approach with memoization and binary search optimization:

1. Sort the rewards array:

rewardValues.sort()

Sorting enables us to use binary search and ensures that once we find rewards we can't collect, all subsequent smaller rewards are also uncollectible.

2. Define the recursive function with memoization:

@cache
def dfs(x: int) -> int:

The @cache decorator automatically memoizes the function, storing results for each unique value of x to avoid redundant calculations.

3. Find the starting point using binary search:

i = bisect_right(rewardValues, x)

bisect_right(rewardValues, x) returns the index of the first element greater than x. This is efficient with O(log n) complexity. All elements before index i are ≤ x and cannot be collected.
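
A quick illustration, using the [1, 1, 3, 3] array from the walkthrough below:

from bisect import bisect_right

sorted_rewards = [1, 1, 3, 3]
print(bisect_right(sorted_rewards, 0))  # 0 -- every element is greater than 0
print(bisect_right(sorted_rewards, 1))  # 2 -- skips both 1s
print(bisect_right(sorted_rewards, 3))  # 4 -- nothing left to collect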

4. Try collecting each valid reward:

ans = 0
for v in rewardValues[i:]:
    ans = max(ans, v + dfs(x + v))
  • Iterate through all rewards starting from index i (these are all > x)
  • For each reward v, calculate the total if we collect it: v + dfs(x + v)
  • dfs(x + v) recursively finds the maximum additional reward we can get after our total becomes x + v
  • Track the maximum value among all choices

5. Return the result:

return ans

If no rewards can be collected (all are ≤ x), the loop doesn't execute and we return 0.

6. Start the recursion:

return dfs(0)

Begin with initial total reward of 0.

Time Complexity: O(n × (log n + M)) in the worst case, where n is the length of rewardValues and M is twice the maximum reward value. Every reachable total is below 2 × max(rewardValues), so there are at most O(M) distinct states, and each state performs an O(log n) binary search plus a loop over at most n rewards.

Space Complexity: O(M) for the memoization cache; the recursion stack adds only O(log M), since each collected reward more than doubles the running total.


Example Walkthrough

Let's trace through a small example with rewardValues = [1, 1, 3, 3] to understand how the solution works.

Step 1: Sort the array. After sorting: rewardValues = [1, 1, 3, 3] (already sorted in this case).

Step 2: Start recursion with dfs(0)

Call dfs(0):

  • Current total: x = 0
  • Binary search: bisect_right([1, 1, 3, 3], 0) = 0 (all elements are > 0)
  • Can collect from index 0 onwards: [1, 1, 3, 3]
  • Try each option:
    • Option 1: Collect first 1 → Total becomes 1, recursively solve dfs(1)
    • Option 2: Collect second 1 → Total becomes 1, recursively solve dfs(1)
    • Option 3: Collect first 3 → Total becomes 3, recursively solve dfs(3)
    • Option 4: Collect second 3 → Total becomes 3, recursively solve dfs(3)

Call dfs(1):

  • Current total: x = 1
  • Binary search: bisect_right([1, 1, 3, 3], 1) = 2 (skip elements ≤ 1)
  • Can collect from index 2 onwards: [3, 3]
  • Try each option:
    • Collect first 3 → Total becomes 4, recursively solve dfs(4)
    • Collect second 3 → Total becomes 4, recursively solve dfs(4)
  • Both give: 3 + dfs(4) = 3 + 0 = 3
  • Return: 3

Call dfs(3):

  • Current total: x = 3
  • Binary search: bisect_right([1, 1, 3, 3], 3) = 4 (no elements > 3)
  • No rewards can be collected
  • Return: 0

Call dfs(4):

  • Current total: x = 4
  • Binary search: bisect_right([1, 1, 3, 3], 4) = 4 (no elements > 4)
  • No rewards can be collected
  • Return: 0

Back to dfs(0):

  • Option 1: 1 + dfs(1) = 1 + 3 = 4
  • Option 2: 1 + dfs(1) = 1 + 3 = 4 (same due to memoization)
  • Option 3: 3 + dfs(3) = 3 + 0 = 3
  • Option 4: 3 + dfs(3) = 3 + 0 = 3
  • Maximum: 4

Result: The maximum total reward is 4, achieved by collecting a reward of 1 first (bringing total to 1), then collecting a reward of 3 (bringing total to 4).

The key insight demonstrated here is that collecting smaller rewards first can enable us to collect more rewards overall. If we had greedily collected 3 first, we'd only get a total of 3 since no remaining rewards would be greater than 3.
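
To make the contrast concrete, here is a small sketch of that failing greedy strategy (the helper greedy_largest_first is illustrative only, not part of the solution):

def greedy_largest_first(rewardValues):
    # Always grab the largest reward that is still collectible
    x = 0
    for v in sorted(rewardValues, reverse=True):
        if v > x:
            x += v
    return x

print(greedy_largest_first([1, 1, 3, 3]))  # 3, while the optimal answer above is 4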

Solution Implementation

from typing import List
from functools import cache
import bisect


class Solution:
    def maxTotalReward(self, rewardValues: List[int]) -> int:
        """
        Find the maximum total reward that can be obtained.

        Args:
            rewardValues: List of available reward values

        Returns:
            Maximum total reward achievable
        """

        @cache
        def dfs(current_sum: int) -> int:
            """
            Depth-first search to find maximum additional reward from current state.

            Args:
                current_sum: Current accumulated reward sum

            Returns:
                Maximum additional reward that can be obtained
            """
            # Find the index of the first reward value greater than current_sum;
            # we can only use rewards that are greater than our current sum
            start_index = bisect.bisect_right(rewardValues, current_sum)

            # Track the maximum reward we can get from this state
            max_reward = 0

            # Try each valid reward value (those greater than current_sum)
            for reward_value in rewardValues[start_index:]:
                # Recursively calculate the total reward if we take this value:
                # total = current reward + best we can do after taking it
                total_reward = reward_value + dfs(current_sum + reward_value)
                max_reward = max(max_reward, total_reward)

            return max_reward

        # Sort reward values to enable binary search
        rewardValues.sort()

        # Start DFS from initial sum of 0
        return dfs(0)
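
A quick sanity check for the Python version (hypothetical driver code, not part of the LeetCode submission):

if __name__ == "__main__":
    solution = Solution()
    print(solution.maxTotalReward([1, 6, 3, 2]))  # 10 (problem description walkthrough)
    print(solution.maxTotalReward([1, 1, 3, 3]))  # 4  (example walkthrough)
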
import java.util.Arrays;

class Solution {
    private int[] rewardValues;
    private Integer[] memoization;

    /**
     * Finds the maximum total reward that can be obtained from the given reward values.
     * You can only pick a reward if its value is greater than your current total.
     *
     * @param rewardValues Array of available reward values
     * @return Maximum total reward achievable
     */
    public int maxTotalReward(int[] rewardValues) {
        this.rewardValues = rewardValues;

        // Sort rewards to enable binary search and optimize selection
        Arrays.sort(this.rewardValues);

        int n = this.rewardValues.length;
        // Initialize memoization array with size = 2 * maxReward.
        // This covers all possible states (the current reward sum can be at most
        // maxReward - 1 before the final pick, and we can add maxReward to it).
        memoization = new Integer[this.rewardValues[n - 1] << 1];

        // Start DFS from the initial state (currentReward = 0)
        return dfs(0);
    }

    /**
     * Performs depth-first search with memoization to find the maximum reward
     * starting from the current reward sum.
     *
     * @param currentReward Current accumulated reward sum
     * @return Maximum additional reward achievable from this state
     */
    private int dfs(int currentReward) {
        // Check if the result for this state is already computed
        if (memoization[currentReward] != null) {
            return memoization[currentReward];
        }

        // Find the first reward value greater than currentReward using binary search.
        // We search for (currentReward + 1) to find rewards we can still pick.
        int startIndex = Arrays.binarySearch(rewardValues, currentReward + 1);

        // If an exact match is not found, binarySearch returns -(insertionPoint) - 1;
        // convert it to the actual insertion point (first valid reward index)
        startIndex = startIndex < 0 ? -startIndex - 1 : startIndex;

        int maxReward = 0;

        // Try picking each valid reward (those greater than currentReward)
        for (int i = startIndex; i < rewardValues.length; i++) {
            // Calculate the total reward if we pick rewardValues[i]
            // and continue optimally from the new state
            int totalReward = rewardValues[i] + dfs(currentReward + rewardValues[i]);
            maxReward = Math.max(maxReward, totalReward);
        }

        // Store and return the computed result for this state
        memoization[currentReward] = maxReward;
        return maxReward;
    }
}
#include <algorithm>
#include <functional>
#include <vector>
using namespace std;

class Solution {
public:
    int maxTotalReward(vector<int>& rewardValues) {
        // Sort the reward values in ascending order
        sort(rewardValues.begin(), rewardValues.end());

        int maxReward = rewardValues.back();

        // Initialize memoization table with -1.
        // Size is double the maximum reward value to accommodate all possible states.
        // (Use a vector: variable-length arrays are not standard C++.)
        vector<int> memo(maxReward << 1, -1);

        // Define recursive function with memoization
        function<int(int)> dfs = [&](int currentSum) -> int {
            // Return memoized result if already computed
            if (memo[currentSum] != -1) {
                return memo[currentSum];
            }

            // Find the first reward value greater than the current sum
            auto it = upper_bound(rewardValues.begin(), rewardValues.end(), currentSum);

            int maxRewardFromHere = 0;

            // Try taking each valid reward value (those greater than the current sum)
            for (; it != rewardValues.end(); ++it) {
                int rewardValue = *it;

                // Recursively calculate the maximum reward after taking this one
                int totalReward = rewardValue + dfs(currentSum + rewardValue);
                maxRewardFromHere = max(maxRewardFromHere, totalReward);
            }

            // Store and return the maximum reward achievable from this state
            memo[currentSum] = maxRewardFromHere;
            return maxRewardFromHere;
        };

        // Start DFS from an initial sum of 0
        return dfs(0);
    }
};
/**
 * Calculates the maximum total reward that can be obtained
 * @param rewardValues - Array of reward values to choose from
 * @returns Maximum total reward achievable
 */
function maxTotalReward(rewardValues: number[]): number {
    // Sort reward values in ascending order
    rewardValues.sort((a, b) => a - b);

    /**
     * Binary search to find the first index where rewardValues[index] > targetValue
     * @param targetValue - The value to search for
     * @returns Index of the first element greater than targetValue
     */
    const findFirstGreaterIndex = (targetValue: number): number => {
        let left = 0;
        let right = rewardValues.length;

        while (left < right) {
            const mid = (left + right) >> 1; // Equivalent to Math.floor((left + right) / 2)

            if (rewardValues[mid] > targetValue) {
                right = mid;
            } else {
                left = mid + 1;
            }
        }

        return left;
    };

    // Initialize memoization array with size of 2 * maximum reward value;
    // this represents all possible states up to twice the maximum reward
    const maxRewardValue = rewardValues[rewardValues.length - 1];
    const memoizationCache: number[] = Array(maxRewardValue << 1).fill(-1); // << 1 is equivalent to * 2

    /**
     * Depth-first search with memoization to find maximum reward
     * @param currentSum - Current accumulated reward sum
     * @returns Maximum additional reward that can be obtained from this state
     */
    const dfs = (currentSum: number): number => {
        // Return cached result if already computed
        if (memoizationCache[currentSum] !== -1) {
            return memoizationCache[currentSum];
        }

        let maxAdditionalReward = 0;

        // Find starting index where rewards are greater than current sum
        const startIndex = findFirstGreaterIndex(currentSum);

        // Try adding each valid reward value and recursively calculate the best path
        for (let i = startIndex; i < rewardValues.length; ++i) {
            const newSum = currentSum + rewardValues[i];
            const totalReward = rewardValues[i] + dfs(newSum);
            maxAdditionalReward = Math.max(maxAdditionalReward, totalReward);
        }

        // Cache and return the result
        memoizationCache[currentSum] = maxAdditionalReward;
        return maxAdditionalReward;
    };

    // Start DFS from initial sum of 0
    return dfs(0);
}

Time and Space Complexity

Time Complexity: O(n × (log n + M))

The time complexity consists of:

  • Initial sorting: O(n log n)
  • DFS with memoization: The dfs function can be called with at most M different values of x (where M is twice the maximum value in rewardValues), since we start from 0 and can only add reward values, and the maximum possible sum is less than 2 × max(rewardValues). For each unique state x:
    • bisect_right operation: O(log n)
    • Iterating through remaining rewards: In the worst case, we iterate through all n elements
    • Due to memoization, each state is computed only once
  • Total DFS complexity: each of the O(M) states performs one O(log n) bisect plus a loop over at most n remaining rewards, giving O(M × (log n + n)) overall
  • Combined with sorting: O(n log n + M × (log n + n)) = O(n × (log n + M))

Space Complexity: O(M)

The space complexity consists of:

  • Cache storage: The @cache decorator stores at most M different states (different values of x), each taking O(1) space
  • Recursion stack depth: every collected reward v satisfies v > x, so the running total more than doubles with each step and the recursion depth is only O(log M)
  • The cache therefore dominates the space usage: O(M)


Common Pitfalls

1. Not Removing Duplicates from the Input Array

The current solution doesn't handle duplicate values in rewardValues. When duplicates exist, the algorithm unnecessarily explores the same state multiple times through different paths, leading to inefficiency.

Why it's a problem:

  • If rewardValues = [1, 2, 2, 3], the two 2s are interchangeable: from any state with x < 2, collecting either copy leads to the same subproblem dfs(x + 2), yet the loop tries both.
  • This creates redundant loop iterations that the memoization doesn't fully optimize away. Deduplication is also lossless: once a reward v is collected, the total satisfies x ≥ v, so a second copy of v could never be collected anyway.

Solution:

# Remove duplicates before sorting
rewardValues = list(set(rewardValues))
rewardValues.sort()
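
A quick illustrative check (using the Python Solution above) that deduplication does not change the answer:

print(Solution().maxTotalReward([1, 2, 2, 3]))  # 5 (collect 2, then 3)
print(Solution().maxTotalReward([1, 2, 3]))     # 5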

2. Integer Overflow in Large Test Cases

While Python handles arbitrary precision integers, in other languages or when dealing with extremely large reward sums, overflow could occur when computing current_sum + reward_value.

Solution:

# Add early termination when sum becomes too large
MAX_REWARD = 10**9  # or appropriate limit based on constraints

@cache
def dfs(current_sum: int) -> int:
    if current_sum >= MAX_REWARD:
        return 0
    # ... rest of the logic

3. Inefficient Memory Usage with Large Unique Values

The memoization cache stores results for every unique current_sum value encountered. If reward values are large and sparse (e.g., [1, 1000000, 2000000]), the cache might store many intermediate states.

Solution: Use state compression or limit the cache size:

from functools import lru_cache

# Use lru_cache with maxsize instead of unlimited cache
@lru_cache(maxsize=10000)
def dfs(current_sum: int) -> int:
    # ... implementation

4. Not Considering Early Termination Optimization

The current solution explores all possible paths even when it's clear that no better solution exists. For example, if we've already found a solution that collects all rewards, we don't need to explore other paths.

Solution:

def maxTotalReward(self, rewardValues: List[int]) -> int:
    rewardValues = sorted(set(rewardValues))

    # Calculate the theoretical maximum (sum of all distinct rewards)
    total_sum = sum(rewardValues)

    @cache
    def dfs(current_sum: int) -> int:
        # Early termination if we've reached the maximum possible
        if current_sum >= total_sum:
            return 0

        start_index = bisect.bisect_right(rewardValues, current_sum)

        # If every remaining reward can be collected in sorted order, take them all
        remaining_sum = sum(rewardValues[start_index:])
        if all(rewardValues[i] > current_sum + sum(rewardValues[start_index:i])
               for i in range(start_index, len(rewardValues))):
            return remaining_sum

        max_reward = 0
        for reward_value in rewardValues[start_index:]:
            total_reward = reward_value + dfs(current_sum + reward_value)
            max_reward = max(max_reward, total_reward)

        return max_reward

    return dfs(0)