2809. Minimum Time to Make Array Sum At Most x
Problem Description
You have two arrays nums1 and nums2 of equal length. The problem involves a time-based simulation where:
- Every second: all elements in nums1 are automatically incremented by their corresponding elements in nums2. That is, for every index i, nums1[i] becomes nums1[i] + nums2[i].
- After each increment: you can optionally choose one index i and set nums1[i] = 0.

Your goal is to find the minimum time (in seconds) needed to make the sum of all elements in nums1 less than or equal to a given value x. If it's impossible to achieve this, return -1.
Key Points:
- At time t, if you haven't reset the element at index i, its value will be nums1[i] + t * nums2[i]
- You can reset at most one element per second (after the automatic increment)
- Once an element is reset to 0, it continues growing from 0 in subsequent seconds
Example scenario: If nums1 = [3, 2], nums2 = [1, 2], and x = 10:
- At t=0: the sum is 3 + 2 = 5 (≤ 10, so the answer could be 0)
- At t=1: the elements become [4, 4] before any operation; you could then reset one to get [0, 4] or [4, 0]
- At t=2: elements that weren't reset continue growing, and you can reset another element

The challenge is determining the optimal strategy for which elements to reset and when, in order to keep the sum at or below x in minimum time.
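To make these rules concrete, here is a minimal brute-force simulation sketch (the helper names are ours, not from any reference code). It applies the increment-then-optional-reset rule literally and tries every reset schedule up to n seconds, so it is only usable for very small inputs:

```python
from itertools import product

def simulate(nums1, nums2, resets):
    """resets[t] is the index zeroed after the increment in second t+1,
    or None to skip that second's operation. Returns the final sum."""
    vals = list(nums1)
    for choice in resets:
        # Every second, each element grows by its nums2 counterpart
        vals = [v + d for v, d in zip(vals, nums2)]
        if choice is not None:
            vals[choice] = 0  # optional reset of exactly one index
    return sum(vals)

def brute_force_min_time(nums1, nums2, x):
    n = len(nums1)
    for t in range(n + 1):  # resetting an element twice never helps
        choices = [None] + list(range(n))
        if min(simulate(nums1, nums2, r)
               for r in product(choices, repeat=t)) <= x:
            return t
    return -1

print(brute_force_min_time([3, 2], [1, 2], 10))  # 0: the sum starts at 5 <= 10
```

Because this mirrors the statement directly, it is handy for differential-testing the DP solution below on small random cases.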
Intuition
Let's think about what happens when we reset an element at different times. If we reset nums1[i] at time t, we eliminate a value of nums1[i] + t * nums2[i] from the sum. This is the crucial observation: the later we reset an element, the more value we eliminate.
Now consider if we reset the same element multiple times. The second reset would only eliminate the growth since the first reset, which is less effective than resetting a different element that has been growing since the beginning. Therefore, each element should be reset at most once.
Since we need at most n operations (one per element), and each second allows at most one operation, the answer must be between 0 and n seconds.
Here's the key insight: if we decide to reset exactly j elements, which elements should we choose, and in what order?
- The value eliminated when resetting element i at time t is nums1[i] + t * nums2[i]
- If we reset j elements at times 1, 2, ..., j, the total reduction is:
  - Element reset at time 1: nums1[i₁] + 1 * nums2[i₁]
  - Element reset at time 2: nums1[i₂] + 2 * nums2[i₂]
  - ...
  - Element reset at time j: nums1[iⱼ] + j * nums2[iⱼ]

Notice that elements with larger nums2[i] values benefit more from being reset later, since they are multiplied by the time. This suggests a greedy ordering: sort the elements by their nums2 values in ascending order, so that elements with larger growth rates are reset later, maximizing the reduction.
After sorting, we still need to determine which subset of elements to reset. This becomes a dynamic programming problem: for the first i elements (after sorting), what is the maximum reduction we can achieve with exactly j operations? We can either skip element i or reset it at time j, which leads to the recurrence relation.
Finally, for each possible number of operations j from 0 to n, we check whether the total sum after j seconds, minus the maximum possible reduction, is at most x.
Solution Approach
Based on our intuition, we implement the solution using sorting and dynamic programming.
Step 1: Sorting
First, we sort the pairs (nums1[i], nums2[i]) by their nums2[i] values in ascending order. This ensures that elements with larger growth rates are considered for later time slots, maximizing the reduction value.
sorted(zip(nums1, nums2), key=lambda z: z[1])
Step 2: Dynamic Programming Setup
We define f[i][j] as the maximum reduction in sum achievable when considering the first i elements (after sorting) with exactly j operations.
- Initialize a 2D array f of size (n+1) × (n+1) with all zeros
- f[0][j] = 0 for all j (no elements means no reduction is possible)
Step 3: DP Transition
For each element i (1-indexed) and each possible number of operations j:
f[i][j] = max(
    f[i-1][j],                             # Skip element i
    f[i-1][j-1] + nums1[i] + nums2[i] * j  # Reset element i at time j
)
The second option represents resetting the current element at time j, which eliminates nums1[i] + nums2[i] * j from the sum.
Step 4: Finding the Answer
After filling the DP table, we calculate:
- s1 = sum(nums1): the initial sum
- s2 = sum(nums2): the total growth per second
For each possible time j from 0 to n:
- The sum at time j without any operations would be s1 + s2 * j
- The maximum reduction possible with j operations is f[n][j]
- The actual sum after optimal operations is s1 + s2 * j - f[n][j]
We return the smallest j where s1 + s2 * j - f[n][j] <= x. If no such j exists, we return -1.
Time Complexity: O(n²) for the DP computation
Space Complexity: O(n²) for the DP table
Example Walkthrough
Let's walk through a concrete example with nums1 = [3, 2, 5], nums2 = [2, 1, 3], and x = 15.
Step 1: Sort by nums2 values
We sort the pairs by nums2 in ascending order:
- Original pairs: (3,2), (2,1), (5,3)
- After sorting: (2,1), (3,2), (5,3)
Step 2: Build DP table
We create a DP table f[i][j], where i is the number of elements considered (after sorting) and j is the number of operations used.
Let's fill the table step by step:
For element 1 (2,1):
- f[1][0] = 0 (no operations, no reduction)
- f[1][1] = max(0, 0 + 2 + 1*1) = 3 (reset at time 1, eliminating 2 + 1 = 3)
For element 2 (3,2):
- f[2][0] = 0 (no operations)
- f[2][1] = max(f[1][1], f[1][0] + 3 + 2*1) = max(3, 5) = 5 (better to reset element 2 at time 1)
- f[2][2] = max(f[1][2], f[1][1] + 3 + 2*2) = max(0, 3 + 7) = 10 (reset element 1 at time 1, element 2 at time 2)
For element 3 (5,3):
- f[3][0] = 0
- f[3][1] = max(f[2][1], f[2][0] + 5 + 3*1) = max(5, 8) = 8 (reset element 3 at time 1)
- f[3][2] = max(f[2][2], f[2][1] + 5 + 3*2) = max(10, 5 + 11) = 16 (reset element 2 at time 1, element 3 at time 2)
- f[3][3] = max(f[2][3], f[2][2] + 5 + 3*3) = max(0, 10 + 14) = 24 (reset elements 1 and 2 at times 1 and 2, element 3 at time 3)
Step 3: Calculate sums and find answer
- Initial sum s1 = 3 + 2 + 5 = 10
- Growth per second s2 = 2 + 1 + 3 = 6
Check each time:
- Time 0: Sum = 10 + 6*0 - 0 = 10 (≤ 15) ✓
- Time 1: Sum = 10 + 6*1 - 8 = 8 (≤ 15) ✓
- Time 2: Sum = 10 + 6*2 - 16 = 6 (≤ 15) ✓
The answer is 0 seconds since the initial sum of 10 is already ≤ 15.
Let's verify with a different target, x = 8:
- Time 0: Sum = 10 (> 8) ✗
- Time 1: Sum = 10 + 6*1 - 8 = 8 (≤ 8) ✓
The answer would be 1 second.
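The hand computation above can be double-checked mechanically. This sketch rebuilds the f table for the example and recomputes the answer for both targets:

```python
def max_reductions(nums1, nums2):
    # f[i][j]: max reduction using j resets among the first i sorted elements
    n = len(nums1)
    pairs = sorted(zip(nums1, nums2), key=lambda p: p[1])
    f = [[0] * (n + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        v1, v2 = pairs[i - 1]
        for j in range(n + 1):
            f[i][j] = f[i - 1][j]
            if j > 0:
                f[i][j] = max(f[i][j], f[i - 1][j - 1] + v1 + v2 * j)
    return f

f = max_reductions([3, 2, 5], [2, 1, 3])
print(f[3])  # [0, 8, 16, 24], matching the table above

def answer(x, s1=10, s2=6):
    return next((j for j in range(4) if s1 + s2 * j - f[3][j] <= x), -1)

print(answer(15), answer(8))  # 0 1
```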
The key insight illustrated here is that by sorting elements with smaller nums2 values first, we ensure that elements with higher growth rates (like the one with nums2 = 3) remain available for later resets, maximizing the reduction when multiplied by the time factor.
Solution Implementation
class Solution:
    def minimumTime(self, nums1: List[int], nums2: List[int], x: int) -> int:
        n = len(nums1)

        # dp[i][j] represents the maximum reduction we can achieve
        # by selecting j items from the first i items
        dp = [[0] * (n + 1) for _ in range(n + 1)]

        # Sort items by nums2 values (ascending order) to optimize the selection order
        # Each item contributes nums1[i] + nums2[i] * time when selected at a given time
        sorted_pairs = sorted(zip(nums1, nums2), key=lambda pair: pair[1])

        # Build the DP table
        for i in range(1, n + 1):
            value1, value2 = sorted_pairs[i - 1]

            for operations_count in range(n + 1):
                # Option 1: Don't select the current item
                dp[i][operations_count] = dp[i - 1][operations_count]

                # Option 2: Select the current item (if we have operations available)
                if operations_count > 0:
                    # When selected at time 'operations_count', this item contributes
                    # value1 + value2 * operations_count to the reduction
                    reduction_with_current = dp[i - 1][operations_count - 1] + value1 + value2 * operations_count
                    dp[i][operations_count] = max(dp[i][operations_count], reduction_with_current)

        # Calculate initial sums
        sum1 = sum(nums1)
        sum2 = sum(nums2)

        # Check each possible number of operations (time steps)
        for time in range(n + 1):
            # Total sum at time t = initial_sum + sum2 * t - maximum_reduction
            total_sum_at_time = sum1 + sum2 * time - dp[n][time]

            if total_sum_at_time <= x:
                return time

        # If no valid time found, return -1
        return -1
class Solution {
    public int minimumTime(List<Integer> nums1, List<Integer> nums2, int x) {
        int n = nums1.size();

        // dp[i][j] represents the maximum reduction we can achieve
        // by considering the first i elements and performing j operations
        int[][] dp = new int[n + 1][n + 1];

        // Create pairs of (nums1[i], nums2[i]) for sorting
        int[][] pairs = new int[n][2];
        for (int i = 0; i < n; ++i) {
            pairs[i] = new int[] {nums1.get(i), nums2.get(i)};
        }

        // Sort pairs by nums2 values (increment rate) in ascending order
        // This ensures we reset elements with smaller increment rates first
        Arrays.sort(pairs, Comparator.comparingInt(a -> a[1]));

        // Fill the DP table
        for (int i = 1; i <= n; ++i) {
            for (int j = 0; j <= n; ++j) {
                // Option 1: Don't reset the i-th element at time j
                dp[i][j] = dp[i - 1][j];

                // Option 2: Reset the i-th element at time j
                if (j > 0) {
                    int initialValue = pairs[i - 1][0];
                    int incrementRate = pairs[i - 1][1];
                    // At time j, the element would have value: initialValue + incrementRate * j
                    // Resetting it reduces the sum by this amount
                    dp[i][j] = Math.max(dp[i][j], dp[i - 1][j - 1] + initialValue + incrementRate * j);
                }
            }
        }

        // Calculate initial sums
        int sum1 = 0; // Sum of initial values
        int sum2 = 0; // Sum of increment rates
        for (int value : nums1) {
            sum1 += value;
        }
        for (int value : nums2) {
            sum2 += value;
        }

        // Check each possible number of operations (0 to n)
        for (int operations = 0; operations <= n; ++operations) {
            // Total sum at time 'operations' = initial sum + increment sum * time - reduction
            int totalSum = sum1 + sum2 * operations - dp[n][operations];
            if (totalSum <= x) {
                return operations;
            }
        }

        // If no valid time found, return -1
        return -1;
    }
}
class Solution {
public:
    int minimumTime(vector<int>& nums1, vector<int>& nums2, int x) {
        int n = nums1.size();

        // Create pairs of (nums2[i], nums1[i]) for sorting by nums2 values
        vector<pair<int, int>> sortedPairs;
        for (int i = 0; i < n; ++i) {
            sortedPairs.emplace_back(nums2[i], nums1[i]);
        }

        // Sort pairs by nums2 values (ascending order)
        sort(sortedPairs.begin(), sortedPairs.end());

        // dp[i][j] represents the maximum reduction we can achieve
        // using the first i elements with exactly j operations
        // (a vector is used instead of a variable-length array, which is non-standard C++)
        vector<vector<int>> dp(n + 1, vector<int>(n + 1, 0));

        // Fill the DP table
        for (int i = 1; i <= n; ++i) {
            for (int operations = 0; operations <= n; ++operations) {
                // Option 1: Don't use the i-th element in this operation
                dp[i][operations] = dp[i - 1][operations];

                // Option 2: Use the i-th element if we have operations left
                if (operations > 0) {
                    auto [increaseRate, baseValue] = sortedPairs[i - 1];
                    // The reduction when performing the operation at time 'operations'
                    // is baseValue + increaseRate * operations
                    int reductionWithCurrent = dp[i - 1][operations - 1] +
                                               baseValue + increaseRate * operations;
                    dp[i][operations] = max(dp[i][operations], reductionWithCurrent);
                }
            }
        }

        // Calculate initial sums
        int sum1 = accumulate(nums1.begin(), nums1.end(), 0);
        int sum2 = accumulate(nums2.begin(), nums2.end(), 0);

        // Check for each possible number of operations (0 to n)
        // whether we can achieve sum <= x
        for (int operations = 0; operations <= n; ++operations) {
            // Total sum after 'operations' time units:
            // initial_sum + increase_per_time * operations - total_reduction
            int totalSum = sum1 + sum2 * operations - dp[n][operations];

            if (totalSum <= x) {
                return operations;
            }
        }

        // If no valid number of operations found
        return -1;
    }
};
/**
 * Finds the minimum time needed to reduce the sum to at most x
 * @param nums1 - Initial values array
 * @param nums2 - Increment values array (per time unit)
 * @param x - Target maximum sum
 * @returns Minimum time needed, or -1 if impossible
 */
function minimumTime(nums1: number[], nums2: number[], x: number): number {
    const n: number = nums1.length;

    // dp[i][j] represents the maximum reduction we can achieve
    // by considering the first i elements and performing j operations
    const dp: number[][] = Array(n + 1)
        .fill(0)
        .map(() => Array(n + 1).fill(0));

    // Combine nums1 and nums2 into pairs for easier processing
    const pairedNums: number[][] = [];
    for (let i = 0; i < n; ++i) {
        pairedNums.push([nums1[i], nums2[i]]);
    }

    // Sort pairs by increment value (nums2) in ascending order
    // This ensures we handle smaller increments first for optimal reduction
    pairedNums.sort((a, b) => a[1] - b[1]);

    // Dynamic programming to calculate maximum achievable reduction
    for (let i = 1; i <= n; ++i) {
        for (let j = 0; j <= n; ++j) {
            // Option 1: Don't perform an operation on the current element
            dp[i][j] = dp[i - 1][j];

            // Option 2: Perform an operation on the current element at time j
            if (j > 0) {
                const [initialValue, incrementValue] = pairedNums[i - 1];
                // Reduction = initial value + (increment * time when operation is performed)
                dp[i][j] = Math.max(dp[i][j], dp[i - 1][j - 1] + initialValue + incrementValue * j);
            }
        }
    }

    // Calculate initial sums
    const sumNums1: number = nums1.reduce((acc, val) => acc + val, 0);
    const sumNums2: number = nums2.reduce((acc, val) => acc + val, 0);

    // Check for each possible time j if we can achieve sum <= x
    for (let j = 0; j <= n; ++j) {
        // Total sum at time j = initial sum + increments * time - maximum reduction
        const totalSum: number = sumNums1 + sumNums2 * j - dp[n][j];
        if (totalSum <= x) {
            return j;
        }
    }

    // No valid time found
    return -1;
}
Time and Space Complexity
Time Complexity: O(n^2)

The code first sorts the zipped arrays by the second element of each pair, which takes O(n log n) time. It then runs a nested loop: the outer loop iterates over the n sorted pairs and the inner loop over the n+1 possible operation counts, with O(1) work per step, giving O(n * n) = O(n^2). The final loop that checks each candidate time runs O(n) times. Since O(n^2) dominates O(n log n) and O(n), the overall time complexity is O(n^2).
Space Complexity: O(n^2)

The 2D DP table has dimensions (n+1) × (n+1), which requires O(n^2) space. Sorting creates a new list of pairs taking O(n) space, and scalars such as the running sums and loop counters use O(1) space. The DP table dominates, so the overall space complexity is O(n^2).
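As an aside (not part of the reference solution above), each DP row depends only on the previous row, so the table can be compressed to a single array by iterating j downward, the same trick used in 0/1 knapsack. A sketch of that variant:

```python
def minimum_time_1d(nums1, nums2, x):
    n = len(nums1)
    f = [0] * (n + 1)  # f[j]: max reduction achievable with j resets so far
    for v1, v2 in sorted(zip(nums1, nums2), key=lambda p: p[1]):
        # Iterate j downward so each element is counted at most once
        for j in range(n, 0, -1):
            f[j] = max(f[j], f[j - 1] + v1 + v2 * j)
    s1, s2 = sum(nums1), sum(nums2)
    for j in range(n + 1):
        if s1 + s2 * j - f[j] <= x:
            return j
    return -1

print(minimum_time_1d([1, 2, 3], [1, 2, 3], 4))  # 3
```

Time stays O(n^2), but the space drops to O(n).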
Common Pitfalls
1. Incorrect Sorting Direction
A critical pitfall is sorting the pairs by nums2 in descending order instead of ascending order. Descending order can seem intuitive, since we might want to "eliminate" the fastest-growing elements first.
Why it's wrong: When we reset an element at time t, we eliminate nums1[i] + nums2[i] * t from the sum. If we process elements with larger nums2[i] values first (at earlier times), we lose potential reduction because t is still small. Sorting in ascending order resets the elements with larger growth rates at later times, maximizing the product nums2[i] * t.
Example: if nums2 = [3, 1] and we reset both elements:
- Ascending order: reset index 1 at t=1 (saves 1×1=1), then index 0 at t=2 (saves 3×2=6). Total savings: 7
- Descending order: reset index 0 at t=1 (saves 3×1=3), then index 1 at t=2 (saves 1×2=2). Total savings: 5
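For small cases the claim can be checked exhaustively. This sketch (with nums1 zeroed to isolate the growth term, as in the example) tries every reset order and confirms that the ascending sort attains the best total savings:

```python
from itertools import permutations

def reduction(order, nums1, nums2):
    # Resetting index order[t-1] at time t eliminates nums1[i] + nums2[i] * t
    return sum(nums1[i] + nums2[i] * t for t, i in enumerate(order, start=1))

nums1, nums2 = [0, 0], [3, 1]
best = max(reduction(p, nums1, nums2) for p in permutations(range(2)))
asc = reduction(sorted(range(2), key=lambda i: nums2[i]), nums1, nums2)
print(best, asc)  # 7 7: ascending order matches the exhaustive maximum
```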
2. Misunderstanding the DP State Definition
Another common mistake is defining dp[i][j] as "the minimum sum achievable" rather than "the maximum reduction achievable". This leads to incorrect transitions and boundary conditions.
Solution: Always define DP in terms of maximum reduction. This makes the transition clearer: we're accumulating the total amount we can subtract from the ever-growing sum.
3. Incorrect Time Calculation in DP Transition
When calculating the reduction for resetting element i at operation j, using j-1 instead of j as the time multiplier is a frequent error.
Wrong: dp[i-1][j-1] + value1 + value2 * (j-1)
Correct: dp[i-1][j-1] + value1 + value2 * j
Why: Operations are 1-indexed here, so the j-th operation happens at time j; the element has grown for j seconds before being reset.
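A one-element case exposes this bug immediately, since the first reset only ever happens after a full second of growth. The sketch below adds a multiplier_offset knob (ours, purely for illustration) so the correct and buggy variants can be compared:

```python
def min_time(nums1, nums2, x, multiplier_offset=0):
    # multiplier_offset=0 is correct; -1 reproduces the (j-1) bug
    n = len(nums1)
    pairs = sorted(zip(nums1, nums2), key=lambda p: p[1])
    f = [[0] * (n + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        v1, v2 = pairs[i - 1]
        for j in range(1, n + 1):
            f[i][j] = max(f[i - 1][j],
                          f[i - 1][j - 1] + v1 + v2 * (j + multiplier_offset))
    s1, s2 = sum(nums1), sum(nums2)
    return next((j for j in range(n + 1)
                 if s1 + s2 * j - f[n][j] <= x), -1)

print(min_time([5], [4], 2))      # 1: resetting at t=1 removes 5 + 4*1 = 9
print(min_time([5], [4], 2, -1))  # -1: the buggy multiplier undercounts the removal
```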
4. Off-by-One Errors in Array Indexing
Mixing 0-based and 1-based indexing when accessing sorted_pairs can cause out-of-bounds errors or incorrect values.
Solution: Be consistent with indexing. In the code above, dp uses 1-based indexing for items (rows) while sorted_pairs uses 0-based indexing, hence sorted_pairs[i - 1] when processing the i-th item in the DP table.