1199. Minimum Time to Build Blocks 🔒
Problem Description
You have a list of blocks where blocks[i] = t means the i-th block requires t units of time to build. Each block must be built by exactly one worker.
Starting with just one worker, you have two options at any point:
- Split a worker: A worker can split into two workers, which takes split units of time. If multiple workers split simultaneously, they do so in parallel (the time cost is still just split, not multiplied).
- Build a block: A worker can build one block and then go home (that worker is no longer available).
Your goal is to find the minimum time needed to build all blocks.
Example scenario:
- If you have blocks [1, 2] and split = 1:
  - The single worker first splits (takes 1 time unit), creating 2 workers
  - Then both workers build their blocks in parallel
  - Total time = 1 (split) + max(1, 2) = 3
The challenge is determining the optimal strategy for when to split workers versus when to assign them to build blocks, minimizing the total time required to complete all blocks.
Intuition
The key insight is to think about this problem in reverse. Instead of thinking forward about splitting workers, we can think backward about merging blocks.
Consider what happens when we have multiple blocks to build:
- With one block: No splitting needed, just build it directly
- With two blocks: We must split once to get 2 workers, then they build in parallel. Total time = split + max(block1, block2)
- With more blocks: The pattern becomes complex if we think forward
The reverse perspective reveals something elegant: When we split a worker to build two blocks, the total time is split + max(block_i, block_j). This is equivalent to "merging" two blocks into one super-block that takes split + max(block_i, block_j) time to build.
Why does greedy selection work here? The blocks with longer build times should participate in as few merges as possible, because each merge adds a split cost. By always merging the two smallest blocks first:
- We minimize the impact of the split overhead on larger blocks
- Larger blocks get merged later (or not at all if they're the last one), reducing their participation in costly merge operations
Think of it like a tournament bracket in reverse - we want the "strongest" (longest) blocks to have the shortest path to the final, while the "weakest" (shortest) blocks can afford to go through multiple rounds of merging.
Using a min-heap allows us to efficiently extract the two smallest blocks at each step, merge them into a new block with time split + max(block_i, block_j) (which simplifies to split + block_j, since block_j is the larger one after popping from the heap), and continue until only one block remains.
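The merge loop described above can be sketched in a few lines (the function name min_build_time is ours, not part of the problem statement):

```python
import heapq

def min_build_time(blocks: list[int], split: int) -> int:
    # Reverse "merge" view: repeatedly combine the two smallest blocks
    # into one super-block that costs split + max(a, b) to build.
    heap = list(blocks)
    heapq.heapify(heap)
    while len(heap) > 1:
        a = heapq.heappop(heap)          # smaller of the pair (absorbed)
        b = heapq.heappop(heap)          # larger of the pair
        heapq.heappush(heap, b + split)  # split + max(a, b) == split + b
    return heap[0]

print(min_build_time([1, 2], 1))  # matches the example above: 3
```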
Solution Approach
The implementation uses a greedy algorithm with a min heap to efficiently merge blocks from smallest to largest.
Step-by-step implementation:
1. Initialize the min-heap: Convert the blocks list into a min-heap using heapify(blocks). This operation takes O(n) time and transforms the list in place into a heap structure where the smallest element is always at the root.
2. Merge blocks iteratively: While there's more than one block in the heap:
   - Extract the smallest block using heappop(blocks); this value is discarded, since it gets absorbed into the merge
   - Extract the second smallest block using heappop(blocks)
   - Create a new merged block with time second_smallest + split
   - Push this new block back into the heap using heappush(blocks, second_smallest + split)
3. Return the final block: After all merges, only one block remains in the heap; it represents the minimum total time needed.
Why the algorithm works:
- When we pop two blocks and keep only the larger one plus split, we're simulating the scenario where one worker splits and two workers build these blocks in parallel
- The time for parallel execution is determined by the slower (larger) block
- By always merging the smallest blocks first, we ensure larger blocks participate in fewer merges, minimizing the accumulation of split costs
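One way to gain confidence in this greedy argument is to check it against an exhaustive search on small inputs: a worker facing a set of blocks either builds the lone remaining block, or splits and partitions the set between the two resulting workers. The helper names greedy and brute below are ours, and the brute force is only feasible for small n:

```python
from functools import lru_cache
from itertools import combinations
import heapq
import random

def greedy(blocks, split):
    # The heap-based merge strategy from the article.
    h = list(blocks)
    heapq.heapify(h)
    while len(h) > 1:
        heapq.heappop(h)
        heapq.heappush(h, heapq.heappop(h) + split)
    return h[0]

def brute(blocks, split):
    # Exhaustive recursion: try every way to divide the blocks
    # between the two workers created by a split.
    @lru_cache(maxsize=None)
    def solve(rest):
        if len(rest) == 1:
            return rest[0]
        n = len(rest)
        best = float('inf')
        for k in range(1, n // 2 + 1):
            for idxs in combinations(range(n), k):
                chosen = set(idxs)
                left = tuple(rest[i] for i in range(n) if i in chosen)
                right = tuple(rest[i] for i in range(n) if i not in chosen)
                best = min(best, split + max(solve(left), solve(right)))
        return best
    return solve(tuple(sorted(blocks)))

random.seed(0)
for _ in range(30):
    bs = [random.randint(1, 9) for _ in range(random.randint(1, 6))]
    s = random.randint(0, 5)
    assert greedy(bs, s) == brute(bs, s)
```

Both functions agree on every random case, which is exactly what the exchange argument for the greedy strategy predicts.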
Time Complexity: O(n log n) where n is the number of blocks
- Initial heapify: O(n)
- We perform n-1 merge operations
- Each merge involves 2 pops and 1 push, each taking O(log n)
Space Complexity: O(1) as we modify the input list in-place (the heap is built on the original list)
Example Walkthrough
Let's walk through the solution with blocks = [3, 1, 4, 2] and split = 2.
Initial Setup:
- Convert the array into a min-heap: [1, 2, 4, 3] (heap representation)
- Visual: We start with 4 blocks that need to be built
Iteration 1:
- Pop smallest: 1 (discarded)
- Pop second smallest: 2
- New block time: 2 + 2 (split) = 4
- Push 4 back to heap
- Heap now: [3, 4, 4]
- Interpretation: One worker splits (2 time), creating two workers who build blocks of size 1 and 2 in parallel. Total time for this group: 2 + max(1,2) = 4
Iteration 2:
- Pop smallest: 3 (discarded)
- Pop second smallest: 4
- New block time: 4 + 2 (split) = 6
- Push 6 back to heap
- Heap now: [4, 6]
- Interpretation: A worker splits; one resulting worker builds block 3 while the other handles a size-4 block in parallel. Total time: 2 + max(3, 4) = 6
Iteration 3:
- Pop smallest: 4 (discarded)
- Pop second smallest: 6
- New block time: 6 + 2 (split) = 8
- Push 8 back to heap
- Heap now: [8]
- Interpretation: The initial worker splits, with one branch eventually handling the block of size 4, and the other handling the complex merged block of size 6
Result: The final block has time 8, which is the minimum time needed to build all blocks.
This represents the strategy where:
- First split creates 2 workers (time: 2)
- One worker builds block 4, the other splits again (time: 2)
- After the second split, one worker splits again while the other builds block 3 (time: 2)
- After third split, workers build blocks 1 and 2 in parallel (time: max(1,2) = 2)
- Total: Following the critical path = 2 + 2 + 2 + 2 = 8
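The walkthrough above can be reproduced directly; this snippet records the (sorted) heap contents after each merge:

```python
import heapq

# Reproduce the walkthrough: blocks = [3, 1, 4, 2], split = 2.
blocks, split = [3, 1, 4, 2], 2
heapq.heapify(blocks)

states = []
while len(blocks) > 1:
    heapq.heappop(blocks)                   # discard the smallest
    second = heapq.heappop(blocks)
    heapq.heappush(blocks, second + split)  # merged super-block
    states.append(sorted(blocks))           # snapshot after this merge

print(states)     # [[3, 4, 4], [4, 6], [8]]
print(blocks[0])  # 8
```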
Solution Implementation
```python
from typing import List
import heapq


class Solution:
    def minBuildTime(self, blocks: List[int], split: int) -> int:
        """
        Calculate minimum time to build all blocks with workers that can split.

        Strategy: Use a min-heap to greedily pair the smallest build times.
        When a worker splits, it creates a new worker, taking 'split' time.
        The optimal approach is to merge the two smallest blocks at each step.

        Args:
            blocks: List of integers representing time to build each block
            split: Time required for a worker to split into two workers

        Returns:
            Minimum time to build all blocks
        """
        # Convert blocks list into a min-heap (in-place operation)
        heapq.heapify(blocks)

        # Keep merging until only one time value remains
        while len(blocks) > 1:
            # Remove the smallest build time (we pair it with the second smallest)
            heapq.heappop(blocks)

            # Take the second smallest and add split time.
            # This represents the total time when these two blocks are
            # built in parallel after one worker splits.
            second_smallest = heapq.heappop(blocks)
            merged_time = second_smallest + split

            # Push the merged time back into the heap
            heapq.heappush(blocks, merged_time)

        # The last remaining value is the minimum total time
        return blocks[0]
```
```java
import java.util.PriorityQueue;

class Solution {
    public int minBuildTime(int[] blocks, int split) {
        // Min-heap of block build times, so we always process the smallest first
        PriorityQueue<Integer> minHeap = new PriorityQueue<>();

        // Add all block build times to the heap
        for (int blockTime : blocks) {
            minHeap.offer(blockTime);
        }

        // Keep merging blocks until only one remains.
        // Each merge simulates one worker splitting so that two workers
        // build the two popped blocks in parallel.
        while (minHeap.size() > 1) {
            // Remove the smallest build time (absorbed into the merge)
            minHeap.poll();

            // Keep the second smallest plus the split cost:
            // split + max(first, second) == split + second
            int secondSmallest = minHeap.poll();
            minHeap.offer(secondSmallest + split);
        }

        // The final remaining value is the minimum total build time
        return minHeap.poll();
    }
}
```
```cpp
#include <queue>
#include <vector>
using namespace std;

class Solution {
public:
    int minBuildTime(vector<int>& blocks, int split) {
        // Min-heap: always process the smallest building times first.
        // This greedy merge order minimizes the total time.
        priority_queue<int, vector<int>, greater<int>> minHeap;

        // Initialize the heap with all block building times
        for (int blockTime : blocks) {
            minHeap.push(blockTime);
        }

        // Keep merging until only one value remains.
        // Each merge represents one worker splitting so that two
        // workers build the two popped blocks in parallel.
        while (minHeap.size() > 1) {
            // Remove the smallest time (absorbed into the merge)
            minHeap.pop();

            // Get the second smallest time
            int secondSmallest = minHeap.top();
            minHeap.pop();

            // The merged time is split + max(first, second), which is
            // simply the second smallest plus the split cost
            minHeap.push(secondSmallest + split);
        }

        // The last remaining element is the minimum total time needed
        return minHeap.top();
    }
};
```
```typescript
// MinPriorityQueue implementation using a min-heap
let heap: number[] = [];

function enqueue(value: number): void {
    // Add element to the end of heap
    heap.push(value);
    // Bubble up to maintain heap property
    bubbleUp(heap.length - 1);
}

function dequeue(): number | undefined {
    // Return undefined if heap is empty
    if (heap.length === 0) return undefined;

    // Store the minimum value (root)
    const minValue = heap[0];

    // Move last element to root
    heap[0] = heap[heap.length - 1];
    heap.pop();

    // Bubble down to maintain heap property
    if (heap.length > 0) {
        bubbleDown(0);
    }

    return minValue;
}

function size(): number {
    return heap.length;
}

function bubbleUp(index: number): void {
    while (index > 0) {
        const parentIndex = Math.floor((index - 1) / 2);
        // If parent is smaller or equal, heap property is satisfied
        if (heap[parentIndex] <= heap[index]) break;

        // Swap with parent
        [heap[parentIndex], heap[index]] = [heap[index], heap[parentIndex]];
        index = parentIndex;
    }
}

function bubbleDown(index: number): void {
    while (true) {
        const leftChildIndex = 2 * index + 1;
        const rightChildIndex = 2 * index + 2;
        let smallestIndex = index;

        // Check if left child is smaller
        if (leftChildIndex < heap.length && heap[leftChildIndex] < heap[smallestIndex]) {
            smallestIndex = leftChildIndex;
        }

        // Check if right child is smaller
        if (rightChildIndex < heap.length && heap[rightChildIndex] < heap[smallestIndex]) {
            smallestIndex = rightChildIndex;
        }

        // If current node is the smallest, heap property is satisfied
        if (smallestIndex === index) break;

        // Swap with the smallest child
        [heap[smallestIndex], heap[index]] = [heap[index], heap[smallestIndex]];
        index = smallestIndex;
    }
}

function minBuildTime(blocks: number[], split: number): number {
    // Initialize heap for this function call
    heap = [];

    // Add all block build times to the priority queue
    for (const blockTime of blocks) {
        enqueue(blockTime);
    }

    // Merge blocks until only one value remains.
    // Strategy: always pair the two smallest build times.
    while (size() > 1) {
        // Remove the smallest build time (absorbed into the merge)
        dequeue();

        // Keep the second smallest plus the split cost; this is the
        // total time when a worker splits and both blocks are built in parallel
        const secondSmallest = dequeue()!;
        enqueue(secondSmallest + split);
    }

    // Return the final minimum build time
    return dequeue()!;
}
```
Time and Space Complexity
The time complexity is O(n × log n), where n is the number of blocks.
- The heapify(blocks) operation takes O(n) time to build a min-heap from the list.
- The while loop runs n - 1 iterations (until only one element remains in the heap).
- In each iteration:
  - heappop(blocks) is called twice, each taking O(log n) time
  - heappush(blocks, ...) is called once, taking O(log n) time
  - Total per iteration: O(log n)
- Overall time complexity: O(n) + O((n - 1) × log n) = O(n × log n)
The space complexity is O(n), where n is the number of blocks.
- The heap is created in place by modifying the input list blocks, so no additional space proportional to n is needed for a separate data structure.
- Since the input list itself takes O(n) space and we're counting the total space used, the space complexity is O(n); the auxiliary space beyond the input is O(1), as Python's heap operations work iteratively in place.
Common Pitfalls
Pitfall 1: Misunderstanding the Parallel Execution Model
The Problem: A common misconception is thinking that when workers split and build blocks in parallel, the total time should be split + block1 + block2. This leads to incorrect implementations that sum all block times.
Why It's Wrong: When multiple workers operate in parallel, they work simultaneously, not sequentially. The total time for parallel operations is determined by the slowest worker, not the sum of all work times.
Incorrect Implementation Example:
```python
# WRONG: this adds both block times instead of taking the max
merged_time = first_smallest + second_smallest + split
```
Correct Understanding: When two workers build blocks in parallel after a split, the time is split + max(block1, block2). Since we always pop the smallest first and discard it, keeping only the larger block time plus split achieves this.
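A small side-by-side sketch of the two formulas (helper names are ours) makes the difference concrete on the [1, 2], split = 1 example:

```python
import heapq

def correct_merge(blocks, split):
    h = list(blocks)
    heapq.heapify(h)
    while len(h) > 1:
        heapq.heappop(h)                            # smaller block, absorbed
        heapq.heappush(h, heapq.heappop(h) + split) # split + max(a, b)
    return h[0]

def wrong_merge(blocks, split):
    h = list(blocks)
    heapq.heapify(h)
    while len(h) > 1:
        a, b = heapq.heappop(h), heapq.heappop(h)
        heapq.heappush(h, a + b + split)            # WRONG: sums both blocks
    return h[0]

print(correct_merge([1, 2], 1))  # 3: split + max(1, 2)
print(wrong_merge([1, 2], 1))    # 4: split + 1 + 2, overcounts parallel work
```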
Pitfall 2: Not Handling Edge Cases Properly
The Problem: Failing to handle special cases like empty block lists or single blocks can cause runtime errors or incorrect results.
Solution: Add proper validation:
```python
def minBuildTime(self, blocks: List[int], split: int) -> int:
    # Handle edge cases
    if not blocks:
        return 0
    if len(blocks) == 1:
        return blocks[0]

    # Continue with main algorithm...
    heapq.heapify(blocks)
    # ... rest of the code
```
Pitfall 3: Modifying the Input List Without Consideration
The Problem: The current implementation modifies the input blocks list in place. If the caller needs the original list preserved, this causes unexpected side effects.
Solution: Create a copy if preserving the original is important:
```python
def minBuildTime(self, blocks: List[int], split: int) -> int:
    # Create a copy to avoid modifying the original
    blocks_heap = blocks.copy()  # or blocks[:]
    heapq.heapify(blocks_heap)

    while len(blocks_heap) > 1:
        heapq.heappop(blocks_heap)
        second_smallest = heapq.heappop(blocks_heap)
        heapq.heappush(blocks_heap, second_smallest + split)

    return blocks_heap[0]
```
Pitfall 4: Incorrect Greedy Strategy
The Problem: Some might think pairing the smallest with the largest block is optimal (to "balance" the work), leading to implementations using both min and max values.
Why It's Wrong: The optimal strategy is to always merge the two smallest values. This ensures that larger block times (which take longer) participate in fewer merge operations, minimizing the accumulation of split costs.
Visual Example:
- Blocks: [1, 1, 3, 5], split = 1
- Correct (always merge the two smallest): total time = 6
- Wrong (pair smallest with largest): total time = 8
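A quick comparison of the two strategies on this example (helper names are ours; merge_min_max deliberately implements the flawed pairing):

```python
import heapq

def merge_smallest(blocks, split):
    # Correct greedy: always merge the two smallest values.
    h = list(blocks)
    heapq.heapify(h)
    while len(h) > 1:
        heapq.heappop(h)
        heapq.heappush(h, heapq.heappop(h) + split)
    return h[0]

def merge_min_max(blocks, split):
    # Flawed strategy: pair the smallest block with the largest.
    vals = sorted(blocks)
    while len(vals) > 1:
        vals.pop(0)                  # smallest, absorbed by the merge
        largest = vals.pop()         # largest
        vals.append(largest + split) # split + max(smallest, largest)
        vals.sort()
    return vals[0]

print(merge_smallest([1, 1, 3, 5], 1))  # 6
print(merge_min_max([1, 1, 3, 5], 1))   # 8, suboptimal
```

Pairing min with max forces the largest block times through extra merges, so each one accumulates additional split costs.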