# Knapsack, Weight-Only

Prereqs: Backtracking, Memoization

Given a list of `n` items and their weights, find all sums that can be formed using their weights.

### Input

• `weights`: A list of items and their weights.

### Output

A list of possible sums using the weights.

### Examples

#### Example 1:

Input:

`weights = [1, 3, 3, 5]`

Output: `[0, 1, 3, 4, 5, 6, 7, 8, 9, 11, 12]`

Explanation:

We can form all sums from 0 to 12 except 2 and 10. Here is a short explanation for the sums:

• 0: use none of the weights
• 1: use item with weight 1
• 3: use item with weight 3
• 4: use weights 1 + 3 = 4
• 5: use item with weight 5
• 6: use weights 3 + 3 = 6
• 7: use weights 1 + 3 + 3 = 7
• 8: use weights 3 + 5 = 8
• 9: use weights 1 + 3 + 5 = 9
• 11: use weights 3 + 3 + 5 = 11
• 12: use all weights

### Constraints

• `1 <= len(weights) <= 100`
• `1 <= weights[i] <= 100`

## Solution

### Brute Force

A brute force method enumerates all possibilities. We start with a total sum of 0 and process the items one by one, either including each item in our sum or skipping it. Once no items are left to process, we record the final sum. We store these sums in a set, since the same sum can be formed by different subsets.

By going through every possibility, we're generating all possible subsets, so we guarantee that we are also generating all possible sums.

Since there are `n` items, two possibilities each, and it takes `O(1)` to compute each possibility, the final runtime is `O(2^n)`.

The following is the state-space tree for this idea using input `[1, 3, 3, 5]`. Each level `i` of the tree represents a binary decision to include or exclude the `ith` number. For example, there are two branches at level `i = 1`: the left branch means not picking the `ith` item `3`, and the right branch means picking it. Here is the code for the idea above:

```python
def generate_sums(weights, sums, sum, n):
  if n == 0:
    sums.add(sum)
    return
  generate_sums(weights, sums, sum, n - 1)
  generate_sums(weights, sums, sum + weights[n - 1], n - 1)

def knapsack_weight_only(weights):
  sums = set()
  n = len(weights)
  generate_sums(weights, sums, 0, n)
  return list(sums)
```
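As a sanity check, the recursive enumeration can be cross-checked against a direct subset enumeration with `itertools.combinations`. This is a sketch, not part of the original solution, and the helper name `knapsack_weight_only_brute` is hypothetical:

```python
from itertools import combinations

def knapsack_weight_only_brute(weights):
    # enumerate every subset of the items and record its weight sum
    sums = set()
    n = len(weights)
    for k in range(n + 1):
        for subset in combinations(weights, k):
            sums.add(sum(subset))
    return list(sums)

print(sorted(knapsack_weight_only_brute([1, 3, 3, 5])))
# prints [0, 1, 3, 4, 5, 6, 7, 8, 9, 11, 12]
```

This matches the expected output of Example 1.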

### Top-down Dynamic Programming

First, the "top-down" solution is essentially the brute force solution with memoization: we store results that have already been computed and return them when needed again. But in precisely what way should we store/represent the data? Going back to the idea of dynamic programming, we should consider what information matters so far and whether any of it has been recomputed.

### Memoization, identifying the state

To memoize, we need to find the duplicate subtrees in the state-space tree. Notice that the duplicate subtrees are of the same level for this problem. This isn't a coincidence.

Unlike Word Break and Decode Ways in the backtracking section, the items in the knapsack problem can only be used once.

Consider `Node A` and `Node B` in the tree: `Node A`'s subtree has leaf values of `3` and `8`, while `Node B`'s subtree has leaf values of `3, 8, 6, 11`. They are clearly not the same subtree. This is because a node's value is the weight sum formed by considering items from `0` to `i`, so nodes at different depths have different sets of remaining items.

Therefore, the state we need to memoize consists of the level/depth of the node and the node value itself. We will use `(i, sum)` to denote this.

Thus, we will store a 2D boolean array `memo` where `memo[i][sum] = true` if the `(i, sum)` pair has already been computed and `false` otherwise. The size of the array is `(n + 1) * (total_sum + 1)` where `n` is the number of items and `total_sum` is the weight sum of all items. We need a slot for each possible weight we can form, and all the possible weights are in the range `0` to `total_sum`.

Here is the implementation of the idea:

```python
def generate_sums(weights, sums, sum, n, memo):
  if memo[n][sum]:
    return
  if n == 0:
    sums.add(sum)
    return
  memo[n][sum] = True  # mark this (n, sum) state as visited
  generate_sums(weights, sums, sum, n - 1, memo)
  generate_sums(weights, sums, sum + weights[n - 1], n - 1, memo)

def knapsack_weight_only(weights):
  sums = set()
  n = len(weights)
  # find total sum of weights to size the memo table
  total_sum = sum(weights)
  memo = [[False for _ in range(total_sum + 1)] for _ in range(n + 1)]
  generate_sums(weights, sums, 0, n, memo)
  return list(sums)
```

Since there are `n * totalSum` states, each state depends on `O(1)` subproblems, and each state takes `O(1)` to compute, the final runtime is `O(n * totalSum)`.
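Equivalently, Python's `functools.lru_cache` can manage the `(i, sum)` state cache automatically, playing the role of the `memo` table. This is a sketch of the same top-down idea, not the lesson's exact code, and the name `knapsack_weight_only_memo` is hypothetical:

```python
from functools import lru_cache

def knapsack_weight_only_memo(weights):
    sums = set()

    @lru_cache(maxsize=None)
    def generate_sums(n, current_sum):
        # the cache key is the state (n, current_sum)
        if n == 0:
            sums.add(current_sum)
            return
        generate_sums(n - 1, current_sum)
        generate_sums(n - 1, current_sum + weights[n - 1])

    generate_sums(len(weights), 0)
    return sorted(sums)

print(knapsack_weight_only_memo([1, 3, 3, 5]))
# prints [0, 1, 3, 4, 5, 6, 7, 8, 9, 11, 12]
```

Repeated calls with the same `(n, current_sum)` hit the cache and return immediately, which is exactly the pruning the `memo` array provides.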

### Bottom-up Dynamic Programming

Now let's talk about the "bottom-up" solution. Recall that the idea of any bottom-up solution is to work from the smallest cases, "combine" them, and continue until we reach our desired solution. Thus, by looping through each item, we determine which sums we can construct based on whether there exists a smaller sum that we can build on top of. For example, suppose we have already built all possible sums using `[1, 3, 3]`, and we want to know which sums we can build using all of `[1, 3, 3, 5]`. The following is an illustration of this idea:

And here's the code for the iterative/bottom-up solution:

```python
def knapsack_weight_only(weights):
  n = len(weights)
  total_sum = sum(weights)
  dp = [[False for _ in range(total_sum + 1)] for _ in range(n + 1)]
  dp[0][0] = True  # with no items, only the sum 0 is attainable
  for i in range(1, n + 1):
    for w in range(0, total_sum + 1):
      # skip item i: w was already attainable without it
      dp[i][w] = dp[i][w] or dp[i - 1][w]
      # take item i: make sure the item's weight does not exceed the target weight w
      if w - weights[i - 1] >= 0:
        dp[i][w] = dp[i][w] or dp[i - 1][w - weights[i - 1]]
  ans = []
  # check the last row for all possible answers
  for w in range(0, total_sum + 1):
    if dp[n][w]:
      ans.append(w)
  return ans
```
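The same row-by-row recurrence can be expressed even more compactly with a set of attainable sums, extended one item at a time. This is a sketch of the same bottom-up idea, not the table-based code from the lesson, and the name `attainable_sums` is hypothetical:

```python
def attainable_sums(weights):
    sums = {0}  # with no items, only the empty sum 0 is attainable
    for weight in weights:
        # every previously attainable sum stays attainable (skip the item),
        # and each previous sum plus `weight` becomes attainable (take the item)
        sums |= {s + weight for s in sums}
    return sorted(sums)

print(attainable_sums([1, 3, 3, 5]))
# prints [0, 1, 3, 4, 5, 6, 7, 8, 9, 11, 12]
```

Each loop iteration corresponds to filling one row of the `dp` table; the set only stores the `True` entries.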

The final runtime of this program is `O(n * totalSum)` because there are `O(n * totalSum)` states, each state depends on `O(1)` subproblems, and each state takes `O(1)` to compute.

### Space Optimization (optional)

Finally, there is a slight memory optimization that we can perform. Observe that `dp[i][w]` depends on, at most, the previous row (`dp[i][w] = dp[i][w] or dp[i - 1][w - weights[i - 1]]`). Thus, instead of storing the entire 2D array, we can store just two 1D arrays, one to keep track of the previous row and one for the current row. This improves our memory usage from `O(n * totalSum)` to `O(totalSum)`.

Here's the implementation:

```python
from typing import List

def knapsack_weight_only(weights: List[int]) -> List[int]:
  n = len(weights)
  total_sum = sum(weights)  # get maximum sum possible

  # initialize two 1D arrays (a 2 * (total_sum + 1) array)
  dp = [[False for _ in range(total_sum + 1)] for _ in range(2)]
  dp[0][0] = True
  for i in range(1, n + 1):
    for w in range(0, total_sum + 1):
      # current row is dp[1], previous row is dp[0]
      dp[1][w] = dp[1][w] or dp[0][w]
      if w - weights[i - 1] >= 0:
        dp[1][w] = dp[1][w] or dp[0][w - weights[i - 1]]
    for w in range(0, total_sum + 1):  # update previous row to current row
      dp[0][w] = dp[1][w]

  # dp[0][w] = True if `w` is an attainable weight, so add it to our answer
  ans = []
  for w in range(0, total_sum + 1):
    if dp[0][w]:
      ans.append(w)
  return ans

if __name__ == '__main__':
  weights = [int(x) for x in input().split()]
  res = knapsack_weight_only(weights)
  res.sort()
  print(' '.join(str(x) for x in res))
```
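A common further refinement, within the same `O(totalSum)` space budget, collapses the two rows into a single 1D array by iterating `w` from high to low, so each update still reads values from the previous item's row. This variant is a sketch and not part of the lesson's code; the name `knapsack_weight_only_1d` is hypothetical:

```python
def knapsack_weight_only_1d(weights):
    total_sum = sum(weights)
    dp = [False] * (total_sum + 1)
    dp[0] = True  # only the empty sum is attainable initially
    for weight in weights:
        # iterate downward so dp[w - weight] still holds the previous
        # item's value, ensuring each item is used at most once
        for w in range(total_sum, weight - 1, -1):
            dp[w] = dp[w] or dp[w - weight]
    return [w for w in range(total_sum + 1) if dp[w]]

print(knapsack_weight_only_1d([1, 3, 3, 5]))
# prints [0, 1, 3, 4, 5, 6, 7, 8, 9, 11, 12]
```

Iterating `w` upward instead would let the same item be counted multiple times, which solves the unbounded-knapsack variant rather than this problem.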

The space optimization is a nice trick, but it's also easy to get the row swapping wrong. It's best to mention it to the interviewer only after you have perfected your regular solution and confirmed it with them. Premature optimization is the root of all evil in software engineering.

## Steps to work through a DP problem

We can see from this example that the top-down and bottom-up approaches are essentially equivalent.

Remember: DP = DFS + pruning + memoization.

Steps to solve a DP problem:

• brute force
• draw the state-space tree
• dfs on the state-space tree
• prune if possible
• memo
• find the duplicate subtrees
• bottom-up (if you want to)
• optional space optimization