What is Dynamic Programming?
Prerequisite: DFS, Backtracking, Memoization, Pruning
Dynamic programming is an algorithmic optimization technique that breaks down a complicated problem into smaller overlapping subproblems in a recursive manner and uses solutions to the subproblems to construct a solution to the original problem.
Name Origin
"Dynamic programmingâ, what an awfully scary name. What does it even mean?? Whatâs so âdynamicâ about programming?
The name was invented by Richard Bellman in the 1950s when computers were still decades away. So by âprogrammingâ he did NOT mean programming as coding at a computer. Bellman was a mathematician, and what he really meant by programming was âplanningâ and âdecision makingâ.
Trivia time: according to Wikipedia, Bellman was working at RAND corporation, and it was hard to get mathematical research funding at the time. To disguise the fact that he was conducting mathematical research, he phrased his research in a less mathematical term, âdynamic programmingâ. âBellman chose the word dynamic to capture the timevarying aspect of the problems and because it sounded impressive. The word programming referred to the use of the method to find an optimal program, in the sense of a military schedule for training or logistics.â
So he really meant âmultistage planningâ, a simple concept of solving bigger problems using smaller problems while saving results to avoid repeated calculations. That sounds awfully familiar. Isnât that memoization? Yes, it is. Keep on reading.
Characteristics of Dynamic Programming
A problem is a dynamic programming problem if it satisfies two conditions:

1. The problem can be divided into subproblems, and its optimal solution can be constructed from optimal solutions of the subproblems. In academic terms, this is called optimal substructure.
2. The subproblems from 1) overlap.
1. Optimal substructure
Consider the problem of finding the shortest driving path from San Francisco (SF) to San Diego (SD). Since the highway goes through Los Angeles (LA), the problem can be divided into two subproblems: driving from SF to LA and driving from LA to SD.
In addition, shortest_path(SF, SD) = shortest_path(SF, LA) + shortest_path(LA, SD). The optimal solution to the problem is a combination of optimal solutions to the subproblems.
Now let's look at an example where the problem does NOT have an optimal substructure. Consider buying the cheapest airline ticket from New York (NYC) to San Francisco (SF). Let's assume there is no direct flight, and we must transit through Chicago (CHI). Even though our trip is divided into two parts, NYC to CHI and CHI to SF, usually the cheapest ticket from NYC to SF != the cheapest ticket from NYC to CHI + the cheapest ticket from CHI to SF, because airlines generally do not price multi-leg trips as the sum of the individual flights, in order to maximize profit.
2. Overlapping subproblems
As we have seen in the memoization section, Fibonacci number calculation has a good amount of repeated computation (overlapping subproblems) whose results can be cached and reused.
If the two conditions stated above are satisfied, then dynamic programming can solve the problem.
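To see the overlap concretely, here is a small sketch (the counter bookkeeping and function name are ours, not part of the standard algorithm) that counts how many times the naive recursion solves each Fibonacci subproblem:

```python
def fib_naive(n, counter):
    # record how many times the subproblem fib(n) is solved
    counter[n] = counter.get(n, 0) + 1
    if n <= 1:
        return n
    return fib_naive(n - 1, counter) + fib_naive(n - 2, counter)

calls = {}
fib_naive(6, calls)
# fib(2) alone is solved 5 times for n = 6
```

With memoization, each result is cached, so every subproblem is solved exactly once.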
DP == DFS + memoization + pruning
You might have seen posts on coding forums titled "simple DFS solution" and "0.5 sec DP solution" for the same problem. That's because the two methods are equivalent. There are two different approaches to DP: top-down and bottom-up.
It's important to mention that pruning is an integral part of the process to cut down run time. We have seen statespace tree branch pruning in the backtracking section. We will see how it's applied in the following Knapsack section.
How to Solve Dynamic Programming Problems?
Top-down: this is basically DFS + memoization, as we have seen in the memoization section. We split large problems into smaller subproblems and solve them recursively.
Bottom-up: we solve the smallest subproblems first and then use their solutions to find the solutions to bigger subproblems. This is usually done in tabular form.
Let's look at a concrete example.
Fibonacci
Let's revisit the Fibonacci number problem from the memoization section.
Top-down with Memoization
Recall that we have a system for backtracking and memoization:

1. Draw the tree: see the tree above.
2. Identify states:
- What state do we need to know if we have reached a solution? We need to know the value of n we are computing.
- What state do we need to decide which child nodes to visit next? No extra state is required. We always visit n - 1 and n - 2.
3. DFS + pruning (if needed) + memoization:
def fib(n, memo):
    if n in memo:  # check for the solution in the memo; if found, return it right away
        return memo[n]

    if n == 0 or n == 1:
        return n

    res = fib(n - 1, memo) + fib(n - 2, memo)

    memo[n] = res  # save the solution in the memo before returning
    return res
Bottom-up with Tabulation
For bottom-up dynamic programming, we start with the subproblems and work our way up to the main problem. This is usually done by filling up a table.
For the Fibonacci problem, we want to fill a one-dimensional table dp, where each entry at index i represents the value of the Fibonacci number at index i. The last element of the array is the result we want to return.
The order of filling matters: we cannot calculate dp[i] without dp[i - 1] and dp[i - 2].
def fib(n):
    dp = [0, 1]
    for i in range(2, n + 1):
        dp.append(dp[i - 1] + dp[i - 2])

    return dp[n]
Subproblems and Recurrence Relation
The formula dp[i] = dp[i - 1] + dp[i - 2] is called the recurrence relation. It is the key to solving any dynamic programming problem.
For the Fibonacci number problem, the relation is already given: dp[i] = dp[i - 1] + dp[i - 2]. We will discuss the patterns of recurrence relations in the next section.
Should I do top-down or bottom-up?
Top-down pros:
- The order of computing subproblems doesn't matter. For bottom-up, we have to fill the table in an order that solves all the subproblems first. For example, to fill dp[8], we have to have filled dp[6] and dp[7] first. For top-down, we can let recursion and memoization take care of the subproblems and, therefore, not worry about the order.
- Easier to reason about for partition-type problems (how many ways are there to..., splitting a string into...). Just do DFS and add memoization.
Bottom-up pros:
- Easier to analyze the time complexity (it's just the time to fill the table).
- No recursion, and thus no system stack overflow, although that's not a huge concern for normal coding interviews.
From our experience, top-down is often a better place to start unless it's a grid problem where the states are in plain sight.
Greedy Algorithm vs. Dynamic Programming
What is a greedy algorithm? As the name suggests, it is an algorithm that always picks the locally best choice at each step. The main difference between a greedy algorithm and dynamic programming is that in dynamic programming, the answer for a state is not necessarily built from the locally best choice; other restrictions in the problem statement can mean we don't always want to pick the immediately best answer. A good way to distinguish between the two is to work out a dynamic programming solution and see if you can optimize it by always picking the best answer for the dynamic programming substates.
For example, suppose that given a series of intervals, you are asked to pick the minimum number of intervals required to cover a given length.
Let dp[i] denote the minimum number of intervals needed to cover a length of i.
Then dp[i] = min(dp[i], dp[i - length[j]] + 1), where length is the array containing the interval lengths. We then realize that for our dp state, we should greedily pick the longest interval each time if permitted, which leads us to a greedy solution.
This is a rather simple example, but it may be helpful for more obscure greedy solutions disguised as dynamic programming problems.
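Before spotting the greedy shortcut, the dp itself might look like the following coin-change-style sketch. We assume interval lengths can be reused and the target length must be covered exactly (the function name is ours):

```python
def min_intervals(target, lengths):
    # dp[i]: minimum number of intervals needed to cover exactly length i
    INF = float("inf")
    dp = [0] + [INF] * target
    for i in range(1, target + 1):
        for length in lengths:
            if length <= i and dp[i - length] + 1 < dp[i]:
                dp[i] = dp[i - length] + 1
    return dp[target] if dp[target] < INF else -1  # -1 if impossible
```

For example, min_intervals(6, [1, 3, 4]) returns 2 (two intervals of length 3).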
Divide and Conquer vs. Dynamic Programming
Both Divide and Conquer and dynamic programming break the original problem down into multiple subproblems. The difference is that in dynamic programming, the subproblems overlap, whereas in divide and conquer, they don't.
Consider Merge Sort: the subarrays are sorted and merged, but they do not overlap. Now consider Fibonacci: the green and red nodes in the "overlapping subproblems" diagram overlap.
When to use dynamic programming
Mathematically, dynamic programming is an optimization method on one or more sequences (e.g., arrays, matrices). So questions asking about the optimal way to do something on one or more sequences are often a good candidate for dynamic programming. Signs of dynamic programming:
- The problem asks for the maximum/longest, minimal/shortest value/cost/profit you can get from doing operations on a sequence.
- You've tried greedy, but it sometimes gives the wrong solution. This often means you have to consider subproblems for an optimal solution.
- The problem asks for how many ways there are to do something. This can often be solved by DFS + memoization, i.e., top-down dynamic programming.
- Partition a string/array into subsequences so that a specific condition is met. This is often well-suited for top-down dynamic programming.
- The problem is about the optimal way to play a game.
How to Develop Intuition for Dynamic Programming Problems
As you may have noticed, the concept of DP is quite simple: find the overlapping subproblems, solve them, and use the subproblem solutions to find the solution to the original problem. The hard part is knowing how to find the recurrence relation. The best way to develop intuition is to get familiar with common patterns. Some classic examples include longest common subsequence (LCS), 0-1 knapsack, and longest increasing subsequence (LIS).
Dynamic Programming Patterns
Here's the breakdown. We also highlighted the keywords that indicate it's likely a dynamic programming problem.
Weight-only Knapsack
This is the most common type of DP problem and an excellent place to get a feel for dynamic programming and how it differs from brute-force backtracking. The state in these problems is a two-variable pair instead of the single-variable state we have seen so far in backtracking.
- Knapsack - given a number of items of different weights, is it possible to use the items to make up weight X?
- Partition an array into two equal-sum subsets - is it possible to divide an array into two subsets with equal sums?
Note that we categorize this section as "weight-only" knapsack to differentiate it from the classic textbook 0-1 knapsack.
- 0-1 Knapsack - same as weight-only knapsack except items have values, and the goal is to find the maximum total value we can put in our knapsack without exceeding the allowed weight.
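A minimal sketch of the weight-only variant, assuming each item can be used at most once and we only need a yes/no answer for weight X (the function name is ours):

```python
def can_make_weight(weights, target):
    # dp[w]: True if some subset of the items sums to exactly w
    dp = [False] * (target + 1)
    dp[0] = True  # the empty subset makes weight 0
    for weight in weights:
        # iterate backwards so each item is used at most once
        for w in range(target, weight - 1, -1):
            if dp[w - weight]:
                dp[w] = True
    return dp[target]
```

Iterating w from high to low is what makes this 0-1 (each item once); iterating forwards would allow reusing items, which is the unbounded knapsack instead.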
Grid
The state in this type of DP is often the grid itself. dp[i][j] means the max/min/best value for the path ending at cell (i, j).
- Robot unique paths - number of ways for a robot to move from the top left to the bottom right
- Min path sum - find the path in a grid with minimum cost
- Maximal square - find the maximal square of 1s in a grid of 0s and 1s
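As an example of a grid dp, the robot unique paths problem can be sketched as follows, assuming the robot moves only right or down on an m x n grid (the function name is ours):

```python
def unique_paths(m, n):
    # dp[i][j]: number of ways to reach cell (i, j)
    # the first row and column are all 1s: only one way to reach them
    dp = [[1] * n for _ in range(m)]
    for i in range(1, m):
        for j in range(1, n):
            # arrive either from above or from the left
            dp[i][j] = dp[i - 1][j] + dp[i][j - 1]
    return dp[m - 1][n - 1]
```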
Game theory
This type of problem asks whether a player can win a decision game. The key to solving game theory problems is to identify the winning states: a winning state is one from which some move leaves the opponent in a losing state.
- Divisor game
These problems are often closely related to the following Interval DP problems.
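As a sketch of the winning/losing state idea, take the divisor game: starting from n, a player picks a divisor x of n with 0 < x < n and hands n - x to the opponent; a player with no legal move loses. This is one possible top-down formulation (names are ours):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def first_player_wins(n):
    # a state is winning if some move hands the opponent a losing state
    return any(
        n % x == 0 and not first_player_wins(n - x)
        for x in range(1, n)
    )
```

Note that first_player_wins(1) is False: with no legal move, the current player loses.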
Interval
The key to solving this type of problem involves finding a subproblem defined on an interval, dp[i][j].
- Coin game - two players play a game by removing coins from either end of a row of coins. Find the maximum score.
- Festival game, bursting balloons - similar to the coin game problem but with a different way of evaluating scores.
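One common formulation of the coin game lets dp[i][j] be the best score difference (current player minus opponent) achievable on the subarray coins[i..j]; here's a sketch under that formulation (names are ours):

```python
def max_score_difference(coins):
    n = len(coins)
    # dp[i][j]: best score difference the current player can force on coins[i..j]
    dp = [[0] * n for _ in range(n)]
    for i in range(n):
        dp[i][i] = coins[i]  # only one coin to take
    for length in range(2, n + 1):
        for i in range(n - length + 1):
            j = i + length - 1
            # take the left or right coin; the opponent then plays optimally on the rest
            dp[i][j] = max(coins[i] - dp[i + 1][j], coins[j] - dp[i][j - 1])
    return dp[0][n - 1]
```

A positive result means the first player can win; the actual scores can be recovered from the difference and the total sum.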
Two Sequences
This type of problem has two sequences in its problem statement. dp[i][j] represents the max/min/best value for the first sequence ending at index i and the second sequence ending at index j.
- Longest common subsequence - find the longest subsequence common to two sequences
- Edit distance - find the minimum number of edits to turn one string into another
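The classic LCS recurrence can be sketched like this, with dp[i][j] being the LCS length of the first i characters of a and the first j characters of b:

```python
def longest_common_subsequence(a, b):
    m, n = len(a), len(b)
    # dp[i][j]: LCS length of a[:i] and b[:j]; row/column 0 means an empty prefix
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                # matching characters extend the LCS of both shorter prefixes
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                # otherwise drop the last character of one sequence
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]
```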
Dynamic Number of Subproblems, Longest Increasing Subsequence
This type of DP problem is unique in that the current state depends on a dynamic number of previous states, e.g., dp[i] = max(dp[j]...) for j from 0 to i.
- Longest increasing subsequence - find the longest increasing subsequence of an array of numbers
- Buy/sell stock with at most K transactions - maximize profit by buying and selling stocks using at most K transactions
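A sketch of the O(n^2) LIS dp, where dp[i] depends on every earlier index j with nums[j] < nums[i] (the function name is ours):

```python
def longest_increasing_subsequence(nums):
    if not nums:
        return 0
    # dp[i]: length of the longest increasing subsequence ending at index i
    dp = [1] * len(nums)
    for i in range(len(nums)):
        for j in range(i):  # a dynamic number of previous states
            if nums[j] < nums[i]:
                dp[i] = max(dp[i], dp[j] + 1)
    return max(dp)
```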
Bitmask
These DP problems use bitmasks to reduce factorial complexity (n!) to exponential complexity (2^n) by encoding the dp state in bitmasks.
- Longest path in a DAG - find the longest path in a directed acyclic graph
- Minimum cost to visit every node in a graph - find the minimum cost to traverse every node in a directed weighted graph
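As a sketch of the bitmask idea, here's a top-down dp for the minimum-cost path that starts at node 0 and visits every node exactly once, a TSP-style formulation (the actual minimum-cost-to-visit-every-node problem may allow revisits, which needs extra handling; names are ours):

```python
from functools import lru_cache

def min_cost_visit_all(dist):
    # dist[u][v]: edge weight between nodes u and v; start at node 0
    n = len(dist)
    FULL = (1 << n) - 1  # bitmask with every node visited

    @lru_cache(maxsize=None)
    def dp(mask, last):
        # mask: bitmask of visited nodes; last: current node
        if mask == FULL:
            return 0
        best = float("inf")
        for nxt in range(n):
            if not mask & (1 << nxt):  # nxt not visited yet
                best = min(best, dist[last][nxt] + dp(mask | (1 << nxt), nxt))
        return best

    return dp(1, 0)  # only node 0 visited, standing at node 0
```

The state (mask, last) has 2^n * n values, far fewer than the n! orderings a brute force would enumerate.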