Dynamic Programming Pattern: How to Spot Overlapping Subproblems Early
A practical guide to recognizing dynamic programming problems, choosing states, and moving from brute force recursion to memoization and tabulation.
Dynamic programming has a reputation for being hard, but most of that difficulty comes from trying to memorize solutions instead of understanding the shape of the problem.
At its core, dynamic programming is what you use when:
- the problem can be broken into smaller subproblems
- those subproblems overlap
- each subproblem has an optimal answer you can reuse
If you keep recalculating the same recursive work, DP is usually nearby.
The Fastest Way to Recognize DP
Listen for questions like:
- "What is the minimum cost...?"
- "How many ways can we...?"
- "What is the longest...?"
- "Can we reach the end?"
Those are often signals that the answer at one position depends on answers from earlier positions or smaller states.
Start With Brute Force
Consider climbing stairs: you can take 1 or 2 steps at a time. How many ways are there to reach step n?
A recursive solution is easy to write:
```python
def climb_stairs(n: int) -> int:
    if n <= 2:
        return n
    return climb_stairs(n - 1) + climb_stairs(n - 2)
```

The problem is that this recalculates the same values again and again.
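To see the overlap concretely, you can instrument the naive recursion with a counter. The counter is purely illustrative, not part of the solution:

```python
from collections import Counter

calls = Counter()  # illustrative instrumentation: how often each subproblem runs

def climb_stairs(n: int) -> int:
    calls[n] += 1  # record every time this state is solved
    if n <= 2:
        return n
    return climb_stairs(n - 1) + climb_stairs(n - 2)

climb_stairs(10)
# Small states like n = 2 get solved dozens of times instead of once.
```

That repeated work is exactly what memoization eliminates.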
Add Memoization
```python
def climb_stairs(n: int) -> int:
    memo: dict[int, int] = {}

    def dp(i: int) -> int:
        if i <= 2:
            return i
        if i in memo:
            return memo[i]
        memo[i] = dp(i - 1) + dp(i - 2)
        return memo[i]

    return dp(n)
```

Now each state is solved once.
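In Python you can also let the standard library manage the memo dictionary for you. A minimal sketch using functools.lru_cache:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def climb_stairs(n: int) -> int:
    # Each distinct n is computed once; repeat calls are served from the cache.
    if n <= 2:
        return n
    return climb_stairs(n - 1) + climb_stairs(n - 2)

print(climb_stairs(40))  # fast, despite the naive-looking recursion
```

The decorator keeps the code identical to the brute-force version, which makes it a quick way to test whether memoization alone fixes the runtime.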
Move to Tabulation
Bottom-up DP often makes dependencies even clearer.
```python
def climb_stairs(n: int) -> int:
    if n <= 2:
        return n
    dp = [0] * (n + 1)
    dp[1], dp[2] = 1, 2
    for i in range(3, n + 1):
        dp[i] = dp[i - 1] + dp[i - 2]
    return dp[n]
```

Sometimes you can compress the space even further if only a few previous states are needed.
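Since each dp[i] here depends only on the previous two entries, the whole table can be replaced by two variables. A sketch of that space optimization:

```python
def climb_stairs(n: int) -> int:
    if n <= 2:
        return n
    prev2, prev1 = 1, 2  # ways to reach steps 1 and 2
    for _ in range(3, n + 1):
        # Slide the window: the new value depends only on the last two.
        prev2, prev1 = prev1, prev2 + prev1
    return prev1
```

Same transitions, same answers, but O(1) space instead of O(n).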
The Three Questions That Matter
When solving DP, always define:
- State: what does dp[i] or dp[i][j] mean?
- Transition: how do you compute it from smaller states?
- Base case: what are the smallest answers you already know?
If those are fuzzy, the implementation will be fuzzy too.
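As a sketch of how answering those three questions drives a solution, consider Coin Change (minimum number of coins to make a target amount, from the practice list):

```python
def coin_change(coins: list[int], amount: int) -> int:
    # State: dp[a] = minimum coins needed to make amount a (INF if impossible).
    # Base case: dp[0] = 0 -- zero coins make amount zero.
    INF = float("inf")
    dp = [0] + [INF] * amount
    for a in range(1, amount + 1):
        # Transition: try each coin as the last coin used.
        dp[a] = min((dp[a - c] + 1 for c in coins if c <= a), default=INF)
    return dp[amount] if dp[amount] != INF else -1
```

Once the state, transition, and base case are written down, the loop is almost mechanical.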
A More Realistic Example: House Robber
```python
def rob(nums: list[int]) -> int:
    if not nums:
        return 0
    if len(nums) == 1:
        return nums[0]
    dp = [0] * len(nums)
    dp[0] = nums[0]
    dp[1] = max(nums[0], nums[1])
    for i in range(2, len(nums)):
        dp[i] = max(dp[i - 1], dp[i - 2] + nums[i])
    return dp[-1]
```

Here, dp[i] means the maximum money you can rob from houses 0..i.
Common Mistakes
- Jumping into code before defining the state
- Using DP when greedy would be simpler
- Forgetting what each index actually represents
- Getting base cases wrong and corrupting the entire table
Top-Down vs Bottom-Up
Use memoization when:
- recursion expresses the problem naturally
- you do not need every state
Use tabulation when:
- state order is clear
- you want tighter performance or simpler debugging
Both approaches solve the same kind of problem. Pick the one that makes the dependencies easiest to reason about.
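As one more bottom-up example where the state order is unambiguous, here is a sketch for Unique Paths (from the practice list), counting grid paths that only move right or down:

```python
def unique_paths(m: int, n: int) -> int:
    # dp[j] = number of paths to column j of the current row.
    dp = [1] * n  # first row: only one way to reach each cell (all right moves)
    for _ in range(1, m):
        for j in range(1, n):
            # Paths from above (old dp[j]) plus paths from the left (dp[j - 1]).
            dp[j] += dp[j - 1]
    return dp[-1]
```

Because each row depends only on the row above, one reusable row of size n is enough.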
Practice These Next
- Climbing Stairs
- House Robber
- Coin Change
- Longest Increasing Subsequence
- Unique Paths
Dynamic programming becomes much less mysterious when you stop asking, "What trick solves this?" and start asking, "What repeated subproblem am I solving more than once?"