In this tutorial, we will learn about a technique called "Dynamic Programming", used to solve problems for which other strategies (say, Divide & Conquer and Greedy methods) fail to give optimal solutions. Dynamic Programming (DP) is a simple technique, but it can be difficult to master. One good way to learn to identify and solve DP problems is to practice on as many problems as possible. The term "Programming" here is not related to coding; it comes from the older mathematical literature and means filling in tables (similar to Linear Programming).
What is Dynamic Programming?
Dynamic programming and memoization work together. The main difference between dynamic programming and divide and conquer is that in divide and conquer the sub-problems are independent, whereas in DP sub-problems can overlap. By using memoization (maintaining a table of sub-problems already solved), dynamic programming reduces exponential complexity to polynomial complexity (O(n²), O(n³), etc.) for many problems. The major components of DP are:
- Recursion: Solves sub-problems recursively
- Memoization: Stores already computed values in a table (Memoization means caching).
Together, these two techniques form Dynamic Programming.
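To make these two components concrete, here is a minimal sketch in Python (our own illustration, not part of the original article) that computes Fibonacci numbers with recursion plus a memo table:

```python
def fib(n, memo=None):
    """Recursion + memoization: each sub-problem is solved once and cached."""
    if memo is None:
        memo = {}
    if n in memo:                # memoization: reuse a cached answer
        return memo[n]
    if n < 2:                    # base cases of the recursion
        result = n
    else:                        # recursion: solve the two smaller sub-problems
        result = fib(n - 1, memo) + fib(n - 2, memo)
    memo[n] = result             # cache the answer before returning
    return result

print(fib(40))                   # 102334155, computed in linear time
```

Without the memo table, the same recursion takes exponential time; with it, each value from 0 to n is computed exactly once.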
Properties of the Dynamic Programming Strategy
Two properties tell us whether dynamic programming can solve a given problem:
- Optimal substructure: an optimal solution to a problem contains optimal solutions to subproblems.
- Overlapping subproblems: a recursive solution contains a small number of distinct subproblems repeated many times.
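As a quick illustration of overlapping subproblems, the sketch below (an instrumentation of our own, reusing the Fibonacci recursion) counts how often one sub-problem is revisited when no table is used:

```python
from collections import Counter

calls = Counter()

def naive_fib(n):
    """Plain recursion with no caching, instrumented to count sub-problem calls."""
    calls[n] += 1
    if n < 2:
        return n
    return naive_fib(n - 1) + naive_fib(n - 2)

naive_fib(10)
print(calls[2])   # 34: the same small sub-problem is solved over and over
```

The same distinct sub-problems keep reappearing, which is exactly the redundancy that memoization removes.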
Is there a solution for every problem using Dynamic Programming?
No. Dynamic programming applies only to problems that exhibit the two properties above. For problems that do, there are basically two approaches to solving them:
- Bottom-Up Dynamic Programming
- Top-Down Dynamic Programming
Bottom-up Dynamic Programming
In this method, we evaluate the function starting with the smallest possible input argument and then step through increasing argument values. Every computed value is stored in a table (memory), so as larger arguments are evaluated, the pre-computed values for smaller arguments can be reused.
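Here is a bottom-up sketch, again using Fibonacci as an illustrative example of our own choosing:

```python
def fib_bottom_up(n):
    """Bottom-up DP: fill a table from the smallest arguments toward n."""
    if n < 2:
        return n
    table = [0] * (n + 1)
    table[1] = 1                               # smallest non-trivial value
    for i in range(2, n + 1):                  # step through increasing argument values
        table[i] = table[i - 1] + table[i - 2] # reuse pre-computed entries
    return table[n]

print(fib_bottom_up(40))                       # 102334155, same answer, no recursion
```

Note that the programmer explicitly chooses the order of calculation here: the table is filled from index 2 upward.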
Top-down Dynamic Programming
In this method, the problem is broken into sub-problems; each sub-problem is solved, and the solutions are remembered in case they need to be solved again. We save each computed value as the final action of the recursive function, and as the first action we check whether a pre-computed value already exists.
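The sketch below illustrates this check-first, store-last structure on a different problem, counting the right/down paths through a grid (the grid-paths problem is our own illustrative choice):

```python
def grid_paths(rows, cols, memo=None):
    """Top-down DP: count right/down paths from the top-left to the bottom-right cell."""
    if memo is None:
        memo = {}
    key = (rows, cols)
    if key in memo:              # first action: check for a pre-computed value
        return memo[key]
    if rows == 1 or cols == 1:   # a single row or column has exactly one path
        result = 1
    else:                        # break the problem into two sub-problems
        result = grid_paths(rows - 1, cols, memo) + grid_paths(rows, cols - 1, memo)
    memo[key] = result           # final action: remember the computed value
    return result

print(grid_paths(3, 3))          # 6 paths in a 3x3 grid
```

The recursive structure reads exactly like the problem definition; the memo table only prevents recalculation.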
Bottom Up Vs Top Down Programming
In bottom-up programming, the programmer has to select values to calculate and decide the order of calculation. In this case, all subproblems that might be needed are solved in advance and then used to build up solutions to larger problems. In top-down programming, the recursive structure of the original code is preserved, but unnecessary recalculation is avoided. The problem is broken into subproblems, these subproblems are solved and the solutions are remembered, in case they need to be solved again.
Note: Some problems can be solved with both techniques and we will see examples in the next section.
Examples of Dynamic Programming Problems
- String algorithms: longest common subsequence, longest increasing subsequence, longest common substring, and edit distance (a short LCS sketch follows this list).
- Graph algorithms: the Bellman-Ford algorithm for single-source shortest paths, Floyd's all-pairs shortest path algorithm, etc.
- Chain matrix multiplication
- Subset sum
- 0/1 Knapsack
- Traveling salesman problem, and many more
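As a taste of these problems, here is a bottom-up sketch of the longest common subsequence length (a standard textbook formulation; the example strings are our own):

```python
def lcs_length(a, b):
    """Length of the longest common subsequence of strings a and b (bottom-up DP)."""
    m, n = len(a), len(b)
    # dp[i][j] = LCS length of a[:i] and b[:j]
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1             # extend the common subsequence
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])  # drop a character from a or b
    return dp[m][n]

print(lcs_length("ABCBDAB", "BDCABA"))   # 4, e.g. "BCAB"
```

Each of the O(m·n) table entries is computed once from previously filled entries, which is the overlapping-subproblem saving in action.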
In this tutorial, we covered a brief introduction to Dynamic Programming. We will explore these problems in detail in upcoming tutorials.