Dynamic programming | Engati (2024)

What is dynamic programming?

Dynamic programming is both a mathematical optimization method and a computer programming method. The method was developed by Richard Bellman in the 1950s and has found applications in numerous fields, from aerospace engineering to economics.

In both contexts, it refers to simplifying a complicated problem by breaking it down into simpler sub-problems in a recursive manner. While some decision problems cannot be taken apart this way, decisions that span several points in time do often break apart recursively. Likewise, in computer science, if a problem can be solved optimally by breaking it into sub-problems and then recursively finding the optimal solutions to the sub-problems, then it is said to have an optimal substructure.


What is dynamic programming example?

Dynamic programming is frequently used for optimization problems. Like the divide-and-conquer method, dynamic programming solves problems by combining the solutions of sub-problems. Unlike divide-and-conquer, however, a dynamic programming algorithm solves each sub-problem just once and saves its answer in a table, thereby avoiding the work of re-computing the answer every time it is needed.

Two main properties of a problem suggest that the given problem can be solved using Dynamic Programming. These properties are overlapping sub-problems and optimal substructure.

1. Overlapping Sub-Problems

Like the divide-and-conquer approach, dynamic programming combines solutions to sub-problems. It is mainly useful where the solution of the same sub-problem is needed repeatedly: computed solutions are stored in a table so that they do not have to be re-computed. Hence, the technique pays off only where overlapping sub-problems exist.

For example, binary search has no overlapping sub-problems, whereas a recursive program for Fibonacci numbers solves many of the same sub-problems over and over.
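
As a minimal sketch of the difference (in Python; the function names are ours), the snippet below contrasts a naive recursive Fibonacci, which recomputes the same sub-problems exponentially often, with a memoized version that stores each answer in a table:

```python
# Naive recursion: fib_naive(n) recomputes fib_naive(n-2), fib_naive(n-3), ... many times.
def fib_naive(n):
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

# Memoized recursion: each sub-problem is solved once and its answer stored in a table.
def fib_memo(n, table=None):
    if table is None:
        table = {}
    if n < 2:
        return n
    if n not in table:
        table[n] = fib_memo(n - 1, table) + fib_memo(n - 2, table)
    return table[n]

print(fib_naive(10), fib_memo(10))  # both print 55; the memoized call does O(n) work
```

Calling fib_naive(35) already takes noticeable time, while fib_memo(35) is effectively instantaneous, because every sub-problem is solved exactly once.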

2. Optimal Sub-Structure

A given problem has the optimal substructure property if its optimal solution can be obtained from optimal solutions of its sub-problems.

For example, the Shortest Path problem has the following optimal substructure property −

If a node x lies in the shortest path from a source node u to destination node v, then the shortest path from u to v is the combination of the shortest path from u to x, and the shortest path from x to v.

Standard shortest-path algorithms such as Floyd-Warshall (all-pairs) and Bellman-Ford (single-source) are typical examples of dynamic programming.
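
As an illustrative sketch (not code from this article), a Floyd-Warshall implementation builds the all-pairs answer directly out of optimal sub-paths, exactly as the property above describes:

```python
INF = float("inf")

def floyd_warshall(dist):
    """All-pairs shortest paths.

    dist is an n x n matrix where dist[u][v] is the edge weight from u to v
    (INF if there is no edge, 0 on the diagonal). The matrix is updated in place.
    """
    n = len(dist)
    for k in range(n):              # allow node k as an intermediate point
        for u in range(n):
            for v in range(n):
                # Optimal substructure: the best u -> v path through k is the
                # best u -> k path plus the best k -> v path.
                if dist[u][k] + dist[k][v] < dist[u][v]:
                    dist[u][v] = dist[u][k] + dist[k][v]
    return dist

graph = [[0, 3, INF], [INF, 0, 1], [2, INF, 0]]
print(floyd_warshall(graph))        # [[0, 3, 4], [3, 0, 1], [2, 5, 0]]
```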

Where is dynamic programming used?

Dynamic programming is used for problems that can be divided into similar sub-problems whose results can be re-used. Mostly, these algorithms are used for optimization. Before solving the sub-problem at hand, a dynamic programming algorithm examines the results of previously solved sub-problems, and the solutions of the sub-problems are combined to obtain the best overall solution.

So we can say that −

  • The problem can be divided into smaller, overlapping sub-problems.
  • An optimum solution can be built from optimum solutions of the smaller sub-problems.
  • Dynamic programming algorithms use memoization (see the snippet below).
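
In Python, for instance, the table-keeping does not have to be written by hand: the standard library's functools.lru_cache memoizes a recursive function automatically (a small sketch, reusing the Fibonacci example from above):

```python
from functools import lru_cache

@lru_cache(maxsize=None)          # cache every computed value
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(80))                    # 23416728348467685, computed with O(n) recursive calls
```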

The following problems can be solved using the dynamic programming approach −

  • Fibonacci number series
  • Knapsack problem (sketched after this list)
  • Tower of Hanoi
  • All-pairs shortest path by Floyd-Warshall
  • Single-source shortest path by Bellman-Ford
  • Project scheduling
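
As one concrete sketch from this list (variable names are illustrative, not prescribed), a bottom-up solution to the 0/1 knapsack problem fills a table indexed by the number of items considered and the remaining capacity:

```python
def knapsack(values, weights, capacity):
    """Maximum total value of items that fit into the given capacity (0/1 knapsack)."""
    n = len(values)
    # best[i][w] = best value using only the first i items with capacity w
    best = [[0] * (capacity + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for w in range(capacity + 1):
            best[i][w] = best[i - 1][w]              # option 1: skip item i
            if weights[i - 1] <= w:                  # option 2: take item i if it fits
                best[i][w] = max(best[i][w],
                                 values[i - 1] + best[i - 1][w - weights[i - 1]])
    return best[n][capacity]

print(knapsack(values=[60, 100, 120], weights=[10, 20, 30], capacity=50))  # 220
```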

Dynamic programming can be applied in both a top-down and a bottom-up manner, and in most cases looking up a previously computed solution costs far fewer CPU cycles than recomputing it.
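
A brief sketch of the bottom-up direction, again using Fibonacci: instead of recursing from the full problem downward with a cache (top-down, as shown earlier), the table is filled from the smallest sub-problems upward, so every value needed has already been computed:

```python
def fib_bottom_up(n):
    # Tabulation: solve sub-problems in increasing order of size,
    # so every value we need is already stored in the table.
    if n < 2:
        return n
    table = [0] * (n + 1)
    table[1] = 1
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]
    return table[n]

print(fib_bottom_up(10))   # 55
```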

What are the steps of a dynamic programming approach?

A dynamic programming algorithm is designed using the following four steps. As a running example, think of finding the quickest route to the office through a series of road intersections −

  • Define a small problem and find an optimum solution to it. In the example, this is the same as finding the best path from the last intersection to the office.
  • Enlarge this small problem slightly and find the optimum solution to the new problem using the previously found optimum solution. This step is the same as moving from the last intersection to the second-to-last one.
  • Continue with step 2 until the enlarged problem encompasses the original problem; once it does, the stopping conditions have been met. This is equivalent to going all the way from the last intersection back to the first.
  • Trace back the solution to the whole problem from the optimum solutions to the small problems solved along the way (a small sketch of this process follows the list).
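
The intersection example can be made concrete with a small sketch. The road network, node names, and costs below are ours, purely for illustration; the code works backward from the office exactly as the steps describe:

```python
def best_route(roads, office):
    """roads[a] is a list of (b, cost) pairs meaning a road from intersection a to b.

    Returns a dict mapping each intersection to the cheapest cost of reaching
    the office, built backward from the office (assumes the roads always lead
    toward the office, i.e. the network has no cycles).
    """
    memo = {office: 0}                       # base case: already at the office

    def cost_from(node):
        if node not in memo:
            memo[node] = min(c + cost_from(nxt) for nxt, c in roads[node])
        return memo[node]

    for node in roads:
        cost_from(node)
    return memo

# Toy network: intersections A-D, the office is D.
roads = {"A": [("B", 2), ("C", 5)], "B": [("C", 1), ("D", 7)], "C": [("D", 3)]}
print(best_route(roads, "D"))   # {'D': 0, 'C': 3, 'B': 4, 'A': 6}
```

Tracing which outgoing road achieved the minimum at each intersection recovers the route itself, which is the trace-back described in the last step.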

Is dynamic programming used in real life?

Dynamic programming is heavily used in computer networks, routing, graph problems, computer vision, artificial intelligence, machine learning, etc.

What are the advantages of dynamic programming?

Dynamic programming (also known as dynamic optimization) is a popular method for solving a complex problem by breaking it down into a collection of simpler subproblems, solving each of those subproblems just once, and storing their solutions. The next time the same subproblem occurs, instead of recomputing its solution, one simply looks up the previously computed solution, thereby saving computation time at the expense of a (hopefully) modest expenditure in storage space.

Dynamic programming algorithms are often used for optimization. A dynamic programming algorithm will examine the previously solved subproblems and will combine their solutions to give the best solution for the given problem. In comparison, a greedy algorithm treats the solution as some sequence of steps and picks the locally optimal choice at each step.
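
A standard way to see the contrast (this coin-change example is ours, not the article's): with coin denominations 1, 3 and 4, a greedy algorithm that always grabs the largest coin pays the amount 6 with three coins (4 + 1 + 1), while dynamic programming finds the optimal two coins (3 + 3):

```python
def min_coins_greedy(coins, amount):
    # Greedy: repeatedly take the largest coin that still fits.
    count = 0
    for coin in sorted(coins, reverse=True):
        count += amount // coin
        amount %= coin
    return count

def min_coins_dp(coins, amount):
    # Dynamic programming: best[a] = fewest coins summing exactly to a.
    best = [0] + [float("inf")] * amount
    for a in range(1, amount + 1):
        best[a] = min(best[a - c] + 1 for c in coins if c <= a)
    return best[amount]

print(min_coins_greedy([1, 3, 4], 6))  # 3  (4 + 1 + 1)
print(min_coins_dp([1, 3, 4], 6))      # 2  (3 + 3)
```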

How do you create a dynamic programming algorithm?


1. Identify the sub-problem in words

Too often, programmers will turn to writing code before thinking critically about the problem at hand. Not good. One strategy for firing up your brain before you touch the keyboard is using words, English or otherwise, to describe the sub-problem that you have identified within the original problem.

If you’re solving a problem that requires dynamic programming, grab a piece of paper and think about the information that you need to solve this problem. Write out the sub-problem with this in mind.

2. Write out the sub-problem as a recurring mathematical decision

Once you’ve identified a sub-problem in words, it’s time to write it out mathematically. Why? Well, the mathematical recurrence, or repeated decision, that you find will eventually be what you put into your code.
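
To make this concrete (using the 0/1 knapsack problem from the list above; the notation is ours), the sub-problem "best value achievable with the first i items and remaining capacity w" becomes the recurrence:

OPT(i, w) = OPT(i - 1, w)                                     if item i does not fit (w_i > w)
OPT(i, w) = max( OPT(i - 1, w), v_i + OPT(i - 1, w - w_i) )   otherwise
OPT(0, w) = 0                                                 (base case: no items remain)

This is the same recurrence the bottom-up knapsack sketch earlier in this article implements.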

3. Solve the original problem using Steps 1 and 2

In Step 1, we wrote down the sub-problem in words. In Step 2, we wrote it down as a recurring mathematical decision. Now ask yourself: how can the original problem be solved using this information?

4. Determine the dimensions of the memoization array and the direction in which it should be filled

How do we determine the dimensions of this memoization array? Here’s a trick: the dimensions of the array are equal to the number and size of the variables on which OPT(•) relies. For the knapsack recurrence above, for instance, OPT(i, w) depends on the item index i and the remaining capacity w, so the memoization array is two-dimensional, with (n + 1) × (capacity + 1) entries.

5. Code it!

To code our dynamic program, we put together Steps 2–4. The only new piece of information that you’ll need to write a dynamic program is a base case, which you can find as you tinker with your algorithm.
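
As a closing sketch that puts Steps 2–4 together (again using the 0/1 knapsack recurrence from above; the code is illustrative rather than a prescribed implementation):

```python
def knapsack_top_down(values, weights, capacity):
    n = len(values)
    # Step 4: the memo array has one dimension per variable that OPT depends on,
    # here (n + 1) x (capacity + 1) entries for (items considered, capacity left).
    memo = [[None] * (capacity + 1) for _ in range(n + 1)]

    def opt(i, w):
        if i == 0:                        # base case: no items left to consider
            return 0
        if memo[i][w] is None:
            best = opt(i - 1, w)          # Step 2: either skip item i ...
            if weights[i - 1] <= w:       # ... or take it, if it still fits
                best = max(best, values[i - 1] + opt(i - 1, w - weights[i - 1]))
            memo[i][w] = best
        return memo[i][w]

    return opt(n, capacity)               # Step 3: the original problem is OPT(n, capacity)

print(knapsack_top_down([60, 100, 120], [10, 20, 30], 50))  # 220
```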
