Basic Concept
The process of dynamic programming works as follows: each decision depends on the current state and in turn causes a state transition, so a decision sequence is generated as the state evolves. This kind of multi-stage optimization decision-making process is called dynamic programming.
Initial state → │Decision 1│ → │Decision 2│ →… → │Decision n│ → End state
Scope of application
The problems that can be solved by dynamic programming generally have three properties:
Principle of optimality: if every sub-problem solution contained in the optimal solution of a problem is itself optimal, the problem is said to have optimal substructure, that is, to satisfy the principle of optimality.
No aftereffect: once the state of a certain stage is determined, it is not affected by later decisions. In other words, how the process evolves after a given state depends only on that state, not on the earlier states or decisions that led to it.
Overlapping sub-problems: the sub-problems are not independent, and the same sub-problem may be needed many times in later stages of decision-making. (This property is not a necessary condition for dynamic programming, but without it, a dynamic programming algorithm has no advantage over other algorithms.)
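To make the overlap concrete, here is a minimal sketch using the Fibonacci numbers (an example chosen here for illustration, not taken from the text above): solved naively, the same sub-problems are recomputed an exponential number of times.

```cpp
#include <cstdio>

long long calls = 0;  // counts how many times a sub-problem is (re)solved

// Naive recursion: the recurrence is the same one a DP would use,
// but identical sub-problems are solved over and over.
long long fib(int n) {
    ++calls;
    if (n <= 1) return n;              // boundary condition
    return fib(n - 1) + fib(n - 2);    // overlapping sub-problems
}

int main() {
    long long result = fib(30);
    // Millions of calls, yet there are only 31 distinct sub-problems fib(0)..fib(30).
    printf("fib(30) = %lld, recursive calls = %lld\n", result, calls);
    return 0;
}
```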
Basic ideas and strategies
The basic idea is similar to the divide-and-conquer method: decompose the problem into several sub-problems (stages) and solve them in order, so that the solution of an earlier sub-problem provides useful information for solving a later one. When solving any sub-problem, list all possible local solutions, keep those that may lead to an optimal solution through later decisions, and discard the rest. Solve the sub-problems one by one; the solution of the last sub-problem is the solution of the original problem.
Since most problems solved by dynamic programming have overlapping sub-problems, each sub-problem is solved only once in order to reduce repeated computation, and the results for the different states of the different stages are stored in a table, typically a two-dimensional array.
The biggest difference from the divide-and-conquer method is that, for problems suited to dynamic programming, the sub-problems obtained after decomposition are usually not independent of each other (that is, the solution of the next stage is built on the solution of the previous stage).
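Continuing the same illustrative Fibonacci example (again an assumption, not part of the original text), a minimal memoized sketch shows the strategy described above: each sub-problem is solved once, its result is stored, and later stages simply reuse it.

```cpp
#include <cstdio>
#include <vector>

// Memoized version: the table 'memo' plays the role of the array described above,
// storing the answer to each state so it is computed only once.
long long fibMemo(int n, std::vector<long long>& memo) {
    if (n <= 1) return n;               // boundary condition
    if (memo[n] != -1) return memo[n];  // already solved: reuse the stored result
    return memo[n] = fibMemo(n - 1, memo) + fibMemo(n - 2, memo);
}

int main() {
    int n = 30;
    std::vector<long long> memo(n + 1, -1);
    printf("fib(%d) = %lld\n", n, fibMemo(n, memo));  // only n+1 sub-problems are solved
    return 0;
}
```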
Steps to solve problems using dynamic programming
The problems dealt with by dynamic programming are multi-stage decision-making problems: they generally start from an initial state and reach an end state through the decisions chosen at the intermediate stages. These decisions form a decision sequence and determine a route of activity (usually the optimal one) that completes the entire process.
Initial state → │Decision 1│ → │Decision 2│ →… → │Decision n│ → End state
The design of a dynamic programming solution follows certain patterns; generally, the following steps are required.
Division: divide the problem into several stages according to its characteristics in time or space. When dividing, note that the stages must be ordered or sortable; otherwise the problem cannot be solved this way.
Determining the states and state variables: the different situations the problem naturally develops into at each stage are represented by different states. Of course, the choice of state must satisfy the no-aftereffect property.
Determining the decision and writing the state transition equation: decisions and state transitions are closely linked, because a state transition derives the state of the current stage from the state of the previous stage and the decision made there. So once the decision is determined, the state transition equation can be written. In practice, however, this is often done the other way around: the decision and the state transition equation are derived from the relationship between the states of two adjacent stages.
Finding the boundary conditions: the state transition equation is a recurrence, so it needs a termination condition, i.e. the boundary conditions.
In general, as long as the problem's stages, states, and state transition decisions are determined, the state transition equation (including the boundary conditions) can be written.
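As a hedged illustration of these four steps, take the classic "number triangle" problem (an example chosen here, not one given in the text): starting from the apex and moving to one of the two adjacent numbers in the row below, maximize the sum collected. The stages are the rows, the state is the position within a row, the transition picks the better of the two reachable cells of the next stage, and the boundary condition is the bottom row itself.

```cpp
#include <algorithm>
#include <cstdio>
#include <vector>

int main() {
    // Stage i = row i of the triangle; state j = position within that row.
    std::vector<std::vector<int>> tri = {
        {7},
        {3, 8},
        {8, 1, 0},
        {2, 7, 4, 4},
    };
    int n = (int)tri.size();

    // Boundary condition: starting from a bottom cell, the best sum is the cell itself.
    std::vector<int> best = tri[n - 1];

    // State transition: best(i, j) = tri[i][j] + max(best(i+1, j), best(i+1, j+1)),
    // computed bottom-up in place, so 'best' always holds the values of the stage below.
    for (int i = n - 2; i >= 0; --i)
        for (int j = 0; j <= i; ++j)
            best[j] = tri[i][j] + std::max(best[j], best[j + 1]);

    printf("maximum path sum = %d\n", best[0]);  // 25 for this triangle (7 + 3 + 8 + 7)
    return 0;
}
```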
In practical applications, the design can follow these simplified steps:
- Analyze the nature of the optimal solution and characterize its structure.
- Recursively define the value of an optimal solution.
- Compute the optimal value bottom-up, or top-down with memoization.
- Construct an optimal solution of the problem from the information recorded while computing the optimal value.
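As a hedged sketch of these simplified steps on a small stand-in problem (making change with the fewest coins, chosen here for illustration and not taken from the text): the optimal value is defined recursively, computed bottom-up, and the decision made at each amount is recorded so that the actual optimal solution can be constructed afterwards.

```cpp
#include <cstdio>
#include <vector>

int main() {
    const int INF = 1000000000;
    std::vector<int> coins = {1, 3, 4};   // assumed denominations
    int amount = 6;

    // minCoins[a] = fewest coins summing to a (the recursively defined optimal value),
    // computed bottom-up; choice[a] records the decision that achieved it.
    std::vector<int> minCoins(amount + 1, INF), choice(amount + 1, -1);
    minCoins[0] = 0;                      // boundary condition
    for (int a = 1; a <= amount; ++a)
        for (int c : coins)
            if (c <= a && minCoins[a - c] + 1 < minCoins[a]) {
                minCoins[a] = minCoins[a - c] + 1;
                choice[a] = c;
            }

    // Construct the optimal solution from the recorded choices.
    printf("fewest coins for %d: %d, using:", amount, minCoins[amount]);
    for (int a = amount; a > 0; a -= choice[a])
        printf(" %d", choice[a]);
    printf("\n");                         // prints: fewest coins for 6: 2, using: 3 3
    return 0;
}
```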
Algorithm design
The main difficulty of dynamic programming is the theoretical design, which is the determination of the above 4 steps. Once the design is completed, the implementation part will be very simple.
When using dynamic programming to solve a problem, the most important thing is to determine the three elements of dynamic programming:
- Problem stage
- State of each stage
- Recursive relationship from the previous stage to the later stage.
The recursive relationship must convert a smaller problem into a larger one. From this perspective, dynamic programming can often be implemented with recursion; however, because dynamic programming reuses the previously saved solutions of sub-problems to reduce repeated computation, it has an advantage over plain recursion on large-scale problems, and this reuse is the core of dynamic programming algorithms.
After these three elements are determined, the entire solution process can be described by an optimal decision table. The optimal decision table is a two-dimensional table in which the rows represent the stages of decision-making and the columns represent the problem states; a cell generally holds the optimal value at that stage and state (for example the shortest path, the longest common subsequence, or a maximum value). The table is filled according to the recursive relationship, starting from row 1 and column 1 and proceeding row by row or column by column; the optimal solution of the problem is finally obtained from the completed table by a simple selection or calculation, for example using a recurrence such as the one below.
```
f(n,m) = max{ f(n-1,m), f(n-1,m-w[n]) + P(n,m) }
```
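This is the state transition equation of the 0/1 knapsack problem, reading f(n,m) as the best value obtainable from the first n items with capacity m. Below is a minimal sketch of filling the corresponding decision table, under the assumption (the symbols are not defined in the text) that w[i] is the weight of item i and p[i] the value gained by taking it.

```cpp
#include <algorithm>
#include <cstdio>
#include <vector>

int main() {
    // Illustrative data (assumed): item i has weight w[i] and value p[i]; capacity M.
    std::vector<int> w = {0, 2, 3, 4, 5};   // index 0 unused so items are numbered 1..n
    std::vector<int> p = {0, 3, 4, 5, 6};
    int n = 4, M = 8;

    // f[i][m]: best value using the first i items with capacity m.
    // Rows are the stages (items considered), columns are the states (capacities).
    std::vector<std::vector<int>> f(n + 1, std::vector<int>(M + 1, 0));

    for (int i = 1; i <= n; ++i)            // fill the decision table stage by stage
        for (int m = 0; m <= M; ++m) {
            f[i][m] = f[i - 1][m];          // decision: skip item i
            if (m >= w[i])                  // decision: take item i, if it fits
                f[i][m] = std::max(f[i][m], f[i - 1][m - w[i]] + p[i]);
        }

    printf("best value = %d\n", f[n][M]);   // prints 10 for this data (items 2 and 4)
    return 0;
}
```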
Algorithm basic framework
```
// Initial stage
```
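As a hedged sketch of what such a framework typically looks like (the concrete problem, a tiny layered route with assumed costs, and all names below are illustrative assumptions rather than the author's original listing): initialize the first stage, derive each later stage's states only from the previous stage, and read the answer off the final stage.

```cpp
#include <algorithm>
#include <cstdio>
#include <vector>

int main() {
    // Stand-in multi-stage problem (assumed, not from the text): 4 stages with 2 states each;
    // cost[i][j][k] is the cost of the decision that moves from state j at stage i
    // to state k at stage i+1.  dp[i][j] = cheapest way to reach state j at stage i.
    const int STAGES = 4, STATES = 2, INF = 1000000000;
    int cost[3][2][2] = {{{1, 4}, {2, 3}},
                         {{2, 2}, {5, 1}},
                         {{3, 1}, {2, 2}}};

    std::vector<std::vector<int>> dp(STAGES, std::vector<int>(STATES, INF));

    // Initial stage: the starting states cost nothing to reach.
    for (int j = 0; j < STATES; ++j) dp[0][j] = 0;

    // Remaining stages: every state of stage i+1 is derived from stage i only (no aftereffect).
    for (int i = 0; i + 1 < STAGES; ++i)
        for (int j = 0; j < STATES; ++j)
            for (int k = 0; k < STATES; ++k)
                dp[i + 1][k] = std::min(dp[i + 1][k], dp[i][j] + cost[i][j][k]);

    // End state: the answer is read off the last stage.
    printf("cheapest route cost = %d\n",
           *std::min_element(dp.back().begin(), dp.back().end()));  // 4 for this data
    return 0;
}
```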