Linear Programming: Applications and Optimization Techniques
Explore the world of linear programming through real-world examples like the Diet Problem, understand how to formulate optimization problems, and find solutions using linear functions within constraints. Linear programming plays a crucial role in minimizing costs, maximizing profits, and solving complex decision-making scenarios efficiently.
CMPT 405/705 Design & Analysis of Algorithms March 23, 2022
Plan for today: Linear Programming
Linear Programming
Many optimization problems can be formulated as linear programs:
- The variables are real numbers.
- The constraints are linear equalities and/or inequalities (e.g., x + 1.5y ≤ 3).
- The objective is a linear function (e.g., maximize x + 2y - 0.3z).
The Diet Problem
The input is:
- Different nutrients (proteins, different vitamins, fats, ...); a person needs at least a certain amount of each nutrient per day.
- Different types of food (pasta, salad, chicken, ...); for each type of food we know the amount of each nutrient per unit, and the price per unit.
Goal: design a diet that provides the required nutrients while minimizing the total price of the food.
The Diet Problem
M different nutrients (proteins, different vitamins, fats, ...). Let b_i be the minimal required amount of the i-th nutrient per day.
N different types of food (pasta, salad, chicken, ...). Let a_ij be the amount of the i-th nutrient in one unit of the j-th food, and let c_j denote the price of the j-th food per unit.
Goal:
  minimize c_1 x_1 + c_2 x_2 + ... + c_N x_N
  subject to: a_11 x_1 + a_12 x_2 + ... + a_1N x_N ≥ b_1
              ...
              a_M1 x_1 + a_M2 x_2 + ... + a_MN x_N ≥ b_M
              x_j ≥ 0 for all j = 1, ..., N
In matrix form: minimize c^T x subject to Ax ≥ b and x ≥ 0.
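To make the formulation concrete, here is a minimal sketch that feeds a small diet instance to an off-the-shelf LP solver (assuming SciPy is available; the nutrient amounts, requirements, and prices below are made-up illustrative numbers, not data from the lecture):

```python
from scipy.optimize import linprog

# Hypothetical instance: M = 2 nutrients (rows), N = 2 foods (columns).
a = [[10, 2],    # units of nutrient 1 per unit of food 1 and food 2
     [1, 8]]     # units of nutrient 2 per unit of food 1 and food 2
b = [20, 16]     # minimal daily requirement of each nutrient
c = [3, 2]       # price per unit of each food

# linprog expects constraints of the form A_ub @ x <= b_ub,
# so each requirement a_i . x >= b_i is negated on both sides.
A_ub = [[-v for v in row] for row in a]
b_ub = [-v for v in b]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2)
print(res.x, res.fun)  # cheapest amounts of each food, and the total price
```

The solver returns a fractional amount of each food; the optimum here sits at the intersection of the two nutrient constraints.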
Linear Programming: an example
Goal: minimize 2x + 3y
subject to: y - 2x ≤ 2
            x + y ≥ 6
            x - 2y ≤ 0
            0 ≤ x ≤ 10
            0 ≤ y ≤ 10
[Figure: the feasible region bounded by these lines, with level sets 2x + 3y = 0, 7, 14.]
Solution: the minimum is 14, attained at (x, y) = (4, 2).
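Assuming SciPy is available, the worked example can be checked mechanically; note that the ≥ constraint must be negated to fit linprog's A_ub @ x <= b_ub convention:

```python
from scipy.optimize import linprog

c = [2, 3]                 # minimize 2x + 3y
A_ub = [[-2,  1],          # y - 2x <= 2
        [-1, -1],          # x + y >= 6, written as -x - y <= -6
        [ 1, -2]]          # x - 2y <= 0
b_ub = [2, -6, 0]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 10), (0, 10)])
print(res.x, res.fun)      # the optimal point and value: (4, 2) and 14
```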
Linear Programming: an example
Goal: maximize 2x + y
subject to: y - 2x ≤ 2
            x + y ≥ 6
            x - 2y ≤ 0
            x ≥ 0, y ≥ 0
Without upper bounds on x and y, the feasible region is unbounded in the direction of the objective: every point x = y = t with t ≥ 3 is feasible, and there the objective 2x + y = 3t grows without bound. The maximum is +∞.
Linear Programming
For any LP, we have one of the following possibilities:
1. The feasible polytope is non-empty and bounded.
2. The set of feasible solutions is unbounded, and the optimum is +∞ or -∞.
3. The set of feasible solutions is unbounded, but the optimum is finite.
4. There are no feasible solutions.
Linear Programming (minimization version)
A canonical minimization LP:
Input: A ∈ R^(M×N), b ∈ R^M, c ∈ R^N.
Goal: find a solution x ∈ R^N that minimizes c^T x = c_1 x_1 + c_2 x_2 + ... + c_N x_N
subject to: Ax ≥ b
            x ≥ 0
Linear Programming (maximization version)
A canonical maximization LP:
Input: A ∈ R^(M×N), b ∈ R^M, c ∈ R^N.
Goal: find a solution x ∈ R^N that maximizes c^T x = c_1 x_1 + c_2 x_2 + ... + c_N x_N
subject to: Ax ≤ b
            x ≥ 0
Linear Programming
More generally, the constraints may be arbitrary linear equalities and inequalities, and we can write LPs without the x ≥ 0 constraints.
Theorem: Every minimization LP can be converted into a canonical minimization LP, i.e., an LP of the form
  minimize c^T x = c_1 x_1 + c_2 x_2 + ... + c_N x_N
  subject to: Ax ≥ b
              x ≥ 0
Linear Programming
Fact: Every minimization LP can be converted into a canonical minimization LP, i.e., an LP of the form
  minimize c^T x = c_1 x_1 + c_2 x_2 + ... + c_N x_N
  subject to: Ax ≥ b
              x ≥ 0
Proof: Given a general LP, we convert it into a canonical LP as follows.
1. For each unconstrained variable x, introduce two new variables x+ ≥ 0 and x- ≥ 0.
2. Replace each occurrence of x with (x+ - x-).
3. Replace every equality a^T x = b with the two inequalities a^T x ≥ b and a^T x ≤ b.
4. Replace every constraint of the form a^T x ≤ b with (-a)^T x ≥ -b.
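The conversion steps above can be sketched in a few lines of code; the function name and input format here are my own, not from the lecture. It takes a minimization LP with free variables, equalities A_eq x = b_eq, and inequalities A_ge x ≥ b_ge, and returns canonical data:

```python
def to_canonical(c, A_eq, b_eq, A_ge, b_ge):
    """Convert  min c.x  s.t.  A_eq x = b_eq,  A_ge x >= b_ge  (x free)
    into canonical form  min c'.y  s.t.  A' y >= b',  y >= 0,
    where y stacks x+ on top of x- and x = x+ - x-."""
    # Each equality a.x = b becomes the pair  a.x >= b  and  (-a).x >= -b.
    A = ([row[:] for row in A_ge]
         + [row[:] for row in A_eq]
         + [[-v for v in row] for row in A_eq])
    b = b_ge[:] + b_eq[:] + [-v for v in b_eq]
    # Substitute x = x+ - x-: duplicate every column with negated sign.
    A_canon = [row + [-v for v in row] for row in A]
    c_canon = c + [-v for v in c]
    return c_canon, A_canon, b

# Example: "min x s.t. x = 2" (x free) becomes
# "min x+ - x-  s.t.  x+ - x- >= 2,  -x+ + x- >= -2,  x+, x- >= 0".
print(to_canonical([1], [[1]], [2], [], []))
```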
Linear Programming: feasibility version
We can also consider the feasibility version of LP. LPFEAS is the following problem: given A and b, find x ∈ R^n such that Ax ≥ b, or report that no such x exists.
Linear Programming: feasibility version
Question: If we can solve the optimization version of LP, then we can clearly also solve LPFEAS. What about the other direction?
Claim: Suppose we have an algorithm that solves LPFEAS with n variables and m constraints in time T, and suppose we know that OPT lies in [-M, M]. Then we can approximate the optimum up to an additive ε in time O(T·log(M/ε)).
Proof idea: binary search.
Linear Programming: feasibility version
Proof: Suppose the LP is: maximize c^T x subject to Ax ≤ b, x ≥ 0.
First ask whether the set of constraints {Ax ≤ b, x ≥ 0, c^T x ≥ 0} is feasible.
If yes, then the optimum is in [0, M]; next check feasibility of {Ax ≤ b, x ≥ 0, c^T x ≥ M/2}.
Otherwise, the optimum is in [-M, 0]; next check feasibility of {Ax ≤ b, x ≥ 0, -M/2 ≤ c^T x ≤ 0}.
Continue to the next iteration, each time reducing the range by half; after O(log(M/ε)) feasibility queries the remaining range has length at most ε.
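The binary search above can be sketched as follows, treating the feasibility algorithm as a black-box oracle; the oracle used in the example at the bottom is a stand-in, not a real LPFEAS solver:

```python
def approx_opt(feasible_with_value_at_least, M, eps):
    """Approximate  max c.x  over a feasible region, given only an oracle
    feasible_with_value_at_least(t) that reports whether some feasible x
    has c.x >= t.  Assumes the optimum lies in [-M, M]; returns a value
    within eps of the optimum using O(log(M/eps)) oracle calls."""
    lo, hi = -M, M
    if not feasible_with_value_at_least(lo):
        return None  # the LP itself is infeasible
    while hi - lo > eps:
        mid = (lo + hi) / 2
        if feasible_with_value_at_least(mid):
            lo = mid  # some feasible point reaches mid: optimum >= mid
        else:
            hi = mid  # no feasible point reaches mid: optimum < mid
    return lo

# Stand-in oracle for an LP whose true optimum is 6:
print(approx_opt(lambda t: t <= 6, M=10, eps=1e-6))  # about 6
```

The invariant is that the optimum always lies in [lo, hi], so when the loop exits the returned lo is within eps of it.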
Fourier-Motzkin Elimination
Q: How can we solve LPFEAS?
If all constraints are equalities, we can solve the system of linear equations using Gaussian elimination. For inequalities, we can run the following (inefficient) procedure.
Fourier-Motzkin Elimination: choose some variable x_i, isolate it in every constraint that contains it, and then eliminate it by combining each lower bound with each upper bound. For example, isolating x:
  x - y + 2z ≤ 3    becomes   x ≤ 3 + y - 2z
  2x + y ≥ 4        becomes   x ≥ 2 - y/2
  -2x + 3z ≤ 8      becomes   x ≥ 1.5z - 4
  x + y ≤ 7         becomes   x ≤ 7 - y
Eliminating x by pairing the two lower bounds with the two upper bounds gives:
  2 - y/2 ≤ 3 + y - 2z      1.5z - 4 ≤ 3 + y - 2z
  2 - y/2 ≤ 7 - y           1.5z - 4 ≤ 7 - y
Note that in general each step converts m constraints into up to (m/2)^2 constraints. Therefore, after eliminating all n variables, the total number of constraints (and hence the running time) may be as large as roughly m^(2^n).
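One elimination step can be sketched directly from the description above (the representation and names are mine): every constraint is a pair (coeffs, bound) meaning coeffs · x ≤ bound, and eliminating variable i pairs each lower bound on x_i with each upper bound:

```python
def eliminate(constraints, i):
    """One Fourier-Motzkin step: remove variable i from a system of
    inequalities (coeffs, bound), each meaning coeffs . x <= bound.
    Returns an equivalent system in which x_i no longer appears."""
    lower, upper, rest = [], [], []
    for a, b in constraints:
        if a[i] > 0:       # normalize to  x_i + ... <= b'  (upper bound on x_i)
            upper.append(([v / a[i] for v in a], b / a[i]))
        elif a[i] < 0:     # normalize to  -x_i + ... <= b'  (lower bound on x_i)
            lower.append(([v / -a[i] for v in a], b / -a[i]))
        else:
            rest.append((a, b))
    # Adding a normalized lower bound to a normalized upper bound cancels
    # x_i; this creates up to (m/2)^2 new constraints from m old ones.
    new = rest[:]
    for al, bl in lower:
        for au, bu in upper:
            coeffs = [al[j] + au[j] for j in range(len(al))]
            coeffs[i] = 0.0  # the cancellation is exact; avoid round-off residue
            new.append((coeffs, bl + bu))
    return new

# Example: x + y <= 3, -x <= 0, -y <= 0; eliminating x leaves a system in y.
print(eliminate([([1, 1], 3), ([-1, 0], 0), ([0, -1], 0)], 0))
```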
Linear Programming: feasibility version
Q: We have an LP with n variables and m constraints, with -M ≤ x_i ≤ M for M = poly(n). Write an algorithm that solves a given LP up to an additive ε.
A: We can try all candidate solutions on an ε-grid. For each x_i try all possible values in {-M, -M+ε, -M+2ε, ..., -ε, 0, ε, 2ε, ..., M}, check whether the resulting point is feasible, and among the feasible points return the one maximizing the objective function.
Conclusion: we can find opt(LP) up to ε in time exp(poly(n)). This is better than Fourier-Motzkin, which may take time m^(2^n).
Theorem: Given an LP instance, we can find a feasible solution that is optimal up to ε in time poly(n). The algorithms: the Ellipsoid Algorithm and Interior Point Methods (out of scope for this course).
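The grid search described above can be sketched in pure Python (exponential in the number of variables, exactly as the slide warns):

```python
from itertools import product

def grid_search_lp(c, A, b, M, eps):
    """Approximate  max c.x  s.t.  A x <= b  and  -M <= x_i <= M  by trying
    every point of an eps-spaced grid.  The grid has about (2M/eps)^n
    points, so the running time is exponential in the number of variables."""
    n = len(c)
    steps = int(2 * M / eps) + 1
    best = None
    for ks in product(range(steps), repeat=n):
        x = [-M + k * eps for k in ks]
        feasible = all(
            sum(ai * xi for ai, xi in zip(row, x)) <= bi
            for row, bi in zip(A, b)
        )
        if feasible:
            val = sum(ci * xi for ci, xi in zip(c, x))
            if best is None or val > best:
                best = val
    return best

# Toy instance: max x s.t. x <= 0.5, -1 <= x <= 1, with grid spacing 0.25.
print(grid_search_lp([1], [[1]], [0.5], M=1, eps=0.25))  # 0.5
```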
2-approximation for weighted vertex cover
Weighted minimum vertex cover
We saw a combinatorial algorithm for a 2-approximation for vertex cover. Let's consider the weighted version of the min vertex cover problem.
Input: a graph G = (V, E) and vertex weights w_v ≥ 0.
Goal: find a vertex cover C ⊆ V of G such that Σ_{v∈C} w_v is minimized.
It is not clear how to solve it combinatorially.
[Figure: an example graph with vertex weights 1, 8, and 100, showing that the weights matter for which cover is cheapest.]
Weighted minimum vertex cover
Input: a graph G = (V, E) and vertex weights w_v ≥ 0.
Goal: find a vertex cover C ⊆ V of G such that Σ_{v∈C} w_v is minimized.
Algorithm: consider the following LP, with a variable x_v for each v ∈ V:
  minimize Σ_{v∈V} w_v x_v
  subject to: x_u + x_v ≥ 1 for all (u,v) ∈ E
              0 ≤ x_v ≤ 1 for all v ∈ V
Intention: if C is a vertex cover, then we can set x_v = 1 if v ∈ C and x_v = 0 if v ∈ V\C. Therefore, OPT(LP) ≤ minVC(G).
Another trivial feasible solution: x_v = 1/2 for all v ∈ V (this is not very meaningful as a cover).
Weighted minimum vertex cover
Input: a graph G = (V, E) and vertex weights w_v ≥ 0.
Goal: find a vertex cover C ⊆ V of G such that Σ_{v∈C} w_v is minimized.
Algorithm: consider the following LP, with a variable x_v for each v ∈ V:
  minimize Σ_{v∈V} w_v x_v
  subject to: x_u + x_v ≥ 1 for all (u,v) ∈ E
              0 ≤ x_v ≤ 1 for all v ∈ V
Run the LP solver; this gives fractional solutions, not necessarily 0/1.
Let S = {v ∈ V : x_v ≥ 0.5}.
Output S.
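Assuming SciPy is available for the LP-solving step, the whole algorithm fits in a short sketch (the graph representation and function name are mine):

```python
from scipy.optimize import linprog

def lp_round_vertex_cover(n, edges, weights):
    """LP-rounding 2-approximation for weighted vertex cover: solve the
    relaxation  min sum_v w_v x_v  s.t.  x_u + x_v >= 1 for each edge,
    0 <= x_v <= 1, then keep every vertex with fractional value >= 1/2."""
    A_ub, b_ub = [], []
    for u, v in edges:          # x_u + x_v >= 1  -->  -x_u - x_v <= -1
        row = [0.0] * n
        row[u] = row[v] = -1.0
        A_ub.append(row)
        b_ub.append(-1.0)
    res = linprog(weights, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1)] * n)
    # Tiny tolerance so floating-point values like 0.4999999 still round up.
    return [v for v in range(n) if res.x[v] >= 0.5 - 1e-9]

# A single edge with weights (1, 100): the LP sets x = (1, 0), so S = {0}.
print(lp_round_vertex_cover(2, [(0, 1)], [1, 100]))  # [0]
```

On a triangle with unit weights the LP optimum is x_v = 1/2 everywhere, so rounding selects all three vertices, which is within the factor-2 guarantee.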
Weighted minimum vertex cover
Algorithm: consider the following LP, with a variable x_v for each v ∈ V:
  minimize Σ_{v∈V} w_v x_v
  subject to: x_u + x_v ≥ 1 for all (u,v) ∈ E
              0 ≤ x_v ≤ 1 for all v ∈ V
Before we continue, let's make the following observation.
Observation: OPT(LP) ≤ minVC(G).
Proof: Let C be a min vertex cover of G. Set x_v = 1 if v ∈ C and x_v = 0 if v ∈ V\C. This x is a feasible solution, and therefore OPT(LP) ≤ Σ_{v∈V} w_v x_v = minVC(G).
Weighted minimum vertex cover
  minimize Σ_{v∈V} w_v x_v
  subject to: x_u + x_v ≥ 1 for all (u,v) ∈ E
              0 ≤ x_v ≤ 1 for all v ∈ V
Solve the LP. Let S = {v ∈ V : x_v ≥ 0.5}. Output S.
Claim 1: S is a vertex cover.
Proof: Let x be a feasible solution returned by the LP solver. Then for any (u,v) ∈ E we have x_u + x_v ≥ 1. Therefore, either x_u or x_v is at least 1/2, and hence at least one of u, v is in S.
Weighted minimum vertex cover
  minimize Σ_{v∈V} w_v x_v
  subject to: x_u + x_v ≥ 1 for all (u,v) ∈ E
              0 ≤ x_v ≤ 1 for all v ∈ V
Solve the LP. Let S = {v ∈ V : x_v ≥ 0.5}. Output S.
Claim 2: w(S) ≤ 2·OPT(LP) ≤ 2·minVC(G).
Proof: Let x* be an optimal solution returned by the LP solver. Then
  OPT(LP) = Σ_{v∈V} w_v x*_v ≥ Σ_{v∈S} w_v x*_v ≥ 0.5·Σ_{v∈S} w_v = 0.5·w(S).
Therefore, w(S) ≤ 2·OPT(LP) ≤ 2·minVC(G), where the second inequality is the observation from before.
Weighted minimum vertex cover
Algorithm: consider the following LP:
  minimize Σ_{v∈V} w_v x_v
  subject to: x_u + x_v ≥ 1 for all (u,v) ∈ E
              0 ≤ x_v ≤ 1 for all v ∈ V
Solve the LP. Let S = {v ∈ V : x_v ≥ 0.5}. Output S.
We proved: (1) S is a vertex cover of G, and (2) w(S) ≤ 2·minVC(G). Therefore, the algorithm is a 2-approximation for the weighted minVC problem.
Questions? Comments?