Optimizing Nutrition and Cost with Linear Programming

Explore how linear programming applies to real-world problems such as designing an optimal diet that meets nutritional requirements while minimizing cost. Linear programming formulates the objective and constraints as linear functions, which allows such optimization problems to be solved efficiently.

  • Linear Programming
  • Nutrition
  • Optimization
  • Diet Problem
  • Algorithms

Presentation Transcript


  1. CMPT 409/815 Advanced Algorithms October 7, 2020

  2. Announcements: Assignment 2 is online. Submit your solutions to Coursys by Oct 21, 2020.

  3. Linear Programming

  4. Linear Programming. Many optimization problems can be formulated as linear programs: the variables are real numbers, the constraints are linear equalities and/or inequalities (e.g. x + 1.5y <= 3), and the objective is a linear function (e.g. maximize x + y - z).

  5. The Diet Problem. The input: there are different nutrients (proteins, different vitamins, fats), and a person needs at least a certain amount of each nutrient per day; there are different types of food (pasta, salad, chicken, ...), and for each type of food we have the amount of each nutrient per unit and the price per unit. Goal: design a diet that provides the required nutrients while minimizing the total price of the food.

  6. The Diet Problem. There are M different nutrients (proteins, different vitamins, fats); let b_j be the minimal required amount of the j-th nutrient per day. There are N different types of food (pasta, salad, chicken, ...); let a_ji be the amount of the j-th nutrient in one unit of the i-th food, and let c_i be the price per unit of the i-th food. Goal: minimize c_1 x_1 + c_2 x_2 + ... + c_N x_N subject to a_11 x_1 + a_12 x_2 + ... + a_1N x_N >= b_1, ..., a_M1 x_1 + a_M2 x_2 + ... + a_MN x_N >= b_M, and x_j >= 0 for all j. In matrix form: minimize c^T x subject to Ax >= b, x >= 0.
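
As a concrete illustration of this matrix form, here is a minimal sketch (not part of the lecture) that solves a tiny diet LP with SciPy's linprog; the prices and nutrient amounts are made-up numbers, and the Ax >= b constraint is passed to the solver as -Ax <= -b, since linprog only accepts <= constraints.

```python
import numpy as np
from scipy.optimize import linprog

# Made-up data: 3 foods (pasta, salad, chicken), 2 nutrients (protein, vitamin C).
c = np.array([1.5, 3.0, 4.0])        # c_i: price per unit of each food
A = np.array([[7.0, 2.0, 25.0],      # row j: amount of nutrient j per unit of each food
              [0.0, 30.0, 5.0]])
b = np.array([50.0, 60.0])           # b_j: minimal daily requirement of nutrient j

# linprog minimizes c^T x subject to A_ub x <= b_ub and x >= 0 (the default bounds),
# so Ax >= b is rewritten as -Ax <= -b.
res = linprog(c, A_ub=-A, b_ub=-b, method="highs")
print(res.x, res.fun)                # optimal units of each food, and the minimal cost
```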

  7. Linear Programming - an example. Goal: minimize 2x + 3y subject to: y - 2x <= 2, x + y >= 6, x - 2y <= 0, 0 <= x <= 10, 0 <= y <= 10. (The slide plots the feasible region together with the objective lines 2x + 3y = 0, 2x + 3y = 7, and 2x + 3y = 14.) The solution is 14, attained at (x, y) = (4, 2).
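
A quick way to double-check this example (a sketch assuming SciPy is available): rewrite x + y >= 6 as -x - y <= -6 so that all constraints fit linprog's <= format.

```python
from scipy.optimize import linprog

c = [2, 3]                     # minimize 2x + 3y
A_ub = [[-2,  1],              # y - 2x <= 2
        [-1, -1],              # x + y >= 6, rewritten as -x - y <= -6
        [ 1, -2]]              # x - 2y <= 0
b_ub = [2, -6, 0]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 10), (0, 10)], method="highs")
print(res.x, res.fun)          # expected: approximately [4. 2.] and 14.0
```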

  8. Linear Programming - an example. Goal: maximize 2x + y subject to: y - 2x <= 2, x + y >= 6, x - 2y <= 0, 0 <= x, 0 <= y. The maximum is infinity: the feasible region is unbounded and the objective grows without limit along it.
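
The same check on this variant (again a sketch assuming SciPy; linprog only minimizes, so we minimize -(2x + y)) should make the solver report the problem as unbounded instead of returning a finite optimum.

```python
from scipy.optimize import linprog

c = [-2, -1]                   # maximize 2x + y  <=>  minimize -2x - y
A_ub = [[-2,  1],              # y - 2x <= 2
        [-1, -1],              # x + y >= 6, rewritten as -x - y <= -6
        [ 1, -2]]              # x - 2y <= 0
b_ub = [2, -6, 0]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)], method="highs")
print(res.status)              # expected: 3, SciPy's status code for an unbounded problem
```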

  9. Linear Programming. For any LP we have one of the following possibilities: (1) the feasible polytope is non-empty and bounded; (2) the set of feasible solutions is unbounded, and the optimum is +∞ or -∞; (3) the set of feasible solutions is unbounded, but the optimum is finite; (4) there are no feasible solutions.

  10. Linear Programming (minimization version). A canonical minimization LP. Input: A ∈ R^(M×N), b ∈ R^M, c ∈ R^N. Goal: find a solution x ∈ R^N that minimizes c^T x = c_1 x_1 + c_2 x_2 + ... + c_N x_N subject to Ax >= b and x >= 0.

  11. Linear Programming (maximization version). A canonical maximization LP. Input: A ∈ R^(M×N), b ∈ R^M, c ∈ R^N. Goal: find a solution x ∈ R^N that maximizes c^T x = c_1 x_1 + c_2 x_2 + ... + c_N x_N subject to Ax <= b and x >= 0.

  12. Linear Programming. More generally, the constraints may be arbitrary linear inequalities, and we can write LPs without the x >= 0 constraints. Theorem: Every minimization LP can be converted into a canonical minimization LP, i.e., an LP of the form: minimize c^T x = c_1 x_1 + c_2 x_2 + ... + c_N x_N subject to Ax >= b, x >= 0.

  13. Linear Programming. Fact: Every minimization LP can be converted into a canonical minimization LP, i.e., an LP of the form: minimize c^T x = c_1 x_1 + c_2 x_2 + ... + c_N x_N subject to Ax >= b, x >= 0. Proof: Given a general LP, we convert it into a canonical LP as follows. (1) For each unconstrained variable x, introduce two new variables x+ >= 0 and x- >= 0. (2) Replace each occurrence of x with (x+ - x-). (3) Replace every equality a^T x = b with the pair a^T x <= b and a^T x >= b. (4) Replace every constraint a^T x <= b with (-a)^T x >= -b.
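
A small sketch of these four steps (my own illustration, not from the slides), assuming the general LP is given as equalities A_eq x = b_eq plus inequalities A_le x <= b_le over unrestricted variables:

```python
import numpy as np

def to_canonical(c, A_eq, b_eq, A_le, b_le):
    """Return (c', A', b') describing: minimize c'^T y  s.t.  A' y >= b',  y >= 0."""
    # Steps 1-2: replace each unrestricted x with x_plus - x_minus (both >= 0);
    # this doubles the number of columns, with the second copy negated.
    split = lambda M: np.hstack([M, -M])
    c_new = np.concatenate([c, -c])
    # Step 3: each equality a^T x = b becomes a^T x >= b together with -a^T x >= -b.
    # Step 4: each inequality a^T x <= b becomes -a^T x >= -b.
    A_new = np.vstack([split(A_eq), split(-A_eq), split(-A_le)])
    b_new = np.concatenate([b_eq, -b_eq, -b_le])
    return c_new, A_new, b_new
```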

  14. Linear Programming - feasibility version. We can also consider the feasibility version of LP. LPFEAS is the following problem: find x ∈ R^n such that Ax >= b.

  15. Linear Programming - feasibility version. It is clear that if we can solve the optimization version of LP, then we can also solve LPFEAS. What about the other direction? Claim: Suppose we have an algorithm that solves LPFEAS on n variables and m constraints in time T, and suppose that for an optimization LP we know that OPT lies in [-M, M]. Then we can approximate the optimum up to ε in time O(T·log(M/ε)). Proof idea: binary search.

  16. Linear Programming - feasibility version. Proof: Suppose the LP is: maximize c^T x s.t. Ax <= b, x >= 0. Ask if the set of constraints {Ax <= b, x >= 0, c^T x >= 0} is feasible. If yes, then the optimum is in [0, M], so next check feasibility of {Ax <= b, x >= 0, c^T x >= M/2}. Otherwise, the optimum is in [-M, 0], so next check feasibility of {Ax <= b, x >= 0, -M/2 <= c^T x <= 0}. Continue to the next iteration, each time reducing the range by half, until its length is at most ε.
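
A runnable sketch of this binary search (my own illustration; the function names are not from the lecture), using SciPy's linprog with a zero objective purely as the LPFEAS oracle:

```python
import numpy as np
from scipy.optimize import linprog

def feasible(A_ub, b_ub):
    """Is {A_ub x <= b_ub, x >= 0} non-empty? A zero objective makes this a pure feasibility check."""
    res = linprog(np.zeros(A_ub.shape[1]), A_ub=A_ub, b_ub=b_ub, method="highs")
    return res.status == 0

def approx_max(c, A_ub, b_ub, M, eps):
    """Approximate max c^T x s.t. A_ub x <= b_ub, x >= 0 within eps, given that |OPT| <= M."""
    lo, hi = -M, M
    while hi - lo > eps:                               # O(log(M/eps)) feasibility calls
        mid = (lo + hi) / 2
        # Add the constraint c^T x >= mid, i.e. -c^T x <= -mid, and test feasibility.
        if feasible(np.vstack([A_ub, -np.asarray(c)]), np.append(b_ub, -mid)):
            lo = mid                                   # the optimum lies in [mid, hi]
        else:
            hi = mid                                   # the optimum lies in [lo, mid]
    return lo
```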

  17. Fourier-Motzkin Elimination. Q: How can we solve LPFEAS? If all constraints are equalities, we can solve the system of linear equations using Gaussian elimination. For inequalities, we can run the following (inefficient) procedure, Fourier-Motzkin Elimination: choose some variable x_i, isolate it in each constraint, and then eliminate it. For example, given
      x - y + 2z <= 3,   2x + y >= 4,   -2x + 3z <= 8,   x + y <= 7,
  isolating x gives
      x <= 3 + y - 2z,   x <= 7 - y,   x >= 2 - y/2,   x >= 1.5z - 4,
  and pairing each lower bound on x with each upper bound eliminates x:
      2 - y/2 <= 3 + y - 2z,   1.5z - 4 <= 3 + y - 2z,   2 - y/2 <= 7 - y,   1.5z - 4 <= 7 - y.
  Note that in general each step converts m constraints into up to (m/2)^2 constraints. Therefore, after eliminating all n variables, the total running time may be as large as m^(2^n).
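
For concreteness, a short sketch of one elimination step (my own illustration, not from the lecture), for a system already written in the form Ax <= b; a constraint such as 2x + y >= 4 would first be rewritten as -2x - y <= -4.

```python
import numpy as np

def eliminate(A, b, k):
    """Eliminate variable k from {Ax <= b}; return (A', b') over the remaining variables."""
    pos = [i for i in range(len(b)) if A[i, k] > 0]    # rows giving upper bounds on x_k
    neg = [i for i in range(len(b)) if A[i, k] < 0]    # rows giving lower bounds on x_k
    rows, rhs = [], []
    for i in range(len(b)):                            # rows not involving x_k are kept as-is
        if A[i, k] == 0:
            rows.append(np.delete(A[i], k)); rhs.append(b[i])
    for i in pos:                                      # each (lower bound <= upper bound) pair
        for j in neg:                                  # becomes one new constraint without x_k
            rows.append(np.delete(A[i] / A[i, k] - A[j] / A[j, k], k))
            rhs.append(b[i] / A[i, k] - b[j] / A[j, k])
    return np.array(rows), np.array(rhs)
```

Repeating this step for every variable either produces a contradictory constant constraint (the system is infeasible) or leaves only trivially satisfiable constraints, but as noted above the number of constraints can blow up quickly.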

  18. Linear Programming - feasibility version. Claim: LPFEAS ∈ NP. Obvious. Conclusion: We can find the optimum of a given LP up to ε in time exp(poly(n)). This is better than Fourier-Motzkin, which works in time m^(2^n). Theorem: Given an LP instance, we can find a feasible solution that is optimal up to ε in time poly(n). The algorithms: the Ellipsoid Algorithm and the Interior Point Method (out of scope for this course).

  19. 2-approximation for weighted vertex cover

  20. Weighted minimum vertex cover. We saw a combinatorial algorithm giving a 2-approximation for vertex cover. Let's consider the weighted version of the min vertex cover problem. Input: a graph G = (V, E) and weights w_v >= 0. Goal: find a vertex cover C ⊆ V of G such that Σ_{v∈C} w_v is minimized. It is not clear how to solve this combinatorially. (The slide shows an example graph in which most vertices have weight 1 and a few have much larger weights, such as 8 and 100.)

  21. Weighted minimum vertex cover. Input: a graph G = (V, E) and weights w_v >= 0. Goal: find a vertex cover C ⊆ V of G such that Σ_{v∈C} w_v is minimized. Algorithm: Consider the following LP, with a variable x_v for each v ∈ V: minimize Σ_{v∈V} w_v x_v subject to x_u + x_v >= 1 for all (u,v) ∈ E, and 0 <= x_v <= 1 for all v ∈ V. Intention: if C is a vertex cover, then we can set x_v = 1 if v ∈ C and x_v = 0 if v ∈ V\C; therefore OPT(LP) <= minVC(G). Another trivial solution: x_v = 1/2 for all v ∈ V (this is not very meaningful).

  22. Weighted minimum vertex cover. Input: a graph G = (V, E) and weights w_v >= 0. Goal: find a vertex cover C ⊆ V of G such that Σ_{v∈C} w_v is minimized. Algorithm: Consider the following LP, with a variable x_v for each v ∈ V: minimize Σ_{v∈V} w_v x_v subject to x_u + x_v >= 1 for all (u,v) ∈ E, and 0 <= x_v <= 1 for all v ∈ V. Solve the LP, let S = {v ∈ V : x_v >= 0.5}, and output S.

  23. Weighted minimum vertex cover. Algorithm: Consider the following LP, with a variable x_v for each v ∈ V: minimize Σ_{v∈V} w_v x_v subject to x_u + x_v >= 1 for all (u,v) ∈ E, and 0 <= x_v <= 1 for all v ∈ V. Before we continue, let's make the following observation. Observation: OPT(LP) <= minVC(G). Proof: Let x* be the optimal solution to the LP, and let C be a min vertex cover of G. Setting x_v = 1 if v ∈ C and x_v = 0 if v ∈ V\C gives a feasible integer solution x. Therefore OPT(LP) = Σ_{v∈V} w_v x*_v <= Σ_{v∈V} w_v x_v = minVC(G).

  24. Weighted minimum vertex cover. The LP: minimize Σ_{v∈V} w_v x_v subject to x_u + x_v >= 1 for all (u,v) ∈ E, and 0 <= x_v <= 1 for all v ∈ V. Solve the LP, let S = {v ∈ V : x_v >= 0.5}, and output S. Claim 1: S is a vertex cover. Proof: Let x be the feasible solution found by the LP solver. Then for any (u,v) ∈ E we have x_u + x_v >= 1. Therefore either x_u or x_v is at least 1/2, and hence at least one of them is in S.

  25. Weighted minimum vertex cover. The LP: minimize Σ_{v∈V} w_v x_v subject to x_u + x_v >= 1 for all (u,v) ∈ E, and 0 <= x_v <= 1 for all v ∈ V. Solve the LP, let S = {v ∈ V : x_v >= 0.5}, and output S. Claim 2: w(S) <= 2·OPT(LP) <= 2·minVC(G). Proof: Let x* be the optimal LP solution. Then OPT(LP) = Σ_{v∈V} w_v x*_v >= Σ_{v∈S} w_v x*_v >= 0.5·Σ_{v∈S} w_v = 0.5·w(S). Therefore w(S) <= 2·OPT(LP), and by the observation from before, 2·OPT(LP) <= 2·minVC(G).

  26. Weighted minimum vertex cover. Algorithm: Consider the following LP: minimize Σ_{v∈V} w_v x_v subject to x_u + x_v >= 1 for all (u,v) ∈ E, and 0 <= x_v <= 1 for all v ∈ V. Solve the LP, let S = {v ∈ V : x_v >= 0.5}, and output S. We proved: (1) S is a vertex cover of G, and (2) w(S) <= 2·minVC(G). Therefore, the algorithm is a 2-approximation for the weighted minVC problem.
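
Putting the algorithm together, here is a runnable sketch (my own illustration, with SciPy's linprog as the LP solver); the small example graph and its weights at the end are made up.

```python
import numpy as np
from scipy.optimize import linprog

def weighted_vc_2approx(n, edges, w):
    """Return a vertex cover S of the n-vertex graph with w(S) <= 2 * minVC(G)."""
    # The LP relaxation: minimize sum_v w_v x_v  s.t.  x_u + x_v >= 1 per edge, 0 <= x_v <= 1.
    A_ub = np.zeros((len(edges), n))
    for i, (u, v) in enumerate(edges):
        A_ub[i, u] = A_ub[i, v] = -1.0           # -(x_u + x_v) <= -1
    b_ub = -np.ones(len(edges))
    res = linprog(w, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1)] * n, method="highs")
    # Rounding: keep every vertex whose LP value is at least 1/2.
    return {v for v in range(n) if res.x[v] >= 0.5}

# Made-up example: a triangle {0,1,2} plus a pendant vertex 3 attached to 2; vertex 2 is heavy.
edges = [(0, 1), (1, 2), (0, 2), (2, 3)]
w = np.array([1.0, 1.0, 100.0, 1.0])
print(weighted_vc_2approx(4, edges, w))          # expected: {0, 1, 3}
```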

  27. Questions? Comments?
