Lecture 15 - Linear Programming

Imperative programming vs declarative programming
- declarative: specify what is to be solved, not how to solve it

Declarative programming: specify a problem in terms of:
- constraints on the set of possible solutions
- an optimization criterion, used to pick between possible solutions

Most important declarative programming paradigm: linear programming
- Constraints: a set of linear inequalities
    linear = what? a sum of constants times variables
    - no products of variables
    - no other functions - sin, square root, etc.
- Objective function: a linear function to be minimized or maximized

example (draw)
  constraints:
    x1 >= 2
    x2 >= 1
    x1 + x2 <= 4
  - each constraint defines a half plane
  - feasible region = intersection of the half planes

  objective: maximize x2      -- which point?
             minimize x1 + x2 -- which point?
  - the objective function defines a direction

- Notice that the solution in both of these cases was an intersection of some of the constraints.
- Is this true in general? Can an optimal solution ever be in the middle of an edge (and not also at the edge's endpoints)?
    No
- Why? Because the feasible region is CONVEX.
    In 2D: convex = no interior angles greater than 180 degrees.

The algorithms for solving linear programs search through the set of intersections of the constraints (the vertices of the feasible region).
- Simplex algorithm
  - initialization: find some vertex
  - while there is an adjacent vertex with a better objective value, move to it

However, we won't look at the Simplex algorithm until towards the end of this section of the course.
- except in special cases (and we will see one), it would be unusual to write your own linear program solver
- modern solvers reflect many years of refinement and optimization

Instead: now we will look at "reductions" to LP
- Reduction: transforming one kind of problem into another kind of problem

*******************************************

example: production planning

A company makes handmade carpets.
Estimated demand over the next year: d1, d2, ..., d12, ranging from 440 to 920.
It currently has 30 employees
- each makes 20 carpets @ month
- each is paid $2000 @ month
Initially: no carpets in the warehouse

Ways to handle fluctuations in demand:
- overtime: workers can make up to 6 more carpets per month, receiving an extra 80% of regular pay ($180 @ carpet instead of $100 @ carpet)
- hiring: $320 @ worker
- firing: $400 @ worker
- warehousing carpets: $8 @ carpet @ month. Must not have any in the warehouse at the end of the year.

Variables:
  wi = # workers during month i, where w0 = 30
  xi = # carpets made during month i
  oi = # carpets made during month i using overtime
  hi = # workers hired at the beginning of month i
  fi = # workers fired at the beginning of month i
  si = # carpets in storage at the end of month i, s0 = 0
Total number of variables: 72 (plus w0 and s0)

Constraints:
- all variables non-negative
- total # carpets is regular production plus overtime:
    xi = 20 wi + oi          for i in 1..12
- change in workers each month:
    wi = w_{i-1} + hi - fi
- number of carpets stored:
    si = s_{i-1} + xi - di
- overtime is limited:
    oi <= 6 wi
- warehouse empty at the end of the year:
    s12 = 0

Objective: minimize
  2000 SUM wi + 320 SUM hi + 400 SUM fi + 8 SUM si + 180 SUM oi

Push the button and solve. (A solver sketch follows this example.)

What if the solution says to hire a fractional worker, e.g. h2 = 1.6?
  Either round up or down, thus increasing the objective function.
- Can this lead to a non-optimal solution?
  - Yes, but...
  - usually it is either optimal or near-optimal
- Guaranteeing integer values for some of the variables AND optimality requires a solver for what is called a "mixed integer linear program" -- MUCH harder.
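Going back to "push the button and solve": here is a sketch of what that can look like in Python with scipy.optimize.linprog. The choice of solver, the function name solve_carpet_plan, the variable layout, and the demand numbers at the bottom are my own illustrative assumptions; the lecture only says the demands range from 440 to 920.

import numpy as np
from scipy.optimize import linprog

def solve_carpet_plan(demand, w0=30, per_worker=20, max_ot=6,
                      wage=2000, hire=320, fire=400, store=8, ot_pay=180):
    n = len(demand)                      # number of months (12 in the lecture)
    # variable order: w | x | o | h | f | s, one block of n entries each
    W, X, O, H, F, S = (np.arange(n) + k * n for k in range(6))
    nv = 6 * n

    # objective: 2000 SUM wi + 320 SUM hi + 400 SUM fi + 8 SUM si + 180 SUM oi
    c = np.zeros(nv)
    c[W], c[H], c[F], c[S], c[O] = wage, hire, fire, store, ot_pay

    A_eq, b_eq = [], []
    def eq(coeffs, rhs):                 # add one equality constraint
        row = np.zeros(nv)
        for idx, val in coeffs:
            row[idx] = val
        A_eq.append(row)
        b_eq.append(rhs)

    for i in range(n):
        # production:  x_i = 20 w_i + o_i
        eq([(X[i], 1), (W[i], -per_worker), (O[i], -1)], 0)
        # workforce:   w_i = w_{i-1} + h_i - f_i   (w_0 = 30 moves to the right-hand side)
        prev = [(W[i - 1], -1)] if i > 0 else []
        eq([(W[i], 1), (H[i], -1), (F[i], 1)] + prev, w0 if i == 0 else 0)
        # storage:     s_i = s_{i-1} + x_i - d_i   (s_0 = 0)
        prev = [(S[i - 1], -1)] if i > 0 else []
        eq([(S[i], 1), (X[i], -1)] + prev, -demand[i])
    eq([(S[n - 1], 1)], 0)               # warehouse empty at the end of the year

    # overtime limit:  o_i <= 6 w_i
    A_ub = np.zeros((n, nv))
    b_ub = np.zeros(n)
    for i in range(n):
        A_ub[i, O[i]] = 1
        A_ub[i, W[i]] = -max_ot

    return linprog(c, A_ub=A_ub, b_ub=b_ub,
                   A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                   bounds=(0, None), method="highs")

# Made-up demand figures in the 440..920 range, purely for illustration:
demand = [440, 460, 500, 560, 610, 680, 750, 820, 900, 920, 780, 620]
res = solve_carpet_plan(demand)
print(round(res.fun, 2))                 # total cost
print(np.round(res.x[:12], 2))           # workers per month -- may be fractional!

The split into A_eq/b_eq and A_ub/b_ub is just how this solver wants the model: equalities and <= inequalities are passed separately, with the non-negativity handled by the bounds argument.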
LP is in PTIME; mixed ILP is NP-complete!

Many other restrictions on the LP problem, however, do not change the nature of LP. (Sketches of these tricks follow below.)
- Suppose the objective function is to MAXIMIZE, but your solver only handles MINIMIZATION. What to do?
  - negate the objective function (and negate the optimal value back)
- What if you want to express an equality, a = b + c, but the solver only has inequalities?
  - use two inequalities: a <= b + c and a >= b + c
- What if you want to allow a variable to take on both positive and negative values, but your solver only handles non-negative variables?
  - Tricky: replace xi by xi^+ - xi^- for two new non-negative variables xi^+ and xi^-
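A minimal sketch of the first two tricks, on the small example from the start of the lecture, using scipy.optimize.linprog (my choice of solver), which only minimizes and only accepts inequalities in the <= direction:

from scipy.optimize import linprog

# feasible region from the earlier example, rewritten as <= constraints:
#   x1 >= 2        becomes  -x1 <= -2
#   x2 >= 1        becomes  -x2 <= -1
#   x1 + x2 <= 4   stays as is
A_ub = [[-1, 0],
        [0, -1],
        [1, 1]]
b_ub = [-2, -1, 4]

# maximize x2  ==  minimize -x2, then negate the optimal value back
res = linprog(c=[0, -1], A_ub=A_ub, b_ub=b_ub)
print(res.x, -res.fun)       # optimum at (2, 2), maximum x2 = 2

# minimize x1 + x2
res = linprog(c=[1, 1], A_ub=A_ub, b_ub=b_ub)
print(res.x, res.fun)        # optimum at (2, 1), minimum value 3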
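And a minimal sketch of the positive/negative split, on a toy instance of my own (minimize x subject to x >= -3, where x may be negative), for a solver restricted to non-negative variables:

from scipy.optimize import linprog

# write the free variable x as xp - xm, with xp, xm >= 0:
#   minimize x    ->  minimize xp - xm
#   x >= -3       ->  -(xp - xm) <= 3
res = linprog(c=[1, -1], A_ub=[[-1, 1]], b_ub=[3], bounds=(0, None))
xp, xm = res.x
print(xp - xm)               # recovered x = -3

(scipy can actually handle free variables directly through its bounds argument; the split is what you need when the solver cannot.)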