BA1 terminology

Steps to solve an LP problem with Python PuLP

1. Create lists for the decision variables and the parameters (constraints).
2. Create the optimization model: model = LpProblem("model_noma", LpMinimize) (or LpMaximize).
3. Create the decision variables, e.g. a list of LpVariable objects: x = [LpVariable(...), ...].
4. Insert the objective function into the model: model += lpSum(xij * cij).
5. Insert the constraints into the model: model += lpSum(xij) <= constraint.
6. Solve the model: model.solve().
7. Print the results: model.objective.value() and variable.varValue.
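
A minimal runnable sketch of these seven steps, assuming hypothetical cost coefficients and a demand constraint so the minimization is non-trivial (requires pip install pulp):

from pulp import LpProblem, LpMinimize, LpVariable, lpSum

costs = [4, 3, 5]    # 1. parameter list (hypothetical unit costs)
demand = 6           # hypothetical total demand

model = LpProblem("model_noma", LpMinimize)                       # 2. create model
x = [LpVariable(f"x{i}", lowBound=0) for i in range(len(costs))]  # 3. decision variables
model += lpSum(c * xi for c, xi in zip(costs, x))                 # 4. objective
model += lpSum(x) >= demand                                       # 5. constraint
model.solve()                                                     # 6. solve
print(model.objective.value())                                    # 7. results
for xi in x:
    print(xi.name, xi.varValue)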

Markov Chain Monte Carlo

A class of algorithms for sampling from a probability distribution by constructing a Markov chain whose equilibrium distribution is the desired distribution.

Traveling Salesman Problem (TSP)

A classic optimization problem that involves finding the shortest route that visits each city in a given set exactly once and returns to the starting city.

Monte Carlo Simulation

A computational technique that uses random sampling to obtain numerical results, often used in optimization to model uncertainty.

Binding Constraint

A constraint in a linear programming problem that is satisfied with equality (is active) at the optimal solution; binding constraints determine the optimal solution, and their shadow prices can be nonzero.

Redundant Constraint

A constraint in a linear programming problem that does not affect the feasible region's shape and can be removed without changing the solution.

Decision variable

A controllable input for a linear programming model: x = (x1, x2, x3, ..., xm).

Integer Solution

A solution to an optimization problem where all decision variables take integer values, typically used in integer programming.

Pareto Efficiency

A state where no individual or preference criterion can be improved without worsening at least one other criterion.

Linear Regression

A statistical method used in optimization to model the relationship between a dependent variable and one or more independent variables.

Integer Programming

A type of mathematical optimization where some or all decision variables are required to be integers.

Integer Linear Programming

A type of optimization problem where the objective function and constraints are linear, and some or all decision variables must be integers.

Convex Optimization

A type of optimization where the feasible region forms a convex set, and the objective function is convex.

Artificial Variable / Dummy Variable

A variable added to a linear programming problem to obtain an initial feasible solution (e.g., in the Big-M or two-phase simplex methods); it must be driven to zero for the solution to be feasible in the original problem.

Slack Variable

A variable introduced to transform a ≤ inequality constraint into an equation in linear programming, representing the unused amount of the resource (the gap between the constraint's left-hand and right-hand sides).

Surplus Variable

A variable introduced to transform a ≥ inequality constraint into an equation in linear programming, representing the amount by which the left-hand side exceeds the right-hand side.

Decision Variable

A variable that decision-makers can control in an optimization problem.

Corner Point

A vertex or extreme point of the feasible region in linear programming, where two or more constraints intersect.

Can adding a constraint improve model? Removing a constraint can improve model?

Adding a constraint can never improve the objective function value: either the new constraint makes the feasible region smaller, or the constraint is redundant. You can find the same solution, but not a better one. Removing a constraint can improve the objective value, since the feasible region becomes bigger (or stays the same if the constraint was redundant).

shadow price

The amount by which the optimal value of the objective function changes per one-unit increase in the right-hand side (RHS) value of a constraint.
- E.g., added revenue (10 k€) per unit (kton) increase in the available amount of iron ore.
- If a constraint is not binding, then its shadow price is zero.

Nonlinear Programming

An approach to optimization problems where the objective function or constraints involve non-linear relationships between variables.

Robust Optimization

An optimization approach that seeks solutions that perform well under a range of uncertain scenarios, rather than just a single scenario.

Infeasible Problem

An optimization problem that has no feasible solution, meaning there is no combination of decision variables that satisfies all constraints.

Unbounded Problem

An optimization problem that has no finite optimal solution; the objective function can increase or decrease indefinitely without violating constraints.

BLP

Binary Linear Programming - decision variables restricted to binary values (i.e. 0 or 1).
- Pure BLP: all the decision variables are binary.
- Mixed BLP: some decision variables are binary.
Since binary variables only provide two choices, they are ideal for modelling yes-or-no (go/no-go, continue/discard, etc.) decisions. Constraints can then be used to capture logical dependencies between these decisions, as in the sketch below.
Examples (with xi = 1 if and only if project i is started, otherwise xi = 0):
- At most k out of n projects can be started: x1 + ... + xn ≤ k
- Project j is conditional on project i: xj - xi ≤ 0
- Projects i and j are mutually exclusive: xi + xj ≤ 1
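
A sketch of these logical constraints in PuLP, using a hypothetical 5-project selection problem (the profits and project indices are made up for illustration):

from pulp import LpProblem, LpMaximize, LpVariable, lpSum

n, k = 5, 2
profit = [3, 5, 4, 6, 2]                        # hypothetical project profits
model = LpProblem("projects", LpMaximize)
x = [LpVariable(f"x{i}", cat="Binary") for i in range(n)]
model += lpSum(p * xi for p, xi in zip(profit, x))
model += lpSum(x) <= k            # at most k out of n projects can be started
model += x[3] - x[1] <= 0         # project 3 is conditional on project 1
model += x[0] + x[2] <= 1         # projects 0 and 2 are mutually exclusive
model.solve()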

MOP dominating solution

Consider a MOP problem with n objective functions f1(x), ..., fn(x) to be maximized. Solution x dominates solution x′ if
1. fi(x) ≥ fi(x′) for all i ∈ {1, ..., n} (same or better), and
2. fi(x) > fi(x′) for some i ∈ {1, ..., n} (strictly better).
A feasible solution x is efficient if it is not dominated by any other feasible solution. Selecting between efficient solutions is a question of trade-offs and preferences. A minimal check is sketched below.
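
A minimal dominance check for a maximization MOP, assuming each solution's objective values are given as an equal-length tuple:

def dominates(f_x, f_x_prime):
    # Same or better in every objective, strictly better in at least one
    at_least_as_good = all(a >= b for a, b in zip(f_x, f_x_prime))
    strictly_better = any(a > b for a, b in zip(f_x, f_x_prime))
    return at_least_as_good and strictly_better

print(dominates((3, 5), (3, 4)))  # True: equal in one objective, better in the other
print(dominates((3, 5), (4, 4)))  # False: worse in the first objective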

Cautionary note about the weighted sum approach

Every solution generated by the weighted sum approach is efficient, assuming all weights are strictly positive. However, if the feasible region is not convex, there can be efficient solutions that the weighted sum approach cannot find; these solutions do not maximize the weighted sum for any weights. For instance, MILP, ILP and BLP problems do not usually have a convex feasible region.

infeasible problem

Has no feasible solution area: every possible solution point violates one or more constraints. An over-constrained LP with no point that satisfies all the constraints (i.e., the feasible region is empty).

the "reduced cost" in sensitivity report

The improvement needed in the objective function coefficient of a zero-valued variable: the amount by which the coefficient would have to improve before the variable's optimal value would become positive. Improved = increased for maximization problems and decreased for minimization problems.
MAX: "How much would the profit of product x1 have to increase before it would be optimal to produce it?"
MIN: "How much would the cost of product x1 have to decrease before it would be optimal to produce it?"
The reduced cost of a decision variable with a positive optimal value is zero, meaning we are already producing it (its value is not zero).

left-hand side

In the context of linear programming, the "left-hand side" (LHS) refers to the expressions on the left side of the inequality constraints. When we represent a linear programming problem, the constraints are typically written in the form: a1​x1 ​ +a2​x2 ​+ ... + an​xn ​≤ b where b is the right hand side

LP Network models

Linear Programming: Network models
▪ "How to distribute products from manufacturing to end-customers?"
▪ "How to assign workers with different skillsets to specific tasks?"
▪ These decisions can be supported by network models = Linear Programming (LP) models with a special network structure.
General relationship between LP formulation and network structure:
- Decision variables = arcs (and their coefficients, e.g. costs)
- Constraints = nodes
Examples:
▪ Transportation problem (node - arc - node)
▪ Transshipment problem (node - arc - node - arc - node)
▪ Assignment problem

Linear constraint

Linear constraints are linear functions that are restricted to be "less than or equal to" (≤), "equal to" (=), or "greater than or equal to" (≥) a constant. In this course we only have ≤, ≥, and = constraints, i.e., we do not use strict constraints < or >.

MILP Fixed-Charge Problem

Mixed Integer Linear Programming: Fixed Charge. A problem in which there is a fixed cost, modeled with a binary variable, in addition to variable costs (continuous). There are two types of costs: variable costs that depend on the quantity of a decision variable, and fixed costs that are incurred regardless of the quantity produced or used. The fixed cost (and the variable cost) is only activated if that option is chosen, as in the sketch below.
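
A PuLP sketch of a fixed-charge term with hypothetical costs: a variable cost of 20 per unit, a fixed cost of 100 if any production happens, and a big-M linking constraint that activates the fixed cost:

from pulp import LpProblem, LpMinimize, LpVariable

M = 1000                           # upper bound on production (big-M)
model = LpProblem("fixed_charge", LpMinimize)
x = LpVariable("x", lowBound=0)    # continuous production quantity
y = LpVariable("y", cat="Binary")  # 1 if the option is chosen
model += 20 * x + 100 * y          # variable cost + fixed cost
model += x <= M * y                # x can be positive only if y = 1
model += x >= 30                   # hypothetical demand
model.solve()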

MILP

Mixed Integer Linear Programming - Some of the decision variables are integers

Network Flow Optimization

Optimization problems involving the flow of resources through a network, such as transportation or telecommunication networks.

several optimal solutions to an LP problem

Several alternative optimal solutions exist if all points on the line segment between two extreme points yield the optimal objective function value.

LP Sensitivity Analysis

Solving an LP problem with the Simplex algorithm produces other information about the problem besides the optimal solution. This information gives insight into how the optimal solution is affected by changes in:
- the objective function coefficients
- the constraints' right-hand side (RHS) values
The sensitivity report allows the manager to ask certain what-if questions about the problem. Sensitivity analysis gives information on what would happen if only one coefficient were changed, i.e., it is assumed that all other parameters of the problem remain unchanged.

Linear Combination

The combination of variables with constant coefficients, typically seen in objective functions and constraints in linear programming.

Lagrange multiplier

The dual value of a constraint, analogous to the shadow price. It reflects the approximate change in the objective function resulting from a unit change in the quantity (right-hand-side) value of the constraint equation. In LP, the shadow price is the rate of change in the objective function as the RHS of a constraint increases (all other data unchanged); this rate is constant over a range of RHS values (the "range of feasibility"). In NLPs this rate is called the "Lagrange multiplier". However, in NLP the rate does not generally remain constant; it can be guaranteed to hold only at the current RHS value (so it is less useful).

objective function

The function being maximized or minimized in Linear Programming

Optimization

The process of making something as effective or functional as possible. In business analytics, it often involves maximizing or minimizing an objective function subject to constraints.

Feasibility

The property of a solution that satisfies all the constraints in an optimization problem.

range of optimality

The range of values over which an objective function coefficient may vary without causing any change in the values of the decision variables in the optimal solution (optimal solution = optimal values of the decision variables). Equivalently, the range between the allowable decrease and the allowable increase for the variable's coefficient (a similar range exists for the constraints' RHS values; see range of feasibility). "How much can we tilt the objective function before the optimal solution is no longer optimal?"

feasible region

The set of all feasible solutions is called the feasible region: the area in a graph that is left between the constraints.

Search Space

The set of all possible combinations of decision variables in an optimization problem.

Pareto Front

The set of solutions in multi-objective optimization that represents the best trade-offs between conflicting objectives.

Transportation problem

Transportation Problem: General Characteristics
▪ A common problem in logistics is how to transport goods from a set of sources (e.g., plants, warehouses) to a set of destinations (e.g., warehouses, customers) at the minimum possible cost.
▪ Nodes (constraints):
- a set of sources, each with a given supply
- a set of destinations, each with a given demand
▪ Arcs (decision variables):
- possible transport routes between sources and destinations, each with a shipping cost
▪ Objective:
- to determine how much should be shipped from each source node to each destination node so that total transportation costs are minimized.
- If total supply > total demand, then not all supply will be used.
- If total supply < total demand, then the problem is infeasible.
A PuLP sketch follows below.
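
A PuLP sketch of a small transportation problem with hypothetical supplies, demands, and unit shipping costs; the arcs are the decision variables and the nodes give the constraints:

from pulp import LpProblem, LpMinimize, LpVariable, lpSum

supply = {"plant_A": 40, "plant_B": 60}
demand = {"cust_1": 30, "cust_2": 50}
cost = {("plant_A", "cust_1"): 2, ("plant_A", "cust_2"): 4,
        ("plant_B", "cust_1"): 3, ("plant_B", "cust_2"): 1}

model = LpProblem("transport", LpMinimize)
ship = {(s, d): LpVariable(f"ship_{s}_{d}", lowBound=0) for (s, d) in cost}
model += lpSum(cost[a] * ship[a] for a in ship)
for s in supply:    # source nodes: ship no more than the supply
    model += lpSum(ship[s, d] for d in demand) <= supply[s]
for d in demand:    # destination nodes: meet the demand
    model += lpSum(ship[s, d] for s in supply) >= demand[d]
model.solve()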

MOP Weighted sum approach

Weighted sum formulation of a MOP:
max Σi wi fi(x1, ..., xm), i = 1, ..., n
subject to the constraints on the decision variables x1, ..., xm.
Weighted sum approach:
1. Select (at random) strictly positive weights w1, ..., wn for the objective functions.
2. Solve the resulting single-objective optimization problem; its solution is efficient.
Repeat steps 1 and 2 until enough efficient solutions have been found, as in the sketch below.
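
A sketch of the weighted sum loop with two hypothetical linear objectives, f1 = 3x1 + x2 and f2 = x1 + 4x2, on a shared constraint; each draw of strictly positive weights yields one efficient solution:

import random
from pulp import LpProblem, LpMaximize, LpVariable

for _ in range(5):
    w1, w2 = random.uniform(0.1, 1), random.uniform(0.1, 1)  # positive weights
    model = LpProblem("weighted_sum", LpMaximize)
    x1 = LpVariable("x1", lowBound=0)
    x2 = LpVariable("x2", lowBound=0)
    model += w1 * (3 * x1 + x2) + w2 * (x1 + 4 * x2)         # weighted sum
    model += x1 + x2 <= 10                                   # shared constraint
    model.solve()
    print(w1, w2, x1.varValue, x2.varValue)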

Rounding non-integer solutions / Use cases when to round and when not

When are non-integer solutions okay?
▪ The solution is naturally divisible (e.g., $, pounds, hours).
▪ The solution represents a rate (e.g., units per week).
When is rounding okay?
▪ When the numbers are large: e.g., rounding 114.286 to 114 is probably okay.
When is rounding not okay?
▪ When the numbers are small: e.g., rounding 2.6 to 2 or 3 may be a problem.
▪ Binary variables: yes-or-no decisions.

range of feasibility

The range of values for the RHS of a constraint over which the shadow price remains the same. E.g., at some point additional iron ore will no longer increase revenues, because other constraints limit production.

unbounded problem

The objective function can increase indefinitely without reaching a maximum value: the objective function value can be improved without bound within the feasible region, which extends without limit in the direction of optimization.

Non-negativity Constraint

A constraint in linear programming that requires decision variables to be greater than or equal to zero, reflecting real-world limitations.

redundant constraint

A constraint that does not affect the feasible region: if a constraint is redundant, it can be removed from the problem without changing the feasible region.

Constraint Programming

A declarative programming paradigm for solving combinatorial problems by expressing constraints on the possible solutions.

Concave function

A function shaped like a bowl turned upside down. A function of two variables for which the line segment between any two points on the function lies entirely below the curve representing the function (the function is convex when the line segment lies above the function). In short, a function is concave if a line segment connecting any two points on it lies below the function.

Convex function

A function shaped like a bowl opening upward. A function of two variables for which the line segment between any two points on the function lies entirely above the curve representing the function. In short, a function is convex if a line segment connecting any two points on it lies above the function.

Piecewise-linear function

A function written using two or more linear expressions.

constraint

A limit on the objective function, the decision variables, or other aspects of the situation: such things as appearance, funding, space, materials, and human capabilities.

Local optimum

A local optimum is a solution that is the best within a specific neighborhood or region of the solution space: it may be superior to nearby solutions but need not be the best solution globally. Optimization algorithms, especially local search methods like GRG Nonlinear in Excel, may converge to a local optimum, and escaping such points is essential for finding the global optimum in certain cases. Most NLP algorithms terminate when they have found a locally optimal solution, i.e., a feasible solution such that all neighboring feasible solutions are worse. We will never reach the promised land of very deep banana-shaped valleys. :( Special case: if the NLP is "convex" or "concave", then any locally optimal solution is a globally optimal solution.

Objective Function

A mathematical expression representing the goal to be maximized or minimized in an optimization problem.

Markowitz Portfolio optimization

A mathematical framework for constructing an investment portfolio that aims to maximize expected returns while minimizing risk. The key idea is diversification, spreading investments across different assets to achieve a balance between risk and return. The process involves analyzing the expected returns and volatility (risk) of individual assets, as well as the correlations between them. By combining assets with low or negative correlations, the portfolio can achieve a higher level of diversification, which helps to reduce overall risk. The optimal portfolio is identified at the point where the investor achieves the highest expected return for a given level of risk (or the lowest risk for a given level of expected return) based on their risk tolerance. This is often depicted graphically as the "efficient frontier," a curve that represents the set of optimal portfolios for different levels of risk.

Lagrange Multiplier

A method for finding the local maxima and minima of a function subject to equality constraints by introducing a multiplier for each constraint.

Lagrangian Relaxation

A method for solving optimization problems by relaxing some constraints and solving an easier problem, then adjusting the solution to satisfy the original constraints.

Linear Programming

A method to achieve the best outcome (such as maximum profit or minimum cost) in a mathematical model whose requirements are represented by linear relationships. If both the objective function and the constraints are linear, the problem is referred to as a linear programming (LP) problem

Assignment problem

A network flow problem that involves assigning agents to tasks; it can be formulated as a linear program and is a special case of the transportation problem. The problem of assigning agents (people, machines) to a set of tasks is called an assignment problem.
Problem components:
• a set of agents
• a set of tasks
• a cost table (the cost of each agent performing each task)
Objective: allocate agents to the tasks so that all tasks are performed at the minimum possible cost, as in the sketch below.
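
A PuLP sketch of a small assignment problem with a hypothetical 3×3 cost table; x[i][j] = 1 if agent i performs task j:

from pulp import LpProblem, LpMinimize, LpVariable, lpSum

cost = [[9, 2, 7],
        [6, 4, 3],
        [5, 8, 1]]                      # hypothetical agent-task costs
n = len(cost)
model = LpProblem("assignment", LpMinimize)
x = [[LpVariable(f"x_{i}_{j}", cat="Binary") for j in range(n)] for i in range(n)]
model += lpSum(cost[i][j] * x[i][j] for i in range(n) for j in range(n))
for i in range(n):
    model += lpSum(x[i][j] for j in range(n)) == 1   # each agent does one task
for j in range(n):
    model += lpSum(x[i][j] for i in range(n)) == 1   # each task gets one agent
model.solve()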

Local Minimum

A point in the solution space where the objective function has a lower value than in its immediate vicinity, but not necessarily the lowest overall.

Simplex Method

A popular algorithm for solving linear programming problems by moving from one feasible solution to another along the edges of the feasible region.

Heuristic

A problem-solving approach that uses practical rules or guidelines to find good solutions, but not necessarily optimal ones.

Sensitivity Report

A report generated after solving a linear programming problem that provides information on how changes in coefficients or constraints affect the optimal solution.

Heuristic Algorithm

A rule-of-thumb method or procedure that may not guarantee an optimal solution but is efficient in finding good solutions for complex problems.

A Convex Set

A set in which every line segment that connects two points of the set lies entirely in the set: for any points A and B in the set, the entire segment connecting A and B is contained in the set. Remember, there is no such thing as a concave set.

Linear Independence

A set of vectors is linearly independent if none of the vectors in the set can be represented as a linear combination of the others.

Heuristic Solution

A solution obtained using a heuristic, which is a practical approach to find reasonably good solutions quickly.

Feasible Solution

A solution that satisfies all the constraints of an optimization problem.

feasible solution

A solution that satisfies all the constraints. Not the same as optimal solution. There can be many feasible solutions.

Infeasible Solution

A solution that violates one or more constraints in an optimization problem.

Linear Equation

An equation in which each term is a constant or the product of a constant and a single variable, with the variable having an exponent of 1.

Mixed-Integer Programming

An extension of integer programming where some variables are required to be integers, and others can take on continuous values.

Transshipment Problem

An extension of the transportation problem to distribution problems involving transfer points and possible shipments between any pair of nodes. A transportation problem in which shipments may move through intermediate nodes (transshipment nodes) before reaching a particular destination node.

Linear Inequality

An inequality in which the variables have a degree of 1, and there are no products of variables.

Gradient Descent

An iterative optimization algorithm for finding the minimum of a function by moving in the direction of steepest descent.
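
A minimal sketch of gradient descent on the hypothetical function f(x) = (x - 3)^2, whose derivative is 2(x - 3); the step size (learning rate) is a chosen constant:

def gradient_descent(x0, lr=0.1, steps=100):
    x = x0
    for _ in range(steps):
        grad = 2 * (x - 3)   # derivative of f(x) = (x - 3)**2
        x -= lr * grad       # step in the direction of steepest descent
    return x

print(gradient_descent(0.0))  # converges towards the minimizer x = 3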

Interior Point Method

An optimization algorithm that moves through the interior of the feasible region to find the optimal solution, rather than moving along the edges like the simplex method.

Evolutionary algorithm

Evolutionary algorithms are heuristic: they provide a feasible solution with a "good" objective value, but no guarantee that it is optimal. A large set of solutions ("population") is simulated through multiple iterations ("generations"). On each iteration:
• solutions with the best objective function values ("fitness") are combined to produce new solutions ("reproduction");
• random changes are made to some solutions ("mutation");
• infeasible solutions and those with poor objective function values ("unfit") are deleted.
A toy sketch follows below.
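
A toy sketch of these ingredients, maximizing the hypothetical fitness f(x) = -(x - 3)^2 over a population of floats; the population size, mutation scale, and generation count are made up:

import random

def fitness(x):
    return -(x - 3) ** 2

population = [random.uniform(-10, 10) for _ in range(20)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]                     # delete the unfit half
    children = [(random.choice(survivors) + random.choice(survivors)) / 2
                for _ in range(10)]                 # reproduction (combine parents)
    children = [c + random.gauss(0, 0.5) for c in children]  # mutation
    population = survivors + children
print(max(population, key=fitness))                 # good, but no optimality guarantee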

Summary: Sensitivity Report for Constraints

Final Value
▪ The usage of the resource in the optimal solution, i.e., the left-hand side of the constraint.
Shadow Price
▪ The change in the value of the objective function per unit increase in the right-hand side (RHS) of the constraint: ΔZ = (Shadow Price) × (ΔRHS). (Note: only valid if the change is within the allowable range.)
Constraint R.H. Side
▪ The current value of the right-hand side of the constraint.
Allowable Increase/Decrease
▪ Defines the range of RHS values for which the shadow price is valid, and hence for which the new objective function value can be calculated.

Summary: Sensitivity Report for the Objective Function

Final Value
▪ The values of the decision variables (changing cells) in the optimal solution.
Reduced Cost
▪ The improvement needed in the objective function coefficient of a zero-valued variable.
Objective Coefficient
▪ The current value of the objective coefficient.
Allowable Increase/Decrease
▪ Defines the range of objective function coefficients for which the current solution (the values of the decision variables, or changing cells, in the optimal solution) will not change.

Efficient solution to a MOP problem

Generally, there does not exist a feasible solution that simultaneously optimizes all the objective functions, i.e., there is no single optimal solution. A feasible solution to a MOP problem is efficient if:
1. there does not exist another feasible solution which yields a better or equal value in each objective function, AND
2. there does not exist another feasible solution which yields a strictly better value in some objective function.
It may be easier to test whether a solution is NOT efficient: a feasible solution to a MOP problem is NOT efficient if there exists another feasible solution which yields a better or equal value in each objective function and a strictly better value in some objective function.

Convex NLPs

If in an NLP the LHS function of each ≤ constraint is convex, then the feasible region is convex (e.g., for a constraint f(x) = LHS ≤ RHS). Convex NLP: ▪ a convex objective function is minimized ▪ the feasible region is convex. Property of convex (and concave) NLPs: the solver is guaranteed to find a global optimum.

Concave NLPs

If in an NLP the LHS function of each ≥ constraint is concave, then the feasible region is convex (e.g., for a constraint f(x) = LHS ≥ RHS). Concave NLP: ▪ a concave objective function is maximized ▪ the feasible region is convex. Property of concave (and convex) NLPs: the solver is guaranteed to find a global optimum.

Dominance / Dominating solution

In multi-objective optimization, a solution is said to dominate another if it is better in at least one objective and not worse in any other.

Expected return

In the context of an investment portfolio, "expected return" refers to the anticipated average or mean return that an investor can reasonably expect to achieve from the portfolio over a specific period of time. It is a measure that helps investors assess the potential profitability of their investment strategy. The expected return is typically expressed as a percentage and represents the average gain or loss that an investor might experience based on historical performance, financial analysis, or other relevant factors. To calculate the expected return of a portfolio, investors often consider the weighted average of the expected returns of individual assets within the portfolio. The weights are determined by the proportion of each asset's value relative to the total portfolio value. Investors use expected return as a key input in portfolio optimization and decision-making processes. When combined with the concept of risk (volatility), as in the Markowitz Portfolio Optimization framework, investors can seek a balance between maximizing returns and minimizing risk to construct portfolios that align with their financial goals and risk tolerance.

Standard deviation

In the context of an investment portfolio, standard deviation is a statistical measure that quantifies the amount of variability or dispersion of a set of returns from their mean (average): the degree to which returns fluctuate around the portfolio's average return. Standard deviation is commonly used as a measure of risk in finance and investment analysis.
Volatility measurement: a higher standard deviation implies greater variability in the portfolio's returns, often referred to as volatility. Investors generally associate higher volatility with higher risk, because the portfolio's returns can deviate significantly from the expected or average return.
Risk assessment: investors use standard deviation to assess the potential for both positive and negative outcomes. A higher standard deviation means a wider range of possible returns, indicating a riskier investment. When constructing a portfolio, investors often aim to balance risk and return; portfolios with lower standard deviation are generally preferred, especially by investors who prioritize capital preservation.
Portfolio diversification: standard deviation is a critical component of diversification. Including assets with low or negatively correlated returns can reduce the overall standard deviation of the portfolio.
In short, standard deviation lets investors quantify and manage risk, providing insight into the potential range of returns given their risk tolerance and investment objectives.
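
A short numpy sketch tying this entry and the expected-return entry together; the weights, expected returns, and covariance matrix are all hypothetical:

import numpy as np

weights = np.array([0.5, 0.3, 0.2])         # portfolio weights (sum to 1)
exp_returns = np.array([0.08, 0.12, 0.05])  # expected returns of the assets
cov = np.array([[0.04, 0.01, 0.00],
                [0.01, 0.09, 0.02],
                [0.00, 0.02, 0.03]])        # covariance matrix of asset returns

portfolio_return = weights @ exp_returns           # weighted average return
portfolio_std = np.sqrt(weights @ cov @ weights)   # sqrt of portfolio variance
print(portfolio_return, portfolio_std)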

Dual Problem

In the context of linear programming (LP), the dual problem is a mathematical formulation associated with an original (primal) LP problem. The dual problem provides insight into the relationship between the primal and dual LP problems and is a key concept in duality theory. Key points:
- For a primal maximization problem, the dual is obtained by minimizing the sum of the products of the dual variables and the right-hand side constants, subject to constraints derived from the coefficients of the primal problem.
- The coefficients of the dual objective function are the right-hand side constants of the primal constraints, and the coefficients of the dual constraints are related to the coefficients of the primal objective function.
- The dual problem provides insight into the resource values (shadow prices) associated with the constraints of the primal problem. These shadow prices represent the rate at which the objective function of the primal problem changes with a one-unit increase in the right-hand side of the corresponding constraint.
- The strong duality theorem ensures that, under certain conditions, the optimal values of the primal and dual problems are equal.
In summary, the dual problem offers a different perspective on the optimization problem, with valuable information about resource values and the sensitivity of the optimal solution to changes in the right-hand side constants.

BLP Applications

Investment Analysis
• Should we make a certain fixed investment?
Site Selection
• Should a certain site be selected for the location of a new facility?
Designing a Production and Distribution Network
- Should a certain plant remain open?
- Should a certain site be selected for a new plant (or distribution center)?
- Should a distribution center remain open?
- Should a certain distribution center be assigned to serve a certain market area?
Scheduling Interrelated Activities
- Should a certain activity begin in a certain time period?
Airline Applications
• Should a certain type of airplane be assigned to a certain flight leg?
• Should a certain sequence of flight legs be assigned to a crew?

Different kinds of problems seen in assignments

LP - Linear Programming
- Portfolio selection
- Workforce planning
- Production scheduling (production + inventory)
- Blending problem (grades of fuel)
- Dual problem
- Transportation problem (mill-warehouse)
- Transshipment problem (mill-warehouse-customer)
- Assignment problem (assign jobs)
BLP - Binary Linear Programming
- Project portfolio (standardization between two different units + binary y)
- Covering problem (states, cities, countries)
MILP - Mixed Integer Linear Programming
- Fixed cost (x1, x2, x3 and also y1, y2, y3 where y = binary)
NLP - Non-Linear Programming
- Transportation with non-linear costs
- Location with non-linear variables and/or constraints
- Resource allocation with non-linear variables and/or constraints
- Portfolio optimization with non-linear variables and/or constraints

MOP

Multi-objective programming problems: optimization/programming problems with multiple objective functions are called multi-objective (MO).
MOLP = Multi-Objective Linear Programming
MOILP = Multi-Objective Integer Linear Programming
MOZOLP = Multi-Objective Zero-One (i.e. binary) Linear Programming
MONLP = Multi-Objective Non-Linear Programming (objective function or constraints not linear)
MOINLP = Multi-Objective Integer Non-Linear Programming

Dummy variable / Use cases

New "dummy" variable 𝑥𝑠 added to the model to "activate" for example possible synergy benefits. "If x1 and x2 are selected cost will drop 10€" We will assign coefficient -10 to dummy variable xs. Total costs with possible synergy : 20x1 + 30x1 -10xs We need a constraint to make sure that dummy variable xs is selected if and only if x1 and x2 are selected: x1 + x2 - 2xs >= 0 (if both or none of x1 and x2 is selected) x1 + x2 - 2xs <= 1 (if only one of x1 or x2 selected)

Iteration

One complete step of an optimization algorithm, often involving the improvement of the current solution to get closer to the optimal solution.

Optimal solution will not change based on these

The optimal solution will not change if:
- a constant is added to the objective function,
- the objective function is multiplied by a positive constant,
- the objective function is multiplied by a negative constant and min is changed to max and/or max is changed to min.

Non-linear Optimization

Optimization problems where the objective function or constraints involve non-linear relationships, such as quadratic or exponential functions.

Portfolio Optimization

Optimizing the allocation of assets in an investment portfolio to maximize returns or minimize risk.

Supply Chain Optimization

Optimizing the processes involved in the production and distribution of goods to minimize costs and maximize efficiency.

Pure ILP

Pure Integer Linear Programming - All the decision variables are integers

Range of optimality calculation

Range of optimality for c1 (the coefficient of x1), for an objective function c1·x1 + 7·x2 with binding constraints x1 + x2 = 8 and 2x1 + 3x2 = 19:
1. The slope of the objective function line is -c1/7.
2. The slope of the first binding constraint, x1 + x2 = 8, is -1.
3. The slope of the second binding constraint, 2x1 + 3x2 = 19, is -2/3.
4. For the current extreme point to remain optimal, the objective slope must stay between the two binding-constraint slopes: -1 ≤ -c1/7 ≤ -2/3.
Multiplying by -7 and reversing the inequalities gives 14/3 ≤ c1 ≤ 7 (this is the range of optimality). When the original c1 = 5: 5 - 14/3 ≈ 0.333 (allowable decrease) and 7 - 5 = 2 (allowable increase). A quick numeric check follows below.
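
A quick Python check of the arithmetic above (values taken from the worked example):

c1 = 5
lower, upper = 14 / 3, 7          # from -1 <= -c1/7 <= -2/3
print(lower <= c1 <= upper)       # True: the current c1 = 5 lies in the range
print(c1 - lower)                 # allowable decrease, about 0.333
print(upper - c1)                 # allowable increase, 2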

Implications of relaxing (integrality) constraints

Relaxing integrality constraints means allowing decision variables that were initially required to be integers to take fractional values. This relaxation is often done to facilitate the solution of a problem or to explore solutions that are not restricted to integer values.
Improved solvability: problems with integer variables can be computationally challenging. Relaxing the integrality constraints and solving the resulting linear program (LP) is more efficient, and the relaxed LP provides a bound on the optimal objective value of the original integer programming problem (a lower bound for minimization, an upper bound for maximization).
Continuous and fractional solutions: relaxation allows decision variables to take fractional values, resulting in a continuous solution space. This can give a more detailed view of the feasible region and potential solutions.
Rounding solutions: after obtaining a solution to the relaxed LP, a common practice is to round the fractional values to the nearest integers to obtain a feasible solution for the original integer programming problem. However, rounding may not yield a feasible or optimal integer solution.
Optimality gap: the difference between the optimal objective value of the relaxed LP and that of the original integer programming problem is called the optimality gap. A smaller gap suggests that the fractional solution is close to the optimal integer solution.
Note that relaxing integrality constraints does not guarantee an optimal integer solution; additional techniques, such as branch-and-bound, cutting planes, or heuristics, may be needed to find one.

Slack and Surplus Variables aka additional decision variables

Slack and surplus variables represent the difference between the left and right sides of the constraints. A linear program in which all the variables are non-negative and all the constraints are equalities is said to be in standard form. Standard form is obtained by adding slack variables to ≤ constraints and subtracting surplus variables from ≥ constraints. Slack/surplus variables have objective function coefficients equal to zero, since they cannot affect the objective function value.

Constraints

Restrictions or limitations in an optimization problem that must be satisfied.

Steps in solving linear programming problem (with only two variables) graphically

Step 1: Formulate the objective function. Write down the objective function, the linear expression representing the quantity to be maximized or minimized. For example, if x and y are the decision variables, the objective function may look like Z = ax + by, where a and b are coefficients.
Step 2: Write down the constraints. Identify and write down all the constraints as linear inequalities. Each constraint represents a restriction on the decision variables. For example, the constraint 2x + 3y ≤ 12 means that the combination of x and y must satisfy this inequality.
Step 3: Plot the constraints. Graph each constraint on the coordinate plane. To do this, rearrange each inequality to the form y = mx + b and plot the corresponding line. Determine whether the region above or below the line satisfies the inequality.
Step 4: Identify the feasible region. The feasible region is the overlapping region that satisfies all the constraints, i.e., the set of points (x, y) that satisfy all the constraints simultaneously.
Step 5: Plot the objective function. Determine the direction of improvement for the objective function: for maximization, moving towards higher values; for minimization, towards lower values. Plot the objective function on the same graph.
Step 6: Find the optimal solution. The optimal solution is the point within the feasible region where the objective function is optimized. This is often a corner point (vertex) of the feasible region.
Step 7: Check corner points. Evaluate the objective function at each corner point of the feasible region. The corner point that maximizes or minimizes the objective function is the optimal solution.

GRG algorithm logic in Excel Solver

The GRG algorithm in Solver employs a gradient search, akin to "hill-climbing." Beginning with an initial solution, it computes a direction that maximally enhances the objective function. The solution is iteratively adjusted in this direction until a constraint boundary is reached or the objective function ceases to improve. Subsequently, a new direction is determined based on the updated solution, and the process iterates until no further enhancement is achievable. This method efficiently navigates the solution space, dynamically adapting to changes and constraints, making it a robust tool for nonlinear optimization problems, ensuring convergence towards an optimal solution. However, the GRG algorithm in Solver, being a local optimization method, may not guarantee the discovery of the global optimum. Its reliance on local gradient information may lead to convergence in a local minimum or maximum, rendering it sensitive to the initial solution. Users should exercise caution and consider employing global optimization techniques for complex problem spaces where multiple optima exist, ensuring a more comprehensive exploration of the solution landscape.

Linear programming relaxation

The LP relaxation of a (M)ILP problem is the LP problem obtained when all the integrality constraints are removed, i.e., the integer requirement on x1, x2, ... is dropped: ILP → LP, MILP → LP. The LP relaxation of a BLP problem is obtained the same way: the binary requirement x1, x2 ∈ {0, 1} becomes the bounds x1, x2 ≤ 1 and x1, x2 ≥ 0, so BLP → LP. A sketch follows below.
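
A sketch of relaxation in PuLP: the commented lines show the original integer/binary declarations and the active lines show their relaxed counterparts (the model itself is hypothetical):

from pulp import LpProblem, LpMaximize, LpVariable

model = LpProblem("relaxed", LpMaximize)
# ILP version:  x1 = LpVariable("x1", lowBound=0, cat="Integer")
x1 = LpVariable("x1", lowBound=0, cat="Continuous")              # relaxed integer
# BLP version:  x2 = LpVariable("x2", cat="Binary")
x2 = LpVariable("x2", lowBound=0, upBound=1, cat="Continuous")   # relaxed binary
model += 3 * x1 + 2 * x2
model += 2 * x1 + x2 <= 7.5
model.solve()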

Optimal Solution

The best feasible solution that maximizes or minimizes the objective function in an optimization problem.

Shadow Price

The change in the value of the objective function per unit increase in the right-hand side of a constraint in a linear programming problem.

extreme points

The corners of the feasible region are referred to as the extreme points. At least one of the extreme points is an optimal solution, as long as the problem has an optimal solution (cf. unbounded and infeasible problems). Graphically speaking, extreme points are the feasible solution points occurring at the vertices, or "corners," of the feasible region. With two-variable problems, extreme points are determined by the intersections of the constraint lines.

Global optimum

The global optimum is the best possible solution across the entire feasible solution space. It represents the absolute maximum (in maximization problems) or minimum (in minimization problems) of the objective function. Finding the global optimum ensures the most favorable outcome in the entire problem domain. For NLP problems we often do not have a guarantee that the optimal solution is a true global optimal solution, it can be a local optimal solution. Most NLP algorithms terminate when they have found a local optimal solution. Special case: If NLP is "convex" or "concave" then any local optimal solution is a global optimal solution.

Coefficient

The numerical factor in a term of a linear equation, representing the proportionality between variables.

optimal solution

The specific decision-variable value or values that provide the "best" output for the model. An optimal solution is a feasible solution that results in the best possible value for the objective function: the lowest in minimization problems and the highest in maximization problems.

Sensitivity Analysis

The study of how changes in the coefficients of an optimization model affect the optimal solution.

Prescriptive Analytics

The use of data and mathematical models to prescribe actions that can optimize or improve decision-making.

Objective Value

The value of the objective function at the optimal solution, indicating the maximum or minimum attainable value.

Branch and Bound Method

The traditional approach to solving integer programming problems:
Step 1: Bounding. Solve the LP relaxation with Simplex; this gives an upper bound on the optimal value (cf. implications of relaxation). The feasible solutions can be partitioned into smaller subsets, and bounding helps limit the points that need to be checked.
Step 2: Branching. Add constraints on a variable that did not have an integer value in the optimal solution; smaller subsets are evaluated until the best solution is found.
Step 3: Bounding (again). Solve the LP relaxations in each branch.
Step 4: Branching (again). Add constraints on variables that did not have integer values. Continue until a solution is found.
NOTE: the Excel Solver Simplex algorithm does this automatically; the user does not choose or calculate these steps.
Solving (M)ILP problems is computationally much more demanding than solving LP problems: in the B&B algorithm the number of sub-problems can double with each branching step, so the running time can grow exponentially as a function of the number of integer-valued decision variables. Adding constraints to an LP problem usually makes it computationally more demanding to solve, BUT adding constraints to a MILP problem can make it easier to solve!

branch and bound

A general algorithm for finding optimal solutions to various optimization problems. The basic concept underlying it is divide and conquer: since the original "large" problem is too difficult to solve directly, it is divided into smaller and smaller subproblems until they can be solved. The method guarantees convergence.

right-hand side

In the context of linear programming, the "right-hand side" (RHS) refers to the constants on the right side of the inequality constraints. In a linear programming problem, constraints are typically represented in the form: a1x1 + a2x2 + ... + anxn ≤ b. Here, the right-hand side (RHS) is the constant b. The left-hand side (LHS) is the expression a1x1 + a2x2 + ... + anxn, which involves the decision variables x1, x2, ..., xn and their coefficients a1, a2, ..., an.

