
@0xdevalias
Last active September 20, 2024 04:38

Linear Programming, Optimisation Problems, etc

Some notes on Linear Programming and related optimisation problems.

Overview

  • https://en.wikipedia.org/wiki/Optimization_problem
    • In mathematics, engineering, computer science and economics, an optimization problem is the problem of finding the best solution from all feasible solutions.

  • https://en.wikipedia.org/wiki/Mathematical_optimization
    • Mathematical optimization or mathematical programming is the selection of a best element, with regard to some criteria, from some set of available alternatives. It is generally divided into two subfields: discrete optimization and continuous optimization.

    • In the more general approach, an optimization problem consists of maximizing or minimizing a real function by systematically choosing input values from within an allowed set and computing the value of the function. The generalization of optimization theory and techniques to other formulations constitutes a large area of applied mathematics.

    • Optimization problems can be divided into two categories, depending on whether the variables are continuous or discrete:

      • An optimization problem with discrete variables is known as a discrete optimization problem, in which an object such as an integer, permutation or graph must be found from a countable set.
      • A problem with continuous variables is known as a continuous optimization problem, in which an optimal argument from a continuous set must be found. These can include constrained problems and multimodal problems.
      • https://en.wikipedia.org/wiki/Discrete_optimization
        • Discrete optimization is a branch of optimization in applied mathematics and computer science. As opposed to continuous optimization, some or all of the variables used in a discrete optimization problem are restricted to be discrete variables—that is, to assume only a discrete set of values, such as the integers.

        • Three notable branches of discrete optimization are:

          • combinatorial optimization, which refers to problems on graphs, matroids and other discrete structures
          • integer programming
          • constraint programming

          These branches are all closely intertwined, however: many combinatorial optimization problems can be modeled as integer programs (e.g. shortest path) or constraint programs; any constraint program can be formulated as an integer program and vice versa; and constraint and integer programs can often be given a combinatorial interpretation.

      • https://en.wikipedia.org/wiki/Continuous_optimization
        • Continuous optimization is a branch of optimization in applied mathematics.

          As opposed to discrete optimization, the variables used in the objective function are required to be continuous variables—that is, to be chosen from a set of real values between which there are no gaps (values from intervals of the real line). Because of this continuity assumption, continuous optimization allows the use of calculus techniques.

  • https://en.wikipedia.org/wiki/Linear_programming
    • Linear programming (LP), also called linear optimization, is a method to achieve the best outcome (such as maximum profit or lowest cost) in a mathematical model whose requirements and objective are represented by linear relationships. Linear programming is a special case of mathematical programming (also known as mathematical optimization).

      More formally, linear programming is a technique for the optimization of a linear objective function, subject to linear equality and linear inequality constraints. Its feasible region is a convex polytope, which is a set defined as the intersection of finitely many half spaces, each of which is defined by a linear inequality. Its objective function is a real-valued affine (linear) function defined on this polytope. A linear programming algorithm finds a point in the polytope where this function has the largest (or smallest) value if such a point exists.
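As a concrete (hypothetical) instance of this form, consider a small two-variable LP:

```latex
\begin{aligned}
\text{maximize}\quad   & 3x + 2y \\
\text{subject to}\quad & x + y \le 4 \\
                       & x + 3y \le 6 \\
                       & x \ge 0,\quad y \ge 0
\end{aligned}
```

Its feasible region is the convex polygon with vertices (0, 0), (4, 0), (3, 1) and (0, 2); evaluating the objective at each vertex gives the maximum 3·4 + 2·0 = 12 at (4, 0).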

  • https://en.wikipedia.org/wiki/Nonlinear_programming
    • In mathematics, nonlinear programming (NLP) is the process of solving an optimization problem where some of the constraints are not linear equalities or the objective function is not a linear function. An optimization problem is one of calculation of the extrema (maxima, minima or stationary points) of an objective function over a set of unknown real variables and conditional to the satisfaction of a system of equalities and inequalities, collectively termed constraints. It is the sub-field of mathematical optimization that deals with problems that are not linear.

    • https://en.wikipedia.org/wiki/Karush%E2%80%93Kuhn%E2%80%93Tucker_conditions
      • In mathematical optimization, the Karush–Kuhn–Tucker (KKT) conditions, also known as the Kuhn–Tucker conditions, are first derivative tests (sometimes called first-order necessary conditions) for a solution in nonlinear programming to be optimal, provided that some regularity conditions are satisfied.

        Allowing inequality constraints, the KKT approach to nonlinear programming generalizes the method of Lagrange multipliers, which allows only equality constraints. Similar to the Lagrange approach, the constrained maximization (minimization) problem is rewritten as a Lagrange function whose optimal point is a global maximum or minimum over the domain of the choice variables and a global minimum (maximum) over the multipliers. The Karush–Kuhn–Tucker theorem is sometimes referred to as the saddle-point theorem.
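For reference, for a problem of the form minimize f(x) subject to g_i(x) ≤ 0 and h_j(x) = 0, the KKT conditions at a candidate optimum x* (with multipliers μ_i and λ_j) are usually stated as:

```latex
\begin{aligned}
&\text{stationarity:} && \nabla f(x^*) + \textstyle\sum_i \mu_i \nabla g_i(x^*) + \textstyle\sum_j \lambda_j \nabla h_j(x^*) = 0 \\
&\text{primal feasibility:} && g_i(x^*) \le 0, \quad h_j(x^*) = 0 \\
&\text{dual feasibility:} && \mu_i \ge 0 \\
&\text{complementary slackness:} && \mu_i \, g_i(x^*) = 0
\end{aligned}
```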

  • https://en.wikipedia.org/wiki/Integer_programming
    • An integer programming problem is a mathematical optimization or feasibility program in which some or all of the variables are restricted to be integers. In many settings the term refers to integer linear programming (ILP), in which the objective function and the constraints (other than the integer constraints) are linear.

  • https://apmonitor.com/wiki/index.php/Main/IntegerProgramming
    • Integer Linear Programming

      Integer Linear Programming (ILP) is a type of optimization problem where the variables are integer values and the objective function and equations are linear.
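To make the integer restriction concrete, here is a minimal exhaustive-search sketch (pure Python, hypothetical problem data; real ILP solvers use branch-and-bound or cutting planes rather than enumeration):

```python
from itertools import product

def solve_ilp_bruteforce(c, constraints, bounds):
    """Maximize c·x over integer points subject to a·x <= b rows.

    c: objective coefficients; constraints: list of (a, b) rows;
    bounds: per-variable (lo, hi) integer ranges. Exhaustive search,
    only suitable for tiny illustrative problems.
    """
    best_x, best_val = None, None
    for x in product(*(range(lo, hi + 1) for lo, hi in bounds)):
        if all(sum(ai * xi for ai, xi in zip(a, x)) <= b for a, b in constraints):
            val = sum(ci * xi for ci, xi in zip(c, x))
            if best_val is None or val > best_val:
                best_x, best_val = x, val
    return best_x, best_val

# maximize x + y  subject to  2x + 3y <= 12,  x, y integer in [0, 6]
print(solve_ilp_bruteforce([1, 1], [([2, 3], 12)], [(0, 6), (0, 6)]))  # ((6, 0), 6)
```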

  • https://en.wikipedia.org/wiki/Multi-objective_optimization
    • Multi-objective optimization

    • Multi-objective optimization or Pareto optimization (also known as multi-objective programming, vector optimization, multicriteria optimization, or multiattribute optimization) is an area of multiple-criteria decision making that is concerned with mathematical optimization problems involving more than one objective function to be optimized simultaneously. Multi-objective is a type of vector optimization that has been applied in many fields of science, including engineering, economics and logistics where optimal decisions need to be taken in the presence of trade-offs between two or more conflicting objectives.

    • For a multi-objective optimization problem, it is not guaranteed that a single solution simultaneously optimizes each objective. The objective functions are said to be conflicting. A solution is called nondominated, Pareto optimal, Pareto efficient or noninferior, if none of the objective functions can be improved in value without degrading some of the other objective values. Without additional subjective preference information, there may exist a (possibly infinite) number of Pareto optimal solutions, all of which are considered equally good.

    • https://en.wikipedia.org/wiki/Multi-objective_optimization#List_of_methods
      • Commonly known a posteriori methods are listed below:

        • ε-constraint method
        • Pareto-Hypernetworks
        • Multi-objective Branch-and-Bound
        • Normal Boundary Intersection (NBI)
        • Modified Normal Boundary Intersection (NBIm)
        • Normal Constraint (NC)
        • Successive Pareto Optimization (SPO)
        • Directed Search Domain (DSD)
        • NSGA-II
        • PGEN (Pareto surface generation for convex multi-objective instances)
        • IOSO (Indirect Optimization on the basis of Self-Organization)
        • SMS-EMOA (S-metric selection evolutionary multi-objective algorithm)
        • Approximation-Guided Evolution (first algorithm to directly implement and optimize the formal concept of approximation from theoretical computer science)
        • Reactive Search Optimization (using machine learning for adapting strategies and objectives), implemented in LIONsolver
        • Benson's algorithm for multi-objective linear programs and for multi-objective convex programs
        • Multi-objective particle swarm optimization
        • Subpopulation Algorithm based on Novelty
        • MOEA/D (Multi-Objective Evolutionary Algorithm based on Decomposition)
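The nondominated (Pareto-optimal) filtering described above can be sketched in a few lines of Python (assuming, for illustration, that every objective is to be minimized):

```python
def pareto_front(points):
    """Return the nondominated points, assuming every objective is minimized.

    A point p dominates q if p is <= q in every objective and strictly
    better in at least one.
    """
    def dominates(p, q):
        return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# two conflicting objectives: (cost, delivery_time)
solutions = [(1, 9), (3, 7), (5, 5), (4, 6), (6, 6), (9, 1)]
print(pareto_front(solutions))  # (6, 6) is dropped: (5, 5) dominates it
```

Note that (5, 5) and (4, 6) both survive: neither is better in every objective, so both are "equally good" in the Pareto sense.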
  • https://en.wikipedia.org/wiki/Lexicographic_optimization
    • Lexicographic optimization

    • Lexicographic optimization is a kind of multi-objective optimization. In general, multi-objective optimization deals with optimization problems with two or more objective functions to be optimized simultaneously. Often, the different objectives can be ranked in order of importance to the decision-maker, so that objective f1 is the most important, objective f2 is the next most important, and so on. Lexicographic optimization presumes that the decision-maker prefers even a very small increase in f1 over even a very large increase in f2, f3, f4, etc. Similarly, the decision-maker prefers even a very small increase in f2 over even a very large increase in f3, f4, etc. In other words, the decision-maker has lexicographic preferences, ranking the possible solutions according to a lexicographic order of their objective function values. Lexicographic optimization is sometimes called preemptive optimization, since a small increase in one objective value preempts a much larger increase in less important objective values.
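Since Python compares tuples lexicographically, this preference ordering over a finite candidate set can be sketched directly (hypothetical objective names; assumes all objectives are minimized):

```python
def lexicographic_best(solutions, objectives):
    """Pick the solution minimizing (f1, f2, ...) in lexicographic order.

    Python compares tuples element by element, which matches the
    preemptive preference ordering: f1 strictly dominates f2, and so on.
    """
    return min(solutions, key=lambda s: tuple(f(s) for f in objectives))

# hypothetical: minimize cost first, then time, then risk
candidates = [
    {"cost": 10, "time": 5, "risk": 2},
    {"cost": 10, "time": 3, "risk": 9},
    {"cost": 12, "time": 1, "risk": 0},
]
best = lexicographic_best(
    candidates,
    [lambda s: s["cost"], lambda s: s["time"], lambda s: s["risk"]],
)
print(best)  # the cost-10, time-3 candidate wins despite its high risk
```

For continuous problems the same idea is applied by solving a sequence of single-objective problems, constraining each earlier objective to its optimal value.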

  • https://en.wikipedia.org/wiki/Loss_function
    • In mathematical optimization and decision theory, a loss function or cost function (sometimes also called an error function) is a function that maps an event or values of one or more variables onto a real number intuitively representing some "cost" associated with the event. An optimization problem seeks to minimize a loss function. An objective function is either a loss function or its opposite (in specific domains, variously called a reward function, a profit function, a utility function, a fitness function, etc.), in which case it is to be maximized.

  • https://en.wikipedia.org/wiki/Lagrange_multiplier
    • In mathematical optimization, the method of Lagrange multipliers is a strategy for finding the local maxima and minima of a function subject to equation constraints (i.e., subject to the condition that one or more equations have to be satisfied exactly by the chosen values of the variables).
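As a minimal worked example (not from the source): to maximize f(x, y) = xy subject to x + y = 10, form the Lagrangian

```latex
\mathcal{L}(x, y, \lambda) = xy - \lambda\,(x + y - 10)
```

Setting the partial derivatives to zero gives y = λ, x = λ and x + y = 10, hence x = y = 5 with f = 25.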

  • https://en.wikipedia.org/wiki/Simplex_algorithm
    • In mathematical optimization, Dantzig's simplex algorithm (or simplex method) is a popular algorithm for linear programming.

    • It can be shown that for a linear program in standard form, if the objective function has a maximum value on the feasible region, then it has this value on (at least) one of the extreme points. This in itself reduces the problem to a finite computation since there is a finite number of extreme points, but the number of extreme points is unmanageably large for all but the smallest linear programs.

      It can also be shown that, if an extreme point is not a maximum point of the objective function, then there is an edge containing the point so that the value of the objective function is strictly increasing on the edge moving away from the point. If the edge is finite, then the edge connects to another extreme point where the objective function has a greater value, otherwise the objective function is unbounded above on the edge and the linear program has no solution. The simplex algorithm applies this insight by walking along edges of the polytope to extreme points with greater and greater objective values. This continues until the maximum value is reached, or an unbounded edge is visited (concluding that the problem has no solution). The algorithm always terminates because the number of vertices in the polytope is finite; moreover since we jump between vertices always in the same direction (that of the objective function), we hope that the number of vertices visited will be small.

      The solution of a linear program is accomplished in two steps. In the first step, known as Phase I, a starting extreme point is found. Depending on the nature of the program this may be trivial, but in general it can be solved by applying the simplex algorithm to a modified version of the original program. The possible results of Phase I are either that a basic feasible solution is found or that the feasible region is empty. In the latter case the linear program is called infeasible. In the second step, Phase II, the simplex algorithm is applied using the basic feasible solution found in Phase I as a starting point. The possible results from Phase II are either an optimum basic feasible solution or an infinite edge on which the objective function is unbounded above.
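The edge-walking described above can be illustrated with a minimal dense-tableau simplex sketch (pure Python, maximization with A x ≤ b, x ≥ 0 and b ≥ 0, so the Phase I step is trivial because the all-slack basis is already feasible; illustrative only, not production code):

```python
def simplex_max(c, A, b):
    """Maximize c·x subject to A x <= b, x >= 0, assuming b >= 0.

    Dense tableau, Bland's rule for the entering variable; returns
    (optimal value, x) or raises if the objective is unbounded.
    """
    m, n = len(A), len(c)
    # tableau rows: [A | I | b]; last row holds reduced costs: [-c | 0 | 0]
    T = [A[i][:] + [1.0 if j == i else 0.0 for j in range(m)] + [float(b[i])]
         for i in range(m)]
    T.append([-ci for ci in c] + [0.0] * m + [0.0])
    basis = list(range(n, n + m))          # slacks start in the basis
    while True:
        # entering variable: first column with negative reduced cost
        col = next((j for j in range(n + m) if T[-1][j] < -1e-9), None)
        if col is None:
            break                          # optimal
        # leaving variable: minimum ratio test over positive pivot entries
        ratios = [(T[i][-1] / T[i][col], i) for i in range(m) if T[i][col] > 1e-9]
        if not ratios:
            raise ValueError("objective is unbounded above on this edge")
        _, row = min(ratios)
        piv = T[row][col]
        T[row] = [v / piv for v in T[row]]
        for i in range(m + 1):
            if i != row and abs(T[i][col]) > 1e-12:
                f = T[i][col]
                T[i] = [v - f * w for v, w in zip(T[i], T[row])]
        basis[row] = col
    x = [0.0] * n
    for i, bi in enumerate(basis):
        if bi < n:
            x[bi] = T[i][-1]
    return T[-1][-1], x

# maximize 3x + 2y  s.t.  x + y <= 4,  x + 3y <= 6
print(simplex_max([3.0, 2.0], [[1.0, 1.0], [1.0, 3.0]], [4.0, 6.0]))
```

Each pass of the loop is one pivot: move along an improving edge to an adjacent vertex, until no reduced cost is negative (optimal) or no ratio exists (unbounded).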

  • https://en.wikipedia.org/wiki/Big_M_method
    • In operations research, the Big M method is a method of solving linear programming problems using the simplex algorithm. The Big M method extends the simplex algorithm to problems that contain "greater-than" constraints. It does so by associating the constraints with large negative constants which would not be part of any optimal solution, if it exists.

    • Algorithm

      The simplex algorithm is the original and still one of the most widely used methods for solving linear maximization problems. However, to apply it, the origin (all variables equal to 0) must be a feasible point. This condition is satisfied only when all the constraints (except non-negativity) are less-than constraints with positive constants on the right-hand side. The Big M method introduces surplus and artificial variables to convert all inequalities into that form. The "Big M" refers to a large number associated with the artificial variables, represented by the letter M.

      The steps in the algorithm are as follows:

      • Multiply inequality constraints by −1 where needed to ensure that the right-hand side is positive.
      • If the problem is of minimization, transform to maximization by multiplying the objective by −1.
      • For any greater-than constraints, introduce surplus si and artificial variables ai (as shown below).
      • Choose a large positive value M and introduce a term in the objective of the form −M multiplying the artificial variables.
      • For less-than or equal constraints, introduce slack variables si so that all constraints are equalities.
      • Solve the problem using the usual simplex method.
      • For example, x + y ≤ 100 becomes x + y + s1 = 100, whilst x + y ≥ 100 becomes x + y − s1 + a1 = 100. The artificial variables must be shown to be 0. The function to be maximised is rewritten to include the sum of all the artificial variables. Then row reductions are applied to gain a final solution.

      The value of M must be chosen sufficiently large so that the artificial variable would not be part of any feasible solution.

      For a sufficiently large M, the optimal solution contains any artificial variables in the basis (i.e. positive values) if and only if the problem is not feasible.

    • Other Usage

      When used in the objective function, the Big M method sometimes refers to formulations of linear optimization problems in which violations of a constraint or set of constraints are associated with a large positive penalty constant, M.

      When used in the constraints themselves, one of the many uses of Big M, for example, refers to ensuring equality of variables only when a certain binary variable takes on one value, but to leave the variables "open" if the binary variable takes on its opposite value.
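Applying the steps above to a hypothetical problem containing the ≥ constraint from the example (objective coefficients invented for illustration), maximize 2x + y subject to x + y ≥ 100 and x + 2y ≤ 160 becomes:

```latex
\begin{aligned}
\text{maximize}\quad   & 2x + y - M a_1 \\
\text{subject to}\quad & x + y - s_1 + a_1 = 100 \\
                       & x + 2y + s_2 = 160 \\
                       & x,\; y,\; s_1,\; s_2,\; a_1 \ge 0
\end{aligned}
```

For sufficiently large M, any solution with a_1 > 0 is penalized out of optimality, so a_1 = 0 at the optimum whenever the original problem is feasible.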

Tools / Libraries / etc

Python

PuLP

Pyomo

  • https://github.com/Pyomo/pyomo
    • Pyomo

    • An object-oriented algebraic modeling language in Python for structured optimization problems.

    • Pyomo is a Python-based open-source software package that supports a diverse set of optimization capabilities for formulating and analyzing optimization models. Pyomo can be used to define symbolic problems, create concrete problem instances, and solve these instances with standard solvers. Pyomo supports a wide range of problem types.

  • https://www.pyomo.org/
    • Pyomo is a Python-based, open-source optimization modeling language with a diverse set of optimization capabilities.

    • https://www.pyomo.org/about
    • About Pyomo

      Pyomo is a Python-based open-source software package that supports a diverse set of optimization capabilities for formulating, solving, and analyzing optimization models.

      A core capability of Pyomo is modeling structured optimization applications. Pyomo can be used to define general symbolic problems, create specific problem instances, and solve these instances using commercial and open-source solvers. Pyomo's modeling objects are embedded within a full-featured high-level programming language providing a rich set of supporting libraries, which distinguishes Pyomo from other algebraic modeling languages like AMPL, AIMMS and GAMS.

      Pyomo supports a wide range of problem types, including:

      • Linear programming
      • Quadratic programming
      • Nonlinear programming
      • Mixed-integer linear programming
      • Mixed-integer quadratic programming
      • Mixed-integer nonlinear programming
      • Stochastic programming
      • Generalized disjunctive programming
      • Differential algebraic equations
      • Bilevel programming
      • Mathematical programs with equilibrium constraints

      Pyomo also supports iterative analysis and scripting capabilities within a full-featured programming language. Further, Pyomo has also proven an effective framework for developing high-level optimization and analysis tools. For example, the PySP package provides generic solvers for stochastic programming. PySP leverages the fact that Pyomo's modeling objects are embedded within a full-featured high-level programming language, which allows for transparent parallelization of subproblems using Python parallel communication libraries.

      Pyomo was formerly released as the Coopr software library.

      Pyomo was originally developed by researchers in the Center for Computing Research at Sandia National Laboratories and is a COIN-OR project.

    • https://www.pyomo.org/blog
      • Blog

    • https://www.pyomo.org/documentation

JavaScript / NodeJS / WebASM

  • https://github.com/JWally/jsLPSolver
  • jsLPSolver

  • Simple OOP JavaScript library to solve linear programs and mixed integer linear programs

  • https://github.com/hgourvest/glpk.js
  • https://github.com/jvail/glpk.js/
    • glpk.js

      GLPK for browser & node

    • JavaScript/WebAssembly port of GLPK (GNU Linear Programming Kit) for browser & node. Rather than porting the complete GLPK library (including GLPSOL) this project aims at creating a simple JSON interface to setup and solve LP/MILP with JavaScript.

    • https://jvail.github.io/glpk.js/

  • https://github.com/optimization-js/optimization-js
    • Mathematical Optimization in JavaScript

    • Unconstrained, first and zero order gradient descent optimization in JavaScript.

    • A JavaScript library implementing useful multivariate function optimization procedures, which allow one to find a local minimum of a function of a vector argument. The argument is a JavaScript array.

    • http://optimization-js.github.io/optimization-js/
      • Mathematical Optimization in JavaScript

  • https://lovasoa.github.io/highs-js/
    • HiGHS online linear programming solver

      A linear programming solver you can use directly from your browser

    • https://github.com/lovasoa/highs-js
      • highs-js

      • JavaScript linear programming library

      • This is a JavaScript mixed integer linear programming library. It is built by compiling a high-performance C++ solver developed by the University of Edinburgh (HiGHS) to WebAssembly using emscripten.

    • https://highs.dev/
      • HiGHS - high performance software for linear optimization

        Open source serial and parallel solvers for large-scale sparse linear programming (LP), mixed-integer programming (MIP), and quadratic programming (QP) models

  • https://github.com/Ivordir/YALPS
    • YALPS

    • Yet Another Linear Programming Solver. (A rewrite of javascript-lp-solver.) Aims to be decently fast.

    • This is Yet Another Linear Programming Solver (YALPS). It is intended as a performant, lightweight linear programming (LP) solver geared towards small LP problems. It can solve non-integer, integer, and mixed integer LP problems. While WebAssembly ports of existing solvers perform well, they tend to have larger bundle sizes and may be overkill for your use case. YALPS is the alternative for the browser, featuring a small bundle size.

      YALPS is a rewrite of jsLPSolver. The people there have made a great and easy to use solver. However, the API was limited to objects only, and I saw other areas that could have been improved. You can check out jsLPSolver for more background and information regarding LP problems.

      Compared to jsLPSolver, YALPS has the following differences:

      • More flexible API (e.g., support for Iterables alongside objects)
      • Better performance (especially for non-integer problems, see Performance for more details.)
      • Good TypeScript support (YALPS is written in TypeScript)

      On the other hand, these features from jsLPSolver were dropped:

      • Unrestricted variables (might be added later)
      • Multiobjective optimization
      • External solvers
  • https://github.com/smremde/node-lp_solve
    • node-lp_solve

      A NodeJS module for creating and solving simple Linear Programs using lp_solve.

  • https://github.com/kchinzei/linear-program-solver
    • Linear Program Solver

    • JS/TS wrapper for simplex linear programming solver by jeronimonunes

    • Linear-program-solver is a JavaScript / TypeScript wrapper for the simplex C++ engine by jeronimonunes. It's intended to be used together with linear-program-parser, written in TypeScript. There are several linear program solvers on npm; linear-program-parser is relatively new, written in TypeScript, and comparatively easy to use. Linear-program-solver can accept the output of linear-program-parser directly.

    • https://github.com/jeronimonunes/linear-program-parser
      • Linear Program Parser

        A library to parse linear program text.

        This library is built upon Pegjs and is able to transform the linear programming text representation into a JavaScript object. The object is an instance of a class that can evaluate some of the math present in the representation and return the FPI (Portuguese initials for Standard Form of Equality).

    • https://github.com/jeronimonunes/simplex
      • Simplex C++ Engine

        This is an implementation of the simplex algorithm written in C++. The calculations are all made with infinite precision, using an internal Fraction implementation with big integers as its numerator and denominator.

    • https://jeronimonunes.github.io/simplex-web/
      • Simplex Web

        This page has an implementation of the simplex algorithm. It can show you all the intermediate steps used to get the result.

        The algorithm is executed with infinite precision, using a fraction implementation that has the numerator and denominator implemented with biginteger.

        All the calculations are done within your browser page, so if you close the app while the calculations are running, you will lose the progress.

      • https://github.com/jeronimonunes/simplex-web
        • SimplexWeb

        • Simplex web implementation with step by step

        • This project aims to provide an easy-to-use simplex implementation that works offline within a browser and can show the steps taken to find the optimal value (or certificates for non-optimal ones).

  • https://github.com/rubenjgarcia/nodesolve
    • nodesolve

    • An LPSolve library implementation in NodeJS

    • Use LP Solve from NodeJS. This module uses LP Solve 5.5.2.

Google Sheets, etc

  • https://workspace.google.com/marketplace/app/goal_seek/583737146763
    • Goal Seek

      The Goal Seek add-on for Google Sheets™ enables users to calculate the required value of a cell to achieve a desired formula value in another cell.

    • If you know the result you need from a formula, you can use the Goal Seek add-on to find the missing value needed to achieve it.

      You can use the add-on to conduct break-even analysis, or determine subcomponent targets when modeling future outcomes. You can also modify parameters, like max iterations, tolerance and max execution time that Goal Seek uses to determine a solution or load previous calculations.

      There are three components to configure:

      • the target value
      • the changing cell (where the formula is)
      • the output value to achieve
    • https://support.google.com/docs/answer/9506732
      • Find an unknown value with Goal Seek
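Goal seeking on a single cell is essentially one-dimensional root finding. Here is a bisection sketch of the idea (pure Python, hypothetical break-even example; this is not necessarily the algorithm the add-on uses, and the tolerance/max_iterations names merely mirror the parameters mentioned above):

```python
def goal_seek(formula, target, lo, hi, tolerance=1e-9, max_iterations=200):
    """Find an input x in [lo, hi] so that formula(x) ~= target.

    Plain bisection: assumes formula(x) - target changes sign over
    [lo, hi], i.e. the interval brackets a solution.
    """
    f = lambda x: formula(x) - target
    if f(lo) * f(hi) > 0:
        raise ValueError("target is not bracketed by [lo, hi]")
    for _ in range(max_iterations):
        mid = (lo + hi) / 2
        if abs(f(mid)) <= tolerance:
            return mid
        if f(lo) * f(mid) <= 0:
            hi = mid          # solution lies in the lower half
        else:
            lo = mid          # solution lies in the upper half
    return (lo + hi) / 2

# break-even: at what unit price does profit reach 0, given fixed costs?
profit = lambda price: 500 * price - 12_000   # sell 500 units, $12k fixed cost
print(goal_seek(profit, 0, 0, 100))  # converges to ~24.0
```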

  • https://workspace.google.com/marketplace/app/opensolver/207251662973
    • OpenSolver

      Solve linear and mixed-integer optimization problems in Google Sheets with no artificial limits on problem size.

    • The OpenSolver Add-on for Google Sheets lets you solve complex optimization problems from within your Google Sheet. Made by the developers of the popular OpenSolver addin for Microsoft Excel, this add-on brings the same Open Source optimization engine to the cloud!

      OpenSolver for Google Sheets uses the excellent, Open Source, SCIP and Glop optimization engines to quickly solve large Linear and Integer problems. SCIP is one of the fastest non-commercial solvers for mixed integer programs, and Glop is an Open Source Linear Programming solver by the Operations Research team at Google. We also support using the Open Source COIN-OR Cbc solver via the NEOS optimization cloud, and the Open Source GLPK solver running locally in your browser via GLPK.js.

      There are no artificial limits on the size of problem you can solve with OpenSolver.

      OpenSolver is free, Open Source software developed and maintained by Andrew Mason and students at the Engineering Science department, University of Auckland, New Zealand.

    • https://opensolver.org/
  • https://workspace.google.com/marketplace/app/solver/539454054595
    • Solver

      Solve resource allocation problems with linear and mixed-integer programming, nonlinear optimization, and evolutionary algorithms.

    • With the Solver Add-on, created by Frontline Systems, developers of the Solver in Microsoft Excel (www.solver.com), you can define and solve many types of optimization problems in Google Sheets™, just as you can with the Excel Solver and with Frontline's Solver App for Excel Online.

      You can solve linear programming and mixed-integer programming problems with the Simplex LP Solver, nonlinear optimization problems with the GRG Nonlinear Solver, global optimization problems with the multistart method, and 'arbitrary' Sheets models containing any numeric formulas with the Evolutionary Solver. To learn more about optimization problems and what you can (and cannot) expect from solutions, visit www.solver.com/solver-addon-google-sheets-help-and-support.

      You can use the Solver Add-on to find better ways to allocate scarce resources, maximize profits or minimize costs or risks, in a wide range of applications in finance and investment, marketing, manufacturing and production, distribution and logistics, purchasing, and human resources, as well as science and engineering. To see examples, visit www.solver.com. Many textbooks also include examples and discussions of Solver models -- see www.solver.com/recommended-books.

      Problem size limits in the Solver Add-on are the same as in the Excel Solver (200 decision variables and 100 constraints in addition to bounds on the variables). There are no fixed limits on iterations or subproblems, but the backend server currently limits solving time to 30 seconds.

      When you click the Solve button in the Add-on, your model is queued for solution by Frontline's backend Solver Server. This server has a finite capacity, as currently provisioned (and paid for) by Frontline Systems.

Etc

OR-Tools - Google Optimization Tools

  • https://github.com/google/or-tools
    • OR-Tools - Google Optimization Tools

    • Google's Operations Research tools

    • Google's software suite for combinatorial optimization.

    • Google Optimization Tools (a.k.a., OR-Tools) is an open-source, fast and portable software suite for solving combinatorial optimization problems.

      The suite contains:

      • Two constraint programming solvers (CP* and CP-SAT);
      • Two linear programming solvers (Glop and PDLP);
      • Wrappers around commercial and other open source solvers, including mixed integer solvers;
      • Bin packing and knapsack algorithms;
      • Algorithms for the Traveling Salesman Problem and Vehicle Routing Problem;
      • Graph algorithms (shortest paths, min cost flow, max flow, linear sum assignment).

      We wrote OR-Tools in C++, but provide wrappers in Python, C# and Java.

  • https://developers.google.com/optimization/
    • OR-Tools is fast and portable software for combinatorial optimization.

    • OR-Tools is an open source software suite for optimization, tuned for tackling the world's toughest problems in vehicle routing, flows, integer and linear programming, and constraint programming.

      After modeling your problem in the programming language of your choice, you can use any of a half dozen solvers to solve it: commercial solvers such as Gurobi or CPLEX, or open-source solvers such as SCIP, GLPK, or Google's GLOP and award-winning CP-SAT.

    • OR-Tools has won gold in the international constraint programming competition every year since 2013.

    • https://developers.google.com/optimization/introduction
      • About OR-Tools

        OR-Tools is open source software for combinatorial optimization, which seeks to find the best solution to a problem out of a very large set of possible solutions. Here are some examples of problems that OR-Tools solves:

        • Vehicle routing: Find optimal routes for vehicle fleets that pick up and deliver packages given constraints (e.g., "this truck can't hold more than 20,000 pounds" or "all deliveries must be made within a two-hour window").
        • Scheduling: Find the optimal schedule for a complex set of tasks, some of which need to be performed before others, on a fixed set of machines, or other resources.
        • Bin packing: Pack as many objects of various sizes as possible into a fixed number of bins with maximum capacities.

        In most cases, problems like these have a vast number of possible solutions—too many for a computer to search them all. To overcome this, OR-Tools uses state-of-the-art algorithms to narrow down the search set, in order to find an optimal (or close to optimal) solution.

        OR-Tools includes solvers for:

        • Constraint Programming
          • A set of techniques for finding feasible solutions to a problem expressed as constraints (e.g., a room can't be used for two events simultaneously, or the distance to the crops must be less than the length of the hose, or no more than five TV shows can be recorded at once).
        • Linear and Mixed-Integer Programming
          • The Glop linear optimizer finds the optimal value of a linear objective function, given a set of linear inequalities as constraints (e.g., assigning people to jobs, or finding the best allocation of a set of resources while minimizing cost). Glop and the mixed-integer programming software SCIP are also available via the Google Apps Script Optimization Service.
        • Vehicle Routing
          • A specialized library for identifying best vehicle routes given constraints.
        • Graph Algorithms
          • Code for finding shortest paths in graphs, min-cost flows, max flows, and linear sum assignments.
    • https://developers.google.com/optimization/reference
    • https://developers.google.com/optimization/examples
      • OR-Tools Examples

        The following table provides links to:

        • Code examples in the supported languages: C++, Python, DotNet, Java
        • Tutorials that explain the examples.
        • Colabs — code demos that you can run directly in your browser.
    • https://developers.google.com/optimization/service
      • Operations Research API

        The Operations Research API is a Google Research service that exposes solvers for high-level optimization problems. These solvers run on Google's infrastructure and are consumed as a REST or gRPC API.

      • The OR API supports the following solvers:

        • Workforce Scheduling (Beta, Trusted testers only)
        • Shipping Network Design (Alpha, Open to public)
        • MathOpt Service (Alpha, Open to public)
  • https://developers.google.com/apps-script/reference/optimization
    • Optimization Service

      The linear optimization service, used to model and solve linear and mixed-integer linear programs.

MiniZinc

  • https://github.com/MiniZinc
  • https://www.minizinc.org/
    • MiniZinc MiniZinc is a high-level constraint modelling language that allows you to easily express and solve discrete optimisation problems.

    • MiniZinc is developed at Monash University with support from OPTIMA.

    • Why choose MiniZinc?

      • High-level language
        • Model constraint satisfaction and optimisation problems using readable, intuitive logical constructs, with full type safety.
      • Solver independent
      • Pre-defined constraints
      • Fully featured development environment
        • Use the MiniZinc IDE to write and run models, visualise search and solutions, profile performance and more.
      • In-depth learning materials
      • Integrate MiniZinc into your project
        • MiniZinc can be embedded into Python and JavaScript programs, and can even be used entirely from within the browser.
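    As a flavour of the language, a toy model (all values illustrative, not taken from the MiniZinc docs) can express an optimisation problem in a few lines:

```minizinc
% Toy model: choose production quantities x and y to maximise profit,
% subject to a single shared resource limit.
var 0..10: x;
var 0..10: y;

constraint 2*x + 3*y <= 24;

solve maximize 5*x + 4*y;

output ["x = \(x), y = \(y)\n"];
```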
    • https://www.youtube.com/watch?v=K1tH2otAtMw
      • MiniZinc - Faculty of Information Technology | Monash University (2:25) MiniZinc is developed at Monash University with support from OPTIMA. MiniZinc is a free and open-source constraint modeling language.

    • https://play.minizinc.dev/
      • Playground

    • https://www.minizinc.org/downloads/
      • Downloads Official binary packages are provided for Windows, macOS and Linux. These contain MiniZinc 2.8.5, the MiniZinc IDE 2.8.5, as well as several state of the art solvers.

      • Both the compiler tool chain and the IDE are available as free and open source software.

      • The MiniZinc compiler and IDE The compiler is the heart of the MiniZinc tool chain. It translates constraint models into FlatZinc, a language that is understood by a wide range of solvers.

        MiniZinc comes with a simple Integrated Development Environment, the MiniZinc IDE, which makes it easy to develop and run constraint models.

        Solvers

        A number of solvers are included in the official binary packages:

        • Gecode
        • Chuffed
        • Google OR-Tools CP-SAT
        • HiGHS
        • COIN-OR CBC

        MiniZinc can also interface with the following solvers if installed:

        • Gurobi
        • IBM ILOG CPLEX
        • SCIP
        • FICO Xpress

        Additionally, many other compatible solvers can be installed, including:

        • Choco 4
        • JaCoP
        • Mistral-2.0
        • Picat CP/SAT
        • SICStus Prolog
        • Yuck

        Note that if you build MiniZinc from source, you will have to install any desired solvers yourself.

    • https://www.minizinc.org/resources/
      • Documentation

        • The MiniZinc Handbook contains all the documentation for the MiniZinc language and system.

          • https://docs.minizinc.dev/en/stable/
            • The MiniZinc Handbook MiniZinc is a free and open-source constraint modeling language. You can use MiniZinc to model constraint satisfaction and optimization problems in a high-level, solver-independent way, taking advantage of a large library of pre-defined constraints

              This handbook consists of four parts: Section 1 covers installation and basic steps; Section 2 is a tutorial-style introduction into modelling with MiniZinc; Section 3 is a user manual for the individual tools in the MiniZinc tool chain; and Section 4 is a reference to the language.

            • https://docs.minizinc.dev/en/stable/part_2_tutorial.html
              • MiniZinc Tutorial A step-by-step introduction to modelling with MiniZinc, from basic syntax and simple modelling techniques to more advanced topics.

            • https://docs.minizinc.dev/en/stable/part_3_user_manual.html
              • MiniZinc User Manual Manual for the MiniZinc command line tools, the MiniZinc IDE, and additional tools.

            • https://docs.minizinc.dev/en/stable/part_4_reference.html
              • MiniZinc Reference Manual Standard Library documentation, the specification of the MiniZinc language, and how to interface a solver with MiniZinc.

              • https://docs.minizinc.dev/en/stable/lib-stdlib.html
                • 4.2.1. Standard Library These functions and predicates define built-in operations of the MiniZinc language.

              • https://docs.minizinc.dev/en/stable/lib-globals.html
                • 4.2.2. Global constraints These constraints represent high-level modelling abstractions. Many solvers implement dedicated, efficient inference algorithms for these constraints, or provide a MiniZinc library decomposition that is better suited to the particular solving technology than the standard library decomposition.
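                  For example, the `alldifferent` global constraint makes a 4-queens model (a standard illustration, not taken from the handbook) very compact:

```minizinc
% 4-queens via the alldifferent global constraint: queen_row[c] is the row
% of the queen in column c; rows and both diagonals must all differ.
include "alldifferent.mzn";

array[1..4] of var 1..4: queen_row;

constraint alldifferent(queen_row);
constraint alldifferent([queen_row[c] + c | c in 1..4]);
constraint alldifferent([queen_row[c] - c | c in 1..4]);

solve satisfy;
```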

      • Integrations MiniZinc can be used from your favourite programming language through our official language integrations.

        • https://python.minizinc.dev/
          • MiniZinc Python Access all MiniZinc functionality directly from Python.

          • MiniZinc Python provides a native python interface for the MiniZinc toolchain. The package can interface with MiniZinc using the command line interface, the minizinc executable. The main goal of this library is to allow users to use all of MiniZinc’s capabilities directly from Python. This allows you to use MiniZinc in your application, but also enables you to use MiniZinc in new ways! Using MiniZinc in a procedural language allows you to use incremental solving techniques that can be used to implement different kinds of meta-heuristics.

          • https://github.com/MiniZinc/minizinc-python
            • MiniZinc Python The python package that allows you to access all of MiniZinc's functionalities directly from Python.

        • https://js.minizinc.dev/
          • MiniZinc JavaScript Use MiniZinc completely within your web browser, or server-side using NodeJS.

          • This package provides a JavaScript API for MiniZinc for use in web browsers using WebAssembly, or in NodeJS using a native installation of MiniZinc.

            This library powers the MiniZinc Playground.

          • https://github.com/minizinc/minizinc-js
            • JavaScript interface for MiniZinc Use MiniZinc in the browser or with NodeJS

      • Courses MiniZinc has an extensive online course available on Coursera for free.

      • Publications We have published a number of papers about the design and implementation of MiniZinc, and many other researchers have used MiniZinc for their work.

    • https://www.minizinc.org/challenge/
      • The MiniZinc Challenge The MiniZinc Challenge is an annual competition of constraint programming solvers on a variety of benchmarks. It has been held every year since 2008, with the winners being announced at the annual International Conference on Principles and Practice of Constraint Programming.

Unsorted

  • https://lim.univ-reunion.fr/staff/fred/Enseignement/Optim/doc/CPLEX-LP/CPLEX-LP-file-format.html
    • CPLEX LP file format The CPLEX LP file format provides a facility for entering a problem in a natural, algebraic LP formulation from the keyboard. The problem can be modified and saved from within lpsolve. This procedure is one way to create a file in a format that lpsolve can read. An alternative technique is to create a similar file using a standard text editor and to read it into lpsolve.
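    A small illustrative file in this format (not taken from the linked page) looks like:

```
\ Illustrative LP-format file: maximise 3x + 2y subject to two constraints.
Maximize
 obj: 3 x + 2 y
Subject To
 c1: x + y <= 4
 c2: x + 3 y <= 6
Bounds
 0 <= x <= 10
 0 <= y <= 10
End
```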

Questions

  • Are there any existing Python PuLP abstractions / libraries / etc. that simplify adding more complex logic to the problems to be solved, without having to do so manually each time?

    For example, I would like a function that generalizes the use of 'Big M' + a binary variable to simulate an if/else branch of a constraint.

    • ChatGPT Response: https://chatgpt.com/c/66eceda3-cd50-8008-bf23-34b5c017ce65?model=o1-preview
      • As of my knowledge cutoff in October 2023, there isn't a widely adopted library or abstraction layer that extends PuLP to simplify adding complex logical constraints like if/else branches without manually implementing methods like the Big M technique each time. PuLP is a powerful linear programming library, but it operates at a relatively low level when it comes to modeling complex logical relationships.

        However, there are alternative Python-based modeling libraries that offer higher-level abstractions and built-in support for logical constraints:

        • Pyomo:
          • Overview: Pyomo is an open-source Python library for modeling optimization applications. It supports a wide range of problem types, including linear, nonlinear, and mixed-integer programming.
          • Logical Constraints: Pyomo provides native support for logical expressions and disjunctions through its LogicalConstraint and Disjunct components. This allows you to model complex logical relationships without manually coding Big M constraints.
        • Google OR-Tools:
          • Overview: OR-Tools is an open-source software suite for optimization, developed by Google. It provides robust support for constraint programming, linear programming, and mixed-integer programming.
          • Logical Constraints: OR-Tools allows you to add complex logical constraints more naturally. Its constraint programming solver (CpModel) supports Boolean variables and logical implications directly.
        • Commercial Solvers with Python APIs:
          • Gurobi and CPLEX: Both solvers support indicator constraints and have Python APIs that allow you to model logical conditions directly.
        • Custom Helper Functions:
          • If switching libraries is not feasible, you might consider creating your own helper functions within PuLP to encapsulate the Big M method. While this doesn't eliminate Big M, it centralizes its management, making your code cleaner and more maintainable.
          • eg.
            import pulp
            
            def conditional_constraint(lhs, operator, rhs_when_true, binary_var, big_m, name):
                """
                Creates a conditional constraint using the Big M method.
            
                Parameters:
                - lhs: The left-hand side PuLP expression.
                - operator: A string representing the constraint operator ('<', '<=', '>', '>=', '==').
                - rhs_when_true: The right-hand side value when binary_var == 1. Must be a
                  numeric constant: multiplying a variable expression by binary_var would
                  make the constraint nonlinear.
                - binary_var: The PuLP binary variable controlling the condition.
                - big_m: A sufficiently large constant for the Big M method.
                - name: A unique name for the constraint.
            
                Returns:
                - A PuLP constraint, or a list of two constraints (for '==').
                """
                epsilon = 1e-5  # Small value to approximate strict inequalities
            
                if operator == "<=":
                    constraint = lhs <= rhs_when_true * binary_var + big_m * (1 - binary_var)
                
                elif operator == "<":
                    # Convert 'lhs < rhs' to 'lhs <= rhs - epsilon'
                    constraint = lhs <= rhs_when_true * binary_var + big_m * (1 - binary_var) - epsilon * binary_var
                
                elif operator == ">=":
                    constraint = lhs >= rhs_when_true * binary_var - big_m * (1 - binary_var)
                
                elif operator == ">":
                    # Convert 'lhs > rhs' to 'lhs >= rhs + epsilon'
                    constraint = lhs >= rhs_when_true * binary_var - big_m * (1 - binary_var) + epsilon * binary_var
                
                elif operator == "==":
                    constraint_le = lhs <= rhs_when_true * binary_var + big_m * (1 - binary_var)
                    constraint_le.name = f"{name}_le"
                    constraint_ge = lhs >= rhs_when_true * binary_var - big_m * (1 - binary_var)
                    constraint_ge.name = f"{name}_ge"
                    return [constraint_le, constraint_ge]
                
                else:
                    raise ValueError("Unsupported operator. Use '<', '<=', '>', '>=', or '=='.")
            
                # Comparing PuLP expressions already yields an LpConstraint; just name it.
                constraint.name = name
                return constraint
            
            # Example
            
            # ..snip..
            
            M = 10000  # Big M value, should be sufficiently large
            
            # ..snip..
            
            # Initialize the problem
            prob = pulp.LpProblem("Bottle_Usage_Problem", pulp.LpMinimize)
            
            # Decision variables
            leftover_units = pulp.LpVariable.dicts("LeftoverUnits", labels, lowBound=0, cat='Continuous')
            purchase_indicator = pulp.LpVariable.dicts("PurchaseIndicator", labels, cat='Binary')
            
            # ..snip..
            
            for label in labels:
                prob += conditional_constraint(
                    lhs=leftover_units[label],
                    operator="<=",
                    rhs_when_true=(1 - min_usage_pct) * bottle_size,
                    binary_var=purchase_indicator[label],
                    big_m=M,
                    name=f"LeftoverUnitsUsagePercentConstraint_{label}"
                )
            
            # ..snip..

See Also

My Other Related Deepdive Gists and Projects
