OR-Notes are a series of introductory notes on topics that fall under the broad heading of the field of operations research (OR). They were originally used by me in an introductory OR course I give at Imperial College. They are now available for use by any students and teachers interested in OR subject to the following conditions.

A full list of the topics available in OR-Notes can be found here.

We consider here a number of LP topics more advanced than those considered previously. If you want to learn more about advanced LP than is given here see here.

The following are terms that you may come across if you do much solving of LPs using packages.

For an LP with m constraints and n variables, where b is a one-dimensional vector of length m, c and x are one-dimensional vectors of length n, and A is a two-dimensional matrix with m rows and n columns, the primal linear program (in matrix notation) is:

minimise cx subject to Ax >= b, x >= 0

Associated with this primal linear program we have the *dual* linear
program (involving a one-dimensional vector y of length m) given (in matrix
notation) by:

maximise yb subject to yA <= c, y >= 0

Linear programming theory tells us that (provided the primal LP has a finite
optimal solution) the optimal value of the primal LP is *equal* to the
optimal value of the dual LP.
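As a concrete illustration of this equality, the small primal-dual pair below is solved with scipy's `linprog` routine; the data A, b and c are invented for the example (note that `linprog` expects constraints in <= form, so Ax >= b is negated):

```python
import numpy as np
from scipy.optimize import linprog

# Invented example data for the primal: minimise cx subject to Ax >= b, x >= 0
A = np.array([[1.0, 2.0], [3.0, 1.0]])
b = np.array([4.0, 5.0])
c = np.array([2.0, 3.0])

# linprog expects A_ub x <= b_ub, so negate Ax >= b
primal = linprog(c, A_ub=-A, b_ub=-b, bounds=[(0, None)] * 2)

# dual: maximise yb subject to yA <= c, y >= 0
# (maximising yb is the same as minimising -yb)
dual = linprog(-b, A_ub=A.T, b_ub=c, bounds=[(0, None)] * 2)

print(primal.fun, -dual.fun)  # the two optimal values coincide
```

Here both solves report the same optimal value, as duality theory predicts.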

Any LP involving inequality constraints can be converted into an equivalent LP involving just equality constraints (simply add slack or surplus variables). After such a conversion the LP (in matrix notation) is:

minimise cx subject to Ax = b, x >= 0

with m equality constraints and n variables (where we can assume n > m). Then, theory tells us that each vertex of the feasible region of this LP can be found by:

- choosing m of the n variables whose columns of A are linearly independent (these m variables are collectively known as the *basis*);
- setting the remaining (n-m) variables to zero; and
- solving the resulting set of m simultaneous linear equations to determine values for the m variables we have selected.

If these values for the m variables are all > 0 then the basis is non-degenerate. If one or more of these values is zero then the basis is degenerate.
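The vertex construction above can be sketched as follows. The LP data here are invented for illustration: two equality constraints with slack variables already added, so m = 2 and n = 4, and every basis choice is enumerated:

```python
import itertools
import numpy as np

# Invented equality-form example: two constraints, four variables
# (x1, x2 plus slacks s1, s2), so m = 2 and n = 4
A = np.array([[1.0, 1.0, 1.0, 0.0],
              [1.0, 3.0, 0.0, 1.0]])
b = np.array([4.0, 6.0])
m, n = A.shape

vertices = []
for basis in itertools.combinations(range(n), m):
    B = A[:, list(basis)]
    if abs(np.linalg.det(B)) < 1e-10:
        continue                    # basis columns not independent
    x_B = np.linalg.solve(B, b)     # values of the m basic variables
    if (x_B < -1e-10).any():
        continue                    # negative value: not a feasible vertex
    degenerate = bool((np.abs(x_B) < 1e-10).any())
    vertices.append((basis, x_B, degenerate))
    print(basis, x_B, "degenerate" if degenerate else "non-degenerate")
```

For this data four of the six possible bases give feasible vertices, and none of them is degenerate.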

Essentially the simplex algorithm starts at one vertex of the feasible region and moves (at each iteration) to another (adjacent) vertex, improving (or leaving unchanged) the objective function value as it does so, until it reaches the vertex corresponding to the optimal LP solution.

Obviously the ideal situation is when, in moving from one vertex to another, we improve the objective function value by a significant amount. The worst case is when the objective function value is unchanged as we move from one vertex to another.

When we solve the LP then, if it is highly degenerate (i.e. there are many vertices of the feasible region for which the associated basis is degenerate), we may find that a large number of iterations (moves between adjacent vertices) occur with little or no improvement in the objective function value.

Computationally this is very unfortunate!

The revised simplex algorithm is one example of a primal simplex algorithm. The dual simplex algorithm is related to the dual of the LP. Many packages contain both primal and dual simplex algorithms.

Computationally, one algorithm (primal or dual) will often solve a particular LP more quickly than the other. If your LP solution time is excessive with primal simplex (the usual default algorithm) it may be worthwhile trying dual simplex. Trying dual simplex is particularly useful if your LP appears to be highly degenerate.

The simplex algorithm, as typified by a package, will:

- *crash* to find an initial vertex of the feasible region, that is, an initial basis (crashing is the name given to the procedures that packages adopt to find an initial feasible solution (vertex of the feasible region));
- at each simplex iteration, *price* to find a variable to bring into the basis (and then choose a variable to drop from the basis) - this replacing of a single variable in the basis by another variable is what moves us between adjacent vertices of the feasible region;
- at each simplex iteration, perform either a major or a minor iteration. Major iterations are iterations at which an inversion (sometimes called a factorisation or refactorisation) is carried out.

Essentially at a major iteration a matrix is inverted (the inverse is found). This is done to maintain numerical stability (i.e. to limit the accumulation of rounding errors) during simplex iterations. At a minor iteration no inversion is done.

The algorithm stops when the optimal solution is found.

Some packages have options that enable you to preprocess the problem, e.g. you may have included in the LP a constraint of the form x=5. Obviously the variable x (as well as this constraint) could be eliminated from the LP very easily simply by doing some algebra (replace x by 5 everywhere it appears).
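This kind of substitution can be sketched in a few lines; the LP data below are invented, with the second variable playing the role of the x fixed at 5:

```python
import numpy as np

# Invented LP data: minimise cx subject to Ax <= b, where the second
# variable has been fixed at 5 by a constraint of the form x = 5
c = np.array([2.0, 3.0, 1.0])
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 2.0, 4.0]])
b = np.array([10.0, 20.0])

fixed_var, fixed_val = 1, 5.0

# substitute the fixed value: move its contribution to the right-hand
# side, then drop the column (and remember its constant cost term)
b_reduced = b - fixed_val * A[:, fixed_var]
A_reduced = np.delete(A, fixed_var, axis=1)
c_reduced = np.delete(c, fixed_var)
constant = fixed_val * c[fixed_var]   # add back to any reported objective

print(A_reduced.shape, b_reduced, constant)
```

The reduced LP has one fewer variable; the constant term is added back to the reduced objective value when reporting the solution of the original LP.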

There are also more sophisticated tests available that enable reduction in the size of the LP to be achieved. Generally preprocessing is a good idea as it can reduce LP solution time dramatically.

Note too that preprocessing can also be applied to integer and mixed-integer programs.

Obviously if we have a large LP then input procedures can become unwieldy. A matrix generator is a piece of software that enables you to easily generate the LP input for a package (or generate an MPS file that can be fed to any package). Nowadays it tends to be the case that LP input comes via an algebraic modelling language such as GAMS or AMPL.

Obviously if we have a large LP then the output becomes potentially too large to digest intelligently. A report generator is a piece of software that enables you to easily extract from the LP solution items of interest and present them in a readable format.

The computer time required to solve an LP can be affected by how the problem data is scaled. For example dividing all terms (including the right-hand side) of a constraint by 100 cannot affect the optimal solution but may affect the number of iterations needed to find the optimal solution. Many packages have options to scale the data automatically.
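A small check of this claim, using scipy's `linprog` on invented data, where the first constraint is divided through by 100 in the second solve:

```python
from scipy.optimize import linprog

# Invented data: minimise x1 + 2x2 subject to
#   100x1 + 300x2 >= 600   (a badly scaled constraint)
#     x1 +   x2   >= 4
# written in the <= form that linprog expects
c = [1.0, 2.0]
unscaled = linprog(c, A_ub=[[-100.0, -300.0], [-1.0, -1.0]],
                   b_ub=[-600.0, -4.0], bounds=[(0, None)] * 2)

# divide the first constraint (including its right-hand side) by 100
scaled = linprog(c, A_ub=[[-1.0, -3.0], [-1.0, -1.0]],
                 b_ub=[-6.0, -4.0], bounds=[(0, None)] * 2)

print(unscaled.fun, scaled.fun)  # same optimal value either way
```

The optimal value is unchanged by the rescaling, although on large problems the work the solver does to reach it can differ.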

Having solved an LP it is often worthwhile saving the final solution. This is because, at a later date, we may wish to solve essentially the same LP but with just a few changes made (e.g. some data values altered and some constraints added). Generally restarting from the previously saved solution, rather than from scratch, reduces solution time.

The variables in an LP are often referred to as columns (thinking of them as the columns of the A matrix in the definition of the problem). In column generation we start solving the LP using just a subset of the variables and automatically include any additional variables as they are needed. Often at the LP optimal solution many variables take the value zero, and so need never really have been considered during the course of the simplex algorithm. Column generation is often used for LPs with a relatively small number of constraints (rows) compared to the number of variables (columns).
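A minimal column-generation loop might look as follows. This is a sketch on randomly generated data, assuming scipy's HiGHS-based `linprog`, which reports the duals of the equality rows via `res.eqlin.marginals`; a variable is worth adding if its reduced cost (c minus the duals applied to its column) is negative:

```python
import numpy as np
from scipy.optimize import linprog

# Toy equality-form LP (randomly generated, purely illustrative):
# minimise cx subject to Ax = b, x >= 0, with many more columns than rows
rng = np.random.default_rng(0)
m, n = 3, 50
A = rng.uniform(0.0, 1.0, size=(m, n))
b = A[:, :m].sum(axis=1)            # guarantees a feasible starting subset
c = rng.uniform(1.0, 2.0, size=n)

cols = list(range(m))               # restricted problem: a few columns only
while True:
    res = linprog(c[cols], A_eq=A[:, cols], b_eq=b,
                  bounds=[(0, None)] * len(cols), method="highs")
    y = res.eqlin.marginals         # duals of the equality rows
    reduced = c - y @ A             # reduced cost of every column
    reduced[cols] = np.inf          # only consider columns not yet included
    j = int(np.argmin(reduced))
    if reduced[j] > -1e-7:          # no improving column exists: optimal
        break
    cols.append(j)                  # bring the most promising column in

full = linprog(c, A_eq=A, b_eq=b, bounds=[(0, None)] * n, method="highs")
print(len(cols), res.fun, full.fun) # same optimal value, far fewer columns
```

The restricted problem reaches the same optimal value as solving with all 50 columns, having explicitly considered only a fraction of them.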

This is an option available in some packages and involves automatically investigating how the solution changes as some parameter is altered. For example for the LP:

minimise 45x1 + 70x2 subject to certain constraints

a parametric analysis of the objective function would involve looking at the LP:

minimise (45 + alpha)x1 + (70 + alpha)x2 subject to the same constraints

for varying values of *alpha* and seeing how the optimal solution
changes (if at all) as *alpha* varies. Parametric analysis can also
be carried out for right-hand sides, columns and rows.
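Such a parametric sweep of the objective can be sketched with scipy's `linprog`; the constraints below are invented, since the notes specify only the objective function:

```python
from scipy.optimize import linprog

# Illustrative constraints (invented - the notes give only the objective):
# x1 + x2 >= 10 with 0 <= x1 <= 8 and 0 <= x2 <= 8
A_ub = [[-1.0, -1.0]]            # x1 + x2 >= 10 written as -x1 - x2 <= -10
b_ub = [-10.0]
bounds = [(0, 8), (0, 8)]

sols = {}
for alpha in [-80, -40, 0, 40]:
    c = [45 + alpha, 70 + alpha]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    sols[alpha] = (tuple(res.x), res.fun)
    print(alpha, res.x, res.fun)
```

For this data the optimal vertex stays at (8, 2) until the x2 cost coefficient turns negative (around alpha = -70), at which point it jumps to (8, 8) - exactly the kind of change a parametric analysis is designed to reveal.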