4 Lagrangian Relaxation Theories

When it comes to solving integer programming (IP) problems, linear programming (LP) relaxation is often used to obtain a lower bound for a minimization problem or an upper bound for a maximization problem. The effectiveness of the bounding algorithm plays a vital role in the overall performance of branch-and-bound algorithms. Sometimes we might not be interested in solving the optimization problem at hand to optimality at all; instead, we could be happy with deriving a feasible solution from the optimal solution of the relaxed formulation. This is particularly common in industry applications, where proving optimality might not be as important as obtaining a near-optimal solution in reasonable computational time.
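As a one-line illustration of the bounding idea, consider the toy problem \(\text{min.} \ x\) subject to \(2x \geq 3\) with \(x\) a nonnegative integer. Its LP relaxation attains \(x = 1.5\), so \(1.5\) is a valid lower bound on the integer optimum \(x = 2\), and rounding the relaxed solution up happens to recover a feasible, in fact optimal, integer solution here.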

Aside from resorting to linear programming relaxation to help solve IP problems, there is another powerful decomposition algorithm, Lagrangian Relaxation, that can obtain tight bounds through a different form of relaxation of the original problem. I have used this algorithm to solve some very interesting optimization problems in industry, and I believe mastering the essence and intricacies of Lagrangian Relaxation can contribute greatly to the success of solving optimization problems in practice.

In the following sections, we will examine the two key components of applying Lagrangian Relaxation, namely, reformulation and problem solving through subgradient search.

4.1 Lagrangian Relaxation Reformulation

As with other decomposition algorithms, the first step in utilizing Lagrangian Relaxation is to transform the problem at hand into a suitable format. This involves two critical steps:

  • Recognizing the applicability of the Lagrangian Relaxation algorithm.
  • Model reformulation.

Arguably, the first step is more challenging when facing a new optimization problem. The second step is often straightforward once one recognizes that the required structure is present in the model formulation. My experience with Lagrangian Relaxation has been built through problem solving, and it is natural to develop intuition after solving more optimization problems with this powerful algorithm. In this chapter and the following chapter, we will use Lagrangian Relaxation to solve a number of classical optimization problems. Hopefully, the exercise will help us develop a good sense of when Lagrangian Relaxation is applicable.

Regarding the second point, we will see that one original problem formulation sometimes admits multiple forms of Lagrangian relaxation, and some are better than others in terms of providing tighter bounds.

To illustrate the Lagrangian Relaxation process, we adopt the following primal model from Fisher (2004). In the model, \(x\) is the vector of decision variables, which are nonnegative and integral, and \(c\) is the vector of cost coefficients in the objective function. There are two sets of constraints, \(Ax = b\) and \(Dx \leq e\), such that moving one of them into the objective function makes the remaining problem easier to solve.

One example of such a case is when \(Dx \leq e\) consists of separable constraints and \(Ax = b\) is the set of complicating constraints that couples the separable constraints together. After lifting the coupling constraints into the objective function, the remaining problem decomposes into several independent subproblems and becomes much more tractable.

\[\begin{align} \text{min.} &\quad cx \label{p-obj} \\ \text{s.t.} &\quad Ax = b \label{p-cons1} \\ &\quad Dx \leq e \label{p-cons2} \\ &\quad x \geq 0 \ \text{and integral} \end{align}\]

To perform Lagrangian Relaxation, we define a vector of Lagrange multipliers \(u\) for constraints \(\eqref{p-cons1}\) and penalize the constraint violations in the objective function.

\[\begin{align} q(u) = \text{min.} &\quad cx + u(Ax - b) \label{lr-obj} \\ \text{s.t.} &\quad Dx \leq e \label{lr-cons1} \\ &\quad x \geq 0 \ \text{and integral} \end{align}\]
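To make the earlier separability remark concrete, suppose, purely as an illustrative assumption, that \(D\) is block diagonal, so that \(Dx \leq e\) splits into independent blocks \(D_j x_j \leq e_j\) over subvectors \(x_j\). Partitioning \(c\) and the columns of \(A\) conformingly into \(c_j\) and \(A_j\), the relaxed problem separates into one small subproblem per block:

\[\begin{align} q(u) = -ub + \sum_{j} \min \{\, (c_j + uA_j)x_j : D_j x_j \leq e_j,\ x_j \geq 0 \ \text{and integral} \,\} \end{align}\]

Each subproblem can then be solved independently, which is exactly what makes the relaxation computationally attractive.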

Note that for any given vector \(u\), the optimal objective value of the Lagrangian Relaxation is a lower bound on the optimal value of the original problem. Our goal is then to find the optimal multipliers \(u^*\) that give the best, that is, the largest, lower bound for the original problem. Since \(q(u)\) is the pointwise minimum of a family of functions that are linear in \(u\), it is piecewise linear and concave, which is what makes the subgradient methods of the next section applicable. This dual problem is formally defined below.

\[\begin{align} \text{max.} &\quad q(u) \\ \text{s.t.} &\quad u \ \text{unrestricted} \end{align}\]
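To make these definitions tangible, here is a small Python sketch that evaluates \(q(u)\) by brute-force enumeration on a tiny instance; all instance data are made up purely for illustration. For any \(u\), the value \(q(u)\) is a valid lower bound on the primal optimum, and searching over \(u\) tightens it:

```python
import itertools

import numpy as np

# A tiny hypothetical instance of  min { cx : Ax = b, Dx <= e, x >= 0 integral };
# the numbers are made up purely for illustration.
c = np.array([2.0, 3.0])
A = np.array([[1.0, 1.0]])
b = np.array([3.0])
D = np.array([[2.0, 0.0],
              [0.0, 1.0]])
e = np.array([4.0, 4.0])

def q(u, x_max=5):
    """Evaluate q(u) = min { cx + u(Ax - b) : Dx <= e, x >= 0 integral }
    by brute-force enumeration over a small box (toy-sized instances only)."""
    best_val, best_x = float("inf"), None
    for x in itertools.product(range(x_max + 1), repeat=len(c)):
        x = np.array(x)
        if np.all(D @ x <= e):                # keep only the "easy" constraints
            val = c @ x + u @ (A @ x - b)     # penalized objective
            if val < best_val:
                best_val, best_x = val, x
    return best_val, best_x

# Any u gives a valid lower bound; some u values give much better bounds.
for u in [np.array([0.0]), np.array([-2.0]), np.array([-3.0])]:
    val, x = q(u)
    print(f"u = {u}, q(u) = {val:.2f}, minimizer x = {x}")
```

For this instance the primal optimum is \(7\), attained at \(x = (2, 1)\), and the dual also reaches \(q(-3) = 7\), so there happens to be no duality gap here; in general, \(\max_u q(u)\) may sit strictly below the primal optimum.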

Note that the sign restriction on the Lagrange multipliers is determined by the objective sense of the original problem and the direction of the constraints that are relaxed. In the case above, the primal objective is minimization and the relaxed constraints are equalities. The value of \(Ax\) could be smaller or greater than \(b\), so the violations could be negative or positive, which leads to the unrestricted sign of the multipliers \(u\). If constraints \(\eqref{p-cons2}\) were relaxed instead, the associated Lagrange multipliers would have to be restricted to be nonnegative.
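For completeness, relaxing \(\eqref{p-cons2}\) yields the formulation below, written here with multipliers \(v\) to avoid clashing with the \(u\) above. The restriction \(v \geq 0\) ensures that any violation \(Dx > e\) increases the minimization objective rather than being rewarded, so that \(q(v)\) remains a valid lower bound.

\[\begin{align} q(v) = \text{min.} &\quad cx + v(Dx - e) \\ \text{s.t.} &\quad Ax = b \\ &\quad v \geq 0, \ x \geq 0 \ \text{and integral} \end{align}\]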

4.2 Approximating the Optimal Lagrange Multipliers

The essential component of Lagrangian Relaxation is the use of Lagrange multipliers to penalize the violation of the constraints that we choose to move into the objective function. Several approaches have been proposed in the literature to approximate the optimal \(u\) values. In the following sections, we introduce two such algorithms, namely, subgradient search and surrogate subgradient search.
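As a preview, a bare-bones subgradient ascent for the toy dual above could look like the sketch below. It reuses the q() function and the data A and b from the earlier snippet in Section 4.1, and the diminishing step size \(1/(k+1)\) is an arbitrary illustrative choice rather than a recommendation.

```python
import numpy as np

u = np.array([0.0])
best = float("-inf")
for k in range(100):
    val, x = q(u)                  # solve the relaxed problem at the current u
    best = max(best, val)          # q(u) oscillates, so track the best bound seen
    g = A @ x - b                  # (Ax - b) is a subgradient of q at u
    u = u + (1.0 / (k + 1)) * g    # ascent step toward a better lower bound
print(f"best lower bound found: {best:.2f}")
```

On the toy instance, this simple loop quickly reaches the best bound of \(7\); the next sections discuss the step-size and convergence questions this sketch glosses over.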