Non-Linear Programming

Lecture notes in Transportation Systems Engineering

4 August 2009

1 Introduction

Optimization is the act of obtaining the best result under given circumstances. It is the process of finding the conditions that give the maximum or minimum value of a function. The general form of the optimization problem is to find $\bar{x}$ so as to

\begin{eqnarray*}
\mathrm{Minimize} & f(\bar{x}) & \\
\mathrm{subject~to} \\
g_j(\bar{x}) & \le & 0,\\
h_j(\bar{x}) & = & 0.
\end{eqnarray*}

where $\bar{x}$ is the vector of decision variables. Figure 1 illustrates the concept of optimization, which is to find the minimum or maximum of a function. Since minimizing $f(\bar{x})$ is the same as maximizing $-f(\bar{x})$, we shall follow the former notation. The optimum solution is indicated by $\bar{x}^*$.
Figure 1: The concept of optimization
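For instance (an illustrative problem, not part of the original notes), consider

\begin{eqnarray*}
\mathrm{Minimize} && f(x)=(x-2)^2\\
\mathrm{subject~to} && g_1(x)=-x\le 0,
\end{eqnarray*}

whose optimum is $x^*=2$ with $f(x^*)=0$; the constraint is inactive at the optimum.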

2 Convexity of functions

2.1 Multivariable functions

2.1.1 Example

Determine the convexity of the following function:

\begin{eqnarray*}
f(x_1,x_2)=(x_1-x_2)^2.
\end{eqnarray*}
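A sketch of the solution: a twice-differentiable function is convex if its Hessian matrix is positive semidefinite everywhere. Here

\begin{eqnarray*}
\nabla^2 f = \left[\begin{array}{cc} 2 & -2\\ -2 & 2\end{array}\right],
\end{eqnarray*}

whose eigenvalues are $0$ and $4$. Since both are non-negative, the Hessian is positive semidefinite everywhere and $f$ is convex (but not strictly convex).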

2.1.2 Problems

Determine the convexity of the following functions:

\begin{eqnarray*}
f(\bar{x})&=&x_1-2x_1x_2+x_2^2\\
f(\bar{x})&=&x_1^2+x_2^2+x_3^2-x_1x_2+x_2x_3-x_1x_3\\
f(x)&=&4x_1^4+2x_1^2-5x_1
\end{eqnarray*}

3 Classical methods

3.1 Unconstrained single variable

3.1.1 Example

Solve the following function and interpret the result(s):

\begin{eqnarray*}
f(x)=12x^5-45x^4+40x^3+5.
\end{eqnarray*}
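A sketch of the solution using the necessary condition $f'(x)=0$:

\begin{eqnarray*}
f'(x) &=& 60x^4-180x^3+120x^2 = 60x^2(x-1)(x-2) = 0,
\end{eqnarray*}

giving stationary points $x=0,1,2$. Since $f''(x)=240x^3-540x^2+240x$, we have $f''(1)=-60<0$ (local maximum, $f(1)=12$) and $f''(2)=240>0$ (local minimum, $f(2)=-11$). At $x=0$, $f''(0)=0$ while $f'''(0)=240\neq 0$, so $x=0$ is an inflection point.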

3.1.2 Problems

Solve the following functions and interpret the result(s):

\begin{eqnarray*}
f(x)&=&(x-7)^2,\\
f(x)&=&x(x-1.5),\\
f(x)&=&-3x^2+21.6x+1.
\end{eqnarray*}

3.2 Unconstrained multi-variable

3.2.1 Example

Solve the following function and interpret the result(s):

\begin{eqnarray*}
f(\bar{x})=x_1^3+x_2^3+2x_1^2+4x_2^2+6.
\end{eqnarray*}
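A sketch of the solution: setting the gradient to zero,

\begin{eqnarray*}
\frac{\partial f}{\partial x_1}=3x_1^2+4x_1=0, \qquad \frac{\partial f}{\partial x_2}=3x_2^2+8x_2=0,
\end{eqnarray*}

gives $x_1\in\{0,-4/3\}$ and $x_2\in\{0,-8/3\}$, i.e., four stationary points. The Hessian is diagonal with entries $6x_1+4$ and $6x_2+8$, so $(0,0)$ is a local minimum ($f=6$), $(-4/3,-8/3)$ is a local maximum, and the two mixed points are saddle points.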

3.2.2 Problems

Solve the following functions and interpret the result(s):

\begin{eqnarray*}
f(\bar{x})&=&(x_1-10)^2+(x_2-10)^2.
\end{eqnarray*}

3.3 Constrained multi-variable with equality constraints

3.3.1 Example

Solve the following constrained program:

\begin{eqnarray*}
\mathrm{Min: } f(\bar{x})&=&x_1^2+2x_2,\\
\mathrm{st: } g(\bar{x})&=&x_1^2+x_2^2=1.
\end{eqnarray*}
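A sketch of the solution by the Lagrange multiplier method: form $L=x_1^2+2x_2+\lambda(x_1^2+x_2^2-1)$ and set its partial derivatives to zero,

\begin{eqnarray*}
\frac{\partial L}{\partial x_1}=2x_1(1+\lambda)=0,\qquad
\frac{\partial L}{\partial x_2}=2+2\lambda x_2=0,\qquad
x_1^2+x_2^2=1.
\end{eqnarray*}

The case $x_1=0$ gives $x_2=\pm 1$, with $f(0,1)=2$ and $f(0,-1)=-2$; the case $\lambda=-1$ forces $x_2=1$ and hence the same point $(0,1)$. The minimum is therefore at $\bar{x}^*=(0,-1)$ with $f(\bar{x}^*)=-2$ (and $\lambda=1$).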

3.3.2 Problems

Solve the following constrained program:

\begin{eqnarray*}
\mathrm{Min: } f(x,y)&=&kx^{-1}y^{-2},\\
\mathrm{st: } g(x,y)&=&x^2+y^2=a^2.
\end{eqnarray*}

Solve the following constrained program:

\begin{eqnarray*}
\mathrm{Min: } f(x,y,z)&=&3x^2+4y^2+5z^2,\\
\mathrm{st: } g(x,y,z)&=&x+y+z=10.
\end{eqnarray*}

3.4 Karush-Kuhn-Tucker conditions

3.4.1 Example

Solve the following constrained program:

\begin{eqnarray*}
\mathrm{Min: } f(\bar{x})&=&x_1^2-x_2,\\
\mathrm{st: } x_1+x_2&=&6,\\
x_1&\ge&1,\\
x_1^2+x_2^2&\le&26.
\end{eqnarray*}
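Solving the KKT system by hand is somewhat tedious; the result can be cross-checked numerically, e.g. with SciPy's SLSQP solver (a minimal sketch, assuming SciPy is available; not part of the original notes):

\begin{verbatim}
# Numerical cross-check of the KKT example using SciPy
from scipy.optimize import minimize

f = lambda x: x[0]**2 - x[1]                                       # objective
cons = [{'type': 'eq',   'fun': lambda x: x[0] + x[1] - 6},        # x1 + x2 = 6
        {'type': 'ineq', 'fun': lambda x: x[0] - 1},               # x1 >= 1
        {'type': 'ineq', 'fun': lambda x: 26 - x[0]**2 - x[1]**2}] # x1^2 + x2^2 <= 26
res = minimize(f, x0=[2.0, 4.0], method='SLSQP', constraints=cons)
print(res.x, res.fun)   # approaches x* = (1, 5), f* = -4
\end{verbatim}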

3.4.2 Problems

Solve the following constrained program:

\begin{eqnarray*}
\mathrm{Max: } f(\bar{x})&=&\log(x_1+1)+x_2,\\
\mathrm{st: } 2x_1+x_2&\le&3,\\
x_1,x_2&\ge&0.
\end{eqnarray*}

Solve the following constrained program:

\begin{eqnarray*}
\mathrm{Min: } f(x,y,z)&=&3x^2+4y^2+5z^2,\\
\mathrm{st: } g(x,y,z)&=&x+y+z=10.
\end{eqnarray*}

4 Numerical methods

4.1 Unconstrained one variable problems

4.1.1 Example

Solve the following unconstrained one-dimensional problem:

\begin{eqnarray*}
\mathrm{Max: } f(x)&=&12x-3x^4-2x^6.
\end{eqnarray*}
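A minimal sketch of one numerical approach (the notes do not fix a particular method): bisection on the derivative $f'(x)=12-12x^3-12x^5$, which changes sign on $[0,1]$ since $f'(0)=12>0$ and $f'(1)=-12<0$:

\begin{verbatim}
# Bisection on f'(x) to locate the maximizer of f(x) = 12x - 3x^4 - 2x^6
def df(x):
    return 12 - 12*x**3 - 12*x**5     # derivative of the objective

a, b = 0.0, 1.0                       # f'(0) > 0, f'(1) < 0: root bracketed
while b - a > 1e-6:
    m = 0.5*(a + b)
    if df(m) > 0:                     # maximizer lies to the right of m
        a = m
    else:
        b = m
x = 0.5*(a + b)
print(x, 12*x - 3*x**4 - 2*x**6)      # x* is roughly 0.836
\end{verbatim}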

4.1.2 Problems

Solve the following unconstrained one-dimensional problems:

\begin{eqnarray*}
\mathrm{Min: } f(x)&=&x(x-1.5), \quad [0,1];\\
\mathrm{Min: } f(x)&=&\dots, \quad [0,3];\\
\mathrm{Max: } f(x)&=&-3x^2+21.6x+1, \quad [0,25].
\end{eqnarray*}

4.2 Unconstrained multi-variable problems

4.2.1 Example

Solve the following unconstrained multi-variable problem:

\begin{eqnarray*}
\mathrm{Max: } f(\bar{x})=2x_1x_2+2x_2-x_1^2-2x_2^2,   \bar{x}^0=(0,0).
\end{eqnarray*}
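A minimal sketch of the steepest-ascent (gradient search) method for this example; the fixed step length is a simplification, since the classical method would use a one-dimensional line search at each step:

\begin{verbatim}
# Steepest ascent for f(x1,x2) = 2*x1*x2 + 2*x2 - x1^2 - 2*x2^2 from (0,0)
import numpy as np

def grad(x):
    return np.array([2*x[1] - 2*x[0],            # df/dx1
                     2*x[0] + 2 - 4*x[1]])       # df/dx2

x = np.zeros(2)                                  # starting point x^0 = (0,0)
step = 0.1                                       # fixed step length (simplification)
for _ in range(1000):
    g = grad(x)
    if np.linalg.norm(g) < 1e-8:                 # stationary point reached
        break
    x = x + step*g
print(x)                                         # converges to x* = (1,1), f* = 1
\end{verbatim}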

4.2.2 Problems

Solve the following unconstrained multi-variable problems:

\begin{eqnarray*}
\mathrm{Min: } f(\bar{x})=4x_1^2-4x_1x_2+2x_2^2,   \bar{x}^0=(2,3).
\end{eqnarray*}

4.3 Constrained multi-variable problems

4.3.1 Frank-Wolfe Algorithm

It is a type of sequential approximation algorithm, applicable when the objective function is non-linear and the constraints are all linear. It uses a linear approximation of the objective function, which is then solved by any classical linear programming algorithm.

Given a feasible trial solution $\bar{x}=\bar{x}^1$, the linear approximation used for the objective function is the first-order Taylor series expansion of $f(\bar{x})$ around $\bar{x}=\bar{x}^1$. Thus

\begin{eqnarray}
f(\bar{x}) &=& f(\bar{x}^1)+\sum^n_{j=1}\frac{\partial f(\bar{x}^1)}{\partial x_j}(x_j-x_j^1)\\
&=& f(\bar{x}^1)-\sum^n_{j=1}\frac{\partial f(\bar{x}^1)}{\partial x_j}x_j^1+\sum^n_{j=1}\frac{\partial f(\bar{x}^1)}{\partial x_j}x_j\\
&=& f(\bar{x}^1)-\sum^n_{j=1}\frac{\partial f(\bar{x}^1)}{\partial x_j}x_j^1+g(\bar{x})
\end{eqnarray}

Since the first two terms are constants, optimizing $f(\bar{x})$ is the same as optimizing $g(\bar{x})$. Therefore, the equivalent linear form of $f(\bar{x})$ is $g(\bar{x})$, that is,

\begin{eqnarray}
g(\bar{x}) &=& \sum^n_{j=1}\frac{\partial f(\bar{x}^1)}{\partial x_j}x_j\\
&=& \sum^n_{j=1}c_j\,x_j, \quad \mathrm{where~}c_j=\left.\frac{\partial f(\bar{x})}{\partial x_j}\right\vert_{\bar{x}=\bar{x}^1}
\end{eqnarray}

4.3.1.1 Frank-Wolfe Algorithm

  1. Set $k=0$; find an initial feasible solution $\bar{x}^k$.
  2. Set $k=k+1$; compute $c_j=\left.\frac{\partial f(\bar{x})}{\partial x_j}\right\vert_{\bar{x}=\bar{x}^{k-1}}~\forall~j=1,2,\dots,n$.
  3. Find the optimum solution $\bar{x}^k_{LP}$ of the following LP:
\begin{eqnarray*}
\mathrm{Max:~} g(\bar{x})&=&\sum^n_{j=1}c_j\,x_j,\\
\mathrm{s.t.~~} A\bar{x}&\leq&\bar{b},\\
\bar{x}&\geq&0.
\end{eqnarray*}
  4. For the variable $t,~0\leq t\leq 1$, set
\begin{eqnarray*}
h(t)=f\left(\bar{x}^{k-1}+t\left[\bar{x}^k_{LP} - \bar{x}^{k-1}\right]\right).
\end{eqnarray*}
  5. Find the $t^*$ that maximizes $h(t)$ in the interval $0\leq t\leq 1$ and compute the next point by
\begin{eqnarray*}
\bar{x}^k=\bar{x}^{k-1}+t^*\left[\bar{x}^k_{LP} - \bar{x}^{k-1}\right].
\end{eqnarray*}
  6. If $\vert\bar{x}^k - \bar{x}^{k-1}\vert \geq \epsilon$, go to step 2; else STOP.

Note that in step 1 an LP can be used to obtain a feasible initial solution, and in step 5 any one-dimensional search (numerical or classical) can be used to maximize the function $h(t)$.

4.3.2 Example

Solve the following program by the Frank-Wolfe algorithm:

\begin{eqnarray*}
\mathrm{Max: } f(\bar{x})&=&5x_1-x_1^2+8x_2-2x_2^2,\\
\mathrm{st: } 3x_1+2x_2&\le&6,\\
x_1,x_2&\ge&0.
\end{eqnarray*}

4.3.2.1 Solution

\begin{tabular}{cccccccc}
$k$ & $\bar{x}^{k-1}$ & $(c_1,c_2)$ & $\bar{x}^k_{LP}$ & $\bar{x}(t)$ & $h(t)$ & $t^*$ & $\bar{x}^k$\\
1 & $(0,0)$ & $(5,8)$ & $(0,3)$ & $(0,3t)$ & $24t-18t^2$ & $2/3$ & $(0,2)$
\end{tabular}
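Subsequent iterations repeat the same steps and the iterates approach $\bar{x}^*=(1,1.5)$ with $f(\bar{x}^*)=11.5$. A minimal computational sketch of the algorithm for this example (assuming SciPy: linprog solves the LP of step 3, and a bounded scalar minimizer performs the line search of step 5):

\begin{verbatim}
# Frank-Wolfe sketch for max f = 5*x1 - x1^2 + 8*x2 - 2*x2^2, 3*x1 + 2*x2 <= 6
import numpy as np
from scipy.optimize import linprog, minimize_scalar

def f(x):
    return 5*x[0] - x[0]**2 + 8*x[1] - 2*x[1]**2   # objective (to maximize)

def grad(x):
    return np.array([5 - 2*x[0], 8 - 4*x[1]])

A_ub, b_ub = [[3.0, 2.0]], [6.0]                   # 3*x1 + 2*x2 <= 6

x = np.zeros(2)                                    # feasible start x^0 = (0, 0)
for k in range(100):
    c = grad(x)                                    # step 2: linearised objective
    # step 3: linprog minimises, so negate c to maximise g(x) = c'x
    x_lp = linprog(-c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)]*2).x
    # step 5: one-dimensional search for t* in [0, 1]
    t = minimize_scalar(lambda t: -f(x + t*(x_lp - x)),
                        bounds=(0, 1), method='bounded').x
    x_new = x + t*(x_lp - x)
    if np.linalg.norm(x_new - x) < 1e-6:           # step 6: stop on a small step
        x = x_new
        break
    x = x_new

print(x, f(x))                                     # approaches x* = (1, 1.5), f* = 11.5
\end{verbatim}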

4.3.3 Problems

Solve the following program by the Frank-Wolfe algorithm:

\begin{eqnarray*}
\mathrm{Min: } f(\bar{x})&=&(x_1-x_2)^2+x_2,\\
\mathrm{st: } x_1+x_2&\ge&1,\\
x_1+2x_2&\le&3,\\
x_1,x_2&\ge&0.
\end{eqnarray*}

4.3.4 Problems

Solve the following program by the Frank-Wolfe algorithm:

\begin{eqnarray*}
\mathrm{Max: } f(\bar{x})&=&3x_1+4x_2-x_1^3-x_2^3,\\
\mathrm{st: } x_1+x_2&\le&1,\\
x_1,x_2&\ge&0.
\end{eqnarray*}

4.3.5 Problems

Solve the following program by the Frank-Wolfe algorithm:

\begin{eqnarray*}
\mathrm{Max: } f(\bar{x})&=&4x_1-x_1^4+2x_2-x_2^2,\\
\mathrm{st: } 4x_1+2x_2&\le&5,\\
x_1,x_2&\ge&0.
\end{eqnarray*}

4.3.6 Example

Solve the following program by the inner penalty function method:

\begin{eqnarray*}
\mathrm{Min: } f(\bar{x})&=&\frac{(x_1+1)^3}{3}+x_2,\\
\mathrm{st: } -x_1+1&\le&0,\\
-x_2&\le&0.\\
\end{eqnarray*}
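A sketch of the inner (barrier) formulation for this problem: append a barrier term that blows up at the constraint boundary,

\begin{eqnarray*}
\phi(\bar{x},r)=\frac{(x_1+1)^3}{3}+x_2+\frac{r}{x_1-1}+\frac{r}{x_2},
\end{eqnarray*}

and minimize $\phi$ for a decreasing sequence $r\to 0$. Setting $\partial\phi/\partial x_1=\partial\phi/\partial x_2=0$ gives $x_2(r)=\sqrt{r}$ and $(x_1+1)^2=r/(x_1-1)^2$; as $r\to 0$ the unconstrained minimizers approach the constrained optimum $\bar{x}^*=(1,0)$ with $f(\bar{x}^*)=8/3$.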

4.3.7 Example

Solve the following program by the exterior penalty function method:

\begin{eqnarray*}
\mathrm{Min: } f(\bar{x})&=&\frac{(x_1+1)^3}{3}+x_2,\\
\mathrm{st: } -x_1+1&\le&0,\\
-x_2&\le&0.\\
\end{eqnarray*}
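A sketch of the exterior penalty approach in code (assuming SciPy): minimize $\phi(\bar{x},r)=f(\bar{x})+r\sum_j \max(0,g_j(\bar{x}))^2$ for an increasing sequence of $r$, warm-starting each minimization from the previous solution:

\begin{verbatim}
# Exterior penalty method for min (x1+1)^3/3 + x2 s.t. 1 - x1 <= 0, -x2 <= 0
import numpy as np
from scipy.optimize import minimize

def phi(x, r):
    f = (x[0] + 1)**3/3 + x[1]
    g = [1 - x[0], -x[1]]                          # constraints g_j(x) <= 0
    return f + r*sum(max(0.0, gj)**2 for gj in g)  # quadratic penalty

# Note: the cubic term makes phi unbounded below as x1 -> -inf, so the
# warm start keeps the iterates near the constrained local minimum.
x = np.array([0.0, 0.0])
for r in [1, 10, 100, 1000, 1e4]:                  # increasing penalty parameter
    x = minimize(lambda x: phi(x, r), x).x         # warm-start from previous x
print(x)                                           # approaches x* = (1, 0), f* = 8/3
\end{verbatim}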


Prof. Tom V. Mathew 2009-08-04