2D Newton's and Steepest Descent Methods in Matlab

A Newton's Method
Example 1     Example 2
B Steepest Descent Method
Example 3

A Newton's Method

Example 1

Consider the problem of finding a solution to the following system of two nonlinear equations:

$g_1(x,y) \equiv x^2+y^3-1=0, \qquad g_2(x,y) \equiv x^4-y^4+xy=0.$
Newton's iteration scheme
$(x_{n+1},y_{n+1})^T=(x_n,y_n)^T-\left(J_g(x_n,y_n)\right)^{-1}\left(g_1(x_n,y_n),\,g_2(x_n,y_n)\right)^T$
requires the Jacobian $J_g(x,y)$. The components of $J_g$ are
$(J_g)_{11}=2x,\quad (J_g)_{12}=3y^2,\quad (J_g)_{21}=4x^3+y,\quad (J_g)_{22}=-4y^3+x.$
The iteration starts from a properly chosen starting value $(x_0,y_0)$. We stop the iteration if one of the following criteria is satisfied: the error $e_n=|g_1(x_n,y_n)|+|g_2(x_n,y_n)|$ falls below a tolerance $e_{\max}$, or the iteration counter reaches a maximum $n_{\max}$. We choose $n_{\max}=100$ and $e_{\max}=10^{-10}$. For the starting value $(x_0,y_0)=(1,1)$, the iteration is implemented in Matlab through the following commands (executed in a script file):

n=0;            %initialize iteration counter
eps=1;          %initialize error
x=[1;1];        %set starting value

%Computation loop
while eps>1e-10&&n<100
    g=[x(1)^2+x(2)^3-1;x(1)^4-x(2)^4+x(1)*x(2)];         %g(x)
    eps=abs(g(1))+abs(g(2));                             %error
    Jg=[2*x(1),3*x(2)^2;4*x(1)^3+x(2),-4*x(2)^3+x(1)];   %Jacobian
    y=x-Jg\g;                                            %iterate
    x=y;                                                 %update x
    n=n+1;                                               %counter+1
end
n,x,eps,        %display end values

The answer in the Matlab command window is:

n =
   6
x =
   0.56497837204568
   0.87971040708669
eps =
    1.139643934777723e-013

Thus it took only 6 iterations to find a solution with an error below $10^{-13}$. If the starting value is changed to $(x_0,y_0)=(-1,-1)$, the output is

n =
     9
x =
  -0.89443195117284
   0.58479524791598
eps =
    8.626432901337466e-014

giving another approximate solution.
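
Since only the starting value changes between the two runs, the loop is conveniently packaged as a function. The following sketch (the file name newton2d.m and the function interface are our own choice, not part of the original script) returns the final iterate, the iteration count, and the error:

function [x,n,eps]=newton2d(x)
%NEWTON2D  Newton iteration for the system of Example 1,
%starting from the given value x; same tolerances as above.
n=0;            %initialize iteration counter
eps=1;          %initialize error
while eps>1e-10&&n<100
    g=[x(1)^2+x(2)^3-1;x(1)^4-x(2)^4+x(1)*x(2)];         %g(x)
    eps=abs(g(1))+abs(g(2));                             %error
    Jg=[2*x(1),3*x(2)^2;4*x(1)^3+x(2),-4*x(2)^3+x(1)];   %Jacobian
    x=x-Jg\g;                                            %Newton step
    n=n+1;                                               %counter+1
end

The two runs above are then reproduced by the calls [x,n,eps]=newton2d([1;1]) and [x,n,eps]=newton2d([-1;-1]).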

Example 2

Find the (unconstrained) minimum of the objective function

$f(x,y)=(x-y)^4+2x^2+y^2-x+2y.$
To find the minimum, we apply Newton's method to the gradient equations
$\partial f/\partial x=4(x-y)^3+4x-1=0, \qquad \partial f/\partial y=-4(x-y)^3+2y+2=0.$
The Jacobian is now the Hessian matrix $H_f(x,y)$, with components
$(H_f)_{11}=12(x-y)^2+4,\quad (H_f)_{12}=(H_f)_{21}=-12(x-y)^2,\quad (H_f)_{22}=12(x-y)^2+2.$
We use the same stopping criteria as in Example 1, and start from $(x_0,y_0)=(1,1)$.
Matlab commands:

n=0;            %initialize iteration counter
eps=1;          %initialize error
x=[1;1];        %set starting value

%Computation loop
while eps>1e-10&&n<100
    gradf=[4*(x(1)-x(2))^3+4*x(1)-1;-4*(x(1)-x(2))^3+2*x(2)+2];  %gradf(x)
    eps=abs(gradf(1))+abs(gradf(2));                             %error
    Hf=[12*(x(1)-x(2))^2+4,-12*(x(1)-x(2))^2;...                 %Hessian
            -12*(x(1)-x(2))^2,12*(x(1)-x(2))^2+2];
    y=x-Hf\gradf;                                                %iterate
    x=y;                                                         %update x
    n=n+1;                                                       %counter+1
end
n,x,eps,        %display end values

Answer in the command window:

n =
     8
x =
   0.03349047166921
  -0.56698094333841
eps =
    2.220446049250313e-016
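
Newton's method applied to the gradient locates stationary points; by itself it does not guarantee a minimum. A quick second-order check, run after the script above while x is still in the workspace, evaluates the Hessian at the computed point and inspects its eigenvalues (this check is a sketch of ours, not part of the original script):

Hf=[12*(x(1)-x(2))^2+4,-12*(x(1)-x(2))^2;...   %Hessian at computed point
        -12*(x(1)-x(2))^2,12*(x(1)-x(2))^2+2];
eig(Hf)         %both eigenvalues positive => local minimum

In fact $H_f$ is positive definite for every $(x,y)$, since its diagonal entries are positive and $\det H_f=72(x-y)^2+8>0$; the computed point is therefore the global minimum of $f$.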
 

B Steepest Descent Method

Example 3

Next we solve the minimization problem of Example 2 using the steepest descent method. The iteration scheme is now

$(x_{n+1},y_{n+1})=(x_n,y_n)-a\left(\partial f/\partial x\,(x_n,y_n),\ \partial f/\partial y\,(x_n,y_n)\right)$
with a properly chosen value of the step size $a$.

Command sequence for $a=0.09$:

n=0;            %initialize iteration counter
eps=1;          %initialize error
a=0.09;         %set iteration parameter
x=[1;1];        %set starting value

%Computation loop
while eps>1e-10&&n<100
    gradf=[4*(x(1)-x(2))^3+4*x(1)-1;-4*(x(1)-x(2))^3+2*x(2)+2];  %gradf(x)
    eps=abs(gradf(1))+abs(gradf(2));                             %error
    y=x-a*gradf;                                                 %iterate
    x=y;                                                         %update x
    n=n+1;                                                       %counter+1
end
n,x,eps,        %display end values

Answer:

n =
    85
x =
   0.03349047167893
  -0.56698094332619
eps =
    8.552891728186296e-011

Obviously, the number of iterations needed to bring the error below the threshold is much larger than for Newton's method.
In Problem 4.22 you experiment with different values of a. You will see that the performance of the steepest descent algorithm is very sensitive to the choice of a.
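
A minimal sketch of such an experiment (the particular step sizes below are our own choice, not taken from the text):

%Sketch: iteration counts of steepest descent for several values of a
for a=[0.03 0.06 0.09 0.12 0.15]
    n=0; eps=1; x=[1;1];                %reset counter, error, start value
    while eps>1e-10&&n<1000
        gradf=[4*(x(1)-x(2))^3+4*x(1)-1;-4*(x(1)-x(2))^3+2*x(2)+2];
        eps=abs(gradf(1))+abs(gradf(2));
        x=x-a*gradf;                    %steepest descent step
        n=n+1;
    end
    fprintf('a=%5.2f: n=%4d, eps=%9.2e\n',a,n,eps)
end

Too small a step makes progress slow; too large a step makes the iteration oscillate or diverge.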