Richard's MatLab Examples 01 Page


Example 1

Single-input neuron with a bias: the output is -1 for inputs < 3 and +1 for inputs >= 3.
Uses the symmetric hard limiter transfer function (+1 for v >= 0, -1 for v < 0) with bias = -3 and weight = 1, so v = w*x + bias = x - 3 is >= 0 exactly when x >= 3.

clear all;
x = -7:0.1:13;
% start -7, step size 0.1, end at 13

bias=-3;
w=1;
v = w*x + bias;
y = hardlims (v);
%Matlab symmetric hard limiter function

plot (x, y, 'r');
% PLOT(X,Y) plots vector Y versus vector X.
axis ([-7 13 -1.2 1.2]);
% AXIS([XMIN XMAX YMIN YMAX ZMIN ZMAX CMIN CMAX]) sets the scaling for the x-, y-, z-axes and color scaling limits on the current axis (see CAXIS).
title ('HW1-1, Symmetric hard limiter activation function.');
xlabel ('x');
ylabel ('y = f(v)');

line ([-7 13], [0 0]);
% LINE(X,Y) adds the line in vectors X and Y to the current axes. In other words, X = [x1 x2] and Y = [y1 y2], i.e. here the line from (-7,0) to (13,0).

x = [2 3 4] % testing
v = w*x + bias;
y = hardlims (v)


Example 2

Two-input neuron with weight matrix w = [1 -2] and input vector x = [-3 5]; an output of 0.75 is required. With bias 0, use the binary sigmoid function with slope parameter alpha = -ln(3)/13.

w = [1 -2];
x = [-3 5];
v = w*x'
y = 1 / (1 + exp ( ( log (3) / 13 ) * v ))
% EXP(X) is the exponential of the elements of X, e to the X.
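
Where the value of alpha comes from (an added check, not part of the original exercise): with v = w*x' = -13, requiring y = 1/(1 + exp(-alpha*v)) = 0.75 gives alpha = -ln(3)/13.

v = -13;                          % v = w*x' for the given w and x
alpha = -log(1/0.75 - 1) / v      % solves 1/(1+exp(-alpha*v)) = 0.75, giving -log(3)/13
y_check = 1 / (1 + exp(-alpha*v)) % reproduces the required output 0.75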


Example 3

Three-input perceptron.

w = [0 -1 1];
b = 0;
p1 = [-1; 1; -1]; %banana
p2 = [-1; -1; 1]; %pineapple
banana = hardlims (w*p1+b) %simulate banana
pineapple= hardlims (w*p2+b) %simulate pineapple


Example 4

Hopfield network to recognize the same patterns as Example 3 ("remembered" states).

clc %clears command window
clear all %clear memory
w = [0.2 0 0; 0 1.2 -0.5; 0 -0.5 1.2]; %weight
b = [-0.9; 0; 0]; %bias
p1 = [-1; 1; -1]; %banana
p2 = [-1; -1; 1]; %pineapple

input = [10; 10; 10];
banana = satlins(w*input + b); %Symmetric saturating linear transfer function.

while (isequal(banana, p1)~=1)
input = banana;
banana = satlins(w*input + b)
%show output during loop
end

input = p2 + [-0.3; 0.3; 0.3];

pineapple = satlins(w*input + b); %Symmetric saturating linear transfer function.
while (isequal(pineapple, p2)~=1)
input = pineapple;
pineapple = satlins(w*input + b)
%show output during loop
end
pineapple %the above loop body never executes for this case: the perturbed input converges back to p2 in one step
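
As a quick added check (not in the original example), both stored patterns should be fixed points of the update, i.e. satlins(w*p + b) returns the pattern itself:

isequal(satlins(w*p1 + b), p1) %returns 1: banana is a stable state
isequal(satlins(w*p2 + b), p2) %returns 1: pineapple is a stable state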


Example 5

Find the determinant and rank (maximum number of linearly independent columns or rows) of a matrix.
If the determinant is not 0, the columns (and rows) are linearly independent.

x = [1 1 1; 2 0 2; 3 1 1]
d = det(x)
r = rank(x)


Example 6

Compute the Gramian (Gram determinant). The vectors are linearly dependent iff the Gramian is zero. (For a non-square set of vectors the determinant cannot be applied directly, so the Gram matrix is used instead.)

x1 = [1;2;2;1];
x2 = [1;0;0;1];
x3 = [3;4;4;3];

A = [x1'*x1 x1'*x2 x1'*x3; x2'*x1 x2'*x2 x2'*x3; x3'*x1 x3'*x2 x3'*x3]
d = det(A)
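
Here the Gramian is 0 because the vectors are in fact linearly dependent (x3 = 2*x1 + x2). A brief added check, together with an equivalent way to form the Gram matrix:

x3 - (2*x1 + x2)   % all zeros, so x3 is a linear combination of x1 and x2
X = [x1 x2 x3];
A_check = X'*X     % same Gram matrix as A above
r = rank(X)        % 2: only two linearly independent columns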


Example 7

Adaline with alpha-LMS (Fig. 2.19), trained to perform the OR logic function.

% bi-polar case
clear all
close all
disp (' ');
disp ('Bipolar Training');
P = [1 1 1 1; -1 -1 1 1; -1 1 -1 1]
T = [-1 1 1 1]
[R, Q] = size(P); % containing the number of rows and columns in the matrix.

W = 0.001*randn(R,1);
%RANDN(R,1) returns an R-by-1 vector of pseudorandom values drawn from the standard normal distribution (mean 0, standard deviation 1)
alpha = 0.15;
err = 0.1;

MaxIter = 1000;
iter = 0;
MSE = [];
% (For reference, MSE is also the name of a toolbox performance function that measures performance as the mean of squared errors; here MSE is simply a vector recording the error per epoch.)
% MSE(E,X,PP) takes from one to three arguments,
% E - Matrix or cell array of error vector(s).
% X - Vector of all weight and bias values (ignored).
% PP - Performance parameters (ignored).
% and returns the mean squared error.

while iter < MaxIter

iter = iter + 1;
Er = 0;
tqe = 0;

for k = 1:Q %q pattern, each training case, 4 cases in total, no of columns in P

v = P(:, k)'*W; % v = w'*x(k); w is a 3x1 vector, x(k) is 3x1, so w'*x is 1x1
e = T(k) - v; %desired value - v
n2 = norm (P(:, k)); % NORM(V,P) = sum(abs(V).^P)^(1/P). NORM(V) = norm(V,2).
if n2~=0
W = W + alpha * e * P(:,k) / n2 ^ 2; %change weight
end

Er = Er + 1/Q*e^2;

end

MSE = [MSE Er];

if Er <= err
fprintf(1, 'err satisfied \n');
break;
end

span = 10;
if iter > (span+1)
de = MSE(iter) - MSE(iter - span);
if abs(de) < 1e-7
fprintf(1, 'the network is updating too slow\n');
break
end
end

end

W
iter

figure;
plot(MSE);
title('Bipolar Training MSE performance');
xlabel('Epochs');
ylabel('MSE');
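
A quick test of the learned weights on the four training patterns (added for illustration): thresholding the linear output with hardlims should recover the bipolar OR targets once the error goal is met.

v_out = W'*P            % linear outputs for the four training patterns
y_out = hardlims(v_out) % should match T = [-1 1 1 1]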


Example 8

Using a perceptron (Fig 2.30) to classify the digits given in Fig 2.41.
Need to use 10 neurons for 10 digits.

XX = zeros (9,4,10); %10 9x4 matrices; for display as graph and later transfer to P0
XX(:,:,1) = [0 0 0 1; ...   % The binary expression of digit "1",
             0 0 0 1; ...   % before turning it into a vector
             0 0 0 1; ...
             0 0 0 1; ...
             0 0 0 1; ...
             0 0 0 1; ...
             0 0 0 1; ...
             0 0 0 1; ...
             0 0 0 1];
XX(:,:,2) = [1 1 1 1; ...   % The binary expression of digit "2",
             0 0 0 1; ...   % before turning it into a vector
             0 0 0 1; ...
             0 0 0 1; ...
             1 1 1 1; ...
             1 0 0 0; ...
             1 0 0 0; ...
             1 0 0 0; ...
             1 1 1 1];
XX(:,:,3) = [1 1 1 1; ...   % The binary expression of digit "3",
             0 0 0 1; ...   % before turning it into a vector
             0 0 0 1; ...
             0 0 0 1; ...
             1 1 1 1; ...
             0 0 0 1; ...
             0 0 0 1; ...
             0 0 0 1; ...
             1 1 1 1];
XX(:,:,4) = [1 0 0 1; ...   % The binary expression of digit "4",
             1 0 0 1; ...   % before turning it into a vector
             1 0 0 1; ...
             1 0 0 1; ...
             1 1 1 1; ...
             0 0 0 1; ...
             0 0 0 1; ...
             0 0 0 1; ...
             0 0 0 1];
XX(:,:,5) = [1 1 1 1; ...   % The binary expression of digit "5", before
             1 0 0 0; ...   % turning it into a vector
             1 0 0 0; ...
             1 0 0 0; ...
             1 1 1 1; ...
             0 0 0 1; ...
             0 0 0 1; ...
             0 0 0 1; ...
             1 1 1 1];
XX(:,:,6) = [1 1 1 1; ...   % The binary expression of digit "6",
             1 0 0 0; ...   % before turning it into a vector
             1 0 0 0; ...
             1 0 0 0; ...
             1 1 1 1; ...
             1 0 0 1; ...
             1 0 0 1; ...
             1 0 0 1; ...
             1 1 1 1];
XX(:,:,7) = [1 1 1 1; ...   % The binary expression of digit "7",
             0 0 0 1; ...   % before turning it into a vector
             0 0 0 1; ...
             0 0 0 1; ...
             0 0 0 1; ...
             0 0 0 1; ...
             0 0 0 1; ...
             0 0 0 1; ...
             0 0 0 1];
XX(:,:,8) = [1 1 1 1; ...   % The binary expression of digit "8",
             1 0 0 1; ...   % before turning it into a vector
             1 0 0 1; ...
             1 0 0 1; ...
             1 1 1 1; ...
             1 0 0 1; ...
             1 0 0 1; ...
             1 0 0 1; ...
             1 1 1 1];
XX(:,:,9) = [1 1 1 1; ...   % The binary expression of digit "9",
             1 0 0 1; ...   % before turning it into a vector
             1 0 0 1; ...
             1 0 0 1; ...
             1 1 1 1; ...
             0 0 0 1; ...
             0 0 0 1; ...
             0 0 0 1; ...
             0 0 0 1];
XX(:,:,10)= [1 1 1 1; ...   % The binary expression of digit "0",
             1 0 0 1; ...   % before turning it into a vector
             1 0 0 1; ...
             1 0 0 1; ...
             1 0 0 1; ...
             1 0 0 1; ...
             1 0 0 1; ...
             1 0 0 1; ...
             1 1 1 1];

%Convert the digits to bipolar
XX = 2*XX - 1;

%Digit identifiers for plotting the graph below
digitID = [1 2 3 4 5 6 7 8 9 0];

%Plot the digits to check
for i = 1:10

subplot(2,5,i)
%SUBPLOT(mnp), breaks the Figure window into an m-by-n matrix of small axes, selects the p-th axes for the current plot, and returns the axis handle.
imagesc(~hardlim(XX(:,:,i) - 0.5));
%scale data and display as image; IMAGESC(...,CLIM) where CLIM = [CLOW CHIGH] can specify the scaling.
colormap('gray'); %Color look-up table.
set (gca, 'PlotBoxAspectRatio', [4 9 1]);
set(gca,'XTick',[],'YTick',[]);
xlabel(sprintf('Digit %d',digitID(i)));

%SET(H,'PropertyName',PropertyValue) sets the value of the specified property for the graphics object with handle H.

end

%create the input pattern matrix
P0 = zeros(36,10);
for i = 1:10
P0(:,i) = reshape (XX(:,:,i), 36, 1);
% RESHAPE(X,M,N) returns the M-by-N matrix whose elements are taken columnwise from X.
end

%Create the Bipolar target value
T = 2 * eye(10) - ones(10);
% EYE(N) is the N-by-N identity matrix. ONES(N) is an N-by-N matrix of ones.

%TP [learning rate, max no iterations, mean square error goal] = training parameters
%P: N x Q matrix of Q input vectors;
%T: M x Q matrix of target values;
%W: weights; E: error trajectory;

Tp = [0.01 2000 0.1] %TP [learning rate, max no iterations, mean square error goal]

P = [ones(1,10); P0] %add a row of ones as bias

[N, Q] = size(P)
[M, Q] = size(T)
W = 0.001 * randn(M, N); %10 x 37, initialize the weight matrix of small values

iter = 0;
MSE = [];

while iter < Tp(2)

iter = iter + 1;
SEr = 0; %mean squared error accumulated over this epoch

for k = 1:Q

yout = tansig (W*P(:,k)); %output = f(w(k)' *x(k)), w(k) is declared transposed
SEr = SEr + 1/Q * norm((yout - T(:,k)))^2; %mean squared error
W = W - Tp(1) * diag (1-yout.^2) * (yout - T(:,k)) * P(:,k)'; % update Weight, important, .^ means array power

end

MSE = [MSE SEr];

if SEr < Tp(3)
break;
end

end

%display the epochs
disp (sprintf ('Epochs to converge = %i', iter));
% Display the ending MSE
disp(sprintf('Ending MSE = %f',MSE(1,end)));
 
% Plot the MSE performance
figure;
plot(MSE);
title('MSE performance');
xlabel('Epochs');
ylabel('MSE');

% experiment with noise
pn = 0.1; % rate of noise
 
% Create some noisy data
XX_noisy = XX.*hardlims(rand(size(XX))-pn); %.* means element-wise multiplication; flips roughly a fraction pn of the pixels

for i = 1:10
    noisyP0(:,i) = reshape(XX_noisy(:,:,i),36,1);
end

% Augment the pattern matrix with a row of ones
noisyP = [ones(1,10);noisyP0];

recog = zeros(10,1);
SEr = 0;
for k = 1:Q

yout = tansig (W*noisyP(:,k));
SEr = SEr + 1/Q * norm((yout - T(:,k)))^2; %mean squared error
[junk, recog(k)] = max(yout);

end

MSE = [MSE SEr];
disp(sprintf('MSE on noisy input = %f',MSE(1,end)));

figure;
for i = 1:10
subplot(2,5,i);
imagesc(~hardlim(XX_noisy(:,:,i)-0.5)); colormap('gray');
set(gca,'PlotBoxAspectRatio',[4 9 1]);
set(gca,'XTick',[],'YTick',[]);
xlabel(sprintf('Digit %d',digitID(recog(i))));
end


Example 9

MatLab Toolbox

p=[1 1 2 2 -1 -2 -1 -2;1 2 -1 0 2 1 -1 -2];
t=[0 0 0 0 1 1 1 1;0 0 1 1 0 0 1 1];

net=newp([-2 2;-2 2],2);
% Create a perceptron. net = newp(pr,s,tf,lf)
% PR - Rx2 matrix of min and max values for R input elements.
% S - Number of neurons.
% TF - Transfer function, default = 'hardlim'.
% LF - Learning function, default = 'learnp'.
% Returns a new perceptron.

net=train(net,p,t);
% [net,tr,Y,E,Pf,Af] = train(NET,P,T,Pi,Ai,VV,TV)
% NET - Network. P - Network inputs. T - Network targets, default = zeros.

net.iw{1,1},
net.b{1}
%weights and bias
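
To check the result, the trained perceptron can be simulated on the training inputs (an added check; if training converged, the outputs reproduce t):

y = sim(net,p) % should equal t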


Example 10

Autoassociative memories (Example 3.1 in the textbook)

x1=[-0.3333; 0.7778; 0.5329];
x2=[0.4444; -0.5556; 0.7027];
x3=[0.4969; 0.6667; 0.5556];

estimate_M = x1*x1' + x2*x2' + x3*x3'

estimate_x1=estimate_M*x1
estimate_x2=estimate_M*x2
estimate_x3=estimate_M*x3

%Estimates are not perfect replicas because the key vectors are not orthogonal
%Euclidean distance between estimate_x1 and each key vector
d11 = norm(x1-estimate_x1)
d21 = norm(x2-estimate_x1)
d31 = norm(x3-estimate_x1)
% As expected the response vector estimate_x1 is closest to x1

%Euclidean distance between estimate_x2 and each key vector
d12 = norm(x1-estimate_x2)
d22 = norm(x2-estimate_x2)
d32 = norm(x3-estimate_x2)
%As expected the response vector estimate_x2 is closest to x2

%Euclidean distance between estimate_x3 and each key vector
d13 = norm(x1-estimate_x3)
d23 = norm(x2-estimate_x3)
d33 = norm(x3-estimate_x3)
%As expected the response vector estimate_x3 is closest to x3
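
The imperfection can be quantified by checking how close the key vectors are to being orthonormal (an added check; perfect recall would require the cross inner products to be 0):

x1'*x1, x2'*x2, x3'*x3 % each approximately 1 (the key vectors are close to unit length)
x1'*x2, x1'*x3, x2'*x3 % nonzero, so the key vectors are not orthogonal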


Example 11

1-2-1 Network - Backpropagation Algorithm

%Initialize the network weights and biases in small random values.
w1_0=[-0.27; -0.41]
b1_0=[-0.48; -0.13]
w2_0=[0.09 -0.17]
b2_0=[0.48]

%Now we are ready to start the algorithm. For our initial input we will choose p=1:
p=1;
a0=p;

%The output of the first layer is then
a1=logsig(w1_0*p + b1_0)

%The second layer output is
a2=purelin(w2_0*a1 + b2_0)

%The error would then be e = t - a
e= (1 + sin(pi/4*1)) - a2

%The next stage of the algorithm is to backpropagate the sensitivities.
%We need the derivatives of the transfer functions
%First layer: derivative of log-sigmoid = (1-a1)(a1)
%Second layer: derivative of linear = 1

%Now perform the backpropagation. The starting point is found at the second layer
% s2 = -2 * F2'(n2) * e   (second-layer sensitivity; F2'(n2) = 1 for the linear output layer)
s2=-2*1*e

f1=[(1-a1(1))*(a1(1)) 0; 0 (1-a1(2))*(a1(2))];
s1=f1*w2_0'*s2

%The final stage of the algorithm is to update the weights. For simplicity, we will use a learning rate alpha = lr = 0.1
lr = 0.1;
w2_1=w2_0 - lr*s2*a1'
b2_1=b2_0 - lr*s2

w1_1=w1_0 - lr*s1*a0'
b1_1=b1_0 - lr*s1
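
%To see that the update helped, repeat the forward pass with the new weights (an added check; the error should be slightly smaller than before):
a1_new = logsig(w1_1*p + b1_1);
a2_new = purelin(w2_1*a1_new + b2_1);
e_new = (1 + sin(pi/4*1)) - a2_new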


Example 12
NN Toolbox Example

% Simple Network
% a = purelin (Wp +b)
% Two inputs, one output

% NEWLIN(PR,S,ID,LR) takes these arguments,
% PR - Rx2 matrix of min and max values for R input elements.
% S - Number of elements in the output vector.
% ID - Input delay vector, default = [0]. (default value used)
% LR - Learning rate, default = 0.01; (default value used)

% Simulate a NN: SIM(net,P,Pi,Ai) takes,
% NET - Network.
% P - Network inputs.
% Pi - Initial input delay conditions, default = zeros.
% Ai - Initial layer delay conditions, default = zeros.
% and returns:
% Y - Network outputs.
% Pf - Final input delay conditions.
% Af - Final layer delay conditions.

% Simple Network
% a = purelin (Wp +b)
% Two inputs, one output

MyNet = newlin ([-1 1;-1 1],1); %2 inputs, for each min=-1; max = 1
% PR - Rx2 matrix of min and max values for R input elements.
% S - Number of elements in the output vector.

MyNet.IW{1,1} = [1,2]; %set weight for 1st neuron, layer 1 %there is only 1 layer and 1 neuron anyway for this case
MyNet.b{1} = 0; %set bias = 0
P = [1 2 2 3; 2 1 3 1]; %4 input vectors

% Simulate a NN: SIM(net,P,Pi,Ai) takes,
% NET - Network.
% P - Network inputs.
% and returns:
% Y - Network outputs.

A_Output = sim(MyNet,P)
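
Since a = purelin(Wp + b) = Wp when b = 0, the same outputs can also be computed directly (a sanity check, not part of the toolbox call):

A_check = [1 2]*P % equals A_Output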

Example 13 NN Toolbox Example - Perceptron

% Design a perceptron to solve the given problem.
% Then given 4 additional inputs, test network
% P = 4 inputs (2 input elements for 1 neuron); T = expected result; W = initial weight; b = initial bias

% First let's do it the old way, without NN Toolbox

error = 1;
P = [-1 -1 0 1;1 -1 0 0];
T = [1 1 0 0];
W = [rand ; rand];
b = 0;

while (error ~= 0)
error = 0;
for i = 1 : 4
a = hardlim(W(:)'*P(:,i) + b);
e = T(i) - a;
W(:) = W(:) + e * P(:,i);
b = b + e;
if (e ~= 0)
error = error + 1;
end
end
end
W
b

% test data setup
P_test = [-2 1 0 -1; 0 1 1 -2];
for i = 1 : 4
test_data(i) = hardlim(W(:)'*P_test(:,i) + b);
end
test_data

%Graph solution
%red and blue dots
hold on
plot(P(1,1),P(2,1),'r*');
plot(P(1,2),P(2,2),'r*');
plot(P(1,3),P(2,3),'b*');
plot(P(1,4),P(2,4),'b*');
%Test Data, green dots
plot(P_test(1,1),P_test(2,1),'g*');
plot(P_test(1,2),P_test(2,2),'g*');
plot(P_test(1,3),P_test(2,3),'g*');
plot(P_test(1,4),P_test(2,4),'g*');

% Then let us do it the new way, with NN Toolbox

% Design a perceptron to solve the given problem.
% Then given 4 additional inputs, test network
% P = 4 inputs (2 input elements for 1 neuron); T = expected result; W = initial weight; b = initial bias
P = [-1 -1 0 1;1 -1 0 0];
T = [1 1 0 0];
%setup network
% Create new perceptron; NET = NEWP(PR,S,TF,LF) takes these inputs,
% PR - Rx2 matrix of min and max values for R input elements.
% S - Number of neurons.
% TF - Transfer function, default = 'hardlim'.
% LF - Learning function, default = 'learnp'.
net = newp([-2 2;-2 2],1); % 4 inputs; 2 input elements, 1 neuron
net.inputWeights{1,1}.initFcn = 'rands';
net.biases{1}.initFcn = 'rands';
net = init(net);
net.adaptParam.passes = 11;
[net,a,e] = adapt(net,P,T);
% ADAPT Allow a neural network to adapt. see "help adapt"

%Test Data
P_test = [-2 1 0 -1; 0 1 1 -2];
Test_results = sim (net,P_test)
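
For comparison with the hand-trained W and b above, the weights and bias learned by the toolbox version can be inspected (an added check; the exact values will differ):

net.iw{1,1}
net.b{1}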

 

Example 14 NN Toolbox Example - Example Network

% 5 elements in the input vector, a bias, 1 neuron in the 1st layer,
% and it uses the hardlims transfer function to output 1 element.

% A second layer is added.
% This layer has 2 neurons, no bias,
% and uses the purelin transfer function to output 2 elements.

% In addition, the second layer receives as inputs the output of the first layer and its own output.
% So the second layer has 2 weighted inputs: the first of 1 element (from layer 1), the second of 2 elements (from itself).

% This example only shows how a network is defined

% First create a network
net = network

% then set property for those displayed on screen
net.numInputs = 1;
% Note, although there are 5 elements in our input vector, we have only 1 input vector.
% Later we will specify the 5 elements when we specify the net.inputs{i}.size

%Now set the number of layers (in this case we have 2)
net.numLayers = 2;

% The information for connections is binary, a zero represents the lack of a
% connection, and a 1 will indicate the connection exists.
% As you can see from our architecture, there is a bias only in the first layer, so we can set this by the following code,
net.biasConnect = [1; 0];

% In our architecture, the inputs only connect to the first layer, so we must specify this connection:
net.inputConnect(1,1) = 1;
% The net.inputConnect(i,j) specifies the ith layer connecting to the jth input.

% Similarly, net.layerConnect(i,j) represents a layer weight connection to the ith layer from the jth layer.
% Note: jth to ith
% We have a connection from the 1st layer to the 2nd, and the 2nd to the 2nd.
net.layerConnect(2,1) = 1;
net.layerConnect(2,2) = 1;

% Next we must establish output and target connections. Both are 1x2 matrices because there are 2 layers which can connect to the outside world.
net.outputConnect = [0 1];
net.targetConnect = [0 1];

% Next we will deal with the subobject properties.
net.inputs{1}.range = [-10 10; -10 10; -10 10; -10 10; -10 10]

net.layers{1}.transferFcn = 'hardlims';
net.layers{1}.size = 1;
net.layers{1}.initFcn = 'rands';
net.layers{2}.transferFcn = 'purelin'; % per the description above: layer 2 has 2 purelin neurons
net.layers{2}.size = 2;                % needed so the LW{2,1} and LW{2,2} weights below have elements

% We can set the actual bias and weight values of the network. Try these:
net.b{1}(1) = 10;
net.IW{1,1}
net.LW{1,1}
net.LW{2,1}
net.LW{2,2}
net.LW{2,1} (1,1) = 5;
net.LW {2,1}

% Two types of input vectors: those that occur concurrently (at the same time, or in no particular time sequence), and those that occur sequentially in time (in the last case, the order in which the vectors are presented is important).

% Concurrent Inputs in a Static Network
% Static indicates the network has no feedback or delays.
% In this case we do not have to be concerned about whether or not the input vectors occur in a particular time sequence, so we can treat the inputs as concurrent.
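
% For contrast with the concurrent case, a small sketch of sequential inputs to a dynamic (delayed) linear network.
% (Illustrative only; assumes newlin with an input-delay vector, and passes the inputs as a cell array so their time order matters.)
NetD = newlin([-1 1],1,[0 1],0.01); % one input element, input delays 0 and 1
NetD.IW{1,1} = [1 2];               % weights for the current and the delayed input
NetD.b{1} = 0;
Pseq = {1 2 3 4};                   % a time sequence of scalar inputs
Aseq = sim(NetD,Pseq)               % a(t) = p(t) + 2*p(t-1), so {1 4 7 10}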

Example 15 NN Toolbox Example - Simple Feedforward Network

% Consider a simple feedforward network of one layer and one neuron, two inputs and one output with a linear layer.
Net = newlin([-1 1;-1 1],1); %create a new linear layer
Net.IW{1,1} = [1 2]; Net.b{1} = [0]; %note: variable names are case sensitive, so use Net consistently
% Input data consists of four CONCURRENT vectors:
P=[1 2 2 3; 2 1 3 1];
% Just simulate network
A = sim(Net,P)
% The result would be the same if there were four networks operating in
% parallel and each network received one of the input vectors and produced one of the outputs.

Note: I skipped a few examples here from NN exercises

Example 16 NN Toolbox Example - Backpropagation Network

% Backpropagation Network
% Newcf – trainable cascade-forward backpropagation
% Newelm – Elman backpropagation network
% Newff – feed-forward backpropagation network
% Newfftd – feed-forward input-delay backprop network

First, create an m-file testbp.m containing the function "testbp".

%The following examples use a network created with newff to compare the different training algorithms.

% function takes 2 parameters, training algorithm, figure_window
function testbp (training_algorithm, figure_window)

% setup a 3x4 plotting window
subplot(3, 4, figure_window);

%create network
% NEWFF(PR,[S1 S2...SNl],{TF1 TF2...TFNl},BTF,BLF,PF) takes,
% PR - Rx2 matrix of min and max values for R input elements.
% Si - Size of ith layer, for Nl layers.
% TFi - Transfer function of ith layer, default = 'tansig'.
% BTF - Backprop network training function, default = 'trainlm'.
% BLF - Backprop weight/bias learning function, default = 'learngdm'.
% PF - Performance function, default = 'mse'.
% and returns an N layer feed-forward backprop network.
net = newff([-1 2; 0 5], [3, 1], {'tansig','purelin'}, training_algorithm);
net.trainParam.show = 10;
net.trainParam.epochs = 300;
net.trainParam.goal = 1e-5;

%network input and target values
p = [-1 -1 2 2; 0 5 0 5];
t = [-1 -1 1 1];

%train network, and simulate
[net, tr] = train(net, p, t);
a = sim(net, p)

% setup plot window as desired
% AXIS([XMIN XMAX YMIN YMAX]) sets scaling for the x- and y-axes on the current plot.
axis ([0 50 10e-10 10e10])
axis off
title (strcat(strcat(training_algorithm, ', epochs: '), num2str(length(tr.epoch)-1)));
line ([0 50 50 0 0], [10e-10 10e-10 10e10 10e10 10e-10])
line ([0 50], [1e-5 1e-5], 'Color', [0 0 0], 'LineWidth', [2])

Next the function is called from the workspace with the following arguments:

testbp('traingd',1) %Gradient descent backpropagation.

testbp('traingdm', 2) %Gradient descent with momentum backpropagation.

testbp('traingda', 3) %Gradient descent with an adaptive learning rate, lr, backpropagation.

testbp('traingdx', 4) %Gradient descent with momentum and an adaptive learning rate backpropagation.

testbp('trainrp', 5) %TRAINRP is a network training function that updates weight and bias values according to the resilient backpropagation algorithm (RPROP).

testbp('traincgf', 6) %Conjugate gradient backpropagation with Fletcher-Reeves updates.

testbp('traincgp', 7) %Conjugate gradient backpropagation with Polak-Ribiere updates.

testbp('traincgb', 8) %Conjugate gradient backpropagation with Powell-Beale restarts.

testbp('trainscg', 9) %The scaled conjugate gradient algorithm is based on conjugate directions, as in TRAINCGP, TRAINCGF and TRAINCGB, but this algorithm does not perform a line search at each iteration. The line search is computationally expensive. This algorithm was designed to avoid the time consuming line search.

testbp('trainbfg', 10) %BFGS quasi-Newton backpropagation.

testbp('trainoss', 11) %One step secant backpropagation.

testbp('trainlm', 12) %This algorithm was designed to approach second-order training speed without having to compute the Hessian matrix.

 

Example 17 NN Toolbox Example - Radial basis networks

clear all;

%Radial basis networks can be used to approximate functions.
P = [1 2 3];
T = [2.0 4.1 5.9];
net = newrbe(P,T);

%Here the network is simulated for a new input.

P = 3;
Y = sim(net,P)
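
To visualize the approximation, the network can be simulated over a range of inputs and plotted against the training data (an illustrative sketch; Pq and Yq are just names chosen here):

Pq = 0:0.1:4; % query points
Yq = sim(net,Pq);
plot(Pq,Yq,'b-',[1 2 3],[2.0 4.1 5.9],'r+') % the exact-design RBF fit passes through the training points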