Richard's MatLab Examples 02 Page


Example 1

Memory matrix (associative memory): compute the response of the memory using the memory matrix given in Eq. 3.19, then compute the Euclidean distances between that response and the original key vectors of Eq. 3.15.

Part I. The memory is trained with three key vectors: x1, x2 & x3.

x1=[-0.3333; 0.7778; 0.5329];
x2=[0.4444; -0.5556; 0.7027];
x3=[0.4969; 0.6667; 0.5556];
estimate_M = x1*x1' + x2*x2' + x3*x3'
x1m = [0; 0.7778; 0.5329]; %x1 with its first element zeroed (degraded key)
estimate_x1m=estimate_M*x1m
%Euclidean distances between the memory response and the key vectors
d11m = norm(x1-estimate_x1m)
d21m = norm(x2-estimate_x1m)
d31m = norm(x3-estimate_x1m)

%For comparison: Euclidean distances for x11, the memory's response to the full (unmasked) key x1 (x11 = estimate_M*x1)
x11=[-0.1023; 1.3249; 0.7489];
d11 = norm(x1-x11)
d21 = norm(x2-x11)
d31 = norm(x3-x11)

Part II. The key vectors are closer to mutually orthogonal (a quick inner-product check after the code below confirms this).

x1=[0.1309; -0.9779; -0.1629];
x2=[-0.7548; 0.0587; -0.6533];
x3=[-0.6354; -0.2370; 0.7349];

estimate_M = x1*x1' + x2*x2' + x3*x3'

x1m = [0; -0.9779; -0.1629]; %x1 with its first element zeroed (degraded key)

estimate_x1m=estimate_M*x1m

%Euclidean distances between the memory response and the key vectors
d11m = norm(x1-estimate_x1m)
d21m = norm(x2-estimate_x1m)
d31m = norm(x3-estimate_x1m)

%For comparison: Euclidean distances for x11, the memory's response to the full (unmasked) key x1 (x11 = estimate_M*x1)
x11=[0.1501; -0.9876; -0.1092];
d11 = norm(x1-x11)
d21 = norm(x2-x11)
d31 = norm(x3-x11)
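
Why Part II recalls better: these key vectors are nearly orthogonal, so the cross-talk terms in the response estimate_M*x1m are small. A quick check of the pairwise inner products (all of which come out close to zero):

%check mutual orthogonality of the Part II key vectors
x1'*x2
x1'*x3
x2'*x3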

 

Example 2

MLP NN trained with backpropagation (textbook p. 154, Problem 3.5).

function 1...

function [W1, W2, E] = sol_hw5_bp2(Ptrain, Ttrain, Ptest, Ttest, M, Tp, W1, W2);
% one hidden layer trained using backpropagation
% Ptrain: n x Q matrix with Q, n-dimensional training input vectors
% Ttrain: m x Q matrix with Q, m-dimensional training output vectors
% Ptest: n x q matrix with q, n-dimensional testing input vectors
% Ttest: m x q matrix with q, m-dimensional testing output vectors
% M: number of neurons in the hidden layer
% Tp: training parameter vector
% Tp(1): learning rate
% Tp(2): maximum number of iterations (epochs)
% Tp(3): slope of the activation function, f(x) = tanh(beta*x)
% Tp(4): stopping condition; learning stops if the test-set error increases by more than Tp(4)
% Tp(5): weight initialization option. If Tp(5)=0, random weight
% initialization; else use Nguyen-Widrow initialization
% W1: weights from the input to the hidden layer (includes biases)
% W2: weights from the hidden layer to the output layer (includes bias)

[n, Q] = size(Ptrain);
[m, Q] = size(Ttrain);
q = size(Ttest,2); %number of test vectors
%Parameter initialization
alpha=Tp(1) %learning rate
MaxIter=Tp(2); %maximum number of epochs
beta=Tp(3); %slope of the tanh activation function
Incr=Tp(4); %maximum increase of error for test set
NgyWid=Tp(5); %NgyWid=0: random initialization; NgyWid~=0: Nguyen-Widrow initialization
E=[];
MSEold = realmax; %largest +ve floating pt number
MSEnew = realmax;
iter=0;

if nargin<8
if NgyWid==0 %random initialization
W1=0.1*randn(M, n+1);
W2=0.1*randn(m, M+1);
else %Nguyen-Widrow initialization
gamma1=0.7*M^(1/n); %Eq 3.58
gamma2=0.7*m^(1/M); %Eq 3.58
W1=-0.5+rand(M,n);
W2=-0.5+rand(m,M);
b1=zeros(M,1);
b2=zeros(m,1);
for i=1:M
W1(i,:) = gamma1*W1(i,:)/norm(W1(i,:)); %Eq 3.59
b1(i) = (max(W1(i,:))-min(W1(i,:)))*rand(1,1)-mean(W1(i,:));
end
W1=[W1 b1];

for i=1:m
W2(i,:) = gamma2*W2(i,:)/norm(W2(i,:)); % Eq. (3.59)
b2(i) = (max(W2(i,:))-min(W2(i,:)))*rand(1,1)-mean(W2(i,:));

end
W2=[W2 b2];
end
end %close "if nargin<8": weights are initialized only when they are not supplied

%network training
MSE = zeros(MaxIter, 1);
while iter<MaxIter && MSEnew<=(MSEold+Incr)
iter=iter+1;
for i=1:Q
v1=W1*[Ptrain(:,i)' 1]';
xout1=tanh(beta*v1);
g1=beta*(1-xout1.^2);
v2=W2*[xout1' 1]';
xout2=purelin(v2);
g2=1;
e=Ttrain(:,i)-xout2;
MSE(iter) = MSE(iter)+e'*e/Q;
D2=diag(g2)*e; %eq 3.77
D1=diag(g1)*W2(1:m, 1:M)'*D2; %eq 3.83
W2=W2+alpha*D2*[xout1' 1]; % Eq. (3.74)
W1=W1+alpha*D1*[Ptrain(:,i)' 1]; % Eq. (3.74)
end
%Calculating the error for the testing data set
MSEold=MSEnew;
MSEnew=norm(Ttest-sol_hw5_bp2val(Ptest, W1,W2,beta))^2/q;
E=[E;MSEnew];
end
figure;
loglog(MSE(1:iter), 'r'); %plot only the epochs actually run
hold on;
loglog(E, 'b'); grid;
xlabel('Training epoch');
ylabel('MSE');
legend('Training MSE','Test MSE');
hold off;

end

function 2...

function y=sol_hw5_bp2val(P,W1,W2,beta);

%SOL_HW5_BP2VAL - output of the MLP NN with one hidden layer

%y=sol_hw5_bp2val(P,W1,W2,beta);
%
% P: n by Q matrix containing Q, n-dimensional input vectors
% W1: weights from the input to the hidden layer
% W2: weights from the hidden layer to the output layer
% beta: slope of the activation function, f(x)=tanh(beta*x)
% y: m by Q matrix containing Q, m-dimensional output vectors

[n,Q]=size(P);
[m,M]=size(W2);
y=zeros(m,Q);
for i=1:Q
v1=W1*[P(:,i)' 1]';
xout1=tanh(beta*v1);
v2=W2*[xout1' 1]';
xout2=purelin(v2);
y(:,i)=xout2;
end

Problem a, calling functions

clear all;
close all;

Qq = [200 100 50]; %train, test, verification case nos

%Generate the training data
Ptrain = zeros(1, Qq(1));
Ptrain = rand(size(Ptrain))*0.9 + 0.1;
Ttrain = 1./Ptrain; %element-wise division: target function f(x)=1/x

Ptest = zeros(1, Qq(2));
Ptest = rand(size(Ptest))*0.9 + 0.1;
Ttest = 1./Ptest;

%Generate the verification data. This time choose sequential points
Pverif = zeros(1, Qq(3));
i=1:Qq(3);
Pverif(:,i) = 0.1+(i-1)*(0.9/Qq(3));
Tverif = 1./Pverif;

Tp = [0.05 2000 1 0.1 1];
M = 10;

[W1,W2,E] = sol_hw5_bp2(Ptrain, Ttrain, Ptest, Ttest, M, Tp);

Yverif=sol_hw5_bp2val(Pverif, W1,W2,1);

figure;
plot(Pverif, Tverif,'r.',Pverif, Yverif,'-'); grid;
axis([1 10 1 10]);
xlabel('X');
ylabel('Network Prediction vs. True value');
legend('True value','Prediction');
title('Network Prediction and Verification data drawn on the same graph');

figure;
i=1:0.1:10;
plot(i,i,'r-',Yverif, Tverif,'.'); grid;
axis([1 10 1 10]);
xlabel('Prediction');
ylabel('Verification data');
title('Network Prediction vs. Verification data');

Problem b, calling functions

clear all;
close all;

Qq = [200 100 50]; %train, test, verification case nos

%Generate the training data
Ptrain = zeros(2, Qq(1));
Ptrain = 2*rand(size(Ptrain))-1;
Ttrain = Ptrain(1,:).^2+Ptrain(2,:).^2;

Ptest = zeros(2, Qq(2));
Ptest = 2*rand(size(Ptest))-1;
Ttest = Ptest(1,:).^2+Ptest(2,:).^2;

%Generate the verification data. This time choose sequential points
[X,Y] = meshgrid(-1:0.2:1);
Z=X.^2+Y.^2;
mesh(X,Y,Z);
title('Function shape');
Pverif = [reshape(X,1,121);reshape(Y,1,121)];
Tverif = reshape(Z,1,121);

%train the network
Tp = [0.05 2000 1 0.1 1];
M = 10;

[W1,W2,E] = sol_hw5_bp2(Ptrain, Ttrain, Ptest, Ttest, M, Tp);

%generate plots
Yverif=sol_hw5_bp2val(Pverif, W1,W2,1);

figure;
i=0:0.02:2;
plot(i,i,'r-',Yverif, Tverif,'.'); grid;
axis([0 2 0 2]);
xlabel('Verification data');
ylabel('Network Prediction');
title('Network Prediction vs. Verification data');

figure;
X2=reshape(Pverif(1,:),11,11);
Y2=reshape(Pverif(2,:),11,11);
Z2=reshape(Yverif,11,11);
mesh(X2,Y2,Z2);
title('Network Output');

Problem c, calling functions


clear all;close all;
Qq=[200 100 50];

% Generate the training data
Ptrain=zeros(2,Qq(1));
Ptrain=4*rand(size(Ptrain))-2;
Ttrain=sin(pi*Ptrain(1,:)).*cos(pi*Ptrain(2,:));

% Generate the test data
Ptest=zeros(2,Qq(2));
Ptest=4*rand(size(Ptest))-2;
Ttest=sin(pi*Ptest(1,:)).*cos(pi*Ptest(2,:));

% Generate the verification data. This time choose sequential points
[X,Y] = meshgrid(-2:0.2:2);
Z=sin(pi*X).*cos(pi*Y);
mesh(X,Y,Z);
title('Function shape');
Pverif=[reshape(X,1,441);reshape(Y,1,441)];
Tverif=reshape(Z,1,441);

% Train the network
Tp=[0.02 5000 1 0.1 1];
M=20;
[W1,W2,E]=sol_hw5_bp2(Ptrain,Ttrain,Ptest,Ttest,M,Tp);
% generate the plots

Yverif=sol_hw5_bp2val(Pverif,W1,W2,1);
figure;
i=-1:0.02:1;
plot(i,i,'r-',Yverif, Tverif,'.'); grid;
axis([-1 1 -1 1]);
xlabel('Verification data');
ylabel('Network Prediction');
title('Network Prediction vs. Verification data');

figure;
X2=reshape(Pverif(1,:),21,21);
Y2=reshape(Pverif(2,:),21,21);
Z2=reshape(Yverif,21,21);
mesh(X2,Y2,Z2);
title('Network Output');

Problem d, calling functions

% Problem 3.5(d)
clear all;close all;
Qq=[200 100 50];

% Generate the training data
Ptrain=zeros(3,Qq(1));
Ptrain=4*rand(size(Ptrain))-2;
Ttrain=Ptrain(1,:).^2/2+Ptrain(2,:).^2/3+Ptrain(3,:).^2/4;

% Generate the test data
Ptest=zeros(3,Qq(2));
Ptest=4*rand(size(Ptest))-2;
Ttest=Ptest(1,:).^2/2+Ptest(2,:).^2/3+Ptest(3,:).^2/4;

% Generate the verification data. This time choose sequential points
[X,Y,Z] = meshgrid(-2:0.4:2);
DZ=X.^2/2+Y.^2/3+Z.^2/4;

Pverif=[reshape(X,1,1331);reshape(Y,1,1331);reshape(Z,1,1331)];
Tverif=reshape(DZ,1,1331);

% Train the network
Tp=[0.05 5000 1 0.1 1];
M=20;
[W1,W2,E]=sol_hw5_bp2(Ptrain,Ttrain,Ptest,Ttest,M,Tp);
% generate the plots

Yverif=sol_hw5_bp2val(Pverif,W1,W2,1);
figure;
i=0:0.2:5;
plot(i,i,'r-',Yverif, Tverif,'.'); grid;
axis([0 5 0 5]);
xlabel('Verification data');
ylabel('Network Prediction');
title('Network Prediction vs. Verification data');


Example 3

A mostly manual Kohonen SOM example: one competition step followed by a weight update of the winning neuron and its radius-1 neighbors.

W=[0.41 0.45 0.41 0 0 0 -0.41 -0.45 -0.41;
0.41 0 -0.41 0.45 0 -0.45 0.41 0 -0.41;
0.82 0.89 0.82 0.89 1 0.89 0.82 0.89 0.82]'
p=[0.67;
0.07;
0.74]
w=W';
plot(w(1,:),w(2,:))
a=compet(W*p)
% COMPET is a transfer function. Transfer functions calculate a layer's output from its net input.
% The second neuron won the competition. Looking at the network diagram, we
% see that the second neuron's neighbors, at the radius of 1, include neurons 1 and 3.
W(1,:)=(W(1,:)'+0.1*(p-W(1,:)'))';
W(2,:)=(W(2,:)'+0.1*(p-W(2,:)'))';
W(3,:)=(W(3,:)'+0.1*(p-W(3,:)'))';
figure
w=W';
plot(w(1,:),w(2,:))

 

Example 4

SOM: from a higher-dimensional space to a two-dimensional representation

C1 = randn(3,1000); %3 x 1000 random matrix
C1 = C1*sqrt(0.1);

C2 = randn(3,1000); %3 x 1000 random matrix
C2 = C2*sqrt(0.1);
C2 = C2+5;

%Plot the data set
plot3(C1(1,:), C1(2,:), C1(3,:), 'g+')
hold on
plot3(C2(1,:), C2(2,:), C2(3,:), 'r*')
hold off

%Create SOM and Plot the weights
P = [C1 C2];
net = newsom(minmax(P),[5 5],'gridtop');
%NEWSOM Create a self-organizing map.
%net = newsom(PR,[d1,d2,...],tfcn,dfcn,olr,osteps,tlr,tns)
%PR - Rx2 matrix of min and max values for R input elements.
%Di - Size of ith layer dimension, defaults = [5 8].
%TFCN - Topology function, default = 'hextop'.

net.trainParam.epochs = 5;
net = train(net,P);
plotsom(net.iw{1,1},net.layers{1}.distances)
title('SOM 3-D weights');
%PLOTSOM(W,D,ND) takes three arguments,
% W - SxR weight matrix.
% D - SxS distance matrix.
% ND - Neighborhood distance, default = 1.
% and plots the neuron's weight vectors with connections between weight vectors whose neurons are within a distance of 1.


 

Example 6

Self-Organizing Feature Map (SOFM)

%Generate 1,000 two-dimensional vectors in the unit square
P = rand(2,1000);
plot(P(1,:),P(2,:),'+');
axis([-0.5 1.5 -0.5 1.5])

%Create SOM onto a 5x5 rectangular topology
net = newsom(minmax(P),[5 5],'gridtop');

%Initial weights
net.iw{1,1}(:)=0.5;
figure
plotsom(net.iw{1,1},net.layers{1}.distances)
title('0 epoch');

%After 1 epoch
net.trainParam.epochs = 1;
net = train(net,P);
figure
plotsom(net.iw{1,1},net.layers{1}.distances)
title('1 epoch');

%After 2 epochs
net.trainParam.epochs = 1;
net = train(net,P);
figure
plotsom(net.iw{1,1},net.layers{1}.distances)
title('2 epochs');

%After 10 epochs
net.trainParam.epochs = 8;
net = train(net,P);
figure
plotsom(net.iw{1,1},net.layers{1}.distances)
title('10 epochs');

 

Example 7

NEWLVQ example from MATLAB

%NEWLVQ Create a learning vector quantization network.
%NET = NEWLVQ(PR,S1,PC,LR,LF) takes these inputs,
%PR - Rx2 matrix of min and max values for R input elements.
%S1 - Number of hidden neurons.
%PC - S2 element vector of typical class percentages.
%LR - Learning rate, default = 0.01.
%LF - Learning function, default = 'learnlv1'.
%Returns a new LVQ network.

P = [-3 -2 -2 0 0 0 0 +2 +2 +3;
0 +1 -1 +2 +1 -1 -2 +1 -1 0];
Tc = [1 1 1 2 2 2 2 1 1 1];

T = ind2vec(Tc);
net = newlvq(minmax(P),4,[.6 .4]);
net = train(net,P,T);

Y = sim(net,P)
Yc = vec2ind(Y)


 

Example 8

An LVQ function for my project:
how to store the weights (save them to a file) after training, and load them back later to simulate the net. A rough save/load sketch follows the file link below.

02_example08.txt
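
The actual code is in the linked file above and is not reproduced here. As a rough sketch of the save/load idea only, reusing the LVQ data from Example 7 (the file name lvq_weights.mat is invented for illustration):

%train an LVQ net, save its weights, then restore them into a fresh net
P = [-3 -2 -2 0 0 0 0 +2 +2 +3;
0 +1 -1 +2 +1 -1 -2 +1 -1 0];
Tc = [1 1 1 2 2 2 2 1 1 1];
T = ind2vec(Tc);
net = newlvq(minmax(P),4,[.6 .4]);
net = train(net,P,T);

%store the trained weight matrices in a .mat file
W1 = net.IW{1,1}; %competitive (hidden) layer weights
W2 = net.LW{2,1}; %linear output layer weights
save('lvq_weights.mat','W1','W2');

%later, possibly in another script: rebuild the same architecture and load the weights
clear net W1 W2;
load('lvq_weights.mat');
net = newlvq(minmax(P),4,[.6 .4]);
net.IW{1,1} = W1;
net.LW{2,1} = W2;
Yc = vec2ind(sim(net,P)) %simulate with the restored weights
%(alternatively, save('lvq_net.mat','net') stores the whole network object)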

 

Example 9

Class example:
train an ART1 network using 3 input vectors. (A rough sketch of fast-learning ART1 training follows the file link below.)

02_example09.txt
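
The class code is in the linked file above and is not reproduced here. As a rough, self-contained sketch of what fast-learning ART1 training looks like (the function name art1_sketch, the parameter L=2, and the example inputs are invented for illustration, not taken from the class file):

function [Wtd, Wbu, class] = art1_sketch(P, Nout, rho)
%ART1_SKETCH illustrative fast-learning ART1 for binary column vectors
% P: n x Q matrix, each column a binary input vector
% Nout: number of F2 (category) neurons
% rho: vigilance parameter, 0 < rho <= 1
[n, Q] = size(P);
L = 2; %learning parameter, L > 1
Wbu = ones(Nout, n)*L/(L-1+n); %bottom-up weights
Wtd = ones(n, Nout); %top-down prototype weights
class = zeros(1, Q); %category assigned to each input (0 = none accepted)
for q = 1:Q
p = P(:,q);
allowed = true(1, Nout); %F2 neurons not yet reset for this input
while any(allowed)
y = Wbu*p;
y(~allowed) = -Inf;
[dummy, J] = max(y); %competition: winning F2 neuron
if sum(Wtd(:,J) & p)/sum(p) >= rho %vigilance (match) test
Wtd(:,J) = double(Wtd(:,J) & p); %fast learning: AND prototype with input
Wbu(J,:) = L*Wtd(:,J)'/(L-1+sum(Wtd(:,J)));
class(q) = J;
break
else
allowed(J) = false; %reset the winner and try the next F2 neuron
end
end
end

Called with three made-up binary input vectors, e.g.
P = [1 1 0; 1 1 0; 0 1 1; 0 0 1]; %columns are the three binary inputs
[Wtd, Wbu, c] = art1_sketch(P, 3, 0.5)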

 

Example 10

Class example:
train an ART1 network using 3 input vectors. Same as the example above, but with a different vigilance (rho) value, which causes the network to behave differently. (A comparison using the sketch from Example 9 follows the file link below.)

02_example10.txt
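
Again, the class code is the linked file. The effect of changing the vigilance can be previewed with the art1_sketch function sketched under Example 9 (same made-up inputs): a low rho lets similar inputs share a category, while a high rho forces them into separate categories.

P = [1 1 0; 1 1 0; 0 1 1; 0 0 1]; %columns are the three binary inputs
[Wtd_lo, Wbu_lo, c_lo] = art1_sketch(P, 3, 0.5) %low vigilance: the first two (similar) inputs share a category
[Wtd_hi, Wbu_hi, c_hi] = art1_sketch(P, 3, 0.9) %high vigilance: each input gets its own category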

Example 11

An example of plotting the layer 1 neurons of an ART1 network by integrating the layer's ODE. The second file's name must match the function it defines. (A minimal sketch of this ODE-solver / function-file pattern follows the file links below.)

02_example11.txt
02_example11_function.txt
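
Neither linked file is reproduced here. The mechanical point, that the ODE right-hand side must live in a file named after the function and that the solver is pointed at that function, can be sketched as follows. The shunting (leaky-integrator) dynamics, constants, and input vector below are invented stand-ins, not the actual layer 1 equations from class; the two parts belong in two separate files, as the comments indicate.

%script (analogous to 02_example11.txt)
p = [1; 0; 1]; %assumed binary input to the layer
n0 = zeros(3,1); %initial neuron outputs
[t, n] = ode45(@(t,n) layer1_rhs(t,n,p), [0 0.5], n0);
plot(t, n); grid;
xlabel('t'); ylabel('n(t)');
title('Layer response (illustrative shunting model)');

%function file (analogous to 02_example11_function.txt); it must be saved as layer1_rhs.m
function dn = layer1_rhs(t, n, p)
%LAYER1_RHS right-hand side of an illustrative shunting equation:
% epsilon*dn/dt = -n + (bplus - n).*p
epsilon = 0.1; %time constant
bplus = 1; %upper bound on the neuron activity
dn = (-n + (bplus - n).*p)/epsilon;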

Example 12

An example of simulating layer 1 of an ART1 network. The second file's name must match the function it defines.

02_example12.txt
02_example12_function.txt

 

Example 13

An example of simulating layer 2 of an ART1 network. The second file's name must match the function it defines.

02_example13.txt
02_example13_function.txt