clear all; close all;
% Same as the last example, but with vigilance p = 0.6; the steps are the same until p3.

% Input vectors
p1 = [0;1;0];
p2 = [1;0;0];
p3 = [1;1;0];

% Initial weights
w21 = ones(3,3);
w12 = 0.5*w21;

% 1. Compute the Layer 1 response:
a1 = p1
% 2. Next, compute the input to Layer 2:
w12*a1
% Since all neurons have the same input, pick the first neuron as the
% winner. (In case of a tie, pick the neuron with the smallest index.)
a2 = [1;0;0]
% 3. Now compute the L2-L1 expectation:
w211 = w21*a2
% 4. Adjust the Layer 1 output to include the L2-L1 expectation:
a1 = p1 & w211
a1 = [0;1;0];
% 5. Next, the Orienting Subsystem determines the degree of match between
% the expectation and the input pattern:
norm(a1)^2/norm(p1)^2
% ans = 1 > p = 0.6, therefore no reset:
a0 = 0;
% 6. Since a0 = 0, continue with step 7.
% 7. Resonance has occurred, therefore update row 1 of w12:
w121 = 2*a1/(2+norm(a1)^2-1)
w12(1,:) = w121'
% 8. Update column 1 of w21:
w21(:,1) = a1

% 9. Remove p1, and return to step 1 with input pattern p2.
% 1. Compute the Layer 1 response:
a1 = p2
% 2. Compute the input to Layer 2:
w12*a1
% Since neurons 2 and 3 have the same input, pick the second neuron
% (smallest index) as the winner:
a2 = [0;1;0]
% 3. Now compute the L2-L1 expectation:
w212 = w21*a2
% 4. Adjust the Layer 1 output to include the L2-L1 expectation:
a1 = p2 & w212
a1 = [1;0;0]
% 5. Next, the Orienting Subsystem determines the degree of match between
% the expectation and the input pattern:
norm(a1)^2/norm(p2)^2
% ans = 1 > p = 0.6, therefore no reset:
a0 = 0;
% 6. Since a0 = 0, continue with step 7.
% 7. Resonance has occurred, therefore update row 2 of w12:
w122 = 2*a1/(2+norm(a1)^2-1)
w12(2,:) = w122'
% 8. Update column 2 of w21:
w21(:,2) = a1

% 9. Remove p2, and return to step 1 with input pattern p3.
% 1. Compute the new Layer 1 response:
a1 = p3
% 2. Compute the input to Layer 2:
w12*a1
% Since all neurons have the same input, pick the first neuron as the winner:
a2 = [1;0;0]
% 3. Now compute the L2-L1 expectation:
w211 = w21*a2
% 4. Adjust the Layer 1 output to include the L2-L1 expectation:
a1 = p3 & w211
a1 = [0;1;0]
% 5. Next, the Orienting Subsystem determines the degree of match between
% the expectation and the input pattern:
norm(a1)^2/norm(p3)^2
% ans = 0.5000 < p = 0.6, therefore a0 = 1 (reset).
% 6. Since a0 = 1, set a2(1) = 0 (inhibit neuron 1 until an adequate match
% occurs, i.e., resonance) and return to step 1.

% 1. Recompute the Layer 1 response (Layer 2 inactive):
a1 = p3
% 2. Next, compute the input to Layer 2:
w12*a1
% Since neuron 1 is inhibited, neuron 2 is the winner:
a2 = [0;1;0]
% 3. Now compute the L2-L1 expectation:
w212 = w21*a2
% 4. Adjust the Layer 1 output to include the L2-L1 expectation:
a1 = p3 & w212
a1 = [1;0;0]
% 5. Next, the Orienting Subsystem determines the degree of match between
% the expectation and the input pattern:
norm(a1)^2/norm(p3)^2
% ans = 0.5000 < p = 0.6, therefore a0 = 1 (reset).
% 6. Since a0 = 1, set a2(2) = 0 (inhibit neuron 2 until an adequate match
% occurs, i.e., resonance) and return to step 1.

% 1. Recompute the Layer 1 response (Layer 2 inactive):
a1 = p3
% 2. Next, compute the input to Layer 2:
w12*a1
% Since neurons 1 and 2 are inhibited, neuron 3 is the winner:
a2 = [0;0;1]
% 3. Now compute the L2-L1 expectation:
w213 = w21*a2
% 4. Adjust the Layer 1 output to include the L2-L1 expectation:
a1 = p3 & w213
a1 = [1;1;0]
% 5. Next, the Orienting Subsystem determines the degree of match between
% the expectation and the input pattern:
norm(a1)^2/norm(p3)^2
% ans = 1 > p = 0.6, therefore a0 = 0 (no reset).
% 6. Since a0 = 0, continue with step 7.
% 7. Resonance has occurred, therefore update row 3 of w12:
w123 = 2*a1/(2+norm(a1)^2-1)
w12(3,:) = w123'
% 8. Update column 3 of w21:
w21(:,3) = a1
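
% Quick check (a sketch, not part of the original worked example; the names
% P, pk, win, a1k, and match are illustrative): re-present each of the three
% patterns and confirm that it resonates immediately with its learned prototype
% and that the resonance update reproduces the same rows of w12 and columns of w21.
P = [p1 p2 p3];
for k = 1:3
    pk = P(:,k);
    [~, win] = max(w12*pk);                   % Layer 2 winner (first index wins ties)
    a1k = double(pk & w21(:,win));            % Layer 1 output with the L2-L1 expectation
    match = norm(a1k)^2/norm(pk)^2            % = 1 for all three patterns, so no reset
    w12(win,:) = (2*a1k/(2+norm(a1k)^2-1))';  % resonance update reproduces the same row...
    w21(:,win) = a1k;                         % ...and the same column, so no weight changes
end
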
% This completes the training: if you apply any of the three patterns again,
% the weights will not change. The patterns have been successfully clustered.
% The closer the vigilance is to 1, the more categories will be used, because an
% input pattern must then be closer to a prototype in order to be incorporated
% into it. When the vigilance is close to zero, many different input patterns can
% be incorporated into one prototype. The vigilance parameter adjusts the
% coarseness of the categorization.
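
% Sketch (not part of the original example; rho and e are illustrative names):
% the reset decision reduces to a single comparison of the degree of match against
% the vigilance. Raising rho toward 1 makes the test harder to pass, so more
% prototypes (categories) are created; lowering it toward 0 lets dissimilar
% patterns share a prototype. For example, p3 against the trained prototype of
% neuron 1 gives a match of 0.5 < 0.6, the reset seen during training above.
rho = 0.6;                                     % vigilance used in this example
e   = w21(:,1);                                % expectation of trained neuron 1
a0  = norm(double(p3 & e))^2/norm(p3)^2 < rho  % a0 = 1 requests a reset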