θ and t < Tmax, then go to Step A3 with t ← t + 1; else, if E(t) ≤ θ and t ≤ Tmax, then the algorithm terminates.
Step A9: If t > Tmax and E(t) > θ, then go to Step A2 with n ← n + 1 and t = 1.
In particular, Algorithm SDM is defined as follows.
Algorithm SDM(c, b, w)
θ: threshold of the inference error
Tmax: the maximum number of learning times
n: the number of rules
Input: current parameters c, b, and w
Output: parameters c, b, and w after learning
Steps A3 to A8 of Algorithm A are performed.
2.2. Neural gas method
Vector quantization techniques encode a data space V ⊆ R^d, utilizing only a finite set C = {c_i | i ∈ Z_r} of reference vectors [18].

Let the winner vector c_{i(v)} be defined for any vector v ∈ V as

i(v) = arg min_{i ∈ Z_r} ‖v − c_i‖. (9)

By using the finite set C, the space V is partitioned as

V_i = {v ∈ V | ‖v − c_i‖ ≤ ‖v − c_j‖ for j ∈ Z_r}, (10)

where V = ∪_{i∈Z_r} V_i and V_i ∩ V_j = ∅ for i ≠ j.
Learning Algorithms for Fuzzy Inference Systems Using Vector Quantization 133
http://dx.doi.org/10.5772/intechopen.79925
The evaluation function for the partition is defined by

E = Σ_{i=1}^{r} Σ_{v∈V_i} ‖v − c_i‖², (11)

where n_i = |V_i|.
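As a concrete illustration of the winner rule and the evaluation function above, here is a minimal NumPy sketch (the function names are ours, not the chapter's):

```python
import numpy as np

def winner_index(v, C):
    """Index i(v) of the nearest reference vector c_i for the input v."""
    return int(np.argmin(np.linalg.norm(C - v, axis=1)))

def quantization_error(V, C):
    """Evaluation function E: summed squared distance of each v in V
    to its winner reference vector (the cost of the partition)."""
    return sum(np.linalg.norm(v - C[winner_index(v, C)]) ** 2 for v in V)

V = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0]])  # data samples
C = np.array([[0.0, 0.0], [1.0, 1.0]])              # reference vectors
# The first two points fall in cell V_0, the third in V_1.
```

Ties on a cell boundary go to the lower index here, which is one simple way of keeping the cells V_i disjoint.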
Let us introduce the neural gas method as follows [18].

For any input data vector v, the neighborhood ranking c_{i_k}, k ∈ Z_r, is determined, c_{i_k} being the reference vector for which there are k vectors c_j with

‖v − c_j‖ < ‖v − c_{i_k}‖. (12)

Let the number k associated with each vector c_i be denoted by k_i(v, c_i). Then, the adaptation step for adjusting the parameters is given by

Δc_i = ε · h_λ(k_i(v, c_i)) · (v − c_i), (13)

h_λ(k_i(v, c_i)) = exp(−k_i(v, c_i)/λ), (14)

where ε ∈ [0, 1] and λ > 0.
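The neighborhood ranking and one adaptation step can be sketched as follows; a double argsort yields the rank k_i of each reference vector (ties broken by index, an assumption of this sketch):

```python
import numpy as np

def neighborhood_ranks(v, C):
    """k_i(v, c_i): how many reference vectors lie closer to v than c_i.
    Rank 0 is the winner; ties are broken by index."""
    d = np.linalg.norm(C - v, axis=1)
    return np.argsort(np.argsort(d))

def ng_step(v, C, eps, lam):
    """One neural gas adaptation step: every c_i moves toward v,
    damped by h_lambda(k_i) = exp(-k_i / lambda)."""
    k = neighborhood_ranks(v, C)
    h = np.exp(-k / lam)
    return C + eps * h[:, None] * (v - C)
```

Unlike plain competitive learning, every reference vector moves on each step, with the pull falling off exponentially in the rank.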
Let the probability of v being selected from V be denoted by p(v).
The flowchart of the conventional neural gas algorithm is shown in Figure 1 [18], where ε_init, ε_fin, and Tmax are learning constants and the maximum number of learning steps, respectively. The method is called learning algorithm NG.
Using the set D*, a decision procedure for the center and width parameters is given as follows.

Algorithm Center(c)
n: the number of reference vectors, i ∈ Z_n
p(x): the probability of x being selected, for x ∈ D*
Step 1: By using p(x) for x ∈ D*, the NG method of Figure 1 [16, 18] is performed. As a result, the set C of reference vectors for D* is determined, where |C| = n.
Step 2: Each value of the center parameters is assigned to a reference vector. Let

c_i = (1/n_i) · Σ_{x∈C_i} x, (15)

where C_i and n_i are the set and the number of learning data belonging to the ith cluster C_i, with C = ∪_{i=1}^{n} C_i and n = Σ_{i=1}^{n} n_i.

As a result, the center and width parameters are determined from Algorithm Center(c).
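Step 2 above, which takes each center as the mean of the learning data in its cluster, might look like this (a sketch; `cluster_centers` is our name, and every cluster is assumed non-empty):

```python
import numpy as np

def cluster_centers(X, C_ref):
    """Assign each learning datum in X to its nearest reference vector,
    then return each cluster's mean, i.e. c_i = (1/n_i) * sum over C_i.
    Assumes every cluster receives at least one datum."""
    dists = np.linalg.norm(X[:, None, :] - C_ref[None, :, :], axis=2)
    labels = np.argmin(dists, axis=1)  # cluster index of each datum
    return np.array([X[labels == i].mean(axis=0) for i in range(len(C_ref))])
```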
134 From Natural to Artificial Intelligence - Algorithms and Applications
[Figure 1. Flowchart of the conventional NG algorithm: given ε_init, ε_fin, and Tmax, set t = 1 and select each reference vector w_i randomly; then, for each input v ∈ V drawn with probability p(v), determine the neighborhood ranking k_i(v, w_i) for i ∈ Z_r, update each w_i with learning rate ε(t) = ε_init(ε_fin/ε_init)^(t/Tmax), and set t ← t + 1; the loop ends when t = Tmax.]
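Putting the pieces together, the NG loop of Figure 1 can be sketched as below. For simplicity we anneal only ε between ε_init and ε_fin and hold λ fixed (the original method [18] also anneals the neighborhood range), and we draw inputs uniformly, i.e. a uniform p(v); both are assumptions of this sketch.

```python
import numpy as np

def neural_gas(V, r, eps_init, eps_fin, lam, t_max, rng=None):
    """Learning algorithm NG (sketch): random initial reference vectors,
    then t_max ranking-based adaptation steps with learning rate
    eps(t) = eps_init * (eps_fin / eps_init) ** (t / t_max)."""
    rng = np.random.default_rng(0) if rng is None else rng
    C = V[rng.choice(len(V), size=r, replace=False)].astype(float)
    for t in range(1, t_max + 1):
        v = V[rng.integers(len(V))]                      # draw v (uniform p(v))
        eps = eps_init * (eps_fin / eps_init) ** (t / t_max)
        k = np.argsort(np.argsort(np.linalg.norm(C - v, axis=1)))
        C += eps * np.exp(-k / lam)[:, None] * (v - C)   # adaptation step
    return C
```

On well-separated data, the returned reference vectors settle near the cluster centers.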
Learning Algorithm B, using Algorithm Center(c), is introduced as follows [16, 17].

Learning Algorithm B
θ: threshold of the MSE
Tmax1: maximum number of learning times for NG
Tmax2: maximum number of learning times for SDM
M: the size of ranges
n: the number of rules
Step 1: Initialize the parameters.
Step 2: The center and width parameters are determined from Algorithm Center(c) and the set D*.
Step 3: The parameters c, b, and w are updated using Algorithm SDM(c, b, w).
Step 4: If E(t) ≤ θ, then the algorithm terminates; else, go to Step 3 with n ← n + 1 and t ← t + 1.
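The control flow of the steps above can be sketched as an outer loop over the number of rules. The helpers below are placeholders with hypothetical signatures; the internals of Algorithm Center and Algorithm SDM are not reproduced here.

```python
def learning_algorithm_b(center_fn, sdm_fn, mse_fn, theta, n_init=2, n_max=10):
    """Outer loop of a Learning-Algorithm-B-style procedure (sketch):
    determine centers/widths, refine all parameters by a descent method,
    and add one rule at a time until the error falls below theta.
    center_fn, sdm_fn, mse_fn are hypothetical placeholder callables."""
    n = n_init
    while True:
        c, b = center_fn(n)        # Step 2: NG-based clustering (placeholder)
        c, b, w = sdm_fn(c, b)     # Step 3: SDM refinement (placeholder)
        if mse_fn(c, b, w) <= theta or n >= n_max:
            return c, b, w, n      # Step 4: stop at threshold (or rule budget)
        n += 1                     # otherwise add a rule and repeat
```

The `n_max` guard is our addition, so the sketch terminates even if the threshold is never met.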
2.3. The probability distribution of input data based on the rate of change of output

It is known that, in fuzzy modeling, many rules are needed at or near the places where the output data change quickly. How, then, can we find the rate of output change? The probability p_M(x) is one method of doing so. As shown in Eqs. (16) and (17), any input datum where the output changes quickly is selected with high probability, and any input datum where the output changes slowly is selected with low probability, where M is the size of the range considered for the output change.
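As a rough illustration of this idea (not the chapter's exact Eqs. (16) and (17), which define p_M(x) precisely), one can weight each input by the total output change over its M nearest neighbors and then normalize:

```python
import numpy as np

def prob_sketch(X, y, M):
    """Hypothetical selection probability: each input x_p is weighted by the
    summed |output change| over its M nearest neighbors, then the weights
    are normalized into a distribution. Assumes y is not constant
    (otherwise all weights vanish and the normalization fails)."""
    P = len(X)
    w = np.empty(P)
    for p in range(P):
        order = np.argsort(np.linalg.norm(X - X[p], axis=1))  # neighborhood ranking
        w[p] = np.abs(y[order[1:M + 1]] - y[p]).sum()         # local output change
    return w / w.sum()
```

Inputs sitting on a steep part of the output surface then receive most of the probability mass, matching the intent described above.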
Based on the literature [13], the probability distribution is defined as follows.
Algorithm Prob(p_M(x))
Input: D = {(x_p, y_p) | p ∈ Z_P} and D* = {x_p | p ∈ Z_P}
Output: p_M(x)
Step 1: Given an input datum x' ∈ D*, we determine the neighborhood ranking (x'_0, x'_1, ..., x'_k, ..., x'_{P−1}) of the vector x', with x'_0 = x', x'_1 being closest to x', and x'_k (k = 0, ..., P − 1) being the vector for which there are k vectors x'_j with ‖x' − x'_j‖