
ISSN: 2320-5407 Int. J. Adv. Res. 5(10), 863-873
Journal Homepage: www.journalijar.com
Article DOI: 10.21474/IJAR01/5597
DOI URL: http://dx.doi.org/10.21474/IJAR01/5597
RESEARCH ARTICLE
COMPARATIVE ANALYSIS OF RBF (RADIAL BASIS FUNCTION) NETWORK AND GAUSSIAN FUNCTION IN MULTI-LAYER FEED-FORWARD NEURAL NETWORK (MLFFNN) FOR THE CASE OF FACE RECOGNITION
Arvind Kumar
Qtr No-512, Sector-4, CPWD qtrs, Near Balak Ram Hospital, Timarpur, Delhi, India-110054.
Manuscript Info:-
Manuscript History: Received: 11 August 2017; Final Accepted: 13 September 2017; Published: October 2017.
Key words:- Neural Network, Face Recognition, RBF - Radial Basis Function, RBFNN - Radial Basis Function Neural Network, MLFFNN - Multi-layer Feed-forward Neural Network, Caltech 101 - face database of the California Institute of Technology.

Abstract:-
We know that there are generally two ways in MATLAB by which a radial basis function can be applied:
1. Directly use the radbas (radial basis transfer) function in the hidden layer of an MLFFNN (multi-layer feed-forward neural network).
2. Build a radial basis network with the help of the newrb or newrbe function.
When both approaches are applied to face recognition, they do not show equal performance: the second is faster than the first, because the second performs local approximation while the first performs universal approximation. For face recognition I take the Caltech 101 database, created at the California Institute of Technology in September 2003. It contains colour digital images and was compiled by Fei-Fei Li, Marco Andreetto, Marc'Aurelio Ranzato and Pietro Perona to facilitate computer vision research. It is mainly used for image recognition, classification and categorization. Caltech 101 contains 9146 images, split into 101 distinct object categories such as faces, watches and ants.
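The radial basis transfer function at the centre of both approaches can be sketched in Python. The paper works in MATLAB, whose radbas computes a = exp(-n^2); the rbf_neuron helper below is an illustrative assumption about how such a neuron combines distance and bias, not the paper's code:

```python
import math

def radbas(n):
    """Radial basis transfer function, a = exp(-n^2), as in MATLAB's radbas.
    Peaks at 1 when the net input n is 0 and decays toward 0 as |n| grows."""
    return math.exp(-n * n)

def rbf_neuron(x, centre, bias):
    """Illustrative RBF neuron: the net input is the Euclidean distance
    between the input vector and the neuron's centre, scaled by the bias."""
    dist = math.sqrt(sum((xi - ci) ** 2 for xi, ci in zip(x, centre)))
    return radbas(dist * bias)
```

An input that coincides with the neuron's centre gives the maximum response of 1, which is what makes such neurons suitable for local approximation.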
Copyright, IJAR, 2017. All rights reserved.
Introduction:-
There are mainly two learning methods used in recognition systems:
1. Supervised learning method.
2. Unsupervised learning method.

In supervised learning we are given a target and the network must achieve that target. First the output of the network is found, and then its error is calculated, in the following manner:

Algorithm:-
Error = given_target - output
If (Error == 0)
    Network becomes steady
Else
    Again calculate Error
End (repeat until Error == 0)

Examples:- Multi-Layer Feedforward Neural Network, RBF Network, ADALINE, etc.

In unsupervised learning no target is given, so the network forms groups called clusters; this method is therefore also called the clustering method. Example:- SOM (Self-Organizing Map) network.

(Corresponding Author:- Arvind Kumar. Address:- Qtr No-512, Sector-4, CPWD qtrs, Near Balak Ram Hospital, Timarpur, Delhi, India-110054.)
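The error-driven loop above can be sketched concretely. This is an illustrative Python sketch with a single linear neuron and a made-up learning rate; the stopping test uses a small tolerance in place of an exact Error == 0:

```python
def train_until_steady(weight, x, target, lr=0.1, max_epochs=1000):
    """Supervised learning loop from the text: compute the output,
    compute Error = given_target - output, and repeat until the
    error is (near) zero, i.e. the network becomes steady."""
    for _ in range(max_epochs):
        output = weight * x
        error = target - output          # Error = given_target - output
        if abs(error) < 1e-9:            # If (Error == 0): network is steady
            break
        weight += lr * error * x         # Else: adjust, then calculate Error again
    return weight

w = train_until_steady(weight=0.0, x=2.0, target=4.0)
```

Here the weight converges to 2.0, at which point the output 2.0 * 2.0 matches the given target of 4.0 and the loop stops.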
Fig. 1:- Multi-layer feedforward neural network (inputs p_1 … p_n; input layer, hidden layer and output layer; each neuron forms a net input n from its weights w and bias b and produces an output a).
Introduction to the Multilayer Feed-Forward Neural Network (MLFFNN):-
This network is trained by a supervised learning process.

Related work:-
The single-layer neural network was first given by Rosenblatt, but it is limited to the classification of linearly separable patterns only. Widrow and Hoff then gave the LMS algorithm, but that algorithm is based on a single linear neuron with adjustable weights, which limits its computing power. To remove the limitations of the perceptron and the LMS algorithm another network, the multilayer perceptron, was introduced; the limits of single-layer perceptrons were analysed by Minsky and Papert (1969).
Multiple layers of neurons [21]:-
This network has several layers. Each layer has its own weight matrix w, its own bias vector b, a net input vector n and an output vector a. Three layers in particular are very common, as shown in figure 1: the first layer is the input layer, the second the hidden layer, and the third the output layer. Multi-layer networks are, of course, more powerful than single-layer networks.
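The layer structure just described, where each layer has its own w, b, n and a, can be sketched in plain Python. The layer sizes, weights and the choice of a logistic hidden transfer function with a linear output are arbitrary example values, not taken from the paper:

```python
import math

def layer(a_prev, W, b, f):
    """One layer: net input n = W . a_prev + b, output a = f(n)."""
    n = [sum(wij * aj for wij, aj in zip(row, a_prev)) + bi
         for row, bi in zip(W, b)]
    return [f(ni) for ni in n]

def logsig(n):
    """Logistic sigmoid, a common hidden-layer transfer function."""
    return 1.0 / (1.0 + math.exp(-n))

# 2 inputs -> 3 hidden neurons -> 1 output, with arbitrary example weights
x  = [1.0, 0.5]
W1 = [[0.2, -0.1], [0.4, 0.3], [-0.5, 0.6]]; b1 = [0.1, 0.0, -0.2]
W2 = [[0.3, -0.2, 0.5]];                     b2 = [0.05]
hidden = layer(x, W1, b1, logsig)            # hidden-layer output a1
output = layer(hidden, W2, b2, lambda n: n)  # linear output layer a2
```

Each call to layer produces the (n, a) pair for that layer, so stacking calls gives the input-to-output flow of figure 1.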
Training in the multilayer perceptron is performed by the back-propagation algorithm, which includes the LMS algorithm (provided by Widrow and Hoff) as a special case. Training in this network has two phases [22]:
1. Forward phase:- The input signal is propagated through the network layer by layer, with the synaptic weights fixed, until it reaches the output.
2. Backward phase:- We calculate error = (target) − (induced output). This error is propagated back through the network, layer by layer, in the backward direction, and the synaptic weights and outputs are then recalculated in a straightforward manner.
Function of the hidden layer:-
Hidden neurons act as feature detectors: by applying the hidden neurons, we find out the features of the system.
Methods of training the multi-layer perceptron:-
The multi-layer perceptron is actually trained in two different ways:
1. Batch learning:- The synaptic weights of the multi-layer perceptron are adjusted on an epoch-by-epoch basis.
2. On-line learning:- This is also called the example-by-example method. Suppose the examples are presented in the order {x(1), d(1)}, {x(2), d(2)}, …, {x(n), d(n)}. For the first pair {x(1), d(1)} the weights are adjusted using the method of gradient descent; then the next example {x(2), d(2)} is taken and the weights and biases of the network are adjusted again. This procedure continues until the last example. This method is also used for solving pattern-classification problems.
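The difference between the two training modes can be sketched with a single linear weight: online learning adjusts the weight after every pair {x(i), d(i)}, while batch learning accumulates the gradient over the whole epoch and adjusts once. The data and learning rate are made-up illustrative values:

```python
def online_epoch(w, examples, lr):
    """Example-by-example: adjust w by gradient descent after each pair."""
    for x, d in examples:
        e = d - w * x           # error on this example
        w += lr * e * x         # immediate weight adjustment
    return w

def batch_epoch(w, examples, lr):
    """Epoch-by-epoch: sum the gradient over all pairs, then adjust once."""
    grad = sum((d - w * x) * x for x, d in examples)
    return w + lr * grad

pairs = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]   # d = 2x, so w should tend to 2
w_online = online_epoch(0.0, pairs, lr=0.05)
w_batch  = batch_epoch(0.0, pairs, lr=0.05)
```

After one epoch both weights have moved from 0 toward 2, but along different paths: the online weight changes three times, the batch weight once.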
The Back-propagation Algorithm [22]:-

Figure 2:- Signal-flow graph highlighting the details of output neuron j (inputs y_0 = +1 with bias weight b_j(n), and y_1(n), …, y_m(n) with weights w_ji(n); induced local field v_j(n); activation φ(·); output y_j(n); desired response d_j(n); error e_j(n)).
In figure 2, v_j(n) is the induced local field (the summation of the products of the inputs and weights is called the induced local field), i.e.

v_j(n) = Σ_{i=0}^{m} w_ji(n)·y_i(n) ……(i)

where w_ji(n) is the weight from input i to neuron j, y_i(n) is the corresponding input signal, and m is the total number of inputs (including the bias). If the activation function is φ_j(·), then

y_j(n) = φ_j(v_j(n)) ……(ii)
Suppose the instantaneous error energy of neuron j is defined by

E_j(n) = (1/2)·e_j²(n) ……(iii)

where e_j(n) = d_j(n) − y_j(n) ……(iv)

So, by the chain rule,

∂E(n)/∂w_ji(n) = [∂E(n)/∂e_j(n)] · [∂e_j(n)/∂y_j(n)] · [∂y_j(n)/∂v_j(n)] · [∂v_j(n)/∂w_ji(n)] ……(v)
Now, to find ∂E(n)/∂e_j(n), we use equation (iii):

∂E(n)/∂e_j(n) = (1/2)·2·e_j(n) = e_j(n) ……(vi)

To find ∂e_j(n)/∂y_j(n), we use equation (iv):

∂e_j(n)/∂y_j(n) = −1 ……(vii)
To find ∂y_j(n)/∂v_j(n), we use equation (ii):

∂y_j(n)/∂v_j(n) = φ′_j(v_j(n)) ……(viii)

To find ∂v_j(n)/∂w_ji(n), we use equation (i):

∂v_j(n)/∂w_ji(n) = y_i(n) ……(ix)
So, putting all of the above values into equation (v),

∂E(n)/∂w_ji(n) = −e_j(n)·φ′_j(v_j(n))·y_i(n) ……(x)
By the delta rule,

Δw_ji(n) = −η·[∂E(n)/∂w_ji(n)] ……(xi)

where η is the learning-rate parameter. Using this gradient in weight space,

Δw_ji(n) = η·δ_j(n)·y_i(n) ……(xii)

where δ_j(n) is the local gradient, and

δ_j(n) = −∂E(n)/∂v_j(n) = e_j(n)·φ′_j(v_j(n)) ……(xiii)
Here two cases arise:

Case (i):- Neuron j is an output node. Here we compute the local gradient δ_j(n) straightforwardly, as above.

Case (ii):- Neuron j is a hidden node. In this case we redefine the local gradient δ_j(n) for hidden neuron j as

δ_j(n) = −[∂E(n)/∂y_j(n)] · [∂y_j(n)/∂v_j(n)] = −[∂E(n)/∂y_j(n)]·φ′_j(v_j(n)) ……(xiv)

Since neuron j is hidden, consider the next layer, say layer k. Then

E(n) = (1/2)·Σ_k e_k²(n) ……(xv)

Differentiating (xv) with respect to y_j(n) by the same method as above and substituting into (xiv), we get

δ_j(n) = φ′_j(v_j(n))·Σ_k δ_k(n)·w_kj(n) ……(xvi)
To summarize, in the back-propagation algorithm the weight correction is

Δw_ji(n) = η·δ_j(n)·y_i(n) ……(xvii)

where η is the learning-rate parameter, δ_j(n) the local gradient, and y_i(n) the input signal of neuron j.
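The derivation above can be turned into code for a single output neuron: form the induced local field, the output, the error, the local gradient δ_j = e_j·φ′_j(v_j), and the weight correction Δw_ji = η·δ_j·y_i. This is an illustrative Python sketch with a logistic activation and made-up weights, not the paper's implementation:

```python
import math

def phi(v):
    """Logistic activation function."""
    return 1.0 / (1.0 + math.exp(-v))

def phi_prime(v):
    """Derivative of the logistic function: phi(v) * (1 - phi(v))."""
    y = phi(v)
    return y * (1.0 - y)

def delta_w(w, y_in, d, eta):
    """One back-propagation step for a single output neuron j."""
    v = sum(wi * yi for wi, yi in zip(w, y_in))   # induced local field, eq. (i)
    y = phi(v)                                    # neuron output, eq. (ii)
    e = d - y                                     # error e_j = d_j - y_j, eq. (iv)
    delta = e * phi_prime(v)                      # local gradient delta_j = e_j * phi'(v_j)
    return [eta * delta * yi for yi in y_in]      # weight correction eta * delta_j * y_i

w = [0.5, -0.3]; y_in = [1.0, 2.0]
dw = delta_w(w, y_in, d=1.0, eta=0.5)
```

Note that each weight's correction is proportional to its own input y_i, exactly as the final weight-correction formula states: the second input here is twice the first, so its correction is twice as large.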
