
Tuesday 31 May 2011

RADIAL BASIS NETWORKS (ARTIFICIAL NEURAL NETWORKS)

ABSTRACT

Radial basis networks are similar to other artificial neural networks, but they differ in the number of neurons they use. For many applications they can be designed far more quickly than standard feedforward (backpropagation) networks.
Radial basis networks can be designed very quickly in two different ways. The first design method, newrbe, finds an exact solution. The function newrbe creates radial basis networks with as many radial basis neurons as there are input vectors in the training data. The second method, newrb, finds the smallest network that can solve the problem within a given error goal.
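As a concrete illustration, here is a minimal MATLAB sketch of both design functions on a toy function-approximation problem; the sample data, error goal, and spread values below are illustrative assumptions, not values from this post.

% Toy training data: 21 samples of a noisy sine curve (assumed for illustration)
P = -1:0.1:1;                           % input vectors
T = sin(2*pi*P) + 0.05*randn(size(P));  % target vectors

% Exact design: one radial basis neuron per input vector
net_exact = newrbe(P, T, 0.3);          % spread = 0.3 (assumed)

% Iterative design: neurons are added until the error goal is met
goal   = 0.01;                          % mean squared error goal (assumed)
spread = 0.3;                           % spread of the radial basis functions
net_small = newrb(P, T, goal, spread);

% Simulate both networks on the training inputs
Y_exact = sim(net_exact, P);
Y_small = sim(net_small, P);

The exact network has as many neurons as training vectors, while the iterative one usually stops with far fewer, at the cost of meeting the goal only approximately.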

Typically, far fewer neurons are required by newrb than are returned by newrbe. However, because the number of radial basis neurons grows with the size of the input space and the complexity of the problem, radial basis networks can still be larger than comparable backpropagation networks.
A generalized regression neural network (GRNN) is often used for function approximation. It has been shown that, given a sufficient number of hidden neurons, GRNNs can approximate a continuous function to an arbitrary accuracy.
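A GRNN for the same kind of one-dimensional fit can be created with newgrnn; the data and spread here are again assumed purely for illustration.

% GRNN for function approximation (illustrative data)
P = [1 2 3 4 5 6 7 8];              % inputs
T = [0 1 2 3 2 1 2 1];              % targets
net_grnn = newgrnn(P, T, 0.7);      % spread = 0.7 (assumed)
Y = sim(net_grnn, [3.5 6.5]);       % estimate outputs at new input points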
Probabilistic neural networks (PNN) can be used for classification problems. Their design is straightforward and does not depend on iterative training. A PNN is guaranteed to converge to a Bayesian classifier provided it is given enough training data, and these networks generalize well. The GRNN and PNN have many advantages, but they both suffer from one major disadvantage: they are slower to operate, because they use more computation than other kinds of networks to carry out their function approximation or classification.
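A probabilistic neural network for a small three-class problem can be built in the same style with newpnn; the inputs and class labels below are assumed for illustration.

% PNN for classification (illustrative data)
P  = [1 2 3 4 5 6 7];               % input vectors (one per column)
Tc = [1 1 2 2 3 3 3];               % class index for each input
T  = ind2vec(Tc);                   % convert class indices to target vectors
net_pnn = newpnn(P, T, 0.5);        % spread = 0.5 (assumed)
Yc = vec2ind(sim(net_pnn, P));      % classify the training inputs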
