A Fast Algorithm for Finding Global Minima of Error Functions in Layered Neural Networks

Document Type

Article

Publication Title

Proceedings of 1990 IEEE International Joint Conference on Neural Networks

Event Date/Location

San Diego, CA / 1990

Publication Date

June 1990

Abstract

A fast algorithm is proposed for optimal supervised learning in multiple-layer neural networks. The algorithm is based on random optimization methods with dynamic annealing; it does not require the computation of error-function gradients and guarantees convergence to global minima. When applied to multiple-layer neural networks, the algorithm updates all neuron weights in batch mode by Gaussian-distributed increments in a direction that reduces the total decision error. The variance of the Gaussian distribution is controlled automatically so that the random search steps concentrate in potential minimum-energy/error regions. A hybrid method is also demonstrated, combining an initial gradient-descent phase with a subsequent phase of dynamically annealed random search, suited to optimal search in difficult learning tasks such as parity. Extensive simulations show a substantial convergence speedup for the proposed learning method compared with gradient-search methods such as backpropagation. The algorithm is also shown to be simple to implement, computationally efficient, and able to reach global minima over wide ranges of parameter settings.
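
As a rough illustration of the approach described above, the sketch below applies dynamically annealed random search to 2-bit parity (XOR), the kind of difficult task the abstract mentions. The abstract does not specify the exact variance-control rule or network architecture, so the Matyas-style try-and-reverse step, the widen-on-success/shrink-on-failure schedule, the 2-unit hidden layer, and all constants here are illustrative assumptions, not the authors' published procedure.

import numpy as np

rng = np.random.default_rng(0)

# 2-bit parity (XOR) training set; weights are updated in batch mode,
# as in the abstract.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([0.0, 1.0, 1.0, 0.0])

def forward(w, X):
    # One hidden layer of 2 sigmoid units; w is a flat parameter vector.
    W1 = w[:4].reshape(2, 2)   # input-to-hidden weights
    b1 = w[4:6]                # hidden biases
    W2 = w[6:8]                # hidden-to-output weights
    b2 = w[8]                  # output bias
    h = 1.0 / (1.0 + np.exp(-(X @ W1 + b1)))
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))

def error(w):
    # Total decision error over the whole batch (sum of squared errors).
    return np.sum((forward(w, X) - T) ** 2)

w = rng.normal(scale=0.5, size=9)   # all neuron weights, updated together
sigma = 1.0                         # Gaussian search variance (annealed below)
best = error(w)

for step in range(50000):
    dw = rng.normal(scale=sigma, size=w.size)   # Gaussian-distributed increment
    for cand in (w + dw, w - dw):    # try both directions (Matyas-style reversal)
        e = error(cand)
        if e < best:                 # keep only moves that reduce the error
            w, best = cand, e
            sigma = min(sigma * 1.05, 2.0)   # widen the search after a success
            break
    else:
        sigma = max(sigma * 0.999, 1e-3)     # anneal toward the promising region
    if best < 1e-3:
        break

print(f"steps={step}  error={best:.5f}  outputs={np.round(forward(w, X), 3)}")

A hybrid run in the spirit of the abstract would first minimize error(w) with ordinary gradient descent and then hand the resulting weights to the loop above; that combination, like everything in this sketch, is an assumption about the method rather than a transcription of it.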

DOI

10.1109/IJCNN.1990.137653

First Page

I-715

Last Page

I-720
