C# (CSharp) Encog.Neural.Networks.Training.Propagation.Back Namespace

Classes

Name    Description
Backpropagation    This class implements a backpropagation training algorithm for feedforward neural networks. It is used in the same manner as any other training class that implements the Train interface. Backpropagation is a common neural network training algorithm. It works by analyzing the error at the output of the neural network: the contribution of each output-layer neuron to this error, according to its weights, is determined, and those weights are then adjusted to minimize the error. The process continues working backwards through the layers of the neural network. This implementation of the backpropagation algorithm uses both momentum and a learning rate. The learning rate specifies the degree to which the weight matrices are modified on each iteration. The momentum specifies how much the previous learning iteration affects the current one; to use no momentum at all, specify zero. One primary problem with backpropagation is that the magnitude of the partial derivative is often detrimental to the training of the neural network. The Manhattan and resilient propagation methods address this issue in different ways. In general, it is suggested that you use resilient propagation rather than backpropagation for most Encog training tasks.
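As a rough usage sketch, not taken from this page (it assumes the Encog 3.x C# API: BasicNetwork, BasicLayer, BasicMLDataSet, and a Backpropagation constructor that takes the network, the training set, a learning rate, and a momentum), training a small XOR network with this class might look like the following.

// Hypothetical sketch assuming the Encog 3.x C# API; names and constructor
// signatures should be checked against the Encog version in use.
using System;
using Encog.Engine.Network.Activation;
using Encog.ML.Data;
using Encog.ML.Data.Basic;
using Encog.Neural.Networks;
using Encog.Neural.Networks.Layers;
using Encog.Neural.Networks.Training.Propagation.Back;

public static class XorBackpropExample
{
    public static void Main()
    {
        // XOR truth table used as the training set.
        double[][] input = {
            new[] { 0.0, 0.0 }, new[] { 1.0, 0.0 },
            new[] { 0.0, 1.0 }, new[] { 1.0, 1.0 }
        };
        double[][] ideal = {
            new[] { 0.0 }, new[] { 1.0 }, new[] { 1.0 }, new[] { 0.0 }
        };
        IMLDataSet trainingSet = new BasicMLDataSet(input, ideal);

        // 2-3-1 feedforward network with sigmoid hidden and output layers.
        var network = new BasicNetwork();
        network.AddLayer(new BasicLayer(null, true, 2));
        network.AddLayer(new BasicLayer(new ActivationSigmoid(), true, 3));
        network.AddLayer(new BasicLayer(new ActivationSigmoid(), false, 1));
        network.Structure.FinalizeStructure();
        network.Reset();

        // Backpropagation with learning rate 0.7 and momentum 0.3
        // (both values are illustrative, not recommendations).
        var train = new Backpropagation(network, trainingSet, 0.7, 0.3);

        int epoch = 1;
        do
        {
            train.Iteration();
            Console.WriteLine("Epoch {0}, Error {1:F6}", epoch, train.Error);
            epoch++;
        } while (train.Error > 0.01 && epoch < 10000);

        train.FinishTraining();
    }
}

The same loop works with the resilient propagation trainer recommended above; only the construction of the trainer changes, since both expose the same training interface.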