C# Class Encog.Neural.Networks.Training.Propagation.Resilient.ResilientPropagation

One problem with the backpropagation algorithm is that the magnitude of the partial derivative is often too large or too small. Further, the learning rate is a single value for the entire neural network. The resilient propagation learning algorithm instead uses a special update value (similar to a learning rate) for every neuron connection, and these update values are determined automatically, unlike the learning rate of the backpropagation algorithm. For most training situations, we suggest that the resilient propagation algorithm (this class) be used for training.

There are a total of three parameters that can be provided to the resilient training algorithm. Defaults are provided for each, and in nearly all cases these defaults are acceptable, which makes resilient propagation one of the easiest and most efficient training algorithms available. The optional parameters are:

zeroTolerance - How close to zero a number can be and still be considered zero. The default is 0.00000000000000001.
initialUpdate - The initial update value for each matrix value. The default is 0.1.
maxStep - The largest step that the update values can take. The default is 50.

Usually you will not need to set these, and you should use the constructor that does not require them.
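A minimal usage sketch of the class described above. The network layout and XOR data here are purely illustrative assumptions; only the `ResilientPropagation` constructor and `Iteration`/`Error` usage come from this documentation, while `BasicNetwork`, `BasicLayer`, and `BasicMLDataSet` are the usual Encog companions assumed to be available:

```csharp
using Encog.Engine.Network.Activation;
using Encog.ML.Data.Basic;
using Encog.Neural.Networks;
using Encog.Neural.Networks.Layers;
using Encog.Neural.Networks.Training.Propagation.Resilient;

// Illustrative XOR training data (an assumption, not part of this API).
double[][] input = { new[] { 0.0, 0.0 }, new[] { 0.0, 1.0 },
                     new[] { 1.0, 0.0 }, new[] { 1.0, 1.0 } };
double[][] ideal = { new[] { 0.0 }, new[] { 1.0 },
                     new[] { 1.0 }, new[] { 0.0 } };

// A small feedforward network; BasicNetwork implements IContainsFlat.
var network = new BasicNetwork();
network.AddLayer(new BasicLayer(null, true, 2));
network.AddLayer(new BasicLayer(new ActivationSigmoid(), true, 3));
network.AddLayer(new BasicLayer(new ActivationSigmoid(), false, 1));
network.Structure.FinalizeStructure();
network.Reset();

var training = new BasicMLDataSet(input, ideal);

// The default constructor: no tuning parameters required, as the
// defaults are acceptable for nearly all problems.
var train = new ResilientPropagation(network, training);
do
{
    train.Iteration();
} while (train.Error > 0.01);
```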
Inheritance: Propagation
Open project: encog/encog-silverlight-core

Public Methods

Method Description
IsValidResume ( TrainingContinuation state ) : bool

Determine if the specified continuation object is valid to resume with.

Pause ( ) : TrainingContinuation

Pause the training.

ResilientPropagation ( IContainsFlat network, IMLDataSet training ) : System

Construct an RPROP trainer, using the defaults for all training parameters. Usually this is the constructor to use, as the resilient training algorithm is designed so that the default parameters are acceptable for nearly all problems.

ResilientPropagation ( IContainsFlat network, IMLDataSet training, double initialUpdate, double maxStep ) : System

Construct a resilient training object, allowing the training parameters to be specified. Usually the default parameters are acceptable for the resilient training algorithm, so you should normally use the other constructor, which uses the default values.
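If you do need to override the defaults, this overload accepts `initialUpdate` and `maxStep` directly. A brief sketch, assuming `network` (an `IContainsFlat`) and `training` (an `IMLDataSet`) already exist; the numeric values are hypothetical illustrations, not recommendations:

```csharp
// Illustrative values only; the defaults (0.1 and 50) are
// acceptable for nearly all problems.
var train = new ResilientPropagation(network, training,
                                     0.05,   // initialUpdate
                                     25.0);  // maxStep
```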

Resume ( TrainingContinuation state ) : void

Resume training.

Method Details

IsValidResume() public method

Determine if the specified continuation object is valid to resume with.
public IsValidResume ( TrainingContinuation state ) : bool
state TrainingContinuation The continuation object to check.
return bool

Pause() public final method

Pause the training.
public final Pause ( ) : TrainingContinuation
return TrainingContinuation

ResilientPropagation() public method

Construct an RPROP trainer, using the defaults for all training parameters. Usually this is the constructor to use, as the resilient training algorithm is designed so that the default parameters are acceptable for nearly all problems.
public ResilientPropagation ( IContainsFlat network, IMLDataSet training ) : System
network IContainsFlat The network to train.
training IMLDataSet The training data to use.
return System

ResilientPropagation() public method

Construct a resilient training object, allowing the training parameters to be specified. Usually the default parameters are acceptable for the resilient training algorithm, so you should normally use the other constructor, which uses the default values.
public ResilientPropagation ( IContainsFlat network, IMLDataSet training, double initialUpdate, double maxStep ) : System
network IContainsFlat The network to train.
training IMLDataSet The training set to use.
initialUpdate double The initial update value for each matrix value.
maxStep double The maximum that a delta can reach.
return System

Resume() public final method

Resume training.
public final Resume ( TrainingContinuation state ) : void
state TrainingContinuation The training state to return to.
return void
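The Pause/IsValidResume/Resume trio lets a training run be checkpointed and continued later, possibly with a newly constructed trainer. A hedged sketch, assuming `train` is an existing `ResilientPropagation` and `network`/`training` are the same objects used to build it:

```csharp
// Capture the current training state.
TrainingContinuation state = train.Pause();

// ... later: build a fresh trainer over the same network and data ...
var train2 = new ResilientPropagation(network, training);

// Check the continuation object is compatible before resuming.
if (train2.IsValidResume(state))
{
    train2.Resume(state);
}
train2.Iteration();
```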