C# (CSharp) Encog.Engine.Network.Train.Prop Namespace

Classes

Name Description
OpenCLTrainingProfile Specifies a training profile for an OpenCL training session (a configuration sketch follows this table). It holds the following information:
  device: The OpenCL device to use.
  local ratio: The local workgroup is an OpenCL concept in which the global workgroup is broken into several local workgroups. The larger the local workgroup, the faster training runs; however, your OpenCL device imposes a maximum local workgroup size. This ratio lets you use a smaller local workgroup; for example, 0.5 is half of the device's maximum. You will almost always want to leave this value at the maximum of 1.0; it only needs to be lowered in the rare case that the GPU is being overtaxed.
  global ratio: The global workgroup must be a multiple of the local workgroup. The default value of 1 means the local and global workgroups are the same size. Do not set this value lower than 1.0. Values higher than 1.0 can improve performance and should be integers; for example, 2 specifies a global workgroup twice the size of the local one. Higher values increase the resource load on the GPU and may cause a crash.
  segmentation ratio: This ratio lets you scale back how long each kernel takes to execute. For maximum performance leave it at the default of 1.0. If Encog runs on the same GPU that drives your display, a long-running kernel can hit timeout issues; setting this ratio lower can help, as can lowering it if the GPU is crashing.
RPROPConst Constants used for Resilient Propagation (RPROP) training.
TrainFlatNetworkBackPropagation Train a flat network using backpropagation.
TrainFlatNetworkManhattan Train a flat network using the Manhattan update rule.
TrainFlatNetworkOpenCL Train a flat network using OpenCL.
TrainFlatNetworkProp Train a flat network using multithreading and GPU support. The training data must be indexable; it is broken into groups for each thread to process. At the end of each iteration the results from each thread are aggregated back into the neural network.
TrainFlatNetworkResilient Train a flat network using RPROP (see the training-loop sketch after this table).
TrainFlatNetworkSCG Train a network using scaled conjugate gradient.
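
Because the three ratios described for OpenCLTrainingProfile are easy to misconfigure, here is a minimal sketch of building a profile for an already-selected device. The namespaces, the EncogCLDevice type, and the constructor argument order are assumptions drawn from the description above, not a confirmed signature; check the Encog API reference for your release.

    // Sketch only: the namespaces, the EncogCLDevice type, and the constructor
    // argument order (device, localRatio, globalRatio, segmentationRatio) are
    // assumptions based on the description above; verify against your Encog version.
    using Encog.Engine.Network.Train.Prop;
    using Encog.Engine.Opencl;

    public static class OpenCLProfileExample
    {
        public static OpenCLTrainingProfile CreateProfile(EncogCLDevice device)
        {
            const double localRatio = 1.0;        // use the device's maximum local workgroup size
            const int globalRatio = 2;            // global workgroup twice the size of the local one
            const double segmentationRatio = 1.0; // lower this if the display GPU times out or crashes

            return new OpenCLTrainingProfile(device, localRatio, globalRatio, segmentationRatio);
        }
    }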
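
The propagation trainers share the same iteration pattern: construct the trainer over a flat network and an indexable training set, then call Iteration() until the error is acceptable. The sketch below uses TrainFlatNetworkResilient; the constructor, the Iteration() method, the Error property, and the IEngineIndexableSet type are assumptions about the API, so adjust them to the release you are using.

    // Sketch only: the (network, training) constructor, the Iteration() method,
    // the Error property, and the IEngineIndexableSet type are assumptions based
    // on the descriptions above; verify against your Encog version.
    using Encog.Engine.Data;
    using Encog.Engine.Network.Flat;
    using Encog.Engine.Network.Train.Prop;

    public static class RpropExample
    {
        public static void Train(FlatNetwork network, IEngineIndexableSet training)
        {
            var trainer = new TrainFlatNetworkResilient(network, training);

            for (int epoch = 1; epoch <= 1000; epoch++)
            {
                trainer.Iteration();      // one pass; the indexable data is split across worker threads
                if (trainer.Error < 0.01) // stop once the training error is low enough
                    break;
            }
        }
    }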