Property | Type | Description
---|---|---
_prevState | double[] | 
Method | Description
---|---
QLearningAgent ( int id, int speciesId, IBlackBox brain, bool agentsNavigate, bool agentsHide, int numOrientationActions, int numVelocityActions, … ) | Creates a new Q-Learning teacher.
activateNetwork ( double sensors ) : ISignalArray | Called at every step in the world. Given the sensor input, returns the change in orientation and velocity, each in the range [0, 1].
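The class itself is C#, but the action-selection helpers listed below can be illustrated with a short Python sketch. This is a hypothetical reconstruction, not the actual implementation: it assumes each orientation/velocity bucket maps evenly onto [0, 1], and that `q(state, orientation, velocity)` stands in for the scalar output of the value network; the function names and bucket spacing are assumptions.

```python
import random

def all_actions(num_orientation, num_velocity):
    """Enumerate the discretized action grid; each bucket maps into [0, 1]."""
    return [(o / (num_orientation - 1), v / (num_velocity - 1))
            for o in range(num_orientation)
            for v in range(num_velocity)]

def select_greedy(q, state, actions):
    """Pick the (orientation, velocity) pair with the highest estimated value."""
    return max(actions, key=lambda a: q(state, *a))

def select_random(actions):
    """Pick a uniformly random action from the discretized grid."""
    return random.choice(actions)

def select_epsilon_greedy(q, state, actions, epsilon=0.1):
    """Explore with probability epsilon, otherwise exploit greedily."""
    if random.random() < epsilon:
        return select_random(actions)
    return select_greedy(q, state, actions)
```

With `numOrientationActions = numVelocityActions = 3`, this yields a 9-action grid, and the greedy pick is the pair maximizing the value estimate.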
Method | Description
---|---
greedyValue ( double sensors ) : double | 
selectEpsilonGreedy ( double sensors ) : double[] | 
selectGreedy ( double sensors ) : double[] | 
selectRandom ( double sensors ) : double[] | 
updateValueFunction ( double sensors ) : void | 
world_PlantEaten ( object sender, IAgent eater, Plant eaten ) : void | 
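The `updateValueFunction` step presumably performs a one-step Q-learning update against the stored previous state (`_prevState`). A minimal Python sketch of that update rule, assuming a learning rate `alpha` and discount `gamma` (both assumed hyperparameters, not taken from this reference):

```python
def q_update(q_prev, reward, q_next_max, alpha=0.1, gamma=0.9):
    """One-step Q-learning: move Q(s, a) toward r + gamma * max_a' Q(s', a')."""
    td_target = reward + gamma * q_next_max
    td_error = td_target - q_prev
    return q_prev + alpha * td_error
```

In the actual class, the reward would plausibly arrive via the `world_PlantEaten` event handler, and the updated target would be used to train the `IBlackBox` value network rather than written into a table.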
public QLearningAgent ( int id, int speciesId, IBlackBox brain, bool agentsNavigate, bool agentsHide, int numOrientationActions, int numVelocityActions, … )

Parameter | Type | Description
---|---|---
id | int | The unique ID of this teacher.
speciesId | int | 
brain | IBlackBox | The neural network value function for this teacher. It should have (2 + # of sensors) input nodes and 1 output node.
agentsNavigate | bool | 
agentsHide | bool | 
numOrientationActions | int | The number of buckets to discretize the orientation action space into.
numVelocityActions | int | The number of buckets to discretize the velocity action space into.
world | | The world this teacher will be evaluated in.
return | System | 
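The sizing constraint on `brain` (input nodes for the two action components plus one per sensor, and a single Q-value output) can be made concrete with a trivial helper. This is an illustrative sketch of the stated constraint, not part of the class:

```python
def network_shape(num_sensors):
    """Node counts the constructor expects of the `brain` value network."""
    num_inputs = 2 + num_sensors  # orientation action, velocity action, then sensors
    num_outputs = 1               # single scalar Q-value
    return num_inputs, num_outputs
```

For example, an agent with 8 sensors would need a 10-input, 1-output network.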
public activateNetwork ( double sensors ) : ISignalArray

Parameter | Type | Description
---|---|---
sensors | double | 
return | ISignalArray | 