Tidy up the desktop

This commit is contained in:
2022-04-10 00:37:53 +08:00
parent 82e3f2623f
commit e25c8bb318
728 changed files with 986384 additions and 16 deletions

description: Documentation of the evolve function, which allows you to evolve neural networks
authors: Thomas Wagenaar
keywords: neat, neuro-evolution, neataptic, neural-network, javascript
The evolve function will evolve the network to conform to the given training set. If you want to perform neuro-evolution on problems without a training set, check out the [NEAT](../neat.md) wiki page. This function may not always converge, so always specify a maximum number of iterations for it to run.
[View a whole bunch of neuroevolution algorithms set up with Neataptic here.](https://wagenaartje.github.io/neataptic/articles/neuroevolution/)
### Constructor
Initiating the evolution of your neural network is easy:
```javascript
await myNetwork.evolve(trainingSet, options);
```
Please note that `await` is used because `evolve` is an `async` function, so you
need to wrap these statements in an async function.
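The async wrapper pattern looks like this. Note that `myNetwork` below is a stand-in stub, not a real Neataptic network: its `evolve()` simply resolves with a hypothetical results object, so the snippet runs without the library.

```javascript
// Stand-in for a Neataptic network; its evolve() resolves with a
// hypothetical results object so the async pattern can be shown.
const myNetwork = {
  evolve: async function (trainingSet, options) {
    return { error: 0.004, generations: 120 };
  }
};

async function run () {
  const trainingSet = [{ input: [0, 0], output: [0] }];
  // `await` suspends until the (stubbed) evolution resolves.
  const results = await myNetwork.evolve(trainingSet, { iterations: 1000 });
  return results.error;
}

run().then(function (error) { console.log('final error:', error); });
```

With the real library, only the stub changes: `myNetwork` would be an actual `Network` instance.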
#### Training set
Where `trainingSet` is your training set. An example of a training set would be:
```javascript
// XOR training set
var trainingSet = [
  { input: [0,0], output: [0] },
  { input: [0,1], output: [1] },
  { input: [1,0], output: [1] },
  { input: [1,1], output: [0] }
];
```
#### Options
There are **a lot** of options, here are the basic options:
* `cost` - Specify the cost function for the evolution; this tells a genome in the population how well it is performing. Default: _methods.cost.MSE_ (recommended).
* `amount` - Set the number of times to test the training set on a genome each generation. Useful for time series. Do not use for regular feedforward problems. Default: _1_.
* `growth` - Set the penalty you want to give for large networks. The penalty is calculated as follows: _penalty = (genome.nodes.length + genome.connections.length + genome.gates.length) * growth;_
This penalty is added on top of the error. Your growth should be a very small number; the default value is _0.0001_.
* `iterations` - Set the maximum number of iterations/generations for the algorithm to run. Always specify this, as the algorithm will not always converge.
* `error` - Set the target error. The algorithm will stop once this target error has been reached. Default: _0.005_.
* `log` - If set to _n_, will output the status every _n_ iterations (_log: 1_ will log every iteration).
* `schedule` - You can schedule tasks to happen every _n_ iterations. An example of usage is _schedule: { function: function(){ console.log(Date.now()) }, iterations: 5 }_. This will log the time every 5 iterations. This option allows for complex scheduled tasks during evolution.
* `clear` - If set to _true_, will clear the network after every activation. This is useful for evolving recurrent networks, and more importantly for time-series prediction. Default: _false_.
* `threads` - Specify the number of threads to use. Default: the number of cores in your CPU. Set to _1_ if you are evolving on a small dataset.
Please note that you can also specify _any_ of the options that are specified on
the [neat page](../neat.md).
An example of options would be:
```javascript
var options = {
  mutation: methods.mutation.ALL,
  mutationRate: 0.4,
  clear: true,
  cost: methods.cost.MSE,
  error: 0.03,
  log: 1,
  iterations: 1000
};
```
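The size penalty described in the `growth` option can be sketched in plain JavaScript. The `genome` object here is a hand-made stand-in with just the three relevant arrays, not a real Neataptic genome:

```javascript
// Sketch of the size penalty from the `growth` option:
// penalty = (nodes + connections + gates) * growth
function sizePenalty (genome, growth) {
  return (genome.nodes.length + genome.connections.length + genome.gates.length) * growth;
}

// Stand-in genome: 10 nodes, 14 connections, 2 gates.
var genome = {
  nodes: new Array(10),
  connections: new Array(14),
  gates: new Array(2)
};

console.log(sizePenalty(genome, 0.0001)); // (10 + 14 + 2) * 0.0001 ≈ 0.0026
```

Because the penalty is added on top of the error, a large `growth` value would dominate the fitness signal; this is why the default is so small.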
If you want to use the default options, you can either pass an empty object or
simply omit the second argument entirely:
```javascript
await myNetwork.evolve(trainingSet, {});
// or
await myNetwork.evolve(trainingSet);
```
The default value will be used for any option that is not explicitly provided
in the options object.
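This "explicit options win, everything else keeps its default" behaviour amounts to a shallow merge. A generic sketch (the defaults shown mirror the list above; the merge helper itself is illustrative, not Neataptic's internal code):

```javascript
// Shallow-merge caller options over defaults: any option the caller
// omits keeps its default value.
function withDefaults (options) {
  var defaults = { error: 0.005, growth: 0.0001, amount: 1, clear: false };
  return Object.assign({}, defaults, options || {});
}

var merged = withDefaults({ error: 0.03, iterations: 1000 });
console.log(merged.error);  // 0.03   (explicitly provided)
console.log(merged.growth); // 0.0001 (default kept)
```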
### Result
This function resolves with an object containing the final error, the number of generations, the elapsed time, and the evolved network:
```javascript
{
  error: mse,                   // final error (MSE) of the fittest genome
  generations: neat.generation, // number of generations run
  time: Date.now() - start,     // elapsed time in milliseconds
  evolved: fittest              // the evolved network
}
```
### Examples
<details>
<summary>XOR</summary>
This example evolves a network to solve the XOR problem, then activates it on all four input combinations.
<pre>
async function execute () {
  var network = new Network(2,1);

  // XOR dataset
  var trainingSet = [
    { input: [0,0], output: [0] },
    { input: [0,1], output: [1] },
    { input: [1,0], output: [1] },
    { input: [1,1], output: [0] }
  ];

  await network.evolve(trainingSet, {
    mutation: methods.mutation.FFW,
    equal: true,
    elitism: 5,
    mutationRate: 0.5
  });

  network.activate([0,0]); // 0.2413
  network.activate([0,1]); // 1.0000
  network.activate([1,0]); // 0.7663
  network.activate([1,1]); // -0.008
}

execute();
</pre>
</details>

description: List of important functions in Neataptic
authors: Thomas Wagenaar
keywords: train, evolve, neataptic
* [Train](train.md)
* [Evolve](evolve.md)

description: Documentation of the train function, used to train neural networks in Neataptic.
authors: Thomas Wagenaar
keywords: train, backpropagation, neural-network, dropout, momentum, learning rate
The train method allows you to train your network with the given parameters. If this
documentation is too complicated, I recommend checking out the
[training tutorial](../tutorials/training.md)!
### Constructor
Initiating the training process is similar to initiating the evolution process:
```javascript
myNetwork.train(trainingSet, options);
```
#### Training set
Where `trainingSet` is an array containing objects of the following form: <code>{ input: [input(s)], output: [output(s)] }</code>. For example, this is how you would train an XOR:
```javascript
var network = new architect.Perceptron(2,4,1);

// Train the XOR gate
network.train([
  { input: [0,0], output: [0] },
  { input: [0,1], output: [1] },
  { input: [1,0], output: [1] },
  { input: [1,1], output: [0] }
]);

network.activate([0,1]); // 0.9824...
```
#### Options
Options allow you to finetune the training process:
* `log` - If set to _n_, will output the training status every _n_ iterations (_log: 1_ will log every iteration).
* `error` - The target error to reach; once the network's error falls below this value, the process stops. Default: _0.03_.
* `cost` - The cost function to use. See [cost methods](../methods/cost.md). Default: _methods.cost.MSE_.
* `rate` - Sets the learning rate of the backpropagation process. Default: _0.3_.
* `dropout` - Sets the dropout of the hidden network nodes. Read more about it on the [regularization](../methods/regularization.md) page. Default: _0_.
* `shuffle` - When set to _true_, will shuffle the training data every iteration. A good option to use if your network is performing worse in cross-validation than on the training set. Default: _false_.
* `iterations` - Sets the maximum number of iterations the process will run, even if the target error has not been reached. Default: _NaN_.
* `schedule` - You can schedule tasks to happen every _n_ iterations. An example of usage is _schedule: { function: function(data){ console.log(Date.now(), data.error) }, iterations: 5 }_. This will log the time and error every 5 iterations. This option allows for complex scheduled tasks during training.
* `clear` - If set to _true_, will clear the network after every activation. This is useful for training [LSTM](../builtins/lstm.md)'s, and more importantly for time-series prediction. Default: _false_.
* `momentum` - Sets the momentum of the weight change. More info [here](https://www.willamette.edu/~gorr/classes/cs449/momrate.html). Default: _0_.
* `ratePolicy` - Sets the rate policy for your training. This allows your rate to be dynamic; see the [rate policies page](../methods/rate.md). Default: _methods.rate.FIXED()_.
* `batchSize` - Sets the (mini-)batch size of your training. Default: `1` (online training).
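The `momentum` option refers to the classical momentum term: each weight change is the current gradient step plus a fraction of the previous change, which smooths the trajectory and speeds up descent along consistent directions. A minimal sketch of a generic SGD-with-momentum update (illustrative, not Neataptic's internal code):

```javascript
// Generic SGD-with-momentum weight update: the new delta is the
// gradient step plus `momentum` times the previous delta.
function momentumStep (weight, gradient, previousDelta, rate, momentum) {
  var delta = -rate * gradient + momentum * previousDelta;
  return { weight: weight + delta, delta: delta };
}

// First step has no history, so it is a plain gradient step.
var first  = momentumStep(0.5, 0.2, 0, 0.3, 0.9);
// Second step with the same gradient: momentum amplifies the change.
var second = momentumStep(first.weight, 0.2, first.delta, 0.3, 0.9);

console.log(first.delta);  // ≈ -0.06
console.log(second.delta); // ≈ -0.06 + 0.9 * (-0.06) = -0.114
```

With `momentum: 0` (the default) the second term vanishes and each update is an independent gradient step.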
If you want to use the default options, you can either pass an empty object or
simply omit the second argument entirely:
```javascript
myNetwork.train(trainingSet, {});
// or
myNetwork.train(trainingSet);
```
The default value will be used for any option that is not explicitly provided
in the options object.
#### Example
The following setup will train until an error of <code>0.0001</code> is reached or until the iterations hit <code>1000</code>. It will also log the status every iteration. The rate has been lowered to <code>0.2</code>.
```javascript
var network = new architect.Perceptron(2,4,1);
var trainingSet = [
  { input: [0,0], output: [1] },
  { input: [0,1], output: [0] },
  { input: [1,0], output: [0] },
  { input: [1,1], output: [1] }
];

// Train the XNOR gate
network.train(trainingSet, {
  log: 1,
  iterations: 1000,
  error: 0.0001,
  rate: 0.2
});
```
#### Cross-validation
The last option is **crossValidate**, which checks whether the network also performs well enough on a held-out, non-trained part of the given set. Options:
* `crossValidate.testSize` - Sets the fraction of the given set that should be assigned to cross-validation. If set to _0.4_, 40% of the given set will be used for cross-validation.
* `crossValidate.testError` - Sets the target error for the validation set.
So an example of cross validation would be:
```javascript
var network = new architect.Perceptron(2,4,1);
var trainingSet = [
  { input: [0,0], output: [1] },
  { input: [0,1], output: [0] },
  { input: [1,0], output: [0] },
  { input: [1,1], output: [1] }
];

// Train the XNOR gate
network.train(trainingSet, {
  crossValidate: {
    testSize: 0.4,
    testError: 0.02
  }
});
```
PS: don't use cross-validation for small sets like this one; this is just an example!
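For intuition, `testSize` behaves like a simple hold-out split: a fraction of the set is reserved for validation and the remainder is trained on. A rough sketch of such a split (a hypothetical helper for illustration, not the library's internal implementation):

```javascript
// Hold out the last `testSize` fraction of the set for validation.
// Hypothetical helper, shown only to illustrate the split.
function splitSet (set, testSize) {
  var testCount = Math.round(set.length * testSize);
  return {
    train: set.slice(0, set.length - testCount),
    test: set.slice(set.length - testCount)
  };
}

// 10 dummy samples.
var set = [];
for (var i = 0; i < 10; i++) set.push({ input: [i], output: [i] });

var parts = splitSet(set, 0.4);
console.log(parts.train.length); // 6
console.log(parts.test.length);  // 4
```

In practice you would shuffle the data before splitting so the held-out portion is representative; the library handles the actual selection internally.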