I am writing a backprop neural-net mini-library from scratch and I need some help writing meaningful automated tests. So far I have automated tests that verify that the weight and bias gradients computed by the backprop algorithm are correct, but no test of whether the training itself actually works.
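(For context, by gradient tests I mean finite-difference checks. Here is a minimal, self-contained NumPy sketch of the idea on a toy single linear layer — illustrative only, not my actual library code:)

```python
import numpy as np

def numerical_grad(f, w, eps=1e-6):
    """Central finite differences of scalar f() w.r.t. each entry of w."""
    g = np.zeros_like(w)
    it = np.nditer(w, flags=["multi_index"])
    for _ in it:
        i = it.multi_index
        old = w[i]
        w[i] = old + eps
        fp = f()
        w[i] = old - eps
        fm = f()
        w[i] = old                      # restore the original weight
        g[i] = (fp - fm) / (2.0 * eps)
    return g

# Toy single linear layer: out = x @ W, loss = 0.5 * mean squared error.
rng = np.random.default_rng(0)
x = rng.normal(size=(5, 3))
W = rng.normal(size=(3, 2))
t = rng.normal(size=(5, 2))

def loss():
    return 0.5 * np.mean((x @ W - t) ** 2)

# Analytic gradient of the same loss w.r.t. W (mean over all 5*2 entries).
analytic = x.T @ (x @ W - t) / (x.shape[0] * t.shape[1])
numeric = numerical_grad(loss, W)
assert np.allclose(analytic, numeric, atol=1e-5)
```

The same comparison is done for each weight matrix and bias vector in the real library.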
The code I have so far supports the following:
- A neural net with any number of layers and any number of neurons per layer.
- Arbitrary activation functions per layer.
- Optional biases.
- Only fully connected layers at the moment.
- Only backpropagation with gradient descent for training.
- Training requires train, validation, and test sets (none of these sets may be empty at the moment).
Given all of this, what kind of automated test can I write to verify that the training algorithm is implemented correctly? Which function (sin, cos, exp, a quadratic, etc.) should I try to approximate? Over what range, and how densely, should I sample data from that function? What architecture should the NN have?
Ideally, the function should be simple enough to learn that the test runs quickly (1-3 seconds), but complicated enough to give some real confidence that the algorithm is implemented correctly.
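(To make the question concrete, the shape of test I have in mind is roughly the following — a self-contained NumPy mock-up of a one-hidden-layer net trained on sin, standing in for my library. The range [0, 2π], the ~256-point density, the 70/15/15 split, the 1-16-1 tanh architecture, and the pass threshold are all placeholder guesses on my part:)

```python
import numpy as np

rng = np.random.default_rng(0)

# Data: sample sin on [0, 2*pi]; density and split are guesses.
n = 256
x = rng.uniform(0.0, 2.0 * np.pi, size=(n, 1))
y = np.sin(x)
x_in = (x - np.pi) / np.pi          # scale inputs to [-1, 1]

idx = rng.permutation(n)
tr, va = idx[:180], idx[180:218]    # ~70% train, ~15% validation
te = idx[218:]                      # ~15% test

# Placeholder architecture: 1-16-1, tanh hidden layer, linear output.
W1 = rng.normal(0.0, 0.5, size=(1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.5, size=(16, 1)); b2 = np.zeros(1)

def mse(i):
    h = np.tanh(x_in[i] @ W1 + b1)
    return float(np.mean((h @ W2 + b2 - y[i]) ** 2))

val_before = mse(va)

lr = 0.1
for _ in range(5000):               # full-batch gradient descent
    h = np.tanh(x_in[tr] @ W1 + b1)
    out = h @ W2 + b2
    d_out = 2.0 * (out - y[tr]) / len(tr)       # dLoss/d_out
    dW2 = h.T @ d_out
    db2 = d_out.sum(axis=0)
    d_h = (d_out @ W2.T) * (1.0 - h ** 2)       # backprop through tanh
    dW1 = x_in[tr].T @ d_h
    db1 = d_h.sum(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

# Pass criterion: validation error dropped substantially.
# (The 0.2 factor is an arbitrary placeholder threshold.)
assert mse(va) < 0.2 * val_before
```

Is asserting a large relative drop in validation error like this a reasonable pass criterion, or is there a more robust one?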