NeuralNetwork


Abstract

An ANSI C pseudoclass for manipulating neural networks

Classes

NeuralNetwork


Functions

nnNew
nnDestroy
nnOpen
nnSave
nnNumLayers
nnNumNeuronsInLayer
nnNumInputNeurons
nnNumOutputNeurons
nnLearningRate
nnSetLearningRate
nnAnnealingRate
nnSetAnnealingRate
nnSetAnnealingFunction
nnAnneal
nnSetActivationFunctionInLayer
nnSetHiddenLayersActivationFunction
nnSetOutputLayerActivationFunction
nnInputBuffer
nnOutputBuffer
nnTrainOnce
nnTrain
nnForgetTraining
nnErrorRate
nnCalculateOutput
nnCalculateOutputForInput
nnIndexOfHighestOutputNeuron
nnIndexOfRandomOutputNeuron
nnPrintOutputBuffer

nnNew

NeuralNetwork* nnNew(
    int numLayers,
    ...); 

create and initialize a NeuralNetwork object

Parameter Descriptions
numLayers
the number of layers in the network, including the input, hidden and output layers

...
integers representing the number of neurons in each layer; the first number corresponds to the input layer, and the last to the output layer

function result
returns the NeuralNetwork object, or NULL if it could not be created
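For example (an illustrative sketch, not taken from the library's own documentation; the header name NeuralNetwork.h is an assumption), a three-layer network with two input neurons, three hidden neurons, and one output neuron could be created like this:

```c
#include "NeuralNetwork.h" /* assumed header name */

int main(void)
{
  /* 3 layers: 2 input neurons, 3 hidden neurons, 1 output neuron */
  NeuralNetwork* network = nnNew(3, 2, 3, 1);
  if (network == NULL)
    return 1; /* the network could not be created */

  /* ... use the network ... */

  network = nnDestroy(network);
  return 0;
}
```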

nnDestroy

NeuralNetwork* nnDestroy(
    NeuralNetwork* self); 

destroy the NeuralNetwork object and free all of its resources

Parameter Descriptions
self
the NeuralNetwork object to operate upon

function result
always returns NULL. Assign the result back to your pointer, as in:
network = nnDestroy(network);
so that the variable no longer points to freed memory, helping to prevent runtime segmentation faults

nnOpen

NeuralNetwork* nnOpen(
    char* filename); 

open a previously saved NeuralNetwork object

Parameter Descriptions
filename
the path of the saved object

function result
the NeuralNetwork object or NULL if an error occurred

nnSave

int nnSave(
    NeuralNetwork* self,
    char* filename); 

Save a NeuralNetwork object to disk. This does not create a portable file format, as the binary structure of the file depends upon the system's byte order and the sizes of various data types. This method is only intended to accommodate individual networks running on individual robots

Parameter Descriptions
self
the NeuralNetwork object to save

filename
the path of the saved object

function result
1 on success, otherwise 0

nnNumLayers

int nnNumLayers(
    NeuralNetwork* self); 

get the number of layers in the network

Parameter Descriptions
self
the NeuralNetwork object to operate upon

function result
the number of layers in the network including the input and output layers

nnNumNeuronsInLayer

nnNeuronCount_t nnNumNeuronsInLayer(
    NeuralNetwork* self,
    unsigned int layer); 

get the number of neurons in the specified layer

Parameter Descriptions
self
the NeuralNetwork object to operate upon

layer
the layer to query. 0 is the input layer, nnNumLayers(self)-1 is the output layer

function result
the number of neurons in the specified layer

nnNumInputNeurons

nnNeuronCount_t nnNumInputNeurons(
    NeuralNetwork* self); 

get the number of input neurons
this is a convenience method for
nnNumNeuronsInLayer(network, 0);

Parameter Descriptions
self
the NeuralNetwork object to operate upon

function result
the number of neurons in the input layer

nnNumOutputNeurons

nnNeuronCount_t nnNumOutputNeurons(
    NeuralNetwork* self); 

get the number of output neurons
this is a convenience method for
nnNumNeuronsInLayer(network, nnNumLayers(network)-1);

Parameter Descriptions
self
the NeuralNetwork object to operate upon

function result
the number of neurons in the output layer

nnLearningRate

nnNeuronData_t nnLearningRate(
    NeuralNetwork* self); 

get the network's learning rate

nnSetLearningRate

void nnSetLearningRate(
    NeuralNetwork* self,
    nnNeuronData_t rate); 

set the network's learning rate, which scales the weight adjustments made during training

nnAnnealingRate

nnNeuronData_t nnAnnealingRate(
    NeuralNetwork* self); 

get the network's annealing rate

nnSetAnnealingRate

void nnSetAnnealingRate(
    NeuralNetwork* self,
    nnNeuronData_t rate); 

set the network's annealing rate, the coefficient by which the default annealing function multiplies the learning rate during training

nnSetAnnealingFunction

void nnSetAnnealingFunction(
    NeuralNetwork* self,
    nnAnnealingFunction_t function); 

customize the network's annealing function, which will be called at each generation of nnTrain(). The default annealing function does nothing for the first 15 generations and then multiplies the learning rate by the annealing coefficient during each successive generation

Parameter Descriptions
self
the NeuralNetwork object to operate upon

function
the new annealing function

nnAnneal

void nnAnneal(
    NeuralNetwork* self,
    int generation); 

Calls the network's annealing function to manually anneal the network. This is normally done automatically by nnTrain(). The default annealing function does nothing for the first 15 generations and then multiplies the learning rate by the annealing coefficient during each successive generation. This behaviour can be changed with nnSetAnnealingFunction()

Parameter Descriptions
self
the NeuralNetwork object to operate upon

generation
the number of previously elapsed generations of training, starting at 0

nnSetActivationFunctionInLayer

void nnSetActivationFunctionInLayer(
    NeuralNetwork* self,
    nnActivationFunction_t activation,
    nnActivationDerivative_t derivative,
    int layer); 

customize neuron activation functions. By default, a tanh function is used in the hidden layers, and a sigmoid function in the output layer.

The arguments 'activation' and 'derivative' may be replaced by a single macro that expands to the names of the built-in activation functions. Valid macros are:

  1. NN_ACTIVATION_SIGMOID
  2. NN_ACTIVATION_TANH
  3. NN_ACTIVATION_LINEAR

eg: nnSetActivationFunctionInLayer(self, NN_ACTIVATION_SIGMOID, 1);

Parameter Descriptions
self
the NeuralNetwork object to operate upon

activation
the activation function

derivative
a function that computes the derivative of the activation function

layer
the layer whose activation is to be set. 1 is the first hidden layer (inputs have no activation function), and nnNumLayers(self)-1 is the output layer.

nnSetHiddenLayersActivationFunction

void nnSetHiddenLayersActivationFunction(
    NeuralNetwork* self,
    nnActivationFunction_t activation,
    nnActivationDerivative_t derivative); 

a convenience method that calls nnSetActivationFunctionInLayer() for each hidden layer. See that method for usage details


nnSetOutputLayerActivationFunction

void nnSetOutputLayerActivationFunction(
    NeuralNetwork* self,
    nnActivationFunction_t activation,
    nnActivationDerivative_t derivative); 

a convenience method that calls nnSetActivationFunctionInLayer() for the output layer. See that method for usage details


nnInputBuffer

nnNeuronData_t* nnInputBuffer(
    NeuralNetwork* self,
    nnNeuronCount_t* numNeurons); 

get the network's input buffer. Fill this buffer before calling nnCalculateOutput()

Parameter Descriptions
self
the NeuralNetwork object to operate upon

numNeurons
on return will contain the length of the returned buffer; pass NULL if you don't care.

function result
a pointer to the buffer

nnOutputBuffer

nnNeuronData_t* nnOutputBuffer(
    NeuralNetwork* self,
    nnNeuronCount_t* numNeurons); 

provides direct access to the output of the output-layer's neurons

Parameter Descriptions
self
the NeuralNetwork object to operate upon

numNeurons
on return will contain the length of the returned buffer; pass NULL if you don't care.

function result
a pointer to the buffer

nnTrainOnce

void nnTrainOnce(
    NeuralNetwork* self,
    nnNeuronData_t* targetOutput,
    int numOutputs); 

this method compares the network's actual output to the target output, back-propagates the errors, and adjusts the weights and rest states accordingly. It does not first calculate the network's actual output. Do that with
nnCalculateOutput(self);

Parameter Descriptions
self
the NeuralNetwork object to operate upon

targetOutput
a buffer containing the network's target output

numOutputs
the size of 'targetOutput', which must match the number of output neurons in the network

nnTrain

nnNeuronData_t nnTrain(
    NeuralNetwork* self,
    nnNeuronData_t** inputBuffers,
    nnNeuronData_t** targetOutputBuffers,
    int numDataSets, 
    nnNeuronCount_t numInputNeurons,
    nnNeuronCount_t numOutputNeurons, 
    nnNeuronData_t targetError,
    int maxNumGenerations); 

a convenience method that calculates the output for each of a set of input buffers, calls nnTrainOnce() with each of the corresponding target output buffers, and repeats over many generations. At each generation, the annealing function is called and the order of the training set is shuffled to avoid oscillating error-rate loops. This function returns when the target error rate is reached, or when the specified maximum number of generations has been exhausted, whichever comes first.

Parameter Descriptions
self
the NeuralNetwork object to operate upon

inputBuffers
an array of input buffers for training

targetOutputBuffers
an array of buffers that correspond to the target output for the input buffers

numDataSets
the number of input buffers, which must be the same as the number of target output buffers

numInputNeurons
the number of objects in each input buffer, which must match the number of input neurons in the network

numOutputNeurons
the number of objects in each target output buffer, which must match the number of output neurons in the network

targetError
the error rate at which this method will return

maxNumGenerations
the maximum number of generations to train if the target error rate is not reached or the actual error rate does not converge

function result
returns the average error rate over the last generation of training
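As an illustrative sketch (not from the library's own examples; the header name, layer sizes, and error target are assumptions), a 2-3-1 network could be trained on XOR like this:

```c
#include "NeuralNetwork.h" /* assumed header name */
#include <stdio.h>

int main(void)
{
  /* four XOR training pairs */
  nnNeuronData_t in0[2] = {0, 0}, in1[2] = {0, 1},
                 in2[2] = {1, 0}, in3[2] = {1, 1};
  nnNeuronData_t t0[1] = {0}, t1[1] = {1}, t2[1] = {1}, t3[1] = {0};
  nnNeuronData_t* inputs[4]  = {in0, in1, in2, in3};
  nnNeuronData_t* targets[4] = {t0, t1, t2, t3};
  nnNeuronCount_t numOutputs;
  nnNeuronData_t* output;

  NeuralNetwork* network = nnNew(3, 2, 3, 1);
  nnNeuronData_t error = nnTrain(network, inputs, targets,
                                 4,      /* numDataSets             */
                                 2, 1,   /* input / output neurons  */
                                 0.01,   /* targetError (arbitrary) */
                                 1000);  /* maxNumGenerations       */
  printf("error rate after training: %f\n", error);

  output = nnCalculateOutputForInput(network, in1, 2, &numOutputs);
  printf("0 XOR 1 -> %f\n", output[0]);

  network = nnDestroy(network);
  return 0;
}
```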

nnForgetTraining

void nnForgetTraining(
    NeuralNetwork* self); 

sets the weights and rest states to random numbers between -n and n, where n is 1 over the square root of the number of inputs to that neuron

Parameter Descriptions
self
the NeuralNetwork object to operate upon

nnErrorRate

nnNeuronData_t nnErrorRate(
    NeuralNetwork* self); 

returns the average output-layer error rate

Parameter Descriptions
self
the NeuralNetwork object to operate upon

function result
the error rate

nnCalculateOutput

void nnCalculateOutput(
    NeuralNetwork* self); 

calculates the network's output. Get the input buffer with nnInputBuffer() and fill it before calling this, or use nnCalculateOutputForInput() instead.

Parameter Descriptions
self
the NeuralNetwork object to operate upon

nnCalculateOutputForInput

nnNeuronData_t* nnCalculateOutputForInput(
    NeuralNetwork* self,
    nnNeuronData_t* input,
    nnNeuronCount_t numInputs,
    nnNeuronCount_t* numOutputs); 

a convenience method that copies the specified input values into the network's input buffer, and then calls nnCalculateOutput();

Parameter Descriptions
self
the NeuralNetwork object to operate upon

input
the input for which to calculate the output

numInputs
the number of objects in 'input', which must match the number of input neurons in the network

numOutputs
on return will contain the length of the returned buffer; pass NULL if you don't care

function result
the network's output buffer

nnIndexOfHighestOutputNeuron

nnNeuronCount_t nnIndexOfHighestOutputNeuron(
    NeuralNetwork* self); 

returns the index of the most active output neuron

Parameter Descriptions
self
the NeuralNetwork object to operate upon

function result
the neuron's index

nnIndexOfRandomOutputNeuron

nnNeuronCount_t nnIndexOfRandomOutputNeuron(
    NeuralNetwork* self); 

returns the index of a random output neuron. This method interprets the activity of each output neuron as a weighted probability: the more active a neuron is relative to the others, the more likely its index is to be returned. This method destroys the output buffer and then attempts to restore it, which will fail with custom activation functions whose outputs are not normalized between 0 and 1.

Parameter Descriptions
self
the NeuralNetwork object to operate upon

function result
the index of the chosen neuron
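The weighted selection described above can be sketched like this; nnWeightedIndex() is a hypothetical stand-in for the library's internal logic, not part of its API:

```c
#include <stdlib.h>

/* choose an index with probability proportional to the value stored
   at that index (values are assumed to lie between 0 and 1) */
unsigned int nnWeightedIndex(double* buffer, unsigned int count)
{
  double total = 0, cumulative = 0, r;
  unsigned int i;

  for (i = 0; i < count; i++)
    total += buffer[i];
  if (total <= 0)
    return (unsigned int)rand() % count; /* all inactive: choose uniformly */

  r = ((double)rand() / ((double)RAND_MAX + 1.0)) * total; /* r in [0, total) */
  for (i = 0; i < count; i++)
  {
    cumulative += buffer[i];
    if (r < cumulative)
      return i;
  }
  return count - 1; /* guard against floating-point rounding */
}
```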

nnPrintOutputBuffer

void nnPrintOutputBuffer(
    NeuralNetwork* self); 

prints the output of each output-layer neuron to stdout

Typedefs


nnNeuronCount_t

typedef unsigned int nnNeuronCount_t; 

used for counting and indexing neurons

nnNeuronData_t

typedef double nnNeuronData_t; 

used for neuron calculations, including neuron inputs, outputs, weights, errors, rest states, etc.


nnActivationFunction_t

callback type for custom activation functions
typedef void (*nnActivationFunction_t) (
    nnNeuronData_t*); 

a pointer to the weighted sum of the neuron's inputs is passed into the function, which should operate on that value "in place"


nnActivationDerivative_t

callback type for calculating the derivative of a custom activation function
typedef nnNeuronData_t (*nnActivationDerivative_t)(
    nnNeuronData_t ); 

should return the derivative of the value passed in


nnAnnealingFunction_t

callback type for custom annealing functions
typedef void (*nnAnnealingFunction_t) (
    nnNeuronData_t* learningRate,
    nnNeuronData_t* annealingRate,
    nnNeuronCount_t generation); 

pointers to the network's learning rate and annealing rate are passed in, and may be operated upon directly. The current generation is also passed, which starts at 0 for each call to nnTrain()
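For example, a custom annealing function mimicking the documented default behaviour might look like this sketch (the typedefs are repeated so the snippet stands alone):

```c
typedef double nnNeuronData_t;        /* as defined above */
typedef unsigned int nnNeuronCount_t; /* as defined above */

/* do nothing for the first 15 generations, then multiply the
   learning rate by the annealing rate on each later generation */
void myAnnealingFunction(nnNeuronData_t* learningRate,
                         nnNeuronData_t* annealingRate,
                         nnNeuronCount_t generation)
{
  if (generation >= 15)
    *learningRate *= *annealingRate;
}
```

It could then be installed with nnSetAnnealingFunction(network, myAnnealingFunction);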

#defines


NN_ACTIVATION_SIGMOID

#define NN_ACTIVATION_SIGMOID nnActivationFunctionSigmoid, nnActivationDerivativeSigmoid 

NN_ACTIVATION_TANH

#define NN_ACTIVATION_TANH nnActivationFunctionTanh, nnActivationDerivativeTanh 

NN_ACTIVATION_LINEAR

#define NN_ACTIVATION_LINEAR nnActivationFunctionLinear, nnActivationDerivativeLinear 

© Michael Krzyzaniak (Last Updated 11/13/2011)