Diwa
Lightweight implementation of an Artificial Neural Network for resource-constrained environments
Diwa Class Reference (final)

Lightweight Feedforward Artificial Neural Network (ANN) library tailored for microcontrollers. More...

#include <diwa.h>

Public Member Functions

 Diwa ()
 Default constructor for the Diwa class.
 
 ~Diwa ()
 Destructor for the Diwa class.
 
DiwaError initialize (int inputNeurons, int hiddenLayers, int hiddenNeurons, int outputNeurons, bool randomizeWeights=true)
 Initializes the Diwa neural network with specified parameters.
 
double * inference (double *inputs)
 Perform inference on the neural network.
 
void train (double learningRate, double *inputNeurons, double *outputNeurons)
 Train the neural network using backpropagation.
 
DiwaError loadFromFile (T annFile)
 Load neural network model from file in Arduino environment.
 
DiwaError saveToFile (T annFile)
 Save neural network model to file in Arduino environment.
 
double calculateAccuracy (double *testInput, double *testExpectedOutput, int epoch)
 Calculates the accuracy of the neural network on test data.
 
double calculateLoss (double *testInput, double *testExpectedOutput, int epoch)
 Calculates the loss of the neural network on test data.
 
void setActivationFunction (diwa_activation activation)
 Sets the activation function for the neural network.
 
diwa_activation getActivationFunction () const
 Retrieves the current activation function used by the neural network.
 
int recommendedHiddenNeuronCount ()
 Calculates the recommended number of hidden neurons based on the input and output neurons.
 
int recommendedHiddenLayerCount (int numSamples, int alpha)
 Calculates the recommended number of hidden layers based on the dataset size and complexity.
 
int getInputNeurons () const
 Get the number of input neurons in the neural network.
 
int getHiddenNeurons () const
 Get the number of neurons in the hidden layer.
 
int getHiddenLayers () const
 Get the number of hidden layers in the neural network.
 
int getOutputNeurons () const
 Get the number of output neurons in the neural network.
 
int getWeightCount () const
 Get the total number of weights in the neural network.
 
int getNeuronCount () const
 Get the total number of neurons in the neural network.
 
void getWeights (double *weights)
 Retrieve the weights of the neural network.
 
void getOutputs (double *outputs)
 Retrieve the outputs of the neural network.
 

Detailed Description

Lightweight Feedforward Artificial Neural Network (ANN) library tailored for microcontrollers.

The Diwa library is designed to provide a simple yet effective implementation of a Feedforward Artificial Neural Network (ANN) for resource-constrained microcontroller environments such as ESP8266, ESP32, and similar development boards.

Note
This library is primarily intended for lightweight applications. For more intricate tasks, consider using advanced machine learning libraries on more powerful platforms.

Constructor & Destructor Documentation

◆ Diwa()

Diwa::Diwa ( )

Default constructor for the Diwa class.

This constructor initializes a new instance of the Diwa class, with all network parameters set to a default value of 0.

◆ ~Diwa()

Diwa::~Diwa ( )

Destructor for the Diwa class.

This destructor releases resources associated with the Diwa object upon its destruction. It ensures proper cleanup to prevent memory leaks.

Member Function Documentation

◆ calculateAccuracy()

double Diwa::calculateAccuracy(double *testInput, double *testExpectedOutput, int epoch)

Calculates the accuracy of the neural network on test data.

This function calculates the accuracy of the neural network on a given set of test data. It compares the inferred output with the expected output for each test sample and calculates the percentage of correct inferences.

Parameters
testInput: Pointer to the input values of the test data.
testExpectedOutput: Pointer to the expected output values of the test data.
epoch: Total number of test samples in the test data.
Returns
The accuracy of the neural network on the test data as a percentage.

◆ calculateLoss()

double Diwa::calculateLoss(double *testInput, double *testExpectedOutput, int epoch)

Calculates the loss of the neural network on test data.

This function calculates the loss of the neural network on a given set of test data. It computes the percentage of test samples for which the inferred output does not match the expected output.

Parameters
testInput: Pointer to the input values of the test data.
testExpectedOutput: Pointer to the expected output values of the test data.
epoch: Total number of test samples in the test data.
Returns
The loss of the neural network on the test data as a percentage.

◆ getActivationFunction()

diwa_activation Diwa::getActivationFunction ( ) const

Retrieves the current activation function used by the neural network.

This method returns the activation function currently set for the neurons in the neural network. It allows the user to query the current activation function being used for inference and training purposes. The activation function determines the output of a neuron based on its input. Different activation functions can be used depending on the nature of the problem being solved and the characteristics of the dataset. Common activation functions include sigmoid, ReLU, and tanh.

Returns
The activation function currently set for the neural network.
See also
Diwa::setActivationFunction()

◆ getHiddenLayers()

int Diwa::getHiddenLayers ( ) const

Get the number of hidden layers in the neural network.

This function returns the total number of hidden layers in the neural network. It helps to understand the depth of the network architecture.

Returns
The number of hidden layers.

◆ getHiddenNeurons()

int Diwa::getHiddenNeurons ( ) const

Get the number of neurons in the hidden layer.

This function returns the number of neurons in a single hidden layer of the neural network. If there are multiple hidden layers, this value represents the number of neurons per hidden layer.

Returns
The number of neurons in each hidden layer.

◆ getInputNeurons()

int Diwa::getInputNeurons ( ) const

Get the number of input neurons in the neural network.

This function returns the number of input neurons in the neural network. It is useful for understanding the network's input size.

Returns
The number of input neurons.

◆ getNeuronCount()

int Diwa::getNeuronCount ( ) const

Get the total number of neurons in the neural network.

This function calculates and returns the total number of neurons in the neural network, including input neurons, hidden neurons (across all layers), and output neurons.

Returns
The total number of neurons.

◆ getOutputNeurons()

int Diwa::getOutputNeurons ( ) const

Get the number of output neurons in the neural network.

This function returns the number of output neurons in the neural network. This is useful for understanding the size of the output layer and the expected output format.

Returns
The number of output neurons.

◆ getOutputs()

void Diwa::getOutputs ( double *  outputs)

Retrieve the outputs of the neural network.

This function copies the current outputs of the neural network into the provided array. The array must be pre-allocated with sufficient space to hold all the outputs.

Parameters
outputs: Pointer to an array where the outputs will be copied. The size of the array should be at least getOutputNeurons() elements.

◆ getWeightCount()

int Diwa::getWeightCount ( ) const

Get the total number of weights in the neural network.

This function calculates and returns the total number of weights present in the neural network. This includes weights for connections between input neurons, hidden layers, and output neurons.

Returns
The total number of weights.

◆ getWeights()

void Diwa::getWeights ( double *  weights)

Retrieve the weights of the neural network.

This function copies the current weights of the neural network into the provided array. The array must be pre-allocated with sufficient space to hold all the weights.

Parameters
weights: Pointer to an array where the weights will be copied. The size of the array should be at least getWeightCount() elements.

◆ inference()

double * Diwa::inference ( double *  inputs)

Perform inference on the neural network.

Given an array of input values, this method computes and returns an array of output values through the neural network.

Parameters
inputs: Array of input values for the neural network.
Returns
Array of output values after inference.

◆ initialize()

DiwaError Diwa::initialize(int inputNeurons, int hiddenLayers, int hiddenNeurons, int outputNeurons, bool randomizeWeights = true)

Initializes the Diwa neural network with specified parameters.

This method initializes the Diwa neural network with the given parameters, including the number of input neurons, hidden layers, hidden neurons per layer, and output neurons. Additionally, it allows the option to randomize the weights in the network if desired.

Parameters
inputNeurons: Number of input neurons in the neural network.
hiddenLayers: Number of hidden layers in the neural network.
hiddenNeurons: Number of neurons in each hidden layer.
outputNeurons: Number of output neurons in the neural network.
randomizeWeights: Flag indicating whether to randomize weights in the network (default is true).
Returns
DiwaError indicating the initialization status.
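Together with inference() and train(), initialization typically looks like the following sketch for an ESP32 Arduino environment. The XOR dataset, learning rate, epoch count, and error handling are illustrative assumptions, not part of this reference; check the DiwaError constants in diwa.h before relying on the comparison shown:

```cpp
#include <diwa.h>

Diwa network;

// XOR truth table: 4 samples, 2 inputs, 1 output (illustrative data)
double trainingInputs[]  = {0, 0,  0, 1,  1, 0,  1, 1};
double trainingOutputs[] = {0, 1, 1, 0};

void setup() {
    Serial.begin(115200);

    // 2 input neurons, 1 hidden layer of 3 neurons, 1 output neuron
    DiwaError err = network.initialize(2, 1, 3, 1);
    // Compare err against the error constants defined in diwa.h
    // before proceeding.

    // Train for a number of epochs (count and rate are illustrative)
    for (int epoch = 0; epoch < 3000; epoch++)
        for (int i = 0; i < 4; i++)
            network.train(0.5, &trainingInputs[i * 2], &trainingOutputs[i]);

    // Inference on the first sample, {0, 0}
    double *out = network.inference(trainingInputs);
    Serial.println(out[0]);
}

void loop() {}
```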

◆ loadFromFile()

DiwaError Diwa::loadFromFile(T annFile)

Load neural network model from file in Arduino environment.

This method loads a previously saved neural network model from the specified file in an Arduino environment. It reads the model data from the given file and initializes the Diwa object with the loaded model parameters and weights.

Parameters
annFile: File object representing the neural network model file.
Returns
DiwaError indicating the loading status.

◆ recommendedHiddenLayerCount()

int Diwa::recommendedHiddenLayerCount(int numSamples, int alpha)

Calculates the recommended number of hidden layers based on the dataset size and complexity.

This function computes the recommended number of hidden layers for a neural network based on the size and complexity of the dataset. The recommendation is calculated using a heuristic formula that takes into account the number of samples, input neurons, output neurons, and a scaling factor alpha. The recommended number of hidden layers is determined as the total number of samples divided by (alpha times the sum of input and output neurons).

Parameters
numSamples: The total number of samples in the dataset.
alpha: A scaling factor used to adjust the recommendation based on dataset complexity.
Returns
The recommended number of hidden layers, or -1 if any of the input parameters are non-positive.

◆ recommendedHiddenNeuronCount()

int Diwa::recommendedHiddenNeuronCount ( )

Calculates the recommended number of hidden neurons based on the input and output neurons.

This function computes the recommended number of hidden neurons for a neural network based on the number of input and output neurons. The recommendation is calculated using a heuristic formula that aims to strike a balance between model complexity and generalization ability. The recommended number of hidden neurons is determined as the square root of the product of the input and output neurons.

Returns
The recommended number of hidden neurons, or -1 if the input or output neurons are non-positive.

◆ saveToFile()

DiwaError Diwa::saveToFile(T annFile)

Save neural network model to file in Arduino environment.

This method saves the current state of the neural network model to the specified file in an Arduino environment. It writes the model parameters and weights to the given file, allowing later retrieval and reuse of the trained model.

Parameters
annFile: File object representing the destination file for the model.
Returns
DiwaError indicating the saving status.
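saveToFile() and loadFromFile() are typically used together to persist a trained model across reboots. The sketch below assumes an ESP32 SPIFFS File object and an illustrative path; the template parameter T takes whatever file type your environment provides, and the error constants to check against live in diwa.h:

```cpp
#include <diwa.h>
#include <SPIFFS.h>  // assumption: ESP32 SPIFFS; SD File objects may also work

void persistAndRestore(Diwa &network) {
    // Save the trained model (path is illustrative)
    File out = SPIFFS.open("/model.ann", FILE_WRITE);
    DiwaError saveErr = network.saveToFile(out);
    out.close();

    // Later: restore it into a fresh instance
    Diwa restored;
    File in = SPIFFS.open("/model.ann", FILE_READ);
    DiwaError loadErr = restored.loadFromFile(in);
    in.close();

    // Check saveErr/loadErr against the error constants in diwa.h.
}
```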

◆ setActivationFunction()

void Diwa::setActivationFunction ( diwa_activation  activation)

Sets the activation function for the neural network.

This method allows the user to set the activation function used by the neurons in the neural network. The activation function determines the output of a neuron based on its input. Different activation functions can be used depending on the nature of the problem being solved and the characteristics of the dataset. Common activation functions include sigmoid, ReLU, and tanh.

Parameters
activationThe activation function to be set for the neural network.
See also
Diwa::getActivationFunction()

◆ train()

void Diwa::train(double learningRate, double *inputNeurons, double *outputNeurons)

Train the neural network using backpropagation.

This method facilitates the training of the neural network by adjusting its weights based on the provided input and target output values.

Parameters
learningRate: Learning rate for the training process.
inputNeurons: Array of input values for training.
outputNeurons: Array of target output values for training.
