An artificial neural network can be used as a universal approximator. We have all had the experience of guessing the solution to a particular problem; an artificial neural network serves as a general guess that can approximate a large class of functions.

The simplest example is that we can always describe a function as

$$ y(t) = \sum_{i=1}^{N} v_i \, \mathrm{activation}(w_i t + u_i), $$

where **activation** is a function related to the response of a single neuron. Precisely speaking, this decomposition of the function is a network composed of $N$ neurons, each carrying the parameters $\{v_i, w_i, u_i\}$.
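As a concrete sketch of this decomposition (the sigmoid activation and the parameter values below are placeholders of my own choosing, not from the original):

```python
import numpy as np

def activation(x):
    # Sigmoid, used here as a placeholder activation function.
    return 1.0 / (1.0 + np.exp(-x))

def network(t, v, w, u):
    # y(t) = sum_i v_i * activation(w_i * t + u_i): a network of N neurons.
    return np.sum(v * activation(w * t + u))

# N = 3 neurons with arbitrary parameters v_i, w_i, u_i.
v = np.array([0.5, -1.2, 0.8])
w = np.array([1.0, 0.3, -0.7])
u = np.array([0.0, 0.5, -0.2])
print(network(0.1, v, w, u))
```

Increasing $N$ enlarges the class of functions such a network can approximate.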

How exactly does it work?

The only problem, however, is that we need to determine the values of the parameters; this is what training does. For training we need either some real data that the network should fit, or a conservation law that the network should obey.

For a differential equation, we have a natural conservation law. For example, the equation

$$ \frac{dy}{dt} + y = 0 $$

means that the quantity

$$ \frac{dy}{dt} + y $$

is conserved and is exactly 0. If we use an ANN to approximate the function $y(t)$, this conservation law should be satisfied, and it is the only law that the approximator has to obey.
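As a sanity check, taking the example equation to be the decay equation $\frac{dy}{dt} + y = 0$, whose exact solution is $y = e^{-t}$ up to a constant factor, the conserved quantity vanishes identically along that solution:

```python
import numpy as np

t = np.linspace(0.0, 2.0, 9)
y = np.exp(-t)       # exact solution of dy/dt + y = 0
dydt = -np.exp(-t)   # its derivative
# The conserved quantity dy/dt + y is exactly zero along the solution.
print(np.max(np.abs(dydt + y)))
```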

Using the network, we know that for each argument $t$, we should have an output

$$ y(t) = \sum_{i=1}^{N} v_i \, \mathrm{activation}(w_i t + u_i). $$
By training, we mean minimizing the deviation of the quantity $\frac{dy}{dt} + y$ from its conserved value $0$. We devise a function that describes this deviation and name it the cost,

$$ I = \sum_j \left( \frac{dy}{dt} + y \right)^2 \Bigg|_{t = t_j}, $$

where the $t_j$ are sample points of the argument $t$.
We can actually calculate $\frac{dy}{dt}$ using the approximator,

$$ \frac{dy}{dt} = \sum_{i=1}^{N} v_i w_i \, f'(w_i t + u_i), $$

where we denote the **activation** function by $f$. Then we can parameterize the cost function using the parameters $\{v_i, w_i, u_i\}$,

$$ I(\{v_i, w_i, u_i\}) = \sum_j \left( \sum_{i=1}^{N} v_i \left[ w_i f'(w_i t_j + u_i) + f(w_i t_j + u_i) \right] \right)^2. $$
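Putting the pieces together, the parameterized cost can be evaluated as follows (a minimal sketch assuming the equation $\frac{dy}{dt} + y = 0$, a sigmoid activation $f$, and sample points $t_j$ of my own choosing):

```python
import numpy as np

def f(x):
    # Sigmoid activation (an assumption for this sketch).
    return 1.0 / (1.0 + np.exp(-x))

def f_prime(x):
    s = f(x)
    return s * (1.0 - s)

def cost(params, t_samples):
    # params packs the 3N parameters (v_i, w_i, u_i).
    v, w, u = np.split(params, 3)
    total = 0.0
    for t in t_samples:
        y = np.sum(v * f(w * t + u))               # network output y(t)
        dydt = np.sum(v * w * f_prime(w * t + u))  # analytic dy/dt
        total += (dydt + y) ** 2                   # squared residual of dy/dt + y
    return total

t_samples = np.linspace(0.0, 2.0, 11)
params = np.concatenate([np.ones(3), np.ones(3), np.zeros(3)])  # N = 3
print(cost(params, t_samples))
```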

The final step is to find the parameters that minimize this cost.
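A minimal training sketch, assuming the toy equation $\frac{dy}{dt} + y = 0$ with initial condition $y(0) = 1$ and using `scipy.optimize.minimize` as a generic minimizer (the initial-condition penalty is my own addition; without pinning $y(0)$, the trivial solution $y \equiv 0$ would also minimize the residual):

```python
import numpy as np
from scipy.optimize import minimize

def f(x):
    return 1.0 / (1.0 + np.exp(-x))

def f_prime(x):
    s = f(x)
    return s * (1.0 - s)

def cost(params, t_samples):
    v, w, u = np.split(params, 3)
    y = np.array([np.sum(v * f(w * t + u)) for t in t_samples])
    dydt = np.array([np.sum(v * w * f_prime(w * t + u)) for t in t_samples])
    residual = np.sum((dydt + y) ** 2)   # deviation from the conservation law
    y0 = np.sum(v * f(u))                # network output at t = 0
    return residual + (y0 - 1.0) ** 2    # penalty pinning y(0) = 1

t_samples = np.linspace(0.0, 1.0, 11)
rng = np.random.default_rng(0)
res = minimize(cost, rng.normal(size=9), args=(t_samples,))

# Compare the trained network with the exact solution y = e^{-t} at t = 0.5.
v, w, u = np.split(res.x, 3)
y_half = np.sum(v * f(w * 0.5 + u))
print(y_half, np.exp(-0.5))
```

Even with only $N = 3$ neurons the agreement should already be reasonable; more neurons and more sample points improve it.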

Activation function or trigger function

One useful activation function is the sigmoid,

$$ f(x) = \frac{1}{1 + e^{-x}}, $$

whose derivative takes the convenient form $f'(x) = f(x)\bigl(1 - f(x)\bigr)$.
© 2017, Lei Ma