The analogy goes deeper, as neurons are known to provide the human brain with a “generic learning algorithm”: by re-wiring various types of sensory input, the same brain region can learn to recognize different types of input. For example, the region responsible for hearing can learn to see, given the appropriate sensory re-wiring from the eyes to the hearing region. This has been confirmed experimentally on ferrets. Similarly, ANs organised in NNs provide a generic algorithm that is, in principle, capable of learning to distinguish any classes. So, going back to the example applications at the beginning of this answer, you can use the same NN principles to classify pictures, texts or transactions. For a better understanding, read on.
At this point you must be wondering what on earth the activation function mentioned previously is. To understand it, we need to recall what a NN tries to compute: an output function (the model) that takes an example, described by its features, as input and outputs the likelihood that the example falls into each one of the classes. What the activation function does is take as input the weighted sum of these feature values and transform it into a form that can be used as a component of the output function. When multiple such components from all the ANs of the network are combined, the goal output function is constructed.
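To make this concrete, here is a minimal sketch of a single AN using the sigmoid as its activation function. The feature values, weights and bias below are made-up numbers for illustration, not values from any trained network:

```python
import math

def sigmoid(z):
    # Squashes any real number into the range (0, 1), so the result
    # can be read as a likelihood-like score for a class.
    return 1.0 / (1.0 + math.exp(-z))

def neuron_output(features, weights, bias):
    # Step 1: the weighted sum of the feature values.
    z = sum(w * x for w, x in zip(weights, features)) + bias
    # Step 2: the activation function transforms that sum into a
    # component usable by the rest of the network.
    return sigmoid(z)

# Hypothetical example: two features with hand-picked weights.
score = neuron_output(features=[0.5, 1.5], weights=[0.8, -0.4], bias=0.1)
print(round(score, 3))  # a value strictly between 0 and 1
```

In a full NN, many such outputs feed into the neurons of the next layer, and combining them builds up the overall output function described above.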