Wikenigma - an Encyclopedia of Unknowns

Neural Networks

"Multilayer neural networks are among the most powerful models in machine learning, yet the fundamental reasons for this success defy mathematical understanding."

Source : Proceedings of the National Academy of Sciences, 2018, 115 (33)

'Artificial Intelligence' (AI) systems predominantly use Neural Networks to achieve their 'Machine Learning' capability. However :

"No underlying principle has guided the design of these learning systems, other than vague inspiration drawn from the architecture of the brain (and no one really understands how that operates either)."

Source : Quanta Magazine, Sept. 2017

A theory known as the 'Information Bottleneck Method', first proposed in 1999, is currently being evaluated as a possible explanation - but as yet there is no general agreement on how the networks operate.
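For reference, the Information Bottleneck formulation treats learning as a trade-off: compress the input X into a representation T while keeping T informative about the target Y. A standard statement of the objective (a summary in conventional notation, not taken from the article itself) is:

```latex
% I(.;.) denotes mutual information between two variables;
% \beta \ge 0 trades compression of X against preservation
% of information about the target Y.
\min_{p(t \mid x)} \; I(X;T) \;-\; \beta\, I(T;Y)
```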

A further complication is that the 'back propagation' technique which artificial neural nets use doesn't have a direct equivalent in biological neural nets. Biological neural nets have, however, been shown to back-propagate their electrical action potentials locally. The function(s) of this phenomenon is also currently unexplained.

"While there is ample evidence to prove the existence of backpropagating action potentials, the function of such action potentials and the extent to which they invade the most distal dendrites remains highly controversial."

Source : Wikipedia
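The artificial 'back propagation' referred to above can be sketched in a few lines of NumPy. The example below trains a minimal two-layer network on the XOR problem; the architecture, learning rate and iteration count are illustrative choices, not drawn from the article:

```python
import numpy as np

# Minimal illustration of backpropagation: errors at the output are
# propagated backwards, layer by layer, to compute weight gradients.
rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

W1 = rng.normal(size=(2, 4))  # input -> hidden weights
W2 = rng.normal(size=(4, 1))  # hidden -> output weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

losses = []
lr = 1.0
for _ in range(5000):
    # Forward pass
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)
    losses.append(np.mean((out - y) ** 2))

    # Backward pass: chain rule applied from output towards input
    d_out = 2 * (out - y) / len(X) * out * (1 - out)
    d_W2 = h.T @ d_out
    d_h = d_out @ W2.T * h * (1 - h)
    d_W1 = X.T @ d_h

    # Gradient-descent weight updates
    W1 -= lr * d_W1
    W2 -= lr * d_W2

print(f"loss: {losses[0]:.4f} -> {losses[-1]:.4f}")
```

Note that nothing in this procedure corresponds directly to the locally back-propagating action potentials observed in biological neurons, which is precisely the mismatch described above.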


Neural networks are notoriously unstable. For example, systems which have been 'trained' to recognise images, and which operate reliably most of the time, will occasionally completely misinterpret an image and give an erroneous result. The reasons for such profound instabilities - known in AI jargon as 'Hallucinations' - are currently unknown.
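The kind of fragility described above is easiest to see in a deliberately simplified setting. The sketch below (all weights and inputs are illustrative values; real failures involve far larger models, and their cause remains unexplained) shows a toy linear classifier whose decision is flipped by a small, worst-case nudge to every input component:

```python
import numpy as np

# Toy illustration only: a linear classifier's decision flips under a
# small adversarial perturbation. This does NOT explain the
# instabilities of real networks, which remain an open question.
w = np.array([2.0, -1.5, 1.0, 2.5])  # hypothetical learned weights
x = np.array([0.2, 0.1, 0.1, 0.1])   # input the model scores as positive

score = float(w @ x)  # 0.6 -> classified 'positive'

# Nudge each component by eps in the direction that hurts the score most
eps = 0.1
x_adv = x - eps * np.sign(w)
adv_score = float(w @ x_adv)  # 0.6 - eps * sum|w_i| = -0.1 -> 'negative'

print(score, adv_score)
```

The perturbation is small relative to the input, yet by construction it shifts the score by eps times the sum of the absolute weights, enough to cross the decision boundary.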


Given all the above, it follows that technical systems which are strongly based around neural networks tend to work in ways which their designers don't fully understand. For an overview of current AI systems, particularly Large Language Models (LLMs), see Scientific American, May 2023.

Editor's Note: The ongoing research efforts to understand exactly how neural networks come to their 'decisions' have recently generated a new research field known as 'Explainability' (example ref.)
