Moving Towards Real Artificial Intelligence: Part 1 – The Brain as Inspiration for Modelling Computation
Link to: Part 2
This post is the first of the series Moving Towards Real Artificial Intelligence, which will explore the HTM approach to creating artificially intelligent machines by mimicking the processes that govern human learning.
In the past few years, artificial intelligence has once again become a hot topic. This is largely due to its many recent successful applications, from speech recognition to translation, from medical image analysis to video captioning. Most of these examples are powered by artificial neural networks (ANNs), which outperform humans on well-encoded tasks (image classification, handwriting recognition, etc.) but lack the ability to generalise and to learn from limited datasets. That’s why, despite its name, present-day state-of-the-art AI is actually not very intelligent.
ANNs are, in fact, only loosely related to the structure and functioning of the human network of neurons (a.k.a. the brain).
Originally, the single computational units of ANNs were called “neurons”, since their “activation” once a certain (numerical) threshold was reached was thought to mimic the “firing” of brain neurons. However, it was soon realised that real and artificial neurons share little more than a few abstract properties (hierarchical organisation, multiple inter-neuron connections, etc.). A major difference is that, while a single human neuron is a complex machine with non-linear responses and properties we don’t yet understand (e.g. how it learns), ANN building blocks are generally monotonic mathematical functions that process a very narrow range of inputs.
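To make that contrast concrete, here is a minimal sketch of the kind of unit ANNs are typically built from: a weighted sum squashed by a monotonic activation function. The weights, inputs, and bias are invented for illustration; nothing here models dendrites, spike timing, or the learning mechanisms of a biological neuron.

```python
import math

def artificial_neuron(inputs, weights, bias):
    """A classic ANN 'neuron': weighted sum of inputs passed through
    a sigmoid, which is monotonic in its argument."""
    weighted_sum = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-weighted_sum))

# Three numerical inputs and fixed, made-up weights for illustration.
print(artificial_neuron([0.2, 0.7, 0.1], [0.5, -0.3, 0.8], bias=0.1))
```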
Thus, while the media enjoys presenting recent advances in AI as the first step toward a soon-to-doom-us-all human-like machine, it looks very unlikely that such systems will ever compete with humans on tasks that require a high degree of flexibility and creativity.
A New Theory For Human Brain Intelligence
A different approach to the pursuit of human-level intelligence is presented by Jeff Hawkins in his book “On Intelligence” (published in 2004). Humans display the highest and most versatile form of intelligence that we are aware of, most of which is an expression of the outermost layer of the brain: the neocortex¹.
In his book, Hawkins proposes a challenging model that describes the basics of how human brains learn, which can be used to develop intelligent machines.
It is already well established in the neuroscience community that when we learn something, the information is stored in our neocortex; specifically, it is encoded in the complex system of connections between its billions of neurons. It is also believed that different parts of the brain are responsible for processing different inputs, with, for example, different regions specialising in understanding visual and auditory information. A large contingent of the neuroscience community is focused on narrowing down which groups of neurons specialise in what, and why.

Hawkins’ first basic intuition is that (at least in the neocortex) neurons are not different in different regions, and that the apparent specialisation of different areas is due to the different sources of input signals they receive. This is supported by many cases in which the lack of a sensory input (blindness, deafness, etc.) caused the regions traditionally specialised in that input to adapt to completely different signals. This wouldn’t be possible if, for example, the neurons that process visual information from the eyes were intrinsically different from those processing auditory inputs.
The second intuition is about how we learn. Instead of storing static information like files on a hard drive, our neurons learn to recognise repeating patterns of inputs. In a nutshell, a new sensory pattern triggers the firing of a certain group of neurons. If a number of these neurons have already experienced the same pattern (and, therefore, fired in a similar pattern before), their mutual links get stronger (i.e. new connections get created).
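As a rough illustration of this idea (not HTM’s actual learning rule, which I’ll get to in later posts), the toy sketch below strengthens the link between any two “neurons” that fire together in the same input pattern, a Hebbian-style rule. All neuron names, patterns, and numbers are invented for the example.

```python
from collections import defaultdict

# Connection strength between pairs of neurons, starting at zero.
connections = defaultdict(float)

def experience(pattern, learning_rate=0.1):
    """Strengthen the mutual links between all neurons that fire
    together for this input pattern (a Hebbian-style toy rule)."""
    active = sorted(pattern)
    for i, a in enumerate(active):
        for b in active[i + 1:]:
            connections[(a, b)] += learning_rate

# A sensory pattern seen repeatedly leaves stronger traces
# than a pattern seen only once.
for _ in range(5):
    experience({"n1", "n4", "n7"})   # a familiar pattern
experience({"n2", "n3", "n9"})       # a novel pattern

print(connections[("n1", "n4")])  # strong, repeatedly reinforced link
print(connections[("n2", "n3")])  # weak link from a single exposure
```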
A third intuition is that neurons fire before they receive the input signal. In a way, they live in a state of perennial anticipation of what is about to happen, and flag that something is wrong when the actual signal does not match that expectation. An example of this is walking down a well-known flight of steps. The neurons that receive sensory signals from our legs continuously expect the steps to be spaced as they learnt them, which is why we don’t need to be 100% focused on the act of walking in such a context. But if a sudden exception occurs (e.g. a step is a few centimetres shorter), as soon as the signal from the legs stops matching what the neurons expect, we realise something is wrong and we consciously coordinate our muscles to keep our balance.
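Staying with the staircase example, the sketch below (all values invented) keeps a learned expectation of the next step height and raises a “surprise” only when the incoming signal deviates from that expectation beyond a tolerance, which is roughly the moment conscious attention kicks in.

```python
LEARNED_STEP_HEIGHT_CM = 17.0   # what the neurons have come to expect
TOLERANCE_CM = 1.0              # small deviations pass unnoticed

def check_step(actual_height_cm):
    """Compare the incoming sensory signal with the prediction and
    flag a mismatch only when expectation and reality diverge."""
    return abs(actual_height_cm - LEARNED_STEP_HEIGHT_CM) > TOLERANCE_CM

# Walking down a familiar staircase: no attention needed...
for height in [17.0, 16.8, 17.2, 17.1]:
    assert not check_step(height)

# ...until one step is a few centimetres shorter than expected.
print(check_step(13.5))  # True -> expectation violated, attention engaged
```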
Hierarchical Temporal Memory
From these and other intuitions, Hawkins developed the basic foundations of his Hierarchical Temporal Memory (HTM) model. An implementation of this model (which I’ll describe in more detail in a future post) is being built as open-source code by Numenta, the company he co-founded with the purpose of creating artificial intelligence based on modelling human brain function. They strongly support a wide community of external contributors and host a vibrant knowledge-sharing environment, with open theory, technical discussions, and tutorials.
1: While other parts of the brain can be linked with different forms of intelligence, most of the functions we associate with intelligence (language, vision processing, imagination, etc.) are created and processed in the neocortex.
Header image courtesy of OSA Student Chapter at UCI Art in Science Contest (link).