Recently, Numenta founder Jeff Hawkins claimed that we'd have the technology necessary for "intelligent machines" in just five years. Numenta is working to model the neocortex in the hopes of combining machine learning with self-directed action. Wow. I'd love that. But I think most normal people are terrified.
For background, please see these two videos.
In his talks, Hawkins cites an impressive example of generalization. During a recent hackathon, Subutai Ahmad ran a program training a NuPIC word client (I'm just starting to grasp the terms; this may include a Cortical Learning Algorithm (CLA) and/or the Online Prediction Framework (OPF)) on a short list of words. The word sets were triples like "cow,eats,grain." With each input, the algorithm predicts the next word, then adjusts based on what you teach it (load into it).
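To make that loop concrete, here is a minimal sketch of the predict-then-learn cycle. The model interface below (predict_next / learn / reset) is a made-up stand-in, not the real NuPIC API; the point is only the shape of the loop.

```python
# A conceptual sketch of the hackathon demo's predict-then-learn loop.
# NOTE: EchoModel and its methods are hypothetical stand-ins, not NuPIC code.

class EchoModel(object):
    """Trivial stand-in model: predicts the last word it was taught."""
    def __init__(self):
        self.last = None
    def predict_next(self):
        return self.last
    def learn(self, word):
        self.last = word
    def reset(self):
        self.last = None

triples = [
    ["cow", "eats", "grain"],   # from the hackathon input
    ["cat", "eats", "mice"],
]

model = EchoModel()
for words in triples:
    for word in words:
        # Predict first, then feed the actual word so the model can adjust.
        print("saw %-6s expected %s" % (word, model.predict_next()))
        model.learn(word)
    model.reset()  # end of one triple; start a fresh sequence
```

A real CLA/OPF model keeps sequence state and far richer statistics, but the teach-by-streaming pattern is the same.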
The impressive part was that the algorithm predicted “fox eats rodent” – without having seen the word “fox” before in the input.
The code actually does sort of "know" what a fox is, though. It queries Cortical.io (formerly CEPT) for a Sparse Distributed Representation (SDR) of each word it sees. The SDR is a numerical representation of the word, derived from "reading" Wikipedia. Still, this is an impressive generalization – the system is effectively understanding that a fox is like other animals, and it knows that some of those animals eat rodents, so it appears to guess that foxes must eat rodents too.
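To give a feel for why SDRs make that kind of leap possible, here is a toy illustration. The bit positions below are invented for this example; real Cortical.io SDRs have thousands of bits with only a small fraction active, but the idea is the same: semantically related words share many active bits, so overlap is a rough measure of similarity.

```python
# Toy SDRs represented as sets of active bit indices (positions are made up).
# Real SDRs are long, very sparse binary vectors derived from Wikipedia text.

def overlap(sdr_a, sdr_b):
    """Number of active bits two SDRs have in common."""
    return len(sdr_a & sdr_b)

fox   = {3, 17, 42, 88, 120, 256, 300, 412}
cow   = {3, 17, 42, 95, 120, 310, 412, 500}    # another animal: large overlap with fox
grain = {7, 55, 199, 243, 377, 481, 502, 600}  # a food: almost no overlap with fox

print(overlap(fox, cow))    # high overlap -> fox behaves like other animals
print(overlap(fox, grain))  # low overlap  -> fox is unlike grain
```

Because the word client is fed these bit patterns rather than raw strings, a never-before-seen word like "fox" still lands close to the animal patterns the model has already learned.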
There is a ton of interesting stuff going on here, including real-time learning that I won’t even attempt to explain. In the videos above, Hawkins explains how this is different from a traditional neural network.
But, in some ways this demo is misleading. It is not showing how the neocortex works (or how the brain reads, interprets, and generalizes between words); it is only showing how the building blocks we've got so far can be hacked to do interesting things.
The experiment only shows how an upper layer of the brain might work. This demo (unless I'm misunderstanding) is showing how one layer of CLA/OPF magic behaves when fed a sequence of richly derived word meanings (which in a multi-layer model would be learned and stored by a different level of the brain).
What I wanted to test was how robust this prediction is. Did Ahmad just get lucky with his 38 lines of input?
After a couple hours twiddling my laptop to get things to run, I did reproduce the result with the same input file. Aside: it is wonderful that Numenta and the hackers have open sourced their code and examples so the rest of us can play with it!
However, I also gave it a bunch more input and got different – sometimes more, sometimes less logical – results. With this data set, I get:
| Input 1 | Input 2 | Prediction |
|---------|---------|------------|
| fox     | eat     | grain      |
I also got “leaves” or “mice” with other variations of the input (I didn’t change much related to animals). It seemed kind of random.
But, I also get these great results (starting with grandma the first time it sees any of these terms in the input file)…
| Input 1  | Input 2 | Prediction |
|----------|---------|------------|
| grandma  | likes   | music      |
| grandma  | eats    | water      |
| cousin   | likes   | sun        |
| nephew   | likes   | music      |
| niece    | likes   | water      |
| brother  | likes   | iphone     |
| raccoon  | likes   | release    |
| horse    | likes   | sun        |
| musician | likes   | artists    |
| runner   | likes   | beach      |
"Release" and "artists" don't exist anywhere in the input. WTF? To be sure, I'm not training it on the best data set, and yet it is still coming up with reasonable predictions. Here's the full input and output.
I tried a bunch of much more abstract terms to see if we could use this at Haiku Deck, where we've got some interesting ways of guessing the best background image for a slide in your presentation. While the algorithm is clearly learning, it takes a lot of mental jumping to decide whether its predictions are correct.
I have no idea how Numenta is regarded by other AI or neuroscience researchers. But Numenta’s advances in modeling the brain have definitely re-awakened my dormant interest in Artificial Intelligence. Next, I want to try simpler input like images or even X/Y data (bitcoin prices, anyone?).
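If the X/Y experiment pans out, I expect the setup to look roughly like NuPIC's hotgym scalar-prediction example. The sketch below is from memory: the import path, the model_params module, and the "price" field name are assumptions that may differ across NuPIC versions, so treat it as a starting point rather than working code.

```python
# Rough sketch of feeding a scalar series (e.g. prices) to a single OPF model.
# NOTE: based on my recollection of NuPIC's hotgym example; the import path and
# MODEL_PARAMS are assumptions and may need adjusting for your NuPIC version.
from nupic.frameworks.opf.modelfactory import ModelFactory

import model_params  # a params dict like the one generated/shipped for hotgym

model = ModelFactory.create(model_params.MODEL_PARAMS)
model.enableInference({"predictedField": "price"})

prices = [450.0, 452.5, 449.0, 460.0, 475.0]  # made-up values, just for shape

for price in prices:
    result = model.run({"price": price})
    # One-step-ahead best prediction from the OPF's multi-step inference output.
    predicted = result.inferences["multiStepBestPredictions"][1]
    print("saw %.1f, predicted next %.1f" % (price, predicted or 0.0))
```

The interesting question is whether the same one-layer model that generalized over word SDRs does anything useful with a plain numeric stream.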
Like you, my interest in A.I. has been re-awakened by Jeff Hawkins and Numenta. In the past, as a computer science student, I dabbled with Lisp and Prolog, but always found them lacking. Neural nets were also an interest, but I could see they weren't going to scale up to general intelligence. Hawkins's approach of studying the human brain, getting to grips with its structure, and applying it to A.I. seems such an obvious route to take that I'm amazed no one else has tried it.
In previous online content, Hawkins has stated that his methodology hasn't been taken seriously by others in the A.I. field due to his lack of mathematical rigor, so in answer to your comment "I have no idea how Numenta is regarded by other AI or neuroscience researchers," I would suggest it's less than highly. In some respects I have sympathy with this view, but Hawkins is attempting to copy the real McCoy and turn biology into silicon. Maths doesn't really apply here, in my view.
Whether his optimism about A.I. in five years will be borne out, I have my doubts. However, I believe that he and his team are on the right track. Personally, I've looked forward to this momentous event for decades, so rest assured I will be watching Numenta, with fingers crossed.
I look forward to watching the latest videos.