I'll just interject once in a great discussion. There are several of us who have no connection with Jeff or Numenta, apart from being friends and colleagues in a joint enterprise, so let me speak for myself and in some part for others (who can speak for themselves if they wish).
This is first and foremost a project to extract and characterise a tractable model of the primary computational functions of neocortex. Other people can develop ML algorithms which do some things well, and I study everything of use they come up with. When their results tell us something useful about brain science, all the better; when they shed no light on how brains work, we judge them as such, and we integrate them differently. We're basing our work on an intuition: the particular structures used in neocortex may be key to genuine intelligence, so our chosen job is to identify the most important of these and find out what they can do. This is a difficult task, because the data we get from neuroscience is messy, biological and noisy, and there is no "gold standard" (as in physics) which tells us what is relevant and what is not. Jeff is very much the pioneer in that task.

Using the analogy everyone's been using: we are actually trying to figure out how feathers and muscles lead to flight. Airplanes and helicopters cannot land on twigs, and they cannot catch prey in 120-mph dives. But brain-controlled animals can, so we seek the learning algorithms which allow animals to do things no "brainless computer" can.

It's ironic that Yann LeCun in particular scorns our work. He, Hinton, Bengio, Schmidhuber and several more were shut out and laughed out of conference after conference and journal after journal for 15-20 years. Now that they have data, GPUs and a few engineering tricks, everything they've been preaching about for decades suddenly works and takes over the Zeitgeist. But because they're actually very smart, every one of them knows that their oversimplified ANN models are just local maxima. Bengio is publishing about networks without backprop. Hinton talks about capsules. LeCun asks how brains "really work".
Schmidhuber (always wiser than the fashion) has been 20 years ahead of his friends, asking all these questions all along while inventing things like LSTM to keep his methods outperforming. The mistake Numenta perhaps makes is to ignore these people, and others. They have huge amounts to teach us, because many, many more people have been working on ANNs for decades, and the same is true of information theory, nonlinear dynamics, and all areas of ML, statistics, operations research and applied mathematics. That's what everyone outside Numenta is doing - learning how we can combine such a huge amount of other people's learning to help us with our work. We make no claim beyond this: we're among the few who attempt to figure out how the cortex does cognition. Our software, lagging well behind our embryonic theory, is encouraging testimony that we might be getting somewhere. We're not in a pissing contest with some other algorithm, but we won't be put off by the inane attacks David Ray mentioned he suffered at the hands of Torch jockeys who think they know how to do Machine Learning. End of rant.

On Tue, Jun 30, 2015 at 8:36 PM, Julian Samaroo <[email protected]> wrote:

> I think that in the case of behavior production, in the absence of the sensorimotor framework you are trying to implement, a simple reinforcement learning system might suffice. Say you have one or more HTM layers receiving data from some sensory system, and you want to act on it. First, you can train the HTM on the sensory data so it can learn to represent the data spatially and temporally. Next, you would fully connect all the cells/columns in the layer to every input node of the reinforcement layer, and begin administering rewards/punishments to the reinforcement layer based on how accurate or proper its outputs are.
> Assuming the HTM layer is capable of extracting the features from the sensory input that are most relevant to proper action production, a single common RL layer should be able to take that and learn an effective output policy.
>
> This is in fact one of the first things I'm working on as I get to work on my own projects, and is likely how the cortex and basal ganglia communicate in the brain (although this abstracts away quite a few details, but I digress). Look into something like SARSA or Q-learning; those should be more than sufficient to accomplish the task. If you'd like to see an online example of a "robot" agent doing something like this, check out this link: http://cs.stanford.edu/people/karpathy/convnetjs/demo/rldemo.html
>
> Julian Samaroo
> Manager of Information Technology
> BluePrint Pathways, LLC
> (516) 993-1150
>
> On Tue, Jun 30, 2015 at 2:21 PM, Matthew Taylor <[email protected]> wrote:
>
>> John,
>>
>> Just to make sure that all your questions have been addressed directly:
>>
>> On Tue, Jun 30, 2015 at 2:55 AM, John Blackburn <[email protected]> wrote:
>> > "performs with true intelligence" is a pretty bold claim. If this is the case, how come there are no very convincing examples of HTM working with human-like intelligence? The Hotgym example is nice but it is really no better than what could be achieved with many existing neural networks. Echo state networks have been around for years and can make temporal predictions quite well.
>>
>> People define "intelligence" in different ways. If you take for granted that the neocortex has "true intelligence", then HTM might be called an implementation of "true intelligence" algorithms, based upon the fact that it acts upon incoming data with the same basic principles as the neocortex. We are trying to lift the intelligence out of the brain and into software, one step at a time.
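[Editor's note: to make Julian's RL-on-top-of-HTM suggestion above concrete, here is a minimal tabular Q-learning sketch. Everything here is hypothetical: it assumes some upstream code has already hashed the HTM layer's active columns down to a small discrete state index, and the toy environment and reward function are invented placeholders, not anything from NuPIC.]

```python
import random
from collections import defaultdict

random.seed(0)  # deterministic toy run

# Hypothetical setup: we assume the HTM layer's output has been mapped
# to one of N_STATES discrete state ids; the actions and reward are toys.
N_STATES = 16      # discretized HTM states (assumption)
ACTIONS = [0, 1]   # e.g. two motor commands
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

Q = defaultdict(float)  # Q[(state, action)] -> estimated value

def choose_action(state):
    # epsilon-greedy policy over the Q-table
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    # standard Q-learning bootstrap update
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

def step(state, action):
    # toy environment: reward 1 for action 1 in even-numbered states
    reward = 1.0 if (state % 2 == 0 and action == 1) else 0.0
    return reward, random.randrange(N_STATES)

state = 0
for _ in range(5000):
    action = choose_action(state)
    reward, next_state = step(state, action)
    update(state, action, reward, next_state)
    state = next_state
```

After enough steps the table prefers action 1 in even states. Swapping in SARSA, as Julian also mentions, only changes the bootstrap term: you use the Q-value of the action actually taken next rather than the max.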
>> So, while NuPIC's performance might not seem all that impressive when other technologies can do similar things, we have lots of room to grow [1] and a lot more work to do. All of this upcoming work should increase the capabilities of the HTM system we are implementing. The fact that we are somewhat on par with some other ML techniques at this point is encouraging to me.
>>
>> > I recently presented some time sequence data relating to a bridge to this forum but HTM did not succeed in modelling this (ESNs worked much better).
>>
>> I had a little time to work on your bridge tilt data [2], but not enough to make it useful. I still think this problem presents a relevant challenge for HTM, and I think with more time and effort, someone might be able to create a real solution. I, unfortunately, have other projects I have to work on. :(
>>
>> > So outside of Hotgym, what really compelling demos do you have? I've been away for a while so maybe I missed something...
>>
>> My current favorites are location-based anomaly demos like these:
>> - https://github.com/nupic-community/mine-hack
>> - https://github.com/numenta/nupic.geospatial
>>
>> I am also working on a new tutorial, coming within a couple of weeks (hopefully).
>>
>> > I am also rather concerned HTM needs swarming before it can model anything. Isn't that "cheating" in a way? It seems the HTM is rather fragile and needs a lot of help. The human brain does not have this luxury; it just has to cope with whatever data it gets.
>>
>> Swarming is hard to explain. In the brain, input data to the neocortex comes from sensory organs, which have been tuned by millions of years of evolution to have very specific characteristics that process incoming light, sound, movement, etc. into certain patterns of nerve excitations. These patterns get generated outside the cortex, but they are still important to attempt to replicate in some ways.
>> All data in "reality" must be represented to the cortex somehow, outside of that reality. In NuPIC, this is what encoders do. They translate data coming into them into a representation similar to a vector of nerve excitations.
>>
>> Anyway, swarming is a very rough way to simulate evolution in the sensory organs. It randomly sets up encoders with different parameters (also spatial pooling and temporal memory parameters) and tries to find the best possible set of configurations for the specific data that is being processed. Your cochleae have had millions of years to come to that perfect set of configuration parameters ;). Swarming is a brute-force attempt to resolve some set of parameters for a specific input data set. It is not always right, it takes a long time, and it sometimes requires manual intervention, but it is definitely very useful for finding groups of configurations that work well for certain types of data.
>>
>> > I'm also not convinced the neocortex is everything, as Jeff Hawkins thinks. I seriously doubt the bulk of the brain is just scaffolding. I've been told birds have no neocortex but are capable of very intelligent behaviour, including constructing tools. Meanwhile I don't see any AI robot capable of even ant-like intelligence. (Ants are amazing!) Has anyone even constructed a robot based on HTM?
>>
>> While I know nothing about bird brains, except that they have a cerebral cortex that has some similarities to the mammalian cortex, I do know that hierarchy in the neocortex is a generally accepted theory in neuroscience.
>>
>> We could still learn a helluva lot from the lower levels of the brain (imagine a flight vehicle that could control itself as efficiently as a fly); that just isn't what we're trying to do at Numenta.
>>
>> > Personally I don't think a disembodied computer can ever be intelligent (not even ant-like intelligence).
>> > IMO a robot (and it must BE a robot) needs to be embodied, with a sensorimotor loop at the core of its functionality, to start behaving like an animal.
>>
>> You don't need to have physical interaction with the world to have behavior. There are millions of actions that can be taken on the internet that all have consequences, change the landscape for the actor, and present different possible actions in return. The most obvious example is video games, but the internet in general is a very large universe with no physical structure and endless virtual structures to interact with.
>>
>> [1] https://github.com/numenta/nupic.research/wiki/Current-Research-Tasks
>> [2] https://github.com/nupic-community/bridge-tilt
>>
>> Regards,

--
Fergal Byrne, Brenter IT
@fergbyrne
http://inbits.com - Better Living through Thoughtful Technology
http://ie.linkedin.com/in/fergbyrne/ - https://github.com/fergalbyrne
Founder of Clortex: HTM in Clojure - https://github.com/nupic-community/clortex
Co-creator @OccupyStartups Time-Bombed Open License http://occupystartups.me
Author, Real Machine Intelligence with Clortex and NuPIC
Read for free or buy the book at https://leanpub.com/realsmartmachines
e:[email protected] t:+353 83 4214179
Join the quest for Machine Intelligence at http://numenta.org
Formerly of Adnet [email protected] http://www.adnet.ie
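[Editor's note: as a footnote to Matthew's description of swarming above — stripped of the particulars, it is a brute-force search over encoder and model parameters, scored by running models on the data set. A toy sketch of that idea follows; the parameter names, ranges, and the score() function are invented for illustration and are not NuPIC's actual swarming API, which uses a particle-swarm-style search.]

```python
import itertools

# Toy illustration of swarming as brute-force parameter search.
# The parameter names/ranges and the score() function are made up;
# real swarming scores each candidate by building a model with those
# parameters and measuring its prediction error on the data set.
SEARCH_SPACE = {
    "encoder_resolution": [0.1, 0.5, 1.0, 2.0],
    "column_count": [1024, 2048],
    "cells_per_column": [8, 16, 32],
}

def score(params):
    # stand-in for "run the model and measure error"; lower is better,
    # with a known best point at (0.5, 2048, 16)
    err = abs(params["encoder_resolution"] - 0.5)
    err += 0.0001 * abs(params["column_count"] - 2048)
    err += 0.01 * abs(params["cells_per_column"] - 16)
    return err

# exhaustively try every combination and keep the best-scoring one
keys = list(SEARCH_SPACE)
best = min(
    (dict(zip(keys, combo)) for combo in itertools.product(*SEARCH_SPACE.values())),
    key=score,
)
```

The real system adds a lot on top of this (smarter search than exhaustion, early stopping of bad models, field selection), but the shape is the same: propose parameter sets, score them against the actual data, keep the winners.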
