Hannu,

No, I haven't gotten very far along yet. I've been spending time familiarizing
myself with the CLA, getting the VM installed, and generally just getting my
feet wet with NuPIC again (things have changed a lot since 1.7, the last
version I played with).
I still don't know enough to make the call as to whether this is the best
platform option for my purposes yet, but I plan to give it a really fair
shake, and that means getting involved with the community and perhaps
contributing where I can. I built a two-column model a couple of years back
using a native neural network implementation (using my own learning
algorithm - somewhere between the Izhikevich model and a simple LIF model,
simplified for speed), and started working on implementing it in CUDA, but
never finished it. I was able to achieve my performance design goals (2,500
neurons total, updated in real time at 8,000 updates/sec) on a quad-core OTS
machine, so I didn't go much further.
http://www.youtube.com/watch?v=OkIHGyIVufE

I may yet get back to it when I get some time, but for the time being I'm
focused on finding the most efficient way to implement larger-scale
structures rather than working at the neuron level.

Dean

On Mon, Nov 11, 2013 at 2:43 PM, Hannu Kettinen <[email protected]> wrote:

> Dean, have you made any progress with this plan of yours?
>
> I am curious whether you are planning to do a binary or a trinary (yes,
> ternary) integration?
> I am toying with the idea of taking the HTM CLA trinary just to be able to
> inject it into my model. But doing that would mean I need to quit my day
> job :)
>
> Just curious what your plans are here and how you plan to implement this?
> With your vast experience as a dev, it will probably be a very interesting
> journey worth sharing.
> Have you looked at parallelism at all? CUDA/OpenCL?
>
> Personally, I am so far just modelling the thalamus, a limited CNS and
> cortex, with the aim of creating a unified system of sorts.
>
> regards,
> Hannu
>
> On 01 Nov 2013, at 18:40, Dean Horak <[email protected]> wrote:
>
> "what additional structures do you plan to add ?"
>
> My plan with the Nengo simulator was to take Spaun as a starting point,
> add the hippocampus first, and then just keep adding more and more until
> the full brain was implemented (a very, very long-term goal).
>
> However, if I shift gears and move to a new platform, I'll probably start
> with exactly the same structures that Spaun implemented (basal ganglia,
> thalamus and some pre-frontal cortices) and then compare its performance
> against Spaun to get a bearing on how much, if any, its capabilities were
> impacted by the change.
>
> "it seems that what it offers from a functional perspective
> is applicable to nearly all brain regions (even brainstem,
> cerebellum and sub-cortical regions)" - are there works that suggest
> this?
>
> Sure, plenty of examples. The nuclei in the basal ganglia (particularly
> the caudate nucleus) are heavily involved in learning and the formation of
> memory. The inferior colliculus in the brainstem, which is involved in 3D
> sound localization, implements conditional learning in order to create a
> topographical map of sound. Almost all motor skill is acquired within the
> cerebellum through conditioned/reinforcement learning, as well as being
> modulated by the motor cortices. The hippocampus (part of the allocortex)
> exhibits a simpler cortical structure (fewer layers) than the neocortex,
> yet it is perhaps the most significant region involved in memory
> formation and reconstruction. It seems to me any of these regions might be
> implemented using an underlying platform such as NuPIC.
>
> On Fri, Nov 1, 2013 at 12:09 PM, Azat <[email protected]> wrote:
>
>> Dean,
>>
>> Thank you for the details - I didn't know about the performance of the
>> Spaun simulator.
>> But what additional structures do you plan to add?
>>
>> "it seems that what it offers from a functional perspective
>> is applicable to nearly all brain regions (even brainstem,
>> cerebellum and sub-cortical regions)" - are there works that suggest
>> this?
>>
>> Azat
>> PS: Your message made me think of an analogy: we don't have to know how
>> the muscles move in a bird's wing as long as we can fly.
>>
>> --------------------------------------------
>> On Fri, 11/1/13, Dean Horak <[email protected]> wrote:
>>
>> Subject: Re: [nupic-dev] Using NuPIC as an engine for WBE
>> To: "NuPIC general mailing list." <[email protected]>
>> Date: Friday, November 1, 2013, 11:49 AM
>>
>> Azat,
>>
>> Yes, as I noted, I've been using the Nengo simulator for my models and
>> could continue to do so. The problem is performance.
>> The current Spaun simulation contains a rudimentary model of the basal
>> ganglia, thalamus and a couple of pre-frontal cortical areas - pretty
>> much the bare-bones components needed to perform the tasks it was
>> designed to perform. Already, with only that basic structure, the Spaun
>> simulation requires 24 GB of RAM (minimum) and is able to simulate about
>> 1 second of real time for every 3 hours of processing time (on a typical
>> quad-core processor). This resource intensity is largely due to the fact
>> that Spaun uses a spiking neural network based on some very
>> computationally expensive models (Izhikevich, Hodgkin-Huxley, LIF).
>>
>> I'm looking to extend these basic structures and include many additional
>> ones, so the performance is only going to get worse. Since I'm looking
>> for something that is at least an order of magnitude or two closer to
>> real time (and I don't have a BlueGene computer at my disposal), there is
>> simply no way I can accomplish my goals using this system.
>>
>> That is why I'm looking for an alternative platform.
>> I'm willing to give up the biological realism of a spiking neural
>> network to gain a dramatic increase in performance, because my conjecture
>> is that the route to AGI lies less in the low-level implementation
>> details of the human brain, and more in the interactions between brain
>> regions honed over billions of years to perform specific functions
>> which, taken together at a holistic level, result in the wide spectrum
>> of abilities we consider to be human intelligence.
>>
>> Jeff,
>>
>> Thanks for your well-considered response.
>> I fully understand the need to make trade-offs wrt biological realism in
>> order to achieve design goals. Indeed, I am facing those same sorts of
>> design decisions regarding my project.
>>
>> As I noted, I've been following Numenta since it was first publicly
>> announced (in fact, you probably still have on file the NDA I seem to
>> recall signing to gain access to the original algorithms). In fact, I
>> largely credit "On Intelligence" with inspiring me (for which I am
>> deeply indebted) to turn for answers to neuroscience, a subject I had
>> previously only casually studied. This, after about 25 years of
>> attacking the AI problem through more traditional AI techniques.
>> However, once I turned to neuroscience, I found a subject so fascinating
>> that it drew me in and has kept me immersed ever since, somewhat to the
>> detriment of my pure AI research.
>>
>> Getting back to the matter at hand, however: I understand that the CLA
>> is rooted in the neuroscience of the neocortex and based on your
>> hypotheses of HTM. My objectives are quite different, and my perspective
>> on how intelligence is implemented in the brain may not precisely align
>> with yours (I view the neocortex as significant, but only one part of
>> the picture, and cortical columns as far less homogeneous than they
>> might at first appear).
>>
>> Still, for my purposes, I'm looking for a platform on which to construct
>> my models of functional brain regions, which internally will require
>> features that a system like NuPIC exhibits (i.e. pattern recognition,
>> unsupervised learning, prediction).
>>
>> The internal (local) networks within each region will be developed in
>> such a way as to exhibit behavior analogous to that of its biological
>> counterpart, and the connectivity between regions will be guided by data
>> from the neuroscience literature.
>>
>> So (finally), while I understand the CLA is an implementation of an
>> interpretation of cortical processing, it seems that what it offers from
>> a functional perspective is applicable to nearly all brain regions (even
>> brainstem, cerebellum and sub-cortical regions). The main reason for the
>> original question was to test the waters, to see if there was some
>> architectural limitation I had overlooked that might prevent me from
>> pursuing this project using NuPIC (for example, a limitation such as not
>> allowing more than one network per process).
>>
>> My apologies for the length of the message.
>>
>> Thanks,
>> Dean
_______________________________________________
nupic mailing list
[email protected]
http://lists.numenta.org/mailman/listinfo/nupic_lists.numenta.org
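[Editor's aside: Dean mentions a neuron model "somewhere between the Izhikevich model and a simple LIF model, simplified for speed". As a purely illustrative sketch (not Dean's actual code), a minimal leaky integrate-and-fire update with forward-Euler integration might look like the following; all parameter values are hypothetical, and the 0.125 ms timestep is just what 8,000 updates/sec would correspond to.]

```python
# Hypothetical sketch of a simplified leaky integrate-and-fire (LIF) neuron,
# in the spirit of the model described above. Membrane dynamics:
#   dV/dt = (-(V - V_rest) + R * I) / tau
# integrated with a forward-Euler step of size DT.

V_REST = -65.0    # resting potential (mV) -- illustrative value
V_THRESH = -50.0  # spike threshold (mV)
V_RESET = -70.0   # post-spike reset potential (mV)
TAU = 20.0        # membrane time constant (ms)
R = 1.0           # membrane resistance (arbitrary units)
DT = 0.125        # timestep (ms); 8,000 updates/sec = 0.125 ms per update

def lif_step(v, i_in):
    """Advance one neuron by one timestep; return (new_v, spiked)."""
    v += DT * (-(v - V_REST) + R * i_in) / TAU
    if v >= V_THRESH:
        # Threshold crossed: emit a spike and reset the membrane potential.
        return V_RESET, True
    return v, False

def run(n_steps, i_in):
    """Drive a single neuron with constant input; return spike count."""
    v, spikes = V_REST, 0
    for _ in range(n_steps):
        v, spiked = lif_step(v, i_in)
        if spiked:
            spikes += 1
    return spikes
```

With zero input the neuron sits at rest; with a strong constant input it integrates up to threshold and fires repeatedly. A real implementation at Dean's scale would vectorize this update over all 2,500 neurons per timestep rather than looping per neuron.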
