Ben,

On Wed, Aug 29, 2012 at 4:55 PM, Ben Goertzel <[email protected]> wrote:
> >>>> Picking one particular tiny illustrative detail of this - my
> >>>> realization that neurons MUST communicate derivatives like dP/dt
> >>>> rather than straight probabilities, to be capable of temporal
> >>>> learning without horrendous workarounds. I thoroughly explained it
> >>>> on this forum, and no one objected to any of it, yet it has
> >>>> changed nothing.
> >>>
> >>> To those of us not working on neural net models, this sort of
> >>> insight is kinda irrelevant...
> >>
> >> I'm not so sure. Don't you use Bayesian methods to compute
> >> probabilities that change with circumstances? If so, by converting
> >> the inputs to dP/dt notation, computation throughout remains in
> >> dP/dt. At the outputs, you will need to integrate to get back to P.
> >> The net result is that you no longer need past memory for temporal
> >> learning - all without a single neuron-equivalent in your code.
>
> Certainly one can do probabilistic reasoning about rates of change of
> probabilities. But can you give an example to illustrate why this is a
> profoundly better approach?

Because temporal learning, instead of being a challenge requiring some
sort of "workaround", becomes the normal mode of operation that would
require a workaround to avoid doing.

> >>> I'm curious: How would you modify, for instance, the Izhikevich
> >>> neuron equations
> >>
> >> I wouldn't. This is SO simple, it is almost hard to even see it.
> >> Converting to/from dP/dt is something you do at the inputs and
> >> outputs. Within the NN/Bayesian "network" computations remain the
> >> SAME, only now things become naturally capable of temporal learning.
>
> So, consider a standard firing rate model of a neuron. There is a lot
> of evidence that, in various cases, the firing rate of certain neurons
> can be normalized into a probability value, representing a probability
> of some relevant event observed by the organism.
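To make the to/from-dP/dt conversion concrete, here is a minimal numpy sketch. Everything in it is invented for illustration - the "network" is just a weighted-sum evidence combiner, and the weights and input signals are made up. It assumes the interior computation is linear, so that differentiation commutes with it: differentiating at the inputs and integrating once at the outputs reproduces the ordinary probability-mode result, while the interior only ever sees rates of change.

```python
import numpy as np

# Hypothetical linear evidence combiner standing in for the "network";
# the weights and input signals below are invented for illustration.
W = np.array([0.6, 0.4])

def combine(x):
    # The SAME computation is used in both modes: a weighted sum.
    return W @ x

t = np.linspace(0.0, 1.0, 1001)
dt = t[1] - t[0]

# Two time-varying input probability streams (made-up stimuli).
P = np.vstack([0.5 + 0.4 * np.sin(2 * np.pi * t),
               0.5 + 0.3 * t])

out_P = combine(P)               # ordinary mode: feed probabilities in

dP = np.gradient(P, dt, axis=1)  # convert the INPUTS to dP/dt
out_dP = combine(dP)             # the SAME computation, now in dP/dt

# Integrate once at the OUTPUT (trapezoid rule) to get back to P;
# only the single initial value out_P[0] is needed to anchor it.
recovered = out_P[0] + np.concatenate(
    ([0.0], np.cumsum((out_dP[1:] + out_dP[:-1]) / 2.0) * dt))

# An unchanging stimulus yields an all-zero derivative stream: in
# dP/dt mode, "when things stop moving, everything shuts down".
static_dP = np.gradient(np.full_like(t, 0.7), dt)
```

Note that the only state the integration step carries is one initial value per output; the moment-to-moment traffic inside the network is pure rate-of-change information, which is what makes temporal structure directly visible to it without extra memory.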
You must look CAREFULLY at the supporting experiments, as experimenters
routinely use moving/changing stimuli without mentioning it in their
papers. Apparently, all neuroscientists somehow managed to avoid taking
freshman calculus.

> Are you suggesting that, in other cases, the firing rate can
> meaningfully be normalized into a number representing the rate of
> change of some probability?

Yes.

> Can you point to some neuroscience experiments validating this
> hypothesis?

Much of the visual system works this way. When things stop moving,
everything shuts down.

> Or if not, can you suggest a specific experiment that you think could
> be done to validate/refute the hypothesis?

If you actually watch someone doing visual system experiments, you will
see that they routinely use moving or flashing displays at various
locations to identify the receptive fields of neurons, etc. Without
movement, everything quickly goes quiet. There will doubtless be
exceptions to this, as neurons are very adept at differentiating and
integrating across synapses, so shifting between notations is easy for
them.

> >>> Maybe I'll be able to do it in a few years time, in HK or China or
> >>> Singapore, we'll see...
> >>
> >> I wasn't really expecting a return message to report of China 8-:D>
>
> Oh... well, I am living in Hong Kong now,

I have a good friend who is now living in Shenzhen, a sort of suburb of
Hong Kong.

> and that's where the bulk of the OpenCog project is now situated.
>
> AGI researcher Wlodek Duch is now spending much of his time in
> Singapore...
>
> Roboticist David Hanson is splitting most of his time btw Singapore,
> Guangzhou (China) and Hong Kong
>
> The AGI-13 conference will be in Beijing...
>
> Seoul National University recently started a grad school of Convergence
> Technology (nano-bio-info-cogno)

Perhaps they might consider adding some disciplines to round this out?
I suspect that the "secret" to selling this would be to exhibit what is
available at the end of the road:

1. Generally intelligent computers by several potential mechanisms. Like
the Manhattan Project, which produced three VERY DIFFERENT atomic bombs,
this multi-pronged approach might well produce VERY different AGIs that
work VERY differently, e.g. direct simulation vs. OpenCog-like
approaches.

2. MUCH better than arguing now about whether things like
uploading/downloading will ever work, this should be left on the plate
as just one of the many potential BIG payoffs. People buy the sizzle,
not the steak. BTW, uploading/downloading, if/when it works, is clearly
worth FAR more money than all other potential applications of this sort
of research, probably including AGI, combined. Even the unproved
near-term prospect of such a thing would turn the world economy upside
down.

3. Cures for many diseases, once their operation can be directly
observed.

4. Advancing math by 1,000 years or so, in one gigantic jump.

> So, if I were going to try to start an interdisciplinary AGI/cognition
> research center somewhere, it would probably be in Asia, yeah...

That was my thought. They have money, lots of people, and really need
the industry that such a thing would likely create. Meanwhile, research
in America is stone cold dead, and is likely to stay that way for a
looooong time, perhaps forever.

I sense a long plane flight in my future. I wonder what language I
should be learning to speak?

Steve

-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-c97d2393
Powered by Listbox: http://www.listbox.com
