Hi Mike and Folks,

I had a long private conversation on Zoom with Thomas Nail and have seen two of his talks. He did a deep dive, including all the supplementaries, on my neuromimetic chip paper:
https://doi.org/10.36227/techrxiv.13298750.v4

As a result he's basically on board with the ideas. That doesn't mean I'm right, but it's a sign of change. That's what's behind the salon text. A toehold on a new future.

> He has a proposed hardware device/architecture, which he believes does a
> better job of emulating brain physics than a traditional digital computer.
> But I don't know what algorithm he is going to run on it. And I can't
> remember seeing him hypothesize a *mechanism* by which the unique physics of
> his device will affect the output, or even describing (in specifics) how he
> expects the output to differ from the output a digital computer would
> produce when running the same algorithm.

OK. I am going to shout. Ready?

I AM NOT EMULATING BRAIN PHYSICS.

There. That feels better! :-). I am REPLICATING brain physics. What does it take to get this across?

To conclusively, scientifically, empirically prove that you can 'emulate brain physics' with a general-purpose computer, you have to test the null hypothesis. That means building the hardware that decisively tests the hypothesis that you *can't* emulate brain physics. That hardware is not a general-purpose computer (an emulation!). You need to compare and contrast the emulation *with something that is not an emulation.* There is no brain physics in a general-purpose computer, including the physics of a neuromorphic computer. That is the actual problem. Half the test subjects are missing. I am building the other half of a test regime that has never happened (see the sketch at the end of this message for what that two-arm comparison reduces to).

Testing based on assuming you can emulate brain physics fails in exactly the way 70 years of testing has failed: it tells you *nothing about what went wrong*. You end up with AI winters and springs and winters and springs and ... now we're about to get the post-deep-learning winter, and it is all going to keep going forever until computer science finally figures out what actual empirical work looks like.

How many times have I had to dance around this and get nowhere? Please read the above paper. This is about the broken structure of a deformed science of 'AI' (deformed since birth, and now 70 years old) that does not know it is deformed.

I have spent thousands of hours describing in detail how I am not using software, models or general-purpose computers to do AGI. Just like brains don't. Brain physics has already been proved capable of creating natural general intelligence. I do not have to justify the prospect that an inorganic artificial version of it can be equally 'artificially generally intelligent'. This is not computer science. It is neuroscience. Empirical neuroscience.

The test rig is on the floor next to me. The first little bit of replicated membrane is sitting in it. It's all about brain-mimetic EM fields. Not abstract models of EM fields. The actual EM field physics. It's about teaching the science of AI what a real artificial inorganic version of natural brain signalling physics actually looks like, at a *million times scale* ... so I can beat computer science over the head with it in the literature. Maybe then computer scientists will finally understand what actual empirical work on natural general intelligence, done with an artificial equivalent to the natural physics, looks like. No more arguing. Empirical work only.

I have been at this for 20 years. Long enough to get real grumpy about it. And old. :-) 70 years of this era is enough.
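For concreteness, a minimal sketch only (not the actual rig or protocol; the function name, the chosen observable, and the statistical test below are all placeholder assumptions): once both arms of the comparison exist and have been driven by the same stimulus schedule, the read-out reduces to asking whether the measurements from the physical replica and from the digital emulation are statistically distinguishable.

    import numpy as np
    from scipy import stats

    def two_arm_comparison(replica_obs, emulation_obs, alpha=0.05):
        """Compare one observable (e.g. a measured field amplitude) recorded from
        a physical replication rig and from a digital emulation under the same
        stimulus schedule. Placeholder analysis, illustrative only."""
        replica_obs = np.asarray(replica_obs, dtype=float)
        emulation_obs = np.asarray(emulation_obs, dtype=float)
        # Null hypothesis: the emulation and the replica observables are indistinguishable.
        t_stat, p_value = stats.ttest_ind(replica_obs, emulation_obs, equal_var=False)
        return {"t": t_stat, "p": p_value, "distinguishable": p_value < alpha}

The only point of the sketch is its structure: without the second, non-emulated arm there is nothing to pass as the first argument, and that missing arm is exactly the gap described above.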
cheers,
colin

On Wed, May 5, 2021 at 4:26 AM Mike Archbold <jazzbo...@gmail.com> wrote:

> On 5/4/21, WriterOfMinds <jennifer.hane....@gmail.com> wrote:
> > On Tuesday, May 04, 2021, at 11:31 AM, Mike Archbold wrote:
> >> Colin's methods are first and foremost scientific. You can't fault that.
> >
> > The scientific methods by which Colin hopes to test his claims remain
> > pretty cloudy to me.
> >
> > He has a proposed hardware device/architecture, which he believes does a
> > better job of emulating brain physics than a traditional digital computer.
> > But I don't know what algorithm he is going to run on it. And I can't
> > remember seeing him hypothesize a *mechanism* by which the unique physics of
> > his device will affect the output, or even describing (in specifics) how he
> > expects the output to differ from the output a digital computer would
> > produce when running the same algorithm.
> >
> > So what falsifiable assumption is he subjecting to experiment?
>
> Hopefully Colin will be along soon to answer... but in general, for
> the last 10 years I've been reading him emphasizing "science, science,
> science, science"!