On Mon, Jan 8, 2024, 1:29 PM Matt Mahoney <mattmahone...@gmail.com> wrote:

> Colin, it's been a while. How is your consciousness research going? Who
> would have thought 20 years ago that AI would turn out to be nothing more
> than text prediction using neural networks on massive data sets and
> computing power?
>

Indeed. The great dynamical-system attractor, the one that deflects every
attempt at non-zero G in AGI and sets G to 0.0 in an endless zoo of fungible
narrow AI (arguably neither A nor I), essentially owns the entire enterprise.

All the players are circling the G=0.0 plughole of that failure attractor.
The original game, as set out back in 1956, is over. As science, it's now a
Feynmanian cargo cult where "all you need is more compute" rules all. The
insanity is now industrialised.

Meanwhile, I have made progress in what is now a lab in the Department of
Anatomy & Physiology at the University of Melbourne.

My colleague Peter Kitchener and I are striving to create a startup, and we
hope to have gone from early seed to proof-of-concept stage by the end of
2024. The goal is a chip technology that learns like natural nervous tissue,
for use as an artificial nervous system in robots, starting at very, very
small but non-zero G (sub-neuronal!).

We've built a 20,000x scale version of what amounts to the 'transistor' in
the new chips. It's effectively a small patch of membrane with one fat ion
channel in it. It took 18 intense months to learn how to build it. From here
we do what amounts to the "logic gate" (rough analogy only) of the chip.

As discussed endlessly here, the (eventually nano-fabbed) chips will have
no software, no models, and are not general-purpose computers. The chips
will, however, be a hybrid analog/digital tech, and the robot brains will
have an EEG/MEG-like nature. That, too, has been discussed here already.
It turns out that material, size and energy consumption will be pretty close
to 1:1 with brain tissue. It's effectively inorganic brain "tissue" that
lacks all the biological overheads. And it's intrinsically safe. It has
none of the intrinsic risk factors endlessly argued in the literature. All
it has is the very human greed/ignorance/laziness risk profile that dogs
all the tech we've ever made.

Now that the department can see it's real, we're seeing doors open around
us. Things will get interesting this year.

The project is not directed at consciousness at all.

However, later on, with mature devices, someone could use them to
scientifically examine questions aimed at the 1PP (first-person perspective)
of simple robots based on the chips. A proper science, one that should have
started decades ago, can then begin.

This year, the world is going to have to decide which of two things is an
"artificial neuron":
1) an abstract software model of a property of a natural neuron (analogy:
flight simulator; see the toy sketch after this list), or
2) a novel chip containing physics that literally is the property as it is
found in a natural neuron (analogy: actual flight).
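To make 1) concrete, here's the kind of thing I mean: a toy leaky
integrate-and-fire neuron in Python. Purely illustrative; the parameters and
the choice of Python are arbitrary textbook-style picks of mine, nothing to
do with our chips.

# Toy example of 1): an abstract numerical model of one neuronal property
# (membrane voltage integration). Illustrative only; all parameters are
# arbitrary textbook-style values, not anything from our lab.
import numpy as np

def lif_neuron(input_current, dt=1e-4, tau=0.02, v_rest=-0.065,
               v_thresh=-0.050, v_reset=-0.065, r_m=1e7):
    """Leaky integrate-and-fire: a simulation of the property, not the physics."""
    v = v_rest
    spike_times = []
    for step, i_in in enumerate(input_current):
        v += (-(v - v_rest) + r_m * i_in) * (dt / tau)   # leaky integration
        if v >= v_thresh:
            spike_times.append(step * dt)                # the "spike" is bookkeeping
            v = v_reset
    return spike_times

# A constant 2 nA drive for 100 ms yields a regular train of model "spikes".
print(lif_neuron(np.full(1000, 2e-9)))

Run that and you get numbers describing spikes. Nothing in the machine is a
neuron or has any of a neuron's physics. That is the whole point of the
distinction.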

Guess which one wins? 1) has never actually happened. And when we're done,
everyone will understand that brutal fact and why it is so.

Can't wait to do it!

While all that's happening, one of the collateral casualties will be the
great joke of substrate independence. It will vanish in a brutal physics
excision.

This baseless obsession with a broken science will at last end this year.
It was all based on the assumption that a hypothesis nobody can cite had
been empirically proved, a hypothesis that can't be tested conclusively with
computers alone.

For the first time ever, a fully functional empirical science of AI
(however poorly or well it functions in our lab!) will be possible, and
computer scientists all over the world are going to learn (a) just how far
out on the breezy limb of that science malfunction they are, and (b) that
the real AGI game is a physics/neuroscience collaboration in which a
general-purpose computer is a design tool for a product, not the actual
product. Just like for aircraft.

The irony is that the uncitable, unproved "hypothesis" driving this
insanity is likely to be formally true!

The truth of the uncitable hypothesis is, I predict, a practical falsehood,
because it is intractable in practice (a formal NP-style exponential
explosion). That leads to a Pyrrhic victory in which the compute needed to
do G >= 1 AGI is so vast it'll kill us all in a resource death, while
demanding we already know everything. Meanwhile, the very thing needed to
discover the folly, "AI done without computers, assuming the great
uncitable hypothesis is false", is utterly invisible in the cargo cult
vacuum that supplanted it.

That invisibility, and its cargo cult, ends this year, a year in which AGI
and "AI" part company in a normalized (70 years overdue!) science of it.

I know I'm too old to see the process through to fruition. But I'm sure as
hell gonna float its boat and fill it full of smart kids who will.

Thanks for asking, hope all's well for you and yours. See you on the other
side (in the literature). :-) Back to it! I'm done here!

Cheers,
Colin

> On Sat, Jan 6, 2024, 8:21 PM Colin Hales <col.ha...@gmail.com> wrote:
>
>> Test.
>> Happy New year!
>> Cheers
>> Colin
>>
