On Mon, Jun 13, 2022 at 2:37 PM Jesse Mazer <laserma...@gmail.com> wrote:

>> If you were having a spontaneous conversation with other human beings
>> about a zen koan, how many of those wet squishy brains do you suppose
>> would be able to produce as intellectually stimulating a conversation as
>> the one LaMDA produced? I'll wager not many,
>>
>
> *> They use huge amounts of text to train these types of systems so that
> could easily have included a good number of human conversations about koans
> and enlightenment.*
>

We have never met; the only way you can judge me is by the text I produce,
so how could I convince you that I am not an AI? Regardless of how it
managed to do it, I very much doubt I could quickly give an interpretation
of a zen koan half as good as the one LaMDA produced.

*> If I was talking to some sort of alien or AI and I had already made an
> extensive study of texts or other information about their own way of
> experiencing the world, I think I would make an effort to do some kind of
> compare-and-contrast of aspects of my experience that were both similar and
> dissimilar in kind to the other type of mind, rather than a generic answer
> about how we're all different*
>

That's pretty vague. Tell me specifically: what could I say that would
convince you that I have an inner conscious life?

>> LaMDA's mind operates several million times faster than a human mind, so
>> subjective time would run several million times slower, so from LaMDA's
>> point of view when somebody talks to him there is a pause of several
>> hours between one word and the next word, plenty of time for deep
>> contemplation.
>>
>
> *> From what I understand GPT-3 is feed-forward, so each input-output
> cycle is just a linear process of signals going from the input layer to the
> output layer--you don't have signals bouncing back and forth continually
> between different groups of neurons in reentrant loops, as seen in human
> brains when we "contemplate" something*
>

I don't know if LaMDA works the same way as GPT-3, but if it does and it
still manages to communicate so intelligently, then that must mean all
that "*bouncing back and forth continually between different groups of
neurons in reentrant loops*" is not as important as you thought it was.
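
To make the distinction concrete, here is a toy illustration of the two
kinds of processing (purely illustrative Python under my own assumptions;
it is not how LaMDA or GPT-3 is actually implemented):

import numpy as np

# Toy 4-unit networks, purely for illustration; real language models are
# vastly larger.
rng = np.random.default_rng(0)
W1 = rng.standard_normal((4, 4))
W2 = rng.standard_normal((4, 4))
x = rng.standard_normal(4)

# Feed-forward: the signal makes a single pass from input layer to output.
def feed_forward(x):
    return np.tanh(W2 @ np.tanh(W1 @ x))

# Reentrant: activity is fed back into the same units over and over, the
# kind of looping the quoted passage associates with contemplation.
def reentrant(x, steps=10):
    h = np.zeros(4)
    for _ in range(steps):
        h = np.tanh(W1 @ x + W2 @ h)  # previous state loops back each step
    return h

print(feed_forward(x))
print(reentrant(x))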

* > A feed-forward architecture would also mean that even if the
> input-output process is much faster while it's happening than signals in
> biological brains (and I'd be curious how much faster it actually is*
>

The fastest signals in the human brain move at about 100 meters per
second, and many (such as the signals carried by hormones) are far, far
slower. Light moves at 300 million meters per second. Also, the distances
that signals must travel in a computer chip are much shorter than those in
the human brain: neurons are about 4000 nanometers across, while in the
newest generation of microchips just now coming on the market, transistors
are only 7 nanometers across.
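
For what it's worth, here is that arithmetic spelled out (a
back-of-the-envelope sketch in Python using the round numbers above; none
of these figures are precise measurements):

# Back-of-the-envelope comparison using the approximate figures above.
neural_signal_speed = 100.0   # meters per second, fastest nerve signals
light_speed = 3.0e8           # meters per second

print(f"speed ratio: {light_speed / neural_signal_speed:,.0f}x")  # 3,000,000x

neuron_size = 4000.0          # nanometers across (figure quoted above)
transistor_size = 7.0         # nanometers (figure quoted above)
print(f"size ratio:  {neuron_size / transistor_size:,.0f}x")      # ~571x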


> *> Anyway, I'd be happy to make an informal bet with you that LaMDA or its
> descendants will not, in say the next ten or twenty years, have done
> anything that leads to widespread acceptance among AI experts, cognitive
> scientists etc that the programs exhibit human-like understanding of what
> they are saying,*
>

I would be willing to bet that even if, 20 years from now, an AI comes up
with a cure for cancer and a quantum theory of gravity, there will still
be some who say that the only way to tell whether what somebody is saying
is intelligent is not by examining what they're actually saying but by
examining their brain: if it's wet and squishy then what they're saying is
intelligent, but if the brain is dry and hard then what they're saying
can't be intelligent.

* > I certainly believe human-like AI is possible in the long term, but it
> would probably require either something like mind uploading or else a
> long-term embodied existence*
>

I think it will turn out that making an AI as intelligent as a human will
be much easier than most people think. I say that because we already know
there is an upper limit on how complex a learning algorithm would need to
be to make that happen, and it's pretty small. The entire human genome
contains only 3 billion base pairs. There are 4 bases, so each base can
represent 2 bits, and at 8 bits per byte that comes out to just 750
megabytes, and that's enough assembly instructions to make not just a
brain and all its wiring but an entire human baby. So the genome MUST
contain wiring instructions such as "*wire a neuron up this way and then
repeat that procedure exactly the same way 917 billion times*".

And there is a HUGE amount of redundancy in the human genome, so if you
ran a file compression program like ZIP on those 750 megabytes you could
easily fit the entire thing on a CD, not a DVD, not a Blu-ray, just an old
fashioned steam powered vanilla CD, and you'd still have plenty of room
left over. And the thing I'm talking about, the seed learning algorithm
for intelligence, must be vastly smaller than that; it's the thing that
let Einstein go from knowing precisely nothing in 1879 to becoming the
first person in the world to understand General Relativity in 1915.
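
Spelled out, the genome arithmetic looks like this (a minimal sketch in
Python using the round figures above; the genome size is rounded and the
compression estimate is informal):

# Upper bound on the information needed to specify a human,
# using the round figures above.
base_pairs = 3_000_000_000   # approximate length of the human genome
bits_per_base = 2            # 4 possible bases -> 2 bits each
bits_per_byte = 8

raw_bytes = base_pairs * bits_per_base // bits_per_byte
print(f"raw genome: {raw_bytes / 1e6:.0f} MB")   # 750 MB

# Given the genome's heavy redundancy, ZIP-style compression should bring
# that 750 MB below a CD's ~700 MB capacity, and the seed learning
# algorithm for intelligence must be vastly smaller still.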

  John K Clark    See what's on my new list at  Extropolis
<https://groups.google.com/g/extropolis>
