On Sat, Jun 19, 2021, 5:55 AM John Clark <johnkcl...@gmail.com> wrote:

> On Fri, Jun 18, 2021 at 8:17 PM Jason Resch <jasonre...@gmail.com> wrote:
>
> *>Deepmind has succeeded in building general purposes learning algorithms.
>> Intelligence is mostly a solved problem,*
>>
>
> I'm enormously impressed with Deepmind and I'm an optimist regarding AI,
> but I'm not quite that optimistic.
>

Are you familiar with their Agent 57? It is a single algorithm that mastered
all 57 Atari games at a superhuman level, with no outside direction, no
specification of the rules, and whose only input was the "TV screen" of the
game.


If intelligence was a solved problem the world would change beyond all
> recognition and we'd be smack in the middle of the Singularity, and we're
> obviously not because at least to some degree future human events are still
> somewhat predictable.
>

The algorithms are known, but the computational power is not there yet. Our
top supercomputer only recently matched the estimated computing power of
one human brain.
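As a rough back-of-envelope check on that claim (the figures below are commonly cited estimates, not settled numbers, and brain estimates in particular span several orders of magnitude):

```python
# Rough comparison of supercomputer vs. human-brain compute.
# Both figures are loose estimates: the brain number is a commonly
# cited guess (~10^16 synaptic operations/second; published estimates
# range from roughly 10^13 to 10^18), and the machine number is
# Fugaku's 2020 Top500 benchmark result (~442 petaflops).
brain_ops_per_sec = 1e16
fugaku_flops = 4.42e17

ratio = fugaku_flops / brain_ops_per_sec
print(f"Fugaku / one brain: ~{ratio:.0f}x")
```

On these (very uncertain) numbers the top machine only crossed the brain's estimated throughput within the last few years, which is what "only recently" refers to above.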

Also, because of chaos, predicting the future to any degree of accuracy
requires exponentially more information about the system's present state
for each additional increment of time simulated, and this does not even
factor in quantum uncertainty, nor uncertainty about oneself and one's own
mind. Being unable to predict the future isn't a good definition of the
singularity, because we already can't predict it. You might say the
singularity is when most decisions are no longer made by biological
intelligences; again, arguably we have already reached that point. I prefer
the definition of the moment we have a single nonbiological intelligence
that exceeds the intelligence of any human in any domain. We are getting
very close to that point. That may not be the point of an intelligence
explosion, but it means one cannot be far off.
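The exponential blow-up from chaos shows up in even the simplest toy system. The sketch below assumes nothing about brains; it just uses the logistic map, a standard textbook example of a chaotic system, to show that two trajectories starting a mere 10^-12 apart become macroscopically different within a few dozen steps, because the gap roughly doubles each iteration:

```python
# Sensitive dependence in the chaotic logistic map x' = 4x(1 - x).
# The gap between two nearby trajectories grows on average like 2^n
# (Lyapunov exponent ln 2), so each extra step of prediction costs
# about one additional bit of precision in the initial conditions.
def logistic(x):
    return 4.0 * x * (1.0 - x)

x, y = 0.4, 0.4 + 1e-12   # identical except for a 10^-12 perturbation
diverged_at = None
for step in range(1, 101):
    x, y = logistic(x), logistic(y)
    if abs(x - y) > 0.1:   # macroscopic disagreement
        diverged_at = step
        break

print(f"trajectories diverged after {diverged_at} steps")
```

Since the initial error is about 2^-40, it takes on the order of 35-40 doublings to become visible; predicting twice as far ahead would demand twice as many bits of information about the starting state.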



> > *But questions of consciousness are no less important nor less
>> pressing:*
>> *Is this uploaded brain conscious or a zombie?*
>>
>
> I don't know, are you conscious or a zombie?
>

There may be valid logical arguments that disprove the consistency of
zombies. For example, can something "know without knowing"? It seems not.
So how does a zombie "know" where to place its hand to catch a ball, if it
doesn't "know" what it sees?

A single result on the possibility or impossibility of zombies would enable
massive progress in theories of consciousness.

For example, we could rule out many theories and narrow down on those that
accept "organizational invariance" as Chalmers defines it. This is the
principle that if one entity is conscious, and another entity is
organizationally and functionally equivalent, preserving all the parts and
relationships among its parts, then that second entity must be equivalently
conscious to the first.



> > *Can (bacterium, protists, plants, jellyfish, worms, clams, insects,
>> spiders, crabs, snakes, mice, apes, humans) suffer?*
>>
>
> I don't know, I know I can suffer, can you?
>

I can tell you that I can. You could verify via functional brain scans that
I wasn't preprogrammed like an Eliza bot to say I can. You could trace the
neural firings in my brain to uncover the origin of my belief that I can
suffer, and I could do the same for you.




>
>> > *Are these robot slaves conscious?*
>>
>
> Are you conscious?
>

Could a zombie write a book like Chalmers's "The Conscious Mind"? Some
have proposed writing philosophical texts on the philosophy of mind as a
kind of super-Turing test for establishing consciousness.

When GPT-X writes new philosophical treatises on topics of consciousness
and when it insists it is conscious, and we trace the origins of this
statement to a tangled self-reference loop in its processing, what are we
to conclude? Would it become immoral to turn it off at that point?


>
>> * > Do they have likes or dislikes that we repress?*
>>
>
> What's with this "we" business?
>


I mean humanity.


> > *When does a developing human become conscious?*
>>
>
> Other than in my case does any developing human EVER become conscious?
>
> > *Is that person in a coma or locked-in?*
>>
>
> I don't know, are you locked in?
>

I can move, so no. Being locked in means you are conscious but lack any
control over your body.


> > *Does this artificial retina/visual cortex provide the same visual
>> experiences?*
>>
>
> The same as what?
>

A biological retina and visual cortex.


>
>> > *Does this particular anesthetic block consciousness or merely memory
>> formation?*
>>
>
> Did the person have consciousness even before the administration of the
> anesthetic?
>

Let's assume so for the purposes of the question. Wouldn't you prefer the
anesthetic that knocks you out vs. the one that only blocks memory
formation? Wouldn't a theory of consciousness be valuable here to establish
which is which?


>
>> *> These questions remain unsettled*
>>
>
> Yes, and these questions will remain unsettled till the end of time, so
> even if time is infinite it could be better spent pondering other questions
> that actually have answers.
>


You appear to operate according to a "mysterian" view of consciousness,
which holds that we can never know. Several philosophers of mind have
expressed this view, such as, I believe, Thomas Nagel.

But just because we do not know now does not mean we will never know. You
could have been a mysterian about how life reproduces itself, or why the
stars shine, until a few hundred years ago, but you would have been proven
wrong. Why do you think these questions are intractable?



>
>> *>If none of these questions interest you, perhaps this one will: Is
>> consciousness inherent to any intelligent process?*
>
>
> I have no proof and never will have any, however I must assume that the
> above is true because I simply could not function if I really believed that
> solipsism was correct and I was the only conscious being in the universe.
> Therefore I take it as an axiom that intelligent behavior implies
> consciousness.
>

This itself is a theory of consciousness. You must have some reason to
believe it, even if you cannot yet prove it.

It has many consequences, such as the unimportance of the material
substrate. That alone rules out Searle's biological naturalism.

Progress is possible, especially if one performs consciousness experiments
on oneself (e.g., by trying a neural implant).

Jason

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/CA%2BBCJUgtG1RXZH-bo4Jp-7E83ufk42aPP17%3DzyXL3GQ%2BKx7Lmg%40mail.gmail.com.