On Jan 16, 10:26 pm, Jason Resch <jasonre...@gmail.com> wrote:
> On Mon, Jan 16, 2012 at 12:57 PM, Craig Weinberg <whatsons...@gmail.com>wrote:
>
> > On Jan 16, 12:15 pm, Jason Resch <jasonre...@gmail.com> wrote:
> > > Craig,
>
> > > Do you have an opinion regarding the possibility of Strong AI, and the
> > > other questions I posed in my earlier post?
>
> > Sorry Jason, I didn't see your comment earlier.
>
> > On Jan 15, 2:45 am, Jason Resch <jasonre...@gmail.com> wrote:
> > > On Sat, Jan 14, 2012 at 9:39 PM, Craig Weinberg <whatsons...@gmail.com
> > >wrote:
>
> > > > > > Thought I'd throw this out there. If computationalism argues that
> > > > > > zombies can't exist,
>
> > > > > I think the two ideas "zombies are impossible" and computationalism
> > are
> > > > > independent.  Where you might say they are related is that a
> > disbelief in
> > > > > zombies yields a strong argument for computationalism.
>
> > > > I don't think that it's possible to say that any two ideas 'are'
> > > > independent from each other.
>
> > > Okay.  Perhaps 'independent' was not an ideal term, but computationalism
> > is
> > > at least not dependent on an argument against zombies, as far as I am
> > aware.
>
> > What computationalism does depend on, though, is the same view of
> > consciousness that zombies would disqualify.
>
> > > > All ideas can be related through semantic
> > > > association, however distant. As for your point, though, I of course
> > > > see the opposite relation: admitting even the possibility of zombies
> > > > suggests computationalism is founded on illusion, while a disbelief
> > > > in zombies gives no more support for computationalism than it does
> > > > for materialism or panpsychism.
>
> > > If one accepts that zombies are impossible, then to reject
> > computationalism
> > > requires also rejecting the possibility of Strong AI (
> >https://secure.wikimedia.org/wikipedia/en/wiki/Strong_AI).
>
> > What I'm saying is that if one accepts that zombies are impossible,
> > then to accept computationalism requires accepting that *all* AI is
> > strong already.
>
> Strong AI is an AI capable of any task that a human is capable of.  I am
> not aware of any AI that fits this definition.

What I'm saying, though, is that computationalism implies that whenever
an AI performs a task a human can also perform, the two are the same.
If an AI can print the letters 'y-e-s', then it must be no different
from a person answering yes. On that view, all AI is already strong,
just incomplete.

>
> > > > > > therefore anything that we cannot distinguish
> > > > > > from a conscious person must be conscious, that also means that it
> > is
> > > > > > impossible to create something that acts like a person which is
> > not a
> > > > > > person. Zombies are not Turing emulable.
>
> > > > > I think there is a subtle difference in meaning between "it is
> > impossible
> > > > > to create something that acts like a person which is not a person"
> > and
> > > > > saying "Zombies are not Turing emulable".  It is important to
> > remember
> > > > that
> > > > > the non-possibility of zombies doesn't imply a particular person or
> > thing
> > > > > cannot be emulated, rather it means there is a particular
> > consequence of
> > > > > certain Turing emulations which is unavoidable, namely the
> > > > > consciousness/mind/person.
>
> > > > That's true, in the sense that emulable can only refer to a specific
> > > > natural and real process being emulated rather than a fictional one.
> > > > You have a valid point that the word emulable isn't the best term, but
> > > > it's a red herring since the point I was making is that it would not
> > > > be possible to avoid creating sentience in any sufficiently
> > > > sophisticated cartoon, sculpture, or graphic representation of a
> > > > person. Call it emulation, simulation, synthesis, whatever, the result
> > > > is the same.
>
> > > I think you and I have different mental models for what is entailed by
> > > "emulation, simulation, synthesis".  Cartoons, sculptures, recordings,
> > > projections, and so on, don't necessarily compute anything (or at least,
> > > what they might depict as being computed can have little or no relation
> > to
> > > what is actually computed by said cartoon, sculpture, recording,
> > > projection...).  For actual computation you need counterfactual
> > > conditions.
> > > A cartoon depicting an AND gate is not required to behave as a genuine
> > AND
> > > gate would, and flashing a few frames depicting what such an AND gate
> > might
> > > do is not equivalent to the logical decision of an AND gate.
>
> > I understand what you think I mean, but you're strawmanning my point.
> > An AND gate is a generalizable concept. We know that. Its logic can
> > be enacted in many (but not all) different physical forms. If we
> > built the Lego AND mechanism seen here:
> >http://goldfish.ikaruga.co.uk/andnor.html#
>
> This page did not load for me.

Weird. Can you see a pic from it? 
http://goldfish.ikaruga.co.uk/legopics/newand11.jpg
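
To pin down the 'counterfactual' point in code, here is a minimal
sketch (Python; the function names and the frame data are my own
invention): a genuine AND gate computes an output for every possible
input, while a cartoon of one merely plays back the frames it was
given, and has no behavior at all for inputs that were never depicted.

    # A genuine AND gate: defined for all inputs, so counterfactuals hold.
    def and_gate(a: bool, b: bool) -> bool:
        return a and b

    # A "cartoon" of an AND gate: fixed playback of the depicted frames only.
    cartoon_frames = [((True, True), True), ((True, False), False)]

    def cartoon_and(a: bool, b: bool) -> bool:
        for inputs, shown_output in cartoon_frames:
            if inputs == (a, b):
                return shown_output  # replays what was depicted; computes nothing
        raise LookupError("no frame depicts this input")

    print(and_gate(False, False))        # False -- the gate decides
    try:
        print(cartoon_and(False, False))
    except LookupError:
        print("the cartoon never decided this case")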

>
> > and attached each side to an effector which plays a cartoon of a
> > semiconductor AND gate, then you would have a cartoon which
> > simulates an AND gate. The cartoon would be two separate cartoons in
> > reality, and the logic between them would be entirely inferred by the
> > audience, but this apparatus could be interpreted by the audience as a
> > functional simulation. The audience can jump to the conclusion that
> > the cartoon is a semiconductor AND gate. This is all that Strong AI
> > will ever be.
>
> > Computationalism assumes that consciousness is a generalizable
> > concept, but we don't know that is true. My view is that it is not
> > true, since we know that computation itself is not even generalizable
> > to all physical forms. You can't build a computer without any solid
> > materials.
>
> This is a statement about what is possible to build given what physics has
> provided us.  I am not sure what that implies for computationalism.
> Certainly, Turing machines are special structures and not everything is a
> Turing machine.  However, if one can build a Turing machine, one will find
> that its repertoire is infinite.

That's what I'm saying though. A Turing machine cannot be built in
liquid, gas, or vacuum. It is a logic of solid objects only. That
means its repertoire is not infinite, since it can't simulate a
Turing machine that is not made of some simulated solidity.
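
For reference, a minimal sketch of the abstraction under discussion
(Python; the states and rules are a toy example of my own): a finite
transition table over discrete, stably held symbols. This one inverts
a tape of bits and halts.

    # Minimal Turing machine: flips every bit on the tape, then halts.
    # '_' is the blank symbol; rules map (state, symbol) -> (state, write, move).
    rules = {
        ('scan', '0'): ('scan', '1', +1),   # write 1, move right
        ('scan', '1'): ('scan', '0', +1),   # write 0, move right
        ('scan', '_'): ('halt', '_', 0),    # blank reached: stop
    }

    def run(tape: str) -> str:
        cells = list(tape) + ['_']
        state, head = 'scan', 0
        while state != 'halt':
            state, cells[head], move = rules[(state, cells[head])]
            head += move
        return ''.join(cells).rstrip('_')

    print(run('10110'))   # -> '01001'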

> To date, there is nothing we
> (individually or as a race) have accomplished that could not in principle
> also be accomplished by an appropriately programmed Turing machine.

Even if that were true, no Turing machine has ever known what it has
accomplished, so in principle nothing can ever be accomplished by a
Turing machine independently of our perception. What is an
'accomplishment' in computational terms?

>
> > You can't build it out of uncontrollable living organisms.
> > There are physical constraints even on what can function as a simple
> > AND gate. It has no existence in a vacuum or a liquid or gas.
>
> > Just as basic logic functions are impossible under those ordinary
> > physically disorganized conditions, it may be the case that awareness
> > can only develop by itself under the opposite conditions. It needs a
> > variety of solids, liquids, and gases - very specific ones. It's not
> > Legos. It's alive. This means that consciousness may not be a concept
> > at all - not generalizable in any way. Consciousness is the opposite,
> > it is a specific enactment of particular events and materials. A brain
> > can only show us that a person is alive, but not who that person is.
> > The who cannot be simulated because it is an unrepeatable event in the
> > cosmos. A computer is not a single event. It is parts which have been
> > assembled together. It did not replicate itself from a single living
> > cell.
>
> > > > You can't make a machine that acts like a person without
> > > > it becoming a person automatically. That clearly is ridiculous to me.
>
> > > What do you think about Strong AI, do you think it is possible?
>
> > The whole concept is a category error.
>
> Let me use a more limited example of Strong AI.  Do you think there is any
> existing or past human profession that an appropriately built android
> (which is driven by a computer and a program) could not excel at?

Artist, musician, therapist, actor, talk show host, teacher,
caregiver, parent, comedian, diplomat, clothing designer, director,
movie critic, author, etc.

>  Could
> there be a successful android surgeon, computer programmer, psychologist,
> lawyer, etc.

I would say there could be very successful android surgeons, less so
computer programmers and lawyers because there is an element of
creativity there, and not so much for a psychologist, because the job
requires the understanding of feeling, which is not possible for a
computer executed in material that cannot feel like an animal feels.
Until silicon can feel proud and ashamed, it won't be any good at
psychology.

> Or do you believe there is some inherent limitation of
> computers that would prevent them from being capable in one of these
> roles?  If so please provide an example.

Computers are inherently limited by their material substrate. A
mechanism of electronic silicon will never know what it is to feel
pain, fear, pleasure, etc. Any role which emphasizes a talent for
feeling and understanding would fail to be fulfilled by the promise of
disembodied recursive enumeration.

>
> > It's like saying do you think
> > it's possible to have human colored paint. It is possible to have
> > technology that seems to us like Strong AI, just as a mannequin can
> > seem like a person to us momentarily. The better the simulation, the
> > longer it will take for more people to doubt its authenticity, but
> > there will always be ways to tell the difference (you might need a
> > trained guinea pig or a voice stress analyzer to do it, but eventually
> > you could probably tell).
>
> > >  If so, if
> > > the program that creates a strong AI were implemented on various
> > > computational substrates, silicon, carbon nanotubes, pen and paper, pipes
> > > and water, do you think any of them would yield a mind that is conscious?
>
> > No. By definition, consciousness has to come from the substrate
> > itself. If the substrate is conscious, then the program can be
> > conscious, but the more something is conscious, the less possible it
> > is that it can be programmed.
>
> > > If yes, do you think the content of that AI's consciousness would differ
> > > depending on the substrate?
>
> > No, it's the ability to accept the program that would differ depending
> > on the substrate. The sensorimotive awareness of any substrate is
> > already different from any other. We play a song on a computer but the
> > computer does not experience the song, nor do the speakers in your
> > headphones, or even your cochlea. They do probably experience
> > vibration, and maybe the cochlea experiences 'sound' in a zoological
> > sense, but the song level interpretation is private to anthropological
> > level experience. You can't put an mp3 directly into your ear or your
> > brain. There is no AI independent of substrate. I can draw a straight
> > line or walk a straight line, but there is no universal straight line
> > experience. Straight and linear are sensorimotive qualities carried by
> > particular channels of sense.
>
> > >  And finally, if you believe at least some
> > > substrates would be conscious, are there any cases where the AI would
> > > respond or behave differently on one substrate or the other (in terms of
> > > the Strong AI program's output) when given equivalent input?
>
> > I can wear a suit and tie and stand in a department store. A mannequin
> > can do the same thing. AI is the suit and tie. Does the suit make the
> > mannequin look more like me when I'm wearing the same suit? Sure. Does
> > it make any difference to the mannequin? No. Does it make any
> > difference to me? Yes, my experience of the mannequin depends on how
> > good of a mannequin it is and how directly I look at it and for how
> > long.
>
> > > > > > If we run the zombie argument backwards then, at what substitution
> > > > > > level of zombiehood does a (completely possible) simulated person
> > > > > > become a (non-Turing emulable) unconscious puppet? How bad of a
> > > > > > simulation does it have to be before becoming an impossible zombie?
>
> > > > > > This to me reveals an absurdity of arithmetic realism. Pinocchio
> > the
> > > > > > boy is possible to simulate mechanically, but Pinocchio the puppet
> > is
> > > > > > impossible. Doesn't that strike anyone else as an obvious deal
> > breaker?
>
> > > > > Not every Turing emulable process is necessarily conscious.
>
> > > > Why not? What makes them unconscious?
>
> > > My guess is that it would be a lack of sophistication.  For example, one
> > > program might simply consist of a for loop iterating from 1 to 10.  Is
> > this
> > > program conscious?  I don't know, but it almost certainly isn't conscious
> > > in the way you or I are.
>
> > If that were the case then sophistication alone would be
> > consciousness. It's not though. Our consciousness is certainly
> > sophisticated but a beach full of sand is sophisticated too.
>
> A computer program written to simulate sand would not require a significant
> amount of information compared to the amount of information needed to
> specify a human brain.

Sand can be pretty complicated to generate:
http://inspirationgreen.com/magnified-grains-of-sand.html

I'm not saying it's as complicated as a human brain, but by your
correlation, it should be more conscious than a block of iron, and I
think it clearly is not.

>
> > Would a
> > program that makes a copy of itself every 10 iterations be any more
> > conscious than one that doesn't copy itself? Without some kind of
> > capacity for sense and motive within the loops from the start, there
> > isn't anything that knows there is any looping going on. We have to
> > realize that there is no such thing as a 'loop' in general, any more
> > than there is such a thing as a touchdown in general. When we talk
> > about a for loop we are talking about a common sense neurological
> > modeling which relates to certain organizations of physical objects
> > and the computational manipulation thereof. There is no looping for
> > vapor or in a vacuum.
>
> > > > You can't draw the line in one
> > > > direction but not the other. If you say that anything that seems to
> > > > act alive well enough must be alive, then you also have to say that
> > > > anything that does not seem conscious may just be poorly programmed.
>
> > > When you talk about changing substitution levels, you are talking about
> > > different programs.  Some levels may be so high-level that the important
> > > and necessary aspects are eliminated and replaced with functions which
> > > fundamentally alter the experience of the simulated mind.  Whether or not
> > > this would be noticed depends on the sophistication of the Turing test.
> > > Examination of outward appearance may not even be sufficient.  I think
> > > Ned Block had an argument against that: you could have a giant state
> > > table, infinite in size, that stores the output for any possible
> > > question.  Such a program might pass a Turing test, but internally it is
> > > performing only a very trivial computation.  If we inspected the code of
> > > this program we could say it has no understanding of individual words, no
> > > complex thought processes, etc.  However, most zombies are defined to be
> > > functionally (if not physically) identical rather than merely capable of
> > > passing some limited test based on external appearances.
>
> > Zombiehood has nothing to do with external appearances, other than
> > that they are presumed to be the same as a non-zombie.
>
> Right.
>
> > What makes a
> > zombie a zombie is that it lacks interiority.
>
> Yes.
>
> > It doesn't matter if it
> > is possible to test it or not, if we call it a zombie, that means that
> > it is a given that it does not have conscious interior experience. All
> > programs are zombies, and all consciousness is more than a program.
>
> You have finally answered a question I asked many months ago: you do
> believe zombies are possible.

No, zombies are not actually possible in reality, since there will
always be something or someone who can tell the difference, but the
principle as it pertains to AI is valid. A person can impersonate a
computer and a computer can seem to impersonate a human, but that
doesn't mean impersonation carries the subjective experience.
Pretending I am Napoleon doesn't make me Napoleon, even if I do a
really good imitation.
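
Ned Block's state-table point above can also be made concrete. A toy
sketch (Python; the stored answers and the 'computing' rule are my own
invented data): a lookup table can pass exactly the tests its authors
anticipated while performing no computation on the input at all,
whereas even a trivial program that operates on its input behaves
sensibly on questions nobody stored in advance.

    # Block-style lookup "mind": stored answers, no computation on the input.
    stored_answers = {
        "what is 2+2?": "4",
        "are you conscious?": "yes",
    }

    def table_mind(question: str) -> str:
        # Succeeds only on anticipated questions; otherwise it can only stall.
        return stored_answers.get(question, "hmm, interesting question")

    def computing_mind(question: str) -> str:
        # Trivial, but it genuinely operates on the input it receives.
        if question.startswith("what is") and "+" in question:
            a, b = question.rstrip("?").split()[-1].split("+")
            return str(int(a) + int(b))
        return "I don't know"

    print(table_mind("what is 3+5?"))      # canned stall: never stored
    print(computing_mind("what is 3+5?"))  # '8': computed from the input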

Craig
