On Jan 17, 12:51 am, Jason Resch <jasonre...@gmail.com> wrote:
> On Mon, Jan 16, 2012 at 10:29 PM, Craig Weinberg <whatsons...@gmail.com> wrote:

>
> > That's what I'm saying though. A Turing machine cannot be built in
> > liquid, gas, or vacuum. It is a logic of solid objects only. That
> > means its repertoire is not infinite, since it can't simulate a
> > Turing machine that is not made of some simulated solidity.
>
> Well you're asking for something impossible, not something impossible to
> simulate, but something that is logically impossible.

We can simulate logical impossibilities graphically though (Escher,
etc.). My point is that a Turing machine is not even truly universal,
let alone infinite. It's an object-oriented syntax that is limited to
particular kinds of functions, none of which include biological
awareness (which might make sense, since biology is almost entirely
fluid-solution based).

>
> Also, something can be infinite without encompassing everything.  A line
> can be infinite in length without every point in existence having to lie on
> that line.

If that's what you meant though, it's not saying much of anything
about the repertoire. A player piano has an infinite repertoire too.
So what?

>
> > > To date, there is nothing we
> > > (individually or as a race) have accomplished that could not in principle
> > > also be accomplished by an appropriately programmed Turing machine.
>
> > Even if that were true, no Turing machine has ever known what it has
> > accomplished,
>
> Assuming you and I aren't Turing machines.

It would be begging the question otherwise.

>
> > so in principle nothing can ever be accomplished by a
> > Turing machine independently of our perception.
>
> Do asteroids and planets exist "out there" even if no one perceives them?

They don't need humans to perceive them to exist, but my view is that
gravity is evidence that all physical objects perceive each other. Not
in a biological sense of feeling, seeing, or knowing, but in the most
primitive forms of collision detection, accumulation, attraction to
mass, etc.

>
> > What is an
> > 'accomplishment' in computational terms?
>
> I don't know.
>
>
>
> > > > You can't build it out of uncontrollable living organisms.
> > > > There are physical constraints even on what can function as a simple
> > > > AND gate. It has no existence in a vacuum or a liquid or gas.
>
> > > > Just as basic logic functions are impossible under those ordinary
> > > > physically disorganized conditions, it may be the case that awareness
> > > > can only develop by itself under the opposite conditions. It needs a
> > > > variety of solids, liquids, and gases - very specific ones. It's not
> > > > Legos. It's alive. This means that consciousness may not be a concept
> > > > at all - not generalizable in any way. Consciousness is the opposite,
> > > > it is a specific enactment of particular events and materials. A brain
> > > > can only show us that a person is alive, but not who that person is.
> > > > The who cannot be simulated because it is an unrepeatable event in the
> > > > cosmos. A computer is not a single event. It is parts which have been
> > > > assembled together. It did not replicate itself from a single living
> > > > cell.
>
> > > > > > You can't make a machine that acts like a person without
> > > > > > it becoming a person automatically. That clearly is ridiculous to
> > > > > > me.
>
> > > > > What do you think about Strong AI, do you think it is possible?
>
> > > > The whole concept is a category error.
>
> > > Let me use a more limited example of Strong AI.  Do you think there is any
> > > existing or past human profession that an appropriately built android
> > > (which is driven by a computer and a program) could not excel at?
>
> > Artist, musician, therapist, actor, talk show host, teacher,
> > caregiver, parent, comedian, diplomat, clothing designer, director,
> > movie critic, author, etc.
>
> What do you base this on?  What is it about being a machine that precludes
> them from fulfilling any of these roles?

Machines have no feeling. These kinds of careers rely on sensitivity
to human feeling and meaning. They require that you care about the
things that humans care about. Caring cannot be programmed; programming
is the opposite of caring, because it requires no investment from the
programmed. There is no subject in a program, only an object
programmed to behave in a way that seems, in some respects, like it
could be a subject.

>
> Also, although their abilities are limited, the below examples certainly
> show that computers are making inroads along many of these lines of work,
> and will only improve over time as computers become more powerful.

Many professions would be much better performed by a computer. Human
oversight might be desirable for something like surgery, but I would
probably go with the computer over a human surgeon.

>
> Artist and Musician: Computer generated music has been around since at
> least the 60s: http://www.youtube.com/watch?v=X4Neivqp2K4

Yep, 47 years since then and still no improvement whatsoever. Based on
that, I think we cannot assume that computer-generated music will
improve significantly over time as computers become more powerful.
They will just make more realistic-sounding music that is just as bad.

> Therapist: ELIZA, the computer psychologist, has been around since
> 1964: http://nlp-addiction.com/eliza/

Again, no improvement in almost 50 years. Does anyone use ELIZA for
psychology? No. It's utterly useless except as a novelty and
linguistics demonstration.
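
To see how thin the trick actually is: the whole technique is keyword
matching plus pronoun reflection. Here is a minimal sketch of an
ELIZA-style responder in Python (a toy illustration of the method, not
Weizenbaum's actual DOCTOR script; the patterns and replies are made
up):

import random
import re

# First/second-person swaps so a matched fragment reads back naturally.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "I", "your": "my", "are": "am"}

# (pattern, canned responses) pairs, tried in order; {0} is the
# reflected text captured by the pattern.
RULES = [
    (r"i need (.*)", ["Why do you need {0}?",
                      "Would it really help you to get {0}?"]),
    (r"i am (.*)", ["How long have you been {0}?",
                    "Why do you think you are {0}?"]),
    (r".* mother.*", ["Tell me more about your mother."]),
    (r"(.*)", ["Please go on.", "How does that make you feel?"]),
]

def reflect(fragment):
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(statement):
    for pattern, responses in RULES:
        m = re.match(pattern, statement.lower().strip(" .!?"))
        if m:
            return random.choice(responses).format(*(reflect(g) for g in m.groups()))

print(respond("I am worried about my job"))
# e.g. "Why do you think you are worried about your job?"

There is no model of the patient, the topic, or even the sentence
anywhere in there, which is why it never got past being a novelty.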

> Teacher: http://en.wikipedia.org/wiki/Rosetta_Stone_%28software%29

It's not a teacher; it's a computer-assisted learning regimen. An
exercise machine is not the same thing as a personal trainer or a
coach.

> Caregiver: The Japanese are actively researching and developing caregiving
> robots to take care of their aging 
> population: http://web-japan.org/trends/09_sci-tech/sci100225.html

That doesn't mean that they will excel at being caregivers.

> Comedian: "What kind of murderer has moral fiber?" — "A cereal killer."
> This joke was written by a computer. 
> (http://www.newscientist.com/article/dn1719)
> Movie Critic: http://www.netflixprize.com/

Again, generating a sophomoric pun (in a sea of garbage jokes) is not
the same thing as 'excelling at being a comedian.' All of these
examples reveal the utter failure of computation to get past square
one in any of these areas. It is obvious to me that the failure is
rooted precisely in the failure of computation to simulate awareness
beyond a trivial level of sophistication. Limited capacities for
simulating trivial music, conversation, humor, and compassion are
radically overestimated, even though there has been no sign of
progress at all since the beginning of computing.

>
> > >  Could
> > > there be a successful android surgeon, computer programmer, psychologist,
> > > lawyer, etc.
>
> > I would say there could be very successful android surgeons, less so
> > computer programmers and lawyers because there is an element of
> > creativity there,
>
> Computers have demonstrated creativity:
> http://www.mendeley.com/research/automated-design-previously-patented...
>

The link doesn't come up.

> > and not so much for a psychologist, because the job
> > requires the understanding of feeling, which is not possible for a
> > computer executed in material that cannot feel like an animal feels.
>
> But a computer program will have the same output (outwardly visible
> behavior) regardless of its substrate.  Clearly the material on which the
> Turing machine is executed cannot have any effect on its performance.

If that were the case then a Turing machine should be executable as a
truckload of live hamsters or a dense layer of fog. The fact that it
cannot work that way is evidence that the material does relate to the
ability of a Turing machine to perform even basic functions.
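
Notice that the formalism itself concedes the point in a backhanded
way: a Turing machine's behavior is defined entirely by its state
table, and the tape's material never appears anywhere in the
definition. It is simply presupposed to be a stable, discrete,
rewritable solid. A minimal sketch in Python (a toy unary incrementer,
just to show that the whole abstraction is the table):

# Toy Turing machine: the entire "machine" is the transition table.
# The tape medium is nowhere in the formalism; a well-behaved solid
# substrate is presupposed, not derived.
def run(tape, table, state="start", head=0):
    cells = dict(enumerate(tape))            # sparse tape, blank = "_"
    while state != "halt":
        symbol = cells.get(head, "_")
        write, move, state = table[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip("_")

# (state, read) -> (write, move, next state): scan right past the 1s,
# then append one more 1.
TABLE = {
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("1", "R", "halt"),
}

print(run("111", TABLE))  # "1111" -- the table, not the tape, does the work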

Art, music, comedy, compassion, etc. are not 'output'. They are
experiences which can be shared. A Turing machine can't experience
anything by itself; it is only the substrate that experiences.

> If a
> Turing machine run on carbon makes a better psychologist, then that same
> program executed on a silicon Turing machine will be just as successful.

The machine exploits the common sense of object-oriented substrates.
It doesn't matter whether it runs on silicon or boron or gadolinium,
because any sufficiently polite solid material will do. None of them
make a good psychologist. For that you need whatever it is that
neurons themselves run on.

>
> > Until silicon can feel proud and ashamed, it won't be any good at
> > psychology.
>
> Unless there is something about psychologists that is infinite, then there
> is no externally visible behavior a psychologist is capable of that the
> android controlled by a Turing machine could not also do.

A keyboard can be programmed to type any sentence. Does that mean it
is Shakespeare? A Turing machine can only impersonate intelligence
trivially; it can't embody it authentically. It's not about matching
behaviors, it's about having the sensitivity and feeling to know when
and why the behaviors are appropriate. It's about originating new
behaviors that are significant improvements over previous approaches.
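
This is just Ned Block's state-table point from earlier in the thread,
shrunk to a toy. A sketch of a lookup-table "conversationalist"
(hypothetical canned pairs; a convincing version would need an
astronomically large table, but the computation per reply stays
exactly this trivial):

# Block-style responder: behaviorally plausible output from one dict
# lookup, with no model of meaning anywhere in the program.
CANNED = {
    "how are you?": "Fine, thanks. And you?",
    "what is your favorite play?": "Hamlet, without question.",
    "do you understand me?": "Of course I understand you.",
}

def reply(prompt):
    return CANNED.get(prompt.lower().strip(), "Interesting. Tell me more.")

print(reply("Do you understand me?"))  # "Of course I understand you."

Matching the output is cheap; the question is whether anything in
there knows when or why the reply is appropriate.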

>
>
>
> > > Or do you believe there is some inherent limitation of
> > > computers that would prevent them from being capable in one of these
> > > roles?  If so please provide an example.
>
> > Computers are inherently limited by their material substrate. A
> > mechanism of electronic silicon will never know what it is to feel
> > pain, fear, pleasure, etc. Any role which emphasizes a talent for
> > feeling and understanding would fail to be fulfilled by the promise of
> > disembodied recursive enumeration.
>
> Do you think something has to feel in order to act perfectly as though it
> is feeling?  Actors can pretend to suffer if their role is to be tortured
> in a movie, yet they feel no pain.

They aren't feeling pain at the moment, but they are capable of
experiencing pain, therefore they can fake it with feeling.

>  If you are into sci-fi, you should watch the
> recent (not 1970s) Battlestar Galactica series.  Among other things, it
> explores a racism against machines who in all respects look, act, and
> behave like humans.

Yeah, I have watched a lot of that BSG. I like how the cylons are
monotheistic and the humans are pagan.  It's a good show. I would agree
that if AI robots were indistinguishable from us, it would be a valid
philosophical issue. My view though is that there are some good reasons
that will never be the case. As the AI horizon continues to recede
infinitely, even in the face of ever faster hardware and more bloated
software, we will continue to have to deal with actual racism rather
than theoretical anthropism. If the cylons were genetically engineered
beings instead, well, that's a different story entirely. Living
creatures matter, programs don't (except to the living creatures that
use them).

>
>
>
> > > > It's like saying do you think
> > > > it's possible to have human colored paint. It is possible to have
> > > > technology that seems to us like Strong AI, just as a mannequin can
> > > > seem like a person to us momentarily. The better the simulation, the
> > > > longer it will take for more people to doubt its authenticity, but
> > > > there will always be ways to tell the difference (you might need a
> > > > trained guinea pig or a voice stress analyzer to do it, but eventually
> > > > you could probably tell).
>
> > > > >  If so, if
> > > > > the program that creates a strong AI were implemented on various
> > > > > computational substrates, silicon, carbon nanotubes, pen and paper,
> > > > > pipes and water, do you think any of them would yield a mind that is
> > > > > conscious?
>
> > > > No. By definition, consciousness has to come from the substrate
> > > > itself. If the substrate is conscious, then the program can be
> > > > conscious, but the more something is conscious, the less possible it
> > > > is that it can be programmed.
>
> > > > > If yes, do you think the content of that AI's consciousness would
> > > > > differ depending on the substrate?
>
> > > > No, it's the ability to accept the program that would differ depending
> > > > on the substrate. The sensorimotive awareness of any substrate is
> > > > already different from any other. We play a song on a computer but the
> > > > computer does not experience the song, nor do the speakers in your
> > > > headphones, or even your cochlea. They do probably experience
> > > > vibration, and maybe the cochlea experiences 'sound' in a zoological
> > > > sense, but the song level interpretation is private to anthropological
> > > > level experience. You can't put an mp3 directly into your ear or your
> > > > brain. There is no AI independent of substrate. I can draw a straight
> > > > line or walk a straight line, but there is no universal straight line
> > > > experience. Straight and linear are sensorimotive qualities carried by
> > > > particular channels of sense.
>
> > > > >  And finally, if you believe at least some
> > > > > substrates would be conscious, are there any cases where the AI would
> > > > > respond or behave differently on one substrate or the other (in
> > > > > terms of the Strong AI program's output) when given equivalent input?
>
> > > > I can wear a suit and tie and stand in a department store. A mannequin
> > > > can do the same thing. AI is the suit and tie. Does the suit make the
> > > > mannequin look more like me when I'm wearing the same suit? Sure. Does
> > > > it make any difference to the mannequin? No. Does it make any
> > > > difference to me? Yes, my experience of the mannequin depends on how
> > > > good of a mannequin it is and how directly I look at it and for how
> > > > long.
>
> > > > > > > > If we run the zombie argument backwards then, at what
> > > > > > > > substitution level of zombiehood does a (completely possible)
> > > > > > > > simulated person become a (non-Turing emulable) unconscious
> > > > > > > > puppet? How bad of a simulation does it have to be before
> > > > > > > > becoming an impossible zombie?
>
> > > > > > > > This to me reveals an absurdity of arithmetic realism.
> > > > > > > > Pinocchio the boy is possible to simulate mechanically, but
> > > > > > > > Pinocchio the puppet is impossible. Doesn't that strike anyone
> > > > > > > > else as an obvious deal breaker?
>
> > > > > > > Not every Turing emulable process is necessarily conscious.
>
> > > > > > Why not? What makes them unconscious?
>
> > > > > My guess is it would be a lack of sophistication.  For example, one
> > > > > program might simply consist of a for loop iterating from 1 to 10.
> > > > > Is this program conscious?  I don't know, but it almost certainly
> > > > > isn't conscious in the way you or I are.
>
> > > > If that were the case then sophistication alone would be
> > > > consciousness. It's not though. Our consciousness is certainly
> > > > sophisticated but a beach full of sand is sophisticated too.
>
> > > A computer program written to simulate sand would not require a
> > > significant amount of information compared to the amount of information
> > > needed to specify a human brain.
>
> > Sand can be pretty complicated to generate:
> > http://inspirationgreen.com/magnified-grains-of-sand.html
>
> Wow.  Those are very cool.
>
>
>
> > I'm not saying it's as complicated as a human brain, but by your
> > correlation, it should be more conscious than a block of iron, and I
> > think that it clearly is not.
>
> > > > Would a
> > > > program that makes a copy of itself every 10 iterations be any more
> > > > conscious than one that doesn't copy itself? Without some kind of
> > > > capacity for sense and motive within the loops from the start, there
> > > > isn't anything that knows there is any looping going on. We have to
> > > > realize that there is no such thing as a 'loop' in general, any more
> > > > than there is such a thing as a touchdown in general. When we talk
> > > > about a for loop we are talking about a common sense neurological
> > > > modeling which relates to certain organizations of physical objects
> > > > and the computational manipulation thereof. There is no looping for
> > > > vapor or in a vacuum.
>
> > > > > > You can't draw the line in one
> > > > > > direction but not the other. If you say that anything that seems to
> > > > > > act alive well enough must be alive, then you also have to say that
> > > > > > anything that does not seem conscious may just be poorly
> > > > > > programmed.
>
> > > > > When you talk about changing substitution levels, you are talking
> > > > > about different programs.  Some levels may be so high-level that the
> > > > > important and necessary aspects are eliminated and replaced with
> > > > > functions which fundamentally alter the experience of the simulated
> > > > > mind.  Whether or not this would be noticed depends on the
> > > > > sophistication of the Turing test.  Examination of outward appearance
> > > > > may not even be sufficient.  I think Ned Block had an argument
> > > > > against that: you could have a giant state table that is infinite in
> > > > > size and for any possible question has the stored output.  Such a
> > > > > program might pass a Turing test, but internally it is performing
> > > > > only a very trivial computation.  If we inspected the code of this
> > > > > program we could say it has no understanding of individual words, no
> > > > > complex thought processes, etc.  However, most zombies are defined to
> > > > > be functionally (if not physically) identical rather than merely
> > > > > capable of passing some limited test based on external appearances.
>
> > > > Zombiehood has nothing to do with external appearances, other than
> > > > that they are presumed to be the same as a non-zombie.
>
> > > Right.
>
> > > > What makes a
> > > > zombie a zombie is that it lacks interiority.
>
> > > Yes.
>
> > > > It doesn't matter if it
> > > > is possible to test it or not, if we call it a zombie, that means that
> > > > it is a given that it does not have conscious interior experience. All
> > > > programs are zombies, and all consciousness is more than a program.
>
> > > You have finally answered a question I asked many months ago.  That you
> > > do believe zombies are possible.
>
> > No, zombies are not actually possible in reality, since there will
> > always be something or someone who can tell the difference, but the
> > principle as it pertains to AI is valid. A person can impersonate a
> > computer and a computer can seem to impersonate a human, but that
> > doesn't mean impersonation carries the subjective experience.
>
> If you think zombies are impossible, then you are forced to reject the
> possibility that machines can satisfy any human occupation, as otherwise
> you would have to consider them conscious (owing to the fact that you think
> zombies are impossible).  So you are consistent, but I think you are on
> shaky ground, since Turing machines are believed to be capable of
> replicating the behavior of any finite process.

I think that's because our notions of a finite process arise from the
same logic from which the idea of a Turing machine arises. If we see
logic as a special case of sense, then the important property becomes
not whether something is finite, but how high the quality of its sense
and motives is. Like molded plastic, a Turing machine can make a
useful simulacrum in almost any form, but only if you don't care about
the quality. Quality is not quantitative. It isn't useful to measure
it that way. There is no amount of plastic forks and paper plates that
equals fine china and sterling silver.

>
> > Pretending I am Napoleon doesn't make me Napoleon, even if I do a
> > really good imitation.
>
> If your brain were identical to Napoleon's you would be he.

No, I would still be just Napoleon's brain twin in a completely
different life. If a person is blind from birth, their visual cortex
activity is associated with tactile experiences. The same exact brain
will see if it has eyes to see with, or feel if it doesn't. If Napoleon
were optically blind, then having my brain would not let him see.

Craig
