On 3/3/07, Matt Mahoney <[EMAIL PROTECTED]> wrote:

--- Jef Allbright <[EMAIL PROTECTED]> wrote:

> Matt Mahoney wrote:
> > Is it possible to program any autonomous agent that responds to
> > reinforcement learning (a reward/penalty signal) that does not act
> > as though its environment were real?  How would one test for this
> > belief?
>
> Exactly.
>
> Of course an agent could certainly claim that its environment isn't real.
>
> - Jef

Of course I can write

  #include <stdio.h>
  int main(void) {
    printf("Nothing is real.\n");
  }

But is this convincing?  I could also say "nothing is real", yet I continue to
eat, breathe, sleep, go to work, budget my money, not drive recklessly, not
jump off a cliff, and do all the other things that would make no difference to
my survival if I were dreaming.  So would you believe that I don't believe in
reality just because I say so?

So my question is what aspect of behavior could be used as a test for belief
in reality?  The test ought to be applicable to humans, animals, robots, and
computer programs.  I believe the most general test is response to
reinforcement learning.  Suppose that you could not experience pain or
pleasure or any emotion, so that you would impassively accept any kind of
disease, disability, torture, or death, as unconcerned as if you were reading
a work of fiction.  Would you believe in reality then?
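
To make the test concrete, here is a rough sketch of what I have in
mind.  It is purely illustrative; the toy environment, the reward
function, and the update rule are my own assumptions, not any
particular RL algorithm:

  /* Rough illustration of the proposed behavioral test: does the
   * agent's behavior shift in response to a reward signal?  The toy
   * environment and update rule here are hypothetical. */
  #include <stdio.h>
  #include <stdlib.h>

  #define TRIALS 1000

  /* hypothetical environment: action 0 is rewarded, action 1 is not */
  static int reward(int action) {
      return action == 0 ? 1 : 0;
  }

  int main(void) {
      double pref[2] = {0.5, 0.5};  /* action preferences */
      int counts[2] = {0, 0};

      for (int t = 0; t < TRIALS; t++) {
          /* pick an action in proportion to current preference */
          double p0 = pref[0] / (pref[0] + pref[1]);
          int action = ((double)rand() / RAND_MAX < p0) ? 0 : 1;
          counts[action]++;

          /* reinforcement: strengthen whatever was just rewarded */
          pref[action] += 0.01 * reward(action);
      }

      /* An agent that acts as though its environment were real ends up
       * favoring the rewarded action; one that ignores the signal does
       * not, no matter what it prints about reality. */
      printf("action 0: %d  action 1: %d\n", counts[0], counts[1]);
      return 0;
  }

Run it twice, once as written and once with the update line removed:
only the first version's behavior carries any information about the
reward signal, and that behavioral difference is the test I mean.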

I'm sorry, my terse reply was too subtle.

When you wrote "How would one test for this belief?", I responded
"Exactly", meaning "Exactly. How /would/ one test for this belief?
It's not conceivable in practice."

You can find plenty of age-old debate on solipsism on the web, but I
find it to be an entirely sterile topic.

A slightly more interesting approach might be that of a mentally ill
AI suffering from something similar to Cotard's Syndrome,
such that some portion of its processing is actively denying the
reality of its environment.  But that seems nearly as sterile since
you could just as easily imagine any other form of delusion such as
"everything is blue."

So my response was meant to indicate that agency necessarily implies
awareness of an environment of interaction.  Certainly one could
modify the software to claim otherwise, but so what?

- Jef
