--- Jef Allbright <[EMAIL PROTECTED]> wrote:

> Matt Mahoney wrote:
> > Is it possible to program any autonomous agent that responds to
> > reinforcement learning (a reward/penalty signal) that does not act as
> > though its environment were real?  How would one test for this belief?
> 
> Exactly.
> 
> Of course an agent could claim that its environment isn't real.
> 
> - Jef

Of course I can write

  #include <stdio.h>

  int main(void) {
    printf("Nothing is real.\n");  /* a claim, not a belief */
    return 0;
  }

But is this convincing?  I could also say "nothing is real", yet I continue to
eat, breathe, sleep, go to work, budget my money, not drive recklessly, not
jump off a cliff, and do all the other things that would make no difference to
my survival if I were dreaming.  So would you believe that I don't believe in
reality just because I say so?

So my question is: what aspect of behavior could be used as a test for belief
in reality?  The test ought to be applicable to humans, animals, robots, and
computer programs.  I believe the most general test is response to
reinforcement learning.  Suppose that you could not experience pain or
pleasure or any emotion, so that you would impassively accept any kind of
disease, disability, torture, or death, as unconcerned as if you were reading
a work of fiction.  Would you believe in reality then?
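
To make the test concrete, here is a minimal sketch of an agent that passes
it.  Everything in it is an illustrative assumption, not a claim about any
particular system: a two-action world where action 0 ("safe") pays +1 and
action 1 ("reckless") pays -1, a learning rate of 0.1, and a 10% exploration
rate.

  #include <stdio.h>
  #include <stdlib.h>

  int main(void) {
    double value[2] = {0.0, 0.0};  /* estimated value of each action */
    double alpha = 0.1;            /* learning rate (assumed) */
    srand(1);

    for (int step = 0; step < 1000; step++) {
      /* explore 10% of the time, otherwise pick the higher-valued action */
      int a = (rand() % 10 == 0) ? rand() % 2 : (value[1] > value[0]);
      int reward = (a == 0) ? 1 : -1;          /* the environment's verdict */
      value[a] += alpha * (reward - value[a]); /* incremental value update */
    }

    printf("Nothing is real.\n");  /* the agent can still say this... */
    printf("value(safe) = %.2f, value(reckless) = %.2f\n",
           value[0], value[1]);    /* ...but its learned values disagree */
    return 0;
  }

Run it and value(safe) climbs toward +1 while value(reckless) falls toward
-1, so the agent ends up taking the safe action almost every time.  By the
proposed test it acts as though its environment were real, no matter what
string it prints.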

-- Matt Mahoney, [EMAIL PROTECTED]
