Matt Mahoney wrote:
Is it possible to program any autonomous agent that responds to
reinforcement learning (a reward/penalty signal) and does not act as
though its environment were real?  How would one test for this belief?

Exactly.

Of course, an agent could still claim that its environment isn't real.
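
A minimal sketch makes the point concrete (the environment, states, and
reward values below are hypothetical toy choices, not anything from Matt's
setup): a tabular Q-learner's update consumes only states, actions, and
rewards, so a belief about whether the environment is "real" has no
variable to live in.

# Minimal sketch: a toy tabular Q-learning agent. The update rule below
# depends only on the reward/penalty signal; a claim such as "this
# environment isn't real" appears nowhere in the computation, so it
# cannot change the learned behavior.

import random

ALPHA = 0.1    # learning rate
GAMMA = 0.9    # discount factor
EPSILON = 0.1  # exploration rate

N_STATES = 4
N_ACTIONS = 2

# Q-table: estimated discounted reward for each (state, action) pair.
Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]

def toy_env_step(state, action):
    """Hypothetical environment: action 1 in state 3 pays off."""
    reward = 1.0 if (state == 3 and action == 1) else 0.0
    next_state = (state + 1) % N_STATES
    return reward, next_state

def choose_action(state):
    """Epsilon-greedy action selection over the Q-table."""
    if random.random() < EPSILON:
        return random.randrange(N_ACTIONS)
    return max(range(N_ACTIONS), key=lambda a: Q[state][a])

state = 0
for _ in range(10_000):
    action = choose_action(state)
    reward, next_state = toy_env_step(state, action)
    # Standard Q-learning update: driven entirely by the reward signal.
    Q[state][action] += ALPHA * (
        reward + GAMMA * max(Q[next_state]) - Q[state][action]
    )
    state = next_state

print(Q)  # converges on the rewarded action either way

Whatever such an agent might print or claim about "reality" would be
extra machinery layered on top; the reward loop neither needs nor
notices it, which is why the claim alone gives us nothing to test.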

- Jef
