Matt Mahoney wrote:
--- Stan Nilsen <[EMAIL PROTECTED]> wrote:

Matt Mahoney wrote:

Remember that the goal is to test for "understanding" in intelligent agents that are not necessarily human.  What does it mean for a machine to understand something?  What does it mean to understand a string of bits?
Have you considered testing intelligent agents by simply observing what they do when left alone? If an agent has understanding, wouldn't it do something? And wouldn't its choice be revealing? Just a thought.

What it does depends on its goals as well as its understanding.  Suppose
a robot just sits there, doing nothing.  Maybe it understands its
environment but doesn't need to do anything because its batteries are
charged.


-- Matt Mahoney, [EMAIL PROTECTED]

If the batteries are charged and it waits around for an "order" from its master, then it will always be a robot and not an AGI. And if it fully understands its environment, it is not an AGI either: there are too many mysteries in the "big environment" for anything to understand it completely. If nothing else, it ought to be looking for a way to engage itself for someone's or something's benefit; otherwise it probably doesn't understand existence.
