--- Stan Nilsen <[EMAIL PROTECTED]> wrote:

> Matt Mahoney wrote:
> > --- Stan Nilsen <[EMAIL PROTECTED]> wrote:
> > 
> >> Matt Mahoney wrote:
> >>
> >>> Remember that the goal is to test for "understanding" in intelligent
> >>> agents that are not necessarily human.  What does it mean for a
> >> machine to
> >>> understand something?  What does it mean to understand a string of
> >> bits?
> >> Have you considered testing intelligent agents by simply observing
> what 
> >> they do when left alone?  If it has understanding, wouldn't it do 
> >> something?  And wouldn't "it's" choice be revealing?  Just a thought.
> > 
> > What it does depends on its goals, in addition to understanding. 
> Suppose
> > a robot just sits there, doing nothing.  Maybe it understands its
> > environment but doesn't need to do anything because its batteries are
> > charged.
> > 
> > 
> > -- Matt Mahoney, [EMAIL PROTECTED]
> 
> If the batteries are charged and it waits around for an "order" from 
> its master, then it will always be a robot and not an AGI.  If it 
> understands its environment, it is not an AGI - there are too many 
> mysteries in the "big environment" to understand it.  If nothing else, 
> it ought to be looking for a way to engage itself for someone or 
> something's benefit - else it probably doesn't understand existence.

I am not sure what you mean by AGI.  I consider a measure of intelligence
to be the degree to which goals are satisfied in a range of environments. 
It does not matter what the goals are.  They may seem irrational to you. 
The goal of a smart bomb is to blow itself up at a given target.  I would
consider bombs that hit their targets more often to be more intelligent.

I consider "understanding" to mean "intelligence" in this context.  You
can't say that a robot that does nothing is unintelligent unless you
specify its goals.
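
The idea of intelligence as degree of goal satisfaction across environments
can be sketched in a few lines of Python.  This is a toy illustration, not
anyone's actual proposal: the agent, environment, and function names here
are all hypothetical, and each "environment" is reduced to a function that
scores an agent's behavior with a reward in [0, 1].

```python
# Toy sketch (hypothetical names): intelligence as average goal
# satisfaction over a set of environments.

def score(agent, environments):
    """Average reward earned by `agent` across all environments."""
    return sum(env(agent) for env in environments) / len(environments)

# An "agent" here is just a function that picks an action; an
# "environment" scores that action with a reward in [0, 1].
always_one = lambda: 1
always_zero = lambda: 0

env_wants_one = lambda agent: 1.0 if agent() == 1 else 0.0
env_wants_zero = lambda agent: 1.0 if agent() == 0 else 0.0

# An agent with a fixed action satisfies only half of these goals.
print(score(always_one, [env_wants_one, env_wants_zero]))
```

Note that the agent that "does nothing" scores perfectly in an environment
whose goal is inaction - which is the point: the measure is meaningless
until the goals are specified.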

We may consider intelligence as a measure and AGI as a threshold.  AGI is
not required for understanding.  You can measure the degree to which
various search engines understand your query, spam filters understand your
email, language translators understand your document, vision systems
understand images, intrusion detection systems understand network traffic,
etc.  Each system was designed with a goal and can be evaluated according
to how well that goal is met.

AIXI allows us to evaluate intelligence independent of goals.  An agent
understands its input if it can predict it.  This can be measured
precisely.
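
A minimal sketch of measuring prediction, under the usual information-theoretic
convention that a predictor's quality is its total log-loss in bits (the same
quantity a compressor would pay).  The adaptive order-0 model below is only a
stand-in for illustration - the function name and the 256-symbol alphabet are
my assumptions, not part of AIXI itself.

```python
import math

def prediction_cost_bits(s):
    """Total log-loss in bits of an adaptive order-0 byte predictor on s.
    Lower cost per symbol means better prediction of the input."""
    counts = {}
    total_bits = 0.0
    seen = 0
    for ch in s:
        # Laplace-smoothed probability over a 256-symbol alphabet.
        p = (counts.get(ch, 0) + 1) / (seen + 256)
        total_bits += -math.log2(p)
        counts[ch] = counts.get(ch, 0) + 1
        seen += 1
    return total_bits

# A repetitive input is predicted (compressed) better than a varied one.
print(prediction_cost_bits("aaaaaaaaaa") < prediction_cost_bits("abcdefghij"))  # True
```

The point is that "how well does the agent predict its input" is a single
number, computable exactly, with no reference to what the agent wants.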


-- Matt Mahoney, [EMAIL PROTECTED]
