On 14/01/2008, Pei Wang <[EMAIL PROTECTED]> wrote:
> On Jan 13, 2008 7:40 PM, Richard Loosemore <[EMAIL PROTECTED]> wrote:
>
> > And, as I indicated, my particular beef was with Shane Legg's paper,
> > which I found singularly content-free.
>
> Shane Legg and Marcus Hutter have a recent publication on this topic,
>     http://www.springerlink.com/content/jm81548387248180/
> which is much richer in content.
>

For those of us without SpringerLink accounts, I think the paper can
also be found here:

http://arxiv.org/abs/0712.3329


"While we do not consider efficiency to be a part of the definition of
intelligence, this is not to say that considering the efficiency of
agents is unimportant. Indeed, a key goal of artificial intelligence is
to find algorithms which have the greatest efficiency of intelligence,
that is, which achieve the most intelligence per unit of computational
resources consumed."
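
For reference, if I read the paper right, the measure they define is

\[
\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)}\, V^{\pi}_{\mu}
\]

where \(K(\mu)\) is the Kolmogorov complexity of environment \(\mu\) and
\(V^{\pi}_{\mu}\) is the expected value agent \(\pi\) achieves in it. The
"efficiency of intelligence" they gesture at would then be something like
\(\Upsilon(\pi) / C(\pi)\), where \(C(\pi)\) is a cost-of-computation term
(my notation, not theirs; the paper deliberately leaves resources out of
\(\Upsilon\) itself).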

Why not consider resource efficiency itself a thing to be adapted? It
governs which "problems" can be solved at all.

An example: consider two android robots with finite energy supplies
tasked with a long foot race.

One shuts down all processing non-essential to its current task of
running (sound familiar? Humans do much the same; I certainly think
better walking than running), so it uses less energy.

The other one attempts to find programs that precisely predict its
input given its output, churning through billions of possibilities and
consuming vast amounts of energy.

The one that shuts down its processing finishes the race and gets the
reward; the other runs its battery down through excessive processing
and has to be rescued, getting no reward.
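
To make the point concrete, here is a toy sketch in Python; every number
in it is invented purely for the example, nothing comes from the paper:

    # Toy race: two robots share the same finite battery and track length.
    BATTERY = 1000.0     # energy units available to each robot
    RACE_LENGTH = 100    # steps needed to finish
    MOVE_COST = 5.0      # energy spent per step just to run

    def race(compute_cost_per_step):
        """Return 1 (reward) if the robot finishes before its battery dies."""
        energy, position = BATTERY, 0
        while position < RACE_LENGTH:
            energy -= MOVE_COST + compute_cost_per_step
            if energy <= 0:
                return 0   # stranded mid-race, no reward
            position += 1
        return 1

    print(race(compute_cost_per_step=1.0))   # frugal robot: 1
    print(race(compute_cost_per_step=20.0))  # heavy predictor: 0

The frugal robot spends 600 of its 1000 units and finishes; the
predictor burns 25 units per step and is dead after 40 of the 100 steps.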

As they have defined it, only an agent's outputs can make it more or
less likely to achieve a goal, which is a narrow view.

  Will Pearson
