On 10/11/2007, William Pearson <[EMAIL PROTECTED]> wrote:
> On 09/11/2007, Jef Allbright <[EMAIL PROTECTED]> wrote:
> > On 11/8/07, William Pearson <[EMAIL PROTECTED]> wrote:
> > > 1) Doesn't treat computation as outputting to the environment, thus
> > > can have no concept of saving energy or avoiding interference with
> > > other systems by avoiding computation. The lack of energy saving means
> > > it is not a valid model for solving the problem of being a
> > > non-reversible intelligence in an energy-poor environment (which
> > > humans are and most mobile robots will be).
> >
> > This is intriguing, but unclear to me.  Does it entail anything
> > consequential beyond the statement that Solomonoff induction is
> > incomputable?  Any references?
> >
>
> Not that I have found; I tried to write something about it a while
> back, but that is not very appropriate to this audience.
>
> I'll try and write something more coherent, with an example in the
> morning. The following paragraph is just food for thought.
>

<snip some unenlightening stuff>

Okay, back again. The first question is how to deal with energy usage
and interference in systems. They should not be ignored: processing
will take up a significant share of a robot's energy resources, and you
would want it capable of throttling its processing near sensitive
medical/scientific equipment. Even in a friendly RSI AI, the more
resources the system uses, the less can be used for the benefit of
mankind. So both can be used in the definition of the fitness function
of a system. As the fitness function is generally defined over the
output of the system, we shall treat energy use and interference as
output, which occurs whenever there is computation, so that we can
analyse abstract machines with this problem in mind.
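To make that concrete, here is a minimal sketch in Python of what I
mean (the names are mine, purely for illustration): the fitness
function sees a per-step record of energy used alongside the normal
output, rather than the normal output alone.

# Each step of the machine yields its normal output symbol plus a record
# of the energy used producing it; the fitness function scores both.
def fitness(trace, score_output, cost_of_energy):
    # trace: list of (output_symbol, joules_used) pairs, one per step
    outputs = [sym for sym, _ in trace]
    joules = [j for _, j in trace]
    return score_output(outputs) - cost_of_energy(joules)

# e.g. reward matching 1s, penalise total joules (both placeholders)
print(fitness([("1", 0.3), ("1", 1.4), ("0", 0.2)],
              score_output=lambda outs: sum(o == "1" for o in outs),
              cost_of_energy=sum))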

We shall give the AIXI model an extra tape which records all
computational by-product output. Although we could just interleave it
with the normal output, it is easier to separate it out for display.

So let us take as an example an output sequence that is optimal, with
fitness decreasing in proportion to the Hamming distance from it.

Normal Output stream x

1111111111111.....

Computational Byproduct Output z

0000000000000......

Here a 1 on z indicates that over a certain amount of energy E was
used in that second. Fairly simple: we want to minimise computation. As
it stands, AIXI would be oblivious to z and just optimise x to the
limit of its fitness. In order to optimise the computational byproduct
as well, you would have to estimate in some fashion how much energy is
used or how much computation is done, pipe that estimate back to the
input of the system, and give it some way of altering how much it
computes (altering T and L?), so that it can figure out what the right
strategy is.
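
Here is a rough Python sketch of that example. The threshold value, the
toy policy, and the "compute effort" knob are placeholders of mine; the
point is just that the byproduct stream both enters the fitness and
gets fed back as input.

E = 1.0  # joules; a 1 on z means more than E was used in that second

def fitness(x, z):
    # Fitness falls with the Hamming distance of x from the optimal
    # sequence 1111..., and with every second spent over the energy budget.
    hamming = sum(bit != 1 for bit in x)
    return -hamming - sum(z)

def run(policy, seconds=10):
    # The previous second's energy estimate is piped back into the percept,
    # and the action carries a knob for how much to compute next.
    x, z, joules_last = [], [], 0.0
    for _ in range(seconds):
        out_bit, joules = policy(joules_last)
        x.append(out_bit)
        z.append(1 if joules > E else 0)
        joules_last = joules
    return x, z

def cautious(joules_last):
    # Toy policy: always output 1, and cut its effort whenever it notices
    # it went over budget in the previous second.
    return 1, (0.4 if joules_last > E else 1.2)

x, z = run(cautious)
print(fitness(x, z))   # no Hamming errors, but still pays for over-budget seconds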

There are some interesting scenarios where direct evidence of the
utility of the optimal strategy is impossible to attain. This happens
when the optimal strategy precludes collecting that evidence: the
energetic cost of the computation needed to record the strategy's
utility would itself reduce the utility of the strategy. Whether such
scenarios mean much in the real world is another matter.
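
A toy illustration of the kind of scenario I mean, with made-up
numbers:

# Strategy A is the frugal optimum; B is slightly worse but its utility
# is already known. Recording evidence of A's utility costs energy too.
utility_A = 10.0
utility_B = 9.5
cost_of_recording_A = 0.8   # energy spent computing/logging the measurement

# Once you pay to collect the evidence, A no longer beats B:
print(utility_A - cost_of_recording_A > utility_B)   # False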

  Will Pearson
