For third grade, my oldest son Zar went to a progressive charter school
where they did one silly thing: each morning in homeroom the kids had to
write on a piece of paper what their goal for the day was.  Then at the end
of the day they had to write down how well they did at achieving their goal.

Being a Goertzel, Zar started out with "My goal is to meet my goal" and
after a few days started using "My goal is not to meet my goal."

Soon many of the boys in his class were using "My goal is not to meet my
goal".

Self-referential goals were banned at the school ... but soon the whole silly
goal-setting exercise was abolished (sparing the kids a bit of time-wasting
each day).

What happens when AIXI is given the goal "My goal is not to meet my goal"?
;-)

I suppose its behavior becomes essentially random?
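
To make this concrete, here is a minimal toy sketch in Python (a two-action
setting of my own devising, not Hutter's actual AIXI formalism) of why such
a goal gives an expected-reward maximizer nothing to optimize:

  ACTIONS = ["meet_goal", "dont_meet_goal"]

  def reward(action_taken):
      # "My goal is not to meet my goal": the agent is rewarded only if
      # the action it takes differs from the action it takes -- which
      # never happens, so no action can ever earn reward.
      goal_action = action_taken
      return 1.0 if action_taken != goal_action else 0.0

  print({a: reward(a) for a in ACTIONS})
  # -> {'meet_goal': 0.0, 'dont_meet_goal': 0.0}

Every action has the same (zero) value, so an expected-reward maximizer has
no basis for preferring any action over any other; any tie-breaking rule,
including a random one, is consistent with "optimal" behavior.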

If one started a Novamente system off with the prime goal "My goal is not to
meet my goal", it would probably end up de-emphasizing and eventually
killing this goal.  Its long-term dynamics would not be random, because some
other goal (or set of goals) would arise in the system and become dominant.
But it's hard to say in advance what those would be.
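
As a purely hypothetical illustration (this is not Novamente's actual goal
machinery, just a generic toy in Python, with the competing goal names made
up for the example), the kind of dynamic I have in mind looks like this:
a goal that never registers success gradually loses importance and gets
pruned, while goals that do succeed come to dominate.

  import random

  # Importance weights for a pool of goals.
  goals = {"not_meet_goal": 1.0, "explore": 0.1, "learn": 0.1}
  DECAY, BOOST, PRUNE = 0.95, 1.2, 1e-3

  def achieved(goal):
      # The self-referential goal never registers success; the other
      # goals are crudely modeled as succeeding about 30% of the time.
      return goal != "not_meet_goal" and random.random() < 0.3

  for _ in range(300):
      goals = {g: w * (BOOST if achieved(g) else DECAY)
               for g, w in goals.items()}
      goals = {g: w for g, w in goals.items() if w > PRUNE}  # kill dead goals

  print(goals)  # "not_meet_goal" has been pruned; the others dominate

Of course this toy fixes the competing goals in advance; in a real system
they would arise and shift dynamically, which is why it's hard to say in
advance which ones would become dominant.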

-- Ben G



> -----Original Message-----
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On
> Behalf Of Ben Goertzel
> Sent: Tuesday, February 11, 2003 4:33 PM
> To: [EMAIL PROTECTED]
> Subject: RE: [agi] unFriendly AIXI
>
>
>
> > The formality of Hutter's definitions can give the impression
> > that they cannot evolve. But they are open to interactions
> > with the external environment, and can be influenced by it
> > (including evolving in response to it). If the reinforcement
> > values are for human happiness, then the formal system and
> > humans together form a symbiotic system. This symbiotic
> > system is where you have to look for the friendliness. This
> > is part of an earlier discussion at:
> >
> >   http://www.mail-archive.com/agi@v2.listbox.com/msg00606.html
> >
> > Cheers,
> > Bill
>
> Bill,
>
> What you say is mostly true.
>
> However, taken literally Hutter's AGI designs involve a fixed,
> precisely-defined goal function.
>
> This strikes me as an "unsafe" architecture in the sense that we may not
> get the goal exactly right the first time around.
>
> Now, if humans iteratively tweak the goal function, then indeed, we have a
> synergetic system, whose dynamics include the dynamics of the goal-tweaking
> humans...
>
> But what happens if the system interprets its rigid goal to imply that it
> should stop humans from tweaking its goal?
>
> Of course, the goal function should be written in such a way as to make it
> unlikely the system will draw such an implication...
>
> It's also true that tweaking a superhumanly intelligent system's goal
> function may be very difficult for us humans with our limited
> intelligence.
>
> Making the goal function adaptable makes AIXItl into something a bit
> different... and making the AIXItl code rewritable by AIXItl makes it into
> something even more different...
>
> -- Ben G
>
