Hi again,

I have now gotten around to reading Ben's essay and would like to share my comments here.

On 9/10/06, Ben Goertzel <[EMAIL PROTECTED]> wrote:
Not to be left out, I also wrote down some of my own thoughts
following our interesting chat in that Genova cafe' (which of course
followed up a long series of email chats on similar themes), which you
may find here:

http://www.goertzel.org/papers/LimitationsOnFriendliness.pdf

Note to Ben: I ordered your book on AGI
(http://www.amazon.de/gp/product/354023733X/) on Amazon about four
weeks ago, but it has not shipped yet. Do you know when it will become
available? Amazon listed August 2006. Sorry to bother you ;-)

On Ben's essay: Ben argues that, due to incomputable complexity,
'friendliness' can only be guaranteed under unsatisfactorily narrow
circumstances. Whether or not one agrees, it would follow that if this
is the case, then substituting friendliness with one or all of the
alternative goals Ben proposes, namely "compassion", "growth" and
"choice", would not make a difference, as the same incomputability
problem applies to all of these goals equally.

Reading the essay reminded me of Marcus Hutter's work on the Universal
Algorithmic Agent AIXI (http://www.idsia.ch/~marcus/ai/aixigentle.htm).

From the abstract:

"Decision theory formally solves the problem of rational agents in
uncertain worlds if the true environmental prior probability
distribution is known. Solomonoff's theory of universal induction
formally solves the problem of sequence prediction for unknown prior
distribution. We combine both ideas and get a parameterless theory of
universal Artificial Intelligence. We give strong arguments that the
resulting AIXI model is the most intelligent unbiased agent possible.
We outline for a number of problem classes, including sequence
prediction, strategic games, function minimization, reinforcement and
supervised learning, how the AIXI model can formally solve them. The
major drawback of the AIXI model is that it is uncomputable. To
overcome this problem, we construct a modified algorithm AIXItl, which
is still effectively more intelligent than any other time t and space
l bounded agent. The computation time of AIXItl is of the order t·2^l.
Other discussed topics are formal definitions of intelligence order
relations, the horizon problem and relations of the AIXI theory to
other AI approaches."
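
If I recall Hutter's paper correctly, the core of AIXI is roughly the
following expectimax expression (in LaTeX notation, simplified, with
x_k = o_k r_k the observation-reward pair at cycle k and m the horizon):

  \dot{a}_k := \arg\max_{a_k} \sum_{x_k} \cdots \max_{a_m} \sum_{x_m}
    (r_k + \cdots + r_m) \sum_{q : U(q, a_{1:m}) = x_{1:m}} 2^{-\ell(q)}

In words: the agent picks the action that maximizes expected future
reward under a Solomonoff-style mixture over all programs q (run on a
universal Turing machine U) that are consistent with the interaction
history so far. The 2^{-\ell(q)} weighting over all programs is what
makes the model uncomputable - and note that this holds no matter what
the rewards r_k encode.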

It follows that the AIXItl algorithm, given a formalized friendliness
goal as its reward function, would be effectively more friendly than
any other time t and space l bounded agent.
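
To make the time/space-bounded construction concrete, here is a toy
Python sketch of the search idea behind AIXItl. To be clear, this is
not Hutter's actual construction (which additionally requires each
candidate policy to carry a proof bounding its own value);
friendliness_reward and the lookup-table policies are hypothetical
stand-ins of my own:

import itertools

ACTIONS = ["cooperate", "defect"]

def friendliness_reward(history, action):
    # Hypothetical placeholder for a formalized friendliness goal:
    # here it simply rewards cooperative actions.
    return 1.0 if action == "cooperate" else 0.0

def enumerate_policies(l):
    # Stand-in for "all programs of description length <= l": every
    # lookup table of size <= l that cycles through fixed actions.
    for size in range(1, l + 1):
        for table in itertools.product(ACTIONS, repeat=size):
            yield lambda history, table=table: table[len(history) % len(table)]

def aixitl_step(history, t, l):
    # Act on the recommendation of the best policy among all policies
    # of length <= l, scoring each with a rollout of at most t steps.
    best_action, best_value = None, float("-inf")
    for policy in enumerate_policies(l):
        sim, value = list(history), 0.0
        for _ in range(t):  # time bound: at most t simulated steps per policy
            action = policy(sim)
            value += friendliness_reward(sim, action)
            sim.append(action)
        if value > best_value:
            best_action, best_value = policy(history), value
    return best_action

print(aixitl_step(history=[], t=10, l=3))  # -> 'cooperate'

Note that the work per decision is on the order of t·2^l rollout steps
over the roughly 2^l candidate policies, which matches the bound quoted
in the abstract.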

Personally I find this satisfying in the sense that, once "compassion",
"growth" and "choice" or the classical "friendliness" have been
defined, an optimal algorithm will be available to achieve the goal.

Best regards,

Stefan
--
Stefan Pernar
App. 1-6-I, Piao Home
No. 19 Jiang Tai Xi Lu
100016 Beijing
China
Mobile: +86 1391 009 1931
Skype: Stefan.Pernar
