On 9/14/06, Ben Goertzel <[EMAIL PROTECTED]> wrote:
> However, I am not so sure this is the most sensible approach to
> take....  The details of my own personal Friendliness criterion are
> not that important (nor are the details of *anyone*'s particular
> Friendliness criterion).  It may be more sensible to create an AI with
> a more abstract top-level goal representing more abstract and general
> values....

Hi Ben,

After reading KnowabilityOfFAI and perhaps coming to an Awful
Realization, it seems Friendliness is plausible given strict criteria
for an optimization target. It also seems an optimization target is
necessary in any case, whether its criteria are strict or not.

What I think would be illuminating are some characterizations of these
optimization targets from the leading researchers. These
characterizations might show which optimization target seems
Friendlier than the others, and whether it accordingly requires the
strictest criteria.

Incidentally, it recently dawned on me that SIAI refers to
developing an 'AI' rather than an 'AGI'. This seems to suggest a
plan to build a relatively narrow AI, albeit one aimed at powerful
optimization. I genuinely wonder whether this is an approach worth
considering more deeply, to the extent that's possible.

I'll understand if this would be too much to get into adequately,
but I do want to emphasize my interest in getting a somewhat better
idea of some of these planned optimization targets.

Best,
Nate
