Shane Legg wrote:
It would be much easier to aim at the right target if the target was
properly defined. There are endless megabytes of text about friendly
AI on the internet, but still no precise formal definition. That was, and
remains, my main problem with FAI. If you believe that the only safe
AI is one that has been mathematically proven to be 100% safe, then
you will need a 100% watertight formal mathematical definition of what
this means. Until I see such a definition, I'm not convinced that FAI is
really going anywhere.
I think Eliezer has made it pretty clear previously that there is a big
difference between solving the problems of (1) Friendliness content and (2) a
mechanism for reliable long-term guidance of a self-modifying AGI based on that
content, preferably a mathematically provable mechanism.
At the moment he's trying to work on the latter. Things like CEV are about the
former.
From the preface of http://www.sl4.org/wiki/KnowabilityOfFAI:
"This work is narrowly focused; for example, it doesn't try to ask - given
that one has the power to create an AI that is "predictably Friendly" for
some chosen sense of "Friendly" - what "Friendly" should mean. (Similarly,
the document CoherentExtrapolatedVolition, which does focus on choosing a
sense of "Friendly", disclaims any attempt to say how a thus-Friendly AI
might be built.)"
There's a bit more in section 5.2. You're probably already aware of all this,
but I'll continue.
To have a true FAI, the suggestion is that you have to crack both of these
separate problems and then properly combine them with your seed AGI. You can
work on debunking the possibility of FAI by breaking either piece of this plan,
so I don't see why the whole thing needs to be assembled (mathematically)
before you can take a crack at it.
So is the complaint that Eliezer hasn't given enough information regarding his
ideas for (2) above to even let you take a shot at breaking them? Even if he
hasn't, perhaps you could still work on your own ideas for how to formalize the
problem, but the best outcome would be to get your heads together and try to
get on the same page.
--
Brian Atkins
Singularity Institute for Artificial Intelligence
http://www.singinst.org/