Thanks! I considered another title, 'Apotheotron', but a friend found it
too hostile and unfriendly, so I changed it to Jame5, which is an acronym
for Joint Artificial Mind Experiment Five. Looking forward to hearing from
you.
Stefan

On 10/26/07, Candice Schuster <[EMAIL PROTECTED]> wrote:
>
>  Stefan,
>
> I plan to read your roughly 112-page book 'Jame5' this weekend, while
> sitting in taxis and on planes to Ireland.  PS: I really like the graphics
> on the cover; interesting that the 'fireball' is the one that is about to
> hit the rest of the balls!  PPS: Why the name Jame5?
>
> Candice
>
>
>  ------------------------------
> Date: Fri, 26 Oct 2007 11:13:17 +0800
> From: [EMAIL PROTECTED]
> To: [email protected]
> Subject: Re: [singularity] 14 objections against AI/Friendly AI/The
> Singularity answered
>
> Great write-up. My special interest is AI friendliness, so I would like to
> comment on objection 11.
>
> CEV is a concept that avoids answering the question of what friendliness
> is by letting an advanced AI figure out what 'good' might be. Doing so makes
> endowing an AI implementation with friendliness infeasible. CEV is
> circular. See, for example, the following core sentence:
>
> "...if we knew more, thought faster, were more the people we wished we
> were, had grown up farther together; where the extrapolation converges
> rather than diverges, where our wishes cohere rather than interfere;
> extrapolated as we wish that extrapolated, interpreted as we wish that
> interpreted..."
>
> Simplified: "If we were better people, we would be better people." True,
> but it adds no value, as key concepts such as 'friendliness', 'good', 'better'
> and 'benevolence' remain undefined.
>
> In my recent book (see www.jame5.com) I take the definition of
> friendliness further by grounding key terms such as 'good' and
> 'friendly'.
>
> If you would rather not read my complete 45,000-word book, I suggest
> focusing on the end of chapter 9 through chapter 12; those chapters sum up
> the key concepts. Furthermore, I will post a 7-page paper (hopefully today)
> that further condenses the core ideas of what benevolence means and how
> hard goals for a friendly AI can be derived from those ideas.
>
> Kind regards,
>
> Stefan
>
> On 10/26/07, Kaj Sotala <[EMAIL PROTECTED]> wrote:
>
> Can be found at http://www.saunalahti.fi/~tspro1/objections.html.
>
> Answers the following objections:
>
> 1: There are limits to everything. You can't get infinite growth.
> 2: Extrapolation of graphs doesn't prove anything. It doesn't show
> that we'll have AI in the future.
> 3: A superintelligence could rewrite itself to remove human tampering.
> Therefore we cannot build Friendly AI.
> 4: What reason would a super-intelligent AI have to care about us?
> 5: The idea of a hostile AI is anthropomorphic.
> 6: Intelligence is not linear.
> 7: There is no such thing as a human-equivalent AI.
> 8: Intelligence isn't everything. An AI still wouldn't have the
> resources of humanity.
> 9: It's too early to start thinking about Friendly AI.
> 10: Development towards AI will be gradual. Methods will pop up to deal
> with it.
> 11: "Friendliness" is too vaguely defined.
> 12: What if the AI misinterprets its goals?
> 13: Couldn't AIs be built as pure advisors, so they wouldn't do
> anything themselves? That way, we wouldn't need to worry about
> Friendly AI.
> 14: Machines will never be placed in positions of power.
>
> Constructive criticism welcome, as always.
>
>
> --
> http://www.saunalahti.fi/~tspro1/ | http://xuenay.livejournal.com/
>
>
> Organizations worth your time:
> http://www.singinst.org/ | http://www.crnano.org/ | http://lifeboat.com/
>
>
>
>
>
> --
> Stefan Pernar
> 3-E-101 Silver Maple Garden
> #6 Cai Hong Road, Da Shan Zi
> Chao Yang District
> 100015 Beijing
> P.R. CHINA
> Mobil: +86 1391 009 1931
> Skype: Stefan.Pernar
>
>



-- 
Stefan Pernar
3-E-101 Silver Maple Garden
#6 Cai Hong Road, Da Shan Zi
Chao Yang District
100015 Beijing
P.R. CHINA
Mobil: +86 1391 009 1931
Skype: Stefan.Pernar

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604&id_secret=57774985-d99adb
