Hi,

Your philosophical objections aren't really objections to my perspective, so
far as I can tell...

What you said was:

"
I've been saying that Friendliness is impossible to implement because 1)
it's a moving target (as in, changes through time), since 2) its definition
is dependent on context (situational context, cultural context, etc).  In
other words, Friendliness is not something that can be hardwired. It can't
be formalized, coded, designed, implemented, or proved. It is an invention
of the collective psychology of humankind, and every bit as fuzzy as that
sounds. At best, it can be approximated.
"

I don't plan to hardwire beneficialness (by which I may not mean precisely
the same thing as "Friendliness" in Eliezer's vernacular); rather, I plan to
teach it ... to an AGI whose architecture is, by design, well-suited to
learn it...

I do, however, plan to hardwire two things: **a powerful, super-human
capability for empathy** ... and a goal-maintenance system oriented toward
**stability of top-level goals under self-modification**. But I agree this
is different from hardwiring specific goal content ... though it strongly
*biases* the system toward learning certain goals.

-- Ben


On Sat, Aug 30, 2008 at 1:42 PM, Terren Suydam <[EMAIL PROTECTED]> wrote:

>
> I agree with that to the extent that "theoretical advances" could address
> the philosophical objections I am making. But until those are dealt with,
> experimentation is a waste of time and money.
>
> If I were talking about how to build faster-than-light travel, you would
> want to know how I plan to overcome the theoretical limitations. You
> wouldn't fund experimentation on that until those in-principle objections
> had been dealt with.
>
>
> --- On Sat, 8/30/08, Ben Goertzel <[EMAIL PROTECTED]> wrote:
>
>> About Friendly AI...
>>
>>> Let me put it this way: I would think anyone in a position to offer
>>> funding for this kind of work would require good answers to the above.
>>>
>>> Terren
>>
>> My view is a little different. I think these answers are going to come
>> out of a combination of theoretical advances with lessons learned via
>> experimenting with early-stage AGI systems, rather than being arrived at
>> in advance based on pure armchair theorization...
>>
>> -- Ben G



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"Nothing will ever be attempted if all possible objections must be first
overcome " - Dr Samuel Johnson


