comments below...

[BG]
Hi,

Your philosophical objections aren't really objections to my perspective, as 
far as I have understood them...

[TS]
Agreed. They're to the Eliezer perspective that Vlad is arguing for.

[BG]
I don't plan to hardwire beneficialness (by which I may not mean precisely the 
same thing as "Friendliness" in Eliezer's vernacular); I plan to teach it ... 
to an AGI with an architecture that's well-suited to learn it, by design...


[TS]
This is essentially what we do with our kids, so no objections to the 
methodology here. But from the "you have to guarantee it or we're doomed" 
perspective, that's not good enough.

[BG]
I do however plan to hardwire **a powerful, super-human capability for 
empathy** ... and a goal-maintenance system hardwired toward **stability of 
top-level goals under self-modification**. But I agree this is different from 
hardwiring specific goal content ... though it strongly *biases* the system 
toward learning certain goals.

[TS]
Hardwired empathy strikes me as a basic oxymoron. Empathy must involve embodied 
experience and the ability to imagine the embodied experience of another. When 
we have an empathic experience, it's because we see ourselves in another's 
situation - it's hard to understand what empathy could mean without that basic 
subjective aspect.
