> [BG]
> I do however plan to hardwire **a powerful, super-human capability for
> empathy** ... and a goal-maintenance system hardwired toward **stability of
> top-level goals under self-modification**.   But I agree this is different
> from hardwiring specific goal content ... though it strongly *biases* the
> system toward learning certain goals.
>
> [TS]
> Hardwired empathy strikes me as a basic oxymoron. Empathy must involve
> embodied experience and the ability to imagine the embodied experience of
> another. When we have an empathic experience, it's because we see ourselves
> in another's situation - it's hard to understand what empathy could mean
> without that basic subjective aspect.
>
>

I plan to write a blog post on this sometime during the next week...

Humans can do empathy, it seems, in large part because we have mirror neuron
systems that allow us to do so.  But mirror neuron systems are a fairly lame
way of mirroring others' actions, perceptions and emotions.  In essence, what
I suggest is that AIs can incorporate a superior, more accurate sort of
"mirror neuron system" (more accurate in part because systems like
NM/OpenCogPrime don't use neural nets, but rather more precise probabilistic
logic methods).
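
To make the idea a bit more concrete ahead of the blog post: below is a
minimal, hypothetical sketch of what "probabilistic mirroring" might look
like, i.e. inferring another agent's hidden emotional state from observed
cues by exact Bayesian updating rather than via a neural mirror circuit.
The emotion labels, the cue likelihoods, and the mirror() function are all
illustrative assumptions for the sake of the example, not NM/OpenCogPrime's
actual representation or API.

```python
# Hypothetical sketch: "mirroring" another agent's emotional state
# by exact Bayesian inference over observed behavioral cues.
# All names and numbers are illustrative, not OpenCogPrime internals.

EMOTIONS = ["joy", "fear", "anger"]

# P(cue | emotion): likelihoods the observer has previously learned.
LIKELIHOOD = {
    "smile":   {"joy": 0.80, "fear": 0.10, "anger": 0.10},
    "tremble": {"joy": 0.05, "fear": 0.80, "anger": 0.15},
    "shout":   {"joy": 0.10, "fear": 0.20, "anger": 0.70},
}

def mirror(observed_cues, prior=None):
    """Return P(emotion | cues): the observer's 'mirrored' model
    of the other agent's emotional state."""
    # Start from a uniform prior unless one is supplied.
    belief = dict(prior) if prior else {e: 1.0 / len(EMOTIONS) for e in EMOTIONS}
    for cue in observed_cues:
        # Bayes update: multiply by the likelihood of this cue...
        for e in EMOTIONS:
            belief[e] *= LIKELIHOOD[cue][e]
        # ...then renormalize to keep an exact probability distribution.
        total = sum(belief.values())
        belief = {e: p / total for e, p in belief.items()}
    return belief

# Example: seeing trembling followed by shouting shifts the mirrored
# belief sharply toward fear and anger, away from joy.
print(mirror(["tremble", "shout"]))
```

The point of the toy example is that the "mirroring" here is an explicit,
inspectable probability distribution maintained by exact inference, which is
the sense in which a logic-based system could mirror more accurately than a
noisy neural circuit does.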

When I write the blog post I'll link to it here...

ben


