Hi,

I know Selmer Bringsjord (the leader of this project) and his work fairly
well.

He's an interesting guy, and I'm wary of misrepresenting his views in a
brief summary.  But I'll try.

First, an interesting point is that Selmer does not believe strong AI is
possible on traditional digital computers.  Possibly related to this is that
he is a serious Christian theological thinker.

Second, his approach to AI is very strongly logic-based, and his approach to
implementing morality is based on deontic logic, the formal logic of
obligation and permission: an attempt to formalize explicit rules defining
what is obligatory, permitted and forbidden, and thereby the structure of
good and evil.
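
To give a rough flavor of what that means, here's a toy sketch in Python.
I stress this is purely my own hypothetical illustration, with made-up
action names and rule sets, and nothing like the actual formalism or code
in Selmer's system: a deontic-style rule base comes down to crisp
statements about which actions are forbidden or obligatory, checked
mechanically against whatever an agent proposes to do.

# Toy deontic-style rule base: crisp obligation/prohibition facts,
# checked mechanically against an agent's proposed actions.
# Purely hypothetical names; not Bringsjord's formalism or code.

FORBIDDEN = {"deceive", "harm_innocent"}    # roughly O(~a): prohibited
OBLIGATORY = {"tell_truth_when_asked"}      # roughly O(a): required

def evaluate(proposed_actions):
    """Classify a set of proposed actions against the rule base."""
    violations = [a for a in proposed_actions if a in FORBIDDEN]
    omissions = [a for a in OBLIGATORY if a not in proposed_actions]
    return {"violations": violations, "omissions": omissions}

print(evaluate({"deceive", "tell_truth_when_asked"}))
# -> {'violations': ['deceive'], 'omissions': []}

The point being: whatever "evil" is there lives entirely in which strings
we humans chose to put in those sets.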

So, yes, his stuff is not ELIZA-like; it's based on a fairly sophisticated
crisp-logic theorem-prover back end and a well-thought-out cognitive
architecture.

On the other hand, my own strong feeling is that this kind of
crisp-logic-theorem-proving based approach to AI is never going to achieve
any kind of broad, deep or interesting general intelligence ... even though
it may do things that, on the surface, in toy domains, give that appearance
due to the clever underlying logical formalism.  I stress that Selmer is a
very deep-thinking, insightful and creative guy, but nonetheless, I think
the basic approach is far too limited and ultimately wrongheaded where AGI
is concerned.

My view is that these AI systems of his are not acting evil in any
significant way -- rather, they are formulaically enacting formal structures
that some humans created in order to capture some abstract properties of
evil.  But without grounding in perception, action and reflective
pattern-recognition, there is no evil there ... any more than a sketch of
rat poison is actually poisonous...

-- Ben G

On Sun, Nov 2, 2008 at 4:17 PM, Nathan Cook <[EMAIL PROTECTED]> wrote:

> This article (http://www.sciam.com/article.cfm?id=defining-evil) about a
> chatbot programmed to have an 'evil' intentionality, from Scientific
> American, may be of some interest to this list. Reading the researcher's
> personal and laboratory websites (http://www.rpi.edu/~brings/,
> http://www.cogsci.rpi.edu/research/rair/projects.php), it seems clear to
> me that the program is more than an Eliza clone.
>
> --
> Nathan Cook



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"A human being should be able to change a diaper, plan an invasion, butcher
a hog, conn a ship, design a building, write a sonnet, balance accounts,
build a wall, set a bone, comfort the dying, take orders, give orders,
cooperate, act alone, solve equations, analyze a new problem, pitch manure,
program a computer, cook a tasty meal, fight efficiently, die gallantly.
Specialization is for insects."  -- Robert Heinlein