On 5/3/07, William Pearson <[EMAIL PROTECTED]> wrote:
1) The theory is not completely divorced from brains

It doesn't have to describe everything about human brains, but you should
be able to see how a roughly similar sort of system might be running in
the human brain, and how it could account for things such as motivation
and neural plasticity.

My focus is on truth maintenance, but I agree that the AGI should have
motivation and plasticity (in the form of machine learning, though not
necessarily neural networks).


2) It takes some note of theoretical computer science

So nothing that ignores the limits on collecting information from the
environment, or that promises unlimited, bug-free creation or alteration
of programs.


Certainly.

3) A reason why it is different from normal computers/programs

How it deals with meaning and other things. If it could explain
consciousness in some fashion, I would have to abandon my own theories
as well.


1. I'd define consciousness := self-awareness := a self-reflexive
representation in the KB.  So a thermostat is not conscious because it's not
*aware* of what it's doing.  (A toy sketch of this definition follows below,
after point 3.)

2. Consciousness is not central to AGI unless you want to make it
"sentient", by which I mean having its own emotions.  Why would someone want
that?

3. The lack of consciousness does not prevent the AGI from having robust
knowledge of human values.  That's what's needed for an AGI to be an aide to
humans.
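
To make the definition in point 1 concrete, here is a toy sketch in Python
(my own illustration; the class and predicate names are hypothetical and not
taken from any particular system).  The KB is just a bag of (subject,
predicate, object) triples, and an agent counts as "self-aware" in my sense
when some of its triples have the agent itself as their subject.  A
thermostat's KB holds only a reading and a setpoint, so it fails the test.

# Toy sketch of "consciousness := a self-reflexive representation in the KB".
# All names here are illustrative only.

class KB:
    def __init__(self):
        self.triples = set()        # (subject, predicate, object) triples

    def add(self, subject, predicate, obj):
        self.triples.add((subject, predicate, obj))

    def facts_about(self, subject):
        # All facts whose subject is the given entity.
        return {t for t in self.triples if t[0] == subject}


def is_self_aware(agent_id, kb):
    # Self-reflexive in the sense of point 1: the KB contains facts
    # about the agent itself.
    return len(kb.facts_about(agent_id)) > 0


# An agent that represents its own goal and current activity passes the test.
agent_kb = KB()
agent_kb.add("room", "temperature", 19)
agent_kb.add("self", "goal", "keep room at 21C")
agent_kb.add("self", "currently_doing", "heating")
print(is_self_aware("self", agent_kb))          # True

# A thermostat holds only a reading and a setpoint; nothing in its state
# refers to the thermostat itself, so by this definition it is not conscious.
thermostat_kb = KB()
thermostat_kb.add("room", "temperature", 19)
thermostat_kb.add("room", "setpoint", 21)
print(is_self_aware("thermostat", thermostat_kb))   # False

This captures only the bare structural notion of self-reference from point 1,
nothing more; that deflationary reading is exactly what I intend.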

YKY
