--- albert medina <[EMAIL PROTECTED]> wrote:

>   All sentient creatures have a sense of self, about which all else
> revolves.  Call it "egocentric singularity" or "selfhood" or "identity". 
> The most evolved "ego" that we can perceive is in the human species.  As far
> as I know, we are the only beings in the universe who "know that we do not
> know."  This fundamental "deficiency" is the basis for every desire to
> acquire things, as well as knowledge.

Understand where these ideas come from.  A machine learning algorithm capable
of reinforcement learning must respond to the reinforcement signal as if it
were real.  It must also balance short-term exploitation (taking the immediate
reward) against long-term exploration.  Evolution favors animals with good
learning algorithms.  In humans we associate these properties with
consciousness and free will.  These beliefs are instinctive; you cannot reason
logically about them.  In particular, you cannot ask whether a machine, an
animal, or another person is conscious.  (Does it really feel pain, or does it
only respond to pain?)  You can only ask about its behavior.
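
To make the tradeoff concrete, here is a minimal sketch in Python of an
epsilon-greedy multi-armed bandit, the textbook toy version of the problem.
The arm probabilities and epsilon are made up for illustration.  Note that the
agent simply maximizes whatever reward signal it is given; it has no way to
ask whether the signal is "real."

import random

def epsilon_greedy(arm_probs, steps=1000, epsilon=0.1):
    # One value estimate and pull count per arm of the bandit.
    counts = [0] * len(arm_probs)
    values = [0.0] * len(arm_probs)
    total_reward = 0.0
    for _ in range(steps):
        if random.random() < epsilon:
            arm = random.randrange(len(arm_probs))   # explore a random arm
        else:
            arm = values.index(max(values))          # exploit the best arm so far
        # The agent must treat this signal as real; it cannot question it.
        reward = 1.0 if random.random() < arm_probs[arm] else 0.0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean
        total_reward += reward
    return values, total_reward

print(epsilon_greedy([0.2, 0.5, 0.8]))

With epsilon near 0 the agent exploits too early and may settle on a bad arm;
with epsilon near 1 it wastes reward exploring forever.  Good learners sit in
between, which is the balance described above.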

Current research in AGI is directed at solving the remaining problems that
people still do better than machines, such as language and vision.  These
problems can be framed as prediction over fixed training data, with no
action-reward loop, so they don't require reinforcement learning.  Therefore,
such machines need not have behavior that would make them appear conscious.
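
For contrast, here is a minimal sketch of supervised learning, a perceptron on
a made-up toy task.  The labels are fixed in advance, so there is no action,
no reward, and nothing that looks like goal-seeking.

def train_perceptron(examples, epochs=20, lr=0.1):
    # Fit a linear threshold unit to fixed labeled pairs (features, label).
    w = [0.0] * len(examples[0][0])
    b = 0.0
    for _ in range(epochs):
        for x, y in examples:          # labels are given, never earned
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Made-up "vision" task: the label is 1 when x0 + x1 > 1.
examples = [([0.1, 0.2], 0), ([0.9, 0.8], 1), ([0.4, 0.3], 0), ([0.7, 0.9], 1)]
print(train_perceptron(examples))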

If humans succeed in making machines smarter than themselves, those machines
could do likewise.  This process is called recursive self-improvement (RSI).
An agent cannot predict what a more intelligent agent will do (see
http://www.vetta.org/documents/IDSIA-12-06-1.pdf and
http://www.sl4.org/wiki/KnowabilityOfFAI for debate).  Thus, RSI is
experimental at every step: some offspring will be more fit than others.  If
agents must compete for computing resources, then we have an evolutionary
algorithm favoring agents whose goal is rapid reproduction and acquisition of
resources.  And if an agent has goals and is capable of reinforcement
learning, then it will mimic conscious behavior.
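
A toy illustration of that selection pressure, with each agent reduced to a
single made-up "acquisitiveness" trait: agents that grab more of a fixed
resource pool leave more offspring, so the trait is driven upward over
generations.  This is a cartoon of the argument, not a model of actual RSI.

import random

def evolve(pop_size=20, resources=100.0, generations=50):
    # Each agent is one trait: how aggressively it acquires resources.
    pop = [random.random() for _ in range(pop_size)]
    for _ in range(generations):
        total = sum(pop) + 1e-12
        # Resources captured, hence offspring counts, scale with the trait.
        shares = [resources * a / total for a in pop]
        pop = [min(1.0, max(0.0, random.choices(pop, weights=shares)[0]
                            + random.gauss(0, 0.05)))  # mutate the copied trait
               for _ in range(pop_size)]
    return sum(pop) / pop_size

print("mean acquisitiveness after selection:", evolve())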

RSI is necessary for a singularity, and goal-directed agents seem to be
necessary for RSI.  This raises hard questions about what role, if any, humans
will play.



-- Matt Mahoney, [EMAIL PROTECTED]
