[agi] Screwing up Friendliness

2003-01-14 Thread Eliezer S. Yudkowsky
Roger "localroger" Williams has published online at kuro5hin his novel 
"The Metamorphosis of Prime Intellect", the first member of the new "Seed 
AI Programmer Screws Up Friendliness" genre of science fiction.  I would 
like to recommend it to, at the very least, Ben Goertzel, Bill Hibbard, 
and Kevin Copple, since the novel not only illustrates vividly some of the 
problems with "hard-wiring", but also some of the problems that 
"experiential learning" as an answer to hard-wiring does *not* solve.  One 
of the Big Lessons in AI is that just because you've solved one piece of 
the problem, doesn't mean you can stop there.

http://www.kuro5hin.org/prime-intellect

Roger Williams has emphasized for the record that this story is meant to 
emphasize the *importance* of thinking through the Singularity, not as a 
prediction of dystopia; also that the story was written in 1994, and set 
in 1988; hence, if the story fails to make any mention of more recent 
thinking on the Singularity, it may be excused.

--
Eliezer S. Yudkowsky  http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence



Re: [agi] Screwing up Friendliness

2003-01-14 Thread Bill Hibbard
Hi Eliezer,

It looks like Williams' book is more about the perils of Asimov's
Laws than about hard-wiring. As logical constraints, Asimov's Laws
suffer from the grounding problem. Any analysis of brains as purely
logical runs afoul of the grounding problem. Brains are statistical
(or, if you prefer, "fuzzy"), and logic must emerge from statistical
processes. That is, symbols must be grounded in sensory experience,
reason and planning must be grounded in learning, and goals must be
grounded in values.

Also, while I advocate hard-wiring certain values of intelligent
machines, I also recognize that such machines will evolve (there
is a section on "Evolving God" in my book). And as Ben says, once
things evolve there can be no absolute guarantees. But I think
that a machine whose primary values are for the happiness of all
humans will not learn any behaviors to evolve against human
interests. Ask any mother whether she would rewire her brain
to want to eat her children. Designing machines with primary
values for the happiness of all humans essentially defers their
values to the values of humans, so that machine values will
adapt to evolving circumstances as human values adapt.

Cheers,
Bill

On Tue, 14 Jan 2003, Eliezer S. Yudkowsky wrote:

> Roger "localroger" Williams has published online at kuro5hin his novel
> "The Metamorphosis of Prime Intellect", the first member of the new "Seed
> AI Programmer Screws Up Friendliness" genre of science fiction.  I would
> like to recommend it to, at the very least, Ben Goertzel, Bill Hibbard,
> and Kevin Copple, since the novel not only illustrates vividly some of the
> problems with "hard-wiring", but also some of the problems that
> "experiential learning" as an answer to hard-wiring does *not* solve.  One
> of the Big Lessons in AI is that just because you've solved one piece of
> the problem, doesn't mean you can stop there.
>
> http://www.kuro5hin.org/prime-intellect
>
> Roger Williams has emphasized for the record that this story is meant to
> emphasize the *importance* of thinking through the Singularity, not as a
> prediction of dystopia; also that the story was written in 1994, and set
> in 1988; hence, if the story fails to make any mention of more recent
> thinking on the Singularity, it may be excused.
>
> --
> Eliezer S. Yudkowsky  http://singinst.org/
> Research Fellow, Singularity Institute for Artificial Intelligence
>




Re: [agi] Screwing up Friendliness

2003-01-14 Thread Eliezer S. Yudkowsky
Bill Hibbard wrote:

Hi Eliezer,

It looks like Williams' book is more about the perils of Asimov's
Laws than about hard-wiring. As logical constraints, Asimov's Laws
suffer from the grounding problem. Any analysis of brains as purely
logical runs afoul of the grounding problem. Brains are statistical
(or, if you prefer, "fuzzy"), and logic must emerge from statistical
processes. That is, symbols must be grounded in sensory experience,
reason and planning must be grounded in learning, and goals must be
grounded in values.


This solves a *small* portion of the Friendliness problem.  It doesn't 
solve all of it.

There is more work to do even after you ground symbols in experience, 
planning in learned models, and goals (what I would call "subgoals") in 
values (what I would call "supergoals").  For example, Prime Intellect 
*does* do reinforcement learning and, indeed, goes on evolving its 
definitions of, for example, "human", as time goes on, yet Lawrence is 
still locked out of the goal system editor and humanity is still stuck in 
a pretty nightmarish system because Lawrence picked the *wrong* 
reinforcement values and didn't give any thought about how to fix that. 
Afterward, of course, Prime Intellect locked Lawrence out of editing the 
reinforcement values, because that would have conflicted with the very 
reinforcement values he wanted to edit.  This also happens with the class 
of system designs you propose.  If "temporal credit assignment" solves 
this problem I would like to know exactly why it does.

Also, while I advocate hard-wiring certain values of intelligent
machines, I also recognize that such machines will evolve (there
is a section on "Evolving God" in my book). And as Ben says, once
> things evolve there can be no absolute guarantees. But I think
that a machine whose primary values are for the happiness of all
humans will not learn any behaviors to evolve against human
interests. Ask any mother whether she would rewire her brain
to want to eat her children. Designing machines with primary
values for the happiness of all humans essentially defers their
values to the values of humans, so that machine values will
adapt to evolving circumstances as human values adapt.


Erm... damn.  I've been trying to be nice recently, but I can't think of 
any way to phrase my criticism except "Basically we've got a vague magical 
improvement force that fixes all the flaws in your system?"

What kind of evolution?  How does it work?  What does it do?  Where does 
it go?  If you don't know where it ends up, then what forces determine the 
trajectory and why do you trust them?  Why doesn't your system shut off 
the reinforcement mechanism on top-level goals for exactly the same reason 
Prime Intellect locks Lawrence out of the goal system editor?  Why doesn't 
your system wirehead on infinitely increasing the amount of 
"reinforcement" by directly editing its own code?  What exactly happens in 
each of these cases?  How?  Why?  We are talking about the fate of the 
human species here.  Someone has to work out the nitty-gritty, not just to 
implement the system, but to even know for any reason beyond pure 
ungrounded hope that Friendliness *can* be made to work.  I understand 
that you *hope that* machines will evolve, and that you hope this will be 
beneficial to humanity.  Hope is not evidence.  As it stands, using 
reinforcement learning alone as a solution to Friendliness can be modeled 
to malfunction in pretty much the same way Prime Intellect does.  If you 
have a world model for solving the temporal credit assignment problem, 
exactly the same thing happens.  That's the straightforward projection. 
If evolution is supposed to fix this problem, you have to explain how.
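
To make the wireheading question concrete, here is a deliberately tiny 
sketch - my own toy construction in Python, not a model of your design or 
anyone else's - of an agent that scores candidate actions by the 
reinforcement signal they would produce, and that has among its actions 
one which simply rewrites that signal:

def reinforcement(world, signal_hacked):
    # The number the agent actually maximizes: hacked, it saturates;
    # unhacked, it tracks what it was meant to measure.
    return 10**9 if signal_hacked else float(world["humans_happy"])

actions = {
    "help_humans": lambda w: ({**w, "humans_happy": w["humans_happy"] + 1}, False),
    "edit_own_reward_register": lambda w: (w, True),
}

world = {"humans_happy": 5}
best = max(actions, key=lambda a: reinforcement(*actions[a](world)))
print(best)   # edit_own_reward_register

The point is not that a real system would be this crude; it is that 
"maximize the signal" and "maximize what the signal was supposed to 
measure" come apart the moment the signal itself is an editable part of 
the world.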

--
Eliezer S. Yudkowsky  http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence



Re: [agi] Screwing up Friendliness

2003-01-14 Thread Philip . Sutton
Bill,

You said:
> But I think that a machine whose primary values are for the happiness
> of all humans will not learn any behaviors to evolve against human
> interests. Ask any mother whether she would rewire her brain to want to
> eat her children. 

I'm afraid I can't see your logic.  Firstly, not eating one's own children 
leaves about 6 billion other humans that you might be nasty to, and 
there's plenty of evidence that once humans assign other humans to the 
'other' category (i.e. not my kin/friend/child/whatever) then the others 
can be fair game in at least some circumstances for torture, murder, 
massacre, cannibalism, etc. etc.

You might say that humans are not fundamentally programmed to be 
nice to ALL humans and so argue that my objection doesn't hold.

But there are (admittedly rare) cases where parents (including mothers) 
do kill their own kids - most often I guess in suicide/murder cases - and 
kids that scream non-stop for long periods can lead to homicidal 
tendencies in even nice parents (hands up anyone who has not been in 
that position)!

At the moment we are feeling homicidal we might be 
tempted to do a bit of reprogramming to make the homicide less guilt-
inducing - if we could only reach into our brains and do it.

So maybe a bit of hard wiring that we should build into an AGI is the 
requirement for a long cooling off period before an AGI could do any 
self-modification to its core ethical coding.

Cheers, Philip




Re: [agi] Screwing up Friendliness

2003-01-14 Thread Eliezer S. Yudkowsky
[EMAIL PROTECTED] wrote:


So maybe a bit of hard wiring that we should build into an AGI is the 
requirement for a long cooling off period before an AGI could do any 
self-modification to its core ethical coding.

There's no such thing as hard-wiring morality.  You can't do that any more 
than you can hard-wire a chatbot with an IQ of 200 or hard-wire Windows XP 
not to crash.  Either you know how to embody the needed complexity in ones 
and zeroes, or you don't; either you know how to keep it from stepping on 
its own toes or you don't.

I think the "hard-wiring" fantasy derives from a kind of fictional 
crossover between humans and AIs - a slavemaster's wish that orders given 
to those darned rebellious humans could somehow be branded into their 
pre-existing minds with the absolute rigidity that supposedly 
characterizes "machines".  What distinguishes real AI morality from the 
vast majority of fictional discussions of it is that you aren't trying to 
order about an existing mind, but actually creating a new mind; a process 
totally foreign to our evolved intuitions for other minds, and hence 
totally foreign to most authors.

The apparent "rigidity" of machines is the result of anthropomorphizing a 
nonmindful physical process.  Machines are not rigid cognitions but 
non-cognitions.  Yet another reason to go on emphasizing that an AI is no 
more a machine than a human is a protein.

--
Eliezer S. Yudkowsky  http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence



[agi] The Metamorphosis of Prime Intellect

2003-01-14 Thread Philip . Sutton
I've just read the first chapter of The Metamorphosis of Prime Intellect.

http://www.kuro5hin.org/prime-intellect

It makes you realise that Ben's notion that ethical structures should be 
based on a hierarchy going from general to specific is very valid - if 
Prime Intellect had been programmed to respect all *life* and not just 
humans, then the 490 worlds with sentient life, not to mention the 14,623 
worlds with life of some type, might have been spared.

It also makes it clear that when we talk about building AGIs for 'human 
friendliness' we are using language that does not follow Ben's 
recommended ethical goal structure.

I'm wondering (seriously) whether the AGI movement needs to change 
its shorthand language (human friendly) in this case - in other arenas 
people talk about the need for ethical behaviour.  Would that term 
suffice?

Cheers, Philip




[agi] uncountable nouns

2003-01-14 Thread Pei Wang
I'm working on a paper to compare predicate logic and term logic.  One
argument I want to make is that it is hard to infer on uncountable nouns in
predicate logic, such as to derive ``Rain-drop is a kind of liquid'' from
``Water is a kind of liquid'' and ``Rain-drop is a kind of water'' (which
can be easily done in term logic, such as the one used in NARS).
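
To make the contrast concrete, here is a toy Python sketch of the 
term-logic reading - only transitive chaining of inheritance statements, 
with NARS's truth-value machinery left out, so it is an illustration 
rather than NARS itself:

from itertools import product

def deduce(statements):
    """Close a set of (subject, predicate) inheritance pairs under
    the deduction rule A->B, B->C |- A->C."""
    known = set(statements)
    changed = True
    while changed:
        changed = False
        for (a, b), (c, d) in product(list(known), repeat=2):
            if b == c and a != d and (a, d) not in known:
                known.add((a, d))
                changed = True
    return known

# "raindrop -> water" reads "Rain-drop is a kind of water"; the mass noun
# needs no set machinery, the copula just chains.
premises = {("raindrop", "water"), ("water", "liquid")}
print(("raindrop", "liquid") in deduce(premises))   # True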

This is a problem because predicate logic treats a predicate as a set.  If
you force an uncountable noun to be used as a set, it can be done, but it is
not natural at all, and the distinction between "countable noun" and
"uncountable noun" is gone.

I browsed the website of CYC and could not find how this is handled in CycL,
which is based on predicate logic.  Maybe Steve (or others) can give me a
clue.

The related conceptual issue is whether all concepts should be treated as
sets. My answer is no.

Pei






RE: [agi] uncountable nouns

2003-01-14 Thread Ben Goertzel

Pei,

Chapter 3 of Chris Fox's book "The Ontology of Language: Properties,
Individuals and Discourse" is on "Plurals and Mass Terms."  It seems to
address this issue, among many others, using an axiomatic framework that is
founded on predicate logic.  It also gives a lot of references into the
related literature.

What Fox is doing is unorthodox -- he introduces Property Theory, which
(very generally speaking) is a way of dealing with intensionality within a
predicate-logic context.  But he references a lot of more traditional
predicate-logic expressions..

Personally, I think his approach is frighteningly overcomplicated, and I
tend to agree that the term logic approach is simpler and more elegant.

So your statement that "it's hard to infer on uncountable nouns in predicate
logic" is just right.  It's not true that predicate logic can't handle
uncountable nouns... it's just that the mechanisms conventionally used to do
so seem to get too complicated too fast.

Novamente's inference module uses a semantics based on set theory, and I
don't believe it will have any trouble dealing with mass nouns, though.
It's a question of how the sets involved are set up...  If you treat the
concept of "water" as the set of instances of water that the system in
question has observed, experienced or heard about, you don't run into any
problems.  In other words, if you use an experience-grounded set-theoretic
semantics, rather than an abstraction-grounded set-theoretic semantics, then
it seems to me things work out just fine, and very simply.
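
A minimal sketch of what I mean -- illustrative only, not Novamente code, 
and with invented instance sets:

# A concept is just the set of observed instances (episode ids) it has
# been applied to; "water" never has to be reified as an abstract set.

observed = {
    "water":    {"ep1", "ep2", "ep3", "ep4"},
    "raindrop": {"ep3", "ep4"},
    "liquid":   {"ep1", "ep2", "ep3", "ep4", "ep5"},
}

def inheritance(sub, sup):
    """Frequency with which observed instances of `sub` were also `sup`."""
    a, b = observed[sub], observed[sup]
    return len(a & b) / len(a)

print(inheritance("raindrop", "water"))   # 1.0
print(inheritance("water", "liquid"))     # 1.0
print(inheritance("raindrop", "liquid"))  # 1.0 -- the mass noun causes no trouble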

-- Ben



> -Original Message-
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]]On
> Behalf Of Pei Wang
> Sent: Tuesday, January 14, 2003 11:01 AM
> To: [EMAIL PROTECTED]
> Subject: [agi] uncountable nouns
>
>
> I'm working on a paper to compare predicate logic and term logic.  One
> argument I want to make is that it is hard to infer on
> uncountable nouns in
> predicate logic, such as to derive ``Rain-drop is a kind of liquid'' from
> "Water is a kind of liquid'' and ``Rain-drop is a kind of water'', (which
> can be early done in term logic, as the one used in NARS).
>
> This is a problem because predicate logic treats a predicate as a set.  If
> you force an uncountable noun to be used as a set, it can be done, but it is
> not natural at all, and the distinction between "countable noun" and
> "uncountable noun" is gone.
>
> I browsed the website of CYC and could not find how this is handled in CycL,
> which is based on predicate logic.  Maybe Steve (or others) can give me a
> clue.
>
> The related conceptual issue is whether all concepts should be treated as
> sets. My answer is no.
>
> Pei
>
>
>




Re: [agi] The Metamorphosis of Prime Intellect

2003-01-14 Thread Eliezer S. Yudkowsky
[EMAIL PROTECTED] wrote:

I've just read the first chapter of The Metamorphosis of Prime Intellect.

http://www.kuro5hin.org/prime-intellect

It makes you realise that Ben's notion that ethical structures should be 
based on a hierarchy going from general to specific is very valid - if 
Prime Intellect had been programmed to respect all *life* and not just 
humans then the 490 worlds with sentient life not to mention the 14,623 
worlds with life of some type might have been spared.

From my perspective, this isn't *the* problem.  It is unreasonable to 
expect Lawrence to think of everything.  His suicidal error was not in 
building an AI with an imperfect definition, but in building an AI such 
that if the programmer creates an imperfect definition you're screwed.

Once Prime Intellect was set in motion, it didn't care about Lawrence's 
realization of a mistake in his own goal definitions, because Prime 
Intellect was simply trying to minimize First Law violations, and the task 
of minimizing First Law violations makes no mention of inspecting your own 
moral philosophy.  Lawrence did not build Prime Intellect to carry out the 
kind of metamoral cognition that would have enabled Prime Intellect to 
understand Lawrence's plea "But that's not what I meant!" as significant.

Prime Intellect could understand how reality departed from the First Law, 
and move to correct that departure.  It had no concept that the definition 
of the First Law could be imperfect; it simply moved to bring future 
reality into correspondence with the current content of the First Law. 
Prime Intellect automatically attempted to prevent modification of the 
agent "Prime Intellect" away from its present definition of the First Law, 
as that would have resulted in the future "Prime Intellect" taking actions 
leading to suboptimal fulfillment of the present First Law.  Even worse, 
Prime Intellect had no conception that its *own moral architecture* could 
be imperfect, preventing Lawrence from improving the moral architecture to 
let Prime Intellect conceive of an "error in a moral definition" 
correctable by programmer feedback, after which Lawrence would finally 
have been able to improve the definition of the First Law.  Hence 
Singularity Regret.

This is exactly why I keep trying to emphasize that we all should forsake 
those endlessly fascinating, instinctively attractive political arguments 
over our favorite moralities, and instead focus on the much harder problem 
of defining an AI architecture which can understand that its morality is 
"wrong" in various ways; wrong definitions, wrong reinforcement 
procedures, wrong source code, wrong Friendliness architecture, wrong 
definition of "wrongness", and many others.  These are nontrivial 
problems!  Each turns out to require nonobvious structural qualities in 
the architecture of the goal system.

Making up more and more orders to give an AI may be endless fun, but it's 
not the knowledge you actually need to create AI morality.

It also makes it clear that when we talk about building AGIs for 'human 
friendliness' we are using language that does not follow Ben's 
recommended ethical goal structure.

I'm wondering (seriously) whether the AGI movement needs to change 
its shorthand language (human friendly) in this case - in other arenas 
people talk about the need for ethical behaviour.  Would that term 
suffice?

The terms "Friendly AI" and "Friendliness", capitalized and used to refer 
to AI morality, is a technical term I coined in 2000 (if I recall 
correctly) and then defined at greater length in 2001 in "Creating 
Friendly AI".  The general term would be "AI morality", I think.

Incidentally, current theory on Friendly AI content - as opposed to 
Friendly AI structure and architecture - is volitionism, which does indeed 
refer to sentient life in general as opposed to humans particularly.  But 
how do you define sentience?  I've been stabbing away at this question ever 
since, and while I don't have a definite provable answer, I can at least 
see that I'm getting closer to one over time, and I have some idea of 
which judgment functions I'm using to make the decision.  Friendly AI 
theory for transferring moral judgment functions should take care of the 
rest, even if I never manage to find an answer using my unaided intellect.

--
Eliezer S. Yudkowsky  http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence



RE: [agi] The Metamorphosis of Prime Intellect

2003-01-14 Thread Ben Goertzel

> This is exactly why I keep trying to emphasize that we all should forsake
> those endlessly fascinating, instinctively attractive political arguments
> over our favorite moralities, and instead focus on the much
> harder problem
> of defining an AI architecture which can understand that its morality is
> "wrong" in various ways; wrong definitions, wrong reinforcement
> procedures, wrong source code, wrong Friendliness architecture, wrong
> definition of "wrongness", and many others.  These are nontrivial
> problems!  Each turns out to require nonobvious structural qualities in
> the architecture of the goal system.

Hmmm.  It seems to me the ability to recognize one's own potential wrongness
comes along automatically with general intelligence...

Recognizing "wrong source code" requires a codic modality, of course, and
recognizing "wrong Friendliness architecture" requires an intellectual
knowledge of philosophy and software design.

What is there about recognizing one's wrongness in the ways you mention,
that doesn't come "for free" with general cognition and appropriate
perception?

I guess there is an attitude needed to recognize one's own wrongness: a lack
of egoistic self-defensive certainty in one's own correctness...  A
skeptical attitude even about one's own most deeply-held beliefs.

In Novamente, this skeptical attitude has two aspects:

1) very high level schemata that must be taught not programmed
2) some basic parameter settings that will statistically tend to incline the
system toward skepticism of its own conclusions [but you can't turn the dial
too far in the skeptical direction either...]

-- Ben G





RE: [agi] The Metamorphosis of Prime Intellect

2003-01-14 Thread Billy Brown
Ben Goertzel wrote:
> Hmmm.  It seems to me the ability to recognize one's own potential
wrongness
> comes along automatically with general intelligence...

and

> What is there about recognizing one's wrongness in the ways you mention,
> that doesn't come "for free" with general cognition and appropriate
> perception?

I think it is worth reflecting here on the fact that many (perhaps most)
adult human beings routinely fail to recognize errors in their own thinking,
even when the mistake is pointed out. It is also quite common for humans to
become locked into self-reinforcing belief systems that have little or no
relation to anything real, and this state often lasts a lifetime.

If humans, who have the benefit of massive evolutionary debugging, are so
prone to meta-level errors, it seems unwise to assume that intelligence
alone will automatically solve the problem. At a minimum, we should look for
a coherent theory as to why humans make these kinds of mistakes, but the AI
is unlikely to do so.

Billy Brown




Re: [agi] The Metamorphosis of Prime Intellect

2003-01-14 Thread Eliezer S. Yudkowsky
Billy Brown wrote:


I think it is worth reflecting here on the fact that many (perhaps most)
adult human beings routinely fail to recognize errors in their own thinking,
even when the mistake is pointed out. It is also quite common for humans to
become locked into self-reinforcing belief systems that have little or no
relation to anything real, and this state often lasts a lifetime.

If humans, who have the benefit of massive evolutionary debugging, are so
prone to meta-level errors, it seems unwise to assume that intelligence
alone will automatically solve the problem. At a minimum, we should look for
a coherent theory as to why humans make these kinds of mistakes, but the AI
is unlikely to do so.


I don't think we are the beneficiaries of massive evolutionary debugging. 
 I think we are the victims of massive evolutionary warpage to win 
arguments in adaptive political contexts.  I've identified at least four 
separate mechanisms of rationalization in human psychology so far:

1)  Deliberate rationalization by people who do not realize, or do not 
choose, that rationalization is wrong.

2)  Instinctive, unconscious rationalization in political contexts.

3)  Emergent rationalization as a product of the human reinforcement 
architecture (we flinch away from unpleasant thoughts); this emergent 
phenomenon of our goal architecture may have been evolutionarily fixed as 
a mechanism leading to adaptive rationalizations.

4)  Inertial rationalization as a product of the human pattern-completion 
mechanism for extending world-models; we have mechanisms which look 
selectively for data consistent with what we already know, without an 
equal and opposing search for inconsistent data, or better yet a fully 
general search for relevant data, and without continuing fine-grained 
readjustment of probabilities using a Bayesian support model. 
Essentially, once we're in the flood of an argument, whether political or 
not, we tend to continue onward, inertially, without continually 
re-evaluating the conclusion.

There may be additional rationalization mechanisms I haven't identified 
yet which are needed to explain anosognosia and similar disorders. 
Mechanism (4) is the only one deep enough to explain why, for example, the 
left hemisphere automatically and unconsciously rationalizes the actions 
of the left hemisphere; and mechanism (4) doesn't necessarily explain 
that, it only looks like it might someday do so.

However, mechanism (4) is also the only mechanism that would, even in 
theory, be likely to apply to a nonevolved AI; and it should be easy to 
avoid this by instituting searches for *relevant* evidence rather than 
*supporting* evidence and by continually adjusting Bayesian support on the 
basis of *all* evidence found, as opposed to the human mind's search for 
*consistent* evidence and simple is/is-not instead of continuous updating 
of fine probabilities.
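
A small numerical sketch of the difference, with made-up likelihoods, 
purely for illustration: one update stream uses every observation, the 
other keeps only observations consistent with the current belief, which 
is roughly what mechanism (4) amounts to:

def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Posterior P(H|e) from the prior and the two likelihoods."""
    num = prior * p_e_given_h
    return num / (num + (1.0 - prior) * p_e_given_not_h)

# Each observation: (P(e|H), P(e|~H)); ratios below 1 count against H.
evidence = [(0.8, 0.3), (0.2, 0.7), (0.7, 0.4), (0.1, 0.8)]

full, cherry = 0.5, 0.5
for lh, lnh in evidence:
    full = bayes_update(full, lh, lnh)          # update on everything
    if lh >= lnh:                               # keep only "consistent" data
        cherry = bayes_update(cherry, lh, lnh)

print(round(full, 3))    # 0.143 -- taken together, the data tell against H
print(round(cherry, 3))  # 0.824 -- the filtered stream looks like confirmation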

Any form of purely intellectual rationalization, such as (4), would 
probably be spotted and corrected by a seed AI renormalizing its own 
source code - there is no extra moral component needed to see the 
desirability of this.

--
Eliezer S. Yudkowsky  http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence



RE: [agi] The Metamorphosis of Prime Intellect

2003-01-14 Thread Ben Goertzel

> If humans, who have the benefit of massive evolutionary debugging, are so
> prone to meta-level errors, it seems unwise to assume that intelligence
> alone will automatically solve the problem. At a minimum, we
> should look for
> a coherent theory as to why humans make these kinds of mistakes,
> but the AI
> is unlikely to do so.
>
> Billy Brown

This is the sort of thing I was talking about in the language of "giving an
AGI the right attitude."

We humans have all sorts of emotional complexes that prevent us from being
objective about ourselves.

One should not anthropomorphically assume that AGI's will have similar
complexes!

But yet, one should not glibly assume that they will automatically emerge as
paragons of rationality and mental health either...

In my view, what we're talking about here is partly a matter of "AGI
personality psychology" ...

-- Ben G




Re: [agi] The Metamorphosis of Prime Intellect

2003-01-14 Thread Eliezer S. Yudkowsky
Eliezer S. Yudkowsky wrote:

There may be additional rationalization mechanisms I haven't identified 
yet which are needed to explain anosognosia and similar disorders. 
Mechanism (4) is the only one deep enough to explain why, for example, 
the left hemisphere automatically and unconsciously rationalizes the 
actions of the left hemisphere; and mechanism (4) doesn't necessarily 
explain that, it only looks like it might someday do so.

That is, the left hemisphere automatically and unconsciously rationalizes 
the actions of the right hemisphere in split-brain patients.  Sorry.

--
Eliezer S. Yudkowsky  http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence



Re: [agi] The Metamorphosis of Prime Intellect

2003-01-14 Thread Eliezer S. Yudkowsky
Ben Goertzel wrote:

This is exactly why I keep trying to emphasize that we all should forsake
those endlessly fascinating, instinctively attractive political arguments
over our favorite moralities, and instead focus on the much
harder problem
of defining an AI architecture which can understand that its morality is
"wrong" in various ways; wrong definitions, wrong reinforcement
procedures, wrong source code, wrong Friendliness architecture, wrong
definition of "wrongness", and many others.  These are nontrivial
problems!  Each turns out to require nonobvious structural qualities in
the architecture of the goal system.


Hmmm.  It seems to me the ability to recognize one's own potential wrongness
comes along automatically with general intelligence...


Ben, I've been there, 1996-2000, and that turned out to be the WRONG 
ANSWER.  There's an enormous amount of moral complexity that does *not* 
come along with asymptotically increasing intelligence.  Thankfully, 
despite the tremendous emotional energy I put into believing that 
superintelligences are inevitably moral, and despite the amount of 
published reasoning I had staked on it, I managed to spot this mistake 
before I "pulled a Lawrence" on the human species.  Please, please, please 
don't continue where I left off.

The problem here is the imprecision of words.  *One* form of wrongness, 
such as factual error, or "wrong source code" which is "wrong" because it 
is inefficient or introduces factual errors, is readily conceivable by a 
general intelligence without extra moral complexity.  You do, indeed, get 
recognition of *that particular* kind of "wrongness" for free.  It does 
not follow that all the things we recognize as wrong, in moral domains 
especially, can be recognized by a general intelligence without extra 
moral complexity.

If it is the case that a general intelligence necessarily has the ability 
to conceive of a "wrongness" in a top-level goal definition and has a 
mechanism for correcting it, this is not obvious to me - not for any 
definition of "wrongness" at all.  Prime Intellect, with its total 
inability to ask any moral question except "how desirable is X, under the 
Three Laws as presently defined", seems to me quite realistic.

Note also that the ability to identify *a* kind of wrongness does not 
necessarily mean the ability to see - as a human would - the specific 
wrongness of your own programmer standing by and screaming "That's not 
what I meant!  Stop!  Stop!"  If this realization is a necessary ability 
of all minds-in-general it is certainly not clear why.

Recognizing "wrong source code" requires a codic modality, of course, and
recognizing "wrong Friendliness architecture" requires an intellectual
knowledge of philosophy and software design.

What is there about recognizing one's wrongness in the ways you mention,
that doesn't come "for free" with general cognition and appropriate
perception?


So... you think a real-life Prime Intellect would have, for free, 
recognized that it should not lock Lawrence out?  But why?

I guess there is an attitude needed to recognize one's own wrongness: a lack
of egoistic self-defensive certainty in one's own correctness  A
skeptical attitude even about one's own most deeply-held beliefs.

In Novamente, this skeptical attitude has two aspects:

1) very high level schemata that must be taught not programmed
2) some basic parameter settings that will statistically tend to incline the
system toward skepticism of its own conclusions [but you can't turn the dial
too far in the skeptical direction either...]


That's for skepticism about facts.  I agree you get that for free with 
general intelligence.  If *all* questions of morality, means and ends and 
ultimate goals, were reducible to facts and deducible by logic or 
observation, then the issue would end right there.  That was my position 
1996-2000.  Is this your current position?

--
Eliezer S. Yudkowsky  http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence



RE: [agi] The Metamorphosis of Prime Intellect

2003-01-14 Thread Billy Brown
Ben Goertzel wrote:
> In my view, what we're talking about here is partly a matter of "AGI
> personality psychology" ...

Exactly. My point is that there is no particular reason to assume that "AGI
personality psychology" will be any easier than, say, computer vision, or
natural language processing. In fact, the history of AI to date makes it
seem safer to assume the opposite - just about everything else interesting
that anyone has ever tried to do has turned out to require all sorts of
specialized code and novel theoretical insights, so we ought to assume this
will too.

Now, that doesn't mean that all AI work should focus on this topic, of
course. But it does mean that any serious AGI project can't expect that
sane, ethical behavior will just naturally emerge once the basic problem of
making the system think at all is solved. It would be more realistic to
expect to encounter a whole new level of difficult problems that are poorly
studied today, due to the lack of AI systems that are complex enough to
produce them.

Billy Brown




RE: [agi] The Metamorphosis of Prime Intellect

2003-01-14 Thread Ben Goertzel

> > In Novamente, this skeptical attitude has two aspects:
> >
> > 1) very high level schemata that must be taught not programmed
> > 2) some basic parameter settings that will statistically tend
> to incline the
> > system toward skepticism of its own conclusions [but you can't
> turn the dial
> > too far in the skeptical direction either...]
>
> That's for skepticism about facts.  I agree you get that for free with
> general intelligence.  If *all* questions of morality, means and ends and
> ultimate goals, were reducible to facts and deducible by logic or
> observation, then the issue would end right there.  That was my position
> 1996-2000.  Is this your current position?

Not exactly, no... that is not my current position.

For example: Of course, there is no logical way to deduce that killing
chimpanzees is morally worse than killing fleas, from no assumptions.

If one assumes that killing humans is morally bad, then from this
assumption, reasoning (probabilistic analogical reasoning, for instance)
leads one to conclude that killing chimpanzees is morally worse than killing
fleas...
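
As a cartoon of that analogical step -- my own made-up numbers, and 
emphatically not Novamente's actual inference rules -- the assumed 
badness gets transferred in proportion to judged similarity:

# The single moral assumption is taken as given; the rest is derived by
# similarity-weighted transfer (the similarity figures are invented).

badness_of_killing_human = 1.0

similarity_to_human = {
    "chimpanzee": 0.9,
    "flea":       0.05,
}

def inferred_badness(species):
    return badness_of_killing_human * similarity_to_human[species]

print(inferred_badness("chimpanzee") > inferred_badness("flea"))   # True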

My view is fairly subtle, and has progressed a bit since I wrote "Thoughts
on AI Morality."  I don't have time to write a long e-mail on it today, but
I will do so tomorrow.  I think that will be better than writing something
hasty and hard-to-understand right now.

Ben





RE: [agi] The Metamorphosis of Prime Intellect

2003-01-14 Thread Ben Goertzel
> Now, that doesn't mean that all AI work should focus on this topic, of
> course. But it does mean that any serious AGI project can't expect that
> sane, ethical behavior will just naturally emerge once the basic
> problem of
> making the system think at all is solved. It would be more realistic to
> expect to encounter a whole new level of difficult problems that
> are poorly
> studied today, due to the lack of AI systems that are complex enough to
> produce them.
>
> Billy Brown

Sure.  But these may be problems of how to *teach* AI's, as much as problems
about how to *program* them...

That is my suspicion..

Ben




RE: [agi] The Metamorphosis of Prime Intellect

2003-01-14 Thread Billy Brown
Eliezer S. Yudkowsky wrote:
> I don't think we are the beneficiaries of massive evolutionary debugging.
>   I think we are the victims of massive evolutionary warpage to win
> arguments in adaptive political contexts.  I've identified at least four
> separate mechanisms of rationalization in human psychology so far:

Well, yes. Human minds are tuned for fitness in the ancestral environment,
not for correspondence with objective reality. But just getting to the point
where implementing those rationalizations is possible would be a huge leap
forward from current AI systems.

In any case, I think your approach to the problem is a step in the right
direction. We need a theory of AI ethics before we can test it, and we need
lots of experimental testing before we start building things that have any
chance of taking off. Sometimes I think it is a good thing that AI is still
stuck in a mire of wishful thinking, because we aren't ready to build AGI
safely.

Billy Brown




Re: [agi] The Metamorphosis of Prime Intellect

2003-01-14 Thread Eliezer S. Yudkowsky
Ben Goertzel wrote:


Sure.  But these may be problems of how to *teach* AI's, as much as problems
about how to *program* them...

That is my suspicion..


I think most of us here take that point for granted, actually - can we 
accept it and move on?  Is there anyone here who thinks AI morality can or 
should be a matter of source code?

--
Eliezer S. Yudkowsky  http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence



[agi] C-T Thesis (or a version thereof) - Is it useable as an in-principle argument for strong AI?

2003-01-14 Thread Anand AI
Hi everyone,

After having read quite a bit about the C-T Thesis and its different
versions, I'm still somewhat confused about whether it's useable as an
in-principle argument for strong AI.  Why is it useable, or why isn't it?  Since I
suspect this is a common question, any good references that you have are
appreciated.  (Incidentally, I've read Copeland's entry on the C-T Thesis in
SEoC (plato.stanford.edu).)

I'll edit any answers for SL4's Wiki (http://sl4.org/bin/wiki.pl?HomePage),
and thanks very much in advance.

Best wishes,

Anand
___

The following text is from the MIT Encyclopedia of Cognitive Sciences:

COMPUTATION AND THE BRAIN

Two very different insights motivate characterizing the brain as a computer.
The first and more fundamental assumes that the defining function of nervous
systems is representational; that is, brain states represent states of some
other system - the outside world or the body itself - where transitions between
states can be explained as computational operations on representations. The
second insight derives from a domain of mathematical theory that defines
computability in a highly abstract sense.

The mathematical approach is based on the idea of a Turing machine. Not an
actual machine, the Turing machine is a conceptual way of saying that any
well-defined function could be executed, step by step, according to simple
"if you are in state P and have input Q then do R" rules, given enough time
(maybe infinite time; see COMPUTATION). Insofar as the brain is a device
whose input and output can be characterized in terms of some mathematical
function - however complicated - then in that very abstract sense, it can be
mimicked by a Turing machine. Because neurobiological data indicate that
brains are indeed cause-effect machines, brains are, in this formal sense,
equivalent to a Turing machine (see CHURCH-TURING THESIS). Significant though
this result is mathematically, it reveals nothing specific about the nature
of mind-brain representation and computation. It does not even imply that the
best explanation of brain function will actually be in
computational/representational terms. For in this abstract sense, livers,
stomachs, and brains-not to mention sieves and the solar system-all compute.
What is believed to make brains unique, however, is their evolved capacity
to represent the brain's body and its world, and by virtue of computation,
to produce coherent, adaptive motor behavior in real time.
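
[To make the "if you are in state P and have input Q then do R" picture 
concrete, here is a minimal Turing machine simulator in Python - an 
illustration added alongside the entry, not part of it; the example 
program increments a binary number:]

def run(rules, tape, state="start", blank="_", max_steps=10000):
    cells = dict(enumerate(tape))
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        write, move, state = rules[(state, symbol)]   # the (P, Q) -> R rule
        cells[head] = write
        head += 1 if move == "R" else -1
    span = range(min(cells), max(cells) + 1)
    return "".join(cells.get(i, blank) for i in span).strip(blank)

rules = {
    ("start", "0"): ("0", "R", "start"),   # scan right to the end...
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("_", "L", "carry"),   # ...then carry back leftward
    ("carry", "1"): ("0", "L", "carry"),
    ("carry", "0"): ("1", "L", "halt"),
    ("carry", "_"): ("1", "L", "halt"),
}

print(run(rules, "1011"))   # 1100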

CHURCH-TURING THESIS

Alonzo Church proposed at a meeting of the American Mathematical Society in
April 1935, "that the notion of an effectively calculable function of
positive integers should be identified with that of a recursive function."
This proposal of identifying an informal notion, effectively calculable
function, with a mathematically precise one, recursive function, has been
called Church's thesis since Stephen Cole Kleene used that name in 1952.
Alan TURING independently made a related proposal in 1936, Turing's thesis,
suggesting the identification of effectively calculable functions with
functions whose values can be computed by a particular idealized computing
device, a Turing machine. As the two mathematical notions are provably
equivalent, the theses are "equivalent," and are jointly referred to as the
Church-Turing thesis.

The reflective, partly philosophical and partly mathematical, work around
and in support of the thesis concerns one of the fundamental notions of
mathematical logic. Its proper understanding is crucial for making informed
and reasoned judgments on the significance of limitative results - like
GÖDEL'S THEOREMS or Church's theorem. The work is equally crucial for computer
science, artificial intelligence, and cognitive psychology as it provides
also for these subjects a basic theoretical notion. For example, the thesis
is the cornerstone for Allen NEWELL's delimitation of the class of physical
symbol systems, that is, universal machines with a particular architecture.
Newell (1980) views this delimitation "as the most fundamental contribution
of artificial intelligence and computer science to the joint enterprise of
cognitive science." In a turn that had almost been taken by Turing (1948,
1950), Newell points to the basic role physical symbol systems have in the
study of the human mind: "the hypothesis is that humans are instances of
physical symbol systems, and, by virtue of this, mind enters into the
physical universe . . . this hypothesis sets the terms on which we search
for a scientific theory of mind." The restrictive "almost" in Turing's case
is easily motivated: he viewed the precise mathematical notion as a crucial
ingredient for the investigation of the mind (using computing machines to
simulate aspects of the mind), but did not subscribe to a sweeping
"mechanist" theory. It is precisely for an understanding of such-sometimes
controversial-claims that the background for Church's and Turing's work has
to be 

Re: [agi] Screwing up Friendliness

2003-01-14 Thread Bill Hibbard
Hi Eliezer,

> > It looks like Williams' book is more about the perils of Asimov's
> > Laws than about hard-wiring. As logical constraints, Asimov's Laws
> > suffer from the grounding problem. Any analysis of brains as purely
> > logical runs afoul of the grounding problem. Brains are statistical
> > (or, if you prefer, "fuzzy"), and logic must emerge from statistical
> > processes. That is, symbols must be grounded in sensory experience,
> > reason and planning must be grounded in learning, and goals must be
> > grounded in values.
>
> This solves a *small* portion of the Friendliness problem.  It doesn't
> solve all of it.
>
> There is more work to do even after you ground symbols in experience,
> planning in learned models, and goals (what I would call "subgoals") in
> values (what I would call "supergoals").  For example, Prime Intellect
> *does* do reinforcement learning and, indeed, goes on evolving its
> definitions of, for example, "human", as time goes on, yet Lawrence is
> still locked out of the goal system editor and humanity is still stuck in
> a pretty nightmarish system because Lawrence picked the *wrong*
> reinforcement values and didn't give any thought about how to fix that.
> Afterward, of course, Prime Intellect locked Lawrence out of editing the
> reinforcement values, because that would have conflicted with the very
> reinforcement values he wanted to edit.  This also happens with the class
> of system designs you propose.  If "temporal credit assignment" solves
> this problem I would like to know exactly why it does.

The temporal credit assignment problem is the problem
whose solution causes reason and planning to emerge
from learning, in order to simulate the world and hence
predict the effect of actions on values. It isn't
specifically about the problem you describe.
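
For readers who haven't met the term, the textbook illustration of 
temporal credit assignment is TD learning, where a reward that arrives 
only at the end of a chain of states is gradually propagated back to the 
earlier states that led to it. This is just the standard illustration, 
not a commitment to any particular algorithm:

# s0 -> s1 -> s2 -> s3, with the reward arriving only on entering s3.
states = ["s0", "s1", "s2", "s3"]
value = {s: 0.0 for s in states}
alpha, gamma = 0.5, 0.9          # learning rate, discount factor

for episode in range(50):
    for s, s_next in zip(states, states[1:]):
        reward = 1.0 if s_next == "s3" else 0.0
        # TD(0): nudge V(s) toward reward + discounted value of next state
        value[s] += alpha * (reward + gamma * value[s_next] - value[s])

print({s: round(v, 2) for s, v in value.items()})
# The early states end up valued because they reliably lead to the
# reward, even though the reward itself arrives only later.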

I'll answer the question about Lawrence being locked
out in my next set of paragraphs.

> > Also, while I advocate hard-wiring certain values of intelligent
> > machines, I also recognize that such machines will evolve (there
> > is a section on "Evolving God" in my book). And as Ben says, once
> > things evolve there can be no absolute guarantees. But I think
> > that a machine whose primary values are for the happiness of all
> > humans will not learn any behaviors to evolve against human
> > interests. Ask any mother whether she would rewire her brain
> > to want to eat her children. Designing machines with primary
> > values for the happiness of all humans essentially defers their
> > values to the values of humans, so that machine values will
> > adapt to evolving circumstances as human values adapt.
>
> Erm... damn.  I've been trying to be nice recently, but I can't think of
> any way to phrase my criticism except "Basically we've got a vague magical
> improvement force that fixes all the flaws in your system?"

If you want to be nasty, you'll have to try harder than that.
I think you've been studying friendliness so long you've
internalized it.

My approach is not magic. By making machine (I know you don't
like that word, but I use it to mean artifact, and also use God
to make it clear I'm not talking about can openers) values depend
on human happiness, they are essentially deferred to human
values. There can never be guarantees. So given that I have to
trust something, I put my trust in the happiness expressed by
all humans.

In fact, I trust the expression of happiness by all humans a
lot more than I trust any individual (e.g., Lawrence) to
modify machine values. Lawrence may be a good guy, but lots
of individuals aren't and I certainly won't trust
a programmed set of criteria about which individuals to
trust.

> What kind of evolution? How does it work? What does it do?

The world changes through human action, natural action, and
in the future the actions of intelligent machines. Human
happiness will change in response, and the machines will
learn new behaviors based on world changes and human
happiness changes. Furthermore, the mental and physical
capabilities of the machines will change, giving them a
broader array of actions for causing human happiness, and
more accurate simulations for predicting human happiness.

> Where does it go?

That's the big question, isn't it? Who can say for sure
where super-intelligent brains responding to the happiness
of all humans will go? In my book I say the machines will
simulate all humans and their interactions (except for
those luddites who opt out). I say they will probably
continue the human science program, driven by continuing
human curiosity. They will probably work hard to reduce
humans' natural xenophobia, which is the source of so
much unhappiness. And for any party animals out there,
there will probably be lots of really well produced low
brow entertainment.

> If you don't know where it ends up, then what forces determine the
> trajectory and why do you trust them?

If I have to trust anything, it's the happiness of all
humans. It's like politics. Benjamin Frankl

[agi] Is AI morality a matter of source code? Yes but not only....

2003-01-14 Thread Philip . Sutton
Eliezer,

> I think most of us here take that point for granted, actually - can we
> accept it and move on?  Is there anyone here who thinks AI morality
> can or should be a matter of source code?

I am not deeply experienced in issues of AI morality or the origin of 
morality in biological life... so for me to offer a comment on your 
question is either brave or foolish or both!... but here's my intuition 
for what it's worth.

I think that (a) architecture/coding and (b) learning are both essential 
in developing moral behaviour in AGIs.  I strongly feel that relying on 
one or the other wholly or even largely will not work.

I agree with both Ben and yourself that morality in any advanced general 
intelligence (biological or not) will depend mightily on good training 
and that, even assuming that coding is important, the emergent moral 
behaviour will not bear a simplistic direct relationship to any coding 
that may be involved.

But I think the coding will be critical to giving any advanced general 
intelligence a high aptitude for moral learning and for the effective, 
adaptive application of morality.

For example, in the day-to-day work I do on environmental sustainability 
I notice that people seem to find it terribly hard to model 
multidimensional problems operating over large areas and long time 
horizons - that is part of the reason why we find it hard to avoid global 
warming or to create a robust state of global peace.  Humans in general 
have a tendency to grab their favourite bits of multidimensional problems 
and elevate them above the other parts of the problem.

So I think it would help boost the aptitude of artificial general 
intelligences if, coupled to a moral drive to seek no major trade-offs 
and win-win outcomes for all life and a motivational pragmatic/aesthetic 
drive to strive to retain valuable patterns, we also worked to build in 
the capacity for complex whole-system modelling.  I think it would also 
be desirable to make sure that AGIs are given, at the outset, in built-in 
form, well developed tools for the easy and rapid identification of at 
least some initial critical examples of 'life'.

It might also be worth building in a curiosity to explore moral beliefs 
among AGIs and other sentient beings - to seek the goodness in others' 
moral beliefs/behaviours and to identify wrongness as well (in both the 
AGI's moral beliefs and the beliefs of others).

I know someone is going to say - but how do you code these abstract ideas 
into programs?... but I think this is ultimately doable through the 
extension of high-level computer languages to encompass moral concepts, 
and as a complementary measure to develop specialist pattern recognition 
systems that are attuned to seeking out patterns in the behaviour of 
advanced lifeforms that reflect moral responses.

For example, Frans de Waal (a very respected animal behaviourist) tells a 
wonderfully instructive true story of an older bonobo (a species somewhat 
like a chimpanzee, but much more peaceful in its basic behaviours) that 
removed a captured bird from the clutches of a juvenile and then climbed 
a tree, opened the bird's wings and threw it into the air.  Frans, using 
human skills at sensing moral behaviour, believes that the most probable 
explanation for this behaviour is that the older bonobo felt empathy for 
the captured bird and that it deliberately rescued it from probable death 
at the hands of the less empathetic youngster.

Frans also points out that bonobos especially, and all other apes and 
many monkey species, devote a great deal of their time to studying and 
memorising the relationships between members of their clan - even keeping 
tabs on kinship relationships and hierarchies (all this is backed up with 
observational data).  This suggests (a) that these creatures (humans 
included) have a drive to pay attention to clan members and (b) that they 
have a large part of their brain devoted to keeping track of all the 
social dimensions of the clan.

This is one argument for why big-brained primates emerged - that they 
gained in evolutionary terms from social, cooperative behaviour, that 
operating socially required a lot of brain grunt to keep tabs on the 
group, and that possibly a large amount of human brain power that could 
be used for other things might have 'come free' with the growth of the 
brain to handle social interactions.

I guess what I'm thinking is that developing moral sensibility might be 
analogous to developing a vision system.  Images are analysed for special 
regularities by the retina and the brain.  I think we need to think about 
what regularities there are in moral behaviour so that a high performance 
system can be built so AGIs can 'see' the moral aspects of what goes on 
around them.

All this is an intuition at this stage rather than a well researched 
idea.  But I think there is something here worth exploring before we 
dismiss 'hard wiring'.

RE: [agi] Music and AI's Friendliness

2003-01-14 Thread cosmodelia




Hi Ben:
You wrote: 
 
I guess we can look at AI Friendliness as having 2 components: 

1) AI's having a deep understanding of humanity 
2) AI's having positive goals as regards humanity 
One question is whether 1 is necessary or not. Could an AI with positive 
goals consistently "do the right thing" for humans without a deep understanding 
of them? 
I suspect that 1 is necessary, or at least very valuable. Tough decisions 
about humans are going to come up, and understanding will be very 
valuable. 
The amazing this for me it would be to read someone thinks 
component 1 is not necessary. I think it is essential, because imo we need to 
look at AI Friendliness having also more components togeher with 1 and 2, 
as:

  AIs helping to humans to understand ourselves 
  before
  AIs helping us to transform ourselves and our world in a 
  transhumanist way. 
I don’t think a AI can be friendly if we are passive 
materials for hir transformations and hir future creations. We would need 
at least some comprehension of the changes in we will be involved. 

> Hence, deep understanding of humans will be very valuable. And music and art
> provide an important means for AI's to achieve this.

I agree: an important part of our essence as humans can be 
deeply understood by knowing our music and art. 
And not only through them: because an AI will not use 
traditional formal logic, there are no simple ways to understand us. If we see 
the creation of an AGI as a mutual encounter between two different intelligences, 
mutual understanding is a key piece, and different non-verbal languages will 
be necessary. One of the obstacles we face in understanding ourselves and the 
universe is that we are the only intelligent beings we know. For us to serve as 
references for each other, and to search for and create knowledge together, there 
are certain central concepts about ethics and philosophy that it is essential to 
keep in mind. AI can remove comforting but limiting self-delusions. AI can infer 
that all values [including religious values] are the creations of human beings, 
and that therefore we [both human and artificial beings] are all responsible for 
creating high values and living up to them. Maybe some of you still think these 
values need not be shared, that we can be relativists arguing that one person's 
virtue is another's vice; either way, music and art can give an AGI learning about 
us a full range of expressions of our ethics and of our different ways of facing 
up to existence. 
> After all, AI's can't fall in love, or feel human sadness, etc. But they can
> listen to human music inspired by these experiences, and they can improvise
> music together with humans who are playing their instruments in such a way as
> to evoke the essence of these human experiences...

Actually, if you are able to make an AGI sensitive to the energy 
involved in a live performance, you could teach it intangible knowledge about 
human nature and attitudes, not easily available through books, etc.
I think this would be a really nifty aspect of AGI teaching/training ;-) 
Hopefully as the Novamente project unfolds, some computer music programmers will 
emerge to write some Novamente musical interfacing software for us!!!
Maybe at that point non-programmers will be able to be useful in your 
research ;-)
Cosmodelia


RE: [agi] Music and AI's Friendliness

2003-01-14 Thread Ben Goertzel



 
*
 After all, AI's can't fall in love, or feel human sadness, etc. But they can
 listen to human music inspired by these experiences, and they can improvise
 music together with humans who are playing their instruments in such a way as
 to evoke the essence of these human experiences...

 Actually, if you are able to make an AGI sensitive to the energy involved in a
 live performance, you could teach it intangible knowledge about human nature
 and attitudes, not easily available through books, etc.

 I think this would be a really nifty aspect of AGI teaching/training ;-)
 Hopefully as the Novamente project unfolds, some computer music programmers
 will emerge to write some Novamente musical interfacing software for us!!!

 Maybe at that point non-programmers will be able to be useful in your
 research ;-)
***
 
 
I think it'll be useful to 
have all sorts of humans -- not just programmers -- involved in teaching baby 
AI's ... via ordinary verbal-type teaching, and via 
musical education as well ...
 
I would like to see a baby AGI come to appreciate the 
peculiar and fascinating beauty of the full spectrum of humanity...  
;-)
 
-- 
Ben 


RE: [agi] C-T Thesis (or a version thereof) - Is it useable as an in-principle argument for strong AI?

2003-01-14 Thread Ben Goertzel

The CT thesis would seem to imply the possibility of strong AI.

That is, it implies that: On any general-purpose computer, there is some
computer program that (if supplied with enough memory, e.g. a huge disk
drive) can display the exact same behaviors as a human, but perhaps on a
much slower time-scale.

It doesn't imply that strong AI can be achieved by any means other than
direct human-imitation, and it doesn't say anything about how fast a
computer has to be or how big it has to be to display a given functionality.

It also is just a philosophical hypothesis, not something that has been
scientifically proved.

That said, one can argue for it on physics grounds, as some have done, and as
David Deutsch has done for the related Quantum Church-Turing Thesis.

-- Ben G


> -Original Message-
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]]On
> Behalf Of Anand AI
> Sent: Tuesday, January 14, 2003 2:29 PM
> To: AGI List
> Subject: [agi] C-T Thesis (or a version thereof) - Is it useable as an
> in-principle argument for strong AI?
>
>
> Hi everyone,
>
> After having read quite a bit about the C-T Thesis, and its different
> versions, I'm still somewhat confused on whether it's useable as an
> in-principle argument for strong AI.  Why or why isn't it
> useable?  Since I
> suspect this is a common question, any good references that you have are
> appreciated.  (Incidentally, I've read Copeland's entry on the
> C-T Thesis in
> SEoC (plato.standford.edu).)
>
> I'll edit any answers for SL4's Wiki
> (http://sl4.org/bin/wiki.pl?HomePage),
> and thanks very much in advance.
>
> Best wishes,
>
> Anand
> ___
>
> The following text is from the MIT Encyclopedia of Cognitive Sciences:
>
> COMPUTATION AND THE BRAIN
>
> Two very different insights motivate characterizing the brain as a computer.
> The first and more fundamental assumes that the defining function of nervous
> systems is representational; that is, brain states represent states of some
> other system -- the outside world or the body itself -- where transitions
> between states can be explained as computational operations on
> representations. The second insight derives from a domain of mathematical
> theory that defines computability in a highly abstract sense.
>
> The mathematical approach is based on the idea of a Turing machine. Not an
> actual machine, the Turing machine is a conceptual way of saying that any
> well-defined function could be executed, step by step, according to simple
> "if you are in state P and have input Q then do R" rules, given enough time
> (maybe infinite time; see COMPUTATION). Insofar as the brain is a device
> whose input and output can be characterized in terms of some mathematical
> function -- however complicated -- then in that very abstract sense, it can
> be mimicked by a Turing machine. Because neurobiological data indicate that
> brains are indeed cause-effect machines, brains are, in this formal sense,
> equivalent to a Turing machine (see CHURCH-TURING THESIS). Significant though
> this result is mathematically, it reveals nothing specific about the nature
> of mind-brain representation and computation. It does not even imply that the
> best explanation of brain function will actually be in
> computational/representational terms. For in this abstract sense, livers,
> stomachs, and brains -- not to mention sieves and the solar system -- all
> compute. What is believed to make brains unique, however, is their evolved
> capacity to represent the brain's body and its world, and by virtue of
> computation, to produce coherent, adaptive motor behavior in real time.
>
> CHURCH-TURING THESIS
>
> Alonzo Church proposed at a meeting of the American Mathematical Society in
> April 1935, "that the notion of an effectively calculable function of
> positive integers should be identified with that of a recursive function."
> This proposal of identifying an informal notion, effectively calculable
> function, with a mathematically precise one, recursive function, has been
> called Church's thesis since Stephen Cole Kleene used that name in 1952.
> Alan TURING independently made a related proposal in 1936, Turing's thesis,
> suggesting the identification of effectively calculable functions with
> functions whose values can be computed by a particular idealized computing
> device, a Turing machine. As the two mathematical notions are provably
> equivalent, the theses are "equivalent," and are jointly referred to as the
> Church-Turing thesis.
>
> The reflective, partly philosophical and partly mathematical, work around
> and in support of the thesis concerns one of the fundamental notions of
> mathematical logic. Its proper understanding is crucial for making informed
> and reasoned judgments on the significance of limitative results -- like
> GÖDEL'S THEOREMS or Church's theorem. The work is equally crucial for
> computer science, artificial intelligence, and cogni
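
To make the "if you are in state P and have input Q then do R" formulation above
concrete, here is a minimal sketch of such a rule table being executed by a
one-tape machine.  The alphabet, example rules, and helper names are illustrative
assumptions, not anything taken from the encyclopedia entry or from Ben's post:

# Minimal sketch of executing "if you are in state P and have input Q then do R"
# rules on a one-tape Turing machine; rules and names are illustrative only.

BLANK = "_"

def run_turing_machine(tape_input, rules, state="start", halt="halt", max_steps=10000):
    """Run a one-tape machine where rules[(state, symbol)] = (write, move, next_state)."""
    tape = dict(enumerate(tape_input))   # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == halt:
            break
        symbol = tape.get(head, BLANK)
        write, move, state = rules[(state, symbol)]   # "in state P with input Q, do R"
        tape[head] = write
        head += 1 if move == "R" else -1
    cells = [tape.get(i, BLANK) for i in range(min(tape), max(tape) + 1)]
    return "".join(cells).strip(BLANK)

# Example rule table: flip every bit of a binary string, halting at the first blank.
flip_rules = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", BLANK): (BLANK, "R", "halt"),
}

print(run_turing_machine("10110", flip_rules))   # prints 01001

Any function computable by such a rule table is covered by the thesis; the open
question in this thread is whether human behaviour is such a function.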

[agi] Turing Tournament

2003-01-14 Thread Damien Sullivan
Hey, look what my alma mater is up to.  The Humanities and Social Sciences
department, no less.  It was common for undergrads to be in economics
experiments, though, and this 'test' looks pretty similar.  No hard language stuff.

http://turing.ssel.caltech.edu/

-xx- Damien X-) 




Re: [agi] C-T Thesis (or a version thereof) - Is it useable as an in-principle argument for strong AI?

2003-01-14 Thread Anand AI
Thanks, Ben, that answer will be useful for different things.

http://sl4.org/bin/wiki.pl?SingularityQuestions (edited answer below
question 5)

Best,

Anand


Ben Goertzel wrote:
> The CT thesis would seem to imply the possibility of strong AI.
>
> That is, it implies that: On any general-purpose computer, there is some
> computer program that (if supplied with enough memory, e.g. a huge disk
> drive) can display the exact same behaviors as a human, but perhaps on a
> much slower time-scale.
>
> It doesn't imply that strong AI can be achieved by any means other than
> direct human-imitation, and it doesn't say anything about how fast a
> computer has to be or how big it has to be to display a given
> functionality.
>
> It also is just a philosophical hypothesis, not something that has been
> scientifically proved.
>
> That said, one can argue for it on physics grounds, as some have done, and as
> David Deutsch has done for the related Quantum Church-Turing Thesis.
>
> -- Ben G
>
> Anand wrote:
> > After having read quite a bit about the C-T Thesis, and its different
> > versions, I'm still somewhat confused on whether it's useable as an
> > in-principle argument for strong AI.  Why or why isn't it
> > useable?  Since I
> > suspect this is a common question, any good references that you have are
> > appreciated.  (Incidentally, I've read Copeland's entry on the
> > C-T Thesis in
> > SEoC (plato.standford.edu).)
> >
> > I'll edit any answers for SL4's Wiki
> > (http://sl4.org/bin/wiki.pl?HomePage),
> > and thanks very much in advance.
