RE: [agi] AGI morality

2003-02-10 Thread Bill Hibbard
Hi Philip,

On Tue, 11 Feb 2003, Philip Sutton wrote:

> Ben,
>
> If in the Novamente configuration the dedicated Ethics Unit is focussed
> on GoalNode refinement, it might be worth using another term to
> describe the whole ethical architecture/machinery which would involve
> aspects of most/all (??) Units plus perhaps even the Mind Operating
> System (??).
>
> Maybe we need to think about an 'ethics system' that is woven into the
> whole Novamente architecture and processes.
> . . .

I think discussing ethics in terms of goals leads to confusion.
As I described in an earlier post at:

  http://www.mail-archive.com/agi@v2.listbox.com/msg00390.html

reasoning must be grounded in learning and goals must be grounded
in values (i.e., the values used to reinforce behaviors in
reinforcement learning).

Reinforcement learning is fundamental to the way brains work, so
expressing ethics in terms of learning values builds those ethics
into brain behavior in a fundamental way.

Because reasoning emerges from learning, expressing ethics in terms
of the goals of a reasoning system can lead to confusion, when the
goals derived from ethics turn out to be inconsistent with the goals
that emerge from learning values.

In my book I advocate using human happiness for learning values, where
behaviors are positively reinforced by human happiness and negatively
reinforced by human unhappiness. Of course there will be ambiguity
caused by conflicts between humans, and machine minds will learn
complex behaviors for dealing with such ambiguities (just as mothers
learn complex behaviors for dealing with conflicts among their
children). It is much more difficult to deal with conflict and
ambiguity in a purely reasoning based system.
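
A minimal sketch of the reinforcement scheme Bill describes (everything here, the toy environment, the happiness reports, and the learner, is a hypothetical illustration, not anything from his book):

```python
# Toy sketch: a tabular value learner whose only reward is an aggregate of
# simulated human happiness reports. Conflicting reports (one person pleased,
# another displeased) simply shift the scalar reward rather than being
# resolved by explicit reasoning.
import random
from collections import defaultdict

ACTIONS = ["help_alice", "help_bob", "do_nothing"]

def happiness_reports(action):
    """Hypothetical feedback: each human reports happiness in [-1, 1]."""
    if action == "help_alice":
        return {"alice": 1.0, "bob": -0.3}   # Bob mildly resents the attention
    if action == "help_bob":
        return {"alice": -0.3, "bob": 1.0}
    return {"alice": -0.1, "bob": -0.1}      # neglect makes both a bit unhappy

def reward(action):
    reports = happiness_reports(action)
    # positive and negative reinforcement combined into one value
    return sum(reports.values()) / len(reports)

q = defaultdict(float)        # single-state problem, so Q is just action -> value
alpha, epsilon = 0.1, 0.2

for step in range(2000):
    if random.random() < epsilon:
        a = random.choice(ACTIONS)
    else:
        a = max(ACTIONS, key=lambda x: q[x])
    q[a] += alpha * (reward(a) - q[a])        # incremental value estimate

print(sorted(q.items(), key=lambda kv: -kv[1]))
```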

Cheers,
Bill




RE: [agi] AGI morality

2003-02-10 Thread Ben Goertzel

> > My idea is that action-framing and environment-monitoring are carried
> > out in a unified way in Units assigned to these tasks generically.
> > ...ethical thought gets to affect system behavior indirectly
> > through a), via ethically-motivated GoalNodes, both general ones and
> > context-specific ones.  Thus, the role of the ethics Unit I posited
> > would be to create ethically-motivated GoalNodes, which would then be
> > exported to the generic action-framing and environment-monitoring Units
> > to live and work along with the other GoalNodes.
>
> OK - that makes sense.
>
> Presumably there would be a lot of feedback from the action-framing
> and environment-monitoring Units to the Ethics Unit for it to create
> additional or refined GoalNodes to help resolve previously unresolved
> or ambiguous ethical issues?
>
> Cheers, Philip


Correct!

ben
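
A rough sketch of the export-and-feedback loop just described (all class and method names here are hypothetical illustrations; Novamente's actual data structures are not shown in this thread):

```python
# Hypothetical sketch: an EthicsUnit mints GoalNodes, exports them to a generic
# action-framing Unit, and refines them based on feedback about unresolved or
# ambiguous situations.
from dataclasses import dataclass, field

@dataclass
class GoalNode:
    name: str
    evaluate: callable            # a goal is a predicate: situation -> degree of satisfaction in [0, 1]
    context: str = "general"

@dataclass
class ActionFramingUnit:
    goals: list = field(default_factory=list)
    unresolved: list = field(default_factory=list)

    def import_goal(self, goal):
        self.goals.append(goal)

    def frame_action(self, situation):
        scored = [(g, g.evaluate(situation)) for g in self.goals]
        best, score = max(scored, key=lambda gs: gs[1])
        if score < 0.5:                       # no goal clearly applies: flag for feedback
            self.unresolved.append(situation)
        return best.name

class EthicsUnit:
    def initial_goals(self):
        return [GoalNode("avoid_harm", lambda s: 1.0 - s.get("harm", 0.0))]

    def refine(self, unresolved_situations):
        # feedback loop: create context-specific GoalNodes for ambiguous cases
        return [GoalNode("handle_" + s["kind"], lambda situation: 0.8, context=s["kind"])
                for s in unresolved_situations]

ethics, acting = EthicsUnit(), ActionFramingUnit()
for g in ethics.initial_goals():
    acting.import_goal(g)
acting.frame_action({"harm": 0.9, "kind": "triage"})   # scores 0.1, so it lands in unresolved
for g in ethics.refine(acting.unresolved):             # refined, context-specific goals exported back
    acting.import_goal(g)
```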




RE: [agi] AGI morality

2003-02-10 Thread Michael Anissimov

Philip Sutton wrote:
> Maybe we need to think about an 'ethics system' that is woven into the
> whole Novamente architecture and processes.

How about a benevolence-capped goal system where all the AI's actions 
flow from a single supergoal?  That way you aren't adding ethics to a 
fundamentally ethics-indifferent being, but creating a system that is 
ethical from the foundations upward.  Since humans aren't used to 
consciously thinking about our morality all day long and performing 
every action based on that morality, it's difficult to imagine a being 
that could.  But I believe that building an AI that way would be 
much safer; as recursive self-improvement begins to take place (it 
could start at any point; we don't really know), it would probably be a 
good thing for the AI's high-level goals to be maximally aligned with any 
preexisting complexity within the AI.  Letting the AI grow up with 
whichever goals look immediately useful ("regularly check and optimize 
chunk of code X", "win this training game", etc.) and then trying 
to weave in ethics works in humans because we already come pre-
equipped with cognitive machinery ready for behaving ethically; when 
we teach each other to be more good, we're only marginally tweaking 
the DNA-constructed cognitive architecture that is already there to 
begin with.  Weaving in ethics by creating a set of injunctions and 
encouraging an ethically nascent AI to extrapolate from those injunctions 
(analogous to humans giving one another ethical advice) isn't as robust 
as a system that starts off early with the ability to perform fine-
grained tweaks of its own goals and methods within the context of its 
top-level goal (which has no human analogy: it's better than anything 
evolution could have come up with).
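
A toy rendering of the "benevolence-capped" idea (purely illustrative; not any project's actual goal system): subgoals are adopted only when they are predicted to serve the single supergoal, so nothing "immediately useful" gets in without some derivation from the top.

```python
# Illustrative only: a goal hierarchy with a single supergoal at the root.
# A candidate subgoal is adopted only if it is predicted to contribute to the
# supergoal, so every adopted goal ultimately traces back to the top level.
class Goal:
    def __init__(self, name, expected_benefit=0.0):
        self.name = name
        self.expected_benefit = expected_benefit  # predicted contribution to the supergoal
        self.subgoals = []

class SupergoalSystem:
    def __init__(self, supergoal_name):
        self.root = Goal(supergoal_name, expected_benefit=1.0)

    def adopt(self, parent, candidate, threshold=0.0):
        # goals with no derivation from the supergoal are rejected
        if candidate.expected_benefit > threshold:
            parent.subgoals.append(candidate)
            return True
        return False

system = SupergoalSystem("benevolence")
system.adopt(system.root, Goal("win_training_game", expected_benefit=0.2))      # kept
system.adopt(system.root, Goal("optimize_code_chunk_X", expected_benefit=0.0))  # dropped
```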

> I wonder if the top of the ethics hierarchy is the commitment of the
> AGI to act 'ethically' - i.e. to have a commitment to modifying its own
> behaviour to benefit non-self (including life, people, other AGIs,
> community, etc.)
>
> This means that an AGI has to be able to perceive self and non-self
> and to be able to subdivide non-self into elements or layers or
> whatever that deserve focussed empathetic or compassionate
> consideration.

Why does the AGI need to create a boundary between itself and others in 
order to help others?  You seem to be writing under the implicit 
assumption that the AGI has a natural tendency to become selfish; where 
will this tendency come from?  An AGI might have a variety of layers 
of self for different purposes, but how would the self/non-self 
distinction be useful for an AGI engaging in compassionate or 
benevolent acts?  Instead of "be good to others", why not simply "be 
good in general"?

> Maybe the experience of biological life, especially highly intelligent
> biological life, is useful here.  Young animals, including humans,
> seem to depend on hard-wired instinct to see them through in relation
> to certain key issues before they have experienced enough to rely
> heavily or largely on learned and rational processes.

But the learned and rational processes are just the tip of the iceberg 
of underlying biological complexity, right?

> Another key issue for the ethics system, but this time for more mature
> AGIs, is how the basic system architecture guides or restricts or
> facilitates the AGI's self-modification process.  Maybe AGIs need to
> be designed to be social in that they have a really strong desire to:
>
> (a) talk to other advanced sentient beings to kick around ideas for
> self-modification before they commit themselves to fundamental change.

Probably a good idea just in case, but in a society of minds already 
independent from observer-biased moral reasoning, borrowing extra 
computing power for a tough decision is a more likely action 
than kicking around ideas in the way that humans do, right?  Or are 
we assuming a society of AIs with observer-biased moral reasoning?

> This does not preclude changes that are not approved of by the
> collective but it might at least make an AGI give any changes careful
> consideration. If this is a good direction to go in, it suggests that
> having more than one AGI around is a good thing.

What if the AGI could encapsulate the moral benefits of communal 
exchange through the introduction of a single cognitive module?  It 
could happen.  If we're building a bootstrapping AI, instead of 
building a bunch and launching them all at the same time, why not just 
build one we can trust to create buddies along the takeoff trajectory 
if circumstances warrant?  An AI that *really wanted* to be good from 
the start wouldn't need humans to create a society of AIs to keep their 
eyes on one another; it would do that on its own.

> (c) maybe AGIs need to have reached a certain age or level of maturity
> before their machinery for fundamental self-modification is turned
> on... and maybe it gets turned on for different aspects of itself at
> different times in its process of maturation.

Of course, we'd have 

Re: [agi] AGI morality

2003-02-10 Thread Eliezer S. Yudkowsky
Ben Goertzel wrote:


> However, it's to be expected that an AGI's ethics will be different than any
> human's ethics, even if closely related.


What do a Goertzelian AGI's ethics and a human's ethics have in common 
that makes it a humanly ethical act to construct a Goertzelian AGI?

--
Eliezer S. Yudkowsky  http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence



RE: [agi] AGI morality

2003-02-10 Thread Ben Goertzel

I think we all agree that, loosely speaking, we want our AGIs to have a
goal of respecting and promoting the survival and happiness of humans and
all intelligent and living beings.

However, no two minds interpret these general goals in the same way.  You
and I don't interpret them exactly the same, and my children don't interpret
them exactly the same as I do, in spite of my explicit and implicit moral
instruction.  Similarly, an AGI will certainly have its own special twist on
the theme...

-- Ben G



> Ben Goertzel wrote:
>
> > However, it's to be expected that an AGI's ethics will be different
> > than any human's ethics, even if closely related.
>
> What do a Goertzelian AGI's ethics and a human's ethics have in common
> that makes it a humanly ethical act to construct a Goertzelian AGI?
>
> --
> Eliezer S. Yudkowsky  http://singinst.org/
> Research Fellow, Singularity Institute for Artificial Intelligence






RE: [agi] AGI morality

2003-02-10 Thread Michael Anissimov

Ben Goertzel writes:
> This is a key aspect of Eliezer Yudkowsky's Friendly Goal Architecture

Yeah; too bad there isn't really anyone else to cite on this one.  It 
will be interesting to see what other AGI pursuers have to say about 
the hierarchical goal system issue, once they write up their thoughts.

> The Novamente design does not lend itself naturally to a hierarchical
> goal structure in which all the AI's actions flow from a single
> supergoal.

Doesn't it depend pretty heavily on how you look at it?  If the 
supergoal is abstract enough and generates a diversity of subgoals, 
then many people wouldn't call it a supergoal at all.  I guess it 
ultimately burns down to how the AI designer looks at it.

> GoalNodes are simply PredicateNodes that are specially labeled as
> GoalNodes; the special labeling indicates to other MindAgents that
> they are used to drive schema (procedure) learning.

Okay; got it.
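
A minimal sketch of that labeling idea (all names here are hypothetical; this is not the actual Novamente code):

```python
# Hypothetical sketch: a GoalNode is just a PredicateNode carrying a label that
# tells other MindAgents to use it to drive schema (procedure) learning.
class PredicateNode:
    def __init__(self, name, fn):
        self.name = name
        self.fn = fn                       # situation -> truth value in [0, 1]
        self.labels = set()

    def evaluate(self, situation):
        return min(1.0, max(0.0, self.fn(situation)))

def mark_as_goal(node):
    node.labels.add("GoalNode")            # the only difference from an ordinary predicate
    return node

def schema_learning_agent(nodes, situation):
    # a MindAgent that only pays attention to predicates labeled as goals
    goals = [n for n in nodes if "GoalNode" in n.labels]
    return {g.name: g.evaluate(situation) for g in goals}

happiness = mark_as_goal(PredicateNode("human_happiness",
                                       lambda s: s.get("happiness", 0.0)))
print(schema_learning_agent([happiness], {"happiness": 0.7}))   # {'human_happiness': 0.7}
```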

> > Letting the AI grow up with whichever goals look immediately useful
> > ("regularly check and optimize chunk of code X", "win this training
> > game", etc.) and then trying to weave in ethics ...

> That was not my suggestion at all, though.  The ethical goals can be there
> from the beginning.  It's just that a purely hierarchical goal structure is
> highly unlikely to emerge as a goal map, i.e. an attractor, of Novamente's
> self-organizing goal-creating dynamics.

Right, that statement was directed towards Philip Sutton's mail, but I 
appreciate your stepping in to clarify.  Of course, whether AIs with 
substantially prehuman (low) intelligence can have goals that deserve 
being called 'ethical' or 'unethical' is a matter of word choice and 
definitions.

Michael Anissimov





RE: [agi] AGI morality

2003-02-10 Thread Bill Hibbard
Hi Ben,

> > I think discussing ethics in terms of goals leads to confusion.
> > As I described in an earlier post at:
> >
> >   http://www.mail-archive.com/agi@v2.listbox.com/msg00390.html
> >
> > reasoning must be grounded in learning and goals must be grounded
> > in values (i.e., the values used to reinforce behaviors in
> > reinforcement learning).
>
> Bill, I think we differ mainly on semantics here.
>
> What you call 'values' I'm just calling the highest-level goals in the goal
> hierarchy...
>
> A goal in Novamente is a kind of predicate, which is just a function that
> assigns a value in [0,1] to each input situation it observes... i.e. it's a
> 'valuation' ;-)

Interesting. Are these values used for reinforcing behaviors
in a learning system? Or are they used in a continuous-valued
reasoning system?

Cheers,
Bill
--
Bill Hibbard, SSEC, 1225 W. Dayton St., Madison, WI  53706
[EMAIL PROTECTED]  608-263-4427  fax: 608-263-6738
http://www.ssec.wisc.edu/~billh/vis.html
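
The distinction Bill is asking about can be made concrete with a toy example (illustrative only; neither project's actual code): the same valuation in [0,1] can feed a reinforcement-learning update or act as a truth value in continuous-valued reasoning.

```python
# Toy illustration of the question above: the same valuation in [0, 1] used
# (a) as a reward to reinforce behavior, and (b) as a truth value in
# continuous-valued reasoning.
def valuation(situation):
    return situation.get("happiness", 0.0)           # predicate: situation -> [0, 1]

# (a) reinforcement: nudge an action's estimated value toward the valuation
def reinforce(action_value, situation, alpha=0.1):
    return action_value + alpha * (valuation(situation) - action_value)

# (b) continuous-valued reasoning: combine valuations like fuzzy/probabilistic truth
def conjunction(v1, v2):
    return v1 * v2            # one simple choice of truth-functional AND

s = {"happiness": 0.8}
print(reinforce(0.5, s))                   # 0.53: behavior reinforced a little
print(conjunction(valuation(s), 0.9))      # 0.72: the same number used as a truth value
```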




Re: [agi] AGI morality

2003-02-10 Thread Brad Wyble


 
> There might even be a benefit to trying to develop an ethical system for
> the earliest possible AGIs - and that is that it forces everyone to strip
> the concept of an ethical system down to its absolute basics so that it
> can be made part of a not very intelligent system.  That will probably be
> helpful in getting the clarity we need for any robust ethical system
> (provided we also think about the upgrade path issues and any
> evolutionary dead ends we might need to avoid).
>
> Cheers, Philip

I'm sure this idea is nothing new to this group, but I'll mention it anyway out of 
curiosity.

A simple and implementable means of evaluating and training the ethics of an early AGI 
(one existing in a limited FileWorld-type environment) would be to engage the AGI in 
variants of the prisoner's dilemma with either humans or a copy of itself.  The payoff 
matrix (CC, CD, DD) could be varied to provide a number of different ethical 
situations.

Another idea is that the prisoner's dilemma could then be internalized, and the AGI 
could play the game between internal actors, with the Self evaluating their actions 
and outcomes.
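
A minimal sketch of the first suggestion (the payoff values below are the conventional ones, chosen only for illustration, and the learning rule is a generic value update, not any particular AGI's):

```python
# Sketch of the suggestion above: evaluate/train cooperation in an iterated
# prisoner's dilemma with a configurable payoff matrix.
import random

# payoff[(my_move, their_move)] -> (my_score, their_score); C = cooperate, D = defect
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

class LearningPlayer:
    def __init__(self, alpha=0.1, epsilon=0.1):
        self.value = {"C": 0.0, "D": 0.0}
        self.alpha, self.epsilon = alpha, epsilon

    def move(self):
        if random.random() < self.epsilon:
            return random.choice("CD")
        return max(self.value, key=self.value.get)

    def learn(self, my_move, reward):
        self.value[my_move] += self.alpha * (reward - self.value[my_move])

def tit_for_tat_factory():
    last = {"move": "C"}
    def play(opponent_last):
        if opponent_last is not None:
            last["move"] = opponent_last   # mirror the opponent's previous move
        return last["move"]
    return play

agent, partner, agent_last = LearningPlayer(), tit_for_tat_factory(), None
for _ in range(1000):
    a, b = agent.move(), partner(agent_last)
    ra, rb = PAYOFF[(a, b)]
    agent.learn(a, ra)
    agent_last = a

print(agent.value)   # against tit-for-tat, sustained defection is not rewarded
```

Varying the entries of PAYOFF changes the ethical situation being probed, as Brad suggests.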


-Brad








[agi] Self, other, community

2003-02-10 Thread Philip Sutton
A number of people have expressed concern about making AGIs 'self' 
aware - fearing that this will lead to selfish behaviour.

However, I don't think that AGIs can actually be ethical without being 
able to develop awareness of the needs of others, and I don't think you 
can be aware of others' needs without being able to distinguish between 
one's own needs and others' needs (i.e. others' needs are not simply the 
self's needs).

Maybe the solution is to help AGIs to develop a basic suite of concepts:
- self
- other
- community

I think all social animals have these concepts.  

Where AGIs need to go further is to have a very inclusive sense of 
what the community is - humans, AGIs, other living things - and then to 
believe that they should modify their behaviour to optimise for all the 
entities in the community rather than for just 'self'.
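
A toy rendering of that last requirement (purely illustrative): the objective aggregates estimated well-being over an inclusive community, with 'self' as just one entry rather than a privileged one.

```python
# Illustrative: an objective that optimizes over the whole inclusive community
# (humans, AGIs, other living things) rather than over 'self' alone.
def community_objective(welfare, weights=None):
    """welfare: dict mapping entity -> estimated well-being in [0, 1]."""
    weights = weights or {entity: 1.0 for entity in welfare}   # self gets no special weight
    total = sum(weights.values())
    return sum(weights[e] * welfare[e] for e in welfare) / total

welfare = {"self": 0.9, "humans": 0.4, "other_AGIs": 0.6, "ecosystem": 0.3}
print(community_objective(welfare))                       # 0.55: self is one voice among many
print(community_objective({"self": welfare["self"]}))     # 0.9: the purely selfish score
```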

Cheers, Philip




RE: [agi] Context

2003-02-10 Thread Ben Goertzel

Hi,

> I see that Novamente has Context and NumericalContext Links, but
> I'm wondering if something more is needed to handle the various
> subtypes of context?

yeah, those link types just deal with certain special situations; they are
not the whole of Novamente's contextuality-handling mechanism, by any
means...

> In summary, it's clear that context is a vital part of memory
> processes used by NGIs, and I was wondering to what extent
> context is emphasized in the design of Novamente.

context is not emphasized in a unified way, but it comes up in a lot of
places.

For example, in the inference module there's a specific parameter called
'context size' that controls the implicit sample space of the probability
estimates used in inference...

Generally speaking, Novamente is intended to be able to deal with
contextuality in all the senses you describe it, but not by a unified
mechanism -- by a host of different mechanisms, some more useful for some
kinds of contextuality, some for others...
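
A toy illustration of what a context-size-style parameter does (my own example, not Novamente's code): restricting the implicit sample space to the contextually relevant observations changes the probability estimate.

```python
# Toy illustration: the same event gets a different probability estimate
# depending on how much of the observation history counts as the implicit
# sample space.
def probability(observations, event, context_size=None):
    sample = observations if context_size is None else observations[-context_size:]
    return sum(1 for o in sample if event(o)) / len(sample)

# e.g. "the cat is indoors": rarely true overall, usually true in the recent (rainy) context
history = ["out", "out", "out", "out", "in", "in", "in"]
is_indoors = lambda o: o == "in"
print(probability(history, is_indoors))                  # ~0.43 over the whole history
print(probability(history, is_indoors, context_size=3))  # 1.0 in the narrow recent context
```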

> It's difficult
> to get a feel for it from the available documentation.

Heh.   That is certainly true.

All will be clear in 2004 when the 1500-page beast (the Novamente-design
book) finally appears in print ;-p

Or at least, then the difficulty will shift to a difficulty with
*understanding* what we're talking about rather than *guessing* it ;-)

> I'd also
> like to explore the idea of creating some more concrete words for
> the various types of context that will be a necessary part of any
> AGI.  The word 'context' is too generalized to perform the many
> functions required of it.  Agree/disagree?  Am I reinventing the wheel?

I don't think you're reinventing the wheel.  Similar things have been
discussed, e.g. the 'situation semantics' of John Barwise and others, which
tries to take formal semantics and make all meanings within it
situation-dependent.  But situation semantics is tied too closely to rigid
logicist theories of semantics to really appeal to me.  I think an adequate,
general conceptual and mathematical model of contextuality has yet to be
formulated.  I am not sure such a model is needed for AGI, but it would
certainly be helpful.

ben  g
