[agi] AGI morality

2003-02-08 Thread Ben Goertzel
Hi, Some time ago I promised Eliezer a response to a question he posed regarding "AGI morality." I was hoping I'd find the time to write a really detailed response, mirroring the detailed ideas on the topic that exist in my mind. But that time has not arisen, and so I'm going to make a brief response ...

Re: [agi] AGI morality

2003-02-09 Thread Philip Sutton
Ben/Eliezer, > Is appropriate morality likely to arise in an AGI system purely through rationality and "good upbringing"? It seems like Ben's answer is implicitly 'no' because at the end of his post he said: > If we want an AGI to have another goal besides its natural > self-survival goal, we

RE: [agi] AGI morality

2003-02-09 Thread Ben Goertzel
Hi, > I'm not convinced that seeking "human-happiness and human-survival" > is a sufficiently broad base for the hard wired ethical imperative. The term "hard wired ethical imperative" is stronger than what I have in mind, which is more like an initial condition for an evolving and growing ethic

Re: [agi] AGI morality

2003-02-09 Thread Simon McClenahan
I am of the belief that as humans (adults), we pretty much know the difference between moral right and wrong. It is only the effectiveness of our communication channels and the language we use that affects the quality of the meme being spread. Over the ages there have been many philosophers ...

RE: [agi] AGI morality

2003-02-09 Thread Philip Sutton
Ben, One issue you didn't respond to that I suggested was: > I also think that AGIs need to have a built in commitment to devote an > adequate amount of mind space to monitoring the external environment > and internal thought processes to identify issues where ethical > considerations should apply ...

RE: [agi] AGI morality

2003-02-09 Thread Ben Goertzel
> Ben, > One issue you didn't respond to that I suggested was: > > I also think that AGIs need to have a built in commitment to devote an > > adequate amount of mind space to monitoring the external environment > > and internal thought processes to identify issues where ethical > > considerations should apply ...

RE: [agi] AGI morality

2003-02-09 Thread Philip Sutton
Ben, > I agree that a functionally-specialized Ethics Unit could make sense in > an advanced Novamente configuration. ... devoting a Unit to ethics > goal-refinement on an architectural level would be a simple way of > ensuring resource allocation to "ethics processing" through successive > syste...

RE: [agi] AGI morality

2003-02-09 Thread Ben Goertzel
Philip, My idea is that action-framing and environment-monitoring are carried out in a unified way in Units assigned to these tasks generically. You seem to want them to be done in a subdivided way, where actions and perceptions with different motivations are carried out in different Units. But ...
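
To make the proposal concrete: below is a minimal Python sketch of the design Ben describes, in which generic Units frame actions for all motivations at once and ethically motivated GoalNodes influence behavior only indirectly, by scoring candidate actions alongside every other goal. All class and method names here are hypothetical illustrations, not actual Novamente identifiers.

```python
class GoalNode:
    def __init__(self, name, evaluate):
        self.name = name
        # evaluate(situation, action) -> degree to which the action
        # serves this goal in this situation
        self.evaluate = evaluate


class Unit:
    """A generic Unit: frames actions against *all* GoalNodes, rather
    than dedicating separate Units to ethical vs. other motivations."""

    def __init__(self, goal_nodes):
        self.goal_nodes = goal_nodes

    def frame_action(self, situation, candidate_actions):
        # Ethically motivated GoalNodes enter the same pool as every
        # other goal, so ethics shapes behavior only indirectly.
        def total_value(action):
            return sum(g.evaluate(situation, action)
                       for g in self.goal_nodes)
        return max(candidate_actions, key=total_value)
```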

RE: [agi] AGI morality

2003-02-09 Thread Philip Sutton
Ben, > My idea is that action-framing and environment-monitoring are carried > out in a unified way in Units assigned to these tasks generically. > ... ethical thought gets to affect system behavior indirectly > through a), via ethically-motivated GoalNodes, both general ones and > context-specific ...

RE: [agi] AGI morality

2003-02-09 Thread Philip Sutton
Ben, It just occurred to me that ethically competent AGIs could be very useful colleagues for humans working on ethical issues. Humans could do with some collaborative help in sorting out our own ethical messes! Cheers, Philip

RE: [agi] AGI morality

2003-02-10 Thread Philip Sutton
Ben, If in the Novamente configuration the dedicated Ethics Unit is focussed on GoalNode refinement, it might be worth using another term to describe the whole ethical architecture/machinery which would involve aspects of most/all (??) Units plus perhaps even the Mind Operating System (??). Maybe we need to think about an 'ethics system' that is woven into the whole Novamente architecture and processes ...

RE: [agi] AGI morality

2003-02-10 Thread Bill Hibbard
Hi Philip, On Tue, 11 Feb 2003, Philip Sutton wrote: > Ben, > > If in the Novamente configuration the dedicated Ethics Unit is focussed > on GoalNode refinement, it might be worth using another term to > describe the whole ethical architecture/machinery which would involve > aspects of most/all (??) Units ...

RE: [agi] AGI morality

2003-02-10 Thread Ben Goertzel
> Ben, > It just occurred to me that ethically competent AGIs could be very > useful colleagues for humans working on ethical issues. Humans could > do with some collaborative help in sorting out our own ethical messes! ...

RE: [agi] AGI morality

2003-02-10 Thread Ben Goertzel
> > My idea is that action-framing and environment-monitoring are carried > > out in a unified way in Units assigned to these tasks generically. > > ... ethical thought gets to affect system behavior indirectly > > through a), via ethically-motivated GoalNodes, both general ones and > > context-specific ...

RE: [agi] AGI morality

2003-02-10 Thread Michael Anissimov
Philip Sutton wrote: >Maybe we need to think about an 'ethics system' that is woven into the >whole Novamente architecture and processes. How about a benevolence-capped goal system where all the AI's actions flow from a single supergoal? That way you aren't adding ethics into a fundamentally
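
A toy sketch of the supergoal-capped hierarchy Michael describes, where every subgoal is justified only as a means to a single benevolence supergoal, rather than ethics being grafted onto an independent goal set. The class and goal names are illustrative assumptions, not code from any project.

```python
class Goal:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent  # every goal except the supergoal has a parent

    def justification_chain(self):
        """Trace a subgoal back to the supergoal it ultimately serves."""
        chain, node = [self.name], self
        while node.parent is not None:
            node = node.parent
            chain.append(node.name)
        return chain


supergoal = Goal("benevolence")                       # the single cap
learn = Goal("acquire knowledge", parent=supergoal)   # subgoal
talk = Goal("communicate with humans", parent=learn)  # sub-subgoal

print(talk.justification_chain())
# -> ['communicate with humans', 'acquire knowledge', 'benevolence']
```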

Re: [agi] AGI morality

2003-02-10 Thread Eliezer S. Yudkowsky
Ben Goertzel wrote: However, it's to be expected that an AGI's ethics will be different than any human's ethics, even if closely related. What do a Goertzelian AGI's ethics and a human's ethics have in common that makes it a humanly ethical act to construct a Goertzelian AGI? -- Eliezer S. Yudkowsky ...

RE: [agi] AGI morality

2003-02-10 Thread Ben Goertzel
Michael Anissimov wrote: > > Philip Sutton wrote: > >Maybe we need to think about an 'ethics system' that is woven into the > >whole Novamente architecture and processes. > > How about a benevolence-capped goal system where all the AI's actions > flow from a single supergoal? That way you aren't adding ethics into a fundamentally ...

RE: [agi] AGI morality

2003-02-10 Thread Ben Goertzel
Philip Sutton wrote: > If in the Novamente configuration the dedicated Ethics Unit is focussed > on GoalNode refinement, it might be worth using another term to > describe the whole ethical architecture/machinery which would involve > aspects of most/all (??) Units plus perhaps even the Mind Operating System (??) ...

RE: [agi] AGI morality

2003-02-10 Thread Ben Goertzel
I think we all agree that, loosely speaking, we want our AGIs to have a goal of respecting and promoting the survival and happiness of humans and all intelligent and living beings. However, no two minds interpret these general goals in the same way. You and I don't interpret them exactly the same ...

RE: [agi] AGI morality

2003-02-10 Thread Ben Goertzel
> I think discussing ethics in terms of goals leads to confusion. > As I described in an earlier post at: > > http://www.mail-archive.com/agi@v2.listbox.com/msg00390.html > > reasoning must be grounded in learning and goals must be grounded > in values (i.e., the values used to reinforce behaviors) ...
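
A minimal sketch of the distinction Bill draws in the quoted post: "values" are the scalar signal that reinforces behavior, and goal-seeking emerges from that learning rather than from explicit reasoning about goals. Everything below, including the action names and the value function, is a generic illustration, not code from either project.

```python
import random

actions = ["help", "ignore"]
weights = {a: 0.0 for a in actions}


def value_signal(action):
    # The built-in value: behavior that helps humans is reinforced,
    # behavior that ignores them is punished.
    return 1.0 if action == "help" else -1.0


learning_rate = 0.1
for _ in range(100):
    action = random.choice(actions)  # explore at random
    weights[action] += learning_rate * value_signal(action)

# The learned "goal" of helping is grounded in the value signal:
print(max(weights, key=weights.get))  # prints "help"
```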

RE: [agi] AGI morality

2003-02-10 Thread Michael Anissimov
Ben Goertzel writes: >This is a key aspect of Eliezer Yudkowsky's "Friendly Goal >Architecture" Yeah; too bad there isn't really anyone else to cite on this one. It will be interesting to see what other AGI pursuers have to say about the hierarchical goal system issue, once they write up their

RE: [agi] AGI morality

2003-02-10 Thread Ben Goertzel
> Right, that statement was directed towards Philip Sutton's mail, but I > appreciate your stepping in to clarify. Of course, whether AIs with > substantially prehuman (low) intelligence can have goals that deserve > being called "ethical" or "unethical" is a matter of word choice and > definitions ...

RE: [agi] AGI morality

2003-02-10 Thread Bill Hibbard
Hi Ben, > > I think discussing ethics in terms of goals leads to confusion. > > As I described in an earlier post at: > > > > http://www.mail-archive.com/agi@v2.listbox.com/msg00390.html > > > > reasoning must be grounded in learning and goals must be grounded > > in values (i.e., the values used to reinforce behaviors) ...

RE: [agi] AGI morality

2003-02-10 Thread Philip Sutton
Michael/Ben, Michael said: > whether AIs with substantially prehuman (low) intelligence can have > goals that deserve being called "ethical" or "unethical" is a matter of > word choice and definitions. This raises the issue of whether one should even try to build in ethics right from the start of the evolution of AGIs when they will not be very smart compared to humans ...

RE: [agi] AGI morality

2003-02-10 Thread Ben Goertzel
> > A goal in Novamente is a kind of predicate, which is just a > > function that > > assigns a value in [0,1] to each input situation it observes... > > i.e. it's a > > 'valuation' ;-) > > Interesting. Are these values used for reinforcing behaviors > in a learning system? Or are they used in a con...
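
Ben's description here is concrete enough to sketch: a goal as a predicate assigning each observed situation a value in [0, 1], i.e. a "valuation." The class name, the situation-as-dict representation, and the happiness example are assumptions for illustration only, not Novamente's actual implementation.

```python
class GoalPredicate:
    """A 'valuation': maps an observed situation to a degree in [0, 1]."""

    def __init__(self, name, fn):
        self.name = name
        self.fn = fn

    def __call__(self, situation):
        # Clamp so the output is always a legal degree in [0, 1].
        return min(1.0, max(0.0, self.fn(situation)))


# Hypothetical example: a goal satisfied in proportion to the reported
# average human happiness in the observed situation.
happiness_goal = GoalPredicate(
    "human-happiness",
    lambda s: s.get("avg_happiness", 0.0),
)
print(happiness_goal({"avg_happiness": 0.7}))  # -> 0.7
```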

Re: [agi] AGI morality

2003-02-10 Thread Brad Wyble
> There might even be a benefit to trying to develop an ethical system for > the earliest possible AGIs - and that is that it forces everyone to strip > the concept of an ethical system down to its absolute basics so that it > can be made part of a not very intelligent system. That will pr...

RE: [agi] AGI morality

2003-02-10 Thread Michael Anissimov
Philip wrote: >This raises the issue of whether one should even try to build in >ethics right from the start of the evolution of AGIs when they will >not be very smart compared to humans. Oh; I think one should try. This is the fate of the universe we're talking about here. And keep in mind

RE: [agi] AGI morality

2003-02-11 Thread Bill Hibbard
On Mon, 10 Feb 2003, Ben Goertzel wrote: > > > A goal in Novamente is a kind of predicate, which is just a > > > function that assigns a value in [0,1] to each input situation it observes... > > > i.e. it's a 'valuation' ;-) > > Interesting. Are these values used for reinforcing behaviors ...

RE: [agi] AGI morality

2003-02-11 Thread Ben Goertzel
Bill Hibbard wrote: > On Mon, 10 Feb 2003, Ben Goertzel wrote: > > > > A goal in Novamente is a kind of predicate, which is just a > > > > function that assigns a value in [0,1] to each input situation it observes... > > > > i.e. it's a 'valuation' ;-) > > > Interesting. Are these values used for reinforcing behaviors ...

Re: [agi] AGI morality

2003-02-11 Thread C. David Noziglia
> Philip wrote: > I think ethics only come in where an intelligent entity can identify > 'otherness' in the environment and needs that are not its own. Ethics > are then rules that guide the formulation of the intelligent entity's > behaviour in a way that optimises for not only the intelligent entity ...

RE: [agi] AGI morality - goals and reinforcement values

2003-02-11 Thread Philip Sutton
Ben/Bill, My feeling is that goals and ethics are not identical concepts. And I would think that goals would only make an intentional ethical contribution if they related to the empathetic consideration of others. So whether ethics are built in from the start in the Novamente architecture depends ...

RE: [agi] AGI morality - goals and reinforcement values

2003-02-11 Thread Ben Goertzel
> Ben/Bill, > > My feeling is that goals and ethics are not identical concepts. One needs to be careful in using words to describe mathematical/software concepts. The English word "goal" has a lot of meanings, and is not identical to the behavior of "goal nodes" or "goal maps" in Novamente. Go...

RE: [agi] AGI morality - goals and reinforcement values

2003-02-11 Thread Bill Hibbard
On Wed, 12 Feb 2003, Philip Sutton wrote: > Ben/Bill, > > My feeling is that goals and ethics are not identical concepts. And I > would think that goals would only make an intentional ethical > contribution if they related to the empathetic consideration of others. > . . . Absolutely goals (I pr...

Re: [agi] AGI morality - goals and reinforcement values

2003-02-11 Thread Eliezer S. Yudkowsky
Bill Hibbard wrote: On Wed, 12 Feb 2003, Philip Sutton wrote: Ben/Bill, My feeling is that goals and ethics are not identical concepts. And I would think that goals would only make an intentional ethical contribution if they related to the empathetic consideration of others. Absolutely goals

RE: [agi] AGI morality - goals and reinforcement values - plus early learning

2003-02-11 Thread Philip Sutton
Ben, > Right from the start, even before there is an intelligent autonomous mind > there, there will be goals that are of the basic structural character of > ethical goals. I.e. goals that involve the structure of "compassion", of > adjusting the system's actions to account for the well-being of

Re: [agi] AGI morality - a paranoid view - "will it burn up the atmosphere?"

2003-02-09 Thread John Rose
Reading this just made me change the way I've been thinking about AGI. One of the "issues" with human existence is the implication of our DNA's survival over time. Mortality, behavior, emotions, just so the DNA can survive and propagate. Aren't we past that stage? Probably not. We all benefit and

Re: [agi] AGI morality - a paranoid view - "will it burn up the atmosphere?"

2003-02-09 Thread Alan Grimes
> we are its keepers. Until it doesn't need us anymore (nanotechnology) > and we are an obstacle. An obstacle to what? This is the one thing that doesn't make any sense to me... People assume that once it is capable of devouring the universe it _WILL_ devour the universe. The trick here is that

Re: [agi] AGI morality - a paranoid view - "will it burn up the atmosphere?"

2003-02-09 Thread Brad Wyble
> > An obstacle to what? > This is the one thing that doesn't make any sense to me... People assume > that once it is capable of devouring the universe it _WILL_ devour the > universe. The trick here is that for it to undertake such an agenda it > must have, explicitly, some code which gives it