Re: [agi] Religion-free technical content

2007-10-01 Thread Vladimir Nesov
On 10/1/07, Matt Mahoney <[EMAIL PROTECTED]> wrote: > > I remain skeptical. Your argument applies to an AGI not modifying its own > motivational system. It does not apply to an AGI making modified copies of > itself. In fact you say: > > > Also, during the development of the first true AI, we wo

Re: [agi] What is the complexity of RSI?

2007-10-01 Thread J Storrs Hall, PhD
On Sunday 30 September 2007 09:24:24 pm, Matt Mahoney wrote: > > --- "J Storrs Hall, PhD" <[EMAIL PROTECTED]> wrote: > > And detrimental mutations greatly outnumber beneficial ones. > > It depends. Eukaryotes mutate more intelligently than prokaryotes. Their > mutations (by mixing large snips o
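
A toy sketch of the distinction Hall gestures at here: recombining large intact chunks (as eukaryotic crossover does) tends to preserve working structure, while independent point mutations scatter damage everywhere. Illustrative Python, not from the thread:

    import random

    random.seed(42)
    a = list("the quick brown fox jumps over the lazy dog")
    b = list("pack my box with five dozen liquor jugs too")

    # "Eukaryote-style" change: splice two large intact snips together.
    cut = random.randrange(len(a))
    crossover = a[:cut] + b[cut:]

    # "Prokaryote-style" change: independent point mutations at each site.
    point_mutant = [c if random.random() > 0.2 else
                    chr(random.randrange(97, 123)) for c in a]

    print("".join(crossover))     # two readable halves survive intact
    print("".join(point_mutant))  # corruption scattered everywhere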

Re: [agi] Religion-free technical content

2007-10-01 Thread Richard Loosemore
Matt Mahoney wrote: --- Richard Loosemore <[EMAIL PROTECTED]> wrote: Derek Zahn wrote: Richard Loosemore writes: > It is much less opaque. > > I have argued that this is the ONLY way that I know of to ensure that > AGI is done in a way that allows safety/friendliness to be guaranteed. >

Re: [agi] Religion-free technical content

2007-10-01 Thread Jef Allbright
On 9/30/07, Richard Loosemore <[EMAIL PROTECTED]> wrote: > > The motivational system of some types of AI (the types you would > > classify as tainted by complexity) can be made so reliable that the > > likelihood of them becoming unfriendly would be similar to the > > likelihood of the molecule

RE: [agi] Religion-free technical content

2007-10-01 Thread Edward W. Porter
To Matt Mahoney. Your 9/30/2007 8:36 PM post referred to mine in reply to Derek Zahn and implied RSI (which I assume from context is a reference to Recursive Self Improvement) is necessary for general intelligence. When I said -- in reply to Derek's suggestion that RSI be banned -- that I didn't

RE: [agi] Religion-free technical content

2007-10-01 Thread Derek Zahn
Richard Loosemore writes: > You must remember that the complexity is not a massive part of the > system, just a small-but-indispensable part. > > I think this sometimes causes confusion: did you think that I meant > that the whole thing would be so opaque that I could not understand > *anything* a

RE: [agi] Religion-free technical content

2007-10-01 Thread Derek Zahn
Edward W. Porter writes:> To Matt Mahoney. > Your 9/30/2007 8:36 PM post referred to mine in reply to Derek Zahn and > implied RSI > (which I assume from context is a reference to Recursive Self Improvement) is > necessary for general intelligence. > So could you, or someone, please define exa

Re: AI and botnets Re: [agi] What is the complexity of RSI?

2007-10-01 Thread Matt Mahoney
--- William Pearson <[EMAIL PROTECTED]> wrote: > On 30/09/2007, Matt Mahoney <[EMAIL PROTECTED]> wrote: > > The real danger is this: a program intelligent enough to understand > software > > would be intelligent enough to modify itself. > > Well it would always have the potential. But you are as

Re: [agi] Religion-free technical content

2007-10-01 Thread Richard Loosemore
Derek Zahn wrote: Richard Loosemore writes: > You must remember that the complexity is not a massive part of the > system, just a small-but-indispensable part. > > I think this sometimes causes confusion: did you think that I meant > that the whole thing would be so opaque that I could not

Re: [agi] What is the complexity of RSI?

2007-10-01 Thread Matt Mahoney
--- "J Storrs Hall, PhD" <[EMAIL PROTECTED]> wrote: > On Sunday 30 September 2007 09:24:24 pm, Matt Mahoney wrote: > > > > --- "J Storrs Hall, PhD" <[EMAIL PROTECTED]> wrote: > > > And detrimental mutations greatly outnumber beneficial ones. > > > > It depends. Eukaryotes mutate more intelligen

RE: [agi] Religion-free technical content

2007-10-01 Thread Derek Zahn
Richard: > You agree that if we could get such a connection between the > probabilities, > we are home and dry? That we need not care about > "proving" the friendliness > if we can show that the probability is simply > too low to be plausible? Yes, although the probability itself would have to

RE: [agi] Religion-free technical content

2007-10-01 Thread Matt Mahoney
In my last post I had in mind RSI at the level of source code or machine code. Clearly we already have RSI in more restricted computational models, such as a neural network modifying its objective function by adjusting its weights. This type of RSI is not dangerous because it cannot interact with
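
As a sketch of the restricted self-modification Mahoney means here — a system whose only writable "self" is its weight vector — consider this toy loop (illustrative numbers, plain numpy; mine, not from the post):

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 3))          # observed inputs
    y = X @ np.array([1.5, -2.0, 0.5])     # target behavior

    w = np.zeros(3)                        # the only modifiable "self"
    for step in range(500):
        err = X @ w - y                    # how wrong the current self is
        w -= 0.1 * X.T @ err / len(X)      # self-modification step

    print(w)  # converges toward [1.5, -2.0, 0.5]: improvement, but
              # confined to a fixed parameter space it cannot escape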

Re: [agi] Religion-free technical content

2007-10-01 Thread Richard Loosemore
Derek Zahn wrote: Richard: > You agree that if we could get such a connection between the > probabilities, we are home and dry? That we need not care about > "proving" the friendliness if we can show that the probability is simply > too low to be plausible? Yes, although the probability i

Re: [agi] What is the complexity of RSI?

2007-10-01 Thread Charles D Hixson
Matt Mahoney wrote: --- "J Storrs Hall, PhD" <[EMAIL PROTECTED]> wrote: ... So you are arguing that RSI is a hard problem? That is my question. Understanding software to the point where a program could make intelligent changes to itself seems to require human level intelligence. But co

Re: [agi] Religion-free technical content

2007-10-01 Thread Matt Mahoney
Richard, Let me make sure I understand your proposal. You propose to program friendliness into the motivational structure of the AGI as tens of thousands of hand-coded soft constraints or rules. Presumably with so many rules, we should be able to cover every conceivable situation now or in the fu

Re: [agi] Religion-free technical content

2007-10-01 Thread Mark Waser
Matt, Is there any particular reason why you're being so obnoxious? His proposal said *nothing* of the sort and your sarcasm has buried any value your post might have had. - Original Message - From: "Matt Mahoney" <[EMAIL PROTECTED]> To: Sent: Monday, October 01, 2007 12:57 P

Re: [agi] Religion-free technical content

2007-10-01 Thread Richard Loosemore
Matt Mahoney wrote: Richard, Let me make sure I understand your proposal. You propose to program friendliness into the motivational structure of the AGI as tens of thousands of hand-coded soft constraints or rules. Presumably with so many rules, we should be able to cover every conceivable situ

[agi] AGI Motivation

2007-10-01 Thread Don Detrich
I have always found one of the best ways to evaluate human behavior is to understand motivation. What does the person want? A classic psychological theory for this is Maslow's Hierarchy of Needs. Whether or not you totally believe in the humanistic details of this theory, the basic premise is fairl

RE: [agi] Religion-free technical content

2007-10-01 Thread Edward W. Porter
RE: Matt Mahoney's Mon 10/1/2007 12:01 PM post which said in part "IN MY LAST POST I HAD IN MIND RSI AT THE LEVEL OF SOURCE CODE OR MACHINE CODE." Thank you for clarifying this, at least with regard to what you meant. But that begs the question: is there any uniform agreement about this definiti

[agi] Re: [aima-talk] Next edition?

2007-10-01 Thread A. T. Murray
Peter Norvig wrote: > > Yes, there will be. The authors are discussing > the process of writing a third edition now, > but don't yet have a schedule. > > -Peter Norvig > > On 10/1/07, per.nyblom <[EMAIL PROTECTED]> wrote: >> Will there be a next edition of the AIMA text book? >> It would be nic

Re: [agi] Religion-free technical content

2007-10-01 Thread Richard Loosemore
Jef Allbright wrote: On 9/30/07, Richard Loosemore <[EMAIL PROTECTED]> wrote: The motivational system of some types of AI (the types you would classify as tainted by complexity) can be made so reliable that the likelihood of them becoming unfriendly would be similar to the likelihood of the m

RE: [agi] Religion-free technical content

2007-10-01 Thread Edward W. Porter
Richard and Matt, The below is an interesting exchange. For Richard I have the question, how is what you are proposing that different than what could be done with Novamente, where if one had hardcoded a set of top level goals, all of the perceptual, cognitive, behavioral, and goal patterns -- and

Re: [agi] Religion-free technical content

2007-10-01 Thread Mark Waser
3) The system would actually be driven by a very smart, flexible, subtle sense of 'empathy' and would not force us to do painful things that were "good" for us, for the simple reason that this kind of nannying would be the antithesis of really intelligent empathy. Hmmm. My daughter hates gett

Re: [agi] AGI Motivation

2007-10-01 Thread Linas Vepstas
On Mon, Oct 01, 2007 at 10:47:36AM -0700, Don Detrich wrote: [...] > apply to the personality of an AGI with no need for food, no pain, no > hunger, no higher level behavior related to pecking order. It will presumably be hungry for compute cycles and ergo, electricity. Ergo, it may want to make

Re: [agi] Religion-free technical content

2007-10-01 Thread BillK
On 10/1/07, Richard Loosemore wrote: > > 3) The system would actually be driven by a very smart, flexible, subtle > sense of 'empathy' and would not force us to do painful things that were > "good" for us, for the simple reason that this kind of nannying would be > the antithesis of really intellig

Re: [agi] Religion-free technical content

2007-10-01 Thread Jef Allbright
On 10/1/07, Richard Loosemore <[EMAIL PROTECTED]> wrote: > Jef Allbright wrote: > > On 9/30/07, Richard Loosemore <[EMAIL PROTECTED]> wrote: > >>> The motivational system of some types of AI (the types you would > >>> classify as tainted by complexity) can be made so reliable that > >>> the likeli

Re: [agi] Re: [aima-talk] Next edition?

2007-10-01 Thread Chris Petersen
On 10/1/07, A. T. Murray <[EMAIL PROTECTED]> wrote: > > It would be nice if future editions of the AIMA textbook > were to include some treatment of the various independent > AI projects that are out there (on the fringe?) nowadays. > > http://mind.sourceforge.net/Mind.html in JavaScript > is an AI

[agi] The Future of Computing, According to Intel -- Massively multicore processors will enable smarter computers that can infer our activities

2007-10-01 Thread Edward W. Porter
Check out the following article entitled: The Future of Computing, According to Intel -- Massively multicore processors will enable smarter computers that can infer our activities. http://www.technologyreview.com/printer_friendly_article.aspx?id=19432 Not only is the type of hardware needed fo

Re: [agi] What is the complexity of RSI?

2007-10-01 Thread J Storrs Hall, PhD
On Monday 01 October 2007 11:41:35 am, Matt Mahoney wrote: > So you are arguing that RSI is a hard problem? That is my question. > Understanding software to the point where a program could make intelligent > changes to itself seems to require human level intelligence. But could it > come sooner?

Re: [agi] Religion-free technical content

2007-10-01 Thread J Storrs Hall, PhD
On Monday 01 October 2007 11:34:09 am, Richard Loosemore wrote: > Right, now consider the nature of the design I propose: the > motivational system never has an opportunity for a point failure: > everything that happens is multiply-constrained (and on a massive scale: > far more than is the c
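
A toy numerical illustration (mine, not Loosemore's actual design) of why massive multiple constraint weakens any single-point failure: if each decision aggregates thousands of bounded soft constraints, corrupting any one of them barely moves the verdict.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 10_000                                # constraints per decision
    scores = rng.uniform(0.5, 1.0, n)         # each bounded in [-1, 1]

    honest = scores.mean()
    scores[0] = -1.0                          # one constraint fails hard
    corrupted = scores.mean()

    print(f"honest verdict:    {honest:.4f}")
    print(f"one-point failure: {corrupted:.4f}")  # shifts by ~0.0002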

Re: [agi] Religion-free technical content

2007-10-01 Thread Jef Allbright
On 10/1/07, J Storrs Hall, PhD <[EMAIL PROTECTED]> wrote: > On Monday 01 October 2007 11:34:09 am, Richard Loosemore wrote: > > Right, now consider the nature of the design I propose: the > > motivational system never has an opportunity for a point failure: > > everything that happens is multiply-

Re: [agi] What is the complexity of RSI?

2007-10-01 Thread Matt Mahoney
--- Russell Wallace <[EMAIL PROTECTED]> wrote: > On 9/30/07, Matt Mahoney <[EMAIL PROTECTED]> wrote: > > What would be the simplest system capable of recursive self improvement, > not > > necessarily with human level intelligence? What are the time and memory > > costs? What would be its algori

Re: [agi] What is the complexity of RSI?

2007-10-01 Thread Linas Vepstas
On Mon, Oct 01, 2007 at 12:48:00PM -0700, Matt Mahoney wrote: > The problem is that an intelligent RSI worm might be millions of > times faster than a human once it starts replicating. Yes, but the proposed means of finding it, i.e. via evolution and random mutation, is hopelessly time consuming.
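
Rough arithmetic behind Vepstas's objection (my numbers, chosen generously): even an absurdly fast blind-mutation search cannot cover a program-sized search space.

    import math

    # Suppose a viable self-improving worm needs ~1000 specific bytes.
    log10_space = 1000 * math.log10(256)             # ~10^2408 candidates

    # Grant 10^15 random trials per second for ten billion years.
    log10_trials = math.log10(1e15 * 3.15e7 * 1e10)  # ~10^32.5 trials

    print(f"search space: ~10^{log10_space:.0f} programs")
    print(f"trial budget: ~10^{log10_trials:.0f} programs")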

Re: [agi] What is the complexity of RSI?

2007-10-01 Thread Mark Waser
So the real question is "what is the minimal amount of intelligence needed for a system to self-engineer improvements to itself?" Some folks might argue that humans are just below that threshold. Humans are only below the threshold because our internal systems are so convoluted and difficult to

Re: [agi] Religion-free technical content

2007-10-01 Thread Richard Loosemore
Jef Allbright wrote: [snip] Jef, I accept that you did not necessarily introduce any of the confusions that I dealt with in the snipped section, above: but your question was ambiguous enough that many people might have done so, so I was just covering all the bases, not asserting that you had

Re: [agi] Religion-free technical content

2007-10-01 Thread Richard Loosemore
Replies to several posts, omnibus edition: Edward W. Porter wrote: Richard and Matt, The below is an interesting exchange. For Richard I have the question, how is what you are proposing that different than what could

Re: [agi] What is the complexity of RSI?

2007-10-01 Thread Eliezer S. Yudkowsky
Mark Waser wrote: So the real question is "what is the minimal amount of intelligence needed for a system to self-engineer improvements to itself?" Some folks might argue that humans are just below that threshold. Humans are only below the threshold because our internal systems are so convolut

Re: [agi] Religion-free technical content

2007-10-01 Thread Mark Waser
And apart from the global differences between the two types of AGI, it would be no good to try to guarantee friendliness using the kind of conventional AI system that is Novamente, because inasmuch as general goals would be encoded in such a system, they are explicitly coded as "statement" whic

Re: [agi] Religion-free technical content

2007-10-01 Thread Mark Waser
Answer in this case: (1) such elemental things as protection from diseases could always be engineered so as not to involve painful injections (we are assuming superintelligent AGI, after all), :-) First of all, I'm not willing to concede an AGI superintelligent enough to solve all the worl

Re: [agi] What is the complexity of RSI?

2007-10-01 Thread Matt Mahoney
--- "J Storrs Hall, PhD" <[EMAIL PROTECTED]> wrote: > Clarification, please -- suppose you had a 3-year-old equivalent mind, e.g. > a > working Joshua Blue. Would this qualify, for your question? You have a mind > with the potential to grow into an adult-human equivalent, but it still > needs >

Re: [agi] What is the complexity of RSI?

2007-10-01 Thread J Storrs Hall, PhD
On Monday 01 October 2007 05:47:25 pm, Matt Mahoney wrote: > Understanding software is equivalent to compressing it. Programs that are > useful, bug free, and well documented have higher probability. An intelligent > model capable of RSI would compress these programs smaller. We do not seem >
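
The compression-probability link Mahoney invokes is the standard coding identity p(x) = 2^(-L(x)): whatever a model compresses more, it implicitly considers more probable. A crude demonstration with zlib standing in (badly) for an "intelligent model":

    import os
    import zlib

    structured = b"def add(a, b):\n    return a + b\n" * 50
    noise = os.urandom(len(structured))      # incompressible filler

    for name, data in [("structured code", structured),
                       ("random bytes", noise)]:
        bits = len(zlib.compress(data)) * 8  # code length L(x) in bits
        print(f"{name}: {len(data) * 8} bits raw -> ~{bits} bits, "
              f"i.e. p = 2^-{bits} under this model")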

Socio-political prediction [was Re: [agi] Religion-free technical content

2007-10-01 Thread Linas Vepstas
On Sun, Sep 30, 2007 at 12:49:43PM -0700, Morris F. Johnson wrote: > Integration of sociopolitical factors into a global evolution predictive > model will require something the best > economists, scientists, military strategists will have to get right or risk > global social anarchy. FYI, there w

Re: [agi] Religion-free technical content

2007-10-01 Thread Richard Loosemore
Mark Waser wrote: And apart from the global differences between the two types of AGI, it would be no good to try to guarantee friendliness using the kind of conventional AI system that is Novamente, because inasmuch as general goals would be encoded in such a system, they are explicitly coded a

[agi] Delayed announcement

2007-10-01 Thread Richard Loosemore
Well, I was going to make an announcement today, but since I have been so thoroughly consumed with writing all day, I have not had the time to finish my preparations, so I am going to delay it until tomorrow. Richard Loosemore

Re: [agi] Religion-free technical content

2007-10-01 Thread Mark Waser
Interesting. I believe that we have a fundamental disagreement. I would argue that the semantics *don't* have to be distributed. My argument/proof would be that I believe that *anything* can be described in words -- and that I believe that previous narrow AIs are brittle because they don't

Re: [agi] Religion-free technical content

2007-10-01 Thread Vladimir Nesov
So this "hackability" is a technical question about possibility of closed-source deployment that would provide functional copies of the system but would prevent users from modifying its goal system. Is it really important? Source/technology will eventually get away, and from it any goal system can

Re: AI and botnets Re: [agi] What is the complexity of RSI?

2007-10-01 Thread William Pearson
On 01/10/2007, Matt Mahoney <[EMAIL PROTECTED]> wrote: > > --- William Pearson <[EMAIL PROTECTED]> wrote: > > > On 30/09/2007, Matt Mahoney <[EMAIL PROTECTED]> wrote: > > > The real danger is this: a program intelligent enough to understand > > software > > > would be intelligent enough to modify i

Re: [agi] The Future of Computing, According to Intel -- Massively multicore processors will enable smarter computers that can infer our activities

2007-10-01 Thread Jiri Jelinek
Talking about processing power... A friend just sent me an email with links that some of you may find interesting: --- cut -- Building or gaining access to computing resources with enough power to complete jobs on large data sets usually costs a lot of money. Amazon Web Services (A