On 10/1/07, Matt Mahoney <[EMAIL PROTECTED]> wrote:
>
> I remain skeptical. Your argument applies to an AGI not modifying its own
> motivational system. It does not apply to an AGI making modified copies of
> itself. In fact you say:
>
> > Also, during the development of the first true AI, we wo
On Sunday 30 September 2007 09:24:24 pm, Matt Mahoney wrote:
>
> --- "J Storrs Hall, PhD" <[EMAIL PROTECTED]> wrote:
> > And detrimental mutations greatly outnumber beneficial ones.
>
> It depends. Eukaryotes mutate more intelligently than prokaryotes. Their
> mutations (by mixing large snips o
Matt Mahoney wrote:
--- Richard Loosemore <[EMAIL PROTECTED]> wrote:
Derek Zahn wrote:
Richard Loosemore writes:
> It is much less opaque.
>
> I have argued that this is the ONLY way that I know of to ensure that
> AGI is done in a way that allows safety/friendliness to be guaranteed.
>
On 9/30/07, Richard Loosemore <[EMAIL PROTECTED]> wrote:
> > The motivational system of some types of AI (the types you would
> > classify as tainted by complexity) can be made so reliable that the
> > likelihood of them becoming unfriendly would be similar to the
> > likelihood of the molecule
To Matt Mahoney.
Your 9/30/2007 8:36 PM post referred to mine in reply to Derek Zahn and
implied RSI (which I assume from context is a reference to Recursive Self
Improvement) is necessary for general intelligence.
When I said -- in reply to Derek's suggestion that RSI be banned -- that I
didn't
Richard Loosemore writes:
> You must remember that the complexity is not a massive part of the
> system, just a small-but-indispensable part.
>
> I think this sometimes causes confusion: did you think that I meant
> that the whole thing would be so opaque that I could not understand
> *anything* a
Edward W. Porter writes:
> To Matt Mahoney.
> Your 9/30/2007 8:36 PM post referred to mine in reply to Derek Zahn and
> implied RSI
> (which I assume from context is a reference to Recursive Self Improvement) is
> necessary for general intelligence.
> So could you, or someone, please define exa
--- William Pearson <[EMAIL PROTECTED]> wrote:
> On 30/09/2007, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> > The real danger is this: a program intelligent enough to understand
> software
> > would be intelligent enough to modify itself.
>
> Well it would always have the potential. But you are as
Derek Zahn wrote:
Richard Loosemore writes:
> You must remember that the complexity is not a massive part of the
> system, just a small-but-indispensable part.
>
> I think this sometimes causes confusion: did you think that I meant
> that the whole thing would be so opaque that I could not
--- "J Storrs Hall, PhD" <[EMAIL PROTECTED]> wrote:
> On Sunday 30 September 2007 09:24:24 pm, Matt Mahoney wrote:
> >
> > --- "J Storrs Hall, PhD" <[EMAIL PROTECTED]> wrote:
> > > And detrimental mutations greatly outnumber beneficial ones.
> >
> > It depends. Eukaryotes mutate more intelligen
Richard:
> You agree that if we could get such a connection between the
> probabilities, we are home and dry? That we need not care about
> "proving" the friendliness if we can show that the probability is simply
> too low to be plausible?
Yes, although the probability itself would have to
In my last post I had in mind RSI at the level of source code or machine code.
Clearly we already have RSI in more restricted computational models, such as
a neural network modifying its objective function by adjusting its weights.
This type of RSI is not dangerous because it cannot interact with
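The restricted kind of self-modification Matt describes (a network adjusting its own weights while the update rule itself stays fixed) can be sketched roughly as follows; the single-neuron network and perceptron rule here are illustrative assumptions, not anyone's actual system:

```python
import random

# A neuron that "self-modifies" only within a fixed computational model:
# it adjusts its weights, but the code doing the adjusting (the update
# rule in the loop below) is untouchable by the learning process itself.
random.seed(0)
w = [random.uniform(-1, 1) for _ in range(3)]  # two inputs + bias

def predict(x):
    s = w[0] * x[0] + w[1] * x[1] + w[2]
    return 1 if s > 0 else 0

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # AND gate

for _ in range(50):  # perceptron update: fixed rule, mutable weights
    for x, target in data:
        err = target - predict(x)
        w[0] += 0.1 * err * x[0]
        w[1] += 0.1 * err * x[1]
        w[2] += 0.1 * err

print([predict(x) for x, _ in data])  # learned AND behaviour
```

The point of the sketch is the boundary it never crosses: the weights change, but the loop that changes them does not, which is why this form of self-improvement stays confined to its model.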
Derek Zahn wrote:
Richard:
> You agree that if we could get such a connection between the
> probabilities, we are home and dry? That we need not care about
> "proving" the friendliness if we can show that the probability is simply
> too low to be plausible?
Yes, although the probability i
Matt Mahoney wrote:
--- "J Storrs Hall, PhD" <[EMAIL PROTECTED]> wrote:
...
So you are arguing that RSI is a hard problem? That is my question.
Understanding software to the point where a program could make intelligent
changes to itself seems to require human level intelligence. But co
Richard,
Let me make sure I understand your proposal. You propose to program
friendliness into the motivational structure of the AGI as tens of thousands
of hand-coded soft constraints or rules. Presumably with so many rules, we
should be able to cover every conceivable situation now or in the fu
Matt,
Is there any particular reason why you're being so obnoxious?
His proposal said *nothing* of the sort and your sarcasm has buried any
value your post might have had.
- Original Message -
From: "Matt Mahoney" <[EMAIL PROTECTED]>
To:
Sent: Monday, October 01, 2007 12:57 P
Matt Mahoney wrote:
Richard,
Let me make sure I understand your proposal. You propose to program
friendliness into the motivational structure of the AGI as tens of thousands
of hand-coded soft constraints or rules. Presumably with so many rules, we
should be able to cover every conceivable situ
I have always found one of the best ways to evaluate human behavior is to
understand motivation. What does the person want? A classic psychological
theory for this is Maslow's Hierarchy of Needs. Whether or not you totally
believe in the humanistic details of this theory, the basic premise is
fairl
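The basic premise referred to here (lower needs dominate motivation until they are met) can be sketched as a simple ordered lookup; the level names follow the common textbook ordering, and the function is a purely illustrative toy, not a model of the theory:

```python
# Maslow's hierarchy as an ordered list: the premise is that the
# lowest unmet need dominates motivation at any given moment.
HIERARCHY = ["physiological", "safety", "belonging", "esteem",
             "self-actualization"]

def dominant_need(met):
    """Return the lowest level not yet satisfied (or the top level)."""
    for level in HIERARCHY:
        if level not in met:
            return level
    return HIERARCHY[-1]

print(dominant_need({"physiological"}))            # safety dominates
print(dominant_need({"physiological", "safety"}))  # belonging dominates
```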
RE: Matt Mahoney's Mon 10/1/2007 12:01 PM post which said in part
"IN MY LAST POST I HAD IN MIND RSI AT THE LEVEL OF SOURCE CODE OR MACHINE
CODE."
Thank you for clarifying this, as least with regard to what you meant.
But that raises the question: is there any uniform agreement about this
definiti
Peter Norvig wrote:
>
> Yes, there will be. The authors are discussing
> the process of writing a third edition now,
> but don't yet have a schedule.
>
> -Peter Norvig
>
> On 10/1/07, per.nyblom <[EMAIL PROTECTED]> wrote:
>> Will there be a next edition of the AIMA text book?
>>
It would be nic
Jef Allbright wrote:
On 9/30/07, Richard Loosemore <[EMAIL PROTECTED]> wrote:
The motivational system of some types of AI (the types you would
classify as tainted by complexity) can be made so reliable that
the likelihood of them becoming unfriendly would be similar to
the likelihood of the m
Richard and Matt,
The below is an interesting exchange.
For Richard I have the question, how is what you are proposing that
different than what could be done with Novamente, where if one had
hardcoded a set of top level goals, all of the perceptual, cognitive,
behavioral, and goal patterns -- and
3) The system would actually be driven by a very smart, flexible, subtle
sense of 'empathy' and would not force us to do painful things that were
"good" for us, for the simple reason that this kind of nannying would be
the antithesis of really intelligent empathy.
Hmmm. My daughter hates gett
On Mon, Oct 01, 2007 at 10:47:36AM -0700, Don Detrich wrote:
[...]
> apply to the personality of an AGI with no need for food, no pain, no
> hunger, no higher level behavior related to pecking order.
It will presumably be hungry for compute cycles and ergo, electricity.
Ergo, it may want to make
On 10/1/07, Richard Loosemore wrote:
>
> 3) The system would actually be driven by a very smart, flexible, subtle
> sense of 'empathy' and would not force us to do painful things that were
> "good" for us, for the simple reason that this kind of nannying would be
> the antithesis of really intellig
On 10/1/07, Richard Loosemore <[EMAIL PROTECTED]> wrote:
> Jef Allbright wrote:
> > On 9/30/07, Richard Loosemore <[EMAIL PROTECTED]> wrote:
> >>> The motivational system of some types of AI (the types you would
> >>> classify as tainted by complexity) can be made so reliable that
> >>> the likeli
On 10/1/07, A. T. Murray <[EMAIL PROTECTED]> wrote:
>
> It would be nice if future editions of the AIMA textbook
> were to include some treatment of the various independent
> AI projects that are out there (on the fringe?) nowadays.
>
> http://mind.sourceforge.net/Mind.html in JavaScript
> is an AI
Check out the following article entitled: The Future of Computing,
According to Intel -- Massively multicore processors will enable smarter
computers that can infer our activities.
http://www.technologyreview.com/printer_friendly_article.aspx?id=19432
Not only is the type of hardware needed fo
On Monday 01 October 2007 11:41:35 am, Matt Mahoney wrote:
> So you are arguing that RSI is a hard problem? That is my question.
> Understanding software to the point where a program could make intelligent
> changes to itself seems to require human level intelligence. But could it
> come sooner?
On Monday 01 October 2007 11:34:09 am, Richard Loosemore wrote:
> Right, now consider the nature of the design I propose: the
> motivational system never has an opportunity for a point failure:
> everything that happens is multiply-constrained (and on a massive scale:
> far more than is the c
On 10/1/07, J Storrs Hall, PhD <[EMAIL PROTECTED]> wrote:
> On Monday 01 October 2007 11:34:09 am, Richard Loosemore wrote:
> > Right, now consider the nature of the design I propose: the
> > motivational system never has an opportunity for a point failure:
> > everything that happens is multiply-
--- Russell Wallace <[EMAIL PROTECTED]> wrote:
> On 9/30/07, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> > What would be the simplest system capable of recursive self improvement,
> not
> > necessarily with human level intelligence? What are the time and memory
> > costs? What would be its algori
On Mon, Oct 01, 2007 at 12:48:00PM -0700, Matt Mahoney wrote:
> The problem is that an intelligent RSI worm might be millions of
> times faster than a human once it starts replicating.
Yes, but the proposed means of finding it, i.e. via evolution
and random mutation, is hopelessly time consuming.
So the real question is "what is the minimal amount of
intelligence needed for a system to self-engineer
improvements to itself?"
Some folks might argue that humans are just below that
threshold.
Humans are only below the threshold because our internal systems are so
convoluted and difficult to
Jef Allbright wrote:
[snip]
Jef, I accept that you did not necessarily introduce any of the
confusions that I dealt with in the snipped section, above: but your
question was ambiguous enough that many people might have done so, so I
was just covering all the bases, not asserting that you had
Replies to several posts, omnibus edition:
Edward W. Porter wrote:
Richard and Matt,
The below is an interesting exchange.
For Richard I have the question, how is what you are proposing that
different than what could
Mark Waser wrote:
So the real question is "what is the minimal amount of
intelligence needed for a system to self-engineer
improvements to itself?"
Some folks might argue that humans are just below that
threshold.
Humans are only below the threshold because our internal systems are so
convolut
And apart from the global differences between the two types of AGI, it
would be no good to try to guarantee friendliness using the kind of
conventional AI system that is Novamente, because inasmuch as general
goals would be encoded in such a system, they are explicitly coded as
"statement" whic
Answer in this case: (1) such elemental things as protection from
diseases could always be engineered so as not to involve painful
injections (we are assuming superintelligent AGI, after all),
:-) First of all, I'm not willing to concede an AGI superintelligent
enough to solve all the worl
--- "J Storrs Hall, PhD" <[EMAIL PROTECTED]> wrote:
> Clarification, please -- suppose you had a 3-year-old equivalent mind, e.g.
> a
> working Joshua Blue. Would this qualify, for your question? You have a mind
> with the potential to grow into an adult-human equivalent, but it still
> needs
>
On Monday 01 October 2007 05:47:25 pm, Matt Mahoney wrote:
> Understanding software is equivalent to compressing it. Programs that are
> useful, bug free, and well documented have higher probability. An intelligent
> model capable of RSI would compress these programs smaller. We do not seem
>
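Matt's identification of understanding with compression can be illustrated very crudely with an off-the-shelf compressor; zlib here is only a stand-in for the far stronger model he has in mind, and the strings are arbitrary examples:

```python
import os
import zlib

# Crude illustration of "understanding = compression": a regular,
# structured byte string compresses far better than random noise of
# the same length, because the compressor can model its redundancy.
structured = b"def add(a, b): return a + b\n" * 50
noise = os.urandom(len(structured))

print(len(zlib.compress(structured)))  # small: redundancy modelled away
print(len(zlib.compress(noise)))       # near len(noise): nothing to model
```

On this view, a model that truly understood programs would shrink the structured input further still, since it could predict not just repeated bytes but the semantics behind them.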
On Sun, Sep 30, 2007 at 12:49:43PM -0700, Morris F. Johnson wrote:
> Integration of sociopolitical factors into a global evolution predictive
> model will require something the best
> economists, scientists, military strategists will have to get right or risk
> global social anarchy.
FYI, there w
Mark Waser wrote:
And apart from the global differences between the two types of AGI, it
would be no good to try to guarantee friendliness using the kind of
conventional AI system that is Novamente, because inasmuch as general
goals would be encoded in such a system, they are explicitly coded a
Well, I was going to make an announcement today, but since I have been
so thoroughly consumed with writing all day, I have not had the time to
finish my preparations, so I am going to delay it until tomorrow.
Richard Loosemore
-
This list is sponsored by AGIRI: http://www.agiri.org/email
Interesting. I believe that we have a fundamental disagreement. I
would argue that the semantics *don't* have to be distributed. My
argument/proof would be that I believe that *anything* can be described in
words -- and that I believe that previous narrow AI are brittle because they
don't
So this "hackability" is a technical question about possibility of
closed-source deployment that would provide functional copies of the
system but would prevent users from modifying its goal system. Is it
really important? Source/technology will eventually get away, and from
it any goal system can
On 01/10/2007, Matt Mahoney <[EMAIL PROTECTED]> wrote:
>
> --- William Pearson <[EMAIL PROTECTED]> wrote:
>
> > On 30/09/2007, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> > > The real danger is this: a program intelligent enough to understand
> > software
> > > would be intelligent enough to modify i
Talking about processing power... A friend just sent me an email with
links some of you may find interesting:
--- cut --
Building or gaining access to computing resources with enough power to
complete jobs on large data sets usually costs a lot of money. Amazon
Web Services (A