Again, a well-reasoned response.

With regard to the limitations of AM, I think if the young Doug Lenat and
those of his generation had had 32K-processor Blue Gene/Ls, with 4 TBytes
of RAM, to play with, they would soon have started coming up with things
way, way beyond AM.

In fact, if the average AI post-grad of today had such hardware to play
with, things would really start jumping.  Within ten years the equivalents
of such machines could easily be sold for somewhere between $10k and
$100k, and lots of post-grads will be playing with them.

Hardware to the people!

Edward W. Porter
Porter & Associates
24 String Bridge S12
Exeter, NH 03833
(617) 494-1722
Fax (617) 494-1822
[EMAIL PROTECTED]



-----Original Message-----
From: J Storrs Hall, PhD [mailto:[EMAIL PROTECTED]
Sent: Wednesday, October 03, 2007 3:21 PM
To: agi@v2.listbox.com
Subject: Re: [agi] Religion-free technical content


Thanks!

It's worthwhile being specific about levels of interpretation in the
discussion of self-modification. I can write self-modifying assembly code
that does not change the physical processor, or even its microcode if it's
one of those old architectures. I can write a self-modifying Lisp program
that doesn't change the assembly-language interpreter that's running it.

So it's certainly possible to push the self-modification up the
interpretive abstraction ladder, to levels designed to handle it cleanly.
But the basic point, I think, stands: there has to be some level that both
controls the way the system does things and gets modified.
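
A minimal sketch of what I mean, in Python (purely illustrative, not
anybody's actual design): the "strategy" below is the level that both
controls behavior and gets modified, while the interpreter underneath it
(the analogue of the microcode, or of the assembly running a Lisp system)
is never touched.

    def strategy(x):
        """The level that both controls behavior and gets modified."""
        return x + 1

    def improve():
        """Self-modification confined to the interpreted level: build a
        new strategy from source at run time and swap it in."""
        global strategy
        source = "def strategy(x):\n    return x * 2\n"
        namespace = {}
        exec(source, namespace)          # new code created at this level only
        strategy = namespace["strategy"]

    print(strategy(10))   # -> 11
    improve()
    print(strategy(10))   # -> 20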

I agree with you that there has been little genetic change in human brain
structure since the paleolithic, but I would claim that culture *is* the
software and it has been upgraded drastically. And I would agree that the
vast bulk of human self-improvement has been at this software level, the
level of learned representations.

If we want to improve our basic hardware, i.e. brains, we'll need to
understand them well enough to do basic engineering on them -- a
self-model. However, we didn't need that to build all the science and
culture we have so far, a huge software self-improvement. That means to me
that it is possible to abstract out the self-model until the part you need
to understand and modify is some tractable kernel. For human culture that
is the concept of science (and logic and evidence and so forth).

This means to me that it should be possible to structure an AGI so that it
could be recursively self-improving at a very abstract, highly interpreted
level, and still have a huge amount to learn before it could do anything
about the next level down.
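
To make that separation concrete, here is a toy Python sketch (names and
structure are purely illustrative, not any real AGI design): a fixed
kernel loop interprets a table of rules, and everything the system
improves, including its own improvement rule, lives in that table.
Touching the level below the kernel would require a self-model this sketch
deliberately lacks.

    # Everything the system can improve lives in this table, including
    # the rule that does the improving.
    rules = {
        "step":    lambda x: x + 1,                          # object-level behavior
        "improve": lambda r: dict(r, step=lambda x: x * 2),  # rewrites the rules
    }

    def kernel(x, generations=3):
        """Fixed, never-modified interpreter over the rule table."""
        global rules
        for _ in range(generations):
            x = rules["step"](x)             # act using the current rules
            rules = rules["improve"](rules)  # self-improvement at the interpreted level
        return x

    print(kernel(1))   # -> 8  (1+1, then 2*2, then 4*2)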

Regarding machine speed/capacity: yes, indeed. Horsepower is definitely
going to be one of the enabling factors over the next decade or two. But I
don't think AM would get too much farther on a Blue Gene than on a PDP-10
-- I think it required hyper-exponential time for concepts of a given
size.
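
A rough back-of-envelope illustration of why raw speed buys so little
here, assuming (purely for the sake of argument) that the cost of finding
a concept of size n grows like 2^(2^n); the machine speeds below are
round-number guesses, not measurements:

    import math

    def max_concept_size(ops_per_second, seconds=30 * 24 * 3600):
        """Largest n with 2**(2**n) <= the total ops available in a month,
        under the toy assumption cost(n) = 2**(2**n)."""
        log2_budget = math.log2(ops_per_second * seconds)
        n = 0
        while 2 ** (n + 1) <= log2_budget:
            n += 1
        return n

    print(max_concept_size(1e6))    # PDP-10-class machine:    -> 5
    print(max_concept_size(1e14))   # Blue Gene-class machine: -> 6

Under that assumption, eight orders of magnitude of hardware buy roughly
one more unit of concept size; the curve, not the machine, is the
bottleneck.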

Josh


On Wednesday 03 October 2007 12:44:20 pm, Edward W. Porter wrote:
> Josh,
>
> Thank you for your reply, copied below.  It was – as have been many of
> your posts – thoughtful and helpful.
>
> I did have a question about the following section
>
> “THE LEARNING PROCESS MUST NOT ONLY IMPROVE THE WORLD MODEL AND
> WHATNOT, BUT MUST IMPROVE (=> MODIFY) *ITSELF*. KIND OF THE WAY
> CIVILIZATION HAS (MORE OR LESS) MOVED FROM RELIGION TO PHILOSOPHY TO
> SCIENCE AS THE METHODOLOGY OF CHOICE FOR ITS SAGES.”
>
> “THAT, OF COURSE, IS SELF-MODIFYING CODE -- THE DARK PLACE IN A COMPUTER
> SCIENTIST'S SOUL WHERE ONLY THE KWISATZ HADERACH CAN LOOK.   :^)”
>
> My question is: if a machine’s world model includes the system’s model
> of itself and its own learned mental representation and behavior
> patterns, is it not possible that modification of these learned
> representations and behaviors could be enough to provide what you are
> talking about -- without requiring modification of its code at some
> deeper level?
>
> For example, it is commonly said that humans and their brains have
> changed very little in the last 30,000 years -- that if a newborn from
> that age were raised in our society, nobody would notice the
> difference.  Yet in the last 30,000 years the sophistication of
> mankind’s understanding of, and ability to manipulate, the world has
> grown exponentially.  There have been tremendous changes in code at
> the level of learned representations and learned mental behaviors,
> such as advances in mathematics, science, and technology, but there
> have been very few, if any, significant changes in code at the level
> of inherited brain hardware and software.
>
> Take, for example, mathematics and algebra.  These are learned mental
> representations and behaviors that let a human manage levels of
> complexity they could not otherwise even begin to handle.  But my belief
> is that when executing such behaviors or remembering such
> representations, the basic brain mechanisms involved -- probability-,
> importance-, and temporal-based inference; instantiating general
> patterns in a context-appropriate way; context-sensitive pattern-based
> memory access; learned patterns of sequential attention shifts; etc.
> -- are all virtually identical to the ones used by our ancestors 30,000
> years ago.
>
> I think in the coming years there will be lots of changes in AGI code
> at a level corresponding to the human inherited brain level.  But once
> human-level AGI has been created -- with what will obviously have to be
> a learning capability as powerful, adaptive, exploratory, creative, and
> as capable of building upon its own advances as that of a human -- it
> is not clear to me it would require further changes at a level
> equivalent to the human inherited brain level to continue to operate
> and learn as well as a human, any more than the tremendous advances of
> human civilization in the last 30,000 years have required them.
>
> Your implication that civilization has improved itself by moving “from
> religion to philosophy to science” seems to suggest that the level of
> improvement you say is needed might actually be at the level of
> learned representation, including learned representation of mental
> behaviors.
>
>
>
> As a minor note, I would like to point out the following concerning
> your statement that:
>
> “ALL AI LEARNING SYSTEMS TO DATE HAVE BEEN "WIND-UP TOYS" “
>
> I think a lot of early AI learning systems, although clearly toys when
> compared with humans in many respects, have been amazingly powerful
> considering that many of them ran on roughly fly-brain-level hardware.
> As I have been saying for decades, I know which end is up in AI -- it's
> computational horsepower. And it is coming fast.
>
>
> Edward W. Porter
> Porter & Associates
> 24 String Bridge S12
> Exeter, NH 03833
> (617) 494-1722
> Fax (617) 494-1822
> [EMAIL PROTECTED]
