[agi] Context free text analysis is not a proper method of natural language understanding

2007-10-03 Thread m1n1mal
Relating to the idea that text compression (as demonstrated by general
compression algorithms) is a measure of intelligence, I make two claims:
(1) To understand natural language requires knowledge (CONTEXT) of the
social world(s) it refers to.
(2) Communication includes (at most) a shadow of the context necessary
to understand it.

Given (1), no context-free analysis can understand natural language.
Given (2), no adaptive agent can learn (proper) understanding of natural
language given only texts.

For human-like understanding, an AGI would need to participate in
(human) social society.
-- 
  
  [EMAIL PROTECTED]

-- 
http://www.fastmail.fm - And now for something completely different…


Re: [agi] Context free text analysis is not a proper method of natural language understanding

2007-10-03 Thread Bob Mottram
On 03/10/2007, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
> Given (1), no context-free analysis can understand natural language.
> Given (2), no adaptive agent can learn (proper) understanding of natural
> language given only texts.

> For human-like understanding, an AGI would need to participate in
> (human) social society.


This is the age-old problem for AI.  Either you have to build a
physical system (a robot) which can in some sense experience the
non-linguistic concepts upon which language is based, or you have to
directly teach the system (enter a lot of common sense knowledge - the
things we all know but which are rarely written down).



Re: [agi] Context free text analysis is not a proper method of natural language understanding

2007-10-03 Thread Vladimir Nesov
... or maybe they can be inferred from texts alone. It all depends on the
learning ability of a particular design, and we as yet have none. Cart
before the horse.

On 10/3/07, Bob Mottram <[EMAIL PROTECTED]> wrote:
> On 03/10/2007, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
> > Given (1), no context-free analysis can understand natural language.
> > Given (2), no adaptive agent can learn (proper) understanding of natural
> > language given only texts.
>
> > For human-like understanding, an AGI would need to participate in
> > (human) social society.
>
>
> This is the age-old problem for AI.  Either you have to build a
> physical system (a robot) which can in some sense experience the
> non-linguistic concepts upon which language is based, or you have to
> directly teach the system (enter a lot of common sense knowledge - the
> things we all know but which are rarely written down).
>


-- 
Vladimir Nesov  mailto:[EMAIL PROTECTED]



Re: [agi] Religion-free technical content

2007-10-03 Thread J Storrs Hall, PhD
On Tuesday 02 October 2007 08:46:43 pm, Richard Loosemore wrote:
> J Storrs Hall, PhD wrote:
> > I find your argument quotidian and lacking in depth. ...

> What you said above was pure, unalloyed bullshit:  an exquisite cocktail 
> of complete technical ignorance, patronizing insults and breathtaking 
> arrogance. ...

I find this argument lacking in depth, as well.

Actually, much of your paper is right. What I said was that I've heard it all 
before (that's what "quotidian" means) and others have taken it farther than 
you have.

You write (proceedings p. 161) "The term 'complex system' is used to describe 
PRECISELY those cases where the global behavior of the system shows 
interesting regularities, and is not completely random, but where the nature 
of the interaction of the components is such that we would normally expect 
the consequences of those interactions to be beyond the reach of analytic 
solutions." (emphasis added)

But of course even a 6th-degree polynomial is beyond the reach of an analytic 
solution, as is the 3-body problem in Newtonian mechanics. And indeed the 
orbit of Pluto has been shown to be chaotic. But we can still predict with 
great confidence when something as finicky as a solar eclipse will occur 
thousands of years from now. So being beyond analytic solution does not mean 
unpredictable in many, indeed most, practical cases.
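
To make that concrete, here is a toy Python sketch (illustrative only; the
polynomial below is arbitrary): no closed-form solution in radicals exists for
a general 6th-degree polynomial, yet its roots can be computed numerically to
machine precision.

    import numpy as np

    # x^6 - 3x^4 + x^3 + 2x^2 - x + 5: no general formula in radicals exists
    # for degree >= 5, but a numerical method predicts the roots just fine.
    coeffs = [1, 0, -3, 1, 2, -1, 5]
    roots = np.roots(coeffs)              # eigenvalues of the companion matrix
    print(roots)
    print(np.polyval(coeffs, roots))      # residuals near zero: accurate "prediction"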

We've spent five centuries learning how to characterize, find the regularities 
in, and make predictions about, systems that are, by your precise definition, 
complex. We call this science. It is not about analytic solutions, though 
those are always nice, but about testable hypotheses in whatever form stated. 
Nowadays, these often come in the form of computational models.

You then talk about the "Global-Local Disconnect" as if that were some gulf 
unbridgeable in principle the instant we find a system is complex. But that 
contradicts the fact that science works -- we can understand a world of 
bouncing molecules and sticky atoms in terms of pressure and flammability. 
Science has produced a large number of levels of explanation, many of them 
causally related, and will continue doing so. But there is not (and never 
will be) any overall closed-form analytic solution.

The physical world is, in your and Wolfram's words, "computationally 
irreducible". But computational irreducibility is a johnny-come-lately 
retread of a very key idea, Gödel incompleteness, that forms the basis of 
much of 20th-century mathematics, including computer science. It is PROVABLE 
that any system that is computationally universal cannot be predicted, in 
general, except by simulating its computation. This was known well before 
Wolfram came along. He didn't say diddley-squat that was new in ANKoS.

So, any system that is computationally universal, i.e. Turing-complete, i.e. 
capable of modelling a Universal Turing machine, or a Post production system, 
or the lambda calculus, or partial recursive functions, is PROVABLY immune to 
analytic solution. And yet, guess what? we have computer *science*, which has 
found many regularities and predictabilities, much as physics has found 
things like Lagrange points that are stable solutions to special cases of the 
3-body problem.

One common poster child of "complex systems" has been the fractal beauty of 
the Mandelbrot set, seemingly generating endless complexity from a simple 
formula. Well duh -- it's a recursive function.
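
For anyone who hasn't looked at it that way, here is a minimal Python sketch of
that "simple formula" (illustrative only): membership is just iterating
z -> z^2 + c and asking whether the orbit stays bounded.

    def in_mandelbrot(c, max_iter=100):
        z = 0j
        for _ in range(max_iter):
            z = z * z + c                 # the entire "formula"
            if abs(z) > 2.0:              # orbit escapes: c is outside the set
                return False
        return True                       # still bounded after max_iter steps

    print(in_mandelbrot(-1 + 0j))         # True: 0 -> -1 -> 0 -> -1 ... (period 2)
    print(in_mandelbrot(0.5 + 0j))        # False: the orbit quickly escapes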

I find it very odd that you spend more than a page on Conway's Life, talking 
about ways to characterize it beyond the "generative capacity" -- and yet you 
never mention that Life is Turing-complete. It certainly isn't obvious; it 
was an open question for a couple of decades, I think; but it has been shown 
able to model a Universal Turing machine. Once that was proven, there were 
suddenly a LOT of things known about Life that weren't known before. 
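
For reference, the entire local rule of Life fits in a few lines of Python (my
sketch, not from the paper); every construction behind its Turing-completeness
is built from nothing more than this update applied to every cell:

    from collections import Counter

    def life_step(live_cells):
        # One generation of Conway's Life; live_cells is a set of (x, y) pairs.
        neighbor_counts = Counter(
            (x + dx, y + dy)
            for (x, y) in live_cells
            for dx in (-1, 0, 1) for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0))
        # A cell lives next step if it has 3 live neighbors, or 2 and is alive now.
        return {cell for cell, n in neighbor_counts.items()
                if n == 3 or (n == 2 and cell in live_cells)}

    glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
    print(life_step(glider))              # the glider's next generation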

Your next point, which you call Hidden Complexity, is very much like the 
phenomenon I call Formalist Float (B.AI 89-101). Which means, oddly enough, 
that we've come to very much the same conclusion about what the problem with 
AI to date has been -- except that I don't buy your GLD at all, except 
inasmuch as it says that science is hard.

Okay, so much for science. On to engineering, or how to build an AGI. You 
point out that connectionism, for example, has tended to study mathematically 
tractable systems, leading them to miss a key capability. But that's exactly 
to be expected if they build systems that are not computationally universal, 
incapable of self-reference and recursion -- and that has been said long and 
loud in the AI community since Minsky published Perceptrons, even before the 
connectionist resurgence in the 80's.

You propose to take core chunks of complex system and test them empirically, 
finding scientific characterizations of their behavior that could be used in 
a larger system. Great! This is just what Hugo de Garis has been say

Re: [agi] Religion-free technical content

2007-10-03 Thread Mark Waser

So do you claim that there are universal moral truths that can be applied
unambiguously in every situation?


What a stupid question.  *Anything* can be ambiguous if you're clueless. 
The moral truth of "Thou shalt not destroy the universe" is universal.  The 
ability to interpret it and apply it is clearly not.


Ambiguity is a strawman that *you* introduced and I have no interest in 
defending.





Re: [agi] Religion-free technical content

2007-10-03 Thread J Storrs Hall, PhD
On Tuesday 02 October 2007 05:50:57 pm, Edward W. Porter wrote:
> The below is a good post:

Thank you!
 
> I have one major question for Josh.  You said
> 
> “PRESENT-DAY TECHNIQUES CAN DO MOST OF THE THINGS THAT AN AI NEEDS 
> TO DO,  WITH THE EXCEPTION OF COMING UP WITH NEW REPRESENTATIONS AND
> TECHNIQUES. THAT'S THE SELF-REFERENTIAL KERNEL, THE TAIL-BITING,
> GÖDEL-INVOKING COMPLEX CORE OF THE WHOLE PROBLEM.”
> 
> Could you please elaborate on exactly what the “complex core of the whole
> problem” is that you still think is currently missing.

No, but I will try to elaborate inexactly.   :^)

Let me quote Tom Mitchell, Head of Machine Learning Dept. at CMU:

"It seems the real problem with current AI is that NOBODY to my knowledge is 
seriously trying to design a 'never ending learning' machine."
(Private communication)

By which he meant what we tend to call "RSI" here. I think the "coming up with 
new representations and techniques" part is pretty straightforward, the 
question is how to do it. Search works, a la a GA, if you have billions of 
years and trillions of organisms to work with. I personally am too impatient, 
so I'd like to understand how the human brain does it in billions of seconds 
and 3 pounds of mush.

Another way to understand the problem is to say that all AI learning systems 
to date have been "wind-up toys" -- they could learn stuff in some small 
space of possibilities, and then they ran out of steam. That's what happened 
famously with AM and Eurisko.

I conjecture that this will happen with ANY fixed learning process. That means 
that for RSI, the learning process must not only improve the world model and 
whatnot, but must improve (=> modify) *itself*. Kind of the way civilization 
has (more or less) moved from religion to philosophy to science as the 
methodology of choice for its sages.

That, of course, is self-modifying code -- the dark place in a computer 
scientist's soul where only the Kwisatz Haderach can look.   :^)

> Why for example would a Novamente-type system’s representations and
> techniques not be capable of being self-referential in the manner you seem
> to be implying is both needed and currently missing?

It might -- I think it's close enough to be worth the experiment. BOA/Moses 
does have a self-referential element in the Bayesian analysis of the GA 
population. Will it be enough to invent elliptic function theory and 
zero-knowledge proofs and discover the Krebs cycle and gamma-ray bursts and 
write Finnegans Wake and Snow Crash? We'll see...
 
Josh


Re: [agi] Religion-free technical content

2007-10-03 Thread Linas Vepstas
On Mon, Oct 01, 2007 at 10:40:53AM -0400, Edward W. Porter wrote:
> [...]
> RSI (Recursive Self Improvement)
> [...]
> I didn't know exactly what the term covers.
> 
> So could you, or someone, please define exactly what its meaning is?
> 
> Is it any system capable of learning how to improve its current behavior
> by changing to a new state with a modified behavior, and then from that
> new state (arguably "recursively") improving behavior to yet another new
> state, and so on and so forth?  If so, why wouldn't any system doing
> ongoing automatic learning that changed its behavior be an RSI system.

No; learning is just learning. 

For example, humans are known to have 5 to 9 short-term memory "slots"
(this has been measured by a wide variety of psychology experiments,
e.g. ability to recall random data, etc.)

When reading a book, watching a movie, replying to an email, or solving 
a problem, humans presumably use many or all of these slots (watching 
a movie: to remember the characters, plot twists, recent scenes, etc.
Replying to this email: to remember the point that I'm trying to make,
while simultaneously composing a grammatical, pleasant-to-read sentence.)

Now, suppose I could learn enough neuropsychology to grow some extra
neurons in a petri dish, then implant them in my brain, and up my
short-term memory slots to, say, 50-100.  The new me would be like
the old me, except that I'd probably find movies and books to be trite 
and boring, as they are threaded together from only a half-dozen 
salient characteristics and plot twists (how many characters
and situations are there in Jane Austen's Pride & Prejudice? 
Might it not seem like a children's book, since I'll be able 
to "hold in mind" its entire plot, and have a whole lotta 
short-term memory slots left-over for other tasks?). 

Music may suddenly seem lame, being at most a single melody line 
that expounds on a chord progression consisting of a half-dozen chords, 
each chord consisting of 4-6 notes.  The new me might come to like 
multiple melody lines exploring a chord progression of some 50 chords, 
each chord being made of 14 or so notes...

The new me would probably be a better scientist: being able to 
remember and operate on 50-100 items in short term memory will
likely allow me to decipher a whole lotta biochemistry that leaves
current scientists puzzled.  And after doing that, I might decide
that some other parts of my brain could use expansion too.

*That* is RSI.

--linas



Re: [agi] Religion-free technical content

2007-10-03 Thread Richard Loosemore


I criticised your original remarks because they demonstrated a complete 
lack of understanding of what complex systems actually are.  You said 
things about complex systems that were, quite frankly, ridiculous: 
Turing-machine equivalence, for example, has nothing to do with this.


In your more lengthy criticism, below, you go on to make many more 
statements that are confused, and you omit key pieces of the puzzle that 
I went to great lengths to explain in my paper.  In short, you 
misrepresent what I said and what others have said, and you show signs 
that you did not read the paper, but just skimmed it.


I will deal with your points one at a time.


J Storrs Hall, PhD wrote:

> On Tuesday 02 October 2007 08:46:43 pm, Richard Loosemore wrote:
> > J Storrs Hall, PhD wrote:
> > > I find your argument quotidian and lacking in depth. ...
> 
> > What you said above was pure, unalloyed bullshit:  an exquisite cocktail 
> > of complete technical ignorance, patronizing insults and breathtaking 
> > arrogance. ...
> 
> I find this argument lacking in depth, as well.
> 
> Actually, much of your paper is right. What I said was that I've heard it all 
> before (that's what "quotidian" means) and others have taken it farther than 
> you have.
> 
> You write (proceedings p. 161) "The term 'complex system' is used to describe 
> PRECISELY those cases where the global behavior of the system shows 
> interesting regularities, and is not completely random, but where the nature 
> of the interaction of the components is such that we would normally expect 
> the consequences of those interactions to be beyond the reach of analytic 
> solutions." (emphasis added)
> 
> But of course even a 6th-degree polynomial is beyond the reach of an analytic 
> solution, as is the 3-body problem in Newtonian mechanics. And indeed the 
> orbit of Pluto has been shown to be chaotic. But we can still predict with 
> great confidence when something as finicky as a solar eclipse will occur 
> thousands of years from now. So being beyond analytic solution does not mean 
> unpredictable in many, indeed most, practical cases.


There are different degrees of complexity in systems:  there is no black 
and white distinction between "pure complex systems" on the one hand and 
"non-complex" systems on the other.


I made this point in a number of ways in my paper, most especially by 
talking about the "degree of complexity" to be expected in intelligent 
systems, and whether or not they have a "significant amount" of 
complexity.  At no point do I try to claim, or imply, that a system that 
possesses ANY degree of complexity is automatically banged over into the 
same category as the most extreme complex systems.  In fact, I 
explicitly deny it:


"One of the main arguments advanced in this paper is
 that complexity can be present in AI systems in a subtle way.
 This is in contrast to the widespread notion that the opposite
 is true: that those advocating the idea that intelligence involves
 complexity are trying to assert that intelligent behavior should
 be a floridly emergent property of systems in which there is no
 relationship whatsoever between the system components and the
 overall behavior.
 While there may be some who advocate such an extreme-emergence
 agenda, that is certainly not what is proposed here. It is
 simply not true, in general, that complexity needs to make
 itself felt in a dramatic way. Specifically, what is claimed
 here is that complexity can be quiet and unobtrusive, while
 at the same time having a significant impact on the overall
 behavior of an intelligent system."


In your criticism, you misrepresent my argument as a claim that IF any 
system has the smallest amount of complexity in its makeup, THEN it 
should be as totally unpredictable as the most extreme form of complex 
system.  I will show, below, how you make this misrepresentation again 
and again.


First, you talk about 6th-degree polynomials. These are not "systems" in 
any meaningful sense of the word; they are functions.  This is actually 
just a red herring.


Second, you mention the 3-body problem in Newtonian mechanics.  Although 
I did not use it as such in the paper, this is my poster child of a 
partially complex system.  I often cite the case of planetary system 
dynamics as an example of a real physical system that is PARTIALLY 
complex, because it is mostly governed by regular dynamics (which lets 
us predict solar eclipses precisely), but also has various minor aspects 
that are complex, such as Pluto's orbit, braiding effects in planetary 
rings, and so on.


This fits my definition of complexity (which you quote above) perfectly: 
 there do exist "interesting regularities" in the global behavior of 
orbiting bodies (e.g. the presence of ring systems, and the presence of 
braiding effects in those ring systems) that appear to be beyond the 
reach of analytic explanation.


But you cite this as an example of something that contradicts my 
arg

RE: [agi] Religion-free technical content

2007-10-03 Thread Edward W. Porter
Josh,

Thank you for your reply, copied below.  It was – as have been many of
your posts – thoughtful and helpful.

I did have a question about the following section

“THE LEARNING PROCESS MUST NOT ONLY IMPROVE THE WORLD MODEL AND WHATNOT,
BUT MUST IMPROVE (=> MODIFY) *ITSELF*. KIND OF THE WAY CIVILIZATION HAS
(MORE OR LESS) MOVED FROM RELIGION TO PHILOSOPHY TO SCIENCE AS THE
METHODOLOGY OF CHOICE FOR ITS SAGES.”

“THAT, OF COURSE, IS SELF-MODIFYING CODE -- THE DARK PLACE IN A COMPUTER
SCIENTIST'S SOUL WHERE ONLY THE KWISATZ HADERACH CAN LOOK.   :^)”

My question is: if a machine’s world model includes the system’s model of
itself and its own learned mental representations and behavior patterns, is
it not possible that modification of these learned representations and
behaviors could be enough to provide what you are talking about -- without
requiring modifying its code at some deeper level?

For example, it is commonly said that humans and their brains have changed
very little in the last 30,000 years, that if a newborn from that age
were raised in our society, nobody would notice the difference.  Yet in
the last 30,000 years the sophistication of mankind’s understanding of,
and ability to manipulate, the world has grown exponentially.  There have
been tremendous changes in code, at the level of learned representations
and learned mental behaviors, such as advances in mathematics, science,
and technology, but there has been very little, if any, significant
change in code at the level of inherited brain hardware and software.

Take for example mathematics and algebra.  These are learned mental
representations and behaviors that let a human manage levels of complexity
they could not otherwise even begin to.  But my belief is that when
executing such behaviors or remembering such representations, the basic
brain mechanisms involved – probability, importance, and temporal based
inference; instantiating general patterns in a context appropriate way;
context sensitive pattern-based memory access; learned patterns of
sequential attention shifts, etc. -- are all virtually identical to ones
used by our ancestors 30,000 years ago.

I think in the coming years there will be lots of changes in AGI code at a
level corresponding to the human inherited brain level.  But once
human-level AGI has been created -- with what will obviously have to be a
learning capability as powerful, adaptive, exploratory, creative, and as
capable of building upon its own advances as that of a human -- it is not
clear to me that it would require further changes at a level equivalent to
the human inherited brain level to continue to operate and learn as well as
a human, any more than the tremendous advances of human civilization over
the last 30,000 years have required such changes.

Your implication that civilization had improved itself by moving “from
religion to philosophy to science” seems to suggest that the level of
improvement you say is needed might actually be at the level of learned
representation, including learned representation of mental behaviors.



As a minor note, I would like to point out the following concerning your
statement that:

“ALL AI LEARNING SYSTEMS TO DATE HAVE BEEN "WIND-UP TOYS" “

I think a lot of early AI learning systems, although clearly toys when
compared with humans in many respects, have been amazingly powerful
considering many of them ran on roughly fly-brain-level hardware.  As I
have been saying for decades, I know what the limiting factor in AI is --
computational horsepower. And it is coming fast.


Edward W. Porter
Porter & Associates
24 String Bridge S12
Exeter, NH 03833
(617) 494-1722
Fax (617) 494-1822
[EMAIL PROTECTED]



-Original Message-
From: J Storrs Hall, PhD [mailto:[EMAIL PROTECTED]
Sent: Wednesday, October 03, 2007 10:14 AM
To: agi@v2.listbox.com
Subject: Re: [agi] Religion-free technical content


On Tuesday 02 October 2007 05:50:57 pm, Edward W. Porter wrote:
> The below is a good post:

Thank you!

> I have one major question for Josh.  You said
>
> “PRESENT-DAY TECHNIQUES CAN DO MOST OF THE THINGS THAT AN AI NEEDS
> TO DO,  WITH THE EXCEPTION OF COMING UP WITH NEW REPRESENTATIONS AND
> TECHNIQUES. THAT'S THE SELF-REFERENTIAL KERNEL, THE TAIL-BITING,
> GÖDEL-INVOKING COMPLEX CORE OF THE WHOLE PROBLEM.”
>
> Could you please elaborate on exactly what the “complex core of the
> whole problem” is that you still think is currently missing.

No, but I will try to elaborate inexactly.   :^)

Let me quote Tom Mitchell, Head of Machine Learning Dept. at CMU:

"It seems the real problem with current AI is that NOBODY to my knowledge
is
seriously trying to design a 'never ending learning' machine." (Private
communication)

By which he meant what we tend to call "RSI" here. I think the "coming up
with
new representations and techniques" part is pretty straightforward, the
question is how to do it. Search works, a la a GA, if you have billions of

years and trillions of organisms to work with. I personally am too
impatient,
so

RE: [agi] Religion-free technical content

2007-10-03 Thread Edward W. Porter
From what you say below it would appear human-level AGI would not require
recursive self improvement, because as you appear to define it humans
don't either (i.e., we currently don't artificially substantially expand
the size of our brain).

I wonder what percent of the AGI community would accept that definition? A
lot of people on this list seem to hang a lot on RSI, as they use it,
implying it is necessary for human-level AGI.


Edward W. Porter
Porter & Associates
24 String Bridge S12
Exeter, NH 03833
(617) 494-1722
Fax (617) 494-1822
[EMAIL PROTECTED]



-Original Message-
From: Linas Vepstas [mailto:[EMAIL PROTECTED]
Sent: Wednesday, October 03, 2007 12:19 PM
To: agi@v2.listbox.com
Subject: Re: [agi] Religion-free technical content


On Mon, Oct 01, 2007 at 10:40:53AM -0400, Edward W. Porter wrote:
> [...]
> RSI (Recursive Self Improvement)
> [...]
> I didn't know exactly what the term covers.
>
> So could you, or someone, please define exactly what its meaning is?
>
> Is it any system capable of learning how to improve its current
> behavior by changing to a new state with a modified behavior, and then
> from that new state (arguably "recursively") improving behavior to yet
> another new state, and so on and so forth?  If so, why wouldn't any
> system doing ongoing automatic learning that changed its behavior be
> an RSI system.

No; learning is just learning.

For example, humans are known to have 5 to 9 short-term memory "slots"
(this has been measured by a wide variety of psychology experiments, e.g.
ability to recall random data, etc.)

When reading a book, watching a movie, replying to an email, or solving
a problem, humans presumably use many or all of these slots (watching
a movie: to remember the characters, plot twists, recent scenes, etc.
Replying to this email: to remember the point that I'm trying to make,
while simultaneously composing a grammatical, pleasant-to-read sentence.)

Now, suppose I could learn enough neuropsychology to grow some extra
neurons in a petri dish, then implant them in my brain, and up my
short-term memory slots to, say, 50-100.  The new me would be like the old
me, except that I'd probably find movies and books to be trite
and boring, as they are threaded together from only a half-dozen
salient characteristics and plot twists (how many characters and
situations are there in Jane Austen's Pride & Prejudice?
Might it not seem like a children's book, since I'll be able
to "hold in mind" its entire plot, and have a whole lotta
short-term memory slots left-over for other tasks?).

Music may suddenly seem lame, being at most a single melody line
that expounds on a chord progression consisting of a half-dozen chords,
each chord consisting of 4-6 notes.  The new me might come to like
multiple melody lines exploring a chord progression of some 50 chords,
each chord being made of 14 or so notes...

The new me would probably be a better scientist: being able to
remember and operate on 50-100 items in short term memory will likely
allow me to decipher a whole lotta biochemistry that leaves current
scientists puzzled.  And after doing that, I might decide that some other
parts of my brain could use expansion too.

*That* is RSI.

--linas


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=49387922-edf0e9


[agi] RSI

2007-10-03 Thread Richard Loosemore

Edward W. Porter wrote:
> From what you say below it would appear human-level AGI would not require
> recursive self improvement, because as you appear to define it humans
> don't either (i.e., we currently don't artificially substantially expand
> the size of our brain).
> 
> I wonder what percent of the AGI community would accept that definition? A
> lot of people on this list seem to hang a lot on RSI, as they use it,
> implying it is necessary for human-level AGI.


RSI is not necessary for human-level AGI.

RSI is only what happens after you get an AGI up to the human level:  it 
could then be used [sic] to build a more intelligent version of itself, 
and so on up to some unknown plateau.  That plateau is often referred to 
as "superintelligence".



Richard Loosemore



Re: [agi] intelligent compression

2007-10-03 Thread Matt Mahoney

--- Mike Dougherty <[EMAIL PROTECTED]> wrote:

> On 10/2/07, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> > It says a lot about the human visual perception system.  This is an
> > extremely lossy function.  Video contains only a few bits per second of useful
> > information.  The demo is able to remove a large amount of uncompressed
> > image data without changing the compressed representation in our brains by
> > exploiting only the lowest levels of the visual perception function.
> 
> re: exploiting "only" the lowest levels
> 
> What are the higher levels of visual function?  How could they be exploited?

Image refactoring removes the pixels that are most similar to their neighbors
in brightness.  This causes the least noticeable change because the retina is
sensitive to differences in adjacent pixels (center-surround or edge
detection) rather than absolute brightness.

The higher levels detect complex objects like airplanes or printed words or
faces.  We could (lossily) compress images much smaller if we knew how to
recognize these features.  The idea would be to compress a movie to a written
script, then have the decompressor reconstruct the movie.  The reconstructed
movie would be different, but not in a way that anyone would notice, in the
same way that pairs of images such as
http://www.slylockfox.com/arcade/6diff/index.html would have the same
compressed representations.
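
To make "most similar to their neighbors" concrete, here is a toy Python/NumPy
sketch (my illustration, not the actual refactoring code): rank pixels by how
much they differ from their neighbors, and treat the flattest ones as the first
candidates to drop in a lossy scheme.

    import numpy as np

    def contrast_map(img):
        # img: 2-D array of brightness; returns each pixel's max difference
        # from its four neighbors (high = salient edge, low = "redundant").
        padded = np.pad(img, 1, mode='edge')
        h, w = img.shape
        diffs = [np.abs(img - padded[1 + dy:1 + dy + h, 1 + dx:1 + dx + w])
                 for dy, dx in [(-1, 0), (1, 0), (0, -1), (0, 1)]]
        return np.max(diffs, axis=0)

    img = np.random.rand(64, 64)
    salience = contrast_map(img)
    drop_first = np.argsort(salience, axis=None)[:img.size // 10]   # flattest 10%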


-- Matt Mahoney, [EMAIL PROTECTED]



RE: [agi] RSI

2007-10-03 Thread Derek Zahn
Edward W. Porter writes:
> As I say, what is, and is not, RSI would appear to be a matter of
> definition.
> But so far the several people who have gotten back to me, including
> yourself, seem to take the position that that is not the type of recursive
> self improvement they consider to be "RSI." Some people have drawn the
> line at coding. RSI they say includes modifying ones own code, but code
> of course is a relative concept, since code can come in higher and higher
> level languages and it is not clear where the distinction between code and
> non-code lies.
 
As I had included comments along these lines in a previous conversation, I 
would like to clarify.  That conversation was not specifically about a 
definition of RSI, it had to do with putting restrictions on the type of RSI we 
might consider prudent, in terms of cutting the risk of creating intelligent 
entities whose abilities grow faster than we can handle.
 
One way to think about that problem is to consider that building an AGI 
involves taking a theory of mind and embodying it in a particular computational 
substrate, using one or more layers of abstraction built on the primitive 
operations of the substrate.  That implementation is not the same thing as the 
mind model, it is one expression of the mind model.
 
If we do not give arbitrary access to the mind model itself or its 
implementation, it seems safer than if we do -- this limits the extent that RSI 
is possible: the efficiency of the model implementation and the capabilities of 
the model do not change.  Those capabilities might of course still be larger 
than was expected, so it is not a safety guarantee; further analysis using the 
particulars of the model and implementation should also be considered.
 
RSI in the sense of "learning to learn better" or "learning to think better" 
within a particular theory of mind seems necessary for any practical AGI effort 
so we don't have to code the details of every cognitive capability from scratch.
 
 


Re: [agi] Religion-free technical content

2007-10-03 Thread J Storrs Hall, PhD
Thanks!

It's worthwhile being specific about levels of interpretation in the 
discussion of self-modification. I can write self-modifying assembly code 
that yet does not change the physical processor, or even its microcode if 
it's one of those old architectures. I can write a self-modifying Lisp 
program that doesn't change the assembly language interpreter that's running 
it. 

So it's certainly possible to push the self-modification up the interpretive 
abstraction ladder, to levels designed to handle it cleanly. But the basic 
point, I think, stands: there has to be some level that is both controlling 
the way the system does things, and gets modified.
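
A minimal Python sketch of what I mean by pushing it up the abstraction ladder
(a toy of my own, not anyone's architecture): the controlling level is just a
value the program itself can overwrite, so behavior changes without touching
the interpreter or the machine code underneath.

    def naive_learner(data, model):
        # initial strategy: remember only the most recent observation
        model['memory'] = list(data[-1:])
        return model

    def cumulative_learner(data, model):
        # revised strategy: keep everything seen so far
        model['memory'] = model.get('memory', []) + list(data)
        return model

    system = {'model': {}, 'learn': naive_learner}

    def step(system, data):
        system['model'] = system['learn'](data, system['model'])
        # the level that controls how the system learns is itself modified here
        if len(system['model']['memory']) < len(data):
            system['learn'] = cumulative_learner
        return system

    step(system, [1, 2, 3])
    step(system, [4, 5])
    print(system['model']['memory'])      # the later strategy is now in effect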

I agree with you that there has been little genetic change in human brain 
structure since the paleolithic, but I would claim that culture *is* the 
software and it has been upgraded drastically. And I would agree that the 
vast bulk of human self-improvement has been at this software level, the 
level of learned representations.

If we want to improve our basic hardware, i.e. brains, we'll need to 
understand them well enough to do basic engineering on them -- a self-model. 
However, we didn't need that to build all the science and culture we have so 
far, a huge software self-improvement. That means to me that it is possible 
to abstract out the self-model until the part you need to understand and 
modify is some tractable kernel. For human culture that is the concept of 
science (and logic and evidence and so forth).

This means to me that it should be possible to structure an AGI so that it 
could be recursively self improving at a very abstract, highly interpreted 
level, and still have a huge amount to learn before it can do anything about the 
next level down.

Regarding machine speed/capacity: yes, indeed. Horsepower is definitely going 
to be one of the enabling factors, over the next decade or two. But I don't 
think AM would get too much farther on a Blue Gene than on a PDP-10 -- I 
think it required hyper-exponential time for concepts of a given size.

Josh


On Wednesday 03 October 2007 12:44:20 pm, Edward W. Porter wrote:
> Josh,
> 
> Thank you for your reply, copied below.  It was – as have been many of
> your posts – thoughtful and helpful.
> 
> I did have a question about the following section
> 
> “THE LEARNING PROCESS MUST NOT ONLY IMPROVE THE WORLD MODEL AND WHATNOT,
> BUT MUST IMPROVE (=> MODIFY) *ITSELF*. KIND OF THE WAY CIVILIZATION HAS
> (MORE OR LESS) MOVED FROM RELIGION TO PHILOSOPHY TO SCIENCE AS THE
> METHODOLOGY OF CHOICE FOR ITS SAGES.”
> 
> “THAT, OF COURSE, IS SELF-MODIFYING CODE -- THE DARK PLACE IN A COMPUTER
> SCIENTIST'S SOUL WHERE ONLY THE KWISATZ HADERACH CAN LOOK.   :^)”
> 
> My question is: if a machine’s world model includes the system’s model of
> itself and its own learned mental representation and behavior patterns, is
> it not possible that modification of these learned representations and
> behaviors could be enough to provide what you are talking about -- without
> requiring modifying its code at some deeper level.
> 
> For example, it is commonly said that humans and their brains have changed
> very little in the last 30,000 years, that if a new born from that age
> were raised in our society, nobody would notice the difference.  Yet in
> the last 30,000 years the sophistication of mankind’s understanding of,
> and ability to manipulate, the world has grown exponentially.  There has
> been tremendous changes in code, at the level of learned representations
> and learned mental behaviors, such as advances in mathematics, science,
> and technology, but there has been very little, if any, significant
> changes in code at the level of inherited brain hardware and software.
> 
> Take for example mathematics and algebra.  These are learned mental
> representations and behaviors that let a human manage levels of complexity
> they could not otherwise even begin to.  But my belief is that when
> executing such behaviors or remembering such representations, the basic
> brain mechanisms involved – probability, importance, and temporal based
> inference; instantiating general patterns in a context appropriate way;
> context sensitive pattern-based memory access; learned patterns of
> sequential attention shifts, etc. -- are all virtually identical to ones
> used by our ancestors 30,000 years ago.
> 
> I think in the coming years there will be lots of changes in AGI code at a
> level corresponding to the human inherited brain level.  But once human
> level AGI has been created -- with what will obviously have to a learning
> capability as powerful, adaptive, exploratory, creative, and as capable of
> building upon its own advances at that of a human -- it is not clear to me
> it would require further changes at a level equivalent to the human
> inherited brain level to continue to operate and learn as well as a human,
> any more than have the tremendous advances of human civilization in the
> last 30,000 years.
> 
> Your

RE: [agi] RSI

2007-10-03 Thread Derek Zahn
I wrote:
> If we do not give arbitrary access to the mind model itself or its 
> implementation, it seems safer than if we do -- this limits the 
> extent that RSI is possible: the efficiency of the model implementation 
> and the capabilities of the model do not change.
 
An obvious objection to this is that if the "capabilities of the model" include 
the ability to simulate a Turing machine then the capabilities actually include 
everything computable.  However, the issue being addressed is a practical one 
referring to what actually happens, and there are enormous practical issues 
involving resource limits of processing time and memory space that should be 
considered.  Such consideration is part of a model-specific safety analysis.
 


Re: [agi] RSI

2007-10-03 Thread Bob Mottram
On 03/10/2007, Richard Loosemore <[EMAIL PROTECTED]> wrote:
> RSI is not necessary for human-level AGI.


I think it's too early to be able to make a categorical statement of
this kind.  Does not a newborn baby recursively improve its thought
processes until it reaches "human level"?



RE: [agi] RSI

2007-10-03 Thread Edward W. Porter
Good distinction!


Edward W. Porter


 -Original Message-
From: Derek Zahn [mailto:[EMAIL PROTECTED]
Sent: Wednesday, October 03, 2007 3:22 PM
To: agi@v2.listbox.com
Subject: RE: [agi] RSI



Edward W. Porter writes:

> As I say, what is, and is not, RSI would appear to be a matter of
> definition.
> But so far the several people who have gotten back to me, including
> yourself, seem to take the position that that is not the type of recursive
> self improvement they consider to be "RSI." Some people have drawn the
> line at coding. RSI they say includes modifying ones own code, but code
> of course is a relative concept, since code can come in higher and higher
> level languages and it is not clear where the distinction between code and
> non-code lies.

As I had included comments along these lines in a previous conversation, I
would like to clarify.  That conversation was not specifically about a
definition of RSI, it had to do with putting restrictions on the type of
RSI we might consider prudent, in terms of cutting the risk of creating
intelligent entities whose abilities grow faster than we can handle.

One way to think about that problem is to consider that building an AGI
involves taking a theory of mind and embodying it in a particular
computational substrate, using one or more layers of abstraction built on
the primitive operations of the substrate.  That implementation is not the
same thing as the mind model, it is one expression of the mind model.

If we do not give arbitrary access to the mind model itself or its
implementation, it seems safer than if we do -- this limits the extent
that RSI is possible: the efficiency of the model implementation and the
capabilities of the model do not change.  Those capabilities might of
course still be larger than was expected, so it is not a safety guarantee;
further analysis using the particulars of the model and implementation,
should be considered also.

RSI in the sense of "learning to learn better" or "learning to think
better" within a particular theory of mind seems necessary for any
practical AGI effort so we don't have to code the details of every
cognitive capability from scratch.





Re: [agi] Context free text analysis is not a proper method of natural language understanding

2007-10-03 Thread Matt Mahoney
--- [EMAIL PROTECTED] wrote:

> Relating to the idea that text compression (as demonstrated by general
> compression algorithms) is a measure of intelligence,
> Claims:
> (1) To understand natural language requires knowledge (CONTEXT) of the
> social world(s) it refers to.
> (2) Communication includes (at most) a shadow of the context necessary
> to understand it.
> 
> Given (1), no context-free analysis can understand natural language.
> Given (2), no adaptive agent can learn (proper) understanding of natural
> language given only texts.
> 
> For human-like understanding, an AGI would need to participate in
> (human) social society.

The ideal test set for text compression as a test for AI would be 1 GB of chat
sessions, such as the transcripts between judges and human confederates in the
Loebner contests.  Since I did not have this much data available I used
Wikipedia.  It lacks a discourse model but the problem is otherwise similar in
that good compression requires vast, real world knowledge.  For example,
compressing or predicting:

  Q. What color are roses?
  A. ___

is almost the same kind of problem as compressing or predicting:

  Roses are ___

Of course, the compressor would be learning an ungrounded language model. 
That should be sufficient for passing a Turing test.  A model need not have
actually seen a rose to know the answer to the question.  I don't think it is
possible to find any knowledge that could be tested through a text-only
channel that could not also be learned through a text-only channel.  Whether
sufficient testable knowledge is actually available in a training corpus is
another question.
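
For concreteness, here is a toy Python sketch (mine, not the compressor used
for the Wikipedia test) of why prediction and compression are the same problem:
any model that assigns probabilities to the next character implies a code
length of about -log2 P per character, which an arithmetic coder could
approach; the better the model predicts "Roses are ___", the fewer bits it
needs.

    import math

    def predictor(history):
        # Toy model: add-one-smoothed character frequencies over the history.
        counts = {}
        for c in history:
            counts[c] = counts.get(c, 0) + 1
        total = sum(counts.values()) + 256          # smooth over all byte values
        return lambda c: (counts.get(c, 0) + 1) / total

    def ideal_compressed_bits(text):
        bits = 0.0
        for i, c in enumerate(text):
            p = predictor(text[:i])(c)              # probability of the next character
            bits += -math.log2(p)                   # ideal code length for that character
        return bits

    print(ideal_compressed_bits("Roses are red. Roses are red."))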

I don't claim that lossless compression could be used to test for AGI, just
AI.  A lossless image compression test would be almost useless because the
small amount of perceptible information in video would be overwhelmed by
uncompressible pixel noise.  A lossy test would be appropriate, but would
require subjective human evaluation of the quality of the reproduced output. 
For text, a strictly objective lossless test is possible because the
perceptible content of text is a large fraction of the total content.


-- Matt Mahoney, [EMAIL PROTECTED]



RE: [agi] Religion-free technical content

2007-10-03 Thread Edward W. Porter


Again a well reasoned response.

With regard to the limitations of AM, I think if the young Doug Lenat and
those of his generation had had 32K processor Blue Gene Ls, with 4TBytes
of RAM, to play with they would have soon started coming up with things
way way beyond AM.

In fact, if the average AI post-grad of today had such hardware to play
with, things would really start jumping.  Within ten years the equivalents
of such machines could easily be sold for somewhere between $10k and
$100k, and lots of post-grads will be playing with them.

Hardware to the people!

Edward W. Porter
Porter & Associates
24 String Bridge S12
Exeter, NH 03833
(617) 494-1722
Fax (617) 494-1822
[EMAIL PROTECTED]



-Original Message-
From: J Storrs Hall, PhD [mailto:[EMAIL PROTECTED]
Sent: Wednesday, October 03, 2007 3:21 PM
To: agi@v2.listbox.com
Subject: Re: [agi] Religion-free technical content


Thanks!

It's worthwhile being specific about levels of interpretation in the
discussion of self-modification. I can write self-modifying assembly code
that yet does not change the physical processor, or even its microcode if
it's one of those old architectures. I can write a self-modifying Lisp
program that doesn't change the assembly language interpreter that's
running
it.

So it's certainly possible to push the self-modification up the
interpretive
abstraction ladder, to levels designed to handle it cleanly. But the basic

point, I think, stands: there has to be some level that is both
controlling
the way the system does things, and gets modified.

I agree with you that there has been little genetic change in human brain
structure since the paleolithic, but I would claim that culture *is* the
software and it has been upgraded drastically. And I would agree that the
vast bulk of human self-improvement has been at this software level, the
level of learned representations.

If we want to improve our basic hardware, i.e. brains, we'll need to
understand them well enough to do basic engineering on them -- a
self-model.
However, we didn't need that to build all the science and culture we have
so
far, a huge software self-improvement. That means to me that it is
possible
to abstract out the self-model until the part you need to understand and
modify is some tractable kernel. For human culture that is the concept of
science (and logic and evidence and so forth).

This means to me that it should be possible to structure an AGI so that it

could be recursively self improving at a very abstract, highly interpreted

level, and still have a huge amount to learn before it can do anything about
the
next level down.

Regarding machine speed/capacity: yes, indeed. Horsepower is definitely
going
to be one of the enabling factors, over the next decade or two. But I
don't
think AM would get too much farther on a Blue Gene than on a PDP-10 -- I
think it required hyper-exponential time for concepts of a given size.

Josh


On Wednesday 03 October 2007 12:44:20 pm, Edward W. Porter wrote:
> Josh,
>
> Thank you for your reply, copied below.  It was – as have been many of
> your posts – thoughtful and helpful.
>
> I did have a question about the following section
>
> “THE LEARNING PROCESS MUST NOT ONLY IMPROVE THE WORLD MODEL AND
> WHATNOT, BUT MUST IMPROVE (=> MODIFY) *ITSELF*. KIND OF THE WAY
> CIVILIZATION HAS (MORE OR LESS) MOVED FROM RELIGION TO PHILOSOPHY TO
> SCIENCE AS THE METHODOLOGY OF CHOICE FOR ITS SAGES.”
>
> “THAT, OF COURSE, IS SELF-MODIFYING CODE -- THE DARK PLACE IN A COMPUTER
> SCIENTIST'S SOUL WHERE ONLY THE KWISATZ HADERACH CAN LOOK.   :^)”
>
> My question is: if a machine’s world model includes the system’s model
> of itself and its own learned mental representation and behavior
> patterns, is it not possible that modification of these learned
> representations and behaviors could be enough to provide what you are
> talking about -- without requiring modifying its code at some deeper
> level.
>
> For example, it is commonly said that humans and their brains have
> changed very little in the last 30,000 years, that if a new born from
> that age were raised in our society, nobody would notice the
> difference.  Yet in the last 30,000 years the sophistication of
> mankind’s understanding of, and ability to manipulate, the world has
> grown exponentially.  There has been tremendous changes in code, at
> the level of learned representations and learned mental behaviors,
> such as advances in mathematics, science, and technology, but there
> has been very little, if any, significant changes in code at the level
> of inherited brain hardware and software.
>
> Take for example mathematics and algebra.  These are learned mental
> representations and behaviors that let a human manage levels of
> complexity they could not otherwise even begin to.  But my belief is
> that when executing such behaviors or remembering such
> representations, the basic brain mechanisms involved – probability,
> importance, and temporal based inference; instantiat

Re: [agi] RSI

2007-10-03 Thread Linas Vepstas
On Wed, Oct 03, 2007 at 02:09:05PM -0400, Richard Loosemore wrote:
> 
> RSI is only what happens after you get an AGI up to the human level:  it 
> could then be used [sic] to build a more intelligent version of itself, 
> and so on up to some unknown plateau.  That plateau is often referred to 
> as "superintelligence".

Perhaps I was insufficiently clear in an earlier email.  In that
email, I sketched what RSI looked like for humans. I suggested that,
with appropriate neurosurgery, I could increase the capacity of
my short-term (working) memory, and I sketched what that might
be like, in its effects on my thought patterns.

I proposed that this kind of neuro-surgery was "RSI for humans".

Now, increasing the capacity of short-term memory for humans
is impossible, without literally growing the size of the brain,
and so that seems like a natural place to "stand pat": we're sort-of
stuck here.

However, there is no such limitation for AGI. If humans can be
made vastly smarter simply by increasing the size of short-term 
memory, then it seems that AGI can be made vastly smarter simply
by increasing its short-term memory. And this can be done at
compile-time, or even run-time, by tweaking a few parameters.
It does not require some kind of magic re-engineering of
its own algorithms. It just requires installing more RAM, 
and maybe a faster CPU.  In other words, the lack of RSI 
is not a strong barrier to AGI, in the way that it is 
for humans.

--linas



Re: [agi] RSI

2007-10-03 Thread J Storrs Hall, PhD
On Wednesday 03 October 2007 03:47:31 pm, Bob Mottram wrote:
> On 03/10/2007, Richard Loosemore <[EMAIL PROTECTED]> wrote:
> > RSI is not necessary for human-level AGI.
> 
> I think it's too early to be able to make a categorical statement of
> this kind.  Does not a new born baby recursively improve its thought
> processes until it reaches "human level" ?

Indeed. It also depends on the definition of "human-level" AI, which is so 
vague and is taken to mean so many different things by different people that 
I urge it be avoided in favor of something like "diahuman" instead (diahuman 
meaning moving across the human range, but with the connotation of having 
different, possibly wildly different, strengths and weaknesses).

As for the question, it's problematical. Surely the baby learns recursively in 
a sense; e.g. it learns language and then uses language to learn other stuff. 
But there remains the possibility that the Piagetian or similar learning 
stages are due to a pre-programmed sequence of representations / learning 
algorithms, instead of each new one being learned by the last. If I had to 
guess, I'd say it's a mix, since we do have these identifiable stages, but in 
the end we wind up with a wide variety of sometimes incompatible world 
models.

Josh



Re: [agi] intelligent compression

2007-10-03 Thread Mike Dougherty
On 10/3/07, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> The higher levels detect complex objects like airplanes or printed words or
> faces.  We could (lossily) compress images much smaller if we knew how to
> recognize these features.  The idea would be to compress a movie to a written
> script, then have the decompressor reconstruct the movie.  The reconstructed
> movie would be different, but not in a way that anyone would notice, in the
> same way that pairs of images such as
> http://www.slylockfox.com/arcade/6diff/index.html would have the same
> compressed representations.

Is this because we use a knowledgebase of classes for things like
"airplane" that can be used to fill in the details that are lost
during compression?

Can that KB be seeded, or must it be experientially evolved from a
more primitive percept?  Consider how little useful skill a human baby
has compared to other animals.  Perhaps that's the trade-off for a high
potential general intelligence: there must be a lot of faltering and
(semi-) useless motion while learning the basics.



Re: [agi] Religion-free technical content

2007-10-03 Thread Mike Dougherty
On 10/3/07, Edward W. Porter <[EMAIL PROTECTED]> wrote:
> In fact, if the average AI post-grad of today had such hardware to play
> with, things would really start jumping.  Within ten years the equivalents
> of such machines could easily be sold for somewhere between $10k and
> $100k, and lots of post-grads will be playing with them.

I see the only value to giving post-grads the kind of computing
hardware you are proposing is that they can more quickly exhaust the
space of ideas that won't work.  Just because a program has more lines
of code does not make it more elegant and just because there are more
clock cycles per unit time does not make a computer any smarter.

Have you ever computed the first dozen iterations of a Sierpinski
gasket by hand?  There appears to be no order at all.  Eventually over
enough iterations the pattern becomes clear.  I have little doubt that
general intelligence will develop in a similar way:  there will be
many apparently unrelated efforts that eventually flesh out in
function until they overlap.  It might not be seamless but there is
not enough evidence that human cognitive processing is a seamless
process either.
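
For anyone who hasn't tried it, here is a quick Python sketch of the "chaos
game" version of the gasket (illustrative only): each individual point looks
random, and the triangular structure only shows up after many iterations --
which is exactly the point.

    import random

    def sierpinski_points(n=10000):
        vertices = [(0.0, 0.0), (1.0, 0.0), (0.5, 0.866)]
        x, y = 0.25, 0.25                     # arbitrary starting point
        points = []
        for _ in range(n):
            vx, vy = random.choice(vertices)  # jump halfway toward a random vertex
            x, y = (x + vx) / 2.0, (y + vy) / 2.0
            points.append((x, y))
        return points

    pts = sierpinski_points()                 # plot these and the gasket appears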



Re: [agi] RSI

2007-10-03 Thread Matt Mahoney
On 03/10/2007, Richard Loosemore <[EMAIL PROTECTED]> wrote:
> RSI is not necessary for human-level AGI.

How about: RSI will not be possible until human-level AGI.

Specifically, the AGI will need the same skills as its builders with regard to
language understanding, system engineering, and software development.


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [agi] Religion-free technical content

2007-10-03 Thread Linas Vepstas
On Wed, Oct 03, 2007 at 02:00:03PM -0400, Edward W. Porter wrote:
> From what you say below it would appear human-level AGI would not require
> recursive self improvement, 
[...]
> A lot of people on this list seem to hang a lot on RSI, as they use it,
> implying it is necessary for human-level AGI.

Nah. A few people have suggested that an extremely-low-IQ "internet
worm" that is capable of modifying its own code might be able to ratchet
itself up to human intelligence levels.  Insofar as it "modifies its
own code", it's RSI.

First, I don't think such a thing is likely. Secondly, even if it's
likely, one can implement an entirely equivalent thing that doesn't
actually "self modify" in this way, by using e.g. scheme or lisp,
or even with the proper structures, in C.

I think that, at this level, talking about "code that can modify
itself" is smoke-n-mirrors. Self-modifying code is just one of many
things in a programmer's kit bag, and there are plenty of equivalent
formulations that don't actually require changing source code and
recompiling.
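
As a rough illustration of that equivalence (a hypothetical Python sketch
with made-up rule names), a fixed program can change its behavior by
rewriting the data it consults rather than its own code:

# A fixed interpreter whose behavior changes by rewriting *data* rather than
# code: the interpreter below never changes, but the rule table it consults
# can be extended at runtime, which is operationally equivalent to
# "self-modification" without recompiling anything.
rules = {
    "greet": lambda args: "hello " + " ".join(args),
    "add":   lambda args: str(sum(int(a) for a in args)),
}

def interpret(command: str) -> str:
    op, *args = command.split()
    return rules.get(op, lambda a: "unknown op: " + op)(args)

print(interpret("add 2 3"))            # -> 5

# "Self-improvement" as a data update: the running system installs a new rule.
rules["square"] = lambda args: str(int(args[0]) ** 2)
print(interpret("square 7"))           # -> 49

The interpreter never recompiles anything, yet its observable behavior is
open-ended.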

Put it this way: if I were an AGI, and I was prohibited from recompiling
my own program, I could still emulate a computer with pencil and paper,
and write programs for my pencil-n-paper computer. (I wouldn't use
pencil-n-paper, of course, I'd "do it in my head"). I might be able to
do this pencil-and-paper emulation pretty danged fast (being AGI and all),
and then re-incorporate those results back into my own thinking.

In fact, I might choose to do all of my thinking on my pen-n-paper
emulator, and, since I was doing it all in my head anyway, I might not 
bother to tell my creator that I was doing this. (which is not to say
it would be undetectable .. creator might notice that an inordinate 
amount of cpu time is being used in one area, while other previously
active areas have gone dormant).

So a prohibition from modifying one's own code is not really much
of a prohibition at all.

--linas

p.s. The Indian mathematician Ramanujan seems to have managed to train a
set of neurons in his head to be a very fast symbolic multiplier/divider.
With this, he was able to see vast amounts (six volumes' worth before
dying at age 26) of strange and interesting relationships between certain
equations that were otherwise quite opaque to other human beings. So,
"running an emulator in your head" is not impossible, even for humans;
although, admittedly, it's extremely rare.


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=49514235-ad4bd3


Re: [agi] intelligent compression

2007-10-03 Thread Matt Mahoney

--- Mike Dougherty <[EMAIL PROTECTED]> wrote:

> On 10/3/07, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> > The higher levels detect complex objects like airplanes or printed words
> or
> > faces.  We could (lossily) compress images much smaller if we knew how to
> > recognize these features.  The idea would be to compress a movie to a
> written
> > script, then have the decompressor reconstruct the movie.  The
> reconstructed
> > movie would be different, but not in a way that anyone would notice, in
> the
> > same way that pairs of images such as
> > http://www.slylockfox.com/arcade/6diff/index.html would have the same
> > compressed representations.
> 
> Is this because we use a knowledgebase of classes for things like
> "airplane" that can be used to fill in the details that are lost
> during compression?

Yes.

> Can that KB be seeded, or must it be experientially evolved from a
> more primitive precept?  Consider how little useful skill a human baby
> has compared to other animals.  Perhaps thats the trade-off for a high
> potential general intelligence, there must be a lot of faltering and
> (semi-) useless motion while learning the basics.

It seems that a baby is born knowing very little, compared to an antelope born
already knowing how to run, or a spider born knowing how to weave a web.  But
I would not discount the possibility that a baby is born already knowing a
very complex algorithm for learning.

I think that with a better understanding of this algorithm, a visual
perception knowledge base can be trained in a hierarchical manner, building
from simple visual patterns to more abstract concepts.  Just not on a PC.  An
adult level human visual perception system is trained on 20 years of video, or
about 10^16 bits of DVD quality MPEG-2.  This is 10^6 times more data than 20
years worth of text, even though the perceptible information content is about
the same.  Curiously, there is also a 10^6 gap between the cognitive model of
long term memory (10^9 bits) and the number of synapses in the brain (10^15).
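
A quick back-of-the-envelope check of those figures (simple arithmetic,
added for illustration):

# Back-of-the-envelope check of the figures above.
video_bits = 1e16                      # the 20-year video estimate
seconds = 20 * 3.15e7                  # ~6.3e8 seconds in 20 years

# Implied bit rate: roughly DVD-class MPEG-2.
print("implied video rate: %.0f Mbit/s" % (video_bits / seconds / 1e6))   # ~16

# The claimed 10^6 smaller text stream works out to roughly a gigabyte.
text_bits = video_bits / 1e6
print("20 years of text: %.1e bits (~%.2f GB)" % (text_bits, text_bits / 8e9))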


-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=49522822-41ce42


RE: [agi] Religion-free technical content

2007-10-03 Thread Edward W. Porter
To Mike Dougherty regarding the below comment to my prior post:

I think your notion that post-grads with powerful machines would only
operate in the space of ideas that don’t work is unfair.

A lot of post-grads may be drones, but some of them are cranking some
really good stuff.  The article, Learning a Dictionary of Shape-Components
in Visual Cortex: Comparisons with Neurons, Humans and Machines, by Thomas
Serre (accessible by Google), which I cited the other day, is a prime
example.

I don’t know about you, but I think there are actually a lot of very
bright people in the interrelated fields of AGI, AI, Cognitive Science,
and Brain science.  There are also a lot of very good ideas floating
around.  And having seen how much increased computing power has already
sped up and dramatically increased what all these fields are doing, I am
confident that multiplying by several thousand fold more the power of the
machine people in such fields can play with would greatly increase their
productivity.

I am not a fan of huge program size per se, but I am a fan of being able
to store and process a lot of representation.  You can’t compute human
level world knowledge without such power.  That’s the major reason why the
human brain is more powerful than the brains of rats, cats, dogs, and
monkeys -- because it has more representational and processing power.

And although clock cycles can be wasted doing pointless things such as
do-nothing loops, generally being able to accomplish a given useful
computational task in less time makes a system smarter at some level.

Your last paragraph actually seems to make an argument for the value of
clock cycles because it implies general intelligences will come through
iterations.  More ops/sec enable iterations to be made faster.


Edward W. Porter
Porter & Associates
24 String Bridge S12
Exeter, NH 03833
(617) 494-1722
Fax (617) 494-1822
[EMAIL PROTECTED]



-Original Message-
From: Mike Dougherty [mailto:[EMAIL PROTECTED]
Sent: Wednesday, October 03, 2007 5:20 PM
To: agi@v2.listbox.com
Subject: Re: [agi] Religion-free technical content


On 10/3/07, Edward W. Porter <[EMAIL PROTECTED]> wrote:
> In fact, if the average AI post-grad of today had such hardware to
> play with, things would really start jumping.  Within ten years the
> equivents of such machines could easily be sold for somewhere between
> $10k and $100k, and lots of post-grads will be playing with them.

I see the only value to giving post-grads the kind of computing hardware
you are proposing is that they can more quickly exhaust the space of ideas
that won't work.  Just because a program has more lines of code does not
make it more elegant and just because there are more clock cycles per unit
time does not make a computer any smarter.

Have you ever computed the first dozen iterations of a sierpinski gasket
by hand?  There appears to be no order at all.  Eventually over enough
iterations the pattern becomes clear.  I have little doubt that general
intelligence will develop in a similar way:  there will be many apparently
unrelated efforts that eventually flesh out in function until they
overlap.  It might not be seamless but there is not enough evidence that
human cognitive processing is a seamless process either.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?&;

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=49523228-fa9460

Re: [agi] intelligent compression

2007-10-03 Thread Russell Wallace
On 10/3/07, Matt Mahoney <[EMAIL PROTECTED]> wrote:
[snipped parts of post agreed with]

> I think with a better understanding of this algorithm, that a visual
> perception knowledge base can be trained in a hierarchical manner, building
> from simple visual patterns to more abstract concepts.  Just not on a PC.

And it might be possible to demonstrate the algorithm doing minor
things on a PC, and thereby get funding to run it on more powerful
hardware. (Or by the time you have the algorithm working, a PC might
be powerful enough to do the job!)

Working with limited hardware is a handicap because of the things it
can't do; but it's a deeper and longer-term handicap when it trains us
into flinching away from considering those things, so that the limits
of the hardware we had at one time, become the permanent limits of our
minds - the history of AI is full of this, alas. It's important to
actively counter this tendency by aiming high. In other words, program
on the assumption you'll be running on a Blue Gene. By the time your
program is working, maybe you will :)

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=49528881-21d4be


Re: [agi] Religion-free technical content

2007-10-03 Thread Mike Tintner
Edward Porter: I don't know about you, but I think there are actually a lot of
very bright people in the interrelated fields of AGI, AI, Cognitive Science,
and Brain science.  There are also a lot of very good ideas floating around.

Yes there are bright people in AGI. But there's no one remotely close to the 
level, say, of von Neumann or Turing, right? And do you really think a 
revolution such as AGI is going to come about without that kind of 
revolutionary, creative thinker? Just by tweaking existing systems, and 
increasing computer power and complexity?  Has any intellectual revolution ever 
happened that way? (Josh?)

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=49530636-069600

Re: [agi] Language and compression

2007-10-03 Thread Russell Wallace
On 9/23/07, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> I realize that a language model must encode both the meaning of a text string
> and its representation.  This makes lossless compression an inappropriate test
> for evaluating models of visual or auditory perception.  The tiny amount of
> relevant information in a picture would be overwhelmed by incompressible pixel
> noise.

I don't think that's a showstopper. Clearly the entropy of video is
higher, as a percentage of the uncompressed file size, than is the
case for text, but tests are relative. Suppose the best lossless video
compression achieved at a given time is only 10% (though I think we
could do better than this). A program that improved this to 11% would
still be measurably, objectively better than the competition.

There is also the consideration that text compression is of no real
value, because frankly text is already small enough that it doesn't
need to be compressed. Better lossless image and video compression, on
the other hand, apart from being of more potential relevance to AI
would also be of value in its own right - there are a lot of
situations where it would be better to not have to throw away any of
the information.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=49531032-17deea


Re: [agi] Religion-free technical content

2007-10-03 Thread Matt Mahoney
--- Mark Waser <[EMAIL PROTECTED]> wrote:

> > So do you claim that there are universal moral truths that can be applied
> > unambiguously in every situation?
> 
> What a stupid question.  *Anything* can be ambiguous if you're clueless. 
> The moral truth of "Thou shalt not destroy the universe" is universal.  The 
> ability to interpret it and apply it is clearly not.
> 
> Ambiguity is a strawman that *you* introduced and I have no interest in 
> defending.

I mean that ethics or friendliness is an algorithmically complex function,
like our legal system.  It can't be simplified.  In this sense, I agree with
Richard Loosemore that it would have to be implemented as thousands (or
millions) of soft constraints.

However, I don't believe that friendliness can be made stable through RSI.  We
can summarize the function's decision process as "what would the average human
do in this situation?"  (This is not a simplification.  It still requires a
complex model of the human brain).  The function therefore has to be
modifiable because human ethics changes over time, e.g. attitudes toward the
rights of homosexuals, the morality of slavery, or whether hanging or
crucifixion is an appropriate form of punishment.

Second, as I mentioned before, RSI is necessarily experimental, and therefore
evolutionary, and the only stable goal in an evolutionary process is rapid
reproduction and acquisition of resources.  As long as humans are needed to
supply resources by building computers and interacting with them via hybrid
algorithms, AGI will be cooperative.  But as AGI grows more powerful, humans
will be less significant and more like a lower species that competes for
resources.


-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=49533160-494085


RE: [agi] Religion-free technical content

2007-10-03 Thread Edward W. Porter
Re: The following statement in Linas Vepstas’s  10/3/2007 5:51 PM post:

P.S. THE INDIAN MATHEMATICIAN RAMANUJAN SEEMS TO HAVE MANAGED TO TRAIN A
SET OF NEURONS IN HIS HEAD TO BE A VERY FAST SYMBOLIC MULTIPLIER/DIVIDER.
WITH THIS, HE WAS ABLE TO SEE VAST AMOUNTS (SIX VOLUMES WORTH BEFORE DYING
AT AGE 26) OF STRANGE AND INTERESTING RELATIONSHIPS BETWEEN CERTAIN
EQUATIONS THAT WERE OTHERWISE QUITE OPAQUE TO OTHER HUMAN BEINGS. SO,
"RUNNING AN EMULATOR IN YOUR HEAD" IS NOT IMPOSSIBLE, EVEN FOR HUMANS;
ALTHOUGH, ADMITEDLY, ITS EXTREMELY RARE.

As a young patent attorney I worked in a firm in NYC that did a lot of
work for a major Japanese Electronics company.  Each year they sent a
different Japanese employee to our firm to, among other things, improve
their English and learn more about U.S. patent law.  I made a practice of
having lunch with these people because I was fascinated with Japan.

One of them once told me that in Japan it was common for high school boys
who were interested in math, science, or business to go to abacus classes
after school or on weekends.  He said once they fully mastered using
physical abacuses, they were taught to create a visually imagined abacus
in their mind that they could operate faster than a physical one.

I asked if his mental abacus still worked.  He said it did, and that he
expected it to continue to do so for the rest of his life.  To prove it he
asked me to pick any two three-digit numbers and he would see if he could
get the answer faster than I could on a digital calculator.  He won; he had
the answer before I had finished typing the numbers into the calculator.

He said his talent was not that unusual among bright Japanese, and that many
thousands of Japanese businessmen carry such mental abacuses with them at
all times.

So you see how powerful representational and behavioral learning can be in
the human mind.


Edward W. Porter
Porter & Associates
24 String Bridge S12
Exeter, NH 03833
(617) 494-1722
Fax (617) 494-1822
[EMAIL PROTECTED]



-Original Message-
From: Linas Vepstas [mailto:[EMAIL PROTECTED]
Sent: Wednesday, October 03, 2007 5:51 PM
To: agi@v2.listbox.com
Subject: Re: [agi] Religion-free technical content


On Wed, Oct 03, 2007 at 02:00:03PM -0400, Edward W. Porter wrote:
> From what you say below it would appear human-level AGI would not
> require recursive self improvement,
[...]
> A lot of people on this list seem to hang a lot on RSI, as they use
> it, implying it is necessary for human-level AGI.

Nah. A few people have suggested that an extremely-low IQ "internet worm"
that is capable of modifying its own code might be able to ratchet itself
up to human intelligence levels.  In-so-far as it "modifies its own code",
its RSI.

First, I don't tink such a thing is likely. Secondly, even if its likely,
one can implement an entirely equivalent thing that doesn't actually "self
modify" in this way, by using e.g. scheme or lisp,
or even with the proper stuructures, in C.

I think that, at this level, talking about "code that can modify itself"
is smoke-n-mirrors. Self-modifying code is just one of many things in a
programmer's kit bag, and there are plenty of equivalenet formulations
that don't actually require changing source code and
recompiling.

Put it this way: if I were an AGI, and I was prohibited from recompiling
my own program, I could still emulate a computer with pencil and paper,
and write programs for my pencil-n-paper computer. (I wouldn't use
pencil-n-paper, of course, I'd "do it in my head"). I might be able to
do this pencil-paper emulatation pretty danged fast (being AGI and all),
and then re-incorporate those results back into my own thinking.

In fact, I might choose to do all of my thinking on my pen-n-paper
emulator, and, since I was doing it all in my head anyway, I might not
bother to tell my creator that I was doing this. (which is not to say it
would be undetectable .. creator might notice that an inordinate
amount of cpu time is being used in one area, while other previously
active areas have gone dormant).

So a prohibition from modifying one's own code is not really much of a
prohibition at all.

--linas

p.s. The Indian mathematician Ramanujan seems to have managed to train a
set of neurons in his head to be a very fast symbolic multiplier/divider.
With this, he was able to see vast amounts (six volumes worth before
dying at age 26) of strange and interesting relationships between certain
equations that were otherwise quite opaque to other human beings. So,
"running an emulator in your head" is not impossible, even for humans;
although, admitedly, its extremely rare.


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?&;

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=49534399-4aa5a4

Re: [agi] Religion-free technical content

2007-10-03 Thread Linas Vepstas
On Wed, Oct 03, 2007 at 06:31:35PM -0400, Edward W. Porter wrote:
> 
> One of them once told me that in Japan it was common for high school boys
> who were interested in math, science, or business to go to abacus classes
> after school or on weekends.  He said once they fully mastered using
> physical abacuses, they were taught to create a visually imagined abacus
> in their mind that they could operate faster than a physical one.
[...]
> 
> He said his talent was not that unusual among bright Japanese, that many
> thousands of Japan businessmen  carry such mental abacuses with them at
> all times.

Marvellous!

So .. one can teach oneself to be an idiot-savant, in a way. Since
Ramanujan is a bit of a legendary hero in math circles, the notion
that one might be able to teach oneself this ability, rather than 
"being born with it", could trigger some folks to try it.  As it
seems a bit tedious ... it might be appealing only to those types
of folks who have the desire to memorize a million digits of Pi ...
I know just the person ... Plouffe ... 

--linas

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=49561332-ee1318


Re: [agi] Language and compression

2007-10-03 Thread Vladimir Nesov
Lossless compression can be far from what intelligence does because the
structure of categorization that intelligence performs on the world
probably doesn't correspond to its probabilistic structure.

As I see it, an intelligent system can't infer many universal laws that
will hold in the distant future and will help it in going through
life. Instead it prepares a 'toolkit' (long-term memory) that enables
it to quickly assemble such categorization 'on the spot', from
short-term memories and immediate perception. So, for each possible
short-term experience an intelligent system with a given long-term memory
can create a category (defined by a set of activated concepts). Each
possible category defines an equivalence class of possible
experiences. These categories don't necessarily come with probability
labels.


On 10/4/07, Russell Wallace <[EMAIL PROTECTED]> wrote:
> On 9/23/07, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> > I realize that a language model must encode both the meaning of a text 
> > string
> > and its representation.  This makes lossless compression an inappropriate 
> > test
> > for evaluating models of visual or auditory perception.  The tiny amount of
> > relevant information in a picture would be overwhelmed by incompressible 
> > pixel
> > noise.
>
> I don't think that's a showstopper. Clearly the entropy of video is
> higher, as a percentage of the uncompressed file size, than is the
> case for text, but tests are relative. Suppose the best lossless video
> compression achieved at a given time is only 10% (though I think we
> could do better than this). A program that improved this to 11% would
> still be measurably, objectively better than the competition.
>
> There is also the consideration that text compression is of no real
> value, because frankly text is already small enough that it doesn't
> need to be compressed. Better lossless image and video compression, on
> the other hand, apart from being of more potential relevance to AI
> would also be of value in its own right - there are a lot of
> situations where it would be better to not have to throw away any of
> the information.
>
> -
> This list is sponsored by AGIRI: http://www.agiri.org/email
> To unsubscribe or change your options, please go to:
> http://v2.listbox.com/member/?&;
>


-- 
Vladimir Nesov  mailto:[EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=49565102-0f698e


Re: [agi] Religion-free technical content

2007-10-03 Thread Linas Vepstas
On Tue, Oct 02, 2007 at 01:20:54PM -0400, Richard Loosemore wrote:
> When the first AGI is built, its first actions will be to make sure that 
> nobody is trying to build a dangerous, unfriendly AGI.  

Yes, OK, granted, self-preservation is a reasonable character trait.

> After that 
> point, the first friendliness of the first one will determine the 
> subsequent motivations of the entire population, because they will 
> monitor each other.

Um, why, exactly, are you assuming that the first one will be friendly?
The desire for self-preservation, by e.g. rooting out and exterminating
all (potentially unfriendly) competing AGI, would not be what I'd call
"friendly" behavior.

There's also a strong sense that it's winner-takes-all, or
first-one-takes-all, as the first one is strongly motivated,
by instinct for self-preservation, to make sure that no other
AGI comes to exist that could threaten, dominate or terminate it.

In fact, the one single winner, out of sheer loneliness and boredom,
might be reduced to running simulations a la Nick Bostrom's simulation
argument (!)

--linas

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=49566127-e1a092


Re: [agi] Language and compression

2007-10-03 Thread Matt Mahoney

--- Russell Wallace <[EMAIL PROTECTED]> wrote:

> On 9/23/07, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> > I realize that a language model must encode both the meaning of a text
> string
> > and its representation.  This makes lossless compression an inappropriate
> test
> > for evaluating models of visual or auditory perception.  The tiny amount
> of
> > relevant information in a picture would be overwhelmed by incompressible
> pixel
> > noise.
> 
> I don't think that's a showstopper. Clearly the entropy of video is
> higher, as a percentage of the uncompressed file size, than is the
> case for text, but tests are relative. Suppose the best lossless video
> compression achieved at a given time is only 10% (though I think we
> could do better than this). A program that improved this to 11% would
> still be measurably, objectively better than the competition.

It is true that lossless compression can be measured very precisely.  But
there are three problems.  First, the data set needs to be huge, 20 years of
video, or 1 PB of MPEG-2.  (It has to be large enough to train the model, the
reason I use 1 GB of text).  Second, video has only a few bits per second of
perceptible features out of 10^7 bits per second (and remember, MPEG-2 is
already compressed).

But the third I think is a showstopper.  Lossless video compression does not
model the visual perception system.  It models the physics of the video
source.  This is the difference between video and text compression.  The
source of text is the human brain.  The probability distribution of language
coming out through the mouth is the same as the distribution coming in through
the ears.  But there is no equivalent for vision.

> There is also the consideration that text compression is of no real
> value, because frankly text is already small enough that it doesn't
> need to be compressed. Better lossless image and video compression, on
> the other hand, apart from being of more potential relevance to AI
> would also be of value in its own right - there are a lot of
> situations where it would be better to not have to throw away any of
> the information.

My goal is not to compress text but to be able to compute its probability
distribution.  That problem is AI-hard.

Lossless video compression would not get far.  The brightness of a pixel
depends on the number of photons striking the corresponding CCD sensor.  The
randomness due to quantum mechanics is absolutely incompressible and makes up
a significant fraction of the raw data.
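
A toy demonstration of how noise caps lossless compression (illustrative
Python with zlib on synthetic data; the noise amplitude is an arbitrary
assumption):

import random
import zlib

# A smooth synthetic 'image' row compresses well, but once independent random
# noise is added to each sample, most of the data becomes incompressible.
width, height = 256, 256
smooth = bytes(((x + y) // 4) % 256 for y in range(height) for x in range(width))

random.seed(0)
noisy = bytes(min(255, b + random.randint(0, 31)) for b in smooth)

for name, data in [("smooth", smooth), ("noisy", noisy)]:
    ratio = len(zlib.compress(data, 9)) / len(data)
    print("%-6s compressed to %.0f%% of original size" % (name, 100 * ratio))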


-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=49570768-a5da72


Re: [agi] Religion-free technical content

2007-10-03 Thread J Storrs Hall, PhD
On Wednesday 03 October 2007 06:21:46 pm, Mike Tintner wrote:
> Yes there are bright people in AGI. But there's no one remotely close to the 
level, say, of von Neumann or Turing, right? And do you really think a 
revolution such as AGI is going to come about without that kind of 
revolutionary, creative thinker? Just by tweaking existing systems, and 
increasing computer power and complexity?  Has any intellectual revolution 
ever happened that way? (Josh?)

Yes, I think so. Can anybody name the von Neumanns in the hardware field in 
the past 2 decades? And yet look at the progress. I happen to think that 
there are plenty of smart people in AI and related fields, and LOTS of really 
smart people in computational neuroscience. Even without a Newton we are 
likely to get AGI on Kurzweil's schedule, e.g. 2029.

As I pointed out before, human intelligence got here from monkeys in a 
geological eyeblink, and perforce did it in small steps. So if enough people 
keep pushing in all directions, we'll get there (and learn a lot more 
besides). 

If we can take AGI interest now vis-a-vis that of a few years ago as a trend, 
there could be a major upsurge in the number of smart people looking into it 
in the next decade. So we could yet get our new Newton... I think we've 
already had one, Marvin Minsky. He's goddamn smart. People today don't 
realize just how far AI came from nothing up through about 1970 -- and it was 
real AI, what we now call AGI.

BTW, it's also worth pointing out that increasing computer power just flat 
makes the programming easier. Example: nearest-neighbor methods in 
high-dimensional spaces, a very useful technique but hard to program because 
of the limited and arcane data structures and search methods needed. Given 
enough CPU, forget the ball trees and rip thru the database linearly. Suddenly 
it's simple, more robust, and there are more things you can do.
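
For example, the brute-force version (a minimal sketch in Python/NumPy with
arbitrary sizes) really is only a few lines:

import numpy as np

# "Rip through the database linearly": brute-force nearest-neighbor search in
# a high-dimensional space, with none of the ball-tree bookkeeping.
rng = np.random.default_rng(0)
database = rng.standard_normal((100_000, 128))   # 100k points, 128 dimensions
query = rng.standard_normal(128)

dists = np.linalg.norm(database - query, axis=1) # one distance per stored point
nearest = int(np.argmin(dists))
print("nearest index:", nearest, "distance: %.3f" % dists[nearest])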

Josh

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=49568162-324646


Re: [agi] Religion-free technical content

2007-10-03 Thread Linas Vepstas
On Wed, Oct 03, 2007 at 12:20:10PM -0400, Richard Loosemore wrote:
> 
> Second, You mention the 3-body problem in Newtonian mechanics.  Although 
> I did not use it as such in the paper, this is my poster child of a 
> partial complex system.  I often cite the case of planetary system 
> dynamics as an example of a real physical system that is PARTIALLY 
> complex, because it is mostly governed by regular dynamics (which lets 
> us predict solar eclipse precisely), but also has various minor aspects 
> that are complex, such as Pluto's orbit, braiding effects in planetary 
> rings, and so on.

Richard, we had this conversation in private, but we can have it again
in public. J Storrs Hall is right. You can't actually say that the
3-body problem has "various minor aspects that are complex, such as
Pluto's orbit". That's just plain wrong. 

The phenomenon you are describing is known as the "small divisors
problem", and has been studied for several hundred years, with a
particularly thick corpus developed about 150 years ago, if I remember
rightly. The initial hopes of astronomers were that planetary motion
would be exactly as you describe it: that it's mostly regular dynamics,
with just some minor aspects, some minor corrections.

This hope was dashed. The minor corrections, or perturbations, have 
a denominator, in which appear ratios of periods of orbits. Some of
these denominators can get arbitrarily small, implying that the "small
correction" is in fact unboundedly large. This was discovered, I dunno,
several hundred years ago, and elucidated in the 19th century. Both
Poincare and Einstein made notable contributions. Modern research 
into chaos theory has shed new insight into "what's really going on"; 
it has *not*, however, made planetary motion only a "partially
complicated system".  It is quite fully wild and wooly.  

In a very deep sense, planetary motion is wildly and insanely
unpredictable.  Just because we can work out numerical simulations
for the next million years does not mean that the system is complex
in only minor ways; this is a fallacious deduction.

Note the probabilities of Pluto going bonkers are not comparable
to the sun tunneling into Bloomingdale's but are in fact much, much
higher. Pluto could fly off tomorrow, and the probability is big
enough that you have to actually account for it.

The problem with this whole email thread tends to be that many people 
are willing to agree with your conclusions, but dislike the manner in
which they are arrived at. Brushing off planetary motion, or the 
Turing-completeness of Conway's life, just basically points to
a lack of understanding of the basic principles to which you appeal.

> This is the reason why your original remarks deserved to be called 
> 'bullshit':  this kind of confusion would be forgivable in an 
> undergraduate essay, and would have been forgivable in our debate here, 
> except that it was used as a weapon in a contemptuous, sweeping 
> dismissal of my argument.

Actually, his original remarks were spot-on and quite correct.
I think that you are the one who is confused, and I also think
that this kind of name-calling and vulgarism was quite uncalled-for.

-- linas

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=49572096-cabddb


Re: [agi] Language and compression

2007-10-03 Thread Russell Wallace
On 10/4/07, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> Lossless video compression would not get far.  The brightness of a pixel
> depends on the number of photons striking the corresponding CCD sensor.  The
> randomness due to quantum mechanics is absolutely incompressible and makes up
> a significant fraction of the raw data.

Suppose 50% is the absolute max you can get - that's still worth
having, in cases where you don't want to throw away data.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=49574183-b26f21


Re: [agi] Language and compression

2007-10-03 Thread Matt Mahoney

--- Russell Wallace <[EMAIL PROTECTED]> wrote:

> On 10/4/07, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> > Lossless video compression would not get far.  The brightness of a pixel
> > depends on the number of photons striking the corresponding CCD sensor. 
> The
> > randomness due to quantum mechanics is absolutely incompressible and makes
> up
> > a significant fraction of the raw data.
> 
> Suppose 50% is the absolute max you can get - that's still worth
> having, in cases where you don't want to throw away data.

Yes, but it has nothing to do with AI.  You are modeling physics, a much
harder problem.



-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=49574875-aaa2e0


RE: [agi] Religion-free technical content & breaking the small hardware mindset

2007-10-03 Thread Edward W. Porter
Mike Tintner wrote in his Wed 10/3/2007 6:22 PM post:



"BUT THERE'S NO ONE REMOTELY CLOSE TO THE LEVEL, SAY, OF VON NEUMANN OR
TURING, RIGHT? AND DO YOU REALLY THINK A REVOLUTION SUCH AS AGI IS GOING
TO COME ABOUT WITHOUT THAT KIND OF REVOLUTIONARY, CREATIVE THINKER? JUST
BY TWEAKING EXISTING SYSTEMS, AND INCREASING COMPUTER POWER AND
COMPLEXITY?  HAS ANY INTELLECTUAL REVOLUTION EVER HAPPENED THAT WAY?? "



First, I would be very surprised if there are not quite a few people in
these fields with IQs roughly as high as those of Turing and von Neumann.  I
don't know exactly how many standard deviations they were above average, but
the IQ bell curve is not going down.  The evidence is it's going up.  Plus the
percent of the world's children who are receiving good educations is
increasing all the time.  So there should actually be more really
brilliant thinkers now than in any imagined past age of supposed mental
Titans.



Of course, as technical and scientific fields develop there is less bold
new fertile ground to be broken and fewer truly seminal ideas left to
develop.  My teenage son is into rock and roll.  He bemoans that there
isn't as much excitingly new music today as in the late sixties to
mid-seventies.  That’s because so much fertile conceptual musical ground
was broken in those years, and, thus, there are fewer vast really new and
yet satisfying expanses to explore.



The same is true in AI; the field is over fifty years old.  A lot of very
valuable thinking was done in each of those five decades.  People like
Turing, Shannon, Minsky, Quillian, Simon, Newell, and Schank, to mention a
very few, have done some really good foundational work.  So there is
much less room for revolutionary breakthroughs.  At this point I think
synthesis, large-scale experimentation, and tweaking are probably
required more than revolutionary breakthroughs.



In fact, I think some people actually have a pretty good idea about how to
achieve human level AGI, or at least something much closer to it.  I don't
want to sound like a one note piano, but take Novamente for example.
Read the longer articles Ben Goertzel has written about it carefully
several times and then try to open your mind to exactly what such a system
could do if running on massive hardware and trained sufficiently well to
have human level world knowledge.  There is a lot of fertile ground to be
plowed by getting systems of that type up and running on
world-knowledge-computing-capable hardware with the proper training – and
then seeing where it gets us.  My hunch is that with the right teams and
the right, yes, tweaking, it will get us pretty damn far.   And if it does
not get us to truly human level AI, it will at least provide us with
extremely powerful and valuable advances in computation, and -- more
importantly to the issue of this post -- give us a much more clear
understanding of the problems that have yet to be solved to actually get
us there.



I think we understand a lot about semantic meaning, generalized semantic
representation, non-literal matching and invariant representation, goal
systems and importance weighting, automatic learning, massively parallel
and context and goal sensitive probabilistic inference, and the focusing
of such inferences though mechanisms like intelligent parallel terraced
scans, task specific learned search parameter tuning, dynamic search
control feedback mechanisms, dynamic thresholding, accumulated prior
activation, and consciousness itself -- and many, many more pieces of this
fascinating, whirring, whizzing, flashing, throbbing computational puzzle.
Now is the time to start putting this stuff together in large systems and
see who can be the first team to get it all to work together well.



Deb Roy is a very bright guy at the MIT Media Lab who is doing some
really wild and crazy stuff.  After a lecture he gave to a relatively
small audience at MIT roughly two years ago, I went up to the lectern and
told him I didn't see any brick walls between us and human-level AI, that
is, I didn't see any part of the AI problem that we don't already have
reasonable approaches to.  I asked him if he knew of any.  He
answered with a smile, "I don't see any brick walls either."



The biggest brick wall is the small-hardware mindset that has been
absolutely necessary for decades to get anything actually accomplished on
the hardware of the day.  But it has caused people to close their minds to
the vast power of brain level hardware and the computational richness and
complexity it allows, and has caused them, instead, to look for magic
conceptual bullets that would allow them to achieve human-like AI on
hardware that has roughly a millionth the computational, representational,
and interconnect power of the human brain.  That’s like trying to model
New York City with a town of seven people.  This problem has been
compounded by the pressure for academic specialization and the pressure to
produce demonstrable results on the type of hardware most have had access
to in the past.


Re: [agi] Language and compression

2007-10-03 Thread Russell Wallace
On 10/4/07, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> Yes, but it has nothing to do with AI.  You are modeling physics, a much
> harder problem.

Well, I think compression in general doesn't have much to do with AI,
like I said before :) But I'm surprised you call physics modeling a
harder problem, given the success rate thus far in physics modeling
compared to AI.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=49577817-930906


Re: [agi] Language and compression

2007-10-03 Thread Russell Wallace
On 10/4/07, Vladimir Nesov <[EMAIL PROTECTED]> wrote:
> On 10/4/07, Russell Wallace <[EMAIL PROTECTED]> wrote:
> > Suppose 50% is the absolute max you can get - that's still worth
> > having, in cases where you don't want to throw away data.
>
> But why is it going to correlate with intelligence?

It's not.

[rest of post snipped and agreed with]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=49578618-c0e9eb


Re: [agi] Language and compression

2007-10-03 Thread Vladimir Nesov
On 10/4/07, Russell Wallace <[EMAIL PROTECTED]> wrote:
> On 10/4/07, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> > Lossless video compression would not get far.  The brightness of a pixel
> > depends on the number of photons striking the corresponding CCD sensor.  The
> > randomness due to quantum mechanics is absolutely incompressible and makes 
> > up
> > a significant fraction of the raw data.
>
> Suppose 50% is the absolute max you can get - that's still worth
> having, in cases where you don't want to throw away data.
>

But why is it going to correlate with intelligence? Intelligence is
just trying to make sense of this randomness, by ignoring most of it.
Things it notices are probably categorized in chunks which correlate
with their probabilities, but 'noise' from intelligence's POV can
probably still be compressed by a simple algorithm, which will provide
overall compression that makes the intelligent part irrelevant. So the
question is whether a particular test can be scored best by general
perception, and not by narrow AI. The same probably goes for text
compression: a clever (but not intelligent) statistics-gathering
algorithm on texts can probably do a much better job of compressing
than a human-like intelligence which just chunks this information
according to its meaning.

-- 
Vladimir Nesov  mailto:[EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=49576145-cd5ee6


Re: [agi] Religion-free technical content & breaking the small hardware mindset

2007-10-03 Thread Russell Wallace
On 10/4/07, Edward W. Porter <[EMAIL PROTECTED]> wrote:
> The biggest brick wall is the small-hardware mindset that has been
> absolutely necessary for decades to get anything actually accomplished on
> the hardware of the day.  But it has caused people to close their minds to
> the vast power of brain level hardware and the computational richness and
> complexity it allows, and has caused them, instead, to look for magic
> conceptual bullets that would allow them to achieve human-like AI on
> hardware that has roughly a millionth the computational, representational,
> and interconnect power of the human brain.  That's like trying to model New
> York City with a town of seven people.  This problem has been compounded by
> the pressure for academic specialization and the pressure to produce
> demonstratable results on the type of hardware most have had access to in
> the past.

Very well put!

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=49580536-91e968


Re: [agi] Language and compression

2007-10-03 Thread Matt Mahoney
--- Vladimir Nesov <[EMAIL PROTECTED]> wrote:

> The same probably goes for text
> compression: clever (but not intelligent) statistics-gathering
> algorithm on texts can probably do a much better job for compressing
> than human-like intelligence which just chunks this information
> according to its meaning.

That is only true because there is a 3 way tradeoff between speed, memory, and
compression ratio.  On a 1 GB input the best text compressors improve rapidly
as memory is increased to 2 GB, which is as far as I can test.  At this point,
simple algorithms like BWT and PPM do almost as well as more sophisticated
programs that mix lexical, syntactic and semantic constraints.  These programs
would use a lot more memory if they could, unlike the simpler models which
have most or all of the memory they need.  On smaller input, the memory
pressure is reduced and the simpler algorithms can't compete.

And text is the only data type with this property.  Images, audio, executable
code, and seismic data can all be compressed with very little memory.
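
A toy illustration of that tradeoff (a hypothetical Python sketch on a
repetitive made-up string, not one of the benchmark programs): raising the
context order of a simple character model improves the bits-per-character
estimate, but the number of contexts it has to store grows quickly.

from collections import defaultdict
from math import log2

# Higher-order context models predict text better but must store more contexts.
text = "the quick brown fox jumps over the lazy dog " * 200

def order_k_bits(text, k):
    counts = defaultdict(lambda: defaultdict(int))
    bits = 0.0
    for i, ch in enumerate(text):
        ctx = text[max(0, i - k):i]
        seen = counts[ctx]
        total = sum(seen.values())
        p = (seen[ch] + 1) / (total + 256)   # Laplace-smoothed byte estimate
        bits += -log2(p)
        seen[ch] += 1
    return bits / len(text), len(counts)

for k in (0, 1, 2, 3):
    bpc, contexts = order_k_bits(text, k)
    print("order %d: %.2f bits/char, %d stored contexts" % (k, bpc, contexts))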


-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=49580878-3c7219


Re: [agi] Language and compression

2007-10-03 Thread Matt Mahoney
--- Russell Wallace <[EMAIL PROTECTED]> wrote:

> On 10/4/07, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> > Yes, but it has nothing to do with AI.  You are modeling physics, a much
> > harder problem.
> 
> Well, I think compression in general doesn't have much to do with AI,
> like I said before :) But I'm surprised you call physics modeling a
> harder problem, given the success rate thus far in physics modeling
> compared to AI.

Lossless compression of satellite photos of the Earth is equivalent to weather
forecasting, but with less complete information.

Lossless compression of the video appearing on my monitor as it displays
encrypted data is equivalent to breaking the encryption.

Lossless compression of a movie requires modeling both the brains and the
bodies of the people in it.  AGI only requires modeling the brains.

Lossless compression in general is not computable.  AGI is, because we all
have a computer that implements it.


-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=49582437-b5ff7e


Re: [agi] breaking the small hardware mindset

2007-10-03 Thread Mike Tintner
Edward: The biggest brick wall is the small-hardware mindset that has
been absolutely necessary for decades to get anything actually accomplished on
the hardware of the day

Completely disagree. It's that purely numerical mindset about small/big
hardware that I see as so widespread and that shows merely intelligent rather
than creative thinking.  IQ, which you mention, is about intelligence, not
creativity. It's narrow AI as opposed to AGI.

Somebody can no doubt give me the figures here - worms and bees and v. simple 
animals are truly adaptive despite having extremely small brains. (How many 
cells/ neurons ?)

I disagree also re how much has been done.  I don't think AGI - correct me -
has solved a single creative problem: e.g. creativity - unprogrammed
adaptivity - drawing analogies - visual object recognition - NLP - concepts -
creating an emotional system - general learning - embodied/grounded knowledge
- visual/sensory thinking - every dimension, in short, of "imagination". (Yes,
vast creativity has gone into narrow AI, but that's different.)  If you don't
believe it takes major creativity (or "knock-out ideas", pace Voss), you don't
solve creative problems.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=49583980-0ee313

Re: [agi] Language and compression

2007-10-03 Thread Russell Wallace
On 10/4/07, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> And text is the only data type with this property.  Images, audio, executable
> code, and seismic data can all be compressed with very little memory.

How sure are we of that? Of course all those things _can_ be
compressed with very little memory - so can text. In the case of text,
there are more memory-intensive algorithms that do a better job. Maybe
there are such algorithms for the other data types, that we just
haven't found yet?

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=49584843-ddafa8


Re: [agi] Language and compression

2007-10-03 Thread Matt Mahoney
--- Russell Wallace <[EMAIL PROTECTED]> wrote:

> On 10/4/07, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> > And text is the only data type with this property.  Images, audio,
> executable
> > code, and seismic data can all be compressed with very little memory.
> 
> How sure are we of that? Of course all those things _can_ be
> compressed with very little memory - so can text. In the case of text,
> there are more memory-intensive algorithms that do a better job. Maybe
> there are such algorithms for the other data types, that we just
> haven't found yet?

I mean with respect to existing algorithms.  When you plot compression ratio
vs. memory required by various programs the curve is steep for text but not
other data types.

For lossless compression of images and audio the best compressors use fairly
simple predictors based on neighboring samples.  More sophisticated algorithms
would squeeze out very little compared to all the incompressible noise, so
nobody bothers.  All the interesting research is in lossy compression.
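
A sketch of what such a neighboring-sample predictor looks like (illustrative
Python on a synthetic image; real lossless image codecs use similar but more
carefully tuned predictors and entropy coders):

import zlib

# Predict each pixel from its left and upper neighbors and store only the
# residual, which compresses far better than the raw pixels.
W, H = 256, 256
img = [[(x * x // 97 + y * 3) % 256 for x in range(W)] for y in range(H)]

def residuals(img):
    out = bytearray()
    for y in range(H):
        for x in range(W):
            left = img[y][x - 1] if x else 0
            up = img[y - 1][x] if y else 0
            pred = (left + up) // 2                  # simple average predictor
            out.append((img[y][x] - pred) % 256)     # wrap residual into a byte
    return bytes(out)

raw = bytes(p for row in img for p in row)
for name, data in [("raw pixels", raw), ("residuals", residuals(img))]:
    print("%-10s -> %5d bytes after zlib" % (name, len(zlib.compress(data, 9))))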



-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=49588843-c56881


RE: [agi] breaking the small hardware mindset

2007-10-03 Thread Edward W. Porter
Mike Tintner said in his 10/3/2007 9:38 PM post:



I DON'T THINK AGI - CORRECT ME - HAS SOLVED A SINGLE CREATIVE PROBLEM -
E.G. CREATIVITY - UNPROGRAMMED ADAPTIVITY - DRAWING ANALOGIES - VISUAL
OBJECT RECOGNITION - NLP - CONCEPTS -  CREATING AN EMOTIONAL SYSTEM -
GENERAL LEARNING - EMBODIED/ GROUNDED KNOWLEDGE - VISUAL/SENSORY
THINKING.- EVERY DIMENSION IN SHORT OF "IMAGINATION".



A lot of good thinking has gone into how to attack each of the problems
you listed above.  I am quite sure that if I spent less than a week doing
Google research on each such problem I could find at least twenty very
good articles on how to attack each of them.  Yes, most of the approaches
don't work very well yet, but they don't have the benefit of sufficiently
large integrated systems.



In AI more is more.  More knowledge provides more constraint, which leads
to faster and better solutions.  More knowledge provides more
context-specific probabilities and models.  World knowledge helps solve the
problem of common sense.  Massive sensory and emotional labeling provides
grounding.  Massive associations provide meaning and thus appropriate
implication.  More computational power allows more alternatives to be
explored.  Moore is more.



In my mind the question is not whether or not each of these problems can
be solved; it is how much time, hardware, and tweaking will be required to
perform them at a human level.  For example, having such a large system
learn how to run itself automatically is non-trivial because the size of
the problem space is very large. To get it all to work together well
automatically might require some significant conceptual breakthroughs; it
will almost certainly require some minor ones.  We won't know until we
try.







To give you just one example of some of the tremendously creative work
that has been done on one of the allegedly unsolved problems described
above, read Doug Hofstadter's work on Copycat to get a vision of how one
elegant system solves the problem of analogy in a clever toy domain in a
surprisingly creative way.  That basic approach, described at a very broad
level, could be mapped into a Novamente-like machine to draw analogies
between virtually any types of patterns that share similarities at some
level which seems worthy of note to the system in the current context.


Edward W. Porter
Porter & Associates
24 String Bridge S12
Exeter, NH 03833
(617) 494-1722
Fax (617) 494-1822
[EMAIL PROTECTED]



-Original Message-
From: Mike Tintner [mailto:[EMAIL PROTECTED]
Sent: Wednesday, October 03, 2007 9:38 PM
To: agi@v2.listbox.com
Subject: Re: [agi] breaking the small hardware mindset


Edward:The biggest brick wall is the small-hardware mindset that has been
absolutely necessary for decades to get anything actually accomplished on
the hardware of the day

Completely disagree. It's that purely numerical mindset about small/big
hardware that I see as so widespread and that shows merely intelligent
rather than creative thinking.  IQ which you mention is about intelligence
not creativity. It's narrow AI as opposed to AGI.

Somebody can no doubt give me the figures here - worms and bees and v.
simple animals are truly adaptive despite having extremely small brains.
(How many cells/ neurons ?)

I disagree also re how much has been done.  I don't think AGI - correct me
- has solved a single creative problem - e.g. creativity - unprogrammed
adaptivity - drawing analogies - visual object recognition - NLP -
concepts -  creating an emotional system - general learning - embodied/
grounded knowledge - visual/sensory thinking.- every dimension in short of
"imagination". (Yes, vast creativity has gone into narrow AI, but that's
different).  If you don't believe it takes major creativity (or "knock-out
ideas" pace Voss) , you don't solve creative problems.
  _

This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?
 &

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=49608086-0b0a58

Re: [agi] Religion-free technical content

2007-10-03 Thread Mike Dougherty
On 10/3/07, Edward W. Porter <[EMAIL PROTECTED]> wrote:
> I think your notion that post-grads with powerful machines would only
> operate in the space of ideas that don't work is unfair.

Yeah, I can agree - it was harsh.  My real intention was to suggest
that NOT having a bigger computer is no excuse for not yet having a
design that works.  IF you find a design that works, the bigger
computer will be the inevitable result.

> Your last paragraph actually seems to make an argument for the value of
> clock cycles because it implies general intelligences will come through
> iterations.  More opps/sec enable iterations to be made faster.

I also believe that general intelligence will require a great deal of
cooperative effort.  The frameworks discussion (Richard, et al.) could
provide positive pressure toward that end.  I feel we have a great
deal of communications development to do in order to even begin to express
the essential character of the disparate approaches to the problem,
let alone be able to collaborate on anything but the most basic ideas.
I don't have a solution (obviously) but I have a vague idea of the type
of problem.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=49620438-6f8601