Ed Porter wrote:
Richard,

Despite your statement to the contrary --- despite your "FURY" --- I did get
your point. Not everybody besides Richard Loosemore is stupid.
I understand there have been people making bold promises in AI for over 40
years, and most of them have been based on a gross underestimation of the
problem.  For example, in 1969 Minsky was claiming that with then-current
minicomputers AI would surpass human intelligence within several years.
But in 1970, after my year-long special study during my senior year at
Harvard, in which I read a long reading list Minsky gave me, I came to the
conclusion that Minsky's projection seemed ridiculous.  I believed human-level
thinking required deep experiential knowledge (now called grounding), and I
seriously doubted anybody could make a human-level AI without hardware
capable of storing many terabytes of memory and of accessing significant
portions of that memory multiple times a second -- a level of hardware that
is still not available, and that has only recently been approximated at a
cost of many tens of millions of dollars.

But on what *basis* did you come to that conclusion? Your basis was a hunch, perhaps? A shot in the dark?

Did you ever calculate the approximate number of concepts in a human brain? The number of experience events in a lifetime? The rate of chunking? Any numbers like that?


To date, I am unaware of anyone approaching AGI with the type of hardware
that I have felt for much of the last 38 years would be necessary for human
level AGI.  So don't accuse me of being one of those who has been shown to
have been making false AI promises, because the hardware my predictions have
been based on has never yet been available to AI researchers.
Since 1970 I have thought that if multiple teams had really powerful
hardware of the type you can buy now for several million dollars (but which
would have cost 20 to 30 times as much just a decade ago), then although
this hardware would not be capable of human-level performance, it would
enable very rapid progress in AI within ten or twenty years.
But that was before I became aware of the advances in brain science and AI
that have been made in the last decade or two, advances that have radically
improved and clarified my understanding of the type of computational
architectures needed for various mind functions.
Now, we actually have good ideas about how to address almost all of the
known functions of the mind that we would want an AGI to have.
For people like Ben, Joscha Bach, Sam Adams, myself, and multiple others, IT
IS NOT THAT WE --- as you claim --- "JUST HAVE THIS BELIEF THAT IT WILL
WORK."  --- We have much more.  WE HAVE REASONABLY GOOD EXPLANATIONS FOR HOW
TO PERFORM ALMOST ALL OF THE MENTAL FUNCTIONS OF THE HUMAN MIND THAT WE WANT
AGIS TO HAVE.

I notice that Ben has recently been dropping hints that he will soon be able to show us concrete reasons to believe that he is working on more than just belief.

We will see.



It's not as if these explanations are totally nailed down, at least in my
mind.  (They may be much better nailed down in Ben's, Joscha Bach's, and Sam
Adams's.)  But I have a high-level idea of how each of them could be made
to work.  This is relatively new, at least for me.  They are complex,
multi-level arguments, so they cannot be conveyed briefly.  Ben has probably
done a better job of putting his ideas in writing, and his recent post in
this thread promises that he will shortly provide them in much more detail.
One example of the new reasons for confidence that we are learning how to
design AGI is the amazing success of the Serre-Poggio system described at
http://cbcl.mit.edu/projects/cbcl/publications/ps/MIT-CSAIL-TR-2006-028.pdf.
This paper shows the tremendous advances that have been made in automatically
learning hierarchical memory, and the power such memory provides in machine
perception.  This is not belief.  This is a largely automatically learned
system that works amazingly well for the rapid feed-forward part of visual
object recognition.

You are easily impressed by things that look glamorous. Are you aware of the conceptual gulf that lies between the feedforward part of visual object recognition and the processes involved in learning AND USING structured hierarchies of abstract, domain-independent concepts? Do you think you could give me a quick summary of why the Serre-Poggio system is a believable advance in that much more important issue?

Sigh.


Another reason for optimism is Hinton's new work, described in papers such as
"Modeling image patches with a directed hierarchy of Markov random fields"
by Simon Osindero and Geoffrey Hinton, and in the Google Tech Talk at
http://www.youtube.com/watch?v=AyzOUbkUf3M.  Hinton has shown how to
automatically learn hierarchical neural nets that have 2000 hidden nodes in
one layer, 500 in the next, and 1000 in the top layer.  In the past it would
have been virtually impossible to train a neural net with so many hidden
nodes, but Hinton's new method allows rapid, largely automatic training of
such large networks, enabling, in the example shown, surprisingly good
handwritten numeral recognition.
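For readers who have not followed this work, the core trick --- greedy
layer-wise pre-training with Restricted Boltzmann Machines --- can be
sketched in a few lines.  This is my own toy illustration, not Hinton's
code: layer sizes, data, and hyperparameters are stand-ins, and biases are
omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_rbm(data, n_hidden, epochs=5, lr=0.1):
    """One-step contrastive divergence (CD-1) on binary units."""
    n_visible = data.shape[1]
    W = rng.normal(0, 0.01, (n_visible, n_hidden))
    for _ in range(epochs):
        v0 = data
        ph0 = sigmoid(v0 @ W)                     # hidden probabilities
        h0 = (rng.random(ph0.shape) < ph0) * 1.0  # sample hidden states
        v1 = sigmoid(h0 @ W.T)                    # reconstruct visibles
        ph1 = sigmoid(v1 @ W)
        # CD-1 update: data correlations minus reconstruction correlations
        W += lr * (v0.T @ ph0 - v1.T @ ph1) / len(data)
    return W

# Stack RBMs greedily: each layer is trained on the hidden activities
# produced by the layer below it.
data = (rng.random((100, 64)) < 0.3) * 1.0
sizes = [32, 16]   # toy stand-ins for the 2000/500/1000 layers
layers, x = [], data
for n in sizes:
    W = train_rbm(x, n)
    layers.append(W)
    x = sigmoid(x @ W)   # propagate up; train the next layer on this
print([W.shape for W in layers])
```

The point is that each layer is trained locally and unsupervised, which is
what makes networks of this depth and width tractable at all.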

Don't have access to that paper right now, so can you tell me: this goes beyond mere supervised learning, right? And it solves the problem of representing multiple tokens? And also the problem of encoding structured knowledge? It doesn't represent structure with hard-coded templates, yes? The technique scales well to full-scale thinking systems in which the domain is not restricted to, say, handwriting, but includes everything the system could ever want to recognize, yes? Oh, and in case I forget: the images are not preselected, but are naturally occurring in context, so the system can recognize a letter "A" in a scene in which two people lean against one another and hold something horizontal between them at waist height?

I assume all the answers to the above were Yes!, so it sounds like a great leap forward: I'll read the full paper tomorrow.

Pity that Hinton chose a title that implied all the answers were 'no'. Bit of an oversight on his part, but never mind.



Yet another example of the power of automatic learning is the impressive
success of Hecht-Nielsen's confabulation system in generating a second
sentence that reasonably follows from the first, as if it had been written
by a human intelligence, without any attempt to teach it the rules of
grammar or any explicit semantic knowledge.  The system learns from text
corpora.
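The basic confabulation step can be sketched very roughly: learn pairwise
co-occurrence counts from a corpus, then pick the next word that maximizes
the product of its conditional probabilities given the context words.  The
tiny corpus, window scheme, and smoothing constant below are my own
stand-ins, not Hecht-Nielsen's.

```python
from collections import Counter, defaultdict
import math

corpus = [
    "the dog chased the cat".split(),
    "the cat climbed the tree".split(),
    "the dog barked at the cat".split(),
]

word_count = Counter()
pair_count = defaultdict(Counter)   # pair_count[c][w]: c seen before w

for sent in corpus:
    word_count.update(sent)
    for i, c in enumerate(sent):
        for w in sent[i + 1:]:
            pair_count[c][w] += 1

def confabulate_next(context):
    """Return the candidate w maximizing sum of log P(c | w) over context."""
    best, best_score = None, -math.inf
    for w in sorted(word_count):          # sorted for determinism
        score = sum(
            math.log((pair_count[c][w] + 1e-6) / word_count[w])
            for c in context
        )
        if score > best_score:
            best, best_score = w, score
    return best

print(confabulate_next(["the", "dog"]))   # picks "cat" on this toy corpus
```

Note that nothing grammatical or semantic is hand-coded here; everything
the completion "knows" comes from raw co-occurrence statistics.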

You may say this is narrow AI.  But it all has general applicability.  For
example, the type of hierarchical memory with max-pooling shown in Serre's
paper is an extremely powerful paradigm that addresses some of the most
difficult problems in AI, including robust non-literal matching.  Such
hierarchical memory can be modified to perform many tasks that many people
in AI still think there is no method for solving, such as complex
context-appropriate inferencing.  Hinton's paper shows that neural net
learning is suddenly much more powerful than it has been before.  And
Hecht-Nielsen's paper shows another powerful form of neural net-like
learning and computing that scales well.
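To make the max-pooling idea concrete, here is a toy sketch of my own
(nothing like Serre's actual scale or code): an S-layer does tolerant
template matching, and a C-layer keeps only the maximum response in each
neighborhood, which is what buys invariance to small shifts --- i.e.,
non-literal matching.

```python
import numpy as np

rng = np.random.default_rng(0)

def template_match(image, templates):
    """S-layer: response of each template at each valid position."""
    h, w = templates.shape[1:]
    H, W = image.shape
    out = np.zeros((len(templates), H - h + 1, W - w + 1))
    for k, t in enumerate(templates):
        for i in range(H - h + 1):
            for j in range(W - w + 1):
                patch = image[i:i + h, j:j + w]
                # Gaussian similarity: tolerant, non-literal matching
                out[k, i, j] = np.exp(-np.sum((patch - t) ** 2))
    return out

def max_pool(responses, size=2):
    """C-layer: keep only the strongest response in each local region,
    giving invariance to small shifts in position."""
    k, H, W = responses.shape
    out = np.zeros((k, H // size, W // size))
    for i in range(H // size):
        for j in range(W // size):
            out[:, i, j] = responses[
                :, i * size:(i + 1) * size, j * size:(j + 1) * size
            ].max(axis=(1, 2))
    return out

image = rng.random((8, 8))
templates = rng.random((4, 3, 3))
s1 = template_match(image, templates)   # shape (4, 6, 6)
c1 = max_pool(s1)                       # shape (4, 3, 3)
print(c1.shape)
```

Stacking several such S/C pairs, with templates at the upper levels learned
from the pooled outputs below, is the hierarchical memory being discussed.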

The convergence of these much more sophisticated software approaches with
the much more powerful hardware necessary to actually build minds that use
them is much more than just a belief.
Today, for $33K you can buy the system I talked about in the email which
started this thread.  It has 126 GB of RAM and performs roughly 160 million
random RAM accesses per second.  This is enough power to start building a
small toy AGI mind that could show limited generalized learning, perception,
inferencing, planning, attention focusing, and behavior selection, i.e.,
something like Ben's pet brains.  The $850K system would allow substantially
more sophisticated demonstrations of artificial minds to be created.
This combination of a much more sophisticated understanding of how to build
AGIs with much more powerful hardware is something new.  And much, much more
powerful hardware should arrive in about 6 years, when multi-level chips ---
with mesh-networked, massively multi-cored processors, 8 or more layers of
memory connected to the processors by many thousands of through-silicon
vias, and hundreds of high-speed channels to external memory and other such
multi-level chips --- will hopefully become routinely available.

Richard, a lot has changed since the '70s, '80s, '90s, and early '00s ---
and if you don't see it --- that's your problem.

Oh dear, Ed. I just shouldn't get into discussions with you. It's fun sometimes, but.....


Back to work.




Richard Loosemore




Ed Porter




-----Original Message-----
From: Richard Loosemore [mailto:[EMAIL PROTECTED]
Sent: Saturday, June 28, 2008 4:14 PM
To: agi@v2.listbox.com
Subject: Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE
IN AGI

Ed Porter wrote:
I do not claim the software architecture for AGI has been totally solved.
But I believe that enough good AGI approaches exist (and I think Novamente
is one) that when powerful hardware is available to more people, we will be
able to relatively quickly get systems up and running that demonstrate the
parts of the problems we have solved.  And that will provide valuable
insights and test beds for solving the parts of the problem that we have
not yet solved.

You are not getting my point. What you just said was EXACTLY what was said in 1970, 1971, 1972, 1973 ......2003, 2004, 2005, 2006, 2007 ......

And every time it was said, the same justification for the claim was given: "I just have this belief that it will work".

Plus ça change, plus c'est la même fubar.





With regard to your statement "the problem is understanding HOW TO DO IT"
---
WE DO UNDERSTAND HOW TO DO IT --- NOT ALL OF IT --- AND NOT HOW TO MAKE IT
ALL WORK TOGETHER WELL AUTOMATICALLY --- BUT --- GIVEN THE TYPE OF
HARDWARE
EXPECTED TO COST LESS THAN $3M IN 6 YEARS --- WE KNOW HOW TO BUILD MUCH OF
IT --- ENOUGH THAT WE COULD PROVIDE EXTREMELY VALUABLE COMPUTERS WITH OUR
CURRENT UNDERSTANDINGS.

You do *not* understand how to do it. But I have to say that statements like your paragraph above are actually very good for my health, because their humor content is right up there in the top ten, along with Eddie Izzard's Death Star Canteen sketch and Stephen Colbert at the 2006 White House Correspondents' Association Dinner.

So long as the general response to the complex systems problem is not "This could be a serious issue, let's put our heads together to investigate it", but "My gut feeling is that this is just not going to be a problem", or "Quit rocking the boat!", you can bet that nobody really wants to ask any questions about whether the approaches are correct, they just want to be left alone to get on with their approaches. History, I think, will have some interesting things to say about all this.

Good luck anyway.



Richard Loosemore


-------------------------------------------
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription:
http://www.listbox.com/member/?&;
Powered by Listbox: http://www.listbox.com








