DEREK ZAHN’S Thu 10/18/2007 11:45 AM POST STRUCK ME AS RAISING SOME
PARTICULARLY IMPORTANT QUESTIONS.  MY RESPONSES ARE IN ALL-CAPS.



> 1. What is the single biggest technical gap between current AI and AGI?

I think hardware is a limitation because it biases our thinking to focus
on simplistic models of intelligence.   However, even if we had more
computational power at our disposal we do not yet know what to do with it,
and so the biggest gap is conceptual rather than technical.



I THINK SOME PEOPLE, SUCH AS THOSE AT NOVAMENTE, HAVE SOME PRETTY GOOD
IDEAS OF IMPORTANT THINGS TO TRY.


In particular, I become more and more skeptical that the effort to produce
concise theories of things like knowledge representation is likely to
succeed.  Frames, is-a relations, logical inference on atomic tokens, and
so on, are efforts to make intelligent behavior comprehensible in
concisely describable ways, but they seem to only be crude approximations
to the "reality" of intelligent behavior, which seem less and less likely
to have formulations that are comfortably within our human ability to
reason about effectively.  As one example, consider the study in cognitive
science of the theory of categories -- from the "necessary and sufficient
conditions" classical view to the more modern competing views of
"prototypes" vs "exemplars".  All of these are nice simple descriptions
but as so often happens it seems that the effort to boil down the
phenomena to nice simple ideas we can work with in our tiny brains
actually boils off most of the important stuff.

The challenge is for us to come up with ways to think about or at least
work with (and somehow reproduce or invent!) mechanisms that appear not to
be reducible to convenient theories.  I expect that our ways of thinking
about these things will evolve as the systems we build operate on more and
more data.  As Novamente's atom table grows from thousands to millions and
eventually billions of rows; as cortex simulations become more and more
detailed and studyable; as we start to grapple with semantic nets
containing many millions of nodes -- our understanding of the dynamics of
such systems will increase.  Eventually we will become comfortable with,
and more able to build, systems whose desired behaviors cannot even be
specified in a simple or rigorous way.



THE ABOVE TWO PARAGRAPHS ARE VERY INTERESTING.  THEY ARE WHAT MOTIVATED ME
TO WRITE THIS RESPONSE.
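THE PROTOTYPE-VS-EXEMPLAR CONTRAST DEREK MENTIONS CAN BE MADE CONCRETE
WITH A TOY NUMERICAL SKETCH.  THE CATEGORIES, FEATURES, AND NUMBERS BELOW
ARE INVENTED PURELY FOR ILLUSTRATION; NO REAL COGNITIVE MODEL IS THIS
SIMPLE:

```python
import math

def dist(a, b):
    # Euclidean distance between two feature vectors
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def prototype_classify(item, categories):
    # Prototype view: each category is summarized by the mean of its
    # examples; classify by the nearest prototype.
    protos = {
        name: [sum(col) / len(col) for col in zip(*examples)]
        for name, examples in categories.items()
    }
    return min(protos, key=lambda name: dist(item, protos[name]))

def exemplar_classify(item, categories):
    # Exemplar view: no summary is stored; classify by the nearest
    # individual remembered example.
    return min(
        categories,
        key=lambda name: min(dist(item, ex) for ex in categories[name]),
    )

# Two invented categories, each example a pair of numeric features.
cats = {
    "bird": [(0.9, 0.8), (0.8, 0.9), (0.1, 0.9)],  # includes an outlier
    "fish": [(0.1, 0.1), (0.2, 0.2)],
}
novel = (0.1, 0.55)
print(prototype_classify(novel, cats))  # 'fish'
print(exemplar_classify(novel, cats))   # 'bird'
```

ON THIS BORDERLINE ITEM THE TWO VIEWS DISAGREE, AND EXACTLY WHAT THE
SUMMARY THROWS AWAY (HERE, THE OUTLYING EXEMPLAR) IS THE KIND OF THING
THAT GETS BOILED OFF BY THE NICE SIMPLE DESCRIPTIONS.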



YOU ARE PROBABLY CORRECT THAT THE REPRESENTATIONS NEEDED FOR HUMAN LEVEL
AGI WILL BE COMPLEX AND NOT EASILY REDUCED TO THINGS OUR BRAINS CAN EASILY
DEAL WITH.  MUCH OF THIS PROBLEM COMES FROM THE FACT THAT THE EXPERIENCE
OF EXTERNAL REALITY THE BRAIN IS TRYING TO MODEL IS ITSELF VERY COMPLEX
AND NOT EASILY REDUCIBLE INTO A SIMPLE FRAMEWORK.   AS IN NOVAMENTE,
COMPLEX THINGS WILL BE REPRESENTED BY COMPLEX NETS, AND WHICH ASPECTS OF
THAT COMPLEXITY ARE OPERATIONALLY RELEVANT AT A GIVEN TIME MUST BE ABLE TO
CHANGE RAPIDLY IN A CONTEXT- AND TASK-APPROPRIATE MANNER, WHICH ADDS
FURTHER COMPLEXITY.
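TO MAKE THE IDEA OF CONTEXT-DEPENDENT RELEVANCE CONCRETE, HERE IS A
MINIMAL SKETCH.  EVERY NODE, LINK, AND CONTEXT NAME IS HYPOTHETICAL, AND
THIS IS FAR CRUDER THAN WHATEVER MECHANISM A SYSTEM LIKE NOVAMENTE
ACTUALLY USES:

```python
# Toy net: a concept's links are tagged with the contexts in which
# they matter.  (All names invented for illustration.)
concept_links = {
    "piano": {
        "makes-music": {"concert", "practice"},
        "is-heavy":    {"moving-house"},
        "costs-a-lot": {"shopping", "moving-house"},
        "has-keys":    {"concert", "practice", "tuning"},
    }
}

def operative_links(concept, active_contexts):
    # Only links tagged with at least one currently active context are
    # treated as operative; the rest of the net stays dormant.
    return sorted(
        link for link, contexts in concept_links[concept].items()
        if contexts & active_contexts
    )

print(operative_links("piano", {"moving-house"}))  # ['costs-a-lot', 'is-heavy']
print(operative_links("piano", {"concert"}))       # ['has-keys', 'makes-music']
```

THE SAME COMPLEX NODE PRESENTS COMPLETELY DIFFERENT OPERATIVE STRUCTURE
DEPENDING ON THE TASK AT HAND, WHICH IS THE EXTRA LAYER OF COMPLEXITY
REFERRED TO ABOVE.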



BUT IT IS NOT CLEAR TO ME THAT THIS MEANS COMING UP WITH WAYS OF DEALING
WITH SUCH COMPLEXITY IS CURRENTLY DECADES BEYOND US.  IF THERE WERE
SIGNIFICANT FUNDING FOR EXPERIMENTING WITH THE TYPES OF LARGE COMPLEX
REPRESENTATIONS YOU TALK ABOUT ABOVE, UNDERSTANDING THE DYNAMICS OF SUCH
SYSTEMS MIGHT COME MUCH MORE QUICKLY.



REMEMBER, THE GOAL IS TO HAVE THE SYSTEM LEARN MOST OF THE COMPLEXITY
NEEDED, PARTICULARLY IN TERMS OF REPRESENTATION AND BEHAVIORS, BUT ALSO IN
TERMS OF PARAMETER SETTING AND, TO SOME EXTENT, IN TERMS OF ALGORITHMS.
SO THE COMPLEXITY WE HAVE TO MASTER TO BUILD SUCH A HUMAN-LEVEL
INTELLIGENCE IS MUCH LESS THAN THE COMPLEXITY OF A HUMAN MIND, OR THE
COMPLEXITY OF THE REPRESENTATION AND OPERATION OF THE MACHINE ITSELF.



Or, perhaps, theoretical breakthroughs will occur making it possible to
describe intelligence and its associated phenomena in simple scientific
language.


I DOUBT IT (AS I PRESUME YOU DO).  THE BASIC ARCHITECTURE OF THE SYSTEM,
IN TERMS OF WHAT IS PRE-PROGRAMMED INTO IT, MIGHT BE RELATIVELY SIMPLE.
MANY OF ITS BASIC OPERATIONS MIGHT BE RELATIVELY SIMPLE, IT MAY BE ABLE TO
COMMUNICATE THE THINGS IT IS THINKING TO US AS WELL AS A HUMAN COULD, AND
WE MIGHT DEVELOP SOME VERY USEFUL SIMPLIFICATIONS AND GENERALIZATIONS.
BUT WORLD KNOWLEDGE IS GOING TO BE A TANGLED MESS OF INTERCONNECTIONS --
MILLIONS OR BILLIONS OF THEM -- SOME OF WHICH MAY HAVE LITTLE CLEARLY
DEFINABLE MEANING TO US HUMANS.  THE DYNAMIC STATE INSIDE A HUMAN-LEVEL
AGI, OVER EVEN A FEW SECONDS, WILL BE MORE COMPLEX STILL.


Because neither of these things can be done at present, we can barely even
talk to each other about things like goals, semantics, grounding,
intelligence, and so forth... the process of taking these unknown and
perhaps inherently complex things and compressing them into simple
language symbols throws out too much information to even effectively
communicate what little we do understand.



I THINK WE CAN CURRENTLY MEANINGFULLY “TALK TO EACH OTHER ABOUT THINGS
LIKE GOALS, SEMANTICS, GROUNDING, INTELLIGENCE...”  YES, OUR UNDERSTANDING
OF THEM WILL BE MUCH BETTER WITH THE KIND OF UNDERSTANDING YOU SO PROPERLY
ADVOCATE ABOVE, THE KIND YOU SUGGEST WE MAY GET FROM EXPERIENCE WITH LARGE
SYSTEMS.  BUT WE CAN SAY MEANINGFUL THINGS ABOUT THESE CONCEPTS NOW, AND
SUCH MEANINGFUL THOUGHTS ARE IMPORTANT FOR GENERATING THE LARGE SYSTEMS WE
NEED TO GET THE UNDERSTANDING YOU TALK ABOUT.  EXPERIMENTING WITH SUCH
CONCEPTS IN SUCH LARGE SYSTEMS SHOULD BE AN IMPORTANT PART OF THAT
LEARNING EXPERIENCE.


Either way, it will take decades if we're lucky.  Moving from mouse-level
hardware to monkey-level hardware in the next couple decades will be
helpful, just like our views on machine intelligence have expanded beyond
those of our forebears looking at the first digital computers and
wondering about how they might be made to think.


IT MAY WELL TAKE A DECADE, BUT I DOUBT IT WILL TAKE TWO.   THAT IS, IF
THERE IS ANY FUNDING THAT IS AT ALL COMMENSURATE WITH THE EXTREME
IMPORTANCE OF THE FIELD.  IT IS ALMOST CERTAIN THAT HUMAN-BRAIN-LEVEL
HARDWARE COULD BE PROFITABLY SOLD FOR BETWEEN SEVERAL HUNDRED THOUSAND AND
SEVERAL MILLION DOLLARS IN 10 YEARS, AND IT IS HIGHLY LIKELY THAT SYSTEMS
1/10 THAT SIZE WOULD BE VERY USEFUL INTELLIGENCES FOR MANY TASKS AND A
GOOD TEST BED FOR THE TYPE OF LEARNING (AND PRESUMABLY EXPERIMENTING) YOU
SEEM TO BE SUGGESTING.



YOU ARE VERY RIGHT IN THINKING OUR VIEWS ON MACHINE INTELLIGENCE HAVE
MOVED WAY BEYOND THOSE FIRST THOUGHTS ABOUT HOW TO MAKE COMPUTERS THINK.
YOU ARE ALSO CORRECT IN THINKING OUR UNDERSTANDING STILL HAS A WAY TO GO.






Edward W. Porter
Porter & Associates
24 String Bridge S12
Exeter, NH 03833
(617) 494-1722
Fax (617) 494-1822
[EMAIL PROTECTED]



-----Original Message-----
From: Derek Zahn [mailto:[EMAIL PROTECTED]
Sent: Thursday, October 18, 2007 11:45 AM
To: agi@v2.listbox.com
Subject: RE: [agi] Poll



-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=55050483-f80d2c
