Re: Re[10]: [agi] Funding AGI research

2007-11-30 Thread James Ratcliff
The overall architecture is what is needed, the glue to hold the modules 
together.

There has been a lot of talk about using narrow AI pieces to build a
complete AGI.  We will not be able to use many of those pieces directly in
the core AGI unless, as Ben says, they are modeled for the AGI architecture,
though we will use each of the narrow AI technologies inside the final AGI
product.

This overall architecture, such as Novamente (the best-defined one around
here), is what is needed, and it is nearly impossible to show that any of
these will work until they do.  Prototypes like this are very hard to create,
because without the core pieces the AGI is merely a cute toy.

James Ratcliff

Dennis Gorelik [EMAIL PROTECTED] wrote: Russell,

 The main piece of technology I reckon is required to make more general
 progress is a software framework, which would be useful for narrow AI
 but is only essential if you want to go beyond that.

What do you mean by software framework?
I don't even see how a software framework can be a bottleneck for
assembling AGI.


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?;



___
James Ratcliff - http://falazar.com
Looking for something...
   


Re: Re[10]: [agi] Funding AGI research

2007-11-28 Thread Benjamin Goertzel
 I must admit, I have never heard a cortical column described as
 containing 10^5 neurons.  The figure I have commonly seen is 10^2 neurons
 for a cortical column, although my understanding is that the actual number
 could be either less or more.  I guess the 10^5 figure would relate to
 so-called hypercolumns.

The term cortical column is vague

http://en.wikipedia.org/wiki/Cortical_column

There are minicolumns (around 100 neurons each) and hypercolumns
(around 100 minicolumns each).  Both are called columns.
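A quick arithmetic check of the figures above (round numbers; the real counts vary by species and cortical area) shows why the two usages of "column" differ by orders of magnitude:

```python
# Rough neuron counts for the two structures both called "columns".
# These round figures are approximations, not precise measurements.
NEURONS_PER_MINICOLUMN = 100        # ~10^2 neurons
MINICOLUMNS_PER_HYPERCOLUMN = 100   # ~10^2 minicolumns

neurons_per_hypercolumn = NEURONS_PER_MINICOLUMN * MINICOLUMNS_PER_HYPERCOLUMN
print(neurons_per_hypercolumn)  # 10000
```

So "column" can mean ~10^2 neurons (minicolumn) or ~10^4-10^5 neurons (hypercolumn), which accounts for the discrepancy in quoted figures.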

-- Ben G



Re[10]: [agi] Funding AGI research

2007-11-27 Thread Dennis Gorelik
Benjamin,

 Nearly any AGI component can be used within a narrow AI,

That proves my point [that AGI project can be successfully split
into smaller narrow AI subprojects], right?

 but, the problem is, it's usually a bunch easier to make narrow AI's
 using components that don't have any AGI value...

I agree that many narrow AI projects are not very useful for a future
AGI project.

Still, an AGI-oriented researcher can pick narrow AI projects
in such a way that:
1) The narrow AI project is considerably less complex than the full AGI
project.
2) The narrow AI project is useful by itself.
3) The narrow AI project is an important building block for the full
AGI project.

Would you agree that splitting a very complex and big project into
meaningful parts considerably improves the chances of success?


 Another way to go -- use existing narrow AIs as prototypes when
 building AGI.

That's right.
The problem is that there are not enough narrow AIs at this point to
assemble an AGI in any reasonable amount of time.
I consider anything longer than 3 years unreasonable [and almost
guaranteed to fail].


 I don't really accept any narrow-AI as a prototype for an AGI.

Ok.
How about a set of narrow AIs that covers different AGI functionality?
Would that be a good prototype?

In any case, narrow AI prototypes are better than no prototype,
right?


 I think there is loads of evidence that narrow-AI prowess does not imply
 AGI prowess,

All other things being equal, would you invest in a researcher who
successfully developed a narrow AI, or in one who did not?
:-)






Re[10]: [agi] Funding AGI research

2007-11-27 Thread Dennis Gorelik
Matt,

 --- Dennis Gorelik [EMAIL PROTECTED] wrote:
 Could you describe a piece of technology that simultaneously:
 - Is required for AGI.
 - Cannot be a required part of any useful narrow AI.

 A one million CPU cluster.

Are you claiming that the computational power of the human brain is
equivalent to a one-million-CPU cluster?

My feeling is that the human brain's computational power is about the same
as a modern PC's.

AGI software is the missing part of AGI, not hardware.



Re: Re[10]: [agi] Funding AGI research

2007-11-27 Thread Benjamin Goertzel
  Nearly any AGI component can be used within a narrow AI,

 That proves my point [that AGI project can be successfully split
 into smaller narrow AI subprojects], right?

Yes, but it's a largely irrelevant point, because building a narrow-AI
system in an AGI-compatible way is HARDER than building that same
narrow-AI component in a non-AGI-compatible way.

So, given the pressures of commerce and academia, people who are
motivated to build narrow AI for its own sake will almost never create
narrow-AI components that are useful for AGI.

And anyone who creates narrow-AI components with an AGI outlook
will have a large disadvantage in the competition to create optimal
narrow-AI systems given limited time and financial resources.

 Still, an AGI-oriented researcher can pick narrow AI projects
 in such a way that:
 1) The narrow AI project is considerably less complex than the full AGI
 project.
 2) The narrow AI project is useful by itself.
 3) The narrow AI project is an important building block for the full
 AGI project.

 Would you agree that splitting a very complex and big project into
 meaningful parts considerably improves the chances of success?

Yes, sure ... but demanding that these meaningful parts

-- be economically viable

and/or

-- beat competing, somewhat-similar components in competitions

dramatically DECREASES chances of success ...

That is the problem.

An AGI may be built out of narrow-AI components, but these narrow-AI
components must be architected for AGI integration, which is a lot of
extra work; and, considered as standalone narrow-AI components, they
may not outperform other similar narrow-AI components NOT intended
for AGI integration...

-- Ben G



Re: Re[10]: [agi] Funding AGI research

2007-11-27 Thread Matt Mahoney

--- Dennis Gorelik [EMAIL PROTECTED] wrote:

 Matt,
 
  --- Dennis Gorelik [EMAIL PROTECTED] wrote:
  Could you describe a piece of technology that simultaneously:
  - Is required for AGI.
  - Cannot be a required part of any useful narrow AI.
 
  A one million CPU cluster.
 
 Are you claiming that the computational power of the human brain is
 equivalent to a one-million-CPU cluster?
 
 My feeling is that the human brain's computational power is about the same
 as a modern PC's.
 
 AGI software is the missing part of AGI, not hardware.

We don't know that.  What we do know is that people have historically
underestimated the difficulty of AI since about 1950.  Our approach has always
been to design algorithms that push the limits of whatever hardware capacity
was available at the time.  At every point in history we seem to have the
hindsight to realize that past attempts failed for lack of computing power,
but not the foresight to realize when we are still in the same situation.  If
AGI is possible with one millionth of the computing power of the human brain,
then

1. Why didn't we evolve insect sized brains?
2. Why aren't insects as smart as we are?
3. Why aren't our computers as smart as insects?

With regard to 1, the human brain accounts for a large share (roughly 20%) of
our resting metabolism.  It uses more power than any other organ except the
muscles during exercise.

One of the arguments that AGI is possible on a PC is from information theory. 
Humans learn language from the equivalent of about 1 GB of training data (or
10^9 bits compressed).  Turing argued in 1950 that a learning algorithm
running on a computer with 10^9 bits of memory and educated like a child
should pass the imitation game.  Likewise, Landauer estimated human long term
memory capacity to be 10^9 bits.
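The arithmetic behind the ~10^9-bit figure can be sketched as follows.  The per-character figures are my assumptions for illustration: about one byte per character of plain text, and roughly one bit per character after compression, in line with Shannon's entropy estimates for English:

```python
# Sketch of the ~10^9-bit language-learning estimate (assumed figures).
chars_of_exposure = 10**9   # ~1 GB of text-equivalent language exposure
bits_per_char = 1.0         # ~1 bit/char compressed (Shannon-style estimate)

compressed_bits = chars_of_exposure * bits_per_char
print(f"{compressed_bits:.0e} bits")  # 1e+09 bits
```

This lines up with both Turing's 10^9-bit machine and Landauer's long-term-memory estimate quoted above.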

Yet a human brain has 10^11 neurons and 10^15 synapses.  Why?

And some of the Blue Brain research suggests it is even worse.  A mouse
cortical column of 10^5 neurons is about 10% connected, but the neurons are
arranged such that connections can be formed between any pair of neurons. 
Extending this idea to the human brain, with 10^6 columns of 10^5 neurons
each, each column should be modeled as a 10^5 by 10^5 sparse matrix, 10%
filled.  This model requires about 10^16 bits.
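The 10^16-bit figure follows directly from the model's parameters; a minimal sketch of the arithmetic, assuming one bit per potential connection (present/absent), as the sparse-matrix framing implies:

```python
# Reconstructing the ~10^16-bit estimate from the model in the text:
# 10^6 columns, each a 10^5 x 10^5 connection matrix.
columns = 10**6
neurons_per_column = 10**5

bits_per_column = neurons_per_column ** 2   # 10^10 potential connections
total_bits = columns * bits_per_column
print(f"{total_bits:.0e} bits")             # 1e+16 bits
```

An entropy coding of the 10% density would cut this to about H(0.1) ~ 0.47 bits per entry, but the estimate stays within an order of magnitude.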

Perhaps there are ways to optimize neural networks by taking advantage of the
reliability of digital hardware, but over the last few decades researchers
have not found any.  Approaches that reduce the number of neurons or synapses,
such as connectionist systems and various weighted graphs, just haven't scaled
well.  Yes, I know Novamente and NARS fall into this category.

For narrow AI applications we can usually find better algorithms than neural
networks, for example for arithmetic, deductive logic, or playing chess.  But
none of these other algorithms is so broadly applicable to so many different
domains, such as language, speech, vision, and robotics.

My work in text compression (an AI problem) is an attempt to answer this
question by measuring trends in intelligence (compression) as a function of
CPU and memory.  The best algorithms model mostly at the lexical level (the
level of a 1-year-old child), with only a crude model of semantics and no
syntax.  Memory is so tightly constrained (at 2 GB) that modeling at a higher
level is mostly pointless.  The slope of the compression surface in
speed/memory space is steep along the memory axis.
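As a toy illustration of the compression-as-modeling idea (zlib here is only a weak stand-in for the context-mixing compressors used in real benchmarks, such as PAQ): the better a program models the data, the smaller the compressed output.

```python
import zlib

# Repetitive text is highly predictable, so a model that captures the
# repetition compresses it to a small fraction of its raw size.
text = b"the cat sat on the mat. " * 200

compressed = zlib.compress(text, level=9)
ratio = len(compressed) / len(text)
print(f"{len(text)} -> {len(compressed)} bytes (ratio {ratio:.3f})")
```

A stronger language model (lexical, semantic, syntactic) would keep shrinking the output on natural text, which is exactly the trend the benchmark measures.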


-- Matt Mahoney, [EMAIL PROTECTED]
