Matt,

****
> "Why is evaluating partial progress toward human-level AGI so hard?"
> http://multiverseaccordingtoben.blogspot.com/2011/06/why-is-evaluating-partial-progress.html

I don't buy it.
****

I don't expect you personally to "buy", or even make a serious effort
to understand, anything differing from your prior views.... So I
mostly posted those links for any others who were interested but
hadn't seen them....  If this were just a conversation with you
personally, I wouldn't bother, since I realize your own subjective
beliefs and intuitions on these matters are extremely firmly set in
your mind.

****
I realize there is a cognitive synergy between
different components like language and vision, but that is not an
excuse for not testing. Synergy makes testing easier because improving
any component will improve the test scores of all components. For
example, a language model would improve the ability of an image
recognition system to score higher in a test matching different photos
of the same objects, by enabling it to recognize and understand
printed words in the images. Likewise, an image recognition system
would make more knowledge available to a language model.
****

In principle this is true -- once the components are mature enough, and so is
the framework for interconnecting them.

But if you have a system whose components will display effective
synergy ONLY once they are sufficiently developed and interconnected
within a sufficiently sophisticated framework -- THEN you have a
situation where, in the early stages of development, the benefits of
the synergy will be difficult to see...
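To make that concrete, here is a toy numeric sketch (entirely my own
illustration -- the function, the threshold and the gain are made up,
and have nothing to do with OpenCog's actual design) of how a
threshold-style synergy hides from benchmarks until late in
development:

```python
def system_score(a, b, threshold=0.7, synergy_gain=2.0):
    """Sum of two components' capabilities (each in [0, 1]), plus a
    synergy bonus that is zero until BOTH pass the maturity threshold."""
    base = a + b
    if a > threshold and b > threshold:
        base += synergy_gain * (a - threshold) * (b - threshold)
    return base

# Early in development, integration buys nothing measurable:
print(round(system_score(0.3, 0.4), 2))   # 0.7 -- just the sum of the parts
# Only once both components mature does the synergy bonus show up:
print(round(system_score(0.9, 0.9), 2))   # 1.88 -- 1.8 plus a 0.08 bonus
```

So a benchmark run on the half-built system measures only the
components themselves, and the integration work looks like wasted
effort right up until it isn't.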

****
I also don't buy that all the parts need to be in place before we can
see progress. That is wishful thinking. In fact, we find historically
that the opposite happens. You see a lot of progress initially as the
easy parts of the problem are solved first. You can solve half of the
language modeling problem with a simple parser and a few hundred
rules. But the full problem requires a vast understanding of real
world and common sense knowledge and the ability to reason,
generalize, and solve problems. Natural intelligence has a lot of
redundancy and fault tolerance. If one part fails, the rest still
works at a reduced level. A blind or deaf person can still be
intelligent.
****

What we find historically is only a very partial guide to the process of
building minds.  The systems built in the past lack the complexity of
interconnectivity of a human-like mind....

It's true that natural intelligence has a lot of redundancy and fault tolerance.
However, I'm not trying to build a biological-style intelligence with
OpenCog....
Largely because I think that would require a lot more computational resources...


> I am not suggesting that you throw out all of the work on OpenCog and
> start over with a radically new design. I am suggesting that you start
> applying it to some real problems. I already have a text prediction
> (compression) benchmark. Perhaps some test results might attract the
> interest of investors. (That's how I got my current job).

I am currently applying part of OpenCog (MOSES) to a couple of
practical machine learning applications (in finance and in genetics).
That is my current job.

I don't currently think that lossless text compression is a very good
proto-AGI application area, as I think there are probably more
straightforward ways to get incremental improvements on current
lossless compression results.

However, I would find it interesting (if I had time for it, which I
don't currently) to see whether integrating OpenCog's PLN reasoning
formalism into some probabilistic text compressor could be made to
yield some improvement.  This would require lots of extension to the
current PLN code, I feel...
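For anyone unfamiliar with how such benchmarks are scored: a
predictive model's probabilities translate directly into compressed
size, since an ideal arithmetic coder spends -log2(p) bits per
symbol.  A minimal sketch of that scoring (my own generic
illustration -- this is not PLN, nor Matt's actual benchmark code):

```python
import math
from collections import defaultdict

def bits_per_char(text, order=2):
    """Estimate compressed size of `text` (bits/char) under a simple
    adaptive order-N character model with add-one smoothing.  An ideal
    arithmetic coder emits -log2(p) bits per symbol, so summing those
    code lengths bounds the achievable compressed size."""
    counts = defaultdict(lambda: defaultdict(int))  # context -> char -> count
    alphabet = sorted(set(text))
    total_bits = 0.0
    for i, ch in enumerate(text):
        ctx = text[max(0, i - order):i]
        seen = counts[ctx]
        denom = sum(seen.values()) + len(alphabet)  # add-one smoothing
        p = (seen[ch] + 1) / denom
        total_bits += -math.log2(p)
        seen[ch] += 1  # update the model after coding, as a compressor would
    return total_bits / len(text)

sample = "the cat sat on the mat. the cat sat on the mat. " * 20
# Should come out well below the ~3.46 bits/char of a uniform code
# over this sample's 11-character alphabet:
print(round(bits_per_char(sample), 2))
```

A better predictive model (e.g. one augmented with reasoning over
world knowledge) would drive this number down further -- which is
exactly why text prediction can serve as a benchmark.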

The problem I've found with using practical applications to direct AGI
development is that the timelines and resource restrictions associated
with practical application development inexorably push one toward
making single-component systems that are relatively quick to tweak,
improve and test.....   Within the context of any particular
application project, it's difficult to justify doing a bunch of
foundational work on something as complex as OpenCog....  But when one
customizes a component (say, MOSES or PLN) for some application, one
often does so in a way that's orthogonal (or only loosely related) to
what one would need to do, to get it to work in an AGI context...

I have not found it so difficult to get investors for narrow vertical
applications of OpenCog components.  If one of our current vertical
applications succeeds dramatically, then (after a few more years) this
may generate sufficient revenue to fund the AGI work, which includes
key aspects very different from the vertical application work.

Raising $$ is always hard, but raising $$ for AGI is far harder than
doing so for narrow application development, of course...

> I find it
> curious that a system that could potentially replace most human labor,
> worth hundreds of trillions of dollars, can't even find a few million.
> Are people really betting that you have less chance of success than
> winning a lottery?

The inconsistency of humans' judgments is well known; this is far from
the only instance of the phenomenon ;p

There are various issues going on here, including (according to my
crude guessing) fear of the Terminator, left-over effects from the old
AI Winter, and most of all people's general fear and skepticism of the
unknown...

>> "The real reasons we don't have AGI yet"
>> http://www.kurzweilai.net/the-real-reasons-we-dont-have-agi-yet
>
> I agree that computers are not powerful enough to model a human brain
> sized neural network or to run lots of experiments. Training data is
> another problem. Human vision is trained on the equivalent of
> decades of high resolution video. I think that language is an easier
> problem. Watson shows that the problem of human-level performance is
> at least feasible.

Watson is a cool engineering achievement but doesn't really show what you
say it does, as it's restricted to an artificial domain.

> Google's cat-face neural network recognizer has a
> long way to go to get to that level. (And BTW they do have a
> quantitative result in their paper: 15% accuracy on ImageNet, the best
> so far. IMHO ImageNet is far too small to train a vision system
> anyway).

I know they have a quantitative result in their paper.  However, my point is:
that quantitative result is not particularly strong, and is not the thing that
got people excited...

> I think the hardest problem will turn out to be robotics. About 80% of
> our neurons and most of our synapses are in the cerebellum. It is also
> the oldest part of our brain in terms of evolution, and therefore the
> most complex.

Other parts of the body are even older than the brain; are they
therefore even more complex?  Your logic seems dubious...

I agree that movement, planning and other cerebellum-centric brain
functions are important to consider in AGI.   But I'm unconvinced by
your argument that they are the most difficult part.

-- Ben


-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-f452e424
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-58d57657
Powered by Listbox: http://www.listbox.com
