Oops, I clicked SEND prematurely...

> I suggest you review the history of Babbage's Analytical Engine
>
> http://en.wikipedia.org/wiki/Analytical_Engine
>
> "The Analytical Engine was a proposed mechanical general-purpose
> computer designed by English mathematician Charles Babbage.[2]
> It was first described in 1837 as the successor to Babbage's
> Difference Engine, a design for a mechanical computer. The Analytical
> Engine incorporated an arithmetic logic unit, control flow in the form
> of conditional branching and loops, and integrated memory, making it
> the first design for a general-purpose computer that could be
> described in modern terms as Turing-complete.[3][4]
> Babbage was never able to complete construction of any of his machines
> due to conflicts with his chief engineer and inadequate funding.[5][6]
> It was not until the 1940s that the first general-purpose computers
> were actually built."


Note here that Babbage's core ideas were not only correct, but now seem
OBVIOUS to nearly anyone with a university education in computer science...

Note that he failed to get the analytical engine built during his
lifetime, not because
of any core problem with the ideas or the design, but simply because
it was tricky to
accomplish using the component technologies available, and he ran into various
human-management and funding problems...

Note this page, titled

"Babbage's Analytical Engine, 1834-1871. (Trial model)"

http://www.sciencemuseum.org.uk/objects/computing_and_data_processing/1878-3.aspx

If you had asked Babbage in 1840, 6 years after conceiving the idea of
the Analytical Engine,
when he would have a complete Analytical Engine, what would he have
said?  Maybe he
would have projected completion of the Analytical Engine by 1845 or
1850.  He would not have
predicted that the thing would still remain incomplete at the time of
his death in 1871....

But nevertheless, his failure to correctly foresee the pragmatic,
non-scientific obstacles he
would run into in trying to get his Analytical Engine created tells
you NOTHING about the validity
of his underlying design -- which is now considered obvious and trivial.

In my view, the situation with OpenCog is similar.  The basic validity
of the design will look obvious
and almost trivial to any AI undergraduate of 2050.   Given the code
libraries and hardware of that time,
the implementation of something like OpenCog will be an undergrad
course project.....  Given the
code libraries and hardware available NOW, it's a lot of work to get
something like OpenCog implemented
and tested....  This is the sort of practical problem frequently
confronted by people with ideas that are
"ahead of the times" relative to the available technical infrastructure.

One thing that is different now than in Babbage's time, though, is
that the exponential advancement of technology
is further along the curve.   When five years of Babbage's life
passed, the underlying technologies needed to
support his Analytical Engine advanced only a little.  When five years
of my life pass, the underlying tech needed
to support OpenCog advances quite a bit ;-) ...

To counterbalance your mocking of my prior, conditional positive
predictions, I'd like to remind you of the long
list of incorrect negative predictions made in the past, regarding
various incipient technologies:

http://www.merkle.com/badPredictions.html

The folks making these incorrect negative predictions were just as
superior-sounding, self-confident and high-handed
as you are, in their dismissal of various technologies and approaches
that now seem obvious.  Generically speaking,
humans aren't great at either positive or negative prediction, and we
need to consider each case carefully rather than
evaluating situations glibly.  In the case of AGI, I prefer to
evaluate someone's AGI approach via actually looking at the
conceptual and scientific and technical ideas underlying their work,
rather than based on shallow considerations such
as the ones you are applying to OpenCog....

Your solution to the difficulty of achieving adequate funding for AGI
R&D, is to work on narrow AI and count on it
gradually becoming more and more AGI-ish....  You have
repeated this message dozens, perhaps hundreds of times during the
years I've been interacting with you online.  (I am actually
amazed at your patience for repeating essentially the same arguments
in different words, month after month and year
after year.)   You believe that
by incrementally improving a variety of narrow-AI products like text
compressors, it will be possible to eventually achieve human-level
AGI.  I doubt this will work.  I understand how convenient it would be
if this WERE a workable path, because of course it's
easier to leverage resources toward practical projects with near-term,
high-probability commercial payoffs for investors.  But
I'm not going to modify my scientific/conceptual understanding of
intelligence, based on criteria of economic convenience.
You will proceed with your R&D according to your own understanding,
and I'll proceed with my R&D according to mine.   Research
is always risky, because it's always based in part on uncertain
knowledge and intuition.

Success in AGI requires a number of things to go right: the right core
ideas, the right technical implementation choices, and the right
practical situation (team/funding/etc.).    Failure in AGI requires
only one of these things to go wrong....  And to make AGI work, you
have to do all these difficult things right, in the midst of a bunch
of trolls and nay-sayers screeching annoyingly in your ear "You might
fail!!  You might fail!!  You may be wasting your time!!  Are you sure
you shouldn't just get a garden-variety job and have an easier life??"
.....   But somehow, I manage to enjoy myself working on AGI anyway
-- I guess mainly because the subject matter is just SO damn important
and fascinating ;-) ...

Merry Christmas ;)
Ben G


-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-f452e424
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-58d57657
Powered by Listbox: http://www.listbox.com
