Matt,

Over the years I've been working on OpenCog (and before that,
Novamente and Webmind),
I've made various projections of the form

"IF this project had funding for a dedicated AGI team at level X, THEN
I predict we could achieve Y
in time-frame Z"

These have been conditional projections; and as it happens, the
conditions have not yet been
met....   We have not yet managed to get funding for a substantial,
dedicated AGI team....  I am happy
we do have SOME funding now for a small team, that is working on
AGI-oriented stuff together with
application-oriented stuff... this is great.  But it's not at the
level we would need to move ahead full-speed
on implementation of the design.

As I have given lots of interviews and written lots of stuff, it's
possible that sometimes I have slipped
up and phrased some prognostication in an imprecise or suboptimal way.
 Big whoop....

In any serious discussion of timelines, I always emphasize the
difficulty of projecting the precise
time-course of development of any project like this.   I generally
note that Microsoft can't even
precisely estimate the amount of time it will take to generate the
next version of their operating system --
and an OS is a much less researchy, complex thing than an AGI.

Obsessing on trashing me for making conditional predictions that
didn't come true isn't really very
pointful, now is it?   I'd be far more interested in having a
discussion on the actual AGI ideas and design underlying
OpenCog.  The real question here is whether the OpenCog design is
actually adequate to yield AGI, not
the particulars of what wording I've used in making conditional
predictions at what points in the past...

For example, you trash my prior prediction about the results I hoped
to obtain by hooking OpenCog up to the Nao
robot.  OK, so let me explain that particular situation.  The work on
the Nao was being done at Xiamen University,
in collaboration with Hugo de Garis, who was a professor at that
university at that time.   We did some preliminary
work hooking OpenCog to the Nao.  Then Hugo was pushed out of his job
at Xiamen University, due to some political
hassles unrelated to OpenCog, and took an early retirement.   So that
OpenCog/Nao project ended, and the Chinese NSF
money that had been obtained for it wound up being used for
non-OpenCog stuff by other people.   What does this story
prove about the OpenCog design, or the adequacy of my predictive
powers?  It proves that I did not foresee Hugo getting
pushed out of his job, and that project thus being prematurely ended.
It proves that I naively assumed we would get to
actually execute the Chinese NSF funded project for which we'd
obtained funds.  Does Hugo getting pushed out of his job
prove that the OpenCog design is inadequate, or that there was some
fundamental flaw to our scheme for using it together
with the Nao robot?  Of course not...

Now, several years later, we are poised to try that same basic thing
again.  We're trying a collaboration with Hanson Robotics,
based at Hong Kong Polytechnic University, with a great but small and
fairly low-paid team based in Hong Kong.   Again, I think
our technical approach makes sense.   I have high hopes for success.
But again, we might end up failing for non-technical reasons.
I'm quite grateful for the funding we've received, but our odds of
success are not the same as they would be if we had millions of US
dollars in dedicated funding.   Getting all this very complicated
stuff done on shoestring budgets presents challenges beyond the
technical level.

Similarly, let's hark back to that "virtual dog in Second Life"
project.  That was a collaboration between Novamente LLC and
Electric Sheep Company.  It was going great for a while, but then
Electric Sheep Company fell into some financial difficulties, laid
off a bunch of their staff, and ended a bunch of "non-core" projects
like their collaboration with Novamente. They were doing the virtual
world side of that project, and without them we didn't have the
expertise to keep it going, Novamente being quite a small company.

What you can see here is that, since Novamente/OpenCog have not been
well-funded, we've been scrambling to get stuff done via
collaborations with external entities, which has been an effective
strategy to some extent, but has also had lots of ups and downs...

Rest assured there are lots more stories where these come from.  Note
also that there is no massive business organization behind all this,
brokering deals and making alliances.  There is one guy -- me -- who
has to earn a living by doing narrow-AI work (not just to pay my own
personal
bills, but to cover educational costs for 3 teenage kids in the US,
child support payments, etc. etc.), coordinate the AI R&D aspects
of OpenCog, *and* do the business-development/fundraising.....  I've
had help with all of these things part-time from time-to-time, but
pretty much
it's been me plus some technical folks....  Given this situation, it's
hardly shocking that our progress on the business/money side has not
been
dramatic....   Nor is it shocking that I've taken a while to finish up
"Building Better Minds" in my spare time!!

Drawing conclusions about the scientific and technical validity of the
OpenCog design based on the success or failure of various OpenCog
collaborative projects is simply foolish....

One conclusion I've gathered from my study of human history, including
the history of science and business, is: Persistence goes a long way.
 It's not enough to have good ideas; you have to be willing to keep
pushing really hard, even when it starts to seem pointless, and even
when the series of obstacles gets really, really annoying.

Another conclusion I've gathered is: In most cases, anyone who has
done something really great, has done so while ignoring the trolling
and mockery of a shitload of sarcastic, self-confident nay-sayers...

I suggest you review the history of Babbage's Analytical Engine

http://en.wikipedia.org/wiki/Analytical_Engine

"The Analytical Engine was a proposed mechanical general-purpose
computer designed by English mathematician Charles Babbage.[2]
It was first described in 1837 as the successor to Babbage's
Difference Engine, a design for a mechanical computer. The Analytical
Engine incorporated an arithmetic logic unit, control flow in the form
of conditional branching and loops, and integrated memory, making it
the first design for a general-purpose computer that could be
described in modern terms as Turing-complete.[3][4]
Babbage was never able to complete construction of any of his machines
due to conflicts with his chief engineer and inadequate funding.[5][6]
It was not until the 1940s that the first general-purpose computers
were actually built."




On Tue, Dec 25, 2012 at 8:13 PM, Matt Mahoney <[email protected]> wrote:
> On Mon, Dec 24, 2012 at 11:33 PM, Ben Goertzel <[email protected]> wrote:
>>> I find it
>>> curious that a system that could potentially replace most human labor,
>>> worth hundreds of trillions of dollars, can't even find a few million.
>>> Are people really betting that you have less chance of success than
>>> winning a lottery?
>>
>> The inconsistency of humans' judgments is well known; this is far from
>> the only instance of the phenomenon ;p
>>
>> There are various issues going on here, including (according to my
>> crude guessing) fear of the Terminator, left-over effects from the old
>> AI Winter, and most of all peoples' general fear and skepticism of the
>> unknown...
>
> One would hope that large companies that have an interest in AI and a
> lot of money to invest (Google, IBM, Microsoft, Facebook, etc) would
> be more rational. It is easy to place the blame elsewhere, but not
> productive. When I look at the OpenCog roadmap:
>
> http://opencog.org/roadmap/
>
> I see predictions like:
>
> "Creation of an OpenCog-based artificial scientist, operating a small
> molecular biology laboratory on its own, designing its own experiments
> and operating the equipment and analyzing the results and describing
> them in English."
>
> "Creation of an OpenCog-based service robot, which carries out basic
> household tasks in a manner driven by English-language communication,
> and knowledge sharing with the network of other robots."
>
> supposedly in 4 to 5 years. In 6 to 8 years we will have full-on human
> level AGI. In 8 to 10 years we will have advanced self improvement.
> Naturally I am skeptical, so I research the last 16 years going back
> to the beginning of Webmind:
>
> http://opencog.org/research/
>
> paying close attention to papers that have a "results" section. This
> leads me to the virtual puppy demo and accompanying papers from around
> 2010.
>
> http://novamente.net/example/
>
> The papers make no mention of MOSES or DeSTIN, as these have not yet
> been integrated. But another paper in 2010 proposes to do that as part
> of imbuing a Nao robot with child level AGI in 3 years. Since next
> week will be 3 years, I guess it would be a legitimate question at
> this point to ask how that project is progressing?
>
> But I am intrigued at least by the possibility of natural language
> understanding. A more careful inspection shows that one of the papers,
>
> http://goertzel.org/ICAI_CogSyn_paper.pdf
>
> mentions that RelEx was used to extract semantic information and make
> inferences from biomedical research abstracts. I go to the reference:
>
> http://acl.ldc.upenn.edu/W/W06/W06-3317.pdf
>
> but I am disappointed to find that all of the work is done with
> hand-coded rules and that the results are anecdotal. So really, the
> puppy demo isn't doing anything more sophisticated than SHRDLU did
> around 1968-1970, except that the graphics are better.
>
> I am not writing this to be critical or because I am trolling. I
> really do want to see AGI, and I think that it is going to happen,
> given the enormous incentives to automate human labor. We all know
> that vision, language, and robotics will require massive computing
> power, and this will be expensive. Making grandiose predictions is not
> the best way to get funding. But even worse is going for years or
> decades without being able to show progress. I don't mean to imply
> that there hasn't been progress, but without measurable test results,
> we don't even know for ourselves if there really is any or if we are
> just fooling ourselves.
>
>
> --
> -- Matt Mahoney, [email protected]
>
> --
> You received this message because you are subscribed to the Google Groups 
> "opencog" group.
> To post to this group, send email to [email protected].
> To unsubscribe from this group, send email to 
> [email protected].
> For more options, visit this group at 
> http://groups.google.com/group/opencog?hl=en.
>



-- 
Ben Goertzel, PhD
http://goertzel.org

"My humanity is a constant self-overcoming" -- Friedrich Nietzsche


-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
