Samantha,

>  You know, I am getting pretty tired of hearing this poor mouth crap.   This
> is not that huge a sum to raise or get financed.  Hell, there are some very
> futuristic rich geeks who could finance this single-handed and would not
> really care that much whether they could somehow monetize the result.   I
> don't believe for a minute that there is no way to do this.    So exactly
> why are you singing this sad song year after year?
...
>  From what you said above $50M will do the entire job.   If that is all that
> is standing between us and AGI then surely we can get on with it in all
> haste.   If it is a great deal more than this relatively small amount of
> money then lets move on to talk about that instead of whining about lack of
> coin.


This is what I thought in 2001, and what Bruce Klein thought when he started
working with me in 2005.

In brief, what we thought was something like:

"
OK, so  ...

On the one hand, we have an AGI design that seems to its sane PhD-scientist
creator to have serious potential of leading to human-level AGI.  We have
a team of professional AI scientists and software engineers who are
a) knowledgeable about it, b) eager to work on it, c) in agreement that
it has a strong chance of leading to human-level AGI, although with
varying opinions on whether the timeline is, say, 7, 10, 15 or 20 years.
Furthermore, the individuals involved are at least thoughtful about issues
of AGI ethics and the social implications of their work.   Carefully detailed
arguments as to why it is believed the AGI design will work exist, but
these are complex, and furthermore do not constitute any sort of irrefutable
proof.

On the other hand, we have a number of wealthy transhumanists who would
love to see a beneficial human-level AGI come about, and who could
donate or invest some $$ to this cause without serious risk to their own
financial stability should the AGI effort fail.

Not only that, but there are a couple of related factors:

a) early non-AGI versions of some of the components of said AGI design
are already being used to help make biological discoveries relevant to
life extension (as documented in refereed publications)

b) very clear plans exist, including discussions with many specific potential
customers, regarding how to make $$ from incremental products along the
way to the human-level AGI, if this is the pathway desired
"

So, we talked to a load of wealthy futurists and the upshot is that it's really
really hard to get these folks to believe you have a chance at achieving
human-level AGI.  These guys don't have the background to spend 6 months
carefully studying the technical documentation, so they make a gut decision,
which is always (so far) that "gee, you're a really smart guy, and your team
is great, and you're doing cool stuff, but technology just isn't there yet."

Novamente has gotten small (but much valued)
investments from some visionary folks, and SIAI has
had the vision to hire 1.6 folks to work on OpenCog, which is an
open-source sister project of the Novamente Cognition Engine project.

I could speculate about the reasons behind this situation, but the reason is NOT
that I suck at raising money ... I have been involved in fundraising for
commercial software projects before and have been successful at it.

I believe that 10-15 years from now, one will be able to approach the exact
same people with the same sort of project, and get greeted with enthusiasm
rather than friendly dismissal.  Going against prevailing culture is really
hard, even if you're dealing with people who **think** they're seeing beyond the
typical preconceptions of their culture.  Slowly, though, the idea that AGI is
possible and feasible is wending its way into the collective mind.

I stress, though, that if one had some kind of convincing, compelling **proof**
of being on the correct path to AGI, it would likely be possible to raise $$
for one's project.  This proof could be in several possible forms, e.g.

a) a mathematical proof, which was accepted by a substantial majority
of AI academics

b) a working software program that demonstrated human-child-like
functionality

c) a working robot that demonstrated full dog-like functionality

Also, if one had good enough personal connections with the right sort
of wealthy folks, one could raise the $$ -- based on their personal trust
in you rather than their trust in your ideas.

Or of course, being rich and funding your work yourself is always an
option (cf. Jeff Hawkins).

This gets back to a milder version of an issue Richard Loosemore is
always raising: the complex systems problem.  My approach to AGI
is complex systems based, which means that the components are NOT
going to demonstrate any general intelligence -- the GI is intended
to come about as a holistic, whole-system phenomenon.  But not in any
kind of mysterious way: we have a detailed, specific theory of why
this will occur, in terms of the particular interactions between the
components.

But what this means is that, by the time we get to a demo of the system
that will look impressive to skeptical potential investors/donors, we'll
already be halfway to the end goal.  Because the demo itself
will require decently effective versions of all the major system components,
tuned and integrated and reasonably scalably deployed.

Yes, this whole situation frustrates me tremendously.  It often occurs to me
that I'd have a better time spending my life writing cognitive science books
and avant-garde fiction, recording music, playing with my kids and pets,
proving math theorems ... and putting off doing serious AGI work until
the society I'm embedded in becomes interested in supporting it.

Fighting the difficulty of the AGI problem is hard enough; fighting it at the
same time as fighting the shortsightedness and hyperskepticism of society
and culture is DAMN exhausting and has driven me to despair more than
once, even though I'm innately a cheerful person.

Many evenings, around 10PM or so, I sit at my computer and realize
that

a) I've spent the bulk of my workday on stuff related to Novamente's
narrow-AI consulting biz or fundraising/product-dev efforts ... rather
than directly pushing toward AGI with my own brainpower

b) I need to get up at 6:15 AM to drive my son to high school

and I wonder whether it's really intelligent to sit up another 3-4 hours
and do some direct, concrete AGI work myself.  But usually even
though  my physiological organism is damn tired, I push ahead
with it anyway... because  I am blessed/cursed with a highly stubborn and
persistent personality, and I really believe both in the end goal (AGI)
and the path we're pursuing to it.  And I know my efforts on the narrow-AI
and biz stuff are allowing others, funded by Novamente, to spend their
days doing technical work that is building directly toward the AGI goal
(though not nearly as fast as could be done with more precisely targeted
and/or copious resources).

Anyway, Samantha, this is not some abstract discussion to me.

To re-quote, what you said is:

>  You know, I am getting pretty tired of hearing this poor mouth crap.   This
> is not that huge a sum to raise or get financed.  Hell, there are some very
> futuristic rich geeks who could finance this single-handed and would not
> really care that much whether they could somehow monetize the result.   I
> don't believe for a minute that there is no way to do this.    So exactly
> why are you singing this sad song year after year?
...
>  From what you said above $50M will do the entire job.   If that is all that
> is standing between us and AGI then surely we can get on with it in all
> haste.   If it is a great deal more than this relatively small amount of
> money then lets move on to talk about that instead of whining about lack of
> coin.

but I don't hear any constructive suggestions here.

If you have some specific "futuristic rich geeks" in mind who would like to
meet with me and talk about investing in or donating to the creation of
human-level AGI ... please let me know, and my Novamente colleagues and I will
be there pronto.

My fear is that you are not fully understanding the psychological and cultural
factors involved, due to not having the years of frustrating experience talking
to such "futuristic rich geeks" that Bruce and I have.

I have a lot of persistence and I believe we can get to the end goal anyway,
but it sure seems to be going a lot more slowly than it would if your optimistic
assessment of the fundraising scenario were right.

One of the AI PhDs working on the Novamente project, who's been on board
for a bit over a year, made the following statement to me last month:

Him: It seems to me that it's going to take about 20 years to work through all
the details and really make Novamente into a human-level AGI.

Me: But what level of staffing are you assuming?

Him: Ah, well, that's a good point.  I haven't really thought through what could
be done if we got more people dedicated to the project.  I was more thinking
about our current rate of progress.

So there you go.  Most of the team hasn't even bothered to THINK in depth
about what we could do with, say, $1M/year in dedicated AGI funding, because
it's just not the current reality....

Interestingly enough, from October 2007 thru Jan 2008 we had a contract
in place that was supposed to supply us with roughly $1M/year in AGI funding
as of Jan 2009.  But that fell apart for reasons having nothing to do with
Novamente, related solely to the changing financial fortunes of the other
party to the contract.

And we were in the final legalese phase of a gov't contract that was going to
supply $500K/year of AGI funding ... but I just found out Friday that this
project is going to be delayed for at least a year ... for reasons having
nothing to do with Novamente, but related to management changes in
the government agency involved.

And so it goes.  Each of those deals required an incredible amount of
my time and effort to coordinate and then failed to pan out for reasons beyond
our control (but ultimately for reasons to do with the tenuous status of AGI
in our society and culture ... for instance, AGI is rarely a partner or
funder's main priority, so if Wall Street takes a dive, it's near the top of
the list to get cut...)

But I've got other meetings with potential investors and partners set up in the
near future ... there's always hope around the corner ... and it's always hard
to know what probability to assign to any given promising-looking opportunity.

What encourages me the  MOST at the moment is that we have found a business
model that both

a) viably and truly is on the direct path to powerful AGI

b) can yield substantial revenue from early versions of the system, well before
being at the "amazingly compelling AGI demo" level

This is the creation and marketing of intelligent virtual pets for
virtual worlds
and online games.

I've come to believe we have a way better chance of getting investors or
corporate partners with intelligent virtual pets than with AGI per se.

And, we have an alpha version of a Pet Brain software system, based on a
subset of the Novamente Cognition Engine architecture.  We are currently
working with some 3D artists to hook it up with the Multiverse virtual world
to make a whizzy demo.  (Previously we were working with Electric Sheep
Company on a Second Life version, and they were handling the graphics
part; but that collaboration has cooled down dramatically at the moment,
since the Sheep have decided to focus on other things in line with their own
business priorities.)

Well anyway.  That's a long enough email; I guess you have the flavor of the
situation.

-- Ben
