Mike,

The lack of AGI funding can't be attributed solely to its risky nature,
because other highly costly and highly risky research has been consistently
funded.

For instance, a load of $$ has been put into building huge particle
accelerators, in the speculative hope that they might tell us something
about fundamental physics.

And, *so* much $$ has been put into parallel processing and various
supercomputing hardware projects ... even though these really have
contributed little; in almost every domain, nearly all progress has been
made using commodity computing hardware.

Not to mention various military-related boondoggles like the hafnium bomb...
which never had any reasonable scientific backing at all.

Pure theoretical research in string theory is funded vastly more than pure
theoretical research in AGI, in spite of the fact that string theory has
never made an empirical prediction and quite possibly never will, and has no
near- or medium-term practical applications.

I think there are historical and psychological reasons for the bias against
AGI funding, not just a rational assessment of its risk of failure.

For one thing, people have a strong bias toward wanting to fund the creation
of large pieces of machinery.  They just look impressive.  They make big
scary noises, and even if the scientific results aren't great, you can take
your boss on a tour of the facilities and they'll see Multiple Wizzy-Looking
Devices.

For another thing, people just don't *want* to believe AGI is possible --
for emotional reasons similar to the ones that seem to make *you* not want
to believe AGI is possible.  Many people have a nonscientific intuition that
mind is too special to be implemented in a computer, so they are more
skeptical of AGI than of other risky scientific pursuits.

And then there's the history of AI, which has involved some overpromising
and underdelivering in the 1960s and 1970s -- though, I think this factor is
overplayed.  After all, plenty of Big Physics projects have overpromised and
underdelivered.  The Human Genome Project, wonderful as it was for biology,
also overpromised and underdelivered: where are all the miracle cures that
were supposed to follow the mapping of the genome?  The mapping of the
genome was a critical step, but it was originally sold as being more than it
could ever have been ... because biologists did not come clean to
politicians about the fact that mapping the genome is only the first step in
a long process toward understanding how the body generates disease (first
the genome, then the proteome, the metabolome, systems biology, etc.)

Finally, your suggestion that AGI funding would be easier to achieve if
researchers focused on transfer learning among a small number of domains
seems just not accurate.  I don't see why transfer learning among 2 or 3
domains would be appealing to conservative, pragmatics-oriented funders.  I
mean:

-- on the one hand, it's not that exciting-sounding, except to those very
deep in the AI field

-- also, if your goal is to get software that does 3 different things, it's
always going to seem easier to just fund 3 projects to do those 3 things
specifically, using narrowly specialized methods, than to make a riskier
investment in something more nebulous like transfer learning

I think the AGI funding bottleneck will be broken either by

-- some really cool demonstrated achievement [I'm working on it!! ... though
it's slow with so little funding...]

-- a nonrational shift in attitude ... I mean, if string theory and
supercolliders can attract $$ in the absence of immediate utility or
demonstrated results, so can AGI ... and the difference is really just one
of culture, politics and mass psychology

or a combination of the two...

ben




On Thu, Dec 18, 2008 at 6:02 AM, Mike Tintner <tint...@blueyonder.co.uk> wrote:

>
>
> Ben: Research grants for AGI are very hard to come by in the US, and from
> what I hear, elsewhere in the world also
>
> That sounds like - no academically convincing case has been made for
> pursuing not just long-term AGI & its more grandiose ambitions (which is
> understandable / obviously v. risky) but ALSO its simpler ambitions, i.e.
> making even the smallest progress towards *general* as opposed to
> *specialist/narrow* intelligence, producing a machine, say, that could
> cross just two or three domains. If the latter is true, isn't that rather an
> indictment of the AGI field?
>
>
>
>
>



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
b...@goertzel.org

"I intend to live forever, or die trying."
-- Groucho Marx



-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=123753653-47f84b
Powered by Listbox: http://www.listbox.com
