On 22 Oct 2006 at 17:22, Samantha Atkins wrote:
> It is a lot easier, I imagine, to find many people willing and able to
> donate on the order of $100/month indefinitely to such a cause than to
> find one or a few people to put up the entire amount. I am sure that has
> already been kicked around. Why wouldn't it work though?

There have been many, many well-funded AGI projects in the past, public
and private. Most of them didn't produce anything useful at all. A few
managed some narrow AI spinoffs. Most of the directors of those projects
were just as confident about success as Ben and Peter are. All of them
were wrong. No-one on this list has produced any evidence (publicly) that
they can succeed where all previous attempts failed, other than cute
powerpoint slides - which all the previous projects had too. All you can
do is judge the architecture by the vague descriptions given, and the
history of AI strongly suggests that even when full details are available,
even so-called experts completely suck at judging what will work and what
won't. The chances of arbitrary donors correctly ascertaining which
approaches will work are effectively zero. The usual strategy is to judge
by hot buzzword count and apparent project credibility (number of PhDs,
papers published by the leader, how cool the website and offices are,
number of glowing writeups in the specialist press; remember Thinking
Machines Corp?). Needless to say, this doesn't have a good track record
either.

As far as I can see, there are only two good reasons to throw funding at a
specific AGI project you're not actually involved in (ignoring the critical
FAI problem for a moment): hard evidence that the software in question can
produce intelligent behaviour significantly in advance of the state of the
art, or a genuinely novel attack on the problem - not just a new mix of AI
concepts in the architecture (/everyone/ vaguely credible has that), but a
genuinely new methodology. Both of those have an expiry date after a few
years with no further progress. I'd say the SIAI had a genuinely new
methodology with the whole provable-FAI idea and, to a lesser extent, some
of the unpublished Bayesian AGI work that immediately followed LOGI, but I
admit that they may well be past the 'no useful further results' expiry
date for continued support from strangers.

Setting up a structure that can handle the funding is a secondary issue.
It's nontrivial, but it's clearly within the range of what reasonably
competent and experienced people can do. The primary issue is evidence
that raises the probability that any one project is going to buck the very
high prior for failure, and neither hand-waving, buzzwords nor powerpoint
(should) cut it. Even detailed descriptions of the architecture with
associated functional case studies, while interesting to read and perhaps
convincing to other experts, historically won't help non-expert donors
make the right choice. Radically novel projects like the SIAI /may/ be an
exception (in a good or bad way), but for relatively conventional groups
like AGIRI and AAII, insist on seeing some of this supposedly
already-amazing software before choosing which project to back.

Personally, if I had to back an AGI project other than our own research
approach at Bitphase, and I wasn't so dubious about his Friendliness
strategy, I'd go with James Rogers' project, but I'd still estimate a
less-than-5% chance of success even with indefinite funding. Ben would
be a little way behind that, with the proviso that I know his Friendliness
strategy sucks; he has been improving both that and his architecture,
though, so it's conceivable (though alas unlikely) that he'll fix it in
time. AAII would be some way back behind that, with the minor benefit that
if their architecture ever made it to AGI it's probably too opaque to
undergo early take-off, but with the huge downside that when it finally
did enter an accelerating recursive self-improvement phase, what I know of
the structure strongly suggests that the results would be effectively
arbitrary (i.e. really bad). As noted, hard demonstrations of both
capability and scaling (from anyone) would rapidly increase those
probability estimates. I understand why many researchers are so careful
about disclosure, but frankly, without it I think it's unrealistic verging
on dishonest to expect significant donated funding (ignoring the question
of why the hell /companies/ would be fishing for donations instead of
investment).

Michael Wilson
Director of Research and Development
Bitphase AI Ltd - http://www.bitphase.com


