Michael,

I think your summary of the situation is in many respects accurate, but an interesting aspect you don't mention has to do with the disclosure of technical details...

In the case of Novamente, we have sufficient academic credibility and know-how that we could easily publish a raft of journal papers on the details of Novamente's design and preliminary experimentation.  With this behind us, it would not be hard for us to get a moderate-sized team of somewhat-prestigious academic AI researchers on board ... and then we could almost surely raise funding from conventional government research funding sources.  This would take a number of years, but it is a well-understood process and would be very likely to succeed.

The main problem then boils down to the Friendliness issue.  Do we really want to put into the public domain a set of detailed scientific ideas -- some already validated via software work -- that we believe are capable of dramatically accelerating progress toward AGI?  Perhaps it is rational to do so, on the grounds that, with the funding this disclosure would bring, we would be able to progress more rapidly toward AGI than anyone else, even if others had exposure to our basic concepts.  But I have not yet reached the point of deciding so....

As for your distinction between a "fundamental innovation" and a "combination of prior ideas," I find that largely a matter of marketing.  I could easily spin Novamente as a fundamental, radically innovative design OR as an integrative combination of prior ideas.  Ditto with Eliezer's ideas.  Ditto with just about anything else that's at all innovative -- e.g., was Einstein's General Relativity a fundamental new breakthrough, or just a tweak on prior insights by Riemann and Hilbert?  Was Special Relativity a radical breakthrough, or just a tweak on Lorentz and Poincaré?  I don't really find assessments of "perceived radicalness" nearly as interesting as assessments of perceived feasibility ;-)

Finally: although progress on the Novamente project right now is slower than I would like, we do have two full-time experienced AI engineers on the AGI project, plus one new full-time addition and the part-time efforts of several PhD AI scientists.  So, we are slowly but surely moving toward a Novamente version with sufficiently impressive functionality to be more effective at attracting funding via what it can do, rather than via what we argue it will be able to do....  We are going to get there ... it's just a drag to be getting there so much more slowly than necessary, due to sociological issues related to funding.

-- Ben



On 10/23/06, Starglider <[EMAIL PROTECTED]> wrote:
On 22 Oct 2006 at 17:22, Samantha Atkins wrote:
> It is a lot easier, I imagine, to find many people willing and able to
> donate on the order of $100/month indefinitely to such a cause than to
> find one or a few people to put up the entire amount.  I am sure that has
> already been kicked around.  Why wouldn't it work though?

There have been many, many well-funded AGI projects in the past, public
and private. Most of them didn't produce anything useful at all. A few
managed some narrow AI spinoffs. Most of the directors of those projects
were just as confident about success as Ben and Peter are. All of them
were wrong. No one on this list has produced any evidence (publicly) that
they can succeed where all previous attempts failed, other than cute
PowerPoint slides - which all the previous projects had too. All you can
do is judge architectures by the vague descriptions given, and the history
of AI strongly suggests that even when full details are available, even
so-called experts completely suck at judging what will work and what
won't. The chances of arbitrary donors correctly ascertaining which
approaches will work are effectively zero. The usual strategy is to judge
by hot-buzzword count and apparent project credibility (number of PhDs,
papers published by the leader, how cool the website and offices are,
number of glowing writeups in the specialist press; remember Thinking
Machines Corp?). Needless to say, this doesn't have a good track record
either.

As far as I can see, there are only two good reasons to throw funding at a
specific AGI project you're not actually involved in (ignoring the critical
FAI problem for a moment): hard evidence that the software in question can
produce intelligent behaviour significantly in advance of the state of the
art, or a genuinely novel attack on the problem - not just a new mix of AI
concepts in the architecture, which /everyone/ vaguely credible has, but a
genuinely new methodology. Both of those have an expiry date after a few
years with no further progress. I'd say the SIAI had a genuinely new
methodology with the whole provable-FAI idea, and to a lesser extent some
of the unpublished Bayesian AGI work that immediately followed LOGI, but
I admit that they may well be past the 'no useful further results'
expiry date for continued support from strangers.

Setting up a structure that can handle the funding is a secondary issue.
It's nontrivial, but it's clearly within the range of what reasonably
competent and experienced people can do. The primary issue is evidence
that raises the probability that any one project is going to buck the very
high prior for failure, and neither hand-waving, buzzwords, nor PowerPoint
should cut it. Even detailed descriptions of the architecture with
associated functional case studies, while interesting to read and perhaps
convincing for other experts, historically won't help non-expert donors
make the right choice. Radically novel projects like the SIAI /may/ be an
exception (in a good or bad way), but for relatively conventional groups
like AGIRI and AAII, insist on seeing some of this supposedly
already-amazing software before choosing which project to back.

Personally, if I had to back an AGI project other than our research
approach at Bitphase, and I wasn't so dubious about his Friendliness
strategy, I'd go with James Rogers' project, but I'd still estimate a
less-than-5% chance of success even with indefinite funding. Ben would
be a little way behind that, with the proviso that I know his Friendliness
strategy sucks, but he has been improving both that and his architecture,
so it's conceivable (though alas unlikely) that he'll fix it in time. AAII
would be some way back behind that, with the minor benefit that if their
architecture ever made it to AGI, it's probably too opaque to undergo early
take-off, but with the huge downside that if it did finally enter an
accelerating recursive self-improvement phase, what I know of the structure
strongly suggests that the results would be effectively arbitrary (i.e.
really bad). As noted, hard demonstrations of both capability and scaling
(from anyone) would rapidly increase those probability estimates. I
understand why many researchers are so careful about disclosure, but
frankly, without it I think it's unrealistic verging on dishonest to expect
significant donated funding (ignoring the question of why the hell
/companies/ would be fishing for donations instead of investment).

Michael Wilson
Director of Research and Development
Bitphase AI Ltd - http://www.bitphase.com


