On 23 Oct 2006 at 10:39, Ben Goertzel wrote:
> In the case of Novamente, we have sufficient academic credibility and know-
> how that we could easily publish a raft of journal papers on the details of 
> Novamente's design and preliminary experimentation.

That bumps your objective success probability /slightly/. Newell's Soar
architecture and derivatives have had hundreds (possibly thousands, I
haven't done a detailed check) of papers published on both design and
experimental work. Soar still doesn't do anything terribly impressive and I
doubt anyone here would consider it a realistic AGI candidate, though
clearly many academics still do. There are lots of slightly less extreme
academic examples, some of which actually resemble paper-generators
more than serious AGI attempts (EPIC for example, though there are plenty
of connectionist examples too). OTOH Eurisko was very impressive for
the time (and still interesting today) but produced (AFAIK) only two
papers.

> With this behind us, it would not be hard for us to get a moderate-sized team
> of somewhat-prestigious academic AI researchers on board ... and then we
> could almost surely raise funding from conventional government research
> funding sources.

I think your ordering is reversed, unless you really have objectively
highly impressive stuff that academics can see the value of and potential
investors can't. Most academics have their own favourite pet architectures,
or at the very least back a general approach to AI that conflicts with
yours (and there's the general academic bias against big complicated 
systems with no magic algorithms). Stacks of cash and profligate grants
can change attitudes real quick though, given the scarcity of funding for
AGI projects. Or at least /apparent/ attitudes; most researchers will try
to continue doing what they were doing before (and believing the things
they believed before) with a minimum of renaming and restructuring to fit
in with whatever the people handing out funding think is cool. Just look at
the current situation with nanotechnology funding.

> This process would take a number of years but is a well-understood
> process and would be very likely to succeed. 

Possibly, for small values of 'moderate sized' and 'somewhat prestigious'.
Again, countless projects by AI academics never gained acceptance or
support beyond their own research teams; only a tiny fraction beat the odds
and started a genuine distributed research effort (poor replication of
results is one reason why so many scientists are skeptical of AI as a
field). To beat those odds you'd have to be keeping something fairly 
impressive under your hat - and in AI 'objectively impressive' generally
means 'does something that's impressive without you having to explain it'.
 
> The main problem then boils down to the Friendliness issue. Do we really 
> want to put a set of detailed scientific ideas, some validated via software 
> work already, that we believe are capable of dramatically accelerating 
> progress toward AGI, into the public domain?

Only if there's a tight correlation between the people who take your AGI 
ideas seriously (enough to attempt to replicate them) and the people who
take your FAI ideas seriously (assuming those ideas are right in the first
place). It's very difficult to say how good this correlation would be, as
there aren't really any past examples to go on. I agree that it's plausible
that the correlation could be low, and that this is a huge risk. My
previous email was not advocating disclosure as such; I was just pointing
out that trying to raise funding or donations without a decent stand-alone
demo is a bad idea.

> As for your distinction of a "fundamental innovation" versus a "combination 
> of prior ideas," I find that is largely a matter of marketing.

Unfortunately that's true in practice. I personally believe that the
distinction can usefully be made at a more fundamental level; it's about
how the architecture is generated and developed, not what mechanisms
it contains, what model of intelligence it's based on, which buzzwords it
complies with or the resources the development team have. In my opinion
the former is a better objective indicator of success probability than the
latter, which is how I generated the ordering over AGI project success
probabilities in my previous email. It's a relatively subtle distinction
though and I'm not going to try and convince everyone else to adopt it;
I'm not sure it's even possible to make it without making a personal,
reasonably detailed study of many past AGI projects (which decent
professional AGI researchers will have done, but which most observers
won't have the time or expertise for).

> I could easily spin Novamente as a fundamental, radical innovative design
> OR as an integrative combination of prior ideas.

That would be talking about the functional details of the AI, and your 
rationale for putting them in. While this is what ultimately determines
whether the design will work (and indeed all architectures I'd consider a
good idea probably have the characteristic you mention), the distinction
I was making is at a more abstract level.

> Ditto with Eliezer's ideas.

Eliezer's ideas are different because he doesn't actually /have/ any
(published, post-LOGI, not-currently-deprecated) ideas about how to
build an AGI. There's nothing there to characterise as 'radically
innovative' or an 'integrative combination', in AI terms. OK, he likes to
toss 'Bayesian' and 'Friendly' around along with a few other terms, but
there's no constructive detail. Thus my characterisation of his ideas as
'fundamentally different' /has/ to be at the next abstraction level up, of
how to approach the whole AGI problem in the first place. I'm rather
skeptical of Eliezer's exact approach (which seems to be 'sit in a room
and meditate until a mathematical proof describing why a specific seed AI
design is guaranteed to be both tractable and a Friendly implementation of
CEV pops out of my head'), but many of his insights were both novel and
important, and I do think a less-extreme and more-connected-with-reality
version of this methodology is a good way to attack the AGI problem.

> Ditto with just about anything else that's at all innovative -- e.g. was 
> Einstein's General Relativity a fundamental new breakthrough, or just a 
> tweak on prior insights by Riemann and Hilbert?

I wonder if this is a sublime form of irony aimed at the horribly naïve and
arrogant analogy to GR I drew on SL4 some years back :) Seriously,
analogies to other fields are always a dicey proposition because AGI is
just so different. Science is about finding compact bits of maths/logic
that match existing experimental results and hopefully predict the results
of interesting future experiments. The basic methodology for verification
is pretty stable, and there isn't really any general methodology for coming
up with the theories in the first place; people just rely on 'insight' and
lots of hard work trying out and iteratively refining various candidate
theories. AGI is fundamentally different, as we're trying to build
something that exhibits a wide class of fuzzily defined but definitely
very complicated behaviour, mostly from scratch. It's really engineering
rather than science (the whole 'using AI that isn't a close physical model
of the brain as a model of high-level brain function' idea was and is a
rather toxic red herring IMHO), but engineering with a uniquely close
relationship to maths/logic (even more so than normal software engineering).

> Was Special Relativity a radical breakthrough, or just a tweak on Lorentz
> and Poincaré?

AI doesn't really have many examples of a new powerful approach
subsuming an existing one (as SR subsumed Newtonian mechanics);
possibly backprop subsuming perceptrons and Bayesian logic subsuming
predicate logic. Neither of those is accepted across the field as the best
design approach; in fact I can't think of any AI theories or design
approaches that are accepted as valid by the majority of AI researchers.
But again, this is about specific mechanisms. The methodological
distinction I'm making is more akin to chemistry versus alchemy, or physics
versus Aristotelian natural philosophy. I know that sounds melodramatic,
and I would not claim that the non-alchemical approaches to AGI are
anywhere near as developed as the scientific method is. But still, it does
seem to be the state of the field; projects like AAII are just the latest
in a long line of alchemists claiming that they will have turned lead into
gold by next year, while a few lonely theorists are groping at notions of
a verifiable, logic-based way of approaching the /AGI design process/
(not AGI itself).
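
(An illustrative aside, and purely my own sketch rather than anything from
Novamente or the work under discussion: the 'backprop subsuming
perceptrons' point mentioned above can be shown in a few dozen lines of
Python/numpy. XOR is the textbook function a single-layer perceptron
provably cannot represent, yet a small two-layer net trained by plain
backpropagation learns it. The layer size, learning rate and iteration
count below are arbitrary choices of mine.)

    # XOR via a two-layer net and vanilla backpropagation (numpy only).
    import numpy as np

    np.random.seed(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # 2 inputs -> 8 hidden units -> 1 output
    W1, b1 = np.random.randn(2, 8), np.zeros(8)
    W2, b2 = np.random.randn(8, 1), np.zeros(1)
    lr = 0.5

    for _ in range(10000):
        # Forward pass.
        h = sigmoid(X.dot(W1) + b1)
        out = sigmoid(h.dot(W2) + b2)
        # Backward pass: squared-error loss, sigmoid derivatives.
        d_out = (out - y) * out * (1 - out)
        d_h = d_out.dot(W2.T) * h * (1 - h)
        # Gradient-descent weight updates.
        W2 -= lr * h.T.dot(d_out); b2 -= lr * d_out.sum(axis=0)
        W1 -= lr * X.T.dot(d_h);   b1 -= lr * d_h.sum(axis=0)

    print(np.round(out, 2))  # typically converges to [[0], [1], [1], [0]]

(Run the classic perceptron learning rule on the same four points and it
never settles, which is the sense in which the newer technique subsumes
the older one.)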

From where I'm sitting, Novamente appears to be just starting to peek
over the border into this 'formal methods for AGI design' realm; I can only
hope that you keep going. I'm trying to leapfrog to a working formal design
process for tractable AGI (which relies heavily on narrow AI support
tools), and use that to design lower-level AGI components from first
principles and analysis of prior work at the same time as refining the
methodology itself, but it's a very high risk endeavour. I think it's a lot
more likely to produce an AGI than Eliezer's Bayesian meditation, and
has a vastly better ratio of expected FAI-to-UFAI outcomes than all the
'conventional' AGI projects I know about. But it's very hard to be
objective about the probability of one's own project succeeding in
producing an AGI, and I can't currently claim to have a higher chance of
doing that you, Peter or a few similar people with much confidence.
Fortunately Bitphase's business plan treats AGI as a long term strategic
goal, not a necessity for growth and profitability (acknowleging that that
way lies different challenges regarding staying focused on AGI
development despite conflicting commercial priorities).

> I don't really find assessments of "perceived radicalness" nearly as
> interesting as assessments of perceived feasibility ;-) 

Ditto, but getting enough information to do the latter seriously is rare, so
debates about it usually bog down in a morass of speculation. Sometimes
(e.g. picking a personal Singularity strategy) you just have to go ahead
anyway and use the best probability assessments you can manage, but in
email debates it usually just leads to people talking past each other.

> So, we are slowly but surely moving toward a Novamente version with
> sufficiently impressive functionality to be more effective at attracting 
> funding via what it can do, rather than what we argue it will be able to
> do.... We are going to get there ... it's just a drag to be getting there
> so much more slowly than necessary due to sociological issues related
> to funding. 

Not having your ten year head start, Bitphase will be trying to leverage
what we believe are inherent advantages of our high-level approach to take
some shortcuts (at least up until we get to the really dangerous near-AGI
regions). I'm not particularly happy about it, but it does seem that many
people are unwittingly working to destroy the world ASAP, and heading them
off at the pass is going to take some out-of-the-hyperbox thinking. The
most helpful thing that could happen right now would actually be for
someone as competent (or more so) but more practical and less isolationist
to take over from Eliezer in advancing formal Friendliness research. After
that, more AGI projects switching to FAI-compatible methods would be good
(though I probably wouldn't say no to additional investors right now;
scaling revenues takes precious time, something you're clearly all too
familiar with).

Michael Wilson
Director of Research and Development
Bitphase AI Ltd - http://www.bitphase.com

