Re: [singularity] Defining the Singularity

2006-10-26 Thread Starglider
Matt Mahoney wrote:
>> 'Access to' isn't the same thing as 'augmented with' of course, but I'm
>> not sure exactly what you mean by this (and I'd rather wait for you to
>> explain than guess).
> 
> I was referring to one possible implementation of AGI consisting of part 
> neural
> or brainlike implementation and part conventional computer (or network)
> to combine the strengths of both.

I'm sure that a design like this is possible, and there are quite a few
people trying to build AGIs like this, either with close integration
between the connectionist and code-like parts or having them as relatively
discrete but communicating parts. Yes, it should be more powerful than
connectionism on its own; no, it's not necessarily any more Friendly. But
if hard structural constraints (what can trigger what, what can modify
what) can be reliably enforced via the non-connectionist elements, then it
has the potential to be more Friendly than a connectionist system could be.

What I'm not sure about is whether you gain anything from 'neural' or
'brainlike' elements at all. The brain should not be put on a pedestal.
It's just what evolution on earth happened to come up with, blindly
following incremental paths and further hobbled by all kinds of cruft and
design constraints. There's no a priori reason to believe that the brain is
a /good/ way to do anything, given hardware that can execute arbitrary
Turing-equivalent code. Of course it's still pragmatic to try copying the
brain when we can't think of anything better (i.e. don't have the
theoretical basis or tools to do better than attempt crude imitations).
As with rational AGI (and FAI) in general, I don't expect people (who
haven't deeply studied it and tried to build these systems) to accept that
this is true, just that it might be true: there may be much more efficient
algorithms that effectively outperform connectionism in all cases.
Getting some confirmation (or otherwise) of that is one of the things I'm
working on at present.

> The architecture of this system would be that the neural part has the
> capability to write programs and run them on the conventional part in
> the same way that humans interact with computers.

Neural nets are a really bad fit with code design. Current ANNs aren't
generally capable of from-requirements design anyway, as opposed to pattern
recognition and completion. Writing code involves juggling lots of logical
constraints and boolean conditions, so it's actually one of the few real
world tasks that is a natural fit with predicate logic. This is why humans
currently use high-level languages and error-checking compilers. You could
of course use a connectionist system as the control mechanism to direct
inference in a logic system, in a roughly analogous manner.
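
To make that last point concrete, here is a minimal toy sketch (entirely my
own illustration, with made-up names like heuristic_score and forward_chain,
not anything from a real architecture): a forward-chaining propositional
engine in which a stand-in 'learned' scorer only chooses the order in which
rules are tried, while the symbolic layer alone decides what counts as a
valid inference step.

# Toy sketch, assuming nothing beyond standard Python: rule *ordering* is
# delegated to a stand-in learned scorer, but soundness lives entirely in
# the symbolic layer.
from typing import Callable, FrozenSet, List, Tuple

Rule = Tuple[FrozenSet[str], str]  # (premises, conclusion)

def heuristic_score(rule: Rule, facts: FrozenSet[str]) -> float:
    """Stand-in for a trained network: prefer rules whose premises are mostly known."""
    premises, _ = rule
    return len(premises & facts) / max(len(premises), 1)

def forward_chain(facts: FrozenSet[str], rules: List[Rule],
                  score: Callable[[Rule, FrozenSet[str]], float]) -> FrozenSet[str]:
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        # The 'connectionist' component only orders the candidate rules...
        for premises, conclusion in sorted(rules, key=lambda r: -score(r, frozenset(derived))):
            # ...the hard logical check below is what licenses each derivation.
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return frozenset(derived)

rules = [(frozenset({"a", "b"}), "c"), (frozenset({"c"}), "d")]
print(forward_chain(frozenset({"a", "b"}), rules, heuristic_score))  # derives 'c' and 'd'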

> This seems to me to be the most logical way to build an AGI, and
> probably the most dangerous

I'd agree that it looks good when you first start attacking the problem.
Classic ANNs have some demonstrated competencies, classic symbolic
AI has some different demonstrated competencies, as do humans and
existing non-AI software. I was all for hybridising various forms of 
connectionism, fuzzy symbolic logic, genetic algorithms and more at one
point. It was only later that I began to realise that most if not all of
those mechanisms were neither optimal nor adequate, nor even all that useful.
Most dangerous, perhaps, in that highly hybridised systems that overcome
the representational communication barrier between their subcomponents
are probably unusually prone to early takeoff. It's easy to proceed without
really understanding what you're doing if you take the 'kitchen sink'
approach of tossing in everything that looks useful (letting the AI sort
out how to actually use it). Not all integrative projects are like that,
but quite a few are, and yes they are dangerous.

> I believe that less interaction means less monitoring and control, and
> therefore greater possibility that something will go wrong.

Plus humans in the decision loop inherently slow things down greatly
compared to an autonomous intelligence running at electronic speeds.

> As long as human brains remain an essential component of a superhuman
> intelligence, it seems less likely that this combined intelligence will
> destroy itself.

Probably true, but 'destroy itself' is a minor and recoverable failure
scenario unless the intelligence takes a good chunk of the scenery with
it. It's the 'start restructuring everything in reach according to a
non-Friendly goal system' outcome that's the real problem.

> If AGI is external or independent of human existence, then there is a
> great risk.  But if you follow the work of people trying to develop AGI,
> it seems that is where we are headed, if they are successful.

It's inevitable. Someone is going to build one eventually. The only
useful argument is 'we should develop intelligence enhancement first,
so that we have a better chance of getting AGI right'. Yo

Re: [singularity] Defining the Singularity

2006-10-25 Thread Starglider
My apologies for the duplication of my previous post; I thought my mail
client failed to send the original, but actually it just dropped the echo
from the server.

Matt Mahoney wrote:
> Michael Wilson wrote:
>> Hybrid approaches (e.g. what Ben's probably envisioning) are almost certainly
>> better than emergence-based theories... if fully formal FAI turns out to be
>> impossibly difficult we might have to downgrade to some form of
>> probabilistic verification.
> 
> I think a hybrid approach is still risky.

Alas, Seed AI is inherently risky. All we can do is compare levels of risk.

> By hybrid, do you mean AGI augmented with conventional computing
> capability?

All AGIs implemented on general purpose computers will have access to
'conventional computing capability' unless (successfully) kept in a sandbox
- and even then anything with a Turing-complete substrate has the potential
to develop such capability internally. 'Access to' isn't the same thing as
'augmented with' of course, but I'm not sure exactly what you mean by this
(and I'd rather wait for you to explain than guess). Certainly there is the
potential for combining formal and informal control mechanisms (as opposed
to just local inference and learning mechanisms, where informality is much
easier to render safe) in an FAI system. Given a good understanding of
what's going on I would expect this to be a big improvement on purely
informal/implicit methods, though it is possible to imagine someone
throwing together the two approaches in such a way that the result is even
worse than an informal approach on its own (largely because the kind of
reflective analysis and global transforms a constraint-based system can
support override what little protection the passive causality constraints
in a typical localised-connectionist system give you).

My statement above was referencing the structure of the theory used to
design/verify the FAI though, not the structure of the FAI itself. I'd
characterise a hybrid FAI theory as one that uses some directly provable
constraints to narrow down the range of possible behaviours, and then
some probabilistic calculation (possibly incorporating experimental
evidence) to show that the probability of the AGI staying Friendly is high.
The biggest issues with probabilistic calculations are the difficulty of
generalising them across self-modification, the fact that any nontrivial
uncertainty that compounds across self-modification steps will quickly
render the theory useless when applied to an AGI undergoing takeoff,
and the fact that humans are just so prone to making serious mistakes
when trying to reason probabilistically (even when formal probability
theory is used, though that's still /much/ better than intuition/guessing
for a problem this complex). As I've mentioned previously, I am optimistic
about using narrow AI to help develop AGI designs and FAI theories, and
have had some modest success in this area already. I'm not sure if this
counts as 'augmenting with conventional computing capability'.
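
As a toy illustration of the compounding problem mentioned above (the
numbers here are invented purely for illustration): if each
self-modification step independently preserves Friendliness with
probability p, the residual guarantee after n steps is p^n, which collapses
quickly for any nontrivial per-step uncertainty.

# Toy illustration of uncertainty compounding across self-modification steps.
# The per-step probabilities are assumptions for illustration only.
def survival_probability(p: float, n: int) -> float:
    """Chance that Friendliness survives n independent rewrite steps, each preserved with probability p."""
    return p ** n

for p in (0.999, 0.99, 0.95):
    print(f"p={p}: 100 steps -> {survival_probability(p, 100):.3f}, "
          f"1000 steps -> {survival_probability(p, 1000):.2e}")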

> Suppose we develop an AGI using a neural model, with all the strengths
> and weaknesses of humans, such as limited short term memory, inefficient
> and error prone symbolic reasoning and arithmetic skills, slow long term
> learning rate, inability to explicitly delete data, etc.  Then we
> observe:
> 
> A human with pencil and paper can solve many more problems than one
> without.
> A human with a calculator is even more powerful.
> A human with a computer and programming skills is even more powerful.
> A human with control over a network of millions of computers is even more
> powerful.
>
> Substitute AGI for human and you have all the ingredients to launch a 
> singularity.

Absolutely. Plus the AGI has the equivalent of these things interfaced
directly into its own 'brain', rather than manipulated through the slow and
unreliable physical interfaces a human has to use, and even a brain-like
AGI may well be running at a much higher effective clock rate than a human
in the first place. This is essentially why AGI is so dangerous even if you
don't accept hard and/or early takeoff in an AGI system on its own.

>  If your goal is friendly AI, then not only must you get it right, but so
> must the AGI when it programs the network to build a more powerful AGI,
> and so must that AGI, and so on.  You cannot make a mistake anywhere
> along the chain.

Thus probabilistic methods have a serious problem remaining effective under
recursive self-modification; any flaws in the original theory that don't
get quickly and correctly fixed by the AGI (which requires an accurate
meta-description of what your FAI theory is supposed to do...) are likely
to deviate the effective goal system out of the human-desirable space. If
you /have/ to use probabilistic methods, there are all kinds of mitigating
strategies you can take; Eliezer actually covered quite a few of them back 
in Creating A Friendly AI. But provable Friendliness (implemented with many
layers of redundancy just to

Re: [singularity] Defining the Singularity

2006-10-24 Thread Starglider
I'll try and avoid a repeat of the lengthy, fairly futile and extremely disruptive
discussion of Loosemore's assertions that occurred on the SL4 mailing
list. I am willing to address the implicit questions/assumptions about my
own position.

Richard Loosemore wrote:
> The contribution of complex systems science is not to send across a
> whole body of plug-and-play theoretical work: they only need to send
> across one idea (an empirical fact), and that is enough. This empirical 
> idea is the notion of the disconnectedness of global from local behavior 
> - what I have called the 'Global-Local Disconnect' and what, roughly 
> speaking, Wolfram calls 'Computational Irreducibility'.

This is only an issue if you're using open-ended selective dynamics on
or in a substrate with softly-constrained, implicitly-constrained or
unconstrained side effects. Nailing that statement down precisely would
take a few more paragraphs of definition, but I'll skip that for now. The
point is that plenty of complex engineered systems, including almost all
existing software systems, don't have this property. The assertion that
it is possible (for humans) to design an AGI with fully explicit and
rigorous side effect control is controversial and unproven; I'm optimistic
about it, but I'm not sure and I certainly wouldn't call it a fact. What
you failed to do was show that it is impossible, and indeed below you
seem to acknowledge that it may in fact be possible.

The question of whether it is more desirable to build an AGI with strong
structural constraints is more complicated. Eliezer Yudkowsky has
spent hundreds of thousands of words arguing fairly convincingly for
this, including a fairly good essay on the subject that was forwarded
to this list earlier by Ben, and I'm not going to rehash it here.

>> It is entirely possible to build an AI in such a way that the general
>>  course of its behavior is as reliable as the behavior of an Ideal
>> Gas: can't predict the position and momentum of all its particles,
>> but you sure can predict such overall characteristics as temperature,
>> pressure and volume.

A highly transhuman intelligence could probably do this, though I
suspect it would be very inefficient, partly because I expect you'd need
strong passive constraints on the power of local mechanisms (the kind
the brain has in abundance), which will always sacrifice performance
on many tasks compared to unconstrained or intelligently-verified
mechanisms. The chances of humans being able to do this are
pretty remote, much worse than the already not-promising chances
for doing constraint/logic-based FAI. Part of that is due to the fact that
while there are people making theoretical progress on constraint-based
analysis of AGI, all the suggestions for developing the essential theory
for this kind of FAI seem to involve running experiments on highly
dangerous proto-AGI or AGI systems (necessarily built before any
such theory can be developed and verified). Another problem is the
fact that people advocating this kind of approach usually don't
appreciate the difficulty of designing a good set of FAI goals in the
first place, nor the difficulty of verifying that an AGI has a precisely
human-like motivational structure if they're going with the dubious
plan of hoping an enhanced-human-equivalent can steer humanity
through the Singularity successfully. Finally the most serious problem
is that an AGI of this type isn't capable of doing safe full-scale self
modification until it has full competence in applying all of this as yet
undeveloped emergent-FAI theory; unlike constraint-based FAI you
don't get any help from the basic substrate and the self-modification
competence doesn't grow with the main AI. Until both the abstract
knowledge of the reliable-emergent-goal-system-design and the
Friendly goal system to use it properly are fully in place (i.e. in all of
your prototypes) you're relying on adversarial methods to prevent
arbitrary self-modification, hard takeoff and general bad news.

In short this approach is ridiculously risky and unlikely to work, orders
of magnitude more so than actively verified FAI on a rational AGI
substrate, which is already extremely difficult and pretty damn risky to
develop. Hybrid approaches (e.g. what Ben's probably envisioning) are
almost certainly better than emergence-based theories (and I use the
word theories loosely there), and I accept that if fully formal FAI turns
out to be impossibly difficult we might have to downgrade to some form
of probabilistic verification. I'd add that I have yet to see any evidence
that you or anyone else is actually making progress on 'emergent'
FAI design, or any evidence or even detailed arguments for an AGI
design capable of this.

> The motivational system of some types of AI (the types you would
> classify as tainted by complexity) can be made so reliable that the 
> likelihood of them becoming unfriendly would be similar to the 
> likelihood of the molecules of an Ideal Gas 

Re: [singularity] Defining the Singularity

2006-10-24 Thread Starglider
I have no wish to rehash the fairly futile and extremely disruptive
discussion of Loosemore's assertions that occurred on the SL4 mailing
list. I am willing to address the implicit questions/assumptions about my
own position.

Richard Loosemore wrote:
> The contribution of complex systems science is not to send across a
> whole body of plug-and-play theoretical work: they only need to send
> across one idea (an empirical fact), and that is enough. This empirical 
> idea is the notion of the disconnectedness of global from local behavior 
> - what I have called the 'Global-Local Disconnect' and what, roughly 
> speaking, Wolfram calls 'Computational Irreducibility'.

This is only an issue if you're using open-ended selective dynamics on
or in a substrate with softly-constrained, implicitly-constrained or
unconstrained side effects. Nailing that statement down precisely would
take a few more paragraphs of definition, but I'll skip that for now. The
point is that plenty of complex engineered systems, including almost all
existing software systems, don't have this property. The assertion that
it is possible (for humans) to design an AGI with fully explicit and
rigorous side effect control is controversial and unproven; I'm optimistic
about it, but I'm not sure and I certainly wouldn't call it a fact. What
you failed to do was show that it is impossible, and indeed below you
seem to acknowledge that it may in fact be possible.

The assertion that it is more desirable to build an AGI with strong
structural constraints is more complicated. Eliezer Yudkowsky has
spent hundreds of thousands of words arguing fairly convincingly for
this, and I'm not going to revisit that subject here.

>> It is entirely possible to build an AI in such a way that the general
>>  course of its behavior is as reliable as the behavior of an Ideal
>> Gas: can't predict the position and momentum of all its particles,
>> but you sure can predict such overall characteristics as temperature,
>> pressure and volume.

A highly transhuman intelligence could probably do this, though I
suspect it would be very inefficient, partly because I expect you'd need
strong passive constraints on the power of local mechanisms (the kind
the brain has in abundance), which will always sacrifice performance
on many tasks compared to unconstrained or intelligently-verified
mechanisms. The chances of humans being able to do this are
pretty remote, much worse than the already not-promising chances
for doing constraint/logic-based FAI. Part of that is due to the fact that
while there are people making theoretical progress on constraint-based
analysis of AGI, all the suggestions for developing the essential theory
for this kind of FAI seem to involve running experiments on highly
dangerous proto-AGI or AGI systems (necessarily built before any
such theory can be developed and verified). Another problem is the
fact that people advocating this kind of approach usually don't
appreciate the difficulty of designing a good set of FAI goals in the
first place, nor the difficulty of verifying that an AGI has a precisely
human-like motivational structure if they're going with the dubious
plan of hoping an enhanced-human-equivalent can steer humanity
through the Singularity successfully. Finally the most serious problem
is that an AGI of this type isn't capable of doing safe full-scale self
modification until it has full competence in applying all of this as yet
undeveloped emergent-FAI theory; unlike constraint-based FAI you
don't get any help from the basic substrate and the self-modification
competence doesn't grow with the main AI. Until both the abstract
knowledge of the reliable-emergent-goal-system-design and the
Friendly goal system to use it properly are fully in place (i.e. in all of
your prototypes) you're relying on adversarial methods to prevent
arbitrary self-modification, hard takeoff and general bad news.

In short it's ridiculously risky and unlikely to work, orders of magnitude
more so than actively verified FAI on a rational AGI substrate, which is
already extremely difficult and pretty damn risky to develop. Hybrid
approaches (e.g. what Ben's probably envisioning) are almost certainly
better than emergence-based theories (and I use the word theories
loosely there), and I accept that if fully formal FAI turns out to be
impossibly difficult we might have to downgrade to some form of
probabilistic verification.

> The motivational system of some types of AI (the types you would
> classify as tainted by complexity) can be made so reliable that the 
> likelihood of them becoming unfriendly would be similar to the 
> likelihood of the molecules of an Ideal Gas suddenly deciding to split 
> into two groups and head for opposite ends of their container.

Ok, let's see the design specs for one of these systems, along with
some evidence that it's scalable to AGI. Or is this just a personal
hunch?

> And by contrast, the type of system that the Rational/Normative AI 
> comm

Re: [singularity] Defining the Singularity

2006-10-23 Thread Starglider
On 23 Oct 2006 at 13:26, Ben Goertzel wrote:
> Whereas, my view is that it is precisely the effective combination of 
> probabilistic logic with complex systems science (including the notion of 
> emergence) that will lead to, finally, a coherent and useful theoretical 
> framework for designing and analyzing AGI systems... 

You know my position on 'complex systems science': it has yet to do
anything useful, is unlikely to ever help in AGI, and would create
FAI-incompatible systems even if it could. We don't really care about the
global dynamics
of arbitrary distributed systems anyway. What we care about is finding
systems that produce useful behaviour, where 'useful' consists of
a description of what we want plus an explicit or implicit description of
behaviour or outcomes that would be unacceptable. Creating a general
theory of how optimisation pressure is exerted on outcome sets, through
layered systems that implement progressive (mostly specialising)
transforms, covers the same kind of ground but is much more useful
(and hopefully a little easier, though by no means easy).
 
> I am also interested in creating a fundamental theoretical framework for 
> AGI, but am pursuing this on the backburner in parallel with practical work 
> on Novamente (even tho I personally find theoretical work more fun...).

I prefer practical work, but I've accepted that to have a nontrivial chance 
of success theory has to come first, and also that theory about what you
want has to come before theory about how to get it. My single biggest
disagreement with Eliezer is probably that I think it's possible to proceed
with a description of how you will specify what you actually want, rather
than an exact specification of what you want (i.e. that it's possible to
design an AGI that is capable of implementing a range of goal systems,
including the kind of Friendly goal systems that I hope will be invented).
Thus I'm doing AGI design rather than researching Friendliness theory
(though I /would/ be doing that if I was better equipped for it than AGI
research).

> I find that in working on the theoretical framework it is very helpful
> to proceed in the context of a well-fleshed-out practical design... 

Our positions on experimental work are actually quite close, but still
distinct in some important respects. For example, we differ on the
likelihood of being able to extrapolate experimental results on goal system
dynamics; at least these days you accept that any such extrapolation is futile
without a deep and verifiable understanding of the underlying functional
mechanisms. I mostly agree with Eliezer there in saying that if you had
an adequate understanding for extrapolation the experiments would
(probably) only be useful for additional confirmation, but conversely I do
think experimentation has an important role in developing tractable
algorithms.

Michael Wilson
Director of Research and Development
Bitphase AI Ltd - http://www.bitphase.com




Re: [singularity] Defining the Singularity

2006-10-23 Thread Starglider
On 23 Oct 2006 at 12:59, Ben Goertzel wrote:
>>> Ditto with just about anything else that's at all innovative -- e.g. was
>>> Einstein's General Relativity a fundamental new breakthrough, or just a
>>> tweak on prior insights by Riemann and Hilbert?
>> 
>> I wonder if this is a sublime form of irony for a horribly naïve and
>> arrogant analogy to GR I drew on SL4 some years back :) 
> 
> Yes, I do remember that entertaining comment of yours, way back when... ;) 
> ... I assume you have backpedaled on that particular assertion by now, 
> though... 

Well, I still believe that there is a theoretical framework to AGI design
that will prove both incredibly useful in building AGIs in general and
pretty much essential to designing stable Friendly goal systems (and the
rational, causally clean AGIs to implement them). In fact I'm more sure of
that now than I was then. What was horribly wrong and naïve of me was
the implication that Eliezer had actually found/developed this framework,
as of early 2004. He'd heavily inspired my own progress up to that point,
and we had a somewhat-shared initial peek into what seemed like a new
realm of exciting possibilities for building verified, rational seed AIs,
and there was a huge clash of egos going on on SL4 at the time. What
can I say, I got carried away and started spouting SIAI-glorifying 
hyperbole, which I soon regretted. Though I have remained often-publicly
opposed to emergence and 'fuzzy' design since first realising what the true
consequences (of the heavily enhanced-GA-based system I was working
on at the time) were, as far as I know I haven't made that particular
mistake again.

Michael Wilson
Director of Research and Development
Bitphase AI Ltd - http://www.bitphase.com




Re: [singularity] AGI funding: US versus China

2006-10-23 Thread Starglider
On 23 Oct 2006 at 9:39, Josh Treadwell wrote:
> This is a big problem. If China was a free nation, I wouldn't have any 
> qualms with it, but the first thing China will do with AGI is marginalize 
> human rights. Any nation who censors it's internet (violators are sent to 
> prisoner/slave camps) and sells organs of unwilling executed prisoners 
> (more are executed each year in china than the entire world combined) is 
> not a place I'd like AGI to be developed. I hope Hugo doesn't regret his 
> decision.

Last time I checked, Hugo de Garis was all for hard takeoff of arbitrary
AGIs as soon as possible, and damn the consequences. This is
someone who gleefully predicts massively destructive wars between
'terrans' and 'cosmists', and expects humanity to be made extinct by
'artilects', and actually wants to /hasten the arrival of this/. While I'd
have to characterise this goal system as quite literally insane, the
decision to accept funding from totalitarian regimes is actually a quite
rational consequence. His architecture (at least as of 'CAM-brain') is just
about as horribly emergent and uncontrollable/unpredictable as it is
possible to get. If you accept hard takeoff, and you're using an
architecture like that, then it doesn't make a jot of difference what petty
political goals your funders might have; they're as irrelevant as everyone
else's goals once the hard takeoff kicks in. Fortunately there's no short
term prospect of anything like that actually working, but given enough
zettaflops of nanotech-supplied compute power it might start to be a
serious problem. I'm guessing that his backers are looking for PR and/or
limited commercial spinoffs though.

Michael Wilson
Director of Research and Development
Bitphase AI Ltd - http://www.bitphase.com




Re: [singularity] Defining the Singularity

2006-10-23 Thread Starglider
On 23 Oct 2006 at 10:39, Ben Goertzel wrote:
> In the case of Novamente, we have sufficient academic credibility and know-
> how that we could easily publish a raft of journal papers on the details of 
> Novamente's design and preliminary experimentation.

That bumps your objective success probability /slightly/. Newell's Soar
architecture and derivatives have had hundreds (possibly thousands, I
haven't done a detailed check) of papers published on both design and
experimental work. Soar still doesn't do anything terribly impressive and I
doubt anyone here would consider it a realistic AGI candidate, though
clearly many academics still do. There are lots of slightly less extreme
academic examples, some of which actually resemble paper-generators
more than serious AGI attempts (EPIC for example, though there are plenty
of connectionist examples too). OTOH Eurisko was very impressive for
the time (and still interesting today) but produced (AFAIK) only two
papers.

> With this behind us, it would not be hard for us to get a moderate-sized team
> of somewhat-prestigious academic AI researchers on board ... and then we
> could almost surely raise funding from conventional government research
> funding  sources.

I think your ordering is reversed, unless you really have objectively
highly impressive stuff that academics can see the value of and potential
investors can't. Most academics have their own favourite pet architectures,
or at the very least back a general approach to AI that conflicts with
yours (and there's the general academic bias against big complicated 
systems with no magic algorithms). Stacks of cash and profligate grants
can change attitudes real quick though, given the scarcity of funding for
AGI projects. Or at least /apparent/ attitudes; most researchers will try
to continue doing what they were doing before (and believing the things
they believed before) with a minimum of renaming and restructuring to fit
in with whatever the people handing out funding think is cool. Just look at
the current situation with nanotechnology funding.

> This process would take a number of years but is a well-understood
> process and would be very likely to succeed. 

Possibly, for small values of 'moderate sized' and 'somewhat prestigious'.
Again, countless projects by AI academics never gained acceptance or
support beyond their own research teams; only a tiny fraction beat the odds
and started a genuine distributed research effort (poor replication of
results is one reason why so many scientists are skeptical of AI as a
field). To beat those odds you'd have to be keeping something fairly 
impressive under your hat - and in AI 'objectively impressive' generally
means 'does something that's impressive without you having to explain it'.
 
> The main problem then boils down to the Friendliness issue. Do we really 
> want to put a set of detailed scientific ideas, some validated via software 
> work already, that we believe are capable of dramatically accelerating 
> progress toward AGI, into the public domain?

Only if there's a tight correlation between the people who take your AGI 
ideas seriously (enough to attempt to replicate them) and the people who
take your FAI ideas seriously (assuming those ideas are right in the first
place). It's very difficult to say how good this correlation would be, as
there aren't really any past examples to go on. I agree that it's plausible
that the correlation could be low, and that this is a huge risk. My
previous email was not advocating disclosure as such; I was just pointing
out that trying to raise or solicit donations without a decent stand-alone
demo is a bad idea.

> As for your distinction of a "fundamental innovation" versus a "combination 
> of prior ideas," I find that is largely a matter of marketing.

Unfortunately that's true in practice. I personally believe that the
distinction can usefully be made at a more fundamental level; it's about
how the architecture is generated and developed, not what mechanisms
it contains, model of intelligence it's based on, which buzzwords it
complies with or the resources the development team have. In my opinion
the former is a better objective indicator of success probability than the
later, which is how I generated the ordering over AGI project success
probabilities in my previous email. It's a relatively subtle distinction
though and I'm not going to try and convince everyone else to adopt it;
I'm not sure it's even possible to make it without making a personal,
reasonably detailed study of many past AGI projects (which decent
professional AGI researchers will have done, but which most observers
won't have the time or expertise for).

> I could easily spin Novamente as a fundamental, radical innovative design
> OR as an integrative combination of prior ideas.

That would be talking about the functional details of the AI, and your 
rationale for putting them in. While this is what ultimately determines
whether the design will

Re: [singularity] Defining the Singularity

2006-10-23 Thread Starglider
On 22 Oct 2006 at 17:22, Samantha Atkins wrote:
> It is a lot easier I imagine to find many people willing and able to
> donate on the order of $100/month indefinitely to such a cause than to
> find one or a few people to put up the entire amount. I am sure that has
> already been kicked around.  Why wouldn't it work though?

There have been many, many well funded AGI projects in the past, public
and private. Most of them didn't produce anything useful at all. A few
managed some narrow AI spinoffs. Most of the directors of those projects
were just as confident about success as Ben and Peter are. All of them
were wrong. No-one on this list has produced any evidence (publicly) that
they can succeed where all previous attempts failed, other than cute
powerpoint slides - which all the previous projects had too. All you can do
is judge architectures by the vague descriptions given, and the history of AI
strongly suggests that even when full details are available, even so-called
experts completely suck at judging what will work and what won't. The
chances of arbitrary donors correctly ascertaining what approaches will
work are effectively zero. The usual strategy is to judge by hot buzzword
count and apparent project credibility (number of PhDs, papers published
by leader, how cool the website and offices are, number of glowing writeups
in specialist press; remember Thinking Machines Corp?). Needless to say,
this doesn't have a good track record either.

As far as I can see, there are only two good reasons to throw funding at a
specific AGI project you're not actually involved in (ignoring the critical
FAI problem for a moment); hard evidence that the software in question can
produce intelligent behaviour significantly in advance of the state of the
art, or a genuinely novel attack on the problem - not just a new mix of AI
concepts in the architecture (/everyone/ vaguely credible has that), but a
genuinely new methodology. Both of those have an expiry date after a few
years with no further progress. I'd say the SIAI had a genuinely new
methodology with the whole provable-FAI idea and to a lesser extent some
of the nonpublished Bayesian AGI stuff that immediately followed LOGI,
but I admit that they may well be past the 'no useful further results'
expiry date for continued support from strangers.

Setting up a structure that can handle the funding is a secondary issue.
It's nontrivial, but it's clearly within the range of what reasonably
competent and experienced people can do. The primary issue is evidence
that raises the probability that any one project is going to buck the very
high prior for failure, and neither hand-waving, buzzwords or powerpoint
(should) cut it. Even detailed descriptions of the architecture with
associated functional case studies, while interesting to read and perhaps
convincing for other experts, historically won't help non-expert donors
make the right choice. Radically novel projects like the SIAI /may/ be an
exception (in a good or bad way), but for relatively conventional groups
like AGIRI and AAII, insist on seeing some of this supposedly
already-amazing software before choosing which project to back.

Personally if I had to back an AGI project other than our research
approach at Bitphase, and I wasn't so dubious about his Friendliness
strategy, I'd go with James Rogers' project, but I'd still estimate a
less-than-5% chance of success even with indefinite funding. Ben would
be a little way behind that with the proviso that I know his Friendliness
strategy sucks, but he has been improving both that and his architecture
so it's conceivable (though alas unlikely) that he'll fix it in time. AAII 
would be some way back behind that, with the minor benefit that if their
architecture ever made it to AGI it's probably too opaque to undergo early
take-off, but with the huge downside that when it finally does enter an
accelerating recursive self-improvement phase what I know of the structure
strongly suggests that the results will be effectively arbitrary (i.e.
really  bad). As noted, hard demonstrations of both capability and scaling
(from anyone) will rapidly increase those probability estimates. I
understand why many researchers are so careful about disclosure, but
frankly without it I think it's unrealistic verging on dishonest to expect
significant donated funding (ignoring the question of why the hell
/companies/ would be fishing for donations instead of investment).

Michael Wilson
Director of Research and Development
Bitphase AI Ltd - http://www.bitphase.com





Re: [singularity] Defining the Singularity

2006-10-22 Thread Starglider
Samantha Atkins wrote:
> Of late I feel a lot of despair because I see lots of brilliant people
> seemingly mired in endlessly rehashing what-ifs, arcane philosophical
> points and willing to put off actually creating greater than human
> intelligence and transhuman tech indefinitely until they can somehow
> prove to their and our quite limited intelligence that all will be well.

As far as I'm aware the only researcher taking this point of view ATM is
Eliezer Yudkowsky (and implicitly, his assistants). Everyone else with
the capability is proceeding full steam ahead (at least, to the extent
that resources permit) with AGI development. I'm somewhat unusual in
that I'm proceeding with AGI component development, but I accept that
even if I'm successful I can't safely assemble those components before
someone comes up with a reasonably sound FAI scheme (and taking
moderately paranoid precautions against takeoff in the larger
subassemblies). Who other than Eliezer are you criticising here?

> I see brilliant idealistic people who don't bother to admit or examine
> what evil is now bearing down on them and their dreams because they
> believe the singularity is near inevitable and will make everything all
> better in the sweet by and bye.

That's true, but not so much of an issue. We don't have to actually
solve these problems directly, and as I've said most researchers are
already working as fast as they can given current resources. As such
I don't think a fuller appreciation of what's currently wrong with the
world would make much difference.

Michael Wilson
Director of Research and Development
Bitphase AI Ltd - http://www.bitphase.com

