Ben,
> I don't have a good argument on this point, just an intuition, based
> on the fact that generally speaking in narrow AI, inductive learning
> based rules based on a very broad range of experience, are much more
> robust than expert-encoded rules. The key is a broad range of
> experience, otherwise inductive learning can indeed lead to rules that
> are "overfit" to their training situations and don't generalize well
> to fundamentally novel situations.
I played around with expert systems years ago (I designed one to interpret a legal framework I was working on), and I'm familiar with the notion of inductive learning: using computers to generate algorithms that capture the patterns in large data sets. And I can see why the fuzzier system might be more robust in the face of partial novelty.
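Your overfitting point is easy to make concrete with a toy sketch (purely illustrative - a polynomial fit with numpy, nothing to do with Novamente's actual learning machinery): a rule induced from a narrow band of experience fails badly on novel inputs, while the same learner given broad experience holds up:

    import numpy as np

    rng = np.random.default_rng(0)
    world = np.sin                       # the "environment" being learned

    def induce_and_test(lo, hi, label):
        # Inductive learning: induce a rule (here a degree-6 polynomial)
        # from noisy samples of experience drawn from [lo, hi].
        x = rng.uniform(lo, hi, 60)
        y = world(x) + rng.normal(0.0, 0.05, 60)
        rule = np.polyfit(x, y, deg=6)
        # Test the induced rule on a "fundamentally novel" region, [4, 5].
        x_novel = np.linspace(4.0, 5.0, 50)
        err = np.mean((np.polyval(rule, x_novel) - world(x_novel)) ** 2)
        print(f"{label}: mean squared error on novel inputs = {err:.3g}")

    induce_and_test(0.0, 1.0, "narrow experience")  # overfit: fails badly
    induce_and_test(0.0, 5.0, "broad experience")   # robust: small error

The key, as you say, is the breadth of experience, not the inductive method itself.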
But I'm not proposing that AGIs rely only on pre-wired ethical drivers - a major program of experience-based learning would also be needed - just as you are planning.
And in any case I didn't propose that the modicum of hard-wiring take the form of a deductive 'expert system'-style rule base. That would be very inflexible as the sole basis for ethical judgement formation (and in any case the AGI itself would be capable of developing very good deductive rule bases and inductive expert system 'rule' bases without the need for these to be preloaded).
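The brittleness I'm conceding here shows up even in miniature. A hypothetical hand-coded rule base (these rules and situations are my inventions for illustration, not anything from your design) does exactly what its author anticipated and nothing more:

    # A hand-coded, 'expert system'-style ethical rule base (hypothetical).
    RULES = {
        "agent requests private data": "refuse",
        "action risks physical harm": "refuse",
        "action helps user, no harm": "proceed",
    }

    def judge(situation: str) -> str:
        # Deductive lookup: apply a preloaded rule if one matches exactly.
        if situation in RULES:
            return RULES[situation]
        # Silent on anything the rule author didn't anticipate.
        return "no applicable rule"

    print(judge("action risks physical harm"))          # -> refuse
    print(judge("action risks long-term social harm"))  # -> no applicable rule

A learner can interpolate between situations; a pure lookup cannot - which is why I agree it can't be the sole basis.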
> If there need to be multiple Novamentes (not clear -- one might be
> enough), they could be produced through "cloning" rather than raising
> each one from scratch.
Ok - I hadn't thought of cloning as a way to avoid having to directly train every Novamente.
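I take the point - in software, "raising" one agent and then duplicating its learned state is cheap. A minimal sketch of the idea (the Agent class is my invention, not your architecture):

    import copy

    class Agent:
        def __init__(self):
            self.knowledge = {}              # stands in for learned state

        def train(self, experiences):
            # The slow part: a long program of supervised experience.
            for situation, lesson in experiences:
                self.knowledge[situation] = lesson

    original = Agent()
    original.train([("sharing", "good"), ("deception", "bad")])

    # "Cloning": copy the learned state rather than repeat the teaching.
    clone = copy.deepcopy(original)
    assert clone.knowledge == original.knowledge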
But the idea of having just one Novamente seems somewhat unrealistic and quite risky to me.
If the Novamente design is going to enable bootstrapping as you plan, then your one Novamente is going to end up being very powerful. If you try to be the gatekeeper to this one powerful AGI then (a) the rest of the world will end up considering your organisation worse than Microsoft, and many of your clients will not want to be held to ransom by being dependent on your one AGI for their mission-critical work, and (b) the one super-Novamente might develop ideas of its own that might not include you or anyone else being the gatekeeper.
The idea of one super-Novamente is also dangerous because this one AGI will develop its own perspective on things, and given its growing power that perspective or bias could become very dangerous for anyone or anything that didn't fit in with it.
I think an AGI needs other AGIs to relate to as a community, so that a community of learning develops with multiple perspectives available. This, I think, is the only way that the accelerating bootstrapping of AGIs can be handled with any possibility of being safe.
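The safety value of a community is much like the statistical value of an ensemble: independent perspectives make any one member's bias visible. A toy sketch (the agents and their "judgements" are entirely made up):

    from collections import Counter

    def community_judgement(agents, situation):
        # Each independently-raised agent forms its own view...
        views = [agent(situation) for agent in agents]
        verdict, count = Counter(views).most_common(1)[0]
        # ...and an outlier view is exposed, not silently authoritative.
        dissent = len(views) - count
        return verdict, dissent

    # Three hypothetical agents; one has drifted into a biased perspective.
    agents = [
        lambda s: "refuse",
        lambda s: "refuse",
        lambda s: "proceed",   # the outlier a lone super-AGI could never reveal
    ]
    verdict, dissent = community_judgement(agents, "risky action")
    print(verdict, "with", dissent, "dissenting view(s)")  # refuse with 1

A single super-Novamente has no peer against which its drift could even be noticed.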
> The engineering/teaching of ethics in an AI system is pretty different
> from its evolution in natural systems...
Of course. But that is not to say that there is nothing to be learned from evolution about the value of building ethics into creatures that are very intelligent and very powerful.
You didn't respond to one part of my last message:
> Philip: So why not proceed to develop Novamentes down two different
> paths simultaneously - the path you have already designed - where
> experience-based learning is virtually the only strategy, and a variant
> where some Novamentes have a modicum of carefully designed pre-wiring
> for ethics......... (coupled with a major program of experience-based
> learning)?
On reflection I can well imagine that you are not ready to make any commitment to my suggestion to give the dual (simultaneous) development path approach a go. But would you be prepared to explore the possibility of such a dual-path approach? I think there would be much to be learned from at least examining it prior to making any commitment.
What do you think?
Cheers, Philip