Ben,
> > Philip: I think an AGI needs other AGIs to relate to as a community
> > so that a community of learning develops with multiple perspectives
> > available. This I think is the only way that the accelerating
> > bootstrapping of AGIs can be handled with any possibility of being
> > safe.
>
> Ben: That feels to me like a lot of anthropomorphizing...
Why? Why would humans be the only super-intelligent GI to have
perspectives or points of view? I would have thought it was inevitable
for any resource-limited/experience-limited GI system. And any AGI in
the real world is going to be resource and experience limited.
> To me, it's an unanswered question whether it's a better use of, say,
> 10^5 computers to make them all one Novamente, or to partition them
> into a society of Novamentes...
This was the argument that raged over mainframe vs mini/PC computers.
The question is only partly technical - there are many other issues
that will determine the outcome.
If for no other reason, the monopolies regulators are probably not
going to allow all the work requiring an AGI to go through one company.
Also, users of AGI services are not going to want to deal with a
monopolist - most big companies will want at the very least 2-3 AGI
service companies in the marketplace. And it's unlikely that these
service companies are going to want to buy all their AGI grunt from
just one company.
Even in the CPU market there's still AMD serving up a bit of
competition to Intel. And Windows isn't the only OS in the market.
And then there's the wider community - if there are going to be AGIs
at all, will the community rest easier if they think there is just one
super AGI? What do people think of Oracle's plan to have one big
government database?
In any case, it's clearly not safe to have just one AGI in existence -
if the one AGI goes feral, the rest of us are going to need access to
the power of some pretty powerful AGIs to contain/manage the feral
one. Humans have the advantage of numbers, but in the end we may not
have the intellectual power or speed to counter an AGI that is
actively setting out to threaten humans.
> > Philip: So why not proceed to develop Novamentes down two different
> > paths simultaneously: the path you have already designed, where
> > experience-based learning is virtually the only strategy, and a
> > variant where some Novamentes have a modicum of carefully designed
> > pre-wiring for ethics... (coupled with a major program of
> > experience-based learning)?
> Ben: I guess I'm accustomed to working in a limited-resources
> situation, where you just have to make an intuitive call as to the
> best way to do something and then go with it ... and then try the
> next way on the list if one's first way didn't work... Of course, if
> there are a lot of resources available, one can explore parallel
> paths simultaneously and do more of a breadth-first rather than a
> depth-first search through design space!
There is at least one other option that you haven't mentioned, and
that is to take longer to create the AGI via the 100% experience-based
learning route, freeing up some resources to devote to following the
'hard-wiring plus experiential learning' route as well.
It's not going to be the end of the world if we take a little longer
to create a safe AGI, but it could be the end of the line for all
humans, or at least those humans not allied with the AGI, if we get a
feral or dangerous AGI by mistake.
And maybe by pursuing both routes simultaneously you might generate
more goodwill, which could increase the resourcing levels a bit
further down the track.
Cheers, Philip