On Oct 17, 2006, at 2:45 PM, Michael Anissimov wrote:
> Mike,
> On 10/10/06, deering <[EMAIL PROTECTED]> wrote:
>> Going beyond the definition of the Singularity, we can make some educated guesses about the most likely conditions under which it will occur. Due to technological synergy, the creation of STHI (smarter-than-human intelligence) will happen coincident with the achievement of molecular manufacturing and the completion of our understanding of all basic biological molecular functions, including gene expression control and proteomics. For all three of these highly significant events to occur at the same time would ensure sudden and massive sociological disruption.
> Not at all. It should be theoretically possible to create an AGI that is superintelligent and has all this knowledge, but uses it in a non-disruptive way to help humanity.
Considering the state of the world today, I don't see how changes sufficient to be really helpful can be anything but disruptive of the status quo. Being non-disruptive per se is a non-goal.
> A Friendly AI could introduce technologies with feature sets that bring out the cooperative aspect of humanity rather than our warlike aspect. This might be what both a normatively altruistic AI and a collective volition AI would do.
We can make up whatever we want and claim that, because we made it up, it is possible. But is this actually productive?
> In that case you would have all these significant events without the accompanying sociological upheaval. You could even go back to a society superficially similar to the Middle Ages (or, say, steampunk novels), except that you are perpetually youthful and can fabricate tools out of thin air. For more on this:
No thanks.
- samantha