On 9/24/06, Ben Goertzel <[EMAIL PROTECTED]> wrote:
> Anyway, I am curious if anyone would like to share experiences they've
> had trying to get Singularitarian concepts across to ordinary (but
> let's assume college-educated) Joes out there. Successful experiences
> are valued but also unsuccessful ones. I'm specifically interested in

Personally, I've noticed that opposition to the idea of a Singularity
falls into two main camps:

1) Sure, we might get human-equivalent hardware in the near future,
but we're still nowhere near having the software for true AI.

2) We might get a Singularity within our lifetimes, but it's just as
likely to be a rather soft takeoff and thus not really *that* big of
an issue - life-changing, sure, but not substantially different from
the development of technology so far.

The difficulty with arguing against point 1 is that, well, I don't
know all that much that would support me in arguing against it. I've
had some limited success with quoting Kurzweil's "brain scanning
resolution is constantly getting better" graph and pointing out that
we'll become capable of doing a brute-force simulation at some point,
but as for anything more elegant, not much luck.

Moore's Law seems to work somewhat against point 2, but people often
question how long we can assume it to hold.

> approaches, metaphors, foci and so forth that have actually proved
> successful at waking non-nerd, non-SF-maniac human beings up to the
> idea that this idea of a coming Singularity is not **completely**
> absurd...

Myself, I've recently taken a liking to the Venus flytrap metaphor I
stole from Robert Freitas' Xenopsychology. To quote my in-the-works
introductory essay on the Singularity (yes, it seems to be
in-the-works indefinitely - short spurts of progress, after which I
can't be bothered to touch it for months at a time):

"In his 1984 paper Xenopsychology [3], Robert Freitas introduces the
concept of Sentience Quotient for determining a mind's intellect. It
is based on the size of the brain's neurons and their
information-processing capability. The dumbest possible brain would
have a single neuron massing as much as the entire universe and
require a time equal to the age of the universe to process one bit,
giving it an SQ of -70. The smartest possible brain allowed by the
laws of physics, on the other hand, would have an SQ of +50. While
this only reflects pure processing capability and doesn't take into
account the software running on the brains, it's still a useful rough
guideline.

So what does this have to do with artificial intelligences? Well,
Freitas estimates Venus flytraps to have an SQ of +1, while most
plants have an SQ of around -2. The SQ for humans is estimated at +13.
Freitas estimates that buildable electronic sentiences could have an
SQ of +23 - making the difference between us and advanced AIs *nearly
as large as that between humans and Venus flytraps*. It should be
obvious that, faced with such a gap, even the smartest humans would
stand no chance against the AI's intellect - the AI would have no more
reason to fear our opposition than we have to fear a genius
carnivorous plant suddenly developing a working plan for taking over
all of humanity."
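
For the arithmetic-minded, here is a quick back-of-the-envelope sketch
in Python of where those SQ figures come from. I'm assuming Freitas'
definition of SQ as the base-10 logarithm of a brain's processing rate
(bits per second) divided by its mass (kilograms), plus round numbers
- roughly 10^52 kg for the mass of the universe, roughly 4*10^17
seconds for its age, and ballpark human figures of ~10^10 neurons at
~10^3 bits/s in a ~1.4 kg brain - so treat the inputs as illustrative
rather than Freitas' exact ones:

import math

def sq(bits_per_second, mass_kg):
    # Freitas' Sentience Quotient: log10 of processing rate per unit of mass.
    return math.log10(bits_per_second / mass_kg)

# "Dumbest possible brain": one bit per age of the universe, massing as
# much as the entire universe.
print(round(sq(1 / 4e17, 1e52)))   # -70

# Rough human figures: ~10^10 neurons at ~10^3 bits/s each, ~1.4 kg brain.
print(round(sq(1e10 * 1e3, 1.4)))  # 13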

http://www.saunalahti.fi/~tspro1/Esitys/009.png has the same idea
compressed into a catchy presentation slide (some of the text is in
Finnish, but you ought to get the gist of it anyway).
