Olie,

It seems to me that the time-scale issue is critical here, and is
indeed the most dubious aspect of popular Singularitarian
prognostications.

It's quite possible to accept that

a) the advent of greater-than-human intelligence will likely lead to a
total transformation of reality, mind and society

b) technological advancement will likely lead us to this point eventually

without accepting that there is anything in the above that is worthy
of immediate attention.

Since most knowledgeable academic and industry scientists put the
advent of human-level AI at 2100 or later, most people are fairly well
justified in considering these issues (of powerful AIs and so forth)
as issues for their great-great-grandchildren, not themselves.

Furthermore, most knowledgeable biologists and physicians are not that
optimistic about radical life extension happening in the next 50 years
or so.  So, from this perspective, most people are fairly well
justified in considering their own death as extremely likely.

What makes all these Singularitarian issues seem more humanly palpable
(both in the positive and negative direction) is the idea that they
may occur within our own lifetimes, or our children's lifetimes....

It seems to me that, for most people, because the target date for
these wild changes is thought to be so far in the future, the
details of the changes don't seem worth thinking about.  It seems
worthwhile to me to separate the time-to-singularity issue from the
nature-of-singularity issue.  Kurzweil focuses on the time aspect, and
his unique and valuable contribution is to assemble a bunch of data
regarding timing, to make a solid (though by no means ironclad) case
for a mid-21st-century Singularity.  [I actually think it could
happen sooner if resources were focused on AGI, but that is another
story and not the focus of this message.]  But in some cases this
approach may lose people who would otherwise be interested: If people
don't buy his timing estimates, they may then ignore the rest of the
message, which is in a way more interesting and which has to do with
what will ensue from superhuman AGI **whenever** it occurs...

Whereas for those of us who grew up obsessed with SF, the far future
always DID seem worth thinking about; and the advent of the modern
Singularity meme simply served to place a bunch of familiar (and
exciting) ideas nearer in the future than had previously generally
been thought....

-- Ben

On 9/24/06, Olie Lamb <[EMAIL PROTECTED]> wrote:
On 9/24/06, Michael Anissimov <[EMAIL PROTECTED]> wrote:
> Ben,
>
> From what I've seen the Kurzweil approach is among the most
> effective... if by "Singularity" you mean "smarter than human
> intelligence making everything fly out the window", only a couple
> hundred people even understand this, and most of them arrived at it
> through Staring Into the Singularity.

Uh, I find this statement _highly_ dubious.
Just because a person understands something doesn't mean that it will
change their behaviour; that requires a whole different level of "get".

Firstly, I think the number is unsubstantiated.  Secondly, I don't know why
you value that essay so highly; I'd expect that only Singularitarian material
would point to it (a small number, as indicated), and only a very small
portion of that would lead new observers to the essay.

Thirdly,

Someone may understand that Smarter-Than-Human Intelligence (STHI) means
that everything changes.  They may understand that STHI means predictability
goes out the window... A lot of people don't really expect to be able to
predict much beyond the 10-year mark anyway.  They may accept that STHI has
a reasonable chance of happening in their lifetime.  They may accept that any
50-year projections, such as demographics or resource use, have a good
chance of being made extraordinarily wrong.  It generally doesn't affect
them any more than the idea that their house might be destroyed by a
calamity for which they are uninsured, or that they may make an unexpected
massive windfall.  It's not immediately obvious how to plan for such events,
so why try?

Most people don't believe that they would be in a position to influence the
course of development of AI or neuroscience.  A lot wouldn't believe
themselves capable of doing so without significant effort.  As for indirect
influence (SIAI etc.), well, they probably wouldn't expect anything
interesting to happen soon enough to make any such participation
worthwhile.

Some people might find the singularity an interesting concept.  But if
that's all it is - one concept - that's not going to hold their attention
for long, and it's sure as hell not enough to bother passing the meme on.

I like analogies, so let me look at a couple:

A lot of people find string theory interesting.  Getting that the
fundamental "bits" of our universe might be strings isn't that tricky; it's
also a pretty nifty idea.  Getting that our world might include 10
dimensions?  That can be easily understood.  Understanding how to
conceptualise 10 dimensions?  That's tricky.  Actually getting the detail of
any of this stuff, in the math?  That's tough.

The implications of string theory are wacky and potentially interesting, but
as presented, hardly riveting: it's not Hollywood movie stuff.  So why are
so many people so interested in string theory?  Because there is an /innate/
desire to know what our whole world is really like... Physicsy theories can
encompass the whole world in a single theory - great if you're lazy.

...

A heckuva lot of people understood the basic principle of global warming
30-40 years ago; a lot of them understood it well enough to recognise
that the behavioural issue is not specifically "greenhouse gases" but
"anthropogenically increased climatic forcing".  They understood the
implications (humans have the capacity to unintentionally fuck things up
catastrophically), but did it change their behaviour?  Generally not.

People are generally so heavily invested in a micro-universe that stepping
outside of it, into potential scenarios and broader perspectives, is
just... well, too much hard work.  It's the genre trap: interest in one
view/domain leads to more understanding of that view, leads to more work in
that view, leads to greater interest in that view/domain.   Whaddya know...

--Olie

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/[EMAIL PROTECTED]
