Samantha Atkins wrote:
> Of late I feel a lot of despair because I see lots of brilliant people
> seemingly mired in endlessly rehashing what-ifs, arcane philosophical
> points and willing to put off actually creating greater than human
> intelligence and transhuman tech indefinitely until they can somehow
> prove to their and our quite limited intelligence that all will be well.

As far as I'm aware the only researcher taking this point of view ATM is
Eliezer Yudkowsky (and implicitly, his assistants). Everyone else with
the capability is proceeding full steam ahead (at least, to the extent
that resources permit) with AGI development. I'm somewhat unusual in
that I'm proceeding with AGI component development, but I accept that
even if I'm successful I can't safely assemble those components until
someone comes up with a reasonably sound FAI scheme (and that I should
take moderately paranoid precautions against takeoff in the larger
subassemblies in the meantime). Who other than Eliezer are you
criticising here?

> I see brilliant idealistic people who don't bother to admit or examine
> what evil is now bearing down on them and their dreams because they
> believe the singularity is near inevitable and will make everything all
> better in the sweet by and by.

That's true, but it's not much of an issue. We don't have to solve
these problems directly, and as I've said, most researchers are
already working as fast as they can given current resources. As such,
I don't think a fuller appreciation of what's currently wrong with the
world would make much difference.

Michael Wilson
Director of Research and Development
Bitphase AI Ltd - http://www.bitphase.com

