On 9/12/06, Matt Mahoney <[EMAIL PROTECTED]> wrote:
While we are speculating on future technologies like AI and uploading leading to a Singularity, we should be aware that both of these are already happening.

The Internet already has the computational power and knowledge of several thousand human brains.  Its vastness makes us aware of our own mental limitations.  We can never know more than a tiny fraction of the knowledge accessible through search engines.  Instead, we rely on computers as an extension of our intelligence.  Knowing where to find information quickly is as close as we can get to actually knowing it.  New technologies like image recognition and natural language processing will make this extension more useful.

Uploading is occurring as well, every time we post our words and pictures on the Internet.  I realize this only gets a small fraction of our knowledge, but we would never want to upload everything anyway.  Much of the knowledge related to low level sensory processing and motor control would not be useful in a different physical embodiment.  Instead, we copy only what is important and useful.

Is the Internet friendly?  It seems like a silly question.  I think the same will be true of future AI.

-- Matt Mahoney, [EMAIL PROTECTED]
 
Ugh... I completely disagree.
 
"Uploading is occurring as well"
Ok, in some extremely limited, degenerate sense, yes, that is true. But its truth is completely tangential to the Singularity. None of this "uploading" you refer to here has anything to do with a Singularity. Only once someone uploads the actual processes of their consciousness, and starts upgrading those algorithms and their resident hardware, can we start talking about a Singularity coming from an upload.
 
"Is the Internet friendly?  It seems like a silly question.  I think the same will be true of future AI."
I seriously doubt that the algorithms required for consciousness will magically *arise* from simply connecting a lot of powerful things together, the way the internet simply 'arose' when we connected a bunch of computers. Even if that were somehow possible (...uh), virtually any generalization, ESPECIALLY this one, may hold for some mind-designs of AI, but certainly not for ALL POSSIBLE mind-designs. Which specific point in mind-design space the future AI turns out to occupy depends entirely on the initial code written by some programmers. It would be a rather difficult (heh..) argument for you to make that these unknown programmers of future conscious algorithms are more likely to write some particular mind-design for which the analogy quoted above holds than ANY other mind-design for which it does not.
 
I think the question, "will the AI be Friendly?", can only be answered AFTER you have the source code of the conscious algorithms sitting on your computer screen, and have rigorous prior theoretical knowledge of exactly how to make an AI Friendly.
 
Although, I think there is a chance (not a good one!) that a less rigorous approach to Friendliness in an AI implementation, if used strictly for the purpose of assisting people (like Eliezer) working on a more rigorous approach to Friendliness, and carried out extremely carefully, might not kill us all. Don't take that as an endorsement that it might be an okay thing to do.
 
-hank
 

This list is sponsored by AGIRI: http://www.agiri.org/email