Re: Bright Green Tomorrow [WAS Re: [singularity] QUESTION]

2007-10-23 Thread Matt Mahoney
--- Richard Loosemore <[EMAIL PROTECTED]> wrote: Let's assume for the moment that the very first AI is safe and friendly, and not an intelligent worm bent on swallowing the Internet. And let's also assume that once this SAFAI starts self-improving, it quickly advances to the point where it

Re: [singularity] QUESTION

2007-10-23 Thread Natasha Vita-More
At 01:10 AM 10/22/2007, AL wrote: Dear Sirs, You can call me madam :-) My question is: AGI, as I perceive your explanation of it, is when a computer gains/develops an ego and begins to consciously plot its own existence and make its own decisions. I think the term you are looking for is

Re: [singularity] Re: QUESTION

2007-10-23 Thread Charles D Hixson
Aleksei Riikonen wrote: On 10/22/07, Richard Loosemore <[EMAIL PROTECTED]> wrote: My own opinion is that the first AGI systems to be built will have extremely passive, quiet, peaceful "egos" that feel great empathy for the needs and aspirations of the human species. Sounds rather optim

Bright Green Tomorrow [WAS Re: [singularity] QUESTION]

2007-10-23 Thread Richard Loosemore
candice schuster wrote: Richard, Thank you for your response. I have read your other posts and understand what 'the story' is, so to speak. I understand where you are coming from, and when I talk about evolution theories, this is not to throw a 'stick in the wheel', so to speak, it is to think

RE: [singularity] QUESTION

2007-10-23 Thread YOST Andrew
Candice and others, Here's a wild idea: the fact that we are here, processing information in ways that make us believe we are contemplating why we are here and the purpose of life and the universe, logically implies that the energy/mass system from which we have been assembled (b

RE: [singularity] QUESTION

2007-10-23 Thread candice schuster
Richard, Thank you for your response. I have read your other posts and understand what 'the story' is, so to speak. I understand where you are coming from, and when I talk about evolution theories, this is not to throw a 'stick in the wheel', so to speak, it is to think with a universal mind.

Re: [singularity] QUESTION

2007-10-23 Thread Richard Loosemore
candice schuster wrote: Ok Richard, let's talk about your scenario... define 'Mad', 'Irrational' and 'Improbable scenario'? Let's look at some of Charles Darwin's and Richard Dawkins' theories as some examples in this improbable scenario. Ever heard of a Meme? Or the Memetic Theory? I woul

Re: [singularity] QUESTION

2007-10-23 Thread Richard Loosemore
Matt Mahoney wrote: --- Richard Loosemore <[EMAIL PROTECTED]> wrote: This is nonsense: the result of giving way to science fiction fantasies instead of thinking through the ACTUAL course of events. If the first one is benign, the scenario below will be impossible, and if the first one is no

Re: [singularity] QUESTION

2007-10-23 Thread Richard Loosemore
Vladimir Nesov wrote: On 10/23/07, Richard Loosemore <[EMAIL PROTECTED]> wrote: To make a system do something organized, you would have to give it goals and motivations. These would have to be designed: you could not build a "thinking part" and then le

Re: [singularity] CONSCIOUSNESS

2007-10-23 Thread Richard Loosemore
albert medina wrote: Dear Sir, Pardon me for intruding. As you said, the divergent viewpoints on AI, AGI, SYNBIO, NANO are all over the map, and the future is looking more like an uncontrolled "experiment". I believe it is not an "uncontrolled" experiment, because most of the divergen

Re: [singularity] QUESTION

2007-10-23 Thread Vladimir Nesov
On 10/23/07, Richard Loosemore <[EMAIL PROTECTED]> wrote: > > To make a system do something organized, you would have to give it goals > and motivations. These would have to be designed: you could not build > a "thinking part" and then leave it to come up with motivations of its > own. This is a

[singularity] CONSCIOUSNESS

2007-10-23 Thread albert medina
Dear Sir, Pardon me for intruding. As you said, the divergent viewpoints on AI, AGI, SYNBIO, NANO are all over the map, and the future is looking more like an uncontrolled "experiment". I would like to posit a supplementary viewpoint for you to contemplate, one that may support

Re: [singularity] QUESTION

2007-10-23 Thread Richard Loosemore
[EMAIL PROTECTED] wrote: Hello Richard, If it's not too lengthy and unwieldy to answer, or give a general sense as to why you and various researchers think so... Why is it that in the same e-mail you can make the statement so confidently that "ego" or sense of selfhood is not somethin

Re: [singularity] QUESTION

2007-10-23 Thread stolzy
Hello Richard, If it's not too lengthy and unwieldy to answer, or give a general sense as to why you and various researchers think so... Why is it that in the same e-mail you can make the statement so confidently that "ego" or sense of selfhood is not something that the naive observer shou

RE: [singularity] QUESTION

2007-10-23 Thread candice schuster
Ok Richard, let's talk about your scenario... define 'Mad', 'Irrational' and 'Improbable scenario'? Let's look at some of Charles Darwin's and Richard Dawkins' theories as some examples in this improbable scenario. Ever heard of a Meme? Or the Memetic Theory? I would like to say that based