Re: [singularity] How to Stop Fantasying About Future AGI's

2007-10-24 Thread Mike Tintner
Candice: My sentiments exactly...which is why in the first place I said we should be achieving Super Intelligence on an individual level... Er, that's interesting - because there doesn't seem to be much interest here in the future of human nature as distinct from AGI's. I personally think we'll

Re: [singularity] How to Stop Fantasying About Future AGI's

2007-10-24 Thread Richard Loosemore
candice schuster wrote: Which, down the line, Richard, is why I asked for a technical thesis on how you propose this entire theory of yours is going to work. Until that 'blueprint', so to speak, is mapped out, it's fantasy! Well, there are many, many other people who have worked out the detai

Re: [singularity] How to Stop Fantasying About Future AGI's

2007-10-24 Thread Thomas McCabe
On 10/24/07, Mike Tintner <[EMAIL PROTECTED]> wrote: > If you look at what I actually wrote, you'll see that I don't claim > (natural) evolution has any role in AGI/robotic evolution. My point is that > you wouldn't dream of speculating so baselessly about the future of natural > evolution, so why spe

Re: [singularity] How to Stop Fantasying About Future AGI's

2007-10-24 Thread Richard Loosemore
candice schuster wrote: LOL! You make me laugh Richard...in a good way that is. "I have a bad feeling about this discussion, really I do." If it's a discussion, why would you have a bad feeling about it? The point at hand is discussion, not bad feelings. I am joking now though as I

Re: [singularity] How to Stop Fantasying About Future AGI's

2007-10-24 Thread Thomas McCabe
On 10/24/07, Mike Tintner <[EMAIL PROTECTED]> wrote: > Every speculation on this board about the nature of future AGI's has been > pure fantasy. Even those which try to dress themselves up in some semblance > of scientific reasoning. All this speculation, for example, about the > friendliness and e

RE: [singularity] How to Stop Fantasying About Future AGI's

2007-10-24 Thread candice schuster
Which, down the line, Richard, is why I asked for a technical thesis on how you propose this entire theory of yours is going to work. Until that 'blueprint', so to speak, is mapped out, it's fantasy! Candice

RE: [singularity] How to Stop Fantasying About Future AGI's

2007-10-24 Thread candice schuster
LOL! You make me laugh Richard...in a good way that is. "I have a bad feeling about this discussion, really I do." If it's a discussion, why would you have a bad feeling about it? The point at hand is discussion, not bad feelings. I am joking now though as I am purposely using your

Re: [singularity] How to Stop Fantasying About Future AGI's

2007-10-24 Thread Richard Loosemore
Mike Tintner wrote: [snip] When you and others speculate about the future emotional systems of AGI's though - that is not in any way based on any comparable reality. There are no machines with functioning emotional systems at the moment on which you can base predictions. When the Wright bro

Re: [singularity] How to Stop Fantasying About Future AGI's

2007-10-24 Thread Mike Tintner
If you look at what I actually wrote, you'll see that I don't claim (natural) evolution has any role in AGI/robotic evolution. My point is that you wouldn't dream of speculating so baselessly about the future of natural evolution, so why speculate baselessly about AGI evolution? I should explain

[singularity] Question ... Singularity book ...

2007-10-24 Thread Richard Loosemore
[With apologies for cross-posting] A question for you all. Over the last several years I have assembled a lot of material on topics like: - The "Bright Green Tomorrow" idea that I mentioned yesterday, about what the world might be like after the singularity - The Friendliness issue (

[singularity] Positive versus Negative discussion of ideas

2007-10-24 Thread Richard Loosemore
YOST Andrew wrote: Easy Richard, Matt's ideas, as well as yours, are thought provoking and worth serious consideration. Andrew Yost Fair comment: my goal is not to squash anyone else's ideas just for the sake of it. But I do have a real concern: it is very, very easy for people to destr

RE: Bright Green Tomorrow [WAS Re: [singularity] QUESTION]

2007-10-24 Thread YOST Andrew
Easy Richard, Matt's ideas, as well as yours, are thought provoking and worth serious consideration. Andrew Yost

Re: Bright Green Tomorrow [WAS Re: [singularity] QUESTION]

2007-10-24 Thread Richard Loosemore
candice schuster wrote: Hi Richard, Without getting too technical on you...how do you propose implementing these ideas of yours? In what sense? The point is that "implementation" would be done by the AGIs, after we produce a blueprint for what we want. Richard Loosemore

Re: [singularity] How to Stop Fantasying About Future AGI's

2007-10-24 Thread Richard Loosemore
candice schuster wrote: BRAVO Mike! My sentiments exactly...which is why in the first place I said we should be achieving Super Intelligence on an individual level...we are, like Andrew mentioned too, Organic Robots. I still find myself pondering on Richard's last post; however, I also find my

Re: [singularity] How to Stop Fantasying About Future AGI's

2007-10-24 Thread Richard Loosemore
You could start by noticing that I already pointed out that evolution cannot play any possible role. I rather suspect that the things that you call "speculation" and "fantasy" only seem that way to you because you have not understood them, since, in fact, you have not addressed any of

Re: Bright Green Tomorrow [WAS Re: [singularity] QUESTION]

2007-10-24 Thread Richard Loosemore
This is a perfect example of how one person comes up with some positive, constructive ideas and then someone else waltzes right in, pays no attention to the actual arguments, pays no attention to the relative probability of different outcomes, but just sneers at the whole idea with a

Re: [singularity] PhD Ideas

2007-10-24 Thread Natasha Vita-More
At 07:09 AM 10/24/2007, XAV wrote: I would be interested to know if there is a group of PhD students researching this field, or other discussion forums, that would help me to formulate my research question. I am more interested in the philosophical approach than the programming side of AI due to the

Re: [singularity] PhD Ideas

2007-10-24 Thread arcange1m
I would be interested in hearing about anything you discover, Xavier. The consulting firm I am with is currently forming a virtualization group and I am contemplating joining. Ben, I am working with Tyler Emerson right now to get something published around AI & business. If you are interested

Re: [singularity] PhD Ideas

2007-10-24 Thread A. T. Murray
Ben Goertzel wrote: > [...] > [P]erhaps you could focus on the environment rather > than the AI itself, and you could design a learning > environment for AI systems in virtual worlds. "Write a doctoral dissertation, trigger a Singularity" is the subject at http://mentifex.virtualentity.com/e

Re: [singularity] PhD Ideas

2007-10-24 Thread Benjamin Goertzel
Another related suggestion is to create social-psychology simulations in virtual worlds... Standard social-psych sim games like stag hunt (http://en.wikipedia.org/wiki/Stag_hunt), tragedy of the commons (http://www.enviroliteracy.org/pdf/materials/1132.pdf), etc. could be implemented in a virtual
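[To make the suggestion concrete: a stag hunt reduces to a tiny payoff simulation. The following minimal Python sketch is illustrative only - the payoff values and the play_round/average_payoffs helpers are assumptions chosen for demonstration, not anything specified in Ben's post.]

# Minimal stag-hunt simulation of the kind Ben suggests embedding in a
# virtual world. Payoff values are illustrative assumptions (standard
# stag-hunt ordering: stag/stag beats hare, which beats hunting stag alone).
import random

PAYOFFS = {
    ("stag", "stag"): (3, 3),   # coordinated cooperation pays best for both
    ("stag", "hare"): (0, 2),   # the lone stag hunter gets nothing
    ("hare", "stag"): (2, 0),
    ("hare", "hare"): (2, 2),   # safe but mediocre
}

def play_round(p_coop_a, p_coop_b):
    """Each agent hunts stag with its own probability, else hunts hare."""
    a = "stag" if random.random() < p_coop_a else "hare"
    b = "stag" if random.random() < p_coop_b else "hare"
    return PAYOFFS[(a, b)]

def average_payoffs(p_coop_a, p_coop_b, rounds=10000):
    """Average per-round payoff for each agent over many rounds."""
    total_a = total_b = 0
    for _ in range(rounds):
        pa, pb = play_round(p_coop_a, p_coop_b)
        total_a += pa
        total_b += pb
    return total_a / rounds, total_b / rounds

if __name__ == "__main__":
    # Mutually trusting agents coordinate on the high-payoff equilibrium...
    print(average_payoffs(0.9, 0.9))
    # ...while mutual distrust collapses toward the safe hare/hare outcome.
    print(average_payoffs(0.2, 0.2))

[Embedding this in a virtual world would amount to replacing the fixed cooperation probabilities with policies learned by embodied agents; the payoff table is the part the environment designer controls.]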

Re: [singularity] PhD Ideas

2007-10-24 Thread Xavier Laurent
Hello Ben, many thanks for your reply and ideas. Regards, Xav. Benjamin Goertzel wrote: Hi, Right now, doing any serious AI stuff in virtual worlds definitely requires some serious programming. However, here is one suggestion: perhaps you could focus on the environment rather than the AI

Re: [singularity] How to Stop Fantasying About Future AGI's

2007-10-24 Thread Stefan Pernar
Dear Mike, Human evolution would have to be towards ever higher levels of fitness. However, given our rather slow reproductive cycle and the relative proximity of the singularity, I do not think that even one more generation - ~25 years - will come and go before the inefficient optimization mechani

Re: [singularity] PhD Ideas

2007-10-24 Thread Benjamin Goertzel
btw... An alternative to SL might be Metaplace, which seems to have a more solid software architecture, but it's still only in alpha, so I can't say for sure how useful it will be... ben On 10/24/07, Benjamin Goertzel <[EMAIL PROTECTED]> wrote: > > > Hi, > > Right now, doing any serious AI stuff

Re: [singularity] PhD Ideas

2007-10-24 Thread Benjamin Goertzel
Hi, Right now, doing any serious AI stuff in virtual worlds definitely requires some serious programming. However, here is one suggestion: perhaps you could focus on the environment rather than the AI itself, and you could design a learning environment for AI systems in virtual worlds. For in

Re: [singularity] How to Stop Fantasying About Future AGI's

2007-10-24 Thread Benjamin Goertzel
On 10/24/07, Mike Tintner <[EMAIL PROTECTED]> wrote: > > Every speculation on this board about the nature of future AGI's has been > pure fantasy. Even those which try to dress themselves up in some > semblance > of scientific reasoning. All this speculation, for example, about the > friendliness a

[singularity] PhD Ideas

2007-10-24 Thread Xavier Laurent
Hello, my name is Xavier; I am doing a part-time PhD in the UK. I am currently registered with the Creative Media department. I have just recently started and have not yet come up with a research question; I still need to read more. But I have narrowed down my interests to embodiment in virtual worlds

RE: Bright Green Tomorrow [WAS Re: [singularity] QUESTION]

2007-10-24 Thread candice schuster
Hi Richard, Without getting too technical on you...how do you propose implementing these ideas of yours? Candice

RE: [singularity] QUESTION

2007-10-24 Thread candice schuster
Wild idea? Far from it I would say...that is evolution evolving...bit like Emotional Intelligence! > Candice and others, Here's a wild idea: Simply because we a

RE: [singularity] How to Stop Fantasying About Future AGI's

2007-10-24 Thread candice schuster
BRAVO Mike! My sentiments exactly...which is why in the first place I said we should be achieving Super Intelligence on an individual level...we are, like Andrew mentioned too, Organic Robots. I still find myself pondering on Richard's last post; however, I also find myself wondering does Rich

[singularity] How to Stop Fantasying About Future AGI's

2007-10-24 Thread Mike Tintner
Every speculation on this board about the nature of future AGI's has been pure fantasy. Even those which try to dress themselves up in some semblance of scientific reasoning. All this speculation, for example, about the friendliness and emotions of future AGI's has been nonsense - and often fr