Re: [singularity] New list announcement: fai-logistics

2008-04-26 Thread Samantha Atkins
On Sat, Apr 26, 2008 at 7:48 AM, Thomas McCabe <[EMAIL PROTECTED]> wrote: > On Thu, Apr 24, 2008 at 3:16 AM, Samantha Atkins <[EMAIL PROTECTED]> wrote: > > Thomas McCabe wrote: > > > Popularity is irrelevant. > > Popularity, of course, is not the ultim

Re: [singularity] Vista/AGI

2008-04-24 Thread Samantha Atkins
J. Andrew Rogers wrote: On Apr 6, 2008, at 9:38 AM, Ben Goertzel wrote: That's surely part of it ... but investors have put big $$ into much LESS mature projects in areas such as nanotech and quantum computing. This is because nanotech and quantum computing can be readily and easily packag

Re: [singularity] New list announcement: fai-logistics

2008-04-24 Thread Samantha Atkins
Thomas McCabe wrote: On 4/18/08, Richard Loosemore <[EMAIL PROTECTED]> wrote: Thomas McCabe wrote: On 4/18/08, Richard Loosemore <[EMAIL PROTECTED]> wrote: You repeatedly insinuate, in your comments above, that the idea is not taken seriously by anyone, in spite of the fact I

Re: [singularity] Vista/AGI

2008-04-13 Thread Samantha Atkins
Mike Tintner wrote: Samantha: From what you said above, $50M will do the entire job. If that is all that is standing between us and AGI then surely we can get on with it in all haste. Oh for gawdsake, this is such a tedious discussion. I would suggest the following is a reasonable *framework*

Re: [singularity] Vista/AGI

2008-04-12 Thread Samantha Atkins
Ben Goertzel wrote: Much of this discussion is very abstract, which is I guess how you think about these issues when you don't have a specific AGI design in mind. My view is a little different. If the Novamente design is basically correct, there's no way it can possibly take thousands or hundre

Re: [singularity] Re: Promoting an A.S.P.C,A.G.I.

2008-04-09 Thread Samantha  Atkins
On Apr 9, 2008, at 12:33 PM, Derek Zahn wrote: Matt Mahoney writes: > Just what do you want out of AGI? Something that thinks like a person or > something that does what you ask it to? The "or" is interesting. If it really "thinks like a person" and at at least human level then I doubt

Re: [singularity] Vista/AGI

2008-04-06 Thread Samantha Atkins
Arguably, many of the problems of Vista, including its legendary slippages, were the direct result of having thousands of merely human programmers involved. That complex monkey interaction is enough to kill almost anything interesting. - samantha Panu Horsmalahti wrote: Just because it takes

Re: [singularity] Definitions

2008-02-19 Thread Samantha Atkins
Richard Loosemore wrote: John K Clark wrote: And I will define consciousness just as soon as you define "define". Ah, but that is exactly my approach. Thus, the subtitle I gave to my 2006 conference paper was "Explaining Consciousness by Explaining That You Cannot Explain it, Because Your E

Re: [singularity] Multi-Multi-....-Multiverse

2008-02-02 Thread Samantha Atkins
On Feb 2, 2008, at 7:29 AM, gifting wrote: WTF (I can only assume what that stands for) are you such an angry person? Or is linear thinking the only possible solution for your VotW (guess what that stands for)? I am not angry. I am bored with what seems like endless, often off-subject

Re: [singularity] Wrong focus?

2008-02-02 Thread Samantha Atkins
On Jan 27, 2008, at 5:40 AM, Mike Tintner wrote: Ben: MT: Venter has changed everything today - including the paradigms that govern both science and AI. Ben: Let's not overblow things -- please note that Venter's team has not yet synthesized an artificial organism. Here's why I think V

Re: [singularity] Multi-Multi-....-Multiverse

2008-02-02 Thread Samantha Atkins
WTF does this have to do with AGI or Singularity? I hope the AGI gets here soon. We Stupid Monkeys get damn tiresome. - samantha On Jan 29, 2008, at 7:06 AM, gifting wrote: On 29 Jan 2008, at 14:13, Vladimir Nesov wrote: On Jan 29, 2008 11:49 AM, Ben Goertzel <[EMAIL PROTECTED]> wrote:

Re: [singularity] Wrong focus?

2008-02-02 Thread Samantha Atkins
On Jan 28, 2008, at 6:43 AM, Mike Tintner wrote: Stathis: Are you simply arguing that an embodied AI that can interact with the real world will find it easier to learn and develop, or are you arguing that there is a fundamental reason why an AI can't develop in a purely virtual environment

Re: [singularity] Wrong focus?

2008-01-31 Thread Samantha Atkins
On Jan 27, 2008, at 6:18 AM, Mike Tintner wrote: Samantha: MT: You've been fooled by the puppet. It doesn't work without the puppeteer. Samantha: What's that, élan vital, a "soul", a "consciousness" that is independent of the puppet? It's significant that you make quite the wrong assumpt

Re: [singularity] Wrong focus?

2008-01-26 Thread Samantha Atkins
On Jan 26, 2008, at 6:07 PM, Mike Tintner wrote: Tom: A computer is not "disembodied" any more than you are. Silicon, as a substrate, is fully equivalent to biological neurons in terms of theoretical problem-solving ability. You've been fooled by the puppet. It doesn't work without the pupp

Re: [singularity] Wrong focus?

2008-01-26 Thread Samantha Atkins
On Jan 26, 2008, at 3:59 PM, Mike Tintner wrote: Ben, Thanks for reply. I think though that Samantha may be more representative - i.e. most here simply aren't interested in non-computer alternatives. Which is fine. The Singularity Institute exists for one purpose. That I point that ou

Re: [singularity] The Extropian Creed by Ben

2008-01-26 Thread Samantha Atkins
On Jan 26, 2008, at 2:36 PM, Mike Tintner wrote: Gudrun: I am an artist who is interested in science, in utopia and seemingly impossible projects. I also came across a lot of artists with OC traits. ... The OCAP, actually the obsessive compulsive 'arctificial' project .. These new OCA entitie

Re: [singularity] The Extropian Creed by Ben

2008-01-26 Thread Samantha Atkins
On Jan 26, 2008, at 11:13 AM, [EMAIL PROTECTED] wrote: Quoting Natasha Vita-More <[EMAIL PROTECTED]>: At 03:04 PM 1/24/2008, Gudrun wrote: and N. Vita-More This is confusing. Fine that extropians want to self-improve. That ALL humanity should improve is quite questionable. Does all h

Re: [singularity] Wrong focus?

2008-01-26 Thread Samantha Atkins
On Jan 26, 2008, at 9:57 AM, Bryan Bishop wrote: On Saturday 26 January 2008, Mike Tintner wrote: Why does discussion never (unless I've missed something - in which case apologies) focus on the more realistic future "threats"/possibilities - future artificial species as opposed to future com

Re: [singularity] The Extropian Creed by Ben

2008-01-25 Thread Samantha  Atkins
On Jan 25, 2008, at 10:14 AM, Natasha Vita-More wrote: The idea of useless technology is developed in wearables more than in bioart. Steve's perspective is more political than artistic in regards to uselessness, don't you think? My paper, which includes an interview with him, is publishe

Re: [singularity] Re: CEV

2007-10-27 Thread Samantha  Atkins
On Oct 27, 2007, at 1:55 AM, Aleksei Riikonen wrote: On 10/27/07, Stefan Pernar <[EMAIL PROTECTED]> wrote: On 10/27/07, Aleksei Riikonen <[EMAIL PROTECTED]> wrote: You seem to have a need to personally give a final answer to "What is 'good'?" -- an answer to what moral rules the universe s

Re: [singularity] Reduced activism

2007-08-19 Thread Samantha Atkins
On Aug 19, 2007, at 12:26 PM, Matt Mahoney wrote: I was never really a Singularity activist, but 1. I realized the singularity is coming and nothing can stop it. Not so. Humanity could so harm its technological base as to postpone Singularity on this planet for quite some time. We could

Re: [singularity] ESSAY: Why care about artificial intelligence?

2007-07-14 Thread Samantha Atkins
Alan Grimes wrote: If it's a problem may I suggest you use a more user friendly terminal such as gnome-terminal or konsole. They have profiles that can be edited through the GUI. Not a bad suggestion, lemme see if my distro will let me kill Xterm... crap, it's depended on by xinit, which i

Re: [singularity] critiques of Eliezer's views on AI (was: Re: Personal attacks)

2007-07-02 Thread Samantha Atkins
Tom McCabe wrote: --- Samantha Atkins <[EMAIL PROTECTED]> wrote: Tom McCabe wrote: --- Samantha Atkins <[EMAIL PROTECTED]> wrote: Out of the bazillions of possible ways to configure matter only a ridiculously tiny fraction are more

Re: [singularity] AI concerns

2007-07-02 Thread Samantha Atkins
Charles D Hixson wrote: Samantha Atkins wrote: Sergey A. Novitsky wrote: Dear all, ... o Be deprived of free will or be given limited free will (if such a concept is applicable to AI). See above, no effective means of control. - samantha There is *one* effective

Re: [singularity] AI concerns

2007-07-02 Thread Samantha Atkins
Alan Grimes wrote: Samantha Atkins wrote: Alan Grimes wrote: Available computing power doesn't yet match that of the human brain, but I see your point, What makes you so sure of that? It has been computed countless times, here and elsewhere, as I am sur

Re: [singularity] Top AI Services to Humans

2007-07-02 Thread Samantha Atkins
Tom McCabe wrote: Okay, to start with: - Total control over the structure of our minds, with an AI-provided user-friendly interface. - The elimination of war, hunger, disease, old age, heart disease and a whole bunch of other undesirable stuff. - The ability to build anything that is physically

Re: [singularity] critiques of Eliezer's views on AI

2007-07-02 Thread Samantha Atkins
Colin Tate-Majcher wrote: When you talk about "uploading" are you referring to creating a copy of your consciousness? If that's the case then what do you do after uploading, continue on with a mediocre existence while your cyber-duplicate shoots past you? Sure, it would have all of those won

Re: [singularity] critiques of Eliezer's views on AI (was: Re: Personal attacks)

2007-07-02 Thread Samantha Atkins
Tom McCabe wrote: --- Samantha Atkins <[EMAIL PROTECTED]> wrote: Out of the bazillions of possible ways to configure matter only a ridiculously tiny fraction are more intelligent than a cockroach. Yet it did not take any grand design effort upfront to arrive at a world overru

Re: [singularity] AI concerns

2007-06-30 Thread Samantha Atkins
Alan Grimes wrote: Available computing power doesn't yet match that of the human brain, but I see your point, What makes you so sure of that? It has been computed countless times, here and elsewhere, as I am sure you are aware, so why do you ask?
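
For readers who have not seen the calculation Samantha alludes to, here is a minimal sketch of one commonly cited back-of-the-envelope estimate; the neuron count, synapse count, and firing rate below are rough order-of-magnitude assumptions, not measurements:

    # Commonly cited order-of-magnitude estimate of raw brain capacity.
    # All three figures below are rough assumptions, not measurements.
    neurons = 1e11        # ~10^11 neurons in the human brain
    synapses_per = 1e4    # ~10^4 synapses per neuron
    firing_hz = 1e2       # ~100 Hz upper-end firing rate
    print(f"~{neurons * synapses_per * firing_hz:.0e} synaptic events/s")
    # -> ~1e+17, far above any single machine available in 2007

Estimates of this kind are what make the "not yet matched" claim plausible, though the right multipliers are disputed.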

Re: [singularity] AI concerns

2007-06-30 Thread Samantha Atkins
Charles D Hixson wrote: Stathis Papaioannou wrote: Available computing power doesn't yet match that of the human brain, but I see your point, software (in general) isn't getting better nearly as quickly as hardware is getting better. Well, not at the personally accessible level. I understand

Re: [singularity] AI concerns

2007-06-30 Thread Samantha Atkins
Sergey A. Novitsky wrote: Dear all, Perhaps the questions below were already touched on numerous times in the past. Could someone kindly point to discussion threads and/or articles where these concerns were addressed or discussed? Kind regards, Serge --

Re: [singularity] critiques of Eliezer's views on AI (was: Re: Personal attacks)

2007-06-23 Thread Samantha  Atkins
On Jun 21, 2007, at 8:14 AM, Tom McCabe wrote: We can't "know it" in the sense of a mathematical proof, but it is a trivial observation that out of the bazillions of possible ways to configure matter, only a ridiculously tiny fraction are Friendly, and so it is highly unlikely that a selected
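
The shape of that observation can be made concrete with a toy probability model; the configuration-space size and the count of Friendly designs below are invented purely for illustration, not taken from the thread:

    # Toy model of the "tiny fraction" argument: if Friendly designs are a
    # small fixed subset of a vast configuration space, a random draw almost
    # never lands in it. Both numbers are illustrative assumptions.
    from fractions import Fraction

    design_space = 2**64    # pretend each design is a 64-bit configuration
    friendly = 10**6        # generously assume a million designs are Friendly
    p = Fraction(friendly, design_space)
    print(f"P(random design is Friendly) = {float(p):.1e}")  # -> 5.4e-14

The point of the sketch is only that the conclusion follows from the ratio, whatever the true numbers are.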

Re: [singularity] What form will superAGI take?

2007-06-17 Thread Samantha Atkins
Mike Tintner wrote: Perhaps you've been through this - but I'd like to know people's ideas about what exact physical form a Singularitarian or near-Singul. AGI will take. And I'd like to know people's automatic associations even if they don't have thought-through ideas - just what does a superAGI

Re: SPAM: Re: SPAM: Re: [singularity] The humans are dead...

2007-05-30 Thread Samantha Atkins
they will do if they are true intellectuals. Jon From: Samantha Atkins [mailto:[EMAIL PROTECTED] Sent: Tuesday, May 29, 2007 11:28 PM To: singularity@v2.listbox.com Subject: SPAM: Re: SPAM: Re: [singularity] The humans are dead... On May 29, 2007, at 4:22 PM, Jonathan H. Hinck wrote: But

Re: [singularity] The humans are dead...

2007-05-29 Thread Samantha Atkins
On May 29, 2007, at 7:03 PM, Keith Elis wrote: I understand that you value intelligence and capability, but I can't see my way to the destruction of humanity from there. I was quite careful to say that that was not what I would choose. The existence of superintelligence (a fact of the

Re: SPAM: Re: [singularity] The humans are dead...

2007-05-29 Thread Samantha Atkins
On May 29, 2007, at 4:22 PM, Jonathan H. Hinck wrote: But does there need to be consensus among the experts for a public issue to be raised? Regarding other topics that have been on the public discussion palette for a while, how often has this been the case? Perhaps with regard to issues s

Re: [singularity] The humans are dead...

2007-05-29 Thread Samantha Atkins
On May 29, 2007, at 12:43 PM, Jonathan H. Hinck wrote: Thanks for your response (below). To clarify, I wasn't talking about the need to initiate a public policy (at least not at the front end of the process). Rather, I was talking about the need for an open dialog and discussion, such as we n

Re: [singularity] The humans are dead...

2007-05-29 Thread Samantha Atkins
On May 29, 2007, at 11:36 AM, Jonathan H. Hinck wrote: Indeed, displacement of the human labor force has been under way since the beginning of the industrial revolution (if not before). This is the definition of technology. And, indeed, the jump from a labor-based to an automation-based economy would

Re: [singularity] Re: Personal attacks

2007-05-29 Thread Samantha Atkins
On May 29, 2007, at 11:25 AM, Richard Loosemore wrote: Samantha Atkins wrote: While I have my own doubts about Eliezer's approach and likelihood of success and about the extent of his biases and limitations, I don't consider it fruitful to continue to bash Eliezer on various

Re: [singularity] The humans are dead...

2007-05-29 Thread Samantha Atkins
On May 29, 2007, at 7:36 AM, Jonathan H. Hinck wrote: To clarify, I meant too much disagreement internally (within the A.I. community) or too much disregard for the geeks externally (in the world at large). I would go for choice #3. Most of the people will not "get it" at all. Those wh

Re: [singularity] Re: Personal attacks

2007-05-29 Thread Samantha Atkins
While I have my own doubts about Eliezer's approach and likelihood of success and about the extent of his biases and limitations, I don't consider it fruitful to continue to bash Eliezer on various lists once you feel seriously slighted by him or convinced that he is hopelessly mired or wha

Re: [singularity] The humans are dead...

2007-05-29 Thread Samantha Atkins
The rubber has already hit the road. Automation and computer displacement of jobs is an old story. The real challenge in my mind is how the world at large will shift to a post-scarcity economics and at what point. More importantly, what are the least disruptive and most beneficial steps a

Re: [singularity] The humans are dead...

2007-05-28 Thread Samantha Atkins
On May 28, 2007, at 9:10 PM, Russell Wallace wrote: On 5/29/07, Samantha Atkins <[EMAIL PROTECTED]> wrote: Without AI or such IA to be almost the same thing I don't have much reason to believe humanity will see 3007. *nods* Or rather - in my opinion - it probably will las

Re: [singularity] The humans are dead...

2007-05-28 Thread Samantha Atkins
On May 28, 2007, at 8:11 PM, Russell Wallace wrote: On 5/29/07, Samantha Atkins <[EMAIL PROTECTED]> wrote: I think you know well enough that most of us who have considered such things for significant time have done considerable work to get beyond "metal men". Yep.

Re: [singularity] The humans are dead...

2007-05-28 Thread Samantha Atkins
On May 28, 2007, at 6:52 PM, Russell Wallace wrote: On 5/29/07, Samantha Atkins <[EMAIL PROTECTED]> wrote: So you are happily provincial in this respect. Addendum: I think it is my view that is unprovincial. We're programmed to think intelligence = humanlike, because f

Re: [singularity] The humans are dead...

2007-05-28 Thread Samantha Atkins
On May 28, 2007, at 6:23 PM, Keith Elis wrote: Samantha Atkins wrote: On what basis is that answer correct? Do you mean factual in that it is the choice that you would make and that you believe proper? Or are you saying it is more objectively correct? If so, on what basis? Mere assertion

Re: [singularity] The humans are dead...

2007-05-28 Thread Samantha Atkins
On May 28, 2007, at 5:44 PM, Keith Elis wrote: Richard Loosemore wrote: Your email could be taken as threatening to set up a website to promote violence against AI researchers who speculate on ideas that, in your judgment, could be considered "scary". I'm on your side, too, Richard. Answer

Re: [singularity] The humans are dead...

2007-05-28 Thread Samantha Atkins
On May 28, 2007, at 4:29 PM, Joel Pitt wrote: On 5/29/07, Keith Elis <[EMAIL PROTECTED]> wrote: In the end, my advice is pragmatic: Anytime you post publicly on topics such as these, where the stakes are very, very high, ask yourself, Can I be taken out of context here? Is this position, wh

Re: [singularity] The humans are dead...

2007-05-28 Thread Samantha Atkins
On May 28, 2007, at 3:32 PM, Russell Wallace wrote: On 5/28/07, Shane Legg <[EMAIL PROTECTED]> wrote: If one accepts that there is, then the question becomes: Where should we put a super human intelligent machine on the list? If it's not at the top, then where is it and why? I don't claim to

Re: [singularity] "Friendly" question...

2007-05-28 Thread Samantha Atkins
On May 28, 2007, at 1:11 PM, Joshua Fox wrote: It is not at all sensible. Today we have no real idea how to build a working AGI. Right. The Friendly AI work is aimed at a future system. Fermi and company planned against meltdown _before_ they let their reactor go critical. The analogy is

Re: [singularity] The humans are dead...

2007-05-28 Thread Samantha Atkins
Keith Elis wrote: Shane Legg wrote: If a machine was more intelligent/complex/conscious/...etc... than all of humanity combined, would killing it be worse than killing all of humanity? You're asking a rhetorical question but let's just get the correct

Re: [singularity] The humans are dead...

2007-05-28 Thread Samantha Atkins
Shane Legg wrote: http://www.youtube.com/watch?v=WGoi1MSGu64 Which got me thinking. It seems reasonable to think that killing a human is worse than killing a mouse because a human is more intelligent/complex/conscious/...etc... (use whatever measure you prefer) than a mouse. So, would killing

Re: [singularity] The humans are dead...

2007-05-27 Thread Samantha Atkins
On May 27, 2007, at 5:48 PM, Stathis Papaioannou wrote: On 28/05/07, Shane Legg <[EMAIL PROTECTED]> wrote: Which got me thinking. It seems reasonable to think that killing a human is worse than killing a mouse because a human is more intelligent/complex/conscious/...etc... (use whatever mea

Re: [singularity] "Friendly" question...

2007-05-27 Thread Samantha Atkins
On May 27, 2007, at 12:37 PM, Abram Demski wrote: Alright, that's sensible. The reason I asked was because it seemed to me that it would need to keep humans around to build hardware, feed it mathematical info, et cetera. It is not at all sensible. Today we have no real idea how to build a

Re: [singularity] "Friendly" question...

2007-05-26 Thread Samantha Atkins
John Ku wrote: On 5/26/07, Samantha Atkins <[EMAIL PROTECTED]> wrote: We care about humans in the first instance because we are human. What do you mean by this? If you are suggesting that our care for other humans is conditioned upon our

Re: [singularity] "Friendly" question...

2007-05-26 Thread Samantha Atkins
On May 26, 2007, at 4:16 AM, John Ku wrote: I think maximization of negative entropy is a poor goal to have. Although life perhaps has some intrinsic value, I think the primary thing we care about is not life, per se, but beings with consciousness and capable of well-being. Under your idea

Re: [singularity] Replicable Patterns of Breakthroughs

2007-05-22 Thread Samantha Atkins
It might be more immediately accelerating to take a somewhat different tack. In many fields there is research in the labs that is missing some number of key components, sometimes breakthroughs, in the same field or others before it can go to the next level or be turned into more broadly access

Re: [singularity] Implications of an already existing singularity.

2007-04-25 Thread Samantha Atkins
Jeff Rose wrote: Matt Mahoney wrote: --- Tom McCabe <[EMAIL PROTECTED]> wrote: --- Craig <[EMAIL PROTECTED]> wrote: Kurzweil already postulated this a while ago. Although I don't agree with his conclusions. He says that if any society were to attain the "singularity" then their presence woul

Re: [singularity] Implications of an already existing singularity.

2007-04-25 Thread Samantha Atkins
John Rose wrote: What I was trying to say is similar to - let's say that you are trying to prove using only your eyeballs that a certain substance emits light. If you see light emitted you proved it. If you don't see the light then you haven't proven it because the substance may be emitting lig

Re: [singularity] Philanthropy & Singularity

2007-03-18 Thread Samantha Atkins
On Mar 16, 2007, at 12:14 PM, Mark Nuzzolilo wrote: On 3/16/07, Joshua Fox <[EMAIL PROTECTED]> wrote: Why has the singularity and AGI not triggered such an interest? Thiel's donations to SIAI seem like the exception which highlights the rule. Joshua The issue at hand is that AGI has a

Re: [singularity] Philanthropy & Singularity

2007-03-18 Thread Samantha Atkins
On Mar 16, 2007, at 5:59 AM, Joshua Fox wrote: > Does anyone know what Bill Gates thinks about the singularity? > (Or for that matter, other great philanthropists.) Yes, I too have wondered why Singularity efforts have not received more funding. There are a lot of very rich high-tech zilliona

Re: [singularity] Philanthropy & Singularity

2007-03-18 Thread Samantha Atkins
On Mar 16, 2007, at 12:11 AM, Adrian-Bogdan Morut wrote: "Thus, if that is one's priorities, it might still make sense to concentrate on more direct efforts to help the global poor." The poor? What about the people getting killed every day for having the wrong gender, religion, clothing, opini

Re: [singularity] Philanthropy & Singularity

2007-03-18 Thread Samantha Atkins
On Mar 15, 2007, at 11:24 PM, John Ku wrote: Another concern it would make sense to have though, would be that although the singularity would probably benefit a great deal of people, it is not clear how much it would immediately benefit those who are most in need. (Hopefully, it would at

Re: [singularity] Defining the Singularity

2006-10-23 Thread Samantha Atkins
On Oct 23, 2006, at 7:39 AM, Ben Goertzel wrote: Michael, I think your summary of the situation is in many respects accurate; but, an interesting aspect you don't mention has to do with the disclosure of technical details... In the case of Novamente, we have sufficient academic credibil

Re: [singularity] Defining the Singularity

2006-10-22 Thread Samantha Atkins
On Oct 22, 2006, at 11:32 AM, Ben Goertzel wrote: Hi, Mike Deering wrote: If you really were interested in working on the Singularity you would be designing your education plan around getting a job at the NSA. The NSA has the budget, the technology, the skill set, and the motivation to build the S

Re: [singularity] Defining the Singularity

2006-10-22 Thread Samantha Atkins
Well, there is funding like in the Methuselah Mouse project. I am one of "the 300" myself. With enough interested people it should not be that hard to raise $5 million even on a very long-term project. Most of us seem to think that conquering aging will take longer than AGI but there are fairl

Re: [singularity] Defining the Singularity

2006-10-21 Thread Samantha Atkins
On Oct 20, 2006, at 2:14 AM, Michael Anissimov wrote: Sometimes, Samantha, it seems like you have little faith in any possible form of intelligence, and that the only way for one to be safe/happy is to be isolated from everything. I sometimes get this impression from libertarians (not to say that I'm

Re: [singularity] Defining the Singularity

2006-10-21 Thread Samantha Atkins
On Oct 20, 2006, at 2:14 AM, Michael Anissimov wrote: Samantha, Considering the state of the world today I don't see how changes sufficient to be really helpful can be anything but disruptive of the status quo. Being non-disruptive per se is a non-goal. Ah, that's what it seems like! But

Re: [singularity] Defining the Singularity

2006-10-18 Thread Samantha Atkins
On Oct 17, 2006, at 2:45 PM, Michael Anissimov wrote: Mike, On 10/10/06, deering <[EMAIL PROTECTED]> wrote: Going beyond the definition of Singularity we can make some educated guesses about the most likely conditions under which the Singularity will occur. Due to technological synergy,

Re: [singularity] Counter-argument

2006-10-13 Thread Samantha Atkins
I see two paths toward >human intelligence: 1) Intelligence Augmentation. This can be through a mixture of significantly improved computer-human interaction including possibly ubiquitous computing, wearables, much improved human-computer interfaces (including thought-controlled computing),

Re: [singularity] Counter-argument

2006-10-13 Thread Samantha Atkins
On Oct 4, 2006, at 2:16 PM, Joshua Fox wrote: Could I offer Singularity-list readers this intellectual challenge: Give an argument supporting the thesis "Any sort of Singularity is very unlikely to occur in this century." The Singularity won't happen this century because: a) Those capable

Re: [singularity] Defining the Singularity

2006-10-10 Thread Samantha  Atkins
hmm. Someone will please give me a gentle nudge when something is discussed here of actual import to achieving singularity. In the meantime I think I will take a siesta. - samantha On Oct 10, 2006, at 1:01 PM, Richard Leis wrote: The general consensus also depends on the context for which it is being u

Re: [singularity] Re: Intuitive limits of applied CogPsy

2006-09-22 Thread Samantha Atkins
Of course I got that. It was the "infinitely self-sufficient environment of infinite layers of infinite media" stuff that wasn't doing it for me. - samantha On Sep 22, 2006, at 2:08 PM, Michael Anissimov wrote: On 9/22/06, Samantha Atkins <[EMAIL PROTECTED]> wrote:

Re: [singularity] Re: Intuitive limits of applied CogPsy

2006-09-22 Thread Samantha Atkins
This does not parse. Please rephrase. - s On Sep 19, 2006, at 5:57 PM, Nathan Barna wrote: Update to first description: The ultimate aim of applied cognitive psychology is for one to be an infinitely self-sufficient environment of infinite layers of infinite media where from any non-empty se

Re: [singularity] Fundamental questions

2006-09-18 Thread Samantha Atkins
On Sep 18, 2006, at 6:25 PM, Stefan Pernar wrote: Hi Matt, On 9/18/06, Matt Mahoney <[EMAIL PROTECTED]> wrote: Suppose in case 1, the AGI is smarter than humans as humans are smarter than monkeys. How would you convince a monkey that you are smarter than it? How could an AGI convince you

Re: [singularity] Re: Is Friendly AI Bunk?

2006-09-10 Thread Samantha Atkins
On Sep 10, 2006, at 1:56 PM, Aleksei Riikonen wrote: samantha <[EMAIL PROTECTED]> wrote: Why is being maximally self-preserving incompatible with being a desirable AGI exactly? What is the "maximal" part? In this discussion, maximal self-preservation includes e.g. that the entity wouldn't a