On Sat, Apr 26, 2008 at 7:48 AM, Thomas McCabe <[EMAIL PROTECTED]>
wrote:
> On Thu, Apr 24, 2008 at 3:16 AM, Samantha Atkins <[EMAIL PROTECTED]> wrote:
> > Thomas McCabe wrote:
>
> > Popularity is irrelevant.
>
> Popularity, of course, is not the ultim
J. Andrew Rogers wrote:
On Apr 6, 2008, at 9:38 AM, Ben Goertzel wrote:
That's surely part of it ... but investors have put big $$ into much
LESS
mature projects in areas such as nanotech and quantum computing.
This is because nanotech and quantum computing can be readily and
easily packag
Thomas McCabe wrote:
On 4/18/08, Richard Loosemore <[EMAIL PROTECTED]> wrote:
Thomas McCabe wrote:
On 4/18/08, Richard Loosemore <[EMAIL PROTECTED]> wrote:
You repeatedly insinuate, in your comments above, that the idea is not
taken seriously by anyone, in spite of the fact I
Mike Tintner wrote:
Samantha: From what you said above $50M will do the entire job. If
that is all
that is standing between us and AGI then surely we can get on with it in
all haste.
Oh for gawdsake, this is such a tedious discussion. I would suggest
the following is a reasonable *framework*
Ben Goertzel wrote:
Much of this discussion is very abstract, which is I guess how you think about
these issues when you don't have a specific AGI design in mind.
My view is a little different.
If the Novamente design is basically correct, there's no way it can possibly
take thousands or hundre
On Apr 9, 2008, at 12:33 PM, Derek Zahn wrote:
Matt Mahoney writes:
> Just what do you want out of AGI? Something that thinks like a
person or
> something that does what you ask it to?
The "or" is interesting. If it really "thinks like a person" and at
least human level, then I doubt
Arguably many of the problems of Vista including its legendary slippages
were the direct result of having thousands of merely human programmers
involved. That complex monkey interaction is enough to kill almost
anything interesting.
- samantha
Panu Horsmalahti wrote:
Just because it takes
Richard Loosemore wrote:
John K Clark wrote:
And I will define consciousness just as soon as you define "define".
Ah, but that is exactly my approach.
Thus, the subtitle I gave to my 2006 conference paper was "Explaining
Consciousness by Explaining That You Cannot Explain it, Because Your
E
On Feb 2, 2008, at 7:29 AM, gifting wrote:
WTF (I can only assume what that stands for) are you such an angry
person? Or is linear thinking the only possible solution for your
VotW (guess what that stands for)?
I am not angry. I am bored with what seems like endless, often
off-subject
On Jan 27, 2008, at 5:40 AM, Mike Tintner wrote:
Ben: MT: Venter has changed everything
today - including the paradigms that govern both science and AI..
Ben: Lets not overblow things -- please note that Venter's team has
not yet
synthesized an artificial organism.
Here's why I think V
WTF does this have to do with AGI or Singularity? I hope the AGI
gets here soon. We Stupid Monkeys get damn tiresome.
- samantha
On Jan 29, 2008, at 7:06 AM, gifting wrote:
On 29 Jan 2008, at 14:13, Vladimir Nesov wrote:
On Jan 29, 2008 11:49 AM, Ben Goertzel <[EMAIL PROTECTED]> wrote:
On Jan 28, 2008, at 6:43 AM, Mike Tintner wrote:
Stathis: Are you simply arguing that an embodied AI that can
interact with the
real world will find it easier to learn and develop, or are you
arguing that there is a fundamental reason why an AI can't develop in
a purely virtual environment
On Jan 27, 2008, at 6:18 AM, Mike Tintner wrote:
Samantha: MT: You've been fooled by the puppet. It doesn't work
without the
puppeteer. Samantha: What's that, élan vital, a "soul", a
"consciousness" that is
independent of the puppet?
It's significant that you make quite the wrong assumpt
On Jan 26, 2008, at 6:07 PM, Mike Tintner wrote:
Tom: A computer is not "disembodied" any more than you are. Silicon,
as a
substrate, is fully equivalent to biological neurons in terms of
theoretical problem-solving ability.
You've been fooled by the puppet. It doesn't work without the
pupp
On Jan 26, 2008, at 3:59 PM, Mike Tintner wrote:
Ben,
Thanks for reply. I think though that Samantha may be more
representative - i.e. most here simply aren't interested in non-
computer alternatives. Which is fine.
The Singularity Institute exists for one purpose. That I point that
ou
On Jan 26, 2008, at 2:36 PM, Mike Tintner wrote:
Gudrun: I am an artist who is interested in science, in utopia and
seemingly
impossible
projects. I also came across a lot of artists with OC traits. ...
The OCAP, actually the obsessive compulsive 'arctificial' project ..
These new OCA entitie
On Jan 26, 2008, at 11:13 AM, [EMAIL PROTECTED] wrote:
Quoting Natasha Vita-More <[EMAIL PROTECTED]>:
At 03:04 PM 1/24/2008, Gudrun wrote:
and N. Vita-More
This is confusing. Fine that extropians want to self-improve. That
ALL humanity should improve is quite questionable. Does all
h
On Jan 26, 2008, at 9:57 AM, Bryan Bishop wrote:
On Saturday 26 January 2008, Mike Tintner wrote:
Why does discussion never (unless I've missed something - in which
case apologies) focus on the more realistic future
"threats"/possibilities - future artificial species as opposed to
future com
On Jan 25, 2008, at 10:14 AM, Natasha Vita-More wrote:
The idea of useless technology is developed in wearables more than
in bioart. Steve's perspective is more political than artistic in
regards to uselessness, don't you think? My paper, which includes
an interview with him, is publishe
On Oct 27, 2007, at 1:55 AM, Aleksei Riikonen wrote:
On 10/27/07, Stefan Pernar <[EMAIL PROTECTED]> wrote:
On 10/27/07, Aleksei Riikonen <[EMAIL PROTECTED]> wrote:
You seem to have a need to personally give a final answer to "What
is
'good'?" -- an answer to what moral rules the universe s
On Aug 19, 2007, at 12:26 PM, Matt Mahoney wrote:
I was never really a Singularity activist, but
1. I realized the singularity is coming and nothing can stop it.
Not so. Humanity could so harm its technological base as to postpone
Singularity on this planet for quite some time. We could
Alan Grimes wrote:
If it's a problem may I suggest you use a more user friendly terminal
such as gnome-terminal or konsole. They have profiles that can be
edited through the GUI.
Not a bad suggestion, lemme see if my distro will let me kill Xterm...
crap, it's depended on by xinit, which i
Tom McCabe wrote:
--- Samantha Atkins <[EMAIL PROTECTED]> wrote:
Tom McCabe wrote:
--- Samantha Atkins <[EMAIL PROTECTED]> wrote:
Out of the bazillions of possible ways to configure matter only a
ridiculously tiny fraction are more
Charles D Hixson wrote:
Samantha Atkins wrote:
Sergey A. Novitsky wrote:
Dear all,
...
o Be deprived of free will or be given limited free will (if
such a concept is applicable to AI).
See above, no effective means of control.
- samantha
There is *one* effective
Alan Grimes wrote:
Samantha Atkins wrote:
Alan Grimes wrote:
Available computing power doesn't yet match that of the human brain,
but I see your point,
What makes you so sure of that?
It has been computed countless times here and elsewhere that I am sur
Tom McCabe wrote:
Okay, to start with:
- Total control over the structure of our minds, with
an AI-provided user-friendly interface.
- The elimination of war, hunger, disease, old age,
heart disease and a whole bunch of other undesirable
stuff.
- The ability to build anything that is physically
Colin Tate-Majcher wrote:
When you talk about "uploading" are you referring to creating a copy
of your consciousness? If that's the case then what do you do after
uploading, continue on with a mediocre existence while your
cyber-duplicate shoots past you? Sure, it would have all of those
won
Tom McCabe wrote:
--- Samantha Atkins <[EMAIL PROTECTED]> wrote:
Out of the bazillions of possible ways to configure
matter only a
ridiculously tiny fraction are more intelligent than
a cockroach. Yet
it did not take any grand design effort upfront to
arrive at a world
overru
Alan Grimes wrote:
Available computing power doesn't yet match that of the human brain,
but I see your point,
What makes you so sure of that?
It has been computed countless times here and elsewhere, as I am sure
you are aware, so why do you ask?
-
This list is sponsored by AG
Charles D Hixson wrote:
Stathis Papaioannou wrote:
Available computing power doesn't yet match that of the human brain,
but I see your point, software (in general) isn't getting better
nearly as quickly as hardware is getting better.
Well, not at the personally accessible level. I understand
Sergey A. Novitsky wrote:
Dear all,
Perhaps, the questions below were already touched numerous times in
the past.
Could someone kindly point to discussion threads and/or articles where
these concerns were addressed or discussed?
Kind regards,
Serge
--
On Jun 21, 2007, at 8:14 AM, Tom McCabe wrote:
We can't "know it" in the sense of a mathematical
proof, but it is a trivial observation that out of the
bazillions of possible ways to configure matter, only
a ridiculously tiny fraction are Friendly, and so it
is highly unlikely that a selected
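[Editor's aside: the combinatorial argument above can be illustrated with a toy simulation. This is a sketch of my own, not from the thread; the function name and the specific sizes are purely illustrative assumptions.]

```python
import random

def friendly_hit_rate(space_size, friendly_count, trials=100_000, seed=0):
    """Toy model of the argument: draw configurations uniformly at random
    and count how often we land in the 'Friendly' subset (modeled here as
    the first friendly_count integers of the space).
    The expected rate is friendly_count / space_size."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(trials)
               if rng.randrange(space_size) < friendly_count)
    return hits / trials

# Even a generously large Friendly subset (a million configurations) is
# negligible inside a vast space (a trillion), so a random draw
# essentially never lands on it:
rate = friendly_hit_rate(space_size=10**12, friendly_count=10**6)
```

The point of the sketch is only that the hit rate tracks the ratio of the two set sizes, so "selected at random" and "selected Friendly" come apart as soon as the space dwarfs the target.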
Mike Tintner wrote:
Perhaps you've been through this - but I'd like to know people's ideas
about what exact physical form a Singulitarian or near-Singul. AGI
will take. And I'd like to know people's automatic associations even
if they don't have thought-through ideas - just what does a superAGI
they will do if they are true
intellectuals.
Jon
From: Samantha Atkins [mailto:[EMAIL PROTECTED]
Sent: Tuesday, May 29, 2007 11:28 PM
To: singularity@v2.listbox.com
Subject: SPAM: Re: SPAM: Re: [singularity] The humans are dead...
On May 29, 2007, at 4:22 PM, Jonathan H. Hinck wrote:
But
On May 29, 2007, at 7:03 PM, Keith Elis wrote:
I understand that you value intelligence and capability, but I can't
see
my way to the destruction of humanity from there.
I was quite careful to say that that was not what I would choose.
The existence of superintelligence (a fact of the
On May 29, 2007, at 4:22 PM, Jonathan H. Hinck wrote:
But does there need to be consensus among the experts for a public
issue to be raised? Regarding other topics that have been on the
public discussion plate for a while, how often has this been the
case? Perhaps with regard to issues s
On May 29, 2007, at 12:43 PM, Jonathan H. Hinck wrote:
Thanks for your response (below). To clarify, I wasn't talking about
the need to initiate a public policy (at least not at the front end of
the process). Rather, I was talking about the need for an open dialog
and discussion, such as we n
On May 29, 2007, at 11:36 AM, Jonathan H. Hinck wrote:
Indeed, displacement of the human labor force has been underway since the
beginning of the industrial revolution (if not before). This is the
definition of technology. And, indeed, the jump from a labor-based
to an automation-based economy would
On May 29, 2007, at 11:25 AM, Richard Loosemore wrote:
Samantha Atkins wrote:
While I have my own doubts about Eliezer's approach and likelihood
of success and about the extent of his biases and limitations, I
don't consider it fruitful to continue to bash Eliezer on various
On May 29, 2007, at 7:36 AM, Jonathan H. Hinck wrote:
To clarify, I meant too much disagreement internally (within the
A.I. community) or too much disregard for the geeks externally (in
the world at large).
I would go for choice #3. Most of the people will not "get it" at
all. Those wh
While I have my own doubts about Eliezer's approach and likelihood of
success and about the extent of his biases and limitations, I don't
consider it fruitful to continue to bash Eliezer on various lists
once you feel seriously slighted by him or convinced that he is
hopelessly mired or wha
The rubber has already hit the road. Automation and computer
displacement of jobs is an old story. The real challenge in my mind
is how the world at large will shift to a post-scarcity economics and
at what point. More importantly what are the least disruptive and
most beneficial steps a
On May 28, 2007, at 9:10 PM, Russell Wallace wrote:
On 5/29/07, Samantha Atkins <[EMAIL PROTECTED]> wrote:
Without AI or such IA to be almost the same thing I don't have much
reason to believe humanity will see 3007.
*nods* Or rather - in my opinion - it probably will las
On May 28, 2007, at 8:11 PM, Russell Wallace wrote:
On 5/29/07, Samantha Atkins <[EMAIL PROTECTED]> wrote:
I think you know well enough that most of us who have considered
such things for significant time have done considerable work to get
beyond "metal men".
Yep.
On May 28, 2007, at 6:52 PM, Russell Wallace wrote:
On 5/29/07, Samantha Atkins <[EMAIL PROTECTED]> wrote:
So you are happily provincial in this respect.
Addendum: I think it is my view that is unprovincial. We're
programmed to think intelligence = humanlike, because f
On May 28, 2007, at 6:23 PM, Keith Elis wrote:
Samantha Atkins wrote:
On what basis is that answer correct? Do you mean factual in
that it is
the choice that you would make and that you believe proper?
Or are you
saying it is more objectively correct. If so, on what basis? Mere
assertion
On May 28, 2007, at 5:44 PM, Keith Elis wrote:
Richard Loosemore wrote:
Your email could be taken as threatening to set up a website
to promote
violence against AI researchers who speculate on ideas that, in your
judgment, could be considered "scary".
I'm on your side, too, Richard.
Answer
On May 28, 2007, at 4:29 PM, Joel Pitt wrote:
On 5/29/07, Keith Elis <[EMAIL PROTECTED]> wrote:
In the end, my advice is pragmatic: Anytime you post publicly on
topics
such as these, where the stakes are very, very high, ask yourself,
Can I
be taken out of context here? Is this position, wh
On May 28, 2007, at 3:32 PM, Russell Wallace wrote:
On 5/28/07, Shane Legg <[EMAIL PROTECTED]> wrote:
If one accepts that there is, then the question becomes:
Where should we put a super human intelligent machine
on the list? If it's not at the top, then where is it and why?
I don't claim to
On May 28, 2007, at 1:11 PM, Joshua Fox wrote:
It is not at all sensible. Today we have no real idea how to
build a working AGI.
Right. The Friendly AI work is aimed at a future system. Fermi and
company planned against meltdown _before_ they let their reactor go
critical.
The analogy is
Keith Elis wrote:
Shane Legg wrote:
If a machine was more intelligent/complex/conscious/...etc... than
all of humanity combined, would killing it be worse than killing all of
humanity?
You're asking a rhetorical question but let's just get the correct
Shane Legg wrote:
http://www.youtube.com/watch?v=WGoi1MSGu64
Which got me thinking. It seems reasonable to think that killing a
human is worse than killing a mouse because a human is more
intelligent/complex/conscious/...etc...(use what ever measure you
prefer) than a mouse.
So, would killing
On May 27, 2007, at 5:48 PM, Stathis Papaioannou wrote:
On 28/05/07, Shane Legg <[EMAIL PROTECTED]> wrote:
Which got me thinking. It seems reasonable to think that killing a
human is worse than killing a mouse because a human is more
intelligent/complex/conscious/...etc...(use what ever mea
On May 27, 2007, at 12:37 PM, Abram Demski wrote:
Alright, that's sensible. The reason I asked was because it seemed
to me that it would need to keep humans around to build hardware,
feed it mathematical info, et cetera.
It is not at all sensible. Today we have no real idea how to build a
John Ku wrote:
On 5/26/07, *Samantha Atkins* <[EMAIL PROTECTED]
<mailto:[EMAIL PROTECTED]>> wrote:
We care about humans in the first instance because we are human.
What do you mean by this? If you are suggesting that our care for
other humans is conditioned upon our
On May 26, 2007, at 4:16 AM, John Ku wrote:
I think maximization of negative entropy is a poor goal to have.
Although life perhaps has some intrinsic value, I think the primary
thing we care about is not life, per se, but beings with
consciousness and capable of well-being. Under your idea
It might be more immediately accelerating to take a somewhat
different tack. In many fields there is research in the labs that is
missing some number of key components, sometimes breakthroughs, in
the same field or others before it can go to the next level or be turned
into more broadly access
Jeff Rose wrote:
Matt Mahoney wrote:
--- Tom McCabe <[EMAIL PROTECTED]> wrote:
--- Craig <[EMAIL PROTECTED]> wrote:
Kurzweil already postulated this a while ago.
Although I don't agree with his conclusions. He says
that if any society were to attain the "singularity"
then their presence woul
John Rose wrote:
What I was trying to say is similar to - let's say that you are trying to
prove using only your eyeballs that a certain substance emits light. If you
see light emitted, you have proved it. If you don't see the light, then you
haven't proven it, because the substance may be emitting lig
On Mar 16, 2007, at 12:14 PM, Mark Nuzzolilo wrote:
On 3/16/07, Joshua Fox <[EMAIL PROTECTED]> wrote:
Why has the singularity and AGI not triggered such an interest?
Thiel's donations to SIAI seem like the exception which highlights
the rule.
Joshua
The issue at hand is that AGI has a
On Mar 16, 2007, at 5:59 AM, Joshua Fox wrote:
> Does anyone know what Bill Gates thinks about the singularity?
>(Or for that matter, other great philanthropists.)
Yes, I too have wondered why Singularity efforts have not received
more funding. There are a lot of very rich high-tech zilliona
On Mar 16, 2007, at 12:11 AM, Adrian-Bogdan Morut wrote:
"Thus, if that is one's priorities, it might still make sense to
concentrate on more direct efforts to help the global poor."
The poor? What about the people getting killed every day for having
the wrong gender, religion, clothing, opini
On Mar 15, 2007, at 11:24 PM, John Ku wrote:
Another concern it would make sense to have though, would be that
although the singularity would probably benefit a great deal of
people, it is not clear how much it would immediately benefit those
who are most in need. (Hopefully, it would at
On Oct 23, 2006, at 7:39 AM, Ben Goertzel wrote:
Michael,
I think your summary of the situation is in many respects accurate;
but, an interesting aspect you don't mention has to do with the
disclosure of technical details...
In the case of Novamente, we have sufficient academic credibil
On Oct 22, 2006, at 11:32 AM, Ben Goertzel wrote:
Hi,
Mike Deering wrote:
If you really were interested in working on the Singularity you would be designing your education plan around getting a job at the NSA. The NSA has the budget, the technology, the skill set, and the motivation to build the S
Well, there is funding like in the Methuselah Mouse project. I am one of "the 300" myself. With enough interested people it should not be that hard to raise $5 million even on a very long term project. Most of us seem to think that conquering aging will take longer than AGI but there are fairl
On Oct 20, 2006, at 2:14 AM, Michael Anissimov wrote:
Sometimes, Samantha, it seems like you have little faith in any possible form of intelligence, and that the only way for one to be safe/happy is to be isolated from everything. I sometimes get this impression from libertarians (not to say that I'm
On Oct 20, 2006, at 2:14 AM, Michael Anissimov wrote:
Samantha,
Considering the state of the world today I don't see how changes
sufficient to be really helpful can be anything but disruptive of the
status quo. Being non-disruptive per se is a non-goal.
Ah, that's what it seems like! But
On Oct 17, 2006, at 2:45 PM, Michael Anissimov wrote:
Mike,
On 10/10/06, deering <[EMAIL PROTECTED]> wrote:
Going beyond the definition of Singularity we can make some
educated guesses
about the most likely conditions under which the Singularity will
occur.
Due to technological synergy,
I see two paths toward >human intelligence:
1) Intelligence Augmentation.
This can be through a mixture of significantly improved computer-human
interaction including possibly ubiquitous computing, wearables, much
improved human-computer interfaces (including thought-controlled
computing),
On Oct 4, 2006, at 2:16 PM, Joshua Fox wrote:
Could I offer Singularity-list readers this intellectual challenge:
Give an argument supporting the thesis "Any sort of Singularity is
very unlikely to occur in this century."
The Singularity won't happen this century because:
a) Those capable
hmm. Someone will please give me a gentle nudge when something is discussed here of actual import to achieving singularity. In the meantime think I will take a siesta.
- samantha
On Oct 10, 2006, at 1:01 PM, Richard Leis wrote:
The general consensus also depends on the context for which it is being u
Of course I got that. It was the "infinitely self-sufficient
environment of infinite layers of infinite
media" stuff that wasn't doing it for me.
- samantha
On Sep 22, 2006, at 2:08 PM, Michael Anissimov wrote:
On 9/22/06, Samantha Atkins <[EMAIL PROTECTED]> wrote:
This does not parse. Please rephrase.
- s
On Sep 19, 2006, at 5:57 PM, Nathan Barna wrote:
Update to first description:
The ultimate aim of applied cognitive psychology is for one to be an
infinitely self-sufficient environment of infinite layers of infinite
media where from any non-empty se
On Sep 18, 2006, at 6:25 PM, Stefan Pernar wrote:
Hi Matt,
On 9/18/06, Matt Mahoney <[EMAIL PROTECTED]> wrote:
Suppose in case 1, the AGI is smarter than humans as humans are
smarter than
monkeys. How would you convince a monkey that you are smarter
than it? How could
an AGI convince you
On Sep 10, 2006, at 1:56 PM, Aleksei Riikonen wrote:
samantha <[EMAIL PROTECTED]> wrote:
Why is being maximally self-preserving incompatible with being a
desirable AGI exactly? What is the "maximal" part?
In this discussion, maximal self-preservation includes e.g. that the
entity wouldn't a