On 6/28/07, Alan Grimes <[EMAIL PROTECTED]> wrote:
(...)
Seriously now, why do people insist there is a necessary connection (as
in A implies B) between the singularity and brain uploading?
Why is it that anyone who thinks "the singularity happens and most
people remain humanoid" is automatically
On 6/24/07, Alan Grimes <[EMAIL PROTECTED]> wrote:
Dear Uploaders,
I am quite confused. Please choose one of the following two statements:
A. The nature of superintelligence and its needs and preferences are
largely unknowable to mere mortals.
B. Nearly all superintelligences will have a pract
On 6/17/07, Eugen Leitl <[EMAIL PROTECTED]> wrote:
(...)
Also, there
are a few asteroids that are even closer to us in terms of delta v
Yes? Really? I would like to know which (I don't disagree, I would just
want to have a list).
(...)
http://echo.jpl.nasa.gov/~lance/delta_v/delta_v.rendezvou
On 6/17/07, Eugen Leitl <[EMAIL PROTECTED]> wrote:
(...)
What would it be like to wake up embodied as a circumstellar node cloud? Uh,
it would be nothing like anything you could possibly imagine, so you shouldn't
waste your time on it. What would it be like to wake up as a dust mite, or a
god? It's nothing like what y
On 6/17/07, Mike Tintner <[EMAIL PROTECTED]> wrote:
> Lucio: Given the ever-distributed nature of processing power, I would
> suspect that a superAGI would have no physical form, in the sense that
> it would be distributed across many processing nodes around the world.
> (And those could be compute
On 6/17/07, Mike Tintner <[EMAIL PROTECTED]> wrote:
> Lucio: Given the ever-distributed nature of processing power, I would
> suspect that a superAGI would have no physical form,
One of the interesting 2nd thoughts this provoked in me is the idea: what
would it be like if you could wake each
On 6/16/07, Mike Tintner <[EMAIL PROTECTED]> wrote:
(...)
The obvious alternatives, it seems to me (but please comment), are either,
pace the movie 2001, a desk-bound supercomputer like HAL, with perhaps
extended sensors all over the place, even around the world - although that
supercomputer, I g
On 6/15/07, Matt Mahoney <[EMAIL PROTECTED]> wrote:
(...)
- Uploading your mind and simulating a world where resources are plentiful.
For all you know, the latter has already happened.
(...)
Err... in *my* world many resources are getting scarce, and indeed I
thought that all this discussion w
On 6/15/07, Tom McCabe <[EMAIL PROTECTED]> wrote:
How exactly do you control a megaton-size hunk of
metal flying through the air at 10,000+ m/s?
Clarifying this point on speed: in my view the asteroid would not hit
Earth directly. Instead it would first make aerobraking maneuvers to
enter Earth o
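As a rough back-of-the-envelope check of the quantities in Tom's question (a sketch only: "megaton-size" is read here as 10^9 kg, and both figures come straight from the quote above, not from any actual asteroid):

```python
# Back-of-the-envelope kinetic energy of a "megaton-size" asteroid.
# Assumed inputs (from the quote above): 1 megaton = 1e9 kg, speed = 10,000 m/s.
mass_kg = 1e9                # one million metric tons
speed_m_s = 1e4              # 10,000 m/s
kinetic_energy_j = 0.5 * mass_kg * speed_m_s**2   # E = (1/2) m v^2

TNT_MEGATON_J = 4.184e15     # joules per megaton of TNT
print(kinetic_energy_j)                    # 5e16 J
print(kinetic_energy_j / TNT_MEGATON_J)    # ~12 megatons of TNT equivalent
```

Which is exactly why the aerobraking point matters: that energy has to be shed gradually rather than delivered on impact.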
On 6/15/07, Tom McCabe <[EMAIL PROTECTED]> wrote:
(...)
Also, simply crashing an asteroid onto the planet will
vaporize all the ore and scatter it for dozens of
kilometers in every direction.
(...)
I talked about a controlled crash, where dispersion and vaporization
would tend to be minimized.
On 6/15/07, Eugen Leitl <[EMAIL PROTECTED]> wrote:
(...)
Sure, and don't forget to add some hand mirrors, and glass pearls.
It seems that you are trying to equate all rare elements with gold -
i.e., something that is valuable just because it is rare - but
unfortunately that does not seem to be th
On 6/14/07, Charles D Hixson <[EMAIL PROTECTED]> wrote:
(...)
Check your energetics. Asteroid mining is promising for space-based
construction. Otherwise you'd better at least have controllable fusion
rockets.
(...)
Not really.
Elements that are incredibly rare on Earth - such as platinum gr
If you have "strong", Drexler-like nanotech - i.e., assemblers and
disassemblers - this fear of an upcoming shortage of resources becomes
moot, and the need for "ephemeralization", as you call it, also tends to
disappear. Given strong nanotech it would, for instance, be very cheap
to gather resources else
On 6/12/07, Mike Tintner <[EMAIL PROTECTED]> wrote:
(...)
But that might be an overly bleak interpretation. Another way to look at the
rapid uptake of computers in the BRICs is as an example of the astonishing
possibilities for catch-up that technology offers the developing world.
Russia is a specia
On 6/4/07, Tom McCabe <[EMAIL PROTECTED]> wrote:
So there's your problem! You're demanding a system
that works, however badly. Any computer programmer can
tell you that you will not get a system that works at
all without doing a large percentage of the work
needed to implement a system that wor
On 6/4/07, Papiewski, John <[EMAIL PROTECTED]> wrote:
(...)
I disagree. If even a half-baked, partial, buggy, slow simulation of a
human mind were available the captains of industry would jump on it in a
second.
(...)
Do you remember when no business had an automated answering service? That
t
On 6/4/07, Panu Horsmalahti <[EMAIL PROTECTED]> wrote:
2007/6/4, Matt Mahoney <[EMAIL PROTECTED]>:
(...)
> If you are looking for a computer
> simulations of a human mind, you will be disappointed, because there is no
> economic incentive to build such a thing.
>
> -- Matt Mahoney, [EMAIL PROT
On 4/25/07, Eugen Leitl <[EMAIL PROTECTED]> wrote:
(...)
When people bulldozer jungle to build an air-conditioned mall they
rarely think about talking to the ants. The ants still notice, and die.
(...)
But are ants able to infer that it was a bulldozer and not, say, a
strong wind or landslide t
On 12/20/06, Ben Goertzel <[EMAIL PROTECTED]> wrote:
(...)
For example, to encourage the storytelling/empathy connection to exist
in an AI system, one might want to give the system an explicit
cognitive process of hypothetically "putting itself in someone else's
place." So, when it hears a story
On 10/27/06, Matt Mahoney <[EMAIL PROTECTED]> wrote:
(...)
Orwell's 1984 predicted a world where a totalitarian government watched your
every move. What he failed to predict is that it would happen in a democracy.
People want surveillance. You want cameras in businesses for better security.
On 10/27/06, Matt Mahoney <[EMAIL PROTECTED]> wrote:
(...)
2. What is human?
- If you make an exact copy of a human and kill the original, is it murder?
- What if you copy only the brain and put it in a different body?
- What if you put the copy in a robot body?
- What if you copy only the infor
On 10/26/06, deering <[EMAIL PROTECTED]> wrote:
(...)
The only rational thing to do is to build an SAI without any preconceived
ideas of right and wrong, and let it figure it out for itself. What makes
you think that protecting humanity is the greatest good in the universe?
(...)
Hundreds of t
On 10/11/06, Chris Norwood <[EMAIL PROTECTED]> wrote:
How much of our "selves" are driven by biological
processes that an AI would not have to begin with, for
example...fear? I would think that the AI's self would
be fundamentally different to begin with due to this.
(...)
I think that, Darwinia
On 10/10/06, Hank Conn <[EMAIL PROTECTED]> wrote:
(...)
My problem with Michael's original definition was the statement about
producing a genetically engineered child that was smarter-than-human, and
allowing that to be defined as the Singularity. I think in order for a point
in this recursive se
On 10/10/06, BillK <[EMAIL PROTECTED]> wrote:
(...)
If next year a quad-core pc becomes a self-improving AI in a basement
in Atlanta, then disappears an hour later into another dimension, then
so far as the rest of the world is concerned, the Singularity never
happened.
(...)
Yep, I also tend to
I think that is a natural outcome, and will not necessarily be imposed
by some elite. Some people will simply not want to participate in the
Singularity, and possibly *large* numbers of people will refuse to be
uploaded. Indeed that's already happening. You mentioned restrictions
to stem cell rese
On 10/6/06, Matt Mahoney <[EMAIL PROTECTED]> wrote:
(...)
The "program" can't be larger than the DNA which describes it. This is at
most 6 x 10^9 bits, but probably much smaller because of all the noise
(random mutations that have no effect), redundancy (multiple copies of
genes), noncoding DNA
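Matt's 6 x 10^9 figure follows from the usual estimate of roughly 3 x 10^9 base pairs at 2 bits per base. A one-line sanity check (the genome size is an approximation, and this is an upper bound before discounting the noise and redundancy he mentions):

```python
# Upper bound on the information content of the human genome.
# ~3e9 base pairs (approximate), each base one of 4 symbols = 2 bits.
base_pairs = 3e9
bits_per_base = 2            # log2(4) possible bases (A, C, G, T)
total_bits = base_pairs * bits_per_base
print(total_bits)            # 6e9 bits, matching the figure quoted above
print(total_bits / 8 / 1e6)  # ~750 megabytes uncompressed
```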
On 10/6/06, Bergeson, Loren <[EMAIL PROTECTED]> wrote:
This article takes a shot at making a counter-argument, at least if you
assume that general AI is a necessary part of the Singularity:
http://www.skeptic.com/the_magazine/featured_articles/v12n02_AI_gone_awry.html
(...)
Of course if we t
On 10/5/06, Anna Taylor <[EMAIL PROTECTED]> wrote:
(...)
You wrote:
Another example: particle physics. In the 90s there was that project
for the Supercollider, a particle accelerator that would produce
energies high enough...
Anna writes:
I'm not really sure what you are talking about. Could yo
On 10/4/06, Matt Mahoney <[EMAIL PROTECTED]> wrote:
(...)
- In fig. 1, what is the justification for dividing by population? Isn't the
absolute rate of innovation more important than innovation per person?
(...)
I also think that's the most problematic assumption in the article.
Indeed absolute
On 10/4/06, Anna Taylor <[EMAIL PROTECTED]> wrote:
(...)
From my experience:
Innovative, creative ideas are in most cases rewarding, and at times very
financially rewarding.
(...)
Yes, but sometimes you have to put vast amounts of money into a
project based on a creative idea to actually bring it to real
Some argue that the Singularity will not be reached because of
economic barriers. As the "easy" scientific and technological advances
are made, the difficult ones will demand ever larger sums of
money/time/effort to be accomplished, and so at some point it will
simply not be financially attra
On 9/12/06, Matt Mahoney <[EMAIL PROTECTED]> wrote:
(...)
1. It is not possible for a less intelligent entity (human) to predict the
behavior of a more intelligent entity. A state machine cannot simulate
another machine with more states than itself.
(...)
I think you should add "in the general
On 9/12/06, Matt Mahoney <[EMAIL PROTECTED]> wrote:
(...)
Uploading is occurring as well, every time we post our words and pictures on
the Internet. I realize this only gets a small fraction of our knowledge, but
we would never want to upload everything anyway. Much of the knowledge related
On 9/11/06, Michael Anissimov <[EMAIL PROTECTED]> wrote:
Technologically, AI is far far easier than uploading. So, AI will
come first, and we will have to build AI that is reliably nice to us,
or suffer the consequences.
(...)
I am not so sure that AI is easier than uploading.
Surely upload n
On 9/11/06, Shane Legg <[EMAIL PROTECTED]> wrote:
(...)
I'm no longer the child who used to play with the kids across the
road in New Zealand in the late 1970's. I have some memories of
being that person, but that's it. I don't see this as fundamentally
different, especially if it is allowed to