Re: [singularity] Previous message was a big hit, eh?

2007-06-28 Thread Lúcio de Souza Coelho

On 6/28/07, Alan Grimes [EMAIL PROTECTED] wrote:
(...)

Seriously now, why do people insist there is a necessary connection (as
in A implies B) between the singularity and brain uploading?

Why is it that anyone who thinks the singularity happens and most
people remain humanoid is automatically branded a luddite?


I don't think that is the case. Personally I would expect that, given
the choice, the vast majority of people would remain humanoid. And
even with upload technology (which, granted, has no necessary
connection with the Singularity), I would expect most of the people
who choose upload-based immortality to inhabit humanoid bodies -
virtual or physical. BTW, isn't it interesting that in existing
virtual worlds basically everyone chooses humanoid bodies?


Looks like I'm going to have to resort to my collection of
reductio ad absurdum arguments to get this list back on topic... =\


Thanks! Those "angels dancing on a pin" discussions are REALLY boring,
and I almost followed the unsubscribe link that someone posted here.
:)



Re: [singularity] Dear Uploaders,

2007-06-24 Thread Lúcio de Souza Coelho

On 6/24/07, Alan Grimes [EMAIL PROTECTED] wrote:

Dear Uploaders,

I am quite confused. Please choose one of the following two statements:

A. The nature of superintelligence and its needs and preferences is
largely unknowable to mere mortals.

B. Nearly all superintelligences will have a practically insatiable
appetite for computational resources, mostly for the sake of running
Simulations.

I have heard both of these obviously contradictory and irreconcilable
statements; please pick one.

(...)

A. :)



Re: [singularity] What form will superAGI take?

2007-06-17 Thread Lúcio de Souza Coelho

On 6/17/07, Mike Tintner [EMAIL PROTECTED] wrote:

Lucio: Given the ever-distributed nature of processing power, I would
suspect that a superAGI would have no physical form, in the sense that
it would be distributed across many processing nodes around the world.
(And those could be computer clusters, single personal computers, and
so on - if you want to stick to physical forms, probably we are
talking about zillions of boxes of many sizes and shapes.) And despite
being formless it would be "omnipresent", in the sense that it would
be able to access zillions of sensors (and possibly actuators), from
street surveillance cameras to radiotelescopes.

That's interesting - a massive extension of HAL, in a sense.


And it is not a new idea either. There is another old science fiction
movie (which I find far more interesting than 2001, to the horror of
many hardcore Kubrick fans :) that depicts a scenario resembling that:
Demon Seed (http://en.wikipedia.org/wiki/Demon_Seed). Proteus IV, the
superAGI in there, spreads its consciousness everywhere - including
the automated house of its (his?) creator. Oh, and it also uses the
radiotelescopes that I cited! In that sense the film is prophetic,
depicting a planet-wide net of information flow where all devices are
connected to each other. However, Proteus IV is depicted as a huge
machine in an underground facility, so the idea of distributed
computation was not present. (Well, kind of. The intriguing finale
hints at a self-replicating superAGI of sorts.)


But the immediate problem with that is how could it have a sense of
self? That's crucial, surely, if you are to distinguish between "what
I think" and "others think" - all the opinions that you, either
superAGI or human, are continually being immersed in.

(...)

That's an interesting question indeed. Suppose that a superAGI has
*already* emerged in the information flow of the Internet. (Like
another AI entity from yet another scifi movie, the marvelous Ghost in
the Shell (http://en.wikipedia.org/wiki/Ghost_in_the_Shell_%28film%29).
Sorry, you should not have mentioned movies and activated my Pop
Culture Reference Mode. :) Would this Internet-based entity think of
humans as separate entities? Or - what I think more plausible - would
it think of humans as *part of it*, little nodes of consciousness
continuously inputting new information and modifying existing
information?


Re: [singularity] Getting ready for takeoff

2007-06-17 Thread Lúcio de Souza Coelho

On 6/17/07, Eugen Leitl [EMAIL PROTECTED] wrote:
(...)

Also, there
are a few asteroids that are even closer to us in terms of delta v


Yes? Really? I would like to know which ones (I don't disagree, I
would just like to have a list).

(...)

http://echo.jpl.nasa.gov/~lance/delta_v/delta_v.rendezvous.html



Re: [singularity] Getting ready for takeoff

2007-06-15 Thread Lúcio de Souza Coelho

On 6/15/07, Eugen Leitl [EMAIL PROTECTED] wrote:
(...)

Sure, and don't forget to add some hand mirrors, and glass pearls.


It seems that you are trying to equate all rare elements with gold -
i.e., something that is valuable just because it is rare - but
unfortunately that does not seem to be the case. Platinum, for
instance, has a lot of industrial applications, including use as a
catalyst in fuel cells, and in fact its price has been skyrocketing in
recent years due to increased demand. (As is the case with many
metallic commodities.)

(...)

Why asteroids? The Moon is close enough, both in distance, and
in terms of delta v.

(...)

Unless you know of some reserve of pure metal alloy buried under the
regolith, there will always be some raw materials that are easier to
extract from asteroids than by processing zillions of tonnes of
regolith - and that difficulty adds distance in commercial terms.
Also, there are a few asteroids that are even closer to us in terms of
delta v than the Moon - the gravity well of our neighboring world is
not exactly shallow.
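
To make that comparison concrete, here is a rough sketch in Python.
The delta-v figures are approximations I am assuming from memory (the
JPL delta_v page gives the real numbers), and the asteroid picks are
only illustrative:

# Illustrative rendezvous delta-v budgets from low Earth orbit (LEO).
# All numbers below are rough assumptions, not authoritative figures;
# they only show why "closer in delta v than the Moon" is plausible.

DELTA_V_KM_S = {
    "Moon (soft landing)":          5.9,  # ~3.1 TLI + ~0.9 LOI + ~1.9 descent
    "Moon (orbit only)":            4.0,
    "NEA 2000 SG344 (rendezvous)":  3.6,  # among the lowest on the JPL list
    "NEA 1991 VG (rendezvous)":     4.5,
    "NEA Nereus (rendezvous)":      5.0,
}

for target, dv in sorted(DELTA_V_KM_S.items(), key=lambda kv: kv[1]):
    print(f"{target:30s} ~{dv:.1f} km/s from LEO")

Rockets care about delta v, not kilometers - and in delta-v terms a
few near-Earth asteroids really are "closer" than the lunar surface.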



Re: [singularity] Getting ready for takeoff

2007-06-15 Thread Lúcio de Souza Coelho

On 6/15/07, Tom McCabe [EMAIL PROTECTED] wrote:
(...)

Also, simply crashing an asteroid onto the planet will
vaporize all the ore and scatter it for dozens of
kilometers in every direction.

(...)

I talked about a controlled crash, in which dispersion and
vaporization would tend to be minimized. I don't think it is
impossible, for we have seen a great number of metallic meteorites
hitting the ground and still remaining relatively preserved - and that
under uncontrolled conditions...

By the way, although I ultimately agree with you that mining the Moon
is difficult (it is the "slag pile of the Solar System", in Zubrin's
words), I wouldn't discard the possibility that some crater bottoms
have high concentrations of many minerals - especially metals -
remnants of the asteroid impacts that created those craters. However,
so far this is just a conjecture, while in the case of asteroids we
already have a wealth of data supporting the view that mining them
will be much easier.



Re: [singularity] Getting ready for takeoff

2007-06-15 Thread Lúcio de Souza Coelho

On 6/15/07, Matt Mahoney [EMAIL PROTECTED] wrote:
(...)

- Uploading your mind and simulating a world where resources are plentiful.

For all you know, the latter has already happened.

(...)

Err... in *my* world many resources are getting scarce, and indeed I
thought that all this discussion was about that. :)

Besides, virtual worlds will have their own resource problems in the
real world, caused by increasing memory and processing demands.
(Actually the crude virtual worlds of today, like Second Life, are
already limited by that problem.) However, virtual worlds populated by
uploads would, granted, be more efficient, in the sense that a human
upload would need far less matter and energy than an actual human.



Re: [singularity] Getting ready for takeoff

2007-06-14 Thread Lúcio de Souza Coelho

On 6/14/07, Charles D Hixson [EMAIL PROTECTED] wrote:
(...)

Check your energetics.  Asteroid mining is promising for space-based
construction.  Otherwise you'd better at least have controllable fusion
rockets.

(...)

Not really.

Elements that are incredibly rare on Earth - such as platinum group
metals - could be mined from asteroids and simply dropped to Earth in
run-of-the-mill reentry capsules - and those wouldn't even need
rocketry tech beyond the current level. Take into consideration that
even a few tonnes of platinum - well below the weight of the space
shuttle - would be of immense value.
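
Just to put a rough number on "immense", here is a back-of-the-envelope
sketch in Python. The ~US$1,300 per troy ounce figure is my assumption
for a 2007-ish platinum spot price, not a quoted source:

# Back-of-the-envelope value of a small platinum cargo. The spot price
# below is an assumed, roughly 2007-era figure.

TROY_OZ_PER_KG = 1000 / 31.1035     # ~32.15 troy ounces per kilogram
PRICE_USD_PER_TROY_OZ = 1300.0      # assumption: ~2007 platinum spot price

def cargo_value_usd(tonnes):
    """Market value of a platinum cargo at the assumed spot price."""
    return tonnes * 1000 * TROY_OZ_PER_KG * PRICE_USD_PER_TROY_OZ

for t in (1, 5, 10):
    print(f"{t:2d} tonnes of platinum ~ US$ {cargo_value_usd(t) / 1e6:,.0f} million")

Even a single capsule-sized cargo lands in the hundreds of millions of
dollars under that assumption.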

As for bulk elements like iron, copper, nickel, etc., there are small
asteroids - a few tens of meters in length - that could potentially
hold thousands of tonnes of those metals. My suggestion for those
would be a controlled crash - simply boost the asteroid (using a mass
driver or whatever) onto a trajectory where it will be aerobraked by
Earth's upper atmosphere (preferably over the ocean, to avoid
hazardous hypersonic booms over populated areas) and then, stripped of
most of its kinetic energy, crash in an uninhabited area. Probably the
crash will still look like a small nuke, but then we already devastate
similarly large areas for comparable gains (as in the case of
hydroelectric plants or extensive surface mining). By the way,
speaking of mining on Earth, some of the ore deposits currently
exploited are in fact sites of ancient asteroid crashes...
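
A crude kinetic-energy estimate shows why the aerobraking step matters.
All inputs here are illustrative assumptions - a ~30 m iron-nickel
body, a typical ~11 km/s entry speed, and ~300 m/s after braking:

import math

# Rough kinetic-energy estimate for the controlled-crash scenario,
# expressed in kilotons of TNT. Inputs are assumptions, not data.

IRON_DENSITY = 7800.0        # kg/m^3, roughly iron-nickel
KT_TNT_JOULES = 4.184e12     # energy of one kiloton of TNT

def impact_energy_kt(diameter_m, speed_m_s):
    radius = diameter_m / 2.0
    mass = IRON_DENSITY * (4.0 / 3.0) * math.pi * radius ** 3
    return 0.5 * mass * speed_m_s ** 2 / KT_TNT_JOULES

print(f"unbraked at 11 km/s:    ~{impact_energy_kt(30, 11000):,.0f} kt")
print(f"aerobraked to 0.3 km/s: ~{impact_energy_kt(30, 300):,.1f} kt")

Under these assumptions, shedding ~97% of the entry speed turns a
megaton-class impact into a kiloton-class one - a "small nuke" indeed,
instead of a regional catastrophe.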

Finally, in the long term space elevators may well be possible, and
then the limitation of bringing raw materials from space to Earth will
be similar to the limitation of moving materials between continents
using ships.



Re: [singularity] Getting ready for takeoff

2007-06-13 Thread Lúcio de Souza Coelho

If you have strong, Drexler-like nanotech - i.e., assemblers and
disassemblers - this scare of an upcoming shortage of resources
becomes moot, and the need for "ephemeralization", as you call it,
also tends to disappear. Given strong nanotech it would for instance
be very cheap to gather resources elsewhere in the Solar System -
asteroid mining seems especially promising. Indeed, even the
exploration of untapped resources here on Earth, like the possibility
of ocean mining that you mention, would likely increase available
resources by an order of magnitude or so - and that likely requires
just "weak" nanotech. (Which I call "materials science on steroids". :)

Personally, my attitude toward the cyclical alerts of "OMG! This or
that resource is running short! The world is doomed! We are all gonna
die!" tends to be skeptical. Basically because this has happened
several times in history, and what usually happens is that once this
or that resource gets more expensive, the pressure for finding
alternatives also increases - and so far they have been found.

On 6/13/07, Charles D Hixson [EMAIL PROTECTED] wrote:
(...)

Unless there are some replacements for certain rare elements...probably not.
Ephemeralization is about to become NECESSARY, as the pool of available
material resources is shrinking FAST!!
(This statement is based on one article, but I found it utterly
convincing, as I was expecting that this result would be discovered as
soon as someone looked.)

So I think the next necessary area of development is MEMS and
nano-tech.  Assemblers and disassemblers are going to be needed within
20 years.  For some elements even sooner.  Screens will need to start
shrinking rather than growing...plausibly being worn as glasses are now
until a direct neural feed becomes available.

One good place to start might be solar-powered desalinization...and then
material recovery from the brine.  That one's difficult, as there's
already lots of competition.  OTOH, there's an *immense* market if you
can get the price down.






Re: [singularity] Bootstrapping AI

2007-06-04 Thread Lúcio de Souza Coelho

On 6/4/07, Panu Horsmalahti [EMAIL PROTECTED] wrote:

2007/6/4, Matt Mahoney [EMAIL PROTECTED]:

(...)

If you are looking for a computer simulation of a human mind, you
will be disappointed, because there is no economic incentive to build
such a thing.

 -- Matt Mahoney, [EMAIL PROTECTED]

(...)

IBM Blue Brain project or CCortex?

(...)

It is a simulation of some aspects of the brain, aimed at
understanding brain structure. It is not a simulation of a human mind,
whether in the sense of an upload or of a human-equivalent AI.



Re: [singularity] Bootstrapping AI

2007-06-04 Thread Lúcio de Souza Coelho

On 6/4/07, Papiewski, John [EMAIL PROTECTED] wrote:
(...)

I disagree.  If even a half-baked, partial, buggy, slow simulation of a
human mind were available, the captains of industry would jump on it in
a second.

(...)

Do you remember when no business had an automated answering service?  That
transition took only a few years.

(...)

Considering previous messages from Matt, I think that when he mentions
"simulation of a human mind" he means an entity possessing not only
human intelligence, but also human feelings and motivations. That, I
agree, would look uneconomical, in the sense that it would have the
same problems as a human worker - boredom, getting pissed off, going
on strike, and so on. (Not to mention the ethical problem of having a
human-equivalent intelligence that would probably be kept as a slave.)
Maybe a profitable AI should just do the work that it is supposed to
do with the same degree of efficiency, never complain, and never
manifest the slightest hint of emotion.



Re: [singularity] Bootstrapping AI

2007-06-04 Thread Lúcio de Souza Coelho

On 6/4/07, Tom McCabe [EMAIL PROTECTED] wrote:

So there's your problem! You're demanding a system
that works, however badly. Any computer programmer can
tell you that you will not get a system that works at
all without doing a large percentage of the work
needed to implement a system that works *well*. So you
can see a model of the human brain that has a lot of
the ideas of AI in place already, and go "well, it
isn't fully intelligent yet, so it doesn't count" and
go on ignoring the parts that we have implemented.

(...)

As it happens, I *am* a programmer. And I would gladly accept your
gradualistic argument if the Blue Brain project had AI goals. But it
is guided toward neuroscience... *Perhaps* insights obtained with Blue
Brain will be used in some fields of AI, but to point to the Blue
Brain project as it is now as an example of a simulation of the human
mind sounds like a fallacy of undue amplification...



Re: [singularity] Storytelling, empathy and AI

2006-12-20 Thread Lúcio de Souza Coelho

On 12/20/06, Ben Goertzel [EMAIL PROTECTED] wrote:
(...)

For example, to encourage the storytelling/empathy connection to exist
in an AI system, one might want to give the system an explicit
cognitive process of hypothetically putting itself in someone else's
place.  So, when it hears a story about character X, it creates
internally a fabricated story in which it takes the place of character
X.  There is no reason to think this kind of strategy would come
naturally to an AI, particularly given its intrinsic dissimilarity to
humans.  But there is also no reason that kind of strategy couldn't be
forced, with the impact of causing the system to understand humans
better than it might otherwise.

(...)

I half-remember quotes from the now somewhat quaint scifi book The
Robots of Dawn, by Asimov. There the character Dr. Fastolfe justifies
his creation of humaniform robots by saying things like "there is no
mind without a body and no body without mind" and "an inhuman body
develops an inhuman mind".

Your assertion about storytelling putting ourselves in someone else's
skin makes me wonder whether those fictional statements are true.
Perhaps, in order to put herself into the skin of a human by means of
storytelling, an AI first needs to have a human model of herself.
Perhaps a virtual body, a simulation - but even so, something that
would serve to anchor her to the point of view of humans.

By the way, I wholeheartedly agree with the storytelling-induced
self-transference stuff. When I read a story, especially one of those
in the first person, I feel as if I were incarnated in the character.



Re: [singularity] Animal rights

2006-10-30 Thread Lúcio de Souza Coelho

On 10/27/06, Matt Mahoney [EMAIL PROTECTED] wrote:
(...)

2. What is human?

- If you make an exact copy of a human and kill the original, is it murder?
- What if you copy only the brain and put it in a different body?
- What if you put the copy in a robot body?
- What if you copy only the information in the brain and run it in a simulation?
- What if you put the memory in archival storage but don't otherwise use it?


All of them are human-equivalent to me, in an ethical sense. The last
case is equivalent to suspended animation.

I would like to point to what may be a fifth case, described by David
Brin in his fantastic Kiln People
(http://www.amazon.com/Kiln-People-Books-David-Brin/dp/BDK4HM/sr=8-1/qid=1162212357/ref=pd_bbs_sr_1/104-7205949-4618306?ie=UTF8s=books).
There people have the technology to produce "golems" - replicas of
themselves made of nano-modified clay (!), which carry the same
memories and thoughts as the original but are disposable: due to
chemical energy storage limitations, they typically last for just one
day. At the end of the day they begin to dissolve, and their only
chance of survival is to upload their memories back to the
long-lasting original. In fact biological people live their lives in
parallel, creating several golems in the morning (one for going to the
supermarket, one for going to the office, etc., while the original
dedicates himself to leisure time) and downloading them back in the
evening.

Occasionally, of course, some golems are destroyed (either by accident
or by murder) before uploading to the original. What are the ethical
implications of that?


- What if you only copy part of the memory?  How much do you need?
- What if you copy none of it, but reconstruct a plausible substitute based on 
what you know about the person?

(...)

I think that those are questions of intellectual property and
plagiarism already extensively debated in the world of today. ;-)



Re: Re: [singularity] Defining the Singularity

2006-10-30 Thread Lúcio de Souza Coelho

On 10/27/06, Matt Mahoney [EMAIL PROTECTED] wrote:
(...)

Orwell's 1984 predicted a world where a totalitarian government watched your 
every move.  What he failed to predict is that it would happen in a democracy.  
People want surveillance.  You want cameras in businesses for better security.  
You use credit cards that track your spending because cash can be stolen.  You 
let automated toll booths track your movements so you don't have to stop.  You 
trust Yahoo/Google/your ISP/company/university with your email because it's 
convenient.  You give up privacy a little bit at a time and each time you get 
something in return.  Big changes happen slowly.

(...)

We are discussing the semantics of politics here. Precisely because
democracies tend to give more and more power to entities that they
consider good and responsible, they tend to destroy themselves and
become totalitarian governments. Totalitarian governments, on the
other hand, have a tendency to destroy themselves, either through
abuse of power or through dependency on a personality cult.

For the record, it is not clear from reading 1984 that the Big
Brother government reached power by force. Winston's childhood
memories on this historical subject are blurred, and conceivably that
totalitarian government could have ascended to power by democratic
means.



Re: Re: [singularity] Defining the Singularity

2006-10-26 Thread Lúcio de Souza Coelho

On 10/26/06, deering [EMAIL PROTECTED] wrote:
(...)

The only rational thing to do is to build an SAI without any preconceived
ideas of right and wrong, and let it figure it out for itself.  What makes
you think that protecting humanity is the greatest good in the universe?

(...)

Hundreds of thousands of years of evolution selecting humans that like
humans (or at least some of them). And before that, billions of years
of similar selective pressures on various evolutionary ancestors.



Re: [singularity] Is Friendly AI Bunk?

2006-09-12 Thread Lúcio de Souza Coelho

On 9/12/06, Matt Mahoney [EMAIL PROTECTED] wrote:
(...)

Uploading is occurring as well, every time we post our words and pictures on 
the Internet.  I realize this only gets a small fraction of our knowledge, but 
we would never want to upload everything anyway.  Much of the knowledge related 
to low level sensory processing and motor control would not be useful in a 
different physical embodiment.  Instead, we copy only what is important and 
useful.

(...)

In that sense uploading has always occurred - first through oral
transmission of knowledge, then through writing on paper, and now
through writing on the Internet, which in a way also includes photos,
videos, and personal sensory data.

I call that "memetic immortality", but I tend to draw a conceptual
line between it and uploading. Using a computer-as-a-brain analogy,
uploading would be something analogous to a full backup of the hard
disk; memetic immortality would be more akin to file sharing and other
gradual data exchange processes.

On the other hand, once we have Strong AI it may be possible to
produce, uh, "reverse uploads". Suppose that an AI scans the Internet
in search of all the writings, videos, photos, etc., that a given dead
person, Mr. X, left behind. Supposing that X wrote a lot of things and
the AI has a good enough capacity for abstraction/extrapolation, it
would then be able to create a simulation of X that would have all his
recorded memories and thoughts, and would be able to formulate
opinions about new subjects that would likely be the opinions of X if
he were still alive.
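
In the crudest possible form the idea could start as simple retrieval:
answer a new question with the recorded passage most similar to it.
This is only a toy sketch - the three-line corpus standing in for Mr.
X's writings is invented, and real abstraction/extrapolation would of
course need vastly more than word matching:

import math
from collections import Counter

# Toy "reverse upload": pick the recorded passage most similar to a
# new question, using bag-of-words cosine similarity.

def bag(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

corpus = [  # hypothetical recorded writings of "Mr. X"
    "asteroid mining is far more promising than lunar mining",
    "virtual worlds will face real resource limits of their own",
    "most people will choose to keep humanoid bodies",
]

def best_guess(question):
    q = bag(question)
    return max(corpus, key=lambda doc: cosine(q, bag(doc)))

print(best_guess("what do you think about mining asteroids?"))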



Re: Re: [singularity] Is Friendly AI Bunk?

2006-09-12 Thread Lúcio de Souza Coelho

On 9/12/06, Matt Mahoney [EMAIL PROTECTED] wrote:
(...)

1. It is not possible for a less intelligent entity (human) to predict the
behavior of a more intelligent entity.  A state machine cannot simulate
another machine with more states than itself.

(...)

I think you should add "in the general case" to the statement above.
In particular cases a less intelligent entity is perfectly able to
predict the behavior of a more intelligent one. For instance, my cats
are less intelligent than me (or so I hope ;-) and they can predict
several of my actions and make decisions based on that - for instance,
"Lúcio has finished dinner and so he will not be in the kitchen
anymore tonight, so I had better meow for more food."

I guess they can predict that based on previous cases - the countless
times that I finished dinner, turned the kitchen light off and went to
my bedroom. Which by the way may hint at a way to predict (in the same
cat-like statistical way) the friendliness of an AI, as the sketch
after this list illustrates:

- Start the AI inside a virtual environment approximating reality, but
don't tell the AI that it's virtual.
- Observe a significant number of the AI's actions (and reactions) in
that virtual reality.
- If the AI is considered friendly, then restart it, this time in a
real environment.
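
A toy rendering of that protocol in Python - the sandboxed "actions",
the scoring, and the release threshold are all invented stand-ins,
since observing and judging a real AI's actions is the hard part being
waved away here:

import random

def observe_action():
    # Stand-in for one observed AI action inside the virtual world.
    return random.choice(["cooperates", "ignores", "harms"])

def friendliness_trial(n_steps=1000, threshold=0.99):
    friendly = sum(1 for _ in range(n_steps) if observe_action() != "harms")
    score = friendly / n_steps
    print(f"empirical friendliness: {score:.3f}")
    return score >= threshold   # only then restart the AI in reality

if friendliness_trial():
    print("restart the AI in a real environment")
else:
    print("keep it sandboxed (or shut it down)")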

Which, by the way, brings me to another point on your list:


2. A rigorous proof that an AI will be friendly requires a rigorous
definition of "friendly".


People (and even science) are often satisfied with proofs that are
empirical rather than rigorous. And I think that the definition of
friendliness may be intrinsically subjective. The VR testbed for AI
would accommodate both the empiricism and the subjectivism involved in
proving friendliness.
