Re: [FairfieldLife] Why the Future Doesn't Need Us (Long)

2006-04-29 Thread Bhairitu



I read this article when it originally came out and am much in agreement 
with Bill Joy.   Last Christmas a relative gave me a copy of 
Kurzweil's "The Singularity Is Near," whose premise I feel is deeply 
flawed.   My relative mistakenly thought that I would see Kurzweil as a 
great technologist, but I see him as a "mad scientist" and very 
unenlightened.   I have also read some of Kaczynski's rants, and it 
is too bad he chose the wrong actions to make his point.

Computers are a great tool but that's it: a tool.  I work with them all 
the time and enjoy getting away from them.  Nothing is more painful than 
getting stuck on a project trying to fix a difficult-to-find bug while 
getting naive comments from suits who in their ignorance think I'm 
either a wunderkind for creating such a program or a jerk if I can't 
fix it.

Computers allowed small businesses to track their losses, which is 
somewhat good and somewhat bad.  Before every small store had computers 
to track inventory, they tended to have a more varied selection.  That 
has long since gone away.

It looks like big business and the "Illuminati," or whatever you want to 
call those rakshasas, don't like computers and the freedom they 
give us either.  They really don't like the freedom of speech on the 
Internet.  They want to rein in the Internet severely.  I would suggest 
we rein them in instead.

As for AI or "artificial consciousness," which I have actually worked on 
a bit, I mentioned to some of my colleagues that Indian philosophy has 
much of the mechanics of consciousness broken out into paradigms that 
could be implemented on a computer.  One colleague actually located a 
PhD thesis in which someone primitively demonstrated such a theory.  I never 
took the theories much further to implement them myself, but there is at 
least one well-known program that borrowed from some of this theory.
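Out of curiosity, here is a toy sketch of what I mean: the classical 
antahkarana breakdown (manas, chitta, ahamkara, buddhi) mapped onto a 
simple agent loop.  The class and method names are purely my own 
illustration, not taken from that thesis or from any real program:

```python
# Toy illustration only: the four antahkarana functions modeled as
# stages of a perception loop.
#   manas    - receives and labels raw sense input
#   ahamkara - appropriates the percept ("this is mine")
#   buddhi   - discriminates against stored impressions and decides
#   chitta   - the memory store of past impressions (samskaras)

class Antahkarana:
    def __init__(self):
        self.chitta = []  # stored impressions

    def manas(self, stimulus):
        """Label raw input as a percept."""
        return {"percept": stimulus}

    def ahamkara(self, percept):
        """Tag the percept as belonging to a self."""
        percept["owner"] = "I"
        return percept

    def buddhi(self, percept):
        """Compare against stored impressions, decide, and remember."""
        familiar = any(p["percept"] == percept["percept"] for p in self.chitta)
        self.chitta.append(percept)  # lay down a new impression
        return "recognize" if familiar else "investigate"

    def perceive(self, stimulus):
        return self.buddhi(self.ahamkara(self.manas(stimulus)))

mind = Antahkarana()
print(mind.perceive("bell"))  # first exposure  -> investigate
print(mind.perceive("bell"))  # repeated input  -> recognize
```

Crude, of course, but it shows how those paradigms decompose naturally 
into computational stages.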



Vaj wrote:

> Why the future doesn't need us is an article by Bill Joy, Chief  
> Scientist at Sun Microsystems. In this article, he argues (quoting  
> the sub title) that "Our most powerful 21st-century technologies -  
> robotics, genetic engineering, and nanotech - are threatening to make  
> humans an endangered species." The article was published in the April  
> 2000 issue of Wired Magazine. Joy warns:
>
> "The experiences of the atomic scientists clearly show the need to  
> take personal responsibility, the danger that things will move too  
> fast, and the way in which a process can take on a life of its own.  
> We can, as they did, create insurmountable problems in almost no time  
> flat. We must do more thinking up front if we are not to be similarly  
> surprised and shocked by the consequences of our inventions."
> The essay has been compared by The Times to Albert Einstein's 1939  
> letter to then US President Franklin D. Roosevelt, warning him of the  
> possibility of the Nazis inventing the atomic bomb.
>
> http://www.primitivism.com/future.htm
>







To subscribe, send a message to:
[EMAIL PROTECTED]

Or go to: 
http://groups.yahoo.com/group/FairfieldLife/
and click 'Join This Group!'








  
  






  
  
  YAHOO! GROUPS LINKS



   Visit your group "FairfieldLife" on the web. 
   To unsubscribe from this group, send an email to: [EMAIL PROTECTED] 
   Your use of Yahoo! Groups is subject to the Yahoo! Terms of Service.


Re: [FairfieldLife] Why the Future Doesn't Need Us (Long)

2006-04-28 Thread Vaj




On Apr 28, 2006, at 11:14 AM, Peter wrote:

> A sentient man-made robot/machine would be mind
> boggling. If it was intelligent, watch out! So many
> possibilities to consider.

The disturbing thing to me is that the scientists in AI who are  
*seriously* talking about robot species are not talking about  
initially uploading the entire consciousness of a human to the robot,  
but merely the instinctual, thinking mind. No higher intellect, no  
fine discriminating intellect (buddhi), and no conscience. At one of  
the first Mind and Life conferences the Dalai Lama stated that once  
some material matrix becomes available to hold consciousness,  
consciousness will be able to incarnate into this new species. And of  
course by the time we get to that stage, the ability to  
self-replicate, a relatively mechanical process, will have already  
been mastered.

There are a number of yogis who have talked of future Buddhas who  
appear to be made of some silicon or crystalline material.

We will, quite soon, have computers the size of cells.  
Injectable. It's coming sooner than we think.







Re: [FairfieldLife] Why the Future Doesn't Need Us (Long)

2006-04-28 Thread Peter



A sentient man-made robot/machine would be
mind-boggling. If it were intelligent, watch out! So many
possibilities to consider.

--- Vaj <[EMAIL PROTECTED]> wrote:

> Why the future doesn't need us is an article by Bill
> Joy, Chief  
> Scientist at Sun Microsystems. In this article, he
> argues (quoting  
> the sub title) that "Our most powerful 21st-century
> technologies -  
> robotics, genetic engineering, and nanotech - are
> threatening to make  
> humans an endangered species." The article was
> published in the April  
> 2000 issue of Wired Magazine. Joy warns:
> 
> "The experiences of the atomic scientists clearly
> show the need to  
> take personal responsibility, the danger that things
> will move too  
> fast, and the way in which a process can take on a
> life of its own.  
> We can, as they did, create insurmountable problems
> in almost no time  
> flat. We must do more thinking up front if we are
> not to be similarly  
> surprised and shocked by the consequences of our
> inventions."
> The essay has been compared by The Times to Albert
> Einstein's 1939  
> letter to then US President Franklin D. Roosevelt,
> warning him of the  
> possibility of the Nazis inventing the atomic bomb.
> 
> http://www.primitivism.com/future.htm
> 
> Why the Future Doesn't Need Us
> 
> 
> Bill Joy
> 
>  From the moment I became involved in the creation
> of new  
> technologies, their ethical dimensions have
> concerned me, but it was  
> only in the autumn of 1998 that I became anxiously
> aware of how great  
> are the dangers facing us in the 21st century. I can
> date the onset  
> of my unease to the day I met Ray Kurzweil, the
> deservedly famous  
> inventor of the first reading machine for the blind
> and many other  
> amazing things.
> 
> Ray and I were both speakers at George Gilder's
> Telecosm conference,  
> and I encountered him by chance in the bar of the
> hotel after both  
> our sessions were over. I was sitting with John
> Searle, a Berkeley  
> philosopher who studies consciousness. While we were
> talking, Ray  
> approached and a conversation began, the subject of
> which haunts me  
> to this day.
> 
> I had missed Ray's talk and the subsequent panel
> that Ray and John  
> had been on, and they now picked right up where
> they'd left off, with  
> Ray saying that the rate of improvement of
> technology was going to  
> accelerate and that we were going to become robots
> or fuse with  
> robots or something like that, and John countering
> that this couldn't  
> happen, because the robots couldn't be conscious.
> 
> While I had heard such talk before, I had always
> felt sentient robots  
> were in the realm of science fiction. But now, from
> someone I  
> respected, I was hearing a strong argument that they
> were a near-term  
> possibility. I was taken aback, especially given
> Ray's proven ability  
> to imagine and create the future. I already knew
> that new  
> technologies like genetic engineering and
> nanotechnology were giving  
> us the power to remake the world, but a realistic
> and imminent  
> scenario for intelligent robots surprised me.
> 
> It's easy to get jaded about such breakthroughs. We
> hear in the news  
> almost every day of some kind of technological or
> scientific advance.  
> Yet this was no ordinary prediction. In the hotel
> bar, Ray gave me a  
> partial preprint of his then-forthcoming book The
> Age of Spiritual  
> Machines, which outlined a utopia he foresaw - one
> in which humans  
> gained near immortality by becoming one with robotic
> technology. On  
> reading it, my sense of unease only intensified; I
> felt sure he had  
> to be understating the dangers, understating the
> probability of a bad  
> outcome along this path.
> 
> I found myself most troubled by a passage detailing
> a dystopian  
> scenario:
> 
> 
> The New Luddite Challenge
> First let us postulate that the computer scientists
> succeed in  
> developing intelligent machines that can do all
> things better than  
> human beings can do them. In that case presumably
> all work will be  
> done by vast, highly organized systems of machines
> and no human  
> effort will be necessary. Either of two cases might
> occur. The  
> machines might be permitted to make all of their own
> decisions  
> without human oversight, or else human control over
> the machines  
> might be retained.
> 
> If the machines are permitted to make all their own
> decisions, we  
> can't make any conjectures as to the results,
> because it is  
> impossible to guess how such machines might behave.
> We only point out  
> that the fate of the human race would be at the
> mercy of the  
> machines. It might be argued that the human race
> would never be  
> foolish enough to hand over all the power to the
> machines. But we are  
> suggesting neither that the human race would
> voluntarily turn power  
> over to the machines nor that the machines would
> willfully seize  
> power. What we do suggest