Re: Re: [singularity] Convincing non-techie skeptics that the Singularity isn't total bunk

2006-10-28 Thread Ben Goertzel

Hi,


Do most in the field believe that only a war can advance technology to
the point of singularity-level events?
Any opinions would be helpful.


My view is that for technologies requiring large investment in
manufacturing infrastructure, the US military is one very likely
source of funds, but not the only one.  For instance, suppose that
computer manufacturers decide they need powerful nanotech in order to
build better and better processors: that would be a convincing
nonmilitary source of massive nanotech R&D funds.

OTOH, for technologies like AGI where the main need is innovation
rather than expensive infrastructure, I think a key role for the
military is less likely.  I would expect the US military to be among
the leaders in robotics, because robotics is
costly-infrastructure-centric, but not necessarily in robot
*cognition* (as opposed to hardware), because cognition R&D is more
innovation-centric.

Not that I'm saying the US military is incapable of innovation; just
that it seems more reliable as a source of development $$ for
technologies not yet mature enough to attract commercial investment
than as a source of innovative ideas.

-- Ben

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/[EMAIL PROTECTED]


Re: [singularity] Convincing non-techie skeptics that the Singularity isn't total bunk

2006-10-27 Thread Anna Taylor

Josh Cowan wrote:

Issues associated with animal rights are better known than the coming
Singularity.


Issues associated with animal rights are easy to understand, and they
make you feel good when you help. The general public can pick up a
phone, donate money, and feel rewarded for helping a cause. If there
is no cause, no warm feeling of helping others, chances are the
general public won't be interested. The Singularity is complicated,
with issues the general public can't even begin to grasp. I think the
Singularity needs to be presented in clearer terms if the scientific
world wants the general public to believe in, contribute to, or be
part of the Singularity.

Anna:)



On 10/26/06, Josh Cowan [EMAIL PROTECTED] wrote:



Chris Norwood wrote:

 When talking about use, it is easy to explain by
 giving examples. When talking about safety, I always
 bring in disembodied vs. embodied AGI and the normal
 range-of-possible-minds debate. If they are still
 wary, I talk about the possible inevitability of AGI.
 I relate it to the making of the atom bomb during
 WWII: do we want someone aware of the danger,
 motivated to make it as safe as possible, and
 following standard practice guidelines? Or would you
 rather someone with bad intent and recklessness made
 the attempt?



Assuming memes in the general culture have some, if only very indirect,
effect on the future, perhaps a backup approach to FAI, and one more
relevant to the culture at large, would be encouraging animal rights.
Issues associated with animal rights are better known than the coming
Singularity.  Besides, if the AI is so completely in control and
inevitable, and if my children or I shall be nothing more than
insects (de Garis's description) or goldfish, I want the general ethos
to value the dignity of pets. Next time you see that collection can at
the grocery store, look at that cute puppy and give generously.   :)







Re: [singularity] Convincing non-techie skeptics that the Singularity isn't total bunk

2006-10-27 Thread BillK

On 10/22/06, Anna Taylor [EMAIL PROTECTED] wrote:

On 10/22/06, Bill K wrote:

But I agree that huge military R&D expenditure (which already supports
many, many research groups) is the place most likely to produce
singularity-level events.

I am aware that the military is the most likely place to produce
singularity-level events; I'm just trying to stay optimistic that a
war won't be the answer to advancing it.




I've just seen a news article that is relevant.
http://technology.guardian.co.uk/weekly/story/0,,1930960,00.html

Launching a new kind of warfare
Thursday October 26, 2006   The Guardian

Extracts:

By 2015, the US Department of Defense plans that one third of its
fighting strength will be composed of robots, part of a $127bn (£68bn)
project known as Future Combat Systems (FCS), a transformation that is
part of the largest technology project in American history.

Among the 37 or so UAVs detailed in the US Unmanned Aircraft Systems
Roadmap 2005-2030 (http://tinyurl.com/ozv78), two projects
demonstrated in 2004 - the Boeing X45a and the Northrop Grumman X47a
(both uncannily similar to the Stealth fighter) - are listed as Joint
Unmanned Combat Air Systems. A similar project, the Cormorant, which
can be launched from a submerged submarine, can be used by special
forces for ground support. A close reading of the UAV Systems Roadmap
shows the startling progress the US has already made in this field,
with systems ranging from fighters to helicopters and propeller driven
missiles called Long Guns on display.

But if this is the beginning of the end of humanity's presence on the
battlefield, it merits an ethical debate that the military and its
weapons designers are shying away from.
--
For the FCS project is far more than the use of robots. It also
involves the creation of a hugely complex, distributed mobile computer
network on to a battlefield with huge numbers of drones supplying
nodes and communication points in an environment under continual
attack.
-
End extracts.


This project looks to me like autonomous robot fighters linking back
to an AI-type real-time command and control system.  It may not be
general AI, but it certainly looks like AI in its own domain of the
battlefield.


BillK



Re: [singularity] Convincing non-techie skeptics that the Singularity isn't total bunk

2006-10-27 Thread Anna Taylor

On 10/28/06, Bill K wrote:
I've just seen a news article that is relevant.
http://technology.guardian.co.uk/weekly/story/0,,1930960,00.html

I'm aware that robot fighters of some sort are being built by the
military; it would be ridiculous to believe that, with technology as
advanced as it is, the military wouldn't have such systems.  I
just don't care to believe that singularity-level events will only be
advanced by a war.
Maybe my optimism isn't worth keeping, or maybe I'm just being naive.

Do most in the field believe that only a war can advance technology to
the point of singularity-level events?
Any opinions would be helpful.

Just curious
Anna




On 10/27/06, BillK [EMAIL PROTECTED] wrote:

On 10/22/06, Anna Taylor [EMAIL PROTECTED] wrote:
 On 10/22/06, Bill K wrote:

 But I agree that huge military R&D expenditure (which already supports
 many, many research groups) is the place most likely to produce
 singularity-level events.

 I am aware that the military is the most likely place to produce
 singularity-level events; I'm just trying to stay optimistic that a
 war won't be the answer to advancing it.



I've just seen a news article that is relevant.
http://technology.guardian.co.uk/weekly/story/0,,1930960,00.html

Launching a new kind of warfare
Thursday October 26, 2006   The Guardian

Extracts:

By 2015, the US Department of Defense plans that one third of its
fighting strength will be composed of robots, part of a $127bn (£68bn)
project known as Future Combat Systems (FCS), a transformation that is
part of the largest technology project in American history.

Among the 37 or so UAVs detailed in the US Unmanned Aircraft Systems
Roadmap 2005-2030 (http://tinyurl.com/ozv78), two projects
demonstrated in 2004 - the Boeing X45a and the Northrop Grumman X47a
(both uncannily similar to the Stealth fighter) - are listed as Joint
Unmanned Combat Air Systems. A similar project, the Cormorant, which
can be launched from a submerged submarine, can be used by special
forces for ground support. A close reading of the UAV Systems Roadmap
shows the startling progress the US has already made in this field,
with systems ranging from fighters to helicopters and propeller driven
missiles called Long Guns on display.

But if this is the beginning of the end of humanity's presence on the
battlefield, it merits an ethical debate that the military and its
weapons designers are shying away from.
--
For the FCS project is far more than the use of robots. It also
involves the creation of a hugely complex, distributed mobile computer
network on to a battlefield with huge numbers of drones supplying
nodes and communication points in an environment under continual
attack.
-
End extracts.


This project looks to me like autonomous robot fighters linking back
to an AI-type real-time command and control system.  It may not be
general AI, but it certainly looks like AI in its own domain of the
battlefield.


BillK






Re: [singularity] Convincing non-techie skeptics that the Singularity isn't total bunk

2006-10-26 Thread Kaj Sotala

On 9/24/06, Ben Goertzel [EMAIL PROTECTED] wrote:

Anyway, I am curious if anyone would like to share experiences they've
had trying to get Singularitarian concepts across to ordinary (but
let's assume college-educated) Joes out there.  Successful experiences
are valued but also unsuccessful ones.  I'm specifically interested in


Personally, I've noticed that opposition to the idea of a
Singularity falls into two main camps:

1) Sure, we might get human-equivalent hardware in the near future,
but we're still nowhere near having the software for true AI.

2) We might get a Singularity within our lifetimes, but it's just as
likely to be a rather soft takeoff and thus not really *that* big of
an issue - life-changing, sure, but not substantially different from
the development of technology so far.

The difficulty with arguing against point 1 is that, well, I don't
know all that much that'd support me in arguing against it. I've had
some limited success with quoting Kurzweil's "brain scanning
resolution is constantly getting better" graph and pointing out that
we'll become capable of doing a brute-force simulation at some point,
but as for anything more elegant, not much luck.

Moore's Law seems to work somewhat against point 2, but people often
question how long we can assume it to hold.


approaches, metaphors, focii and so forth that have actually proved
successful at waking non-nerd, non-SF-maniac human beings up to the
idea that this idea of a coming Singularity is not **completely**
absurd...


Myself, I've recently taken a liking to the Venus flytrap metaphor I
stole from Robert Freitas' "Xenopsychology". To quote my in-the-works
introductory essay on the Singularity (yes, it seems to be
in-the-works indefinitely: short spurts of progress, after which I
can't be bothered to touch it for months at a time):

In his 1984 paper "Xenopsychology" [3], Robert Freitas introduces the
concept of the Sentience Quotient (SQ) for quantifying a mind's intellect. It
is based on the size of the brain's neurons and their
information-processing capability. The dumbest possible brain would
have a single neuron massing as much as the entire universe and
require a time equal to the age of the universe to process one bit,
giving it an SQ of -70. The smartest possible brain allowed by the
laws of physics, on the other hand, would have an SQ of +50. While
this only reflects pure processing capability and doesn't take into
account the software running on the brains, it's still a useful rough
guideline.

So what's this have to do with artificial intelligences? Well, Freitas
estimates Venus flytraps to have an SQ of +1, while most plants have
an SQ of around -2. The SQ for humans is estimated at +13. Freitas
estimates that electronic sentiences could be built with an SQ of +23
- making the difference between us and advanced AIs *nearly as high as
between humans and Venus flytraps*. It should be obvious that when
compared to this, even the smartest humans would stand no chance
against the AI's intellect - any more than we should be afraid of a
genius carnivorous plant suddenly developing a working plan for taking
over all of humanity.

http://www.saunalahti.fi/~tspro1/Esitys/009.png has the same
compressed in a catchy presentation slide (some of the text is in
Finnish, but you ought to get the gist of it anyway).
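As an aside, the SQ arithmetic Freitas uses is easy to reproduce: SQ is the base-10 logarithm of information-processing rate per unit of brain mass. A minimal sketch in Python (the bit-rate and mass figures below are rough illustrative assumptions, not Freitas' exact inputs):

```python
import math

def sentience_quotient(bits_per_second, mass_kg):
    """Freitas' SQ: log10 of information-processing rate per unit mass."""
    return math.log10(bits_per_second / mass_kg)

# "Dumbest possible brain": one bit processed per age of the universe
# (~4.3e17 s), massing as much as the universe (~1e53 kg, a rough figure).
sq_min = sentience_quotient(1 / 4.3e17, 1e53)

# Human brain: ~1.4 kg; ~1e13 bits/s is a rough illustrative estimate.
sq_human = sentience_quotient(1e13, 1.4)

print(round(sq_min), round(sq_human))  # close to Freitas' -70 and +13
```

On these rough inputs the minimum comes out near -71 and the human value near +13, matching the orders of magnitude quoted above.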



Re: [singularity] Convincing non-techie skeptics that the Singularity isn't total bunk

2006-10-26 Thread Matt Mahoney
I found more on Freitas' SQ
http://en.wikipedia.org/wiki/Sentience_Quotient

The ratio of the highest and lowest values, about 10^120, depends only on Planck's 
constant h, the speed of light c, the gravitational constant G, and the age of 
the universe, T (which is related to the size and mass of the universe by c and 
G).  This number is also the quantum mechanical limit on the entropy of the 
universe, i.e. the largest memory you could build: about 10^120 bits.  Let me 
call this number H.  A more precise calculation shows

h = 1.054e-34 Kg m^2/s  (actually h-bar)
c = 3.00e8 m/s
G = 6.673e-11 m^3/(Kg s^2)
T = 4.32e17 s (13.7 billion years)
H = c^5 T^2/(hG) = 6.4e121 (unitless)

although I am probably neglecting some small but important constants due to my 
crude attempt at physics.  I derived H by nothing more than cancelling out 
units.
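As a sanity check, the unit-cancelling can be reproduced numerically (a short Python sketch, using the constants listed above; the unitless combination of these four constants is c^5 T^2/(hG), which comes out near 10^122, while its reciprocal hG/(c^5 T^2) is near 10^-122):

```python
# Constants as quoted above (SI units).
hbar = 1.054e-34   # J*s (reduced Planck constant)
c = 3.00e8         # m/s
G = 6.673e-11      # m^3/(kg*s^2)
T = 4.32e17        # s (~13.7 billion years)

# c^5 T^2 has units m^5/s^3, and hbar*G also has units m^5/s^3,
# so the ratio is dimensionless.
H = c**5 * T**2 / (hbar * G)
print(f"H = {H:.2e}")  # on the order of 10^122
```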

If this memory filled the universe (and it would have to), then each bit would 
occupy about the space of a proton or neutron.  This is quite a coincidence, 
since h, G,  c, and T do not depend on the physical properties of any 
particles.  The actual number of baryons (protons and neutrons and possibly 
their antiparticles) in the universe is about H^(2/3) ~ 10^80.  If the universe 
were mashed flat, it would form a sheet of neutrons one particle thick.

Another possible coincidence is that H could be related to the fine structure 
constant alpha = 1/137.0359997... by H ~ e^(2/alpha) ~ 10^119.  If this could be 
confirmed, it would be significant because alpha is known to about 9 
significant digits.  Alpha is unitless and depends on h, c, and the unit 
quantum of electric charge.
http://en.wikipedia.org/wiki/Fine_structure_constant
 
-- Matt Mahoney, [EMAIL PROTECTED]

- Original Message 
From: Kaj Sotala [EMAIL PROTECTED]
To: singularity@v2.listbox.com
Sent: Thursday, October 26, 2006 9:46:55 AM
Subject: Re: [singularity] Convincing non-techie skeptics that the Singularity 
isn't total bunk

On 9/24/06, Ben Goertzel [EMAIL PROTECTED] wrote:
 Anyway, I am curious if anyone would like to share experiences they've
 had trying to get Singularitarian concepts across to ordinary (but
 let's assume college-educated) Joes out there.  Successful experiences
 are valued but also unsuccessful ones.  I'm specifically interested in

Personally, I've noticed that opposition to the idea of a
Singularity falls into two main camps:

1) Sure, we might get human-equivalent hardware in the near future,
but we're still nowhere near having the software for true AI.

2) We might get a Singularity within our lifetimes, but it's just as
likely to be a rather soft takeoff and thus not really *that* big of
an issue - life-changing, sure, but not substantially different from
the development of technology so far.

The difficulty with arguing against point 1 is that, well, I don't
know all that much that'd support me in arguing against it. I've had
some limited success with quoting Kurzweil's brain scanning
resolution is constantly getting better graph and pointing out that
we'll become able of doing a brute-force simulation at some point, but
as for anything more elegant, not much luck.

Moore's Law seems to work somewhat against point 2, but people often
question how long we can assume it to hold.

 approaches, metaphors, focii and so forth that have actually proved
 successful at waking non-nerd, non-SF-maniac human beings up to the
 idea that this idea of a coming Singularity is not **completely**
 absurd...

Myself, I've recently taken a liking to the Venus flytrap metaphor I
stole from Robert Freitas' Xenopsychology. To quote my in-the-works
introductory essay to the Singularity (yes, it seems to be
in-the-works indefinitely - short spurts of progress, after which I
can't be bothered to touch it for months at a time):

In his 1984 paper Xenopsychology [3], Robert Freitas introduces the
concept of Sentience Quotient for determining a mind's intellect. It
is based on the size of the brain's neurons and their
information-processing capability. The dumbest possible brain would
have a single neuron massing as much as the entire universe and
require a time equal to the age of the universe to process one bit,
giving it an SQ of -70. The smartest possible brain allowed by the
laws of physics, on the other hand, would have an SQ of +50. While
this only reflects pure processing capability and doesn't take into
account the software running on the brains, it's still a useful rough
guideline.

So what's this have to do with artificial intelligences? Well, Freitas
estimates Venus flytraps to have an SQ of +1, while most plants have
an SQ of around -2. The SQ for humans is estimated at +13. Freitas
estimates electronic sentiences that can be built to have an SQ of +23
- making the difference between us and advanced AIs *nearly as high as
between humans and Venus flytraps*. It should be obvious that when
compared to this, even the smartest humans would stand no chance
against the AI's intellect - any more than we

Re: [singularity] Convincing non-techie skeptics that the Singularity isn't total bunk

2006-10-24 Thread J. Andrew Rogers


On Oct 23, 2006, at 6:43 PM, Gregory Johnson wrote:
I most certainly am not a proponent of the military-industrial
complex as opposed to the Japanese and German business models, but it
is my sense that that is not where the world is headed at the moment.



Huh?

J. Andrew Rogers






Re: [singularity] Convincing non-techie skeptics that the Singularity isn't total bunk

2006-10-23 Thread Anna Taylor

On 10/23/06, J. Andrew Rogers [EMAIL PROTECTED] wrote:

So you could say that the economics of responding to the mere threat
of war is adequate to drive all the research the military does.


Yes, I agree, but why is the threat of war always the motive?  Do you
not think that there are other possible economic ways to motivate the
military to want to concentrate on singularity-level events, or am I
wasting my time trying to be optimistic?

Just Curious
Anna:)


On Oct 22, 2006, at 11:10 AM, Anna Taylor wrote:

 On 10/22/06, Bill K wrote:

 But I agree that huge military R&D expenditure (which already
 supports many, many research groups) is the place most likely to
 produce singularity-level events.

 I am aware that the military is the most likely place to produce
 singularity-level events; I'm just trying to stay optimistic that a
 war won't be the answer to advancing it.


War per se does not advance military research; economics and
logistics do.  If it were about killing people, we could have stopped
at clubs and spears.  The cost of R&D and procurement of new systems,
supporting and front-line, is usually completely recovered within a
decade of deployment relative to the systems they replace, so it is
actually a profitable enterprise of sorts.  This is the primary
reason military expenditures as a percentage of GDP continue to
rapidly shrink -- even in the US -- while the apparent capabilities
do not.

So you could say that the economics of responding to the mere threat
of war is adequate to drive all the research the military does.
Short of completely eliminating the military, there will always be
plenty of reason to do the R&D without ever firing a shot.  While I
am doubtful that military R&D programs will directly yield AGI,
they do fund a lot of interesting blue-sky research.


J. Andrew Rogers







Re: [singularity] Convincing non-techie skeptics that the Singularity isn't total bunk

2006-10-23 Thread Joel Pitt

On 10/22/06, Anna Taylor [EMAIL PROTECTED] wrote:

Ignoring the masses is only going to limit the potential of any idea.
People buy CDs, watch TV, download music, chat, and read (if you're
lucky); therefore the only possible solution is to find a way to
integrate with the mass population.  (Unless of course the
scientific-technological world really doesn't mean to participate
with the general public; I would assume that's a possibility.)


Then I think we should record some singularity music.

I'm moving to being a working DJ as a hobby, so if anyone can throw me
some danceable 130 bpm singularity songs that'd be great :)

This reminds me of talking with Ben about creating a musical
interface to Novamente. As soon as Novamente can make a hit tune,
represent itself as a funky-looking person, and dance suggestively,
you'll have legions of young fans (who will eventually grow up) and
you can use your signing deals to fund further AGI research!

[ Whether you tell people that Novamente is a human or not is another story ]


--
-Joel

Wish not to seem, but to be, the best.
   -- Aeschylus



Re: [singularity] Convincing non-techie skeptics that the Singularity isn't total bunk

2006-10-23 Thread Mark Davis


Gregory,

I don't think the military or industries related to the military are working 
on any sort of general intelligence system. Narrow AI is fairly mainstream, 
and I can see the military working on various projects in that realm, but 
general AI is a pretty specialized problem that most scientists dismiss as too 
difficult with current technology. I'm obviously not privy to research going 
on in militaries around the world, but I think it is much more likely that 
the first general AI will come from a team that develops a sufficient 
understanding of all the complexity involved in building a digital 
intelligence. The military and other researchers will probably jump in later 
on, but the initial breakthroughs are probably going to come from a small 
team with the right approach, due to the highly specialized nature of the 
problem.


Mark


From: Gregory Johnson [EMAIL PROTECTED]
Reply-To: singularity@v2.listbox.com
To: singularity@v2.listbox.com
Subject: Re: [singularity] Convincing non-techie skeptics that the 
Singularity isn't total bunk

Date: Mon, 23 Oct 2006 20:43:44 -0500

I note that Ray Kurzweil is also an advisor to some military
computational projects.  If I were Ray, I would find that the
guaranteed profit of servicing a market that does not have to respond
to market and social ups and downs might be just what I need to see
some AGI R&D turned into prototypes post haste.

A Luddite backlash like the GMO-foods thing would drastically slow
down AGI in its early phases.

Once military prototypes work under the rigorous conditions of the
global white-spy/black-spy world, they might be safely brought into
the normal commercial world.

I most certainly am not a proponent of the military-industrial
complex as opposed to the Japanese and German business models, but it
is my sense that that is not where the world is headed at the moment.

Perhaps the singularity will be a top-secret event, and it will be
the AGI who decides how and when to make it go public.

On 10/23/06, J. Andrew Rogers [EMAIL PROTECTED] wrote:



On Oct 22, 2006, at 11:10 AM, Anna Taylor wrote:
 On 10/22/06, Bill K wrote:

 But I agree that huge military R&D expenditure (which already
 supports many, many research groups) is the place most likely to
 produce singularity-level events.

 I am aware that the military is the most likely place to produce
 singularity-level events; I'm just trying to stay optimistic that a
 war won't be the answer to advancing it.


War per se does not advance military research; economics and
logistics do.  If it were about killing people, we could have stopped
at clubs and spears.  The cost of R&D and procurement of new systems,
supporting and front-line, is usually completely recovered within a
decade of deployment relative to the systems they replace, so it is
actually a profitable enterprise of sorts.  This is the primary
reason military expenditures as a percentage of GDP continue to
rapidly shrink -- even in the US -- while the apparent capabilities
do not.

So you could say that the economics of responding to the mere threat
of war is adequate to drive all the research the military does.
Short of completely eliminating the military, there will always be
plenty of reason to do the R&D without ever firing a shot.  While I
am doubtful that military R&D programs will directly yield AGI,
they do fund a lot of interesting blue-sky research.


J. Andrew Rogers









Re: [singularity] Convincing non-techie skeptics that the Singularity isn't total bunk

2006-10-23 Thread Anna Taylor

On 10/23/06, Joel Pitt [EMAIL PROTECTED] wrote:

Then I think we should record some singularity music.


If you have lyrics to describe exactly what the singularity will be, I
would love to hear your music:)


This reminds me of talking with Ben about creating a musical
interface to Novamente. As soon as Novamente can make a hit tune,
represent itself as a funky-looking person, and dance suggestively,
you'll have legions of young fans (who will eventually grow up) and
you can use your signing deals to fund further AGI research.


Wouldn't be any different from Arnold and politics.

Anna:)




On 10/22/06, Anna Taylor [EMAIL PROTECTED] wrote:
 Ignoring the masses is only going to limit the potential of any idea.
 People buy CDs, watch TV, download music, chat, and read (if you're
 lucky); therefore the only possible solution is to find a way to
 integrate with the mass population.  (Unless of course the
 scientific-technological world really doesn't mean to participate
 with the general public; I would assume that's a possibility.)

Then I think we should record some singularity music.

I'm moving to being a working DJ as a hobby, so if anyone can throw me
some danceable 130 bpm singularity songs that'd be great :)

This reminds me of talking with Ben about creating a musical
interface to Novamente. As soon as Novamente can make a hit tune,
represent itself as a funky-looking person, and dance suggestively,
you'll have legions of young fans (who will eventually grow up) and
you can use your signing deals to fund further AGI research!

[ Whether you tell people that Novamente is a human or not is another story
]


--
-Joel

Wish not to seem, but to be, the best.
-- Aeschylus






Re: [singularity] Convincing non-techie skeptics that the Singularity isn't total bunk

2006-10-21 Thread deering



In reference to the original question of this thread, 'How to
convince non-techie types of the Singularity', I think I have come
across an epiphany.  'Normal' people, not like us, make all of their
decisions based on arguments from authority.  They don't feel
competent to think for themselves.  They have always been told that
the experts know best.  Until they see it on CNN they won't believe
it.  You can't reason with them.  They're not reasoners, they're
viewers.

Don't stop trying to convince the viewers directly one-on-one, but
understand why it will never get anywhere.  Instead, try to convince
the experts.  As with climate change: when the overwhelming majority
of experts agree that it's real and coming, and when they say it on
CNN, only then will the viewers believe it.






Re: [singularity] Convincing non-techie skeptics that the Singularity isn't total bunk

2006-09-27 Thread Anna Taylor

Russell wrote:
Well, ever's a long time.

Yes, my apologies; I was thinking in terms of, say, 20-35 years.

Anna:)


On 9/27/06, Russell Wallace [EMAIL PROTECTED] wrote:

On 9/27/06, Anna Taylor [EMAIL PROTECTED] wrote:

 Bruce LaDuke wrote:
 I don't believe a machine can ever have intention that doesn't
 ultimately trace back to a human being.

 I was curious to know what the major opinions are on this comment.


Well, ever's a long time. I think it will be true for the foreseeable
future. Whether it will still be true in a million years, say, is a
different matter; I can't predict that far ahead and I don't think anyone
else can either.







Re: [singularity] Convincing non-techie skeptics that the Singularity isn't total bunk

2006-09-27 Thread Joel Pitt

On 9/28/06, Anna Taylor [EMAIL PROTECTED] wrote:

Bruce LaDuke wrote:
I don't believe a machine can ever have intention that doesn't
ultimately trace back to a human being.

I was curious to know what the major opinions are on this comment.
Most of my concerns are related to the fact that I too believe it will
be traced back to a human(s).  Are there other ways at looking at the
scenario?  Do people really believe that a whole new species will
emerge not having any reflection to a human?


Well this starts to get into cause and effect discussion.

My 2c is that, since we'll ultimately create these thinking machines,
any intention they have will be, in some way, however distant and
removed, traceable back to humans.

In the same way, the soup of organic chemical reactions led to
evolutionary systems and eventually to *us* thinking.

-J

--
-Joel

Wish not to seem, but to be, the best.
   -- Aeschylus



Re: [singularity] Convincing non-techie skeptics that the Singularity isn't total bunk

2006-09-26 Thread Randall Randall

On Sep 25, 2006, at 10:05 PM, Bruce LaDuke wrote:


We're not looking into singularity yet, but the convergence has  
already started.  Consider that the molecular economy has the  
potential to bring total social upheaval in its own right, without  
singularity.


What you're speaking of *is* singularity, just not by the means
generally expected on this list.  Someone can have no expectation that
superhuman intelligence is easy enough to do without trial and error,
and still expect singularity through molecular manufacturing and
computing power alone.

Even this type of singularity is really difficult to explain to people,
though, and we already have a close analogue to this: software.


--
Randall Randall [EMAIL PROTECTED]
This is a fascinating question, right up there with whether rocks
fall because of gravity or being dropped, and whether 3+5=5+3
because addition is commutative or because they both equal 8.
  - Scott Aaronson




Re: [singularity] Convincing non-techie skeptics that the Singularity isn't total bunk

2006-09-26 Thread Hank Conn
Bruce, I tend to agree with all the things you say here and appreciate your insight, observations, and sentiment.

However, here is where you are horribly wrong:

In my mind, singularity is no different.  I personally see it providing just another tool in the hand of mankind, only one of greater power.

The Kurzweilian belief that the Singularity will be the end point of the accelerating curves of technology discounts the reality of creating AGI. All that matters is the algorithm for intelligence.

As such, the Singularity is entirely *discontinuous* with every single trend - regardless of kind, scale, or history - that humanity knows today.

-hank
On 9/25/06, Bruce LaDuke [EMAIL PROTECTED] wrote:
I really like Shane's observation below that people just don't think
Singularity is coming for a very long time.  The beginning effects are
already here.  Related to this, I've got a few additional thoughts to share.

We're not looking into singularity yet, but the convergence has already
started.  Consider that the molecular economy has the potential to bring
total social upheaval in its own right, without singularity.  For example,
what happens when an automobile weighs around 400 pounds and is powered
by a battery that never needs charging.  What happens to the oil industry?
What happens to politics because of what happens to the oil industry?  How
will a space elevator by 2012 change the balance of power?  Nanoweapons?
World War III?  China/India industrialization and resulting pollution?  As
announced recently, what happens when the world warms to its hottest level
in a million years?  When biodiversity reduction goes critical and plankton
die and oxygen fails?

I'm sure you know about most of these things and how quickly they are
moving, but my point is, trouble isn't coming... it's here.  Not only should
we be thinking about these things now, but I think it is our social
responsibility.  That is, if we want children to grow up and inhabit this
world with any level of normalcy... or at all.

Any number of things could bring our glorious house crashing down in a
matter of days or months.  When the Soviet economy crashed, nuclear
physicists were standing in the soup line overnight.  The same could easily
be seen of us in a global economic crash.  Our scholarly/industrial
existence is really very fragile.  It doesn't take much for our hierarchy of
needs to return to survival.

Our human track record of late in terms of creating advance is really quite
good, but in terms of dealing with the social impacts of that advance is
really very, very poor and immature.  All of our wonderful creations are
already making quite a big global mess.  So who's to say that our continued
focus on modernist, profit-centric values will result in anything less than
more and more advance alongside escalating social issues?

In my mind, singularity is no different.  I personally see it providing just
another tool in the hand of mankind, only one of greater power.  And this
power holds the potential to fulfill human values and human intention, which
is the piece we really aren't managing well.  Bad intentions and bad values,
combined with a bigger tool, equal bigger trouble.

Given our human track record and factors already outside of our control, we
have a far better chance of destroying what we have now (the rest of the
way) than we have of realizing singularity.  Not that we shouldn't continue
to seek singularity, but we need a hard look at the values and intentions
that we're basing these efforts on.

See the Second Enlightenment Conference: http://www.2enlightenment.com
Elizabet Sahtouris will be keynote (http://www.ratical.org/LifeWeb/)

Kind Regards,

Bruce LaDuke
Managing Director
Instant Innovation, LLC
Indianapolis, IN
[EMAIL PROTECTED]
http://www.hyperadvance.com

Original Message Follows
From: Shane Legg [EMAIL PROTECTED]
Reply-To: singularity@v2.listbox.com
To: singularity@v2.listbox.com
Subject: Re: [singularity] Convincing non-techie skeptics that the
Singularity isn't total bunk
Date: Mon, 25 Sep 2006 23:16:12 +0200

I'd suggest looking at Joy's Why the future doesn't need us article in
Wired.  For some reason, which isn't clear to me, that article was a huge
hit, drawing in people that normally would never read such stuff.  I was
surprised when various educated but non-techie people I know started asking
me about it.

I think the major problem is one of time scale.  Due to Hollywood everybody
is familiar with the idea of the future containing super powerful
intelligent (and usually evil) computers.  So I think the basic concept
that these things could happen in the future is already out there in the
popular culture.  I think the key thing is that most people, both Joe six
pack and almost all professors I know, don't think it's going to happen for
a really long time --- long enough that it's not going to affect their
lives, or the lives of anybody they know.  As such they aren't all that
worried about it.  Anyway, I don't think the idea is going to be taken
seriously until something happens that really gives the public a fright.

Shane

Re: [singularity] Convincing non-techie skeptics that the Singularity isn't total bunk

2006-09-26 Thread Bruce LaDuke

Hank,

Can definitely appreciate your view here, and if I held to the Kurzweilian 
belief, I'd be inclined to agree.  But I really don't see an 'endpoint' and 
also don't see superhuman intelligence the same way I think folks in the 
Kurzweilian arena tend to see it because I don't believe a machine can ever 
have intention that doesn't ultimately trace back to a human being.  
Definitely not the popular view I know, but I think as we approach this 
level of intelligence we're going to clearly see what differentiates us 
humans from machines, which is intention, motive, desire, spirituality.


This stems from my understanding of knowledge creation, which basically sees 
knowledge as a non-feeling, non-intending, non-motivated mass of symbolic 
connections that is constantly expanding through the efforts driven by human 
intention.  Robotics, cybernetics, etc., being the actionable arm of these 
creations...but again, only the human has intention.  As such there is no 
real endpoint in terms of how far we will expand this intelligence.  It is a 
never-ending expansion as we explore the universe and create technologies.


Granted a human with good or bad intentions can *absolutely* transfer those 
intentions to the machine, and again just my opinion, but I think the human 
originated these intentions and the machine *absolutely never* will 
originate them...only execute them as instructed.


In transferring these intentions to machine they are magnifying personal 
intentions with a 'tool' that can be used for good or bad.  The constructive 
and/or destructive force is exponentially magnified by the 'tool' man is 
given.  Similar to nuclear weapons...the more powerful the tool, the more 
rigor and wisdom required to manage it.


When we can barely manage the tools we have, we're not going to fare well 
with a bigger, more powerful tool.  We need to start with understanding the 
culprit of our current woes...poorly understood and managed human intention. 
 I think I've used this quote before, but here's how Drucker put it:


In a few hundred years, when the history of our time will be written from a 
long-term perspective, it is likely that the most important event that 
historians will see is not technology, not the Internet, not e-commerce. It 
is an unprecedented change in the human condition. For the first time - 
literally - substantial and rapidly growing numbers of people have choices. 
For the first time, they will have to manage themselves. And society is 
totally unprepared for it. - Peter Drucker


Kind Regards,

Bruce LaDuke
Managing Director

Instant Innovation, LLC
Indianapolis, IN
[EMAIL PROTECTED]
http://www.hyperadvance.com




Original Message Follows
From: Hank Conn [EMAIL PROTECTED]
Reply-To: singularity@v2.listbox.com
To: singularity@v2.listbox.com
Subject: Re: [singularity] Convincing non-techie skeptics that the 
Singularity isn't total bunk

Date: Tue, 26 Sep 2006 13:36:57 -0400

Bruce I tend to agree with all the things you say here and appreciate your
insight, observations, and sentiment.

However, here is where you are horribly wrong:

In my mind, singularity is no different.  I personally see it providing just
another tool in the hand of mankind, only one of greater power.

The Kurzweilian belief that the Singularity will be the end point of the
accelerating curves of technology discounts the reality of creating AGI. All
that matters is the algorithm for intelligence.

As such, the Singularity is entirely *discontinuous* with every single
trend- regardless of kind, scale, or history- that humanity knows today.

-hank


On 9/25/06, Bruce LaDuke [EMAIL PROTECTED] wrote:


I really like Shane's observation below that people just don't think
Singularity is coming for a very long time.  The beginning effects are
already here.  Related to this, I've got a few additional thoughts to
share.

We're not looking into singularity yet, but the convergence has already
started.  Consider that the molecular economy has the potential to bring
total social upheaval in its own right, without singularity.  For example,
what happens when an automobile weighs around 400 pounds and is
powered
by a battery that never needs charging.  What happens to the oil industry?
What happens to politics because of what happens to the oil industry?  How
will a space elevator by 2012 change the balance of power?  Nanoweapons?
World War III?  China/India industrialization and resulting pollution? As
announced recently what happens when the world warms to its hottest level
in
a million years?  When biodiversity reduction goes critical and plankton
die
and oxygen fails?

I'm sure you know about most of these things and how quickly they are
moving, but my point is, trouble isn't coming...it's here.  Not only
should
we be thinking about these things now, but I think it is our social
responsibility.  That is, if we want children to grow up and inhabit this
world with any level of normalcy...or at all.

Any number

Re: [singularity] Convincing non-techie skeptics that the Singularity isn't total bunk

2006-09-25 Thread Ben Goertzel

Peter Voss wrote:

I have a more fundamental question though: Why in particular would we want
to convince people that the Singularity is coming? I see many disadvantages
to widely promoting these ideas prematurely.


If one's plan is to launch a Singularity quickly, before anyone else
notices, then I feel that promoting these ideas is basically
irrelevant.  It is unlikely that promotion will lead to such rapid
spread of the concepts as to create significant risk of
Singularity-enabling technologies being made illegal in the near
term...

OTOH, if the Singularity launch is to happen a little more slowly,
then it will be of value if a larger number of intelligent and
open-minded people have more thoroughly thought through the
Singularity and related ideas.  These sorts of ideas take a while to
sink in; and I think that people who have had the idea of the
Singularity in their minds for a while will be better able to grapple
with the reality when it comes about...

-- Ben



Re: [singularity] Convincing non-techie skeptics that the Singularity isn't total bunk

2006-09-25 Thread David Hart

Russell Wallace wrote:
Now, Ben was saying awhile ago, IIRC, that he's doing simulated 3D 
worlds as sort of a side project, relatively loosely coupled to the 
rest of Novamente, and that it would therefore be relatively easy for 
someone else to contribute to without requiring face-to-face meetings, 
full-time commitment, etc. Perhaps you could contribute to that, 
particularly since you know maths and physics, which are obviously 
relevant in that domain, if you'd be interested?


See:

http://www.agiri.org/forum/index.php?s=610f840cb9f78e24e4f333695e21232ashowtopic=3

http://sourceforge.net/projects/agisim

David
