Re: [singularity] Vinge & Goertzel = Uplift Academy's Good Ancestor Principle Workshop 2007

2007-02-19 Thread Anna Taylor

Ben wrote:
That doesn't mean they couldn't have some smart staff who shifted
research interest to AGI after moving to Google, but it doesn't seem
tremendously likely.

I don't agree.  Google is a kind of research engine that delivers
information in bulk.  How you decipher it is up to the
individual.
Having the advantage of learning about so many new interests may lead
to new, conclusive ideas.

Ben:
I don't have the impression they are funding a lot of blue-sky AGI ...

I would have to agree, but I think they will become wiser about
how important research into AGI really is.

Ben wrote:
So, my opinion remains that: Google staff described as working on "AI"
are almost surely working on clever variants of highly scalable
statistical language processing.   So, if you believe that this kind of
work is likely to lead to powerful AGI, then yeah, you should attach a
fairly high probability to the outcome that Google will create AGI.
Personally I think it's very unlikely (though not impossible) that AGI
is going to emerge via this route.

I think an AGI will be a mix of both: Google's staff as well as a
working clever variant.

Thanks
Anna:)




On 2/19/07, Shane Legg <[EMAIL PROTECTED]> wrote:

I saw a talk about a year or two ago where one of the Google founders was
asked if they had projects to build general-purpose artificial intelligence.
He answered that they did not have such a project at the company level;
however, they did have many AI people in the company, some of whom were
interested in this kind of thing.  Indeed, a few people were playing around
with such projects as part of their 20% free time in the company.

Shane

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=11983





Re: [singularity] Convincing non-techie skeptics that the Singularity isn't total bunk

2006-10-27 Thread Anna Taylor

On 10/28/06, Bill K wrote:
I've just seen a news article that is relevant.
<http://technology.guardian.co.uk/weekly/story/0,,1930960,00.html>

I'm aware that robot fighters of some sort are being built by the
military; it would be ridiculous to believe that, with technology as
advanced as it is, the military wouldn't have such systems.  I
just don't care to believe that singularity-level events will only be
advanced by a war.
Maybe my optimism isn't worth keeping, or maybe I'm just being naive.

Do most in the field believe that only a war can advance technology to
the point of singularity-level events?
Any opinions would be helpful.

Just curious
Anna




On 10/27/06, BillK <[EMAIL PROTECTED]> wrote:

On 10/22/06, Anna Taylor <[EMAIL PROTECTED]> wrote:
> On 10/22/06, Bill K wrote:
>
> >But I agree that huge military R&D expenditure (which already supports
> >many, many research groups) is the place most likely to produce
> >singularity-level events.
>
> I am aware that the military is the most likely place to produce
> singularity-level events, i'm just trying to stay optimistic that a
> war won't be the answer to advancing it.
>


I've just seen a news article that is relevant.
<http://technology.guardian.co.uk/weekly/story/0,,1930960,00.html>

Launching a new kind of warfare
Thursday October 26, 2006   The Guardian

Extracts:

By 2015, the US Department of Defense plans that one third of its
fighting strength will be composed of robots, part of a $127bn (£68bn)
project known as Future Combat Systems (FCS), a transformation that is
part of the largest technology project in American history.

Among the 37 or so UAVs detailed in the "US Unmanned Aircraft Systems
Roadmap 2005-2030" (http://tinyurl.com/ozv78), two projects
demonstrated in 2004 - the Boeing X45a and the Northrop Grumman X47a
(both uncannily similar to the Stealth fighter) - are listed as Joint
Unmanned Combat Air Systems. A similar project, the Cormorant, which
can be launched from a submerged submarine, can be used by special
forces for ground support. A close reading of the UAV Systems Roadmap
shows the startling progress the US has already made in this field,
with systems ranging from fighters to helicopters and propeller driven
missiles called Long Guns on display.

But if this is the beginning of the end of humanity's presence on the
battlefield, it merits an ethical debate that the military and its
weapons designers are shying away from.
--
For the FCS project is far more than the use of robots. It also
involves the creation of a hugely complex, distributed mobile computer
network on to a battlefield with huge numbers of drones supplying
nodes and communication points in an environment under continual
attack.
-
End extracts.


This project looks to me like autonomous robot fighters linking back
to an AI-type real-time command and control system.  It may not be
general AI, but it certainly looks like AI in its own domain of the
battlefield.


BillK

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/[EMAIL PROTECTED]





Re: [singularity] Convincing non-techie skeptics that the Singularity isn't total bunk

2006-10-27 Thread Anna Taylor

Josh Cowan wrote:

Issues associated with animal rights are better known than the coming
Singularity.


Issues associated with animal rights are easy to understand; they make
you feel good when you help.  The general public can pick up a phone,
donate money and feel rewarded that it is helping a cause.  If there is
no cause, no warm feeling of helping others, chances are the general
public won't be interested.  The Singularity is complicated, with issues
that the general public can't even begin to grasp.  I think that the
Singularity needs to be put in simpler terms if the scientific world wants
the general public to believe in, contribute to, or be part of the
Singularity.

Anna:)



On 10/26/06, Josh Cowan <[EMAIL PROTECTED]> wrote:

>

Chris Norwood wrote:

>  When talking about use, it is easy to explain by
> giving examples. When talking about safety, I always
> bring in disembodied AGI vs. embodied and the normal
> "range of possible minds" debate. If they are still
> wary, I talk about the possible inevitability of AGI.
> I relate it to the making of the atom bomb during
> WWII. Do we want someone aware of the danger and
> motivated to make it, and standard practice
> guidelines, as safe as possible? Or would you rather
> someone with bad intent and recklessness to make the
> attempt?
>
>

Assuming memes in the general culture have some, if only very indirect,
effect on the future, perhaps a backup approach to FAI, and one more
relevant to the culture at large, would be encouraging animal rights.
Issues associated with animal rights are better known than the coming
Singularity.  Besides, if the AI is so completely in control and
inevitable, and if my children or I shall be nothing more than
insects (De Garis's description) or goldfish, I want the general ethos
to value the dignity of pets. Next time you see that collection-can at
the grocery store, look at that cute puppy and give generously.   :)


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/[EMAIL PROTECTED]





Re: [singularity] Motivational Systems that are stable

2006-10-25 Thread Anna Taylor

The last I heard, computers are spied upon because of the language the
computer is generating.  Why would the government care about the guy
that "picks up garbage"?

Richard Loosemore wrote, Wed, Oct 25, 2006:

The word "trapdoor" is a reference to trapdoor algorithms that allow
computers to be spied upon.


If you feel guilty about something, then you will feel that your
ethical values are being compromised.
Technology is without a doubt the age of the future.  If you have
posted, said or done something, chances are it will come back to haunt you.
The only way to change the algorithms is to change the thoughts.

Just my thoughts, let me know what you think.
Anna:)






On 10/25/06, Richard Loosemore <[EMAIL PROTECTED]> wrote:

Anna Taylor wrote:
> On, Wed, Oct 25, 2006 at 10:11 R. Loosemore wrote:
>> What I have in mind here is the objection (that I know
>> some people will raise) that it might harbor some deep-seated animosity
>> such as an association between human beings in general and something
>> 'bad' that happened to it when it was growing up ... we would easily be
>> able to catch something like that if we had a trapdoor on the
>> motivational system.
>
> I'm not clear what you meant, could you rephrase?
> I understood, what I have in mind is a trapdoor of the motivational
> system:)
> Do you think motivation is a key factor that generates
> singularity-level events?
> Am I understanding properly?
>
> Just curious
> Anna:)

Anna,

The word "trapdoor" is a reference to trapdoor algorithms that allow
computers to be spied upon:  I meant it in a similar sense, that the AI
would be built in such a way that we could (in the development stages)
spy on what was happening in the motivational system to find out whether
the AI was developing any nasty intentions.
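
To make that concrete, here is a minimal toy sketch, in Python, of what
such a development-time trapdoor might look like.  Every name and
threshold below is invented purely for illustration; this is not a
description of any actual AI design:

# Hypothetical sketch of a development-time "trapdoor" on a motivational
# system: the AI is built so that its motivations can be inspected from
# outside while it runs.  All names and thresholds here are invented.

from dataclasses import dataclass, field


@dataclass
class Motivation:
    target: str      # what the drive is directed at, e.g. "humans"
    valence: float   # -1.0 (hostile) .. +1.0 (friendly)
    strength: float  # 0.0 .. 1.0, how strongly it biases behaviour


@dataclass
class MotivationalSystem:
    motivations: list = field(default_factory=list)

    def snapshot(self):
        """The 'trapdoor': expose the internal motivational state, read-only."""
        return list(self.motivations)


def audit(system, hostility=-0.5, min_strength=0.3):
    """Flag any strong, strongly negative association (e.g. animosity
    toward humans acquired while the system was 'growing up')."""
    return [f"WARNING: hostile drive toward {m.target!r} "
            f"(valence={m.valence:+.2f}, strength={m.strength:.2f})"
            for m in system.snapshot()
            if m.valence < hostility and m.strength > min_strength]


if __name__ == "__main__":
    ai = MotivationalSystem([
        Motivation("social group", +0.9, 0.8),
        Motivation("humans", -0.7, 0.6),   # the kind of thing we want to catch
    ])
    for warning in audit(ai):
        print(warning)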

The purpose of the essay was to establish that this alternative approach
to creating a "friendly" AI would be both viable and (potentially)
extremely stable.  It is a very different approach to the one currently
thought to be the only method, which is to prove properties of the AI's
goal system mathematically, a task that many consider impossible.
By suggesting this alternative I am saying that mathematical proof may
be impossible, but guarantees of a very strong kind may well be possible.

As you probably know, many people (including me) are extremely concerned
that AI be developed safely.

Hope that helps,

Richard Loosemore

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/[EMAIL PROTECTED]





Re: [singularity] Motivational Systems that are stable

2006-10-25 Thread Anna Taylor

On, Wed, Oct 25, 2006 at 10:11 R. Loosemore wrote:

What I have in mind here is the objection (that I know
some people will raise) that it might harbor some deep-seated animosity
such as an association between human beings in general and something
'bad' that happened to it when it was growing up ... we would easily be
able to catch something like that if we had a trapdoor on the
motivational system.


I'm not clear on what you meant; could you rephrase?
What I understood is that what you have in mind is a trapdoor on the motivational system :)
Do you think motivation is a key factor that generates
singularity-level events?
Am I understanding properly?

Just curious
Anna:)








On 10/25/06, Richard Loosemore <[EMAIL PROTECTED]> wrote:

Ben Goertzel wrote:
> Loosemore wrote:
>> > The motivational system of some types of AI (the types you would
>> > classify as tainted by complexity) can be made so reliable that the
>> > likelihood of them becoming unfriendly would be similar to the
>> > likelihood of the molecules of an Ideal Gas suddenly deciding to split
>> > into two groups and head for opposite ends of their container.
>
> Wow!  This is a very strong hypothesis ... I really doubt this
> kind of certainty is possible for any AI with radically increasing
> intelligence ... let alone a complex-system-type AI with highly
> indeterminate internals...
>
> I don't expect you to have a proof for this assertion, but do you have
> an argument at all?
>
> ben

Ben,

You are being overdramatic here.

But since you ask, here is the argument/proof.

As usual, I am required to compress complex ideas into a terse piece of
text, but for anyone who can follow and fill in the gaps for themselves,
here it is.  Oh, and btw, for anyone who is scared off by the
psychological-sounding terms, don't worry:  these could all be cashed
out in mechanism-specific detail if I could be bothered -- it is just
that for a cognitive AI person like myself, it is such a PITB to have to
avoid such language just for the sake of political correctness.

You can build such a motivational system by controlling the system's
agenda through diffuse connections into the thinking component, connections
that control what it wants to do.

This set of diffuse connections will govern the ways that the system
gets 'pleasure' --  and what this means is, the thinking mechanism is
driven by dynamic relaxation, and the 'direction' of that relaxation
pressure is what defines the things that the system considers
'pleasurable'.  There would likely be several sources of pleasure, not
just one, but the overall idea is that the system always tries to
maximize this pleasure, but the only way it can do this is to engage in
activities or thoughts that stimulate the diffuse channels that go back
from the thinking component to the motivational system.

[Here is a crude analogy:  the thinking part of the system is like a
table containing a complicated model landscape, on which a ball bearing
is rolling around (the attentional focus).  The motivational system
controls this situation, not by micromanaging the movements of the ball
bearing, but by tilting the table in one direction or another.  Need to
pee right now?  That's because the table is tilted in the direction of
thoughts about water, and urinary relief.  You are being flooded with
images of the pleasure you would get if you went for a visit, and also
the thoughts and actions that normally give you pleasure are being
disrupted and associated with unpleasant thoughts of future increased
bladder-agony.  You get the idea.]
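
A toy numerical version of that analogy, purely for illustration (the
concepts and numbers are made up, and this is not the actual mechanism):
the motivational system applies a diffuse bias over whole regions of
concept space, and the attentional focus simply drifts toward whatever
the tilt has made most salient.

import random

# Each concept has a base salience; a drive "tilts" a whole region of them.
concepts = {"water": 0.2, "urinary relief": 0.1, "work": 0.6, "music": 0.4}

def tilt(saliences, drive, amount):
    """Motivational system: bias a region of concept space, never a single action."""
    biased = dict(saliences)
    for concept in drive:
        biased[concept] = biased[concept] + amount
    return biased

def attend(saliences):
    """Thinking component: attention rolls toward whatever is most salient,
    plus a little noise (the ball bearing on the tilted landscape)."""
    return max(saliences, key=lambda c: saliences[c] + random.gauss(0, 0.05))

need_to_pee = ["water", "urinary relief"]   # the drive currently active
print("attention drifts to:", attend(tilt(concepts, need_to_pee, 0.5)))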

The diffuse channels are set up in such a way that they grow from seed
concepts that are the basis of later concept building.  One of those
seed concepts is social attachment, or empathy, or imprinting ... the
idea of wanting to be part of, and approved by, a 'family' group.  By
the time the system is mature, it has well-developed concepts of family,
social group, etc., and the feeling of pleasure it gets from being part
of that group is mediated by a large number of channels going from all
these concepts (which all developed from the same seed) back to the
motivational system.  Also, by the time it is an adult, it is able to
understand these issues in an explicit way and come up with quite
complex reasons for the behavior that stimulates this source of pleasure.

[In simple terms, when it's a baby it just wants Momma, but when it is
an adult its concept of its social attachment group may, if it is a
touchy-feely liberal (;-)), embrace the whole world, and so it gets the
same source of pleasure from its efforts as an anti-war activist.  And
not just pleasure, either:  the related concept of obligation is also
there:  it cannot *not* be an anti-war activist, because that would lead
to cognitive dissonance.]

This is why I have referred to them as 'diffuse channels' - they involve
large numbers of connections from motivational system to thinking
system.  The motivational system does not go to the action stack and add
a specific, caref

Re: [singularity] Convincing non-techie skeptics that the Singularity isn't total bunk

2006-10-23 Thread Anna Taylor

On 10/23/06, Joel Pitt <[EMAIL PROTECTED]> wrote:

Then I think we should record some singularity music.


If you have lyrics to describe exactly what the singularity will be, I
would love to hear your music:)


This reminds me of talking with Ben about creating a musical
interface to Novamente. As soon as Novamente makes a hit tune, it can
represent itself as a funky-looking person and dance suggestively,
you'll have legions of young fans (who will eventually grow up) and
you can use your signing deals to fund further AGI research.


Wouldn't be any different from Arnold and politics.

Anna:)




On 10/22/06, Anna Taylor <[EMAIL PROTECTED]> wrote:
> Ignoring the mass is only going to limit the potential of any idea.
> People buy CD's, watch tv, download music, chat, read (if you're
> lucky) therefore the only possible solution is to find a way to
> integrate within the mass population.  (Unless ofcourse, the
> scientific technological world really doesn't mean to participate
> within the general public, I would assume that's a possibility.)

Then I think we should record some singularity music.

I'm moving to being a working DJ as a hobby, so if anyone can throw me
some danceable 130 bpm singularity songs that'd be great :)

This reminds me of talking with Ben about creating a musical
interface to Novamente. As soon as Novamente makes a hit tune, it can
represent itself as a funky-looking person and dance suggestively,
you'll have legions of young fans (who will eventually grow up) and
you can use your signing deals to fund further AGI research!

[ Whether you tell people that Novamente is a human or not is another story
]


--
-Joel

"Wish not to seem, but to be, the best."
-- Aeschylus

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/[EMAIL PROTECTED]





Re: [singularity] Convincing non-techie skeptics that the Singularity isn't total bunk

2006-10-23 Thread Anna Taylor

Sorry, that should have read:
"Do you not think that there are other possible economical ways to
motivate the...".

My mistake.
Anna:)





On 10/23/06, Anna Taylor <[EMAIL PROTECTED]> wrote:

On 10/23/06, J. Andrew Rogers <[EMAIL PROTECTED]> wrote:
> So you could say that the economics of responding to the mere threat
> of war is adequate to drive all the research the military does.

Yes I agree but why is the threat of war always the motive?  Do not
think that there are other possible economical ways to motivate the
military to want to concentrate on singularity-level events or am I
wasting my time trying to be optimistic?

Just Curious
Anna:)

>> On Oct 22, 2006, at 11:10 AM, Anna Taylor wrote:
> > On 10/22/06, Bill K wrote:
> >
> >> But I agree that huge military R&D expenditure (which already
> >> supports
> >> many, many research groups) is the place most likely to produce
> >> singularity-level events.
> >
> > I am aware that the military is the most likely place to produce
> > singularity-level events, i'm just trying to stay optimistic that a
> > war won't be the answer to advancing it.
>
>
> War per se does not advance military research, but economics and
> logistics.  If it was about killing people, we could have stopped at
> clubs and spears.  The cost of R&D and procurement of new systems,
> supporting and front line, are usually completely recovered within a
> decade of deployment relative to the systems they replace, so it is
> actually a "profitable" enterprise of sorts.  This is the primary
> reason military expenditures as a percentage of GDP continue to
> rapidly shrink -- even in the US -- while the apparent capabilities
> do not.
>
> So you could say that the economics of responding to the mere threat
> of war is adequate to drive all the research the military does.
> Short of completely eliminating the military, there will always be
> plenty of reason to do the R&D without ever firing a shot.  While I
> am doubtful that the military R&D programs will directly yield AGI,
> they do fund a lot of interesting blue sky research.
>
>
> J. Andrew Rogers
>
>
> -
> This list is sponsored by AGIRI: http://www.agiri.org/email
> To unsubscribe or change your options, please go to:
> http://v2.listbox.com/member/[EMAIL PROTECTED]
>



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/[EMAIL PROTECTED]


Re: [singularity] Convincing non-techie skeptics that the Singularity isn't total bunk

2006-10-23 Thread Anna Taylor

On 10/23/06, J. Andrew Rogers <[EMAIL PROTECTED]> wrote:

So you could say that the economics of responding to the mere threat
of war is adequate to drive all the research the military does.


Yes I agree but why is the threat of war always the motive?  Do not
think that there are other possible economical ways to motivate the
military to want to concentrate on singularity-level events or am I
wasting my time trying to be optimistic?

Just Curious
Anna:)


On Oct 22, 2006, at 11:10 AM, Anna Taylor wrote:

> On 10/22/06, Bill K wrote:
>
>> But I agree that huge military R&D expenditure (which already
>> supports
>> many, many research groups) is the place most likely to produce
>> singularity-level events.
>
> I am aware that the military is the most likely place to produce
> singularity-level events, i'm just trying to stay optimistic that a
> war won't be the answer to advancing it.


War per se does not advance military research, but economics and
logistics.  If it was about killing people, we could have stopped at
clubs and spears.  The cost of R&D and procurement of new systems,
supporting and front line, are usually completely recovered within a
decade of deployment relative to the systems they replace, so it is
actually a "profitable" enterprise of sorts.  This is the primary
reason military expenditures as a percentage of GDP continue to
rapidly shrink -- even in the US -- while the apparent capabilities
do not.

So you could say that the economics of responding to the mere threat
of war is adequate to drive all the research the military does.
Short of completely eliminating the military, there will always be
plenty of reason to do the R&D without ever firing a shot.  While I
am doubtful that the military R&D programs will directly yield AGI,
they do fund a lot of interesting blue sky research.


J. Andrew Rogers


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/[EMAIL PROTECTED]





Re: [singularity] Convincing non-techie skeptics that the Singularity isn't total bunk

2006-10-22 Thread Anna Taylor

On 10/22/06, Bill K wrote:


But I agree that huge military R&D expenditure (which already supports
many, many research groups) is the place most likely to produce
singularity-level events.


I am aware that the military is the most likely place to produce
singularity-level events; I'm just trying to stay optimistic that a
war won't be the answer to advancing it.

Anna:)


On 10/22/06, BillK <[EMAIL PROTECTED]> wrote:

On 10/22/06, Anna Taylor wrote:
>
> Gregory, this is a very damaging response.
> Posthuman has nothing to do with "supersoldiers".
> Technology is not there to enhance war-like behavior.  What part of
> history made you think that?
>

Unfortunately history tells us that war drives technology forward.
After the war, military tech developments find civilian uses.

WWII produced a huge leap forward in many technologies.
Iraq is continuing to drive military technology R&D.

DARPA already has unmanned robot vehicles driving around.
They want them flying around soon as well.

The military R&D may not necessarily go in the direction of a
superhuman-soldier. They may be aiming more at non-human automatic
weaponry, backed up by enhanced humans.

But I agree that huge military R&D expenditure (which already supports
many, many research groups) is the place most likely to produce
singularity-level events.

You should be keeping your fingers crossed that the USA gets there first.

BillK

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/[EMAIL PROTECTED]





Re: [singularity] Convincing non-techie skeptics that the Singularity isn't total bunk

2006-10-21 Thread Anna Taylor

"Shane Legg" <[EMAIL PROTECTED]> wrote on Mon, 25 Sep 2006 23:16:12 +0200:


I think the major problem is one of time scale.  Due to Hollywood everybody
is familiar with the idea of the future containing super powerful
intelligent (and usually evil) computers.  So I think the basic
concept that these things could happen in the future is already out
there in the popular culture.  I think the key thing is that most people,
both Joe six pack and almost all professors I know, don't think it's going
to happen for a really long time --- long enough that it's not going to
affect their lives, or the lives of anybody they know.  As such they
aren't all that worried about it.  Anyway, I don't think the idea is going
to be taken seriously until something happens that really gives the public a
fright.


You're right.  The general public doesn't really care about future
generations.  They are solely concentrated on themselves.  (A natural
response.)
This doesn't take away the responsibility.  If YOU are aware that
something is occurring, then it is your responsibility to follow through.
Frightening people doesn't relay anything but FEAR (false evidence
appearing real).

On 9/25/06, Bruce LaDuke <[EMAIL PROTECTED]> wrote

What happens to the oil industry?
What happens to politics because of what happens to the oil industry?  How
will a space elevator by 2012 change the balance of power?  Nanoweapons?
World War III?  China/India industrialization and resulting pollution? As
announced recently what happens when the world warms to its hottest level in
a million years?  When biodiversity reduction goes critical and plankton die
and oxygen fails?


Well, I guess you need to find the experts within those fields to tell
you what's  going to happen.
There are no quick solutions; I wish there were.
Nobody at this present moment has all the answers.  The best scenario
is to find the people you "believe" have the right answers to your
questions.

Gregory Johnson <[EMAIL PROTECTED]>  wrote:

For the very reasons you point out, the public is not ready for the singularity
and we don't have the time or resources to waste making them ready
for something we are not even sure the shape of yet.


Maybe creating a shape is a good thing.  Defining it may take some
time, but so what?


What would a supersoldier be like?


Why do we need a supersoldier?  If rationality had become popular,
we wouldn't need a supersoldier.  We would rely on science.
Wouldn't a supersoldier simply be a virus, chemical weapon or
biochemical reaction; isn't that what you are talking about?
The bombs never change; they just take different forms.


The point at which the singularity will occur is when the general population
becomes aware of the existence of posthuman supersoldiers and has to
decide if they want to destroy the technology or use it to enhance
their personal lives.

Gregory, this is a very damaging response.
Posthuman has nothing to do with "supersoldiers".
Technology is not there to enhance war-like behavior.  What part of
history made you think that?

Just curious
Anna:)














On 10/21/06, Gregory Johnson <[EMAIL PROTECTED]> wrote:

I had a bit of a cute experience last week.
An associate in a group that advises our government found, in her
housecleaning of old printed materials, a book in which one of my futurist
essays from back in 1971 appeared.  I thought it was a neat way to review
the subject, to see if the future for 2008-2012 I wrote about was close to
the real thing.

One of the biggest mistakes we can make when proposing the singularity to
non-techies is to even use the term.

Lets just dumb it down.
No sense getting the fear factor up when the singularity is so weak and
defenceless that popular Frankenstein/Terminator/Day After Tomorrow
reactions can jeopardize the entire event horizon.

I think we entered the event horizon the day the DARPANET was switched on
and simply have continued on since.

I really think that Kurzweil and Google, to name-drop a few of the players,
have found shelter in the USA military-industrial complex.
It is from here, sheltered from the luddites and with access to significant
R&D resources, that the singularity will emanate.

For the very reasons you point out, the public is not ready for the
singularity, and we don't have the time or resources to waste making them
ready for something we are not even sure the shape of yet.

Perhaps the GMO combination of extreme biological modification
to endure extreme lengths of time in constant battle without loss of
mental functionality, and enhancement to incorporate high-speed data
management direct to the cortex from the military internet servers,
will not only create a supersoldier, but also,
as an accidental side effect, a super-long-lived post-human.

Yes, society would change with the perfection of the supersoldier.
What would a supersoldier be like?
A person who can work perhaps 100 hours non-stop at full operating
efficiency both

Re: [singularity] Convincing non-techie skeptics that the Singularity isn't total bunk

2006-10-21 Thread Anna Taylor

deering <[EMAIL PROTECTED]> wrote on Sat, Oct 21, 2006 at 1:40 PM:

I think I have come across an epiphany.  'Normal' people, not like
us, make all of their decisions based on arguments from authority.

I would love for you to write a descriptive paragraph about
"normal people"; then maybe I'll understand what your epiphany is.


They don't feel competent to think for themselves.  They have always
been told that the experts know best.

I'm not sure I understand.  Everything we learn comes from nature.
The experts in nature have learned many things.  Wouldn't the right
phrase be: "They, the experts in that area of expertise, have taught us
very valuable things"?
Therefore, it's a contradiction.
If you feel that people are not competent, it's because you're not
interested in their expertise.
Experts come in very many different forms.  There are experts
in science, math, religion, sales, education, heuristics, technology
and many other fields.

I acknowledge that a form of Singularity will occur; I'm
just not clear what form will appeal to the general public.

Ignoring the masses is only going to limit the potential of any idea.
People buy CDs, watch TV, download music, chat, and read (if you're
lucky); therefore the only possible solution is to find a way to
integrate within the mass population.  (Unless, of course, the
scientific and technological world really doesn't mean to participate
with the general public; I would assume that's a possibility.)


Like climate change, when the overwhelming majority of experts agree
that it's real and coming, and when they say it on CNN, only then,
will the viewers believe it.

When it finally arrives on TV or DVD, chances are it's passé.  Already done.
Again a contradiction.
Then how do you change things?
The catch is to find the balance between the two.
Nobody from the science world is ever going to understand the man who
"picks up the garbage" and who downloads Eliezer's speech at Stanford,
but there are humans and believers who do just that.
Reducing them to a small minority is ignorant.  You do not have the
statistics to really generalize the public. (If you do, I would be
very interested in seeing those statistics; at least then I could
change my opinion.)

A reasoner would know this.

Just an opinion.
Anna:)








Don't stop trying to convince the viewers directly one-on-one, but
understand why it will never get anywhere.  Instead, try to convince
the experts.

I was curious.
How many experts do you think there are in the field?
Do you not think it's about time that the general public became
somewhat aware of what's going on regarding the Transhumanist,
Singularitarian and Extropian points of view?
I'm aware that most will not be convinced, simply due to ignorance,
lack of vision and creativity, but isn't it up to those who are aware
to make it clear to those who don't understand?






On 10/21/06, deering <[EMAIL PROTECTED]> wrote:

In reference to the original question of this thread, 'How to convince
non-techy types of the Singularity.'  I think I have come across an
epiphany.  'Normal' people, not like us, make all of their decisions based
on arguments from authority.  They don't feel competent to think for
themselves.  They have always been told that the experts know best.  Until
they see it on CNN they won't believe it.  You can't reason with them.
They're not reasoners, they're viewers.

Don't stop trying to convince the viewers directly one-on-one, but
understand why it will never get anywhere.  Instead, try to convince the
experts.  Like climate change, when the overwhelming majority of experts
agree that it's real and coming, and when they say it on CNN, only then,
will the viewers believe it.


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/[EMAIL PROTECTED]






Re: [singularity] Counter-argument

2006-10-04 Thread Anna Taylor

Lucio wrote:
Yes, but sometimes you have to put vast amounts of money into a
project into a creative idea to actually bring it to reality. And
often it is simply too much money to attract investors or even
government to the idea.

Anna writes:
Then I would assume that the creative idea wasn't or isn't very creative.

You wrote:
Take for instance drug discovery.

Anna writes:
I would assume that drug discovery, at any time, has been
financially beneficial.

You wrote:
Another example: particle physics. In the 90s there was that project
for the Supercollider, a particle accelerator that would produce
energies high enough...

Anna writes:
I'm not really sure what you are talking about.  Could you explain?

You wrote:
Of course creative breakthroughs are possible.

Anna writes:
Yes, that's what makes them unique :)






On 10/4/06, Bruce LaDuke <[EMAIL PROTECTED]> wrote:

The current rate of innovation doesn't really matter if society 'innovates
innovation itself.'  This **is** singularity because it removes the primary
barrier for all social advance, which is the understanding of what advance
is, where it comes from, how we accomplish it, and how it can be mechanized.

Industry is the application of advance or the 'science of making things.'
Advance itself is somewhat separate from the products associated with that
advance as evidenced by the fact that one can have the knowledge to make
something and choose to never make it.  Not saying that one doesn't need
money for some advances, but I'm saying you have to separate these two
appropriately to fully understand both.

That said, I see two primary obstacles to singularity and economics is not
one of them.

The first is Social Impacts.  It is not a given that social advance will
eliminate social risks and negative social impacts that result from that
advance.  For example, one can 'advance' to make bombs that fit in your
shirt pocket that have power to destroy a city, but conflicting, beliefs,
values, and ideologies will be the killer, not the bomb itself.  It is quite
possible to be highly knowledgeable adults and social or spiritual babies at
the same time.

The second is Social Acceptance.  It's one thing to discover singularity,
but it is entirely another for society as a whole to accept it.  This has
been the primary obstacle to most great advances in ages past.  The
'establishment' tends to resist truly radical advance because it reforms the
establishment.  The world is flat until after you're dead, then we might
believe it's round.

On a personal level, what criteria do you use personally to accept or reject
new ideas?  Are politics, status, connections, reputation, etc. in any way
involved in your decision?  Point being that individuals, groups, and
society often reject advance, or accept non-advances, for all the wrong
reasons.

My futuring manifesto talks about the three elements of advance, social
impacts, and industry in a little more detail:
http://www.hyperadvance.com/manifesto.htm

Kind Regards,

Bruce LaDuke
Managing Director

Instant Innovation, LLC
Indianapolis, IN
[EMAIL PROTECTED]
http://www.hyperadvance.com




Original Message Follows
From: "Lúcio de Souza Coelho" <[EMAIL PROTECTED]>
Reply-To: singularity@v2.listbox.com
To: singularity@v2.listbox.com
Subject: Re: [singularity] Counter-argument
Date: Wed, 4 Oct 2006 23:25:53 -0300

On 10/4/06, Anna Taylor <[EMAIL PROTECTED]> wrote:
(...)
>From my experience:
>Innovative creative ideas are in most, rewarding, and at times very
>financially rewarding.
(...)

Yes, but sometimes you have to put vast amounts of money into a
project into a creative idea to actually bring it to reality. And
often it is simply too much money to attract investors or even
government to the idea.

Take for instance drug discovery. Sometimes it takes years of research
and millions and millions on lab equipment and scientists to make some
advance in some group of medications/substances; and as in any other
risk activity, sometimes those efforts end in failure. There is even a
provocative book about that, "The $800 Million Pill". Now, imagine
that in the future, after we discover many other drugs, the cost for
finding even newer ones may be so high that companies will decide that
it is higher than the likely return obtained by selling the said newer
drugs. And then advances in that field will come to a halt. In fact
costs of drug development are already high enough to trigger work on a
new field of research, the study of combinations of *existing* drugs,
which may have some interesting returns at a far lower cost.

Another example: particle physics. In the 90s there was that project
for the Supercollider, a particle accelerator that would produce
energies high enough to probe the inner workings of matter-energy and,
who knows, even Existence itself. (Supposedly the Supercollider

[singularity] Newton not shaped properly

2006-10-04 Thread Anna Taylor

Eliezer wrote:
It really doesn't matter whether an improperly shaped intelligence
explosion is set off by altruistic idealists, or patriots, or
bureaucrats, or terrorists.

Anna:
What?  How can it not matter?  If you could change how it happens, why
wouldn't you?  If you don't know, don't give your opinion.

You wrote:
A galaxy turned into paperclips is just as much of a tragedy either way.

Anna writes:
Yes, I agree, a galaxy turned into paperclips is a tragedy.
Highly unlikely.

You wrote:
Ask yourself if they seem to understand intelligence as
solidly as you understand biology.

Anna questions:
Exactly.  Ask yourself if you understand math, heuristics and physics
as well as someone else understands public relations, religion and sales,
or as well as someone else understands love, family and relationships.

Eliezer wrote:
It doesn't matter if they're motivated by the will to heal, or pure
greed - they can't do it at that level of understanding, end of story.

Anna questions:
I don't understand what you mean.  Could you rephrase?

You wrote:
Newton wasted much of his life on Christian mysticism

Anna writes:
Newton had no choice but to waste half of his precious time on
Christian mysticism because, at that time, religion ruled.  He had no
choice.

Anna

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/[EMAIL PROTECTED]


Re: [singularity] Counter-argument

2006-10-04 Thread Anna Taylor

Lúcio de Souza Coelho wrote:
and so at some point it will simply not be financially attractive to
invest in innovation anymore.

Anna:)
I'm not sure I understand.
What do you mean by this?

From my experience:
Innovative, creative ideas are, for the most part, rewarding, and at times very
financially rewarding.

Just curious
Anna:)















On 10/4/06, Lúcio de Souza Coelho <[EMAIL PROTECTED]> wrote:

Some argue that the Singularity will not be reached because of
economic barriers. As the "easy" scientific and technological advances
are reached, the difficult ones will demand more and more sums of
money/time/effort to be accomplished, and so at some point it will
simply not be financially attractive to invest in innovation anymore.

This argument was one of the many exposed by John Horgan in his "The
End of Science" in the 90s. And there is a more recent and highly
controversial article that claims it is already happening:
http://tinyurl.com/n6zsk

On 10/4/06, Joshua Fox <[EMAIL PROTECTED]> wrote:
> Could I offer Singularity-list readers this intellectual challenge: Give an
> argument supporting the thesis "Any sort of Singularity is very unlikely to
> occur in this century."
>
> Even if you don't actually believe the point, consider it a
> debate-club-style challenge. If there is already something on the web
> somewhere, could you please point me to it.
>
> I've been eager for this piece ever since I learned of the Singularity
> concept.  I know of  the "objections" chapter in Kurzweil's Singularity is
> Near, the relevant parts of Vinge's seminal essay, as well the ideas of
> Lanier, Huebner, and a few others, but in all the millions of words out
> there I can't remember seeing a well-reasoned article with the above claim
> as its major thesis.  (Note, I'm looking for "why the Singularity won't
> happen" rather than "why the Singularity is a bad idea" or "why technology
> is not accelerating".)
(...)

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/[EMAIL PROTECTED]





Re: [singularity] Convincing non-techie skeptics that the Singularity isn't total bunk

2006-09-27 Thread Anna Taylor

Russell Wallace wrote:
- though with luck and if things go as I hope they will, a lot more
intelligently than they do today.

Hopefully...
Anna:)



On 9/27/06, Russell Wallace <[EMAIL PROTECTED]> wrote:

On 9/27/06, Anna Taylor <[EMAIL PROTECTED]> wrote:
>
> Yes, my apology, I was thinking on the terms of say 20-35 years.


Sure, in that timescale we'll still be looking at computers just doing what
humans program them to do - though with luck and if things go as I hope they
will, a lot more intelligently than they do today.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/[EMAIL PROTECTED]






Re: [singularity] Convincing non-techie skeptics that the Singularity isn't total bunk

2006-09-27 Thread Anna Taylor

In the same way that the soup of organic chemical reactions led to
evolutionary systems and eventually led to *us* thinking.
--
-Joel

Yes, you're right.
I was thinking along the lines of the first ape description and moving it
along from there :)

Anna:)







On 9/27/06, Joel Pitt <[EMAIL PROTECTED]> wrote:

On 9/28/06, Anna Taylor <[EMAIL PROTECTED]> wrote:
> Bruce LaDuke wrote:
> I don't believe a machine can ever have intention that doesn't
> ultimately trace back to a human being.
>
> I was curious to know what the major opinions are on this comment.
> Most of my concerns are related to the fact that I too believe it will
> be traced back to a human(s).  Are there other ways at looking at the
> scenario?  Do people really believe that a whole new species will
> emerge not having any reflection to a human?

Well this starts to get into cause and effect discussion.

My 2c is that since we'll ultimately create these thinking machines,
so any intention it has will be, in some way, however distant and
removed, traceable back to humans.

In the same way that the soup of organic chemical reactions led to
evolutionary systems and eventually led to *us* thinking.

-J

--
-Joel

"Wish not to seem, but to be, the best."
-- Aeschylus

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/[EMAIL PROTECTED]





Re: [singularity] Convincing non-techie skeptics that the Singularity isn't total bunk

2006-09-27 Thread Anna Taylor

Russell wrote:
Well, ever's a long time.

Yes, my apology, I was thinking on the terms of say 20-35 years.

Anna:)


On 9/27/06, Russell Wallace <[EMAIL PROTECTED]> wrote:

On 9/27/06, Anna Taylor <[EMAIL PROTECTED]> wrote:
>
> Bruce LaDuke wrote:
> I don't believe a machine can ever have intention that doesn't
> ultimately trace back to a human being.
>
> I was curious to know what the major opinions are on this comment.


Well, ever's a long time. I think it will be true for the foreseeable
future. Whether it will still be true in a million years, say, is a
different matter; I can't predict that far ahead and I don't think anyone
else can either.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/[EMAIL PROTECTED]






Re: [singularity] Convincing non-techie skeptics that the Singularity isn't total bunk

2006-09-27 Thread Anna Taylor

Bruce LaDuke wrote:
I don't believe a machine can ever have intention that doesn't
ultimately trace back to a human being.

I was curious to know what the major opinions are on this comment.
Most of my concerns are related to the fact that I too believe it will
be traced back to a human(s).  Are there other ways of looking at the
scenario?  Do people really believe that a whole new species will
emerge without any reflection of a human?

Anna:)


On 9/26/06, Bruce LaDuke <[EMAIL PROTECTED]> wrote:

Hank,

Can definitely appreciate your view here, and if I held to the Kurzweilian
belief, I'd be inclined to agree.  But I really don't see an 'endpoint' and
also don't see superhuman intelligence the same way I think folks in the
Kurzweilian arena tend to see it because I don't believe a machine can ever
have intention that doesn't ultimately trace back to a human being.
Definitely not the popular view I know, but I think as we approach this
level of intelligence we're going to clearly see what differentiates us
humans from machines, which is intention, motive, desire, spirituality.

This stems from my understanding of knowledge creation, which basically sees
knowledge as a non-feeling, non-intending, non-motivated mass of symbolic
connections that is constantly expanding through the efforts driven by human
intention.  Robotics, cybernetics, etc., being the actionable arm of these
creations...but again, only the human has intention.  As such there is no
real endpoint in terms of how far we will expand this intelligence.  It is a
never-ending expansion as we explore the universe and create technologies.

Granted a human with good or bad intentions can *absolutely* transfer those
intentions to the machine, and again just my opinion, but I think the human
originated these intentions and the machine *absolutely never* will
originate them...only execute them as instructed.

In transferring these intentions to machine they are magnifying personal
intentions with a 'tool' that can be used for good or bad.  The constructive
and/or destructive force is exponentially magnified by the 'tool' man is
given.  Similar to nuclear weapons...the more powerful the tool, the more
rigor and wisdom required to manage it.

When we can barely manage the tools we have, we're not going to fare well
with a bigger, more powerful tool.  We need to start with understanding the
culprit of our current woes...poorly understood and managed human intention.
  I think I've used this quote before, but here's how Drucker put it:

"In a few hundred years, when the history of our time will be written from a
long-term perspective, it is likely that the most important event that
historians will see is not technology, not the Internet, not e-commerce. It
is an unprecedented change in the human condition. For the first time -
literally - substantial and rapidly growing numbers of people have choices.
For the first time, they will have to manage themselves. And society is
totally unprepared for it." - Peter Drucker

Kind Regards,

Bruce LaDuke
Managing Director

Instant Innovation, LLC
Indianapolis, IN
[EMAIL PROTECTED]
http://www.hyperadvance.com




Original Message Follows
From: "Hank Conn" <[EMAIL PROTECTED]>
Reply-To: singularity@v2.listbox.com
To: singularity@v2.listbox.com
Subject: Re: [singularity] Convincing non-techie skeptics that the
Singularity isn't total bunk
Date: Tue, 26 Sep 2006 13:36:57 -0400

Bruce I tend to agree with all the things you say here and appreciate your
insight, observations, and sentiment.

However, here is where you are horribly wrong:

"In my mind, singularity is no different.  I pesonally see it providing just
another tool in the hand of mankind, only one of greater power."

The Kurzweilian belief that the Singularity will be the end point of the
accelerating curves of technology discounts the reality of creating AGI. All
that matters is the algorithm for intelligence.

As such, the Singularity is entirely *discontinuous* with every single
trend- regardless of kind, scale, or history- that humanity knows today.

-hank


On 9/25/06, Bruce LaDuke <[EMAIL PROTECTED]> wrote:
>
>I really like Shane's observation below that people just don't think
>Singularity is coming for a very long time.  The beginning affects are
>already here.  Related to this, I've got a few additional thoughts to
>share.
>
>We're not looking into singularity yet, but the convergence has already
>started.  Consider that the molecular economy has the potential to bring
>total social upheaval in its own right, without singularity.  For example,
>what happens when an automobile weighs around 400 pounds and is powered
>by a battery that never needs charging.  What happens to the oil industry?
>What happens to politics because of what happens to the oil industry?  How
>will a space elevator by 2012 change the balance of power?  Nanoweapons?
>World War III?  China/India industrialization and resulting pollution? As
>announced recently what h

[singularity] Re: Is Friendly AI Bunk?

2006-09-14 Thread Anna Taylor

Ben wrote:
I don't think that Friendliness, to be meaningful, needs to have a
compact definition.

Anna's questions:
Then how will you build a "Friendly AI"?
Are you no longer interested in building a "Friendly AI"?
Sorry for the ignorance, but if you don't analyze what it takes to
create a "Friendly" AI, how can you then create it?
Otherwise, you are only building an AI without meaning.
You then join the AI researchers who are interested in building a
smarter-than-intelligent design.
I thought that Google, Wikipedia or Princeton's WordNet pretty much
ruled this world.

Just my opinion.
Anna:)





On 9/14/06, Ben Goertzel <[EMAIL PROTECTED]> wrote:

> In my view, thinking too much about whether one can prove that a system
> is friendly or not is getting a bit ahead of ourselves.  What we need
first
> is
> a formal definition of what friendly means. Then we can try to figure out
> whether or not we can prove anything.  I think we should focus on the
> problem of definition first.
>
> Shane

But, it may be that one can prove a theorem of the form

"For any definition of Friendliness fulfilling properties P1, in any
universe satisfying properties P2, it is impossible for a system of
complexity < K1 to prove Friendliness about a system of complexity >
K2"

(for an appropriate computation-theory-relevant definition of "complexity")
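
One way to write that schema out, in ad hoc notation (this is just a
formal restatement of the sentence above, nothing more):

\forall D \in \mathcal{D}_{P_1},\ \forall U \in \mathcal{U}_{P_2},\ \forall S, T:\quad
\bigl( K(S) < K_1 \ \wedge\ K(T) > K_2 \bigr) \;\Rightarrow\; S \nvdash \mathrm{Friendly}_{D,U}(T)

where \mathcal{D}_{P_1} is the class of Friendliness definitions satisfying
P_1, \mathcal{U}_{P_2} is the class of universes satisfying P_2, and
K(\cdot) is the chosen complexity measure.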

In this case, the problem of definition of Friendliness is sidestepped...

I think this is the right approach, because I don't think that
Friendliness, to be meaningful, needs to have a compact definition.
My personal definition of what a Friendly universe is like is quite
complex and difficult to formalize, in the same manner that the rules
of English are complex and difficult to formalize ... But that
doesn't mean that it's meaningless, nor that it's unformalizable in
principle.

I think the argument in my recent pdf file could probably be turned
into such a proof, where the property P2 of the universe has to do
with its dynamical complexity.  But I don't seem to have the time to
turn my heuristic argument into a real proof...

-- Ben

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/[EMAIL PROTECTED]





[singularity] Re: Is Friendly AI Bunk?

2006-09-12 Thread Anna Taylor

Matt Mahoney wrote:
You wrote:
1. It is not possible for a less intelligent entity (human) to predict
the behavior of a more intelligent entity.

Anna questions:
I'm just curious to know why?
If you're saying it's not possible then you must have some pretty good
references to back that statement.
I would like to read those references if it's possible.
Thanks

You wrote:
2. A rigorous proof that an AI will be friendly requires a rigorous
definition of "friendly".

Anna agrees:
If you are going to promote "friendly" behavior, most people need to
agree on what the definition of "friendly" really is.

You wrote:
3. Assuming (2), proving this property runs into Godel's
incompleteness theorem for any AI system with a Kolmogorov complexity
over about 1000 bits.
See http://www.vetta.org/documents/IDSIA-12-06-1.pdf

Anna writes:
No opinion; I have no idea what you're talking about.
Could you please rephrase #3 in plain English so that I can
understand? :)

You wrote:
4. There is no experimental evidence that consciousness exists.  You
believe that it does because animals that lacked an instinct for self
preservation and fear of death were eliminated by natural selection.

Anna writes:
Your right.  There is no way to measure consciouness.
At the same time, the word does exist.
Why?
What do you think the word consciouness means?

Just curious
Anna:)







On 9/12/06, Matt Mahoney <[EMAIL PROTECTED]> wrote:

From: Hank Conn <[EMAIL PROTECTED]>
> I think the question, "will the AI be Friendly?", is only possible to
> answer AFTER you have the source code of the conscious algorithms sitting
> on your computer screen, and have a rigorous prior theoretical knowledge on
> exactly how to make an AI Friendly.

Then I'm afraid it is hopeless.  There are several problems.

1. It is not possible for a less intelligent entity (human) to predict the
behavior of a more intelligent entity.  A state machine cannot simulate
another machine with more states than itself.

2. A rigorous proof that an AI will be friendly requires a rigorous
definition of "friendly".

3. Assuming (2), proving this property runs into Godel's incompleteness
theorem for any AI system with a Kolmogorov complexity over about 1000 bits.
 See http://www.vetta.org/documents/IDSIA-12-06-1.pdf

4. There is no experimental evidence that consciousness exists.  You believe
that it does because animals that lacked an instinct for self preservation
and fear of death were eliminated by natural selection.


-- Matt Mahoney, [EMAIL PROTECTED]




-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/[EMAIL PROTECTED]



