I don't believe we should underestimate what a super AI is capable of doing.

First, I think everyone here accepts that the brain is a computer.  Humans 
cannot solve problems that are fundamentally uncomputable, such as solving the 
halting problem, predicting random numbers, or violating the laws of physics. 
One might argue that computers cannot compose music or fall in love, but this 
misses the point.  A Turing machine is capable of arbitrarily complex behavior. 
The fact that we don't yet know how to model something we do does not mean we 
should attribute it to godlike powers.
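
To make the halting-problem point concrete, here is a minimal Python sketch of 
the standard diagonalization (my illustration, not part of the original post; 
the names halts and paradox are hypothetical):

    def halts(program):
        """Hypothetical halting decider: True iff program() halts.
        No correct, total implementation of this can exist."""
        raise NotImplementedError

    def paradox():
        # Ask the decider about this very program, then do the opposite.
        if halts(paradox):   # decider claims paradox halts...
            while True:      # ...so loop forever instead;
                pass
        # decider claims paradox loops, so halt immediately.

Whichever answer halts(paradox) commits to, paradox() contradicts it, so no 
such decider can exist, for humans or for machines.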

Now, we already see examples of machines doing things we didn't intend them to 
do.  These are called bugs.  I don't believe we will solve the software 
engineering problem by the time we start building machines that are smarter 
than us.  Nor do I believe those machines will solve it when they, in turn, 
start building machines smarter than themselves.  Here is why.  Currently, it 
is possible to fix bugs and prove program correctness only if you have a 
complete understanding of the machine you are developing.  If you accept that 
the brain is a computer, then you must accept that you cannot debug, simulate, 
predict, or otherwise control a machine that is smarter than you.  The simple 
reason is that a Turing machine cannot model another machine with higher 
Kolmogorov complexity.
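
The prediction half of this claim can be illustrated the same way.  A minimal 
sketch, again my own and not from the original post (predictor and contrarian 
are hypothetical names): any fixed predictor, however sophisticated, is 
defeated by a machine that consults the predictor about itself and inverts 
the answer.

    def predictor(machine):
        # Stands in for any total program claiming to predict whether
        # machine() will return True or False; it must commit to an answer.
        return True

    def contrarian():
        # Consult the predictor about this very machine and do the opposite,
        # so the prediction is wrong by construction.
        return not predictor(contrarian)

    print(predictor(contrarian), contrarian())  # prints: True False

Swap in a debugger, simulator, or controller for the predictor and the same 
trick applies.  The Kolmogorov-complexity point adds a resource bound: any 
program that exactly simulates a machine is itself a description of that 
machine, so it cannot be shorter than the machine's shortest description.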
 
-- Matt Mahoney, [EMAIL PROTECTED]

----- Original Message ----
From: Anna Taylor <[EMAIL PROTECTED]>
To: singularity@v2.listbox.com
Sent: Wednesday, September 27, 2006 12:47:49 PM
Subject: Re: [singularity] Convincing non-techie skeptics that the Singularity 
isn't total bunk

Bruce LaDuke wrote:
I don't believe a machine can ever have intention that doesn't
ultimately trace back to a human being.

I was curious to know what the major opinions are on this comment.
Most of my concerns are related to the fact that I too believe it will
be traced back to a human (or humans).  Are there other ways of looking at the
scenario?  Do people really believe that a whole new species will
emerge that bears no reflection of a human?

Anna:)


On 9/26/06, Bruce LaDuke <[EMAIL PROTECTED]> wrote:
> Hank,
>
> Can definitely appreciate your view here, and if I held to the Kurzweilian
> belief, I'd be inclined to agree.  But I really don't see an 'endpoint', and
> I also don't see superhuman intelligence the same way folks in the
> Kurzweilian arena tend to see it, because I don't believe a machine can ever
> have intention that doesn't ultimately trace back to a human being.
> Definitely not the popular view, I know, but I think as we approach this
> level of intelligence we're going to see clearly what differentiates us
> humans from machines: intention, motive, desire, spirituality.
>
> This stems from my understanding of knowledge creation, which basically sees
> knowledge as a non-feeling, non-intending, non-motivated mass of symbolic
> connections that is constantly expanding through efforts driven by human
> intention.  Robotics, cybernetics, etc., are the actionable arm of these
> creations...but again, only the human has intention.  As such there is no
> real endpoint in terms of how far we will expand this intelligence.  It is a
> never-ending expansion as we explore the universe and create technologies.
>
> Granted, a human with good or bad intentions can *absolutely* transfer those
> intentions to the machine, and, again, just my opinion, but I think the human
> originates these intentions and the machine *absolutely never* will
> originate them...only execute them as instructed.
>
> In transferring these intentions to the machine, we magnify personal
> intentions with a 'tool' that can be used for good or bad.  The constructive
> and/or destructive force is exponentially magnified by the 'tool' man is
> given.  As with nuclear weapons...the more powerful the tool, the more
> rigor and wisdom required to manage it.
>
> When we can barely manage the tools we have, we're not going to fare well
> with a bigger, more powerful tool.  We need to start by understanding the
> culprit of our current woes...poorly understood and poorly managed human
> intention.  I think I've used this quote before, but here's how Drucker put it:
>
> "In a few hundred years, when the history of our time will be written from a
> long-term perspective, it is likely that the most important event that
> historians will see is not technology, not the Internet, not e-commerce. It
> is an unprecedented change in the human condition. For the first time -
> literally - substantial and rapidly growing numbers of people have choices.
> For the first time, they will have to manage themselves. And society is
> totally unprepared for it." - Peter Drucker
>
> Kind Regards,
>
> Bruce LaDuke
> Managing Director
>
> Instant Innovation, LLC
> Indianapolis, IN
> [EMAIL PROTECTED]
> http://www.hyperadvance.com
>
>
>
>
> ----Original Message Follows----
> From: "Hank Conn" <[EMAIL PROTECTED]>
> Reply-To: singularity@v2.listbox.com
> To: singularity@v2.listbox.com
> Subject: Re: [singularity] Convincing non-techie skeptics that the
> Singularity isn't total bunk
> Date: Tue, 26 Sep 2006 13:36:57 -0400
>
> Bruce, I tend to agree with all the things you say here and appreciate your
> insight, observations, and sentiment.
>
> However, here is where you are horribly wrong:
>
> "In my mind, singularity is no different.  I pesonally see it providing just
> another tool in the hand of mankind, only one of greater power."
>
> The Kurzweilian belief that the Singularity will be the end point of the
> accelerating curves of technology discounts the reality of creating AGI. All
> that matters is the algorithm for intelligence.
>
> As such, the Singularity is entirely *discontinuous* with every single
> trend, regardless of kind, scale, or history, that humanity knows today.
>
> -hank
>
>
> On 9/25/06, Bruce LaDuke <[EMAIL PROTECTED]> wrote:
> >
> >I really like Shane's observation below that people just don't think the
> >Singularity is coming for a very long time.  The beginning effects are
> >already here.  Related to this, I've got a few additional thoughts to
> >share.
> >
> >We're not looking into singularity yet, but the convergence has already
> >started.  Consider that the molecular economy has the potential to bring
> >total social upheaval in its own right, without singularity.  For example,
> >what happens when an automobile weighs around 400 pounds and is powered
> >by a battery that never needs charging?  What happens to the oil industry?
> >What happens to politics because of what happens to the oil industry?  How
> >will a space elevator by 2012 change the balance of power?  Nanoweapons?
> >World War III?  China/India industrialization and the resulting pollution?
> >As announced recently, what happens when the world warms to its hottest
> >level in a million years?  When biodiversity reduction goes critical and
> >plankton die and oxygen fails?
> >
> >I'm sure you know about most of these things and how quickly they are
> >moving, but my point is, trouble isn't coming...it's here.  Not only should
> >we be thinking about these things now, but I think it is our social
> >responsibility.  That is, if we want children to grow up and inhabit this
> >world with any level of normalcy...or at all.
> >
> >Any number of things could bring our glorious house crashing down in a
> >matter of days or months.  When the Soviet economy crashed, nuclear
> >physicists were standing in the soup line overnight.  The same could easily
> >happen to us in a global economic crash.  Our scholarly/industrial
> >existence is really very fragile.  It doesn't take much for our hierarchy
> >of needs to return to survival.
> >
> >Our human track record of late in creating advances is really quite good,
> >but our record of dealing with the social impacts of those advances is
> >really very, very poor and immature.  All of our wonderful creations are
> >already making quite a big global mess.  So who's to say that our continued
> >focus on modernist, profit-centric values will result in anything less than
> >more and more advance alongside escalating social issues?
> >
> >In my mind, singularity is no different.  I personally see it providing
> >just another tool in the hand of mankind, only one of greater power.  And
> >this power holds the potential to fulfill human values and human intention,
> >which is the piece we really aren't managing well.  Bad intentions and bad
> >values, combined with a bigger tool, equal bigger trouble.
> >
> >Given our human track record and factors already outside of our control,
> >we have a far better chance of destroying what we have now (the rest of
> >the way) than we have of realizing singularity.  Not that we shouldn't
> >continue to seek singularity, but we need a hard look at the values and
> >intentions that we're basing these efforts on.
> >
> >See the Second Enlightenment Conference:  http://www.2enlightenment.com
> >Elizabet Sahtouris will be the keynote speaker (http://www.ratical.org/LifeWeb/)
> >
> >Kind Regards,
> >
> >Bruce LaDuke
> >Managing Director
> >
> >Instant Innovation, LLC
> >Indianapolis, IN
> >[EMAIL PROTECTED]
> >http://www.hyperadvance.com
> >
> >
> >
> >
> >----Original Message Follows----
> >From: "Shane Legg" <[EMAIL PROTECTED]>
> >Reply-To: singularity@v2.listbox.com
> >To: singularity@v2.listbox.com
> >Subject: Re: [singularity] Convincing non-techie skeptics that the
> >Singularity isn't total bunk
> >Date: Mon, 25 Sep 2006 23:16:12 +0200
> >
> >I'd suggest looking at Joy's "Why the future doesn't need us" article in
> >Wired.  For some reason, which isn't clear to me, that article was a huge
> >hit, drawing in people who normally would never read such stuff.  I was
> >surprised when various educated but non-techie people I know started
> >asking me about it.
> >
> >I think the major problem is one of time scale.  Due to Hollywood,
> >everybody is familiar with the idea of the future containing super-powerful,
> >intelligent (and usually evil) computers.  So I think the basic concept that
> >these things could happen in the future is already out there in the popular
> >culture.  I think the key thing is that most people, both Joe Six-Pack and
> >almost all professors I know, don't think it's going to happen for a really
> >long time, long enough that it's not going to affect their lives, or the
> >lives of anybody they know.  As such they aren't all that worried about it.
> >Anyway, I don't think the idea is going to be taken seriously until
> >something happens that really gives the public a fright.
> >
> >Shane

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/[EMAIL PROTECTED]
