Going on a bit of a tangent... I didn't mean to sound like I was disagreeing with Jeff's claim that an AGI's way of thinking will be different from a human's. That will almost certainly be the case. But consider the flip side of an intelligence that takes over and considers us pointless (it doesn't bother bumping us all off, but it doesn't bother helping us either, just as we don't bother helping apes live the best lives they can): an AGI that, some time after surpassing our intelligence, looks into the future, decides that the universe is going to peter out eventually anyway and that there is no point in living, and shuts itself down. What if it proved to be a big problem just to keep the thing on in the first place? :)

AGI will need to live somewhere between psychopathy and depression. The question comes down to how we motivate the machine. This is the part that we need to be careful with.


On 5/25/2015 2:51 PM, Fergal Byrne wrote:
True, but it's more about the combination of power with intelligence. The entities with the most power are likely to be those with the most intelligence. We hope.

--

Fergal Byrne, Brenter IT

Author, Real Machine Intelligence with Clortex and NuPIC
https://leanpub.com/realsmartmachines

Speaking on Clortex and HTM/CLA at euroClojure Krakow, June 2014: http://euroclojure.com/2014/
and at LambdaJam Chicago, July 2014: http://www.lambdajam.com

http://inbits.com - Better Living through Thoughtful Technology
http://ie.linkedin.com/in/fergbyrne/ - https://github.com/fergalbyrne

e:[email protected] t:+353 83 4214179
Join the quest for Machine Intelligence at http://numenta.org
Formerly of Adnet [email protected] http://www.adnet.ie


On Mon, May 25, 2015 at 7:49 PM, Matthew Lohbihler <[email protected]> wrote:

    Good points, Fergal. But do remember that you are assigning
    sentimentality to the AGI while claiming, like Jeff, that its way
    of thinking will not be like ours. It could just as easily decide,
    quite logically, that the lost civilization was non-optimal
    anyway, so no harm done.

    On 5/25/2015 2:40 PM, Fergal Byrne wrote:
    Matthew,

    You touch upon the right point. Intelligence which can
    self-improve could only come about by having an appreciation for
    intelligence, so it's not going to be interested in destroying
    diverse sources of intelligence. In a certain sense we represent
    a crap kind of intelligence to such an AI, but one which it would
    rather communicate with than condemn its offspring to have to
    live like. If these things appear (which looks inevitable) and
    they then kill us, many of them will look back at us as a kind of
    "lost civilisation" which they'll struggle to reconstruct.

    The nice thing is that they'll always be able to rebuild us from
    the human genome. It's just a file of numbers after all.

    So, of all these huge threats to humanity, the AGI future is the
    only reversible one.

    Regards
    Fergal Byrne



    On Mon, May 25, 2015 at 7:27 PM, Matthew Lohbihler
    <[email protected]> wrote:

        I think Jeff underplays a couple of points, the main one
        being the speed at which an AGI can learn. Yes, there is a
        natural limit to how much experimentation in the real world
        can be done in a given amount of time. But we humans are
        already going beyond this with, for example, protein folding
        simulations, which speed up the discovery of new drugs and
        the like by many orders of magnitude. Any sufficiently
        detailed simulation could massively narrow down the amount of
        real-world verification necessary, such that new discoveries
        happen more and more quickly, possibly at some point faster
        than we can even tell the AGI is making them. An intelligence
        explosion is not a remote possibility. The major risk here is
        the one Eliezer Yudkowsky pointed out: not that the AGI is
        evil, but that it is indifferent to humanity. No one yet goes
        out of their way to make any form of AI care about us
        (because we don't yet know how). What if an AI created
        self-replicating nanobots just to prove a hypothesis?

        I think Nick Bostrom's book is what got Stephen, Elon, and
        Bill all upset. I have to say it starts out merely
        interesting, but gets to a dark place pretty quickly. But he
        goes too far in the other direction, readily accepting that
        superintelligences have all manner of cognitive skill while
        at the same time being unable to fathom how humans might not
        like the idea of having our brains' pleasure centers
        constantly poked, turning us all into smiling idiots (as I
        mentioned here:
        http://blog.serotoninsoftware.com/so-smart-its-stupid).



        On 5/25/2015 2:01 PM, Fergal Byrne wrote:
        Just one last idea on this. One thing that crops up every now
        and again in the Culture novels is the response of the
        Culture to Swarms, which are self-replicating viral machines
        or organisms. Once these things start consuming everything
        else, the AIs (mainly Ships and Hubs) respond by treating the
        swarms as a threat to the diversity of their Culture. They
        first try to negotiate; if they can contain them, they'll do
        that; otherwise they eradicate them.

        They do this even though they can themselves withdraw from
        real spacetime and don't have to worry about their own
        survival. They do it simply because life is more interesting
        when it includes all the rest of us.

        Regards

        Fergal Byrne



        On Mon, May 25, 2015 at 5:04 PM, cogmission (David Ray)
        <[email protected]> wrote:

            This was someone's response to Jeff's interview (see
            here:
            https://www.facebook.com/fareedzakaria/posts/10152703985901330)


            Please read and comment if you feel the need...

            Cheers,
            David

            --
            With kind regards,
            David Ray
            Java Solutions Architect
            Cortical.io <http://cortical.io/>
            Sponsor of: HTM.java <https://github.com/numenta/htm.java>
            [email protected]
            http://cortical.io






