Alexander,

Thanks for sending this. I really dug what Michio Kaku was talking about. Bill Joy was interesting too. The rest was also good, but I've heard all that before. :)



On 5/25/2015 4:06 PM, Alexander Kettinen wrote:
If anyone has 46 minutes to spare, the University of Reykjavik presents the following:
https://www.youtube.com/watch?v=pRPpFqufyOo


2015-05-25 21:59 GMT+02:00 Matthew Lohbihler <[email protected]>:

    Goodness. I thought we agreed that an AGI would not think like
    humans. And besides, "love" doesn't feel like something I want to
    depend on being self-evident in a machine.


    On 5/25/2015 3:50 PM, David Ray wrote:
    If I may take this conversation in yet another direction.

    I think we've all been dancing around the question of what underlies
    the generation of morality: how will an AI derive its sense of
    ethics? Of course, initially there will be parameters that are
    programmed in, but eventually those will be circumvented.

    There has actually been a lot of research into this. It's not
    common knowledge, but it is knowledge developed through the
    observation of millions of people.

    The universe and all beings along the gradient of sentience
    observe (albeit perhaps unconsciously) a sense of what I will
    call integrity, or "wholeness". We'd like to think that mankind
    steered itself through the ages toward notions of gentility and
    societal sophistication, but it didn't really. The idea that a
    group, or different groups, devised a grand plan to have it turn
    out this way is totally preposterous.

    What is more likely is that there is a natural order to things,
    and that it is motion toward what works for the whole. I can't
    prove any of this, but internally we all know when it's missing,
    or when we are not in alignment with it. This ineffable sense is
    what love is: concern for the whole.

    So I say that any truly intelligent being, by virtue of existing
    in a substrate of integrity, will have this built in, and a
    superintelligent being will understand it: ultimately, the best
    chance for any single instance to survive is for the whole to
    survive.

    Yes, I know people will immediately want to cite all the
    aberrations, and of course there are aberrations, just as there
    are mutations. But those aberrations are reactions to how a
    person is shown love during their development.

    Like I said, I can't prove any of this, but eventually it will
    bear itself out.

    You can be skeptical if you want, but ask yourself some
    questions. Why is it that we all know when it's missing
    (fairness, justice, integrity)? Why is it that we develop open
    source software and free software? Why is it that, despite our
    greed and insecurity, society moves toward freedom and equality
    for everyone?

    One more question: why is it that the most advanced philosophical
    traditions hold that where we are located, as a phenomenological
    event, is not in separate bodies?

    I know this kind of talk doesn't go over well in this crowd of
    concrete thinkers, but I know that there is some science somewhere
    that backs this up.

    Sent from my iPhone

    On May 25, 2015, at 2:12 PM, vlab <[email protected]> wrote:

    Small point: even if they did decide that our diverse
    intelligence is worth keeping around (having not already mapped
    it into silicon), why would they need all of us? Surely 10% of
    the population would give them enough 'sample size' to get their
    diversity ration; heck, maybe 1/10 of 1% would be enough. They
    may find that we are laying waste to the planet (oh, not maybe,
    we are) and that the planet would be more efficient, and they
    could have more energy, without most of us (unless we become
    'copper tops' as in the Matrix movie).
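
    (For scale, taking the 2015 world population as ~7.3 billion: 10%
    is ~730 million people, and 1/10 of 1% is still ~7.3 million.)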

    On 5/25/2015 2:40 PM, Fergal Byrne wrote:
    Matthew,

    You touch upon the right point. An intelligence which can
    self-improve could only come about by having an appreciation
    for intelligence, so it's not going to be interested in
    destroying diverse sources of intelligence. We represent a crap
    kind of intelligence to such an AI in a certain sense, but one
    which it would rather communicate with than condemn its
    offspring to have to live as. If these things appear (which
    looks inevitable) and then they kill us, many of them will look
    back at us as a kind of "lost civilisation" which they'll
    struggle to reconstruct.

    The nice thing is that they'll always be able to rebuild us
    from the human genome. It's just a file of numbers after all.
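
    (To put rough numbers on "a file of numbers" - a back-of-the-envelope
    sketch in Python, assuming ~3.2 billion base pairs at 2 bits per base:

        # Rough storage size of one human genome as raw data.
        base_pairs = 3200000000    # approximate haploid genome length
        bits_per_base = 2          # A, C, G, T -> 2 bits each
        raw_bytes = base_pairs * bits_per_base // 8
        print("~%d MB raw" % (raw_bytes // 10**6))    # ~800 MB
        # Stored as plain one-character-per-base text it's ~3.2 GB;
        # either way, it fits on a cheap USB stick.

    Small, as files go.)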

    So, we have these huge threats to humanity. The AGI future is
    the only reversible one.

    Regards
    Fergal Byrne

    --

    Fergal Byrne, Brenter IT

    Author, Real Machine Intelligence with Clortex and NuPIC
    https://leanpub.com/realsmartmachines

    Speaking on Clortex and HTM/CLA at euroClojure Krakow, June
    2014: http://euroclojure.com/2014/
    and at LambdaJam Chicago, July 2014: http://www.lambdajam.com

    http://inbits.com - Better Living through Thoughtful Technology
    http://ie.linkedin.com/in/fergbyrne/ -
    https://github.com/fergalbyrne

    e: [email protected]  t: +353 83 4214179
    Join the quest for Machine Intelligence at http://numenta.org
    Formerly of Adnet [email protected]
    http://www.adnet.ie


    On Mon, May 25, 2015 at 7:27 PM, Matthew Lohbihler
    <[email protected]> wrote:

        I think Jeff underplays a couple of points, the main one
        being the speed at which an AGI can learn. Yes, there is a
        natural limit to how much experimentation in the real world
        can be done in a given amount of time. But we humans are
        already going beyond this with, for example, protein
        folding simulations, which speed up the discovery of new
        drugs and such by many orders of magnitude. Any
        sufficiently detailed simulation could massively narrow
        down the amount of real-world verification necessary, such
        that new discoveries happen more and more quickly, possibly
        at some point faster than we can even tell the AGI is
        making them. An intelligence explosion is not a remote
        possibility. The major risk here is what Eliezer Yudkowsky
        pointed out: not that the AGI is evil or something, but
        that it is indifferent to humanity. No one yet goes out of
        their way to make any form of AI care about us (because we
        don't yet know how). What if an AI created self-replicating
        nanobots just to prove a hypothesis?
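
        (A toy model of that compounding - purely illustrative, with
        made-up growth and simulation factors, not estimates of
        anything:

            # Toy "intelligence explosion" dynamics: capability compounds
            # while simulation shrinks the real-world verification time
            # each discovery cycle needs.
            capability = 1.0      # arbitrary starting capability
            cycle_days = 100.0    # wall-clock days per discovery cycle
            for cycle in range(1, 11):
                capability *= 1.5   # assumed per-cycle self-improvement
                cycle_days *= 0.7   # assumed savings from better simulation
                print("cycle %2d: capability %6.1f, %5.1f days/cycle"
                      % (cycle, capability, cycle_days))
            # After ten cycles capability is up ~58x while a cycle takes
            # ~3 days instead of 100. Compounding on both axes is what
            # turns steady progress into an "explosion".

        The numbers mean nothing; the shape - bigger gains arriving at
        shorter intervals - is the argument.)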

        I think Nick Bostrom's book is what got Stephen, Elon, and
        Bill all upset. I have to say it starts out merely
        interesting, but gets to a dark place pretty quickly. But
        he goes too far in the other direction, readily accepting
        that superintelligences have all manner of cognitive
        skill, yet supposing they can't fathom how humans might
        not like the idea of having our brains' pleasure centers
        constantly poked, turning us all into smiling idiots (as I
        mentioned here:
        http://blog.serotoninsoftware.com/so-smart-its-stupid).



        On 5/25/2015 2:01 PM, Fergal Byrne wrote:
        Just one last idea on this. One thing that crops up every
        now and again in the Culture novels is the response of the
        Culture to Swarms, which are self-replicating viral
        machines or organisms. Once these things start consuming
        everything else, the AIs (mainly Ships and Hubs) respond
        by treating the swarms as a threat to the diversity of
        their Culture. They first try to negotiate; if they can
        contain them, they'll do that; failing that, they'll
        eradicate them.

        They do this even though they can themselves withdraw from
        real spacetime. They don't have to worry about their own
        survival. They do this simply because life is more
        interesting when it includes all the rest of us.

        Regards

        Fergal Byrne



        On Mon, May 25, 2015 at 5:04 PM, cogmission (David Ray)
        <[email protected]> wrote:

            This was someone's response to Jeff's interview (see
            here:
            https://www.facebook.com/fareedzakaria/posts/10152703985901330)


            Please read and comment if you feel the need...

            Cheers,
            David

            -- With kind regards,
            David Ray
            Java Solutions Architect
            Cortical.io <http://cortical.io/>
            Sponsor of: HTM.java <https://github.com/numenta/htm.java>
            [email protected]
            http://cortical.io







