Albert Medina wrote:
Dear Sir,
Pardon me for intruding. As you said, the divergent viewpoints on AI, AGI, SYNBIO, and NANO are all over the map, and the future is looking more like an uncontrolled "experiment".

I believe it is not an "uncontrolled" experiment, because most of the divergent viewpoints are a result of confusion, and they will eventually converge on a more unified point of view .... and this will happen long before any "experiments" actually happen. Don't forget: there are no artificial intelligences on this planet at the moment, and (IMO) none that are close to realization.

About your points below.

I do not mind if people speculate about the more esoteric aspects of "consciousness", the "soul", and so on, but I distinguish between what we can know today and what must be left to future spiritual thought to decide. What I believe we can know NOW is that if we create the fabric for a mind (in a computer), then this mind will be conscious. As far as I am concerned, that much is not negotiable, and is completely separate from any issues about survival of minds, souls, etc.

Anything beyond that is for future speculation or investigation.

I prefer not to engage in any speculations about spiritual matters: that is for people to resolve in their own private relationship with the universe. I would like to decline any further invitations to talk about such matters, if you do not mind.

So I do not contradict you; I only say that I have no position on any of those other issues, because I believe that anything is possible beyond the basic facts about what subjective consciousness [note well: not other meanings of consciousness, but only the core philosophical issue of subjective consciousness] is and where it comes from.


Richard Loosemore


I would like to posit a supplementary viewpoint for you to contemplate, one that may support your assumptions listed here, but in a different way: Consciousness is not an "outcropping" of the mind; it did not emerge from a mind. Mind is matter..."from dust to dust"...and it returns to its constituent elements when consciousness departs its encasement in the mind. IT IS CONSCIOUSNESS THAT ENLIVENS THE MIND WITH ENERGY, not vice-versa. The mind is simply an instrument utilized BY THE INDWELLING CONSCIOUSNESS. All attempts to understand the world we live in, the noble efforts to reform/refashion and "improve" it, are the result of the indwelling Consciousness not having realized Itself...thus, it perforce must exit through the sensory-intellectual apparatus (mind/senses) to the outside world, in a continuous attempt to gain knowledge of itself. "Looking for love in all the wrong places". I propose to you that Consciousness (encased within the brain) does not know Itself, hence the lively quest and fascination for "other" intelligence, such as AGI.

Sincerely, Albert

Richard Loosemore <[EMAIL PROTECTED]> wrote:

    [EMAIL PROTECTED] wrote:
     >
     > Hello Richard,
     >
     > If it's not too lengthy and unwieldy to answer, or give a general
     > sense as to why you and various researchers think so...
     >
     > Why is it that in the same e-mail you can so confidently state that
     > "ego" or a sense of selfhood is not something the naive observer
     > should expect to just emerge naturally as a consequence of
     > succeeding in building an AGI (and whose qualities, such as
     > altruism, will have to be specifically designed in), while you just
     > as confidently state that consciousness itself will merely arise
     > "for free" as an undesigned emergent gift of building an AGI?
     >
     > I'm really curious about researchers' thinking on this and similar
     > points. It seems to lie at the core of what is so socially
     > controversial about singularity-seeking in the first place.
     >
     > Thanks,
     >
     > ~Robert S.

    First, bear in mind that opinions are all over the map, so what I say
    here is one point of view, not everyone's.

    First, about consciousness.

    The full story is a long one, but I will try to cut to the part that is
    relevant to your question.

    Consciousness itself, I believe, is something that arises because of
    certain aspects of how the mind represents the world, and how it uses
    those mechanisms to represent what is going on inside itself. There is
    not really one thing that is "consciousness", of course (people use that
    word to designate many different things), but the most elusive aspects
    are the result of strange things happening in these representation
    mechanisms.

    The thing that actually gives rise to the thing we might call pure
    "subjective consciousness" (including qualia, etc) is a weirdness that
    happens when the system "bottoms out" during an attempt to unpack the
    meaning of things: normally, the mind can take any concept and ask
    itself "What *is* this thing?", and come up with a meaningful answer
    that involves more primitive concepts. Ask this of the concept [chair]
    and you might get a bunch of other concepts involving legs, a seat, a
    back, the act of human sitting, and so on. But when this same analysis
    mechanism is applied to certain concepts that are at the root of the
    mind's representation system, something peculiar happens: the system
    sets up a new temporary concept (a placeholder) ready to take the
    answer, but then it fails to actually attach anything to it. So when it
    asks itself "What is the essence of redness?" the answer is that it is
    "....", and nothing happens. Or rather something *more* than nothing
    happens, because the placeholder concept is set up, and then nothing is
    attached to it. The mind thinks "There is *something* it is like to be
    the essence of redness, but it is mysterious and indescribable".
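
    If it helps, here is a toy sketch, in Python, of the kind of thing I
    mean. Everything in it (the concept graph, the unpack function, all the
    names) is invented purely for illustration; it is not a model of how a
    real mind stores concepts:

        # Toy illustration: most concepts unpack into more primitive
        # concepts, but the roots of the representation unpack into nothing.
        CONCEPT_GRAPH = {
            "chair": ["legs", "seat", "back", "act-of-sitting"],
            "legs": ["rigid-support", "elongated-shape"],
            "redness": [],  # a root concept: nothing more primitive below it
        }

        def unpack(concept):
            # Ask "what *is* this thing?": set up a new temporary
            # placeholder concept, ready to take the answer...
            placeholder = {"answer-to": concept, "contents": []}
            # ...then attach whatever more primitive concepts there are.
            placeholder["contents"] = list(CONCEPT_GRAPH.get(concept, []))
            return placeholder

        print(unpack("chair"))
        # -> the placeholder fills up with legs, seat, back, and so on

        print(unpack("redness"))
        # -> the placeholder is set up, but nothing attaches to it; the
        #    mind is left with "*something* it is like, but indescribable"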

    Now, you might want to quickly jump to the conclusion that what I am
    saying here is that "consciousness" is an artifact of the way minds
    represent the world.

    This is a very important point: I am not aligning myself with those who
    dismiss consciousness as just an artifact (or an epiphenomenon). In a
    sense, what I have said above does look like a dismissal of
    consciousness, but there is a second step in the argument.

    In this second step I point out that if you look deeply into what this
    mechanism does, and into the question of how the mind assesses what is
    "real" (which things actually exist and can be analyzed or talked about
    meaningfully), you are forced to a conclusion about our best possible
    ideas of which things in the world "really exist" and which are merely
    artifacts of our minds: most of the time you can make a good separation,
    but there is one specific area where it will always be impossible to
    make a separation. In this one unique area - namely, the thing we call
    "consciousness" - we will always be forced to say that, scientifically,
    the thing we call consciousness is as real as anything else in the
    world, but unlike all other real things, it cannot be analyzed further.
    This is not an expression of "we don't know how to analyze this yet,
    but maybe in the future we will..."; it is a clear statement that
    consciousness is just as real as anything else in the world, but it
    must necessarily be impossible to analyze.

    Now, going back to your question, this means that if we put the same
    kinds of mechanisms into a thinking machine as we have in our minds,
    then it will have "consciousness" just as we do, and it will experience
    the same feeling of mystery about it. We will never be able to
    objectively verify that consciousness is there (just as we cannot do
    this for each other, as humans), but we will be able to say precisely
    why we would expect the system to report its experience, and (most
    importantly) we will be able to give solid reasons for why we cannot
    analyze the nature of consciousness any further.

    But would those mechanisms be present in a machine? This is fairly easy
    to answer: if the machine were able to understand the world as well as
    we do, then it is pretty much inevitable that the same class of
    mechanisms will be there. It is not really the exact mechanisms
    themselves that cause the problem; it is a fundamental issue to do with
    representations, and any sufficiently powerful representation system
    will have to show this effect. No way around it.

    So that is the answer to why I can say that consciousness will emerge
    "for free". We will not deliberately put it in, it will just come along
    if we make the system able to fully understand the world (and we are
    assuming, in this discussion, that the system is able to do that).

    (I described this entire theory of consciousness in a poster that I
    presented at the Tucson conference two years ago, but still have not had
    time to write it up completely. For what it is worth, I got David
    Chalmers to stand in front of the poster and debate the argument with me
    for a short while, and his verdict was that it was an original line of
    argument.)


    The second part of your question was why the "ego" or "self" will, on
    the other hand, not be something that just emerges for free.

    I was speaking a little loosely here, because there are many meanings
    for "ego" and "self", and I was just zeroing in on one aspect that was
    relevant to the original question asked by someone else. What I mean
    here is the stuff that determines how the system behaves: the things
    that drive it to do things, its agenda, desires, motivations, character,
    and so on. (The important question is whether it could be trusted to be
    benign.)

    Here, it is important to understand that the mind really consists of two
    separate parts: the "thinking part" and the motivation/emotional system.
    We know this from our own experience, if we think about it enough: we
    talk about being "overcome by emotion" or "consumed by anger", etc. If
    you go around collecting expressions like this, you will notice that
    people frequently talk about these strong emotions and motivations as if
    they were caused by a separate module inside themselves. This appears to
    be a good intuition: they are indeed (as far as we can tell) the result
    of something distinct.

    So, for example, if you built a system capable of doing lots of thinking
    about the world, but gave it no motivations, it would just randomly muse
    about things in a disjointed (and perhaps autistic) way, never guiding
    itself to do anything in particular.

    To make a system do something organized, you would have to give it goals
    and motivations. These would have to be designed: you could not build a
    "thinking part" and then leave it to come up with motivations of its
    own. This is a common science fiction error: it is always assumed that
    the thinking part would develop its own motivations. Not so: it has to
    have some motivations built into it. What happens when we imagine
    science fiction robots is that we automatically insert the same
    motivation set as is found in human beings, without realising that this
    is a choice, not something that comes as part and parcel, along with
    pure intelligence.
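
    Again, a toy Python sketch (all of it invented for illustration; no
    actual AGI design is implied) of why the thinking part, on its own,
    produces no organized behavior until a separately designed motivation
    system is added:

        import random

        class ThinkingPart:
            # Pure inference: generates and elaborates candidate actions,
            # but attaches no value to any of them.
            def candidate_actions(self):
                return ["muse about chairs", "plan a task", "do nothing"]

        class MotivationSystem:
            # A separate, explicitly designed module that ranks actions.
            # The weights are built in; they do not emerge from thinking.
            def __init__(self, goal_weights):
                self.goal_weights = goal_weights

            def choose(self, actions):
                return max(actions, key=lambda a: self.goal_weights.get(a, 0.0))

        thinker = ThinkingPart()
        actions = thinker.candidate_actions()

        # The thinking part alone: disjointed, random musing.
        print(random.choice(actions))

        # With a designed motivation system: organized, goal-directed behavior.
        motives = MotivationSystem({"plan a task": 1.0})
        print(motives.choose(actions))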

    The $64,000 question then becomes what *kind* of motivations we give it.

    I have discussed that before, and it does not directly bear on your
    question, so I'll stop here. Okay, I'll stop after this paragraph ;-).
    I believe that we will eventually have to get very sophisticated about
    how we design the motivational/emotional system (because this is a very
    primitive aspect of AI at the moment), and that when we do, we will
    realise that it is going to be very much easier to build a simple and
    benign motivational system than to build a malevolent one (because the
    latter will be unstable), and as a result of this the first AGI systems
    will be benevolent. After that, those first systems will supply all the
    other systems, and ensure (peacefully, and with grace) that no systems
    are built that have malevolent motivations. Because of this, I believe
    that we will quickly get onto an "upward spiral" toward a state in which
    it is impossible for these systems to become anything other than
    benevolent. This is extremely counterintuitive, of course, but only
    because 100% of our experience in this world has been with intelligent
    systems that have a particular (and particularly violent) set of
    motivations. We need to explore this question in depth, because it is
    fantastically important for the viability of the singularity idea.
    Alas, at the moment there is no sign of rational discussion of this
    issue, because as soon as the idea is mentioned, people come rushing
    forward with nightmare scenarios and appeals to people's gut instincts
    and raw fears. (And worst of all, the Singularity Institute for
    Artificial Intelligence (SIAI) is dominated by people who have invested
    their egos in a view of the world in which the only way to guarantee the
    safety of AI systems is through their own mathematical proofs.)

    Hope that helps, but please ask questions if it does not.



    Richard Loosemore.
