Gary f: "I think it’s quite plausible that AI systems could reach that
level of autonomy and leave us behind in terms of intelligence, but what
would motivate them to kill us? I don’t think the Terminator scenario, or
that of HAL in *2001,* is any more realistic than, for example, the
scenario of the Spike Jonze film *Her*."

Gary, We live in a world gone mad with unbounded technological systems
destroying life on Earth, and you want to parse the particulars of
whether "a machine" can be destructive? Isn't it blatantly obvious?
     And as John put it: "If no such goal is programmed in an AI system, it
just wanders aimlessly." Unless "some human(s) programmed that goal [of
destruction] into it."
     Though I admire your expertise on AI, these views seem to me
blindingly limited understandings of what a machine is, putting an
artificial divide between the machine and the human rather than seeing the
machine as continuous with the human. Or rather, the machine as continuous
with the automatic portion of what it means to be a human.
     Lewis Mumford pointed out that the first great megamachine was the
advent of civilization itself, and that the ancient megamachine of
civilization involved mostly human parts, specifically the bureaucracy, the
military, the legitimizing priesthood. It performed unprecedented amounts
of work and manifested not only an enormous magnification of power, but
literally the deification of power.
     The modern megamachine introduced a new system directive, to replace
as many of the human parts as possible, ultimately replacing all of them:
the perfection of the rationalization of life. This is, of course, rational
madness, our interesting variation on ancient Greek divine madness. The
Greeks saw how a greater wisdom could flood over the psyche, creatively or
destructively. Rational Pentheus discovered the cost of ignoring the
greater organic wisdom, ecstatic and spontaneous, that is also involved in
reasonableness, when he sought to imprison it in the form of Dionysus: he
literally lost his head!
     We live the opposite of divine madness in our rational madness:
living from a lesser projection of the rational-mechanical portions of
reasonableness, extrapolated to godly dimensions: deus ex machina, our
savior!
     This projection of the newest and least matured portions of our
brains, the rationalizing cortex, cut free from the passions and the
traditions that provided bindings and boundings, has come to lord it over
the world. It does not wander aimlessly, this infantile tyrant. It projects
its dogmas into science, technology, economy, and everyday habits of mind
(yes, John, there is no place for dogma in science, but that does not
prevent scientists from being dogmatic, or from thinking from the
unexamined dogmas of nominalism, or from the dogmas of the megamachine).
     The children and young adults endlessly pushing the buttons of the
devices that confine them to their screens are elements of the megamachine,
happily being further "programmed" to machine ways of living. Ditto many
(thankfully, not all) of the dominant views in science and technology, and,
of course, also in anti-scientific views, which are constructing, with the
greatest speed and a religious-like passion, our unsustainable, dying world,
scientifically informed sustainability alternatives notwithstanding.
Perfection awaits us.
     What "would motivate them to kill us?"
     Rationally-mechanically infantilized us.

Gene Halton

"There is a wisdom that is woe; but there is a woe that is madness."


On Jun 15, 2017 11:42 AM, "John F Sowa" <s...@bestweb.net> wrote:

> On 6/15/2017 9:58 AM, g...@gnusystems.ca wrote:
>
>> To me, an intelligent system must have an internal guidance system
>> semiotically coupled with its external world, and must have some degree of
>> autonomy in its interactions with other systems.
>>
>
> That definition is compatible with Peirce's comment that the search
> for "the first nondegenerate Thirdness" is a more precise goal than
> the search for the origin of life.
>
> Note the comment by the biologist Lynn Margulis:  a bacterium swimming
> upstream in a glucose gradient exhibits intentionality.  In the article
> "Gaia is a tough bitch", she said “The growth, reproduction, and
> communication of these moving, alliance-forming bacteria” lie on
> a continuum “with our thought, with our happiness, our sensitivities
> and stimulations.”
>
>> I think it’s quite plausible that AI systems could reach that level
>> of autonomy and leave us behind in terms of intelligence, but what
>> would motivate them to kill us?
>>
>
> Yes.  The only intentionality in today's AI systems is explicitly
> programmed in them -- for example, Google's goal of finding documents
> or the goal of a chess program to win a game.  If no such goal is
> programmed in an AI system, it just wanders aimlessly.
>
> The most likely reason why any AI system would have the goal to kill
> anything is that some human(s) programmed that goal into it.
>
> John
>
>
> -----------------------------
> PEIRCE-L subscribers: Click on "Reply List" or "Reply All" to REPLY ON
> PEIRCE-L to this message. PEIRCE-L posts should go to
> peirce-L@list.iupui.edu . To UNSUBSCRIBE, send a message not to PEIRCE-L
> but to l...@list.iupui.edu with the line "UNSubscribe PEIRCE-L" in the
> BODY of the message. More at http://www.cspeirce.com/peirce-l/peirce-l.htm
> .