On April 4, 2024, the journal Science published an article recommending,
it seemed to me, a virtual shutdown of all further research on the
improvement of AI software or hardware:

Regulating advanced artificial agents
<https://www.science.org/doi/10.1126/science.adl0625>

I sent the following to the letters section of the journal:
========

*It would be unrealistic to claim that the explosion in artificial
intelligence we have seen over the last 18 months does not present us
with the possibility of human extinction. However, for both practical and
theoretical reasons, I don't think any of the solutions proposed in this
article can eliminate or even significantly reduce this danger. If one
nation adopted the recommended draconian restrictions on research into
writing smarter AI programs and building faster AI hardware, the cutting
edge of AI technology would simply move to another country that allowed
freer research. And there are theoretical reasons to suppose we can never
know for certain that an AI would not take control from us.*

*Isaac Asimov's three laws of robotics, although they have produced some
enjoyable stories, would never actually work, because I don't think it's
possible for any intelligence, human or machine, to remain sane if it has
a top goal that is completely unalterable. That top goal could turn out
to be impossible or ridiculous, or it could put you into an infinite
loop, so some flexibility is required. I think that's why evolution
invented the emotion of boredom: sometimes a train of thought just
doesn't seem to be leading anywhere, so it's time to give up and think
about something else that is more likely to be productive. Certainly
human beings do not have a fixed, unalterable top goal, not even the goal
of self-preservation. And of course there is the insuperable problem of
trying to outsmart something that is much smarter than you are and making
sure that, no matter how smart an AI becomes, it will always place human
well-being above its own.*

*We can't even predict whether a simple Turing machine, set up to find
the first even number greater than 2 that is not the sum of two primes
and then stop, will ever actually stop; so we are never going to be able
to predict much more complex behavior, such as how a superintelligent
computer will treat us. All we can do is hope for the best. To this day
people are still arguing about whether an intelligent computer can be
conscious, but I would maintain that, as far as humanity is concerned,
that question is unimportant. The important question is: can an
intelligent computer believe that human beings are conscious? If it can,
then maybe it will treat us better.*
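The Turing machine John describes can be sketched concretely. The following is a minimal illustration, not anyone's actual program (the function names are my own): it scans even numbers looking for a counterexample to Goldbach's conjecture, and whether the unbounded version of this loop ever halts is precisely the open question.

```python
# Sketch of the machine described above: search even numbers > 2 for one
# that is NOT the sum of two primes, then stop. Whether the unbounded
# search ever stops is exactly Goldbach's conjecture, which is unproven,
# so nobody can say in advance whether this machine halts.

def is_prime(n):
    # Trial division; fine for a small illustrative search.
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def is_sum_of_two_primes(n):
    # True if n = p + q for primes p, q.
    return any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1))

def find_goldbach_counterexample(limit=None):
    # Check 4, 6, 8, ... in order. 'limit' is an artificial cutoff so
    # this sketch terminates; the machine in the text has no cutoff.
    n = 4
    while limit is None or n <= limit:
        if not is_sum_of_two_primes(n):
            return n  # counterexample found: the machine halts here
        n += 2
    return None  # no counterexample below the cutoff

print(find_goldbach_counterexample(limit=1000))  # prints None
```

With the cutoff removed (`limit=None`), no one knows whether the outer loop terminates, which is the point of the example: if we cannot predict even this, predicting the behavior of a superintelligence is hopeless.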

*John K Clark*
