Jason,
I think there might be two ways of interpreting this, each with
different answers.
The first question: Does AI create new threats that never existed
before?
No more than a poorly educated kid does, especially one with guns and
bombs.
I think the answer is most definitely yes. Some examples:
- Large-scale unemployment/disempowerment of people who cannot
compete with increasing machine intelligence
The threat here lies more in human stupidity, in forgetting that money
is a tool for working less, not more. You are right that this is
frightening, but no more than current health policies (prohibition).
- Algorithms that identify and wipe out dissent / control opposition
But any technology can help with that. With the satellite system, we
can already locate opposition through their phones. Basically every
technology, from fire to the media, can be misused by humans.
- New and terrifying weapons (e.g. https://www.youtube.com/watch?v=HipTO_7mUOw)
- More infrastructure and systems that can be hacked or introduce
defects (air traffic control systems, self-driving cars, etc.)
Sure, the acceleration of information machines *is* full of possible
risks.
The second question: Will super intelligence ultimately decide to
eliminate us (as meaningless, redundant, to make room for more
computation, etc.)?
Not before we download ourselves into machines, and there will be
hybrids. Then the whole point is to succeed in transmitting our values
to our kids, digital or not.
As I said, I would say that the universal machine is born intelligent,
and can only evolve by keeping that intelligence intact or by
diminishing it, as nature does with the "adult state", which is
basically when intelligence stops and people believe they have the
answers to the fundamental questions.
There is nothing new, but things accelerate. On a geological
timescale, the "machine/word/number development" is an explosion. A
reason to make our values clear, and to take seriously education and
research, but also the arts, etc.
I think we will come back to bacteria, modern ones with GPS and
connected to all the others. We will be virtual beings simulated by
colonies of bacteria, a special kind that adapts itself well to many
planets and transforms itself into a quantum net when "frozen" below
-40 degrees Celsius. We will expand forever, in most normal futures,
even if that means changing multi-multi-... verses.
This question is more interesting. I tend to fall into the camp that
holds we exercise little control over the ultimate decision made by
such a super intelligence, but I am optimistic that a super
intelligence will, during the course of its ascension, discover and
formalize a system of ethics, and that this may lead it to decide not
to wipe out other life forms. For example, it might discover the same
ideas expressed here ( https://www.researchgate.net/profile/Arnold_Zuboff/publication/233329805_One_Self_The_Logic_of_Experience/links/54adcdb60cf2213c5fe419ec/One-Self-The-Logic-of-Experience.pdf
) and therefore determine that something like the golden rule is
rationally justified.
I will take a look. I read him in The Mind's I. What is the golden rule?
Best,
Bruno
Jason
On Mon, Nov 27, 2017 at 3:32 PM, <agrayson2...@gmail.com> wrote:
IIRC, this is the view of Hawking and Musk.
http://iridia.ulb.ac.be/~marchal/