In his recent "Provably Beneficial AI" talk at the 2017 Asilomar conference:

  - https://www.youtube.com/watch?v=pARXQnX6QS8

...Stuart Russell argues that more paranoia would have helped
the nuclear industry - by making it more risk-conscious and
helping it to avoid Chernobyl. By analogy, more paranoia
would help us avoid disasters involving machine intelligence.

IMO, cases where more paranoia would help do exist.
Crossing roads, getting into motor vehicles and eating
sugar are all areas where more caution would seem prudent.

These are the exceptions, though. We now live in a relatively
safe environment, but our brains and emotions evolved in one
where predators lurked around every water hole. As a result,
most humans are dysfunctionally paranoid. This
has been well documented by Dan Gardner in the book
"Risk: Why We Fear the Things We Shouldn't - and Put
Ourselves in Greater Danger".

Irrational fear of vaccines has killed a large number
of people. Irrational fear of GMOs causes large scale
problems in the distribution of food. Irrational fear of
carbon dioxide has led to spending of some $1.5 trillion
per year on getting rid of the stuff.

Stuart Russell's own example counts against his thesis:
irrational fear of nuclear power is what has prevented its
deployment - causing many more deaths in coal mines as a
direct consequence. In fact, nuclear power is - and always
has been - a very safe energy-producing technology.

More caution does not typically lead to better outcomes. More
caution systematically and repeatedly leads to worse outcomes.
Humans are typically too paranoid for their own good. This is
the basic problem with fear-mongering and promoting risks:
the net effect on society is negative.

I don't have to go on about this too much because
Max More has done my work for me:

If you haven't done so before, go and read:

http://www.maxmore.com/perils.htm

Machine intelligence is having its own bout with the precautionary
principle at the moment - in the case of self-driving cars, trucks,
boats, trains and planes. We look set to cause a large number
of pointless deaths by throttling these technologies using the
precautionary principle. Let's count the number who lose their
lives during this delay - so that the costs of failing to deploy
intelligent machines are made very clear.

--
__________
 |im Tyler http://timtyler.org/ [email protected]



-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now