On Monday, November 27, 2017 at 4:09:36 PM UTC-6, Jason wrote:
>
> I think there might be two ways of interpreting this, each with different 
> answers.
>
> The first question: Does AI create more threats that never existed before?
>
> I think the answer is most definitely yes. Some examples:
> - Large scale unemployment/disempowerment of people who cannot compete 
> with increasing machine intelligence
> - Algorithms that identify and wipe out dissent / control opposition
> - New and terrifying weapons (e.g. 
> https://www.youtube.com/watch?v=HipTO_7mUOw )
> - More infrastructure and systems that can be hacked or introduce defects 
> (air traffic control systems, self-driving cars, etc.)
>

There is a website, http://autonomousweapons.org/, where you can sign in 
support of a ban on these weapons. The slaughterbots seem almost 
inevitable, and I suspect the best we can do is delay their deployment. 
There might, though, be a fashion line of clothing that protects you 
against this:

<https://lh3.googleusercontent.com/-o4xmauFPcZ8/Whygqb7PO7I/AAAAAAAADHs/YUY_rsukpW8FgcrKtM1gGC9LZ3BFKIDgwCLcBGAs/s1600/starwars%2Bstormtrooper.jpg>
Or, if you want to go retro, you could try this:

<https://lh3.googleusercontent.com/-gVEAPVMBoCU/Whyhb_rZOLI/AAAAAAAADH0/GTIVt-r-SJURQs0VhwAnJQRaWDB7uT0tgCLcBGAs/s1600/medieval%2Barmour.jpg>
I foresee a huge market for protectobots. We would carry packs that 
release anti-slaughterbots whenever slaughterbots appear. These would act 
as a sort of miniature THAAD or Patriot anti-drone system, taking them 
out before they take you out.

To me the biggest issue is whether AI systems become increasingly 
interlinked with the human brain. I would not be at all surprised if the 
major nodes on the internet end up being not computers but cyborg-linked 
brains. We humans already seem to be growing into our devices; the 
picture below says it all. We are well along this path already.

LC

<https://lh3.googleusercontent.com/-6al_a2jsA8s/WhyjTCKpY_I/AAAAAAAADIA/Cu9w42Ysv4QSJu64EqorfFxfV1Oea3k5wCLcBGAs/s1600/smartphone%2Bzombie.jpg>



> The second question: Will super intelligence ultimately decide to 
> eliminate us (as meaningless, redundant, to make room for more computation, 
> etc.)?
>
> This question is more interesting. I tend to fall in the camp that we 
> exercise little control over the ultimate decision made by such a super 
> intelligence, but I am optimistic that a super intelligence will, during 
> the course of its ascension, discover and formalize a system of ethics, and 
> this may lead to it deciding not to wipe out other life forms.  For 
> example, it might discover the same ideas expressed here ( 
> https://www.researchgate.net/profile/Arnold_Zuboff/publication/233329805_One_Self_The_Logic_of_Experience/links/54adcdb60cf2213c5fe419ec/One-Self-The-Logic-of-Experience.pdf
>  
> ) and therefore determine something like the golden rule is rationally 
> justified.
>
> Jason
>
> On Mon, Nov 27, 2017 at 3:32 PM, <agrays...@gmail.com> wrote:
>
>> IIRC, this is the view of Hawking and Musk.
