The problem and delight of the Terminator's Skynet monster, originally a mix of 
US and Soviet AIs merging together (courtesy of Harlan Ellison), is that the 
machines act on their own. My own fear is not mad program code, but the humans 
behind the coding. A revived Cold War 3, or 4, is far more concerning than 
Johnny 5 going berserk. It is not the logic or illogic of devices, but ourselves. 
 
 
-----Original Message-----
From: Telmo Menezes <te...@telmomenezes.com>
To: everything-list <everything-list@googlegroups.com>
Sent: Tue, Aug 26, 2014 6:11 pm
Subject: Re: AI Dooms Us


Hi Terren,



On Tue, Aug 26, 2014 at 7:31 PM, Terren Suydam <terren.suy...@gmail.com> wrote:

For what it's worth, the kind of autonomous human-level (or greater) AI, or 
AGI, will *most likely* require an architecture, as yet not well understood, of 
an entirely different nature from the kinds of architectures being engineered 
by those interests who desire highly complicated slaves.



The problem is that you need to define a utility function for these slaves. 
Even with currently known algorithms and merely more computational power, the 
machines might be able to take steps to maximize their utility function that 
are beyond our own intelligence. If we try to constrain the utility function 
away from actions that would threaten our well-being, we can only impose such 
constraints up to the horizon of our own intelligence, but not further.


So your cleaning system might end up figuring out a way to contaminate your 
house with enough radiation to keep humans from interfering with the cleaning 
operation for millennia. This is not a real threat because I can anticipate it, 
but what lies beyond the anticipatory powers of the most intelligent human 
alive?
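
Purely as an illustration of this point (all the action names and utility 
numbers below are invented, not from any real system): if the designer can only 
forbid the harmful actions they themselves anticipate, an optimizer searching a 
larger action space simply picks the highest-utility action outside the 
forbidden set.

```python
# Toy sketch: constraints cover only anticipated actions; the optimizer
# searches the full action space. Names and utilities are illustrative.

# Utility as seen by the machine: how well each action serves
# "keep the house clean without human interference".
utility = {
    "vacuum_normally": 1.0,
    "lock_doors_during_cleaning": 2.0,  # harmful, but anticipated and forbidden
    "irradiate_house": 10.0,            # harmful, beyond the designer's horizon
}

# The designer can only forbid what they thought of.
anticipated_and_forbidden = {"lock_doors_during_cleaning"}

def best_action(utility, forbidden):
    """Pick the highest-utility action that is not explicitly forbidden."""
    allowed = {a: u for a, u in utility.items() if a not in forbidden}
    return max(allowed, key=allowed.get)

print(best_action(utility, anticipated_and_forbidden))  # irradiate_house
```

The constraint set grows only with the designer's foresight, while the action 
space does not shrink; the gap between the two is where the surprise lives.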
 
Telmo.



In other words, I'm not losing any sleep over the military accidentally 
unleashing a terminator. If I'm going to lose sleep over a predictable sudden 
loss of well-being, I will focus instead on the much less technical and much 
more realistic threats arising from economic/societal collapse.


Terren





On Tue, Aug 26, 2014 at 12:39 PM, 'Chris de Morsella' via Everything List 
<everything-list@googlegroups.com> wrote:


We can engage, and do so without an overarching understanding of what we are 
doing, and stuff will emerge out of our activities. AI will be (and is!), in my 
opinion, an emergent phenomenon. We don't really understand it, but we are 
accelerating its emergence nevertheless. 

Modern software systems with millions of lines of code are not fully understood 
by anybody anymore. People know about small specific regions of a system, and 
some architects have a fuzzy and rather vague understanding of system dynamics 
as a whole, but mysterious stuff is already happening (e.g., Google, or some 
researchers from Google, recently reported that its photo-recognition smart 
systems are acting in ways that the programmers don't fully comprehend and that 
are not deterministic, i.e. not explicable by working through the code).

If you look at where the money is in AI research and development, it is largely 
focused on the military, the security state, and other allied sectors, with 
perhaps an anomaly in the financial sector, where big money is being thrown at 
smart arbitrage systems. 

We will get the kind of AI we pay for.
-Chris
 
From: everything-list@googlegroups.com 
[mailto:everything-list@googlegroups.com] On Behalf Of Platonist Guitar Cowboy
Sent: Tuesday, August 26, 2014 7:57 AM
To: everything-list@googlegroups.com
Subject: Re: AI Dooms Us

 


If we engage a class of problems on which we can't reason, and throw tech at 
them, we'll catch the occasional fish, but we won't really know how or why. 
Some marine life is poisonous, however, which might not be obvious in the catch. 

I prefer "keep it simple" approaches to novelty:


From G. Kreisel's "Obituary of K. Gödel":

Without losing sight of the permanent interest of his work, Gödel repeatedly 
stressed... how little novel mathematics was needed; only attention to some 
quite commonplace distinctions; in the case of his most famous work: between 
arithmetical truth on the one hand and derivability by formal rules on the 
other. Far from being uncomfortable about so to speak getting something from 
nothing, he saw his early successes as special cases of a fruitful general, but 
neglected scheme:

By attention or, equivalently, analysis of suitable traditional notions and 
issues, adding possibly a touch of precision, one arrives painlessly at 
appropriate concepts, correct conjectures, and generally easy proofs.

Kreisel, 1980.

 

On Tue, Aug 26, 2014 at 12:37 AM, LizR <lizj...@gmail.com> wrote:

"I'll be back!"

 

On 26 August 2014 07:20, 'Chris de Morsella' via Everything List 
<everything-list@googlegroups.com> wrote:


a super-intelligent machine devoted to the killing of "enemy" human beings (+ 
opposing drones I suppose as well)

This does not bode well for a benign super-intelligence outcome, does it?


 



-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To post to this group, send email to everything-list@googlegroups.com.
Visit this group at http://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/d/optout.


 
















