On Friday, January 17, 2014 5:14:13 AM UTC-6, Bruno Marchal wrote:
>
> To be frank, I don't believe in super-intelligence. I do believe in 
> super-competence, relative to some domain, but as I have explained from 
> time to time, competence has a negative feedback on intelligence.
>
> Intelligence is a state of mind, almost only an attitude. Some animals are 
> intelligent.
>

"Intelligence" is one of those big broad words that can be taken different 
ways.  The MIRI folk are operating under a very specific notion of it.  In 
making an AI, they primarily want to make a machine that follows the 
optimal decision theoretic approach to maximizing its programmed utility 
function, and that continues to follow the same utility function even when 
it's allowed to change its own code.  They don't mean that it has to be 
conscious or self-aware or a person or thoughtful or extraordinarily 
perceptive or able to question its goals or so on.
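
To make that concrete, here's a minimal sketch in Python of what "follows 
the optimal decision-theoretic approach" amounts to: estimate expected 
utility under a world model and pick the argmax action.  This is just an 
illustration, not MIRI's actual formalism; the world model and utility 
function below are made-up placeholders.

    import random

    def expected_utility(action, world_model, utility, n_samples=1000):
        # Estimate E[U(outcome)] by Monte Carlo: sample outcomes of the
        # action from the agent's probabilistic world model.
        return sum(utility(world_model(action))
                   for _ in range(n_samples)) / n_samples

    def choose_action(actions, world_model, utility):
        # Decision-theoretic choice: the action with maximal expected utility.
        return max(actions,
                   key=lambda a: expected_utility(a, world_model, utility))

    # Toy usage: outcomes are noisy, and the utility prefers outcomes near 5.
    world_model = lambda a: a + random.gauss(0.0, 1.0)
    utility = lambda o: -abs(o - 5.0)
    print(choose_action([0, 2, 4, 6, 8], world_model, utility))  # 4 or 6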

Given that approach, there are utility functions that would be totally 
disastrous for humanity, and there may be some that turn out very good for 
humanity.  So the question of "friendliness" is how best to build an AI 
with a utility function that is good for humanity and stays that way even 
as the AI rewrites its own software.
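
And a sketch of the self-modification side, again assumed for illustration 
rather than taken from MIRI: one simple acceptance rule is that the agent 
only swaps in a rewritten decision procedure if, judged by its *current* 
utility function, the successor does at least as well.  The evaluate 
callback here is a hypothetical stand-in for whatever verification the 
agent can actually perform.

    def accept_rewrite(current_policy, proposed_policy, utility, evaluate):
        # The utility function itself is held fixed; only the policy
        # (decision procedure) may change, and only if it scores at
        # least as well by the agent's current lights.
        return (evaluate(proposed_policy, utility)
                >= evaluate(current_policy, utility))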

-Gabe
