Hi Bruno,

Bruno Marchal wrote:
> 
> We just cannot do artificial intelligence in a provable manner. We  
> need chance, or luck. Even if we get some intelligent machine, we will  
> not know-it-for sure (perhaps just believe it correctly).
But that is a rather weak statement, isn't it? It only rules out a mechanical
way of making an AI, or of making a provably friendly AI (as Eliezer Yudkowsky
wants to do).

We can prove very little about what we do or "know" anyway. We can't prove
the validity of science, for example.

Nor does it mean that there is no developmental process that would let us
create ever more powerful heuristics for finding better AIs faster, in a
fairly predictable way (predictable not in *what kind* of AI we build, just
*that* we will build a powerful one), right?
-- 
View this message in context: 
http://old.nabble.com/Mathematical-closure-of-consciousness-and-computation-tp31771136p31854285.html
Sent from the Everything List mailing list archive at Nabble.com.

