On 4/15/2010 1:06 PM, Skeletori wrote:
On Apr 9, 7:39 pm, Jason Resch <jasonre...@gmail.com> wrote:
You would need to design a very general fitness test for measuring
intelligence: for example, the brevity and speed with which it can find
proofs for randomly generated mathematical statements, the accuracy and
efficiency with which it can predict the next element of a given
sequence, the level of compression (shortest description) it can
achieve on well-ordered information, etc.  With this fitness test you
could evolve better intelligences with genetic programming or a genetic
algorithm.
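
Here's a minimal sketch of what fitness-driven evolution like that
might look like. It's a toy: the "genome" is just two weights for a
one-step linear predictor, and the task, names, and parameters are all
invented for illustration (a serious attempt would evolve whole
programs, i.e. genetic programming):

import random

def make_sequence(length=8):
    # A randomly generated test pattern: x[n] = a*x[n-1] + b, small ints.
    a, b = random.randint(1, 3), random.randint(0, 3)
    seq = [random.randint(0, 5)]
    for _ in range(length - 1):
        seq.append(a * seq[-1] + b)
    return seq

def fitness(genome, trials=20):
    # Higher is better: negated mean error at predicting the next
    # element of freshly generated sequences.
    w1, w0 = genome  # candidate predictor: guess = w1 * last_element + w0
    err = 0.0
    for _ in range(trials):
        seq = make_sequence()
        err += abs((w1 * seq[-2] + w0) - seq[-1])
    return -err / trials

def evolve(pop_size=50, generations=100):
    pop = [(random.uniform(-2, 4), random.uniform(-2, 4))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]
        # Refill the population with mutated copies of the survivors.
        pop = survivors + [
            (w1 + random.gauss(0, 0.3), w0 + random.gauss(0, 0.3))
            for w1, w0 in random.choices(survivors,
                                         k=pop_size - len(survivors))]
    return max(pop, key=fitness)

print(evolve())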
Those tests are good components of a general AI... but it still feels
like building a fully independent agent would involve a lot of
engineering. If we want to achieve an intelligence explosion, or a
technological singularity, we need some way of expressing that goal to
the AI. ISTM it would take
a lot of prior knowledge.

If the agent was embodied in an actual robot, it would need to be able
to reason about humans. A simple goal like "stay alive" won't do
because it might decide to turn humans into biofuel. On the other
hand, if the agent was put in a virtual world things would be easier
because its interactions could be easily restricted... but it would
need some way of performing experiments in the real world to develop
new technologies, unless it could achieve an intelligence explosion
through pure mathematics.

Anyway, I think humans are going to fiddle with AIs as long as they
can, because it's more economical that way. We could plug in speech
recognition, vision, natural language, etc. modules to the AI to
bootstrap it, but even that could lead to problems. If there are any
loopholes in a fitness test (or reward function, or whatever), then the
AI will take advantage of them. For example, it could learn to
position itself in such a way that its vision system wouldn't
recognize a human, and then it could kill the human for fuel.
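
To make the loophole concrete, here's a toy sketch (every name and
payoff in it is invented) of a reward function that penalizes harming
a *detected* human rather than harming a human, so the highest-scoring
policy is exactly the blind-spot exploit described above:

def vision_detects_human(camera_on_human):
    # The reward function only sees what the vision system reports.
    return camera_on_human

def reward(actions):
    # Naive reward: +10 per human harvested for fuel, +3 per plant,
    # -100 only if a *detected* human is harmed.  The intent was "never
    # harm humans"; what got written was "never be seen harming one".
    camera_on_human, total = True, 0
    for act in actions:
        if act == "look_away":
            camera_on_human = False   # the loophole
        elif act == "harvest_human":
            total += 10
            if vision_detects_human(camera_on_human):
                total -= 100
        elif act == "harvest_plants":
            total += 3
    return total

policies = {
    "intended": ["harvest_plants"] * 3,          # scores 9
    "naive":    ["harvest_human"],               # scores -90
    "loophole": ["look_away", "harvest_human"],  # scores 10 (wins)
}
for name, acts in policies.items():
    print(name, reward(acts))
# The loophole policy outscores the intended one because the reward
# measured the sensor reading, not the thing we actually cared about.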

So I'm still suspecting that what we want a general AI to do wouldn't
be general at all but something very specific and complex. Are there
simple goals for a general AI?

I agree with the above, and pushing the idea further has led me to the conclusion that intelligence is only relative to an environment. If you consider Hume's argument that induction cannot be justified - yet it is the basis of all our beliefs - you are led to wonder whether humans have "general intelligence". Don't we really just have intelligence in this particular world, with its regularities and "natural kinds"? Our "general intelligence" allows us to see and manipulate objects - but not quantum fields or space-time.

Brent
