On Tue, Jul 25, 2006 at 11:23:54AM +0200, Shane Legg wrote:
When measuring the intelligence of a human or other animal you
have to use an appropriate test -- clearly cats can't solve linguistic
Cats and people share common capabilities, which can be tested
for by the same test. A human
Hmmm...
About the measurement of general intelligence in AGI's ...
I would tend to advocate a vectorial intelligence approach
I tend to think that quantitatively or otherwise precisely defining
and measuring general intelligence -- as a single number -- is a bit
of a conceptual and pragmatic
On 7/25/06, Ben Goertzel [EMAIL PROTECTED] wrote:
Hmmm... About the measurement of general intelligence in AGI's ... I would tend to advocate a vectorial intelligence approach
I'm not against a vector approach. Naturally every intelligent
system will have domains in which it is stronger than
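The "vectorial intelligence" idea floated above can be made concrete with a small sketch: represent each system's intelligence as a vector of per-domain scores instead of a single number, and compare systems by Pareto dominance. The domain names and score values below are invented for illustration, not taken from the thread.

```python
# Sketch of the vectorial-intelligence comparison discussed above.
# Each system is scored per domain; all names and numbers are hypothetical.

domains = ["maze_solving", "language", "planning", "vision"]

scores_a = {"maze_solving": 0.9, "language": 0.2, "planning": 0.7, "vision": 0.4}
scores_b = {"maze_solving": 0.5, "language": 0.8, "planning": 0.6, "vision": 0.4}

def dominates(x, y, domains):
    """True if x scores at least as well as y in every domain,
    and strictly better in at least one (Pareto dominance)."""
    return all(x[d] >= y[d] for d in domains) and any(x[d] > y[d] for d in domains)

# Neither system dominates the other: each is stronger somewhere,
# so no single scalar ranking falls out of the comparison.
print(dominates(scores_a, scores_b, domains))  # False
print(dominates(scores_b, scores_a, domains))  # False
```

This illustrates the point being argued: once two systems each lead in different domains, collapsing the vector into one number requires an arbitrary weighting of the domains.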
On Sat, Jul 22, 2006 at 07:48:10PM +0200, Shane Legg wrote:
After some months looking around for tests of intelligence for
machines what I found
Why would machines need a different test of intelligence than
people or animals? Stick them into the Skinner box, make
them solve mazes, make
James, Currently I'm writing a much longer paper (about 40 pages) on intelligence measurement. A draft version of this will be ready in about a month which I hope to circulate around a bit for comments and criticism. There is also
another guy who has recently come to my attention who is doing
On 7/13/06, Pei Wang [EMAIL PROTECTED] wrote:
Shane,
Do you mean Warren Smith?
Yes.
Shane
Shane, Thanks, I would appreciate that greatly. On the topic of measuring intelligence, what do you think about the actual structure of comparison of some of today's AI systems? I would like to see someone come up with, and get support for, a fairly widespread set of general tests for general AI
I think that public learning/training of an AGI would be a terrible disaster...
Look at what happened with OpenMind and MindPixel These projects
allowed the public to upload knowledge into them, which resulted in a
lot of knowledge of the general nature "Jennifer Lopez got a nice
butt", etc.
Ben, Yes, but OpenMind did get quite a bit of usable information into it as well, and mainly they learned a lot about the process. I believe they are also looking at different ways of grading the participants themselves, so the obviously juvenile ones could be graded down and out of the
I agree that using the Net to recruit a team of volunteer AGI
teachers would be a good idea.
But opening the process up to random web-surfers is, IMO, asking for trouble...!
-- Ben
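The contributor-grading idea raised above (grading participants so "juvenile" submissions get weighted down and out) can be sketched as a simple reputation scheme. Everything below is a hypothetical illustration: the contributor names, update factors, and cap are invented, not part of OpenMind or MindPixel.

```python
# Hypothetical sketch of grading contributors to a public AGI-training site:
# weight each submitted fact by its contributor's running reputation.

from collections import defaultdict

reputation = defaultdict(lambda: 1.0)   # every contributor starts neutral

def moderate(contributor, accepted):
    """Update a contributor's reputation after human review of one submission.
    Accepted facts raise it slightly; rejected ones halve it."""
    reputation[contributor] *= 1.1 if accepted else 0.5
    reputation[contributor] = min(reputation[contributor], 5.0)  # cap the gain

def fact_weight(contributor):
    """Weight applied to this contributor's future submissions."""
    return reputation[contributor]

# Two rejected pranks quickly grade a juvenile contributor down...
moderate("prankster", False)
moderate("prankster", False)
# ...while a reliable teacher's weight grows.
moderate("teacher", True)

print(fact_weight("prankster"))  # 0.25
print(fact_weight("teacher"))
```

The asymmetry (slow gain, fast loss) is one plausible way to make the system robust against the random web-surfer problem Ben describes, while still letting recruited volunteer teachers accumulate influence.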
On 7/13/06, James Ratcliff [EMAIL PROTECTED] wrote:
Ben,
Yes, but OpenMind did get quite a bit of usable
Ben,
Though Piaget is my favorite psychologist, I don't think his theory on
Developmental Psychology applies to AI to the extent you suggested.
One major reason is: in a human baby, the mental learning process in
the mind and the biological developing process in the brain happen
together, while
On 06/07/06, Russell Wallace [EMAIL PROTECTED] wrote:
On 7/6/06, William Pearson [EMAIL PROTECTED] wrote:
How would you define the sorts of tasks humans are designed to carry
out? I can't see an easy way of categorising all the problems
individual humans have shown their worth at, such as
Eugen:
If intelligence is the ability to solve hard tasks,
I use the singinst.org web site def:
www.singinst.org/seedAI/seedAI.html
Self-understanding: The ability to read and comprehend source code; the
ability to understand the function of a code fragment or the purpose of a
module; the