I would say that a superhuman Turing test would be to ask the AI any conceivable question.
It would either answer correctly or, if it did not know the answer, be able to detail possible solutions and the steps necessary to prove or disprove them. It should be capable of identifying the negative ethical aspects of certain solutions and either avoiding those solutions or clearly arguing why the end would outweigh the means.
It should be able to recognize the costs associated with the possible solutions and recommend the most efficient ways of attacking the problem.
If the solutions it comes up with equal or exceed those of current experts in all fields, then I would not hesitate to say that it was superhuman in its knowledge and intelligence.
-------------- Original message --------------
From: Matt Mahoney <[EMAIL PROTECTED]>
I stated that a less intelligent entity cannot predict the behavior of a more intelligent entity. By intelligence, I mean information content, or Kolmogorov complexity.

Russell Wallace <[EMAIL PROTECTED]> replied:
> By that definition, a cloud of gas in thermal equilibrium is superintelligent. I think you need a new definition :P
That is a problem, isn't it? I'm afraid I don't have a good answer. The Turing test has been around since 1950, and so far nobody has come up with anything better. And how would a Turing test distinguish between human and superhuman?
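To spell out the definition Russell is poking at (this is just the standard statement of Kolmogorov complexity, nothing specific to this thread):

    K_U(x) = \min \{ |p| : U(p) = x \}

where U is a fixed universal Turing machine and |p| is the length in bits of program p. The gas-cloud objection lands because a string sampled from a maximum-entropy source is incompressible with high probability, so K(x) is close to |x| even though nothing useful is being computed; the definition rewards randomness, not competence.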
-- Matt Mahoney, [EMAIL PROTECTED]
This list is sponsored by AGIRI: http://www.agiri.org/email To unsubscribe or change your options, please go to: http://v2.listbox.com/member/[EMAIL PROTECTED]
