Everything you're talking about equates intelligence with problem solving, which is a very narrow view of what intelligence involves; that's fine if problem solving is the only measure of intelligence you care about. The problem with this perspective is that it excludes other aspects that many people consider part of intelligence, such as artistic creativity, musical ability, poetic and story-telling abilities, empathic ability, and so forth.
As "weak" as the Turing test is, it goes some of the way to evaluating something that formal problem-solving tests of intelligence don't address: The quality of consciousness and understanding of hard to define things such as emotions and attitudes because it is based on human brains making judgements about the qualities of what may or may not be a human brain. Your example of testing of a super-intelligent alien would tell us nothing about its broader intelligence that we couldn't discern through dialog with it (what would be, in reality, a Turing test) ... indeed, what if the alien was horrible at problem solving but a genius at understanding how human emotions worked? Imagine an alien who couldn't solve a Sodoku puzzle or get a double digit score playing Tetris but in a single therapy session could deduce the source of your emotional problems, explain them to you in such a way that you could address them, and "cure" your depression, PTSD, or whatever your issues are ... would that alien be intelligent? [mg] On Thu, Jul 4, 2013 at 9:45 AM, Eric Walker <eric.wal...@gmail.com> wrote: > On Thu, Jul 4, 2013 at 7:21 AM, James Bowery <jabow...@gmail.com> wrote: > > Well since we're talking measurement and theory in the natural sciences, >> one is operating on nature and one does have a model of nature which is >> formal in the sense that any theory is formal. >> > > I think we are largely in agreement here. There are perhaps two or three > different "formal" approaches that are possible -- there's the formality of > a formal definition, i.e., "intelligence is A and B," where you can > rigorously show that A and B are satisfied or not, in a mathematical sense. > And then there's the formality of a procedure -- "its not clear exactly > what intelligence is and whether computers can have it, but we think we can > rigorously detect some examples of intelligence being used that could > potentially overlap with what computers can do now or in the future. For > our experiment, we'll try to place bounds the question by doing C and D, > and whatever we find, it will be interesting and statistically sound." And > then there's the formality of a model -- "we don't know exactly > what intelligence is or whether computers can have it, but we need to > approach the problem systematically and relate the results to other > experiments, so here are our general assumptions: E and F." > > It would probably be difficult to keep these three dimensions apart in > actual experiments. But it seems to me that the first kind of formality > could lead people into to assuming the answer implicitly in the question; > for example, "intelligence is the ability to solve a certain class of > NP-hard problems together with <fill in three other abilities>." > > Eric > >