The closest I've seen to a computer program behaving in what might be
called an intelligent manner was in one of Douglas Hofstadter's books. (I
think it designed fonts or something?) At least as he described it, it
seemed to be doing something clever, but nowhere near the level needed to
pass the Turing Test "for real" - but that's the point, I suppose. You
can't expect to write a program that passes the TT until you've written
one that can do tiny bits of cleverness, and then another one that uses
those tiny bits to be a bit more clever, and so on. In a way this mirrors
how SF writers thought we'd soon have robot servants that were almost
human, and might even rebel ... without realising that the process would
have to be hugely, mind-bogglingly incremental.



On 13 June 2014 18:35, Pierz <pier...@gmail.com> wrote:

> Meh. The whole thing really just illustrates a fundamental problem with
> our current conception of AI - at least as it manifests in such 'tests'. It
> is perfectly clear that the Eliza-like program here just has some bunch of
> pre-prepared statements to regurgitate, and the programmers have tried to
> wire these responses up to questions in such a way that they appear to be
> legitimate, spontaneous answers. But intelligence consists in the invention
> of those responses. This is always the problem with computer programs, at
> least as they exist today: they really just crystallize acts of human
> intelligence into strict, repeatable procedures. Even chess programs, which
> are arguably the closest thing we have to computer intelligence, depend on
> this crystallized intelligence, because the pruning rules and strategic
> heuristics they rely upon draw on deep human insights that the computer
> could never have arrived at itself. As humans we resemble computers to the
> extent that we have automated our behaviour - when we regurgitate a "good,
> how are you?" in response to a social enquiry as to how we are, we are
> fundamentally behaving like Eliza. But when we engage in real conversation
> or any other form of novel problem solving, we don't seem very
> computer-like at all - the point that Craig makes (ad nauseam).
>
> On Friday, June 13, 2014 5:20:16 AM UTC+10, John Clark wrote:
>
>> On Wed, Jun 11, 2014 at 4:22 PM, <ghi...@gmail.com> wrote:
>>
>> > If the TT has been watered down, then the first question for me would
>>> be "doesn't this logically pre-assume a set of explicit standards existed
>>> in the first place"?
>>>
>>
>> My answer is "no". So am I a human or a computer?
>>
>> > Has there ever been a robust set of standards?
>>>
>>
>> No, except that whatever procedure you use to judge the level of
>> intelligence of your fellow Human Beings, it is only fair that you use the
>> same procedure when judging machines. I admit this is imperfect - humans can
>> turn out to be smarter or dumber than originally thought - but it's the only
>> tool we have for judging such things. If the judge is an idiot then the
>> Turing Test doesn't work very well, and if the subject is a genius
>> pretending to be an idiot you will also probably end up making the wrong
>> judgement, but such is life; you do the best you can with the tools at hand.
>>
>> By the way, for a long time machines have been able to beautifully
>> emulate the behavior of two particular types of humans, those in a coma and
>> those that are dead.
>>
>>    John K Clark
>>
>>
>>
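
To make Pierz's point above about "crystallized" intelligence concrete,
here is a minimal sketch (in Python) of the sort of pattern-matching an
Eliza-style program does. The rules and replies are made up purely for
illustration, not taken from any actual chatbot or contest entry:

import random
import re

# Each rule pairs a pattern over the user's input with canned replies,
# all written in advance by a human. "{0}" is filled with whatever the
# pattern captured, to fake engagement with what was said.
RULES = [
    (re.compile(r"\bI am (.*)", re.IGNORECASE),
     ["Why do you say you are {0}?", "How long have you been {0}?"]),
    (re.compile(r"\bbecause\b", re.IGNORECASE),
     ["Is that the real reason?"]),
    (re.compile(r"\b(mother|father|family)\b", re.IGNORECASE),
     ["Tell me more about your {0}."]),
]
DEFAULTS = ["I see.", "Please go on.", "What does that suggest to you?"]

def respond(utterance):
    # All the "cleverness" lives in the hand-written rules above; the
    # program itself only matches and fills in a template.
    for pattern, replies in RULES:
        match = pattern.search(utterance)
        if match:
            captured = match.group(1) if match.groups() else ""
            return random.choice(replies).format(captured.rstrip(".!? "))
    return random.choice(DEFAULTS)

print(respond("I am worried about the Turing Test."))
# e.g. "Why do you say you are worried about the Turing Test?"

Every reply such a program can ever give was invented beforehand by its
programmers; it only selects and fills in templates, which is exactly the
crystallized human intelligence Pierz describes.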
