On 13 Jun 2014, at 10:44, Pierz wrote:

Yes. But I have to wonder what we're doing wrong, because any sophisticated piece of modern software, such as a modern OS or even this humble mailing list/forum software we are using, is already "hugely, mind-bogglingly incremental". It has evolved over decades of incremental improvement involving thousands upon thousands of workers building up layers of increasing abstraction from the unfriendly silicon goings-on down below. And yet Siri, far from being a virtual Scarlett Johansson, is still pretty much dumb as dogshit (though she has some neat bits of crystallised intelligence built in. Inspired by "Her", I asked her what she was wearing, and she said, "I can't tell you, but it doesn't come off."). Well, I'm still agnostic on "comp", so I don't have to decide whether this conspicuous failure represents evidence against computationalism. I do, however, consider the bullish predictions of the likes of Deutsch (and even our own dear Bruno) that we shall be uploading our brains or something by the end of the century or sooner to be deluded. Deutsch wrote once (BoI?) that the computational power required for human intelligence is already present in a modern laptop; we just haven't had the programming breakthrough yet. I think that is preposterous and can hardly credit that he actually believes it.


I think we have already had the programming breakthrough, with the discovery of the universal machine, and I begin to think she is already conscious and intelligent (perhaps even maximally so).

Perhaps even Löbianity is already part of the fall. I take Löbian machines, like PA or ZF, to be as conscious as you and me (though more dissociated with respect to our local reality).
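
(For readers new to the term, a rough sketch of what "Löbian" means here: a machine or theory is Löbian when its provability predicate obeys Löb's theorem, which in modal notation reads

    □(□p → p) → □p

i.e. if the machine can prove "if p is provable, then p", it can already prove p. PA and ZF both satisfy this, which is the sense in which they count as Löbian machines.)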

Uploading our minds might take one or two centuries, via nanotechnology, but that does not mean we will understand our minds. Copying is just infinitely easier than understanding. Also, not all people will bet on the same level.

Bruno

http://iridia.ulb.ac.be/~marchal/





On 13 June 2014 19:49, LizR <liz...@gmail.com> wrote:
The closest I've seen to a computer programme behaving in what might be called an intelligent manner was in one of Douglas Hofstadter's books. (I think it designed fonts or something?) At least as he described it, it seemed to be doing something clever, but nowhere near the level needed to pass the Turing Test "for real" - but that's the point, I suppose. You can't expect to write a programme to pass the TT until you've written one that can do tiny bits of cleverness, and then another one that uses those tiny bits to be a bit more clever, and so on. In a way this is like the way that SF writers thought we'd soon have robot servants that were almost human, and might even rebel ... without realising that the process would have to be hugely, mind-bogglingly incremental.



On 13 June 2014 18:35, Pierz <pie...@gmail.com> wrote:
Meh. The whole thing really just illustrates a fundamental problem with our current conception of AI, at least as it manifests in such 'tests'. It is perfectly clear that the Eliza-like program here just has some bunch of pre-prepared statements to regurgitate, and the programmers have tried to wire these responses up to questions in such a way that they appear to be legitimate, spontaneous answers. But intelligence consists in the invention of those responses. This is always the problem with computer programs, at least as they exist today: they really just crystallize acts of human intelligence into strict, repeatable procedures. Even chess programs, which are arguably the closest thing we have to computer intelligence, depend on this crystallized intelligence, because the pruning rules and strategic heuristics they rely upon draw on deep human insights that the computer could never have arrived at itself. As humans we resemble computers to the extent that we have automated our behaviour - when we regurgitate a "good, how are you?" in response to a social enquiry as to how we are, we are fundamentally behaving like Eliza. But when we engage in real conversation or any other form of novel problem solving, we don't seem very computer-like at all - the point that Craig makes (ad nauseam).
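
(The "pre-prepared statements" point is easy to make concrete. Below is a minimal ELIZA-style sketch in Python; the rules are invented for illustration, not the actual ELIZA script. Every reply that looks spontaneous was written in advance by a human, and the program only matches patterns and fills in templates.)

import random
import re

# A minimal ELIZA-style responder. All the apparent cleverness lives in
# the hand-written rules; the program itself only matches and substitutes.
# (Illustrative rules only - the original ELIZA script was much larger.)
RULES = [
    (r"\bi need (.+)", ["Why do you need {0}?",
                        "Would it really help you to get {0}?"]),
    (r"\bi am (.+)",   ["How long have you been {0}?",
                        "Why do you tell me you are {0}?"]),
    (r"\bmy (\w+)",    ["Tell me more about your {0}."]),
]
FALLBACKS = ["Please go on.", "I see.", "How does that make you feel?"]

def respond(utterance: str) -> str:
    for pattern, templates in RULES:
        match = re.search(pattern, utterance, re.IGNORECASE)
        if match:
            # The "crystallized intelligence": a canned template, chosen
            # and filled mechanically, with no understanding involved.
            return random.choice(templates).format(*match.groups())
    return random.choice(FALLBACKS)

if __name__ == "__main__":
    print(respond("I need a holiday"))  # e.g. "Why do you need a holiday?"
    print(respond("I am sad about my job"))

(Swap in a big enough rule set and you get the chatbot in question; at no point does the machine invent a response.)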

On Friday, June 13, 2014 5:20:16 AM UTC+10, John Clark wrote:
On Wed, Jun 11, 2014 at 4:22 PM, <ghi...@gmail.com> wrote:

> If the TT has been watered down, then the first question for me would be "doesn't this logically pre-assume a set of explicit standards existed in the first place"?

My answer is "no". So am I a human or a computer?

> Has there ever been a robust set of standards?

No, except that whatever procedure you use to judge the level of intelligence of your fellow human beings, it is only fair that you use the same procedure when judging machines. I admit this is imperfect - humans can turn out to be smarter or dumber than originally thought - but it's the only tool we have for judging such things. If the judge is an idiot the Turing Test doesn't work very well, and if the subject is a genius pretending to be an idiot you will also probably end up making the wrong judgement, but such is life: you do the best you can with the tools at hand.

By the way, for a long time machines have been able to beautifully emulate the behavior of two particular types of humans, those in a coma and those that are dead.

   John K Clark







