On Sunday, June 15, 2014 11:44:24 PM UTC+10, Bruno Marchal wrote:
>
>
> On 15 Jun 2014, at 03:34, Pierz wrote:
>
>
>
> On Saturday, June 14, 2014 11:52:02 AM UTC+10, Liz R wrote:
>>
>> On 13 June 2014 23:35, Russell Standish <li...@hpcoders.com.au> wrote:
>>
>>> On Fri, Jun 13, 2014 at 01:44:25AM -0700, Pierz wrote:
>>> > Yes. But I have to wonder what we're doing wrong, because any 
>>> > sophisticated piece of modern software, such as a modern OS or even 
>>> > this humble mailing list/forum software we are using, is already 
>>> > "hugely mind-bogglingly incremental". It has evolved over decades of 
>>> > incremental improvement involving thousands upon thousands of workers 
>>> > building up layers of increasing abstraction from the unfriendly 
>>> > silicon goings-on down below. And yet Siri, far from being a virtual 
>>> > Scarlett Johansson, is still pretty much dumb as dog-shit (though she 
>>> > has some neat bits of crystallised intelligence built in. Inspired by 
>>> > "Her", I asked her what she was wearing, and she said, "I can't tell 
>>> > you but it doesn't come off."). Well, I'm still agnostic on "comp", so 
>>> > I don't have to decide whether this conspicuous failure represents 
>>> > evidence against computationalism. I do, however, consider the bullish 
>>> > predictions of the likes of Deutsch (and even our own dear Bruno) - 
>>> > that we shall be uploading our brains or something by the end of the 
>>> > century or sooner - to be deluded. Deutsch wrote once (in BoI?) that 
>>> > the computational power required for human intelligence is already 
>>> > present in a modern laptop; we just haven't had the programming 
>>> > breakthrough yet. I think that is preposterous and can hardly credit 
>>> > that he actually believes it.
>>> >
>>>
>>> It overstates the facts somewhat - a modern laptop is probably still
>>> about 3 orders of magnitude less powerful than a human brain, but with
>>> Moore's law, that gap will be closed in about 15 years.
>>>
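(Sanity-checking that arithmetic: three orders of magnitude is roughly 
2^10 = 1024, i.e. ten doublings, and at the classic Moore's-law cadence of 
one doubling every 18 months that comes to 10 x 1.5 = 15 years.)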
>>
>> Moore's law appears to have stopped working about 10 years ago, going by 
>> a comparison of modern home computers with old ones. That is, the 
>> processors haven't increased much in speed, but they have gained more 
>> "cores", i.e. they've been parallelised, and more memory and more storage. 
>> But the density of the components on the chips hasn't increased by the 
>> predicted amount (or so I'm told).
>>
>
> No - we are hitting limits now in terms of miniaturization that are posing 
> serious challenges to the continuation of Moore's law. So far, engineers 
> have - more or less - found ways of working around these problems, but this 
> can't continue indefinitely. However, it's really a subsidiary point. If we 
> require 1000x the power of a modern laptop, that's easily (if somewhat 
> expensively) achieved with parallelization, a la Google's PC farms. Of 
> course this only helps if we parallelize our AI algorithms, but given the 
> massive parallelism of the brain, this should be something we'd be doing 
> anyway. And yet I don't think anyone would argue that they could achieve 
> human-like intelligence even with all of Google's PCs roped together. It's 
> an article of faith that all that is required is a programming 
> breakthrough. I seriously doubt it. I believe that human intelligence is 
> fundamentally linked to qualia (consciousness), and I've yet to be 
> convinced that we have any real understanding of that. I am familiar, of 
> course, with all the arguments on this subject, including Bruno's theory 
> about unprovable true statements etc., but in the end I remain unconvinced. 
> For instance, I would ask: how would we torture an artificial consciousness 
> (if we were cruel enough to want to)? How would we induce pain or pleasure? 
> Sure, we can "reward" a program for correctly solving a problem in some kind 
> of learning algorithm, but anyone who understands programming and knows 
> what is really going on when that occurs must surely wonder how 
> incrementing a register induces pleasure (or decrementing it, pain). 
> Anyway. Old hat, I guess. My point is that it comes down to a "bet", as 
> Bruno likes to say. A statement of faith. At least Bruno admits it is such. 
>
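To make that register point concrete, here is a minimal sketch (in Python, 
with names of my own invention - no real system's API is implied) of the 
kind of "reward" update a simple learning loop performs:

    # Illustrative only: a bare-bones value update of the sort used in
    # simple reinforcement-learning loops. All names here are assumed.
    value = {}         # estimated "worth" of each action
    alpha = 0.1        # learning rate

    def reward(action, r):
        """Nudge the stored value of `action` toward the received reward r."""
        v = value.get(action, 0.0)
        value[action] = v + alpha * (r - v)   # arithmetic on a stored number

    reward("solve_problem", 1.0)   # the "pleasure" case: the number goes up
    reward("fail_problem", -1.0)   # the "pain" case: the number goes down
    print(value)                   # {'solve_problem': 0.1, 'fail_problem': -0.1}

Whatever subjective states we might want to ascribe to the system, what 
actually happens is exactly this kind of increment.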
>
> I do more than admit this. I insist that, logically, it has to be an act 
> of faith.
>
> That is also the reason why I insist that it is a theology. It is, at the 
> least, the belief in a form of (digital) reincarnation. 
>
>
>
>
> Given the current state of AI, I'd bet the other way. 
>
>
> Comp is not so nice with AI. Theoretical AI is a nest of beautiful 
> results, but they are all necessarily non-constructive. We cannot program 
> intelligence; we can only recognize it, or not. It depends in large part 
> on us.
>
> In theoretical artificial intelligence, or learning theory(*), the results 
> can be summed up by the fact that a machine will be more intelligent than 
> another one if she is able to make more errors, to change her mind more 
> often, to work in a team, to entertain non-falsifiable hypotheses, etc. 
>
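The flavour of those results can be seen in a toy sketch of Gold-style 
"identification in the limit" (in Python; the hypothesis class and all 
names are my own choices, not from the cited literature): the learner keeps 
the first hypothesis consistent with everything seen so far, and changes 
its mind only when a counterexample arrives.

    # Toy learner: change your mind whenever the data refute your guess.
    def learn(stream, hypotheses):
        seen = []
        current = 0                        # index of the current guess
        for x, label in stream:
            seen.append((x, label))
            # A counterexample forces a mind-change to the next candidate.
            while any(hypotheses[current](v) != lbl for v, lbl in seen):
                current += 1
            yield current                  # guess after seeing this datum

    # Hypothesis class: "n is a multiple of k" for k = 1..9.
    hyps = [lambda n, k=k: n % k == 0 for k in range(1, 10)]
    data = [(6, True), (3, False), (4, False), (12, True)]
    print(list(learn(data, hyps)))         # [0, 1, 5, 5]: settles on k = 6

The sense in which "more errors" tracks "more intelligent" is that a 
learner able to identify a richer class of hypotheses must, in general, 
tolerate more mind-changes on the way to convergence.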
Certainly those look like sound approaches to problem solving. But if we 
consider our paradigmatic example of intelligence, Albert Einstein, we note 
that he strongly advocated the role of the (apparently) irrational in true 
intellectual creativity - take the famous quote "imagination is more 
important than knowledge", for example. We have no idea yet how to program 
such a thing as "imagination", yet it seems critical, and not just for 
geniuses. The leap of understanding, the flash of insight, has almost a 
'quantum' quality about it, a jump to a new level of organization. Now you 
might argue that such insight is the result of subconscious computational 
routines working away and "returning a value", but if so, it seems to be a 
completely different kind of computation from the ones we're used to in 
programming. To me it seems more like a kind of mirroring, as if the mind 
attempts to reorganize itself into a new structure that mirrors the 
organization of the problem space. Suddenly it's as if all the neurons snap 
into place. It's never-endingly awesome (in the old sense of the word, but 
a little in the new too) to me that the abstraction that arises mentally 
from the gooey, messy substrate is so clean and pure. The mental model 
*feels* like a pure abstraction - this very sensation lends credence to a 
Platonic understanding of the mind-body problem, in my view.

But all that aside, how do we get computers to do something analogous? At 
the moment, our perpetual problem is that all the true insight must be 
human-supplied. The computer is stuck slavishly within the confines of the 
problem as it is specified. It never comes back and says: I think you're 
seeing this all wrong. Sure, its results might suggest that, but only to a 
human interpreter, who will then have to slavishly reconfigure the machine.

Not to say these problems can't be solved, but I can't share the insouciant 
confidence of John Clark, Deutsch et al. that we are getting incrementally 
closer to the goal and that it's just a matter of time. Reading John 
Clark's "sore loser" remarks, I can see he thinks that I (and other AI 
skeptics) must hold this skeptical view for emotional reasons, i.e., we 
can't bear to be "just" machines and to have our treasured mystical 
specialness wrenched from us. That may or may not be the case, but it's as 
irrelevant as whether John Clark has an emotional investment in his 
position - of course he does, and it oozes from every line. The fact is I 
can absorb blows to my ego - Many Worlds is a massive blow to personal 
specialness, but I have swallowed the pill, however bitter, for reasons of 
intellectual coherence.

What I see of the state of AI, on the other hand, does not convince me that 
we're in the right paradigm yet. The Newtonians believed their clockwork 
universe had all the big problems sewn up and that the rest was a matter of 
the sixth decimal place or whatever, but the "tiny explanatory gap" that 
remained turned out to contain quantum theory and relativity. The gap 
between hope and reality in AI is far more glaring. That's why I consider 
the widespread faith in AI misplaced, and certainly no less tied up in 
emotional investments than my skepticism. To create minds is to become 
gods, so the trade-in of a nostalgic, mystical notion of self seems like a 
cheap deal. But I would be prepared to bet that a genuine paradigm shift in 
our understanding of mind, on the scale of the shift from the Newtonian to 
the quantum world view, awaits us before we succeed in constructing an 
intelligent conscious being.
 

> Machine intelligence looks like this. Whatever theory of intelligence you 
> suggest, a machine will be more intelligent by not applying it.
>
Doesn't sound like an encouraging recipe for a programmer! She'd be better 
off staring blankly at the screen and waiting for the computer to say 
something insightful. But of course by "not applying it" you mean "applying 
something else"...

> Intelligence is a Protagorean virtue too, if not the most typical. It 
> escapes definitions.
>
> Bruno
>
> (*) The work of Putnam, Blum, Gold, Case and Smith, Osherson, Stob, and 
> Weinstein. 
>
>
>  
>
>>  
>>> However, it is also true that having a 1000-fold more powerful
>>> computer does not get you human intelligence, so the programming
>>> breakthrough is still required.
>>>
>>> Yes, you have to know how people do it.
>>  
>>
> Quote from ... someone: "If the brain were so simple we could understand 
> it, we'd be so simple we couldn't." 
>
>
>
> http://iridia.ulb.ac.be/~marchal/
>
>
>
>

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to everything-list+unsubscr...@googlegroups.com.
To post to this group, send email to everything-list@googlegroups.com.
Visit this group at http://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/d/optout.
