On 15 Jun 2014, at 03:34, Pierz wrote:



On Saturday, June 14, 2014 11:52:02 AM UTC+10, Liz R wrote:
On 13 June 2014 23:35, Russell Standish <li...@hpcoders.com.au> wrote:
On Fri, Jun 13, 2014 at 01:44:25AM -0700, Pierz wrote:
> Yes. But I have to wonder what we're doing wrong, because any sophisticated
> piece of modern software such as a modern OS or even this humble mailing
> list/forum software we are using is already "hugely mind-bogglingly
> incremental". It has evolved over decades of incremental improvement
> involving thousands upon thousands of workers building up layers of
> increasing abstraction from the unfriendly silicon goings-on down below.
> And yet Siri, far from being a virtual Scarlett Johansson, is still pretty
> much dumb as dog-shit (though she has some neat bits of crystallised
> intelligence built in. Inspired by "Her", I asked her what she was wearing,
> and she said, "I can't tell you but it doesn't come off."). Well, I'm still
> agnostic on "comp", so I don't have to decide whether this conspicuous
> failure represents evidence against computationalism. I do however consider
> the bullish predictions of the likes of Deutsch (and even our own dear
> Bruno) that we shall be uploading our brains or something by the end of the
> century or sooner to be deluded. Deutsch wrote once (BoI?) that the
> computational power required for human intelligence is already present in a
> modern laptop; we just haven't had the programming breakthrough yet. I
> think that is preposterous and can hardly credit he actually believes it.
>

It overstates the facts somewhat - a modern laptop is probably still
about 3 orders of magnitude less powerful than a human brain, but with
Moore's law, that gap will be closed in about 15 years.
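The "about 15 years" figure follows from simple arithmetic, assuming the classic Moore's-law doubling period of roughly 18 months (the exact period is an assumption; estimates vary):

```python
import math

# Assumption: "3 orders of magnitude" means a 1000x gap, and computing
# power doubles roughly every 18 months (the classic Moore's-law figure).
gap = 1000
doubling_period_years = 1.5

doublings_needed = math.log2(gap)                    # log2(1000) ~ 9.97
years_to_close = doublings_needed * doubling_period_years

print(f"{doublings_needed:.1f} doublings, ~{years_to_close:.0f} years")
# -> 10.0 doublings, ~15 years
```

So closing a 1000x gap needs about ten doublings, which at 18 months each gives roughly 15 years.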

Moore's law appears to have stopped working about 10 years ago, going by a comparison of modern home computers with old ones. That is, the processors haven't increased much in speed, but they have gained more "cores", i.e. they've been parallelised, and more memory and more storage. But the density of the components on the chips hasn't increased by the predicted amount (or so I'm told).

No - we are hitting limits now in terms of miniaturization that are posing serious challenges to the continuation of Moore's law. So far, engineers have - more or less - found ways of working around these problems, but this can't continue indefinitely. However, it's really a subsidiary point. If we require 1000x the power of a modern laptop, that's easily (if somewhat expensively) achieved with parallelization, a la Google's PC farms. Of course this only helps if we parallelize our AI algorithms, but given the massive parallelism of the brain, this should be something we'd be doing anyway. And yet I don't think anyone would argue that they could achieve human-like intelligence even with all of Google's PCs roped together. It's an article of faith that all that is required is a programming breakthrough.

I seriously doubt it. I believe that human intelligence is fundamentally linked to qualia (consciousness), and I've yet to be convinced that we have any understanding of that. I am familiar, of course, with all the arguments on this subject, including Bruno's theory about unprovable true statements etc., but in the end I remain unconvinced. For instance, I would ask how we would torture an artificial consciousness (if we were cruel enough to want to). How would we induce pain or pleasure? Sure, we can "reward" a program for correctly solving a problem in some kind of learning algorithm, but anyone who understands programming and knows what is really going on when that occurs must surely wonder how incrementing a register induces pleasure (or decrementing it, pain).

Anyway. Old hat, I guess. My point is that it comes down to a "bet", as Bruno likes to say. A statement of faith. At least Bruno admits it is such.
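The "incrementing a register" point can be made concrete. Here is a minimal reward-driven learner (a hypothetical bandit-style sketch, not any real AI system): at the level of the program, "reward" is literally nothing more than nudging a stored number.

```python
import random

random.seed(0)

values = {"a": 0.0, "b": 0.0}   # learner's estimated "value" of each action
counts = {"a": 0, "b": 0}

def reward(action):
    # Environment (an arbitrary choice): "b" pays off 80% of the time, "a" 20%.
    p = 0.8 if action == "b" else 0.2
    return 1.0 if random.random() < p else 0.0

for _ in range(1000):
    # epsilon-greedy: explore 10% of the time, otherwise exploit the best guess
    if random.random() < 0.1:
        action = random.choice(["a", "b"])
    else:
        action = max(values, key=values.get)
    r = reward(action)
    counts[action] += 1
    # "Pleasure", as far as the program goes, is exactly this update:
    # moving a stored number a little toward the received reward.
    values[action] += (r - values[action]) / counts[action]

print(values)
```

The learner ends up "preferring" action b, and it is genuinely learning from reward - yet the entire mechanism is arithmetic on two dictionary entries, which is the gap between reward-as-update and pleasure-as-experience that the paragraph above points at.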

I do more than admit this. I insist that it is logically the case that it requires an act of faith.

That is also the reason why I insist that it is a theology. It is, at the least, the belief in a form of (digital) reincarnation.




As things stand, given the current state of AI, I'd bet the other way.

Comp is not so nice with AI. Theoretical AI is a nest of beautiful results, but they are all necessarily non-constructive. We cannot program intelligence, we can only recognize it, or not. It depends in large part on us.

In theoretical artificial intelligence, or learning theory(*), the results can be summed up by the fact that a machine will be more intelligent than another one if it is able to make more errors, to change its mind more often, to work in a team, to allow non-falsifiable hypotheses, etc.
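The flavour of those results can be seen in a toy Gold-style "identification in the limit" learner (a hypothetical sketch, not taken from the cited papers): the learner enumerates hypotheses and changes its mind each time the data refutes its current guess, and a learner forbidden from changing its mind could identify almost nothing.

```python
# Toy identification in the limit: the target is some constant function
# f(x) = c with c unknown. The learner enumerates hypotheses c = 0, 1, 2, ...
# and abandons each one the moment the data contradicts it.
def learn(data_stream):
    guess = 0
    mind_changes = 0
    for x, fx in data_stream:
        while fx != guess:        # current hypothesis refuted
            guess += 1            # move to the next hypothesis
            mind_changes += 1
    return guess, mind_changes

# Target: f(x) = 7. The learner converges after 7 mind changes.
print(learn([(x, 7) for x in range(5)]))  # -> (7, 7)
```

The more mind changes the learner is permitted, the larger the class of functions it can identify - which is the sense in which "changing its mind more often" makes a machine more intelligent in this theory.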

Machine intelligence looks like this: whatever theory of intelligence you suggest, a machine will be more intelligent by not applying it.

Intelligence is a Protagorean virtue too, if not the most typical. It escapes definitions.

Bruno

(*) The work of Putnam, Blum, Gold, Case and Smith, Osherson, Stob, Weinstein.




However, it is also true that having a 1000-fold more powerful
computer does not get you human intelligence, so the programming
breakthrough is still required.

Yes, you have to know how people do it.

Quote from ... someone: "If the brain were so simple we could understand it, we'd be so simple we couldn't."

--
You received this message because you are subscribed to the Google Groups "Everything List" group. To unsubscribe from this group and stop receiving emails from it, send an email to everything-list+unsubscr...@googlegroups.com.
To post to this group, send email to everything-list@googlegroups.com.
Visit this group at http://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/d/optout.

http://iridia.ulb.ac.be/~marchal/


