Hi Ian,

Thanks for sharing, but see

http://people.idsia.ch/~juergen/naturedeepmind.html

and

http://www.reddit.com/r/MachineLearning/comments/2x4yy1/google_deepmind_nature_paper_humanlevel_control/

Demis has a flair for getting the attention of the press and media, and
Nature is hardly lagging behind him in that respect. They made this [
https://www.youtube.com/watch?v=xN1d3qHMIEQ] 8-minute ad to publicise
DeepMind's paper. Interestingly, at about 4:40 the interviewer asks the two
authors what their system is poor at. Essentially, the system has no real
memory, and so is totally unable to learn long-term connections between
action choices and later goals or rewards. "That's something we're working
on next". Yup.
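To make the limitation concrete, here's a minimal tabular Q-learning sketch (illustrative only, not DeepMind's code; the names and reward values are made up). The update rule conditions only on the *current* observation, so any reward whose cause lies further back than the observation window simply can't be credited to the action that earned it:

```python
import random
from collections import defaultdict

# Minimal tabular Q-learning sketch. The point: Q(s, a) is keyed
# only on the current observation s -- there is no recurrent state,
# so connections between an action and a reward arriving many steps
# later (beyond the observation window) are invisible to the learner.

ALPHA, GAMMA, EPSILON = 0.1, 0.99, 0.1
ACTIONS = ["left", "right", "fire"]
Q = defaultdict(float)  # maps (observation, action) -> estimated value

def choose_action(obs):
    """Epsilon-greedy action selection over the current observation."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(obs, a)])

def update(obs, action, reward, next_obs):
    """One-step temporal-difference update: looks one step ahead, no further."""
    best_next = max(Q[(next_obs, a)] for a in ACTIONS)
    td_target = reward + GAMMA * best_next
    Q[(obs, action)] += ALPHA * (td_target - Q[(obs, action)])
```

DQN swaps the table for a convnet over a short stack of recent frames, but the credit-assignment structure is the same one-step bootstrap, which is exactly why long-range dependencies defeat it.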

It's no harm to the AGI community in certain respects. Due to this and
DeepMind's (and others') previous PR stunts/coups, the general public are
becoming familiar with the idea that major progress is being made towards
AI, and that the big hitters in the tech world are driving much of it. I no
longer get weird looks from people when I explain what I do.

On the other hand, this kind of propaganda makes it look like you just need
to ransack some old ideas from Machine Learning, stitch them together any
old way, add some modern Deep Learning sauce, and you've magically gotten
"the first artificial agent that is capable of learning to excel at a
diverse array of challenging tasks".

It plays some late-1970s video games. Reasonably well. Against the Atari
2600, which had a 1.19MHz 8-bit CPU, 128 bytes of RAM, cartridge ROMs of
4-32KB, and a resolution of 160x192 pixels. (See
http://en.wikipedia.org/wiki/Atari_2600_hardware for the specs.)

Erik is building a nice hybrid design which combines ideas from HTM with a
Q-learning-based reinforcement learner (see his blog at
https://cireneikual.wordpress.com/). He's making interesting progress all
on his own, especially considering this paper has 19 named authors!

Maybe a few of us should give Erik a hand. If this rather flimsy
"breakthrough" can get Demis this much free PR (and he hardly needs it at
this point), then surely we could look forward to a few column inches too.

Regards,

Fergal Byrne


On Fri, Feb 27, 2015 at 3:50 PM, Ian Danforth <[email protected]> wrote:

> Didn't want anyone to miss this:
> http://www.nature.com/nature/journal/v518/n7540/full/nature14236.html
>
> Ian
>



-- 

Fergal Byrne, Brenter IT

http://inbits.com - Better Living through Thoughtful Technology
http://ie.linkedin.com/in/fergbyrne/ - https://github.com/fergalbyrne

Founder of Clortex: HTM in Clojure -
https://github.com/nupic-community/clortex

Author, Real Machine Intelligence with Clortex and NuPIC
Read for free or buy the book at https://leanpub.com/realsmartmachines

Speaking on Clortex and HTM/CLA at euroClojure Krakow, June 2014:
http://euroclojure.com/2014/
and at LambdaJam Chicago, July 2014: http://www.lambdajam.com

e:[email protected] t:+353 83 4214179
Join the quest for Machine Intelligence at http://numenta.org
Formerly of Adnet [email protected] http://www.adnet.ie
