Concur w/Brian. While the authors present genuine contributions, this
kind of meta-learning doesn't apply well to Zero-style architectures.

I didn't get much out of the article itself; the arXiv link for the
underlying work is https://arxiv.org/abs/1812.00332
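Roughly, the idea Brian describes below is a constrained search over candidate architectures: sample configurations, reject any that blow the latency budget, and keep the most accurate survivor. Here's a toy sketch of that search loop — my own simplification with made-up cost models, not the paper's actual gradient-based method:

```python
import random

def estimate_accuracy(depth, width):
    # Hypothetical proxy: deeper/wider nets score higher, saturating toward 1.0.
    return 1.0 - 1.0 / (1.0 + 0.1 * depth * width)

def estimate_latency_ms(depth, width):
    # Hypothetical proxy: latency grows with total layer size.
    return 0.5 * depth * width

def search(num_trials=200, latency_budget_ms=40.0, seed=0):
    """Random latency-constrained architecture search (toy illustration)."""
    rng = random.Random(seed)
    best = None  # (accuracy, depth, width)
    for _ in range(num_trials):
        depth = rng.randint(2, 20)            # number of layers
        width = rng.choice([16, 32, 64, 128])  # channels per layer
        if estimate_latency_ms(depth, width) > latency_budget_ms:
            continue  # reject candidates over the latency budget
        acc = estimate_accuracy(depth, width)
        if best is None or acc > best[0]:
            best = (acc, depth, width)
    return best

if __name__ == "__main__":
    acc, depth, width = search()
    print(f"best: depth={depth} width={width} est. accuracy={acc:.3f}")
```

The real contribution in the paper is making this search cheap enough to run directly on the target hardware's latency, rather than a proxy task; the brute-force loop above is only the conceptual skeleton.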

Best,
-Chaz

On Sun, Mar 24, 2019 at 4:17 PM Brian Lee <brian.kihoon....@gmail.com>
wrote:

> This doesn't actually speed up the neural networks that much; it's a
> technique to more quickly brute-force the search space of possible neural
> networks for ones that execute faster while maintaining similar accuracy.
> Typical hype article.
>
> Anyway, the effort spent looking for bizarre architectures is probably
> better spent doing more iterations of zero-style self-play with the same
> architecture, since it seems likely we haven't maxed out the strength of
> our existing architectures.
>
> On Sun, Mar 24, 2019 at 6:29 PM Ray Tayek <rta...@ca.rr.com> wrote:
>
>>
>> https://www.extremetech.com/computing/288152-mit-develops-algorithm-to-accelerate-neural-networks-by-200x
>>
>> I wonder how much this would speed up Go programs?
>>
>> thanks
>>
>> --
>> Honesty is a very expensive gift. So, don't expect it from cheap people -
>> Warren Buffett
>> http://tayek.com/
>>
>> _______________________________________________
>> Computer-go mailing list
>> Computer-go@computer-go.org
>> http://computer-go.org/mailman/listinfo/computer-go
>
