The thread is

        http://www.lifein19x19.com/forum/viewtopic.php?f=18&t=12922&p=201695

On Fri, Mar 25, 2016 at 09:16:07PM -0400, Brian Sheppard wrote:
> Hmm, seems to imply a 1000-Elo edge over human 9p. But such a player would 
> literally never lose a game to a human.
> 
> I take this as an example of the difficulty of extrapolating based on games 
> against computers. (and the slide seems to have a disclaimer to this effect, 
> if I am reading the text on the left hand side correctly). Computers have 
> structural similarities that exaggerate strength differences in head-to-head 
> comparisons. But against opponents that have different playing 
> characteristics, such as human 9p, then the strength distribution is 
> different.

I agree.  Well, computer vs. computer comparisons may still be (somewhat)
fine as long as they involve different programs.  What I wrote in that thread:

The word covered by the speaker's head is "self".  Bot results in
self-play are always(?) massively exaggerated.  It's not uncommon to see
a 75% self-play winrate translate to a 52% winrate against a third-party
reference opponent; cf. figs. 7 & 8 in
http://pasky.or.cz/go/pachi-tr.pdf .  Intuitively, I'd expect the effect
to be less pronounced with very strong programs, but we don't know
anything precise about the mechanics here, and experiments are difficult.
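
To put rough numbers on the Elo claims above, here is a quick sketch in
plain Python.  It assumes the standard logistic Elo model relating
rating difference to expected winrate; applying it to these particular
figures is my own illustration, not something taken from the slides:

    import math

    def winrate_from_elo(diff):
        # Expected score of the stronger player given an Elo edge `diff`.
        return 1.0 / (1.0 + 10.0 ** (-diff / 400.0))

    def elo_from_winrate(p):
        # Elo difference implied by a winrate p against some fixed opponent.
        return 400.0 * math.log10(p / (1.0 - p))

    print(winrate_from_elo(1000))   # ~0.997, i.e. almost never losing
    print(elo_from_winrate(0.75))   # ~191 Elo behind a 75% self-play winrate
    print(elo_from_winrate(0.52))   # ~14 Elo behind 52% vs. a reference bot

So a 1000-Elo edge would mean winning about 99.7% of games, and the
75% -> 52% drop corresponds to roughly 191 Elo shrinking to roughly
14 Elo, which is the kind of exaggeration I mean.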

There's no doubt that today's AlphaGo is much stronger than the Nature
version.  But how much?  We'll have a better idea once they pit it
against humans in more matches, and ideally once other programs catch up
further.  Without knowing more (like the rest of the slides or a
statement by someone from DeepMind), I wouldn't personally read much
into this graph.

-- 
                                Petr Baudis
        If you have good ideas, good data and fast computers,
        you can do almost anything. -- Geoffrey Hinton
_______________________________________________
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go
