> ... we witnessed hundreds of moves vetted by 9dan players, especially
> Michael Redmond's, where each move was vetted. 

This is a promising approach. But there were also numerous moves where
the 9-dan pros said that, in *their* opinion, the moves were weak or
wrong: wasting ko threats for no reason, for example, or moves that
even a 1p would never make.

If you want to argue that "their opinion" was wrong because they don't
understand the game at the level AlphaGo was playing, then you can't
use their opinion as positive evidence either.

> nearly all sporting events, given the small sample size involved) of
> statistical significance - suggesting that on another week the result
> might have been 4-1 to Lee Sedol.

If his 2nd game had been the one where he created groups whose
life-and-death status was genuinely unclear and forced a mistake, then,
given that we were told the computer was not being changed during the
match, he might have scored two wins just by playing exactly the same
way.

And if he had known this in advance, he might have realized that
creating multiple weak groups and some large, complicated kos is the
way to beat it, and so it could well have gone 4-1 to Lee Sedol in
"another week".

C'mon DeepMind, put that same version on KGS, set to only play 9p
players, with the same time controls, and let's get 40 games to give it
a proper ranking. (If 5 games against Lee Sedol are useful, 40 games
against a range of players with little to lose, who are systematically
trying to find its weaknesses, are going to be amazing.)
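
And here is a similarly hedged sketch of why 40 games would pin the
ranking down so much better than 5. Under the usual normal
approximation, the standard error of an observed win rate is
sqrt(p*(1-p)/n), and an Elo-style rating is just a transform of that
win rate, so the same 80% score is far more informative over 40 games.
The logistic Elo model, the 95% interval, and the 32-of-40 comparison
score are my own illustrative assumptions, not anything DeepMind has
published:

from math import sqrt, log10

def winrate_ci(wins, games, z=1.96):
    """Approximate 95% confidence interval for the per-game win
    probability (normal approximation)."""
    p = wins / games
    se = sqrt(p * (1 - p) / games)
    return max(0.0, p - z * se), min(1.0, p + z * se)

def elo_gap(p):
    """Elo rating difference implied by win probability p
    (logistic model)."""
    return 400 * log10(p / (1 - p))

for wins, games in ((4, 5), (32, 40)):  # the same 80% score, two sizes
    lo, hi = winrate_ci(wins, games)
    # clamp endpoints away from 0 and 1 so the Elo transform stays finite
    lo_c, hi_c = max(lo, 0.05), min(hi, 0.95)
    print(f"{wins}/{games}: win rate CI ~ [{lo:.2f}, {hi:.2f}], "
          f"Elo gap ~ [{elo_gap(lo_c):+.0f}, {elo_gap(hi_c):+.0f}]")

The 5-game interval is consistent with anything from roughly even
strength up to a large gap; the 40-game interval is much tighter, and
that's before counting how much more you learn from varied opponents
deliberately probing for weaknesses.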

Darren