These are just wild guesses, based on reasonable arguments, but without evidence.


On 26/10/2017 at 07:51, Hideki Kato wrote:
> You can believe
>> From what I understand, the same network architecture implies the
>> same number of blocks
> but David Silver said AlphaGo Master used 40 layers in May.
> http://www.bestchinanews.com/Science-Technology/10371.html
> # The paper was submitted in April.
>
> Usually, a network's "architecture" does not imply the number of 
> layers, whereas its "configuration" may.
>
> Clearly they made the 40-layer version first, because it's 
> called the "1st instance" whereas the 80-layer one is called the 
> "2nd instance."  The 1st was trained for 3 days and overtook 
> AlphaGo Lee.  Then they switched to the 2nd.  Aware of this 
> fact, and watching the learning curve of the 1st, I guess 40 
> layers was not enough to reach AlphaGo Master level, and so they 
> doubled the layers.
>
> Hideki
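
A side note from me on the "architecture vs. configuration" point:
the residual tower described in the paper is one fixed architecture,
and the number of blocks is just a parameter of it. Here is a minimal
sketch in Python of how the two layer counts line up; this is my own
illustration, not DeepMind's code, and it assumes two convolutions
per residual block plus one input convolution, as the paper describes:

def residual_tower_conv_layers(num_blocks):
    """Count conv layers in an AlphaGo-Zero-style residual tower.

    Assumption (from the paper's description): one input convolution
    followed by num_blocks residual blocks of two convolutions each.
    """
    input_convs = 1
    convs_per_block = 2
    return input_convs + num_blocks * convs_per_block

for blocks in (20, 40):
    print(blocks, "blocks ->", residual_tower_conv_layers(blocks),
          "conv layers")
# 20 blocks -> 41 conv layers (roughly the "40 layers" network)
# 40 blocks -> 81 conv layers (roughly the "80 layers" network)

Under that assumption, "40 layers" and "20 blocks" would simply be
the same network counted two different ways.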
>
> Xavier Combelle: <1550c907-8b96-e4ea-1f5e-2344f394b...@gmail.com>:
>> As I understand the paper, they created AlphaGo Zero directly with
>> a 40-block setup.
>> They only made a reduced 20-block setup to compare on kifu
>> prediction (as far as I searched, that is the only place in the
>> paper where they mention the 20-block setup).
>> They specifically mention comparing several versions of their
>> software with various parameters.
>> If the number of blocks were an important parameter, I would hope
>> they would mention it.
>> Of course there are a lot of things that they tried that failed
>> and that we will never know about,
>> but I have a hard time believing that a 20-block AlphaGo Zero is
>> one of them.
>> As for the paper, there is no mention of Master's number of blocks:
>> "AlphaGo Master is the program that defeated top human players by 60–0
>> in January 2017 [34].
>> It was previously unpublished but uses the same neural network
>> architecture, reinforcement
>> learning algorithm, and MCTS algorithm as described in this paper.
>> However, it uses the
>> same handcrafted features and rollouts as AlphaGo Lee
>> and training was initialised by
>> supervised learning from human data."
>> From what I understand, the same network architecture implies the
>> same number of blocks.
>> On 25/10/2017 at 17:58, Xavier Combelle wrote:
>>> Now I understand better.
>>> On 25/10/2017 at 04:28, Hideki Kato wrote:
>>>> Are you thinking the 1st instance could have reached Master 
>>>> level if given more training days?
>>>> I don't think so.  Its performance would have stopped 
>>>> improving at around 3 days.  If not, why did they build the 
>>>> 2nd instance?
>>>> Best,
>>>> Hideki
>>>> Xavier Combelle: <05c04de1-59c4-8fcd-2dd1-094faabf3...@gmail.com>:
>>>>> How is it a fair comparison if there were only 3 days of
>>>>> training for Zero?
>>>>> Master had longer training, no?  Moreover, Zero has a
>>>>> bootstrapping problem because, unlike Master, it doesn't learn
>>>>> from expert games, which means it is likely to be weaker with
>>>>> little training.
>>>>> On 24/10/2017 at 20:20, Hideki Kato wrote:
>>>>>> In May, David Silver said Master used a 40-layer network. 
>>>>>> According to the new paper, Master used the same architecture 
>>>>>> as Zero.  So Master used a 20-block ResNet.  
>>>>>> The first instance of Zero, the 20-block ResNet version, is 
>>>>>> weaker than Master (after 3 days of training).  So, with the 
>>>>>> same number of layers (a fair comparison), Zero is weaker 
>>>>>> than Master.
>>>>>> Hideki
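
To close with a concrete picture of what a "block" means here: below
is one residual block reconstructed in Python/PyTorch from the
paper's description (two 3x3 convolutions of 256 filters, batch
normalization, and a skip connection). This is my own sketch of what
the paper describes, not DeepMind's code:

import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    # One residual block as the AlphaGo Zero paper describes it:
    # conv -> batch norm -> ReLU -> conv -> batch norm -> add the
    # skip connection -> ReLU.  Two conv layers per block, which is
    # why a 20-block tower gets called a ~40-layer network.
    def __init__(self, filters=256):
        super().__init__()
        self.conv1 = nn.Conv2d(filters, filters, kernel_size=3, padding=1)
        self.bn1 = nn.BatchNorm2d(filters)
        self.conv2 = nn.Conv2d(filters, filters, kernel_size=3, padding=1)
        self.bn2 = nn.BatchNorm2d(filters)

    def forward(self, x):
        y = torch.relu(self.bn1(self.conv1(x)))  # first conv layer
        y = self.bn2(self.conv2(y))              # second conv layer
        return torch.relu(x + y)                 # skip, then final ReLU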

_______________________________________________
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go
