kexinyu commented on issue #17164: net.Cast("float16") doesn't work: Check 
failed: (*in_type)[i] == dtype_param (2 vs. 0) : This layer requires uniform 
type. Expected 'float32' v.s. given 'float16' at 'gamma'
URL: 
https://github.com/apache/incubator-mxnet/issues/17164#issuecomment-569779050
 
 
   > > > > I'm having a similar issue, but I guess this explains why the cast has no effect on the BatchNorm layer?
   > > > > 
https://github.com/apache/incubator-mxnet/blob/master/python/mxnet/gluon/nn/basic_layers.py#L359-L362
   > > > > ```
   > > > > class BatchNorm(HybridBlock):
   > > > >     ....
   > > > >     def cast(self, dtype):
   > > > >         if np.dtype(dtype).name == 'float16':
   > > > >             dtype = 'float32'
   > > > >         super(BatchNorm, self).cast(dtype)
   > > > > ```
   > > > > 
   > > > > 
   > > > > so 'gamma' is still in float32, while the input is in float16, which 
causes the check failure.
   > > > 
   > > > 
   > > > Why is it set up like this? I'm confused. Does this mean BatchNorm can't be cast to fp16? Then how am I supposed to train and convert the model...
   > > 
   > > 
   > > BatchNorm is a “blacklist” function for which 16 bits of precision may not be sufficient. So you want to ensure that inputs into the BatchNorm layer use float32, or you may have convergence issues.
   > 
   > Oh, I see! Thank you~ But I still have a question: BatchNorm is used heavily throughout a typical network, so how can I make sure the BatchNorm layers' inputs are fp32 while the other operators run in fp16... Could you give a simple code example?
   
   I have the same question and am looking into it myself; I'll let you know once I figure it out.
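
   For anyone landing here later, below is a minimal sketch (not an official recipe from this thread) of the usual way to get this behaviour in Gluon: let MXNet's AMP module insert the float16/float32 casts automatically, so that "blacklist" ops such as BatchNorm keep float32 inputs and parameters while most of the compute runs in float16. It assumes MXNet >= 1.5 with `mxnet.contrib.amp` available; the network and data here are toy placeholders, so adapt them to your own setup.

   ```python
   # Sketch: mixed-precision training with MXNet AMP (assumes MXNet >= 1.5).
   # amp.init() patches the operators so float16 is used where it is safe,
   # while "blacklist" ops like BatchNorm stay in float32; no manual
   # net.cast('float16') is needed.
   import mxnet as mx
   from mxnet import gluon, autograd
   from mxnet.contrib import amp

   amp.init()  # must run before the network and trainer are created

   ctx = mx.gpu(0) if mx.context.num_gpus() > 0 else mx.cpu()

   # A toy network used purely for illustration.
   net = gluon.nn.HybridSequential()
   net.add(gluon.nn.Conv2D(channels=16, kernel_size=3),
           gluon.nn.BatchNorm(),
           gluon.nn.Activation('relu'),
           gluon.nn.GlobalAvgPool2D(),
           gluon.nn.Dense(10))
   net.initialize(ctx=ctx)
   net.hybridize()

   trainer = gluon.Trainer(net.collect_params(), 'sgd', {'learning_rate': 0.1})
   amp.init_trainer(trainer)  # enables dynamic loss scaling for fp16 gradients

   loss_fn = gluon.loss.SoftmaxCrossEntropyLoss()
   data = mx.nd.random.uniform(shape=(8, 3, 32, 32), ctx=ctx)
   label = mx.nd.random.randint(0, 10, shape=(8,), ctx=ctx).astype('float32')

   with autograd.record():
       out = net(data)
       loss = loss_fn(out, label)
       # Scale the loss so small fp16 gradients do not underflow.
       with amp.scale_loss(loss, trainer) as scaled_loss:
           autograd.backward(scaled_loss)
   trainer.step(data.shape[0])
   ```

   This path avoids calling `net.cast("float16")` by hand, which is what triggers the uniform-type check failure in the original report.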
