On 08-12-17 09:29, Rémi Coulom wrote:
> Hi,
> 
> Nvidia just announced the release of their new GPU for deep learning: 
> https://www.theverge.com/2017/12/8/16750326/nvidia-titan-v-announced-specs-price-release-date
>
>  "The Titan V is available today and is limited to two per
> customer."
> 
> $2,999, 110 TFLOPS!

You can test Voltas on AWS, the prices are very acceptable.

I had problems getting good convergence with fp16 training, even after
applying all the tricks from NVIDIA's "Mixed Precision Training"
documentation and using the corresponding NVIDIA-caffe branches. It
worked for the policy network, but not for the value network.
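The main trick from that documentation is loss scaling: small gradients
underflow to zero in fp16, so the loss is multiplied by a constant before
backprop and the gradients are divided by it again before the fp32 weight
update. A minimal numpy illustration of the underflow problem (not actual
training code; the scale value is just an example):

```python
import numpy as np

SCALE = 1024.0  # example loss-scale constant

grad_fp32 = np.float32(1e-8)      # a gradient too small for fp16
print(np.float16(grad_fp32))      # underflows to 0.0

scaled = np.float16(grad_fp32 * SCALE)   # survives the cast to fp16
unscaled = np.float32(scaled) / SCALE    # recovered for the fp32 update
print(unscaled)                          # close to the original 1e-8
```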

You only get the 110 TFLOPS when using the Tensor Cores' mixed-precision
matrix multipliers (fp16 inputs with fp32 accumulation); otherwise the
card is not much faster than a 1080 Ti. It has many more cores, but the
clock speed is much lower.
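The fp32 accumulation is not just a detail. A small numpy illustration
(plain loops, not Tensor Core code) of why accumulating in fp16 goes
wrong: once the running sum is large, small addends round away entirely.

```python
import numpy as np

vals = np.full(4096, 0.1, dtype=np.float16)

acc16 = np.float16(0.0)
for v in vals:
    acc16 = np.float16(acc16 + v)   # fp16 accumulator: pure fp16 math

acc32 = np.float32(0.0)
for v in vals:
    acc32 += np.float32(v)          # fp32 accumulator, as in Tensor Cores

# The fp16 sum gets stuck far below the fp32 sum (~409.5) once the
# accumulator grows past the point where +0.1 rounds to nothing.
print(acc16, acc32)
```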

I also had the impression that using the Tensor Cores disables the
Winograd transform, perhaps due to accuracy issues? If so, you lose the
roughly 3x speedup it provides.

Things to consider before plunking down 3000 USD :-)

-- 
GCP
_______________________________________________
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go