On Wednesday, 6 May 2020 at 17:31:39 UTC, Jacob Carlborg wrote:
On 2020-05-06 12:23, data pulverizer wrote:

Yes, I'll do a blog or something on GitHub and link it.

It would be nice if you could get it published on the Dlang blog [1]. One usually gets paid for that. Contact Mike Parker.

[1] https://blog.dlang.org

I'm definitely open to publishing it on the Dlang blog, and getting paid would be nice. I've just done a full reconciliation of the outputs from D and Chapel against Julia's output: they are all the same. In the calculation I used 32-bit floats to minimise memory consumption, and I worked with the 10,000-image MNIST dataset (t10k-images-idx3-ubyte.gz, http://yann.lecun.com/exdb/mnist/) rather than randomly generated data.
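For anyone wanting to reproduce this with the same data, the MNIST files use Yann LeCun's IDX format: a big-endian header (magic number 2051 for idx3-ubyte, then image count, rows, columns) followed by raw unsigned-byte pixels. A minimal Python sketch of a reader (the function name is my own):

```python
import gzip
import struct

def read_idx3_ubyte(path):
    """Read an MNIST idx3-ubyte file, gzipped or not.

    Returns (n_images, rows, cols, pixel_bytes)."""
    opener = gzip.open if path.endswith(".gz") else open
    with opener(path, "rb") as f:
        # Header: four big-endian 32-bit unsigned integers.
        magic, n, rows, cols = struct.unpack(">IIII", f.read(16))
        if magic != 2051:
            raise ValueError("not an idx3-ubyte file")
        data = f.read(n * rows * cols)  # one unsigned byte per pixel
    return n, rows, cols, data
```

Each image is then `rows * cols` consecutive bytes, which can be cast to 32-bit floats before the kernel computation.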

The -O3/-O5 optimization levels on the ldc compiler are instrumental in bringing the times down: compiling with -O2 instead, even with the other flags, gives ~13 seconds for the 10,000-image dataset rather than the very nice 1.5 seconds.
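For anyone reproducing the timings, the invocations look roughly like this. Only the optimization levels come from the post; the source file name and output flags are assumptions:

```shell
# Hypothetical compile lines; kernelmatrix.d is an assumed file name.
ldc2 -O2 kernelmatrix.d -of=kernel_o2   # ~13 s on the 10,000-image set
ldc2 -O3 kernelmatrix.d -of=kernel_o3   # ~1.5 s
```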

As an idea of how kernel matrix computations scale: the file "train-images-idx3-ubyte.gz" contains 60,000 images. Julia performs the kernel matrix calculation in 1340 seconds, while D performs it in 163 seconds. That is not really in line with the first timing: the dataset is 6x larger, so the kernel matrix has 36x as many entries, and I'd expect around 1.5 * 36 = 54 seconds. Chapel performs it in 357 seconds, approximately in line with its original timing. The new kernel matrix consumes about 14 GB of memory, which is why I chose 32-bit floats: it gives me the opportunity to do the kernel matrix calculation on my laptop, which currently has 31 GB of RAM.
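The scaling expectation and the memory figure both follow from the kernel matrix being n x n. A quick back-of-the-envelope check:

```python
# Kernel matrix work and memory grow quadratically with the number of images.
n_small, n_large = 10_000, 60_000
growth = (n_large / n_small) ** 2        # 36x as many matrix entries
expected_d_time = 1.5 * growth           # naive extrapolation of D's 1.5 s

# Memory for a 60,000 x 60,000 matrix of 32-bit (4-byte) floats.
bytes_f32 = n_large * n_large * 4
gib = bytes_f32 / 2**30

print(expected_d_time)   # 54.0 s expected, versus the 163 s observed
print(round(gib, 1))     # 13.4 GiB, i.e. roughly the 14 GB quoted
```

The observed D time being ~3x the quadratic extrapolation suggests the larger run is hitting something beyond raw arithmetic, e.g. cache or memory-bandwidth effects, though that is speculation on my part.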
