Hello,

I have a simulation code that makes intensive use of random number
generation (e.g. normal RNG). I wanted to compare the performance of this
simulation across some NumPy implementations and some Julia
implementations. In the end, in both languages, the respective
best-performing implementations are dominated by the RNG, but Julia wins
by a factor of 4-5, because its RNG is 4-5x faster.

Here are some timing results:

*In IPython (Python 3.5.2 from Anaconda)*

    import numpy as np
    N = 10**6

    %timeit np.random.normal(size=N)
    10 loops, best of 3: 37.1 ms per loop

    %timeit np.random.uniform(size=N)
    100 loops, best of 3: 10.2 ms per loop

*In Julia (0.4.7 x86_64 from Debian testing)*

    N = 10^6

    @time randn(N);
      0.007802 seconds (6 allocations: 7.630 MB)

    @time rand(N);
      0.002059 seconds (8 allocations: 7.630 MB)

(with some variations between trials)

The results are consistent in the sense that generating Gaussian numbers
is 3-4 times slower than generating uniform numbers in both Python and
Julia.
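For what it's worth, the normal/uniform ratios implied by the timings
quoted above (a quick check in Python, using only the numbers reported
here):

    37.1 / 10.2            # NumPy: ~3.6x
    0.007802 / 0.002059    # Julia: ~3.8x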

But how come Julia is 4-5x faster, given that NumPy uses a C
implementation for the entire process? (Mersenne Twister -> uniform
double -> Box-Muller transform to get a Gaussian,
https://github.com/numpy/numpy/blob/master/numpy/random/mtrand/randomkit.c).
I also noticed that Julia uses a different algorithm (the Ziggurat method
of Marsaglia and Tsang,
https://github.com/JuliaLang/julia/blob/master/base/random.jl#L700), but
that doesn't explain the difference for the uniform RNG.
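For illustration, here is a rough NumPy sketch of the Box-Muller idea (not
the actual randomkit code, which as far as I can tell uses the
polar/rejection variant in C and caches one value per pair; this is just to
picture the extra work per Gaussian draw):

    import numpy as np

    def box_muller(n):
        # Sketch only: map pairs of uniforms (u1, u2) to two independent
        # standard normals via the basic Box-Muller transform (assumes n even).
        u1 = np.random.uniform(size=n // 2)
        u2 = np.random.uniform(size=n // 2)
        r = np.sqrt(-2.0 * np.log(u1))
        theta = 2.0 * np.pi * u2
        return np.concatenate((r * np.cos(theta), r * np.sin(theta)))

Each Gaussian sample needs a uniform draw plus log/sqrt/trig work on top of
it, which matches normal being a few times slower than uniform in both
languages, but it says nothing about why the uniform generation itself
differs.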


best,

Pierre
