Take a look at the performance tips <http://docs.julialang.org/en/release-0.4/manual/performance-tips/>. The first time you call a function, Julia compiles it; the compiled code is then cached, so subsequent calls are fast. On my computer I did:
a = rand(1000,1000)
y = similar(a)
@time a*a
@time a*a
@time A_mul_B!(y,a,a)
@time A_mul_B!(y,a,a)

Which gives the output:

  0.435561 seconds (367.13 k allocations: 20.108 MB, 1.58% gc time)
  0.019922 seconds (7 allocations: 7.630 MB)
  0.027144 seconds (53 allocations: 2.875 KB)
  0.016211 seconds (4 allocations: 160 bytes)

Notice how, after compilation, both the allocations and the timings drop dramatically. For a more in-depth look at how Julia achieves its speed (and how to make the most of it), take a look at this blog post <http://www.stochasticlifestyle.com/7-julia-gotchas-handle/>. Julia is a little more complex than MATLAB, but the payoff can be huge once you take the time to understand it. Happy Julia-ing!

On Sunday, October 16, 2016 at 9:45:00 AM UTC-7, majid.z...@gmail.com wrote:
>
> I have run the same matrix multiplication in both MATLAB and Julia, but
> MATLAB is much faster than Julia. I have used both the A_mul_B! and *()
> functions.
> My code is:
>
> In MATLAB:
> tic
> a = rand(1000,1000)
> a*a
> toc
> The output is: Elapsed time is 0.193979 seconds
>
> In Julia:
> a = rand(1000,1000)
> y = similar(a)
> @time a*a
> @time A_mul_B!(y,a,a)
>
> The output is:
> 1.575159 seconds
> 1.497884 seconds
> Majid
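As an aside, if you want timings that exclude compilation overhead automatically, the BenchmarkTools.jl package warms up the expression and runs it many times, reporting the minimum. A minimal sketch, assuming the package has been installed with Pkg.add("BenchmarkTools"):

```julia
using BenchmarkTools

a = rand(1000, 1000)
y = similar(a)

# @btime warms up and re-runs the expression many times,
# so the one-time compilation cost is excluded from the report.
@btime $a * $a
@btime A_mul_B!($y, $a, $a)
```

The `$` interpolation passes the global arrays into the benchmark as local values, so the timing isn't polluted by the cost of accessing untyped globals.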