To give this discussion some facts, I have done some benchmarking of my own.

Matlab R2013a:

function [ y ] = perf( )
  N = 10000000;
  x = rand(N,1);
  y = x + x .* x + x .* x;
end

>> tic;y=perf();toc;
Elapsed time is 0.177664 seconds.

Julia 0.3 prerelease:

 function perf()
   N = 10000000
   x = rand(N)
   y = x + x .* x + x .* x
 end

julia> @time perf()
elapsed time: 0.232852894 seconds (400002808 bytes allocated)
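
As a back-of-the-envelope sanity check on that number (my own estimate, not
something I measured separately): x from rand(N) is 10^7 Float64 values, i.e.
about 80 MB, and the vectorized expression presumably builds four more
full-length temporary arrays (one for each x .* x, one for the inner +, and
one for the final result), so roughly 5 x 80 MB = 400 MB, which matches the
~400,000,000 bytes reported above.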

Using the Devectorize.jl package:

 using Devectorize   # provides the @devec macro

 function perf_devec()
   N = 10000000
   x = rand(N)
   @devec y = x + x .* x + x .* x
 end

julia> @time perf_devec()
elapsed time: 0.084605794 seconds (160000664 bytes allocated)
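
For reference, this is roughly what I understand manual devectorization to
look like, written out by hand as an explicit loop (a sketch of my own;
perf_loop is just my name for it, not part of any package). It allocates the
result once and does only scalar arithmetic, so I would expect it to land in
the same ballpark as the @devec timing:

 # Sketch for illustration: a hand-devectorized version of perf().
 function perf_loop()
   N = 10000000
   x = rand(N)
   y = similar(x)                      # allocate the result once
   for i = 1:N
     xi = x[i]
     y[i] = xi + xi * xi + xi * xi     # scalar arithmetic, no array temporaries
   end
   y
 end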

So this all seems pretty consistent to me. Matlab is a little better on 
vectorized code, presumably because they have better memory caching. But 
explicit devectorization using the @devec macro still performs best. So using 
vectorized code in Julia is fine and reasonably fast, and if someone wants to 
do performance tweaking, I don't see a problem with telling them about 
devectorization.
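
As an aside, the add!(A, B) mentioned in the quoted thread below is, as far
as I know, not a Base function; a minimal sketch of such an in-place helper
(my own naming, just to illustrate how the temporary in A = A + B can be
avoided) could look like this:

 # Hypothetical in-place addition: overwrites A with A + B and allocates
 # nothing, unlike A = A + B, which builds a temporary array.
 function add!(A, B)
   @assert size(A) == size(B)
   for i = 1:length(A)
     A[i] += B[i]
   end
   A
 end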

On Thursday, May 22, 2014 00:20:21 UTC+2, Tim Holy wrote:
>
> I suspect they still generate temporaries, but probably have a better
> garbage collector and can re-use the same temporary across iterations.
> #5227 should take care of the first, and Jeff has posted his hopes that
> the second will also be implemented (someday). I suspect both of those
> would be easier than transforming vectorized expressions into optimized
> code without temporaries, and would achieve many of the same benefits.
>
> --Tim
>
> On Wednesday, May 21, 2014 02:44:50 PM Tobias Knopp wrote:
> > Reading your posts I get the impression that Matlab outperforms Julia
> > in vectorized code. I think it would be really great if you could give
> > some numbers to let us know how far Julia lags behind. It seems they
> > have an outstanding JIT if it can transform vectorized expressions into
> > optimal code without temporaries.
> >
> > > On Wednesday, May 21, 2014 20:36:07 UTC+2, Andreas Lobinger wrote:
> > > Hello colleague,
> > >
> > > On Wednesday, May 21, 2014 3:41:53 PM UTC+2, Tobias Knopp wrote:
> > >> Julia does have high level SIMD constructs.
> > >> But again, does Matlab avoid temporaries in vectorized expressions?
> > >
> > > I don't know, but my Matlab experience (and I belong to the group of
> > > profiler users, and my Unix insight doesn't stop at ps) shows me that
> > > they somehow have a smart way of allocating temporaries. I was under
> > > the impression that the point of using a high-level language is that
> > > I do not need to care how the processing works internally; I just
> > > write code that expresses what I want to calculate (I know this is an
> > > idealized environment, but still...). Matlab has had a JIT compiler
> > > for a few years now, and I assume that below a certain level of
> > > abstraction we would see technology like in Julia. One thing I'd bet
> > > a premium bottle of tomato ketchup on is that they have dedicated
> > > logic for array operators and an optimizer for that.
> > >
> > > In Julia (and I'm writing this only as an observation on the
> > > discussions here; my Julia profiler and LLVM code skills are still
> > > evolving) something like A = A + B with A and B arrays seemed to be
> > > handled as
> > >
> > >   allocate temp as size(A)
> > >   temp = +(A, B)
> > >   assign A to temp
> > >
> > > while something like add!(A, B) would be the solution without
> > > additional allocation.
> > >
> > > There might be a reason in the Julia architecture why it's like this,
> > > and I'm looking forward to seeing something like an expression-level
> > > optimizer rather than depending on LLVM's internal optimizers.
> > >
> > > LLVM provides infrastructure for array datatypes, but I'm missing
> > > information on how this is already used in Julia.
> > >
> > >> The fact that for loops are slower than vectorized expressions in
> > >> Matlab does not mean that the vectorized code is optimal and any
> > >> better than what we have in Julia.
> > >
> > > There is a reason why there is a MAT in the name. Some time ago I was
> > > busy finding out why we hit a hard boundary when optimizing some
> > > runtime in a simulation, and (of course I could be misinterpreting
> > > something) it lined up with the FPU performance of the system. In a
> > > lot of places Matlab was slow, and in some places it still is. For
> > > matrix and vector operations, basic or linear algebra, Matlab is
> > > reasonably fast. And exactly that situation leads to the effect that
> > > people try to vectorize things that shouldn't be vectorized.
>
