But is 

M += ((A*B) + (d*C) + (E .* M0)) 

any slower in Julia than in Matlab? Avoiding the temporaries is a lot less 
trivial than it might look, and I am not sure that Matlab actually avoids them.
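
For what it's worth, here is a minimal sketch of what avoiding the temporaries 
could look like; it assumes a recent Julia with the LinearAlgebra standard 
library, and the function name accumulate_into! and the preallocated buffer AB 
are made up for illustration:

using LinearAlgebra

# A*B must have the same size as M; C, E, M0 have the same size as M;
# d is a scalar; AB is a preallocated buffer of the same size as M.
function accumulate_into!(M, AB, A, B, d, C, E, M0)
    mul!(AB, A, B)                     # write A*B into AB, no fresh allocation
    @inbounds for i in eachindex(M, AB, C, E, M0)
        M[i] += AB[i] + d*C[i] + E[i]*M0[i]
    end
    return M
end

The elementwise part can also be written as a fused broadcast, 
M .+= AB .+ d.*C .+ E.*M0, which likewise avoids the intermediates, but the A*B 
product still needs either its own buffer or an explicit triple loop.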

On Wednesday, May 21, 2014 2:09:57 PM UTC+2, Andreas Lobinger wrote:
>
> Hello colleague,
>
> On Wednesday, May 21, 2014 7:33:01 AM UTC+2, Andreas Noack Jensen wrote:
>>
>> Please consider b += Diagonal(c) instead of diagm. Diagonal(c) only 
>> stores the diagonal elements but works like diagm(c) for matrix arithmetic.
>>
>>> Vectorized code is always easier to understand
>>
>> That is a strong statement. I have vectorised MATLAB code with repmat and 
>> meshgrids that is completely unreadable, but would be fairly easy to follow 
>> if written as loops. I really enjoy that I can just write a loop in Julia 
>> without slow execution.
>>
>
> If you write code in Matlab (or similar) that uses repmat (nowadays 
> bsxfun) and meshgrid (and reshape), you are operating on the wrong side of 
> vectorising your code. Most likely (and there might be good reasons for 
> that) your data is not in the 'right' shape, so you reorder more than you 
> calculate. For problems like this, explicit loops are often the better way 
> (and straightforward in Julia), OR it takes some thinking about why the data 
> needs to be reordered for certain operations. 
>
> The right side of vectorized code is the SIMD way of thinking, which does 
> operations (sometimes linear algebra) on matrix/vector forms and in the best 
> case leads to one-liners like
>
> M += ((A*B) + (d*C) + (E .* M0))
>
> which are easy to read/write/understand.
>
> Expressions like this seem to create an incredible amount of intermediates 
> and temporary allocations in Julia, while actually the output array (M) is 
> already allocated and the + and * operations just need to be executed with 
> the 'right' indexing.
>
> Some people tend to solve this by writing explicit loops, creating half a 
> page of code for the above one-liner.
>
>
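
On the Diagonal(c) suggestion quoted above, a small illustrative sketch (sizes 
made up; assumes the LinearAlgebra standard library): diagm materialises a full 
dense matrix just to add the diagonal, while Diagonal(c) only wraps the vector c.

using LinearAlgebra

n = 1_000
b = rand(n, n)
c = rand(n)

b += diagm(0 => c)   # builds a dense n-by-n temporary (the diagm(c) of the quoted message)
b += Diagonal(c)     # Diagonal(c) just wraps c; no dense temporary for the diagonal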
