So, the issue here was the loop indexing order clashing with the column-major 
storage of multi-dimensional arrays?
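
A minimal sketch of that loop-order point, using a hypothetical 3-D array a: Julia 
stores arrays column-major, so the first index should vary fastest, i.e. sit in the 
innermost loop.

    # assumes nx, ny, nz and a (an nx-by-ny-by-nz Array{Float64,3}) are defined
    # column-major friendly: the first index i varies fastest
    for k in 1:nz, j in 1:ny, i in 1:nx
        a[i,j,k] *= 2.0   # consecutive i values are adjacent in memory
    end

    # cache-unfriendly: the last index k varies fastest, striding through memory
    for i in 1:nx, j in 1:ny, k in 1:nz
        a[i,j,k] *= 2.0
    end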

On Sunday, May 8, 2016 at 10:10:54 AM UTC-7, Tk wrote:
>
> Could you try replacing
>    for i in 1:nx, j in 1:ny, k in 1:nz
> to
>    for k in 1:nz, j in 1:ny, i in 1:nx
> because your arrays are defined like a[i,j,k]?
>
> Another question is, how many cores is your Matlab code using?
>
>
> On Monday, May 9, 2016 at 2:03:58 AM UTC+9, feza wrote:
>>
>> Milan
>>
>> Script is here: 
>> https://gist.github.com/musmo/27436a340b41c01d51d557a655276783
>>
>>
>> On Sunday, May 8, 2016 at 12:40:44 PM UTC-4, feza wrote:
>>>
>>> Thanks for the tip (initially I just translated the Matlab verbatim).
>>>
>>> Now I have made all the changes: in-place operations and direct 
>>> function calls.
>>> Despite these changes, Matlab takes 3.6 seconds and the new Julia code 7.6 seconds.
>>> TBH the results of this experiment are frustrating; I was hoping Julia 
>>> was going to provide a huge speedup (on the level of C).
>>>
>>> Am I still missing anything in the Julia code that is crucial to speed?
>>> @code_warntype looks OK except for a few red Unions, which I don't think 
>>> are under my control.
>>>
>>>
>>> On Sunday, May 8, 2016 at 8:15:25 AM UTC-4, Tim Holy wrote:
>>>>
>>>> One of the really cool features of julia is that functions are allowed 
>>>> to have 
>>>> more than 0 arguments. It's even considered good style, and I highly 
>>>> recommend 
>>>> making use of this awesome feature in your code! :-) 
>>>>
>>>> In other words: try passing all variables as arguments to the 
>>>> functions. Even 
>>>> though you're wrapping everything in a function, performance-wise 
>>>> you're 
>>>> running up against an inference problem 
>>>> (https://github.com/JuliaLang/julia/issues/15276). In terms of coding 
>>>> style, 
>>>> you're still essentially using global variables. Honestly, these make 
>>>> your 
>>>> life harder in the end (
>>>> http://c2.com/cgi/wiki?GlobalVariablesAreBad)---it's 
>>>> not a bad thing that julia provides gentle encouragement to avoid using 
>>>> them, 
>>>> and you're losing out on opportunities by trying to sidestep that 
>>>> encouragement. 
>>>>
>>>> Best, 
>>>> --Tim 
>>>>
>>>> On Sunday, May 08, 2016 01:38:41 AM feza wrote: 
>>>> > That's no surprise, your CPU is better :) 
>>>> > 
>>>> > Regarding devectorization 
>>>> >     for l in 1:q 
>>>> >         for k in 1:nz 
>>>> >             for j in 1:ny 
>>>> >                 for i in 1:nx 
>>>> >                     u = ux[i,j,k] 
>>>> >                     v = uy[i,j,k] 
>>>> >                     w = uz[i,j,k] 
>>>> > 
>>>> >                     cu = c[k,1]*u + c[k,2]*v + c[k,3]*w 
>>>> >                     u2 = u*u + v*v + w*w 
>>>> >                     feq[i,j,k,l] = weights[k]*ρ[i,j,k]*(1 + 3*cu + 9/2*(cu*cu) - 3/2*u2) 
>>>> >                     f[i,j,k,l] = f[i,j,k,l]*(1-ω) + ω*feq[i,j,k,l] 
>>>> >                 end 
>>>> >             end 
>>>> >         end 
>>>> >     end 
>>>> > 
>>>> > Actually makes the code a lot slower.... 
>>>> > 
>>>> > On Sunday, May 8, 2016 at 4:37:18 AM UTC-4, Patrick Kofod Mogensen 
>>>> wrote: 
>>>> > > For what it's worth, it runs in about 3-4 seconds on my computer on 
>>>> > > the latest v0.4. 
>>>> > > 
>>>> > > CPU : Intel(R) Core(TM) i7-4600U CPU @ 2.10GHz 
>>>> > > 
>>>> > > On Sunday, May 8, 2016 at 10:33:14 AM UTC+2, Patrick Kofod Mogensen 
>>>> wrote: 
>>>> > >> As for the v0.5 performance (which is horrible), I think it's the 
>>>> > >> boxing issue with closures 
>>>> > >> (https://github.com/JuliaLang/julia/issues/15276), right? 
>>>> > >> 
>>>> > >> On Sunday, May 8, 2016 at 10:29:59 AM UTC+2, STAR0SS wrote: 
>>>> > >>> You are using a lot of vectorized operations, and Julia isn't as 
>>>> > >>> good as Matlab with those. 
>>>> > >>> 
>>>> > >>> The usual solution is to devectorize your code and use loops (except 
>>>> > >>> for matrix multiplication if you have large matrices). 
>>>>
>>>>
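
To make Tim's pass-everything-as-arguments advice concrete, here is a minimal 
sketch with hypothetical names (collide!, and the direction index l assumed for 
c and weights). The point is that the kernel receives every array and scalar it 
touches as an argument, so inference never has to reason about global or 
captured variables (cf. https://github.com/JuliaLang/julia/issues/15276).

    # hypothetical kernel: everything it reads or writes is passed in, nothing global
    function collide!(f, feq, ρ, ux, uy, uz, c, weights, ω)
        nx, ny, nz, q = size(f)
        for l in 1:q, k in 1:nz, j in 1:ny, i in 1:nx
            u, v, w = ux[i,j,k], uy[i,j,k], uz[i,j,k]
            cu = c[l,1]*u + c[l,2]*v + c[l,3]*w
            u2 = u*u + v*v + w*w
            feq[i,j,k,l] = weights[l]*ρ[i,j,k]*(1 + 3*cu + 9/2*(cu*cu) - 3/2*u2)
            f[i,j,k,l] = f[i,j,k,l]*(1 - ω) + ω*feq[i,j,k,l]
        end
        return nothing
    end

    collide!(f, feq, ρ, ux, uy, uz, c, weights, ω)   # call with arrays built at top level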
