[julia-users] Re: ANN: PimpMyREPL.jl

2016-09-07 Thread feza
Thanks, much better.

On Wednesday, September 7, 2016 at 3:59:56 AM UTC-4, Kristoffer Carlsson 
wrote:
>
> After discussion with the Julia community stewards I have decided to 
> rename this package. It is now named "OhMyREPL" and can be found at: 
> https://github.com/KristofferC/OhMyREPL.jl. I apologize for the 
> inconvenience.



[julia-users] let block question

2016-08-12 Thread feza
Is there any difference between

version1:

let x
x = 0
end


vs.

version2:

let 
local x = 0
end


vs

version3:
let x = 0
end



versions 1 and 2:

; Function Attrs: uwtable 
define i64 @julia_t2_67462() #0 { 
top:  
  ret i64 0   
} 

version 3:

; Function Attrs: uwtable
define void @julia_t3_67453() #0 {
top:
  ret void
}



Looking at @code_llvm, it seems like version 1 and version 2 are identical, 
and version 3 is almost identical except that it returns void.

Also, is there a good reference for understanding the output of @code_llvm?
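For reference, the LLVM snippets above can be reproduced by wrapping each let-block variant in a zero-argument function (the names t1/t3 are illustrative) and inspecting it with @code_llvm:

```
# Wrap each let-block variant in a zero-argument function so the
# introspection macros can be applied to a call.
t1() = let x
    x = 0          # assignment is the last expression, so the block returns 0
end

t3() = let x = 0
end                # empty body: the block returns nothing

@code_llvm t1()    # versions 1 and 2: the function returns i64 0
@code_llvm t3()    # version 3: an empty body returns nothing, hence ret void
```

That explains the difference seen above: in versions 1 and 2 the last expression is the assignment `x = 0`, whose value (0) is returned, while in version 3 the let body is empty, so the block returns `nothing` and the LLVM function returns void.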


Re: [julia-users] JuliaCon schedule announced

2016-07-09 Thread feza
Patiently waiting on Stefan's talk.

On Sunday, July 3, 2016 at 1:58:48 PM UTC-4, Viral Shah wrote:
>
> They will keep trickling in. We will announce widely when everything is 
> up. 
>
> -viral 
>
>
> > On 03-Jul-2016, at 9:25 AM, dnm > 
> wrote: 
> > 
> > Will Stefan's talk and the other keynote be up? 
> > 
> > On Friday, July 1, 2016 at 12:36:19 AM UTC-4, Christian Peel wrote: 
> > A link: https://www.youtube.com/user/JuliaLanguage/videos 
> > 
> > On Thu, Jun 30, 2016 at 3:43 AM, Viral Shah  wrote: 
> > They have already started appearing. Hopefully by next week they will 
> all be up and we will announce then. 
> > 
> > -viral 
> > 
> > On Jun 28, 2016 11:44 AM, "mmh"  wrote: 
> > Hi Viral, we have an eta on when the talks will be up on youtube? 
> > 
> > On Wednesday, June 22, 2016 at 11:13:25 AM UTC-4, Viral Shah wrote: 
> > Live streaming was too expensive and we did not do it this year, but we 
> certainly want to next year. 
> > 
> > -viral 
> > 
> > On Jun 22, 2016 10:33 AM, "Gabriel Gellner"  
> wrote: 
> > For future conferences I would be super stoked to pay some fee to have 
> early access if that would help at all. Super stoked to see so many of 
> these sweet talks! 
> > 
> > On Wednesday, June 22, 2016 at 6:49:43 AM UTC-7, Viral Shah wrote: 
> > Yes they will be and hopefully much sooner than last year. 
> > 
> > -viral 
> > 
> > On Jun 22, 2016 7:31 AM, "nuffe"  wrote: 
> > Will all the talks be posted on youtube, like last year? If so, do you 
> know when? Thank you (overseas enthusiast) 
> > 
> > On Thursday, June 9, 2016 at 11:34:18 PM UTC+2, Viral Shah wrote: 
> > The JuliaCon talks and workshop schedule has now been announced. 
> > 
> > http://juliacon.org/schedule.html 
> > 
> > Please buy your tickets if you have been procrastinating. We have seen 
> tickets going much faster this year, and waiting until the day before is 
> unlikely to work this year. Please also spread the message to your friends 
> and colleagues and relevant mailing lists. Here's the conference poster for 
> emailing and printing: 
> > 
> > http://juliacon.org/pdf/juliacon2016poster3.pdf 
> > 
> > -viral 
> > 
> > 
> > 
> > -- 
> > chris...@ieee.org 
>
>

[julia-users] Could someone explain @static?

2016-07-09 Thread feza
The docs read

@static()

Partially evaluates an expression at parse time.

For example, @static is_windows() ? foo : bar will evaluate is_windows() and 
insert either foo or bar into the expression. This is useful in cases where 
a construct would be invalid on other platforms, such as a ccall to a 
non-existent function.

My understanding, at a very high level, is that Julia code is first parsed, 
then macro-expanded, then compiled.

Now, why is it important for this to be handled at parse time? I don't 
really understand the following: `This is useful in cases where a construct 
would be invalid on other platforms, such as a ccall to a non-existent 
function.`

Why couldn't a simple

` if is_windows(); foo; else; bar; end ` 

work, and why do we need @static?
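A sketch of the difference (the ccall targets here are just illustrative): with a plain runtime `if`, both branches are still parsed and compiled on every platform, so a ccall to a symbol that does not exist on the current platform can fail even though that branch would never run. `@static` resolves the condition at parse time and keeps only the branch for the current platform:

```
# @static evaluates the condition while the code is being parsed and
# inserts only the surviving branch, so the other platform's ccall
# never reaches the compiler at all.
@static if is_windows()
    current_pid() = ccall(:GetCurrentProcessId, stdcall, UInt32, ())
else
    current_pid() = ccall(:getpid, Cint, ())
end

current_pid()
```

With a plain `if`, the non-matching branch would still be lowered and compiled, and a reference to a function that doesn't exist on this platform can then error.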


Re: [julia-users] Re: why's my julia code running slower than matlab, despite performance tips

2016-05-08 Thread feza
I mean the revised script runs just as fast, if not a tad faster, on the 
latest master as it does on 0.4.5 : )

On Sunday, May 8, 2016 at 5:20:08 PM UTC-4, Patrick Kofod Mogensen wrote:
>
> Same as v0.4, or same as before you changed the code?
>
> On Sunday, May 8, 2016 at 8:55:00 PM UTC+2, feza wrote:
>>
>> roughly the same speed.
>>
>> On Sunday, May 8, 2016 at 2:44:19 PM UTC-4, Patrick Kofod Mogensen wrote:
>>>
>>> out of curiosity, what about v0.5?
>>
>>

Re: [julia-users] Re: why's my julia code running slower than matlab, despite performance tips

2016-05-08 Thread feza
roughly the same speed.

On Sunday, May 8, 2016 at 2:44:19 PM UTC-4, Patrick Kofod Mogensen wrote:
>
> out of curiosity, what about v0.5?



Re: [julia-users] Re: why's my julia code running slower than matlab, despite performance tips

2016-05-08 Thread feza
With all that done, the Julia code runs about as fast as, if not faster 
than, Matlab (using 4 threads)

On Sunday, May 8, 2016 at 2:21:42 PM UTC-4, feza wrote:
>
> Well first problem was that the vectorized version of my code was very 
> slow.
> Then I devectorized still slow, because of the index clashing with the 
> column-major storage
> I assumed for i =1:10,j=1:10,k=1:10  does the index i first then j then k 
> wrongly...
>
> On Sunday, May 8, 2016 at 2:04:37 PM UTC-4, David Gold wrote:
>>
>> So, the issue here was the indexing clashing up against the column-major 
>> storage of multi-dimensional arrays?
>>
>> On Sunday, May 8, 2016 at 10:10:54 AM UTC-7, Tk wrote:
>>>
>>> Could you try replacing
>>>for i in 1:nx, j in 1:ny, k in 1:nz
>>> to
>>>for k in 1:nz, j in 1:ny, i in 1:nx
>>> because your arrays are defined like a[i,j,k]?
>>>
>>> Another question is, how many cores is your Matlab code using?
>>>
>>>
>>> On Monday, May 9, 2016 at 2:03:58 AM UTC+9, feza wrote:
>>>>
>>>> Milan
>>>>
>>>> Script is here: 
>>>> https://gist.github.com/musmo/27436a340b41c01d51d557a655276783
>>>>
>>>>
>>>> On Sunday, May 8, 2016 at 12:40:44 PM UTC-4, feza wrote:
>>>>>
>>>>> Thanks for the tip (initially I just transllated the matlab verbatim)
>>>>>
>>>>> Now I have made all the changes. In place operations, and direct 
>>>>> function calls.
>>>>> Despite these changes. Matlab is 3.6 seconds, new Julia  7.6 seconds
>>>>> TBH the results of this experiment are frustrating, I was hoping Julia 
>>>>> was going to provide a huge speedup (on the level of c)
>>>>>
>>>>> Am I still missing anything in the Julia code that is crucial to speed?
>>>>> @code_warntype looks ok sans a few red unions which i don't think are 
>>>>> in my control
>>>>>
>>>>>
>>>>> On Sunday, May 8, 2016 at 8:15:25 AM UTC-4, Tim Holy wrote:
>>>>>>
>>>>>> One of the really cool features of julia is that functions are 
>>>>>> allowed to have 
>>>>>> more than 0 arguments. It's even considered good style, and I highly 
>>>>>> recommend 
>>>>>> making use of this awesome feature in your code! :-) 
>>>>>>
>>>>>> In other words: try passing all variables as arguments to the 
>>>>>> functions. Even 
>>>>>> though you're wrapping everything in a function, performance-wise 
>>>>>> you're 
>>>>>> running up against an inference problem 
>>>>>> (https://github.com/JuliaLang/julia/issues/15276). In terms of 
>>>>>> coding style, 
>>>>>> you're still essentially using global variables. Honestly, these make 
>>>>>> your 
>>>>>> life harder in the end (
>>>>>> http://c2.com/cgi/wiki?GlobalVariablesAreBad)---it's 
>>>>>> not a bad thing that julia provides gentle encouragement to avoid 
>>>>>> using them, 
>>>>>> and you're losing out on opportunities by trying to sidestep that 
>>>>>> encouragement. 
>>>>>>
>>>>>> Best, 
>>>>>> --Tim 
>>>>>>
>>>>>> On Sunday, May 08, 2016 01:38:41 AM feza wrote: 
>>>>>> > That's no surprise your CPU is better :) 
>>>>>> > 
>>>>>> > Regarding devectorization 
>>>>>> > for l in 1:q 
>>>>>> > for k in 1:nz 
>>>>>> > for j in 1:ny 
>>>>>> > for i in 1:nx 
>>>>>> > u = ux[i,j,k] 
>>>>>> > v = uy[i,j,k] 
>>>>>> > w = uz[i,j,k] 
>>>>>> > 
>>>>>> > cu = c[k,1]*u  + c[k,2]*v + c[k,3]*w 
>>>>>> > u2 = u*u + v*v + w*w 
>>>>>> > feq[i,j,k,l] = weights[k]*ρ[i,j,k]*(1 + 3*cu + 
>>>>>> 9/2*(cu*cu) 
>>>>>> > - 3/2*u2) 
>>>>>> > f[i,j,k,l] = f[i,j,k,l]*(1-ω) + ω*feq[i,j,k,l] 
>>>>>> >   end 
>>>>>> >   end 
>>>>>> >   end 
>>>>>> >  end 
>>>>>> > 
>>>>>> > Actually makes the code a lot slower 
>>>>>> > 
>>>>>> > On Sunday, May 8, 2016 at 4:37:18 AM UTC-4, Patrick Kofod Mogensen 
>>>>>> wrote: 
>>>>>> > > For what it's worth  it run in about 3-4 seconds on my computer 
>>>>>> on latest 
>>>>>> > > v0.4. 
>>>>>> > > 
>>>>>> > > CPU : Intel(R) Core(TM) i7-4600U CPU @ 2.10GHz 
>>>>>> > > 
>>>>>> > > On Sunday, May 8, 2016 at 10:33:14 AM UTC+2, Patrick Kofod 
>>>>>> Mogensen wrote: 
>>>>>> > >> As for the v0.5 performance (which is horrible), I think it's 
>>>>>> the boxing 
>>>>>> > >> issue with closure 
>>>>>> https://github.com/JuliaLang/julia/issues/15276 . 
>>>>>> > >> Right? 
>>>>>> > >> 
>>>>>> > >> On Sunday, May 8, 2016 at 10:29:59 AM UTC+2, STAR0SS wrote: 
>>>>>> > >>> You are using a lot of vectorized operations and Julia isn't as 
>>>>>> good as 
>>>>>> > >>> matlab is with those. 
>>>>>> > >>> 
>>>>>> > >>> The usual solution is to devectorized your code and to use 
>>>>>> loops (except 
>>>>>> > >>> for matrix multiplication if you have large matrices). 
>>>>>>
>>>>>>

Re: [julia-users] Re: why's my julia code running slower than matlab, despite performance tips

2016-05-08 Thread feza
Well, the first problem was that the vectorized version of my code was very slow.
Then I devectorized and it was still slow, because the loop index order 
clashed with column-major storage:
I wrongly assumed that for i = 1:10, j = 1:10, k = 1:10 iterates the i index 
fastest...

On Sunday, May 8, 2016 at 2:04:37 PM UTC-4, David Gold wrote:
>
> So, the issue here was the indexing clashing up against the column-major 
> storage of multi-dimensional arrays?
>

Re: [julia-users] Re: why's my julia code running slower than matlab, despite performance tips

2016-05-08 Thread feza
Wow, thank you guys!
I totally thought

for i in 1:nx, j in 1:ny, k in 1:nz


ran the i index first, then j, then k!

This has been a great learning experience.

Much appreciated; now the Julia code is about twice as fast!
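For anyone else tripped by this, a minimal sketch of the loop-order point (the array and function names are illustrative). Julia's comma loop syntax nests left-to-right, so the first variable is the outermost loop, and Julia arrays are column-major, so the first index should vary fastest, i.e. sit in the innermost loop:

```
# Cache-unfriendly: `for i, j, k` makes i the OUTERMOST loop and k the
# innermost, but a[i,j,k] is contiguous in i, so consecutive iterations
# stride through memory.
function fill_slow!(a)
    for i in 1:size(a,1), j in 1:size(a,2), k in 1:size(a,3)
        a[i,j,k] = i + j + k
    end
    return a
end

# Cache-friendly: i is the innermost loop, matching column-major storage.
function fill_fast!(a)
    for k in 1:size(a,3), j in 1:size(a,2), i in 1:size(a,1)
        a[i,j,k] = i + j + k
    end
    return a
end

a = zeros(64, 64, 64)
@time fill_slow!(a)
@time fill_fast!(a)
```

On arrays large enough to spill out of cache, the second version is typically several times faster even though both compute the same result.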


On Sunday, May 8, 2016 at 1:12:30 PM UTC-4, Tk wrote:
>
> Also try:
> julia -O --check-bounds=no yourcode.jl
>

Re: [julia-users] Re: why's my julia code running slower than matlab, despite performance tips

2016-05-08 Thread feza
Milan

Script is 
here: https://gist.github.com/musmo/27436a340b41c01d51d557a655276783


On Sunday, May 8, 2016 at 12:40:44 PM UTC-4, feza wrote:
>
> Thanks for the tip (initially I just transllated the matlab verbatim)
>
> Now I have made all the changes. In place operations, and direct function 
> calls.
> Despite these changes. Matlab is 3.6 seconds, new Julia  7.6 seconds
> TBH the results of this experiment are frustrating, I was hoping Julia was 
> going to provide a huge speedup (on the level of c)
>
> Am I still missing anything in the Julia code that is crucial to speed?
> @code_warntype looks ok sans a few red unions which i don't think are in 
> my control
>
>

Re: [julia-users] Re: why's my julia code running slower than matlab, despite performance tips

2016-05-08 Thread feza
Thanks for the tip (initially I just translated the Matlab verbatim).

Now I have made all the changes: in-place operations and direct function 
calls.
Despite these changes, Matlab takes 3.6 seconds and the new Julia 7.6 seconds.
TBH the results of this experiment are frustrating; I was hoping Julia would 
provide a huge speedup (on the level of C).

Am I still missing anything in the Julia code that is crucial for speed?
@code_warntype looks OK save a few red Unions, which I don't think are in my 
control.


On Sunday, May 8, 2016 at 8:15:25 AM UTC-4, Tim Holy wrote:
>
> One of the really cool features of julia is that functions are allowed to 
> have 
> more than 0 arguments. It's even considered good style, and I highly 
> recommend 
> making use of this awesome feature in your code! :-) 
>
> In other words: try passing all variables as arguments to the functions. 
> Even 
> though you're wrapping everything in a function, performance-wise you're 
> running up against an inference problem 
> (https://github.com/JuliaLang/julia/issues/15276). In terms of coding 
> style, 
> you're still essentially using global variables. Honestly, these make your 
> life harder in the end (
> http://c2.com/cgi/wiki?GlobalVariablesAreBad)---it's 
> not a bad thing that julia provides gentle encouragement to avoid using 
> them, 
> and you're losing out on opportunities by trying to sidestep that 
> encouragement. 
>
> Best, 
> --Tim 
>

[julia-users] Re: why's my julia code running slower than matlab, despite performance tips

2016-05-08 Thread feza
That's no surprise your CPU is better :) 

Regarding devectorization 
for l in 1:q
    for k in 1:nz
        for j in 1:ny
            for i in 1:nx
                u = ux[i,j,k]
                v = uy[i,j,k]
                w = uz[i,j,k]

                cu = c[k,1]*u + c[k,2]*v + c[k,3]*w
                u2 = u*u + v*v + w*w
                feq[i,j,k,l] = weights[k]*ρ[i,j,k]*(1 + 3*cu + 9/2*(cu*cu) - 3/2*u2)
                f[i,j,k,l] = f[i,j,k,l]*(1-ω) + ω*feq[i,j,k,l]
            end
        end
    end
end

Actually makes the code a lot slower

On Sunday, May 8, 2016 at 4:37:18 AM UTC-4, Patrick Kofod Mogensen wrote:
>
> For what it's worth  it run in about 3-4 seconds on my computer on latest 
> v0.4. 
>
> CPU : Intel(R) Core(TM) i7-4600U CPU @ 2.10GHz
>
> On Sunday, May 8, 2016 at 10:33:14 AM UTC+2, Patrick Kofod Mogensen wrote:
>>
>> As for the v0.5 performance (which is horrible), I think it's the boxing 
>> issue with closure https://github.com/JuliaLang/julia/issues/15276 . 
>> Right?
>>
>> On Sunday, May 8, 2016 at 10:29:59 AM UTC+2, STAR0SS wrote:
>>>
>>> You are using a lot of vectorized operations and Julia isn't as good as 
>>> matlab is with those.
>>>
>>> The usual solution is to devectorized your code and to use loops (except 
>>> for matrix multiplication if you have large matrices).
>>>
>>

[julia-users] Re: why's my julia code running slower than matlab, despite performance tips

2016-05-08 Thread feza
Good catch, although this still doesn't explain away the difference.

@code_warntype shows me feq, f, ρ, ux, uy, uz are red for some reason, 
even though I have explicitly stated their types...


On Sunday, May 8, 2016 at 4:13:08 AM UTC-4, michae...@gmail.com wrote:
>
> I see that c is a constant array of Ints, and its elements multiply ux, uy 
> and uz in a loop, where ux, uy and uz are arrays of floats, so there's a 
> type stability problem.
>
> On Sunday, May 8, 2016 at 9:18:09 AM UTC+2, feza wrote:
>>
>> https://gist.github.com/musmo/27436a340b41c01d51d557a655276783
>>
>> On Sunday, May 8, 2016 at 3:17:39 AM UTC-4, feza wrote:
>>>
>>> I have read the performance section and believe I have followed all the 
>>> suggested guidelines
>>>
>>> The same matlab script takes less than 3 seconds, julia 0.45  9.7 
>>> seconds  (julia 0.5 is even worse...)
>>>
>>> https://gist.github.com/musmo/27436a340b41c01d51d557a655276783
>>>
>>>

[julia-users] Re: why's my julia code running slower than matlab, despite performance tips

2016-05-08 Thread feza
https://gist.github.com/musmo/27436a340b41c01d51d557a655276783

On Sunday, May 8, 2016 at 3:17:39 AM UTC-4, feza wrote:
>
> I have read the performance section and believe I have followed all the 
> suggested guidelines
>
> The same matlab script takes less than 3 seconds, julia 0.45  9.7 seconds 
>  (julia 0.5 is even worse...)
>
> https://gist.github.com/musmo/27436a340b41c01d51d557a655276783
>
>

[julia-users] why's my julia code running slower than matlab, despite performance tips

2016-05-08 Thread feza
I have read the performance section and believe I have followed all the 
suggested guidelines.

The same Matlab script takes less than 3 seconds; Julia 0.4.5 takes 9.7 
seconds (Julia 0.5 is even worse...)

https://gist.github.com/musmo/27436a340b41c01d51d557a655276783



[julia-users] Re: GPU capabilities

2016-04-29 Thread feza
Thanks for sharing. For multiple GPUs, do you have to manually split the data 
across the GPUs, or does that get taken care of automatically? BTW, for 
multi-GPU work I assume you don't need SLI and that SLI is just for gaming.
 
On Friday, April 29, 2016 at 4:31:32 PM UTC-4, Chris Rackauckas wrote:
>
> Works great for me. Here's a tutorial where I describe something I did on 
> XSEDE's Comet 
> <http://www.stochasticlifestyle.com/julia-on-the-hpc-with-gpus/> which 
> has Tesla K80s. It works great. I have had code running on GTX970s, 980Tis, 
> K40s, and K80s with no problem.
>
> On Thursday, April 28, 2016 at 1:13:56 PM UTC-7, feza wrote:
>>
>> Hi All, 
>>
>> Has anyone here had experience using Julia  programming using Nvidia's 
>> Tesla K80 or K40  GPU? What was the experience, is it buggy or does Julia 
>> have no problem.?
>>
>

[julia-users] GPU capabilities

2016-04-28 Thread feza
Hi all,

Has anyone here had experience programming in Julia with Nvidia's Tesla K80 
or K40 GPUs? What was the experience like: is it buggy, or does Julia handle 
them without problems?


[julia-users] Re: ANN: JuMP 0.12 released

2016-03-29 Thread feza
I suggest clarifying in the documentation which mode of automatic 
differentiation is used, since this can have a large impact on computation time.

It seems like this ('ForwardDiff is only used for user-defined functions 
with the autodiff=true option. ReverseDiffSparse is used for all other 
derivative computations.') is not very well thought out.
If the input dimension is much larger than the output dimension, then 
autodiff=true should default to reverse-mode differentiation, and otherwise 
forward-mode differentiation.
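For context on the forward- vs reverse-mode tradeoff (a sketch; this assumes ForwardDiff's `gradient` API and is not part of JuMP itself): forward mode costs roughly one pass per input dimension, while reverse mode yields the whole gradient of a scalar output in roughly one backward pass, which is why the choice of mode matters when the input dimension is large.

```
using ForwardDiff

# Scalar-valued function of a high-dimensional input: exactly the case
# where reverse mode would be much cheaper than forward mode.
f(x) = sum(sin, x)

x = rand(1000)

# Forward mode propagates one directional derivative per input
# dimension, so this costs on the order of length(x) evaluations of f;
# reverse mode would get all 1000 partials in about one backward sweep.
g = ForwardDiff.gradient(f, x)
length(g)   # 1000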



On Wednesday, March 9, 2016 at 8:27:06 AM UTC-5, Miles Lubin wrote:
>
> On Wednesday, March 9, 2016 at 12:52:38 AM UTC-5, Evan Fields wrote:
>>
>> Great to hear. Two minor questions which aren't clear (to me) from the 
>> documentation:
>> - Once a user defined function has been defined and registered, can it be 
>> incorporated into NL expressions via @defNLExpr?
>>
>
> Yes.
>  
>
>> - The documentation references both ForwardDiff.jl and 
>> ReverseDiffSparse.jl. Which is used where? What are the tradeoffs users 
>> should be aware of?
>>
>
> ForwardDiff is only used for user-defined functions with the autodiff=true 
> option. ReverseDiffSparse is used for all other derivative computations.
> Using ForwardDiff to compute a gradient of a user-defined function is not 
> particularly efficient for functions with high-dimensional input.
>  
>
>> Semi-unrelated: two days ago I was using JuMP 0.12 and NLopt to solve 
>> what should have been a very simple (2 variable) nonlinear problem. When I 
>> fed the optimal solution as the starting values for the variables, the 
>> solve(model) command (or NLopt) hung indefinitely. Perturbing my starting 
>> point by .0001 fixed that - solve returned a solution 
>> instantaneously-by-human-perception. Am I doing something dumb?
>>
>
> I've also observed hanging within NLopt but haven't had a chance to debug 
> it (anyone is welcome to do so!). Hanging usually means that NLopt is 
> iterating without converging, since NLopt has no output 
> . Try setting an 
> iteration limit.
>


[julia-users] Re: hypot question

2016-03-27 Thread feza
I don't think that's the reason, since:
``` 
if x == 0
    r = y/one(x)   # Why not just return y?
```
By this point x and y have been reduced to absolute values with x >= y, so 
this branch can only be reached when x is 0 (or 0.0) and y is either 0 
(or 0.0) or NaN:

hypot(2,0)
2.0

hypot(0,2)
2.0

hypot(0,0)
0

hypot(0,0.0)
0.0



On Sunday, March 27, 2016 at 9:41:12 AM UTC-4, Andreas Noack wrote:
>
> It's to ensure that the return type doesn't depend on the value of x. If x 
> and y are integers then the return type of hypot1 will be Int if x==0 and 
> Float64 otherwise.
>
>
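Andreas's point above can be checked directly; the y/one(x) trick is about keeping the return type independent of the *values* of the arguments:

```
# If the x == 0 branch did `return y`, hypot(0, 2) would return the
# Int 2, while hypot(3, 4) returns a Float64 -- the return type would
# depend on the value of x, which defeats type inference in callers.
# Dividing by one(x) promotes an integer y to the floating-point type
# the other branches produce.
hypot(0, 2)                  # 2.0, not 2
@code_warntype hypot(0, 2)   # inspect: the inferred return type stays concrete
```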

Re: [julia-users] How to suppress output from interactive session

2016-03-26 Thread feza
Sorry, I meant without using ;

I.e., how to suppress output permanently, without typing ; each time.

On Saturday, March 26, 2016 at 6:40:16 PM UTC-4, Miguel Bazdresch wrote:
>
> 3;
>
> On the REPL the ; acts just like it does in Matlab, suppressing the output.
>
> -- mb
>
> On Sat, Mar 26, 2016 at 6:34 PM, feza > 
> wrote:
>
>> julia> 3
>> 3
>>
>>
>> How can I suppress this during an interactive julia session
>>
>> Thanks
>>
>
>

[julia-users] How to suppress output from interactive session

2016-03-26 Thread feza
julia> 3
3


How can I suppress this during an interactive julia session

Thanks


[julia-users] Re: hypot question

2016-03-26 Thread feza
Nice, thanks for that.

The question remains: why prefer hypot1 over hypot2?

On Saturday, March 26, 2016 at 5:28:45 PM UTC-4, Jeffrey Sarnoff wrote:
>
> there is that and more substantively imo, there  is this (so, *nevermind 
>> .. there is conformance)*:
>
> For example, 0*NaN must be NaN because 0*∞ is an INVALID operation (NaN). 
>> On the other hand, for hypot(x, y) := √(x*x + y*y) we find that hypot(∞, 
>> y) = +∞ for all real y, finite or not, and deduce that hypot(∞, NaN) = +∞ 
>> too; naive implementations of hypot may do differently.
>
> -- Lecture Notes on the Status of IEEE Standard 754 for Binary 
>> Floating-Point Arithmetic   IEEE 754 Status: William Kahan, 1997 
>> <https://www.cs.berkeley.edu/~wkahan/ieee754status/IEEE754.PDF>  (page 
>> 7)  
>
>
> For the hypot function, hypot(±0, ±0) is +0, hypot(±∞, qNaN) is +∞, and 
>> hypot(qNaN, ±∞) is +∞  -- IEEE 754-2008 (page 43) 
>
>
>
> On Saturday, March 26, 2016 at 4:11:43 PM UTC-4, feza wrote:
>>
>> Actually I don't know if this is intended behavior or not for example  in 
>> MATLAB hypot(NaN,Inf) and hypot(Inf,NaN) both give NaN
>>
>> BUT
>> http://en.cppreference.com/w/c/numeric/math/hypot
>> specifies that even if one of the arguments is NaN hypot returns +Inf
>>
>>
>> On Saturday, March 26, 2016 at 3:54:22 PM UTC-4, feza wrote:
>>>
>>> Good catch Jeffrey. I will file a bug report!
>>>
>>> On Saturday, March 26, 2016 at 3:50:31 PM UTC-4, Jeffrey Sarnoff wrote:
>>>>
>>>> Looking at your note, I noticed this:
>>>>
>>>> * hypot(Inf,NaN) == hypot(NaN,Inf) == Inf*
>>>>
>>>> That cannot be correct because *sqrt(x^2 + NaN^2) => sqrt(x^2 + NaN) 
>>>> => sqrt(NaN) => NaN*
>>>>
>>>> On Saturday, March 26, 2016 at 3:23:32 PM UTC-4, feza wrote:
>>>>>
>>>>> Why is hypot1 preferred (in Base) over hypot2 ? To me it seems better 
>>>>> to just return y in the one commented line
>>>>>
>>>>> function hypot2{T<:AbstractFloat}(x::T, y::T)
>>>>>     x = abs(x)
>>>>>     y = abs(y)
>>>>>     if x < y
>>>>>         x, y = y, x
>>>>>     end
>>>>>     if x == 0
>>>>>         return y        ## compare with below
>>>>>     else
>>>>>         r = y/x
>>>>>         if isnan(r)
>>>>>             isinf(x) && return x
>>>>>             isinf(y) && return y
>>>>>             return r
>>>>>         end
>>>>>     end
>>>>>     x * sqrt(one(r)+r*r)
>>>>> end
>>>>>
>>>>> function hypot1{T<:AbstractFloat}(x::T, y::T)
>>>>>     x = abs(x)
>>>>>     y = abs(y)

[julia-users] Re: hypot question

2016-03-26 Thread feza
Actually I don't know if this is intended behavior or not for example  in 
MATLAB hypot(NaN,Inf) and hypot(Inf,NaN) both give NaN

BUT
http://en.cppreference.com/w/c/numeric/math/hypot
specifies that even if one of the arguments is NaN, hypot returns +Inf
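
The special cases are easy to check at the REPL; a hedged illustration of why the `isnan(r)` branch exists:

```julia
# IEEE 754-2008 special cases for hypot, which Julia's implementation follows:
hypot(Inf, NaN)      # +Inf  (not NaN)
hypot(NaN, Inf)      # +Inf
hypot(0.0, 0.0)      # +0.0

# The naive formula gets the mixed Inf/NaN case wrong, which is exactly
# what the isnan(r) branch in Base's implementation guards against:
sqrt(Inf^2 + NaN^2)  # NaN
```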


On Saturday, March 26, 2016 at 3:54:22 PM UTC-4, feza wrote:
>
> Good catch Jeffrey. I will file a bug report!
>
> On Saturday, March 26, 2016 at 3:50:31 PM UTC-4, Jeffrey Sarnoff wrote:
>>
>> Looking at your note, I noticed this:
>>
>> * hypot(Inf,NaN) == hypot(NaN,Inf) == Inf*
>>
>> That cannot be correct because *sqrt(x^2 + NaN^2) => sqrt(x^2 + NaN) => 
>> sqrt(NaN) => NaN*
>>
>> On Saturday, March 26, 2016 at 3:23:32 PM UTC-4, feza wrote:
>>>
>>> Why is hypot1 preferred (in Base) over hypot2 ? To me it seems better to 
>>> just return y in the one commented line
>>>
>>> function hypot2{T<:AbstractFloat}(x::T, y::T)
>>>     x = abs(x)
>>>     y = abs(y)
>>>     if x < y
>>>         x, y = y, x
>>>     end
>>>     if x == 0
>>>         return y        ## compare with below
>>>     else
>>>         r = y/x
>>>         if isnan(r)
>>>             isinf(x) && return x
>>>             isinf(y) && return y
>>>             return r
>>>         end
>>>     end
>>>     x * sqrt(one(r)+r*r)
>>> end
>>>
>>> function hypot1{T<:AbstractFloat}(x::T, y::T)
>>>     x = abs(x)
>>>     y = abs(y)
>>>     if x < y
>>>         x, y = y, x
>>>     end
>>>     if x == 0
>>>         r = y/one(x)   # Why not just return y?
>>>     else
>>>         r = y/x
>>>         if isnan(r)
>>>             isinf(x) && return x
>>>             isinf(y) && return y
>>>             return r
>>>         end
>>>     end
>>>     x * sqrt(one(r)+r*r)
>>> end
>>>
>>

[julia-users] Re: hypot question

2016-03-26 Thread feza
Good catch Jeffrey. I will file a bug report!

On Saturday, March 26, 2016 at 3:50:31 PM UTC-4, Jeffrey Sarnoff wrote:
>
> Looking at your note, I noticed this:
>
> * hypot(Inf,NaN) == hypot(NaN,Inf) == Inf*
>
> That cannot be correct because *sqrt(x^2 + NaN^2) => sqrt(x^2 + NaN) => 
> sqrt(NaN) => NaN*
>
> On Saturday, March 26, 2016 at 3:23:32 PM UTC-4, feza wrote:
>>
>> Why is hypot1 preferred (in Base) over hypot2 ? To me it seems better to 
>> just return y in the one commented line
>>
>> function hypot2{T<:AbstractFloat}(x::T, y::T)
>>     x = abs(x)
>>     y = abs(y)
>>     if x < y
>>         x, y = y, x
>>     end
>>     if x == 0
>>         return y        ## compare with below
>>     else
>>         r = y/x
>>         if isnan(r)
>>             isinf(x) && return x
>>             isinf(y) && return y
>>             return r
>>         end
>>     end
>>     x * sqrt(one(r)+r*r)
>> end
>>
>> function hypot1{T<:AbstractFloat}(x::T, y::T)
>>     x = abs(x)
>>     y = abs(y)
>>     if x < y
>>         x, y = y, x
>>     end
>>     if x == 0
>>         r = y/one(x)   # Why not just return y?
>>     else
>>         r = y/x
>>         if isnan(r)
>>             isinf(x) && return x
>>             isinf(y) && return y
>>             return r
>>         end
>>     end
>>     x * sqrt(one(r)+r*r)
>> end
>>
>

[julia-users] hypot question

2016-03-26 Thread feza
Why is hypot1 preferred (in Base) over hypot2 ? To me it seems better to 
just return y in the one commented line

function hypot2{T<:AbstractFloat}(x::T, y::T)
    x = abs(x)
    y = abs(y)
    if x < y
        x, y = y, x
    end
    if x == 0
        return y        ## compare with below
    else
        r = y/x
        if isnan(r)
            isinf(x) && return x
            isinf(y) && return y
            return r
        end
    end
    x * sqrt(one(r)+r*r)
end


function hypot1{T<:AbstractFloat}(x::T, y::T)
    x = abs(x)
    y = abs(y)
    if x < y
        x, y = y, x
    end
    if x == 0
        r = y/one(x)   # Why not just return y?
    else
        r = y/x
        if isnan(r)
            isinf(x) && return x
            isinf(y) && return y
            return r
        end
    end
    x * sqrt(one(r)+r*r)
end



[julia-users] Re: Announcing JuDE: autocomplete and jump to definition support for Atom

2016-03-20 Thread feza
Nice I will try this! BTW which theme are you using :) 

On Sunday, March 20, 2016 at 2:58:15 PM UTC-4, James Dang wrote:
>
> Hi All, Julia has been great for me, and I wanted to give back a little. 
> LightTable and Atom are great editors, but I was really starting to miss 
> good intellisense-like autocomplete and basic navigation features like 
> jump-to-definition, especially on larger codebases. It's really quite a 
> slog to remember exactly where in which file a function was defined, or 
> what its exact arguments are. And maybe with better tooling, more people 
> will be drawn to the community. So I put a bit of work into a new package 
> for Atom that gives you that!
>
> https://atom.io/packages/jude
>
>
> 
>
>
> This is a bit different from what you get out of julia-client and 
> autocomplete-julia because it does a full syntax parsing and scope 
> resolution of your codebase without executing it in a Julia process. It 
> reparses very quickly on the fly without needing to save. And the matching 
> is precise, not fuzzy, giving you exactly what names are available in the 
> scope you are in currently. It's quite new and unpolished, but please try 
> it out and let me know what you think!
>
> Cheers,
> James
>
>

[julia-users] julia4 and julia5 under fedora

2016-03-14 Thread feza
Hi all, how can I install both Julia 0.4 and Julia 0.5 on Fedora?

I have read

http://julialang.org/downloads/platform.html

I have performed

sudo dnf copr enable nalimilan/julia-nightlies

sudo dnf copr enable nalimilan/julia

and then dnf install julia

This only gets me julia 0.5. Is there a way I can also get julia 0.4 and 
have them installed side by side, where typing julia calls v0.4 and typing 
julia5 calls v0.5?

Thanks
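
One hedged workaround when the packaged versions conflict is to unpack the official generic Linux tarballs side by side and symlink the entry points. The paths below are assumptions, and dummy scripts stand in for the real unpacked binaries so the sketch is self-contained:

```shell
set -e
PREFIX="$(mktemp -d)"
mkdir -p "$PREFIX/julia-0.4/bin" "$PREFIX/julia-0.5/bin" "$PREFIX/bin"

# In practice: untar the official julia-0.4.x and julia-0.5-dev tarballs
# into $PREFIX/julia-0.4 and $PREFIX/julia-0.5. Dummy stand-ins here:
printf '#!/bin/sh\necho "julia 0.4"\n' > "$PREFIX/julia-0.4/bin/julia"
printf '#!/bin/sh\necho "julia 0.5"\n' > "$PREFIX/julia-0.5/bin/julia"
chmod +x "$PREFIX/julia-0.4/bin/julia" "$PREFIX/julia-0.5/bin/julia"

# `julia` runs v0.4, `julia5` runs v0.5 (put $PREFIX/bin on your PATH).
ln -s "$PREFIX/julia-0.4/bin/julia" "$PREFIX/bin/julia"
ln -s "$PREFIX/julia-0.5/bin/julia" "$PREFIX/bin/julia5"

"$PREFIX/bin/julia"    # -> julia 0.4
"$PREFIX/bin/julia5"   # -> julia 0.5
```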


[julia-users] Re: Bug in daxpy! ???

2016-03-07 Thread feza
posted issue : https://github.com/JuliaLang/julia/issues/15393 

On Friday, March 4, 2016 at 3:48:43 AM UTC-5, pev...@gmail.com wrote:
>
> Hello all,
> I was polishing my call and I have found the following definition of 
> daxpy! I was not aware of
>
>
> function axpy!{Ti<:Integer,Tj<:Integer}(α, x::AbstractArray, 
> rx::AbstractArray{Ti}, y::AbstractArray, ry::AbstractArray{Tj})
> if length(x) != length(y)
> throw(DimensionMismatch("x has length $(length(x)), but y has 
> length $(length(y))"))
> elseif minimum(rx) < 1 || maximum(rx) > length(x)
> throw(BoundsError(x, rx))
> elseif minimum(ry) < 1 || maximum(ry) > length(y)
> throw(BoundsError(y, ry))
> elseif length(rx) != length(ry)
> throw(ArgumentError("rx has length $(length(rx)), but ry has 
> length $(length(ry))"))
> end
> for i = 1:length(rx)
> @inbounds y[ry[i]] += x[rx[i]]*α
> end
> y
> end
>
> Is the first check
>  length(x) != length(y)
> really an intended behavior? 
>
> The multiplication goes over indexes rx and ry, should not be the check 
>  length(rx) != length(ry) ?
>
> Thanks for the clarification.
> Tomas
>
>
>
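
A hedged example of why the check looks wrong: the indexed update below is well defined even though the full arrays differ in length (the qualified name for `axpy!` is my assumption for where it lived in 0.4-era Base):

```julia
x = ones(3)
y = zeros(5)
# The *index* vectors match in length (3 == 3), so the update is well
# defined: y[2:4] += 2.0 * x[1:3]. But the first check in the code above
# compares length(x) with length(y) (3 vs 5) and throws DimensionMismatch.
Base.LinAlg.axpy!(2.0, x, 1:3, y, 2:4)
```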

[julia-users] Re: [ANN] GLVisualize

2016-03-06 Thread feza
Package looks great. In light of this comment, how's the 2D graphics? 
Can we expect a Processing-style API? I would love to help any way I can.

Also I find some of the examples to be rough (antialiasing issues?)
Thanks.

On Thursday, March 3, 2016 at 7:05:44 AM UTC-5, Job van der Zwan wrote:
>
> On Monday, 29 February 2016 16:03:10 UTC+1, Simon Danisch wrote:
>>
>> > If not MatplotLib, could this become the Processing (and by extension 
>> OpenFrameworks, LibCinder) of Julia?
>>
>> That's definitely more the direction I'd like to take (although with a 
>> very different approach).
>> I hope that it will enable us to create a nice platform for accelerated 
>> data processing in general. With FireRender as a backend it might also 
>> appeal to artists more!
>>
>
> I have to say, after years of Processing API indoctrination (which almost 
> every other CC framework follows), I didn't quite understand how the 
> interactive examples[0] 
> work. It took me a while to realise all the input is handled by the 
> Reactive package (which I wasn't familiar with) - maybe you want to mention 
> that in the documentation somehow?
>
> (offtopic: is there a package for capturing live camera feeds? I'd like to 
> mess around with slitscanning[1][2] as my "Hello World" project for this 
> package; and it looks like working with volumes is relatively easy[3] in 
> Julia)
>


Re: [julia-users] in-place versions of .*=, ./= etc

2015-12-18 Thread feza
I think I am misunderstanding the temporary array allocation process. Is it 
allocating one or two temp arrays? Where have I gone wrong here: 

tmp = 2y (allocates a temporary array to store result)
tmp .-= 4z (also allocates a temporary array for 4z? Why not just use z 
directly, thus   tmp[i] = tmp[i] - 4*z[i] )
tmp ./= w  (Uses previous temp array and w to do the division overwriting 
tmp,  i.e. loops over tmp[i] = tmp[i]/w[i] )
x .+= tmp  (performs x[i] = x[i] + tmp[i] )



On Friday, December 18, 2015 at 1:53:02 PM UTC-5, Steven G. Johnson wrote:
>
>
>
> On Friday, December 18, 2015 at 1:32:16 PM UTC-5, Ethan Anderes wrote:
>>
>> Ok, thanks for the info (and @inbounds does improve it a bit). I usually 
>> follow your advice and fuse the operations together when I need the speed, 
>> but since I do all manner of combinations of vectorized operations 
>> throughout my module I tend to prefer using .*=, ./=, etc unless I need 
>> it.
>>
> Having "all manner of combinations" of these operations is a good reason 
> *not* to define in-place versions of these operations.  For example, 
> imagine the computation:
>
> x = x + (2y - 4z) ./ w
>
>
> with your proposed in-place assignment operations, I guess this would 
> become:
>
> tmp = 2y
> tmp .-= 4z
> tmp ./= w
> x .+= tmp
>
>
> which still allocates two temporary arrays (one for tmp and one for 4z), 
> and involves five separate loops.  Compare to:
>
> for i in eachindex(x)
> x[i] += (2y[i] - 4z[i]) / w[i]
> end
>
>
> which involves only one loop (and probably better cache performance as a 
> result) and no temporary arrays.  (You can add @inbounds if you want a bit 
> more performance and know that w/x/y/z have the same shape.)  Not only is 
> it more efficient than a sequence of in-place assignments, but I would 
> argue that it is much more readable as well, despite the need for an 
> explicit loop.
>
> Alternatively, you can use the Devectorize package, and something like
>
> @devec x[:] = x + (2y - 4z) ./ w
>
>
> will basically do the same thing as the loop if I understand @devec 
> correctly.
>


[julia-users] Re: Triangular Dispatch, Integerm Range and UniformScaling error

2015-11-22 Thread feza
Why not use

foo{I<:Integer}(u::UnitRange{I}) = 1

On Sunday, November 22, 2015 at 7:38:29 AM UTC-5, andrew cooke wrote:
>
>
> Out of my depth here - no idea if this is a bug or me...
>
> julia> foo{I<:Integer,U<:UnitRange{I}}(u::U) = 1
> ERROR: TypeError: UnitRange: in T, expected T<:Real, got UniformScaling{
> Int64}
>
> Version 0.4.1-pre+22 (2015-11-01 00:06 UTC)
>
> Thanks, Andrew
>
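
The suggested single-parameter signature dispatches on the same calls without the triangular constraint; a quick hedged sketch in 0.4 syntax:

```julia
# Constrain only the element type; the UnitRange wrapper is fixed.
foo{I<:Integer}(u::UnitRange{I}) = 1

foo(1:5)        # matches UnitRange{Int64}
foo(0x1:0x5)    # matches UnitRange{UInt8}
# foo(1.0:5.0)  # MethodError: 1.0:5.0 is a FloatRange, not a UnitRange
```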


[julia-users] Re: julia -E or -e with print ?

2015-11-04 Thread feza
Ahhh gotcha

ok, for posterity's sake, this now works (inside single quotes no escaping 
is needed):
 julia -E 'print("hello")'

On Wednesday, November 4, 2015 at 2:12:45 PM UTC-5, Zack L.-B. wrote:
>
> julia -e "print(\"hello\")"
>
> or
>
> julia -e 'print("hello")'
>
> You need to escape certain characters in your terminal so that they are 
> passed faithfully to Julia.
>
> On Wednesday, November 4, 2015 at 11:05:29 AM UTC-8, feza wrote:
>>
>> I must be doing something wrong but:
>>
>> julia -e "print("hello")"
>>
>> gives me
>> ERROR: syntax: incomplete: premature end of input
>>
>
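
The underlying issue is which quotes the shell consumes before Julia ever sees the string; `printf` makes the difference visible (a hedged POSIX-shell illustration, independent of Julia):

```shell
# Double quotes: the shell consumes the inner "" pair around hello.
printf '%s\n' "print("hello")"    # -> print(hello)
# Single quotes: everything inside is passed through verbatim.
printf '%s\n' 'print("hello")'    # -> print("hello")
# Escaped double quotes survive inside a double-quoted string.
printf '%s\n' "print(\"hello\")"  # -> print("hello")
```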

[julia-users] julia -E or -e with print ?

2015-11-04 Thread feza
I must be doing something wrong but:

julia -e "print("hello")"

gives me
ERROR: syntax: incomplete: premature end of input


Re: [julia-users] Anaconda Python

2015-11-03 Thread feza
first came npm, and then... jpm :)

On Tuesday, November 3, 2015 at 6:27:39 AM UTC-5, Stefan Karpinski wrote:
>
> On Mon, Nov 2, 2015 at 9:00 PM, > 
> wrote:
>
>> Would you like it if someone came along and forked all of Julia, 
>> especially Pkg, and created forks of every package?   To do so would be 
>> entirely compliant with the MIT open source license.  So, it would be legal 
>> (not that license enforcement is common in the open source world). But, 
>> would it be DESIRABLE?  You've done a fine thing to rely largely on git and 
>> github.
>>
>
> This has happened: https://github.com/rened/DeclarativePackages.jl. And I 
> think it's great. There's a lot that Pkg doesn't do well currently and 
> René's fork does many things better. I'm planning on bringing a lot of 
> these ideas back into Julia's package manager in the future (I hope he 
> doesn't mind my larceny).
>


Re: [julia-users] Re: For loop = or in?

2015-10-29 Thread feza
My only problem with `=` vs `in`
is that even the base julia code is inconsistent! Looking at one file ( I 
can't remember which now)
it had both
i = 1:nr
and
i in 1:n
Again this was in the same file! Please tell me I am not being pedantic 
when I saw this and thought this must be fixed if even the base is being 
inconsistent.

On Thursday, October 29, 2015 at 8:44:03 AM UTC-4, mschauer wrote:
>
> Do we want to give up on this topic? Then we should do so in an earnest 
> way and close the case with a clear message, ideally after 
> establishing if we want to add a style recommendation about the use of 
> ``=`` and ``in`` to 
> http://docs.julialang.org/en/release-0.4/manual/style-guide/. Currently 
> the manual states in the control-flow chapter "In general, the for loop 
> construct can iterate over any container. In these cases, the alternative 
> (but fully equivalent) keyword in is typically used instead of =, since 
> it makes the code read more clearly."
>


[julia-users] Re: For loop = or in?

2015-10-28 Thread feza
Actually it's more about simple confusion rather than mental cost, @DNF. 
Starting out you either use = or in then you see some other code and they 
use something else and wonder, what is right, is one notation faster or 
better, what's going on? Of course, it's not the simplest thing to try and 
search for in the documents (for someone not familiar with iterator 
terminology, searching for `in` or `=` is useless). Hence Fang's original 
post's concern (and others as evident from this post).

But really this seems like a fundamental enough language construct that 
there should be only one correct way; but on the other hand my brain 
doesn't have a problem with `=` and it seems natural since I have been 
using matlab for a while, even though `in` seems actually to make more 
sense here.

I don't think that this metaphor is actually relevant:

i=1:5
A[i]=B[i].*C[i]

or, you could write it as a loop...

for i=1:5
 A[i]=B[i]*C[i]
end

since the `for` changes meaning completely
i = 1:10
a[i] = b[i].*c[i]

and  the above only makes sense when indexing, not for other iterables


 
In any case, someone should collect the arguments and file an issue on 
github to at least get some additional opinions.
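
For what it's worth, the two spellings are not semantically different: the parser produces the same expression tree for both, so the whole debate is purely about style. A quick hedged check (behavior as I recall it for 0.4-era `parse`):

```julia
# Both loop headers parse to a :for expression with an :(=) binding,
# so `=` and `in` are literally the same program after parsing.
ex_eq = parse("for i = 1:5\n    s += i\nend")
ex_in = parse("for i in 1:5\n    s += i\nend")
ex_eq.head          # :for in both cases
ex_eq == ex_in      # structurally identical ASTs
```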

On Wednesday, October 28, 2015 at 8:45:16 AM UTC-4, DNF wrote:
>
> You are right, of course. It's just one of those minor cosmetic things you 
> fix in a pre-1.0 version, or then maybe never. And it's good not to have 
> too many of those.
>
> Also
> for i ∈ 1:N
> just looks incredibly awesome. 
>
>
> On Wednesday, October 28, 2015 at 1:38:57 PM UTC+1, STAR0SS wrote:
>>
>> I think people grossly exaggerate the "mental cost" of having both = and 
>> in. It's really not that complicated, well explained in the docs and can 
>> never cause bugs.
>>
>> On the other hand the depreciation cost will big quite large, given it 
>> seems both are used 50/50. Plus the numerous complain posts on this forum. 
>> Don't fix what's not broken.
>>
>>
>>
>>

Re: [julia-users] Re: For loop = or in?

2015-10-27 Thread feza
+1 @Tom Breloff .  
I was confused about this when starting out, comparing `for i in 1:3` vs 
`for i = 1:3`. Even though I regularly use MATLAB, if you think about it, 
`for i = 1:10` doesn't really make a lot of sense. It would be nice if there 
were just one way, as opposed to the confusion about whether = or in should 
be used.

On Tuesday, October 27, 2015 at 10:26:44 AM UTC-4, Tom Breloff wrote:
>
> It's harmless, sure, but I would prefer that everyone uses "in" 
> exclusively so that there's one less thing to waste brainpower on.  You 
> don't say "for each x equals the range 1 to n", you say "for each x in the 
> range 1 to n".  I don't think "=" has a place here at all except to allow 
> copy/pasting of Matlab code (which creates other performance problems 
> anyways).
>
> On Tue, Oct 27, 2015 at 10:04 AM, Stefan Karpinski  > wrote:
>
>> My general approach is to only use = when the RHS is an explicit range, 
>> as in `for i = 1:n`. For everything else I use `for i in v`. I would be ok 
>> with dropping the = syntax at some point, but it seems pretty harmless to 
>> have it.
>>
>> On Tue, Oct 27, 2015 at 8:56 AM, FANG Colin > > wrote:
>>
>>> Thank you. In that case I will happily stick with `in`.
>>>
>>>
>>> On Monday, October 26, 2015 at 8:43:22 PM UTC, Alireza Nejati wrote:

 There is no difference, as far as I know.

 '=' seems to be used more for explicit ranges (i = 1:5) and 'in' seems 
 to be used more for variables (i in mylist). But using 'in' for everything 
 is ok too.

 The '=' is there for familiarity with matlab. Remember that julia's 
 syntax was in part designed to be familiar to matlab users.

 On Tuesday, October 27, 2015 at 8:26:07 AM UTC+13, FANG Colin wrote:
>
> Hi All
>
> I have got a stupid question:
>
> Are there any difference in "for i in 1:5" and "for i = 1:5"?
>
> Does the julia community prefer one to the other? I see use of both in 
> the documentations and source code.
>
> Personally I haven't seen much use of "for i = 1:5" in other 
> languages.
>
> Thanks.
>

>>
>

[julia-users] Re: Everything I wrote in version .3 is now 'depreciated'

2015-10-16 Thread feza
On a related note: what is the recommended procedure for dealing with 
deprecations? Do we just update all the deprecated calls and push the 
changes? That would make the package useless for 0.3 users; or is this the 
recommended procedure?

On Friday, October 16, 2015 at 7:39:05 PM UTC-4, Forrest Curo wrote:
>
> So what's the easiest way -- given a long, long list of warnings -- to 
> find out what needs to be changed in a program (It runs, after replacing 
> calls to 'Base.Graphics' with 'Graphics' -- but the tk button that used to 
> close the window and exit now doesn't (Ah! I need to remember the default I 
> changed for those buttons to make that button work the way I wanted! (What 
> *was 
> *that?!)))
>
> and what should now be substituted?
>
> I mean, I could just follow that list of warnings from the top... but I'd 
> like to know a reference listing displaced packages & their replacements. 
>


[julia-users] juliaw.exe? Run julia without terminal window popup?

2015-10-16 Thread feza
How can I run julia without a terminal window popping up?

Something like Python's pythonw.exe or Java's javaw.exe.




[julia-users] Re: colors on cmd on Windows 10

2015-10-15 Thread feza
Oops, I meant --precompiled=yes.

With LLVM 3.5+ will this discrepancy in startup time be resolved?

#ifdef _OS_WINDOWS_    // TODO remove this when using LLVM 3.5+
    JL_OPTIONS_USE_PRECOMPILED_NO,
#else
    JL_OPTIONS_USE_PRECOMPILED_YES,
#endif

On Friday, October 16, 2015 at 2:17:13 AM UTC-4, Tony Kelman wrote:
>
> We do just that on Windows, for now: 
> https://github.com/JuliaLang/julia/blob/706600408aba8b142c47c2bc887bde0d9bf774cf/src/init.c#L73-L78
>
>
> On Thursday, October 15, 2015 at 11:03:11 PM UTC-7, feza wrote:
>>
>> Is there a way to have julia default to --precompiled=no ?
>>
>> On Friday, October 16, 2015 at 1:18:55 AM UTC-4, Tony Kelman wrote:
>>>
>>> Source builds on Windows may have started up a little faster than the 
>>> distributed binaries up to a few months ago, because we used to delete 
>>> sys.dll from the binaries. Having sys.dll present resulted in faster 
>>> startup, but made backtraces worse. Starting a few months ago we now 
>>> distribute sys.dll, but with the option to load code from it disabled by 
>>> default, for better backtraces in exchange for slower startup times. If you 
>>> don't mind your backtraces being worse, you can start Julia with the 
>>> command-line flag --precompiled=yes.
>>>
>>> We'll likely put in a shortcut to start Julia under mintty, a similar 
>>> nicer-than-default terminal emulator that Cygwin and MSYS2 use, starting in 
>>> 0.4.1 or thereabouts.
>>>
>>>
>>> On Thursday, October 15, 2015 at 9:36:50 PM UTC-7, feza wrote:
>>>>
>>>> Is Julia way slower on Windows? That's an interesting anecdote. Anyone 
>>>> else have something to add regarding this. I don't know why this would be 
>>>> true.
>>>> I did notice that when I built julia from source on my machine, my 
>>>> built version of julia was consistently slower than the binaries available 
>>>> on the julia website.
>>>>
>>>> On Friday, October 16, 2015 at 12:08:33 AM UTC-4, Lewis Levin wrote:
>>>>>
>>>>> Running Julia in powershell or cmd, the background color of output 
>>>>> lines is always black.  If you set cmd or powershell to black this is 
>>>>> fine. 
>>>>>  With any other default background color, you get Julia output written on 
>>>>> black bands across the window.
>>>>>
>>>>> I installed cmder (very nice thing).  It is a Conemu with clink and 
>>>>> other improvements. Small download and portable install on windows.  very 
>>>>> clean.
>>>>>
>>>>> Since cmder is a unix shell, Julia works nicely with it and inherits 
>>>>>  the shell colors:  always looks good.
>>>>>
>>>>> Probably should fix for windows users.  (I am Mac and Windows.)
>>>>>
>>>>> Julia is way slower on Windows and I am using a very fast Win machine 
>>>>> with flash drive.
>>>>>
>>>>

[julia-users] Re: colors on cmd on Windows 10

2015-10-15 Thread feza
Is there a way to have julia default to --precompiled=no ?

On Friday, October 16, 2015 at 1:18:55 AM UTC-4, Tony Kelman wrote:
>
> Source builds on Windows may have started up a little faster than the 
> distributed binaries up to a few months ago, because we used to delete 
> sys.dll from the binaries. Having sys.dll present resulted in faster 
> startup, but made backtraces worse. Starting a few months ago we now 
> distribute sys.dll, but with the option to load code from it disabled by 
> default, for better backtraces in exchange for slower startup times. If you 
> don't mind your backtraces being worse, you can start Julia with the 
> command-line flag --precompiled=yes.
>
> We'll likely put in a shortcut to start Julia under mintty, a similar 
> nicer-than-default terminal emulator that Cygwin and MSYS2 use, starting in 
> 0.4.1 or thereabouts.
>
>
> On Thursday, October 15, 2015 at 9:36:50 PM UTC-7, feza wrote:
>>
>> Is Julia way slower on Windows? That's an interesting anecdote. Anyone 
>> else have something to add regarding this. I don't know why this would be 
>> true.
>> I did notice that when I built julia from source on my machine, my built 
>> version of julia was consistently slower than the binaries available on the 
>> julia website.
>>
>> On Friday, October 16, 2015 at 12:08:33 AM UTC-4, Lewis Levin wrote:
>>>
>>> Running Julia in powershell or cmd, the background color of output lines 
>>> is always black.  If you set cmd or powershell to black this is fine.  With 
>>> any other default background color, you get Julia output written on black 
>>> bands across the window.
>>>
>>> I installed cmder (very nice thing).  It is a Conemu with clink and 
>>> other improvements. Small download and portable install on windows.  very 
>>> clean.
>>>
>>> Since cmder is a unix shell, Julia works nicely with it and inherits 
>>>  the shell colors:  always looks good.
>>>
>>> Probably should fix for windows users.  (I am Mac and Windows.)
>>>
>>> Julia is way slower on Windows and I am using a very fast Win machine 
>>> with flash drive.
>>>
>>

[julia-users] Re: colors on cmd on Windows 10

2015-10-15 Thread feza
Is Julia way slower on Windows? That's an interesting anecdote. Anyone else 
have something to add regarding this. I don't know why this would be true.
I did notice that when I built julia from source on my machine, my built 
version of julia was consistently slower than the binaries available on the 
julia website.

On Friday, October 16, 2015 at 12:08:33 AM UTC-4, Lewis Levin wrote:
>
> Running Julia in powershell or cmd, the background color of output lines 
> is always black.  If you set cmd or powershell to black this is fine.  With 
> any other default background color, you get Julia output written on black 
> bands across the window.
>
> I installed cmder (very nice thing).  It is a Conemu with clink and other 
> improvements. Small download and portable install on windows.  very clean.
>
> Since cmder is a unix shell, Julia works nicely with it and inherits  the 
> shell colors:  always looks good.
>
> Probably should fix for windows users.  (I am Mac and Windows.)
>
> Julia is way slower on Windows and I am using a very fast Win machine with 
> flash drive.
>


[julia-users] Re: ANN: A potential new Discourse-based Julia forum

2015-10-13 Thread feza
Wow this looks great. Much better than google groups which is rather 
annoying in many respects. Looking forward to using this sometime in the 
future. Do you think  mathjax support for latex equations would be useful 
for a Julia forum?

On Saturday, September 19, 2015 at 8:16:36 PM UTC-4, Jonathan Malmaud wrote:
>
> Hi all,
> There's been some chatter about maybe switching to a new, more modern 
> forum platform for Julia that could potentially subsume julia-users, 
> julia-dev, julia-stats, julia-gpu, and julia-jobs.   I created 
> http://julia.malmaud.com for us to try one out and see if we like it. 
> Please check it out and leave feedback. All the old posts from julia-users 
> have already been imported to it.
>
> It is using Discourse , the same forum 
> software used for the forums of Rust , 
> BoingBoing, and some other big sites. Benefits over Google Groups include 
> better support for topic tagging, community moderation features,  Markdown 
> (and hence syntax highlighting) in messages, inline previews of linked-to 
> Github issues, better mobile support, and more options for controlling when 
> and what you get emailed. The Discourse website 
>  does a better job of summarizing the 
> advantages than I could.
>
> To get things started, Mike Innes suggested having a topic on what we 
> plan on working on this coming week.
>  
> I think that's a great idea.
>
> Just to be clear, this isn't "official" in any sense - it's just to 
> kickstart the discussion. 
>
> -Jon
>
>
>

Re: [julia-users] 900mb csv loading in Julia failed: memory comparison vs python pandas and R

2015-10-13 Thread feza
Finally was able to load it, but the process   consumes a ton of memory.
julia> @time train = readtable("./test.csv");
124.575362 seconds (376.11 M allocations: 13.438 GB, 10.77% gc time)
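
One hedged mitigation from the DataFrames documentation of that era: declaring the column types up front so the parser doesn't have to guess and re-allocate as it goes (`eltypes` is the keyword name as I recall it; check your DataFrames version, and the all-Float64 assumption below is purely illustrative):

```julia
using DataFrames

# 1934 columns, assumed numeric here only for illustration; mix in
# UTF8String / Int entries to match the actual columns of the CSV.
coltypes = fill(Float64, 1934)
train = readtable("./test.csv"; eltypes = coltypes)
```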



On Tuesday, October 13, 2015 at 4:34:05 PM UTC-4, feza wrote:
>
> Same here on a 12gb ram machine 
>
>_

Re: [julia-users] 900mb csv loading in Julia failed: memory comparison vs python pandas and R

2015-10-13 Thread feza
Same here on a 12gb ram machine.

               _
   _       _ _(_)_     |  A fresh approach to technical computing
  (_)     | (_) (_)    |  Documentation: http://docs.julialang.org
   _ _   _| |_  __ _   |  Type "?help" for help.
  | | | | | | |/ _` |  |
  | | |_| | | | (_| |  |  Version 0.5.0-dev+429 (2015-09-29 09:47 UTC)
 _/ |\__'_|_|_|\__'_|  |  Commit f71e449 (14 days old master)
|__/                   |  x86_64-w64-mingw32

julia> using DataFrames

julia> train = readtable("./test.csv");
ERROR: OutOfMemoryError()
 in resize! at array.jl:452
 in readnrows! at C:\Users\Mustafa\.julia\v0.5\DataFrames\src\dataframe\io.jl:164
 in readtable! at C:\Users\Mustafa\.julia\v0.5\DataFrames\src\dataframe\io.jl:767
 in readtable at C:\Users\Mustafa\.julia\v0.5\DataFrames\src\dataframe\io.jl:847
 in readtable at C:\Users\Mustafa\.julia\v0.5\DataFrames\src\dataframe\io.jl:893





On Tuesday, October 13, 2015 at 3:47:58 PM UTC-4, Yichao Yu wrote:
>
>
> On Oct 13, 2015 2:47 PM, "Grey Marsh" > 
> wrote:
>
> Which Julia version are you using? There's some gc tweak on 0.4 for that.
>
> >
> > I was trying to load the training dataset from the Springleaf Marketing 
> Response competition on Kaggle. The csv is 921 mb, with 145321 rows and 1934 
> columns. My machine has 8 gb ram, and Julia ate 5.8gb+ of memory, after which 
> I stopped Julia as there was barely any memory left for the OS to function 
> properly. That was about 5-6 minutes into the still-incomplete operation. I'm 
> on Windows 8 64-bit. I used the following code to read the csv into Julia:
> >
> > using DataFrames
> > train = readtable("C:\\train.csv")
> >
> > Next I tried to load the same file in Python: 
> >
> > import pandas as pd
> > train = pd.read_csv("C:\\train.csv")
> >
> > This took ~2.4gb of memory and about a minute.
> >
> > Checking the same in R again:
> > df = read.csv('E:/Libraries/train.csv', as.is = T)
> >
> > This took 2-3 minutes and consumed 3.5gb of memory on the same machine. 
> >
> > Why such a discrepancy, and why does Julia run out of memory and fail to 
> load the csv at all? Is there a better way to get the file loaded into 
> Julia?
> >
> >
>
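A bounded-memory workaround for files like this is to stream the CSV rather than materialize the whole table at once. The sketch below is my own, not from the thread (the function name `scan_csv` is hypothetical); it only counts rows and fields, but the same loop can accumulate just the columns you need, keeping peak memory at roughly one line regardless of file size:

```julia
# Stream a large CSV line by line instead of loading it all at once.
# Peak memory stays near one line's worth, independent of file size.
function scan_csv(path)
    nrows = 0
    ncols = 0
    open(path) do io
        for line in eachline(io)
            nrows += 1
            # count the fields on this line; track the widest row seen
            ncols = max(ncols, length(split(chomp(line), ',')))
        end
    end
    return (nrows, ncols)
end
```

This trades convenience for memory: you give up the DataFrame abstraction but never hold more than one row in memory at a time.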


[julia-users] Re: ANN: jlcall - Call Julia from MATLAB through the MEX interface

2015-10-04 Thread feza
Looks like it works, cheers!

Building with 'Microsoft Visual C++ 2015 Professional'.
cl /c /Zp8 /GR /W3 /EHs /nologo /MD /O2 /Oy- /DNDEBUG 
/D_CRT_SECURE_NO_DEPRECATE /D_SCL_SECURE_NO_DEPRECATE /D_SECURE_SCL=0   
/DMATLAB_MEX_FILE -IC:\Julia\Julia-0.5.0-dev\include\julia  -I"C:\Program 
Files\MATLAB\R2015b\extern\include" -I"C:\Program 
Files\MATLAB\R2015b\simulink\include" 
C:\Users\freze\julia_packages\jlcall\src\jlcall.cpp 
/FoC:\Users\freze\AppData\Local\Temp\mex_11449682771947_5816\jlcall.obj
jlcall.cpp
C:\Julia\Julia-0.5.0-dev\include\julia\julia.h(91): warning C4200: 
nonstandard extension used: zero-sized array in struct/union
C:\Julia\Julia-0.5.0-dev\include\julia\julia.h(91): note: Cannot generate 
copy-ctor or copy-assignment operator when UDT contains a zero-sized array
C:\Julia\Julia-0.5.0-dev\include\julia\julia.h(121): warning C4200: 
nonstandard extension used: zero-sized array in struct/union
C:\Julia\Julia-0.5.0-dev\include\julia\julia.h(121): note: Cannot generate 
copy-ctor or copy-assignment operator when UDT contains a zero-sized array
C:\Julia\Julia-0.5.0-dev\include\julia\julia.h(132): warning C4200: 
nonstandard extension used: zero-sized array in struct/union
C:\Julia\Julia-0.5.0-dev\include\julia\julia.h(132): note: Cannot generate 
copy-ctor or copy-assignment operator when UDT contains a zero-sized array
C:\Julia\Julia-0.5.0-dev\include\julia\julia.h(293): warning C4200: 
nonstandard extension used: zero-sized array in struct/union
C:\Julia\Julia-0.5.0-dev\include\julia\julia.h(293): note: Cannot generate 
copy-ctor or copy-assignment operator when UDT contains a zero-sized array
C:\Users\freze\julia_packages\jlcall\src\jlcall.cpp(27): warning C4800: 
'int': forcing value to bool 'true' or 'false' (performance warning)

link /nologo /manifest  /DLL  /EXPORT:mexFunction 
C:\Users\freze\AppData\Local\Temp\mex_11449682771947_5816\jlcall.obj 
 libjulia.dll.a  /LIBPATH:C:\Julia\Julia-0.5.0-dev\lib   
/LIBPATH:"C:\Program Files\MATLAB\R2015b\extern\lib\win64\microsoft" 
libmx.lib libmex.lib libmat.lib kernel32.lib user32.lib gdi32.lib 
winspool.lib comdlg32.lib advapi32.lib shell32.lib ole32.lib oleaut32.lib 
uuid.lib odbc32.lib odbccp32.lib 
/out:C:\Users\freze\julia_packages\jlcall\m\jlcall.mexw64
   Creating library C:\Users\freze\julia_packages\jlcall\m\jlcall.lib and 
object C:\Users\freze\julia_packages\jlcall\m\jlcall.exp

mt -outputresource:C:\Users\freze\julia_packages\jlcall\m\jlcall.mexw64;2 
-manifest C:\Users\freze\julia_packages\jlcall\m\jlcall.mexw64.manifest
Microsoft (R) Manifest Tool version 6.3.9600.17336

Copyright (c) Microsoft Corporation 2012. 

All rights reserved.


del C:\Users\freze\julia_packages\jlcall\m\jlcall.exp 
C:\Users\freze\julia_packages\jlcall\m\jlcall.lib 
C:\Users\freze\julia_packages\jlcall\m\jlcall.mexw64.manifest 
C:\Users\freze\julia_packages\jlcall\m\jlcall.ilk
MEX completed successfully.
"C:\Users\freze\julia_packages\jlcall\m" is already on the MATLAB path.
Configuration complete.





On Friday, October 2, 2015 at 7:45:49 PM UTC-4, Tracy Wadleigh wrote:
>
> I'm pleased to announce jlcall, a project that exposes Julia to MATLAB 
> through the MEX interface. (And only a brief ten months after posting my 
> gist with my proof-of-concept, too. ;-))
>
> *Highlights*
>
> - Call any Julia function whose arguments can be marshaled to Julia via 
>   MATLAB.jl's jvariable function and whose return value can be marshaled 
>   to MATLAB via mxarray.
> - Evaluate arbitrary Julia expressions captured in MATLAB strings.
>
> *Advantages*
>
> - MATLAB users: extend your MATLAB workflow with Julia as a new MEX 
>   extension language.
> - Julia users: use MATLAB's polished front end for your own work, or at 
>   least use jlcall to facilitate better collaboration with your 
>   MATLAB-bound colleagues.
> - Memory copies can be avoided in some cases when crossing the language 
>   boundary, as the two runtimes cohabit a common process and see the same 
>   address space.
>
> A caveat: as of this writing, jlcall has been shown to work on exactly one 
> (Win64) machine: my workstation at work. It is the only machine with a 
> MATLAB license to which I have access. If you try it on another platform 
> any time soon, expect breakage. Please report it, though, as I would like 
> to see this project functional across all three platforms on which both 
> Julia and MATLAB are supported.
>


Re: [julia-users] Nicer syntax collect(linspace(0,1,n))?

2015-09-30 Thread feza
FYI, this discussion is in relation to Julia 0.4. Initially I had some 
deprecation warnings, but they have mostly gone away. I have no real 
objection; perhaps it's just a little weird that the REPL returns

julia> x
linspace(0.0,10.0,50)

as opposed to printing it out like a full array. Perhaps that would be a 
nice addition.


On Wednesday, September 30, 2015 at 3:27:02 PM UTC-4, Matt Bauman wrote:
>
> There can be reasons where a special read-only `Ones` array type is 
> beneficial: http://stackoverflow.com/a/30968709/176071.
>
>  It's just five lines of code, and Julia/LLVM is able to optimize it such 
> that multiplication is totally elided.  It's pretty cool.  But as others 
> said, these functions are fairly well entrenched in creating mutable 
> arrays.  The output from linspace, however, isn't typically mutated.
>
> Back to linspace, I'm still curious to hear more reasons for the strong 
> dislike.  Is it because of how it behaves?  Or how it performs?  Or how 
> it's displayed (which is also valid)?
>
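The read-only ones-array idea mentioned above really is only a handful of lines. Here is an illustrative sketch of my own (in later `struct` syntax, not the code from the linked Stack Overflow answer): the type stores only its length, so it never allocates the values it pretends to hold.

```julia
# A lazy, read-only vector of ones: stores only its length, so an
# n-element "vector" of 1.0s costs one Int, not n Float64s.
struct Ones <: AbstractVector{Float64}
    n::Int
end
Base.size(o::Ones) = (o.n,)
Base.getindex(o::Ones, i::Int) = 1.0
```

Because it subtypes AbstractVector, generic code such as `sum` or iteration works on it unchanged, which is what lets the compiler elide multiplications by it.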


Re: [julia-users] Nicer syntax collect(linspace(0,1,n))?

2015-09-29 Thread feza
Strange, it *was* giving me a deprecation warning saying that I should use 
collect, but now it's fine.

On Tuesday, September 29, 2015 at 10:28:12 PM UTC-4, Sheehan Olver wrote:
>
> fez, I'm pretty sure the code works fine without the collect: when exp is 
> called on linspace it converts it to a vector.  Though the returned t will 
> be a linspace object.
>
> On Wednesday, September 30, 2015 at 12:10:55 PM UTC+10, feza wrote:
>>
>> Here's the code I was using where I needed to use collect (I've been 
>> playing around with Julia, so any suggestions on this code for perf is 
>> welcome ;) ) . In general linspace (or the : notation)  is also used 
>> commonly to lay  a grid in space for solving a PDE for some other use 
>> cases. 
>>
>> function gp(n)
>> n = convert(Int,n)
>> t0 = 0
>> tf = 5
>> t = collect( linspace(t0, tf, n+1) )
>> sigma = exp( -(t - t[1]) )
>>
>> c = [sigma; sigma[(end-1):-1:2]]
>> lambda = fft(c)
>> eta = sqrt(lambda./(2*n))
>>
>> Z = randn(2*n) + im*randn(2*n)
>> x = real( fft( Z.*eta ) )
>> return (x, t)
>> end
>>
>>
>> On Tuesday, September 29, 2015 at 8:59:52 PM UTC-4, Stefan Karpinski 
>> wrote:
>>>
>>> I'm curious why you need a vector rather than an object. Do you mutate 
>>> it after creating it? Having linspace return an object instead of a 
>>> vector was a bit of an unclear judgement call, so getting feedback would 
>>> be good.
>>>
>>> On Tuesday, September 29, 2015, Patrick Kofod Mogensen <
>>> patrick@gmail.com> wrote:
>>>
>>>> No:
>>>>
>>>> julia> logspace(0,3,5)
>>>> 5-element Array{Float64,1}:
>>>>     1.0
>>>>     5.62341
>>>>    31.6228
>>>>   177.828
>>>>  1000.0
>>>>
>>>> On Tuesday, September 29, 2015 at 8:50:47 PM UTC-4, Luke Stagner wrote:
>>>>>
>>>>> Thats interesting. Does logspace also return a range?
>>>>>
>>>>> On Tuesday, September 29, 2015 at 5:43:28 PM UTC-7, Chris wrote:
>>>>>>
>>>>>> In 0.4 the linspace function returns a range object, and you need to 
>>>>>> use collect() to expand it. I'm also interested in nicer syntax.
>>>>>
>>>>>

[julia-users] Alternative syntax for collect(linspace(0,1,n)) ?

2015-09-29 Thread feza
In MATLAB, linspace(0,1,n) returns a vector of floats; in Julia I have to 
call collect to turn the linspace object into a vector of floats. Is there 
a simpler syntax for this?
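One spelling that avoids a separate collect call is the bracket/semicolon (vcat) form, which concatenates a lazy range into an Array. A sketch with a colon range, which behaves the same way as the object linspace returns in 0.4 (this is my own illustration, not an answer from the thread):

```julia
r = 0:0.25:1      # a lazy range object, not a Vector
a = collect(r)    # materialize it explicitly
b = [r;]          # vcat also materializes the range into a Vector{Float64}
# a and b hold the same five values: 0.0, 0.25, 0.5, 0.75, 1.0
```

The `[r;]` form is terser, though arguably less readable than an explicit collect.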


Re: [julia-users] Nicer syntax collect(linspace(0,1,n))?

2015-09-29 Thread feza
Here's the code I was using where I needed collect (I've been playing 
around with Julia, so any suggestions on this code for performance are 
welcome ;) ). In general, linspace (or the : notation) is also commonly 
used to lay a grid in space for solving a PDE, among other use cases.

function gp(n)
    # Sample a path from a Gaussian process with exponential covariance
    # on [t0, tf]: embed the covariance row in a circulant vector,
    # diagonalize it with the FFT, and color complex white noise.
    n = convert(Int, n)
    t0 = 0
    tf = 5
    t = collect( linspace(t0, tf, n+1) )   # time grid as a Vector{Float64}
    sigma = exp( -(t - t[1]) )             # first row of the covariance

    c = [sigma; sigma[(end-1):-1:2]]       # circulant embedding of that row
    lambda = fft(c)                        # eigenvalues of the circulant matrix
    eta = sqrt(lambda./(2*n))

    Z = randn(2*n) + im*randn(2*n)         # complex white noise
    x = real( fft( Z.*eta ) )              # colored sample path
    return (x, t)
end


On Tuesday, September 29, 2015 at 8:59:52 PM UTC-4, Stefan Karpinski wrote:
>
> I'm curious why you need a vector rather than an object. Do you mutate it 
> after creating it? Having linspace return an object instead of a vector was 
> a bit of an unclear judgement call, so getting feedback would be good.
>
> On Tuesday, September 29, 2015, Patrick Kofod Mogensen <
> patrick@gmail.com > wrote:
>
>> No:
>>
>> julia> logspace(0,3,5)
>> 5-element Array{Float64,1}:
>>     1.0
>>     5.62341
>>    31.6228
>>   177.828
>>  1000.0
>>
>> On Tuesday, September 29, 2015 at 8:50:47 PM UTC-4, Luke Stagner wrote:
>>>
>>> Thats interesting. Does logspace also return a range?
>>>
>>> On Tuesday, September 29, 2015 at 5:43:28 PM UTC-7, Chris wrote:

>>>> In 0.4 the linspace function returns a range object, and you need to 
>>>> use collect() to expand it. I'm also interested in nicer syntax.
>>>
>>>

[julia-users] Nicer syntax collect(linspace(0,1,n))?

2015-09-29 Thread feza
In MATLAB, x = linspace(0,1,n) creates a vector of floats of length n. In 
Julia it seems like the only way to do this is x = collect(linspace(0,1,n)). 
Is there a nicer syntax? I do mainly numeric computing and I find this quite 
common in my code.

Thanks.