Re: [julia-users] Re: vector assignment performance

2016-05-19 Thread Tim Holy
If you have a lot of small arrays, you might consider FixedSizeArrays, which 
may address some of these points.

Best,
--Tim
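
For readers unfamiliar with the package: the idea behind FixedSizeArrays is to give small vectors value semantics so they can live on the stack. A minimal sketch of that idea using plain tuples (not the package's actual API), applied to the test4/test5 pattern from the thread below:

```julia
# Immutable fixed-length containers (here NTuples) are stack-allocated, so the
# gather-and-sum in the hot loop performs no heap allocation at all.
function test_tuple(n)
    y = (2.0, 6.0, 3.0)
    i = (1, 2, 3)
    u = 0.0
    for j = 1:n
        # "gather" y through the index tuple i; this builds a new tuple,
        # not a heap-allocated array
        z = (y[i[1]], y[i[2]], y[i[3]])
        u += z[1] + z[2] + z[3]
    end
    return u
end

test_tuple(1000)  # 11000.0: same answer as test4/test5 with n = 1000
```

Tuples are fixed-length and untyped-by-position, which is exactly why packages like FixedSizeArrays wrap them in array-like types with arithmetic defined.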

On Thursday, May 19, 2016 07:20:22 PM vava...@uwaterloo.ca wrote:
> Dear Yichao,
> 
> Thanks very much for the prompt response.  This question arises regarding a
> code for finite element stiffness matrix assembly.  This computation
> involves an outer loop over elements (possibly millions of them).  Inside
> this loop is a sequence of operations on 'small' vectors and matrices (say
> 3-by-3 matrices). These inner operations are mostly gaxpy and indirect
> addressing.  The code would be much more readable if all of these
> small-vector operations are written without explicit loops for i=1:3, but
> it seems that replacing loops with [:] causes major heap allocation.  Is
> there a macro package or other solution applicable to this form of
> computation?
> 
> Thanks,
> Steve Vavasis
> 
> On Thursday, May 19, 2016 at 9:47:12 PM UTC-4, vav...@uwaterloo.ca wrote:
> > The two functions test4 and test5 below are equivalent, but test5 is much
> > faster than test4.  Apparently test4 is carrying out a heap allocation on
> > each iteration of the j-loop.  Why?  In general, which kinds of
> > assignment statements create temporaries, and which don't?
> > 
> >  (In the example below, if the indirect addressing via array i is
> > 
> > eliminated, then the two functions have comparable performance.)
> > 
> > Thanks,
> > Steve Vavasis
> > 
> > function test4(n)
> > 
> > y = [2.0, 6.0, 3.0]
> > i = [1, 2, 3]
> > z = [0.0, 0.0, 0.0]
> > u = 0.0
> > for j = 1 : n
> > 
> > z[:] = y[i]
> > u += sum(z)
> > 
> > end
> > u
> > 
> > end
> > 
> > function test5(n)
> > 
> > y = [2.0, 6.0, 3.0]
> > i = [1, 2, 3]
> > z = [0.0, 0.0, 0.0]
> > u = 0.0
> > for j = 1 : n
> > 
> > for k = 1 : 3
> > 
> > z[k] = y[i[k]]
> > 
> > end
> > u += sum(z)
> > 
> > end
> > u
> > 
> > end
> > 
> > 
> > julia> @time Testmv.test4(1000)
> > 
> >   1.071396 seconds (20.00 M allocations: 1.192 GB, 7.03% gc time)
> > 
> > 1.1e8
> > 
> > julia> @time Testmv.test5(1000)
> > 
> >   0.184411 seconds (4.61 k allocations: 198.072 KB)
> > 
> > 1.1e8



[julia-users] parallel computing in julia

2016-05-19 Thread SHORE SHEN
Hi

I'm doing a loop and need parallel computing.

The docs say to use the `julia -p n` command to start n processes on the 
local machine.

However, here's the result:

julia> julia -p 2
ERROR: syntax: extra token "2" after end of expression

I also tried addprocs(n); my computer has 4 cores / 8 threads, but I can 
add 9 processes.


So can anyone tell me how I should use all 4 cores on my computer, and 
what the difference is between `julia -p n` and `addprocs(n)`?

thanks a lot
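
For reference, `julia -p n` is a shell invocation; typing it at the `julia>` prompt is parsed as Julia code, which is why the "extra token" error appears. A sketch of the in-session equivalent (note that `addprocs(n)` does not cap n at the physical core count, which is why adding 9 workers on a 4-core machine succeeds):

```julia
using Distributed   # needed on Julia 0.7+; on 0.4/0.5 addprocs lives in Base

addprocs(4)   # same effect as launching the session with `julia -p 4`
nprocs()      # 5: the master process plus the 4 workers just added
```

Oversubscribing (more workers than cores) is allowed but usually hurts throughput for CPU-bound loops.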


[julia-users] Re: vector assignment performance

2016-05-19 Thread Steven G. Johnson


On Thursday, May 19, 2016 at 10:20:22 PM UTC-4, vav...@uwaterloo.ca wrote:
>
> Dear Yichao,
>
> Thanks very much for the prompt response.  This question arises regarding 
> a code for finite element stiffness matrix assembly.  This computation 
> involves an outer loop over elements (possibly millions of them).  Inside 
> this loop is a sequence of operations on 'small' vectors and matrices (say 
> 3-by-3 matrices). These inner operations are mostly gaxpy and indirect 
> addressing.  The code would be much more readable if all of these 
> small-vector operations are written without explicit loops for i=1:3, but 
> it seems that replacing loops with [:] causes major heap allocation.  Is 
> there a macro package or other solution applicable to this form of 
> computation? 
>

copy!(dest, src) shouldn't allocate memory.

See also https://github.com/JuliaLang/julia/issues/16302
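
Note that `copy!(dest, src)` only removes the allocation of the destination; the indirect read `y[i]` on the right-hand side still builds a temporary array. One workaround (my sketch, not something proposed in the thread) is a small helper that gathers in place:

```julia
# Hypothetical helper: copy y[idx[k]] into z[k] for each k, without ever
# materializing the temporary array y[idx].
function gather!(z, y, idx)
    for k in eachindex(idx)
        z[k] = y[idx[k]]
    end
    return z
end

y = [2.0, 6.0, 3.0]
z = similar(y)
gather!(z, y, [1, 2, 3])   # z is now [2.0, 6.0, 3.0]; no temporary allocated
</imports>
```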


[julia-users] Re: vector assignment performance

2016-05-19 Thread vavasis
Dear Yichao,

Thanks very much for the prompt response.  This question arises regarding a 
code for finite element stiffness matrix assembly.  This computation 
involves an outer loop over elements (possibly millions of them).  Inside 
this loop is a sequence of operations on 'small' vectors and matrices (say 
3-by-3 matrices). These inner operations are mostly gaxpy and indirect 
addressing.  The code would be much more readable if all of these 
small-vector operations are written without explicit loops for i=1:3, but 
it seems that replacing loops with [:] causes major heap allocation.  Is 
there a macro package or other solution applicable to this form of 
computation? 

Thanks,
Steve Vavasis


 



On Thursday, May 19, 2016 at 9:47:12 PM UTC-4, vav...@uwaterloo.ca wrote:
>
> The two functions test4 and test5 below are equivalent, but test5 is much 
> faster than test4.  Apparently test4 is carrying out a heap allocation on 
> each iteration of the j-loop.  Why?  In general, which kinds of assignment 
> statements create temporaries, and which don't? 
>  (In the example below, if the indirect addressing via array i is 
> eliminated, then the two functions have comparable performance.)
>
> Thanks,
> Steve Vavasis
>
> function test4(n)
> y = [2.0, 6.0, 3.0]
> i = [1, 2, 3]
> z = [0.0, 0.0, 0.0]
> u = 0.0
> for j = 1 : n
> z[:] = y[i]
> u += sum(z)
> end
> u
> end
>
> function test5(n)
> y = [2.0, 6.0, 3.0]
> i = [1, 2, 3]
> z = [0.0, 0.0, 0.0]
> u = 0.0
> for j = 1 : n
> for k = 1 : 3
> z[k] = y[i[k]]
> end
> u += sum(z)
> end
> u
> end
>
>
> julia> @time Testmv.test4(1000)
>   1.071396 seconds (20.00 M allocations: 1.192 GB, 7.03% gc time)
> 1.1e8
>
> julia> @time Testmv.test5(1000)
>   0.184411 seconds (4.61 k allocations: 198.072 KB)
> 1.1e8
>
>

Re: [julia-users] Re: Coefficient of determination/R2/r-squared of model and accuracy of R2 estimate

2016-05-19 Thread Kevin Liu
Thanks. I might need some help if I encounter problems on this pseudo 
version. 

On Thursday, May 19, 2016 at 1:54:37 PM UTC-3, Milan Bouchet-Valat wrote:
>
> Le jeudi 19 mai 2016 à 09:30 -0700, Kevin Liu a écrit : 
> > It seems the pkg owners are still deciding 
> > 
> > Funcs to evaluate fit
> >   https://github.com/JuliaStats/GLM.jl/issues/74
> > Add fit statistics functions and document existing ones
> >   https://github.com/JuliaStats/StatsBase.jl/pull/146
> > Implement fit statistics functions
> >   https://github.com/JuliaStats/GLM.jl/pull/115
> These PRs have been merged, so we just need to tag a new release. Until 
> then, you can use Pkg.checkout() to use the development version 
> (function is called R² or R2). 
>
>
> Regards 
>
> > 
> > > I looked in GLM.jl but couldn't find a function for calculating the 
> > > R2 or the accuracy of the R2 estimate. 
> > > 
> > > My understanding is that both should appear with the glm() 
> > > function. Help would be appreciated.  
> > > 
> > > Kevin 
> > > 
>


Re: [julia-users] vector assignment performance

2016-05-19 Thread Yichao Yu
On Thu, May 19, 2016 at 9:47 PM,   wrote:
> The two functions test4 and test5 below are equivalent, but test5 is much
> faster than test4.  Apparently test4 is carrying out a heap allocation on
> each iteration of the j-loop.  Why?  In general, which kinds of assignment
> statements create temporaries, and which don't?

Setting an array index never allocates (or at least never creates a copy of
the elements being assigned). The thing that's allocating is the indexing
on the right-hand side: `y[[1, 2, 3]]` creates a new array.

Also note that this applies only to the form `x[index] = value`; plain
assignment to a variable always means a change of binding.
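
To make the distinction concrete (an illustrative sketch, not code from the thread):

```julia
y = [2.0, 6.0, 3.0]
i = [1, 2, 3]
z = zeros(3)

# Allocating: y[i] first builds a brand-new 3-element array, which is then
# copied into z.
z[:] = y[i]

# Non-allocating: only scalar reads and in-place scalar writes happen here.
for k = 1:3
    z[k] = y[i[k]]
end
```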

> (In the example below, if the indirect addressing via array i is eliminated,
> then the two functions have comparable performance.)
>
> Thanks,
> Steve Vavasis
>
> function test4(n)
> y = [2.0, 6.0, 3.0]
> i = [1, 2, 3]
> z = [0.0, 0.0, 0.0]
> u = 0.0
> for j = 1 : n
> z[:] = y[i]
> u += sum(z)
> end
> u
> end
>
> function test5(n)
> y = [2.0, 6.0, 3.0]
> i = [1, 2, 3]
> z = [0.0, 0.0, 0.0]
> u = 0.0
> for j = 1 : n
> for k = 1 : 3
> z[k] = y[i[k]]
> end
> u += sum(z)
> end
> u
> end
>
>
> julia> @time Testmv.test4(1000)
>   1.071396 seconds (20.00 M allocations: 1.192 GB, 7.03% gc time)
> 1.1e8
>
> julia> @time Testmv.test5(1000)
>   0.184411 seconds (4.61 k allocations: 198.072 KB)
> 1.1e8
>


[julia-users] vector assignment performance

2016-05-19 Thread vavasis
The two functions test4 and test5 below are equivalent, but test5 is much 
faster than test4.  Apparently test4 is carrying out a heap allocation on 
each iteration of the j-loop.  Why?  In general, which kinds of assignment 
statements create temporaries, and which don't? 
 (In the example below, if the indirect addressing via array i is 
eliminated, then the two functions have comparable performance.)

Thanks,
Steve Vavasis

function test4(n)
y = [2.0, 6.0, 3.0]
i = [1, 2, 3]
z = [0.0, 0.0, 0.0]
u = 0.0
for j = 1 : n
z[:] = y[i]
u += sum(z)
end
u
end

function test5(n)
y = [2.0, 6.0, 3.0]
i = [1, 2, 3]
z = [0.0, 0.0, 0.0]
u = 0.0
for j = 1 : n
for k = 1 : 3
z[k] = y[i[k]]
end
u += sum(z)
end
u
end


julia> @time Testmv.test4(1000)
  1.071396 seconds (20.00 M allocations: 1.192 GB, 7.03% gc time)
1.1e8

julia> @time Testmv.test5(1000)
  0.184411 seconds (4.61 k allocations: 198.072 KB)
1.1e8



Re: [julia-users] Re: Ubuntu bug? Executing .jl files via shebang does nothing on Ubuntu 16.04 x64

2016-05-19 Thread Jonathan Goldfarb
Yes [1] I've been bitten by this before as well; it's unfortunate. Is there 
perhaps some way to change some command line arguments after starting Julia?

[1] 
http://unix.stackexchange.com/questions/14887/the-way-to-use-usr-bin-env-sed-f-in-shebang/14892#14892

On Thursday, May 19, 2016 at 2:35:27 PM UTC-4, Yichao Yu wrote:
>
>
> On May 19, 2016 1:54 PM, "Adrian Salceanu"  > wrote:
> >
> > OK, I figured out what causes the problem. 
> >
> > It seems that on linux it does not like the --color=yes 
> >
> > Removed that and it works as expected. 
>
> Cmdline parsing for shebang is weird. I believe Linux only splits at the 
> first space
>
> >
> > Cheers! 
> >
> >
> > joi, 19 mai 2016, 19:48:04 UTC+2, Adrian Salceanu a scris:
> >>
> >> Hi, 
> >>
> >> There seems to be a problem with executing .jl scripts on Ubuntu 16.04 
> x64 
> >>
> >> Take this simple program in exec_text.jl
> >>
> >> #!/usr/bin/env julia --color=yes
> >> println("all good!")
> >>
> >>
> >> On Mac OS: 
> >> $ ./exec_test.jl
> >> all good!
> >>
> >>
> >> On Ubuntu it just hangs
> >> $ ./exec_test.jl
> >>
> >> [=> never returns, does nothing]
> >>
> >> This works as expected: 
> >> $ julia exec_test.jl
> >> all good!
> >>
> >> But this is not an acceptable solution as I need to execute my program 
> in order to pass command line args to it. Otherwise julia would just gobble 
> up the command line args intended for my script. 
> >>
> >> Thanks,
> >> Adrian
>


Re: [julia-users] Re: Async file IO?

2016-05-19 Thread Andrei Zh

>
>
> To clarify: operations on the `IOStream` type use the "ios" library. 
> Operations on `LibuvStream` (and its subtypes) use Libuv.
>

Huh, I didn't know that. Sorry for the confusion in my previous answer, and 
thanks for the valuable comment. 

 

> On Thu, May 19, 2016 at 6:36 PM, Isaiah Norton  > wrote:
>
>> I don't think this happens for normal file IO.
>>
>>
>> Right, good point. The stuff in the thread linked by Andrei applies to 
>> Libuv streams only.
>>
>> I think that Julia IO is built on libuv
>>>
>>
>> Julia IO is mostly built on libuv, but file IO uses the internal ios 
>> library that is part of flisp.
>>
>> On Thu, May 19, 2016 at 5:43 PM, Yichao Yu > > wrote:
>>
>>> On Thu, May 19, 2016 at 5:24 PM, Andrei Zh >> > wrote:
>>> > Based on answers to my own question, I believe it's safe to assume that
>>> > `read()` on file will switch to another task during IO operation.
>>>
>>> I don't think this happens for normal file IO.
>>>
>>> >
>>> >
>>> >
>>> > On Tuesday, May 17, 2016 at 2:03:21 AM UTC+3, g wrote:
>>> >>
>>> >> Hello All,
>>> >>
>>> >> I think that Julia IO is built on libuv, and that libuv offers
>>> >> asynchronous IO to the filesystem. Is this exposed through the Julia 
>>> API? I
>>> >> can't figure it out from the documentation. I did a simple test
>>> >>
>>> >> julia> tic(); t = @async read(open("filename"),UInt8,25); 
>>> yield();
>>> >> toc()
>>> >>
>>> >> elapsed time: 9.094271773 seconds
>>> >>
>>> >> and it seems that reading blocks without yielding.
>>> >>
>>> >> Is there a way to do this where the reading task yields to other 
>>> processes
>>> >> while it is getting the data from disk?
>>> >>
>>> >>
>>> >>
>>> >
>>>
>>
>>
>

Re: [julia-users] Re: Async file IO?

2016-05-19 Thread Isaiah Norton
>
> Julia IO is mostly built on libuv, but file IO uses the internal ios
> library that is part of flisp.


(I guess file IO is kind of important so replace "mostly" with "partly"...)

To clarify: operations on the `IOStream` type use the "ios" library.
Operations on `LibuvStream` (and its subtypes) use Libuv.
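
A quick way to see which path a given handle takes (a sketch):

```julia
# Plain files opened with `open` produce an IOStream, which is served by the
# internal "ios" C library rather than libuv.
path = tempname()
io = open(path, "w")
println(isa(io, IOStream))   # true for ordinary files

close(io)
rm(path)

# Sockets, pipes, and TTYs, by contrast, are subtypes of Base.LibuvStream
# and go through libuv's event loop, so reads on them can yield to other tasks.
```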





On Thu, May 19, 2016 at 6:36 PM, Isaiah Norton 
wrote:

> I don't think this happens for normal file IO.
>
>
> Right, good point. The stuff in the thread linked by Andrei applies to
> Libuv streams only.
>
> I think that Julia IO is built on libuv
>>
>
> Julia IO is mostly built on libuv, but file IO uses the internal ios
> library that is part of flisp.
>
> On Thu, May 19, 2016 at 5:43 PM, Yichao Yu  wrote:
>
>> On Thu, May 19, 2016 at 5:24 PM, Andrei Zh 
>> wrote:
>> > Based on answers to my own question, I believe it's safe to assume that
>> > `read()` on file will switch to another task during IO operation.
>>
>> I don't think this happens for normal file IO.
>>
>> >
>> >
>> >
>> > On Tuesday, May 17, 2016 at 2:03:21 AM UTC+3, g wrote:
>> >>
>> >> Hello All,
>> >>
>> >> I think that Julia IO is built on libuv, and that libuv offers
>> >> asynchronous IO to the filesystem. Is this exposed through the Julia
>> API? I
>> >> can't figure it out from the documentation. I did a simple test
>> >>
>> >> julia> tic(); t = @async read(open("filename"),UInt8,25);
>> yield();
>> >> toc()
>> >>
>> >> elapsed time: 9.094271773 seconds
>> >>
>> >> and it seems that reading blocks without yielding.
>> >>
>> >> Is there a way to do this where the reading task yields to other
>> processes
>> >> while it is getting the data from disk?
>> >>
>> >>
>> >>
>> >
>>
>
>


Re: [julia-users] Re: Async file IO?

2016-05-19 Thread Isaiah Norton
>
> I don't think this happens for normal file IO.


Right, good point. The stuff in the thread linked by Andrei applies to
Libuv streams only.

I think that Julia IO is built on libuv
>

Julia IO is mostly built on libuv, but file IO uses the internal ios
library that is part of flisp.

On Thu, May 19, 2016 at 5:43 PM, Yichao Yu  wrote:

> On Thu, May 19, 2016 at 5:24 PM, Andrei Zh 
> wrote:
> > Based on answers to my own question, I believe it's safe to assume that
> > `read()` on file will switch to another task during IO operation.
>
> I don't think this happens for normal file IO.
>
> >
> >
> >
> > On Tuesday, May 17, 2016 at 2:03:21 AM UTC+3, g wrote:
> >>
> >> Hello All,
> >>
> >> I think that Julia IO is built on libuv, and that libuv offers
> >> asynchronous IO to the filesystem. Is this exposed through the Julia
> API? I
> >> can't figure it out from the documentation. I did a simple test
> >>
> >> julia> tic(); t = @async read(open("filename"),UInt8,25);
> yield();
> >> toc()
> >>
> >> elapsed time: 9.094271773 seconds
> >>
> >> and it seems that reading blocks without yielding.
> >>
> >> Is there a way to do this where the reading task yields to other
> processes
> >> while it is getting the data from disk?
> >>
> >>
> >>
> >
>


[julia-users] Re: moving data to workers for distributed workloads

2016-05-19 Thread 'Greg Plowman' via julia-users

>
>
> It looks like that SO answer is moving data into the global scope of each 
> worker. It is probably worth experimenting with but I'd be worried about 
> performance implications of non-const global variables. It's probably the 
> case that this is still a win for my use case though. Thanks for the link.
>


Why not declare the input vector as const?


I have a similar requirement for a simulation. I found it convenient 
to wrap everything that is required on workers into a module.

module SphericalHarmonicTransforms
export spherical_harmonic_transforms

const input = Vector{Float64}(10^7)
...

function spherical_harmonic_transforms(idx)
coefficients = Vector{Complex128}(10^6)
... # reference input vector here
return coefficients
end
end



Then to propagate to all workers, just write using 
SphericalHarmonicTransforms

function sim()
idx = 1
limit = 1
nextidx() = (myidx = idx; idx += 1; myidx)
@sync for worker in workers()
@async while true
myidx = nextidx()
myidx ≤ limit || break
coefficients = remotecall_fetch(worker, 
spherical_harmonic_transforms, myidx)
write_results_to_disk(coefficients)
end
end
end

addprocs()
using SphericalHarmonicTransforms
sim()




Re: [julia-users] Re: Async file IO?

2016-05-19 Thread Yichao Yu
On Thu, May 19, 2016 at 5:24 PM, Andrei Zh  wrote:
> Based on answers to my own question, I believe it's safe to assume that
> `read()` on file will switch to another task during IO operation.

I don't think this happens for normal file IO.

>
>
>
> On Tuesday, May 17, 2016 at 2:03:21 AM UTC+3, g wrote:
>>
>> Hello All,
>>
>> I think that Julia IO is built on libuv, and that libuv offers
>> asynchronous IO to the filesystem. Is this exposed through the Julia API? I
>> can't figure it out from the documentation. I did a simple test
>>
>> julia> tic(); t = @async read(open("filename"),UInt8,25); yield();
>> toc()
>>
>> elapsed time: 9.094271773 seconds
>>
>> and it seems that reading blocks without yielding.
>>
>> Is there a way to do this where the reading task yields to other processes
>> while it is getting the data from disk?
>>
>>
>>
>


[julia-users] Re: Async file IO?

2016-05-19 Thread Andrei Zh
Based on answers to my own question, I believe it's safe to assume that 
`read()` on a file will switch to another task during the IO operation.



On Tuesday, May 17, 2016 at 2:03:21 AM UTC+3, g wrote:
>
> Hello All,
>
> I think that Julia IO is built on libuv, and that libuv offers 
> asynchronous IO to the filesystem. Is this exposed through the Julia API? I 
> can't figure it out from the documentation. I did a simple test
>
> *julia> **tic(); t = @async read(open("filename"),UInt8,25); 
> yield(); toc()*
>
> elapsed time: 9.094271773 seconds
>
> and it seems that reading blocks without yielding.
>
> Is there a way to do this where the reading task yields to other processes 
> while it is getting the data from disk?
>
>
>
>

[julia-users] Re: moving data to workers for distributed workloads

2016-05-19 Thread Matthew Pearce
Michael

That's right. With `sow` the mod.eval of the third argument gets bound to 
the second; where mod defaults to Main. Maybe someone could think of a 
cleaner way to make values available for later work, but it seems to do the 
trick. It seems best to avoid using the `sow` function heavily.

`reap` returns the mod.eval of the second argument with no assignment on 
each pid.

Good luck!

Matthew

On Thursday, May 19, 2016 at 8:18:17 PM UTC+1, Michael Eastwood wrote:
>
> Hi Matthew,
>
> ClusterUtils.jl looks very useful. I will definitely try it out. Am I 
> correct in reading that the trick to moving input to the workers is here?
>
> You're also correct that write_results_to_disk does actually depend on 
> myidx. I might have somewhat oversimplified the example.
>
> Thanks,
> Michael
>
> On Thursday, May 19, 2016 at 7:41:49 AM UTC-7, Matthew Pearce wrote:
>>
>> Hi Michael 
>>
>> Your current code looks like it will pull back the `coefficients` across the 
>> network (500 GB transfer) and, as you point out, transfer `input` each time.
>>
>> I wrote a package ClusterUtils.jl 
>>  to handle my own problems 
>> (MCMC sampling) which were somewhat similar.
>>
>> Roughly - given the available info - if I was trying to do something 
>> similar I'd do:
>>
>> ```julia
>> using Compat
>> using ClusterUtils
>>
>> sow(pids, :input, input)
>>
>> @everywhere function dostuff(input, myidxs)
>> for myidx in myidxs
>> coefficients = spherical_harmonic_transforms(input[myidx])
>> write_results_to_disk(coefficients) #needs myidx as arg too probably
>>   end
>> end
>>
>> idxs = chunkit(limit, length(pids))
>> sow(pids, :work, :(Dict(zip($pids, $idxs))))
>>
>> reap(pids, :(dostuff(input, $work[myid()])))
>> ```
>>
>> This transfers `input` once, and writes something to disk from the remote 
>> process. 
>>
>>
>>
>>

Re: [julia-users] Re: [ANN & RFC] Measurements.jl: Uncertainty propagation library

2016-05-19 Thread Mosè Giordano
Hi Andre,

2016-05-19 9:05 GMT+02:00 Andre Bieler:
> As an experimental physicist I thank you very much for this package! :D

You're welcome :-)  If you have wishes or suggestions, please share them!

Bye,
Mosè


[julia-users] Re: Using LOAD_PATH from the command line

2016-05-19 Thread Tony Kelman
What exactly is in myfile.jl? Yes, when you call the julia command-line 
program on a script file it has to be able to read that script file, so 
either it's a relative path to where it's run from, or an absolute path. If 
you want to leverage LOAD_PATH to import modules, then try importing from a 
-e "..." command.
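
A sketch of the distinction (`~/code` and `MyModule` are placeholder names, not anything from this thread):

```julia
# LOAD_PATH is consulted by `using`/`import` when looking up modules; it is
# never used to resolve a script path passed on the command line.
push!(LOAD_PATH, expanduser("~/code"))   # "~/code" is a hypothetical directory

# `using MyModule` would now also search ~/code for MyModule, but running
# `julia myscript.jl` still resolves myscript.jl relative to the current
# working directory (or an absolute path).
```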


On Thursday, May 19, 2016 at 10:53:15 AM UTC-7, David Parks wrote:
>
> One minor correction: it's not looking in my home directory, it's looking 
> in my current directory for the script file. The point remains, it does not 
> use LOAD_PATH to find the script file I want to run. So to run a script I 
> need to specify the full path to the julia script file on the command line 
> every time I want to run it; I cannot take advantage of my LOAD_PATH 
> configuration in the same way as I can in the REPL.
>


[julia-users] Re: moving data to workers for distributed workloads

2016-05-19 Thread Michael Eastwood
Hi Fabian,

It looks like that SO answer is moving data into the global scope of each 
worker. It is probably worth experimenting with but I'd be worried about 
performance implications of non-const global variables. It's probably the 
case that this is still a win for my use case though. Thanks for the link.

I'm using alm2map and map2alm from LibHealpix.jl (
https://github.com/mweastwood/LibHealpix.jl) to do the spherical harmonic 
transforms.

Michael

On Thursday, May 19, 2016 at 12:45:05 AM UTC-7, Fabian Gans wrote:
>
> Hi Michael, 
>
> I recently had a similar problem and this SO thread helped me a lot: 
> http://stackoverflow.com/questions/27677399/julia-how-to-copy-data-to-another-processor-in-julia
>
> As a side question: Which code are you using to calculate spherical 
> harmonics transforms. I was looking for a julia package some time ago and 
> did not find any, so if your code is publicly available, could you point me 
> to it? 
>
> Thanks
> Fabian
>
>
>
> On Wednesday, May 18, 2016 at 9:28:34 PM UTC+2, Michael Eastwood wrote:
>>
>> Hi julia-users!
>>
>> I need some help with distributing a workload across tens of workers 
>> across several machines.
>>
>> The problem I am trying to solve involves calculating the elements of a 
>> large block diagonal matrix. The total size of the blocks is >500 GB, so I 
>> cannot store the entire thing in memory. The way the calculation works, I 
>> do a bunch of spherical harmonic transforms and the results give me one row 
>> in each block of the matrix.
>>
>> The following code illustrates what I am doing currently. I am 
>> distributing the spherical harmonic transforms amongst all the workers and 
>> bringing the data back to the master process to write the results to disk 
>> (the master process has each matrix block mmapped to disk).
>>
>> idx = 1
>> limit = 1
>> nextidx() = (myidx = idx; idx += 1; myidx)
>> @sync for worker in workers()
>> @async while true
>> myidx = nextidx()
>> myidx ≤ limit || break
>> coefficients = 
>> remotecall_fetch(spherical_harmonic_transforms, worker, input[myidx])
>> write_results_to_disk(coefficients)
>> end
>> end
>>
>> Each spherical harmonic transform takes O(10 seconds) so I thought the 
>> data movement cost would be negligible compared to this. However, if I have 
>> three machines each with 16 workers, machine 1 will have all 16 workers 
>> working hard (the master process is on machine 1) and machines 2&3 will 
>> have most of their workers idling. My hypothesis is that the cost of moving 
>> the data to and from the workers is preventing machines 2&3 from being 
>> fully utilized.
>>
>> coefficients is a vector of a million Complex128s (16 MB)
>> input is composed of two parts: 1) a vector of 10 million Float64s (100 
>> MB) and 2) a small amount of additional information that is negligibly 
>> small compared to the first part.
>>
>> The trick is that the first part of input (the 100 MB vector) doesn't 
>> change between iterations. So I could alleviate most of the data movement 
>> problem by moving that part to each worker once. Problem is that I can't 
>> seem to figure out how to do that. The manual (
>> http://docs.julialang.org/en/release-0.4/manual/parallel-computing/#remoterefs-and-abstractchannels)
>>  
>> is a little thin on how to use RemoteRefs.
>>
>> So how do you move data to workers in a way that it can be re-used on 
>> subsequent iterations? An example in the manual would be very helpful!
>>
>> Thanks,
>> Michael
>>
>

[julia-users] Re: moving data to workers for distributed workloads

2016-05-19 Thread Michael Eastwood
Hi Matthew,

ClusterUtils.jl looks very useful. I will definitely try it out. Am I 
correct in reading that the trick to moving input to the workers is here?

You're also correct that write_results_to_disk does actually depend on myidx. 
I might have somewhat oversimplified the example.

Thanks,
Michael

On Thursday, May 19, 2016 at 7:41:49 AM UTC-7, Matthew Pearce wrote:
>
> Hi Michael 
>
> Your current code looks like it will pull back the `coefficients` across the 
> network (500 GB transfer) and, as you point out, transfer `input` each time.
>
> I wrote a package ClusterUtils.jl 
>  to handle my own problems 
> (MCMC sampling) which were somewhat similar.
>
> Roughly - given the available info - if I was trying to do something 
> similar I'd do:
>
> ```julia
> using Compat
> using ClusterUtils
>
> sow(pids, :input, input)
>
> @everywhere function dostuff(input, myidxs)
> for myidx in myidxs
> coefficients = spherical_harmonic_transforms(input[myidx])
> write_results_to_disk(coefficients) #needs myidx as arg too probably
>   end
> end
>
> idxs = chunkit(limit, length(pids))
> sow(pids, :work, :(Dict(zip($pids, $idxs))))
>
> reap(pids, :(dostuff(input, $work[myid()])))
> ```
>
> This transfers `input` once, and writes something to disk from the remote 
> process. 
>
>
>
>

Re: [julia-users] Re: Ubuntu bug? Executing .jl files via shebang does nothing on Ubuntu 16.04 x64

2016-05-19 Thread Yichao Yu
On May 19, 2016 1:54 PM, "Adrian Salceanu" 
wrote:
>
> OK, I figured out what causes the problem.
>
> It seems that on linux it does not like the --color=yes
>
> Removed that and it works as expected.

Cmdline parsing for shebang is weird. I believe Linux only splits at the
first space

>
> Cheers!
>
>
> joi, 19 mai 2016, 19:48:04 UTC+2, Adrian Salceanu a scris:
>>
>> Hi,
>>
>> There seems to be a problem with executing .jl scripts on Ubuntu 16.04
x64
>>
>> Take this simple program in exec_text.jl
>>
>> #!/usr/bin/env julia --color=yes
>> println("all good!")
>>
>>
>> On Mac OS:
>> $ ./exec_test.jl
>> all good!
>>
>>
>> On Ubuntu it just hangs
>> $ ./exec_test.jl
>>
>> [=> never returns, does nothing]
>>
>> This works as expected:
>> $ julia exec_test.jl
>> all good!
>>
>> But this is not an acceptable solution as I need to execute my program
in order to pass command line args to it. Otherwise julia would just gobble
up the command line args intended for my script.
>>
>> Thanks,
>> Adrian


Re: [julia-users] beginner with Julia graphs

2016-05-19 Thread Andrea Vigliotti
yes it works now!

many thanks!!

andrea

On Thursday, May 19, 2016 at 8:08:31 PM UTC+2, Miguel Bazdresch wrote:
>
> You seem to be missing graphviz: http://www.graphviz.org/
>
> -- mb
>
> On Thu, May 19, 2016 at 12:21 PM, Andrea Vigliotti  > wrote:
>
>> Hi all!
>>
>> I'm trying to run this example (taken from here : 
>> https://github.com/JuliaLang/Graphs.jl/blob/master/doc/source/examples.rst
>> )
>>
>> using Graphs
>> g = simple_graph(3)
>> add_edge!(g, 1, 2)
>> add_edge!(g, 3, 2)
>> add_edge!(g, 3, 1)
>> plot(g)
>>
>>
>> but I get this error 
>> ERROR: could not spawn `neato -Tx11`: no such file or directory (ENOENT)
>>  in _jl_spawn at process.jl:262
>>  in anonymous at process.jl:415
>>  in setup_stdio at process.jl:403
>>  in spawn at process.jl:414
>>  in open at process.jl:483
>>  in plot at /home/andrea/.julia/v0.4/Graphs/src/dot.jl:91
>>
>>
>> I must be missing some library, can anybody help?
>>
>> many thanks in advance!
>>
>> Andrea
>>
>>
>

Re: [julia-users] beginner with Julia graphs

2016-05-19 Thread Miguel Bazdresch
You seem to be missing graphviz: http://www.graphviz.org/

-- mb

On Thu, May 19, 2016 at 12:21 PM, Andrea Vigliotti <
andrea.viglio...@gmail.com> wrote:

> Hi all!
>
> I'm trying to run this example (taken from here :
> https://github.com/JuliaLang/Graphs.jl/blob/master/doc/source/examples.rst
> )
>
> using Graphs
> g = simple_graph(3)
> add_edge!(g, 1, 2)
> add_edge!(g, 3, 2)
> add_edge!(g, 3, 1)
> plot(g)
>
>
> but I get this error
> ERROR: could not spawn `neato -Tx11`: no such file or directory (ENOENT)
>  in _jl_spawn at process.jl:262
>  in anonymous at process.jl:415
>  in setup_stdio at process.jl:403
>  in spawn at process.jl:414
>  in open at process.jl:483
>  in plot at /home/andrea/.julia/v0.4/Graphs/src/dot.jl:91
>
>
> I must be missing some library, can anybody help?
>
> many thanks in advance!
>
> Andrea
>
>


[julia-users] Re: Ubuntu bug? Executing .jl files via shebang does nothing on Ubuntu 16.04 x64

2016-05-19 Thread Adrian Salceanu
OK, I figured out what causes the problem. 

It seems that on linux it does not like the --color=yes 

Removed that and it works as expected. 

Cheers! 
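
A working form of the script (sketch; any `--color`-style behavior would have to move into the program itself):

```julia
#!/usr/bin/env julia
# On Linux the shebang line is split only at the first space, so
# "#!/usr/bin/env julia --color=yes" asks env to run a program literally
# named "julia --color=yes", which does not exist. Keep the shebang bare
# and handle options from inside the script via ARGS instead.
println("all good!")
for a in ARGS
    println("script argument: ", a)
end
```

With the bare shebang, arguments passed to `./exec_test.jl` arrive in `ARGS` as usual.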

joi, 19 mai 2016, 19:48:04 UTC+2, Adrian Salceanu a scris:
>
> Hi, 
>
> There seems to be a problem with executing .jl scripts on Ubuntu 16.04 x64 
>
> Take this simple program in exec_text.jl
>
> #!/usr/bin/env julia --color=yes
> println("all good!")
>
>
> On Mac OS: 
> $ ./exec_test.jl
> all good!
>
>
> On Ubuntu it just hangs
> $ ./exec_test.jl
>
> [=> never returns, does nothing]
>
> This works as expected: 
> $ julia exec_test.jl
> all good!
>
> But this is not an acceptable solution as I need to execute my program in 
> order to pass command line args to it. Otherwise julia would just gobble up 
> the command line args intended for my script. 
>
> Thanks,
> Adrian
>


[julia-users] Re: Using LOAD_PATH from the command line

2016-05-19 Thread David Parks
One minor correction: it's not looking in my home directory, it's looking 
in my current directory for the script file. The point remains, it does not 
use LOAD_PATH to find the script file I want to run. So to run a script I 
need to specify the full path to the julia script file on the command line 
every time I want to run it; I cannot take advantage of my LOAD_PATH 
configuration in the same way as I can in the REPL.


Re: [julia-users] Ubuntu bug? Executing .jl files via shebang does nothing on Ubuntu 16.04 x64

2016-05-19 Thread Isaiah Norton
What is `versioninfo()` for both cases? If your versions are different,
this might be a bug that has been fixed...

On Thu, May 19, 2016 at 1:48 PM, Adrian Salceanu 
wrote:

> Hi,
>
> There seems to be a problem with executing .jl scripts on Ubuntu 16.04 x64
>
> Take this simple program in exec_text.jl
>
> #!/usr/bin/env julia --color=yes
> println("all good!")
>
>
> On Mac OS:
> $ ./exec_test.jl
> all good!
>
>
> On Ubuntu it just hangs
> $ ./exec_test.jl
>
> [=> never returns, does nothing]
>
> This works as expected:
> $ julia exec_test.jl
> all good!
>
> But this is not an acceptable solution as I need to execute my program in
> order to pass command line args to it. Otherwise julia would just gobble up
> the command line args intended for my script.
>
> Thanks,
> Adrian
>


[julia-users] Ubuntu bug? Executing .jl files via shebang does nothing on Ubuntu 16.04 x64

2016-05-19 Thread Adrian Salceanu
Hi, 

There seems to be a problem with executing .jl scripts on Ubuntu 16.04 x64 

Take this simple program in exec_test.jl

#!/usr/bin/env julia --color=yes
println("all good!")


On Mac OS: 
$ ./exec_test.jl
all good!


On Ubuntu it just hangs
$ ./exec_test.jl

[=> never returns, does nothing]

This works as expected: 
$ julia exec_test.jl
all good!

But this is not an acceptable solution as I need to execute my program in 
order to pass command line args to it. Otherwise julia would just gobble up 
the command line args intended for my script. 

Thanks,
Adrian


Re: [julia-users] Re: static compilation

2016-05-19 Thread Isaiah Norton
It looks like there might be a parser ambiguity when the arguments are
given as a tuple (the whole thing is parsed as a keyword argument to the
macro call).

Either of the following will work right now:

@Base.ccallable Int64 foo(x::Int64) = x+1
@Base.ccallable(Int64, function foo(x::Int64) x+1 end)

Please file a bug report with the signature you tried.

On Thu, May 19, 2016 at 10:51 AM, Ján Adamčák  wrote:

> Thanks @Jameson
>
> The problem with @ccallable is that I get the following error (Version
> 0.5.0-dev+4124 (2016-05-16 22:35 UTC)):
>
> julia> @Base.ccallable (Int64, foo(x::Int64) = x+1)
> ERROR: expected method definition in @ccallable
>  in eval(::Module, ::Any) at .\boot.jl:226
>
> Thanks
>
> Dňa streda, 18. mája 2016 18:46:31 UTC+2 Jameson napísal(-a):
>>
>> You might need to be more specific than "doesn't work"; I tried it last
>> week and it worked for me. The usage is essentially the same (the macro in
>> v0.5 takes an extra argument of the return type, but the deprecation
>> warning should note this already).
>>
>> On Wed, May 18, 2016 at 7:59 AM Ján Adamčák  wrote:
>>
>>> Thanks @Jameson
>>>
>>> Do you have an idea, how the compilation will work on v0.5 release? I
>>> have tried with recent commits of master, but @ccallable doesn't work.
>>>
>>>
>>>
>>> Dňa sobota, 14. mája 2016 1:17:45 UTC+2 Jameson napísal(-a):

 > without jl_init()

 That is not implemented at this time, although patches are welcome.

 > it has something to do with ccallable

 Yes, it also is orthogonal to compile-all. It is possible that
 compile-all is non-functional on v0.4 on Windows, I know master has many
 enhancements, which may have included more stable Windows support. I
 suggest playing with these two options independently before jumping into
 combining them.

>>> On Fri, May 13, 2016 at 2:39 PM Ján Adamčák  wrote:

> Thanks @Jameson,
>
>
> I have successfully built .so with "--compile=all" flag on 0.5-dev on
> ubuntu 16.04, but my .so library is 110MB. The same compilation on
> Win10 crashes. I can call
>
> my function in c++ code using jl_init and jl_get_function.
>
>
> My primary goal is to compile a dll with my own exported function,
> which I can call without jl_init(). I think it has something to do with
> ccallable, which you have mentioned. I'm stuck at this point. Could you
> please explain in more detail, or point me to Julia code, how to move
> forward?
>
>
> Thanks in advance.
>
> -jan
>
> Dňa piatok, 13. mája 2016 4:30:41 UTC+2 Jameson napísal(-a):
>>
>> We have been working on a number of simplifications on master, so
>> some of the best practices and extra steps aren't necessary anymore. But 
>> it
>> should still work on v0.4.
>>
>> There are a few different goals that can be accomplished by invoking
>> the Julia compiler directly, so it was a bit difficult to write that blog
>> post talking about them all generically. Since it touches on several of 
>> the
>> optimization options, I structured it in part to show how these layers 
>> can
>> build on each other. But I decided to leave out demonstrations of how
>> mixing various layers and options can be used to create other products.
>>
>> Since most of these steps are already configured in the Julia build
>> system, one of the easiest ways to augment it is to simply drop a
>> userimg.jl file into base/
>> This will then get incorporated into the usual build and become part
>> of the pre-defined system image.
>>
>> The `-e nothing` stage is there because you have to give it something
>> to evaluate (a file, stdin, or `-e`, etc.), or it will pop open the REPL
>> and wait for the user to enter commands. This is actually also a valid 
>> way
>> to create an executable and can be fun to play with as a development
>> exercise (I still do this on occasion to test out how it is handling odd
>> cases).
>>
>> To get a ccallable declaration to show up in the binary, the only
>> condition is that you must declare it ccallable in the same execution 
>> step
>> as the final output.
>>
>> -jameson
>>
>>
>> On Thu, May 12, 2016 at 10:23 AM Ján Adamčák 
>> wrote:
>>
>>> Thanks @Jameson,
>>>
>>> I am a bit confused about "you are not adding any code to the system
>>> image (`--eval nothing`)". According to your blog
>>> http://juliacomputing.com/blog/2016/02/09/static-julia.html , I
>>> think that this is a crucial point to obtain a small sized dll. Am I 
>>> right?
>>>
>>> What is then the right way to emit "ccallable" declarations in order
>>> to export julia function(s)? (foo in our example from the original post 
>>> in
>>> this 

Re: [julia-users] Using @spawnat to define var on particular process

2016-05-19 Thread Isaiah Norton
On Mon, May 16, 2016 at 7:48 PM, Alex Williams 
wrote:

The basic question is why this doesn't define `x` on worker #2:
>
> *@spawnat 2 x=1*
>
Because it is wrapped in a closure; so `x` is local to the closure when the
code is executed on the other process. To see the difference, try `@spawnat
2 global x = 1` then `@spawnat 2 names(Main)`

> Meanwhile this seems to work:
>
> *@spawnat 2 eval(:(x=1))*
>
`eval` operates in global scope by default (you can optionally specify a
module as the first argument).


> And this seems to work if you want to define `x` on all procs:
>
> *@everywhere x=1*
>
This calls `eval`, see above :)
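The three variants side by side, as a sketch against the v0.4-era API:

```julia
addprocs(1)

@spawnat 2 x = 1               # x is local to the shipped closure; Main on 2 unchanged
@spawnat 2 global x = 1        # global escapes the closure: defines Main.x on worker 2
fetch(@spawnat 2 names(Main))  # :x should now appear in the list

@spawnat 2 eval(:(y = 1))      # eval works in global scope, so this defines Main.y
@everywhere z = 1              # eval-based, defines Main.z on every process
```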

I tried using macroexpand to see the difference. I don't have a good sense
> of what things like `*:copyast*` and `*:localize*` mean. If someone could
> help me parse this, I'd greatly appreciate it.
>

- `copyast` makes a copy of a given AST, handling GC roots, etc. (see
definition as `jl_copy_ast` in "src/ast.c").
- `Expr(:localize, expr)` encloses the expression in a let block containing
an entry for each variable referenced in the expression (see the definition
of `localize_vars` in "base/expr.jl").
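A quick way to see both in context (a sketch; the exact printed form varies by version):

```julia
# Expand the macro without running it and inspect the lowered form:
macroexpand(:(@spawnat 2 x = 1))
# the result contains an Expr(:localize, ...) wrapping the shipped thunk,
# and copyast nodes appear around quoted ASTs
```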

Apologies if there is a good explanation of this elsewhere on the web. You
> can just point me there if so.
>

Chris Rackauckas has done some nice tutorials on parallelization recently,
which might touch on some of this (check juliabloggers.com).

If you have any improvement suggestions for the Base documentation, those
would be great (PRs welcome).



> -- Alex
>
>


Re: [julia-users] Re: Coefficient of determination/R2/r-squared of model and accuracy of R2 estimate

2016-05-19 Thread Milan Bouchet-Valat
Le jeudi 19 mai 2016 à 09:30 -0700, Kevin Liu a écrit :
> It seems the pkg owners are still deciding
> 
> Funcs to evaluate fit: https://github.com/JuliaStats/GLM.jl/issues/74
> Add fit statistics functions and document existing ones: https://github.com/JuliaStats/StatsBase.jl/pull/146
> Implement fit statistics functions: https://github.com/JuliaStats/GLM.jl/pull/115
These PRs have been merged, so we just need to tag a new release. Until
then, you can use Pkg.checkout() to use the development version
(function is called R² or R2).
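For example, a sketch against the development versions (the exact export may differ between R² and R2, and the model-fitting call is illustrative):

```julia
Pkg.checkout("StatsBase")
Pkg.checkout("GLM")

using DataFrames, GLM
df = DataFrame(x = collect(1.0:10.0))
df[:y] = 2df[:x] + 1 + randn(10)
m = lm(y ~ x, df)
R2(m)   # coefficient of determination of the fitted model
```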


Regards

> 
> > I looked in GLM.jl but couldn't find a function for calculating the
> > R2 or the accuracy of the R2 estimate.
> > 
> > My understanding is that both should appear with the glm()
> > function. Help would be appreciated. 
> > 
> > Kevin
> > 


[julia-users] Re: Using LOAD_PATH from the command line

2016-05-19 Thread David Parks
Thanks, that helps, I didn't know about that. It does feel 
counterintuitive that loading juliarc isn't the default, but maybe there's 
some hidden logic I don't see.

Though, trying this, I do run into further issues that I'd like to post 
about.

D:\mydir>julia --startup-file=yes -- myfile.jl arg1 arg2

ERROR: could not open file d:\myprojects\julia\--
 in include at boot.jl:261
 in include_from_node1 at loading.jl:320
 in process_options at client.jl:280
 in _start at client.jl:378

So using -- didn't work, which seems to contradict the documentation that 
describes the command line format as:

julia [switches] -- [programfile] [args...]

And trying it without the -- separator didn't do much better. I added a 
`print()` statement to juliarc.jl to validate that it's actually loading. 
At the command line I do see the print statement I added, "juliarc 
executed", so I know juliarc was loaded (which sets my LOAD_PATH).

In juliarc I have a series of these statements to set the path:

@everywhere push!(LOAD_PATH,"D:\\path\\to\\my\\files")

I can run `import myfile.jl` from the REPL (without being in the 
`myfile.jl` directory). But from the command line, it still only looks to 
HOME.

D:\mydir>julia --startup-file=yes myfile.jl arg1 arg2
juliarc executed
ERROR: could not open file d:\myprojects\julia\myfile.jl
 in include at boot.jl:261
 in include_from_node1 at loading.jl:320
 in process_options at client.jl:280
 in _start at client.jl:378




On Wednesday, May 18, 2016 at 6:46:12 PM UTC-7, Tony Kelman wrote:
>
> juliarc doesn't get loaded when you run Julia in script mode unless you 
> add the -F (--startup-file=yes) flag.
>
>
> On Wednesday, May 18, 2016 at 6:34:20 PM UTC-7, David Parks wrote:
>>
>> When running a script from the command line, julia seems to only search 
>> for the file in the `HOME` directory. 
>>
>> It doesn't appear to search the `LOAD_PATH` directory.
>>
>> Therefore any script not in `HOME` (which is probably every script) would 
>> need to be referenced by its full path.
>>
>> Am I missing something? I can `import` from the REPL, my `LOAD_PATH` is 
>> set up properly in `.juliarc`, it seems only logical that this 
>> functionality works the same from the command line, so I'm surprised to 
>> find it does not.
>>
>

[julia-users] Re: Coefficient of determination/R2/r-squared of model and accuracy of R2 estimate

2016-05-19 Thread Kevin Liu
It seems the pkg owners are still deciding

Funcs to evaluate fit: https://github.com/JuliaStats/GLM.jl/issues/74
Add fit statistics functions and document existing ones: https://github.com/JuliaStats/StatsBase.jl/pull/146
Implement fit statistics functions: https://github.com/JuliaStats/GLM.jl/pull/115

On Thursday, May 19, 2016 at 1:15:17 PM UTC-3, Kevin Liu wrote:
>
> I looked in GLM.jl but couldn't find a function for calculating the R2 or 
> the accuracy of the R2 estimate.
>
> My understanding is that both should appear with the glm() function. Help 
> would be appreciated. 
>
> Kevin
>


[julia-users] beginner with Julia graphs

2016-05-19 Thread Andrea Vigliotti
Hi all!

I'm trying to run this example (taken from here : 
https://github.com/JuliaLang/Graphs.jl/blob/master/doc/source/examples.rst)

using Graphs
g = simple_graph(3)
add_edge!(g, 1, 2)
add_edge!(g, 3, 2)
add_edge!(g, 3, 1)
plot(g)


but I get this error 
ERROR: could not spawn `neato -Tx11`: no such file or directory (ENOENT)
 in _jl_spawn at process.jl:262
 in anonymous at process.jl:415
 in setup_stdio at process.jl:403
 in spawn at process.jl:414
 in open at process.jl:483
 in plot at /home/andrea/.julia/v0.4/Graphs/src/dot.jl:91


I must be missing some library, can anybody help?

many thanks in advance!

Andrea



[julia-users] Coefficient of determination/R2/r-squared of model and accuracy of R2 estimate

2016-05-19 Thread Kevin Liu
I looked in GLM.jl but couldn't find a function for calculating the R2 or 
the accuracy of the R2 estimate.

My understanding is that both should appear with the glm() function. Help 
would be appreciated. 

Kevin


Re: [julia-users] Re: static compilation

2016-05-19 Thread Ján Adamčák
Thanks @Jameson

The problem with @ccallable is that I get the following error (Version 
0.5.0-dev+4124 (2016-05-16 22:35 UTC)):

julia> @Base.ccallable (Int64, foo(x::Int64) = x+1)
ERROR: expected method definition in @ccallable
 in eval(::Module, ::Any) at .\boot.jl:226

Thanks

Dňa streda, 18. mája 2016 18:46:31 UTC+2 Jameson napísal(-a):
>
> You might need to be more specific than "doesn't work"; I tried it last 
> week and it worked for me. The usage is essentially the same (the macro in 
> v0.5 takes an extra argument of the return type, but the deprecation 
> warning should note this already).
>
> On Wed, May 18, 2016 at 7:59 AM Ján Adamčák  > wrote:
>
>> Thanks @Jameson
>>
>> Do you have an idea, how the compilation will work on v0.5 release? I 
>> have tried with recent commits of master, but @ccallable doesn't work.
>>
>>
>>
>> Dňa sobota, 14. mája 2016 1:17:45 UTC+2 Jameson napísal(-a):
>>>
>>> > without jl_init()
>>>
>>> That is not implemented at this time, although patches are welcome. 
>>>
>>> > it has something to do with ccallable
>>>
>>> Yes, it also is orthogonal to compile-all. It is possible that 
>>> compile-all is non-functional on v0.4 on Windows, I know master has many 
>>> enhancements, which may have included more stable Windows support. I 
>>> suggest playing with these two options independently before jumping into 
>>> combining them. 
>>>
>> On Fri, May 13, 2016 at 2:39 PM Ján Adamčák  wrote:
>>>
 Thanks @Jameson,


 I have successfully built .so with "--compile=all" flag on 0.5-dev on 
 ubuntu 16.04, but my .so library is 110MB. The same compilation on 
 Win10 crashes. I can call

 my function in c++ code using jl_init and jl_get_function.


 My primary goal is to compile a dll with my own exported function, 
 which I can call without jl_init(). I think it has something to do with 
 ccallable, which you have mentioned. I'm stuck at this point. Could you 
 please explain in more detail, or point me to Julia code, how to move 
 forward?


 Thanks in advance.

 -jan

 Dňa piatok, 13. mája 2016 4:30:41 UTC+2 Jameson napísal(-a):
>
> We have been working on a number of simplifications on master, so some 
> of the best practices and extra steps aren't necessary anymore. But it 
> should still work on v0.4.
>
> There are a few different goals that can be accomplished by invoking 
> the Julia compiler directly, so it was a bit difficult to write that blog 
> post talking about them all generically. Since it touches on several of 
> the 
> optimization options, I structured it in part to show how these layers 
> can 
> build on each other. But I decided to leave out demonstrations of how 
> mixing various layers and options can be used to create other products.
>
> Since most of these steps are already configured in the Julia build 
> system, one of the easiest ways to augment it is to simply drop a 
> userimg.jl file into base/ 
> This will then get incorporated into the usual build and become part 
> of the pre-defined system image.
>
> The `-e nothing` stage is there because you have to give it something 
> to evaluate (a file, stdin, or `-e`, etc.), or it will pop open the REPL 
> and wait for the user to enter commands. This is actually also a valid 
> way 
> to create an executable and can be fun to play with as a development 
> exercise (I still do this on occasion to test out how it is handling odd 
> cases).
>
> To get a ccallable declaration to show up in the binary, the only 
> condition is that you must declare it ccallable in the same execution 
> step 
> as the final output.
>
> -jameson
>
>
> On Thu, May 12, 2016 at 10:23 AM Ján Adamčák  
> wrote:
>
>> Thanks @Jameson,
>>
>> I am a bit confused about "you are not adding any code to the system 
>> image (`--eval nothing`)". According to your blog 
>> http://juliacomputing.com/blog/2016/02/09/static-julia.html , I 
>> think that this is a crucial point to obtain a small sized dll. Am I 
>> right?
>>
>> What is then the right way to emit "ccallable" declarations in order 
>> to export julia function(s)? (foo in our example from the original post 
>> in 
>> this thread)
>>
>> Is it okay to work with current version of julia 0.4.5. or I have to 
>> switch to another version; If yes, to which one?
>>
>> Thanks in advance.
>>
>> Dňa utorok, 10. mája 2016 22:13:57 UTC+2 Jameson napísal(-a):
>>
>>> The compile-all flag is only partially functional on v0.4. I think 
>>> it's best to just leave it off. I tested on master and fixed a bug with 
>>> emitting `@ccallable`, but that's unrelated. From the command line 
>>> below, 
>>> it looks 

[julia-users] Re: moving data to workers for distributed workloads

2016-05-19 Thread Matthew Pearce
Also...

The above doesn't respect ordering of the `myidx` variable. Not sure how 
your problem domain operates, so it could get more complicated if things 
like order of execution matter. ;)



[julia-users] Re: moving data to workers for distributed workloads

2016-05-19 Thread Matthew Pearce
Hi Michael 

Your current code looks like it will pull back the `coefficients` across the 
network (500 GB transfer) and, as you point out, transfer `input` each time.

I wrote a package, ClusterUtils.jl, to handle my own problems (MCMC 
sampling), which were somewhat similar.

Roughly - given the available info - if I was trying to do something 
similar I'd do:

```julia
using Compat
using ClusterUtils

sow(pids, :input, input)

@everywhere function dostuff(input, myidxs)
    for myidx in myidxs
        coefficients = spherical_harmonic_transforms(input[myidx])
        write_results_to_disk(coefficients) # needs myidx as arg too probably
    end
end

idxs = chunkit(limit, length(pids))
sow(pids, :work, :(Dict(zip($pids, $idxs))))

reap(pids, :(dostuff(input, $work[myid()])))
```

This transfers `input` once, and writes something to disk from the remote 
process. 
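The same "ship once, reuse by name" idea can also be sketched in Base alone, assuming the v0.4 `remotecall_fetch(pid, f, args...)` argument order and the `dostuff`/`pids` names from above:

```julia
# Define a setter everywhere that stores the payload in the worker's Main.
@everywhere function setinput(v)
    global input
    input = v
    nothing
end

# Ship `input` to each worker exactly once.
for p in pids
    remotecall_fetch(p, setinput, input)
end

# Later calls name the resident global instead of reshipping 100 MB.
# process_chunk resolves `input` as a global on the worker it runs on:
@everywhere process_chunk(idxs) = dostuff(input, idxs)
remotecall_fetch(first(pids), process_chunk, myidxs)
```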





Re: [julia-users] Re: Macro to generate equivalent types

2016-05-19 Thread Tom Breloff
This is very close to what I need.  Thanks!

On Thu, May 19, 2016 at 10:00 AM, Kristoffer Carlsson  wrote:

> Sounds like:
> https://github.com/JuliaLang/DataStructures.jl/blob/master/src/delegate.jl
>
> In action:
> https://github.com/JuliaLang/DataStructures.jl/blob/f67dc0b144d1fcc83eb48a7beddca746d4841f9e/src/default_dict.jl#L52
>
> On Thursday, May 19, 2016 at 3:53:35 PM UTC+2, Tom Breloff wrote:
>>
>> I have a pattern that I frequently use, and I'm considering writing a
>> macro to automate it, but figured I'd check if it exists already.
>>
>> I frequently want to have the equivalent of an existing type
>> functionally, but I want to be able to either dispatch on it or include
>> additional fields for other purposes.  An example is wrapping a dictionary:
>>
>> julia> immutable MyDict{K,V} <: Associative{K,V}
>>  d::Dict{K,V}
>>  otherfield::Int
>>end
>>
>>
>> julia> md = MyDict(Dict(:x=>1), 10)
>> Error showing value of type MyDict{Symbol,Int64}:
>> ERROR: MethodError: `length` has no method matching length(::MyDict{
>> Symbol,Int64})
>>
>>
>> julia> md[:x]
>> ERROR: MethodError: `get` has no method matching get(::MyDict{Symbol,
>> Int64}, ::Symbol, ::Symbol)
>> Closest candidates are:
>>   get(::ObjectIdDict, ::ANY, ::ANY)
>>   get{K,V}(::Dict{K,V}, ::Any, ::Any)
>>   get{K}(::WeakKeyDict{K,V}, ::Any, ::Any)
>>   ...
>>  in getindex at dict.jl:282
>>  in eval at REPL.jl:3
>>
>>
>> julia> Base.length(md::MyDict) = length(md.d)
>> length (generic function with 111 methods)
>>
>>
>> julia> md
>> MyDict{Symbol,Int64} with 1 entryError showing value of type MyDict{
>> Symbol,Int64}:
>> ERROR: MethodError: `start` has no method matching start(::MyDict{Symbol,
>> Int64})
>>  in isempty at iterator.jl:3
>>  in showdict at dict.jl:93
>>  in writemime at replutil.jl:36
>>  in display at REPL.jl:114
>>  in display at REPL.jl:117
>>  [inlined code] from multimedia.jl:151
>>  in display at multimedia.jl:163
>>  in print_response at REPL.jl:134
>>  in print_response at REPL.jl:121
>>  in anonymous at REPL.jl:624
>>  in run_interface at ./LineEdit.jl:1610
>>  in run_frontend at ./REPL.jl:863
>>  in run_repl at ./REPL.jl:167
>>  in _start at ./client.jl:420
>>
>> # the pain continues...
>>
>>
>>
>> I'd like to be able to automatically do "md[:x]" and have it work, but
>> instead I have to define a bunch of Base methods where I simply re-call the
>> method with "md.d".
>>
>> The macro I have in mind would grab the immediate supertype of the type
>> in question, make the new type a subtype of that supertype, and then define
>> pass-through methods for anything in the "methodswith(Dict)" list.
>>
>> Does this exist already?  Could I get myself in trouble somehow by
>> defining pass-through methods like this?
>>
>> Thanks!
>> Tom
>>
>


[julia-users] Re: Macro to generate equivalent types

2016-05-19 Thread Kristoffer Carlsson
Sounds 
like: https://github.com/JuliaLang/DataStructures.jl/blob/master/src/delegate.jl

In 
action: 
https://github.com/JuliaLang/DataStructures.jl/blob/f67dc0b144d1fcc83eb48a7beddca746d4841f9e/src/default_dict.jl#L52

On Thursday, May 19, 2016 at 3:53:35 PM UTC+2, Tom Breloff wrote:
>
> I have a pattern that I frequently use, and I'm considering writing a 
> macro to automate it, but figured I'd check if it exists already.
>
> I frequently want to have the equivalent of an existing type functionally, 
> but I want to be able to either dispatch on it or include additional fields 
> for other purposes.  An example is wrapping a dictionary:
>
> julia> immutable MyDict{K,V} <: Associative{K,V}
>  d::Dict{K,V}
>  otherfield::Int
>end
>
>
> julia> md = MyDict(Dict(:x=>1), 10)
> Error showing value of type MyDict{Symbol,Int64}:
> ERROR: MethodError: `length` has no method matching length(::MyDict{Symbol
> ,Int64})
>
>
> julia> md[:x]
> ERROR: MethodError: `get` has no method matching get(::MyDict{Symbol,Int64
> }, ::Symbol, ::Symbol)
> Closest candidates are:
>   get(::ObjectIdDict, ::ANY, ::ANY)
>   get{K,V}(::Dict{K,V}, ::Any, ::Any)
>   get{K}(::WeakKeyDict{K,V}, ::Any, ::Any)
>   ...
>  in getindex at dict.jl:282
>  in eval at REPL.jl:3
>
>
> julia> Base.length(md::MyDict) = length(md.d)
> length (generic function with 111 methods)
>
>
> julia> md
> MyDict{Symbol,Int64} with 1 entryError showing value of type MyDict{Symbol
> ,Int64}:
> ERROR: MethodError: `start` has no method matching start(::MyDict{Symbol,
> Int64})
>  in isempty at iterator.jl:3
>  in showdict at dict.jl:93
>  in writemime at replutil.jl:36
>  in display at REPL.jl:114
>  in display at REPL.jl:117
>  [inlined code] from multimedia.jl:151
>  in display at multimedia.jl:163
>  in print_response at REPL.jl:134
>  in print_response at REPL.jl:121
>  in anonymous at REPL.jl:624
>  in run_interface at ./LineEdit.jl:1610
>  in run_frontend at ./REPL.jl:863
>  in run_repl at ./REPL.jl:167
>  in _start at ./client.jl:420
>
> # the pain continues...
>
>
>
> I'd like to be able to automatically do "md[:x]" and have it work, but 
> instead I have to define a bunch of Base methods where I simply re-call the 
> method with "md.d".
>
> The macro I have in mind would grab the immediate supertype of the type in 
> question, make the new type a subtype of that supertype, and then define 
> pass-through methods for anything in the "methodswith(Dict)" list.
>
> Does this exist already?  Could I get myself in trouble somehow by 
> defining pass-through methods like this?
>
> Thanks!
> Tom
>


[julia-users] Macro to generate equivalent types

2016-05-19 Thread Tom Breloff
I have a pattern that I frequently use, and I'm considering writing a macro 
to automate it, but figured I'd check if it exists already.

I frequently want to have the equivalent of an existing type functionally, 
but I want to be able to either dispatch on it or include additional fields 
for other purposes.  An example is wrapping a dictionary:

julia> immutable MyDict{K,V} <: Associative{K,V}
 d::Dict{K,V}
 otherfield::Int
   end


julia> md = MyDict(Dict(:x=>1), 10)
Error showing value of type MyDict{Symbol,Int64}:
ERROR: MethodError: `length` has no method matching length(::MyDict{Symbol,
Int64})


julia> md[:x]
ERROR: MethodError: `get` has no method matching get(::MyDict{Symbol,Int64}, 
::Symbol, ::Symbol)
Closest candidates are:
  get(::ObjectIdDict, ::ANY, ::ANY)
  get{K,V}(::Dict{K,V}, ::Any, ::Any)
  get{K}(::WeakKeyDict{K,V}, ::Any, ::Any)
  ...
 in getindex at dict.jl:282
 in eval at REPL.jl:3


julia> Base.length(md::MyDict) = length(md.d)
length (generic function with 111 methods)


julia> md
MyDict{Symbol,Int64} with 1 entryError showing value of type MyDict{Symbol,
Int64}:
ERROR: MethodError: `start` has no method matching start(::MyDict{Symbol,
Int64})
 in isempty at iterator.jl:3
 in showdict at dict.jl:93
 in writemime at replutil.jl:36
 in display at REPL.jl:114
 in display at REPL.jl:117
 [inlined code] from multimedia.jl:151
 in display at multimedia.jl:163
 in print_response at REPL.jl:134
 in print_response at REPL.jl:121
 in anonymous at REPL.jl:624
 in run_interface at ./LineEdit.jl:1610
 in run_frontend at ./REPL.jl:863
 in run_repl at ./REPL.jl:167
 in _start at ./client.jl:420

# the pain continues...



I'd like to be able to automatically do "md[:x]" and have it work, but 
instead I have to define a bunch of Base methods where I simply re-call the 
method with "md.d".

The macro I have in mind would grab the immediate supertype of the type in 
question, make the new type a subtype of that supertype, and then define 
pass-through methods for anything in the "methodswith(Dict)" list.

Does this exist already?  Could I get myself in trouble somehow by defining 
pass-through methods like this?

Thanks!
Tom
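The pass-through boilerplate can be generated mechanically; here is a rough sketch of such a macro (hygiene and keyword arguments are ignored, so this is only a starting point):

```julia
# For each name f, define Base.f(w::T, args...) = Base.f(w.field, args...)
macro delegate(T, field, fnames...)
    defs = [:(Base.$f(w::$T, args...) = (Base.$f)(w.$field, args...)) for f in fnames]
    esc(Expr(:block, defs...))
end

@delegate MyDict d length getindex get start next done

md = MyDict(Dict(:x=>1), 10)
md[:x]   # forwards to md.d, so this now works
```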


[julia-users] Re: Change input key bindings

2016-05-19 Thread yousef . k . alhammad
Greetings all,

So, after much reading through Julia base, I believe I've figured out how 
to directly read keyboard inputs from the terminal.
First, here is the code:

###
function main()
  raw_mode()
  println("Just type away and see your characters print, line by line.")
  println("Press escape to exit.")
  input = readinput()
  while input != "\e"
println(input)
input = readinput()
  end
  println("Goodbye!")
end

raw_mode() = ccall(:jl_tty_set_mode, Int32, (Ptr{Void}, Int32),
                   STDIN.handle, true) == 0 ||
             throw("FATAL: Terminal unable to enter raw mode.")

function readinput()
  input = readavailable(STDIN)
  chars = map(Char, input)
  return join(chars)
end

main()
###

Just paste this code in a .jl file and execute it directly using `julia 
filename.jl` in your favorite text terminal.
The function "raw_mode" forces the terminal to read from the keyboard 
directly without having to buffer after an enter key.
The function "readinput" will call "readavailable(STDIN)" to block until a 
key is pressed.
Once a key is pressed, "readavailable(STDIN)" will return the key pressed 
in Vector{UInt8}.
The rest of the function just converts that into a String so it is 
readable, and it's printed afterwards on screen.
This process is looped until the escape key is pressed, indicating that the 
program must now exit.

NOTE: Some terminals are pseudoterminals, and they are switched to raw mode 
using a different process. So if you can't get this example up and running, 
just let me know.

In short, just run the terminal in raw mode and then call on 
"readavailable(STDIN)" to receive keyboard inputs.
I don't know if this is good practice, but it does the job as expected, at 
least on Windows 10 x64.
This took me too long to figure out, so I thought I should share it as it 
may be useful to someone else in the future.


Regards,
Yousef




[julia-users] Re: moving data to workers for distributed workloads

2016-05-19 Thread Fabian Gans
Hi Michael, 

I recently had a similar problem and this SO thread helped me a lot: 
http://stackoverflow.com/questions/27677399/julia-how-to-copy-data-to-another-processor-in-julia

As a side question: which code are you using to calculate spherical 
harmonic transforms? I was looking for a Julia package some time ago and 
did not find any, so if your code is publicly available, could you point me 
to it? 

Thanks
Fabian



On Wednesday, May 18, 2016 at 9:28:34 PM UTC+2, Michael Eastwood wrote:
>
> Hi julia-users!
>
> I need some help with distributing a workload across tens of workers 
> across several machines.
>
> The problem I am trying to solve involves calculating the elements of a 
> large block diagonal matrix. The total size of the blocks is >500 GB, so I 
> cannot store the entire thing in memory. The way the calculation works, I 
> do a bunch of spherical harmonic transforms and the results give me one row 
> in each block of the matrix.
>
> The following code illustrates what I am doing currently. I am 
> distributing the spherical harmonic transforms amongst all the workers and 
> bringing the data back to the master process to write the results to disk 
> (the master process has each matrix block mmapped to disk).
>
> idx = 1
> limit = 1
> nextidx() = (myidx = idx; idx += 1; myidx)
> @sync for worker in workers()
>     @async while true
>         myidx = nextidx()
>         myidx ≤ limit || break
>         coefficients = remotecall_fetch(spherical_harmonic_transforms, worker, input[myidx])
>         write_results_to_disk(coefficients)
>     end
> end
>
> Each spherical harmonic transform takes O(10 seconds) so I thought the 
> data movement cost would be negligible compared to this. However, if I have 
> three machines each with 16 workers, machine 1 will have all 16 workers 
> working hard (the master process is on machine 1) and machines 2&3 will 
> have most of their workers idling. My hypothesis is that the cost of moving 
> the data to and from the workers is preventing machines 2&3 from being 
> fully utilized.
>
> coefficients is a vector of a million Complex128s (16 MB)
> input is composed of two parts: 1) a vector of 10 million Float64s (100 
> MB) and 2) a small amount of additional information that is negligibly 
> small compared to the first part.
>
> The trick is that the first part of input (the 100 MB vector) doesn't 
> change between iterations. So I could alleviate most of the data movement 
> problem by moving that part to each worker once. Problem is that I can't 
> seem to figure out how to do that. The manual (
> http://docs.julialang.org/en/release-0.4/manual/parallel-computing/#remoterefs-and-abstractchannels)
>  
> is a little thin on how to use RemoteRefs.
>
> So how do you move data to workers in a way that it can be re-used on 
> subsequent iterations? An example in the manual would be very helpful!
>
> Thanks,
> Michael
>


[julia-users] Re: [ANN & RFC] Measurements.jl: Uncertainty propagation library

2016-05-19 Thread Andre Bieler
As an experimental physicist I thank you very much for this package! :D

On Tuesday, May 17, 2016 at 11:21:27 PM UTC+2, Mosè Giordano wrote:
>
> Hi Lucas,
>
> Looks great!
>>
>> How about overloading the ± operator?
>> using Measurements
>>
>> ±(μ, σ) = Measurement(μ, σ)
>> # => ± (generic function with 1 method)
>>
>> a = 4.5 ± 0.1
>> # 4.5 ± 0.1
>>
>>
>> I don't know if it's defined in other packages, but I've never seen it.
>>
>
>  This is a great idea, that's why I love Julia ;-)  Thanks a lot for your 
> suggestion, I already implemented it with latest commit!
>
> Bye,
> Mosè
>


[julia-users] Re: http://quant-econ.net/jl/linear_algebra.html

2016-05-19 Thread Henri Girard
Great job :)
Your addition makes it great; that's the only one I found on the net with IJulia.
Thanks for your quick response :)
Henri

Le mercredi 18 mai 2016 11:04:11 UTC+2, Henri Girard a écrit :
>
> http://quant-econ.net/jl/linear_algebra.html
>
> It says :If you’re interested, the Julia code for producing this figure is 
> here
> But no link...
> The 3D link below works but I am looking for representing vectors
> Any help
> Henri
>