Re: [julia-users] poor performance of threads

2016-03-07 Thread Sam Kaplan
Cool.  Thanks for being persistent!  I guess it can help with your original 
problem, since now you know the speed-up is possible :)  It might give you 
some confidence that, once you address the issues raised by Tim, if you 
still do not get a performance boost, then you can think more about why 
your original problem is not compute-bound.

Sam

On Monday, March 7, 2016 at 12:46:58 AM UTC-6, pev...@gmail.com wrote:
>
> Dear Sam,
> now the timing looks more optimistic,
>
> 111.858195 seconds (31.43 k allocations: 1.442 MB, 0.00% gc time)
> 267.895365 seconds
> verification succeeded
>
> I have about a 2.5-fold speed-up with threads, which is better.  It did 
> not help me with my original problem, though.
> Nevertheless, thank you very much for the discussion.
> Best wishes,
> Tomas
>
>

Re: [julia-users] poor performance of threads

2016-03-06 Thread Sam Kaplan
Hi Tomas,

If I read that example correctly, then you need to modify the code a little 
before running.  I think those two timings that you report are both from 
running the threaded code.  Do you agree? I think passing the option 
"verify=true" to "laplace3d" on line 134 should do the trick to get timings 
from both threaded and serial code.  I would also remove the @time macro 
from line 134, just to avoid confusion.  Sorry I can't help more; I don't 
have a Julia 0.5 build with threading enabled installed at the moment.
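
In the meantime, here is a generic sketch of what I mean, with made-up 
placeholder functions standing in for the serial and threaded laplace3d 
variants:

```julia
# Hypothetical sketch: time the serial and the threaded variant
# separately, so the two numbers are directly comparable.
# `work_serial` and `work_threaded` are placeholders, not laplace3d.
work_serial()   = sum(sqrt(i) for i = 1:10^6)
work_threaded() = work_serial()   # imagine a threaded implementation here

work_serial(); work_threaded()    # warm-up call, to exclude compile time

@time work_serial()
@time work_threaded()
```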

Sam

On Friday, March 4, 2016 at 11:19:17 PM UTC-6, pev...@gmail.com wrote:
>
> Dear Sam,
> the output of the benchmark is the following
>
> 105.290122 seconds (31.43 k allocations: 1.442 MB, 0.00% gc time)
> 107.445101 seconds (1.37 M allocations: 251.368 MB, 0.12% gc time)
>
> Tomas
>


Re: [julia-users] poor performance of threads

2016-03-04 Thread Sam Kaplan
Hello,

Perhaps it might help the discussion to try running one of the Julia 
threading performance benchmark codes on your MacBook Air.  For example:

https://github.com/JuliaLang/julia/blob/master/test/perf/threads/laplace3d/laplace3d.jl

Presumably that code is type-stable and compute-bound since it was written 
for the purpose of testing the threading code.  Anyway, if you have time to 
do that, I would certainly be interested in the result.  Also, the 
following GitHub issue might be of interest:

https://github.com/JuliaLang/julia/issues/14417


Thanks!

Sam

On Friday, March 4, 2016 at 8:14:33 AM UTC-6, pev...@gmail.com wrote:
>
> Thank you very much Tim.
> I am using the profiler and your ProfileView package quite extensively, 
> and I know where the Achilles' heel in my code is, and it is CPU-bound.  
> That's why I am so puzzled by the threads.
>
> I will try to use @code_warntype; I have never used it before. 
>
> Best wishes,
> Tomas
>
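
For what it's worth, a minimal @code_warntype example (a sketch; `f` is 
made up) showing what type instability looks like: non-concrete slots are 
flagged in the output:

```julia
# Minimal @code_warntype sketch.  `f` is deliberately type-unstable:
# it returns an Int on one branch and a Float64 on the other.
f(x) = x > 0 ? x : 0.0

@code_warntype f(1)   # the return type is reported as a non-concrete union
```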


Re: [julia-users] Possible bug in @spawnat or fetch?

2015-04-30 Thread Sam Kaplan
Thanks Amit!  In the GitHub issue that you posted, I'll add a link back to 
this thread.
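
In the meantime, the error seems avoidable by giving the fetched value a 
name distinct from the one assigned inside the @spawnat block (that is what 
makes test1 work); a sketch in the 0.3 syntax from this thread:

```julia
# Workaround sketch: do not reuse the name assigned inside the
# @spawnat block (`x`) for the result of fetch.
addprocs(1)

function test2_fixed()
    ref = @spawnat workers()[1] begin
        x = 1
    end
    result = fetch(ref)   # distinct name, so no scoping clash with `x`
    @show result
end

test2_fixed()
```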

On Wednesday, April 29, 2015 at 10:40:54 PM UTC-5, Amit Murthy wrote:
>
> Simpler case.
>
> julia> function test2()
>            @async x = 1
>            x = 2
>        end
>
> test2 (generic function with 1 method)
>
>
> julia> test2()
>
> ERROR: UndefVarError: x not defined
>
>  in test2 at none:2
>
>
> Issue created : https://github.com/JuliaLang/julia/issues/11062
>
>
> On Thu, Apr 30, 2015 at 8:55 AM, Amit Murthy wrote:
>
>> Yes, this looks like a bug. In fact the below causes an error:
>>
>> function test2()
>>     ref = @spawnat workers()[1] begin
>>         x = 1
>>     end
>>     x = 2
>> end
>>
>> Can you open an issue on github?
>>
>>
>> On Thu, Apr 30, 2015 at 7:07 AM, Sam Kaplan wrote:
>>
>>> Hello,
>>>
>>> I have the following code example:
>>> addprocs(1)
>>>
>>> function test1()
>>> ref = @spawnat workers()[1] begin
>>> x = 1
>>> end
>>> y = fetch(ref)
>>> @show y
>>> end
>>>
>>> function test2()
>>> ref = @spawnat workers()[1] begin
>>> x = 1
>>> end
>>> x = fetch(ref)
>>> @show x
>>> end
>>>
>>> function main()
>>> test1()
>>> test2()
>>> end
>>>
>>> main()
>>>
>>> giving the following output:
>>> y => 1
>>> ERROR: x not defined
>>>  in test2 at /tmp/test.jl:12
>>>  in main at /tmp/test.jl:21
>>>  in include at /usr/bin/../lib64/julia/sys.so
>>>  in include_from_node1 at ./loading.jl:128
>>>  in process_options at /usr/bin/../lib64/julia/sys.so
>>>  in _start at /usr/bin/../lib64/julia/sys.so
>>> while loading /tmp/test.jl, in expression starting on line 24
>>>
>>>
>>> Is this a valid error in the code or a bug in Julia?  The error seems to 
>>> be caused when the variable that is local to the `@spawnat` block has its 
>>> name mirrored by the variable being assigned to by the `fetch` call.
>>>
>>> For reference, I am running version 0.3.6:
>>>_
>>>_   _ _(_)_ |  A fresh approach to technical computing
>>>   (_) | (_) (_)|  Documentation: http://docs.julialang.org
>>>_ _   _| |_  __ _   |  Type "help()" for help.
>>>   | | | | | | |/ _` |  |
>>>   | | |_| | | | (_| |  |  Version 0.3.6
>>>  _/ |\__'_|_|_|\__'_|  |  
>>> |__/   |  x86_64-redhat-linux
>>>
>>>
>>> Thanks!
>>>
>>> Sam
>>>
>>
>>
>

[julia-users] Possible bug in @spawnat or fetch?

2015-04-29 Thread Sam Kaplan
Hello,

I have the following code example:
addprocs(1)

function test1()
ref = @spawnat workers()[1] begin
x = 1
end
y = fetch(ref)
@show y
end

function test2()
ref = @spawnat workers()[1] begin
x = 1
end
x = fetch(ref)
@show x
end

function main()
test1()
test2()
end

main()

giving the following output:
y => 1
ERROR: x not defined
 in test2 at /tmp/test.jl:12
 in main at /tmp/test.jl:21
 in include at /usr/bin/../lib64/julia/sys.so
 in include_from_node1 at ./loading.jl:128
 in process_options at /usr/bin/../lib64/julia/sys.so
 in _start at /usr/bin/../lib64/julia/sys.so
while loading /tmp/test.jl, in expression starting on line 24


Is this a valid error in the code or a bug in Julia?  The error seems to be 
caused when the variable that is local to the `@spawnat` block has its name 
mirrored by the variable being assigned to by the `fetch` call.

For reference, I am running version 0.3.6:
   _
   _   _ _(_)_ |  A fresh approach to technical computing
  (_) | (_) (_)|  Documentation: http://docs.julialang.org
   _ _   _| |_  __ _   |  Type "help()" for help.
  | | | | | | |/ _` |  |
  | | |_| | | | (_| |  |  Version 0.3.6
 _/ |\__'_|_|_|\__'_|  |  
|__/   |  x86_64-redhat-linux


Thanks!

Sam


[julia-users] Re: Parallel loop, what's wrong? Parallel is slower than normal

2015-01-31 Thread Sam Kaplan
Hi Paul,

If D is allocated on the master, then Julia will need to pass D from the 
master to the workers.  I'm guessing that this communication might be more 
expensive than the computation in your loops.  It may be useful to take a look 
at distributed arrays in the parallel section of the Julia docs.
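
For example, a rough sketch of the distributed approach (untested on my 
part; it assumes Julia 0.3's built-in DArray with drandn, and that D ends 
up split along its columns):

```julia
addprocs(4)

# Distribute D across the workers so each one computes the variance of
# only its local columns; D never has to be shipped from the master.
D = drandn(1000, 1000)   # a DArray, split over the 4 workers

refs = [@spawnat p [var(localpart(D)[:, j]) for j = 1:size(localpart(D), 2)]
        for p in procs(D)]
w = vcat(map(fetch, refs)...)   # gather the per-column variances
```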

Hope it helps.

Sam

On Saturday, January 31, 2015 at 7:38:22 AM UTC-6, paul analyst wrote:
>
>
> Parallel loop, what's wrong?  Parallel is slower than normal 
>
> julia> @time for i=1:l
>w[i]=var(D[:,i])
>end
> elapsed time: 4.443197509 seconds (14074576 bytes allocated)
>
>
> julia> @time ww=@parallel (hcat) for i=1:l
>var(D[:,i])
>end
> elapsed time: 5.287007403 seconds (435449580 bytes allocated, 5.00% gc 
> time)
> 1x1 Array{Float64,2}:
>
> Paul
>
> julia> @time for i=1:l
>w[i]=var(D[:,i])
>end
> elapsed time: 4.331569152 seconds (8637464 bytes allocated)
>
> julia> @time ww=@parallel (hcat) for i=1:l
>var(D[:,i])
>end
> elapsed time: 4.908234336 seconds (422121448 bytes allocated, 4.85% gc 
> time)
> 1x1 Array{Float64,2}:
> 0.000703737  0.000731674  0.000582672  0.00080388 ... 0.000759479  
> 0.000402509  0.0007118  0.000989408
>
> julia> size(D)
> (1,1)
>


[julia-users] Re: recommended reading for c-call interfaces?

2015-01-08 Thread Sam Kaplan
Hi Andreas,

The documentation has a paragraph about this:

http://docs.julialang.org/en/release-0.3/manual/calling-c-and-fortran-code/:

"Currently, it is not possible to reliably pass structs and other 
non-primitive types by *value* from Julia to/from C libraries. However, 
*pointers* to structs can be passed. The simplest case is that of C 
functions that generate and use *opaque* pointers to struct types, which 
can be passed to/from Julia as Ptr{Void} (or any other Ptr type). Memory 
allocation and deallocation of such objects must be handled by calls to the 
appropriate cleanup routines in the libraries being used, just like in any 
C program. A more complicated approach is to declare a composite type in 
Julia that mirrors a C struct, which allows the structure fields to be 
directly accessed in Julia. Given a Julia variable x of that type, a 
pointer can be passed as &x to a C function expecting a pointer to the 
corresponding struct. If the Julia type T is immutable, then a Julia 
Array{T} is stored in memory identically to a C array of the corresponding 
struct, and can be passed to a C program expecting such an array pointer."

I have never tried using the immutable approach to mirror a C structure, 
but it looks interesting.  Instead, I use the Ptr{Void} method.  So, in 
your C code you might have a couple of methods for building your type and 
setting a and b in your structure,

void* builder() { return (void*) malloc(sizeof(struct mytype)); }
void setter(struct mytype *t, int a, int b) { t->a = a; t->b = b; }

Then in your Julia code, you would write something like,
 
t = ccall((:builder, "mylib"), Ptr{Void}, ())
ccall((:setter, "mylib"), Void, (Ptr{Void}, Int32, Int32), t, 1, 2)
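
For completeness, the mirrored-immutable approach described in the docs 
might look something like this for your flip example (an untested sketch on 
my part, in 0.3 syntax; the ccall is commented out since it needs your 
actual library):

```julia
# Untested sketch of the mirrored-immutable approach.  An Array of an
# immutable type has the same memory layout as a C array of the
# corresponding struct, so the library can write entries into it.
immutable MyType
    a::Int32
    b::Int32
end

ts = [MyType(0, 0)]    # one-element Array{MyType}: like `struct mytype[1]`
# ccall((:flip, "mylib"), Void, (Ptr{MyType},), ts)
# afterwards, ts[1].a and ts[1].b hold whatever the C function wrote
```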

Hope it helps.

Sam

On Thursday, January 8, 2015 3:29:21 AM UTC-6, Andreas Lobinger wrote:
>
> Hello colleagues,
>
> I thought I understood the ccall interface, but now I have run into a 
> problem (segmentation fault) and cannot really track down where the 
> problem actually occurs.
> I want/need to pass a pointer to a structure (C style) to a library, and 
> the function in the library writes entries into the structure.
>
> (The following I write from memory; I do not have the code on this 
> computer...)
>
> type mytype
> a::Int32
> b::Int32
> end
>
> t = mytype(0,0)
>
> ccall(:flip, Void, (mytype,), t)
>
>
> 1) Is there somewhere code you would recommend to read?
> 2) How can I use code_lowered or code_llvm to actually see the details? 
> The above example is included in a module, and it looks like the code is 
> compiled at 'using', so code_llvm e.g. only shows the call to the compiled 
> function, but not the inside.
> 3) Other documentation (blog, FR) etc.?
>
> Wishing a happy day,
>   Andreas
>


[julia-users] DArray scope problem -- user error or julia bug?

2014-10-12 Thread Sam Kaplan
Dear All,

I have been having an issue with Distributed Arrays in Julia 0.3.  It 
seems that the DArray is not broadcast to the workers when it is declared 
inside an if block.  Here are two code listings.  Listing 1 works and 
declares the DArray before the if block.  Listing 2 does not work and 
declares the DArray inside the if block.  Can other people reproduce this 
behavior?  If so, is it expected, or should I file an issue?  Thanks!

Sam

Listing 1:
addprocs(4)

D = dones(4,1)
if 1 == 1
@sync begin
for i = 1:4
@spawnat procs(D)[i] begin
fill!(localpart(D),1.0*i)
end
end
end
@show D
end

Listing 2:
addprocs(4)

if 1 == 1
D = dones(4,1)
@sync begin
for i = 1:4
@spawnat procs(D)[i] begin
fill!(localpart(D),1.0*i)
end
end
end
@show D
end

The output of Listing 1 is:
$ julia listing1.jl 
D => [1.0
 2.0
 3.0
 4.0]

The output of listing 2 is:
$ julia listing2.jl 
exception on 2: exception on 5: exception on exception on 3: 4: ERROR: D not 
defined
 in anonymous at multi.jl:8
 in anonymous at multi.jl:848
 in run_work_thunk at multi.jl:621
 in run_work_thunk at multi.jl:630
 in anonymous at task.jl:6
ERROR: D not defined
 in anonymous at multi.jl:8
 in anonymous at multi.jl:848
 in run_work_thunk at multi.jl:621
 in run_work_thunk at multi.jl:630
 in anonymous at task.jl:6
ERROR: D not defined
 in anonymous at multi.jl:8
 in anonymous at multi.jl:848
 in run_work_thunk at multi.jl:621
 in run_work_thunk at multi.jl:630
 in anonymous at task.jl:6
ERROR: D not defined
 in anonymous at multi.jl:8
 in anonymous at multi.jl:848
 in run_work_thunk at multi.jl:621
 in run_work_thunk at multi.jl:630
 in anonymous at task.jl:6
D => [1.0
 1.0
 1.0
 1.0]




[julia-users] Combining modules with parallel computing

2014-09-07 Thread Sam Kaplan
Hi Richard,

Give

@everywhere using NLSolve

a try.  Hope that helps.

Sam



[julia-users] Expected behaviour of addprocs(n, cman=SSHManager(; machines=machines))

2014-04-21 Thread Sam Kaplan
Hello,

I have a quick question about the SSHManager, and how it works with 
addprocs.  If,

machines = [machine1, machine2]

and I do,

cman = Base.SSHManager(;machines=machines)
addprocs(2, cman=cman)
for pid in workers()
fetch(@spawnat pid run(`hostname`))
end

Then I see that one process is running on 'machine1', and the other on 
'machine2'.  On the other hand, if I do:

cman = Base.SSHManager(;machines=machines)
addprocs(1, cman=cman)
addprocs(1, cman=cman)
for pid in workers()
fetch(@spawnat pid run(`hostname`))
end

Then I see that both processes are running on 'machine1'.

Is this expected behaviour, or a bug (or some other misunderstanding of 
mine)?

Thanks!

Sam