By "construction below", I mean this:
    results = SharedArray(Float64, (m, n))
    @sync @parallel for i = 1:n
        results[:, i] = complicatedfunction(inputs[i])
    end
On Saturday, January 30, 2016 at 2:31:40 PM UTC-5, Christopher Alexander
wrote:
>
> I have tried the construction below with no success.
I have tried the construction below with no success. In v0.4.3, I end up
getting a segmentation fault. In the latest v0.5.0, the run time is 3-4x
as long as the non-parallelized version, and the array constructed is vastly
different from the one constructed using the non-parallelized code.
Thanks, Ryan, for the pointer; this is awesome work. I am looking forward to
this becoming part of the Julia release in Q3.
Sebastian
On Thu, Aug 20, 2015 at 3:34 PM, Ryan Cox wrote:
> Sebastian,
>
> This talk from JuliaCon 2015 discusses progress on OpenMP-like threading:
> Kiran Pamnany and Ranjan Anantharaman: Multi-threading Julia:
Sebastian,
This talk from JuliaCon 2015 discusses progress on OpenMP-like threading:
Kiran Pamnany and Ranjan Anantharaman: Multi-threading Julia:
http://youtu.be/GvLhseZ4D8M?a
Ryan
On 08/19/2015 02:42 PM, Sebastian Nowozin wrote:
Hi Julio,
I believe this is a very common type of workload, especially in scientific
computing.
Sebastian, I'm not sure I understand you correctly, but point (1) in your
list can usually be taken care of by wrapping all the necessary
usings/requires/includes and definitions in an @everywhere begin ... end
block.
Julio, as for your original problem, I think Tim's advice about
SharedArrays
Hi Sebastian, thanks for sharing your experience in parallelizing Julia
code. I used OpenMP in the past too; it was very convenient in my C++
codebase. I remember an initiative, OpenACC, that was trying to bring
OpenMP and GPU accelerators together; I don't know its current status.
It may be
Hi Julio,
I believe this is a very common type of workload, especially in scientific
computing.
In C++ one can use OpenMP for this type of computation; in Matlab there is
parfor. From the user's perspective, both "just work".
In Julia, I have not found an easy and convenient way to do such
computations.
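For comparison, the parfor-style pattern described above can be sketched in Python using only the standard library; the helper name parfor and the square example below are illustrative stand-ins, not an established API from this thread.

```python
# A parfor-like helper: apply fn to each element in parallel,
# preserving input order, using a pool of worker processes.
from concurrent.futures import ProcessPoolExecutor

def parfor(fn, iterable, max_workers=None):
    with ProcessPoolExecutor(max_workers=max_workers) as ex:
        return list(ex.map(fn, iterable))

def square(x):
    # stand-in for an expensive, independent per-iteration computation
    return x * x

if __name__ == "__main__":
    print(parfor(square, range(5)))  # [0, 1, 4, 9, 16]
```

From the caller's side this "just works" in the parfor sense: the loop body runs on however many worker processes are available, and results come back in order.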
Hi Ismael,
MPI is distributed memory; I'm trying to use all the cores in my single
workstation with shared memory instead. Thanks for the link anyway.
-Júlio
There is an MPI wrapper for Julia; I don't know if it'll suit your needs
though:
https://github.com/JuliaParallel/MPI.jl
On Wednesday, August 19, 2015 at 1:03:59 PM UTC-5, Júlio Hoffimann
wrote:
>
> Hi Kristoffer, sorry for the delay and thanks for the code.
>
> What I want to do is very simple.
Hi Kristoffer, sorry for the delay and thanks for the code.
What I want to do is very simple: I have an expensive loop for i=1:N such
that each iteration is independent and produces a large array of size M.
The result of this loop is a matrix of size MxN. I have many CPU cores at
my disposal and want to use all of them.
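Under those assumptions (independent iterations, each producing a length-M column), the pattern can be sketched in Python as follows; complicated_function is a hypothetical stand-in for the real per-iteration work, and the column lists play the role of the MxN matrix.

```python
from multiprocessing import Pool

def complicated_function(x):
    # stand-in for the expensive per-iteration work;
    # returns one length-3 column of the result
    return [x * k for k in range(3)]

def build_matrix(inputs):
    # compute all N columns in parallel, then assemble them into
    # an M x N matrix (a list of M rows) by transposing
    with Pool() as pool:
        cols = pool.map(complicated_function, inputs)
    return [list(row) for row in zip(*cols)]

if __name__ == "__main__":
    print(build_matrix([1, 2, 3, 4]))
    # [[0, 0, 0, 0], [1, 2, 3, 4], [2, 4, 6, 8]]
```

The key point is that the per-column work is embarrassingly parallel, so a plain pool map over the N inputs is enough; only the final assembly touches all columns.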
Something like this?
    @everywhere function fill(A::SharedArray)
        for idx in Base.localindexes(A)
            A[idx] = rand()
        end
    end

    function fill_array(m, n)
        A = SharedArray(Float64, (m, n))
        @sync begin
            for p in procs(A)
                @async remotecall_wait(p, fill, A)
            end
        end
        A
    end
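The same idea — every worker fills only its own slice of a shared buffer, and the caller waits for all of them before reading — can be sketched in Python with a shared-memory array; fill_range, fill_shared, and the chunking scheme here are just one illustrative way to split the work, not code from this thread.

```python
from multiprocessing import Process, Array

def fill_range(buf, lo, hi):
    # each worker writes only its own slice of the shared buffer
    for i in range(lo, hi):
        buf[i] = float(i)

def fill_shared(n, nworkers=2):
    buf = Array('d', n)  # shared array of doubles
    chunk = (n + nworkers - 1) // nworkers
    workers = [Process(target=fill_range,
                       args=(buf, w * chunk, min((w + 1) * chunk, n)))
               for w in range(nworkers)]
    for p in workers:
        p.start()
    for p in workers:  # wait for all writers before reading, like @sync
        p.join()
    return list(buf)

if __name__ == "__main__":
    print(fill_shared(5))  # [0.0, 1.0, 2.0, 3.0, 4.0]
```

Because the slices are disjoint, no two workers ever write the same index, so no locking is needed beyond the final join.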
What am I doing wrong in the following code?
    function foo(N; parallel=false)
        if parallel && nprocs() < CPU_CORES
            addprocs(CPU_CORES - nprocs())
        end
        result = SharedArray(Float64, 9, N)
        @parallel for i=1:N
            sleep(1)
            result[:,i] = rand(3,3)[:]
        end
        result
    end
If I call foo(60
Consider the following simplified example. There is an algorithm
implemented as a function foo(N). This algorithm repeats the same recipe N
times in a loop to fill in an array of arrays:
    function foo(N)
        # bunch of auxiliary variables goes here
        # ...
        result = []
        for i=1:N
            # complicated
Thank you Tim, will check it carefully.
-Júlio
Completely possible with SharedArrays, see
http://docs.julialang.org/en/latest/manual/parallel-computing/#shared-arrays
--Tim
On Sunday, August 09, 2015 02:41:42 PM Júlio Hoffimann wrote:
> Hi,
>
> Suppose I have a complicated but embarrassingly parallel loop,
> namely:
> https://github.com/juliohm/ImageQuilting.jl/blob/master/src/iqsim.jl#L167
Hi,
Suppose I have a complicated but embarrassingly parallel loop,
namely:
https://github.com/juliohm/ImageQuilting.jl/blob/master/src/iqsim.jl#L167
How would you dispatch the iterations so that all cores in the client
computer are busy working? Is there any construct in the language for that?