Re: [julia-users] MPI.jl and composite types
Thanks! That's nice. I was misled by the gather function; I think I was looking too low level.

Joaquim

> On 1 Oct 2016, at 16:37, Erik Schnetter wrote:
>
> In Julia, `MPI` can send objects of any type. These objects will
> automatically be serialized and deserialized. The respective functions are
>
> ```Julia
> function send(obj, dest::Integer, tag::Integer, comm::Comm)
> function recv(src::Integer, tag::Integer, comm::Comm)
> ```
>
> Note the lower-case function names, which indicate (following the Python
> convention) a higher-level interface to MPI.
>
> There is also a function `irecv` that checks whether a message containing
> such an object can be received, and receives it if so.
>
> -erik
>
>> On Sat, Oct 1, 2016 at 3:19 PM, Joaquim Masset Lacombe Dias Garcia wrote:
>>
>> Hi all,
>>
>> I have the following type:
>>
>> ```Julia
>> type bar
>>     a::Int
>>     b::Vector{Matrix{Float64}}
>>     c::Vector{Float64}
>> end
>> ```
>>
>> I would like to send instances of that type via MPI.
>> Can I do it?
>> Is there any serialization/deserialization procedure I could use for that?
>>
>> thanks!
>
> --
> Erik Schnetter
> http://www.perimeterinstitute.ca/personal/eschnetter/
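For context on what the high-level `MPI.send`/`MPI.recv` pair does with a composite type: the object is serialized into a byte buffer before transmission and deserialized on receipt, much like Julia's own Serialization stdlib. A minimal single-process sketch of that round-trip (using `struct` in modern syntax; `Bar` and the field values are illustrative, not from the thread):

```julia
# Sketch: a serialize/deserialize round-trip of a composite type, which is
# essentially what MPI.jl's generic send/recv do across ranks.
using Serialization

struct Bar                       # the thread's `type bar` in modern syntax
    a::Int
    b::Vector{Matrix{Float64}}
    c::Vector{Float64}
end

x = Bar(1, [rand(2, 2), rand(3, 3)], [1.0, 2.0])

buf = IOBuffer()
serialize(buf, x)                # what MPI.send does before transmitting
seekstart(buf)
y = deserialize(buf)             # what MPI.recv does after receiving

@assert y.a == x.a && y.b == x.b && y.c == x.c
```

In an actual MPI program the buffer is replaced by the network: rank 0 would call `send(x, 1, tag, comm)` and rank 1 `recv(0, tag, comm)` (check the exact signatures against your MPI.jl version, as the API has evolved since this thread).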
Re: [julia-users] Julia with fewer dependencies?
Thanks! That is a great start!

Joaquim

> On 6 Sep 2016, at 03:24, Michele Zaffalon wrote:
>
> Is this maybe what you are looking for:
> https://groups.google.com/d/msg/julia-users/WStpLtrKiFA/JhiAbc-vAwAJ?
>
>> On Mon, Sep 5, 2016 at 11:10 PM, Joaquim Masset Lacombe Dias Garcia wrote:
>>
>> The basic question is: "Can I compile a smaller version of Julia?"
>>
>> For instance, I want to ship some program as an executable, something
>> similar to what BuildExecutable.jl does (which is very nice, by the way!).
>> The only problem is that BuildExecutable.jl, and even the pure
>> userimg.jl trick, creates a huge folder with all of Julia's main libraries.
>>
>> I looked for an existing post, but all I could find is this archived old
>> post:
>> https://www.reddit.com/r/Julia/comments/2da03c/julia_with_fewer_dependencies/
>>
>> Can I compile Julia without some of those libraries?
>>
>> One nice use case: if I have a program that makes no use of libopenblas, I
>> could ship it without having to send a 40 MB library.
>>
>> Thanks in advance!
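One lever for trimming a source build (beyond the linked thread) is the `Make.user` file at the top of the Julia source tree: `USE_SYSTEM_*` flags tell the build to link against system libraries instead of building and bundling Julia's own copies. A hedged sketch (flag names from the Julia build system of that era; verify each against `Make.inc` in your checkout before relying on it):

```make
# Make.user -- place at the top of the Julia source tree before running `make`.
# Each USE_SYSTEM_* flag skips building/bundling the corresponding library
# and links the system one instead, shrinking the shipped folder.
USE_SYSTEM_BLAS = 1      # e.g. avoid shipping the ~40 MB libopenblas
USE_SYSTEM_LAPACK = 1
USE_SYSTEM_LIBM = 1
```

Note this only avoids *bundling* the libraries; programs that never call into them still need the system copies present unless Julia itself is built without those features.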
Re: [julia-users] CUDArt: loop inside device do
Oh! Sure, thanks for the prompt answer! Sorry for the dumb question...

Joaquim

> On 12 Feb 2016, at 20:36, Tim Holy <tim.h...@gmail.com> wrote:
>
>> On Friday, February 12, 2016 08:30:26 PM Joaquim Dias Garcia wrote:
>>
>> Is there any way around it?
>>
>> I was planning a Monte Carlo code, but all the iterations rely on some huge
>> amount of data which is always the same. So sending it back and forth to
>> the device would be a bottleneck...
>
> Again, you can use loops, you just have to write your code in a way that is
> actually valid syntax. Something like this:
>
> ```Julia
> result = devices(dev->capability(dev)[1]>=2) do devlist
>     MyCudaModule.init(devlist) do dev
>         result = Array(T, n)
>         d_mat = CudaArray(mat)
>         # more allocation here...
>         for i = 1:n
>             result[i] = my_calculation(d_mat, othervariables, i)
>         end
>         result
>     end
> end
> ```
>
> The problem with your old version is that `result = for i = 1:n...` is not
> supported syntax in Julia.
>
> --Tim
Re: [julia-users] CUDArt: loop inside device do
Is there any way around it? I was planning a Monte Carlo code, but all the iterations rely on some huge amount of data which is always the same, so sending it back and forth to the device would be a bottleneck...
Re: [julia-users] Why does the order of these loops barely affect the time?
Const made the big difference, but what is the problem with the inferred type?

Joaquim

> On 21 Dec 2015, at 20:30, Milan Bouchet-Valat wrote:
>
> On Monday, 21 December 2015 at 14:04 -0800, Joaquim Masset Lacombe Dias
> Garcia wrote:
>
>> Why does the order of these loops barely affect the time?
>
> Before doing this kind of comparison, you should ensure your code is
> written to run efficiently. For that, you need to do
>
> ```Julia
> const N = 100
> ```
>
> Otherwise, the type of the variables isn't correctly inferred. Also, you
> need to call the functions once to exclude compilation time from the
> measurement.
>
> After changing this (and fixing the name of the second argument for
> `cba`), I get the maximum time difference between `abc` and `cba`, the
> latter being 3x faster than the former (4x after adding `@inbounds`).
>
> Regards
>
>> ```Julia
>> N = 100
>> x = rand(N, N, N)
>> y = rand(N, N, N)
>>
>> function abc(x::Array{Float64,3}, y::Array{Float64,3})
>>     for a = 1:N
>>         for b = 1:N
>>             for c = 1:N
>>                 y[a,b,c] = x[a,b,c]
>>             end
>>         end
>>     end
>> end
>>
>> function acb(x::Array{Float64,3}, y::Array{Float64,3})
>>     for a = 1:N
>>         for c = 1:N
>>             for b = 1:N
>>                 y[a,b,c] = x[a,b,c]
>>             end
>>         end
>>     end
>> end
>>
>> function bac(x::Array{Float64,3}, y::Array{Float64,3})
>>     for b = 1:N
>>         for a = 1:N
>>             for c = 1:N
>>                 y[a,b,c] = x[a,b,c]
>>             end
>>         end
>>     end
>> end
>>
>> function bca(x::Array{Float64,3}, y::Array{Float64,3})
>>     for b = 1:N
>>         for c = 1:N
>>             for a = 1:N
>>                 y[a,b,c] = x[a,b,c]
>>             end
>>         end
>>     end
>> end
>>
>> function cab(x::Array{Float64,3}, y::Array{Float64,3})
>>     for c = 1:N
>>         for a = 1:N
>>             for b = 1:N
>>                 y[a,b,c] = x[a,b,c]
>>             end
>>         end
>>     end
>> end
>>
>> function cba(x::Array{Float64,3}, xx::Array{Float64,3})
>>     for c = 1:N
>>         for b = 1:N
>>             for a = 1:N
>>                 y[a,b,c] = x[a,b,c]
>>             end
>>         end
>>     end
>> end
>>
>> @time abc(x,y)
>> @time acb(x,y)
>> @time bac(x,y)
>> @time bca(x,y)
>> @time cab(x,y)
>> @time cba(x,y)
>> ```
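The speedup Milan reports comes from Julia arrays being stored in column-major order: the first index is contiguous in memory, so it should vary in the innermost loop, as in `cba`. A self-contained sketch of the fast variant, with the `const` fix applied and the argument-name bug corrected (`copy_cba!` is a made-up name for illustration):

```julia
# Julia arrays are column-major: elements that differ only in the FIRST
# index are adjacent in memory, so iterating `a` innermost gives
# sequential, cache-friendly access (the c-b-a ordering from the thread).
const N = 100          # `const` lets the compiler infer the loop bounds' type

function copy_cba!(y::Array{Float64,3}, x::Array{Float64,3})
    @inbounds for c = 1:N, b = 1:N, a = 1:N   # first index innermost
        y[a,b,c] = x[a,b,c]
    end
    return y
end

x = rand(N, N, N)
y = zeros(N, N, N)
copy_cba!(y, x)
@assert y == x
```

Answering the quoted question about inference: without `const`, `N` is an untyped global, so the compiler cannot assume it stays an `Int` and generates slow, dynamically-dispatched loop code for every variant, which masks the memory-order effect being measured.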