[julia-users] Re: DataFrame to JSON

2015-11-08 Thread Eric Forgy
This is embarrassing since I'm just learning, but in the interest of 
getting feedback and improving my Julia coding skills, here is what I did:

function df2json(df::DataFrame)

    nrow, ncol = size(df)

    io = IOBuffer()
    write(io, "[\n")
    for irow = 1:nrow
        eor = irow == nrow ? "" : ","
        write(io, "{")
        for icol = 1:ncol
            eoe = icol == ncol ? "" : ","
            sym = names(df)[icol]
            name = string(sym)
            value = df[irow, sym]   # was missing: `value` was never assigned
            if isa(value, Number)
                write(io, "\""*name*"\":"*string(value)*eoe)
            else
                write(io, "\""*name*"\":\""*value*"\""*eoe)
            end
        end
        write(io, "}"*eor*"\n")
    end
    write(io, "]\n")

    json = takebuf_string(io)
    close(io)
    json
end


Any thoughts/suggestions? Thank you.
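One alternative worth comparing: build an array of Dicts and let JSON.jl do the serialization, which handles string escaping and number formatting automatically. This is a sketch in current DataFrames/JSON syntax (the thread predates it), not the poster's code; `df2json_dicts` is an invented name:

```julia
# Sketch: one Dict per row, then serialize the whole array with JSON.jl.
using DataFrames
using JSON

function df2json_dicts(df::DataFrame)
    rows = [Dict(string(c) => df[i, c] for c in names(df)) for i in 1:nrow(df)]
    return JSON.json(rows)
end

df = DataFrame(A = 1:3, B = ["M", "F", "F"])
df2json_dicts(df)  # a JSON array of objects; key order within each object is not guaranteed
```

Since Dict keys are unordered, an OrderedDict (from DataStructures.jl) can be substituted if the receiving API is sensitive to key order.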


[julia-users] DataFrame to JSON

2015-11-08 Thread Eric Forgy
Hi,

I need to serialize a DataFrame to JSON. I have read this:

   - https://github.com/JuliaStats/DataFrames.jl/issues/184
   
but my case is a little different.

If my DataFrame looks like this:

julia> df
8x2 DataFrames.DataFrame
| Row | A | B   |
|-|---|-|
| 1   | 1 | "M" |
| 2   | 2 | "F" |
| 3   | 3 | "F" |
| 4   | 4 | "M" |
| 5   | 5 | "F" |
| 6   | 6 | "M" |
| 7   | 7 | "M" |
| 8   | 8 | "F" |


Then I need my JSON to look like this (in order to talk to another API):

[
  {"A": 1, "B": "M"},
  {"A": 2, "B": "F"},
  {"A": 3, "B": "F"},
  {"A": 4, "B": "M"},
  {"A": 5, "B": "F"},
  {"A": 6, "B": "M"},
  {"A": 7, "B": "M"},
  {"A": 8, "B": "F"}
]


In Matlab, I would create an array of structs. In Julia, I'm thinking I 
would need to dynamically create a type

immutable Row
    A::Int
    B::AbstractString
end

based on the names in the DataFrame (so it stays generic rather than being 
confined to these two columns), then construct an array of Rows, and 
finally serialize via JSON.json from JSON.jl.

Is there a better/easier way to do this?

Thank you.


Re: [julia-users] Re: CUDART and CURAND problem on running the same "do" loop twice

2015-11-08 Thread Tim Holy
This should be solvable, but I've never used cublas myself. I'd recommend 
filing an issue or pull request with that package.

--Tim

On Sunday, November 08, 2015 03:28:39 PM Jason Eckstein wrote:
> Interesting. I have the bug when using CUBLAS.jl. I haven't found a way to
> initialize resources to avoid these errors, but if I just use CUDArt on its
> own there aren't memory errors.



Re: [julia-users] Arrays as streams / consuming data with take et al

2015-11-08 Thread Yichao Yu
On Sun, Nov 8, 2015 at 8:11 PM, andrew cooke  wrote:
> I'd like to be able to use take() and all the other iterator tools with a
> stream of data backed by an array (or string).
>
> By that I mean I'd like to be able to do something like:
>
>> stream = XXX([1,2,3,4,5])
>> collect(take(stream, 3))
> [1,2,3]
>> collect(take(stream, 2))
> [4,5]
>
> Is this possible?  I can find heavyweight looking streams for IO, and I can
> find lightweight iterables without state.  But I can't seem to find the
> particular mix described above.

Jeff's conclusion at JuliaCon was that a stateful iterator currently seems
impossible to implement in a generic and performant way, so I doubt you will
find it in a generic iterator library (one that works on more than arrays).
A version that works only on arrays should be simple enough to implement,
but it doesn't sound useful enough to belong in an exported API, so you
should probably just implement your own.

Ref 
https://groups.google.com/forum/?fromgroups=#!searchin/julia-users/iterator/julia-users/t4ZieI2_iwI/3NTw1k406qkJ

>
> (I think I can see how to write it myself; I'm asking if it already exists -
> seems like it should, but I can't find the right words to search for).
>
> Thanks,
> Andrew
>


[julia-users] Arrays as streams / consuming data with take et al

2015-11-08 Thread andrew cooke
I'd like to be able to use take() and all the other iterator tools with a 
stream of data backed by an array (or string).

By that I mean I'd like to be able to do something like:

> stream = XXX([1,2,3,4,5])
> collect(take(stream, 3))
[1,2,3]
> collect(take(stream, 2))
[4,5]

Is this possible?  I can find heavyweight looking streams for IO, and I can 
find lightweight iterables without state.  But I can't seem to find the 
particular mix described above.

(I think I can see how to write it myself; I'm asking if it already exists 
- seems like it should, but I can't find the right words to search for).

Thanks,
Andrew



Re: [julia-users] Re: CUDART and CURAND problem on running the same "do" loop twice

2015-11-08 Thread Jason Eckstein
Interesting. I have the bug when using CUBLAS.jl. I haven't found a way to 
initialize resources to avoid these errors, but if I just use CUDArt on its 
own there aren't memory errors.


Re: [julia-users] Julia vs C++ single dispatch performance comparison

2015-11-08 Thread Yichao Yu
On Sun, Nov 8, 2015 at 5:24 PM, Cristóvão Duarte Sousa
 wrote:
> The times for Julia code in my previous email are wrong, I wanted to write:
> 0.000126 seconds
> 0.004695 seconds (99.49 k allocations: 1.518 MB)
> 0.000871 seconds

FYI, I see a 25-40% performance improvement on my yyc/call-site-cache branch.

https://github.com/JuliaLang/julia/pull/11862

>
> On Sun, Nov 8, 2015 at 10:19 PM Cristóvão Duarte Sousa 
> wrote:
>>
>> I agree, in fact my knowledge about branch prediction mechanisms is rather
>> limited.
>> But doing if(rand()%2) instead of if(i%i) yields times close to the given
>> one. On the other hand, with a single type (with if(true)) the times drop to
>> around 320 μs.
>>
>> For julia code, randomness seems to have higher impact:
>> 0.000103 seconds
>> 0.002179 seconds
>> 0.000915 seconds
>> (again, these times highly variable)
>>
>> Anyway, I updated the codes to choose the type randomly.
>>
>> On Sun, Nov 8, 2015 at 3:44 AM Stefan Karpinski 
>> wrote:
>>>
>>> Yeah, that's a good point.
>>>
>>> On Saturday, November 7, 2015, Gustavo Goretkin
>>>  wrote:

 I think branch predictors on many platforms today use a table indexed on
 the history of the last couple of branches, so the period-two cycle you 
 have
 is likely getting a lot of correct branch hits. If you mean to totally
 defeat the branch prediction, I think you should use something 
 pseudorandom.

 On Nov 6, 2015 12:27 PM, "Cristóvão Duarte Sousa" 
 wrote:
>
> Hi,
>
> I've been wondering how Julia dispatching system would compare to the
> C++ virtual functions dispatch one.
> Hence, I write a simple test in both Julia and C++ that measures the
> calling of a function over the elements of both an array of concrete 
> objects
> and another of abstract pointers to objects of derived types.
>
> Here is the code https://gist.github.com/cdsousa/f5d669fe3fba7cf848d8 .
>
> The usual timings for C++ in my machine, for the concrete and the
> abstract arrays respectively, are around
> 0.000143 seconds
> 0.000725 seconds
>
> For the Julia code the timings have much more variability, but they are
> around
> 0.000133 seconds
> 0.002414 seconds
>
> This shows that Julia (single) dispatch performance is not that bad
> while it has some room to improvement.
>
> If I'm doing something terribly wrong in these tests, please tell me :)
>
> PS: Thank you all, developers of Julia!
>
>
>


Re: [julia-users] Do I have simd?

2015-11-08 Thread Rob J. Goedman
On another, slightly older system, I noticed similar (approximately identical) timings for the simd.jl test script using Julia 0.5:

julia> include("/Users/rob/Projects/Julia/Rob/Julia/simd.jl")
Julia Version 0.5.0-dev+720
Commit 5920633* (2015-10-11 15:15 UTC)
Platform Info:
  System: Darwin (x86_64-apple-darwin15.0.0)
  CPU: Intel(R) Core(TM) i7-3720QM CPU @ 2.60GHz
  WORD_SIZE: 64
  BLAS: libopenblas (USE64BITINT DYNAMIC_ARCH NO_AFFINITY Sandybridge)
  LAPACK: libopenblas64_
  LIBM: libopenlibm
  LLVM: libLLVM-3.3

First call to timeit(1000,1000):
GFlop        = 0.6092165323090373
GFlop (SIMD) = 0.4607065672339039
Second call to timeit(1000,1000):
GFlop        = 0.5935117884795207
GFlop (SIMD) = 0.42286883095163036

On that same system, Julia 0.4 (installed from the Julia site) did show improved GFlop numbers and about a 6x improvement with SIMD.

To see if that would help with Julia 0.5, I did (in the cloned julia directory, in a terminal):

git pull https://github.com/JuliaLang/julia master
make -j 4

Lots of compile messages/warnings, but in the end:

clang: error: linker command failed with exit code 1 (use -v to see invocation)
make[3]: *** [libopenblas64_p-r0.2.15.dylib] Error 1
make[2]: *** [shared] Error 2
*** Clean the OpenBLAS build with 'make -C deps clean-openblas'. Rebuild with 'make OPENBLAS_USE_THREAD=0' if OpenBLAS had trouble linking libpthread.so, and with 'make OPENBLAS_TARGET_ARCH=NEHALEM' if there were errors building SandyBridge support. Both these options can also be used simultaneously. ***
make[1]: *** [build/openblas/libopenblas64_.dylib] Error 1
make: *** [julia-deps] Error 2

I tried:

brew update
brew upgrade
make -C deps clean-openblas
make -j 4

and running the simd.jl script now shows:

julia> include("/Users/rob/Projects/Julia/Rob/Julia/simd.jl")
Julia Version 0.5.0-dev+1195
Commit 68667a3* (2015-11-08 21:05 UTC)
Platform Info:
  System: Darwin (x86_64-apple-darwin15.0.0)
  CPU: Intel(R) Core(TM) i7-3720QM CPU @ 2.60GHz
  WORD_SIZE: 64
  BLAS: libopenblas (USE64BITINT DYNAMIC_ARCH NO_AFFINITY Sandybridge)
  LAPACK: libopenblas64_
  LIBM: libopenlibm
  LLVM: libLLVM-3.3

First call to timeit(1000,1000):
GFlop        = 1.4006308441321973
GFlop (SIMD) = 13.561988458747821
Second call to timeit(1000,1000):
GFlop        = 2.300048186009497
GFlop (SIMD) = 12.8439991844

Not sure if this helps or is even the right way to remedy this.

Regards,
Rob

simd.jl
Description: Binary data
On Nov 6, 2015, at 5:35 PM, Rob J. Goedman wrote:

> Thanks Seth,
> That's the end of my first attempt to figure out what's happening here. Back to the drawing board!
> Regards, Rob

On Nov 6, 2015, at 4:53 PM, Seth wrote:

> Hi Rob,
> I built it (and openblas) myself (via git clone) since I'm testing out Cxx.jl. Xcode is Version 7.1 (7B91b).
> Seth.

On Friday, November 6, 2015 at 3:54:04 PM UTC-8, Rob J Goedman wrote:

> Seth,
> You must have built Julia 0.4.1-pre yourself. Did you use brew? It looks like you are on Yosemite and picked up a newer libLLVM. Which Xcode are you using? In the Julia.rb formula there is a test ENV.compiler; could it be clang is not being used?
> Rob

On Nov 6, 2015, at 3:01 PM, Seth wrote:

> For what it's worth, I'm getting
>
> julia> timeit(1000,1000)
> GFlop        = 2.3913033081289967
> GFlop (SIMD) = 2.2694726426420293
>
> julia> versioninfo()
> Julia Version 0.4.1-pre+22
> Commit 669222e* (2015-11-01 00:06 UTC)
> Platform Info:
>   System: Darwin (x86_64-apple-darwin14.5.0)
>   CPU: Intel(R) Core(TM) i7-4870HQ CPU @ 2.50GHz
>   WORD_SIZE: 64
>   BLAS: libopenblas (USE64BITINT DYNAMIC_ARCH NO_AFFINITY Haswell)
>   LAPACK: libopenblas64_
>   LIBM: libopenlibm
>   LLVM: libLLVM-svn
>
> so it doesn't look like I'm taking advantage of simd either. :(

On Friday, November 6, 2015 at 11:43:41 AM UTC-8, Rob J Goedman wrote:

> Hi DNF,
> In the versioninfo outputs below, only libopenblas appears different. You installed using brew. The first thing I would try is to execute the steps under Common Issues listed on https://github.com/staticfloat/homebrew-julia. A bit further down on that site there is also some additional openblas-related info.
> Rob

On Nov 6, 2015, at 10:35 AM, DNF wrote:

> Thanks for the feedback. It seems like this is not a problem for most. If anyone has even the faintest clue where I could start looking for a solution to this, I would be grateful. Perhaps there is some software I could run that would detect hardware problems, or maybe I am missing software dependencies of some kind? What could I even google for? All my searches just seem to bring up general info about SIMD, nothing like what I'm describing.

On Friday, November 6, 2015 at 12:15:47 AM UTC+1, DNF wrote:

> I install using homebrew from here: https://github.com/staticfloat/homebrew-julia
> I have limited understanding of the process, but believe there is some compilation involved.
>
> Julia Version 0.4.0
> Commit 0ff703b* (2015-10-08 06:20 UTC)
> Platform Info:
>   System: Darwin (x86_64-apple-darwin13.4.0)
>   CPU: Intel(R) Core(TM) i7-4980HQ CPU @ 2.80GHz
>   WORD_SIZE: 64
>   BLAS: libopenblas (USE64BITINT DYNAMIC
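The simd.jl attachment itself is not reproduced in the archive, but the kind of kernel such benchmarks exercise is presumably along these lines (a hypothetical reconstruction, not Rob's actual script):

```julia
# Hypothetical axpy-style kernel: @inbounds drops bounds checks and @simd
# gives LLVM permission to vectorize the loop, which is what the GFlop
# numbers in this thread measure.
function axpy!(a, x, y)
    @inbounds @simd for i in eachindex(x, y)
        y[i] += a * x[i]
    end
    return y
end

x = ones(Float32, 1000)
y = zeros(Float32, 1000)
axpy!(2.0f0, x, y)   # every element of y is now 2.0f0
```

Comparing `@time` on this loop with and without `@simd` is the usual way to check whether vectorization is actually happening on a given build.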

Re: [julia-users] Julia vs C++ single dispatch performance comparison

2015-11-08 Thread Cristóvão Duarte Sousa
The times for Julia code in my previous email are wrong, I wanted to write:
0.000126 seconds
0.004695 seconds (99.49 k allocations: 1.518 MB)
0.000871 seconds

On Sun, Nov 8, 2015 at 10:19 PM Cristóvão Duarte Sousa 
wrote:

> I agree, in fact my knowledge about branch prediction mechanisms is rather
> limited.
> But doing *if(rand()%2)* instead of *if(i%i)* yields times close to the
> given one. On the other hand, with a single type (with *if(true)*) the
> times drop to around 320 μs.
>
> For julia code, randomness seems to have higher impact:
> 0.000103 seconds
> 0.002179 seconds
> 0.000915 seconds
> (again, these times highly variable)
>
> Anyway, I updated the codes to choose the type randomly.
>
> On Sun, Nov 8, 2015 at 3:44 AM Stefan Karpinski 
> wrote:
>
>> Yeah, that's a good point.
>>
>> On Saturday, November 7, 2015, Gustavo Goretkin <
>> gustavo.goret...@gmail.com> wrote:
>>
>>> I think branch predictors on many platforms today use a table indexed on
>>> the history of the last couple of branches, so the period-two cycle you
>>> have is likely getting a lot of correct branch hits. If you mean to totally
>>> defeat the branch prediction, I think you should use something
>>> pseudorandom.
>>> On Nov 6, 2015 12:27 PM, "Cristóvão Duarte Sousa" 
>>> wrote:
>>>
 Hi,

 I've been wondering how Julia dispatching system would compare to the
 C++ virtual functions dispatch one.
 Hence, I write a simple test in both Julia and C++ that measures the
 calling of a function over the elements of both an array of concrete
 objects and another of abstract pointers to objects of derived types.

 Here is the code https://gist.github.com/cdsousa/f5d669fe3fba7cf848d8 .

 The usual timings for C++ in my machine, for the concrete and the
 abstract arrays respectively, are around
 0.000143 seconds
 0.000725 seconds

 For the Julia code the timings have much more variability, but they are
 around
 0.000133 seconds
 0.002414 seconds

 This shows that Julia (single) dispatch performance is not that bad
 while it has some room to improvement.

 If I'm doing something terribly wrong in these tests, please tell me :)

 PS: Thank you all, developers of Julia!





Re: [julia-users] Julia vs C++ single dispatch performance comparison

2015-11-08 Thread Cristóvão Duarte Sousa
I agree, in fact my knowledge about branch prediction mechanisms is rather
limited.
But doing *if(rand()%2)* instead of *if(i%2)* yields times close to the
given one. On the other hand, with a single type (with *if(true)*) the
times drop to around 320 μs.

For the Julia code, randomness seems to have a higher impact:
0.000103 seconds
0.002179 seconds
0.000915 seconds
(again, these times are highly variable)

Anyway, I updated the codes to choose the type randomly.

On Sun, Nov 8, 2015 at 3:44 AM Stefan Karpinski 
wrote:

> Yeah, that's a good point.
>
> On Saturday, November 7, 2015, Gustavo Goretkin <
> gustavo.goret...@gmail.com> wrote:
>
>> I think branch predictors on many platforms today use a table indexed on
>> the history of the last couple of branches, so the period-two cycle you
>> have is likely getting a lot of correct branch hits. If you mean to totally
>> defeat the branch prediction, I think you should use something
>> pseudorandom.
>> On Nov 6, 2015 12:27 PM, "Cristóvão Duarte Sousa" 
>> wrote:
>>
>>> Hi,
>>>
>>> I've been wondering how Julia dispatching system would compare to the
>>> C++ virtual functions dispatch one.
>>> Hence, I write a simple test in both Julia and C++ that measures the
>>> calling of a function over the elements of both an array of concrete
>>> objects and another of abstract pointers to objects of derived types.
>>>
>>> Here is the code https://gist.github.com/cdsousa/f5d669fe3fba7cf848d8 .
>>>
>>> The usual timings for C++ in my machine, for the concrete and the
>>> abstract arrays respectively, are around
>>> 0.000143 seconds
>>> 0.000725 seconds
>>>
>>> For the Julia code the timings have much more variability, but they are
>>> around
>>> 0.000133 seconds
>>> 0.002414 seconds
>>>
>>> This shows that Julia (single) dispatch performance is not that bad
>>> while it has some room to improvement.
>>>
>>> If I'm doing something terribly wrong in these tests, please tell me :)
>>>
>>> PS: Thank you all, developers of Julia!
>>>
>>>
>>>
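The shape of the benchmark under discussion is roughly the following (an illustrative reconstruction in current Julia syntax; the actual code, in 0.4-era `immutable`/`abstract` syntax, is in the linked gist, and the type names here are invented):

```julia
# Dispatch over a concrete array vs. an array of abstract-typed elements
# chosen randomly, as in the updated benchmark.
abstract type Shape end
struct Circle <: Shape; r::Float64; end
struct Square <: Shape; s::Float64; end

area(c::Circle) = pi * c.r^2
area(s::Square) = s.s^2

function total(v)
    t = 0.0
    for x in v         # Vector{Circle}: call target known at compile time
        t += area(x)   # Vector{Shape}: dynamic dispatch on every element
    end
    return t
end

concrete = [Circle(1.0) for _ in 1:100_000]                               # Vector{Circle}
mixed = Shape[rand(Bool) ? Circle(1.0) : Square(1.0) for _ in 1:100_000]  # Vector{Shape}

@time total(concrete)
@time total(mixed)
```

The gap between the two timings is the cost of per-element dynamic dispatch, which is what the thread compares against C++ virtual calls.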


[julia-users] Re: Escher Spinner

2015-11-08 Thread Shashi Gowda


On Saturday, November 7, 2015 at 4:21:09 AM UTC+5:30, Brandon Miller wrote:
>
> Code snippet: https://gist.github.com/r2dbg/14189258e9daee2cece6
>
> I'd like for the spinner on line 23 to spin while do_work() is running and 
> stop after do_work returns.  However, I can't seem to find a good way to do 
> this.  The form is blocking until all fields in plugsampler() are fulfilled.
>

Take a look at how the mc.jl example does something similar: 
https://github.com/shashi/Escher.jl/blob/master/examples/mc.jl#L24-L35

I think a better abstraction is possible for this though. A way to make a 
busy-signal for another signal?


[julia-users] Re: Julia Hands-on: Introductory Tutorial for Distributed Computing

2015-11-08 Thread André Lage
Thanks, Leandro.

I apologize for the problems with the Hangout session, but I had to install 
Epson software to use the projector and I lost my connection to the Internet 
during the tutorial.


André Lage.

On Friday, November 6, 2015 at 1:34:27 PM UTC-3, Leandro Melo de Sales 
wrote:
>
> Hi Prof. Andre Lage,
>
>Congratulations for all your efforts to the Julia community. Thank you! 
>
> Best,
> Leandro.
>
> --
> PhD. Leandro Melo de Sales
> Institute of Computing (IC) at Federal University of Alagoas (UFAL), Brazil
> Pervasive and Mobile Computing Laboratory (COMPELab.org / IC)
>   
>   
>    
> 
>
> "The warrior is strong in loyalty, intensity, determination, initiative, 
> persistence, courage and willpower. The warrior is light in the soul, 
> self-trust and compassion. The warrior is often called to take the front 
> when other cowardly make a step backwards. There are warriors on the 
> battlefields and in everyday life."
>
> 2015-11-06 20:07 GMT+09:00 André Lage:
>
>> Hello,
>>
>> I'll be giving the tutorial "Introduction to Julia for Distributed 
>> Computing" *today at 2pm (UCT-3)* at Brazilian (Northeast Region) 
>> High-performance Computing School. 
>>
>> The tutorial will be in *Portuguese* and all the support material (IJulia 
>> Notebooks) will be in *English*. You are welcome to participate through 
>> Hangout:
>>
>>
>> https://calendar.google.com/calendar/event?action=TEMPLATE&tmeid=ZWNhcHJlNzJmNHRqcWFmOWkxc2M3cDNmZjQgcHJvZi5hbGFnZUBt&tmsrc=prof.alage%40gmail.com
>>
>> Regards,
>>
>>
>> André Lage.
>>
>> -- 
>>
>>
>

Re: [julia-users] Re: Large Data Sets in Julia

2015-11-08 Thread André Lage
Thanks!


André Lage.

On Fri, Nov 6, 2015 at 7:20 AM, Tim Holy  wrote:

> Not sure if it's as high-level as you're hoping for, but julia has great
> support for arrays that are much bigger than memory. See Mmap.mmap and
> SharedArray(filename, T, dims).
>
> --Tim
>
> On Thursday, November 05, 2015 06:33:52 PM André Lage wrote:
> > hi Viral,
> >
> > Do you have any news on this?
> >
> > André Lage.
> >
> > On Wednesday, July 3, 2013 at 5:12:06 AM UTC-3, Viral Shah wrote:
> > > Hi all,
> > >
> > > I am cross-posting my reply to julia-stats and julia-users as there
> was a
> > > separate post on large logistic regressions on julia-users too.
> > >
> > > Just as these questions came up, Tanmay and I have been chatting about
> a
> > > general framework for working on problems that are too large to fit in
> > > memory, or need parallelism for performance. The idea is simple and
> based
> > > on providing a convenient and generic way to break up a problem into
> > > subproblems, each of which can then be scheduled to run anywhere. To
> start
> > > with, we will implement a map and mapreduce using this, and we hope
> that
> > > it
> > > should be able to handle large files sequentially, distributed data
> > > in-memory, and distributed filesystems within the same framework. Of
> > > course, this all sounds too good to be true. We are trying out a simple
> > > implementation, and if early results are promising, we can have a
> detailed
> > > discussion on API design and implementation.
> > >
> > > Doug, I would love to see if we can use some of this work to
> parallelize
> > > GLM at a higher level than using remotecall and fetch.
> > >
> > > -viral
> > >
> > > On Tuesday, July 2, 2013 11:10:35 PM UTC+5:30, Douglas Bates wrote:
> > >> On Tuesday, July 2, 2013 6:26:33 AM UTC-5, Raj DG wrote:
> > >>> Hi all,
> > >>>
> > >>> I am a regular user of R and also use it for handling very large data
> > >>> sets (~ 50 GB). We have enough RAM to fit all that data into memory
> for
> > >>> processing, so don't really need to do anything additional to chunk,
> > >>> etc.
> > >>>
> > >>> I wanted to get an idea of whether anyone has, in practice, performed
> > >>> analysis on large data sets using Julia. Use cases range from
> performing
> > >>> Cox Regression on ~ 40 million rows and over 10 independent
> variables to
> > >>> simple statistical analysis using T-Tests, etc. Also, how does the
> > >>> timings
> > >>> for operations like logistic regressions compare to Julia ? Are there
> > >>> any
> > >>> libraries/packages that can perform Cox, Poisson (Negative Binomial),
> > >>> and
> > >>> other regression types ?
> > >>>
> > >>> The benchmarks for Julia look promising, but in today's age of the
> "big
> > >>> data", it seems that the capability of handling large data is a
> > >>> pre-requisite to the future success of any new platform or language.
> > >>> Looking forward to your feedback,
> > >>
> > >> I think the potential for working with large data sets in Julia is
> better
> > >> than that in R.  Among other things Julia allows for memory-mapped
> files
> > >> and for distributed arrays, both of which have great potential.
> > >>
> > >> I have been working with some Biostatisticians on a prototype package
> for
> > >> working with snp data of the sort generated in genome-wide association
> > >> studies.  Current data sizes can be information on tens of thousands
> of
> > >> individuals (rows) for over a million snp positions (columns).  The
> > >> nature
> > >> of the data is such that each position provides one of four potential
> > >> values, including a missing value.  A compact storage format using 2
> bits
> > >> per position is widely used for such data.  We are able to read and
> > >> process
> > >> such a large array in a few seconds using memory-mapped files in
> Julia.
> > >>
> > >>  The amazing thing is that the code is pure Julia.  When I write in R
> I
> > >>  am
> > >>
> > >> always conscious of the bottlenecks and the need to write C or C++
> code
> > >> for
> > >> those places.  I haven't encountered cases where I need to write new
> code
> > >> in a compiled language to speed up a Julia function.  I have
> interfaced
> > >> to
> > >> existing numerical libraries but not writing fresh code.
> > >>
> > >> As John mentioned I have written the GLM package allowing for hooks to
> > >> use distributed arrays.  As yet I haven't had a large enough problem
> to
> > >> warrant fleshing out those hooks but I could be persuaded.
>
>
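Tim's `Mmap.mmap` suggestion can be sketched like this (current Julia syntax; the file name is invented):

```julia
# Sketch: an array backed by a file instead of RAM, so it can be far larger
# than physical memory.
using Mmap

m, n = 3, 4
io = open("big_matrix.bin", "w+")           # hypothetical scratch file
A = Mmap.mmap(io, Matrix{Float64}, (m, n))  # file is grown to m*n*8 bytes
A[2, 3] = 42.0                              # assignments land in the mapped file
Mmap.sync!(A)                               # flush dirty pages to disk
close(io)

# Reopen and map the same file: the data is still there.
io2 = open("big_matrix.bin")
B = Mmap.mmap(io2, Matrix{Float64}, (m, n))
B[2, 3]                                     # → 42.0
```

Because pages are faulted in on demand, code can index such an array like any other; the OS decides what stays resident.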


Re: [julia-users] Re: CUDART and CURAND problem on running the same "do" loop twice

2015-11-08 Thread Tim Holy
Are you sure this is due to CUDArt? For me, this works as many times as I care 
to try it (note that I'm not using CURAND):

using CUDArt
result = devices(dev->capability(dev)[1]>=2; nmax = 1) do devlist
device(devlist[1])
h = rand(Float32, 1000)
d_a = CudaArray(h)
a = to_host(d_a)
end

But I can replicate the bug if I use CURAND. I'd suggest filing an issue with 
that package. Most likely it needs to (and, you need to) initialize resources 
according to these instructions:
https://github.com/JuliaGPU/CUDArt.jl#initializing-and-freeing-ptx-modules

Best,
--Tim

On Saturday, November 07, 2015 02:52:16 PM Jason Eckstein wrote:
> I've had the same errors on MacOS, Windows 10, and Linux using the
> devices() do loop with CUDArt package.  All of the programs I run I place
> inside the outer do loop including all the repetitions because as soon as
> that block ends I cannot use any other CUDA functions without some type of
> pointer or memory error.  The only solution I've found is to restart Julia,
> but I noticed that if I run a program like yours with a julia command in
> the terminal rather than a REPL session, even if I just run one loop and I
> close the session I still get the memory errors when the script ends.
>  Perhaps someone involved in developing CUDArt can give some insight as to
> why the outer do-block loop cannot be ended cleanly.
> 
> On Friday, November 6, 2015 at 12:46:54 PM UTC-7, Joaquim Masset Lacombe
> 
> Dias Garcia wrote:
> > I was playing with the GPU (in both windows and mac) libraries and I came
> > up with the following errors:
> > The first time I execute the do loop, every thing goes well.
> > But If i try a second time in a row I get this invalid pointer error.
> > 
> > I got the same error in all my do loops, any ideas?
> > 
> > best,
> > Joaquim
> > 
> > *My code running on terminal:*
> > 
> > julia> using CUDArt
> > julia> using CURAND
> > julia> result = devices(dev->capability(dev)[1]>=2) do devlist
> > 
> >d_a = curand(Float32, 1000);
> >a = to_host(d_a);
> >
> >end
> > 
> > 1000-element Array{Float64,1}:
> >  0.438451
> >  0.460365
> >  0.250215
> >  0.494744
> >  0.0530111
> >  0.337699
> >  0.396763
> >  0.874419
> >  0.482167
> >  0.0428398
> >  ⋮
> >  0.563937
> >  0.80706
> >  0.190015
> >  0.334969
> >  0.622164
> >  0.710596
> >  0.0125895
> >  0.990388
> >  0.467796
> >  0.24313
> > 
> > julia> result = devices(dev->capability(dev)[1]>=2) do devlist
> > 
> >d_a = curand(Float32, 1000);
> >a = to_host(d_a);
> >
> >end
> > 
> > WARNING: CUDA error triggered from:
> >  in checkerror at C:\Users\joaquimgarcia\.julia\v0.4\CUDArt\src\libcudart-6.5.jl:15
> >  in copy! at C:\Users\joaquimgarcia\.julia\v0.4\CUDArt\src\arrays.jl:152
> >  in to_host at C:\Users\joaquimgarcia\.julia\v0.4\CUDArt\src\arrays.jl:87
> >  in anonymous at none:3
> >  in devices at C:\Users\joaquimgarcia\.julia\v0.4\CUDArt\src\device.jl:61
> > ERROR: Launch failed, perhaps due to an invalid pointer
> >  in checkdrv at C:\Users\joaquimgarcia\.julia\v0.4\CUDArt\src\module.jl:6
> >  in close at C:\Users\joaquimgarcia\.julia\v0.4\CUDArt\src\device.jl:136
> >  in devices at C:\Users\joaquimgarcia\.julia\v0.4\CUDArt\src\device.jl:63