[julia-users] Finding all the arguments of a function

2016-01-29 Thread 'Antoine Messager' via julia-users
Hi all,

I forgot for the n-th time how to prevent a sample from containing the same 
value twice. While I wrote the previous sentence I remembered, but that is 
not my issue. I was wondering whether there is an option that would list 
all the arguments that can be used in an already implemented function. This 
could be useful, as many docstrings are incomplete. I find `methods` not 
very informative: nowhere could I learn that what I needed was 
`replace=false`.
 

7 methods for generic function sample:

  sample(a::AbstractArray{T,N}) at /Users/am909/.julia/v0.4/StatsBase/src/sampling.jl:277
  sample{T}(a::AbstractArray{T,N}, n::Integer) at /Users/am909/.julia/v0.4/StatsBase/src/sampling.jl:320
  sample{T}(a::AbstractArray{T,N}, dims::Tuple{Vararg{Int64}}) at /Users/am909/.julia/v0.4/StatsBase/src/sampling.jl:324
  sample(wv::StatsBase.WeightVec{W,Vec<:AbstractArray{T<:Real,1}}) at /Users/am909/.julia/v0.4/StatsBase/src/sampling.jl:335
  sample(a::AbstractArray{T,N}, wv::StatsBase.WeightVec{W,Vec<:AbstractArray{T<:Real,1}}) at /Users/am909/.julia/v0.4/StatsBase/src/sampling.jl:347
  sample{T}(a::AbstractArray{T,N}, wv::StatsBase.WeightVec{W,Vec<:AbstractArray{T<:Real,1}}, n::Integer) at /Users/am909/.julia/v0.4/StatsBase/src/sampling.jl:529
  sample{T}(a::AbstractArray{T,N}, wv::StatsBase.WeightVec{W,Vec<:AbstractArray{T<:Real,1}}, dims::Tuple{Vararg{Int64}}) at /Users/am909/.julia/v0.4/StatsBase/src/sampling.jl:532

Thank you very much

Antoine


[julia-users] Re: Finding all the arguments of a function

2016-01-29 Thread Bart Janssens


On Friday, January 29, 2016 at 12:26:20 PM UTC+1, Antoine Messager wrote:

> I forgot for the n-th time how to prevent a sample from containing the 
> same value twice. While I wrote the previous sentence I remembered, but 
> that is not my issue. I was wondering whether there is an option that 
> would list all the arguments that can be used in an already implemented 
> function. This could be useful, as many docstrings are incomplete. I find 
> `methods` not very informative: nowhere could I learn that what I needed 
> was `replace=false`.
>  
>
>
Hi Antoine,

I don't think it's possible to query for keyword or optional arguments; 
according to the manual, they are treated differently from positional 
arguments:
http://docs.julialang.org/en/release-0.4/manual/functions/#optional-arguments
http://docs.julialang.org/en/release-0.4/manual/methods/#man-note-on-optional-and-keyword-arguments

So it seems to me that only documentation or opening the code listed in the 
methods output could help here.
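(Editor's note: later Julia versions did grow an internal helper for this. A minimal sketch, assuming a recent 1.x Julia — `Base.kwarg_decl` is undocumented internal API, may change, and did not exist in 0.4:)

```julia
# Hypothetical example function with keyword arguments.
f(x; replace=false, n=1) = x

# methods(f) shows only the positional signature, but the internal
# helper Base.kwarg_decl lists a method's keyword-argument names.
kws = Base.kwarg_decl(first(methods(f)))
println(kws)   # a Vector{Symbol} containing :replace and :n
```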

Cheers,

Bart


[julia-users] write Julia console output to a text file

2016-01-29 Thread Michela Di Lullo
How do I write the Julia console output to a txt file? 

Thanks 

Michela


[julia-users] write Julia console output to a text file

2016-01-29 Thread Tomas Lycken
For example 

open("log.txt", "w") do f
  write(f," Hello, text file!")
end 

or 

julia your-script.jl > log.txt

depending on how you invoke your Julia code and what you mean by "console 
output". 
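A third option, if the goal is to capture what existing code prints rather than writing to a file explicitly, is redirecting the stream itself. A sketch using `redirect_stdout` (the function-plus-stream form shown here is from recent Julia versions):

```julia
# Redirect everything printed to stdout into a file for the
# duration of the do-block, then restore the original stream.
open("log.txt", "w") do io
    redirect_stdout(io) do
        println("Hello, text file!")   # goes to log.txt, not the screen
    end
end

captured = read("log.txt", String)
print(captured)
```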

// T 

[julia-users] Julia version mismatch

2016-01-29 Thread Sisyphuss
I have this problem:

When I type
Julia --version

it shows that I am in 0.4.2-pre

But when I enter the Julia REPL,
I am in 0.4.4-pre+24





[julia-users] yet another benchmark with numba and fortran

2016-01-29 Thread Andre Bieler
Someone posted a benchmark for Numba and Fortran 

https://www.reddit.com/r/Python/comments/431tsm/numba_applied_to_high_intensity_computations_a/

As a Friday afternoon project I ported the Python version to Julia

https://github.com/abieler/LJSim.jl

(please go to the reddit link above to also download the python script to 
generate the necessary input data if
you want to replicate the benchmark)

My Julia implementation has about the same performance as the Numba version 
(it is slightly slower, though). Both are well within a factor of 2 of the 
Fortran implementation, which is very good in my mind!

If you have suggestions for speed-ups, please share :) I quickly profiled 
and @code_warntype'd and didn't see anything worrying in my code.

Have a good weekend everybody!



[julia-users] Re: yet another benchmark with numba and fortran

2016-01-29 Thread Andre Bieler
Sorry, don't know how that happened... fixed the typo and also added the 
Python file to generate the input data.

On Friday, January 29, 2016 at 6:27:23 PM UTC+1, Kristoffer Carlsson wrote:
>
> Ok, just a typo, should be boxl2
>
> On Friday, January 29, 2016 at 6:26:44 PM UTC+1, Kristoffer Carlsson wrote:
>>
>> boxl3 is not defined
>>
>> On Friday, January 29, 2016 at 6:19:09 PM UTC+1, Andre Bieler wrote:
>>>
>>> Someone posted a benchmark for Numba and Fortran 
>>>
>>>
>>> https://www.reddit.com/r/Python/comments/431tsm/numba_applied_to_high_intensity_computations_a/
>>>
>>> As a Friday afternoon project I ported the Python version to Julia
>>>
>>> https://github.com/abieler/LJSim.jl
>>>
>>> (please go to the reddit link above to also download the python script 
>>> to generate the necessary input data if
>>> you want to replicate the benchmark)
>>>
>>> My Julia implementation has about the same performance as the Numba 
>>> version.
>>> (It is slightly slower though) Both are well within a factor of 2 
>>> compared to the fortran implementation, which
>>> is very good in my mind!
>>>
>>> If you have suggestions for speed up please share :) I quickly profiled 
>>> and @code_warntyped and didn't see anything worrying with my code.
>>>
>>> Have a good weekend everybody!
>>>
>>>

[julia-users] Re: yet another benchmark with numba and fortran

2016-01-29 Thread Kristoffer Carlsson
boxl3 is not defined

On Friday, January 29, 2016 at 6:19:09 PM UTC+1, Andre Bieler wrote:
>
> Someone posted a benchmark for Numba and Fortran 
>
>
> https://www.reddit.com/r/Python/comments/431tsm/numba_applied_to_high_intensity_computations_a/
>
> As a Friday afternoon project I ported the Python version to Julia
>
> https://github.com/abieler/LJSim.jl
>
> (please go to the reddit link above to also download the python script to 
> generate the necessary input data if
> you want to replicate the benchmark)
>
> My Julia implementation has about the same performance as the Numba 
> version.
> (It is slightly slower though) Both are well within a factor of 2 compared 
> to the fortran implementation, which
> is very good in my mind!
>
> If you have suggestions for speed up please share :) I quickly profiled 
> and @code_warntyped and didn't see anything worrying with my code.
>
> Have a good weekend everybody!
>
>

[julia-users] Re: yet another benchmark with numba and fortran

2016-01-29 Thread Kristoffer Carlsson
Ok, just a typo, should be boxl2

On Friday, January 29, 2016 at 6:26:44 PM UTC+1, Kristoffer Carlsson wrote:
>
> boxl3 is not defined
>
> On Friday, January 29, 2016 at 6:19:09 PM UTC+1, Andre Bieler wrote:
>>
>> Someone posted a benchmark for Numba and Fortran 
>>
>>
>> https://www.reddit.com/r/Python/comments/431tsm/numba_applied_to_high_intensity_computations_a/
>>
>> As a Friday afternoon project I ported the Python version to Julia
>>
>> https://github.com/abieler/LJSim.jl
>>
>> (please go to the reddit link above to also download the python script to 
>> generate the necessary input data if
>> you want to replicate the benchmark)
>>
>> My Julia implementation has about the same performance as the Numba 
>> version.
>> (It is slightly slower though) Both are well within a factor of 2 
>> compared to the fortran implementation, which
>> is very good in my mind!
>>
>> If you have suggestions for speed up please share :) I quickly profiled 
>> and @code_warntyped and didn't see anything worrying with my code.
>>
>> Have a good weekend everybody!
>>
>>

Re: [julia-users] Julia version mismatch

2016-01-29 Thread Mauro
You have to do a

make cleanall

and recompile.  Happened to me; it's on GitHub somewhere.

On Fri, 2016-01-29 at 17:55, Sisyphuss  wrote:
> I have this problem:
>
> When I type
> Julia --version
>
> it shows that I am in 0.4.2-pre
>
> But when I enter the Julia REPL,
> I am in 0.4.4-pre+24


[julia-users] Has module pre-compilation been back-ported to Julia 0.3.11?

2016-01-29 Thread mcarrizosa
Looking to precompile modules in Julia 0.3.11. Wanted to know if this is 
possible and if anyone has successfully completed this.

-Michelle 


[julia-users] Julia vs Matlab: interpolation and looping

2016-01-29 Thread pokerhontas2k8
Hi,

my original problem is a dynamic programming problem in which I need to 
interpolate the value function on an irregular grid using a cubic spline 
method. I was translating my MATLAB code into Julia and used the Dierckx 
package in Julia to do the interpolation (there weren't many alternatives 
that did splines on an irregular grid, as far as I recall). In MATLAB I use 
interp1. It gave exactly the same result but was about 50 times slower, 
which puzzled me. So I made this stackoverflow post: 
http://stackoverflow.com/questions/34766029/julia-vs-matlab-why-is-my-julia-code-so-slow

The post is messy and you don't need to read through it, I think. The bottom 
line was that the Dierckx package apparently calls some Fortran code which 
seems to be pretty old (and slow, and doesn't use multiple cores). Nobody 
knows what exactly interp1 is doing; my guess is that it's coded in C.

So I asked a friend of mine who knows a little bit of C and he was so kind 
to help me out. He translated the interpolation method into C and made it 
such that it uses multiple threads (I am working with 12 threads). He also 
put it on github here (https://github.com/nuffe/mnspline). Equipped with 
that, I went back to my original problem and reran it. The Julia code was 
still 3 times slower which left me puzzled again. The interpolation itself 
was much faster now than MATLAB's interp1 but somewhere on the way that 
advantage was lost. Consider the following minimal working example 
preserving the irregular grid of the original problem which highlights the 
point I think (the only action is in the loop, the other stuff is just 
generating the irregular grid):

MATLAB:

spacing=1.5;
Nxx = 300 ;
Naa = 350;
Nalal = 200; 
sigma = 10 ;
NoIter = 1; 

xx=NaN(Nxx,1);
xmin = 0.01;
xmax = 400;
xx(1) = xmin;
for i=2:Nxx
xx(i) = xx(i-1) + (xmax-xx(i-1))/((Nxx-i+1)^spacing);
end

f_util =  @(c) c.^(1-sigma)/(1-sigma);
W=NaN(Nxx,1);
W(:,1) = f_util(xx);

W_temp = NaN(Nalal,Naa);
xprime = NaN(Nalal,Naa);

tic
for banana=1:NoIter
%   tic
  xprime=ones(Nalal,Naa);
  W_temp=interp1(xx,W(:,1),xprime,'spline');
%   toc
end
toc


Julia:

include("mnspline.jl")

spacing=1.5
Nxx = 300
Naa = 350
Nalal = 200
sigma = 10
NoIter = 1

xx=Array(Float64,Nxx)
xmin = 0.01
xmax = 400
xx[1] = xmin
for i=2:Nxx
xx[i] = xx[i-1] + (xmax-xx[i-1])/((Nxx-i+1)^spacing)
end

f_util(c) =  c.^(1-sigma)/(1-sigma)
W=Array(Float64,Nxx,1)
W[:,1] = f_util(xx)


spl = mnspline(xx,W[:,1])

function performance(NoIter::Int64)
W_temp = Array(Float64,Nalal*Naa)
W_temp2 = Array(Float64,Nalal,Naa)
xprime=Array(Float64,Nalal,Naa)
for banana=1:NoIter
xprime=ones(Nalal,Naa)
W_temp = spl(xprime[:])
end
W_temp2 = reshape(W_temp,Nalal,Naa)
end

@time performance(NoIter)


In the end I want to have a matrix, that's why I do all this reshaping in 
the Julia code. If I comment out the loop and just consider one iteration, 
Julia does it in  (I ran it several times, precompiling etc)

0.000565 seconds (13 allocations: 1.603 MB)

MATLAB on the other hand: Elapsed time is 0.007864 seconds.

The bottom line is that in Julia the code is much faster (more than 10 times 
in this case), which is expected since I use all my threads and the method is 
written in C. However, if I don't comment out the loop and run the code as 
posted above:

Julia:
3.205262 seconds (118.99 k allocations: 15.651 GB, 14.08% gc time)

MATLAB:
Elapsed time is 4.514449 seconds.


If I run the whole loop, MATLAB apparently catches up a lot. It is still 
slower in this case. In the original problem, though, Julia was only about 3 
times faster within the loop, and once I looped through, MATLAB turned out to 
be 3 times faster. I am stuck here. Does anybody have an idea what might be 
going on, or whether I am doing something wrong in my Julia code? A hint at 
what I could do better? Right now I am trying to parallelize the loop as 
well, but that's obviously an unfair comparison with MATLAB, because you 
could parallelize there too (if you buy that expensive toolbox). 










[julia-users] Re: ANN: Julia "lite" branch available

2016-01-29 Thread Scott Jones
I've updated the branch again (after tracking down and working around an 
issue introduced with #13412);
had to get that great jb/function PR in!
All unit tests pass.
All unit tests pass.

On Thursday, January 21, 2016 at 9:02:50 PM UTC-5, Scott Jones wrote:
>
> This is still a WIP, and could definitely use some more work in 1) testing 
> on other platforms, 2) better disentangling of documentation, 3) advice on 
> how better to accomplish its goals, and 4) testing with different subsets of 
> functionality turned on (I've tested just with BUILD_FULL disabled ("lite" 
> version) or enabled (same as master) so far).
>
> This branch (spj/lite in ScottPJones repository, 
> https://github.com/ScottPJones/julia/tree/spj/lite) by default will build 
> a "lite" version of Julia, and by putting
> override BUILD_xxx = 1
> lines in Make.user, different functionality can be built back in (such as 
> BigInt, BigFloat, LinAlg, Float16, Mmap, Threads, ...).  See Make.inc for 
> the full list.
>
> I've also made it so that all unit tests pass (that don't use disabled 
> functionality).
> (the hard part there was that testing can be spread all over the place, 
> esp. for BigInt, BigFloat, Complex, and Rational types).
>
> It will also not build libraries such as arpack, lapack, openblas, fftw, 
> suitesparse, mpfr, gmp, depending on what BUILD_* options have been set.
>
> This is only a first step, the real goal is to be able to have a minimal 
> useful core, that can have the other parts easily added, in such a way that 
> they still appear to have
> been defined completely in Base.
> One place where I think this can be very useful is for building minimal 
> versions of Julia to run on things like the Raspberry Pi.
>
> -Scott
>
>
>

[julia-users] malloc error?

2016-01-29 Thread Nandana Sengupta
I'm using the Julia package LowRankModels 
(https://github.com/madeleineudell/LowRankModels.jl/tree/dataframe-ux) and 
coming across a malloc error when making a single innocuous change to my 
code. 

For instance in the following sample code, changing a single parameter (the 
rank k)  from 10 to 40 makes the model go from running smoothly to 
producing a malloc error. 

Would really appreciate any pointers towards what might be going on /any 
tips to debug this error. Thanks!

Branch of LowRankModels: dataframe-ux

Link to 
Data: https://dl.dropboxusercontent.com/u/24399038/GSS2014cleanestCV10.csv

Julia code that reproduces the error:

##

using DataFrames
# branch of LowRankModels found at https://github.com/NandanaSengupta/LowRankModels.jl/tree/dataframe-ux
using LowRankModels

### loading data table
df = readtable("GSS2014cleanestCV10.csv");
# eliminate first (id) column
df1 = df[:, 2:size(df)[2] ];


# vector of datatypes -- 3 types of columns: real, categorical and ordinal
datatypes = Array(Symbol, size(df1)[2])
datatypes[1:23] = :real
datatypes[24:54] = :ord
datatypes[55:size(df1)[2]] = :cat



## run GLRM AND cross_validate with rank k = 10 
##
 Runs without any error

glrm_10 = GLRM(df1, 10, datatypes)

srand(10)
t1, t2, t3, t4 = cross_validate(glrm_10, nfolds = 5, params = Params(), 
init = init_svd!);


## run GLRM AND cross_validate with rank k = 40 
##
 malloc error on cross_validate

glrm_40 = GLRM(df1, 40, datatypes) #, prob_scale = false)

srand(40)
t1, t2, t3, t4 = cross_validate(glrm_40, nfolds = 5, params = Params(), 
init = init_svd!);




[julia-users] malloc error?

2016-01-29 Thread Nandana Sengupta
 

I'm using the Julia package LowRankModels 
(https://github.com/madeleineudell/LowRankModels.jl/tree/dataframe-ux) and 
coming across a malloc error when making a single innocuous change to my 
code. 


For instance in the following sample code, changing a single parameter (the 
rank k) from 10 to 40 makes the model go from running smoothly to producing 
a malloc error. 


Would really appreciate any pointers towards what might be going on /any 
tips to debug this error. Details and code below. Thanks!


#


Wording of error: "julia(9849,0x7fff705d0300) malloc: *** error for object 
0x7f96a332f408: incorrect checksum for freed object - object was probably 
modified after being freed.

*** set a breakpoint in malloc_error_break to debug"


Julia Version: 0.4.1 


Branch of LowRankModels: dataframe-ux


Link to Data: 
https://dl.dropboxusercontent.com/u/24399038/GSS2014cleanestCV10.csv


Julia code that reproduces the error:



##


using DataFrames

# branch of LowRankModels found at https://github.com/NandanaSengupta/LowRankModels.jl/tree/dataframe-ux

using LowRankModels


### loading data table

df = readtable("GSS2014cleanestCV10.csv");

# eliminate first (id) column

df1 = df[:, 2:size(df)[2] ];



# vector of datatypes -- 3 types of columns: real, categorical and ordinal

datatypes = Array(Symbol, size(df1)[2])

datatypes[1:23] = :real

datatypes[24:54] = :ord

datatypes[55:size(df1)[2]] = :cat




## run GLRM AND cross_validate with rank k = 10 
##

 Runs without any error


glrm_10 = GLRM(df1, 10, datatypes)


srand(10)

t1, t2, t3, t4 = cross_validate(glrm_10, nfolds = 5, params = Params(), 
init = init_svd!);



## run GLRM AND cross_validate with rank k = 40 
##

 malloc error on cross_validate 


glrm_40 = GLRM(df1, 40, datatypes) #, prob_scale = false)


srand(40)

t1, t2, t3, t4 = cross_validate(glrm_40, nfolds = 5, params = Params(), 
init = init_svd!);


[julia-users] Re: malloc error?

2016-01-29 Thread Madeleine Udell
A bit more information: the error seems to be occurring in the function
`fit!` in algorithms/proxgrad.jl.
There's almost no calling of C functions, so very little chance for memory
management problems: the only possible problems I can imagine are in

1) gemm! (e.g. line 59)
2) accessing out-of-bounds entries via an ArrayView (e.g. line 84)
3) accessing a garbage-collected variable (maybe accessing an array entry
via an array view?)

I've checked the sizes of matrices via a bunch of @assert statements, and the
error occurs nondeterministically, which leads me to think that something
like 3) is happening.

Any ideas how we've managed to modify an object after freeing it?

Any ideas how we've managed to modify an object after freeing it?

On Fri, Jan 29, 2016 at 10:14 AM, Nandana Sengupta 
wrote:

> I'm using the Julia package LowRankModels (
> https://github.com/madeleineudell/LowRankModels.jl/tree/dataframe-ux) and
> coming across a malloc error when making a single innocuous change to my
> code.
>
>
> For instance in the following sample code, changing a single parameter
> (the rank k) from 10 to 40 makes the model go from running smoothly to
> producing a malloc error.
>
>
> Would really appreciate any pointers towards what might be going on /any
> tips to debug this error. Details and code below. Thanks!
>
>
> #
>
>
> Wording of error: "julia(9849,0x7fff705d0300) malloc: *** error for object
> 0x7f96a332f408: incorrect checksum for freed object - object was probably
> modified after being freed.
>
> *** set a breakpoint in malloc_error_break to debug"
>
>
> Julia Version: 0.4.1
>
>
> Branch of LowRankModels: dataframe-ux
>
>
> Link to Data:
> https://dl.dropboxusercontent.com/u/24399038/GSS2014cleanestCV10.csv
>
>
> Julia code that reproduces the error:
>
>
>
> ##
>
>
> using DataFrames
>
> # branch of LowRankModels found at
> # https://github.com/NandanaSengupta/LowRankModels.jl/tree/dataframe-ux
>
> using LowRankModels
>
>
> ### loading data table
>
> df = readtable("GSS2014cleanestCV10.csv");
>
> # eliminate first (id) column
>
> df1 = df[:, 2:size(df)[2] ];
>
>
>
> # vector of datatypes -- 3 types of columns: real, categorical and ordinal
>
> datatypes = Array(Symbol, size(df1)[2])
>
> datatypes[1:23] = :real
>
> datatypes[24:54] = :ord
>
> datatypes[55:size(df1)[2]] = :cat
>
>
>
>
> ## run GLRM AND cross_validate with rank k = 10
> ##
>
>  Runs without any error
>
>
> glrm_10 = GLRM(df1, 10, datatypes)
>
>
> srand(10)
>
> t1, t2, t3, t4 = cross_validate(glrm_10, nfolds = 5, params = Params(),
> init = init_svd!);
>
>
>
> ## run GLRM AND cross_validate with rank k = 40
> ##
>
>  malloc error on cross_validate
>
>
> glrm_40 = GLRM(df1, 40, datatypes) #, prob_scale = false)
>
>
> srand(40)
>
> t1, t2, t3, t4 = cross_validate(glrm_40, nfolds = 5, params = Params(),
> init = init_svd!);
>



-- 
Madeleine Udell
Postdoctoral Fellow at the Center for the Mathematics of Information
California Institute of Technology
https://courses2.cit.cornell.edu/mru8
(415) 729-4115


[julia-users] Re: yet another benchmark with numba and fortran

2016-01-29 Thread Jutho
Transposing coord (i.e. letting the column index correspond to x, y, z and 
the row index to the different particles) helps a little bit (but not much). 
Julia uses column-major order; Python is different, I believe. How big is 
the difference with Numba?

Op vrijdag 29 januari 2016 21:26:07 UTC+1 schreef STAR0SS:
>
> A significant part of the time is spent in computing the norms (r_sq = 
> rx*rx + ry*ry + rz*rz) and the absolute value (abs(rz) > boxl2), I don't 
> know if there's more efficient ways to compute those.
>


Re: [julia-users] Has module pre-compilation been back-ported to Julia 0.3.11?

2016-01-29 Thread Yichao Yu
On Fri, Jan 29, 2016 at 5:51 PM,   wrote:
> Looking to precompile modules in Julia 0.3.11. Wanted to know if this is
> possible and if anyone has successfully completed this.

It is not and won't be.

>
> -Michelle


Re: [julia-users] Cannot pull with rebase...?

2016-01-29 Thread Joshua Duncan
Have you resolved this problem?  I have the same errors as you.  Just 
installed v0.4.3 on Windows 10.

Thanks,
Josh

On Friday, January 15, 2016 at 8:54:42 AM UTC-6, fab...@chalmers.se wrote:
>
> "Pkg.dir()" give the same "Cannot pull with rebase..." error message.
>
> Thanks.
>
> On Wednesday, January 13, 2016 at 5:12:37 PM UTC+1, Stefan Karpinski wrote:
>>
>> What is the result of `Pkg.dir()`  in the Julia REPL? That's where your 
>> packages live and the METADATA repo.
>>
>> On Wed, Jan 13, 2016 at 10:36 AM,  wrote:
>>
>>> Thanks for the reply. Unfortunately, I cannot find any such folder. This 
>>> is the binary 64-bit version of Julia 0.4.2 on W10x64pro. I expected this 
>>> to just install and run, but apparently this is not the case, at least not 
>>> on W10...
>>>
>>>
>>> On Tuesday, January 12, 2016 at 4:48:22 PM UTC+1, Stefan Karpinski wrote:

 If you go into ~/.julia/v0.4/METADATA and do `git status` you should 
 see what's going on there.

 On Tue, Jan 12, 2016 at 10:41 AM,  wrote:

> Hello all 
> (first post)
>
> I just now downloaded version 0.4.2 of Julia, and installed it on my 
> W10x64pro machine. I was going to use it to run the optimization 
> packages, 
> so I did the Pkg.update() on the Julia cmd line, as instructed here 
> http://www.juliaopt.org/.
>
> To my surprise, i get the following error message:
>
> INFO: Updating METADATA...
> error: Cannot pull with rebase: You have unstaged changes.
> ERROR: failed process: Process(`git pull --rebase -q`, 
> ProcessExited(1)) [1]
>  in pipeline_error at process.jl:555
>
> I have not done any changes to any source, just DL'ed and installed 
> the 64-bit binary for windows, and I have no clue what is going on... 
> Does 
> anyone have any ideas?
>
> Thanks.
>


>>

Re: [julia-users] Re: ANN: Julia "lite" branch available

2016-01-29 Thread Jeff Bezanson
This is interesting, and a good starting point for refactoring our
large Base library. Any fun statistics, e.g. build time and system
image size for the minimal version?


On Fri, Jan 29, 2016 at 3:00 PM, Scott Jones  wrote:
> I've updated the branch again (after a tracking down and working around an
> issue introduced with #13412),
> had to get that great jb/function PR in!
> All unit tests pass.
>
> On Thursday, January 21, 2016 at 9:02:50 PM UTC-5, Scott Jones wrote:
>>
>> This is still a WIP, and could definitely use some more work in 1) testing
>> on other platforms, 2) better disentangling of documentation, 3) advice on how
>> better to accomplish its goals, and 4) testing with different subsets of
>> functionality turned on (I've tested just with BUILD_FULL disabled ("lite"
>> version) or enabled (same as master) so far).
>>
>> This branch (spj/lite in ScottPJones repository,
>> https://github.com/ScottPJones/julia/tree/spj/lite) by default will build a
>> "lite" version of Julia, and by putting
>> override BUILD_xxx = 1
>> lines in Make.user, different functionality can be built back in (such as
>> BigInt, BigFloat, LinAlg, Float16, Mmap, Threads, ...).  See Make.inc for
>> the full list.
>>
>> I've also made it so that all unit tests pass (that don't use disabled
>> functionality).
>> (the hard part there was that testing can be spread all over the place,
>> esp. for BigInt, BigFloat, Complex, and Rational types).
>>
>> It will also not build libraries such as arpack, lapack, openblas, fftw,
>> suitesparse, mpfr, gmp, depending on what BUILD_* options have been set.
>>
>> This is only a first step, the real goal is to be able to have a minimal
>> useful core, that can have the other parts easily added, in such a way that
>> they still appear to have
>> been defined completely in Base.
>> One place where I think this can be very useful is for building minimal
>> versions of Julia to run on things like the Raspberry Pi.
>>
>> -Scott
>>
>>
>


Re: [julia-users] Re: ANN: Julia "lite" branch available

2016-01-29 Thread cdm

intriguing ... Thank You, Scott !

possibly of interest:

   
https://groups.google.com/forum/#!searchin/julia-users/alternate$20lisp/julia-users/am8opcv-5Mc/UdXyususBwAJ


enjoy !!!


Re: [julia-users] are tasks threads in 0.4?

2016-01-29 Thread Yichao Yu
On Fri, Jan 29, 2016 at 10:53 PM, andrew cooke  wrote:
>
> i've been away from julia for a while so am not up-to-date on changes, and
> am looking at an odd problem.
>
> i have some code, which is messier and more complex than i would like, which
> is called to print a graph of values.  the print code uses tasks.  in 0.3
> this works, but in 0.4 the program sits, using no CPU.
>
> if i dump the stack (using gstack PID) i see:
>
> Thread 4 (Thread 0x7efe3b6bb700 (LWP 1709)):
> #0  0x7f0042e7e05f in pthread_cond_wait@@GLIBC_2.3.2 () from
> /lib64/libpthread.so.0
> #1  0x7efe3bf62b5b in blas_thread_server () from
> /home/andrew/pkg/julia-0.4/usr/bin/../lib/libopenblas64_.so
> #2  0x7f0042e7a0a4 in start_thread () from /lib64/libpthread.so.0
> #3  0x7f004231604d in clone () from /lib64/libc.so.6
> Thread 3 (Thread 0x7efe3aeba700 (LWP 1710)):
> #0  0x7f0042e7e05f in pthread_cond_wait@@GLIBC_2.3.2 () from
> /lib64/libpthread.so.0
> #1  0x7efe3bf62b5b in blas_thread_server () from
> /home/andrew/pkg/julia-0.4/usr/bin/../lib/libopenblas64_.so
> #2  0x7f0042e7a0a4 in start_thread () from /lib64/libpthread.so.0
> #3  0x7f004231604d in clone () from /lib64/libc.so.6
> Thread 2 (Thread 0x7efe3a6b9700 (LWP 1711)):
> #0  0x7f0042e7e05f in pthread_cond_wait@@GLIBC_2.3.2 () from
> /lib64/libpthread.so.0
> #1  0x7efe3bf62b5b in blas_thread_server () from
> /home/andrew/pkg/julia-0.4/usr/bin/../lib/libopenblas64_.so
> #2  0x7f0042e7a0a4 in start_thread () from /lib64/libpthread.so.0
> #3  0x7f004231604d in clone () from /lib64/libc.so.6
> Thread 1 (Thread 0x7f0044710740 (LWP 1708)):
> #0  0x7f0042e8120d in pause () from /lib64/libpthread.so.0
> #1  0x7f0040a190fe in julia_wait_17546 () at task.jl:364
> #2  0x7f0040a18ea1 in julia_wait_17544 () at task.jl:286
> #3  0x7f0040a40ffc in julia_lock_18599 () at lock.jl:23
> #4  0x7efe3ecdbeb7 in ?? ()
> #5  0x7ffd3e6ad2c0 in ?? ()
> #6  0x in ?? ()
>
> which looks suspiciously like some kind of deadlock.
>
> but i am not using threads, myself.  just tasks.

Tasks are not threads. You can see the threads are started by openblas.

IIUC, tasks can deadlock too, depending on how you use them.
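(Editor's note: a sketch of how a pure-task program can hang this way, with no threads involved — a task that waits on a Condition blocks until some other task calls notify, and if nothing ever does, the program sits idle at 0% CPU:)

```julia
# One task waits on a Condition; without the notify below, wait(c)
# would block forever and the whole program would idle.
c = Condition()
t = @task begin
    wait(c)
    "woke up"
end
schedule(t)
yield()          # let the task run up to its wait
notify(c)        # comment this out and fetch(t) never returns
result = fetch(t)
println(result)  # "woke up"
```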

>
> hence the question.  any pointers appreciated.
>
> thanks,
> andrew
>


[julia-users] are tasks threads in 0.4?

2016-01-29 Thread andrew cooke

i've been away from julia for a while so am not up-to-date on changes, and 
am looking at an odd problem.

i have some code, which is messier and more complex than i would like, 
which is called to print a graph of values.  the print code uses tasks.  in 
0.3 this works, but in 0.4 the program sits, using no CPU.

if i dump the stack (using gstack PID) i see:

Thread 4 (Thread 0x7efe3b6bb700 (LWP 1709)):
#0  0x7f0042e7e05f in pthread_cond_wait@@GLIBC_2.3.2 () from 
/lib64/libpthread.so.0
#1  0x7efe3bf62b5b in blas_thread_server () from 
/home/andrew/pkg/julia-0.4/usr/bin/../lib/libopenblas64_.so
#2  0x7f0042e7a0a4 in start_thread () from /lib64/libpthread.so.0
#3  0x7f004231604d in clone () from /lib64/libc.so.6
Thread 3 (Thread 0x7efe3aeba700 (LWP 1710)):
#0  0x7f0042e7e05f in pthread_cond_wait@@GLIBC_2.3.2 () from 
/lib64/libpthread.so.0
#1  0x7efe3bf62b5b in blas_thread_server () from 
/home/andrew/pkg/julia-0.4/usr/bin/../lib/libopenblas64_.so
#2  0x7f0042e7a0a4 in start_thread () from /lib64/libpthread.so.0
#3  0x7f004231604d in clone () from /lib64/libc.so.6
Thread 2 (Thread 0x7efe3a6b9700 (LWP 1711)):
#0  0x7f0042e7e05f in pthread_cond_wait@@GLIBC_2.3.2 () from 
/lib64/libpthread.so.0
#1  0x7efe3bf62b5b in blas_thread_server () from 
/home/andrew/pkg/julia-0.4/usr/bin/../lib/libopenblas64_.so
#2  0x7f0042e7a0a4 in start_thread () from /lib64/libpthread.so.0
#3  0x7f004231604d in clone () from /lib64/libc.so.6
Thread 1 (Thread 0x7f0044710740 (LWP 1708)):
#0  0x7f0042e8120d in pause () from /lib64/libpthread.so.0
#1  0x7f0040a190fe in julia_wait_17546 () at task.jl:364
#2  0x7f0040a18ea1 in julia_wait_17544 () at task.jl:286
#3  0x7f0040a40ffc in julia_lock_18599 () at lock.jl:23
#4  0x7efe3ecdbeb7 in ?? ()
#5  0x7ffd3e6ad2c0 in ?? ()
#6  0x in ?? ()

which looks suspiciously like some kind of deadlock.

but i am not using threads, myself.  just tasks.

hence the question.  any pointers appreciated.

thanks,
andrew



[julia-users] Re: SharedArray / parallel question

2016-01-29 Thread Christopher Alexander
I tried using @sync @parallel and ended up getting a segmentation fault, so 
I'm not really sure how to parallelize this loop.  This is inside of a 
larger module, so I'm not sure if something special has to be done (e.g. 
putting @everywhere somewhere).  I've searched this forum and the 
documentation, which is where I got the idea to add @sync.  I'd appreciate 
any input, as I'm kind of stuck.  The larger code in which this 
parallelization should take place is here:
https://github.com/pazzo83/QuantJulia.jl/blob/master/src/methods/lattice.jl

Thanks!

Chris
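(Editor's note: for reference, a minimal working version of this pattern, written against the current API where `@parallel` has become `Distributed.@distributed`; in 0.4 the same shape applies with `@sync @parallel`. One common cause of an all-zeros result is omitting `@sync`: the bare parallel loop returns immediately, before the workers have written anything.)

```julia
using Distributed, SharedArrays

addprocs(2)   # worker processes; SharedArrays are single-machine only

n = 10
out = SharedArray{Float64}(n)
# @sync makes the caller wait until every worker's chunk is done.
@sync @distributed for j in 1:n
    out[j] = j^2          # each worker writes its slice of the array
end
total = sum(sdata(out))
println(total)            # 385.0
```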

On Friday, January 29, 2016 at 4:54:18 PM UTC-5, Christopher Alexander 
wrote:
>
> Hello all, I have a question about the proper usage of SharedArray / 
> @parallel.  I am trying to use it in a particular part of my code, but I am 
> not getting the expected results (instead I am getting an array of zeros 
> each time).
>
> Here are the two functions that are involved:
> function partial_rollback!(lattice::TreeLattice, asset::DiscretizedAsset, 
> t::Float64)
>   from = asset.common.time
>
>   if QuantJulia.Math.is_close(from, t)
> return
>   end
>
>   iFrom = findfirst(lattice.tg.times .>= from)
>   iTo = findfirst(lattice.tg.times .>= t)
>
>   @simd for i = iFrom-1:-1:iTo
> newVals = step_back(lattice, i, asset.common.values)
> @inbounds asset.common.time = lattice.tg.times[i]
> asset.common.values = sdata(newVals)
> 
> if i != iTo
>   adjust_values!(asset)
> end
>   end
>
>   return asset
> end
>
> function step_back(lattice::TreeLattice, i::Int, vals::Vector{Float64})
>
>   newVals = SharedArray(Float64, get_size(lattice.impl, i))
>   @parallel for j = 1:length(newVals)
> val = 0.0
> for l = 1:lattice.n
>   val += probability(lattice.impl, i, j, l) * 
> vals[descendant(lattice.impl, i, j, l)]
> end
> val *= discount(lattice.impl, i, j)
> newVals[j] = val
>   end
>   retArray = sdata(newVals)
>   
>   return retArray
> end
>
Is that too much complexity in the parallel loop?  Right now the max # of 
> times I've seen over this loop is in the 9000+ range, so that's why I 
> thought it would be better than pmap.
> Any suggestions?
>
> Thanks!
>
> Chris
>


[julia-users] Re: Julia vs Matlab: interpolation and looping

2016-01-29 Thread Andrew
Your loop has a ton of unnecessary allocation. You can 
move xprime=ones(Nalal,Naa) outside the loop.
Also, you are converting xprime to a vector at every iteration. You can 
also do this outside the loop.

After the changes, I get

julia> include("test2.jl");
WARNING: redefining constant lib
  3.726049 seconds (99.47 k allocations: 15.651 GB, 6.56% gc time)

julia> include("test3.jl");
WARNING: redefining constant lib
  2.352259 seconds (29.40 k allocations: 5.219 GB, 3.29% gc time)

in test3 the performance function is 

function performance(NoIter::Int64)
W_temp = Array(Float64,Nalal*Naa)
W_temp2 = Array(Float64,Nalal,Naa)
xprime = ones(Nalal,Naa)
xprime = xprime[:]
for banana=1:NoIter
W_temp = spl(xprime)
end
W_temp2 = reshape(W_temp,Nalal,Naa)
end

I don't know if you have this in your original code though. Maybe it's just 
your example.

I also have need for fast cubic splines because I do dynamic programming, 
though I don't think Dierckx is a bottleneck for me. I think the 
Interpolations package might soon have cubic splines on a non-uniform grid, 
and it's very fast.
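The hoisting advice generalizes; here is a minimal sketch (hypothetical sizes and function names, not the original problem) showing the two variants side by side:

```julia
# Minimal sketch of the allocation-hoisting advice (hypothetical sizes):
# the only difference between the two functions is where ones() runs.
function alloc_every_iteration(n)
    s = 0.0
    for i = 1:n
        x = ones(200, 350)      # fresh 200x350 array on every pass
        s += x[1]
    end
    return s
end

function alloc_once(n)
    x = ones(200, 350)          # hoisted: one allocation total
    s = 0.0
    for i = 1:n
        s += x[1]
    end
    return s
end

# @time alloc_every_iteration(1000)   # GBs allocated, visible gc time
# @time alloc_once(1000)              # a single small allocation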


On Friday, January 29, 2016 at 8:02:53 PM UTC-5, pokerho...@googlemail.com 
wrote:
>
> Hi,
>
> my original problem is a dynamic programming problem in which I need to 
> interpolate the value function on an irregular grid using a cubic spline 
> method. I was translating my MATLAB code into Julia and used the Dierckx 
> package in Julia to do the interpolation (there weren't so many 
> alternatives that did splines on an irregular grid, as far as I recall). In 
> MATLAB I use interp1. It gave exactly the same result but it was about 50 
> times slower, which puzzled me. So I made this (
> http://stackoverflow.com/questions/34766029/julia-vs-matlab-why-is-my-julia-code-so-slow)
>  
> stackoverflow post. 
>
> The post is messy and you don't need to read through it I think. The 
> bottom line was that the Dierckx package apparently calls some Fortran code 
> which seems to be pretty old (and slow, and doesn't use multiple cores). 
> Nobody knows exactly what interp1 is doing; my guess is that it's coded in 
> C?!
>
> So I asked a friend of mine who knows a little bit of C and he was so kind 
> to help me out. He translated the interpolation method into C and made it 
> such that it uses multiple threads (I am working with 12 threads). He also 
> put it on github here (https://github.com/nuffe/mnspline). Equipped with 
> that, I went back to my original problem and reran it. The Julia code was 
> still 3 times slower which left me puzzled again. The interpolation itself 
> was much faster now than MATLAB's interp1 but somewhere on the way that 
> advantage was lost. Consider the following minimal working example 
> preserving the irregular grid of the original problem which highlights the 
> point I think (the only action is in the loop, the other stuff is just 
> generating the irregular grid):
>
> MATLAB:
>
> spacing=1.5;
> Nxx = 300 ;
> Naa = 350;
> Nalal = 200; 
> sigma = 10 ;
> NoIter = 1; 
>
> xx=NaN(Nxx,1);
> xmin = 0.01;
> xmax = 400;
> xx(1) = xmin;
> for i=2:Nxx
> xx(i) = xx(i-1) + (xmax-xx(i-1))/((Nxx-i+1)^spacing);
> end
>
> f_util =  @(c) c.^(1-sigma)/(1-sigma);
> W=NaN(Nxx,1);
> W(:,1) = f_util(xx);
>
> W_temp = NaN(Nalal,Naa);
> xprime = NaN(Nalal,Naa);
>
> tic
> for banana=1:NoIter
> %   tic
>   xprime=ones(Nalal,Naa);
>   W_temp=interp1(xx,W(:,1),xprime,'spline');
> %   toc
> end
> toc
>
>
> Julia:
>
> include("mnspline.jl")
>
> spacing=1.5
> Nxx = 300
> Naa = 350
> Nalal = 200
> sigma = 10
> NoIter = 1
>
> xx=Array(Float64,Nxx)
> xmin = 0.01
> xmax = 400
> xx[1] = xmin
> for i=2:Nxx
> xx[i] = xx[i-1] + (xmax-xx[i-1])/((Nxx-i+1)^spacing)
> end
>
> f_util(c) =  c.^(1-sigma)/(1-sigma)
> W=Array(Float64,Nxx,1)
> W[:,1] = f_util(xx)
>
>
> spl = mnspline(xx,W[:,1])
>
> function performance(NoIter::Int64)
> W_temp = Array(Float64,Nalal*Naa)
> W_temp2 = Array(Float64,Nalal,Naa)
> xprime=Array(Float64,Nalal,Naa)
> for banana=1:NoIter
> xprime=ones(Nalal,Naa)
> W_temp = spl(xprime[:])
> end
> W_temp2 = reshape(W_temp,Nalal,Naa)
> end
>
> @time performance(NoIter)
>
>
> In the end I want to have a matrix, that's why I do all this reshaping in 
> the Julia code. If I comment out the loop and just consider one iteration, 
> Julia does it in  (I ran it several times, precompiling etc)
>
> 0.000565 seconds (13 allocations: 1.603 MB)
>
> MATLAB on the other hand: Elapsed time is 0.007864 seconds.
>
> The bottom line being that in Julia the code is much faster (more than 10 
> times in this case), which is as expected since I use all my threads and the 
> method is written in C. However, if I don't comment out the loop and run the 
> code as posted above:
>
> Julia:
> 3.205262 seconds (118.99 k allocations: 15.651 GB, 14.08% gc time)
>
> MATLAB:
> Elapsed time is 4.514449 seconds.
>
>
> If I run the whole loop 

[julia-users] SharedArray / parallel question

2016-01-29 Thread Christopher Alexander
Hello all, I have a question about the proper usage of SharedArray / 
@parallel.  I am trying to use it in a particular part of my code, but I am 
not getting the expected results (instead I am getting an array of zeros 
each time).

Here are the two functions that are involved:
function partial_rollback!(lattice::TreeLattice, asset::DiscretizedAsset, 
t::Float64)
  from = asset.common.time

  if QuantJulia.Math.is_close(from, t)
return
  end

  iFrom = findfirst(lattice.tg.times .>= from)
  iTo = findfirst(lattice.tg.times .>= t)

  @simd for i = iFrom-1:-1:iTo
newVals = step_back(lattice, i, asset.common.values)
@inbounds asset.common.time = lattice.tg.times[i]
asset.common.values = sdata(newVals)

if i != iTo
  adjust_values!(asset)
end
  end

  return asset
end

function step_back(lattice::TreeLattice, i::Int, vals::Vector{Float64})

  newVals = SharedArray(Float64, get_size(lattice.impl, i))
  @parallel for j = 1:length(newVals)
val = 0.0
for l = 1:lattice.n
  val += probability(lattice.impl, i, j, l) * 
vals[descendant(lattice.impl, i, j, l)]
end
val *= discount(lattice.impl, i, j)
newVals[j] = val
  end
  retArray = sdata(newVals)
  
  return retArray
end

Is that too much complexity in the parallel loop?  Right now the max # of 
times I've seen over this loop is in the 9000+ range, so that's why I 
thought it would be better than pmap.
Any suggestions?

Thanks!

Chris
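One hedged observation on the zeros: in this Julia version, @parallel over a for loop without a reduction returns immediately, so sdata(newVals) can run before the workers have written anything, and you read back the zero-initialized SharedArray. Prefixing @sync makes the caller wait for all the writes. A self-contained sketch (not the QuantJulia code):

```julia
# Minimal sketch (Julia 0.4-era API, not the QuantJulia code): without
# @sync, @parallel returns before the workers finish, so reading the
# SharedArray right away can yield zeros.
# addprocs(2)   # uncomment to actually fan out to worker processes

function filled_squares(n)
    a = SharedArray(Float64, n)
    @sync @parallel for j = 1:n
        a[j] = j^2           # each worker writes its own chunk
    end
    return sdata(a)          # safe: all writes have completed
end
```

The segfault seen with @sync @parallel is likely a separate issue; one thing to check (an assumption, not a diagnosis) is that every function and type used in the loop body is defined on all workers, e.g. via @everywhere or by loading the module on every process.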


[julia-users] Re: yet another benchmark with numba and fortran

2016-01-29 Thread STAR0SS
A significant part of the time is spent in computing the norms (r_sq = 
rx*rx + ry*ry + rz*rz) and the absolute value (abs(rz) > boxl2); I don't 
know if there are more efficient ways to compute those.
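For what it's worth, a hedged sketch (hypothetical arrays rx, ry, rz and cutoff parameters, not the benchmark's actual code) of how such norms are often computed in a tight Julia loop, using muladd to fuse the multiply-adds and @inbounds/@simd to help the compiler vectorize:

```julia
# Hedged sketch (hypothetical inputs): count points inside a spherical
# cutoff and a slab. muladd fuses each multiply-add; @inbounds drops
# bounds checks and @simd permits vectorization of the reduction.
function count_inside(rx, ry, rz, r_cut_sq, boxl2)
    n = 0
    @inbounds @simd for i = 1:length(rx)
        r_sq = muladd(rx[i], rx[i], muladd(ry[i], ry[i], rz[i]*rz[i]))
        if r_sq < r_cut_sq && abs(rz[i]) <= boxl2
            n += 1
        end
    end
    return n
end
```

Whether this actually beats the plain rx*rx + ry*ry + rz*rz form depends on the surrounding loop; the branch may inhibit vectorization, so it is worth benchmarking rather than assuming.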


[julia-users] unix timestamp to DateTime & divexact errors

2016-01-29 Thread Michael Landis
Is there a language convention for converting/comparing unix timestamps 
(e.g. file modify dates) and DateTimes?

When I do:
fs = stat( "X.csv" );
ms = now() - fs.mtime;
typeof(ms) --> Base.Dates.Millisecond, even though now() is a DateTime

# you would think that creating a DateTime would make them comparable, but
dt = DateTime( fs.mtime );
# still has fractional seconds in it, so computations with dt and 
constructed DateTimes produce divexact errors

What's the best stylistic workaround for that?




[julia-users] Re: unix timestamp to DateTime & divexact errors

2016-01-29 Thread Michael Landis
# Even after
dt = trunc(dt,Second);
I am still seeing divexact errors...

On Friday, January 29, 2016 at 12:44:35 PM UTC-8, Michael Landis wrote:
>
> Is there a language convention for converting/comparing unix timestamps 
> (e.g. file modify dates) and DateTimes?
>
> When I do:
> fs = stat( "X.csv" );
> ms = now() - fs.mtime;
> typeof(ms) --> Base.Dates.Millisecond, even though now() is a DateTime
>
> # you would think that creating a DateTime would make them comparable, but
> dt = DateTime( fs.mtime );
> # still has fractional seconds in it, so computations with dt and 
> constructed DateTimes produce divexact errors
>
> What's the best stylistic workaround for that?
>
>
>
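A hedged sketch of two possible workarounds (assuming Base.Dates' unix2datetime/datetime2unix; the mtime value below is hypothetical, standing in for stat(...).mtime, which is a Float64 of unix seconds and may carry a fractional part):

```julia
# Hedged sketch (hypothetical mtime value, Julia 0.4-era Dates API):
mtime = 1.4540256001234e9              # stand-in for stat("X.csv").mtime

# Option 1: stay in unix seconds and never mix the two representations;
# the age comes out as a plain Float64 of seconds.
age_seconds = Dates.datetime2unix(now()) - mtime

# Option 2: drop the fractional part BEFORE constructing the DateTime,
# so later DateTime arithmetic only ever sees whole seconds.
dt = Dates.unix2datetime(floor(mtime))
```

Option 2 truncates at the source rather than after construction, which is one guess at why trunc on the already-built DateTime did not make the divexact errors go away.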