Re: [julia-users] functions on iterable types

2015-10-26 Thread harven


On Monday, 26 October 2015 at 07:15:56 UTC+1, DNF wrote:
>
> Hmm. Trying to answer myself: I guess my suggested solution would miss 
> functions that don't specify the type, but just rely on the iterable 
> behaviour.
>
>
It is easy to get the list of thunks that work on iterables.

 type Iterable x::Float64 end
 # 0.4 iteration protocol: next and done take the iteration state as a second argument.
 Base.start(x::Iterable) = 0
 Base.next(x::Iterable, state) = (0, state)
 Base.done(x::Iterable, state) = true

 show(filter(s -> try
                      eval(s)(Iterable(0)) ; true
                  catch
                      false
                  end,
             names(Base)))

I am not sure how to provide default arguments for functions of several 
variables though.

Here is the list, excluding macros.

:apropos,:broadcast!_function,:broadcast_function,:conj,:ctranspose,:cycle,:deepcopy,
:display,:done,:dump,:eltype,:enumerate,:esc,:expand,:fetch,:fieldnames,:fill,:finalize,
:hash,:hcat,:htol,:identity,:info,:isbits,:isgeneric,:isimmutable,:isleaftype,:ltoh,:macroexpand,
:methods,:names,:next,:object_id,:permutations,:pointer_from_objref,:print,:println,:promote,
:promote_type,:push!,:redisplay,:repeated,:repr,:rmprocs,:show,:showall,:showcompact,
:sizeof,:start,:string,:summary,:symbol,:symdiff,:transpose,:typejoin,:vcat,:warn,:xdump,:zip
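
For functions of several variables, one possible probe is sketched below (an untested assumption on my part): try each name with one and two dummy Iterable arguments and keep it if any call succeeds.

# Sketch only: probes each exported name of Base with one and two Iterable
# arguments; anything that throws, or is not callable, is filtered out.
# `candidates` is just an illustrative name.
candidates = filter(names(Base)) do s
    f = try eval(Base, s) catch; return false end
    for args in ((Iterable(0.0),), (Iterable(0.0), Iterable(0.0)))
        try
            f(args...)
            return true
        catch
        end
    end
    false
end
show(candidates)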


[julia-users] PyPlot histogram does not work (for me at least) in 0.4.0 (?)

2015-10-26 Thread Ferran Mazzanti
Hi folks,

using Linux Mint 17.1 here. I upgraded to julia 0.4.0 and now this simple 
code, taken from the web and tested on previous versions,

using PyPlot

x = randn(1000) # Values
nbins = 50 # Number of bins

fig = figure("pyplot_histogram",figsize=(6,6)) # Not strictly required
ax = axes() # Not strictly required
h = PyPlot.plt.hist(x,nbins) # Histogram, PyPlot.plt required to 
differentiate with conflicting hist command

Produces the following output

LoadError: type PyObject has no field hist
while loading In[133], in expression starting on line 6

 in getindex at /home/mazzanti/.julia/v0.4/PyCall/src/PyCall.jl:240
 in pysequence_query at /home/mazzanti/.julia/v0.4/PyCall/src/conversions.jl:781
 [inlined code] from /home/mazzanti/.julia/v0.4/PyCall/src/conversions.jl:797
 in pytype_query at /home/mazzanti/.julia/v0.4/PyCall/src/conversions.jl:826
 in convert at /home/mazzanti/.julia/v0.4/PyCall/src/conversions.jl:846
 in pycall at /home/mazzanti/.julia/v0.4/PyCall/src/PyCall.jl:399
 in call at /home/mazzanti/.julia/v0.4/PyCall/src/PyCall.jl:407
 in close_queued_figs at /home/mazzanti/.julia/v0.4/PyPlot/src/PyPlot.jl:401

Any hint on that? Am I doing something wrong? If so, can anybody help on how to 
do histograms in Julia 0.4.0?

Thanks,

Ferran. 




Re: [julia-users] DataFrame type specification

2015-10-26 Thread Andrew Gibb
That worked. Thanks. 


[julia-users] Re: PyPlot histogram does not work (for me at least) in 0.4.0 (?)

2015-10-26 Thread Kristoffer Carlsson
Change last line to:

h = PyPlot.plt[:hist](x,nbins)



On Monday, October 26, 2015 at 11:28:35 AM UTC+1, Ferran Mazzanti wrote:
>
> Hi folks,
>
> using Linux Mint 17.1 here. I upgraded to julia 0.4.0 and now this simple 
> code, taken from the web and tested on previous versions,
>
> using PyPlot
>
> x = randn(1000) # Values
> nbins = 50 # Number of bins
>
> fig = figure("pyplot_histogram",figsize=(6,6)) # Not strictly required
> ax = axes() # Not strictly required
> h = PyPlot.plt.hist(x,nbins) # Histogram, PyPlot.plt required to 
> differentiate with conflicting hist command
>
> Produces the following output
>
> LoadError: type PyObject has no field hist
> while loading In[133], in expression starting on line 6
>
>  in getindex at /home/mazzanti/.julia/v0.4/PyCall/src/PyCall.jl:240
>  in pysequence_query at 
> /home/mazzanti/.julia/v0.4/PyCall/src/conversions.jl:781
>  [inlined code] from /home/mazzanti/.julia/v0.4/PyCall/src/conversions.jl:797
>  in pytype_query at /home/mazzanti/.julia/v0.4/PyCall/src/conversions.jl:826
>  in convert at /home/mazzanti/.julia/v0.4/PyCall/src/conversions.jl:846
>  in pycall at /home/mazzanti/.julia/v0.4/PyCall/src/PyCall.jl:399
>  in call at /home/mazzanti/.julia/v0.4/PyCall/src/PyCall.jl:407
>  in close_queued_figs at /home/mazzanti/.julia/v0.4/PyPlot/src/PyPlot.jl:401
>
> Any hint on that? Am I doing something wrong? If so, can anybody help on how 
> to do histograms in Julia 0.4.0?
>
> Thanks,
>
> Ferran. 
>
>
>

[julia-users] Please help me understand comprehensions for creating an array

2015-10-26 Thread Ferran Mazzanti
Hi folks, 

I am trying to create an array of constant Float64 values. Something I did was:

a = 0.8;
Nb = 100;
p = zeros(Nb)
for i in 1:Nb
p[i] = a/Nb
end

and typeof(p) returns
Array{Float64,1}
so far, so good :)

But now I do the following instead to shorten things:

a = 0.8;
Nb = 100;
p = [ a/Nb for i in 1:Nb]

and typeof(p) returns
Array{Any,1}

which is a *big* pain since I obviously wanted to create an array of floats. 
So the questions are:
a) Is this behaviour normal/ expected?
b) If so, why is it? What is the logic of that? Isn't it true that the 
normal behaviour, in the statistical sense of what *most* people would 
expect, is to 
get floats right away? Or am I missing something?

I know I can always write 
p = float64( [ a/Nb for i in 1:Nb ] )
but anyway...

Cheers,

Ferran.


Array{Float64,1}




[julia-users] Re: PyPlot histogram does not work (for me at least) in 0.4.0 (?)

2015-10-26 Thread Ferran Mazzanti
That worked, thanks :) 

But I cannot understand this syntax... where can I find documentation 
about how to do that? Just to avoid asking such questions again...

Thanks again.

On Monday, October 26, 2015 at 11:31:59 AM UTC+1, Kristoffer Carlsson wrote:
>
> Change last line to:
>
> h = PyPlot.plt[:hist](x,nbins)
>
>
>
> On Monday, October 26, 2015 at 11:28:35 AM UTC+1, Ferran Mazzanti wrote:
>>
>> Hi folks,
>>
>> using Linux Mint 17.1 here. I upgraded to julia 0.4.0 and now this simple 
>> code, taken from the web and tested on previous versions,
>>
>> using PyPlot
>>
>> x = randn(1000) # Values
>> nbins = 50 # Number of bins
>>
>> fig = figure("pyplot_histogram",figsize=(6,6)) # Not strictly required
>> ax = axes() # Not strictly required
>> h = PyPlot.plt.hist(x,nbins) # Histogram, PyPlot.plt required to 
>> differentiate with conflicting hist command
>>
>> Produces the following output
>>
>> LoadError: type PyObject has no field hist
>> while loading In[133], in expression starting on line 6
>>
>>  in getindex at /home/mazzanti/.julia/v0.4/PyCall/src/PyCall.jl:240
>>  in pysequence_query at 
>> /home/mazzanti/.julia/v0.4/PyCall/src/conversions.jl:781
>>  [inlined code] from /home/mazzanti/.julia/v0.4/PyCall/src/conversions.jl:797
>>  in pytype_query at /home/mazzanti/.julia/v0.4/PyCall/src/conversions.jl:826
>>  in convert at /home/mazzanti/.julia/v0.4/PyCall/src/conversions.jl:846
>>  in pycall at /home/mazzanti/.julia/v0.4/PyCall/src/PyCall.jl:399
>>  in call at /home/mazzanti/.julia/v0.4/PyCall/src/PyCall.jl:407
>>  in close_queued_figs at /home/mazzanti/.julia/v0.4/PyPlot/src/PyPlot.jl:401
>>
>> Any hint on that? Am I doing something wrong? If so, can anybody help on how 
>> to do histograms in Julia 0.4.0?
>>
>> Thanks,
>>
>> Ferran. 
>>
>>
>>

[julia-users] Re: Please help me understand comprehensions for creating an array

2015-10-26 Thread Simon Danisch
This is not the desired but the expected behavior, since type inference of 
non-constant globals doesn't work very well (the type can change 
unpredictably).
Two fixes: put your code in a function, or declare a and Nb as const.
This is directly 
related: http://docs.julialang.org/en/release-0.4/manual/performance-tips/
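
A minimal sketch of both fixes, assuming Julia 0.4 semantics and a fresh session:

# Fix 1: move the comprehension into a function, so a and Nb are local and type-stable.
# `make_p` is just an illustrative name.
make_p(a, Nb) = [ a/Nb for i in 1:Nb ]
typeof(make_p(0.8, 100))    # Array{Float64,1}

# Fix 2: declare the globals as const, so their types are known to the compiler.
const a = 0.8
const Nb = 100
p = [ a/Nb for i in 1:Nb ]
typeof(p)                   # Array{Float64,1}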

Best,
Simon

On Monday, 26 October 2015 at 11:35:06 UTC+1, Ferran Mazzanti wrote:
>
> Hi folks, 
>
> I try to create an array of constant float64 values. Something I did was:
>
> a = 0.8;
> Nb = 100;
> p = zeros(Nb)
> for i in 1:Nb
> p[i] = a/Nb
>
end
>

> and typeof(p) returns
> Array{Float64,1}
> so far, so good :)
>
> But now I do the following instead to shorten things:
>
> a = 0.8;
> Nb = 100;
> p = [ a/Nb for i in 1:Nb]
>
> and typeof(p) returns
> Array{Any,1}
>
> which is *big* pain since I obviously wanted to create an array of floats. 
> So the questions are:
> a) Is this behaviour normal/ expected?
> b) If so, why is it? What is the logic of that? Isn't it true that the 
> normal behaviour, in the statistical sense of what *most* people would 
> expect, is to 
> get floats right away? Or am I missing something?
>
> I know I can always write 
> p = float64( [ a/Nb for i in 1:Nb ] )
> but anyway...
>
> Cheers,
>
> Ferran.
>
>
> Array{Float64,1}
>
>
>

[julia-users] Re: PyPlot histogram does not work (for me at least) in 0.4.0 (?)

2015-10-26 Thread Kristoffer Carlsson
Read here: https://github.com/stevengj/PyCall.jl#usage

More specifically, this section:

"The biggest diffence from Python is that object attributes/members are 
accessed with o[:attribute]rather than o.attribute, and you use get(o, key) 
rather 
than o[key]. (This is because Julia does not permit overloading the . operator 
yet.) See also the section on PyObject below, as well as the pywrap function 
to create anonymous modules that simulate . access (this is what 
@pyimportdoes). 
For example, using Biopython  we can do:

@pyimport Bio.Seq as s
@pyimport Bio.Alphabet as a
my_dna = s.Seq("AGTACACTGGT", a.generic_dna)
my_dna[:find]("ACT")

whereas in Python the last step would have been my_dna.find("ACT")"
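
Applied to the histogram example from this thread (a minimal sketch, assuming a working PyPlot install), the same rule reads:

using PyPlot

x = randn(1000)
nbins = 50
# PyPlot.plt is a wrapped Python module, so its members are indexed with Symbols.
h = PyPlot.plt[:hist](x, nbins)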

On Monday, October 26, 2015 at 11:38:41 AM UTC+1, Ferran Mazzanti wrote:
>
> That worked, thanks :) 
>
> But I cannot understand this syntax... where can I find documentation 
> about how to do that? Just to avoid asking such questions again...
>
> Thanks again.
>
> On Monday, October 26, 2015 at 11:31:59 AM UTC+1, Kristoffer Carlsson 
> wrote:
>>
>> Change last line to:
>>
>> h = PyPlot.plt[:hist](x,nbins)
>>
>>
>>
>> On Monday, October 26, 2015 at 11:28:35 AM UTC+1, Ferran Mazzanti wrote:
>>>
>>> Hi folks,
>>>
>>> using Linux Mint 17.1 here. I upgraded to julia 0.4.0 and now this 
>>> simple code, taken from the web and tested on previous versions,
>>>
>>> using PyPlot
>>>
>>> x = randn(1000) # Values
>>> nbins = 50 # Number of bins
>>>
>>> fig = figure("pyplot_histogram",figsize=(6,6)) # Not strictly required
>>> ax = axes() # Not strictly required
>>> h = PyPlot.plt.hist(x,nbins) # Histogram, PyPlot.plt required to 
>>> differentiate with conflicting hist command
>>>
>>> Produces the following output
>>>
>>> LoadError: type PyObject has no field hist
>>> while loading In[133], in expression starting on line 6
>>>
>>>  in getindex at /home/mazzanti/.julia/v0.4/PyCall/src/PyCall.jl:240
>>>  in pysequence_query at 
>>> /home/mazzanti/.julia/v0.4/PyCall/src/conversions.jl:781
>>>  [inlined code] from 
>>> /home/mazzanti/.julia/v0.4/PyCall/src/conversions.jl:797
>>>  in pytype_query at /home/mazzanti/.julia/v0.4/PyCall/src/conversions.jl:826
>>>  in convert at /home/mazzanti/.julia/v0.4/PyCall/src/conversions.jl:846
>>>  in pycall at /home/mazzanti/.julia/v0.4/PyCall/src/PyCall.jl:399
>>>  in call at /home/mazzanti/.julia/v0.4/PyCall/src/PyCall.jl:407
>>>  in close_queued_figs at /home/mazzanti/.julia/v0.4/PyPlot/src/PyPlot.jl:401
>>>
>>> Any hint on that? Am I doing something wrong? If so, can anybody help on 
>>> how to do histograms in Julia 0.4.0?
>>>
>>> Thanks,
>>>
>>> Ferran. 
>>>
>>>
>>>

[julia-users] Re: Please help me understand comprehensions for creating an array

2015-10-26 Thread Kristoffer Carlsson
You can also rewrite it as p = Float64[ a/Nb for i in 1:Nb]

On Monday, October 26, 2015 at 12:08:33 PM UTC+1, Simon Danisch wrote:
>
> This is not the desired but the expected behavior, since type inference of 
> non-constant globals doesn't work very well (the type can change 
> unpredictably).
> Two fixes: put your code in a function, or declare a and Nb as const.
> This is directly related: 
> http://docs.julialang.org/en/release-0.4/manual/performance-tips/
>
> Best,
> Simon
>
> On Monday, 26 October 2015 at 11:35:06 UTC+1, Ferran Mazzanti wrote:
>>
>> Hi folks, 
>>
>> I try to create an array of constant float64 values. Something I did was:
>>
>> a = 0.8;
>> Nb = 100;
>> p = zeros(Nb)
>> for i in 1:Nb
>> p[i] = a/Nb
>>
> end
>>
>
>> and typeof(p) returns
>> Array{Float64,1}
>> so far, so good :)
>>
>> But now I do the following instead to shorten things:
>>
>> a = 0.8;
>> Nb = 100;
>> p = [ a/Nb for i in 1:Nb]
>>
>> and typeof(p) returns
>> Array{Any,1}
>>
>> which is *big* pain since I obviously wanted to create an array of 
>> floats. So the questions are:
>> a) Is this behaviour normal/ expected?
>> b) If so, why is it? What is the logic of that? Isn't it true that the 
>> normal behaviour, in the statistical sense of what *most* people would 
>> expect, is to 
>> get floats right away? Or am I missing something?
>>
>> I know I can always write 
>> p = float64( [ a/Nb for i in 1:Nb ] )
>> but anyway...
>>
>> Cheers,
>>
>> Ferran.
>>
>>
>> Array{Float64,1}
>>
>>
>>

Re: [julia-users] functions on iterable types

2015-10-26 Thread DNF
I must admit I don't understand very well what the snippet is supposed to do. 
But, for example, it doesn't work for :sum or :maximum or anything like 
that.

On Monday, October 26, 2015 at 9:23:35 AM UTC+1, harven wrote:
>
>
>
> On Monday, 26 October 2015 at 07:15:56 UTC+1, DNF wrote:
>>
>> Hmm. Trying to answer myself: I guess my suggested solution would miss 
>> functions that don't specify the type, but just rely on the iterable 
>> behaviour.
>>
>>
> It is easy to get the list of thunks that work on iterables.
>
>  type Iterable x::Float64 end
> Base.start(x::Iterable) = 0
> Base.next(x::Iterable) = 0
> Base.done(x::Iterable) = true
> show(filter(s -> try
>eval(s)(Iterable(0)) ; true
>  catch
>false
>  end,
> names(Base)))
>
> I am not sure how to provide default arguments for functions of several 
> variables though.
>
> Here is the list, excluding macros.
>
>
> :apropos,:broadcast!_function,:broadcast_function,:conj,:ctranspose,:cycle,:deepcopy,
>
> :display,:done,:dump,:eltype,:enumerate,:esc,:expand,:fetch,:fieldnames,:fill,:finalize,
>
> :hash,:hcat,:htol,:identity,:info,:isbits,:isgeneric,:isimmutable,:isleaftype,:ltoh,:macroexpand,
>
> :methods,:names,:next,:object_id,:permutations,:pointer_from_objref,:print,:println,:promote,
>
> :promote_type,:push!,:redisplay,:repeated,:repr,:rmprocs,:show,:showall,:showcompact,
>
> :sizeof,:start,:string,:summary,:symbol,:symdiff,:transpose,:typejoin,:vcat,:warn,:xdump,:zip
>


[julia-users] Re: Please help me understand comprehensions for creating an array

2015-10-26 Thread DNF
If you want constant values you should do 

p = zeros(Nb) + a/Nb
or 
p = ones(Nb) * a/Nb


On Monday, October 26, 2015 at 11:35:06 AM UTC+1, Ferran Mazzanti wrote:
>
> Hi folks, 
>
> I try to create an array of constant float64 values. Something I did was:
>
> a = 0.8;
> Nb = 100;
> p = zeros(Nb)
> for i in 1:Nb
> p[i] = a/Nb
> end
>
> and typeof(p) returns
> Array{Float64,1}
> so far, so good :)
>
> But now I do the following instead to shorten things:
>
> a = 0.8;
> Nb = 100;
> p = [ a/Nb for i in 1:Nb]
>
> and typeof(p) returns
> Array{Any,1}
>
> which is *big* pain since I obviously wanted to create an array of floats. 
> So the questions are:
> a) Is this behaviour normal/ expected?
> b) If so, why is it? What is the logic of that? Isn't it true that the 
> normal behaviour, in the statistical sense of what *most* people would 
> expect, is to 
> get floats right away? Or am I missing something?
>
> I know I can always write 
> p = float64( [ a/Nb for i in 1:Nb ] )
> but anyway...
>
> Cheers,
>
> Ferran.
>
>
> Array{Float64,1}
>
>
>

[julia-users] Re: Please help me understand comprehensions for creating an array

2015-10-26 Thread DNF
I learn great new stuff all the time with this language:

You should actually do

p = fill(a/Nb, Nb)


On Monday, October 26, 2015 at 12:59:03 PM UTC+1, DNF wrote:
>
> If you want constant values you should do 
>
> p = zeros(Nb) + a/Nb
> or 
> p = ones(Nb) * a/Nb
>


[julia-users] Re: Hausdorff distance

2015-10-26 Thread Andrew McLean
The naïve algorithm is O(N^2); I think the most efficient algorithms come 
close to O(N). There is quite a lot of literature on efficient algorithms.

- Andrew
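
For reference, here is a runnable version of the naive O(N^2) computation quoted below (a sketch only; A and B are just random example point sets):

using Distances

A = rand(3, 100)                  # columns are the points of the first set
B = rand(3, 120)                  # columns are the points of the second set
D = pairwise(Euclidean(), A, B)   # all pairwise distances
daB = maximum(minimum(D, 2))      # directed distance from A to B
dbA = maximum(minimum(D, 1))      # directed distance from B to A
hausdorff = max(daB, dbA)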

On Friday, 23 October 2015 23:28:46 UTC+1, Júlio Hoffimann wrote:

> Hi,
>
> I want to make the Hausdorff distance (
> https://en.wikipedia.org/wiki/Hausdorff_distance) available in Julia, is 
> the Distances.jl package a good fit or I should create a separate package 
> just for this distance between point sets?
>
> I can think of a very simple (naive) implementation:
>
> using Distances
>
> A, B # matrices whose columns represent the points in the point set
> D = pairwise(Euclidean(), A, B)
> daB = maximum(minimum(D,2))
> dbA = maximum(minimum(D,1))
> result = max(daB, dbA)
>
> -Júlio
>


[julia-users] Re: Please help me understand comprehensions for creating an array

2015-10-26 Thread Ferran Mazzanti
Interesting answers :)

But the problem is not filling an array with constant values, but filling 
an array with a comprehension :)
I can change the

p = [ a/Nb for i in 1:Nb]

line with some other thing and still get the same answer. For instance:
a = 0.8;
Nb = 100;
p = [ i*1.0/Nb for i in 1:Nb]
typeof(p)

still produces Array{Any,1}



On Monday, October 26, 2015 at 11:35:06 AM UTC+1, Ferran Mazzanti wrote:
>
> Hi folks, 
>
> I try to create an array of constant float64 values. Something I did was:
>
> a = 0.8;
> Nb = 100;
> p = zeros(Nb)
> for i in 1:Nb
> p[i] = a/Nb
> end
>
> and typeof(p) returns
> Array{Float64,1}
> so far, so good :)
>
> But now I do the following instead to shorten things:
>
> a = 0.8;
> Nb = 100;
> p = [ a/Nb for i in 1:Nb]
>
> and typeof(p) returns
> Array{Any,1}
>
> which is *big* pain since I obviously wanted to create an array of floats. 
> So the questions are:
> a) Is this behaviour normal/ expected?
> b) If so, why is it? What is the logic of that? Isn't it true that the 
> normal behaviour, in the statistical sense of what *most* people would 
> expect, is to 
> get floats right away? Or am I missing something?
>
> I know I can always write 
> p = float64( [ a/Nb for i in 1:Nb ] )
> but anyway...
>
> Cheers,
>
> Ferran.
>
>
> Array{Float64,1}
>
>
>

Re: [julia-users] A grateful scientist

2015-10-26 Thread Stefan Karpinski
Thank you for writing this – it's a lovely thing to wake up to. I'm sure
all the others who make Julia happen and read this list daily feel the same.

On Sunday, October 25, 2015, Yakir Gagnon <12.ya...@gmail.com> wrote:

> Hi Julia community and developers,
> I'm a postdoc researching color vision, biological optics, polarization
> vision, and camouflage. I've always used Matlab in my research and made the
> switch to Julia about two years ago. I just wanted to report, for what it's
> worth, that as a researcher I think Julia is the best. I promote it
> everywhere I think it's appropriate, and use it almost exclusively.
> Just wanted to say a big fat thank you to all the developers and community
> for creating this magnificence.
>
> THANK YOU!
>


Re: [julia-users] Please help me understand comprehensions for creating an array

2015-10-26 Thread Stefan Karpinski
This will change on master soon but for now, either do this in a function
– with function arguments or local variables it's not a problem – or
specify an element type.

On Monday, October 26, 2015, Ferran Mazzanti 
wrote:

> Interesting answers :)
>
> But the problem is not filling an array with constant values, but filling
> an array with a comprehension :)
> I can change the
>
> p = [ a/Nb for i in 1:Nb]
>
> line with some other thing and still get the same answer. For instance:
> a = 0.8;
> Nb = 100;
> p = [ i*1.0/Nb for i in 1:Nb]
> typeof(p)
>
> still produces Array{Any,1}
>
>
>
> On Monday, October 26, 2015 at 11:35:06 AM UTC+1, Ferran Mazzanti wrote:
>>
>> Hi folks,
>>
>> I try to create an array of constant float64 values. Something I did was:
>>
>> a = 0.8;
>> Nb = 100;
>> p = zeros(Nb)
>> for i in 1:Nb
>> p[i] = a/Nb
>> end
>>
>> and typeof(p) returns
>> Array{Float64,1}
>> so far, so good :)
>>
>> But now I do the following instead to shorten things:
>>
>> a = 0.8;
>> Nb = 100;
>> p = [ a/Nb for i in 1:Nb]
>>
>> and typeof(p) returns
>> Array{Any,1}
>>
>> which is *big* pain since I obviously wanted to create an array of
>> floats. So the questions are:
>> a) Is this behaviour normal/ expected?
>> b) If so, why is it? What is the logic of that? Isn't it true that the
>> normal behaviour, in the statistical sense of what *most* people would
>> expect, is to
>> get floats right away? Or am I missing something?
>>
>> I know I can always write
>> p = float64( [ a/Nb for i in 1:Nb ] )
>> but anyway...
>>
>> Cheers,
>>
>> Ferran.
>>
>>
>> Array{Float64,1}
>>
>>
>>


[julia-users] Re: Please help me understand comprehensions for creating an array

2015-10-26 Thread DNF
OK. The phrasing of the question indicated to me that you were trying to 
create an array of Float64 values, and that the comprehension was just the 
tool to accomplish that.

In that case, Kristoffer Carlsson gave a good answer:
p = Float64[ a/Nb for i in 1:Nb]



On Monday, October 26, 2015 at 1:14:04 PM UTC+1, Ferran Mazzanti wrote:
>
> Interesting answers :)
>
> But the problem is not filling an array with constant values, but filling 
> an array with a comprehension :)
> I can change the
>
> p = [ a/Nb for i in 1:Nb]
>
> line with some other thing and still get the same answer. For instance:
> a = 0.8;
> Nb = 100;
> p = [ i*1.0/Nb for i in 1:Nb]
> typeof(p)
>
> still produces Array{Any,1}
>
>
>
> On Monday, October 26, 2015 at 11:35:06 AM UTC+1, Ferran Mazzanti wrote:
>>
>> Hi folks, 
>>
>> I try to create an array of constant float64 values. Something I did was:
>>
>> a = 0.8;
>> Nb = 100;
>> p = zeros(Nb)
>> for i in 1:Nb
>> p[i] = a/Nb
>> end
>>
>> and typeof(p) returns
>> Array{Float64,1}
>> so far, so good :)
>>
>> But now I do the following instead to shorten things:
>>
>> a = 0.8;
>> Nb = 100;
>> p = [ a/Nb for i in 1:Nb]
>>
>> and typeof(p) returns
>> Array{Any,1}
>>
>> which is *big* pain since I obviously wanted to create an array of 
>> floats. So the questions are:
>> a) Is this behaviour normal/ expected?
>> b) If so, why is it? What is the logic of that? Isn't it true that the 
>> normal behaviour, in the statistical sense of what *most* people would 
>> expect, is to 
>> get floats right away? Or am I missing something?
>>
>> I know I can always write 
>> p = float64( [ a/Nb for i in 1:Nb ] )
>> but anyway...
>>
>> Cheers,
>>
>> Ferran.
>>
>>
>> Array{Float64,1}
>>
>>
>>

Re: [julia-users] A grateful scientist

2015-10-26 Thread Jon Norberg
Utterly seconding that. Amazing community and beautiful language. 

Thanks all!

Jon Norberg 

Re: [julia-users] A grateful scientist

2015-10-26 Thread Scott T
I'll add my voice to say thanks as well! I made the switch from Python to 
Julia for some astrophysical models in my PhD, because I suck at C and 
Fortran and didn't like the messiness of dealing with something like 
Cython. Julia has been great for this and has introduced me gently to the 
usefulness of types and how to program for speed without throwing me in the 
deep end. I look forward to the day I can introduce people to it without 
having to preface the introduction with "You probably won't care about what 
I'm saying until the language and packages are more stable, but..."

Scott T

On Monday, 26 October 2015 12:47:47 UTC, Jon Norberg wrote:
>
> Utterly seconding that. Amazing community and beautiful language. 
>
> Thanks all!
>
> Jon Norberg 
>
>

Re: [julia-users] A grateful scientist

2015-10-26 Thread Scott T
(Oh and Yakir, your work sounds like one of the coolest interdisciplinary 
mixes I could possibly think of.)

On Monday, 26 October 2015 13:11:47 UTC, Scott T wrote:
>
> I'll add my voice to say thanks as well! I made the switch from Python to 
> Julia for some astrophysical models in my PhD, because I suck at C and 
> Fortran and didn't like the messiness of dealing with something like 
> Cython. Julia has been great for this and has introduced me gently to the 
> usefulness of types and how to program for speed without throwing me in the 
> deep end. I look forward to the day I can introduce people to it without 
> having to preface the introduction with "You probably won't care about what 
> I'm saying until the language and packages are more stable, but..."
>
> Scott T
>
> On Monday, 26 October 2015 12:47:47 UTC, Jon Norberg wrote:
>>
>> Utterly seconding that. Amazing community and beautiful language. 
>>
>> Thanks all!
>>
>> Jon Norberg 
>>
>>

Re: [julia-users] functions on iterable types

2015-10-26 Thread harven


On Monday, 26 October 2015 at 12:52:04 UTC+1, DNF wrote:
>
> I must admit I don't understand very well what the snippet is supposed to do. 
> But, for example, it doesn't work for :sum or :maximum or anything like 
> that.
>
A thunk is a function that takes a single argument.

Cheers, 


[julia-users] CUDArt.CudaArray - fill! not working

2015-10-26 Thread Matthew Pearce
I'm not having much luck filling a CUDArt.CudaArray matrix with a value.

julia> C = CUDArt.CudaArray(Float64, (10,10))
CUDArt.CudaArray{Float64,2}(CUDArt.CudaPtr{Float64}(Ptr{Float64} @
0x000b034a0e00),(10,10),0)

julia> fill!(C, 2.0)
ERROR: KeyError: (0,"fill_contiguous",Float64) not found
 [inlined code] from essentials.jl:58
 in getindex at dict.jl:719
 in fill! at /home/mcp50/.julia/v0.5/CUDArt/src/arrays.jl:158

The fill! code works when matrix C is created by copying data to the gpu. 
This suggested to me the problem was one of memory allocation. However, 
I've tried variations on this which haven't worked, such as taking some of 
the source code:

julia> function NewCudaArray(T::Type, dims::Dims)
   n = prod(dims)
   p = CUDArt.malloc(T, n)
   CudaArray{T,length(dims)}(p, dims, device())
   end
NewCudaArray (generic function with 1 method)

julia> C = NewCudaArray(Float64, (10,10))
CUDArt.CudaArray{Float64,2}(CUDArt.CudaPtr{Float64}(Ptr{Float64} @
0x000b034a1200),(10,10),0)

julia> fill!(C, 2.0)
ERROR: KeyError: (0,"fill_contiguous",Float64) not found
 [inlined code] from essentials.jl:58
 in getindex at dict.jl:719
 in fill! at /home/mcp50/.julia/v0.5/CUDArt/src/arrays.jl:158

Copying things across unnecessarily sounds slow, so thoughts appreciated.


[julia-users] Re: CUDArt.CudaArray - fill! not working

2015-10-26 Thread Kristoffer Carlsson
I believe you need to use an initialized device. 
See https://github.com/JuliaGPU/CUDArt.jl#gpu-initialization

For example:

devices(dev->true) do devlist
 C = CUDArt.CudaArray(Float64, (10,10))
 fill!(C, 2.0)
 println(to_host(C))
end



On Monday, October 26, 2015 at 2:30:40 PM UTC+1, Matthew Pearce wrote:
>
> I'm not having much luck filling a CUDArt.CudaArray matrix with a value.
>
> julia> C = CUDArt.CudaArray(Float64, (10,10))
> CUDArt.CudaArray{Float64,2}(CUDArt.CudaPtr{Float64}(Ptr{Float64} @
> 0x000b034a0e00),(10,10),0)
>
> julia> fill!(C, 2.0)
> ERROR: KeyError: (0,"fill_contiguous",Float64) not found
>  [inlined code] from essentials.jl:58
>  in getindex at dict.jl:719
>  in fill! at /home/mcp50/.julia/v0.5/CUDArt/src/arrays.jl:158
>
> The fill! code works when matrix C is created by copying data to the gpu. 
> This suggested to me the problem was one of memory allocation. However, 
> I've tried variations on this which haven't worked, such as taking some of 
> the source code:
>
> julia> function NewCudaArray(T::Type, dims::Dims)
>n = prod(dims)
>p = CUDArt.malloc(T, n)
>CudaArray{T,length(dims)}(p, dims, device())
>end
> NewCudaArray (generic function with 1 method)
>
> julia> C = NewCudaArray(Float64, (10,10))
> CUDArt.CudaArray{Float64,2}(CUDArt.CudaPtr{Float64}(Ptr{Float64} @
> 0x000b034a1200),(10,10),0)
>
> julia> fill!(C, 2.0)
> ERROR: KeyError: (0,"fill_contiguous",Float64) not found
>  [inlined code] from essentials.jl:58
>  in getindex at dict.jl:719
>  in fill! at /home/mcp50/.julia/v0.5/CUDArt/src/arrays.jl:158
>
> Copying things across unnecessarily sounds slow, so thoughts appreciated.
>


[julia-users] Re: CUDArt.CudaArray - fill! not working

2015-10-26 Thread Kristoffer Carlsson
Or if you want to manually handle the devices (device 0 in this case):

CUDArt.init([0])
C = CUDArt.CudaArray(Float64, (10,10))
fill!(C, 2.0)
println(to_host(C))
CUDArt.close([0])



On Monday, October 26, 2015 at 2:44:20 PM UTC+1, Kristoffer Carlsson wrote:
>
> I believe you need to use an initialized device. See 
> https://github.com/JuliaGPU/CUDArt.jl#gpu-initialization
>
> For example:
>
> devices(dev->true) do devlist
>  C = CUDArt.CudaArray(Float64, (10,10))
>  fill!(C, 2.0)
>  println(to_host(C))
> end
>
>
>
> On Monday, October 26, 2015 at 2:30:40 PM UTC+1, Matthew Pearce wrote:
>>
>> I'm not having much luck filling a CUDArt.CudaArray matrix with a value.
>>
>> julia> C = CUDArt.CudaArray(Float64, (10,10))
>> CUDArt.CudaArray{Float64,2}(CUDArt.CudaPtr{Float64}(Ptr{Float64} @
>> 0x000b034a0e00),(10,10),0)
>>
>> julia> fill!(C, 2.0)
>> ERROR: KeyError: (0,"fill_contiguous",Float64) not found
>>  [inlined code] from essentials.jl:58
>>  in getindex at dict.jl:719
>>  in fill! at /home/mcp50/.julia/v0.5/CUDArt/src/arrays.jl:158
>>
>> The fill! code works when matrix C is created by copying data to the gpu. 
>> This suggested to me the problem was one of memory allocation. However, 
>> I've tried variations on this which haven't worked, such as taking some of 
>> the source code:
>>
>> julia> function NewCudaArray(T::Type, dims::Dims)
>>n = prod(dims)
>>p = CUDArt.malloc(T, n)
>>CudaArray{T,length(dims)}(p, dims, device())
>>end
>> NewCudaArray (generic function with 1 method)
>>
>> julia> C = NewCudaArray(Float64, (10,10))
>> CUDArt.CudaArray{Float64,2}(CUDArt.CudaPtr{Float64}(Ptr{Float64} @
>> 0x000b034a1200),(10,10),0)
>>
>> julia> fill!(C, 2.0)
>> ERROR: KeyError: (0,"fill_contiguous",Float64) not found
>>  [inlined code] from essentials.jl:58
>>  in getindex at dict.jl:719
>>  in fill! at /home/mcp50/.julia/v0.5/CUDArt/src/arrays.jl:158
>>
>> Copying things across unnecessarily sounds slow, so thoughts appreciated.
>>
>

Re: [julia-users] functions on iterable types

2015-10-26 Thread harven


On Monday, 26 October 2015 at 12:52:04 UTC+1, DNF wrote:
>
> I must admit I don't understand very well what the snippet is supposed to do. 
> But, for example, it doesn't work for :sum or :maximum or anything like 
> that.
>
>
This comes from the fact that sum and maximum don't work on empty arrays, 
which are of course perfectly valid iterables. The code I gave tests all 
functions in Base on an object of a type for which start/next/done is 
implemented. Collecting the values of that object just gives an empty array 
of type Any. 
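
A small illustration of that point, reusing the Iterable type from earlier in the thread:

xs = collect(Iterable(0.0))   # 0-element Array{Any,1}: done is true immediately
sum(xs)                       # throws, since an empty Any collection has no starting value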


[julia-users] Re: parallel and PyCall

2015-10-26 Thread Matthew Pearce
I thought I had an idea about this, but I was wrong:

```julia

julia> @everywhere using PyCall

julia> @everywhere @pyimport pylab

julia> remotecall_fetch(pylab.cumsum, 5, collect(1:10))
ERROR: cannot serialize a pointer
 [inlined code] from error.jl:21
 in serialize at serialize.jl:420
 [inlined code] from dict.jl:372
 in serialize at serialize.jl:428
 in serialize at serialize.jl:310
 in serialize at serialize.jl:420 (repeats 2 times)
 in serialize at serialize.jl:302
 in serialize at serialize.jl:420
 [inlined code] from dict.jl:372
 in serialize at serialize.jl:428
 in serialize at serialize.jl:310
 in serialize at serialize.jl:420 (repeats 2 times)
 in serialize at serialize.jl:302
 in serialize at serialize.jl:420
 [inlined code] from dict.jl:372
 in send_msg_ at multi.jl:222
 [inlined code] from multi.jl:177
 in remotecall_fetch at multi.jl:728
 [inlined code] from multi.jl:368
 in remotecall_fetch at multi.jl:734

julia> pylab.cumsum(collect(1:10))
10-element Array{Int64,1}:
  1
  3
  6
 10
 15
 21
 28
 36
 45
 55

```
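
One possible workaround sketch (an untested assumption, not a confirmed fix): define a named wrapper on every worker, so that only the function name and the plain array are serialized rather than the PyObject itself.

@everywhere using PyCall
@everywhere @pyimport pylab
# py_cumsum is a hypothetical wrapper name; it calls the Python function locally on the worker.
@everywhere py_cumsum(x) = pylab.cumsum(x)
# Worker id 5 and the argument order follow the original post.
remotecall_fetch(py_cumsum, 5, collect(1:10))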


Re: [julia-users] CUDArt.CudaArray - fill! not working

2015-10-26 Thread Tim Holy
When your problem is with a particular package, it's often best to file an 
issue with the package.

In this case, I bet your CUDA/CUDArt installation is broken, since for me this 
works fine:

julia> devices(dev->true) do devlist
   device(0)
   C = CudaArray(Float64, (10,10))
   fill!(C, 2.0)
   nothing
   end

fill! depends on a hand-written kernel `fill_contiguous`, and if your system is 
busted then this function probably didn't get built. What does 
Pkg.build("CUDArt") say? 

--Tim

On Monday, October 26, 2015 06:30:40 AM Matthew Pearce wrote:
> I'm not having much luck filling a CUDArt.CudaArray matrix with a value.
> 
> julia> C = CUDArt.CudaArray(Float64, (10,10))
> CUDArt.CudaArray{Float64,2}(CUDArt.CudaPtr{Float64}(Ptr{Float64} @
> 0x000b034a0e00),(10,10),0)
> 
> julia> fill!(C, 2.0)
> ERROR: KeyError: (0,"fill_contiguous",Float64) not found
>  [inlined code] from essentials.jl:58
>  in getindex at dict.jl:719
>  in fill! at /home/mcp50/.julia/v0.5/CUDArt/src/arrays.jl:158
> 
> The fill! code works when matrix C is created by copying data to the gpu.
> This suggested to me the problem was one of memory allocation. However,
> I've tried variations on this which haven't worked, such as taking some of
> the source code:
> 
> julia> function NewCudaArray(T::Type, dims::Dims)
>n = prod(dims)
>p = CUDArt.malloc(T, n)
>CudaArray{T,length(dims)}(p, dims, device())
>end
> NewCudaArray (generic function with 1 method)
> 
> julia> C = NewCudaArray(Float64, (10,10))
> CUDArt.CudaArray{Float64,2}(CUDArt.CudaPtr{Float64}(Ptr{Float64} @
> 0x000b034a1200),(10,10),0)
> 
> julia> fill!(C, 2.0)
> ERROR: KeyError: (0,"fill_contiguous",Float64) not found
>  [inlined code] from essentials.jl:58
>  in getindex at dict.jl:719
>  in fill! at /home/mcp50/.julia/v0.5/CUDArt/src/arrays.jl:158
> 
> Copying things across unnecessarily sounds slow, so thoughts appreciated.



Re: [julia-users] functions on iterable types

2015-10-26 Thread Stefan Karpinski
On Mon, Oct 26, 2015 at 9:28 AM, harven  wrote:

>
> A thunk is a function that takes a single argument.
>

A thunk is a zero-argument function in functional programming – i.e. a
piece of deferred computation.
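
A tiny illustration of the terminology:

thunk = () -> sum(1:10^6)   # nothing is computed yet; the work is merely wrapped up
value = thunk()             # the deferred computation runs only when the thunk is called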


[julia-users] Re: CUDArt.CudaArray - fill! not working

2015-10-26 Thread Matthew Pearce
Thanks Kristoffer, indeed that works. 

I'm slightly baffled by the initialisation process. In my session I had 
already used fill! successfully on a matrix already on the device, so I 
presumed everything had been initialised. Clearly that's not enough.




[julia-users] Re: CUDArt.CudaArray - fill! not working

2015-10-26 Thread Matthew Pearce
Thanks also Tim.



[julia-users] Performance of SubStrings Sets

2015-10-26 Thread Matt
I'm writing a package that computes various string distances. Some 
distances require comparing the sets of n-grams (contiguous sequences of n 
characters) of the two strings.
For now, the implementation of these distances is quite slow. I've written 
the function for Jaccard in a gist. The function is 10x slower than the R 
stringdist package (written in C and based on binary trees rather than hash 
tables). Profiling shows that most of the time comes from the creation of 
the Set of q-grams. Can you think of a way to improve its performance?
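
For concreteness, a minimal sketch of the kind of computation involved (my own simplification, not the gist itself; it assumes ASCII strings and q no larger than the string length):

# Build the set of q-grams of a string as SubStrings (no character copying).
# `qgrams` and `jaccard` are just illustrative names.
qgrams(s::AbstractString, q::Int) =
    Set([ SubString(s, i, i + q - 1) for i in 1:length(s) - q + 1 ])

# Jaccard distance between the q-gram sets of two strings.
function jaccard(s1::AbstractString, s2::AbstractString, q::Int = 2)
    a, b = qgrams(s1, q), qgrams(s2, q)
    1 - length(intersect(a, b)) / length(union(a, b))
end

jaccard("martha", "marhta")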



[julia-users] Computing a matrix with SharedArrays longer than without parallelism

2015-10-26 Thread amiksvi
Dear all,

This topic follows this one: 
https://groups.google.com/forum/#!topic/julia-users/S86qxRkJ0ao which was 
not very easy to read, so I take the liberty of reformulating my problem 
in simpler terms.
I want to compute a matrix row by row, and since these computations can be 
done independently, on the advice of Tim Holy, I use SharedArrays, where 
each worker is given a range of rows on which to perform a given operation.
The code is:

@everywhere begin
    m = Int(1e3)
    n = Int(5e3)
    mat_b = rand(m, n)
    function compute_row!(smat_a, mat_b, irange, n)
        for i in irange
            smat_a[i,i:n] = mean(mat_b[:,i] .* mat_b[:,i:n], 1)
        end
    end
end

smat_a = SharedArray(Float64, (n,n))

tic()
@sync begin
    for p in procs(smat_a)
        @async begin
            irange = p-1:length(procs(smat_a)):n
            remotecall_wait(p, compute_row!, smat_a, mat_b, irange, n)
        end
    end
end
toc()

And the problem is that it is always slower than the sequential version... 
I have no idea why this does not work.
Any suggestion appreciated, thanks a lot!


Re: [julia-users] Re: Hausdorff distance

2015-10-26 Thread Júlio Hoffimann
Hi Andrew,

Could you please point to a good paper? This naive implementation I showed
is working surprisingly well for me though.

-Júlio


[julia-users] Re: CUDArt.CudaArray - fill! not working

2015-10-26 Thread Matthew Pearce
Incidentally this second method doesn't work.

On Monday, October 26, 2015 at 1:46:03 PM UTC, Kristoffer Carlsson wrote:
>
> Or if you want to manually handle the devices (device 0 in this case):
>
> CUDArt.init([0])
> C = CUDArt.CudaArray(Float64, (10,10))
> fill!(C, 2.0)
> println(to_host(C))
> CUDArt.close([0])
>
>
julia> using CUDArt

julia> CUDArt.init([0])
ERROR: UndefVarError: init not defined

julia> C = CUDArt.CudaArray(Float64, (10,10))
CUDArt.CudaArray{Float64,2}(CUDArt.CudaPtr{Float64}(Ptr{Float64} @
0x000b034a0c00),(10,10),0)

julia> fill!(C, 2.0)
ERROR: KeyError: (0,"fill_contiguous",Float64) not found
 [inlined code] from essentials.jl:58
 in getindex at dict.jl:719
 in fill! at /home/mcp50/.julia/v0.5/CUDArt/src/arrays.jl:158

julia> println(to_host(C))
[-131072.0011474827 8.290725244651441e-76 -131072.66519424706 -
131072.79735252648 7.114269581235735e81 6.805647992284872e38 
6.805647850367426e38 -2.376364460420424e-212 7.124662174923278e38 
5.370009195891395e-152
 [...]
 -131072.26610348016 9.67251204874471e-76 8.290722196472935e-76 
3.022659974162853e-77 1.2624824795916418e-190 6.1835059447865614e-229 
5.075888440444962e-116 7.114269581235735e81 2.190889012369281e-154 -
6.805647751035016e38]

julia> CUDArt.close([0])
ERROR: MethodError: `close` has no method matching close(::Array{Int64,1})
 



[julia-users] Re: CUDArt.CudaArray - fill! not working

2015-10-26 Thread Kristoffer Carlsson
julia> CUDArt.init([0])
ERROR: UndefVarError: init not defined

is surprising.

Try reinstalling the package.

On Monday, October 26, 2015 at 3:39:53 PM UTC+1, Matthew Pearce wrote:
>
> Incidentally this second method doesn't work.
>
> On Monday, October 26, 2015 at 1:46:03 PM UTC, Kristoffer Carlsson wrote:
>>
>> Or if you want to manually handle the devices (device 0 in this case):
>>
>> CUDArt.init([0])
>> C = CUDArt.CudaArray(Float64, (10,10))
>> fill!(C, 2.0)
>> println(to_host(C))
>> CUDArt.close([0])
>>
>>
> julia> using CUDArt
>
> julia> CUDArt.init([0])
> ERROR: UndefVarError: init not defined
>
> julia> C = CUDArt.CudaArray(Float64, (10,10))
> CUDArt.CudaArray{Float64,2}(CUDArt.CudaPtr{Float64}(Ptr{Float64} @
> 0x000b034a0c00),(10,10),0)
>
> julia> fill!(C, 2.0)
> ERROR: KeyError: (0,"fill_contiguous",Float64) not found
>  [inlined code] from essentials.jl:58
>  in getindex at dict.jl:719
>  in fill! at /home/mcp50/.julia/v0.5/CUDArt/src/arrays.jl:158
>
> julia> println(to_host(C))
> [-131072.0011474827 8.290725244651441e-76 -131072.66519424706 -
> 131072.79735252648 7.114269581235735e81 6.805647992284872e38 
> 6.805647850367426e38 -2.376364460420424e-212 7.124662174923278e38 
> 5.370009195891395e-152
>  [...]
>  -131072.26610348016 9.67251204874471e-76 8.290722196472935e-76 
> 3.022659974162853e-77 1.2624824795916418e-190 6.1835059447865614e-229 
> 5.075888440444962e-116 7.114269581235735e81 2.190889012369281e-154 -
> 6.805647751035016e38]
>
> julia> CUDArt.close([0])
> ERROR: MethodError: `close` has no method matching close(::Array{Int64,1})
>  
>
>

Re: [julia-users] Re: Re: are array slices views in 0.4?

2015-10-26 Thread Fabian Gans


On Friday, October 23, 2015 at 9:08:55 PM UTC+2, Christoph Ortner wrote:
>
>
> Apparently yes. For me it is very counterintuitive that this would be the 
> default behaviour. But presumably there was a lot of discussion that this 
> is desirable.
>
> (a) What are reasons, other than performance?
>

I think this is the issue where this change was brought up: 
https://github.com/JuliaLang/julia/issues/3701 and the respective pull 
request is here: https://github.com/JuliaLang/julia/pull/9150. The main 
reason seems to be that Julia generally prefers passing by reference by 
default, so this fits the spirit of the language.  


> (b) Is this still under discussion or pretty much settled?
>
>
From reading the issues, the core devs agree on the change and there hasn't 
been a single objection so far. 

 

> (c) if I want to write code now that shouldn't break with 0.5, what should 
> I do?
>

I think when you need a copy, just surround your getindex with a copy 
function (e.g. copy(x[:,10]) instead of x[:,10]). 
Regarding this change I am also more on the sceptical side. I would very 
much prefer a copy-on-write solution like the one Matlab and R provide, but I 
don't know if and how this would be possible to implement, so I don't raise 
my voice here. 
To me the main benefit of this change is that it drove the main developers 
to make array views much more performant and first-class members of Julia. 
As Tim Holy mentioned, the actual change seems to be very small, but it 
needed and still needs a lot of work to make it possible. 
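
A small illustration of the idiom, assuming Julia 0.4 behaviour:

x = rand(5, 20)
col_copy = copy(x[:, 10])   # explicit copy; stays a copy even once slices become views
col_view = sub(x, :, 10)    # an explicit SubArray view today; writes to it show up in x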





[julia-users] Re: PyPlot histogram does not work (for me at least) in 0.4.0 (?)

2015-10-26 Thread Ferran Mazzanti
Oh, thanks for the info...

On Monday, October 26, 2015 at 12:25:26 PM UTC+1, Kristoffer Carlsson wrote:
>
> Read here: https://github.com/stevengj/PyCall.jl#usage
>
> More specifically, this section:
>
> "The biggest diffence from Python is that object attributes/members are 
> accessed with o[:attribute]rather than o.attribute, and you use get(o, 
> key) rather than o[key]. (This is because Julia does not permit 
> overloading the . operator yet.) See also the section on PyObject below, 
> as well as the pywrap function to create anonymous modules that simulate . 
> access 
> (this is what @pyimportdoes). For example, using Biopython 
>  we can do:
>
> @pyimport Bio.Seq as s
> @pyimport Bio.Alphabet as a
> my_dna = s.Seq("AGTACACTGGT", a.generic_dna)
> my_dna[:find]("ACT")
>
> whereas in Python the last step would have been my_dna.find("ACT")"
>
> On Monday, October 26, 2015 at 11:38:41 AM UTC+1, Ferran Mazzanti wrote:
>>
>> That worked, thanks :) 
>>
>> But I cannot understand this syntax... where can I find documentation 
>> about how to do that? Just to avoid asking such questions again...
>>
>> Thanks again.
>>
>> On Monday, October 26, 2015 at 11:31:59 AM UTC+1, Kristoffer Carlsson 
>> wrote:
>>>
>>> Change last line to:
>>>
>>> h = PyPlot.plt[:hist](x,nbins)
>>>
>>>
>>>
>>> On Monday, October 26, 2015 at 11:28:35 AM UTC+1, Ferran Mazzanti wrote:

 Hi folks,

 using Linux Mint 17.1 here. I upgraded to julia 0.4.0 and now this 
 simple code, taken from the web and tested on previous versions,

 using PyPlot

 x = randn(1000) # Values
 nbins = 50 # Number of bins

 fig = figure("pyplot_histogram",figsize=(6,6)) # Not strictly required
 ax = axes() # Not strictly required
 h = PyPlot.plt.hist(x,nbins) # Histogram, PyPlot.plt required to 
 differentiate with conflicting hist command

 Produces the following output

 LoadError: type PyObject has no field hist
 while loading In[133], in expression starting on line 6

  in getindex at /home/mazzanti/.julia/v0.4/PyCall/src/PyCall.jl:240
  in pysequence_query at 
 /home/mazzanti/.julia/v0.4/PyCall/src/conversions.jl:781
  [inlined code] from 
 /home/mazzanti/.julia/v0.4/PyCall/src/conversions.jl:797
  in pytype_query at 
 /home/mazzanti/.julia/v0.4/PyCall/src/conversions.jl:826
  in convert at /home/mazzanti/.julia/v0.4/PyCall/src/conversions.jl:846
  in pycall at /home/mazzanti/.julia/v0.4/PyCall/src/PyCall.jl:399
  in call at /home/mazzanti/.julia/v0.4/PyCall/src/PyCall.jl:407
  in close_queued_figs at 
 /home/mazzanti/.julia/v0.4/PyPlot/src/PyPlot.jl:401

 Any hint on that? Am I doing something wrong? If so, can anybody help on 
 how to do histograms in Julia 0.4.0?

 Thanks,

 Ferran. 




[julia-users] OT: make github notifications un-read?

2015-10-26 Thread Andreas Lobinger
Hello colleagues,

it feels strange to ask this here, but is there a way to read GitHub 
notifications and keep them unread at the same time? As I use a smartphone, 
I sometimes scan my GitHub notifications during the day, and by doing that I 
put them in the 'read' state; by the time I reach a proper computer to work 
on code I might not remember everything I read through the day...

Wishing a happy day,
Andreas


[julia-users] Re: OT: make github notifications un-read?

2015-10-26 Thread Waldir Pimenta
I added the feed of my github notifications to my feed reader app. That 
works well for me as a way to do just that: see the list of notifications, 
open the corresponding issues / PRs, and still have the github notifications 
marked as unread when I access github.com directly. The only problem is 
that now you have the opposite problem: you have to mark the notifications 
as read in two places.

On Monday, October 26, 2015 at 5:16:41 PM UTC, Andreas Lobinger wrote:
>
> Hello colleagues,
>
> it feels strange to ask this here, but is there a way to read GitHub 
> notifications and keep them unread at the same time? As I use a smartphone, 
> I sometimes scan my GitHub notifications during the day, and by doing that I 
> put them in the 'read' state; by the time I reach a proper computer to work 
> on code I might not remember everything I read through the day...
>
> Wishing a happy day,
> Andreas
>


Re: [julia-users] Re: OT: make github notifications un-read?

2015-10-26 Thread Yichao Yu
On Mon, Oct 26, 2015 at 1:22 PM, Waldir Pimenta
 wrote:
> I added the feed of my github notifications to my feed reader app. That
> works well for me as a way to do just that: see the list of notifications,
> open the corresponding issues / PRs, and still have the github notifications
> marked as unread when I access github.com directly. The only problem is that
> now you have the opposite problem: you have to mark the notifications as
> read in two places.

My solution is to have email notifications and set the email to unread
if I want/need to deal with it later.

The advantage is that gmail can automatically set the github
notification as read most of the time, so you don't have to read it
twice either.

>
>
> On Monday, October 26, 2015 at 5:16:41 PM UTC, Andreas Lobinger wrote:
>>
>> Hello colleagues,
>>
>> it feels strange to ask this here, but is there a way to read GitHub
>> notifications and keep them unread at the same time? As I use a smartphone, I
>> sometimes scan my GitHub notifications during the day, and by doing that I
>> put them in the 'read' state; by the time I reach a proper computer to work
>> on code I might not remember everything I read through the day...
>>
>> Wishing a happy day,
>> Andreas


Re: [julia-users] Re: Hausdorff distance

2015-10-26 Thread Andrew McLean
This is not exactly my area of expertise, but this recently published paper
looks like a good starting point.

[1]
A. A. Taha and A. Hanbury, ‘An Efficient Algorithm for Calculating the
Exact Hausdorff Distance’, *IEEE Transactions on Pattern Analysis and
Machine Intelligence*, vol. 37, no. 11, pp. 2153–2163, Nov. 2015.

http://dx.doi.org/10.1109/TPAMI.2015.2408351


On 26 October 2015 at 14:22, Júlio Hoffimann 
wrote:

> Hi Andrew,
>
> Could you please point to a good paper? This naive implementation I showed
> is working surprisingly well for me though.
>
> -Júlio
>


Re: [julia-users] Re: Re: are array slices views in 0.4?

2015-10-26 Thread Christoph Ortner
Fabian - Many thanks for your comments. This was very helpful.

(c) if I want to write code now that shouldn't break with 0.5, what should 
> I do?
>

> I think when you need a copy, just surround your getindex with a copy 
> function (e.g. copy(x[:,10]) instead of x[:,10]). 

But this would lead me to make two copies. I was more interested in seeing 
whether there is a guideline on how to write code now so it doesn't have to 
be rewritten for 0.5.


> Regarding this change I am also more on the sceptical side. I would very 
> much prefer a copy-on-write solution like the one Matlab and R provide, but I 
> don't know if and how this would be possible to implement, so I don't raise 
> my voice here. 
> To me the main benefit of this change is that it drove the main developers 
> to make array views much more performant and first-class members of Julia. 
> As Tim Holy mentioned, the actual change seems to be very small, but it 
> needed and still needs a lot of work to make it possible. 


My own scepticism comes from the idea that using immutable objects 
throughout prevents bugs and one should only use mutable objects sparingly 
(primarily for performance - but I thought it shouldn't be the default)

Christoph





Re: [julia-users] Re: Hausdorff distance

2015-10-26 Thread Júlio Hoffimann
Thanks, will take a look.

-Júlio


[julia-users] Re: JuliaBox limitations on parallel computing

2015-10-26 Thread cdm

perhaps time for an asterisk on the JuliaBox banner:

"The Julia community is doing amazing things. We want you in on it!*"

* ... but not massively in parallel, consuming the resources of the 
community.



there are positives and negatives to this sort of problem ...

the fact that the problem can even arise says great things
about the capabilities of the language and the service.

well done ...



On Sunday, October 25, 2015 at 2:11:25 AM UTC-7, Tanmay K. Mohapatra wrote:
>
> Hi,
>
> from the past few days we have had some users run large parallel programs 
> on JuliaBox sessions. While in some cases they succeed, we see a lot of 
> failures due to resource constraints. Though we have plans to enable large 
> programs in future, we do not allocate enough resources for that now.
>
> Also, since JuliaBox sessions run on shared infrastructure, this affects 
> other sessions that are co-located on the same machine.  We will be putting 
> in place more checks and restrictions soon to prevent co-located sessions 
> from being impacted.
>
> We would request users to refrain from running large parallel programs for 
> now. If this is being done as part of some university class, please write 
> to us. Probably a separately provisioned cluster will be more appropriate 
> for that.
>
> - JuliaBox Team
>


Re: [julia-users] Re: Re: are array slices views in 0.4?

2015-10-26 Thread Gabriel Gellner


On Monday, 26 October 2015 11:17:58 UTC-7, Christoph Ortner wrote:
>
> Fabian - Many thanks for your comments. This was very helpful.
>
> (c) if I want to write code now that shouldn't break with 0.5, what should 
>> I do?
>>
>
> I think when you need a copy, just surround your getindex with a copy 
> function. (e.g. copy(x[:,10]) instead of x[:,10]). 
>
> But this would lead me to make two copies. I was more interested in seeing 
> whether there is a guideline on how to write code now so it doesn't have to 
> be rewritten for 0.5.
>
>
You could do copy(sub(x, :, 10)) to avoid two copies. Not as pretty ;)
 

> My own scepticism comes from the idea that using immutable objects 
> throughout prevents bugs and one should only use mutable objects sparingly 
> (primarily for performance - but I thought it shouldn't be the default)
>
The all-immutable idea has its merits, but it is really not how Julia 
works already. If I pass the name of an array to a function I get a 
reference, not a copy. At the moment I only get a copy of a slice of that 
same array, which feels inconsistent to me. If your function accepts an 
array and you are mutating it in the body of the function, then I really 
don't like that

func(a) 
func(b[1:2, 3:4])

has different behavior if I am mutating the passed in array. This will be a 
source of far more bugs in my mind than simply having it so that if you 
mutate the passed in arguments to a function you are changing the 
parameters. 

Also I would hate to have to do

func(ref(a))

or some such thing when I have no plans to change a in my function (which 
for me is the far more common case).

Ultimately I find that Julia treats us like consenting adults when it comes 
to passing objects to functions ... and the new array views behavior simply 
adds consistency.


Re: [julia-users] A grateful scientist

2015-10-26 Thread Gabriel Gellner
Seriously +9000 to this sentiment.
I am new to Julia but man what this community has made is incredible. The 
beauty of this project staggers me. Not having to mess around with C for a 
large chunk of my new code's inner loops feels like magic every time.

Thank you all so much.

Gabriel

On Monday, 26 October 2015 06:16:05 UTC-7, Scott T wrote:
>
> (Oh and Yakir, your work sounds like one of the coolest interdisciplinary 
> mixes I could possibly think of.)
>
> On Monday, 26 October 2015 13:11:47 UTC, Scott T wrote:
>>
>> I'll add my voice to say thanks as well! I made the switch from Python to 
>> Julia for some astrophysical models in my PhD, because I suck at C and 
>> Fortran and didn't like the messiness of dealing with something like 
>> Cython. Julia has been great for this and has introduced me gently to the 
>> usefulness of types and how to program for speed without throwing me in the 
>> deep end. I look forward to the day I can introduce people to it without 
>> having to preface the introduction with "You probably won't care about what 
>> I'm saying until the language and packages are more stable, but..."
>>
>> Scott T
>>
>> On Monday, 26 October 2015 12:47:47 UTC, Jon Norberg wrote:
>>>
>>> Utterly seconding that. Amazing community and beautiful language. 
>>>
>>> Thanks all!
>>>
>>> Jon Norberg 
>>>
>>>

Re: [julia-users] Re: Re: are array slices views in 0.4?

2015-10-26 Thread Stefan Karpinski
On Mon, Oct 26, 2015 at 2:17 PM, Christoph Ortner <
christophortn...@gmail.com> wrote:

> Fabian - Many thanks for your comments. This was very helpful.
>
> (c) if I want to write code now that shouldn't break with 0.5, what should
>> I do?
>>
>
> I think when you need a copy, just surround your getindex with a copy
> function. (e.g. copy(x[:,10]) instead of x[:,10]).
>
> But this would lead me to make two copies. I was more interested in seeing
> whether there is a guideline on how to write code now so it doesn't have to
> be rewritten for 0.5.
>

There will be a solution in the Compat package once this change is made. It
will probably consist of having a replacement for getindex that creates a
slice rather than a copy so that calling copy won't result in two copies.
I.e. it will backport the 0.5 behavior to earlier versions of Julia.


> Regarding this change I am also more on the sceptical side. I would very
>> much prefer a copy-on-write like solution like Matlab and R provide, but I
>> don't know if and how this would be possible to implement, so I don't raise
>> my voice here.
>> To me the main benefit of this change is that it drove the main
>> developers to make array views much more performant and first class members
>> of julia. As Tim Holy mentioned, the actual change seems to be be very
>> small, but it needed and still needs a lot of work to make it possible.
>
>
> My own scepticism comes from the idea that using immutable objects
> throughout prevents bugs and one should only use mutable objects sparingly
> (primarily for performance - but I thought it shouldn't be the default)
>

Copy-on-write is complex and leads to brittle performance properties that
cannot be reasoned about locally. The semantics of R and Matlab also
notoriously make it impossible to write efficient mutating functions –
people generally end up writing C extensions to do that.

It remains to be seen how this pans out, but keep in mind that C, C++,
Java, Fortran, Julia, Python, Ruby, etc. all use mutable non-copy-on-write
arrays everywhere and the world has not ended. Slices are a bit different,
but NumPy, for example, creates views for slices and that works well in the
SciPy ecosystem.

Philosophically, I think that returning views from operations is
problematic when the object being viewed is conceptually a single value –
strings being a good example that have gone different ways in different
languages. In C everyone thinks of strings as arrays of characters and it
works pretty well since everyone has that in mind. In higher level
languages, people stop thinking of strings this way, which means that
making strings mutable or returning slices of them as views becomes
problematic because it's at odds with how we think of strings. Arrays are
the prototypical example of a container-like thing, so I don't think that
this will be that confusing. If you "own" the array, then it's ok to make a
slice and potentially mutate it – if you don't, then it's not ok. We could
potentially add tooling to help enforce this since we know by the f! naming
convention which functions should and shouldn't mutate their arguments.


[julia-users] deserialize error with closured array of functions

2015-10-26 Thread Александр Кольцов
About my problem. Code:
...
p = _belineInterpolateGrid(map(t -> sin( norm(t) ), grid), grid)
serialize(open("/data/test.function", "w"), p)
p0 = deserialize(open("/data/test.function", "r"))

>> ERROR: MethodError: `convert` has no method matching 
convert(::Type{LambdaStaticData}, ::Array{Any,1})
This may have arisen from a call to the constructor LambdaStaticData(...),
since type constructors fall back to convert methods.
Closest candidates are:
  call{T}(::Type{T}, ::Any)
  convert{T}(::Type{T}, ::T)
 [inlined code] from int.jl:187
 in deserialize at serialize.jl:536
 [inlined code] from operators.jl:313
 in handle_deserialize at serialize.jl:475
 in deserialize_array at serialize.jl:614
 in handle_deserialize at serialize.jl:463
 [inlined code] from essentials.jl:116
 in deserialize at serialize.jl:696
 [inlined code] from operators.jl:313
 in handle_deserialize at serialize.jl:475
 [inlined code] from int.jl:187
 in deserialize at serialize.jl:482
 [inlined code] from operators.jl:313
 in handle_deserialize at serialize.jl:475
 [inlined code] from int.jl:187
 in deserialize at serialize.jl:538
 [inlined code] from operators.jl:313
 in handle_deserialize at serialize.jl:475
 [inlined code] from int.jl:187
 in deserialize at serialize.jl:435

In the function "_belineInterpolateGrid" I used a closure over an Array{Function}:

function _belineInterpolateGrid(PP, Grid)
...
P = Array(Function, N-1, M-1)
...
poly = (x,y) -> begin
 i_x, i_y =  i(x, y)
 return P[i_x, i_y](x, y)
end
return poly
end

What happened, and can I fix it? Or is this a bug?
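Not an answer to whether it is a bug, but a common workaround sketch (names taken
from the snippet above): serialize the inputs needed to rebuild the closure rather
than the closure itself, and reconstruct it after loading.

vals = map(t -> sin(norm(t)), grid)
serialize(open("/data/test.data", "w"), (vals, grid))   # store plain data, not the closure

vals0, grid0 = deserialize(open("/data/test.data", "r"))
p0 = _belineInterpolateGrid(vals0, grid0)               # rebuild the interpolant here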


[julia-users] Re: Help on optimization problem

2015-10-26 Thread Vincent Zoonekynd
It is also similar to (metric) multi-dimensional scaling (MDS):
the problem of finding coordinates that best explain a distance matrix
(it is often a full matrix, but most of the algorithms should also work 
with a partial one).
Your approximate coordinates would be used as an initial guess for the 
search.
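For concreteness, a plain-Julia sketch of the least-squares objective such a search
would minimize (names are illustrative: X is a 3×n matrix of unknown coordinates,
pairs holds (i, j, measured distance) tuples; anchored points are simply held fixed):

function sqdist_error(X, pairs)
    s = 0.0
    for (i, j, dij) in pairs
        # penalize the squared mismatch between modeled and measured distance
        s += (norm(X[:, i] - X[:, j]) - dij)^2
    end
    return s
end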

On Friday, 23 October 2015 11:10:35 UTC+8, vav...@uwaterloo.ca wrote:
>
> The problem of recovering (x,y,z) coordinates given partial pairwise 
> distance information is a famous problem in the optimization community and 
> has attracted the attention of many top people.  It is sometimes called the 
> 'Euclidean distance matrix completion' problem by mathematicians and the 
> 'sensor localization' problem by engineers.  If you search in google 
> scholar on either of these terms, you'll find many papers.  Jon Dattorio 
> wrote an entire book on the problem.
>
> -- Steve Vavasis
>
>
> On Tuesday, October 20, 2015 at 11:12:44 PM UTC-4, Spencer Russell wrote:
>>
>> I have a bunch of points in 3D whose positions I know approximately, with 
>> low-noise distance measurements between them (not necessarily fully 
>> connected). I want to improve my estimate of their 3D coordinates. This 
>> smells like an optimization problem so I’m thinking of using it as an 
>> excuse to learn JuMP. The model variables (coordinates) and objective 
>> function seem pretty straightforward (probably sum of squared distance 
>> errors for the objective?) but I don’t know enough about the various 
>> solvers to know which one to use. I know a tiny bit about optimization 
>> (I’ve implemented simplex, L-M, and conjugate GD for a class) but get 
>> quickly outside my depth when it gets theoretical. 
>>
>> Without any known positions the system is underconstrained because the 
>> whole collection could can translate, rotate, and reflect, but if I set 3 
>> non-colinear points at known positions it should be solvable. I can also 
>> supply pretty good guesses for initial values on locations, at least close 
>> enough that the closest minima is the one I want (I think). 
>>
>> so in short my questions boil down to: 
>>
>> 1. What solver is appropriate for this sort of problem? (or should I just 
>> go with a spring-force algorithm like in GraphLayout.jl?) 
>> 2. (JuMP Specific) - Should I specify my known positions as model 
>> variables with equality constraints, or just normal julia variables that 
>> show up in my objective function? 
>>
>> Thanks! 
>>
>> -s
>
>

[julia-users] low rank matrix-vector multiplication and order of operations

2015-10-26 Thread Michael Lindon
I have a somewhat large matrix-vector product to calculate, where the 
matrix is not full rank. I can perform a low rank factorization of the nxn 
Matrix M as M=LL' where L is nxp.  I'd like to use this information to 
speed up the computation. Here is a minimal example:

x=rand(1,100)
v=rand(1)
xx=x*x'
xt=x'
@time xx*v;
@time x*x'*v;
@time x*(x'*v);
@time x*(xt*v);

here are the timings:

julia> @time xx*v;
  0.038634 seconds (8 allocations: 78.375 KB)
julia> @time x*x'*v;
  0.385250 seconds (17 allocations: 770.646 MB, 13.14% gc time)
julia> @time x*(x'*v);
  0.000662 seconds (10 allocations: 79.266 KB)
julia> @time x*(xt*v);
  0.000819 seconds (10 allocations: 79.266 KB)

I would like to understand why x*(x'*v) is much faster than x*x'*v and uses less 
memory. I would have thought the order of operations would go from right to left, 
but it seems the parentheses made a huge difference. I also might have expected 
that x' would just flick a TRANS='T' switch in the BLAS call, but by the looks of 
it there is a very large amount of memory allocation going on in x*x'*v.


[julia-users] WinRPM Download failure

2015-10-26 Thread Achu
I got this when I ran Pkg.update() and also on WinRPM.update()

INFO: Building WinRPM
INFO: Downloading 
https://cache.e.ip.saba.us/http://download.opensuse.org/repositories/windows:/mingw:/win32/openSUSE_13.1/repodata/repomd.xml
WARNING: Unknown download failure, error code: 2148270086
WARNING: Retry 1/5 downloading: 
https://cache.e.ip.saba.us/http://download.opensuse.org/repositories/windows:/mingw:/win32/openSUSE_13.1/repodata/repomd.xml
WARNING: Unknown download failure, error code: 2148270086
WARNING: Retry 2/5 downloading: 
https://cache.e.ip.saba.us/http://download.opensuse.org/repositories/windows:/mingw:/win32/openSUSE_13.1/repodata/repomd.xml
WARNING: Unknown download failure, error code: 2148270086
WARNING: Retry 3/5 downloading: 
https://cache.e.ip.saba.us/http://download.opensuse.org/repositories/windows:/mingw:/win32/openSUSE_13.1/repodata/repomd.xml
WARNING: Unknown download failure, error code: 2148270086
WARNING: Retry 4/5 downloading: 
https://cache.e.ip.saba.us/http://download.opensuse.org/repositories/windows:/mingw:/win32/openSUSE_13.1/repodata/repomd.xml
WARNING: Unknown download failure, error code: 2148270086
WARNING: Retry 5/5 downloading: 
https://cache.e.ip.saba.us/http://download.opensuse.org/repositories/windows:/mingw:/win32/openSUSE_13.1/repodata/repomd.xml
WARNING: received error 0 while downloading 
https://cache.e.ip.saba.us/http://download.opensuse.org/repositories/windows:/mingw:/win32/openSUSE_13.1/repodata/repomd.xml
INFO: Downloading 
https://cache.e.ip.saba.us/http://download.opensuse.org/repositories/windows:/mingw:/win64/openSUSE_13.1/repodata/repomd.xml
WARNING: Unknown download failure, error code: 2148270086
WARNING: Retry 1/5 downloading: 
https://cache.e.ip.saba.us/http://download.opensuse.org/repositories/windows:/mingw:/win64/openSUSE_13.1/repodata/repomd.xml
WARNING: Unknown download failure, error code: 2148270086
WARNING: Retry 2/5 downloading: 
https://cache.e.ip.saba.us/http://download.opensuse.org/repositories/windows:/mingw:/win64/openSUSE_13.1/repodata/repomd.xml
WARNING: Unknown download failure, error code: 2148270086
WARNING: Retry 3/5 downloading: 
https://cache.e.ip.saba.us/http://download.opensuse.org/repositories/windows:/mingw:/win64/openSUSE_13.1/repodata/repomd.xml
WARNING: Unknown download failure, error code: 2148270086
WARNING: Retry 4/5 downloading: 
https://cache.e.ip.saba.us/http://download.opensuse.org/repositories/windows:/mingw:/win64/openSUSE_13.1/repodata/repomd.xml
WARNING: Unknown download failure, error code: 2148270086
WARNING: Retry 5/5 downloading: 
https://cache.e.ip.saba.us/http://download.opensuse.org/repositories/windows:/mingw:/win64/openSUSE_13.1/repodata/repomd.xml
WARNING: received error 0 while downloading 
https://cache.e.ip.saba.us/http://download.opensuse.org/repositories/windows:/mingw:/win64/openSUSE_13.1/repodata/repomd.xml

Heading to the link gives me a 404 error.


Re: [julia-users] A question of Style: Iterators into regular Arrays

2015-10-26 Thread Scott Jones
>One possible naming scheme could be to follow the example of Int and 
Integer and have Vec, Mat, Arr be the concrete types and Vector, Matrix and 
Array be the abstract types. I'm really not sure this would be worth the 
trouble at this point or if we really want the AbstractArray names to be 
any shorter.

That sounds like quite a good idea which, if carried out completely, could 
eliminate some inconsistencies in the naming of abstract vs. concrete 
types that have been causing people grief.

So:

> Abstract            Concrete
> Signed (Integer)    Int*
> Unsigned (Integer)  UInt*
> Float               Flt*
> Decimal             Dec*
> Array               Arr
> Vector              Vec
> Matrix              Mat
> String              Str (maybe Str{Binary}, Str{ASCII}, Str{UTF8}, Str{UTF16}, Str{UTF32})




[julia-users] Using Juno/LT to run Julia Code, Error, `getindex` has no method matching getindex(::DataFrame, ::ASCIIString

2015-10-26 Thread aarond
As you can see in the screenshot below, this is the chunk of code I am 
trying to run. The error seems to occur with `getindex` and seems to be 
related to this line of code:

for (index, idImage) in enumerate(labelsInfoTrain["ID"])

as both are highlighted in pink. I'm very confused about how to fix this and 
look forward to any help.
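Not part of the original post, but the MethodError says the DataFrame is being
indexed with a string; in recent DataFrames versions columns are indexed with a
Symbol, so a likely fix is:

for (index, idImage) in enumerate(labelsInfoTrain[:ID])
    # ... same loop body as before ...
end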






[julia-users] Pkg.add("DecisionTree") getting error, failed process: Process(`git ' --worktree = C:\Users.....

2015-10-26 Thread aarond
Hi I am running my Julia code in Juno/LT. I am trying to add the decision 
tree package but I am getting a major error seen in the screenshot below. 
Any help would be much appreciated.





[julia-users] For loop = or in?

2015-10-26 Thread FANG Colin
Hi All

I have got a stupid question:

Are there any difference in "for i in 1:5" and "for i = 1:5"?

Does the julia community prefer one to the other? I see use of both in the 
documentations and source code.

Personally I haven't seen much use of "for i = 1:5" in other languages.

Thanks.


[julia-users] GUI applications with Julia

2015-10-26 Thread Piotr W
Hi,

I am considering using Julia in my project. I have little experience in 
programming in the "classical" sense, but I have some experience programming 
in Mathcad. Now I would like to rewrite the algorithm I have developed in 
Mathcad and create an application with a GUI. My question is whether Julia is 
suitable for developing GUI applications. Are there any GUI builders / 
libraries / frameworks like Qt or WPF, or could I adopt some other such 
library? Thanks in advance for your help.


[julia-users] julia style to resize array with initialization?

2015-10-26 Thread Cameron McBride
Hi All,

What's the best julian way to do the following:

function vecadd!(vec, i, v)
n = length(vec)
if n < i
resize!(vec, i)
vec[n+1:i] = 0.0
end
vec[i] += v
end

This seems somewhat typical for a growing array that's not just incremental
(i.e. not just a push!() operation), so I feel like I missed something.
Better suggestions?

Thanks!

Cameron


[julia-users] Re: WinRPM Download failure

2015-10-26 Thread Christopher Alexander
I'm not sure if they've tagged their latest version of master, but the 
change is in there:

https://github.com/JuliaLang/WinRPM.jl/commit/cb12a4136e9afb2a158ff888bc8ea81d3e72e619

Basically the openSUSE_13.1 needs to be openSUSE_13.2 in the url.  I had 
this problem like last week and I just changed it manually for now.  I 
guess you could checkout master as well.

Chris

On Monday, October 26, 2015 at 3:26:07 PM UTC-4, Achu wrote:
>
> I got this when I ran Pkg.update() and also on WinRPM.update()
>
> INFO: Building WinRPM
> INFO: Downloading 
> https://cache.e.ip.saba.us/http://download.opensuse.org/repositories/windows:/mingw:/win32/openSUSE_13.1/repodata/repomd.xml
> WARNING: Unknown download failure, error code: 2148270086
> WARNING: Retry 1/5 downloading: 
> https://cache.e.ip.saba.us/http://download.opensuse.org/repositories/windows:/mingw:/win32/openSUSE_13.1/repodata/repomd.xml
> WARNING: Unknown download failure, error code: 2148270086
> WARNING: Retry 2/5 downloading: 
> https://cache.e.ip.saba.us/http://download.opensuse.org/repositories/windows:/mingw:/win32/openSUSE_13.1/repodata/repomd.xml
> WARNING: Unknown download failure, error code: 2148270086
> WARNING: Retry 3/5 downloading: 
> https://cache.e.ip.saba.us/http://download.opensuse.org/repositories/windows:/mingw:/win32/openSUSE_13.1/repodata/repomd.xml
> WARNING: Unknown download failure, error code: 2148270086
> WARNING: Retry 4/5 downloading: 
> https://cache.e.ip.saba.us/http://download.opensuse.org/repositories/windows:/mingw:/win32/openSUSE_13.1/repodata/repomd.xml
> WARNING: Unknown download failure, error code: 2148270086
> WARNING: Retry 5/5 downloading: 
> https://cache.e.ip.saba.us/http://download.opensuse.org/repositories/windows:/mingw:/win32/openSUSE_13.1/repodata/repomd.xml
> WARNING: received error 0 while downloading 
> https://cache.e.ip.saba.us/http://download.opensuse.org/repositories/windows:/mingw:/win32/openSUSE_13.1/repodata/repomd.xml
> INFO: Downloading 
> https://cache.e.ip.saba.us/http://download.opensuse.org/repositories/windows:/mingw:/win64/openSUSE_13.1/repodata/repomd.xml
> WARNING: Unknown download failure, error code: 2148270086
> WARNING: Retry 1/5 downloading: 
> https://cache.e.ip.saba.us/http://download.opensuse.org/repositories/windows:/mingw:/win64/openSUSE_13.1/repodata/repomd.xml
> WARNING: Unknown download failure, error code: 2148270086
> WARNING: Retry 2/5 downloading: 
> https://cache.e.ip.saba.us/http://download.opensuse.org/repositories/windows:/mingw:/win64/openSUSE_13.1/repodata/repomd.xml
> WARNING: Unknown download failure, error code: 2148270086
> WARNING: Retry 3/5 downloading: 
> https://cache.e.ip.saba.us/http://download.opensuse.org/repositories/windows:/mingw:/win64/openSUSE_13.1/repodata/repomd.xml
> WARNING: Unknown download failure, error code: 2148270086
> WARNING: Retry 4/5 downloading: 
> https://cache.e.ip.saba.us/http://download.opensuse.org/repositories/windows:/mingw:/win64/openSUSE_13.1/repodata/repomd.xml
> WARNING: Unknown download failure, error code: 2148270086
> WARNING: Retry 5/5 downloading: 
> https://cache.e.ip.saba.us/http://download.opensuse.org/repositories/windows:/mingw:/win64/openSUSE_13.1/repodata/repomd.xml
> WARNING: received error 0 while downloading 
> https://cache.e.ip.saba.us/http://download.opensuse.org/repositories/windows:/mingw:/win64/openSUSE_13.1/repodata/repomd.xml
>
> Heading to the link gives me a 404 error.
>


[julia-users] Re: julia style to resize array with initialization?

2015-10-26 Thread Josh Langsfeld
You could do 'append!(vec, zeros(i-n))'.
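A sketch of how the whole function could look with that change, using eltype(vec)
so it also works for non-Float64 vectors (not from the original question):

function vecadd!(vec, i, v)
    n = length(vec)
    # grow and zero-fill in one step when i is past the current end
    n < i && append!(vec, zeros(eltype(vec), i - n))
    vec[i] += v
    return vec
end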

On Monday, October 26, 2015 at 3:32:56 PM UTC-4, Cameron McBride wrote:
>
> Hi All, 
>
> What's the best julian way to do the following: 
>
> function vecadd!(vec, i, v)
> n = length(vec)
> if n < i
> resize!(vec, i)
> vec[n+1:i] = 0.0
> end
> vec[i] += v
> end
>
> This seems somewhat typical for a growing array that's not just 
> incremental (i.e. not just a push!() operation), so I feel like I missed 
> something. Better suggestions?
>
> Thanks!
>
> Cameron
>


Re: [julia-users] Re: Re: are array slices views in 0.4?

2015-10-26 Thread Christoph Ortner

 Hi Stefan,

Many thanks for the clarifications.

Just for the record: I've often complained  about the changes to Julia 
since 0.2, but this is one that I am more on the fence about.

Christoph



[julia-users] alternate LISP integration ...

2015-10-26 Thread cdm


i have recently consumed an element of the video content emanating from the 
S.L. StrangeLoop ...

   "Pixie - A Lightweight Lisp with 'Magical' Powers" by Timothy Baldridge

   https://www.youtube.com/watch?v=1AjhFZVfB9c


curious to see if anyone has any impressions / comparisons of Pixie with 
femtolisp.

some of the LISP fu T. Baldridge was dealing looked really impressive ...


of interest:

   https://twitter.com/timbaldridge/status/643789308747345920

   https://github.com/search?utf8=%E2%9C%93&q=femtolisp


thanks,


~ cdm


[julia-users] use of @generated for specializing on functions

2015-10-26 Thread Alireza Nejati
Hi all,

I was wondering if this is a julian use of the @generated macro:

type Functor{Symbol} end

# A simple general product-sum operator;
# returns a[1]⊙b[1] ⊕ a[2]⊙b[2] ⊕ ...
@generated function dot{⊕,⊙,T}(::Type{Functor{⊕}}, ::Type{Functor{⊙}}, 
a::Array{T}, b::Array{T})
return quote
assert(length(a)==length(b))
p = zero(T)
for i=1:length(a)
@inbounds p=$⊕(p,$⊙(a[i],b[i]))
end
p
end
end

The idea is to produce a specialized dot() operator that can work with 
arbitrary product and sum operators yet still be computationally efficient. 
These are some examples on how to use the above function:

dot(Functor{:+},   Functor{:*}, [-1,0,1],[1,0,1])  # vector dot; returns 0
dot(Functor{:max}, Functor{:+}, [-1,0,1],[1,0,1])  # max-sum; returns 2
dot(Functor{:|},   Functor{:&}, [true,false,true],[false,true,true])  # 
constraint satisfaction; returns true

Testing shows that this is faster by about 10x than passing the functions 
directly and runs at the same speed . It also doesn't go through any memory 
allocations.


[julia-users] Re: A grateful scientist

2015-10-26 Thread Alireza Nejati
I've been coding in julia so much lately that I actually think my brain 
might be forgetting the other languages I used to know!

On Monday, October 26, 2015 at 4:30:26 PM UTC+13, Yakir Gagnon wrote:
>
> Hi Julia community and developers,
> I'm a postdoc researching color vision, biological optics, polarization 
> vision, and camouflage. I've always used Matlab in my research and made the 
> switch to Julia about two years ago. I just wanted to report, for what it's 
> worth, that as a researcher I think Julia is the best. I promote it 
> everywhere I think it's appropriate, and use it almost exclusively. 
> Just wanted to say a big fat thank you to all the developers and community 
> for creating this magnificence.
>
> THANK YOU! 
>


Re: [julia-users] use of @generated for specializing on functions

2015-10-26 Thread Yichao Yu
On Mon, Oct 26, 2015 at 4:27 PM, Alireza Nejati  wrote:
> Hi all,
>
> I was wondering if this is a julian use of the @generated macro:
>
> type Functor{Symbol} end
>
> # A simple general product-sum operator;
> # returns a[1]⊙b[1] ⊕ a[2]⊙b[2] ⊕ ...
> @generated function dot{⊕,⊙,T}(::Type{Functor{⊕}}, ::Type{Functor{⊙}},
> a::Array{T}, b::Array{T})
> return quote
> assert(length(a)==length(b))
> p = zero(T)
> for i=1:length(a)
> @inbounds p=$⊕(p,$⊙(a[i],b[i]))
> end
> p
> end
> end
>
> The idea is to produce a specialized dot() operator that can work with
> arbitrary product and sum operators yet still be computationally efficient.
> These are some examples on how to use the above function:
>
> dot(Functor{:+},   Functor{:*}, [-1,0,1],[1,0,1])  # vector dot; returns 0
> dot(Functor{:max}, Functor{:+}, [-1,0,1],[1,0,1])  # max-sum; returns 2
> dot(Functor{:|},   Functor{:&}, [true,false,true],[false,true,true])  #
> constraint satisfaction; returns true
>
> Testing shows that this is faster by about 10x than passing the functions
> directly and runs at the same speed . It also doesn't go through any memory
> allocations.

As a temporary personal solution before Jeff fix the actual problem,
maybe ,in general, no.

See my comment 
https://github.com/JuliaLang/julia/issues/12357#issuecomment-125791661
and https://github.com/JuliaLang/julia/pull/12322#issuecomment-126810815

These cannot be a generic API since you can't get the scope right and
it would be really confusing when it will work and when it will not.


[julia-users] Re: use of @generated for specializing on functions

2015-10-26 Thread Alireza Nejati
I missed a word in there. Meant to say, "runs at the same speed as 
hard-coding the * and + functions". Anyway, I wanted to know if there's a 
better way of doing something similar without using the @generated macro.


[julia-users] Re: WinRPM Download failure

2015-10-26 Thread Tony Kelman
Pretty sure I tagged that change as soon as I noticed the 13.1 downloads 
stopped working. You likely need to Pkg.update(), be sure you don't have local 
modifications preventing the winrpm package from updating, then restart julia 
and do Pkg.build()
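Roughly, the steps described above from the REPL (a sketch):

Pkg.update()                 # pulls the tagged WinRPM fix
# Pkg.checkout("WinRPM")     # only if local modifications block the update
# then restart Julia and:
Pkg.build("WinRPM")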

[julia-users] Re: For loop = or in?

2015-10-26 Thread Alireza Nejati
There is no difference, as far as I know.

'=' seems to be used more for explicit ranges (i = 1:5) and 'in' seems to 
be used more for variables (i in mylist). But using 'in' for everything is 
ok too.

The '=' is there for familiarity with matlab. Remember that julia's syntax 
was in part designed to be familiar to matlab users.

On Tuesday, October 27, 2015 at 8:26:07 AM UTC+13, FANG Colin wrote:
>
> Hi All
>
> I have got a stupid question:
>
> Are there any difference in "for i in 1:5" and "for i = 1:5"?
>
> Does the julia community prefer one to the other? I see use of both in the 
> documentations and source code.
>
> Personally I haven't seen much use of "for i = 1:5" in other languages.
>
> Thanks.
>


[julia-users] Re: alternate LISP integration ...

2015-10-26 Thread cdm

sorry to have left off the repo link:

   https://github.com/pixie-lang/pixie


[julia-users] Re: use of @generated for specializing on functions

2015-10-26 Thread Alireza Nejati
I didn't know there was already a discussion going on this. Thanks for the 
links.

My goal here isn't to replace Dot{} but rather to figure out what the most 
julian way of doing this would be. Thanks again though.


[julia-users] Re: A grateful scientist

2015-10-26 Thread Christopher Fisher
I work with mathematical models of cognition and switched to Julia from 
Matlab about a year and a half ago. Aside from the debugger (step, step in, 
etc.), I don't miss Matlab at all. Once the statistical packages mature, I'll 
phase out R too. I think the language is shaping up quite well and the 
community has been very helpful for technical support. I'll be sure to cite 
Julia in future papers.

On Sunday, October 25, 2015 at 11:30:26 PM UTC-4, Yakir Gagnon wrote:
>
> Hi Julia community and developers,
> I'm a postdoc researching color vision, biological optics, polarization 
> vision, and camouflage. I've always used Matlab in my research and made the 
> switch to Julia about two years ago. I just wanted to report, for what it's 
> worth, that as a researcher I think Julia is the best. I promote it 
> everywhere I think it's appropriate, and use it almost exclusively. 
> Just wanted to say a big fat thank you to all the developers and community 
> for creating this magnificence.
>
> THANK YOU! 
>


[julia-users] Re: alternate LISP integration ...

2015-10-26 Thread cdm

somewhat related ...

is it still possible to get
the femtolisp REPL in the
post v0.4.0 world ... ?

thanks,

cdm


Re: [julia-users] A grateful scientist

2015-10-26 Thread Yakir Gagnon
Thanks Scott T! 

On Monday, October 26, 2015 at 11:16:05 PM UTC+10, Scott T wrote:
>
> (Oh and Yakir, your work sounds like one of the coolest interdisciplinary 
> mixes I could possibly think of.)
>
> On Monday, 26 October 2015 13:11:47 UTC, Scott T wrote:
>>
>> I'll add my voice to say thanks as well! I made the switch from Python to 
>> Julia for some astrophysical models in my PhD, because I suck at C and 
>> Fortran and didn't like the messiness of dealing with something like 
>> Cython. Julia has been great for this and has introduced me gently to the 
>> usefulness of types and how to program for speed without throwing me in the 
>> deep end. I look forward to the day I can introduce people to it without 
>> having to preface the introduction with "You probably won't care about what 
>> I'm saying until the language and packages are more stable, but..."
>>
>> Scott T
>>
>> On Monday, 26 October 2015 12:47:47 UTC, Jon Norberg wrote:
>>>
>>> Utterly seconding that. Amazing community and beautiful language. 
>>>
>>> Thanks all!
>>>
>>> Jon Norberg 
>>>
>>>

[julia-users] Re: parallel and PyCall

2015-10-26 Thread Yakir Gagnon


Yeah, right? So what's the answer? How, if at all, can we do any PyCall calls 
in parallel?

On Monday, October 26, 2015 at 11:49:35 PM UTC+10, Matthew Pearce wrote:

Thought I had an idea about this, I was wrong:
>
> ```julia
>
> julia> @everywhere using PyCall
>
> julia> @everywhere @pyimport pylab
>
> julia> remotecall_fetch(pylab.cumsum, 5, collect(1:10))
> ERROR: cannot serialize a pointer
>  [inlined code] from error.jl:21
>  in serialize at serialize.jl:420
>  [inlined code] from dict.jl:372
>  in serialize at serialize.jl:428
>  in serialize at serialize.jl:310
>  in serialize at serialize.jl:420 (repeats 2 times)
>  in serialize at serialize.jl:302
>  in serialize at serialize.jl:420
>  [inlined code] from dict.jl:372
>  in serialize at serialize.jl:428
>  in serialize at serialize.jl:310
>  in serialize at serialize.jl:420 (repeats 2 times)
>  in serialize at serialize.jl:302
>  in serialize at serialize.jl:420
>  [inlined code] from dict.jl:372
>  in send_msg_ at multi.jl:222
>  [inlined code] from multi.jl:177
>  in remotecall_fetch at multi.jl:728
>  [inlined code] from multi.jl:368
>  in remotecall_fetch at multi.jl:734
>
> julia> pylab.cumsum(collect(1:10))
> 10-element Array{Int64,1}:
>   1
>   3
>   6
>  10
>  15
>  21
>  28
>  36
>  45
>  55
>
> ```
>
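One workaround sketch (untested here, argument order as in the snippet above): keep
the PyObject on each worker and only ship plain Julia data, by wrapping the call in
a function defined with @everywhere.

@everywhere using PyCall
@everywhere @pyimport pylab
@everywhere py_cumsum(x) = pylab.cumsum(x)     # the PyObject never leaves the worker

remotecall_fetch(py_cumsum, 5, collect(1:10))  # only the array gets serialized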
​


[julia-users] low rank matrix-vector multiplication and order of operations

2015-10-26 Thread Kristoffer Carlsson
With the parentheses you never do a matrix-matrix multiplication. 
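A quick sketch of the cost argument (n rows and p columns in x):

# (x * x') * v : forms an n×n matrix first  -> O(n^2 * p) flops, O(n^2) memory
# x * (x' * v) : only small intermediates   -> O(n * p)   flops, O(n + p) memory
w = x * (x' * v)    # same result, no n×n intermediate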

[julia-users] low rank matrix-vector multiplication and order of operations

2015-10-26 Thread Kristoffer Carlsson
See for example https://en.m.wikipedia.org/wiki/Matrix_chain_multiplication

[julia-users] Re: JuliaBox limitations on parallel computing

2015-10-26 Thread André Lage
Hi,

I'll be giving a hands-on tutorial on Julia next week. I expect about 30 
people on 6th November, from 12h30 to 16h (UTC-3), to use IJulia notebook 
sessions to learn Julia and run some simple parallel code with a few workers. 

Would it be a problem?

Thanks,


André Lage.

On Sunday, October 25, 2015 at 6:11:19 AM UTC-3, tanmaykm wrote:
>
> Hi,
>
> from the past few days we have had some users run large parallel programs 
> on JuliaBox sessions. While in some cases they succeed, we see a lot of 
> failures due to resource constraints. Though we have plans to enable large 
> programs in future, we do not allocate enough resources for that now.
>
> Also, since JuliaBox sessions run on shared infrastructure, this affects 
> other sessions that are co-located on the same machine.  We will be putting 
> in place more checks and restrictions soon to prevent co-located sessions 
> from being impacted.
>
> We would request users to refrain from running large parallel programs for 
> now. If this is being done as part of some university class, please write 
> to us. Probably a separately provisioned cluster will be more appropriate 
> for that.
>
> - JuliaBox Team
>


[julia-users] Re: WinRPM Download failure

2015-10-26 Thread Avik Sengupta
Yes, this has been tagged. However, in this update, the source URL's have 
changed, but they wont take effect till Julia is restarted (if WinRPM has 
been loaded in that session prior to the update). 

Regards
-
Avik

On Monday, 26 October 2015 20:38:49 UTC, Tony Kelman wrote:
>
> Pretty sure I tagged that change as soon as I noticed the 13.1 downloads 
> stopped working. You likely need to Pkg.update(), be sure you don't have 
> local modifications preventing the winrpm package from updating, then 
> restart julia and do Pkg.build()



[julia-users] Dear @sprintf, you shouldn't do this

2015-10-26 Thread J Luis
It took me a while to figure out why it was erroring

Given this piece of a script

@show(sampr1, sampr2)
@show(typeof(sampr1), typeof(sampr2))
@show(@sprintf("-T%d/%d/1", sampr1[1], sampr2[1]))

it errored with an incomprehensible error

sampr1 = [-1167.0]
sampr2 = [1169.0]
typeof(sampr1) = Array{Float64,2}
typeof(sampr2) = Array{Float64,2}
@sprintf("-T%d/%d/1",sampr1[1],sampr2[1]) = "-T-1167/1169/1"
ERROR: TypeError: non-boolean (Array{Bool,2}) used in boolean context
 [inlined code] from show.jl:127

it turned out that it wanted

@show(@sprintf("-T%d/%d/1", sampr1[1,1], sampr2[1,1]))

but sampr1 & sampr2 are the result of a previous computation, so it was not at 
all obvious what was going on. Especially because accessing sampr1[1] is 
a perfectly valid statement

julia> sampr1 = zeros(1,1);
1x1 Array{Float64,2}:
 0.0

julia> sampr1[1]
0.0



[julia-users] Re: Dear @sprintf, you shouldn't do this

2015-10-26 Thread J Luis
Update. It turned out that the error was still different. Further down the 
script I had another similar line

  t = gmt(@sprintf("filter1d -Fm1 -T%d/%d/1 -E", sampr1, sampr2), 
ship_pg)

and it was this line that was causing the error (because 'sampr1, sampr2' was 
not accepted, only sampr1[1], sampr2[1]), but the error message was very 
misleading because it pointed at the first occurrence of @sprintf()

Julia v0.4 on Win 

On Monday, 26 October 2015 at 23:37:00 UTC, J Luis wrote:
>
> It took me a while to figure out why it was erroring
>
> Given this piece of a script
>
> @show(sampr1, sampr2)
> @show(typeof(sampr1), typeof(sampr2))
> @show(@sprintf("-T%d/%d/1", sampr1[1], sampr2[1]))
>
> it error ed with an incomprehensible error
>
> sampr1 = [-1167.0]
> sampr2 = [1169.0]
> typeof(sampr1) = Array{Float64,2}
> typeof(sampr2) = Array{Float64,2}
> @sprintf("-T%d/%d/1",sampr1[1],sampr2[1]) = "-T-1167/1169/1"
> ERROR: TypeError: non-boolean (Array{Bool,2}) used in boolean context
>  [inlined code] from show.jl:127
>
> it turned out that it wanted
>
> @show(@sprintf("-T%d/%d/1", sampr1[1,1], sampr2[1,1]))
>
> but the sampr1 & sampr2 are the result of a previous computation so not at 
> all obvious of what was going on. Specially because accessing sampr1[1] is 
> a perfectly valid statement
>
> julia> sampr1 = zeros(1,1);
> 1x1 Array{Float64,2}:
>  0.0
>
> julia> sampr1[1]
> 0.0
>
>

[julia-users] Re: Interest in a Seattle-Area Julia Meetup?

2015-10-26 Thread lewis
Happy also to join.

On Monday, October 19, 2015 at 8:47:06 PM UTC-7, Daniel Jones wrote:
>
> Count me in! I'm happy to present something as well. I know of a few Julia 
> users at UW, not all of whom are necessarily on github, so I suspect 
> there'd be more than 9 people interested.
>
>
> On Monday, October 19, 2015 at 12:46:02 PM UTC-7, tim@multiscalehn.com 
> wrote:
>>
>> I work for a company where we are big fans of Julia, and are using it for 
>> several projects. We have thrown around the idea of hosting a meetup. We 
>> have the space and the resources to put it on, and could provide some good 
>> content. I know there are some active Julia devs at the UW but I wanted to 
>> put out feelers to see who might be interested in attending, or even 
>> better, giving a talk or demo. I guarantee a good time will be had by all.
>>
>> - Tim
>>
>

[julia-users] Re: WinRPM Download failure

2015-10-26 Thread Tony Kelman
I don't expect the default source urls to change more than once every few 
years, except for tweaks having to do with caching.

[julia-users] Re: [ANN] MXNet.jl - Flexible and Efficient Deep Learning for Julia

2015-10-26 Thread Carlo Lucibello
How does this compare to Mocha.jl?

On Monday, 26 October 2015 at 04:27:31 UTC+1, Chiyuan Zhang wrote:
>
> MXNet.jl is the dmlc/mxnet Julia package. MXNet.jl brings flexible and 
> efficient GPU computing and state-of-the-art deep learning to Julia. Some 
> highlights of its features include:
>
>- Efficient tensor/matrix computation across multiple devices, 
>including multiple CPUs, GPUs and distributed server nodes.
>- Flexible symbolic manipulation to composite and construct 
>state-of-the-art deep learning models.
>
> Here is an example of what training a simple 3-layer MLP on MNIST looks like:
>
> using MXNet
>
> mlp = @mx.chain mx.Variable(:data) =>
>   mx.FullyConnected(name=:fc1, num_hidden=128) =>
>   mx.Activation(name=:relu1, act_type=:relu)   =>
>   mx.FullyConnected(name=:fc2, num_hidden=64)  =>
>   mx.Activation(name=:relu2, act_type=:relu)   =>
>   mx.FullyConnected(name=:fc3, num_hidden=10)  =>
>   mx.Softmax(name=:softmax)
> # data provider
> batch_size = 100
> include(joinpath(Pkg.dir("MXNet"), "/examples/mnist/mnist-data.jl"))
> train_provider, eval_provider = get_mnist_providers(batch_size)
> # setup model
> model = mx.FeedForward(mlp, context=mx.cpu())
> # optimizer
> optimizer = mx.SGD(lr=0.1, momentum=0.9, weight_decay=0.1)
> # fit parameters
> mx.fit(model, optimizer, train_provider, n_epoch=20, eval_data=eval_provider)
>
> For more details, please refer to the documentation and examples.
>
>
> Enjoy!
>
> - pluskid
>


RE: [julia-users] Re: Re: are array slices views in 0.4?

2015-10-26 Thread David Anthoff
Are there plans to throw deprecation warnings in julia 0.5 whenever one slices 
an array with [], and then reuse the [] syntax to return views in julia 0.6? 
That would be an approach consistent with previous changes in functionality, 
right?

 

I'm very much in favor of the new design, but I'm very worried about the 
transition. There seems to be an enormous potential for subtle bugs to go 
undetected for a long time… the tuple change was nicely phased in, as were the 
other breaking changes since I started using julia (at 0.2), i.e. I always had 
the impression that as long as I fixed all deprecation warnings when a new 
version came out, I would be good. But my understanding right now is that with 
the array change the behavior of slicing with [] will change drastically, with 
essentially no indicator of where in my code I might run into trouble, right?

 

From: julia-users@googlegroups.com [mailto:julia-users@googlegroups.com] On 
Behalf Of Stefan Karpinski
Sent: Monday, October 26, 2015 12:05 PM
To: Julia Users 
Subject: Re: [julia-users] Re: Re: are array slices views in 0.4?

 

On Mon, Oct 26, 2015 at 2:17 PM, Christoph Ortner <christophortn...@gmail.com> wrote:

Fabian - Many thanks for your comments. This was very helpful.

 

(c) if I want to write code now that shouldn't break with 0.5, what should I do?


I think when you need a copy, just surround your getindex with a copy function. 
(e.g. copy(x[:,10]) instead of x[:,10]). 

 

But this would lead me to make two copies. I was more interested in seeing 
whether there is a guideline on how to write code now so it doesn't have to be 
rewritten for 0.5.

 

There will be a solution in the Compat package once this change is made. It 
will probably consist of having a replacement for getindex that creates a slice 
rather than a copy so that calling copy won't result in two copies. I.e. it 
will backport the 0.5 behavior to earlier versions of Julia.

 

Regarding this change I am also more on the sceptical side. I would very much 
prefer a copy-on-write like solution like Matlab and R provide, but I don't 
know if and how this would be possible to implement, so I don't raise my voice 
here. 
To me the main benefit of this change is that it drove the main developers to 
make array views much more performant and first class members of julia. As Tim 
Holy mentioned, the actual change seems to be be very small, but it needed and 
still needs a lot of work to make it possible. 

 

My own scepticism comes from the idea that using immutable objects throughout 
prevents bugs and one should only use mutable objects sparingly (primarily for 
performance - but I thought it shouldn't be the default)

 

Copy-on-write is complex and leads to brittle performance properties that 
cannot be reasoned about locally. The semantics of R and Matlab also 
notoriously make it impossible to write efficient mutating functions – people 
generally end up writing C extensions to do that.

 

It remains to be seen how this pans out, but keep in mind that C, C++, Java, 
Fortran, Julia, Python, Ruby, etc. all use mutable non-copy-on-write arrays 
everywhere and the world has not ended. Slices are a bit different, but NumPy, 
for example, creates views for slices and that works well in the SciPy 
ecosystem.

 

Philosophically, I think that returning views from operations is problematic 
when the object being viewed is conceptually a single value – strings being a 
good example that have gone different ways in different languages. In C 
everyone thinks of strings as arrays of characters and it works pretty well 
since everyone has that in mind. In higher level languages, people stop 
thinking of strings this way, which means that making strings mutable or 
returning slices of them as views becomes problematic because it's at odds with 
how we think of strings. Arrays are the prototypical example of a 
container-like thing, so I don't think that this will be that confusing. If you 
"own" the array, then it's ok to make a slice and potentially mutate it – if 
you don't, then it's not ok. We could potentially add tooling to help enforce 
this since we know by the f! naming convention which functions should and 
shouldn't mutate their arguments.



[julia-users] missing const qualifier

2015-10-26 Thread Carlo Lucibello
It would be nice to annotate the return type of methods with a constant 
qualifier, in order to have 
an efficient and safe behaviour at the same time. 

I mean something like this:

type A
  data::Vector{Int}
end

# invalid but desirable julia code
const function getdata(a::A)
  return a.data
end 

a = A(ones(10))
data = getdata(a)

data[1] = 2  # ERROR
a.data[1] = 2 # OK
  


[julia-users] Re: missing const qualifier

2015-10-26 Thread vavasis
In 2014 when I first learned about Julia, I also suggested on this 
newsgroup that there should be a 'const' keyword as in C++ to annotate 
function arguments and return variables that are supposed to be read-only. 
 Possibly you can find the old thread with google.  I received a lot of 
feedback from experienced Julia users and core developers that convinced me 
that this is probably not a good idea.  Here are some reasons that I can 
recall from the earlier discussion that adding 'const' to Julia may not be 
a good idea.

(1) The 'const' keyword would make the multiple-dispatch system much more 
confusing because it would entail new rules about how the 'const' keyword 
affects closeness in the type hierarchy.

(2) You can already get the desired effect in Julia by defining your own 
subtype of DenseArray in which getindex works as usual but setindex! throws 
an error (see the sketch after this list).

(3) The promise that a routine won't change a 'const' argument could easily 
be defeated by aliasing (i.e., a function is invoked with a const argument, 
but another non-const argument refers to the same piece of data), so it may 
give the user a false sense of security.
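A minimal sketch of point (2), using AbstractArray rather than DenseArray for 
brevity (the type and helper names here are illustrative):

immutable ReadOnlyArray{T,N,A<:AbstractArray} <: AbstractArray{T,N}
    parent::A
end
readonly{T,N}(a::AbstractArray{T,N}) = ReadOnlyArray{T,N,typeof(a)}(a)

Base.size(r::ReadOnlyArray) = size(r.parent)
Base.getindex(r::ReadOnlyArray, i...) = getindex(r.parent, i...)   # reads pass through
Base.setindex!(r::ReadOnlyArray, v, i...) = error("array is read-only")

r = readonly(ones(10))
r[1]          # reads fine: 1.0
# r[1] = 2.0  # would throw: array is read-only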

-- Steve Vavasis







On Monday, October 26, 2015 at 10:29:34 PM UTC-4, Carlo Lucibello wrote:
>
> It would be nice to annotate the return type of methods with a constant 
> qualifier, in order to have 
> an efficient and safe behaviour at the same time. 
>
> I mean something like this:
>
> type A
>   data::Vector{Int}
> end
>
> # invalid but desiderable julia code
> const function getdata(a::A)
>   return a.data
> end 
>
> a = A(ones(10))
> data = getdata(a)
>
> data[1] = 2  # ERROR
> a.data[1] = 2 # OK
>   
>


[julia-users] Figure out how to get pointer to ge

2015-10-26 Thread Taylor Maxwell
I am moving some of my code to 0.4 and I am having trouble figuring out how 
to get a pointer to the beginning of a UInt8 matrix. In the past I did:

julia> bpt = convert(Ptr{Uint8},b)  # where b is my Uint8 matrix
Ptr{Uint8} @0x7fdd243fca70

in 0.4 I get:

julia> bpt = convert(Ptr{UInt8},b)
ERROR: MethodError: `convert` has no method matching convert(::Type{Ptr{UInt8}}, ::Array{UInt8,1})
This may have arisen from a call to the constructor Ptr{UInt8}(...),
since type constructors fall back to convert methods.
Closest candidates are:
  call{T}(::Type{T}, ::Any)
  convert{T<:Union{Int8,UInt8}}(::Type{Ptr{T<:Union{Int8,UInt8}}}, ::Cstring)
  convert{T}(::Type{Ptr{T}}, ::UInt64)

Unfortunately I am not certain any of these options match what I am looking 
for:

The method used in 0.3 is:
convert{T}(::Type{Ptr{T}},a::Array{T,N}) at pointer.jl:22
which is:
convert{T}(::Type{Ptr{T}}, a::Array{T}) = ccall(:jl_array_ptr, Ptr{T}, 
(Any,), a)
this obviously does not exist in 0.4.

My use case is that I read a UInt8 matrix from the PLINK .bed file format, 
which densely packs genetic locus genotypes two bits each, with every column 
representing a locus. With a pointer to the beginning of the UInt8 matrix, I 
can easily work my way down a column and extract the genotype calls with bit 
operations across the matrix. This kind of work is at the edge of my 
understanding (i.e. pointers and bit operations), so despite looking at some 
of the code for the suggested convert methods I am not sure how to get back 
to what I originally used with what is available in 0.4.

In 0.4 I can just make my own new convert call with the old code from 0.3 
by:
import Base.convert
convert{T}(::Type{Ptr{T}}, a::Array{T}) = ccall(:jl_array_ptr, Ptr{T}, 
(Any,), a)
and it get:

julia> bpt = convert(Ptr{UInt8},b)
Ptr{UInt8} @0x00011253ac00


But I wanted to know if there was a technical reason for that convert call to 
be removed in 0.4, and if using this "reinstated" convert call will do what 
it originally did in 0.3.
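Not from the thread, but one way to get the same address in 0.4 without 
reinstating the convert method is the existing pointer function (keep b alive 
while the raw pointer is in use):

bpt = pointer(b)       # Ptr{UInt8} to the first element of b
# pointer(b, k)        # address of the k-th element, if needed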




[julia-users] Re: PyPlot histogram does not work (for me at least) in 0.4.0 (?)

2015-10-26 Thread David P. Sanders
(You then must call Base.hist if you want the version from Base.)

On Monday, 26 October 2015 at 23:13:23 (UTC-6), David P. Sanders 
wrote:
>
>
> If you do `using PyPlot` then you can just use `hist` directly:
>
> using PyPlot
>
> x = randn(1)
> hist(x, 100)
>
On Monday, 26 October 2015 at 10:21:59 (UTC-6), Ferran Mazzanti 
wrote:
>>
>> Oh, thnks for the info...
>>
>> On Monday, October 26, 2015 at 12:25:26 PM UTC+1, Kristoffer Carlsson 
>> wrote:
>>>
>>> Read here: https://github.com/stevengj/PyCall.jl#usage 
>>> 
>>>
>>> More specifically, this section:
>>>
>>> "The biggest diffence from Python is that object attributes/members are 
>>> accessed with o[:attribute]rather than o.attribute, and you use get(o, 
>>> key) rather than o[key]. (This is because Julia does not permit 
>>> overloading the . operator yet.) See also the section on PyObject below, 
>>> as well as the pywrap function to create anonymous modules that 
>>> simulate . access (this is what @pyimportdoes). For example, using 
>>> Biopython  we can do:
>>>
>>> @pyimport Bio.Seq as s
>>> @pyimport Bio.Alphabet as a
>>> my_dna = s.Seq("AGTACACTGGT", a.generic_dna)
>>> my_dna[:find]("ACT")
>>>
>>> whereas in Python the last step would have been my_dna.find("ACT")"
>>>
>>> On Monday, October 26, 2015 at 11:38:41 AM UTC+1, Ferran Mazzanti wrote:

 That worked, thanks :) 

 But this syntax I can not understand... where can I find documentation 
 about how to do that? Just to avoid asking agains cuh kind of questions...

 Thanks again.

 On Monday, October 26, 2015 at 11:31:59 AM UTC+1, Kristoffer Carlsson 
 wrote:
>
> Change last line to:
>
> h = PyPlot.plt[:hist](x,nbins)
>
>
>
> On Monday, October 26, 2015 at 11:28:35 AM UTC+1, Ferran Mazzanti 
> wrote:
>>
>> Hi folks,
>>
>> using Linux Mint 17.1 here. I upgraded to julia 0.4.0 and now this 
>> simple code, taken from the web and tested on previous versions,
>>
>> using PyPlot
>>
>> x = randn(1000) # Values
>> nbins = 50 # Number of bins
>>
>> fig = figure("pyplot_histogram",figsize=(6,6)) # Not strictly required
>> ax = axes() # Not strictly required
>> h = PyPlot.plt.hist(x,nbins) # Histogram, PyPlot.plt required to 
>> differentiate with conflicting hist command
>>
>> Produces the following output
>>
>> LoadError: type PyObject has no field hist
>> while loading In[133], in expression starting on line 6
>>
>>  in getindex at /home/mazzanti/.julia/v0.4/PyCall/src/PyCall.jl:240
>>  in pysequence_query at 
>> /home/mazzanti/.julia/v0.4/PyCall/src/conversions.jl:781
>>  [inlined code] from 
>> /home/mazzanti/.julia/v0.4/PyCall/src/conversions.jl:797
>>  in pytype_query at 
>> /home/mazzanti/.julia/v0.4/PyCall/src/conversions.jl:826
>>  in convert at /home/mazzanti/.julia/v0.4/PyCall/src/conversions.jl:846
>>  in pycall at /home/mazzanti/.julia/v0.4/PyCall/src/PyCall.jl:399
>>  in call at /home/mazzanti/.julia/v0.4/PyCall/src/PyCall.jl:407
>>  in close_queued_figs at 
>> /home/mazzanti/.julia/v0.4/PyPlot/src/PyPlot.jl:401
>>
>> Any hint on that? Am I doing something wrong? If so, can anybody help on 
>> how to do histograms in Julia 0.4.0?
>>
>> Thanks,
>>
>> Ferran. 
>>
>>
>>

[julia-users] Re: PyPlot histogram does not work (for me at least) in 0.4.0 (?)

2015-10-26 Thread David P. Sanders

If you do `using PyPlot` then you can just use `hist` directly:

using PyPlot

x = randn(1)
hist(x, 100)

On Monday, 26 October 2015 at 10:21:59 (UTC-6), Ferran Mazzanti wrote:
>
> Oh, thnks for the info...
>
> On Monday, October 26, 2015 at 12:25:26 PM UTC+1, Kristoffer Carlsson 
> wrote:
>>
>> Read here: https://github.com/stevengj/PyCall.jl#usage 
>> 
>>
>> More specifically, this section:
>>
>> "The biggest diffence from Python is that object attributes/members are 
>> accessed with o[:attribute]rather than o.attribute, and you use get(o, 
>> key) rather than o[key]. (This is because Julia does not permit 
>> overloading the . operator yet.) See also the section on PyObject below, 
>> as well as the pywrap function to create anonymous modules that simulate 
>> . access (this is what @pyimportdoes). For example, using Biopython 
>>  we can do:
>>
>> @pyimport Bio.Seq as s
>> @pyimport Bio.Alphabet as a
>> my_dna = s.Seq("AGTACACTGGT", a.generic_dna)
>> my_dna[:find]("ACT")
>>
>> whereas in Python the last step would have been my_dna.find("ACT")"
>>
>> On Monday, October 26, 2015 at 11:38:41 AM UTC+1, Ferran Mazzanti wrote:
>>>
>>> That worked, thanks :) 
>>>
>>> But this syntax I can not understand... where can I find documentation 
>>> about how to do that? Just to avoid asking agains cuh kind of questions...
>>>
>>> Thanks again.
>>>
>>> On Monday, October 26, 2015 at 11:31:59 AM UTC+1, Kristoffer Carlsson 
>>> wrote:

 Change last line to:

 h = PyPlot.plt[:hist](x,nbins)



 On Monday, October 26, 2015 at 11:28:35 AM UTC+1, Ferran Mazzanti wrote:
>
> Hi folks,
>
> using Linux Mint 17.1 here. I upgraded to julia 0.4.0 and now this 
> simple code, taken from the web and tested on previous versions,
>
> using PyPlot
>
> x = randn(1000) # Values
> nbins = 50 # Number of bins
>
> fig = figure("pyplot_histogram",figsize=(6,6)) # Not strictly required
> ax = axes() # Not strictly required
> h = PyPlot.plt.hist(x,nbins) # Histogram, PyPlot.plt required to 
> differentiate with conflicting hist command
>
> Produces the following output
>
> LoadError: type PyObject has no field hist
> while loading In[133], in expression starting on line 6
>
>  in getindex at /home/mazzanti/.julia/v0.4/PyCall/src/PyCall.jl:240
>  in pysequence_query at 
> /home/mazzanti/.julia/v0.4/PyCall/src/conversions.jl:781
>  [inlined code] from 
> /home/mazzanti/.julia/v0.4/PyCall/src/conversions.jl:797
>  in pytype_query at 
> /home/mazzanti/.julia/v0.4/PyCall/src/conversions.jl:826
>  in convert at /home/mazzanti/.julia/v0.4/PyCall/src/conversions.jl:846
>  in pycall at /home/mazzanti/.julia/v0.4/PyCall/src/PyCall.jl:399
>  in call at /home/mazzanti/.julia/v0.4/PyCall/src/PyCall.jl:407
>  in close_queued_figs at 
> /home/mazzanti/.julia/v0.4/PyPlot/src/PyPlot.jl:401
>
> Any hint on that? Am I doing something wrong? If so, can anybody help on 
> how to do histograms in Julia 0.4.0?
>
> Thanks,
>
> Ferran. 
>
>
>

[julia-users] Re: PyPlot histogram does not work (for me at least) in 0.4.0 (?)

2015-10-26 Thread David P. Sanders
Apologies, this seems to have changed recently -- PyPlot no longer 
overwrites Base.hist.

You can do 
plt[:hist]


[julia-users] [ANN] DataStreams.jl, CSV.jl, SQLite.jl New Releases

2015-10-26 Thread Jacob Quinn
Hey everyone,

I know it's been mentioned here and there, but now it's official: two new 
packages have been officially released for 0.4, DataStreams.jl and CSV.jl. 
SQLite.jl has also gone through a big overhaul to modernize the code and 
rework the data processing interface.

DataStreams.jl is a new package with a lofty goal and not a lot of code. It 
aims to put forth a data ingestion/processing framework that can be used by 
all types of data-reader/ingestion/source/sink/writer type packages. The 
basic idea is that for a type of data source, defining a `Source` and 
`Sink` types, and then implementing the various combinations of 
`Data.stream!(::Source, ::Sink)` methods that make sense. For example, 
CSV.jl and SQLite.jl now both have `Source` and `Sink` types, and I've 
simply defined the following methods between the two packages:

Data.stream!(source::CSV.Source, sink::SQLite.Sink)  =>  parse a CSV file 
represented by `source` directly into the SQLite table represented by `sink`
Data.stream!(source::SQLite.Source, sink::CSV.Sink)  =>  fetch the SQLite 
table represented by `source` directly out to a CSV file represented by 
`sink`

The DataStreams.jl package also defines a `Data.Table` type which is simply:

type Table{T}
schema::Data.Schema
data::T
end

this is meant as a "backend-agnostic" kind of type that represents an 
in-memory Julia structure. Currently the default constructors put a 
`Vector{NullableVector}` as the `.data` field, but it could really be 
anything you wanted (e.g. DataFrame, Matrix, etc.). The aim of `Data.Table` 
certainly isn't to replace something like DataFrames, but rather to act as 
a default "pure julia type" with the DataStreams.jl framework. Indeed, to 
do a non-copying convert of a `Data.Table` to a `DataFrame` is just: 
`DataFrame(dt::Data.Table)`.
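To make the flow concrete, a rough sketch only (the CSV.Source call mirrors the 
package examples, while the Data.Table-from-Source constructor and the file name 
are assumptions here, not documented API):

using CSV, DataStreams, DataFrames

f  = CSV.Source("train.csv"; null="NA")   # file name is a placeholder
dt = Data.Table(f)                        # assumed: materialize the Source in memory
df = DataFrame(dt)                        # the non-copying convert described above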

You can see more details in the blog post I wrote up 
here: http://julialang.org/blog/2015/10/datastreams/

A big thanks to a number of people as well who have helped encourage and 
develop these packages with me. I truly love the community and caliber of 
people around here and just want to say thanks.

DataStreams.jl: https://github.com/JuliaDB/DataStreams.jl
CSV.jl: https://github.com/JuliaDB/CSV.jl
SQLite.jl: https://github.com/JuliaDB/SQLite.jl

-Jacob


Re: [julia-users] 900mb csv loading in Julia failed: memory comparison vs python pandas and R

2015-10-26 Thread Jacob Quinn
Just a quick follow-up here: after some benchmarking of my own on a Windows
machine, the culprit ended up being a deathly slow `strtod` system library
function on Windows. It takes jumping through a few hoops to get the
performance right, which I discovered is already done in Base Julia; it just
wasn't exported.

My PR to Base Julia has been accepted and is backport pending, so once Julia
0.4.1 is released, CSV.jl will be updated to use the new code and will require
that version of Julia to enable similarly great performance cross-platform.

-Jacob

On Wed, Oct 14, 2015 at 3:51 AM, bernhard  wrote:

> with readtable the julia process goes up to 6.3 GB and stays there. It
> takes 95 seconds. (@time shows "373M, allocations: 13GB, 7% GC time")
> I will try Jacob's approach again.
>
>
>> On Wednesday, 14 October 2015 at 10:59:06 UTC+2, Milan Bouchet-Valat wrote:
>>
>> > On Wednesday, 14 October 2015 at 00:15 -0700, Grey Marsh wrote:
>> > Done with the testing in the cloud instance.
>> > It works and the timings in my case
>> >
>> > 58.346345 seconds (694.00 M allocations: 12.775 GB, 2.63% gc time)
>> >
>> > result of "top" command:  VIRT: 11.651g RES: 3.579g
>> >
>> > ~13gb memory for a 900mb file!
>> > Thanks to Jacob atleast I was able check that the process works.
>> As Yichao noted, at no point in the import did Julia use 13GB of RAM.
>> That's the total amount of memory that was allocated and freed by
>> pieces (694M of them). You'd need to watch the Julia process while
>> working to see what's the maximum value of RES when importing.
>>
>>
>> Regards
>>
>> > On Wednesday, October 14, 2015 at 12:10:02 PM UTC+5:30, bernhard
>> > wrote:
>> > > Jacob
>> > >
>> > > I do run into the same issue as Grey. the step
>> > > ds = DataStreams.DataTable(f);
>> > > gets stuck.
>> > > I also tried this with a smaller file (150MB) which I have. This
>> > > file is read by readtable in 15s. But the DataTable function
>> > > freezes. I use 0.4 on Windows 7.
>> > >
>> > > I note that your code did work on a tiny file though (40 lines or
>> > > so).
>> > > I do get a dataframe, but when I show it (by simply typing df, or
>> > > dump(df)) Julia crashes...
>> > >
>> > > Bernhard
>> > >
>> > >
>> > > On Wednesday, 14 October 2015 at 06:54:16 UTC+2, Grey Marsh wrote:
>> > > > I am using Julia 0.4 for this purpose, if that's what is meant by
>> > > > "0.4 only".
>> > > >
>> > > > On Wednesday, October 14, 2015 at 9:53:09 AM UTC+5:30, Jacob
>> > > > Quinn wrote:
>> > > > > Oh yes, I forgot to mention that the CSV/DataStreams code is
>> > > > > 0.4 only. Definitely interested to hear about any
>> > > > > results/experiences though.
>> > > > >
>> > > > > -Jacob
>> > > > >
>> > > > > On Tue, Oct 13, 2015 at 10:11 PM, Yichao Yu 
>> > > > > wrote:
>> > > > > > On Wed, Oct 14, 2015 at 12:02 AM, Grey Marsh <
>> > > > > > kd.k...@gmail.com> wrote:
>> > > > > > > @Jacob, I tried your approach. Somehow it got stuck in the
>> > > > > > "@time ds =
>> > > > > > > DataStreams.DataTable(f)" line. After 15 minutes running,
>> > > > > > julia is using
>> > > > > > > ~500mb and 1 cpu core with no sign of end. The memory use
>> > > > > > has been almost
>> > > > > > > same for the whole duration of 15 minutes. I'm letting it
>> > > > > > run, hoping that
>> > > > > > > it finishes after some time.
>> > > > > > >
>> > > > > > > From your run, I can see it needs 12gb memory which is
>> > > > > > higher than my
>> > > > > > > machine memory of 8gb. could it be the problem?
>> > > > > >
>> > > > > > 12GB is the total number of memory ever allocated during the
>> > > > > > timing. A
>> > > > > > lot of them might be intermediate results that are freed by
>> > > > > > the GC.
>> > > > > > Also, from the output of @time, it looks like 0.4.
>> > > > > >
>> > > > > > >
>> > > > > > > On Wednesday, October 14, 2015 at 2:28:09 AM UTC+5:30,
>> > > > > > Jacob Quinn wrote:
>> > > > > > >>
>> > > > > > >> I'm hesitant to suggest, but if you're in a bind, I have
>> > > > > > an experimental
>> > > > > > >> package for fast CSV reading. The API has stabilized
>> > > > > > somewhat over the last
>> > > > > > >> week and I'm planning a more broad release soon, but I'd
>> > > > > > still consider it
>> > > > > > >> alpha mode. That said, if anyone's willing to give it a
>> > > > > > drive, you just need
>> > > > > > >> to
>> > > > > > >>
>> > > > > > >> Pkg.add("Libz")
>> > > > > > >> Pkg.add("NullableArrays")
>> > > > > > >> Pkg.clone("https://github.com/quinnj/DataStreams.jl";)
>> > > > > > >> Pkg.clone("https://github.com/quinnj/CSV.jl";)
>> > > > > > >>
>> > > > > > >> With the original file referenced here I get:
>> > > > > > >>
>> > > > > > >> julia> reload("CSV")
>> > > > > > >>
>> > > > > > >> julia> f =
>> > > > > > CSV.Source("/Users/jacobquinn/Downloads/train.csv";null="NA")
>> > > > > > >> CSV.Source: "/Users/jacobquinn/Downloads/train.csv"
>> > > > > > >> delim: ','
>> > > > > > >> quotechar: '"'
>> > > > > > >> escapechar: '\\'
>

[julia-users] Re: JuliaBox limitations on parallel computing

2015-10-26 Thread tanmaykm
Hi Chris,

a few minutes of running parallel Julia will not really have a negative 
impact.
The parallel programs I was referring to were 100 odd workers each, and 
there were more than one users running similar tasks at some point which 
was too much for a single machine to handle.

Also, we have not banned you or anyone from JuliaBox because of this. :)
Do let us know if you are still unable to login.

Cheers,
Tanmay

On Tuesday, October 27, 2015 at 10:53:33 AM UTC+5:30, Christopher Fusting 
wrote:
>
> Hi!
>
> I believe this is my fault. I've been running some items in parallel for 
> class. I believe only one session ran away (1+ hours) while the others were 
> < 10 minutes. Sorry for the trouble, I didn't realize there was a negative 
> impact on others / the infrastructure. In the future I'll run processor 
> intensive items on my local machine.
>
> Incidentally I am unable to login to Juliabox now (is this related?). I 
> have not backed up my work to github and as such would respectfully request 
> the ability to copy my files over if I have been banned from the service. 
> Some of them have due dates :).
>
> Cheers,
>
> _Chris
>
> On Sunday, October 25, 2015 at 5:11:19 AM UTC-4, tanmaykm wrote:
>>
>> Hi,
>>
>> from the past few days we have had some users run large parallel programs 
>> on JuliaBox sessions. While in some cases they succeed, we see a lot of 
>> failures due to resource constraints. Though we have plans to enable large 
>> programs in future, we do not allocate enough resources for that now.
>>
>> Also, since JuliaBox sessions run on shared infrastructure, this affects 
>> other sessions that are co-located on the same machine.  We will be putting 
>> in place more checks and restrictions soon to prevent co-located sessions 
>> from being impacted.
>>
>> We would request users to refrain from running large parallel programs 
>> for now. If this is being done as part of some university class, please 
>> write to us. Probably a separately provisioned cluster will be more 
>> appropriate for that.
>>
>> - JuliaBox Team
>>
>

[julia-users] Re: JuliaBox limitations on parallel computing

2015-10-26 Thread tanmaykm
Hi André,
I think it should be fine as long as the code is not compute-intensive and 
uses fewer than 10 workers.
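
For a rough sense of scale, here is a minimal illustrative sketch (not an 
official JuliaBox example) of the kind of lightweight exercise that stays 
within those limits on Julia 0.4 - a handful of workers running short tasks:

addprocs(4)                            # well under the ~10-worker guideline

@everywhere slow_sqrt(x) = (sleep(0.1); sqrt(x))

pmap(slow_sqrt, 1:8)                   # a few cheap calls spread across the workers

total = @parallel (+) for i = 1:10000  # a small parallel reduction
    Int(rand(Bool))
end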
Best,
Tanmay

On Tuesday, October 27, 2015 at 10:53:33 AM UTC+5:30, André Lage wrote:
>
> Hi,
>
> I'll be giving a hands-on tutorial on Julia next week. I expect about 30 
> people in 6th November, from 12h30 to 16h (UTC-3), to use IJulia notebook 
> sessions to learn Julia and run some simple parallel code with few Workers. 
>
> Would it be a problem?
>
> Thanks,
>
>
> André Lage.
>
> On Sunday, October 25, 2015 at 6:11:19 AM UTC-3, tanmaykm wrote:
>>
>> Hi,
>>
>> from the past few days we have had some users run large parallel programs 
>> on JuliaBox sessions. While in some cases they succeed, we see a lot of 
>> failures due to resource constraints. Though we have plans to enable large 
>> programs in future, we do not allocate enough resources for that now.
>>
>> Also, since JuliaBox sessions run on shared infrastructure, this affects 
>> other sessions that are co-located on the same machine.  We will be putting 
>> in place more checks and restrictions soon to prevent co-located sessions 
>> from being impacted.
>>
>> We would request users to refrain from running large parallel programs 
>> for now. If this is being done as part of some university class, please 
>> write to us. Probably a separately provisioned cluster will be more 
>> appropriate for that.
>>
>> - JuliaBox Team
>>
>

[julia-users] Re: JuliaBox limitations on parallel computing

2015-10-26 Thread tanmaykm
Hi Iliyan, this sounds great!


On Tuesday, October 27, 2015 at 10:53:33 AM UTC+5:30, Iliyan Zarov wrote:
>
> For anyone wishing to run large parallel programs I may be able to help - 
> I'm working on a cloud computing platform for Julia that takes care of 
> provisioning and managing all resources. More info at https://evoqus.com.
>
> It's currently in beta, get in touch if interested.
>
>
> Best,
>
> Iliyan
>
> On Monday, October 26, 2015 at 6:41:57 PM UTC, cdm wrote:
>>
>>
>> perhaps time for an asterisk on the JuliaBox banner:
>>
>> "The Julia community is doing amazing things. We want you in on it!*"
>>
>> * ... but not massively in parallel, consuming the resources of the 
>> community.
>>
>>
>>
>> there are positives and negatives to this sort of problem ...
>>
>> the fact that the problem can even arise says great things
>> about the capabilities of the language and the service.
>>
>> well done ...
>>
>>
>>
>> On Sunday, October 25, 2015 at 2:11:25 AM UTC-7, Tanmay K. Mohapatra 
>> wrote:
>>>
>>> Hi,
>>>
>>> from the past few days we have had some users run large parallel 
>>> programs on JuliaBox sessions. While in some cases they succeed, we see a 
>>> lot of failures due to resource constraints. Though we have plans to enable 
>>> large programs in future, we do not allocate enough resources for that now.
>>>
>>> Also, since JuliaBox sessions run on shared infrastructure, this affects 
>>> other sessions that are co-located on the same machine.  We will be putting 
>>> in place more checks and restrictions soon to prevent co-located sessions 
>>> from being impacted.
>>>
>>> We would request users to refrain from running large parallel programs 
>>> for now. If this is being done as part of some university class, please 
>>> write to us. Probably a separately provisioned cluster will be more 
>>> appropriate for that.
>>>
>>> - JuliaBox Team
>>>
>>

[julia-users] Moving from 0.3 to 0.4

2015-10-26 Thread Phil Tomson
Are there any docs on moving from 0.3 to 0.4? Or do we just look in the 
changelog?

I know some things have been deprecated and other things added. I'm also 
looking for a "best practices" sort of guideline for 0.4 - I suspect some 
practices from 0.3 aren't recommended now in 0.4.
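
For reference, NEWS.md in the Julia repo is the closest thing to a migration 
guide; as a rough, non-exhaustive sketch, these are the kinds of deprecations 
that tend to show up first when porting 0.3 code:

# Dict literals: 0.3's ["a" => 1, "b" => 2] is deprecated in 0.4
d = Dict("a" => 1, "b" => 2)

# Untyped array literals: {1, "two", 3.0} are deprecated
v = Any[1, "two", 3.0]

# Lowercase conversion/parsing functions are deprecated
i = parse(Int, "42")        # was int("42")
n = round(Int, 3.7)         # was iround(3.7)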


Re: [julia-users] 900mb csv loading in Julia failed: memory comparison vs python pandas and R

2015-10-26 Thread bernhard
Thanks. I appreciate your efforts.
Looking forward to 0.4.1 in that case.

On Tuesday, October 27, 2015 at 06:30:32 UTC+1, Jacob Quinn wrote:
>
> Just a quick follow-up here: after some benchmarking of my own on a 
> Windows machine, the culprit ended up being a deathly slow `strtod` system 
> library function on Windows. It takes jumping through a few hoops to get 
> the performance right, which I discovered is already done in Base Julia - 
> it just wasn't exported.
>
> My PR to Base Julia  has 
> been accepted and is backport pending, so once Julia 0.4.1 is released, 
> CSV.jl will be updated to use the new code and will require that version of 
> Julia to enable similar great performance cross-platform.
>
> -Jacob
>
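
(To put some numbers on why float parsing dominates CSV ingest - a quick, 
purely illustrative micro-benchmark, not from the thread: a 900 MB numeric 
file means parsing tens of millions of values, each hitting the same 
low-level number-parsing path.)

strs = [string(rand()) for i = 1:10^6]   # a million decimal strings
@time [parse(Float64, s) for s in strs]  # exercises the float parser a million times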
> On Wed, Oct 14, 2015 at 3:51 AM, bernhard 
> > wrote:
>
>> With readtable the Julia process goes up to 6.3 GB and stays there. It 
>> takes 95 seconds. (@time shows "373M allocations: 13GB, 7% GC time")
>> I will try Jacob's approach again.
>>
>>
>> On Wednesday, October 14, 2015 at 10:59:06 UTC+2, Milan Bouchet-Valat wrote:
>>>
>>> On Wednesday, October 14, 2015 at 00:15 -0700, Grey Marsh wrote: 
>>> > Done with the testing in the cloud instance. 
>>> > It works, and here are the timings in my case: 
>>> > 
>>> > 58.346345 seconds (694.00 M allocations: 12.775 GB, 2.63% gc time) 
>>> > 
>>> > result of "top" command:  VIRT: 11.651g RES: 3.579g 
>>> > 
>>> > ~13gb memory for a 900mb file! 
>>> > Thanks to Jacob, at least I was able to check that the process works. 
>>> As Yichao noted, at no point in the import did Julia use 13GB of RAM. 
>>> That's the total amount of memory that was allocated and freed in 
>>> pieces (694M of them). You'd need to watch the Julia process while 
>>> working to see what's the maximum value of RES when importing. 
>>>
>>>
>>> Regards 
>>>
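
(A toy example of the same point, not from the thread: the allocation figure 
@time prints is cumulative, so it can dwarf what the process ever holds at 
once.)

# Each iteration allocates an ~80 MB vector that becomes garbage right away,
# so @time reports roughly 8 GB allocated even though resident memory (RES in
# `top`) never grows much beyond one vector's worth.
@time for i = 1:100
    v = rand(10^7)    # ~80 MB of Float64
    sum(v)
end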
>>> > On Wednesday, October 14, 2015 at 12:10:02 PM UTC+5:30, bernhard 
>>> > wrote: 
>>> > > Jacob 
>>> > > 
>>> > > I do run into the same issue as Grey. The step 
>>> > > ds = DataStreams.DataTable(f); 
>>> > > gets stuck. 
>>> > > I also tried this with a smaller file (150MB) which I have. This 
>>> > > file is read by readtable in 15s. But the DataTable function 
>>> > > freezes. I use 0.4 on Windows 7. 
>>> > > 
>>> > > I note that your code did work on a tiny file though (40 lines or 
>>> > > so). 
>>> > > I do get a dataframe, but when I show it (by simply typing df, or 
>>> > > dump(df)) Julia crashes... 
>>> > > 
>>> > > Bernhard 
>>> > > 
>>> > > 
>>> > > On Wednesday, October 14, 2015 at 06:54:16 UTC+2, Grey Marsh wrote: 
>>> > > > I am using Julia 0.4 for this purpose, if that's what is meant by 
>>> > > > "0.4 only". 
>>> > > > 
>>> > > > On Wednesday, October 14, 2015 at 9:53:09 AM UTC+5:30, Jacob 
>>> > > > Quinn wrote: 
>>> > > > > Oh yes, I forgot to mention that the CSV/DataStreams code is 
>>> > > > > 0.4 only. Definitely interested to hear about any 
>>> > > > > results/experiences though. 
>>> > > > > 
>>> > > > > -Jacob 
>>> > > > > 
>>> > > > > On Tue, Oct 13, 2015 at 10:11 PM, Yichao Yu  
>>> > > > > wrote: 
>>> > > > > > On Wed, Oct 14, 2015 at 12:02 AM, Grey Marsh < 
>>> > > > > > kd.k...@gmail.com> wrote: 
>>> > > > > > > @Jacob, I tried your approach. Somehow it got stuck in the 
>>> > > > > > "@time ds = 
>>> > > > > > > DataStreams.DataTable(f)" line. After 15 minutes running, 
>>> > > > > > julia is using 
>>> > > > > > > ~500mb and 1 cpu core with no sign of end. The memory use 
>>> > > > > > has been almost 
>>> > > > > > > same for the whole duration of 15 minutes. I'm letting it 
>>> > > > > > run, hoping that 
>>> > > > > > > it finishes after some time. 
>>> > > > > > > 
>>> > > > > > > From your run, I can see it needs 12gb memory which is 
>>> > > > > > higher than my 
>>> > > > > > > machine memory of 8gb. Could it be the problem? 
>>> > > > > > 
>>> > > > > > 12GB is the total amount of memory ever allocated during the 
>>> > > > > > timing. A 
>>> > > > > > lot of it might be intermediate results that are freed by 
>>> > > > > > the GC. 
>>> > > > > > Also, from the output of @time, it looks like 0.4. 
>>> > > > > > 
>>> > > > > > > 
>>> > > > > > > On Wednesday, October 14, 2015 at 2:28:09 AM UTC+5:30, 
>>> > > > > > Jacob Quinn wrote: 
>>> > > > > > >> 
>>> > > > > > >> I'm hesitant to suggest, but if you're in a bind, I have 
>>> > > > > > an experimental 
>>> > > > > > >> package for fast CSV reading. The API has stabilized 
>>> > > > > > somewhat over the last 
>>> > > > > > >> week and I'm planning a more broad release soon, but I'd 
>>> > > > > > still consider it 
>>> > > > > > >> alpha mode. That said, if anyone's willing to give it a 
>>> > > > > > drive, you just need 
>>> > > > > > >> to 
>>> > > > > > >> 
>>> > > > > > >> Pkg.add("Libz") 
>>> > > > > > >> Pkg.add("NullableArrays") 
>>> > > > > > >> Pkg.clone("https://github.com/quinnj/DataStreams.jl";) 
>>> > > > > > >> Pkg.clone("https://github.com/quinnj/CSV.jl";) 
>>> > > > > > >> 
>>> > > > > > >> With t