Re: [julia-users] Re: How can I find the common elements of two matrix(or arrays)?

2016-07-21 Thread Joshua Ballanco
Use `enumerate`:

julia> a = [1,3,5,8]
julia> b = [1,2,5,7]
julia> intersect(enumerate(a), enumerate(b))
2-element Array{Tuple{Int64,Int64},1}:
 (1,1)
 (3,5)
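Note that `enumerate` pairs each value with its index, so the intersection
above only reports values that sit at the same position in both arrays. For
indices of common values wherever they occur, a runnable sketch along the
lines of Cedric's suggestion below (Julia 0.4):

a, b = [1,3,5,8], [1,2,5,7]
inter = intersect(a, b)                    # common values: [1, 5]
ia = Int[findfirst(a, x) for x in inter]   # their indices in a
ib = Int[findfirst(b, x) for x in inter]   # their indices in b
# a[ia] == b[ib] == inter, so ia and ib pair up element-wise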


On July 5, 2016 at 13:20:15, siyu song (siyuphs...@gmail.com) wrote:

Good to know this method. Thanks a lot.

On Wednesday, July 6, 2016 at 1:45:21 AM UTC+9, Cedric St-Jean wrote:
>
> In Julia, if speed isn't too important, this gives the same results:
>
> a, b = [-1,-2,-3], [-3,-4,-5,-2]
> inter = intersect(a, b)
> (Int[findfirst(a, x) for x in inter], Int[findfirst(b, x) for x in inter])
>
> And it should be a good deal faster than the MATLABism. Other functions
> you might find useful: ind2sub, div (integer division)
>
> On Tue, Jul 5, 2016 at 12:20 PM, siyu song wrote:
>
>> Thanks, Fred, for your answer. But in fact I want to know the indices of
>> the common elements of two integer vectors (elements are all different in
>> each vector).
>> For example, v1 = [1,2,3] and v2 = [3,4,5,2]. So the answer should be
>> common_index1 = [2,3], common_index2 = [1,4].
>> I use a function as
>> function find_common(a,b)
>>a = reshape(a,length(a),1);
>>b = reshape(b,1,length(b));
>>la = length(a);
>>lb = length(b);
>>a = a[:,ones(1,lb)];
>>b = b[ones(la,1),:];
>>comab = find(x->x==true,a .== b);
>>comab = comab.';
>>coma = mod(comab+la-1,la)+1;
>>comb = floor(Int64,(comab+la-1)/la);
>>return coma,comb;
>> end
>>
>> So coma and comb are exactly what I want. In MATLAB this is easy to do.
>> But with Julia, I haven't thought of a clever answer yet.
>> In MATLAB we can simply get coma and comb by [coma, comb] = find(a==b).
>>
>> On Tuesday, July 5, 2016 at 7:02:34 PM UTC+9, Fred wrote:
>>
>>> julia> a=[1,3,5,7]
>>> 4-element Array{Int64,1}:
>>>  1
>>>  3
>>>  5
>>>  7
>>>
>>>
>>> julia> b=[2,3,5,6,7]
>>> 5-element Array{Int64,1}:
>>>  2
>>>  3
>>>  5
>>>  6
>>>  7
>>>
>>>
>>> julia> intersect(a,b)
>>> 3-element Array{Int64,1}:
>>>  3
>>>  5
>>>  7
>>>
>>>
>>> julia> union(a,b)
>>> 6-element Array{Int64,1}:
>>>  1
>>>  3
>>>  5
>>>  7
>>>  2
>>>  6
>>>
>>>
>>>
>>> On Monday, July 4, 2016 at 04:18:10 UTC+2, siyu song wrote:

 But intersect doesn't tell us the index of the elements in the
 matrix (array), I think.

>>>
>


[julia-users] Working with DataFrame columns types that are Nullable

2016-07-21 Thread John Best

I've got ODBC.jl set up to retrieve a couple of queries. This works, but it 
is returning a DataFrame with column eltypes of Nullable{Int64}, 
Nullable{Dec64}, etc. I'd like to convert the numeric element types to 
Float64 for use in my analysis (which was written based on reading .csv's 
of the data). I would try to present an example, but I can't seem to 
construct a basic DataFrame with NullableArray columns without getting:

ERROR: MethodError: `upgrade_vector` has no method matching upgrade_vector(::NullableArrays.NullableArray{Int64,1})
WARNING: Error showing method candidates, aborted

 in setindex! at 
/home/jkbest/.julia/v0.4/DataFrames/src/dataframe/dataframe.jl:368
 in DataFrame at 
/home/jkbest/.julia/v0.4/DataFrames/src/dataframe/dataframe.jl:104

This is also the error I get when I try to manually convert a column, i.e.

df[:colA] = NullableArrays.NullableArray{Float64}(df[:colA])

This is the first time I've tried working with NullableArrays. Is there any 
way to convert a DataFrame of NullableArrays to a DataFrame of DataArrays? 
And how would I do that? At least I know that my existing code works for 
that.

Thanks,
John
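For reference, a minimal sketch of pulling values out of a NullableArray by
hand (hypothetical data; assumes no null entries, since `get` on a null
Nullable throws):

using NullableArrays

na = NullableArray([1, 2, 3])       # stand-in for a retrieved column
v = Float64[get(x) for x in na]     # plain Vector{Float64}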


[julia-users] Re: Julia at the SIAM Annual Conference

2016-07-21 Thread Chris Rackauckas
Wait until next year. I can't present on Julia codes there until they're 
published (adviser's rules). I think in general that would apply to a lot 
of people: started getting involved during v0.3, did a project in v0.4 
which is now submitted, but won't be out there to present until next year. 
With the way the package ecosystem is looking though, I hope we can have a 
Julia meetup!

On Monday, July 11, 2016 at 4:33:49 PM UTC-7, Xiangxi Gao wrote:
>
> So I am crashing the SIAM annual conference held at the Boston Westin 
> Hotel this year from 7/11 to 7/15 and noticed a lot of Mathworks folks but 
> sadly no signs of Julia (yet). This type of conference seems like a good way 
> to promote Julia, especially when a lot of those attending are doing 
> something in technical computing. 
>


[julia-users] Re: Julia at the SIAM Annual Conference

2016-07-21 Thread Sheehan Olver
Julia Computing sponsored the bags. Though I was surprised there was no 
booth...

On Tuesday, July 12, 2016 at 9:33:49 AM UTC+10, Xiangxi Gao wrote:
>
> So I am crashing the SIAM annual conference held at the Boston Westin 
> Hotel this year from 7/11 to 7/15 and noticed a lot of Mathworks folks but 
> sadly no signs of Julia (yet). This type of conference seems like a good way 
> to promote Julia, especially when a lot of those attending are doing 
> something in technical computing. 
>


Re: [julia-users] A foolproof question about ylim

2016-07-21 Thread Yichao Yu
On Thu, Jul 21, 2016 at 7:03 PM,   wrote:
>
>
> How can I get ylim when using PyPlot? I tried all the suggestions I found on
> stackoverflow. None of them works, e.g. get_ylim(). Here is the error:
>
> ERROR: LoadError: UndefVarError: get_ylim not defined
>
> Can anyone give me an example of getting the current ylim? Thanks!!


Err, `ylim()`? I believe this is how you do that in Python too.
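A quick sketch of the suggestion (assumes a figure already exists):

using PyPlot
plot(1:10)
lo, hi = ylim()      # with no arguments, ylim() returns the current limits
ylim(lo, hi + 1.0)   # with arguments, it sets new limits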


Re: [julia-users] Re: accessing an expression's global scope from macro

2016-07-21 Thread Yichao Yu
On Thu, Jul 21, 2016 at 7:02 PM, Marius Millea  wrote:
>
>
> On Thu, Jul 21, 2016 at 11:37 PM, Cedric St-Jean wrote:
>>
>> Neat macro.
>>
>>>
>>> For this though, my macro needs to somehow figure out that "inc" was also
>>> defined with @self (since it shouldn't blindly add self as a first arg to
>>> other non-@self'ed function calls). Is this possible in Julia?
>>
>>
>> You could have a global Set that would contain the names of the functions
>> that were defined with @self. But IMO this is going to bite you at one point
>> or another.
>
>
> Yeah, certainly a possibility, although even this doesn't seem that robust,
> since you're only doing this based on function name, and you don't know if
> it's referring to a different function in any given call environment. I'm
> starting to doubt it's truly possible at compile time, although still
> thinking...

No it's not.

>
>
>
>>
>>
>> FYI Mauro's package has something similar.
>
>
> Some interesting stuff in there, thanks!
>
>>
>>
>> I would suggest using a global variable, if you want to avoid explicitly
>> passing `self` all over the place. It would look like this:
>>
>> const self = Array{mytype}()   # trick to avoid the globals' poor
>> performance
>>
>> @self function foo()
>>x = x + 1   # expands into self[].x = self[].x + 1
>> end
>>
>> @with_self(mytype(200)) do
>># expands into
>># try
>>#... save the current value of self
>>#global self[] = mytype(200)
>>#... code
>># finally
>>#global self[] = ...restore previous value
>># end
>>...
>> end
>>
>> I used this idiom in Common Lisp all the time. It's strictly equivalent to
>> passing the object around to every function, and doesn't break the
>> "functionalness" of the code.
>>
>> Cédric
>>
>>
>> On Thursday, July 21, 2016 at 4:01:20 PM UTC-4, Marius Millea wrote:
>>>
>>> In an attempt to make some numerical code (i.e. something that's basically
>>> just a bunch of equations) more readable, I am trying to write a macro that
>>> lets me write the code more succinctly. The code uses parameters from some
>>> data structure, call it "mytype", so it's littered with "t.a", "t.b", etc.,
>>> where t::mytype. My macro basically splices in the "t." part for me. It's
>>> kind of like how C++ member functions automatically access the class's
>>> fields, as an example. To my amazement / growing love of Julia, I actually
>>> managed to hack it together without too much difficulty, it looks like this,
>>>
>>>
>>> macro self(func)
>>> @assert func.head == :function
>>>
>>> # add "self" as a first function argument
>>> insert!(func.args[1].args,2,:(self::mytype))
>>>
>>>
>>> # recurse through AST and rename X to self.X if
>>> # its a fieldname of mytype
>>> function visit(ex)
>>> if typeof(ex) == Expr
>>> ex.args = map(visit,ex.args)
>>> elseif (typeof(ex) == Symbol) & (ex in fieldnames(mytype))
>>> return :(self.$ex)
>>> end
>>> ex
>>> end
>>> func.args[2] = visit(func.args[2])
>>>
>>> show(func) # print the edited function so we can see it in action
>>>
>>> :($(esc(func)))
>>> end
>>>
>>>
>>>
>>>
>>> Here it is in action:
>>>
>>> > @self function inc()
>>> x = x + 1
>>> end
>>>
>>>
>>> :(function inc(self::mytype)
>>> self.x = self.x + 1
>>> end)
>>>
>>>
>>> inc (generic function with 1 method)
>>>
>>>
>>>
>>>
>>> > inc(mytype(0))
>>> 1
>>>
>>>
>>>
>>> where I'm assuming I've defined mytype as
>>>
>>> type mytype
>>> x
>>> end
>>>
>>>
>>>
>>> As you can see, all it did was add self::mytype as an arg and replace x
>>> with self.x everywhere it found it. This is also super nice because there is
>>> zero run-time overhead vs. having written the "self." myself, everything
>>> happens at compile time.
>>>
>>> Now for the question. I'd also like to be able to automatically pass the
>>> "self" argument to functions, so that I could write something like,
>>>
>>> @self function inc2()
>>> inc()
>>> inc()
>>> end
>>>
>>>
>>>
>>> and it would produce
>>>
>>> function inc2(self::mytype)
>>> inc(self)
>>> inc(self)
>>> end
>>>
>>>
>>>
>>> For this though, my macro needs to somehow figure out that "inc" was also
>>> defined with @self (since it shouldn't blindly add self as a first arg to
>>> other non-@self'ed function calls). Is this possible in Julia? I suppose
>>> somehow the macro must access the global scope where the expression is being
>>> evaluated? I'm not entirely sure that's doable. I'm happy to take any tips
>>> how to achieve this though, especially ones incurring minimal overhead for
>>> the rewritten function. Thanks!
>>>
>


[julia-users] A foolproof question about ylim

2016-07-21 Thread chobbes158


How can I get ylim when using PyPlot? I tried all the suggestions I found 
on stackoverflow. None of them works, e.g. get_ylim(). Here is the error:

ERROR: LoadError: UndefVarError: get_ylim not defined

Can anyone give me an example of getting the current ylim? Thanks!!


Re: [julia-users] Re: accessing an expression's global scope from macro

2016-07-21 Thread Marius Millea
On Thu, Jul 21, 2016 at 11:37 PM, Cedric St-Jean 
wrote:

> Neat macro.
>
>
>> For this though, my macro needs to somehow figure out that "inc" was also
>> defined with @self (since it shouldn't blindly add self as a first arg to
>> other non-@self'ed function calls). Is this possible in Julia?
>>
>
> You could have a global Set that would contain the names of the functions
> that were defined with @self. But IMO this is going to bite you at one
> point or another.
>

Yeah, certainly a possibility, although even this doesn't seem that robust,
since you're only doing this based on function name, and you don't know if
it's referring to a different function in any given call environment. I'm
starting to doubt it's truly possible at compile time, although still
thinking...




>
> FYI Mauro's package has something similar.
>

Some interesting stuff in there, thanks!


>
> I would suggest using a global variable, if you want to avoid explicitly
> passing `self` all over the place. It would look like this:
>
> const self = Array{mytype}()   # trick to avoid the globals' poor
> performance
>
> @self function foo()
>x = x + 1   # expands into self[].x = self[].x + 1
> end
>
> @with_self(mytype(200)) do
># expands into
># try
>#... save the current value of self
>#global self[] = mytype(200)
>#... code
># finally
>#global self[] = ...restore previous value
># end
>...
> end
>
> I used this idiom in Common Lisp all the time. It's strictly equivalent to
> passing the object around to every function, and doesn't break the
> "functionalness" of the code.
>
> Cédric
>
>
> On Thursday, July 21, 2016 at 4:01:20 PM UTC-4, Marius Millea wrote:
>>
>> In an attempt to make some numerical code (i.e. something that's basically
>> just a bunch of equations) more readable, I am trying to write a macro that
>> lets me write the code more succinctly. The code uses parameters from some
>> data structure, call it "mytype", so it's littered with "t.a", "t.b", etc.,
>> where t::mytype. My macro basically splices in the "t." part for me.
>> It's kind of like how C++ member functions automatically access the class's
>> fields, as an example. To my amazement / growing love of Julia, I actually
>> managed to hack it together without too much difficulty, it looks like this,
>>
>>
>> macro self(func)
>> @assert func.head == :function
>>
>> # add "self" as a first function argument
>> insert!(func.args[1].args,2,:(self::mytype))
>>
>>
>> # recurse through AST and rename X to self.X if
>> # its a fieldname of mytype
>> function visit(ex)
>> if typeof(ex) == Expr
>> ex.args = map(visit,ex.args)
>> elseif (typeof(ex) == Symbol) & (ex in fieldnames(mytype))
>> return :(self.$ex)
>> end
>> ex
>> end
>> func.args[2] = visit(func.args[2])
>>
>> show(func) # print the edited function so we can see it in action
>>
>> :($(esc(func)))
>> end
>>
>>
>>
>>
>> Here it is in action:
>>
>> > @self function inc()
>> x = x + 1
>> end
>>
>>
>> :(function inc(self::mytype)
>> self.x = self.x + 1
>> end)
>>
>>
>> inc (generic function with 1 method)
>>
>>
>>
>>
>> > inc(mytype(0))
>> 1
>>
>>
>>
>> where I'm assuming I've defined mytype as
>>
>> type mytype
>> x
>> end
>>
>>
>>
>> As you can see, all it did was add self::mytype as an arg and replace x
>> with self.x everywhere it found it. This is also super nice because there
>> is zero run-time overhead vs. having written the "self." myself, everything
>> happens at compile time.
>>
>> Now for the question. I'd also like to be able to automatically pass the
>> "self" argument to functions, so that I could write something like,
>>
>> @self function inc2()
>> inc()
>> inc()
>> end
>>
>>
>>
>> and it would produce
>>
>> function inc2(self::mytype)
>> inc(self)
>> inc(self)
>> end
>>
>>
>>
>> For this though, my macro needs to somehow figure out that "inc" was also
>> defined with @self (since it shouldn't blindly add self as a first arg to
>> other non-@self'ed function calls). Is this possible in Julia? I suppose
>> somehow the macro must access the global scope where the expression is
>> being evaluated? I'm not entirely sure that's doable. I'm happy to take any
>> tips how to achieve this though, especially ones incurring minimal overhead
>> for the rewritten function. Thanks!
>>
>>


[julia-users] Re: I can't believe this speed-up!

2016-07-21 Thread 'Greg Plowman' via julia-users
and also compare (note the @sync)

@time @sync @parallel for i in 1:10
sleep(1)
end

Also note that using a reduction with @parallel will also wait:
 z = @parallel (*) for i = 1:n
 A
 end


On Friday, July 22, 2016 at 3:11:15 AM UTC+10, Kristoffer Carlsson wrote:

>
>
> julia> @time for i in 1:10
>sleep(1)
>end
>  10.054067 seconds (60 allocations: 3.594 KB)
>
>
> julia> @time @parallel for i in 1:10
>sleep(1)
>end
>   0.195556 seconds (28.91 k allocations: 1.302 MB)
> 1-element Array{Future,1}:
>  Future(1,1,8,#NULL)
>
>
>
> On Thursday, July 21, 2016 at 6:00:47 PM UTC+2, Ferran Mazzanti wrote:
>>
>> Hi,
>>
>> mostly showing my astonishment, but I can't even understand the figures in
>> this stupid parallelization code
>>
>> A = [[1.0 1.0001];[1.0002 1.0003]]
>> z = A
>> tic()
>> for i in 1:10
>> z *= A
>> end
>> toc()
>> A
>>
>> produces
>>
>> elapsed time: 105.458639263 seconds
>>
>> 2x2 Array{Float64,2}:
>>  1.0 1.0001
>>  1.0002  1.0003
>>
>>
>>
>> But then add @parallel in the for loop
>>
>> A = [[1.0 1.0001];[1.0002 1.0003]]
>> z = A
>> tic()
>> @parallel for i in 1:10
>> z *= A
>> end
>> toc()
>> A
>>
>> and get 
>>
>> elapsed time: 0.008912282 seconds
>>
>> 2x2 Array{Float64,2}:
>>  1.0 1.0001
>>  1.0002  1.0003
>>
>>
>> look at the elapsed time differences! And I'm running this on my Xeon
>> desktop, not even a cluster.
>> Of course A-B reports
>>
>> 2x2 Array{Float64,2}:
>>  0.0  0.0
>>  0.0  0.0
>>
>>
>> So is this what one should expect from this kind of simple
>> parallelizations? If so, I'm definitely *in love* with Julia :):):)
>>
>> Best,
>>
>> Ferran.
>>
>>
>>

[julia-users] Pipeline interoperability with IOStream and IOBuffer

2016-07-21 Thread William Wong
Hello, 

I'm trying to continue the discussion of 
https://github.com/JuliaLang/julia/issues/15479

julia> run(pipeline(IOBuffer("a xyz b"), `grep xyz`))
ERROR: MethodError: `uvtype` has no method matching 
uvtype(::Base.AbstractIOBuffer{Array{UInt8,1}})
 in _jl_spawn at process.jl:253
 in anonymous at process.jl:415
 in setup_stdio at process.jl:403
 in spawn at process.jl:414
 in spawn at process.jl:293
 in run at process.jl:530


I feel like we should be able to use PipeBuffer/IOBuffer with pipeline.  Do 
the Julia devs believe we should never expect to be able to pipeline using 
IOBuffer/PipeBuffer?

It seems like many other people are expecting to be able to do this too:
https://github.com/JuliaLang/julia/issues/14437
https://github.com/JuliaLang/julia/issues/3823#issuecomment-157714083

A point brought up in the issues is that one is a file abstraction and
one is a stream abstraction. I now have a somewhat better understanding of
what that means, but I couldn't find any official documentation on the
differences between a stream and a file.
This is especially confusing if you consider that
filestream = open("somefile", "w") is a file acting as a stream abstraction, but
buffer = IOBuffer() is a stream (as the docs currently say) acting as a file
abstraction.

The former works with pipeline, but not the latter.

In either case, it seems that I need an IOStream to pipe data into a 
pipeline command. How can I turn data into a stream that is not a file?
https://groups.google.com/forum/#!msg/julia-users/R-F3F97leh4/o4zKINZbbvUJ 
asks a similar question, but readall produces a string; I still need to 
stream it into a pipeline command.

Thank you,
Will
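One possible workaround, sketched under the assumption that the Julia
0.4-era open(cmd, mode, stdio) form is available: open the command for
writing and stream the in-memory data into it yourself.

data = "a xyz b"
open(`grep xyz`, "w", STDOUT) do io
    write(io, data)   # feeds the in-memory data to grep's stdin
end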


RE: [julia-users] What does Base.box mean in code_warntype?

2016-07-21 Thread David Anthoff
Ah, ok, so I can just safely ignore it! Thanks, David

> -Original Message-
> From: julia-users@googlegroups.com [mailto:julia-
> us...@googlegroups.com] On Behalf Of Yichao Yu
> Sent: Thursday, July 21, 2016 2:40 PM
> To: Julia Users 
> Subject: Re: [julia-users] What does Base.box mean in code_warntype?
> 
> On Thu, Jul 21, 2016 at 5:33 PM, David Anthoff 
> wrote:
> > Thanks everyone for the answers!
> >
> > I guess Tim's email in particular means that the presence of box might
> > indicate a problem, or not ;)
> 
> Base.box in the ast doesn't indicate a problem. Any type instability should be
> highlighted independently.
> 
> >
> > I guess it would be nice if there was some (easy) way to figure out
> > whether things get boxed or not, apart from looking at the assembler/llvm
> code.
> >
> >> -Original Message-
> >> From: julia-users@googlegroups.com [mailto:julia-
> >> us...@googlegroups.com] On Behalf Of Tim Holy
> >> Sent: Tuesday, July 19, 2016 10:55 AM
> >> To: julia-users@googlegroups.com
> >> Subject: Re: [julia-users] What does Base.box mean in code_warntype?
> >>
> >> They can mean "real" boxing and consequent performance problems, but
> >> sometimes these get auto-removed during compilation. I see this all
> >> the
> > time
> >> when writing array code, for example this function which takes an
> >> input
> > tuple
> >> and adds 1 to each element:
> >>
> >> julia> @inline inc1(a) = _inc1(a...)
> >> inc1 (generic function with 1 method)
> >>
> >> julia> @inline _inc1(a1, a...) = (a1+1, _inc1(a...)...)
> >> _inc1 (generic function with 1 method)
> >>
> >> julia> _inc1() = ()
> >> _inc1 (generic function with 2 methods)
> >>
> >> julia> inc1((3,5,7))
> >> (4,6,8)
> >>
> >> # Let's try using inc1 in another function
> >> julia> foo() = (ret = inc1((3,5,7)); prod(ret))
> >> foo (generic function with 1 method)
> >>
> >> julia> foo()
> >> 192
> >>
> >> julia> @code_warntype inc1((3,5,7))
> >> Variables:
> >>   #self#::#inc1
> >>   a::Tuple{Int64,Int64,Int64}
> >>
> >> Body:
> >>   begin
> >>   SSAValue(1) = (Core.getfield)(a::Tuple{Int64,Int64,Int64},2)::Int64
> >>   SSAValue(2) = (Core.getfield)(a::Tuple{Int64,Int64,Int64},3)::Int64
> >>   return
> >> (Core.tuple)((Base.box)(Int64,(Base.add_int)((Core.getfield)
> >> (a::Tuple{Int64,Int64,Int64},1)::Int64,1)),(Base.box)(Int64,(Base.add
> >> _int) (SSAValue(1),1)),(Base.box)(Int64,(Base.add_int)(SSAValue(2),
> >> 1)))::Tuple{Int64,Int64,Int64}
> >>   end::Tuple{Int64,Int64,Int64}
> >>
> >> julia> @code_llvm inc1((3,5,7))
> >>
> >> define void @julia_inc1_67366([3 x i64]* noalias sret, [3 x i64]*) #0
> >> {
> >> top:
> >>   %thread_ptr = call i8* asm "movq %fs:0, $0", "=r"() #2
> >>   %2 = getelementptr inbounds [3 x i64], [3 x i64]* %1, i64 0, i64 1
> >>   %3 = getelementptr inbounds [3 x i64], [3 x i64]* %1, i64 0, i64 2
> >>   %4 = getelementptr inbounds [3 x i64], [3 x i64]* %1, i64 0, i64 0
> >>   %5 = load i64, i64* %4, align 8
> >>   %6 = add i64 %5, 1
> >>   %7 = load i64, i64* %2, align 8
> >>   %8 = add i64 %7, 1
> >>   %9 = load i64, i64* %3, align 8
> >>   %10 = add i64 %9, 1
> >>   %11 = getelementptr inbounds [3 x i64], [3 x i64]* %0, i64 0, i64 0
> >>   store i64 %6, i64* %11, align 8
> >>   %12 = getelementptr inbounds [3 x i64], [3 x i64]* %0, i64 0, i64 1
> >>   store i64 %8, i64* %12, align 8
> >>   %13 = getelementptr inbounds [3 x i64], [3 x i64]* %0, i64 0, i64 2
> >>   store i64 %10, i64* %13, align 8
> >>   ret void
> >> }
> >>
> >> julia> @code_llvm foo()
> >>
> >> define i64 @julia_foo_67563() #0 {
> >> top:
> >>   %thread_ptr = call i8* asm "movq %fs:0, $0", "=r"() #2
> >>   ret i64 192
> >> }
> >>
> >> I think you'd be hard-pressed to complain about inefficiencies in
> >> foo()
> > ;-).
> >>
> >> --Tim
> >>
> >> On Tuesday, July 19, 2016 1:42:46 PM CDT Isaiah Norton wrote:
> >> > On Fri, Jul 15, 2016 at 5:02 PM, David Anthoff
> >> > 
> >> wrote:
> >> > > What do these mean?
> >> >
> >> > http://stackoverflow.com/questions/13055/what-is-boxing-and-unboxing-and-what-are-the-trade-offs
> >> > > And should I be worried, i.e. is this an indication that
> >> > > something slow might be going on?
> >> >
> >> > Boxing requires allocation and can block optimizations, so it can
> >> > be a problem to have box/unbox at points where you might hope to be
> >> > working with contiguous primitive values (such as within a loop).
> >> > But there's really no hard-and-fast rule.
> >> >
> >> > > --
> >> > >
> >> > > David Anthoff
> >> > >
> >> > > University of California, Berkeley
> >> > >
> >> > >
> >> > >
> >> > > http://www.david-anthoff.com
> >>
> >


Re: [julia-users] What does Base.box mean in code_warntype?

2016-07-21 Thread Yichao Yu
On Thu, Jul 21, 2016 at 5:33 PM, David Anthoff  wrote:
> Thanks everyone for the answers!
>
> I guess Tim's email in particular means that the presence of box might
> indicate a problem, or not ;)

Base.box in the ast doesn't indicate a problem. Any type instability
should be highlighted independently.

>
> I guess it would be nice if there was some (easy) way to figure out whether
> things get boxed or not, apart from looking at the assembler/llvm code.
>
>> -Original Message-
>> From: julia-users@googlegroups.com [mailto:julia-
>> us...@googlegroups.com] On Behalf Of Tim Holy
>> Sent: Tuesday, July 19, 2016 10:55 AM
>> To: julia-users@googlegroups.com
>> Subject: Re: [julia-users] What does Base.box mean in code_warntype?
>>
>> They can mean "real" boxing and consequent performance problems, but
>> sometimes these get auto-removed during compilation. I see this all the
> time
>> when writing array code, for example this function which takes an input
> tuple
>> and adds 1 to each element:
>>
>> julia> @inline inc1(a) = _inc1(a...)
>> inc1 (generic function with 1 method)
>>
>> julia> @inline _inc1(a1, a...) = (a1+1, _inc1(a...)...)
>> _inc1 (generic function with 1 method)
>>
>> julia> _inc1() = ()
>> _inc1 (generic function with 2 methods)
>>
>> julia> inc1((3,5,7))
>> (4,6,8)
>>
>> # Let's try using inc1 in another function
>> julia> foo() = (ret = inc1((3,5,7)); prod(ret))
>> foo (generic function with 1 method)
>>
>> julia> foo()
>> 192
>>
>> julia> @code_warntype inc1((3,5,7))
>> Variables:
>>   #self#::#inc1
>>   a::Tuple{Int64,Int64,Int64}
>>
>> Body:
>>   begin
>>   SSAValue(1) = (Core.getfield)(a::Tuple{Int64,Int64,Int64},2)::Int64
>>   SSAValue(2) = (Core.getfield)(a::Tuple{Int64,Int64,Int64},3)::Int64
>>   return (Core.tuple)((Base.box)(Int64,(Base.add_int)((Core.getfield)
>> (a::Tuple{Int64,Int64,Int64},1)::Int64,1)),(Base.box)(Int64,(Base.add_int)
>> (SSAValue(1),1)),(Base.box)(Int64,(Base.add_int)(SSAValue(2),
>> 1)))::Tuple{Int64,Int64,Int64}
>>   end::Tuple{Int64,Int64,Int64}
>>
>> julia> @code_llvm inc1((3,5,7))
>>
>> define void @julia_inc1_67366([3 x i64]* noalias sret, [3 x i64]*) #0 {
>> top:
>>   %thread_ptr = call i8* asm "movq %fs:0, $0", "=r"() #2
>>   %2 = getelementptr inbounds [3 x i64], [3 x i64]* %1, i64 0, i64 1
>>   %3 = getelementptr inbounds [3 x i64], [3 x i64]* %1, i64 0, i64 2
>>   %4 = getelementptr inbounds [3 x i64], [3 x i64]* %1, i64 0, i64 0
>>   %5 = load i64, i64* %4, align 8
>>   %6 = add i64 %5, 1
>>   %7 = load i64, i64* %2, align 8
>>   %8 = add i64 %7, 1
>>   %9 = load i64, i64* %3, align 8
>>   %10 = add i64 %9, 1
>>   %11 = getelementptr inbounds [3 x i64], [3 x i64]* %0, i64 0, i64 0
>>   store i64 %6, i64* %11, align 8
>>   %12 = getelementptr inbounds [3 x i64], [3 x i64]* %0, i64 0, i64 1
>>   store i64 %8, i64* %12, align 8
>>   %13 = getelementptr inbounds [3 x i64], [3 x i64]* %0, i64 0, i64 2
>>   store i64 %10, i64* %13, align 8
>>   ret void
>> }
>>
>> julia> @code_llvm foo()
>>
>> define i64 @julia_foo_67563() #0 {
>> top:
>>   %thread_ptr = call i8* asm "movq %fs:0, $0", "=r"() #2
>>   ret i64 192
>> }
>>
>> I think you'd be hard-pressed to complain about inefficiencies in foo()
> ;-).
>>
>> --Tim
>>
>> On Tuesday, July 19, 2016 1:42:46 PM CDT Isaiah Norton wrote:
>> > On Fri, Jul 15, 2016 at 5:02 PM, David Anthoff 
>> wrote:
>> > > What do these mean?
>> >
>> > http://stackoverflow.com/questions/13055/what-is-boxing-and-unboxing-and-what-are-the-trade-offs
>> > > And should I be worried, i.e. is this an indication that something
>> > > slow might be going on?
>> >
>> > Boxing requires allocation and can block optimizations, so it can be a
>> > problem to have box/unbox at points where you might hope to be working
>> > with contiguous primitive values (such as within a loop). But there's
>> > really no hard-and-fast rule.
>> >
>> > > --
>> > >
>> > > David Anthoff
>> > >
>> > > University of California, Berkeley
>> > >
>> > >
>> > >
>> > > http://www.david-anthoff.com
>>
>


[julia-users] Re: accessing an expression's global scope from macro

2016-07-21 Thread Cedric St-Jean
Neat macro.
 

> For this though, my macro needs to somehow figure out that "inc" was also 
> defined with @self (since it shouldn't blindly add self as a first arg to
> other non-@self'ed function calls). Is this possible in Julia?
>

You could have a global Set that would contain the names of the functions 
that were defined with @self. But IMO this is going to bite you at one 
point or another.

FYI Mauro's package has something similar.

I would suggest using a global variable, if you want to avoid explicitly 
passing `self` all over the place. It would look like this:

const self = Array{mytype}()   # trick to avoid the globals' poor 
performance

@self function foo()
   x = x + 1   # expands into self[].x = self[].x + 1
end

@with_self(mytype(200)) do
   # expands into 
   # try
   #... save the current value of self
   #global self[] = mytype(200)
   #... code
   # finally
   #global self[] = ...restore previous value
   # end
   ...
end

I used this idiom in Common Lisp all the time. It's strictly equivalent to 
passing the object around to every function, and doesn't break the 
"functionalness" of the code.

Cédric

On Thursday, July 21, 2016 at 4:01:20 PM UTC-4, Marius Millea wrote:
>
> In an attempt to make some numerical code (i.e. something that's basically
> just a bunch of equations) more readable, I am trying to write a macro that
> lets me write the code more succinctly. The code uses parameters from some
> data structure, call it "mytype", so it's littered with "t.a", "t.b", etc.,
> where t::mytype. My macro basically splices in the "t." part for me.
> It's kind of like how C++ member functions automatically access the class's
> fields, as an example. To my amazement / growing love of Julia, I actually 
> managed to hack it together without too much difficulty, it looks like this,
>
>
> macro self(func)
> @assert func.head == :function
>
> # add "self" as a first function argument
> insert!(func.args[1].args,2,:(self::mytype))
> 
> 
> # recurse through AST and rename X to self.X if 
> # its a fieldname of mytype
> function visit(ex)
> if typeof(ex) == Expr
> ex.args = map(visit,ex.args)
> elseif (typeof(ex) == Symbol) & (ex in fieldnames(mytype))
> return :(self.$ex)
> end
> ex
> end
> func.args[2] = visit(func.args[2])
> 
> show(func) # print the edited function so we can see it in action
> 
> :($(esc(func)))
> end
>
>
>
>
> Here it is in action:
>
> > @self function inc()
> x = x + 1
> end
>
>
> :(function inc(self::mytype) 
> self.x = self.x + 1
> end)
>
>
> inc (generic function with 1 method)
>
>
>
>
> > inc(mytype(0))
> 1
>
>
>
> where I'm assuming I've defined mytype as 
>
> type mytype
> x
> end
>
>
>
> As you can see, all it did was add self::mytype as an arg and replace x 
> with self.x everywhere it found it. This is also super nice because there 
> is zero run-time overhead vs. having written the "self." myself, everything 
> happens at compile time.
>
> Now for the question. I'd also like to be able to automatically pass the
> "self" argument to functions, so that I could write something like, 
>
> @self function inc2()
> inc()
> inc()
> end
>
>
>
> and it would produce
>
> function inc2(self::mytype)
> inc(self)
> inc(self)
> end
>
>
>
> For this though, my macro needs to somehow figure out that "inc" was also 
> defined with @self (since it shouldn't blindly add self as a first arg to
> other non-@self'ed function calls). Is this possible in Julia? I suppose 
> somehow the macro must access the global scope where the expression is 
> being evaluated? I'm not entirely sure that's doable. I'm happy to take any 
> tips how to achieve this though, especially ones incurring minimal overhead 
> for the rewritten function. Thanks!
>
>

[julia-users] Re: Strange performance issue in filling in a matrix column

2016-07-21 Thread Gunnar Farnebäck
fill_W1! allocates memory because it makes copies when constructing the
right-hand sides. fill_W2! allocates memory in order to construct the
comprehensions (that you then discard). In both cases the memory allocation
could plausibly be avoided by a sufficiently smart compiler, but until
Julia becomes that smart, have a look at the sub function to provide views
instead of copies for the right-hand sides of fill_W1!.
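A sketch of that suggestion using Julia 0.4's sub (the name fill_W1_views!
is made up; the semantics are meant to match fill_W1! above):

function fill_W1_views!{TF}(W::Matrix{TF}, icol::Int, w::Vector{TF}, ishift::Int)
    @assert(size(W,1) == length(w), "Dimension mismatch between W and w")
    n = length(w)
    # sub returns views, so no temporary copies of w's slices are created
    W[1:(n-ishift), icol] = sub(w, (ishift+1):n)
    W[(n-ishift+1):n, icol] = sub(w, 1:ishift)
    return
end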

On Thursday, July 21, 2016 at 5:07:34 PM UTC+2, Michael Prange wrote:
>
> I'm a new user, so have mercy in your responses. 
>
> I've written a method that takes a matrix and vector as input and then 
> fills in column icol of that matrix with the vector of given values that 
> have been shifted upward by ishift indices with periodic boundary 
> conditions. To make this clear, given the matrix
>
> W = [1  2
> 3  4
> 5  6]
>
> the vector w = [7  8  9], icol = 2 and ishift = 1, the new value of W is 
> given by
>
> W = [1  8
> 3  9
> 5  7]
>
> I need a fast way of doing this for large matrices. I wrote three methods 
> that should (In my naive mind) give the same performance results, but @time 
> reports otherwise.  The method definitions and the performance results are 
> given below. Can someone teach me why the results are so different? The 
> method fill_W! is too wordy for my tastes, but the more compact notation in 
> fill_W1! and fill_W2! achieves poorer results. And why do these latter two 
> methods allocate so much memory when the whole point of these methods is to 
> use already-allocated memory.
>
> Michael
>
> ### Definitions
>
>
> function fill_W1!{TF}(W::Matrix{TF}, icol::Int, w::Vector{TF}, 
> ishift::Int)
> @assert(size(W,1) == length(w), "Dimension mismatch between W and w")
> W[1:(end-ishift),icol] = w[(ishift+1):end]
> W[(end-(ishift-1)):end,icol] = w[1:ishift]
> return
> end
>
>
> function fill_W2!{TF}(W::Matrix{TF}, icol::Int, w::Vector{TF}, 
> ishift::Int)
> @assert(size(W,1) == length(w), "Dimension mismatch between W and w")
> [W[i,icol] = w[i+ishift] for i in 1:(length(w)-ishift)]
> [W[end-ishift+i,icol] = w[i] for i in 1:ishift]
> return
> end
>
>
> function fill_W!{TF}(W::Matrix{TF}, icol::Int, w::Vector{TF}, 
> ishift::Int)
> @assert(size(W,1) == length(w), "Dimension mismatch between W and w")
> n = length(w)
> for j in 1:(n-ishift)
> W[j,icol] = w[j+ishift]
> end
> for j in (n-(ishift-1)):n
> W[j,icol] = w[j-(n-ishift)]
> end
> end
>
>
> # Performance Results
> julia>
> W = rand(100,2)
> w = rand(100)
> println("fill_W!:")
> println(@time fill_W!(W, 2, w, 2))
> println("fill_W1!:")
> println(@time fill_W1!(W, 2, w, 2))
> println("fill_W2!:")
> println(@time fill_W2!(W, 2, w, 2))
>
>
> Out>
> fill_W!:
>  0.002801 seconds (4 allocations: 160 bytes)
> nothing
> fill_W1!:
>  0.007427 seconds (9 allocations: 7.630 MB)
> [0.152463397611579,0.6314166578356002]
> fill_W2!:
>  0.005587 seconds (7 allocations: 7.630 MB)
> [0.152463397611579,0.6314166578356002]
>
>
>

RE: [julia-users] What does Base.box mean in code_warntype?

2016-07-21 Thread David Anthoff
Thanks everyone for the answers!

I guess Tim's email in particular means that the presence of box might
indicate a problem, or not ;)

I guess it would be nice if there was some (easy) way to figure out whether
things get boxed or not, apart from looking at the assembler/llvm code.

> -Original Message-
> From: julia-users@googlegroups.com [mailto:julia-
> us...@googlegroups.com] On Behalf Of Tim Holy
> Sent: Tuesday, July 19, 2016 10:55 AM
> To: julia-users@googlegroups.com
> Subject: Re: [julia-users] What does Base.box mean in code_warntype?
> 
> They can mean "real" boxing and consequent performance problems, but
> sometimes these get auto-removed during compilation. I see this all the
time
> when writing array code, for example this function which takes an input
tuple
> and adds 1 to each element:
> 
> julia> @inline inc1(a) = _inc1(a...)
> inc1 (generic function with 1 method)
> 
> julia> @inline _inc1(a1, a...) = (a1+1, _inc1(a...)...)
> _inc1 (generic function with 1 method)
> 
> julia> _inc1() = ()
> _inc1 (generic function with 2 methods)
> 
> julia> inc1((3,5,7))
> (4,6,8)
> 
> # Let's try using inc1 in another function
> julia> foo() = (ret = inc1((3,5,7)); prod(ret))
> foo (generic function with 1 method)
> 
> julia> foo()
> 192
> 
> julia> @code_warntype inc1((3,5,7))
> Variables:
>   #self#::#inc1
>   a::Tuple{Int64,Int64,Int64}
> 
> Body:
>   begin
>   SSAValue(1) = (Core.getfield)(a::Tuple{Int64,Int64,Int64},2)::Int64
>   SSAValue(2) = (Core.getfield)(a::Tuple{Int64,Int64,Int64},3)::Int64
>   return (Core.tuple)((Base.box)(Int64,(Base.add_int)((Core.getfield)
> (a::Tuple{Int64,Int64,Int64},1)::Int64,1)),(Base.box)(Int64,(Base.add_int)
> (SSAValue(1),1)),(Base.box)(Int64,(Base.add_int)(SSAValue(2),
> 1)))::Tuple{Int64,Int64,Int64}
>   end::Tuple{Int64,Int64,Int64}
> 
> julia> @code_llvm inc1((3,5,7))
> 
> define void @julia_inc1_67366([3 x i64]* noalias sret, [3 x i64]*) #0 {
> top:
>   %thread_ptr = call i8* asm "movq %fs:0, $0", "=r"() #2
>   %2 = getelementptr inbounds [3 x i64], [3 x i64]* %1, i64 0, i64 1
>   %3 = getelementptr inbounds [3 x i64], [3 x i64]* %1, i64 0, i64 2
>   %4 = getelementptr inbounds [3 x i64], [3 x i64]* %1, i64 0, i64 0
>   %5 = load i64, i64* %4, align 8
>   %6 = add i64 %5, 1
>   %7 = load i64, i64* %2, align 8
>   %8 = add i64 %7, 1
>   %9 = load i64, i64* %3, align 8
>   %10 = add i64 %9, 1
>   %11 = getelementptr inbounds [3 x i64], [3 x i64]* %0, i64 0, i64 0
>   store i64 %6, i64* %11, align 8
>   %12 = getelementptr inbounds [3 x i64], [3 x i64]* %0, i64 0, i64 1
>   store i64 %8, i64* %12, align 8
>   %13 = getelementptr inbounds [3 x i64], [3 x i64]* %0, i64 0, i64 2
>   store i64 %10, i64* %13, align 8
>   ret void
> }
> 
> julia> @code_llvm foo()
> 
> define i64 @julia_foo_67563() #0 {
> top:
>   %thread_ptr = call i8* asm "movq %fs:0, $0", "=r"() #2
>   ret i64 192
> }
> 
> I think you'd be hard-pressed to complain about inefficiencies in foo()
;-).
> 
> --Tim
> 
> On Tuesday, July 19, 2016 1:42:46 PM CDT Isaiah Norton wrote:
> > On Fri, Jul 15, 2016 at 5:02 PM, David Anthoff 
> wrote:
> > > What do these mean?
> >
> > http://stackoverflow.com/questions/13055/what-is-boxing-and-unboxing-
> a
> > nd-wha
> > t-are-the-trade-offs
> > > And should I be worried, i.e. is this an indication that something
> > > slow might be going on?
> >
> > Boxing requires allocation and can block optimizations, so it can be a
> > problem to have box/unbox at points where you might hope to be working
> > with contiguous primitive values (such as within a loop). But there's
> > really no hard-and-fast rule.
> >
> > > --
> > >
> > > David Anthoff
> > >
> > > University of California, Berkeley
> > >
> > >
> > >
> > > http://www.david-anthoff.com
> 



[julia-users] Re: Calling all users of ParallelAccelerator.

2016-07-21 Thread André Lage
Hi Todd,

First, congratulations to @acc team for the great job! 

We are implementing a new version of CloudArray 
(https://github.com/gsd-ufal/CloudArray.jl) using 
ParallelAccelerator.jl. We are building a cloud service for processing 
fully polarimetric SAR (PolSAR) images, real PolSAR images from the NASA UAVSAR 
project (http://uavsar.jpl.nasa.gov); we have ~4 TB of such images on Azure 
SSD disks. We forked JuliaBox and adapted it to Azure, and we run Julia on 
top of Docker and Azure.

Naelson (Cc'ed) had some troubles after an update; he'll write here if he 
still hasn't solved the problem.

We're glad to hear that ParallelAccelerator.jl will use Julia threads; this 
will probably save us time in investigating how to take advantage of both 
@acc and threads.

Best,


André Lage.

On Saturday, July 16, 2016 at 12:25:27 PM UTC-3, Chris Rackauckas wrote:
>
> Thank you for this work! I am particularly interested in working with it 
> for the Xeon Phi. I haven't actually gotten to do extensive tests of the 
> work from https://github.com/IntelLabs/CompilerTools.jl/issues/1 yet. 
> Will be doing this over the summer. 
>
> I am trying to incorporate it into DifferentialEquations.jl to speed up 
> some routines. Also will probably use it in VectorizedRoutines.jl. One 
> issue I am having is dealing with ParallelAccelerator as a conditional 
> dependency: I want to add the @acc macro only when the user has the package 
> installed (and working?). This is crucial since the package doesn't work on 
> Windows. Conditionally applying macros and packages is difficult.
>
> On Tuesday, July 12, 2016 at 1:23:05 PM UTC-7, Todd Anderson wrote:
>>
>> Hello,
>>
>>   I'm one of the developers of the Intel ParallelAccelerator package for 
>> Julia.  https://github.com/IntelLabs/ParallelAccelerator.jl
>>
>>   Now that the package has been out for a while, I'd like to poll the 
>> user community.
>>
>> 1) Who has used the package to accelerate some real application that they 
>> are working on?  If you fall into this category, please drop us a note.
>> 2) If you tried the package but it didn't work for some reason or you 
>> need support for some feature also please let us know.  Soon after Julia 
>> 0.5 is released we will be releasing an updated version of 
>> ParallelAccelerator with support for parallelization via threading through 
>> regular Julia codegen.  By going through Julia codegen, code coverage will 
>> be greatly improved.  Our current path through C++ with openmp has several 
>> restrictions about what Julia features can be converted to C and most of 
>> these restrictions are therefore lifted by going through native Julia 
>> codegen.
>> 3) If you haven't heard about ParallelAccelerator before and you have an 
>> application that is array or stencil oriented and you would like to see if 
>> it can be automatically parallelized then please check out our package.
>>
>> thanks,
>>
>> Todd
>>
>>
>>
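On the conditional-dependency point Chris raises above, one sketch of a
load-time switch (the work function is hypothetical; assumes Pkg.installed
returns nothing for uninstalled packages, as on Julia 0.4/0.5):

const HAVE_ACC = Pkg.installed("ParallelAccelerator") != nothing

if HAVE_ACC
    eval(:(using ParallelAccelerator))
    # evaluate the @acc-annotated definition only once the package is loaded
    eval(:(@acc function work(x)
        sum(x .* x)
    end))
else
    function work(x)   # plain fallback definition
        sum(x .* x)
    end
end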

[julia-users] Re: JuliaCon schedule announced

2016-07-21 Thread David P. Sanders
Thanks!

On Thursday, July 21, 2016 at 17:26:09 (UTC+2), Viral Shah wrote:
>
> Both these tutorials are up now. The others seem to be there. 
>
> -viral
>
> On Sunday, July 17, 2016 at 1:00:17 AM UTC-4, Tony Kelman wrote:
>>
>> I don't see the tutorial that David Sanders gave, or the one that I gave. 
>> Might be others missing too?
>
>

Re: [julia-users] howto import fix_dec?

2016-07-21 Thread Jeffrey Sarnoff
thanks, I think I found the problem -- my float() function should force 
Float64


On Thursday, July 21, 2016 at 4:36:53 PM UTC-4, Jeffrey Sarnoff wrote:
>
> I thought I could specialize fix_dec(), the catchall is something like 
> `fix_dec(x::AbstractFloat, n::Int)` and I had intended to define 
> `fix_dec{P}(x::ArbFloat{P}, n::Int)`.
>
> On Thursday, July 21, 2016 at 4:31:37 PM UTC-4, Yichao Yu wrote:
>>
>> On Thu, Jul 21, 2016 at 3:42 PM, Jeffrey Sarnoff 
>>  wrote: 
>> > I got this error 
>> > ERROR: StackOverflowError: 
>> >  in fix_dec(::ArbFloats.ArbFloat{116}, ::Int64) at ./printf.jl:932 
>> (repeats 
>> > 8 times) 
>> > 
>> > 
>> > I tried to import Base.fix_dec, Core.fix_dec to override the definition 
>> -- 
>> > neither worked. 
>>
>> What are you trying to override? 
>> Overriding a definition and calling the old one in it isn't supported. 
>>
>

[julia-users] Re: `abs` has no method matching abs(::Array{Any,1})

2016-07-21 Thread Ping Hou
Yes, it works. Thank you so much for your help!

On Thursday, July 21, 2016 at 4:11:24 PM UTC-4, Gabriel Gellner wrote:
>
> Can you just cast the array to Float64 or whatever numeric type you need 
> the column to be?
>
> On Thursday, July 21, 2016 at 9:50:14 AM UTC-7, Ping Hou wrote:
>>
>> Hi,
>>
>> I encountered a problem when running my code. 
>>
>> LoadError: MethodError: `abs` has no method matching abs(::Array{Any,1})
>> while loading In[21], in expression starting on line 12
>>
>>
>> Could anybody help me to fix it?
>>
>> Best,
>> Ping
>>
>

Re: [julia-users] howto import fix_dec?

2016-07-21 Thread Jeffrey Sarnoff
I thought I could specialize fix_dec(), the catchall is something like 
`fix_dec(x::AbstractFloat, n::Int)` and I had intended to define 
`fix_dec{P}(x::ArbFloat{P}, n::Int)`.

On Thursday, July 21, 2016 at 4:31:37 PM UTC-4, Yichao Yu wrote:
>
> On Thu, Jul 21, 2016 at 3:42 PM, Jeffrey Sarnoff 
>  wrote: 
> > I got this error 
> > ERROR: StackOverflowError: 
> >  in fix_dec(::ArbFloats.ArbFloat{116}, ::Int64) at ./printf.jl:932 
> (repeats 
> > 8 times) 
> > 
> > 
> > I tried to import Base.fix_dec, Core.fix_dec to override the definition 
> -- 
> > neither worked. 
>
> What are you trying to override? 
> Overriding a definition and calling the old one in it isn't supported. 
>


Re: [julia-users] accessing an expression's global scope from macro

2016-07-21 Thread Yichao Yu
On Thu, Jul 21, 2016 at 4:01 PM, Marius Millea  wrote:
> In an attempt to make some numerical code (i.e. something that's basically
> just a bunch of equations) more readable, I am trying to write a macro that
> lets me write the code more succinctly. The code uses parameters from some
> data structure, call it "mytype", so it's littered with "t.a", "t.b", etc.,
> where t::mytype. My macro basically splices in the "t." part for me. It's kind
> of like how C++ member functions automatically access the class's fields, as
> an example. To my amazement / growing love of Julia, I actually managed to
> hack it together without too much difficulty, it looks like this,
>
>
> macro self(func)
> @assert func.head == :function
>
> # add "self" as a first function argument
> insert!(func.args[1].args,2,:(self::mytype))
>
>
> # recurse through AST and rename X to self.X if
> # its a fieldname of mytype
> function visit(ex)
> if typeof(ex) == Expr
> ex.args = map(visit,ex.args)
> elseif (typeof(ex) == Symbol) & (ex in fieldnames(mytype))
> return :(self.$ex)
> end
> ex
> end
> func.args[2] = visit(func.args[2])
>
> show(func) # print the edited function so we can see it in action
>
> :($(esc(func)))
> end
>
>
>
>
> Here it is in action:
>
>> @self function inc()
> x = x + 1
> end
>
>
> :(function inc(self::mytype)
> self.x = self.x + 1
> end)
>
>
> inc (generic function with 1 method)
>
>
>
>
>> inc(mytype(0))
> 1
>
>
>
> where I'm assuming I've defined mytype as
>
> type mytype
> x
> end
>
>
>
> As you can see, all it did was add self::mytype as an arg and replace x with
> self.x everywhere it found it. This is also super nice because there is zero
> run-time overhead vs. having written the "self." myself, everything happens
> at compile time.
>
> Now for the question. I'd also like to be able to automatically pass the
> "self" argument to functions, so that I could write something like,
>
> @self function inc2()
> inc()
> inc()
> end
>
>
>
> and it would produce
>
> function inc2(self::mytype)
> inc(self)
> inc(self)
> end
>
>
>
> For this though, my macro needs to somehow figure out that "inc" was also
> defined with @self (since it shouldn't blindly add self as a first arg to
> other non-@self'ed function calls). Is this possible in Julia? I suppose
> somehow the macro must access the global scope where the expression is being
> evaluated? I'm not entirely sure that's doable. I'm happy to take any tips
> how to achieve this though, especially ones incurring minimal overhead for
> the rewritten function. Thanks!

You should not do this. It is possible to access the current module
but you don't have any scope information.
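For illustration, a minimal sketch of what is visible: on Julia 0.4/0.5 a
macro can learn which module it is being expanded in via current_module(),
but nothing about the caller's local scope:

macro expanding_module()
    # current_module() reports the module active during macro expansion
    QuoteNode(current_module())
end

# at the REPL, @expanding_module() evaluates to Main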

>


Re: [julia-users] howto import fix_dec?

2016-07-21 Thread Yichao Yu
On Thu, Jul 21, 2016 at 3:42 PM, Jeffrey Sarnoff
 wrote:
> I got this error
> ERROR: StackOverflowError:
>  in fix_dec(::ArbFloats.ArbFloat{116}, ::Int64) at ./printf.jl:932 (repeats
> 8 times)
>
>
> I tried to import Base.fix_dec, Core.fix_dec to override the definition --
> neither worked.

What are you trying to override?
Overriding a definition and calling the old on in it isn't supported.


[julia-users] Re: `abs` has no method matching abs(::Array{Any,1})

2016-07-21 Thread Gabriel Gellner
Can you just cast the array to Float64 or whatever numeric type you need 
the column to be?
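A minimal sketch of that cast on made-up data:

xs = Any[1.0, -2.5, 3]               # an Array{Any,1}, as in the error
abs(convert(Vector{Float64}, xs))    # abs works once the eltype is numeric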

On Thursday, July 21, 2016 at 9:50:14 AM UTC-7, Ping Hou wrote:
>
> Hi,
>
> I encountered a problem when running my code. 
>
> LoadError: MethodError: `abs` has no method matching abs(::Array{Any,1})
> while loading In[21], in expression starting on line 12
>
>
> Could anybody help me to fix it?
>
> Best,
> Ping
>


[julia-users] accessing an expression's global scope from macro

2016-07-21 Thread Marius Millea
In an attempt to make some numerical code (i.e. something that's basically 
just a bunch of equations) more readable, I am trying to write a macro that 
lets me write the code more succinctly. The code uses parameters from some 
data structure, call it "mytype", so it's littered with "t.a", "t.b", etc., 
where t::mytype. My macro basically splices in the "t." part for me. 
It's kind of like how C++ member functions automatically access the class's 
fields, as an example. To my amazement / growing love of Julia, I actually 
managed to hack it together without too much difficulty, it looks like this,


macro self(func)
@assert func.head == :function
   
# add "self" as a first function argument
insert!(func.args[1].args,2,:(self::mytype))


# recurse through AST and rename X to self.X if 
# its a fieldname of mytype
function visit(ex)
if typeof(ex) == Expr
ex.args = map(visit,ex.args)
elseif (typeof(ex) == Symbol) & (ex in fieldnames(mytype))
return :(self.$ex)
end
ex
end
func.args[2] = visit(func.args[2])

show(func) # print the edited function so we can see it in action

:($(esc(func)))
end




Here it is in action:

> @self function inc()
x = x + 1
end


:(function inc(self::mytype) 
self.x = self.x + 1
end)


inc (generic function with 1 method)




> inc(mytype(0))
1



where I'm assuming I've defined mytype as 

type mytype
x
end



As you can see, all it did was add self::mytype as an arg and replace x 
with self.x everywhere it found it. This is also super nice because there 
is zero run-time overhead vs. having written the "self." myself, everything 
happens at compile time. 

Now for the question. I'd also like to be able to automatically pass the 
"self" argument to functions, so that I could write something like, 

@self function inc2()
inc()
inc()
end



and it would produce

function inc2(self::mytype)
inc(self)
inc(self)
end



For this though, my macro needs to somehow figure out that "inc" was also 
defined with @self (since it shouldn't blindly add self as a first arg to 
other non-@self'ed function calls). Is this possible in Julia? I suppose 
somehow the macro must access the global scope where the expression is 
being evaluated? I'm not entirely sure that's doable. I'm happy to take any 
tips how to achieve this though, especially ones incurring minimal overhead 
for the rewritten function. Thanks!



[julia-users] howto import fix_dec?

2016-07-21 Thread Jeffrey Sarnoff
I got this error
ERROR: StackOverflowError:
 in fix_dec(::ArbFloats.ArbFloat{116}, ::Int64) at ./printf.jl:932 (repeats 
8 times)


I tried to import Base.fix_dec, Core.fix_dec to override the definition -- 
neither worked.


[julia-users] Re: Function `copyconvert`?

2016-07-21 Thread Kristoffer Carlsson
Discussion: https://github.com/JuliaLang/julia/issues/12441

On Thursday, July 21, 2016 at 2:49:19 PM UTC-4, gTcV wrote:
>
> I've recently been encountering the situation where I need to both copy as 
> well as optionally convert an object. It turns out `convert` on its own 
> will not do the job in this case as it doesn't create a copy if the 
> conversion is trivial:
>
> julia> v = Vector{Int}();
> julia> convert(Vector{Int}, v) === v
> true
> julia> convert(Vector{Float64}, v) === v
> false
>
> So to be safe I have to write `copy(convert(NewT,obj))`, but that creates 
> two copies in case `NewT != typeof(obj)` [1]. I assume this must be a fairly common 
> problem, and I am surprised Julia doesn't offer a solution to it. 
>
> The following is a first attempt at a solution, but I would not be 
> surprised if there are edge cases where this approach fails. 
>
> function copyconvert{T}(::Type{T}, x)
> y = convert(T,x)
> if y === x
> return copy(x)
> else 
> return y
> end
> end
>
> [1] In C++, the compiler would optimise this case down to one copy ("copy 
> elision"), but I assume the Julia compiler doesn't. Correct?
>


Re: [julia-users] Composite Type Array

2016-07-21 Thread Stefan Karpinski
It's a little unclear what you want to do that you can't figure out how to
accomplish. You can allocate an uninitialized vector of ExampleEvent
objects:

julia> type ExampleEvent
   fld1::ASCIIString
   fld2::Int16
   fld3::Int64
   fld4::Int64
   fld5::Int64
   fld6::Int64
   fld7::Int64
   end

julia> events = Vector{ExampleEvent}(1000)
1000-element Array{ExampleEvent,1}:
 #undef
 #undef
 #undef
   ⋮
 #undef
 #undef
 #undef
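Filling the preallocated slots is then an ordinary loop; a small sketch with
made-up field values:

julia> for i in 1:length(events)
           events[i] = ExampleEvent("evt", Int16(1), i, 0, 0, 0, 0)
       end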




On Thu, Jul 21, 2016 at 2:51 PM,  wrote:

>
> Hi
>
> I was working on processing large data sets & historically I've used
> structs in C++ & other languages for this type of task. I attempted to use
> a Composite Type in Julia & preallocate a large array before filling it
> w/values as my algo processes the data.
>
> My example was:
>
> type ExampleEvent
>
> fld1::ASCIIString
> fld2::Int16
> fld3::Int64
> fld4::Int64
> fld5::Int64
> fld6::Int64
> fld7::Int64
>
> end
>
> I googled around & from what I found, & all the docs examples I tried out,
> there isn't an obvious way to declare an array of composite type without
> having to do some workarounds.
>
> I liked the language in several other respects but it seems to be missing
> helpful tools to make the programmer's life easy. Am I missing something?
> If not, why is a data structure like this not easily available?
>
> thanks in advance
>
> best,
> A
>


[julia-users] Re: unable to connect to FTP using IP address

2016-07-21 Thread Samuel Massinon
Hi Yared,

The error you are getting is something LibCURL is erring on, as described 
here: https://curl.haxx.se/libcurl/c/libcurl-errors.html

If I try using curl with your settings, I get
~ $ curl -u anonymous '192.168.251.200/dataOnFTP.bin'
Enter host password for user 'anonymous':
curl: (7) Failed to connect to 192.168.251.200 port 80: Network is 
unreachable

FTPClient.jl uses the same library as curl, and if you could post how to 
get the file with curl, I might be able to better help you.

On Wednesday, July 20, 2016 at 4:17:29 PM UTC-5, Yared Melese wrote:
>
>
>
> Hello 
>
> Would you please let me know if I missed anything. I am using FTPClient 
> with an IP address as the host, but I am not able to connect. 
> Here are my commands 
>
> using FTPClient
> ftp_init()
> ftp = FTP(host="192.168.251.200", implt=true, ssl=true, user="anonymous", 
> pswd="")
> binary(ftp)
> file = download(ftp, "dataOnFTP.bin", "C:\Users\xyz\test.bin")
> close(ftp)
> ftp_cleanup()
>
> when sending "using FTPClient" there are a bunch of warnings, as shown 
> below partially
> WARNING: Base.String is deprecated, use AbstractString instead.
>   likely near C:\Users\melese\.julia\v0.4\FTPClient\src\FTPC.jl:35
> WARNING: Base.String is deprecated, use AbstractString instead.
>   likely near C:\Users\melese\.julia\v0.4\FTPClient\src\FTPC.jl:67
> WARNING: Base.Uint8 is deprecated, use UInt8 instead.
>   likely near C:\Users\melese\.julia\v0.4\FTPClient\src\FTPC.jl:81
> ..
> .
> .and at the end I am getting the following error 
>
> ERROR: Failed to connect. :: LibCURL error #7
>  [inlined code] from C:\Users\melese\.julia\v0.4\FTPClient\src\FTPC.jl:138
>  in ftp_command at C:\Users\melese\.julia\v0.4\FTPClient\src\FTPC.jl:454
>  in ftp_connect at C:\Users\melese\.julia\v0.4\FTPClient\src\FTPC.jl:493
>  in call at C:\Users\melese\.julia\v0.4\FTPClient\src\FTPObject.jl:23
>
> Thanks 
> Yared
>


[julia-users] Composite Type Array

2016-07-21 Thread maxent219

Hi 

I was working on processing large data sets & historically I've used 
structs in C++ & other languages for this type of task. I attempted to use 
a Composite Type in Julia & preallocate a large array before filling it 
w/values as my algo processes the data. 

My example was:

type ExampleEvent

fld1::ASCIIString
fld2::Int16
fld3::Int64
fld4::Int64
fld5::Int64
fld6::Int64
fld7::Int64

end

I googled around & from what I found, & all the docs examples I tried out, 
there isn't an obvious way to declare an array of composite type without 
having to do some workarounds. 

I liked the language in several other respects but it seems to be missing 
helpful tools to make the programmer's life easy. Am I missing something? 
If not, why is a data structure like this not easily available? 

thanks in advance

best,
A 


[julia-users] Function `copyconvert`?

2016-07-21 Thread gTcV
I've recently been encountering the situation where I need to both copy as 
well as optionally convert an object. It turns out `convert` on its own 
will not do the job in this case as it doesn't create a copy if the 
conversion is trivial:

julia> v = Vector{Int}();
julia> convert(Vector{Int}, v) === v
true
julia> convert(Vector{Float64}, v) === v
false

So to be safe I have to write `copy(convert(NewT,obj))`, but that creates 
two copies in case `NewT != typeof(obj)` [1]. I assume this must be a fairly common 
problem, and I am surprised Julia doesn't offer a solution to it. 

The following is a first attempt at a solution, but I would not be 
surprised if there are edge cases where this approach fails. 

function copyconvert{T}(::Type{T}, x)
y = convert(T,x)
if y === x
return copy(x)
else 
return y
end
end

[1] In C++, the compiler would optimise this case down to one copy ("copy 
elision"), but I assume the Julia compiler doesn't. Correct?


[julia-users] Re: Converting from ColorTypes to Tuple

2016-07-21 Thread Nate
Another quick solution is to just create an array/tuple that PyPlot will
recognize as an RGB array/tuple:

myColors = distinguishable_colors(N)
PyPlot_myColors = [[red(i), green(i), blue(i)] for i in myColors]

You can also save yourself from having to pass the new color scheme to
every plot by redefining the color cycle:

ax[:set_color_cycle](PyPlot_myColors)



Gabriel Gellner wrote
> I use the following as a utility to have PyPlot.jl/PyCall.jl automatically 
> convert RGB types into tuples
> 
> function PyObject(t::Color)
> trgb = convert(RGB, t)
> ctup = map(float, (red(trgb), green(trgb), blue(trgb)))
> o = PyObject(ctup)
> return o
> end
> 
> I'm sure it can be tweaked to be more general. But it works so far when I 
> am doing quick and dirty plotting :) Good luck!
> 
> On Thursday, July 7, 2016 at 6:09:41 PM UTC-7, Tim Holy wrote:
> 
>> On Thursday, July 7, 2016 4:41:10 PM CDT Islam Badreldin wrote: 
>> > Maybe 
>> > this means PyPlot.jl needs to add better support for ColorTypes? 
>>
>> That sounds like a very reasonable solution. I don't really know PyPlot
>> at 
>> all, so I don't have any advice to offer, but given how well you seem to 
>> understand things already it seems that matters are in excellent hands 
>> :-). 
>>
>> Best, 
>> --Tim 
>>
>>







Re: [julia-users] Array printing in 0.5

2016-07-21 Thread daycaster
Yes, true, I just copied it so that I knew what character it was.

[julia-users] Re: I can't believe this speed-up!

2016-07-21 Thread Kristoffer Carlsson


julia> @time for i in 1:10
   sleep(1)
   end
 10.054067 seconds (60 allocations: 3.594 KB)


julia> @time @parallel for i in 1:10
   sleep(1)
   end
  0.195556 seconds (28.91 k allocations: 1.302 MB)
1-element Array{Future,1}:
 Future(1,1,8,#NULL)

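The parallel timing returns almost immediately because @parallel without a 
reducer only schedules the iterations and hands back Futures without waiting 
for them. A sketch of a fairer comparison, wrapping the loop in @sync so the 
timing includes the work (assuming workers were added with addprocs):

julia> @time @sync @parallel for i in 1:10
           sleep(1)
       end

With N workers attached this should take roughly ceil(10/N) seconds of wall 
time rather than a fraction of a second.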


On Thursday, July 21, 2016 at 6:00:47 PM UTC+2, Ferran Mazzanti wrote:
>
> Hi,
>
> mostly showing my astonishment, but I can't even understand the figures in 
> this stupid parallelization code
>
> A = [[1.0 1.0001];[1.0002 1.0003]]
> z = A
> tic()
> for i in 1:10
> z *= A
> end
> toc()
> A
>
> produces
>
> elapsed time: 105.458639263 seconds
>
> 2x2 Array{Float64,2}:
>  1.0 1.0001
>  1.0002  1.0003
>
>
>
> But then add @parallel in the for loop
>
> A = [[1.0 1.0001];[1.0002 1.0003]]
> z = A
> tic()
> @parallel for i in 1:10
> z *= A
> end
> toc()
> A
>
> and get 
>
> elapsed time: 0.008912282 seconds
>
> 2x2 Array{Float64,2}:
>  1.0 1.0001
>  1.0002  1.0003
>
>
> look at the elapsed time differences! And I'm running this on my Xeon 
> desktop, not even a cluster
> Of course A-B reports
>
> 2x2 Array{Float64,2}:
>  0.0  0.0
>  0.0  0.0
>
>
> So is this what one should expect from this kind of simple 
> parallelizations? If so, I'm definitely *in love* with Julia :):):)
>
> Best,
>
> Ferran.
>
>
>

[julia-users] Re: I can't believe this speed-up!

2016-07-21 Thread Nathan Smith
In a Jupyter notebook, add worker processes with addprocs(N).

On Thursday, 21 July 2016 12:59:02 UTC-4, Nathan Smith wrote:
>
> To be clear, you need to compare the final 'z', not the final 'A', to check 
> if your calculations are consistent. The matrix A does not change 
> throughout this calculation, but the matrix z does.
> Also, there is no parallelism with the @parallel loop unless you start 
> julia with 'julia -p N', where N is the number of worker processes you'd 
> like to use.
>
> On Thursday, 21 July 2016 12:45:17 UTC-4, Ferran Mazzanti wrote:
>>
>> Hi Nathan,
>>
>> I posted the codes, so you can check if they do the same thing or not. 
>> These went to separate cells in Jupyter, nothing more and nothing less.
>> Not even a single line I didn't post. And yes I understand your line of 
>> reasoning, so that's why I got astonished also.
>> But I can't see what is making this huge difference, and I'd like to know :)
>>
>> Best,
>>
>> Ferran.
>>
>> On Thursday, July 21, 2016 at 6:31:57 PM UTC+2, Nathan Smith wrote:
>>>
>>> Hey Ferran, 
>>>
>>> You should be suspicious when your apparent speed up surpasses the level 
>>> of parallelism available on your CPU. It looks like your codes don't 
>>> actually compute the same thing.
>>>
>>> I'm assuming you're trying to compute the matrix exponential of A 
>>> (A^10) by repeatedly multiplying A. In your parallel code, each 
>>> process gets a local copy of 'z' and
>>> uses that. This means each process is computing something like 
>>> (A^(10/# of procs)). Check out the section of the documentation on 
>>> parallel map and loops to see what I mean.
>>>
>>> That said, that doesn't explain your speed up completely, you should 
>>> also make sure that each part of your script is wrapped in a function and 
>>> that you 'warm-up' each function by running it once before comparing.
>>>
>>> Cheers, 
>>> Nathan
>>>
>>> On Thursday, 21 July 2016 12:00:47 UTC-4, Ferran Mazzanti wrote:

 Hi,

 mostly showing my astonishment, but I can't even understand the figures 
 in this stupid parallelization code

 A = [[1.0 1.0001];[1.0002 1.0003]]
 z = A
 tic()
 for i in 1:10
 z *= A
 end
 toc()
 A

 produces

 elapsed time: 105.458639263 seconds

 2x2 Array{Float64,2}:
  1.0 1.0001
  1.0002  1.0003



 But then add @parallel in the for loop

 A = [[1.0 1.0001];[1.0002 1.0003]]
 z = A
 tic()
 @parallel for i in 1:10
 z *= A
 end
 toc()
 A

 and get 

 elapsed time: 0.008912282 seconds

 2x2 Array{Float64,2}:
  1.0 1.0001
  1.0002  1.0003


 look at the elapsed time differences! And I'm running this on my Xeon 
 desktop, not even a cluster
 Of course A-B reports

 2x2 Array{Float64,2}:
  0.0  0.0
  0.0  0.0


 So is this what one should expect from this kind of simple 
 parallelizations? If so, I'm definitely *in love* with Julia :):):)

 Best,

 Ferran.




[julia-users] Re: I can't believe this speed-up!

2016-07-21 Thread Nathan Smith
To be clear, you need to compare the final 'z', not the final 'A', to check 
if your calculations are consistent. The matrix A does not change 
throughout this calculation, but the matrix z does.
Also, there is no parallelism with the @parallel loop unless you start 
julia with 'julia -p N', where N is the number of worker processes you'd 
like to use.

On Thursday, 21 July 2016 12:45:17 UTC-4, Ferran Mazzanti wrote:
>
> Hi Nathan,
>
> I posted the codes, so you can check if they do the same thing or not. 
> These went to separate cells in Jupyter, nothing more and nothing less.
> Not even a single line I didn't post. And yes I understand your line of 
> reasoning, so that's why I got astonished also.
> But I can't see what is making this huge difference, and I'd like to know :)
>
> Best,
>
> Ferran.
>
> On Thursday, July 21, 2016 at 6:31:57 PM UTC+2, Nathan Smith wrote:
>>
>> Hey Ferran, 
>>
>> You should be suspicious when your apparent speed up surpasses the level 
>> of parallelism available on your CPU. It looks like your codes don't 
>> actually compute the same thing.
>>
>> I'm assuming you're trying to compute the matrix exponential of A 
>> (A^10) by repeatedly multiplying A. In your parallel code, each 
>> process gets a local copy of 'z' and
>> uses that. This means each process is computing something like 
>> (A^(10/# of procs)). Check out the section of the documentation on 
>> parallel map and loops to see what I mean.
>>
>> That said, that doesn't explain your speed up completely, you should also 
>> make sure that each part of your script is wrapped in a function and that 
>> you 'warm-up' each function by running it once before comparing.
>>
>> Cheers, 
>> Nathan
>>
>> On Thursday, 21 July 2016 12:00:47 UTC-4, Ferran Mazzanti wrote:
>>>
>>> Hi,
>>>
>>> mostly showing my astonishment, but I can't even understand the figures in 
>>> this stupid parallelization code
>>>
>>> A = [[1.0 1.0001];[1.0002 1.0003]]
>>> z = A
>>> tic()
>>> for i in 1:10
>>> z *= A
>>> end
>>> toc()
>>> A
>>>
>>> produces
>>>
>>> elapsed time: 105.458639263 seconds
>>>
>>> 2x2 Array{Float64,2}:
>>>  1.0 1.0001
>>>  1.0002  1.0003
>>>
>>>
>>>
>>> But then add @parallel in the for loop
>>>
>>> A = [[1.0 1.0001];[1.0002 1.0003]]
>>> z = A
>>> tic()
>>> @parallel for i in 1:10
>>> z *= A
>>> end
>>> toc()
>>> A
>>>
>>> and get 
>>>
>>> elapsed time: 0.008912282 seconds
>>>
>>> 2x2 Array{Float64,2}:
>>>  1.0 1.0001
>>>  1.0002  1.0003
>>>
>>>
>>> look at the elapsed time differences! And I'm running this on my Xeon 
>>> desktop, not even a cluster
>>> Of course A-B reports
>>>
>>> 2x2 Array{Float64,2}:
>>>  0.0  0.0
>>>  0.0  0.0
>>>
>>>
>>> So is this what one should expect from this kind of simple 
>>> parallelizations? If so, I'm definitely *in love* with Julia :):):)
>>>
>>> Best,
>>>
>>> Ferran.
>>>
>>>
>>>

[julia-users] Re: Help Julia win a performance comparison!

2016-07-21 Thread Chris Rackauckas
Nevermind. You have a non-zero probability of having zero offspring since 
it's Poisson. This works if every element is at least 1. However, you can 
have the population size decrease, which then causes errors if you resize 
first. But then you still want to put the new elements in the first n slots 
of the array (which also contains the values you want to write over), so I 
don't think you can do that directly in place unless someone has a clever 
trick.

However, I could just keep a second array around for writing into, and then 
write into x. That saves allocations with a few more writes, but is a net 
speedup. I posted an Edit 3 explaining that.
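
A sketch of that double-buffered rep (the names are hypothetical; out is the 
scratch array kept alive between steps):

function rep!(out::Vector{Float64}, x::Vector{Float64}, counts::Vector{Int})
    # Duplicate x[i] counts[i] times into the preallocated scratch `out`;
    # the caller then swaps `out` and `x`, so no fresh array per step.
    resize!(out, sum(counts))
    j = 0
    for i in 1:length(x)
        @inbounds for k in 1:counts[i]
            out[j += 1] = x[i]
        end
    end
    return out
end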

On Thursday, July 21, 2016 at 8:38:42 AM UTC-7, Chris Rackauckas wrote:
>
> I see it now. Sum the elements to resize the array, and then loop through 
> backwards adding the values (so that way you don't overwrite what 
> you haven't used).
>
> On Thursday, July 21, 2016 at 8:34:11 AM UTC-7, Kristoffer Carlsson wrote:
>>
>> Sum the elements and resize the array to that length? 
>
>

[julia-users] Re: I can't believe this speed-up!

2016-07-21 Thread Ferran Mazzanti
Nathan,

the execution of these two functions gives essentially the same timings, no 
matter how many processes I have added with addprocs().
Very surprising to me...
Of course I prefer the sped-up version :)

Best,

Ferran.

On Thursday, July 21, 2016 at 6:40:14 PM UTC+2, Nathan Smith wrote:
>
> Try comparing these two function:
>
> function serial_example()
>     A = [[1.0 1.001];[1.002 1.003]]
>     z = A
>     for i in 1:10
>         z *= A
>     end
>     return z
> end
>
> function parallel_example()
>     A = [[1.0 1.001]; [1.002 1.003]]
>     z = @parallel (*) for i in 1:10
>         A
>     end
>     return z
> end
>
>

Re: [julia-users] Array printing in 0.5

2016-07-21 Thread Stefan Karpinski
The output format isn't intended to be valid input format in either version
of Julia, e.g. on 0.4:

julia> 2x3
ERROR: UndefVarError: x3 not defined
 in eval(::Module, ::Any) at ./boot.jl:234
 in macro expansion at ./REPL.jl:92 [inlined]
 in (::Base.REPL.##1#2{Base.REPL.REPLBackend})() at ./event.jl:46


On Thu, Jul 21, 2016 at 12:45 PM, daycaster  wrote:

> (I'm just untangling some confusion on my end. Is the following correct?)
>
> In 0.4, array dimensions were printed like this:
>
> julia> zeros(2,3)
> 2x3 Array{Float64,2}:
>  0.0  0.0  0.0
>  0.0  0.0  0.0
>
> In 0.5, the "x" is replaced with a "×":
>
> julia> zeros(2,3)
> 2×3 Array{Float64,2}:
> 0.0  0.0  0.0
> 0.0  0.0  0.0
>
> but apparently this character isn't a multiplication, but the
> cross-product operator:
>
> julia> 2×3 # copy/paste from above
> ERROR: MethodError: no method matching cross(::Int64, ::Int64)
>  in eval(::Module, ::Any) at ./boot.jl:234
>  in macro expansion at ./REPL.jl:92 [inlined]
>  in (::Base.REPL.##1#2{Base.REPL.REPLBackend})() at ./event.jl:46
>
> But to get this character you type `\times`, even though 'times' is
> (according to
> http://docs.julialang.org/en/release-0.4/manual/mathematical-operations/)
> the name for "*", while the symbol itself is bound to the `cross()`
> function...
>
> So wouldn't it be more logical to report the size of arrays using "by"?
>
>  julia> zeros(2,3)
>  2 by 3 Array{Float64,2}:
>  0.0  0.0  0.0
>  0.0  0.0  0.0
>
>


[julia-users] Array printing in 0.5

2016-07-21 Thread daycaster
(I'm just untangling some confusion on my end. Is the following correct?)

In 0.4, array dimensions were printed like this:

julia> zeros(2,3)
2x3 Array{Float64,2}:
 0.0  0.0  0.0
 0.0  0.0  0.0

In 0.5, the "x" is replaced with a "×":

julia> zeros(2,3)
2×3 Array{Float64,2}:
0.0  0.0  0.0
0.0  0.0  0.0

but apparently this character isn't a multiplication, but the cross-product 
operator:

julia> 2×3 # copy/paste from above 
ERROR: MethodError: no method matching cross(::Int64, ::Int64)
 in eval(::Module, ::Any) at ./boot.jl:234
 in macro expansion at ./REPL.jl:92 [inlined]
 in (::Base.REPL.##1#2{Base.REPL.REPLBackend})() at ./event.jl:46

But to get this character you type `\times`, even though 'times' is (according to 
http://docs.julialang.org/en/release-0.4/manual/mathematical-operations/) the 
name for "*", while the symbol itself is bound to the `cross()` function...

So wouldn't it be more logical to report the size of arrays using "by"?

 julia> zeros(2,3)
 2 by 3 Array{Float64,2}:
 0.0  0.0  0.0
 0.0  0.0  0.0



[julia-users] Re: I can't believe this speed-up!

2016-07-21 Thread Ferran Mazzanti
Hi Nathan,

I posted the codes, so you can check if they do the same thing or not. 
These went to separate cells in Jupyter, nothing more and nothing less.
Not even a single line I didn't post. And yes I understand your line of 
reasoning, so that's why I got astonished also.
But I can't see what is making this huge difference, and I'd like to know :)

Best,

Ferran.

On Thursday, July 21, 2016 at 6:31:57 PM UTC+2, Nathan Smith wrote:
>
> Hey Ferran, 
>
> You should be suspicious when your apparent speed up surpasses the level 
> of parallelism available on your CPU. It looks like your codes don't 
> actually compute the same thing.
>
> I'm assuming you're trying to compute the matrix exponential of A 
> (A^10) by repeatedly multiplying A. In your parallel code, each 
> process gets a local copy of 'z' and
> uses that. This means each process is computing something like 
> (A^(10/# of procs)). Check out the section of the documentation on 
> parallel map and loops to see what I mean.
>
> That said, that doesn't explain your speed up completely, you should also 
> make sure that each part of your script is wrapped in a function and that 
> you 'warm-up' each function by running it once before comparing.
>
> Cheers, 
> Nathan
>
> On Thursday, 21 July 2016 12:00:47 UTC-4, Ferran Mazzanti wrote:
>>
>> Hi,
>>
>> mostly showing my astonishment, but I can't even understand the figures in 
>> this stupid parallelization code
>>
>> A = [[1.0 1.0001];[1.0002 1.0003]]
>> z = A
>> tic()
>> for i in 1:10
>> z *= A
>> end
>> toc()
>> A
>>
>> produces
>>
>> elapsed time: 105.458639263 seconds
>>
>> 2x2 Array{Float64,2}:
>>  1.0 1.0001
>>  1.0002  1.0003
>>
>>
>>
>> But then add @parallel in the for loop
>>
>> A = [[1.0 1.0001];[1.0002 1.0003]]
>> z = A
>> tic()
>> @parallel for i in 1:10
>> z *= A
>> end
>> toc()
>> A
>>
>> and get 
>>
>> elapsed time: 0.008912282 seconds
>>
>> 2x2 Array{Float64,2}:
>>  1.0 1.0001
>>  1.0002  1.0003
>>
>>
>> look at the elapsed time differences! And I'm running this on my Xeon 
>> desktop, not even a cluster
>> Of course A-B reports
>>
>> 2x2 Array{Float64,2}:
>>  0.0  0.0
>>  0.0  0.0
>>
>>
>> So is this what one should expect from this kind of simple 
>> parallelizations? If so, I'm definitely *in love* with Julia :):):)
>>
>> Best,
>>
>> Ferran.
>>
>>
>>

[julia-users] Re: I can't believe this speed-up!

2016-07-21 Thread Ferran Mazzanti
I posted this because I also find the results... astonishingly surprising. 
However, the timings are apparently real, as the first one took more than 
1.5 minutes on my wristwatch, and the second calculation finished instantly.
And no, no function wrapping whatsoever...

On Thursday, July 21, 2016 at 6:22:50 PM UTC+2, Chris Rackauckas wrote:
>
> I wouldn't expect that much of a change unless you have a whole lot of 
> cores (even then, wouldn't expect this much of a change).
>
> Is this wrapped in a function when you're timing it?
>
> On Thursday, July 21, 2016 at 9:00:47 AM UTC-7, Ferran Mazzanti wrote:
>>
>> Hi,
>>
>> mostly showing my astonishment, but I can't even understand the figures in 
>> this stupid parallelization code
>>
>> A = [[1.0 1.0001];[1.0002 1.0003]]
>> z = A
>> tic()
>> for i in 1:10
>> z *= A
>> end
>> toc()
>> A
>>
>> produces
>>
>> elapsed time: 105.458639263 seconds
>>
>> 2x2 Array{Float64,2}:
>>  1.0 1.0001
>>  1.0002  1.0003
>>
>>
>>
>> But then add @parallel in the for loop
>>
>> A = [[1.0 1.0001];[1.0002 1.0003]]
>> z = A
>> tic()
>> @parallel for i in 1:10
>> z *= A
>> end
>> toc()
>> A
>>
>> and get 
>>
>> elapsed time: 0.008912282 seconds
>>
>> 2x2 Array{Float64,2}:
>>  1.0 1.0001
>>  1.0002  1.0003
>>
>>
>> look at the elapsed time differences! And I'm running this on my Xeon 
>> desktop, not even a cluster
>> Of course A-B reports
>>
>> 2x2 Array{Float64,2}:
>>  0.0  0.0
>>  0.0  0.0
>>
>>
>> So is this what one should expect from this kind of simple 
>> parallelizations? If so, I'm definitely *in love* with Julia :):):)
>>
>> Best,
>>
>> Ferran.
>>
>>
>>

[julia-users] Re: I can't believe this speed-up!

2016-07-21 Thread Nathan Smith
Try comparing these two function:

function serial_example()
    A = [[1.0 1.001];[1.002 1.003]]
    z = A
    for i in 1:10
        z *= A
    end
    return z
end

function parallel_example()
    A = [[1.0 1.001]; [1.002 1.003]]
    z = @parallel (*) for i in 1:10
        A
    end
    return z
end

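A hedged way to compare the two (warm each function up once so compilation 
isn't timed, and attach workers first or @parallel has nothing to 
distribute; note the serial version does one extra multiply, since z starts 
at A):

addprocs(3)                            # 3 is arbitrary
serial_example(); parallel_example()   # warm-up / compilation
@time serial_example()
@time parallel_example()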


[julia-users] Re: I can't believe this speed-up!

2016-07-21 Thread Nathan Smith
Hey Ferran, 

You should be suspicious when your apparent speed up surpasses the level of 
parallelism available on your CPU. It looks like your codes don't actually 
compute the same thing.

I'm assuming you're trying to compute the matrix exponential of A 
(A^10) by repeatedly multiplying A. In your parallel code, each 
process gets a local copy of 'z' and
uses that. This means each process is computing something like 
(A^(10/# of procs)). Check out the section of the documentation on parallel 
map and loops to see what I mean.

That said, that doesn't explain your speed up completely, you should also 
make sure that each part of your script is wrapped in a function and that 
you 'warm-up' each function by running it once before comparing.

Cheers, 
Nathan

On Thursday, 21 July 2016 12:00:47 UTC-4, Ferran Mazzanti wrote:
>
> Hi,
>
> mostly showing my astonishment, but I can't even understand the figures in 
> this stupid parallelization code
>
> A = [[1.0 1.0001];[1.0002 1.0003]]
> z = A
> tic()
> for i in 1:10
> z *= A
> end
> toc()
> A
>
> produces
>
> elapsed time: 105.458639263 seconds
>
> 2x2 Array{Float64,2}:
>  1.0 1.0001
>  1.0002  1.0003
>
>
>
> But then add @parallel in the for loop
>
> A = [[1.0 1.0001];[1.0002 1.0003]]
> z = A
> tic()
> @parallel for i in 1:10
> z *= A
> end
> toc()
> A
>
> and get 
>
> elapsed time: 0.008912282 seconds
>
> 2x2 Array{Float64,2}:
>  1.0 1.0001
>  1.0002  1.0003
>
>
> look at the elapsed time differences! And I'm running this on my Xeon 
> desktop, not even a cluster
> Of course A-B reports
>
> 2x2 Array{Float64,2}:
>  0.0  0.0
>  0.0  0.0
>
>
> So is this what one should expect from this kind of simple 
> parallelizations? If so, I'm definitely *in love* with Julia :):):)
>
> Best,
>
> Ferran.
>
>
>

[julia-users] Re: I can't believe this speed-up!

2016-07-21 Thread Chris Rackauckas
I wouldn't expect that much of a change unless you have a whole lot of 
cores (even then, wouldn't expect this much of a change).

Is this wrapped in a function when you're timing it?

On Thursday, July 21, 2016 at 9:00:47 AM UTC-7, Ferran Mazzanti wrote:
>
> Hi,
>
> mostly showing my astonishment, but I can't even understand the figures in 
> this stupid parallelization code
>
> A = [[1.0 1.0001];[1.0002 1.0003]]
> z = A
> tic()
> for i in 1:10
> z *= A
> end
> toc()
> A
>
> produces
>
> elapsed time: 105.458639263 seconds
>
> 2x2 Array{Float64,2}:
>  1.0 1.0001
>  1.0002  1.0003
>
>
>
> But then add @parallel in the for loop
>
> A = [[1.0 1.0001];[1.0002 1.0003]]
> z = A
> tic()
> @parallel for i in 1:10
> z *= A
> end
> toc()
> A
>
> and get 
>
> elapsed time: 0.008912282 seconds
>
> 2x2 Array{Float64,2}:
>  1.0 1.0001
>  1.0002  1.0003
>
>
> look at the elapsed time differences! And I'm running this on my Xeon 
> desktop, not even a cluster
> Of course A-B reports
>
> 2x2 Array{Float64,2}:
>  0.0  0.0
>  0.0  0.0
>
>
> So is this what one should expect from this kind of simple 
> parallelizations? If so, I'm definitely *in love* with Julia :):):)
>
> Best,
>
> Ferran.
>
>
>

[julia-users] I can't believe this speed-up!

2016-07-21 Thread Ferran Mazzanti
Hi,

mostly showing my astonishment, but I can't even understand the figures in 
this stupid parallelization code

A = [[1.0 1.0001];[1.0002 1.0003]]
z = A
tic()
for i in 1:10
z *= A
end
toc()
A

produces

elapsed time: 105.458639263 seconds

2x2 Array{Float64,2}:
 1.0 1.0001
 1.0002  1.0003



But then add @parallel in the for loop

A = [[1.0 1.0001];[1.0002 1.0003]]
z = A
tic()
@parallel for i in 1:10
z *= A
end
toc()
A

and get 

elapsed time: 0.008912282 seconds

2x2 Array{Float64,2}:
 1.0 1.0001
 1.0002  1.0003


look at the elapsed time differences! And I'm running this on my Xeon 
desktop, not even a cluster
Of course A-B reports

2x2 Array{Float64,2}:
 0.0  0.0
 0.0  0.0


So is this what one should expect from this kind of simple parallelizations? 
If so, I'm definitely *in love* with Julia :):):)

Best,

Ferran.




[julia-users] Re: Help Julia win a performance comparison!

2016-07-21 Thread Chris Rackauckas
I see it now. Sum the elements to resize the array, and then loop through 
backwards adding the values (so that way you don't overwrite what 
you haven't used).

On Thursday, July 21, 2016 at 8:34:11 AM UTC-7, Kristoffer Carlsson wrote:
>
> Sum the elements and resize the array to that length? 



[julia-users] Re: Coveralls and coverage issues

2016-07-21 Thread Kristoffer Carlsson
Cached...

[julia-users] Re: Coveralls and coverage issues

2016-07-21 Thread Kristoffer Carlsson
Sometimes the badge image is cached and can be quite hard to update. Ctrl + F5 
sometimes works.

[julia-users] Re: Performance issues with stochastic simulation

2016-07-21 Thread Chris Rackauckas
You can change line 70 to be in place with a loop:

for i in 1:length(x)
  x[i] = x[i] + deltax[i]
end

I don't think you can do

x[:] = x .+ deltax

as fancy syntax here since the x is part of the statement though (you can 
check). This should cut out an allocation here and bring down the time. 

Do you need to use a WeightVec? If you do (for future things), keep the 
WeightVec separate from the Vector so that the types aren't changing: let 
wpf always be the WeightVec you make from pf; otherwise pf isn't type 
stable. It would be best if you could make F in-place as well, since this is 
where your bottleneck is.
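
A sketch of both suggestions (assuming F returns a Vector{Float64} of rates, 
StatsBase is loaded, and parms is the parameter argument from the gist):

using StatsBase

pf = F(x, parms)                # rates as a plain Vector{Float64}
wpf = WeightVec(pf)             # weights built separately; types stay stable
ev = sample(1:length(pf), wpf)  # draw the event index

broadcast!(+, x, x, deltax)     # in-place x .+= deltax without a temporary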

On Thursday, July 21, 2016 at 7:56:51 AM UTC-7, Simon Frost wrote:
>
> Dear All,
>
> I'm having some issues with code speed for some Gillespie type 
> simulations. The toy model is described here:
>
>
> http://phylodynamics.blogspot.co.uk/2013/06/comparing-performance-of-r-and-rcpp-for.html
> http://phylodynamics.blogspot.co.uk/2013/06/an-sir-model-in-julia.html
>
> I get good performance with my vanilla Julia code, but a more generic 
> implementation is slower:
>
> http://github.com/sdwfrost/Gillespie.jl
>
> The gist is here:
>
> https://gist.github.com/sdwfrost/1b4bce19faf2d7b8624cac048a36f32d
>
> Lines 57 and 70 appear to be the culprits:
>
> https://github.com/sdwfrost/Gillespie.jl/blob/master/src/SSA.jl
>
> I've tried some devectorisation, but in my hackery, I appear to get side 
> effects, where the argument x0 passed to the ssa function is modified. Any 
> tips?
>
> Best
> Simon
>


[julia-users] Re: Help Julia win a performance comparison!

2016-07-21 Thread Kristoffer Carlsson
Sum the elements and resize the array to that length? 

[julia-users] Re: JuliaCon schedule announced

2016-07-21 Thread Viral Shah
Both of these tutorials are up now. The others all seem to be there. 

-viral

On Sunday, July 17, 2016 at 1:00:17 AM UTC-4, Tony Kelman wrote:
>
> I don't see the tutorial that David Sanders gave, or the one that I gave. 
> Might be others missing too?



Re: [julia-users] Re: JuliaCon schedule announced

2016-07-21 Thread Viral Shah
I doubt we are going to be able to do much at this point. Andreas and I are 
checking with the video person, but it's not looking promising on this front.

-viral

On Tuesday, July 19, 2016 at 1:39:40 AM UTC-4, Christian Peel wrote:
>
> I saw quite a few videos with the same problem.  
>
> On Mon, Jul 18, 2016 at 1:52 AM, Mauro  wrote:
>
>> A request for a correction: in Keno's Gallium talk the bottom line of
>> the screen is cut off.  As most of his talk is a demo, where most things
>> happen in the bottom line, this makes it hard to follow along.  Is there
>> any chance that this can be re-edited?
>>
>> On Sun, 2016-07-17 at 07:00, Tony Kelman  wrote:
>> > I don't see the tutorial that David Sanders gave, or the one that I 
>> gave. Might be others missing too?
>>
>
>
>
> -- 
> chris.p...@ieee.org
>


[julia-users] Re: FFT, PSD and Windowing functions

2016-07-21 Thread Yared Melese
Hi Islam, 

Thanks for your input 

 I was able to find all the windowing functions; however, there is nothing 
about PSD (power spectral density). In Python and MATLAB there is a 
function, pwelch, which does both windowing and FFT, and I am wondering if 
there is an equivalent function in Julia.

Here is a simple trial I had, but it complains about a type mismatch:

fb= fb[:]*hamming(length(fb))
fb = fft(fb) # take FFT of signal 

LoadError: MethodError: `*` has no method matching 
*(::Array{Complex{Float64},1}, ::Array{Float64,1})
Closest candidates are:
  *(::Any, ::Any, !Matched::Any, !Matched::Any...)
  
*{T<:Union{Complex{Float32},Complex{Float64},Float32,Float64},S}(!Matched::Union{DenseArray{T<:Union{Complex{Float32},Complex{Float64},Float32,Float64},2},SubArray{T<:Union{Complex{Float32},Complex{Float64},Float32,Float64},2,A<:DenseArray{T,N},I<:Tuple{Vararg{Union{Colon,Int64,Range{Int64,LD}},
 
::Union{DenseArray{S,1},SubArray{S,1,A<:DenseArray{T,N},I<:Tuple{Vararg{Union{Colon,Int64,Range{Int64,LD}})
  
*{TA,TB}(!Matched::Base.LinAlg.AbstractTriangular{TA,S<:AbstractArray{T,2}}, 
::Union{DenseArray{TB,1},DenseArray{TB,2},SubArray{TB,1,A<:DenseArray{T,N},I<:Tuple{Vararg{Union{Colon,Int64,Range{Int64,LD},SubArray{TB,2,A<:DenseArray{T,N},I<:Tuple{Vararg{Union{Colon,Int64,Range{Int64,LD}})
  ...
in include_string at loading.jl:282
in include_string at C:\Users\melese\.julia\v0.4\CodeTools\src\eval.jl:32
in anonymous at C:\Users\melese\.julia\v0.4\Atom\src\eval.jl:84
in withpath at C:\Users\melese\.julia\v0.4\Requires\src\require.jl:37
in withpath at C:\Users\melese\.julia\v0.4\Atom\src\eval.jl:53
[inlined code] from C:\Users\melese\.julia\v0.4\Atom\src\eval.jl:83
in anonymous at task.jl:58
while loading D:\userdata\melese\Desktop\fft.jl, in expression starting on 
line 23

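The error itself comes from using * between two vectors; elementwise 
multiplication needs .* instead. A sketch of the windowed FFT plus a crude 
periodogram-style PSD (fs is an assumed sample rate; DSP.jl's welch_pgram is 
the closest analogue of pwelch I know of, though I haven't checked this 
against your data):

using DSP

fs = 1.0e6                            # assumed sample rate of the capture
fbw = fb[:] .* hamming(length(fb))    # elementwise window (note the .*)
F = fft(fbw)                          # FFT of the windowed signal
psd = abs2(F) / (fs * length(fbw))    # rough two-sided periodogram scaling

# Or let DSP.jl segment, window and average (Welch's method):
pg = welch_pgram(fb[:], 256, 128; fs=fs, window=hamming)
# power(pg) and freq(pg) give the PSD estimate and the frequency axis
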

On Wednesday, July 20, 2016 at 9:15:40 AM UTC-5, Yared Melese wrote:

> Hello 
>
> Would you please let me know the packages available to do windowing, FFT 
> and PSD? 
>
> Currently, I have a bin file that I have processed in Julia and need to 
> window it and take preferably the PSD, but an FFT works as well.
>
> Thanks 
> Yared
>
>  
>
>

[julia-users] Re: Help Julia win a performance comparison!

2016-07-21 Thread Chris Rackauckas
Let me explain. The easy place to add an in-place operation with resize! 
would be with the RNG call, rpois. I used resize! to make the Poisson RNG 
go a little faster. It's now:

function rpois!(n::Int, p::Vector{Float64}, out::Vector{Int})
    resize!(out, n)
    for i in 1:n
        @inbounds out[i] = StatsFuns.RFunctions.poisrand(p[i]) # rand(Poisson(p[i]))
    end
end

and then I change the script to use that. However, it doesn't help very 
much; the time in the RNG calls is still the major time-consuming element, 
and the allocation wasn't much of a factor.

The other place that has an allocation is in the R.rep function. I am not 
sure how to make that an in-place function though. For reference, this 
function takes an array like [0.1 0.2 0.3] and an array of ints [2 3 1] and 
duplicates each element that many times: [0.1 0.1 0.2 0.2 0.2 0.3]. With it 
being in-place, you don't have an easy way to reference what values should 
be duplicated how many times.  In this case, the allocation is a (pretty 
small, but still shows up) measurable part of the timing, but is harder to 
deal with.

So we're stuck with RNG time as the major issue, but can chop off a little 
more if this allocation can be dealt with better (the R code has both of 
these same issues, which is why we're virtually tied).

On Thursday, July 21, 2016 at 5:49:23 AM UTC-7, Steven G. Johnson wrote:
>
>
>
> On Thursday, July 21, 2016 at 5:37:12 AM UTC-4, Chris Rackauckas wrote:
>>
>> Maybe. I thought about that, but I don't think that satisfies the 
>> "elegant and compactness" requirement, unless there's an easy way to do the 
>> growing without too much extra code hanging around. 
>>
>
> Why is `resize!` so much harder than allocating a new array? 
>


[julia-users] Re: Coveralls and coverage issues

2016-07-21 Thread Simon Frost
Odd; Chrome wasn't refreshing properly. Fancy looking at my code-speed 
question ;)?

On Thursday, July 21, 2016 at 3:49:01 PM UTC+1, Chris Rackauckas wrote:
>
> Click refresh when you're on the repo readme? It updated on my screen, 
> refresh to make sure you're not displaying the site from cache.
>
> On Thursday, July 21, 2016 at 7:41:47 AM UTC-7, Simon Frost wrote:
>>
>> Dear Chris,
>>
>> Yes, I am an idiot ;)
>>
>> Any idea why the badge isn't updating?
>>
>> Best
>> Simon
>>
>> On Thursday, July 21, 2016 at 9:06:51 AM UTC+1, Chris Rackauckas wrote:
>>>
>>> Look at the files it's trying to cover... it's DataFrames.jl :)
>>>
>>> I sent you a pull request to fix your travis.yml to be for your package.
>>>
>>> On Thursday, July 21, 2016 at 12:16:35 AM UTC-7, Simon Frost wrote:

 Dear All,

 I'm trying to get code coverage working, but despite having some tests 
 - at the moment, just running examples - I get 0% coverage

 http://github.com/sdwfrost/Gillespie.jl

 Is this because I'm just using 'include' in runtests.jl?

 Best
 Simon

>>>

[julia-users] Strange performance issue in filling in a matrix column

2016-07-21 Thread Michael Prange
I'm a new user, so have mercy in your responses. 

I've written a method that takes a matrix and vector as input and then 
fills in column icol of that matrix with the vector of given values that 
have been shifted upward by ishift indices with periodic boundary 
conditions. To make this clear, given the matrix

W = [1  2
     3  4
     5  6]

the vector w = [7  8  9], icol = 2 and ishift = 1, the new value of W is 
given by

W = [1  8
     3  9
     5  7]

I need a fast way of doing this for large matrices. I wrote three methods 
that should (in my naive mind) give the same performance, but @time 
reports otherwise. The method definitions and the performance results are 
given below. Can someone teach me why the results are so different? The 
method fill_W! is too wordy for my tastes, but the more compact notation in 
fill_W1! and fill_W2! achieves poorer results. And why do these latter two 
methods allocate so much memory, when the whole point of them is to use 
already-allocated memory?

Michael

### Definitions


function fill_W1!{TF}(W::Matrix{TF}, icol::Int, w::Vector{TF}, ishift::Int)
    @assert(size(W,1) == length(w), "Dimension mismatch between W and w")
    W[1:(end-ishift), icol] = w[(ishift+1):end]
    W[(end-(ishift-1)):end, icol] = w[1:ishift]
    return
end


function fill_W2!{TF}(W::Matrix{TF}, icol::Int, w::Vector{TF}, ishift::Int)
    @assert(size(W,1) == length(w), "Dimension mismatch between W and w")
    [W[i,icol] = w[i+ishift] for i in 1:(length(w)-ishift)]
    [W[end-ishift+i,icol] = w[i] for i in 1:ishift]
    return
end


function fill_W!{TF}(W::Matrix{TF}, icol::Int, w::Vector{TF}, ishift::Int)
    @assert(size(W,1) == length(w), "Dimension mismatch between W and w")
    n = length(w)
    for j in 1:(n-ishift)
        W[j,icol] = w[j+ishift]
    end
    for j in (n-(ishift-1)):n
        W[j,icol] = w[j-(n-ishift)]
    end
end


# Performance Results
julia>
W = rand(10^6, 2)
w = rand(10^6)
println("fill_W!:")
println(@time fill_W!(W, 2, w, 2))
println("fill_W1!:")
println(@time fill_W1!(W, 2, w, 2))
println("fill_W2!:")
println(@time fill_W2!(W, 2, w, 2))


Out>
fill_W!:
 0.002801 seconds (4 allocations: 160 bytes)
nothing
fill_W1!:
 0.007427 seconds (9 allocations: 7.630 MB)
[0.152463397611579,0.6314166578356002]
fill_W2!:
 0.005587 seconds (7 allocations: 7.630 MB)
[0.152463397611579,0.6314166578356002]

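Two things are likely at work here: the first @time call on each method may 
include JIT compilation (time a second call to compare steady state), and 
the right-hand-side slices in fill_W1! and the comprehensions in fill_W2! 
each allocate full temporaries, which is where the ~7.6 MB comes from. A 
sketch of fill_W1! using sub views (Julia 0.4) so the right-hand sides stop 
allocating:

function fill_W1v!{TF}(W::Matrix{TF}, icol::Int, w::Vector{TF}, ishift::Int)
    @assert(size(W,1) == length(w), "Dimension mismatch between W and w")
    n = length(w)
    # sub() returns a view, so no temporary copy of w is made here
    W[1:(n-ishift), icol] = sub(w, (ishift+1):n)
    W[(n-ishift+1):n, icol] = sub(w, 1:ishift)
    return
end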



[julia-users] Performance issues with stochastic simulation

2016-07-21 Thread Simon Frost
Dear All,

I'm having some issues with code speed for some Gillespie type simulations. 
The toy model is described here:

http://phylodynamics.blogspot.co.uk/2013/06/comparing-performance-of-r-and-rcpp-for.html
http://phylodynamics.blogspot.co.uk/2013/06/an-sir-model-in-julia.html

I get good performance with my vanilla Julia code, but a more generic 
implementation is slower:

http://github.com/sdwfrost/Gillespie.jl

The gist is here:

https://gist.github.com/sdwfrost/1b4bce19faf2d7b8624cac048a36f32d

Lines 57 and 70 appear to be the culprits:

https://github.com/sdwfrost/Gillespie.jl/blob/master/src/SSA.jl

I've tried some devectorisation, but in my hackery, I appear to get side 
effects, where the argument x0 passed to the ssa function is modified. Any 
tips?

Best
Simon


[julia-users] Re: Coveralls and coverage issues

2016-07-21 Thread Chris Rackauckas
Click refresh when you're on the repo readme? It updated on my screen, 
refresh to make sure you're not displaying the site from cache.

On Thursday, July 21, 2016 at 7:41:47 AM UTC-7, Simon Frost wrote:
>
> Dear Chris,
>
> Yes, I am an idiot ;)
>
> Any idea why the badge isn't updating?
>
> Best
> Simon
>
> On Thursday, July 21, 2016 at 9:06:51 AM UTC+1, Chris Rackauckas wrote:
>>
>> Look at the files it's trying to cover... it's DataFrames.jl :)
>>
>> I sent you a pull request to fix your travis.yml to be for your package.
>>
>> On Thursday, July 21, 2016 at 12:16:35 AM UTC-7, Simon Frost wrote:
>>>
>>> Dear All,
>>>
>>> I'm trying to get code coverage working, but despite having some tests - 
>>> at the moment, just running examples - I get 0% coverage
>>>
>>> http://github.com/sdwfrost/Gillespie.jl
>>>
>>> Is this because I'm just using 'include' in runtests.jl?
>>>
>>> Best
>>> Simon
>>>
>>

[julia-users] Re: Coveralls and coverage issues

2016-07-21 Thread Simon Frost
Dear Chris,

Yes, I am an idiot ;)

Any idea why the badge isn't updating?

Best
Simon

On Thursday, July 21, 2016 at 9:06:51 AM UTC+1, Chris Rackauckas wrote:
>
> Look at the files it's trying to cover... it's DataFrames.jl :)
>
> I sent you a pull request to fix your travis.yml to be for your package.
>
> On Thursday, July 21, 2016 at 12:16:35 AM UTC-7, Simon Frost wrote:
>>
>> Dear All,
>>
>> I'm trying to get code coverage working, but despite having some tests - 
>> at the moment, just running examples - I get 0% coverage
>>
>> http://github.com/sdwfrost/Gillespie.jl
>>
>> Is this because I'm just using 'include' in runtests.jl?
>>
>> Best
>> Simon
>>
>

Re: [julia-users] Help Julia win a performance comparison!

2016-07-21 Thread Tom Breloff
I had the same thought. Could just make a new AbstractArray which keeps a
larger array and tracks the current usage. I bet it's 10 lines of code to
make it generic.

On Thursday, July 21, 2016, Christoph Ortner 
wrote:

>
> feels like one may want a little auxiliary package that can make available
> small chunks from a long pre-allocated vector.
>
> On Thursday, 21 July 2016 10:37:12 UTC+1, Chris Rackauckas wrote:
>>
>> Maybe. I thought about that, but I don't think that satisfies the
>> "elegant and compactness" requirement, unless there's an easy way to do the
>> growing without too much extra code hanging around.
>>
>> On Thursday, July 21, 2016 at 1:54:10 AM UTC-7, Christoph Ortner wrote:
>>>
>>> could still preallocate and grow as needed?
>>>
>>> On Thursday, 21 July 2016 02:48:58 UTC+1, Chris Rackauckas wrote:

 Most of the arrays are changing size each time though, since they
 represent a population which changes each timestep.

 On Wednesday, July 20, 2016 at 6:47:39 PM UTC-7, Steven G. Johnson
 wrote:
>
> It looks like you are allocating lots of arrays in your doStep
> inner-loop function, so I'm sure you could improve it by moving the
> allocations out of the inner loop.  (In general, vectorized routines are
> convenient but they aren't the fastest way to do things.)
>



[julia-users] name of current executable (similar to Bash $0)

2016-07-21 Thread Curtis Vogt
I believe what you want is the constant `PROGRAM_FILE`. 
http://julia.readthedocs.io/en/latest/stdlib/constants/#Base.PROGRAM_FILE
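
A minimal sketch of dispatching on the invoked name (foo and bar are 
hypothetical symlink names; this assumes a Julia recent enough to provide 
PROGRAM_FILE):

#!/usr/bin/env julia
# basename(PROGRAM_FILE) keeps the symlink's own name, not its target's
name = basename(PROGRAM_FILE)
if name == "foo"
    println("running the foo task")
else
    println("running the bar task")
end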

[julia-users] name of current executable (similar to Bash $0)

2016-07-21 Thread Tamas Papp
I am using a script written in Julia, called with the shebang line

#!/usr/bin/env julia

Now I would like to have symlinks of a different name to the script, and
have it perform slightly different tasks depending on which name was
used. In Bash I would use $0; is there something equivalent in Julia?
Does not need to be portable, Linux is fine.

Best,

Tamas


[julia-users] Re: Help Julia win a performance comparison!

2016-07-21 Thread Christoph Ortner

feels like one may want a little auxiliary package that can make available 
small chunks from a long pre-allocated vector.

On Thursday, 21 July 2016 10:37:12 UTC+1, Chris Rackauckas wrote:
>
> Maybe. I thought about that, but I don't think that satisfies the "elegant 
> and compactness" requirement, unless there's an easy way to do the growing 
> without too much extra code hanging around. 
>
> On Thursday, July 21, 2016 at 1:54:10 AM UTC-7, Christoph Ortner wrote:
>>
>> could still preallocate and grow as needed?
>>
>> On Thursday, 21 July 2016 02:48:58 UTC+1, Chris Rackauckas wrote:
>>>
>>> Most of the arrays are changing size each time though, since they 
>>> represent a population which changes each timestep.
>>>
>>> On Wednesday, July 20, 2016 at 6:47:39 PM UTC-7, Steven G. Johnson wrote:

 It looks like you are allocating lots of arrays in your doStep 
 inner-loop function, so I'm sure you could improve it by moving the 
 allocations out of the inner loop.  (In general, vectorized routines are 
 convenient but they aren't the fastest way to do things.)

>>>

[julia-users] Re: Help Julia win a performance comparison!

2016-07-21 Thread Chris Rackauckas
Maybe. I thought about that, but I don't think that satisfies the "elegant 
and compactness" requirement, unless there's an easy way to do the growing 
without too much extra code hanging around. 

On Thursday, July 21, 2016 at 1:54:10 AM UTC-7, Christoph Ortner wrote:
>
> could still preallocate and grow as needed?
>
> On Thursday, 21 July 2016 02:48:58 UTC+1, Chris Rackauckas wrote:
>>
>> Most of the arrays are changing size each time though, since they 
>> represent a population which changes each timestep.
>>
>> On Wednesday, July 20, 2016 at 6:47:39 PM UTC-7, Steven G. Johnson wrote:
>>>
>>> It looks like you are allocating lots of arrays in your doStep 
>>> inner-loop function, so I'm sure you could improve it by moving the 
>>> allocations out of the inner loop.  (In general, vectorized routines are 
>>> convenient but they aren't the fastest way to do things.)
>>>
>>

[julia-users] Re: Help Julia win a performance comparison!

2016-07-21 Thread Christoph Ortner
could still preallocate and grow as needed?

On Thursday, 21 July 2016 02:48:58 UTC+1, Chris Rackauckas wrote:
>
> Most of the arrays are changing size each time though, since they 
> represent a population which changes each timestep.
>
> On Wednesday, July 20, 2016 at 6:47:39 PM UTC-7, Steven G. Johnson wrote:
>>
>> It looks like you are allocating lots of arrays in your doStep inner-loop 
>> function, so I'm sure you could improve it by moving the allocations out of 
>> the inner loop.  (In general, vectorized routines are convenient but they 
>> aren't the fastest way to do things.)
>>
>

[julia-users] Re: Coveralls and coverage issues

2016-07-21 Thread Chris Rackauckas
Look at the files it's trying to cover... it's DataFrames.jl :)

I sent you a pull request to fix your travis.yml to be for your package.

On Thursday, July 21, 2016 at 12:16:35 AM UTC-7, Simon Frost wrote:
>
> Dear All,
>
> I'm trying to get code coverage working, but despite having some tests - 
> at the moment, just running examples - I get 0% coverage
>
> http://github.com/sdwfrost/Gillespie.jl
>
> Is this because I'm just using 'include' in runtests.jl?
>
> Best
> Simon
>


[julia-users] Coveralls and coverage issues

2016-07-21 Thread Simon Frost
Dear All,

I'm trying to get code coverage working, but despite having some tests - at 
the moment, just running examples - I get 0% coverage

http://github.com/sdwfrost/Gillespie.jl

Is this because I'm just using 'include' in runtests.jl?

Best
Simon