Re: [julia-users] juliabloggers.com is now live!

2014-06-17 Thread Mauro
Thanks for putting this together!  One more thing about authors: on pages 
like this one,
http://www.juliabloggers.com/using-asciiplots-jl/
there should be the same attribution as on the front page.

On Tuesday, June 17, 2014 1:19:48 AM UTC+1, Randy Zwitch wrote:
>
> Ok, there is now more obvious attribution on each post, with the author 
> name and link of the original post prominently displayed before the article.
>
> If anyone else has any other recommendations/requests (still need a 
> logo!), please let me know.
>


Re: [julia-users] Benchmarking study: C++ < Fortran < Numba < Julia < Java < Matlab < the rest

2014-06-17 Thread Florian Oswald
Hi Dahua,
I cannot find Base.maxabs (i.e. Julia says Base.maxabs not defined)

I'm here:

julia> versioninfo()
Julia Version 0.3.0-prerelease+2703
Commit 942ae42* (2014-04-22 18:57 UTC)
Platform Info:
  System: Darwin (x86_64-apple-darwin12.5.0)
  CPU: Intel(R) Core(TM) i5-2435M CPU @ 2.40GHz
  WORD_SIZE: 64
  BLAS: libgfortblas
  LAPACK: liblapack
  LIBM: libopenlibm

cheers

On Monday, 16 June 2014 17:13:44 UTC+1, Dahua Lin wrote:
>
> First, I agree with John that you don't have to declare the types in 
> general, like in a compiled language. It seems that Julia would be able to 
> infer the types of most variables in your codes.
>
> There are several ways that your code's efficiency may be improved:
>
> (1) You can use @inbounds to waive bound checking in several places, such 
> as line 94 and 95 (in RBC_Julia.jl)
> (2) Line 114 and 116 involves reallocating new arrays, which is probably 
> unnecessary. Also note that Base.maxabs can compute the maximum of absolute 
> value more efficiently than maximum(abs( ... ))
>
> In terms of measurement, did you pre-compile the function before measuring 
> the runtime?
>
> A side note about code style. It seems that it uses a lot of Java-ish 
> descriptive names with camel case. Julia practice tends to encourage more 
> concise naming.
>
> Dahua
>
>
>
> On Monday, June 16, 2014 10:55:50 AM UTC-5, John Myles White wrote:
>>
>> Maybe it would be good to verify the claim made at 
>> https://github.com/jesusfv/Comparison-Programming-Languages-Economics/blob/master/RBC_Julia.jl#L9
>>  
>>
>> I would think that specifying all those types wouldn’t matter much if the 
>> code doesn’t have type-stability problems. 
>>
>>  — John 
>>
>> On Jun 16, 2014, at 8:52 AM, Florian Oswald  
>> wrote: 
>>
>> > Dear all, 
>> > 
>> > I thought you might find this paper interesting: 
>> http://economics.sas.upenn.edu/~jesusfv/comparison_languages.pdf 
>> > 
>> > It takes a standard model from macroeconomics and computes its 
>> solution with an identical algorithm in several languages. Julia is roughly 
>> 2.6 times slower than the best C++ executable. I was a bit puzzled by the 
>> result, since in the benchmarks on http://julialang.org/, the slowest 
>> test is 1.66 times C. I realize that those benchmarks can't cover all 
>> possible situations. That said, I couldn't really find anything unusual in 
>> the Julia code, did some profiling and removed type inference, but still 
>> that's as fast as I got it. That's not to say that I'm disappointed, I 
>> still think this is great. Did I miss something obvious here or is there 
>> something specific to this algorithm? 
>> > 
>> > The codes are on github at 
>> > 
>> > https://github.com/jesusfv/Comparison-Programming-Languages-Economics 
>> > 
>> > 
>>
>>
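The two tips quoted above can be sketched as follows. Note that `Base.maxabs` is from the Julia 0.3 era; on current Julia, `maximum(abs, x)` computes the same thing without the temporary array that `maximum(abs(...))` allocates, and an explicit loop shows the `@inbounds` waiver. The function name is a hypothetical stand-in:

```julia
# Non-allocating max(|x|) with the bounds-check waiver the tips describe.
function maxabs_loop(x::Vector{Float64})
    m = 0.0
    @inbounds for i in 1:length(x)   # @inbounds skips bounds checks in this loop
        v = abs(x[i])
        v > m && (m = v)
    end
    return m
end

x = [3.0, -7.5, 2.0]
maxabs_loop(x)   # == maximum(abs, x) == 7.5, but with no intermediate array
```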

Re: [julia-users] support for '?' suffix on functions that return boolean types.

2014-06-17 Thread Job van der Zwan
On Monday, 16 June 2014 03:33:32 UTC+2, Jacob Quinn wrote:
>
> it has nice discoverability properties (tab-completion)
>
Oh that's an interesting one. Never consciously thought of the interaction 
between naming conventions and autocomplete functionality before.
 

> isn't generally too awkward (though double s's are sometimes weird 
> `issubset` `issubtype`, and it took me a while to figure out isa() => is 
> a). 
>

I take it you don't like camel case? (which makes me wonder: is there a 
consensus for the idiomatic way to label multi-word identifiers in Julia?) 


Re: [julia-users] Benchmarking study: C++ < Fortran < Numba < Julia < Java < Matlab < the rest

2014-06-17 Thread Tomas Lycken
It seems Base.maxabs was added (by Dahua) as late as May 30 
- 
https://github.com/JuliaLang/julia/commit/78bbf10c125a124bc8a1a25e8aaaea1cbc6e0ebc

If you update your Julia to the latest master, you'll have it =)

// T

On Tuesday, June 17, 2014 10:20:05 AM UTC+2, Florian Oswald wrote:
>
> Hi Dahua,
> I cannot find Base.maxabs (i.e. Julia says Base.maxabs not defined)
>
> I'm here:
>
> julia> versioninfo()
> Julia Version 0.3.0-prerelease+2703
> Commit 942ae42* (2014-04-22 18:57 UTC)
> Platform Info:
>   System: Darwin (x86_64-apple-darwin12.5.0)
>   CPU: Intel(R) Core(TM) i5-2435M CPU @ 2.40GHz
>   WORD_SIZE: 64
>   BLAS: libgfortblas
>   LAPACK: liblapack
>   LIBM: libopenlibm
>
> cheers
>
> On Monday, 16 June 2014 17:13:44 UTC+1, Dahua Lin wrote:
>>
>> First, I agree with John that you don't have to declare the types in 
>> general, like in a compiled language. It seems that Julia would be able to 
>> infer the types of most variables in your codes.
>>
>> There are several ways that your code's efficiency may be improved:
>>
>> (1) You can use @inbounds to waive bound checking in several places, such 
>> as line 94 and 95 (in RBC_Julia.jl)
>> (2) Line 114 and 116 involves reallocating new arrays, which is probably 
>> unnecessary. Also note that Base.maxabs can compute the maximum of absolute 
>> value more efficiently than maximum(abs( ... ))
>>
>> In terms of measurement, did you pre-compile the function before 
>> measuring the runtime?
>>
>> A side note about code style. It seems that it uses a lot of Java-ish 
>> descriptive names with camel case. Julia practice tends to encourage more 
>> concise naming.
>>
>> Dahua
>>
>>
>>
>> On Monday, June 16, 2014 10:55:50 AM UTC-5, John Myles White wrote:
>>>
>>> Maybe it would be good to verify the claim made at 
>>> https://github.com/jesusfv/Comparison-Programming-Languages-Economics/blob/master/RBC_Julia.jl#L9
>>>  
>>>
>>> I would think that specifying all those types wouldn’t matter much if 
>>> the code doesn’t have type-stability problems. 
>>>
>>>  — John 
>>>
>>> On Jun 16, 2014, at 8:52 AM, Florian Oswald  
>>> wrote: 
>>>
>>> > Dear all, 
>>> > 
>>> > I thought you might find this paper interesting: 
>>> http://economics.sas.upenn.edu/~jesusfv/comparison_languages.pdf 
>>> > 
>>> > It takes a standard model from macroeconomics and computes its 
>>> solution with an identical algorithm in several languages. Julia is roughly 
>>> 2.6 times slower than the best C++ executable. I was a bit puzzled by the 
>>> result, since in the benchmarks on http://julialang.org/, the slowest 
>>> test is 1.66 times C. I realize that those benchmarks can't cover all 
>>> possible situations. That said, I couldn't really find anything unusual in 
>>> the Julia code, did some profiling and removed type inference, but still 
>>> that's as fast as I got it. That's not to say that I'm disappointed, I 
>>> still think this is great. Did I miss something obvious here or is there 
>>> something specific to this algorithm? 
>>> > 
>>> > The codes are on github at 
>>> > 
>>> > https://github.com/jesusfv/Comparison-Programming-Languages-Economics 
>>> > 
>>> > 
>>>
>>>

Re: [julia-users] Benchmarking study: C++ < Fortran < Numba < Julia < Java < Matlab < the rest

2014-06-17 Thread Florian Oswald
hi tim - True!
(why on earth would I do that?)

defining it outside reproduces the speed gain. thanks!
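Tim's explanation (quoted below) can be sketched like this, with hypothetical names. On the Julia of this era, a helper defined inside another function behaved like a slow anonymous function; defined at top level, it is a fast generic function:

```julia
# Helper defined once at top level: a generic function, compiled and fast.
mylog(x::Float64) = log(x)

function compute(xs::Vector{Float64})
    s = 0.0
    for x in xs
        s += mylog(x)   # dispatches to the compiled generic function
    end
    return s
end

compute([1.0, 2.0, 4.0])   # ≈ log(8.0), since log(1) + log(2) + log(4) = log(8)
```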


On 16 June 2014 18:30, Tim Holy  wrote:

> From the sound of it, one possibility is that you made it a "private
> function"
> inside the computeTuned function. That creates the equivalent of an
> anonymous
> function, which is slow. You need to make it a generic function (define it
> outside computeTuned).
>
> --Tim
>
> On Monday, June 16, 2014 06:16:49 PM Florian Oswald wrote:
> > interesting!
> > just tried that - I defined mylog inside the computeTuned function
> >
> >
> https://github.com/floswald/Comparison-Programming-Languages-Economics/blob/
> > master/julia/floswald/model.jl#L193
> >
> > but that actually slowed things down considerably. I'm on a mac as well,
> > but it seems that's not enough to compare this? or where did you define
> > this function?
> >
> >
> > On 16 June 2014 18:02, Andreas Noack Jensen <
> andreasnoackjen...@gmail.com>
> >
> > wrote:
> > > I think that the log in openlibm is slower than most system logs. On my
> > > mac, if I use
> > >
> > > mylog(x::Float64) = ccall((:log, "libm"), Float64, (Float64,), x)
> > >
> > > the code runs 25 pct. faster. If I also use @inbounds and devectorise
> the
> > > max(abs) it runs in 2.26 seconds on my machine. The C++ version with
> the
> > > XCode compiler and -O3 runs in 1.9 seconds.
> > >
> > >
> > > 2014-06-16 18:21 GMT+02:00 Florian Oswald :
> > >
> > > Hi guys,
> > >
> > >> thanks for the comments. Notice that I'm not the author of this code
> [so
> > >> variable names are not on me :-) ] just tried to speed it up a bit. In
> > >> fact, declaring types before running the computation function and
> using
> > >> @inbounds made the code 24% faster than the benchmark version. here's
> my
> > >> attempt
> > >>
> > >>
> > >>
> https://github.com/floswald/Comparison-Programming-Languages-Economics/tr
> > >> ee/master/julia/floswald
> > >>
> > >> should try the Base.maxabs.
> > >>
> > >> in profiling this i found that a lot of time is spent here:
> > >>
> > >>
> > >>
> https://github.com/floswald/Comparison-Programming-Languages-Economics/bl
> > >> ob/master/julia/floswald/model.jl#L119
> > >>
> > >> which i'm not sure how to avoid.
> > >>
> > >> On 16 June 2014 17:13, Dahua Lin  wrote:
> > >>> First, I agree with John that you don't have to declare the types in
> > >>> general, like in a compiled language. It seems that Julia would be
> able
> > >>> to
> > >>> infer the types of most variables in your codes.
> > >>>
> > >>> There are several ways that your code's efficiency may be improved:
> > >>>
> > >>> (1) You can use @inbounds to waive bound checking in several places,
> > >>> such as line 94 and 95 (in RBC_Julia.jl)
> > >>> (2) Line 114 and 116 involves reallocating new arrays, which is
> probably
> > >>> unnecessary. Also note that Base.maxabs can compute the maximum of
> > >>> absolute
> > >>> value more efficiently than maximum(abs( ... ))
> > >>>
> > >>> In terms of measurement, did you pre-compile the function before
> > >>> measuring the runtime?
> > >>>
> > >>> A side note about code style. It seems that it uses a lot of Java-ish
> > >>> descriptive names with camel case. Julia practice tends to encourage
> > >>> more
> > >>> concise naming.
> > >>>
> > >>> Dahua
> > >>>
> > >>> On Monday, June 16, 2014 10:55:50 AM UTC-5, John Myles White wrote:
> >  Maybe it would be good to verify the claim made at
> >  https://github.com/jesusfv/Comparison-Programming-> 
> Languages-Economics/blob/master/RBC_Julia.jl#L9
> > 
> >  I would think that specifying all those types wouldn’t matter much
> if
> >  the code doesn’t have type-stability problems.
> > 
> >   — John
> > 
> >  On Jun 16, 2014, at 8:52 AM, Florian Oswald 
> > 
> >  wrote:
> >  > Dear all,
> > 
> >  > I thought you might find this paper interesting:
> >  http://economics.sas.upenn.edu/~jesusfv/comparison_languages.pdf
> > 
> >  > It takes a standard model from macroeconomics and computes its
> > 
> >  solution with an identical algorithm in several languages. Julia is
> >  roughly
> >  2.6 times slower than the best C++ executable. I was a bit puzzled by
> the
> >  result, since in the benchmarks on http://julialang.org/, the
> slowest
> >  test is 1.66 times C. I realize that those benchmarks can't cover
> all
> >  possible situations. That said, I couldn't really find anything
> unusual
> >  in
> >  the Julia code, did some profiling and removed type inference, but
> >  still
> >  that's as fast as I got it. That's not to say that I'm
> disappointed, I
> >  still think this is great. Did I miss something obvious here or is
> >  there
> >  something specific to this algorithm?
> > 
> >  > The codes are on github at
> >  >
> >  >
> https://github.com/jesusfv/Comparison-Programming-Languages-Economics
> > >

[julia-users] Using multiple y-axes in PyPlot

2014-06-17 Thread Tomas Lycken
Is there a way to use multiple axes in PyPlot, such as [this matplotlib 
example](http://matplotlib.org/examples/api/two_scales.html)?

I've tried various approaches to get the axes objects I need, but I can't 
manage to get them as anything but general PyObjects, which of course don't 
have the plotting methods I need.

Anyone got any hints? =)

// T




Re: [julia-users] support for '?' suffix on functions that return boolean types.

2014-06-17 Thread Mauro
On Tue, 2014-06-17 at 09:29, j.l.vanderz...@gmail.com wrote:
> On Monday, 16 June 2014 03:33:32 UTC+2, Jacob Quinn wrote:
>>
>> it has nice discoverability properties (tab-completion)
>>
> Oh that's an interesting one. Never consciously thought of the interaction 
> between naming conventions and autocomplete functionality before.
>  
>
>> isn't generally too awkward (though double s's are sometimes weird 
>> `issubset` `issubtype`, and it took me a while to figure out isa() => is 
>> a). 
>>
>
> I take it you don't like camel case? (which makes me wonder: is there a 
> consensus for the idiomatic way to label multi-word identifiers in Julia?) 

http://docs.julialang.org/en/latest/manual/variables/?highlight=camelcase#stylistic-conventions

says:
- Names of variables are in lower case.
- Word separation can be indicated by underscores ('\_'), but use of
  underscores is discouraged unless the name would be hard to read
  otherwise.
- Names of Types begin with a capital letter and word separation is
  shown with CamelCase instead of underscores.
- Names of functions and macros are in lower case, without underscores.
- Functions that modify their inputs have names that end in !. These
  functions are sometimes called mutating functions or in-place
  functions.

I think the general style is: try to use a single word.  If using
multiple words, pick a combination that can be read without _; if that is
not possible, use _.
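A sketch of those conventions with hypothetical names (`struct` is the current spelling of what this era wrote as `type`):

```julia
struct GridPoint                # type name: capitalized, CamelCase
    x::Float64
    y::Float64
end

# predicate: lowercase, no underscore, "is" prefix
isorigin(p::GridPoint) = p.x == 0.0 && p.y == 0.0

# mutates its input, so the name ends in !
function scale!(v::Vector{Float64}, a::Float64)
    for i in 1:length(v)
        v[i] *= a
    end
    return v
end
```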


[julia-users] Re: animation using Gtk+/Cairo

2014-06-17 Thread Andreas Lobinger
Hello colleague,

I'm doing Gtk(2)+Cairo animations (in Python, to admit it) in a different 
context, and I have done it using the GTK main loop and GLib timers; but I 
had to put some effort into making the animation fast, so as not to block 
the GTK main loop for too long.

In Gtk.jl, afaiu, the implementors chose not to use gtk_main, so the 
straightforward way should be to use Julia's timing. I read the GTK+Cairo 
threading tutorial some time ago, but it looked scary to me.

What you could think about is actually implementing backing storage on your 
own: prepare a cairo.ImageSurface, draw to that (in its own thread), and copy 
it to the Gtk canvas (with set_source and paint()), i.e. copy it to the 
cairo_context of the canvas in the expose event. The copying is pretty fast 
nowadays. 
I just realized that the 'alternative' solution on the link is something 
like this.




Re: [julia-users] Benchmarking study: C++ < Fortran < Numba < Julia < Java < Matlab < the rest

2014-06-17 Thread Milan Bouchet-Valat
Le lundi 16 juin 2014 à 14:59 -0700, Jesus Villaverde a écrit :
> Also, defining
> 
> mylog(x::Float64) = ccall((:log, "libm"), Float64, (Float64,), x)
> 
> made quite a bit of difference for me, from 1.92 to around 1.55. If I
> also add @inbounds, I go down to 1.45, making Julia only twice as
> slow as C++. Numba still beats Julia, which kind of bothers me a bit
Since Numba uses LLVM too, you should be able to compare the LLVM IR it
generates to that generated by Julia. Doing this at least for the tight
loop would be very interesting.


My two cents
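Milan's comparison can be started directly from Julia; a minimal sketch (the function `f` is a toy stand-in, and the Numba side would come from its `inspect_llvm()`):

```julia
using InteractiveUtils   # provides code_llvm / @code_llvm outside the REPL

f(x) = 2x + 1
code_llvm(stdout, f, (Float64,))   # dumps the LLVM IR generated for f(::Float64)
# In the REPL, `@code_llvm f(1.0)` does the same thing.
```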

> Thanks for the suggestions.
> 
> On Monday, June 16, 2014 4:56:34 PM UTC-4, Jesus Villaverde wrote:
> Hi
> 
> 
> 1) Yes, we pre-compiled the function.
> 
> 
> 2) As I mentioned before, we tried the code with and without
> type declaration, it makes a difference.
> 
> 
> 3) The variable names turns out to be quite useful because
> this code will be eventually nested into a much larger project
> where it is convenient to have very explicit names.
> 
> 
> Thanks 
> 
> On Monday, June 16, 2014 12:13:44 PM UTC-4, Dahua Lin wrote:
> First, I agree with John that you don't have to
> declare the types in general, like in a compiled
> language. It seems that Julia would be able to infer
> the types of most variables in your codes.
> 
> 
> There are several ways that your code's efficiency may
> be improved:
> 
> 
> (1) You can use @inbounds to waive bound checking in
> several places, such as line 94 and 95 (in
> RBC_Julia.jl)
> (2) Line 114 and 116 involves reallocating new arrays,
> which is probably unnecessary. Also note that
> Base.maxabs can compute the maximum of absolute value
> more efficiently than maximum(abs( ... ))
> 
> 
> In terms of measurement, did you pre-compile the
> function before measuring the runtime?
> 
> 
> A side note about code style. It seems that it uses a
> lot of Java-ish descriptive names with camel case.
> Julia practice tends to encourage more concise naming.
> 
> 
> Dahua
> 
> 
> 
> 
> 
> On Monday, June 16, 2014 10:55:50 AM UTC-5, John Myles
> White wrote:
> Maybe it would be good to verify the claim
> made at
> 
> https://github.com/jesusfv/Comparison-Programming-Languages-Economics/blob/master/RBC_Julia.jl#L9
>  
> 
> I would think that specifying all those types
> wouldn’t matter much if the code doesn’t have
> type-stability problems. 
> 
>  — John 
> 
> On Jun 16, 2014, at 8:52 AM, Florian Oswald
>  wrote: 
> 
> > Dear all, 
> > 
> > I thought you might find this paper
> interesting:
> 
> http://economics.sas.upenn.edu/~jesusfv/comparison_languages.pdf 
> > 
> > It takes a standard model from
> macroeconomics and computes its solution with an
> identical algorithm in several languages.
> Julia is roughly 2.6 times slower than the
> best C++ executable. I was a bit puzzled by the
> result, since in the benchmarks on
> http://julialang.org/, the slowest test is
> 1.66 times C. I realize that those benchmarks
> can't cover all possible situations. That
> said, I couldn't really find anything unusual
> in the Julia code, did some profiling and
> removed type inference, but still that's as
> fast as I got it. That's not to say that I'm
> disappointed, I still think this is great. Did
> I miss something obvious here or is there
> something specific to this algorithm? 
> > 
> > The codes are on github at 
> > 
>

[julia-users] parallel loop with mutable types

2014-06-17 Thread Jon Norberg
I have: 

type parms
  r::Float64
  K::Float64
end

k=Array(parms,20)

for i =1:20 k[i]=parms(1.1,2.2) end

addprocs(1)

nprocs() -> 2

@parallel for i=1:20 k[i].r=2.0 end

gives error:

julia> @parallel for i=1:20 k[i].r=2.0 end
fatal error on 2:
julia> ERROR: parms not defined
 in deserialize at serialize.jl:470
 in handle_deserialize at serialize.jl:327
 in deserialize at serialize.jl:398
 in handle_deserialize at serialize.jl:327
 in deserialize at serialize.jl:310
 in anonymous at serialize.jl:330
 in ntuple at tuple.jl:30
 in deserialize_tuple at serialize.jl:330
 in handle_deserialize at serialize.jl:322
 in deserialize at serialize.jl:368
 in handle_deserialize at serialize.jl:327
 in deserialize at serialize.jl:310
 in anonymous at serialize.jl:330
 in ntuple at tuple.jl:30
 in deserialize_tuple at serialize.jl:330
 in handle_deserialize at serialize.jl:322
 in deserialize at serialize.jl:368
 in handle_deserialize at serialize.jl:327
 in anonymous at task.jl:835
Worker 2 terminated.

Do I need to share the k array or somehow let the parallel code know how to 
handle an array of type parms?

Thanks for any help here!

Jon


[julia-users] Re: An appreciation of two contributors among many

2014-06-17 Thread Jon Norberg
I'll second that, great community and some very very helpful people that 
put a lot of effort into this.

Thanks


Re: [julia-users] juliabloggers.com is now live!

2014-06-17 Thread Hans W Borchers
I am thinking about a blog specializing in numerical math, optimization, 
and maybe simulation, in Julia.  I will not set up a blog of my own, but 
I would like to contribute to such a blog.  Is there something like a 
"shared/joined" blog, or are there some of you interested in this area 
and in working together on such a blog?



[julia-users] Re: parallel loop with mutable types

2014-06-17 Thread Tomas Lycken


You’ll need to evaluate the type definition on all processes, using the 
macro 

@everywhere type parms
...
end

*after* adding the worker process. If I do that, I can run your code 
without error. (However, k seems to be unchanged - you might have to use a 
DArray (distributed array) in order to get the changes back to the main 
thread…)

// T
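Putting Tomas' fix together in one runnable sketch, in current syntax (the `Distributed` stdlib, `struct` in place of this era's `type`, and `@distributed`, the successor of `@parallel`):

```julia
using Distributed

addprocs(1)                # add the worker process first ...
@everywhere struct Parms   # ... then define the type on *all* processes
    r::Float64
    K::Float64
end

k = [Parms(1.1, 2.2) for _ in 1:20]

# @distributed reduces the loop's per-iteration results; each iteration
# here returns the doubled r value, and vcat collects them in order.
rs = @distributed (vcat) for i in 1:20
    2.0 * k[i].r
end
```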
On Tuesday, June 17, 2014 12:27:52 PM UTC+2, Jon Norberg wrote:

I have: 
>
> type parms
>   r::Float64
>   K::Float64
> end
>
> k=Array(parms,20)
>
> for i =1:20 k[i]=parms(1.1,2.2) end
>
> addprocs(1)
>
> nprocs() -> 2
>
> @parallel for i=1:20 k[i].r=2.0 end
>
> gives error:
>
> julia> @parallel for i=1:20 k[i].r=2.0 end
> fatal error on 2:
> julia> ERROR: parms not defined
>  in deserialize at serialize.jl:470
>  in handle_deserialize at serialize.jl:327
>  in deserialize at serialize.jl:398
>  in handle_deserialize at serialize.jl:327
>  in deserialize at serialize.jl:310
>  in anonymous at serialize.jl:330
>  in ntuple at tuple.jl:30
>  in deserialize_tuple at serialize.jl:330
>  in handle_deserialize at serialize.jl:322
>  in deserialize at serialize.jl:368
>  in handle_deserialize at serialize.jl:327
>  in deserialize at serialize.jl:310
>  in anonymous at serialize.jl:330
>  in ntuple at tuple.jl:30
>  in deserialize_tuple at serialize.jl:330
>  in handle_deserialize at serialize.jl:322
>  in deserialize at serialize.jl:368
>  in handle_deserialize at serialize.jl:327
>  in anonymous at task.jl:835
> Worker 2 terminated.
>
> Do I need to share the k array or somehow let the parallel code know how 
> to handle an array of type parms?
>
> Thanks for any help here!
>
> Jon
>


[julia-users] Re: parallel loop with mutable types

2014-06-17 Thread Jon Norberg
Great, that solved the first problem, thanks. 

Using DArray, though, gives:

julia> k=DArray(parms,20)
exception on 2: ERROR: no method parms((UnitRange{Int64},))
 in anonymous at multi.jl:840
 in run_work_thunk at multi.jl:613
 in run_work_thunk at multi.jl:622
 in anonymous at task.jl:6
ERROR: assertion failed
 in DArray at darray.jl:18
 in DArray at darray.jl:39
 in DArray at darray.jl:48




[julia-users] Re: parallel loop with mutable types

2014-06-17 Thread Jon Norberg
also this works but does not change values in b

@parallel for i=1:20 b[i]=k[i].r*k[i].K end

I tried making b=DArray{Float64,1} or b=dones(20,1) but still values in b 
are not updated

do I need to use spawn/fetch or pmap or something like this?

Sorry, not fluent in parallel programming yet, but I am trying to make my 
simple code scalable from start

//J


Re: [julia-users] animation using Gtk+/Cairo

2014-06-17 Thread Abe Schneider
Thank you everyone for the fast replies!

After looking at ImageView and the sources, here's the solution I came up 
with:

w = Gtk.@Window() |>
    (body = Gtk.@Box(:v) |>
        (canvas = Gtk.@Canvas(600, 600))) |>
    showall

function redraw_canvas(canvas)
  ctx = getgc(canvas)
  h = height(canvas)
  w = width(canvas)

  # draw background
  rectangle(ctx, 0, 0, w, h)
  set_source_rgb(ctx, 1, 1, 1)
  fill(ctx)

  # draw objects
  # ...

  # tell Gtk+ to redisplay
  draw(canvas)
end

function init(canvas, delay::Real, interval::Real)
  update_timer = Timer(timer -> redraw_canvas(canvas))
  start_timer(update_timer, delay, interval)
  return update_timer  # return the timer so it can be stopped later
end

update_timer = init(canvas, 2, 1)
if !isinteractive()
  wait(Condition())
end

stop_timer(update_timer)

I haven't looked yet into what is required to do double-buffering (or if 
it's enabled by default). I also copied the 'wait(Condition())' from the 
docs, though it's not clear to me what the condition is (if I close the 
window, the program is still running -- I'm assuming that means I need to 
connect the signal for window destruction to said condition).

A

On Monday, June 16, 2014 9:33:42 PM UTC-4, Jameson wrote:
>
> I would definately use Julia's timers. See `Gtk.jl/src/cairo.jl` for an 
> example interface to the Cairo backing to a Gtk window (used in 
> `Winston.jl/src/gtk.jl`). If you are using this wrapper, call `draw(w)` to 
> force a redraw immediately, or `draw(w,false)` to queue a redraw request 
> for when Gtk is idle.
>
>
> On Mon, Jun 16, 2014 at 9:12 PM, Tim Holy 
> > wrote:
>
>> ImageView's navigation.jl contains an example. The default branch is Tk
>> (because  as far as binary distribution goes, Tk is "solved" and Gtk isn't
>> yet), but it has a gtk branch you can look at.
>>
>> --Tim
>>
>> On Monday, June 16, 2014 04:01:46 PM Abe Schneider wrote:
>> > I was looking for a way to display a simulation in Julia. Originally I 
>> was
>> > going to just use PyPlot, but it occurred to me it would be better to 
>> just
>> > use Gtk+ + Cairo to do the drawing rather than something whose main 
>> purpose
>> > is drawing graphs.
>> >
>> > So far, following the examples on the Github page, I have no problem
>> > creating a window with a Cairo canvas. I can also display content on the
>> > canvas fairly easily (which speaks volumes on the awesomeness of Julia 
>> and
>> > the Gtk+ library). However, after looking through the code and samples,
>> > it's not obvious to me how to redraw the canvas every fraction of a 
>> second
>> > to display new content.
>> >
>> > I did find an example of animating with Cairo and Gtk+ in C
>> > (http://cairographics.org/threaded_animation_with_cairo/). However, I
>> > assume one would want to use Julia's timers instead of of GLibs? 
>> Secondly,
>> > there in their function 'timer_exe', call is made directly to Gtk+ to 
>> send
>> > a redraw queue to the window. Is there a cleaner way to do it with the 
>> Gtk+
>> > library?
>> >
>> > Thanks!
>> > A
>>
>>
>

[julia-users] Re: An appreciation of two contributors among many

2014-06-17 Thread Florian Oswald
absolutely agree! 

Also Douglas: very happy to hear that you are porting the lme4 package to 
Julia.

On Tuesday, 17 June 2014 11:43:22 UTC+1, Jon Norberg wrote:
>
> I'll second that, great community and some very very helpful people that 
> put a lot of effort into this.
>
> Thanks
>


[julia-users] Re: parallel loop with mutable types

2014-06-17 Thread Tomas Lycken


pmap is probably useful here.

Just to make sure, you *have* read the manual section on parallel 
programming, right? There's a lot of good stuff there =)

//T
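As a sketch of the pmap route (the `growth` function is a hypothetical stand-in for the per-element work; with no workers added, pmap simply runs serially on the master):

```julia
using Distributed

# hypothetical per-element computation standing in for k[i].r * k[i].K
growth(r, K) = r * K

params = [(1.1, 2.2) for _ in 1:20]
# pmap returns the results (unlike @parallel/@distributed, which only
# reduces); after addprocs(n) the same call distributes across workers.
b = pmap(p -> growth(p[1], p[2]), params)
```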

On Tuesday, June 17, 2014 1:30:36 PM UTC+2, Jon Norberg wrote:

also this works but does not change values in b
>
> @parallel for i=1:20 b[i]=k[i].r*k[i].K end
>
> I tried making b=DArray{Float64,1} or b=dones(20,1) but still values in b 
> are not updated
>
> do I need to use spawn/fetch or pmap or something like this?
>
> Sorry, not fluent in parallel programming yet, but I am trying to make my 
> simple code scalable from start
>
> //J
>


[julia-users] Remez algorithm

2014-06-17 Thread Hans W Borchers

*Is there an implementation of the Remez algorithm in Julia, or is someone 
working on this?*

Sometimes it is important to have a (polynomial) *minmax approximation* to 
a curve or function (on a finite interval), i.e., an approximating 
polynomial of a certain maximum degree such that the maximum (absolute) 
error is minimized.

A least-squares approach will not work. For example, given a hundred or 
more discrete points representing the Runge function on [-1, 1], package 
*CurveFit* will generate a polynomial of degree 10 that has a maximum 
distance of about 0.10..., while the true minimax solution will have a 
maximal distance of about 0.06... !

The Remez algorithm  solves 
this problem applying an iterative procedure. As Nick Trefethen has once 
said about other implementations of this algorithm:

"One can find a few other computer programs in circulation, but overall, 
 it seems that there is no widely-used program at present for computing 
 best approximations"

The most reliable and accurate existing realization nowadays appears to be 
the one available in Trefethen's MATLAB toolbox *chebfun*, operating with 
Chebyshev approximations -- perhaps package *ApproxFun* would be a good 
starting point.

I thought that Julia might be an appropriate scientific computing 
environment to realize an efficient and accurate version of the Remez 
algorithm. I am considering doing it myself, but would prefer if someone 
with a better command of Julia has already done this.
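The least-squares gap described above can be reproduced in a few lines (the grid size and degree are illustrative, and this discrete Vandermonde fit only approximates what package *CurveFit* computes):

```julia
using LinearAlgebra

runge(x) = 1 / (1 + 25x^2)              # the Runge function on [-1, 1]
xs = range(-1.0, 1.0, length = 201)     # illustrative sample grid
V = [x^j for x in xs, j in 0:10]        # Vandermonde matrix, degree 10
c = V \ runge.(xs)                      # discrete least-squares coefficients
maxerr = maximum(abs.(V * c .- runge.(xs)))
# maxerr is the quantity a minimax (Remez) fit would minimize instead
```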


[julia-users] Re: signals handling

2014-06-17 Thread Stephen Chisholm
I'm able to catch the InterruptException with the code below when running 
in the REPL, but it doesn't seem to get thrown when running the code in a 
script.

while true
    try
        sleep(1)
        println("running...")
    catch err
        println("error: $err")
    end
end
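The `catch` in the snippet above swallows every exception; a sketch that reacts only to the `InterruptException` raised by Ctrl-C (the helper name is hypothetical):

```julia
function run_guarded(f)
    try
        f()
        return :ok
    catch err
        isa(err, InterruptException) || rethrow()   # let real errors escape
        return :interrupted
    end
end

run_guarded(() -> nothing)                        # -> :ok
run_guarded(() -> throw(InterruptException()))    # -> :interrupted
```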


On Monday, 16 June 2014 18:30:36 UTC-3, Ivar Nesje wrote:
>
> SIGINT gets converted to a InterruptException, that can be caught in a 
> catch statement. If you happened to be in a ccall, you might cause your 
> program to be in a corrupt state and leak resources such as memory.
>
> I'm not sure how you can interact with other signals.
>


Re: [julia-users] Remez algorithm

2014-06-17 Thread João Felipe Santos
I wanted to do this for DSP.jl, as this is used for filter design, but all
open-source implementations I could find to use as a reference just wrapped
the same old piece of Fortran code or a low-level translation of it to C
(this is the case in SciPy). As I am not terribly familiar with the
algorithm's internal workings and had other priorities at the moment, I
ended up never working on it. I think that we could benefit from a nice and
clean Julia rewrite of the algorithm, though.

--
João Felipe Santos


On Tue, Jun 17, 2014 at 8:13 AM, Hans W Borchers 
wrote:

>
> *Is there an implementation of the Remez algorithm in Julia,or is someone
> working on this?*
>
> Sometimes it is important to have a (polynomial) *minmax approximation*
> to a curve or function (on a finite interval), i.e., an approximating
> polynomial of a certain maximum degree such that the maximum (absolute)
> error is minimized.
>
> A least-squares approach will not work. For example, given a hundred or
> more discrete points representing the Runge function on [-1, 1], package
> *CurveFit* will generate a polynomial of degree 10 that has a maximum
> distance of about 0.10..., while the true minimax solution will have a
> maximal distance of about 0.06... !
>
> The Remez algorithm  solves
> this problem applying an iterative procedure. As Nick Trefethen has once
> said about other implementations of this algorithm:
>
> "One can find a few other computer programs in circulation, but overall,
> it seems that there is no widely-used program at present for computing
> best approximations"
>
> The most reliable and accurate existing realization nowadays appears to be
> the one available in Trefethen's MATLAB toolbox *chebfun*, operating with
> Chebyshev approximations -- perhaps package *ApproxFun* would be a good
> starting point.
>
> I thought that Julia might be an appropriate scientific computing
> environment to realize an efficient and accurate version of the Remez
> algorithm. I am considering doing it myself, but would prefer if someone
> with a better command of Julia has already done this.
>


Re: [julia-users] repeat()'s docstring: maybe some clarification needed?

2014-06-17 Thread Bruno Rodrigues
For outer, I think that it would be clearer to say that it repeats (or 
clones?) the whole array along the specified dimensions. For inner, I think 
it's ok.

Looking at the tests for repeat, I think we could use this as an example:

As an illustrative example, let's consider array A:
A = [1 2;3 4]

2x2 Array{Int64,2}:
 1  2
 3  4

If you want to repeat array A along the second dimension:

repeat(A,inner=[1,1],outer=[1,2])
2x4 Array{Int64,2}:
 1  2  1  2
 3  4  3  4

You can also repeat the columns first:

repeat(A,inner=[1,2],outer=[1,1])
2x4 Array{Int64,2}:
 1  1  2  2
 3  3  4  4



You can also create a new array that repeats A along a third dimension:

repeat(A,inner=[1,1],outer=[1,1,2])
2x2x2 Array{Int64,3}:
[:, :, 1] =
 1  2
 3  4

[:, :, 2] =
 1  2
 3  4
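
One more candidate example for the docstring: repeating each row in place by using inner along the first dimension (a sketch, checked against recent Julia behavior; the keyword form is the same as in the examples above):

```julia
A = [1 2; 3 4]

# inner=[2,1] duplicates each element twice along dimension 1,
# i.e. every row appears twice in place:
B = repeat(A, inner=[2,1], outer=[1,1])
# 4x2 Array{Int64,2}:
#  1  2
#  1  2
#  3  4
#  3  4
```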


Is there a limit on how long a docstring can be? Could we add more examples?

On Thursday, June 12, 2014 4:59:14 PM UTC+2, John Myles White wrote:
>
> Rewriting the documentation for repeat would be great. I’m the guilty 
> party for that piece of documentation and agree that it’s not very good. 
> Rewriting it from scratch is probably a good idea.
>
> I’m not sure I think `tile` is much better than `outer`. Maybe we should 
> use something like `perelement` and `perslice` as the keywords? If we 
> revise the keywords, we should also find terms to describe one additional 
> piece of functionality I’d like to add to repeat: the ability to repeat 
> specific elements a distinct number of times. That’s the main thing that 
> repeat is missing that you’d get from R’s rep function.
>
> If you’re looking for good examples for the documentation, there are a 
> bunch of tests for `repeat` you could use as inspiration: 
> https://github.com/JuliaLang/julia/blob/b320b66db8fb97cc3b96fe4089b7b15528ab346c/test/arrayops.jl#L302
>
>  — John
>
> On Jun 12, 2014, at 6:17 AM, Patrick O'Leary  > wrote:
>
> On Thursday, June 12, 2014 7:57:03 AM UTC-5, Bruno Rodrigues wrote:
>>
>> repeat() is much more useful that Matlab's repmat(), but the docstring is 
>> unclear, at least for me. Unfortunately, I don't have, right now, any 
>> proposals to correct it. Could maybe an example be added to the docstring? 
>> Maybe it could be clearer this way.
>>
>
> I think an example would help make this immediately obvious. I also wonder 
> if the keyword arguments could be better--I don't have a good alternative 
> for "inner", but "tile" seems like a good alternative to "outer". That may 
> at least be useful in a rework of the doc.
>
> Note that you don't have to supply both keyword arguments, only one, so if 
> you're not using the feature of "inner" you can simply omit it. 
>
>
>
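
The per-element counts John mentions (R's rep with a vector of times) could be sketched roughly like this; `repeat_each` is a hypothetical helper name, not an existing Base function:

```julia
# Repeat each element of `x` as many times as the matching entry of `counts`,
# like R's rep(x, times = c(...)).
function repeat_each(x::AbstractVector, counts::AbstractVector{<:Integer})
    out = similar(x, sum(counts))
    k = 1
    for (v, c) in zip(x, counts)
        for _ in 1:c
            out[k] = v
            k += 1
        end
    end
    out
end

repeat_each([1, 2, 3], [3, 1, 2])   # => [1, 1, 1, 2, 3, 3]
```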

[julia-users] Re: Help with getting an array of arrays into a 2D array

2014-06-17 Thread Andrew Simper
Hi Johan,

Thanks for posting that example, that really helped to speed things up.

Pre-allocating the array and storing the computation directly into it is what 
I was hoping the array comprehension could do, but I can't trick it into 
inserting the result directly into a 2D array.

I meant to have the first entry be the input signal, not t alone. I also 
want a syntax that keeps all the important information on a single line, so 
it is easy to see what the input signal is and what arguments are being 
passed to the number-crunching function. So I came up with this 
alternative:

function lowpass(input::Float64, lp::Float64, g::Float64)
    hp::Float64 = input - lp
    lp += g * hp
    [lp, hp]
end

function processarray(f, input::Vector{Float64}, args...)
    ret = f(input[1], args...)
    output = Array(Float64, length(input), length(ret)+1)
    output[1,:] = [input[1], ret]
    for i in 2:length(input)    # row 1 is already filled above
        ret = f(input[i], args...)
        output[i,:] = [input[i], ret]
    end
    output
end
s = 0.0;
data = processarray(lowpass, [sin(t) for t in linspace(0,2pi,2*44100)], s, 
0.5)

In the last line you can see the function lowpass being called with the 
input sin(t) and the arguments s and 0.5, so not quite as nice as the 
array comprehension syntax, but not too bad.

I would love to see support for parametric multi-dimensional array 
comprehensions, since then I could write something like this, which is 
really powerful:

s = 0.0;
data = [begin input=sin(t); [t, input, lowpass(input, s, 0.5)]... end for t 
in linspace(0,2pi,2*44100)]

88200x4 Array{Float64,2}:
 0.0          0.0           0.0           0.0
 7.12387e-5   7.12387e-5    3.56194e-5    7.12387e-5
 0.000142477  0.000142477   7.12387e-5    0.000142477
 0.000213716  0.000213716   0.000106858   0.000213716
 0.000284955  0.000284955   0.000142477   0.000284955
 0.000356194  0.000356194   0.000178097   0.000356194
 0.000427432  0.000427432   0.000213716   0.000427432
 0.000498671  0.000498671   0.000249336   0.000498671
 0.00056991   0.00056991    0.000284955   0.00056991
 0.000641149  0.000641149   0.000320574   0.000641149
 0.000712387  0.000712387   0.000356194   0.000712387
 0.000783626  0.000783626   0.000391813   0.000783626
 0.000854865  0.000854865   0.000427432   0.000854865
 ⋮
 6.2824      -0.000783626  -0.000391813  -0.000783626
 6.28247     -0.000712387  -0.000356194  -0.000712387
 6.28254     -0.000641149  -0.000320574  -0.000641149
 6.28262     -0.00056991   -0.000284955  -0.00056991
 6.28269     -0.000498671  -0.000249336  -0.000498671
 6.28276     -0.000427432  -0.000213716  -0.000427432
 6.28283     -0.000356194  -0.000178097  -0.000356194
 6.2829      -0.000284955  -0.000142477  -0.000284955
 6.28297     -0.000213716  -0.000106858  -0.000213716
 6.28304     -0.000142477  -7.12387e-5   -0.000142477
 6.28311     -7.12387e-5   -3.56194e-5   -7.12387e-5
 6.28319     -2.44929e-16  -1.22465e-16  -2.44929e-16


On Monday, June 16, 2014 5:36:20 PM UTC+8, Johan Sigfrids wrote:
>
> I suspect it is easier to just pre-allocate an array of the correct 
> dimensions and then assign into it. Something like this:
>
> function lowpassarray(arr::Vector{Float64})
> out = Array(Float64, length(arr), 3)
> s = 0.0
> for i in 1:length(arr)
> out[i, 1] = arr[i]
> out[i, 2], out[i, 3] = lowpass(s, sin(arr[i]), 0.5)
> end
> out
> end
>
> data = linspace(0, 2pi, 20)
> data = lowpassarray(data)
>
>
> 20x3 Array{Float64,2}:
>  0.00.0   0.0
>  0.330694   0.16235   0.324699   
>  0.661388   0.307106  0.614213   
>  0.992082   0.418583  0.837166   
>  1.322780.48470.9694 
>  1.653470.498292  0.996584   
>  1.984160.457887  0.915773   
>  2.314860.367862  0.735724   
>  2.645550.237974  0.475947   
>  2.976250.0822973 0.164595   
>  3.30694   -0.0822973-0.164595   
>  3.63763   -0.237974 -0.475947   
>  3.96833   -0.367862 -0.735724   
>  4.29902   -0.457887 -0.915773   
>  4.62972   -0.498292 -0.996584   
>  4.96041   -0.4847   -0.9694 
>  5.2911-0.418583 -0.837166   
>  5.6218-0.307106 -0.614213   
>  5.95249   -0.16235  -0.324699   
>  6.28319   -1.22465e-16  -2.44929e-16
>
>
>
>
> On Monday, June 16, 2014 9:54:57 AM UTC+3, Andrew Simper wrote:
>>
>> When I'm working with time series data I often end up with things like 
>> this:
>>
>> function lowpass(lp::Float64, input::Float64, g::Float64)
>> hp::Float64 = input - lp
>> lp += g * hp
>> [lp, hp]
>> end
>> s = 0.0;
>> data=[flatten([t, lowpass(s, sin(t), 0.5)]) for t in linspace(0,2pi,20)]
>>
>> 20-element Array{Any,1}:
>>  [0.0,0.0,0.0]  
>>  [0.330694,0.16235,0.324699]
>>  [0.661388,0.307106,0.614213]   
>>  [0.992082,0.418583,0.837166]  

Re: [julia-users] Re: juliabloggers.com is now live!

2014-06-17 Thread Randy Zwitch
My apologies, I think the link got mangled last time. Here it is again: 
http://www.juliabloggers.com/feed/ 

On Monday, June 16, 2014 9:18:45 PM UTC-4, K leo wrote:
>
> Is there something wrong with the feed? 
>
> http://www.juliabloggers.com/feed/ 
> juliabloggers.com  
> Entered url doesn't contain valid feed or doesn't link to feed. It is 
> also possible feed contains no items. 
>
> On 06/16/2014 08:52 PM, Randy Zwitch wrote: 
> > Nothing shady about it at all and a good reminder I need to add a 
> > visible RSS icon. 
> > 
> > Here's the feed: 
> > 
> > http://www.juliabloggers.com/feed/ 
> > 
> > 
> > On Monday, June 16, 2014 7:32:49 AM UTC-4, Tomas Lycken wrote: 
> > 
> > Nice! 
> > 
> > I'm missing a feature: something to help me pull this into my RSS 
> > reader. If it feels shady to re-publish an aggregated blog like 
> > this in RSS, at least a list of the feeds that are currently 
> > pulled in would be nice, but ultimately I'd like to add 
> > juliabloggers.com  to my feedly and get 
> > everything posted there - and if someone comes in tomorrow and 
> > adds a new feed to the site, I get that content too. 
> > 
> > // T 
> > 
> > On Monday, June 16, 2014 1:17:47 PM UTC+2, Randy Zwitch wrote: 
> > 
> > Hey everyone - 
> > 
> > Several posts had popped up over the past few month about 
> > creating a centralized location for Julia content. I'm proud 
> > to announce that http://www.juliabloggers.com/ 
> >  is now live! This is still 
> > very much a work-in-progress, as the theme is fairly vanilla 
> > and I need to work out some oddities with how the content is 
> > ingested, but the concept certainly works. 
> > 
> > If you'd like to contribute your content to Julia Bloggers, 
> > all it takes is submitting an RSS/Atom feed via this link: 
> > 
> > http://www.juliabloggers.com/julia-bloggers-submit-rss-feed/ 
> >  
> > 
> > Once your link is imported into Julia Bloggers, that's it. The 
> > site will regularly check your feed for new content, then post 
> > it to Julia Bloggers once it becomes available. 
> > 
> > To-Do: 
> > 
> > While the current theme adds an author to each post, it is my 
> > intention to put a larger attribution section for each post, 
> > to make it clear the post owner and a link back to the 
> > original blog location (similar to how R-Bloggers has it at 
> > the end of each post). 
> > 
> > Logo: If anyone wants to create a logo, perhaps modifying the 
> > current Julia SVG code to read 'JuliaBloggers' or something 
> > similar, that would be fantastic. The header needs to be 
> > 960x250 or so, but if you make it larger/higher resolution I 
> > can deal with the sizing I need. 
> > 
> > We've already got 3 contributors so far, and the content is 
> > getting posted to Twitter via 
> > https://twitter.com/juliabloggers 
> > . 
> > 
> > If there are any questions or comments, please comment here. 
> > 
> > Thanks! 
> > Randy 
> > 
> > 
>
>

Re: [julia-users] juliabloggers.com is now live!

2014-06-17 Thread Randy Zwitch
I think this is just a caching issue, the attribution should be on all 
pages.

On Tuesday, June 17, 2014 3:44:47 AM UTC-4, Mauro wrote:
>
> Thanks for putting this together!  One more thing about authors, on pages 
> like for example this one
> http://www.juliabloggers.com/using-asciiplots-jl/
> there should be the same attribution as on the front page.
>
> On Tuesday, June 17, 2014 1:19:48 AM UTC+1, Randy Zwitch wrote:
>>
>> Ok, there is now more obvious attribution on each post, with the author 
>> name and link of the original post prominently displayed before the article.
>>
>> If anyone else has any other recommendations/requests (still need a 
>> logo!), please let me know.
>>
>

Re: [julia-users] animation using Gtk+/Cairo

2014-06-17 Thread Tobias Knopp
Hi Abe,

the idea of the wait condition is that otherwise the program would be 
immediately closed if run in script mode (from the shell). In the REPL one 
usually wants the program to return so that the REPL is still active.

Cheers,

Tobi 

Am Dienstag, 17. Juni 2014 13:46:32 UTC+2 schrieb Abe Schneider:
>
> Thank you everyone for the fast replies!
>
> After looking at ImageView and the sources, here's the solution I came up 
> with:
>
> w = Gtk.@Window() |>
>     (body = Gtk.@Box(:v) |>
>         (canvas = Gtk.@Canvas(600, 600))) |>
>     showall
>
> function redraw_canvas(canvas)
>   ctx = getgc(canvas)
>   h = height(canvas)
>   w = width(canvas)
>
>   # draw background
>   rectangle(ctx, 0, 0, w, h)
>   set_source_rgb(ctx, 1, 1, 1)
>   fill(ctx)
>
>   # draw objects
>   # ...
>
>   # tell Gtk+ to redisplay
>   draw(canvas)
> end
>
> function init(canvas, delay::Float64, interval::Float64)
>   update_timer = Timer(timer -> redraw_canvas(canvas))
>   start_timer(update_timer, delay, interval)
> end
>
> update_timer = init(canvas, 2.0, 1.0)
> if !isinteractive()
>   wait(Condition())
> end
>
> stop_timer(update_timer)
>
> I haven't looked yet into what is required to do double-buffering (or if 
> it's enabled by default). I also copied the 'wait(Condition())' from the 
> docs, though it's not clear to me what the condition is (if I close the 
> window, the program is still running -- I'm assuming that means I need to 
> connect the signal for window destruction to said condition).
>
> A
>
> On Monday, June 16, 2014 9:33:42 PM UTC-4, Jameson wrote:
>>
>> I would definitely use Julia's timers. See `Gtk.jl/src/cairo.jl` for an 
>> example interface to the Cairo backing to a Gtk window (used in 
>> `Winston.jl/src/gtk.jl`). If you are using this wrapper, call `draw(w)` to 
>> force a redraw immediately, or `draw(w,false)` to queue a redraw request 
>> for when Gtk is idle.
>>
>>
>> On Mon, Jun 16, 2014 at 9:12 PM, Tim Holy  wrote:
>>
>>> ImageView's navigation.jl contains an example. The default branch is Tk
>>> (because  as far as binary distribution goes, Tk is "solved" and Gtk 
>>> isn't
>>> yet), but it has a gtk branch you can look at.
>>>
>>> --Tim
>>>
>>> On Monday, June 16, 2014 04:01:46 PM Abe Schneider wrote:
>>> > I was looking for a way to display a simulation in Julia. Originally I 
>>> was
>>> > going to just use PyPlot, but it occurred to me it would be better to 
>>> just
>>> > use Gtk+ + Cairo to do the drawing rather than something whose main 
>>> purpose
>>> > is drawing graphs.
>>> >
>>> > So far, following the examples on the Github page, I have no problem
>>> > creating a window with a Cairo canvas. I can also display content on 
>>> the
>>> > canvas fairly easily (which speaks volumes on the awesomeness of Julia 
>>> and
>>> > the Gtk+ library). However, after looking through the code and samples,
>>> > it's not obvious to me how to redraw the canvas every fraction of a 
>>> second
>>> > to display new content.
>>> >
>>> > I did find an example of animating with Cairo and Gtk+ in C
>>> > (http://cairographics.org/threaded_animation_with_cairo/). However, I
>>> > assume one would want to use Julia's timers instead of of GLibs? 
>>> Secondly,
>>> > there in their function 'timer_exe', call is made directly to Gtk+ to 
>>> send
>>> > a redraw queue to the window. Is there a cleaner way to do it with the 
>>> Gtk+
>>> > library?
>>> >
>>> > Thanks!
>>> > A
>>>
>>>
>>
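
On the open question above about what wait(Condition()) is waiting for: the script blocks until something calls notify on that condition, so closing the window does nothing unless the destroy signal is wired to it. A Base-only sketch of the pattern follows; the Gtk-specific hookup is shown only as a comment, since connecting :destroy via signal_connect is an assumption about Gtk.jl's API and needs a live window:

```julia
# The script blocks on `wait(quit)` until something calls `notify(quit)`.
quit = Condition()

# In the real app one would notify from the window's destroy signal, e.g.
# (assumption about Gtk.jl's signal_connect API):
#     signal_connect(w, :destroy) do widget
#         notify(quit)
#     end

@async notify(quit)   # stand-in for the destroy signal firing
wait(quit)            # returns once notified; the script can then stop timers
```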

Re: [julia-users] Benchmarking study: C++ < Fortran < Numba < Julia < Java < Matlab < the rest

2014-06-17 Thread Bruno Rodrigues
Hi Pr. Villaverde, just wanted to say that it was your paper that made me 
try Julia. I must say that I am very happy with the switch! Will you 
continue using Julia for your research?


Re: [julia-users] Benchmarking study: C++ < Fortran < Numba < Julia < Java < Matlab < the rest

2014-06-17 Thread Jesus Villaverde
Ah Sorry, over 20 years of coding in Matlab :(

Yes, you are right, once I change that line, the type definition is 
irrelevant. We should change the paper and the code ASAP

On Tuesday, June 17, 2014 12:03:29 AM UTC-4, Peter Simon wrote:
>
> By a process of elimination, I determined that the only variable whose 
> declaration affected the run time was vGridCapital.  The variable is 
> declared to be of type Array{Float64,1}, but is initialized as
>
>
> vGridCapital = 0.5*capitalSteadyState:0.1:1.5*capitalSteadyState
>
> which, unlike in Matlab, produces a Range object, rather than an array. 
>  If the line above is modified to
>
> vGridCapital = [0.5*capitalSteadyState:0.1:1.5*capitalSteadyState]
>
> then the type instability is eliminated, and all type declarations can be 
> removed with no effect on execution time.
>
> --Peter
>
>
> On Monday, June 16, 2014 2:59:31 PM UTC-7, Jesus Villaverde wrote:
>>
>> Also, defining
>>
>> mylog(x::Float64) = ccall((:log, "libm"), Float64, (Float64,), x)
>>
>> made quite a bit of difference for me, from 1.92 to around 1.55. If I also 
>> add @inbounds, I go down to 1.45, making Julia only twice as slow as C++. 
>> Numba still beats Julia, which kind of bothers me a bit
>>
>>
>> Thanks for the suggestions.
>>
>>
>> On Monday, June 16, 2014 4:56:34 PM UTC-4, Jesus Villaverde wrote:
>>>
>>> Hi
>>>
>>> 1) Yes, we pre-compiled the function.
>>>
>>> 2) As I mentioned before, we tried the code with and without type 
>>> declaration, it makes a difference.
>>>
>>> 3) The variable names turn out to be quite useful because this code 
>>> will be eventually nested into a much larger project where it is convenient 
>>> to have very explicit names.
>>>
>>> Thanks 
>>>
>>> On Monday, June 16, 2014 12:13:44 PM UTC-4, Dahua Lin wrote:

 First, I agree with John that you don't have to declare the types in 
 general, like in a compiled language. It seems that Julia would be able to 
 infer the types of most variables in your codes.

 There are several ways that your code's efficiency may be improved:

 (1) You can use @inbounds to waive bound checking in several places, 
 such as line 94 and 95 (in RBC_Julia.jl)
 (2) Line 114 and 116 involves reallocating new arrays, which is 
 probably unnecessary. Also note that Base.maxabs can compute the maximum 
 of 
 absolute value more efficiently than maximum(abs( ... ))

 In terms of measurement, did you pre-compile the function before 
 measuring the runtime?

 A side note about code style. It seems that it uses a lot of Java-ish 
 descriptive names with camel case. Julia practice tends to encourage more 
 concise naming.

 Dahua



 On Monday, June 16, 2014 10:55:50 AM UTC-5, John Myles White wrote:
>
> Maybe it would be good to verify the claim made at 
> https://github.com/jesusfv/Comparison-Programming-Languages-Economics/blob/master/RBC_Julia.jl#L9
>  
>
> I would think that specifying all those types wouldn’t matter much if 
> the code doesn’t have type-stability problems. 
>
>  — John 
>
> On Jun 16, 2014, at 8:52 AM, Florian Oswald  
> wrote: 
>
> > Dear all, 
> > 
> > I thought you might find this paper interesting: 
> http://economics.sas.upenn.edu/~jesusfv/comparison_languages.pdf 
> > 
> > It takes a standard model from macroeconomics and computes its 
> solution with an identical algorithm in several languages. Julia is roughly 
> 2.6 times slower than the best C++ executable. I was a bit puzzled by the 
> 2.6 times slower than the best C++ executable. I was bit puzzled by the 
> result, since in the benchmarks on http://julialang.org/, the slowest 
> test is 1.66 times C. I realize that those benchmarks can't cover all 
> possible situations. That said, I couldn't really find anything unusual 
> in 
> the Julia code, did some profiling and removed type inference, but still 
> that's as fast as I got it. That's not to say that I'm disappointed, I 
> still think this is great. Did I miss something obvious here or is there 
> something specific to this algorithm? 
> > 
> > The codes are on github at 
> > 
> > 
> https://github.com/jesusfv/Comparison-Programming-Languages-Economics 
> > 
> > 
>
>
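
Peter's vGridCapital fix, in a minimal reproduction (modern syntax; in Julia 0.3 the bracketed form [a:s:b] quoted above was how a range got collected into an array):

```julia
r = 0.5:0.1:1.5     # a range object: lazy, O(1) storage, not an Array
v = collect(r)      # materialized Vector{Float64}, as the declaration expects

# Assigning `r` to a variable declared Array{Float64,1} is what caused the
# type instability; collecting the range first removes the problem.
@assert !(r isa Array)
@assert v isa Vector{Float64}
@assert length(v) == 11
```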

Re: [julia-users] juliabloggers.com is now live!

2014-06-17 Thread Tobias Knopp
Randy, would it be possible to integrate the page into julialang.org (under 
the blog section)?
If not, it would probably be good to add a link there and maybe remove the 
duplicated posts.

Cheers,

Tobi

Am Dienstag, 17. Juni 2014 14:39:46 UTC+2 schrieb Randy Zwitch:
>
> I think this is just a caching issue, the attribution should be on all 
> pages.
>
> On Tuesday, June 17, 2014 3:44:47 AM UTC-4, Mauro wrote:
>>
>> Thanks for putting this together!  One more thing about authors, on pages 
>> like for example this one
>> http://www.juliabloggers.com/using-asciiplots-jl/
>> there should be the same attribution as on the front page.
>>
>> On Tuesday, June 17, 2014 1:19:48 AM UTC+1, Randy Zwitch wrote:
>>>
>>> Ok, there is now more obvious attribution on each post, with the author 
>>> name and link of the original post prominently displayed before the article.
>>>
>>> If anyone else has any other recommendations/requests (still need a 
>>> logo!), please let me know.
>>>
>>

Re: [julia-users] Benchmarking study: C++ < Fortran < Numba < Julia < Java < Matlab < the rest

2014-06-17 Thread Stefan Karpinski
Not your fault at all. We need to make this kind of thing easier to discover, 
e.g. with

https://github.com/astrieanna/TypeCheck.jl

> On Jun 17, 2014, at 8:35 AM, Jesus Villaverde  
> wrote:
> 
> Ah Sorry, over 20 years of coding in Matlab :(
> 
> Yes, you are right, once I change that line, the type definition is 
> irrelevant. We should change the paper and the code ASAP
> 
>> On Tuesday, June 17, 2014 12:03:29 AM UTC-4, Peter Simon wrote:
>> By a process of elimination, I determined that the only variable whose 
>> declaration affected the run time was vGridCapital.  The variable is 
>> declared to be of type Array{Float64,1}, but is initialized as
>> 
>> 
>> vGridCapital = 0.5*capitalSteadyState:0.1:1.5*capitalSteadyState
>> 
>> which, unlike in Matlab, produces a Range object, rather than an array.  If 
>> the line above is modified to
>> 
>> vGridCapital = [0.5*capitalSteadyState:0.1:1.5*capitalSteadyState]
>> 
>> then the type instability is eliminated, and all type declarations can be 
>> removed with no effect on execution time.
>> 
>> --Peter
>> 
>> 
>>> On Monday, June 16, 2014 2:59:31 PM UTC-7, Jesus Villaverde wrote:
>>> Also, defining
>>> 
>>> mylog(x::Float64) = ccall((:log, "libm"), Float64, (Float64,), x)
>>> 
>>> made quite a bit of difference for me, from 1.92 to around 1.55. If I also 
>>> add @inbounds, I go down to 1.45, making Julia only twice as slow as C++. 
>>> Numba still beats Julia, which kind of bothers me a bit
>>> 
>>> Thanks for the suggestions.
>>> 
 On Monday, June 16, 2014 4:56:34 PM UTC-4, Jesus Villaverde wrote:
 Hi
 
 1) Yes, we pre-compiled the function.
 
 2) As I mentioned before, we tried the code with and without type 
 declaration, it makes a difference.
 
 3) The variable names turn out to be quite useful because this code will 
 be eventually nested into a much larger project where it is convenient to 
 have very explicit names.
 
 Thanks 
 
> On Monday, June 16, 2014 12:13:44 PM UTC-4, Dahua Lin wrote:
> First, I agree with John that you don't have to declare the types in 
> general, like in a compiled language. It seems that Julia would be able 
> to infer the types of most variables in your codes.
> 
> There are several ways that your code's efficiency may be improved:
> 
> (1) You can use @inbounds to waive bound checking in several places, such 
> as line 94 and 95 (in RBC_Julia.jl)
> (2) Line 114 and 116 involves reallocating new arrays, which is probably 
> unnecessary. Also note that Base.maxabs can compute the maximum of 
> absolute value more efficiently than maximum(abs( ... ))
> 
> In terms of measurement, did you pre-compile the function before 
> measuring the runtime?
> 
> A side note about code style. It seems that it uses a lot of Java-ish 
> descriptive names with camel case. Julia practice tends to encourage more 
> concise naming.
> 
> Dahua
> 
> 
> 
>> On Monday, June 16, 2014 10:55:50 AM UTC-5, John Myles White wrote:
>> Maybe it would be good to verify the claim made at 
>> https://github.com/jesusfv/Comparison-Programming-Languages-Economics/blob/master/RBC_Julia.jl#L9
>>  
>> 
>> I would think that specifying all those types wouldn’t matter much if 
>> the code doesn’t have type-stability problems. 
>> 
>>  — John 
>> 
>> On Jun 16, 2014, at 8:52 AM, Florian Oswald  
>> wrote: 
>> 
>> > Dear all, 
>> > 
>> > I thought you might find this paper interesting: 
>> > http://economics.sas.upenn.edu/~jesusfv/comparison_languages.pdf 
>> > 
>> > It takes a standard model from macroeconomics and computes its 
>> > solution with an identical algorithm in several languages. Julia is 
>> > roughly 2.6 times slower than the best C++ executable. I was a bit 
>> > puzzled by the result, since in the benchmarks on 
>> > http://julialang.org/, the slowest test is 1.66 times C. I realize 
>> > that those benchmarks can't cover all possible situations. That said, 
>> > I couldn't really find anything unusual in the Julia code, did some 
>> > profiling and removed type inference, but still that's as fast as I 
>> > got it. That's not to say that I'm disappointed, I still think this is 
>> > great. Did I miss something obvious here or is there something 
>> > specific to this algorithm? 
>> > 
>> > The codes are on github at 
>> > 
>> > https://github.com/jesusfv/Comparison-Programming-Languages-Economics 
>> > 
>> > 
>> 
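
A quick check of Dahua's maxabs tip (note: Base.maxabs existed around Julia 0.3/0.4 and was later removed; the modern, allocation-free spelling passes abs as the mapped function):

```julia
x = [-3.0, 1.5, -0.25]

m1 = maximum(abs.(x))   # allocates a temporary array of absolute values
m2 = maximum(abs, x)    # maps abs on the fly, no temporary array

@assert m1 == m2 == 3.0
```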


Re: [julia-users] Re: An appreciation of two contributors among many

2014-06-17 Thread Stefan Karpinski
> On Jun 17, 2014, at 7:51 AM, Florian Oswald  wrote:
> 
> Also Douglas: very happy to hear that you are porting the lme4 package to 
> Julia.

Doug has been on a quiet, long-term mission to accomplish this exact goal, 
building all the necessary bits and pieces. In fact, I suspect this is why he 
started using Julia in the first place.

I can't second the recognition of the amazing quality of Dahua and Tim's code 
strongly enough. The fact that you can use various pieces of very high-quality 
numerical code written by them – or Steven Johnson – is one of the critical 
reasons Julia is amazing to use today.

Re: [julia-users] Remez algorithm

2014-06-17 Thread Jay Kickliter
João, I found the same code a few days ago, hoping it would give me a good 
starting point for a Julia implementation. But I don't understand it at all. 
A blind port wouldn't be feasible either, considering all those GOTOs. So I 
gave up and decided to take another stab at understanding polyphase 
decomposition, which isn't going very well either.

Hans, I wouldn't be surprised if no one else is working on it. Here's 
another reference that may help. Click on the links under Parks McClellan C++ 
Source Code.

On Tuesday, June 17, 2014 6:25:00 AM UTC-6, João Felipe Santos wrote:
>
> I wanted to do this for DSP.jl as this is used for filter design, but all 
> opensource implementations I could find to use as a reference just wrapped 
> the same old piece of Fortran code or a low-level translation of it to C 
> (this is the case in Scipy). As I am not terribly familiar with the 
> algorithm's internal workings and had other priorities at the moment, I 
> ended up never working on it. I think that we could benefit from a nice and 
> clean Julia rewrite of the algorithm, though.
>
> --
> João Felipe Santos
>
>
> On Tue, Jun 17, 2014 at 8:13 AM, Hans W Borchers  > wrote:
>
>>
>> *Is there an implementation of the Remez algorithm in Julia,or is someone 
>> working on this?*
>>
>> Sometimes it is important to have a (polynomial) *minmax approximation* 
>> to a curve or function (on a finite interval), i.e., an approximating 
>> polynomial of a certain maximum degree such that the maximum (absolute) 
>> error is minimized.
>>
>> A least-squares approach will not work. For example, given a hundred or 
>> more discrete points representing the Runge function on [-1, 1], package 
>> *CurveFit* will generate a polynomial of degree 10 that has a maximum 
>> distance of about 0.10..., while the true minimax solution will have a 
>> maximal distance of about 0.06... !
>>
>> The Remez algorithm solves this problem by applying an iterative 
>> procedure. As Nick Trefethen has 
>> once said about other implementations of this algorithm:
>>
>> "One can find a few other computer programs in circulation, but 
>> overall, it 
>>  seems that there is no widely-used program at present for computing 
>> best 
>>  approximations"
>>
>> The most reliable and accurate existing realization nowadays appears to 
>> be the one available in Trefethen's MATLAB toolbox *chebfun*, operating 
>> with Chebyshev approximations -- perhaps package *ApproxFun* would be a 
>> good starting point.
>>
>> I thought that Julia might be an appropriate scientific computing 
>> environment to realize an efficient and accurate version of the Remez 
>> algorithm. I am considering doing it myself, but would prefer if someone 
>> with a better command of Julia has already done this.
>>
>
>

Re: [julia-users] Benchmarking study: C++ < Fortran < Numba < Julia < Java < Matlab < the rest

2014-06-17 Thread Jesus Villaverde
I think so! Matlab is just too slow for many things and a bit old in some 
dimensions. I often use C++, but for a lot of stuff it is just too 
cumbersome.

On Tuesday, June 17, 2014 8:50:02 AM UTC-4, Bruno Rodrigues wrote:
>
> Hi Pr. Villaverde, just wanted to say that it was your paper that made me 
> try Julia. I must say that I am very happy with the switch! Will you 
> continue using Julia for your research?
>


Re: [julia-users] Benchmarking study: C++ < Fortran < Numba < Julia < Java < Matlab < the rest

2014-06-17 Thread Jesus Villaverde
Thanks! I'll learn those tools. In any case, paper updated online, github 
page with new commit. This is really great. Nice example of aggregation of 
information. Economists love that :)

On Tuesday, June 17, 2014 9:11:08 AM UTC-4, Stefan Karpinski wrote:
>
> Not your fault at all. We need to make this kind of thing easier to 
> discover. Eg with
>
> https://github.com/astrieanna/TypeCheck.jl
>
> On Jun 17, 2014, at 8:35 AM, Jesus Villaverde  > wrote:
>
> Ah Sorry, over 20 years of coding in Matlab :(
>
> Yes, you are right, once I change that line, the type definition is 
> irrelevant. We should change the paper and the code ASAP
>
> On Tuesday, June 17, 2014 12:03:29 AM UTC-4, Peter Simon wrote:
>>
>> By a process of elimination, I determined that the only variable whose 
>> declaration affected the run time was vGridCapital.  The variable is 
>> declared to be of type Array{Float64,1}, but is initialized as
>>
>>
>> vGridCapital = 0.5*capitalSteadyState:0.1:1.5*capitalSteadyState
>>
>> which, unlike in Matlab, produces a Range object, rather than an array. 
>>  If the line above is modified to
>>
>> vGridCapital = [0.5*capitalSteadyState:0.1:1.5*capitalSteadyState]
>>
>> then the type instability is eliminated, and all type declarations can be 
>> removed with no effect on execution time.
>>
>> --Peter
>>
>>
>> On Monday, June 16, 2014 2:59:31 PM UTC-7, Jesus Villaverde wrote:
>>>
>>> Also, defining
>>>
>>> mylog(x::Float64) = ccall((:log, "libm"), Float64, (Float64,), x)
>>>
>>> made quite a bit of difference for me, from 1.92 to around 1.55. If I also 
>>> add @inbounds, I go down to 1.45, making Julia only twice as slow as C++. 
>>> Numba still beats Julia, which kind of bothers me a bit
>>>
>>>
>>> Thanks for the suggestions.
>>>
>>>
>>> On Monday, June 16, 2014 4:56:34 PM UTC-4, Jesus Villaverde wrote:

 Hi

 1) Yes, we pre-compiled the function.

 2) As I mentioned before, we tried the code with and without type 
 declaration, it makes a difference.

 3) The variable names turn out to be quite useful because this code 
 will be eventually nested into a much larger project where it is 
 convenient 
 to have very explicit names.

 Thanks 

 On Monday, June 16, 2014 12:13:44 PM UTC-4, Dahua Lin wrote:
>
> First, I agree with John that you don't have to declare the types in 
> general, like in a compiled language. It seems that Julia would be able 
> to 
> infer the types of most variables in your codes.
>
> There are several ways that your code's efficiency may be improved:
>
> (1) You can use @inbounds to waive bound checking in several places, 
> such as line 94 and 95 (in RBC_Julia.jl)
> (2) Line 114 and 116 involves reallocating new arrays, which is 
> probably unnecessary. Also note that Base.maxabs can compute the maximum 
> of 
> absolute value more efficiently than maximum(abs( ... ))
>
> In terms of measurement, did you pre-compile the function before 
> measuring the runtime?
>
> A side note about code style. It seems that it uses a lot of Java-ish 
> descriptive names with camel case. Julia practice tends to encourage more 
> concise naming.
>
> Dahua
>
>
>
> On Monday, June 16, 2014 10:55:50 AM UTC-5, John Myles White wrote:
>>
>> Maybe it would be good to verify the claim made at 
>> https://github.com/jesusfv/Comparison-Programming-Languages-Economics/blob/master/RBC_Julia.jl#L9
>>  
>>
>> I would think that specifying all those types wouldn’t matter much if 
>> the code doesn’t have type-stability problems. 
>>
>>  — John 
>>
>> On Jun 16, 2014, at 8:52 AM, Florian Oswald  
>> wrote: 
>>
>> > Dear all, 
>> > 
>> > I thought you might find this paper interesting: 
>> http://economics.sas.upenn.edu/~jesusfv/comparison_languages.pdf 
>> > 
>> > It takes a standard model from macroeconomics and computes its 
>> solution with an identical algorithm in several languages. Julia is 
>> roughly 
>> 2.6 times slower than the best C++ executable. I was a bit puzzled by the 
>> result, since in the benchmarks on http://julialang.org/, the 
>> slowest test is 1.66 times C. I realize that those benchmarks can't 
>> cover 
>> all possible situations. That said, I couldn't really find anything 
>> unusual 
>> in the Julia code, did some profiling and removed type inference, but 
>> still 
>> that's as fast as I got it. That's not to say that I'm disappointed, I 
>> still think this is great. Did I miss something obvious here or is there 
>> something specific to this algorithm? 
>> > 
>> > The codes are on github at 
>> > 
>> > 
>> https://github.com/jesusfv/Comparison-Programming-Languages-Economics 
>> > 
>> > 
>>
>>

Re: [julia-users] animation using Gtk+/Cairo

2014-06-17 Thread Tim Holy
On Tuesday, June 17, 2014 04:46:31 AM Abe Schneider wrote:
> I haven't looked yet into what is required to do double-buffering (or if
> it's enabled by default). I also copied the 'wait(Condition())' from the
> docs, though it's not clear to me what the condition is (if I close the
> window, the program is still running -- I'm assuming that means I need to
> connect the signal for window destruction to said condition).

Good point. You probably already figured it out on your own, but I just updated 
the Gtk docs to describe a better solution.

--Tim



Re: [julia-users] repeat()'s docstring: maybe some clarification needed?

2014-06-17 Thread Tomas Lycken


I’d like to add the following example too, of using *both* inner and outer, 
to show off the flexibility of repeat:

julia> repeat(A, inner=[1,2], outer=[2,1])
4x4 Array{Int64,2}:
 1  1  2  2
 3  3  4  4
 1  1  2  2
 3  3  4  4

Until I had tried that in the REPL myself, I didn’t really trust that I 
actually understood what the keywords really meant. Now I think I do.
// T


On Tuesday, June 17, 2014 2:36:39 PM UTC+2, Bruno Rodrigues wrote:

For outer, I think that it would be clearer to say that it repeats (or 
> clones?) the whole array along the specified dimensions. For inner, I think 
> it's ok.
>
> Looking at the tests for repeat, I think we could use this as an example:
>
> As an illustrative example, let's consider array A:
> A = [1 2;3 4]
>
> 2x2 Array{Int64,2}:
>  1  2
>  3  4
>
> If you want to repeat array A along the second dimension:
>
> repeat(A,inner=[1,1],outer=[1,2])
> 2x4 Array{Int64,2}:
>  1  2  1  2
>  3  4  3  4
>
> You can also repeat the columns first:
>
> repeat(A,inner=[1,2],outer=[1,1])
> 2x4 Array{Int64,2}:
>  1  1  2  2
>  3  3  4  4
>
>
>
> You can also create a new array that repeats A along a third dimension:
>
> repeat(A,inner=[1,1],outer=[1,1,2])
> 2x2x2 Array{Int64,3}:
> [:, :, 1] =
>  1  2
>  3  4
>
> [:, :, 2] =
>  1  2
>  3  4
>
>
> Is there a limit on how long a docstring can be? Could we add more examples?
>
> On Thursday, June 12, 2014 4:59:14 PM UTC+2, John Myles White wrote:
>>
>> Rewriting the documentation for repeat would be great. I’m the guilty 
>> party for that piece of documentation and agree that it’s not very good. 
>> Rewriting it from scratch is probably a good idea.
>>
>> I’m not sure I think `tile` is much better than `outer`. Maybe we should 
>> use something like `perelement` and `perslice` as the keywords? If we 
>> revise the keywords, we should also find terms to describe one additional 
>> piece of functionality I’d like to add to repeat: the ability to repeat 
>> specific elements a distinct number of times. That’s the main thing that 
>> repeat is missing that you’d get from R’s rep function.
>>
>> If you’re looking for good examples for the documentation, there are a 
>> bunch of tests for `repeat` you could use as inspiration: 
>> https://github.com/JuliaLang/julia/blob/b320b66db8fb97cc3b96fe4089b7b15528ab346c/test/arrayops.jl#L302
>>
>>  — John
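The per-element repetition John mentions (R's rep with a vector of counts) could look something like this hypothetical helper — the name and signature are made up for illustration, not part of Base:

```julia
# Repeat each element of x its own number of times (R-style rep).
# Hypothetical helper, not an existing Base function.
function rep_each(x, counts)
    out = similar(x, 0)             # empty vector with the same eltype
    for (v, c) in zip(x, counts)
        append!(out, fill(v, c))    # c copies of element v
    end
    out
end

rep_each([1, 2, 3], [1, 2, 3])  # -> [1, 2, 2, 3, 3, 3]
```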
>>
>> On Jun 12, 2014, at 6:17 AM, Patrick O'Leary  
>> wrote:
>>
>> On Thursday, June 12, 2014 7:57:03 AM UTC-5, Bruno Rodrigues wrote:
>>>
>>> repeat() is much more useful than Matlab's repmat(), but the docstring 
>>> is unclear, at least for me. Unfortunately, I don't have, right now, any 
>>> proposals to correct it. Could maybe an example be added to the docstring? 
>>> Maybe it could be clearer this way.
>>>
>>
>> I think an example would help make this immediately obvious. I also 
>> wonder if the keyword arguments could be better--I don't have a good 
>> alternative for "inner", but "tile" seems like a good alternative to 
>> "outer". That may at least be useful in a rework of the doc.
>>
>> Note that you don't have to supply both keyword arguments, only one, so 
>> if you're not using the feature of "inner" you can simply omit it. 
>>
>>
>> ​


Re: [julia-users] juliabloggers.com is now live!

2014-06-17 Thread Randy Zwitch
I did add the julialang.org/blog feed to Julia Bloggers already. The 
attribution is a bit messed up because they are re-directing their feed 
using Feedburner, then Feedburner re-directs to the actual URL; I'll try to 
figure out how to get the attribution to point directly to the blog post.

Here are the posts from the official blog:

http://www.juliabloggers.com/author/julia-developers/

On Tuesday, June 17, 2014 9:04:37 AM UTC-4, Tobias Knopp wrote:
>
> Randy, would it be possible to integrate the page in julialang.org (under 
> the blog section)?
> If not it would probably be good to add a link there + maybe remove the 
> duplicated posts.
>
> Cheers,
>
> Tobi
>
> Am Dienstag, 17. Juni 2014 14:39:46 UTC+2 schrieb Randy Zwitch:
>>
>> I think this is just a caching issue, the attribution should be on all 
>> pages.
>>
>> On Tuesday, June 17, 2014 3:44:47 AM UTC-4, Mauro wrote:
>>>
>>> Thanks for putting this together!  One more thing about authors, on 
>>> pages like for example this one
>>> http://www.juliabloggers.com/using-asciiplots-jl/
>>> there should be the same attribution as on the front page.
>>>
>>> On Tuesday, June 17, 2014 1:19:48 AM UTC+1, Randy Zwitch wrote:

 Ok, there is now more obvious attribution on each post, with the author 
 name and link of the original post prominently displayed before the 
 article.

 If anyone else has any other recommendations/requests (still need a 
 logo!), please let me know.

>>>

Re: [julia-users] Re: Function roots() in package Polynomial

2014-06-17 Thread Alan Edelman
I just tried roots in the Polynomial package

here's what happened

@time roots(Poly([randn(100)]));

LAPACKException(99)
while loading In[10], in expression starting on line 44
 in geevx! at linalg/lapack.jl:1225
 in eigfact! at linalg/factorization.jl:531
 in eigfact at linalg/factorization.jl:554
 in roots at /Users/julia/.julia/v0.3/Polynomial/src/Polynomial.jl:358


my first question would be why are we calling geevx for a matrix

known to be Hessenberg?


I'd be happy to have a time comparable to Matlab's, though I'm sure there

are faster algorithms out there as well








On Friday, May 9, 2014 11:21:11 PM UTC-4, Tony Kelman wrote:
>
> By default GitHub doesn't enable issue tracking in forked repositories, 
> the person who makes the fork has to manually go do that under settings.
>
>
> On Friday, May 9, 2014 9:39:56 AM UTC-7, Hans W Borchers wrote:
>>
>> @Jameson
>> I am writing a small report on scientific programming with Julia. I 
>> changed the section on polynomials by now basing it on the newer(?) 
>> Polynomials.jl. This works quite fine, and roots() computes the zeros of 
>> the Wilkinson polynomial to quite satisfying accuracy.
>>
>> It's a bit irritating that the README file still documents the old order 
>> of sequence of coefficients while the code already implements the 
>> coefficients in increasing order of exponents. I see there is a pull 
>> request for an updated README, but this is almost 4 weeks old.
>>
>> Testing one of my examples,
>>
>> julia> using Polynomials
>>
>> julia> p4 = poly([1.0, 1im, -1.0, -1im])
>> Poly(--1.0 + 1.0x^4)
>>
>>
>> which appears to indicate a bug in printing the polynomial. The stored 
>> coefficient is really and correctly -1.0 as can be seen from
>>
>> julia> p4[0]
>> -1.0 + 0.0im
>>
>>
>> I wanted to report that as an issue on the project page, but I did not 
>> find a button for starting the issue tracker. Does this mean the 
>> Polynomial.jl project is still 'private' in some sense?
>>
>> I know there have been long discussions on which is the right order for 
>> the coefficients of a polynomial. But I feel it uneasy that the defining 
>> order in MATLAB and other numerical computing systems has been changed so 
>> drastically. Well, we have to live with it.
>>
>>
>> On Friday, May 9, 2014 7:53:30 AM UTC+2, Hans W Borchers wrote:
>>>
>>> Thanks a lot. Just a few minutes ago I saw here on the list an 
>>> announcement
>>> of the "Least-squares curve fitting package" with poly_fit, among others.
>>> I think this is good enough for me at the moment.
>>>
>>> I will come back to your suggestion concerning polynomials when I have a
>>> better command of the type system. For polynomials there is surprisingly
>>> much more interesting functionality than is usually implemented.
>>>
>>>
>>> On Friday, May 9, 2014 6:30:06 AM UTC+2, Jameson wrote:

 As the author of Polynomial.jl, I'll say that being "a bit 
 unsatisfied" is a good reason to make pull requests for any and all 
 improvements :) 

 While loladiro is now the official maintainer of Polynomials.jl (since 
 he volunteered to do the badly-needed work to switch the coefficient 
 order), if I had access, I would accept a pull request for additional 
 roots() methods (parameterized by an enum type, for overloading, and 
 possibly also a realroots function), horner method functions, polyfit, 
 etc. 

 I would not accept a pull request for allowing a vector instead of a 
 Polynomial in any method, however. IMHO, this is a completely 
 unnecessary "optimization", which encourages the user to conflate the 
 concept of a Vector and a Polynomial without benefit. It could even 
 potentially lead to subtle bugs (since indexing a polynomial is 
 different from indexing a vector), or passing in the roots instead of 
 the polynomial. 

 I think merging your proposal for a polyfit function with 
 StatsBase.fit makes sense. You could use a tuple parameter to combine 
 the Polynomial parameter with the degrees information: 

 function fit(T::(Type{Polynomial},Int), data) 
   P, deg = T 
   return Poly( pfit(deg, data) ) #where pfit represents the 
 calculation of the polynomial-of-best fit, and may or may not be a 
 separate function 
 end 
 fit((Polynomial,3), data) 

 David de Laat put together a pull request to add his content to 
 Polynomial: https://github.com/vtjnash/Polynomial.jl/pull/25. He also 
 indicated he would update it for Polynomials.jl so that it could be 
 merged. 



Re: [julia-users] Benchmarking study: C++ < Fortran < Numba < Julia < Java < Matlab < the rest

2014-06-17 Thread Tony Kelman
Your matrices are kinda small so it might not make much difference, but it 
would be interesting to see whether using the Tridiagonal type could speed 
things up at all.
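For reference, a minimal sketch of what using the Tridiagonal type might look like (the sizes and values below are made up for illustration):

```julia
n  = 5
dl = fill(-1.0, n - 1)   # subdiagonal
d  = fill( 2.0, n)       # main diagonal
du = fill(-1.0, n - 1)   # superdiagonal

T = Tridiagonal(dl, d, du)   # stores only the three bands
b = ones(n)
x = T \ b                    # O(n) banded solve instead of a dense O(n^3) one
```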


On Tuesday, June 17, 2014 6:25:24 AM UTC-7, Jesus Villaverde wrote:
>
> Thanks! I'll learn those tools. In any case, paper updated online, github 
> page with new commit. This is really great. Nice example of aggregation of 
> information. Economists love that :)
>
> On Tuesday, June 17, 2014 9:11:08 AM UTC-4, Stefan Karpinski wrote:
>>
>> Not your fault at all. We need to make this kind of thing easier to 
>> discover. Eg with
>>
>> https://github.com/astrieanna/TypeCheck.jl
>>
>> On Jun 17, 2014, at 8:35 AM, Jesus Villaverde  
>> wrote:
>>
>> Ah Sorry, over 20 years of coding in Matlab :(
>>
>> Yes, you are right, once I change that line, the type definition is 
>> irrelevant. We should change the paper and the code ASAP
>>
>> On Tuesday, June 17, 2014 12:03:29 AM UTC-4, Peter Simon wrote:
>>>
>>> By a process of elimination, I determined that the only variable whose 
>>> declaration affected the run time was vGridCapital.  The variable is 
>>> declared to be of type Array{Float64,1}, but is initialized as
>>>
>>>
>>> vGridCapital = 0.5*capitalSteadyState:0.1:1.5*capitalSteadyState
>>>
>>> which, unlike in Matlab, produces a Range object, rather than an array. 
>>>  If the line above is modified to
>>>
>>> vGridCapital = [0.5*capitalSteadyState:0.1:1.5*capitalSteadyState]
>>>
>>> then the type instability is eliminated, and all type declarations can 
>>> be removed with no effect on execution time.
>>>
>>> --Peter
>>>
>>>
>>> On Monday, June 16, 2014 2:59:31 PM UTC-7, Jesus Villaverde wrote:

 Also, defining

 mylog(x::Float64) = ccall((:log, "libm"), Float64, (Float64,), x)

 made quite a bit of difference for me, from 1.92 to around 1.55. If I also 
 add @inbounds, I go down to 1.45, making Julia only twice as slow as C++. 
 Numba still beats Julia, which kind of bothers me a bit


 Thanks for the suggestions.


 On Monday, June 16, 2014 4:56:34 PM UTC-4, Jesus Villaverde wrote:
>
> Hi
>
> 1) Yes, we pre-compiled the function.
>
> 2) As I mentioned before, we tried the code with and without type 
> declaration, it makes a difference.
>
> 3) The variable names turn out to be quite useful because this code 
> will be eventually nested into a much larger project where it is 
> convenient 
> to have very explicit names.
>
> Thanks 
>
> On Monday, June 16, 2014 12:13:44 PM UTC-4, Dahua Lin wrote:
>>
>> First, I agree with John that you don't have to declare the types in 
>> general, like in a compiled language. It seems that Julia would be able 
>> to 
>> infer the types of most variables in your codes.
>>
>> There are several ways that your code's efficiency may be improved:
>>
>> (1) You can use @inbounds to waive bound checking in several places, 
>> such as line 94 and 95 (in RBC_Julia.jl)
> (2) Lines 114 and 116 involve reallocating new arrays, which is 
>> probably unnecessary. Also note that Base.maxabs can compute the maximum 
>> of 
>> absolute value more efficiently than maximum(abs( ... ))
>>
>> In terms of measurement, did you pre-compile the function before 
>> measuring the runtime?
>>
>> A side note about code style. It seems that it uses a lot of Java-ish 
>> descriptive names with camel case. Julia practice tends to encourage 
>> more 
>> concise naming.
>>
>> Dahua
>>
>>
>>
>> On Monday, June 16, 2014 10:55:50 AM UTC-5, John Myles White wrote:
>>>
>>> Maybe it would be good to verify the claim made at 
>>> https://github.com/jesusfv/Comparison-Programming-Languages-Economics/blob/master/RBC_Julia.jl#L9
>>>  
>>>
>>> I would think that specifying all those types wouldn’t matter much 
>>> if the code doesn’t have type-stability problems. 
>>>
>>>  — John 
>>>
>>> On Jun 16, 2014, at 8:52 AM, Florian Oswald  
>>> wrote: 
>>>
>>> > Dear all, 
>>> > 
>>> > I thought you might find this paper interesting: 
>>> http://economics.sas.upenn.edu/~jesusfv/comparison_languages.pdf 
>>> > 
>>> > It takes a standard model from macroeconomics and computes its 
>>> solution with an identical algorithm in several languages. Julia is 
>>> roughly 
>>> 2.6 times slower than the best C++ executable. I was a bit puzzled by the 
>>> result, since in the benchmarks on http://julialang.org/, the 
>>> slowest test is 1.66 times C. I realize that those benchmarks can't 
>>> cover 
>>> all possible situations. That said, I couldn't really find anything 
>>> unusual 
>>> in the Julia code, did some profiling and removed type inference, but 
>>> still 
>>> that's as fast as I got it

Re: [julia-users] Benchmarking study: C++ < Fortran < Numba < Julia < Java < Matlab < the rest

2014-06-17 Thread Cameron McBride
Do any of the more initiated have an idea why Numba performs better for
this application, as both it and Julia use LLVM?  I'm just asking out of
pure curiosity.

Cameron


On Tue, Jun 17, 2014 at 10:11 AM, Tony Kelman  wrote:

> Your matrices are kinda small so it might not make much difference, but it
> would be interesting to see whether using the Tridiagonal type could speed
> things up at all.
>
>
> On Tuesday, June 17, 2014 6:25:24 AM UTC-7, Jesus Villaverde wrote:
>>
>> Thanks! I'll learn those tools. In any case, paper updated online, github
>> page with new commit. This is really great. Nice example of aggregation of
>> information. Economists love that :)
>>
>> On Tuesday, June 17, 2014 9:11:08 AM UTC-4, Stefan Karpinski wrote:
>>>
>>> Not your fault at all. We need to make this kind of thing easier to
>>> discover. Eg with
>>>
>>> https://github.com/astrieanna/TypeCheck.jl
>>>
>>> On Jun 17, 2014, at 8:35 AM, Jesus Villaverde 
>>> wrote:
>>>
>>> Ah Sorry, over 20 years of coding in Matlab :(
>>>
>>> Yes, you are right, once I change that line, the type definition is
>>> irrelevant. We should change the paper and the code ASAP
>>>
>>> On Tuesday, June 17, 2014 12:03:29 AM UTC-4, Peter Simon wrote:

 By a process of elimination, I determined that the only variable whose
 declaration affected the run time was vGridCapital.  The variable is
 declared to be of type Array{Float64,1}, but is initialized as


 vGridCapital = 0.5*capitalSteadyState:0.1:1.5*capitalSteadyState

 which, unlike in Matlab, produces a Range object, rather than an array.
  If the line above is modified to

 vGridCapital = [0.5*capitalSteadyState:0.1:1.5*capitalSteadyState]

 then the type instability is eliminated, and all type declarations can
 be removed with no effect on execution time.

 --Peter


 On Monday, June 16, 2014 2:59:31 PM UTC-7, Jesus Villaverde wrote:
>
> Also, defining
>
> mylog(x::Float64) = ccall((:log, "libm"), Float64, (Float64,), x)
>
> made quite a bit of difference for me, from 1.92 to around 1.55. If I 
> also add @inbounds, I go down to 1.45, making Julia only twice as slow 
> as C++. Numba still beats Julia, which kind of bothers me a bit
>
>
> Thanks for the suggestions.
>
>
> On Monday, June 16, 2014 4:56:34 PM UTC-4, Jesus Villaverde wrote:
>>
>> Hi
>>
>> 1) Yes, we pre-compiled the function.
>>
>> 2) As I mentioned before, we tried the code with and without type
>> declaration, it makes a difference.
>>
>> 3) The variable names turn out to be quite useful because this code
>> will be eventually nested into a much larger project where it is 
>> convenient
>> to have very explicit names.
>>
>> Thanks
>>
>> On Monday, June 16, 2014 12:13:44 PM UTC-4, Dahua Lin wrote:
>>>
>>> First, I agree with John that you don't have to declare the types in
>>> general, like in a compiled language. It seems that Julia would be able 
>>> to
>>> infer the types of most variables in your codes.
>>>
>>> There are several ways that your code's efficiency may be improved:
>>>
>>> (1) You can use @inbounds to waive bound checking in several places,
>>> such as line 94 and 95 (in RBC_Julia.jl)
>>> (2) Lines 114 and 116 involve reallocating new arrays, which is
>>> probably unnecessary. Also note that Base.maxabs can compute the 
>>> maximum of
>>> absolute value more efficiently than maximum(abs( ... ))
>>>
>>> In terms of measurement, did you pre-compile the function before
>>> measuring the runtime?
>>>
>>> A side note about code style. It seems that it uses a lot of
>>> Java-ish descriptive names with camel case. Julia practice tends to
>>> encourage more concise naming.
>>>
>>> Dahua
>>>
>>>
>>>
>>> On Monday, June 16, 2014 10:55:50 AM UTC-5, John Myles White wrote:

 Maybe it would be good to verify the claim made at
 https://github.com/jesusfv/Comparison-Programming-
 Languages-Economics/blob/master/RBC_Julia.jl#L9

 I would think that specifying all those types wouldn’t matter much
 if the code doesn’t have type-stability problems.

  — John

 On Jun 16, 2014, at 8:52 AM, Florian Oswald 
 wrote:

 > Dear all,
 >
 > I thought you might find this paper interesting:
 http://economics.sas.upenn.edu/~jesusfv/comparison_languages.pdf
 >
 > It takes a standard model from macroeconomics and computes its
 solution with an identical algorithm in several languages. Julia is 
 roughly
 2.6 times slower than the best C++ executable. I was a bit puzzled by the
 result, since in the benchmarks on http://julialang.org/, the
>>>

[julia-users] Re: signals handling

2014-06-17 Thread Stephen Chisholm
I'm able to register a callback function using signal in libc, see the code 
below.

SIGINT=2

function catch_function(x)
    println("caught signal $x")
    exit(0)
end
# C's int is 32 bits, so declare the handler argument as Cint rather than
# Int64; signal() returns the previous handler, a function pointer.
catch_function_c = cfunction(catch_function, Void, (Cint,))
ccall((:signal, "libc"), Ptr{Void}, (Cint, Ptr{Void}), SIGINT, catch_function_c)

while true
    sleep(1)
end



On Tuesday, 17 June 2014 09:24:43 UTC-3, Stephen Chisholm wrote:
>
> I'm able to catch the InterruptException with the code below when running 
> in the REPL, but it doesn't seem to get thrown when running the code in a 
> script.
>
> while true
>     try
>         sleep(1)
>         println("running...")
>     catch err
>         println("error: $err")
>     end
> end
>
>
> On Monday, 16 June 2014 18:30:36 UTC-3, Ivar Nesje wrote:
>>
>> SIGINT gets converted to a InterruptException, that can be caught in a 
>> catch statement. If you happened to be in a ccall, you might cause your 
>> program to be in a corrupt state and leak resources such as memory.
>>
>> I'm not sure how you can interact with other signals.
>>
>

Re: [julia-users] animation using Gtk+/Cairo

2014-06-17 Thread Jameson Nash
This code is not valid, since getgc does not always have a valid drawing
context to return. Instead you need to provide Canvas with a callback
function via a call to redraw in which you do all the work, then just call
draw(canvas) in your timer callback to force an update to the view.
double-buffering is enabled by default.

wait(Condition()) is the same as wait(), and means sleep until this task is
signaled; it thereby prevents the program from exiting early.
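A rough sketch of the structure described above, adapted from the earlier example. The exact signature of the redraw registration call is assumed from the description, so check the Gtk.jl docs before relying on it:

```julia
# All drawing happens in a callback registered with the canvas; getgc is
# only guaranteed to return a valid context inside this callback.
function draw_scene(canvas)
    ctx = getgc(canvas)
    h, w = height(canvas), width(canvas)
    rectangle(ctx, 0, 0, w, h)   # white background
    set_source_rgb(ctx, 1, 1, 1)
    fill(ctx)
    # ... draw the simulation objects ...
end

redraw(canvas, draw_scene)       # register the callback (assumed API)

# The timer only requests a repaint; double-buffering is on by default.
update_timer = Timer(t -> draw(canvas))
start_timer(update_timer, 2.0, 1.0)
```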


On Tue, Jun 17, 2014 at 7:46 AM, Abe Schneider 
wrote:

> Thank you everyone for the fast replies!
>
> After looking at ImageView and the sources, here's the solution I came up
> with:
>
> w = Gtk.@Window() |>
>     (body = Gtk.@Box(:v) |>
>         (canvas = Gtk.@Canvas(600, 600))) |>
>     showall
>
> function redraw_canvas(canvas)
>   ctx = getgc(canvas)
>   h = height(canvas)
>   w = width(canvas)
>
>   # draw background
>   rectangle(ctx, 0, 0, w, h)
>   set_source_rgb(ctx, 1, 1, 1)
>   fill(ctx)
>
>   # draw objects
>   # ...
>
>   # tell Gtk+ to redisplay
>   draw(canvas)
> end
>
> function init(canvas, delay, interval)
>   update_timer = Timer(timer -> redraw_canvas(canvas))
>   start_timer(update_timer, delay, interval)
>   update_timer  # return the timer so stop_timer can be called on it
> end
>
> update_timer = init(canvas, 2, 1)
> if !isinteractive()
>   wait(Condition())
> end
>
> stop_timer(update_timer)
>
> I haven't looked yet into what is required to do double-buffering (or if
> it's enabled by default). I also copied the 'wait(Condition())' from the
> docs, though it's not clear to me what the condition is (if I close the
> window, the program is still running -- I'm assuming that means I need to
> connect the signal for window destruction to said condition).
>
> A
>
>
> On Monday, June 16, 2014 9:33:42 PM UTC-4, Jameson wrote:
>
>> I would definately use Julia's timers. See `Gtk.jl/src/cairo.jl` for an
>> example interface to the Cairo backing to a Gtk window (used in
>> `Winston.jl/src/gtk.jl`). If you are using this wrapper, call `draw(w)` to
>> force a redraw immediately, or `draw(w,false)` to queue a redraw request
>> for when Gtk is idle.
>>
>>
>> On Mon, Jun 16, 2014 at 9:12 PM, Tim Holy  wrote:
>>
>>> ImageView's navigation.jl contains an example. The default branch is Tk
>>> (because  as far as binary distribution goes, Tk is "solved" and Gtk
>>> isn't
>>> yet), but it has a gtk branch you can look at.
>>>
>>> --Tim
>>>
>>> On Monday, June 16, 2014 04:01:46 PM Abe Schneider wrote:
>>> > I was looking for a way to display a simulation in Julia. Originally I
>>> was
>>> > going to just use PyPlot, but it occurred to me it would be better to
>>> just
>>> > use Gtk+ + Cairo to do the drawing rather than something whose main
>>> purpose
>>> > is drawing graphs.
>>> >
>>> > So far, following the examples on the Github page, I have no problem
>>> > creating a window with a Cairo canvas. I can also display content on
>>> the
>>> > canvas fairly easily (which speaks volumes on the awesomeness of Julia
>>> and
>>> > the Gtk+ library). However, after looking through the code and samples,
>>> > it's not obvious to me how to redraw the canvas every fraction of a
>>> second
>>> > to display new content.
>>> >
>>> > I did find an example of animating with Cairo and Gtk+ in C
>>> > (http://cairographics.org/threaded_animation_with_cairo/). However, I
>>> > assume one would want to use Julia's timers instead of of GLibs?
>>> Secondly,
>>> > there in their function 'timer_exe', call is made directly to Gtk+ to
>>> send
>>> > a redraw queue to the window. Is there a cleaner way to do it with the
>>> Gtk+
>>> > library?
>>> >
>>> > Thanks!
>>> > A
>>>
>>>
>>


Re: [julia-users] Re: Function roots() in package Polynomial

2014-06-17 Thread Tony Kelman
The implementation in https://github.com/Keno/Polynomials.jl looks a bit 
better-scaled (not taking reciprocals of the eigenvalues of the companion 
matrix), though it still might be better off on the original Wilkinson 
example if the companion matrix were transposed.

Doesn't look like Julia has a nicely usable Hessenberg type yet 
(https://github.com/JuliaLang/julia/issues/6434 - there is hessfact and a 
Hessenberg factorization object, but those don't look designed to be 
user-constructed), and I don't see any sign of ccall's into the Hessenberg 
routine {sdcz}hseqr either.
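For comparison, a minimal sketch of the companion-matrix approach under discussion, with coefficients in increasing order of exponent. This calls the plain dense eigvals, not the Hessenberg-aware LAPACK routine Alan asks about:

```julia
# Roots of c[1] + c[2]*x + ... + c[n+1]*x^n via the companion matrix.
function companion_roots(c::Vector{Float64})
    c = c / c[end]                 # normalize to a monic polynomial
    n = length(c) - 1
    C = zeros(n, n)
    for i in 2:n
        C[i, i-1] = 1.0            # ones on the subdiagonal
    end
    C[:, end] = -c[1:n]            # last column holds the low-order coefficients
    eigvals(C)                     # eigenvalues of C are the roots
end

companion_roots([-1.0, 0.0, 0.0, 0.0, 1.0])   # x^4 - 1, whose roots are ±1 and ±im
```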


On Tuesday, June 17, 2014 7:08:11 AM UTC-7, Alan Edelman wrote:
>
> I just tried roots in the Polynomial package
>
> here's what happened
>
> @time roots(Poly([randn(100)]));
>
> LAPACKException(99)
> while loading In[10], in expression starting on line 44
>  in geevx! at linalg/lapack.jl:1225
>  in eigfact! at linalg/factorization.jl:531
>  in eigfact at linalg/factorization.jl:554
>  in roots at /Users/julia/.julia/v0.3/Polynomial/src/Polynomial.jl:358
>
>
> my first question would be why are we calling geevx for a matrix
>
> known to be Hessenberg?
>
>
> I'd be happy to have a time comparable to Matlab's, though I'm sure there
>
> are faster algorithms out there as well
>
>
>
>
>
>
>
>
> On Friday, May 9, 2014 11:21:11 PM UTC-4, Tony Kelman wrote:
>>
>> By default GitHub doesn't enable issue tracking in forked repositories, 
>> the person who makes the fork has to manually go do that under settings.
>>
>>
>> On Friday, May 9, 2014 9:39:56 AM UTC-7, Hans W Borchers wrote:
>>>
>>> @Jameson
>>> I am writing a small report on scientific programming with Julia. I 
>>> changed the section on polynomials by now basing it on the newer(?) 
>>> Polynomials.jl. This works quite fine, and roots() computes the zeros of 
>>> the Wilkinson polynomial to quite satisfying accuracy.
>>>
>>> It's a bit irritating that the README file still documents the old order 
>>> of sequence of coefficients while the code already implements the 
>>> coefficients in increasing order of exponents. I see there is a pull 
>>> request for an updated README, but this is almost 4 weeks old.
>>>
>>> Testing one of my examples,
>>>
>>> julia> using Polynomials
>>>
>>> julia> p4 = poly([1.0, 1im, -1.0, -1im])
>>> Poly(--1.0 + 1.0x^4)
>>>
>>>
>>> which appears to indicate a bug in printing the polynomial. The stored 
>>> coefficient is really and correctly -1.0 as can be seen from
>>>
>>> julia> p4[0]
>>> -1.0 + 0.0im
>>>
>>>
>>> I wanted to report that as an issue on the project page, but I did not 
>>> find a button for starting the issue tracker. Does this mean the 
>>> Polynomial.jl project is still 'private' in some sense?
>>>
>>> I know there have been long discussions on which is the right order for 
>>> the coefficients of a polynomial. But I feel it uneasy that the defining 
>>> order in MATLAB and other numerical computing systems has been changed so 
>>> drastically. Well, we have to live with it.
>>>
>>>
>>> On Friday, May 9, 2014 7:53:30 AM UTC+2, Hans W Borchers wrote:

 Thanks a lot. Just a few minutes ago I saw here on the list an 
 announcement
 of the "Least-squares curve fitting package" with poly_fit, among 
 others.
 I think this is good enough for me at the moment.

 I will come back to your suggestion concerning polynomials when I have a
 better command of the type system. For polynomials there is surprisingly
 much more interesting functionality than is usually implemented.


 On Friday, May 9, 2014 6:30:06 AM UTC+2, Jameson wrote:
>
> As the author of Polynomial.jl, I'll say that being "a bit 
> unsatisfied" is a good reason to make pull requests for any and all 
> improvements :) 
>
> While loladiro is now the official maintainer of Polynomials.jl (since 
> he volunteered to do the badly-needed work to switch the coefficient 
> order), if I had access, I would accept a pull request for additional 
> roots() methods (parameterized by an enum type, for overloading, and 
> possibly also a realroots function), horner method functions, polyfit, 
> etc. 
>
> I would not accept a pull request for allowing a vector instead of a 
> Polynomial in any method, however. IMHO, this is a completely 
> unnecessary "optimization", which encourages the user to conflate the 
> concept of a Vector and a Polynomial without benefit. It could even 
> potentially lead to subtle bugs (since indexing a polynomial is 
> different from indexing a vector), or passing in the roots instead of 
> the polynomial. 
>
> I think merging your proposal for a polyfit function with 
> StatsBase.fit makes sense. You could use a tuple parameter to combine 
> the Polynomial parameter with the degrees information: 
>
> function fit(T::(Type{Polynomial},Int), data) 
>   P, deg = T 
>   return Poly( pfit(deg, da

Re: [julia-users] Re: Function roots() in package Polynomial

2014-06-17 Thread Iain Dunning
I see both Polynomial and Polynomials in METADATA - is Polynomials a 
replacement for Polynomial?

On Tuesday, June 17, 2014 10:46:55 AM UTC-4, Tony Kelman wrote:
>
> The implementation in https://github.com/Keno/Polynomials.jl looks a bit 
> better-scaled (not taking reciprocals of the eigenvalues of the companion 
> matrix), though it still might be better off on the original Wilkinson 
> example if the companion matrix were transposed.
>
> Doesn't look like Julia has a nicely usable Hessenberg type yet (
> https://github.com/JuliaLang/julia/issues/6434 - there is hessfact and a 
> Hessenberg factorization object, but those don't look designed to be 
> user-constructed), and I don't see any sign of ccall's into the Hessenberg 
> routine {sdcz}hseqr either.
>
>
> On Tuesday, June 17, 2014 7:08:11 AM UTC-7, Alan Edelman wrote:
>>
>> I just tried roots in the Polynomial package
>>
>> here's what happened
>>
>> @time roots(Poly([randn(100)]));
>>
>> LAPACKException(99)
>> while loading In[10], in expression starting on line 44
>>  in geevx! at linalg/lapack.jl:1225
>>  in eigfact! at linalg/factorization.jl:531
>>  in eigfact at linalg/factorization.jl:554
>>  in roots at /Users/julia/.julia/v0.3/Polynomial/src/Polynomial.jl:358
>>
>>
>> my first question would be why are we calling geevx for a matrix
>>
>> known to be Hessenberg?
>>
>>
>> I'd be happy to have a time comparable to matlab's though i'm sure there
>>
>> are faster algorithms out there as well
>>
>> On Friday, May 9, 2014 11:21:11 PM UTC-4, Tony Kelman wrote:
>>>
>>> By default GitHub doesn't enable issue tracking in forked repositories, 
>>> the person who makes the fork has to manually go do that under settings.
>>>
>>>
>>> On Friday, May 9, 2014 9:39:56 AM UTC-7, Hans W Borchers wrote:

 @Jameson
 I am writing a small report on scientific programming with Julia. I 
 changed the section on polynomials by now basing it on the newer(?) 
 Polynomials.jl. This works quite fine, and roots() computes the zeros of 
 the Wilkinson polynomial to quite satisfying accuracy.

 It's a bit irritating that the README file still documents the old 
 ordering of the coefficients, while the code already stores them in 
 increasing order of exponent. I see there is a pull request for an 
 updated README, but it is almost 4 weeks old.

 Testing one of my examples,

 julia> using Polynomials

 julia> p4 = poly([1.0, 1im, -1.0, -1im])
 Poly(--1.0 + 1.0x^4)


 which appears to indicate a bug in printing the polynomial. The stored 
 coefficient is really and correctly -1.0 as can be seen from

 julia> p4[0]
 -1.0 + 0.0im


 I wanted to report that as an issue on the project page, but I did not 
 find a button for starting the issue tracker. Does this mean the 
 Polynomial.jl project is still 'private' in some sense?

 I know there have been long discussions on which is the right order for 
 the coefficients of a polynomial. But it makes me uneasy that the order 
 used in MATLAB and other numerical computing systems has been changed so 
 drastically. Well, we have to live with it.


 On Friday, May 9, 2014 7:53:30 AM UTC+2, Hans W Borchers wrote:
>
> Thanks a lot. Just a few minutes ago I saw here on the list an 
> announcement
> of the "Least-squares curve fitting package" with poly_fit, among 
> others.
> I think this is good enough for me at the moment.
>
> I will come back to your suggestion concerning polynomials when I have 
> a
> better command of the type system. For polynomials there is surprisingly 
> much more interesting functionality than is usually implemented.
>
>
> On Friday, May 9, 2014 6:30:06 AM UTC+2, Jameson wrote:
>>
>> As the author of Polynomial.jl, I'll say that being "a bit 
>> unsatisfied" is a good reason to make pull requests for any and all 
>> improvements :) 
>>
>> While loladiro is now the official maintainer of Polynomials.jl 
>> (since 
>> he volunteered to do the badly-needed work to switch the coefficient 
>> order), if I had access, I would accept a pull request for additional 
>> roots() methods (parameterized by an enum type, for overloading, and 
>> possibly also a realroots function), horner method functions, 
>> polyfit, 
>> etc. 
>>
>> I would not accept a pull request for allowing a vector instead of a 
>> Polynomial in any method, however. IMHO, this is a completely 
>> unnecessary "optimization", which encourages the user to conflate the 
>> concept of a Vector and a Polynomial without benefit. It could even 
>> potentially lead to subtle bugs (since indexing a polynomial is 
>> different from indexing a vector), or passing in the roots instead of 
>> the polynomial

Re: [julia-users] Re: An appreciation of two contributors among many

2014-06-17 Thread Tim Holy
On Tuesday, June 17, 2014 09:06:20 AM Stefan Karpinski wrote:
> I can't possibly second the recognition of the amazing quality of Dahua and
> Tim's code enough. The fact that there are various pieces of very high
> quality numerical code written by them – or Steven Johnson – is one of the
> critical reasons Julia is amazing to use today.

To share the love a bit: what I find so amazing these days is the return on 
investment. In the early days, I was basically the only contributor to my own 
packages, and I felt a bit like I was "carrying a load" by putting code out 
there. But these days, the gifts from others exceed my own contributions many 
fold. I started trying to list people who have contributed in ways that have 
directly affected me (either in base Julia or to packages I care a lot about), 
and it quickly got completely out of hand. So, I really like Jiahao's "421 
faces of Julia"; that really sums it up for me.

--Tim



Re: [julia-users] Re: Function roots() in package Polynomial

2014-06-17 Thread Andreas Noack Jensen
I can't make roots fail with the example you gave.

It is right that there aren't yet eigfact methods for the Hessenberg type
and the LAPACK routines are not wrapped. The last part wouldn't be
difficult, but we need to think about the Hessenberg type. Right now it is
a Factorization and it stores the elementary reflectors for the
transformation to Hessenberg form. We might want a
Hessenberg<:AbstractMatrix similarly to e.g. Triangular.
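For readers following along, the companion-matrix computation under discussion can be sketched in a few lines. This is an illustration only, not the package API; `companion_roots` is a made-up name, and coefficients are assumed in increasing order of exponent, as in Polynomials.jl:

```julia
using LinearAlgebra

# Roots of c[1] + c[2]*x + ... + c[n+1]*x^n as the eigenvalues of the
# companion matrix. The matrix is already upper Hessenberg, which is why
# a dedicated Hessenberg eigensolver (LAPACK's hseqr) would be a win
# over the general geevx path.
function companion_roots(c::Vector{Float64})
    n = length(c) - 1
    C = zeros(n, n)
    for i in 2:n
        C[i, i-1] = 1.0              # subdiagonal of ones
    end
    C[:, n] = -c[1:n] ./ c[n+1]      # last column from normalized coefficients
    return eigvals(C)
end
```

For example, `companion_roots([2.0, -3.0, 1.0])` recovers the roots 1 and 2 of x^2 - 3x + 2.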


2014-06-17 16:46 GMT+02:00 Tony Kelman :

> The implementation in https://github.com/Keno/Polynomials.jl looks a bit
> better-scaled (not taking reciprocals of the eigenvalues of the companion
> matrix), though it still might be better off on the original Wilkinson
> example if the companion matrix were transposed.
>
> Doesn't look like Julia has a nicely usable Hessenberg type yet (
> https://github.com/JuliaLang/julia/issues/6434 - there is hessfact and a
> Hessenberg factorization object, but those don't look designed to be
> user-constructed), and I don't see any sign of ccall's into the Hessenberg
> routine {sdcz}hseqr either.
>
>
> On Tuesday, June 17, 2014 7:08:11 AM UTC-7, Alan Edelman wrote:
>>
>> I just tried roots in the Polynomial package
>>
>> here's what happened
>>
>> @time roots(Poly([randn(100)]));
>>
>> LAPACKException(99)
>> while loading In[10], in expression starting on line 44
>>  in geevx! at linalg/lapack.jl:1225
>>  in eigfact! at linalg/factorization.jl:531
>>  in eigfact at linalg/factorization.jl:554
>>  in roots at /Users/julia/.julia/v0.3/Polynomial/src/Polynomial.jl:358
>>
>>
>> my first question would be why are we calling geevx for a matrix
>>
>> known to be Hessenberg?
>>
>>
>> I'd be happy to have a time comparable to matlab's though i'm sure there
>>
>> are faster algorithms out there as well
>>
>> On Friday, May 9, 2014 11:21:11 PM UTC-4, Tony Kelman wrote:
>>>
>>> By default GitHub doesn't enable issue tracking in forked repositories,
>>> the person who makes the fork has to manually go do that under settings.
>>>
>>>
>>> On Friday, May 9, 2014 9:39:56 AM UTC-7, Hans W Borchers wrote:

 @Jameson
 I am writing a small report on scientific programming with Julia. I
 changed the section on polynomials by now basing it on the newer(?)
 Polynomials.jl. This works quite fine, and roots() computes the zeros of
 the Wilkinson polynomial to quite satisfying accuracy.

 It's a bit irritating that the README file still documents the old
 ordering of the coefficients, while the code already stores them in
 increasing order of exponent. I see there is a pull request for an
 updated README, but it is almost 4 weeks old.

 Testing one of my examples,

 julia> using Polynomials

 julia> p4 = poly([1.0, 1im, -1.0, -1im])
 Poly(--1.0 + 1.0x^4)


 which appears to indicate a bug in printing the polynomial. The stored
 coefficient is really and correctly -1.0 as can be seen from

 julia> p4[0]
 -1.0 + 0.0im


 I wanted to report that as an issue on the project page, but I did not
 find a button for starting the issue tracker. Does this mean the
 Polynomial.jl project is still 'private' in some sense?

 I know there have been long discussions on which is the right order for
 the coefficients of a polynomial. But it makes me uneasy that the order
 used in MATLAB and other numerical computing systems has been changed so
 drastically. Well, we have to live with it.


 On Friday, May 9, 2014 7:53:30 AM UTC+2, Hans W Borchers wrote:
>
> Thanks a lot. Just a few minutes ago I saw here on the list an
> announcement
> of the "Least-squares curve fitting package" with poly_fit, among
> others.
> I think this is good enough for me at the moment.
>
> I will come back to your suggestion concerning polynomials when I have
> a
> better command of the type system. For polynomials there is surprisingly
> much more interesting functionality than is usually implemented.
>
>
> On Friday, May 9, 2014 6:30:06 AM UTC+2, Jameson wrote:
>>
>> As the author of Polynomial.jl, I'll say that being "a bit
>> unsatisfied" is a good reason to make pull requests for any and all
>> improvements :)
>>
>> While loladiro is now the official maintainer of Polynomials.jl
>> (since
>> he volunteered to do the badly-needed work to switch the coefficient
>> order), if I had access, I would accept a pull request for additional
>> roots() methods (parameterized by an enum type, for overloading, and
>> possibly also a realroots function), horner method functions,
>> polyfit,
>> etc.
>>
>> I would not accept a pull request for allowing a vector instead of a
>> Polynomial in any method, however. IMHO, this is a completely
>> unnecessary "optimization", which en

Re: [julia-users] juliabloggers.com is now live!

2014-06-17 Thread Stefan Karpinski
I set that up back in 2012 and I know squat about RSS or setting up feeds,
so if anyone has any good ideas about alternate setups, I'm all ears. I
have no attachment to feedburner.


On Tue, Jun 17, 2014 at 9:33 AM, Randy Zwitch 
wrote:

> I did add the julialang.org/blog feed to Julia Bloggers already. The
> attribution is a bit messed up because they are re-directing their feed
> using Feedburner, then Feedburner re-directs to the actual URL; I'll try to
> figure out how to get the attribution to point directly to the blog post.
>
> Here are the posts from the official blog:
>
> http://www.juliabloggers.com/author/julia-developers/
>
>
> On Tuesday, June 17, 2014 9:04:37 AM UTC-4, Tobias Knopp wrote:
>>
>> Randy, would it be possible to integrate the page in julialang.org
>> (under the blog section)?
>> If not it would probably be good to add a link there + maybe remove the
>> duplicated posts.
>>
>> Cheers,
>>
>> Tobi
>>
>> Am Dienstag, 17. Juni 2014 14:39:46 UTC+2 schrieb Randy Zwitch:
>>>
>>> I think this is just a caching issue, the attribution should be on all
>>> pages.
>>>
>>> On Tuesday, June 17, 2014 3:44:47 AM UTC-4, Mauro wrote:

 Thanks for putting this together!  One more thing about authors, on
 pages like for example this one
 http://www.juliabloggers.com/using-asciiplots-jl/
 there should be the same attribution as on the front page.

 On Tuesday, June 17, 2014 1:19:48 AM UTC+1, Randy Zwitch wrote:
>
> Ok, there is now more obvious attribution on each post, with the
> author name and link of the original post prominently displayed before the
> article.
>
> If anyone else has any other recommendations/requests (still need a
> logo!), please let me know.
>



Re: [julia-users] Re: signals handling

2014-06-17 Thread Stefan Karpinski
That is very unlikely to be reliable, but it's cool that it works. I think
that we probably should change SIGINT from raising a normal error to
triggering some kind of interrupt handling mechanism (which can in turn
raise an error by default).


On Tue, Jun 17, 2014 at 10:41 AM, Stephen Chisholm 
wrote:

> I'm able to register a callback function using signal in libc, see the
> code below.
>
> SIGINT=2
>
> function catch_function(x)
> println("caught signal $x")
> exit(0)::Nothing
> end
> catch_function_c = cfunction(catch_function, None, (Int64,))
> ccall((:signal, "libc"), Void, (Int64, Ptr{Void}), SIGINT,
> catch_function_c)
>
> while true
> sleep(1)
> end
>
>
>
> On Tuesday, 17 June 2014 09:24:43 UTC-3, Stephen Chisholm wrote:
>>
>> I'm able to catch the InterruptException with the code below when running
>> in the REPL, but it doesn't seem to get thrown when running the code in a
>> script.
>>
>> while true
>> try sleep(1)
>> println("running...")
>> catch err
>> println("error: $err")
>> end
>> end
>>
>>
>> On Monday, 16 June 2014 18:30:36 UTC-3, Ivar Nesje wrote:
>>>
>>> SIGINT gets converted to a InterruptException, that can be caught in a
>>> catch statement. If you happened to be in a ccall, you might cause your
>>> program to be in a corrupt state and leak resources such as memory.
>>>
>>> I'm not sure how you can interact with other signals.
>>>
>>


Re: [julia-users] Re: signals handling

2014-06-17 Thread Stephen Chisholm
I like the idea of an interrupt handling mechanism.  What do you see that 
would make the signal/libc approach unreliable?

On Tuesday, 17 June 2014 12:18:11 UTC-3, Stefan Karpinski wrote:
>
> That is very unlikely to be reliable, but it's cool that it works. I think 
> that we probably should change SIGINT from raising a normal error to 
> triggering some kind of interrupt handling mechanism (which can in turn 
> raise an error by default).
>
>
> On Tue, Jun 17, 2014 at 10:41 AM, Stephen Chisholm  > wrote:
>
>> I'm able to register a callback function using signal in libc, see the 
>> code below.
>>
>> SIGINT=2
>>
>> function catch_function(x)
>> println("caught signal $x")
>> exit(0)::Nothing
>> end
>> catch_function_c = cfunction(catch_function, None, (Int64,))
>> ccall((:signal, "libc"), Void, (Int64, Ptr{Void}), SIGINT, 
>> catch_function_c)
>>
>> while true
>>  sleep(1)
>> end
>>
>>
>>
>> On Tuesday, 17 June 2014 09:24:43 UTC-3, Stephen Chisholm wrote:
>>>
>>> I'm able to catch the InterruptException with the code below when 
>>> running in the REPL, but it doesn't seem to get thrown when running the 
>>> code in a script.
>>>
>>> while true
>>> try sleep(1)
>>> println("running...")
>>> catch err 
>>> println("error: $err")
>>> end 
>>> end
>>>
>>>
>>> On Monday, 16 June 2014 18:30:36 UTC-3, Ivar Nesje wrote:

 SIGINT gets converted to a InterruptException, that can be caught in a 
 catch statement. If you happened to be in a ccall, you might cause your 
 program to be in a corrupt state and leak resources such as memory.

 I'm not sure how you can interact with other signals.

>>>
>

Re: [julia-users] Re: Function roots() in package Polynomial

2014-06-17 Thread Stefan Karpinski
On Tue, Jun 17, 2014 at 10:53 AM, Iain Dunning 
wrote:

> I see both Polynomial and Polynomials in METADATA - is Polynomials a
> replacement for Polynomial?
>

Yes, Polynomials is the newer version with good indexing order – i.e. p[0]
is the constant term. We should probably get this in better order. It may
make sense to break the connection with the old repo and put it under some
organization so that more people can work on it. What org would be most
appropriate?


Re: [julia-users] repeat()'s docstring: maybe some clarification needed?

2014-06-17 Thread John Myles White
Maybe we should use a couple of examples. I agree that Tomas’s example is an 
important one to include to make people understand things.

I’d also suggest including one example that produces a higher-dimensional 
output than the input. This is one of the big differences between repeat and 
repmat.
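A minimal instance of that difference (not taken from the docstring, just an illustration): a 1-d input can yield a 2-d output.

```julia
# repeat can raise the dimensionality of its input, which repmat cannot:
v = [1, 2]                     # 1-d input
M = repeat(v, outer=[1, 3])    # 2-d output: each column is a copy of v
@assert size(M) == (2, 3)
@assert M == [1 1 1; 2 2 2]
```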

 — John

On Jun 17, 2014, at 6:30 AM, Tomas Lycken  wrote:

> I’d like to add the following example too, of using both inner and outer, to 
> show off the flexibility of repeat:
> 
> julia> repeat(A, inner=[1,2], outer=[2,1])
> 4x4 Array{Int64,2}:
>  1  1  2  2
>  3  3  4  4
>  1  1  2  2
>  3  3  4  4
> Until I had tried that in the REPL myself, I didn’t really trust that I 
> actually understood what keywords really meant. Now I think I do.
> // T
> 
> 
> On Tuesday, June 17, 2014 2:36:39 PM UTC+2, Bruno Rodrigues wrote:
> 
> 
> 
> For outer, I think that it would be clearer to say that it repeats (or 
> clones?) the whole array along the specified dimensions. For inner, I think 
> it's ok.
> 
> Looking at the tests for repeat, I think we could use this as an example:
> 
> As an illustrative example, let's consider array A:
> A = [1 2;3 4]
> 
> 2x2 Array{Int64,2}:
>  1  2
>  3  4
> 
> If you want to repeat array A along the second dimension:
> 
> repeat(A,inner=[1,1],outer=[1,2])
> 2x4 Array{Int64,2}:
>  1  2  1  2
>  3  4  3  4
> 
> You can also repeat the columns first:
> 
> repeat(A,inner=[1,2],outer=[1,1])
> 2x4 Array{Int64,2}:
>  1  1  2  2
>  3  3  4  4
> 
> 
> 
> You can also create a new array that repeats A along a third dimension:
> 
> repeat(A,inner=[1,1],outer=[1,1,2])
> 2x2x2 Array{Int64,3}:
> [:, :, 1] =
>  1  2
>  3  4
> 
> [:, :, 2] =
>  1  2
>  3  4
> 
> 
> Is there a limit on how long a docstring can be? Could we add more examples?
> 
> On Thursday, June 12, 2014 4:59:14 PM UTC+2, John Myles White wrote:
> Rewriting the documentation for repeat would be great. I’m the guilty party 
> for that piece of documentation and agree that it’s not very good. Rewriting 
> it from scratch is probably a good idea.
> 
> I’m not sure I think `tile` is much better than `outer`. Maybe we should use 
> something like `perelement` and `perslice` as the keywords? If we revise the 
> keywords, we should also find terms to describe one additional piece of 
> functionality I’d like to add to repeat: the ability to repeat specific 
> elements a distinct number of times. That’s the main thing that repeat is 
> missing that you’d get from R’s rep function.
> 
> If you’re looking for good examples for the documentation, there are a bunch 
> of tests for `repeat` you could use as inspiration: 
> https://github.com/JuliaLang/julia/blob/b320b66db8fb97cc3b96fe4089b7b15528ab346c/test/arrayops.jl#L302
> 
>  — John
> 
> On Jun 12, 2014, at 6:17 AM, Patrick O'Leary  wrote:
> 
>> On Thursday, June 12, 2014 7:57:03 AM UTC-5, Bruno Rodrigues wrote:
>> repeat() is much more useful that Matlab's repmat(), but the docstring is 
>> unclear, at least for me. Unfortunately, I don't have, right now, any 
>> proposals to correct it. Could maybe an example be added to the docstring? 
>> Maybe it could be clearer this way.
>> 
>> I think an example would help make this immediately obvious. I also wonder 
>> if the keyword arguments could be better--I don't have a good alternative 
>> for "inner", but "tile" seems like a good alternative to "outer". That may 
>> at least be useful in a rework of the doc.
>> 
>> Note that you don't have to supply both keyword arguments, only one, so if 
>> you're not using the feature of "inner" you can simply omit it. 
> 
> 
> 



Re: [julia-users] Re: signals handling

2014-06-17 Thread Stefan Karpinski
There are very few things you can safely do in a signal handler. Calling a
julia function can potentially lead to code generation, GC, etc., all of
which is bad news in a signal handler. That's why we need a first-class
mechanism for this: install a Julia function as a handler and the system
arranges for your function to be called when the interrupt happens, but
safely.
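The general shape of that pattern can be sketched in pure Julia (hypothetical names; the real mechanism would have to live in Base): the handler itself only records that an interrupt happened, and the runtime checks the flag at safe points where running arbitrary Julia code is legal.

```julia
# Handler side: the only safe action is recording the event.
const interrupt_requested = Ref(false)
request_interrupt() = (interrupt_requested[] = true; nothing)

# Runtime side: poll the flag between units of work, at points where
# GC, codegen, and error throwing are all safe.
function run_until_interrupted(nsteps)
    done = 0
    for _ in 1:nsteps
        interrupt_requested[] && break   # safe point
        done += 1                        # one "unit of work"
    end
    return done
end
```

With the flag clear, `run_until_interrupted(5)` completes all 5 steps; after `request_interrupt()`, it stops immediately.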


On Tue, Jun 17, 2014 at 11:21 AM, Stephen Chisholm 
wrote:

> I like the idea of an interrupt handling mechanism.  What do you see that
> would make the signal/libc approach unreliable?
>
>
> On Tuesday, 17 June 2014 12:18:11 UTC-3, Stefan Karpinski wrote:
>
>> That is very unlikely to be reliable, but it's cool that it works. I
>> think that we probably should change SIGINT from raising a normal error to
>> triggering some kind of interrupt handling mechanism (which can in turn
>> raise an error by default).
>>
>>
>> On Tue, Jun 17, 2014 at 10:41 AM, Stephen Chisholm 
>> wrote:
>>
>>> I'm able to register a callback function using signal in libc, see the
>>> code below.
>>>
>>> SIGINT=2
>>>
>>> function catch_function(x)
>>> println("caught signal $x")
>>> exit(0)::Nothing
>>> end
>>> catch_function_c = cfunction(catch_function, None, (Int64,))
>>> ccall((:signal, "libc"), Void, (Int64, Ptr{Void}), SIGINT,
>>> catch_function_c)
>>>
>>> while true
>>>  sleep(1)
>>> end
>>>
>>>
>>>
>>> On Tuesday, 17 June 2014 09:24:43 UTC-3, Stephen Chisholm wrote:

 I'm able to catch the InterruptException with the code below when
 running in the REPL, but it doesn't seem to get thrown when running the
 code in a script.

 while true
 try sleep(1)
 println("running...")
 catch err
 println("error: $err")
 end
 end


 On Monday, 16 June 2014 18:30:36 UTC-3, Ivar Nesje wrote:
>
> SIGINT gets converted to a InterruptException, that can be caught in a
> catch statement. If you happened to be in a ccall, you might cause your
> program to be in a corrupt state and leak resources such as memory.
>
> I'm not sure how you can interact with other signals.
>

>>


[julia-users] Re: Remez algorithm

2014-06-17 Thread Steven G. Johnson
Note that Remez algorithm can be used to find optimal (minimax/Chebyshev) 
rational functions (ratios of polynomials), not just polynomials, and it 
would be good to support this case as well.

Of course, you can do pretty well for many functions just by sampling at a 
lot of points, in which case the minimax problem turns into a 
finite-dimensional LP (for polynomials) or a sequence of LPs (for rational 
functions). The tricky "Remez" part is finding the extrema in order to 
sample at the optimal points, as I understand it.


[julia-users] type mismatch weirdness

2014-06-17 Thread J Luis
Hi, this is a continuation of what I by mistake posted in devs-list

https://groups.google.com/forum/?fromgroups=#!topic/julia-dev/GojOx4nI-xo

that has now struck me again (in another place)

function imcdCanvasPutImage(_canvas, _image, _x, _y, _w, _h, _xmin, _xmax, 
_ymin, _ymax)
println("typeof(_image): ", _image)
println("isa(_image, Ptr{imImage}): ", isa(_image, Ptr{imImage}))

which when executed prints this astonishing conclusion

C:\programs\Gits\IUP.jl\src>c:\programs\julia64\julia execa.jl
typeof(_image): Ptr{imImage} @0x00180c30
isa(_image, Ptr{imImage}): false

However, when I try to reproduce this in the REPL, it works

Joaquim


[julia-users] Re: type mismatch weirdness

2014-06-17 Thread Matt Bauman
Is imImage an abstract type?  That would account for both behaviors. 
 Julia's type parameters are "invariant", which means that Array{Int,1} is 
not a subtype of Array{Real,1}.  The documentation covers this in the 
Parametric Composite Types section of the manual.  Searching for invariant 
and covariant will bring up a bunch of threads here, too.
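Concretely, a REPL-style sketch of the invariance rule (the Ptr example is a stand-in, since imImage is defined in IUP.jl, not here):

```julia
# Parameters are invariant: Int <: Real does NOT lift to the container.
@assert Int <: Real
@assert !(Vector{Int} <: Vector{Real})
@assert !(Ptr{Int} <: Ptr{Real})

# So if imImage were abstract, a Ptr to a concrete subtype would fail
# an isa(_image, Ptr{imImage}) test, exactly as reported.
p = Ptr{Int}(0)              # null pointer to a concrete type
@assert isa(p, Ptr{Int})
@assert !isa(p, Ptr{Real})
```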

On Tuesday, June 17, 2014 11:55:05 AM UTC-4, J Luis wrote:
>
> Hi, this is a continuation of what I by mistake posted in devs-list
>
> https://groups.google.com/forum/?fromgroups=#!topic/julia-dev/GojOx4nI-xo
>
> that has now struck me again (in another place)
>
> function imcdCanvasPutImage(_canvas, _image, _x, _y, _w, _h, _xmin, _xmax, 
> _ymin, _ymax)
> println("typeof(_image): ", _image)
> println("isa(_image, Ptr{imImage}): ", isa(_image, Ptr{imImage}))
>
> which when executed prints this astonishing conclusion
>
> C:\programs\Gits\IUP.jl\src>c:\programs\julia64\julia execa.jl
> typeof(_image): Ptr{imImage} @0x00180c30
> isa(_image, Ptr{imImage}): false
>
> However, when I try to reproduce this in the REPL, it works
>
> Joaquim
>


Re: [julia-users] juliabloggers.com is now live!

2014-06-17 Thread Randy Zwitch
It's up to you. If you remove Feedburner, you remove whatever stats you 
might be getting. But you'll get a better looking attribution URL on Julia 
Bloggers. So you'd get:

http://julialang.org/blog/2013/05/graphical-user-interfaces-part1/

instead of

http://feedproxy.google.com/~r/JuliaLang/~3/SHZGDk581qM/graphical-user-interfaces-part1

I don't think you guys have any problems ranking in Google, so I'm not sure 
it's even worth the 5 minutes to change.

On Tuesday, June 17, 2014 11:15:40 AM UTC-4, Stefan Karpinski wrote:
>
> I set that up back in 2012 and I know squat about RSS or setting up feeds, 
> so if anyone has any good ideas about alternate setups, I'm all ears. I 
> have no attachment to feedburner.
>
>
> On Tue, Jun 17, 2014 at 9:33 AM, Randy Zwitch  > wrote:
>
>> I did add the julialang.org/blog feed to Julia Bloggers already. The 
>> attribution is a bit messed up because they are re-directing their feed 
>> using Feedburner, then Feedburner re-directs to the actual URL; I'll try to 
>> figure out how to get the attribution to point directly to the blog post.
>>
>> Here are the posts from the official blog:
>>
>> http://www.juliabloggers.com/author/julia-developers/
>>
>>
>> On Tuesday, June 17, 2014 9:04:37 AM UTC-4, Tobias Knopp wrote:
>>>
>>> Randy, would it be possible to integrate the page in julialang.org 
>>> (under the blog section)?
>>> If not it would probably be good to add a link there + maybe remove the 
>>> duplicated posts.
>>>
>>> Cheers,
>>>
>>> Tobi
>>>
>>> Am Dienstag, 17. Juni 2014 14:39:46 UTC+2 schrieb Randy Zwitch:

 I think this is just a caching issue, the attribution should be on all 
 pages.

 On Tuesday, June 17, 2014 3:44:47 AM UTC-4, Mauro wrote:
>
> Thanks for putting this together!  One more thing about authors, on 
> pages like for example this one
> http://www.juliabloggers.com/using-asciiplots-jl/
> there should be the same attribution as on the front page.
>
> On Tuesday, June 17, 2014 1:19:48 AM UTC+1, Randy Zwitch wrote:
>>
>> Ok, there is now more obvious attribution on each post, with the 
>> author name and link of the original post prominently displayed before 
>> the 
>> article.
>>
>> If anyone else has any other recommendations/requests (still need a 
>> logo!), please let me know.
>>
>
>

[julia-users] Re: Remez algorithm

2014-06-17 Thread Hans W Borchers
That is right. The Remez algorithm finds the minimax polynomial independent 
of any given prior discretization of the function.

For discrete points you can apply LP or an "iteratively reweighted least 
square" approach that for this problem converges quickly and is quite 
accurate. Implementing it in Julia will only take a few lines of code.

See my discussion two years ago with Pedro -- from the *chebfun* project -- 
about why Remez is better than solving this problem as an optimization task:

http://scicomp.stackexchange.com/questions/1531/the-remez-algorithm

You'll see that the Remez algorithm gets it slightly better.
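For the discrete case, a Lawson-style iteratively reweighted least-squares fit really is only a few lines. This is my own illustrative sketch (the name `minimax_polyfit` and the fixed iteration count are choices of mine, not from any package):

```julia
using LinearAlgebra

# Lawson's algorithm: repeated weighted least squares, with the weights
# multiplied by the current absolute residuals, converges toward the
# discrete minimax (Chebyshev) polynomial fit on the sample points.
function minimax_polyfit(x, y, deg; iters=100)
    V = [xi^j for xi in x, j in 0:deg]      # Vandermonde matrix
    w = fill(1.0 / length(x), length(x))    # start from uniform weights
    local c
    for _ in 1:iters
        W = Diagonal(sqrt.(w))
        c = (W * V) \ (W * y)               # weighted least-squares step
        r = abs.(y .- V * c)
        w .*= r                             # Lawson weight update
        w ./= sum(w)
    end
    return c
end
```

On a sampled function the resulting maximum error should come out no worse than that of a plain least-squares fit of the same degree.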


On Tuesday, June 17, 2014 5:12:21 PM UTC+2, Steven G. Johnson wrote:
>
> Note that Remez algorithm can be used to find optimal (minimax/Chebyshev) 
> rational functions (ratios of polynomials), not just polynomials, and it 
> would be good to support this case as well.
>
> Of course, you can do pretty well for many functions just by sampling at a 
> lot of points, in which case the minimax problem turns into a 
> finite-dimensional LP (for polynomials) or a sequence of LPs (for rational 
> functions). The tricky "Remez" part is finding the extrema in order to 
> sample at the optimal points, as I understand it.
>


Re: [julia-users] type mismatch weirdness

2014-06-17 Thread Keno Fischer
Looks like you're not actually calling typeof in the first statement.


On Tue, Jun 17, 2014 at 11:55 AM, J Luis  wrote:

> Hi, this is a continuation of what I by mistake posted in devs-list
>
> https://groups.google.com/forum/?fromgroups=#!topic/julia-dev/GojOx4nI-xo
>
> that has now struck me again (in another place)
>
> function imcdCanvasPutImage(_canvas, _image, _x, _y, _w, _h, _xmin, _xmax,
> _ymin, _ymax)
> println("typeof(_image): ", _image)
> println("isa(_image, Ptr{imImage}): ", isa(_image, Ptr{imImage}))
>
> which when executed prints this astonishing conclusion
>
> C:\programs\Gits\IUP.jl\src>c:\programs\julia64\julia execa.jl
> typeof(_image): Ptr{imImage} @0x00180c30
> isa(_image, Ptr{imImage}): false
>
> However, when I try to reproduce this in the REPL, it works
>
> Joaquim
>


[julia-users] Re: Benchmarking study: C++ < Fortran < Numba < Julia < Java < Matlab < the rest

2014-06-17 Thread Cristóvão Duarte Sousa
I've just done measurements of algorithm inner-loop times on my machine by 
changing the code as shown in this commit.

I've found out something... see for yourself:

using Winston
numba_times = readdlm("numba_times.dat")[10:end];
plot(numba_times)


julia_times = readdlm("julia_times.dat")[10:end];
plot(julia_times)


println((median(numba_times), mean(numba_times), var(numba_times)))
(0.0028225183486938477,0.0028575707378805993,2.4830103817464292e-8)

println((median(julia_times), mean(julia_times), var(julia_times)))
(0.00282404404,0.0034863882123824454,1.7058255003790299e-6)

So, while inner loop times have more or less the same median on both Julia 
and Numba tests, the mean and variance are higher in Julia.

Can that be due to the garbage collector kicking in?


On Monday, June 16, 2014 4:52:07 PM UTC+1, Florian Oswald wrote:
>
> Dear all,
>
> I thought you might find this paper interesting: 
> http://economics.sas.upenn.edu/~jesusfv/comparison_languages.pdf
>
> It takes a standard model from macroeconomics and computes its solution 
> with an identical algorithm in several languages. Julia is roughly 2.6 
> times slower than the best C++ executable. I was a bit puzzled by the result, 
> since in the benchmarks on http://julialang.org/, the slowest test is 
> 1.66 times C. I realize that those benchmarks can't cover all possible 
> situations. That said, I couldn't really find anything unusual in the Julia 
> code, did some profiling and removed type inference, but still that's as 
> fast as I got it. That's not to say that I'm disappointed, I still think 
> this is great. Did I miss something obvious here or is there something 
> specific to this algorithm? 
>
> The codes are on github at 
>
> https://github.com/jesusfv/Comparison-Programming-Languages-Economics
>
>
>

Re: [julia-users] Re: signals handling

2014-06-17 Thread Stephen Chisholm
That's a good point regarding code gen and garbage collection, etc. I 
should be able to work with this for now but definitely look forward to a 
safer first-class mechanism.  Is there an issue opened for this on github?

On Tuesday, 17 June 2014 12:36:02 UTC-3, Stefan Karpinski wrote:
>
> There are very few things you can safely do in a signal handler. Calling a 
> julia function can potentially lead to code generation, GC, etc., all of 
> which is bad news in a signal handler. That's why we need a first-class 
> mechanism for this: install a Julia function as a handler and the system 
> arranges for your function to be called when the interrupt happens, but 
> safely.
>
>
> On Tue, Jun 17, 2014 at 11:21 AM, Stephen Chisholm  > wrote:
>
>> I like the idea of an interrupt handling mechanism.  What do you see that 
>> would make the signal/libc approach unreliable?
>>
>>
>> On Tuesday, 17 June 2014 12:18:11 UTC-3, Stefan Karpinski wrote:
>>
>>> That is very unlikely to be reliable, but it's cool that it works. I 
>>> think that we probably should change SIGINT from raising a normal error to 
>>> triggering some kind of interrupt handling mechanism (which can in turn 
>>> raise an error by default).
>>>
>>>
>>> On Tue, Jun 17, 2014 at 10:41 AM, Stephen Chisholm  
>>> wrote:
>>>
 I'm able to register a callback function using signal in libc, see the 
 code below.

 SIGINT=2

 function catch_function(x)
 println("caught signal $x")
 exit(0)::Nothing
 end
 catch_function_c = cfunction(catch_function, None, (Int64,))
 ccall((:signal, "libc"), Void, (Int64, Ptr{Void}), SIGINT, 
 catch_function_c)

 while true
  sleep(1)
 end



 On Tuesday, 17 June 2014 09:24:43 UTC-3, Stephen Chisholm wrote:
>
> I'm able to catch the InterruptException with the code below when 
> running in the REPL, but it doesn't seem to get thrown when running the 
> code in a script.
>
> while true
> try sleep(1)
> println("running...")
> catch err 
> println("error: $err")
> end 
> end
>
>
> On Monday, 16 June 2014 18:30:36 UTC-3, Ivar Nesje wrote:
>>
>> SIGINT gets converted to a InterruptException, that can be caught in 
>> a catch statement. If you happened to be in a ccall, you might cause 
>> your 
>> program to be in a corrupt state and leak resources such as memory.
>>
>> I'm not sure how you can interact with other signals.
>>
>
>>>
>

Re: [julia-users] Re: Benchmarking study: C++ < Fortran < Numba < Julia < Java < Matlab < the rest

2014-06-17 Thread Stefan Karpinski
That definitely smells like a GC issue. Python doesn't have this particular
problem since it uses reference counting.


On Tue, Jun 17, 2014 at 12:21 PM, Cristóvão Duarte Sousa 
wrote:

> I've just done measurements of algorithm inner loop times in my machine by
> changing the code as shown in this commit.
>
> I've found out something... see for yourself:
>
> using Winston
> numba_times = readdlm("numba_times.dat")[10:end];
> plot(numba_times)
>
>
> 
> julia_times = readdlm("julia_times.dat")[10:end];
> plot(julia_times)
>
>
> 
> println((median(numba_times), mean(numba_times), var(numba_times)))
> (0.0028225183486938477,0.0028575707378805993,2.4830103817464292e-8)
>
> println((median(julia_times), mean(julia_times), var(julia_times)))
> (0.00282404404,0.0034863882123824454,1.7058255003790299e-6)
>
> So, while inner loop times have more or less the same median on both Julia
> and Numba tests, the mean and variance are higher in Julia.
>
> Can that be due to the garbage collector kicking in?
>
>
> On Monday, June 16, 2014 4:52:07 PM UTC+1, Florian Oswald wrote:
>>
>> Dear all,
>>
>> I thought you might find this paper interesting:
>> http://economics.sas.upenn.edu/~jesusfv/comparison_languages.pdf
>>
>> It takes a standard model from macro economics and computes its solution
>> with an identical algorithm in several languages. Julia is roughly 2.6
>> times slower than the best C++ executable. I was a bit puzzled by the result,
>> since in the benchmarks on http://julialang.org/, the slowest test is
>> 1.66 times C. I realize that those benchmarks can't cover all possible
>> situations. That said, I couldn't really find anything unusual in the Julia
>> code, did some profiling and removed type inference, but still that's as
>> fast as I got it. That's not to say that I'm disappointed, I still think
>> this is great. Did I miss something obvious here or is there something
>> specific to this algorithm?
>>
>> The codes are on github at
>>
>> https://github.com/jesusfv/Comparison-Programming-Languages-Economics
>>
>>
>>


Re: [julia-users] Re: Benchmarking study: C++ < Fortran < Numba < Julia < Java < Matlab < the rest

2014-06-17 Thread John Myles White
Sounds like we need to rerun these benchmarks after the new GC branch gets 
updated.

 -- John

On Jun 17, 2014, at 9:31 AM, Stefan Karpinski  wrote:

> That definitely smells like a GC issue. Python doesn't have this particular 
> problem since it uses reference counting.
> 
> 
> On Tue, Jun 17, 2014 at 12:21 PM, Cristóvão Duarte Sousa  
> wrote:
> I've just done measurements of algorithm inner loop times in my machine by 
> changing the code as shown in this commit.
> 
> I've found out something... see for yourself:
> 
> using Winston
> numba_times = readdlm("numba_times.dat")[10:end];
> plot(numba_times)
> 
> 
> julia_times = readdlm("julia_times.dat")[10:end];
> plot(julia_times)
> 
> 
> 
> println((median(numba_times), mean(numba_times), var(numba_times)))
> (0.0028225183486938477,0.0028575707378805993,2.4830103817464292e-8)
> 
> println((median(julia_times), mean(julia_times), var(julia_times)))
> (0.00282404404,0.0034863882123824454,1.7058255003790299e-6)
> 
> So, while inner loop times have more or less the same median on both Julia 
> and Numba tests, the mean and variance are higher in Julia.
> 
> Can that be due to the garbage collector kicking in?
> 
> 
> On Monday, June 16, 2014 4:52:07 PM UTC+1, Florian Oswald wrote:
> Dear all,
> 
> I thought you might find this paper interesting: 
> http://economics.sas.upenn.edu/~jesusfv/comparison_languages.pdf
> 
> It takes a standard model from macro economics and computes its solution 
> with an identical algorithm in several languages. Julia is roughly 2.6 times 
> slower than the best C++ executable. I was a bit puzzled by the result, since 
> in the benchmarks on http://julialang.org/, the slowest test is 1.66 times C. 
> I realize that those benchmarks can't cover all possible situations. That 
> said, I couldn't really find anything unusual in the Julia code, did some 
> profiling and removed type inference, but still that's as fast as I got it. 
> That's not to say that I'm disappointed, I still think this is great. Did I 
> miss something obvious here or is there something specific to this algorithm? 
> 
> The codes are on github at 
> 
> https://github.com/jesusfv/Comparison-Programming-Languages-Economics
> 
> 
> 



Re: [julia-users] type mismatch weirdness

2014-06-17 Thread J Luis
*imType* is a composite type

Keno is right. But now I do, and still ...

function imcdCanvasPutImage(_canvas, _image, _x, _y, _w, _h, _xmin, _xmax,
                            _ymin, _ymax)
    println("_image: ", _image)
    println("typeof(_image): ", typeof(_image))
    println("isa(_image, Ptr{imImage}): ", isa(_image, Ptr{imImage}))


C:\programs\Gits\IUP.jl\src>c:\programs\julia64\julia execa.jl
_image: Ptr{imImage} @0x1647c470
typeof(_image): Ptr{imImage}
isa(_image, Ptr{imImage}): false

To reproduce this, you would need to try IUP.jl and uncomment the 3
lines at

https://github.com/joa-quim/IUP.jl/blob/master/src/IUP_CD.jl#L356

plus changing 'img' to 'image' (and running the example) in

https://github.com/joa-quim/IUP.jl/blob/master/examples/im_view_.jl#L119
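One cause worth ruling out (my guess, not a confirmed diagnosis): if the imImage type definition is evaluated more than once, say included from two places or defined in two different modules, Julia creates two distinct types that print with the same name, and isa against the "other" one is false. A minimal illustration:

```julia
# Two modules each define a type named T; both print as "T",
# but the types are distinct, so isa() against the other module's T fails.
module A
type T end        # Julia 0.3 syntax
end

module B
type T end
end

x = A.T()
println(isa(x, A.T))   # true
println(isa(x, B.T))   # false, even though both types display as "T"
```

If something like this is happening, the Ptr{imImage} printed by typeof and the Ptr{imImage} in the isa call would be two different types that merely share a name.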

On Tuesday, 17 June 2014 at 17:13:19 UTC+1, Keno Fischer wrote:
>
> Looks like you're not actually calling typeof in the first statement.
>
>
> On Tue, Jun 17, 2014 at 11:55 AM, J Luis wrote:
>
>> Hi, this is a continuation of what I by mistake posted in devs-list
>>
>> https://groups.google.com/forum/?fromgroups=#!topic/julia-dev/GojOx4nI-xo
>>
>> that now stroke me again (in another place)
>>
>> function imcdCanvasPutImage(_canvas, _image, _x, _y, _w, _h, _xmin, _xmax,
>>                             _ymin, _ymax)
>>     println("typeof(_image): ", _image)
>>     println("isa(_image, Ptr{imImage}): ", isa(_image, Ptr{imImage}))
>>
>> which when executed prints this astonishing conclusion
>>
>> C:\programs\Gits\IUP.jl\src>c:\programs\julia64\julia execa.jl
>> typeof(_image): Ptr{imImage} @0x00180c30
>> isa(_image, Ptr{imImage}): false
>>
>> However, when I try to reproduce this in the REPL, it works
>>
>> Joaquim
>>
>
>

Re: [julia-users] animation using Gtk+/Cairo

2014-06-17 Thread Abe Schneider
@Tim: Awesome, exactly what I was looking for. Thank you.

@Jameson: Just to check, do you mean something like:

function redraw_canvas(canvas)
  draw(canvas)
end

draw(canvas) do widget
  # ...
end

If so, I'll re-post my code with the update. It may be useful to someone 
else to see the entire code as an example.

Thanks!
A
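For later readers, the pattern Jameson describes might look roughly like the sketch below. This is untested, and the registration call (`redraw(canvas) do ...`) and window construction are assumptions based on the 0.3-era Gtk.jl interface; the key point is that all drawing happens in the registered callback, while the timer only calls draw(canvas):

```julia
using Gtk, Cairo

canvas = Gtk.@Canvas(600, 600)
win = Gtk.@Window(canvas, "animation")
showall(win)

# All drawing lives in the registered callback, where getgc is valid.
redraw(canvas) do widget
    ctx = getgc(widget)
    rectangle(ctx, 0, 0, width(widget), height(widget))
    set_source_rgb(ctx, 1, 1, 1)
    fill(ctx)
    # ... draw objects here ...
end

# The timer merely requests an update; Gtk invokes the callback to repaint.
update_timer = Timer(timer -> draw(canvas))
start_timer(update_timer, 2, 1)
```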


On Tuesday, June 17, 2014 10:44:16 AM UTC-4, Jameson wrote:
>
> This code is not valid, since getgc does not always have a valid drawing 
> context to return. Instead you need to provide Canvas with a callback 
> function via a call to redraw in which you do all the work, then just call 
> draw(canvas) in your timer callback to force an update to the view. 
> Double-buffering is enabled by default.
>
> wait(Condition()) is the same as wait(), and means sleep until this task is 
> signaled, and thereby prevents the program from exiting early
>
>
> On Tue, Jun 17, 2014 at 7:46 AM, Abe Schneider wrote:
>
>> Thank you everyone for the fast replies!
>>
>> After looking at ImageView and the sources, here's the solution I came up 
>> with:
>>
>> w = Gtk.@Window() |>
>>     (body = Gtk.@Box(:v) |>
>>         (canvas = Gtk.@Canvas(600, 600))) |>
>>     showall
>>
>> function redraw_canvas(canvas)
>>   ctx = getgc(canvas)
>>   h = height(canvas)
>>   w = width(canvas)
>>
>>   # draw background
>>   rectangle(ctx, 0, 0, w, h)
>>   set_source_rgb(ctx, 1, 1, 1)
>>   fill(ctx)
>>
>>   # draw objects
>>   # ...
>>
>>   # tell Gtk+ to redisplay
>>   draw(canvas)
>> end
>>
>> function init(canvas, delay::Real, interval::Real)
>>   update_timer = Timer(timer -> redraw_canvas(canvas))
>>   start_timer(update_timer, delay, interval)
>>   update_timer
>> end
>>
>> update_timer = init(canvas, 2, 1)
>> if !isinteractive()
>>   wait(Condition())
>> end
>>
>> stop_timer(update_timer)
>>
>> I haven't looked yet into what is required to do double-buffering (or if 
>> it's enabled by default). I also copied the 'wait(Condition())' from the 
>> docs, though it's not clear to me what the condition is (if I close the 
>> window, the program is still running -- I'm assuming that means I need to 
>> connect the signal for window destruction to said condition).
>>
>> A
>>
>>
>> On Monday, June 16, 2014 9:33:42 PM UTC-4, Jameson wrote:
>>
>>> I would definitely use Julia's timers. See `Gtk.jl/src/cairo.jl` for an 
>>> example interface to the Cairo backing to a Gtk window (used in 
>>> `Winston.jl/src/gtk.jl`). If you are using this wrapper, call `draw(w)` to 
>>> force a redraw immediately, or `draw(w,false)` to queue a redraw request 
>>> for when Gtk is idle.
>>>
>>>
>>> On Mon, Jun 16, 2014 at 9:12 PM, Tim Holy  wrote:
>>>
 ImageView's navigation.jl contains an example. The default branch is Tk
 (because, as far as binary distribution goes, Tk is "solved" and Gtk isn't
 yet), but it has a gtk branch you can look at.

 --Tim

 On Monday, June 16, 2014 04:01:46 PM Abe Schneider wrote:
 > I was looking for a way to display a simulation in Julia. Originally 
 I was
 > going to just use PyPlot, but it occurred to me it would be better to 
 just
 > use Gtk+ + Cairo to do the drawing rather than something whose main 
 purpose
 > is drawing graphs.
 >
 > So far, following the examples on the Github page, I have no problem
 > creating a window with a Cairo canvas. I can also display content on 
 the
 > canvas fairly easily (which speaks volumes on the awesomeness of 
 Julia and
 > the Gtk+ library). However, after looking through the code and 
 samples,
 > it's not obvious to me how to redraw the canvas every fraction of a 
 second
 > to display new content.
 >
 > I did find an example of animating with Cairo and Gtk+ in C
 > (http://cairographics.org/threaded_animation_with_cairo/). However, I
 > assume one would want to use Julia's timers instead of of GLibs? 
 Secondly,
 > there in their function 'timer_exe', call is made directly to Gtk+ to 
 send
 > a redraw queue to the window. Is there a cleaner way to do it with 
 the Gtk+
 > library?
 >
 > Thanks!
 > A


>>>
>

Re: [julia-users] Re: Benchmarking study: C++ < Fortran < Numba < Julia < Java < Matlab < the rest

2014-06-17 Thread Cristóvão Duarte Sousa
Are you talking about the incremental GC?

It happens that, since I'm running some experiments with a (pseudo-)realtime 
simulation in Julia, I also have that branch compiled.
In my realtime experiment, with a Timer firing at a period of 
2.2 ms, I see a big delay of roughly 9 ms about once per second when using master Julia.
With the incremental GC those delays disappear.

However, in the time measurements I described before, the use of 
the incremental GC doesn't seem to produce any better results...
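For anyone who wants to reproduce this kind of jitter measurement, a minimal harness could look like the sketch below (my own stand-in workload, not the benchmark's actual inner loop):

```julia
# Record per-iteration wall-clock times; GC pauses show up as outliers
# that inflate the mean and variance even when the median stays put.
function measure_times(f, n)
    times = Array(Float64, n)    # Julia 0.3-era array constructor
    for i = 1:n
        t0 = time()
        f()
        times[i] = time() - t0
    end
    times
end

work() = sum(abs(randn(10000) .- randn(10000)))  # allocating stand-in workload
ts = measure_times(work, 1000)
println((median(ts), mean(ts), var(ts)))
```

Plotting `ts`, as done with Winston above, makes the periodic spikes easy to see.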
 
On Tuesday, June 17, 2014 5:32:34 PM UTC+1, John Myles White wrote:
>
> Sounds like we need to rerun these benchmarks after the new GC branch gets 
> updated.
>
>  -- John
>
> On Jun 17, 2014, at 9:31 AM, Stefan Karpinski wrote:
>
> That definitely smells like a GC issue. Python doesn't have this 
> particular problem since it uses reference counting.
>
>
> On Tue, Jun 17, 2014 at 12:21 PM, Cristóvão Duarte Sousa wrote:
>
>> I've just done measurements of algorithm inner loop times in my machine 
>> by changing the code as shown in this commit.
>>
>> I've found out something... see for yourself:
>>
>> using Winston
>> numba_times = readdlm("numba_times.dat")[10:end];
>> plot(numba_times)
>>
>>
>> 
>> julia_times = readdlm("julia_times.dat")[10:end];
>> plot(julia_times)
>>
>>
>> 
>> println((median(numba_times), mean(numba_times), var(numba_times)))
>> (0.0028225183486938477,0.0028575707378805993,2.4830103817464292e-8)
>>
>> println((median(julia_times), mean(julia_times), var(julia_times)))
>> (0.00282404404,0.0034863882123824454,1.7058255003790299e-6)
>>
>> So, while inner loop times have more or less the same median on both 
>> Julia and Numba tests, the mean and variance are higher in Julia.
>>
>> Can that be due to the garbage collector kicking in?
>>
>>
>> On Monday, June 16, 2014 4:52:07 PM UTC+1, Florian Oswald wrote:
>>>
>>> Dear all,
>>>
>>> I thought you might find this paper interesting:
>>> http://economics.sas.upenn.edu/~jesusfv/comparison_languages.pdf
>>>
>>> It takes a standard model from macro economics and computes its 
>>> solution with an identical algorithm in several languages. Julia is roughly 
>>> 2.6 times slower than the best C++ executable. I was a bit puzzled by the 
>>> result, since in the benchmarks on http://julialang.org/, the slowest 
>>> test is 1.66 times C. I realize that those benchmarks can't cover all 
>>> possible situations. That said, I couldn't really find anything unusual in 
>>> the Julia code, did some profiling and removed type inference, but still 
>>> that's as fast as I got it. That's not to say that I'm disappointed, I 
>>> still think this is great. Did I miss something obvious here or is there 
>>> something specific to this algorithm? 
>>>
>>> The codes are on github at 
>>>
>>> https://github.com/jesusfv/Comparison-Programming-Languages-Economics
>>>
>>>
>>>
>
>

Re: [julia-users] Re: Benchmarking study: C++ < Fortran < Numba < Julia < Java < Matlab < the rest

2014-06-17 Thread Peter Simon
As pointed out by Dahua, there is a lot of unnecessary memory allocation. 
 This can be reduced significantly by replacing the lines

maxDifference  = maximum(abs(mValueFunctionNew-mValueFunction))
mValueFunction= mValueFunctionNew
mValueFunctionNew = zeros(nGridCapital,nGridProductivity)




with

maxDifference  = maximum(abs!(subtract!(mValueFunction, 
mValueFunctionNew)))
(mValueFunction, mValueFunctionNew) = (mValueFunctionNew, 
mValueFunction)
fill!(mValueFunctionNew, 0.0)



abs! and subtract! require adding the line

using NumericExtensions



prior to the function line.  I think the OP used Julia 0.2; I don't believe 
that NumericExtensions will work with that old version.  When I combine 
these changes with adding 

@inbounds begin
...
end



block around the "while" loop, I get about 25% reduction in execution time, 
and reduction of memory allocation from roughly 700 MByte to 180 MByte

--Peter


On Tuesday, June 17, 2014 9:32:34 AM UTC-7, John Myles White wrote:
>
> Sounds like we need to rerun these benchmarks after the new GC branch gets 
> updated.
>
>  -- John
>
> On Jun 17, 2014, at 9:31 AM, Stefan Karpinski wrote:
>
> That definitely smells like a GC issue. Python doesn't have this 
> particular problem since it uses reference counting.
>
>
> On Tue, Jun 17, 2014 at 12:21 PM, Cristóvão Duarte Sousa wrote:
>
>> I've just done measurements of algorithm inner loop times in my machine 
>> by changing the code as shown in this commit.
>>
>> I've found out something... see for yourself:
>>
>> using Winston
>> numba_times = readdlm("numba_times.dat")[10:end];
>> plot(numba_times)
>>
>>
>> 
>> julia_times = readdlm("julia_times.dat")[10:end];
>> plot(julia_times)
>>
>>
>> 
>> println((median(numba_times), mean(numba_times), var(numba_times)))
>> (0.0028225183486938477,0.0028575707378805993,2.4830103817464292e-8)
>>
>> println((median(julia_times), mean(julia_times), var(julia_times)))
>> (0.00282404404,0.0034863882123824454,1.7058255003790299e-6)
>>
>> So, while inner loop times have more or less the same median on both 
>> Julia and Numba tests, the mean and variance are higher in Julia.
>>
>> Can that be due to the garbage collector kicking in?
>>
>>
>> On Monday, June 16, 2014 4:52:07 PM UTC+1, Florian Oswald wrote:
>>>
>>> Dear all,
>>>
>>> I thought you might find this paper interesting:
>>> http://economics.sas.upenn.edu/~jesusfv/comparison_languages.pdf
>>>
>>> It takes a standard model from macro economics and computes its 
>>> solution with an identical algorithm in several languages. Julia is roughly 
>>> 2.6 times slower than the best C++ executable. I was a bit puzzled by the 
>>> result, since in the benchmarks on http://julialang.org/, the slowest 
>>> test is 1.66 times C. I realize that those benchmarks can't cover all 
>>> possible situations. That said, I couldn't really find anything unusual in 
>>> the Julia code, did some profiling and removed type inference, but still 
>>> that's as fast as I got it. That's not to say that I'm disappointed, I 
>>> still think this is great. Did I miss something obvious here or is there 
>>> something specific to this algorithm? 
>>>
>>> The codes are on github at 
>>>
>>> https://github.com/jesusfv/Comparison-Programming-Languages-Economics
>>>
>>>
>>>
>
>

Re: [julia-users] Re: Benchmarking study: C++ < Fortran < Numba < Julia < Java < Matlab < the rest

2014-06-17 Thread Andreas Noack Jensen
...but the Numba version doesn't use tricks like that.

The uniform metric can also be calculated with a small loop. I think that
requiring dependencies is against the purpose of the exercise.


2014-06-17 18:56 GMT+02:00 Peter Simon:

> As pointed out by Dahua, there is a lot of unnecessary memory allocation.
>  This can be reduced significantly by replacing the lines
>
> maxDifference  = maximum(abs(mValueFunctionNew-mValueFunction))
> mValueFunction= mValueFunctionNew
> mValueFunctionNew = zeros(nGridCapital,nGridProductivity)
>
>
>
>
> with
>
> maxDifference  = maximum(abs!(subtract!(mValueFunction,
> mValueFunctionNew)))
> (mValueFunction, mValueFunctionNew) = (mValueFunctionNew,
> mValueFunction)
> fill!(mValueFunctionNew, 0.0)
>
>
>
> abs! and subtract! require adding the line
>
> using NumericExtensions
>
>
>
> prior to the function line.  I think the OP used Julia 0.2; I don't
> believe that NumericExtensions will work with that old version.  When I
> combine these changes with adding
>
> @inbounds begin
> ...
> end
>
>
>
> block around the "while" loop, I get about 25% reduction in execution
> time, and reduction of memory allocation from roughly 700 MByte to 180 MByte
>
> --Peter
>
>
> On Tuesday, June 17, 2014 9:32:34 AM UTC-7, John Myles White wrote:
>
>> Sounds like we need to rerun these benchmarks after the new GC branch
>> gets updated.
>>
>>  -- John
>>
>> On Jun 17, 2014, at 9:31 AM, Stefan Karpinski 
>> wrote:
>>
>> That definitely smells like a GC issue. Python doesn't have this
>> particular problem since it uses reference counting.
>>
>>
>> On Tue, Jun 17, 2014 at 12:21 PM, Cristóvão Duarte Sousa <
>> cri...@gmail.com> wrote:
>>
>>> I've just done measurements of algorithm inner loop times in my machine
>>> by changing the code as shown in this commit.
>>>
>>> I've found out something... see for yourself:
>>>
>>> using Winston
>>> numba_times = readdlm("numba_times.dat")[10:end];
>>> plot(numba_times)
>>>
>>>
>>> 
>>> julia_times = readdlm("julia_times.dat")[10:end];
>>> plot(julia_times)
>>>
>>>
>>> 
>>> println((median(numba_times), mean(numba_times), var(numba_times)))
>>> (0.0028225183486938477,0.0028575707378805993,2.4830103817464292e-8)
>>>
>>> println((median(julia_times), mean(julia_times), var(julia_times)))
>>> (0.00282404404,0.0034863882123824454,1.7058255003790299e-6)
>>>
>>> So, while inner loop times have more or less the same median on both
>>> Julia and Numba tests, the mean and variance are higher in Julia.
>>>
>>> Can that be due to the garbage collector kicking in?
>>>
>>>
>>> On Monday, June 16, 2014 4:52:07 PM UTC+1, Florian Oswald wrote:

 Dear all,

 I thought you might find this paper interesting:
 http://economics.sas.upenn.edu/~jesusfv/comparison_languages.pdf

 It takes a standard model from macro economics and computes its
 solution with an identical algorithm in several languages. Julia is roughly
 2.6 times slower than the best C++ executable. I was a bit puzzled by the
 result, since in the benchmarks on http://julialang.org/, the slowest
 test is 1.66 times C. I realize that those benchmarks can't cover all
 possible situations. That said, I couldn't really find anything unusual in
 the Julia code, did some profiling and removed type inference, but still
 that's as fast as I got it. That's not to say that I'm disappointed, I
 still think this is great. Did I miss something obvious here or is there
 something specific to this algorithm?

 The codes are on github at

 https://github.com/jesusfv/Comparison-Programming-Languages-Economics



>>
>>


-- 
Kind regards

Andreas Noack Jensen


Re: [julia-users] juliabloggers.com is now live!

2014-06-17 Thread Ivar Nesje
I'm not sure Google loves a "content farm" that creates no new material 
but just republishes what others have published previously.



At 17:38:24 UTC+2 on Tuesday, 17 June 2014, Randy Zwitch wrote:
>
> It's up to you. If you remove Feedburner, you remove whatever stats you 
> might be getting. But you'll get a better looking attribution URL on Julia 
> Bloggers. So you'd get:
>
> http://julialang.org/blog/2013/05/graphical-user-interfaces-part1/
>
> instead of
>
>
> http://feedproxy.google.com/~r/JuliaLang/~3/SHZGDk581qM/graphical-user-interfaces-part1
>
> I don't think you guys have any problems ranking in Google, so I'm not 
> sure it's even worth the 5 minutes to change.
>
> On Tuesday, June 17, 2014 11:15:40 AM UTC-4, Stefan Karpinski wrote:
>>
>> I set that up back in 2012 and I know squat about RSS or setting up 
>> feeds, so if anyone has any good ideas about alternate setups, I'm all 
>> ears. I have no attachment to feedburner.
>>
>>
>> On Tue, Jun 17, 2014 at 9:33 AM, Randy Zwitch  
>> wrote:
>>
>>> I did add the julialang.org/blog feed to Julia Bloggers already. The 
>>> attribution is a bit messed up because they are re-directing their feed 
>>> using Feedburner, then Feedburner re-directs to the actual URL; I'll try to 
>>> figure out how to get the attribution to point directly to the blog post.
>>>
>>> Here are the posts from the official blog:
>>>
>>> http://www.juliabloggers.com/author/julia-developers/
>>>
>>>
>>> On Tuesday, June 17, 2014 9:04:37 AM UTC-4, Tobias Knopp wrote:

 Randy, would it be possible to integrate the page in julialang.org 
 (under the blog section)?
 If not, it would probably be good to add a link there, plus maybe remove the 
 duplicated posts.

 Cheers,

 Tobi

 On Tuesday, 17 June 2014 at 14:39:46 UTC+2, Randy Zwitch wrote:
>
> I think this is just a caching issue, the attribution should be on all 
> pages.
>
> On Tuesday, June 17, 2014 3:44:47 AM UTC-4, Mauro wrote:
>>
>> Thanks for putting this together!  One more thing about authors, on 
>> pages like for example this one
>> http://www.juliabloggers.com/using-asciiplots-jl/
>> there should be the same attribution as on the front page.
>>
>> On Tuesday, June 17, 2014 1:19:48 AM UTC+1, Randy Zwitch wrote:
>>>
>>> Ok, there is now more obvious attribution on each post, with the 
>>> author name and link of the original post prominently displayed before 
>>> the 
>>> article.
>>>
>>> If anyone else has any other recommendations/requests (still need a 
>>> logo!), please let me know.
>>>
>>
>>

Re: [julia-users] animation using Gtk+/Cairo

2014-06-17 Thread Jameson Nash
Yes. Although I think the draw...do function is actually redraw...do (this
is actually a shared interface with Tk.jl, although I recommend Gtk :)

Sent from my phone.

On Tuesday, June 17, 2014, Abe Schneider  wrote:

> @Tim: Awesome, exactly what I was looking for. Thank you.
>
> @Jameson: Just to check, do you mean something like:
>
> function redraw_canvas(canvas)
>   draw(canvas)
> end
>
> draw(canvas) do widget
>   # ...
> end
>
> If so, I'll re-post my code with the update. It may be useful to someone
> else to see the entire code as an example.
>
> Thanks!
> A
>
>
> On Tuesday, June 17, 2014 10:44:16 AM UTC-4, Jameson wrote:
>>
>> This code is not valid, since getgc does not always have a valid drawing
>> context to return. Instead you need to provide Canvas with a callback
>> function via a call to redraw in which you do all the work, then just call
>> draw(canvas) in your timer callback to force an update to the view.
>> Double-buffering is enabled by default.
>>
>> wait(Condition()) is the same as wait(), and means sleep until this task is
>> signaled, and thereby prevents the program from exiting early
>>
>>
>> On Tue, Jun 17, 2014 at 7:46 AM, Abe Schneider 
>> wrote:
>>
>>> Thank you everyone for the fast replies!
>>>
>>> After looking at ImageView and the sources, here's the solution I came
>>> up with:
>>>
>>> w = Gtk.@Window() |>
>>>     (body = Gtk.@Box(:v) |>
>>>         (canvas = Gtk.@Canvas(600, 600))) |>
>>>     showall
>>>
>>> function redraw_canvas(canvas)
>>>   ctx = getgc(canvas)
>>>   h = height(canvas)
>>>   w = width(canvas)
>>>
>>>   # draw background
>>>   rectangle(ctx, 0, 0, w, h)
>>>   set_source_rgb(ctx, 1, 1, 1)
>>>   fill(ctx)
>>>
>>>   # draw objects
>>>   # ...
>>>
>>>   # tell Gtk+ to redisplay
>>>   draw(canvas)
>>> end
>>>
>>> function init(canvas, delay::Real, interval::Real)
>>>   update_timer = Timer(timer -> redraw_canvas(canvas))
>>>   start_timer(update_timer, delay, interval)
>>>   update_timer
>>> end
>>>
>>> update_timer = init(canvas, 2, 1)
>>> if !isinteractive()
>>>   wait(Condition())
>>> end
>>>
>>> stop_timer(update_timer)
>>>
>>> I haven't looked yet into what is required to do double-buffering (or if
>>> it's enabled by default). I also copied the 'wait(Condition())' from the
>>> docs, though it's not clear to me what the condition is (if I close the
>>> window, the program is still running -- I'm assuming that means I need to
>>> connect the signal for window destruction to said condition).
>>>
>>> A
>>>
>>>
>>> On Monday, June 16, 2014 9:33:42 PM UTC-4, Jameson wrote:
>>>
 I would definitely use Julia's timers. See `Gtk.jl/src/cairo.jl` for an
 example interface to the Cairo backing to a Gtk window (used in
 `Winston.jl/src/gtk.jl`). If you are using this wrapper, call `draw(w)` to
 force a redraw immediately, or `draw(w,false)` to queue a redraw request
 for when Gtk is idle.


 On Mon, Jun 16, 2014 at 9:12 PM, Tim Holy  wrote:

> ImageView's navigation.jl contains an example. The default branch is Tk
> (because, as far as binary distribution goes, Tk is "solved" and Gtk isn't
> yet), but it has a gtk branch you can look at.
>
> --Tim
>
> On Monday, June 16, 2014 04:01:46 PM Abe Schneider wrote:
> > I was looking for a way to display a simulation in Julia. Originally
> I was
> > going to just use PyPlot, but it occurred to me it would be better
> to just
> > use Gtk+ + Cairo to do the drawing rather than something whose main
> purpose
> > is drawing graphs.
> >
> > So far, following the examples on the Github page, I have no problem
> > creating a window with a Cairo canvas. I can also display content on
> the
> > canvas fairly easily (which speaks volumes on the awesomeness of
> Julia and
> > the Gtk+ library). However, after looking through the code and
> samples,
> > it's not obvious to me how to redraw the canvas every fraction of a
> second
> > to display new content.
> >
> > I did find an example of animating with Cairo and Gtk+ in C
> > (http://cairographics.org/threaded_animation_with_cairo/). However,
> I
> > assume one would want to use Julia's timers instead of of GLibs?
> Secondly,
> > there in their function 'timer_exe', call is made directly to Gtk+
> to send
> > a redraw queue to the window. Is there a cleaner way to do it with
> the Gtk+
> > library?
> >
> > Thanks!
> > A
>
>

>>


Re: [julia-users] Re: Benchmarking study: C++ < Fortran < Numba < Julia < Java < Matlab < the rest

2014-06-17 Thread Peter Simon
You're right.  Replacing the NumericExtensions function calls with a small 
loop

maxDifference = 0.0
for k = 1:length(mValueFunction)
    maxDifference = max(maxDifference, abs(mValueFunction[k] - mValueFunctionNew[k]))
end


makes no significant difference in execution time or memory allocation and 
eliminates the dependency.

--Peter


On Tuesday, June 17, 2014 10:05:03 AM UTC-7, Andreas Noack Jensen wrote:
>
> ...but the Numba version doesn't use tricks like that. 
>
> The uniform metric can also be calculated with a small loop. I think that 
> requiring dependencies is against the purpose of the exercise.
>
>
> 2014-06-17 18:56 GMT+02:00 Peter Simon:
>
>> As pointed out by Dahua, there is a lot of unnecessary memory allocation. 
>>  This can be reduced significantly by replacing the lines
>>
>> maxDifference  = maximum(abs(mValueFunctionNew-mValueFunction))
>> mValueFunction= mValueFunctionNew
>> mValueFunctionNew = zeros(nGridCapital,nGridProductivity)
>>
>>
>>
>>
>> with
>>
>> maxDifference  = maximum(abs!(subtract!(mValueFunction, 
>> mValueFunctionNew)))
>> (mValueFunction, mValueFunctionNew) = (mValueFunctionNew, 
>> mValueFunction)
>> fill!(mValueFunctionNew, 0.0)
>>
>>
>>
>> abs! and subtract! require adding the line
>>
>> using NumericExtensions
>>
>>
>>
>> prior to the function line.  I think the OP used Julia 0.2; I don't 
>> believe that NumericExtensions will work with that old version.  When I 
>> combine these changes with adding 
>>
>> @inbounds begin
>> ...
>> end
>>
>>
>>
>> block around the "while" loop, I get about 25% reduction in execution 
>> time, and reduction of memory allocation from roughly 700 MByte to 180 MByte
>>
>> --Peter
>>
>>
>> On Tuesday, June 17, 2014 9:32:34 AM UTC-7, John Myles White wrote:
>>
>>> Sounds like we need to rerun these benchmarks after the new GC branch 
>>> gets updated.
>>>
>>>  -- John
>>>
>>> On Jun 17, 2014, at 9:31 AM, Stefan Karpinski wrote:
>>>
>>> That definitely smells like a GC issue. Python doesn't have this 
>>> particular problem since it uses reference counting.
>>>
>>>
>>> On Tue, Jun 17, 2014 at 12:21 PM, Cristóvão Duarte Sousa <
>>> cri...@gmail.com> wrote:
>>>
 I've just done measurements of algorithm inner loop times in my machine 
 by changing the code as shown in this commit.

 I've found out something... see for yourself:

 using Winston
 numba_times = readdlm("numba_times.dat")[10:end];
 plot(numba_times)


 
 julia_times = readdlm("julia_times.dat")[10:end];
 plot(julia_times)


 
 println((median(numba_times), mean(numba_times), var(numba_times)))
 (0.0028225183486938477,0.0028575707378805993,2.4830103817464292e-8)

  println((median(julia_times), mean(julia_times), var(julia_times)))
 (0.00282404404,0.0034863882123824454,1.7058255003790299e-6)

 So, while inner loop times have more or less the same median on both 
 Julia and Numba tests, the mean and variance are higher in Julia.

 Can that be due to the garbage collector kicking in?


 On Monday, June 16, 2014 4:52:07 PM UTC+1, Florian Oswald wrote:
>
> Dear all,
>
> I thought you might find this paper interesting:
> http://economics.sas.upenn.edu/~jesusfv/comparison_languages.pdf
>
> It takes a standard model from macro economics and computes its 
> solution with an identical algorithm in several languages. Julia is 
> roughly 2.6 times slower than the best C++ executable. I was a bit puzzled by the 
> result, since in the benchmarks on http://julialang.org/, the slowest 
> test is 1.66 times C. I realize that those benchmarks can't cover all 
> possible situations. That said, I couldn't really find anything unusual 
> in 
> the Julia code, did some profiling and removed type inference, but still 
> that's as fast as I got it. That's not to say that I'm disappointed, I 
> still think this is great. Did I miss something obvious here or is there 
> something specific to this algorithm? 
>
> The codes are on github at 
>
> https://github.com/jesusfv/Comparison-Programming-Languages-Economics
>
>
>
>>>
>>>
>
>
> -- 
> Kind regards
>
> Andreas Noack Jensen
>  


Re: [julia-users] Benchmarking study: C++ < Fortran < Numba < Julia < Java < Matlab < the rest

2014-06-17 Thread Florian Oswald
Thanks Peter. I made that devectorizing change after Dahua suggested it. It
made a massive difference!

On Tuesday, 17 June 2014, Peter Simon  wrote:

> You're right.  Replacing the NumericExtensions function calls with a small
> loop
>
> maxDifference  = 0.0
> for k = 1:length(mValueFunction)
> maxDifference = max(maxDifference, abs(mValueFunction[k]-
> mValueFunctionNew[k]))
> end
>
>
> makes no significant difference in execution time or memory allocation and
> eliminates the dependency.
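Peter's loop, wrapped into a runnable function for reference (modern syntax; in 0.2/0.3-era code `abs.(A .- B)` would be written `abs(A - B)`). It computes the same value as the vectorized expression without allocating a temporary array:

```julia
# Devectorized max-abs-difference: no temporary array is created in the loop.
function maxabsdiff(A::AbstractArray, B::AbstractArray)
    maxDifference = 0.0
    for k in 1:length(A)
        maxDifference = max(maxDifference, abs(A[k] - B[k]))
    end
    return maxDifference
end

A = [1.0 2.0; 3.0 4.0]
B = [1.5 1.0; 3.0 7.0]
maxabsdiff(A, B) == maximum(abs.(A .- B))  # same result, no temporaries
```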
>
> --Peter
>
>
> On Tuesday, June 17, 2014 10:05:03 AM UTC-7, Andreas Noack Jensen wrote:
>>
>> ...but the Numba version doesn't use tricks like that.
>>
>> The uniform metric can also be calculated with a small loop. I think that
>> requiring dependencies is against the purpose of the exercise.
>>
>>
>> 2014-06-17 18:56 GMT+02:00 Peter Simon :
>>
>>> As pointed out by Dahua, there is a lot of unnecessary memory
>>> allocation.  This can be reduced significantly by replacing the lines
>>>
>>> maxDifference  = maximum(abs(mValueFunctionNew-mValueFunction))
>>> mValueFunction= mValueFunctionNew
>>> mValueFunctionNew = zeros(nGridCapital,nGridProductivity)
>>>
>>>
>>>
>>>
>>> with
>>>
>>> maxDifference  = maximum(abs!(subtract!(mValueFunction,
>>> mValueFunctionNew)))
>>> (mValueFunction, mValueFunctionNew) = (mValueFunctionNew,
>>> mValueFunction)
>>> fill!(mValueFunctionNew, 0.0)
>>>
>>>
>>>
>>> abs! and subtract! require adding the line
>>>
>>> using NumericExtensions
>>>
>>>
>>>
>>> prior to the function line.  I think the OP used Julia 0.2; I don't
>>> believe that NumericExtensions will work with that old version.  When I
>>> combine these changes with adding
>>>
>>> @inbounds begin
>>> ...
>>> end
>>>
>>>
>>>
>>> block around the "while" loop, I get about 25% reduction in execution
>>> time, and reduction of memory allocation from roughly 700 MByte to 180 MByte
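The second change in Peter's snippet is a buffer swap: instead of allocating `mValueFunctionNew` afresh on each iteration, the two bindings are exchanged and the spare buffer is cleared. A small self-contained sketch of that pattern (hypothetical arrays standing in for the value-function matrices):

```julia
V    = ones(4, 3)        # stands in for mValueFunction
Vnew = fill(2.0, 4, 3)   # stands in for mValueFunctionNew

# Swap the bindings (O(1), no copy), then reuse the old storage in place.
V, Vnew = Vnew, V
fill!(Vnew, 0.0)

(V[1, 1], Vnew[1, 1])  # → (2.0, 0.0): no new array was allocated
```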
>>>
>>> --Peter
>>>
>>>
>>> On Tuesday, June 17, 2014 9:32:34 AM UTC-7, John Myles White wrote:
>>>
 Sounds like we need to rerun these benchmarks after the new GC branch
 gets updated.

  -- John

 On Jun 17, 2014, at 9:31 AM, Stefan Karpinski 
 wrote:

 That definitely smells like a GC issue. Python doesn't have this
 particular problem since it uses reference counting.


 On Tue, Jun 17, 2014 at 12:21 PM, Cristóvão Duarte Sousa <
 cri...@gmail.com> wrote:

> I've just done measurements of algorithm inner loop times in my
> machine by changing the code as shown in this commit.
>
> I've found out something... see for yourself:
>
> using Winston
> numba_times = readdlm("numba_times.dat")[10:end];
> plot(numba_times)
>
>
> 
> julia_times = readdlm("julia_times.dat")[10:end];
> plot(julia_times)
>
>
> 
> println((median(numba_times), mean(numba_times), var(numba_times)))
> (0.0028225183486938477,0.0028575707378805993,2.4830103817464292e-8)
>
>  println((median(julia_times), mean(julia_times), var(julia_times)))
> (0.00282404404,0.0034863882123824454,1.7058255003790299e-6)
>
> So, while inner loop times have more or less the same median on both
> Julia and Numba tests, the mean and variance are higher in Julia.
>
> Can that be due to the garbage collector kicking in?
>
>
> On Monday, June 16, 2014 4:52:07 PM UTC+1, Florian Oswald wrote:
>>
>> Dear all,
>>
>> I thought you might find this paper interesting:
>> http://economics.sas.upenn.edu/~jesusfv/comparison_languages.pdf
>>
>> It takes a standard model from macroeconomics and computes its
>> solution with an identical algorithm in several languages. Julia is roughly
>> 2.6 times slower than the best C++ executable. I was a bit puzzled by the
>> result, since in the benchmarks on http://julialang.org/, the
>> slowest test is 1.66 times C. I realize that those benchmarks can't cover
>> all possible situations. That said, I couldn't really find anything 
>> unusual
>> in the Julia code, did some profiling and removed type inference, but 
>> still
>> that's as fast as I got it. That's not to say that I'm disappointed, I
>> still think this is great. Did I miss something obvious here or is there
>> something specific to this algorithm?
>>
>> The codes are on github at
>>
>> https://github.com/jesusfv/Comparison-Programming-Languages-Economics
>>
>>
>>


>>
>>
>> --
>> Med venlig hilsen
>

Re: [julia-users] juliabloggers.com is now live!

2014-06-17 Thread Stefan Karpinski
I don't think we should be concerned about Google. We've got plenty of Google 
mojo already and we're not trying to sell anything here.

> On Jun 17, 2014, at 1:18 PM, Ivar Nesje  wrote:
> 
> I'm not sure Google loves a "content farm", that creates no new material, but 
> just republishes what others have published previously.
> 
> 
> 
> kl. 17:38:24 UTC+2 tirsdag 17. juni 2014 skrev Randy Zwitch følgende:
>> 
>> It's up to you. If you remove Feedburner, you remove whatever stats you 
>> might be getting. But you'll get a better looking attribution URL on Julia 
>> Bloggers. So you'd get:
>> 
>> http://julialang.org/blog/2013/05/graphical-user-interfaces-part1/
>> 
>> instead of
>> 
>> http://feedproxy.google.com/~r/JuliaLang/~3/SHZGDk581qM/graphical-user-interfaces-part1
>> 
>> I don't think you guys have any problems ranking in Google, so I'm not sure 
>> it's even worth the 5 minutes to change.
>> 
>>> On Tuesday, June 17, 2014 11:15:40 AM UTC-4, Stefan Karpinski wrote:
>>> I set that up back in 2012 and I know squat about RSS or setting up feeds, 
>>> so if anyone has any good ideas about alternate setups, I'm all ears. I 
>>> have no attachment to feedburner.
>>> 
>>> 
 On Tue, Jun 17, 2014 at 9:33 AM, Randy Zwitch  
 wrote:
 I did add the julialang.org/blog feed to Julia Bloggers already. The 
 attribution is a bit messed up because they are re-directing their feed 
 using Feedburner, then Feedburner re-directs to the actual URL; I'll try 
 to figure out how to get the attribution to point directly to the blog 
 post.
 
 Here are the posts from the official blog:
 
 http://www.juliabloggers.com/author/julia-developers/
 
 
> On Tuesday, June 17, 2014 9:04:37 AM UTC-4, Tobias Knopp wrote:
> Randy, would it be possible to integrate the page in julialang.org (under 
> the blog section)?
> If not it would probably be good to add a link there + maybe remove the 
> duplicated posts.
> 
> Cheers,
> 
> Tobi
> 
> Am Dienstag, 17. Juni 2014 14:39:46 UTC+2 schrieb Randy Zwitch:
>> 
>> I think this is just a caching issue, the attribution should be on all 
>> pages.
>> 
>>> On Tuesday, June 17, 2014 3:44:47 AM UTC-4, Mauro wrote:
>>> Thanks for putting this together!  One more thing about authors, on 
>>> pages like for example this one
>>> http://www.juliabloggers.com/using-asciiplots-jl/
>>> there should be the same attribution as on the front page.
>>> 
 On Tuesday, June 17, 2014 1:19:48 AM UTC+1, Randy Zwitch wrote:
 Ok, there is now more obvious attribution on each post, with the 
 author name and link of the original post prominently displayed before 
 the article.
 
 If anyone else has any other recommendations/requests (still need a 
 logo!), please let me know.
>>> 


RE: [julia-users] Benchmarking study: C++ < Fortran < Numba < Julia < Java < Matlab < the rest

2014-06-17 Thread David Anthoff
I submitted three pull requests to the original repo that get rid of three 
different array allocations in loops and that make things a fair bit faster 
altogether:

 

https://github.com/jesusfv/Comparison-Programming-Languages-Economics/pulls
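The pull requests themselves aren't quoted here; as an illustration of the general change (hypothetical code, not the repo's), hoisting an array allocation out of a loop looks like this:

```julia
# Before: a fresh temporary vector is allocated on every iteration.
function alloc_in_loop(n)
    s = 0.0
    for i in 1:n
        tmp = [float(i), float(i + 1)]   # per-iteration allocation
        s += sum(tmp)
    end
    s
end

# After: one buffer allocated once and reused across iterations.
function alloc_hoisted(n)
    s = 0.0
    tmp = Vector{Float64}(undef, 2)
    for i in 1:n
        tmp[1] = i
        tmp[2] = i + 1
        s += sum(tmp)
    end
    s
end

alloc_in_loop(10) == alloc_hoisted(10)  # identical result, far fewer allocations
```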

 

I think it would also make sense to run these benchmarks on Julia 0.3.0 instead 
of 0.2.1, given that there have been a fair number of performance improvements.

 

From: julia-users@googlegroups.com [mailto:julia-users@googlegroups.com] On 
Behalf Of Florian Oswald
Sent: Tuesday, June 17, 2014 10:50 AM
To: julia-users@googlegroups.com
Subject: Re: [julia-users] Benchmarking study: C++ < Fortran < Numba < Julia < 
Java < Matlab < the rest

 

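Andreas's definition isn't quoted in this excerpt; a plausible reconstruction (an assumption on my part, not the actual posted code) is the usual `ccall` wrapper that routes `log` to the system libm instead of openlibm, which would also explain why its speed varies by platform:

```julia
# Assumed reconstruction of "mylog": call the system libm's log directly,
# bypassing openlibm. Whether this is faster depends entirely on the quality
# of the platform's libm, consistent with the slowdown Peter saw on CentOS.
mylog(x::Float64) = ccall((:log, "libm"), Float64, (Float64,), x)

mylog(1.0)  # → 0.0, same value as log(1.0); only the backend differs
```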

RE: [julia-users] Benchmarking study: C++ < Fortran < Numba < Julia < Java < Matlab < the rest

2014-06-17 Thread David Anthoff
Another interesting result from the paper is how much faster Visual C++ 2010 
generated code is than gcc, on Windows. For their example, the gcc runtime is 
2.29 times the runtime of the MS-compiled version. The difference might be even 
larger with Visual C++ 2013 because that is when MS added an auto-vectorizer 
that is on by default.

 

I vaguely remember a discussion about compiling julia itself with the MS 
compiler on Windows, is that working and is that making a performance 
difference?

 

From: julia-users@googlegroups.com [mailto:julia-users@googlegroups.com] On 
Behalf Of Peter Simon
Sent: Tuesday, June 17, 2014 12:08 PM
To: julia-users@googlegroups.com
Subject: Re: [julia-users] Benchmarking study: C++ < Fortran < Numba < Julia < 
Java < Matlab < the rest

 


Re: [julia-users] Benchmarking study: C++ < Fortran < Numba < Julia < Java < Matlab < the rest

2014-06-17 Thread Tobias Knopp
There are some remaining issues but compilation with MSVC is almost 
possible. I did some initial work and Tony Kelman made lots of progress 
in https://github.com/JuliaLang/julia/pull/6230. But there have not been 
any speed comparisons as far as I know. Note that Julia uses JIT 
compilation, and thus I would not expect the source compiler to have a 
huge impact.


Am Dienstag, 17. Juni 2014 21:25:50 UTC+2 schrieb David Anthoff:
>
> Another interesting result from the paper is how much faster Visual C++ 
> 2010 generated code is than gcc, on Windows. For their example, the gcc 
> runtime is 2.29 times the runtime of the MS-compiled version. The difference 
> might be even larger with Visual C++ 2013 because that is when MS added an 
> auto-vectorizer that is on by default.
>
>  
>
> I vaguely remember a discussion about compiling julia itself with the MS 
> compiler on Windows, is that working and is that making a performance 
> difference?
>
>  
>

Re: [julia-users] Benchmarking study: C++ < Fortran < Numba < Julia < Java < Matlab < the rest

2014-06-17 Thread Tony Kelman
I got pretty far on that a few months ago, 
see https://github.com/JuliaLang/julia/pull/6230 
and https://github.com/JuliaLang/julia/issues/6349

A couple of tiny changes aren't in master at the moment, but I was able to 
get libjulia compiled and julia.exe starting system image bootstrap. It hit 
a stack overflow at osutils.jl which is right after inference.jl, so the 
problem is likely in compiling type inference. Apparently I was missing 
some flags that are used in the MinGW build to increase the default stack 
size. Haven't gotten back to giving it another try recently.


On Tuesday, June 17, 2014 12:25:50 PM UTC-7, David Anthoff wrote:
>
> Another interesting result from the paper is how much faster Visual C++ 
> 2010 generated code is than gcc, on Windows. For their example, the gcc 
> runtime is 2.29 times the runtime of the MS-compiled version. The difference 
> might be even larger with Visual C++ 2013 because that is when MS added an 
> auto-vectorizer that is on by default.
>
>  
>
> I vaguely remember a discussion about compiling julia itself with the MS 
> compiler on Windows, is that working and is that making a performance 
> difference?
>
>  
>
> *From:* julia...@googlegroups.com  [mailto:
> julia...@googlegroups.com ] *On Behalf Of *Peter Simon
> *Sent:* Tuesday, June 17, 2014 12:08 PM
> *To:* julia...@googlegroups.com 
> *Subject:* Re: [julia-users] Benchmarking study: C++ < Fortran < Numba < 
> Julia < Java < Matlab < the rest
>
>  
>
> Sorry, Florian and David, for not seeing that you were way ahead of me.
>
>  
>
> On the subject of the log function:  I tried implementing mylog() as 
> defined by Andreas on Julia running on CentOS and the result was a 
> significant slowdown! (Yes, I defined the mylog function outside of main, 
> at the module level).  Not sure if this is due to variation in the quality 
> of the libm function on various systems or what.  If so, then it makes 
> sense that Julia wants a uniformly accurate and fast implementation via 
> openlibm.  But for fastest transcendental function performance, I assume 
> that one must use the micro-coded versions built into the processor's 
> FPU--Is that what the fast libm implementations do?  In that case, how 
> could one hope to compete when using a C-coded version?
>
>  
>
> --Peter
>
>
>
> On Tuesday, June 17, 2014 10:57:47 AM UTC-7, David Anthoff wrote:
>
> I submitted three pull requests to the original repo that get rid of three 
> different array allocations in loops and that make things a fair bit faster 
> altogether:
>
>  
>
> https://github.com/jesusfv/Comparison-Programming-Languages-Economics/pulls
>
>  
>
> I think it would also make sense to run these benchmarks on julia 0.3.0 
> instead of 0.2.1, given that there have been a fair number of performance 
> imrpovements.
>
>  
>
> *From:* julia...@googlegroups.com [mailto:julia...@googlegroups.com] *On 
> Behalf Of *Florian Oswald
> *Sent:* Tuesday, June 17, 2014 10:50 AM
> *To:* julia...@googlegroups.com
> *Subject:* Re: [julia-users] Benchmarking study: C++ < Fortran < Numba < 
> Julia < Java < Matlab < the rest
>
>  
>
> thanks peter. I made that devectorizing change after dalua suggested so. 
> It made a massive difference!
>
> On Tuesday, 17 June 2014, Peter Simon  wrote:
>
> You're right.  Replacing the NumericExtensions function calls with a small 
> loop
>
>  
>
> maxDifference  = 0.0
> for k = 1:length(mValueFunction)
> maxDifference = max(maxDifference, abs(mValueFunction[k]- 
> mValueFunctionNew[k]))
> end
>
>
> makes no significant difference in execution time or memory allocation and 
> eliminates the dependency.
>
>  
>
> --Peter
>
>
>
> On Tuesday, June 17, 2014 10:05:03 AM UTC-7, Andreas Noack Jensen wrote:
>
> ...but the Numba version doesn't use tricks like that. 
>
>  
>
> The uniform metric can also be calculated with a small loop. I think that 
> requiring dependencies is against the purpose of the exercise.
>
>  
>
> 2014-06-17 18:56 GMT+02:00 Peter Simon :
>
> As pointed out by Dahua, there is a lot of unnecessary memory allocation. 
>  This can be reduced significantly by replacing the lines
>
>  
>
> maxDifference  = maximum(abs(mValueFunctionNew-mValueFunction))
> mValueFunction= mValueFunctionNew
> mValueFunctionNew = zeros(nGridCapital,nGridProductivity)
>
>  
>
>  
>
> with
>
>  
>
> maxDifference  = maximum(abs!(subtract!(mValueFunction, 
> mValueFunctionNew)))
> (mValueFunction, mValueFunctionNew) = (mValueFunctionNew, 
> mValueFunction)
> fill!(mValueFunctionNew, 0.0)
>
>  
>
> abs! and subtract! require adding the line
>
>  
>
> using NumericExtensions
>
>  
>
> prior to the function line.  I think the OP used Julia 0.2; I don't 
> believe that NumericExtensions will work with that old version.  When I 
> combine these changes with adding 
>
>  
>
> @inbounds begin
> ...
> end
>
>  
>
> block around the "while" loop, I get about 25% reduction in execution 
> time, and reduction of memory allocation from roughly 700 MByte to 180 MByte

RE: [julia-users] Benchmarking study: C++ < Fortran < Numba < Julia < Java < Matlab < the rest

2014-06-17 Thread David Anthoff
I was more thinking that this might make a difference for some of the 
dependencies, like openblas? But I’m not even sure that can be compiled at all 
using MS compilers…

 

From: julia-users@googlegroups.com [mailto:julia-users@googlegroups.com] On 
Behalf Of Tobias Knopp
Sent: Tuesday, June 17, 2014 12:42 PM
To: julia-users@googlegroups.com
Subject: Re: [julia-users] Benchmarking study: C++ < Fortran < Numba < Julia < 
Java < Matlab < the rest

 

There are some remaining issues but compilation with MSVC is almost possible. I 
did some initial work and Tony Kelman made lots of progress in 
https://github.com/JuliaLang/julia/pull/6230. But there have not been any speed 
comparisons as far as I know. Note that Julia uses JIT compilation and thus I 
would not expect the source compiler to have a huge impact.

 


Am Dienstag, 17. Juni 2014 21:25:50 UTC+2 schrieb David Anthoff:

Another interesting result from the paper is how much faster Visual C++ 2010 
generated code is than gcc, on Windows. For their example, the gcc runtime is 
2.29 times the runtime of the MS-compiled version. The difference might be even 
larger with Visual C++ 2013 because that is when MS added an auto-vectorizer 
that is on by default.

 

I vaguely remember a discussion about compiling julia itself with the MS 
compiler on Windows, is that working and is that making a performance 
difference?

 

From: julia...@googlegroups.com   
[mailto:julia...@googlegroups.com  ] On Behalf Of Peter Simon
Sent: Tuesday, June 17, 2014 12:08 PM
To: julia...@googlegroups.com  
Subject: Re: [julia-users] Benchmarking study: C++ < Fortran < Numba < Julia < 
Java < Matlab < the rest

 

Sorry, Florian and David, for not seeing that you were way ahead of me.

 

On the subject of the log function:  I tried implementing mylog() as defined by 
Andreas on Julia running on CentOS and the result was a significant slowdown! 
(Yes, I defined the mylog function outside of main, at the module level).  Not 
sure if this is due to variation in the quality of the libm function on various 
systems or what.  If so, then it makes sense that Julia wants a uniformly 
accurate and fast implementation via openlibm.  But for fastest transcendental 
function performance, I assume that one must use the micro-coded versions built 
into the processor's FPU--Is that what the fast libm implementations do?  In 
that case, how could one hope to compete when using a C-coded version?

 

--Peter



On Tuesday, June 17, 2014 10:57:47 AM UTC-7, David Anthoff wrote:

I submitted three pull requests to the original repo that get rid of three 
different array allocations in loops and that make things a fair bit faster 
altogether:

 

https://github.com/jesusfv/Comparison-Programming-Languages-Economics/pulls

 

I think it would also make sense to run these benchmarks on julia 0.3.0 instead 
of 0.2.1, given that there have been a fair number of performance improvements.

 

From: julia...@googlegroups.com   
[mailto:julia...@googlegroups.com] On Behalf Of Florian Oswald
Sent: Tuesday, June 17, 2014 10:50 AM
To: julia...@googlegroups.com  
Subject: Re: [julia-users] Benchmarking study: C++ < Fortran < Numba < Julia < 
Java < Matlab < the rest

 

thanks peter. I made that devectorizing change after Dahua suggested it. It 
made a massive difference!

On Tuesday, 17 June 2014, Peter Simon wrote:

You're right.  Replacing the NumericExtensions function calls with a small loop

 

maxDifference  = 0.0
for k = 1:length(mValueFunction)
maxDifference = max(maxDifference, abs(mValueFunction[k]- 
mValueFunctionNew[k]))
end


makes no significant difference in execution time or memory allocation and 
eliminates the dependency.

 

--Peter



On Tuesday, June 17, 2014 10:05:03 AM UTC-7, Andreas Noack Jensen wrote:

...but the Numba version doesn't use tricks like that. 

 

The uniform metric can also be calculated with a small loop. I think that 
requiring dependencies is against the purpose of the exercise.

 

2014-06-17 18:56 GMT+02:00 Peter Simon:

As pointed out by Dahua, there is a lot of unnecessary memory allocation.  This 
can be reduced significantly by replacing the lines

 

maxDifference  = maximum(abs(mValueFunctionNew-mValueFunction))
mValueFunction= mValueFunctionNew
mValueFunctionNew = zeros(nGridCapital,nGridProductivity)

 

 

with

 

maxDifference  = maximum(abs!(subtract!(mValueFunction, 
mValueFunctionNew)))
(mValueFunction, mValueFunctionNew) = (mValueFunctionNew, 
mValueFunction)
fill!(mValueFunctionNew, 0.0)

 

abs! and subtract! require adding the line

 

using NumericExtensions

 

prior to the function line.  I think the OP used Julia 0.2; I don't believe 
that NumericExtensions will work with that old version.  When I combine these 
changes with adding 

[julia-users] Re: An appreciation of two contributors among many

2014-06-17 Thread Dahua Lin
Hi, Doug,

Thanks for the nice words. Contributing to Julia and its ecosystems has 
been one of the most rewarding activities I have ever experienced. I have also 
learned a lot from the discussions with fellow contributors. It feels 
wonderful to see your efforts impacting the world in a positive way.

Dahua


On Monday, June 16, 2014 4:48:56 PM UTC-5, Douglas Bates wrote:
>
> So many talented people have contributed so much to the Julia project that 
> it would not be possible to acknowledge them all.
>
> Nonetheless, my recent work has made me especially appreciative of the 
> work of Tim Holy for the Profile code and the ProfileView package and of 
> Dahua Lin for the  NumericExtensions and NumericFuns in particular.  These 
> are incredible tools.
>
> It is so easy to forget that you should profile before you attempt to 
> optimize your code.  I just learned that again.  I was getting very good 
> performance on an example using my MixedModels package - about twice the 
> speed of the R/C++ package lme4 that other contributors and I have been 
> working on seemingly forever.  Then I profiled my Julia code, which was 
> already 2-3 times as fast as the R/C++ version, viewed the profile and thought, 
> "what's that wide bar over on the left?".  I "knew" where the function must 
> be spending its time and, of course, most of the time was being taken up in 
> another part of the function entirely.  Some rewriting has now resulted in 
> code that is 10 times as fast as the R/C++ code.
>
> I had a similar experience earlier in this development cycle when I 
> replaced a call to fma! in the NumericExtensions package with an explicit 
> loop using @inbounds that I "knew" would be just as fast.  It wasn't.  
>
>
>

Re: [julia-users] Benchmarking study: C++ < Fortran < Numba < Julia < Java < Matlab < the rest

2014-06-17 Thread Tony Kelman
We're diverging from the topic of the thread, but anyway...

No, MSVC OpenBLAS will probably never happen, you'd have to CMake-ify the 
whole thing and probably translate all of the assembly to Intel syntax. And 
skip the Fortran, or use Intel's compiler. I don't think they have the 
resources to do that.

There's a C99-only optimized BLAS implementation under development by the 
FLAME group at University of Texas here https://github.com/flame/blis that 
does aim to eventually support MSVC. It's nowhere near as mature as 
OpenBLAS in terms of automatically detecting architecture, cache sizes, 
etc. But their papers look very promising. They could use more people 
poking at it and submitting patches to get it to the usability level we'd 
need.

The rest of the dependencies vary significantly in how painful they would 
be to build with MSVC. GMP in particular was forked into a new project 
called MPIR, with MSVC compatibility being one of the major reasons.



On Tuesday, June 17, 2014 12:47:49 PM UTC-7, David Anthoff wrote:
>
> I was more thinking that this might make a difference for some of the 
> dependencies, like openblas? But I’m not even sure that can be compiled at 
> all using MS compilers…
>
>  
>
> *From:* julia...@googlegroups.com  [mailto:
> julia...@googlegroups.com ] *On Behalf Of *Tobias Knopp
> *Sent:* Tuesday, June 17, 2014 12:42 PM
> *To:* julia...@googlegroups.com 
> *Subject:* Re: [julia-users] Benchmarking study: C++ < Fortran < Numba < 
> Julia < Java < Matlab < the rest
>
>  
>
> There are some remaining issues but compilation with MSVC is almost 
> possible. I did some initial work and Tony Kelman made lots of progress in 
> https://github.com/JuliaLang/julia/pull/6230. But there have not been any 
> speed comparisons as far as I know. Note that Julia uses JIT compilation 
> and thus I would not expect the source compiler to have a huge impact.
>
>  
>
>
> Am Dienstag, 17. Juni 2014 21:25:50 UTC+2 schrieb David Anthoff:
>
> Another interesting result from the paper is how much faster Visual C++ 
> 2010 generated code is than gcc, on Windows. For their example, the gcc 
> runtime is 2.29 times the runtime of the MS-compiled version. The difference 
> might be even larger with Visual C++ 2013 because that is when MS added an 
> auto-vectorizer that is on by default.
>
>  
>
> I vaguely remember a discussion about compiling julia itself with the MS 
> compiler on Windows, is that working and is that making a performance 
> difference?
>
>  
>
> *From:* julia...@googlegroups.com [mailto:julia...@googlegroups.com] *On 
> Behalf Of *Peter Simon
> *Sent:* Tuesday, June 17, 2014 12:08 PM
> *To:* julia...@googlegroups.com
> *Subject:* Re: [julia-users] Benchmarking study: C++ < Fortran < Numba < 
> Julia < Java < Matlab < the rest
>
>  
>
> Sorry, Florian and David, for not seeing that you were way ahead of me.
>
>  
>
> On the subject of the log function:  I tried implementing mylog() as 
> defined by Andreas on Julia running on CentOS and the result was a 
> significant slowdown! (Yes, I defined the mylog function outside of main, 
> at the module level).  Not sure if this is due to variation in the quality 
> of the libm function on various systems or what.  If so, then it makes 
> sense that Julia wants a uniformly accurate and fast implementation via 
> openlibm.  But for fastest transcendental function performance, I assume 
> that one must use the micro-coded versions built into the processor's 
> FPU--Is that what the fast libm implementations do?  In that case, how 
> could one hope to compete when using a C-coded version?
>
>  
>
> --Peter
>
>
>
> On Tuesday, June 17, 2014 10:57:47 AM UTC-7, David Anthoff wrote:
>
> I submitted three pull requests to the original repo that get rid of three 
> different array allocations in loops and that make things a fair bit faster 
> altogether:
>
>  
>
> https://github.com/jesusfv/Comparison-Programming-Languages-Economics/pulls
>
>  
>
> I think it would also make sense to run these benchmarks on julia 0.3.0 
> instead of 0.2.1, given that there have been a fair number of performance 
> improvements.
>
>  
>
> *From:* julia...@googlegroups.com [mailto:julia...@googlegroups.com] *On 
> Behalf Of *Florian Oswald
> *Sent:* Tuesday, June 17, 2014 10:50 AM
> *To:* julia...@googlegroups.com
> *Subject:* Re: [julia-users] Benchmarking study: C++ < Fortran < Numba < 
> Julia < Java < Matlab < the rest
>
>  
>
> thanks peter. I made that devectorizing change after Dahua suggested it. 
> It made a massive difference!
>
> On Tuesday, 17 June 2014, Peter Simon  wrote:
>
> You're right.  Replacing the NumericExtensions function calls with a small 
> loop
>
>  
>
> maxDifference  = 0.0
> for k = 1:length(mValueFunction)
> maxDifference = max(maxDifference, abs(mValueFunction[k]- 
> mValueFunctionNew[k]))
> end
>
>
> makes no significant difference in execution time or memory allocation and 
> eliminates the dependency.
>
>

Re: [julia-users] Benchmarking study: C++ < Fortran < Numba < Julia < Java < Matlab < the rest

2014-06-17 Thread Tobias Knopp
I think one has to distinguish between the Julia core dependencies and the 
runtime dependencies. The latter (like OpenBLAS) don't tell us much about how fast 
"Julia" is. The libm issue discussed in this thread is of that nature.

Am Dienstag, 17. Juni 2014 22:03:51 UTC+2 schrieb Tony Kelman:
>
> We're diverging from the topic of the thread, but anyway...
>
> No, MSVC OpenBLAS will probably never happen, you'd have to CMake-ify the 
> whole thing and probably translate all of the assembly to Intel syntax. And 
> skip the Fortran, or use Intel's compiler. I don't think they have the 
> resources to do that.
>
> There's a C99-only optimized BLAS implementation under development by the 
> FLAME group at University of Texas here https://github.com/flame/blis 
> that does aim to eventually support MSVC. It's nowhere near as mature as 
> OpenBLAS in terms of automatically detecting architecture, cache sizes, 
> etc. But their papers look very promising. They could use more people 
> poking at it and submitting patches to get it to the usability level we'd 
> need.
>
> The rest of the dependencies vary significantly in how painful they would 
> be to build with MSVC. GMP in particular was forked into a new project 
> called MPIR, with MSVC compatibility being one of the major reasons.
>
>
>
> On Tuesday, June 17, 2014 12:47:49 PM UTC-7, David Anthoff wrote:
>>
>> I was more thinking that this might make a difference for some of the 
>> dependencies, like openblas? But I’m not even sure that can be compiled at 
>> all using MS compilers…
>>
>>  
>>
>> *From:* julia...@googlegroups.com [mailto:julia...@googlegroups.com] *On 
>> Behalf Of *Tobias Knopp
>> *Sent:* Tuesday, June 17, 2014 12:42 PM
>> *To:* julia...@googlegroups.com
>> *Subject:* Re: [julia-users] Benchmarking study: C++ < Fortran < Numba < 
>> Julia < Java < Matlab < the rest
>>
>>  
>>
>> There are some remaining issues but compilation with MSVC is almost 
>> possible. I did some initial work and Tony Kelman made lots of progress in 
>> https://github.com/JuliaLang/julia/pull/6230. But there have not been 
>> any speed comparisons as far as I know. Note that Julia uses JIT 
>> compilation and thus I would not expect the source compiler to have a huge 
>> impact.
>>
>>  
>>
>>
>> Am Dienstag, 17. Juni 2014 21:25:50 UTC+2 schrieb David Anthoff:
>>
>> Another interesting result from the paper is how much faster Visual C++ 
>> 2010 generated code is than gcc, on Windows. For their example, the gcc 
>> runtime is 2.29 times the runtime of the MS-compiled version. The difference 
>> might be even larger with Visual C++ 2013 because that is when MS added an 
>> auto-vectorizer that is on by default.
>>
>>  
>>
>> I vaguely remember a discussion about compiling julia itself with the MS 
>> compiler on Windows, is that working and is that making a performance 
>> difference?
>>
>>  
>>
>> *From:* julia...@googlegroups.com [mailto:julia...@googlegroups.com] *On 
>> Behalf Of *Peter Simon
>> *Sent:* Tuesday, June 17, 2014 12:08 PM
>> *To:* julia...@googlegroups.com
>> *Subject:* Re: [julia-users] Benchmarking study: C++ < Fortran < Numba < 
>> Julia < Java < Matlab < the rest
>>
>>  
>>
>> Sorry, Florian and David, for not seeing that you were way ahead of me.
>>
>>  
>>
>> On the subject of the log function:  I tried implementing mylog() as 
>> defined by Andreas on Julia running on CentOS and the result was a 
>> significant slowdown! (Yes, I defined the mylog function outside of main, 
>> at the module level).  Not sure if this is due to variation in the quality 
>> of the libm function on various systems or what.  If so, then it makes 
>> sense that Julia wants a uniformly accurate and fast implementation via 
>> openlibm.  But for fastest transcendental function performance, I assume 
>> that one must use the micro-coded versions built into the processor's 
>> FPU--Is that what the fast libm implementations do?  In that case, how 
>> could one hope to compete when using a C-coded version?
>>
>>  
>>
>> --Peter
>>
>>
>>
>> On Tuesday, June 17, 2014 10:57:47 AM UTC-7, David Anthoff wrote:
>>
>> I submitted three pull requests to the original repo that get rid of 
>> three different array allocations in loops and that make things a fair bit 
>> faster altogether:
>>
>>  
>>
>>
>> https://github.com/jesusfv/Comparison-Programming-Languages-Economics/pulls
>>
>>  
>>
>> I think it would also make sense to run these benchmarks on julia 0.3.0 
>> instead of 0.2.1, given that there have been a fair number of performance 
>> improvements.
>>
>>  
>>
>> *From:* julia...@googlegroups.com [mailto:julia...@googlegroups.com] *On 
>> Behalf Of *Florian Oswald
>> *Sent:* Tuesday, June 17, 2014 10:50 AM
>> *To:* julia...@googlegroups.com
>> *Subject:* Re: [julia-users] Benchmarking study: C++ < Fortran < Numba < 
>> Julia < Java < Matlab < the rest
>>
>>  
>>
>> thanks peter. I made that devectorizing change after Dahua suggested it. 
>> It made a massive difference!
>>
>> On Tu

[julia-users] Keeping cloned Packages up to date

2014-06-17 Thread Tobias Knopp
Is there a way to keep packages that have been cloned using Pkg.clone up to 
date?

I currently manually go to the package directory and do "git pull" but I 
thought there might be a command from within Julia.

Thanks,

Tobi


Re: [julia-users] Keeping cloned Packages up to date

2014-06-17 Thread Elliot Saba
I'm pretty sure `Pkg.update()` does this.
-E


On Tue, Jun 17, 2014 at 1:30 PM, Tobias Knopp 
wrote:

> Is there a way to keep packages that have been cloned using Pkg.clone up
> to date?
>
> I currently manually go to the package directory and do "git pull" but I
> thought there might be a command from within Julia.
>
> Thanks,
>
> Tobi
>


Re: [julia-users] Keeping cloned Packages up to date

2014-06-17 Thread Elliot Saba
At least, on 0.3.  Perhaps not on 0.2.1
-E


On Tue, Jun 17, 2014 at 1:55 PM, Elliot Saba  wrote:

> I'm pretty sure `Pkg.update()` does this.
> -E
>
>
> On Tue, Jun 17, 2014 at 1:30 PM, Tobias Knopp wrote:
>
>> Is there a way to keep packages that have been cloned using Pkg.clone up
>> to date?
>>
>> I currently manually go to the package directory and do "git pull" but I
>> thought there might be a command from within Julia.
>>
>> Thanks,
>>
>> Tobi
>>
>
>


Re: [julia-users] Keeping cloned Packages up to date

2014-06-17 Thread Tobias Knopp
OK, I thought this was not the case. But one issue might be that I had 
modified files in some packages, and then of course this cannot work.

Am Dienstag, 17. Juni 2014 22:56:29 UTC+2 schrieb Elliot Saba:
>
> At least, on 0.3.  Perhaps not on 0.2.1
> -E
>
>
> On Tue, Jun 17, 2014 at 1:55 PM, Elliot Saba wrote:
>
>> I'm pretty sure `Pkg.update()` does this.
>> -E
>>
>>
>> On Tue, Jun 17, 2014 at 1:30 PM, Tobias Knopp wrote:
>>
>>> Is there a way to keep packages that have been cloned using Pkg.clone up 
>>> to date?
>>>
>>> I currently manually go to the package directory and do "git pull" but I 
>>> thought there might be a command from within Julia.
>>>
>>> Thanks,
>>>
>>> Tobi
>>>
>>
>>
>

[julia-users] Notebooks for a Julia workshop

2014-06-17 Thread Douglas Bates
I am presenting a workshop on Julia programming for faculty and students in 
my department.  I assume those attending have a background in R so some of 
the discussion has a "things are done that way in R and this way in Julia" 
flavor.

Notebooks (well, one notebook as of today but more to come) are at

github.com/dmbates/JuliaWorkshop




Re: [julia-users] Re: juliabloggers.com is now live!

2014-06-17 Thread cnbiz850
I use g2reader (http://www.g2reader.com) and can't subscribe to this.  
Don't know why.  It complains about:


Entered url doesn't contain valid feed or doesn't link to feed. It is
also possible feed contains no items.

On 06/17/2014 08:38 PM, Randy Zwitch wrote:
My apologies, I think the link got mangled last time. Here it is 
again: http://www.juliabloggers.com/feed/ 



On Monday, June 16, 2014 9:18:45 PM UTC-4, K leo wrote:

Is there something wrong with the feed?

http://www.juliabloggers.com/feed/
juliabloggers.com

Entered url doesn't contain valid feed or doesn't link to feed. It is
also possible feed contains no items.





Re: [julia-users] Benchmarking study: C++ < Fortran < Numba < Julia < Java < Matlab < the rest

2014-06-17 Thread Jesus Villaverde
I ran the code on 0.3.0. It did not improve things (in fact, there was a 
3-5% deterioration)



On Tuesday, June 17, 2014 1:57:47 PM UTC-4, David Anthoff wrote:
>
> I submitted three pull requests to the original repo that get rid of three 
> different array allocations in loops and that make things a fair bit faster 
> altogether:
>
>  
>
> https://github.com/jesusfv/Comparison-Programming-Languages-Economics/pulls
>
>  
>
> I think it would also make sense to run these benchmarks on julia 0.3.0 
> instead of 0.2.1, given that there have been a fair number of performance 
> improvements.
>
>  
>
> *From:* julia...@googlegroups.com  [mailto:
> julia...@googlegroups.com ] *On Behalf Of *Florian Oswald
> *Sent:* Tuesday, June 17, 2014 10:50 AM
> *To:* julia...@googlegroups.com 
> *Subject:* Re: [julia-users] Benchmarking study: C++ < Fortran < Numba < 
> Julia < Java < Matlab < the rest
>
>  
>
> thanks peter. I made that devectorizing change after Dahua suggested it. 
> It made a massive difference!
>
> On Tuesday, 17 June 2014, Peter Simon wrote:
>
> You're right.  Replacing the NumericExtensions function calls with a small 
> loop
>
>  
>
> maxDifference  = 0.0
> for k = 1:length(mValueFunction)
> maxDifference = max(maxDifference, abs(mValueFunction[k]- 
> mValueFunctionNew[k]))
> end
>
>
> makes no significant difference in execution time or memory allocation and 
> eliminates the dependency.
>
>  
>
> --Peter
>
>
>
> On Tuesday, June 17, 2014 10:05:03 AM UTC-7, Andreas Noack Jensen wrote:
>
> ...but the Numba version doesn't use tricks like that. 
>
>  
>
> The uniform metric can also be calculated with a small loop. I think that 
> requiring dependencies is against the purpose of the exercise.
>
>  
>
> 2014-06-17 18:56 GMT+02:00 Peter Simon :
>
> As pointed out by Dahua, there is a lot of unnecessary memory allocation. 
>  This can be reduced significantly by replacing the lines
>
>  
>
> maxDifference  = maximum(abs(mValueFunctionNew-mValueFunction))
> mValueFunction= mValueFunctionNew
> mValueFunctionNew = zeros(nGridCapital,nGridProductivity)
>
>  
>
>  
>
> with
>
>  
>
> maxDifference  = maximum(abs!(subtract!(mValueFunction, 
> mValueFunctionNew)))
> (mValueFunction, mValueFunctionNew) = (mValueFunctionNew, 
> mValueFunction)
> fill!(mValueFunctionNew, 0.0)
>
>  
>
> abs! and subtract! require adding the line
>
>  
>
> using NumericExtensions
>
>  
>
> prior to the function line.  I think the OP used Julia 0.2; I don't 
> believe that NumericExtensions will work with that old version.  When I 
> combine these changes with adding 
>
>  
>
> @inbounds begin
> ...
> end
>
>  
>
> block around the "while" loop, I get about 25% reduction in execution 
> time, and reduction of memory allocation from roughly 700 MByte to 180 MByte
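Combined, a dependency-free version of that convergence step looks roughly like the sketch below (the random grids are stand-ins for the real value-function arrays; names follow the RBC code quoted above):

```julia
# Devectorized max-abs difference, then swap the two buffers and zero the
# stale one instead of allocating a fresh array each iteration.
function max_abs_diff(a, b)
    m = 0.0
    @inbounds for k in 1:length(a)
        d = abs(a[k] - b[k])
        d > m && (m = d)
    end
    m
end

mValueFunction    = rand(100, 5)   # stand-in for the real grid
mValueFunctionNew = rand(100, 5)   # stand-in for the updated grid

maxDifference = max_abs_diff(mValueFunction, mValueFunctionNew)

# swap buffers and reuse the old one as the next iteration's "new" array
mValueFunction, mValueFunctionNew = mValueFunctionNew, mValueFunction
fill!(mValueFunctionNew, 0.0)
```

This keeps the allocation-free behavior of the NumericExtensions variant while depending only on Base.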
>
>  
>
> --Peter
>
>
>
> On Tuesday, June 17, 2014 9:32:34 AM UTC-7, John Myles White wrote:
>
> Sounds like we need to rerun these benchmarks after the new GC branch gets 
> updated.
>
>  
>
>  -- John
>
>  
>
> On Jun 17, 2014, at 9:31 AM, Stefan Karpinski  
> wrote:
>
>  
>
> That definitely smells like a GC issue. Python doesn't have this 
> particular problem since it uses reference counting.
>
>  
>
> On Tue, Jun 17, 2014 at 12:21 PM, Cristóvão Duarte Sousa  
> wrote:
>
> I've just done measurements of the algorithm's inner-loop times on my machine by 
> changing the code as shown in this commit.
>
>  
>
> I've found out something... see for yourself:
>
>  
>
> using Winston
> numba_times = readdlm("numba_times.dat")[10:end];
> plot(numba_times)
>
>
> [plot of numba_times omitted]
>
> julia_times = readdlm("julia_times.dat")[10:end];
> plot(julia_times)
>
>  
>
>
> [plot of julia_times omitted]
>
> println((median(numba_times), mean(numba_times), var(numba_times)))
>
> (0.0028225183486938477,0.0028575707378805993,2.4830103817464292e-8)
>
>  
>
> println((median(julia_times), mean(julia_times), var(julia_times)))
>
> (0.00282404404,0.0034863882123824454,1.7058255003790299e-6)
>
>  
>
> So, while inner loop times have more or less the same median on both Julia 
> and Numba tests, the mean and variance are higher in Julia.
>
>  
>
> Could that be due to the garbage collector kicking in?
>
>
>
> On Monday, June 16, 2014 4:52:07 PM UTC+1, Florian Oswald wrote:
>
> Dear all,
>
>  
>
> I thought you might find this paper interesting: 
> http://economics.sas.upenn.edu/~jesusfv/comparison_languages.pdf
>
>  
>
> It takes a standard model from macroeconomics and computes its solution 
> with an identical algorithm in several languages. Julia is roughly 2.6 
> times slower than the best C++ executable. 

[julia-users] Re: Benchmarking study: C++ < Fortran < Numba < Julia < Java < Matlab < the rest

2014-06-17 Thread Alireza Nejati
> But for fastest transcendental function performance, I assume that one 
must use the micro-coded versions built into the processor's FPU--Is that 
what the fast libm implementations do?

Not at all. Libm's version of log() is about twice as fast as the CPU's own 
log function, at least on a modern x86_64 processor (really fast log 
implementations use optimized look-up tables). I had a look at your code 
and it seems that the 'consumption' variable is always in the very narrow 
range of 0.44950 to 0.56872. If you plot the log function in this tiny 
range, it is very flat and linear. I think that if you simply replaced it 
with a 2- or 4-part piecewise approximation, you could get significant 
speedup across the board, in julia, c++, and others, with only a very small 
approximation error.
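A minimal sketch of such a piecewise-linear approximation over the quoted consumption range (the 4-segment split and the helper name `fastlog_narrow` are illustrative, not tuned):

```julia
# Piecewise-linear approximation of log on the narrow interval
# [0.44950, 0.56872] cited above.  Over so flat a stretch of log,
# 4 segments keep the absolute error below ~1e-3.
const LO   = 0.44950
const HI   = 0.56872
const NSEG = 4
const XS = [LO + (HI - LO) * k / NSEG for k in 0:NSEG]  # breakpoints
const YS = map(log, XS)                                 # exact log at breakpoints

function fastlog_narrow(x::Float64)
    # locate the segment containing x, then interpolate linearly
    t = (x - LO) / (HI - LO) * NSEG
    i = clamp(floor(Int, t) + 1, 1, NSEG)
    YS[i] + (YS[i+1] - YS[i]) * (x - XS[i]) / (XS[i+1] - XS[i])
end
```

Since |d²/dx² log x| ≤ 1/LO² ≈ 5 on this interval and each segment is about 0.03 wide, the interpolation error is bounded by roughly h²·5/8 ≈ 5.5e-4.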

On Tuesday, June 17, 2014 3:52:07 AM UTC+12, Florian Oswald wrote:
>
> Dear all,
>
> I thought you might find this paper interesting: 
> http://economics.sas.upenn.edu/~jesusfv/comparison_languages.pdf
>
> It takes a standard model from macroeconomics and computes its solution 
> with an identical algorithm in several languages. Julia is roughly 2.6 
> times slower than the best C++ executable. I was a bit puzzled by the result, 
> since in the benchmarks on http://julialang.org/, the slowest test is 
> 1.66 times C. I realize that those benchmarks can't cover all possible 
> situations. That said, I couldn't really find anything unusual in the Julia 
> code, did some profiling and removed type inference, but still that's as 
> fast as I got it. That's not to say that I'm disappointed, I still think 
> this is great. Did I miss something obvious here or is there something 
> specific to this algorithm? 
>
> The codes are on github at 
>
> https://github.com/jesusfv/Comparison-Programming-Languages-Economics
>
>
>

Re: [julia-users] animation using Gtk+/Cairo

2014-06-17 Thread Abe Schneider
Okay, here's what works for me using your suggestion (except that for me [with a 
1-day-old Julia and package] it's 'draw', not 'redraw'):

function update(canvas, scene::Scene)
  # update scene
  # ...
  
  # redraw
  draw(canvas)
end

function init(canvas, scene::Scene)
  update_timer = Timer(timer -> update(canvas, scene))
  start_timer(update_timer, 1, 0.5)
  return update_timer
end

function draw_scene(canvas, scene::Scene)
  ctx = getgc(canvas)
  h = height(canvas)
  w = width(canvas)

  rectangle(ctx, 0, 0, w, h)
  set_source_rgb(ctx, 1, 1, 1)
  fill(ctx)

  # use scene to draw other elements
  # ...
end

scene = Scene(...)
update_timer = init(canvas, scene)
draw(canvas -> draw_scene(canvas, scene), canvas)

if !isinteractive()
  cond = Condition()
  signal_connect(win, :destroy) do widget
notify(cond)
  end

  wait(cond)
end

stop_timer(update_timer)


I wasn't sure whether the way I bound the scene to the redraw is the nicest 
approach. If the draw function took additional parameters, that seems 
like it might be the most straightforward. E.g.:

draw(canvas, scene) do widget
   # ...
end

# here 'myscene' would be passed as the second parameter to the other draw
draw(canvas, myscene)
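An alternative that needs no change to draw's signature is to carry the state in a closure; a sketch with a hypothetical `register_draw` standing in for Gtk's `draw(f, canvas)` registration:

```julia
# Stand-in for Gtk's draw(f, canvas): it just records the callback.
# The point is that a do-block closure captures `scene` from the
# enclosing scope, so no extra-parameter variant of draw is needed.
callbacks = Function[]
register_draw(f::Function, canvas) = push!(callbacks, f)

scene  = [1, 2, 3]   # hypothetical scene state
canvas = :canvas     # placeholder canvas handle

register_draw(canvas) do widget
    length(scene)    # the closure sees `scene` without it being passed in
end

result = callbacks[end](canvas)
```

With the real Gtk.jl API the same pattern is the `draw(canvas -> draw_scene(canvas, scene), canvas)` call above.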


A

On Tuesday, June 17, 2014 1:16:11 PM UTC-4, Jameson wrote:
>
> Yes. Although I think the draw...do function is actually redraw...do 
> (this is actually a shared interface with Tk.jl, although I recommend Gtk :)
>
> Sent from my phone. 
>
> On Tuesday, June 17, 2014, Abe Schneider wrote:
>
>> @Tim: Awesome, exactly what I was looking for. Thank you.
>>
>> @Jameson: Just to check, do you mean something like:
>>
>> function redraw_canvas(canvas)
>>   draw(canvas)
>> end
>>
>> draw(canvas) do widget
>>   # ...
>> end
>>
>> If so, I'll re-post my code with the update. It may be useful to someone 
>> else to see the entire code as an example.
>>
>> Thanks!
>> A
>>
>>
>> On Tuesday, June 17, 2014 10:44:16 AM UTC-4, Jameson wrote:
>>>
>>> This code is not valid, since getgc does not always have a valid drawing 
>>> context to return. Instead you need to provide Canvas with a callback 
>>> function via a call to redraw in which you do all the work, then just call 
>>> draw(canvas) in your timer callback to force an update to the view. 
>>> double-buffering is enabled by default.
>>>
>>> wait(Condition()) is the same wait(), and means sleep until this task is 
>>> signaled, and thereby prevents the program from exiting early
>>>
>>>
>>> On Tue, Jun 17, 2014 at 7:46 AM, Abe Schneider  
>>> wrote:
>>>
 Thank you everyone for the fast replies!

 After looking at ImageView and the sources, here's the solution I came 
 up with:

 w = Gtk.@Window() |>
 (body=Gtk.@Box(:v) |>
   (canvas=Gtk.@Canvas(600, 600))) |>
 showall

 function redraw_canvas(canvas)
   ctx = getgc(canvas)
   h = height(canvas)
   w = width(canvas)

   # draw background
   rectangle(ctx, 0, 0, w, h)
   set_source_rgb(ctx, 1, 1, 1)
   fill(ctx)

   # draw objects
   # ...

   # tell Gtk+ to redisplay
   draw(canvas)
 end

 function init(canvas, delay::Float64, interval::Float64)
   update_timer = Timer(timer -> redraw_canvas(canvas))
   start_timer(update_timer, delay, interval)
 end

 update_timer = init(canvas, 2, 1)
 if !isinteractive()
   wait(Condition())
 end

 stop_timer(update_timer)

 I haven't looked yet into what is required to do double-buffering (or 
 if it's enabled by default). I also copied the 'wait(Condition())' from 
 the 
 docs, though it's not clear to me what the condition is (if I close the 
 window, the program is still running -- I'm assuming that means I need to 
 connect the signal for window destruction to said condition).

 A


 On Monday, June 16, 2014 9:33:42 PM UTC-4, Jameson wrote:

> I would definately use Julia's timers. See `Gtk.jl/src/cairo.jl` for 
> an example interface to the Cairo backing to a Gtk window (used in 
> `Winston.jl/src/gtk.jl`). If you are using this wrapper, call `draw(w)` 
> to 
> force a redraw immediately, or `draw(w,false)` to queue a redraw request 
> for when Gtk is idle.
>
>
> On Mon, Jun 16, 2014 at 9:12 PM, Tim Holy  wrote:
>
>> ImageView's navigation.jl contains an example. The default branch is 
>> Tk
>> (because  as far as binary distribution goes, Tk is "solved" and Gtk 
>> isn't
>> yet), but it has a gtk branch you can look at.
>>
>> --Tim
>>
>> On Monday, June 16, 2014 04:01:46 PM Abe Schneider wrote:
>> > I was looking for a way to display a simulation in Julia. 
>> Originally I was
>> > going to just use PyPlot, but it occurred to me it would be better 
>> to just
>> > use Gtk+ + Cairo to do the drawing rather than something whose main 
>> purpose
>> > is drawing graphs.
>> >
>>>

[julia-users] Re: Benchmarking study: C++ < Fortran < Numba < Julia < Java < Matlab < the rest

2014-06-17 Thread Dahua Lin
Perhaps we should first profile the code and see which part constitutes 
the bottleneck?

Dahua
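A minimal way to do that is Julia's built-in sampling profiler. The sketch below uses current syntax (`using Profile`; in the 0.3-era Julia of this thread the profiler macros lived in Base), and `work` is a hypothetical stand-in for the benchmark's inner loop, not code from the repository:

```julia
# Sketch: use the sampling profiler to find hot spots.
# `work` is an illustrative stand-in for the benchmark's inner loop.
using Profile

function work()
    s = 0.0
    for i in 1:10^6
        s += log(0.45 + 0.01 * (i % 12))   # mimic many log calls in a loop
    end
    s
end

work()             # run once first so compilation time is not profiled
@profile work()    # collect samples while the function runs
Profile.print()    # print the call tree with sample counts
```

Lines with large sample counts in `Profile.print()` output are where the runtime is actually going.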


On Tuesday, June 17, 2014 5:23:24 PM UTC-5, Alireza Nejati wrote:
>
> > But for fastest transcendental function performance, I assume that one 
> must use the micro-coded versions built into the processor's FPU -- is that 
> what the fast libm implementations do?
>
> Not at all. Libm's version of log() is about twice as fast as the CPU's 
> own log function, at least on a modern x86_64 processor (really fast log 
> implementations use optimized look-up tables). I had a look at your code 
> and it seems that the 'consumption' variable is always in the very narrow 
> range of 0.44950 to 0.56872. If you plot the log function in this tiny 
> range, it is very flat and linear. I think that if you simply replaced it 
> with a 2- or 4-part piecewise approximation, you could get significant 
> speedup across the board, in julia, c++, and others, with only a very small 
> approximation error.
>
> On Tuesday, June 17, 2014 3:52:07 AM UTC+12, Florian Oswald wrote:
>>
>> Dear all,
>>
>> I thought you might find this paper interesting: 
>> http://economics.sas.upenn.edu/~jesusfv/comparison_languages.pdf
>>
>> It takes a standard model from macroeconomics and computes its solution 
>> with an identical algorithm in several languages. Julia is roughly 2.6 
>> times slower than the best C++ executable. I was a bit puzzled by the result, 
>> since in the benchmarks on http://julialang.org/, the slowest test is 
>> 1.66 times C. I realize that those benchmarks can't cover all possible 
>> situations. That said, I couldn't really find anything unusual in the Julia 
>> code, did some profiling and removed type inference, but still that's as 
>> fast as I got it. That's not to say that I'm disappointed, I still think 
>> this is great. Did I miss something obvious here or is there something 
>> specific to this algorithm? 
>>
>> The codes are on github at 
>>
>> https://github.com/jesusfv/Comparison-Programming-Languages-Economics
>>
>>
>>

[julia-users] Re: Benchmarking study: C++ < Fortran < Numba < Julia < Java < Matlab < the rest

2014-06-17 Thread Alireza Nejati
Dahua: On my setup, most of the time is spent in the log function.
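Since log dominates, the piecewise idea suggested earlier in the thread is worth sketching. The range endpoints below come from the 0.44950-0.56872 consumption range mentioned there; the 4-segment table and the name `fastlog` are my own illustrative choices:

```julia
# Sketch: 4-segment piecewise-linear approximation of log(x) on a narrow,
# known input range. Segment endpoints and their logs are precomputed once.
const LO   = 0.44950
const HI   = 0.56872
const NSEG = 4
const STEP = (HI - LO) / NSEG
const XS   = [LO + i * STEP for i in 0:NSEG]
const YS   = log.(XS)

function fastlog(x::Float64)
    i = min(NSEG, 1 + floor(Int, (x - LO) / STEP))   # segment index, 1..NSEG
    slope = (YS[i + 1] - YS[i]) / STEP
    YS[i] + slope * (x - XS[i])
end
```

Whether this beats an optimized libm log would need benchmarking; the win, if any, comes from replacing a transcendental call with a table lookup and one multiply-add, at the cost of a few 1e-4 of absolute error.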

On Tuesday, June 17, 2014 3:52:07 AM UTC+12, Florian Oswald wrote:
>
> Dear all,
>
> I thought you might find this paper interesting: 
> http://economics.sas.upenn.edu/~jesusfv/comparison_languages.pdf
>
> It takes a standard model from macroeconomics and computes its solution 
> with an identical algorithm in several languages. Julia is roughly 2.6 
> times slower than the best C++ executable. I was a bit puzzled by the result, 
> since in the benchmarks on http://julialang.org/, the slowest test is 
> 1.66 times C. I realize that those benchmarks can't cover all possible 
> situations. That said, I couldn't really find anything unusual in the Julia 
> code, did some profiling and removed type inference, but still that's as 
> fast as I got it. That's not to say that I'm disappointed, I still think 
> this is great. Did I miss something obvious here or is there something 
> specific to this algorithm? 
>
> The codes are on github at 
>
> https://github.com/jesusfv/Comparison-Programming-Languages-Economics
>
>
>

[julia-users] Array "chunks"

2014-06-17 Thread TR NS
This is a dumb question I am sure, but I found no obvious answer in the 
standard docs. 

How does one take chunks of an array? In my particular case I also need 
them to overlap. What I need:

list = [1,2,3,4,5]

chunks(list, 3, 1)

# -> [[1,2,3], [2,3,4], [3,4,5]]

Although I am sure it would return an iterator.

(Note, I am using the term "chunks" b/c that is the term Elixir uses for 
this).
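For what it's worth, an eager version is a one-line comprehension. This `chunks` is my own sketch, not a standard-library function, and it returns an array of slices rather than the iterator you expected:

```julia
# Sketch: windows of width n advancing by step s over a vector.
function chunks(list::AbstractVector, n::Integer, s::Integer)
    [list[i:i+n-1] for i in 1:s:length(list)-n+1]
end

chunks([1,2,3,4,5], 3, 1)   # -> [[1,2,3], [2,3,4], [3,4,5]]
```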



Re: [julia-users] Re: Project organization and CLI

2014-06-17 Thread TR NS


On Monday, June 16, 2014 4:44:11 PM UTC-4, Stefan Karpinski wrote:
>
> Generic functions are the reason this issue is less pressing in Julia. 
> Instead of Ngrams.report and Words.report or ngramsreport and wordsreport, 
> you can have report(x::Ngrams, ...) and report(x::Words, ...) – Ngrams, 
> Words and report can all live in the same namespace without any issues and 
> the two report methods are just different ways to report things.
>

My first reaction was "Oh yeah, cool!" But on later consideration I don't 
think this works. I take your point in general -- method dispatch is really 
an awesome feature of Julia that may indeed lessen the need for 
compartmentalization. But in my case NGrams and Words aren't types, they 
have no state. They are simply a related set of functions. Conceivably I 
could create two separate packages altogether, one for ngrams counts and 
the other for individual word counts. But I don't want them to be separate 
packages; obviously they have some things in common. I just want to keep 
them nicely separated within the same package.
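Stateless function sets can still be kept separated inside one package via submodules called through qualified names. A minimal sketch (the module and function names follow the discussion; the bodies are placeholders of my own):

```julia
module Corpus

module Ngrams
export report
# placeholder body: number of adjacent token pairs
report(tokens) = max(length(tokens) - 1, 0)
end

module Words
export report
# placeholder body: number of tokens
report(tokens) = length(tokens)
end

end # module Corpus

Corpus.Ngrams.report(["a", "b", "c"])   # -> 2
Corpus.Words.report(["a", "b", "c"])    # -> 3
```

Both `report` functions coexist without clashing because each lives in its own submodule namespace.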





[julia-users] Re: Array "chunks"

2014-06-17 Thread yfractal
how about this?

```
julia> a = [1,2,3,10,20,30,100,200,300]

julia> r = reshape(a,3,3)
3x3 Array{Int64,2}:
 1  10  100
 2  20  200
 3  30  300

julia> r[:,1]
3-element Array{Int64,1}:
 1
 2
 3
```


Re: [julia-users] Re: Array "chunks"

2014-06-17 Thread Kevin Squire
(It's not clear to me how your answer matches the OP's request...?)

The partition function in https://github.com/JuliaLang/Iterators.jl will do
this:


julia> using Iterators

julia> list = [1,2,3,4,5]
5-element Array{Int64,1}:
 1
 2
 3
 4
 5

julia> for p in partition(list, 3, 1)
  println(p)
   end
(1,2,3)
(2,3,4)
(3,4,5)

Normally, "collect" will collect the elements of an iterable into an array.
 In this case, though, there is a bug in Iterators.jl which causes problems
with this (the length of a partition is not defined properly.)  I'll fix
this momentarily.

Cheers,
   Kevin


On Tue, Jun 17, 2014 at 7:27 PM,  wrote:

> how about this?
>
> ```
> julia> a = [1,2,3,10,20,30,100,200,300]
>
> julia> r = reshape(a,3,3)
> 3x3 Array{Int64,2}:
>  1  10  100
>  2  20  200
>  3  30  300
>
> julia> r[:,1]
> 3-element Array{Int64,1}:
>  1
>  2
>  3
> ```
>


[julia-users] Re: Project organization and CLI

2014-06-17 Thread yfractal
Hope I understand your question :)

I think this may be related to the pwd.

I created a "bin" directory and touched a `corpus` file,

and in `corpus` I wrote:

```
println("the pwd is:")
println(pwd())

println("the code/clj.lj is:")
println(joinpath(pwd(),"../"))

```

Then I ran the file with ` julia bin/corpus ` and got

```
the pwd is:
/Users/y/tmp
the code/clj.lj is:
/Users/y/tmp/../
```


`include` uses `abspath`, and `abspath(a::String) = 
normpath(isabspath(a) ? a : joinpath(pwd(),a))`, so I guess it is related to 
the pwd.

Maybe putting the file in the project root would work.
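If that is the cause, one common workaround (my suggestion, not something from the thread) is to resolve the include path relative to the script file itself instead of the pwd, so `julia bin/corpus` works from any directory:

```julia
# Sketch: in bin/corpus, locate code/cli.jl relative to this script.
sp = Base.source_path()
script_dir = sp === nothing ? pwd() : dirname(sp)   # fall back at the REPL
cli_path = joinpath(script_dir, "..", "code", "cli.jl")
isfile(cli_path) && include(cli_path)   # guarded so the sketch runs anywhere
```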


On Tuesday, June 17, 2014 12:16:23 AM UTC+8, TR NS wrote:
>
> I am trying to organize my current project so that I have a few separate 
> files/modules to keep things tidy, including a separate cli module for 
> invoking the program. But I am having some trouble bringing it all 
> together. The main part of my project's layout is:
>
> bin/
>   corpus
> code/
>   cli.jl
>   ngrams.jl
>   ...
>
> Where `bin/corpus` contains:
>
> #!/usr/bin/env julia
> include("../code/cli.jl")
> Corpus.Cli.run(ARGS)
>
> And `code/cli.jl` starts out with:
>
> module Corpus
> module Cli
>
> include("./ngrams.jl")
>
> function run(args)
>   ...
>
> But when I run `bin/corpus`, the program just hangs, and it appears to be 
> doing so on the `include`.
>
> So how does one do this properly?
>
> I have read through the documentation on Modules, but it is not very clear 
> to me. In fact, to be honest, it seems overly complicated, with `include`, 
> `import`, `require`, `using`, etc. (Makes me long for the simplicity of 
> Lua's and Javascript/NPM's `require`.)
>
>
>
>
>
>
> println("Corpus")
>
> include("../code/cli.jl")
>
> Corpus.Cli.run(ARGS)
>
>
>

Re: [julia-users] Re: Array "chunks"

2014-06-17 Thread Kevin Squire
Actually, this was fixed in the latest version of the package, but it
wasn't tagged, so I did that.  The following works with the latest tagged
version:

julia> collect(partition(list, 3, 1))
3-element Array{(Int64,Int64,Int64),1}:
 (1,2,3)
 (2,3,4)
 (3,4,5)



On Tue, Jun 17, 2014 at 7:34 PM, Kevin Squire 
wrote:

> (It's not clear to me how your answer matches the OP's request...?)
>
> The partition function in https://github.com/JuliaLang/Iterators.jl will
> do this:
>
>
> julia> using Iterators
>
> julia> list = [1,2,3,4,5]
>  5-element Array{Int64,1}:
>  1
>  2
>  3
>  4
>  5
>
> julia> for p in partition(list, 3, 1)
>   println(p)
>end
> (1,2,3)
> (2,3,4)
> (3,4,5)
>
> Normally, "collect" will collect the elements of an iterable into an
> array.  In this case, though, there is a bug in Iterators.jl which causes
> problems with this (the length of a partition is not defined properly.)
>  I'll fix this momentarily.
>
> Cheers,
>Kevin
>
>
> On Tue, Jun 17, 2014 at 7:27 PM,  wrote:
>
>> how about this?
>>
>> ```
>> julia> a = [1,2,3,10,20,30,100,200,300]
>>
>> julia> r = reshape(a,3,3)
>> 3x3 Array{Int64,2}:
>>  1  10  100
>>  2  20  200
>>  3  30  300
>>
>> julia> r[:,1]
>> 3-element Array{Int64,1}:
>>  1
>>  2
>>  3
>> ```
>>
>
>


[julia-users] Speeding up parametric array comprehension

2014-06-17 Thread Andrew Simper
I have an expression that evaluates to an M-element Array{Float64,1}, and I 
have N points at which I would like to evaluate it, storing the results into 
an NxM Array{Float64,2}. I can get the output I want by passing the 
expression to be evaluated (expr), the parametric variable (exprt), and a 
vector of the time values to a function that calls @eval for each of the N 
values of time. This works fine but is a bit slow. Any ideas how to speed 
this up?

I'm not sure of the correct terminology but to me this is longhand for what 
I view as parametric array comprehension, where you specify P ranges and 
get a P+1 dim array back, instead of the regular non-parametric 
comprehension where you have to specify P ranges and get a P dim array back.

function lowpass(input::Float64, lp::Float64, g::Float64)
    hp::Float64 = input - lp
    lp += g * hp
    [lp, hp]
end

function processarrayexpr(expr, exprt, time::Vector{Float64})
    len = length(time)
    ret = @eval begin
        $exprt = $time[1]
        $expr
    end
    output = Array(Float64, len, length(ret))
    output[1,:] = ret
    for i in 2:len
        output[i,:] = @eval begin
            $exprt = $time[$i]
            $expr
        end
    end
    output
end
s = 0.0;
data = processarrayexpr(:(begin input=sin(100.0*t); [t, input, 
lowpass(input, s, 0.5)] end), :(t), linspace(0.0,2.0pi,2*44100))

88200x4 Array{Float64,2}:
 0.0          0.0          0.0          0.0
 7.12387e-5   0.00712381   0.00356191   0.00712381
 0.000142477  0.0142473    0.00712363   0.0142473
 0.000213716  0.02137      0.010685     0.02137
 0.000284955  0.0284916    0.0142458    0.0284916
 0.000356194  0.0356118    0.0178059    0.0356118
 0.000427432  0.0427302    0.0213651    0.0427302
 0.000498671  0.0498465    0.0249232    0.0498465
 0.00056991   0.0569601    0.0284801    0.0569601
 0.000641149  0.0640709    0.0320355    0.0640709
 0.000712387  0.0711785    0.0355892    0.0711785
 0.000783626  0.0782824    0.0391412    0.0782824
 0.000854865  0.0853824    0.0426912    0.0853824
 ⋮
 6.2824      -0.0782824   -0.0391412   -0.0782824
 6.28247     -0.0711785   -0.0355892   -0.0711785
 6.28254     -0.0640709   -0.0320355   -0.0640709
 6.28262     -0.0569601   -0.0284801   -0.0569601
 6.28269     -0.0498465   -0.0249232   -0.0498465
 6.28276     -0.0427302   -0.0213651   -0.0427302
 6.28283     -0.0356118   -0.0178059   -0.0356118
 6.2829      -0.0284916   -0.0142458   -0.0284916
 6.28297     -0.02137     -0.010685    -0.02137
 6.28304     -0.0142473   -0.00712363  -0.0142473
 6.28311     -0.00712381  -0.00356191  -0.00712381
 0.0          0.0          0.0          0.0



