If you get the new ProfileView it should work. A new version was recently
tagged.
On Sunday, June 19, 2016 at 6:59:11 PM UTC+2, Marius Millea wrote:
Ah, that makes sense. So I tried with the latest 0.5 nightly and I go from
~3ms to ~1ms, a nice improvement! (Different from what Andrew reported
above, so perhaps something changed over the last few nights, though.)
Unfortunately ProfileView is giving me an error on 0.5, but from printing
the profile
As Eric pointed out, with Julia 0.4.x functions passed as arguments are not
optimized because their type is difficult to infer. That's why the profiler shows
jl_generic_function as the bottleneck. Try it with 0.5 and things could get
dramatically faster.
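A minimal sketch of the pattern being discussed (the toy `g` is mine, not from the thread; on Julia 0.4/0.5 `quadgk` lived in Base, while on current Julia it comes from the QuadGK.jl package):

```julia
using QuadGK  # on 0.4/0.5 quadgk was in Base; now it lives in QuadGK.jl

g(y) = 1 + y^2

# Passing an anonymous function as an argument: the call 0.4's compiler
# could not specialize, so it dispatched through jl_apply_generic.
val, err = quadgk(y -> 1/g(y), 0, 1)
# ∫₀¹ dy/(1+y²) = atan(1) = π/4
```

On 0.5+ the anonymous function gets its own concrete type, so `quadgk` specializes on it and the generic-dispatch overhead disappears.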
Actually, I suppose normalizing by calls to the integrand does answer your point
about implementations, not just about the algorithm. It's true: when I look at the
ProfileView (attached), it does seem like most of the time is actually spent
inside quadgk. In fact, most of it is inside jl_apply_generic
They *are* different algorithms, but when I was comparing speeds with the
other codes, I compared them in terms of time per call of the
inner integrand function. So basically I'm testing the speed of the
integrand functions themselves, as well as the speed of the integration
library.
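One hypothetical way to do that per-call normalization (the counter and names are my own sketch, not the thread's code) is to count integrand evaluations with a `Ref`:

```julia
using QuadGK  # quadgk was in Base on 0.4/0.5; now in QuadGK.jl

ncalls = Ref(0)
integrand(y) = (ncalls[] += 1; 1/(1 + y^2))

val, err = quadgk(integrand, 0, 1)   # warm-up run, also compiles
n0 = ncalls[]
t = @elapsed quadgk(integrand, 0, 1)
time_per_call = t / (ncalls[] - n0)  # seconds per integrand evaluation
```

Dividing wall time by evaluation count factors out how many points each library's quadrature rule happens to sample, which is what makes cross-library timings comparable.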
What integration library are you using with Cython/Fortran? Is it using the
same algorithm as quadgk? Your code seems so simple I imagine this is just
comparing the quadrature implementations :)
On Saturday, June 18, 2016 at 5:53:57 AM UTC-7, Marius Millea wrote:
>
> Hi all, I'm sort of just
I don't think the anonymous y -> 1/g(y) and the nested function invg(y) =
1/g(y) are any different in terms of performance. They both form closures
over local variables, and they should both be faster under 0.5.
I did run your code under 0.5dev and it was slower, but I think they're
still
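The two closure forms being compared can be sketched like this (a toy `g` and `outer` of my own choosing; both forms capture the local `a` and behave identically):

```julia
function outer(a)
    g(y) = a + y^2          # captures the local variable a
    f_anon = y -> 1/g(y)    # anonymous closure
    invg(y) = 1/g(y)        # named nested function; also a closure
    f_anon(2.0), invg(2.0)
end
```

Either form lowers to a callable struct holding the captured variables, which is why their run times come out the same.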
I'd be curious to see if any magic happens if you try to run it with v0.5 :)
Although not anonymous, you're still passing functions as arguments and I
think that might be faster in v0.5, but I'm not sure.
Ahh sorry, forget the 2x slower thing, I had accidentally changed something
else. Both the anonymous y->1/g(y) and invg(y) give essentially the exact
same run time.
There are a number of 1's and 0's, but AFAICT they shouldn't cause any type
instabilities, if the input variable y or x is a
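The point about integer literals can be illustrated like this (a hedged sketch with a made-up `f`, not the thread's actual code): with a Float64 argument, literals like 1 and 0 simply promote, so inference still produces a concrete return type.

```julia
f(x) = 1 - 0*x + 1/(1 + x^2)   # literal Ints promote against x::Float64

# Inference gives a single concrete return type for Float64 input:
Base.return_types(f, (Float64,))
```

So integer constants mixed into floating-point arithmetic are not by themselves a source of type instability; the trouble starts only when a variable can hold values of more than one type.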
Thanks, yeah, I had read that too and at some point checked whether it mattered;
it didn't seem to, which wasn't entirely surprising since it's on the
outer loop.
But I just checked again given your comment and on Julia 0.4.5 it seems to
actually be 2x slower if I switch it to this:
function f(x)
Which version of Julia are you using? One thing that stands out is the
anonymous function y->1/g(y) being passed as an argument to quadgk. I'm not
an expert, but I've heard this is slow in v0.4 and below, and should be
fast in v0.5. Just a thought.
On Saturday, June 18, 2016 at 8:53:57 PM