There is also the GC improvement patch waiting for the 0.3 release, which
should help GC performance. With better escape analysis, it should be
possible to reuse the garbage from vectorized expressions in a loop in the
next iteration, significantly reducing GC load.
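A hand-written version of that reuse is already possible: preallocate the
result buffer once and overwrite it on every iteration, instead of letting
each vectorized expression allocate a fresh array. A minimal sketch (note:
the in-place fused broadcast `.=` syntax shown here is from Julia versions
later than the 0.3 discussed in this thread):

```julia
# Allocating version: `a .* b .+ c` creates a new temporary array on
# every iteration, so the GC has to collect one garbage array per pass.
function accum_alloc(a, b, c, n)
    s = 0.0
    for _ in 1:n
        tmp = a .* b .+ c      # fresh allocation each time
        s += sum(tmp)
    end
    return s
end

# Reusing version: one buffer, overwritten in place each iteration --
# this is the pattern the escape-analysis work would automate.
function accum_reuse(a, b, c, n)
    buf = similar(a)
    s = 0.0
    for _ in 1:n
        buf .= a .* b .+ c     # fused broadcast writes into `buf`
        s += sum(buf)
    end
    return s
end
```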
Thanks a lot for all your answers! Now I need to take a break to learn all
this cool stuff and prepare for such a bright future :)
On Mon, Jul 21, 2014 at 11:03 PM, Stefan Karpinski wrote:
Automatic, general loop fusion is something that we want to make possible,
and Jeff and I have been discussing it quite a bit lately. There are a few
ideas that seem promising, but they won't happen immediately. The
combination of doing better escape analysis and loop fusion should help
with these problems.
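Until the compiler fuses loops automatically, the transformation can be done
by hand: collapse a chain of vectorized operations into one explicit loop,
so only a single pass is made and only a single output array is allocated.
A minimal sketch:

```julia
# Vectorized form: on Julia 0.3, `sqrt(x) + y / 2` allocated one
# temporary array per elementwise operation.
unfused(x, y) = sqrt.(x) .+ y ./ 2

# Manually fused form: one loop, one output array, no temporaries.
function fused(x, y)
    out = similar(x)
    for i in eachindex(x)
        out[i] = sqrt(x[i]) + y[i] / 2
    end
    return out
end
```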
InplaceOps.jl is another package that can help: it replaces some matrix
operations with their mutating, BLAS-based equivalents.
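The underlying idea is to call the mutating BLAS wrappers directly, writing
the result into a preallocated matrix instead of allocating a new one per
multiplication. A minimal sketch using the standard-library `gemm!` (the
`using LinearAlgebra` line is for current Julia; on 0.3 the BLAS module was
reachable from Base):

```julia
using LinearAlgebra   # provides BLAS.gemm! on current Julia

A = [1.0 2.0; 3.0 4.0]
B = [5.0 6.0; 7.0 8.0]
C = zeros(2, 2)       # preallocated output, reusable across calls

# Computes C = 1.0*A*B + 0.0*C in place: no new array is allocated.
BLAS.gemm!('N', 'N', 1.0, A, B, 0.0, C)
```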
On 21 July 2014 10:10, Tim Holy wrote:
codegen is a big one, as are inference.jl, gf.c, and cgutils.cpp. But there
are optimizations sprinkled throughout (e.g., ccall.cpp).
You might be interested in this:
https://github.com/JuliaLang/julia/issues/3440
Most of the optimizations so far are low level; most of the higher-level
stuff …
julia-syntax.scm (code lowering to SSA form) and type inference in Base
(type propagation, data-flow analysis, inlining) are other places where
Julia performs compiler optimizations.
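These stages can be inspected interactively: the reflection functions show
what lowering and type inference produced for a given call. A minimal
sketch (the printed IR varies by Julia version, so no output is shown):

```julia
f(x) = 2x + 1

# Result of lowering (julia-syntax.scm): no type information yet.
lowered = code_lowered(f, (Int,))

# Result of type inference: types propagated, small calls inlined.
typed = code_typed(f, (Int,))
```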
Could you please point me to where these optimizations take place? I see
some other transformations (like escape analysis, for example) happening in
codegen; are there any other places I should look at?
On Mon, Jul 21, 2014 at 2:43 PM, Tim Holy wrote:
On Monday, July 21, 2014 02:33:26 PM Andrei wrote:
> I see one disadvantage of using these tools, however: they are much harder
> to read. Are there any plans for automatic code optimization at the
> compiler level?
There are already many optimizations in place. But there's always more you
could do.
Great write-up! After some experiments I was able to reduce GC time from
65% to only 15%, and I see opportunities to do even better. The most
important things for me were:
1. Some BLAS functions (especially "gemm!", which is pretty flexible).
2. Manual devectorization (@devec didn't work for my case).
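Those percentages come straight from Julia's own timing machinery, so the
effect of each change can be measured per call. A minimal sketch using
`@timed` (the named-tuple fields shown are from current Julia, not 0.3, and
the actual numbers are machine-dependent, so none are shown):

```julia
# Deliberately allocation-heavy work: two temporaries per pass.
function work(n)
    s = 0.0
    for _ in 1:n
        s += sum(rand(1000) .+ 1.0)
    end
    return s
end

stats = @timed work(100)
gc_fraction = stats.gctime / stats.time   # share of runtime spent in GC
```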
Dahua Lin's post at http://julialang.org/blog/2013/09/fast-numeric/
might be helpful.
On Sunday, July 20, 2014 11:41:19 AM UTC-4, Andrei Zh wrote:
>
> Recently I found that my application spends ~65% of its time in the
> garbage collector. I'm looking for ways to reduce the amount of memory
> produced by …