Use the `@fastmath` macro. It enables fast (non-IEEE-strict) floating-point
optimizations in the expression (or function body) to which you apply it;
the transformation happens at compile time, not dynamically at run time.
Compare `@inbounds` and `@simd`.
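A minimal sketch of the idea (function names `f`/`g` are just for illustration): `@fastmath` relaxes IEEE semantics, so the compiler is allowed to reassociate and cancel terms such as `+ 2 - 2`, whereas the plain version must evaluate them in order.

```julia
f(x) = x + 2 - 2             # strict IEEE: the tiny x is absorbed by x + 2
g(x) = @fastmath x + 2 - 2   # compiler MAY fold this to just x

f(1e-18)   # 0.0
g(1e-18)   # may return 1.0e-18 if the cancellation is performed
```

Whether `g` actually returns `1.0e-18` depends on what LLVM does with the fast-math flags, so don't rely on it for correctness, only for speed.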

-erik

On Sun, Dec 6, 2015 at 5:23 PM, Meik Hellmund <meik.hellm...@gmail.com>
wrote:

> Hi,
>
> It seems that julia does not "optimize away" constant expressions in
> functions:
>
> julia> f(x)=x+2-2
> f (generic function with 1 method)
>
> julia> f(1.e-18)
> 0.0
>
> This is of course correct in Float64 arithmetic.
> But I wonder: In languages like Fortran and C the result of such code
>  depends on the "optimization flags" used when compiling.
> With optimization, the compiler would reduce the function to f(x)=x.
> Is there something comparable in Julia?
>
> Another test. Compare
>     f(x)=x+sin(.34567)
>
> to
>
>     const sn=sin(.34567)
>     g(x)=x+sn
>
>
> julia> y=0; @time(for i in 1:10^9; y=f(y); end)
>  22.489526 seconds (2.00 G allocations: 29.802 GB, 3.50% gc time)
>
> julia> y=0; @time(for i in 1:10^9; y=g(y); end)
>  16.268512 seconds (1000.00 M allocations: 14.901 GB, 2.61% gc time)
>
>
> It looks like Julia does not optimize even the simplest constant
> expressions so that they are evaluated only once.
> And why, by all means, does it allocate so much memory for that?
> Is there something I am overlooking?
>
>  Best wishes,
> Meik
>
>


-- 
Erik Schnetter <schnet...@gmail.com>
http://www.perimeterinstitute.ca/personal/eschnetter/
