Make sure you time it twice – the faster version may generate more code.
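
For example (a sketch using the test functions from the message below; the
first call triggers compilation, so the second @time measures only the run):

testWithFunCopy(); @time testWithFunCopy()
testNoFunCopy();   @time testNoFunCopy()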

On Fri, Jan 22, 2016 at 1:21 PM, Bryan Rivera <futurehori...@gmail.com>
wrote:

> dude..
>
> dictZ = Dict{Int, Int}()
>
> for z = 1:1000
>     dictZ[z] = z
> end
>
> # assuming the single @anon object from Tim's suggestion has been defined:
> z = 1
> function2 = @anon c -> c + z
>
> function testNoFunCopy()
>   for z = 1:1000
>       function2.z = dictZ[z]
>       # do whatever with function2, including making a copy
>   end
> end
>
> @code_llvm testNoFunCopy()
> @code_native testNoFunCopy()
>
> @time testNoFunCopy()
>
> # Test to see if multiple functions are created.  They are.
> # We would only need to create a single function if we used a Julia
> # anonymous function, but it's time-inefficient.
>
> dict = Dict{Int, Any}()
>
> for z = 1:1000
>     function2 = @anon c -> (c + z)
>     dict[z] =  function2
> end
>
> a = 1
> b = 2
>
> function testWithFunCopy()
>   for z = 1:1000
>     function1(a,b, dict[z])
>   end
> end
>
>
> @code_llvm testWithFunCopy()
> @code_native testWithFunCopy()
>
> @time testWithFunCopy()
>
>
> For 1000 elements:
>
> 0.00019s vs 0.035s respectively
>
> Thanks!
>
> Is the reason the faster code has more allocations because it is
> inserting vars into the single function?  (As opposed to the slower
> code already having its vars filled in.)
>
>
> On Friday, January 22, 2016 at 12:23:59 PM UTC-5, Tim Holy wrote:
>>
>> Just use
>>
>> z = 1
>> function2 = @anon c -> c + z
>> for z = 1:100
>>     function2.z = z
>>     # do whatever with function2, including making a copy
>> end
>>
>> --Tim
>>
>> On Friday, January 22, 2016 08:55:25 AM Cedric St-Jean wrote:
>> > (non-mutating) Closures and FastAnonymous work essentially the same way.
>> > They store the data that is closed over (more or less) and a function
>> > pointer. The thing is that there's only one data structure in Julia for all
>> > regular anonymous functions, whereas FastAnonymous creates one per @anon
>> > site. Because the FastAnonymous-created datatype is specific to that
>> > function definition, the standard Julia machinery takes over and produces
>> > efficient code. It's just as good as if the function had been defined
>> > normally with `function foo(...) ... end`
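>> >
>> > Roughly speaking, @anon generates something like this (just a hand-written
>> > sketch; the type name is made up for illustration):
>> >
>> > type AnonAddZ        # one such type per @anon site
>> >     z::Int           # the data that was closed over
>> > end
>> > Base.call(f::AnonAddZ, c) = c + f.z   # the body of c -> (c + z)
>> >
>> > function2 = AnonAddZ(10)
>> > function2(5)         # 15; the compiler specializes on the concrete type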
>> >
>> >
>> > for z = 1:100
>> >     function2 = @anon c -> (c + z)
>> >
>> >     dict[z] =  function2
>> > end
>> >
>> >
>> > So we end up creating a separate function for each z value.
>> >
>> >
>> > In this code, whether you use @anon or not, Julia will create 100 object
>> > instances to store the z values.
>> >
>> > The speed difference between the two will soon be gone.
>> > <https://github.com/JuliaLang/julia/pull/13412>
>> >
>> > Cédric
>> >
>> > On Friday, January 22, 2016 at 11:31:36 AM UTC-5, Bryan Rivera wrote:
>> > > I have to do some investigating here.  I thought we could do something
>> > > like that but wasn't quite sure how it would look.
>> > >
>> > > Check this out:
>> > >
>> > > This code using FastAnonymous optimizes to the very same code below it
>> > > where functions have been manually injected:
>> > >
>> > > using FastAnonymous
>> > >
>> > >
>> > > function function1(a, b, function2)
>> > >
>> > >   if(a > b)
>> > >
>> > >     c = a + b
>> > >     return function2(c)
>> > >
>> > >   else
>> > >
>> > >     # do anything
>> > >     # but return nothing
>> > >
>> > >   end
>> > >
>> > > end
>> > >
>> > >
>> > > z = 10
>> > > function2 = @anon c -> (c + z)
>> > >
>> > >
>> > > a = 1
>> > > b = 2
>> > > @code_llvm function1(a, b, function2)
>> > > @code_native function1(a, b, function2)
>> > >
>> > > Manually injected equivalent:
>> > >
>> > > function function1(a, b, z)
>> > >
>> > >   if(a > b)
>> > >
>> > >     c = a + b
>> > >     return function2(c, z)
>> > >
>> > >   else
>> > >
>> > >     # do anything
>> > >     # but return nothing
>> > >
>> > >   end
>> > >
>> > > end
>> > >
>> > >
>> > > function function2(c, z)
>> > >
>> > >   return c + z
>> > >
>> > > end
>> > >
>> > >
>> > > a = 1
>> > > b = 2
>> > > z = 10
>> > >
>> > >
>> > > @code_llvm function1(a, b, z)
>> > >
>> > > @code_native function1(a, b, z)
>> > >
>> > > However, this is a bit too simplistic.  My program actually does this:
>> > >
>> > > # Test to see if multiple functions are created.  They are.
>> > > # We would only need to create a single function if we used a Julia
>> > > # anonymous function, but it's time-inefficient.
>> > >
>> > > dict = Dict{Int, Any}()
>> > > for z = 1:100
>> > >
>> > >     function2 = @anon c -> (c + z)
>> > >
>> > >     dict[z] =  function2
>> > >
>> > > end
>> > >
>> > >
>> > > a = 1
>> > > b = 2
>> > >
>> > > function test()
>> > >
>> > >   function1(a,b, dict[100])
>> > >   function1(a,b, dict[50])
>> > >
>> > > end
>> > >
>> > > @code_llvm test()
>> > > @code_native test()
>> > >
>> > >
>> > >
>> > > So we end up creating a separate function for each z value.  We could use
>> > > Julia's anonymous functions, which would create only a single function;
>> > > however, these lambdas are less performant than FastAnonymous.
>> > >
>> > > So it's a space vs. time tradeoff: I want the speed of FastAnonymous
>> > > without the spatial overhead of storing multiple functions.
>> > >
>> > > Can we be greedy?  :)
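>> > >
>> > > (One possible middle ground, sketched here for reference and essentially
>> > > what the reply further up suggests: keep a single mutable @anon object and
>> > > reassign its captured field instead of building one object per z value.)
>> > >
>> > > z = 1
>> > > function2 = @anon c -> c + z     # one object, one generated type
>> > >
>> > > for z = 1:100
>> > >     function2.z = z              # reuse it by updating the captured value
>> > >     function1(a, b, function2)
>> > > end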
>> > >
>> > > On Thursday, January 21, 2016 at 9:56:51 PM UTC-5, Cedric St-Jean wrote:
>> > >> Something like this?
>> > >>
>> > >> function function1(a, b, f) # Variable needed in callback fun injected.
>> > >>
>> > >>     if(a > b)
>> > >>
>> > >>       c = a + b
>> > >>       res = f(c) # Callback function has been injected.
>> > >>       return res + 1
>> > >>
>> > >>     else
>> > >>
>> > >>       # do anything
>> > >>       # but return nothing
>> > >>
>> > >>     end
>> > >>
>> > >> end
>> > >>
>> > >> type SomeCallBack
>> > >>
>> > >>     z::Int
>> > >>
>> > >> end
>> > >> Base.call(callback::SomeCallBack, c) = c + callback.z
>> > >>
>> > >> function1(2, 1, SomeCallBack(10))
>> > >>
>> > >> Because of JIT, this is 100% equivalent to your "callback function has
>> > >> been injected" example, performance-wise. My feeling is that .call
>> > >> overloading is not to be abused in Julia, so I would favor using a
>> > >> regular function call with a descriptive name instead of call
>> > >> overloading, but the same performance guarantees apply. Does that answer
>> > >> your question?
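>> > >>
>> > >> For instance, the regular-function version might look like this (a
>> > >> sketch; `apply_callback` is just a made-up descriptive name):
>> > >>
>> > >> type SomeCallBack
>> > >>     z::Int
>> > >> end
>> > >> apply_callback(callback::SomeCallBack, c) = c + callback.z
>> > >>
>> > >> # function1 would then call apply_callback(f, c) instead of f(c).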
>> > >>
>> > >> On Thursday, January 21, 2016 at 9:02:50 PM UTC-5, Bryan Rivera wrote:
>> > >>> I think what I wrote above might be too complicated, as it is an
>> > >>> attempt to solve this problem.
>> > >>>
>> > >>> In essence this is what I want:
>> > >>>
>> > >>>
>> > >>> function function1(a, b, onGreaterThanCallback)
>> > >>>
>> > >>>   if(a > b)
>> > >>>
>> > >>>     c = a + b
>> > >>>     res = onGreaterThanCallback(c)
>> > >>>     return res + 1
>> > >>>
>> > >>>   else
>> > >>>
>> > >>>     # do anything
>> > >>>     # but return nothing
>> > >>>
>> > >>>   end
>> > >>>
>> > >>> end
>> > >>>
>> > >>>
>> > >>> global onGreaterThanCallback = (c) -> c + z
>> > >>>
>> > >>> function1(a, b, onGreaterThanCallback)
>> > >>>
>> > >>>
>> > >>> Problems:
>> > >>>
>> > >>> The global variable.
>> > >>>
>> > >>> The anonymous function, which has a performance impact (vs. other
>> > >>> approaches).  We could use Tim Holy's @anon, but then the value of `z`
>> > >>> is fixed at function definition, which we don't always want.
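>> > >>>
>> > >>> (For example, a quick sketch of that fixed-value behavior: the captured
>> > >>> value is stored when the @anon object is constructed, so rebinding the
>> > >>> global afterwards does not change it.)
>> > >>>
>> > >>> z = 10
>> > >>> f = @anon c -> c + z
>> > >>> z = 20
>> > >>> f(1)    # still 11, unless f.z is reassigned explicitly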
>> > >>>
>> > >>> I think that the ideal optimization would look like this:
>> > >>> function function1(a, b, z) # Variable needed in callback fun injected.
>> > >>>
>> > >>>   if(a > b)
>> > >>>
>> > >>>     c = a + b
>> > >>>     res = c + z # Callback function has been injected.
>> > >>>     return res + 1
>> > >>>
>> > >>>   else
>> > >>>
>> > >>>     # do anything
>> > >>>     # but return nothing
>> > >>>
>> > >>>   end
>> > >>>
>> > >>> end
>> > >>>
>> > >>>
>> > >>> function1(a, b, z)
>> > >>>
>> > >>> In OO languages we would be using an abstract class or its equivalent.
>> > >>>
>> > >>> But I've thought about it, and read the discussions on interfaces, and
>> > >>> don't see those solutions optimizing the code out like I did above.
>> > >>>
>> > >>> Any ideas?
>>
>>
