Re: [julia-users] Re: eval in current scope

2016-09-30 Thread Marius Millea


> Would eval'ing the type inside the macro work? This shows [:x, :y]
>
>
This only works if A and type_fields are defined in the same module though. 
Although to be honest it surprised me a bit that it works at all, I guess 
the type definitions are evaluated prior to macro expansions? 


A macro which defines a type-specific version @self_MyType of your @self 
> macro at the definition of the type: 


Yeah, the solutions both fcard and I coded up originally involved having to 
call a macro on the type definition, which is precisely what I'm trying to 
get rid of right now. The reason for not using @unpack is just that it's 
more verbose than this solution (at the price of the type-redefinition 
thing, but for me that's a fine tradeoff). I *really* like getting to write 
super-concise functions which read just like the math they represent, 
nothing extra distracting, e.g. from my actual code:

"""Hubble constant at redshift z"""
@self Params Hubble(z) = Hfac*sqrt(ρx_over_ωx*((ωc+ωb)*(1+z)^3 + ωk*(1+z)^2 
+ ωΛ) + ργ(z) + ρν(z))


"""Optical depth between two redshifts given a free electron fraction 
history Xe"""
@self Params function τ(Xe::Function, z1, z2)
σT*(ωb*ρx_over_ωx)/mH*(1-Yp) * quad(z->Xe(z)/Hubble(z)*(1+z)^2, z1, z2)
end






 



Re: [julia-users] Re: eval in current scope

2016-09-29 Thread Marius Millea
I think there's at least one scenario where eval-in-a-macro is not a 
mistake, namely when you want to generate some code that depends on 1) some 
passed-in expression and 2) something which can only be known at runtime. 
Here's my example:

The macro (@self) which I'm writing takes a type name and a function 
definition, and gives the function a "self" argument of that type and 
rewrites all occurrences of the type's fields, X, to self.X. Effectively it 
takes this:

type MyType
x
end

@self MyType function inc()
x += 1
end

and spits out:

function inc(self::MyType)
self.x += 1
end

(if this sounds familiar, it's because I've discussed it here before, which 
spun off this , which I'm currently working on tweaking)


To do this my code needs to modify the function expression, but this 
modification depends on fieldnames(MyType), which can *only* be known at 
runtime. Hence what I'm doing is, 

macro self(typ,func)

function modify_func(fieldnames)
    # in here I have access to the `func` expression *and* to
    # `fieldnames(typ)` as evaluated at runtime;
    # return the modified func expression
end

quote
    $(esc(:eval))($modify_func(fieldnames($(esc(typ)))))
end
 
end

I don't see a cleaner way of doing this, but I'm happy to take suggestions. 

(Btw: my original question was w.r.t. that last "eval", which makes it so 
that currently this doesn't work on function closures. I'm still processing 
the several suggestions in this context...)


On Tuesday, September 27, 2016 at 5:12:44 PM UTC+2, Steven G. Johnson wrote:
>
> On Tuesday, September 27, 2016 at 10:27:59 AM UTC-4, Stefan Karpinski 
> wrote:
>>
>> But note that if you want some piece of f to be "pasted" from the user 
>> and have access to certain parts of the local state of f, it's probably a 
>> much better design to let the user pass in a function which f calls, 
>> passing the function the state that it should have access to:
>>
>
> Right; using "eval" in a function is almost always a mistake, an 
> indication that you should really be using a higher-order function. 
> 
>


[julia-users] Re: eval in current scope

2016-09-27 Thread Marius Millea
Aha, interesting! I think that might work, let me see if it actually works 
in my real case... FYI, in Julia it might look like this:

julia> function f(x)
   eval(:(x->x+1))(x)
   end
f (generic function with 1 method)

julia> f(3)
4
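The reason this works, as I understand it: eval always runs in global scope, but wrapping the body in an anonymous function turns the locals the expression needs into explicit arguments, so their values are passed in rather than looked up. It generalizes to several locals by adding parameters (a sketch of 0.5-era behavior; later Julia versions restrict calling freshly eval'd functions):

```julia
function g(x, y)
    # both locals become parameters of the eval'd anonymous function,
    # so the eval'd code never needs to see g's local scope
    eval(:((x, y) -> x + y))(x, y)
end
```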

Coincidentally as I've been digging I think that's the same solution 
suggested 
here https://github.com/JuliaLang/julia/issues/2386#issuecomment-13966397

Marius


On Tuesday, September 27, 2016 at 2:36:48 PM UTC+2, Jussi Piitulainen wrote:
>
> You might be able to wrap your expression so as to create a function 
> instead, and call the function with the values of the variables that the 
> actual expression depends on. In Python, because I haven't learned to 
> construct expressions in Julia yet and don't have the time to learn it now:
>
> def f(x): return eval("lambda x: x + 1")(x)
>
>
>
> tiistai 27. syyskuuta 2016 12.28.40 UTC+3 Marius Millea kirjoitti:
>>
>> Hi, is there a way to "eval" something in the current scope? My problem 
>> is the following, I've written a macro that, inside the returned 
>> expression, builds an expression which I need to eval. It looks like this,
>>
>> macro foo()
>> quote
>> ex = ...
>> eval_in_current_scope(ex)
>> end
>> end
>>
>> Now, you might say I'm using macros wrong and I should just be doing,
>>
>> macro foo()
>> ex = ...
>> end
>>  
>>
>> but in this case when I build "ex", it needs to occur at runtime since it 
>> depends on some things only available then. So is there any way to go about 
>> this? Thanks. 
>>
>>

[julia-users] Re: eval in current scope

2016-09-27 Thread Marius Millea

>
> Macros are functions evaluated at parse-time.  The runtime scope doesn't 
> even exist when the macro is called.

 
That's right, the answer may well have nothing to do with macros (maybe I 
obscured the question by even mentioning them in an attempt to give bigger 
context to what I'm trying to accomplish). 

I guess it really boils down to just "is there a way to eval something in 
the current scope?". Not knowing much about the internals of all of this, 
given that "eval" does exactly that in the global scope, it wouldn't seem 
like such a stretch that something exists for the current scope. For 
example, in Python it exists: 

In [1]: def f(x):
   ...: return eval("x+1")


In [2]: f(3)
Out[2]: 4



But perhaps the JIT requirements make it impossible in Julia? 

julia> function f(x)
  eval(:(x+1))
   end
f (generic function with 1 method)


julia> f(3)
ERROR: UndefVarError: x not defined
 in eval(::Module, ::Any) at ./boot.jl:234
 in f(::Int64) at ./REPL[1]:2





[julia-users] Re: eval in current scope

2016-09-27 Thread Marius Millea
And just to be clear, by "current scope" here I mean the scope of where the 
code from this macro is getting "pasted", not the macro scope. 


On Tuesday, September 27, 2016 at 11:28:40 AM UTC+2, Marius Millea wrote:
>
> Hi, is there a way to "eval" something in the current scope? My problem is 
> the following, I've written a macro that, inside the returned expression, 
> builds an expression which I need to eval. It looks like this,
>
> macro foo()
> quote
> ex = ...
> eval_in_current_scope(ex)
> end
> end
>
> Now, you might say I'm using macros wrong and I should just be doing,
>
> macro foo()
> ex = ...
> end
>  
>
> but in this case when I build "ex", it needs to occur at runtime since it 
> depends on some things only available then. So is there any way to go about 
> this? Thanks. 
>
>

[julia-users] eval in current scope

2016-09-27 Thread Marius Millea
Hi, is there a way to "eval" something in the current scope? My problem is 
the following, I've written a macro that, inside the returned expression, 
builds an expression which I need to eval. It looks like this,

macro foo()
quote
ex = ...
eval_in_current_scope(ex)
end
end

Now, you might say I'm using macros wrong and I should just be doing,

macro foo()
ex = ...
end
 

but in this case when I build "ex", it needs to occur at runtime since it 
depends on some things only available then. So is there any way to go about 
this? Thanks. 



[julia-users] Is this a bug (related to scoping / nested macros)?

2016-09-25 Thread Marius Millea
I can't figure out why this doesn't work:

julia> macro outer()
   quote
   macro inner()
   end
   @inner
   end
   end


julia> @outer
ERROR: UndefVarError: @inner not defined



Could it be a bug (I'm on 0.5) or am I missing something about how macros 
work?
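A hedged guess at what's happening (based on 0.5 behavior, not confirmed): the entire expansion returned by @outer is itself macro-expanded before any of it is evaluated, so @inner is looked up before the `macro inner` definition has ever run. If that's right, deferring both the definition and the call through eval, so the definition is evaluated before the call is expanded, should sidestep it. An untested sketch using the 0.5-era current_module():

```julia
macro outer()
    quote
        # evaluate the inner macro definition first, at runtime...
        eval(current_module(), :(macro inner() 42 end))
        # ...then expand and evaluate the quoted call; by the time this
        # expression reaches the expander, @inner exists
        eval(current_module(), :(@inner))
    end
end
```

This trades compile-time expansion for two runtime evals, so it is only really viable at top level.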


Re: [julia-users] How to call macro stored in variable

2016-09-25 Thread Marius Millea
Ahh nice, thanks. Your macrocall suggestion reads cleanly too; I think 
it'd look something like this:

julia> macro macrocall(mac,args...)
   Expr(:macrocall,esc(mac),map(esc,args)...)
   end
@macrocall (macro with 1 method)

julia> @macrocall idmacro 1+2
3


What's the problem with local variables you mention, though? I can't think 
of a case where this wouldn't work. 
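My understanding of the local-variable caveat (a hedged illustration, not something spelled out in the thread): the macrocall emitted by @macrocall is expanded while the surrounding code is still being lowered, and the expander must look up the macro's value at that point. A global like idmacro has a value then; a local variable does not:

```julia
# Works: `idmacro` is a global binding whose value the expander can
# look up when the emitted macrocall is expanded.
@macrocall idmacro 1+2   # 3

# Fails (sketch): `m` only receives a value at runtime, but the
# macrocall has to be expanded while `h` is still being lowered,
# when `m` is just an empty local slot.
function h()
    m = idmacro
    @macrocall m 1+2
end
```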


On Sunday, September 25, 2016 at 2:08:46 PM UTC+2, Yichao Yu wrote:
>
> On Sun, Sep 25, 2016 at 7:25 AM, Marius Millea wrote: 
> > I can store a macro to a variable (let use the identity macro "id" as an 
> > example), 
> > 
> > julia> idmacro = macro id(ex) 
> >:($(esc(ex))) 
> >end 
> > @id (macro with 1 method) 
> > 
> > 
> > How can I use this macro now? I can *almost* do it by hand by passing an 
> > expression as an argument and eval'ing the result, but it doesn't work 
> > because there's esc's left, 
>
> Assuming this is just for understanding how macros are implemented, 
> you can just construct a macrocall expression 
>
> julia> idmacro = macro id(ex) 
>esc(ex) 
>end 
> @id (macro with 1 method) 
>
> julia> idmacro(:(1 + 2)) 
> :($(Expr(:escape, :(1 + 2)))) 
>
> julia> eval(Expr(:macrocall, :idmacro, :(1 + 2))) 
> 3 
>
> This is unlikely what you want to do in real code. 
>
> Side notes, 
>
> As shown above, you can just do `esc(ex)`, `:($foo)` is equivalent to 
> `foo` 
>
> You can also use a variable name starts with `@` since that's the 
> syntax that triggers the parsing to a macrocall expression. 
>
> julia> eval(:($(Symbol("@idmacro")) = idmacro)) 
> @id (macro with 1 method) 
>
> julia> @idmacro 1 + 2 
> 3 
>
> julia> Meta.show_sexpr(:(@idmacro 1 + 2)) 
> (:macrocall, Symbol("@idmacro"), (:call, :+, 1, 2)) 
>
>
> You can do this transformation with a macro too (i.e. make @macrocall 
> idmacro 1 + 2 construct a macrocall expression) but it's not really 
> useful since you can't do this for a local variable, which is also why 
> I said don't do this in real code. 
>
> > 
> > julia> @id 1+2 
> > 3 
> > 
> > 
> > julia> idmacro(:(1+2)) 
> > :($(Expr(:escape, :(1 + 2)))) 
> > 
> > 
> > julia> eval(idmacro(:(1+2))) 
> > ERROR: syntax: unhandled expr (escape (call + 1 2)) 
> >  in eval(::Module, ::Any) at ./boot.jl:234 
> >  in eval(::Any) at ./boot.jl:233 
> > 
> > 
> > 
> > Does Julia provide a way to use such macros stored in variables? Thanks! 
>


[julia-users] Re: How to call macro stored in variable

2016-09-25 Thread Marius Millea
Now that you mention it I'm not sure why I thought returning :($(esc(ex))) 
was better than esc(ex); I think they give identical results in this case 
(maybe in all cases?). 

But at any rate, that doesn't affect this problem, since both do give the 
identical result. The problem seems to be that the returned expression in 
general might have esc's in it, which ordinarily get taken out when invoking 
a macro the usual way, but don't get taken out in this case. 
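If I understand the mechanics correctly (a hedged guess): the Expr(:escape, ...) nodes are bookkeeping consumed by the macro expander's hygiene pass, not by eval, which is why eval'ing the raw return value fails. Running a constructed macrocall through macroexpand first should therefore leave a plain expression. A 0.5-era sketch:

```julia
idmacro = macro id(ex)
    esc(ex)
end

# The expander resolves the :escape wrapper while expanding the
# macrocall, leaving an ordinary expression that eval can handle.
ex = macroexpand(Expr(:macrocall, :idmacro, :(1 + 2)))
eval(ex)
```

I'd expect eval(ex) to give 3 here, matching the Expr(:macrocall, ...) approach suggested in the thread.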



On Sunday, September 25, 2016 at 1:49:41 PM UTC+2, Lutfullah Tomak wrote:
>
> I think you should return just esc(ex). It seems :($(esc(ex))) makes it an 
> expression wrapping an expression.



[julia-users] How to call macro stored in variable

2016-09-25 Thread Marius Millea
I can store a macro in a variable (let's use the identity macro "id" as an 
example),

julia> idmacro = macro id(ex)
   :($(esc(ex)))
   end
@id (macro with 1 method)


How can I use this macro now? I can *almost* do it by hand by passing an 
expression as an argument and eval'ing the result, but it doesn't work 
because there are esc's left,

julia> @id 1+2
3


julia> idmacro(:(1+2))
:($(Expr(:escape, :(1 + 2))))


julia> eval(idmacro(:(1+2)))
ERROR: syntax: unhandled expr (escape (call + 1 2))
 in eval(::Module, ::Any) at ./boot.jl:234
 in eval(::Any) at ./boot.jl:233



Does Julia provide a way to use such macros stored in variables? Thanks!


[julia-users] Re: [ANN] ClobberingReload.jl: a more convenient reload, and an Autoreload for 0.5

2016-09-20 Thread Marius Millea
Looks great, thanks for this. Dropped it in place of Autoreload.jl and it 
works as advertised from what I've seen thus far. I had been hoping 
something like Autoreload.jl would stick around and be maintained; I find 
Jupyter+Autoreload makes for a really pleasant workflow. 

Marius


On Tuesday, September 20, 2016 at 6:33:24 PM UTC+2, Cedric St-Jean wrote:
>
> Hello, 
>
> ClobberingReload's creload("ModuleName") 
> is an alternative to reload("ModuleName") for interactive development. 
> Instead of creating a new module object, it evaluates the modified code 
> inside the existing module object, clobbering the existing definitions. 
> This means that:
>
> using ClobberingReload
> import M
>
> x = M.Cat(4)
>
> ...
>
> creload("M")
>
> M.chase(x)  # no need to reinitialize x
>
> Unlike reload(), it works fine with `using`
>
> using ClobberingReload
> using M
>
> x = Cat(4)
>
> ...
>
> creload("M")
>
> chase(x) 
>
> ClobberingReload also works as a drop-in replacement for *@malmaud*'s 
> great Autoreload.jl package. See this section.
>
> The package has not been tested as extensively as I would have liked 
> before release, but it's rather simple code, and with 0.5 releasing today 
> and Autoreload being mostly unmaintained, 
> hopefully this can help some 0.5 users. Please file an issue should you 
> encounter a problem.
>
> Install it with
>
> Pkg.clone("git://github.com/cstjean/ClobberingReload.jl.git")
>
>
> Best,
>
> Cédric
>


Re: [julia-users] Re: ANN: A potential new Discourse-based Julia forum

2016-09-19 Thread Marius Millea
+1 for Discourse, which I could have done without spamming the list with 
another message if this were Discourse :)


[julia-users] accessing globals from keyword arg defaults

2016-09-18 Thread Marius Millea
I'd like to access global variables from the default values of keyword 
arguments, e.g.:

x = 3
function f(;x=x) #<- this default value of x here should refer to x in the 
global scope which is 3
   ...
end

Is there any way to do this? I had guessed the following might work but it 
doesn't:

function f(;x=(global x; x)) 
   ...
end
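Two workarounds I'd expect to behave (the names Main.x and default_x are illustrative, not from the thread): the default expression is evaluated where the keyword name itself shadows the global, so referring to the global under a name that isn't shadowed avoids the collision:

```julia
x = 3

# Workaround 1: qualify the global with its module, so the name in the
# default expression can't collide with the keyword argument itself
f(; x=Main.x) = x

# Workaround 2: alias the global under a different, unshadowed name
const default_x = x
g(; x=default_x) = x
```

Both f() and g() should then return 3, at the cost of either hard-coding the module or maintaining an alias.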


[julia-users] unexpected mapslice result on 0.5rc3

2016-09-14 Thread Marius Millea
Is this the expected behavior? 

julia> mapslices(x->tuple(x), [1 2; 3 4], 1)
1×2 Array{Tuple{Array{Int64,1}},2}:
 ([2,4],)  ([2,4],)

julia> mapslices(x->tuple(x...), [1 2; 3 4], 1)
1×2 Array{Tuple{Int64,Int64},2}:
 (1,3)  (2,4)


The first case certainly came as pretty unexpected to me. Does it have 
something to do with copies vs views into the array?
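A hedged guess at the mechanism, not a confirmed explanation: mapslices may reuse a single buffer for the slice it passes to the function, so tuple(x) stores a reference to that shared buffer, and every tuple ends up aliasing it as it holds the last slice, [2,4]. tuple(x...) copies the scalars out, which is why the second form behaves. If that's right, copying the slice should also fix the first form:

```julia
# Copying the slice breaks any aliasing with mapslices' internal buffer
# (0.5 syntax; later versions take the dimension as a keyword argument)
mapslices(x->tuple(copy(x)), [1 2; 3 4], 1)
```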


Re: [julia-users] @threads not providing as big speedup as expected

2016-08-29 Thread Marius Millea
Thanks, I did notice that, but regardless it shouldn't affect the scaling 
with NCPUs, and in fact, as you say, it doesn't change performance at all.  
On Monday, August 29, 2016 at 7:27:44 PM UTC+2, Diego Javier Zea wrote:
>
> Looks like the type of *d_cl* isn't inferred correctly. *d_cl = Dict(i => 
> ones(3,3,nl) for i=1:np)::Dict{Int64,Array{Float64,3}}* helps with that, 
> but I didn't see a change in performance. Best
>
>
>
> 
>


Re: [julia-users] @threads not providing as big speedup as expected

2016-08-29 Thread Marius Millea
Thanks, just tried wrapping the for loop inside a function, and it seems to 
make the @threads version slightly slower and the serial version slightly 
faster, so I'm even further from the speedup I was hoping for! Reading 
through that issue and the linked ones, I guess I may not be the only one 
seeing this. 

For ref, what I did:

function myloop(inv_cl,d_cl,fish,ijs,nl)
@threads for ij in ijs
i,j = ij
for l in 1:nl
fish[i,j] += (2*l+1)/2*trace(inv_cl[:,:,l]*d_cl[i][:,:,l]*inv_cl[:,:,l]*d_cl[j][:,:,l])
end
end
end

function test(nl,np)
inv_cl = ones(3,3,nl)
d_cl = Dict(i => ones(3,3,nl) for i=1:np)

fish = zeros(np,np)
ijs = [(i,j) for i=1:np, j=1:np]

myloop(inv_cl,d_cl,fish,ijs,nl)
end

# with @threads
@timeit test(3000,40)
1 loops, best of 3: 3.84 s per loop

# without @threads
@timeit test(3000,40)
1 loops, best of 3: 2.33 s per loop
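One more thing worth ruling out (my speculation, not something raised in the thread): every A[:,:,l] slice and every matrix product in the inner loop allocates a fresh 3×3 matrix, and heavy allocation can serialize threads through the GC. The trace of the four-matrix product can be computed with scalar loops and no allocation at all; a sketch, where fish_entry is a made-up helper name:

```julia
# trace(A*B*C*D) = sum over i,j,k,m of A[i,j]*B[j,k]*C[k,m]*D[m,i],
# computed elementwise so no temporary matrices are allocated
function fish_entry(inv_cl, d_cli, d_clj, nl)
    s = 0.0
    for l in 1:nl
        w = (2*l+1)/2
        for i=1:3, j=1:3, k=1:3, m=1:3
            s += w * inv_cl[i,j,l]*d_cli[j,k,l]*inv_cl[k,m,l]*d_clj[m,i,l]
        end
    end
    s
end
```

Each @threads iteration would then reduce to fish[i,j] = fish_entry(inv_cl, d_cl[i], d_cl[j], nl); if the scaling improves, allocation was likely the culprit.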







On Monday, August 29, 2016 at 6:50:15 PM UTC+2, Tim Holy wrote:
>
> Very quickly (train to catch!): try this 
> https://github.com/JuliaLang/julia/issues/17395#issuecomment-241911387 
> and see if it helps. 
>
> --Tim 
>
> On Monday, August 29, 2016 9:22:09 AM CDT Marius Millea wrote: 
> > I've parallelized some code with @threads, but instead of a factor NCPUs 
> > speed improvement (for me, 8), I'm seeing rather a bit under a factor 2. 
> I 
> > suppose the answer may be that my bottleneck isn't computation, rather 
> > memory access. But during running the code, I see my CPU usage go to 
> 100% 
> > on all 8 CPUs, if it were memory access would I still see this? Maybe 
> the 
> > answer is yes, in which case memory access is likely the culprit; is 
> there 
> > some way to confirm this though? If no, how do I figure out what *is* 
> the 
> > culprit? 
> > 
> > Here's a stripped down version of my code, 
> > 
> > 
> > function test(nl,np) 
> > 
> > inv_cl = ones(3,3,nl) 
> > d_cl = Dict(i => ones(3,3,nl) for i=1:np) 
> > 
> > fish = zeros(np,np) 
> > ijs = [(i,j) for i=1:np, j=1:np] 
> > 
> > Threads.@threads for ij in ijs 
> > i,j = ij 
> > for l in 1:nl 
> > fish[i,j] += (2*l+1)/2*trace(inv_cl[:,:,l]*d_cl[i][:,:,l]*inv_cl[:,:,l]*d_cl[j][:,:,l]) 
> > end 
> > end 
> > 
> > end 
> > 
> > 
> > # with the @threads 
> > @timeit test(3000,40) 
> > 1 loops, best of 3: 3.17 s per loop 
> > 
> > # now remove the @threads from above 
> > @timeit test(3000,40) 
> > 1 loops, best of 3: 4.42 s per loop 
> > 
> > 
> > 
> > Thanks. 
>
>
>

[julia-users] @threads not providing as big speedup as expected

2016-08-29 Thread Marius Millea
I've parallelized some code with @threads, but instead of a factor NCPUs 
speed improvement (for me, 8), I'm seeing rather a bit under a factor 2. I 
suppose the answer may be that my bottleneck isn't computation, rather 
memory access. But during running the code, I see my CPU usage go to 100% 
on all 8 CPUs, if it were memory access would I still see this? Maybe the 
answer is yes, in which case memory access is likely the culprit; is there 
some way to confirm this though? If no, how do I figure out what *is* the 
culprit? 

Here's a stripped down version of my code, 


function test(nl,np)

inv_cl = ones(3,3,nl)
d_cl = Dict(i => ones(3,3,nl) for i=1:np)

fish = zeros(np,np)
ijs = [(i,j) for i=1:np, j=1:np]

Threads.@threads for ij in ijs
i,j = ij
for l in 1:nl
fish[i,j] += (2*l+1)/2*trace(inv_cl[:,:,l]*d_cl[i][:,:,l]*inv_cl[:,:,l]*d_cl[j][:,:,l])
end
end

end


# with the @threads
@timeit test(3000,40)
1 loops, best of 3: 3.17 s per loop

# now remove the @threads from above
@timeit test(3000,40)
1 loops, best of 3: 4.42 s per loop



Thanks. 



Re: [julia-users] Re: accessing an expression's global scope from macro

2016-07-24 Thread Marius Millea
I think you are right btw; the compiler got rid of the wrapper function for
the "+" call, since all I see above is Base.add_float.

On Sun, Jul 24, 2016 at 4:55 PM, Marius Millea wrote:

> Here's my very simple test case. I will also try on my actual code.
>
> using SelfFunctions
> using TimeIt
>
> @selftype self type mytype
> x::Float64
> end
>
> t = mytype(0)
>
>
> # Test @self'ed version:
>
> @self @inline function f1()
> 1+x
> end
>
> @timeit f1(t)
> println(@code_warntype(f1(t)))
>
> # 100 loops, best of 3: 100.64 ns per loop
> # Variables:
> #   sf::SelfFunctions.SelfFunction{###f1_selfimpl#271}
> #   args::Tuple{mytype}
> #
> # Body:
> #   begin
> #   # meta: location /home/marius/workspace/selffunctions/test.jl
> ##f1_selfimpl#271 11
> #   SSAValue(1) =
> (Core.getfield)((Core.getfield)(args::Tuple{mytype},1)::mytype,:x)::Float64
> #   # meta: pop location
> #   return
> (Base.box)(Base.Float64,(Base.add_float)((Base.box)(Float64,(Base.sitofp)(Float64,1)),SSAValue(1)))
> #   end::Float64
>
>
>
>
> # Test non-@self'ed version:
>
> @inline function f2(t::mytype)
> 1+t.x
> end
>
> @timeit f2(t)
> println(@code_warntype(f2(t)))
>
> # 1000 loops, best of 3: 80.13 ns per loop
> # Variables:
> #   #self#::#f2
> #   t::mytype
> #
> # Body:
> #   begin
> #   return
> (Base.box)(Base.Float64,(Base.add_float)((Base.box)(Float64,(Base.sitofp)(Float64,1)),(Core.getfield)(t::mytype,:x)::Float64))
> #   end::Float64
> # nothing
>
>
> I'm not sure if its the creation of the SSAValue intermediate value or the
> extra getfield lookup, but you can see it slows down from ~80 to ~100ns.
>
>
> Marius
>
>
>
> On Sunday, July 24, 2016 at 3:52:38 PM UTC+2, Fábio Cardeal wrote:
>>
>> The compiler is pretty smart about removing these extra function calls,
>> so I didn't get any extra overhead on my test cases. I went ahead and added
>> `@inline` to the selfcall deckles. You can also do this:
>>
>>   @self @inline function inc2()
>> inc()
>> inc()
>>   end
>>
>> Update from the gist and try using some @inlines and see if it helps. You
>> can also send me your test cases if you want.
>>
>> In general, these techniques of adding and using compile time information
>> shouldn't cause any definite slowdown, even if we need to do some tweaking
>> with them meta tags. The compiler isn't perfect about this yet, but I think
>> our case is covered. (I hope?)
>>
>


Re: [julia-users] Re: accessing an expression's global scope from macro

2016-07-24 Thread Marius Millea
Here's my very simple test case. I will also try on my actual code. 

using SelfFunctions
using TimeIt

@selftype self type mytype
x::Float64
end

t = mytype(0)


# Test @self'ed version:

@self @inline function f1()
1+x
end

@timeit f1(t)
println(@code_warntype(f1(t)))

# 100 loops, best of 3: 100.64 ns per loop
# Variables:
#   sf::SelfFunctions.SelfFunction{###f1_selfimpl#271}
#   args::Tuple{mytype}
# 
# Body:
#   begin 
#   # meta: location /home/marius/workspace/selffunctions/test.jl 
##f1_selfimpl#271 11
#   SSAValue(1) = 
(Core.getfield)((Core.getfield)(args::Tuple{mytype},1)::mytype,:x)::Float64
#   # meta: pop location
#   return 
(Base.box)(Base.Float64,(Base.add_float)((Base.box)(Float64,(Base.sitofp)(Float64,1)),SSAValue(1)))
#   end::Float64




# Test non-@self'ed version:

@inline function f2(t::mytype)
1+t.x
end

@timeit f2(t)
println(@code_warntype(f2(t)))

# 1000 loops, best of 3: 80.13 ns per loop
# Variables:
#   #self#::#f2
#   t::mytype
# 
# Body:
#   begin 
#   return 
(Base.box)(Base.Float64,(Base.add_float)((Base.box)(Float64,(Base.sitofp)(Float64,1)),(Core.getfield)(t::mytype,:x)::Float64))
#   end::Float64
# nothing


I'm not sure if it's the creation of the SSAValue intermediate value or the 
extra getfield lookup, but you can see it slows down from ~80 to ~100 ns. 


Marius



On Sunday, July 24, 2016 at 3:52:38 PM UTC+2, Fábio Cardeal wrote:
>
> The compiler is pretty smart about removing these extra function calls, so 
> I didn't get any extra overhead on my test cases. I went ahead and added 
> `@inline` to the selfcall deckles. You can also do this:
>
>   @self @inline function inc2()
> inc()
> inc()
>   end
>  
> Update from the gist and try using some @inlines and see if it helps. You 
> can also send me your test cases if you want.
>
> In general, these techniques of adding and using compile time information 
> shouldn't cause any definite slowdown, even if we need to do some tweaking 
> with them meta tags. The compiler isn't perfect about this yet, but I think 
> our case is covered. (I hope?)
>


Re: [julia-users] Re: accessing an expression's global scope from macro

2016-07-24 Thread Marius Millea
Very nice! Didn't understand your hint earlier but now I do!

My only problem with this solution is the (perhaps unavoidable) run-time
overhead, since every single function call gets wrapped in one extra
function call. With a very simple test function that just does some
arithmetic, I'm seeing about a 25% slowdown using this. I wonder if there
are other ways to achieve functionally exactly what you've done here but
that involve something faster than a function call. In any case, this is
nice and I may use it anyway.

On Sun, Jul 24, 2016 at 3:21 AM, Fábio Cardeal  wrote:

> Hyy :)
>
> I made an implementation:
> https://gist.github.com/fcard/f356b01d5bb160dd486b9518ac292582
>
> Julia 0.5 only. Enjoy...? Bye bye
>
> --
> -
>


Re: [julia-users] Re: accessing an expression's global scope from macro

2016-07-22 Thread Marius Millea
Yeah, that's a good point. Granted, for my purposes I'm never going to be
redefining these types mid-program. I suppose one might extend your @unpack
to work on an expression and do the substitution recursively like my thing
does; then you could write,
@unpack aa: a function(x,aa::A)
  sin(2pi/a*x)
  a = 3 # can also assign without repacking
end

which seems slightly less hacky than what I'm doing but serves a similar
purpose.

Marius


On Fri, Jul 22, 2016 at 9:01 AM, Mauro  wrote:

>
> On Fri, 2016-07-22 at 01:02, Marius Millea  wrote:
> >> FYI Mauro's package has something similar
> >> <http://parametersjl.readthedocs.io/en/latest/manual/>.
> >>
> >
> > Some interesting stuff in there, thanks!
>
> The problem with your `@self` and with Parameters.jl's
> `@unpack_SomeType` macros is that it is easy to introduce bugs.
> Consider:
>
>   type A # and register it with @self
>  a
>   end
>   @self f(x,aa::A) = sin(2pi/a*x)
>
> Sometime later you refactor type A:
>
>   type A
>  a
>  pi
>   end
>
> now your function f is broken.  So, for every change in type A you need
> to check all functions which use `@self`.
>
> Instead I now use the @unpack macro (and its companion @pack), also part
> of Paramters.jl.  Then above f becomes
>
>   function f(x,aa::A)
> @unpack aa: a
> sin(2pi/a*x)
>   end
>
> This is still much more compact than writing out all the aa.a, etc. (if
> there are lots of field accesses) but safe.  Also, it clearly states, at
> the top of the function, which fields of a type are actually used.
>


Re: [julia-users] accessing an expression's global scope from macro

2016-07-22 Thread Marius Millea
On Thu, Jul 21, 2016 at 10:33 PM, Yichao Yu  wrote:

> On Thu, Jul 21, 2016 at 4:01 PM, Marius Millea 
> wrote:
> > In an attempt to make some numerical code (ie something thats basically
> just
> > a bunch of equations) more readable, I am trying to write a macro that
> lets
> > me write the code more succinctly. The code uses parameters from some
> data
> > structure, call it "mytype", so its littered with "t.a", "t.b", etc..
> where
> > t::mytype. My macro basically splices in the the "t." part for me. Its
> kind
> > of like how C++ member functions automatically access the class's
> fields, as
> > an example. To my amazement / growing love of Julia, I actually managed
> to
> > hack it together without too much difficulty, it looks like this,
> >
> >
> > macro self(func)
> > @assert func.head == :function
> >
> > # add "self" as a first function argument
> > insert!(func.args[1].args,2,:(self::mytype))
> >
> >
> > # recurse through AST and rename X to self.X if
> > # its a fieldname of mytype
> > function visit(ex)
> > if typeof(ex) == Expr
> > ex.args = map(visit,ex.args)
> > elseif (typeof(ex) == Symbol) & (ex in fieldnames(mytype))
> > return :(self.$ex)
> > end
> > ex
> > end
> > func.args[2] = visit(func.args[2])
> >
> > show(func) # print the edited function so we can see it in action
> >
> > :($(esc(func)))
> > end
> >
> >
> >
> >
> > Here it is in action:
> >
> >> @self function inc()
> > x = x + 1
> > end
> >
> >
> > :(function inc(self::mytype)
> > self.x = self.x + 1
> > end)
> >
> >
> > inc (generic function with 1 method)
> >
> >
> >
> >
> >> inc(mytype(0))
> > 1
> >
> >
> >
> > where I'm assuming I've defined mytype as
> >
> > type mytype
> > x
> > end
> >
> >
> >
> > As you can see, all it did was add self::mytype as an arg and replace x
> with
> > self.x everywhere it found it. This is also super nice because there is
> zero
> > run-time overhead vs. having written the "self." myself, everything
> happens
> > compile time.
> >
> > Now for the question. I'd like to also to be able automatically pass the
> > "self" argument to functions, so that I could write something like,
> >
> > @self function inc2()
> > inc()
> > inc()
> > end
> >
> >
> >
> > and it would produce
> >
> > function inc2(self::mytype)
> > inc(self)
> > inc(self)
> > end
> >
> >
> >
> > For this though, my macro needs to somehow figure out that "inc" was also
> > defined with @self (since it shouldn't blindly add self as a first arg so
> > other non-@self'ed function calls). Is this possible in Julia? I suppose
> > somehow the macro must access the global scope where the expression is
> being
> > evaluated? I'm not entirely sure that's doable. I'm happy to take any
> tips
> > how to achieve this though, especially ones incurring minimal overhead
> for
> > the rewritten function. Thanks!
>
> You should not do this. It is possible to access the current module
> but you don't have any scope information.
>

Do you mean that it's possible to get the module where the expression (not
the macro) is defined? If so, how do I do that?




>
> >
>


Re: [julia-users] Re: accessing an expression's global scope from macro

2016-07-21 Thread Marius Millea
On Thu, Jul 21, 2016 at 11:37 PM, Cedric St-Jean wrote:

> Neat macro.
>
>
>> For this though, my macro needs to somehow figure out that "inc" was also
>> defined with @self (since it shouldn't blindly add self as a first arg so
>> other non-@self'ed function calls). Is this possible in Julia?
>>
>
> You could have a global Set that would contain the names of the functions
> that were defined with @self. But IMO this is going to bite you at one
> point or another.
>

Yeah, certainly a possibility, although even this doesn't seem that robust,
since you're only doing it based on the function name, and you don't know if
it's referring to a different function in any given call environment. I'm
starting to doubt it's truly possible at compile time, although I'm still
thinking...



>
> FYI Mauro's package has something similar
> <http://parametersjl.readthedocs.io/en/latest/manual/>.
>

Some interesting stuff in there, thanks!


>
> I would suggest using a global variable, if you want to avoid explicitly
> passing `self` all over the place. It would look like this:
>
> const self = Array{mytype}()   # trick to avoid the globals' poor
> performance
>
> @self function foo()
>x = x + 1   # expands into self[].x = self[].x + 1
> end
>
> @with_self(mytype(200)) do
># expands into
># try
>#... save the current value of self
>#global self[] = mytype(200)
>#... code
># finally
>#global self[] = ...restore previous value
># end
>...
> end
>
> I used this idiom in Common Lisp all the time. It's strictly equivalent to
> passing the object around to every function, and doesn't break the
> "functionalness" of the code.
>
> Cédric
>
>
> On Thursday, July 21, 2016 at 4:01:20 PM UTC-4, Marius Millea wrote:
>>
>> In an attempt to make some numerical code (ie something thats basically
>> just a bunch of equations) more readable, I am trying to write a macro that
>> lets me write the code more succinctly. The code uses parameters from some
>> data structure, call it "mytype", so its littered with "t.a", "t.b", etc..
>> where t::mytype. My macro basically splices in the the "t." part for me.
>> Its kind of like how C++ member functions automatically access the class's
>> fields, as an example. To my amazement / growing love of Julia, I actually
>> managed to hack it together without too much difficulty, it looks like this,
>>
>>
>> macro self(func)
>> @assert func.head == :function
>>
>> # add "self" as a first function argument
>> insert!(func.args[1].args,2,:(self::mytype))
>>
>>
>> # recurse through AST and rename X to self.X if
>> # its a fieldname of mytype
>> function visit(ex)
>> if typeof(ex) == Expr
>> ex.args = map(visit,ex.args)
>> elseif (typeof(ex) == Symbol) & (ex in fieldnames(mytype))
>> return :(self.$ex)
>> end
>> ex
>> end
>> func.args[2] = visit(func.args[2])
>>
>> show(func) # print the edited function so we can see it in action
>>
>> :($(esc(func)))
>> end
>>
>>
>>
>>
>> Here it is in action:
>>
>> > @self function inc()
>> x = x + 1
>> end
>>
>>
>> :(function inc(self::mytype)
>> self.x = self.x + 1
>> end)
>>
>>
>> inc (generic function with 1 method)
>>
>>
>>
>>
>> > inc(mytype(0))
>> 1
>>
>>
>>
>> where I'm assuming I've defined mytype as
>>
>> type mytype
>> x
>> end
>>
>>
>>
>> As you can see, all it did was add self::mytype as an arg and replace x
>> with self.x everywhere it found it. This is also super nice because there
>> is zero run-time overhead vs. having written the "self." myself, everything
>> happens at compile time.
>>
>> Now for the question. I'd also like to be able to automatically pass the
>> "self" argument to functions, so that I could write something like,
>>
>> @self function inc2()
>> inc()
>> inc()
>> end
>>
>>
>>
>> and it would produce
>>
>> function inc2(self::mytype)
>> inc(self)
>> inc(self)
>> end
>>
>>
>>
>> For this though, my macro needs to somehow figure out that "inc" was also
>> defined with @self (since it shouldn't blindly add self as a first arg to
>> other non-@self'ed function calls). Is this possible in Julia? I suppose
>> somehow the macro must access the global scope where the expression is
>> being evaluated? I'm not entirely sure that's doable. I'm happy to take any
>> tips how to achieve this though, especially ones incurring minimal overhead
>> for the rewritten function. Thanks!
>>
>>
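Cédric's save-and-restore idiom above can be fleshed out into a rough sketch. This is 0.5-era syntax matching the rest of the thread, uses a begin/end block rather than the do-block form, and uses Nullable to cope with the initially unset cell; it assumes @with_self is defined in the same module as `self`:

```julia
type mytype
    x
end

const self = Array{mytype}()   # the 0-dim "cell" trick from above

macro with_self(obj, body)
    quote
        # save the current value of self[], if any
        saved = isassigned(self, 1) ? Nullable(self[]) : Nullable{mytype}()
        try
            self[] = $(esc(obj))
            $(esc(body))
        finally
            # restore the previous value
            isnull(saved) || (self[] = get(saved))
        end
    end
end

# usage:
# @with_self mytype(200) begin
#     println(self[].x)
# end
```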


[julia-users] accessing an expression's global scope from macro

2016-07-21 Thread Marius Millea
In an attempt to make some numerical code (i.e. something that's basically 
just a bunch of equations) more readable, I am trying to write a macro that 
lets me write the code more succinctly. The code uses parameters from some 
data structure, call it "mytype", so it's littered with "t.a", "t.b", etc., 
where t::mytype. My macro basically splices in the "t." part for me. 
It's kind of like how C++ member functions automatically access the class's 
fields, as an example. To my amazement / growing love of Julia, I actually 
managed to hack it together without too much difficulty, it looks like this,


macro self(func)
@assert func.head == :function
   
# add "self" as a first function argument
insert!(func.args[1].args,2,:(self::mytype))


# recurse through AST and rename X to self.X if 
# its a fieldname of mytype
function visit(ex)
if typeof(ex) == Expr
ex.args = map(visit,ex.args)
elseif (typeof(ex) == Symbol) & (ex in fieldnames(mytype))
return :(self.$ex)
end
ex
end
func.args[2] = visit(func.args[2])

show(func) # print the edited function so we can see it in action

:($(esc(func)))
end




Here it is in action:

> @self function inc()
x = x + 1
end


:(function inc(self::mytype) 
self.x = self.x + 1
end)


inc (generic function with 1 method)




> inc(mytype(0))
1



where I'm assuming I've defined mytype as 

type mytype
x
end



As you can see, all it did was add self::mytype as an arg and replace x 
with self.x everywhere it found it. This is also super nice because there 
is zero run-time overhead vs. having written the "self." myself, everything 
happens at compile time. 

Now for the question. I'd also like to be able to automatically pass the 
"self" argument to functions, so that I could write something like, 

@self function inc2()
inc()
inc()
end



and it would produce

function inc2(self::mytype)
inc(self)
inc(self)
end



For this though, my macro needs to somehow figure out that "inc" was also 
defined with @self (since it shouldn't blindly add self as a first arg to 
other non-@self'ed function calls). Is this possible in Julia? I suppose 
somehow the macro must access the global scope where the expression is 
being evaluated? I'm not entirely sure that's doable. I'm happy to take any 
tips how to achieve this though, especially ones incurring minimal overhead 
for the rewritten function. Thanks!
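One way to answer the question at the end, sketched under the assumption that all @self'ed functions are defined in one module and in order: have the macro record each name it processes in a global Set at expansion time, and consult that registry when rewriting calls. The SELF_FUNCS name is made up for this sketch:

```julia
const SELF_FUNCS = Set{Symbol}()   # hypothetical registry of @self'ed names

macro self(func)
    @assert func.head == :function
    push!(SELF_FUNCS, func.args[1].args[1])   # record the name at expansion time

    # add "self" as a first function argument
    insert!(func.args[1].args, 2, :(self::mytype))

    function visit(ex)
        if isa(ex, Expr)
            # rewrite calls to other @self'ed functions: inc() -> inc(self)
            if ex.head == :call && isa(ex.args[1], Symbol) && ex.args[1] in SELF_FUNCS
                insert!(ex.args, 2, :self)
            end
            ex.args = map(visit, ex.args)
        elseif isa(ex, Symbol) && ex in fieldnames(mytype)
            return :(self.$ex)
        end
        ex
    end
    func.args[2] = visit(func.args[2])

    esc(func)
end
```

The obvious caveat: inc must have been @self'ed before inc2 is expanded, and the registry lives in the macro's module for the whole session, so this doesn't see functions defined elsewhere.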



Re: [julia-users] Re: Possible bug: very slow module load after addprocs()

2016-07-20 Thread Marius Millea
Done, see https://github.com/JuliaLang/julia/issues/17509



On Wednesday, July 20, 2016 at 5:21:23 PM UTC+2, Cedric St-Jean wrote:
>
> That does look suspicious. Maybe file an issue if there isn't one?
>
> On Wed, Jul 20, 2016 at 4:31 AM, Marius Millea  > wrote:
>
>> I don't think that theory totally works, it seems to scale to some extent 
>> with the length of time to load the package itself. Another example:
>>
>> julia> tic(); using PyPlot; toc()
>> elapsed time: 3.395904233 seconds
>>
>> vs
>>
>> julia> addprocs();
>> julia> tic(); using PyPlot; toc()
>> elapsed time: 13.877550518 seconds
>>
>> or even:
>>
>> julia> addprocs();
>> julia> using Empty; tic(); using PyPlot; toc()
>> elapsed time: 7.357315778 seconds
>>
>>
>> In any case, it can get pretty painful loading a few modules at the 
>> beginning of my parallelized scripts... 
>>
>>
>>
>>
>> On Tuesday, July 19, 2016 at 4:55:40 PM UTC+2, Cedric St-Jean wrote:
>>>
>>> Yes, that's what I meant. Presumably the multi-proc machinery is getting 
>>> compiled at the first `using`. It's the same reason why "println(2+2)" is 
>>> very slow on first use, but fast afterwards. 
>>>
>>> On Tue, Jul 19, 2016 at 10:41 AM, Marius Millea  
>>> wrote:
>>>
>>>> Seems it may have something to do with that. If I understood correctly 
>>>> what you're saying, if I create Empty2.jl defining module Empty2, I get, 
>>>>
>>>> julia> addprocs();
>>>>
>>>> julia> tic(); using Empty; toc()
>>>> elapsed time: 2.706353202 seconds
>>>> 2.706353202
>>>>
>>>> julia> tic(); using Empty; toc()
>>>> elapsed time: 0.00042397 seconds
>>>> 0.00042397
>>>>
>>>> julia> tic(); using Empty2; toc()
>>>> elapsed time: 0.029200919 seconds
>>>> 0.029200919
>>>>
>>>> julia> tic(); using Empty2; toc()
>>>> elapsed time: 0.000193097 seconds
>>>> 0.000193097
>>>>
>>>>
>>>>
>>>> That first load of Empty2 at 0.02 secs is much more in line with what 
>>>> loading it on a single processor takes. 
>>>>
>>>>
>>>>
>>>> On Tuesday, July 19, 2016 at 4:13:15 PM UTC+2, Cedric St-Jean wrote:
>>>>>
>>>>> Maybe there is some warm-up JIT time in there? If you create an Empty2 
>>>>> module and load it after Empty, is it also slow?
>>>>>
>>>>> On Tuesday, July 19, 2016 at 9:07:01 AM UTC-4, Marius Millea wrote:
>>>>>>
>>>>>> I noticed that once I addprocs(), subsequent "using" statements were 
>>>>>> extremely slow. I guess in this case its loading the module on each 
>>>>>> processor, but if it happens in parallel it shouldn't be *that* much 
>>>>>> more 
>>>>>> wall time, and here I'm talking about two orders of magnitude 
>>>>>> difference. 
>>>>>>
>>>>>> Assuming I've got a file Empty.jl whose contents are,
>>>>>>
>>>>>> module Empty
>>>>>> end
>>>>>>
>>>>>> then single threaded:
>>>>>>
>>>>>> tic()
>>>>>> using Empty
>>>>>> toc()
>>>>>> elapsed time: 0.024461076 seconds
>>>>>>
>>>>>> vs. multi-threaded:
>>>>>>
>>>>>> addprocs() #I've got 8 procs
>>>>>> tic()
>>>>>> using Empty
>>>>>> toc()
>>>>>> elapsed time: 2.479418079 seconds
>>>>>>
>>>>>>
>>>>>> Should I submit this as an Issue on Github, or is there something 
>>>>>> else going on? I've checked both Julia 0.4.5. and 0.5 (01e3c8a). I'm on 
>>>>>> Ubuntu 16.04 64bit. 
>>>>>>
>>>>>>
>>>>>>
>>>
>

Re: [julia-users] Re: Possible bug: very slow module load after addprocs()

2016-07-20 Thread Marius Millea
I don't think that theory totally works, it seems to scale to some extent 
with the length of time to load the package itself. Another example:

julia> tic(); using PyPlot; toc()
elapsed time: 3.395904233 seconds

vs

julia> addprocs();
julia> tic(); using PyPlot; toc()
elapsed time: 13.877550518 seconds

or even:

julia> addprocs();
julia> using Empty; tic(); using PyPlot; toc()
elapsed time: 7.357315778 seconds


In any case, it can get pretty painful loading a few modules at the 
beginning of my parallelized scripts... 




On Tuesday, July 19, 2016 at 4:55:40 PM UTC+2, Cedric St-Jean wrote:
>
> Yes, that's what I meant. Presumably the multi-proc machinery is getting 
> compiled at the first `using`. It's the same reason why "println(2+2)" is 
> very slow on first use, but fast afterwards. 
>
> On Tue, Jul 19, 2016 at 10:41 AM, Marius Millea  > wrote:
>
>> Seems it may have something to do with that. If I understood correctly 
>> what you're saying, if I create Empty2.jl defining module Empty2, I get, 
>>
>> julia> addprocs();
>>
>> julia> tic(); using Empty; toc()
>> elapsed time: 2.706353202 seconds
>> 2.706353202
>>
>> julia> tic(); using Empty; toc()
>> elapsed time: 0.00042397 seconds
>> 0.00042397
>>
>> julia> tic(); using Empty2; toc()
>> elapsed time: 0.029200919 seconds
>> 0.029200919
>>
>> julia> tic(); using Empty2; toc()
>> elapsed time: 0.000193097 seconds
>> 0.000193097
>>
>>
>>
>> That first load of Empty2 at 0.02 secs is much more in line with what 
>> loading it on a single processor takes. 
>>
>>
>>
>> On Tuesday, July 19, 2016 at 4:13:15 PM UTC+2, Cedric St-Jean wrote:
>>>
>>> Maybe there is some warm-up JIT time in there? If you create an Empty2 
>>> module and load it after Empty, is it also slow?
>>>
>>> On Tuesday, July 19, 2016 at 9:07:01 AM UTC-4, Marius Millea wrote:
>>>>
>>>> I noticed that once I addprocs(), subsequent "using" statements were 
>>>> extremely slow. I guess in this case its loading the module on each 
>>>> processor, but if it happens in parallel it shouldn't be *that* much more 
>>>> wall time, and here I'm talking about two orders of magnitude difference. 
>>>>
>>>> Assuming I've got a file Empty.jl whose contents are,
>>>>
>>>> module Empty
>>>> end
>>>>
>>>> then single threaded:
>>>>
>>>> tic()
>>>> using Empty
>>>> toc()
>>>> elapsed time: 0.024461076 seconds
>>>>
>>>> vs. multi-threaded:
>>>>
>>>> addprocs() #I've got 8 procs
>>>> tic()
>>>> using Empty
>>>> toc()
>>>> elapsed time: 2.479418079 seconds
>>>>
>>>>
>>>> Should I submit this as an Issue on Github, or is there something else 
>>>> going on? I've checked both Julia 0.4.5. and 0.5 (01e3c8a). I'm on Ubuntu 
>>>> 16.04 64bit. 
>>>>
>>>>
>>>>
>

[julia-users] Re: Possible bug: very slow module load after addprocs()

2016-07-19 Thread Marius Millea
Seems it may have something to do with that. If I understood correctly what 
you're saying, if I create Empty2.jl defining module Empty2, I get, 

julia> addprocs();

julia> tic(); using Empty; toc()
elapsed time: 2.706353202 seconds
2.706353202

julia> tic(); using Empty; toc()
elapsed time: 0.00042397 seconds
0.00042397

julia> tic(); using Empty2; toc()
elapsed time: 0.029200919 seconds
0.029200919

julia> tic(); using Empty2; toc()
elapsed time: 0.000193097 seconds
0.000193097



That first load of Empty2 at 0.02 secs is much more in line with what 
loading it on a single processor takes. 



On Tuesday, July 19, 2016 at 4:13:15 PM UTC+2, Cedric St-Jean wrote:
>
> Maybe there is some warm-up JIT time in there? If you create an Empty2 
> module and load it after Empty, is it also slow?
>
> On Tuesday, July 19, 2016 at 9:07:01 AM UTC-4, Marius Millea wrote:
>>
>> I noticed that once I addprocs(), subsequent "using" statements were 
>> extremely slow. I guess in this case it's loading the module on each 
>> processor, but if it happens in parallel it shouldn't be *that* much more 
>> wall time, and here I'm talking about two orders of magnitude difference. 
>>
>> Assuming I've got a file Empty.jl whose contents are,
>>
>> module Empty
>> end
>>
>> then single threaded:
>>
>> tic()
>> using Empty
>> toc()
>> elapsed time: 0.024461076 seconds
>>
>> vs. multi-threaded:
>>
>> addprocs() #I've got 8 procs
>> tic()
>> using Empty
>> toc()
>> elapsed time: 2.479418079 seconds
>>
>>
>> Should I submit this as an Issue on Github, or is there something else 
>> going on? I've checked both Julia 0.4.5 and 0.5 (01e3c8a). I'm on Ubuntu 
>> 16.04 64bit. 
>>
>>
>>

[julia-users] Possible bug: very slow module load after addprocs()

2016-07-19 Thread Marius Millea
I noticed that once I addprocs(), subsequent "using" statements were 
extremely slow. I guess in this case it's loading the module on each 
processor, but if it happens in parallel it shouldn't be *that* much more 
wall time, and here I'm talking about two orders of magnitude difference. 

Assuming I've got a file Empty.jl whose contents are,

module Empty
end

then single threaded:

tic()
using Empty
toc()
elapsed time: 0.024461076 seconds

vs. multi-process:

addprocs() #I've got 8 procs
tic()
using Empty
toc()
elapsed time: 2.479418079 seconds


Should I submit this as an Issue on Github, or is there something else 
going on? I've checked both Julia 0.4.5 and 0.5 (01e3c8a). I'm on Ubuntu 
16.04 64bit. 




Re: [julia-users] Re: Tips for optimizing this short code snippet

2016-06-19 Thread Marius Millea
Ah, that makes sense. So I tried with the latest 0.5 nightly and I go from
~3ms to ~1ms, a nice improvement! (different from what Andrew reported
above, so perhaps something changed over the last few nights, though)
Unfortunately ProfileView is giving me an error on 0.5, but from printing
the profile data I can at least confirm jl_apply_generic is no longer being
called.

On Sun, Jun 19, 2016 at 6:06 PM, Giuseppe Ragusa 
wrote:

> As Eric pointed out, with Julia 0.4.x functions passed as arguments are
> not optimized as their type is difficult to infer. That's why the profiler
> shows jl_generic_function being the bottleneck. Try it with 0.5 and things
> could get dramatically faster.


[julia-users] Re: Tips for optimizing this short code snippet

2016-06-19 Thread Marius Millea


Actually, I suppose normalizing by calls to integrand does answer your point 
about implementations, just not about the algorithm. It's true, when I look at the 
ProfileView (attached) it does seem like most of the time is actually spent 
inside quadgk. In fact, most of it is inside the jl_apply_generic function. 
I don't know enough about Julia to know what that function does. Could that 
be a sign there's something non-optimal going on? (To profile this I am 
doing @profile for _=1:1; test.f(1.); end to get enough samples, is 
that correct?) 


Marius




On Sunday, June 19, 2016 at 4:41:47 PM UTC+2, Marius Millea wrote:
>
> They *are* different algorithms, but when I was comparing speeds with the 
> other codes, I compared it in terms of time per number of calls of the 
> inner integrand function. So basically I'm testing the speed of the 
> integrand functions themselves, as well as the speed of the integration 
> library code, as well as any function call overhead type thing. With this 
> metric, the Julia code was close, but it was the slowest (although of 
> course far more succinct and easy to read). 
>
>
> On Saturday, June 18, 2016 at 7:46:35 PM UTC+2, Gabriel Gellner wrote:
>>
>> What integration library are you using with Cython/Fortran? Is it using 
>> the same algorithm as quadgk? Your code seems so simple I imagine this is 
>> just comparing the quadrature implementations :)
>>
>> On Saturday, June 18, 2016 at 5:53:57 AM UTC-7, Marius Millea wrote:
>>>
>>> Hi all, I'm sort of just starting out with Julia. I'm trying to get a 
>>> gauge of how fast I can make some code of which I have Cython and Fortran 
>>> versions to see if I should continue down the path of converting more of my 
>>> stuff to Julia (which in general I'd very much like to, if I can get it 
>>> fast enough). I thought maybe I'd post the code in question here to see if 
>>> I could get any tips. I've stripped down the original thing to what I think 
>>> are the important parts, a nested integration with an inner function 
>>> closure and some global variables. 
>>>
>>> module test
>>>
>>> const a = 1.
>>>
>>> function f(x)
>>> quadgk(y->1/g(y),0,x)[1]  # <=== outer integral
>>> end
>>>
>>> function g(y)
>>> integrand(x) = x^2*sqrt(x^2*y^2+a)/(exp(sqrt(x^2+y^2))+a)
>>> quadgk(integrand,0,Inf)[1]   # <=== inner integral
>>> end
>>>
>>> end
>>>
>>>
>>> > @timeit test.f(1.)
>>> 100 loops, best of 3: 3.10 ms per loop
>>>
>>>
>>>
>>>
>>> Does anyone have any tips that squeezes a little more out of this code? 
>>> I have run ProfileView on it, and although I'm not sure I fully understand 
>>> how to read its output, I think it's saying the majority of runtime is 
>>> spent in quadgk itself. So perhaps I should look into using a different 
>>> integration library? 
>>>
>>> Thanks for any help. 
>>>
>>>

[julia-users] Re: Tips for optimizing this short code snippet

2016-06-19 Thread Marius Millea
They *are* different algorithms, but when I was comparing speeds with the 
other codes, I compared it in terms of time per number of calls of the 
inner integrand function. So basically I'm testing the speed of the 
integrand functions themselves, as well as the speed of the integration 
library code, as well as any function call overhead type thing. With this 
metric, the Julia code was close, but it was the slowest (although of 
course far more succinct and easy to read). 


On Saturday, June 18, 2016 at 7:46:35 PM UTC+2, Gabriel Gellner wrote:
>
> What integration library are you using with Cython/Fortran? Is it using 
> the same algorithm as quadgk? Your code seems so simple I imagine this is 
> just comparing the quadrature implementations :)
>
> On Saturday, June 18, 2016 at 5:53:57 AM UTC-7, Marius Millea wrote:
>>
>> Hi all, I'm sort of just starting out with Julia. I'm trying to get a gauge 
>> of how fast I can make some code of which I have Cython and Fortran 
>> versions to see if I should continue down the path of converting more of my 
>> stuff to Julia (which in general I'd very much like to, if I can get it 
>> fast enough). I thought maybe I'd post the code in question here to see if 
>> I could get any tips. I've stripped down the original thing to what I think 
>> are the important parts, a nested integration with an inner function 
>> closure and some global variables. 
>>
>> module test
>>
>> const a = 1.
>>
>> function f(x)
>> quadgk(y->1/g(y),0,x)[1]  # <=== outer integral
>> end
>>
>> function g(y)
>> integrand(x) = x^2*sqrt(x^2*y^2+a)/(exp(sqrt(x^2+y^2))+a)
>> quadgk(integrand,0,Inf)[1]   # <=== inner integral
>> end
>>
>> end
>>
>>
>> > @timeit test.f(1.)
>> 100 loops, best of 3: 3.10 ms per loop
>>
>>
>>
>>
>> Does anyone have any tips that squeezes a little more out of this code? I 
>> have run ProfileView on it, and although I'm not sure I fully understand 
>> how to read its output, I think it's saying the majority of runtime is 
>> spent in quadgk itself. So perhaps I should look into using a different 
>> integration library? 
>>
>> Thanks for any help. 
>>
>>


[julia-users] Re: Tips for optimizing this short code snippet

2016-06-18 Thread Marius Millea
Ahh sorry, forget the 2x slower thing, I had accidentally changed something 
else. Both the anonymous y->1/g(y) and invg(y) give essentially the exact 
same run time. 

There are a number of 1's and 0's, but AFAICT they shouldn't cause any type 
instabilities: if the input variable y or x is a Float64, the output should 
always be Float64 as well. In any case I did check switching them to 1. and 
0.'s, but that also has no effect. 

Marius




On Saturday, June 18, 2016 at 4:08:59 PM UTC+2, Eric Forgy wrote:
>
> Try code_warntype. I'm guessing you have some type instabilities, e.g. I 
> see some 1's and 0's, where it might be better to use 1.0 and 0.0. Not sure 
> :)
>
> On Saturday, June 18, 2016 at 9:48:29 PM UTC+8, Marius Millea wrote:
>>
>> Thanks, yea, I had read that too and at some point checked if it mattered 
>> and it didn't seem to, which wasn't entirely surprising since it's on the 
>> outer loop. 
>>
>> But I just checked again given your comment and on Julia 0.4.5 it seems 
>> to actually be 2x slower if I switch it to this:
>>
>> function f(x)
>> invg(y) = 1/g(y)
>> quadgk(invg,0,x)[1]  # <=== outer integral
>> end
>>
>> Odd...
>>
>>
>> On Saturday, June 18, 2016 at 3:41:37 PM UTC+2, Eric Forgy wrote:
>>>
>>> Which version of Julia are you using? One thing that stands out is the 
>>> anonymous function y->1/g(y) being passed as an argument to quadgk. I'm not 
>>> an expert, but I've heard this is slow in v0.4 and below, but should be 
> fast in v0.5. Just a thought.
>>>
>>> On Saturday, June 18, 2016 at 8:53:57 PM UTC+8, Marius Millea wrote:
>>>>
>>>> Hi all, I'm sort of just starting out with Julia, I'm trying to get 
>>>> gauge of how fast I can make some code of which I have Cython and Fortran 
>>>> versions to see if I should continue down the path of converting more or 
>>>> my 
>>>> stuff to Julia (which in general I'd very much like to, if I can get it 
>>>> fast enough). I thought maybe I'd post the code in question here to see if 
>>>> I could get any tips. I've stripped down the original thing to what I 
>>>> think 
>>>> are the important parts, a nested integration with an inner function 
>>>> closure and some global variables. 
>>>>
>>>> module test
>>>>
>>>> const a = 1.
>>>>
>>>> function f(x)
>>>> quadgk(y->1/g(y),0,x)[1]  # <=== outer integral
>>>> end
>>>>
>>>> function g(y)
>>>> integrand(x) = x^2*sqrt(x^2*y^2+a)/(exp(sqrt(x^2+y^2))+a)
>>>> quadgk(integrand,0,Inf)[1]   # <=== inner integral
>>>> end
>>>>
>>>> end
>>>>
>>>>
>>>> > @timeit test.f(1.)
>>>> 100 loops, best of 3: 3.10 ms per loop
>>>>
>>>>
>>>>
>>>>
>>>> Does anyone have any tips that squeezes a little more out of this code? 
>>>> I have run ProfileView on it, and although I'm not sure I fully understand 
>>>> how to read its output, I think it's saying the majority of runtime is 
>>>> spent in quadgk itself. So perhaps I should look into using a different 
>>>> integration library? 
>>>>
>>>> Thanks for any help. 
>>>>
>>>>

[julia-users] Re: Tips for optimizing this short code snippet

2016-06-18 Thread Marius Millea
Thanks, yea, I had read that too and at some point checked if it mattered 
and it didn't seem to, which wasn't entirely surprising since it's on the 
outer loop. 

But I just checked again given your comment and on Julia 0.4.5 it seems to 
actually be 2x slower if I switch it to this:

function f(x)
invg(y) = 1/g(y)
quadgk(invg,0,x)[1]  # <=== outer integral
end

Odd...


On Saturday, June 18, 2016 at 3:41:37 PM UTC+2, Eric Forgy wrote:
>
> Which version of Julia are you using? One thing that stands out is the 
> anonymous function y->1/g(y) being passed as an argument to quadgk. I'm not 
> an expert, but I've heard this is slow in v0.4 and below, but should be 
> fast in v0.5. Just a thought.
>
> On Saturday, June 18, 2016 at 8:53:57 PM UTC+8, Marius Millea wrote:
>>
>> Hi all, I'm sort of just starting out with Julia. I'm trying to get a gauge 
>> of how fast I can make some code of which I have Cython and Fortran 
>> versions to see if I should continue down the path of converting more of my 
>> stuff to Julia (which in general I'd very much like to, if I can get it 
>> fast enough). I thought maybe I'd post the code in question here to see if 
>> I could get any tips. I've stripped down the original thing to what I think 
>> are the important parts, a nested integration with an inner function 
>> closure and some global variables. 
>>
>> module test
>>
>> const a = 1.
>>
>> function f(x)
>> quadgk(y->1/g(y),0,x)[1]  # <=== outer integral
>> end
>>
>> function g(y)
>> integrand(x) = x^2*sqrt(x^2*y^2+a)/(exp(sqrt(x^2+y^2))+a)
>> quadgk(integrand,0,Inf)[1]   # <=== inner integral
>> end
>>
>> end
>>
>>
>> > @timeit test.f(1.)
>> 100 loops, best of 3: 3.10 ms per loop
>>
>>
>>
>>
>> Does anyone have any tips that squeezes a little more out of this code? I 
>> have run ProfileView on it, and although I'm not sure I fully understand 
>> how to read its output, I think it's saying the majority of runtime is 
>> spent in quadgk itself. So perhaps I should look into using a different 
>> integration library? 
>>
>> Thanks for any help. 
>>
>>

[julia-users] Tips for optimizing this short code snippet

2016-06-18 Thread Marius Millea
Hi all, I'm sort of just starting out with Julia. I'm trying to get a gauge 
of how fast I can make some code of which I have Cython and Fortran 
versions to see if I should continue down the path of converting more of my 
stuff to Julia (which in general I'd very much like to, if I can get it 
fast enough). I thought maybe I'd post the code in question here to see if 
I could get any tips. I've stripped down the original thing to what I think 
are the important parts, a nested integration with an inner function 
closure and some global variables. 

module test

const a = 1.

function f(x)
quadgk(y->1/g(y),0,x)[1]  # <=== outer integral
end

function g(y)
integrand(x) = x^2*sqrt(x^2*y^2+a)/(exp(sqrt(x^2+y^2))+a)
quadgk(integrand,0,Inf)[1]   # <=== inner integral
end

end


> @timeit test.f(1.)
100 loops, best of 3: 3.10 ms per loop




Does anyone have any tips that squeezes a little more out of this code? I 
have run ProfileView on it, and although I'm not sure I fully understand 
how to read its output, I think it's saying the majority of runtime is 
spent in quadgk itself. So perhaps I should look into using a different 
integration library? 

Thanks for any help. 



[julia-users] Re: Custom string escaping in docstrings

2016-06-16 Thread Marius Millea
Ah, works perfectly, thanks!
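For reference, the suggested fix is just to swap the custom R"" macro for Base's doc"" string macro, which leaves $ and \ unescaped while still attaching the docstring:

```julia
doc"""
My docstring $a+\alpha$
"""
function myfunc()
end
```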

On Thursday, June 16, 2016 at 10:12:51 AM UTC+2, Michael Hatherly wrote:
>
> You should use the @doc_str macro when you have LaTeX characters that 
> need escaping. 
> http://docs.julialang.org/en/latest/manual/documentation/#syntax-guide
>
> — Mike
> On Thursday, 16 June 2016 01:51:27 UTC+2, Marius Millea wrote:
>>
>> My docstrings often contain LaTeX so they have $ and \ characters in 
>> them, so I'd like to not have to escape them manually every time. I'm 
>> trying to do so by defining an R_str macro, but it seems to prevent the 
>> docstring from attaching to its function. Is there a way to achieve this?
>>
>> macro R_str(s)
>> s
>> end
>>
>> R"""
>> My docstring $a+\alpha$
>> """
>> function myfunc()
>> end
>>
>> >?myfunc
>>   No documentation found.
>>
>>
>>
>> Thanks. 
>>
>

[julia-users] Custom string escaping in docstrings

2016-06-15 Thread Marius Millea
My docstrings often contain LaTeX so they have $ and \ characters in them, 
so I'd like to not have to escape them manually every time. I'm trying to 
do so by defining an R_str macro, but it seems to prevent the docstring 
from attaching to its function. Is there a way to achieve this?

macro R_str(s)
s
end

R"""
My docstring $a+\alpha$
"""
function myfunc()
end

>?myfunc
  No documentation found.



Thanks. 


[julia-users] Re: The simple @parallel example from the docs not working

2016-06-12 Thread Marius Millea
Ah, I missed in the docs that if you don't give a reduction operator it 
executes asynchronously, and you need to prepend @sync to make sure 
the workers have actually finished running the loop. 
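Spelled out, the @sync fix looks like this (0.5-era syntax, matching the snippet quoted below):

```julia
addprocs()                       # one worker per core
a = SharedArray(Float64, 10)

# @sync blocks until every worker has finished its chunk of the loop
@sync @parallel for i = 1:10
    a[i] = i
end

a                                # now filled with 1.0 through 10.0
```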




On Saturday, June 11, 2016 at 4:02:31 PM UTC+2, Marius Millea wrote:
>
> Kinda new to Julia so not sure where to post this but I'll start here. The 
> simple example from the docs involving @parallel and SharedArray doesn't 
> seem to work. I would think I should end up with a=1:10, but instead it's 
> all zeros. 
>
>_   _ _(_)_ |  A fresh approach to technical computing
>   (_) | (_) (_)|  Documentation: http://docs.julialang.org
>_ _   _| |_  __ _   |  Type "?help" for help.
>   | | | | | | |/ _` |  |
>   | | |_| | | | (_| |  |  Version 0.5.0-dev+4553 (2016-06-06 02:05 UTC)
>  _/ |\__'_|_|_|\__'_|  |  Commit f4cb80b (5 days old master)
> |__/   |  x86_64-unknown-linux-gnu
>
> julia> addprocs()
> 8-element Array{Int64,1}:
>  2
>  3
>  4
>  5
>  6
>  7
>  8
>  9
>
> julia> begin
>a = SharedArray(Float64,10)
>@parallel for i=1:10
>  a[i] = i
>end
>end
> 8-element Array{Any,1}:
>  Future(2,1,26,#NULL)
>  Future(3,1,27,#NULL)
>  Future(4,1,28,#NULL)
>  Future(5,1,29,#NULL)
>  Future(6,1,30,#NULL)
>  Future(7,1,31,#NULL)
>  Future(8,1,32,#NULL)
>  Future(9,1,33,#NULL)
>
> julia> a
> 10-element SharedArray{Float64,1}:
>  0.0
>  0.0
>  0.0
>  0.0
>  0.0
>  0.0
>  0.0
>  0.0
>  0.0
>  0.0
>
>
> This is using a nightly build of v0.5.0, which I figured I should use since 
> this exact example is included in the 0.5.0 docs but not the 0.4.5 ones. In 
> any case, from what I can gather this *should* also work on 0.4.5, and I've 
> tried it and the results is identical as above. If I just put these 
> commands in a script it also doesn't work. One note, if I just execute each 
> command on its own line, rather than grouping them with the begin, then it 
> works. 
>
> Any ideas what's going on? 
>


[julia-users] The simple @parallel example from the docs not working

2016-06-11 Thread Marius Millea
Kinda new to Julia so not sure where to post this but I'll start here. The 
simple example from the docs involving @parallel and SharedArray doesn't 
seem to work. I would think I should end up with a=1:10, but instead it's all 
zeros. 

   _   _ _(_)_ |  A fresh approach to technical computing
  (_) | (_) (_)|  Documentation: http://docs.julialang.org
   _ _   _| |_  __ _   |  Type "?help" for help.
  | | | | | | |/ _` |  |
  | | |_| | | | (_| |  |  Version 0.5.0-dev+4553 (2016-06-06 02:05 UTC)
 _/ |\__'_|_|_|\__'_|  |  Commit f4cb80b (5 days old master)
|__/   |  x86_64-unknown-linux-gnu

julia> addprocs()
8-element Array{Int64,1}:
 2
 3
 4
 5
 6
 7
 8
 9

julia> begin
   a = SharedArray(Float64,10)
   @parallel for i=1:10
 a[i] = i
   end
   end
8-element Array{Any,1}:
 Future(2,1,26,#NULL)
 Future(3,1,27,#NULL)
 Future(4,1,28,#NULL)
 Future(5,1,29,#NULL)
 Future(6,1,30,#NULL)
 Future(7,1,31,#NULL)
 Future(8,1,32,#NULL)
 Future(9,1,33,#NULL)

julia> a
10-element SharedArray{Float64,1}:
 0.0
 0.0
 0.0
 0.0
 0.0
 0.0
 0.0
 0.0
 0.0
 0.0


This is using a nightly build of v0.5.0, which I figured I should use since this 
exact example is included in the 0.5.0 docs but not the 0.4.5 ones. In any 
case, from what I can gather this *should* also work on 0.4.5, and I've 
tried it and the result is identical as above. If I just put these 
commands in a script it also doesn't work. One note, if I just execute each 
command on its own line, rather than grouping them with the begin, then it 
works. 

Any ideas what's going on?