> Would eval'ing the type inside the macro work? This shows [:x, :y]
>
>
This only works if A and type_fields are defined in the same module, though.
Although to be honest it surprised me a bit that it works at all; I guess
the type definitions are evaluated prior to macro expansion?
I think there's at least one scenario where eval-in-a-macro is not a
mistake, namely when you want to generate some code that depends on 1) some
passed-in expression and 2) something which can only be known at runtime.
Here's my example:
The macro (@self) which I'm writing takes a type name
I think that's the same solution
suggested
here https://github.com/JuliaLang/julia/issues/2386#issuecomment-13966397
Marius
On Tuesday, September 27, 2016 at 2:36:48 PM UTC+2, Jussi Piitulainen wrote:
>
> You might be able to wrap your expression so as to create a function
> instead, and
>
> Macros are functions evaluated at parse-time. The runtime scope doesn't
> even exist when the macro is called.
That's right, the answer may well have nothing to do with macros (maybe I
obscured the question by even mentioning them in an attempt to give bigger
context to what I'm trying
And just to be clear, by "current scope" here I mean the scope of where the
code from this macro is getting "pasted", not the macro scope.
On Tuesday, September 27, 2016 at 11:28:40 AM UTC+2, Marius Millea wrote:
>
> Hi, is there a way to "eval" someth
Hi, is there a way to "eval" something in the current scope? My problem is
the following, I've written a macro that, inside the returned expression,
builds an expression which I need to eval. It looks like this,
macro foo()
    quote
        ex = ...
        eval_in_current_scope(ex)
    end
end
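A minimal sketch of why this pattern trips people up (the macro name here is made up for illustration): `eval` always runs in the enclosing module's global scope, never in the scope where the macro's output is pasted.

```julia
macro evalit(ex)
    quote
        eval($(QuoteNode(ex)))  # runs in the module's global scope
    end
end

z = 10                     # a global: eval can see this
f() = (z = 1; @evalit z)   # the local z is invisible to eval
f()  # returns 10, not 1
```

This is why the replies steer toward building a function at expansion time instead of eval'ing at runtime.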
I can't figure out why this doesn't work:
julia> macro outer()
           quote
               macro inner()
               end
               @inner
           end
       end
julia> @outer
ERROR: UndefVarError: @inner not defined
Could it be a bug (I'm on 0.5) or am I missing something?
roblem with local variables you mention though? I can't think
of where this wouldn't work.
On Sunday, September 25, 2016 at 2:08:46 PM UTC+2, Yichao Yu wrote:
>
> On Sun, Sep 25, 2016 at 7:25 AM, Marius Millea <marius...@gmail.com
> > wrote:
> > I can store a macro to a var
Now that you mention it I'm not sure why I thought returning :($(esc(ex)))
was better than esc(ex), I think they give identical results in this case
(maybe all cases?).
But at any rate, that doesn't affect this problem since both do give the
identical result. The problem seems to be that the
I can store a macro to a variable (let's use the identity macro "id" as an
example),
julia> idmacro = macro id(ex)
           :($(esc(ex)))
       end
@id (macro with 1 method)
How can I use this macro now? I can *almost* do it by hand by passing an
expression as an argument and eval'ing the
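For what it's worth, a hedged sketch of invoking the stored macro by hand: the stored value is the macro's underlying function, and on Julia 1.x that function takes extra `__source__` and `__module__` arguments before the expression (the 0.5-era version took only the expression).

```julia
idmacro = macro id(ex)
    :($(esc(ex)))
end

# Julia 1.x macro functions have signature (__source__, __module__, args...):
expanded = idmacro(LineNumberNode(0), Main, :(1 + 2))
eval(expanded)  # evaluates the expanded expression, here 1 + 2
```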
Looks great, thanks for this. Dropped it in place of Autoreload.jl and it works
as advertised from what I've seen thus far. I had been hoping something
like Autoreload.jl would stick around and be maintained, I find
Jupyter+Autoreload makes for a really pleasant workflow.
Marius
On Tuesday
+1 for Discourse, which I could have done without spamming the list with
another message if this were Discourse :)
I'd like to access global variables from the default values of keyword
arguments, e.g.:
x = 3
function f(; x = x)  # <- this default value of x should refer to the global x, which is 3
    ...
end
Is there any way to do this? I had guessed the following might work but it
doesn't:
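The usual reason this fails is that the right-hand `x` in `x=x` already refers to the keyword being defined, not the global. A hedged workaround sketch (the alias name is my own invention): bind the global under a different name first.

```julia
x = 3
const global_x = x            # alias the global under an unambiguous name

function f(; x = global_x)    # default now clearly refers to the global
    x + 1
end

f()        # uses the default, 3 + 1
f(x = 10)  # explicit keyword overrides it, 10 + 1
```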
Is this the expected behavior?
julia> mapslices(x->tuple(x), [1 2; 3 4], 1)
1×2 Array{Tuple{Array{Int64,1}},2}:
([2,4],) ([2,4],)
julia> mapslices(x->tuple(x...), [1 2; 3 4], 1)
1×2 Array{Tuple{Int64,Int64},2}:
(1,3) (2,4)
The first case certainly came as pretty unexpected to me. Does it
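The usual explanation for this symptom (worth hedging, but it matches the output): `mapslices` reuses a single buffer for each slice, so `tuple(x)` stores references to the same array, which ends up holding the last column. Copying the slice avoids the aliasing; note modern Julia spells the dimension argument `dims = 1`.

```julia
A = [1 2; 3 4]

# tuple(x) alone can capture the reused slice buffer;
# copying the slice makes each tuple own its data:
mapslices(x -> tuple(copy(x)), A, dims = 1)  # ([1,3],) ([2,4],)
```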
Thanks, I did notice that, but regardless this shouldn't affect the scaling
with NCPUs, and in fact as you say, it doesn't change performance at all.
On Monday, August 29, 2016 at 7:27:44 PM UTC+2, Diego Javier Zea wrote:
>
> Looks like the type of *d_cl* isn't inferred correctly. *d_cl =
ssuecomment-241911387>
> and see if it helps.
>
> --Tim
>
> On Monday, August 29, 2016 9:22:09 AM CDT Marius Millea wrote:
> > I've parallelized some code with @threads, but instead of a factor NCPUs
> > speed improvement (for me, 8), I'm seeing rather a bit under a fa
I've parallelized some code with @threads, but instead of a factor NCPUs
speed improvement (for me, 8), I'm seeing rather a bit under a factor 2. I
suppose the answer may be that my bottleneck isn't computation, rather
memory access. But during running the code, I see my CPU usage go to 100%
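A rough way to separate compute-bound from memory-bound behavior (a sketch, not a rigorous benchmark; the kernel here is hypothetical) is to time the same threaded loop at different thread counts and see where the speedup saturates:

```julia
using Base.Threads

function kernel!(out, x)
    @threads for i in eachindex(x)
        out[i] = sin(x[i])^2   # compute-heavy body; a pure copy would saturate sooner
    end
end

x = rand(10^7); out = similar(x)
kernel!(out, x)        # warm up (compile)
@time kernel!(out, x)  # rerun with JULIA_NUM_THREADS=1,2,4,8 and compare
```

If doubling threads stops helping well before the core count, memory bandwidth is the likely ceiling.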
I think you are right btw, the compiler got rid of the wrapper function for
the "+" call, since all I see above is Base.add_float.
On Sun, Jul 24, 2016 at 4:55 PM, Marius Millea <mariusmil...@gmail.com>
wrote:
> Here's my very simple test case. I will also try on my act
, best of 3: 100.64 ns per loop
# Variables:
# sf::SelfFunctions.SelfFunction{###f1_selfimpl#271}
# args::Tuple{mytype}
#
# Body:
# begin
# # meta: location /home/marius/workspace/selffunctions/test.jl
##f1_selfimpl#271 11
# SSAValue(1) =
(Core.getfield)((Core.getfield)(args
Very nice! Didn't understand your hint earlier but now I do!
My only problem with this solution is the (perhaps unavoidable) run-time
overhead, since every single function call gets wrapped in one extra
function call. With a very simple test function that just does some
arithmetic, I'm seeing
)
a = 3 #can also assign without repacking
end
which seems slightly less hacky than what I'm doing but serves a similar
purpose.
Marius
On Fri, Jul 22, 2016 at 9:01 AM, Mauro <mauro...@runbox.com> wrote:
>
> On Fri, 2016-07-22 at 01:02, Marius Millea <mariusmil...@gmail.com&g
On Thu, Jul 21, 2016 at 10:33 PM, Yichao Yu <yyc1...@gmail.com> wrote:
> On Thu, Jul 21, 2016 at 4:01 PM, Marius Millea <mariusmil...@gmail.com>
> wrote:
> > In an attempt to make some numerical code (i.e. something that's basically
> just
> > a bunch of equa
elf
>#global self[] = mytype(200)
>#... code
># finally
>#global self[] = ...restore previous value
># end
>...
> end
>
> I used this idiom in Common Lisp all the time. It's strictly equivalent to
> passing the object around
In an attempt to make some numerical code (i.e. something that's basically
just a bunch of equations) more readable, I am trying to write a macro that
lets me write the code more succinctly. The code uses parameters from some
data structure, call it "mytype", so it's littered with "t.a", "t.b",
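A hedged sketch of the kind of macro being described (the name `@with_t` and the fixed variable `t` are my own assumptions, not the thread's actual code): rewrite bare field names into `t.<field>` accesses using `fieldnames`. As noted elsewhere in the thread, this only works if `mytype` is already defined in the same module when the macro expands.

```julia
struct mytype
    a::Float64
    b::Float64
end

macro with_t(ex)
    fields = fieldnames(mytype)               # needs mytype defined first
    rewrite(x) = x isa Symbol && x in fields ? :(t.$x) : x
    rewrite(e::Expr) = Expr(e.head, map(rewrite, e.args)...)
    esc(rewrite(ex))                          # resolve t and fields in caller scope
end

f(t::mytype) = @with_t a + 2b    # expands to t.a + 2 * t.b
f(mytype(1.0, 3.0))              # 1.0 + 2 * 3.0 = 7.0
```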
Done, see https://github.com/JuliaLang/julia/issues/17509
On Wednesday, July 20, 2016 at 5:21:23 PM UTC+2, Cedric St-Jean wrote:
>
> That does look suspicious. Maybe file an issue if there isn't one?
>
> On Wed, Jul 20, 2016 at 4:31 AM, Marius Millea <marius...@gmail.com
&g
Cedric St-Jean wrote:
>
> Yes, that's what I meant. Presumably the multi-proc machinery is getting
> compiled at the first `using`. It's the same reason why "println(2+2)" is
> very slow on first use, but fast afterwards.
>
> On Tue, Jul 19, 2016 at 10:41 AM, Ma
Ah, that makes sense. So I tried with the latest 0.5 nightly and I go from
~3ms to ~1ms, a nice improvement! (different than what Andrew reported
above, so perhaps something changed over the last few nights though)
Unfortunately ProfileView is giving me an error on 0.5, but from printing
the profile
function.
I don't know enough about Julia to know what that function does. Could that
be a sign there's something non-optimal going on? (To profile this I am
doing @profile for _=1:1; test.f(1.); end to get enough samples, is
that correct?)
Marius
On Sunday, June 19, 2016 at 4:41:47
using with Cython/Fortran? Is it using
> the same algorithm as quadgk? Your code seems so simple I imagine this is
> just comparing the quadrature implementations :)
>
> On Saturday, June 18, 2016 at 5:53:57 AM UTC-7, Marius Millea wrote:
>>
>> Hi all, I'm sort of just start
code, as well as any function call overhead type thing. With this
metric, the Julia code was close, but it was the slowest (although of
course far more succinct and easy to read).
Marius
On Saturday, June 18, 2016 at 7:46:35 PM UTC+2, Gabriel Gellner wrote:
>
> What integration l
r x is a Float64, the output should
always be Float64 also. In any case I did check switching them to 1. and
0.'s but that also has no effect.
Marius
On Saturday, June 18, 2016 at 4:08:59 PM UTC+2, Eric Forgy wrote:
>
> Try code_warntype. I'm guessing you have some type instabilities, e.g. I
&
to quadgk. I'm not
> an expert, but I've heard this is slow in v0.4 and below, but should be
> fast in v0.5. Just a though.
>
> On Saturday, June 18, 2016 at 8:53:57 PM UTC+8, Marius Millea wrote:
>>
>> Hi all, I'm sort of just starting out with Julia, I'm trying to get gau
Hi all, I'm sort of just starting out with Julia, I'm trying to get a gauge
of how fast I can make some code of which I have Cython and Fortran
versions, to see if I should continue down the path of converting more of my
stuff to Julia (which in general I'd very much like to, if I can get it
fast
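For reference, the Julia side of such a comparison is typically just a call to `quadgk` (in current Julia it lives in the QuadGK.jl package; in the 0.5 era it was in Base), shown here on a hypothetical integrand:

```julia
using QuadGK   # was Base.quadgk in Julia 0.5

f(x) = exp(-x^2)
val, err = quadgk(f, 0.0, 1.0)  # adaptive Gauss–Kronrod: estimate and error bound
```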
> On Thursday, 16 June 2016 01:51:27 UTC+2, Marius Millea wrote:
>>
>> My docstrings often contain Latex so they have $ and \ characters in
>> them, so I'd like to not have to escape them manually every time. I'm
>> trying to do so by defining an R_str macro, but it seem
My docstrings often contain Latex so they have $ and \ characters in them,
so I'd like to not have to escape them manually every time. I'm trying to
do so by defining an R_str macro, but it seems to prevent the docstring
from attaching to its function. Is there a way to achieve this?
macro
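One hedged workaround sketch (the docstring text is made up): define the raw-string macro, then attach the docstring explicitly with `@doc` after the function exists. Modern Julia also ships a built-in `raw"..."` string macro for exactly this purpose.

```julia
macro R_str(s)
    s   # string macros receive the raw contents: no $ or \ processing
end

f(x) = exp(-x^2)
@doc R"Computes $e^{-x^2}$; note the unescaped \LaTeX." f
```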
Ah, I missed in the docs that if you don't give a reduction operator it
executes asynchronously and you need to prepend @sync to make sure
the workers have actually finished running the loop.
On Saturday, June 11, 2016 at 4:02:31 PM UTC+2, Marius Millea wrote:
>
> Kinda new to
Kinda new to Julia so not sure where to post this but I'll start here. The
simple example from the docs involving @parallel and SharedArray doesn't
seem to work. I would think I should end up with a = 1:10, but instead it's
all zeros.
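The fix mentioned in the follow-up, sketched in the 0.5-era API this thread uses (in current Julia the equivalent is `@distributed` from the Distributed stdlib together with the SharedArrays package):

```julia
a = SharedArray(Float64, 10)   # 0.5 syntax; SharedArray{Float64}(10) today
@sync @parallel for i = 1:10   # without a reducer @parallel is async; @sync waits
    a[i] = i
end
# once the workers finish, a holds 1.0:10.0 instead of zeros
```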
Hi everybody,
I wrote a simulation code for a plasma actuator, based on my previous
code written in Python, using the Suzen model combined with the Navier-Stokes
equations. It's about 5 times faster than Python but the simulation still
takes days to finish. I am new to Julia and I tried to follow