[julia-users] Re: optimizing Julia code for numerical integration of PDE

2015-12-18 Thread DNF
Your last line is:
u = real(ifft(u))

In that case you seem to be throwing away all the intermediate 
calculations. Should it be:
u = real(ifft(uf))
?


Re: [julia-users] Writing a function for two different types, avoiding code duplication

2015-12-18 Thread DNF
On Thursday, December 17, 2015 at 12:05:59 PM UTC+1, ami...@gmail.com wrote:
>
> I should mention, however, that the exact solution I needed was: 
> myf{T1,T2}(x::Union{T1,T2}).
>

What was wrong with myf(x::Union{T1,T2})? Looks right to me. Why do you 
need to parameterize?


[julia-users] Re: help speeding up nonparametric regression?

2015-12-18 Thread michael.creel
I did profiling, and ProfileView.jl worked fine, and is definitely pretty 
slick. There were no surprises, though: the sections with the @time-ings 
are the ones that are costly.

On Thursday, December 17, 2015 at 6:27:11 PM UTC+1, michae...@gmail.com 
wrote:
>
> I tried using the profiler with another problem a few months ago, and 
> ProfileView was not working for me then. I will give it another try. 
> However, the parts of the code that impact the timing are pretty narrowly 
> identified already. I have read the performance guide pretty carefully, and 
> I don't see how to improve the current code with its suggestions. I suspect 
> that trying to avoid using large arrays, and doing more with loops, might 
> help. That would be a change of strategy, though, rather than an 
> optimization of the current approach.
>
> On Thursday, December 17, 2015 at 3:54:07 PM UTC+1, Kristoffer Carlsson 
> wrote:
>>
>> Why haven't you tried to profile it? That's the first thing that 
>> anyone trying to help you would do. Use 
>> https://github.com/timholy/ProfileView.jl to see what is slow and see if it 
>> is explained in the performance guide.
>>
>> Then you can ask a much better question, like "why is this statement 
>> slow" instead of posting a whole function and asking someone to optimize the 
>> whole thing.
>>
>

Re: [julia-users] Define a function for two unrelated types

2015-12-18 Thread Milan Bouchet-Valat
On Thursday, December 17, 2015 at 15:16 -0800, amik...@gmail.com wrote:
> This reply is on a new thread but since you mentioned the solution
> with duck-typing, of which I was aware, I have a related question.
> Assuming I know that this function will be applied to only types T1
> or T2, is it better to use the second solution with Union in terms of
> performance? I would assume so but am no expert in this field...
No, type annotations do not make any performance difference here. They
simply restrict the types that a caller can pass, that's all.


Regards
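
To make the point concrete, here is a minimal illustration (a sketch, not from the thread): the annotated and unannotated methods compile to the same specialized code for a given argument type.

f_any(x) = 2x
f_union(x::Union{Int, Float64}) = 2x

f_any(3) == f_union(3)   # true; both calls specialize on Int
# @code_llvm f_any(3) and @code_llvm f_union(3) emit the same body;
# the annotation only restricts which argument types callers may pass.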


[julia-users] Noob question on array = 502-element Array{FixedSizeArrays.Point{3,Float32},1}

2015-12-18 Thread kleinsplash
This is a 1D array, each of the 502 elements is 
a FixedSizeArrays.Point{3,Float32} with 3 values. How do I access the 
values? 



[julia-users] Re: optimizing Julia code for numerical integration of PDE

2015-12-18 Thread John Gibson
Good point, thanks. I am still getting used to the semantics of 
assigning/copying/referencing Julia arrays.

On Friday, December 18, 2015 at 1:13:38 AM UTC-5, Lutfullah Tomak wrote:
>
># timestepping loop
> for n = 0:Nt
>
> Nn1f = copy(Nnf);
>
> Though unrelated to unrolling, here copy seems to be redundant since Nnf 
> is assigned to another array after that.
>
>

[julia-users] Re: optimizing Julia code for numerical integration of PDE

2015-12-18 Thread John Gibson
Yes, thanks! 

I also mistakenly used n as the index variable in a pair of nested loops, 
but the loop-index scope rules seem to have saved me there.

On Friday, December 18, 2015 at 3:01:02 AM UTC-5, DNF wrote:
>
> Your last line is:
> u = real(ifft(u))
>
> In that case you seem to be throwing away all the intermediate 
> calculations. Should it be:
> u = real(ifft(uf))
> ?
>



Re: [julia-users] Writing a function for two different types, avoiding code duplication

2015-12-18 Thread amiksvi
DNF, that's because, for practical reasons, this function is defined inside 
a module, but the types are defined by the user outside. So when Julia uses 
the module, it doesn't yet know about T1 and T2.
Lutfullah, this solution doesn't work for the same reason.


Re: [julia-users] Writing a function for two different types, avoiding code duplication

2015-12-18 Thread Lutfullah Tomak
I guess I missed the part about the types being defined outside, but isn't it 
the same as not imposing any type on x, since T1 and T2 aren't defined before 
the module import? I.e. myf{T1,T2}(x::Union{T1,T2}) vs myf{T<:ANY}(x::T)

Re: [julia-users] Noob question on array = 502-element Array{FixedSizeArrays.Point{3,Float32},1}

2015-12-18 Thread Tim Holy
Try x[17][2]. Also, if you're not familiar with it, the `reinterpret` function 
can sometimes be your friend---you can use it to convert between a 
Vector{Point{3}} and a matrix of size 3-by-502 without copying the memory 
(though it allocates a bit for the "wrapper").

Best,
--Tim

On Friday, December 18, 2015 01:17:57 AM kleinsplash wrote:
> This is a 1D array, each of the 502 elements is
> a FixedSizeArrays.Point{3,Float32} with 3 values. How do I access the
> values?
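
A short sketch of both suggestions (assuming FixedSizeArrays' usual Point constructors):

using FixedSizeArrays

pts = Point{3,Float32}[Point(1f0, 2f0, 3f0) for i in 1:502]

pts[17]      # the 17th Point
pts[17][2]   # its second coordinate

# View the same memory as a 3x502 Float32 matrix: no copy of the data,
# only a small allocation for the Array wrapper.
M = reinterpret(Float32, pts, (3, 502))
M[2, 17] == pts[17][2]   # true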



[julia-users] Re: Memory leak in Task and fatal error

2015-12-18 Thread DeadbraiN


On Wednesday, December 16, 2015 at 17:02:11 UTC+2, DeadbraiN wrote:
>
> Hi everybody,
>
> It looks like I found a memory leak in Task-related code, and a fatal error. 
> First, please look at this example:
> function leak()
>     for i=1:1
>         f = eval(:(function() produce() end))
>         t = @async f()
>         consume(t)
>         # stop the Task
>         t.exception = null
>         try
>             yieldto(t)
>         end
>     end
>     gc()
> end
> Every time I run the leak() function, the memory usage of the julia process 
> increases by ~35 MB on my machine.
>
> The second issue (the fatal error) occurs with the same code, but one simple 
> change:
> function leak()
>     for i=1:1
>         f = eval(:(function() produce() end))
>         t = @async f()
>         #consume(t)  # this is a change
>         # stop the Task
>         t.exception = null
>         try
>             yieldto(t)
>         end
>     end
>     gc()
> end
> This change produces an error like this:
>
> fatal: error thrown and no exception handler available.
> null
> rec_backtrace at /usr/bin/../lib/x86_64-linux-gnu/julia/libjulia.so 
> (unknown line)
> jl_throw at /usr/bin/../lib/x86_64-linux-gnu/julia/libjulia.so (unknown 
> line)
> unknown function (ip: 0x7fc9e946f372)
> yieldto at ./task.jl:71
> jl_apply_generic at /usr/bin/../lib/x86_64-linux-gnu/julia/libjulia.so 
> (unknown line)
> wait at ./task.jl:371
> task_done_hook at task.jl:174
> jl_apply_generic at /usr/bin/../lib/x86_64-linux-gnu/julia/libjulia.so 
> (unknown line)
> unknown function (ip: 0x7fc9e946eb4a)
> unknown function (ip: (nil))
>
> And finally, it works fine (without memory leaks) with this small change:
>
> function leak()
>     for i=1:10 # added one zero symbol at the end
>         f = function() produce() end # removed eval()
>         t = @async f()
>         consume(t)
>         # stop the Task
>         t.exception = null
>         try
>             yieldto(t)
>         end
>     end
>     gc()
> end
>
> So, my question is: what should I change to prevent memory leaks?
> Thanks
>
> julia> versioninfo()
> Julia Version 0.4.2
> Commit bb73f34 (2015-12-06 21:47 UTC)
> Platform Info:
>   System: Linux (x86_64-linux-gnu)
>   CPU: Intel(R) Core(TM) i7-4700HQ CPU @ 2.40GHz
>   WORD_SIZE: 64
>   BLAS: libopenblas (NO_LAPACK NO_LAPACKE DYNAMIC_ARCH NO_AFFINITY Haswell)
>   LAPACK: liblapack.so.3
>   LIBM: libopenlibm
>   LLVM: libLLVM-3.3
>
>
> This is an optimized variant of the memory-leak example:
>
function leak()
    for i=1:10
        t = Task(eval(:(function() produce() end)))
        consume(t)

        try
            t.exception = null
            yieldto(t)
        end
    end
    gc()
end

 


[julia-users] Re: Memory leak in Task and fatal error

2015-12-18 Thread DeadbraiN
This is a shorter version of the memory-leak example:
function leak()
    for i=1:10
        t = Task(eval(:(function() produce() end)))
        consume(t)

        try
            t.exception = null
            yieldto(t)
        end
    end
    gc()
end




[julia-users] why are various NaNs isequal?

2015-12-18 Thread Tamas Papp
For example,

julia> isequal(NaN,NaN16)
true

julia> isequal(NaN,NaN32)
true

This is of course documented in the manual, what I would like to
understand is the motivation for this design decision. Some languages
have a progression of equality predicates --- eg Common Lisp has EQ,
EQL, EQUAL, and EQUALP, each more permissive than the last. But ==
and isequal do not nest, since NaN's are of course not == to anything
under IEEE, even themselves.

Before reading about this in the manual, I thought of isequal as object
identity ("A and B are equal when they cannot be distinguished"), but
apparently that's the wrong concept.

Just curious -- there must be a good reason and I would like to know it.

Best,

Tamas


Re: [julia-users] Noob question on array = 502-element Array{FixedSizeArrays.Point{3,Float32},1}

2015-12-18 Thread kleinsplash
Awesome! The reinterpret did it... very much appreciated. 

On Friday, 18 December 2015 13:42:15 UTC+2, Tim Holy wrote:
>
> Try x[17][2]. Also, if you're not familiar with it, the `reinterpret` 
> function 
> can sometimes be your friend---you can use it to convert between a 
> Vector{Point{3}} and a matrix of size 3-by-502 without copying the memory 
> (though it allocates a bit for the "wrapper"). 
>
> Best, 
> --Tim 
>
> On Friday, December 18, 2015 01:17:57 AM kleinsplash wrote: 
> > This is a 1D array, each of the 502 elements is 
> > a FixedSizeArrays.Point{3,Float32} with 3 values. How do I access the 
> > values? 
>
>

[julia-users] Re: why are various NaNs isequal?

2015-12-18 Thread Jeffrey Sarnoff
here is the relevant discussion:  
https://github.com/JuliaLang/julia/issues/5314 

On Friday, December 18, 2015 at 8:09:10 AM UTC-5, Tamas Papp wrote:
>
> For example, 
>
> julia> isequal(NaN,NaN16) 
> true 
>
> julia> isequal(NaN,NaN32) 
> true 
>
> This is of course documented in the manual, what I would like to 
> understand is the motivation for this design decision. Some languages 
> have a progression of equality predicates --- eg Common Lisp has EQ, 
> EQL, EQUAL, and EQUALP, each more permissive than the last. But == 
> and isequal do not nest, since NaN's are of course not == to anything 
> under IEEE, even themselves. 
>
> Before reading about this in the manual, I thought of isequal as object 
> identity ("A and B are equal when they cannot be distinguished"), but 
> apparently that's the wrong concept. 
>
> Just curious -- there must be a good reason and I would like to know it. 
>
> Best, 
>
> Tamas 
>


[julia-users] Re: why are various NaNs isequal?

2015-12-18 Thread Jeffrey Sarnoff
(more) 
this discussion   https://github.com/JuliaLang/julia/issues/8343

On Friday, December 18, 2015 at 9:23:48 AM UTC-5, Jeffrey Sarnoff wrote:
>
> here is the relevant discussion:  
> https://github.com/JuliaLang/julia/issues/5314 
>
> On Friday, December 18, 2015 at 8:09:10 AM UTC-5, Tamas Papp wrote:
>>
>> For example, 
>>
>> julia> isequal(NaN,NaN16) 
>> true 
>>
>> julia> isequal(NaN,NaN32) 
>> true 
>>
>> This is of course documented in the manual, what I would like to 
>> understand is the motivation for this design decision. Some languages 
>> have a progression of equality predicates --- eg Common Lisp has EQ, 
>> EQL, EQUAL, and EQUALP, each more permissive than the last. But == 
>> and isequal do not nest, since NaN's are of course not == to anything 
>> under IEEE, even themselves. 
>>
>> Before reading about this in the manual, I thought of isequal as object 
>> identity ("A and B are equal when they cannot be distinguished"), but 
>> apparently that's the wrong concept. 
>>
>> Just curious -- there must be a good reason and I would like to know it. 
>>
>> Best, 
>>
>> Tamas 
>>
>

Re: [julia-users] why are various NaNs isequal?

2015-12-18 Thread Erik Schnetter
isequal converts ("promotes") both arguments to a common type, if they have
different types. Thus NaN16 and NaN32 are converted to NaN (i.e. NaN64),
hence isequal returns true. See `isequal(5, 5.0)`.

The two equality notions in Julia do nest: `isequal` and `is`, where the
latter is more strict than the former. The operator `==` is basically
identical to `isequal`, except it also handles IEEE semantics, which
technically isn't a valid notion of equality, and thus it cannot nest.

-erik


On Fri, Dec 18, 2015 at 8:09 AM, Tamas Papp wrote:

> For example,
>
> julia> isequal(NaN,NaN16)
> true
>
> julia> isequal(NaN,NaN32)
> true
>
> This is of course documented in the manual, what I would like to
> understand is the motivation for this design decision. Some languages
> have a progression of equality predicates --- eg Common Lisp has EQ,
> EQL, EQUAL, and EQUALP, each more permissive than the last. But ==
> and isequal do not nest, since NaN's are of course not == to anything
> under IEEE, even themselves.
>
> Before reading about this in the manual, I thought of isequal as object
> identity ("A and B are equal when they cannot be distinguished"), but
> apparently that's the wrong concept.
>
> Just curious -- there must be a good reason and I would like to know it.
>
> Best,
>
> Tamas
>



-- 
Erik Schnetter
http://www.perimeterinstitute.ca/personal/eschnetter/


Re: [julia-users] why are various NaNs isequal?

2015-12-18 Thread Stefan Karpinski
Hashing. You want isequal(NaN,NaN) to be true so that the NaN hash bucket
doesn't end up being one big endless collision without equality. Also since you have
isequal(x, convert(T, x)) for most other numeric x, it only seems sane to
have the same thing hold for NaNs of different types.
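
A small illustration of the point (a sketch, not from the original post):

# isequal/hash consistency lets NaNs of any width behave as one Dict key:
d = Dict{Any,Int}()
d[NaN32] = 1
d[NaN]                     # => 1, since isequal(NaN, NaN32) is true
hash(NaN) == hash(NaN32)   # true, by the isequal/hash contract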

On Fri, Dec 18, 2015 at 9:42 AM, Erik Schnetter wrote:

> isequal converts ("promotes") both arguments to a common type, if they
> have different types. Thus NaN16 and NaN32 are converted to NaN (i.e.
> NaN64), hence isequal returns true. See `isequal(5, 5.0)`.
>
> The two equality notions in Julia do nest: `isequal` and `is`, where the
> latter is more strict than the former. The operator `==` is basically
> identical to `isequal`, except it also handles IEEE semantics, which
> technically isn't a valid notion of equality, and thus it cannot nest.
>
> -erik
>
>
> On Fri, Dec 18, 2015 at 8:09 AM, Tamas Papp wrote:
>
>> For example,
>>
>> julia> isequal(NaN,NaN16)
>> true
>>
>> julia> isequal(NaN,NaN32)
>> true
>>
>> This is of course documented in the manual, what I would like to
>> understand is the motivation for this design decision. Some languages
>> have a progression of equality predicates --- eg Common Lisp has EQ,
>> EQL, EQUAL, and EQUALP, each more permissive than the last. But ==
>> and isequal do not nest, since NaN's are of course not == to anything
>> under IEEE, even themselves.
>>
>> Before reading about this in the manual, I thought of isequal as object
>> identity ("A and B are equal when they cannot be distinguished"), but
>> apparently that's the wrong concept.
>>
>> Just curious -- there must be a good reason and I would like to know it.
>>
>> Best,
>>
>> Tamas
>>
>
>
>
> --
> Erik Schnetter
> http://www.perimeterinstitute.ca/personal/eschnetter/
>


[julia-users] Re: help speeding up nonparametric regression?

2015-12-18 Thread Patrick Kofod Mogensen
I don't have a computer nearby, but it is not surprising that calling npreg 
takes a lot of the total time, since that is basically most of your script. 
Which lines in npreg are the slow ones? Or maybe I didn't see where you put 
the @time's.

[julia-users] Re: help speeding up nonparametric regression?

2015-12-18 Thread Kristoffer Carlsson
npreg spends almost all its time in multithreaded blas doing QR 
factorizations so there isn't much "tweaking" you can do to improve 
performance.

On Friday, December 18, 2015 at 9:11:07 AM UTC+1, michae...@gmail.com wrote:
>
> I did profiling, and ProfileView.jl worked fine, and is definitely pretty 
> slick. There were no surprises, though, the sections with the @time-ings 
> are the ones that are costly.
>
> On Thursday, December 17, 2015 at 6:27:11 PM UTC+1, michae...@gmail.com 
> wrote:
>>
>> I tried using the profiler with another problem a few months ago, and 
>> ProfileView was not working for me then. I will give it another try. 
>> However, the parts of the code that impact the timing are pretty narrowly 
>> identified already. I have read the performance guide pretty carefully, and 
>> I don't see how to improve the current code with its suggestions. I suspect 
>> that trying to avoid using large arrays, and doing more with loops, might 
>> help. That would be a change of strategy, though, rather than an 
>> optimization of the current approach.
>>
>> On Thursday, December 17, 2015 at 3:54:07 PM UTC+1, Kristoffer Carlsson 
>> wrote:
>>>
>>> Why haven't you tried to profile it? That's the first thing that 
>>> anyone trying to help you would do. Use 
>>> https://github.com/timholy/ProfileView.jl to see what is slow and see if 
>>> it is explained in the performance guide.
>>>
>>> Then you can ask a much better question, like "why is this statement 
>>> slow" instead of posting a whole function and asking someone to optimize the 
>>> whole thing.
>>>
>>

Re: [julia-users] @code_native may emit wrong register name

2015-12-18 Thread Stefan Karpinski
No worries.

On Thu, Dec 17, 2015 at 11:42 PM, Lutfullah Tomak wrote:

> Silly me, it's correct. Since I'm returning a tuple, the first argument to
> the actual function is a pointer to this tuple, so %r8d is the correct
> register. I only noticed it after using @code_llvm. I am sorry for the
> noise.


Re: [julia-users] Writing a function for two different types, avoiding code duplication

2015-12-18 Thread Stefan Karpinski
If the types haven't been defined at the point where you're defining the
method, how could this possibly be made to work? The only thing that could
work is to dispatch on some common supertype of T1 and T2, e.g. Any.

On Fri, Dec 18, 2015 at 6:26 AM, Lutfullah Tomak wrote:

> I guess I missed the part about the types being defined outside, but isn't it
> the same as not imposing any type on x, since T1 and T2 aren't defined before
> the module import? I.e. myf{T1,T2}(x::Union{T1,T2}) vs myf{T<:ANY}(x::T)
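
A minimal sketch of the supertype/duck-typing approach Stefan describes (the module name and the field `a` are hypothetical):

module MyMod
export myf
# No annotation (equivalently x::Any): the method is generic now and
# dispatches on whatever types the user defines later.
myf(x) = x.a + 1
end

using MyMod

type T1; a::Int; end       # user types, defined after the module
type T2; a::Float64; end

myf(T1(41))   # 42
myf(T2(1.5))  # 2.5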


[julia-users] Juno bundles for Julia v0.4

2015-12-18 Thread Mike Innes
Hey All,

Juno bundles including Julia v0.4 are now available on the Julia downloads
page. If you're still using Juno with
Julia v0.3 the upgrade is definitely recommended – among other things,
features like precompilation make using packages like Gadfly much easier.
Enjoy!

Cheers,
Mike


[julia-users] How do I know if there are any items in a PriorityQueue?

2015-12-18 Thread Tomas Lycken
As the title says: how can I check if there are any items left to dequeue 
from a PriorityQueue (the one from Base.Collections)?

// T


Re: [julia-users] How do I know if there are any items in a PriorityQueue?

2015-12-18 Thread Tim Holy
isempty?

--Tim

On Friday, December 18, 2015 09:19:12 AM Tomas Lycken wrote:
> As the title says: how can I check if there are any items left to dequeue
> from a PriorityQueue (the one from Base.Collections)?
> 
> // T
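
A quick usage sketch of Tim's suggestion (Base.Collections API as of Julia 0.4):

using Base.Collections

pq = PriorityQueue(Int, Float64)   # key => priority
enqueue!(pq, 1, 0.50)
enqueue!(pq, 2, 0.25)

while !isempty(pq)
    k = dequeue!(pq)   # lowest priority first
    println(k)         # prints 2, then 1
end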



[julia-users] inplace versions of .*=, ./= etc

2015-12-18 Thread Ethan Anderes


Hi everyone. I have a pretty basic question. Are there inplace vectorized 
operations for things like A .*= B, A ./= B when A and B are Array{Float64, 
d} for general d? I’m ok with writing these myself (and I love the fact 
that Julia allows me to do this and have it be fast) but I have the feeling 
that it must be somewhere in Base and I’m missing it. Also, with regard to 
writing my own version, I would like to avoid the tiny bit of additional 
technical debt incurred by defining these myself (and the fact that A .*= B 
and A ./= B are just so easy to read).

Note: I get an error if I do A[:] .*= B for Array{Float64, d}, d>1. I could 
do A[:,...,:] .*= B or A[:] .*= B[:], but the former needs to work for 
generic dimension and I’m worried the latter has issues with regard to 
looping, linear indexing, and still creates temporaries. Also, scale!(A,B) 
doesn’t work for me when both A and B are of the same dimension. 

This snippet gives the timings.


function myownscale!(A,B)
for ix in eachindex(A,B)
A[ix] = A[ix] * B[ix]
end
end

function test1!(A,B)
A[:] .*= B  # <-- gives error
end

function test2!(A,B)
A[:,:] .*= B  
end

function test3!(A,B)
A[:] .*= B[:]  
end

A, B = rand(1_000, 1_000), rand(1_000, 1_000);

test1!(A,B);  #<-- error

# warmup
@time test2!(A,B);
@time test3!(A,B);
@time myownscale!(A,B);

@time test2!(A,B);
@time test3!(A,B);
@time myownscale!(A,B);

The last three commands result in:

julia> @time test2!(A,B);
  0.011629 seconds (22 allocations: 15.260 MB, 34.11% gc time)

julia> @time test3!(A,B);
  0.007802 seconds (27 allocations: 22.889 MB, 19.09% gc time)

julia> @time myownscale!(A,B);
  0.001341 seconds (4 allocations: 160 bytes)

So, am I missing something in Base or is idiomatic Julia just defining 
these myself?
​


Re: [julia-users] inplace versions of .*=, ./= etc

2015-12-18 Thread Yichao Yu
On Fri, Dec 18, 2015 at 1:09 PM, Ethan Anderes wrote:
> Hi everyone. I have a pretty basic question. Are there inplace vectorized
> operations for things like A .*= B, A ./= B when A and B are Array{Float64,
> d} for general d? I’m ok with writing these myself (and I love the fact that
> Julia allows me to do this and have it be fast) but I have the feeling like
> it must be somewhere in Base and I’m missing it. Also, with regard to
> writing my own version, I would like to avoid the tiny bit of additional
> technical debt incurred by defining these myself (and the fact that A .*= B,
> A ./= B, are just so easy to read).

I usually just write out the loop myself. It has the additional benefit
of being able to add other operations at the same time, which I often
end up doing (e.g. A = A .* B * c + d instead).
Ref https://github.com/JuliaLang/julia/issues/249

P.S. you can add `@inbounds` to your `myownscale!` which might speed
it up by a factor of a few (due to automatic simd)
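
A sketch of the suggested tweak, assuming A and B have identical shape (the explicit @simd is an addition on top of the @inbounds suggestion and may or may not help depending on the element types):

function myownscale!(A, B)
    @inbounds @simd for ix in eachindex(A, B)
        A[ix] *= B[ix]
    end
    return A
end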

>
> Note: I get an error if I do A[:] .*= B for Array{Float64, d}, d>1. I could
> do A[:,...,:] .*= B or A[:] .*= B[:] but the former needs to work for
> generic dimension and I’m worried the latter has issues with regard to
> looping, linear indexing, and still creates temporaries. Also, scale!(A,B)
> doesn’t work for me when both A and B are of the same dimension.
>
> This snippet gives the timings.
>
> function myownscale!(A,B)
> for ix in eachindex(A,B)
> A[ix] = A[ix] * B[ix]
> end
> end
>
> function test1!(A,B)
> A[:] .*= B  # <-- gives error
> end
>
> function test2!(A,B)
> A[:,:] .*= B
> end
>
> function test3!(A,B)
> A[:] .*= B[:]
> end
>
> A, B = rand(1_000, 1_000), rand(1_000, 1_000);
>
> test1!(A,B);  #<-- error
>
> # warmup
> @time test2!(A,B);
> @time test3!(A,B);
> @time myownscale!(A,B);
>
> @time test2!(A,B);
> @time test3!(A,B);
> @time myownscale!(A,B);
>
> The last three commands result in:
>
> julia> @time test2!(A,B);
>   0.011629 seconds (22 allocations: 15.260 MB, 34.11% gc time)
>
> julia> @time test3!(A,B);
>   0.007802 seconds (27 allocations: 22.889 MB, 19.09% gc time)
>
> julia> @time myownscale!(A,B);
>   0.001341 seconds (4 allocations: 160 bytes)
>
> So, am I missing something in Base or is idiomatic Julia just defining these
> myself?


[julia-users] Re: PSA: Managing packages on JuliaBox (recover from notebooks failing to start on JuliaBox)

2015-12-18 Thread cdm

doing "ls -a" shows the .bashrc file ...

but "nano .bashrc" does not launch an edit mode.


what tools are available on JuliaBox for such file editing?

thanks,

~cdm



On Thursday, December 17, 2015 at 8:17:30 PM UTC-8, Tanmay K. Mohapatra 
wrote:
>
> All Julia binaries are installed at /opt.
> You could update the PATH in your .bashrc suitably to pick up the version 
> that you prefer.
>
>

Re: [julia-users] inplace versions of .*=, ./= etc

2015-12-18 Thread Ethan Anderes


Ok, thanks for the info (and @inbounds does improve it a bit). I usually 
follow your advice and fuse the operations together when I need the speed, 
but since I do all manner of combinations of vectorized operations 
throughout my module I tend to prefer using .*=, ./=, etc unless I need it. 

Is there a way to define functions .*!= and ./!= and call them in the form 
A .*!= B? If I try the following, it gives me an error:

function .*!=(A,B)
@inbounds for ix in eachindex(A,B)
A[ix] = A[ix] * B[ix]
end
end

On Friday, December 18, 2015 at 10:20:31 AM UTC-8, Yichao Yu wrote:

On Fri, Dec 18, 2015 at 1:09 PM, Ethan Anderes wrote:
> > Hi everyone. I have a pretty basic question. Are there inplace 
> vectorized 
> > operations for things like A .*= B, A ./= B when A and B are 
> Array{Float64, 
> > d} for general d? I’m ok with writing these myself (and I love the fact 
> that 
> > Julia allows me to do this and have it be fast) but I have the feeling 
> like 
> > it must be somewhere in Base and I’m missing it. Also, with regard to 
> > writing my own version, I would like to avoid the tiny bit of additional 
> > technical debt incurred by defining these myself (and the fact that A 
> .*= B, 
> > A ./= B, are just so easy to read). 
>
> I usually just write out the loop myself. It has the addition benefit 
> of being able to add other operations at the same time which I often 
> end up doing (e.g. do A = A .* B * c + d instead) 
> Ref https://github.com/JuliaLang/julia/issues/249 
>
> P.S. you can add `@inbounds` to your `myownscale!` which might speed 
> it up by a factor of a few (due to automatic simd) 
>
> > 
> > Note: I get an error if I do A[:] .*= B for Array{Float64, d}, d>1. I 
> could 
> > do A[:,...,:] .*= B or A[:] .*= B[:] but the former needs to work for 
> > generic dimension and I’m worried the latter has issues with regard to 
> > looping, linear indexing, and still creates temporaries. Also, scale!(A,B) 
> > doesn’t work for me when both A and B are of the same dimension. 
> > 
> > This snippet gives the timings. 
> > 
> > function myownscale!(A,B) 
> > for ix in eachindex(A,B) 
> > A[ix] = A[ix] * B[ix] 
> > end 
> > end 
> > 
> > function test1!(A,B) 
> > A[:] .*= B  # <-- gives error 
> > end 
> > 
> > function test2!(A,B) 
> > A[:,:] .*= B 
> > end 
> > 
> > function test3!(A,B) 
> > A[:] .*= B[:] 
> > end 
> > 
> > A, B = rand(1_000, 1_000), rand(1_000, 1_000); 
> > 
> > test1!(A,B);  #<-- error 
> > 
> > # warmup 
> > @time test2!(A,B); 
> > @time test3!(A,B); 
> > @time myownscale!(A,B); 
> > 
> > @time test2!(A,B); 
> > @time test3!(A,B); 
> > @time myownscale!(A,B); 
> > 
> > The last three commands result in: 
> > 
> > julia> @time test2!(A,B); 
> >   0.011629 seconds (22 allocations: 15.260 MB, 34.11% gc time) 
> > 
> > julia> @time test3!(A,B); 
> >   0.007802 seconds (27 allocations: 22.889 MB, 19.09% gc time) 
> > 
> > julia> @time myownscale!(A,B); 
> >   0.001341 seconds (4 allocations: 160 bytes) 
> > 
> > So, am I missing something in Base or is idiomatic Julia just defining 
> these 
> > myself? 
>
​


[julia-users] Re: PSA: Managing packages on JuliaBox (recover from notebooks failing to start on JuliaBox)

2015-12-18 Thread tanmaykm
"vi" is available.
The notebook interface also allows simple file editing.
"emacs" may be available too, though I'm not sure.

- Tanmay

On Saturday, December 19, 2015 at 12:01:42 AM UTC+5:30, c.d. mclean wrote:
>
>
> doing "ls -a" shows the .bashrc file ...
>
> but "nano .bashrc" does not launch an edit mode.
>
>
> what tools are available on JuliaBox for such file editing?
>
> thanks,
>
> ~cdm
>
>
>
> On Thursday, December 17, 2015 at 8:17:30 PM UTC-8, Tanmay K. Mohapatra 
> wrote:
>>
>> All Julia binaries are installed at /opt.
>> You could update the PATH in your .bashrc suitably to pick up the version 
>> that you prefer.
>>
>>

Re: [julia-users] inplace versions of .*=, ./= etc

2015-12-18 Thread Steven G. Johnson


On Friday, December 18, 2015 at 1:32:16 PM UTC-5, Ethan Anderes wrote:
>
> Ok, thanks for the info (and @inbounds does improve it a bit). I usually 
> follow your advice and fuse the operations together when I need the speed, 
> but since I do all manner of combinations of vectorized operations 
> throughout my module I tend to prefer using .*=, ./=, etc unless I need 
> it.
>
Having "all manner of combinations" of these operations is a good reason 
*not* to define in-place versions of these operations.  For example, 
imagine the computation:

x = x + (2y - 4z) ./ w


with your proposed in-place assignment operations, I guess this would 
become:

tmp = 2y
tmp .-= 4z
tmp ./= w
x .+= tmp


which still allocates two temporary arrays (one for tmp and one for 4z), 
and involves five separate loops.  Compare to:

for i in eachindex(x)
x[i] += (2y[i] - 4z[i]) / w[i]
end


which involves only one loop (and probably better cache performance as a 
result) and no temporary arrays.  (You can add @inbounds if you want a bit 
more performance and know that w/x/y/z have the same shape.)  Not only is 
it more efficient than a sequence of in-place assignments, but I would 
argue that it is much more readable as well, despite the need for an 
explicit loop.

Alternatively, you can use the Devectorize package, and something like

@devec x[:] = x + (2y - 4z) ./ w


will basically do the same thing as the loop if I understand @devec 
correctly.


Re: [julia-users] inplace versions of .*=, ./= etc

2015-12-18 Thread Kristoffer Carlsson
I would also recommend the Devectorize package. It saves having to write the 
boilerplate for these types of loops.

Re: [julia-users] inplace versions of .*=, ./= etc

2015-12-18 Thread Ethan Anderes


I see your point and I definitely agree with that example. At the risk of 
extending this discussion beyond its usefulness, I’ve attached a snippet of 
some of the code that motivated this question. The issue is the variable 
tmp_rtnk in the loop. I think it’s as readable as it possibly can be, and I 
was hoping to make a small syntax change so I don’t rebind tmp_rtnk each 
time (although I’m well aware that there are a bunch of temporary arrays 
created on the right-hand side of operations involving tmp_rtnk). 

function Cℓbiasfun{dm}(parms::PhaseParms{dm})
    m1s  = ones(parms.Cϕϕk)
    r    = √(sum([abs2(kdim) for kdim in parms.k]))
    rtnk = zeros(parms.x[1])
    for p = 1:dm, q = 1:dm, p′ = 1:dm, q′ = 1:dm
        estϕC2_kℓ_pq   = unnormalized_estϕkfun(parms.C2k[p,q],   m1s, parms)
        estϕC2_kℓ_p′q′ = unnormalized_estϕkfun(parms.C2k[p′,q′], m1s, parms)
        for ωinx in find( r .<= 20 )
            ω = Float64[parms.k[jj][ωinx] for jj = 1:dm]
            C2_kℓminusω_pq       = shiftfk_by_ω(parms.C2k[p,q],   -ω, parms)
            C2_kℓminusω_p′q′     = shiftfk_by_ω(parms.C2k[p′,q′], -ω, parms)
            estϕC2_kℓminusω_pq   = unnormalized_estϕkfun(C2_kℓminusω_pq,   m1s, parms)
            estϕC2_kℓminusω_p′q′ = unnormalized_estϕkfun(C2_kℓminusω_p′q′, m1s, parms)
            tmp_rtnk   = shiftfk_by_ω(parms.ξk[q].*conj(parms.ξk[p′]).*parms.Cϕϕk, -ω, parms)
            tmp_rtnk .*= parms.ξk[p][ωinx].*conj(parms.ξk[q′][ωinx]).*parms.Cϕϕk[ωinx]
            tmp_rtnk .-= conj(estϕC2_kℓ_p′q′ - estϕC2_kℓminusω_p′q′)
            tmp_rtnk ./= estϕC2_kℓ_pq - estϕC2_kℓminusω_pq
            tmp_rtnk .*= (parms.deltk / 2π) ^ dm
            rtnk .+= real(tmp_rtnk)
        end
    end
    rtnk ./= abs2(invAℓfun(parms))
    squash!(rtnk)
    return rtnk
end

Anyway, I think your point is that in this case I should just write out 
myupdate!(tmp_rtnk, A, B, C, D, E) for this function and replace

tmp_rtnk   =  shiftfk_by_ω(parms.ξk[q].*conj(parms.ξk[p′]).*parms.Cϕϕk, -ω, parms)
tmp_rtnk .*=  parms.ξk[p][ωinx].*conj(parms.ξk[q′][ωinx]).*parms.Cϕϕk[ωinx]
tmp_rtnk .-=  conj(estϕC2_kℓ_p′q′ - estϕC2_kℓminusω_p′q′)
tmp_rtnk ./=  estϕC2_kℓ_pq - estϕC2_kℓminusω_pq
tmp_rtnk .*=  (parms.deltk / 2π) ^ dm

with 

myupdate!(tmp_rtnk,
          shiftfk_by_ω(parms.ξk[q].*conj(parms.ξk[p′]).*parms.Cϕϕk, -ω, parms),
          parms.ξk[p][ωinx].*conj(parms.ξk[q′][ωinx]).*parms.Cϕϕk[ωinx],
          conj(estϕC2_kℓ_p′q′ - estϕC2_kℓminusω_p′q′),
          estϕC2_kℓ_pq - estϕC2_kℓminusω_pq,
          (parms.deltk / 2π) ^ dm)

and then write a specialized myupdate! for each such function in my module.

It's funny... I tend to think my question comes from the fact that I'm 
somewhat addicted to tinkering with Julia functions for speed/memory 
improvements. I probably need to learn how to stop.

Anyway, I learned a lot from asking, so thanks!

On Friday, December 18, 2015 at 10:53:02 AM UTC-8, Steven G. Johnson wrote:


>
> On Friday, December 18, 2015 at 1:32:16 PM UTC-5, Ethan Anderes wrote:
>>
>> Ok, thanks for the info (and @inbounds does improve it a bit). I usually 
>> follow your advice and fuse the operations together when I need the speed, 
>> but since I do all manner of combinations of vectorized operations 
>> throughout my module I tend to prefer using .*=, ./=, etc unless I need 
>> it.
>>
> Having "all manner of combinations" of these operations is a good reason 
> *not* to define in-place versions of these operations.  For example, 
> imagine the computation:
>
> x = x + (2y - 4z) ./ w
>
>
> with your proposed in-place assignment operations, I guess this would 
> become:
>
> tmp = 2y
> tmp .-= 4z
> tmp ./= w
> x .+= tmp
>
>
> which still allocates two temporary arrays (one for tmp and one for 4z), 
> and involves five separate loops.  Compare to:
>
> for i in eachindex(x)
> x[i] += (2y[i] - 4z[i]) / w[i]
> end
>
>
> which involves only one loop (and probably better cache performance as a 
> result) and no temporary arrays.  (You can add @inbounds if you want a bit 
> more performance and know that w/x/y/z have the same shape.)  Not only is 
> it more efficient than a sequence of in-place assignments, but I would 
> argue that it is much more readable as well, despite the need for an 
> explicit loop.
>
> Alternatively, you can use the Devectorize package, and something like
>
> @devec x[:] = x + (2y - 4z) ./ w
>
>
> will basically do the same thing as the loop if I understand @devec 
> correctly.
>
​


[julia-users] DataFrames and special characters

2015-12-18 Thread Joaquim Masset Lacombe Dias Garcia
I have noticed that DataFrames' readtable cannot by default read special 
characters such as $ % - &, and it replaces them all with underscores.

Is there any way to make it read those characters, just like the R function 
fread does?


Re: [julia-users] DataFrames and special characters

2015-12-18 Thread Tom Short
This is in the header, right? This has come up a couple of times recently.
There's a new option to readtable to allow nonstandard column names. See:

https://github.com/JuliaStats/DataFrames.jl/pull/896

It's only in the devel version of DataFrames.


On Fri, Dec 18, 2015 at 2:50 PM, Joaquim Masset Lacombe Dias Garcia <
joaquimdgar...@gmail.com> wrote:

> I have noticed that DataFrames´ readtable cannot read by default special
> characters such ac $ % - & and it replaces everything by underscore.
>
> Is there any way to make it read those characters just like the R function
> fread does?
>


[julia-users] Operations on TypeVars

2015-12-18 Thread Kristoffer Carlsson
I want to create a type (a fixed size array) that is parameterized on an 
Int but does not have exactly the same number of components as the 
parameter. This does not work however:

# Works:
immutable Vec2{N, T}
v::NTuple{N, T}
end

Vec2{3, Float64}((1.0,2.0,3.0))
# Vec2{3,Float64}((1.0,2.0,3.0))

# Sad times
immutable Vec2{N, T}
v::NTuple{N+1, T}
end
ERROR: MethodError: `+` has no method matching +(::TypeVar, ::Int64)
Closest candidates are:
  +(::Any, ::Any, ::Any, ::Any...)
  +(::Int64, ::Int64)
  +(::Complex{Bool}, ::Real)
  ...

Is there anyway to do something like this? It is possible in C++ using 
templates for example.

My application is for writing a library for tensors (to be used in solid 
mechanics) on top of the FixedSizeArrays package and I want the tensors to 
be parameterized on the rank and dimension which means that the number of 
components is f(rank, dimension) where f is some function Int x Int -> Int.

Best regards,
Kristoffer Carlsson


[julia-users] Re: Operations on TypeVars

2015-12-18 Thread Steven G. Johnson
https://github.com/JuliaLang/julia/issues/8322
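
The issue tracks computed type parameters. Until something like it lands, one common workaround (a sketch, not from the issue) is to carry the component count as an extra parameter and enforce the relation in an inner constructor:

immutable Vec3{N, M, T}
    v::NTuple{M, T}
    function Vec3(v::NTuple{M, T})
        M == N + 1 || throw(ArgumentError("expected N + 1 components"))
        new(v)
    end
end

Vec3{2, 3, Float64}((1.0, 2.0, 3.0))          # ok
# Vec3{2, 4, Float64}((1.0, 2.0, 3.0, 4.0))   # throws ArgumentError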


[julia-users] Re: Operations on TypeVars

2015-12-18 Thread Kristoffer Carlsson
Thanks for the link.

Re: [julia-users] inplace versions of .*=, ./= etc

2015-12-18 Thread feza
I think I am misunderstanding the temporary array allocation process. Is it 
allocating one or two temp arrays? Where have I gone wrong here: 

tmp = 2y (allocates a temporary array to store result)
tmp .-= 4z (also allocates a temporary array for 4z? Why not just use z 
directly, thus   tmp[i] = tmp[i] - 4*z[i] )
tmp ./= w  (Uses previous temp array and w to do the division overwriting 
tmp,  i.e. loops over tmp[i] = tmp[i]/w[i] )
x .+= tmp  (performs x[i] = x[i] + tmp[i] )



On Friday, December 18, 2015 at 1:53:02 PM UTC-5, Steven G. Johnson wrote:
>
>
>
> On Friday, December 18, 2015 at 1:32:16 PM UTC-5, Ethan Anderes wrote:
>>
>> Ok, thanks for the info (and @inbounds does improve it a bit). I usually 
>> follow your advice and fuse the operations together when I need the speed, 
>> but since I do all manner of combinations of vectorized operations 
>> throughout my module I tend to prefer using .*=, ./=, etc unless I need 
>> it.
>>
> Having "all manner of combinations" of these operations is a good reason 
> *not* to define in-place versions of these operations.  For example, 
> imagine the computation:
>
> x = x + (2y - 4z) ./ w
>
>
> with your proposed in-place assignment operations, I guess this would 
> become:
>
> tmp = 2y
> tmp .-= 4z
> tmp ./= w
> x .+= tmp
>
>
> which still allocates two temporary arrays (one for tmp and one for 4z), 
> and involves five separate loops.  Compare to:
>
> for i in eachindex(x)
> x[i] += (2y[i] - 4z[i]) / w[i]
> end
>
>
> which involves only one loop (and probably better cache performance as a 
> result) and no temporary arrays.  (You can add @inbounds if you want a bit 
> more performance and know that w/x/y/z have the same shape.)  Not only is 
> it more efficient than a sequence of in-place assignments, but I would 
> argue that it is much more readable as well, despite the need for an 
> explicit loop.
>
> Alternatively, you can use the Devectorize package, and something like
>
> @devec x[:] = x + (2y - 4z) ./ w
>
>
> will basically do the same thing as the loop if I understand @devec 
> correctly.
>


[julia-users] Re: Memory leak in Task and fatal error

2015-12-18 Thread DeadbraiN
It looks like this is a known issue and the Core team knows about it. So, 
waiting for the solution...


[julia-users] Why variables passed to this macro are referencing to the module?

2015-12-18 Thread Diego Javier Zea
I defined the macro @iteratelist in PairwiseListMatrices. I used a similar 
macro in function definitions inside that package without problems. However, 
the same code doesn't work for function definitions outside that package, 
like in the example below. Why was *l* expanded as *PairwiseListMatrices.l*? 
Thanks!

*Code: *

using PairwiseListMatrices

PLM = PairwiseListMatrix([1, 2, 3], false)

function test(plm)
   l = []
   @iteratelist plm Base.push!(l, list[k])
   l
end

function test_macroexpand(plm)
   l = []
   println( macroexpand( quote @iteratelist plm Base.push!(l, list[k]) 
end))
   l
end

test(PLM)

test_macroexpand(PLM)





Re: [julia-users] inplace versions of .*=, ./= etc

2015-12-18 Thread Ethan Anderes


My understanding is that an expression like tmp .-= 4z generates two 
temporary arrays. It expands to tmp = tmp .- 4z, so there is one temporary 
array for 4z and then one for tmp .- 4z (tmp then rebinds to the latter).

On Friday, December 18, 2015 at 2:11:48 PM UTC-8, feza wrote:

I think I am misunderstanding the temporary array allocation process. Is it 
> allocating one or two temp arrays? Where have I gone wrong here: 
>
> tmp = 2y (allocates a temporary array to store result)
> tmp .-= 4z (also allocates a temporary array for 4z? Why not just use z 
> directly, thus   tmp[i] = tmp[i] - 4*z[i] )
> tmp ./= w  (Uses previous temp array and w to do the division overwriting 
> tmp,  i.e. loops over tmp[i] = tmp[i]/w[i] )
> x .+= tmp  (performs x[i] = x[i] + tmp[i] )
>
>
>
> On Friday, December 18, 2015 at 1:53:02 PM UTC-5, Steven G. Johnson wrote:
>>
>>
>>
>> On Friday, December 18, 2015 at 1:32:16 PM UTC-5, Ethan Anderes wrote:
>>>
>>> Ok, thanks for the info (and @inbounds does improve it a bit). I 
>>> usually follow your advice and fuse the operations together when I need the 
>>> speed, but since I do all manner of combinations of vectorized operations 
>>> throughout my module I tend to prefer using .*=, ./=, etc unless I need 
>>> it.
>>>
>> Having "all manner of combinations" of these operations is a good reason 
>> *not* to define in-place versions of these operations.  For example, 
>> imagine the computation:
>>
>> x = x + (2y - 4z) ./ w
>>
>>
>> with your proposed in-place assignment operations, I guess this would 
>> become:
>>
>> tmp = 2y
>> tmp .-= 4z
>> tmp ./= w
>> x .+= tmp
>>
>>
>> which still allocates two temporary arrays (one for tmp and one for 4z), 
>> and involves five separate loops.  Compare to:
>>
>> for i in eachindex(x)
>> x[i] += (2y[i] - 4z[i]) / w[i]
>> end
>>
>>
>> which involves only one loop (and probably better cache performance as a 
>> result) and no temporary arrays.  (You can add @inbounds if you want a bit 
>> more performance and know that w/x/y/z have the same shape.)  Not only is 
>> it more efficient than a sequence of in-place assignments, but I would 
>> argue that it is much more readable as well, despite the need for an 
>> explicit loop.
>>
>> Alternatively, you can use the Devectorize package, and something like
>>
>> @devec x[:] = x + (2y - 4z) ./ w
>>
>>
>> will basically do the same thing as the loop if I understand @devec 
>> correctly.
>>
> ​


[julia-users] Re: Using callable types or FastAnonymous with Sundials

2015-12-18 Thread Dan
The static parameter of the `cvode` function is named `f`, and so is the 
function you want to use. These get confused: inside the method, `f` refers 
to the type `J` rather than the function. Changing the parameter name to 
something other than `{f}` should work.
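
A minimal demonstration of the clash, independent of Sundials (a sketch of the assumed cause):

f(x) = x + 1       # the global function
immutable J; end   # the callable type passed in

function demo{f}(::Type{f}, x)
    # Inside this method the static parameter `f` shadows the global
    # function `f`: it is bound to the *type* J.
    f
end

demo(J, 1) === J   # true, so CVodeSetUserData(mem, f) received J, not f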

On Wednesday, December 16, 2015 at 4:26:41 PM UTC+2, Simon Frost wrote:
>
> Dear Julia Users,
>
> I'm trying to speed up some code that employs passing functions as 
> arguments. One part of the code solves an ODE; if I use CVODE from 
> Sundials, and rewrite the function to accept callable types, I get a 
> ReadOnlyMemoryError - as I don't know the Sundials API, can someone help me 
> with where I'm going wrong? Code below.
>
> Best
> Simon
>
> **
>
> using Sundials
>
> function cvode{f}(::Type{f}, y0::Vector{Float64}, t::Vector{Float64}; 
> reltol::Float64=1e-4, abstol::Float64=1e-6)
> neq = length(y0)
> mem = Sundials.CVodeCreate(Sundials.CV_BDF, Sundials.CV_NEWTON)
> flag = Sundials.CVodeInit(mem, cfunction(Sundials.cvodefun, Int32, 
> (Sundials.realtype, Sundials.N_Vector, Sundials.N_Vector, Ref{Function})), 
> t[1], Sundials.nvector(y0))
> flag = Sundials.CVodeSetUserData(mem, f)
> flag = Sundials.CVodeSStolerances(mem, reltol, abstol)
> flag = Sundials.CVDense(mem, neq)
> yres = zeros(length(t), length(y0))
> yres[1,:] = y0
> y = copy(y0)
> tout = [0.0]
> for k in 2:length(t)
> flag = Sundials.CVode(mem, t[k], y, tout, Sundials.CV_NORMAL)
> yres[k,:] = y
> end
> Sundials.CVodeFree([mem])
> return yres
> end
>
> function f(t, y, ydot)
> ydot[1] = 0.1*(-72-y[1])+0.1*1.4*exp((y[1]+48)/1.4)+10
> ydot[3] = 0.
> ydot[2] = 0.
> end
>
> immutable J; end
> call(::Type{J},t, y, ydot) = f(t, y, ydot)
>
> t = [0.1:0.0001:1]
> res = cvode(J, [-60.0, 0.0, 0.0], t);
>


[julia-users] memory allocation with enumerate

2015-12-18 Thread Davide Lasagna
Consider the following piece of code, a toy model of a larger code:

immutable Element{T}
a::T
end

type Mesh{E}
elsvec::Vector{E}
end

immutable Elements{I}
els::I
end
elements(m::Mesh) = Elements(m.elsvec)
Base.start(e::Elements) = 1
Base.next(e::Elements, i::Int) = e.els[i], i+1
Base.done(e::Elements, i::Int) = length(e.els) == i
Base.eltype{I}(::Type{Elements{I}}) = eltype(I)


const v = Element[Element(i) for i = 1:100]
const m = Mesh(v)

function test(m)
I = 1
for (j, e) in enumerate(elements(m))
I += j
end
I
end

function test2(m)
I = 1
j = 1
for e in elements(m)
I += j
j += 1
end
I
end

@time test(m)
@time test(m)
@time test(m)


@time test2(m)
@time test2(m)
@time test2(m)


Why does test result in such a huge memory allocation?

Thanks


Re: [julia-users] memory allocation with enumerate

2015-12-18 Thread Yichao Yu
On Fri, Dec 18, 2015 at 8:27 PM, Davide Lasagna wrote:
> Consider the following piece of code, a toy model of a larger code:
>
> immutable Element{T}
> a::T
> end
>
> type Mesh{E}
> elsvec::Vector{E}
> end
>
> immutable Elements{I}
> els::I
> end
> elements(m::Mesh) = Elements(m.elsvec)
> Base.start(e::Elements) = 1
> Base.next(e::Elements, i::Int) = e.els[i], i+1
> Base.done(e::Elements, i::Int) = length(e.els) == i
> Base.eltype{I}(::Type{Elements{I}}) = eltype(I)
>
>
> const v = Element[Element(i) for i = 1:100]

^^ Element is an abstract type

> const m = Mesh(v)
>
> function test(m)
> I = 1
> for (j, e) in enumerate(elements(m))

Therefore `e` is not type stable.

The codegen cannot optimize away the tuple (j, e) (returned from the
enumerate iterator) so it has to be allocated.

> I += j
> end
> I
> end
>
> function test2(m)
> I = 1
> j = 1
> for e in elements(m)
> I += j
> j += 1
> end
> I
> end
>
> @time test(m)
> @time test(m)
> @time test(m)
>
>
> @time test2(m)
> @time test2(m)
> @time test2(m)
>
>
> Why does test result in such a huge memory allocation?
>
> Thanks
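
A sketch of the fix (shown with a plain Vector for brevity; the same applies to the custom Elements iterator): make the element type concrete so `e` is inferable.

immutable Element{T}
    a::T
end

# Concretely typed storage: Vector{Element{Int}} instead of Vector{Element}.
const vc = Element{Int}[Element(i) for i = 1:100]

function testc(v)
    I = 1
    for (j, e) in enumerate(v)   # e::Element{Int} is inferred; tuple elided
        I += j
    end
    I
end

testc(vc)        # warmup
@time testc(vc)  # expect no per-iteration allocation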


[julia-users] Re: Why variables passed to this macro are referencing to the module?

2015-12-18 Thread Diego Javier Zea
This works fine if I quote and interpolate *l*:

julia> function test(plm)
  l = []
  @iteratelist plm Base.push!(:($l), list[k])
  l
   end
test (generic function with 1 method)


julia> test(PLM)
3-element Array{Any,1}:
 1
 2
 3
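
For macro authors hitting this, the underlying issue is hygiene: symbols in a macro's output are resolved in the macro's home module unless escaped. A generic illustration (not the actual PairwiseListMatrices code):

module M
export @addto
macro addto(collection, ex)
    # esc() makes the user's expressions resolve in the caller's scope,
    # instead of being qualified into module M (the way `l` became
    # PairwiseListMatrices.l above).
    :(push!($(esc(collection)), $(esc(ex))))
end
end

using M
l = []
@addto(l, 42)
l   # => Any[42]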




On Friday, December 18, 2015 at 9:02:08 PM UTC-3, Diego Javier Zea wrote:
>
> I defined the macro @iteratelist in PairwiseListMatrices. I used a similar
> macro in function definitions inside that package without problems. However,
> the same code doesn't work for function definitions outside that package,
> like in the example below. Why was *l* expanded as *PairwiseListMatrices.l*?
> Thanks!
>
> *Code: *
>
> using PairwiseListMatrices
>
> PLM = PairwiseListMatrix([1, 2, 3], false)
>
> function test(plm)
>l = []
>@iteratelist plm Base.push!(l, list[k])
>l
> end
>
> function test_macroexpand(plm)
>l = []
>println( macroexpand( quote @iteratelist plm Base.push!(l, list[k]) 
> end))
>l
> end
>
> test(PLM)
>
> test_macroexpand(PLM)
>
>
>
>