[julia-users] Re: Read GML format graphs by using LightGraphs.jl

2015-07-16 Thread Seth
Hi Charles,

I've implemented Andrew's excellent GML parser in LightGraphs (in the "gml" 
branch for now until a few things can be clarified and docs can be 
written). To use it, check out the gml branch and then use g = 
readgml("/path/to/file.gml").

Note that there are a few issues with the GML parser (see the ones I opened 
at https://github.com/andrewcooke/ParserCombinator.jl/issues) but it seems 
to work for smaller gml files at this point.

Note also that readgml() is not type-stable: it will create either a Graph or 
a DiGraph depending on what it finds in the file. I'm not sure this is a huge 
problem.
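A minimal usage sketch (readgml and the gml branch are as described above; the function-barrier pattern is a general Julia idiom I'm adding here, not LightGraphs-specific):

```julia
using LightGraphs  # assumes the "gml" branch is checked out

# readgml returns either a Graph or a DiGraph depending on the file, so
# pass the result through a function barrier to recover type stability
# in downstream code. `analyze` is a hypothetical example function.
analyze(g) = nv(g) + ne(g)   # compiled separately for Graph and DiGraph

g = readgml("/path/to/file.gml")
analyze(g)
```

Since analyze specializes on the concrete type of g, the type instability of readgml stays confined to the call site.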

On Thursday, July 16, 2015 at 3:11:41 PM UTC-7, andrew cooke wrote:
>
>
> hi, seth contacted me to see whether my ParserCombinator library could do 
> this, and i've just finished adding support for GML.  you can see it at 
> https://github.com/andrewcooke/ParserCombinator.jl#parsers
>
> that is currently only available via git (not yet in a published 
> release).  i've also emailed seth.  if either of you could have a look and 
> tell me whether it's adequate or not, and / or report any bugs then i can 
> look at releasing a version.
>
> cheers,
> andrew
>
>
> On Saturday, 11 July 2015 11:04:08 UTC-3, Seth wrote:
>>
>> Hi Charles,
>>
>> You're correct; for persistence, LightGraphs currently supports an 
>> internal graph representation and GraphML. If you can convert to GraphML 
>> you're in luck; otherwise, if you open up an issue someone might be able to 
>> code something up.
>>
>> The quickest thing, perhaps, would be to import the gml into NetworkX (
>> http://networkx.github.io) and then write it out as GraphML, which you 
>> should then be able to use in LightGraphs.
>>
>> Seth.
>>
>>
>> On Saturday, July 11, 2015 at 3:48:52 AM UTC-7, Charles Santana wrote:
>>>
>>> Hi folks,
>>>
>>> Following the suggestion of Seth (
>>> https://groups.google.com/forum/#!topic/julia-users/Ftdo2LmxC-g) I am 
>>> trying to use LightGraphs.jl to read my graph files. 
>>>
>>> My graphs are in GML format (
>>> http://gephi.github.io/users/supported-graph-formats/gml-format/). 
>>> However, as far as I understand, LightGraphs.jl can not read graphs in this 
>>> format.
>>>
>>> I just found this thread talking about the creation of a GraphsIO.jl and 
>>> its integration with Graphs.jl and LightGraphs.jl (
>>> https://github.com/JuliaLang/Graphs.jl/issues/37) Do you have any news 
>>> about it? How can I read GML files to work with LightGraphs.jl?
>>>
>>> Thanks for any help!
>>>
>>> Charles
>>>
>>> -- 
>>> Um axé! :)
>>>
>>> --
>>> Charles Novaes de Santana, PhD
>>> http://www.imedea.uib-csic.es/~charles
>>>  
>>

Re: [julia-users] pmap - version to return reduced result only

2015-07-16 Thread Greg Plowman
Thanks Jameson.

My error was from the line: trialCounts = MySimulation(trial, numIter)
Error message: trialCounts not defined

This had something to do with the variable name (it is used again later; 
perhaps the if statements were guarding its scope and it is now local).
In any case, if I change the variable name, it works:
counts = MySimulation(trial, numIter)


Thanks again for your help.

Greg



On Friday, July 17, 2015 at 2:48:44 PM UTC+10, Jameson wrote:

> i believe that length(chunks) will be <= nworkers()
>
> the last statement of the for loop should be the "return" value from that 
> iteration. (for example: the variable name `trialCount`).
>
> On Fri, Jul 17, 2015 at 12:12 AM Greg Plowman wrote:
>
> OK thanks.
> I didn't consider @parallel (probably because I considered it for only 
> large trials of small work units, whereas I considered pmap more suited to 
> relatively small trials of longer running work units)
> In any case, @parallel works fine.
>
> Old pmap code skeleton:
> trialCounts = pmap(MySimulation, [1:numTrials], fill(numIter, numTrials))
> totalCounts = sum(trialCounts)
>
> New @parallel code
> totalCounts = @parallel (+) for trial = 1:numTrials
> MySimulation(trial, numIter)
> end
>
>
>
> However, I have 2 questions:
>
> 1. When I try to modify the @parallel code to assign the result to a variable 
> inside the loop, I get an error.
> I don't understand the @parallel macro, but I'm guessing I can't assign to a 
> variable inside the loop?
>  
> totalCounts = @parallel (+) for trial = 1:numTrials
> trialCount = MySimulation(trial, numPlays)
> print(trialCount) # or some other processing with trialCount
> end
>
>
>
> 2. Again, I don't understand the @parallel macro, but it seems to call preduce 
> (see below), which seems to collect results in an array of size 
> numTrials / nworkers().
> If this is so, then memory requirement still has a dependency on the 
> number of trials.
> I was trying to limit the results array to the number of workers, 
> independent of number of trials.
> Is my understanding here correct?
>
> function preduce(reducer, f, N::Int)
> chunks = splitrange(N, nworkers())
> results = cell(length(chunks))
> for i in 1:length(chunks)
> results[i] = @spawn f(first(chunks[i]), last(chunks[i]))
> end
> mapreduce(fetch, reducer, results)
> end
>
>
>
> Greg
>
>
>
>
>
>
>
>
>  
>
> On Friday, July 10, 2015 at 12:24:16 PM UTC+10, Jameson wrote:
>
> this sounds like you may be looking for the `@parallel reduce_fn for itm = 
> lst; f(itm); end` map-reducer construct (described on the same page)?
>
> On Thu, Jul 9, 2015 at 9:23 PM Greg Plowman  wrote:
>
> I have been using pmap for simulations and find it very useful 
> and convenient.
> However, sometimes I want to run a large number of trials where the 
> results are also large. This requires a lot of memory to hold the returned 
> results.
> If I'm only interested the final, reduced result, and not concerned with 
> the raw individual trial results, then returning entire array seems 
> unnecessary.
> I want to reduce on the fly, avoiding the need to keep all trial results.
> I want to run more trials than workers for load balancing. (And possibly 
> because I'm interested in summary results of individual trials, not the 
> entire raw results).
>
> With the help of the simplified version of pmap presented in the docs (
> http://julia.readthedocs.org/en/latest/manual/parallel-computing/), I 
> have a tenuous understanding of how pmap works. Although the actual 
> implementation scares me.
> In any case, I was wondering before I progress further, whether a modified 
> version of pmap could be designed to reduce on-the-fly.
> Here are some modifications to the simplified, documentation version.
> Would something like this work? I'm worried about the shared updates to 
> final_result. Will these happen orderly? What else should I consider?
>
>
> * function pmap(f, lst)
>
> * function pmap_reduce(f, lst, reduce_fn)  # extra argument is reduce 
> function 
> np = nprocs()  # determine the number of processes available
> n = length(lst)
>
>
> *   results = cell(n)
> *   results = cell(np)  # hold results for currently executing procs only
> *   final_result = cell(1)  # holds the final, reduced result
>
> i = 1
> # function to produce the next work item from the queue.
> # in this case it's just an index.
> nextidx() = (idx=i; i+=1; idx)
>
> @sync begin
> for p=1:np
> if p != myid() || np == 1
> @async begin
> while true
> idx = nextidx()
> if idx > n
> break
> end
>
> *   results[idx] = remotecall_fetch(p, f, lst[idx])
> *   results[p] = remotecall_fetch(
>
> ...



Re: [julia-users] pmap - version to return reduced result only

2015-07-16 Thread Jameson Nash
i believe that length(chunks) will be <= nworkers()

the last statement of the for loop should be the "return" value from that
iteration. (for example: the variable name `trialCount`).
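Concretely, the loop from question 1 can be fixed by making the per-trial value the last expression in the body (a sketch reusing the names from Greg's code, Julia 0.3/0.4-era @parallel):

```julia
# The value of the last expression in each iteration is what the (+)
# reducer combines, so end the body with trialCount itself.
totalCounts = @parallel (+) for trial = 1:numTrials
    trialCount = MySimulation(trial, numIter)
    print(trialCount)   # side-effecting work is fine mid-body
    trialCount          # per-iteration "return" value fed to (+)
end
```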

On Fri, Jul 17, 2015 at 12:12 AM Greg Plowman wrote:

> OK thanks.
> I didn't consider @parallel (probably because I considered it for only
> large trials of small work units, whereas I considered pmap more suited to
> relatively small trials of longer running work units)
> In any case, @parallel works fine.
>
> Old pmap code skeleton:
> trialCounts = pmap(MySimulation, [1:numTrials], fill(numIter, numTrials))
> totalCounts = sum(trialCounts)
>
> New @parallel code
> totalCounts = @parallel (+) for trial = 1:numTrials
> MySimulation(trial, numIter)
> end
>
>
>
> However, I have 2 questions:
>
> 1. When I try to modify the @parallel code to assign the result to a variable
> inside the loop, I get an error.
> I don't understand the @parallel macro, but I'm guessing I can't assign to a
> variable inside the loop?
>
> totalCounts = @parallel (+) for trial = 1:numTrials
> trialCount = MySimulation(trial, numPlays)
> print(trialCount) # or some other processing with trialCount
> end
>
>
>
> 2. Again, I don't understand the @parallel macro, but it seems to call preduce
> (see below), which seems to collect results in an array of size
> numTrials / nworkers().
> If this is so, then memory requirement still has a dependency on the
> number of trials.
> I was trying to limit the results array to the number of workers,
> independent of number of trials.
> Is my understanding here correct?
>
> function preduce(reducer, f, N::Int)
> chunks = splitrange(N, nworkers())
> results = cell(length(chunks))
> for i in 1:length(chunks)
> results[i] = @spawn f(first(chunks[i]), last(chunks[i]))
> end
> mapreduce(fetch, reducer, results)
> end
>
>
>
> Greg
>
>
>
>
>
>
>
>
>
>
> On Friday, July 10, 2015 at 12:24:16 PM UTC+10, Jameson wrote:
>
>> this sounds like you may be looking for the `@parallel reduce_fn for itm
>> = lst; f(itm); end` map-reducer construct (described on the same page)?
>>
>> On Thu, Jul 9, 2015 at 9:23 PM Greg Plowman  wrote:
>>
>>> I have been using pmap for simulations and find it very useful
>>> and convenient.
>>> However, sometimes I want to run a large number of trials where the
>>> results are also large. This requires a lot of memory to hold the returned
>>> results.
>>> If I'm only interested the final, reduced result, and not concerned with
>>> the raw individual trial results, then returning entire array seems
>>> unnecessary.
>>> I want to reduce on the fly, avoiding the need to keep all trial results.
>>> I want to run more trials than workers for load balancing. (And possibly
>>> because I'm interested in summary results of individual trials, not the
>>> entire raw results).
>>>
>>> With the help of the simplified version of pmap presented in the docs (
>>> http://julia.readthedocs.org/en/latest/manual/parallel-computing/), I
>>> have a tenuous understanding of how pmap works. Although the actual
>>> implementation scares me.
>>> In any case, I was wondering before I progress further, whether a
>>> modified version of pmap could be designed to reduce on-the-fly.
>>> Here are some modifications to the simplified, documentation version.
>>> Would something like this work? I'm worried about the shared updates to
>>> final_result. Will these happen orderly? What else should I consider?
>>>
>>>
>>> * function pmap(f, lst)
>>>
>>> * function pmap_reduce(f, lst, reduce_fn)  # extra argument is reduce
>>> function
>>> np = nprocs()  # determine the number of processes available
>>> n = length(lst)
>>>
>>>
>>> *   results = cell(n)
>>> *   results = cell(np)  # hold results for currently executing procs
>>> only
>>> *   final_result = cell(1)  # holds the final, reduced result
>>>
>>> i = 1
>>> # function to produce the next work item from the queue.
>>> # in this case it's just an index.
>>> nextidx() = (idx=i; i+=1; idx)
>>>
>>> @sync begin
>>> for p=1:np
>>> if p != myid() || np == 1
>>> @async begin
>>> while true
>>> idx = nextidx()
>>> if idx > n
>>> break
>>> end
>>>
>>> *   results[idx] = remotecall_fetch(p, f, lst[idx])
>>> *   results[p] = remotecall_fetch(p, f, lst[idx])  #
>>> return results into array indexed by proc
>>> *   reduce_fn(final_result, results[p])  # combine
>>> results[p] into final_result using reduction function
>>> end
>>> end
>>> end
>>> end
>>> end
>>>
>>> *   results
>>> *   final_result  # return reduced result
>>> end
>>>
>>>
>>>


Re: [julia-users] pmap - version to return reduced result only

2015-07-16 Thread Greg Plowman
OK thanks.
I didn't consider @parallel (probably because I considered it for only 
large trials of small work units, whereas I considered pmap more suited to 
relatively small trials of longer running work units)
In any case, @parallel works fine.

Old pmap code skeleton:
trialCounts = pmap(MySimulation, [1:numTrials], fill(numIter, numTrials))
totalCounts = sum(trialCounts)

New @parallel code
totalCounts = @parallel (+) for trial = 1:numTrials
MySimulation(trial, numIter)
end



However, I have 2 questions:

1. When I try to modify the @parallel code to assign the result to a variable 
inside the loop, I get an error.
I don't understand the @parallel macro, but I'm guessing I can't assign to a 
variable inside the loop?
 
totalCounts = @parallel (+) for trial = 1:numTrials
trialCount = MySimulation(trial, numPlays)
print(trialCount) # or some other processing with trialCount
end



2. Again, I don't understand the @parallel macro, but it seems to call preduce 
(see below), which seems to collect results in an array of size 
numTrials / nworkers().
If this is so, then memory requirement still has a dependency on the number 
of trials.
I was trying to limit the results array to the number of workers, 
independent of number of trials.
Is my understanding here correct?

function preduce(reducer, f, N::Int)
chunks = splitrange(N, nworkers())
results = cell(length(chunks))
for i in 1:length(chunks)
results[i] = @spawn f(first(chunks[i]), last(chunks[i]))
end
mapreduce(fetch, reducer, results)
end
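For what it's worth, the memory question can be checked by mimicking the chunking that preduce relies on. The helper below is an illustrative stand-in for Base's internal splitrange (an assumption, not the actual Base code): it always yields at most nw ranges, so the `results` array scales with the worker count, not with the number of trials.

```julia
# Split 1:N into at most nw contiguous ranges (stand-in for the
# internal splitrange used by preduce above).
function split_into_chunks(N::Int, nw::Int)
    len, rem = divrem(N, nw)
    chunks = UnitRange{Int}[]
    lo = 1
    for i in 1:min(N, nw)
        hi = lo + len - 1 + (i <= rem ? 1 : 0)
        push!(chunks, lo:hi)
        lo = hi + 1
    end
    chunks
end

length(split_into_chunks(10000, 4))  # 4: one chunk per worker, regardless of N
```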



Greg








 

On Friday, July 10, 2015 at 12:24:16 PM UTC+10, Jameson wrote:

> this sounds like you may be looking for the `@parallel reduce_fn for itm = 
> lst; f(itm); end` map-reducer construct (described on the same page)?
>
> On Thu, Jul 9, 2015 at 9:23 PM Greg Plowman wrote:
>
>> I have been using pmap for simulations and find it very useful 
>> and convenient.
>> However, sometimes I want to run a large number of trials where the 
>> results are also large. This requires a lot of memory to hold the returned 
>> results.
>> If I'm only interested the final, reduced result, and not concerned with 
>> the raw individual trial results, then returning entire array seems 
>> unnecessary.
>> I want to reduce on the fly, avoiding the need to keep all trial results.
>> I want to run more trials than workers for load balancing. (And possibly 
>> because I'm interested in summary results of individual trials, not the 
>> entire raw results).
>>
>> With the help of the simplified version of pmap presented in the docs (
>> http://julia.readthedocs.org/en/latest/manual/parallel-computing/), I 
>> have a tenuous understanding of how pmap works. Although the actual 
>> implementation scares me.
>> In any case, I was wondering before I progress further, whether a 
>> modified version of pmap could be designed to reduce on-the-fly.
>> Here are some modifications to the simplified, documentation version.
>> Would something like this work? I'm worried about the shared updates to 
>> final_result. Will these happen orderly? What else should I consider?
>>
>>
>> * function pmap(f, lst)
>>
>> * function pmap_reduce(f, lst, reduce_fn)  # extra argument is reduce 
>> function 
>> np = nprocs()  # determine the number of processes available
>> n = length(lst)
>>
>>
>> *   results = cell(n)
>> *   results = cell(np)  # hold results for currently executing procs only
>> *   final_result = cell(1)  # holds the final, reduced result
>>
>> i = 1
>> # function to produce the next work item from the queue.
>> # in this case it's just an index.
>> nextidx() = (idx=i; i+=1; idx)
>>
>> @sync begin
>> for p=1:np
>> if p != myid() || np == 1
>> @async begin
>> while true
>> idx = nextidx()
>> if idx > n
>> break
>> end
>>
>> *   results[idx] = remotecall_fetch(p, f, lst[idx])
>> *   results[p] = remotecall_fetch(p, f, lst[idx])  # 
>> return results into array indexed by proc
>> *   reduce_fn(final_result, results[p])  # combine 
>> results[p] into final_result using reduction function
>> end
>> end
>> end
>> end
>> end
>>
>> *   results
>> *   final_result  # return reduced result
>> end
>>
>>
>>

Re: [julia-users] Question about @eval and quoting

2015-07-16 Thread Tom Breloff
Sorry, I'm not being very clear. Let's try some code. I ran your first 
version, which creates a global method "func" specialized on the function f. 
Here's the comparison to the local function created in the second version. 
Notice the areas in bold (rendered here as asterisks), which are where the 
function is called.


# this is the "func" from the first version

julia> println(@code_typed func(x,x))
Any[:($(Expr(:lambda, Any[:dest,:src], 
Any[Any[Any[:dest,Array{Float64,1},0],Any[:src,Array{Float64,1},0],Any[symbol("#s1"),Int64,2],Any[:i,Int64,18]],Any[],Any[UnitRange{Int64},Tuple{Int64,Int64},Float64,Int64,Float64,Float64,Float64,Int64,Int64],Any[]],
 
:(begin  # none, line 5:
GenSym(3) = (top(arraylen))(dest::Array{Float64,1})::Int64
GenSym(0) = $(Expr(:new, UnitRange{Int64}, 1, 
:(((top(getfield))(Intrinsics,:select_value))((top(sle_int))(1,GenSym(3))::Bool,GenSym(3),(top(box))(Int64,(top(sub_int))(1,1)))::Int64)))
#s1 = (top(getfield))(GenSym(0),:start)::Int64
unless (top(box))(Bool,(top(not_int))(#s1::Int64 === 
(top(box))(Int64,(top(add_int))((top(getfield))(GenSym(0),:stop)::Int64,1))::Bool))
 
goto 1
2: 
GenSym(7) = #s1::Int64
GenSym(8) = (top(box))(Int64,(top(add_int))(#s1::Int64,1))
i = GenSym(7)
#s1 = GenSym(8) # line 6:
*GenSym(4) = 
(top(arrayref))(src::Array{Float64,1},i::Int64)::Float64*
*GenSym(6) = 
(top(ccall))((top(tuple))("sin",GlobalRef(Base.Math,:libm))::Tuple{ASCIIString,ASCIIString},Float64,(top(svec))(Float64)::SimpleVector,GenSym(4),0)::Float64*
*GenSym(2) = 
(GlobalRef(Base.Math,:nan_dom_err))(GenSym(6),GenSym(4))::Float64*

(top(arrayset))(dest::Array{Float64,1},GenSym(2),i::Int64)::Array{Float64,1}
3: 
unless 
(top(box))(Bool,(top(not_int))((top(box))(Bool,(top(not_int))(#s1::Int64 
=== 
(top(box))(Int64,(top(add_int))((top(getfield))(GenSym(0),:stop)::Int64,1))::Bool
 
goto 2
1: 
0: 
return
end::Void]


#this is a slightly modified second version:

julia> function map2!{F}(f::F, dest::AbstractArray, src::AbstractArray)
   function func(dest, src)
   for i in 1:length(dest)
   dest[i] = f(src[i])
   end
   end
   println(@code_typed func(dest, src))
   return dest
   end
map2! (generic function with 1 method)

julia> map2!(sin, x,x)
Any[:($(Expr(:lambda, Any[:dest,:src], 
Any[Any[Any[:dest,Array{Float64,1},0],Any[:src,Array{Float64,1},0],Any[symbol("#s1"),Int64,2],Any[:i,Int64,18]],Any[Any[:f,Function,1]],Any[UnitRange{Int64},Tuple{Int64,Int64},Any,Int64,Int64,Int64],Any[:F]],
 
:(begin  # none, line 3:
GenSym(3) = (top(arraylen))(dest::Array{Float64,1})::Int64
GenSym(0) = $(Expr(:new, UnitRange{Int64}, 1, 
:(((top(getfield))(Intrinsics,:select_value))((top(sle_int))(1,GenSym(3))::Bool,GenSym(3),(top(box))(Int64,(top(sub_int))(1,1)))::Int64)))
#s1 = (top(getfield))(GenSym(0),:start)::Int64
unless (top(box))(Bool,(top(not_int))(#s1::Int64 === 
(top(box))(Int64,(top(add_int))((top(getfield))(GenSym(0),:stop)::Int64,1))::Bool))
 
goto 1
2: 
GenSym(4) = #s1::Int64
GenSym(5) = (top(box))(Int64,(top(add_int))(#s1::Int64,1))
i = GenSym(4)
#s1 = GenSym(5) # line 4:
*GenSym(2) = 
(f::F)((top(arrayref))(src::Array{Float64,1},i::Int64)::Float64)*

(top(arrayset))(dest::Array{Float64,1},convert(Float64,GenSym(2)),i::Int64)::Array{Float64,1}
3: 
unless 
(top(box))(Bool,(top(not_int))((top(box))(Bool,(top(not_int))(#s1::Int64 
=== 
(top(box))(Int64,(top(add_int))((top(getfield))(GenSym(0),:stop)::Int64,1))::Bool
 
goto 2
1: 
0: 
return
end::Void]
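The asterisked lines are the crux: the global @eval version lowers the sin call to a direct ccall into libm, while the closure version goes through a generic (f::F)(...) call. A minimal way to reproduce the contrast yourself (a sketch in 0.4-era syntax; the helper names are made up):

```julia
x = rand(10)

# f resolved at compile time: the sin call can be specialized/inlined.
function func_global(dest, src)
    for i in 1:length(dest)
        dest[i] = sin(src[i])
    end
end

# f captured from an enclosing scope: called as a generic Function value.
make_func(f) = (dest, src) -> begin
    for i in 1:length(dest)
        dest[i] = f(src[i])
    end
end

g = make_func(sin)
println(@code_typed func_global(x, x))  # look for the direct libm ccall
println(@code_typed g(x, x))            # look for the generic call through f
```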





On Thursday, July 16, 2015 at 9:53:57 PM UTC-4, David Gold wrote:
>
> Yichao & Tom,
>
> Thank you both for your explanations. It's starting to come together for 
> me now. Tom, I will admit that I'm unclear on what you mean by `f` not 
> being accessed as `the same kind of function` between access in the global 
> scope vs. access within the closure of `map`. I mean, I (more or less) 
> understand the distinction you draw subsequently, and now I'm just trying 
> to picture how that's working w/r/t to the Julia internals. As in, are the 
> objects themselves of different types? Are they represented differently in 
> memory? Or is it that they're treated differently by the compiler due to 
> some subtlety about how `quote` expressions get lowered? Currently the most 
> sensible mental picture of your explanation I can draw for myself involves 
> the latter explanation (i.e. the compiler-related one).
>
> In any case, I really appreciate your taking the time to reason through 
> this with me. 
>
> On Thursday, July 16, 2015 at 6:02:29 PM UTC-4, Tom Breloff wrote:
>>
>> David: the "f(src(i))" in the second call is referencing the local 
>> argument "f::Function" passed into the

Re: [julia-users] Question about @eval and quoting

2015-07-16 Thread David Gold
Yichao & Tom,

Thank you both for your explanations. It's starting to come together for me 
now. Tom, I will admit that I'm unclear on what you mean by `f` not being 
accessed as `the same kind of function` between access in the global scope 
vs. access within the closure of `map`. I mean, I (more or less) understand 
the distinction you draw subsequently, and now I'm just trying to picture 
how that's working w.r.t. the Julia internals. As in, are the objects 
themselves of different types? Are they represented differently in memory? 
Or is it that they're treated differently by the compiler due to some 
subtlety about how `quote` expressions get lowered? Currently the most 
sensible mental picture of your explanation I can draw for myself involves 
the latter explanation (i.e. the compiler-related one).

In any case, I really appreciate your taking the time to reason through 
this with me. 

On Thursday, July 16, 2015 at 6:02:29 PM UTC-4, Tom Breloff wrote:
>
> David: the "f(src(i))" in the second call is referencing the local 
> argument "f::Function" passed into the map! method.  It is not accessing 
> the same kind of function as if you defined it globally.  I think the first 
> definition is effectively just grabbing f's Symbol, and then calling the 
> method associated with that symbol in global scope.  When functions can be 
> resolved fully at compile-time, there's a chance for better type 
> resolution.  The second version keeps a "Function" object around for every 
> call while the first version only uses that "Function" object to get the 
> symbol/name.
>
> Hope this helps?
>
> On Thu, Jul 16, 2015 at 4:21 PM, Yichao Yu wrote:
>
>> On Thu, Jul 16, 2015 at 4:15 PM, David Gold wrote:
>> > First, a note: Please disregard the use of `A` in the above function
>> > definitions! Those ought to be `src`. I just got very confused as to why
>> > those definitions worked at all, until I realized that my test `Array`
>> > argument was also named `A`... So, the definitions in question ought to 
>> be
>> >
>> > function map!{F}(f::F, dest::AbstractArray, src::AbstractArray)
>> > _f = Expr(:quote, f)
>> > @eval begin
>> > function func(dest, src)
>> > for i in 1:length(dest)
>> > dest[i] = $_f(src[i])
>> > end
>> > end
>> > func($dest, $src)
>> > return $dest
>> > end
>> > end
>> >
>> > function map!{F}(f::F, dest::AbstractArray, src::AbstractArray)
>> >
>> > function func(dest, src)
>> >
>> > for i in 1:length(dest)
>> >
>> > dest[i] = f(src[i])
>> >
>> > end
>> >
>> > end
>> >
>> > func(dest, src)
>> >
>> > return dest
>> >
>> > end
>> >
>> >
>> > (though technically only the `A` as called in `func(dest, A)` in the old
>> > definitions really mattered).
>> >
>> >
>> > Tom,
>> >
>> > I don't understand the difference that global scope makes. `f` is not 
>> passed
>>
>> My guess is that this is because closures are slow in julia and IIRC
>> type inference is not doing a very good job at infering referencing to
>> variables in the outer scope, especially since those can be changed by
>> the closure.
>>
>> > as an argument to `func` -- why is the subsequent call `func(dest, 
>> src)` not
>> > amenable to type inference w/r/t to the runtime types of `dest`, `src` 
>> and
>> > the knowledge that the particular value of `f` as passed to `map!`  is
>> > hardcoded into the `func`'s body? Does the compiler implicitly treat 
>> `f` as
>> > an "argument" of `func` when it senses that it is inherited from the 
>> closure
>> > defined by `map`? Does the fact that `eval` works in global scope
>> > effectively "trick" (not at all confident in this word choice) the 
>> compiler
>> > into forgetting that `f` is only present in the body of `func` because 
>> it
>> > was at one point the argument of `map!`?
>> >
>> > On Thursday, July 16, 2015 at 1:16:12 PM UTC-4, Tom Breloff wrote:
>> >>
>> >> I believe eval puts the function in global scope and thus has complete
>> >> type information on the function.  Your second attempt takes in a 
>> "Function"
>> >> type which could be anything, and thus the compiler can't specialize 
>> very
>> >> much.  This problem may eventually go away if the Function type can be
>> >> parametized with input and output type information.
>> >>
>> >> On Thu, Jul 16, 2015 at 11:22 AM, David Gold wrote:
>> >>>
>> >>> Suppose I want to apply the trick that makes `broadcast!` fast to 
>> `map!`.
>> >>> Because of the specificity of `map!`'s functionality, I don't 
>> necessarily
>> >>> need to cache the internally declared functions, so I just write:
>> >>>
>> >>> function map!{F}(f::F, dest::AbstractArray, src::AbstractArray)
>> >>> _f = Expr(:quote, f)
>> >>> @eval begin
>> >>> function func(dest, A)
>> >>> for i in 1:length(dest)
>> >>> dest[i] = $_f(A[i])
>> >>> end
>> >>> end
>> >>> func(

[julia-users] Re: Solving nonlinear equations quickly using FastAnonymous @anon and Julia 0.4

2015-07-16 Thread Andrew
fzero(f, j, guess) works for me when f and j are functions, and fzero(Af, 
guess) works for me now when Af is an @anon function. 

On Tuesday, July 7, 2015 at 7:34:39 PM UTC-4, j verzani wrote:
>
> Okay, this just got fixed as much as I could with v"0.1.15" (there is no 
> fzero(f,j,guess) signature).
>
> On Tuesday, July 7, 2015 at 4:38:41 PM UTC-4, Andrew wrote:
>>
>> Just checked. So,  Roots.fzero(f, guess) does work. However, 
>> Roots.fzero(f, j, guess) doesn't work, and neither does Roots.newton(f, j, 
>> guess). 
>>
>> I looked at the Roots.jl source and I see ::Function annotations on the 
>> methods with the jacobian, but not the regular one.
>>
>> On Tuesday, July 7, 2015 at 4:22:17 PM UTC-4, j verzani wrote:
>>>
>>> It isn't your first choice, but `Roots.fzero` can have `@anon` functions 
>>> passed to it, unless I forgot to tag a new version after making that change 
>>> on master not so long ago.
>>>
>>> On Tuesday, July 7, 2015 at 2:29:51 PM UTC-4, Andrew wrote:

 I'm writing this in case other people are trying to do the same thing 
 I've done, and also to see if anyone has any suggestions.

 Recently I have been writing some code that requires solving lots (tens 
 of thousands) of simple non-linear equations. The application is economics: 
 I am solving an intratemporal first-order condition for optimal labor 
 supply given the state and a savings decision. This requires solving the 
 same equation many times, but with different parameters.
 same equation many times, but with different parameters.

 As far as I know, the standard ways to do this are to either define a 
 nested function which by the lexical scoping rules inherits the parameters 
 of the outer function, or use an anonymous function. Both these methods 
 are 
 slow right now because Julia can't inline those functions. However, the 
 FastAnonymous package lets you define an anonymous "function", which 
 behaves exactly like a function but isn't type ::Function, which is fast. 
 Crucially for me, in Julia 0.4 you can modify the parameters of the 
 function you get out of FastAnonymous. I rewrote some code I had which 
 depended on solving a lot of non-linear equations, and it's now 3 times as 
 fast, running in 2s instead of 6s.

 Here I'll describe a simplified version of my setup and point out a few 
 issues.

 1. I store the anonymous function in a type that I will pass along to 
 the function which needs to solve the nonlinear equation. I use a 
 parametric type here since the type of an anonymous function seems to vary 
 with every instance. For example, 

 typeof(UF.fhoursFOC)
 FastAnonymous.##Closure#11431{Ptr{Void} 
 @0x7f2c2eb26e30,0x10e636ff02d85766,(:h,)}


 To construct the type,

 immutable CRRA_labor{T1, T2} <: LaborChoice # <: means "subtype of"
 sigmac::Float64
 sigmal::Float64
 psi::Float64
 hoursmax::Float64
 state::State # Encodes info on how to solve itself
 fhoursFOC::T1
 fJACOBhoursFOC::T2
 end

 To set up the anonymous functions fhoursFOC and fJACOBhoursFOC (the 
 jacobian), I define a constructor 

 function CRRA_labor(sigmac,sigmal,psi,hoursmax,state)
 fhoursFOC = @anon h -> hoursFOC(CRRA_labor(sigmac,sigmal,psi,
 hoursmax,state,0., 0.) , h, state)
 fJACOBhoursFOC = @anon jh -> JACOBhoursFOC(CRRA_labor(sigmac,sigmal
 ,psi,hoursmax,state,0., 0.) , jh, state)
 CRRA_labor(sigmac,sigmal,psi,hoursmax,state,fhoursFOC, 
 fJACOBhoursFOC)
 end

 This looks a bit complicated because the nonlinear equation I need to 
 solve, hoursFOC, relies on the type CRRA_labor, as well as some aggregate 
 and idiosyncratic state info, to set up the problem. To encode this 
 information, I define a dummy instance of CRRA_labor, where I supply 0's 
 in 
 place of the anonymous functions. I tried to make a self-referential type 
 here as described in the documentation, but I couldn't get it to work, so 
 I 
 went with the dummy instance instead.

 @anon sets up the anonymous function. This means that code like 
 fhoursFOC(0.5) will return a value.

 2. Now that I have my anonymous function taking only 1 variable, I can 
 use the nonlinear equation solver. Unfortunately, the existing nonlinear 
 equation solvers like Roots.fzero and NLsolve ask the argument to be of 
 type ::Function. Since anonymous functions work like functions but are 
 actually some different type, they wouldn't accept my argument. Instead, I 
 wrote my own Newton method, which is like 5 lines of code, where I don't 
 restrict the argument type.

 I think it would be very straightforward to make this a multivariate 
 Newton method.

 function myNewton(f, j, x)
 for n = 1:100
 fx , jx = f(x), j(x)
 abs(fx) < 1e-6 && r
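The archived message is cut off mid-function; here is a hedged reconstruction of the kind of five-line Newton solver described above (my completion, not Andrew's exact code). It deliberately leaves f and j unannotated so FastAnonymous @anon callables are accepted:

```julia
# Simple scalar Newton iteration: f is the residual, j its derivative,
# x the initial guess. No ::Function annotation, so @anon objects work.
function my_newton(f, j, x; tol = 1e-6, maxiter = 100)
    for n in 1:maxiter
        fx = f(x)
        abs(fx) < tol && return x
        x -= fx / j(x)
    end
    error("Newton iteration did not converge")
end

my_newton(h -> h^2 - 2.0, h -> 2h, 1.0)  # ≈ sqrt(2)
```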

[julia-users] @manipulate button not updating Markdown display

2015-07-16 Thread Renee Trochet
Hi all,

I'm working on a module that will use a toggle button to show and hide 
content in an IJulia notebook. Everything works as intended if the content 
is HTML, but I also need to be able to display equations. I have two 
questions:

   1. Is it possible to display equations within blocks of HTML? If so, 
   how? It would be nice to take advantage of the MathJax already running in 
   the notebooks. I'd love it if someone could point me in the right direction.
   2. How can I get Markdown to hide/show when the button is 
   clicked? Currently, clicking the button correctly changes the field that 
   determines whether to display the object, but you have to run the code cell 
   again for the display to change (see Section 2).

Below are the relevant bits of code.

I'd appreciate any pointers you can give. Thank you!

-

*1) *This code for HTML blocks works correctly:

using Reactive
using Interact
import Base.writemime

type Revealable
html::ASCIIString
divclass::ASCIIString
show::Bool
end

function revealable(x::Revealable)
    @manipulate for n in togglebutton(; label=string("Show/Hide",
            x.divclass == "" ? "" : string(" ", uppercase(x.divclass[1]), x.divclass[2:end])),
            value=x.show, signal=Input(x.show))
        x.show = n
        x
    end
end

function Base.writemime(stream, ::MIME"text/html", x::Revealable)
    if x.show
        println(stream, string("", x.html, ""))
    else
        println(stream, """
        """)
    end
end

To run it:

h = Revealable("Any HTML can go here!", "hint", false)
revealable(h)



*2)* This code requires the user to re-run the cell after clicking the 
button:

using Markdown
using Reactive
using Interact
import Base.writemime

type Revealable
    content::Markdown.MD
    divclass::ASCIIString
    show::Bool
end

function revealable(x::Revealable)
    @manipulate for n in togglebutton(;
            label = string("Show/Hide", x.divclass == "" ? "" :
                           string(" ", uppercase(x.divclass[1]), x.divclass[2:end])),
            value = x.show, signal = Input(x.show))
        x.show = n
        x
    end
end

function writemime(stream, ::MIME"text/latex", x::Revealable)
    if x.show
        display(x.content)
    else
        display("")
    end
end

To run it:

m = Revealable(md"""
# Heading!

Here is some LaTeX: ${3+a}\over{2-b^4}$
""", "hint", false)

revealable(m)



[julia-users] Re: Read GML format graphs by using LightGraphs.jl

2015-07-16 Thread andrew cooke

hi, seth contacted me to see whether my ParserCombinator library could do 
this, and i've just finished adding support for GML.  you can see it at 
https://github.com/andrewcooke/ParserCombinator.jl#parsers

that is currently only available via git (not yet in a published release).  
i've also emailed seth.  if either of you could have a look and tell me 
whether it's adequate or not, and / or report any bugs then i can look at 
releasing a version.

cheers,
andrew


On Saturday, 11 July 2015 11:04:08 UTC-3, Seth wrote:
>
> Hi Charles,
>
> You're correct; for persistence, LightGraphs currently supports an 
> internal graph representation and GraphML. If you can convert to GraphML 
> you're in luck; otherwise, if you open up an issue someone might be able to 
> code something up.
>
> The quickest thing, perhaps, would be to import the gml into NetworkX (
> http://networkx.github.io) and then write it out as GraphML, which you 
> should then be able to use in LightGraphs.
>
> Seth.
>
>
> On Saturday, July 11, 2015 at 3:48:52 AM UTC-7, Charles Santana wrote:
>>
>> Hi folks,
>>
>> Following the suggestion of Seth (
>> https://groups.google.com/forum/#!topic/julia-users/Ftdo2LmxC-g) I am 
>> trying to use LightGraphs.jl to read my graph files. 
>>
>> My graphs are in GML format (
>> http://gephi.github.io/users/supported-graph-formats/gml-format/). 
>> However, as far as I understand, LightGraphs.jl can not read graphs in this 
>> format.
>>
>> I just found this thread talking about the creation of a GraphsIO.jl and 
>> its integration with Graphs.jl and LightGraphs.jl (
>> https://github.com/JuliaLang/Graphs.jl/issues/37) Do you have any news 
>> about it? How can I read GML files to work with LightGraphs.jl?
>>
>> Thanks for any help!
>>
>> Charles
>>
>> -- 
>> Um axé! :)
>>
>> --
>> Charles Novaes de Santana, PhD
>> http://www.imedea.uib-csic.es/~charles
>>  
>

Re: [julia-users] Question about @eval and quoting

2015-07-16 Thread Tom Breloff
David: the "f(src(i))" in the second call is referencing the local argument
"f::Function" passed into the map! method.  It is not accessing the same
kind of function as if you defined it globally.  I think the first
definition is effectively just grabbing f's Symbol, and then calling the
method associated with that symbol in global scope.  When functions can be
resolved fully at compile-time, there's a chance for better type
resolution.  The second version keeps a "Function" object around for every
call while the first version only uses that "Function" object to get the
symbol/name.

Hope this helps?

On Thu, Jul 16, 2015 at 4:21 PM, Yichao Yu  wrote:

> On Thu, Jul 16, 2015 at 4:15 PM, David Gold 
> wrote:
> > First, a note: Please disregard the use of `A` in the above function
> > definitions! Those ought to be `src`. I just got very confused as to why
> > those definitions worked at all, until I realized that my test `Array`
> > argument was also named `A`... So, the definitions in question ought to
> be
> >
> > function map!{F}(f::F, dest::AbstractArray, src::AbstractArray)
> > _f = Expr(:quote, f)
> > @eval begin
> > function func(dest, src)
> > for i in 1:length(dest)
> > dest[i] = $_f(src[i])
> > end
> > end
> > func($dest, $src)
> > return $dest
> > end
> > end
> >
> > function map!{F}(f::F, dest::AbstractArray, src::AbstractArray)
> >
> > function func(dest, src)
> >
> > for i in 1:length(dest)
> >
> > dest[i] = f(src[i])
> >
> > end
> >
> > end
> >
> > func(dest, src)
> >
> > return dest
> >
> > end
> >
> >
> > (though technically only the `A` as called in `func(dest, A)` in the old
> > definitions really mattered).
> >
> >
> > Tom,
> >
> > I don't understand the difference that global scope makes. `f` is not
> passed
>
> My guess is that this is because closures are slow in julia and IIRC
> type inference is not doing a very good job at inferring references to
> variables in the outer scope, especially since those can be changed by
> the closure.
>
> > as an argument to `func` -- why is the subsequent call `func(dest, src)`
> not
> > amenable to type inference w/r/t to the runtime types of `dest`, `src`
> and
> > the knowledge that the particular value of `f` as passed to `map!`  is
> > hardcoded into the `func`'s body? Does the compiler implicitly treat `f`
> as
> > an "argument" of `func` when it senses that it is inherited from the
> closure
> > defined by `map`? Does the fact that `eval` works in global scope
> > effectively "trick" (not at all confident in this word choice) the
> compiler
> > into forgetting that `f` is only present in the body of `func` because it
> > was at one point the argument of `map!`?
> >
> > On Thursday, July 16, 2015 at 1:16:12 PM UTC-4, Tom Breloff wrote:
> >>
> >> I believe eval puts the function in global scope and thus has complete
> >> type information on the function.  Your second attempt takes in a
> "Function"
> >> type which could be anything, and thus the compiler can't specialize
> very
> >> much.  This problem may eventually go away if the Function type can be
> >> parameterized with input and output type information.
> >>
> >> On Thu, Jul 16, 2015 at 11:22 AM, David Gold 
> wrote:
> >>>
> >>> Suppose I want to apply the trick that makes `broadcast!` fast to
> `map!`.
> >>> Because of the specificity of `map!`'s functionality, I don't
> necessarily
> >>> need to cache the internally declared functions, so I just write:
> >>>
> >>> function map!{F}(f::F, dest::AbstractArray, src::AbstractArray)
> >>> _f = Expr(:quote, f)
> >>> @eval begin
> >>> function func(dest, A)
> >>> for i in 1:length(dest)
> >>> dest[i] = $_f(A[i])
> >>> end
> >>> end
> >>> func($dest, $A)
> >>> return $dest
> >>> end
> >>> end
> >>>
> >>> which does indeed show improved performance:
> >>>
> >>> srand(1)
> >>> N = 5_000_000
> >>> A = rand(N)
> >>> X = Array(Float64, N)
> >>> f(x) = 5 * x
> >>> map!(f, X, A);
> >>>
> >>> julia> map!(f, X, A);
> >>>
> >>>
> >>> julia> @time map!(f, X, A);
> >>>
> >>>   17.459 milliseconds (2143 allocations: 109 KB)
> >>>
> >>>
> >>> julia> Base.map!(f, X, A);
> >>>
> >>>
> >>> julia> @time Base.map!(f, X, A);
> >>>
> >>>  578.520 milliseconds (1 k allocations: 305 MB, 6.45% gc time)
> >>>
> >>>
> >>> Okay. But the following attempt does not experience the same speedup:
> >>>
> >>>
> >>> function map!{F}(f::F, dest::AbstractArray, src::AbstractArray)
> >>>
> >>> function func(dest, A)
> >>>
> >>> for i in 1:length(dest)
> >>>
> >>> dest[i] = f(A[i])
> >>>
> >>> end
> >>>
> >>> end
> >>>
> >>> func(dest, A)
> >>>
> >>> return dest
> >>>
> >>> end
> >>>
> >>>
> >>> julia> map!(f, X, A);
> >>>
> >>>
> >>> julia> @time map!(f, X, A);
> >>>
> >>>  564.823 milliseconds (2 k allocations: 305 MB, 6.44% gc time)

Re: [julia-users] Question about @eval and quoting

2015-07-16 Thread Yichao Yu
On Thu, Jul 16, 2015 at 4:15 PM, David Gold  wrote:
> First, a note: Please disregard the use of `A` in the above function
> definitions! Those ought to be `src`. I just got very confused as to why
> those definitions worked at all, until I realized that my test `Array`
> argument was also named `A`... So, the definitions in question ought to be
>
> function map!{F}(f::F, dest::AbstractArray, src::AbstractArray)
> _f = Expr(:quote, f)
> @eval begin
> function func(dest, src)
> for i in 1:length(dest)
> dest[i] = $_f(src[i])
> end
> end
> func($dest, $src)
> return $dest
> end
> end
>
> function map!{F}(f::F, dest::AbstractArray, src::AbstractArray)
>
> function func(dest, src)
>
> for i in 1:length(dest)
>
> dest[i] = f(src[i])
>
> end
>
> end
>
> func(dest, src)
>
> return dest
>
> end
>
>
> (though technically only the `A` as called in `func(dest, A)` in the old
> definitions really mattered).
>
>
> Tom,
>
> I don't understand the difference that global scope makes. `f` is not passed

My guess is that this is because closures are slow in julia and IIRC
type inference is not doing a very good job at inferring references to
variables in the outer scope, especially since those can be changed by
the closure.
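
A schematic of the capture Yichao describes (my illustration, not code from the thread): the inner function reads `f` through the enclosing scope rather than as a compile-time constant, so its calls are not specialized the way a global, eval-defined function's would be.

```julia
# In the slow version, the inner function captures `f` from the enclosing
# scope; inference sees a variable the closure could in principle rebind,
# so each call dispatches through the capture instead of a known function.
function apply_captured(f, xs)
    g(i) = f(xs[i])          # `f` and `xs` are captured variables
    s = 0.0
    for i in 1:length(xs)
        s += g(i)            # unspecialized call through the capture
    end
    s
end
```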

> as an argument to `func` -- why is the subsequent call `func(dest, src)` not
> amenable to type inference w/r/t to the runtime types of `dest`, `src` and
> the knowledge that the particular value of `f` as passed to `map!`  is
> hardcoded into the `func`'s body? Does the compiler implicitly treat `f` as
> an "argument" of `func` when it senses that it is inherited from the closure
> defined by `map`? Does the fact that `eval` works in global scope
> effectively "trick" (not at all confident in this word choice) the compiler
> into forgetting that `f` is only present in the body of `func` because it
> was at one point the argument of `map!`?
>
> On Thursday, July 16, 2015 at 1:16:12 PM UTC-4, Tom Breloff wrote:
>>
>> I believe eval puts the function in global scope and thus has complete
>> type information on the function.  Your second attempt takes in a "Function"
>> type which could be anything, and thus the compiler can't specialize very
>> much.  This problem may eventually go away if the Function type can be
>> parameterized with input and output type information.
>>
>> On Thu, Jul 16, 2015 at 11:22 AM, David Gold  wrote:
>>>
>>> Suppose I want to apply the trick that makes `broadcast!` fast to `map!`.
>>> Because of the specificity of `map!`'s functionality, I don't necessarily
>>> need to cache the internally declared functions, so I just write:
>>>
>>> function map!{F}(f::F, dest::AbstractArray, src::AbstractArray)
>>> _f = Expr(:quote, f)
>>> @eval begin
>>> function func(dest, A)
>>> for i in 1:length(dest)
>>> dest[i] = $_f(A[i])
>>> end
>>> end
>>> func($dest, $A)
>>> return $dest
>>> end
>>> end
>>>
>>> which does indeed show improved performance:
>>>
>>> srand(1)
>>> N = 5_000_000
>>> A = rand(N)
>>> X = Array(Float64, N)
>>> f(x) = 5 * x
>>> map!(f, X, A);
>>>
>>> julia> map!(f, X, A);
>>>
>>>
>>> julia> @time map!(f, X, A);
>>>
>>>   17.459 milliseconds (2143 allocations: 109 KB)
>>>
>>>
>>> julia> Base.map!(f, X, A);
>>>
>>>
>>> julia> @time Base.map!(f, X, A);
>>>
>>>  578.520 milliseconds (1 k allocations: 305 MB, 6.45% gc time)
>>>
>>>
>>> Okay. But the following attempt does not experience the same speedup:
>>>
>>>
>>> function map!{F}(f::F, dest::AbstractArray, src::AbstractArray)
>>>
>>> function func(dest, A)
>>>
>>> for i in 1:length(dest)
>>>
>>> dest[i] = f(A[i])
>>>
>>> end
>>>
>>> end
>>>
>>> func(dest, A)
>>>
>>> return dest
>>>
>>> end
>>>
>>>
>>> julia> map!(f, X, A);
>>>
>>>
>>> julia> @time map!(f, X, A);
>>>
>>>  564.823 milliseconds (2 k allocations: 305 MB, 6.44% gc time)
>>>
>>>
>>> My question is: Why is `eval`-ing the body of `map!` necessary for
>>> supporting the type inference/other optimizations that give the first
>>> revised `map!` method greater performance? I suspect that there's something
>>> about what `eval` does, aside from just "evaluate an expression" that I'm
>>> not quite grokking -- but what? Also, what risks in particular does invoking
>>> `eval` at runtime inside the body of a function -- as opposed to directly
>>> inside the global scope of a module -- pose?
>>>
>>>
>>> Thanks,
>>>
>>> D
>>>
>>>
>>
>


[julia-users] Juno/LightTable - "Connected to Julia" but cannot evaluate

2015-07-16 Thread Eric Forgy
julia> versioninfo()
Julia Version 0.3.10
Commit c8ceeef* (2015-06-24 13:54 UTC)
Platform Info:
  System: Windows (x86_64-w64-mingw32)
  CPU: AMD Opteron(tm) Processor 4171 HE
  WORD_SIZE: 64
  BLAS: libopenblas (USE64BITINT DYNAMIC_ARCH NO_AFFINITY Barcelona)
  LAPACK: libopenblas
  LIBM: libopenlibm
  LLVM: libLLVM-3.3

-

Hi,
 
I was happily using Juno/LightTable on MacOS and now I am trying to install 
Juno/LightTable (version 0.7.2) on a Windows VM on Azure. Here are my 
user.behaviors:

[
 ;; The app tag is kind of like global scope. You assign behaviors that 
affect
 ;; all of Light Table to it.
 [:app :lt.objs.style/set-skin "dark"]
 [:app :lt.objs.plugins/load-js "user_compiled.js"]
 [:app :lt.objs.langs.julia/julia-path "C:\\Program 
Files\\Julia-0.3.10\\bin\\julia.exe"]
 ;; The editor tag is applied to all editors
 [:editor :lt.objs.editor/no-wrap]
 [:editor :lt.objs.style/set-theme "june-night"]
 ;; Here we can add behaviors to just clojure editors
 [:editor.clojure :lt.plugins.clojure/print-length 1000]
 ;; Behaviors specific to a user-defined object
 [:user.hello :lt.plugins.user/on-close-destroy]
 ;; To subtract a behavior, prefix the name with '-' e.g.
 ;;  [:app :-lt.objs.intro/show-intro]
]

When I launch LightTable, it says "Spinning up Julia" and then after a few 
seconds, it says "Connected to Julia". However, when I try to highlight or 
select any code and right-click, the "Evaluate" option is greyed out so I'm 
not able to run any code.

Any ideas?

Thanks a lot!
 


Re: [julia-users] Question about @eval and quoting

2015-07-16 Thread David Gold
First, a note: Please disregard the use of `A` in the above function 
definitions! Those ought to be `src`. I just got very confused as to why 
those definitions worked at all, until I realized that my test `Array` 
argument was also named `A`... So, the definitions in question ought to be

function map!{F}(f::F, dest::AbstractArray, src::AbstractArray)
_f = Expr(:quote, f)
@eval begin
function func(dest, src)
for i in 1:length(dest)
dest[i] = $_f(src[i])
end
end
func($dest, $src)
return $dest
end
end

function map!{F}(f::F, dest::AbstractArray, src::AbstractArray)

function func(dest, src)

for i in 1:length(dest)

dest[i] = f(src[i])

end

end

func(dest, src)

return dest

end


(though technically only the `A` as called in `func(dest, A)` in the old 
definitions really mattered).

Tom,

I don't understand the difference that global scope makes. `f` is not 
passed as an argument to `func` -- why is the subsequent call `func(dest, 
src)` not amenable to type inference w/r/t to the runtime types of `dest`, 
`src` and the knowledge that the particular value of `f` as passed to 
`map!`  is hardcoded into the `func`'s body? Does the compiler implicitly 
treat `f` as an "argument" of `func` when it senses that it is inherited 
from the closure defined by `map`? Does the fact that `eval` works in 
global scope effectively "trick" (not at all confident in this word choice) 
the compiler into forgetting that `f` is only present in the body of `func` 
because it was at one point the argument of `map!`?

On Thursday, July 16, 2015 at 1:16:12 PM UTC-4, Tom Breloff wrote:
>
> I believe eval puts the function in global scope and thus has complete 
> type information on the function.  Your second attempt takes in a 
> "Function" type which could be anything, and thus the compiler can't 
> specialize very much.  This problem may eventually go away if the Function 
> type can be parameterized with input and output type information.
>
> On Thu, Jul 16, 2015 at 11:22 AM, David Gold  > wrote:
>
>> Suppose I want to apply the trick that makes `broadcast!` fast to `map!`. 
>> Because of the specificity of `map!`'s functionality, I don't necessarily 
>> need to cache the internally declared functions, so I just write:
>>
>> function map!{F}(f::F, dest::AbstractArray, src::AbstractArray)
>> _f = Expr(:quote, f)
>> @eval begin
>> function func(dest, A)
>> for i in 1:length(dest)
>> dest[i] = $_f(A[i])
>> end
>> end
>> func($dest, $A)
>> return $dest
>> end
>> end
>>
>> which does indeed show improved performance:
>>
>> srand(1)
>> N = 5_000_000
>> A = rand(N)
>> X = Array(Float64, N)
>> f(x) = 5 * x
>> map!(f, X, A);
>>
>> julia> map!(f, X, A);
>>
>>
>> julia> @time map!(f, X, A);
>>
>>   17.459 milliseconds (2143 allocations: 109 KB)
>>
>>
>> julia> Base.map!(f, X, A);
>>
>>
>> julia> @time Base.map!(f, X, A);
>>
>>  578.520 milliseconds (1 k allocations: 305 MB, 6.45% gc time)
>>
>>
>> Okay. But the following attempt does not experience the same speedup:
>>
>>
>> function map!{F}(f::F, dest::AbstractArray, src::AbstractArray)
>>
>> function func(dest, A)
>>
>> for i in 1:length(dest)
>>
>> dest[i] = f(A[i])
>>
>> end
>>
>> end
>>
>> func(dest, A)
>>
>> return dest
>>
>> end
>>
>>
>> julia> map!(f, X, A);
>>
>>
>> julia> @time map!(f, X, A);
>>
>>  564.823 milliseconds (2 k allocations: 305 MB, 6.44% gc time)
>>
>>
>> My question is: Why is `eval`-ing the body of `map!` necessary for 
>> supporting the type inference/other optimizations that give the first 
>> revised `map!` method greater performance? I suspect that there's something 
>> about what `eval` does, aside from just "evaluate an expression" that I'm 
>> not quite grokking -- but what? Also, what risks in particular does 
>> invoking `eval` at runtime inside the body of a function -- as opposed to 
>> directly inside the global scope of a module -- pose?
>>
>>
>> Thanks,
>>
>> D
>>
>>
>>
>

[julia-users] Re: Compile time factorial example using staged functions

2015-07-16 Thread Magnus Lie Hetland
I realize it's not what you were aiming for, but since you mentioned that 
you hoped for a simpler solution … you do have full use of the Julia 
language, compile-time, when writing macros. So you could use a loop, for 
example – or even call the built-in factorial function compile-time:

macro fac(n)
    :($(factorial(n)))
end

julia> macroexpand(:(@fac(10)))
3628800
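
The loop variant Magnus mentions could look like this (a sketch; `@facloop` is a hypothetical name, and it assumes `n` is an integer literal):

```julia
macro facloop(n)
    r = one(n)
    for i in 2:n
        r *= i           # this loop runs at macro-expansion time
    end
    :($r)                # splice the precomputed result into the expansion
end
```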


[julia-users] Parallel for loop over partitions iterator

2015-07-16 Thread Uthsav Chitra
So I'm trying to iterate over the list of partitions of something, say 
`1:n` for some `n` between 13 and 21. The code that I ideally want to run 
looks something like this:

valid_num = @parallel (+) for p in partitions(1:n)
  int(is_valid(p))
end

println(valid_num)

This would use the `@parallel for` to map-reduce my problem. For example, 
compare this to the example in the Julia documentation:

nheads = @parallel (+) for i=1:2
  Int(rand(Bool))
end

However, if I try my adaptation of the loop, I get the following error:

ERROR: `getindex` has no method matching 
getindex(::SetPartitions{UnitRange{Int64}}, ::Int64)
 in anonymous at no file:1433
 in anonymous at multi.jl:1279
 in run_work_thunk at multi.jl:621
 in run_work_thunk at multi.jl:630
 in anonymous at task.jl:6

which I think is because you cannot call `p[3]` if `p=partitions(1:n)`, 
which explains the getindex error.
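
One workaround (my sketch, untested at scale, and assuming `is_valid` is cheap to skip but expensive to run): since `@parallel for` needs an indexable range, loop over worker ids instead and let each worker walk the lazy iterator, evaluating every `nworkers()`-th partition round-robin so nothing is materialized:

```julia
# Hypothetical round-robin split: each worker traverses the lazy iterator and
# only evaluates the partitions assigned to it.
valid_num = @parallel (+) for w in 1:nworkers()
    c = 0
    for (i, p) in enumerate(partitions(1:n))
        if i % nworkers() == w - 1
            c += int(is_valid(p))    # is_valid is the poster's predicate
        end
    end
    c
end
```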

I've tried using `pmap` to solve my problem, but because the number of 
partitions can get really big, really quickly (there are more than 2.5 
million partitions of `1:13`, and when I get to `1:21` things will be 
huge), constructing such a large array becomes an issue. I left it running 
overnight and it still didn't finish.

Does anyone have any advice for how I can efficiently iterate over 
partitions in parallel? I have access to a ~30 core computer and my task 
seems easily parallelizable, so I would be really grateful if anyone knows 
a good way to do this in Julia. 

Thank you so much!


Re: [julia-users] The unique function and iterables of custom composite types

2015-07-16 Thread Stefan Karpinski
Immutables automatically have value-based hashing defined for them. That's
a dangerous default for mutable values since it makes it easy to stick
something in a dict, then mutate it, and "lose it", e.g.:

type Mutable
 x::Int
end

Base.hash(m::Mutable, h::UInt) = hash(m.x, h + (0x17d88030d571c6e3 % UInt))
==(m1::Mutable, m2::Mutable) = m1.x == m2.x

julia> m = Mutable(0)
Mutable(0)

julia> d = Dict()
Dict{Any,Any} with 0 entries

julia> d[m] = "here"
"here"

julia> m.x = 1
1

julia> d[m]
ERROR: KeyError: Mutable(1) not found
 in getindex at dict.jl:695

julia> d
Dict{Any,Any} with 1 entry:
  Mutable(1) => "here"


On Thu, Jul 16, 2015 at 12:35 PM, Seth  wrote:

>
>
> On Thursday, July 16, 2015 at 9:25:01 AM UTC-7, Matt Bauman wrote:
>>
>> On Thursday, July 16, 2015 at 12:19:25 PM UTC-4, milktrader wrote:
>>>
>>> Also, back to the OP question, is the correct solution to simply define
>>>
>>>  Base.hash(f::Foo) = f.x
>>>
>>
>> No, I'd define Base.hash(f::Foo) = hash(f.x, 0x64c74221932dea5b), where
>> I chose the constant by rand(UInt).  This way it won't collide with other
>> types.  I really need to spend a bit more time with my interfaces chapter
>> and add this info there.
>>
>>
> How would you do this (in a performant way) for a type that has two
> internal values? That is, I have
>
> immutable Foo{T}
>x::T
>y::T
> end
>
>
> Obviously, the hash should be based on both values, right? I could do
>
> Base.hash(f::Foo) = hash(hash(f.x, f.y), 0x64c74221932dea5b)
>
> But that calls hash twice. (Is this even necessary with immutables?)
>
>


Re: [julia-users] The unique function and iterables of custom composite types

2015-07-16 Thread Seth
OK, I think I got it:

type Foo{T}
  x::T
  y::T
end

Base.hash(f::Foo) = hash(f.x, hash(f.y, pair_seed))




On Thursday, July 16, 2015 at 9:46:12 AM UTC-7, Matt Bauman wrote:
>
> Goodness, I'll get this right one of these times.  Sorry for spouting off 
> so much disinformation!  I'll try to make up for it by adding more examples 
> to the documentation.
>
> const _foo_seed = UInt === UInt64 ? 0x64c74221932dea5b : 0x80783eb4
> Base.hash(f::Foo, h::UInt) = hash(f.x, h += _foo_seed)
>
> The canonical hash function to define is the one that takes a second 
> `UInt` argument.  Here's what the help has to say about hashing multiple 
> values: "New types should implement the 2-argument form, typically  by 
> calling the 2-argument `hash` method recursively in order to mix hashes of 
> the contents with each other (and with `h`)".  And that's exactly how most 
> base functions do it.
>
> On Thursday, July 16, 2015 at 12:35:25 PM UTC-4, Seth wrote:
>>
>>
>>
>> On Thursday, July 16, 2015 at 9:25:01 AM UTC-7, Matt Bauman wrote:
>>>
>>> On Thursday, July 16, 2015 at 12:19:25 PM UTC-4, milktrader wrote:

 Also, back to the OP question, is the correct solution to simply define 

  Base.hash(f::Foo) = f.x

>>>
>>> No, I'd define Base.hash(f::Foo) = hash(f.x, 0x64c74221932dea5b), where 
>>> I chose the constant by rand(UInt).  This way it won't collide with other 
>>> types.  I really need to spend a bit more time with my interfaces chapter 
>>> and add this info there.
>>>
>>>
>> How would you do this (in a performant way) for a type that has two 
>> internal values? That is, I have
>>
>> immutable Foo{T}
>>x::T
>>y::T
>> end
>>
>>
>> Obviously, the hash should be based on both values, right? I could do
>>
>> Base.hash(f::Foo) = hash(hash(f.x, f.y), 0x64c74221932dea5b)
>>
>> But that calls hash twice. (Is this even necessary with immutables?)
>>
>>

Re: [julia-users] Question about @eval and quoting

2015-07-16 Thread Tom Breloff
I believe eval puts the function in global scope and thus has complete type
information on the function.  Your second attempt takes in a "Function"
type which could be anything, and thus the compiler can't specialize very
much.  This problem may eventually go away if the Function type can be
parameterized with input and output type information.

On Thu, Jul 16, 2015 at 11:22 AM, David Gold  wrote:

> Suppose I want to apply the trick that makes `broadcast!` fast to `map!`.
> Because of the specificity of `map!`'s functionality, I don't necessarily
> need to cache the internally declared functions, so I just write:
>
> function map!{F}(f::F, dest::AbstractArray, src::AbstractArray)
> _f = Expr(:quote, f)
> @eval begin
> function func(dest, A)
> for i in 1:length(dest)
> dest[i] = $_f(A[i])
> end
> end
> func($dest, $A)
> return $dest
> end
> end
>
> which does indeed show improved performance:
>
> srand(1)
> N = 5_000_000
> A = rand(N)
> X = Array(Float64, N)
> f(x) = 5 * x
> map!(f, X, A);
>
> julia> map!(f, X, A);
>
>
> julia> @time map!(f, X, A);
>
>   17.459 milliseconds (2143 allocations: 109 KB)
>
>
> julia> Base.map!(f, X, A);
>
>
> julia> @time Base.map!(f, X, A);
>
>  578.520 milliseconds (1 k allocations: 305 MB, 6.45% gc time)
>
>
> Okay. But the following attempt does not experience the same speedup:
>
>
> function map!{F}(f::F, dest::AbstractArray, src::AbstractArray)
>
> function func(dest, A)
>
> for i in 1:length(dest)
>
> dest[i] = f(A[i])
>
> end
>
> end
>
> func(dest, A)
>
> return dest
>
> end
>
>
> julia> map!(f, X, A);
>
>
> julia> @time map!(f, X, A);
>
>  564.823 milliseconds (2 k allocations: 305 MB, 6.44% gc time)
>
>
> My question is: Why is `eval`-ing the body of `map!` necessary for
> supporting the type inference/other optimizations that give the first
> revised `map!` method greater performance? I suspect that there's something
> about what `eval` does, aside from just "evaluate an expression" that I'm
> not quite grokking -- but what? Also, what risks in particular does
> invoking `eval` at runtime inside the body of a function -- as opposed to
> directly inside the global scope of a module -- pose?
>
>
> Thanks,
>
> D
>
>
>


Re: [julia-users] The unique function and iterables of custom composite types

2015-07-16 Thread Matt Bauman
Goodness, I'll get this right one of these times.  Sorry for spouting off 
so much disinformation!  I'll try to make up for it by adding more examples 
to the documentation.

const _foo_seed = UInt === UInt64 ? 0x64c74221932dea5b : 0x80783eb4
Base.hash(f::Foo, h::UInt) = hash(f.x, h += _foo_seed)

The canonical hash function to define is the one that takes a second `UInt` 
argument.  Here's what the help has to say about hashing multiple values: 
"New types should implement the 2-argument form, typically  by calling the 
2-argument `hash` method recursively in order to mix hashes of the contents 
with each other (and with `h`)".  And that's exactly how most base 
functions do it.
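
Applied to the two-field `Foo` from earlier in the thread, the recursive mixing would look like this (a sketch following Matt's recipe):

```julia
const _foo_seed = UInt === UInt64 ? 0x64c74221932dea5b : 0x80783eb4

# Mix both fields into h by calling the 2-argument hash recursively,
# as the help text recommends.
Base.hash(f::Foo, h::UInt) = hash(f.y, hash(f.x, h + _foo_seed))
```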

On Thursday, July 16, 2015 at 12:35:25 PM UTC-4, Seth wrote:
>
>
>
> On Thursday, July 16, 2015 at 9:25:01 AM UTC-7, Matt Bauman wrote:
>>
>> On Thursday, July 16, 2015 at 12:19:25 PM UTC-4, milktrader wrote:
>>>
>>> Also, back to the OP question, is the correct solution to simply define 
>>>
>>>  Base.hash(f::Foo) = f.x
>>>
>>
>> No, I'd define Base.hash(f::Foo) = hash(f.x, 0x64c74221932dea5b), where 
>> I chose the constant by rand(UInt).  This way it won't collide with other 
>> types.  I really need to spend a bit more time with my interfaces chapter 
>> and add this info there.
>>
>>
> How would you do this (in a performant way) for a type that has two 
> internal values? That is, I have
>
> immutable Foo{T}
>x::T
>y::T
> end
>
>
> Obviously, the hash should be based on both values, right? I could do
>
> Base.hash(f::Foo) = hash(hash(f.x, f.y), 0x64c74221932dea5b)
>
> But that calls hash twice. (Is this even necessary with immutables?)
>
>

Re: [julia-users] The unique function and iterables of custom composite types

2015-07-16 Thread Seth


On Thursday, July 16, 2015 at 9:25:01 AM UTC-7, Matt Bauman wrote:
>
> On Thursday, July 16, 2015 at 12:19:25 PM UTC-4, milktrader wrote:
>>
>> Also, back to the OP question, is the correct solution to simply define 
>>
>>  Base.hash(f::Foo) = f.x
>>
>
> No, I'd define Base.hash(f::Foo) = hash(f.x, 0x64c74221932dea5b), where I 
> chose the constant by rand(UInt).  This way it won't collide with other 
> types.  I really need to spend a bit more time with my interfaces chapter 
> and add this info there.
>
>
How would you do this (in a performant way) for a type that has two 
internal values? That is, I have

immutable Foo{T}
   x::T
   y::T
end


Obviously, the hash should be based on both values, right? I could do

Base.hash(f::Foo) = hash(hash(f.x, f.y), 0x64c74221932dea5b)

But that calls hash twice. (Is this even necessary with immutables?)



Re: [julia-users] The unique function and iterables of custom composite types

2015-07-16 Thread Matt Bauman
On Thursday, July 16, 2015 at 12:25:01 PM UTC-4, Matt Bauman wrote:
>
> On Thursday, July 16, 2015 at 12:19:25 PM UTC-4, milktrader wrote:
>>
>> Also, back to the OP question, is the correct solution to simply define 
>>
>>  Base.hash(f::Foo) = f.x
>>
>
> No, I'd define Base.hash(f::Foo) = hash(f.x, 0x64c74221932dea5b), where I 
> chose the constant by rand(UInt).  This way it won't collide with other 
> types.  I really need to spend a bit more time with my interfaces chapter 
> and add this info there.
>

Gah, that won't work on 32 bit, sorry. The current idiom for this in base 
is something like:

const _foo_seed = UInt === UInt64 ? 0x64c74221932dea5b : 0x80783eb4
Base.hash(f::Foo) = hash(f.x, _foo_seed)

It'd be nice to make this easier.


Re: [julia-users] The unique function and iterables of custom composite types

2015-07-16 Thread Matt Bauman
On Thursday, July 16, 2015 at 12:19:25 PM UTC-4, milktrader wrote:
>
> Also, back to the OP question, is the correct solution to simply define 
>
>  Base.hash(f::Foo) = f.x
>

No, I'd define Base.hash(f::Foo) = hash(f.x, 0x64c74221932dea5b), where I 
chose the constant by rand(UInt).  This way it won't collide with other 
types.  I really need to spend a bit more time with my interfaces chapter 
and add this info there.

On Thursday, July 16, 2015 at 12:07:50 PM UTC-4, Seth wrote:

> If I'm reading between the lines correctly, the default/existing hash 
> function is based on the last byte of the ID? Is there a reason we don't 
> make the table wider to reduce the chances of collisions (or would this 
> have bad effects on memory utilization)?
>

It's not really a hash collision, but rather a hashtable index collision. 
 A new dict only has 16 slots for its keys, and the index for an element is 
simply chosen by the last 8 bits of its hash.  If those last 8 bits match 
the hash of an object already in the table, there's a collision, and the 
Dict checks to see if the existing element `isequal` to the new one (and 
doesn't look at the hash again).  So the hashes are different, but isequal 
says the objects are the same.  This is why things are wonky.
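
Schematically, the slot choice Matt describes looks like this (for a fresh 16-slot table; the exact masking is an implementation detail of Base's Dict):

```julia
# Only the low bits of the hash pick the slot in a small table, so two
# different hashes can land on the same index and trigger the isequal check.
h = hash(Foo(4), zero(UInt))
slot = (h & (16 - 1)) + 1    # index into a 16-slot table
```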


[julia-users] recv from UdpSocket with timeout?

2015-07-16 Thread Jay Kickliter
I'm curious to know also. I remember running into the same problem and not 
finding an easy solution. 

Re: [julia-users] The unique function and iterables of custom composite types

2015-07-16 Thread milktrader
Also, back to the OP question, is the correct solution to simply define 

 Base.hash(f::Foo) = f.x



On Thursday, July 16, 2015 at 12:13:15 PM UTC-4, Seth wrote:
>
> Ah, ok, thanks. This might cause issues with one of my packages, which is 
> why I'm interested. How would you approach creating the hash dispatch for 
> custom types, and does this impact immutables as well?
>
> On Thursday, July 16, 2015 at 9:09:08 AM UTC-7, Stefan Karpinski wrote:
>>
>> This is just an artifact of memory layout – hashing by identity is based 
>> on object_id which is based on memory address. It's not a meaningful 
>> behavioral difference between Julia versions. Sticking mutable objects for 
>> which hash and == disagree is an undefined behavior and causes dictionaries 
>> to do potentially weird things.
>>
>> On Thu, Jul 16, 2015 at 12:04 PM, Seth  wrote:
>>
>>> I can't because I just rebuilt to latest to test 
>>> https://github.com/JuliaLang/julia/issues/12063 - but I'll try on the 
>>> latest master...
>>>
>>> ... and Julia Version 0.4.0-dev+6005 Commit 242bf47 does not appear to 
>>> have the issue (I'm getting two results returned for unique()).
>>>
>>> On Thursday, July 16, 2015 at 8:36:03 AM UTC-7, Stefan Karpinski wrote:

 Dan and/or Seth, can you try that again and check if hash(foos[1]) and 
 hash(foos[2]) have the same last hex digit?

 On Thu, Jul 16, 2015 at 11:30 AM, Matt Bauman  wrote:

> Bizarre.  I happen to have last updated on *exactly* the same commit 
> SHA, but I'm seeing the original (expected) behavior:
>
> $ julia -q
> julia> versioninfo()
> Julia Version 0.4.0-dev+5860
> Commit 7fa43ed (2015-07-08 20:57 UTC)
> Platform Info:
>   System: Darwin (x86_64-apple-darwin14.3.0)
>   CPU: Intel(R) Core(TM) i5 CPU   M 520  @ 2.40GHz
>   WORD_SIZE: 64
>   BLAS: libopenblas (USE64BITINT NO_AFFINITY NEHALEM)
>   LAPACK: libopenblas
>   LIBM: libopenlibm
>   LLVM: libLLVM-3.3
>
> julia> type Foo
>x::Int
>end
>
> julia> ==(f1::Foo, f2::Foo) = f1.x == f2.x
> == (generic function with 109 methods)
>
> julia> unique([Foo(4),Foo(4)])
> 2-element Array{Foo,1}:
>  Foo(4)
>  Foo(4)
>
> julia> @which hash(Foo(4), zero(UInt))
> hash(x::ANY, h::UInt64) at hashing.jl:10
>
> Might there be some package that changes this behavior?  Is the result 
> of `@which hash(Foo(4), zero(Uint))` the same as what I show above?
>
>
> On Thursday, July 16, 2015 at 11:02:46 AM UTC-4, Seth wrote:
>>
>> I can confirm this works as described by milktrader on 0.4.0-dev+5860 
>> (2015-07-08 20:57 UTC) Commit 7fa43ed (7 days old master).
>>
>> julia> unique(foos)
>> 1-element Array{Foo,1}:
>>  Foo(4)
>>
>>
>> On Thursday, July 16, 2015 at 7:52:03 AM UTC-7, Stefan Karpinski 
>> wrote:
>>>
>>> I don't see that on 0.4-dev – it also doesn't seem possible without 
>>> having defined a hash method since unique is implemented with a dict.
>>>
>>> On Thu, Jul 16, 2015 at 10:29 AM, milktrader  
>>> wrote:
>>>
 Julia 0.4- has different behavior ...

 First, with 0.3.9

 julia> versioninfo()
 Julia Version 0.3.9
 Commit 31efe69 (2015-05-30 11:24 UTC)
 Platform Info:
   System: Darwin (x86_64-apple-darwin13.4.0)
   CPU: Intel(R) Core(TM)2 Duo CPU P7350  @ 2.00GHz
   WORD_SIZE: 64
   BLAS: libopenblas (USE64BITINT DYNAMIC_ARCH NO_AFFINITY Penryn)
   LAPACK: libopenblas
   LIBM: libopenlibm
   LLVM: libLLVM-3.3

 julia> type Foo
 x::Int
 end

 julia> import Base: ==

 julia> ==(f1::Foo, f2::Foo) = f1.x == f2.x
 == (generic function with 80 methods)

 julia> foos = [Foo(4), Foo(4)]
 2-element Array{Foo,1}:
  Foo(4)
  Foo(4)

 julia> unique(foos)
 2-element Array{Foo,1}:
  Foo(4)
  Foo(4)

 julia> unique(foos)[1] == unique(foos)[2]
 true

 And now 0.4-dev

 julia> versioninfo()
 Julia Version 0.4.0-dev+5587
 Commit 78760e2 (2015-06-25 14:27 UTC)
 Platform Info:
   System: Darwin (x86_64-apple-darwin13.4.0)
   CPU: Intel(R) Core(TM)2 Duo CPU P7350  @ 2.00GHz
   WORD_SIZE: 64
   BLAS: libopenblas (USE64BITINT DYNAMIC_ARCH NO_AFFINITY Penryn)
   LAPACK: libopenblas
   LIBM: libopenlibm
   LLVM: libLLVM-3.3

 julia> type Foo
 x::Int
 end

 julia> import Base: ==

 julia> ==(f1::Foo, f2::Foo) = f1.x == f2.x
 == (generic function with 108 methods)

 julia> foos = [Foo(4), Foo(4)]

Re: [julia-users] The unique function and iterables of custom composite types

2015-07-16 Thread Seth
Ah, ok, thanks. This might cause issues with one of my packages, which is 
why I'm interested. How would you approach creating the hash dispatch for 
custom types, and does this impact immutables as well?

On Thursday, July 16, 2015 at 9:09:08 AM UTC-7, Stefan Karpinski wrote:
>
> This is just an artifact of memory layout – hashing by identity is based 
> on object_id which is based on memory address. It's not a meaningful 
> behavioral difference between Julia versions. Sticking mutable objects for 
> which hash and == disagree into a Dict is undefined behavior and causes 
> dictionaries to do potentially weird things.
>
> On Thu, Jul 16, 2015 at 12:04 PM, Seth  > wrote:
>
>> I can't because I just rebuilt to latest to test 
>> https://github.com/JuliaLang/julia/issues/12063 - but I'll try on the 
>> latest master...
>>
>> ... and Julia Version 0.4.0-dev+6005 Commit 242bf47 does not appear to 
>> have the issue (I'm getting two results returned for unique()).
>>
>> On Thursday, July 16, 2015 at 8:36:03 AM UTC-7, Stefan Karpinski wrote:
>>>
>>> Dan and/or Seth, can you try that again and check if hash(foos[1]) and 
>>> hash(foos[2]) have the same last hex digit?
>>>
>>> On Thu, Jul 16, 2015 at 11:30 AM, Matt Bauman  wrote:
>>>
 Bizarre.  I happen to have last updated on *exactly* the same commit 
 SHA, but I'm seeing the original (expected) behavior:

 $ julia -q
 julia> versioninfo()
 Julia Version 0.4.0-dev+5860
 Commit 7fa43ed (2015-07-08 20:57 UTC)
 Platform Info:
   System: Darwin (x86_64-apple-darwin14.3.0)
   CPU: Intel(R) Core(TM) i5 CPU   M 520  @ 2.40GHz
   WORD_SIZE: 64
   BLAS: libopenblas (USE64BITINT NO_AFFINITY NEHALEM)
   LAPACK: libopenblas
   LIBM: libopenlibm
   LLVM: libLLVM-3.3

 julia> type Foo
x::Int
end

 julia> ==(f1::Foo, f2::Foo) = f1.x == f2.x
 == (generic function with 109 methods)

 julia> unique([Foo(4),Foo(4)])
 2-element Array{Foo,1}:
  Foo(4)
  Foo(4)

 julia> @which hash(Foo(4), zero(UInt))
 hash(x::ANY, h::UInt64) at hashing.jl:10

 Might there be some package that changes this behavior?  Is the result 
 of `@which hash(Foo(4), zero(Uint))` the same as what I show above?


 On Thursday, July 16, 2015 at 11:02:46 AM UTC-4, Seth wrote:
>
> I can confirm this works as described by milktrader on 0.4.0-dev+5860 
> (2015-07-08 20:57 UTC) Commit 7fa43ed (7 days old master).
>
> julia> unique(foos)
> 1-element Array{Foo,1}:
>  Foo(4)
>
>
> On Thursday, July 16, 2015 at 7:52:03 AM UTC-7, Stefan Karpinski wrote:
>>
>> I don't see that on 0.4-dev – it also doesn't seem possible without 
>> having defined a hash method since unique is implemented with a dict.
>>
>> On Thu, Jul 16, 2015 at 10:29 AM, milktrader  
>> wrote:
>>
>>> Julia 0.4- has different behavior ...
>>>
>>> First, with 0.3.9
>>>
>>> julia> versioninfo()
>>> Julia Version 0.3.9
>>> Commit 31efe69 (2015-05-30 11:24 UTC)
>>> Platform Info:
>>>   System: Darwin (x86_64-apple-darwin13.4.0)
>>>   CPU: Intel(R) Core(TM)2 Duo CPU P7350  @ 2.00GHz
>>>   WORD_SIZE: 64
>>>   BLAS: libopenblas (USE64BITINT DYNAMIC_ARCH NO_AFFINITY Penryn)
>>>   LAPACK: libopenblas
>>>   LIBM: libopenlibm
>>>   LLVM: libLLVM-3.3
>>>
>>> julia> type Foo
>>> x::Int
>>> end
>>>
>>> julia> import Base: ==
>>>
>>> julia> ==(f1::Foo, f2::Foo) = f1.x == f2.x
>>> == (generic function with 80 methods)
>>>
>>> julia> foos = [Foo(4), Foo(4)]
>>> 2-element Array{Foo,1}:
>>>  Foo(4)
>>>  Foo(4)
>>>
>>> julia> unique(foos)
>>> 2-element Array{Foo,1}:
>>>  Foo(4)
>>>  Foo(4)
>>>
>>> julia> unique(foos)[1] == unique(foos)[2]
>>> true
>>>
>>> And now 0.4-dev
>>>
>>> julia> versioninfo()
>>> Julia Version 0.4.0-dev+5587
>>> Commit 78760e2 (2015-06-25 14:27 UTC)
>>> Platform Info:
>>>   System: Darwin (x86_64-apple-darwin13.4.0)
>>>   CPU: Intel(R) Core(TM)2 Duo CPU P7350  @ 2.00GHz
>>>   WORD_SIZE: 64
>>>   BLAS: libopenblas (USE64BITINT DYNAMIC_ARCH NO_AFFINITY Penryn)
>>>   LAPACK: libopenblas
>>>   LIBM: libopenlibm
>>>   LLVM: libLLVM-3.3
>>>
>>> julia> type Foo
>>> x::Int
>>> end
>>>
>>> julia> import Base: ==
>>>
>>> julia> ==(f1::Foo, f2::Foo) = f1.x == f2.x
>>> == (generic function with 108 methods)
>>>
>>> julia> foos = [Foo(4), Foo(4)]
>>> 2-element Array{Foo,1}:
>>>  Foo(4)
>>>  Foo(4)
>>>
>>> julia> unique(foos)
>>> 1-element Array{Foo,1}:
>>>  Foo(4)
>>>
>>> julia> unique(foos)[1] == unique(foos)[2]
>>> ERROR: BoundsError: attempt to access 1-element Array{Foo,1}:
>>>  Foo(4)
>>>   at index [2]

Re: [julia-users] The unique function and iterables of custom composite types

2015-07-16 Thread Seth
Stefan,

If I'm reading between the lines correctly, the default/existing hash 
function is based on the last byte of the ID? Is there a reason we don't 
make the table wider to reduce the chances of collisions (or would this 
have bad effects on memory utilization)?

On Thursday, July 16, 2015 at 8:50:52 AM UTC-7, Stefan Karpinski wrote:
>
> Well, that's it then. Cause: accidental hash collision. The fix is to 
> define hash for the type. This makes me wonder if we shouldn't just leave 
> hash undefined for custom mutable types and make it easy to opt into 
> hashing by identity. At least then you'll get a clear no method error (and 
> we could trap that and add more helpful information), instead of weird 
> behavior when you define == but not hash for your types.
>
> On Thu, Jul 16, 2015 at 11:39 AM, milktrader  > wrote:
>
>> julia> hash(foos[1]) #and hash(foos[2])
>> 0xfa40ebab47e8bee1
>>
>> julia> hash(foos[2])
>> 0x00ef97f955461671
>>
>> On Thursday, July 16, 2015 at 11:36:03 AM UTC-4, Stefan Karpinski wrote:
>>>
>>> Dan and/or Seth, can you try that again and check if hash(foos[1]) and 
>>> hash(foos[2]) have the same last hex digit?
>>>
>>> On Thu, Jul 16, 2015 at 11:30 AM, Matt Bauman  wrote:
>>>
 Bizarre.  I happen to have last updated on *exactly* the same commit 
 SHA, but I'm seeing the original (expected) behavior:

 $ julia -q
 julia> versioninfo()
 Julia Version 0.4.0-dev+5860
 Commit 7fa43ed (2015-07-08 20:57 UTC)
 Platform Info:
   System: Darwin (x86_64-apple-darwin14.3.0)
   CPU: Intel(R) Core(TM) i5 CPU   M 520  @ 2.40GHz
   WORD_SIZE: 64
   BLAS: libopenblas (USE64BITINT NO_AFFINITY NEHALEM)
   LAPACK: libopenblas
   LIBM: libopenlibm
   LLVM: libLLVM-3.3

 julia> type Foo
x::Int
end

 julia> ==(f1::Foo, f2::Foo) = f1.x == f2.x
 == (generic function with 109 methods)

 julia> unique([Foo(4),Foo(4)])
 2-element Array{Foo,1}:
  Foo(4)
  Foo(4)

 julia> @which hash(Foo(4), zero(UInt))
 hash(x::ANY, h::UInt64) at hashing.jl:10

 Might there be some package that changes this behavior?  Is the result 
 of `@which hash(Foo(4), zero(Uint))` the same as what I show above?


 On Thursday, July 16, 2015 at 11:02:46 AM UTC-4, Seth wrote:
>
> I can confirm this works as described by milktrader on 0.4.0-dev+5860 
> (2015-07-08 20:57 UTC) Commit 7fa43ed (7 days old master).
>
> julia> unique(foos)
> 1-element Array{Foo,1}:
>  Foo(4)
>
>
> On Thursday, July 16, 2015 at 7:52:03 AM UTC-7, Stefan Karpinski wrote:
>>
>> I don't see that on 0.4-dev – it also doesn't seem possible without 
>> having defined a hash method since unique is implemented with a dict.
>>
>> On Thu, Jul 16, 2015 at 10:29 AM, milktrader  
>> wrote:
>>
>>> Julia 0.4- has different behavior ...
>>>
>>> First, with 0.3.9
>>>
>>> julia> versioninfo()
>>> Julia Version 0.3.9
>>> Commit 31efe69 (2015-05-30 11:24 UTC)
>>> Platform Info:
>>>   System: Darwin (x86_64-apple-darwin13.4.0)
>>>   CPU: Intel(R) Core(TM)2 Duo CPU P7350  @ 2.00GHz
>>>   WORD_SIZE: 64
>>>   BLAS: libopenblas (USE64BITINT DYNAMIC_ARCH NO_AFFINITY Penryn)
>>>   LAPACK: libopenblas
>>>   LIBM: libopenlibm
>>>   LLVM: libLLVM-3.3
>>>
>>> julia> type Foo
>>> x::Int
>>> end
>>>
>>> julia> import Base: ==
>>>
>>> julia> ==(f1::Foo, f2::Foo) = f1.x == f2.x
>>> == (generic function with 80 methods)
>>>
>>> julia> foos = [Foo(4), Foo(4)]
>>> 2-element Array{Foo,1}:
>>>  Foo(4)
>>>  Foo(4)
>>>
>>> julia> unique(foos)
>>> 2-element Array{Foo,1}:
>>>  Foo(4)
>>>  Foo(4)
>>>
>>> julia> unique(foos)[1] == unique(foos)[2]
>>> true
>>>
>>> And now 0.4-dev
>>>
>>> julia> versioninfo()
>>> Julia Version 0.4.0-dev+5587
>>> Commit 78760e2 (2015-06-25 14:27 UTC)
>>> Platform Info:
>>>   System: Darwin (x86_64-apple-darwin13.4.0)
>>>   CPU: Intel(R) Core(TM)2 Duo CPU P7350  @ 2.00GHz
>>>   WORD_SIZE: 64
>>>   BLAS: libopenblas (USE64BITINT DYNAMIC_ARCH NO_AFFINITY Penryn)
>>>   LAPACK: libopenblas
>>>   LIBM: libopenlibm
>>>   LLVM: libLLVM-3.3
>>>
>>> julia> type Foo
>>> x::Int
>>> end
>>>
>>> julia> import Base: ==
>>>
>>> julia> ==(f1::Foo, f2::Foo) = f1.x == f2.x
>>> == (generic function with 108 methods)
>>>
>>> julia> foos = [Foo(4), Foo(4)]
>>> 2-element Array{Foo,1}:
>>>  Foo(4)
>>>  Foo(4)
>>>
>>> julia> unique(foos)
>>> 1-element Array{Foo,1}:
>>>  Foo(4)
>>>
>>> julia> unique(foos)[1] == unique(foos)[2]
>>> ERROR: BoundsError: attempt to access 1-element Array{Foo,1}:
>>>  Foo(4)
>>>   at index [2]
>>

Re: [julia-users] The unique function and iterables of custom composite types

2015-07-16 Thread Stefan Karpinski
On Thu, Jul 16, 2015 at 12:07 PM, Seth  wrote:

> Stefan,
>
> If I'm reading between the lines correctly, the default/existing hash
> function is based on the last byte of the ID? Is there a reason we don't
> make the table wider to reduce the chances of collisions (or would this
> have bad effects on memory utilization)?
>

The initial size of a Dict is 16 slots, which is why two values whose hash
values have the same last hex digit collide – the slot is the hash modulo
the size of the Dict slots array. The reason not to make it bigger is that
if you have a lot of small Dicts you don't want to waste memory. Making it
bigger doesn't fix the problem; it just makes it harder to discover, which
strikes me as worse, not better.


Re: [julia-users] The unique function and iterables of custom composite types

2015-07-16 Thread Stefan Karpinski
This is just an artifact of memory layout – hashing by identity is based on
object_id which is based on memory address. It's not a meaningful
behavioral difference between Julia versions. Sticking mutable objects for
which hash and == disagree into a Dict is undefined behavior and causes
dictionaries to do potentially weird things.

On Thu, Jul 16, 2015 at 12:04 PM, Seth  wrote:

> I can't because I just rebuilt to latest to test
> https://github.com/JuliaLang/julia/issues/12063 - but I'll try on the
> latest master...
>
> ... and Julia Version 0.4.0-dev+6005 Commit 242bf47 does not appear to
> have the issue (I'm getting two results returned for unique()).
>
> On Thursday, July 16, 2015 at 8:36:03 AM UTC-7, Stefan Karpinski wrote:
>>
>> Dan and/or Seth, can you try that again and check if hash(foos[1]) and
>> hash(foos[2]) have the same last hex digit?
>>
>> On Thu, Jul 16, 2015 at 11:30 AM, Matt Bauman  wrote:
>>
>>> Bizarre.  I happen to have last updated on *exactly* the same commit
>>> SHA, but I'm seeing the original (expected) behavior:
>>>
>>> $ julia -q
>>> julia> versioninfo()
>>> Julia Version 0.4.0-dev+5860
>>> Commit 7fa43ed (2015-07-08 20:57 UTC)
>>> Platform Info:
>>>   System: Darwin (x86_64-apple-darwin14.3.0)
>>>   CPU: Intel(R) Core(TM) i5 CPU   M 520  @ 2.40GHz
>>>   WORD_SIZE: 64
>>>   BLAS: libopenblas (USE64BITINT NO_AFFINITY NEHALEM)
>>>   LAPACK: libopenblas
>>>   LIBM: libopenlibm
>>>   LLVM: libLLVM-3.3
>>>
>>> julia> type Foo
>>>x::Int
>>>end
>>>
>>> julia> ==(f1::Foo, f2::Foo) = f1.x == f2.x
>>> == (generic function with 109 methods)
>>>
>>> julia> unique([Foo(4),Foo(4)])
>>> 2-element Array{Foo,1}:
>>>  Foo(4)
>>>  Foo(4)
>>>
>>> julia> @which hash(Foo(4), zero(UInt))
>>> hash(x::ANY, h::UInt64) at hashing.jl:10
>>>
>>> Might there be some package that changes this behavior?  Is the result
>>> of `@which hash(Foo(4), zero(Uint))` the same as what I show above?
>>>
>>>
>>> On Thursday, July 16, 2015 at 11:02:46 AM UTC-4, Seth wrote:

 I can confirm this works as described by milktrader on 0.4.0-dev+5860
 (2015-07-08 20:57 UTC) Commit 7fa43ed (7 days old master).

 julia> unique(foos)
 1-element Array{Foo,1}:
  Foo(4)


 On Thursday, July 16, 2015 at 7:52:03 AM UTC-7, Stefan Karpinski wrote:
>
> I don't see that on 0.4-dev – it also doesn't seem possible without
> having defined a hash method since unique is implemented with a dict.
>
> On Thu, Jul 16, 2015 at 10:29 AM, milktrader 
> wrote:
>
>> Julia 0.4- has different behavior ...
>>
>> First, with 0.3.9
>>
>> julia> versioninfo()
>> Julia Version 0.3.9
>> Commit 31efe69 (2015-05-30 11:24 UTC)
>> Platform Info:
>>   System: Darwin (x86_64-apple-darwin13.4.0)
>>   CPU: Intel(R) Core(TM)2 Duo CPU P7350  @ 2.00GHz
>>   WORD_SIZE: 64
>>   BLAS: libopenblas (USE64BITINT DYNAMIC_ARCH NO_AFFINITY Penryn)
>>   LAPACK: libopenblas
>>   LIBM: libopenlibm
>>   LLVM: libLLVM-3.3
>>
>> julia> type Foo
>> x::Int
>> end
>>
>> julia> import Base: ==
>>
>> julia> ==(f1::Foo, f2::Foo) = f1.x == f2.x
>> == (generic function with 80 methods)
>>
>> julia> foos = [Foo(4), Foo(4)]
>> 2-element Array{Foo,1}:
>>  Foo(4)
>>  Foo(4)
>>
>> julia> unique(foos)
>> 2-element Array{Foo,1}:
>>  Foo(4)
>>  Foo(4)
>>
>> julia> unique(foos)[1] == unique(foos)[2]
>> true
>>
>> And now 0.4-dev
>>
>> julia> versioninfo()
>> Julia Version 0.4.0-dev+5587
>> Commit 78760e2 (2015-06-25 14:27 UTC)
>> Platform Info:
>>   System: Darwin (x86_64-apple-darwin13.4.0)
>>   CPU: Intel(R) Core(TM)2 Duo CPU P7350  @ 2.00GHz
>>   WORD_SIZE: 64
>>   BLAS: libopenblas (USE64BITINT DYNAMIC_ARCH NO_AFFINITY Penryn)
>>   LAPACK: libopenblas
>>   LIBM: libopenlibm
>>   LLVM: libLLVM-3.3
>>
>> julia> type Foo
>> x::Int
>> end
>>
>> julia> import Base: ==
>>
>> julia> ==(f1::Foo, f2::Foo) = f1.x == f2.x
>> == (generic function with 108 methods)
>>
>> julia> foos = [Foo(4), Foo(4)]
>> 2-element Array{Foo,1}:
>>  Foo(4)
>>  Foo(4)
>>
>> julia> unique(foos)
>> 1-element Array{Foo,1}:
>>  Foo(4)
>>
>> julia> unique(foos)[1] == unique(foos)[2]
>> ERROR: BoundsError: attempt to access 1-element Array{Foo,1}:
>>  Foo(4)
>>   at index [2]
>>  in getindex at array.jl:292
>>
>>
>>
>> On Thursday, July 16, 2015 at 9:36:21 AM UTC-4, Stefan Karpinski
>> wrote:
>>>
>>> You need to also define a hash method for this type.
>>>
>>>
>>> On Jul 16, 2015, at 9:16 AM, Marc Gallant 
>>> wrote:
>>>
>>> The unique function doesn't appear to work using iterables of custom
>>> composite types, e.g.,
>>>
>>> julia> type Foo

[julia-users] recv from UdpSocket with timeout?

2015-07-16 Thread Spencer Russell
Is there a way to use a timeout with `recv(sock::UdpSocket)`?

-s


Re: [julia-users] The unique function and iterables of custom composite types

2015-07-16 Thread milktrader
Yep, restarting I don't get the one-element array

julia> type Foo
x::Int
end

julia> import Base: ==

julia> ==(f1::Foo, f2::Foo) = f1.x == f2.x
== (generic function with 108 methods)

julia> foos = [Foo(4), Foo(4)]
2-element Array{Foo,1}:
 Foo(4)
 Foo(4)

julia> unique(foos)
2-element Array{Foo,1}:
 Foo(4)
 Foo(4)


On Thursday, July 16, 2015 at 11:50:52 AM UTC-4, Stefan Karpinski wrote:
>
> Well, that's it then. Cause: accidental hash collision. The fix is to 
> define hash for the type. This makes me wonder if we shouldn't just leave 
> hash undefined for custom mutable types and make it easy to opt into 
> hashing by identity. At least then you'll get a clear no method error (and 
> we could trap that and add more helpful information), instead of weird 
> behavior when you define == but not hash for your types.
>
> On Thu, Jul 16, 2015 at 11:39 AM, milktrader  > wrote:
>
>> julia> hash(foos[1]) #and hash(foos[2])
>> 0xfa40ebab47e8bee1
>>
>> julia> hash(foos[2])
>> 0x00ef97f955461671
>>
>> On Thursday, July 16, 2015 at 11:36:03 AM UTC-4, Stefan Karpinski wrote:
>>>
>>> Dan and/or Seth, can you try that again and check if hash(foos[1]) and 
>>> hash(foos[2]) have the same last hex digit?
>>>
>>> On Thu, Jul 16, 2015 at 11:30 AM, Matt Bauman  wrote:
>>>
 Bizarre.  I happen to have last updated on *exactly* the same commit 
 SHA, but I'm seeing the original (expected) behavior:

 $ julia -q
 julia> versioninfo()
 Julia Version 0.4.0-dev+5860
 Commit 7fa43ed (2015-07-08 20:57 UTC)
 Platform Info:
   System: Darwin (x86_64-apple-darwin14.3.0)
   CPU: Intel(R) Core(TM) i5 CPU   M 520  @ 2.40GHz
   WORD_SIZE: 64
   BLAS: libopenblas (USE64BITINT NO_AFFINITY NEHALEM)
   LAPACK: libopenblas
   LIBM: libopenlibm
   LLVM: libLLVM-3.3

 julia> type Foo
x::Int
end

 julia> ==(f1::Foo, f2::Foo) = f1.x == f2.x
 == (generic function with 109 methods)

 julia> unique([Foo(4),Foo(4)])
 2-element Array{Foo,1}:
  Foo(4)
  Foo(4)

 julia> @which hash(Foo(4), zero(UInt))
 hash(x::ANY, h::UInt64) at hashing.jl:10

 Might there be some package that changes this behavior?  Is the result 
 of `@which hash(Foo(4), zero(Uint))` the same as what I show above?


 On Thursday, July 16, 2015 at 11:02:46 AM UTC-4, Seth wrote:
>
> I can confirm this works as described by milktrader on 0.4.0-dev+5860 
> (2015-07-08 20:57 UTC) Commit 7fa43ed (7 days old master).
>
> julia> unique(foos)
> 1-element Array{Foo,1}:
>  Foo(4)
>
>
> On Thursday, July 16, 2015 at 7:52:03 AM UTC-7, Stefan Karpinski wrote:
>>
>> I don't see that on 0.4-dev – it also doesn't seem possible without 
>> having defined a hash method since unique is implemented with a dict.
>>
>> On Thu, Jul 16, 2015 at 10:29 AM, milktrader  
>> wrote:
>>
>>> Julia 0.4- has different behavior ...
>>>
>>> First, with 0.3.9
>>>
>>> julia> versioninfo()
>>> Julia Version 0.3.9
>>> Commit 31efe69 (2015-05-30 11:24 UTC)
>>> Platform Info:
>>>   System: Darwin (x86_64-apple-darwin13.4.0)
>>>   CPU: Intel(R) Core(TM)2 Duo CPU P7350  @ 2.00GHz
>>>   WORD_SIZE: 64
>>>   BLAS: libopenblas (USE64BITINT DYNAMIC_ARCH NO_AFFINITY Penryn)
>>>   LAPACK: libopenblas
>>>   LIBM: libopenlibm
>>>   LLVM: libLLVM-3.3
>>>
>>> julia> type Foo
>>> x::Int
>>> end
>>>
>>> julia> import Base: ==
>>>
>>> julia> ==(f1::Foo, f2::Foo) = f1.x == f2.x
>>> == (generic function with 80 methods)
>>>
>>> julia> foos = [Foo(4), Foo(4)]
>>> 2-element Array{Foo,1}:
>>>  Foo(4)
>>>  Foo(4)
>>>
>>> julia> unique(foos)
>>> 2-element Array{Foo,1}:
>>>  Foo(4)
>>>  Foo(4)
>>>
>>> julia> unique(foos)[1] == unique(foos)[2]
>>> true
>>>
>>> And now 0.4-dev
>>>
>>> julia> versioninfo()
>>> Julia Version 0.4.0-dev+5587
>>> Commit 78760e2 (2015-06-25 14:27 UTC)
>>> Platform Info:
>>>   System: Darwin (x86_64-apple-darwin13.4.0)
>>>   CPU: Intel(R) Core(TM)2 Duo CPU P7350  @ 2.00GHz
>>>   WORD_SIZE: 64
>>>   BLAS: libopenblas (USE64BITINT DYNAMIC_ARCH NO_AFFINITY Penryn)
>>>   LAPACK: libopenblas
>>>   LIBM: libopenlibm
>>>   LLVM: libLLVM-3.3
>>>
>>> julia> type Foo
>>> x::Int
>>> end
>>>
>>> julia> import Base: ==
>>>
>>> julia> ==(f1::Foo, f2::Foo) = f1.x == f2.x
>>> == (generic function with 108 methods)
>>>
>>> julia> foos = [Foo(4), Foo(4)]
>>> 2-element Array{Foo,1}:
>>>  Foo(4)
>>>  Foo(4)
>>>
>>> julia> unique(foos)
>>> 1-element Array{Foo,1}:
>>>  Foo(4)
>>>
>>> julia> unique(foos)[1] == unique(foos)[2]
>>> ERROR: BoundsError: attempt to access 1-element Array{Foo,1}:
>>>  Foo(4)
>>>   at index [2]

Re: [julia-users] The unique function and iterables of custom composite types

2015-07-16 Thread Seth
I can't because I just rebuilt to latest to 
test https://github.com/JuliaLang/julia/issues/12063 - but I'll try on the 
latest master...

... and Julia Version 0.4.0-dev+6005 Commit 242bf47 does not appear to have 
the issue (I'm getting two results returned for unique()).

On Thursday, July 16, 2015 at 8:36:03 AM UTC-7, Stefan Karpinski wrote:
>
> Dan and/or Seth, can you try that again and check if hash(foos[1]) and 
> hash(foos[2]) have the same last hex digit?
>
> On Thu, Jul 16, 2015 at 11:30 AM, Matt Bauman  > wrote:
>
>> Bizarre.  I happen to have last updated on *exactly* the same commit SHA, 
>> but I'm seeing the original (expected) behavior:
>>
>> $ julia -q
>> julia> versioninfo()
>> Julia Version 0.4.0-dev+5860
>> Commit 7fa43ed (2015-07-08 20:57 UTC)
>> Platform Info:
>>   System: Darwin (x86_64-apple-darwin14.3.0)
>>   CPU: Intel(R) Core(TM) i5 CPU   M 520  @ 2.40GHz
>>   WORD_SIZE: 64
>>   BLAS: libopenblas (USE64BITINT NO_AFFINITY NEHALEM)
>>   LAPACK: libopenblas
>>   LIBM: libopenlibm
>>   LLVM: libLLVM-3.3
>>
>> julia> type Foo
>>x::Int
>>end
>>
>> julia> ==(f1::Foo, f2::Foo) = f1.x == f2.x
>> == (generic function with 109 methods)
>>
>> julia> unique([Foo(4),Foo(4)])
>> 2-element Array{Foo,1}:
>>  Foo(4)
>>  Foo(4)
>>
>> julia> @which hash(Foo(4), zero(UInt))
>> hash(x::ANY, h::UInt64) at hashing.jl:10
>>
>> Might there be some package that changes this behavior?  Is the result of 
>> `@which hash(Foo(4), zero(Uint))` the same as what I show above?
>>
>>
>> On Thursday, July 16, 2015 at 11:02:46 AM UTC-4, Seth wrote:
>>>
>>> I can confirm this works as described by milktrader on 0.4.0-dev+5860 
>>> (2015-07-08 20:57 UTC) Commit 7fa43ed (7 days old master).
>>>
>>> julia> unique(foos)
>>> 1-element Array{Foo,1}:
>>>  Foo(4)
>>>
>>>
>>> On Thursday, July 16, 2015 at 7:52:03 AM UTC-7, Stefan Karpinski wrote:

 I don't see that on 0.4-dev – it also doesn't seem possible without 
 having defined a hash method since unique is implemented with a dict.

 On Thu, Jul 16, 2015 at 10:29 AM, milktrader  
 wrote:

> Julia 0.4- has different behavior ...
>
> First, with 0.3.9
>
> julia> versioninfo()
> Julia Version 0.3.9
> Commit 31efe69 (2015-05-30 11:24 UTC)
> Platform Info:
>   System: Darwin (x86_64-apple-darwin13.4.0)
>   CPU: Intel(R) Core(TM)2 Duo CPU P7350  @ 2.00GHz
>   WORD_SIZE: 64
>   BLAS: libopenblas (USE64BITINT DYNAMIC_ARCH NO_AFFINITY Penryn)
>   LAPACK: libopenblas
>   LIBM: libopenlibm
>   LLVM: libLLVM-3.3
>
> julia> type Foo
> x::Int
> end
>
> julia> import Base: ==
>
> julia> ==(f1::Foo, f2::Foo) = f1.x == f2.x
> == (generic function with 80 methods)
>
> julia> foos = [Foo(4), Foo(4)]
> 2-element Array{Foo,1}:
>  Foo(4)
>  Foo(4)
>
> julia> unique(foos)
> 2-element Array{Foo,1}:
>  Foo(4)
>  Foo(4)
>
> julia> unique(foos)[1] == unique(foos)[2]
> true
>
> And now 0.4-dev
>
> julia> versioninfo()
> Julia Version 0.4.0-dev+5587
> Commit 78760e2 (2015-06-25 14:27 UTC)
> Platform Info:
>   System: Darwin (x86_64-apple-darwin13.4.0)
>   CPU: Intel(R) Core(TM)2 Duo CPU P7350  @ 2.00GHz
>   WORD_SIZE: 64
>   BLAS: libopenblas (USE64BITINT DYNAMIC_ARCH NO_AFFINITY Penryn)
>   LAPACK: libopenblas
>   LIBM: libopenlibm
>   LLVM: libLLVM-3.3
>
> julia> type Foo
> x::Int
> end
>
> julia> import Base: ==
>
> julia> ==(f1::Foo, f2::Foo) = f1.x == f2.x
> == (generic function with 108 methods)
>
> julia> foos = [Foo(4), Foo(4)]
> 2-element Array{Foo,1}:
>  Foo(4)
>  Foo(4)
>
> julia> unique(foos)
> 1-element Array{Foo,1}:
>  Foo(4)
>
> julia> unique(foos)[1] == unique(foos)[2]
> ERROR: BoundsError: attempt to access 1-element Array{Foo,1}:
>  Foo(4)
>   at index [2]
>  in getindex at array.jl:292
>
>
>
> On Thursday, July 16, 2015 at 9:36:21 AM UTC-4, Stefan Karpinski wrote:
>>
>> You need to also define a hash method for this type.
>>
>>
>> On Jul 16, 2015, at 9:16 AM, Marc Gallant  
>> wrote:
>>
>> The unique function doesn't appear to work using iterables of custom 
>> composite types, e.g.,
>>
>> julia> type Foo
>>x::Int
>>end
>>
>> julia> import Base: ==
>>
>> julia> ==(f1::Foo, f2::Foo) = f1.x == f2.x
>> == (generic function with 85 methods)
>>
>> julia> unique(foos)
>> 2-element Array{Foo,1}:
>>  Foo(4)
>>  Foo(4)
>>
>> julia> unique(foos)[1] == unique(foos)[2]
>> true
>>
>>
>> Is this the intended behaviour?
>>
>>

>

Re: [julia-users] The unique function and iterables of custom composite types

2015-07-16 Thread Stefan Karpinski
Well, that's it then. Cause: accidental hash collision. The fix is to
define hash for the type. This makes me wonder if we shouldn't just leave
hash undefined for custom mutable types and make it easy to opt into
hashing by identity. At least then you'll get a clear no method error (and
we could trap that and add more helpful information), instead of weird
behavior when you define == but not hash for your types.
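
Sketched concretely (the identity-hash opt-in below is hypothetical, only 
illustrating the alternative floated above, not an existing Base API):

```julia
type Foo
    x::Int
end

import Base: ==, hash
==(f1::Foo, f2::Foo) = f1.x == f2.x

# The fix: make hash agree with ==
hash(f::Foo, h::UInt) = hash(f.x, h)

# An explicit opt-in to identity hashing might instead look like
# (hypothetical):
# hash(f::Foo, h::UInt) = hash(object_id(f), h)
```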

On Thu, Jul 16, 2015 at 11:39 AM, milktrader  wrote:

> julia> hash(foos[1]) #and hash(foos[2])
> 0xfa40ebab47e8bee1
>
> julia> hash(foos[2])
> 0x00ef97f955461671
>
> On Thursday, July 16, 2015 at 11:36:03 AM UTC-4, Stefan Karpinski wrote:
>>
>> Dan and/or Seth, can you try that again and check if hash(foos[1]) and
>> hash(foos[2]) have the same last hex digit?
>>
>> On Thu, Jul 16, 2015 at 11:30 AM, Matt Bauman  wrote:
>>
>>> Bizarre.  I happen to have last updated on *exactly* the same commit
>>> SHA, but I'm seeing the original (expected) behavior:
>>>
>>> $ julia -q
>>> julia> versioninfo()
>>> Julia Version 0.4.0-dev+5860
>>> Commit 7fa43ed (2015-07-08 20:57 UTC)
>>> Platform Info:
>>>   System: Darwin (x86_64-apple-darwin14.3.0)
>>>   CPU: Intel(R) Core(TM) i5 CPU   M 520  @ 2.40GHz
>>>   WORD_SIZE: 64
>>>   BLAS: libopenblas (USE64BITINT NO_AFFINITY NEHALEM)
>>>   LAPACK: libopenblas
>>>   LIBM: libopenlibm
>>>   LLVM: libLLVM-3.3
>>>
>>> julia> type Foo
>>>x::Int
>>>end
>>>
>>> julia> ==(f1::Foo, f2::Foo) = f1.x == f2.x
>>> == (generic function with 109 methods)
>>>
>>> julia> unique([Foo(4),Foo(4)])
>>> 2-element Array{Foo,1}:
>>>  Foo(4)
>>>  Foo(4)
>>>
>>> julia> @which hash(Foo(4), zero(UInt))
>>> hash(x::ANY, h::UInt64) at hashing.jl:10
>>>
>>> Might there be some package that changes this behavior?  Is the result
>>> of `@which hash(Foo(4), zero(Uint))` the same as what I show above?
>>>
>>>
>>> On Thursday, July 16, 2015 at 11:02:46 AM UTC-4, Seth wrote:

 I can confirm this works as described by milktrader on 0.4.0-dev+5860
 (2015-07-08 20:57 UTC) Commit 7fa43ed (7 days old master).

 julia> unique(foos)
 1-element Array{Foo,1}:
  Foo(4)


 On Thursday, July 16, 2015 at 7:52:03 AM UTC-7, Stefan Karpinski wrote:
>
> I don't see that on 0.4-dev – it also doesn't seem possible without
> having defined a hash method since unique is implemented with a dict.
>
> On Thu, Jul 16, 2015 at 10:29 AM, milktrader 
> wrote:
>
>> Julia 0.4- has different behavior ...
>>
>> First, with 0.3.9
>>
>> julia> versioninfo()
>> Julia Version 0.3.9
>> Commit 31efe69 (2015-05-30 11:24 UTC)
>> Platform Info:
>>   System: Darwin (x86_64-apple-darwin13.4.0)
>>   CPU: Intel(R) Core(TM)2 Duo CPU P7350  @ 2.00GHz
>>   WORD_SIZE: 64
>>   BLAS: libopenblas (USE64BITINT DYNAMIC_ARCH NO_AFFINITY Penryn)
>>   LAPACK: libopenblas
>>   LIBM: libopenlibm
>>   LLVM: libLLVM-3.3
>>
>> julia> type Foo
>> x::Int
>> end
>>
>> julia> import Base: ==
>>
>> julia> ==(f1::Foo, f2::Foo) = f1.x == f2.x
>> == (generic function with 80 methods)
>>
>> julia> foos = [Foo(4), Foo(4)]
>> 2-element Array{Foo,1}:
>>  Foo(4)
>>  Foo(4)
>>
>> julia> unique(foos)
>> 2-element Array{Foo,1}:
>>  Foo(4)
>>  Foo(4)
>>
>> julia> unique(foos)[1] == unique(foos)[2]
>> true
>>
>> And now 0.4-dev
>>
>> julia> versioninfo()
>> Julia Version 0.4.0-dev+5587
>> Commit 78760e2 (2015-06-25 14:27 UTC)
>> Platform Info:
>>   System: Darwin (x86_64-apple-darwin13.4.0)
>>   CPU: Intel(R) Core(TM)2 Duo CPU P7350  @ 2.00GHz
>>   WORD_SIZE: 64
>>   BLAS: libopenblas (USE64BITINT DYNAMIC_ARCH NO_AFFINITY Penryn)
>>   LAPACK: libopenblas
>>   LIBM: libopenlibm
>>   LLVM: libLLVM-3.3
>>
>> julia> type Foo
>> x::Int
>> end
>>
>> julia> import Base: ==
>>
>> julia> ==(f1::Foo, f2::Foo) = f1.x == f2.x
>> == (generic function with 108 methods)
>>
>> julia> foos = [Foo(4), Foo(4)]
>> 2-element Array{Foo,1}:
>>  Foo(4)
>>  Foo(4)
>>
>> julia> unique(foos)
>> 1-element Array{Foo,1}:
>>  Foo(4)
>>
>> julia> unique(foos)[1] == unique(foos)[2]
>> ERROR: BoundsError: attempt to access 1-element Array{Foo,1}:
>>  Foo(4)
>>   at index [2]
>>  in getindex at array.jl:292
>>
>>
>>
>> On Thursday, July 16, 2015 at 9:36:21 AM UTC-4, Stefan Karpinski
>> wrote:
>>>
>>> You need to also define a hash method for this type.
>>>
>>>
>>> On Jul 16, 2015, at 9:16 AM, Marc Gallant 
>>> wrote:
>>>
>>> The unique function doesn't appear to work using iterables of custom
>>> composite types, e.g.,
>>>
>>> julia> type Foo
>>>x::Int
>>>end
>>>
>>> julia> import Base: ==

Re: [julia-users] The unique function and iterables of custom composite types

2015-07-16 Thread Matt Bauman
Sure enough:

julia> unique([Foo(4),Foo(4)])
2-element Array{Foo,1}:
 Foo(4)
 Foo(4)

julia> unique([Foo(4),Foo(4)])
1-element Array{Foo,1}:
 Foo(4)

julia> unique([Foo(4),Foo(4)])
2-element Array{Foo,1}:
 Foo(4)
 Foo(4)

I think they just got "unlucky."  This seems to collide far too easily.
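
For intuition on "far too easily": with only 16 slots, a birthday-style 
estimate shows slot collisions are routine even for a handful of keys.

```julia
# Probability that n keys inserted into an nslots-slot table
# produce at least one slot collision, assuming uniform hashes.
collide(n, nslots) = 1 - prod([(nslots - i) / nslots for i in 0:n-1])

collide(2, 16)   # 0.0625 -- one pair collides 1/16 of the time
collide(5, 16)   # ~0.5   -- five keys collide about half the time
```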

On Thursday, July 16, 2015 at 11:39:34 AM UTC-4, milktrader wrote:
>
> julia> hash(foos[1]) #and hash(foos[2])
> 0xfa40ebab47e8bee1
>
> julia> hash(foos[2])
> 0x00ef97f955461671
>
> On Thursday, July 16, 2015 at 11:36:03 AM UTC-4, Stefan Karpinski wrote:
>>
>> Dan and/or Seth, can you try that again and check if hash(foos[1]) and 
>> hash(foos[2]) have the same last hex digit?

Re: [julia-users] scoping rule help for functions within functions

2015-07-16 Thread Ethan Anderes


Ahhh, the closure explanation makes sense, Mauro. I have used closures before, 
but my shared state variables were always arrays mutated in place, so the 
closure scoping behavior acted just like the global scoping rules. For 
example, my closures always looked like this:

julia> function makeclosure{T<:Real,d}(state::Array{T,d})
   function updatestate()
   state[:] .+= 1
   end
   function printstate()
   println(state)
   end
   return updatestate::Function, printstate::Function
   end
makeclosure (generic function with 1 method)

julia> u1, p1 = makeclosure(rand(2,2))
(updatestate,printstate)

julia> p1
printstate (generic function with 1 method)

julia> p1()
[0.3377313966339588 0.2705755846071063
 0.4035312377015656 0.7008059914967171]

julia> u1()
4-element Array{Float64,1}:
 1.33773
 1.40353
 1.27058
 1.70081

julia> p1()
[1.3377313966339588 1.2705755846071063
 1.4035312377015656 1.700805991496717]

I can now see that people would want similar behavior for non-arrays and 
non-mutating operations on the local state. In particular, the following 
closure works like the above closure but for Numbers.

julia> function makeclosure{T<:Number}(state::T)
   function updatestate()
   state += 1
   end
   function printstate()
   println(state)
   end
   return updatestate::Function, printstate::Function
   end
makeclosure (generic function with 2 methods)

julia> u2, p2 = makeclosure(1)
(updatestate,printstate)

julia> p2()
1

julia> u2()
2

julia> p2()
2

Of course, now the scoping behavior within makeclosure conflicts with the 
global REPL behavior.

I took a look at your updated version of that section in the manual. I like 
the new example on closures. I have two comments. First, you might want to 
extend the closure example beyond the let block, mainly because it wasn’t 
immediately obvious to me how that let-block closure was useful. Second, 
I’m starting to think that the manual could use a subsection titled 
“Scoping rules for nested functions” which makes clear that functions 
within functions behave differently from functions defined in a module 
or at the REPL.
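A let block can play the same role as makeclosure, giving the returned functions a private shared binding; a minimal sketch (the names are mine, not from the manual):

```julia
# The closures returned from the let block share the captured `state`
# binding, just like the functions returned by makeclosure.
inc, current = let state = 0
    inc() = (state += 1)
    current() = state
    inc, current
end

inc()      # state becomes 1
current()  # 1
```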

Thanks for your help Mauro!

BTW: Mauro, feel free to copy, cut, paste and mutate any of the above text 
if you think it would be useful for your draft of that section of the 
Manual.

On Thursday, July 16, 2015 at 6:01:36 AM UTC-7, Mauro wrote:

> Question 2) What advantage is there for changing the scoping rules for 
> > nested functions? 
>
> How is this for an explanation: 
>
> https://github.com/mauro3/julia/blob/m3/scope-doc/doc/manual/variables-and-scoping.rst#hard-vs-soft-local-scope
>  
>


Re: [julia-users] The unique function and iterables of custom composite types

2015-07-16 Thread milktrader
julia> hash(foos[1]) #and hash(foos[2])
0xfa40ebab47e8bee1

julia> hash(foos[2])
0x00ef97f955461671

On Thursday, July 16, 2015 at 11:36:03 AM UTC-4, Stefan Karpinski wrote:
>
> Dan and/or Seth, can you try that again and check if hash(foos[1]) and 
> hash(foos[2]) have the same last hex digit?

Re: [julia-users] The unique function and iterables of custom composite types

2015-07-16 Thread Stefan Karpinski
Dan and/or Seth, can you try that again and check if hash(foos[1]) and
hash(foos[2]) have the same last hex digit?

On Thu, Jul 16, 2015 at 11:30 AM, Matt Bauman  wrote:

> Bizarre.  I happen to have last updated on *exactly* the same commit SHA,
> but I'm seeing the original (expected) behavior:
>
> $ julia -q
> julia> versioninfo()
> Julia Version 0.4.0-dev+5860
> Commit 7fa43ed (2015-07-08 20:57 UTC)
> Platform Info:
>   System: Darwin (x86_64-apple-darwin14.3.0)
>   CPU: Intel(R) Core(TM) i5 CPU   M 520  @ 2.40GHz
>   WORD_SIZE: 64
>   BLAS: libopenblas (USE64BITINT NO_AFFINITY NEHALEM)
>   LAPACK: libopenblas
>   LIBM: libopenlibm
>   LLVM: libLLVM-3.3
>
> julia> type Foo
>x::Int
>end
>
> julia> ==(f1::Foo, f2::Foo) = f1.x == f2.x
> == (generic function with 109 methods)
>
> julia> unique([Foo(4),Foo(4)])
> 2-element Array{Foo,1}:
>  Foo(4)
>  Foo(4)
>
> julia> @which hash(Foo(4), zero(UInt))
> hash(x::ANY, h::UInt64) at hashing.jl:10
>
> Might there be some package that changes this behavior?  Is the result of 
> `@which
> hash(Foo(4), zero(Uint))` the same as what I show above?


Re: [julia-users] The unique function and iterables of custom composite types

2015-07-16 Thread Stefan Karpinski
I just tried it on that exact version and do not see that behavior:

julia> versioninfo()
Julia Version 0.4.0-dev+5587
Commit 78760e2 (2015-06-25 14:27 UTC)
Platform Info:
  System: Darwin (x86_64-apple-darwin14.3.0)
  CPU: Intel(R) Core(TM) M-5Y71 CPU @ 1.20GHz
  WORD_SIZE: 64
  BLAS: libopenblas (USE64BITINT DYNAMIC_ARCH NO_AFFINITY Prescott)
  LAPACK: libopenblas
  LIBM: libopenlibm
  LLVM: libLLVM-3.3

julia> type Foo
   x::Int
   end

julia> ==(f1::Foo, f2::Foo) = f1.x == f2.x
== (generic function with 108 methods)

julia> foos = [Foo(4), Foo(4)]
2-element Array{Foo,1}:
 Foo(4)
 Foo(4)

julia> unique(foos)
2-element Array{Foo,1}:
 Foo(4)
 Foo(4)


If you look at the definition of unique, it doesn't seem like it could
possibly do that without defining a hash method for Foo:

function unique(C)
out = Array(eltype(C),0)
seen = Set{eltype(C)}()
for x in C
if !in(x, seen)
push!(seen, x)
push!(out, x)
end
end
out
end


The only way I can think of that this might occur is if you happen to get a
hash collision with the default hashing function – which actually has about
a 1/16 chance of happening. Makes me wonder if we shouldn't check both hash
values and equality in our Dict implementation. Saving hash values is
actually fairly common in hash table implementations since it allows
rehashing the table without actually recomputing the hash of every key.
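The missing piece, then, is a `hash` method that agrees with `==`; with it, the Set inside `unique` sees equal hashes and falls through to the equality check deterministically. A sketch in post-1.0 syntax (the names and the `:Foo` salt are my choices, not from the thread):

```julia
mutable struct Foo
    x::Int
end

Base.:(==)(a::Foo, b::Foo) = a.x == b.x
# Fold the field into the seed; salting with :Foo keeps Foo(4) from
# hashing identically to a bare 4.
Base.hash(f::Foo, h::UInt) = hash(f.x, hash(:Foo, h))

unique([Foo(4), Foo(4)])  # 1-element Vector{Foo}: Foo(4)
```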

On Thu, Jul 16, 2015 at 11:02 AM, Seth  wrote:

> I can confirm this works as described by milktrader on 0.4.0-dev+5860
> (2015-07-08 20:57 UTC) Commit 7fa43ed (7 days old master).
>
> julia> unique(foos)
> 1-element Array{Foo,1}:
>  Foo(4)
>
>


Re: [julia-users] The unique function and iterables of custom composite types

2015-07-16 Thread Matt Bauman
Bizarre.  I happen to have last updated on *exactly* the same commit SHA, 
but I'm seeing the original (expected) behavior:

$ julia -q
julia> versioninfo()
Julia Version 0.4.0-dev+5860
Commit 7fa43ed (2015-07-08 20:57 UTC)
Platform Info:
  System: Darwin (x86_64-apple-darwin14.3.0)
  CPU: Intel(R) Core(TM) i5 CPU   M 520  @ 2.40GHz
  WORD_SIZE: 64
  BLAS: libopenblas (USE64BITINT NO_AFFINITY NEHALEM)
  LAPACK: libopenblas
  LIBM: libopenlibm
  LLVM: libLLVM-3.3

julia> type Foo
   x::Int
   end

julia> ==(f1::Foo, f2::Foo) = f1.x == f2.x
== (generic function with 109 methods)

julia> unique([Foo(4),Foo(4)])
2-element Array{Foo,1}:
 Foo(4)
 Foo(4)

julia> @which hash(Foo(4), zero(UInt))
hash(x::ANY, h::UInt64) at hashing.jl:10

Might there be some package that changes this behavior?  Is the result of 
`@which hash(Foo(4), zero(UInt))` the same as what I show above?

On Thursday, July 16, 2015 at 11:02:46 AM UTC-4, Seth wrote:
>
> I can confirm this works as described by milktrader on 0.4.0-dev+5860 
> (2015-07-08 20:57 UTC) Commit 7fa43ed (7 days old master).
>
> julia> unique(foos)
> 1-element Array{Foo,1}:
>  Foo(4)
>

[julia-users] Question about @eval and quoting

2015-07-16 Thread David Gold
Suppose I want to apply the trick that makes `broadcast!` fast to `map!`. 
Because of the specificity of `map!`'s functionality, I don't necessarily 
need to cache the internally declared functions, so I just write:

function map!{F}(f::F, dest::AbstractArray, src::AbstractArray)
    _f = Expr(:quote, f)
    @eval begin
        # Splice the quoted function into a fresh inner function so the
        # call site is known at compile time.
        function func(dest, A)
            for i in 1:length(dest)
                dest[i] = $_f(A[i])
            end
        end
        func($dest, $src)
        return $dest
    end
end

which does indeed show improved performance:

srand(1)
N = 5_000_000
A = rand(N)
X = Array(Float64, N)
f(x) = 5 * x
map!(f, X, A);

julia> map!(f, X, A);

julia> @time map!(f, X, A);
  17.459 milliseconds (2143 allocations: 109 KB)

julia> Base.map!(f, X, A);

julia> @time Base.map!(f, X, A);
 578.520 milliseconds (1 k allocations: 305 MB, 6.45% gc time)

Okay. But the following attempt does not experience the same speedup:


function map!{F}(f::F, dest::AbstractArray, src::AbstractArray)
    function func(dest, A)
        for i in 1:length(dest)
            dest[i] = f(A[i])
        end
    end
    func(dest, src)
    return dest
end


julia> map!(f, X, A);

julia> @time map!(f, X, A);
 564.823 milliseconds (2 k allocations: 305 MB, 6.44% gc time)


My question is: Why is `eval`-ing the body of `map!` necessary for 
supporting the type inference/other optimizations that give the first 
revised `map!` method greater performance? I suspect that there's something 
about what `eval` does, aside from just "evaluate an expression" that I'm 
not quite grokking -- but what? Also, what risks in particular does 
invoking `eval` at runtime inside the body of a function -- as opposed to 
directly inside the global scope of a module -- pose?


Thanks,

D




Re: [julia-users] The unique function and iterables of custom composite types

2015-07-16 Thread Seth
I can confirm this works as described by milktrader on 0.4.0-dev+5860 
(2015-07-08 20:57 UTC) Commit 7fa43ed (7 days old master).

julia> unique(foos)
1-element Array{Foo,1}:
 Foo(4)


On Thursday, July 16, 2015 at 7:52:03 AM UTC-7, Stefan Karpinski wrote:
>
> I don't see that on 0.4-dev – it also doesn't seem possible without having 
> defined a hash method since unique is implemented with a dict.
>

Re: [julia-users] The unique function and iterables of custom composite types

2015-07-16 Thread Stefan Karpinski
I don't see that on 0.4-dev – it also doesn't seem possible without having
defined a hash method since unique is implemented with a dict.

On Thu, Jul 16, 2015 at 10:29 AM, milktrader  wrote:



Re: [julia-users] The unique function and iterables of custom composite types

2015-07-16 Thread milktrader
Julia 0.4-dev has different behavior ...

First, with 0.3.9

julia> versioninfo()
Julia Version 0.3.9
Commit 31efe69 (2015-05-30 11:24 UTC)
Platform Info:
  System: Darwin (x86_64-apple-darwin13.4.0)
  CPU: Intel(R) Core(TM)2 Duo CPU P7350  @ 2.00GHz
  WORD_SIZE: 64
  BLAS: libopenblas (USE64BITINT DYNAMIC_ARCH NO_AFFINITY Penryn)
  LAPACK: libopenblas
  LIBM: libopenlibm
  LLVM: libLLVM-3.3

julia> type Foo
x::Int
end

julia> import Base: ==

julia> ==(f1::Foo, f2::Foo) = f1.x == f2.x
== (generic function with 80 methods)

julia> foos = [Foo(4), Foo(4)]
2-element Array{Foo,1}:
 Foo(4)
 Foo(4)

julia> unique(foos)
2-element Array{Foo,1}:
 Foo(4)
 Foo(4)

julia> unique(foos)[1] == unique(foos)[2]
true

And now 0.4-dev

julia> versioninfo()
Julia Version 0.4.0-dev+5587
Commit 78760e2 (2015-06-25 14:27 UTC)
Platform Info:
  System: Darwin (x86_64-apple-darwin13.4.0)
  CPU: Intel(R) Core(TM)2 Duo CPU P7350  @ 2.00GHz
  WORD_SIZE: 64
  BLAS: libopenblas (USE64BITINT DYNAMIC_ARCH NO_AFFINITY Penryn)
  LAPACK: libopenblas
  LIBM: libopenlibm
  LLVM: libLLVM-3.3

julia> type Foo
x::Int
end

julia> import Base: ==

julia> ==(f1::Foo, f2::Foo) = f1.x == f2.x
== (generic function with 108 methods)

julia> foos = [Foo(4), Foo(4)]
2-element Array{Foo,1}:
 Foo(4)
 Foo(4)

julia> unique(foos)
1-element Array{Foo,1}:
 Foo(4)

julia> unique(foos)[1] == unique(foos)[2]
ERROR: BoundsError: attempt to access 1-element Array{Foo,1}:
 Foo(4)
  at index [2]
 in getindex at array.jl:292



On Thursday, July 16, 2015 at 9:36:21 AM UTC-4, Stefan Karpinski wrote:
>
> You need to also define a hash method for this type.

Re: [julia-users] The unique function and iterables of custom composite types

2015-07-16 Thread Stefan Karpinski
You need to also define a hash method for this type.


> On Jul 16, 2015, at 9:16 AM, Marc Gallant  wrote:
> 
> The unique function doesn't appear to work using iterables of custom 
> composite types, e.g.,
> 
> julia> type Foo
>x::Int
>end
> 
> julia> import Base: ==
> 
> julia> ==(f1::Foo, f2::Foo) = f1.x == f2.x
> == (generic function with 85 methods)
> 
> julia> unique(foos)
> 2-element Array{Foo,1}:
>  Foo(4)
>  Foo(4)
> 
> julia> unique(foos)[1] == unique(foos)[2]
> true
> 
> Is this the intended behaviour?
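Stefan's suggestion can be sketched as follows — a minimal example in current Julia syntax (the thread's 0.3/0.4 code spells `struct` as `type`). `unique` deduplicates through a `Set`, which compares elements with `isequal`/`hash`, so a custom `==` needs a matching `hash` method:

```julia
# Sketch: make hash consistent with the custom == so that unique() works.
struct Foo
    x::Int
end

Base.:(==)(f1::Foo, f2::Foo) = f1.x == f2.x
Base.hash(f::Foo, h::UInt) = hash(f.x, h)  # hash the same field that == compares

foos = [Foo(4), Foo(4)]
unique(foos)  # 1-element vector once hash agrees with ==
```

The general contract is that `isequal(a, b)` (which falls back to `==`) must imply `hash(a) == hash(b)`; without the `hash` method, two `==`-equal `Foo`s hash by object identity and land in different `Set` slots.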


[julia-users] Re: The unique function and iterables of custom composite types

2015-07-16 Thread Marc Gallant
Whoops! Here is the updated code:


julia> type Foo
   x::Int
   end

julia> import Base: ==

julia> ==(f1::Foo, f2::Foo) = f1.x == f2.x
== (generic function with 85 methods)

julia> foos = [Foo(4), Foo(4)]
2-element Array{Foo,1}:
 Foo(4)
 Foo(4)

julia> unique(foos)
2-element Array{Foo,1}:
 Foo(4)
 Foo(4)

julia> unique(foos)[1] == unique(foos)[2]
true




On Thursday, July 16, 2015 at 9:33:26 AM UTC-4, milktrader wrote:
>
> How did you get foos?
>
> On Thursday, July 16, 2015 at 9:16:01 AM UTC-4, Marc Gallant wrote:
>>
>> The unique function doesn't appear to work using iterables of custom 
>> composite types, e.g.,
>>
>> julia> type Foo
>>x::Int
>>end
>>
>> julia> import Base: ==
>>
>> julia> ==(f1::Foo, f2::Foo) = f1.x == f2.x
>> == (generic function with 85 methods)
>>
>> julia> unique(foos)
>> 2-element Array{Foo,1}:
>>  Foo(4)
>>  Foo(4)
>>
>> julia> unique(foos)[1] == unique(foos)[2]
>> true
>>
>>
>> Is this the intended behaviour?
>>
>

[julia-users] Re: The unique function and iterables of custom composite types

2015-07-16 Thread milktrader
How did you get foos?

On Thursday, July 16, 2015 at 9:16:01 AM UTC-4, Marc Gallant wrote:
>
> The unique function doesn't appear to work using iterables of custom 
> composite types, e.g.,
>
> julia> type Foo
>x::Int
>end
>
> julia> import Base: ==
>
> julia> ==(f1::Foo, f2::Foo) = f1.x == f2.x
> == (generic function with 85 methods)
>
> julia> unique(foos)
> 2-element Array{Foo,1}:
>  Foo(4)
>  Foo(4)
>
> julia> unique(foos)[1] == unique(foos)[2]
> true
>
>
> Is this the intended behaviour?
>


[julia-users] The unique function and iterables of custom composite types

2015-07-16 Thread Marc Gallant
The unique function doesn't appear to work using iterables of custom 
composite types, e.g.,

julia> type Foo
   x::Int
   end

julia> import Base: ==

julia> ==(f1::Foo, f2::Foo) = f1.x == f2.x
== (generic function with 85 methods)

julia> unique(foos)
2-element Array{Foo,1}:
 Foo(4)
 Foo(4)

julia> unique(foos)[1] == unique(foos)[2]
true


Is this the intended behaviour?


Re: [julia-users] scoping rule help for functions within functions

2015-07-16 Thread Mauro
> Question 2) What advantage is there for changing the scoping rules for 
> nested functions?

How is this for an explanation:
https://github.com/mauro3/julia/blob/m3/scope-doc/doc/manual/variables-and-scoping.rst#hard-vs-soft-local-scope


[julia-users] Re: Embedding Julia with C++

2015-07-16 Thread Kostas Tavlaridis-Gyparakis
Hello again,
And once again, thanks for the constant support; it has been really helpful
so far.
I followed all of the steps you suggested: I copied the directory with the
long numeric name to another location, changed the soft link and the
makefile, and it finally worked!
I am really grateful for your persistent help and support.
Now I will focus on the other problem I am facing (described in the new
post): calling C++ functions from Julia that themselves call other Julia
functions.
As I will be away for the rest of the day, I probably won't return to this
until tomorrow.
Once again, thanks for all the help and support; it is much appreciated!


On Monday, June 22, 2015 at 3:03:31 PM UTC+2, Kostas Tavlaridis-Gyparakis 
wrote:
>
> Hello,
> I am trying to embed Julia in C++, but I am currently facing some issues.
> I am trying to follow the instructions shown here.
> First things first: I run Ubuntu 15.04, and my Julia version is 0.3.2
> (it's the version that is automatically installed when installing Julia
> from the Ubuntu software center).
> I use Eclipse for my C++ projects, and I have the following issues.
> According to the instructions, before trying to write any Julia code in C
> or C++ you first need to:
> 1) Link the Julia library (assuming I understand correctly, this refers
> to libjulia.so), which should be located in "Julia_DIR/usr/lib". Yet in
> my Julia directory there is no folder named usr. I did, though, find a
> libjulia.so file in another directory of my PC
> ("/usr/lib/x86_64-linux-gnu/julia") and added this one instead.
> 2) Include the path of julia.h, which should be located in
> "Julia_DIR/include/julia". Again, in my Julia directory there are no such
> folders, and there is no file such as julia.h anywhere on my PC. I did
> something that is probably wrong and stupid, but I couldn't come up with
> anything else: I downloaded this, and I included the location of julia.h
> in Eclipse, along with the directories of all the other header files that
> were included inside julia.h.
>
> Now, when in Eclipse I try to compile and run a few simple Julia commands
> with julia.h included, I receive an error saying that there is no uv.h
> file on my system, which is needed by one of the Julia header files.
> I know that my whole approach is wrong, but I couldn't find anywhere on
> my PC the proper folders or files to follow the steps suggested on the
> Julia website for running Julia code inside C++.
> Any help would be much appreciated.
>
> Also, one more thing I wanted to ask: in general, is writing Julia code
> inside C++ code limited? What I want to do is write a JuMP model inside
> C++. Is this possible, in the sense that by embedding Julia inside C++ I
> will be able to use all of the tools and code of the Julia language, or
> is this limited to a certain set of commands and packages?
>


Re: [julia-users] Re: scoping rule help for functions within functions

2015-07-16 Thread Mauro
> are there any new scoping issues raised by the use of parallelization?  Are 
> variables in global scope automatically available to different processors?

(I'm no parallel expert.)  The distributed parallel programming currently
supported in Julia has completely separate scopes for each processor.  For
threaded parallel computing (once it lands) it may be different, but I
don't know how.
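The separate-scopes point can be sketched as follows (hypothetical example in current Julia syntax; the 0.3-era API did not require `using Distributed`). A named function defined only on the master process is unknown in a worker's global scope, which is exactly why `@everywhere` exists:

```julia
using Distributed
addprocs(1)  # spawn one worker process with its own, separate Main module

f(n) = n^2
# remotecall_fetch(f, workers()[1], 3) would typically fail here: the
# worker never saw the definition of f, because globals and method
# definitions in the master's Main are not automatically shared.

@everywhere g(n) = n^2  # evaluate the definition on every process
remotecall_fetch(g, workers()[1], 3)  # runs g on the worker and fetches 9
```

So nothing is shared implicitly between processes; definitions must be shipped explicitly with `@everywhere` (or by loading the same module on all workers).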


Re: [julia-users] scoping rule help for functions within functions

2015-07-16 Thread Mauro
>  The rule is that inside a function any variable assignment
> which would write to a *global* variable, makes that variable local. 
>
> I guess I still don’t understand how the nested function scoping behavior 
> is described by the above sentence. Maybe I don’t understand what you mean 
> by *global* and how I could determine if an assignment “would write to a 
> *global* variable”. My best attempt at describing this behavior in my own 
> way would be as follows: the assignment x = repmat(y.', 2, 1) within mesh 
> within foo treats x as “local-to-foo” and “global-to-mesh”. Is this 
> consistent with what you're saying?

Have you read the whole of the scope section? ;-)  There is a subsection
on global scope, so hopefully that clears up what global scope is; if not,
please let me know.  So, defining `mesh` at the REPL prompt:

mesh(y) = (x = repmat(y.', 2, 1); return x)

means that x inside mesh will be local, as it is assigned to.  However, if
mesh gets nested inside a function which itself has a local x:

function foo()
   x = 1 # assignment makes it implicitly local
   mesh(y) = (x = repmat(y.', 2, 1); return x)
   mesh(5)
   return x
end

then the x inside mesh will refer to the *local* x.  If foo does not
define an x, then mesh has its own x:

function foo()
   xx = 1
   mesh(y) = (x = repmat(y.', 2, 1); return x)
   mesh(5)
   return xx
end

> Question 2) What advantage is there for changing the scoping rules for 
> nested functions? I did notice the following example in your manual but it 
> is not clear to me how it (or if it was intended to) explain the usefulness 
> of different scoping rules for nested functions.
>
> even(n) = n == 0 ? true  :  odd(n-1)
> odd(n)  = n == 0 ? false : even(n-1)
>
> julia> even(3)
> false
>
> julia> odd(3)
> true

No, in this example there are no nested functions, so the local-vs-global
distinction above does not apply.  This example is about being able to
refer to variables inside functions which are only defined later.

The reason for the hard-scope rule is to allow making closures which have
private state that cannot be accessed from outside:

let
private_state = 0
global foo
foo() = (private_state +=1; private_state)
end

Calling foo will produce the natural numbers, without anyone being able
to access private_state directly.  This is different from:

private_state = 0
foo() = (global private_state +=1; private_state)

where private_state is a global and can be accessed by anyone (although it
can only be written to from within its own module).  I will add something
about this to the manual.
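A runnable sketch of that closure pattern (the counter is renamed to avoid clashing with the thread's `foo`; the behavior is the same in current Julia):

```julia
# A let block creates a new local scope; private_state lives only there.
let
    private_state = 0          # captured by the closure, invisible outside
    global counter             # make the function itself a global binding
    counter() = (private_state += 1; private_state)
end

counter()  # 1
counter()  # 2
# private_state is not in scope here; only counter() can read or write it.
```

Each call increments the hidden state, and there is no name through which outside code can reach `private_state` — unlike the global-variable version, where any code in the module can read it.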

(PS: My statement "rewrote the scope section" should have read "updated
the scope section" as it still contains lots of the original text.  No
offence meant to the original authors.)