[julia-users] Re: Small package for timing sections of code and pretty printing the results

2016-04-06 Thread Kristoffer Carlsson
Thanks for the tip. Maybe I could add a custom print version with verbosity 
options. I don't want the default output to be too cluttered.
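
(A purely hypothetical sketch of what such an option could look like; `print_timer` and its `verbose` keyword are invented names, not part of TimerOutputs:)

print_timer(time_tracker)                  # compact default, as in the example below
print_timer(time_tracker, verbose = true)  # hypothetical: also show e.g. avg time per call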

On Wednesday, April 6, 2016 at 4:07:22 AM UTC+2, Jeffrey Sarnoff wrote:
>
> You might have an option to show  "wall time / call"   
> or just include it in the table however you like the columns.
>
>   section   |  avg time |  number    |  time used   |  time used
>             |  per call |  of calls  |  all calls   |  % of total
>  -----------|-----------|------------|--------------|-------------
>  in a loop  | 101.156ms |     5      |  505.779ms   |     16
>
>
> On Tuesday, April 5, 2016 at 9:39:38 PM UTC-4, Jeffrey Sarnoff wrote:
>>
>>  This is very useful and much appreciated. Thank you. 
>>
>> On Tuesday, April 5, 2016 at 3:42:45 PM UTC-4, Kristoffer Carlsson wrote:
>>>
>>> Hello everyone,
>>>
>>> I put up a new (unregistered) small package for timing different 
>>> sections of code. It works similarly to @time in Base, but you also give the 
>>> code section being timed a label. We can then track the total time and 
>>> number of calls made to code sections with that label and pretty 
>>> print it at the end. This feature existed in a C++ library I used to use 
>>> (deal.II) and I missed it in Julia.
>>>
>>> Here is a small example.
>>>
>>> using TimerOutputs
>>>
>>> const time_tracker = TimerOutput();
>>>
>>> @timeit time_tracker "sleeping" sleep(1)
>>>
>>> @timeit time_tracker "loop" for i in 1:10
>>>rand()
>>>sleep(0.1)
>>> end
>>>
>>> v = 0.0
>>> for i in 1:5
>>> v += @timeit time_tracker "in a loop" begin
>>>sleep(0.1)
>>>rand()
>>> end
>>> end
>>>
>>> print(time_tracker)
>>> +-----------------------------------------+------------+------------+
>>> | Total wallclock time elapsed since start|   3.155 s  |            |
>>> |                                         |            |            |
>>> | Section                     | no. calls |  wall time | % of total |
>>> +-----------------------------------------+------------+------------+
>>> | loop                        |     1     |   1.012 s  |    32 %    |
>>> | sleeping                    |     1     |   1.002 s  |    32 %    |
>>> | in a loop                   |     5     | 505.779 ms |    16 %    |
>>> +-----------------------------------------+------------+------------+
>>>
>>> Feel free to comment on the package name, macro name, format of the 
>>> output etc. 
>>>
>>> The URL is: https://github.com/KristofferC/TimerOutputs.jl
>>>
>>> Best regards, Kristoffer
>>>
>>

[julia-users] Re: enforcing homogeneity of vector elements in function signature

2016-04-06 Thread 'Greg Plowman' via julia-users
Actually, I'm totally wrong.
The Union won't work.
Sorry for the bad post.


On Wednesday, April 6, 2016 at 1:52:15 PM UTC+10, Greg Plowman wrote:

>
> A workaround would be to have two methods, one for the homogeneous 
>> elements in the first parameter, as you suggest, and a second for a vector 
>> with homogeneous elements in both parameters, with both T, N specified in 
>> the signature. But I have to write an extra method...
>>
>
> As pointed out, you can't have a *single* type for the method signature 
> for both homogeneous and heterogeneous N, because of Julia's parametric 
> type invariance.
>
> However, it seems you might want to write two methods to specialise the 
> implementation.
>
> If not, you can:
> use a common function body as Jeffrey suggested, or
> use an empty signature (it will match anything, with no performance penalty), or
> use a Union in the signature:
>
> function bar{T,N}(x::Union{Vector{Foo{T}},Vector{Foo{T,N}}})
> println("Hello, Foo!")
> end
>
>
> On Tuesday, April 5, 2016 at 6:46:21 AM UTC+10, Davide Lasagna wrote:
>
>> Thanks, yes, I have tried this, but did not mention what happens.
>>
>> For the signature you suggest, you get a `MethodError` in the case the 
>> vector `x` is homogeneous in both parameters.
>>
>> Look at this code
>>
>> type Foo{T, N}
>> a::NTuple{N, T}
>> end
>>
>> function make_homogeneous_Foos(M)
>> fs = Foo{Float64, 2}[]
>> for i = 1:M
>> f = Foo{Float64, 2}((0.0, 0.0))
>> push!(fs, f)
>> end
>> fs
>> end
>>
>> function bar{T}(x::Vector{Foo{T}})
>> println("Hello, Foo!")
>> end
>>
>> const fs = make_homogeneous_Foos(100)
>>
>> bar(fs)
>>
>> which results in 
>>
>> ERROR: LoadError: MethodError: `bar` has no method matching 
>> bar(::Array{Foo{Float64,2},1})
>>
>> A workaround would be to have two methods, one for the homogeneous 
>> elements in the first parameter, as you suggest, and a second for a vector 
>> with homogeneous elements in both parameters, with both T, N specified in 
>> the signature. But I have to write an extra method...
>>
>>
>> On Monday, April 4, 2016 at 9:32:55 PM UTC+1, John Myles White wrote:
>>>
>>> Vector{Foo{T}}?
>>>
>>> On Monday, April 4, 2016 at 1:25:46 PM UTC-7, Davide Lasagna wrote:

 Hi all, 

 Consider the following example code

 type Foo{T, N}
 a::NTuple{N, T}
 end

 function make_Foos(M)
 fs = Foo{Float64}[]
 for i = 1:M
 N = rand(1:2)
 f = Foo{Float64, N}(ntuple(i->0.0, N))
 push!(fs, f)
 end
 fs
 end

 function bar{F<:Foo}(x::Vector{F})
 println("Hello, Foo!")
 end

 const fs = make_Foos(100)

 bar(fs)

 What would be the signature of `bar` to enforce that all the entries of 
 `x` have the same value for the first parameter T? As it is now, `x` could 
 contain a `Foo{Float64}` and a `Foo{Int64}`, whereas I would like to 
 enforce homogeneity of the vector elements in the first parameter.

 Thanks
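
For reference, a minimal sketch of the two-method workaround discussed in this thread, reusing the `Foo`, `make_Foos` and `make_homogeneous_Foos` definitions quoted above (the method bodies are placeholders):

# matches vectors whose element type fixes T but leaves N free
function bar{T}(x::Vector{Foo{T}})
    println("Hello, heterogeneous-N Foo!")
end

# matches vectors whose element type fixes both T and N
function bar{T,N}(x::Vector{Foo{T,N}})
    println("Hello, homogeneous Foo!")
end

bar(make_Foos(100))              # Vector{Foo{Float64}}   -> first method
bar(make_homogeneous_Foos(100))  # Vector{Foo{Float64,2}} -> second method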




[julia-users] Chipmunk package build on windows

2016-04-06 Thread Yonghee Kim
I'm trying to install the "Chipmunk" package on Windows 7.

Building "Chipmunk" requires the `cmake` and `make` commands, so I installed 
CMake and MinGW to use those commands in the Windows command prompt.


But I still couldn't build the Chipmunk package, because the `make` command 
still doesn't work as it does on Linux.

"build.jl" fails at line 25: *run(`make`)*

-
Console
Checking dependencies...
Good...
HEAD is now at 079a859... Finish the build scripts.
HEAD is now at 079a859... Finish the build scripts.
Configuring Chipmunk2D version 7.0.0
-- Configuring done
-- Generating done
-- Build files have been written to: 
C:/Users/Yonghee/.julia/v0.4/Chipmunk/deps/Chipmunk2D
make: *** No targets specified and no makefile found.  Stop.
---


I figured that if I could specify a target for the `make` command, it might work, 
but I couldn't figure out what target to specify, or how.

Is there any way to get around this?


Re: [julia-users] optical character recognition

2016-04-06 Thread Tim Holy
warn("$file does not exist")

is trying to reference a variable named file, which (as the error message 
indicates) is undefined. Presumably you want this to be

warn("$nameFile does not exist")

--Tim

On Tuesday, April 05, 2016 11:16:06 AM mgopi...@uncc.edu wrote:
> Hi Friends,
> 
> 
> I am trying to test an implementation of character recognition using Julia
> on my Windows 10 machine, following the tutorial from
> https://www.kaggle.com/c/street-view-getting-started-with-julia/data. I am
> trying to train my system using the sample images; the code works fine
> on my Mac, but on the Windows machine I am facing trouble.
> 
> 
> Below is the code which I am using to implement it.
> 
> Pkg.add("Images")
> 
> Pkg.add("DataFrames")
> 
> 
> using Images
> 
> using DataFrames
> 
> using Images.ColorTypes
> 
> using FixedPointNumbers
> 
> using Iterators
> 
> 
> function read_data(typeData, labelsInfo, imageSize, path)
> 
> x = zeros(size(labelsInfo, 1), imageSize)
> 
> if !ispath(path)
> 
> error("$path is not valid")
> 
> end
> 
> for (index, idImage) in enumerate(labelsInfo[:ID])
> 
> nameFile = "C:\\Users\\user\\Desktop\\test
> \\OCR_test\\$(typeData)Resized\\$(typeData)Resized\\$(idImage).bmp"
> 
> if isfile(nameFile)
> 
> img = load(nameFile)
> 
> 
> temp = convert(Image{Gray}, img)
> 
> 
> x[index, :] = reshape(temp, 1, imageSize)
> 
> 
> else
> 
> warn("$file does not exist")
> 
> end
> 
> end
> 
> return x
> 
> end
> 
> 
> imageSize = 400
> 
> 
> 
> path = "C:\\Users\\user\\Desktop\\test\\OCR_test"
> 
> 
> println(path)
> 
> labelsInfoTrain = readtable("$(path)\\trainLabels.csv")
> 
> 
> xTrain = read_data("train", labelsInfoTrain, imageSize, path)
> 
> 
> 
> yTrain = map(x -> x[1], labelsInfoTrain[:Class])
> 
> 
> yTrain = int(yTrain)
> 
> 
> 
> labelsInfoTest = readtable("$(path)\\sampleSubmission.csv")
> 
> 
> xTest = read_data("test", labelsInfoTest, imageSize, path)
> 
> 
> 
> Pkg.add("DecisionTree")
> 
> using DecisionTree
> 
> 
> model = build_forest(yTrain, xTrain, 20, 50, 1.0)
> 
> 
> predTest = apply_forest(model, xTest)
> 
> 
> labelsInfoTest[:Class] = char(predTest)
> 
> writetable("$(path)\\juliaSubmission.csv", labelsInfoTest, header=true,
> separator=',')
> 
> 
> Below is the error I am getting while implementing the code
> 
> 
> julia> include("C:\\Users\\user\\Desktop\\test\\OCR_test\\Forest.jl")
> 
> INFO: Nothing to be done
> 
> INFO: Nothing to be done
> 
> C:\Users\user\Desktop\test\OCR_test
> 
> ERROR: LoadError: UndefVarError: file not defined
> 
>  in read_data at C:\Users\user\Desktop\test\OCR_test\Forest.jl:25
> 
>  in include at boot.jl:261
> 
>  in include_from_node1 at loading.jl:304
> 
> while loading C:\Users\user\Desktop\test\OCR_test\Forest.jl, in expression
> starting on line 43
> 
> 
> I am new to Julia and I really appreciate any help you could provide on this
> 
> 
> Thanks & Regards



[julia-users] Re: enforcing homogeneity of vector elements in function signature

2016-04-06 Thread Jeffrey Sarnoff
That the Union does not work (and that reasonable people have written the 
example expecting the Union to work) is good feedback -- a surprise where 
Julia is motivated to become unsurprising.

Julia requires there to be two distinct outer method signatures (Na != Nb, 
T1 != T2) 
to absorb both [ NTuple{N,T0}, NTuple{N,T0} ] and [ NTuple{Na,T0}, 
NTuple{Nb, T0} ] 
and to reject both [ NTuple{N,T1}, NTuple{N,T2} ] and [ NTuple{Na,T1}, 
NTuple{Nb, T2} ]

function bar{T,N}(x::Vector{Foo{T,N}})
and function bar{T  }(x::Vector{Foo{T  }})

*previewing possible Julia acumen that already very quickly knows if there 
were any sensibility-preserving lexical microtransformations that map these 
signatures all onto one specific semantic intent *

As Julia knows for these two signatures there is no single path entered 
together and no single path entered at distinct places which reaches an 
equivalence of meaning, utility, expression and consequence,
they are distinct and they are inequivalent. (and there are no two paths 
..., so they are distinct, inequivalent, and somewhere disjoint).

That ought to be a characterization, an aspectual assignment of referential 
consistency, that works and is software design ready as a valid premise -- 
a good-enough reason to choose to use the Union.

I think that should be one of the pertinent suffusions Julia's language 
design excellence carries outward.  

Union{ OneType, TwoType, RedType, BlueType } might simplify to Union{ 
IntsTyped, ColorsTyped } and does not simplify to OneType or to TwoType or 
to RedType or to BlueType (where none are Type{Any}).

in v0.4, Union{  Vector{Foo{T}}, Vector{Foo{T,N}} } == Union{ Vector{ 
Foo{T,N} }, Vector{ Foo{T,N} } } == Vector{Foo{T,N}}
but the docs say "A type union is a special abstract type which includes as 
objects all instances of any of its argument types ..."
*That is Julia's motivation 
for smoothing this wrinkle.*




On Wednesday, April 6, 2016 at 4:58:06 AM UTC-4, Greg Plowman wrote:
>
> Actually, I'm totally wrong.
> The Union won't work.
> Sorry for the bad post.
>
>
> On Wednesday, April 6, 2016 at 1:52:15 PM UTC+10, Greg Plowman wrote:
>
>>
>> A workaround would be to have two methods, one for the homogeneous 
>>> elements in the first parameter, as you suggest, and a second for a vector 
>>> with homogeneous elements in both parameters, with both T, N specified in 
>>> the signature. But I have to write an extra method...
>>>
>>
>> As pointed out, you can't have a *single* type for the method signature 
>> for both homogeneous and heterogeneous N, because of Julia's parametric 
>> type invariance.
>>
>> However, it seems you might want to write two methods to specialise the 
>> implementation.
>>
>> If not, you can:
>> use a common function body as Jeffrey suggested, or
>> use an empty signature (it will match anything, with no performance penalty), or
>> use a Union in the signature:
>>
>> function bar{T,N}(x::Union{Vector{Foo{T}},Vector{Foo{T,N}}})
>> println("Hello, Foo!")
>> end
>>
>>
>> On Tuesday, April 5, 2016 at 6:46:21 AM UTC+10, Davide Lasagna wrote:
>>
>>> Thanks, yes, I have tried this, but did not mention what happens.
>>>
>>> For the signature you suggest, you get a `MethodError` in the case the 
>>> vector `x` is homogeneous in both parameters.
>>>
>>> Look at this code
>>>
>>> type Foo{T, N}
>>> a::NTuple{N, T}
>>> end
>>>
>>> function make_homogeneous_Foos(M)
>>> fs = Foo{Float64, 2}[]
>>> for i = 1:M
>>> f = Foo{Float64, 2}((0.0, 0.0))
>>> push!(fs, f)
>>> end
>>> fs
>>> end
>>>
>>> function bar{T}(x::Vector{Foo{T}})
>>> println("Hello, Foo!")
>>> end
>>>
>>> const fs = make_homogeneous_Foos(100)
>>>
>>> bar(fs)
>>>
>>> which results in 
>>>
>>> ERROR: LoadError: MethodError: `bar` has no method matching 
>>> bar(::Array{Foo{Float64,2},1})
>>>
>>> A workaround would be to have two methods, one for the homogeneous 
>>> elements in the first parameter, as you suggest, and a second for a vector 
>>> with homogeneous elements in both parameters, with both T, N specified in 
>>> the signature. But I have to write an extra method...
>>>
>>>
>>> On Monday, April 4, 2016 at 9:32:55 PM UTC+1, John Myles White wrote:

 Vector{Foo{T}}?

 On Monday, April 4, 2016 at 1:25:46 PM UTC-7, Davide Lasagna wrote:
>
> Hi all, 
>
> Consider the following example code
>
> type Foo{T, N}
> a::NTuple{N, T}
> end
>
> function make_Foos(M)
> fs = Foo{Float64}[]
> for i = 1:M
> N = rand(1:2)
> f = Foo{Float64, N}(ntuple(i->0.0, N))
> push!(fs, f)
> end
> fs
> end
>
> function bar{F<:Foo}(x::Vector{F})
> println("Hello, Foo!")
> end
>
> const fs = make_Foos(100)
>
> bar(fs)
>
> What would be the signature of `bar` to enforce that all the entries 
> of `x

[julia-users] Re: Small package for timing sections of code and pretty printing the results

2016-04-06 Thread Jeffrey Sarnoff
Sure .. any way you prefer. 

On Wednesday, April 6, 2016 at 4:15:51 AM UTC-4, Kristoffer Carlsson wrote:
>
> Thanks for the tip. Maybe I could add a custom print version with 
> verbosity options. I don't want the default output to be too cluttered.
>
> On Wednesday, April 6, 2016 at 4:07:22 AM UTC+2, Jeffrey Sarnoff wrote:
>>
>> You might have an option to show  "wall time / call"   
>> or just include it in the table however you like the columns.
>>
>>   section   |  avg time |  number    |  time used   |  time used
>>             |  per call |  of calls  |  all calls   |  % of total
>>  -----------|-----------|------------|--------------|-------------
>>  in a loop  | 101.156ms |     5      |  505.779ms   |     16
>>
>>
>> On Tuesday, April 5, 2016 at 9:39:38 PM UTC-4, Jeffrey Sarnoff wrote:
>>>
>>>  This is very useful and much appreciated. Thank you. 
>>>
>>> On Tuesday, April 5, 2016 at 3:42:45 PM UTC-4, Kristoffer Carlsson wrote:

 Hello everyone,

 I put up a new (unregistered) small package for timing different 
 sections of code. It works similarly to @time in Base, but you also give the 
 code section being timed a label. We can then track the total time and 
 number of calls made to code sections with that label and pretty 
 print it at the end. This feature existed in a C++ library I used to use 
 (deal.II) and I missed it in Julia.

 Here is a small example.

 using TimerOutputs

 const time_tracker = TimerOutput();

 @timeit time_tracker "sleeping" sleep(1)

 @timeit time_tracker "loop" for i in 1:10
rand()
sleep(0.1)
 end

 v = 0.0
 for i in 1:5
 v += @timeit time_tracker "in a loop" begin
sleep(0.1)
rand()
 end
 end

 print(time_tracker)

 +-----------------------------------------+------------+------------+
 | Total wallclock time elapsed since start|   3.155 s  |            |
 |                                         |            |            |
 | Section                     | no. calls |  wall time | % of total |
 +-----------------------------------------+------------+------------+
 | loop                        |     1     |   1.012 s  |    32 %    |
 | sleeping                    |     1     |   1.002 s  |    32 %    |
 | in a loop                   |     5     | 505.779 ms |    16 %    |
 +-----------------------------------------+------------+------------+

 Feel free to comment on the package name, macro name, format of the 
 output etc. 

 The URL is: https://github.com/KristofferC/TimerOutputs.jl

 Best regards, Kristoffer

>>>

[julia-users] Re: Help with convoluted types and Vararg

2016-04-06 Thread Jeffrey Sarnoff
this works -- more than that .. well, no

type Foo
   x::Vector{}
end

z = [Pair((+,1,5,7), 3), Pair((-,6,5,3,5,8), 1)]

Foo(z)
Foo( Pair{A,Int64}[ (+,1,5,7)=>3, (-,6,5,3,5,8)=>1 ] )


On Tuesday, April 5, 2016 at 2:51:39 PM UTC-4, Seth wrote:
>
> Hi all,
>
> I have the following on 0.4.6-pre+18:
>
> z = [Pair((+,1,5,7), 3), Pair((-,6,5,3,5,8), 1)]
> type Foo
> x::Array{Pair{Tuple{Function, Vararg{Int}}, Int}}
> end
>
>
> and I'm getting
>
> julia> Foo(z)
> ERROR: MethodError: `convert` has no method matching 
> convert(::Type{Pair{Tuple{Function,Vararg{Int64}},Int64}}, 
> ::Pair{Tuple{Function,Int64,Int64,Int64},Int64})
> This may have arisen from a call to the constructor 
> Pair{Tuple{Function,Vararg{Int64}},Int64}(...),
> since type constructors fall back to convert methods.
> Closest candidates are:
>   Pair{A,B}(::Any, ::Any)
>   call{T}(::Type{T}, ::Any)
>   convert{T}(::Type{T}, ::T)
>  in copy! at abstractarray.jl:310
>  in call at none:2
>
>
> It's probably a stupid oversight, but I'm stuck. Can someone point me to 
> the error?
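
The convert error arises because Pair is invariant: a Pair{Tuple{Function,Int64,Int64,Int64},Int64} is not a Pair{Tuple{Function,Vararg{Int64}},Int64}, and 0.4 has no convert method between the two. One workaround, sketched under the assumption that building the pairs with the wider type up front is acceptable (the tuple values themselves fit as-is, since tuples are covariant):

const P = Pair{Tuple{Function,Vararg{Int}}, Int}

type Foo
    x::Array{P}
end

# construct each element as a P directly, so no convert is ever needed
z = P[P((+,1,5,7), 3), P((-,6,5,3,5,8), 1)]
Foo(z)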
>


Re: [julia-users] MethodError: '+' has no method matching +(::DateTime, ::Int64)

2016-04-06 Thread Milan Bouchet-Valat
On Monday, April 4, 2016 at 11:03 -0600, Jacob Quinn wrote:
> Sorry, I should have been more clear. 
> 
> I was trying to express that perhaps we should document these
> previously-internal methods so that they are actually a part of the
> official interface and exported. They're not unsafe or anything, and
> people may actually have a use for them, so maybe we should just
> document them to make everything more clear.
OK. However, supporting day(::Integer) increases the chances that
people will confuse day() and Day(), as in the current thread. In
general, I think it would be clearer to require the caller to create a
Date object from Rata Die days, and then use the standard Date API.
Since Date is immutable, the creation of the object will be optimized
out where possible.

BTW, is there any reason why we have both day() and dayofmonth(), but
not week() and weekofyear()? SQL provides both. I'd say we should
either follow it in duplicating functions, or not have any duplicates
at all.


Regards
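
For concreteness, a sketch of that pattern (assuming `Dates.UTD`, which wraps a raw Rata Die day count as a UT instant):

rd = Int(Date(2015,1,1))   # Rata Die day count, as in Jacob's example
d  = Date(Dates.UTD(rd))   # turn the raw day count back into a Date
Dates.day(d)               # 1 -- day of month, via the documented accessor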


> On Mon, Apr 4, 2016 at 10:49 AM, Milan Bouchet-Valat  wrote:
> > On Monday, April 4, 2016 at 10:27 -0600, Jacob Quinn wrote:
> > > Hmmm... yeah, it's not ideal, I guess. Dates.day(::Integer) is
> > > indeed an internal method that takes the # of Rata Die days (i.e. the
> > > value of Int(Date(2015,1,1))) and returns the day of the month for
> > > that Rata Die. It might be worth documenting so that it's more clear
> > > what's going on when people search/help it.
> > If these methods are internal, they shouldn't be added to exported
> > functions. A useful convention is to prefix these with an underscore.
> > 
> > 
> > Regards
> > 
> > 
> > > -Jacob
> > >
> > > On Mon, Apr 4, 2016 at 10:02 AM, Josh Langsfeld 
> > > wrote:
> > > > Shouldn’t Dates.day(1) be the MethodError here? It calls what
> > > > appears to be an internal calculation method that happens to have
> > > > the same name as the exported and documented method.
> > > >
> > > > On Monday, April 4, 2016 at 11:21:47 AM UTC-4, Jacob Quinn wrote:
> > > >
> > > > > Dates.day is the accessor funciton, returning the day of month
> > > > > for a Date/DateTime.
> > > > >
> > > > > Dates.Day is the Period type representing the span of one day.
> > > > >
> > > > > So you'll want something like:
> > > > >
> > > > > now() + Dates.Day(1)
> > > > >
> > > > > -Jacob
> > > > >
> > > > >
> > > > > On Mon, Apr 4, 2016 at 5:48 AM, Josh  wrote:
> > > > > > When trying to increment and decrement dates I get the method
> > > > > > error stated above. 
> > > > > >
> > > > > > For example with: now() + Dates.day(1)
> > > > > >
> > > > > > I get the error:
> > > > > >
> > > > > > ERROR: MethodError: `+` has no method matching +(::DateTime,
> > > > > > ::Int64)
> > > > > > Closest candidates are:
> > > > > >   +(::Any, ::Any, ::Any, ::Any...)
> > > > > >   +(::Int64, ::Int64)
> > > > > >   +(::Complex{Bool}, ::Real)
> > > > > >   ...
> > > > > >
> > > > > > But with doing something like this: Date(2015,12,25) - today() 
> > > > > >
> > > > > > I get the correct result with no error.
> > > > > >
> > > > > > Any ideas?
> > > > > > Thanks
> > > > > >
> > > > > >
> > > > >
> > 


[julia-users] deprecated syntax foo(): to warn or not to warn ?

2016-04-06 Thread Didier Verna

  Hello,

1. can somebody please explain why the "foo ()" syntax is deprecated ?

2. also, the warnings policy seems inconsistent to me, or I'm missing
something:

julia> foo (3)
WARNING: deprecated syntax "foo (".
Use "foo(" instead.

julia> (+) (1, 2, 3)
WARNING: deprecated syntax "+ (".
Use "+(" instead.

OK, but then calling "+ (1, 2, 3)" does not issue a warning.

-- 
ELS'16 registration open! http://www.european-lisp-symposium.org

Lisp, Jazz, Aïkido: http://www.didierverna.info


Re: [julia-users] deprecated syntax foo(): to warn or not to warn ?

2016-04-06 Thread Isaiah Norton
For explanation, see https://github.com/JuliaLang/julia/issues/7232 (tl;dr:
better support for macro calls *without* parentheses)

The last one may come down to precedence rules, but worth filing an issue
to discuss.





On Wed, Apr 6, 2016 at 9:10 AM, Didier Verna  wrote:

>
>   Hello,
>
> 1. can somebody please explain why the "foo ()" syntax is deprecated ?
>
> 2. also, the warnings policy seems inconsistent to me, or I'm missing
> something:
>
> julia> foo (3)
> WARNING: deprecated syntax "foo (".
> Use "foo(" instead.
>
> julia> (+) (1, 2, 3)
> WARNING: deprecated syntax "+ (".
> Use "+(" instead.
>
> OK, but then calling "+ (1, 2, 3)" does not issue a warning.
>
> --
> ELS'16 registration open! http://www.european-lisp-symposium.org
>
> Lisp, Jazz, Aïkido: http://www.didierverna.info
>


Re: [julia-users] Chipmunk package build on windows

2016-04-06 Thread Isaiah Norton
Try running the commands here manually at a mingw shell:
https://github.com/zyedidia/Chipmunk.jl/blob/master/deps/build.jl

In particular, cmake must be run first in order to generate the makefile. I
would guess that there was some configuration error hidden/suppressed when
cmake was called by the build script. Running directly may give a better
idea where the problem is.
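
For instance, a sketch of the manual steps from deps/Chipmunk2D (an assumption on my part: `-G "MinGW Makefiles"` forces cmake to generate makefiles for MinGW, whose make binary is usually invoked as `mingw32-make` rather than `make`):

run(`cmake -G "MinGW Makefiles" .`)   # regenerate the build files for MinGW's make
run(`mingw32-make`)                   # MinGW's equivalent of `make`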

On Wed, Apr 6, 2016 at 5:42 AM, Yonghee Kim  wrote:

> I'm trying to install the "Chipmunk" package on Windows 7.
>
> Building "Chipmunk" requires the `cmake` and `make` commands, so I installed
> CMake and MinGW to use those commands in the Windows command prompt.
>
>
> But I still couldn't build the Chipmunk package, because the `make` command
> still doesn't work as it does on Linux.
>
> "build.jl" fails at line 25: *run(`make`)*
>
> -
> Console
> Checking dependencies...
> Good...
> HEAD is now at 079a859... Finish the build scripts.
> HEAD is now at 079a859... Finish the build scripts.
> Configuring Chipmunk2D version 7.0.0
> -- Configuring done
> -- Generating done
> -- Build files have been written to:
> C:/Users/Yonghee/.julia/v0.4/Chipmunk/deps/Chipmunk2D
> make: *** No targets specified and no makefile found.  Stop.
> ---
>
>
> I figured that if I could specify a target for the `make` command, it might
> work, but I couldn't figure out what target to specify, or how.
>
> Is there any way to get around this?
>


[julia-users] function definitions short form

2016-04-06 Thread Didier Verna

  Just a thought, not important but only for the record.

The long forms for function definition are nicely homogeneous:

function foo(x)
  2x
end

and

function (x)
  2x
end


It's a shame that the short ones aren't. I understand that using
(x) = 2x
for anonymous functions could entail a syntactic collision with
multiple assignment / pattern matching / whateveryoucallit, so perhaps
defining short named functions had better be done like this:

foo(x) -> 2x

-- 
ELS'16 registration open! http://www.european-lisp-symposium.org

Lisp, Jazz, Aïkido: http://www.didierverna.info


Re: [julia-users] function definitions short form

2016-04-06 Thread Isaiah Norton
`foo = x -> 2x` works.
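
(And for completeness, the named short form that already exists is the assignment form, so the two one-liner styles are:)

foo(x) = 2x      # named short form (assignment syntax)
bar = x -> 2x    # anonymous function bound to a name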

On Wed, Apr 6, 2016 at 9:30 AM, Didier Verna  wrote:

>
>   Just a thought, not important but only for the record.
>
> The long forms for function definition are nicely homogeneous:
>
> function foo(x)
>   2x
> end
>
> and
>
> function (x)
>   2x
> end
>
>
> It's a shame that the short ones aren't. I understand that using
> (x) = 2x
> for anonymous functions could entail a syntactic collision with
> multiple assignment / pattern matching / whateveryoucallit, so perhaps
> defining short named functions had better be done like this:
>
> foo(x) -> 2x
>
> --
> ELS'16 registration open! http://www.european-lisp-symposium.org
>
> Lisp, Jazz, Aïkido: http://www.didierverna.info
>


Re: [julia-users] deprecated syntax foo(): to warn or not to warn ?

2016-04-06 Thread Didier Verna
Isaiah Norton  wrote:

> For explanation, see https://github.com/JuliaLang/julia/issues/7232
> (tl;dr: better support for macro calls *without* parentheses)

  Ouch! So it seems Julia is missing Lisp's distinction between macros
  and symbol-macros. Thanks for the pointer.

-- 
ELS'16 registration open! http://www.european-lisp-symposium.org

Lisp, Jazz, Aïkido: http://www.didierverna.info


[julia-users] Creating a subtype of IO that wraps an IOBuffer

2016-04-06 Thread Daniel Arndt
This is shorter than it looks; it's mostly code / output. Feel free to skip 
to the tl;dr.

I'm playing around with an idea where I've created a new type that wraps an 
IOBuffer.

This new type would hold some other information as well, but for its 
read/write operations, I wanted to just pass the calls on to the 
encapsulated IOBuffer. I thought this would be fairly simple (as any good 
programming story begins):

import Base: write


type NewIOType <: IO
buffer::IOBuffer
some_other_stuff::Int

new() = new(IOBuffer())
end


write(io::NewIOType, x...) = write(io.buffer, x...)

However, write seems to conflict with multiple other definitions:

WARNING: New definition 
write(Main.NewIOType, Any...) at In[1]:10
is ambiguous with: 
write(Base.IO, Base.Complex) at complex.jl:78.
To fix, define 
write(Main.NewIOType, Base.Complex)
before the new definition.
WARNING: New definition 
write(Main.NewIOType, Any...) at In[1]:10
is ambiguous with: 
write(Base.IO, Base.Rational) at rational.jl:66.
...
and on and on
...

I understand the problem: Although my first parameter is more specific, *my 
second is not*. In an exploratory move, I tried:

write{T}(io::NewIOType, x::T) = write(io.buffer, x)

Thinking that this would create a new write function for every type T and 
therefore be more specific (*I could use some clarity here, as obviously my 
understanding is incorrect*), I get this:

WARNING: New definition 
write(Main.NewIOType, #T<:Any) at In[1]:10
is ambiguous with: 
write(Base.IO, Base.Complex) at complex.jl:78.
To fix, define 
write(Main.NewIOType, _<:Base.Complex)
before the new definition.
WARNING: New definition 
write(Main.NewIOType, #T<:Any) at In[1]:10
is ambiguous with: 
write(Base.IO, Base.Rational) at rational.jl:66.
To fix, define 
write(Main.NewIOType, _<:Base.Rational)

*tl;dr Can I wrap an IO instance, and pass all calls to 'write' to the 
wrapped instance's version?*

I am entirely capable and aware of other approaches, so while I do 
appreciate suggestions of alternative approaches, I am specifically 
wondering if there is some mechanism that I'm missing that easily overcomes 
this.

Cheers,
Dan



Re: [julia-users] Creating a subtype of IO that wraps an IOBuffer

2016-04-06 Thread Milan Bouchet-Valat
On Wednesday, April 6, 2016 at 06:48 -0700, Daniel Arndt wrote:
> This is shorter than it looks, it's mostly code / output. Feel free
> to skip to the tl;dr.
> 
> I'm playing around with an idea where I've created a new type that
> wraps an IOBuffer.
> 
> This new type would hold some other information as well, but for its
> read/write operations, I wanted to just pass the calls on to the
> encapsulated IOBuffer. I thought this would be fairly simple (as any
> good programming story begins):
> 
> import Base: write
> 
> 
> type NewIOType <: IO
>     buffer::IOBuffer
>     some_other_stuff::Int
>     
>     new() = new(IOBuffer())
> end
> 
> 
> write(io::NewIOType, x...) = write(io.buffer, x...)
> 
> However, write seems to conflict with multiple other definitions:
> 
> WARNING: New definition 
>     write(Main.NewIOType, Any...) at In[1]:10
> is ambiguous with: 
>     write(Base.IO, Base.Complex) at complex.jl:78.
> To fix, define 
>     write(Main.NewIOType, Base.Complex)
> before the new definition.
> WARNING: New definition 
>     write(Main.NewIOType, Any...) at In[1]:10
> is ambiguous with: 
>     write(Base.IO, Base.Rational) at rational.jl:66.
> ...
> and on and on
> ...
> 
> I understand the problem: Although my first parameter is more
> specific, my second is not. In an exploratory move, I tried:
> 
> write{T}(io::NewIOType, x::T) = write(io.buffer, x)
> 
> Thinking that this would create a new write function for every type T
> and therefore be more specific (I could use some clarity here, as
> obviously my understanding is incorrect), I get this:
> 
> WARNING: New definition 
>     write(Main.NewIOType, #T<:Any) at In[1]:10
> is ambiguous with: 
>     write(Base.IO, Base.Complex) at complex.jl:78.
> To fix, define 
>     write(Main.NewIOType, _<:Base.Complex)
> before the new definition.
> WARNING: New definition 
>     write(Main.NewIOType, #T<:Any) at In[1]:10
> is ambiguous with: 
>     write(Base.IO, Base.Rational) at rational.jl:66.
> To fix, define 
>     write(Main.NewIOType, _<:Base.Rational)
> 
> tl;dr Can I wrap an IO instance, and pass all calls to 'write' to the
> wrapped instance's version?
> 
> I am entirely capable and aware of other approaches, so while I do
> appreciate suggestions of alternative approaches, I am specifically
> wondering if there is some mechanism that I'm missing that easily
> overcomes this.
I think you only need to provide write(s::NewIOType, x::UInt8). All
higher-level write() methods will automatically use it.


Regards
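
Putting that suggestion together, a minimal sketch (the `0` default for `some_other_stuff` is an assumption on my part):

import Base: write

type NewIOType <: IO
    buffer::IOBuffer
    some_other_stuff::Int

    NewIOType() = new(IOBuffer(), 0)
end

# one byte-level method; Base's generic write(::IO, ...) methods all
# funnel through it, so no ambiguity warnings are triggered
write(io::NewIOType, x::UInt8) = write(io.buffer, x)

io = NewIOType()
write(io, "hello")          # handled by Base's generic IO machinery
takebuf_string(io.buffer)   # "hello"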


[julia-users] Weird singleton varags display

2016-04-06 Thread Didier Verna

  Can somebody please explain this to me:

julia> foo(args...) = args
foo (generic function with 1 method)

foo(1)
(1,)

i.e., why the trailing comma ?

-- 
ELS'16 registration open! http://www.european-lisp-symposium.org

Lisp, Jazz, Aïkido: http://www.didierverna.info


Re: [julia-users] Re: regression from 0.43 to 0.5dev, and back to 0.43 on fedora23

2016-04-06 Thread Johannes Wagner


On Tuesday, April 5, 2016 at 7:54:16 PM UTC+2, Milan Bouchet-Valat wrote:
>
> On Tuesday, April 5, 2016 at 10:18 -0700, Johannes Wagner wrote: 
> > hey Milan, 
> > so consider following code: 
> > 
> > Pkg.clone("git://github.com/kbarbary/TimeIt.jl.git") 
> > using TimeIt 
> > 
> > v = rand(3) 
> > r = rand(6000,3) 
> > x = linspace(0.0, 10.0, 500) * (v./sqrt(sumabs2(v)))' 
> > 
> > dotprods = r * x[2,:] 
> > imexp= cis(dotprods) 
> > sumprod  = sum(imexp) * sum(conj(imexp)) 
> > 
> > f(r, x) = r * x[2,:] 
> > g(r, x) = r * x' 
> > h(imexp)= sum(imexp) * sum(conj(imexp)) 
> > 
> > function s(r, x) 
> > result = zeros(size(x,1)) 
> > for i = 1:size(x,1) 
> > imexp= cis(r * x[i,:]) 
> > result[i]= sum(imexp) * sum(conj(imexp)) 
> > end 
> > return result 
> > end 
> > 
> > @timeit zeros(size(x,1)) 
> > @timeit f(r,x) 
> > @timeit g(r,x) 
> > @timeit cis(dotprods) 
> > @timeit h(imexp) 
> > @timeit s(r,x) 
> > 
> > @code_native f(r,x) 
> > @code_native g(r,x) 
> > @code_native cis(dotprods) 
> > @code_native h(imexp) 
> > @code_native s(r,x) 
> > 
> > and I attached the output of the last @code_native s(r,x) as text 
> > files for the binary tarball, as well as the latest nalimilan update. 
> > For the whole function s, the exported code looks actually the same 
> > everywhere. 
> > But s(r,x) is the one that is considerable slower on the i7 than the 
> > i5, whereas all the other timed calls are more or less same speed on 
> > i5 and i7. Here are the timings in the same order as above (all run 
> > repeatedly to not have compile time in it for last one): 
> > 
> > i7: 
> > 100 loops, best of 3: 871.68 ns per loop 
> > 1 loops, best of 3: 10.84 µs per loop 
> > 100 loops, best of 3: 5.19 ms per loop 
> > 1 loops, best of 3: 71.35 µs per loop 
> > 1 loops, best of 3: 26.65 µs per loop 
> > 1 loops, best of 3: 159.99 ms per loop 
> > 
> > i5: 
> > 10 loops, best of 3: 1.01 µs per loop 
> > 1 loops, best of 3: 10.93 µs per loop 
> > 100 loops, best of 3: 5.09 ms per loop 
> > 1 loops, best of 3: 75.93 µs per loop 
> > 1 loops, best of 3: 29.23 µs per loop 
> > 1 loops, best of 3: 103.70 ms per loop 
> > 
> > So based on inside s(r,x) calls, the i7 should be faster, but the 
> > whole s(r,x) is slower. Still clueless... And don't know how to 
> > further pin this down... 
> Thanks. I think you got mixed up with the different files, as the 
> versioninfo() output indicates. Anyway, there's enough info to check 
> which file corresponds to which Julia version, so that's OK. Indeed, 
> when comparing the tests with binary tarballs, there's a call 
> to jl_alloc_array_1d with the i7 (julia050_tarball-haswell-i7.txt), 
> which is not present with the i5 (incorrectly named julia050_haswell- 
> i7.txt). This is really unexpected. 
>

I'm afraid not. The filename was correct; the header was wrong. The difference in 
instructions for the whole loop is between the tarball and the nalimilan repo. See 
the attached file (double-checked) again. Despite the assembly differences, 
the tarball and nalimilan Julia behave the same and run at the same speed on the i5. 
The same holds for the i7: both are slower. The tarball Julia 0.50 just seems a tad 
faster (2-5%) on both the i5 and i7.
 

> Could you file an issue on GitHub with a summary of what we've found 
> (essentially your message), as well as links to 3 Gists giving the code 
> and the contents of the two .txt files I mentioned above? That would be 
> very helpful. Do not mention the Fedora packages at all, as the binary 
> tarballs are closer to what Julia developers use. 
>
>
> Regards 
>
>
> > cheers, Johannes 
> > 
> > 
> > 
> > 
> > > On Monday, April 4, 2016 at 10:36 -0700, Johannes Wagner wrote:  
> > > > hey guys,  
> > > > so attached you find text files with @code_native output for the  
> > > > instructions   
> > > > - r * x[1,:]  
> > > > - cis(imexp)  
> > > > - sum(imexp) * sum(conj(imexp))  
> > > >  
> > > > for julia 0.5.   
> > > >  
> > > > Hardware I run on is a Haswell i5 machine, a Haswell i7 machine, 
> > > and  
> > > > a IvyBridge i5 machine. Turned out on an Haswell i5 machine the 
> > > code  
> > > > also runs fast. Only the Haswell i7 machine is the slow one. 
> > > This  
> > > > really drove me nuts. First I thought it was the OS, then the  
> > > > architecture, and now its just from i5 to i7 Anyways, I 
> > > don't  
> > > > know anything about x86 assembly, but the julia 0.45 code is the 
> > > same  
> > > > on all machines. However, for the dot product, the 0.5 code has  
> > > > already 2 different instructions on the i5 vs. the i7 (line 
> > > 44&47).  
> > > > For the cis call also (line 149...). And the IvyBridge i5 code 
> > > is  
> > > > similar to the Haswell i5. I included also versioninfo() at the 
> > > top  
> > > > of the file. So you could just look at a vimdiff of the julia0.5  
> > > > files... Can anyone make sense out of this?  
> > > I'm definitel

Re: [julia-users] Weird singleton varags display

2016-04-06 Thread Mauro
>   Can somebody please explain this to me:
>
> julia> foo(args...) = args
> foo (generic function with 1 method)
>
> foo(1)
> (1,)
>
> i.e., why the trailing comma ?

It's a tuple.

foo(1,2)
(1,2)


Re: [julia-users] Weird singleton varags display

2016-04-06 Thread Didier Verna
Mauro  wrote:

> It's a tuple.
>
> foo(1,2)
> (1,2)

and foo(1,2,3) => (1,2,3) and so on. But I still don't understand :-)

-- 
ELS'16 registration open! http://www.european-lisp-symposium.org

Lisp, Jazz, Aïkido: http://www.didierverna.info


Re: [julia-users] Creating a subtype of IO that wraps an IOBuffer

2016-04-06 Thread Daniel Arndt
Great! That *seems* to work in my extremely limited testing so far. Thanks, 
Milan!

Please note that I have since identified the error in my constructor 
(entirely unrelated to any issues)... it should read:

NewIOType() = new(IOBuffer())



On Wednesday, 6 April 2016 11:06:17 UTC-3, Milan Bouchet-Valat wrote:
>
> On Wednesday, April 6, 2016 at 06:48 -0700, Daniel Arndt wrote: 
> > This is shorter than it looks, it's mostly code / output. Feel free 
> > to skip to the tl;dr. 
> > 
> > I'm playing around with an idea where I've created a new type that 
> > wraps an IOBuffer. 
> > 
> > This new type would hold some other information as well, but for its 
> > read/write operations, I wanted to just pass the calls on to the 
> > encapsulated IOBuffer. I thought this would be fairly simple (as any 
> > good programming story begins): 
> > 
> > import Base: write 
> > 
> > 
> > type NewIOType <: IO 
> > buffer::IOBuffer 
> > some_other_stuff::Int 
> >  
> > new() = new(IOBuffer()) 
> > end 
> > 
> > 
> > write(io::NewIOType, x...) = write(io.buffer, x...) 
> > 
> > However, write seems to conflict with multiple other definitions: 
> > 
> > WARNING: New definition  
> > write(Main.NewIOType, Any...) at In[1]:10 
> > is ambiguous with:  
> > write(Base.IO, Base.Complex) at complex.jl:78. 
> > To fix, define  
> > write(Main.NewIOType, Base.Complex) 
> > before the new definition. 
> > WARNING: New definition  
> > write(Main.NewIOType, Any...) at In[1]:10 
> > is ambiguous with:  
> > write(Base.IO, Base.Rational) at rational.jl:66. 
> > ... 
> > and on and on 
> > ... 
> > 
> > I understand the problem: Although my first parameter is more 
> > specific, my second is not. In an exploratory move, I tried: 
> > 
> > write{T}(io::NewIOType, x::T) = write(io.buffer, x) 
> > 
> > Thinking that this would create a new write function for every type T 
> > and therefore be more specific (I could use some clarity here, as 
> > obviously my understanding is incorrect), I get this: 
> > 
> > WARNING: New definition  
> > write(Main.NewIOType, #T<:Any) at In[1]:10 
> > is ambiguous with:  
> > write(Base.IO, Base.Complex) at complex.jl:78. 
> > To fix, define  
> > write(Main.NewIOType, _<:Base.Complex) 
> > before the new definition. 
> > WARNING: New definition  
> > write(Main.NewIOType, #T<:Any) at In[1]:10 
> > is ambiguous with:  
> > write(Base.IO, Base.Rational) at rational.jl:66. 
> > To fix, define  
> > write(Main.NewIOType, _<:Base.Rational) 
> > 
> > tl;dr Can I wrap an IO instance, and pass all calls to 'write' to the 
> > wrapped instance's version? 
> > 
> > I am entirely capable and aware of other approaches, so while I do 
> > appreciate suggestions of alternative approaches, I am specifically 
> > wondering if there is some mechanism that I'm missing that easily 
> > overcomes this. 
> I think you only need to provide write(s::NewIOType, x::UInt8). All 
> higher-level write() methods will automatically use it. 
>
>
> Regards 
>


Re: [julia-users] Re: regression from 0.43 to 0.5dev, and back to 0.43 on fedora23

2016-04-06 Thread Johannes Wagner
And a last update: I disabled hyperthreading on the i7, and now it performs 
as expected.

i7 no HT:

100 loops, best of 3: 879.57 ns per loop
10 loops, best of 3: 9.88 µs per loop
100 loops, best of 3: 4.46 ms per loop
1 loops, best of 3: 69.89 µs per loop
1 loops, best of 3: 26.67 µs per loop
10 loops, best of 3: 95.08 ms per loop

i7 with HT:

100 loops, best of 3: 871.68 ns per loop
1 loops, best of 3: 10.84 µs per loop
100 loops, best of 3: 5.19 ms per loop
1 loops, best of 3: 71.35 µs per loop
1 loops, best of 3: 26.65 µs per loop
1 loops, best of 3: 159.99 ms per loop

So all the calls inside the loop run at the same speed, but the whole loop, with 
identical assembly code, is ~60% slower if HT is enabled. Where can this 
problem arise from, then? LLVM? Or thread pinning in the OS? Probably not a 
Julia problem then...
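
(One untested way to check the OS-scheduling hypothesis without touching the BIOS would be to pin the process to one logical core per physical core, e.g. on Linux

taskset -c 0,2,4,6 julia script.jl

assuming logical CPUs 0,2,4,6 map to distinct physical cores on this machine, so the benchmark never shares a core with its hyperthread sibling.)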



On Tuesday, April 5, 2016 at 7:54:16 PM UTC+2, Milan Bouchet-Valat wrote:
>
> On Tuesday, April 5, 2016 at 10:18 -0700, Johannes Wagner wrote: 
> > hey Milan, 
> > so consider following code: 
> > 
> > Pkg.clone("git://github.com/kbarbary/TimeIt.jl.git") 
> > using TimeIt 
> > 
> > v = rand(3) 
> > r = rand(6000,3) 
> > x = linspace(0.0, 10.0, 500) * (v./sqrt(sumabs2(v)))' 
> > 
> > dotprods = r * x[2,:] 
> > imexp= cis(dotprods) 
> > sumprod  = sum(imexp) * sum(conj(imexp)) 
> > 
> > f(r, x) = r * x[2,:] 
> > g(r, x) = r * x' 
> > h(imexp)= sum(imexp) * sum(conj(imexp)) 
> > 
> > function s(r, x) 
> > result = zeros(size(x,1)) 
> > for i = 1:size(x,1) 
> > imexp= cis(r * x[i,:]) 
> > result[i]= sum(imexp) * sum(conj(imexp)) 
> > end 
> > return result 
> > end 
> > 
> > @timeit zeros(size(x,1)) 
> > @timeit f(r,x) 
> > @timeit g(r,x) 
> > @timeit cis(dotprods) 
> > @timeit h(imexp) 
> > @timeit s(r,x) 
> > 
> > @code_native f(r,x) 
> > @code_native g(r,x) 
> > @code_native cis(dotprods) 
> > @code_native h(imexp) 
> > @code_native s(r,x) 
> > 
> > and I attached the output of the last @code_native s(r,x) as text 
> > files for the binary tarball, as well as the latest nalimilan update. 
> > For the whole function s, the exported code looks actually the same 
> > everywhere. 
> > But s(r,x) is the one that is considerable slower on the i7 than the 
> > i5, whereas all the other timed calls are more or less same speed on 
> > i5 and i7. Here are the timings in the same order as above (all run 
> > repeatedly to not have compile time in it for last one): 
> > 
> > i7: 
> > 100 loops, best of 3: 871.68 ns per loop 
> > 1 loops, best of 3: 10.84 µs per loop 
> > 100 loops, best of 3: 5.19 ms per loop 
> > 1 loops, best of 3: 71.35 µs per loop 
> > 1 loops, best of 3: 26.65 µs per loop 
> > 1 loops, best of 3: 159.99 ms per loop 
> > 
> > i5: 
> > 10 loops, best of 3: 1.01 µs per loop 
> > 1 loops, best of 3: 10.93 µs per loop 
> > 100 loops, best of 3: 5.09 ms per loop 
> > 1 loops, best of 3: 75.93 µs per loop 
> > 1 loops, best of 3: 29.23 µs per loop 
> > 1 loops, best of 3: 103.70 ms per loop 
> > 
> > So based on inside s(r,x) calls, the i7 should be faster, but the 
> > whole s(r,x) is slower. Still clueless... And don't know how to 
> > further pin this down... 
> Thanks. I think you got mixed up with the different files, as the 
> versioninfo() output indicates. Anyway, there's enough info to check 
> which file corresponds to which Julia version, so that's OK. Indeed, 
> when comparing the tests with binary tarballs, there's a call 
> to jl_alloc_array_1d with the i7 (julia050_tarball-haswell-i7.txt), 
> which is not present with the i5 (incorrectly named julia050_haswell- 
> i7.txt). This is really unexpected. 
>
> Could you file an issue on GitHub with a summary of what we've found 
> (essentially your message), as well as links to 3 Gists giving the code 
> and the contents of the two .txt files I mentioned above? That would be 
> very helpful. Do not mention the Fedora packages at all, as the binary 
> tarballs are closer to what Julia developers use. 
>
>
> Regards 
>
>
> > cheers, Johannes 
> > 
> > 
> > 
> > 
> > > On Monday, April 4, 2016 at 10:36 -0700, Johannes Wagner wrote:  
> > > > hey guys,  
> > > > so attached you find text files with @code_native output for the  
> > > > instructions   
> > > > - r * x[1,:]  
> > > > - cis(imexp)  
> > > > - sum(imexp) * sum(conj(imexp))  
> > > >  
> > > > for julia 0.5.   
> > > >  
> > > > Hardware I run on is a Haswell i5 machine, a Haswell i7 machine, 
> > > and  
> > > > a IvyBridge i5 machine. Turned out on an Haswell i5 machine the 
> > > code  
> > > > also runs fast. Only the Haswell i7 machine is the slow one. 
> > > This  
> > > > really drove me nuts. First I thought it was the OS, then the  
> > > > architecture, and now its just from i5 to i7 Anyways, I 
> > > don't  
> > > > know anything about x86 assembly, but the julia 0.45 code is the 
> > > same  
> > > > on all machines.

[julia-users] The manual about explicit types

2016-04-06 Thread Didier Verna

  I think the section on functions in the user manual should be fixed in
  two places:

- it says "The types of keyword arguments can be made explicit as
  follows" which is misleading because, IIUC, every function parameter
  can have an explicit type.

- in the end, it says "None of the examples given here provide any type
  annotations on their arguments" which is wrong.

-- 
ELS'16 registration open! http://www.european-lisp-symposium.org

Lisp, Jazz, Aïkido: http://www.didierverna.info


Re: [julia-users] Weird singleton varags display

2016-04-06 Thread Mauro
On Wed, 2016-04-06 at 16:17, Didier Verna  wrote:
> Mauro  wrote:
>
>> It's a tuple.
>>
>> foo(1,2)
>> (1,2)
>
> and foo(1,2,3) => (1,2,3) and so on. But I still don't understand :-)

Did you find:
http://docs.julialang.org/en/release-0.4/manual/functions/#varargs-functions

Quoting:

It is often convenient to be able to write functions taking an arbitrary
number of arguments. Such functions are traditionally known as “varargs”
functions, which is short for “variable number of arguments”. You can
define a varargs function by following the last argument with an
ellipsis:

julia> bar(a,b,x...) = (a,b,x)
bar (generic function with 1 method)

The variables a and b are bound to the first two argument values as
usual, and the variable x is bound to an iterable collection of the zero
or more values passed to bar after its first two arguments:

julia> bar(1,2)
(1,2,())

julia> bar(1,2,3)
(1,2,(3,))

julia> bar(1,2,3,4)
(1,2,(3,4))

julia> bar(1,2,3,4,5,6)
(1,2,(3,4,5,6))


Re: [julia-users] Weird singleton varags display

2016-04-06 Thread Didier Verna
Mauro  wrote:

> Did you find:
> http://docs.julialang.org/en/release-0.4/manual/functions/#varargs-functions

  It was while reading this that the question occurred to me.

-- 
ELS'16 registration open! http://www.european-lisp-symposium.org

Lisp, Jazz, Aïkido: http://www.didierverna.info


Re: [julia-users] Weird singleton varags display

2016-04-06 Thread Milan Bouchet-Valat
On Wednesday, April 6, 2016 at 16:09 +0200, Didier Verna wrote:
>   Can somebody please explain this to me:
> 
> julia> foo(args...) = args
> foo (generic function with 1 method)
> 
> foo(1)
> (1,)
> 
> i.e., why the trailing comma ?
julia> (1)
1

julia> (1,)
(1,)

The first syntax returns an integer: only the second one returns a
tuple. Parentheses alone only group terms, they don't create a tuple.


Regards


Re: [julia-users] Weird singleton varags display

2016-04-06 Thread Didier Verna
Milan Bouchet-Valat  wrote:

> The first syntax returns an integer: only the second one returns a
> tuple. Parentheses alone only group terms, they don't create a tuple.

  Doh!  That's what I was missing. Thanks!

-- 
ELS'16 registration open! http://www.european-lisp-symposium.org

Lisp, Jazz, Aïkido: http://www.didierverna.info


Re: [julia-users] The manual about explicit types

2016-04-06 Thread Mauro
On Wed, 2016-04-06 at 16:24, Didier Verna  wrote:
>   I think the section on functions in the user manual should be fixed in
>   two places:
>
> - it says "The types of keyword arguments can be made explicit as
>   follows" which is misleading because, IIUC, every function parameter
>   can have an explicit type.

Maybe a reference to the later sections should be made here or the
type-ing moved to the later section.

> - in the end, it says "None of the examples given here provide any type
>   annotations on their arguments" which is wrong.

This statement should be relaxed but is needed for the next sentence.

A pull request is most welcome ;-).  Easiest is if you go to:
https://github.com/JuliaLang/julia/blob/master/doc/manual/functions.rst
and click on the little pencil.


Re: [julia-users] The manual about explicit types

2016-04-06 Thread Didier Verna
Mauro  wrote:

> A pull request is most welcome ;-).  Easiest is if you go to:
> https://github.com/JuliaLang/julia/blob/master/doc/manual/functions.rst
> and click on the little pencil.

  Will do. I also have spotted several other places that would need
  fixing.

-- 
ELS'16 registration open! http://www.european-lisp-symposium.org

Lisp, Jazz, Aïkido: http://www.didierverna.info


Re: [julia-users] Re: regression from 0.43 to 0.5dev, and back to 0.43 on fedora23

2016-04-06 Thread Milan Bouchet-Valat
On Wednesday, April 6, 2016 at 07:25 -0700, Johannes Wagner wrote:
> And a last update: I disabled hyperthreading on the i7, and now it
> performs as expected.
> 
> i7 no HT:
> 
> 100 loops, best of 3: 879.57 ns per loop
> 10 loops, best of 3: 9.88 µs per loop
> 100 loops, best of 3: 4.46 ms per loop
> 1 loops, best of 3: 69.89 µs per loop
> 1 loops, best of 3: 26.67 µs per loop
> 10 loops, best of 3: 95.08 ms per loop
> 
> i7 with HT:
> 
> 100 loops, best of 3: 871.68 ns per loop
> 1 loops, best of 3: 10.84 µs per loop
> 100 loops, best of 3: 5.19 ms per loop
> 1 loops, best of 3: 71.35 µs per loop
> 1 loops, best of 3: 26.65 µs per loop
> 1 loops, best of 3: 159.99 ms per loop
> 
> So all the calls inside the loop run at the same speed, but the whole loop,
> with identical assembly code, is ~60% slower if HT is enabled. Where
> can this problem arise from, then? LLVM? Or thread pinning in the OS?
> Probably not a Julia problem then...
Indeed, in the last assembly output you sent, there are no differences
between i5 and i7 (as expected). So this isn't Julia's nor LLVM's
fault. No idea whether there might be an issue with the CPU itself, but
it's quite surprising.


Regards


> > On Tuesday, April 5, 2016 at 10:18 -0700, Johannes Wagner wrote: 
> > > hey Milan, 
> > > so consider following code: 
> > > 
> > > Pkg.clone("git://github.com/kbarbary/TimeIt.jl.git") 
> > > using TimeIt 
> > > 
> > > v = rand(3) 
> > > r = rand(6000,3) 
> > > x = linspace(0.0, 10.0, 500) * (v./sqrt(sumabs2(v)))' 
> > > 
> > > dotprods = r * x[2,:] 
> > > imexp    = cis(dotprods) 
> > > sumprod  = sum(imexp) * sum(conj(imexp)) 
> > > 
> > > f(r, x) = r * x[2,:]     
> > > g(r, x) = r * x' 
> > > h(imexp)    = sum(imexp) * sum(conj(imexp)) 
> > > 
> > > function s(r, x) 
> > >         result = zeros(size(x,1)) 
> > >         for i = 1:size(x,1) 
> > >                 imexp    = cis(r * x[i,:]) 
> > >                 result[i]= sum(imexp) * sum(conj(imexp)) 
> > >         end 
> > >         return result 
> > > end 
> > > 
> > > @timeit zeros(size(x,1)) 
> > > @timeit f(r,x) 
> > > @timeit g(r,x) 
> > > @timeit cis(dotprods) 
> > > @timeit h(imexp) 
> > > @timeit s(r,x) 
> > > 
> > > @code_native f(r,x) 
> > > @code_native g(r,x) 
> > > @code_native cis(dotprods) 
> > > @code_native h(imexp) 
> > > @code_native s(r,x) 
> > > 
> > > and I attached the output of the last @code_native s(r,x) as
> > text 
> > > files for the binary tarball, as well as the latest nalimilan
> > update. 
> > > For the whole function s, the exported code looks actually the
> > same 
> > > everywhere. 
> > > But s(r,x) is the one that is considerable slower on the i7 than
> > the 
> > > i5, whereas all the other timed calls are more or less same speed
> > on 
> > > i5 and i7. Here are the timings in the same order as above (all
> > run 
> > > repeatedly to not have compile time in it for last one): 
> > > 
> > > i7: 
> > > 100 loops, best of 3: 871.68 ns per loop 
> > > 1 loops, best of 3: 10.84 µs per loop 
> > > 100 loops, best of 3: 5.19 ms per loop 
> > > 1 loops, best of 3: 71.35 µs per loop 
> > > 1 loops, best of 3: 26.65 µs per loop 
> > > 1 loops, best of 3: 159.99 ms per loop 
> > > 
> > > i5: 
> > > 10 loops, best of 3: 1.01 µs per loop 
> > > 1 loops, best of 3: 10.93 µs per loop 
> > > 100 loops, best of 3: 5.09 ms per loop 
> > > 1 loops, best of 3: 75.93 µs per loop 
> > > 1 loops, best of 3: 29.23 µs per loop 
> > > 1 loops, best of 3: 103.70 ms per loop 
> > > 
> > > So based on inside s(r,x) calls, the i7 should be faster, but
> > the 
> > > whole s(r,x) is slower. Still clueless... And don't know how to 
> > > further pin this down... 
> > Thanks. I think you got mixed up with the different files, as the 
> > versioninfo() output indicates. Anyway, there's enough info to
> > check 
> > which file corresponds to which Julia version, so that's OK.
> > Indeed, 
> > when comparing the tests with binary tarballs, there's a call 
> > to jl_alloc_array_1d with the i7 (julia050_tarball-haswell-
> > i7.txt), 
> > which is not present with the i5 (incorrectly
> > named julia050_haswell- 
> > i7.txt). This is really unexpected. 
> > 
> > Could you file an issue on GitHub with a summary of what we've
> > found 
> > (essentially your message), as well as links to 3 Gists giving the
> > code 
> > and the contents of the two .txt files I mentioned above? That
> > would be 
> > very helpful. Do not mention the Fedora packages at all, as the
> > binary 
> > tarballs are closer to what Julia developers use. 
> > 
> > 
> > Regards 
> > 
> > 
> > > cheers, Johannes 
> > > 
> > > 
> > > 
> > > 
> > > > On Monday, April 4, 2016 at 10:36 -0700, Johannes Wagner
> > wrote:  
> > > > > hey guys,  
> > > > > so attached you find text files with @code_native output for
> > the  
> > > > > instructions   
> > > > > - r * x[1,:]  
> > > > > - cis(imexp)  
> > > > > - sum(imexp) * sum(conj(imexp))  
> > > > >  
> > >

Re: [julia-users] regression from 0.43 to 0.5dev, and back to 0.43 on fedora23

2016-04-06 Thread Johannes Wagner


> On 6 Apr  2016, at 4:46 PM, Milan Bouchet-Valat  wrote:
> 
> On Wednesday, April 6, 2016 at 07:25 -0700, Johannes Wagner wrote:
>> And a last update: I disabled hyperthreading on the i7, and now it
>> performs as expected.
>> 
>> i7 no HT:
>> 
>> 100 loops, best of 3: 879.57 ns per loop
>> 10 loops, best of 3: 9.88 µs per loop
>> 100 loops, best of 3: 4.46 ms per loop
>> 1 loops, best of 3: 69.89 µs per loop
>> 1 loops, best of 3: 26.67 µs per loop
>> 10 loops, best of 3: 95.08 ms per loop
>> 
>> i7 with HT:
>> 
>> 100 loops, best of 3: 871.68 ns per loop
>> 1 loops, best of 3: 10.84 µs per loop
>> 100 loops, best of 3: 5.19 ms per loop
>> 1 loops, best of 3: 71.35 µs per loop
>> 1 loops, best of 3: 26.65 µs per loop
>> 1 loops, best of 3: 159.99 ms per loop
>> 
>> So all the calls inside the loop run at the same speed, but the whole loop,
>> with identical assembly code, is ~60% slower if HT is enabled. Where
>> can this problem arise from, then? LLVM? Or thread pinning in the OS?
>> Probably not a Julia problem then...
> Indeed, in the last assembly output you sent, there are no differences
> between i5 and i7 (as expected). So this isn't Julia's nor LLVM's
> fault. No idea whether there might be an issue with the CPU itself, but
> it's quite surprising.

I ran it on a 2nd i7 machine. Same behavior, so it's definitely not a faulty CPU. Do 
you have any other idea what to do? Just leaving it as is and from now on using Julia 
with hyperthreading disabled is not really satisfactory...


> Regards
> 
> 
>>> On Tuesday, April 5, 2016 at 10:18 -0700, Johannes Wagner wrote: 
 hey Milan, 
 so consider following code: 
  
 Pkg.clone("git://github.com/kbarbary/TimeIt.jl.git") 
 using TimeIt 
  
 v = rand(3) 
 r = rand(6000,3) 
 x = linspace(0.0, 10.0, 500) * (v./sqrt(sumabs2(v)))' 
  
 dotprods = r * x[2,:] 
 imexp= cis(dotprods) 
 sumprod  = sum(imexp) * sum(conj(imexp)) 
  
 f(r, x) = r * x[2,:] 
 g(r, x) = r * x' 
 h(imexp)= sum(imexp) * sum(conj(imexp)) 
  
 function s(r, x) 
 result = zeros(size(x,1)) 
 for i = 1:size(x,1) 
 imexp= cis(r * x[i,:]) 
 result[i]= sum(imexp) * sum(conj(imexp)) 
 end 
 return result 
 end 
  
 @timeit zeros(size(x,1)) 
 @timeit f(r,x) 
 @timeit g(r,x) 
 @timeit cis(dotprods) 
 @timeit h(imexp) 
 @timeit s(r,x) 
  
 @code_native f(r,x) 
 @code_native g(r,x) 
 @code_native cis(dotprods) 
 @code_native h(imexp) 
 @code_native s(r,x) 
  
 and I attached the output of the last @code_native s(r,x) as
>>> text 
 files for the binary tarball, as well as the latest nalimilan
>>> update. 
 For the whole function s, the exported code looks actually the
>>> same 
 everywhere. 
 But s(r,x) is the one that is considerable slower on the i7 than
>>> the 
 i5, whereas all the other timed calls are more or less same speed
>>> on 
 i5 and i7. Here are the timings in the same order as above (all
>>> run 
 repeatedly to not have compile time in it for last one): 
  
 i7: 
 100 loops, best of 3: 871.68 ns per loop 
 1 loops, best of 3: 10.84 µs per loop 
 100 loops, best of 3: 5.19 ms per loop 
 1 loops, best of 3: 71.35 µs per loop 
 1 loops, best of 3: 26.65 µs per loop 
 1 loops, best of 3: 159.99 ms per loop 
  
 i5: 
 10 loops, best of 3: 1.01 µs per loop 
 1 loops, best of 3: 10.93 µs per loop 
 100 loops, best of 3: 5.09 ms per loop 
 1 loops, best of 3: 75.93 µs per loop 
 1 loops, best of 3: 29.23 µs per loop 
 1 loops, best of 3: 103.70 ms per loop 
  
 So based on the calls inside s(r,x), the i7 should be faster, but
>>> the 
 whole s(r,x) is slower. Still clueless... And don't know how to 
 further pin this down... 
>>> Thanks. I think you got mixed up with the different files, as the 
>>> versioninfo() output indicates. Anyway, there's enough info to
>>> check 
>>> which file corresponds to which Julia version, so that's OK.
>>> Indeed, 
>>> when comparing the tests with binary tarballs, there's a call 
>>> to jl_alloc_array_1d with the i7 (julia050_tarball-haswell-
>>> i7.txt), 
>>> which is not present with the i5 (incorrectly
>>> named julia050_haswell- 
>>> i7.txt). This is really unexpected. 
>>> 
>>> Could you file an issue on GitHub with a summary of what we've
>>> found 
>>> (essentially your message), as well as links to 3 Gists giving the
>>> code 
>>> and the contents of the two .txt files I mentioned above? That
>>> would be 
>>> very helpful. Do not mention the Fedora packages at all, as the
>>> binary 
>>> tarballs are closer to what Julia developers use. 
>>> 
>>> 
>>> Regards 
>>> 
>>> 
 cheers, Johannes 
  
  
  
  
> On Monday, 4 April 2016 at 10:36 -0700, Johannes Wagner wrote:

Re: [julia-users] Creating a subtype of IO that wraps an IOBuffer

2016-04-06 Thread Kevin Squire
Have you seen https://github.com/BioJulia/BufferedStreams.jl?  Is that
close to what you're trying to do?

Cheers,
   Kevin

On Wed, Apr 6, 2016 at 7:23 AM, Daniel Arndt  wrote:

> Great! That *seems* to work in my extremely limited testing so far, thanks
> Milan!
>
> Please note that I have since identified the error in my constructor
> (entirely unrelated to any issues)... it should read:
>
> NewIOType() = new(IOBuffer())
>
>
>
> On Wednesday, 6 April 2016 11:06:17 UTC-3, Milan Bouchet-Valat wrote:
>>
>> On Wednesday, 6 April 2016 at 06:48 -0700, Daniel Arndt wrote:
>> > This is shorter than it looks, it's mostly code / output. Feel free
>> > to skip to the tl;dr.
>> >
>> > I'm playing around with an idea where I've created a new type that
>> > wraps an IOBuffer.
>> >
>> > This new type would hold some other information as well, but for its
>> > read/write operations, I wanted to just pass the calls on to the
>> > encapsulated IOBuffer. I thought this would be fairly simple (as any
>> > good programming story begins):
>> >
>> > import Base: write
>> >
>> >
>> > type NewIOType <: IO
>> > buffer::IOBuffer
>> > some_other_stuff::Int
>> >
>> > new() = new(IOBuffer())
>> > end
>> >
>> >
>> > write(io::NewIOType, x...) = write(io.buffer, x...)
>> >
>> > However, write seems to conflict with multiple other definitions:
>> >
>> > WARNING: New definition
>> > write(Main.NewIOType, Any...) at In[1]:10
>> > is ambiguous with:
>> > write(Base.IO, Base.Complex) at complex.jl:78.
>> > To fix, define
>> > write(Main.NewIOType, Base.Complex)
>> > before the new definition.
>> > WARNING: New definition
>> > write(Main.NewIOType, Any...) at In[1]:10
>> > is ambiguous with:
>> > write(Base.IO, Base.Rational) at rational.jl:66.
>> > ...
>> > and on and on
>> > ...
>> >
>> > I understand the problem: Although my first parameter is more
>> > specific, my second is not. In an exploratory move, I tried:
>> >
>> > write{T}(io::NewIOType, x::T) = write(io.buffer, x)
>> >
>> > Thinking that this would create a new write function for every type T
>> > and therefore be more specific (I could use some clarity here, as
>> > obviously my understanding is incorrect). I get this:
>> >
>> > WARNING: New definition
>> > write(Main.NewIOType, #T<:Any) at In[1]:10
>> > is ambiguous with:
>> > write(Base.IO, Base.Complex) at complex.jl:78.
>> > To fix, define
>> > write(Main.NewIOType, _<:Base.Complex)
>> > before the new definition.
>> > WARNING: New definition
>> > write(Main.NewIOType, #T<:Any) at In[1]:10
>> > is ambiguous with:
>> > write(Base.IO, Base.Rational) at rational.jl:66.
>> > To fix, define
>> > write(Main.NewIOType, _<:Base.Rational)
>> >
>> > tl;dr Can I wrap an IO instance, and pass all calls to 'write' to the
>> > wrapped instance's version?
>> >
>> > I am entirely capable and aware of other approaches, so while I do
>> > appreciate suggestions of alternative approaches, I am specifically
>> > wondering if there is some mechanism that I'm missing that easily
>> > overcomes this.
>> I think you only need to provide write(s::NewIOType, x::UInt8). All
>> higher-level write() methods will automatically use it.
>>
>>
>> Regards
>>
>
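
Putting that suggestion together, a minimal sketch (Julia 0.4; initializing 
some_other_stuff to zero is a placeholder of mine):

import Base: write

type NewIOType <: IO
    buffer::IOBuffer
    some_other_stuff::Int

    NewIOType() = new(IOBuffer(), 0)
end

# Forward only the byte-level method; Base's higher-level write()
# methods (strings, numbers, arrays, ...) fall back to it, so the
# ambiguity warnings disappear.
write(io::NewIOType, x::UInt8) = write(io.buffer, x)

io = NewIOType()
write(io, "hello")
takebuf_string(io.buffer)   # "hello"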


Re: [julia-users] regression from 0.43 to 0.5dev, and back to 0.43 on fedora23

2016-04-06 Thread Milan Bouchet-Valat
On Wednesday, 6 April 2016 at 17:02 +0200, Johannes Wagner wrote:
> 
> > 
> > On 6 Apr  2016, at 4:46 PM, Milan Bouchet-Valat  wrote:
> > 
> > On Wednesday, 6 April 2016 at 07:25 -0700, Johannes Wagner wrote:
> > > 
> > > And one last update: I disabled hyperthreading on the i7, and now it
> > > performs as expected.
> > > 
> > > i7 no HT:
> > > 
> > > 100 loops, best of 3: 879.57 ns per loop
> > > 10 loops, best of 3: 9.88 µs per loop
> > > 100 loops, best of 3: 4.46 ms per loop
> > > 1 loops, best of 3: 69.89 µs per loop
> > > 1 loops, best of 3: 26.67 µs per loop
> > > 10 loops, best of 3: 95.08 ms per loop
> > > 
> > > i7 with HT:
> > > 
> > > 100 loops, best of 3: 871.68 ns per loop
> > > 1 loops, best of 3: 10.84 µs per loop
> > > 100 loops, best of 3: 5.19 ms per loop
> > > 1 loops, best of 3: 71.35 µs per loop
> > > 1 loops, best of 3: 26.65 µs per loop
> > > 1 loops, best of 3: 159.99 ms per loop
> > > 
> > > So all calls inside the loop are the same speed, but the whole loop,
> > > with identical assembly code, is ~60% slower if HT is enabled. Where
> > > can this problem arise from, then? LLVM? Or thread pinning in the OS?
> > > Probably not a Julia problem then...
> > Indeed, in the last assembly output you sent, there are no differences
> > between i5 and i7 (as expected). So this isn't Julia's nor LLVM's
> > fault. No idea whether there might be an issue with the CPU itself, but
> > it's quite surprising.
> I ran it on a 2nd i7 machine. Same behavior, so definitely not a faulty
> CPU. Do you have any other ideas what to do? Just leaving it as is and
> using Julia with hyperthreading disabled is not really
> satisfactory...
Sorry, I'm clueless. You could ask on forums dedicated to CPUs or on
Intel forums. You could also try with a different OS, just in case.



Regards

> > 
> > Regards
> > 
> > 
> > > 
> > > > 
> > > > On Tuesday, 5 April 2016 at 10:18 -0700, Johannes Wagner wrote:
> > > > > 
> > > > > hey Milan, 
> > > > > so consider the following code: 
> > > > >  
> > > > > Pkg.clone("git://github.com/kbarbary/TimeIt.jl.git") 
> > > > > using TimeIt 
> > > > >  
> > > > > v = rand(3) 
> > > > > r = rand(6000,3) 
> > > > > x = linspace(0.0, 10.0, 500) * (v./sqrt(sumabs2(v)))' 
> > > > >  
> > > > > dotprods = r * x[2,:] 
> > > > > imexp= cis(dotprods) 
> > > > > sumprod  = sum(imexp) * sum(conj(imexp)) 
> > > > >  
> > > > > f(r, x) = r * x[2,:] 
> > > > > g(r, x) = r * x' 
> > > > > h(imexp)= sum(imexp) * sum(conj(imexp)) 
> > > > >  
> > > > > function s(r, x) 
> > > > > result = zeros(size(x,1)) 
> > > > > for i = 1:size(x,1) 
> > > > > imexp= cis(r * x[i,:]) 
> > > > > result[i]= sum(imexp) * sum(conj(imexp)) 
> > > > > end 
> > > > > return result 
> > > > > end 
> > > > >  
> > > > > @timeit zeros(size(x,1)) 
> > > > > @timeit f(r,x) 
> > > > > @timeit g(r,x) 
> > > > > @timeit cis(dotprods) 
> > > > > @timeit h(imexp) 
> > > > > @timeit s(r,x) 
> > > > >  
> > > > > @code_native f(r,x) 
> > > > > @code_native g(r,x) 
> > > > > @code_native cis(dotprods) 
> > > > > @code_native h(imexp) 
> > > > > @code_native s(r,x) 
> > > > >  
> > > > > and I attached the output of the last @code_native s(r,x) as
> > > > text 
> > > > > 
> > > > > files for the binary tarball, as well as the latest nalimilan
> > > > update. 
> > > > > 
> > > > > For the whole function s, the exported code looks actually the
> > > > same 
> > > > > 
> > > > > everywhere. 
> > > > > But s(r,x) is the one that is considerably slower on the i7 than
> > > > the 
> > > > > 
> > > > > i5, whereas all the other timed calls are more or less same speed
> > > > on 
> > > > > 
> > > > > i5 and i7. Here are the timings in the same order as above (all
> > > > run 
> > > > > 
> > > > > repeatedly to not have compile time in it for last one): 
> > > > >  
> > > > > i7: 
> > > > > 100 loops, best of 3: 871.68 ns per loop 
> > > > > 1 loops, best of 3: 10.84 µs per loop 
> > > > > 100 loops, best of 3: 5.19 ms per loop 
> > > > > 1 loops, best of 3: 71.35 µs per loop 
> > > > > 1 loops, best of 3: 26.65 µs per loop 
> > > > > 1 loops, best of 3: 159.99 ms per loop 
> > > > >  
> > > > > i5: 
> > > > > 10 loops, best of 3: 1.01 µs per loop 
> > > > > 1 loops, best of 3: 10.93 µs per loop 
> > > > > 100 loops, best of 3: 5.09 ms per loop 
> > > > > 1 loops, best of 3: 75.93 µs per loop 
> > > > > 1 loops, best of 3: 29.23 µs per loop 
> > > > > 1 loops, best of 3: 103.70 ms per loop 
> > > > >  
> > > > > So based on the calls inside s(r,x), the i7 should be faster, but
> > > > the 
> > > > > 
> > > > > whole s(r,x) is slower. Still clueless... And don't know how to 
> > > > > further pin this down... 
> > > > Thanks. I think you got mixed up with the different files, as the 
> > > > versioninfo() output indicates. Anyway, there's enough info to
> > > > check 
> > > > which file corresponds to which Julia version, so that's OK.

[julia-users] Using @pyimport inside a module

2016-04-06 Thread Alex Mellnik
I rely on a small Python module which I am currently calling directly with 
@pyimport in a notebook:

using PyCall
@pyimport datetime
@pyimport TheModule

cursor = TheModule.connect() 


This works fine, but now I would like to wrap the Python module with some 
other functions in a julia package.  I tried 

module TheModuleInJulia

using PyCall
@pyimport TheModule

export geteverything 


function geteverything()

return collect(TheModule.connect())

end


and several variations thereof.  In my main file I can

using PyCall, TheModule


which works fine, but when I  

geteverything()


I get an error saying that TheModule is not defined.  What's the correct 
way to import and use the python module within the julia module? I didn't 
see anything about this in the docs for PyCall. Thanks -A


[julia-users] pmap scheduling and idle workers near the "end" of a job

2016-04-06 Thread Thomas Covert
The manual suggests that pmap(f, lst) will dynamically "feed" elements of 
lst to the function f as each worker completes its previous assignment, and 
in my read of the code for pmap, this is indeed what it does.

However, I have found that, in practice, many of the workers that I spin up 
for pmap tasks are idle for the last, say, half of the total time needed to 
complete the task.  In my pmap usage, it is the case that the complexity of 
the workload varies across elements of lst, so that some elements should 
take a long time to compute (say, 15 minutes on a core of my machine) and 
others a short time (less than 1 minute).  Knowing about this heterogeneity 
and observing this pattern of idle workers after about half of the work is 
done would normally lead me to think that pmap is scheduling workers ahead 
of time, not dynamically.  Some workers will get "lucky" and have easier 
than average workload, and others are unlucky and have harder workload.  At 
the end of the calculation, only the unlucky workers are still working. 
 However, this isn't what pmap is doing, so I'm kinda confused. 

Am I crazy?  The documentation for pmap says that it is scheduling tasks 
dynamically and I am pre-randomizing the order of work in lst so that 
worker 1 doesn't get easier tasks, in expectation, than worker N.  Or is it 
more likely that I've got a bug somewhere?




[julia-users] Re: PyCall Seaborn error

2016-04-06 Thread Etienne DdM
Same problem here. I know for a fact that seaborn is properly set up and 
Julia is pointing to the right installation of Python. Tried it on Windows 
8 and 10 and I get the same error...

On Thursday, February 4, 2016 at 11:05:43 PM UTC-8, St Elmo Wilken wrote:
>
> The website indicates that numpy, scipy, pandas, matplotlib and optionally 
> statsmodels are required. I tried to @pyimport them all before I @pyimport 
> seaborn but that didn't work either...
>


[julia-users] reloading functions

2016-04-06 Thread tcs
Hi Everyone!

It appears as if autoreload is broken for versions past 0.4. I found it 
very convenient to quickly be able to reload a function, which has been 
altered in the file, this way. Is there a way to update changes to 
functions only (without the need to redefine other variables) past 0.4?

Thanks very much for your suggestions. :)


[julia-users] Re: PyCall Seaborn error

2016-04-06 Thread Etienne DdM
I managed to get it working. See 
https://github.com/stevengj/PyCall.jl/issues/87 for the explanations.

On Wednesday, April 6, 2016 at 9:43:20 AM UTC-7, Etienne DdM wrote:
>
> Same problem here. I know for a fact that seaborn is properly set up and 
> Julia is pointing to the right installation of Python. Tried it on Windows 
> 8 and 10 and I get the same error...
>
> On Thursday, February 4, 2016 at 11:05:43 PM UTC-8, St Elmo Wilken wrote:
>>
>> The website indicates that numpy, scipy, pandas, matplotlib and 
>> optionally statsmodels are required. I tried to @pyimport them all before I 
>> @pyimport seaborn but that didn't work either...
>>
>

[julia-users] Re: Julia console with inline graphics?

2016-04-06 Thread Seth
iTerm2 with Compose.jl and TerminalExtensions.jl allows me to use 
GraphPlots.jl to visualize LightGraphs output.

On Saturday, April 2, 2016 at 3:45:22 AM UTC-7, Oliver Schulz wrote:
>
> Hi,
>
> I'm looking for a Julia console with inline graphics (e.g. to display 
> Gadfly plots). There's Jupyter/IJulia, of course, but I saw a picture of 
> something more console-like in the AxisArrays readme (at the end of 
> https://github.com/mbauman/AxisArrays.jl#example-of-currently-implemented-behavior)
>  
> - does anyone know what's been used there?
>
> Cheers,
>
> Oliver
>
>

[julia-users] Re: raytracing in julia

2016-04-06 Thread jw3126
Thanks for the offer to help! It is good to have something boost.julia-style. 
At the moment it looks like I will not have enough time to wrap FireRays 
though.


On Friday, March 25, 2016 at 2:54:01 PM UTC+1, jw3126 wrote:
>
> I need to do things like building a simple geometry which consists of a 
> few polygons, cylinders, spheres and calculate if/where rays hit these 
> objects.
>
> Is there some julia library which does this already? Or some easy to wrap 
> C/Fortran library? Any suggestions?
> I would prefer a solution, which does not depend on vectorization, but can 
> be called efficiently as part of a loop, one ray at a time.
>


Re: [julia-users] Re: Capturing Error messages as strings

2016-04-06 Thread Isaiah Norton
If you can manage to wrap your remote code in `try` blocks, maybe the
following example will help. (I've had to use things like this for remote
debugging in the past):

```
julia> addprocs(4)
4-element Array{Int64,1}:
 2
 3
 4
 5

julia> @everywhere E = Any[]

julia> @everywhere try error("foo") catch err push!(E, string(err, " on ",
myid(), "\n\n", stacktrace())) end

julia> fetch( @spawnat 3 (global E; E[1] ))
"ErrorException(\"foo\") on 3\n\n[ in eval(::Module, ::Any) at boot.jl:237,
[inlined code] from sysimg.jl:11\n in
(::Base.Serializer.__deserialized_types__.##261)() at multi.jl:1502, in
run_work_thunk(::Base.##252#254{Base.CallMsg{:call_fetch}}, ::Bool) at
multi.jl:744, [inlined code] from multi.jl:1038\n in
(::Base.##251#253{Base.CallMsg{:call_fetch},TCPSocket})() at event.jl:46]"
```

On Mon, Apr 4, 2016 at 2:40 AM, Matthew Pearce  wrote:

> Anyone?
>
> At the moment it is very hard to debug parallel work. With issues like
> #14456  and related
> problems, it would be extraordinarily helpful to have access to the full
> error messages from remotes.
>
> I care enough about this to actually try writing code implementing any
> hints / suggestions people might have.
>
>
>
>
>
>
> On Wednesday, March 30, 2016 at 5:53:53 PM UTC+1, Matthew Pearce wrote:
>>
>>
>> Anyone know how to capture error messages as strings? This is for
>> debugging of code run on remote machines, as the trace on the master node
>> appears to be incomplete.
>>
>> Note I am *not* asking about the atexit() behaviour. I am asking about
>> capturing non-fatal errors like sqrt(-1) rather than segfaults etc.
>>
>> Much appreciated
>>
>> Matthew
>>
>


[julia-users] Interfacing C++ function which takes structure and arrays

2016-04-06 Thread Jānis Erdmanis
Hello,

Is anyone using the CxxWrap.jl library? I am trying to figure out a way to 
pass an array to the function. So far I have come up with this nonworking 
code (it makes Julia crash):
#include <cxx_wrap.hpp>
#include <string>


std::string greet(float * mem)
{
   return "hello, world";
}


JULIA_CPP_MODULE_BEGIN(registry)
  cxx_wrap::Module& hello = registry.create_module("Hello");
  hello.method("greet", &greet);
JULIA_CPP_MODULE_END
which is compiled with this makefile:
all: code

JULIA_DIR = $(shell which julia)
IJULIA = $(shell dirname $(shell dirname $(JULIA_DIR)))/include/julia
ICxxWrap = $(shell julia -e 
'print(Pkg.dir("CxxWrap","deps","usr","include"))')
LDIR=$(shell julia -e 'print(Pkg.dir("CxxWrap","deps","usr","lib"))')

LIBS = -lcxx_wrap -ljulia
CFLAGS=-I$(IJULIA) -I$(ICxxWrap) -L$(LDIR)

code: code.cpp
 g++ code.cpp -fPIC -std=c++11 -lcxx_wrap -shared -o libhello.so $(CFLAGS)
and eventually interfaced with
using CxxWrap
wrap_modules(joinpath(".","libhello"))
@show Hello.greet()

My small project needs to interface one function which takes a structure. 
This stores pointers to arrays and other parameters which I would like to 
set up in Julia. So I also need to understand how to set and get values 
from interfaced structs, for example DoubleData from 
https://github.com/barche/CxxWrap.jl/blob/master/deps/src/examples/types.cpp. 
 


[julia-users] What is the correct way to use the type alias Vector and Matrix in multiple dispatch?

2016-04-06 Thread Po Choi

hello(A::Matrix) = 1
hello(A::Vector{Matrix}) = 2
A = randn(3,3);
AA = [randn(3,3) for k in 1:4];
hello(A)
hello(AA)

The output has method error.
julia> hello(A)
1

julia> hello(AA)
ERROR: MethodError: `hello` has no method matching 
hello(::Array{Array{Float64,2},1})


If I write down the types explicitly,
hi(A::Array{Float64,2}) = 1
hi(A::Array{Array{Float64,2},1}) = 2
A = randn(3,3);
AA = [randn(3,3) for k in 1:4];
hi(A)
hi(AA)
The output is what I expect.
julia> hi(A)
1

julia> hi(AA)
2

Am I using Vector and Matrix in a wrong way?


[julia-users] Re: Call for GSoC mentors for proposed projects under the JuliaQuantum org

2016-04-06 Thread cdm

slightly OT ...

is the JuliaQuantum team tracking developments at Microsoft Research with
the Language-Integrated Quantum Operations (LIQUi|>) Simulator ?

   https://github.com/StationQ/Liquid


best,

cdm



On Thursday, March 17, 2016 at 9:18:34 AM UTC-7, Xiaodong Qi wrote:
>
> 
>
> the current members in the org seem pretty busy for the summer, 
>
>
> JuliaQuantum is a pretty small Julia org, but we have successfully 
> supported a GSoC project last year and have most of the fundamental 
> framework of the Julia libraries for quantum science set up in our current 
> scope. Hopefully see your participations in our current J/GSoC issue thread 
> at https://github.com/JuliaQuantum/JuliaQuantum.github.io/issues/32 and 
> contributions for the community in a long run. More information about the 
> org can be found in our website at http://juliaquantum.github.io.
>
> Thanking you,
> Qi
>


Re: [julia-users] What is the correct way to use the type alias Vector and Matrix in multiple dispatch?

2016-04-06 Thread Yichao Yu
On Wed, Apr 6, 2016 at 4:23 PM, Po Choi  wrote:
>
> hello(A::Matrix) = 1
> hello(A::Vector{Matrix}) = 2

http://julia.readthedocs.org/en/latest/manual/types/#parametric-composite-types

Vector{Matrix{Float64}} is not a subtype of Vector{Matrix}
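
A quick REPL check makes the invariance concrete:

julia> Matrix{Float64} <: Matrix
true

julia> Vector{Matrix{Float64}} <: Vector{Matrix}
false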

> A = randn(3,3);
> AA = [randn(3,3) for k in 1:4];
> hello(A)
> hello(AA)
>
> The output has method error.
> julia> hello(A)
> 1
>
> julia> hello(AA)
> ERROR: MethodError: `hello` has no method matching
> hello(::Array{Array{Float64,2},1})
>
>
> If I write down the types explicitly,
> hi(A::Array{Float64,2}) = 1
> hi(A::Array{Array{Float64,2},1}) = 2
> A = randn(3,3);
> AA = [randn(3,3) for k in 1:4];
> hi(A)
> hi(AA)
> The output is what I expect.
> julia> hi(A)
> 1
>
> julia> hi(AA)
> 2
>
> Am I using Vector and Matrix in a wrong way?


[julia-users] Re: pmap scheduling and idle workers near the "end" of a job

2016-04-06 Thread 'Greg Plowman' via julia-users

It's difficult to comment without knowing more detail about numbers of 
workers, their relative speed, number of tasks and their expected 
completion times.

As an extreme example, say you have 4 workers (all of the same speed) and 
2x15-minute tasks and 16x1-minute tasks.

Depending on how this is scheduled, this will take between 15 and 19 
minutes. Optimally:
Worker 1: 15
Worker 2: 15
Worker 3: 1 1 1 1 1 1 1 1 
Worker 4: 1 1 1 1 1 1 1 1

After 8 minutes, workers 3 and 4 will be idle, and remain idle for the 
remaining 7 minutes before workers 1 and 2 finish.

I had a similar problem where I had fast and slow workers, and initially 
split the work into a number of tasks similar to the number of workers.
This left an overhang similar to what you describe.
In my case more granularity helped. Splitting into many tasks so that 
#tasks >> #workers helped.



On Thursday, April 7, 2016 at 2:21:28 AM UTC+10, Thomas Covert wrote:

> The manual suggests that pmap(f, lst) will dynamically "feed" elements of 
> lst to the function f as each worker completes its previous assignment, and 
> in my read of the code for pmap, this is indeed what it does.
>
> However, I have found that, in practice, many of the workers that I spin 
> up for pmap tasks are idle for the last, say, half of the total time needed 
> to complete the task.  In my pmap usage, it is the case that the complexity 
> of the workload varies across elements of lst, so that some elements should 
> take a long time to compute (say, 15 minutes on a core of my machine) and 
> others a short time (less than 1 minute).  Knowing about this heterogeneity 
> and observing this pattern of idle workers after about half of the work is 
> done would normally lead me to think that pmap is scheduling workers ahead 
> of time, not dynamically.  Some workers will get "lucky" and have easier 
> than average workload, and others are unlucky and have harder workload.  At 
> the end of the calculation, only the unlucky workers are still working. 
>  However, this isn't what pmap is doing, so I'm kinda confused. 
>
> Am I crazy?  The documentation for pmap says that it is scheduling tasks 
> dynamically and I am pre-randomizing the order of work in lst so that 
> worker 1 doesn't get easier tasks, in expectation, than worker N.  Or is it 
> more likely that I've got a bug somewhere?
>
>
>

[julia-users] Re: Interfacing C++ function which takes structure and arrays

2016-04-06 Thread Bart Janssens
Hi,

The problem is that the float* type was not mapped yet by CxxWrap. I have 
just added it, so your example should work after you 
Pkg.checkout("CxxWrap") and rebuild. Before rebuilding you may need to 
remove the usr dir in .julia/v0.4/CxxWrap/deps. Also, you need to pass a 
Float32 array to Hello.greet in Julia.

Note that I am adding functionality as-needed, so if you hit other problems 
like this feel free to create an issue on Github.

Cheers,

Bart
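
For reference, a minimal sketch of the corrected Julia side under those 
assumptions (the buffer length is arbitrary):

using CxxWrap
wrap_modules(joinpath(".", "libhello"))

buf = zeros(Float32, 8)   # greet() takes a float*, so pass a Float32 array
@show Hello.greet(buf)    # "hello, world"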


On Wednesday, April 6, 2016 at 9:51:39 PM UTC+2, Jānis Erdmanis wrote:
>
> Hello,
>
> are anyone using CxxWrap.jl library? I am trying to figure out a way to 
> pass array to the function. So far I have came up to to this nonworking 
> (makes julia to crash) code
>


[julia-users] [ANN] LoggedDicts.jl

2016-04-06 Thread jock . lawrie
Hi there,

Introducing LoggedDicts.jl.

A LoggedDict is simply a Dict for which every write is logged to a 
user-defined output.
Handy as a light-weight, intra-process data store in applications/scripts 
that do not require high write frequency (though this is an addressable 
problem).

Comments/criticisms/suggestions welcome.

Cheers,
Jock




[julia-users] Re: Using @pyimport inside a module

2016-04-06 Thread Cedric St-Jean
Hi Alex, it looks like you're doing everything right. Just to make sure: 
does  "TheModuleInJulia" have a final `end` statement? I just tried this 
code in a notebook:

module TheModuleInJulia
using PyCall
@pyimport numpy
export geteverything 

function geteverything()
return collect(numpy.eye(10))
end
end

TheModuleInJulia.geteverything()

It worked for me. Does it work for you?

On Wednesday, April 6, 2016 at 12:18:09 PM UTC-4, Alex Mellnik wrote:
>
> I rely on a small Python module which I am currently calling directly with 
> @pyimport in a notebook:
>
> using PyCall
> @pyimport datetime
> @pyimport TheModule
>
> cursor = TheModule.connect() 
>
>
> This works fine, but now I would like to wrap the Python module with some 
> other functions in a julia package.  I tried 
>
> module TheModuleInJulia
>
> using PyCall
> @pyimport TheModule
>
> export geteverything 
>
>
> function geteverything()
>
> return collect(TheModule.connect())
>
> end
>
>
> and several variations thereof.  In my main file I can
>
> using PyCall, TheModule
>
>
> which works fine, but when I  
>
> geteverything()
>
>
> I get an error saying that TheModule is not defined.  What's the correct 
> way to import and use the python module within the julia module? I didn't 
> see anything about this in the docs for PyCall. Thanks -A
>
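
For precompiled packages, note also the pattern recommended in PyCall's 
README: do the import inside __init__ and store the result in a PyNULL 
placeholder. A minimal sketch, assuming TheModule is importable:

module TheModuleInJulia

using PyCall

const themodule = PyNULL()   # placeholder, filled in when the module loads

function __init__()
    copy!(themodule, pyimport("TheModule"))
end

geteverything() = collect(themodule[:connect]())

export geteverything

end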


[julia-users] Re: What is the correct way to use the type alias Vector and Matrix in multiple dispatch?

2016-04-06 Thread Sisyphuss
You may want to apply the `@vectorize_1arg` macro to `hello`.

http://docs.julialang.org/en/release-0.4/manual/arrays/

On Wednesday, April 6, 2016 at 10:23:16 PM UTC+2, Po Choi wrote:
>
>
> hello(A::Matrix) = 1
> hello(A::Vector{Matrix}) = 2
> A = randn(3,3);
> AA = [randn(3,3) for k in 1:4];
> hello(A)
> hello(AA)
>
> The output has method error.
> julia> hello(A)
> 1
>
> julia> hello(AA)
> ERROR: MethodError: `hello` has no method matching 
> hello(::Array{Array{Float64,2},1})
>
>
> If I write down the types explicitly,
> hi(A::Array{Float64,2}) = 1
> hi(A::Array{Array{Float64,2},1}) = 2
> A = randn(3,3);
> AA = [randn(3,3) for k in 1:4];
> hi(A)
> hi(AA)
> The output is what I expect.
> julia> hi(A)
> 1
>
> julia> hi(AA)
> 2
>
> Am I using Vector and Matrix in a wrong way?
>


[julia-users] Re: [ANN] LoggedDicts.jl

2016-04-06 Thread Cedric St-Jean
Neat! Could you support Logged{Dict}? Then it would work with any 
associative collection.
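
For instance, a bare-bones sketch of such a wrapper (not part of 
LoggedDicts.jl; the names are illustrative):

import Base: setindex!, getindex

immutable Logged{D<:Associative}
    dict::D
    io::IO
end

function setindex!(ld::Logged, v, k)
    println(ld.io, "set ", k, " => ", v)   # log every write
    setindex!(ld.dict, v, k)               # then forward it
end

getindex(ld::Logged, k) = getindex(ld.dict, k)

ld = Logged(Dict{Int,Int}(), STDOUT)
ld[1] = 11   # prints "set 1 => 11" and stores the pair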

On Wednesday, April 6, 2016 at 6:28:51 PM UTC-4, jock@gmail.com wrote:
>
> Hi there,
>
> Introducing LoggedDicts.jl.
>
> A LoggedDict is simply a Dict for which every write is logged to a 
> user-defined output.
> Handy as a light-weight, intra-process data store in applications/scripts 
> that do not require high write frequency (though this is an addressable 
> problem).
>
> Comments/criticisms/suggestions welcome.
>
> Cheers,
> Jock
>
>
>

[julia-users] Re: [ANN] LoggedDicts.jl

2016-04-06 Thread jock . lawrie
Do you have any specific use cases in mind?

I'd rather keep it simple and add features only when there's a compelling 
use case.


On Thursday, April 7, 2016 at 8:46:33 AM UTC+10, Cedric St-Jean wrote:
>
> Neat! Could you support Logged{Dict}? Then it would work with any 
> associative collection.
>
> On Wednesday, April 6, 2016 at 6:28:51 PM UTC-4, jock@gmail.com wrote:
>>
>> Hi there,
>>
>> Introducing LoggedDicts.jl.
>>
>> A LoggedDict is simply a Dict for which every write is logged to a 
>> user-defined output.
>> Handy as a light-weight, intra-process data store in applications/scripts 
>> that do not require high write frequency (though this is an addressable 
>> problem).
>>
>> Comments/criticisms/suggestions welcome.
>>
>> Cheers,
>> Jock
>>
>>
>>

[julia-users] Re: Help with convoluted types and Vararg

2016-04-06 Thread 'Greg Plowman' via julia-users
Pair is a parametric type, and in Julia these are invariant, 
meaning element subtyping does not imply pair subtyping.

In your case, the pair elements are subtypes:

Tuple{Function,Int,Int,Int} <: Tuple{Function,Vararg{Int}} # true
Int <: Int # true

but the Pair is not:

Pair{Tuple{Function,Int,Int,Int}, Int} <: Pair{Tuple{Function,Vararg{Int}}, 
Int} # false
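
One way around this (a sketch of mine, not from the thread) is to make Foo 
parametric on the concrete Pair type, so no conversion is attempted:

type Foo{P<:Pair}
    x::Vector{P}
end

z = [Pair((+,1,5,7), 3), Pair((-,6,5,3,5,8), 1)]
Foo(z)   # P is taken from eltype(z), so no convert() call is involved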


On Wednesday, April 6, 2016 at 4:51:39 AM UTC+10, Seth wrote:

> Hi all,
>
> I have the following on 0.4.6-pre+18:
>
> z = [Pair((+,1,5,7), 3), Pair((-,6,5,3,5,8), 1)]
> type Foo
> x::Array{Pair{Tuple{Function, Vararg{Int}}, Int}}
> end
>
>
> and I'm getting
>
> julia> Foo(z)
> ERROR: MethodError: `convert` has no method matching 
> convert(::Type{Pair{Tuple{Function,Vararg{Int64}},Int64}}, 
> ::Pair{Tuple{Function,Int64,Int64,Int64},Int64})
> This may have arisen from a call to the constructor 
> Pair{Tuple{Function,Vararg{Int64}},Int64}(...),
> since type constructors fall back to convert methods.
> Closest candidates are:
>   Pair{A,B}(::Any, ::Any)
>   call{T}(::Type{T}, ::Any)
>   convert{T}(::Type{T}, ::T)
>  in copy! at abstractarray.jl:310
>  in call at none:2
>
>
> It's probably a stupid oversight, but I'm stuck. Can someone point me to 
> the error?
>


[julia-users] Pulling out residuals and fitted values from lmm in MixedModels

2016-04-06 Thread Christina Castellani
I am having difficulty figuring out how to pull out the residuals and 
fitted values from the result of an lmm in Julia using the package 
MixedModels.jl

In R I would use "residuals(m)" or "fitted(m)", where m is the lme output, 
to get the desired result. 

Any advice would be greatly appreciated. 


[julia-users] Re: pmap scheduling and idle workers near the "end" of a job

2016-04-06 Thread Thomas Covert
It's hard to construct a MWE without the data I am using, but I can give a 
bit more detail.  

There are 20 workers and about 4200 elements in lst, so using your 
terminology, I've got many more tasks than workers.  If lst were sorted by 
complexity (its not, I randomized the order), the complexity of item i is 
roughly cubic in i.

On Wednesday, April 6, 2016 at 4:29:16 PM UTC-5, Greg Plowman wrote:
>
>
> It's difficult to comment without knowing more detail about numbers of 
> workers, their relative speed, number of tasks and their expected 
> completion times.
>
> As an extreme example, say you have 4 workers (all of the same speed) and 
> 2x15-minute tasks and 16x1-minute tasks.
>
> Depending on how this is scheduled, this will take between 15 and 19 
> minutes. Optimally:
> Worker 1: 15
> Worker 2: 15
> Worker 3: 1 1 1 1 1 1 1 1 
> Worker 4: 1 1 1 1 1 1 1 1
>
> After 8 minutes, workers 3 and 4 will be idle, and remain idle for the 
> remaining 7 minutes before workers 1 and 2 finish.
>
> I had a similar problem where I had fast and slow workers, and initially 
> split the work into a number of tasks similar to the number of workers.
> This left an overhang similar to what you describe.
> In my case more granularity helped. Splitting into many tasks so that 
> #tasks >> #workers helped.
>
>
>
> On Thursday, April 7, 2016 at 2:21:28 AM UTC+10, Thomas Covert wrote:
>
>> The manual suggests that pmap(f, lst) will dynamically "feed" elements of 
>> lst to the function f as each worker completes its previous assignment, and 
>> in my read of the code for pmap, this is indeed what it does.
>>
>> However, I have found that, in practice, many of the workers that I spin 
>> up for pmap tasks are idle for the last, say, half of the total time needed 
>> to complete the task.  In my pmap usage, it is the case that the complexity 
>> of the workload varies across elements of lst, so that some elements should 
>> take a long time to compute (say, 15 minutes on a core of my machine) and 
>> others a short time (less than 1 minute).  Knowing about this heterogeneity 
>> and observing this pattern of idle workers after about half of the work is 
>> done would normally lead me to think that pmap is scheduling workers ahead 
>> of time, not dynamically.  Some workers will get "lucky" and have easier 
>> than average workload, and others are unlucky and have harder workload.  At 
>> the end of the calculation, only the unlucky workers are still working. 
>>  However, this isn't what pmap is doing, so I'm kinda confused. 
>>
>> Am I crazy?  The documentation for pmap says that it is scheduling tasks 
>> dynamically and I am pre-randomizing the order of work in lst so that 
>> worker 1 doesn't get easier tasks, in expectation, than worker N.  Or is it 
>> more likely that I've got a bug somewhere?
>>
>>
>>

[julia-users] Re: pmap scheduling and idle workers near the "end" of a job

2016-04-06 Thread 'Greg Plowman' via julia-users
In that case, try sorting the tasks in descending order of complexity (i.e. 
start the longest-running tasks first).
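
A hedged sketch of that reordering (estimated_cost is a hypothetical 
stand-in for whatever predicts an element's runtime; f and lst are from 
your description):

estimated_cost(item) = length(item)^3    # hypothetical cost proxy
order = sortperm([estimated_cost(x) for x in lst], rev=true)
sorted_results = pmap(f, lst[order])     # longest tasks start first
results = similar(sorted_results)
results[order] = sorted_results          # restore the original order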


On Thursday, April 7, 2016 at 9:23:56 AM UTC+10, Thomas Covert wrote:

> It's hard to construct a MWE without the data I am using, but I can give a 
> bit more detail.  
>
> There are 20 workers and about 4200 elements in lst, so using your 
> terminology, I've got many more tasks than workers.  If lst were sorted by 
> complexity (it's not, I randomized the order), the complexity of item i is 
> roughly cubic in i.
>
> On Wednesday, April 6, 2016 at 4:29:16 PM UTC-5, Greg Plowman wrote:
>>
>>
>> It's difficult to comment without knowing more detail about numbers of 
>> workers, their relative speed, number of tasks and their expected 
>> completion times.
>>
>> As an extreme example, say you have 4 workers (all of the same speed) and 
>> 2x15-minute tasks and 16x1-minute tasks.
>>
>> Depending on how this is scheduled, this will take between 15 and 19 
>> minutes. Optimally:
>> Worker 1: 15
>> Worker 2: 15
>> Worker 3: 1 1 1 1 1 1 1 1 
>> Worker 4: 1 1 1 1 1 1 1 1
>>
>> After 8 minutes, workers 3 and 4 will be idle, and remain idle for the 
>> remaining 7 minutes before workers 1 and 2 finish.
>>
>> I had a similar problem where I had fast and slow workers, and initially 
>> split the work into a number of tasks similar to the number of workers.
>> This left an overhang similar to what you describe.
>> In my case more granularity helped. Splitting into many tasks so that 
>> #tasks >> #workers helped.
>>
>>
>>
>> On Thursday, April 7, 2016 at 2:21:28 AM UTC+10, Thomas Covert wrote:
>>
>>> The manual suggests that pmap(f, lst) will dynamically "feed" elements 
>>> of lst to the function f as each worker completes its previous assignment, 
>>> and in my read of the code for pmap, this is indeed what it does.
>>>
>>> However, I have found that, in practice, many of the workers that I spin 
>>> up for pmap tasks are idle for the last, say, half of the total time needed 
>>> to complete the task.  In my pmap usage, it is the case that the complexity 
>>> of the workload varies across elements of lst, so that some elements should 
>>> take a long time to compute (say, 15 minutes on a core of my machine) and 
>>> others a short time (less than 1 minute).  Knowing about this heterogeneity 
>>> and observing this pattern of idle workers after about half of the work is 
>>> done would normally lead me to think that pmap is scheduling workers ahead 
>>> of time, not dynamically.  Some workers will get "lucky" and have easier 
>>> than average workload, and others are unlucky and have harder workload.  At 
>>> the end of the calculation, only the unlucky workers are still working. 
>>>  However, this isn't what pmap is doing, so I'm kinda confused. 
>>>
>>> Am I crazy?  The documentation for pmap says that it is scheduling tasks 
>>> dynamically and I am pre-randomizing the order of work in lst so that 
>>> worker 1 doesn't get easier tasks, in expectation, than worker N.  Or is it 
>>> more likely that I've got a bug somewhere?
>>>
>>>
>>>

[julia-users] Re: Call for GSoC mentors for proposed projects under the JuliaQuantum org

2016-04-06 Thread Xiaodong Qi
Indeed, we have reached out to the QuArC group of Microsoft. I had dinner 
with one of the leaders of QuArC in Feb this year and talked about the ideas 
and vision we share on quantum compilers and quantum simulations to put into 
action in JuliaQuantum. If you follow closely, a GSoC project JuliaQuantum 
supports this year will attempt to implement a framework for quantum 
computing which could be a step closer to LIQUi|>. Martin Roetteler from 
Microsoft has been invited to provide insights and suggestions on 
JuliaQuantum's projects over time. People from Microsoft have also 
unofficially offered software development/conference/training opportunities 
to JuliaQuantum members. Despite the complicated protocols that Microsoft 
and other companies use for open source projects, we will work from there 
to produce the best we can for our community with Microsoft and other 
collaborators in our network. I am not working directly on JuliaQuantum for 
now, but hopefully more people will be joining the org to create new 
opportunities for good. Thanks for your question.

Bests,
Qi

On Wednesday, April 6, 2016 at 2:25:25 PM UTC-6, cdm wrote:
>
>
> slightly OT ...
>
> is the JuliaQuantum team tracking developments at Microsoft Research with
> the Language-Integrated Quantum Operations (LIQUi|>) Simulator ?
>
>https://github.com/StationQ/Liquid
>
>
> best,
>
> cdm
>
>
>
> On Thursday, March 17, 2016 at 9:18:34 AM UTC-7, Xiaodong Qi wrote:
>>
>> 
>>
>> the current members in the org seem pretty busy for the summer, 
>>
>
>>
>> JuliaQuantum is a pretty small Julia org, but we have successfully 
>> supported a GSoC project last year and have most of the fundamental 
>> framework of the Julia libraries for quantum science set up in our current 
>> scope. Hopefully see your participations in our current J/GSoC issue thread 
>> at https://github.com/JuliaQuantum/JuliaQuantum.github.io/issues/32 and 
>> contributions for the community in a long run. More information about the 
>> org can be found in our website at http://juliaquantum.github.io.
>>
>> Thanking you,
>> Qi
>>
>

Re: [julia-users] What is the correct way to use the type alias Vector and Matrix in multiple dispatch?

2016-04-06 Thread Po Choi

Does it make sense to declare a variable with the type `Matrix`?

julia> methods(hello)
# 2 methods for generic function "hello":
hello(A::Array{T,2}) at none:1
hello(A::Array{Array{T,2},1}) at none:1

julia> AA = [randn(3,3) for k in 1:4];

julia> AAA = Matrix[randn(3,3) for k in 1:4];

julia> hello(AA)
ERROR: MethodError: `hello` has no method matching 
hello(::Array{Array{Float64,2},1})

julia> hello(AAA)
2


julia> typeof(AA)
Array{Array{Float64,2},1}

julia> typeof(AAA)
Array{Array{T,2},1}

julia> Array{T,2}
ERROR: UndefVarError: T not defined


I am a little bit confused about the `T`. Why can `T` appear inside `AAA` 
without being declared?


On Wednesday, April 6, 2016 at 1:44:38 PM UTC-7, Yichao Yu wrote:
>
> On Wed, Apr 6, 2016 at 4:23 PM, Po Choi > 
> wrote: 
> > 
> > hello(A::Matrix) = 1 
> > hello(A::Vector{Matrix}) = 2 
>
>
> http://julia.readthedocs.org/en/latest/manual/types/#parametric-composite-types
>  
>
> Vector{Matrix{Float64}} is not a subtype of Vector{Matrix} 
>
> > A = randn(3,3); 
> > AA = [randn(3,3) for k in 1:4]; 
> > hello(A) 
> > hello(AA) 
> > 
> > The output has method error. 
> > julia> hello(A) 
> > 1 
> > 
> > julia> hello(AA) 
> > ERROR: MethodError: `hello` has no method matching 
> > hello(::Array{Array{Float64,2},1}) 
> > 
> > 
> > If I write down the types explicitly, 
> > hi(A::Array{Float64,2}) = 1 
> > hi(A::Array{Array{Float64,2},1}) = 2 
> > A = randn(3,3); 
> > AA = [randn(3,3) for k in 1:4]; 
> > hi(A) 
> > hi(AA) 
> > The output is what I expect. 
> > julia> hi(A) 
> > 1 
> > 
> > julia> hi(AA) 
> > 2 
> > 
> > Am I using Vector and Matrix in a wrong way? 
>


Re: [julia-users] Re: [ANN] LoggedDicts.jl

2016-04-06 Thread Cedric St-Jean
I didn't have anything in mind, just curious. Simplicity is a very laudable
goal, I agree.

On Wed, Apr 6, 2016 at 7:00 PM,  wrote:

> Do you have any specific use cases in mind?
>
> I'd rather keep it simple and add features only when there's a compelling
> use case.
>
>
>
> On Thursday, April 7, 2016 at 8:46:33 AM UTC+10, Cedric St-Jean wrote:
>>
>> Neat! Could you support Logged{Dict}? Then it would work with any
>> associative collection.
>>
>> On Wednesday, April 6, 2016 at 6:28:51 PM UTC-4, jock@gmail.com
>> wrote:
>>>
>>> Hi there,
>>>
>>> Introducing LoggedDicts.jl.
>>>
>>> A LoggedDict is simply a Dict for which every write is logged to a
>>> user-defined output.
>>> Handy as a light-weight, intra-process data store in
>>> applications/scripts that do not require high write frequency (though this
>>> is an addressable problem).
>>>
>>> Comments/criticisms/suggestions welcome.
>>>
>>> Cheers,
>>> Jock
>>>
>>>
>>>


Re: [julia-users] What is the correct way to use the type alias Vector and Matrix in multiple dispatch?

2016-04-06 Thread Andy Ferris
T is a type parameter, introduced in this case in the definition 
of Matrix (as a typealias) and also Array (as a type parameter with the 
same name). In typeof(AAA) it's pulling T out of that definition of the 
typealias. You could have written AAA = Matrix{Float64}[randn(3,3) for k in 
1:4] to pin down T.

Further, parametric types in Julia are not covariant, which means even if A 
<: B, we do NOT have C{A} <: C{B} for a parametric type C. In your case that 
reads: Matrix{Float64} <: Matrix holds, but Vector{Matrix{Float64}} <: 
Vector{Matrix} does not.

To be generic, your function definitions could take a type parameter, like:

hello{T}(A::Vector{Matrix{T}}) = 2

Here we have introduced a new "parametric type" variable "T". It could 
have been named anything. It works since there is some T (==Float64) where 
it will find a match. 

Or, to be specific about the input type, just define:

hello(A::Vector{Matrix{Float64}}) = 2

You don't have to be afraid of using Matrix and Vector, but you do have to 
think about how that might interact with the non-covariant type system. In 
cases like these AFAIK the only way to make a generic function is to 
introduce type parameters (using either Array{T,2} or Matrix{T} should be 
fully equivalent).

Does that help?
Andy
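
Putting that together, a runnable version of the generic option:

hello(A::Matrix) = 1
hello{T}(A::Vector{Matrix{T}}) = 2   # the type parameter makes it generic

AA = [randn(3,3) for k in 1:4]       # Vector{Matrix{Float64}}
hello(AA)                            # 2 -- matched with T == Float64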


On Thursday, April 7, 2016 at 9:58:50 AM UTC+10, Po Choi wrote:
>
>
> Does it make sense to declare a variable with the type `Matrix`?
>
> julia> methods(hello)
> # 2 methods for generic function "hello":
> hello(A::Array{T,2}) at none:1
> hello(A::Array{Array{T,2},1}) at none:1
>
> julia> AA = [randn(3,3) for k in 1:4];
>
> julia> AAA = Matrix[randn(3,3) for k in 1:4];
>
> julia> hello(AA)
> ERROR: MethodError: `hello` has no method matching 
> hello(::Array{Array{Float64,2},1})
>
> julia> hello(AAA)
> 2
>
>
> julia> typeof(AA)
> Array{Array{Float64,2},1}
>
> julia> typeof(AAA)
> Array{Array{T,2},1}
>
> julia> Array{T,2}
> ERROR: UndefVarError: T not defined
>
>
> I am a little bit confused about the `T`. Why can `T` appear inside `AAA` 
> without being declared?
>
>
> On Wednesday, April 6, 2016 at 1:44:38 PM UTC-7, Yichao Yu wrote:
>>
>> On Wed, Apr 6, 2016 at 4:23 PM, Po Choi  wrote: 
>> > 
>> > hello(A::Matrix) = 1 
>> > hello(A::Vector{Matrix}) = 2 
>>
>>
>> http://julia.readthedocs.org/en/latest/manual/types/#parametric-composite-types
>>  
>>
>> Vector{Matrix{Float64}} is not a subtype of Vector{Matrix} 
>>
>> > A = randn(3,3); 
>> > AA = [randn(3,3) for k in 1:4]; 
>> > hello(A) 
>> > hello(AA) 
>> > 
>> > The output has method error. 
>> > julia> hello(A) 
>> > 1 
>> > 
>> > julia> hello(AA) 
>> > ERROR: MethodError: `hello` has no method matching 
>> > hello(::Array{Array{Float64,2},1}) 
>> > 
>> > 
>> > If I write down the types explicitly, 
>> > hi(A::Array{Float64,2}) = 1 
>> > hi(A::Array{Array{Float64,2},1}) = 2 
>> > A = randn(3,3); 
>> > AA = [randn(3,3) for k in 1:4]; 
>> > hi(A) 
>> > hi(AA) 
>> > The output is what I expect. 
>> > julia> hi(A) 
>> > 1 
>> > 
>> > julia> hi(AA) 
>> > 2 
>> > 
>> > Am I using Vector and Matrix in a wrong way? 
>>
>

Re: [julia-users] Re: [ANN] LoggedDicts.jl

2016-04-06 Thread jock . lawrie
The existing support for Dict is flexible enough, but point taken - it 
could be nice.

As an example, suppose the Associative of interest is a PriorityQueue. Then 
we can:

using LoggedDicts
using Base.Collections

ld = LoggedDict("myld", "mydict.log")
set!(ld, "pq", PriorityQueue())
set!(ld, "pq", 1, 11)
set!(ld, "pq", 2, 22)

I suppose your suggestion would enable something like:

ld = LoggedDict("myld", "mydict.log")
set!(ld, 1, 11)
set!(ld, 2, 22)

...which is neater.









On Thursday, April 7, 2016 at 11:23:27 AM UTC+10, Cedric St-Jean wrote:
>
> I didn't have anything in mind, just curious. Simplicity is a very 
> laudable goal, I agree.
>
> On Wed, Apr 6, 2016 at 7:00 PM, > wrote:
>
>> Do you have any specific use cases in mind?
>>
>> I'd rather keep it simple and add features only when there's a compelling 
>> use case.
>>
>>
>>
>> On Thursday, April 7, 2016 at 8:46:33 AM UTC+10, Cedric St-Jean wrote:
>>>
>>> Neat! Could you support Logged{Dict}? Then it would work with any 
>>> associative collection.
>>>
>>> On Wednesday, April 6, 2016 at 6:28:51 PM UTC-4, jock@gmail.com 
>>> wrote:

 Hi there,

 Introducing LoggedDicts.jl.

 A LoggedDict is simply a Dict for which every write is logged to a 
 user-defined output.
 Handy as a light-weight, intra-process data store in 
 applications/scripts that do not require high write frequency (though this 
 is an addressable problem).

 Comments/criticisms/suggestions welcome.

 Cheers,
 Jock



>

[julia-users] strategy design pattern

2016-04-06 Thread Tomas Lycken
Multiple dispatch is actually really well suited for this. The idiomatic 
approach in Julia is quite similar to your C++ example, at least conceptually:

immutable Foo end
immutable Bar end

alg(::Type{Foo}) = println("I'm using foo") 
alg(::Type{Bar}) = println("I'm using bar") 

strategy = Foo # or Bar 
alg(strategy) 

// T
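
The same pattern extends to strategies that carry parameters: dispatch on 
instances instead of types (a sketch; the names are illustrative):

immutable Damped
    factor::Float64
end
immutable Undamped end

step(s::Damped, x) = s.factor * x   # strategy state lives in the instance
step(::Undamped, x) = x

strategy = Damped(0.5)
step(strategy, 10.0)   # 5.0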