Re: [julia-users] zero cost subarray?

2015-04-19 Thread René Donner
What about something like unsafe_updateview!(view, indices...) ?

It could be used like this (pseudocode):

  view = unsafe_view(data, 1, 1, :)  # to construct / allocate
  for i in ..., j in ...
unsafe_updateview!(view, i, j, :)  
# use view
  end

In the trivial case of unsafe_view(data, :, :, i) this would boil down to a 
single pointer update. Of course passing around these views outside of their 
scope is rather discouraged. I use this pattern a lot and it proved to be very 
handy / fast.
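The pseudocode above can be made concrete. Below is a minimal sketch in 0.3/0.4-era syntax; every name in it (`UnsafeDim3View`, `unsafe_view`, `unsafe_updateview!`) is illustrative rather than an existing API, and the parent array must stay rooted for as long as the raw pointer is in use.

```julia
# Illustrative only: a raw-pointer "view" of the (i, j, :) slice of a 3-d
# array. Moving it to another (i, j) is a single pointer update.
type UnsafeDim3View{T} <: AbstractVector{T}
    parent::Array{T,3}  # kept so the array stays rooted while in use
    ptr::Ptr{T}         # first element of the current slice
    stride::Int         # element stride along dimension 3
end

Base.size(v::UnsafeDim3View) = (size(v.parent, 3),)
Base.getindex{T}(v::UnsafeDim3View{T}, k::Int) =
    unsafe_load(v.ptr, (k - 1) * v.stride + 1)
Base.setindex!{T}(v::UnsafeDim3View{T}, x, k::Int) =
    unsafe_store!(v.ptr, convert(T, x), (k - 1) * v.stride + 1)

unsafe_view{T}(a::Array{T,3}, i::Int, j::Int, ::Colon) =
    UnsafeDim3View{T}(a, pointer(a, sub2ind(size(a), i, j, 1)),
                      size(a, 1) * size(a, 2))

# Repointing the existing view allocates nothing.
function unsafe_updateview!(v::UnsafeDim3View, i::Int, j::Int, ::Colon)
    v.ptr = pointer(v.parent, sub2ind(size(v.parent), i, j, 1))
    v
end
```

Usage matches the pseudocode: construct once with `unsafe_view(data, 1, 1, :)`, then call `unsafe_updateview!(view, i, j, :)` inside the loop.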



On 20.04.2015, at 02:08, Dahua Lin wrote:

> My benchmark shows that element indexing has been as fast as it can be for 
> array views (or subarrays in Julia 0.4). 
> 
> Now the problem is actually the construction of views/subarrays. To optimize 
> the overhead of this part, the compiler may need to introduce additional 
> optimization.
> 
> Dahua 
> 
> 
> On Monday, April 20, 2015 at 6:39:35 AM UTC+8, Sebastian Good wrote:
> —track-allocation still requires guesswork, as optimizations can move the 
> allocation to a different place than you would expect.
> On April 19, 2015 at 4:36:19 PM, Peter Brady (peter...@gmail.com) wrote:
> 
>> So I discovered the --track-allocation option and now I am really confused:
>> 
>> Here's my session:
>> 
>> $ julia --track-allocation=all
>>                _
>>    _       _ _(_)_     |  A fresh approach to technical computing
>>   (_)     | (_) (_)    |  Documentation: http://docs.julialang.org
>>    _ _   _| |_  __ _   |  Type "help()" for help.
>>   | | | | | | |/ _` |  |
>>   | | |_| | | | (_| |  |  Version 0.3.8-pre+13 (2015-04-17 18:08 UTC)
>>  _/ |\__'_|_|_|\__'_|  |  Commit 0df962d* (2 days old release-0.3)
>> |__/                   |  x86_64-redhat-linux
>> 
>> julia> include("test.jl")
>> test_all (generic function with 1 method)
>> 
>> julia> test_unsafe(5)
>> 
>> And here's the relevant part of the resulting test.jl.mem file.  Note that I 
>> commented out some calls to 'size' and replaced with the appropriate 
>> hard-coded values but the resulting allocation is the same... Can anyone 
>> shed some light on this while I wait for 0.4 to compile?
>> 
>>         - function update(a::AbstractArray, idx, off)
>>   8151120     for i=1:320 #size(a, idx)
>>         0         a[i] = -10*off+i
>>         -     end
>>         0     a
>>         - end
>>         - 
>>         - function setk_UnSafe{T}(a::Array{T,3})
>>       760     us = UnsafeSlice(a, 3)
>>         0     for j=1:size(a,2),i=1:size(a,1)
>>   8151120         us.start = (j-1)*320+i #size(a,1)+i
>>         -         #off = sub2ind(size(a), i, j, 1)
>>         0         update(us, 3, us.start)
>>         -     end
>>         0     a
>>         - end
>>         - function test_unsafe(n)
>>         0     a = zeros(Int, (320, 320, 320))
>>         -     # warmup
>>         0     setk_UnSafe(a);
>>         0     clear_malloc_data()
>>         -     #@time (
>>         0     for i=1:n; setk_UnSafe(a); end
>>         - end
>> 
>> 
>> On Sunday, April 19, 2015 at 2:21:56 PM UTC-6, Peter Brady wrote:
>> @Dahua, thanks for adding an unsafeview!  I appreciate how quickly this 
>> community responds.
>> 
>> I've added the following function to my test.jl script
>> function setk_unsafeview{T}(a::Array{T,3})
>>     for j=1:size(a,2),i=1:size(a,1)
>>         off = sub2ind(size(a), i, j, 1)
>>         update(unsafe_view(a, i, j, :), 3, off)
>>     end
>>     a
>> end
>>  But I'm not seeing the large increase in performance I was expecting.  My 
>> timings are now
>> 
>> julia> test_all(5);
>> test_stride
>> elapsed time: 2.156173128 seconds (0 bytes allocated)
>> test_view
>> elapsed time: 9.30964534 seconds (94208000 bytes allocated, 0.47% gc time)
>> test_unsafe
>> elapsed time: 2.169307471 seconds (16303000 bytes allocated)
>> test_unsafeview
>> elapsed time: 8.955876793 seconds (90112000 bytes allocated, 0.41% gc time)
>> 
>> To be fair, I am cheating a bit with my custom 'UnsafeSlice' since I make 
>> only one instance and simply update the offset on each iteration.  If I make 
>> it immutable and create a new instance on every iteration (as I do for the 
>> view and unsafeview), things slow down a little and the allocation goes 
>> south:
>> 
>> julia> test_all(5);
>> test_stride
>> elapsed time: 2.159909265 seconds (0 bytes allocated)
>> test_view
>> elapsed time: 9.029025282 seconds (94208000 bytes allocated, 0.43% gc time)
>> test_unsafe
>> elapsed time: 2.621667854 seconds (114606240 bytes allocated, 2.41% gc time)
>> test_unsafeview
>> elapsed time: 8.888434466 seconds (90112000 bytes allocated, 0.44% gc time)
>> 
>> These are all with 0.3.8-pre.  I'll try compiling master and see what 
>> happens.  I'm still confused about why allocating a single type with a 
>> pointer, 2 ints and a tuple costs so much memory though.
>> 
>> 
>> 
>> On Sunday, April 19, 2015 at 11:38:17 AM UTC-6, Tim Holy wrote:
>> It's not just escape analysis, as this (new) issue demonstrates:
>> https://github.com/JuliaLang/julia/issues/10899

[julia-users] 4º Julia Meetup, México D.F.

2015-04-19 Thread Ismael VC
If you live in Mexico City, the metropolitan area, or just plan to visit us 
this weekend, you are welcome next Saturday, May 9, starting at 11:00 am, 
at the *Laboratorio Libre IV, 2nd floor, Physics Department, Faculty of 
Science, University City, UNAM*!

Details: http://www.meetup.com/julialang-mx


Re: [julia-users] Re: Tip: use eachindex when iterating over arrays

2015-04-19 Thread Christian Peel
Thanks for the PSA; I'd enjoy more of them.

On Sun, Apr 19, 2015 at 7:56 PM, Dahua Lin  wrote:

> Thanks for the great work!
>
> Dahua
>
>
> On Monday, April 20, 2015 at 9:47:13 AM UTC+8, Tim Holy wrote:
>>
>> For those of you wanting to write code that will perform well on different
>> AbstractArray types, starting with julia 0.4 it will be recommended that
>> you should typically write
>>
>> for i in eachindex(A)
>>     # do something with i and/or A[i]
>> end
>>
>> rather than
>>
>> for i = 1:length(A)
>>     # do something with i and/or A[i]
>> end
>>
>> The syntax
>>
>> for a in A
>>     # do something with a
>> end
>>
>> is unchanged.
>>
>> If you're using julia 0.3, the Compat package (starting with version
>> 0.4.1) defines `eachindex(A) = 1:length(A)`, so if you're willing to use
>> Compat you can already start using this syntax.
>>
>> This will make a difference, in julia 0.4, when indexing arrays for which
>> a single linear index is inefficient---in such cases, `i` will be a
>> multidimensional index object. You can still say `A[i]`, and it will
>> likely be several times faster than if `i` were an integer. In contrast,
>> if `A` is an array for which linear indexing is fast, then
>> `eachindex(A) = 1:length(A)` as previously.
>>
>> You can read more about this in the documentation for multidimensional
>> arrays in julia 0.4:
>> http://docs.julialang.org/en/latest/manual/arrays/
>>
>> This public service announcement has been sponsored by the Department of
>> Arrays and Array Indexing.
>>
>> Best,
>> --Tim
>>
>>


-- 
chris.p...@ieee.org


[julia-users] Re: Tip: use eachindex when iterating over arrays

2015-04-19 Thread Dahua Lin
Thanks for the great work!

Dahua

On Monday, April 20, 2015 at 9:47:13 AM UTC+8, Tim Holy wrote:
>
> For those of you wanting to write code that will perform well on different
> AbstractArray types, starting with julia 0.4 it will be recommended that
> you should typically write
>
> for i in eachindex(A)
>     # do something with i and/or A[i]
> end
>
> rather than
>
> for i = 1:length(A)
>     # do something with i and/or A[i]
> end
>
> The syntax
>
> for a in A
>     # do something with a
> end
>
> is unchanged.
>
> If you're using julia 0.3, the Compat package (starting with version
> 0.4.1) defines `eachindex(A) = 1:length(A)`, so if you're willing to use
> Compat you can already start using this syntax.
>
> This will make a difference, in julia 0.4, when indexing arrays for which
> a single linear index is inefficient---in such cases, `i` will be a
> multidimensional index object. You can still say `A[i]`, and it will
> likely be several times faster than if `i` were an integer. In contrast,
> if `A` is an array for which linear indexing is fast, then
> `eachindex(A) = 1:length(A)` as previously.
>
> You can read more about this in the documentation for multidimensional
> arrays in julia 0.4:
> http://docs.julialang.org/en/latest/manual/arrays/
>
> This public service announcement has been sponsored by the Department of
> Arrays and Array Indexing.
>
> Best,
> --Tim
>
>

[julia-users] Re: Same Pkg.dir(), different package?

2015-04-19 Thread Seth
Turns out that the same code was being loaded; the issue was that the error 
was in compilation, not execution (another victim of the tupocalypse), so 
the debug statements weren't even given a chance to print anything. 

On Sunday, April 19, 2015 at 5:04:00 PM UTC-7, Seth wrote:
>
> Hi,
>
> I have two julia binaries: one in /usr/local/bin (5-day-old master), and 
> one in /Users/seth/dev/julia/julia/usr/bin/julia (this is the latest master 
> from today). I'm seeing some very weird behavior: when I do a 
> Pkg.dir("LightGraphs") in both, I get "/Users/seth/.julia/v0.4/LightGraphs
> ", but when I use a function in the package, different code is being 
> executed.
>
> Things that may be significant:
>
> /Users/seth/.julia/v0.4/LightGraphs is a symlink to 
> /Users/seth/dev/julia/wip/LightGraphs
>
> The code I'm executing is a separate module within LightGraphs ("module 
> AStar") and the function (a_star) is exported:
>
> module AStar
>
>
> using LightGraphs
> using Base.Collections
> using Compat
>
> export a_star
> ...
>
> I changed a_star to print some debugging information (using both info() 
> and println()). In the 5-day-old master REPL (/usr/local/bin), the 
> debugging info is displayed when I invoke a_star(). In the new master 
> REPL (/Users/seth/julia/julia/usr/bin), the debugging info is not printed, 
> nor are any changes  I make to astar.jl reflected when exiting and 
> restarting the REPL and then issuing "using LightGraphs".
>
> This is probably a very simple mistake on my part but I'm too close to it. 
> Could someone please point out what I'm doing wrong?
>
>
>

[julia-users] Re: How to fix package after breaking change in 0.4?

2015-04-19 Thread Seth
Fixed. The issue was that the PQ was being generated based on version, both 
branches of the conditional that tested against VERSION were being 
compiled, and I introduced the @compat syntax only in the branch that 
actually needed it. That meant that the other branch (applicable to 0.3) 
was failing compile. Sorry for the bother.



On Sunday, April 19, 2015 at 6:20:28 PM UTC-7, Seth wrote:
>
> Sorry about this: please don't disregard. It is possible that the 
> TypeError here is being generated during compilation, not during execution, 
> in which case I still need help :)
>
>
>
> On Sunday, April 19, 2015 at 4:54:08 PM UTC-7, Seth wrote:
>>
>> Disregard - this appears to be another issue. Will post separately.
>>
>> On Sunday, April 19, 2015 at 4:17:42 PM UTC-7, Seth wrote:
>>>
>>> Thanks, Matt - this is very helpful.
>>>
>>> I'm running into a problem with PriorityQueue in Base.Collections, 
>>> though. I changed
>>>
>>>  PriorityQueue((Float64,Array{Edge,1},Int), Float64) 
>>>
>>> to
>>>
>>>  
>>> PriorityQueue(@compat(Tuple{Float64,Array{Edge,1},Int}), Float64)
>>>
>>> and am getting an error:
>>>
>>> ERROR: TypeError: apply_type: in PriorityQueue, expected Type{T}, got 
>>> Tuple{DataType,DataType,DataType}
>>>
>>> What am I doing wrong?
>>>
>>>
>>> On Sunday, April 19, 2015 at 2:17:26 PM UTC-7, Matt Bauman wrote:

 I think this is what you're after (for Foo = Int and Bar = Float64):

 julia> Tuple{Int,Float64}[]
 0-element Array{Tuple{Int64,Float64},1}


 julia> push!(ans, (1, 2.))
 1-element Array{Tuple{Int64,Float64},1}:
  (1,2.0)

 Documentation is unfortunately still in the process of being updated. 
  Basically, anywhere you had a tuple of types, you now must write 
 `Tuple{Int, Float64}` instead of `(Int, Float64)`.  In cases where you had 
 a vararg tuple specification, you now write `Tuple{Int, Vararg{Float64}}` 
 instead of `(Int, Float64...)`.  That latter vararg syntax is still up for 
 debate.

 On the plus side, you no longer need to work around constructing tuples 
 by splatting: (1, (2,3)…) now works as you would expect it to.  And 
 there's 
 no longer a strange type/value duality to ().
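Matt's rules can be condensed into a small before/after sketch (0.4 syntax, with the old 0.3 forms in comments; the `edgelist` line is just the example from this thread):

```julia
# 0.3 (old)                  0.4 (new)
# (Int, Float64)        ->   Tuple{Int, Float64}
# (Int, Float64...)     ->   Tuple{Int, Vararg{Float64}}
# Vector{(T, T)}        ->   Vector{Tuple{T, T}}

# An empty vector of tuples, and pushing into it:
pairs = Tuple{Int,Float64}[]
push!(pairs, (1, 2.0))   # pairs is now [(1, 2.0)]

# Code that must also run on 0.3 can wrap the new form in Compat's macro:
#     using Compat
#     edgelist::Vector{@compat(Tuple{T,T})}
```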

 On Sunday, April 19, 2015 at 5:06:00 PM UTC-4, Seth wrote:
>
> Following up:
>
> How does one now write
>
> foo = (Foo, Bar)[]
>
> ?
>
> Sorry for all the questions here. I really don't understand the 
> changes that were made and I'd like to get my package working again as 
> quickly as possible.
>
> Are there docs anywhere (written for novices, that is) on what changed 
> and how to adapt?
>
>
> On Sunday, April 19, 2015 at 12:09:27 PM UTC-7, Tony Kelman wrote:
>>
>> That will cause the code to not work on 0.3. To get code that works 
>> on both 0.3 and 0.4, use the Compat.jl package, and
>>
>>  function _make_simple_undirected_graph{T<:Integer}(n::T, 
>> edgelist::Vector{@compat(Tuple{T,T})}) 
>>
>>
>> On Sunday, April 19, 2015 at 11:58:42 AM UTC-7, Avik Sengupta wrote:
>>>
>>>
>>> Try this: 
>>>
>>>  function _make_simple_undirected_graph{T<:Integer}(n::T, 
>>> edgelist::Vector{Tuple{T,T}}) 
>>>
>>> On Monday, 20 April 2015 00:18:33 UTC+5:30, Seth wrote:

 Could someone please explain what's going on here and what I need 
 to do to fix my package with the latest 0.4 tuple changes?

 Here's the error (from pkg.julialang.org):

 ERROR: LoadError: LoadError: LoadError: TypeError: apply_type: in 
 alias, expected Type{T}, got Tuple{TypeVar,TypeVar}
  in include at ./boot.jl:250
  in include_from_node1 at ./loading.jl:129
  in include at ./boot.jl:250
  in include_from_node1 at ./loading.jl:129
  in reload_path at ./loading.jl:153
  in _require at ./loading.jl:68
  in require at ./loading.jl:51
  in include at ./boot.jl:250
  in include_from_node1 at loading.jl:129
  in process_options at ./client.jl:299
  in _start at ./client.jl:398
 while loading 
 /home/vagrant/testpkg/v0.4/LightGraphs/src/smallgraphs.jl, in 
 expression starting on line 120
 while loading 
 /home/vagrant/testpkg/v0.4/LightGraphs/src/LightGraphs.jl, in 
 expression starting on line 93
 while loading /vagrant/nightlyAL/PKGEVAL_LightGraphs_using.jl, in 
 expression starting on line 4


 Here's the line in question:

 function _make_simple_undirected_graph{T<:Integer}(n::T, edgelist::
 Vector{(T,T)})

 I confess to not yet fully understanding the new change to tuples, 
 and I'm lost as to how to fix my code to comply with the new rules.

 Thanks.

>>>

[julia-users] Re: Parsing 12-hour Clock Timestamps

2015-04-19 Thread WooKyoung Noh
Hello, I've written some code to test with it,
by changing base/dates/io.jl, base/dates/types.jl
https://gist.github.com/wookay/7bf4d8c2afb35920f688

There's Date Field Symbol Table
http://unicode.org/reports/tr35/tr35-6.html#Date_Format_Patterns

thank you.

WooKyoung Noh

On Sunday, April 19, 2015 at 10:40:16 PM UTC+9, Pontus Stenetorp wrote:
>
> Everyone, 
>
> I am currently parsing some data that unfortunately uses a 12-hour 
> clock format.  Reading the docs [1] and the source, [2] I am now 
> fairly certain that `Base.Dates` currently lacks support to parse 
> something like `"Apr 1, 2015 1:02:03 PM"`.  Am I correct in this? 
> Also, what would you recommend as an alternative library? 
>
> Pontus 
>
> [1]: 
> http://docs.julialang.org/en/latest/stdlib/dates/#Dates.Dates.DateFormat 
> [2]: 
> https://github.com/JuliaLang/julia/blob/32aee08d0b833233cd22b7b1de01ae769395b3b8/base/dates/io.jl
>  
>
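Until such support lands, one hedged workaround (an editor's sketch; `parse12` and its format string are assumptions, not an existing API) is to parse everything except the AM/PM marker with a `DateFormat` and then adjust the hour:

```julia
import Base.Dates   # 0.3/0.4-era path; on later versions use `import Dates`

# Hypothetical helper for strings like "Apr 1, 2015 1:02:03 PM".
function parse12(s::AbstractString)
    m = match(r"^(.*)\s+(AM|PM)$", s)
    m === nothing && error("expected a trailing AM/PM marker")
    dt = Dates.DateTime(m.captures[1], Dates.DateFormat("u d, y H:M:S"))
    h = Dates.hour(dt)
    if m.captures[2] == "PM" && h < 12
        dt += Dates.Hour(12)
    elseif m.captures[2] == "AM" && h == 12
        dt -= Dates.Hour(12)   # 12:xx:xx AM means 00:xx:xx
    end
    dt
end

parse12("Apr 1, 2015 1:02:03 PM")   # -> 2015-04-01T13:02:03
```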


[julia-users] Tip: use eachindex when iterating over arrays

2015-04-19 Thread Tim Holy
For those of you wanting to write code that will perform well on different 
AbstractArray types, starting with julia 0.4 it will be recommended that you 
should typically write

for i in eachindex(A)
    # do something with i and/or A[i]
end

rather than

for i = 1:length(A)
    # do something with i and/or A[i]
end

The syntax

for a in A
    # do something with a
end

is unchanged.

If you're using julia 0.3, the Compat package (starting with version 0.4.1) 
defines `eachindex(A) = 1:length(A)`, so if you're willing to use Compat you 
can already start using this syntax.


This will make a difference, in julia 0.4, when indexing arrays for which a 
single linear index is inefficient---in such cases, `i` will be a 
multidimensional index object. You can still say `A[i]`, and it will likely be 
several times faster than if `i` were an integer. In contrast, if `A` is an 
array for which linear indexing is fast, then `eachindex(A) = 1:length(A)` as 
previously.

You can read more about this in the documentation for multidimensional arrays 
in julia 0.4:
http://docs.julialang.org/en/latest/manual/arrays/

This public service announcement has been sponsored by the Department of 
Arrays and Array Indexing.

Best,
--Tim
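A concrete illustration of the tip (an editor's sketch, not part of the original post): a generic reduction written with `eachindex` is efficient for plain `Array`s and, in 0.4, also for array types without fast linear indexing.

```julia
# Generic sum: `i` is a plain integer for Array, and can be a
# multidimensional index object (in 0.4) for array types where computing
# a linear index is expensive, such as some SubArrays.
function mysum(A::AbstractArray)
    s = zero(eltype(A))
    for i in eachindex(A)
        s += A[i]
    end
    s
end

mysum([1 2; 3 4])   # -> 10
```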



[julia-users] Re: How to fix package after breaking change in 0.4?

2015-04-19 Thread Seth
Sorry about this: please don't disregard. It is possible that the TypeError 
here is being generated during compilation, not during execution, in which 
case I still need help :)



On Sunday, April 19, 2015 at 4:54:08 PM UTC-7, Seth wrote:
>
> Disregard - this appears to be another issue. Will post separately.
>
> On Sunday, April 19, 2015 at 4:17:42 PM UTC-7, Seth wrote:
>>
>> Thanks, Matt - this is very helpful.
>>
>> I'm running into a problem with PriorityQueue in Base.Collections, 
>> though. I changed
>>
>>  PriorityQueue((Float64,Array{Edge,1},Int), Float64) 
>>
>> to
>>
>>  
>> PriorityQueue(@compat(Tuple{Float64,Array{Edge,1},Int}), Float64)
>>
>> and am getting an error:
>>
>> ERROR: TypeError: apply_type: in PriorityQueue, expected Type{T}, got 
>> Tuple{DataType,DataType,DataType}
>>
>> What am I doing wrong?
>>
>>
>> On Sunday, April 19, 2015 at 2:17:26 PM UTC-7, Matt Bauman wrote:
>>>
>>> I think this is what you're after (for Foo = Int and Bar = Float64):
>>>
>>> julia> Tuple{Int,Float64}[]
>>> 0-element Array{Tuple{Int64,Float64},1}
>>>
>>>
>>> julia> push!(ans, (1, 2.))
>>> 1-element Array{Tuple{Int64,Float64},1}:
>>>  (1,2.0)
>>>
>>> Documentation is unfortunately still in the process of being updated. 
>>>  Basically, anywhere you had a tuple of types, you now must write 
>>> `Tuple{Int, Float64}` instead of `(Int, Float64)`.  In cases where you had 
>>> a vararg tuple specification, you now write `Tuple{Int, Vararg{Float64}}` 
>>> instead of `(Int, Float64...)`.  That latter vararg syntax is still up for 
>>> debate.
>>>
>>> On the plus side, you no longer need to work around constructing tuples 
>>> by splatting: (1, (2,3)…) now works as you would expect it to.  And there's 
>>> no longer a strange type/value duality to ().
>>>
>>> On Sunday, April 19, 2015 at 5:06:00 PM UTC-4, Seth wrote:

 Following up:

 How does one now write

 foo = (Foo, Bar)[]

 ?

 Sorry for all the questions here. I really don't understand the changes 
 that were made and I'd like to get my package working again as quickly as 
 possible.

 Are there docs anywhere (written for novices, that is) on what changed 
 and how to adapt?


 On Sunday, April 19, 2015 at 12:09:27 PM UTC-7, Tony Kelman wrote:
>
> That will cause the code to not work on 0.3. To get code that works on 
> both 0.3 and 0.4, use the Compat.jl package, and
>
>  function _make_simple_undirected_graph{T<:Integer}(n::T, 
> edgelist::Vector{@compat(Tuple{T,T})}) 
>
>
> On Sunday, April 19, 2015 at 11:58:42 AM UTC-7, Avik Sengupta wrote:
>>
>>
>> Try this: 
>>
>>  function _make_simple_undirected_graph{T<:Integer}(n::T, 
>> edgelist::Vector{Tuple{T,T}}) 
>>
>> On Monday, 20 April 2015 00:18:33 UTC+5:30, Seth wrote:
>>>
>>> Could someone please explain what's going on here and what I need to 
>>> do to fix my package with the latest 0.4 tuple changes?
>>>
>>> Here's the error (from pkg.julialang.org):
>>>
>>> ERROR: LoadError: LoadError: LoadError: TypeError: apply_type: in 
>>> alias, expected Type{T}, got Tuple{TypeVar,TypeVar}
>>>  in include at ./boot.jl:250
>>>  in include_from_node1 at ./loading.jl:129
>>>  in include at ./boot.jl:250
>>>  in include_from_node1 at ./loading.jl:129
>>>  in reload_path at ./loading.jl:153
>>>  in _require at ./loading.jl:68
>>>  in require at ./loading.jl:51
>>>  in include at ./boot.jl:250
>>>  in include_from_node1 at loading.jl:129
>>>  in process_options at ./client.jl:299
>>>  in _start at ./client.jl:398
>>> while loading 
>>> /home/vagrant/testpkg/v0.4/LightGraphs/src/smallgraphs.jl, in 
>>> expression starting on line 120
>>> while loading 
>>> /home/vagrant/testpkg/v0.4/LightGraphs/src/LightGraphs.jl, in 
>>> expression starting on line 93
>>> while loading /vagrant/nightlyAL/PKGEVAL_LightGraphs_using.jl, in 
>>> expression starting on line 4
>>>
>>>
>>> Here's the line in question:
>>>
>>> function _make_simple_undirected_graph{T<:Integer}(n::T, edgelist::
>>> Vector{(T,T)})
>>>
>>> I confess to not yet fully understanding the new change to tuples, 
>>> and I'm lost as to how to fix my code to comply with the new rules.
>>>
>>> Thanks.
>>>
>>

Re: [julia-users] zero cost subarray?

2015-04-19 Thread Sebastian Good
Optimizing the creation of many small structures during execution typically 
comes down to either cleverly eliminating the need to allocate them in the 
first place (via escape analysis, and the like) or making the first generation 
of the garbage collector wickedly fast. I understand both of these are being 
worked.
On April 19, 2015 at 8:08:53 PM, Dahua Lin (linda...@gmail.com) wrote:

My benchmark shows that element indexing has been as fast as it can be for 
array views (or subarrays in Julia 0.4). 

Now the problem is actually the construction of views/subarrays. To optimize 
the overhead of this part, the compiler may need to introduce additional 
optimization.

Dahua 


On Monday, April 20, 2015 at 6:39:35 AM UTC+8, Sebastian Good wrote:
—track-allocation still requires guesswork, as optimizations can move the 
allocation to a different place than you would expect.
On April 19, 2015 at 4:36:19 PM, Peter Brady (peter...@gmail.com) wrote:

So I discovered the --track-allocation option and now I am really confused:

Here's my session:

$ julia --track-allocation=all
               _
   _       _ _(_)_     |  A fresh approach to technical computing
  (_)     | (_) (_)    |  Documentation: http://docs.julialang.org
   _ _   _| |_  __ _   |  Type "help()" for help.
  | | | | | | |/ _` |  |
  | | |_| | | | (_| |  |  Version 0.3.8-pre+13 (2015-04-17 18:08 UTC)
 _/ |\__'_|_|_|\__'_|  |  Commit 0df962d* (2 days old release-0.3)
|__/                   |  x86_64-redhat-linux

julia> include("test.jl")
test_all (generic function with 1 method)

julia> test_unsafe(5)

And here's the relevant part of the resulting test.jl.mem file.  Note that I 
commented out some calls to 'size' and replaced with the appropriate hard-coded 
values but the resulting allocation is the same... Can anyone shed some light 
on this while I wait for 0.4 to compile?

        - function update(a::AbstractArray, idx, off)
  8151120     for i=1:320 #size(a, idx)
        0         a[i] = -10*off+i
        -     end
        0     a
        - end
        - 
       - function setk_UnSafe{T}(a::Array{T,3})
      760     us = UnsafeSlice(a, 3)
        0     for j=1:size(a,2),i=1:size(a,1)
  8151120         us.start = (j-1)*320+i #size(a,1)+i
        -         #off = sub2ind(size(a), i, j, 1)
        0         update(us, 3, us.start)
        -     end
        0     a
        - end
        - function test_unsafe(n)
        0     a = zeros(Int, (320, 320, 320))
        -     # warmup
        0     setk_UnSafe(a);
        0     clear_malloc_data()
        -     #@time (
        0     for i=1:n; setk_UnSafe(a); end
        - end


On Sunday, April 19, 2015 at 2:21:56 PM UTC-6, Peter Brady wrote:
@Dahua, thanks for adding an unsafeview!  I appreciate how quickly this 
community responds.

I've added the following function to my test.jl script
function setk_unsafeview{T}(a::Array{T,3})
    for j=1:size(a,2),i=1:size(a,1)
        off = sub2ind(size(a), i, j, 1)
        update(unsafe_view(a, i, j, :), 3, off)
    end
    a
end
 But I'm not seeing the large increase in performance I was expecting.  My 
timings are now

julia> test_all(5);
test_stride
elapsed time: 2.156173128 seconds (0 bytes allocated)
test_view
elapsed time: 9.30964534 seconds (94208000 bytes allocated, 0.47% gc time)
test_unsafe
elapsed time: 2.169307471 seconds (16303000 bytes allocated)
test_unsafeview
elapsed time: 8.955876793 seconds (90112000 bytes allocated, 0.41% gc time)

To be fair, I am cheating a bit with my custom 'UnsafeSlice' since I make only 
one instance and simply update the offset on each iteration.  If I make it 
immutable and create a new instance on every iteration (as I do for the view 
and unsafeview), things slow down a little and the allocation goes south:

julia> test_all(5);
test_stride
elapsed time: 2.159909265 seconds (0 bytes allocated)
test_view
elapsed time: 9.029025282 seconds (94208000 bytes allocated, 0.43% gc time)
test_unsafe
elapsed time: 2.621667854 seconds (114606240 bytes allocated, 2.41% gc time)
test_unsafeview
elapsed time: 8.888434466 seconds (90112000 bytes allocated, 0.44% gc time)

These are all with 0.3.8-pre.  I'll try compiling master and see what happens.  
I'm still confused about why allocating a single type with a pointer, 2 ints 
and a tuple costs so much memory though.
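One way to probe that question (an editor's sketch in 0.3/0.4-era syntax; `Wrap` is a made-up stand-in for `UnsafeSlice`, and actual allocation counts depend on the Julia version and on inlining):

```julia
# `Wrap` mimics the shape of the custom slice type: a pointer, two Ints,
# and a dims tuple. In 0.3, constructing such a wrapper per iteration can
# be heap-allocated whenever the compiler cannot prove it never escapes.
immutable Wrap{T}
    ptr::Ptr{T}
    start::Int
    stride::Int
    dims::NTuple{3,Int}
end

function rebuild_each_time(a::Array{Float64,3}, n)
    s = 0
    for k = 1:n
        w = Wrap(pointer(a), k, 1, size(a))  # one wrapper per iteration
        s += w.start
    end
    s
end

a = zeros(2, 2, 2)
rebuild_each_time(a, 10)         # warm up the JIT
@time rebuild_each_time(a, 10^6) # compare bytes allocated across versions
```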



On Sunday, April 19, 2015 at 11:38:17 AM UTC-6, Tim Holy wrote:
It's not just escape analysis, as this (new) issue demonstrates:
https://github.com/JuliaLang/julia/issues/10899

--Tim

On Sunday, April 19, 2015 12:33:51 PM Sebastian Good wrote:
> Their size seems much decreased. I’d imagine to totally avoid allocation in
> this benchmark requires an optimization that really has nothing to do with
> subarrays per se. You’d have to do an escape analysis and see that Aj never
> left sumcols. Not easy in practice, since it’s passed to slice and length,
> and you’d have to make sure they didn’t squirrel it away or pass it on to
> someone else. Then you could stack allocate it, or even destructure it.

Re: [julia-users] zero cost subarray?

2015-04-19 Thread Dahua Lin
My benchmark shows that element indexing has been as fast as it can be for 
array views (or subarrays in Julia 0.4). 

Now the problem is actually the construction of views/subarrays. To 
optimize the overhead of this part, the compiler may need to introduce 
additional optimization.

Dahua 


On Monday, April 20, 2015 at 6:39:35 AM UTC+8, Sebastian Good wrote:
>
> —track-allocation still requires guesswork, as optimizations can move the 
> allocation to a different place than you would expect.
>
> On April 19, 2015 at 4:36:19 PM, Peter Brady (peter...@gmail.com) wrote:
>
> So I discovered the --track-allocation option and now I am really 
> confused: 
>
> Here's my session:
>
>  $ julia --track-allocation=all
>                 _
>    _       _ _(_)_     |  A fresh approach to technical computing
>   (_)     | (_) (_)    |  Documentation: http://docs.julialang.org
>    _ _   _| |_  __ _   |  Type "help()" for help.
>   | | | | | | |/ _` |  |
>   | | |_| | | | (_| |  |  Version 0.3.8-pre+13 (2015-04-17 18:08 UTC)
>  _/ |\__'_|_|_|\__'_|  |  Commit 0df962d* (2 days old release-0.3)
> |__/                   |  x86_64-redhat-linux
>
> julia> include("test.jl")
> test_all (generic function with 1 method)
>
> julia> test_unsafe(5)
>  
> And here's the relevant part of the resulting test.jl.mem file.  Note that 
> I commented out some calls to 'size' and replaced with the appropriate 
> hard-coded values but the resulting allocation is the same... Can anyone 
> shed some light on this while I wait for 0.4 to compile?
>
>         - function update(a::AbstractArray, idx, off)
>   8151120     for i=1:320 #size(a, idx)
>         0         a[i] = -10*off+i
>         -     end
>         0     a
>         - end
>         - 
>         - function setk_UnSafe{T}(a::Array{T,3})
>       760     us = UnsafeSlice(a, 3)
>         0     for j=1:size(a,2),i=1:size(a,1)
>   8151120         us.start = (j-1)*320+i #size(a,1)+i
>         -         #off = sub2ind(size(a), i, j, 1)
>         0         update(us, 3, us.start)
>         -     end
>         0     a
>         - end
>         - function test_unsafe(n)
>         0     a = zeros(Int, (320, 320, 320))
>         -     # warmup
>         0     setk_UnSafe(a);
>         0     clear_malloc_data()
>         -     #@time (
>         0     for i=1:n; setk_UnSafe(a); end
>         - end
>   
>
> On Sunday, April 19, 2015 at 2:21:56 PM UTC-6, Peter Brady wrote: 
>>
>> @Dahua, thanks for adding an unsafeview!  I appreciate how quickly this 
>> community responds. 
>>
>> I've added the following function to my test.jl script
>> function setk_unsafeview{T}(a::Array{T,3})
>>     for j=1:size(a,2),i=1:size(a,1)
>>         off = sub2ind(size(a), i, j, 1)
>>         update(unsafe_view(a, i, j, :), 3, off)
>>     end
>>     a
>> end
>>   But I'm not seeing the large increase in performance I was expecting. 
>>  My timings are now
>>
>>  julia> test_all(5);
>> test_stride
>> elapsed time: 2.156173128 seconds (0 bytes allocated)
>> test_view
>> elapsed time: 9.30964534 seconds (94208000 bytes allocated, 0.47% gc time)
>> test_unsafe
>> elapsed time: 2.169307471 seconds (16303000 bytes allocated)
>> test_unsafeview
>> elapsed time: 8.955876793 seconds (90112000 bytes allocated, 0.41% gc 
>> time)
>>  
>> To be fair, I am cheating a bit with my custom 'UnsafeSlice' since I make 
>> only one instance and simply update the offset on each iteration.  If I 
>> make it immutable and create a new instance on every iteration (as I do for 
>> the view and unsafeview), things slow down a little and the allocation goes 
>> south:
>>
>>   julia> test_all(5);
>> test_stride
>> elapsed time: 2.159909265 seconds (0 bytes allocated)
>> test_view
>> elapsed time: 9.029025282 seconds (94208000 bytes allocated, 0.43% gc 
>> time)
>> test_unsafe
>> elapsed time: 2.621667854 seconds (114606240 bytes allocated, 2.41% gc 
>> time)
>> test_unsafeview
>> elapsed time: 8.888434466 seconds (90112000 bytes allocated, 0.44% gc 
>> time)
>>  
>> These are all with 0.3.8-pre.  I'll try compiling master and see what 
>> happens.  I'm still confused about why allocating a single type with a 
>> pointer, 2 ints and a tuple costs so much memory though.
>>
>>
>>
>> On Sunday, April 19, 2015 at 11:38:17 AM UTC-6, Tim Holy wrote: 
>>>
>>> It's not just escape analysis, as this (new) issue demonstrates:
>>>  https://github.com/JuliaLang/julia/issues/10899
>>>
>>> --Tim
>>>
>>> On Sunday, April 19, 2015 12:33:51 PM Sebastian Good wrote:
>>> > Their size seems much decreased. I’d imagine to totally avoid
>>> > allocation in this benchmark requires an optimization that really has
>>> > nothing to do with subarrays per se. You’d have to do an escape
>>> > analysis and see that Aj never left sumcols. Not easy in practice,
>>> > since it’s passed to slice and length, and you’d have to make sure
>>> > they didn’t squirrel it away or pass it on to someone else. Then you
>>> > could stack allocate it, or even destructure it.

[julia-users] Same Pkg.dir(), different package?

2015-04-19 Thread Seth
Hi,

I have two julia binaries: one in /usr/local/bin (5-day-old master), and 
one in /Users/seth/dev/julia/julia/usr/bin/julia (this is the latest master 
from today). I'm seeing some very weird behavior: when I do a 
Pkg.dir("LightGraphs") in both, I get "/Users/seth/.julia/v0.4/LightGraphs", 
but when I use a function in the package, different code is being executed.

Things that may be significant:

/Users/seth/.julia/v0.4/LightGraphs is a symlink to 
/Users/seth/dev/julia/wip/LightGraphs

The code I'm executing is a separate module within LightGraphs ("module 
AStar") and the function (a_star) is exported:

module AStar


using LightGraphs
using Base.Collections
using Compat

export a_star
...

I changed a_star to print some debugging information (using both info() and 
println()). In the 5-day-old master REPL (/usr/local/bin), the debugging 
info is displayed when I invoke a_star(). In the new master REPL 
(/Users/seth/julia/julia/usr/bin), the debugging info is not printed, nor 
are any changes  I make to astar.jl reflected when exiting and restarting 
the REPL and then issuing "using LightGraphs".

This is probably a very simple mistake on my part but I'm too close to it. 
Could someone please point out what I'm doing wrong?
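One way to narrow this down (a diagnostic sketch using only standard 0.3/0.4-era 
functions; not a definitive fix) is to compare what each binary actually resolves:

```julia
# Run in BOTH REPLs and compare the output line by line.
println(VERSION)                          # confirm which julia you're in
println(Pkg.dir())                        # active package directory
println(Pkg.dir("LightGraphs"))           # path Pkg reports for the package
println(realpath(Pkg.dir("LightGraphs"))) # resolve the symlink to its target
println(LOAD_PATH)                        # extra search paths that can shadow it
```

If the two REPLs print different realpaths, LOAD_PATH entries, or Pkg.dir() 
values, one of them is loading a different copy of the package.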




[julia-users] Re: How to fix package after breaking change in 0.4?

2015-04-19 Thread Seth
Disregard - this appears to be another issue. Will post separately.

On Sunday, April 19, 2015 at 4:17:42 PM UTC-7, Seth wrote:
>
> Thanks, Matt - this is very helpful.
>
> I'm running into a problem with PriorityQueue in Base.Collections, 
> though. I changed
>
>  PriorityQueue((Float64,Array{Edge,1},Int), Float64) 
>
> to
>
>  
> PriorityQueue(@compat(Tuple{Float64,Array{Edge,1},Int}), Float64)
>
> and am getting an error:
>
> ERROR: TypeError: apply_type: in PriorityQueue, expected Type{T}, got 
> Tuple{DataType,DataType,DataType}
>
> What am I doing wrong?
>
>
> On Sunday, April 19, 2015 at 2:17:26 PM UTC-7, Matt Bauman wrote:
>>
>> I think this is what you're after (for Foo = Int and Bar = Float64):
>>
>> julia> Tuple{Int,Float64}[]
>> 0-element Array{Tuple{Int64,Float64},1}
>>
>>
>> julia> push!(ans, (1, 2.))
>> 1-element Array{Tuple{Int64,Float64},1}:
>>  (1,2.0)
>>
>> Documentation is unfortunately still in the process of being updated. 
>>  Basically, anywhere you had a tuple of types, you now must write 
>> `Tuple{Int, Float64}` instead of `(Int, Float64)`.  In cases where you had 
>> a vararg tuple specification, you now write `Tuple{Int, Vararg{Float64}}` 
>> instead of `(Int, Float64...)`.  That latter vararg syntax is still up for 
>> debate.
>>
>> On the plus side, you no longer need to work around constructing tuples 
>> by splatting: (1, (2,3)…) now works as you would expect it to.  And there's 
>> no longer a strange type/value duality to ().
>>
>> On Sunday, April 19, 2015 at 5:06:00 PM UTC-4, Seth wrote:
>>>
>>> Following up:
>>>
>>> How does one now write
>>>
>>> foo = (Foo, Bar)[]
>>>
>>> ?
>>>
>>> Sorry for all the questions here. I really don't understand the changes 
>>> that were made and I'd like to get my package working again as quickly as 
>>> possible.
>>>
>>> Are there docs anywhere (written for novices, that is) on what changed 
>>> and how to adapt?
>>>
>>>
>>> On Sunday, April 19, 2015 at 12:09:27 PM UTC-7, Tony Kelman wrote:

 That will cause the code to not work on 0.3. To get code that works on 
 both 0.3 and 0.4, use the Compat.jl package, and

  function _make_simple_undirected_graph{T<:Integer}(n::T, 
 edgelist::Vector{@compat(Tuple{T,T})}) 


 On Sunday, April 19, 2015 at 11:58:42 AM UTC-7, Avik Sengupta wrote:
>
>
> Try this: 
>
>  function _make_simple_undirected_graph{T<:Integer}(n::T, 
> edgelist::Vector{Tuple{T,T}}) 
>
> On Monday, 20 April 2015 00:18:33 UTC+5:30, Seth wrote:
>>
>> Could someone please explain what's going on here and what I need to 
>> do to fix my package with the latest 0.4 tuple changes?
>>
>> Here's the error (from pkg.julialang.org):
>>
>> ERROR: LoadError: LoadError: LoadError: TypeError: apply_type: in alias, 
>> expected Type{T}, got Tuple{TypeVar,TypeVar}
>>  in include at ./boot.jl:250
>>  in include_from_node1 at ./loading.jl:129
>>  in include at ./boot.jl:250
>>  in include_from_node1 at ./loading.jl:129
>>  in reload_path at ./loading.jl:153
>>  in _require at ./loading.jl:68
>>  in require at ./loading.jl:51
>>  in include at ./boot.jl:250
>>  in include_from_node1 at loading.jl:129
>>  in process_options at ./client.jl:299
>>  in _start at ./client.jl:398
>> while loading /home/vagrant/testpkg/v0.4/LightGraphs/src/smallgraphs.jl, 
>> in expression starting on line 120
>> while loading /home/vagrant/testpkg/v0.4/LightGraphs/src/LightGraphs.jl, 
>> in expression starting on line 93
>> while loading /vagrant/nightlyAL/PKGEVAL_LightGraphs_using.jl, in 
>> expression starting on line 4
>>
>>
>> Here's the line in question:
>>
>> function _make_simple_undirected_graph{T<:Integer}(n::T, edgelist::
>> Vector{(T,T)})
>>
>> I confess to not yet fully understanding the new change to tuples, 
>> and I'm lost as to how to fix my code to comply with the new rules.
>>
>> Thanks.
>>
>

[julia-users] Re: How to fix package after breaking change in 0.4?

2015-04-19 Thread Seth
Thanks, Matt - this is very helpful.

I'm running into a problem with PriorityQueue in Base.Collections, 
though. I changed

 PriorityQueue((Float64,Array{Edge,1},Int), Float64) 

to

 
PriorityQueue(@compat(Tuple{Float64,Array{Edge,1},Int}), Float64)

and am getting an error:

ERROR: TypeError: apply_type: in PriorityQueue, expected Type{T}, got Tuple{
DataType,DataType,DataType}

What am I doing wrong?


On Sunday, April 19, 2015 at 2:17:26 PM UTC-7, Matt Bauman wrote:
>
> I think this is what you're after (for Foo = Int and Bar = Float64):
>
> julia> Tuple{Int,Float64}[]
> 0-element Array{Tuple{Int64,Float64},1}
>
>
> julia> push!(ans, (1, 2.))
> 1-element Array{Tuple{Int64,Float64},1}:
>  (1,2.0)
>
> Documentation is unfortunately still in the process of being updated. 
>  Basically, anywhere you had a tuple of types, you now must write 
> `Tuple{Int, Float64}` instead of `(Int, Float64)`.  In cases where you had 
> a vararg tuple specification, you now write `Tuple{Int, Vararg{Float64}}` 
> instead of `(Int, Float64...)`.  That latter vararg syntax is still up for 
> debate.
>
> On the plus side, you no longer need to work around constructing tuples by 
> splatting: (1, (2,3)…) now works as you would expect it to.  And there's no 
> longer a strange type/value duality to ().
>
> On Sunday, April 19, 2015 at 5:06:00 PM UTC-4, Seth wrote:
>>
>> Following up:
>>
>> How does one now write
>>
>> foo = (Foo, Bar)[]
>>
>> ?
>>
>> Sorry for all the questions here. I really don't understand the changes 
>> that were made and I'd like to get my package working again as quickly as 
>> possible.
>>
>> Are there docs anywhere (written for novices, that is) on what changed 
>> and how to adapt?
>>
>>
>> On Sunday, April 19, 2015 at 12:09:27 PM UTC-7, Tony Kelman wrote:
>>>
>>> That will cause the code to not work on 0.3. To get code that works on 
>>> both 0.3 and 0.4, use the Compat.jl package, and
>>>
>>>  function _make_simple_undirected_graph{T<:Integer}(n::T, 
>>> edgelist::Vector{@compat(Tuple{T,T})}) 
>>>
>>>
>>> On Sunday, April 19, 2015 at 11:58:42 AM UTC-7, Avik Sengupta wrote:


 Try this: 

  function _make_simple_undirected_graph{T<:Integer}(n::T, 
 edgelist::Vector{Tuple{T,T}}) 

 On Monday, 20 April 2015 00:18:33 UTC+5:30, Seth wrote:
>
> Could someone please explain what's going on here and what I need to 
> do to fix my package with the latest 0.4 tuple changes?
>
> Here's the error (from pkg.julialang.org):
>
> ERROR: LoadError: LoadError: LoadError: TypeError: apply_type: in alias, 
> expected Type{T}, got Tuple{TypeVar,TypeVar}
>  in include at ./boot.jl:250
>  in include_from_node1 at ./loading.jl:129
>  in include at ./boot.jl:250
>  in include_from_node1 at ./loading.jl:129
>  in reload_path at ./loading.jl:153
>  in _require at ./loading.jl:68
>  in require at ./loading.jl:51
>  in include at ./boot.jl:250
>  in include_from_node1 at loading.jl:129
>  in process_options at ./client.jl:299
>  in _start at ./client.jl:398
> while loading /home/vagrant/testpkg/v0.4/LightGraphs/src/smallgraphs.jl, 
> in expression starting on line 120
> while loading /home/vagrant/testpkg/v0.4/LightGraphs/src/LightGraphs.jl, 
> in expression starting on line 93
> while loading /vagrant/nightlyAL/PKGEVAL_LightGraphs_using.jl, in 
> expression starting on line 4
>
>
> Here's the line in question:
>
> function _make_simple_undirected_graph{T<:Integer}(n::T, edgelist::
> Vector{(T,T)})
>
> I confess to not yet fully understanding the new change to tuples, and 
> I'm lost as to how to fix my code to comply with the new rules.
>
> Thanks.
>


Re: [julia-users] zero cost subarray?

2015-04-19 Thread Sebastian Good
—track-allocation still requires guesswork, as optimizations can move the 
allocation to a different place than you would expect.
On April 19, 2015 at 4:36:19 PM, Peter Brady (petertbr...@gmail.com) wrote:

So I discovered the --track-allocation option and now I am really confused:

Here's my session:

$ julia --track-allocation=all
               _
   _       _ _(_)_     |  A fresh approach to technical computing
  (_)     | (_) (_)    |  Documentation: http://docs.julialang.org
   _ _   _| |_  __ _   |  Type "help()" for help.
  | | | | | | |/ _` |  |
  | | |_| | | | (_| |  |  Version 0.3.8-pre+13 (2015-04-17 18:08 UTC)
 _/ |\__'_|_|_|\__'_|  |  Commit 0df962d* (2 days old release-0.3)
|__/                   |  x86_64-redhat-linux

julia> include("test.jl")
test_all (generic function with 1 method)

julia> test_unsafe(5)

And here's the relevant part of the resulting test.jl.mem file.  Note that I 
commented out some calls to 'size' and replaced with the appropriate hard-coded 
values but the resulting allocation is the same... Can anyone shed some light 
on this while I wait for 0.4 to compile?

        - function update(a::AbstractArray, idx, off)
  8151120     for i=1:320 #size(a, idx)
        0         a[i] = -10*off+i
        -     end
        0     a
        - end
        - 
       - function setk_UnSafe{T}(a::Array{T,3})
      760     us = UnsafeSlice(a, 3)
        0     for j=1:size(a,2),i=1:size(a,1)
  8151120         us.start = (j-1)*320+i #size(a,1)+i
        -         #off = sub2ind(size(a), i, j, 1)
        0         update(us, 3, us.start)
        -     end
        0     a
        - end
        - function test_unsafe(n)
        0     a = zeros(Int, (320, 320, 320))
        -     # warmup
        0     setk_UnSafe(a);
        0     clear_malloc_data()
        -     #@time (
        0     for i=1:n; setk_UnSafe(a); end
        - end


On Sunday, April 19, 2015 at 2:21:56 PM UTC-6, Peter Brady wrote:
@Dahua, thanks for adding an unsafeview!  I appreciate how quickly this 
community responds.

I've added the following function to my test.jl script
function setk_unsafeview{T}(a::Array{T,3})
    for j=1:size(a,2),i=1:size(a,1)
        off = sub2ind(size(a), i, j, 1)
        update(unsafe_view(a, i, j, :), 3, off)
    end
    a
end
 But I'm not seeing the large increase in performance I was expecting.  My 
timings are now

julia> test_all(5);
test_stride
elapsed time: 2.156173128 seconds (0 bytes allocated)
test_view
elapsed time: 9.30964534 seconds (94208000 bytes allocated, 0.47% gc time)
test_unsafe
elapsed time: 2.169307471 seconds (16303000 bytes allocated)
test_unsafeview
elapsed time: 8.955876793 seconds (90112000 bytes allocated, 0.41% gc time)

To be fair, I am cheating a bit with my custom 'UnsafeSlice' since I make only 
one instance and simply update the offset on each iteration.  If I make it 
immutable and create a new instance on every iteration (as I do for the view 
and unsafeview), things slow down a little and the allocation goes south:

julia> test_all(5);
test_stride
elapsed time: 2.159909265 seconds (0 bytes allocated)
test_view
elapsed time: 9.029025282 seconds (94208000 bytes allocated, 0.43% gc time)
test_unsafe
elapsed time: 2.621667854 seconds (114606240 bytes allocated, 2.41% gc time)
test_unsafeview
elapsed time: 8.888434466 seconds (90112000 bytes allocated, 0.44% gc time)

These are all with 0.3.8-pre.  I'll try compiling master and see what happens.  
I'm still confused about why allocating a single type with a pointer, 2 ints 
and a tuple costs so much memory though.



On Sunday, April 19, 2015 at 11:38:17 AM UTC-6, Tim Holy wrote:
It's not just escape analysis, as this (new) issue demonstrates:
https://github.com/JuliaLang/julia/issues/10899

--Tim

On Sunday, April 19, 2015 12:33:51 PM Sebastian Good wrote:
> Their size seems much decreased. I’d imagine to totally avoid allocation in
> this benchmark requires an optimization that really has nothing to do with
> subarrays per se. You’d have to do an escape analysis and see that Aj never
> left sumcols. Not easy in practice, since it’s passed to slice and length,
> and you’d have to make sure they didn’t squirrel it away or pass it on to
> someone else. Then you could stack allocate it, or even destructure it into
> a bunch of scalar mutations on the stack. After eliminating dead code,
> you’d end up with a no-allocation loop much like you’d write by hand. This
> sort of optimization seems to be quite tricky for compilers to pull off,
> but it’s a common pattern in numerical code.
>
> In Julia is such cleverness left entirely to LLVM, or are there optimization
> passes in Julia itself? On April 19, 2015 at 6:49:21 AM, Tim Holy
> (tim@gmail.com) wrote:
>
> Sorry to be slow to chime in here, but the tuple overhaul has landed and
> they are still not zero-cost:
>
> function sumcols(A)
> s = 0.0
> for j = 1:size(A,2)
> Aj = slice(A, :, j)
> for i = 1:length(Aj)
> s += Aj[i]
> end
> end
> s
> end
>
> Even in the la

[julia-users] Re: How to fix package after breaking change in 0.4?

2015-04-19 Thread Matt Bauman
I think this is what you're after (for Foo = Int and Bar = Float64):

julia> Tuple{Int,Float64}[]
0-element Array{Tuple{Int64,Float64},1}


julia> push!(ans, (1, 2.))
1-element Array{Tuple{Int64,Float64},1}:
 (1,2.0)

Documentation is unfortunately still in the process of being updated. 
 Basically, anywhere you had a tuple of types, you now must write 
`Tuple{Int, Float64}` instead of `(Int, Float64)`.  In cases where you had 
a vararg tuple specification, you now write `Tuple{Int, Vararg{Float64}}` 
instead of `(Int, Float64...)`.  That latter vararg syntax is still up for 
debate.

On the plus side, you no longer need to work around constructing tuples by 
splatting: (1, (2,3)…) now works as you would expect it to.  And there's no 
longer a strange type/value duality to ().
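Putting the migration rules above together, a small cross-version sketch 
(assumes Compat.jl, as recommended elsewhere in this thread; 0.3/0.4-era syntax):

```julia
using Compat

# 0.3 wrote a tuple of types; 0.4 writes a Tuple{...} type.
#   0.3 only:       edgelist::Vector{(T,T)}
#   0.4 only:       edgelist::Vector{Tuple{T,T}}
#   both (Compat):  edgelist::Vector{@compat(Tuple{T,T})}

# An empty typed vector of tuples, replacing the old `(Foo, Bar)[]`:
pairs = @compat(Tuple{Int,Float64})[]
push!(pairs, (1, 2.0))   # one-element vector containing (1, 2.0)
```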

On Sunday, April 19, 2015 at 5:06:00 PM UTC-4, Seth wrote:
>
> Following up:
>
> How does one now write
>
> foo = (Foo, Bar)[]
>
> ?
>
> Sorry for all the questions here. I really don't understand the changes 
> that were made and I'd like to get my package working again as quickly as 
> possible.
>
> Are there docs anywhere (written for novices, that is) on what changed and 
> how to adapt?
>
>
> On Sunday, April 19, 2015 at 12:09:27 PM UTC-7, Tony Kelman wrote:
>>
>> That will cause the code to not work on 0.3. To get code that works on 
>> both 0.3 and 0.4, use the Compat.jl package, and
>>
>>  function _make_simple_undirected_graph{T<:Integer}(n::T, 
>> edgelist::Vector{@compat(Tuple{T,T})}) 
>>
>>
>> On Sunday, April 19, 2015 at 11:58:42 AM UTC-7, Avik Sengupta wrote:
>>>
>>>
>>> Try this: 
>>>
>>>  function _make_simple_undirected_graph{T<:Integer}(n::T, 
>>> edgelist::Vector{Tuple{T,T}}) 
>>>
>>> On Monday, 20 April 2015 00:18:33 UTC+5:30, Seth wrote:

 Could someone please explain what's going on here and what I need to do 
 to fix my package with the latest 0.4 tuple changes?

 Here's the error (from pkg.julialang.org):

 ERROR: LoadError: LoadError: LoadError: TypeError: apply_type: in alias, 
 expected Type{T}, got Tuple{TypeVar,TypeVar}
  in include at ./boot.jl:250
  in include_from_node1 at ./loading.jl:129
  in include at ./boot.jl:250
  in include_from_node1 at ./loading.jl:129
  in reload_path at ./loading.jl:153
  in _require at ./loading.jl:68
  in require at ./loading.jl:51
  in include at ./boot.jl:250
  in include_from_node1 at loading.jl:129
  in process_options at ./client.jl:299
  in _start at ./client.jl:398
 while loading /home/vagrant/testpkg/v0.4/LightGraphs/src/smallgraphs.jl, 
 in expression starting on line 120
 while loading /home/vagrant/testpkg/v0.4/LightGraphs/src/LightGraphs.jl, 
 in expression starting on line 93
 while loading /vagrant/nightlyAL/PKGEVAL_LightGraphs_using.jl, in 
 expression starting on line 4


 Here's the line in question:

 function _make_simple_undirected_graph{T<:Integer}(n::T, edgelist::
 Vector{(T,T)})

 I confess to not yet fully understanding the new change to tuples, and 
 I'm lost as to how to fix my code to comply with the new rules.

 Thanks.

>>>

[julia-users] Re: How to fix package after breaking change in 0.4?

2015-04-19 Thread Seth
Following up:

How does one now write

foo = (Foo, Bar)[]

?

Sorry for all the questions here. I really don't understand the changes 
that were made and I'd like to get my package working again as quickly as 
possible.

Are there docs anywhere (written for novices, that is) on what changed and 
how to adapt?


On Sunday, April 19, 2015 at 12:09:27 PM UTC-7, Tony Kelman wrote:
>
> That will cause the code to not work on 0.3. To get code that works on 
> both 0.3 and 0.4, use the Compat.jl package, and
>
>  function _make_simple_undirected_graph{T<:Integer}(n::T, 
> edgelist::Vector{@compat(Tuple{T,T})}) 
>
>
> On Sunday, April 19, 2015 at 11:58:42 AM UTC-7, Avik Sengupta wrote:
>>
>>
>> Try this: 
>>
>>  function _make_simple_undirected_graph{T<:Integer}(n::T, 
>> edgelist::Vector{Tuple{T,T}}) 
>>
>> On Monday, 20 April 2015 00:18:33 UTC+5:30, Seth wrote:
>>>
>>> Could someone please explain what's going on here and what I need to do 
>>> to fix my package with the latest 0.4 tuple changes?
>>>
>>> Here's the error (from pkg.julialang.org):
>>>
>>> ERROR: LoadError: LoadError: LoadError: TypeError: apply_type: in alias, 
>>> expected Type{T}, got Tuple{TypeVar,TypeVar}
>>>  in include at ./boot.jl:250
>>>  in include_from_node1 at ./loading.jl:129
>>>  in include at ./boot.jl:250
>>>  in include_from_node1 at ./loading.jl:129
>>>  in reload_path at ./loading.jl:153
>>>  in _require at ./loading.jl:68
>>>  in require at ./loading.jl:51
>>>  in include at ./boot.jl:250
>>>  in include_from_node1 at loading.jl:129
>>>  in process_options at ./client.jl:299
>>>  in _start at ./client.jl:398
>>> while loading /home/vagrant/testpkg/v0.4/LightGraphs/src/smallgraphs.jl, in 
>>> expression starting on line 120
>>> while loading /home/vagrant/testpkg/v0.4/LightGraphs/src/LightGraphs.jl, in 
>>> expression starting on line 93
>>> while loading /vagrant/nightlyAL/PKGEVAL_LightGraphs_using.jl, in 
>>> expression starting on line 4
>>>
>>>
>>> Here's the line in question:
>>>
>>> function _make_simple_undirected_graph{T<:Integer}(n::T, edgelist::
>>> Vector{(T,T)})
>>>
>>> I confess to not yet fully understanding the new change to tuples, and 
>>> I'm lost as to how to fix my code to comply with the new rules.
>>>
>>> Thanks.
>>>
>>

Re: [julia-users] zero cost subarray?

2015-04-19 Thread Peter Brady
So I discovered the --track-allocation option and now I am really confused:

Here's my session:

$ julia --track-allocation=all
               _
   _       _ _(_)_     |  A fresh approach to technical computing
  (_)     | (_) (_)    |  Documentation: http://docs.julialang.org
   _ _   _| |_  __ _   |  Type "help()" for help.
  | | | | | | |/ _` |  |
  | | |_| | | | (_| |  |  Version 0.3.8-pre+13 (2015-04-17 18:08 UTC)
 _/ |\__'_|_|_|\__'_|  |  Commit 0df962d* (2 days old release-0.3)
|__/                   |  x86_64-redhat-linux

julia> include("test.jl")
test_all (generic function with 1 method)

julia> test_unsafe(5)

And here's the relevant part of the resulting test.jl.mem file.  Note that 
I commented out some calls to 'size' and replaced with the appropriate 
hard-coded values but the resulting allocation is the same... Can anyone 
shed some light on this while I wait for 0.4 to compile?

        - function update(a::AbstractArray, idx, off)
  8151120     for i=1:320 #size(a, idx)
        0         a[i] = -10*off+i
        -     end
        0     a
        - end
        - 
        - function setk_UnSafe{T}(a::Array{T,3})
      760     us = UnsafeSlice(a, 3)
        0     for j=1:size(a,2),i=1:size(a,1)
  8151120         us.start = (j-1)*320+i #size(a,1)+i
        -         #off = sub2ind(size(a), i, j, 1)
        0         update(us, 3, us.start)
        -     end
        0     a
        - end
        - function test_unsafe(n)
        0     a = zeros(Int, (320, 320, 320))
        -     # warmup
        0     setk_UnSafe(a);
        0     clear_malloc_data()
        -     #@time (
        0     for i=1:n; setk_UnSafe(a); end
        - end


On Sunday, April 19, 2015 at 2:21:56 PM UTC-6, Peter Brady wrote:
>
> @Dahua, thanks for adding an unsafeview!  I appreciate how quickly this 
> community responds.
>
> I've added the following function to my test.jl script
> function setk_unsafeview{T}(a::Array{T,3})
> for j=1:size(a,2),i=1:size(a,1)
> off = sub2ind(size(a), i, j, 1)
> update(unsafe_view(a, i, j, :), 3, off)
> end
> a
> end
>  But I'm not seeing the large increase in performance I was expecting.  My 
> timings are now
>
> julia> test_all(5);
> test_stride
> elapsed time: 2.156173128 seconds (0 bytes allocated)
> test_view
> elapsed time: 9.30964534 seconds (94208000 bytes allocated, 0.47% gc time)
> test_unsafe
> elapsed time: 2.169307471 seconds (16303000 bytes allocated)
> test_unsafeview
> elapsed time: 8.955876793 seconds (90112000 bytes allocated, 0.41% gc time)
>
> To be fair, I am cheating a bit with my custom 'UnsafeSlice' since I make 
> only one instance and simply update the offset on each iteration.  If I 
> make it immutable and create a new instance on every iteration (as I do for 
> the view and unsafeview), things slow down a little and the allocation goes 
> south:
>
> julia> test_all(5);
> test_stride
> elapsed time: 2.159909265 seconds (0 bytes allocated)
> test_view
> elapsed time: 9.029025282 seconds (94208000 bytes allocated, 0.43% gc time)
> test_unsafe
> elapsed time: 2.621667854 seconds (114606240 bytes allocated, 2.41% gc 
> time)
> test_unsafeview
> elapsed time: 8.888434466 seconds (90112000 bytes allocated, 0.44% gc time)
>
> These are all with 0.3.8-pre.  I'll try compiling master and see what 
> happens.  I'm still confused about why allocating a single type with a 
> pointer, 2 ints and a tuple costs so much memory though.
>
>
>
> On Sunday, April 19, 2015 at 11:38:17 AM UTC-6, Tim Holy wrote:
>>
>> It's not just escape analysis, as this (new) issue demonstrates: 
>> https://github.com/JuliaLang/julia/issues/10899 
>>
>> --Tim 
>>
>> On Sunday, April 19, 2015 12:33:51 PM Sebastian Good wrote: 
>> > Their size seems much decreased. I’d imagine to totally avoid 
>> allocation in 
>> > this benchmark requires an optimization that really has nothing to do 
>> with 
>> > subarrays per se. You’d have to do an escape analysis and see that Aj 
>> never 
>> > left sumcols. Not easy in practice, since it’s passed to slice and 
>> length, 
>> > and you’d have to make sure they didn’t squirrel it away or pass it on 
>> to 
>> > someone else. Then you could stack allocate it, or even destructure it 
>> into 
>> > a bunch of scalar mutations on the stack. After eliminating dead code, 
>> > you’d end up with a no-allocation loop much like you’d write by hand. 
>> This 
>> > sort of optimization seems to be quite tricky for compilers to pull 
>> off, 
>> > but it’s a common pattern in numerical code. 
>> > 
>> > In Julia is such cleverness left entirely to LLVM, or are there 
>> optimization 
>> > passes in Julia itself? On April 19, 2015 at 6:49:21 AM, Tim Holy 
>> > (tim@gmail.com) wrote: 
>> > 
>> > Sorry to be slow to chime in here, but the tuple overhaul has landed 
>> and 
>> > they are still not zero-cost: 
>> > 
>> > function sumcols(A) 
>> > s = 0.0 
>> > for j = 1:size(A,2) 
>> > Aj = slice(A, :, j) 
>> > for i = 1:length(Aj) 
>> > s += Aj[

Re: [julia-users] zero cost subarray?

2015-04-19 Thread Peter Brady
@Dahua, thanks for adding an unsafeview!  I appreciate how quickly this 
community responds.

I've added the following function to my test.jl script
function setk_unsafeview{T}(a::Array{T,3})
for j=1:size(a,2),i=1:size(a,1)
off = sub2ind(size(a), i, j, 1)
update(unsafe_view(a, i, j, :), 3, off)
end
a
end
 But I'm not seeing the large increase in performance I was expecting.  My 
timings are now

julia> test_all(5);
test_stride
elapsed time: 2.156173128 seconds (0 bytes allocated)
test_view
elapsed time: 9.30964534 seconds (94208000 bytes allocated, 0.47% gc time)
test_unsafe
elapsed time: 2.169307471 seconds (16303000 bytes allocated)
test_unsafeview
elapsed time: 8.955876793 seconds (90112000 bytes allocated, 0.41% gc time)

To be fair, I am cheating a bit with my custom 'UnsafeSlice' since I make 
only one instance and simply update the offset on each iteration.  If I 
make it immutable and create a new instance on every iteration (as I do for 
the view and unsafeview), things slow down a little and the allocation goes 
south:

julia> test_all(5);
test_stride
elapsed time: 2.159909265 seconds (0 bytes allocated)
test_view
elapsed time: 9.029025282 seconds (94208000 bytes allocated, 0.43% gc time)
test_unsafe
elapsed time: 2.621667854 seconds (114606240 bytes allocated, 2.41% gc time)
test_unsafeview
elapsed time: 8.888434466 seconds (90112000 bytes allocated, 0.44% gc time)

These are all with 0.3.8-pre.  I'll try compiling master and see what 
happens.  I'm still confused about why allocating a single type with a 
pointer, 2 ints and a tuple costs so much memory though.
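For readers following along: the poster's `UnsafeSlice` definition isn't shown in 
the thread. A hypothetical reconstruction consistent with how it is used above 
(0.3-era syntax; pointer arithmetic like this is unsafe, and the slice must not 
outlive the parent array) might look like:

```julia
type UnsafeSlice{T} <: AbstractArray{T,1}  # mutable, so `start` can be updated in place
    ptr::Ptr{T}     # raw pointer into the parent array's data
    start::Int      # linear offset of the slice's first element
    stride::Int     # spacing between consecutive slice elements
    len::Int        # number of elements along the sliced dimension
end

# Slice along dimension `dim`, initially anchored at the first element.
UnsafeSlice{T}(a::Array{T,3}, dim::Int) =
    UnsafeSlice{T}(pointer(a), 1, stride(a, dim), size(a, dim))

Base.size(s::UnsafeSlice) = (s.len,)
Base.length(s::UnsafeSlice) = s.len
Base.getindex(s::UnsafeSlice, i::Int) =
    unsafe_load(s.ptr, s.start + (i-1)*s.stride)
Base.setindex!(s::UnsafeSlice, v, i::Int) =
    unsafe_store!(s.ptr, v, s.start + (i-1)*s.stride)
```

Reusing one mutable instance and only updating `start` in the loop avoids 
constructing a new view object per iteration, which is why the immutable 
new-instance-per-iteration variant above allocates so much more.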



On Sunday, April 19, 2015 at 11:38:17 AM UTC-6, Tim Holy wrote:
>
> It's not just escape analysis, as this (new) issue demonstrates: 
> https://github.com/JuliaLang/julia/issues/10899 
>
> --Tim 
>
> On Sunday, April 19, 2015 12:33:51 PM Sebastian Good wrote: 
> > Their size seems much decreased. I’d imagine to totally avoid allocation 
> in 
> > this benchmark requires an optimization that really has nothing to do 
> with 
> > subarrays per se. You’d have to do an escape analysis and see that Aj 
> never 
> > left sumcols. Not easy in practice, since it’s passed to slice and 
> length, 
> > and you’d have to make sure they didn’t squirrel it away or pass it on 
> to 
> > someone else. Then you could stack allocate it, or even destructure it 
> into 
> > a bunch of scalar mutations on the stack. After eliminating dead code, 
> > you’d end up with a no-allocation loop much like you’d write by hand. 
> This 
> > sort of optimization seems to be quite tricky for compilers to pull off, 
> > but it’s a common pattern in numerical code. 
> > 
> > In Julia is such cleverness left entirely to LLVM, or are there 
> optimization 
> > passes in Julia itself? On April 19, 2015 at 6:49:21 AM, Tim Holy 
> > (tim@gmail.com ) wrote: 
> > 
> > Sorry to be slow to chime in here, but the tuple overhaul has landed and 
> > they are still not zero-cost: 
> > 
> > function sumcols(A) 
> > s = 0.0 
> > for j = 1:size(A,2) 
> > Aj = slice(A, :, j) 
> > for i = 1:length(Aj) 
> > s += Aj[i] 
> > end 
> > end 
> > s 
> > end 
> > 
> > Even in the latest 0.4, this still allocates memory. On the other hand, 
> > while SubArrays allocate nearly 2x more memory than ArrayViews, the 
> speed 
> > of the two (replacing `slice` with `view` above) is, for me, nearly 
> > identical. 
> > 
> > --Tim 
> > 
> > On Friday, April 17, 2015 08:30:27 PM Sebastian Good wrote: 
> > > This was discussed a few weeks ago 
> > > 
> > > https://groups.google.com/d/msg/julia-users/IxrvV8ABZoQ/uWZu5-IB3McJ 
> > > 
> > > I think the bottom line is that the current implementation *should* be 
> > > 'zero-cost' once a set of planned improvements and optimizations take 
> > > place. One of the key ones is a tuple overhaul. 
> > > 
> > > Fair to say it can never be 'zero' cost since there is different 
> inherent 
> > > overhead depending on the type of subarray, e.g. offset, slice, 
> > > re-dimension, etc. however the implementation is quite clever about 
> > > allowing specialization of those. 
> > > 
> > > In a common case (e.g. a constant offset or simple stride) my 
> > > understanding 
> > > is that the structure will be type-specialized and likely stack 
> allocated 
> > > in many cases, reducing to what you'd write by hand. At least this is 
> what 
> > > they're after. 
> > > 
> > > On Friday, April 17, 2015 at 4:24:14 PM UTC-4, Peter Brady wrote: 
> > > > Thanks for the links. I'll check out ArrayViews as it looks like 
> what I 
> > > > was going to do manually without wrapping it in a type. 
> > > > 
> > > > By semi-dim agnostic I meant that the differencing algorithm itself 
> only 
> > > > cares about one dimension but that dimension is different for 
> different 
> > > > directions. Only a few toplevel routines actually need to know about 
> the 
> > > > dimensionality of the problem. 
> > > > 
> > > > On Friday, April 17, 2015 at 2:04:39 PM UTC

[julia-users] Re: How to fix package after breaking change in 0.4?

2015-04-19 Thread Seth
Sorry - please disregard. I managed to omit the Vector{} in my paste. It's 
working correctly. Thank you.

On Sunday, April 19, 2015 at 12:44:38 PM UTC-7, Seth wrote:
>
> This isn't a drop-in replacement:
>
> ERROR: LoadError: MethodError: `_make_simple_undirected_graph` has no 
> method matching _make_simple_undirected_graph(::Int64, ::Array{(Int64,Int64),1})
>
> I'm calling it like so:
>
> function PetersenGraph()
> e = [
> (1, 2), (1, 5), (1, 6),
> (2, 3), (2, 7),
> (3, 4), (3, 8),
> (4, 5), (4, 9),
> (5, 10),
> (6, 8), (6, 9),
> (7, 9), (7, 10),
> (8, 10)
> ]
> return _make_simple_undirected_graph(10,e)
> end
>
>
>
>
> On Sunday, April 19, 2015 at 12:09:27 PM UTC-7, Tony Kelman wrote:
>>
>> That will cause the code to not work on 0.3. To get code that works on 
>> both 0.3 and 0.4, use the Compat.jl package, and
>>
>>  function _make_simple_undirected_graph{T<:Integer}(n::T, 
>> edgelist::Vector{@compat(Tuple{T,T})}) 
>>
>>
>> On Sunday, April 19, 2015 at 11:58:42 AM UTC-7, Avik Sengupta wrote:
>>>
>>>
>>> Try this: 
>>>
>>>  function _make_simple_undirected_graph{T<:Integer}(n::T, 
>>> edgelist::Vector{Tuple{T,T}}) 
>>>
>>> On Monday, 20 April 2015 00:18:33 UTC+5:30, Seth wrote:

 Could someone please explain what's going on here and what I need to do 
 to fix my package with the latest 0.4 tuple changes?

 Here's the error (from pkg.julialang.org):

 ERROR: LoadError: LoadError: LoadError: TypeError: apply_type: in alias, 
 expected Type{T}, got Tuple{TypeVar,TypeVar}
  in include at ./boot.jl:250
  in include_from_node1 at ./loading.jl:129
  in include at ./boot.jl:250
  in include_from_node1 at ./loading.jl:129
  in reload_path at ./loading.jl:153
  in _require at ./loading.jl:68
  in require at ./loading.jl:51
  in include at ./boot.jl:250
  in include_from_node1 at loading.jl:129
  in process_options at ./client.jl:299
  in _start at ./client.jl:398
 while loading /home/vagrant/testpkg/v0.4/LightGraphs/src/smallgraphs.jl, 
 in expression starting on line 120
 while loading /home/vagrant/testpkg/v0.4/LightGraphs/src/LightGraphs.jl, 
 in expression starting on line 93
 while loading /vagrant/nightlyAL/PKGEVAL_LightGraphs_using.jl, in 
 expression starting on line 4


 Here's the line in question:

 function _make_simple_undirected_graph{T<:Integer}(n::T, edgelist::
 Vector{(T,T)})

 I confess to not yet fully understanding the new change to tuples, and 
 I'm lost as to how to fix my code to comply with the new rules.

 Thanks.

>>>

[julia-users] Re: How to fix package after breaking change in 0.4?

2015-04-19 Thread Seth
This isn't a drop-in replacement:

ERROR: LoadError: MethodError: `_make_simple_undirected_graph` has no 
method matching _make_simple_undirected_graph(::Int64, ::Array{(Int64,Int64),1})

I'm calling it like so:

function PetersenGraph()
e = [
(1, 2), (1, 5), (1, 6),
(2, 3), (2, 7),
(3, 4), (3, 8),
(4, 5), (4, 9),
(5, 10),
(6, 8), (6, 9),
(7, 9), (7, 10),
(8, 10)
]
return _make_simple_undirected_graph(10,e)
end




On Sunday, April 19, 2015 at 12:09:27 PM UTC-7, Tony Kelman wrote:
>
> That will cause the code to not work on 0.3. To get code that works on 
> both 0.3 and 0.4, use the Compat.jl package, and
>
>  function _make_simple_undirected_graph{T<:Integer}(n::T, 
> edgelist::Vector{@compat(Tuple{T,T})}) 
>
>
> On Sunday, April 19, 2015 at 11:58:42 AM UTC-7, Avik Sengupta wrote:
>>
>>
>> Try this: 
>>
>>  function _make_simple_undirected_graph{T<:Integer}(n::T, 
>> edgelist::Vector{Tuple{T,T}}) 
>>
>> On Monday, 20 April 2015 00:18:33 UTC+5:30, Seth wrote:
>>>
>>> Could someone please explain what's going on here and what I need to do 
>>> to fix my package with the latest 0.4 tuple changes?
>>>
>>> Here's the error (from pkg.julialang.org):
>>>
>>> ERROR: LoadError: LoadError: LoadError: TypeError: apply_type: in alias, 
>>> expected Type{T}, got Tuple{TypeVar,TypeVar}
>>>  in include at ./boot.jl:250
>>>  in include_from_node1 at ./loading.jl:129
>>>  in include at ./boot.jl:250
>>>  in include_from_node1 at ./loading.jl:129
>>>  in reload_path at ./loading.jl:153
>>>  in _require at ./loading.jl:68
>>>  in require at ./loading.jl:51
>>>  in include at ./boot.jl:250
>>>  in include_from_node1 at loading.jl:129
>>>  in process_options at ./client.jl:299
>>>  in _start at ./client.jl:398
>>> while loading /home/vagrant/testpkg/v0.4/LightGraphs/src/smallgraphs.jl, in 
>>> expression starting on line 120
>>> while loading /home/vagrant/testpkg/v0.4/LightGraphs/src/LightGraphs.jl, in 
>>> expression starting on line 93
>>> while loading /vagrant/nightlyAL/PKGEVAL_LightGraphs_using.jl, in 
>>> expression starting on line 4
>>>
>>>
>>> Here's the line in question:
>>>
>>> function _make_simple_undirected_graph{T<:Integer}(n::T, edgelist::
>>> Vector{(T,T)})
>>>
>>> I confess to not yet fully understanding the new change to tuples, and 
>>> I'm lost as to how to fix my code to comply with the new rules.
>>>
>>> Thanks.
>>>
>>

[julia-users] Re: How to fix package after breaking change in 0.4?

2015-04-19 Thread Tony Kelman
That will cause the code to not work on 0.3. To get code that works on both 
0.3 and 0.4, use the Compat.jl package, and

 function _make_simple_undirected_graph{T<:Integer}(n::T, 
edgelist::Vector{@compat(Tuple{T,T})}) 
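For reference, the Compat-based signature above can be exercised end to end. The sketch below uses a hypothetical simplified helper (`count_edges`), not the actual LightGraphs function:

```julia
# Hedged sketch: `count_edges` is a made-up stand-in for the LightGraphs
# helper, showing the Compat-based tuple-type annotation.
using Compat

# On 0.3, @compat(Tuple{T,T}) expands to the old (T,T) tuple type;
# on 0.4 it is left as the new Tuple{T,T} type.
function count_edges{T<:Integer}(n::T, edgelist::Vector{@compat(Tuple{T,T})})
    length(edgelist)
end

e = [(1, 2), (2, 3)]         # eltype is (Int,Int) on 0.3, Tuple{Int,Int} on 0.4
println(count_edges(10, e))  # prints 2 on both versions
```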


On Sunday, April 19, 2015 at 11:58:42 AM UTC-7, Avik Sengupta wrote:
>
>
> Try this: 
>
>  function _make_simple_undirected_graph{T<:Integer}(n::T, 
> edgelist::Vector{Tuple{T,T}}) 
>
> On Monday, 20 April 2015 00:18:33 UTC+5:30, Seth wrote:
>>
>> Could someone please explain what's going on here and what I need to do 
>> to fix my package with the latest 0.4 tuple changes?
>>
>> Here's the error (from pkg.julialang.org):
>>
>> ERROR: LoadError: LoadError: LoadError: TypeError: apply_type: in alias, 
>> expected Type{T}, got Tuple{TypeVar,TypeVar}
>>  in include at ./boot.jl:250
>>  in include_from_node1 at ./loading.jl:129
>>  in include at ./boot.jl:250
>>  in include_from_node1 at ./loading.jl:129
>>  in reload_path at ./loading.jl:153
>>  in _require at ./loading.jl:68
>>  in require at ./loading.jl:51
>>  in include at ./boot.jl:250
>>  in include_from_node1 at loading.jl:129
>>  in process_options at ./client.jl:299
>>  in _start at ./client.jl:398
>> while loading /home/vagrant/testpkg/v0.4/LightGraphs/src/smallgraphs.jl, in 
>> expression starting on line 120
>> while loading /home/vagrant/testpkg/v0.4/LightGraphs/src/LightGraphs.jl, in 
>> expression starting on line 93
>> while loading /vagrant/nightlyAL/PKGEVAL_LightGraphs_using.jl, in expression 
>> starting on line 4
>>
>>
>> Here's the line in question:
>>
>> function _make_simple_undirected_graph{T<:Integer}(n::T, edgelist::Vector
>> {(T,T)})
>>
>> I confess to not yet fully understanding the new change to tuples, and 
>> I'm lost as to how to fix my code to comply with the new rules.
>>
>> Thanks.
>>
>

[julia-users] Re: How to fix package after breaking change in 0.4?

2015-04-19 Thread Avik Sengupta

Try this: 

 function _make_simple_undirected_graph{T<:Integer}(n::T, 
edgelist::Vector{Tuple{T,T}}) 

On Monday, 20 April 2015 00:18:33 UTC+5:30, Seth wrote:
>
> Could someone please explain what's going on here and what I need to do to 
> fix my package with the latest 0.4 tuple changes?
>
> Here's the error (from pkg.julialang.org):
>
> ERROR: LoadError: LoadError: LoadError: TypeError: apply_type: in alias, 
> expected Type{T}, got Tuple{TypeVar,TypeVar}
>  in include at ./boot.jl:250
>  in include_from_node1 at ./loading.jl:129
>  in include at ./boot.jl:250
>  in include_from_node1 at ./loading.jl:129
>  in reload_path at ./loading.jl:153
>  in _require at ./loading.jl:68
>  in require at ./loading.jl:51
>  in include at ./boot.jl:250
>  in include_from_node1 at loading.jl:129
>  in process_options at ./client.jl:299
>  in _start at ./client.jl:398
> while loading /home/vagrant/testpkg/v0.4/LightGraphs/src/smallgraphs.jl, in 
> expression starting on line 120
> while loading /home/vagrant/testpkg/v0.4/LightGraphs/src/LightGraphs.jl, in 
> expression starting on line 93
> while loading /vagrant/nightlyAL/PKGEVAL_LightGraphs_using.jl, in expression 
> starting on line 4
>
>
> Here's the line in question:
>
> function _make_simple_undirected_graph{T<:Integer}(n::T, edgelist::Vector
> {(T,T)})
>
> I confess to not yet fully understanding the new change to tuples, and I'm 
> lost as to how to fix my code to comply with the new rules.
>
> Thanks.
>


[julia-users] How to fix package after breaking change in 0.4?

2015-04-19 Thread Seth
Could someone please explain what's going on here and what I need to do to 
fix my package with the latest 0.4 tuple changes?

Here's the error (from pkg.julialang.org):

ERROR: LoadError: LoadError: LoadError: TypeError: apply_type: in alias, 
expected Type{T}, got Tuple{TypeVar,TypeVar}
 in include at ./boot.jl:250
 in include_from_node1 at ./loading.jl:129
 in include at ./boot.jl:250
 in include_from_node1 at ./loading.jl:129
 in reload_path at ./loading.jl:153
 in _require at ./loading.jl:68
 in require at ./loading.jl:51
 in include at ./boot.jl:250
 in include_from_node1 at loading.jl:129
 in process_options at ./client.jl:299
 in _start at ./client.jl:398
while loading /home/vagrant/testpkg/v0.4/LightGraphs/src/smallgraphs.jl, in 
expression starting on line 120
while loading /home/vagrant/testpkg/v0.4/LightGraphs/src/LightGraphs.jl, in 
expression starting on line 93
while loading /vagrant/nightlyAL/PKGEVAL_LightGraphs_using.jl, in expression 
starting on line 4


Here's the line in question:

function _make_simple_undirected_graph{T<:Integer}(n::T, edgelist::Vector{(T
,T)})

I confess to not yet fully understanding the new change to tuples, and I'm 
lost as to how to fix my code to comply with the new rules.

Thanks.


Re: [julia-users] build errors OSX 10.10.2

2015-04-19 Thread Elliot Saba
Yes, it does seem like something fishy is going on with BinDeps.jl.  The output of 
Pkg.status() would be helpful here.
-E

On Saturday, April 18, 2015 at 9:15:07 AM UTC-7, Kevin Squire wrote:
>
> Hi Edward, 
>
> What version of BinDeps.jl is installed?  Is this a fresh install of 
> either Julia or those packages, or were they just updated (and started not 
> working)?
>
> Cheers,
>Kevin
>
> On Sat, Apr 18, 2015 at 8:08 AM, Edward Chen  > wrote:
>
>> To whom it may concern:
>>
>> I am getting build errors when I try to update my METADATA. When I try to 
>> rebuild Homebrew, Nettle and ZMQ, I also get errors saying that @setup is 
>> not defined.
>>
>> Thanks for your help,
>> Ed
>>
>> [ BUILD ERRORS 
>> ]
>>
>> WARNING: Homebrew, Nettle and ZMQ had build errors.
>>
>>  - packages with build errors remain installed in /Users/ehchen/.julia/v0.3
>>  - build the package(s) and all dependencies with `Pkg.build("Homebrew", 
>> "Nettle", "ZMQ")`
>>  - build a single package by running its `deps/build.jl` script
>>
>> 
>>
>>
>>
>>
>>
>> INFO: Building Nettle
>> ===[ ERROR: Nettle 
>> ]
>>
>> @setup not defined
>> while loading /Users/ehchen/.julia/v0.3/Nettle/deps/build.jl, in expression 
>> starting on line 5
>>
>> 
>> INFO: Building ZMQ
>> =[ ERROR: ZMQ 
>> ]=
>>
>> @setup not defined
>> while loading /Users/ehchen/.julia/v0.3/ZMQ/deps/build.jl, in expression 
>> starting on line 4
>>
>> 
>>
>> [ BUILD ERRORS 
>> ]
>>
>> WARNING: Nettle and ZMQ had build errors.
>>
>>  - packages with build errors remain installed in /Users/ehchen/.julia/v0.3
>>  - build the package(s) and all dependencies with `Pkg.build("Nettle", 
>> "ZMQ")`
>>  - build a single package by running its `deps/build.jl` script
>>
>> 
>>
>>
>

Re: [julia-users] Re: Symbolic relations package

2015-04-19 Thread Marcus Appelros
Hi Kevin, thanks for the link! From the end of that thread:

"Has anybody written pure Julia symbolic math for things like:

f = (x**y + y**z + z**x)**100
g = f.expand()"

"As far as I know there is no Julia package which supports such symbolic 
expressions and manipulation."

Now there is!

I saw a more recent dev discussion calling for someone to write a package 
like this. I have looked through the package list many times and never found 
anything that resembled the vision of Equations. SymPy covers some of the same 
functionality, but I certainly didn't start developing in Julia just to use 
Python.

Developing this code is very enjoyable, and as more of the planned 
features are released a solid user base should take shape. Per your 
recommendation, I have added reading the discussion in your link to the 
todo list, to help build such a foundation sooner. 

With love. <3


Re: [julia-users] zero cost subarray?

2015-04-19 Thread Tim Holy
It's not just escape analysis, as this (new) issue demonstrates:
https://github.com/JuliaLang/julia/issues/10899

--Tim

On Sunday, April 19, 2015 12:33:51 PM Sebastian Good wrote:
> Their size seems much decreased. I’d imagine that totally avoiding allocation in
> this benchmark requires an optimization that really has nothing to do with
> subarrays per se. You’d have to do an escape analysis and see that Aj never
> left sumcols. Not easy in practice, since it’s passed to slice and length,
> and you’d have to make sure they didn’t squirrel it away or pass it on to
> someone else. Then you could stack allocate it, or even destructure it into
> a bunch of scalar mutations on the stack. After eliminating dead code,
> you’d end up with a no-allocation loop much like you’d write by hand. This
> sort of optimization seems to be quite tricky for compilers to pull off,
> but it’s a common pattern in numerical code. 
> 
> In Julia is such cleverness left entirely to LLVM, or are there optimization
> passes in Julia itself? On April 19, 2015 at 6:49:21 AM, Tim Holy
> (tim.h...@gmail.com) wrote:
> 
> Sorry to be slow to chime in here, but the tuple overhaul has landed and
> they are still not zero-cost:
> 
> function sumcols(A)
> s = 0.0
> for j = 1:size(A,2)
> Aj = slice(A, :, j)
> for i = 1:length(Aj)
> s += Aj[i]
> end
> end
> s
> end
> 
> Even in the latest 0.4, this still allocates memory. On the other hand,
> while SubArrays allocate nearly 2x more memory than ArrayViews, the speed
> of the two (replacing `slice` with `view` above) is, for me, nearly
> identical.
> 
> --Tim
> 
> On Friday, April 17, 2015 08:30:27 PM Sebastian Good wrote:
> > This was discussed a few weeks ago
> > 
> > https://groups.google.com/d/msg/julia-users/IxrvV8ABZoQ/uWZu5-IB3McJ
> > 
> > I think the bottom line is that the current implementation *should* be
> > 'zero-cost' once a set of planned improvements and optimizations take
> > place. One of the key ones is a tuple overhaul.
> > 
> > Fair to say it can never be 'zero' cost since there is different inherent
> > overhead depending on the type of subarray, e.g. offset, slice,
> > re-dimension, etc. however the implementation is quite clever about
> > allowing specialization of those.
> > 
> > In a common case (e.g. a constant offset or simple stride) my
> > understanding
> > is that the structure will be type-specialized and likely stack allocated
> > in many cases, reducing to what you'd write by hand. At least this is what
> > they're after.
> > 
> > On Friday, April 17, 2015 at 4:24:14 PM UTC-4, Peter Brady wrote:
> > > Thanks for the links. I'll check out ArrayViews as it looks like what I
> > > was going to do manually without wrapping it in a type.
> > > 
> > > By semi-dim agnostic I meant that the differencing algorithm itself only
> > > cares about one dimension but that dimension is different for different
> > > directions. Only a few toplevel routines actually need to know about the
> > > dimensionality of the problem.
> > > 
> > > On Friday, April 17, 2015 at 2:04:39 PM UTC-6, René Donner wrote:
> > >> As far as I have measured it sub in 0.4 is still not cheap, as it
> > >> provides the flexibility to deal with all kinds of strides and offsets,
> > >> and
> > >> the view object itself thus has a certain size. See
> > >> https://github.com/rened/FunctionalData.jl#efficiency for a simple
> > >> analysis, where the speed is mostly dominated by the speed of the
> > >> "sub-view" mechanism.
> > >> 
> > >> To get faster views which require strides etc you can try
> > >> https://github.com/JuliaLang/ArrayViews.jl
> > >> 
> > >> What do you mean by semi-dim agnostic? In case you only need indexing
> > >> along the last dimension (like a[:,:,i] and a[:,:,:,i]) you can use
> > >> 
> > >> https://github.com/rened/FunctionalData.jl#efficient-views-details
> > >> 
> > >> which uses normal DenseArrays and simple pointer updates internally. It
> > >> can also update a view in-place, by just incrementing the pointer.
> > >> 
> > >> Am 17.04.2015 um 21:48 schrieb Peter Brady :
> > >> > In order to write some differencing algorithms in a semi-dimensionally
> > >> 
> > >> agnostic manner, the code I've written makes heavy use of subarrays
> > >> which
> > >> turn out to be rather costly. I've noticed some posts on the cost of
> > >> subarrays here and that things will be better in 0.4. Can someone
> > >> comment
> > >> on how much better? Would subarray (or anything like it) be on par with
> > >> simply passing an offset and stride (constant) and computing the index
> > >> myself? I'm currently using the 0.3 release branch.



Re: [julia-users] What happened to hist2d! ?

2015-04-19 Thread Pontus Stenetorp
On 19 April 2015 at 17:56, DumpsterDoofus  wrote:
>
> According to the 0.3.7 documentation at
> http://julia-demo.readthedocs.org/en/stable/stdlib/math.html?highlight=hist2d!#Base.hist2d!,
> the function hist2d! is included as a standard function.
>
> However, when I type hist2d! into the REPL, I just get an error message:
> julia> hist2d!
> ERROR: hist2d! not defined
>
> But when I type Base.hist2d!, it does appear to recognize it. Does this mean
> it wasn't properly exported?

This has been fixed on the 0.4 branch [1] following a bug report [2].
However, it apparently was not backported to the 0.3 branch.

[1]: 
https://github.com/JuliaLang/julia/commit/afed98993a6ff7223359270337c0c106b1a7d938
[2]: https://github.com/JuliaLang/julia/pull/10049

Pontus


[julia-users] What happened to hist2d! ?

2015-04-19 Thread DumpsterDoofus
According to the 0.3.7 documentation 
at 
http://julia-demo.readthedocs.org/en/stable/stdlib/math.html?highlight=hist2d!#Base.hist2d!,
 
the function hist2d! is included as a standard function.

However, when I type hist2d! into the REPL, I just get an error message:
julia> hist2d!
ERROR: hist2d! not defined

But when I type Base.hist2d!, it does appear to recognize it. Does this 
mean it wasn't properly exported?
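Until the fix reaches a 0.3 release, one workaround is to qualify the name or import the unexported binding directly. A sketch, assuming the four-argument form `hist2d!(H, v, edg1, edg2)` from the docs (treat the exact signature as an assumption):

```julia
# Workaround sketch for the missing export; the hist2d! signature used
# below is assumed from the 0.3 docs, not verified here.
isdefined(Base, :hist2d!)   # true on the affected 0.3 builds

import Base.hist2d!         # `import` works even for unexported names

v   = rand(100, 2)          # 100 two-dimensional observations in [0, 1)
edg = 0.0:0.25:1.0          # shared bin edges for both dimensions
H   = zeros(Int, 4, 4)      # (length(edg) - 1) bins per dimension
hist2d!(H, v, edg, edg)     # fills H in place
```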


Re: [julia-users] zero cost subarray?

2015-04-19 Thread Sebastian Good
Their size seems much decreased. I’d imagine that totally avoiding allocation in 
this benchmark requires an optimization that really has nothing to do with 
subarrays per se. You’d have to do an escape analysis and see that Aj never 
left sumcols. Not easy in practice, since it’s passed to slice and length, and 
you’d have to make sure they didn’t squirrel it away or pass it on to someone 
else. Then you could stack allocate it, or even destructure it into a bunch of 
scalar mutations on the stack. After eliminating dead code, you’d end up with a 
no-allocation loop much like you’d write by hand. This sort of optimization 
seems to be quite tricky for compilers to pull off, but it’s a common pattern 
in numerical code. 

In Julia is such cleverness left entirely to LLVM, or are there optimization 
passes in Julia itself?
On April 19, 2015 at 6:49:21 AM, Tim Holy (tim.h...@gmail.com) wrote:

Sorry to be slow to chime in here, but the tuple overhaul has landed and they  
are still not zero-cost:  

function sumcols(A)  
s = 0.0  
for j = 1:size(A,2)  
Aj = slice(A, :, j)  
for i = 1:length(Aj)  
s += Aj[i]  
end  
end  
s  
end  

Even in the latest 0.4, this still allocates memory. On the other hand, while  
SubArrays allocate nearly 2x more memory than ArrayViews, the speed of the two  
(replacing `slice` with `view` above) is, for me, nearly identical.  

--Tim  


On Friday, April 17, 2015 08:30:27 PM Sebastian Good wrote:  
> This was discussed a few weeks ago  
>  
> https://groups.google.com/d/msg/julia-users/IxrvV8ABZoQ/uWZu5-IB3McJ  
>  
> I think the bottom line is that the current implementation *should* be  
> 'zero-cost' once a set of planned improvements and optimizations take  
> place. One of the key ones is a tuple overhaul.  
>  
> Fair to say it can never be 'zero' cost since there is different inherent  
> overhead depending on the type of subarray, e.g. offset, slice,  
> re-dimension, etc. however the implementation is quite clever about  
> allowing specialization of those.  
>  
> In a common case (e.g. a constant offset or simple stride) my understanding  
> is that the structure will be type-specialized and likely stack allocated  
> in many cases, reducing to what you'd write by hand. At least this is what  
> they're after.  
>  
> On Friday, April 17, 2015 at 4:24:14 PM UTC-4, Peter Brady wrote:  
> > Thanks for the links. I'll check out ArrayViews as it looks like what I  
> > was going to do manually without wrapping it in a type.  
> >  
> > By semi-dim agnostic I meant that the differencing algorithm itself only  
> > cares about one dimension but that dimension is different for different  
> > directions. Only a few toplevel routines actually need to know about the  
> > dimensionality of the problem.  
> >  
> > On Friday, April 17, 2015 at 2:04:39 PM UTC-6, René Donner wrote:  
> >> As far as I have measured it sub in 0.4 is still not cheap, as it  
> >> provides the flexibility to deal with all kinds of strides and offsets,  
> >> and  
> >> the view object itself thus has a certain size. See  
> >> https://github.com/rened/FunctionalData.jl#efficiency for a simple  
> >> analysis, where the speed is mostly dominated by the speed of the  
> >> "sub-view" mechanism.  
> >>  
> >> To get faster views which require strides etc you can try  
> >> https://github.com/JuliaLang/ArrayViews.jl  
> >>  
> >> What do you mean by semi-dim agnostic? In case you only need indexing  
> >> along the last dimension (like a[:,:,i] and a[:,:,:,i]) you can use  
> >>  
> >> https://github.com/rened/FunctionalData.jl#efficient-views-details  
> >>  
> >> which uses normal DenseArrays and simple pointer updates internally. It  
> >> can also update a view in-place, by just incrementing the pointer.  
> >>  
> >> Am 17.04.2015 um 21:48 schrieb Peter Brady :  
> >> > In order to write some differencing algorithms in a semi-dimensionally  
> >>  
> >> agnostic manner, the code I've written makes heavy use of subarrays which  
> >> turn out to be rather costly. I've noticed some posts on the cost of  
> >> subarrays here and that things will be better in 0.4. Can someone  
> >> comment  
> >> on how much better? Would subarray (or anything like it) be on par with  
> >> simply passing an offset and stride (constant) and computing the index  
> >> myself? I'm currently using the 0.3 release branch.  
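For comparison, the hand-written alternative Peter describes (passing an offset and computing linear indices yourself) can be sketched like this; it is what the view-based `sumcols` above is ultimately measured against:

```julia
# Hedged sketch: sum columns via manual linear indexing into a dense
# column-major matrix, with no per-column view object allocated.
function sumcols_manual(A::Matrix{Float64})
    s = 0.0
    m, n = size(A)
    for j = 1:n
        off = (j - 1) * m      # linear offset of column j; stride is 1
        for i = 1:m
            s += A[off + i]    # plain linear indexing
        end
    end
    s
end

A = rand(4, 3)
abs(sumcols_manual(A) - sum(A)) < 1e-10   # agrees with the built-in sum
```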



[julia-users] Re: Cannot open certain IJulia account anymore. But can open others.

2015-04-19 Thread Viral Shah
Are there some files we can clear out? Because you are out of quota, I am 
guessing that the system is unable to restore your files from the backup. 
We need such things to be a bit more flexible eventually.

-viral

On Sunday, April 19, 2015 at 7:25:22 PM UTC+5:30, Anders Madsen wrote:
>
> I was recently disconnected in a situation where I had run out of 
> disk space.
> The next time I tried to access it, I got the message 
>
> "Could not start your instance! Please try again"
>
> I tried that several times, with the same result.
>
> What can I do to become once again a happy IJulia user?
>
> I have seen this kind of behavior described by others, who got some 
> authoritative help.
> My account is the email address from which this message is sent. 
>


Re: [julia-users] Re: Non-GPL Julia?

2015-04-19 Thread Viral Shah
And it is merged now. 

On Saturday, April 18, 2015 at 4:22:26 PM UTC+5:30, Scott Jones wrote:
>
> That's great! That solves our dilemma for us! 
>
> Scott



Re: [julia-users] Re: Symbolic relations package

2015-04-19 Thread Kevin Squire
Hi Marcus,

It's great that you're exploring Julia in this way.  Judging by the
responses, there hasn't been a huge amount of interest yet.

If you're having fun (and it looks like it!), keep at it.  If you're really
looking to find ways to collaborate, you might try to look at other
symbolic math packages for Julia, and for previous discussions on symbolic
math in Julia.

I was going to list a few packages for you, but instead I'll just point you
to this post from a year ago on julia-dev, which has pointers to 6 other
symbolic math packages:

https://groups.google.com/d/msg/julia-dev/NTfS9fJuIcE/qmRK38exjooJ

That discussion, itself, also might be of interest.

Cheers,
   Kevin

On Sun, Apr 19, 2015 at 2:47 AM, Marcus Appelros 
wrote:

> Looking for testers and testwriters, especially for recent versions since
> the internet connection here (not so central Kenya) does not allow frequent
> updates.
>
> Pkg.clone("git://github.com/jhlq/Equations.jl.git")
>
> WIP:
> * A generic matches(ex::Expression, pattern::Expression), so instead of
> applying matcher functions to an equation one applies a web of equations.
> * Contracting multiple terms into Pow(a,b).
> * More derivative rules.
> * Int (∫) type.
> * Extensive testing.
>


[julia-users] Cannot open certain IJulia account anymore. But can open others.

2015-04-19 Thread Anders Madsen
I was recently disconnected in a situation where I had run out of disk space.
The next time I tried to access it, I got the message 

"Could not start your instance! Please try again"

I tried that several times, with the same result.

What can I do to become once again a happy IJulia user?

I have seen this kind of behavior described by others, who got some 
authoritative help.
My account is the email address from which this message is sent. 


Re: [julia-users] My project on transforming Cirru(a nother syntax) to Julia AST

2015-04-19 Thread Jiyin Yiyong
Thanks. I spent quite some time trying the things described in this article: 
http://blog.leahhanson.us/julia-introspects.html
I think it's time to dig into the code now.

On Sunday, April 19, 2015 at 8:04:39 PM UTC+8, Isaiah wrote:
>
> And I have some confusion about the AST of function 
>> definitions. Are there any docs or source code that I can get help from?
>
>
> The AST format is not documented (and not officially stable, though not 
> drastically unstable either), but there are some reflection functions that 
> may help (expand, code_lowered, code_typed). For example:
>
> julia> expand(quote function foo() x = 1 end end)
> :(begin  # none, line 1:
> return $(Expr(:method, :foo, 
> :((top(svec))((top(apply_type))(Tuple)::Any,(top(svec))()::Any)::Any), 
> AST(:($(Expr(:lambda, Any[], Any[Any[:x],Any[Any[:x,:Any,18]],Any[],0], 
> :(begin  # none, line 1:
> x = 1
> return 1
> end::Any), false))
> end)
>
> There is a reflection/introspection section in the manual with some more 
> tips, or look at `base/reflection.jl` for the actual code (and some tricks 
> that aren't in the manual).
>
> On Sun, Apr 19, 2015 at 6:42 AM, Jiyin Yiyong  > wrote:
>
>> I was playing with a syntax called Cirru, which is like Lisp but with far 
>> fewer parentheses, implemented in JavaScript.
>> https://github.com/Cirru
>> And I got CirruScript which compiles to ES6 AST and then generates 
>> JavaScript in ES5.
>> https://github.com/Cirru/cirru-script
>> Later I thought it would be fun to have my toy language running on top of 
>> LLVM, and tried a few ideas. Here's my current project, CirruSepal.jl, 
>> which transforms my code into a Julia AST.
>> the parser: https://github.com/Cirru/CirruParser.jl
>> the prototype: https://github.com/Cirru/CirruSepal.jl
>>
>> For example, I got a file of Cirru like this:
>>
>> ```cirru
>> = a :demo
>> call println a
>> call println (tuple 1)
>> call println (cell1d 1)
>> call println (call symbol a)
>>
>> call println true false 
>> ```
>>
>> I'm going to parse it into an Array with CirruParser.jl, then generate a 
>> Julia AST with CirruSepal.jl using metaprogramming techniques from Julia. I 
>> just think it might be quite interesting that I can run an interpreter in 
>> Julia, or something like an alternative syntax for Julia (an inconvenient 
>> one, though).
>>
>> Just want to see if anyone is interested in such a syntax for writing 
>> Julia ASTs.
>>
>> And I have some confusion about the AST of function 
>> definitions. Are there any docs or source code that I can get help from?
>>
>> Thanks.
>>
>
>

[julia-users] Parsing 12-hour Clock Timestamps

2015-04-19 Thread Pontus Stenetorp
Everyone,

I am currently parsing some data that unfortunately uses a 12-hour
clock format.  Reading the docs [1] and the source [2], I am now
fairly certain that `Base.Dates` currently lacks support for parsing
something like `"Apr 1, 2015 1:02:03 PM"`.  Am I correct in this?
Also, what would you recommend as an alternative library?

Pontus

[1]: http://docs.julialang.org/en/latest/stdlib/dates/#Dates.Dates.DateFormat
[2]: 
https://github.com/JuliaLang/julia/blob/32aee08d0b833233cd22b7b1de01ae769395b3b8/base/dates/io.jl
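If `Base.Dates` indeed cannot parse the AM/PM marker, one manual workaround is to normalize the hour yourself and build the `DateTime` from components. A sketch (the helper name `parse12h` and the exact 0.3-era calls such as `int` are assumptions, not thread content):

```julia
# Hedged workaround sketch: split off the AM/PM marker, normalize the
# hour to 24-hour time, and construct the DateTime directly.
using Dates  # the Dates.jl package on 0.3; Base.Dates on 0.4

const MONTHS = ["Jan","Feb","Mar","Apr","May","Jun",
                "Jul","Aug","Sep","Oct","Nov","Dec"]

function parse12h(s::String)
    parts = split(s)                     # ["Apr","1,","2015","1:02:03","PM"]
    mon   = findfirst(MONTHS, parts[1])  # month number from its abbreviation
    day   = int(rstrip(parts[2], ','))   # 0.3-style; parse(Int, ...) on 0.4
    yr    = int(parts[3])
    h, mi, sec = map(int, split(parts[4], ':'))
    marker = uppercase(parts[5])
    if marker == "PM" && h != 12
        h += 12                          # 1 PM -> 13, ..., 11 PM -> 23
    elseif marker == "AM" && h == 12
        h = 0                            # 12 AM is midnight
    end
    DateTime(yr, mon, day, h, mi, sec)
end

parse12h("Apr 1, 2015 1:02:03 PM")       # 2015-04-01T13:02:03
```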


Re: [julia-users] My project on transforming Cirru(a nother syntax) to Julia AST

2015-04-19 Thread Isaiah Norton
>
> And I have some confusion about the AST of function
> definitions. Are there any docs or source code that I can get help from?


The AST format is not documented (and not officially stable, though not
drastically unstable either), but there are some reflection functions that
may help (expand, code_lowered, code_typed). For example:

julia> expand(quote function foo() x = 1 end end)
:(begin  # none, line 1:
return $(Expr(:method, :foo,
:((top(svec))((top(apply_type))(Tuple)::Any,(top(svec))()::Any)::Any),
AST(:($(Expr(:lambda, Any[], Any[Any[:x],Any[Any[:x,:Any,18]],Any[],0],
:(begin  # none, line 1:
x = 1
return 1
end::Any), false))
end)

There is a reflection/introspection section in the manual with some more
tips, or look at `base/reflection.jl` for the actual code (and some tricks
that aren't in the manual).

On Sun, Apr 19, 2015 at 6:42 AM, Jiyin Yiyong  wrote:

> I was playing with a syntax called Cirru, which is like Lisp but with far
> fewer parentheses, implemented in JavaScript.
> https://github.com/Cirru
> And I got CirruScript which compiles to ES6 AST and then generates
> JavaScript in ES5.
> https://github.com/Cirru/cirru-script
> Later I thought it would be fun to have my toy language running on top of LLVM,
> and tried a few ideas. Here's my current project, CirruSepal.jl, which
> transforms my code into a Julia AST.
> the parser: https://github.com/Cirru/CirruParser.jl
> the prototype: https://github.com/Cirru/CirruSepal.jl
>
> For example, I got a file of Cirru like this:
>
> ```cirru
> = a :demo
> call println a
> call println (tuple 1)
> call println (cell1d 1)
> call println (call symbol a)
>
> call println true false
> ```
>
> I'm going to parse it into an Array with CirruParser.jl, then generate a
> Julia AST with CirruSepal.jl using metaprogramming techniques from Julia. I
> just think it might be quite interesting that I can run an interpreter in
> Julia, or something like an alternative syntax for Julia (an inconvenient
> one, though).
>
> Just want to see if anyone is interested in such a syntax for writing
> Julia ASTs.
>
> And I have some confusion about the AST of function
> definitions. Are there any docs or source code that I can get help from?
>
> Thanks.
>


Re: [julia-users] Latest on wrapping C structs for use in Julia

2015-04-19 Thread Isaiah Norton
Hi Simon,


> As for the second point, the signature in beagle.h is


Correct -- but in the example Julia gist there are only two arguments
passed, rather than three:
https://gist.github.com/sdwfrost/5c574857bd91648fb7ee#file-beagle-jl-L103-L106

As far as the incorrect result, I would suggest to recheck a few things:
- double check the signatures, for example I don't think
beagleUpdateTransitionMatrices is correct in the Julia version (does not
take `nodeIndices` array)
- make sure that the matrix ordering as you declare the eigenvector
matrices in Julia is as-expected by beagle

Best,
Isaiah



On Sun, Apr 19, 2015 at 3:14 AM, Simon Frost  wrote:

> Dear Isaiah,
>
> Thanks - I noted before that changing types to immutable in some cases
> fixed things, but I didn't think that this would apply to BeagleOperation.
> I'll read over the new docs carefully.
>
> As for the second point, the signature in beagle.h is
>
> /**
>  * @brief Set a state frequency buffer
>  *
>  * This function copies a state frequency array into an instance buffer.
>  *
>  * @param instance  Instance number (input)
>  * @param stateFrequenciesIndex Index of state frequencies buffer (input)
>  * @param inStateFrequenciesState frequencies array (stateCount)
> (input)
>  *
>  * @return error code
>  */
> BEAGLE_DLLEXPORT int beagleSetStateFrequencies(int instance,
>  int stateFrequenciesIndex,
>  const double*
> inStateFrequencies);
>
> I'll follow up with the BEAGLE devs to see why your fix makes the script
> run, as the output (logL) isn't correct.
>
> Best
> Simon
>


Re: [julia-users] zero cost subarray?

2015-04-19 Thread Dahua Lin
The latest version of ArrayViews (v0.6.0) now provides unsafe views (which 
hold raw pointers instead of a reference to the parent array). See 
https://github.com/JuliaLang/ArrayViews.jl#view-types

You may check whether this makes your code more performant. Be careful: make 
sure that unsafe views are used only within a local scope and are not passed 
around; otherwise you may run into memory corruption or a segfault.

Dahua
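As a sketch of the local-scope discipline described above (assuming ArrayViews' `unsafe_view`, per its README):

```julia
# Hedged sketch: an unsafe view is created and consumed entirely inside
# the function, while the parent array is known to be alive.
using ArrayViews

function sumcols_unsafe(A::Matrix{Float64})
    s = 0.0
    for j = 1:size(A, 2)
        Aj = unsafe_view(A, :, j)   # raw-pointer view; must not outlive A
        for i = 1:length(Aj)
            s += Aj[i]
        end
    end
    s                               # the view never escapes this scope
end

A = rand(3, 4)
abs(sumcols_unsafe(A) - sum(A)) < 1e-10
```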


On Sunday, April 19, 2015 at 6:49:20 PM UTC+8, Tim Holy wrote:
>
> Sorry to be slow to chime in here, but the tuple overhaul has landed and 
> they are still not zero-cost: 
>
> function sumcols(A) 
> s = 0.0 
> for j = 1:size(A,2) 
> Aj = slice(A, :, j) 
> for i = 1:length(Aj) 
> s += Aj[i] 
> end 
> end 
> s 
> end 
>
> Even in the latest 0.4, this still allocates memory. On the other hand, 
> while SubArrays allocate nearly 2x more memory than ArrayViews, the speed 
> of the two (replacing `slice` with `view` above) is, for me, nearly 
> identical. 
>
> --Tim 
>
>
> On Friday, April 17, 2015 08:30:27 PM Sebastian Good wrote: 
> > This was discussed a few weeks ago 
> > 
> > https://groups.google.com/d/msg/julia-users/IxrvV8ABZoQ/uWZu5-IB3McJ 
> > 
> > I think the bottom line is that the current implementation *should* be 
> > 'zero-cost' once a set of planned improvements and optimizations take 
> > place. One of the key ones is a tuple overhaul. 
> > 
> > Fair to say it can never be 'zero' cost, since there is different 
> > inherent overhead depending on the type of subarray, e.g. offset, 
> > slice, re-dimension, etc. However, the implementation is quite clever 
> > about allowing specialization of those. 
> > 
> > In a common case (e.g. a constant offset or simple stride) my 
> > understanding is that the structure will be type-specialized and 
> > likely stack allocated in many cases, reducing to what you'd write by 
> > hand. At least this is what they're after. 
> > 
> > On Friday, April 17, 2015 at 4:24:14 PM UTC-4, Peter Brady wrote: 
> > > Thanks for the links.  I'll check out ArrayViews as it looks like 
> > > what I was going to do manually without wrapping it in a type. 
> > > 
> > > By semi-dim agnostic I meant that the differencing algorithm itself 
> > > only cares about one dimension but that dimension is different for 
> > > different directions. Only a few toplevel routines actually need to 
> > > know about the dimensionality of the problem. 
> > > 
> > > On Friday, April 17, 2015 at 2:04:39 PM UTC-6, René Donner wrote: 
> > >> As far as I have measured it, sub in 0.4 is still not cheap, as it 
> > >> provides the flexibility to deal with all kinds of strides and 
> > >> offsets, and the view object itself thus has a certain size. See 
> > >> https://github.com/rened/FunctionalData.jl#efficiency for a simple 
> > >> analysis, where the speed is mostly dominated by the speed of the 
> > >> "sub-view" mechanism. 
> > >> 
> > >> To get faster views which require strides etc. you can try 
> > >> https://github.com/JuliaLang/ArrayViews.jl 
> > >> 
> > >> What do you mean by semi-dim agnostic? In case you only need 
> > >> indexing along the last dimension (like a[:,:,i] and a[:,:,:,i]) 
> > >> you can use 
> > >> 
> > >>   https://github.com/rened/FunctionalData.jl#efficient-views-details 
> > >> 
> > >> which uses normal DenseArrays and simple pointer updates 
> > >> internally. It can also update a view in-place, by just 
> > >> incrementing the pointer. 
> > >> 
> > >> Am 17.04.2015 um 21:48 schrieb Peter Brady: 
> > >> > In order to write some differencing algorithms in a 
> > >> semi-dimension-agnostic manner the code I've written makes heavy 
> > >> use of subarrays, which turn out to be rather costly. I've noticed 
> > >> some posts on the cost of subarrays here and that things will be 
> > >> better in 0.4.  Can someone comment on how much better?  Would 
> > >> subarray (or anything like it) be on par with simply passing an 
> > >> offset and stride (constant) and computing the index myself? I'm 
> > >> currently using the 0.3 release branch. 

Re: [julia-users] How to create command tools with Julia modules?

2015-04-19 Thread Jiyin Yiyong
This package is helpful.

On Sunday, April 19, 2015 at 6:57:48 PM UTC+8, Isaiah wrote:
>
> You will probably also want ArgParse:
> https://github.com/carlobaldassi/ArgParse.jl
>
> On Sun, Apr 19, 2015 at 6:20 AM, Jiyin Yiyong wrote:
>
>> Yes, that's true. I was an npm user and a Go user; both of them provide a 
>> solution for creating command line tools. It doesn't matter if I have to 
>> create the files on my own (just a few steps). But I want to know how the 
>> community handles this problem. If I want to publish a package with a 
>> command line tool, is there a better solution?
>>
>> On Sunday, April 19, 2015 at 7:51:20 AM UTC+8, Jameson wrote:
>>>
>>> this isn't really a julia question, but just a general unix question. 
>>> putting a shebang[1] at the top of any file will turn that file into a 
>>> command line tool.
>>>
>>> [1] http://en.wikipedia.org/wiki/Shebang_(Unix)
>>>
>>> On Sat, Apr 18, 2015 at 3:10 PM Patrick Kofod Mogensen <
>>> patrick@gmail.com> wrote:
>>>
 I might be wrong, but I think Jiyin wants to make program binaries out 
 of Julia code. Julia is probably not the right choice for this (right 
 now).


 On Saturday, April 18, 2015 at 8:07:02 PM UTC+2, Mauro wrote:

> Did you see this: 
> http://docs.julialang.org/en/latest/manual/running-external-programs/ 
>
> On Sat, 2015-04-18 at 10:01, Jiyin Yiyong  wrote: 
> > I want to create a command line tool by creating a Julia module. But 
> > it's not mentioned in the docs. Is there a quick solution for that? 
>
>
>
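
For reference, the shebang approach Jameson describes above is just this (a minimal sketch; the filename is my own):

```julia
#!/usr/bin/env julia
# Save as e.g. tool.jl, then `chmod +x tool.jl` and run `./tool.jl foo bar`.
# Command-line arguments arrive in the global ARGS array of strings.
println("got args: ", join(ARGS, ", "))
```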

Re: [julia-users] How to create command tools with Julia modules?

2015-04-19 Thread Isaiah Norton
You will probably also want ArgParse:
https://github.com/carlobaldassi/ArgParse.jl
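
A minimal ArgParse entry point looks roughly like this (a sketch from memory; check the package README for the exact current API):

```julia
using ArgParse  # Pkg.add("ArgParse")

function main()
    s = ArgParseSettings(description = "example tool")
    @add_arg_table s begin
        "--verbose", "-v"
            help = "print extra output"
            action = :store_true
        "input"
            help = "input file"
    end
    args = parse_args(s)   # returns a Dict keyed by argument name
    args["verbose"] && println("verbose mode")
    println("processing ", args["input"])
end

main()
```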

On Sun, Apr 19, 2015 at 6:20 AM, Jiyin Yiyong  wrote:

> Yes, that's true. I was an npm user and a Go user; both of them provide a
> solution for creating command line tools. It doesn't matter if I have to
> create the files on my own (just a few steps). But I want to know how the
> community handles this problem. If I want to publish a package with a
> command line tool, is there a better solution?
>
> On Sunday, April 19, 2015 at 7:51:20 AM UTC+8, Jameson wrote:
>>
>> this isn't really a julia question, but just a general unix question.
>> putting a shebang[1] at the top of any file will turn that file into a
>> command line tool.
>>
>> [1] http://en.wikipedia.org/wiki/Shebang_(Unix)
>>
>> On Sat, Apr 18, 2015 at 3:10 PM Patrick Kofod Mogensen <
>> patrick@gmail.com> wrote:
>>
>>> I might be wrong, but I think Jiyin wants to make program binaries out
>>> of Julia code. Julia is probably not the right choice for this (right
>>> now).
>>>
>>>
>>> On Saturday, April 18, 2015 at 8:07:02 PM UTC+2, Mauro wrote:
>>>
 Did you see this:
 http://docs.julialang.org/en/latest/manual/running-external-programs/

 On Sat, 2015-04-18 at 10:01, Jiyin Yiyong  wrote:
 > I want to create a command line tool by creating a Julia module. But
 > it's not mentioned in the docs. Is there a quick solution for that?




Re: [julia-users] zero cost subarray?

2015-04-19 Thread Tim Holy
Sorry to be slow to chime in here, but the tuple overhaul has landed and they 
are still not zero-cost:

function sumcols(A)
s = 0.0
for j = 1:size(A,2)
Aj = slice(A, :, j)
for i = 1:length(Aj)
s += Aj[i]
end
end
s
end

Even in the latest 0.4, this still allocates memory. On the other hand, while 
SubArrays allocate nearly 2x more memory than ArrayViews, the speed of the two 
(replacing `slice` with `view` above) is, for me, nearly identical.
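
For comparison, the hand-indexed version that the view-based `sumcols` is implicitly measured against would be something like this (my sketch, not from the thread):

```julia
# Same column sum, but indexing the parent array directly: no per-column
# view object is constructed, so the inner loops allocate nothing.
function sumcols_manual(A)
    s = 0.0
    for j = 1:size(A, 2)
        for i = 1:size(A, 1)
            s += A[i, j]
        end
    end
    s
end
```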

--Tim


On Friday, April 17, 2015 08:30:27 PM Sebastian Good wrote:
> This was discussed a few weeks ago
> 
> https://groups.google.com/d/msg/julia-users/IxrvV8ABZoQ/uWZu5-IB3McJ
> 
> I think the bottom line is that the current implementation *should* be
> 'zero-cost' once a set of planned improvements and optimizations take
> place. One of the key ones is a tuple overhaul.
> 
> Fair to say it can never be 'zero' cost, since there is different inherent
> overhead depending on the type of subarray, e.g. offset, slice,
> re-dimension, etc. However, the implementation is quite clever about
> allowing specialization of those.
> 
> In a common case (e.g. a constant offset or simple stride) my understanding
> is that the structure will be type-specialized and likely stack allocated
> in many cases, reducing to what you'd write by hand. At least this is what
> they're after.
> 
> On Friday, April 17, 2015 at 4:24:14 PM UTC-4, Peter Brady wrote:
> > Thanks for the links.  I'll check out ArrayViews as it looks like what I
> > was going to do manually without wrapping it in a type.
> > 
> > By semi-dim agnostic I meant that the differencing algorithm itself only
> > cares about one dimension but that dimension is different for different
> > directions. Only a few toplevel routines actually need to know about the
> > dimensionality of the problem.
> > 
> > On Friday, April 17, 2015 at 2:04:39 PM UTC-6, René Donner wrote:
> >> As far as I have measured it, sub in 0.4 is still not cheap, as it
> >> provides the flexibility to deal with all kinds of strides and offsets,
> >> and
> >> the view object itself thus has a certain size. See
> >> https://github.com/rened/FunctionalData.jl#efficiency for a simple
> >> analysis, where the speed is mostly dominated by the speed of the
> >> "sub-view" mechanism.
> >> 
> >> To get faster views which require strides etc you can try
> >> https://github.com/JuliaLang/ArrayViews.jl
> >> 
> >> What do you mean by semi-dim agnostic? In case you only need indexing
> >> along the last dimension (like a[:,:,i] and a[:,:,:,i]) you can use
> >> 
> >>   https://github.com/rened/FunctionalData.jl#efficient-views-details
> >> 
> >> which uses normal DenseArrays and simple pointer updates internally. It
> >> can also update a view in-place, by just incrementing the pointer.
> >> 
> >> Am 17.04.2015 um 21:48 schrieb Peter Brady:
> >> > In order to write some differencing algorithms in a semi-dimension-agnostic
> >> manner the code I've written makes heavy use of subarrays, which
> >> turn out to be rather costly. I've noticed some posts on the cost of
> >> subarrays here and that things will be better in 0.4.  Can someone
> >> comment
> >> on how much better?  Would subarray (or anything like it) be on par with
> >> simply passing an offset and stride (constant) and computing the index
> >> myself? I'm currently using the 0.3 release branch.
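
The hand-rolled alternative Peter describes — passing an offset and a constant stride and computing the index yourself — might look like this (a sketch under my own naming):

```julia
# Sum n elements of `data` starting at 1-based linear index `offset`,
# stepping by `stride`. This is essentially what a 1-d strided view
# computes internally on every indexing operation.
function sumline(data::Array{Float64}, offset::Int, stride::Int, n::Int)
    s = 0.0
    for k = 0:n-1
        s += data[offset + k*stride]
    end
    s
end
```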



[julia-users] My project on transforming Cirru (another syntax) to Julia AST

2015-04-19 Thread Jiyin Yiyong
I was playing with a syntax called Cirru, which is like Lisp but with far 
fewer parentheses, implemented in JavaScript.
https://github.com/Cirru
From that I got CirruScript, which compiles to an ES6 AST and then generates 
JavaScript in ES5.
https://github.com/Cirru/cirru-script
Later I thought it would be fun to have my toy language running on top of 
LLVM, and tried a few ideas. Here's my current project, CirruSepal.jl, which 
transforms my code to Julia AST.
the parser: https://github.com/Cirru/CirruParser.jl
the prototype: https://github.com/Cirru/CirruSepal.jl

For example, I have a Cirru file like this:

```cirru
= a :demo
call println a
call println (tuple 1)
call println (cell1d 1)
call println (call symbol a)

call println true false 
```

I'm going to parse it into an Array with CirruParser.jl, then generate 
Julia AST with CirruSepal.jl using Julia's metaprogramming facilities. I 
just think it might be quite interesting to run an interpreter in Julia, or 
to have something like an alternative syntax for Julia (an inconvenient one, 
though).

Just want to see if anyone is interested in such a syntax for writing Julia 
AST.

And I am somewhat confused by the AST of function definitions. 
Are there any docs or source code that I can get help from?
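
For what it's worth, `dump` makes the structure easy to inspect interactively; a function definition quotes to an `Expr` with head `:function` (my summary, worth verifying in the REPL):

```julia
# Inspect the AST of a function definition.
ex = :(function f(x) x + 1 end)
# ex.head is :function; ex.args[1] is the call signature :(f(x)),
# and ex.args[2] is the body block.
dump(ex)
```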

Thanks.


Re: [julia-users] How to create command tools with Julia modules?

2015-04-19 Thread Jiyin Yiyong
Yes, that's true. I was an npm user and a Go user; both of them provide a 
solution for creating command line tools. It doesn't matter if I have to 
create the files on my own (just a few steps). But I want to know how the 
community handles this problem. If I want to publish a package with a 
command line tool, is there a better solution?

On Sunday, April 19, 2015 at 7:51:20 AM UTC+8, Jameson wrote:
>
> this isn't really a julia question, but just a general unix question. 
> putting a shebang[1] at the top of any file will turn that file into a 
> command line tool.
>
> [1] http://en.wikipedia.org/wiki/Shebang_(Unix)
>
> On Sat, Apr 18, 2015 at 3:10 PM Patrick Kofod Mogensen <
> patrick@gmail.com > wrote:
>
>> I might be wrong, but I think Jiyin wants to make program binaries out 
>> of Julia code. Julia is probably not the right choice for this (right 
>> now).
>>
>>
>> On Saturday, April 18, 2015 at 8:07:02 PM UTC+2, Mauro wrote:
>>
>>> Did you see this: 
>>> http://docs.julialang.org/en/latest/manual/running-external-programs/ 
>>>
>>> On Sat, 2015-04-18 at 10:01, Jiyin Yiyong  wrote: 
>>> > I want to create a command line tool by creating a Julia module. But 
>>> > it's not mentioned in the docs. Is there a quick solution for that? 
>>>
>>>

[julia-users] Re: Symbolic relations package

2015-04-19 Thread Marcus Appelros
Looking for testers and test writers, especially for recent versions, since 
the internet connection here (not-so-central Kenya) does not allow frequent 
updates.

Pkg.clone("git://github.com/jhlq/Equations.jl.git")

WIP:
* A generic matches(ex::Expression, pattern::Expression), so that instead of 
applying matcher functions to an equation one applies a web of equations.
* Contracting multiple terms into Pow(a,b).
* More derivative rules.
* Int (∫) type.
* Extensive testing.


Re: [julia-users] Latest on wrapping C structs for use in Julia

2015-04-19 Thread Simon Frost
Dear Isaiah,

Thanks - I noted before that changing types to immutable in some cases 
fixed things, but I didn't think that this would apply to BeagleOperation. 
I'll read over the new docs carefully.

As for the second point, the signature in beagle.h is

/**
 * @brief Set a state frequency buffer
 *
 * This function copies a state frequency array into an instance buffer.
 *
 * @param instance  Instance number (input)
 * @param stateFrequenciesIndex Index of state frequencies buffer (input)
 * @param inStateFrequencies State frequencies array (stateCount) (input)
 *
 * @return error code
 */
BEAGLE_DLLEXPORT int beagleSetStateFrequencies(int instance,
 int stateFrequenciesIndex,
 const double* inStateFrequencies); 
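
For reference, the matching ccall would be along these lines (a sketch; the shared library name here is an assumption on my part):

```julia
# Matches the C signature:
#   int beagleSetStateFrequencies(int, int, const double*)
ret = ccall((:beagleSetStateFrequencies, "libhmsbeagle"),
            Cint,
            (Cint, Cint, Ptr{Cdouble}),
            instance, stateFrequenciesIndex, inStateFrequencies)
```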
  

I'll follow up with the BEAGLE devs to see why your fix makes the script 
run, as the output (logL) isn't correct.

Best
Simon


Re: [julia-users] custom type equality

2015-04-19 Thread Marcus Appelros
Wrote it from the phone, so it was not tested at the time; I have used a 
similar solution where at first the initial if was missing, so different 
types with the same arguments returned true. Just did a benchmark with 3 
cases: one with the literal function above (after fixing the missing ')'), 
one with n directly being the field name, and one with explicit a.a == b.a 
checks; a has 5 random variables and b is its deepcopy.
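
For readers following along, the generic check being benchmarked is presumably something like this (my reconstruction for Julia 0.3, where `names(T)` returns a type's field names; not Marcus's literal code):

```julia
# Generic field-by-field equality: reject different types up front (the
# "initial if" mentioned above), then compare every field via getfield.
function fieldwise_eq(a, b)
    typeof(a) == typeof(b) || return false
    for n in names(typeof(a))   # field names of the concrete type (0.3 API)
        getfield(a, n) == getfield(b, n) || return false
    end
    return true
end
```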

julia> @time for i in 1:1000;a==b;end #with length
elapsed time: 0.001184089 seconds (96000 bytes allocated)

julia> @time for i in 1:1000;a==b;end #with for n in names
elapsed time: 0.000963879 seconds (96000 bytes allocated)

julia> @time for i in 1:1000;a==b;end #with explicit check
elapsed time: 0.000156515 seconds (0 bytes allocated)

Interestingly, the allocation from the `length`-based variant does not have 
an impact. So according to this result, for every second you spend writing 
an explicit == you can do a million generic checks. The difference becomes 
less pronounced with the random variables replaced by arrays of a thousand 
random values:

julia> @time for i in 1:1000;a==b;end #generic
elapsed time: 0.013679254 seconds (496000 bytes allocated)

julia> @time for i in 1:1000;a==b;end #explicit
elapsed time: 0.012587005 seconds (40 bytes allocated)