Re: [julia-users] Passing N-D Julia arrays to C functions

2016-11-04 Thread Tim Holy
As I tried to explain before, Aptrs corresponds to a Ptr{Ptr{T}} *no matter
how many dimensions you have*. But showing the C code you want to use makes
your problem vastly clearer: now I know you want a `Ptr{Ptr{Ptr{Ptr{T}}}}`
for 4 dimensions.

I think you want a recursive solution, something like
```
ptr1d(A::AbstractVector) = pointer(A)

function ptr1d(A::AbstractArray)
    colons = fill(:, ndims(A) - 1)
    ptr1 = ptr1d(view(A, colons..., 1))  # get 1 pointer so we know the type
    Aptr = Array{typeof(ptr1)}(size(A)[end])
    Aptr[1] = ptr1
    for i = 2:size(A)[end]
        Aptr[i] = ptr1d(view(A, colons..., i))
    end
    Aptr
end
```

Then just pass `ptr1d(A)` as the last argument of your ccall.
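For instance, for a 2-d array the result is a vector with one pointer per column (untested sketch; `libtest` and `test` are placeholder names for your shared library and C function, and `A` must stay referenced so the pointers remain valid across the call):

```
A = rand(3, 4)
p = ptr1d(A)    # Vector{Ptr{Float64}}: p[j] points at the start of column j
# C signature: double test(double **cols);
# val = ccall((:test, "libtest"), Float64, (Ptr{Ptr{Float64}},), p)
```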

Best,
--Tim


On Fri, Nov 4, 2016 at 10:07 AM, Alexander Lyapin <
lyapinaalexan...@gmail.com> wrote:

> Tim, thank you for reply!
>
> Everything was fine when I tried simple 2D array. Then I moved to real
> problem and got some errors... please check:
>
> -test.c-
> #include 
> double test(double param) {
> return param[1][1][1][1];
> }
>
>
> Julia-
> # ar is Array{Float64,4}
>
> sz = size(ar)[1:end]
> Aptrs = Array{Ptr{eltype(ar)}}(sz)
> for I in CartesianRange(sz)
> Aptrs[I] = pointer(ar, sub2ind(size(ar), 1, I.I...))
> end
> println(ccall(("test","test"),Float64,(Ptr{Ptr{Ptr{Ptr{Float64}}}},),
> Aptrs))
>
>
> something is wrong here...
>
>
> On Friday, November 4, 2016 at 17:19:17 UTC+3, Tim Holy wrote:
>>
>> ccall(sym, Void, (Ptr{Ptr{T}},), Aptrs)
>>
>> When you pass an array via `ccall`, it takes the pointer of the data,
>> which is why this produces a Ptr{Ptr{T}} from an Array{Ptr{T}}.
>>
>> Best,
>> --Tim
>>
>> On Fri, Nov 4, 2016 at 9:14 AM, Alexander Lyapin 
>> wrote:
>>
>>> but in C I have **double, which is a pointer to a pointer to the array.
>>> Also in link that I gave, there an example for 2D where it is
>>> Ptr{Ptr{Float64}}
>>>
>>> But anyway, could you give an example for ccall in your case???
>>>
>>> On Friday, November 4, 2016 at 17:05:57 UTC+3, Yichao Yu wrote:
>>>>
>>>> On Fri, Nov 4, 2016 at 9:22 AM, Alexander Lyapin
>>>>  wrote:
>>>> > There is a topic:
>>>> > https://groups.google.com/d/msg/julia-users/EK9oNzzaoAk/kJqagPL0Ku0J
>>>> >
>>>> > However could some one give an example how to pass 3-d or 4-d array
>>>> to C
>>>> > function.
>>>> >
>>>> > I have Array{Float64, 4} and for ccall I use
>>>> Ptr{Ptr{Ptr{Ptr{Float64}}}} as
>>>> > Type of parameter. Is there some way to make it more common for
>>>> > Array{Float64, N}??? Thank you
>>>>
>>>> No matter what the dimension is, it's always a `Ptr{eltype(array)}` in
>>>> C (`Ptr{Float64}` in this case) and since it's just a normal pointer,
>>>> you always need to pass the dimensions explicitly to C in other
>>>> arguments.
>>>>
>>>
>>


Re: [julia-users] Passing N-D Julia arrays to C functions

2016-11-04 Thread Tim Holy
ccall(sym, Void, (Ptr{Ptr{T}},), Aptrs)

When you pass an array via `ccall`, it takes the pointer of the data, which
is why this produces a Ptr{Ptr{T}} from an Array{Ptr{T}}.

Best,
--Tim

On Fri, Nov 4, 2016 at 9:14 AM, Alexander Lyapin  wrote:

> but in C I have **double, which is a pointer to a pointer to the array.
> Also in link that I gave, there an example for 2D where it is
> Ptr{Ptr{Float64}}
>
> But anyway, could you give an example for ccall in your case???
>
> On Friday, November 4, 2016 at 17:05:57 UTC+3, Yichao Yu wrote:
>>
>> On Fri, Nov 4, 2016 at 9:22 AM, Alexander Lyapin
>>  wrote:
>> > There is a topic:
>> > https://groups.google.com/d/msg/julia-users/EK9oNzzaoAk/kJqagPL0Ku0J
>> >
>> > However could some one give an example how to pass 3-d or 4-d array to
>> C
>> > function.
>> >
>> > I have Array{Float64, 4} and for ccall I use
>> Ptr{Ptr{Ptr{Ptr{Float64}}}} as
>> > Type of parameter. Is there some way to make it more common for
>> > Array{Float64, N}??? Thank you
>>
>> No matter what the dimension is, it's always a `Ptr{eltype(array)}` in
>> C (`Ptr{Float64}` in this case) and since it's just a normal pointer,
>> you always need to pass the dimensions explicitly to C in other
>> arguments.
>>
>


Re: [julia-users] Passing N-D Julia arrays to C functions

2016-11-04 Thread Tim Holy
If you need the pointers to each lower-dimensional slice, you should be
able to build it like so (warning: untested):

```
sz = size(A)[2:end]  # if you need type-stability, use Base.tail(size(A)) instead
Aptrs = Array{Ptr{eltype(A)}}(sz)
for I in CartesianRange(sz)
    Aptrs[I] = pointer(A, sub2ind(size(A), 1, I.I...))
end
```
and then pass Aptrs to your C function.

Note that A must be an Array (not a generic AbstractArray) for this to work.
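As a concrete 2-d illustration (untested; `libdemo` and `colsum` are placeholder names for your shared library and C function, and `A` must be kept alive for as long as the pointers are used):

```
A = rand(3, 4)
Aptrs = [pointer(A, sub2ind(size(A), 1, j)) for j in 1:size(A, 2)]
# C side: double colsum(double **cols, int ncols);
# result = ccall((:colsum, "libdemo"), Float64,
#                (Ptr{Ptr{Float64}}, Cint), Aptrs, size(A, 2))
```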

Best,
--Tim

On Fri, Nov 4, 2016 at 9:05 AM, Yichao Yu  wrote:

> On Fri, Nov 4, 2016 at 9:22 AM, Alexander Lyapin
>  wrote:
> > There is a topic:
> > https://groups.google.com/d/msg/julia-users/EK9oNzzaoAk/kJqagPL0Ku0J
> >
> > However could some one give an example how to pass 3-d or 4-d array to C
> > function.
> >
> > I have Array{Float64, 4} and for ccall I use Ptr{Ptr{Ptr{Ptr{Float64}}}}
> as
> > Type of parameter. Is there some way to make it more common for
> > Array{Float64, N}??? Thank you
>
> No matter what the dimension is, it's always a `Ptr{eltype(array)}` in
> C (`Ptr{Float64}` in this case) and since it's just a normal pointer,
> you always need to pass the dimensions explicitly to C in other
> arguments.
>


Re: [julia-users] ANN: Highlights.jl

2016-11-03 Thread Tim Holy
Very nice!

--Tim

On Wed, Nov 2, 2016 at 3:14 PM, Michael Hatherly 
wrote:

> I’m pleased to announce the initial 0.1 release of Highlights.jl
>  — a Julia package for
> highlighting source code similar to the well-known Python package called
> Pygments .
>
> The documentation for the package can be found here
> . Currently there are
> only a handful of supported languages and colour schemes, so if your
> favourite language (aside from Julia) is missing then please feel free to add
> it to the requests list
>  or open a PR.
>
> Any bugs or feature requests can be made over on the issue tracker
> .
>
> — Mike
>


Re: [julia-users] Re: Fast vector element-wise multiplication

2016-11-02 Thread Tim Holy
Hmm, that's surprising. Looks like we're using generic broadcasting
machinery for that operation (check out what @which P.*P returns). Might be
good to add .* to this line:
https://github.com/JuliaLang/julia/blob/b7f1aa7554c71d3759702b9c2e14904ebdc94199/base/arraymath.jl#L69.
Want to make a pull request?

Best,
--Tim


Re: [julia-users] Reducing complexity of OffsetArrays

2016-11-01 Thread Tim Holy
There's still a simple model: the indices are the *key* of the entry. Think
about an array as a very special Dict. You could create a Dict with integer
keys, let's say from -2:5. Presumably you wouldn't be exactly happy if
there were some magical way that the number you originally stored as `d[-1]
= 3.2` were alternatively accessible as `d[2]`, simply because the smallest
index was -2 and therefore 3.2 is the "second" entry?

Like a Dict, for an array the value always goes with the key (the indices).
Perhaps this will help:
```
julia> using OffsetArrays

julia> a = OffsetArray(rand(11), -5:5)
OffsetArrays.OffsetArray{Float64,1,Array{Float64,1}} with indices -5:5:
 0.815289
 0.0043941
 0.00403153
 0.478065
 0.150709
 0.256156
 0.934703
 0.672495
 0.428721
 0.242469
 0.43742

julia> idx = OffsetArray(-1:1, -1:1)
OffsetArrays.OffsetArray{Int64,1,UnitRange{Int64}} with indices -1:1:
 -1
  0
  1

julia> b = a[idx]
OffsetArrays.OffsetArray{Float64,1,Array{Float64,1}} with indices -1:1:
 0.150709
 0.256156
 0.934703

julia> a[-1]
0.15070935766983662

julia> b[-1]
0.15070935766983662
```
So indexing `b = a[idx]` means that `b[j] = a[idx[j]]`. Does that help?

Best,
--Tim

On Tue, Nov 1, 2016 at 3:36 PM, Bob Portmann  wrote:

> Like I said, no real practical experience yet. The increase in complexity
> that I fear is the loss of, e.g., writing arr[2,3] and having it not be the
> element in the 2nd row and third column (i.e., the loss of a simple model
> of how things are laid out). Maybe my fears are unfounded. Others don't
> seem concerned it would seem.
>
> I'll check out those packages that you mention.
>
> Thanks,
> Bob
>
> On Sun, Oct 30, 2016 at 2:29 PM, Tim Holy  wrote:
>
>> I'm afraid I still don't understand the claimed big increment in
>> complexity. First, let's distinguish "generic offset arrays" from the
>> OffsetArrays.jl package. If you're happy using OffsetArrays, you don't have
>> to write your own offset-array type. Being able to use an established &
>> tested package reduces your burden a lot, and you can ignore the second
>> half of the devdocs page entirely.
>>
>> If you just want to *use* OffsetArrays.jl, the basic changes in coding
>> style for writing indices-aware code are:
>>
>> - any place you used to call `size`, you probably want to call `indices`
>> instead (and likely make minor adjustments elsewhere, since `indices`
>> returns a tuple-of-ranges---but such changes tend to be very obvious);
>> - check all uses of `similar`; some will stay as-is, others will migrate
>> to `similar(f, inds)` style.
>>
>> In my experience, that's just about it. The devdocs goes into quite a lot
>> of detail to explain the rationale, but really the actual changes are quite
>> small. While you can't quite do it via `grep` and `sed`, to me that just
>> doesn't seem complicated.
>>
>> Where the pain comes is that if you're converting old code, you sometimes
>> have to think your way through it again---"hmm, what do I really mean by
>> this index"? If your code had complicated indexing the first time you wrote
>> it, unfortunately you're going to have to think about it carefully again;
>> so in some cases, "porting" code is almost as bad as writing it the first
>> time. However, if you write indices-aware code in the first place, in my
>> experience the added burden is almost negligible, and in quite a few cases
>> the ability to offset array indices makes things *easier* (e.g., "padding"
>> an array on its edges is oh-so-much-clearer than it used to be, it's like a
>> different world). That's the whole reason I implemented this facility in
>> julia-0.5: to make life easier, not to make it harder. (Personally I think
>> the whole argument over 0-based and 1-based indexing is stupid; it's the
>> ability to use arbitrary indices that I find interesting & useful, and it
>> makes most of my code prettier.)
>>
>> For examples of packages that use OffsetArrays, check the following:
>> - CatIndices
>> - FFTViews
>> - ImageFiltering
>>
>> ImageFiltering is a mixed bag: there's a small performance penalty in a
>> few cases (even if you use @unsafe) because LLVM doesn't always
>> optimize code as well as it could in principle (maybe @polly will help
>> someday...). Because image filtering is an extraordinarily
>> performance-sensitive operation, there are a few places where I had to make
>> some yucky hand optimizations.
>>
>> Again, I'm very happy to continue this conversation---I'd really like to

Re: [julia-users] Reducing complexity of OffsetArrays

2016-10-30 Thread Tim Holy
s are so
> scarce in the julia ecosystem. I'm just looking ahead to a world where it
> will no longer be possible to assume AbstractArrays are OneTo. This seems
> more painful than telling the 0-based crowd to code against 1-based arrays.
>
>
> On Sat, Oct 29, 2016 at 10:40 AM, Tim Holy  wrote:
>
>> I should add, abstract discussions are all well and good, but I fully
>> recognize you may have something very specific in mind and my answer
>> simply
>> fails to understand your concern. If you can give one or more concrete
>> examples of where the current system is either bug-prone or a pain in the
>> neck, I'd love to see what you mean.
>>
>> Best,
>> --Tim
>>
>> On Thursday, October 27, 2016 5:21:26 PM CDT Bob Portmann wrote:
>> > TL;DR I think the type system should take care of converting between
>> > different types of arrays and thus free the user to code against the
>> type
>> > of array they want (OneTo, ZeroTo, Offset, etc).
>> >
>> > Maybe I have a deep mis-understanding of how the new generalized offset
>> > arrays are supposed to work in Julia. If so I'd be happy to be set
>> straight.
>> > If not then I think the present system will lead to unnecessary
>> complexity
>> > and bug-ridden code. In the present system one can define array types
>> that
>> > are not one-based but can have arbitrary offsets. This is a great idea
>> but
>> > to make it work a very general system for indexing relative to the ends
>> of
>> > the array has been devised. If one writes a function that accepts an
>> > `AbstractArray` then one MUST use this more general system or produce
>> bugs
>> > when e.g. a `ZeroToArray` is passed in. This puts a large burden on
>> writers
>> > of library code that works on arrays and I think is an unnecessary
>> > complexity.
>> >
>> > Since Fortran arrays are a nice example, let's look at the situation in
>> > Fortran. If I define an array in a function `real arr(-10:10)` then I
>> would
>> > index it using indices that range from -10 to 10. If I pass this into
>> > another function that declared the array using its total size (usually
>> > passed in separately or in a parameter statement) `real arr(21)` then in
>> > that subroutine one would index the array using "normal" indices that
>> range
>> > from 1 to 21. I.E., you state in the function how you want to treat the
>> > array and are not forced to work with it in offset form unless you want
>> to. In
>> > my opinion, this is what makes using offset arrays in Fortran sane and
>> is
>> > the key thing that Julia should try to emulate.
>> >
>> > One way to get this behavior in Julia would be to use the type system
>> and
>> > `convert`. Since `AbstractArray` has meant until 0.5 a `OneTo` array I
>> > think it is wise that it remain that way so that all old code will work
>> > unchanged (even with the new array types). Thus, I propose adding a new
>> > top-level type above AbstractArray type to capture all arrays something
>> > like:
>> >
>> > ```
>> > Array{T,N} <: DenseArray{T,N} <: AbstractArray{T,N} <:
>> > AbstractAllArray{T,N} <: Any
>> > ```
>> >
>> > And the offset arrays would be subtypes of `AbstractAllArrays` on a
>> > different branch
>> >
>> > ```
>> > OffsetArray{T,N} <: AbstractOffsetArray{T,N} <: AbstractAllArray{T,N}
>> <: Any
>> > ```
>> >
>> > And similarly for `ZeroToArray`.
>> >
>> > Then, if one declares an Array as
>> >
>> > ```
>> > function func(arr::AbstractArray)
>> > ```
>> >
>> > one can safely assume it is a 1-based Array inside `func`. If an offset
>> > array is passed into `func` then it is automatically converted to a
>> `OneTo`
>> > array inside `func`. This is the key point of this proposal and is
>> similar
>> > to the auto-conversion of, e.g., Int to Floats in a function call.
>> > Similarly if one declares an array as a `ZeroToArray` in a function and
>> > passes in an `Array` then it will be converted to a `ZeroToArray` in the
>> > function. Some conversions would need more information and thus would be
>> > disallowed (e.g., `Array` to `OffsetArray`). I think this system would
>> go a
>> > long way towards making `ZeroToArray`s and `OffsetArray`s simple to use
>> in
>> > Julia w/o using the generalized indexing techniques introduced in Julia
>> > 0.5. Note that it would still be possible to use the `AbstractAllArray`
>> > type and accept all arrays w/o conversion and use the generalized
>> indexing
>> > techniques.
>> >
>> > I'm curious what people think about this.
>> >
>> > Bob
>> >
>> > ps Paste into github to see in formatted form. I'm not sure how to get
>> that
>> > in an email.
>>
>>
>>
>


Re: [julia-users] Reducing complexity of OffsetArrays

2016-10-29 Thread Tim Holy
I should add, abstract discussions are all well and good, but I fully 
recognize you may have something very specific in mind and my answer simply 
fails to understand your concern. If you can give one or more concrete 
examples of where the current system is either bug-prone or a pain in the 
neck, I'd love to see what you mean.

Best,
--Tim

On Thursday, October 27, 2016 5:21:26 PM CDT Bob Portmann wrote:
> TL;DR I think the type system should take care of converting between
> different types of arrays and thus free the user to code against the type
> of array they want (OneTo, ZeroTo, Offset, etc).
> 
> Maybe I have a deep mis-understanding of how the new generalized offset
> arrays are supposed to work in Julia. If so I'd be happy to be set straight.
> If not then I think the present system will lead to unnecessary complexity
> and bug-ridden code. In the present system one can define array types that
> are not one-based but can have arbitrary offsets. This is a great idea but
> to make it work a very general system for indexing relative to the ends of
> the array has been devised. If one writes a function that accepts an
> `AbstractArray` then one MUST use this more general system or produce bugs
> when e.g. a `ZeroToArray` is passed in. This puts a large burden on writers
> of library code that works on arrays and I think is an unnecessary
> complexity.
> 
> Since Fortran arrays are a nice example, let's look at the situation in
> Fortran. If I define an array in a function `real arr(-10:10)` then I would
> index it using indices that range from -10 to 10. If I pass this into
> another function that declared the array using its total size (usually
> passed in separately or in a parameter statement) `real arr(21)` then in
> that subroutine one would index the array using "normal" indices that range
> from 1 to 21. I.E., you state in the function how you want to treat the
> array and are not forced to work with it in offset form unless you want to. In
> my opinion, this is what makes using offset arrays in Fortran sane and is
> the key thing that Julia should try to emulate.
> 
> One way to get this behavior in Julia would be to use the type system and
> `convert`. Since `AbstractArray` has meant until 0.5 a `OneTo` array I
> think it is wise that it remain that way so that all old code will work
> unchanged (even with the new array types). Thus, I propose adding a new
> top-level type above AbstractArray type to capture all arrays something
> like:
> 
> ```
> Array{T,N} <: DenseArray{T,N} <: AbstractArray{T,N} <:
> AbstractAllArray{T,N} <: Any
> ```
> 
> And the offset arrays would be subtypes of `AbstractAllArrays` on a
> different branch
> 
> ```
> OffsetArray{T,N} <: AbstractOffsetArray{T,N} <: AbstractAllArray{T,N} <: Any
> ```
> 
> And similarly for `ZeroToArray`.
> 
> Then, if one declares an Array as
> 
> ```
> function func(arr::AbstractArray)
> ```
> 
> one can safely assume it is a 1-based Array inside `func`. If an offset
> array is passed into `func` then it is automatically converted to a `OneTo`
> array inside `func`. This is the key point of this proposal and is similar
> to the auto-conversion of, e.g., Int to Floats in a function call.
> Similarly if one declares an array as a `ZeroToArray` in a function and
> passes in an `Array` then it will be converted to a `ZeroToArray` in the
> function. Some conversions would need more information and thus would be
> disallowed (e.g., `Array` to `OffsetArray`). I think this system would go a
> long way towards making `ZeroToArray`s and `OffsetArray`s simple to use in
> Julia w/o using the generalized indexing techniques introduced in Julia
> 0.5. Note that it would still be possible to use the `AbstractAllArray`
> type and accept all arrays w/o conversion and use the generalized indexing
> techniques.
> 
> I'm curious what people think about this.
> 
> Bob
> 
> ps Paste into github to see in formatted form. I'm not sure how to get that
> in an email.




Re: [julia-users] Reducing complexity of OffsetArrays

2016-10-29 Thread Tim Holy
At least with the non-1 arrays I know of now, you can always get the OneTo 
array form with parent(A). However, that's not something I feel comfortable 
saying will be guaranteed forever.

In terms of the "automatic conversion" part of your proposal, I'm a bit 
uncomfortable because in my view the indices mean something: how should
copy!(A, B)
work if B's indices are not contained within A's indices? To me the indices 
*are* the identity of a location in the array, so making it too easy to rename 
things (esp. automatically) seems like a formula for serious bugs: if I wanted 
to copy my data to indices -5:5, it's a bug if it gets stuck into the slots 
1:11 instead because of some silent index-renaming.

If I understand correctly, regarding complexity, your biggest concern is about 
the range types "affiliated" with a non-1 array type. First, those are 
optional, 
in the sense that many algorithms can be written without them. What makes them 
seemingly necessary in some circumstances is `similar` and the fact that Julia 
(unlike Fortran) has a type system with significant performance implications.

For example, suppose you have defined a "sparse" AbstractArray type with non-1 
indices, and a matching dense array type. Let's say you pass in one of these 
sparse arrays S to some *generic* algorithm (perhaps one that someone else 
wrote, without your use case in mind)  which needs to allocate some kind of 
dense output. Then what you need to say is "allocate me an array similar to S, 
but make it dense, with the same indices as S." `similar(S)` might not work, 
because that might create a sparse output; nor would `Array{eltype(S)}
(size(S))` because `S` might have non-1 indices and `Array` doesn't support 
that.

So the adopted solution is to allow you to say

similar(dims->Array{eltype(S)}(dims), indices(S))

and your package needs to specialize this method to do what you want it to do. 
Notice here that you have only one mechanism to control *which* type of array 
gets created: the type returned by `indices(S)`. If it's a OneTo-tuple this 
will just create an Array, but if it's something else then something else will 
be created. The author of a non-1 array package gets to customize this if s/he 
takes advantage of the opportunity to define a new index type; alternatively, 
you could just make sure that the `OffsetArrays` package has been loaded, and 
then use UnitRange for the return value of `indices`, and it will create an 
`OffsetArray` for you. If you're happy with that, you don't have to define a 
custom indices type.
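For concreteness, a minimal sketch of this mechanism (untested; assumes Julia 0.5 with the OffsetArrays package loaded, so that `indices` returns plain `UnitRange`s and `similar` produces an `OffsetArray`):

```
using OffsetArrays

S = OffsetArray(rand(5), -2:2)             # a non-1 vector with indices -2:2
inds = indices(S)                          # (-2:2,)
B = similar(dims->Array{Int}(dims), inds)  # dense Int storage, same indices as S
# indices(B) should match inds
```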

Now, we could get around this by declaring "one true non-1 array type," but 
this might be constraining. For example, https://github.com/JuliaArrays/
CatIndices.jl defines a BidirectionalVector array type that causes `shift!` and 
`unshift!` to adjust the indices of the first element, complementary to how 
`push!` and `pop!` adjust the indices of the last element; I don't think this 
is something you could assume was desired in general, but it's very useful in 
particular contexts.

I'm all in favor of simplifying this where possible, and would love ideas for 
how to do it. But I'm not sure that Fortran is a perfect model, because the 
flexibility of Julia's type system allows useful behaviors that make Fortran 
look so...static. 

Best,
--Tim

On Thursday, October 27, 2016 5:21:26 PM CDT Bob Portmann wrote:
> TL;DR I think the type system should take care of converting between
> different types of arrays and thus free the user to code against the type
> of array they want (OneTo, ZeroTo, Offset, etc).
> 
> Maybe I have a deep mis-understanding of how the new generalized offset
> arrays are supposed to work in Julia. If so I'd be happy to be set straight.
> If not then I think the present system will lead to unnecessary complexity
> and bug-ridden code. In the present system one can define array types that
> are not one-based but can have arbitrary offsets. This is a great idea but
> to make it work a very general system for indexing relative to the ends of
> the array has been devised. If one writes a function that accepts an
> `AbstractArray` then one MUST use this more general system or produce bugs
> when e.g. a `ZeroToArray` is passed in. This puts a large burden on writers
> of library code that works on arrays and I think is an unnecessary
> complexity.
> 
> Since Fortran arrays are a nice example, let's look at the situation in
> Fortran. If I define an array in a function `real arr(-10:10)` then I would
> index it using indices that range from -10 to 10. If I pass this into
> another function that declared the array using its total size (usually
> passed in separately or in a parameter statement) `real arr(21)` then in
> that subroutine one would index the array using "normal" indices that range
> from 1 to 21. I.E., you state in the function how you want to treat the
> array and are not forced to work with it in offset form unless you want to. In
> my opinion, this is what mak

Re: [julia-users] Re: ImageView very slow

2016-10-25 Thread Tim Holy
Yes, it should be pretty responsive now---ImageView doesn't have to go
through a Python layer, so I think it's a bit snappier. It's been a while
since I've used Matlab, but at least when I first compared them ImageView
was considerably better than Matlab for interactive display of movies, etc.
But Tk does have pretty significant limitations; now that the Images
rewrite is largely done I'm planning to rewrite ImageView using Gtk. Not
started yet, though.

On Tue, Oct 25, 2016 at 5:24 PM, Paul B.  wrote:

> Tim, when you say you fixed this, you mean the deprecation warnings?  I'll
> definitely use the new version of Images.  I haven't had a chance yet since
> posting this.  Images and related libraries look great.
>
> The graphics functionality in Julia that deal with bitmaps like ImageView
> and pcolor in PyPlot seem slow.  I'm used to imagesc in MATLAB which can be
> unresponsive for very large images but not quite as sluggish as what I've
> been seeing.  From what little I found, it sounds like a lot of these use
> Tk which is controlled through text commands.  Am I mistaken?  Is there
> something maybe simpler but quicker than ImageView that will display a
> pseudo-color image?  Otherwise I might just bang something up that uses
> Images to save a PNG file then calls an external viewer to display it.
>
>>


Re: [julia-users] Re: Trait for exactness of numbers

2016-10-25 Thread Tim Holy
> Why not use dispatch instead?

Because subtyping isn't powerful enough for all needs. For example:


julia> using Unitful

julia> const mm = u"mm"
mm

julia> isa(3.2mm, AbstractFloat)
false


You'd probably like to use the fancy logic of `FloatRange` if you're
constructing a range `3.2mm:0.1mm:4.8mm`, so the solution is to dispatch on
a trait that indicates that arithmetic isn't exact (which is what really is
going on inside that code anyway---who cares what kind of number type it
is).
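A minimal sketch of such a trait (Julia 0.5 syntax; `ArithmeticStyle`, `Exact`, `Inexact`, and `construct_range` are hypothetical names for illustration, not part of Base):

```
abstract ArithmeticStyle
immutable Exact   <: ArithmeticStyle end
immutable Inexact <: ArithmeticStyle end

# Defaults: exact for integers/rationals, inexact for everything else
arithmetic_style{T<:Number}(::Type{T})   = Inexact()
arithmetic_style{T<:Integer}(::Type{T})  = Exact()
arithmetic_style{T<:Rational}(::Type{T}) = Exact()
arithmetic_style{T}(::Type{Complex{T}})  = arithmetic_style(T)

# A quantity type (e.g. Unitful's) could opt in regardless of its supertype:
# arithmetic_style{T}(::Type{Quantity{T}}) = arithmetic_style(T)

# Range construction can then dispatch on the trait, not the number type:
construct_range(start, step, stop) =
    construct_range(arithmetic_style(typeof(start)), start, step, stop)
construct_range(::Exact, start, step, stop)   = StepRange(start, step, stop)
construct_range(::Inexact, start, step, stop) = colon(start, step, stop)  # FloatRange-style lifting
```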

Best,
--Tim

On Tue, Oct 25, 2016 at 12:55 PM, Ismael Venegas Castelló <
ismael.vc1...@gmail.com> wrote:

> Why not use dispatch instead?
>
> isexact(::Integer) = true
> isexact(::Rational) = true
> isexact(x::Complex) = isexact(x.re)
> isexact(::Any) = false


Re: [julia-users] Re: Trait for exactness of numbers

2016-10-24 Thread Tim Holy
+1. We need number traits for a variety of circumstances; I was also
contemplating them as a step in generalizing the FloatRange/StepRange
distinction, for example to Unitful numbers (numbers with physical units).
You need type-stability for the created range object, so I think a trait is
the only way to go here.

--Tim

On Mon, Oct 24, 2016 at 3:30 PM, Jeffrey Sarnoff 
wrote:

> for values, something like this may do:
>
> function isexact(x)
>     if isa(x, Integer) || isa(x, Rational)
>         true
>     elseif isa(x, Complex)
>         isexact(x.re)
>     else
>         false
>     end
> end
>
>
>
>
> On Monday, October 24, 2016 at 2:09:09 PM UTC-4, jw3126 wrote:
>>
>> A couple of times I was in a situation, where I had two algorithms
>> solving the same problem, one which was faster and one which was more
>> numerically stable. Now for some types of numbers (FloatingPoint,
>> Complex{Float64}...) numerical stability is important while for others it
>> does not matter (e.g. Integer, Complex{Int64}...) etc.
>> In such situations it would be handy to have a mechanism (a trait) which
>> decides whether a type is exact or prone to rounding.
>> Maybe there is already such a thing somewhere?
>>
>


Re: [julia-users] Arrays with custom indices examples?

2016-10-24 Thread Tim Holy
That document is aimed at developers to tell them how to make their package
ready for arrays that have custom indices. As a user, the key line is:

> Such array types are expected to be supplied through packages.

I recommend the OffsetArrays package.

Great to see someone interested in trying this. Definitely report any bugs
you find; as Isaiah said, this is still experimental functionality, and
many packages probably aren't fully ready yet. It will mostly take users
who want that functionality to notice where it needs to be added.
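For instance, the 0:4 × 0:4 array from the original question can be built like this (untested sketch; requires the OffsetArrays package):

```
using OffsetArrays

mg = OffsetArray(zeros(Int, 5, 5), 0:4, 0:4)  # wrap a standard array with 0-based indices
mg[0, 0] = 1        # now legal: indices run 0:4 in each dimension
indices(mg)         # (0:4, 0:4)
```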

Best,
--Tim

On Mon, Oct 24, 2016 at 1:34 PM, Angel de Vicente <
angel.vicente.garr...@gmail.com> wrote:

> Hi,
>
> Isaiah Norton  writes:
> > mg = zeros(Int,(0:4,0:4))
> >
> > This isn't related to indexing -- it doesn't work with `1:4`
> > either.
>
> But it doesn't complain if I do:
> ,
> | julia> mg=zeros(Int,(0:4))
> | 5-element Array{Int64,1}:
> |  0
> |  0
> |  0
> |  0
> |  0
> `
>
> (though then I cannot access mg[0]; the indices would go as for a
> standard Julia array, 1 to 5.)
>
>
> > Use:
> > zeros(Int, 4, 4)
>
> Probably I'm missing something very basic, but then I would just have a
> normal array, where the indices would go from 1:4, in each dimension.
>
> In the page on "Arrays with custom indices" I don't see any clear
> example, so I'm a bit lost as to how I could create an array with
> non-standard indices and how to use it. Any basic example would be
> greatly appreciated.
>
> Thanks and sorry if I'm asking very basic questions, I just had a week
> or so of exposure to Julia...
> --
> Ángel de Vicente
> http://www.iac.es/galeria/angelv/
>


Re: [julia-users] congratulations to Indian dost

2016-10-21 Thread Tim Holy
Don't look at me, I swear it was just a simple change in the return type of
a function---they should have used a parametric type definition. Another
$1billion down the drain...

Best,
--Tim

On Wed, Oct 19, 2016 at 11:50 PM, Michele Zaffalon <
michele.zaffa...@gmail.com> wrote:

> Tim Holy: I had hoped you learnt to be more careful with untested
> versions. See what happened this time around: https://www.theguardian.com/
> science/2016/oct/20/bad-day-for-space-probes-one-lost-on-
> mars-another-in-safe-mode-at-jupiter?
>
> On Wednesday, September 24, 2014 at 5:27:52 PM UTC+2, Tim Holy wrote:
>>
>> "Sorry, sir, but our Mars probe crashed. It's a complete loss."
>>
>> "Oh no! Why?"
>>
>> "Well, just before it was about to dock, Tim Holy tagged a new version of
>> Images. Turns out it had a bug in it that prevented the cameras from
>> saving
>> images to disk. So we were flying blind. We tried rolling back to an
>> earlier
>> version, but recent changes to Pkg meant that `pin` wasn't working
>> either. So
>> we hit the planet."
>>
>> "Next time, don't run master."
>>
>> --Tim
>>
>> On Wednesday, September 24, 2014 08:12:35 AM John Myles White wrote:
>> > Not that I’m aware of. I’d say the thread is probably off-topic.
>> >
>> >  — John
>> >
>> > On Sep 24, 2014, at 8:10 AM, Stefan Karpinski 
>> wrote:
>> > > Is there any evidence that Julia was used to accomplish that?
>> > >
>> > > On Wed, Sep 24, 2014 at 11:05 AM, John Myles White
>> > >  wrote: I think this is the flight to Mars
>> that
>> > > India just finsihed.
>> > >
>> > >  — John
>> > >
>> > > On Sep 24, 2014, at 8:04 AM, Stefan Karpinski 
>> wrote:
>> > >> I have no idea what this is about. Can you clarify?
>> > >>
>> > >> On Wed, Sep 24, 2014 at 3:47 AM, K Leo  wrote:
>> > >> for the wonderful achievement with Mangalyaan!
>> > >>
>> > >> With a budget less than a Hollywood movie, I bet they must have
>> largely
>> > >> used (and supported?) open sources - Julia included?
>>
>>


Re: [julia-users] Please help me to convert a BitArray to an Integer array in Julia

2016-10-17 Thread Tim Holy
julia> a = bitrand(3,5)
3×5 BitArray{2}:
  true  false  false  true   true
 false   true   true  true  false
  true   true   true  true   true

julia> Int.(a)
3×5 Array{Int64,2}:
 1  0  0  1  1
 0  1  1  1  0
 1  1  1  1  1

julia> convert(Array{Int}, a)
3×5 Array{Int64,2}:
 1  0  0  1  1
 0  1  1  1  0
 1  1  1  1  1


On Mon, Oct 17, 2016 at 2:55 AM, Sujoy Datta  wrote:

>
>
> I am a new user of Julia. Please help me to convert an n×m BitArray to an
> n×m IntegerArray.
> What I want is to print 1 for 'true' and 0 for 'false'.
> Thank you in advance.
>


Re: [julia-users] Re: ImageView very slow

2016-10-16 Thread Tim Holy
Hmm, I fixed this on the images-next branch but forgot to fix it on master.
Will be fixed in https://github.com/timholy/Images.jl/pull/564.

If you're just getting started with Julia & Images, you might want to
consider using the upcoming version of Images. See
https://github.com/timholy/Images.jl/issues/542#issuecomment-254059092.

Best,
--Tim


Re: [julia-users] ANN: RawArray.jl

2016-10-13 Thread Tim Holy
A couple of points:
- As mentioned by others, NRRD doesn't have performance issues (except
likely for very small arrays, due to the overhead of parsing). If you just
want a raw dump of memory, you can get that, and if it's big it uses
`Mmap.mmap` when it reads the data back in. So you can read terabyte-sized
arrays.
- in my opinion, NRRD does have a few deficiencies (
https://sourceforge.net/p/teem/bugs/14/), of which the most serious is
probably the lack of a real test suite. (I'd guess this is probably the
main reason it hasn't taken over the world when it comes to storing
images.) But I don't think it would affect how you would use it for storing
`Complex{Float32}`s or whatever.
- the `images-next` branch is the current state-of-the-art in julia's
support for NRRD: it's almost a complete rewrite from previous versions, as
part of https://github.com/timholy/Images.jl/issues/542. Aside from being
better-designed and having better support for the standard, it no longer
does any reinterpretation of data unless the metadata are such that it is
certain this is what should happen.

That said, I'm not evangelizing its use, just wanting to make sure you
understand what NRRD is and isn't.

Best,
--Tim

On Thu, Oct 13, 2016 at 8:05 AM, Páll Haraldsson 
wrote:

>
> [Explaining more, and correcting typo..]
>
> On Sunday, October 9, 2016 at 12:04:35 AM UTC, Páll Haraldsson wrote:
>
>> FLIF is not a replacement for all uses (multidimensional; would be
>> interesting to know if it could be extended to that), but seems to be the best
>> option for non-lossy image compression:
>>
>
> Of course, say three dimensional, could be trivially, concatenated FLIF
> files/bytestreams of 2D slices.
>
> But I really meant, could there be a way to compress whole array/"image",
> similar to how wavelet (and fft) can be generalized to more dimensions?
>


Re: [julia-users] Re: indirect array indexing

2016-10-01 Thread Tim Holy
For simple cases there's also the IndirectArrays package:
https://github.com/JuliaArrays/IndirectArrays.jl

Best,
--Tim
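
For reference, the underlying pattern in base Julia (a minimal sketch with hypothetical `vals`/`idx` names): an integer index array selecting from a value table. IndirectArrays wraps the same two pieces in a lazy, array-like object instead of materializing the copy.

```julia
# Indirect indexing in base Julia: a small table of values plus an index
# array; element i of the result is vals[idx[i]].
vals = [10.0, 20.0, 30.0]
idx  = [1, 3, 2, 1]
A = vals[idx]    # materializes a new array: [10.0, 30.0, 20.0, 10.0]
```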

On Fri, Sep 30, 2016 at 10:51 PM,  wrote:

> OK!  many thanks for your fast reply
> --a
>
>
> On Friday, September 30, 2016 at 3:53:43 PM UTC+2, apao...@gmail.com
> wrote:
>>
>> Hi guys,
>>
>> I'm learning the language while implementing an advanced package for
>> geometric and solid modeling.
>> My question is: What is the right idiom (shorter and/or faster) for
>> writing this kind of array indexing:
>>
>> linesFromLineArray(V,EV) = Any[[V[:,EV[:,k][1]] V[:,EV[:,k][2]]  ] for
>> k=1:size(EV,2)]
>>
>> The efficiency is the strongest issue, since this method should provide
>> the indirect indexing for any kind of cellular complexes.
>> Many thanks for your help
>>
>>
>> julia> V,EV
>> (
>> 2x10 Array{Float64,2}:
>>  0.13611  0.14143  0.38501  0.42103  0.96927  0.90207  0.0 0.11508
>>  0.61437  0.52335
>>  0.59824  0.58921  0.25964  0.24118  0.19741  0.34109  0.0213  0.0
>>  0.05601  0.17309,
>>
>> 2x5 Array{Int64,2}:
>>  1  3  5  7   9
>>  2  4  6  8  10)
>>
>> julia> linesFromLineArray(V,EV)
>> 5-element Array{Any,1}:
>>  2x2 Array{Float64,2}:
>>  0.13611  0.14143
>>  0.59824  0.58921
>>  2x2 Array{Float64,2}:
>>  0.38501  0.42103
>>  0.25964  0.24118
>>  2x2 Array{Float64,2}:
>>  0.96927  0.90207
>>  0.19741  0.34109
>>  2x2 Array{Float64,2}:
>>  0.0 0.11508
>>  0.0213  0.0
>>  2x2 Array{Float64,2}:
>>  0.61437  0.52335
>>  0.05601  0.17309
>>
>>


Re: [julia-users] Re: Any 0.5 performance tips?

2016-09-29 Thread Tim Holy
No real clue about what's happening, but my immediate thought was that if
your algorithm is iterative and uses some kind of threshold to decide
convergence, then it seems possible that a change in the accuracy of some
computation might lead to it getting "stuck" occasionally due to roundoff
error. That's probably more likely to happen because of some kind of
worsening rather than some improvement, but either is conceivable.

If that's even a possible explanation, I'd check for unusually-large
numbers of iterations and then print some kind of convergence info.

Best,
--Tim

On Thu, Sep 29, 2016 at 1:21 PM, Andrew  wrote:

> In the 0.4 version the above times are pretty consistent. I never observe
> any several thousand allocation calls. I wonder if compilation is occurring
> repeatedly.
>
> This isn't terribly pressing for me since I'm not currently working on
> this project, but if there's an easy fix it would be useful for future work.
>
> (sorry I didn't mean to post twice. For some reason hitting spacebar was
> interpreted as the post command?)
>
>
> On Thursday, September 29, 2016 at 2:15:35 PM UTC-4, Andrew wrote:
>>
>> I've used @code_warntype everywhere I can think to and I've only found
>> one Core.box. The @code_warntype looks like this
>>
>> Variables:
>>   #self#::#innerloop#3133{#bellman_obj}
>>   state::State{IdioState,AggState}
>>   EVspline::Dierckx.Spline1D
>>   model::Model{CRRA_Family,AggState}
>>   policy::PolicyFunctions{Array{Float64,6},Array{Int64,6}}
>>   OO::NW
>>   #3130::##3130#3134{State{IdioState,AggState},Dierckx.Spline1
>> D,Model{CRRA_Family,AggState},PolicyFunctions{Array{Float64,
>> 6},Array{Int64,6}},NW,#bellman_obj}
>>
>> Body:
>>   begin
>>   #3130::##3130#3134{State{IdioState,AggState},Dierckx.Spline1
>> D,Model{CRRA_Family,AggState},PolicyFunctions{Array{Float64,
>> 6},Array{Int64,6}},NW,#bellman_obj} = $(Expr(:new,
>> ##3130#3134{State{IdioState,AggState},Dierckx.Spline1D,Model
>> {CRRA_Family,AggState},PolicyFunctions{Array{Float64,6},
>> Array{Int64,6}},NW,#bellman_obj}, :(state), :(EVspline), :(model),
>> :(policy), :(OO), :((Core.getfield)(#self#,:bellman_obj)::#bellman_obj)))
>>   SSAValue(0) = #3130::##3130#3134{State{IdioS
>> tate,AggState},Dierckx.Spline1D,Model{CRRA_Family,AggState},
>> PolicyFunctions{Array{Float64,6},Array{Int64,6}},NW,#bellman_obj}
>>   (Core.setfield!)((Core.getfield)(#self#::#innerloop#3133{#
>> bellman_obj},:obj)::CORE.BOX,:contents,SSAValue(0))::##3130#
>> 3134{State{IdioState,AggState},Dierckx.Spline1D,Model{CRRA_
>> Family,AggState},PolicyFunctions{Array{Float64,6},Array{Int64,6}},NW,#
>> bellman_obj}
>>   return SSAValue(0)
>>   end::##3130#3134{State{IdioState,AggState},Dierckx.Spline1D,
>> Model{CRRA_Family,AggState},PolicyFunctions{Array{Float64,
>> 6},Array{Int64,6}},NW,#bellman_obj}
>>
>>
>> I put the CORE.BOX in all caps near the bottom.
>>
>> I have no idea if this is actually a problem. The return type is stable.
>> Also, this function appears in an outer loop.
>>
>> What I noticed putting a @time in places is that in 0.5, occasionally
>> calls to my nonlinear equation solver take a really long time, like here:
>>
>>   0.069224 seconds (9.62 k allocations: 487.873 KB)
>>   0.07 seconds (39 allocations: 1.922 KB)
>>   0.06 seconds (29 allocations: 1.391 KB)
>>   0.11 seconds (74 allocations: 3.781 KB)
>>   0.09 seconds (54 allocations: 2.719 KB)
>>   0.08 seconds (54 allocations: 2.719 KB)
>>   0.08 seconds (49 allocations: 2.453 KB)
>>   0.07 seconds (44 allocations: 2.188 KB)
>>   0.07 seconds (44 allocations: 2.188 KB)
>>   0.06 seconds (39 allocations: 1.922 KB)
>>   0.07 seconds (39 allocations: 1.922 KB)
>>   0.06 seconds (39 allocations: 1.922 KB)
>>   0.05 seconds (34 allocations: 1.656 KB)
>>   0.05 seconds (34 allocations: 1.656 KB)
>>   0.04 seconds (29 allocations: 1.391 KB)
>>   0.04 seconds (24 allocations: 1.125 KB)
>>   0.007399 seconds (248 allocations: 15.453 KB)
>>   0.09 seconds (30 allocations: 1.594 KB)
>>   0.04 seconds (25 allocations: 1.328 KB)
>>   0.04 seconds (25 allocations: 1.328 KB)
>>
>>   0.10 seconds (70 allocations: 3.719 KB)
>>   0.072703 seconds (41.74 k allocations: 1.615 MB)
>>
>>
>>
>>
>>
>>
>>
>>
>> On Thursday, September 29, 2016 at 1:37:18 AM UTC-4, Kristoffer Carlsson
>> wrote:
>>>
>>> Look for Core.Box in @code_warntype. See https://github.com/JuliaLang/j
>>> ulia/issues/15276
>>
>>


Re: [julia-users] Re: Why was a fundamental indexing inconsistency introduced in 0.5?

2016-09-26 Thread Tim Holy
Try explaining both indexing behaviors to a newcomer and you'll see the
difference.

Old behavior: `3:3` causes the dimension to be retained; `3` causes the
dimension to be dropped if it's a 'trailing dimension' (all the later
indices are also scalars) but retained if it's a 'leading dimension'
(there's at least one vector dimension after this one).

New behavior: `3:3` causes the dimension to be retained; `3` causes the
dimension to be dropped.

A lot simpler, no? If the former rule seemed better, it's surely because of
habit, not because it's actually simpler.
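
In current Julia the new rule can be checked directly (a minimal sketch):

```julia
A = reshape(collect(1:12), 3, 4)

# Range index: the dimension is retained.
size(A[3:3, :])  # (1, 4)

# Scalar index: the dimension is dropped, regardless of position.
size(A[3, :])    # (4,)
```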


Re: [julia-users] fieldtype() for parameterised types

2016-09-19 Thread Tim Holy
You don't have to do this in the type definition, you can do it at 
introspection time:

fieldtype(Foo{Float64,3},:a) returns Array{T,N}
fieldtype(Foo{Float64,3},:b) returns Array{Float64,3}

More generally:

julia> T, N = TypeVar(:T, true), TypeVar(:N, true) 
(T,N) 

julia> fieldtype(Foo{T,N}, :a).parameters[1].bound 
false 

julia> fieldtype(Foo{T,N}, :b).parameters[1].bound 
true

You can inspect the pieces to see how this all works. (`dump` is your friend.)

Best,
--Tim

On Monday, September 19, 2016 6:36:15 PM CDT 'Greg Plowman' via julia-users 
wrote:
> For a parameterised composite type, I want to distinguish between
> fields defined with parameters and generic fields.
> 
> An example is probably best:
> 
> type Foo{T,N}
> a::Array
> b::Array{T,N}
> end
> 
> fieldtype(Foo,:a) returns Array{T,N}
> fieldtype(Foo,:b) returns Array{T,N}
> 
> 
> 
> And if I use different parameters:
> 
> type Bar{S,M}
> a::Array
> b::Array{S,M}
> end
> 
> fieldtype(Bar,:a) returns Array{T,N}
> fieldtype(Bar,:b) returns Array{S,M}
> 
> 
> So if I'm careful to use different parameters to the default for the
> field type (Array in this case), I could perhaps distinguish between
> parameterised/non-parameterised fields.
> 
> However, it would be nice if fieldtype returned no parameters for case
> where they are not specified for the field.
> i.e. fieldtype(Foo,:a) returns Array
> 
> Are there any other suggestions?




Re: [julia-users] Read a structured binary file with big endian unsigned integers (4 bytes) and big endian floats (4 bytes)

2016-09-18 Thread Tim Holy
See ntoh and hton. Perhaps even better, see StrPack.jl.

--Tim
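
A minimal sketch of the `ntoh` approach (field names taken from the record layout in the post; untested against the actual .bi5 data): read each big-endian field with `read`, swap byte order with `ntoh`, and recover the Float32 fields via `reinterpret`.

```julia
# Read one big-endian tick record from an IO stream.
# `ntoh` converts network (big-endian) byte order to host order.
function read_tick(io::IO)
    date   = ntoh(read(io, UInt32))
    ask    = ntoh(read(io, UInt32))
    bid    = ntoh(read(io, UInt32))
    askvol = reinterpret(Float32, ntoh(read(io, UInt32)))
    bidvol = reinterpret(Float32, ntoh(read(io, UInt32)))
    return (date, ask, bid, askvol, bidvol)
end
```

Looping `read_tick` until `eof(io)` then reproduces the whole array of records.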

On Sunday, September 18, 2016 1:13:43 AM CDT Femto Trader wrote:
> Hello,
> 
> I'd like to read this file
> http://www.dukascopy.com/datafeed/EURUSD/2016/02/14/20h_ticks.bi5
> using Julia.
> 
> It's a LZMA compressed file.
> 
> I can decompressed it using
> cp 20h_ticks.bi5 20h_ticks.xz
> xz --decompress --format=lzma 20h_ticks.xz
> 
> Now, I have a 20h_ticks binary file.
> 
> It's a structured binary file with an array of records
>   Date unsigned integer 4 bytes
>   Ask  unsigned integer 4 bytes
>   Bid  unsigned integer 4 bytes
>   AskVolume float 4 bytes
>   BidVolume float 4 bytes
> 
> 
> Using Python I'm able to read it and get a Pandas DataFrame
> 
> import numpy as np
> import pandas as pd
> import datetime
> symb = "EURUSD"
> dt_chunk = datetime.datetime(2016, 2, 14)
> record_dtype = np.dtype([
> ('Date', '>u4'),
> ('Ask', '>u4'),
> ('Bid', '>u4'),
> ('AskVolume', '>f4'),
> ('BidVolume', '>f4'),
> ])
> 
> data = np.fromfile("20h_ticks", dtype=record_dtype)
> columns = ["Date", "Ask", "Bid", "AskVolume", "BidVolume"]
> df = pd.DataFrame(data, columns=columns)
> if symb[3:] == "JPY":
> p_digits = 3
> else:
> p_digits = 5
> for p in ["Ask", "Bid"]:
> df[p] = df[p] / 10**p_digits
> df["Date"] = dt_chunk + pd.to_timedelta(df["Date"], unit="ms")
> df = df.set_index("Date")
> 
> I'd like to do the same using Julia
> 
> I did
> 
> symb = "EURUSD"
> day_chunk = Date(2016, 2, 14)
> h_chunk = 20
> dt_chunk = DateTime(day_chunk) + Base.Dates.Hour(h_chunk)
> filename = @sprintf "%02dh_ticks" h_chunk
> println(filename)
> 
> immutable TickRecordType
>   Date::UInt32
>   Ask::UInt32
>   Bid::UInt32
>   AskVolume::Float32
>   BidVolume::Float32
> end
> 
> f = open(filename)
> 
> # ...
> 
> close(f)
> 
> but I'm blocked now ...
> 
> Any help will be great.
> 
> Kind regards




Re: [julia-users] What is the best way to element-wise right shift an array?

2016-09-18 Thread Tim Holy
On Sunday, September 18, 2016 2:55:46 AM CDT K leo wrote:
> I have been using simply A=[0; A[1:end-1]], but found it to be somehow
> quite expensive.  I saw that there is unshift! but it has to be followed up
> with deleteat! to make the array the same size, i.e. there need to be two
> operations.  So how can I get a better performance doing the shift?

If A will always have the same length, you might use CircularBuffer. If A will 
have some predictable maximum size, you could use CircularDeque. Both of these 
are in https://github.com/JuliaLang/DataStructures.jl

--Tim
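
If keeping plain arrays, an in-place sketch (hypothetical `shiftright!` helper) avoids the allocation of `A = [0; A[1:end-1]]` entirely:

```julia
# Shift elements right by one, in place, filling the first slot.
function shiftright!(A::AbstractVector, fillval=zero(eltype(A)))
    for i in length(A):-1:2
        A[i] = A[i-1]
    end
    if !isempty(A)
        A[1] = fillval
    end
    return A
end
```

CircularBuffer makes the same operation O(1) by moving a head pointer instead of the data.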





Re: [julia-users] Re: wrapping a julia package

2016-09-16 Thread Tim Holy
Yes, the idea of wrappers that reinterpret or convert makes sense.

On Friday, September 16, 2016 9:29:33 AM CDT Páll Haraldsson wrote:
> type Color
> r::Uint8
> g::Uint8
> b::Uint8
> a::Uint8

RGBA{U8} is bitwise identical to this, so you can use reinterpret as needed.

--Tim



Re: [julia-users] unexpected mapslice result on 0.5rc3

2016-09-15 Thread Tim Holy
https://github.com/JuliaLang/julia/issues/18524

On Wednesday, September 14, 2016 9:15:45 AM CDT Marius Millea wrote:
> Is this the expected behavior?
> 
> julia> mapslices(x->tuple(x), [1 2; 3 4], 1)
> 1×2 Array{Tuple{Array{Int64,1}},2}:
>  ([2,4],)  ([2,4],)
> 
> julia> mapslices(x->tuple(x...), [1 2; 3 4], 1)
> 1×2 Array{Tuple{Int64,Int64},2}:
>  (1,3)  (2,4)
> 
> 
> The first case certainly came as pretty unexpected to me. Does it have
> something to do with copies vs views into the array?




Re: [julia-users] Hard-to-debug deep inlining

2016-09-13 Thread Tim Holy
Which version of julia? If you're not using 0.5, try it and you might be 
pleased.

You can also launch `julia --inline=no`, which occasionally still remains 
useful.

--Tim

On Tuesday, September 13, 2016 8:55:58 AM CDT Tim Wheeler wrote:
> Hi Julia Users,
> 
> So I was looking at ConjugatePriors.jl and trying to resolve its problems
> with respect to the latest Distributions.jl. As discussed in issue 11
> , testing
> ConjugatePriors after removing the REQUIRE bounds results in:
> 
> MethodError: no method matching
> _rand!(::Distributions.MvNormalCanon{PDMats.PDMat{Float64,Array{Float64,2}},
> Array{Float64,1}},
> ::Array{Float64,1}) on line 52 of conjugates_mvnormal.jl
> and line 25 of fallbacks.jl
> 
> If you check that line you find the following:
> 
> posterior_randmodel(pri, G::IncompleteFormulation, x) = complete(G, pri,
> posterior_rand(pri, G, x))
> 
> Okay, the problem isn't really there. The call to posterior_rand is inlined
> (I assume), so it doesn't show up in the test stack trace. So we manually
> go to:
> 
> posterior_rand(pri, G::IncompleteFormulation, x) = Base.rand(posterior_canon
> (pri, G, x))
> 
> 
> This also isn't the problem, at least not directly.
> 
> In fact, the also inlined call to posterior_canon(pri, G, x) works fine. It
> returns an MvNormalCanon object and then Base.rand is called.
> 
> This calls some inlined functions, which eventually call
> Base._rand!(MvNormalCanon, x::Vector), which leads to the problem, namely
> that _rand!(MvNormalCannon, x::Matrix) is all that is defined.
> 
> But why was that so hard to discover? Why does only line 25 of fallbacks,jl
> show up in the error stack trace? Was there a better way to debug this?




Re: [julia-users] Re: equivalent of numpy newaxis?

2016-09-12 Thread Tim Holy
*julia> a = rand(3)* 
*3-element Array{Float64,1}:* 
*0.47428 * 
*0.505429* 
*0.198919* 
*julia> reshape(a, (3,1))* 
*3×1 Array{Float64,2}:* 
*0.47428 * 
*0.505429* 
*0.198919* 
*julia> reshape(a, (1,3))* 
*1×3 Array{Float64,2}:* 
*0.47428  0.505429  0.198919*

Is that what you want? (Note that for both of them, the result is 
2-dimensional.)

--Tim
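
A sketch of the numpy example in current Julia (assuming `constmap` stands in for the post's `const.map`): a singleton dimension introduced by `reshape` plays the role of `np.newaxis`, and broadcasting fills in the rest.

```julia
demod_out = [0.0, 1.0, 2.0]
constmap  = [0.5, 1.5]

# Broadcasting a 3×1 array against a 1×2 array yields the full 3×2
# matrix of element-wise squared distances.
dist = abs2.(reshape(demod_out, 3, 1) .- reshape(constmap, 1, 2))
```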

On Monday, September 12, 2016 6:47:04 PM CDT Neal Becker wrote:
> I haven't studied it, but I guess that newaxis increases the dimensionality,
> while specifying 0 for the stride.  Can reshape do that?
> 
> Tim Holy wrote:
> > I'm not certain I understand what `np.newaxis` does, but doesn't `reshape`
> > do the same thing? (newaxis does look like a convenient way to specify
> > shape, though.)
> > 
> > Best,
> > --Tim
> > 
> > On Monday, September 12, 2016 3:28:56 PM CDT Neal Becker wrote:
> >> Some time ago I asked this question
> >> http://stackoverflow.com/questions/25486506/julia-broadcasting-equivalent
> >> -of -numpy-newaxis
> >> 
> >> As a more interesting example, here is some real python code I use:
> >> dist = mag_sqr (demod_out[:,np.newaxis] - const.map[np.newaxis,:])
> >> 
> >> where demod_out, const.map are each vectors, mag_sqr performs
> >> element-wise euclidean distance, and the result is a 2D array whose 1st
> >> axis matches the 1st axis of demod_out, and the 2nd axis matches the 2nd
> >> axis of const.map.
> >> 
> >> 
> >> From the answers I've seen, julia doesn't really have an equivalent
> >> functionality.  The idea here is, without allocating a new array,
> >> manipulate the strides to cause broadcasting.
> >> 
> >> AFAICT, the best for Julia would be just forget the vectorized code, and
> >> explicitly write out loops to perform the computation.  OK, I guess, but
> >> maybe not as readable.
> >> 
> >> Is there any news on this front?




Re: [julia-users] equivalent of numpy newaxis?

2016-09-12 Thread Tim Holy
I'm not certain I understand what `np.newaxis` does, but doesn't `reshape` do 
the same thing? (newaxis does look like a convenient way to specify shape, 
though.)

Best,
--Tim

On Monday, September 12, 2016 3:28:56 PM CDT Neal Becker wrote:
> Some time ago I asked this question
> http://stackoverflow.com/questions/25486506/julia-broadcasting-equivalent-of
> -numpy-newaxis
> 
> As a more interesting example, here is some real python code I use:
> dist = mag_sqr (demod_out[:,np.newaxis] - const.map[np.newaxis,:])
> 
> where demod_out, const.map are each vectors, mag_sqr performs element-wise
> euclidean distance, and the result is a 2D array whose 1st axis matches the
> 1st axis of demod_out, and the 2nd axis matches the 2nd axis of const.map.
> 
> 
> From the answers I've seen, julia doesn't really have an equivalent
> functionality.  The idea here is, without allocating a new array, manipulate
> the strides to cause broadcasting.
> 
> AFAICT, the best for Julia would be just forget the vectorized code, and
> explicitly write out loops to perform the computation.  OK, I guess, but
> maybe not as readable.
> 
> Is there any news on this front?




Re: [julia-users] Want to contribute to Julia

2016-09-12 Thread Tim Holy
There are some great resources at http://julialang.org/learning/

Best,
--Tim

On Monday, September 12, 2016 7:56:46 AM CDT 
rishucod...@gmail.com wrote:
> Thanks for the help. Can you suggest me what should I learn to 
work in
> Julia?




Re: [julia-users] Julia in top 50 on the TIOBE index

2016-09-11 Thread Tim Holy
Had not seen that, thanks for sharing.

For this month's survey at least, passing Clojure, within sight of Rust, and 
nearly 1/3 of the way to Fortran. Pretty impressive. Nice work, Julia 
community!

Best,
--Tim

On Sunday, September 11, 2016 12:52:21 AM CDT Viral Shah wrote:
> Perhaps many of you have already seen this, but Julia enters top 50 on the
> TIOBE index for the first time! In fact their September headline is exactly
> that and they believe it will only get better in the coming few months!
> That might just line up well with Julia 1.0 next year.
> 
> http://www.tiobe.com/tiobe-index/
> 
> -viral




Re: [julia-users] How does garbage collection really work in Julia?

2016-09-10 Thread Tim Holy
Every time julia allocates memory, it adds the number of bytes to a running 
tally; once that 
tally gets big enough, it does a garbage-collection pass (and resets the 
tally). 
Consequently GC is triggered only when you allocate, but even the smallest 
allocation can 
trigger GC if it pushes the tally over the threshold. It thus depends on where 
that tally was 
when you started. Keep in mind that there are many uses of memory that may not 
be 
obvious; for example, JIT compilation uses memory because julia code is 
compiled by julia 
code.

Best,
--Tim

On Saturday, September 10, 2016 7:05:18 AM CDT K leo wrote:
> Let's say in REPL you define an object, then after many many other
> operations, Julia's GC wouldn't free the object you defined earlier until
> you exit Julia or you somehow really run out of memory or something.  Is
> that the correct understanding?
> 
> Now I am trying to think what happens with the jl_... calls one performs
> from C/C++ code.  The procedure is that one calls jl_init first to start a
> Julia session, then a bunch other jl_... calls to handle some data, and
> then at last jl_atexit_hook to stop using Julia.  Is the mechanism here
> similar to REPL in the sense that the Julia session is up and running
> before one calls  jl_atexit_hook?  So Julia GC would not free objects you
> created with jl_... calls until you exit or you somehow really run out of
> memory or something.
> 
> Having read the section in Embedding Julia, I get the sense that it is the
> case.  So we only need to worry about GC after all jl_... calls, not in
> 
> between jl_... calls.  So here it says :
> > “However, it is generally safe to use pointers in between jl_... calls.
> > 
> >> But in order to make sure that values can survive jl_... calls, we have
> >> to
> >> tell Julia that we hold a reference to a Julia value. ”
> > 
> > Excerpt From: unknown. “Julia Language 0.6.0-dev documentation.” iBooks.
> > 
> > Is my understanding correct?  Or can anyone please explain otherwise?




Re: [julia-users] Re: ProfileView not compatible with julia-0.5?

2016-09-10 Thread Tim Holy
More likely you just need to run Pkg.update(). For example, you can test with a 
"dummy" 
package repository:

$ mkdir /tmp/pkgs 
*  _* 
*  _   _ _(_)_ |  A fresh approach to technical computing* 
* (_) | (_) (_)|  Documentation: http://docs.julialang.org* 
*  _ _   _| |_  __ _   |  Type "?help" for help.* 
* | | | | | | |/ _` |  |* 
* | | |_| | | | (_| |  |  Version 0.5.0-rc3+49 (2016-09-08 05:47 UTC)* 
*_/ |\__'_|_|_|\__'_|  |  Commit e1d2965* (2 days old release-0.5)* 
*|__/   |  x86_64-linux-gnu* 
*julia> Pkg.init()* 
*INFO: Initializing package repository /tmp/pkgs/v0.5* 
*INFO: Cloning METADATA from https://github.com/JuliaLang/METADATA.jl*  

*julia> Pkg.add("ProfileView")* 
*INFO: Cloning cache of BinDeps from 
https://github.com/JuliaLang/BinDeps.jl.git* 


and the install & build succeeds.

Best,
--Tim

On Friday, September 9, 2016 3:20:03 PM CDT Chris Rackauckas wrote:
> Did you checkout master?
> 
> On Friday, September 9, 2016 at 2:55:21 PM UTC-7, Neal Becker wrote:
> > using ProfileView
> > INFO: Precompiling module ProfileView.
> > WARNING: Module Compat with uuid 314389968181888 is missing from the
> > cache.
> > This may mean module Compat does not support precompilation but is
> > imported
> > by a module that does.
> > ERROR: LoadError: Declaring __precompile__(false) is not allowed in files
> > that are being precompiled.
> > 
> >  in macro expansion; at ./none:2 [inlined]
> >  in anonymous at ./:?
> > 
> > while loading /home/nbecker/.julia/v0.5/ProfileView/src/ProfileView.jl, in
> > expression starting on line 5
> > ERROR: Failed to precompile ProfileView to
> > /home/nbecker/.julia/lib/v0.5/ProfileView.ji.
> > 
> >  in eval_user_input(::Any, ::Base.REPL.REPLBackend) at ./REPL.jl:64
> >  in macro expansion at ./REPL.jl:95 [inlined]
> >  in (::Base.REPL.##3#4{Base.REPL.REPLBackend})() at ./event.jl:68




Re: [julia-users] Re: Cell array of images like Matlab

2016-09-10 Thread Tim Holy
Try putting a semicolon at the end of that line.

Thanks for the bug report! https://github.com/timholy/Images.jl/issues/554

--Tim

On Friday, September 9, 2016 5:18:45 PM CDT Drew Mahedy wrote:
> In Julia, if I try and concatenate two images together of type:
> 
> Array{ColorTypes.Gray{FixedPointNumbers.UFixed{UInt8,8}},2} via
> 
> cat( 3, img, img ),
> 
> I get the following error:
> 
> Only two-dimensional images are supported
> 
> On Friday, September 9, 2016 at 5:01:39 PM UTC-7, Drew Mahedy wrote:
> > Here is what the code looks like in MATLAB:
> > 
> > AA = cell( size(trimcoorddir,1), 1 );
> > 
> > figpathdir2 = dir( strrep( figpath2, '\', filesep ) );
> > fignames2 = {figpathdir2.name};
> > 
> > for i = 1:length(trimcoorddir);
> > 
> > if all( strcmp( fignames2, trimcoord{i}.filename ) == 0 );
> > 
> > A = imread( strrep( [figpath, '\', trimcoord{i}.filename], '\',
> > 
> > filesep ) );
> > 
> > AA{i} = rgb2gray( A );
> > AA{i} = 255 - medfilt2( AA{i}, [25 25] ) + AA{i};  %Time
> > Consuming!
> > AA{i} = imadjust( AA{i}, [0.96 1], [0 1], 1.0 );
> > 
> > imwrite( AA{i}, strrep( [figpath2, '\', trimcoord{i}.filename],
> > 
> > '\', filesep ) );
> > 
> > else
> > 
> > AA{i} = imread( strrep( [figpath2, '\', trimcoord{i}.filename],
> > 
> > '\', filesep ) );
> > 
> > end
> > 
> > end
> > 
> > On Wednesday, September 7, 2016 at 7:24:14 PM UTC-7, Uwe Fechner wrote:
> >> Could you post code, that reproduces the problem?
> >> 
> >> On Thursday, September 8, 2016 at 1:28:00 AM UTC+2, Drew Mahedy wrote:
> >>> I'm just wondering if there is a way to load several RGB .jpg images
> >>> into a 4-D array like can be done in MATLAB using cell array. I have the
> >>> Images package installed but concatenating 2 images in the 3rd or 4th
> >>> dimension says that 'only two-dimensional images are supported in
> >>> assert2d'
> >>> when I try.




Re: [julia-users] Re: How I did get a spurious method for *{T}(Irrational{T},Irrational{T})?

2016-09-09 Thread Tim Holy
https://github.com/JuliaLang/julia/issues/18419

--Tim

On Wednesday, September 7, 2016 12:59:05 AM CDT Lutfullah Tomak wrote:
> A reduced case that also makes multiplication of the same Irrational an
> error.
> 
>_
>_   _ _(_)_ |  A fresh approach to technical computing
>   (_) | (_) (_)|  Documentation: http://docs.julialang.org
>_ _   _| |_  __ _   |  Type "?help" for help.
> 
>   | | | | | | |/ _` |  |
>   | | |
>   | | |_| | | | (_| |  |  Version 0.5.0-rc3+0 (2016-08-22 23:43 UTC)
> 
>  _/ |\__'_|_|_|\__'_|  |
> 
> |__/   |  x86_64-linux-gnu
> 
> julia> module Erratic
> 
> 
>const DummyUnion =  Union{Irrational,AbstractFloat}
>abstract DummyReal <: Real
>immutable DummyType <: DummyReal
>var::DummyUnion
>end
> 
> 
>mulSym(x::DummyUnion, y::DummyUnion) = x*y
>mulSym(x::DummyUnion, y::DummyType)  = DummyType(mulSym(x, y.var
> ))
> 
> 
>import Base.*
>*(x::DummyUnion, y::DummyReal) = mulSym(x, y)
> 
> 
>end
> Erratic
> 
> 
> julia> pi*pi
> ERROR: * not defined for Irrational{:π}
>  in error(::String, ::String, ::Vararg{Any,N}) at ./error.jl:22
>  in no_op_err(::String, ::Type{T}) at ./promotion.jl:254
>  in *(::Irrational{:π}, ::Irrational{:π}) at ./promotion.jl:256
>  in eval(::Module, ::Any) at ./boot.jl:234
> 
> 
> julia> pi*Erratic.DummyType(pi)
> ERROR: * not defined for Irrational{:π}
>  in error(::String, ::String, ::Vararg{Any,N}) at ./error.jl:22
>  in no_op_err(::String, ::Type{T}) at ./promotion.jl:254
>  in *(::Irrational{:π}, ::Irrational{:π}) at ./promotion.jl:256
>  in mulSym(::Irrational{:π}, ::Irrational{:π}) at ./REPL[1]:9
>  in mulSym(::Irrational{:π}, ::Erratic.DummyType) at ./REPL[1]:10
>  in *(::Irrational{:π}, ::Erratic.DummyType) at ./REPL[1]:13
>  in eval(::Module, ::Any) at ./boot.jl:234
> 
> 
> julia> pi*Erratic.DummyType(2.)
> Erratic.DummyType(6.283185307179586)
> 
> DummyReal does not cover Irrationals but it somehow triggers default
> *{T<:Number}(x::T, y::T) = no_op_err("*", T)
> What kind of number does not have multiplication defined with itself?




Re: [julia-users] Re: Is it better to call Julia from C++ or call C++ from Julia?

2016-09-08 Thread Tim Holy
On Thursday, September 8, 2016 6:31:37 AM CDT Páll Haraldsson wrote:
> Not sure I understand ("this case"), in another thread Tim Holy cautioned 
> that C++ would not be inlined into Julia functions, as Julia functions can 
> be.

My caution was for the opposite direction: you can't inline julia functions 
into C++.

--Tim



Re: [julia-users] There is very little overhead calling Julia from C++

2016-09-08 Thread Tim Holy
Keep in mind that statements like "very little overhead" depend entirely on 
what you're comparing against. Your TestFunc is quite expensive, so it's not 
surprising that how it's called adds little overhead. If you called a much 
cheaper function, you might come to a rather different conclusion. I'm not 
saying you can't/shouldn't do this, but you should be aware that your 
conclusions may not generalize to all usage patterns.

For example, much of what makes julia fun is the fact that you can build up 
complicated functionality from "atomic" pieces that do very little work on 
their own, and julia links them all together (using a great deal of inlining) 
to deliver awesome performance. Presumably you'll lose those advantages when 
calling individual functions from C++.

Best,
--Tim

On Wednesday, September 7, 2016 9:43:29 PM CDT K leo wrote:
> I just did a test of calling a Julia function 100,000 times, both from
> Julia and from C++.  The execution times are very close.  The results are
> as follows.  This is on Xubuntu 16.04 64bits.
> 
> ***Julia   **
> 
>   | | |_| | | | (_| |  |  Version 0.5.0-rc3+0 (2016-08-22 23:43 UTC)
> 
>  _/ |\__'_|_|_|\__'_|  |  Official http://julialang.org/ release
> 
> |__/   |  x86_64-unknown-linux-gnu
> 
> julia> include("speedTest.jl")
> speedTest
> 
> julia> speedTest.TestLoop()
> elapsed time: 3.21365718 seconds
> 3.21365718
> 
> 
> ***   C++***
> 
> > g++ -o test -fPIC -I$JULIA_DIR/include/julia test3.cpp -L$JULIA_DIR/lib/
> 
> -L$JULIA_DIR/lib/julia -lLLVM-3.7.1 -ljulia
> $JULIA_DIR/lib/julia/libstdc++.so.6
> 
> > ./test
> 
> 3.22423
> 
> The codes are shown below:
> 
Julia:
> > module speedTest
> > 
> >> function TestFunc()
> >> 
> > f=0.
> > 
> > for i=1:1
> > 
> > f += Float64(i*fld(3,2))*sqrt(rand()+1.)
> > 
> > end
> > 
> > end
> > 
> >> function TestLoop()
> >> 
> > tic()
> > 
> > for i=1:100000
> > 
> > TestFunc()
> > 
> > end
> > 
> > toc()
> > 
> > end
> > 
> >> end
> 
> C++:
> #include <julia.h>
> 
> #include <sys/time.h>
> 
> #include <iostream>
> > 
> > using namespace std;
> > 
> > typedef unsigned long long timestamp_t;
> > 
> >> static timestamp_t get_timestamp ()
> > 
> > {
> > 
> >   struct timeval now;
> >   
> >   gettimeofday (&now, NULL);
> >   
> >   return now.tv_usec + (timestamp_t)now.tv_sec * 1000000;
> > 
> > }
> > 
> >> int main(int argc, char *argv[])
> > 
> > {
> > 
> > jl_init(NULL);
> > 
> >> jl_load("speedTest.jl");
> > 
> > jl_value_t * mod = (jl_value_t*)jl_eval_string("speedTest");
> > 
> > jl_function_t * func = jl_get_function((jl_module_t*)mod,"TestFunc");
> > 
> >> timestamp_t t0 = get_timestamp();
> >> 
> >> 
> >> for(int i=1; i<=100000; i++) {
> >> 
> > jl_call0(func);
> > 
> > }
> > 
> >> timestamp_t t1 = get_timestamp();
> >> 
> >> 
> >> double secs = (t1 - t0) / 1000000.0L;
> > 
> > cout<< secs << endl;
> > 
> >> jl_atexit_hook(0);
> > 
> > return 0;
> > 
> > }




Re: [julia-users] Re: Saving and Loading data (when JLD is not suitable)

2016-08-31 Thread Tim Holy
https://github.com/JuliaIO/JLD.jl/blob/master/doc/jld.md#custom-serialization

On Wednesday, August 31, 2016 1:56:25 AM CDT Kristoffer Carlsson wrote:
> I am sure you know but I just want to point out that it is possible to
> overload the way JLD saves and loads types. You can write a custom save
> function that transforms the thing you are trying to save into a format
> more suitable for HDF5 and then the loader can transform it back into your
> original format.
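For example, a minimal sketch of that hook pair from the linked document (the type and field names here are illustrative, not from the thread):

```julia
using JLD

type Sim                      # the in-memory type we work with
    grid::Matrix{Float64}
    label::UTF8String
end

type SimSerializer            # an HDF5-friendly stand-in
    grid::Matrix{Float64}
    label::UTF8String
end

# JLD calls `writeas` when saving and `readas` when loading:
JLD.writeas(data::Sim) = SimSerializer(data.grid, data.label)
JLD.readas(serdata::SimSerializer) = Sim(serdata.grid, serdata.label)
```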




Re: [julia-users] Re: Adding items into a tuple

2016-08-30 Thread Tim Holy
On Tuesday, August 30, 2016 3:57:36 PM CDT Yichao Yu wrote:
> And even then, this completely looses the advantage of using tuple
> (inferrable size and element types) so you shouldn't do this in general
> unless you are going to do a lot of work with the tiny tuple afterwards.

Right. If you want to grow a tuple, you should use "lispy recursion" so the 
compiler can reason about the size, which is part of the type of the tuple. 
(`for` loops are completely off the table for this kind of programming.) 
Here's an example that builds a tuple of N `true`s (i.e., functionally 
equivalent to `ntuple(d->true, N)`):

buildtrues{N}(::Type{Val{N}}) = _buildtrues((), Val{N}) # initialization

_buildtrues{N}(out::NTuple{N}, ::Type{Val{N}}) = out   # termination

@inline _buildtrues{N}(out, ::Type{Val{N}}) =
    _buildtrues((out..., true), Val{N})   # the "inner loop"

Key features include the `@inline` and the fact that `N` is available to the 
type system. For a particular inferrable`N<15`, the compiler will just figure 
out the end result and return that; the "apparent" recursion runs at compile 
time, not runtime.
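For instance (a quick check; the tuple's length is part of the inferred type):

```julia
julia> buildtrues(Val{3})
(true,true,true)

julia> typeof(ans)
Tuple{Bool,Bool,Bool}
```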

Note that if your condition isn't evaluatable at compile time, and especially 
if it changes from one "iteration" to the next, then you're frankly much 
better off using Arrays rather than tuples. Don't attempt to fake compile-time 
evaluation using `Val` unless the condition is already embedded in the type 
system (e.g., coming from an `::Array{T,N}`) or it's the same `Val` being used 
bazillions of times. See http://docs.julialang.org/en/latest/manual/performance-tips/#types-with-values-as-parameters and the section after that.

This is not just a theoretical concern: while tuples have enormous advantages 
in certain settings, trying to fake this with tuples/Val and getting it wrong 
could easily result in a 30-100 fold performance *penalty* compared to using 
Arrays.

Best,
--Tim



Re: [julia-users] Running Julia in Ubuntu

2016-08-30 Thread Tim Holy
I'm rather surprised that building from source gave you version 0.4.7; I don't 
think that makes sense (it should have been master). Are you sure you know 
what you're running? (Try `.julia` from the directory in which you built 
julia, rather than just `julia`.)

Best,
--Tim

On Monday, August 29, 2016 3:30:34 PM CDT Angshuman Goswami wrote:
> I was running Julia to run my MPC code. I needed to upgrade and hence i
> deleted the folder i cloned from git hub. Now I have two problems:
> 
> 1) Installing julia by sudo apt-get install julia, I get the following
> message:
> 
> Reading package lists... Done
> Building dependency tree
> Reading state information... Done
> Package julia is not available, but is referred to by another package.
> This may mean that the package is missing, has been obsoleted, or
> is only available from another source
> 
> E: Package 'julia' has no installation candidate
> 
> 2) When I cloned the github link by  git clone
> https://github.com/JuliaLang/julia.git
> 
> I tried make -j N
> 
> it didn't work
> 
> 3) I then used
> 
> git pull && make
> 
> Now Julia was updated to 0.4.7
> And now I thought it will finally work.
> But now when I do i) using PyCall or  ii) using RobotOS
> I get the following error:
> julia: codegen.cpp:3155: llvm::Value* emit_expr(jl_value_t*, jl_codectx_t*,
> bool, bool): Assertion `ctx->gensym_assigned.at(idx)' failed.
> 
> signal (6): Aborted
> ERROR: LoadError: Failed to precompile PyCall to
> /home/odroid/.julia/lib/v0.4/PyCall.ji while loading
> /home/odroid/.julia/v0.4/RobotOS/src/RobotOS.jl, in expression starting on
> line 3
> 
> I'm stuck




Re: [julia-users] Re: Return type of eye()

2016-08-29 Thread Tim Holy
I confess I'm not quite sure what the right answer is here. It would seem 
overkill to have both `I` and something that's the same thing, except sized. 
OTOH, I see the attraction.

Maybe if it were part of a general mechanism, e.g., SizedArrayOperator{O,N} 
(for performing an operation `O` on array with `N` dimensions of specified 
size) it wouldn't be so bad. That would rely on finding other uses for this 
concept, however.

Best,
--Tim

On Monday, August 29, 2016 12:29:14 PM CDT Júlio Hoffimann wrote:
> Tim,
> 
> Would it make sense to have "I" as an object that acts like UniformScaling
> and doesn't require any memory allocation, but is only transformed into
> sparse matrix via the [] operator? Maybe something similar to arrays ->
> subarrays?
> 
> -Júlio




Re: [julia-users] @threads not providing as big speedup as expected

2016-08-29 Thread Tim Holy
Just noticed that you're allocating memory on each iteration. If you have the 
patience to write out all those matrix operations explicitly, it should help. 
Alternatively, perhaps try ParallelAccelerator.
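For concreteness, here's one way the inner update could be written without temporaries (my own sketch; `trace4` is a made-up helper, and the loop bounds assume the 3×3 slices from the posted code):

```julia
# trace(A[:,:,l]*B[:,:,l]*A[:,:,l]*C[:,:,l]) computed elementwise,
# so no intermediate matrices are allocated:
function trace4(A, B, C, l)
    s = 0.0
    for i = 1:3, j = 1:3, k = 1:3, m = 1:3
        s += A[i,j,l]*B[j,k,l]*A[k,m,l]*C[m,i,l]
    end
    s
end

# The inner loop then becomes:
# fish[i,j] += (2*l+1)/2 * trace4(inv_cl, d_cl[i], d_cl[j], l)
```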

Best,
--Tim

On Monday, August 29, 2016 10:49:40 AM CDT Marius Millea wrote:
> Thanks, just tried wrapping the for loop inside a function and it seems to
> make the @threads version slightly slower and serial version slightly
> faster, so I'm even further from the speedup I was hoping for! Reading
> through that Issue and linked ones, I guess I may not be the only one
> seeing this.
> 
> For ref, what I did:
> 
> function myloop(inv_cl,d_cl,fish,ijs,nl)
> @threads for ij in ijs
> i,j = ij
> for l in 1:nl
> fish[i,j] +=
> (2*l+1)/2*trace(inv_cl[:,:,l]*d_cl[i][:,:,l]*inv_cl[:,:,l]*d_cl[j][:,:,l])
> end
> end
> end
> 
> function test(nl,np)
> inv_cl = ones(3,3,nl)
> d_cl = Dict(i => ones(3,3,nl) for i=1:np)
> 
> fish = zeros(np,np)
> ijs = [(i,j) for i=1:np, j=1:np]
> 
> myloop(inv_cl,d_cl,fish,ijs,nl)
> end
> 
> # with @threads
> @timeit test(3000,40)
> 1 loops, best of 3: 3.84 s per loop
> 
> # without @threads
> @timeit test(3000,40)
> 1 loops, best of 3: 2.33 s per loop
> 
> On Monday, August 29, 2016 at 6:50:15 PM UTC+2, Tim Holy wrote:
> > Very quickly (train to catch!): try this
> > https://github.com/JuliaLang/julia/issues/17395#issuecomment-241911387
> > and see if it helps.
> > 
> > --Tim
> > 
> > On Monday, August 29, 2016 9:22:09 AM CDT Marius Millea wrote:
> > > I've parallelized some code with @threads, but instead of a factor NCPUs
> > > speed improvement (for me, 8), I'm seeing rather a bit under a factor 2. I
> > > suppose the answer may be that my bottleneck isn't computation, rather
> > > memory access. But during running the code, I see my CPU usage go to 100%
> > > on all 8 CPUs, if it were memory access would I still see this? Maybe the
> > > answer is yes, in which case memory access is likely the culprit; is there
> > > some way to confirm this though? If no, how do I figure out what *is* the
> > > culprit?
> > > 
> > > Here's a stripped down version of my code,
> > > 
> > > 
> > > function test(nl,np)
> > > 
> > > inv_cl = ones(3,3,nl)
> > > d_cl = Dict(i => ones(3,3,nl) for i=1:np)
> > > 
> > > fish = zeros(np,np)
> > > ijs = [(i,j) for i=1:np, j=1:np]
> > > 
> > > Threads.@threads for ij in ijs
> > > 
> > > i,j = ij
> > > for l in 1:nl
> > > 
> > > fish[i,j] +=
> > > (2*l+1)/2*trace(inv_cl[:,:,l]*d_cl[i][:,:,l]*inv_cl[:,:,l]*d_cl[j][:,:,l])
> > > 
> > > end
> > > 
> > > end
> > > 
> > > end
> > > 
> > > 
> > > # with the @threads
> > > @timeit test(3000,40)
> > > 1 loops, best of 3: 3.17 s per loop
> > > 
> > > # now remove the @threads from above
> > > @timeit test(3000,40)
> > > 1 loops, best of 3: 4.42 s per loop
> > > 
> > > 
> > > 
> > > Thanks.




Re: [julia-users] @threads not providing as big speedup as expected

2016-08-29 Thread Tim Holy
Very quickly (train to catch!): try this https://github.com/JuliaLang/julia/issues/17395#issuecomment-241911387
and see if it helps.

--Tim

On Monday, August 29, 2016 9:22:09 AM CDT Marius Millea wrote:
> I've parallelized some code with @threads, but instead of a factor NCPUs
> speed improvement (for me, 8), I'm seeing rather a bit under a factor 2. I
> suppose the answer may be that my bottleneck isn't computation, rather
> memory access. But during running the code, I see my CPU usage go to 100%
> on all 8 CPUs, if it were memory access would I still see this? Maybe the
> answer is yes, in which case memory access is likely the culprit; is there
> some way to confirm this though? If no, how do I figure out what *is* the
> culprit?
> 
> Here's a stripped down version of my code,
> 
> 
> function test(nl,np)
> 
> inv_cl = ones(3,3,nl)
> d_cl = Dict(i => ones(3,3,nl) for i=1:np)
> 
> fish = zeros(np,np)
> ijs = [(i,j) for i=1:np, j=1:np]
> 
> Threads.@threads for ij in ijs
> i,j = ij
> for l in 1:nl
> fish[i,j] += (2*l+1)/2*trace(inv_cl[:,:,l]*d_cl[i][:,:,l]*inv_cl
> [:,:,l]*d_cl[j][:,:,l])
> end
> end
> 
> end
> 
> 
> # with the @threads
> @timeit test(3000,40)
> 1 loops, best of 3: 3.17 s per loop
> 
> # now remove the @threads from above
> @timeit test(3000,40)
> 1 loops, best of 3: 4.42 s per loop
> 
> 
> 
> Thanks.




Re: [julia-users] Re: Return type of eye()

2016-08-29 Thread Tim Holy
On Monday, August 29, 2016 10:40:10 AM CDT Júlio Hoffimann wrote:
> Why would one want dense identity matrix?

Because you might want it to serve as an initialization for an iterative 
optimization routine (e.g., ICA) that updates the solution in place, and which 
assumes a dense matrix?

We could theoretically get rid of `eye` and just use `convert(Array, I)`. I 
like that idea except for one detail: `I` doesn't actually have a size.
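To make the contrast concrete (a 0.5-era sketch):

```julia
A = rand(3,3)
A + I        # `I` is a sizeless UniformScaling: it adapts to whatever it meets
E = eye(3)   # dense, sized, mutable -- suitable as an in-place starting point
E[1,2] = 0.1 # E can be updated in place; `I` cannot
```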

--Tim



Re: [julia-users] Re: Hard time with Compat and 0.5

2016-08-29 Thread Tim Holy
To rephrase what Steven and Tony said, for some things you won't need a macro. 
For example, `unsafe_wrap` didn't exist on Julia 0.4, but Compat contains an 
implementation of `unsafe_wrap` for use on Julia 0.4. It's just a plain-old 
function call, so you don't need `@compat`---just use it in the same way you'd 
use `unsafe_wrap` on Julia 0.5.

`@compat` is necessary only for more complicated cases where there's a reason 
why you can't make it just an ordinary function call.

To sum up, to get rid of deprecation warnings write your code in the new 
syntax. Add `@compat` only when Compat.jl's README tells you to do so.
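Using the `unsafe_wrap` case from this thread as a sketch (the array and pointer here are invented):

```julia
using Compat

a = [1.0, 2.0, 3.0]
p = pointer(a)
# Plain function call: Base provides it on 0.5, Compat provides it on 0.4.
# Either way, no `@compat` is needed:
b = unsafe_wrap(Array, p, 3)
```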

--Tim

On Monday, August 29, 2016 8:46:42 AM CDT J Luis wrote:
> Ok, but than how do I quiet the tons of deprecation messages that show up?
> 
> segunda-feira, 29 de Agosto de 2016 às 15:57:34 UTC+1, Tony Kelman escreveu:
> > You generally only need to call the @compat macro when you're trying to
> > use some new syntax that didn't parse correctly on older versions of
> > Julia.
> > If it parses correctly, Compat usually implements it with normal functions
> > and methods, no need for a syntax-rewriting macro.
> > 
> > On Monday, August 29, 2016 at 6:11:19 AM UTC-7, J Luis wrote:
> >>> No, it is:
> >>> 
> >>> t = unsafe_wrap(Array, Gb.data, h.size)
> >>> 
> >>> as in the deprecation warning.
> >> 
> >> Thanks (I'd figured it out too meanwhile)
> >> 
> >>> (You don't need @compat just for function calls.   You only need @compat
> >>> for things where the syntax changes in a more complicated way.)
> >> 
> >> Hmm, what do you mean by this? If I don't use @compat (which I tried) I
> >> get tons of deprecation messages.




Re: [julia-users] Wiener deconvolution

2016-08-29 Thread Tim Holy
Or perhaps make the package Deconvolution.jl, and have wiener be the first (and 
currently only) method?

Best,
--Tim

On Sunday, August 28, 2016 12:44:44 PM CDT Mosè Giordano wrote:
> Hi all,
> 
> I wrote a very simple implementation of the Wiener deconvolution
> in Julia.  You can
> find it here as a (non-registered) package:
> https://github.com/giordano/Wiener.jl  The README.md has a basic example of
> use.
> 
> I'm not sure it deserves to be a package on its own, so I'm willing to
> include it in another package, but which?  The Wiener deconvolution is
> often used in image processing, a good place could be Images.jl (which
> already has other tools to manipulate images), but this technique can be
> applied to any kind of signal, not just images, thus another option could
> be DSP.jl.  Please, suggest me where this code can go.
> 
> Cheers,
> Mosè




Re: [julia-users] Wiener deconvolution

2016-08-29 Thread Tim Holy
If you can't find another home, I'd be happy to have it in Images, but to me 
DSP seems like the (slightly) better choice.

That said, I also don't think it's terrible to have small packages (they are 
easier to document and faster for newcomers to come to grips with), so it 
could also stay a package on its own.

Best,
--Tim

On Sunday, August 28, 2016 12:44:44 PM CDT Mosè Giordano wrote:
> Hi all,
> 
> I wrote a very simple implementation of the Wiener deconvolution
> in Julia.  You can
> find it here as a (non-registered) package:
> https://github.com/giordano/Wiener.jl  The README.md has a basic example of
> use.
> 
> I'm not sure it deserves to be a package on its own, so I'm willing to
> include it in another package, but which?  The Wiener deconvolution is
> often used in image processing, a good place could be Images.jl (which
> already has other tools to manipulate images), but this technique can be
> applied to any kind of signal, not just images, thus another option could
> be DSP.jl.  Please, suggest me where this code can go.
> 
> Cheers,
> Mosè




Re: [julia-users] Unstructured Spline2D (progress / performance)

2016-08-29 Thread Tim Holy
For unstructured grids, it depends a lot on what you want. I'm a fan of 
piecewise linear polyhedral interpolation, but there are many other choices:
https://en.wikipedia.org/wiki/Multivariate_interpolation#Irregular_grid_.28scattered_data.29. One of the (many) Voronoi/Delaunay packages should make 
this pretty easy to implement.
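To sketch the piecewise-linear idea for a single triangle (a Delaunay package would supply which triangle contains each query point; this helper is my own):

```julia
# Barycentric interpolation of z at point p inside triangle (a, b, c)
# carrying values (za, zb, zc). Points are (x, y) tuples.
function lininterp(p, a, b, c, za, zb, zc)
    d  = (b[2]-c[2])*(a[1]-c[1]) + (c[1]-b[1])*(a[2]-c[2])
    w1 = ((b[2]-c[2])*(p[1]-c[1]) + (c[1]-b[1])*(p[2]-c[2])) / d
    w2 = ((c[2]-a[2])*(p[1]-c[1]) + (a[1]-c[1])*(p[2]-c[2])) / d
    w3 = 1 - w1 - w2
    w1*za + w2*zb + w3*zc
end
```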

If you do put something together, please consider contributing it to 
Interpolations.jl!

Best,
--Tim

On Monday, August 29, 2016 10:46:01 AM CDT Christoph Russ wrote:
> Hi everyone,
> 
> I am currently using Dierckx.Spline2D to interpolate 2D data. Until now I
> could rely on the (irregular) grid input with an approximate size of 1000 x
> 800 datapoints, which didn't give me any trouble. Unfortunately, I now need
> to use an unstructured setting, where each x,y,z point is defined
> separately. Each input array is about 800,000 elements long and
> constructing the spline appears to take a very long time or not work at
> all. (Input data are large Float64 values for x and y with a comparable
> small variance in z.)
> 
> Is there a way I could occasionally output compute progress or can you
> recommend another interpolation package / approach / performance
> optimization that I should be using / doing?
> 
> Thank you,
> Chris
> 
> 
> PS: for n > 4.0 the code below produces:
> 
> ERROR: The required storage space exceeds the available storage space:
> nxest or nyest too small, or s too small. Try increasing s.
>  in Spline2D at /home/chrisruss/.julia/v0.4/Dierckx/src/Dierckx.jl:534
> 
> ##
> using Dierckx
> 
> n = 4.0
> 
> x = collect(1.0:n)
> y = collect(1.0:n)
> z2d = rand(size(x,1),size(y,1))
> spReg = Spline2D(x,y,z2d)
> 
> xi = 1.0:0.42:n
> yi = 1.0:0.42:n
> 
> z2di = evalgrid(spReg, xi, yi)
> 
> x2d = Float64[]
> y2d = Float64[]
> 
> for i=1:round(Int,n)
>   append!(x2d,x)
>   for j=1:round(Int,n)
> push!(y2d,y[i])
>   end
> end
> 
> spUnstr = Spline2D(x2d,y2d,z2d[:])
> 
> zu2di = evalgrid(spUnstr, xi, yi)
> 
> abs(sum(zu2di .- z2di)) < 1.0e-10
> ##




Re: [julia-users] does quote...end really create a QuoteNode?

2016-08-28 Thread Tim Holy
julia> qn = :(:n) 
:(:n) 

julia> ex = :("My docstring about n, which has value $($qn))") 
:("My docstring about n, which has value $(:n))") 

julia> dump(ex) 
Expr 
 head: Symbol string 
 args: Array{Any}((3,)) 
   1: String "My docstring about n, which has value " 
   2: QuoteNode 
 value: Symbol n 
   3: String ")" 
 typ: Any 

julia> dump(qn) 
QuoteNode 
 value: Symbol n

--Tim

On Sunday, August 28, 2016 12:27:02 AM CDT Spencer Russell wrote:
> the metaprogramming docs say that `quote…end` produces a `QuoteNode`, but
> when I try:
> 
> julia> quo = quote
>x=2
>y=3
>end
> quote  # none, line 2:
> x = 2 # none, line 3:
> y = 3
> end
> 
> julia> noquo = :(
>x=2;
>y=3)
> quote
> x = 2
> y = 3
> end
> 
> and `dump` the results, they both seem the same except for the presence of
> line number nodes in the `quote…end` one (on 0.4.6 and 0.5). Is there
> something else that creates the `QuoteNode`?
> 
> Thanks,
> Spencer




Re: [julia-users] Blob detection and size measurement in Julia?

2016-08-27 Thread Tim Holy
Good catch. Looks like the edge-handling in `findlocalmaxima` needed to be a 
bit more refined---it was discarding results from the first and last sigma-
values supplied by the user.

I may have fixed this in https://github.com/timholy/Images.jl/commit/7336f35c824b15de9e4d0def8e739bdeb6ed3b3d, can you do `Pkg.checkout("Images")` 
and test?

Best,
--Tim

On Friday, August 26, 2016 8:11:44 PM CDT Alex Mellnik wrote:
> Hi,
> 
> I'm attempting to measure the size and position of roughly spherical,
> well-defined objects in images using Julia.  I don't have any previous
> experience working with images and was wondering if anyone could point me
> toward the appropriate library/function.
> 
> I know that there's a blob_LoG
> in Images.jl which appears to do roughly what I'm interested in, but I
> may be mistaken and it looks my images will need pre-processing as I
> haven't yet been able to get a non-null result.  There's also the new
> ImageFeatures.jl and bindings for OpenCV, but neither have much in the way
> of documentation yet.
> 
> Thanks for any suggestions you can provide,
> 
> Alex




Re: [julia-users] Images package: Create and show image

2016-08-26 Thread Tim Holy
You must be using Julia 0.5, where `view` means something different than 
you're expecting (it creates a SubArray). Maybe try `display(img)`.

Best,
--Tim

On Friday, August 26, 2016 1:31:46 PM CDT Bill Maier wrote:
> I have the following code in an IJulia notebook. The image is saved
> correctly to the disk file but I cannot show it in the notebook. Is this a
> bug or am I doing something wrong?
> 
> using Images
> imgdata = Array{UInt8}(640, 480, 3);
> for i in 1:480
> for j in 1:640
> imgdata[j,i,1] = round(UInt8, abs(250*sin(i/16)))
> imgdata[j,i,2] = round(UInt8, abs(cos(pi*j/320)))
> imgdata[j,i,3] = imgdata[2]
> end
> end;
> img = colorim(imgdata);
> save("testimg.png", img)
> view(img)
> 
> LoadError: BoundsError: attempt to access ()
>   at index [0]
> while loading In[7], in expression starting on line 1
> 
>  in rdims(::Tuple{},
> ::Tuple{Base.OneTo{Int64},Base.OneTo{Int64},Base.OneTo{Int64}},
> ::Type{Val{0}}) at ./reshapedarray.jl:52 in
> reshape(::Array{FixedPointNumbers.UFixed{UInt8,8},3}, ::Type{Val{0}}) at
> ./reshapedarray.jl:45 in
> view(::Images.Image{FixedPointNumbers.UFixed{UInt8,8},3,Array{FixedPointNum
> bers.UFixed{UInt8,8},3}}) at /home/bmaier/.julia/v0.6/Images/src/core.jl:513




Re: [julia-users] Recursive/Circular parametric types

2016-08-25 Thread Tim Holy
That seems likely to work, though obviously some operations will not be 
inferrable. You might be just as happy with

type PhylogenyNode
kind::Symbol   # :clade, etc.
from::PhylogenyEdge
to::Vector{PhylogenyEdge}
confidence::Float64
end

type PhylogenyEdge
kind::Symbol  # :branch, etc.
from::PhylogenyNode
to::PhylogenyNode
end

and then manually dispatching on "kind". Here, everything is inferrable.
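For example, a traversal written against those two types stays inferrable (the `:leaf` kind and the `nleaves` helper are illustrative, not from the thread):

```julia
function nleaves(n::PhylogenyNode)
    n.kind == :leaf && return 1
    s = 0
    for e in n.to        # to::Vector{PhylogenyEdge}, so e.to is a PhylogenyNode
        s += nleaves(e.to)
    end
    s
end
```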

A favorite trick (not my invention, of course) is to have root nodes point to 
themselves, so that you don't have to define a separate type for empty.

Something to consider seriously is using an established package like 
LightGraphs and storing any metadata separately. If you really want/need 
separate types for the different nodes, one advantage of this strategy is that 
at least graph traversal will be fast (inferrable)---it's only the metadata 
lookup that will be type-unstable.

--Tim

On Thursday, August 25, 2016 3:26:39 AM CDT Ben Ward wrote:
> Hi Tim,
> 
> That's a shame, I was hoping that doing the above would let me create
> several different concrete PhylogenyNode and PhylogenyEdge types, that can
> be used together. I guess since this is an abuse I have to pair one
> concrete PhylogenyEdge type with one concrete PhylogenyNode type? I
> wondered is a valid alternative to do something like this:
> 
> abstract AbstractNode
> abstract PhylogenyNode <: AbstractNode
> abstract NetworkNode <: AbstractNode
> abstract AbstractEdge
> abstract PhylogenyEdge <: AbstractEdge
> abstract NetworkEdge <: AbstractEdge
> 
> type Branch <: PhylogenyEdge
>from::PhylogenyNode
>to::PhylogenyNode
>length::Float64
> 
> function Branch()
>x = new()
>length!(x, -1.0)
>return x
>end
> end
> 
> type Clade <: PhylogenyNode
>from::PhylogenyEdge
>to::Vector{PhylogenyEdge}
>confidence::Float64
> 
> function Clade()
>x = new()
>x.to = Vector{PhylogenyEdge}()
>confidence!(x, -1.0)
>return x
>end
> end
> 
> And define getters and setters in such a way that type assertions make
> things certain for the compiler?
> I saw that Jeff proposed a similar solution in julia issue #269 to handle
> circular type declarations.
> 
> On Wednesday, August 24, 2016 at 4:11:06 PM UTC+1, Tim Holy wrote:
> > I don't think that's type-stable. Since each node of each tree will also
> > be a
> > different type, I also think you'll end up hating life due to compile
> > times.
> > There's some (peripherally) relevant discussion at
> > http://docs.julialang.org/en/latest/manual/performance-tips/#the-dangers-of-abusing-multiple-dispatch-aka-more-on-types-with-values-as-parameters
> > 
> > Best,
> > --Tim
> > 
> > On Tuesday, August 23, 2016 3:28:17 PM CDT Ben Ward wrote:
> > > I'm doing some development and wondered if this kind of pattern is
> > > problematic:
> > > 
> > > abstract AbstractNode
> > > abstract PhylogenyNode <: AbstractNode
> > > abstract NetworkNode <: AbstractNode
> > > abstract AbstractEdge
> > > abstract PhylogenyEdge <: AbstractEdge
> > > abstract NetworkEdge <: AbstractEdge
> > > 
> > > type Branch{N <: PhylogenyNode} <: PhylogenyEdge
> > > 
> > > from::N{Branch}
> > > to::N{Branch}
> > > length::Float64
> > > 
> > > function Branch{N}(::Type{N})
> > > 
> > > x = new()
> > > length!(x, -1.0)
> > > return x
> > > 
> > > end
> > > 
> > > end
> > > 
> > > type Clade{E <: PhylogenyEdge} <: PhylogenyNode
> > > 
> > > from::E
> > > to::Vector{E}
> > > confidence::Float64
> > > 
> > > function Clade{E}(::Type{E})
> > > 
> > > x = new()
> > > x.to = Vector{E}()
> > > confidence!(x, -1.0)
> > > return x
> > > 
> > > end
> > > 
> > > end
> > > 
> > > 
> > > 
> > > As you can see both concrete types are parametric, and as a result there
> > 
> > is
> > 
> > > a certain circularity to it
> > > Clade{Branch{Clade{Branch{Clade{Branch} That ultimately ends in
> > > something like Clade{Branch{N<:PhylogenyNode}}. I'd like to know if this
> > 
> > is
> > 
> > > type-certain or not - the fact it terminates in N<:PhylogenyNode or
> > > E<:PhylogenyEdge makes me doubt it.




Re: [julia-users] High memory consumption with loops

2016-08-24 Thread Tim Holy
What have you tried so far? See http://docs.julialang.org/en/latest/manual/performance-tips/#tools, esp. @code_warntype.
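For instance (a hypothetical invocation; substitute representative arguments from the actual program):

```julia
# Look for non-concrete (`Any`, `Union`) variables in the output:
@code_warntype performloop(1, categories[1], flights[1], tuple, wrtuple)

# Allocations can also be attributed line-by-line by starting julia with
#   julia --track-allocation=user
# and inspecting the generated *.mem files after a run.
```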

--Tim

On Wednesday, August 24, 2016 1:55:23 PM CDT Venil Noronha wrote:
> The following code seems to consume a lot of memory. Could anyone spot what
> I'm doing wrong here? I'm also using the JuMP library.
> 
> function initUtilityConstraint()
> c = categories[1]
> me = attack_methods[1]
> t = teams[1]
> tuple = Simulate.cmt(c, me, t)
> w = windows[1]
> r = resources[1]
> wrtuple = Simulate.wr(w, r)
> for ni in 1:size(list,1), c in categories, f in flights
> performloop(ni, c, f, tuple, wrtuple)
> end
> end
> 
> function performloop(ni, c, f, tuple, wrtuple)
> fi = findfirst(flights, f)
> for w in windows, me in attack_methods
> tuple.c = c
> tuple.m = me
> total = 0.0
> for t in teams
> tuple.t = t
> strat = linearizeStrategyS(f, c, t, w, ni)
> total = total + effectiveness[tuple]*strat
> end
> 
> total = ( total*(flight_vals[fi]*(-1)) + flight_vals[fi] )
> 
> for w2 in owindows, r2 in resources
> wrtuple.w = w2
> wrtuple.r = r2
> over = linearizeOverflow(w2, r2, ni)
> total = total - resource_fines[wrtuple]*over
> end
> # println(string( sc[c], "<=", ( total*(flight_vals[fi]*(-1)) +
> flight_vals[fi] )))
> @addConstraint(m, sc[c] <= total)
> end
> end
> 
> Following are the stats for piece this code. Stats were recorded using
> @time.
> 
> For 1 item in the flights array, it takes about 620KB to execute the
> performloop method - peak memory consumption by the program is 8.84GBs.
> 2 flights - 1.02MB per iteration of performloop - peak 8.88GBs.
> 3 flights - 3.45MB - 9.60GBs
> 4 flights - 4.35MB - 10.24GBs
> 5 flights - 10.8MB - 15.63GBs
> 
> It'd be great if someone could help me with this asap!
> 
> Thanks.




Re: [julia-users] Recursive/Circular parametric types

2016-08-24 Thread Tim Holy
I don't think that's type-stable. Since each node of each tree will also be a 
different type, I also think you'll end up hating life due to compile times. 
There's some (peripherally) relevant discussion at http://docs.julialang.org/en/latest/manual/performance-tips/#the-dangers-of-abusing-multiple-dispatch-aka-more-on-types-with-values-as-parameters

Best,
--Tim

On Tuesday, August 23, 2016 3:28:17 PM CDT Ben Ward wrote:
> I'm doing some development and wondered if this kind of pattern is
> problematic:
> 
> abstract AbstractNode
> abstract PhylogenyNode <: AbstractNode
> abstract NetworkNode <: AbstractNode
> abstract AbstractEdge
> abstract PhylogenyEdge <: AbstractEdge
> abstract NetworkEdge <: AbstractEdge
> 
> type Branch{N <: PhylogenyNode} <: PhylogenyEdge
> from::N{Branch}
> to::N{Branch}
> length::Float64
> 
> function Branch{N}(::Type{N})
> x = new()
> length!(x, -1.0)
> return x
> end
> end
> 
> type Clade{E <: PhylogenyEdge} <: PhylogenyNode
> from::E
> to::Vector{E}
> confidence::Float64
> 
> function Clade{E}(::Type{E})
> x = new()
> x.to = Vector{E}()
> confidence!(x, -1.0)
> return x
> end
> end
> 
> 
> 
> As you can see both concrete types are parametric, and as a result there is
> a certain circularity to it
> Clade{Branch{Clade{Branch{Clade{Branch} That ultimately ends in
> something like Clade{Branch{N<:PhylogenyNode}}. I'd like to know if this is
> type-certain or not - the fact it terminates in N<:PhylogenyNode or
> E<:PhylogenyEdge makes me doubt it.




Re: [julia-users] Re: ANN: Documenter.jl 0.3

2016-08-22 Thread Tim Holy
I haven't inspected anything in detail, but I agree with Kristoffer that my 
first impression is that I really like the new look. Nice work, folks.

Best,
--Tim

On Monday, August 22, 2016 1:30:39 PM CDT Kristoffer Carlsson wrote:
> Just from looking at the generated docs for the Documenter package I would
> say that I strongly like the layout and "feel" of the HTML output. Much
> better than what I have seen from using the Mkdocs.
> 
> Maybe it is a bit redundant to have the base module name before all the
> type and function names?
> 
> On Monday, August 22, 2016 at 10:23:35 PM UTC+2, Michael Hatherly wrote:
> > Yes, Morten's additions to the package over the summer have really be
> > great!
> > 
> > I'd also like to re-emphasise his call for feedback and suggestions
> > (there's already been lots, but we're always looking for more) on the new
> > HTML output,
> > and the package in general. We want to end up with a documentation
> > solution for Julia that any package author can simply pick up and use
> > without having
> > to worry about anything other than writing great content for their docs.
> > It's definitely not there just yet, but we'll get there soon enough I'm
> > sure.
> > 
> > On Monday, 22 August 2016 18:58:43 UTC+2, David Anthoff wrote:
> >> Yes, this is really cool, much appreciated!!
> >> 
> >> 
> >> 
> >> *From:* julia...@googlegroups.com [mailto:julia...@googlegroups.com] *On
> >> Behalf Of *Christoph Ortner
> >> *Sent:* Saturday, August 20, 2016 6:56 PM
> >> *To:* julia-users 
> >> *Subject:* [julia-users] Re: ANN: Documenter.jl 0.3
> >> 
> >> 
> >> 
> >> this is really nice; thank you for putting this package together.
> >> 
> >> On Saturday, 20 August 2016 12:36:21 UTC+1, Morten Piibeleht wrote:
> >> 
> >> On Saturday, August 20, 2016 at 2:18:37 AM UTC+3, Christoph Ortner wrote:
> >> 
> >> I want to give this a try but I can't find the example of HTML output,
> >> which is supposed to be in test/html?
> >> 
> >> 
> >> 
> >> Thank you.
> >> 
> >> 
> >> 
> >> I apologize, the linked docs are a bit outdated and will be updated
> >> shortly. As was already mentioned, since Documenter uses the HTML output
> >> for its own docs, `docs/make.jl` is the best example.
> >> 
> >> 
> >> 
> >> `mkdocs.yml` has been dropped indeed. Instead the site's structure and
> >> title have to be defined in `make.jl`, via the (currently undocumented)
> >> `sitename` and `pages` options.
> >> 
> >> 
> >> 
> >> The HTML site gets built into `build/` directly, where we normally have
> >> outputted the processed Markdown files (with the filenames being
> >> translated
> >> as `path/file.md` -> `path/file.html`).
> >> 
> >> 
> >> 
> >> On Saturday, August 20, 2016 at 2:53:23 AM UTC+3, Stefan Karpinski wrote:
> >> 
> >> On Fri, Aug 19, 2016 at 5:07 PM, Morten Piibeleht wrote:
> >> [*] Developed as part of Morten's Google Summer of Code project.
> >> 
> >> Since I think that page is private, here's the description of the
> >> project:
> >> 
> >> 
> >> 
> >> Yes, the correct link should have been
> >> https://summerofcode.withgoogle.com/projects/#5046486001778688 (but it
> >> basically only contains the description Stefan already posted).




Re: [julia-users] Re: higher rank sparse matrices

2016-08-21 Thread Tim Holy
Perhaps https://github.com/JuliaComputing/NDSparseData.jl?

--Tim

On Sunday, August 21, 2016 8:14:48 AM CDT Kristof Cools wrote:
> Just wondering whether there have emerged any packages for this in the
> meantime. I need a rank 3 sparse matrix to implement a retarded potential
> integral equation solver. The structure will have non zero entries for all
> values of the first two indices and a varying but fixed length set for the
> third index:
> 
> A[i,j,k] != 0 for k0(i,j) <= k <= k1(i,j)
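Given that banded structure, a hand-rolled container may even beat a general sparse type; here's a minimal sketch (the type and field names are my own):

```julia
immutable BandedRank3{T}
    k0::Matrix{Int}             # first nonzero k for each (i,j)
    vals::Matrix{Vector{T}}     # vals[i,j][k - k0[i,j] + 1] == A[i,j,k]
end

Base.getindex(A::BandedRank3, i::Int, j::Int, k::Int) =
    A.vals[i,j][k - A.k0[i,j] + 1]
```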




Re: [julia-users] suspect memory allocation in generated function

2016-08-21 Thread Tim Holy
Not an answer to your question (I don't have time to check now), but in case 
it helps perhaps FFTViews may be a useful model. (The code is shorter than the 
README :-).)

https://github.com/timholy/FFTViews.jl

I think getting its tests to pass requires what will become julia-0.5+rc3 
(currently https://github.com/JuliaLang/julia/pull/18156 aka branch tk/
backports-0.5.0-rc3).

Best,
--Tim

On Sunday, August 21, 2016 9:09:21 AM CDT Davide Lasagna wrote:
> Hi,
> 
> I am implementing a custom array type where indexing wraps around on
> user-specified periodic dimensions. This makes it easier to write stencil
> operations, at the expense of some performance penalty (3x slower). The
> basic code can be found here.
> 
> I am overloading the getindex function by defining a `@generated` getindex
> method, in which I add bound checking code, lines 45-49, only over the
> non-periodic dimensions. When the bound checking code is generated I get a
> large memory allocation and a significant slowdown. I can't seem to
> understand why, as the generated code looks "clean" enough to me.
> 
> Similar large allocations occur when the `@noinline` is removed in front of
> the function `peri2ind`, although probably unrelated.
> 
> Any pointers are welcome.
> 
> Thanks.
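Not an answer to the allocation question either, but for readers following along, the wrap-around indexing idea itself can be sketched with `mod1` (`periwrap` is a hypothetical helper name, not the linked package's code):

```julia
# Map an out-of-range index on a periodic dimension back into 1:n.
periwrap(i::Integer, n::Integer) = mod1(i, n)

a = [10, 20, 30, 40]
a[periwrap(5, length(a))]  # wraps around to a[1] -> 10
a[periwrap(0, length(a))]  # wraps back to a[4]   -> 40
```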




Re: [julia-users] Re: ANN: Optim v0.6

2016-08-14 Thread Tim Holy
For what it's worth, long ago I was caught by that once, too.

To fix it, I think all one would have to do is make the doc badges 2x larger 
(if possible).

--Tim

On Sunday, August 14, 2016 1:22:26 AM CDT Kristoffer Carlsson wrote:
> What did you expect them to be? They are right under a big table header
> saying "Documentation". They have the same style as the other badges which
> are also clickable and have been for years. I think it is worth seeing if
> more people can't find the documentation before doing any changes.




Re: [julia-users] Re: "eval cannot be used in a generated function"

2016-08-10 Thread Tim Holy
AFAICT, it remains possible to do dynamic type generation if you (1) print the 
code that would define the type to a file, and (2) `include` the file.

function create_type_dynamically{T}(::Type{T})
type_name = string("MyType", T)
isdefined(Main, Symbol(type_name)) && return nothing
filename = joinpath(tempdir(), string(T))
open(filename, "w") do io
println(io, """
type $type_name
val::$T
end
""")
end
eval(include(filename))
nothing
end

Is this somehow less evil than doing it in a generated function?

Best,
--Tim

On Wednesday, August 10, 2016 9:49:23 PM CDT Jameson Nash wrote:
> > Why is it impossible to generate a new type at run time? I surely can do
> 
> this by calling `eval` at module scope.
> 
> module scope is compile time != runtime
> 
> > Or I could create a type via a macro.
> 
> Again, compile time != runtime
> 
> > Given this, I can also call `eval` in a function, if I ensure the
> 
> function is called only once.
> 
> > Note that I've been doing this in Julia 0.4 without any (apparent)
> 
> problems.
> 
> Sure, I'm just here to tell you why it won't work that way in v0.5
> 
> > I'm not defining thousands of types in my code. I define one type, and
> 
> use it all over the place. However, each time my code runs (for days!), it
> defines a different type, chosen by a set of user parameters. I'm also not
> adding constraints to type parameters -- the type parameters are just `Int`
> values.
> 
> Right, the basic tradeoff required here is that you just need to provide a
> convenient way for your user to declare the type at the toplevel that will
> be used for the run. For example, you can just JIT the code for the whole
> run at the beginning:
> 
> function do_run()
>   return @eval begin
>  lots of function definitions
>  do_work()
>   end
> end
> 
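A concrete toy version of Jameson's sketch above (names are made up; the point is that `@eval` runs the definitions at toplevel, specialized on the runtime parameter):

```julia
# JIT the parameter-dependent definition at toplevel, then run it.
function do_run(n::Int)
    return @eval begin
        do_work() = $n * 2   # definition chosen by the runtime parameter
        do_work()
    end
end

do_run(21)  # defines do_work() = 21 * 2 at toplevel and evaluates it -> 42
```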
> On Wed, Aug 10, 2016 at 5:14 PM Erik Schnetter  wrote:
> > On Wed, Aug 10, 2016 at 1:45 PM, Jameson  wrote:
> >> AFAIK, defining an arbitrary new type at runtime is impossible, sorry. In
> >> v0.4 it was allowed, because we hoped that people understood not to try.
> >> See also https://github.com/JuliaLang/julia/issues/16806. Note that it
> >> is insufficient to "handle" the repeat calling via caching in a Dict or
> >> similar such mechanism. It must always compute the exact final output
> >> from
> >> the input values alone (e.g. it must truly be const pure).
> > 
> > The generated function first calculates the name of the type, then checks
> > (`isdefined`) if this type is defined, and if so, returns it. Otherwise it
> > is defined and then returned. This corresponds to looking up the type via
> > `eval(typename)` (a symbol). I assume this is as pure as it gets.
> > 
> > Why is it impossible to generate a new type at run time? I surely can do
> > this by calling `eval` at module scope. Or I could create a type via a
> > macro. Given this, I can also call `eval` in a function, if I ensure the
> > function is called only once. Note that I've been doing this in Julia 0.4
> > without any (apparent) problems.
> > 
> > Being able to define types with arbitrary constraints in the type
> > 
> >> parameters works OK for toy demos, but it's intentionally rather
> >> difficult
> >> since it causes performance issues at scale. Operations on Array are
> >> likely
> >> to be much faster (including the allocation) than on Tuple (due to the
> >> cost
> >> of *not* allocating) unless that Tuple is very small.
> > 
> > I'm not defining thousands of types in my code. I define one type, and use
> > it all over the place. However, each time my code runs (for days!), it
> > defines a different type, chosen by a set of user parameters. I'm also not
> > adding constraints to type parameters -- the type parameters are just
> > `Int`
> > values.
> > 
> > And yes, I am using a mutable `Vector{T}` as underlying storage, that's
> > not the issue here. The speedup comes from knowing the size of the array
> > ahead of time, which allows the compiler to optimize indexing expressions.
> > I've benchmarked it, and examined the generated machine code. There's no
> > doubt that generating a type is the "right thing" to do in this case.
> > 
> > -erik
> > 
> > On Wednesday, August 10, 2016 at 1:25:15 PM UTC-4, Erik Schnetter wrote:
> >>> I want to create a type, and need more flexibility than Julia's `type`
> >>> definitions offer (see ).
> >>> Currently, I have a function that generates the type, and returns the
> >>> type.
> >>> 
> >>> I would like to make this a generated function (as it was in Julia 0.4).
> >>> The advantage is that this leads to type stability: The generated type
> >>> only
> >>> depends on the types of the arguments pass to the function, and Julia
> >>> would
> >>> be able to infer the type.
> >>> 
> >>> In practice, this looks like
> >>> 
> >>> using FastArrays
> >>> # A (10x10) fixed-size array
> >>> typealias Arr2d_10x10 FastArray(1:10, 1:10)
> >>> a2 = Arr2d

Re: [julia-users] Is the master algorithm on the roadmap?

2016-08-09 Thread Tim Holy
I'd recommend starting by picking a very small project. For example, fix a bug 
or implement a small improvement in a package that you already find useful or 
interesting. That way you'll get some guidance while making a positive 
contribution; once you know more about julia, it will be easier to see your 
way forward.

Best,
--Tim

On Monday, August 8, 2016 8:22:01 PM CDT Kevin Liu wrote:
> I have no idea where to start and where to finish. Founders' help would be
> wonderful.
> 
> On Tuesday, August 9, 2016 at 12:19:26 AM UTC-3, Kevin Liu wrote:
> > After which I have to code Felix into Julia, a relational optimizer for
> > statistical inference with Tuffy 
> > inside, for enterprise settings.
> > 
> > On Tuesday, August 9, 2016 at 12:07:32 AM UTC-3, Kevin Liu wrote:
> >> Can I get tips on bringing Alchemy's optimized Tuffy
> >>  in Java to Julia while showing the
> >> best of Julia? I am going for the most correct way, even if it means
> >> coding
> >> Tuffy into C and Julia.
> >> 
> >> On Sunday, August 7, 2016 at 8:34:37 PM UTC-3, Kevin Liu wrote:
> >>> I'll try to build it, compare it, and show it to you guys. I offered to
> >>> do this as work. I am waiting to see if they will accept it.
> >>> 
> >>> On Sunday, August 7, 2016 at 6:15:50 PM UTC-3, Stefan Karpinski wrote:
>  Kevin, as previously requested by Isaiah, please take this to some
>  other forum or maybe start a blog.
>  
>  On Sat, Aug 6, 2016 at 10:53 PM, Kevin Liu  wrote:
> > Symmetry-based learning, Domingos, 2014
> > https://www.microsoft.com/en-us/research/video/symmetry-based-learning
> > /
> > 
> > Approach 2: Deep symmetry networks generalize convolutional neural
> > networks by tying parameters and pooling over an arbitrary symmetry
> > group,
> > not just the translation group. In preliminary experiments, they
> > outperformed convnets on a digit recognition task.
> > 
> > On Friday, August 5, 2016 at 4:56:45 PM UTC-3, Kevin Liu wrote:
> >> Minsky died of a cerebral hemorrhage at the age of 88.[40] Ray Kurzweil
> >> says he was contacted by the cryonics organization Alcor Life Extension
> >> Foundation seeking Minsky's body.[41] Kurzweil believes that Minsky was
> >> cryonically preserved by Alcor and will be revived by 2045.[41] Minsky
> >> was a member of Alcor's Scientific Advisory Board.[42] In keeping with
> >> their policy of protecting privacy, Alcor will neither confirm nor deny
> >> that Alcor has cryonically preserved Minsky.[43]
> >> 
> >> 
> >> We better do a good job.
> >> 
> >> On Friday, August 5, 2016 at 4:45:42 PM UTC-3, Kevin Liu wrote:
> >>> *So, I think in the next 20 years (2003), if we can get rid of all
> >>> of the traditional approaches to artificial intelligence, like
> >>> neural nets
> >>> and genetic algorithms and rule-based systems, and just turn our
> >>> sights a
> >>> little bit higher to say, can we make a system that can use all
> >>> those
> >>> things for the right kind of problem? Some problems are good for
> >>> neural
> >>> nets; we know that others, neural nets are hopeless on them. Genetic
> >>> algorithms are great for certain things; I suspect I know what
> >>> they're bad
> >>> at, and I won't tell you. (Laughter)*  - Minsky, founder of CSAIL
> >>> MIT
> >>> 
> >>> *Those programmers tried to find the single best way to represent
> >>> knowledge - Only Logic protects us from paradox.* - Minsky (see
> >>> attachment from his lecture)
> >>> 
> >>> On Friday, August 5, 2016 at 8:12:03 AM UTC-3, Kevin Liu wrote:
>  Markov Logic Network is being used for the continuous development
>  of drugs to cure cancer at MIT's CanceRX ,
>  on
>  DARPA's largest AI project to date, Personalized Assistant that
>  Learns (PAL) , progenitor of Siri. One of
>  Alchemy's largest applications to date was to learn a semantic
>  network
>  (knowledge graph as Google calls it) from the web.
>  
>  Some on Probabilistic Inductive Logic Programming / Probabilistic
>  Logic Programming / Statistical Relational Learning from CSAIL
>  

Re: [julia-users] precompiling all packages

2016-08-09 Thread Tim Holy
There are several previous threads on this, hopefully one of those will help.

Best,
--Tim

On Monday, August 8, 2016 8:13:42 PM CDT Chang Kwon wrote:
> Is there a way to precompile all packages at once? Each time that I run
> Pkg.update(), I would also like to precompile all packages so that when I
> actually use packages I don't need to precompile.
> 
> Chang




Re: [julia-users] How do I write `import Base.!` in julia 0.5?

2016-08-09 Thread Tim Holy
import Base: !

also works (and you can include more operators).

--Tim

On Monday, August 8, 2016 9:59:25 PM CDT Fengyang Wang wrote:
> On Monday, August 8, 2016 at 10:26:46 AM UTC-4, Kevin Squire wrote:
> > Try
> > 
> >   import Base.(!)
> > 
> > Cheers,
> > 
> >   Kevin
> 
> Why do import statements have different syntax? This syntax, I'm pretty
> sure, has either meant getfield or broadcast—in neither case does it
> actually refer to the ! function.
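To illustrate why the import matters, a small sketch with a made-up `Flag` type (the thread's 0.4/0.5-era `type` keyword is written `struct` on later Julia):

```julia
import Base: !, ==   # bring the operators into scope so methods can be added

struct Flag          # `type Flag ... end` in the 0.4/0.5 syntax of this thread
    value::Bool
end

!(f::Flag) = Flag(!f.value)
==(a::Flag, b::Flag) = a.value == b.value

(!Flag(true)) == Flag(false)  # -> true
```

Without the `import`, the method definitions would create new local functions instead of extending `Base.!` and `Base.==`.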




Re: [julia-users] Re: CALL TO ACTION for package devs

2016-08-05 Thread Tim Holy
Aside from checking out the package's master, a glance at the list of open PRs 
and recent commits usually tells you a lot.

Best,
--Tim

On Thursday, August 4, 2016 7:23:11 PM CDT Chris Rackauckas wrote:
> Not always. Some repos have a dev branch or a v0.5 compatibility branch,
> etc. I think that if you're looking to start working on the package, the
> first thing to do is get in contact with one of the authors (join the
> Gitter channel if they have one).
> 
> On Thursday, August 4, 2016 at 4:50:55 PM UTC-7, Kristoffer Carlsson wrote:
> > I would say that if you want to contribute to a package you should be on
> > the latest master commit and work from there.
> > 
> > On Friday, August 5, 2016 at 1:45:46 AM UTC+2, Eric Forgy wrote:
> >> I agree. Another consequence of v0.5 updates on master that are not
> >> tagged is that newbies, wanting to help, end up fixing things that are
> >> already fixed not realizing. I've done this a couple of times and it is
> >> frustrating.




Re: [julia-users] Re: Tuples of Functions

2016-08-02 Thread Tim Holy
On Wednesday, August 3, 2016 6:34:51 AM CDT Yichao Yu wrote:
> I guess I could register it after some clean up later, given that we had
> FastAnonymous package (which was in a very similar situation, if not
> slightly more undefined than this one, sorry @timholy ;-p )

Hey, no sweat. I agree that FastAnonymous was a (temporarily very useful) 
heinous hack, and that its main benefit was to annoy Jeff so much that he 
made functions fast. In that one respect, I'll immodestly say that it was a 
brilliant and spectacularly successful package---I dare you to try to match it 
:-).

--Tim



Re: [julia-users] I'd like to see something like the Rust platform proposal in the Julia ecosystem

2016-08-01 Thread Tim Holy
module MyMetaPackage

using Reexport

@reexport using PackageA
@reexport using PackageB
...

end

Best.
--Tim

On Monday, August 1, 2016 1:48:47 AM CDT Steven Sagaert wrote:
> is more than just a webpage with a list of packages... for starters the
> concept of metapackage.
> 
> On Monday, August 1, 2016 at 10:25:33 AM UTC+2, Tamas Papp wrote:
> > Maybe you already know about it, but there is a curated list of packages
> > at https://github.com/svaksha/Julia.jl
> > 
> > On Mon, Aug 01 2016, Steven Sagaert wrote:
> > > see https://aturon.github.io/blog/2016/07/27/rust-platform/




Re: [julia-users] Re: Writing code compatible with custom-indexed arrays

2016-07-26 Thread Tim Holy
My only goal is to provide a smooth path for catching likely bugs in "legacy" 
code; if people start abusing `len`, that's their own *#!$ fault. This is 
about protecting the innocent (people who wrote code that was perfectly fine in 
an earlier era), not about trying to prevent abuse.

So, I'm not against giving _length a better name and exporting it. Could you 
please file these concerns as an issue? It might get more attention from the 
right parties that way.

Best,
--Tim

On Tuesday, July 26, 2016 5:03:31 AM CDT Oliver Schulz wrote:
> > Are you saying we should export what is currently _length?
> 
> Well, maybe not exactly length, but I believe Base should export some short
> equivalent length(linearindices(A)), to encourage people to write code that
> will work with both standard and custom-indexed arrays.
> 
> In a sense (at least according to
> http://docs.julialang.org/en/latest/devdocs/offset-arrays/), we're kinda
> deprecating Base.length. Not explicitly, but as the recommendation is to
> not support it for custom-indexed arrays, I can't use Base.length any more
> if I want to write generic code. After all, I don't know if someone else
> may want to use my function with a custom-indexed array. But I find that I
> often do need the length of the input (e.g. for normalization),
> and length(linearindices(A)) is just soo unwieldy. Of course every package
> can define it's own shortcut for it - but isn't that a bit un-Julianic? Or
> are you worried that people will then start using (e.g.) 1:len(A) instead
> of 1:length(A), resulting in an even bigger mess?
> 
> On Tuesday, July 26, 2016 at 1:47:57 PM UTC+2, Tim Holy wrote:
> > On Tuesday, July 26, 2016 2:31:10 AM CDT Oliver Schulz wrote:
> > > I should have known you already have a (very elegant) solution in your
> > > pocket. :-) I really like your proposed first:last syntax!
> > 
> > As discussed in that issue, the problem is that it may not be workable
> > (it's
> > also very unlikely to be merged for julia 0.5). The "safe" alternative is
> > to
> > modify the parser, although that has the disadvantage that you can't pass
> > constructs with `start` or `end` as arguments to functions---it only works
> > inside of [] brackets, which was what motivated addition of the `@view`
> > macro.
> > Bottom line is that we don't have a good solution to this problem at the
> > moment.
> > 
> > Interestingly, it seems likely that one could experiment with https://
> > github.com/JuliaLang/julia/pull/15750 in a package. If it becomes popular
> > then
> > it might help clarify whether one should move it into base.
> > 
> > > Concerning Base._length, I was rather thinking about something for the
> > > average user to use instead of length. For everyday
> > > use, length(linearindices(A)) is just too unwieldy, IMHO.
> > 
> > Are you saying we should export what is currently _length? Keep in mind
> > that
> > folks who want to add support for unconventional indices to their own
> > packages
> > can of course make this one-line definition themselves, and then they
> > don't
> > have to worry about whether Base will eliminate or rename it.
> > 
> > Best,
> > --Tim
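In the meantime, much generic code can sidestep `length` altogether. A sketch (`eachindex` works for both conventional and custom-indexed arrays, so nothing here assumes indices start at 1):

```julia
# Sum the elements without touching length(A) or assuming 1-based indices.
function mysum(A::AbstractArray)
    s = zero(eltype(A))
    for i in eachindex(A)
        s += A[i]
    end
    s
end

mysum([1, 2, 3])  # -> 6
```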




Re: [julia-users] Re: Writing code compatible with custom-indexed arrays

2016-07-26 Thread Tim Holy
OK, new package here:

https://github.com/timholy/EndpointRanges.jl

It might make life easier even for folks who aren't using custom-indexed 
arrays. Give it a whirl and see what breaks!

Best,
--Tim

On Tuesday, July 26, 2016 2:31:10 AM CDT Oliver Schulz wrote:
> Hi Tim,
> 
> I should have known you already have a (very elegant) solution in your
> pocket. :-) I really like your proposed first:last syntax!
> 
> Concerning Base._length, I was rather thinking about something for the
> average user to use instead of length. For everyday
> use, length(linearindices(A)) is just too unwieldy, IMHO.
> 
> 
> Cheers,
> 
> Oliver
> 
> On Tuesday, July 26, 2016 at 9:21:27 AM UTC+2, Oliver Schulz wrote:
> > I was reading up on the new arrays with custom indices in 0.5. Now that
> > array indices do not implicitly start with one, what's the generic
> > replacement for index ranges like
> > 
> > A[2:end-1]
> > 
> > Will there by something like
> > 
> > A[start+1:end-1]
> > 
> > or so?
> > 
> > Also, since for safety reasons length(A) is now supposed to throw an error
> > for custom-indexed arrays (to prevent bugs like 1:length(A)), the
> > current
> > recommendation
> > (http://docs.julialang.org/en/latest/devdocs/offset-arrays/)
> > is to use
> > 
> > length(linearindices(A))
> > 
> > instead of length() in generic code. This strikes me as very lengthy
> > indeed (pardon the pun) and will make code less readable - how about a new
> > length function like
> > 
> > Base.len(A) = length(linearindices(A))
> > 
> > or similar? If there's no "official" replacement for length in Base, I
> > suspect that packages will add their own internal version of it at some
> > point. Also, people might get tempted to continue using length() out of
> > laziness (if there's no short replacement) - resulting in code that fails
> > for the new arrays.




Re: [julia-users] Re: Writing code compatible with custom-indexed arrays

2016-07-26 Thread Tim Holy
On Tuesday, July 26, 2016 2:31:10 AM CDT Oliver Schulz wrote:
> I should have known you already have a (very elegant) solution in your
> pocket. :-) I really like your proposed first:last syntax!

As discussed in that issue, the problem is that it may not be workable (it's 
also very unlikely to be merged for julia 0.5). The "safe" alternative is to 
modify the parser, although that has the disadvantage that you can't pass 
constructs with `start` or `end` as arguments to functions---it only works 
inside of [] brackets, which was what motivated addition of the `@view` macro. 
Bottom line is that we don't have a good solution to this problem at the 
moment.

Interestingly, it seems likely that one could experiment with https://
github.com/JuliaLang/julia/pull/15750 in a package. If it becomes popular then 
it might help clarify whether one should move it into base.

> Concerning Base._length, I was rather thinking about something for the
> average user to use instead of length. For everyday
> use, length(linearindices(A)) is just too unwieldy, IMHO.

Are you saying we should export what is currently _length? Keep in mind that 
folks who want to add support for unconventional indices to their own packages 
can of course make this one-line definition themselves, and then they don't 
have to worry about whether Base will eliminate or rename it.

Best,
--Tim



Re: [julia-users] Writing code compatible with custom-indexed arrays

2016-07-26 Thread Tim Holy
Hi Oliver,

On Tuesday, July 26, 2016 12:21:27 AM CDT Oliver Schulz wrote:
> Will there by something like
> 
> A[start+1:end-1]

That would be lovely. See discussion in https://github.com/JuliaLang/julia/
pull/15750.

> length(linearindices(A))
> 
> instead of length() in generic code. This strikes me as very lengthy indeed
> (pardon the pun) and will make code less readable - how about a new length
> function like
> 
> Base.len(A) = length(linearindices(A))
> 
> or similar?

There's Base._length(A), though of course keep in mind that it's not exported 
and thus subject to change more than most.

Best,
--Tim



Re: [julia-users] Re: A Very Simple Benchmark for Brutal-force Loops in Several Languages: revised, Julia is fast!

2016-07-25 Thread Tim Holy
Given the apparent interest in the topic and the decisions that people seem to 
be making, it seems worth pointing out that folks are still using apples-to-
oranges comparisons on this benchmark.

There are at least two important differences:
- in the other languages, `linspace` allocates a vector, but in Julia it 
(currently) creates a compact object from which values are computed on-the-fly. 
That computation involves a division, and division is slow.
- The languages like C aren't doing bounds-checking. You might imagine adding 
`@inbounds` to the Julia version. But `@inbounds` has no impact on LinSpace 
objects in julia 0.4. In julia 0.5, it does work like you'd expect (thanks, 
Blake Johnson).

Combining these observations, we can `collect` the values into a `Vector` and 
then use `@inbounds`. For me the version below is nearly twice as fast as the 
original:

function benchmark()
nsamples = 100
x = collect(linspace(0, 5, nsamples))
y = zeros(nsamples)
# attempt to trigger JIT to compile all functions needed in the loops
# before profiling
a = cos(0.0); a = 1.5*2.5; a = 1.5+2.5;
println("\nBrutal-force loops, 100 times:")
@time begin
for m = 1:100
@inbounds for n = 1:nsamples
y[n] = cos(2*x[n]+5);
end
end
end
end
benchmark();

Best,
--Tim

On Monday, July 25, 2016 2:02:41 PM CDT Zhong Pan wrote:
> Agree that while raw speed is important, in most situations it wouldn't be
> the most important reason to choose one programming language over another.
> 
> I came from the angle of an engineer in a small company. For myself, the
> main attraction of Julia was the easiness to achieve decent speed without
> making much explicit effort: that means what feels more natural vectorized
> will be vectorized, while what feels more natural in a loop will be in a
> loop; that means I don't need to resort to another language or a library
> only for improving speed; and that means apart from sticking to a couple
> good habits, I can use objects, functions etc. the same way inside a loop
> vs. outside. None of these is critical by itself, but they add up to an
> uninterrupted flow of thoughts while writing code to explore, try, fail,
> and retry, for many iterations.
> 
> During this "less careful" prototyping, 1-2x slow down is fine, but with
> Julia I know I won't sit there for tens of minutes waiting for a result
> while debating myself whether I should rewrite it in C++ or rehaul the code
> with Cython etc.; instead I can rest assured that as long as my algo and
> coding have no mistakes or major flaws, the speed is close to what I will
> get even if I make several times more effort to rewrite it in C++.
> 
> Another big deal for me is the resulted removal of the barrier between
> prototype and production code. For production I can review and improve my
> code carefully, but rewriting it in a less expressive language is too much.
> 
> I was a huge fan of Python (heck I even persuaded my previous boss, a VP,
> to pick up Python - though I don't know if he really had time to finish it.
> 
> :-)). However, the slow raw speed and the over-freedom to change class
> 
> definition anywhere always gave me the itch to find something better. My
> brother at JPL who worked on Python projects also complained about having
> to think really hard to vectorize almost everything and then couldn't
> easily understand what he was doing a few months later because the code was
> too unnatural for the problem; the indentation was also a big headache as
> collaborators use different editors with different tab definitions.
> 
> So I'm really happy to have found Julia, which gave me the same joy as
> coding in Python and removed the main itches.
> 
> -Zhong




Re: [julia-users] Re: Tuple type to tuple of types

2016-07-24 Thread Tim Holy
As long as you don't mind preserving exactly what's between the {},
 (t.parameters...)
is an easy way to get this.

--Tim

On Sunday, July 24, 2016 5:21:56 AM CDT Kristoffer Carlsson wrote:
> Maybe;
> 
> type MyType end
> 
> function f(t::DataType)
> I = ()
> for t in t.types
> if isa(t, TypeVar)
> I = (I..., Any)
> else
> I = (I..., t)
> end
> end
> return I
> end
> 
>  julia> f(Tuple{Int64, Float64})
> (Int64,Float64)
> 
> julia> f(Tuple{Int64, MyType, Float32})
> (Int64,MyType,Float32)
> 
> julia> f(NTuple{3})
> (Any,Any,Any)
> 
> julia> f(Tuple)
> (Vararg{Any,N},)
> 
> Probably has crappy performance though.
> 
> On Sunday, July 24, 2016 at 1:52:47 PM UTC+2, jw3126 wrote:
> > I need a function, which accepts an arbitrary tuple type and returns the
> > types of the components of the tuple. For example
> > ```
> > f(Tuple{Int64, Float64}) --> (Int64, Float64)
> > f(Tuple{Int64, MyType, Float32}) --> (Int64, MyType, Float32)
> > f(NTuple{3})  --> (Any, Any, Any)
> > f(Tuple) --> Some error since length is unknown
> > ```
> > 
> > How to accomplish this?
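For reference, Tim's one-liner in runnable form (a trailing comma is needed on newer Julia; in the 0.4/0.5 era `(t.parameters...)` alone parsed as a tuple). Note it preserves exactly what is between the `{}`, so `NTuple{3}` yields `TypeVar`s rather than `Any`:

```julia
# Splat the type's parameters into an ordinary tuple of types.
tupletypes(t::DataType) = (t.parameters...,)

tupletypes(Tuple{Int64, Float64})           # -> (Int64, Float64)
tupletypes(Tuple{Int64, String, Float32})   # -> (Int64, String, Float32)
```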




Re: [julia-users] findpeaks

2016-07-20 Thread Tim Holy
On Wednesday, July 20, 2016 4:01:00 PM CDT Gabriel Gellner wrote:
> Is there a standard (Base or common package) that implements something akin
> to matlab's `findpeaks`. It is easy enough to make something like this, but
> I imagined that this would be something that already exists but that I have
> just missed?

julia> using Images

help?> Images
search: Images nimages Image ImageCmap OverlayImage AbstractImage 
AbstractImageDirect AbstractImageIndexed imaverage integral_image

  Images is a package for representing and processing images.

  Constructors, conversions, and traits:

  - Construction: `Image`, `ImageCmap`, `grayim`, `colorim`, `convert`, 
`copyproperties`, `shareproperties`
  - Traits: `colordim`, `colorspace`, `coords_spatial`, `data`, `isdirect`, 
`isxfirst`, `isyfirst`, `pixelspacing`, `properties`, `sdims`, 
`spacedirections`, `spatialorder`, `storageorder`, `timedim`
  - Size-related traits: `height`, `nchannels`, `ncolorelem`, `nimages`, 
`size_spatial`, `width`, `widthheight`
  - Trait assertions: `assert_2d`, `assert_scalar_color`, 
`assert_timedim_last`, `assert_xfirst`, `assert_yfirst`
  - Indexing operations: `getindexim`, `sliceim`, `subim`
  - Conversions: `convert`, `raw`, `reinterpret`, `separate`

  Contrast/coloration:

  - `MapInfo`: `MapNone`, `BitShift`, `ClampMinMax`, `ScaleMinMax`, 
`ScaleAutoMinMax`, etc.
  - `imadjustintensity`, `sc`, `imstretch`, `imcomplement`

  Algorithms:

  - Reductions: `maxfinite`, `maxabsfinite`, `minfinite`, `meanfinite`, `sad`, 
`ssd`, `integral_image`
  - Resizing: `restrict`, `imresize` (not yet exported)
  - Filtering: `imfilter`, `imfilter_fft`, `imfilter_gaussian`, `imfilter_LoG`, 
`imROF`, `ncc`, `padarray`
  - Filtering kernels: `ando[345]`, `guassian2d`, `imaverage`, `imdog`, 
`imlaplacian`, `prewitt`, `sobel`
  - Exposure : `imhist`, `histeq`
  - Gradients: `backdiffx`, `backdiffy`, `forwarddiffx`, `forwarddiffy`, 
`imgradients`
  - Edge detection: `imedge`, `imgradients`, `thin_edges`, `magnitude`, 
`phase`, `magnitudephase`, `orientation`, `canny`
  - Corner detection: `imcorner`
  - Blob detection: `blob_LoG`, `findlocalmaxima`, `findlocalminima`
  - Morphological operations: `dilate`, `erode`, `closing`, `opening`, 
`tophat`, `bothat`, `morphogradient`, `morpholaplace`
  - Connected components: `label_components`, `component_boxes`, 
`component_lengths`, `component_indices`, `component_subscripts`, 
`component_centroids`

  Test images and phantoms (see also TestImages.jl):

  - `shepp_logan`

help?> findlocalmaxima
search: findlocalmaxima findlocalminima

  findlocalmaxima(img, [region, edges]) -> Vector{Tuple}

  Returns the coordinates of elements whose value is larger than all of their 
immediate neighbors. region is a list of dimensions to consider. edges is a 
boolean specifying whether to
  include the first and last elements of each dimension.

(I think it would be great if most packages would provide an API summary with 
?PkgName. Not that I always remember myself)

--Tim
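For the 1-D matlab-style case specifically, a minimal stand-in is also easy to write by hand (a sketch only: strict interior maxima, with none of `findpeaks`' prominence or minimum-distance options):

```julia
# Indices of strict interior local maxima of a vector.
findpeaks1d(y::AbstractVector) =
    [i for i in 2:length(y)-1 if y[i] > y[i-1] && y[i] > y[i+1]]

findpeaks1d([0, 2, 1, 3, 1])  # -> [2, 4]
```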



Re: [julia-users] What does Base.box mean in code_warntype?

2016-07-19 Thread Tim Holy
They can mean "real" boxing and consequent performance problems, but sometimes 
these get auto-removed during compilation. I see this all the time when 
writing array code, for example this function which takes an input tuple and 
adds 1 to each element:

julia> @inline inc1(a) = _inc1(a...)
inc1 (generic function with 1 method)

julia> @inline _inc1(a1, a...) = (a1+1, _inc1(a...)...)
_inc1 (generic function with 1 method)

julia> _inc1() = ()
_inc1 (generic function with 2 methods)

julia> inc1((3,5,7))
(4,6,8)

# Let's try using inc1 in another function
julia> foo() = (ret = inc1((3,5,7)); prod(ret))
foo (generic function with 1 method)

julia> foo()
192

julia> @code_warntype inc1((3,5,7))
Variables:
  #self#::#inc1
  a::Tuple{Int64,Int64,Int64}

Body:
  begin 
  SSAValue(1) = (Core.getfield)(a::Tuple{Int64,Int64,Int64},2)::Int64
  SSAValue(2) = (Core.getfield)(a::Tuple{Int64,Int64,Int64},3)::Int64
  return (Core.tuple)((Base.box)(Int64,(Base.add_int)((Core.getfield)
(a::Tuple{Int64,Int64,Int64},1)::Int64,1)),(Base.box)(Int64,(Base.add_int)
(SSAValue(1),1)),(Base.box)(Int64,(Base.add_int)(SSAValue(2),
1)))::Tuple{Int64,Int64,Int64}
  end::Tuple{Int64,Int64,Int64}

julia> @code_llvm inc1((3,5,7))

define void @julia_inc1_67366([3 x i64]* noalias sret, [3 x i64]*) #0 {
top:
  %thread_ptr = call i8* asm "movq %fs:0, $0", "=r"() #2
  %2 = getelementptr inbounds [3 x i64], [3 x i64]* %1, i64 0, i64 1
  %3 = getelementptr inbounds [3 x i64], [3 x i64]* %1, i64 0, i64 2
  %4 = getelementptr inbounds [3 x i64], [3 x i64]* %1, i64 0, i64 0
  %5 = load i64, i64* %4, align 8
  %6 = add i64 %5, 1
  %7 = load i64, i64* %2, align 8
  %8 = add i64 %7, 1
  %9 = load i64, i64* %3, align 8
  %10 = add i64 %9, 1
  %11 = getelementptr inbounds [3 x i64], [3 x i64]* %0, i64 0, i64 0
  store i64 %6, i64* %11, align 8
  %12 = getelementptr inbounds [3 x i64], [3 x i64]* %0, i64 0, i64 1
  store i64 %8, i64* %12, align 8
  %13 = getelementptr inbounds [3 x i64], [3 x i64]* %0, i64 0, i64 2
  store i64 %10, i64* %13, align 8
  ret void
}

julia> @code_llvm foo()

define i64 @julia_foo_67563() #0 {
top:
  %thread_ptr = call i8* asm "movq %fs:0, $0", "=r"() #2
  ret i64 192
}

I think you'd be hard-pressed to complain about inefficiencies in foo() ;-).

--Tim

On Tuesday, July 19, 2016 1:42:46 PM CDT Isaiah Norton wrote:
> On Fri, Jul 15, 2016 at 5:02 PM, David Anthoff  wrote:
> > What do these mean?
> 
> http://stackoverflow.com/questions/13055/what-is-boxing-and-unboxing-and-wha
> t-are-the-trade-offs
> > And should I be worried, i.e. is this an indication that something slow
> > might be going on?
> 
> Boxing requires allocation and can block optimizations, so it can be a
> problem to have box/unbox at points where you might hope to be working with
> contiguous primitive values (such as within a loop). But there's really no
> hard-and-fast rule.
> 
> > --
> > 
> > David Anthoff
> > 
> > University of California, Berkeley
> > 
> > 
> > 
> > http://www.david-anthoff.com
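Isaiah's boxing caveat can be made concrete with a small hypothetical example (the names here are invented for illustration): an abstractly-typed container boxes its elements, while a concretely-typed one stores them inline.

```julia
# An abstractly-typed vector boxes each element (a pointer to a
# heap value of unknown concrete type); a concretely-typed vector
# stores the Float64 values contiguously.
boxed   = AbstractFloat[rand() for i in 1:10^6]  # Vector{AbstractFloat}
unboxed = Float64[rand() for i in 1:10^6]        # Vector{Float64}

function total(v)
    s = 0.0
    for x in v
        s += x   # with `boxed`, each `x` must be unboxed and dispatched
    end
    return s
end

total(boxed); total(unboxed)  # warm up the JIT before timing
@time total(boxed)            # slower: dynamic dispatch per element
@time total(unboxed)          # allocation-free loop over primitives
```

The second loop is the "contiguous primitive values" case Isaiah describes; the first pays an unbox on every iteration.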




Re: [julia-users] Re: Few questions about methods and sub()

2016-07-16 Thread Tim Holy
Perhaps just

f = (U,V) ->  flipdim(conv(flipdim(U, 1), V), 1)
if su < sv
return f([u; zeros(eltype(u), sv - su)], v)
elseif sv < su
return f(u, [v; zeros(eltype(v), su - sv)])
else
return f(u, v)
end

?

Worth pointing out that this is not specific to SubArrays; any AbstractArray 
that isn't an Array will have the same problem.

But appending elements to a SubArray is not supported.

Best,
--Tim

On Saturday, July 16, 2016 7:41:10 AM CDT CrocoDuck O'Ducks wrote:
> Hey!
> 
> I started having a deeper look at the code. I tried this first fix:
> 
> function xcorr_fix(u, v)
> 
> su = size(u, 1)
> sv = size(v, 1)
> 
> if su < sv
> # if u is a SubArray this will change its type:
> # u = [u; zeros(eltype(u),sv-su)]
> # This avoids type instability, but forces array copy:
> U = [u; zeros(eltype(u), sv - su)]
> elseif sv < su
> # See above:
> V = [v; zeros(eltype(v), su - sv)]
> else
> U = collect(u)
> V = collect(v)
> end
> 
> flipdim(conv(flipdim(U, 1), V), 1)
> 
> end
> 
> This appears to be somewhat effective, but there are a few problems:
> 
>- It defeats the purpose of passing views to the function, as a copy of
>the parent arrays is performed. Is that right?
>- Passing SubArrays to conv() seems to produce type instability of p and
>y variables.
> 
> Which makes me think that first conv() should be fixed (maybe adding a
> method for the Real signals case?). Second, I feel like views should be
> allowed to go all the way to fft() in order to have a performance boost by
> using them. Is there any way to append elements through SubArrays? I tried
> append!(), insert!() and push!() but none worked... Maybe I could overwrite
> to the input SubArrays new SubArrays of this kind?
> 
> 
> sub([u.parent; zeros(whatever_pad_length)], :, 1)
> 
> 
> I guess I will make more experiments to get comfortable with SubArrays. Any
> suggestion is welcome!




Re: [julia-users] Trying to understand parametric types as function arguments

2016-07-16 Thread Tim Holy
I never could tell my shapes apart. Nearly failed kindergarten because all my 
lines had 3 corners.

--Tim

On Saturday, July 16, 2016 4:20:04 AM CDT Patrick Kofod Mogensen wrote:
> On Thursday, July 14, 2016 at 10:14:38 PM UTC+2, Tim Holy wrote:
> > *Until we get "diagonal dispatch,"* I think the only way to do this is to
> > expand
> 
> > the number of arguments in the function:
> Triangular dispatch, right? Diagonal dispatch is exactly what we have :)
> 
> > myfunc(vec::Vector) = _myfunc(eltype(vec), vec)  # internal function
> > _myfunc{D<:Dict{ASCIIString}}(::Type{D}, vec) = 1
> > _myfunc{D}(::Type{D}, vec) = 2
> > 
> > julia> myfunc([Dict("a"=>1), Dict("b"=>1.0)])
> > 1
> > 
> > julia> myfunc([Dict(1=>:hello), Dict(2=>:world)])
> > 2
> > 
> > Best,
> > --Tim
> > 
> > On Thursday, July 14, 2016 1:05:20 PM CDT Yichao Yu wrote:
> > > On Thu, Jul 14, 2016 at 12:41 PM, David Barton wrote:
> > > > Hi,
> > > > 
> > > > I'm trying to write a function that accepts a vector of values that are of
> > > > the same (overall) type but with different parameterisations. As a simple
> > > > example, consider the vector = [Dict("a"=>1), Dict("b"=>1.0)]. I can easily
> > > > use a function along the lines of
> > > > 
> > > > function myfunc(vec::Vector{Dict})
> > > >   # do something
> > > > end
> > > > 
> > > > but I'd like to be able to restrict the parameterisation slightly so that
> > > > the first parameter of the Dict type is an ASCIIString (not my actual use
> > > > case but follows exactly the same pattern). I've tried doing something
> > > > like
> > > > 
> > > > function myfunc{T}(vec::Vector{Dict{ASCIIString, T}})
> > > >   # do something
> > > > end
> > > > 
> > > > but this seems to enforce the condition that all the Dicts in the vector
> > > > have the same parametric type T (so my example of myfunc([Dict("a"=>1),
> > > > Dict("b"=>1.0)]) fails). Is there any way of expressing this constraint?
> > > > Or
> > > 
> > > The issue is the type of `[Dict("a"=>1), Dict("b"=>1.0)]` It has type
> > > `Vector{Dict{String}}` which cannot be matched by the signature you
> > > provide. I'm not sure if there's currently a clean way to be able to
> > > match this type in addition to the stricter types.
> > > 
> > > > will I just have to use the first form of myfunc with some extra checking
> > > > in the function body?
> > > > 
> > > > Thanks
> > > > David




Re: [julia-users] Inherit from AbstractMatrix

2016-07-15 Thread Tim Holy
Hi Madeleine,

Is there any chance you have two versions of julia installed, and are running 
the wrong one? (Confession: I make that mistake surprisingly frequently!) The 
reason I ask is that on current master:

julia> methodswith(Array, kron)
0-element Array{Method,1}

In master (which is what you linked to) they have all been abstract-ized, yet 
in the julia session you showed, the "closest candidates" list includes 
several methods specific for `Array`.

Best,
--Tim

On Thursday, July 14, 2016 1:53:28 PM CDT Madeleine Udell wrote:
> Convex.jl defines an abstract type AbstractExpr, from which all important
> concrete types in the package are descended. We've defined a bunch of
> functions that allow users to treat AbstractExprs as though they were
> matrices: indexing, hcat, vcat, multiplication, addition, subtraction, etc.
> Can we use these to automatically get more advanced functions, like (for
> example) kron?
> 
> The code for kron uses only indexing and multiplication operators. But I'm not sure how to
> make AbstractExpr (and its subtypes) subtype AbstractMatrix{Float64} or
> subtype AbstractArray{Float64,2} to make this transition seamless.
> 
> Simply defining
> 
> abstract AbstractExpr <: AbstractArray{Float64,2}
> 
> results in a method error: (Variable is defined as a subtype of
> AbstractExpr)
> 
> julia> kron(A,B)
> ERROR: MethodError: `kron` has no method matching kron(::Array{Float64,2},
> 
> ::Convex.Variable)
> 
> Closest candidates are:
>   kron(::Any, ::Any, ::Any, ::Any...)
>   kron{T,S}(::Array{T,2}, ::Array{S,2})
>   kron(::Union{Array{T,1},Array{T,2}}, ::Number)
> 
> 1) Is it possible to have Convex's AbstractExpr inherit from AbstractArray?
> What other methods would we need to implement to make this work?
> 2) Is it a terrible idea, for reasons I haven't thought of?
> 
> Thanks!
> Madeleine




Re: [julia-users] Trying to understand parametric types as function arguments

2016-07-15 Thread Tim Holy
julia> c = [Dict("a"=>1), Dict("b"=>2)]
2-element Array{Dict{ASCIIString,Int64},1}:
 Dict("a"=>1)
 Dict("b"=>2)

julia> myfunc(c)
ERROR: MethodError: `myfunc` has no method matching 
myfunc(::Array{Dict{ASCIIString,Int64},1})

Here, Julia is complaining that your argument is too-well typed :-). (See 
invariance/covariance/contravariance in the manual.)

The version I posted handles both of them.

Best,
--Tim

On Thursday, July 14, 2016 9:54:08 PM CDT 'Greg Plowman' via julia-users 
wrote:
> > What about the following?
> 
> myfunc(vec::Vector{Dict{ASCIIString}}) = 1
> a = [ Dict("a"=>1), Dict("b"=>1.0) ]
> b = [ Dict(1=>:hello), Dict(2=>:world) ]
> 
> 
> julia> myfunc(a)
> 1
> 
> julia> myfunc(b)
> ERROR: MethodError: `myfunc` has no method matching
> myfunc(::Array{Dict{Int64,Symbol},1})




Re: [julia-users] Trying to understand parametric types as function arguments

2016-07-14 Thread Tim Holy
Until we get "diagonal dispatch," I think the only way to do this is to expand 
the number of arguments in the function:

myfunc(vec::Vector) = _myfunc(eltype(vec), vec)  # internal function
_myfunc{D<:Dict{ASCIIString}}(::Type{D}, vec) = 1
_myfunc{D}(::Type{D}, vec) = 2

julia> myfunc([Dict("a"=>1), Dict("b"=>1.0)])
1

julia> myfunc([Dict(1=>:hello), Dict(2=>:world)])
2

Best,
--Tim

On Thursday, July 14, 2016 1:05:20 PM CDT Yichao Yu wrote:
> On Thu, Jul 14, 2016 at 12:41 PM, David Barton  wrote:
> > Hi,
> > 
> > I'm trying to write a function that accepts a vector of values that are of
> > the same (overall) type but with different parameterisations. As a simple
> > example, consider the vector = [Dict("a"=>1), Dict("b"=>1.0)]. I can
> > easily
> > use a function along the lines of
> > 
> > function myfunc(vec::Vector{Dict})
> > 
> >   # do something
> > 
> > end
> > 
> > but I'd like to be able to restrict the parameterisation slightly so that
> > the first parameter of the Dict type is an ASCIIString (not my actual use
> > case but follows exactly the same pattern). I've tried doing something
> > like
> > 
> > function myfunc{T}(vec::Vector{Dict{ASCIIString, T}})
> > 
> >   # do something
> > 
> > end
> > 
> > but this seems to enforce the condition that all the Dicts in the vector
> > have the same parametric type T (so my example of myfunc([Dict("a"=>1),
> > Dict("b"=>1.0)]) fails). Is there any way of expressing this constraint?
> > Or
> 
> The issue is the type of `[Dict("a"=>1), Dict("b"=>1.0)]` It has type
> `Vector{Dict{String}}` which cannot be matched by the signature you
> provide. I'm not sure if there's currently a clean way to be able to
> match this type in addition to the stricter types.
> 
> > will I just have to use the first form of myfunc with some extra checking
> > in the function body?
> > 
> > Thanks
> > David




Re: [julia-users] How to determine the type of a complex data structure

2016-07-12 Thread Tim Holy
What might be missing is to note the important difference between abstract and 
concrete types. Otherwise it might indeed be interpreted as advice to use 
curly-brackets in your type definitions.

Best,
--Tim

On Tuesday, July 12, 2016 12:36:28 PM CDT Yichao Yu wrote:
> On Tue, Jul 12, 2016 at 12:23 PM, Fred  wrote:
> > I tried that "change" because of this part of the documentation (in
> > http://docs.julialang.org/en/release-0.4/manual/performance-tips/) :
> > 
> > For example:
> > 
> > julia> type MyType{T<:AbstractFloat}
> > 
> >  a::T
> >
> >end
> > 
> > This is a better choice than
> > 
> > julia> type MyStillAmbiguousType
> > 
> >  a::AbstractFloat
> >
> >end
> 
> Please read the text around this example. In particular
> 
> > This allows a to be of any type. This can often be useful, but it does
> > have a downside: for objects of type MyAmbiguousType, the compiler will
> > not be able to generate high-performance code.
> > because the first version specifies the type of a from the type of the 
> > wrapper object. For example:
> It's the type used for the field that is important. Whether the type
> itself is parametrized or not has nothing to do with it. Type
> parameter only helps to make the type more generic (in case you want
> to use the type for other field types too)
> 
> Do note that,
> 1. your `d` is `Dict{Any,Any}` so accessing the value will be type unstable.
> 2. As Jeffrey mentioned, you can make the type immutable if you don't need
> to mutate it.
> 3. `typeof(1)` is `Int` and not necessarily `Int64`. It is `Int32` on 32bit.
> > Le mardi 12 juillet 2016 17:32:09 UTC+2, Yichao Yu a écrit :
> >> On Tue, Jul 12, 2016 at 11:10 AM, Fred  wrote:
> >> > I am referring to all 3, ie the type of Ions_frag :)
> >> 
> >> Well, can you quote the performance tip? There's nothing wrong with
> >> the type you are using in terms of performance so I'd like to know
> >> which part of the doc is misleading.




Re: [julia-users] A Very Simple Benchmark for Brutal-force Loops in Several Languages: Julia far from approaching C++ speed - any suggestions?

2016-07-11 Thread Tim Holy
The documentation page seems to be down at the moment, but when it's back up 
see the "performance tips" page.

In particular,
- never benchmark anything in global scope, always wrap it in a function
- always run once before you benchmark it (to force JIT compilation)
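Those two tips in action, on a hypothetical micro-benchmark (not the one from the attached PDF):

```julia
# Wrap the work in a function so the compiler sees concrete types.
function sumloop(n)
    s = 0.0
    for i in 1:n
        s += sqrt(i)
    end
    return s
end

sumloop(10)          # run once first so JIT compilation isn't timed
@time sumloop(10^7)  # now the measurement reflects only the loop

# Typing the same loop directly at the REPL prompt would mostly measure
# dynamic dispatch on the untyped global `s`, not the arithmetic.
```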

--Tim

On Sunday, July 10, 2016 9:19:13 PM CDT Zhong Pan wrote:
> Hi,
> 
> For work I really desired a language for numerical / data processing that's
> as easy and capable as Python but still fast running loops. That's why I am
> very excited when I noticed Julia. First allow me to me say, fabulous work!
> I really appreciate the effort of those who make open-source software great
> for everyone.
> 
> I did a simple & naive benchmark involving brute-force loops with some
> floating-point calculations on Windows (have to use Windows for work). I
> measured execution time in Python, Julia, VC++, C#.NET, Java, and Matlab.
> To my surprise, Julia is not only much slower than VC++ in this test, but
> it's much slower than C#.NET, Java, and even Matlab.
> 
> Please see details of the test in the attached PDF file. Could someone help
> me out here - did I make an amateur mistake or miss something? Was Julia
> slow because of Windows? Or was it just the loops that's slow (as this
> "benchmark" really didn't test anything else)? I really want to make Julia
> my go-to language in work, so any suggestion is appreciated.
> 
> Cheers.
> -Zhong




Re: [julia-users] Few questions about methods and sub()

2016-07-09 Thread Tim Holy
Thanks!
--Tim

On Saturday, July 9, 2016 6:40:36 AM CDT CrocoDuck O'Ducks wrote:
> Thanks!
> 
> I filed the issue right now. I will also have a deeper look at the source
> code <https://github.com/JuliaLang/julia/blob/master/base/dsp.jl>...
> although I am probably too much of a coding noob to figure out what to do
> (even with your precious hint).
> 
> On Saturday, 9 July 2016 14:15:29 UTC+1, Tim Holy wrote:
> > Looks like xcorr has a type-instability. You can see this from
> > 
> > x = rand(10 * 192000); u = rand(10 * 192000, 3);
> > su = sub(u, :, 1);
> > @code_warntype xcorr(x, su)
> > 
> > Would you mind filing an issue?
> > https://github.com/JuliaLang/julia/issues/new
> > Alternatively, if you feel up to fixing it, the hint is "don't reuse
> > variable names when the type might change."
> > 
> > Best,
> > --Tim
> > 
> > On Saturday, July 9, 2016 5:01:24 AM CDT CrocoDuck O'Ducks wrote:
> > > Hi there!
> > > 
> > > I am making a few experiments with simple methods. This method calculates
> > > the delay between two signals (mono-dimensional arrays) using xcorr():
> > > 
> > > function finddelay{T <: Real}(x::AbstractArray{T, 1}, u::AbstractArray{T, 1})
> > >     # Delay as lag between cross correlation from origin of time
> > >     sₓᵤ = xcorr(x, u)
> > >     ct_idx = cld(length(sₓᵤ), 2)
> > >     _, pk_idx = findmax(sₓᵤ, 1)
> > >     δ = ct_idx - pk_idx[1]
> > >     return δ
> > > end
> > > 
> > > Now, I would like to add a method that picks up a mono-dimensional array x,
> > > a bi-dimensional array u, and then calculates the delay between x and each
> > > of the columns of u. I tried this:
> > > 
> > > function finddelay{T <: Real}(x::AbstractArray{T, 1}, u::AbstractArray{T, 2})
> > >     nᵤ = size(u, 2)
> > >     δ = Array(Int, 1, nᵤ)
> > >     for s = 1:nᵤ
> > >         δ[s] = finddelay(x, u[:, s])
> > >     end
> > >     return δ
> > > end
> > > 
> > > It works well enough; here are the benchmarks:
> > > 
> > > x = rand(10 * 192000); u = rand(10 * 192000, 3)
> > > 
> > > @benchmark finddelay(x, u)
> > > Benchmark Results
> > > Time per evaluation: 1.78 s [1.65 s, 1.91 s]
> > > Proportion of time in GC: 13.60% [10.52%, 16.69%]
> > > Memory allocated: 878.94 mb
> > > Number of allocations: 718 allocations
> > > Number of samples: 4
> > > Number of evaluations: 4
> > > Time spent benchmarking: 9.21 s
> > > 
> > > However, by reading this cool book
> > > <https://www.packtpub.com/application-development/julia-high-performance>,
> > > it was suggested that using Array Views through sub() should reduce memory
> > > usage. So I tried it:
> > > 
> > > function finddelay{T <: Real}(x::AbstractArray{T, 1}, u::AbstractArray{T, 2})
> > >     nᵤ = size(u, 2)
> > >     δ = Array(Int, 1, nᵤ)
> > >     xs = sub(x, :, 1) # I need all arguments of xcorr() to be Views
> > >     for s = 1:nᵤ
> > >         us = sub(u, :, s)
> > >         δ[s] = finddelay(xs, us)
> > >     end
> > >     return δ
> > > end
> > > 
> > > Which benchmarks as follows (same input arrays):
> > > 
> > > @benchmark finddelay(x, u)
> > > Benchmark Results
> > > Time per evaluation: 2.44 s [2.41 s, 2.47 s]
> > > Proportion of time in GC: 12.55% [12.54%, 12.57%]
> > > Memory allocated: 1.07 gb
> > > Number of allocations: 17279216 allocations
> > > Number of samples: 3
> > > Number of evaluations: 3
> > > Time spent benchmarking: 9.93 s
> > > 
> > > I obtained the same benchmark by trying to copy the body of the first
> > > method in the loop of the second, adapting it to use with sub().
> > > 
> > > As such, I have a few questions:
> > > - First of all, is it good practice to nest methods like this?
> > > - Second, why is sub associated with a huge increase in allocations? Am
> > >   I doing it wrong? Or maybe it is xcorr() that needs values and cannot
> > >   work with references?




Re: [julia-users] Few questions about methods and sub()

2016-07-09 Thread Tim Holy
Looks like xcorr has a type-instability. You can see this from

x = rand(10 * 192000); u = rand(10 * 192000, 3);
su = sub(u, :, 1);
@code_warntype xcorr(x, su)

Would you mind filing an issue? https://github.com/JuliaLang/julia/issues/new
Alternatively, if you feel up to fixing it, the hint is "don't reuse variable 
names when the type might change."
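The reused-name pattern looks something like the following hypothetical sketch (not the actual xcorr code; `pad_unstable`/`pad_stable` are made-up names, and `sub` is the 0.4-era spelling of `view`):

```julia
# Unstable: if `u` arrives as a SubArray, the padded branch rebinds it
# to an Array, so inference gives `u` a Union type.
function pad_unstable(u, n)
    if n > 0
        u = [u; zeros(eltype(u), n)]   # the type of `u` changes here
    end
    return sum(u)
end

# Stable: bind the result to a new name; `upad` is an Array on both branches.
function pad_stable(u, n)
    upad = n > 0 ? [u; zeros(eltype(u), n)] : collect(u)
    return sum(upad)
end

x = rand(8, 2)
xs = sub(x, :, 1)                   # `view(x, :, 1)` on Julia >= 0.5
@code_warntype pad_unstable(xs, 4)  # should show a Union-typed `u`
@code_warntype pad_stable(xs, 4)
```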

Best,
--Tim

On Saturday, July 9, 2016 5:01:24 AM CDT CrocoDuck O'Ducks wrote:
> Hi there!
> 
> I am making a few experiments with simple methods. This method calculates the
> delay between two signals (mono-dimensional arrays) using xcorr():
> 
> function finddelay{T <: Real}(x::AbstractArray{T, 1}, u::AbstractArray{T, 1
> })
> 
> # Delay as lag between cross correlation from origin of time
> 
> sₓᵤ = xcorr(x, u)
> 
> ct_idx = cld(length(sₓᵤ), 2)
> 
> _, pk_idx = findmax(sₓᵤ, 1)
> 
> δ = ct_idx - pk_idx[1]
> 
> return δ
> 
> end
> 
> Now, I would like to add a method that picks up a mono-dimensional array x,
> a bi-dimensional array u, and then calculates the delay between x and each
> of the columns of u.  I tried this:
> 
> function finddelay{T <: Real}(x::AbstractArray{T, 1}, u::AbstractArray{T, 2
> })
> 
> nᵤ = size(u, 2)
> 
> δ = Array(Int, 1, nᵤ)
> 
> for s = 1:nᵤ
> δ[s] = finddelay(x, u[:, s])
> end
> 
> return δ
> 
> end
> 
> It works well enough; here are the benchmarks:
> 
> x = rand(10 * 192000); u = rand(10 * 192000, 3)
> 
> @benchmark finddelay(x, u)
>  Benchmark Results 
>  Time per evaluation: 1.78 s [1.65 s, 1.91 s]
> Proportion of time in GC: 13.60% [10.52%, 16.69%]
> Memory allocated: 878.94 mb
>Number of allocations: 718 allocations
>Number of samples: 4
>Number of evaluations: 4
>  Time spent benchmarking: 9.21 s
> 
> 
>  However, by reading this cool book,
> I was suggested that using Array Views through sub() should reduce memory
> usage. So I tried it:
> 
> function finddelay{T <: Real}(x::AbstractArray{T, 1}, u::AbstractArray{T, 2
> })
> 
> nᵤ = size(u, 2)
> 
> δ = Array(Int, 1, nᵤ)
> 
> xs = sub(x, :, 1) # I need all arguments of xcorr() to be Views
> 
> for s = 1:nᵤ
> 
> us = sub(u, :, s)
> 
> δ[s] = finddelay(xs, us)
> 
> end
> 
> return δ
> 
> end
> 
> Which benchmarks as follows (same input arrays):
> 
> @benchmark finddelay(x, u)
>  Benchmark Results 
>  Time per evaluation: 2.44 s [2.41 s, 2.47 s]
> Proportion of time in GC: 12.55% [12.54%, 12.57%]
> Memory allocated: 1.07 gb
>Number of allocations: 17279216 allocations
>Number of samples: 3
>Number of evaluations: 3
>  Time spent benchmarking: 9.93 s
> 
> 
> 
> I obtained the same benchmark by trying to copy the body of the first method in
> the loop of the second, adapting it to use with sub().
> 
> As such, I have a few questions:
> 
>- First of all, is it good practice to nest methods like this?
>- Second, why is sub associated with a huge increase in allocations? Am
>I doing it wrong? Or maybe it is xcorr() that needs values and cannot
> work with references?




Re: [julia-users] Ambiguity error when dispatching on Union types

2016-07-09 Thread Tim Holy
Ambiguities are often a bit tricky. Two tips I've adopted:

- Have as few methods as possible, and declare types on their arguments only 
when absolutely necessary. These measures greatly reduce your exposure to the 
risk of ambiguities. To achieve this, it sometimes takes a fair bit of thought 
to design your API.

- Use one layer of indirection per argument you want to specialize. There are 
a couple of ways to pull this off, and the best way to do it usually depends 
on the specific goal you're trying to achieve. But here one approach might be 
to decide that `f` will only be specialized on argument 2, and will otherwise 
dispatch to `f1` (which you can specialize for argument 1).

f(a, b) = f1(a, b)
f(a, b::A) = "from A"
f1(a::Int, b) = "Int"
f1(a, b) = "other"
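To see how the two stages resolve, here are some hypothetical calls, assuming the `A` and `B` definitions from the original post:

```julia
abstract MyType
type A <: MyType end
type B <: MyType end

f(a, b) = f1(a, b)      # stage 1: specialize only on argument 2 here
f(a, b::A) = "from A"
f1(a::Int, b) = "Int"   # stage 2: specialize only on argument 1 here
f1(a, b) = "other"

f(1, A())    # "from A": argument 2 decides, no ambiguity with f1
f(1, B())    # "Int": falls through to f1, which sees the Int
f(B(), B())  # "other"
```

Because each layer constrains only one argument, no pair of methods has an ambiguous intersection.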

Best,
--Tim

On Friday, July 8, 2016 8:27:30 PM CDT Darwin Darakananda wrote:
> Is there a recommended way to getting around that?  The example above had a
> union of only two types, but in the actual code I'm working on there are a
> couple more.  Would I have to copying the code over and over with just
> small changes to the type signature? I guess you could use a macro to
> splice the types in.
> 
> On Friday, July 8, 2016 at 7:58:02 PM UTC-7, Yichao Yu wrote:
> > On Fri, Jul 8, 2016 at 10:32 PM, Darwin Darakananda wrote:
> > > Hi everyone,
> > > 
> > > I have some code where multiple types share the same implementation of a
> > > method, for example:
> > > 
> > > abstract MyType
> > > 
> > > 
> > > type A <: MyType end
> > > type B <: MyType end
> > > 
> > > 
> > > f(target::MyType, source::MyType) = "fallback"
> > > 
> > > 
> > > f(target::Int,source::A) = "from A"
> > > f(target::MyType, source::A) = "from A"
> > > 
> > > a = A()
> > > b = B()
> > > 
> > > f(b, b) # fallback
> > > f(b, a) # from A
> > > f(a, a) # from A
> > > 
> > > I was hoping that I could replace the "from A" function using a union
> > > type, but I'm running into ambiguity errors:
> > > 
> > > f(target::Union{Int, MyType}, source::A) = "from A"
> > > 
> > > f(b, b) # fallback
> > > f(b, a) # Ambiguity error
> > > f(a, a) # Ambiguity error
> > > 
> > > Is this an expected behavior?
> > 
> > Yes.
> > 
> > > I thought that (::Union{Int, MyType}, ::A)
> > > would be a more specific match to (::B, ::A) than (::MyType, ::MyType).
> > 
> > There's basically nothing as a "more specific match". The two methods
> > are ambiguous and anything in their intersection cannot be dispatched
> > to either of them.
> > 
> > > Any ideas/suggestions?
> > > 
> > > Thanks,
> > > 
> > > Darwin




Re: [julia-users] When Julia v1.0 will be released?

2016-07-08 Thread Tim Holy
On Friday, July 8, 2016 6:43:36 PM CDT Daniel Carrera wrote:
> For that matter, will there be upper case functions for every
> concrete type? ... I'm just curious. I wouldn't actually use that feature.

Yes, it's just the constructor. In most cases you don't have to define them 
manually, they are created automatically.

--Tim



Re: [julia-users] how to tell if a jld file contains a dataset?

2016-07-08 Thread Tim Holy
`has` or `exists`
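With JLD.jl that test looks roughly like this (a sketch; I believe `exists(handle, name)` is the exported form, but check the JLD docs for your version):

```julia
using JLD

jldopen("results.jld", "r") do handle
    if exists(handle, "foo")        # true if the file has a dataset "foo"
        foo = read(handle, "foo")
        println("foo = ", foo)
    else
        println("no dataset named \"foo\" in this file")
    end
end
```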

--Tim

On Friday, July 8, 2016 8:01:40 AM CDT Davide Lasagna wrote:
> I have read the available documentation but I cannot seem to get it.
> 
> How do I test whether an existing .jld file, opened  as
> 
> jldopen(filename, "r") do handle
> # test whether handle contains the dataset "foo"
> end
> 
> contains a dataset, given its name as a string, e.g. "foo"?
> 
> Thanks!
> 
> Davide




Re: [julia-users] When Julia v1.0 will be released?

2016-07-08 Thread Tim Holy
The string unification is already in julia-0.5.

There are functions called String(), Int(), and Float64(). In some cases there 
are lowercase variants, and these often "do more" (e.g., `float` will parse a 
string and return an AbstractFloat). The uppercase versions are the minimalist 
type-conversion forms.

Int isn't an alias for Int64: it's an alias for either Int32 or Int64, 
depending on whether you have a 32-bit or 64-bit computer. There's no 
analogous issue for Float32/Float64 (these are not CPU-dependent types), which 
is why Float won't become an alias for one of them.
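A quick check in the REPL shows the build-dependent alias:

```julia
Sys.WORD_SIZE        # 64 on a 64-bit build, 32 on a 32-bit build
Int === Int64        # true on 64-bit, false on 32-bit
typeof(1) === Int    # always true: integer literals default to Int
```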

So I think your list is as done as it's going to get :-).

--Tim

On Friday, July 8, 2016 7:02:19 AM CDT Daniel Carrera wrote:
> This is just me, but I prefer to wait a bit longer than to get mistakes
> frozen into the language. One bit that I care about is the names of some
> types and functions. For example, right now we have
> 
> - Base.String
> - Base.ASCIIString
> - Base.UTF8String
> - Base.AbstractString
> 
> So, I want to use "String" in my code but right now it's deprecated, and
> the others look horrible. My understanding is that this is still in flux
> and in the future there will be a sane "String" type that people can
> default to without getting errors... I would very much like to see that
> implemented and working before Julia is frozen.
> 
> I also think that the type-related functions in Julia are inconsistent. I
> think there should be functions called string(), int(), and float() that
> return a String, Int64, and Float64. I don't think that the Julia
> developers agree with me.
> 
> Oh, and I think that Float should be an alias for Float64 just like Int is
> an alias for Int64.
> 
> So... there are some inconsistencies in Julia and I prefer to wait in the
> hope that some of these might be ironed out before they become hard-coded
> into the language.
> 
> Cheers,
> Daniel.
> 
> On Thursday, 7 July 2016 16:47:28 UTC+2, Isaiah wrote:
> > I knew that.
> > 
> > 
> > The goal is 2017, if the development community considers it to be ready.
> > 
> > I don't mean to be too glib, but I fail to see how any answer is
> > particularly actionable; it is certainly not binding.
> > 
> > On Thursday, July 7, 2016 at 10:14:24 AM UTC-4, Isaiah wrote:
> >>> When it is ready.
> >>> 
> >>> On Thu, Jul 7, 2016 at 10:07 AM, Hisham Assi  wrote:
>  I really like Julia (I am using it for my publications & thesis), but I
>  noticed that the versions are not really backward compatible. I am
>  still ok
>  with that, but  many other people are waiting for the mature, stable
>  version  (1.0) to start using Julia. So, when Julia v1.0 will be
>  released?




Re: [julia-users] Re: Converting from ColorTypes to Tuple

2016-07-07 Thread Tim Holy
On Thursday, July 7, 2016 4:41:10 PM CDT Islam Badreldin wrote:
> Maybe
> this means PyPlot.jl needs to add better support for ColorTypes?

That sounds like a very reasonable solution. I don't really know PyPlot at 
all, so I don't have any advice to offer, but given how well you seem to 
understand things already it seems that matters are in excellent hands :-).

Best,
--Tim



Re: [julia-users] Re: Converting from ColorTypes to Tuple

2016-07-05 Thread Tim Holy
Well, you've just written that convenience function yourself :-). It's not 
more complicated than that.

The reason there isn't an exported function is because the general case is not 
quite as simple as it may seem. For example, BGR and RGB both mean "RGB 
colors," but with different internal storage order. Which do you care about, 
storage order or "meaning"? When you access a value as `red(col)`, you're 
always getting the red channel no matter how it's stored, whereas `comp1(col)` 
might give you the red channel or might give you the blue channel. So 
"converting to a tuple" might mean different things to different people ("red 
first" or "component 1 first"?), which is why there's no general 
function---it's 
so simple to write, you might as well write it yourself, because you know what 
you want it to mean.

When you're dealing with arrays, you can always use `reinterpret` to force an 
interpretation.
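A short sketch of the "meaning" interpretation (using ColorTypes; `rgbtuple` is the user's own convenience function, not part of the package):

```julia
using ColorTypes

col = RGB(0.894, 0.102, 0.110)

# By meaning: always (red, green, blue), however the color is stored.
rgbtuple(c) = (red(c), green(c), blue(c))
t = rgbtuple(col)   # (0.894, 0.102, 0.11)

# `red` keeps its meaning even after changing the storage order,
# whereas comp1/comp2/comp3 follow the internal field layout.
bgr = convert(BGR, col)
red(bgr) == red(col)   # true
```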

Best,
--Tim

On Tuesday, July 5, 2016 9:59:00 AM CDT Islam Badreldin wrote:
> Forgot to add `using ColorTypes` to the top of the minimal example.
> 
>   -Islam
> 
> On Tuesday, July 5, 2016 at 12:55:37 PM UTC-4, Islam Badreldin wrote:
> > Hello,
> > 
> > I was experimenting with plotting using PyPlot and I wanted to create line
> > plots with custom colors. I wanted to use a color palette from
> > ColorBrewer.jl, which returns the colors as an array of ColorTypes colors.
> > I wanted to convert the returned colors to a Tuple, e.g.
> > (0.894,0.102,0.11), in order to pass it to PyPlot. The following snippet
> > works for me,
> > 
> > 
> > using PyPlot
> > 
> > using ColorBrewer
> > 
> > myc=palette("Set1",9)
> > 
> > for i in 1:9 plot(rand(100),c=(comp1(myc[i]),comp2(myc[i]),comp3(myc[i])))
> > end
> > 
> > 
> > My question is, is there a convenience function that can directly give me
> > the desired tuple, without having to manually construct it using
> > (comp1,comp2,comp3)?
> > 
> > Thanks,
> > Islam




Re: [julia-users] Re: How to profile a module?

2016-07-05 Thread Tim Holy
Ah. Can't you just run specific lines from that test file? You could even copy 
it and then delete the irrelevant lines using an editor, if you need to run 
many tests.

--Tim

On Tuesday, July 5, 2016 7:03:07 AM CDT chobbes...@gmail.com wrote:
> Tim, Thanks.
> 
> The problem is this: I wrote a module A and made a test to test module A.
> But both of module A and the test itself need to call another module B
> frequently. I want to see how A is calling B and don't care how the test is
> calling B. That's why I want to separate the profiling of module A from
> that of both. This way I can easily the profiling via
> 
> julia> Profile.print(format=:flat, sortedby=:count)
> 
> Sorry for my dumb question.
> 
> On Tuesday, July 5, 2016 at 12:49:37 PM UTC+1, Tim Holy wrote:
> > I don't know what "profiling a module" means. You profile running code,
> > wherever
> > that code happens to live---and that's all there is to say. To profile the
> > code
> > in a module, you just have to write code that exercises the functions in
> > the
> > module.
> > 
> > The meaning of the numbers is described here:
> > http://docs.julialang.org/en/stable/manual/profile/#basic-usage
> > The key words are "sampling profiler," the meaning of which is described
> > at the
> > top of that page (and see the wikipedia link). The number of samples is
> > approximately proportional to the cost of the line (or its descendents).
> > 
> > Best,
> > --Tim
> > 
> > On Tuesday, July 5, 2016 4:40:32 AM CDT chobb...@gmail.com 
> > 
> > wrote:
> > > Bump up.
> > > 
> > > On Monday, July 4, 2016 at 4:33:53 PM UTC+1, chobb...@gmail.com wrote:
> > > > I want to profile a module which is tested by a test. Following the
> > > > documentation (
> > > > http://docs.julialang.org/en/release-0.4/manual/profile/#options-for-controlling-the-display-of-profile-results
> > > > ), I know how to profile them (module + test) together:
> > > > 
> > > > @profile include("test.jl")
> > > > 
> > > > 
> > > > But I have no idea for how to do the profiling only for the module.
> > > > 
> > > > A second question is about the first number of each line in the
> > > > profiler's output. For example, the output from Julia Documentation:
> > > > 
> > > > julia> Profile.print()
> > > > 
> > > >   23 client.jl; _start; line: 373
> > > >   
> > > > 23 client.jl; run_repl; line: 166
> > > > 
> > > >23 client.jl; eval_user_input; line: 91
> > > >
> > > >   23 profile.jl; anonymous; line: 14
> > > >   
> > > >  8  none; myfunc; line: 2
> > > >  
> > > >   8 dSFMT.jl; dsfmt_gv_fill_array_close_open!; line: 128
> > 
> > > >  15 none; myfunc; line: 3
> > > >  
> > > >   2  reduce.jl; max; line: 35
> > > >   2  reduce.jl; max; line: 36
> > > >   11 reduce.jl; max; line: 37
> > > > 
> > > > Is it appropriate to interpret the numbers 23, 8, 15, etc. as the number
> > > > of times the line was run, or as the (relative) time spent on that
> > > > line?
> > > > 
> > > > 
> > > > I searched the group and there are no threads with a similar topic. Any
> > > > comments? Thanks!




Re: [julia-users] Re: How to profile a module?

2016-07-05 Thread Tim Holy
I don't know what "profiling a module" means. You profile running code, 
wherever 
that code happens to live---and that's all there is to say. To profile the code 
in a module, you just have to write code that exercises the functions in the 
module.

The meaning of the numbers is described here:
http://docs.julialang.org/en/stable/manual/profile/#basic-usage
The key words are "sampling profiler," the meaning of which is described at the 
top of that page (and see the wikipedia link). The number of samples is 
approximately proportional to the cost of the line (or its descendents).

Best,
--Tim

On Tuesday, July 5, 2016 4:40:32 AM CDT chobbes...@gmail.com wrote:
> Bump up.
> 
> On Monday, July 4, 2016 at 4:33:53 PM UTC+1, chobb...@gmail.com wrote:
> > I want to profile a module which is tested by a test. Following the
> > documentation (
> > http://docs.julialang.org/en/release-0.4/manual/profile/#options-for-controlling-the-display-of-profile-results
> > ), I know how to profile them (module
> > + test) together:
> > 
> > @profile include("test.jl")
> > 
> > 
> > But I have no idea for how to do the profiling only for the module.
> > 
> > A second question is about the first number of each line in the
> > profiler's output. For example, the output from Julia Documentation:
> > 
> > julia> Profile.print()
> > 
> >   23 client.jl; _start; line: 373
> >   
> > 23 client.jl; run_repl; line: 166
> > 
> >23 client.jl; eval_user_input; line: 91
> >
> >   23 profile.jl; anonymous; line: 14
> >   
> >  8  none; myfunc; line: 2
> >  
> >   8 dSFMT.jl; dsfmt_gv_fill_array_close_open!; line: 128
> >  
> >  15 none; myfunc; line: 3
> >  
> >   2  reduce.jl; max; line: 35
> >   2  reduce.jl; max; line: 36
> >   11 reduce.jl; max; line: 37
> > 
> > Is it appropriate to interpret the numbers 23, 8, 15, etc. as the number
> > of times the line was run, or as the (relative) time spent on that
> > line?
> > 
> > 
> > I searched the group and there are no threads with a similar topic. Any
> > comments? Thanks!




Re: [julia-users] How to store tuples in different jld files

2016-07-03 Thread Tim Holy
Duplicated in 
http://stackoverflow.com/questions/38147946/julia-how-to-store-tuples-in-different-files

--Tim

On Friday, July 1, 2016 7:48:34 AM CDT Ahmed Mazari wrote:
> Hello,
> 
> I want to store each tuple X[p] in a different file:
> X[1] in mini_batch1.jld, X[2] in mini_batch2.jld, and so on.
> But my code below stores (duplicates) all the tuples of X in every file
> created.
> 
> 
> 
> let's see an example :
> 
> m = 100; k = 3 # number of tuples or partitions
> y=rand_gen(m,k)
> 
> (3,[[-1.0,1.0,1.0,1.0,-1.0,-1.0,1.0,-1.0,1.0,1.0,-1.0,-1.0,
> 1.0,1.0,-1.0,-1.0,-1.0,1.0,-1.0,1.0,1.0,1.0,1.0,-1.0,-1.0,-
> 1.0,1.0,1.0,1.0,-1.0,1.0,-1.0,1.0,-1.0],[1.0,-1.0,-1.0,-1.0,
> 1.0,1.0,1.0,1.0,-1.0,1.0,-1.0,1.0,-1.0,1.0,1.0,1.0,-1.0,-1.
> 0,1.0,1.0,-1.0,1.0,-1.0,1.0,-1.0,1.0,-1.0,1.0,1.0,-1.0,1.0,
> -1.0,-1.0],[1.0,-1.0,-1.0,1.0,1.0,-1.0,-1.0,-1.0,1.0,-1.0,1.
> 0,1.0,-1.0,-1.0,-1.0,-1.0,-1.0,1.0,1.0,-1.0,-1.0,1.0,1.0,1.
> 0,1.0,1.0,-1.0,-1.0,-1.0,1.0,1.0,-1.0,1.0]])
> 
> I want to have in mini_batch1 the first tuple
> [-1.0,1.0,1.0,1.0,-1.0,-1.0,1.0,-1.0,1.0,1.0,-1.0,-1.0,1.0,
> 1.0,-1.0,-1.0,-1.0,1.0,-1.0,1.0,1.0,1.0,1.0,-1.0,-1.0,-1.0,
> 1.0,1.0,1.0,-1.0,1.0,-1.0,1.0,-1.0]
> 
> 
> mini_batch2 the second tuple [1.0,-1.0,-1.0,-1.0,1.0,1.0,1.
> 0,1.0,-1.0,1.0,-1.0,1.0,-1.0,1.0,1.0,1.0,-1.0,-1.0,1.0,1.0,
> -1.0,1.0,-1.0,1.0,-1.0,1.0,-1.0,1.0,1.0,-1.0,1.0,-1.0,-1.0]
> 
> and so on.
> 
> However my code does the job of creating the mini_batch files but fails to
> store the tuples one by one. How can I fix that?
> workspace()
> using JLD, HDF5
> function gen_random(m,k)
> 
> # m is the length of the vector, for instance m=10, and k the number of
> # partitions; let's set k=16
> 
> s = rand(m)
> # Pkg.add("JLD"), Pkg.add("HDF5") these two packages are needed in order to
> # store our vectors in files under the extension jld
> 
>  # allow to convert each random number to -1 or 1
> 
> X=float_to_binary(s)
> parts= kfoldperm(length(X),k)
> # I want to store each tuple X[p] in a different file:
> # X[1] in mini_batch1.jld, X[2] in mini_batch2.jld, and so on,
> # but my code below stores all the tuples X[p] in every file created.
> for p in 1:length(parts)
> file =jldopen(@sprintf("my path to file/mini_batch%d.jld", p),"w")
> write(file, "X", [X[p] for p in parts])
> close(file)
> end
> return [X[p] for p in parts]
> 
> function float_to_binary(s,level=0.4)
>  for i=1:length(s)
>  s[i] = s[i] > level ? 1.0 : -1.0
>  end
>  file = jldopen("/my path/mydata.jld", "w")
>  write(file, "s", s) # alternatively, say "@write file A"
>  close(file)
>  return s
>  end
> 
> 
>  function kfoldperm(l,k)
>  n,r = divrem(l,k)
>  b = collect(1:n:l+1)
>  for i in 1:length(b)
>  b[i] += i > r ? r : i-1
>  end
>  p = randperm(l)
>  return [p[r] for r in [b[i]:b[i+1]-1 for i=1:k]]
> 
> 
>  end
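For the record, the loop above writes the whole list `[X[p] for p in parts]` into every file. A minimal sketch of the intended behavior (same hypothetical filenames; `@sprintf` lives in Base in the 0.4 era, in `Printf` on modern Julia) writes only the p-th partition into its file:

```julia
using JLD

X = rand(100)                          # the full data vector
parts = Any[1:33, 34:66, 67:100]       # partitions (e.g. from kfoldperm)

# Each file gets only its own partition of X.
for p = 1:length(parts)
    file = jldopen(@sprintf("mini_batch%d.jld", p), "w")
    write(file, "X", X[parts[p]])      # partition p only, not all of them
    close(file)
end
```

The fix is simply indexing `X[parts[p]]` with the loop variable instead of rebuilding the full comprehension inside the loop.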




Re: [julia-users] PriorityQueue for fast large network simulation

2016-07-01 Thread Tim Holy
I meant they should be updated on every step, but rather than update the 
priority of each neuron one-by-one, perhaps you could build a heap out of the 
to-be-updated neurons and then perform a "heap-merge."

http://link.springer.com/article/10.1007%2FBF00264229

Best,
--Tim

On Friday, July 1, 2016 5:27:03 PM CDT Rainer J. Engelken wrote:
> 2016-07-01 11:41 GMT+02:00 Tim Holy :
> > My guess is that you could do better by doing a "batch update" of the
> > queue,
> > so that you don't rebalance the heap each time.
> > 
> @Tim, thanks for responding; maybe I didn't get your idea. How does
> changing the priority of x*k keys every xth step make it faster than
> changing the priority of k keys every step? Both have an average cost of
> k*log(n) per iteration, where n is the total number of keys, right?
> 
> I tried something in this spirit, using an array for phi instead of a
> PriorityQueue, but instead of using findmax(phi) in every step on the whole
> array, I did TimSort every xth step on the whole (still partially sorted)
> array and findmax() on the first x+s sorted elements, where s is a safety
> margin, as some elements of phi might be pushed outside 1:x.
> The PriorityQueue turned out to be faster for large sparse networks
> (n=10^8, k=10^2), although quite some time is spent on dictionary
> bookkeeping (e.g. getindex and ht_keyindex in dict.jl).
> Is there a way to avoid this bookkeeping for a priority queue with integer
> keys?
> Best
> Rainer
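A rough sketch of the batch idea (using the 0.4-era `Base.Collections` heap API; in later Julia these functions live in DataStructures.jl). It ignores the stale-entry bookkeeping a real priority queue needs, and whether the batch version wins depends on how large k is relative to n/log(n):

```julia
import Base.Collections: heapify!, heappush!

# One-by-one: k pushes, each rebalancing the heap in O(log n).
function push_onebyone!(h, xs)
    for x in xs
        heappush!(h, x)
    end
    h
end

# Batch: append all k new entries, then restore the heap
# property with a single O(n) heapify! pass.
function push_batch!(h, xs)
    append!(h, xs)
    heapify!(h)
    h
end
```

For k = 10^2 updates on n = 10^8 keys, one-by-one is asymptotically cheaper; the batch/merge approach pays off only when k grows toward n/log(n), which is why Tim's follow-up suggests merging a small heap of updates rather than rebuilding.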




Re: [julia-users] Re: A naive benchmark

2016-07-01 Thread Tim Holy
Read about cache here:
http://julialang.org/blog/2013/09/fast-numeric
and add @inbounds @simd in front of the inner loop.
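As a sketch, applied to a simple reduction (the annotations go on the innermost loop):

```julia
# @inbounds drops the per-access bounds checks; @simd tells the
# compiler the loop iterations may be reordered/vectorized.
function sumsq(a::Vector{Float64})
    s = 0.0
    @inbounds @simd for i = 1:length(a)
        s += a[i] * a[i]
    end
    s
end
```

Note that `@simd` licenses floating-point reassociation, so results can differ in the last bits from the strictly sequential loop.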

--Tim

On Friday, July 1, 2016 8:10:33 AM CDT baillot maxime wrote:
> Thank you, but don't worry: in general I run it 3 or 4 times before
> looking at the time :)
> 
> I think the problem in my main Julia code (not the throwaway code I wrote
> for the benchmark) comes from the BLAS library. It looks much slower than
> the library MATLAB uses for element-wise matrix multiplication.
> 
> On Friday, July 1, 2016 at 4:53:01 PM UTC+2, Andre Bieler wrote:
> > Also remember that the first time you execute a function you should not
> > @time it because
> > then it also measures the compilation time. You can test this by doing
> > 
> > @time test(m,nbp)
> > @time test(m,nbp)
> > @time test(m,nbp)
> > 
> > and probably see that the 2nd and 3rd time the timing is lower
> > than for the first one.
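The warm-up effect is easy to demonstrate with any freshly defined function:

```julia
# The first call triggers JIT compilation; later calls measure
# only execution time.
f(x) = sum(x) / length(x)
x = rand(10^6)
@time f(x)   # includes compilation time
@time f(x)   # steady-state timing
@time f(x)
```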




Re: [julia-users] PriorityQueue for fast large network simulation

2016-07-01 Thread Tim Holy
My guess is that you could do better by doing a "batch update" of the queue, 
so that you don't rebalance the heap each time.
Best,
--Tim

On Friday, July 1, 2016 12:43:48 AM CDT Rainer J. Engelken wrote:
> Hi,
> I am trying to implement a fast event-based numerically exact simulation of
> a sparse large spiking neural network using a priority queue. It is fast,
> but not fast enough. Profiling indicates that the bottleneck seems to be the
> dictionary operations keyindex and setindex! when changing the priority of an
> existing key (3rd line of function ptc! in the code below). Is there a way to
> avoid this? Is it possible to set up a priority queue with integer keys
> instead? More generally, is there a smarter data structure than a priority
> queue, if I want to find the smallest element of a large array and change
> a small fraction of the array entries (say 10^2 out of 10^8) each iteration?
> 
> Best regards
> Rainer
> 
> ## For those interested in the implementation: we map the membrane
> potential to a phase variable phi \in [-1, inf) and solve the network
> evolution of a purely inhibitory homogeneous pulse-coupled leaky
> integrate-and-fire network analytically between network spikes. This is
> just a toy example; some important steps are missing.
> 
> # code ###
> 
> function ptc!(phi, postid, phishift, w, c) #phase transition curve
> @inbounds for i = postid
> phi[i] = w*log(exp((phi[i]-phishift)/w)+c) + phishift
> end
> end
> 
> function lifnet(n,nstep,k,j0,ratewnt,tau,seedic,seedtopo)
> iext = tau*sqrt(k)*j0*ratewnt # iext given by balance equation
> w = 1/log(1. + 1/iext) # phase velocity LIF
> c = j0/sqrt(k)/(1. + iext) # effective coupling in PTC for LIF
> phith, phireset = 1., 0. # reset and threshold for LIF
> 
> srand(seedic) # initialize random number generator (Mersenne Twister)
> phi = Collections.PriorityQueue(UInt32,Float64)
> @inbounds for i = 1:n #set initial phases for all neurons
> phi[i] = -rand()
> end
> 
> spikeidx, spiketimes = Int64[], Float64[] # initialize time & spike
> raster
> postidx = Array{UInt32,1}(k) # indices of postsynaptic neurons
> t = phishift = 0. # initial time and initial global phase shift
> for s = 1 : nstep #
> j, phimax = Collections.peek(phi) # next spiking neuron
> dphi = phith+phimax-phishift   # calculate next spike time
> phishift += dphi # global backshift instead of shifting all phases
> t += dphi  # update time
> state = UInt(j+seedtopo)  # spiking neuron index is seed of rng
> @inbounds for i = 1:k # idea by R.Rosenbaum to reduce memory
> postidx[i], state = randfast(n,state) # get postsyn. neurons
> end
> 
> ptc!(phi,postidx,phishift,w,c) # evaluate PTC
> phi[j] = phireset + phishift# reset spiking neuron to 0
> push!(spiketimes,t) # store spiketimes
> push!(spikeidx,j) # store spiking neuron
> end
> nstep/t/n/tau*w, spikeidx, spiketimes*tau/w
> end
> 
> function randfast(m, state::UInt) # linear congruential generator
> state = lcg(state)   # faster than the Mersenne Twister
> 1 + rem(state, m) % Int, state   # but can be problematic
> end
> 
> function lcg(oldstate) # tinyurl.com/fastrand by M. Schauer
> a = unsigned(2862933555777941757)
> b = unsigned(3037000493)
> a*oldstate + b
> end
> 
> #n: # of neurons, k: synapses/neuron, j0: syn. strength, tau: membr. time
> const.
> n,nstep,k,j0,ratewnt,tau,seedic,seedtopo = 10^6,10^4,10^2,1,1,.01,1,1
> @time lifnet(100, 1, 10, j0, ratewnt, tau, seedic, seedtopo); #warmup
> gc()
> @time rate,spidx,sptimes =
> lifnet(n,nstep,k,j0,ratewnt,tau,seedic,seedtopo);
> #using PyPlot;plot(sptimes,spidx,".k");ylabel("Neuron Index");xlabel("Time
> (s)")




Re: [julia-users] Re: how to hide deprecated warning ?

2016-06-30 Thread Tim Holy
To translate:

[a] concatenation is deprecated; use collect(a) instead
 in depwarn at ./deprecated.jl:73
while loading In[2], in expression starting on line 9

The first line implies you have some object `a` and you're putting brackets 
around it, `[a]`. Julia is telling you to switch to `collect(a)`. The reason 
it makes up a name, `a`, is because a function can't know the name (if any) 
you've given to variables in the caller (in this case, 0:0.01:50). 

The second line points out where the generic machinery for issuing deprecation 
warnings is; in this case, that's not very useful information, so you can 
basically ignore it.

The third line tells you the warning was triggered on the 9th line of input 
(i.e., where in your code you should look for the problem).
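In the code quoted below, that 9th line is `t = float([0:0.01:50]);`, and the brackets around the range are what trigger the warning. A sketch of the replacement:

```julia
# Deprecated 0.4-era spelling: brackets around a range concatenate it
# t = float([0:0.01:50])

# Preferred spelling: materialize the range explicitly
t = collect(0.0:0.01:50.0)
```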

In general, these "backtraces" show you the chain of calls that triggered the 
warning or error; learning how to read them is well worth the investment of 
effort, because it will save you an almost-infinite amount of time in the 
future. Naturally, if you come up with clearer phrasing, please do suggest 
improvements.

Best,
--Tim

On Thursday, June 30, 2016 8:12:06 AM CDT Henri Girard wrote:
> Thanks. Well I don't really understand what they mean, sure it would be
> better.
> 
> Le jeudi 30 juin 2016 16:20:49 UTC+2, Henri Girard a écrit :
> > I am not a developer, so I can't help correct
> > these warnings (unfortunately), but that's
> > quite annoying if one wants to make a clean
> > worksheet; this one is not too big, but
> > sometimes it's half a page!
> > 
> > 
> > using ODE,Plots
> > pyplot(size=(300,200),leg=false,
> > guidefont=font(7), titlefont=font(7));
> > function oscillator(t, y)
> > 
> > y[2] = - 3* + y[1] - y[2] / 10
> > 
> > end
> > oscillator(t, y) = [y[2], - 3* + y[1] - y[2] / 10]
> > initial = [1.0,0.0];   # Initial conditions
> > t = float([0:0.01:50]);
> > t,y = ode23(oscillator, initial, t)
> > y=hcat(y...).';
> > 
> > WARNING: [a] concatenation is deprecated; use collect(a) instead
> > 
> >  in depwarn at ./deprecated.jl:73
> > 
> > while loading In[2], in expression starting on line 9




[julia-users] Julia community stewards

2016-06-29 Thread Tim Holy
Hi everyone,

I'm writing to call your attention to a new page on the julialang website, 
explaining the mission of a newly-created group called "Julia Stewards":
http://julialang.org/community/stewards/

This is a group that we hope will have very little to do, but which seems 
important given the growth (present and future) of the Julia community. As 
stated in the first paragraph: 

The spirit of community is crucial to free/open-source development. In the
large majority of cases, interactions are naturally constructive, but in
rare cases conflict can arise. This document explains the mechanisms for
conflict reporting and resolution within the Julia community, to handle
violations of the Julia Community Standards.

Naturally we welcome questions, discussion, and feedback on this document, 
either here in the mailing lists or in "new issues" at:
https://github.com/JuliaLang/julialang.github.com/issues

Best wishes,
--your Julia Stewards



Re: [julia-users] Row/column iterators?

2016-06-27 Thread Tim Holy
In Base there's mapslices, which may already do what you want.

Something like what you're proposing (but not yet with a nice "high level" API 
wrapper, since the low-level stuff is still in flux) is already here:
https://github.com/timholy/ArrayIteration.jl
That's aiming at a very general approach that should be efficient for a wide 
variety of AbstractArrays (dense, sparse, perhaps even distributed, etc). I 
bet you don't need so much generality?

Assuming you'd be happy with something that works well for just dense 
matrices, your `view(mat, :, i)` idea should work just fine. Given that 
julia-0.5 is just around the corner, my assumption is that it would be best to 
put it in a package first, and think about adding to Base later. (But I'm not 
opposed to sticking it in now, either.)
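A sketch of the proposed iterator (`columns` is a hypothetical name, using the 0.5 generator syntax; on 0.4 a comprehension of views gives an eager equivalent):

```julia
# Lazy iterator over column views; nothing is copied.
columns(A::AbstractMatrix) = (view(A, :, j) for j = 1:size(A, 2))

mat = rand(100, 4)
total = 0.0
for c in columns(mat)
    total += sum(c)    # c is a SubArray view of one column
end
```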

Best,
--Tim

On Monday, June 27, 2016 9:42:33 AM CDT Tom Breloff wrote:
> I find myself frequently wanting to do something like:
> 
> mat = rand(100,4)
> for c in columns(mat)
> # c is a vector (a view maybe?)
> end
> 
> I think ideally this method would return a lazy iterator which returns a
> `view(mat, :, i)` at each step.  Does this exist already?  If not, would it
> be welcomed in Base?
> 
> -Tom




Re: [julia-users] Efficient iteration for tensor dot product - alternatives to splatting

2016-06-26 Thread Tim Holy
I'd try TensorOperations.jl or AxisAlgorithms.jl

Best,
--Tim

On Sunday, June 26, 2016 4:24:17 AM CDT Alex Williams wrote:
> I'm trying to code an efficient implementation of the n-mode tensor
> product. Basically, this amounts to taking a dot product between a vector
> (b) and the mode-n fibers of a higher-order Array. For example, the mode-3
> product of a 4th order array is:
> 
> ```
> C[i,j,k] = dot(A[i,j,:,k], b)
> ```
> 
> After iterating over all (i,j,k). Doing this for any arbitrary `A` and `n`
> can be achieved by the function below. But I believe it is slower than
> necessary due to the use of the splatting operator (`...`). Any ideas for
> making this faster?
> 
> ```julia
> """
> B = tensordot(A, b, n)
> Multiply N-dimensional array A by vector b along mode n. For inputs
> size(A) = (I_1,...,I_n,...,I_N), and length(b) = I_n, the output is
> a (N-1)-dimensional array with size(B) = (I_1,...,I_n-1,I_n+1,...,I_N)
> formed by taking the vector dot product of b and the mode-n fibers
> of B.
> """
> function tensordot{T,N}(A::Array{T,N}, b::Vector{T}, n::Integer)
> 
> I = size(A)
> @assert I[n] == length(b)
> D = I[setdiff(1:N,n)]
> K = [ 1:d for d in D ]
> 
> # preallocate result
> 
> C = Array(T,D...)
> 
> # do multiplication
> for i in product(K...)
> C[i...] = dot(b, vec(A[i[1:(n-1)]...,:,i[n:end]...]))
> end
> 
> return C
> 
> end
> ```
> 
> Thanks all,
> 
> -- Alex
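A common splatting-free alternative (a sketch; `tensordot_perm` is a hypothetical name, and the packages above remain the more general route) is to permute mode n to the last position, reshape to a matrix, and do a single matrix-vector product:

```julia
# Dense-array sketch: contract mode n of A with b via
# permutedims + reshape + one mat-vec.
function tensordot_perm(A, b, n)
    N = ndims(A)
    @assert size(A, n) == length(b)
    perm = [setdiff(1:N, n); n]            # move mode n to the end
    Ap = permutedims(A, perm)
    sz = size(Ap)
    M = reshape(Ap, prod(sz[1:N-1]), sz[N])
    reshape(M * b, sz[1:N-1])              # size (I_1,...,I_{n-1},I_{n+1},...,I_N)
end
```

The permutation copies the data once, but the contraction itself then runs as a BLAS gemv, with no splatting in the inner loop.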




Re: [julia-users] Re: Segfaults in SharedArrays

2016-06-26 Thread Tim Holy
On Sunday, June 26, 2016 3:27:57 AM CDT Nils Gudat wrote:
> Oh well that would explain why it had no effect then - is there some
> version that contains this fix then? Would building the current 0.4.7+
> master from source do? (Sorry, I've stuck strictly to stable versions so
> far so am not well versed in how the master branches work)

If you've succeeded in building from source, you're almost there:
https://github.com/JuliaLang/julia/pull/16899/files is literally a 1-line change, so
you could just fire up an editor, make the same change in your own source, 
rebuild, and test.

Best,
--Tim


Re: [julia-users] Re: Drop element from tuple

2016-06-26 Thread Tim Holy
On Saturday, June 25, 2016 6:45:05 AM CDT jw3126 wrote:
> @Tim: Thanks for the insights! On my machine slowness starts to kick in at 
> size 9 already.

Depends on which version of julia you're running (I'm running a recent 
julia-0.5-dev).

> I tried to read the llvm code, but did not really 
> understand it. It seems however that the machine will not go through N 
> (out, t) pairs for a tuple of length N?

The @inline gives LLVM the chance to elide the apparent intermediates: 
essentially you're describing the computation you want to perform in a way the 
compiler can understand, but which may or may not reflect the final operations 
performed by the CPU. Coaching it into taking the opportunity to fully 
optimize this operation takes a little practice, and unfortunately sometimes 
involves a little magic (e.g., https://github.com/JuliaLang/julia/issues/17126).

Best,
--Tim



Re: [julia-users] Re: Drop element from tuple

2016-06-25 Thread Tim Holy
It might scale just fine, because LLVM might discover the pattern of your 
"giant switch statement" and essentially compute a jump location. Even if not, 
if you're calling this repeatedly with a tuple of consistent length, the 
branches become predictable and then they kind of don't count. So in most 
cases I doubt it's possible to beat the performance of the solution you 
posted.

For small tuples, you can essentially match the performance of your generated 
function using "lispy" recursion. This is more friendly for the compiler, 
which you may or may not care about. (It's something we care about in Base, 
but there's no reason you necessarily have to worry about it in your own 
code.)

First, a couple of convenience functions:

@inline push(t::Tuple, item) = (t..., item)
@inline unshift(t::Tuple, item) = (item, t...)
# we'll also need something like shift!, but that's already present
# and called Base.tail

Now let's implement your function:

drop_ith(t, i) = 1 <= i <= length(t) ?
_drop_ith((), t, i) :
throw(ArgumentError("need 1 <= $i <= $(length(t))"))
@inline function _drop_ith(out, t, i)
h, t1 = t[1], Base.tail(t)
_drop_ith(length(out)+1 == i ? unshift(out, h) : push(out, h), t1, i)
end
_drop_ith(out, ::Tuple{}, i) = Base.tail(out)

This uses the sneaky trick of preserving type-stability by putting the element 
you want to drop into the first position, thus ensuring that the tuple always 
grows by 1 on each level of recursion. (For tuples, the length is part of the 
type, so making the length predictable is critical for type-stability.) At the 
end, you strip off the first element.

This works well up to tuples of size 14; any bigger, and a variable called 
MAX_TUPLETYPE_LEN ends up kicking in and hurting performance. However, it has 
the advantage of not needing Val tricks or @generated functions. In cases 
where there's less run-time checking, this approach can usually match 
@generated solutions, so the general technique is worth knowing.

Best,
--Tim

On Friday, June 24, 2016 9:12:53 AM CDT jw3126 wrote:
> The following is faster, though it does not scale very well for large
> indices because of ridiculous if chains...
> 
> ```
> @generated function droptup{N,T,i}(x::NTuple{N,T}, ::Type{Val{i}})
> @assert 1 <= i <= N
> args = [:(x[$j]) for j in deleteat!([1:N...], i)]
> Expr(:tuple, args...)
> end
> 
> @generated function droptup{N,T}(x::NTuple{N,T}, i::Int)
> quote
> @nif $N d->(i==d) d-> droptup(x, Val{d})
> end
> end
> 
> using BenchmarkTools
> x = (10,20,30,40)
> 
> @benchmark droptup($x, 4)
> ```
> 
> BenchmarkTools.Trial:
>   samples:  1
>   evals/sample: 1000
>   time tolerance:   5.00%
>   memory tolerance: 1.00%
>   memory estimate:  0.00 bytes
>   allocs estimate:  0
>   minimum time: 6.00 ns (0.00% GC)
>   median time:  6.00 ns (0.00% GC)
>   mean time:6.11 ns (0.00% GC)
>   maximum time: 70.00 ns (0.00% GC)
> 
> On Friday, June 24, 2016 at 4:50:59 PM UTC+2, jw3126 wrote:
> > Okay thanks, it works! However it has extremely poor performance. I would
> > love to do this stack allocated.
> > 
> > ```
> > using BenchmarkTools
> > function subtuple(t::Tuple,i::Integer)
> > 
> > idx = 1:length(t)
> > idx = setdiff(idx,i)
> > t[idx]
> > 
> > end
> > 
> > @benchmark subtuple($(1,2,3,4), $1)
> > ```
> > 
> > BenchmarkTools.Trial:
> >   samples:  1
> >   evals/sample: 10
> >   time tolerance:   5.00%
> >   memory tolerance: 1.00%
> >   memory estimate:  1.33 kb
> >   allocs estimate:  22
> >   minimum time: 1.52 μs (0.00% GC)
> >   median time:  1.69 μs (0.00% GC)
> >   mean time:1.96 μs (9.07% GC)
> >   maximum time: 323.58 μs (98.21% GC)
> > 
> > On Friday, June 24, 2016 at 4:42:17 PM UTC+2, STAR0SS wrote:
> >> You can do something like that:
> >> 
> >> t = tuple(1,2,3,4)
> >> 
> >> function subtuple(t::Tuple,i::Integer)
> >> 
> >> idx = 1:length(t)
> >> idx = setdiff(idx,i)
> >> t[idx]
> >> 
> >> end
> >> 
> >> subtuple(t,3)
> >> 
> >> (1,2,4)



