[julia-users] Re: small array/vector ops with broadcast and generators?

2016-09-26 Thread Jutho
For functions like dot and norm, it's also good to check out all the existing 
methods in Base, via e.g. methods(norm). You'll get this response:

# 9 methods for generic function "norm":
norm{T<:Union{Complex{Float32},Complex{Float64},Float32,Float64},TI<:Integer}(x::Union{Base.ReshapedArray{T,1,A<:DenseArray,MI<:Tuple{Vararg{Base.MultiplicativeInverses.SignedMultiplicativeInverse{Int64},N<:Any}}},DenseArray{T,1},SubArray{T,1,A<:Union{Base.ReshapedArray{T<:Any,N<:Any,A<:DenseArray,MI<:Tuple{Vararg{Base.MultiplicativeInverses.SignedMultiplicativeInverse{Int64},N<:Any}}},DenseArray},I<:Tuple{Vararg{Union{Base.AbstractCartesianIndex,Colon,Int64,Range{Int64}},N<:Any}},L<:Any}},
 rx::Union{Range{TI},UnitRange{TI}}) at linalg/dense.jl:44
norm(A::SparseMatrixCSC) at sparse/linalg.jl:497
norm(A::SparseMatrixCSC, p::Real) at sparse/linalg.jl:497
norm(x::AbstractArray{T<:Any,1}) at linalg/generic.jl:201
norm(x::AbstractArray{T<:Any,1}, p::Real) at linalg/generic.jl:201
norm{T}(A::AbstractArray{T,2}) at linalg/generic.jl:242
norm{T}(A::AbstractArray{T,2}, p::Real) at linalg/generic.jl:242
norm(x::Number) at linalg/generic.jl:253
norm(x::Number, p::Real) at linalg/generic.jl:253

While the first one is hard to process because of all the unions and parametric 
types, it is important to note the second range argument. Delving into 
base/linalg/dense.jl shows that this is exactly what you need: it computes the 
norm over a small part of the vector, as indicated by the range, without having 
to create a view or a copy that allocates memory. The output of methods(dot) is 
even harder to read, but a similar method exists.
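
For example (a sketch; the exact method availability depends on the Julia 
version, and here the range indexes into the vector):

x = randn(100)
norm(x, 11:20)   # 2-norm of x[11] through x[20], without allocating a view or copy of that part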

Re: [julia-users] Fused-Multiply-Add function?

2016-07-23 Thread Jutho
Did you check axpy!, which originates from the BLAS tradition?
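
A minimal sketch of its use (assuming the 0.4-era location in Base.LinAlg; y is 
updated in place):

x = rand(5); y = rand(5)
Base.LinAlg.axpy!(2.0, x, y)   # y now holds 2.0*x plus the old y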

[julia-users] Re: How to build a range of -Inf to Inf

2016-06-03 Thread Jutho
Looks OK, but I think you could use less restrictive types in your functions to 
keep them generic: e.g.
4 in BasicInterval(3.1,4.9)
won't work, nor will you be able to construct the interval BasicInterval(3,4.5).
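
A hypothetical sketch of a less restrictive definition (the original 
BasicInterval is not shown here, so the field names and the single-type-parameter 
assumption are guesses):

immutable BasicInterval{A<:Real,B<:Real}
    lo::A
    hi::B
end
Base.in(x::Real, I::BasicInterval) = I.lo <= x <= I.hi

4 in BasicInterval(3.1, 4.9)   # now works with an Int on the left
BasicInterval(3, 4.5)          # mixed Int/Float64 endpoints also allowed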

[julia-users] How to produce the ≉ symbol?

2016-06-03 Thread Jutho
(on Mac): what seems to work is to first type \approx + tab to get ≈, and then 
go back one character and type \not + tab. If you first use \not + tab and then 
start typing \approx, the \not acts only on the backslash. If you write 
\not\approx + tab, the \not is not substituted.

[julia-users] Re: vector assignment performance

2016-05-22 Thread Jutho
https://github.com/ahwillia/Einsum.jl
or
https://github.com/Jutho/TensorOperations.jl
might also be of interest
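
As for the allocation in the quoted example below: the indexing expression y[i] 
on the right-hand side of z[:] = y[i] creates a new temporary array on every 
iteration, whereas the explicit element-wise loop in test5 does not. A minimal 
sketch of the non-allocating pattern (copyindexed! is just an illustrative name, 
not a Base function):

function copyindexed!(z, y, i)
    for k = 1:length(i)
        z[k] = y[i[k]]
    end
    return z
end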

On Friday, May 20, 2016 at 17:17:08 UTC+2, Matt Bauman wrote:
>
> I believe something just like that is already implemented in 
> Devectorize.jl's @devec macro.  Try: `@devec y[:] = y + a .* x[ind]`.
>
> https://github.com/lindahua/Devectorize.jl
>
> On Friday, May 20, 2016 at 11:04:34 AM UTC-4, vav...@uwaterloo.ca wrote:
>>
>> I'm not sure that fixed-size array solves the problem.  For one thing, 
>> the size of the small array may not be known at compile time.  For another, 
>> I don't want the small vectors necessarily to be immutable.
>>
>> I decided that what would really solve this problem is indicial notation. 
>>  This can be provided by a macro.  I have not actually written the macro-- 
>> macro-writing sometimes gives me a headache.  But I have already documented 
>> it!  See:
>>
>> https://github.com/StephenVavasis/Indicial.jl 
>>
>> for documentation of my latest vaporware.  If someone out there who 
>> enjoys macro-writing wants to take a crack at this before I do, feel free!
>>
>> -- Steve Vavasis
>>
>>
>>
>>
>> On Thursday, May 19, 2016 at 9:47:12 PM UTC-4, vav...@uwaterloo.ca wrote:
>>>
>>> The two functions test4 and test5 below are equivalent, but test5 is 
>>> much faster than test4.  Apparently test4 is carrying out a heap allocation 
>>> on each iteration of the j-loop.  Why?   In general, which kinds of 
>>> assignment statements of the form lhs = rhs create temporaries, and 
>>> which don't?  (In the example below, if the indirect addressing via array i 
>>> is eliminated, then the two functions have comparable performance.)
>>>
>>> Thanks,
>>> Steve Vavasis
>>>
>>> function test4(n)
>>> y = [2.0, 6.0, 3.0]
>>> i = [1, 2, 3]
>>> z = [0.0, 0.0, 0.0]
>>> u = 0.0
>>> for j = 1 : n
>>> z[:] = y[i]
>>> u += sum(z)
>>> end
>>> u
>>> end
>>>
>>> function test5(n)
>>> y = [2.0, 6.0, 3.0]
>>> i = [1, 2, 3]
>>> z = [0.0, 0.0, 0.0]
>>> u = 0.0
>>> for j = 1 : n
>>> for k = 1 : 3
>>> z[k] = y[i[k]]
>>> end
>>> u += sum(z)
>>> end
>>> u
>>> end
>>>
>>>
>>> julia> @time Testmv.test4(1000)
>>>   1.071396 seconds (20.00 M allocations: 1.192 GB, 7.03% gc time)
>>> 1.1e8
>>>
>>> julia> @time Testmv.test5(1000)
>>>   0.184411 seconds (4.61 k allocations: 198.072 KB)
>>> 1.1e8
>>>
>>>

[julia-users] Re: yet another benchmark with numba and fortran

2016-01-29 Thread Jutho
Transposing coord (i.e. letting the column index correspond to x, y, z and the 
row index to the different particles) helps a little bit (but not so much). 
Julia uses column-major order; Python is different, I believe. How big is 
the difference with Numba?

On Friday, January 29, 2016 at 21:26:07 UTC+1, STAR0SS wrote:
>
> A significant part of the time is spent in computing the norms (r_sq = 
> rx*rx + ry*ry + rz*rz) and the absolute value (abs(rz) > boxl2), I don't 
> know if there's more efficient ways to compute those.
>


[julia-users] Re: Linear combination of matrices using repmat: Matlab vs Julia

2015-11-24 Thread Jutho
I don't see why, in Julia or Matlab, you would want to use repmat or 
broadcasting. For me, the following simple piece of code:

m,n,p = size(A)
reshape(reshape(A,m*n,p)*y,(m,n))

accomplishes your task, and it has about the same speed as matrixLinComb2. It 
expresses the operation as a single matrix multiplication, so it can use BLAS, 
vectorisation, etc.
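
As a self-contained sketch (shapes assumed: A is m-by-n-by-p and y has length p; 
the result is the linear combination sum_k y[k]*A[:,:,k]):

A = rand(3, 4, 5)
y = rand(5)
m, n, p = size(A)
B = reshape(reshape(A, m*n, p)*y, (m, n))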

Re: [julia-users] Re: Efficient way to compute X' diag(w) Y

2015-07-08 Thread Jutho
That's true, and therefore not in Julia either, unless you use some mechanism to 
inline assembly. However, in C it might be possible to get within a factor of 2 
of BLAS speed. This might be sufficient if you want to implement something 
slightly different from matrix multiplication (like perhaps this case), where 
you might create extra overhead when trying to reformulate it in terms of BLAS 
matrix multiplication.

Re: [julia-users] Re: Efficient way to compute X' diag(w) Y

2015-07-08 Thread Jutho
I found a rather instructive tutorial about the kind of optimisations going 
into matrix multiplication in BLIS (not OpenBLAS but related) here:
http://apfel.mathematik.uni-ulm.de/%7Elehn/sghpc/gemm/index.html

It's not something one can implement in Julia (yet). Hopefully further work in 
the direction of vectorisation of tuples will help (e.g. issue #11899  and 
related)...

[julia-users] Efficient way to compute X' diag(w) Y

2015-07-07 Thread Jutho
If X can be the same array as Y (aliasing), then you should of course not use 
scale!, since it would modify Y, and thus X, before the multiplication.

[julia-users] Efficient way to compute X' diag(w) Y

2015-07-07 Thread Jutho
X'*scale(w,Y) should do the job.

Or, if you can afford to destroy Y (or more accurately, to replace it by 
diag(w)*Y), you could do X'*scale!(w,Y), which should allocate less (and 
therefore be slightly faster).
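
A small sketch (0.3/0.4-era API, where scale(w, Y) scales the rows of Y by the 
entries of w):

X = rand(10, 3); Y = rand(10, 4); w = rand(10)
Z = X'*scale(w, Y)     # allocates a scaled copy of Y
Z = X'*scale!(w, Y)    # overwrites Y with diag(w)*Y first, allocating less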

Re: [julia-users] Re: Short-circuit evaluation for multiplication

2015-07-02 Thread Jutho
Depending on how much more complicated your actual use case is, could you not 
just write f(a,b,c) = a*b + a*c instead of a*(b+c)? I guess the former would be 
evaluated immediately at compile time if a is _zero ?

Re: [julia-users] Method ambiguity when defining new Number types

2015-05-22 Thread Jutho Haegeman
That’s probably true. But using the if construction in the high-level function 
is certainly fine, and it avoids having to use the if in the low-level kernel, 
which will be called many times.


> On 22 May 2015, at 10:31, Toivo Henningsson  wrote:
> 
> I would think that calling the function as in your initial example would 
> cause type inference to conclude Union types for alpha and beta, and cause 
> code generation to always use dynamic dispatch.
> 
> Another thing to consider is to make sure that the called function is still 
> type stable when passed a special zero or one in those arguments.
> 
> On 22 May 2015 10:27, "Jutho" wrote:
> Thanks for the detailed response. The MathConst idea is possibly a great 
> suggestion which I will look into. With respect to the calling the low-level 
> function, I don't think there is a big difference between your suggestion and 
> my one-liner, as long as the return type of the function is the same for all 
> possible combinations of argument types. Although maybe there is some boxing 
> going on in assigning the result of beta == 1 ? _one : (beta == 0 ?_zero : 
> beta)) to a temporary variable. 
> 
> On Friday, May 22, 2015 at 10:20:57 UTC+2, Toivo Henningsson wrote:
> I'm not sure if this is the way to go, but I agree that it would be nice to 
> be able to extract specialized versions of such functions automatically based 
> on the general version by setting certain arguments to zero or one in a way 
> that the compiler can reason about.
> 
> One thing to consider is that it seems that an ordinary zero is not enough, 
> but you need a hard zero that such that _zero*NaN is zero. I believe that 
> False has been used like this in the past. I guess the compiler won't 
> specialize completely if you supply Bool arguments for alpha and beta though. 
> (Or will it?)
> 
> One thing that comes to mind is MathConst, which is an already existing 
> abstraction that seems to be close to what you would need for your _zero and 
> _one values. Though I believe that it's always assumed that a MathConst is 
> irrational right now, so that might need some changes. Apart from that I have 
> no idea if it's a good thing to do, but it seems to me that the ambiguities 
> would already have been handled.
> 
> Another possibility might be if you could force an inline call to e.g. gemm! 
> with specific arguments such as 0 or 1 for alpha or beta. But that would of 
> course break down if the inlined function doesn't do the work itself but 
> calls no another function to do it.
> 
> On thing I think you should avoid in either case is to have code such as
> 
> gemm!((beta == 1 ? _one : (beta == 0 ?_zero : beta)) , C, (alpha == 1 
> ?_one : alpha), A,B)
> 
> since it's not type stable. I believe the compiler will do a much better job 
> with e.g.
> 
> if beta == 1
> if alpha == 1; gemm!(_one, C, _one, A,B)
> else   gemm!(_one, C, alpha, A,B)
> end
> elseif beta == 0
> if alpha == 1; gemm!(_zero, C, _one, A,B)
> else   gemm!(_zero, C, alpha, A,B)
> end
> else
> if alpha == 1; gemm!(beta, C, _one, A,B)
> else   gemm!(beta, C, alpha, A,B)
> end
> end
> 
> even though it's clearly not as nice to write.
> 



[julia-users] Re: Method ambiguity when defining new Number types

2015-05-22 Thread Jutho
Thanks for the detailed response. The MathConst idea is possibly a great 
suggestion which I will look into. With respect to the calling the 
low-level function, I don't think there is a big difference between your 
suggestion and my one-liner, as long as the return type of the function is 
the same for all possible combinations of argument types. Although maybe 
there is some boxing going on in assigning the result of beta == 1 ? _one : 
(beta == 0 ?_zero : beta)) to a temporary variable. 

On Friday, May 22, 2015 at 10:20:57 UTC+2, Toivo Henningsson wrote:
>
> I'm not sure if this is the way to go, but I agree that it would be nice 
> to be able to extract specialized versions of such functions automatically 
> based on the general version by setting certain arguments to zero or one in 
> a way that the compiler can reason about.
>
> One thing to consider is that it seems that an ordinary zero is not 
> enough, but you need a hard zero that such that _zero*NaN is zero. I 
> believe that False has been used like this in the past. I guess the 
> compiler won't specialize completely if you supply Bool arguments for alpha 
> and beta though. (Or will it?)
>
> One thing that comes to mind is MathConst, which is an already existing 
> abstraction that seems to be close to what you would need for your _zero 
> and _one values. Though I believe that it's always assumed that a MathConst 
> is irrational right now, so that might need some changes. Apart from that I 
> have no idea if it's a good thing to do, but it seems to me that the 
> ambiguities would already have been handled.
>
> Another possibility might be if you could force an inline call to e.g. 
> gemm! with specific arguments such as 0 or 1 for alpha or beta. But that 
> would of course break down if the inlined function doesn't do the work 
> itself but calls no another function to do it.
>
> On thing I think you should avoid in either case is to have code such as
>
> gemm!((beta == 1 ? _one : (beta == 0 ?_zero : beta)) , C, (alpha == 1 
> ?_one : alpha), A,B)
>
> since it's not type stable. I believe the compiler will do a much better 
> job with e.g.
>
> if beta == 1
> if alpha == 1; gemm!(_one, C, _one, A,B)
> else   gemm!(_one, C, alpha, A,B)
> end
> elseif beta == 0
> if alpha == 1; gemm!(_zero, C, _one, A,B)
> else   gemm!(_zero, C, alpha, A,B)
> end
> else
> if alpha == 1; gemm!(beta, C, _one, A,B)
> else   gemm!(beta, C, alpha, A,B)
> end
> end
>
> even though it's clearly not as nice to write.
>
>

[julia-users] Method ambiguity when defining new Number types

2015-05-20 Thread Jutho
Dear julia users,

When looking at e.g. BLAS functions, they have a general format of adding a 
scaled result array to an existing array, which itself can also be scaled; 
e.g. AB could be the result of a matrix multiplication of matrices A and B, 
and the BLAS gemm! function (using Julia's name) allows to store 
alpha*AB+beta*C into C. Looking at the high-performant microkernels of 
BLAS, they specialize on the different possibilities for alpha and beta 
using if constructions:

if beta == 0
 # do not actually compute C*0 since that could produce NaN if C was not 
initialized and contains NaN or infinity
 # instead set C to zero
elseif beta != 1
 # only scale C if beta is not 1
end
if alpha == 1
 # immediately add AB, do not scale with alpha
else
 # C+=alpha * AB
end

In writing something related in Julia, I thought this would be a perfect 
case for dispatch. The high performant low level function could just 
contain the general statement C=beta*C+alpha*AB and it would be called in a 
high level function as (note that I am not actually trying to reimplement 
gemm!, this is just as an example)

gemm!((beta == 1 ? _one : (beta == 0 ?_zero : beta)) , C, (alpha == 1 ?_one 
: alpha), A,B)


in combination with the following definitions

immutable Zero <: Integer
end
immutable One <: Integer
end

const _zero = Zero()
const _one = One()

Base.promote_rule{T<:Number}(::Type{Zero}, ::Type{T}) = T
Base.promote_rule{T<:Number}(::Type{One}, ::Type{T}) = T

Base.convert{T<:Number}(::Type{T}, ::Zero) = zero(T)
Base.convert{T<:Number}(::Type{T}, ::One) = one(T)

# add special rules, the most essential of which are:
+(::Zero, a::Number) = a
+(::Number, a::Zero) = a
*(::Zero, a::Number) = _zero
*(a::Number, ::Zero) = _zero

*(::One, a::Number) = a
*(a::Number, ::One) = a


Unfortunately this produces a lot of ambiguity warnings. Of course I need 
to add the rules for +(::Zero,::Zero) etc, i.e. the interaction of my newly 
defined types with themselves. But this is not sufficient. The remaining 
addition and multiplication ambiguities can be avoided by defining them 
separately for a::Real and a::Complex. But the most difficult one is the 
convert definition, which is ambiguous with almost any other number type 
out there.

I am aware of the many ambiguity related issues and the proposal to make 
those runtime errors instead of warnings, but I have the feeling that there 
might be a more specific solution for what I am trying to accomplish and 
that I am missing something trivial. I know there are several other number 
types defined in packages, and I assume the corresponding authors must have 
faced similar problems? So I appreciate any tips or input.



[julia-users] Re: ANN: EmpiricalRisks, Regression, SGDOptim

2015-04-23 Thread Jutho
Regression.jl seems to have like a sweet implementation of gradient based 
optimization algorithms. How does this compare to the work in Optim.jl? 
Would it be useful to join these efforts?

On Thursday, April 23, 2015 at 11:12:58 UTC+2, Dahua Lin wrote:
>
> Hi, 
>
> I am happy to announce three packages related to empirical risk 
> minimization
>
> EmpiricalRisks 
>
> This Julia package provides a collection of predictors and loss functions, 
> as well as the efficient computation of gradients, mainly to support the 
> implementation of (regularized) empirical risk minimization methods.
>
> Predictors:
>
>- linear prediction
>- affine prediction
>- multivariate linear prediction
>- multivariate affine prediction
>
> Loss functions:
>
>- squared loss
>- absolute loss
>- quantile loss
>- huber loss
>- hinge loss
>- smoothed hinge loss
>- logistic loss
>- sum squared loss (for multivariate prediction)
>- multinomial logistic loss
>
> Regularizers:
>
>- squared L2 regularization
>- L1 regularization
>- elastic net (L1 + squared L2)
>- evaluation of proximal operators, w.r.t. these regularizers.
>
>
> Regression 
>
> This package was dead before, and I revived it recently. It is based on 
> EmpiricalRisks, and provides methods for regression analysis (for moderate 
> size problems, i.e. the data can be loaded entirely to memory). It supports 
> the following problems:
>
>- Linear regression
>- Ridge regression
>- LASSO
>- Logistic regression
>- Multinomial Logistic regression
>- Problems with customized loss and regularizers
>
> It also provides a variety of solvers:
>
>- Analytical solution (for linear & ridge regression)
>- Gradient descent
>- BFGS
>- L-BFGS
>- Proximal gradient descent (recommended for LASSO & sparse regression)
>- Accelerated gradient descent (experimental)
>
>
> SGDOptim 
>
> I announced this couple weeks ago. Now this package has been fundamentally 
> refactored, and now it is based on EmpiricalRisks. It aims to provide 
> stochastic algorithms (e.g. SGD) for solve large scale regression problems.
>
>
> Cheers,
> Dahua
>
>
>
>
>
>
>

[julia-users] Re: smallest eigenvalue of sparse matrix

2015-04-13 Thread Jutho
When the matrix is real and symmetric, ARPACK does resort to Lanczos, or at 
least the implicitly restarted version thereof. Straight from the homepage: 
>
> When the matrix A is symmetric it reduces to a variant of the Lanczos 
> process called the Implicitly Restarted Lanczos Method (IRLM).


I always wondered why the ARPACK creators didn't bother to use the same for 
complex Hermitian problems. Even for that case, the Lanczos / Arnoldi 
orthogonalization will automatically yield a reduced matrix which is real 
and tridiagonal (instead of Hessenberg in the generic case), so I would 
think there is some benefit.


On a different note, you typically cannot easily find the smallest eigenvalues 
of a matrix with a Krylov method if you just multiply with A, if smallest is 
supposed to mean closest to zero in absolute value (smallest magnitude). You 
can only find extremal eigenvalues, which lie on the outer regions of the 
spectrum. For the smallest-magnitude eigenvalues, in the generic case, one has 
to build the Krylov subspace using the inverse of A.

Since your matrix is Hermitian, you know the spectrum will be real, but it 
might still have many positive and negative eigenvalues, and so the same 
comments apply if you are looking for the eigenvalues with smallest magnitude. 
If by smallest you mean most negative, then there is no problem (you should use 
:SR in eigs). If you know more, e.g. that your matrix is not only Hermitian but 
also positive definite, then all eigenvalues are positive and smallest 
magnitude also means smallest real.
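
A short sketch with eigs (keyword names as in the 0.4-era API, where which takes 
a symbol):

A = sprandn(100, 100, 0.05); A = A + A'    # real symmetric, i.e. Hermitian
d, v = eigs(A; nev=3, which=:SR)           # smallest real part, i.e. most negative
d, v = eigs(A; nev=3, which=:SM)           # smallest magnitude; as noted above, interior
                                           # eigenvalues converge much harder without the inverse of A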


Re: [julia-users] vector vs column matrix

2015-03-04 Thread Jutho
The rationale would be that people often want to construct matrices (i.e. even 
when the elements are themselves arrays and they do not want concatenation) and 
that there is no syntax for this. All the tricks with reshape or transposing 
are not very future proof, as e.g. transpose might change at some point to 
return a dedicated Transpose wrapper type that doesn't actually transpose 
anything, and reshape is currently also being reconsidered to return an 
explicit view in the form of a wrapper ReshapedArray.


Re: [julia-users] vector vs column matrix

2015-03-04 Thread Jutho


On Wednesday, March 4, 2015 at 11:54:27 UTC+1, Tamas Papp wrote:
>
> Surely I am missing something, but how would this relate to 
> concatenation? 
>
>
The point of proposal 2 in issue 10338 would be that concatenation would 
receive a new pair of brackets and that spaces and semicolons would be used 
as dedicated syntax for building matrices, i.e.
[1,2,3] is a vector, i.e. an array of size (3,)
[1;2;3] is a matrix, i.e. an array of size (3,1)


Re: [julia-users] vector vs column matrix

2015-03-04 Thread Jutho
But feel free to discuss the need for a convenient syntax for this 
at https://github.com/JuliaLang/julia/issues/10338

On Wednesday, March 4, 2015 at 10:32:12 UTC+1, Tamas Papp wrote:
>
> Assuming that you want to fill it with a sequence of integers as your 
> examples suggest, something like 
>
> [i for i=1:3, j=1] 
>
> or 
>
> reshape(1:3,3,1) 
>
> would also work. 
>
> Best, 
>
> Tamas 
>
> On Wed, Mar 04 2015, Deniz Yuret > 
> wrote: 
>
> > Is there a way to initialize a 2D column matrix? 
> > a=[1,2,3] gives me a 1D vector with size=(3,) 
> > a=[1;2;3] likewise 
> > a=[1 2 3] gives me a row matrix with size=(1,3) 
> > 
> > How do I initialize a column matrix with size=(3,1)?  (other than 
> > transposing a row matrix) 
>
>

Re: [julia-users] Is it OK (stylistically) to override missing Base functionality in a package?

2015-02-27 Thread Jutho Haegeman
I am not opposed to that but the same could be said for typemin and typemax.

Sent from my iPhone

> On 27 Feb 2015, at 21:27, Andreas Noack wrote the following:
> 
> I think it is fine that the type of the argument determines the behavior 
> here. Having "type" in the name would be a bit like having `fabs(x::Float64)`.
> 
> 2015-02-27 15:21 GMT-05:00 Jutho :
>> But I wouldn't overload real; real is for the real value of a value, not for 
>> the real type. Maybe something like realtype , or typereal if we want to go 
>> with the other type... functions.
>> 
>> On Friday, February 27, 2015 at 21:18:34 UTC+1, Andreas Noack wrote:
>>> 
>>> I'd like to have something like this.
>>> 
>>> 2015-02-27 15:02 GMT-05:00 Jutho :
>>> 
>>>> Or, in this particular case, maybe there should be some functionality like 
>>>> that in Base, or at least in Base.LinAlg, where it is often necessary to mix 
>>>> complex variables and real variables of the same type used to build the 
>>>> complex variables.
>>>> 
>>>>> On Thursday, February 26, 2015 at 08:10:35 UTC+1, Sheehan Olver wrote:
>>>>> 
>>>>> Maybe a better alternative is to create an internal function with the 
>>>>> same name: 
>>>>> 
>>>>> real(v…)=Base.real(v…) 
>>>>> real{T<:Real}(::Type{Complex{T}})=T 
>>>>> real{T<:Real}(::Type{T})=T 
>>>>> 
>>>>> This will avoid the override leaking from the package. 
>>>>> 
>>>>> 
>>>>> 
>>>>> 
>>>>> > On 26 Feb 2015, at 6:07 pm, Sheehan Olver  wrote: 
>>>>> > 
>>>>> > I think this is a case where I know the answer but pretending I don’t 
>>>>> > :) 
>>>>> > 
>>>>> > 
>>>>> > 
>>>>> >> On 26 Feb 2015, at 6:06 pm, Ivar Nesje  wrote: 
>>>>> >> 
>>>>> >> We have seen quite a few instances where Base functions were extended 
>>>>> >> with methods without restriction to non-Base types, and it caused 
>>>>> >> problems when Julia was updated. 
>>>>> >> 
>>>>> >> Is randomly breaking in new versions of Julia your style? 
>>>>> > 
>>>>> 
>>> 
> 


Re: [julia-users] Is it OK (stylistically) to override missing Base functionality in a package?

2015-02-27 Thread Jutho Haegeman
I guess the subtle difference is that, strictly speaking, min and max of 
floating-point types would be ±Inf.

> On 27 Feb 2015, at 21:46, Andreas Noack  wrote:
> 
> Thats right and I realized that right after I posted. I'd be fine with using 
> min and max for types but probably some would oppose that.
> 
> 2015-02-27 15:42 GMT-05:00 Jutho Haegeman:
> I am not opposed to that but the same could be said for typemin and typemax.
> 
> Sent from my iPhone
> 
> On 27 Feb 2015, at 21:27, Andreas Noack wrote the following:
> 
>> I think it is fine that the type of the argument determines the behavior 
>> here. Having "type" in the name would be a bit like having 
>> `fabs(x::Float64)`.
>> 
>> 2015-02-27 15:21 GMT-05:00 Jutho:
>> But I wouldn't overload real; real is for the real value of a value, not for 
>> the real type. Maybe something like realtype , or typereal if we want to go 
>> with the other type... functions.
>> 
>> On Friday, February 27, 2015 at 21:18:34 UTC+1, Andreas Noack wrote:
>> I'd like to have something like this.
>> 
>> 2015-02-27 15:02 GMT-05:00 Jutho >:
>> 
>> Or, in this particular case, maybe there should be some functionality like 
>> that in Base, or at least in Base.LinAlg, where it is often necessary to mix 
>> complex variables and real variables of the same type used to build the 
>> complex variables.
>> 
>> On Thursday, February 26, 2015 at 08:10:35 UTC+1, Sheehan Olver wrote:
>> Maybe a better alternative is to create an internal function with the same 
>> name: 
>> 
>> real(v…)=Base.real(v…) 
>> real{T<:Real}(::Type{Complex{T}})=T 
>> real{T<:Real}(::Type{T})=T 
>> 
>> This will avoid the override leaking from the package. 
>> 
>> 
>> 
>> 
>> > On 26 Feb 2015, at 6:07 pm, Sheehan Olver > wrote: 
>> > 
>> > I think this is a case where I know the answer but pretending I don’t :) 
>> > 
>> > 
>> > 
>> >> On 26 Feb 2015, at 6:06 pm, Ivar Nesje > wrote: 
>> >> 
>> >> We have seen quite a few instances where Base functions were extended 
>> >> with methods without restriction to non-Base types, and it caused 
>> >> problems when Julia was updated. 
>> >> 
>> >> Is randomly breaking in new versions of Julia your style? 
>> > 
>> 
>> 
>> 
> 



Re: [julia-users] Is it OK (stylistically) to override missing Base functionality in a package?

2015-02-27 Thread Jutho
But I wouldn't overload real; real is for the real part of a value, not 
for the real type. Maybe something like realtype, or typereal if we want 
to go with the other type... functions.

On Friday, February 27, 2015 at 21:18:34 UTC+1, Andreas Noack wrote:
>
> I'd like to have something like this.
>
> 2015-02-27 15:02 GMT-05:00 Jutho >:
>
>> Or, in this particular case, maybe there should be some functionality like 
>> that in Base, or at least in Base.LinAlg, where it is often necessary to mix 
>> complex variables and real variables of the same type used to build the 
>> complex variables.
>>
>> On Thursday, February 26, 2015 at 08:10:35 UTC+1, Sheehan Olver wrote:
>>>
>>> Maybe a better alternative is to create an internal function with the 
>>> same name: 
>>>
>>> real(v…)=Base.real(v…) 
>>> real{T<:Real}(::Type{Complex{T}})=T 
>>> real{T<:Real}(::Type{T})=T 
>>>
>>> This will avoid the override leaking from the package. 
>>>
>>>
>>>
>>>
>>> > On 26 Feb 2015, at 6:07 pm, Sheehan Olver  wrote: 
>>> > 
>>> > I think this is a case where I know the answer but pretending I don’t 
>>> :) 
>>> > 
>>> > 
>>> > 
>>> >> On 26 Feb 2015, at 6:06 pm, Ivar Nesje  wrote: 
>>> >> 
>>> >> We have seen quite a few instances where Base functions were extended 
>>> with methods without restriction to non-Base types, and it caused problems 
>>> when Julia was updated. 
>>> >> 
>>> >> Is randomly breaking in new versions of Julia your style? 
>>> > 
>>>
>>>
>

Re: [julia-users] Is it OK (stylistically) to override missing Base functionality in a package?

2015-02-27 Thread Jutho
Or, in this particular case, maybe there should be some functionality like 
that in Base, or at least in Base.LinAlg, where it is often necessary to mix 
complex variables and real variables of the same type used to build the 
complex variables.

On Thursday, February 26, 2015 at 08:10:35 UTC+1, Sheehan Olver wrote:
>
> Maybe a better alternative is to create an internal function with the same 
> name: 
>
> real(v…)=Base.real(v…) 
> real{T<:Real}(::Type{Complex{T}})=T 
> real{T<:Real}(::Type{T})=T 
>
> This will avoid the override leaking from the package. 
>
>
>
>
> > On 26 Feb 2015, at 6:07 pm, Sheehan Olver  > wrote: 
> > 
> > I think this is a case where I know the answer but pretending I don’t :) 
> > 
> > 
> > 
> >> On 26 Feb 2015, at 6:06 pm, Ivar Nesje > 
> wrote: 
> >> 
> >> We have seen quite a few instances where Base functions were extended 
> with methods without restriction to non-Base types, and it caused problems 
> when Julia was updated. 
> >> 
> >> Is randomly breaking in new versions of Julia your style? 
> > 
>
>

Re: [julia-users] Re: Whats the deal with concatenation in 0.4!?

2015-02-26 Thread Jutho
I opened an issue to further discuss some related questions regarding 
concatenation and construction of matrices:

https://github.com/JuliaLang/julia/issues/10338


Re: [julia-users] Parametric vs Abstract typed collections, design question

2015-02-01 Thread Jutho
As long as not all parameters of a parametric concrete type are fully 
specified, the type is treated as abstract. So in both cases your collection 
would be of abstract elements and they would not be stored packed in memory. I 
don't think what you are requesting is possible, but I might be mistaken. If 
the elements of the collection are of a concrete type, then applying a function 
to them should always be the same function. If this were not the case, then 
type inference could not work.
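
A small illustration with Base types (isleaftype tells you whether a type is 
concrete):

a = Complex{Float64}[1.0+2.0im, 3.0+4.0im]    # concrete element type, stored inline
b = Complex[1.0+2.0im, 3.0+4.0im]             # Complex without parameters is abstract, elements are boxed
isleaftype(eltype(a)), isleaftype(eltype(b))  # (true, false)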

[julia-users] Re: Peculiarly slow matrix multiplication

2015-01-21 Thread Jutho
Not sure what is causing the slowness, but you could avoid creating a diagonal 
matrix with diagm(expr) and then doing a matrix multiplication with it, since 
that diagonal matrix will be treated as a full matrix. 
Instead of shrt*diagm(expr), which is interpreted as the multiplication of 
two full matrices, try scale(shrt,expr).
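
For the full expression in the quoted message that would be (scale(A, b) scales 
the columns of A by b in the 0.3-era API):

objectivematrix = scale(shrt, expr)*shrt'   # same result as shrt*diagm(expr)*shrt',
                                            # without forming the dense 600x600 diagonal matrix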


On Wednesday, January 21, 2015 at 07:56:19 UTC+1, Micah McClimans wrote:
>
> I'm running into trouble with a line of matrix multiplication going very 
> slowly in one of my programs. The line I'm looking into is:
> objectivematrix=shrt*diagm(expr)*(shrt')
> where shrt is 12,000x600 and expr is 600 long. This line takes several 
> HOURS to run, on a computer that can run
>
> k=rand(12000,12000)
> k3=k*k*k
>
> in under a minute. I've tried devectorizing the line into the following 
> loop (shrt is block-diagonal with each block ONevecs and -ONevecs 
> respectively, so I split the loop in half)
>
> objectivematrix=zeros(2*size(ONevecs,1),2*size(ONevecs,1))
> for i in 1:size(ONevecs,1)
> print(i)
> for j in 1:size(ONevecs,1)
> for k in 1:size(ONevecs,2)
> objectivematrix[i,j]+=ONevecs[i,k]*ONevecs[j,k]*expr[k]
> end
> end
> end
> for i in 1:size(ONevecs,1)
> print(i)
> for j in 1:size(ONevecs,1)
> for k in 1:size(ONevecs,2)
> 
> objectivematrix[i+size(ONevecs,1),j+size(ONevecs,1)]+=ONevecs[i,k]*ONevecs[j,k]*expr[k+size(ONevecs,2)]
> end
> end
>
> end
>
> and this give a print out every couple seconds- it's faster than the 
> matrix multiplication version, but not enough. Why is this taking so long? 
> This should not be a hard operation.
>


[julia-users] Re: Text editor for coding Julia: Atom vs Light Table vs Bracket

2014-12-03 Thread Jutho
I remember reading somewhere that Codebox might support Julia in the near 
future. Does anybody have any comments or information about this?

On Friday, November 28, 2014 at 17:39:43 UTC+1, Daniel Carrera wrote:
>
> Hi everyone,
>
> Can anyone here comment or share opinions on the newer text editors -- 
> Atom, Light Table, Bracket -- that seem to be trying to supplant Sublime 
> Text? A lot of the information you find online seems to be geared toward 
> web development, but my interest is programming with Julia (and Fortran). 
> That's why I asking for opinions on the Julia mailing list.
>
> I currently use Sublime Text, and I am very happy with it. But I am 
> curious about the others, since they seem to intentionally copy the most 
> important features from Sublime Text. If you have experience with these 
> editors and can tell me why you like one better than another, I would love 
> to hear it.
>
> Cheers,
> Daniel.
>


Re: [julia-users] type confusions in list comprehensions (and how to work around it?)

2014-11-03 Thread Jutho
This only happens in global scope, not inside a function. If you define
f(list) = [g(x) for x in list]

then f(xs) will return an Array{Float64,1}.

On Tuesday, November 4, 2014 at 03:23:36 UTC+1, K leo wrote:
>
> I found that I often have to force this conversion, which is not too 
> difficult.  The question why comprehension has to build with type Any? 
>
>
> On 2014年11月04日 07:06, Miguel Bazdresch wrote: 
> > > How could I force the type of gxs1 to be of an array of Float64? 
> > 
> > The simplest way is: 
> > 
> > gxs1 = Float64[g(x) for x in xs] 
> > 
> > -- mb 
> > 
> > On Mon, Nov 3, 2014 at 6:01 PM, Evan Pu   
> > > wrote: 
> > 
> > Consider the following interaction: 
> > 
> > julia> g(x) = 1 / (1 + x) 
> > g (generic function with 1 method) 
> > 
> > julia> typeof(g(1.0)) 
> > Float64 
> > 
> > julia> xs = [1.0, 2.0, 3.0, 4.0] 
> > 4-element Array{Float64,1}: 
> >  1.0 
> >  2.0 
> >  3.0 
> >  4.0 
> > 
> > julia> gxs1 = [g(x) for x in xs] 
> > 4-element Array{Any,1}: 
> >  0.5 
> >  0.33 
> >  0.25 
> >  0.2 
> > 
> > Why isn't gxs1 type of Array{Float64,1}? 
> > How could I force the type of gxs1 to be of an array of Float64? 
> > 
> > julia> gxs2 = [convert(Float64,g(x)) for x in xs] 
> > 4-element Array{Any,1}: 
> >  0.5 
> >  0.33 
> >  0.25 
> >  0.2 
> > 
> > somehow this doesn't seem to work... 
> > 
> > 
> > 
> > 
>
>

[julia-users] Re: Should reshape be reshape!?

2014-10-13 Thread Jutho
Maybe point 3 on the noteworthy differences list could be expanded. If 
Stephan doesn't do so, I can try to give it a shot later.

On Monday, October 13, 2014 at 19:53:58 UTC+2, Ivar Nesje wrote:
>
> This behaviour is already bullet 2 and 3 on the list you suggested. 
> Emphasizing that Matlab implicitly inserts a copy(x), when you mutate 
> arrays after some boundaries might be a good idea. I also find it somewhat 
> hard to grasp the consequences of the current wording, but I'm not the 
> target audience for that part of the documentation.
>
> As always when you want to change something in Julia, the prefered way is 
> to submit a pull request on Github. For documentation, you can even use the 
> github 
> web ui 
> <https://github.com/JuliaLang/julia/edit/master/doc/manual/noteworthy-differences.rst>
>  to 
> edit. If you open a pull request you should also link to this thread as 
> previous discussion.
>
> Regards Ivar
>
> On Monday, October 13, 2014 at 16:35:52 UTC+2, Stephan Buchert wrote:
>>
>> Thanks for the explanations.
>>
>> To me this seems now reasonable, after realizing that in Julia not only 
>> reshape etc, but already the normal assignment b=a (for a an array) opens 
>> for the same kind of "surprises" as I had noticed them with reshape. Coming 
>> from Matlab this is a surprise, but for someone being familiar with certain 
>> other languages, I guess, it is not?
>>
>>  julia> a=[1, 2, 3]
>> 3-element Array{Int64,1}:
>>  1
>>  2
>>  3
>>
>> julia> b=a;
>>
>> julia> b[2]=0
>> 0
>>
>> julia> a
>> 3-element Array{Int64,1}:
>>  1
>>  0
>>  3
>>
>> But then "+-" is different again:
>>
>> julia> b+=1;
>>
>> julia> a
>> 3-element Array{Int64,1}:
>>  1
>>  0
>>  3
>>
>>
>> Perhaps one could add in 
>> http://julia.readthedocs.org/en/latest/manual/noteworthy-differences/ 
>> under Matlab near the top something like:
>>
>> * Arrays are mutable. Assignments like b=a and functions b=f(a) behave in 
>> Matlab always equivalent to b=copy(a) and b=f(copy(a)), but in Julia b 
>> becomes conceptually a pointer or different view of the contents of a. The 
>> values of a can be altered through b.
>>
>> PS: Yes, linear indices work directly also in Julia, I had overlooked this
>>
>> On Monday, October 13, 2014 at 10:33:26 UTC+2, Jutho wrote:
>>>
>>> Some more comments / answers:
>>>
>>> I too was confused by the precise use of ! when I first started using 
>>> Julia in cases like reshape. However, it is very clear that reshape should 
>>> not have an exclamation mark.
>>>
>>> The use of ! in Julia functions is whenever the function modifies one of 
>>> its input arguments. This can be for several reasons:
>>>
>>>1. The function performs some operation in-place: e.g. 
>>>scale!(A,lambda) multiplies all elements of the array A with the scalar 
>>>lambda in place.
>>>2. The operation stores the result of the computation in one 
>>>(typically the first) argument: e.g. copy!(B,A), permutedims!(B,A,perm), 
>>>scale!(B,A,lambda), Base.transpose!(B,A): do something with the data of 
>>> A 
>>>and store the result in the pre-allocated but not necessarily 
>>> initialised 
>>>array B
>>>3. The function overwrites the input arguments for use as temporary 
>>>workspace, but the actual result is just a normal return value as in 
>>>typical functions: e.g. the different [eig,svd,qr]fact!(A) methods
>>>
>>> The Base.LinAlg.BLAS methods do something in between 1. and 2.; they 
>>> store the result of the computation in a dedicated 'output' argument as in 
>>> 2., but they can also just add the result to that output argument, in which 
>>> case it needs to be correctly initialised.
>>>
>>> In summary, the exclamation mark means that if you need the original 
>>> values of all of the arguments after calling he function, you should better 
>>> make a copy, since calling the function allows it to destroy the original 
>>> values of its arguments (only possible with arrays or other mutable types).
>>> Also note that most of these functions do not change the size of array 
>>> arguments, resize! is one of the notable few exceptions and only works with 
>>> one-dimensional arrays (vectors).
>>>
>>> reshape doesn't fit in any of these ca

[julia-users] Re: Should reshape be reshape!?

2014-10-13 Thread Jutho
Some more comments / answers:

I too was confused by the precise use of ! when I first started using Julia 
in cases like reshape. However, it is very clear that reshape should not 
have an exclamation mark.

The use of ! in Julia functions is whenever the function modifies one of 
its input arguments. This can be for several reasons:

   1. The function performs some operation in-place: e.g. scale!(A,lambda) 
   multiplies all elements of the array A with the scalar lambda in place.
   2. The operation stores the result of the computation in one (typically 
   the first) argument: e.g. copy!(B,A), permutedims!(B,A,perm), 
   scale!(B,A,lambda), Base.transpose!(B,A): do something with the data of A 
   and store the result in the pre-allocated but not necessarily initialised 
   array B
   3. The function overwrites the input arguments for use as temporary 
   workspace, but the actual result is just a normal return value as in 
   typical functions: e.g. the different [eig,svd,qr]fact!(A) methods

The Base.LinAlg.BLAS methods do something in between 1. and 2.; they store 
the result of the computation in a dedicated 'output' argument as in 2., 
but they can also just add the result to that output argument, in which 
case it needs to be correctly initialised.

In summary, the exclamation mark means that if you need the original values 
of all of the arguments after calling the function, you had better make a 
copy, since calling the function allows it to destroy the original values 
of its arguments (only possible with arrays or other mutable types).
Also note that most of these functions do not change the size of array 
arguments; resize! is one of the notable few exceptions and only works with 
one-dimensional arrays (vectors).

reshape doesn't fit in any of these categories. B=reshape(A,newsize) does 
not destroy the values of A. It is however true that the output B shares 
data with A, just like if you do B=slice(A,...) or B=sub(A,...). Note that 
none of these functions use an exclamation mark, nor is there any other 
stylistic convention to indicate that this happens. That's just a new 
aspect of the language that you have to take into account when coming from 
e.g. Matlab, but that is very natural when coming from e.g. C.

Finally, note that linear indexing indeed works on an array without 
reshaping, but if you want the 'vectorized' version of a multidimensional 
array, there is the built-in function B=vec(A), which is indeed equivalent 
to B=reshape(A,prod(size(A))).
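
A small illustration of that data sharing (this is exactly the behaviour in the 
quoted example below):

a = [1 2; 3 4; 5 6]
b = vec(a)      # equivalent to reshape(a, prod(size(a)))
b[3] = 0        # also sets a[3,1] to 0, since b shares a's data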




On Saturday, October 11, 2014 at 22:20:13 UTC+2, Stephan Buchert wrote:
>
> julia> a=[1 2; 3 4; 5 6]
> 3x2 Array{Int64,2}:
>  1  2
>  3  4
>  5  6
>
> julia> b=reshape(a, prod(size(a)));
>
> julia> b[3]=0
> 0
>
> julia> a
> 3x2 Array{Int64,2}:
>  1  2
>  3  4
>  0  6
>
>

Re: [julia-users] Re: parametrized type weirdness? or Julia bug?

2014-10-07 Thread Jutho
t;>>>  in makeADict at c:\Users\vavasis\Documents\Projects\qmg21\julia\
>>>> testnestparam2
>>>> jl:6
>>>>
>>>>
>>>>
>>>> On Tuesday, October 7, 2014 10:05:09 AM UTC-4, vav...@uwaterloo.ca 
>>>> wrote:
>>>>>
>>>>> Thanks for the explanation!  Although I think I understand your 
>>>>> answer, I'm not sure how to define the types to accomplish my goal.
>>>>>
>>>>> * a Token should have two parts, a container and a semitoken, both 
>>>>> abstract
>>>>>
>>>>> * an SDToken should be a specialization of Token in which the 
>>>>> container portion is SortedDict{K,D} for a specific K,D and the semitoken 
>>>>> portion is IntSemiToken (both parts concrete)
>>>>>
>>>>> * Some functions take SDToken's as input.  These should be generic, 
>>>>> i.e., work for all choices of K,D.
>>>>>
>>>>> * There are also generic functions that return SDTokens as output (in 
>>>>> which K and D are determined from the input args).
>>>>>
>>>>> Can I write typealias SDToken{K,D} Token{SortedDict{K,D},IntSemiToken}?  
>>>>> I tried this and got an unexpected error message:
>>>>>
>>>>> ERROR: type: test1: in apply, expected Function, got 
>>>>> Type{Token{SortedDict{K,D},
>>>>> IntSemiToken}}
>>>>>  in test1 at c:\Users\vavasis\Documents\Projects\qmg21\julia\
>>>>> testnestparam.jl:32
>>>>>
>>>>> (No 'apply' in my code!)  And in any case, even if I could write this, 
>>>>> I'm not sure it solves the problem.
>>>>>
>>>>> -- Steve
>>>>>
>>>>>
>>>>>
>>>>> On Tuesday, October 7, 2014 2:15:01 AM UTC-4, Jutho wrote:
>>>>>>
>>>>>> I don't see the problem regarding point one.
>>>>>>
>>>>>> A type with parameters, as your SortedDict{D,K}, becomes an abstract 
>>>>>> type when the parameters are unspecified, e.g. SortedDict, but this is 
>>>>>> indeed printed/formatted with unspecified parameters put back (I guess 
>>>>>> with 
>>>>>> the name as you defined them), e.g. as SortedDict{D,K}.
>>>>>>
>>>>>> Regarding point 2, this relates to the invariance of parametric 
>>>>>> types. 
>>>>>>
>>>>>> isa(t2,Token{SortedDict, IntSemiToken})
>>>>>> will return false, because
>>>>>> Token{SortedDict{Int64,ASCIIString},IntSemiToken} <: 
>>>>>> Token{SortedDict, IntSemiToken}
>>>>>> is false. There are many discussions regarding this both on the forum 
>>>>>> and I guess in the manual. Search for invariance versus covariance of 
>>>>>> parametric types.
>>>>>>
>>>>>>
>>>>>> On Tuesday, October 7, 2014 at 04:42:12 UTC+2, vav...@uwaterloo.ca wrote:
>>>>>>>
>>>>>>> The code below is an excerpt from a more complicated code I am 
>>>>>>> writing.  It contains a parametrized type in which the parameter is 
>>>>>>> itself 
>>>>>>> another parametrized type. I have attached the printout 
>>>>>>> (0.4.0-dev+323).  
>>>>>>> Notice the error message at the end.  My questions are:
>>>>>>>
>>>>>>> (1) How is it possible that the type of t includes dummy parameters 
>>>>>>> K and D, which aren't real types at all?
>>>>>>>
>>>>>>> (2) Why is Julia not able to match t2 but it is able to match t to 
>>>>>>> the signature of test0()?  
>>>>>>>
>>>>>>> Thanks,
>>>>>>> Steve Vavasis
>>>>>>>
>>>>>>>
>>>>>>> julia> testnestparam.test1()
>>>>>>> typeof(t) = Token{SortedDict{K,D},IntSemiToken}
>>>>>>> typeof(t2) = Token{SortedDict{Int64,ASCIIString},IntSemiToken}
>>>>>>> methods(test0) = # 1 method for generic function "test0":
>>>>>>> test0(i::Token{SortedDict{K,D},IntSemiToken}) at 
>>>>>>> c:\Users\vavasis\Documents\Projects\qmg21\julia\testnestparam.jl:28
>>>>>>> ERROR: `test0` has no method matching test0(::Token{SortedDict{
>>>>>>> Int64,ASCIIString},IntSemiToken})
>>>>>>>  in test1 at c:\Users\vavasis\Documents\Projects\qmg21\julia\
>>>>>>> testnestparam.jl:38
>>>>>>>
>>>>>>>
>>>>>>> module testnestparam
>>>>>>>
>>>>>>> immutable IntSemiToken
>>>>>>> address::Int
>>>>>>> end
>>>>>>>
>>>>>>> immutable Token{T, S}
>>>>>>> container::T
>>>>>>> semitoken::S
>>>>>>> end
>>>>>>>
>>>>>>> # take a token apart
>>>>>>> semi(i::Token) = i.semitoken
>>>>>>> container(i::Token) = i.container
>>>>>>>
>>>>>>> # put a token back together
>>>>>>> assemble(m, s) = Token(m,s)
>>>>>>>
>>>>>>> type SortedDict{K, D} <: Associative{K,D}
>>>>>>> bt::Dict{K,D}
>>>>>>> end
>>>>>>>
>>>>>>> typealias SDToken Token{SortedDict, IntSemiToken}
>>>>>>>
>>>>>>> sdtoken_construct(m::SortedDict,int1::Int) = 
>>>>>>> SDToken(m, IntSemiToken(int1))
>>>>>>>
>>>>>>> test0(i::SDToken) = nothing
>>>>>>>
>>>>>>> function test1()
>>>>>>> s = SortedDict([1=>"a",2=>"b"])
>>>>>>> t = sdtoken_construct(s, 0)
>>>>>>> t2 = assemble(t.container, t.semitoken)
>>>>>>> println("typeof(t) = ", typeof(t))
>>>>>>> println("typeof(t2) = ", typeof(t2))
>>>>>>> println("methods(test0) = ", methods(test0))
>>>>>>> test0(t)
>>>>>>> test0(t2) #Line 38
>>>>>>> end
>>>>>>>
>>>>>>> end
>>>>>>>
>>>>>>>
>>

[julia-users] Re: parametrized type weirdness? or Julia bug?

2014-10-06 Thread Jutho
I don't see the problem regarding point one.

A type with parameters, such as your SortedDict{K,D}, becomes an abstract type 
when the parameters are unspecified, e.g. SortedDict, but this is indeed 
printed/formatted with the unspecified parameters put back (I guess with the 
names as you defined them), e.g. as SortedDict{K,D}.

Regarding point 2, this relates to the invariance of parametric types. 

isa(t2,Token{SortedDict, IntSemiToken})
will return false, because
Token{SortedDict{Int64,ASCIIString},IntSemiToken} <: Token{SortedDict, 
IntSemiToken}
is false. There are many discussions regarding this both on the forum and I 
guess in the manual. Search for invariance versus covariance of parametric 
types.
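
A self-contained illustration of invariance with Base types (the same rule that 
applies to Token above):

Vector{Int} <: Vector{Any}   # false, even though Int <: Any
Vector{Int} <: Vector        # true: Vector with its parameter unspecified is abstract and covers every Vector{T}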


On Tuesday, October 7, 2014 at 04:42:12 UTC+2, vav...@uwaterloo.ca wrote:
>
> The code below is an excerpt from a more complicated code I am writing. 
>  It contains a parametrized type in which the parameter is itself another 
> parametrized type. I have attached the printout (0.4.0-dev+323).  Notice 
> the error message at the end.  My questions are:
>
> (1) How is it possible that the type of t includes dummy parameters K and 
> D, which aren't real types at all?
>
> (2) Why is Julia not able to match t2 but it is able to match t to the 
> signature of test0()?  
>
> Thanks,
> Steve Vavasis
>
>
> julia> testnestparam.test1()
> typeof(t) = Token{SortedDict{K,D},IntSemiToken}
> typeof(t2) = Token{SortedDict{Int64,ASCIIString},IntSemiToken}
> methods(test0) = # 1 method for generic function "test0":
> test0(i::Token{SortedDict{K,D},IntSemiToken}) at 
> c:\Users\vavasis\Documents\Projects\qmg21\julia\testnestparam.jl:28
> ERROR: `test0` has no method matching 
> test0(::Token{SortedDict{Int64,ASCIIString},IntSemiToken})
>  in test1 at 
> c:\Users\vavasis\Documents\Projects\qmg21\julia\testnestparam.jl:38
>
>
> module testnestparam
>
> immutable IntSemiToken
> address::Int
> end
>
> immutable Token{T, S}
> container::T
> semitoken::S
> end
>
> # take a token apart
> semi(i::Token) = i.semitoken
> container(i::Token) = i.container
>
> # put a token back together
> assemble(m, s) = Token(m,s)
>
> type SortedDict{K, D} <: Associative{K,D}
> bt::Dict{K,D}
> end
>
> typealias SDToken Token{SortedDict, IntSemiToken}
>
> sdtoken_construct(m::SortedDict,int1::Int) = 
> SDToken(m, IntSemiToken(int1))
>
> test0(i::SDToken) = nothing
>
> function test1()
> s = SortedDict([1=>"a",2=>"b"])
> t = sdtoken_construct(s, 0)
> t2 = assemble(t.container, t.semitoken)
> println("typeof(t) = ", typeof(t))
> println("typeof(t2) = ", typeof(t2))
> println("methods(test0) = ", methods(test0))
> test0(t)
> test0(t2) #Line 38
> end
>
> end
>
>

[julia-users] Re: Type failure in vectors of parameterised types

2014-10-04 Thread Jutho
Note that
Vector
Vector{Belief}
Vector{Belief{DistributionType}}
Vector{Belief{D}} # for some D<: DistributionType

are all different types. Your z is a Vector{Belief}, where Belief, without 
parameters, is an abstract type.
For any D<:DistributionType ,   Belief{D} is a concrete type, even if D 
itself is abstract, as in Belief{DistributionType}

Your function is written to only accept arguments of the form 
Vector{Belief{D}}, whereas you are trying to execute it with an argument 
of type Vector{Belief}, which is different.

I think you want your signature to be
function Testa{B<:Belief}(basin::Vector{B})
  "Test"
end

which will accept both arguments of type Vector{Belief} and 
Vector{Belief{D}}.

Hope this helps.
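
A self-contained sketch of the distinction (the Belief and DistributionType 
definitions below are hypothetical stand-ins, since the originals are not shown):

abstract DistributionType
immutable Normal <: DistributionType end
immutable Belief{D<:DistributionType}
    d::D
end

z = Belief[Belief(Normal()), Belief(Normal())]   # Vector{Belief}: abstract element type
testa{B<:Belief}(basin::Vector{B}) = "Test"
testa(z)                                         # matches; so would a Vector{Belief{Normal}}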

On Saturday, October 4, 2014 at 00:43:55 UTC+2, Zenna Tavares wrote:
>
> I am getting type failures when dealing with vectors parametric types, and 
> I am not sure of the reason.
>
> z = Belief[a,b]
> function Testa{D <: DistributionType}(basin::Vector{Belief{D}})
>   "Test"
> end
>
>
> Testa(z)
>
> typeof(z)
>
> yields
>
> Array{Belief{D<:DistributionType},1}
>
> Which is as far as I can see is the signature of the function.  But I get 
> the Testa has no method matching
>
> Testa(::Array{Belief{D<:DistributionType},1})
>
> What am I doing wrong?
>


[julia-users] Re: Label vector components

2014-09-30 Thread Jutho
While it is certainly expected that a custom type will be more performant, 
you cannot really trust your timings. You should not time with variables 
defined in global scope, and you should add a warm-up phase to compile your 
functions.

Defining all of this:


# Method 1: Type

type parameters
p1::Float64
p2::Float64
p3::Float64
end

function f1(n,p)
for i = 1:n
p.p1 += 0.1
p.p2 += 0.1
p.p3 += 0.1
end
end

# Method 2: Dictionary

function f2(n,p)
for i = 1:n
p["p1"] += 0.1
p["p2"] += 0.1
p["p3"] += 0.1
end
end

# Time function
function correcttime()
p_method_1 = parameters(3.6, 0.6, 600.0)
p_method_2 = {"p1" => 3.6, "p2" => 0.6, "p3" => 600.0}
f1(10,p_method_1)
f2(10,p_method_2)

println("Method 1")
println(p_method_1)
@time f1(10,p_method_1)
println(p_method_1)

println("Method 2")
println(p_method_2)
@time f2(10,p_method_2)
println(p_method_2)
end

and then running

correcttime()

gives

Method 1

parameters(4.598,1.6003,601.2)

elapsed time: 0.000275517 seconds (0 bytes allocated)

parameters(10004.60018865,10001.60018854,10601.00020967)

Method 2

{"p3"=>601.2,"p2"=>1.6003,"p1"=>4.598}

elapsed time: 0.079420021 seconds (960 bytes allocated)

{"p3"=>10601.00020967,"p2"=>10001.60018854,"p1"=>10004.60018865}




On Tuesday, September 30, 2014 at 11:09:38 UTC+2, ami...@gmail.com wrote:
>
> Hi again,
>
> Thanks for all the suggestions.
> I'm a bit ashamed I overlooked the part of the documentation about types.
> I skimmed through the DataFrames documentation and I don't think there's a 
> way to access data via keys.
> As far as the two other solutions are concerned, I compared their 
> performance on a very simple test case, here it is if it can be of any 
> interest :
>
> n = 10
>
> # Method 1: Type
>
> type parameters
> p1::Float64
> p2::Float64
> p3::Float64
> end
>
> p_method_1 = parameters(3.6, 0.6, 600.0)
>
> function f1(p)
> for i = 1:n
> p.p1 += 0.1
> p.p2 += 0.1
> p.p3 += 0.1
> end
> end
>
> # Method 2: Dictionnary
>
> p_method_2 = {"p1" => 3.6, "p2" => 0.6, "p3" => 600.0}
>
> function f2(p)
> for i = 1:n
> p["p1"] += 0.1
> p["p2"] += 0.1
> p["p3"] += 0.1
> end
> end
>
> println("Method 1")
> println(p_method_1)
> @time f1(p_method_1)
> println(p_method_1)
>
> println("Method 2")
> println(p_method_2)
> @time f2(p_method_2)
> println(p_method_2)
>
> Here are the results:
>
> Method 1
> parameters(3.6,0.6,600.0)
> elapsed time: 0.020289272 seconds (6610188 bytes allocated)
> parameters(10003.60018861,1.6001885,10600.00020964)
> Method 2
> {"p3"=>600.0,"p2"=>0.6,"p1"=>3.6}
> elapsed time: 0.120914516 seconds (16193368 bytes allocated)
> {"p3"=>10600.00020964,"p2"=>1.6001885,"p1"=>10003.60018861}
>
> The first method is by far the most efficient.
>


Re: [julia-users] unexpected domain error for ^(float,float)

2014-09-18 Thread Jutho
Because it is not recognized/parsed as a literal, but as the application of a 
unary minus, which has lower precedence than ^.

I guess it is not possible to give binary minus a lower precedence than ^ 
and unary minus a higher precedence, since these are just different 
methods of the same function/operator.
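
Concretely:

-0.4^2.5     # parsed as -(0.4^2.5), so no error
x = -0.4
x^2.5        # DomainError: negative Float64 raised to a non-integer power
(-0.4)^2.5   # also a DomainError, since here the minus belongs to the value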

On Thursday, September 18, 2014 at 14:54:26 UTC+2, Florian Oswald wrote:
>
> yes - not sure why -0.4 and (-0.4) are any different.
>
> On 18 September 2014 13:52, Patrick O'Leary  > wrote:
>
>> Seems like the literal -0.4^2.5 should throw the same error, though?
>>
>>
>> On Thursday, September 18, 2014 6:42:56 AM UTC-5, Tim Holy wrote:
>>>
>>> http://docs.julialang.org/en/latest/manual/faq/#why-does-
>>> julia-give-a-domainerror-for-certain-seemingly-sensible-operations 
>>>
>>> On Thursday, September 18, 2014 03:24:00 AM Florian Oswald wrote: 
>>> > # define a variable gamma: 
>>> > 
>>> > gamma = 1.4 
>>> > mgamma = 1.0-gamma 
>>> > 
>>> > julia> mgamma 
>>> > -0.3999 
>>> > 
>>> > # this works: 
>>> > 
>>> > julia> -0.3999^2.5 
>>> > -0.10119288512475567 
>>> > 
>>> > # this doesn't: 
>>> > 
>>> > julia> mgamma^2.5 
>>> > ERROR: DomainError 
>>> > in ^ at math.jl:252 
>>>
>>>
>

Re: [julia-users] Trouble with `permute!`

2014-09-17 Thread Jutho
One of the reasons that permutedims! is not documented yet is, I believe, that 
it was initially not exported. The non-allocating versions of the simpler case 
of transposing a matrix, i.e. transpose! and ctranspose!, are also still not 
exported, and thus undocumented. Given the current tendency to also export the 
mutating methods, there was a pull request to export (c)transpose! as well, but 
it got stalled by having to decide on the best way to deal with the fact that 
one should use a completely different memory buffer for the destination (which 
is also the case for permutedims!)

https://github.com/JuliaLang/julia/pull/7814

I will try to add clear (hopefully) documentation for these methods and get 
this pull request merged next week.
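
For reference, a sketch of the non-allocating call discussed in the quoted 
message (the destination must be pre-allocated with the permuted size and must 
not share memory with the source; on older versions the function may need to be 
qualified as Base.permutedims!):

A = rand(Float32, 20, 30, 40, 2)
B = Array(Float32, 30, 20, 40, 2)
permutedims!(B, A, [2, 1, 3, 4])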

Jutho



On Wednesday, September 17, 2014 at 20:43:41 UTC+2, Tim Holy wrote:
>
> To add to Doug's point, permutedims! is non-allocating, but it's not 
> in-place. 
> After all, you can't stash a 3-by-5 matrix in a 5-by-3 matrix. The syntax 
> is 
> permutedims!(output, input, perm) 
> where output already has the right size. 
>
> Since permutedims! is undocumented, DumpsterDoofus, would you add it to 
> https://github.com/JuliaLang/julia/blob/master/doc/stdlib/base.rst? 
> (just click on the little pencil icon) 
>
> --Tim 
>
>
> On Wednesday, September 17, 2014 11:03:24 AM DumpsterDoofus wrote: 
> > I am having trouble with `permute!`, the in-place version of 
> `permutedims`. 
> > I tried the following: 
> > 
> > A = rand(Float32, 20, 30, 40, 2); 
> > 
> > permute!(A, [1;2;3;4]) 
> > 
> > which produced the following: 
> > 
> > ERROR: BoundsError() 
> >  in permute!! at combinatorics.jl:196 
> >  in permute! at combinatorics.jl:212 
> > 
> > Strangely, executing the the in-place version, 
> > 
> > permutedims(A, [1;2;3;4]) 
> > 
> > works fine, giving the correct result. 
> > 
> > What is going on here? 
>
>

[julia-users] Re: Incorrect and non-deterministic behavior of eigs

2014-09-06 Thread Jutho
I have just reopened an old issue here: 
https://github.com/JuliaLang/julia/issues/6965

We hoped that everything was fixed, but apparently not.


Re: [julia-users] help with improving performance of nested loops

2014-08-18 Thread Jutho
What are the sizes of these 9 dimensions? You might be
interested in trying the tensorcontract routine from TensorOperations.jl
and comparing the method=:BLAS vs method=:native approach, although I do
assume that for a specific case like this (where basically every dimension
is individually contracted with a matrix) there might be better approaches.
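
For reference, a single pairwise contraction looks roughly like this (a sketch using the
label-based call form tensorcontract(A, IA, B, IB[, IC]); the exact label conventions have
varied between versions of the package, so treat the names as illustrative):

using TensorOperations

A = rand(5, 5, 5)     # a rank-3 tensor
G = rand(5, 5)        # a transition matrix acting on the second index

# labels occurring in both index lists are summed over; the optional last
# argument fixes the order of the remaining (open) indices
B = tensorcontract(A, [:a, :b, :c], G, [:b, :bp], [:a, :bp, :c])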


Op maandag 18 augustus 2014 22:35:25 UTC+2 schreef Florian Oswald:
>
> Hi!
>
> yes. I find that 
>
> 37 % of time spent at line 26
> 51 % of time spent at line 29
>
> in the gist.
>
> line 26  is
>
> idx1 = idx9(is1,iz1,iy1,ip1,itau1,ia,ih,ij,age,p)
>
>
> line 29 is
>
> @inbounds tmp += m.vbar[idx1] * Gz[iz + p.nz * (iz1 + p.nz * (ij-1)-1)] * 
> Gyp[iy + p.ny * ((ip-1) + p.np * ((iy1-1) + p.ny * ((ip1-1) + p.np * 
> (ij-1 ] * Gs[is + p.ns * (is1-1)] * Gtau[itau1]
>
>
>
>
>
>
>
> On 18 August 2014 19:13, Kevin Squire > 
> wrote:
>
>> Have you run it through the profiler already?
>>
>>
>> On Monday, August 18, 2014, Florian Oswald > > wrote:
>>
>>> Hi all,
>>>
>>> i'm trying to improve the performance of this function:
>>>
>>> https://gist.github.com/floswald/9e79f6f51c276becbd74
>>>
>>> In a nutshell, I have got a high-dimensional array vbar (in this 
>>> instance it is 9D), and I want to obtain another array EV (also 9D), by 
>>> matrix-multiplying several dimensions of vbar with transition matrices - 
>>> they are prefixed with "G..." in the function. The G's are square matrices, 
>>> where row i has the probability of moving from state i to state j. (some of 
>>> those matrices are actually 3D because they depend on an additional state, 
>>> but that doesn't matter.)
>>>
>>> I devectorized it already but wanted to see whether anyone out there has 
>>> a suggestion.
>>>
>>> thanks!
>>> florian
>>>
>>
>

Re: [julia-users] Best way to eliminate parameters from a parametric type

2014-08-02 Thread Jutho
Suppose I want to build a composite type
type X{A<:AbstractArray}
   list::Vector{A}
   individual::A
end

Now, all the elements of the list should be of some concrete subtype of
AbstractArray{T,N}, whereas individual is required to be of the same
concrete type of AbstractArray and have the same T, but can have a different
N'. So if all the elements in the list are Float64 Arrays, then so should
individual be. For AbstractArray this is less of an issue, since there are
not many concrete types with arbitrary N. In my code, it is not actually
the AbstractArray hierarchy that I am using, but some other hierarchy with
several concrete types.

So I was thinking of writing an outer constructor that accepts a general
list::Vector, does several consistency checks, and tries to determine the
concrete subtype, but without fixing the N.
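
A rough sketch (0.3-era syntax, names illustrative) of the kind of consistency check meant
above: all list entries must share one concrete array type, and individual must belong to
the same array family with the same element type, while its number of dimensions may differ:

function checkconsistent(list::Vector, individual::AbstractArray)
    isempty(list) && error("list must not be empty")
    A = typeof(first(list))
    all(x -> typeof(x) == A, list) || error("list entries must share one concrete type")
    typeof(individual).name == A.name || error("individual must be the same kind of array")
    eltype(individual) == eltype(A) || error("element types must match")
    return A
end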

Op zaterdag 2 augustus 2014 19:06:02 UTC+2 schreef Leah Hanson:
>
> T{P1,P2,P3} is a family of types; each member specifies all three type 
> parameters. There are no "super type" relationships within the family of T 
> types; they are just all members of the same family (think "set") of types.
>
> Could you explain what you're trying to achieve? I don't think that making 
> a type with only 2 of it's three parameters defined makes sense, so maybe 
> we can find another way to achieve the same goal.
>
> -- Leah
>
>
> On Sat, Aug 2, 2014 at 6:37 AM, Jutho > 
> wrote:
>
>> Suppose I have some type T{P1,P2,P3} depending some parameters. I don't 
>> know which type exactly, except that it originates from a type hierarchy 
>> which has 3 parameters. What is the best way to construct, given e.g. a 
>> variable of type T{P1,P2,P3} with specific values for P1, P2, P3, to 
>> construct the 'super' type T{P1,P2}. If TT = T{Float64,Float64,Float64} 
>> then I know I could do  TT.parameters = (TT.parameters[1], 
>> TT.parameters[2], TypeVar(P3) ). But it turns out that is not yet 
>> completely identical to T{Float64,Float64}, although TT==T{Float64,Float64} 
>> and TT===T{Float64,Float64} evaluate to true. However, not all fields of TT 
>> seem to be identical to those of T{Float64,Float64}. What is the 
>> recommended strategy ? Is there a specific method for doing this?
>>
>>
>

[julia-users] Best way to eliminate parameters from a parametric type

2014-08-02 Thread Jutho
Suppose I have some type T{P1,P2,P3} depending some parameters. I don't 
know which type exactly, except that it originates from a type hierarchy 
which has 3 parameters. What is the best way to construct, given e.g. a 
variable of type T{P1,P2,P3} with specific values for P1, P2, P3, to 
construct the 'super' type T{P1,P2}. If TT = T{Float64,Float64,Float64} 
then I know I could do  TT.parameters = (TT.parameters[1], 
TT.parameters[2], TypeVar(P3) ). But it turns out that is not yet 
completely identical to T{Float64,Float64}, although TT==T{Float64,Float64} 
and TT===T{Float64,Float64} evaluate to true. However, not all fields of TT 
seem to be identical to those of T{Float64,Float64}. What is the 
recommended strategy ? Is there a specific method for doing this?



Re: [julia-users] Julia mentioned in the history of Sage ...

2014-08-01 Thread Jutho
+1 for this quote of yours:
"The algorithm Julia uses for type inference works by walking through a 
program, starting with the types of its input values, and abstractly 
interpreting the code. Instead of applying the code to values, it applies 
the code to types, following all branches concurrently and tracking a 
superposition of all the possible states the program could be in, including 
all the types each expression could assume. (There's a certain similarity 
to the execution of a non-deterministic finite automaton or the process of 
quantum computation.)".

Maybe time to start thinking of how Julia could benefit from quantum 
computers.

Op vrijdag 1 augustus 2014 19:36:00 UTC+2 schreef Stefan Karpinski:
>
> Cool. There was an interesting discussion 
>  
> of using multiple dispatch for type promotion on the Sage mailing list 
> recently, which started out with links to Graydon Hoare's lovely blog post 
> about the history of scientific computing.
>
>
> On Fri, Aug 1, 2014 at 2:41 AM, cdm > 
> wrote:
>
>>
>> if it might interest you, have a look at:
>>
>>
>> http://sagemath.blogspot.com/2014/07/sagemathcloud-history-and-status.html
>>
>>
>> best,
>>
>> cdm
>>
>
>

[julia-users] Re: Method inheritance on Type{T}

2014-07-31 Thread Jutho
Never mind, I have found the solution. The reason is that the type of S is
restricted in AbstractTensor, and by not restricting S in the methods on
lines 3 and 6, the methods on lines 4 and 7 are in some sense more
restricted, and I guess that's why they were the ones being called. Adding
S<:IndexSpace to lines 3 and 6 fixes the problem.
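
In other words, the corrected definitions read (for reference):

Base.eltype{S<:IndexSpace,T,N}(::Type{AbstractTensor{S,T,N}}) = T
spacetype{S<:IndexSpace,T,N}(::Type{AbstractTensor{S,T,N}}) = S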

On a related note, I noticed that even for AbstractArray, one gets
eltype(AbstractArray{Float64,3}) # -> Float64
eltype(AbstractArray{Float64}) # -> Any
because there is no method
eltype{T}(::Type{AbstractArray{T}})
I guess it is not used anywhere, but for completeness, would it be useful 
to add such a method?


Op vrijdag 1 augustus 2014 07:52:44 UTC+2 schreef Jutho:
>
> Dear julia users/developers,
>
> I am working on some toolbox for working with tensors (on a more rigorous 
> level, defined as elements of tensor products of vector spaces; see 
> https://github.com/Jutho/TensorToolbox.jl ), where the first few lines of 
> a new type hierarchy start as follows:
>
> abstract AbstractTensor{S<:IndexSpace,T,N}
> Base.eltype{S,T}(::AbstractTensor{S,T}) = T
> Base.eltype{S,T,N}(::Type{AbstractTensor{S,T,N}}) = T
> Base.eltype{TT<:AbstractTensor}(::Type{TT}) = eltype(super(TT))
> spacetype{S}(::AbstractTensor{S})=S
> spacetype{S,T,N}(::Type{AbstractTensor{S,T,N}})=S
> spacetype{TT<:AbstractTensor}(::Type{TT})=spacetype(super(TT))
>
> similar to some of the first lines in abstractarray.jl. The only 
> difference is that there is an extra parameter. In the method table 
> (methods(eltype) or methods(spacetype)) the order of the methods on line 3 
> and 4 (for eltype) and of line 6 and 7 (for spacetype) is reversed. As a 
> consequence, 
> @which eltype(AbstractTensor{ComplexSpace,Float64,3})
> is caught by line 4 instead of line 3, and returns Any, whereas clearly 
> this should be Float64. Is there anything I do not understand of how this 
> works for AbstractArray and that I am doing differently, or should I file 
> an issue? Is there a way to force the order in which methods are added to 
> the method table?
>
> Thanks,
>
> Jutho
>
>

[julia-users] Method inheritance on Type{T}

2014-07-31 Thread Jutho
Dear julia users/developers,

I am working on some toolbox for working with tensors (on a more rigorous 
level, defined as elements of tensor products of vector spaces; see 
https://github.com/Jutho/TensorToolbox.jl ), where the first few lines of a 
new type hierarchy start as follows:

abstract AbstractTensor{S<:IndexSpace,T,N}
Base.eltype{S,T}(::AbstractTensor{S,T}) = T
Base.eltype{S,T,N}(::Type{AbstractTensor{S,T,N}}) = T
Base.eltype{TT<:AbstractTensor}(::Type{TT}) = eltype(super(TT))
spacetype{S}(::AbstractTensor{S})=S
spacetype{S,T,N}(::Type{AbstractTensor{S,T,N}})=S
spacetype{TT<:AbstractTensor}(::Type{TT})=spacetype(super(TT))

similar to some of the first lines in abstractarray.jl. The only difference 
is that there is an extra parameter. In the method table (methods(eltype) 
or methods(spacetype)) the order of the methods on line 3 and 4 (for 
eltype) and of line 6 and 7 (for spacetype) is reversed. As a consequence, 
@which eltype(AbstractTensor{ComplexSpace,Float64,3})
is caught by line 4 instead of line 3, and returns Any, whereas clearly 
this should be Float64. Is there anything I do not understand of how this 
works for AbstractArray and that I am doing differently, or should I file 
an issue? Is there a way to force the order in which methods are added to 
the method table?

Thanks,

Jutho



Re: [julia-users] Why A * zeros(..) is faster than A * ones(..)?

2014-07-17 Thread Jutho Haegeman
In principle, it’s also best to wrap all of this in a function, although it 
doesn’t seem to matter that much for this case (on my machine).

I get a little over 0.6 seconds for the first, and about 0.55 s for the second
and third. That sounds consistent with my expectation. Note also that the
statement with rand has a slightly higher allocation.

Since matrix multiplication seems to be slower on your machines (fewer cores?),
whereas the time of rand is probably similar (since it is not multithreaded
anyway, if I am correct), I guess the effect of rand is just unobservable
in your timings.
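
A minimal sketch of a fairer timing: generate the right-hand sides once, outside the timed
loop, and wrap the work in a function:

function timemul(A, Bs)
    for B in Bs
        A * B
    end
end

A  = rand(200, 100)
Bs = [rand(100, 200) for i = 1:1000]
timemul(A, Bs)          # warm-up / compilation
@time timemul(A, Bs)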

On 17 Jul 2014, at 15:54, Tomas Lycken  wrote:

> @Jutho: My gut reaction was the same thing, but then I should be able to 
> reproduce the results, right? All three invocations take about 1.2-1.5 
> seconds on my machine.
> 
> // T
> 
> On Thursday, July 17, 2014 3:06:08 PM UTC+2, Jutho wrote:
> I don't know about the zeros, but one issue with your timings is certainly 
> that you also measure the time to generate the random numbers, which is most 
> probably not negligible.
> 
> Op donderdag 17 juli 2014 13:54:54 UTC+2 schreef Andrei Zh:
> I continue investigating matrix multiplication performance. Today I found 
> that multiplication by array of zeros(..) is several times faster than 
> multiplication by array of ones(..) or random numbers: 
> 
> julia> A = rand(200, 100)
> ...
> 
> julia> @time for i=1:1000 A * rand(100, 200) end 
>  elapsed time: 3.009730414 seconds (48016 bytes allocated, 11.21% gc time)
> 
>  julia> @time for i=1:1000 A * ones(100, 200) end 
>  elapsed time: 2.973320655 seconds (480128000 bytes allocated, 12.72% gc time)
> 
>  julia> @time for i=1:1000 A * zeros(100, 200) end 
>  elapsed time: 0.438900132 seconds (480128000 bytes allocated, 85.46% gc time)
> 
> So, A * zeros() is about 6 faster than other kinds of multiplication. Note 
> also that it uses ~7x more GC time. 
> 
> On NumPy no such difference is seen:
> 
> In [106]: %timeit dot(A, rand(100, 200))
> 100 loops, best of 3: 2.77 ms per loop
> 
> In [107]: %timeit dot(A, ones((100, 200)))
> 100 loops, best of 3: 2.59 ms per loop
> 
> In [108]: %timeit dot(A, zeros((100, 200)))
> 100 loops, best of 3: 2.57 ms per loop
> 
> 
> So I'm curious, how multiplying by zeros matrix is different from other 
> multiplication types? 
> 
> 



[julia-users] Re: Why A * zeros(..) is faster than A * ones(..)?

2014-07-17 Thread Jutho
I don't know about the zeros, but one issue with your timings is certainly 
that you also measure the time to generate the random numbers, which is 
most probably not negligible.

Op donderdag 17 juli 2014 13:54:54 UTC+2 schreef Andrei Zh:
>
> I continue investigating matrix multiplication performance. Today I found 
> that multiplication by array of zeros(..) is several times faster than 
> multiplication by array of ones(..) or random numbers: 
>
> julia> A = rand(200, 100)
> ...
>
> julia> @time for i=1:1000 A * rand(100, 200) end 
>  elapsed time: 3.009730414 seconds (48016 bytes allocated, 11.21% gc 
> time)
>
>  julia> @time for i=1:1000 A * ones(100, 200) end 
>  elapsed time: 2.973320655 seconds (480128000 bytes allocated, 12.72% gc 
> time)
>
>  julia> @time for i=1:1000 A * zeros(100, 200) end 
>  elapsed time: 0.438900132 seconds (480128000 bytes allocated, 85.46% gc 
> time)
>
> So, A * zeros() is about 6 faster than other kinds of multiplication. Note 
> also that it uses ~7x more GC time. 
>
> On NumPy no such difference is seen:
>
> In [106]: %timeit dot(A, rand(100, 200))
> 100 loops, best of 3: 2.77 ms per loop
>
> In [107]: %timeit dot(A, ones((100, 200)))
> 100 loops, best of 3: 2.59 ms per loop
>
> In [108]: %timeit dot(A, zeros((100, 200)))
> 100 loops, best of 3: 2.57 ms per loop
>
>
> So I'm curious, how multiplying by zeros matrix is different from other 
> multiplication types? 
>
>
>

Re: [julia-users] early termination of @parallel for code

2014-07-10 Thread Jutho Haegeman
Using random permutations of your original parameter set is a clever idea. It 
never even occurred to me when I was trying to find a workaround :-).

On 10 Jul 2014, at 23:03, Thomas Covert  wrote:

> Jutho, I was also worried about this.  For that reason, “doset” is a random 
> permutation of the range 1:nCols, where nCols is the number of columns in 
> “fnmasks" and “dsmasks”.  I did this because the workload on the “early” 
> columns is much smaller than the workload on the “later” columns (the 
> function gprD is roughly cubic in the column position).  
> 
> I know randomizing generates tons of cache misses, but the vast majority of 
> the work done in each step is inside gprD, and its execution time is quite a 
> bit longer than memory access or memory allocation time (if I had to guess).
> 
> Thanks for pointing that out though.
> 
> -thom
> On Jul 10, 2014, at 2:09 AM, Jutho  wrote:
> 
>> Having played a little bit with parameter sweeps like these a while ago, I 
>> was also troubled by the best way to do this. Note that @parallel for will 
>> immediately partition your parameter set (in your case: doset ) and assign 
>> different partitions to different processors. This means that, if the 
>> function execution time depends strongly on the parameter values, or in your 
>> case, if your mask is highly non-uniform, the @parallel for strategy is not 
>> really optimal as a bunch of the processor will be doing nothing while the 
>> others are still working to finish their assigned partition.
>> 
>> Op woensdag 9 juli 2014 15:40:26 UTC+2 schreef Thomas Covert:
>> I'm having trouble debugging some @parallel for code, pasted below.  In 
>> particular, Julia seems to execute the "saving to disk" code before all of 
>> the parallel workers have finished.  Why is this?  
>> 
>> If the details are needed, "dsm" and "fnm" are bool arrays, 
>> (theta0,y,X,Ds,M,G,T,Xs) are data not modified during this code block, and 
>> gvals/vvals are SharedArray's to store results.  "doset" is an array of 
>> integers.  "gprD" is a non-mutating function that I have @everywhere'd 
>> previously
>> 
>> any ideas?  Thanks in advance.
>> 
>> -thom
>> 
>> @parallel for nn in doset
>> if sum(fnmasks[:,nn]) >= 1
>> println("doing obs $nn")
>> fnm = fnmasks[:,nn]
>> dsm = dsmasks[:,nn]
>>  g,v = gprD(theta0,y[dsm,1],X[dsm,:],Ds[fnm,fnm,:],M[fnm,:],
>>G[fnm,dsm],T[fnm,1],Xs[:,:,1],tDsPnn[fnm,:,:])
>> 
>> gvals[:,nn] = g
>> vvals[:,nn] = v
>> end
>> end
>> 
>> println("saving to disk")
>> gvals2 = convert(Array,gvals)
>> vvals2 = convert(Array,vvals)
>> save("results/gv.jld","gvals",gvals2,"vvals",vvals2)
> 



[julia-users] Re: early termination of @parallel for code

2014-07-10 Thread Jutho
Having played a little bit with parameter sweeps like these a while ago, I 
was also troubled by the best way to do this. Note that @parallel for will 
immediately partition your parameter set (in your case: doset ) and assign 
different partitions to different processors. This means that, if the
function execution time depends strongly on the parameter values, or in
your case, if your mask is highly non-uniform, the @parallel for strategy
is not really optimal, as a bunch of the processors will be doing nothing
while the others are still working to finish their assigned partition.
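
A sketch of the dynamic alternative with pmap, which hands out one work item at a time
instead of pre-partitioning the range (process_column is a hypothetical function containing
what is currently the body of the @parallel for loop):

@everywhere function process_column(nn)
    # ... current loop body for column nn goes here ...
    return nn
end

results = pmap(process_column, doset)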

Op woensdag 9 juli 2014 15:40:26 UTC+2 schreef Thomas Covert:
>
> I'm having trouble debugging some @parallel for code, pasted below.  In 
> particular, Julia seems to execute the "saving to disk" code before all of 
> the parallel workers have finished.  Why is this?  
>
> If the details are needed, "dsm" and "fnm" are bool arrays, 
> (theta0,y,X,Ds,M,G,T,Xs) are data not modified during this code block, and 
> gvals/vvals are SharedArray's to store results.  "doset" is an array of 
> integers.  "gprD" is a non-mutating function that I have @everywhere'd 
> previously
>
> any ideas?  Thanks in advance.
>
> -thom
>
> @parallel for nn in doset
>
> if sum(fnmasks[:,nn]) >= 1
>
> println("doing obs $nn")
>
> fnm = fnmasks[:,nn]
>
> dsm = dsmasks[:,nn]
>
> g,v = gprD(theta0,y[dsm,1],X[dsm,:],Ds[fnm,fnm,:],M[fnm,:],
>
>G[fnm,dsm],T[fnm,1],Xs[:,:,1],tDsPnn[fnm,:,:])
>
>
> gvals[:,nn] = g
>
> vvals[:,nn] = v
>
> end
>
> end
>
>
> println("saving to disk")
>
> gvals2 = convert(Array,gvals)
>
> vvals2 = convert(Array,vvals)
>
> save("results/gv.jld","gvals",gvals2,"vvals",vvals2)
>


Re: [julia-users] Question about 'dot' notation (and max/maximum)

2014-07-08 Thread Jutho Haegeman
Since I just read these operators were later removed from gcc again, it must 
not all have been perfect either :D.

On 08 Jul 2014, at 16:04, Jutho  wrote:

> Fully realising that this discussion has been settled and the convention is 
> here to stay, I nevertheless feel obsessed to make the remark that there 
> would have been more elegant solutions. Other languages have been able to 
> come up with acceptable operators for a binary 'min' or 'max':
> https://gcc.gnu.org/onlinedocs/gcc-2.95.2/gcc_5.html#SEC107
> (less elegant alternative since the difference between min and max is 
> confusing:
> http://support.sas.com/documentation/cdl/en/imlug/59656/HTML/default/viewer.htm#langref_sect10.htm
>  )
> 
> The linguistic difference between max and maximum is very subtle, maybe 
> because I am not a native English speaker. From the mathematical point of 
> view, I think of a maximum (but preferably abbreviated as max) always in the 
> context of a function or a set, never in the context of the largest out of 
> two numbers
> (not really a reference or definition, but 
> http://en.wikipedia.org/wiki/Maxima_and_minima only/directly discusses 
> functions and sets).
> 
> >? compares to max as + compares to sum would have been so much more 
> >convincing :-). That's all, enough bikeshedding for now, back to work.
> 
> Op maandag 5 mei 2014 17:46:05 UTC+2 schreef Stefan Karpinski:
> I think at this point, while the maximum name may not be everyone's favorite 
> choice, that bikeshed has sailed. Unless we're going to solve this problem 
> differently, it's going to stay the way it is: max is to maximum as + is to 
> sum.
> 
> 
> On Mon, May 5, 2014 at 11:39 AM, Ethan Anderes  wrote:
> +1 for max() and pmax(). It seems the easiest to write and to interpret.
> 



Re: [julia-users] Re: Question about 'dot' notation (and max/maximum)

2014-07-08 Thread Jutho
Fully realising that this discussion has been settled and the convention is
here to stay, I nevertheless feel compelled to remark that there
would have been more elegant solutions. Other languages have been able to
come up with acceptable operators for a binary 'min' or 'max':
https://gcc.gnu.org/onlinedocs/gcc-2.95.2/gcc_5.html#SEC107
(less elegant alternative since the difference between min and max is 
confusing:
http://support.sas.com/documentation/cdl/en/imlug/59656/HTML/default/viewer.htm#langref_sect10.htm
 
)

The linguistic difference between max and maximum is very subtle, maybe 
because I am not a native English speaker. From the mathematical point of 
view, I think of a maximum (but preferably abbreviated as max) always in 
the context of a function or a set, never in the context of the largest out 
of two numbers
(not really a reference or definition, 
but http://en.wikipedia.org/wiki/Maxima_and_minima only/directly discusses 
functions and sets).

>? compares to max as + compares to sum would have been so much more 
convincing :-). That's all, enough bikeshedding for now, back to work.

Op maandag 5 mei 2014 17:46:05 UTC+2 schreef Stefan Karpinski:
>
> I think at this point, while the maximum name may not be everyone's 
> favorite choice, that bikeshed has sailed. Unless we're going to solve this 
> problem differently, it's going to stay the way it is: max is to maximum as 
> + is to sum.
>
>
> On Mon, May 5, 2014 at 11:39 AM, Ethan Anderes  > wrote:
>
>> +1 for max() and pmax(). It seems the easiest to write and to interpret.
>>
>
>

Re: [julia-users] Overriding when the function is used elsewhere

2014-07-04 Thread Jutho
I've also encountered this problem and did indeed solve it by implementing 
a method that throws an error. But it would be nice to hear if a better, 
more julian, approach exists or could be made available.
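
For the record, the error-throwing fallback mentioned above is just something like
(a minimal sketch, names illustrative):

abstract AbstractThing

# subtypes are expected to provide their own method; calling the fallback fails loudly
identifier(x::AbstractThing) = error("identifier not implemented for $(typeof(x))")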

Op vrijdag 4 juli 2014 10:50:21 UTC+2 schreef Magnus Lie Hetland:
>
> Just a quick followup: What if identifier wasn't defined in a.jl – what 
> do I do then? That is, I'd like an abstract, un-implemented "template 
> method," sort of. There'd be no way to import it into b.jl, and therefore 
> no way to "inject" its definition into a namespace available to identify. 
> Should I explicitly implement a function that raises a "no method" error, 
> or something? (Or am I once again missing some basic language construct?-)
>


Re: [julia-users] Re: orthogonalize

2014-06-30 Thread Jutho
Good point, I am not used to having to deal with licensing issues so I was 
probably a bit careless. My apologies for that.
I removed my message, unfortunately it is still contained in your response 
:-). 

Op maandag 30 juni 2014 15:40:03 UTC+2 schreef Matt Bauman:
>
> Careful!  That is "Copyright 1984-2011 The MathWorks, Inc."
>
> Especially on such short functions, it's important for folks to have never 
> seen a proprietary implementation so that their code is done in a "clean 
> room."  That becomes tougher to argue when you have a copy of the code 
> sitting in your email.
>
>>

Re: [julia-users] Re: orthogonalize

2014-06-30 Thread Jutho
This is Matlab's code:

[Q,S] = svd(A,'econ'); % S is always square.
if ~isempty(S)
    S = diag(S);
    tol = max(size(A)) * S(1) * eps(class(A));
    r = sum(S > tol);
    Q = Q(:,1:r);
end




Op maandag 30 juni 2014 07:53:19 UTC+2 schreef Andre P.:
>
> I have some Matlab code I'm porting to Julia
>
> BiasHN = rand(HN(i+1),1)*2 -1;
> BiasHN = orth(BiasHN);
>
> Was the equivalent of the orth() from Matlab added to Julia? Can't seem to 
> find it.
>
> Andre
>
> On Thursday, April 3, 2014 10:32:31 PM UTC+9, Alan Edelman wrote:
>>
>> Maybe something like this
>>
>> function orth(A,thresh=eps(A[1]))   
>>  (U,S)=svd(A)
>>  U[:, S.>S[1]*thresh]
>> end
>>
>> orth(float(A))
>>
>> 20x4 Array{Float64,2}:
>>  -0.223607  -0.213651    0.183753    0.265684
>>  -0.223607  -0.213651    0.183753    0.265684
>>  -0.223607  -0.213651    0.183753    0.265684
>>  -0.223607  -0.213651    0.183753    0.265684
>>  -0.223607  -0.213651    0.183753    0.265684
>>  -0.223607   0.0371419  -0.372503    0.0993058
>>  -0.223607   0.0371419  -0.372503    0.0993058
>>  -0.223607   0.0371419  -0.372503    0.0993058
>>  -0.223607   0.0371419  -0.372503    0.0993058
>>  -0.223607   0.0371419  -0.372503    0.0993058
>>  -0.223607   0.350358    0.163883   -0.0197973
>>  -0.223607   0.350358    0.163883   -0.0197973
>>  -0.223607   0.350358    0.163883   -0.0197973
>>  -0.223607   0.350358    0.163883   -0.0197973
>>  -0.223607   0.350358    0.163883   -0.0197973
>>  -0.223607  -0.173849    0.0248676  -0.345193
>>  -0.223607  -0.173849    0.0248676  -0.345193
>>  -0.223607  -0.173849    0.0248676  -0.345193
>>  -0.223607  -0.173849    0.0248676  -0.345193
>>  -0.223607  -0.173849    0.0248676  -0.345193
>>
>>
>>
>>
>> On Thursday, April 3, 2014 9:23:14 AM UTC-4, Alan Edelman wrote:
>>>
>>> I would use the svd with a threshold based on the norm
>>> and optionally adjustable
>>>
>>>

[julia-users] ANN: LinearMaps

2014-06-15 Thread Jutho
Dear Julia users,

I have just added a new Julia package for working with linear maps, also
known as linear transformations or linear operators (although the latter
term usually refers to the more restricted case where domain and
codomain are the same, i.e. the matrix representation has an equal number of
rows and columns). The idea is that a LinearMap object is like
an AbstractMatrix in that it transforms vectors into vectors by
multiplication, but it does not need to be represented as a Matrix.

The package provides functionality for wrapping linear functions without 
explicit matrix representation (e.g. cumsum, fft, diff, ... ) as LinearMap 
objects, for combining LinearMap objects via composition or linear 
combination, and for taking transposes. All of these operations are 
evaluated 'lazily', i.e. only when the combined object is multiplied with a 
vector will the operations be performed, and in such a way that only
matrix-vector multiplications and vector additions need to be computed.
These operations are also implemented with efficiency in mind: they
try to minimise the number of temporaries (i.e. minimal memory allocation)
and make mutating operations such as A_mul_B! available, even when these were
not provided by e.g. the function that performs the matrix-vector product.
Finally, they can be combined with normal AbstractMatrix objects and
with the built-in identity I (UniformScaling).

The main application of this package is in the context of
iterative algorithms for linear systems or eigenvalue problems, which
should be implemented using duck typing of the matrix/operator argument. In
particular, it works well in combination with eigs, Julia's built-in
wrapper for ARPACK. Like AbstractMatrix objects, LinearMap objects respond
to the methods size, isreal, eltype, issym, ishermitian and isposdef, and
an attempt is even made to cleverly guess the correct values for the latter
properties (e.g. A'*B*B'*A will be recognised as a positive definite
hermitian linear map).
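
To give a flavour of the usage described above (an illustrative sketch; the exact
constructor signature may differ between versions of the package):

using LinearMaps

n = 10
C = LinearMap(cumsum, n)   # wrap cumsum as a lazy linear map on length-n vectors
A = C + I                  # lazy combination with the identity (UniformScaling)
y = A * rand(n)            # only now is cumsum applied and the addition performed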

Check it out at https://github.com/Jutho/LinearMaps.jl . Comments and 
suggestions are appreciated.

Jutho


Re: [julia-users] Re: defining fallback functions

2014-06-07 Thread Jutho
Thanks. I figured something like this after doing some more tests.
Any ideas for approaching my original problem/question?

Best,

Jutho

On Saturday, June 7, 2014 1:02:23 PM UTC+2, Tim Holy wrote:
>
> Julia doesn't dispatch on abstract types; if there is more than one 
> applicable 
> version of a function available, but it can't determine the type(s) at 
> compile-time, then it waits until execution for all types to be concrete 
> before determining which variant of a function to call. 
>
> So likewise, you should use `which` only with concrete types. 
>
> --Tim 
>
> On Friday, June 06, 2014 03:04:13 PM Jutho wrote: 
> > I suddenly had the idea of using which in the following way 
> > which(g,(typeof(x),))!=which(g,(AbstractSupertype,)) 
> > 
> > to determine that g had a more specific implementation then the fallback 
> > method defined for AbstractSupertype. 
> > 
> > Quickly testing how to use with I typed the following in the REPL and 
> got a 
> > confusing result. 
> > 
> > julia> which(issym,(Matrix,))
> > issym(A::AbstractArray{T,2}) at linalg/generic.jl:251
> >
> > julia> which(issym,(AbstractMatrix,))
> > issym(A::SparseMatrixCSC{Tv,Ti<:Integer}) at sparse/sparsematrix.jl:1739
> > 
> > 
> > I would have thought that which(issym,(AbstractMatrix,)) would also 
> point 
> > to the first method. Is this a bug? Should I file an issue? 
>


[julia-users] Re: biconjugate gradient stabilized method

2014-06-07 Thread Jutho
That's a strange line of reasoning. Why not contribute to IterativeSolvers
with the things you're interested in and skilled at (Krylov-based linear
solvers)?
It's of course up to you, but I would certainly prefer a less general name
than KrylovSolvers.jl for a package that contains linear solvers but no
eigensolvers or other Krylov-type methods.

On Friday, June 6, 2014 3:02:29 PM UTC+2, Carlos Baptista wrote:
>
> Yes some but not all. I started my package actually before the one you 
> refer to was created. Also from what I've seen they are implementing more 
> than just Krylov solvers. I myself am only interested in Krylov solvers, so 
> I will keep on developing my own package.
>


[julia-users] Re: defining fallback functions

2014-06-06 Thread Jutho
I suddenly had the idea of using which in the following way
which(g,(typeof(x),)) != which(g,(AbstractSupertype,))

to determine whether g has a more specific implementation than the fallback
method defined for AbstractSupertype.

Quickly testing how to use which, I typed the following in the REPL and got a
confusing result.

julia> which(issym,(Matrix,))
issym(A::AbstractArray{T,2}) at linalg/generic.jl:251

julia> which(issym,(AbstractMatrix,))
issym(A::SparseMatrixCSC{Tv,Ti<:Integer}) at sparse/sparsematrix.jl:1739


I would have thought that which(issym,(AbstractMatrix,)) would also point 
to the first method. Is this a bug? Should I file an issue?


[julia-users] defining fallback functions

2014-06-06 Thread Jutho
Dear Julia users,

if you have an AbstractSupertype and a concrete Subtype, then the dispatch 
evidently works great to make sure that if you define 
f(x::AbstractSupertype) and f(x::Subtype), the latter will be called if f 
is called with an argument of type Subtype.

I am writing some code in which I have two functions f and g which should 
have the following behaviour:
function f(x::AbstractType)
# if g is implemented for concrete subtype of x, then define f using g(x)
# otherwise, throw MethodError
end
function g(x::AbstractType)
# if f is implemented for the concrete subtype of x, then define g using f(x)
# otherwise, throw MethodError
end

So the idea would be that the subtype only has to implement f or g, and 
these functions would make sure that both work. Think of e.g. a mutating 
and non mutating version of a function, where the subtype only needs to 
implement of them. If the subtype implements neither, one should get a 
MethodError. My initial guess was:
f(x::AbstractType) = method_exists(g,(typeof(x),)) ? (do something using g(x)) : throw(MethodError())
g(x::AbstractType) = method_exists(f,(typeof(x),)) ? (do something using f(x)) : throw(MethodError())

While this works as long as the Subtype defines at least one of the both 
functions, it runs in circles if there is no specific implementation of 
either function, since method_exists still thinks that there is a matching 
function, namely the one for AbstractType. So the MethodError never gets 
thrown.

So what would be the best way to tackle this problem? I was thinking of
using methodswith(typeof(x)) to get all specific implementations for
Subtype, and then trying to find whether this contains f or g. methodswith
returns an Array{Method,1}; however, I do not know how to create the
corresponding Method object that I want to look for. What is the best way
to find something in the output of methodswith? Is it by converting this
output to a string? Better strategies for solving the problem above are
certainly welcome, because this doesn't look very elegant.


[julia-users] Re: @parallel for not using all julia processes?

2014-06-04 Thread Jutho
I probably already have an idea what's going on. How are the different 
tasks distributed over the different Julia processes? Is the for loop 
immediately cut into pieces where e.g. process 1 will handle the cases 
iter=1:10, process 2 handles the cases iter=11:20 and so on? For different 
values of the parameters, the execution time will be widely different (from 
fraction of a second to several minutes or even more). If some processes 
handle all the slow cases and other all the fast cases, then this explains 
the behaviour I am seeing. I guess I need to write my own task 
distribution, for which I will have to read the Manual section on parallel 
computing again.




[julia-users] @parallel for not using all julia processes?

2014-06-04 Thread Jutho


Dear Julia users,

I am trying to use Julia's parallel capabilities to do a simple parameter 
sweep. I am probably doing something wrong, but top seems to indicate that
half of my julia processes are doing nothing at all:
This is my workflow (not very pro, but it suits my case for now)

I start the julia REPL (in a screen session) with "julia -p 20" (on a
machine with enough processors, namely 24).
I then type ' include("run.jl") ' where run.jl contains:

require("project.jl")

Arange=2:2:12
Brange=linspace(0,5,101)

I=length(Arange)
J=length(Brange)

e1=SharedArray(Float64,(I,J))
e2=SharedArray(Float64,(I,J))

@parallel for iter=1:I*J
i,j=ind2sub((I,J),iter)
e=F(Arange[i],Brange[j])
e=sort(abs(e),rev=true)
e1[i,j]=e[1]
e2[i,j]=e[2]
end

and where project.jl loads a number of modules and defines the function F. 
As soon as I hit ' include("run.jl") ', all julia processes start doing
something, running at 100% for a time that would be consistent with
processing a single iteration. But then about half of them fall back to 0%
CPU, and even more fall back to 0% later. Am I doing something wrong?

Best regards,

Jutho


[julia-users] Re: temporary arrays in loops

2014-06-03 Thread Jutho
Thanks. This certainly works great. I guess I more or less could have known
this, but I also wanted to probe whether there was a far superior
approach to tackling these kinds of problems. I guess I could wrap this
behaviour in a simple MemoryBuffer type.
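
Something like the following rough sketch (0.3-era syntax, names illustrative; the aliasing
caveat from above still applies):

type MemoryBuffer{T}
    data::Vector{T}
end
MemoryBuffer(T::Type) = MemoryBuffer(Array(T, 0))

function getarray{T}(buf::MemoryBuffer{T}, dims::Dims)
    n = prod(dims)
    length(buf.data) < n && resize!(buf.data, n)   # grow only when needed
    # the returned array aliases buf.data: do not use it after the next resize!
    return pointer_to_array(pointer(buf.data), dims)
end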

On Tuesday, June 3, 2014 1:05:09 AM UTC+2, Simon Kornblith wrote:
>
> There may be a more elegant solution, but the sledgehammer approach would 
> be to replace the reshape operation with:
>
> tmp = pointer_to_array(pointer(buffer), currentdims)
>
> Obviously don't access tmp after resize! but before setting its value or 
> else Julia may segfault.
>
> Simon
>
> On Monday, June 2, 2014 11:34:13 PM UTC+2, Jutho wrote:
>>
>> Dear Julia users,
>>
>> I often need to use large temporary multidimensional arrays within loops. 
>> These temporary arrays don't necessarily have a fixed size, so allocating 
>> them just once before the start of the loop is not really an option. 
>> Allocating them within the loop triggers a lot of gc activity, which can 
>> have a non-negible impact on the performance.
>>
>> I was now trying to use the following strategy:
>> buffer=zeros(T,prod(firstdims))
>> tmp=reshape(buffer,firstdims)
>> # initialize tmp
>> for i in iter
>>  # do something with tmp of previous iteration
>>
>>  # from here on, previous tmp is no longer needed
>>  resize!(buffer,prod(currentdims))
>>  tmp=reshape(buffer,currentdims)
>>  # initialise new tmp
>> end
>>
>> The idea would be that by using resize!, memory would only be allocated 
>> if the current length of tmp exceeds any length encountered before. In 
>> addition, I think resize! is directly using memory allocation calls in C 
>> and is not passing along the GC. Unfortunately, I get the following error:
>>
>> *ERROR: cannot resize array with shared data*
>>
>> * in resize! at array.jl:496*
>>
>> which makes sense. the old buffer is still used in the previous value of 
>> tmp, and I have not expressed that I will no longer need tmp. Is there a 
>> possibility to express this and to make this work, or is there a better 
>> suggestion of how to tackle this kind of problem?
>>
>>

[julia-users] temporary arrays in loops

2014-06-02 Thread Jutho
Dear Julia users,

I often need to use large temporary multidimensional arrays within loops. 
These temporary arrays don't necessarily have a fixed size, so allocating 
them just once before the start of the loop is not really an option. 
Allocating them within the loop triggers a lot of gc activity, which can 
have a non-negligible impact on the performance.

I was now trying to use the following strategy:
buffer=zeros(T,prod(firstdims))
tmp=reshape(buffer,firstdims)
# initialize tmp
for i in iter
 # do something with tmp of previous iteration

 # from here on, previous tmp is no longer needed
 resize!(buffer,prod(currentdims))
 tmp=reshape(buffer,currentdims)
 # initialise new tmp
end

The idea would be that by using resize!, memory would only be allocated if 
the current length of tmp exceeds any length encountered before. In 
addition, I think resize! is directly using memory allocation calls in C 
and is not passing along the GC. Unfortunately, I get the following error:

*ERROR: cannot resize array with shared data*

* in resize! at array.jl:496*

which makes sense: the old buffer is still used in the previous value of 
tmp, and I have not expressed that I will no longer need tmp. Is there a 
possibility to express this and to make this work, or is there a better 
suggestion of how to tackle this kind of problem?



[julia-users] unexpected type alias and Union behavior

2014-04-30 Thread Jutho
Starting with the following definitions
```julia
abstract Vartype
type VT1<:Vartype
end

abstract Graph{T<:Vartype}
abstract Subgraph{T<:Vartype} <: Graph{T}
type Vertex{T<:Vartype} <: Subgraph{T}
end
type Block{T<:Vartype,N} <: Subgraph{T}
end

abstract GraphObject{T<:Vartype}
type SubgraphObject{T<:Vartype,S<:Subgraph{T}} <: GraphObject{T}
end

typealias VertexObject{T<:Vartype} SubgraphObject{T,Vertex{T}}
typealias BlockObject{T<:Vartype,N} SubgraphObject{T,Block{T,N}}
typealias VertexBlockObject{T<:Vartype} Union(VertexObject{T},BlockObject{T})

O1=SubgraphObject{VT1,Vertex{VT1}}()
O2=SubgraphObject{VT1,Block{VT1,3}}()
```
I did not expect all of the following to be true, but nevertheless got the 
results in the comment
```julia
isa(O1,VertexObject) # false
isa(O1,VertexObject{VT1}) # true
isa(O1,VertexBlockObject) # false
isa(O1,VertexBlockObject{VT1}) # true

isa(O2,BlockObject) # false
isa(O2,BlockObject{VT1}) # false
isa(O2,BlockObject{VT1,3}) # true
isa(O2,VertexBlockObject) # false
isa(O2,VertexBlockObject{VT1}) # false
```

I don't see how this is different from `isa(randn(3,3),StridedVecOrMat)`,
which returns `true` with or without full specification of the argument.

Is this a bug or expected behaviour?


[julia-users] TensorOperations.jl

2014-04-09 Thread Jutho
Dear Julia users,

I would like to present TensorOperations.jl v0.0.1, a package for 
performing tensor contractions, traces and related operations. It can be 
found here:
https://github.com/Jutho/TensorOperations.jl
and can now be installed using Pkg.add("TensorOperations")

Before creating an official v1.0.0, I would certainly appreciate some user
input:

- What's the opinion about the convention of indexing arrays using an 
l-prefixed string; is this overkill?
- What about the naming of the methods tensorcopy, tensoradd, tensortrace 
and tensorcontract ?
- Any other suggestions?
- And of course any bug reports ...

Enjoy,

Jutho



Re: [julia-users] matrix multiplication with Julia

2014-02-11 Thread Jutho
Thanks for all the responses. It was never my intention to write
sophisticated code that can compete with BLAS; we have BLAS for that. I
just wanted to see how much you lose with simple code. I guess
generic_matmatmul is at the level of simplicity that I am still willing to
consider, and it already significantly outperforms my own naive matrix-matrix
multiplication. In the same setup, generic_matmatmul stagnates at a speed
13 times slower than BLAS, without showing any effect of the cache being
saturated (which is of course what it was written for).
  


Re: [julia-users] matrix multiplication with Julia

2014-02-11 Thread Jutho
Thanks for the quick response,

If I understand correctly, this is similar to the first stagnation of
http://www.stanford.edu/~jacobm/matrixmultiply.html for
values in the range 50-200, at a factor of 1.3 or something times the BLAS
speed. I completely overlooked this before.

So to make a fair comparison to that C implementation, I have to compare 
the Julia speed (10-15 times BLAS speed) with the C speed (1.3 times BLAS 
speed) in the first regime, and the Julia speed (100 times BLAS speed) with 
the C speed (4 to 5 times BLAS speed) in the second regime. Any idea on 
where the big difference between Julia and C is coming from?

Best regards,

Jutho


[julia-users] matrix multiplication with Julia

2014-02-11 Thread Jutho
Dear Julia users/developers,

for reasons that I will not go into right now, I wanted to investigate how 
much one loses by defining matrix multiplication within Julia over using 
the BLAS functionality. Doing this in C, I found the following reference:
http://www.stanford.edu/~jacobm/matrixmultiply.html
which shows that the speed gain of BLAS stagnates to about a factor 4 or 5 
of the native C algorithm without any special tricks (except for maybe 
the -O2 -funroll-loops specified to the compiler). While this page is very 
brief in detail, I assume that the BLAS matrix multiplication was run 
single-threaded. I don't know if there is any coincidence in the factor 4, 
i.e. whether this has anything to do with the use of SIMD instructions in 
BLAS. 

I wanted to do the same comparison using native Julia, so I created the 
following IJulia notebook to show the results. 
http://nbviewer.ipython.org/gist/Jutho/8934314
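
For concreteness, the kind of naive Julia implementation being compared against BLAS looks
roughly like this (a sketch; the notebook's actual code may differ in details):

function naive_matmul!(C, A, B)
    m, k = size(A)
    k2, n = size(B)
    @assert k == k2 && size(C) == (m, n)
    fill!(C, zero(eltype(C)))
    @inbounds for j = 1:n, l = 1:k
        b = B[l, j]
        for i = 1:m              # innermost loop over the first index: column-major friendly
            C[i, j] += A[i, l] * b
        end
    end
    return C
end

A = rand(500, 500); B = rand(500, 500); C = similar(A)
naive_matmul!(C, A, B)           # warm-up / compilation
@time naive_matmul!(C, A, B)
@time A * B                      # BLAS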

I repeated 5 timings for a bunch of values of the matrix dimension (all 
square matrices). The first two plots show the Julia timings (respectively BLAS 
timings) divided by matrix size to the power 3, which should stagnate to a 
constant. This is the case for BLAS, and also for Julia, but there seems to 
be a first stagnation around matrix dimension 50-400, and then a sudden 
jump to a much larger value in the range 600-2000.

The third plot shows the speedup of BLAS (single threaded) over the Julia 
algorithm, which also reproduces the big jump around matrix dimension 500. 
For smaller matrix dimensions, the speedup is of the order 10 to 15, which 
is larger but at least in the same order of magnitude as the C comparison 
on the cited webpage. But then, for matrix dimensions larger than 500, the 
Julia version seems to become a factor 100 slower than BLAS. Is this to be 
expected?

Best regards,

Jutho