Re: [julia-users] Re: sparse rank, sparse nullspace, sparse linear algebra over rationals?

2016-04-12 Thread Laurent Bartholdi
@Stefan: sorry, I had meant that Julia doesn't yet have sparse
non-(real/complex) matrices. However, I see that that's wrong too: I can
create sparse matrices with Int64 entries. It would now make a lot of sense
to implement the algorithms in linbox so as to compute rank etc. of Int or
BigInt sparse matrices.

@Alex: if you're only interested in the 10 lowest singular values of a
sparse matrix, then MATLAB is quite good at it. Here is the relevant excerpt from its documentation:

S = svds(A,K,SIGMA) computes the K singular values closest to the
scalar shift SIGMA.  For example, S = svds(A,K,0) computes the K
smallest singular values.
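
If one wants to drive that MATLAB routine from Julia, a minimal untested sketch along the lines of the MATLAB.jl-based null-space helper further down this thread might look as follows (the function name and the k = 10, sigma = 0 choices are only illustrative):

using MATLAB
m_session = MSession()

# Sketch: ship the sparse matrix to MATLAB and ask svds for the k singular
# values closest to 0, i.e. the smallest ones.
function matlab_smallest_svals(A::SparseMatrixCSC, k::Integer=10)
    put_variable(m_session, :A, convert(SparseMatrixCSC{Float64}, A))
    eval_string(m_session, "S = svds(A,$k,0);")
    jvariable(get_mvariable(m_session, :S))
end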


On Tue, 12 Apr 2016 at 22:29 Stefan Karpinski  wrote:

> Julia does have complex sparse matrices:
>
> julia> C = sparse(rand(1:10, 10), rand(1:10, 10), randn(10) + randn(10)im,
> 10, 10)
> 10x10 sparse matrix with 9 Complex{Float64} entries:
>   [7 ,  1]  =  1.1711-0.587334im
>   [5 ,  3]  =  0.940755+1.00755im
>   [1 ,  4]  =  1.51515-1.77335im
>   [4 ,  4]  =  -0.815624-0.678038im
>   [9 ,  5]  =  -0.598111-0.970266im
>   [2 ,  6]  =  0.671339-1.07398im
>   [7 ,  6]  =  -1.69705+1.08698im
>   [10,  7]  =  -1.32233-1.88083im
>   [7 , 10]  =  1.26453-0.399423im
>
> How much you can do with these depends on what kinds of special methods
> are implemented, but you can call eigs on them, which lets you figure out
> what the rank is:
>
> julia> map(norm,eigs(C)[1])
> 6-element Array{Float64,1}:
>  1.74612
>  1.74612
>  1.06065
>  4.28017e-6
>  4.28016e-6
>  4.28016e-6
>
> In this case, for example, the matrix is rank 3.
>
> On Tue, Apr 12, 2016 at 3:29 PM, Laurent Bartholdi <
> laurent.bartho...@gmail.com> wrote:
>
>> It's tricky, but sprank is definitely not competitive with finer
>> implementations. As you certainly know, rank calculations are very
>> unstable, so if your matrix has integer entries you should prefer an exact
>> method, or a calculation in a finite field. (Sadly, Julia does not contain,
>> yet, sparse matrices with non-real/complex entries). See
>> http://dx.doi.org.sci-hub.io/10.1145/2513109.2513116 for a recent
>> discussion of sparse matrix rank.
>>
>> The following routines compute null spaces. To obtain the rank, you
>> subtract the nullspace dimension from the number of columns (by the
>> rank-nullity theorem). If you want pure Julia, you could try
>>
>> function julia_null_space{T}(A::SparseMatrixCSC{T})
>> SVD = svdfact(full(A), thin = false)
>> any(1.e-8 .< SVD.S .< 1.e-4) && error("Values are dangerously small")
>> indstart = sum(SVD.S .> 1.e-8) + 1
>> nullspace = SVD.Vt[indstart:end,:]'
>> return sparse(nullspace .* (abs(nullspace) .> 1.e-8))
>> end
>>
>> If you have MATLAB, the following is faster:
>>
>> using MATLAB
>> global m_session = MSession()
>>
>> function matlab_null_space{T}(A::SparseMatrixCSC{T})
>> m, n = size(A)
>> (m == 0 || n == 0) && return speye(T, n)
>>
>> put_variable(m_session,:A,convert(SparseMatrixCSC{Float64},A))
>> eval_string(m_session,"N = spqr_null(A,struct('tol',1.e-8));")
>> eval_string(m_session,"ns = spqr_null_mult(N,speye(size(N.X,2)),1);")
>> eval_string(m_session,"ns = sparseclean(ns,1.e-8);")
>> ns = get_mvariable(m_session,:ns)
>> return jvariable(ns)
>> end
>>
>> Finally, if your matrices are over Z, Q or a finite field, do take a look
>> at linbox (http://www.linalg.org/linbox-html/index.html).
>>
>> HTH, Laurent
>>
>> On Tue, 12 Apr 2016 at 20:49 Alex Dowling  wrote:
>>
>>> Hello,
>>>
>>> I am also looking to compute the (approximate) rank of a sparse matrix.
>>> For my applications I'm considering dimensions of 10k - 100k by 10k - 100k,
>>> not millions by millions. I've previously done this in MATLAB using the sprank
>>> command <http://www.mathworks.com/help/matlab/ref/sprank.html>. Does
>>> anyone have recommendations for similar functions in Julia?
>>>
>>> Thanks,
>>> Alex
>>>
>>>
>>> On Tuesday, November 17, 2015 at 3:08:29 AM UTC-6,
>>> ming...@irt-systemx.fr wrote:
>>>>
>>>> hello,
>>>>
>>>> getting the rank of a huge sparse matrix is mathematically difficult
>>>>
>>>>
>>>> http://math.stackexchange.com/questions/554777/rank-computation-of-large-matrices
>>>>
>>>> bests,
>>>> M.
>>>>
>>>> On Monday, 16 November

Re: [julia-users] Re: sparse rank, sparse nullspace, sparse linear algebra over rationals?

2016-04-12 Thread Laurent Bartholdi
It's tricky, but sprank is definitely not competitive with finer
implementations. As you certainly know, rank calculations are very
unstable, so if your matrix has integer entries you should prefer an exact
method, or a calculation in a finite field. (Sadly, Julia does not contain,
yet, sparse matrices with non-real/complex entries). See
http://dx.doi.org.sci-hub.io/10.1145/2513109.2513116 for a recent
discussion of sparse matrix rank.

The following routines compute null spaces. To obtain the rank, you
subtract the nullspace dimension from the number of columns (by the
rank-nullity theorem). If you want pure Julia, you could try

function julia_null_space{T}(A::SparseMatrixCSC{T})
SVD = svdfact(full(A), thin = false)
any(1.e-8 .< SVD.S .< 1.e-4) && error("Values are dangerously small")
indstart = sum(SVD.S .> 1.e-8) + 1
nullspace = SVD.Vt[indstart:end,:]'
return sparse(nullspace .* (abs(nullspace) .> 1.e-8))
end

If you have MATLAB, the following is faster:

using MATLAB
global m_session = MSession()

function matlab_null_space{T}(A::SparseMatrixCSC{T})
m, n = size(A)
(m == 0 || n == 0) && return speye(T, n)

put_variable(m_session,:A,convert(SparseMatrixCSC{Float64},A))
eval_string(m_session,"N = spqr_null(A,struct('tol',1.e-8));")
eval_string(m_session,"ns = spqr_null_mult(N,speye(size(N.X,2)),1);")
eval_string(m_session,"ns = sparseclean(ns,1.e-8);")
ns = get_mvariable(m_session,:ns)
return jvariable(ns)
end
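
With either routine, the rank then follows from the rank-nullity theorem; a small made-up example:

A = sparse([1,2,2], [1,1,2], [1.0,2.0,3.0], 3, 3)
N = julia_null_space(A)        # or matlab_null_space(A)
r = size(A,2) - size(N,2)      # rank = number of columns - nullity, here 2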

Finally, if your matrices are over Z, Q or a finite field, do take a look
at linbox (http://www.linalg.org/linbox-html/index.html).

HTH, Laurent

On Tue, 12 Apr 2016 at 20:49 Alex Dowling  wrote:

> Hello,
>
> I am also looking to compute the (approximate) rank of a sparse matrix.
> For my applications I'm considering dimensions of 10k - 100k by 10k - 100k,
> not millions by millions. I've previously done this in MATLAB using the sprank
> command <http://www.mathworks.com/help/matlab/ref/sprank.html>. Does
> anyone have recommendations for similar functions in Julia?
>
> Thanks,
> Alex
>
>
> On Tuesday, November 17, 2015 at 3:08:29 AM UTC-6, ming...@irt-systemx.fr
> wrote:
>>
>> hello,
>>
>> getting the rank of a huge sparse matrix is mathematically difficult
>>
>>
>> http://math.stackexchange.com/questions/554777/rank-computation-of-large-matrices
>>
>> bests,
>> M.
>>
>> On Monday, 16 November 2015 at 22:12:04 UTC+1, Laurent Bartholdi wrote:
>>>
>>> Hello world,
>>> I'm new at julia, and trying it out as a replacement for matlab and
>>> other computer algebra systems. I'm a bit stuck with sparse matrices: I
>>> have largeish matrices (10^6 x 10^6) that are very sparse, and
>>> want to know their rank and nullspace. (the matrices decompose by
>>> blocks, so the nullspace should be expressible as a sparse matrix).
>>>
>>> I tried with the latest (github) julia 0.5:
>>>
>>> julia> nullspace(sparse([1],[1],[1]))
>>> ERROR: MethodError: `nullspace` has no method matching
>>> nullspace(::SparseMatrixCSC{Int64,Int64})
>>>
>>> julia> nullspace(full(sparse([1],[1],[1])))
>>> 1x0 Array{Float64,2} # I'm a bit unhappy here, I was hoping to get a
>>> rational answer.
>>>
>>> julia> nullspace([1//1])
>>> 1x0 Array{Float32,2} # yikes! I'm down to 32 bits floats now.
>>>
>>> julia> rank(sparse([1],[1],[1.0]))
>>> ERROR: MethodError: `svdvals!` has no method matching
>>> svdvals!(::SparseMatrixCSC{Float64,Int64})
>>>
>>> julia> rank([1.0])
>>> ERROR: MethodError: `rank` has no method matching
>>> rank(::Array{Float64,1})
>>>
>>> julia> rank([1 0;0 1])
>>> 2 # finally something that works...
>>>
>>> Many thanks in advance! Laurent
>>>
>>> --
Laurent Bartholdi
DMA, École Normale Supérieure, 45 rue d'Ulm, 75005 Paris. +33 14432 2060
Mathematisches Institut, Universität Göttingen, Bunsenstrasse 3-5, D-37073
Göttingen. +49 551 39 7826


Re: [julia-users] Fast multiprecision integers?

2016-04-09 Thread Laurent Bartholdi
Thanks! Performance, however, would drop too much (Julia will not fit 
FastInts into registers), and memory is more an issue than time.

On Saturday, 9 April 2016 17:49:07 UTC+2, Erik Schnetter wrote:
>
> Laurent 
>
> You can use a construct such as 
>
> ```Julia 
> immutable FastInt 
> isbig::Bool 
> small::Int 
> big::BigInt 
> FastInt(x::Int) = new(false, x) 
> FastInt(x::BigInt) = new(true, 0, x) 
> end 
> getbig(x::FastInt) = x.isbig ? x.big : BigInt(x.small) 
> function add(x::FastInt, y::FastInt) 
> if !x.isbig && !y.isbig 
> # add overflow checking 
> return FastInt(x.small + y.small) 
> end 
> FastInt(getbig(x) + getbig(y)) 
> end 
> ``` 
>
> This will be reasonably efficient (apart from memory usage), and since 
> unused `BigInt` objects are never initialized, no memory will be 
> allocated for them. 
>
> You could also fold the `isbig` field into `small` by e.g. choosing to 
> use `typemin(Int)` as a value that indicates that the `BigInt` field 
> should be used. 
>
> -erik 
>
>
>
> On Sat, Apr 9, 2016 at 8:07 AM, Laurent Bartholdi 
> > wrote: 
> > Thanks for the explanations. Am I correct in understanding that objects 
> are 
> > garbage-collected when they're only pointed to, and this causes the 
> problem? 
> > This kills my naive idea of using pointers to BigInts as a 64-bit 
> datatype 
> > compatible with ints, and is suggested by the following experiment: 
> > 
> > julia> l = [pointer_from_objref(BigInt(i)) for i=1:10] 
> > 10-element Array{Ptr{Void},1}: 
> >  Ptr{Void} @0x000118c77150 
> >  Ptr{Void} @0x000118c77170 
> >  Ptr{Void} @0x000118c77190 
> >  Ptr{Void} @0x000118c771b0 
> >  Ptr{Void} @0x000118c771d0 
> >  Ptr{Void} @0x000118c771f0 
> >  Ptr{Void} @0x000118c77210 
> >  Ptr{Void} @0x000118c77230 
> >  Ptr{Void} @0x000118c77250 
> >  Ptr{Void} @0x000118c77270 
> > 
> > 
> > julia> unsafe_pointer_to_objref(l[7]) 
> > 7 
> > 
> > 
> > julia> map(unsafe_pointer_to_objref,l) 
> > Segmentation fault: 11 
> > 
> > 
> > On Friday, 8 April 2016 17:47:34 UTC+2, Stefan Karpinski wrote: 
> >> 
> >> This could help if both parts of the union were plain old data like 
> Int64 
> >> and Float64, but in the case of Int and BigInt, one of them is a more 
> >> complex type, which would currently force the whole thing to be boxed 
> and 
> >> live in the heap anyway. What's needed here is the ability to 
> stack-allocate 
> >> objects that refer to the heap. We cannot do that now, but it is an 
> >> important planned GC improvement. 
> >> 
> >> On Fri, Apr 8, 2016 at 11:28 AM, Scott Jones  
> >> wrote: 
> >>> 
> >>> I like very much the idea of a discriminated union - that would help 
> >>> enormously with some of the stuff I work on. 
> >>> 
> >>> 
> >>> On Friday, April 8, 2016 at 8:47:59 AM UTC-4, Erik Schnetter wrote: 
> >>>> 
> >>>> Laurent 
> >>>> 
> >>>> I'm afraid you can't use `reinterpret` with a type such as `BigInt`. 
> >>>> 
> >>>> I think what you want is an extension of `Nullable`: A nullable type 
> >>>> represents an object of a particular type that might or might not be 
> >>>> there; there's hope that this would be done efficiently, e.g. via an 
> >>>> additional bit. What you want is an efficient representation of a 
> >>>> discriminated union -- a type that can represent either of two types, 
> >>>> but without the overhead from boxing the types as is currently done 
> in 
> >>>> a `Union`. Unfortunately, Julia currently doesn't provide this, but 
> it 
> >>>> would make sense to have a feature request for this. 
> >>>> 
> >>>> This might look like this: 
> >>>> ```Julia 
> >>>> immutable FastInt 
> >>>> val::DiscriminatedUnion{Int, BigInt} 
> >>>> end 
> >>>> function (+)(a::FastInt, b::FastInt) 
> >>>> if typeindex(a.val) == 1 & typeindex(b.val) == 1 
> >>>> ov,res = add_with_overflow(a.val[1], b.val[1]) 
> >>>> ov && return FastInt(BigInt(res)) 
> >>>> return FastInt(res) 
> >>>> end 
> >>>> # TODO: handle mixed case 
> >>>> FastIn

Re: [julia-users] Access to gmp from library?

2016-04-09 Thread Laurent Bartholdi
Thanks! In fact, I see now that the latest github version already does this.

(And my email to the list was triggered by a discussion with the Nemo 
people.)

Issue closed.

On Saturday, 9 April 2016 18:16:18 UTC+2, Isaiah wrote:
>
> 1) It seems that Julia includes libgmp.{so,dylib} but not gmp.h, which is 
>> necessary for the C library's compilation
>
>  
> There was some prior discussion about this, for a similar reason [1], but 
> I think they eventually decided to use MPIR.
>
> I don't know of a reason this shouldn't be done, and it likely requires 
> only a few additional lines in the makefile. Please open an issue on 
> JuliaLang/julia.
>
> (ps: Nemo may be of interest, in case you haven't seen it [2])
>
> 2) It seems some black magic will be required to locate the gmp 
>> installation path within the Julia tree
>>
>
> I believe (hopefully Tony will confirm) that you need to check these two 
> paths:
>
> joinpath(JULIA_HOME, "../lib")  # source builds
> joinpath(JULIA_HOME, "../lib/julia")   # binary downloads
>
> Best,
> Isaiah
>
>
> [1] https://groups.google.com/d/msg/julia-users/8emhRgpTN5E/8IouaSnn5xwJ
> [2] http://nemocas.org/ https://github.com/Nemocas/Nemo.jl
>
> On Sat, Apr 9, 2016 at 8:13 AM, Laurent Bartholdi  > wrote:
>
>> Dear all,
>> My package contains a C library that requires gmp. It would be very nice 
>> to be able to use Julia's gmp, so as not to install a duplicate copy of gmp 
>> as my package's library's dependency. However,
>> 1) It seems that Julia includes libgmp.{so,dylib} but not gmp.h, which is 
>> necessary for the C library's compilation
>> 2) It seems some black magic will be required to locate the gmp 
>> installation path within the Julia tree
>>
>> Suggestions are welcome on the best way to proceed!
>>
>
>

[julia-users] Access to gmp from library?

2016-04-09 Thread Laurent Bartholdi
Dear all,
My package contains a C library that requires gmp. It would be very nice to 
be able to use Julia's gmp, so as not to install a duplicate copy of gmp as 
my package's library's dependency. However,
1) It seems that Julia includes libgmp.{so,dylib} but not gmp.h, which is 
necessary for the C library's compilation
2) It seems some black magic will be required to locate the gmp 
installation path within the Julia tree

Suggestions are welcome on the best way to proceed!
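
As a starting point for (2), an untested sketch that asks the dynamic loader where Julia's own libgmp lives, with the two candidate directories suggested in the reply above as fallbacks; note this does nothing about the missing gmp.h of point (1):

libgmp = Libdl.dlpath(Libdl.dlopen("libgmp"))    # path of the libgmp Julia loaded
gmpdir = dirname(libgmp)
candidates = [joinpath(JULIA_HOME, "..", "lib"),            # source builds
              joinpath(JULIA_HOME, "..", "lib", "julia")]   # binary downloads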


Re: [julia-users] Fast multiprecision integers?

2016-04-09 Thread Laurent Bartholdi
Thanks for the explanations. Am I correct in understanding that objects are 
garbage-collected when they're only pointed to, and this causes the 
problem? This kills my naive idea of using pointers to BigInts as a 64-bit 
datatype compatible with ints, and is suggested by the following experiment:

julia> l = [pointer_from_objref(BigInt(i)) for i=1:10]
10-element Array{Ptr{Void},1}:
 Ptr{Void} @0x000118c77150
 Ptr{Void} @0x000118c77170
 Ptr{Void} @0x000118c77190
 Ptr{Void} @0x000118c771b0
 Ptr{Void} @0x000118c771d0
 Ptr{Void} @0x000118c771f0
 Ptr{Void} @0x000118c77210
 Ptr{Void} @0x000118c77230
 Ptr{Void} @0x000118c77250
 Ptr{Void} @0x000118c77270


julia> unsafe_pointer_to_objref(l[7])
7


julia> map(unsafe_pointer_to_objref,l)
Segmentation fault: 11
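
For comparison, the crash should disappear if the BigInts themselves stay reachable from Julia, e.g. rooted in an array, since only then are the raw pointers safe to dereference (sketch):

bigs = [BigInt(i) for i in 1:10]            # keeps the objects alive (GC roots)
l = [pointer_from_objref(b) for b in bigs]
map(unsafe_pointer_to_objref, l)            # fine as long as `bigs` is reachable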


On Friday, 8 April 2016 17:47:34 UTC+2, Stefan Karpinski wrote:
>
> This could help if both parts of the union were plain old data like Int64 
> and Float64, but in the case of Int and BigInt, one of them is a more 
> complex type, which would currently force the whole thing to be boxed and 
> live in the heap anyway. What's needed here is the ability to 
> stack-allocate objects that refer to the heap. We cannot do that now, but 
> it is an important planned GC improvement.
>
> On Fri, Apr 8, 2016 at 11:28 AM, Scott Jones  > wrote:
>
>> I like very much the idea of a discriminated union - that would help 
>> enormously with some of the stuff I work on.
>>
>>
>> On Friday, April 8, 2016 at 8:47:59 AM UTC-4, Erik Schnetter wrote:
>>
>>> Laurent 
>>>
>>> I'm afraid you can't use `reinterpret` with a type such as `BigInt`. 
>>>
>>> I think what you want is an extension of `Nullable`: A nullable type 
>>> represents an object of a particular type that might or might not be 
>>> there; there's hope that this would be done efficiently, e.g. via an 
>>> additional bit. What you want is an efficient representation of a 
>>> discriminated union -- a type that can represent either of two types, 
>>> but without the overhead from boxing the types as is currently done in 
>>> a `Union`. Unfortunately, Julia currently doesn't provide this, but it 
>>> would make sense to have a feature request for this. 
>>>
>>> This might look like this: 
>>> ```Julia 
>>> immutable FastInt 
>>> val::DiscriminatedUnion{Int, BigInt} 
>>> end 
>>> function (+)(a::FastInt, b::FastInt) 
>>> if typeindex(a.val) == 1 & typeindex(b.val) == 1 
>>> ov,res = add_with_overflow(a.val[1], b.val[1]) 
>>> ov && return FastInt(BigInt(res)) 
>>> return FastInt(res) 
>>> end 
>>> # TODO: handle mixed case 
>>> FastInt(a.val[2] + b.val[2]) 
>>> end 
>>> ``` 
>>>
>>> This would be the same idea as yours, but the `reinterpret` occurs 
>>> within Julia, in a type-safe and type-stable manner, in a way such 
>>> that the compiler can optimize better. 
>>>
>>> You could try defining a type that contains two fields, both an `Int` 
>>> and a `BigInt` -- maybe `BigInt` will handle the case of a value that 
>>> is never used in a more efficient manner that doesn't require 
>>> allocating an object. 
>>>
>>> -erik 
>>>
>>> On Fri, Apr 8, 2016 at 2:07 AM, Laurent Bartholdi 
>>>  wrote: 
>>> > Dear all, 
>>> > How hard would it be to code arbitrary-precision integers in Julia 
>>> with at 
>>> > worst 2x performance loss over native Ints? 
>>> > 
>>> > This is what I have in mind: have a bitstype structure, say 64 bits, 
>>> which 
>>> > is either the address of a BigInt (if even), or an Int64 (if odd). 
>>> Addition 
>>> > would be something like: 
>>> > 
>>> > function +(a::FastInt,b::FastInt) 
>>> > if a&b&1 
>>> > (result,obit) = @llvm.sadd.with.overflow.i64(a,b&~1) 
>>> > obit ? reinterpret(FastInt,BigInt(a>>1)+(b>>1)) : result 
>>> > elseif a&1 
>>> > reinterpret(FastInt,(a>>1) + reinterpret(BigInt,b)) 
>>> > elseif b&1 
>>> > reinterpret(FastInt,reinterpret(BigInt,a) + b>>1) 
>>> > else 
>>> > reinterpret(FastInt,reinterpret(BigInt,a) + 
>>> reinterpret(BigInt,b)) 
>>> > end 
>>> > end 
>>> > 
>>> > (code not meant to be run, just a skeleton) 
>>> > 
>>> > This would be very useful in the development of computer algebra 
>>> systems, in 
>>> > which BigInts are too slow and eat up too much memory, but one is 
>>> ready to 
>>> > pay a small price for guard against arithmetic overflows. 
>>> > 
>>> > If this is too complicated, then perhaps at least a type of integers 
>>> that 
>>> > would raise an error in case of over/underflows? Those could be caught 
>>> in 
>>> > throw/catch enclosures, so the user could restart the operation with 
>>> > BigInts. 
>>> > 
>>> > TIA, Laurent 
>>>
>>>
>>>
>>> -- 
>>> Erik Schnetter  
>>> http://www.perimeterinstitute.ca/personal/eschnetter/ 
>>>
>>
>

Re: [julia-users] Fast multiprecision integers?

2016-04-08 Thread Laurent Bartholdi
I understand my sketch is very specific... if memory efficiency is at
issue, it's important to treat pointers as 61-bit and to accept working
with 63-bit signed ints if discriminated unions are to exist at no memory cost.
(In the computer algebra applications I have in mind, losing 50% speed is
fine, losing 50% memory is not)

On Fri, Apr 8, 2016, 17:47 Stefan Karpinski  wrote:

> This could help if both parts of the union were plain old data like Int64
> and Float64, but in the case of Int and BigInt, one of them is a more
> complex type, which would currently force the whole thing to be boxed and
> live in the heap anyway. What's needed here is the ability to
> stack-allocate objects that refer to the heap. We cannot do that now, but
> it is an important planned GC improvement.
>
> On Fri, Apr 8, 2016 at 11:28 AM, Scott Jones 
> wrote:
>
>> I like very much the idea of a discriminated union - that would help
>> enormously with some of the stuff I work on.
>>
>>
>> On Friday, April 8, 2016 at 8:47:59 AM UTC-4, Erik Schnetter wrote:
>>
>>> Laurent
>>>
>>> I'm afraid you can't use `reinterpret` with a type such as `BigInt`.
>>>
>>> I think what you want is an extension of `Nullable`: A nullable type
>>> represents an object of a particular type that might or might not be
>>> there; there's hope that this would be done efficiently, e.g. via an
>>> additional bit. What you want is an efficient representation of a
>>> discriminated union -- a type that can represent either of two types,
>>> but without the overhead from boxing the types as is currently done in
>>> a `Union`. Unfortunately, Julia currently doesn't provide this, but it
>>> would make sense to have a feature request for this.
>>>
>>> This might look like this:
>>> ```Julia
>>> immutable FastInt
>>> val::DiscriminatedUnion{Int, BigInt}
>>> end
>>> function (+)(a::FastInt, b::FastInt)
>>> if typeindex(a.val) == 1 & typeindex(b.val) == 1
>>> ov,res = add_with_overflow(a.val[1], b.val[1])
>>> ov && return FastInt(BigInt(res))
>>> return FastInt(res)
>>> end
>>> # TODO: handle mixed case
>>> FastInt(a.val[2] + b.val[2])
>>> end
>>> ```
>>>
>>> This would be the same idea as yours, but the `reinterpret` occurs
>>> within Julia, in a type-safe and type-stable manner, in a way such
>>> that the compiler can optimize better.
>>>
>>> You could try defining a type that contains two fields, both an `Int`
>>> and a `BigInt` -- maybe `BigInt` will handle the case of a value that
>>> is never used in a more efficient manner that doesn't require
>>> allocating an object.
>>>
>>> -erik
>>>
>>> On Fri, Apr 8, 2016 at 2:07 AM, Laurent Bartholdi
>>>  wrote:
>>> > Dear all,
>>> > How hard would it be to code arbitrary-precision integers in Julia
>>> with at
>>> > worst 2x performance loss over native Ints?
>>> >
>>> > This is what I have in mind: have a bitstype structure, say 64 bits,
>>> which
>>> > is either the address of a BigInt (if even), or an Int64 (if odd).
>>> Addition
>>> > would be something like:
>>> >
>>> > function +(a::FastInt,b::FastInt)
>>> > if a&b&1
>>> > (result,obit) = @llvm.sadd.with.overflow.i64(a,b&~1)
>>> > obit ? reinterpret(FastInt,BigInt(a>>1)+(b>>1)) : result
>>> > elseif a&1
>>> > reinterpret(FastInt,(a>>1) + reinterpret(BigInt,b))
>>> > elseif b&1
>>> > reinterpret(FastInt,reinterpret(BigInt,a) + b>>1)
>>> > else
>>> > reinterpret(FastInt,reinterpret(BigInt,a) +
>>> reinterpret(BigInt,b))
>>> > end
>>> > end
>>> >
>>> > (code not meant to be run, just a skeleton)
>>> >
>>> > This would be very useful in the development of computer algebra
>>> systems, in
>>> > which BigInts are too slow and eat up too much memory, but one is
>>> ready to
>>> > pay a small price for guard against arithmetic overflows.
>>> >
>>> > If this is too complicated, then perhaps at least a type of integers
>>> that
>>> > would raise an error in case of over/underflows? Those could be caught
>>> in
>>> > throw/catch enclosures, so the user could restart the operation with
>>> > BigInts.
>>> >
>>> > TIA, Laurent
>>>
>>>
>>>
>>> --
>>> Erik Schnetter 
>>> http://www.perimeterinstitute.ca/personal/eschnetter/
>>>
>>
> --
Laurent Bartholdi
DMA, École Normale Supérieure, 45 rue d'Ulm, 75005 Paris. +33 14432 2060
Mathematisches Institut, Universität Göttingen, Bunsenstrasse 3-5, D-37073
Göttingen. +49 551 39 7826


[julia-users] Fast multiprecision integers?

2016-04-07 Thread Laurent Bartholdi
Dear all,
How hard would it be to code arbitrary-precision integers in Julia with at 
worst 2x performance loss over native Ints?

This is what I have in mind: have a bitstype structure, say 64 bits, which 
is either the address of a BigInt (if even), or an Int64 (if odd). Addition 
would be something like:

function +(a::FastInt,b::FastInt)
if a&b&1
(result,obit) = @llvm.sadd.with.overflow.i64(a,b&~1)
obit ? reinterpret(FastInt,BigInt(a>>1)+(b>>1)) : result
elseif a&1
reinterpret(FastInt,(a>>1) + reinterpret(BigInt,b))
elseif b&1
reinterpret(FastInt,reinterpret(BigInt,a) + b>>1)
else
reinterpret(FastInt,reinterpret(BigInt,a) + reinterpret(BigInt,b))
end
end

(code not meant to be run, just a skeleton)

This would be very useful in the development of computer algebra systems, 
in which BigInts are too slow and eat up too much memory, but one is ready 
to pay a small price for guard against arithmetic overflows.

If this is too complicated, then perhaps at least a type of integers that 
would raise an error in case of over/underflows? Those could be caught in 
throw/catch enclosures, so the user could restart the operation with 
BigInts.
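
For the record, a minimal sketch of that fallback is already possible with Base.checked_add, which throws an OverflowError that the caller can catch and retry with BigInt (names are illustrative):

function safe_add(a::Int, b::Int)
    try
        return Base.checked_add(a, b)          # throws OverflowError on overflow
    catch e
        isa(e, OverflowError) || rethrow(e)
        return BigInt(a) + BigInt(b)           # redo the operation exactly
    end
end

safe_add(typemax(Int), 1)    # == big(typemax(Int)) + 1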

TIA, Laurent


Re: [julia-users] Re: Macro expansion for ccall with tuple argument

2016-04-07 Thread Laurent Bartholdi
Sorry, I still don't get it entirely. The code now looks like this (in 
simplified form) with a callable object Obj defined in the module:

macro CALL_nARGS(obj,args...)
    quote
        f = handler_function($(esc(obj)))
        ccall(f,Obj,$(Expr(:tuple,[Obj for arg in 1:length(args)]...)),$([esc(arg) for arg in args]...))
    end
end

(obj::Obj)(args...) = @CALL_nARGS(obj,map(Obj,args)...)

and given x a function object taking one argument and xx a function object 
taking 2 arguments, and y an object,

julia> GAP.@CALL_nARGS(x,y) # all fine


julia> x(y) # same answer


julia> GAP.@CALL_nARGS(xx,y,y) # all fine


julia> xx(y,y) # oops!
ERROR: MethodError: no method matching cconvert(::Type{GAP.Obj}, ::GAP.Obj, 
::GAP.Obj)

So the splicing $(args...) you wrote is necessary to pass the correct 
arguments to ccall, but the cconvert method gets applied to "args..." and 
not individually to all elements of args.

Strangely enough, cconvert is not documented in Julia (?cconvert returns 
nothing) even though it's on the online docs; and no method seems available 
(methods(cconvert) returns nothing).
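
A likely explanation of the xx(y,y) failure: a macro only sees the literal expression map(Obj,args)..., i.e. a single not-yet-splatted argument, so the type tuple it builds has length 1 while two values arrive at run time. One way around this, sketched here without testing, is a generated function, which does know the actual number of arguments (syntax assumes Julia 0.5 call overloading; Obj and handler_function are as above):

@generated function (obj::Obj)(args...)
    n = length(args)                                  # real arity, known here
    types = Expr(:tuple, [:Obj for i in 1:n]...)
    convargs = [:(Obj(args[$i])) for i in 1:n]
    quote
        f = handler_function(obj)
        ccall(f, Obj, $types, $(convargs...))         # one Obj slot per argument
    end
end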

Thanks in advance! Laurent


Re: [julia-users] Re: Macro expansion for ccall with tuple argument

2016-04-07 Thread Laurent Bartholdi
Thanks a lot!
There remains a weird issue about globals being mangled by the module name: 
say that all my code is written in a module called "GAP". Assuming I put 
this in the module:

macro do_ccall(f,args...) :(ccall($f, Int, $(Expr(:tuple, [Int for i in 1:length(args)]...)), $(args...))) end


julia> using GAP

julia> x = GAP.Obj(0);

julia> GAP.@do_ccall(x,1,2,3)
ERROR: UndefVarError: x not defined

julia> macroexpand(:(GAP.@do_ccall(x,1,2,3)))
:(ccall(GAP.x,GAP.Int,(Int64,Int64,Int64),1,2,3))

Ahah... so x got treated as GAP.x. I tried then

:(ccall($(esc(f)), Int, $(Expr(:tuple, [Int for i in 1:length(args)]...)), $(args...)))

and this works now; though if one of the arguments were also x the same 
problems would occur. How should I make sure the code emitted contains 
exactly the binding of its argument (x), and not something else (GAP.x)?
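
The usual fix is to escape every piece of user syntax that is interpolated into the result, not only f; an untested sketch:

macro do_ccall(f, args...)
    :(ccall($(esc(f)), Int,
            $(Expr(:tuple, [Int for i in 1:length(args)]...)),
            $([esc(a) for a in args]...)))
end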


[julia-users] Macro expansion for ccall with tuple argument

2016-04-07 Thread Laurent Bartholdi
Hi,
I'm trying to understand macro expansion, and am a bit stuck. I would like, 
in distilled form, to have a macro "@do_ccall" which calls its argument 
with Int type. Thus I would like

julia> macroexpand(:(@do_ccall(f,1,2,3)))
:(ccall(f,Int,(Int64,Int64,Int64),args...))

and I programmed it as
julia> macro do_ccall(f,args...) :(ccall(f,Int,$(ntuple(i->Int,length(args))),args...)) end
@do_ccall (macro with 1 method)

Now the tuple argument just doesn't seem to get interpreted properly:

julia> @do_ccall(C_NULL,1,2,3)
ERROR: syntax: ccall argument types must be a tuple; try "(T,)"
 in eval(::Module, ::Any) at ./boot.jl:237

(note: calling the routine at C_NULL would have crashed Julia; it's just 
for demonstration purposes)

Thanks in advance! Laurent


Re: [julia-users] Access stack frame address?

2016-03-31 Thread Laurent Bartholdi
Thanks a lot! That works perfectly!

Closed.

On Thu, Mar 31, 2016, 14:08 Yichao Yu  wrote:

> On Thu, Mar 31, 2016 at 7:52 AM, Yichao Yu  wrote:
> > On Thu, Mar 31, 2016 at 5:23 AM, Laurent Bartholdi
> >  wrote:
> >> Thanks for the quick reply! Yes, that would be wonderful.
> >>
> >> I want to interface to a library that has its own garbage collection;
> that
> >> library walks the stack to find potential objects that must be kept
> alive.
> >> Therefore, all calls to that library must be done in the form
> >>   library_global_StackBottomBags = __builtin_frame_address(0);
> >>   library_function(...)
> >
> >
> > OK, this is a reasonable use case if the library requires this
> > It's a weird API though.
> >
> > You can use llvm intrinsics to do that. This is basically how
> > `__builtin_frame_address` is implemented in clang.
> >
> > julia> function f()
> >Base.llvmcall(("""
> >   declare i8 *@llvm.frameaddress(i32)
> >   """, """
> >   %1 = call i8 *@llvm.frameaddress(i32 0)
> >   ret i8 *%1
> >   """), Ptr{UInt8}, Tuple{})
> >end
> > f (generic function with 1 method)
> >
> > julia> f()
> > Ptr{UInt8} @0x7ffecb9a3130
> >
>
> Note that this is 0.5 only.
>
> You can hack sth to work on 0.4
>
> ```
> julia> f(p) = p
> f (generic function with 1 method)
>
> julia> g() = ccall(cfunction(f, Ptr{Void}, Tuple{Ptr{Void}}),
> Ptr{Void}, (Ptr{Int},), &1)
> g (generic function with 1 method)
>
> julia> g()
> Ptr{Void} @0x7ffe4ec24920
> ```
>
> >
> >
> >>
> >>
> >> On Thursday, 31 March 2016 00:25:13 UTC+2, Yichao Yu wrote:
> >>>
> >>>
> >>> On Mar 30, 2016 6:22 PM, "Yichao Yu"  wrote:
> >>> >
> >>> >
> >>> > On Mar 30, 2016 6:21 PM, "Laurent Bartholdi" 
> >>> > wrote:
> >>> > >
> >>> > > Hi,
> >>> > > Is there a way to obtain the address of the current stack frame
> (the
> >>> > > ebp register on x86 processors)?
> >>> > >
> >>> > > In GCC, there's the builtin primitive __builtin_frame_address() that
> >>> > > does precisely that.
> >>> >
> >>> > Why do you want this?
> >>> >
> >>>
> >>> It's possible but should not be done in general.
> >>>
> >>> > >
> >>> > > Many thanks in advance, Laurent
>
-- 
Laurent Bartholdi
DMA, École Normale Supérieure, 45 rue d'Ulm, 75005 Paris. +33 14432 2060
Mathematisches Institut, Universität Göttingen, Bunsenstrasse 3-5, D-37073
Göttingen. +49 551 39 7826


Re: [julia-users] Access stack frame address?

2016-03-31 Thread Laurent Bartholdi
Thanks for the quick reply! Yes, that would be wonderful.

I want to interface to a library that has its own garbage collection; that 
library walks the stack to find potential objects that must be kept alive. 
Therefore, all calls to that library must be done in the form
  library_global_StackBottomBags = __builtin_frame_address(0);
  library_function(...)

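Putting the llvmcall trick from the reply above together with that calling convention might look roughly like this (untested; the library and symbol names are placeholders):

frameaddress() = Base.llvmcall(
    ("declare i8* @llvm.frameaddress(i32)",
     "%p = call i8* @llvm.frameaddress(i32 0)\nret i8* %p"),
    Ptr{UInt8}, Tuple{})

function call_library()
    # store the current frame address in the library's global, then call in
    g = cglobal((:library_global_StackBottomBags, "liblibrary"), Ptr{UInt8})
    unsafe_store!(g, frameaddress())
    ccall((:library_function, "liblibrary"), Void, ())
end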

On Thursday, 31 March 2016 00:25:13 UTC+2, Yichao Yu wrote:
>
>
> On Mar 30, 2016 6:22 PM, "Yichao Yu" > 
> wrote:
> >
> >
> > On Mar 30, 2016 6:21 PM, "Laurent Bartholdi"  > wrote:
> > >
> > > Hi,
> > > Is there a way to obtain the address of the current stack frame (the 
> ebp register on x86 processors)?
> > >
> > > In GCC, there's the builtin primitive __builtin_frame_address() that 
> does precisely that.
> >
> > Why do you want this?
> >
>
> It's possible but should not be done in general.
>
> > >
> > > Many thanks in advance, Laurent
>


[julia-users] Access stack frame address?

2016-03-30 Thread Laurent Bartholdi
Hi,
Is there a way to obtain the address of the current stack frame (the ebp 
register on x86 processors)?

In GCC, there's the builtin primitive __builtin_frame_address() that does 
precisely that.

Many thanks in advance, Laurent


[julia-users] sparse rank, sparse nullspace, sparse linear algebra over rationals?

2015-11-16 Thread Laurent Bartholdi
Hello world,
I'm new at julia, and trying it out as a replacement for matlab and other 
computer algebra systems. I'm a bit stuck with sparse matrices: I have 
largeish matrices (10^6 x 10^6) that are very sparse, and
want to know their rank and nullspace. (the matrices decompose by blocks, 
so the nullspace should be expressible as a sparse matrix).

I tried with the latest (github) julia 0.5:

julia> nullspace(sparse([1],[1],[1]))
ERROR: MethodError: `nullspace` has no method matching 
nullspace(::SparseMatrixCSC{Int64,Int64})

julia> nullspace(full(sparse([1],[1],[1])))
1x0 Array{Float64,2} # I'm a bit unhappy here, I was hoping to get a 
rational answer.

julia> nullspace([1//1])
1x0 Array{Float32,2} # yikes! I'm down to 32 bits floats now.

julia> rank(sparse([1],[1],[1.0]))
ERROR: MethodError: `svdvals!` has no method matching 
svdvals!(::SparseMatrixCSC{Float64,Int64})

julia> rank([1.0])
ERROR: MethodError: `rank` has no method matching rank(::Array{Float64,1})

julia> rank([1 0;0 1])
2 # finally something that works...

Many thanks in advance! Laurent



[julia-users] Re: Channels for multitasking / multithreading

2015-09-07 Thread Laurent Bartholdi
Here seems to be a safer implementation... but I'm still not quite 
confident it's optimally written. Opinions welcome!

function subdivide(v0,v1,filter::Function,middle::Function,doer::Function)
    c = Channel(100)
    results = Any[doer(v0),doer(v1)]
    filter(v0,v1) || return results # nothing to subdivide

    put!(c,(results[1],results[2]))

    np = nprocs() # determine the number of processes available
    idle = Array(Bool,np)
    idle[myid()] = true
    @sync begin
        for p=1:np
            if p != myid() || np == 1
                @async begin
                    while true
                        if isready(c)
                            idle[p] = false
                            (d0,d1) = take!(c)
                            v = middle(d0,d1)
                            info("Doing task $v on process $p, queue $c...\n")
                            d = remotecall_fetch(p,doer,v)
                            push!(results,d)
                            filter(d0,d) && put!(c,(d0,d))
                            filter(d,d1) && put!(c,(d,d1))
                        else
                            idle[p] = true
                            if all(idle) break end
                            yield()
                        end
                    end
                end
            end
        end
    end
    results
end
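
An alternative (untested sketch) is to track how many intervals are still outstanding and close the channel when that count reaches zero; workers then simply iterate over the channel and no idle-polling is needed. Since tasks are cooperatively scheduled the counter needs no lock; this assumes iterating a closed, drained Channel terminates cleanly.

function subdivide2(v0, v1, filter::Function, middle::Function, doer::Function)
    c = Channel(100)
    results = Any[doer(v0), doer(v1)]
    filter(v0, v1) || return results
    pending = 1                          # intervals put!() but not yet processed
    put!(c, (results[1], results[2]))
    @sync for p in workers()
        @async for (d0, d1) in c
            v = middle(d0, d1)
            d = remotecall_fetch(p, doer, v)
            push!(results, d)
            for pair in ((d0, d), (d, d1))
                if filter(pair...)
                    pending += 1
                    put!(c, pair)
                end
            end
            pending -= 1
            pending == 0 && close(c)     # nothing left in flight: wake everyone up
        end
    end
    results
end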

On Friday, 4 September 2015 09:30:50 UTC+2, Laurent Bartholdi wrote:
>
> Hello world,
> I'm completely new to Julia, and advice/comments are welcome. I'd like to 
> run calculations (~ 1 second per job) distributed along the different cores 
> of my CPU. Each calculation may trigger new calculations.
> More specifically, each calculation works on a number in an interval, and 
> may trigger a subdivision of the interval.
>
> I thought about the following implementation:
>
> function 
> subdivide(v0::Number,v1::Number,filter::Function,middle::Function,doer::Function)
> c = Channel(1000)
> results = Any[doer(v0),doer(v1)]
> put!(c,(results[1],results[2]))
> np = nprocs() # determine the number of processes available
> @sync begin
> for p=1:np
> if p != myid() || np == 1
>@async begin
>for (d0,d1)=c
> v = middle(d0,d1)
>#println("Doing $v on process $p")
> d = remotecall_fetch(p,doer,v)
> push!(results,d)
> filter(d0,d) && put!(c,(d0,d))
> filter(d,d1) && put!(c,(d,d1))
> isready(c) || break
>end
> end
> end
> end
> end
> results
> end
>
> typical use:
>
> julia> 
> subdivide(0,10,(x,y)->x[1]<y[1]-1,(x,y)->(x[1]+y[1])/2,v->[v,string(v)])
> 17-element Array{Any,1}:
>  Any[0,"0"]
>  Any[10,"10"]
>  ⋮
>  Any[8.125,"8.125"]
>  Any[9.375,"9.375"]
>
> As you see, the function "subdivide" receives a "doer" which computes a 
> bit of data, and a "filter" which decides if a middle point is required 
> between data points. "middle" computes the argument of a middle point.
>
> My worry is: certainly there will be race conditions when a calculation 
> has been removed from the channel, but the new calculations will not yet 
> have been put in; then one of the cores will stop prematurely. The last 
> "isready(c)" could also, potentially, return true but the upcoming "for 
> (d0,d1)=c" will block because meanwhile a calculation has been removed by 
> another (green) thread.
>
> How should I program this safely? It seems like a useful primitive, so 
> maybe it already exists...
>
> Thanks in advance, Laurent
>


[julia-users] Channels for multitasking / multithreading

2015-09-04 Thread Laurent Bartholdi
Hello world,
I'm completely new to Julia, and advice/comments are welcome. I'd like to 
run calculations (~ 1 second per job) distributed along the different cores 
of my CPU. Each calculation may trigger new calculations.
More specifically, each calculation works on a number in an interval, and 
may trigger a subdivision of the interval.

I thought about the following implementation:

function subdivide(v0::Number,v1::Number,filter::Function,middle::Function,doer::Function)
    c = Channel(1000)
    results = Any[doer(v0),doer(v1)]
    put!(c,(results[1],results[2]))
    np = nprocs() # determine the number of processes available
    @sync begin
        for p=1:np
            if p != myid() || np == 1
                @async begin
                    for (d0,d1)=c
                        v = middle(d0,d1)
                        #println("Doing $v on process $p")
                        d = remotecall_fetch(p,doer,v)
                        push!(results,d)
                        filter(d0,d) && put!(c,(d0,d))
                        filter(d,d1) && put!(c,(d,d1))
                        isready(c) || break
                    end
                end
            end
        end
    end
    results
end

typical use:
julia> 
subdivide(0,10,(x,y)->x[1]<y[1]-1,(x,y)->(x[1]+y[1])/2,v->[v,string(v)])
17-element Array{Any,1}:
 Any[0,"0"]
 Any[10,"10"]
 ⋮
 Any[8.125,"8.125"]
 Any[9.375,"9.375"]

As you see, the function "subdivide" receives a "doer" which computes a bit 
of data, and a "filter" which decides if a middle point is required between 
data points. "middle" computes the argument of a middle point.

My worry is: certainly there will be race conditions when a calculation has 
been removed from the channel, but the new calculations will not yet have 
been put in; then one of the cores will stop prematurely. The last 
"isready(c)" could also, potentially, return true but the upcoming "for 
(d0,d1)=c" will block because meanwhile a calculation has been removed by 
another (green) thread.

How should I program this safely? It seems like a useful primitive, so 
maybe it already exists...

Thanks in advance, Laurent


Re: [julia-users] readandwrite: how can I read a line as soon as it's written by the process?

2015-06-29 Thread Laurent Bartholdi
Thanks to all for the thorough replies!

Of course I'm not really interested in interaction with od; I just used it 
as an example of a program that reads input and produces output accordingly.

I wanted to interface to a program (gap) that flushes its output on 
terminals, but seemingly not on pipes.

That leaves 2 options: either change gap so that it doesn't buffer its 
output, or run the process on a pseudo-tty.

In all cases, the problem isn't with Julia :)
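
For the record, a possible stopgap that needs neither change is to wrap the command so that its C stdio output is line-buffered, e.g. with coreutils' stdbuf (untested, and only works if gap's buffering really comes from stdio; the gap invocation is just illustrative):

(so, si, pr) = readandwrite(`stdbuf -oL gap -q`)
write(si, "1+1;\n"); flush(si)
readline(so)                 # should now return as soon as gap prints a line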

On Sunday, 28 June 2015 22:39:36 UTC+2, Jameson wrote:
>
> yes, it is the fault of `od`. you can see this by using `cat` instead and 
> observing that it works. if you play around with the thresholds in `od`, 
> you will observe that it is doing block-buffering of 4096 byte chunks when 
> the input is a pipe.
>
> Unless you turn on write buffering (with `buffer_writes(si)`), the `write` 
> function implicitly calls `flush`, and will block the task until the write 
> completes (the actual `flush` function is a no-op). Therefore, it's best to 
> call write in a separate task if you are processing both ends of the pipe:
>
> julia> (so,si,pr) = readandwrite(`od`)
> (Pipe(open, 0 bytes waiting),Pipe(open, 0 bytes waiting),Process(`od`, 
> ProcessRunning))
>
> julia> @async write(si,repeat("test\n",1))
> Task (done) @0x7fe37aa0daa0
>
> (note, the line_buffered property has no effect)
>
> Note that Base.readavailable returns a random, non-zero number of bytes, 
> which is presumably not what you want?
>
> julia> Base.readavailable(so)
> 135168-element Array{UInt8,1}:
> ...
> julia> Base.readavailable(so)
> 61440-element Array{UInt8,1}:
> ...
>
> In order to get the full output, you need to instead call:
> julia> close(si)
> julia> readall(so)
>
>
> On Sun, Jun 28, 2015 at 9:08 AM Jeff Waller  > wrote:
>
>>
>>
>> On Saturday, June 27, 2015 at 5:03:53 PM UTC-4, ele...@gmail.com wrote:
>>>
>>> Is your od program flushing its output?  Maybe its stuck in its internal 
>>> buffers, not in the pipe.
>>>
>>
>> nah man, if it's od, it's not going to do that, the only thing 
>> simpler is cat; I suppose you could try cat.
>>
>> What happens when you do something like this?
>>
>> julia> fromStream,toStream,p=readandwrite(`cat`)
>> (Pipe(open, 0 bytes waiting),Pipe(open, 0 bytes waiting),Process(`cat`, ProcessRunning))
>>
>> julia> toStream.line_buffered
>> true
>>
>> julia> toStream.line_buffered = false
>> false
>>
>> julia> fromStream.line_buffered = false
>> false
>>
>> julia> write(toStream,"x")
>> 1
>>
>> julia> readavailable(fromStream)
>> 1-element Array{UInt8,1}:
>>  0x78
>>
>>
>

Re: [julia-users] readandwrite: how can I read a line as soon as it's written by the process?

2015-06-27 Thread Laurent Bartholdi
Dear Miguel:
Indeed, data is not made available to the pipe; though it should be there, 
because od prints lines as soon as they're available. I tried "readall", 
but it also blocks. I should have added that I tested this with the latest, 
0.4 release from github.

I also tried just reading one character, with "read(so,UInt8)", and this 
also blocks.

I notice that you are the author of the gnuplot package "Gaston"; so you 
are certainly familiar with the issue. Looking at Gaston's code, I see that 
you directly called :popen from the C library. Is there a reason not to use 
the higher-level interface of Julia?

I got more crashes by feeding large amounts of data to a pipe:

julia> (so,si,pr) = readandwrite(`od`);

julia> write(si,repeat("test\n",10));
^CERROR: InterruptException:Assertion failed: (req->handle == stream), 
function uv__write, file src/unix/stream.c, line 741.

signal (6): Abort trap: 6
__pthread_kill at /usr/lib/system/libsystem_kernel.dylib (unknown line)
Abort trap: 6
bash$

Thanks in advance! Laurent


[julia-users] readandwrite: how can I read a line as soon as it's written by the process?

2015-06-27 Thread Laurent Bartholdi
I'm trying to interact with an exterior program, and can't get it to work 
as I want. Let's say the program is "od" for example. I try:

julia> (so,si,pr) = readandwrite(`od`)
(Pipe(open, 0 bytes waiting),Pipe(open, 0 bytes waiting),Process(`od`, 
ProcessRunning))

julia> write(si,repeat("test\n",100))
500
julia> flush(si)
Pipe(open, 0 bytes waiting)

julia> readline(so)
^C # nothing happens, the process is blocking output

How do I manage to get Julia to give me the output of od? A line was 
already ready after the first 16 characters of input.

This is related to buffering: if I stuff the buffer, then I do get lines 
asynchronously:
julia> (so,si,pr) = readandwrite(`od`);
julia> write(si,repeat("test\n",1));
julia> readline(so)
"000062564  072163  072012  071545  005164  062564  072163 
 072012\n"

However, there also seems to be a bug in Julia here:
julia> (so,si,pr) = readandwrite(`od`);
julia> write(si,repeat("test\n",10));
# and Julia is stuck
^CERROR: InterruptException:
ERROR (unhandled task failure): write: broken pipe (EPIPE)
 in yieldto at ./task.jl:21
 in wait at ./task.jl:309
 in wait at ./task.jl:225
 in wait_full at ./multi.jl:573
 in take! at ./multi.jl:744
 in take_ref at ./multi.jl:752
 in call_on_owner at ./multi.jl:718
 in anonymous at task.jl:86

julia> pr # hmmm... I wonder in which state the process is
^CERROR: InterruptException:

bash$ # Julia crashed


[julia-users] Object attributes // dispatch on extra attributes?

2015-06-20 Thread Laurent Bartholdi
Dear Julia-list:

I'm a quite heavy user of the computer algebra system GAP 
(www.gap-system.org), and wondered how some of its features could be mapped 
to Julia.

GAP is an object-oriented language, in which objects can acquire attributes 
over time; and these attributes can be used by the method selection to 
determine the most efficient method to be applied. For a contrived example, 
integers could have a method "is_square" which returns true if the argument 
is a perfect square. When the method is_square is called, the result is 
stored in a cache (attached to the object). On the second call to the same 
method, the value is just looked up in the cache. Furthermore, the method 
"sqrt" for integers may test whether a cached value is known for the method 
is_square, and look it up, and in case the argument is a perfect square it 
may switch to a better square root algorithm. Even better, the method 
dispatcher can do this behind the scenes, by calling itself the square root 
algorithm for perfect squares in case the argument is already known to be a 
square.

(This is not really a GAP example; in GAP, only composed types (records, 
lists) are allowed to store attributes. In the internals, each object has a 
type and a bitlist storing its known attributes.)
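
The caching half of this can be approximated in Julia by a wrapper type that carries its own attribute dictionary; a rough sketch in 0.4-era syntax (dispatch on attributes acquired at run time has no direct analogue, since a value's type is fixed):

immutable CachedInt
    val::BigInt
    cache::Dict{Symbol,Any}
end
CachedInt(n) = CachedInt(BigInt(n), Dict{Symbol,Any}())

is_square(x::CachedInt) = get!(x.cache, :is_square) do
    isqrt(x.val)^2 == x.val            # computed once, then looked up
end

function Base.sqrt(x::CachedInt)
    is_square(x) && return isqrt(x.val)    # exact branch for perfect squares
    sqrt(Float64(x.val))
end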

I read through the Julia manual, but did not see any features ressembling 
those (caching results of methods, and allowing finer dispatching). I have 
also read http://arxiv.org/pdf/1209.5145.pdf which says that "the type of a 
value cannot change over its lifetime".

Is there still a way to do similar things in Julia?

Thanks in advance, and apologies if I missed this in the manual,
Laurent