[julia-users] Re: Transpose view of a matrix

2016-10-13 Thread Andreas Noack
The plan is that the transpose function will eventually return this, within 
the 1.0 time frame, but it's not done yet. It will probably not be a 
PermutedDimsArray, though, because that wouldn't do the right thing for the 
conjugate transpose of a complex matrix.
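
In the meantime, something like the following sketch gives the view semantics 
for the real case (assuming a 0.5-era Base where the PermutedDimsArray type 
exists, possibly only as Base.PermutedDimsArrays.PermutedDimsArray; it shares 
data with the parent but, as said above, cannot express conjugation):

X = rand(3, 4)
Y = PermutedDimsArray(X, (2, 1))  # 4x3 non-copying transposed view
Y[1, 2] = 99.0                    # writes go through to the parent array
@assert X[2, 1] == 99.0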

On Wednesday, October 12, 2016 at 5:46:42 PM UTC-4, Alex Williams wrote:
>
> Is there a preferred/standard way to create transposed view of a matrix. 
> More or less I want something like:
>
> Y = permutedims(X, [2,1])
>
> To return a view into X. Is the `PermutedDimsArray` that Tim Holy was 
> working on going to be available in Base soon?
>
> Is the answer buried in this thread somewhere? 
> https://github.com/JuliaLang/julia/issues/4774
>
> Thanks, -- Alex
>
>

[julia-users] Re: Hermitian or Symmetric + UniformScaling

2016-10-12 Thread Andreas Noack
It's not on purpose. It is just that it hasn't been implemented yet. It 
would be great if you could open a pull request with such a method.

You might also want to define a special type for C+λI such that you can 
avoid creating a new matrix, but it is probably better to experiment with 
such a type in a package until we know how it should work.
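
For reference, a minimal sketch of such a method (not necessarily how it 
should end up in Base; uses 0.4/0.5-era names like full):

import Base: +
+(A::Symmetric, J::UniformScaling) = Symmetric(full(A) + J)
+(J::UniformScaling, A::Symmetric) = A + J

With these definitions, C + lambda*I stays wrapped as Symmetric, since adding 
a multiple of the identity preserves symmetry.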

On Wednesday, October 12, 2016 at 3:13:22 AM UTC-4, cciliber wrote:
>
> Hi,
>
> I am new to Julia and I was trying to learn it by doing some basic linear 
> algebra workout : ) 
>
> I was trying to compute the matrix (C + lambda * I), where C is Symmetric 
> and I is the uniform scaling. However, apparently the sum operator for these 
> two objects is not defined. I was wondering if there is a reason for that, or 
> whether I can write the corresponding method for my modules.
>
> Thanks in advance!
> Carlo 
>


[julia-users] Re: linear algebra question

2016-10-05 Thread Andreas Noack
The documentation should be expanded with more examples. Many of the linear 
algebra functions work for arbitrary input types so if you construct a 
matrix with rational or integer inputs then many of the functions will 
still work. We don't have much support in base for fancy math on such 
matrices. In that case, you might want to take a look at 
http://www.nemocas.org/
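
For the span question, a hedged sketch of an exact test over the rationals 
(plain Gauss-Jordan elimination on the augmented matrix; none of these names 
exist in Base, and Rational{BigInt} elements avoid intermediate overflow):

# v is in span(v1,...,vn) iff [v1 ... vn | v] has no row [0 ... 0 | c], c != 0
function inspan{T<:Rational}(v::Vector{T}, vs::Vector{T}...)
    B = hcat(vs..., v)                       # augmented matrix
    m = size(B, 1)
    row = 1
    for col in 1:size(B, 2) - 1
        p = findfirst(x -> x != 0, B[row:end, col])
        p == 0 && continue
        p += row - 1
        B[row, :], B[p, :] = B[p, :], B[row, :]             # move pivot row up
        for r in 1:m
            r == row && continue
            B[r, :] -= (B[r, col] / B[row, col]) * B[row, :]  # eliminate
        end
        row += 1
        row > m && break
    end
    all(i -> !(all(B[i, 1:end-1] .== 0) && B[i, end] != 0), 1:m)
end

inspan([1//1, 2//1], [1//1, 0//1], [0//1, 1//1])                    # true
inspan([0//1, 0//1, 1//1], [1//1, 0//1, 0//1], [0//1, 1//1, 0//1])  # false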

On Wednesday, October 5, 2016 at 5:25:26 AM UTC-4, harven wrote:
>
> Is there a way to check if a vector v is in the linear span of a given 
> family of vectors v1,...vn?
> I don't think I saw such a function in the base library but I may have 
> missed it.
>
> I can use something like 
>
>  rank([v;v1...vn]) == rank([v1...vn])
>
> but that looks inefficient. 
>
> Also is there any facility to compute with matrices with integer/rational 
> coefficients?
> It seems that the standard library focuses on floats.
>


Re: [julia-users] Re: Why is BLAS.dot slower than BLAS.axpy!?

2016-09-11 Thread Andreas Noack
I get similar results for OpenBLAS. I expect that axpy gains more from
vectorization than dot.
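
A quick way to check locally (a sketch; on 0.4 the threading call is 
blas_set_num_threads(1), and the first call of each function should be 
discarded as compilation):

import Base.LinAlg: BLAS
n = 10^7
x, y = rand(n), rand(n)
BLAS.set_num_threads(1)
BLAS.dot(n, x, 1, y, 1); BLAS.axpy!(-2.0, x, y)  # warm up
@time BLAS.dot(n, x, 1, y, 1)
@time BLAS.axpy!(-2.0, x, y)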

On Fri, Sep 9, 2016 at 5:31 PM, Sheehan Olver <dlfivefi...@gmail.com> wrote:

> I did blas_set_num_threads(1) with the same profile numbers.  This is
> using Apple’s BLAS.
>
> Maybe I’ll try 0.5 and OpenBLAS for comparison.
>
> On 10 Sep 2016, at 2:34 AM, Andreas Noack <andreasnoackjen...@gmail.com>
> wrote:
>
> Try to time it again with threading disabled. Sometimes the
> threading heuristics can cause unintuitive performance.
>
> On Friday, September 9, 2016 at 6:39:13 AM UTC-4, Sheehan Olver wrote:
>>
>>
>> I have the following code that is part of a Householder routine, where 
>> j::Int64,
>> N::Int64, R.cols::Vector{Int64}, wp::Ptr{Float64}, M::Int64,
>> v::Ptr{Float64}:
>>
>>   …
>> for j=k:N
>> v=r+(R.cols[j]+k-2)*sz
>> dt=BLAS.dot(M,wp,1,v,1)
>> BLAS.axpy!(M,-2*dt,wp,1,v,1)
>> end
>> …
>>
>>
>>
>> For some reason, the BLAS.dot call takes 3x as long as the BLAS.axpy!
>> call.  Is this expected, or is there something wrong?
>>
>>
>>
>


[julia-users] Re: Error in call to eigs--0 index in arpack wrapper

2016-09-11 Thread Andreas Noack
It would be helpful if you could provide a self-contained example. Also, 
would it be possible to try out the release candidate for 0.5. We have made 
a few changes to the ARPACK wrappers so it would be useful to know if it is 
only happening on 0.4. Thanks.

On Saturday, September 10, 2016 at 10:41:28 PM UTC-4, Steven White wrote:
>
> It only happens when I give an initial vector.  I am also specifying in 
> LinearMap that it is both symmetric and hermitian, which are both true (the 
> matrix is also real), but maybe one is not supposed to say hermitian if it 
> is real?  Here is the call to LinearMap:
>
> HLinup = 
> LinearMap(applyHamup,npnt*nj;ismutating=false,issym=true,ishermitian=true)
>
>
> and the call to eigs:
>
> (valup,vecup,nconv,niter,nmult) = eigs(HLinup, nev=Nup, 
> which=:SR,ritzvec=true,ncv=50,maxiter=2,tol=etol,v0=voup)
>
>
> By the way, I was incorrectly assuming [ns]aupd wasn't modifying its 
> arguments because there was no ! in its name!
>
> -Steve
>
> On Saturday, September 10, 2016 at 6:18:01 PM UTC-7, Ralph Smith wrote:
>>
>> `ipntr` is modified by the call to `[ns]aupd`, normally to make a sane 
>> index set (the line you quote is there to initialize memory).  I find that 
>> the example using `eigs()` at the bottom of the LinearMap README still 
>> works, and have not encountered trouble in other use of Arpack under Julia. 
>> Can you provide more specifics of the case where you had trouble?
>>
>> On Saturday, September 10, 2016 at 7:08:43 PM UTC-4, Steven White wrote:
>>>
>>> I am calling eigs and I am getting an out of bounds, which looks like an 
>>> obvious mistake in the Arpack wrapper.  
>>>
>>> calling eigs
>>>
>>> ERROR: LoadError: LoadError: BoundsError: attempt to access 2169-element 
>>> Array{Float64,1}:
>>>
>>>    0.0 
>>>
>>>
>>>   at index [0:722]
>>>
>>>  in throw_boundserror at abstractarray.jl:156
>>>
>>>  in getindex at array.jl:289
>>>
>>>  in aupd_wrapper at linalg/arpack.jl:55
>>>
>>>  in _eigs at linalg/arnoldi.jl:228
>>>
>>>
>>> In the file arpack.jl I see line 55 is:
>>>
>>>
>>> x = workd[load_idx]
>>>
>>>
>>> Before that, load_idx is defined with:
>>>
>>> zernm1 = 0:(n-1)
>>>
>>> ...
>>>
>>> ipntr  = zeros(BlasInt, (sym && !cmplx) ? 11 : 14)
>>>
>>> ...
>>>
>>> load_idx = ipntr[1]+zernm1
>>>
>>>
>>> and workd was previously initialized with:
>>>
>>> workd  = Array(T, 3*n)
>>>
>>>
>>> So it looks like an attempt to get a section of an array with index
>>>
>>> starting at zero???
>>>
>>>
>>> This is on a Mac, Julia version 0.4.6, in the directory:
>>>
>>>
>>> /Applications/Julia-0.4.6.app/Contents/Resources/julia/share/julia/base/linalg
>>>
>>>
>>> If it matters, I was calling eigs with a LinearMap object (Package 
>>> LinearMaps)
>>>
>>

[julia-users] Re: Why is BLAS.dot slower than BLAS.axpy!?

2016-09-09 Thread Andreas Noack
Try to time it again with threading disabled. Sometimes the 
threading heuristics can cause unintuitive performance.

On Friday, September 9, 2016 at 6:39:13 AM UTC-4, Sheehan Olver wrote:
>
>
> I have the following code that is part of a Householder routine, where 
> j::Int64, 
> N::Int64, R.cols::Vector{Int64}, wp::Ptr{Float64}, M::Int64, 
> v::Ptr{Float64}:
>
>   …
> for j=k:N
> v=r+(R.cols[j]+k-2)*sz
> dt=BLAS.dot(M,wp,1,v,1)
> BLAS.axpy!(M,-2*dt,wp,1,v,1)
> end
> …
>
>
>
> For some reason, the BLAS.dot call takes 3x as long as the BLAS.axpy! 
> call.  Is this expected, or is there something wrong?
>
>
>

Re: [julia-users] Re: Julia equivalent to Matlab's [Q, R, P] = qr(A)?

2016-09-07 Thread Andreas Noack
My memory is too short. I just realized that I implemented a generic 
pivoted QR last fall so if you try out the prerelease of Julia 0.5 then 
you'll be able to compute the pivoted QR of a BigFloat matrix.
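
For example, on a 0.5 prerelease (a sketch; pivoting is requested with 
Val{true}):

A = big(rand(4, 4))            # Matrix{BigFloat}
F = qrfact(A, Val{true})
Q, R, p = F[:Q], F[:R], F[:p]  # permutation as a vector; Q*R reproduces A[:, p]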

On Wednesday, September 7, 2016 at 9:20:12 AM UTC-4, Andreas Noack wrote:
>
> We use LAPACK's QR with column pivoting. See 
> http://www.netlib.org/lapack/lug/node42.html. LAPACK uses blocking for 
> BLAS3 but that is not necessary for a generic implementation. So the task 
> is just to sort the columns by norm at each step. If you want to try 
> an implementation you can look for the reflector! and reflectorApply! 
> functions in LinAlg.
>
> On Wednesday, September 7, 2016 at 7:53:41 AM UTC-4, Stuart Brorson wrote:
>>
>> Thank you, Michele & Chris for your suggestion to use qrfact.  Good 
>> idea.  Unfortunately, I neglected to mention that my matrix is a 
>> BigFloat array, and qrfact with pivoting fails like this: 
>>
>> julia> Z = qrfact(A, true) 
>> ERROR: MethodError: `qrfact` has no method matching 
>> qrfact(::Array{BigFloat,2}, ::Bool) 
>>
>> Hmmm.  I guess I'll need to figure out how to implement the 
>> QR-with-pivot algorithm in Golub and Van Loan (ugh!).  Or is there a 
>> clear reference to the algorithm Julia uses? 
>>
>> Stuart 
>>
>>
>> On Tue, 6 Sep 2016, Chris Rackauckas wrote: 
>>
>> > Use qrfact() (or the in-place version). It doesn't return [Q,R,P] 
>> because 
>> > Julia is smarter than that. It returns a specially designed type for 
>> the QR 
>> > return. It holds all of the those parts so you can get Q, R, and P out, 
>> but 
>> > most of the time you won't need to. For example, if A is the return of 
>> > qrfact, then A\b will automatically dispatch to do the fast algorithm 
>> for a 
>> > QR-factored matrix (which is the most common use for accessing Q and 
>> R). 
>> > 
>> > On Tuesday, September 6, 2016 at 6:37:59 PM UTC-7, Stuart Brorson 
>> wrote: 
>> >> 
>> >> Hello Julia users, 
>> >> 
>> >> Matlab has a variant of the QR decomposition invoked like this: 
>> >> 
>> >> [Q,R,P] = qr(A) 
>> >> 
>> >> This variant of qr() returns matrix R where the diagonal elements are 
>> >> sorted from largest to smallest magnitude as you go from upper left to 
>> >> lower right.  The matrix P is the permutation matrix which permutes 
>> >> the rows/cols of A to give this ordering of the diagonal elements of 
>> >> R.  That is, Q*R = A*P. 
>> >> 
>> >> I tried doing the naive, analogous thing in Julia, but get an error: 
>> >> 
>> >> julia> A = rand(3,3) 
>> >> 3x3 Array{Float64,2}: 
>> >>   0.243071  0.454947   0.89657 
>> >>   0.112843  0.802457   0.375417 
>> >>   0.154241  0.0182734  0.992542 
>> >> 
>> >> julia> Q,R,P = qr(A) 
>> >> ERROR: BoundsError: attempt to access ( 
>> >> 3x3 Array{Float64,2}: 
>> >>   -0.786117   0.0985642  -0.610168 
>> >>   -0.364946  -0.870763    0.329523 
>> >>   -0.498833   0.481723    0.720492, 
>> >> 
>> >> 3x3 Array{Float64,2}: 
>> >>   -0.309204  -0.659611  -1.33693 
>> >>    0.0       -0.645106   0.2396 
>> >>    0.0        0.0        0.29177) 
>> >>at index [3] 
>> >>   in indexed_next at tuple.jl:21 
>> >> 
>> >> My question:  What's the best way to get the equivalent of Matlab's 
>> >> [Q,R,P] = qr(A) in Julia?  Should I write my own qr() (not too 
>> >> difficult)?  Or just do some row/col permutation of the output of 
>> >> regular qr()? 
>> >> 
>> >> Thanks for any advice, 
>> >> 
>> >> Stuart 
>> >> 
>> >> p.s.  I am using Version 0.4.3-pre+6 (2015-12-11 00:38 UTC) 
>> >> 
>> > 
>>
>

Re: [julia-users] Re: Julia equivalent to Matlab's [Q, R, P] = qr(A)?

2016-09-07 Thread Andreas Noack
We use LAPACK's QR with column pivoting. 
See http://www.netlib.org/lapack/lug/node42.html. LAPACK uses blocking for 
BLAS3 but that is not necessary for a generic implementation. So the task 
is just to sort the columns by norm at each step. If you want to try 
an implementation you can look for the reflector! and reflectorApply! 
functions in LinAlg.

On Wednesday, September 7, 2016 at 7:53:41 AM UTC-4, Stuart Brorson wrote:
>
> Thank you, Michele & Chris for your suggestion to use qrfact.  Good 
> idea.  Unfortunately, I neglected to mention that my matrix is a 
> BigFloat array, and qrfact with pivoting fails like this: 
>
> julia> Z = qrfact(A, true) 
> ERROR: MethodError: `qrfact` has no method matching 
> qrfact(::Array{BigFloat,2}, ::Bool) 
>
> Hmmm.  I guess I'll need to figure out how to implement the 
> QR-with-pivot algorithm in Golub and Van Loan (ugh!).  Or is there a 
> clear reference to the algorithm Julia uses? 
>
> Stuart 
>
>
> On Tue, 6 Sep 2016, Chris Rackauckas wrote: 
>
> > Use qrfact() (or the in-place version). It doesn't return [Q,R,P] 
> because 
> > Julia is smarter than that. It returns a specially designed type for the 
> QR 
> > return. It holds all of the those parts so you can get Q, R, and P out, 
> but 
> > most of the time you won't need to. For example, if A is the return of 
> > qrfact, then A\b will automatically dispatch to do the fast algorithm 
> for a 
> > QR-factored matrix (which is the most common use for accessing Q and R). 
> > 
> > On Tuesday, September 6, 2016 at 6:37:59 PM UTC-7, Stuart Brorson wrote: 
> >> 
> >> Hello Julia users, 
> >> 
> >> Matlab has a variant of the QR decomposition invoked like this: 
> >> 
> >> [Q,R,P] = qr(A) 
> >> 
> >> This variant of qr() returns matrix R where the diagonal elements are 
> >> sorted from largest to smallest magnitude as you go from upper left to 
> >> lower right.  The matrix P is the permutation matrix which permutes 
> >> the rows/cols of A to give this ordering of the diagonal elements of 
> >> R.  That is, Q*R = A*P. 
> >> 
> >> I tried doing the naive, analogous thing in Julia, but get an error: 
> >> 
> >> julia> A = rand(3,3) 
> >> 3x3 Array{Float64,2}: 
> >>   0.243071  0.454947   0.89657 
> >>   0.112843  0.802457   0.375417 
> >>   0.154241  0.0182734  0.992542 
> >> 
> >> julia> Q,R,P = qr(A) 
> >> ERROR: BoundsError: attempt to access ( 
> >> 3x3 Array{Float64,2}: 
> >>   -0.786117   0.0985642  -0.610168 
> >>   -0.364946  -0.870763    0.329523 
> >>   -0.498833   0.481723    0.720492, 
> >> 
> >> 3x3 Array{Float64,2}: 
> >>   -0.309204  -0.659611  -1.33693 
> >>    0.0       -0.645106   0.2396 
> >>    0.0        0.0        0.29177) 
> >>at index [3] 
> >>   in indexed_next at tuple.jl:21 
> >> 
> >> My question:  What's the best way to get the equivalent of Matlab's 
> >> [Q,R,P] = qr(A) in Julia?  Should I write my own qr() (not too 
> >> difficult)?  Or just do some row/col permutation of the output of 
> >> regular qr()? 
> >> 
> >> Thanks for any advice, 
> >> 
> >> Stuart 
> >> 
> >> p.s.  I am using Version 0.4.3-pre+6 (2015-12-11 00:38 UTC) 
> >> 
> > 
>


Re: [julia-users] Re: Return type of eye()

2016-08-29 Thread Andreas Noack
Evan, this is exactly where you should use I, i.e.

m = m + λ*I

The reason is that eye(m) will first allocate a dense matrix of size(m,1)^2
elements. Then * will do size(m,1)^2 multiplications of lambda and allocate
a new size(m,1)^2 matrix for the result. Finally, size(m,1)^2 additions
will be computed for the elements in m and lambda*eye(m).

In contrast λ*I will only make a single multiplication, store the result in
a tiny stack allocated scalar and only add this to the diagonal of m, i.e.
size(m,1) additions.
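
The difference is easy to see directly (a sketch; exact timings are
machine-dependent):

m = rand(2000, 2000); λ = 0.5
@time m + λ*eye(m)   # two extra dense 2000x2000 temporaries
@time m + λ*I        # one result matrix; λ only touches the diagonal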


On Mon, Aug 29, 2016 at 6:49 AM, Evan Fields  wrote:

>
>
> On Monday, August 29, 2016 at 9:39:19 AM UTC-4, Júlio Hoffimann wrote:
>>
>> I'd like to understand the existence of eye() in Julia, it is still not
>> clear to me. Is it because one wants type stability when updating a matrix
>> iteratively? Is this possibly a limitation from the design of the language?
>>
>> -Júlio
>>
>
> I've used it for things like
> m = m + lambda*eye(m)
>
>
>


Re: [julia-users] Re: Return type of eye()

2016-08-29 Thread Andreas Noack
Julia is developing over time. Originally, eye was probably implemented to
mimic Matlab. Later we realized that the type system allowed us to define
the much nicer UniformScaling which has the special case

const I = UniformScaling(1)

which is almost always better to use unless you plan to modify some of the
elements of the diagonal matrix after construction. See also

https://github.com/JuliaLang/julia/pull/5810#issuecomment-37983457

and maybe the rest of that discussion.

On Mon, Aug 29, 2016 at 6:39 AM, Júlio Hoffimann 
wrote:

> I'd like to understand the existence of eye() in Julia, it is still not
> clear to me. Is it because one wants type stability when updating a matrix
> iteratively? Is this possibly a limitation from the design of the language?
>
> -Júlio
>


Re: [julia-users] Re: Return type of eye()

2016-08-29 Thread Andreas Noack
We could deprecate eye. Then the users would get a warning directing them
to use `I` instead.

On Mon, Aug 29, 2016 at 6:29 AM, Júlio Hoffimann 
wrote:

> I still think that having a "global variable" named "I" is not robust.
> I've read so many scripts in matlab that do I = eye(n). This approach is
> not gonna work.
>
> -Júlio
>


Re: [julia-users] Re: Return type of eye()

2016-08-29 Thread Andreas Noack
If we could somehow make `I` more visible, wouldn't you think that

B = I*A

is better than

B = eye(1)*A

?

Small side note: the best we can hope for is probably performance similar
to B = copy(A) because it wouldn't be okay to alias A and B when B has been
constructed from *.

On Mon, Aug 29, 2016 at 6:17 AM, Júlio Hoffimann 
wrote:

> Hi Andreas,
>
> As a user I would like to write
>
> B = eye(1) * A
>
> and have the performance of
>
> B = A
>
> 90% of the users won't be aware of this 1-character variable "I" defined
> in Base nor use it. Also, I can guarantee that "I" is much easier to
> overwrite than a well known function name.
>
> -Júlio
>


[julia-users] Re: is there an rcond (reciprocal condition number of a matrix) function to complement cond

2016-08-29 Thread Andreas Noack
No. We are only exposing `cond` but as you can see 
in https://github.com/JuliaLang/julia/blob/master/base/linalg/lu.jl#L235 we 
are actually getting `rcond` from LAPACK and then calling `inv`. I can see 
the usefulness of working with a number in [0,1] instead of [1,inf) but it 
seems superfluous to have both `cond` and `rcond` when the `inv` 
computation is so cheap. Matlab has taken that path though.
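
For the triangular case in the question, the LAPACK routine can be called 
directly (a sketch; trcon!(norm, uplo, diag, A) as wrapped in 0.4/0.5-era 
Base, with 'O' selecting the 1-norm):

A = triu(rand(5, 5) + 5I)
rcond = Base.LinAlg.LAPACK.trcon!('O', 'U', 'N', copy(A))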

On Sunday, August 28, 2016 at 2:18:55 PM UTC-4, Douglas Bates wrote:
>
> I am accustomed to evaluating the reciprocal condition number instead of 
> the condition number of a matrix because rcond is what LAPACK provides and 
> because the values of rcond are in [0, 1].  I notice that there is a cond 
> function in Base but not an rcond.  Did I miss something?  It happens that 
> I want the rcond of a triangular matrix so it is easy to call the LAPACK 
> function directly.
>


[julia-users] Re: Return type of eye()

2016-08-29 Thread Andreas Noack
You can also overwrite eye.

Could you elaborate on the "90% of the users won't be aware of these 
internal details in their day-to-day coding" part? If we ignore the name 
for a while, why is `I` not what you want here? It is as efficient as it 
can possibly be.

On Sunday, August 28, 2016 at 3:39:20 PM UTC-4, Júlio Hoffimann wrote:
>
> Hi,
>
> I wanna revive a discussion around the return type of eye(). Currently 
> this function always returns a dense matrix, which is not desired 
> considering that the identity acts as a simple scalar in every single place 
> it is used.
>
> By exploiting the type system, one would ideally write algorithms with 
> eye() and get specialized function calls for a "ConstantMatrix" type saving 
> memory and unnecessary calculations. People have raised the existence of 
> this variable "I" defined in Base that acts as identity, but I still think 
> that this is not robust given that 1) I can easily overwrite "I" in my user 
> code and 2) 90% of the users won't be aware of these internal details in 
> their day-to-day coding.
>
> Also, can someone explain what is the use of eye(m,n) with m != n? If that 
> case is relevant for some applications, I would still expect to get a 
> SparseMatrix instead.
>
> Can you please share your opinions on this matter?
>
> -Júlio
>
>
>

Re: [julia-users] Re: chol() more strict in v0.5?

2016-08-08 Thread Andreas Noack
Okay. I did it this time

https://github.com/JuliaLang/Compat.jl/pull/268

On Monday, August 8, 2016 at 6:37:53 PM UTC-4, Giuseppe Ragusa wrote:
>
> Me too! I have been trying to update a package to `v0.5` and I do not 
> really see a clean way to support both 0.4 and 0.5 without an entry like 
> this in Compat.jl. 
>
> On Sunday, August 7, 2016 at 10:02:44 PM UTC+2, Andreas Noack wrote:
>>
>> It would be great to have an entry for this in Compat.jl, e.g. something like
>>
>> cholfact(A::HermOrSym, args...) = cholfact(A.data, A.uplo, args...)
>>
>> On Sun, Aug 7, 2016 at 2:44 PM, Chris <7hunde...@gmail.com> wrote:
>>
>>> mmh, could you explain your comment a little more?
>>>
>>> David, thanks for the tip.
>>
>>
>>

Re: [julia-users] Re: chol() more strict in v0.5?

2016-08-07 Thread Andreas Noack
It would be great to have an entry for this in Compat.jl, e.g. something like

cholfact(A::HermOrSym, args...) = cholfact(A.data, A.uplo, args...)

On Sun, Aug 7, 2016 at 2:44 PM, Chris <7hunderstr...@gmail.com> wrote:

> mmh, could you explain your comment a little more?
>
> David, thanks for the tip.


Re: [julia-users] Re: eigs fails silently

2016-08-06 Thread Andreas Noack
Looked a bit more into this. It was great to be able use Gallium for this 
by the way. I was able to figure out what the issue is on 0.4. We are not 
checking error codes returned from ARPACK. The check was added in 
https://github.com/JuliaLang/julia/commit/955af6b16c94189ec70c467dca445d94d617c989#commitcomment-16399154.
Therefore, eigs silently returns the wrong result in your first example 
even though ARPACK returns a non-zero error code. eigs then happens to 
error out with the right result for your latest example but that must be 
pure luck. I think that ARPACK really has a problem here.

On the other hand, looking further into this made me read this remark from 
dsaupd.f

c  3. If M can be factored into a Cholesky factorization M = LL`
c then Mode = 2 should not be selected.  Instead one should use
c Mode = 1 with  OP = inv(L)*A*inv(L`).  Appropriate triangular
c linear systems should be solved with L and L` rather
c than computing inverses.  After convergence, an approximate
c eigenvector z of the original problem is recovered by solving
c L`z = x  where x is a Ritz vector of OP.

and we are not following this advice. Right now, we only handle the 
generalized problem when B (aka M) can be factored, so the wrappers need a 
fix. I'll file an issue about this. If you know somebody who'd like to work 
on this, please let us know.
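
For reference, a sketch of what the remark suggests (0.5-era names; LinearMap 
is from the LinearMaps package mentioned elsewhere in this thread; these 
lines are illustrative, not the planned fix):

using LinearMaps
n = 100
S = randn(n, n); A = S + S'          # symmetric A
R = randn(n, n); B = R'*R + n*I      # positive definite B
U = chol(Hermitian(B))               # B = U'U, so L = U'
op(x) = U' \ (A * (U \ x))           # OP = inv(L)*A*inv(L')
vals, X = eigs(LinearMap(op, n; issym=true); nev=1, which=:LR)
Z = U \ X                            # recover z from L'z = x, i.e. U*z = x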

On Saturday, August 6, 2016 at 2:03:21 AM UTC-4, Madeleine Udell wrote:
>
> Actually: I'm not sure we should chalk the error up to ARPACK. Julia 0.4 
> is able to produce a (correct, I think) answer to
>
> eigs(A, C)
>
> but 0.5 gives an ARPACK error. I don't suppose ARPACK changed between 
> Julia versions...!?
>
> On Sat, Aug 6, 2016 at 1:54 AM, Madeleine Udell <madelei...@gmail.com 
> > wrote:
>
>> Andreas, thanks for the investigation. I'll use 0.5 for now, and hope the 
>> real problems I encounter are within the capabilities of ARPACK.
>>
>> It's embarrassing to be bested by Matlab...
>>
>> On Fri, Aug 5, 2016 at 9:23 PM, Andreas Noack <andreasno...@gmail.com 
>> > wrote:
>>
>>> I've looked a bit into this. I believe there is a bug in the Julia 
>>> wrappers on 0.4. The good news is that the bug appears to be fixed on 0.5. 
>>> The bad news is the ARPACK seems to have a hard time with the problem. I get
>>>
>>> julia> eigs(A,C,nev = 1, which = :LR)[1]
>>> ERROR: Base.LinAlg.ARPACKException("unspecified ARPACK error: -")
>>>  in aupd_wrapper(::Type{T}, ::Base.LinAlg.#matvecA!#67{Array{Float64,2}}, ::Base.LinAlg.##64#71{Array{Float64,2}}, ::Base.LinAlg.##65#72, ::Int64, ::Bool, ::Bool, ::String, ::Int64, ::Int64, ::String, ::Float64, ::Int64, ::Int64, ::Array{Float64,1}) at ./linalg/arpack.jl:53
>>>  in #_eigs#60(::Int64, ::Int64, ::Symbol, ::Float64, ::Int64, ::Void, ::Array{Float64,1}, ::Bool, ::Base.LinAlg.#_eigs, ::Array{Float64,2}, ::Array{Float64,2}) at ./linalg/arnoldi.jl:271
>>>  in (::Base.LinAlg.#kw##_eigs)(::Array{Any,1}, ::Base.LinAlg.#_eigs, ::Array{Float64,2}, ::Array{Float64,2}) at ./:0
>>>  in #eigs#54(::Array{Any,1}, ::Function, ::Array{Float64,2}, ::Array{Float64,2}) at ./linalg/arnoldi.jl:80
>>>  in (::Base.LinAlg.#kw##eigs)(::Array{Any,1}, ::Base.LinAlg.#eigs, ::Array{Float64,2}, ::Array{Float64,2}) at ./:0
>>>
>>> and since SciPy ends up with the same conclusion I conclude that the 
>>> issue is ARPACK. Matlab is doing something else because they are able to 
>>> handle this problem.
>>>
>>> Given that 0.5 is almost released, I'll not spend more time on the issue 
>>> on 0.4. Though, if anybody is able to figure out what is going on, please 
>>> let us know.
>>>
>>> On Friday, August 5, 2016 at 8:47:26 AM UTC-4, Madeleine Udell wrote:
>>>>
>>>> Setting `which=:LR, nev=1` does not return the generalized eigenvalue 
>>>> with the largest real parts, and does not give a warning or error:
>>>>
>>>> n = 10
>>>> C = eye(n)
>>>> A = zeros(n,n)
>>>> A[1] = 100
>>>> A[end] = -100
>>>> @assert eigs(A, C, nev=1, which=:LR)[1][1] == maximum(eigs(A, C)[1])
>>>>
>>>> Am I expected to set nev greater than the number of eigenvalues I truly 
>>>> desire, based on my intuition as a numerical analyst? Or has eigs broken 
>>>> its implicit guarantee?
>>>>
>>>>
>>
>>
>> -- 
>> Madeleine Udell
>> Assistant Professor, Operations Research and Information Engineering
>> Cornell University
>> https://people.orie.cornell.edu/mru8/
>> (415) 729-4115
>>
>
>
>
> -- 
> Madeleine Udell
> Assistant Professor, Operations Research and Information Engineering
> Cornell University
> https://people.orie.cornell.edu/mru8/
> (415) 729-4115
>


[julia-users] Re: eigs fails silently

2016-08-05 Thread Andreas Noack
I've looked a bit into this. I believe there is a bug in the Julia wrappers 
on 0.4. The good news is that the bug appears to be fixed on 0.5. The bad 
news is the ARPACK seems to have a hard time with the problem. I get

julia> eigs(A,C,nev = 1, which = :LR)[1]
ERROR: Base.LinAlg.ARPACKException("unspecified ARPACK error: -")
 in aupd_wrapper(::Type{T}, ::Base.LinAlg.#matvecA!#67{Array{Float64,2}}, ::Base.LinAlg.##64#71{Array{Float64,2}}, ::Base.LinAlg.##65#72, ::Int64, ::Bool, ::Bool, ::String, ::Int64, ::Int64, ::String, ::Float64, ::Int64, ::Int64, ::Array{Float64,1}) at ./linalg/arpack.jl:53
 in #_eigs#60(::Int64, ::Int64, ::Symbol, ::Float64, ::Int64, ::Void, ::Array{Float64,1}, ::Bool, ::Base.LinAlg.#_eigs, ::Array{Float64,2}, ::Array{Float64,2}) at ./linalg/arnoldi.jl:271
 in (::Base.LinAlg.#kw##_eigs)(::Array{Any,1}, ::Base.LinAlg.#_eigs, ::Array{Float64,2}, ::Array{Float64,2}) at ./:0
 in #eigs#54(::Array{Any,1}, ::Function, ::Array{Float64,2}, ::Array{Float64,2}) at ./linalg/arnoldi.jl:80
 in (::Base.LinAlg.#kw##eigs)(::Array{Any,1}, ::Base.LinAlg.#eigs, ::Array{Float64,2}, ::Array{Float64,2}) at ./:0

and since SciPy ends up with the same conclusion I conclude that the issue 
is ARPACK. Matlab is doing something else because they are able to handle 
this problem.

Given that 0.5 is almost released, I'll not spend more time on the issue on 
0.4. Though, if anybody is able to figure out what is going on, please let 
us know.

On Friday, August 5, 2016 at 8:47:26 AM UTC-4, Madeleine Udell wrote:
>
> Setting `which=:LR, nev=1` does not return the generalized eigenvalue with 
> the largest real parts, and does not give a warning or error:
>
> n = 10
> C = eye(n)
> A = zeros(n,n)
> A[1] = 100
> A[end] = -100
> @assert eigs(A, C, nev=1, which=:LR)[1][1] == maximum(eigs(A, C)[1])
>
> Am I expected to set nev greater than the number of eigenvalues I truly 
> desire, based on my intuition as a numerical analyst? Or has eigs broken 
> its implicit guarantee?
>
>

[julia-users] Re: chol() more strict in v0.5?

2016-08-03 Thread Andreas Noack
Yes. We are stricter now. LAPACK doesn't check for symmetry at all, which we 
used to mimic. It might seem pedantic, but it will capture programming 
errors, it is more in line with how we check things elsewhere, and it's in 
line with the behavior of the sparse Cholesky. For now, you'd need to use 
the Symmetric wrapper, but ideally we'd have more operations returning 
Symmetric in the future, e.g. cov.
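
Concretely, for the example from the question below, that is (a sketch):

a = cov(rand(1000, 6))
b = inv(a)              # symmetric only up to round-off
U = chol(Hermitian(b))  # uses one triangle of b, so the strict check passes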

On Tuesday, August 2, 2016 at 8:49:22 AM UTC-4, Chris wrote:
>
> I ran into this issue when some of my unit tests failed on the v0.5 
> release candidate. This example illustrates it:
>
> v0.4:
>_   _ _(_)_ |  A fresh approach to technical computing
>   (_) | (_) (_)|  Documentation: http://docs.julialang.org
>_ _   _| |_  __ _   |  Type "?help" for help.
>   | | | | | | |/ _` |  |
>   | | |_| | | | (_| |  |  Version 0.4.6 (2016-06-19 17:16 UTC)
>  _/ |\__'_|_|_|\__'_|  |  Official http://julialang.org/ release
> |__/   |  x86_64-w64-mingw32
>
> julia> a = cov(rand(1000,6));
>
> julia> chol(inv(a))
> 6x6 UpperTriangular{Float64,Array{Float64,2}}:
>  3.58638  -0.0476413  -0.208862   0.0448371  -0.1271090.163736
>  0.0   3.5431 -0.0297272  0.1167 -0.0598619  -0.105655
>  0.0   0.0 3.456670.0493362   0.0118136  -0.00615045
>  0.0   0.0 0.03.36366 0.0491131   0.146946
>  0.0   0.0 0.00.0 3.52767-0.0148549
>  0.0   0.0 0.00.0 0.0 3.52757
>
> v0.5:
>_   _ _(_)_ |  A fresh approach to technical computing
>   (_) | (_) (_)|  Documentation: http://docs.julialang.org
>_ _   _| |_  __ _   |  Type "?help" for help.
>   | | | | | | |/ _` |  |
>   | | |_| | | | (_| |  |  Version 0.5.0-rc0+0 (2016-07-26 20:22 UTC)
>  _/ |\__'_|_|_|\__'_|  |  Official http://julialang.org/ release
> |__/   |  x86_64-w64-mingw32
>
> julia> a = cov(rand(1000,6));
>
> julia> chol(inv(a))
> ERROR: ArgumentError: matrix is not symmetric/Hermitian. This error can be 
> avoided by calling chol(Hermitian(A)) which will ignore either the upper or 
> lower triangle of the matrix.
>  in chol(::Array{Float64,2}) at .\linalg\cholesky.jl:156
>
> julia> b = inv(a);
>
> julia> maximum(b'-b)
> 1.1102230246251565e-16
>
> I know an example using rand() is not ideal, but the behavior is pretty 
> consistent. I tracked the difference down to here: 
> basically, in v0.5 there is an explicit check for a Hermitian matrix using 
> ishermitian(), whereas in v0.4, chol() just uses the `info` output 
> parameter from LAPACK.potrf!(). Calling potrf!() on the matrix `b` above 
> returns a zero for info, meaning the execution completed successfully. So, 
> it seems the ishermitian check for symmetry is stricter than LAPACK's.
>
> This is not necessarily a problem, but I run into this kind of issue 
> frequently in my code, and I'm wondering what the best way forward is. I 
> could wrap every chol() argument with Hermitian to ensure it's exactly 
> symmetric, but then I lose a guard against more sinister bugs, since it 
> will force any matrix to be Hermitian. Thoughts? Is the explicit 
> ishermitian check necessary?
>


[julia-users] Re: Linear-Equation-System Solver("\") does not work reliably for the "Rational" Type

2016-07-16 Thread Andreas Noack
A couple of things go wrong here. Right now, Julia tries to use the QR 
factorization to solve underdetermined systems. If things had worked the 
way I'd planned, your matrix would have been promoted to floating point 
elements (I know that it's not what you want so keep reading). The 
promotion fails and Julia tries to QR factorize your matrix even though it 
has rational elements. What happens afterwards is still a bit unclear to me 
but I'm looking into it.

We should add an LU based solver for underdetermined systems which should 
get called for rationals. Unfortunately due to our release cycle, it might 
take a while to get into a release version of Julia. Meanwhile something 
like

julia> A = Matrix{Rational{BigInt}}(rand(1:10, 4, 5))
4x5 Array{Rational{BigInt},2}:
 6//1  9//1   6//1  5//1  3//1
 6//1  2//1  10//1  9//1  9//1
 4//1  1//1   1//1  4//1  1//1
 6//1  3//1   6//1  7//1  5//1

julia> b = A*ones(Int, size(A, 2));

julia> F = lufact(A);

julia> x = copy!(zeros(eltype(b), size(A, 2)), b);

julia> A_ldiv_B!(UpperTriangular(sub(F[:U], :, 1:size(A, 1))), 
A_ldiv_B!(LowerTriangular(F[:L]), sub(x, 1:size(A, 1))));

julia> A*x - b
4-element Array{Rational{BigInt},1}:
 0//1
 0//1
 0//1
 0//1

shouldn't be too inefficient (if wrapped in a function).
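
Wrapped up, it might look like this (a sketch; note that it also applies the 
row permutation F[:p], which the REPL lines above skip and which matters 
whenever the generic LU actually pivots):

function solverational{T<:Rational}(A::Matrix{T}, b::Vector{T})
    m, n = size(A)
    F = lufact(A)
    x = zeros(T, n)
    x[1:m] = b[F[:p]]                               # permute the right-hand side
    A_ldiv_B!(LowerTriangular(F[:L]), sub(x, 1:m))  # forward substitution
    A_ldiv_B!(UpperTriangular(sub(F[:U], :, 1:m)), sub(x, 1:m))  # back substitution
    return x
end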

On Friday, July 15, 2016 at 2:50:01 PM UTC-4, Kurolong wrote:
>
> Hey Guys and Gals  ^^
> I'm having trouble with Julia. Maybe one of you can help me out
> The program i am writing requires the 
> linear-equation-system-solver(command "\") in the folder 
> "Julia-0.4.5\share\julia\base\linalg". 
> Now my problem is that i need the equation solved strictly on integers, an 
> approximate solution is of no use to me, which is why i tried working with 
> the "Rational" Type. 
> Curiously, the solver seems to work for the Rational Type if and only if 
> no pivoting is required. Otherwise, the following Error is thrown:
>
> WARNING: pivoting only implemented for Float32, Float64, Complex64 and 
> Complex128
> ERROR: LoadError: OverflowError()
>  in * at rational.jl:188
>  [inlined code] from linalg/generic.jl:471
>  in reflector! at no file:0
>  in A_ldiv_B! at linalg/qr.jl:346
>  in \ at linalg/qr.jl:398
>  in \ at linalg/dense.jl:450
>  in include at boot.jl:261
>  in include_from_node1 at loading.jl:320
> while loading C:\users\<omitted>\documents\julia-0.4.5\sg_project\v0.3\Test02.jl, in expression 
> starting on line 3
>
> Code to reproduce the Error:
>
> A=[[2//1,0//1] [3//1,3//2] [1//1,3//2]]
> v=[2//1,0//1]
> A\v
>
> Now i am under the impression that i could build a relatively simple 
> workaround if i knew exactly where in the developer's code the actual 
> problem is, but i am confused by the whole organization of the thing. 
> Building my own solver could certainly be done, but this will most likely 
> be ineffecient. Unfortunately, runtime is very important here, as the 
> method i am currently devising is supposed to be embedded in some very 
> expensive loops.
>
> I am a very unexperienced programmer and will likely not understand much 
> programmertalk, but i am a mathematician and will probably have little 
> problems in this respect.
>
> Maybe you have some ideas how to handle this problem.
>
> Thank you in advance for taking the time to look at my problem :)
>


[julia-users] Re: caching the pivot ordering for a sparse cholesky factorization?

2016-07-10 Thread Andreas Noack
For Cholesky and sparse LDLt there are methods for reusing an existing 
symbolic factorization. They are cholfact!/ldltfact!(Factorization, Matrix), 
so you can e.g. do

```

julia> A = sprandn(100,100,0.1) + 10I;

julia> F = cholfact(Symmetric(A));

julia> cholfact!(F, Symmetric(A - I));

```


See 
https://github.com/JuliaLang/julia/blob/f8d67f7521287d9325e91fd2142dad5f222e6eaf/base/sparse/cholmod.jl#L1260-L1272.




On Friday, July 8, 2016 at 4:40:45 AM UTC-4, Gabriel Goh wrote:
>
> Hey,
>
> I have a sequence of sparse matrix factorizations I need to do, each one a 
> different matrix but with the same sparsity structure. Is there a way I can 
> save the AMD (or any other) ordering that sparsesuite returns, it does not 
> need to be recomputed each time?
>
> Thanks a lot for the help!
>
> Gabe
>


Re: [julia-users] Re: testing positive definiteness via Cholesky?

2016-06-12 Thread Andreas Noack
We could consider not throwing in cholfact but only when the factorization 
is applied. This is what we do for LU and BunchKaufman.
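
Until something like that is in place, the exception-based pattern can at 
least reuse the factorization when it succeeds (a sketch; the update rule 
for λ is arbitrary here):

function trsolve(A, b, λ)
    try
        F = cholfact(A + λ*speye(size(A, 1)))
        return F \ b, λ                  # reuse the factor for the solve
    catch err
        isa(err, Base.LinAlg.PosDefException) || rethrow(err)
        return trsolve(A, b, 2λ + 1)     # not positive definite: increase λ
    end
end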

On Saturday, June 11, 2016 at 5:02:55 PM UTC-4, Tony Kelman wrote:
>
> For now, you can just manually make the same cholmod calls that cholfact 
> does, then use the same check that cholfact is currently using for throwing 
> a PosDefException, to instead return false (or use the just-computed 
> factorization as desired, when it succeeds) - see 
> https://github.com/JuliaLang/julia/blob/9ab1519db4ab8c525f84fce542bfc463d9a2b071/base/sparse/cholmod.jl#L1270-L1285
>
> This would likely be faster than the current try-catch implementation of 
> isposdef, but involve a bit of repeated code.
>
>
> On Friday, June 10, 2016 at 9:26:55 PM UTC-7, Tim Holy wrote:
>>
>> Two other advantages of just biting the bullet and implementing this for 
>> PositiveFactorizations: 
>> - there is no iteration step; the first time you try to factor it, it 
>> works. 
>> - when you think about it, the classic approach of adding a diagonal 
>> factor is 
>> (in my humble opinion) wrong, wrong, wrong. See the discussion linked 
>> from the 
>> PositiveFactorization README. 
>>
>> Best, 
>> --Tim 
>>
>> On Friday, June 10, 2016 5:22:03 PM CDT vav...@uwaterloo.ca wrote: 
>> > Dear Viral and Tim, 
>> > 
>> > Thanks for your prompt responses to my query.  I should have stated 
>> more 
>> > precisely how I am using positive definiteness testing.  The 
>> application is 
>> > a classic trust-region method for optimization.  In a trust region 
>> method, 
>> > the main operation is as follows.  The input is a symmetric (typically 
>> > sparse) matrix A and a vector b.  The problem is to compute a certain 
>> real 
>> > parameter lambda.  One tests if A + lambda*speye(n) is positive 
>> definite; 
>> > if so, then one solves a linear system with this coefficient matrix, 
>> and if 
>> > not, one increases lambda and tries again. 
>> > 
>> > So the problem with using isposdef is that, in the case that A is 
>> actually 
>> > positive definite, isposdef discards the Cholesky factor, so then my 
>> > application would need to compute it again (redundantly) to solve the 
>> > system.  In the case of the PositiveFactorizations.jl package, it 
>> appears 
>> > that the package is aimed at dense rather than sparse matrices. 
>> > 
>> > So aside from rewriting cholfact, it seems that the only remaining 
>> solution 
>> > is Viral's suggestion to catch an exception from cholfact.  This raises 
>> > another question.  Most C++ textbooks advise against using exceptions 
>> for 
>> > ordinary control-flow on the grounds that throwing and catching an 
>> > exception is a time-consuming operation.  How about in Julia?  Is it 
>> > reasonable to use try/throw/catch for ordinary control flow in a 
>> scientific 
>> > code?  The Julia manual states that exceptions are much slower than if 
>> > statements.  But on the other hand, isposdef in cholmod.jl is written 
>> in 
>> > terms of exceptions! 
>> > 
>> > Thanks, 
>> > Steve 
>> > 
>> > On Thursday, June 9, 2016 at 10:16:29 PM UTC-4, vav...@uwaterloo.ca 
>> wrote: 
>> > > In Matlab to check if a symmetric sparse matrix is positive definite, 
>> I 
>> > > can say [R,p]=chol(A) and then if p>0 etc.  Is this functionality 
>> > > available 
>> > > in Julia?  The cholfact standard routine throws an exception if its 
>> > > argument is not positive definite rather than returning any helpful 
>> > > information. 
>> > > 
>> > > I looked at the code for cholfact in cholmod.jl in Base; it appears 
>> that I 
>> > > can write a modified version of cholfact that exposes this 
>> functionality. 
>> > > But it would be better if the functionality were available in the 
>> library 
>> > > so that my code is not sensitive to changes in undocumented low-level 
>> > > routines. 
>> > > 
>> > > Thanks, 
>> > > Steve Vavasis 
>>
>>
>>

[julia-users] Re: Pivoting when inverting a sparse matrix

2016-06-12 Thread Andreas Noack
To start with the conclusion, the easiest solution here is to use 
lufact, i.e. lufact(sparse(A))\ones(3). The explanation is that when a 
sparse matrix is symmetric, we first try to use a sparse Cholesky 
factorization and when that fails we try a sparse LDLt factorization. Both 
use Cholmod, as Tony mentions, and they only pivot during the symbolic 
factorization to reduce the fill in and no pivoting takes place during the 
numerical factorization which is the reason for the error. In contrast, our 
lufact is based on UMFPACK which is pivoting in both the symbolic and 
numerical factorization and therefore does not fail on this problem.
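
So for the matrix in the question quoted below, the workaround is:

A = sparse([1.0 0.0 1.0
            0.0 0.0 1.0
            1.0 1.0 0.0])
x = lufact(A) \ ones(3)   # UMFPACK pivots numerically, so this succeeds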

On Saturday, June 11, 2016 at 4:45:40 PM UTC-4, Tony Kelman wrote:
>
> That matrix is indefinite. Cholmod only implements diagonal pivoting 
> (modified Cholesky, will work for semidefinite or some special cases of 
> indefinite like quasidefinite), not the general symmetric indefinite 
> Bunch-Kaufman factorization with 2-by-2 pivots. Assuming you actually care 
> about sparse matrices that are large enough to make implementing this 
> difficult, you will need to use a library other than Cholmod - Pardiso, 
> Mumps, or HSL MA57 are all decent choices.
>
>
> On Friday, June 10, 2016 at 2:37:08 PM UTC-7, Gabriel Goh wrote:
>>
>> The following code doesn't make a lot of sense, is there a reason why the 
>> backslash doesn't pivot for sparse matrices?
>>
>> julia> A = [1.0 0.0 1.0
>>             0.0 0.0 1.0
>>             1.0 1.0 0.0]
>> 3x3 Array{Float64,2}:
>>  1.0  0.0  1.0
>>  0.0  0.0  1.0
>>  1.0  1.0  0.0
>>
>> julia> inv(A)
>> 3x3 Array{Float64,2}:
>>   1.0  -1.0  -0.0
>>  -1.0   1.0   1.0
>>   0.0   1.0   0.0
>>
>> julia> A\ones(3)
>> 3-element Array{Float64,1}:
>>  0.0
>>  1.0
>>  1.0
>>
>> julia> sparse(A)\ones(3)
>> ERROR: ArgumentError: matrix has one or more zero pivots
>>  in ldltfact at sparse/cholmod.jl:1246
>>  in ldltfact at sparse/cholmod.jl:1253
>>  in factorize at sparse/linalg.jl:849
>>  in \ at linalg/generic.jl:326
>>
>>

[julia-users] Re: is the L in (L,U,p) = lu(A) in guaranteed to have 1's on the diagonal when A is not full rank?

2016-06-08 Thread Andreas Noack
Yes. The way our Ls are constructed ensures that they have 1s on the 
diagonal.
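
A quick check, including a rank-deficient case (a sketch):

A = rand(4, 4); A[:, 4] = A[:, 1]   # make A rank deficient
L, U, p = lu(A)
@assert all(diag(L) .== 1)          # unit diagonal regardless of rank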

On Tuesday, June 7, 2016 at 6:20:26 PM UTC-4, Gabriel Goh wrote:
>
> A simple question as posed in the title, this is guaranteed by LAPACK
>
>   The factorization has   the form
>  A = P * L * U
>   where   P is a permutation matrix, L is lower triangular with unit 
> diagonal
>   elements (lower trapezoidal if m > n), and U is upper   triangular 
> (upper
>   trapezoidal if m < n).
>
>
> but is not part of the official Julia documentation. Just wanted to make sure 
> that was the case because it's a useful feature in understanding the null 
> space of the matrix
>
>

Re: [julia-users] Re: linreg strangely picky about inputs

2016-05-23 Thread Andreas Noack
I haven't timed it but I think the final array allocation could make a
difference if you make a lot of regressions of moderate size. I don't share
your view that the output is ugly. Since the function only allows for
simple regression anyway, I think it is natural to save the two parameters
in separate variables, e.g.

α, β = linreg(x,y)

In many cases, you are only interested in β. However, you are right that
this is actually a separate issue from the one you raised initially so
let's just fix that for now and discuss optimizations in a later PR.

On Mon, May 23, 2016 at 11:37 AM, Gabriel Gellner <gabrielgell...@gmail.com>
wrote:

> Okay so I have completely hand coded the cov/var calculation to avoid any
> overhead, and now I get around the 20x speedup you mentioned. I really
> don't see any significant benefit of returning a tuple, allocating the 2
> element array has an insignificant overhead even for Float64 vectors of size
> 10 inside hot loops. I really don't like the ugly output and lack of
> arithmetic definitions that returning the tuple forces for no clear
> benefit in my mind. If this is really wanted I would much rather have a
> linreg! method, so that the default behavior is not affected by this
> optimization. I will of course change this if the will of the people is to
> have a tuple, but I am -1.
>
> Another quick question: do I make a new issue/pull request for this
> behavior? As it is no longer the trivial change of my current pull request.
> Also should this function be moved out of linalg.jl since it is really not
> using the linear algebra routines anymore?
>
> The code I have come up with is:
>
> function linreg5{S <: Number, T <: Number}(x::AbstractArray{S},
> y::AbstractArray{T})
> n = length(x)
> @assert n == length(y)
> mx = mean(x) # need to know this at each step so calculate before
> tot_dx = 0.0
> tot_y = 0.0
> tot_xy = 0.0
> for i in 1:n
> tot_dx += (x[i] - mx)^2
> tot_y += y[i]
> tot_xy += x[i]*y[i]
> end
> b1 = (tot_xy - mx*tot_y)/tot_dx # a 1/n cancels in the top and bottom
> b0 = tot_y/n - b1*mx
> return [b0, b1]
> end
>
> If anyone has any comments on what could be made better.
>
> Thanks,
> Gabriel
>
> On Monday, May 23, 2016 at 5:58:37 AM UTC-7, Andreas Noack wrote:
>>
>> Almost, but return a tuple instead of an array to avoid array allocation.
>> Right now, cov allocates temporary arrays for the demeaned vectors but that
>> should probably be changed later anyway.
>>
>> On Monday, May 23, 2016 at 2:50:34 AM UTC-4, Gabriel Gellner wrote:
>>>
>>> So I am not able to get such a dramatic speed up. Do you mean something
>>> beyond:
>>>
>>> function linreg2{S <: Number, T <: Number}(x::AbstractArray{S},
>>> y::AbstractArray{T})
>>> mx = mean(x)
>>> my = mean(y)
>>> b1 = Base.covm(x, mx, y, my)/varm(x, mx)
>>> b0 = my - b1*mx
>>> return [b0, b1]
>>> end
>>>
>>> Which I find speeds up around 3x, or do you mean writing a custom cov
>>> function that is smarter about memory? (I am returning an array as I like
>>> to be able to do vector math on the coefficients ... but if I return a
>>> tuple it isn't much faster for me)
>>>
>>> On Sunday, May 22, 2016 at 8:37:43 PM UTC-7, Andreas Noack wrote:
>>>>
>>>> I don't think that linreg has received much attention over the years.
>>>> Most often it is almost as simple to use \. If you take a look at
>>>> linreg then I'd suggest that you try to write in terms of cov and var and
>>>> return a tuple instead of an Array. That will speed up the computation
>>>> already now and with an unallocating cov, I see 20x speed up over linreg
>>>> for n=1 and Float64 element on 0.4.
>>>>
>>>> On Saturday, May 21, 2016 at 2:33:20 PM UTC-4, Gabriel Gellner wrote:
>>>>>
>>>>>
>>>>>
>>>>> I think it's an error.  The method definition should probably just be:
>>>>>>
>>>>>> linreg{T<:Number,S<:Number}(X::AbstractVector{T},
>>>>>> y::AbstractVector{S}) = [ones(T, size(X,1)) X] \ y
>>>>>>
>>>>>> which will allow it to work for X and y of different types.
>>>>>>
>>>>>> Is using the more specific typing (<: Number) the current best
>>>>> practices? I notice a lot of the methods in linalg/generics.jl just check
>>>>> for AbstractVectors etc without requiring numeric types even though that
>>>>> would be the more correct check.
>>>>>
>>>>>
>>>>>> A PR (pull request) with a patch and a test case would be most
>>>>>> welcome.
>>>>>>
>>>>> On it! Taking me a bit of sweat to figure out the build process and
>>>>> github stuff but once I have that all sorted a PR will be submitted. 
>>>>> Thanks
>>>>> for the quick response.
>>>>>
>>>>


[julia-users] Re: linreg strangely picky about inputs

2016-05-23 Thread Andreas Noack
Almost, but return a tuple instead of an array to avoid array allocation. 
Right now, cov allocates temporary arrays for the demeaned vectors but that 
should probably be changed later anyway.

On Monday, May 23, 2016 at 2:50:34 AM UTC-4, Gabriel Gellner wrote:
>
> So I am not able to get such a dramatic speed up. Do you mean something 
> beyond:
>
> function linreg2{S <: Number, T <: Number}(x::AbstractArray{S}, 
> y::AbstractArray{T})
> mx = mean(x)
> my = mean(y)
> b1 = Base.covm(x, mx, y, my)/varm(x, mx)
> b0 = my - b1*mx
> return [b0, b1]
> end
>
> Which I find speeds up around 3x, or do you mean writing a custom cov 
> function that is smarter about memory? (I am returning an array as I like 
> to be able to do vector math on the coefficients ... but if I return a 
> tuple it isn't much faster for me)
>
> On Sunday, May 22, 2016 at 8:37:43 PM UTC-7, Andreas Noack wrote:
>>
>> I don't think that linreg has received much attention over the years. 
>> Most often it is almost as simple to use \. If you take a look at 
>> linreg then I'd suggest that you try to write in terms of cov and var and 
>> return a tuple instead of an Array. That will speed up the computation 
>> already now and with an unallocating cov, I see 20x speed up over linreg 
>> for n=1 and Float64 element on 0.4.
>>
>> On Saturday, May 21, 2016 at 2:33:20 PM UTC-4, Gabriel Gellner wrote:
>>>
>>>
>>>
>>> I think it's an error.  The method definition should probably just be:
>>>>
>>>> linreg{T<:Number,S<:Number}(X::AbstractVector{T}, y::AbstractVector{S}) 
>>>> = [ones(T, size(X,1)) X] \ y
>>>>
>>>> which will allow it to work for X and y of different types.
>>>>
>>>> Is using the more specific typing (<: Number) the current best 
>>> practices? I notice a lot of the methods in linalg/generics.jl just check 
>>> for AbstractVectors etc without requiring numeric types even though that 
>>> would be the more correct check.
>>>  
>>>
>>>> A PR (pull request) with a patch and a test case would be most welcome.
>>>>
>>> On it! Taking me a bit of sweat to figure out the build process and 
>>> github stuff but once I have that all sorted a PR will be submitted. Thanks 
>>> for the quick response. 
>>>
>>

[julia-users] Re: linreg strangely picky about inputs

2016-05-22 Thread Andreas Noack
I don't think that linreg has received much attention over the years. Most 
often it is almost as simple to use \. If you take a look at linreg then 
I'd suggest that you try to write in terms of cov and var and return a 
tuple instead of an Array. That will speed up the computation already now 
and with an unallocating cov, I see 20x speed up over linreg for n=1 
and Float64 element on 0.4.

On Saturday, May 21, 2016 at 2:33:20 PM UTC-4, Gabriel Gellner wrote:
>
>
>
> I think it's an error.  The method definition should probably just be:
>>
>> linreg{T<:Number,S<:Number}(X::AbstractVector{T}, y::AbstractVector{S}) = 
>> [ones(T, size(X,1)) X] \ y
>>
>> which will allow it to work for X and y of different types.
>>
>> Is using the more specific typing (<: Number) the current best practices? 
> I notice a lot of the methods in linalg/generics.jl just check for 
> AbstractVectors etc without requiring numeric types even though that would 
> be the more correct check.
>  
>
>> A PR (pull request) with a patch and a test case would be most welcome.
>>
> On it! Taking me a bit of sweat to figure out the build process and github 
> stuff but once I have that all sorted a PR will be submitted. Thanks for 
> the quick response. 
>


[julia-users] Re: JuliaCon birds of a feather

2016-05-18 Thread Andreas Noack
The Hackathon on Saturday is quite flexible, so if somebody volunteers to 
organize such a session then I'm sure we can make it happen. There should be 
people around that are knowledgeable about the topics.

On Tuesday, May 17, 2016 at 10:26:00 AM UTC-4, Josef Sachs wrote:
>
> Will there be any organized or unorganized opportunities for 
> birds of a feather sessions at JuliaCon?  I would be particularly 
> interested in 
>
> (1) Julia on Amazon Web Services, and 
>
> (2) editors and IDEs for Julia (There was talk at one point about 
> arranging a NYC Julia User Group meeting where we would be talking 
> about how to get various text editors set up to work with Julia, 
> but doing it at JuliaCon would probably be even better.  It would 
> also be great to see a demo of JuliaDT, the Eclipse plugin, if that 
> could be arranged.) 
>


[julia-users] Re: Matrix multiplication performance with BitMatrix, Matrix{Bool}, Matrix{Int8}

2016-05-16 Thread Andreas Noack
I think there'll necessarily be some overhead in the decompression of the 
packed Booleans in a BitArray. The difference between Bool and Int8 is that 
the Int8 is promoted to a Float64 whereas a Bool is not. It appears that 
the Bool multiplication 
in 
https://github.com/JuliaLang/julia/blob/a717fbee731b4e05ef9541ece2b3a310fa394623/base/bool.jl#L42-L45
 
creates two branches, see

julia> @code_native true*1.0
.section __TEXT,__text,regular,pure_instructions
Filename: bool.jl
Source line: 0
pushq %rbp
movq %rsp, %rbp
vxorpd %xmm1, %xmm1, %xmm1
Source line: 49
vmovq %xmm0, %rax
cmpq $-1, %rax
jg L33
movabsq $13101473768, %rax  ## imm = 0x30CE89FE8
vmovsd (%rax), %xmm1   ## xmm1 = mem[0],zero
L33:
testb $1, %dil
jne L43
vmovapd %xmm1, %xmm0
L43:
popq %rbp
retq
nopl (%rax)

Notice the jg and jne and compare to

julia> @code_native Int8(1)*1.0
.section __TEXT,__text,regular,pure_instructions
Filename: promotion.jl
Source line: 0
pushq %rbp
movq %rsp, %rbp
Source line: 191
movsbl %dil, %eax
vcvtsi2sdl %eax, %xmm0, %xmm1
vmulsd %xmm0, %xmm1, %xmm0
popq %rbp
retq
nopw %cs:(%rax,%rax)

I think the branches might affect how well the loops are vectorized which 
could explain the time difference.

On Sunday, May 15, 2016 at 1:26:59 PM UTC-4, Josh Day wrote:
>
> Hi all.  Suppose I have a large matrix with entries {0, 1} and I'd like to 
> keep storage small by using a BitMatrix.  Are there any tricks to squeeze 
> better performance out of BitMatrix multiplication?  I'm also curious about 
> the performance difference between Matrix{Bool} and Matrix{Int8}.  Thoughts 
> or suggestions are appreciated.  Thanks.
>
> n, p = 1000, 100_000
> x1 = rand(n, p) .> .5
> x2 = Matrix{Bool}(x1)
> x3 = Matrix{Float64}(x1)
> x4 = Matrix{Int8}(x1)
> b = randn(p)
>
> @time x1 * b
> @time x2 * b
> @time x3 * b
> @time x4 * b
>   0.559938 seconds (7 allocations: 8.078 KB)
>   0.437336 seconds (7 allocations: 8.078 KB)
>   0.062144 seconds (7 allocations: 8.078 KB)
>   0.109573 seconds (7 allocations: 8.078 KB)
>


[julia-users] JuliaCon 2016: Keynote speakers

2016-04-08 Thread Andreas Noack
JuliaCon 2016
June 21 - 25, 2016
Massachusetts Institute of Technology, Cambridge, Massachusetts, USA.
http://juliacon.org/

On behalf of the JuliaCon 2016 committee, I am happy to announce the 
following keynote speakers:

*Timothy E. Holy, Thomas J. Sargent, *and* Guy L. Steele Jr.*

*Timothy E. Holy* is Associate Professor of Neuroscience at Washington 
University in St. Louis. In 2009 he received the NIH Director’s Pioneer 
award for innovations in optics and microscopy. His research interests 
include imaging of neuronal activity and his lab was probably one of the 
first to adopt Julia for scientific research. He is a long time Julia 
contributor and a lead developer of Julia’s multidimensional array 
capabilities as well as the author of far too many Julia packages.

*Thomas J. Sargent* is Professor of Economics at New York University and 
Senior Fellow at the Hoover Institution. In 2011 the Royal Swedish Academy 
of Sciences awarded him the Nobel Memorial Prize in Economic Sciences for 
his work on macroeconomics. Together with John Stachurski he founded 
quant-econ.net, a Julia and Python based learning platform for quantitative 
economics focusing on algorithms and numerical methods for studying 
economic problems as well as coding skills.

*Guy L. Steele Jr.* is a Software Architect for Oracle Labs and Principal 
Investigator of the Programming Language Research project. The Association 
for Computing Machinery awarded him the 1988 Grace Murray Hopper award. He 
has co-designed the programming language Scheme, which has greatly 
influenced the design of Julia, as well as languages such as Fortress and 
Java.

Andreas Noack
JuliaCon 2016 Local Chair
Postdoctoral Associate
Computer Science and Artificial Intelligence Laboratory
Massachusetts Institute of Technology


[julia-users] Re: hypot question

2016-03-27 Thread Andreas Noack
It's to ensure that the return type doesn't depend on the value of x. If x and 
y are integers then the return type of hypot1 will be Int if x==0 and Float64 
otherwise.
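
To illustrate (hypot1 here is a guess at the naive version from the question, 
not a quote of it):

hypot1(x, y) = x == 0 ? abs(y) : (t = y/x; abs(x)*sqrt(1 + t*t))

typeof(hypot1(0, 3))  # Int64   -- the x == 0 branch never leaves the integers
typeof(hypot1(1, 3))  # Float64 -- the division promotes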

Re: [julia-users] Re: git help needed

2016-03-08 Thread Andreas Noack
No problem. When you have made a change like this, i.e. a hard reset or a
rebase, and want to push the changes to an existing branch, you'll have to

git push --force origin glmms

because git is trying to prevent you from making a non-recoverable change
to your repo. The reason is that when you have made this push, you cannot
recover the commit you are trying to remove from the git history.

On Tue, Mar 8, 2016 at 4:46 PM, Douglas Bates <dmba...@gmail.com> wrote:

> Thanks, Andreas.  Unfortunately, I am still in trouble.  Sorry to be such
> a PITA.
>
> When I apply those changes that you describe it is okay until I try to
> push the changes.  The response  I get is that I can't push to origin/glmms
> because the branches have diverged.
>
> $ git status
> On branch glmms
> Your branch and 'origin/glmms' have diverged,
> and have 11 and 6 different commits each, respectively.
>   (use "git pull" to merge the remote branch into yours)
> nothing to commit, working directory clean
> bates@thin40:~/.julia/v0.4/MixedModels$ git pull
> Auto-merging src/paramlowertriangular.jl
> CONFLICT (content): Merge conflict in src/paramlowertriangular.jl
> Auto-merging src/logdet.jl
> CONFLICT (content): Merge conflict in src/logdet.jl
> Auto-merging src/linalg.jl
> CONFLICT (content): Merge conflict in src/linalg.jl
> Auto-merging src/inject.jl
> CONFLICT (content): Merge conflict in src/inject.jl
> Auto-merging src/inflate.jl
> CONFLICT (content): Merge conflict in src/inflate.jl
> Auto-merging src/cfactor.jl
> CONFLICT (content): Merge conflict in src/cfactor.jl
> Auto-merging src/bootstrap.jl
> CONFLICT (content): Merge conflict in src/bootstrap.jl
> Auto-merging src/MixedModels.jl
> CONFLICT (content): Merge conflict in src/MixedModels.jl
> Auto-merging src/GLMM/PIRLS.jl
> CONFLICT (content): Merge conflict in src/GLMM/PIRLS.jl
> Automatic merge failed; fix conflicts and then commit the result.
>
>
>
>
> On Tuesday, March 8, 2016 at 2:29:08 PM UTC-6, Andreas Noack wrote:
>>
>> Provided that you have pushed all your changes, I think the easiest
>> solution is to "remove" the commit in which you add the chksqr fix. You can
>> do that with
>>
>> git reset --hard c0b5c41d136013a8e2cd57f5bedd8c96f5d2e3c6 # the commit
>> right before the chksqr changes
>>
>> git cherry-pick b7564f59ac0a8b72a4a86ccc73cb805520d820b7
>>
>> git cherry-pick 021f766768b54686e60a75c023cb00ab0b38a5dc # the two
>> commits after the chksqr
>>
>> git rebase origin/master # now without conflicts
>>
>> You can also just reset hard to
>> https://github.com/andreasnoack/MixedModels.jl/tree/anj/glmms where I've
>> made exactly these changes or I can open a PR with my version.
>>
>> On Tuesday, March 8, 2016 at 1:46:08 PM UTC-5, Douglas Bates wrote:
>>>
>>> Having looked closer (nothing like a public post to cause you to read
>>> again and discover you were wrong) I see that I did change all those files
>>> in the glmms branch that Andreas changed in his pull request. I had fixed
>>> the issue that Andreas addressed but in a different way and our changes
>>> were in conflict.
>>>
>>> However, I am still in the position that I can't reconcile the glmms
>>> branch and the master branch.
>>>
>>


[julia-users] Re: git help needed

2016-03-08 Thread Andreas Noack
Provided that you have pushed all your changes, I think the easiest 
solution is to "remove" the commit in which you add the chksqr fix. You can 
do that with

git reset --hard c0b5c41d136013a8e2cd57f5bedd8c96f5d2e3c6 # the commit 
right before the chksqr changes

git cherry-pick b7564f59ac0a8b72a4a86ccc73cb805520d820b7

git cherry-pick 021f766768b54686e60a75c023cb00ab0b38a5dc # the two commits 
after the chksqr

git rebase origin/master # now without conflicts

You can also just reset hard 
to https://github.com/andreasnoack/MixedModels.jl/tree/anj/glmms where I've 
made exactly these changes or I can open a PR with my version.

On Tuesday, March 8, 2016 at 1:46:08 PM UTC-5, Douglas Bates wrote:
>
> Having looked closer (nothing like a public post to cause you to read 
> again and discover you were wrong) I see that I did change all those files 
> in the glmms branch that Andreas changed in his pull request.  I had fixed 
> the issue that Andreas addressed but in a different way and our changes 
> were in conflict.
>
> However, I am still in the position that I can't reconcile the glmms 
> branch and the master branch.
>


Re: [julia-users] Re: ANN: CppWrapper C++ wrapping package

2016-02-15 Thread Andreas Noack
Thanks for the explanation. I think it is useful to have a short section in
the docs that explains how and why a package is different from similar
packages.

On Mon, Feb 15, 2016 at 2:06 PM, Bart Janssens <bar...@gmail.com> wrote:

> Hi Andreas,
>
> Yes, I'm aware of (and impressed by :) Cxx.jl. The reason I went ahead
> with CppWrapper anyway is that I wanted something like Boost.Python does
> for Python: allow writing the interface completely in C++.
>
> If I understand Cxx.jl correctly, the idea is that it becomes possible to
> directly access C++ using the @cxx macro from Julia. So when facing the
> task of wrapping a C++ library in a Julia package, authors now have 2
> options:
> - Use Cxx.jl to write the wrapper package in Julia code
> - Use CppWrapper to write the wrapper completely in C++ (and one line of
> Julia code to load the .so)
>
> We currently have a C++ project with Python bindings done using
> Boost.Python, which also uses the latter approach, so translating this is
> more natural using CppWrapper.
>
> Maybe I should clarify this in the docs?
>
> Cheers,
>
> Bart
>
> On Monday, February 15, 2016 at 7:35:52 PM UTC+1, Andreas Noack wrote:
>>
>> Hi Bart,
>>
>> Are you aware of https://github.com/Keno/Cxx.jl? What are the reasons
>> for a separate package?
>>
>>


[julia-users] Re: ANN: CppWrapper C++ wrapping package

2016-02-15 Thread Andreas Noack
Hi Bart,

Are you aware of https://github.com/Keno/Cxx.jl? What are the reasons for a 
separate package?

Best
Andreas

On Sunday, February 14, 2016 at 6:11:07 PM UTC-5, Bart Janssens wrote:
>
> Hi all,
>
> The CppWrapper package is meant to facilitate exposing C++ libraries to 
> Julia, allowing library authors to write the wrapping code in C++ and 
> import to Julia using a one-liner. I think I have now added most of the 
> basic features needed to produce simple type wrappings. The detailed 
> description can be found at:
> https://github.com/barche/CppWrapper
>
> For demonstration and performance-checking purposes, I have also wrapped a 
> very limited part of the Eigen matrix library for some given fixed-size 
> matrices:
> https://github.com/barche/EigenCpp.jl
>
> I'm thinking of submitting a presentation proposal for CppWrapper to 
> JuliaCon, so comments and suggestions are most welcome.
>
> Cheers,
>
> Bart
>


[julia-users] Re: Factors in Sparse QR factorization

2016-01-13 Thread Andreas Noack
Eventually, we should support multiplying with Q in the same way it is 
possible with dense QR. It will require that we introduce a Q type for 
sparse QR for which we can overload the usual matrix multiplication 
functions by appropriate calls to qmult. Notice, however, that qmult is only 
defined for multiplication with dense matrices. The C API only supports 
part of the functionality available in SPQR. You could look into 
wrapping SuiteSparseQR_C_QR in a method similar to qr (i.e. not qrfact), 
which will give what you want. A PR would be very welcome.

Finally, you are probably aware, but notice that Q will typically be much 
denser than the original matrix, so it is often recommended to avoid 
constructing it explicitly.
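
For contrast, this is what already works for dense QR, where such a Q type exists (a sketch in 0.4-era syntax):

```julia
A = randn(10, 4)
F = qrfact(A)       # dense QR factorization
Q = F[:Q]           # compact representation; the dense Q is never formed
y = Q'*randn(10)    # multiplies by applying the Householder reflectors
```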

On Tuesday, January 12, 2016 at 8:34:48 AM UTC-5, Christopher Rinderspacher 
wrote:
>
> Dear All,
>
> I need direct access to Q and R of a sparse matrix for one of my 
> algorithms. Currently, it seems the API does not support this, although 
> internally spqr.jl implements qmult, which I presume I could use if it was 
> exposed to multiply Q with the identity (in a sparse format) to get Q. 
> Similarly with R. Are there any plans to expose appropriate APIs?
>
> Sincerely,
>
> Christopher
>


[julia-users] Re: Cholmod Factor re-use

2015-12-01 Thread Andreas Noack
Sorry for the confusion here. We have moved the CHOLMOD module to 
SparseMatrix.CHOLMOD in 0.4 and SparseArrays.CHOLMOD in 0.5. However, 
update! will probably be an exported function in 0.5 so this should become 
much easier.

On Tuesday, December 1, 2015 at 7:08:12 AM UTC-5, Matthew Pearce wrote:
>
> Thanks, but I'm afraid not. I guess the inclusion of Cholmod into Julia 
> must have been restructured since that part of the code was written.
> E.g.:
>
> ```
> julia> using Base.LinAlg.CHOLMOD.CholmodFactor
> ERROR: UndefVarError: CHOLMOD not defined
> ```
>
> Matthew
>
> On Tuesday, November 24, 2015 at 3:43:21 PM UTC, Pieterjan Robbe wrote:
>>
>> is this of any help?
>>
>> https://groups.google.com/forum/#!msg/julia-users/tgO3hd238Ac/olgfSJLXvzoJ
>>
>

[julia-users] Re: drandn sometimes crashes

2015-11-24 Thread Andreas Noack
Please file this issue at DistributedArrays.jl

On Monday, November 23, 2015 at 8:13:06 AM UTC-5, Antonio Suriano wrote:
>
> addprocs(3)
>
> @everywhere using DistributedArrays
>
> function tony(N)
> return sum(drandn(N,N))
> end
>
>
> function pesante(N)
> a=zeros(N,N)
> for i = 1:N
> for j=1:N
> a[i,j]=tony(1000)
> end
> end
> return a
> end
>
> matrice= pesante(100)
>
>
> 
> julia 0.4.1
>
> when calling pesante with N>30 sometimes it crashes
>
> LoadError: BoundsError: attempt to access 0-element Array{Any,1}
>   at index [2]
> while loading In[3], in expression starting on line 20
>
> fatal error on 2: ERROR: MethodError: `convert` has no method matching 
> convert(::Type{RemoteRef{T<:AbstractChannel}}, ::Base.RemoteDoMsg)
> This may have arisen from a call to the constructor 
> RemoteRef{T<:AbstractChannel}(...),
> since type constructors fall back to convert methods.
> Closest candidates are:
>   call{T}(::Type{T}, ::Any)
>   convert{T}(::Type{T}, !Matched::T)
>   RemoteRef()
>   ...
>  in setindex! at array.jl:313
>  in deserialize_array at serialize.jl:616
>  in deserialize_datatype at serialize.jl:651
>  in handle_deserialize at serialize.jl:477 (repeats 2 times)
>  in deserialize_datatype at serialize.jl:651
>  in message_handler_loop at multi.jl:861
>  in anonymous at task.jl:63
> Worker 2 terminated.
> ERROR (unhandled task failure): EOFError: read end of file
>  in read at stream.jl:911
>  in message_handler_loop at multi.jl:861
>  in anonymous at task.jl:63
>


[julia-users] Re: low rank matrix-vector multiplication and order of operations

2015-10-27 Thread Andreas Noack
The order of operations is from left to right so the parentheses can be 
important here. We have discussed ways of executing more efficiently for 
matrix products, see https://github.com/JuliaLang/julia/issues/12065, but 
so far nothing has been implemented. In that issue, you can also see the 
reason why an explicit transpose is computed in one of your examples.
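
The flop-count argument behind the timings below, as a sketch with made-up sizes: evaluated left to right, x*x'*v first builds an n-by-n matrix, while the parenthesized version only ever forms vectors.

```julia
n, p = 10000, 100
x = rand(n, p)
v = rand(n)

w_slow = (x*x')*v   # materializes an n-by-n matrix: O(n^2 p) work, O(n^2) memory
w_fast = x*(x'*v)   # two matrix-vector products: O(n p) work, vector-sized memory
```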

On Monday, October 26, 2015 at 3:26:12 PM UTC-4, Michael Lindon wrote:
>
> I have a somewhat large matrix-vector product to calculate, where the 
> matrix is not full rank. I can perform a low rank factorization of the nxn 
> Matrix M as M=LL' where L is nxp.  I'd like to use this information to 
> speed up the computation. Here is a minimal example:
>
> x=rand(1,100)
> v=rand(1)
> xx=x*x'
> xt=x'
> @time xx*v;
> @time x*x'*v;
> @time x*(x'*v);
> @time x*(xt*v);
>
> here are the timings:
>
> julia> @time xx*v;
>   0.038634 seconds (8 allocations: 78.375 KB)
> julia> @time x*x'*v;
>   0.385250 seconds (17 allocations: 770.646 MB, 13.14% gc time)
> julia> @time x*(x'*v);
>   0.000662 seconds (10 allocations: 79.266 KB)
> julia> @time x*(xt*v);
>   0.000819 seconds (10 allocations: 79.266 KB)
>
> I would like to understand why x*(x'*v) is much faster than  x*x'*v, and 
> uses less memory. I would have thought the order of operations would have 
> gone from right to left but it seems the parentheses made a huge 
> difference. I also might have expected that x' would just flick a TRANS='T" 
> switch in the BLAS implementation but by the looks of it there is a very 
> large amount of memory allocation going on in x*x'*v.
>


[julia-users] Re: VAR estimation in Julia

2015-10-20 Thread Andreas Noack
Hi Michael

Author of Civecm.jl here. The repo has code that I developed during my PhD 
and I haven't spent much time on it lately, but please ask questions if 
you'd like to use it.

Best
Andreas

On Friday, October 16, 2015 at 5:59:44 PM UTC-4, Michael Wang wrote:
>
> Cool, thank you! I will make good use of these. 
>
> On Friday, October 16, 2015 at 1:00:34 PM UTC-7, Patrick Kofod Mogensen 
> wrote:
>>
>> I'm not aware of any packages in metadata, but you can perhaps get 
>> inspiration from 
>>
>> https://github.com/andreasnoack/Civecm.jl (Cointegrated VAR models)
>> https://github.com/tomaskrehlik/VARmodels.jl (VAR models)
>>
>> Neither are from repositories registered in METADATA.jl, so don't expect 
>> too much support. They do not seem to have been updated too recently either 
>> (5-7 months).
>>
>> On Friday, October 16, 2015 at 3:03:41 PM UTC-4, Michael Wang wrote:
>>>
>>> Hi all,
>>>
>>> I am trying to estimation a vector autoregressive model, but I did not 
>>> find a specific Julia package performing the job. Does anyone know that 
>>> there is some package doing this?
>>>
>>> Thanks!
>>>
>>

[julia-users] Re: Type stability (or not) in core stats functions

2015-09-02 Thread Andreas Noack
I think you are right that we should simply remove the mean keyword 
argument from cov and cor. If users want the efficient versions with user 
provided means then they can use corm and covm. Right now they are not 
exported, but we could consider doing it, although I'm in doubt if it is 
really needed. The important thing is to have cov and cor type stable.

On Tuesday, September 1, 2015 at 1:29:59 PM UTC-4, Michael Francis wrote:
>
> Thanks, that is a good pointer.
>
> In this specific case it's unfortunate that there is a keyword arg in the 
> API at all; having two functions, one with a mean supplied and one without, 
> would avoid this issue and remove the branch logic, replacing it with static 
> dispatch. 
>
> On Tuesday, September 1, 2015 at 1:02:17 PM UTC-4, Jarrett Revels wrote:
>>
>> Actually, just saw this: https://github.com/JuliaLang/julia/issues/9818.
>> Ignore the messed up @code_typed stuff in my previous reply to this thread.
>>
>> I believe the type-inference concerns are still there, however, even if 
>> @code_typed doesn't correctly report them, so the fixes I listed should 
>> still be useful for patching over inferencing problems with keyword 
>> arguments.
>>
>> Best,
>> Jarrett
>>
>> On Tuesday, September 1, 2015 at 12:49:02 PM UTC-4, Jarrett Revels wrote:
>>>
>>> Related: https://github.com/JuliaLang/julia/issues/9551
>>>
>>> Unfortunately, as you've seen, type-variadic keyword arguments can 
>>> really mess up type-inferencing. It appears that keyword argument types are 
>>> pulled from the default arguments rather than those actually passed in at 
>>> runtime:
>>>
>>> julia> f(x; a=1, b=2) = a*x^b
>>> f (generic function with 1 method)
>>>
>>> julia> f(1)
>>> 1
>>>
>>> julia> f(1, a=(3+im), b=5.15)
>>> 3.0 + 1.0im
>>>
>>> julia> @code_typed f(1, a=(3+im), b=5.15)
>>> 1-element Array{Any,1}:
>>>  :($(Expr(:lambda, Any[:x],
>>> Any[Any[Any[:x,Int64,0]],Any[],Any[Int64],Any[]], :(begin $(Expr(:line, 1,
>>> :none, symbol("")))
>>> GenSym(0) = (Base.power_by_squaring)(x::Int64,2)::Int64
>>> return (Base.box)(Int64,(Base.mul_int)(1,GenSym(0)))::Int64
>>> end::Int64
>>>
>>> Obviously, that specific call to f does NOT return an Int64.
>>>
>>> I know of only two reasonable ways to handle it at the moment:
>>>
>>> 1. If you're the method author: Restrict every keyword argument to a 
>>> declared, concrete type, which ensures that the argument isn't 
>>> type-variadic. Yichao basically gave an example of this.
>>> 2. If you're the method caller: Manually assert the return type. You can 
>>> do this pretty easily in most cases using a wrapper function. 
>>> Using `f` from above as an example:
>>>
>>> julia> g{X,A,B}(x::X, a::A, b::B) = f(x, a=a, b=b)::promote_type(X, A, B)
>>> g (generic function with 2 methods)
>>>
>>> julia> @code_typed g(1,2,3)
>>> 1-element Array{Any,1}:
>>>  :($(Expr(:lambda, Any[:x,:a,:b],
>>> Any[Any[Any[:x,Int64,0],Any[:a,Int64,0],Any[:b,Int64,0]],Any[],Any[Int64],Any[:X,:A,:B]],
>>> :(begin  # none, line 1:
>>> return
>>> (top(typeassert))((top(kwcall))((top(getfield))(Main,:call)::F,2,:a,a::Int64,:b,b::Int64,Main.f,(top(ccall))(:jl_alloc_array_1d,(top(apply_type))(Base.Array,Any,1)::Type{Array{Any,1}},(top(svec))(Base.Any,Base.Int)::SimpleVector,Array{Any,1},0,4,0)::Array{Any,1},x::Int64),Int64)::Int64
>>> end::Int64
>>>
>>> julia> @code_typed g(1,2,3.0)
>>> 1-element Array{Any,1}:
>>>  :($(Expr(:lambda, Any[:x,:a,:b],
>>> Any[Any[Any[:x,Int64,0],Any[:a,Int64,0],Any[:b,Float64,0]],Any[],Any[Int64],Any[:X,:A,:B]],
>>> :(begin  # none, line 1:
>>> return
>>> (top(typeassert))((top(kwcall))((top(getfield))(Main,:call)::F,2,:a,a::Int64,:b,b::Float64,Main.f,(top(ccall))(:jl_alloc_array_1d,(top(apply_type))(Base.Array,Any,1)::Type{Array{Any,1}},(top(svec))(Base.Any,Base.Int)::SimpleVector,Array{Any,1},0,4,0)::Array{Any,1},x::Int64),Float64)::Float64
>>> end::Float64
>>>
>>> julia> @code_typed g(1,2,3.0+im)
>>> 1-element Array{Any,1}:
>>>  :($(Expr(:lambda, Any[:x,:a,:b],
>>> Any[Any[Any[:x,Int64,0],Any[:a,Int64,0],Any[:b,Complex{Float64},0]],Any[],Any[Int64],Any[:X,:A,:B]],
>>> :(begin  # none, line 1:
>>> return
>>> (top(typeassert))((top(kwcall))((top(getfield))(Main,:call)::F,2,:a,a::Int64,:b,b::Complex{Float64},Main.f,(top(ccall))(:jl_alloc_array_1d,(top(apply_type))(Base.Array,Any,1)::Type{Array{Any,1}},(top(svec))(Base.Any,Base.Int)::SimpleVector,Array{Any,1},0,4,0)::Array{Any,1},x::Int64),Complex{Float64})::Complex{Float64}
>>> end::Complex{Float64}
>>>
>>> Thus, downstream functions can call f through g, preventing 
>>> type-instability from "bubbling up" to the calling methods (as it would if 
>>> they called f directly).
>>>
>>> Best,
>>> Jarrett
>>>
>>> On Tuesday, September 1, 2015 at 8:39:11 AM UTC-4, Michael Francis wrote:

 2) The 

[julia-users] Re: Is some Autocorrelation function in Julia?

2015-08-09 Thread Andreas Noack
Take a look at StatsBase.jl

On Sunday, August 9, 2015 at 8:17:02 AM UTC-4, paul analyst wrote:



 Is there some autocorrelation function in Julia?

 Paul



[julia-users] Re: Time series current state of the art

2015-07-23 Thread Andreas Noack
For the example you describe, you can simply use the tools in Base, but 
unfortunately I don't think our reader can handle continental-style decimal 
commas yet. However, those are easy to search/replace with a dot. Something 
like

cov(diff(log(readdlm("prices.csv", ';'))))

should then do the job.
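
Spelled out step by step (same hypothetical file and separator as above; this assumes the decimal commas have already been replaced by dots):

```julia
prices  = readdlm("prices.csv", ';')  # read the semicolon-separated file
returns = diff(log(prices))           # column-wise log-returns
Sigma   = cov(returns)                # covariance matrix of the columns
```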

There is support for dates and times in Base as well. Depending on what you 
are interested in doing with the data, you might be interested in

https://github.com/JuliaQuant

and what https://github.com/milktrader has done.

One thing we are missing, though, is more formalized statistical tools for 
time series analysis. In Stats.jl there is support for computing 
autocorrelations and partial autocorrelations but not much beyond that. I 
saw a GARCH package at some point, but it was very limited.

On Wednesday, July 22, 2015 at 9:08:52 AM UTC-4, Danny Zuko wrote:

 I am new to Julia and would like to try it to deal with financial time 
 series. I read there has been a good bunch discussions within the Julia 
 community about it (for example, some interesting ones on indexing).

 As a test, I would like to read some 100MB .csv file containing prices 
 into an array (or data-frame?), computing their logarithmic returns and 
 eventually compute a covariance matrix. 

 Something that in R I might do like:

 ## Read CSV file and store contents in a dataframe:
 ## - fields are separated by semicolons,
 ## - first line contains column names,
 ## - first column contains row names,
 ## - decimal separator is a comma.
 prices <- read.table("prices.csv",
   sep = ";", header = TRUE, row.names = 1, dec = ",")

 ## Convert prices into logarithmic returns by applying the diff function on
 ## the log of the prices:
 returns <- apply(log(prices), 2, diff)

 ## Compute the covariance matrix for the logarithmic returns:
 returns_covariance <- cov(returns, use = "pairwise.complete.obs")

 As far as the current state of the art is concerned, which are the latest 
 packages that are considered a reference at the moment? Is it TimeSeries.jl?



[julia-users] Re: How to make (sort of) pointers to an array subsection ?

2015-07-15 Thread Andreas Noack
You can use sub or slice for this, e.g. W = slice(Wall, 2:size(Wall, 1), 
2:size(Wall, 2))
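
Applied to the example from the question, a sketch showing that updates are visible in both directions:

```julia
Wall = ones(3, 3)
W = slice(Wall, 2:size(Wall, 1), 2:size(Wall, 2))  # a view into Wall, not a copy

W[1, 1] = 3.0
Wall[2, 2]       # 3.0 -- the write through W shows up in Wall

Wall[3, 3] = 7.0
W[2, 2]          # 7.0 -- and writes to Wall show up in W
```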

On Wednesday, July 15, 2015 at 6:14:14 AM UTC-4, Ferran Mazzanti wrote:

 Hi folks,

 I am in a bit of a mess with the way arrays are handled in Julia. I 
 come from C and fortran95 and I know I can do the following things there 
 using pointers: imagine I have a big array A. Then I can use pointers to 
 define W a subsection of A, such that if I modify W then A is modified, and 
 if I modify A then W is modified. In this way I have one and the same data, 
 which I can manipulate by acting on A or on W.

 I'd like to do the same thing in Julia, but don't know how. Somehow it 
 works if I use the whole array. For instance, in this example:

 Wall = ones(2,2) 
 1.0 1.0
 1.0 1.0

 W = Wall
 1.0 1.0
 1.0 1.0

 W[1,1] = 0.
 0.0

 Wall
 2x2 Array{Float64,2}:
 0.0  1.0
 1.0  1.0

 Wall[1,2] = 2.
 0.0 2.0
 1.0 1.0

 W
 2x2 Array{Float64,2}:
 0.0  2.0
 1.0  1.0

 ...so it works both ways. That's exactly what I want, but now not just 
 with W and Wall being the same, but with W being a part of Wall.
 I could try something like

 Wall = ones(3,3)
 3x3 Array{Float64,2}:
 1.0  1.0  1.0
 1.0  1.0  1.0
 1.0  1.0  1.0

 W = Wall[2:end,2:end]
 1.0 1.0
 1.0 1.0

 W[1,1] = 3.
 3.0

 W
 2x2 Array{Float64,2}:
 3.0  1.0
 1.0  1.0

 Wall
 3x3 Array{Float64,2}:
 1.0  1.0  1.0
 1.0  1.0  1.0
 1.0  1.0  1.0

 ...so W has been updated but not Wall. If I update W it doesn't get copied 
 to Wall either.

 So I know how to do automatic updating when the two arrays are of the same 
 size, but can it be done with subsections of arrays as (I want it to be) in 
 the example above?

 Thanks for your help and patience,

 Ferran.



[julia-users] Re: Efficient way to compute X' diag(w) Y

2015-07-08 Thread Andreas Noack
No such BLAS routine exists, but for larger matrices the calculation will 
be dominated by the final matrix-matrix product anyway.
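
A sketch of the two-step version being discussed (the name `weighted_cross` is made up; the thread calls it `cross`): scale one factor by w, then do a single optimized matrix-matrix product.

```julia
# X is n-by-p, Y is n-by-q, w a length-n weight vector.
# The scaled copy w .* Y is the only intermediate; X itself is untouched.
weighted_cross(X, w, Y) = X' * (w .* Y)
```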

On Tuesday, July 7, 2015 at 6:24:34 PM UTC-4, Matthieu wrote:

 Thanks, this is what I currently do :)

 However, I'd like to find a solution that is both memory efficient (X can 
 be very large) and which does not modify X in place.

 Basically, I'm wondering whether there was a BLAS subroutine that would 
 allow to compute cross(X, w, Y) in one pass without creating an 
 intermediate matrix as large as X or Y.



[julia-users] Re: Condition Number of a Matrix

2015-07-08 Thread Andreas Noack
Hi Jared

The short answer is yes. Different algorithms are used in `svdvals` and 
`svd`/`svdfact`. In both cases, we are using the divide and conquer routine 
xGESDD from LAPACK, but internally the routine uses two different 
algorithms depending on whether the singular vectors are requested or not. 
When the vectors are requested, the algorithm is an implementation of

http://www.netlib.org/lapack/lug/node151.html#gueisenstat3

and when only the values are requested the algorithm is no longer divide 
and conquer but

http://www.netlib.org/lapack/lug/node151.html#fernandoparlett

I think your matrix shows very well the shortcomings of the divide and 
conquer algorithm since the small values are very inaccurate here. We have 
wrapped the older QR-based routine xGESVD, with which you can get the vectors 
without getting inaccurate small values.

julia> LAPACK.gesvd!('A', 'A', copy(A))[2] |> t -> t[1]/t[end]
1.1607083109062826e84

For some time, I've wanted to make it easier to choose the algorithm from 
`svd`/`svdfact` so at some point, when time allows, it should be possible 
to do so, which will make it much more transparent what is happening.

On Tuesday, July 7, 2015 at 4:33:52 PM UTC-4, Jared Crean wrote:

 Hello,
 I am working with some poorly conditioned matrices, and I computed the 
 condition number using the cond() function and by comparing the maximum and 
 minimum singular values from svd(), and got very different answers.  The 
 matrix is attached.

 From svdvals, used by cond(), the max and min are  8.488228999798525e18, 
 2.3929633941941788e-61,
 From svd, they are 8.488228999798525e18, 169.86022318633755

 Except for the first three, all the singular values reported by svd and 
 svdvals are different.  Are the two functions using different methods to 
 calculate the singular values?

   Jared Crean



Re: [julia-users] Re: Efficient way to compute X' diag(w) Y

2015-07-08 Thread Andreas Noack
You could, but unless the matrices are small, it would be slower because it
wouldn't use optimized matrix multiplication.

2015-07-08 10:36 GMT-04:00 Josh Langsfeld jdla...@gmail.com:

 Maybe I'm missing something obvious, but couldn't you easily write your
 own 'cross' function that uses a couple nested for-loops to do the
 arithmetic without any intermediate allocations at all?

 On Tuesday, July 7, 2015 at 6:24:34 PM UTC-4, Matthieu wrote:

 Thanks, this is what I currently do :)

 However, I'd like to find a solution that is both memory efficient (X can
 be very large) and which does not modify X in place.

 Basically, I'm wondering whether there was a BLAS subroutine that would
 allow to compute cross(X, w, Y) in one pass without creating an
 intermediate matrix as large as X or Y.




Re: [julia-users] Re: Efficient way to compute X' diag(w) Y

2015-07-08 Thread Andreas Noack
It can be quite large. With

julia> function mymul(A,B)
           m, n = size(A, 1), size(B, 2)
           C = promote_type(typeof(A), typeof(B))(m,n)
           for j = 1:n
               for i = 1:m
                   tmp = zero(eltype(C))
                   for k = 1:size(A, 2)
                       tmp += A[i,k]*B[k,j]
                   end
                   C[i,j] = tmp
               end
           end
           return C
       end

I get the following single-threaded OpenBLAS speed-ups:

size  factor
2     1.16176
4     0.515929
8     1.73846
16    4.80873
32    10.4425
64    11.6411
128   20.1504
256   41.6211
512   38.4489
1024  136.855

2015-07-08 10:46 GMT-04:00 Josh Langsfeld jdla...@gmail.com:

 Ah, thanks, that's good to know. I was under the mistaken impression that
 loops are always the fastest option in Julia since it's brought up pretty
 frequently. Out of curiosity, what factor of slow-down would not using the
 optimized routines cause?

 On Wed, Jul 8, 2015 at 10:39 AM, Andreas Noack 
 andreasnoackjen...@gmail.com wrote:

 You could, but unless the matrices are small, it would be slower because
 it wouldn't use optimized matrix multiplication.

 2015-07-08 10:36 GMT-04:00 Josh Langsfeld jdla...@gmail.com:

 Maybe I'm missing something obvious, but couldn't you easily write your
 own 'cross' function that uses a couple nested for-loops to do the
 arithmetic without any intermediate allocations at all?

 On Tuesday, July 7, 2015 at 6:24:34 PM UTC-4, Matthieu wrote:

 Thanks, this is what I currently do :)

 However, I'd like to find a solution that is both memory efficient (X
 can be very large) and which does not modify X in place.

 Basically, I'm wondering whether there was a BLAS subroutine that would
 allow to compute cross(X, w, Y) in one pass without creating an
 intermediate matrix as large as X or Y.






[julia-users] Re: caveat: Givens does not transpose

2015-07-08 Thread Andreas Noack
Hi Ivan

This is fixed on 0.4 but needs a backport to 0.3. I'll take a look.

On Tuesday, July 7, 2015 at 8:41:37 AM UTC-4, Ivan Slapnicar wrote:

 In [1]:

 Z=givens(1.0,2.0,1,3,3)

 Z, Z', transpose(Z)

 Out[1]:

 (
 3x3 Givens{Float64}:
   0.447214  0.0  0.894427
   0.0   1.0  0.0 
  -0.894427  0.0  0.447214,

 3x3 Givens{Float64}:
   0.447214  0.0  0.894427
   0.0   1.0  0.0 
  -0.894427  0.0  0.447214,

 3x3 Givens{Float64}:
   0.447214  0.0  0.894427
   0.0   1.0  0.0 
  -0.894427  0.0  0.447214)



Re: [julia-users] Re: ldltfactor for sparse matrices

2015-06-04 Thread Andreas Noack
Unfortunately not. See https://github.com/JuliaLang/julia/pull/10402

2015-06-04 9:18 GMT-04:00 Dominique Orban dominique.or...@gmail.com:

 Is it possible to recover L and D from an LDL' factorization? I'm using
 0.3.8.


 On Wednesday, May 27, 2015 at 11:24:03 PM UTC+2, Eduardo Lenz wrote:

 This is very interesting !

 So UMFPACK is more robust and this is why I am not having any issues with
 the same matrix.

 Thanks.



 On Wednesday, May 27, 2015 at 6:15:57 PM UTC-3, Andreas Noack wrote:

 It could happen if a pivot is zero. CHOLMOD's ldlt is only making
 permutations in the symbolic step to reduce fill in. The problem can easily
 arise if a random matrix is too sparse, e.g.

  julia> A = sprandn(5,5, 0.2);

  julia> A = A + A';

  julia> b = A*ones(5);

  julia> cholfact(A)\b
 CHOLMOD warning: not positive definite
 5-element Array{Float64,1}:
  NaN
  NaN
  NaN
  NaN
  Inf

 2015-05-27 17:11 GMT-04:00 Eduardo Lenz eduardo...@gmail.com:

 OK, I see.

 It is really using the LDLt, I will try to find why it is returning the
 NaNs for my matrix.

 Thank you very much for your time and knowledge



 On Wednesday, May 27, 2015 at 5:54:20 PM UTC-3, Andreas Noack wrote:

 As I wrote in the first reply: in 0.3 the cholfact function returns
 the LDLt when the matrix is symmetric but not positive definite, e.g.
  julia> A = sprandn(5,5, 0.5);

  julia> A = A + A';

  julia> b = A*ones(5);

  julia> cholfact(A)

 CHOLMOD factor:  :  5-by-5
   scalar types: SuiteSparse_long, real, double
   simplicial, LDL'.
   ordering method used: AMD
  0:4
  1:3
  2:0
  3:1
  4:2
   col: 0 colcount: 3
   col: 1 colcount: 3
   col: 2 colcount: 3
   col: 3 colcount: 2
   col: 4 colcount: 1
 monotonic: 1
  nzmax 12.
   col 0: nz 3 start 0 end 3 space 3 free 0:
  0: -0.077417
  1: 8.3137
  3: -0.22451
   col 1: nz 3 start 3 end 6 space 3 free 0:
  1: 6.1217
  3: -0.023604
  4: 0.33154
   col 2: nz 3 start 6 end 9 space 3 free 0:
  2: -0.82878
  3: -0.16901
  4: 2.1383
   col 3: nz 2 start 9 end 11 space 2 free 0:
  3: 0.96632
  4: -1.1466
   col 4: nz 1 start 11 end 12 space 1 free 0:
  4: 1.8461
   nz 12  OK


  julia> cholfact(A)\b
 5-element Array{Float64,1}:
  1.0
  1.0
  1.0
  1.0
  1.0

 You do have CHOLMOD installed, but in 0.3 it is in a different module.
 Try Base.LinAlg.CHOLMOD

 2015-05-27 16:44 GMT-04:00 Eduardo Lenz eduardo...@gmail.com:

 Sorry for pointing a wrong julia version.

 The matrix is not posdef, so it gives me a (correct) warning and then
 my computations return NaN.

  I will take a deeper look, but I really cannot understand why I don't
 have CHOLMOD available in a regular
 Windows install.

 Thanks for your help Andreas !


 On Wednesday, May 27, 2015 at 5:37:53 PM UTC-3, Andreas Noack wrote:

 You are using 0.3.8 and not 0.4. Have you tried cholfact(A)?

 2015-05-27 16:33 GMT-04:00 Eduardo Lenz eduardo...@gmail.com:

 Just to make it clear...
_
 julia> versioninfo()
 Julia Version 0.3.8
 Commit 79599ad (2015-04-30 23:40 UTC)
 Platform Info:
   System: Windows (x86_64-w64-mingw32)
   CPU: Intel(R) Core(TM) i5-3320M CPU @ 2.60GHz
   WORD_SIZE: 64
   BLAS: libopenblas (USE64BITINT DYNAMIC_ARCH NO_AFFINITY
 Sandybridge)
   LAPACK: libopenblas
   LIBM: libopenlibm
   LLVM: libLLVM-3.3

 julia> Base.SparseMatrix.CHOLMOD
 ERROR: CHOLMOD not defined



 On Wednesday, May 27, 2015 at 3:25:46 PM UTC-3, Eduardo Lenz wrote:

  Funny... I don't have CHOLMOD installed... but I am using the
  official Windows installer. I will try to make a fresh install.

 Thanks Andreas !

 On Wednesday, May 27, 2015 at 2:59:30 PM UTC-3, Andreas Noack
 wrote:

 What do you get when you type Base.SparseMatrix.CHOLMOD.ITypes in
 the terminal?

 2015-05-27 13:56 GMT-04:00 Eduardo Lenz eduardo...@gmail.com:

 Thanks Andreas.

 Indeed ... but I am using 0.4 with ldltfact and it is
 complaining about the type of the matrix, which is OK.

  I am really confused by this error.


 On Wednesday, May 27, 2015 at 2:22:30 PM UTC-3, Andreas Noack
 wrote:

 In 0.3 the sparse LDLt and Cholesky factorizations are both in
 the cholfact function. If the matrix is symmetric, but not positive
 definite the result of cholfact will be an LDLt factorization. In 
 0.4 the
 factorizations have been split into cholfact and ldltfact.

  On Wednesday, May 27, 2015 at 12:34:30 PM UTC-4, Eduardo Lenz wrote:

 Hi.

 I am trying to solve a linear system defined by a Symmetric
 sparse matrix. The lufact is working well, but as the matrix is 
 symmetric,
 I intend to use ldltfact.

 Unfortunately, it is returning the following error:

 ERROR: `ldltfact` has no method matching
 ldltfact(::SparseMatrixCSC{Float64,Int64})

 but my matrix is reported as

 typeof(A)
 SparseMatrixCSC{Float64,Int64}.

  Is it an error or am I doing something wrong?

 Thanks,
 Eduardo.








Re: [julia-users] Type inference cannot deduce that convert(Float32,x) returns a Float32?

2015-06-02 Thread Andreas Noack
The convert methods for Date.Period, Complex and Rational are inferred to 
give Any. The problem in Period is because of the use of the value method 
in line 4 of periods.jl. It extracts a field from an abstract type so even 
though all subtypes in base have the specified field and have it defined as 
Int64, the result is inferred to be Any. With three type asserts in the 
mentioned convert methods, the problem can be fixed, but I don't know if 
that is desirable.

I found one example where the inference fails for a concrete type, 

julia> @code_warntype convert(Float64, Complex{Real}(1,0))
Variables:
  #s59::Type{Float64}
  z::Complex{Real}

Body:
  begin  # complex.jl, line 18:
  unless (top(getfield))(z::Complex{Real},:im)::Real == 0::Any goto 0
  return convert(T,(top(getfield))(z::Complex{Real},:re)::Real)::Any
  0: 
  return throw($(Expr(:new, 
:((top(getfield))(Core,:InexactError)::Type{InexactError}
  end::Any

and a place where we might have another problem

julia> convert(FloatingPoint, Rational{Integer}(1,1))
1.0

julia> convert(Float64, Rational{Integer}(1,1))
ERROR: no promotion exists for Float64 and Integer
 in convert at no file


On Monday, June 1, 2015 at 6:26:01 PM UTC-4, Stefan Karpinski wrote:

 Either that or there's some convert method that violates that expectation, 
 in which case inference would correctly decide that it can't predict the 
 result type.

 On Mon, Jun 1, 2015 at 6:05 PM, Yichao Yu yyc...@gmail.com wrote:

  On Mon, Jun 1, 2015 at 6:03 PM, Stefan Karpinski ste...@karpinski.org wrote:
  Probably not necessary – that code already has more type annotations 
 than it
  needs. The compiler is usually quite good at figuring out types, in this
  case the global sabotages it.

 And I guess typeinf gives up because of the number of methods out
 there (although seems that typeinf also fail on some of them though).

 ```julia
julia> length(methods(convert, Tuple{Type{Float32}, Any}))
26

julia> length(methods(call, Tuple{Type{Float32}, Any}))
 1
 ```

 
  On Mon, Jun 1, 2015 at 5:50 PM, Scott Jones scott.pa...@gmail.com wrote:
 
  Does this mean that in all the code that I've written for UTF 
 conversions,
  I should decorate the results of the convert with ::T to help the 
 compiler's
  inference?
 
  On Monday, June 1, 2015 at 11:31:53 PM UTC+2, Stefan Karpinski wrote:
 
  There's nothing in the language that forces convert(T,x) to return
  something of type T, so type inference has no idea what type b is. The
  method that implements Float32(x) is this:
 
  call{T}(::Type{T}, arg) = convert(T, arg)::T
 
 
  Note the type assertion – so type inference does know the type of the
  result. Related: #1090.
 
  On Mon, Jun 1, 2015 at 5:06 PM, Arch Robison arch.d@gmail.com
  wrote:
 
  I was a bit surprised when I tried this example:
  function foo()
      global G
      #b = Float32(G)
      b = convert(Float32,G)
      b
  end
 
  G = 0.5
  println(foo())
  code_warntype(foo,())
  and code_warntype deduced that the type of b is ANY instead of 
 Float32.
  Is this a bug or a feature?  If I use the constructor syntax instead 
 (see
  comment in code), then the type of b is deduced as Float32.
 
 
 




[julia-users] Re: Simple Conjugate Gradients coded in Julia is much faster than cholfact

2015-06-01 Thread Andreas Noack
I think the chosen matrix has very good convergence properties for 
iterative methods, but I agree that iterative methods are very useful to 
have in Julia. There are already quite a few implementations in

https://github.com/JuliaLang/IterativeSolvers.jl

I'm not sure if these methods cover the one you chose, so you could have a 
look and see if there is something to contribute there.

On Sunday, May 31, 2015 at 9:37:23 PM UTC-4, Eduardo Lenz wrote:

 Hi.

 One of my students is solving some large sparse systems (more than 20K 
 equations). The coefficient matrix 
 is symmetric and positive definite, and very sparse (1% nonzero 
 elements in some cases).

 After playing around a little bit with cholfact we decided to compare the 
 time with a very simple implementation
 of the conjugate gradient method with diagonal scaling. 

 The code is in

 https://gist.github.com/CodeLenz/92086ba37035fe8d9ed8#file-gistfile1-txt 

 And, for example, the solution of Ax=b for 

 julia> A = sprand(1,1,0.01); A = A'+A;
 A = A+100*rand()*speye(1,1)

 takes 16 seconds with cholfact(A) and 600 milliseconds !!! with DCGC 
 (tol=1E-10)

 Also, as expected, the memory consumption with CG is very low, allowing 
 the solution 
 of very large systems. 

 The same pattern is observed for different levels of sparsity and for 
 different random matrices.

 I would like to thank the Julia developers for such an amazing tool!





[julia-users] Re: option to return success flag in chol() and constructor for Triangular

2015-05-28 Thread Andreas Noack
You can use try/catch, e.g.

julia> try
           cholfact(A)
       catch e
           e.info
       end
1

In 0.3, you can construct a Triangular matrix with Triangular(B, :L) and in 
0.4 with LowerTriangular(B).
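
If only the success flag is needed, `isposdef` is another option; it runs the same factorization and just returns a Bool (a small sketch):

```julia
A = randn(4, 4); A = A'*A + I   # positive definite by construction
isposdef(A)     # true
isposdef(-A)    # false, and no exception is thrown
```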

On Wednesday, May 27, 2015 at 6:12:51 PM UTC-4, Roy Wang wrote:

 Is there an easy way to return the LAPACK success flag of Cholesky 
 decompositions? I am currently calling 
 (B,success_flag)=Base.LinAlg.LAPACK.potrf!(my_inputs) in order to do this. 
 I suspect this is not a good way to do this.

 Is there a constructor for the Triangular type? I wish to make B into a 
 Triangular type.



Re: [julia-users] Re: ldltfactor for sparse matrices

2015-05-27 Thread Andreas Noack
What do you get when you type Base.SparseMatrix.CHOLMOD.ITypes in the
terminal?

2015-05-27 13:56 GMT-04:00 Eduardo Lenz eduardobarpl...@gmail.com:

 Thanks Andreas.

 Indeed ... but I am using 0.4 with ldltfact and it is complaining about
 the type of the matrix, which is OK.

  I am really confused by this error.


 On Wednesday, May 27, 2015 at 2:22:30 PM UTC-3, Andreas Noack wrote:

 In 0.3 the sparse LDLt and Cholesky factorizations are both in the
 cholfact function. If the matrix is symmetric, but not positive definite
 the result of cholfact will be an LDLt factorization. In 0.4 the
 factorizations have been split into cholfact and ldltfact.

 On Wednesday, May 27, 2015 at 12:34:30 PM UTC-4, Eduardo Lenz wrote:

 Hi.

 I am trying to solve a linear system defined by a Symmetric sparse
 matrix. The lufact is working well, but as the matrix is symmetric, I
 intend to use ldltfact.

 Unfortunately, it is returning the following error:

 ERROR: `ldltfact` has no method matching
 ldltfact(::SparseMatrixCSC{Float64,Int64})

 but my matrix is reported as

 typeof(A)
 SparseMatrixCSC{Float64,Int64}.

  Is it an error or am I doing something wrong?

 Thanks,
 Eduardo.




[julia-users] Re: ldltfactor for sparse matrices

2015-05-27 Thread Andreas Noack
In 0.3 the sparse LDLt and Cholesky factorizations are both in the 
cholfact function. If the matrix is symmetric, but not positive definite 
the result of cholfact will be an LDLt factorization. In 0.4 the 
factorizations have been split into cholfact and ldltfact.

On Wednesday, May 27, 2015 at 12:34:30 PM UTC-4, Eduardo Lenz wrote:

 Hi.

 I am trying to solve a linear system defined by a Symmetric sparse matrix. 
 The lufact is working well, but as the matrix is symmetric, I intend to use 
 ldltfact.

 Unfortunately, it is returning the following error:

 ERROR: `ldltfact` has no method matching 
 ldltfact(::SparseMatrixCSC{Float64,Int64})

 but my matrix is reported as 

 typeof(A)
 SparseMatrixCSC{Float64,Int64}.

 Is it an error or am I doing something wrong?

 Thanks,
 Eduardo.



Re: [julia-users] Re: ldltfactor for sparse matrices

2015-05-27 Thread Andreas Noack
You are using 0.3.8 and not 0.4. Have you tried cholfact(A)?

2015-05-27 16:33 GMT-04:00 Eduardo Lenz eduardobarpl...@gmail.com:

 Just to make it clear...
_
 julia> versioninfo()
 Julia Version 0.3.8
 Commit 79599ad (2015-04-30 23:40 UTC)
 Platform Info:
   System: Windows (x86_64-w64-mingw32)
   CPU: Intel(R) Core(TM) i5-3320M CPU @ 2.60GHz
   WORD_SIZE: 64
   BLAS: libopenblas (USE64BITINT DYNAMIC_ARCH NO_AFFINITY Sandybridge)
   LAPACK: libopenblas
   LIBM: libopenlibm
   LLVM: libLLVM-3.3

 julia> Base.SparseMatrix.CHOLMOD
 ERROR: CHOLMOD not defined



 On Wednesday, May 27, 2015 at 3:25:46 PM UTC-3, Eduardo Lenz wrote:

  Funny... I don't have CHOLMOD installed... but I am using the official
  Windows installer. I will try to make a fresh install.

 Thanks Andreas !

 On Wednesday, May 27, 2015 at 2:59:30 PM UTC-3, Andreas Noack wrote:

 What do you get when you type Base.SparseMatrix.CHOLMOD.ITypes in the
 terminal?

 2015-05-27 13:56 GMT-04:00 Eduardo Lenz eduardo...@gmail.com:

 Thanks Andreas.

 Indeed ... but I am using 0.4 with ldltfact and it is complaining about
 the type of the matrix, which is OK.

  I am really confused by this error.


 On Wednesday, May 27, 2015 at 2:22:30 PM UTC-3, Andreas Noack wrote:

 In 0.3 the sparse LDLt and Cholesky factorizations are both in the
 cholfact function. If the matrix is symmetric, but not positive definite
 the result of cholfact will be an LDLt factorization. In 0.4 the
 factorizations have been split into cholfact and ldltfact.

  On Wednesday, May 27, 2015 at 12:34:30 PM UTC-4, Eduardo Lenz wrote:

 Hi.

 I am trying to solve a linear system defined by a Symmetric sparse
 matrix. The lufact is working well, but as the matrix is symmetric, I
 intend to use ldltfact.

 Unfortunately, it is returning the following error:

 ERROR: `ldltfact` has no method matching
 ldltfact(::SparseMatrixCSC{Float64,Int64})

 but my matrix is reported as

 typeof(A)
 SparseMatrixCSC{Float64,Int64}.

  Is it an error or am I doing something wrong?

 Thanks,
 Eduardo.





Re: [julia-users] Re: ldltfactor for sparse matrices

2015-05-27 Thread Andreas Noack
It could happen if a pivot is zero. CHOLMOD's ldlt is only making
permutations in the symbolic step to reduce fill in. The problem can easily
arise if a random matrix is too sparse, e.g.

julia> A = sprandn(5,5, 0.2);

julia> A = A + A';

julia> b = A*ones(5);

julia> cholfact(A)\b
CHOLMOD warning: not positive definite
5-element Array{Float64,1}:
 NaN
 NaN
 NaN
 NaN
 Inf

2015-05-27 17:11 GMT-04:00 Eduardo Lenz eduardobarpl...@gmail.com:

 OK, I see.

 It is really using the LDLt, I will try to find why it is returning the
 NaNs for my matrix.

 Thank you very much for your time and knowledge



 On Wednesday, May 27, 2015 at 5:54:20 PM UTC-3, Andreas Noack wrote:

 As I wrote in the first reply: in 0.3 the cholfact function returns the
 LDLt when the matrix is symmetric but not positive definite, e.g.
 julia> A = sprandn(5,5, 0.5);

 julia> A = A + A';

 julia> b = A*ones(5);

 julia> cholfact(A)

 CHOLMOD factor:  :  5-by-5
   scalar types: SuiteSparse_long, real, double
   simplicial, LDL'.
   ordering method used: AMD
  0:4
  1:3
  2:0
  3:1
  4:2
   col: 0 colcount: 3
   col: 1 colcount: 3
   col: 2 colcount: 3
   col: 3 colcount: 2
   col: 4 colcount: 1
 monotonic: 1
  nzmax 12.
   col 0: nz 3 start 0 end 3 space 3 free 0:
  0: -0.077417
  1: 8.3137
  3: -0.22451
   col 1: nz 3 start 3 end 6 space 3 free 0:
  1: 6.1217
  3: -0.023604
  4: 0.33154
   col 2: nz 3 start 6 end 9 space 3 free 0:
  2: -0.82878
  3: -0.16901
  4: 2.1383
   col 3: nz 2 start 9 end 11 space 2 free 0:
  3: 0.96632
  4: -1.1466
   col 4: nz 1 start 11 end 12 space 1 free 0:
  4: 1.8461
   nz 12  OK


 julia> cholfact(A)\b
 5-element Array{Float64,1}:
  1.0
  1.0
  1.0
  1.0
  1.0

 You do have CHOLMOD installed, but in 0.3 it is in a different module.
 Try Base.LinAlg.CHOLMOD

 2015-05-27 16:44 GMT-04:00 Eduardo Lenz eduardo...@gmail.com:

 Sorry for reporting the wrong Julia version.

 The matrix is not posdef, so it gives me a (correct) warning and then my
 computations return NaN.

  I will take a deeper look, but I really cannot understand why I don't
 have CHOLMOD available in a regular
 Windows install.

 Thanks for your help Andreas !


 On Wednesday, May 27, 2015 at 5:37:53 PM UTC-3, Andreas Noack wrote:

 You are using 0.3.8 and not 0.4. Have you tried cholfact(A)?

 2015-05-27 16:33 GMT-04:00 Eduardo Lenz eduardo...@gmail.com:

 Just to make it clear...
_
 julia> versioninfo()
 Julia Version 0.3.8
 Commit 79599ad (2015-04-30 23:40 UTC)
 Platform Info:
   System: Windows (x86_64-w64-mingw32)
   CPU: Intel(R) Core(TM) i5-3320M CPU @ 2.60GHz
   WORD_SIZE: 64
   BLAS: libopenblas (USE64BITINT DYNAMIC_ARCH NO_AFFINITY Sandybridge)
   LAPACK: libopenblas
   LIBM: libopenlibm
   LLVM: libLLVM-3.3

 julia> Base.SparseMatrix.CHOLMOD
 ERROR: CHOLMOD not defined



 On Wednesday, May 27, 2015 at 3:25:46 PM UTC-3, Eduardo Lenz wrote:

  Funny... I don't have CHOLMOD installed... but I am using the official
  Windows installer. I will try to make a fresh install.

 Thanks Andreas !

 On Wednesday, May 27, 2015 at 2:59:30 PM UTC-3, Andreas Noack wrote:

 What do you get when you type Base.SparseMatrix.CHOLMOD.ITypes in
 the terminal?

 2015-05-27 13:56 GMT-04:00 Eduardo Lenz eduardo...@gmail.com:

 Thanks Andreas.

 Indeed ... but I am using 0.4 with ldltfact and it is complaining
 about the type of the matrix, which is OK.

  I am really confused by this error.


 On Wednesday, May 27, 2015 at 2:22:30 PM UTC-3, Andreas Noack wrote:

 In 0.3 the sparse LDLt and Cholesky factorizations are both in the
 cholfact function. If the matrix is symmetric, but not positive 
 definite
 the result of cholfact will be an LDLt factorization. In 0.4 the
 factorizations have been split into cholfact and ldltfact.

  On Wednesday, May 27, 2015 at 12:34:30 PM UTC-4, Eduardo Lenz wrote:

 Hi.

 I am trying to solve a linear system defined by a Symmetric
 sparse matrix. The lufact is working well, but as the matrix is 
 symmetric,
 I intend to use ldltfact.

 Unfortunately, it is returning the following error:

 ERROR: `ldltfact` has no method matching
 ldltfact(::SparseMatrixCSC{Float64,Int64})

 but my matrix is reported as

 typeof(A)
 SparseMatrixCSC{Float64,Int64}.

  Is it an error or am I doing something wrong?

 Thanks,
 Eduardo.







Re: [julia-users] Re: ldltfactor for sparse matrices

2015-05-27 Thread Andreas Noack
As I wrote in the first reply: in 0.3 the cholfact function returns the
LDLt when the matrix is symmetric but not positive definite, e.g.
julia> A = sprandn(5,5, 0.5);

julia> A = A + A';

julia> b = A*ones(5);

julia> cholfact(A)

CHOLMOD factor:  :  5-by-5
  scalar types: SuiteSparse_long, real, double
  simplicial, LDL'.
  ordering method used: AMD
 0:4
 1:3
 2:0
 3:1
 4:2
  col: 0 colcount: 3
  col: 1 colcount: 3
  col: 2 colcount: 3
  col: 3 colcount: 2
  col: 4 colcount: 1
monotonic: 1
 nzmax 12.
  col 0: nz 3 start 0 end 3 space 3 free 0:
 0: -0.077417
 1: 8.3137
 3: -0.22451
  col 1: nz 3 start 3 end 6 space 3 free 0:
 1: 6.1217
 3: -0.023604
 4: 0.33154
  col 2: nz 3 start 6 end 9 space 3 free 0:
 2: -0.82878
 3: -0.16901
 4: 2.1383
  col 3: nz 2 start 9 end 11 space 2 free 0:
 3: 0.96632
 4: -1.1466
  col 4: nz 1 start 11 end 12 space 1 free 0:
 4: 1.8461
  nz 12  OK


julia> cholfact(A)\b
5-element Array{Float64,1}:
 1.0
 1.0
 1.0
 1.0
 1.0

You do have CHOLMOD installed, but in 0.3 it is in a different module. Try
Base.LinAlg.CHOLMOD

2015-05-27 16:44 GMT-04:00 Eduardo Lenz eduardobarpl...@gmail.com:

 Sorry for reporting the wrong Julia version.

 The matrix is not posdef, so it gives me a (correct) warning and then my
 computations return NaN.

  I will take a deeper look, but I really cannot understand why I don't have
 CHOLMOD available in a regular
 Windows install.

 Thanks for your help Andreas !


 On Wednesday, May 27, 2015 at 5:37:53 PM UTC-3, Andreas Noack wrote:

 You are using 0.3.8 and not 0.4. Have you tried cholfact(A)?

 2015-05-27 16:33 GMT-04:00 Eduardo Lenz eduardo...@gmail.com:

 Just to make it clear...
_
 julia> versioninfo()
 Julia Version 0.3.8
 Commit 79599ad (2015-04-30 23:40 UTC)
 Platform Info:
   System: Windows (x86_64-w64-mingw32)
   CPU: Intel(R) Core(TM) i5-3320M CPU @ 2.60GHz
   WORD_SIZE: 64
   BLAS: libopenblas (USE64BITINT DYNAMIC_ARCH NO_AFFINITY Sandybridge)
   LAPACK: libopenblas
   LIBM: libopenlibm
   LLVM: libLLVM-3.3

 julia> Base.SparseMatrix.CHOLMOD
 ERROR: CHOLMOD not defined



 On Wednesday, May 27, 2015 at 3:25:46 PM UTC-3, Eduardo Lenz wrote:

  Funny... I don't have CHOLMOD installed... but I am using the official
  Windows installer. I will try to make a fresh install.

 Thanks Andreas !

 On Wednesday, May 27, 2015 at 2:59:30 PM UTC-3, Andreas Noack wrote:

 What do you get when you type Base.SparseMatrix.CHOLMOD.ITypes in the
 terminal?

 2015-05-27 13:56 GMT-04:00 Eduardo Lenz eduardo...@gmail.com:

 Thanks Andreas.

 Indeed ... but I am using 0.4 with ldltfact and it is complaining
 about the type of the matrix, which is OK.

  I am really confused by this error.


 On Wednesday, May 27, 2015 at 2:22:30 PM UTC-3, Andreas Noack wrote:

 In 0.3 the sparse LDLt and Cholesky factorizations are both in the
 cholfact function. If the matrix is symmetric, but not positive definite
 the result of cholfact will be an LDLt factorization. In 0.4 the
 factorizations have been split into cholfact and ldltfact.

  On Wednesday, May 27, 2015 at 12:34:30 PM UTC-4, Eduardo Lenz wrote:

 Hi.

 I am trying to solve a linear system defined by a Symmetric sparse
 matrix. The lufact is working well, but as the matrix is symmetric, I
 intend to use ldltfact.

 Unfortunately, it is returning the following error:

 ERROR: `ldltfact` has no method matching
 ldltfact(::SparseMatrixCSC{Float64,Int64})

 but my matrix is reported as

 typeof(A)
 SparseMatrixCSC{Float64,Int64}.

  Is it an error or am I doing something wrong?

 Thanks,
 Eduardo.






Re: [julia-users] forming an FFT matrix in julia

2015-05-10 Thread Andreas Noack
1. In Julia fft(A) is the 2d DFT of A. You can get MATLAB's behavior with
fft(A, 1)

2. I might not understand what you are trying to do, but it appears to me
that you can just apply the DFT to the full vector and then sample the
elements of the vector.
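
A sketch of both points (the index set `rows` is made up):

```julia
n = 8
x = randn(n)

F = fft(eye(n), 1)   # 1-d DFT along dimension 1, matching MATLAB's fft(eye(n))

# Sampling rows of the DFT matrix applied to x is just indexing into fft(x):
rows = [2, 5, 7]
y = fft(x)[rows]     # one O(n log n) transform, then cheap indexing

# With a precomputed plan (applied as P(x) on 0.3; later versions use P*x):
P = plan_fft(x)
y2 = P(x)[rows]
```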

2015-05-10 22:32 GMT-04:00 Edward Chen echen...@gmail.com:

 To whom it may concern:

 2 issues:

1. In MATLAB, we can form an n-by-n FFT matrix by doing fft(eye(n)). In
Julia, doing fft(eye(n)) doesn't seem to give me the same result.
2. I am actually interested in randomly sampling the rows of an FFT
matrix, and doing matrix-vector multiplies with only those rows. I was
wondering if there was a way to use plan_fft to get the n log n flop speed
in this case.


 Thanks!
 Ed



Re: [julia-users] asymmetry between treatment of mean and cov

2015-05-08 Thread Andreas Noack
Ah. I misread your post. The explanation is that the computation of the
mean generalizes to the case where each element is a vector, but this is
not the case for the way we calculate the variance. It could be supported,
but then I think it would make most sense to generalize the variance and
not the covariance function.
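
A sketch of the usual workaround for the vector-of-vectors case: stack the observations into a matrix first and take cov of that.

```julia
n = 10
A = [randn(3) for i = 1:n]   # n observations, each a 3-vector

B = hcat(A...)'              # n-by-3 matrix with one observation per row
C = cov(B)                   # 3-by-3 covariance matrix of the columns
```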

2015-05-08 17:05 GMT-04:00 JPi pin...@gmail.com:

 Yes, the variance.

 But that doesn't explain why you can't get the covariance matrix of an
 array of vectors.

 On Friday, May 8, 2015 at 4:51:13 PM UTC-4, Andreas Noack wrote:

 Calculating the covariance requires two sequences of data points. Either
 from two vectors or between the columns of a matrix. The mean is different
 as it requires one sequence. What did you expect to get from the covariance
 function of a vector? The variance?

 2015-05-08 16:01 GMT-04:00 JPi pin...@gmail.com:

 Hello,

 1. I can apply mean to an array of vectors, but doing the same for cov
 produces an error.

 2. I can apply cov to a matrix, which produces the covariance matrix
 treating each row as an observation.  Applying mean to the same matrix
 produces a scalar average of all elements in the matrix.

 This asymmetric treatment is counter-intuitive.  What is the rationale?

 Thanks!

 n=10
 A=Array(Vector,n)

 for i=1:n
  A[i]=randn(3)
 end

 println(mean(A))
 println(cov(A))  # produces an error

 B=Array(Float64,n,3)
 for i=1:n
  B[i,:]=A[i]
 end

 println("mean: ", mean(B))              # produces average of all elements in B
 println("covariance matrix: ", cov(B))  # produces covariance matrix of columns of B







Re: [julia-users] asymmetry between treatment of mean and cov

2015-05-08 Thread Andreas Noack
Calculating the covariance requires two sequences of data points. Either
from two vectors or between the columns of a matrix. The mean is different
as it requires one sequence. What did you expect to get from the covariance
function of a vector? The variance?

2015-05-08 16:01 GMT-04:00 JPi pin...@gmail.com:

 Hello,

 1. I can apply mean to an array of vectors, but doing the same for cov
 produces an error.

 2. I can apply cov to a matrix, which produces the covariance matrix
 treating each row as an observation.  Applying mean to the same matrix
 produces a scalar average of all elements in the matrix.

 This asymmetric treatment is counter-intuitive.  What is the rationale?

 Thanks!

 n=10
 A=Array(Vector,n)

 for i=1:n
  A[i]=randn(3)
 end

 println(mean(A))
 println(cov(A))  # produces an error

 B=Array(Float64,n,3)
 for i=1:n
  B[i,:]=A[i]
 end

 println("mean: ", mean(B))              # produces average of all elements in B
 println("covariance matrix: ", cov(B))  # produces covariance matrix of columns of B






Re: [julia-users] Is anyone, anywhere using Julia on a supercomputer/cluster

2015-04-28 Thread Andreas Noack
As I'm writing this, I'm running Julia on a pretty new 90-node cluster. I
don't know if that counts as a medium-size cluster, but recently it was
reported on the mailing list that Julia was running on

http://www.top500.org/system/178451

which I think counts as a supercomputer.

2015-04-28 19:58 GMT-04:00 Lyndon White oxina...@ucc.asn.au:

 Hi,

 I have a big numerical problem that Julia is nice for.
 But I really want to farm it out over a few hundred cores.

 I know my local research supercomputing provider (iVec since I am in
 Western Australia),
 prefers it if you are running programs in C or Fortran.

 But I know they have run things in Python and Matlab.
 I know they loosely appreciate the trade-off between development time and
 CPU time.
 But I think their main hesitation is the knowledge that the CPU cycles
 Python wastes could be going to another project,
 and that other project could be curing cancer etc.

 Julia on the other hand is comparable to C or Fortran, so that objection
 is out.
 It is, on the other hand, immature and not exactly well known.
 (I would not be surprised if I was the only user in my university.)

 It would help any argument I might have,
 or explanation I need to render, if I could say: They are using it on the
 super-computers in X.

 Have you, or do you know of anyone who used it on supercomputers /
 medium-large clusters?

 Did it go well?
 What are the pitfalls?



Re: [julia-users] Yet Another String Concatenation Thread (was: Re: Naming convention)

2015-04-28 Thread Andreas Noack
I like the idea of something like factorize(MyType,...), but it is not
without problems for generic programming. Right now cholfact(Matrix) and
cholfact(SparseMatrixCSC) return different types, i.e. LinAlg.Cholesky and
SparseMatrix.CHOLMOD.Factor. The reason is that internally, they are very
different, but from the perspective of solving a positive definite system
of equations they could be considered the same, and therefore they share
the cholfact name.

2015-04-28 9:26 GMT-04:00 Tom Breloff t...@breloff.com:

 +1 for factorize(MyType, ...), Sparse(MyDist, ...) and other similar
 examples that have been suggested.  It's only a very slight hardship for
 those copying their code directly from matlab, but for everyone else I
 think it's a big win for readability and type safety.  It's also likely
 easier to learn (assuming you don't already know matlab too well), since
 it'll be easier to guess the appropriate function name without reading
 through a long list of jumbled letters in function names.   Each method
 name will be more powerful, so I can see people having to reference docs
 less often.


 On Monday, April 27, 2015 at 8:49:43 PM UTC-4, Glen H wrote:

 +1 for using multi-dispatch to separate concepts and to make things more
 readable:

  cholfact -> factorize(Cholesky, )
  sprandn -> Sparse(Normal, m, n)

  The main benefit isn't readability but speed and generality.  If you
  want to use a different distribution (in sprandn) or a different
  factorization type, then that is easily parameterized and can be specialized
  by the compiler to make the fastest code.  This desire to stick with
  de facto standards doesn't make sense because it is coming from a
  different language that doesn't have multiple dispatch and has poor types.

 As an example, what would you call the function that factorizes
 polynomials?  The answer for MATLAB is factor.  I would much rather it be:

 factorize(Polynomial, )

 This is self documenting and all factorization methods have the same name
  and can be easily found. It would be nice to also have:

 ?factorize(Polynomial, )

 To return the help for how to factorize a polynomial and some way to find
 out all the types that can go in the first argument of factorize().

 If people are really set on not learning something new then there could
 be a MATLAB compatibility package that does:

  cholfact -> factorize(Cholesky ...)

 But that leads to bad code so I would rather it just be a chapter in the
 documentation for MATLAB users (or maybe a script to do the conversion).

 Forwards compatibility from MATLAB doesn't exist anyways so why stick to
 it when it leads to worse code?

 I agree with François's reasons for why people use MATLAB...it isn't
 because they came up with the best function names for all languages to
 use...it likely just happened and people got used to it.

 Glen




Re: [julia-users] Sparse cholesky and triangular solve

2015-04-27 Thread Andreas Noack
Right now, we don't support extraction of the sparse Cholesky factor, but I
think that it will be supported soon, see

https://github.com/JuliaLang/julia/pull/10402

I don't fully understand the structure of your problem, but I think that
you can only save flops by using the Cholesky factor when B=D. In that case
solving with the whole factorization and D should be okay from an
efficiency point of view. Alternative, you'll probably have to wait for the
pull request to be merged.
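
Concretely, a minimal sketch of that approach (quadform is a made-up helper
name, and dense right-hand sides are assumed):

F = cholfact(C)                      # factor the SPD matrix C once
quadform(B, D) = B' * (F \ full(D))  # B'*C^(-1)*D, reusing the factorization

The factorization is computed once and reused for every (B, D) pair, which is
where the savings come from.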

2015-04-27 15:47 GMT-04:00 matt katzf...@gmail.com:

 There are a small number of different Bs and different Ds, and I have to
 compute the quadratic form for a number of different combinations. I can
 probably do things the way you suggested, but I would lose the nice
 symmetry that I have in my current R code. (I am trying to switch to Julia
 for speed.)


 On Monday, April 27, 2015 at 2:30:00 PM UTC-5, Andreas Noack wrote:

 If B and D are different then why is it not okay to calculate x = C\D and
 then B'x afterwards?

 2015-04-27 15:24 GMT-04:00 matt katz...@gmail.com:

 I would like to compute multiple quadratic forms B'*C^(-1)*D, where B,
 C, and D are sparse, and C is always the same symmetric positive matrix.
 For fast computation, I would like to do this by computing and re-using the
 Cholesky factor: C=LL'. However, chol(C) does not work (because C is
 sparse), and cholfact(C)\D returns C^(-1)*D instead of C^(-1/2)*D.

 Minimal working example:
 C=4*speye(3);  # sparse matrix
 B=sparse(ones(1,3))  # sparse unit vector
 chol(C) # returns ERROR: Use cholfact() instead of chol() for sparse
 matrices...
 L=cholfact(C); # works, but returns Cholesky decomposition instead of
 cholesky factor
 L\B'  # would like this to be vector with elements .5 instead of .25

 Thank you!





Re: [julia-users] Solving a linear system with sparse, approximately low rank, and symmetric A

2015-04-27 Thread Andreas Noack
How large is your system? The calculation U * diagm(D) * U' * b could
easily use all of your RAM if the problem is large. Even if it is not
really large, you should evaluate it from the right and use Diagonal, i.e. U
* (Diagonal(D) * (U' * b)), because then you are not constructing a large
matrix, but only vectors.

If the problem is not that large, you might just want to write
pinv(full(A))*b because that is effectively the same as what you are doing
now, just with a sparse solver.

I don't think it is possible to specify a threshold in eigs.
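
For concreteness, a sketch of the right-to-left evaluation (Diagonal lives in
Base.LinAlg and is exported):

hat_x = U * (Diagonal(D) * (U' * b))  # temporaries are plain vectors
# versus U * diagm(D) * U' * b, which materializes p-by-p intermediates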

2015-04-27 15:12 GMT-04:00 Mladen Kolar mko...@gmail.com:

 Dear Andreas,

 Thank you for your response and I apologize for not writing earlier as I
 got swamped with teaching.

 Your suggestion of using cholfact together with shift argument is a
 reasonable one. I need to test it and see what results I'll get for a
 specific problem at hand. Also, I should probably make a leap and install
 v0.4.

 I would like to ask a related question on a slightly different approach.
 While in general the solution is not unique when A is low rank, it seems
 that in my problem vector b is in the range of A (this is an empirical
 observation). The following approach seems to result in an estimator with
 good statistical properties.

 # A -- sparse matrix
 # b -- vector

 D, U, _ = eigs(A; nev = size(A, 1)-1 )
 indC = findfirst((x) -> x < 5e-5, D)
 D[indC:end] = 0
 for j=1:indC
 D[j] = 1. / D[j]
 end
 hat_x = U * diagm(D) * U' * b

 In the above snippet, first eigen-decomposition of A is performed. Next, a
 low rank approximation of A is formed by keeping all eigenvalues larger
 that 5e-5 (this is a tuning parameter). Finally, x is obtained by computing
 the pseudo-inverse of the low-rank approximation and multiplying with b.

 The function eigs accepts the parameter nev, however, in general I do
 not know how many eigenvalues are going to be above a certain cutoff (in
 the snippet above, 5e-5). Is it possible to pass a parameter to obtain all
 eigenvalues and eigenvectors above certain threshold? Or should I simply
 identify one eigenvalue at a time and deflate A by removing the
 corresponding eigenvector?

 Thanks, Mladen


 On Tuesday, April 21, 2015 at 9:09:29 AM UTC-5, Andreas Noack wrote:

 I'm not sure what the best solution is here because I don't fully
 understand your objective. If A has low rank then the solution is not
 unique and if it has almost low rank, the solution is very ill conditioned.
 A solution could be our new shift argument to our complete Cholesky
 factorization. This will introduce some regularization to the problem. This
 requires latest master of Julia. Then you can do
 julia> A = sprandn(20,20,0.1); A = A'A;

 julia> b = A*ones(size(A, 1));

 julia> A\b
 ERROR: ArgumentError: matrix has one or more zero pivots
  in ldltfact at sparse/cholmod.jl:1042
  in ldltfact at sparse/cholmod.jl:1048
  in factorize at sparse/linalg.jl:688
  in \ at linalg/generic.jl:244

 julia> cholfact(A, shift = 1e-16)\b
 20-element Array{Float64,1}:
  1.0
  1.0
  1.0
  1.0
  1.0
  1.0
  1.0
  1.0
  1.0
  1.0
  1.0
  1.0
  1.38612
  0.0
  1.0
  1.0
  1.0
  1.0
  1.0
  1.0

 2015-04-20 20:32 GMT-04:00 Mladen Kolar mko...@gmail.com:

 Hi,

 I am interested in solving a linear system Ax = b where A is a sparse,
 symmetric positive semi-definite matrix.
 A is not exactly low rank, however, for the estimation problem at hand,
 one can approximate A with a matrix that has rank around sqrt(p), where A is
 a p x p matrix, and still obtain an estimator of x that will have good
 statistical properties.

 Is there an incomplete Cholesky factorization for sparse symmetric
 matrices in Julia? Or an SVD implementation for sparse matrices?
 What would be a recommended way of solving the above problem?

 Kind regards,
 Mladen





Re: [julia-users] Sparse cholesky and triangular solve

2015-04-27 Thread Andreas Noack
If B and D are different then why is it not okay to calculate x = C\D and
then B'x afterwards?

2015-04-27 15:24 GMT-04:00 matt katzf...@gmail.com:

 I would like to compute multiple quadratic forms B'*C^(-1)*D, where B, C,
 and D are sparse, and C is always the same symmetric positive matrix. For
 fast computation, I would like to do this by computing and re-using the
 Cholesky factor: C=LL'. However, chol(C) does not work (because C is
 sparse), and cholfact(C)\D returns C^(-1)*D instead of C^(-1/2)*D.

 Minimal working example:
 C=4*speye(3);  # sparse matrix
 B=sparse(ones(1,3))  # sparse unit vector
 chol(C) # returns ERROR: Use cholfact() instead of chol() for sparse
 matrices...
 L=cholfact(C); # works, but returns Cholesky decomposition instead of
 cholesky factor
 L\B'  # would like this to be vector with elements .5 instead of .25

 Thank you!



Re: [julia-users] Finite field / Gaulois field implementation

2015-04-24 Thread Andreas Noack
Hi Valentin

There are a couple of simple examples. At least this

http://acooke.org/cute/FiniteFiel1.html

and I did one in this

http://andreasnoack.github.io/talks/2015AprilStanford_AndreasNoack.ipynb

notebook. The arithmetic definitions are simpler for GF(2), but should be
simple modifications to the definitions in the notebook.
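
For reference, a minimal sketch of what the GF(2) definitions could look like
(assumed code in the spirit of the notebook, not copied from it):

import Base: +, -, *, zero, one

immutable GF2 <: Number
    x::Bool
end
+(a::GF2, b::GF2) = GF2(a.x $ b.x)  # addition mod 2 is xor ($ in 0.3/0.4)
-(a::GF2, b::GF2) = a + b           # every element is its own additive inverse
*(a::GF2, b::GF2) = GF2(a.x & b.x)
zero(::Type{GF2}) = GF2(false)
one(::Type{GF2}) = GF2(true)

GF2(true) + GF2(true)               # GF2(false)

With zero and one defined, generic Base code such as matrix multiplication can
already operate on GF2 arrays.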

2015-04-24 2:50 GMT-04:00 Valentin Churavy v.chur...@gmail.com:

 Hi,

 Did anybody try to implement finite field arithmetic in Julia? I would
 be particularly interested in GF(2), e.g. base 2 arithmetic modulo 2.

 Best,
 Valentin



Re: [julia-users] type-stable way to get primary type

2015-04-22 Thread Andreas Noack
This problem is quite common in the LinAlg code. We have two types of
definitions to handle the conversion of the element types. First, an idea
due to Jeff as implemented in

https://github.com/JuliaLang/julia/blob/237cdab7100b29a6769313391b1f8d2563ada06e/base/linalg/triangular.jl#L20

and

https://github.com/JuliaLang/julia/blob/237cdab7100b29a6769313391b1f8d2563ada06e/base/linalg/symmetric.jl#L34

These create maybe-aliased objects which can be a problem in the destructive
linalg functions, and therefore Simon Kornblith invented copy_oftype, of which
you can see an example here

https://github.com/JuliaLang/julia/blob/237cdab7100b29a6769313391b1f8d2563ada06e/base/linalg/symmetric.jl#L34

Notice that it requires two method definitions for each type.
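
For reference, the pattern is roughly (a simplified sketch, not the exact
Base source):

copy_oftype{T}(A::AbstractArray{T}, ::Type{T}) = copy(A)
copy_oftype{T,N,S}(A::AbstractArray{T,N}, ::Type{S}) = convert(AbstractArray{S,N}, A)

The first method avoids a conversion when the element type already matches;
the second returns a freshly converted copy that a destructive routine can
safely overwrite.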

2015-04-22 8:57 GMT-04:00 Tim Holy tim.h...@gmail.com:

 As Jeff says...this is why `similar` exists.

 --Tim

 On Wednesday, April 22, 2015 10:03:02 AM Mauro wrote:
  Thanks Jameson and here the context:
 
  # Initializes an array which has a's container but b's eltype
  function f(a::AbstractVector, b::AbstractVector)
  Eb = eltype(b)  # this is type-stable
  Ca = typeof(a).name.primary # this is not type-stable
  return Ca(Eb, 5)# assumes Ca supports the normal Array
  constructor end
 
  The last line would probably better use copy+convert but for that I
  still need to make Ca{Eb,1}.  Note that Ca is always a leaftype and I
  don't want to climb the type hierarchy, so I think your remark about the
  subtypes does not apply.
 
  Here the typed code:
  julia> @code_warntype f([1,2], [1.])
  Variables:
a::Array{Int64,1}
b::Array{Float64,1}
Eb::Type{Float64}
Ca::Type{T} # ---
 
  Body:
begin  # none, line 3:
Eb = Float64 # line 4:
Ca =
 
 (top(getfield))((top(getfield))(typeof(a::Array{Int64,1})::Type{Array{Int64
  ,1}},:name)::TypeName,:primary)::Type{T} # line 5: return
  (Ca::Type{T})(Eb::Type{Float64},5)::Any
end::Any
 
  Thanks!
 
  On Tue, 2015-04-21 at 22:56, Jameson Nash vtjn...@gmail.com wrote:
   Your question presupposes that you can write useful generic code with
 the
   return result, but does not provide any context on your problem. When a
   similar question was asked previously on the mailing list, I recall
   pointing out that the code the author was attempting to write was not
   going
   to function as intended. Perhaps you can provide some more context?
  
   The observation that I am making is that you cannot arbitrarily pick
 apart
   a type and expect everything to line up afterwards. For example, it is
   possible to have the following:
   abstract AbstractTy{A,B}
    type Ty1 <: AbstractTy{Int, Float64} end
    type Ty2{B,A} <: AbstractTy{A,B} end
  
   So you can't generically reconstruct some arbitrary subtype by some
   generic
   rearrangement of its type parameters.
  
   On Tue, Apr 21, 2015 at 4:34 PM Mauro mauro...@runbox.com wrote:
   I have a parameterized type and want to get the primary-type.  For
   example, I got
  
   a = Array{Int,2}
  
   is there a type-stable way to get Array?  This is non-type stable:
  
   f(t::Type) = t.name.primary
  
   as the inferred return type is Type.
  
   This does not work:
  
   f{T, S}(::Type{T{S...}}) = T
  
   Any ideas? Thanks, M




Re: [julia-users] CholmodFactor not defined?

2015-04-21 Thread Andreas Noack
The reason is that it is not exported so you can either use the full path

bar(Base.LinAlg.CHOLMOD.CholmodFactor{Float64,Int64}) = 1

or use the type first

using Base.LinAlg.CHOLMOD.CholmodFactor
bar(CholmodFactor{Float64,Int64}) = 1

2015-04-21 10:27 GMT-04:00 andreas scheidegge...@gmail.com:


 Hi everybody,

 It seems that CholmodFactor is not defined when I try to use it to define
 the type of a function parameter.

 For example:

 typeof(cholfact(speye(5)))
 -> CholmodFactor{Float64,Int64} (constructor with 1 method)

 So far so good. However, that does not work:

 bar(x::CholmodFactor{Float64,Int64}) = 1
 -> ERROR: CholmodFactor not defined


 I have the feeling I miss something very obvious here...
 Any help is very much appreciated.

 Andreas

 PS: using v0.3.7 on Windows 7





Re: [julia-users] Solving a linear system with sparse, approximately low rank, and symmetric A

2015-04-21 Thread Andreas Noack
I'm not sure what the best solution is here because I don't fully
understand your objective. If A has low rank then the solution is not
unique and if it has almost low rank, the solution is very ill conditioned.
A solution could be our new shift argument to our complete Cholesky
factorization. This will introduce some regularization to the problem. This
requires latest master of Julia. Then you can do
julia> A = sprandn(20,20,0.1); A = A'A;

julia> b = A*ones(size(A, 1));

julia> A\b
ERROR: ArgumentError: matrix has one or more zero pivots
 in ldltfact at sparse/cholmod.jl:1042
 in ldltfact at sparse/cholmod.jl:1048
 in factorize at sparse/linalg.jl:688
 in \ at linalg/generic.jl:244

julia> cholfact(A, shift = 1e-16)\b
20-element Array{Float64,1}:
 1.0
 1.0
 1.0
 1.0
 1.0
 1.0
 1.0
 1.0
 1.0
 1.0
 1.0
 1.0
 1.38612
 0.0
 1.0
 1.0
 1.0
 1.0
 1.0
 1.0

2015-04-20 20:32 GMT-04:00 Mladen Kolar mko...@gmail.com:

 Hi,

 I am interested in solving a linear system Ax = b where A is a sparse,
 symmetric positive semi-definite matrix.
 A is not exactly low rank, however, for the estimation problem at hand,
 one can approximate A with a matrix that has rank around sqrt(p), where A is
 a p x p matrix, and still obtain an estimator of x that will have good
 statistical properties.

 Is there an incomplete Cholesky factorization for sparse symmetric
 matrices in Julia? Or an SVD implementation for sparse matrices?
 What would be a recommended way of solving the above problem?

 Kind regards,
 Mladen



Re: [julia-users] How to declare correctly this matrix?

2015-04-21 Thread Andreas Noack
Hi Michela

It is easier to help if your example is complete such that it can just be
pasted into the terminal. The variable gmax is not defined in your example,
but I guess it is equal to length(SUC_C). It is also useful to provide the
exact error message.

That said, I think the root of the problem is the dependency between the
loop variables in the comprehension. The loop variable r depends on the
other loop variable j, which is not allowed in comprehensions. A solution
could be to preallocate suc, e.g. something like

suc = zeros(length(SUC_C), length(tau_max))
for j = 1:length(SUC_C)
for r = 1:tau_max[j]
suc[j,r] = SUC_C[j]*(1 - 1/(r+1))
end
end

2015-04-21 12:21 GMT-04:00 Michela Di Lullo michela.dilu...@uniroma1.it:

 Hello everyone,

 I'm trying to declare the following matrix:

 suc=[SUC_C[j]*(1-1/(r+1)) for j=1:gmax, r=1:tau_max[j]]

 whereas:

 SUC_C=[6661.09, 8236.1, 7619.48, 8462.68, 5705.73, 6040.87]

 tau_max=[4,4,3,2,1,4]

 but it's not working.

 Any idea about how to make it correctly?

 Thanks for any info,

 Michela



Re: [julia-users] Re: Fastest way for constructing a hermitian matrix

2015-04-18 Thread Andreas Noack
We mainly use SymTridiagonal for eigenvalue problems and therefore it is not
necessary to allow for complex matrices because the Hermitian eigenvalue
problem can be reduced to a real symmetric problem. It might be easier to
specify the problem in Hermitian form, so we might change this. What is
your use case?

2015-04-18 17:11 GMT-04:00 Edward Chen echen...@gmail.com:

 Nvm, I think I can just write half the matrix, B, and do B + B' = A, the
 hermitian matrix that I want.


 On Saturday, April 18, 2015 at 5:10:38 PM UTC-4, Edward Chen wrote:

 To whom it may concern:

 I am interested in constructing a Hermitian matrix, and entering only the
  diagonals + upper-diagonals. It seems 'SymTridiagonal' works for reals,
 but I have a complex matrix.

 Thanks,
 Ed
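
A sketch of the B + B' construction mentioned above (assumed code, 0.3-era
syntax); note that B + B' doubles the diagonal, so it needs a real-valued
correction:

n = 4
B = triu(complex(randn(n, n), randn(n, n)))  # diagonal + upper triangle only
A = B + B' - diagm(real(diag(B)))            # undo the doubled diagonal; keeps it real
ishermitian(A)                               # true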




Re: [julia-users] pinv(0) != pinv([0])

2015-04-16 Thread Andreas Noack
This has been fixed on master. I've just backported the fix to the release
branch so it should be okay in 0.3.8.

2015-04-16 4:05 GMT-04:00 Rasmus Brandt rasmus...@gmail.com:

 Hey everyone,

 I just stumbled over this behaviour in Julia-0.3.7, which seems a bit
 unintuitive to me:

 julia> pinv(0)
 Inf

 julia> pinv([0])
 1x1 Array{Float64,2}:
  0.0

 In Matlab R2012a I get:

  >> pinv(0)

 ans =

  0

 It seems that the definition
 pinv(x::Number) = one(x)/x
 satisfies the Moore-Penrose conditions for all x != 0.

 Is this intentional, or a bug?

 -- Rasmus



Re: [julia-users] Re: Livestream of ICME LA/Opt Seminar today 4:15pm PDT (Andreas Noack, MIT)

2015-04-12 Thread Andreas Noack
The notebook is now available from

http://andreasnoack.github.io/talks/2015AprilStanford_AndreasNoack.ipynb

Note that it is based on master so some parts of the code might fail on
Julia release.

2015-04-11 15:21 GMT-04:00 Andreas Noack andreasnoackjen...@gmail.com:

 I've been in transit back to Boston and the notebook requires a bit of
 post processing, but I'll try to make the notebook available tonight or
 tomorrow.

 2015-04-11 13:21 GMT-04:00 Jim Garrison j...@garrison.cc:

 Is the IJulia notebook associated with this talk available online anywhere?


 On Saturday, April 11, 2015 at 7:33:30 AM UTC-7, Viral Shah wrote:

 This talk is highly recommended, if you have been following the
 development of linear algebra in Julia from the sidelines, and want to know
 a little more about why things are the way they are, and where it is headed.

 -viral

 On Friday, April 10, 2015 at 11:02:52 PM UTC+5:30, Nick Henderson wrote:

 Certainly!  The video can be found here:

   https://www.youtube.com/watch?v=VS0fnUOAKpI

 Other ICME videos:

   https://www.youtube.com/user/ICMEStudio

 History of Gaussian Elimination is quite interesting:

   https://www.youtube.com/watch?v=KxmmYve4AX0

 Cheers,
 Nick


 On Friday, April 10, 2015 at 12:18:56 AM UTC-7, Valentin Churavy wrote:

 Could you send out an e-mail when the video goes online in the archive?

 On Friday, 10 April 2015 08:26:15 UTC+9, Nick Henderson wrote:

 In setting up the livestream a new link was created:

   https://www.youtube.com/watch?v=WrFURbHwwrs

 Videos are archived here:

   https://www.youtube.com/channel/UCizxnsw19qcTOdJdIJVtl0Q

 On Thursday, April 9, 2015 at 11:39:07 AM UTC-7, Nick Henderson wrote:

 Hello All,

 The Institute for Computational and Mathematical Engineering at
  Stanford is pleased to have Andreas Noack and Jiahao Chen speaking
 in our Linear Algebra and Optimization seminar this Thursday and next.
 Today's talk will be livestreamed via YouTube starting at 4:15pm PDT.

 Livestream link:  https://www.youtube.com/watch?v=a_bFB1BZbvI

 (Videos will also be made available on YouTube after the seminar.)

 We hope you can tune in!

   CME 510 Spring 2015
   Linear Algebra and Optimization Seminar
   ICME, Stanford University
   http://icme.stanford.edu/

   4:15pm PDT Thursday April 9

   Fast and flexible linear algebra in Julia
   Andreas Noack, MIT CSAIL

 Applied scientists often develop computer programs exploratively,
 where data examination, manipulation, visualization and code
 development are tightly coupled.  Traditionally, the programming
 languages used are slow, with performance critical computations
 relegated to library code written in languages on the other side of
 Ousterhout's dichotomy, e.g. LAPACK.  I will introduce the Julia
 programming language and argue that it is well suited for
 computational
 linear algebra.  Julia provides features for exploratory program
 development, but the language itself can be almost as fast as C and
 Fortran.  Furthermore, Julia's rich type system makes it possible to
 extend linear algebra functions with user defined element types, such
  as finite fields or strings with algebraic structure attached.  I
 will show examples of Julia programs that are relatively simple, yet
 fast and flexible at the same time.  Finally, the potential and
 challenges for parallel linear algebra in Julia will be discussed.





Re: [julia-users] Re: Livestream of ICME LA/Opt Seminar today 4:15pm PDT (Andreas Noack, MIT)

2015-04-11 Thread Andreas Noack
I've been in transit back to Boston and the notebook requires a bit of post
processing, but I'll try to make the notebook available tonight or tomorrow.

2015-04-11 13:21 GMT-04:00 Jim Garrison j...@garrison.cc:

 Is the IJulia notebook associated with this talk available online anywhere?


 On Saturday, April 11, 2015 at 7:33:30 AM UTC-7, Viral Shah wrote:

 This talk is highly recommended, if you have been following the
 development of linear algebra in Julia from the sidelines, and want to know
 a little more about why things are the way they are, and where it is headed.

 -viral

 On Friday, April 10, 2015 at 11:02:52 PM UTC+5:30, Nick Henderson wrote:

 Certainly!  The video can be found here:

   https://www.youtube.com/watch?v=VS0fnUOAKpI

 Other ICME videos:

   https://www.youtube.com/user/ICMEStudio

 History of Gaussian Elimination is quite interesting:

   https://www.youtube.com/watch?v=KxmmYve4AX0

 Cheers,
 Nick


 On Friday, April 10, 2015 at 12:18:56 AM UTC-7, Valentin Churavy wrote:

 Could you send out an e-mail when the video goes online in the archive?

 On Friday, 10 April 2015 08:26:15 UTC+9, Nick Henderson wrote:

 In setting up the livestream a new link was created:

   https://www.youtube.com/watch?v=WrFURbHwwrs

 Videos are archived here:

   https://www.youtube.com/channel/UCizxnsw19qcTOdJdIJVtl0Q

 On Thursday, April 9, 2015 at 11:39:07 AM UTC-7, Nick Henderson wrote:

 Hello All,

 The Institute for Computational and Mathematical Engineering at
  Stanford is pleased to have Andreas Noack and Jiahao Chen speaking
 in our Linear Algebra and Optimization seminar this Thursday and next.
 Today's talk will be livestreamed via YouTube starting at 4:15pm PDT.

 Livestream link:  https://www.youtube.com/watch?v=a_bFB1BZbvI

 (Videos will also be made available on YouTube after the seminar.)

 We hope you can tune in!

   CME 510 Spring 2015
   Linear Algebra and Optimization Seminar
   ICME, Stanford University
   http://icme.stanford.edu/

   4:15pm PDT Thursday April 9

   Fast and flexible linear algebra in Julia
   Andreas Noack, MIT CSAIL

 Applied scientists often develop computer programs exploratively,
 where data examination, manipulation, visualization and code
 development are tightly coupled.  Traditionally, the programming
 languages used are slow, with performance critical computations
 relegated to library code written in languages on the other side of
 Ousterhout's dichotomy, e.g. LAPACK.  I will introduce the Julia
 programming language and argue that it is well suited for
 computational
 linear algebra.  Julia provides features for exploratory program
 development, but the language itself can be almost as fast as C and
 Fortran.  Furthermore, Julia's rich type system makes it possible to
 extend linear algebra functions with user defined element types, such
 as finite fields or strings with algebraic structured attached.  I
 will show examples of Julia programs that are relatively simple, yet
 fast and flexible at the same time.  Finally, the potential and
 challenges for parallel linear algebra in Julia will be discussed.




Re: [julia-users] does pca() center the data?

2015-04-06 Thread Andreas Noack
Which pca?

2015-04-06 6:53 GMT-07:00 Steven Sagaert steven.saga...@gmail.com:

 does pca() center the input & output data or do you have to do that
 yourself?



Re: [julia-users] does pca() center the data?

2015-04-06 Thread Andreas Noack
There is no pca in Julia Base

2015-04-06 9:16 GMT-07:00 Steven Sagaert steven.saga...@gmail.com:

 the one from the standard lib

 On Monday, April 6, 2015 at 4:01:00 PM UTC+2, Andreas Noack wrote:

 Which pca?

 2015-04-06 6:53 GMT-07:00 Steven Sagaert steven@gmail.com:

 does pca() center the input & output data or do you have to do that
 yourself?





Re: [julia-users] Calling gemm! on part of an array

2015-03-29 Thread Andreas Noack
I think that you could use sub(A,:,2:2:4) for BLAS, but not sub(A,:,[2,4])
because the indexing has to be with ranges for BLAS to be able to extract
the right elements of the matrix.

2015-03-29 17:34 GMT-04:00 Dominique Orban dominique.or...@gmail.com:

 Sorry if this is another [:] kind of question, but I can't seem to find
 the right syntax to call BLAS.gemm! on part of an array. The piece of code

 julia> n = 10; m = 5; idx = [2, 4];
 julia> C = rand(n, m); A = rand(n, m); B = rand(m, m);
 julia> BLAS.gemm!('N', 'N', 1.0, A[:,idx], B[:,idx], 1.0, C[:,idx]);

 won't modify C. I'm simply trying to do

 C[:,idx] += A[:,idx] * B[:,idx];

 What's the right syntax here?

 It seems ArrayViews.jl might be the way to go, but it doesn't let me do
 view(A, :, idx).

 Thanks!



Re: [julia-users] Calling gemm! on part of an array

2015-03-29 Thread Andreas Noack
C[:,idx] creates a copy, so gemm updates a copy of part of C and then
discards it after the computation. sub(C,:,idx) creates a view instead of a
copy.

2015-03-29 18:10 GMT-04:00 Dominique Orban dominique.or...@gmail.com:

 Sorry there was a typo in my example. It should have been

 BLAS.gemm!('N', 'N', 1.0, A[:,idx], B[idx,idx], 1.0, C[:,idx]);

 for the dimensions to work out.

 The gemm! command works fine, is happy with the dimensions, and produces
 the correct update. As far as I can tell,

 C1 = C[:,idx];
 BLAS.gemm!('N', 'N', 1.0, A[:,idx], B[:,idx], 1.0, C1);

 does what it's supposed to do, it just doesn't update C (which is awkward
 for a ! function, though I realize there's something else going on here).


 On Sunday, March 29, 2015 at 5:56:27 PM UTC-4, Andreas Noack wrote:

 I think that you could use sub(A,:,2:2:4) for BLAS, but not
 sub(A,:,[2,4]) because the indexing has to be with ranges for BLAS to be
 able to extract the right elements of the matrix.

 2015-03-29 17:34 GMT-04:00 Dominique Orban dominiq...@gmail.com:

 Sorry if this is another [:] kind of question, but I can't seem to find
 the right syntax to call BLAS.gemm! on part of an array. The piece of code

  julia> n = 10; m = 5; idx = [2, 4];
  julia> C = rand(n, m); A = rand(n, m); B = rand(m, m);
  julia> BLAS.gemm!('N', 'N', 1.0, A[:,idx], B[:,idx], 1.0, C[:,idx]);

 won't modify C. I'm simply trying to do

 C[:,idx] += A[:,idx] * B[:,idx];

 What's the right syntax here?

 It seems ArrayViews.jl might be the way to go, but it doesn't let me do
 view(A, :, idx).

 Thanks!





Re: [julia-users] Calling gemm! on part of an array

2015-03-29 Thread Andreas Noack
It is supported in SubArrays and you can also multiply with these in place
with A_mul_B!, but this will be calling a generic multiplication method
implemented in Julia instead of BLAS. So far it doesn't support the α and β
arguments of C:=αAB+βC, but I'm working on that.
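
A sketch of that route (assumed code, on 0.4-era master where A_mul_B! has a
generic fallback for AbstractMatrix arguments, as described above):

Cv = sub(C, :, idx)                         # view into C, not a copy
tmp = zeros(size(Cv))                       # workspace; no α/β support yet
A_mul_B!(tmp, sub(A, :, idx), B[idx, idx])  # tmp = A[:,idx] * B[idx,idx]
Cv[:, :] = Cv + tmp                         # accumulate back through the view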

2015-03-29 18:23 GMT-04:00 Dominique Orban dominique.or...@gmail.com:

 Unfortunately, my idx will be computed on the fly and there's zero chance
 that it would be a range. Is there a plan to support more general indexing
 in subarrays and/or ArrayView?


 On Sunday, March 29, 2015 at 6:13:16 PM UTC-4, Andreas Noack wrote:

 C[:,idx] creates a copy, so gemm updates a copy of part of C and then
  discards it after the computation. sub(C,:,idx) creates a view instead of a
 copy.

 2015-03-29 18:10 GMT-04:00 Dominique Orban dominiq...@gmail.com:

 Sorry there was a typo in my example. It should have been

 BLAS.gemm!('N', 'N', 1.0, A[:,idx], B[idx,idx], 1.0, C[:,idx]);

 for the dimensions to work out.

 The gemm! command works fine, is happy with the dimensions, and produces
 the correct update. As far as I can tell,

 C1 = C[:,idx];
 BLAS.gemm!('N', 'N', 1.0, A[:,idx], B[:,idx], 1.0, C1);

 does what it's supposed to do, it just doesn't update C (which is
 awkward for a ! function, though I realize there's something else going on
 here).


 On Sunday, March 29, 2015 at 5:56:27 PM UTC-4, Andreas Noack wrote:

 I think that you could use sub(A,:,2:2:4) for BLAS, but not
 sub(A,:,[2,4]) because the indexing has to be with ranges for BLAS to be
 able to extract the right elements of the matrix.

 2015-03-29 17:34 GMT-04:00 Dominique Orban dominiq...@gmail.com:

 Sorry if this is another [:] kind of question, but I can't seem to
 find the right syntax to call BLAS.gemm! on part of an array. The piece of
 code

  julia> n = 10; m = 5; idx = [2, 4];
  julia> C = rand(n, m); A = rand(n, m); B = rand(m, m);
  julia> BLAS.gemm!('N', 'N', 1.0, A[:,idx], B[:,idx], 1.0, C[:,idx]);

 won't modify C. I'm simply trying to do

 C[:,idx] += A[:,idx] * B[:,idx];

 What's the right syntax here?

 It seems ArrayViews.jl might be the way to go, but it doesn't let me
 do view(A, :, idx).

 Thanks!






Re: [julia-users] code generation with @eval

2015-03-26 Thread Andreas Noack
Distributed reduce is already implemented, so maybe make these slightly
simpler with e.g. sum(A::DArray) = reduce(Base.AddFun(), A)
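
For the original @eval problem quoted below, a sketch of the fix (assumed,
not from the thread): hoist the import/export out of @eval and only generate
the method bodies:

import Base: sum, minimum, maximum, mean
export sum, minimum, maximum, mean

for f in [:sum, :minimum, :maximum, :mean]
    @eval function ($f)(D::DArray)
        refs = [@spawnat p ($f)(localpart(D)) for p in procs(D)]
        ($f)([fetch(r) for r in refs])
    end
end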

2015-03-26 8:41 GMT-04:00 Jameson Nash vtjn...@gmail.com:

 `eval` (typically) isn't allowed to handle `import` and `export`
 statements. Those must be written explicitly.

 On Thu, Mar 26, 2015 at 8:18 AM Amit Murthy amit.mur...@gmail.com wrote:

 I was trying to add a bunch  of common functions to DistributedArrays.jl
 with the below code block

 for f in [:sum, :minimum, :maximum, :mean]
 @eval begin
 import Base: ($f)
 export ($f)
 function ($f)(D::DArray)
 refs = [@spawnat p ($f)(localpart(D)) for p in procs(D)]
 ($f)([fetch(r) for r in refs])
 end
 end
 end



 But I get an error

 ERROR: LoadError: syntax: invalid import statement: expected identifier
  in include at ./boot.jl:250
  in include_from_node1 at ./loading.jl:129
  in reload_path at ./loading.jl:153
  in _require at ./loading.jl:68
  in require at ./loading.jl:51
  in process_options at ./client.jl:292
  in _start at ./client.jl:402
  while loading /home/amitm/.julia/v0.4/DistributedArrays/src/DistributedArrays.jl,
  in expression starting on line 497



 What am I doing wrong ?




Re: [julia-users] Is there a Julia equivalent to Mathematica's Tally?

2015-03-26 Thread Andreas Noack
I think that countmap in StatsBase does that
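
A usage sketch (assuming the StatsBase package is installed):

using StatsBase
countmap([1, 2, 2, 3, 3, 3])  # Dict with 1=>1, 2=>2, 3=>3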

On Thursday, March 26, 2015, DumpsterDoofus
peter.richter@gmail.com wrote:

 In Mathematica, there is a function called Tally which takes a list and
 returns a list of the unique elements of the input list, along with their
 multiplicities. I.e, it takes a list of length N and returns an array of
 dimensions (N,2), namely the pairs of elements and the number of times they
 appear.

 Is there a similar (but perhaps differently-named) function in Julia to
 tally a list?



Re: [julia-users] Parallel performance test question

2015-03-18 Thread Andreas Noack
It has caused a lot of frustration. See #9118. I think the easiest right
now is
for p in procs()
@spawnat p blas_set_num_threads(k)
end

2015-03-17 23:19 GMT-04:00 Sheehan Olver dlfivefi...@gmail.com:

 Hi,

 I've created the following to test the performance of parallel processing
 on our department's server.  But the @everywhere isn't working because
 ERROR: k not defined.   What's wrong?  And any advice on improving the
 test?  (Right now for small n there is little benefit for > 12 processes,
 but this seems expected behaviour to me.  There are many other processes
 maxing out CPU simultaneous with the test.)

 Sheehan



 function timemat(n,m,b,p)
 rmprocs(procs()[2:end])
 tms=Array(Float64,p,b)
 for j=1:p
 for k=1:b
 @everywhere blas_set_num_threads(k)

 # following line is just to make sure everything is precompiled
 @parallel (+) for k=1:2p
 last(randn(n,n)\[1;zeros(n-1)])
  end;

 tms[j,k]=@elapsed(@parallel (+) for k=1:m
 last(randn(n,n)\[1;zeros(n-1)])
  end)

 println(procs=$j blas=$k $(tms[j,k]))
 end
 addprocs(1)
 end
 tms
 end




Re: [julia-users] Change array of arrays to a one dimensional array

2015-03-17 Thread Andreas Noack
new_array = vcat(data...)

2015-03-17 20:59 GMT-04:00 Christopher Fisher fishe...@miamioh.edu:



 Hi all-

 pmap outputs the results as an array of arrays and I am trying to find a
 flexible way to change it into a one dimensional array. I can hardcode the
 results as new_array = vcat(data[1],data[2],data[3],data[4]). Is there an
 easy way to accomplish this without hardcoding each data[] into vcat?

 Thank you in advance



Re: [julia-users] Looking for incomplete SVD -- need only some of the singular values

2015-03-16 Thread Andreas Noack
I've tried to make the package that Jiahao mentioned usable. I think it
works, but it probably still has some rough edges. You can find it here

https://github.com/andreasnoack/TSVD.jl

and there is a help entry for tsvd that explains the arguments.

For a 2000x2000 dense non-symmetric complex matrix my tsvd runs in

julia> @time tsvd(A);

elapsed time: 0.354467696 seconds (16 MB allocated, 0.64% gc time in 1
pauses with 0 full sweep)

The svds in base runs a bit slower

julia> @time svds(A);
elapsed time: 2.034591908 seconds (134 MB allocated, 0.25% gc time in 6
pauses with 0 full sweep)

2015-03-16 0:10 GMT-04:00 Jiahao Chen cjia...@gmail.com:

  I've looked for other packages that could be wrapped, but couldn't find
 any that offers this feature. The only thing I found is a description of
 the svds Matlab routine, which is apparently based on eigs.

 If you want a canned routine, you're best off with svds(A) in base Julia
 (wraps ARPACK) or tsvd(A) in https://github.com/andreasnoack/PROPACK.jl
 (work in progress to wrap PROPACK). The other package I'm aware of is
 SLEPc, but it requires the entire PETSc stack.

 Note that the singular values of A are essentially the square root of the
 magnitude of the eigenvalues of AA', or A'A, or [0 A; A' 0], so eigenvalue
 routines can be used to solve the singular value problem.

 If you only need the largest singular value and its left singular vector,
 a simple quick and dirty DIY scheme is to do power iteration on AA':

 A = randn(m, n)
 u = randn(m) #some guess (length m: u lives in the column space of A)
 for i=1:10 #some number of iterations
u = A*(A'u)
scale!(u, 1/norm(u))
 end
 σ = norm(A'u) #σ, u are the largest singular value and its left singular
 vector

 There is a simple generalization to the largest k singular values and
 their left singular vectors.
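
For the k-vector case, a hedged sketch of that block (subspace) iteration
(assumed code, not from the thread; qr re-orthonormalizes the block each step):

function topk_leftvecs(A, k; iters = 20)
    U = qr(randn(size(A, 1), k))[1]    # random orthonormal start
    for i = 1:iters
        U = qr(A*(A'U))[1]             # power step on AA', then re-orthonormalize
    end
    σ = [norm(A'U[:, j]) for j = 1:k]  # approximate top-k singular values
    return σ, U
end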



Re: [julia-users] Solving a lower triangular system of equations

2015-03-16 Thread Andreas Noack
On 0.3.x there is a very expensive error bounds calculation in the
triangular solve, which is the reason for the surprisingly slow calculation.
This is not acceptable and we have therefore removed the error bounds
calculation in 0.4. On my machine I get

julia> @time L\B;
elapsed time: 2.535437796 seconds (8096672 bytes allocated)

on 0.3.7 and

julia> @time L\B;
elapsed time: 0.022492837 seconds (15 MB allocated, 11.99% gc time in 1
pauses with 0 full sweep)

on 0.4.

2015-03-16 18:22 GMT-04:00 matt katzf...@gmail.com:

 Hi,

 Assume that L is the lower Cholesky factor of a symmetric p.d. matrix A,
 such that A = LL'. I am trying to calculate L^(-1)B for some matrix B, but
 my current way of doing it is extremely slow.

 Here is an example:
 using Distances
 n=1000
 A=exp(-pairwise(Euclidean(),[1:n]',[1:n]')/n*50)
 B=rand(n,n)
 L=chol(A,:L)  # lower cholesky factor of A
 @time L\B  # this is what I want, but it is slow (1.2 seconds on my
 computer)

 Compare this to the time it takes to solve the following (harder) problem
 of computing A^(-1)B=L'^(-1)(L^(-1)B):
 Achol=cholfact(A,:L)  # cholesky representation of A = LL'
 @time Achol\B  # this is much faster (.04 seconds), even though it solves
 the harder problem L'^(-1)(L^(-1)B)
 @time Achol[:L]\B # again, this is what I want, but computation is slow
 (1.3 seconds)

 I suspect this discrepancy in timing is because Achol\B uses
 back-/forward-solves that exploit the triangular structure of Cholesky
 factors, whereas L\B does not. Is there any way I can obtain L^(-1)B much
 faster?

 Thank you!



Re: [julia-users] Tolerance for the checks in cholfact too small?

2015-03-14 Thread Andreas Noack
Good to to hear that. I've filed an issue to figure out how to make this
more consistent

https://github.com/JuliaLang/julia/issues/10520

2015-03-14 21:06 GMT-04:00 Kristoffer Carlsson kcarlsso...@gmail.com:



 On Sunday, March 15, 2015 at 1:02:11 AM UTC+1, Andreas Noack wrote:

 You can get around this by specifying that the matrix is symmetric. This
 can be done with

 cholfact(Symmetric(A, :L))


 Thank you, just what I was looking for.


 which then bypasses the test for symmetry and cholfact only looks at the
 lower triangle.

 However, the error you got is not consistent with the way cholfact works
 for dense matrices, so it might be worth changing the behavior of either
 dense or sparse cholfact. Dense cholfact doesn't check for symmetry.

 2015-03-14 17:37 GMT-04:00 Kristoffer Carlsson kcarl...@gmail.com:

 From my Finite Element code I generate a global stiffness matrix K.

 However, I can not use the cholesky factorization on it because of

 ERROR: ArgumentError: sparse matrix is not symmetric/Hermitian
  in cholfact at sparse/cholmod.jl:916
  in cholfact at sparse/cholmod.jl:991

 However, I am pretty sure it is symmetric and positive definite.

 For example:
  julia> issym(K)
 false

  julia> maximum(abs(K - K'))
 4.440892098500626e-16

 Matlab for example has no problem running chol on it when I copy + paste
 it.

 This works (of course):
  julia> cholfact((K + K')/2)

 Are the tolerances for the Hermitian check set too small or is there
 maybe a specific symmetric sparse type I should use when I know my matrix
 will be symm + pos def?

 Best regards,
 Kristoffer Carlsson





Re: [julia-users] Tolerance for the checks in cholfact too small?

2015-03-14 Thread Andreas Noack
You can get around this by specifying that the matrix is symmetric. This
can be done with

cholfact(Symmetric(A, :L))

which then bypasses the test for symmetry and cholfact only looks at the
lower triangle.

However, the error you got is not consistent with the way cholfact works
for dense matrices, so it might be worth changing the behavior of either
dense or sparse cholfact. Dense cholfact doesn't check for symmetry.

2015-03-14 17:37 GMT-04:00 Kristoffer Carlsson kcarlsso...@gmail.com:

 From my Finite Element code I generate a global stiffness matrix K.

 However, I can not use the cholesky factorization on it because of

 ERROR: ArgumentError: sparse matrix is not symmetric/Hermitian
  in cholfact at sparse/cholmod.jl:916
  in cholfact at sparse/cholmod.jl:991

 However, I am pretty sure it is symmetric and positive definite.

 For example:
 julia> issym(K)
 false

 julia> maximum(abs(K - K'))
 4.440892098500626e-16

 Matlab for example has no problem running chol on it when I copy + paste
 it.

 This works (of course):
 julia cholfact((K + K')/2)

 Are the tolerances for the Hermitian check set too small or is there maybe
 a specific symmetric sparse type I should use when I know my matrix will be
 symm + pos def?

 Best regards,
 Kristoffer Carlsson



Re: [julia-users] Saving timing results from @time

2015-03-11 Thread Andreas Noack
@elapsed is what you are looking for

2015-03-11 7:43 GMT-04:00 Patrick Kofod Mogensen patrick.mogen...@gmail.com
:

 I am testing the run times of two different algorithms, solving the same
 problem. I know there is the @time macro, but I cannot seem to wrap my head
 around how I should save the printed times. Any clever way of doing this? I
 thought I would be able to

 [@time algo(input) for i = 1:500],

 but this saves the return value of algo.


 Best,
 Patrick



Re: [julia-users] Dimension mismatch error in Julia!

2015-03-10 Thread Andreas Noack
It is more helpful if you can provide a self-contained example that we can
run. However, I think you've been bitten by our whitespace concatenation.
When you define f, the second element is written

-w^2*sin(x)-u*w^2*cos(x) -2γ*y

but I think that is getting parsed as

hvcat(-w^2*sin(x)-u*w^2*cos(x), -2γ*y)

if instead you had a space after the minus, it would work. Like

-w^2*sin(x)-u*w^2*cos(x) - 2γ*y

That is a bit unfortunate and something that is heavily discussed right
now.

2015-03-10 5:17 GMT-04:00 Kacem HARIZ kacem.ha...@gmail.com:

 Here is the code:

 w=1;
 γ=0.01;
 f(x,y)= [y;-w^2*sin(x)-u*w^2*cos(x) -2γ*y];

 for i=1:sizeof(Sx)
 x0=Sx[i+2];
 y0=Sy[i+2];
 for j=1:sizeU
 u=U[i];
 t,NX=ODE.ode45(f,[0,0.1],[x0,y0])
 end
 end

 with Sx and Sy vectors of the same size, and U an other vector of size
 sizeU.

 and here is the error message:

 mismatch in dimension 1
 while loading In[69], in expression starting on line 5

  in cat at abstractarray.jl:631
  in hcat at abstractarray.jl:671
  in hvcat at abstractarray.jl:966
  in f at In[69]:3
  in oderkf at /home/juser/.julia/v0.3/ODE/src/ODE.jl:208
  in ode45_dp at /home/juser/.julia/v0.3/ODE/src/ODE.jl:303
  in anonymous at no file:10

 Does anyone know how to fix it, please let me know, I'd be very grateful!



Re: [julia-users] [ANN] MatrixDepot 0.2.0: include a Julia interface to the UF Sparse Matrix collection

2015-03-06 Thread Andreas Noack
Hi Weijian

This is a great functionality. It seems that you are using MAT.jl to read
in the sparse matrices. You could consider using the MatrixMarket
reader in Base, e.g.

A = sparse(Base.SparseMatrix.CHOLMOD.Sparse("matrix.mtx"))

It will also have the benefit of using the Symmetric matrix type when the
matrix is symmetric and thereby saving half of the storage.

2015-03-06 14:39 GMT-05:00 Weijian Zhang zweiji...@gmail.com:

 Hello,

 I just included a Julia interface to the University of Florida Sparse
 Matrix Collection in Matrix Depot (
 https://github.com/weijianzhang/MatrixDepot.jl).

 Here is the documentation:
 http://matrixdepotjl.readthedocs.org/en/latest/ufsparse.html#ufsparse

 Please let me know if you have any comments.

 Thanks,

 Weijian





Re: [julia-users] [ANN] MatrixDepot 0.2.0: include a Julia interface to the UF Sparse Matrix collection

2015-03-06 Thread Andreas Noack
Oh. That is right. Sounds good.

2015-03-06 16:43 GMT-05:00 Weijian Zhang zweiji...@gmail.com:

 Hi Andreas,

 Yes, you are right. I am using MAT.jl to read matrices.

 That's a nice idea. Thanks a lot. But I think
 Base.SparseMatrix.CHOLMOD.Sparse is only available for Julia v0.4. I will
 change to this reader when Julia v0.4 is released.

 Best,

 Weijian


 On Friday, 6 March 2015 20:35:43 UTC, Andreas Noack wrote:

 Hi Weijian

 This is a great functionality. It seems that you are using MAT.jl to read
  in the sparse matrices. You could consider using the MatrixMarket
  reader in Base, e.g.

  A = sparse(Base.SparseMatrix.CHOLMOD.Sparse("matrix.mtx"))

 It will also have the benefit of using the Symmetric matrix type when the
 matrix is symmetric and thereby saving half of the storage.

 2015-03-06 14:39 GMT-05:00 Weijian Zhang zwei...@gmail.com:

 Hello,

 I just included a Julia interface to the University of Florida Sparse
 Matrix Collection in Matrix Depot (https://github.com/
 weijianzhang/MatrixDepot.jl).

 Here is the documentation: http://matrixdepotjl.readthedocs.org/
 en/latest/ufsparse.html#ufsparse

 Please let me know if you have any comments.

 Thanks,

 Weijian






Re: [julia-users] Unevenly distributed arrays

2015-03-05 Thread Andreas Noack
I hope so. It is something we really want to do but I cannot promise when
we'll do it.

2015-03-04 17:01 GMT-05:00 Simone Ulzega simoneulz...@gmail.com:

 Thank you Andreas. Is there any plan to implement something more flexible
 in the near future?


 On Wednesday, March 4, 2015 at 10:09:41 PM UTC+1, Andreas Noack wrote:

 I don't think it is possible right now. We have been discussing more
 flexible solutions, but so far nothing has been done.

 2015-03-04 9:22 GMT-05:00 Simone Ulzega simone...@gmail.com:

 Is it possible to construct a DArray with unevenly distributed chunks?
 For example, I want to create a distributed array with size (5*n,),
 where n is an integer, on 4 processes.
 For some reasons I want uneven chunks of sizes n, 2*n, n and n.

 The syntax

 DArray(size) do I  end

 doesn't seem to work as I is automatically generated to have, obviously,
 evenly distributed chunks on the available processes.





Re: [julia-users] Unevenly distributed arrays

2015-03-04 Thread Andreas Noack
I don't think it is possible right now. We have been discussing more
flexible solutions, but so far nothing has been done.

2015-03-04 9:22 GMT-05:00 Simone Ulzega simoneulz...@gmail.com:

 Is it possible to construct a DArray with unevenly distributed chunks?
 For example, I want to create a distributed array with size (5*n,), where
 n is an integer, on 4 processes.
 For some reasons I want uneven chunks of sizes n, 2*n, n and n.

 The syntax

 DArray(size) do I  end

 doesn't seem to work as I is automatically generated to have, obviously,
 evenly distributed chunks on the available processes.



Re: [julia-users] Trouble with simple probit regression not converging with glm

2015-03-02 Thread Andreas Noack
I don't see an obvious reason for this so please try to post this as an
issue on GLM.jl.

2015-02-27 10:47 GMT-05:00 Andrew Newman andrew.brudev...@gmail.com:

 Hi Julia-users,

 I am trying to run a few simple regressions on simulated data.  I had no
 problem with a logit and was able to run it using glm and subsequently
 match the results with a separate maximum likelihood.  I'm having a lot
 more trouble with the probit which doesn't seem to want to converge using
 glm.  Stata has no problem with convergence to the true parameters with the
 same data.  The simulated data are assumed to come from an underlying
 latent variable.

 Thanks in advance for any insights,
 Andrew

 *

 using DataFrames
 using Distributions
 using GLM
 srand(10001)

 N=1
 genX = Normal(0,3)
 genɛ = Normal()
 X = rand(genX,N)
 ɛ = rand(genɛ,N)
 α0 = 2
 α1 = -1
 Ystar = α0 + α1*X + ɛ
 Y = (Ystar.>0)*1.
 df = DataFrame(X=[X],Y=[Y])

 Probit = glm(Y ~ X, df, Binomial(), ProbitLink())



Re: [julia-users] Re: pmap not respecting srand set seed settings

2015-03-02 Thread Andreas Noack
Steve, I don't think that method works. The mapping between the argument to
srand and the internal state of the MT is quite complicated. We are calling
a seed function in the library we are using that maps an integer to a state
vector so srand(1) and srand(2) end up as two quite different streams. I
think this is quite on purpose to avoid that many of the same streams get
repeated over and over. Also, I think the seed function tries to select a
state vector that doesn't require you to burn numbers.

2015-02-28 17:35 GMT-05:00 Ivar Nesje iva...@gmail.com:

 If you find evidence that there is anything wrong with the first N random
 numbers in the standard Random Number Generator (RNG) in Julia, we would
 consider that a bug, fix it, and release a new point release faster than a
 Lenovo CTO is able to arrange a meeting with someone who understands the
 https protocol.

 You should still be aware that we optimize for speed and statistical quality,
 so you should not rely on the built-in RNG for cryptography or other
 security related purposes.


Re: [julia-users] Is it OK (stylistically) to override missing Base functionality in a package?

2015-02-27 Thread Andreas Noack
That's right and I realized that right after I posted. I'd be fine with
using min and max for types but probably some would oppose that.

2015-02-27 15:42 GMT-05:00 Jutho Haegeman juthohaege...@gmail.com:

 I am not opposed to that but the same could be said for typemin and
 typemax.

 Sent from my iPhone

 On 27 Feb 2015, at 21:27, Andreas Noack andreasnoackjen...@gmail.com
 wrote:

 I think it is fine that the type of the argument determines the behavior
 here. Having type in the name would be a bit like having
 `fabs(x::Float64)`.

 2015-02-27 15:21 GMT-05:00 Jutho juthohaege...@gmail.com:

 But I wouldn't overload real; real is for the real part of a value, not
 for the real type. Maybe something like realtype, or typereal if we want
 to go with the other type... functions.

 On Friday, February 27, 2015 at 21:18:34 UTC+1, Andreas Noack wrote:

 I'd like to have something like this.

 2015-02-27 15:02 GMT-05:00 Jutho juthoh...@gmail.com:

 Or in this particular case, maybe there should be some functionality
 like that in Base, or at least in Base.LinAlg, where it is often necessary to
 mix complex variables and real variables of the same type used to build the
 complex variables.

 On Thursday, February 26, 2015 at 08:10:35 UTC+1, Sheehan Olver wrote:

 Maybe a better alternative is to create an internal function with the
 same name:

 real(v...) = Base.real(v...)
 real{T<:Real}(::Type{Complex{T}}) = T
 real{T<:Real}(::Type{T}) = T

 This will avoid the override leaking from the package.




  On 26 Feb 2015, at 6:07 pm, Sheehan Olver dlfiv...@gmail.com
 wrote:
 
  I think this is a case where I know the answer but pretending I
 don’t :)
 
 
 
  On 26 Feb 2015, at 6:06 pm, Ivar Nesje iva...@gmail.com wrote:
 
  We have seen quite a few instances where Base functions were
 extended with methods without restriction to non-Base types, and it caused
 problems when Julia was updated.
 
  Is randomly breaking in new versions of Julia your style?
 






Re: [julia-users] Is it OK (stylistically) to override missing Base functionality in a package?

2015-02-27 Thread Andreas Noack
I'd like to have something like this.

2015-02-27 15:02 GMT-05:00 Jutho juthohaege...@gmail.com:

 Or in this particular case, maybe there should be some functionality like
 that in Base, or at least in Base.LinAlg, where it is often necessary to mix
 complex variables and real variables of the same type used to build the
 complex variables.

 On Thursday, February 26, 2015 at 08:10:35 UTC+1, Sheehan Olver wrote:

 Maybe a better alternative is to create an internal function with the
 same name:

 real(v...) = Base.real(v...)
 real{T<:Real}(::Type{Complex{T}}) = T
 real{T<:Real}(::Type{T}) = T

 This will avoid the override leaking from the package.




  On 26 Feb 2015, at 6:07 pm, Sheehan Olver dlfiv...@gmail.com wrote:
 
  I think this is a case where I know the answer but pretending I don’t
 :)
 
 
 
  On 26 Feb 2015, at 6:06 pm, Ivar Nesje iva...@gmail.com wrote:
 
  We have seen quite a few instances where Base functions were extended
 with methods without restriction to non-Base types, and it caused problems
 when Julia was updated.
 
  Is randomly breaking in new versions of Julia your style?
 




Re: [julia-users] Is it OK (stylistically) to override missing Base functionality in a package?

2015-02-27 Thread Andreas Noack
I think it is fine that the type of the argument determines the behavior
here. Having type in the name would be a bit like having
`fabs(x::Float64)`.

2015-02-27 15:21 GMT-05:00 Jutho juthohaege...@gmail.com:

 But I wouldn't overload real; real is for the real part of a value, not
 for the real type. Maybe something like realtype, or typereal if we want
 to go with the other type... functions.

 On Friday, February 27, 2015 at 21:18:34 UTC+1, Andreas Noack wrote:

 I'd like to have something like this.

 2015-02-27 15:02 GMT-05:00 Jutho juthoh...@gmail.com:

 Or in this particular case, maybe there should be some functionality like
 that in Base, or at least in Base.LinAlg, where it is often necessary to mix
 complex variables and real variables of the same type used to build the
 complex variables.

 On Thursday, February 26, 2015 at 08:10:35 UTC+1, Sheehan Olver wrote:

 Maybe a better alternative is to create an internal function with the
 same name:

 real(v...) = Base.real(v...)
 real{T<:Real}(::Type{Complex{T}}) = T
 real{T<:Real}(::Type{T}) = T

 This will avoid the override leaking from the package.




  On 26 Feb 2015, at 6:07 pm, Sheehan Olver dlfiv...@gmail.com
 wrote:
 
  I think this is a case where I know the answer but pretending I don’t
 :)
 
 
 
  On 26 Feb 2015, at 6:06 pm, Ivar Nesje iva...@gmail.com wrote:
 
  We have seen quite a few instances where Base functions were
 extended with methods without restriction to non-Base types, and it caused
 problems when Julia was updated.
 
  Is randomly breaking in new versions of Julia your style?
 





Re: [julia-users] pmap not respecting srand set seed settings

2015-02-26 Thread Andreas Noack
@everywhere srand(seed) would give reproducibility, but it would probably
not be a good idea since the exact same random variates will be generated
on each process. Maybe something like

for p in workers()
@spawnat p srand(seed + p)
end

However, our RNG gives no guarantees about independence of these streams,
but for this bootstrap example I would be surprised if the generated
variates wouldn't be good enough.

2015-02-26 11:04 GMT-05:00 Ivar Nesje iva...@gmail.com:

 I think something like @everywhere srand(seed) would partially work, but
 you'd still suffer from non-determinism in the scheduler that might run
 different portions of the array in different processes depending on the
 current load on your computer.

