Well, I meant specifically how the special matrix types in Julia and the
functions like factorize interact with them. It has grown to be a really
polished, powerful interface.

On Wed, Nov 5, 2014 at 4:10 AM, Tony Kelman <t...@kelman.net> wrote:

> There are several chapters of Trefethen, or Demmel, on these exact topics.
> Just have to translate their pseudocode or Matlab addenda into Julia.
>
>
> On Tuesday, November 4, 2014 5:41:55 PM UTC-8, Stefan Karpinski wrote:
>>
>> Ah, good to know. There's so much depth here. This could be a chapter or
>> two of a book on effective numerical analysis in Julia :-)
>>
>> On Wed, Nov 5, 2014 at 2:33 AM, Andreas Noack <andreasno...@gmail.com>
>> wrote:
>>
>>> Yes and no. The problem is that small floating point noise destroys
>>> symmetry/Hermitianity and therefore factorize concludes that the matrix is
>>> not positive definite (I know about the other definition). If you construct
>>> your positive definite matrix by A'A, then Julia makes it exactly
>>> symmetric/Hermitian, but if you do e.g. A'*Diagonal(rand(size(A,1)))*A then
>>> your matrix is still positive definite in infinite precision, but it is
>>> almost never exactly symmetric/Hermitian in finite precision.
>>>
>>> Hence it is a good idea to use inv(cholfact(X)) whenever you know that
>>> your matrix should be considered positive definite. This is also much
>>> faster as it bypasses all the checks in factorize.
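>>>
>>> A quick sketch of what that looks like in practice (Julia 0.3-era
>>> names; issym was later renamed issymmetric):
>>>
>>> julia> A = randn(3,3);
>>>
>>> julia> B = A'*Diagonal(rand(3))*A;  # positive definite in exact arithmetic
>>>
>>> julia> issym(B)  # typically false -- rounding breaks exact symmetry
>>>
>>> julia> isposdef(inv(cholfact(B)))  # cholfact reads only one triangle,
>>>                                    # so this should come back true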
>>>
>>> 2014-11-04 20:22 GMT-05:00 Stefan Karpinski <ste...@karpinski.org>:
>>>
>>>> So this works as desired if you just do inv(factorize(X))?
>>>>
>>>> On Wed, Nov 5, 2014 at 1:59 AM, Andreas Noack <andreasno...@gmail.com>
>>>> wrote:
>>>>
>>>>> factorize(Matrix) is a full service check, but if you specify
>>>>> structure as David did with Hermitian, factorize dispatches to an
>>>>> appropriate method. In this case it is Bunch-Kaufman. E.g.
>>>>>
>>>>> julia> A = randn(3,3); A = A'A;
>>>>>
>>>>> julia> typeof(factorize(A))
>>>>> Cholesky{Float64} (constructor with 1 method)
>>>>>
>>>>> julia> typeof(factorize(Hermitian(A)))
>>>>> BunchKaufman{Float64} (constructor with 1 method)
>>>>>
>>>>> 2014-11-04 19:31 GMT-05:00 Tim Holy <tim....@gmail.com>:
>>>>>
>>>>> On Wednesday, November 05, 2014 01:19:55 AM Stefan Karpinski wrote:
>>>>>> > I know that factorize checks a bunch of properties of the matrix to
>>>>>> > be factorized – is positive definiteness not something that it
>>>>>> > checks? Should it?
>>>>>>
>>>>>> Positive definiteness is not a quick check. For example, the matrix
>>>>>> `ones(2,2)` is symmetric and has all positive entries but is not
>>>>>> positive definite. You have to finish computing the Cholesky
>>>>>> factorization before you can be sure it's positive definite, at which
>>>>>> point you should of course just keep that factorization.
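>>>>>>
>>>>>> A concrete check (ones(2,2) has eigenvalues 0 and 2, so it is only
>>>>>> positive semidefinite):
>>>>>>
>>>>>> julia> isposdef(ones(2,2))
>>>>>> false
>>>>>>
>>>>>> julia> eigvals(ones(2,2))  # essentially [0.0, 2.0]; the zero
>>>>>>                            # eigenvalue is where Cholesky breaks down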
>>>>>>
>>>>>> --Tim
>>>>>>
>>>>>> >
>>>>>> > On Tue, Nov 4, 2014 at 5:59 PM, Andreas Noack <andreasno...@gmail.com> wrote:
>>>>>> > > In your case, I think the right solution is to invert it by
>>>>>> > > inv(cholfact(pd)). By calling cholfact, you are telling Julia that
>>>>>> > > your matrix is positive definite, and Julia will exploit that to
>>>>>> > > give you a fast inverse which is also positive definite.
>>>>>> > >
>>>>>> > > inv(factorize(Hermitian(pd))) is slow because it uses a
>>>>>> > > factorization that only exploits symmetry (Bunch-Kaufman), but not
>>>>>> > > positive definiteness (Cholesky). However, the Bunch-Kaufman
>>>>>> > > factorization preserves symmetry, and hence the result is positive
>>>>>> > > definite. In contrast, when doing inv(pd), Julia has no idea that
>>>>>> > > pd is positive definite or even symmetric, and hence it defaults to
>>>>>> > > using the LU factorization, which won't preserve symmetry and
>>>>>> > > therefore isposdef will return false.
>>>>>> > >
>>>>>> > > Hope that made sense. I'll probably have to write a section in the
>>>>>> > > documentation about this soon.
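>>>>>> > >
>>>>>> > > Roughly, as a sketch (the exact truth value in the LU-based case
>>>>>> > > depends on rounding):
>>>>>> > >
>>>>>> > > julia> A = randn(100,100); pd = A'A;
>>>>>> > >
>>>>>> > > julia> isposdef(inv(pd))  # LU-based; typically false
>>>>>> > >
>>>>>> > > julia> isposdef(inv(factorize(Hermitian(pd))))  # Bunch-Kaufman;
>>>>>> > >                                                 # true, but slower
>>>>>> > >
>>>>>> > > julia> isposdef(inv(cholfact(pd)))  # Cholesky; true, and fastest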
>>>>>> > >
>>>>>> > > 2014-11-03 18:53 GMT-05:00 David van Leeuwen <david.va...@gmail.com>:
>>>>>> > >
>>>>>> > >> Hello,
>>>>>> > >>
>>>>>> > >> I am struggling with the fact that covariance matrices computed
>>>>>> > >> from a precision matrix aren't positive definite, according to
>>>>>> > >> `isposdef()` (they should be, according to the maths).
>>>>>> > >>
>>>>>> > >> It looks like the culprit is `inv(pd::Matrix)`, which does not
>>>>>> > >> always result in a positive definite matrix even if `pd` is one.
>>>>>> > >> This is probably because `inv()` is agnostic of the fact that the
>>>>>> > >> argument is positive definite, combined with numerical rounding.
>>>>>> > >>
>>>>>> > >> Now I've tried to understand the support for special matrices, and
>>>>>> > >> I believe that `inv(factorize(Hermitian(pd)))` is the proper way
>>>>>> > >> to do this. Indeed the resulting matrix is positive definite.
>>>>>> > >> However, this computation takes a lot longer than inv(), about
>>>>>> > >> 5--6 times as slow. I would have expected that the extra symmetry
>>>>>> > >> would lead to a more efficient matrix inversion.
>>>>>> > >>
>>>>>> > >> Is there something I'm doing wrong?
>>>>>> > >>
>>>>>> > >> Cheers,
>>>>>> > >>
>>>>>> > >> ---david
>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
