Hey Mikio, I posted the stack trace on the jblas-users list, FYI. It
happens on a random 100x100 matrix, for example -- but not every time.

PS Sebastian reminds me (he had already said it clearly) that, for
this use case, in Ax = B, A is always symmetric. Taking advantage of
that does in fact make it faster, and jblas packages up that routine
quite conveniently. That makes the difference.

I find that, out of the box, pure Java is faster up to about a 100x100
matrix, which is still a scale that occurs in practice. With some
digging I think jblas might be made faster still, perhaps by
optimizing away some of the data copying.
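
If anyone wants to reproduce the crossover measurement, something like
the harness below would do it -- a rough sketch only (single trial, no
JIT warmup), and the naive Gaussian elimination is just a stand-in for
whatever pure-Java solver you're comparing against:

    import org.jblas.DoubleMatrix;
    import org.jblas.Solve;

    public class CrossoverBench {
        // Naive pure-Java Gaussian elimination with partial pivoting.
        static double[] gaussSolve(double[][] a, double[] b) {
            int n = b.length;
            for (int col = 0; col < n; col++) {
                // Pivot on the largest entry in this column.
                int pivot = col;
                for (int r = col + 1; r < n; r++)
                    if (Math.abs(a[r][col]) > Math.abs(a[pivot][col])) pivot = r;
                double[] tr = a[col]; a[col] = a[pivot]; a[pivot] = tr;
                double tb = b[col]; b[col] = b[pivot]; b[pivot] = tb;
                // Eliminate below the pivot.
                for (int r = col + 1; r < n; r++) {
                    double f = a[r][col] / a[col][col];
                    for (int c = col; c < n; c++) a[r][c] -= f * a[col][c];
                    b[r] -= f * b[col];
                }
            }
            // Back substitution.
            double[] x = new double[n];
            for (int r = n - 1; r >= 0; r--) {
                double s = b[r];
                for (int c = r + 1; c < n; c++) s -= a[r][c] * x[c];
                x[r] = s / a[r][r];
            }
            return x;
        }

        public static void main(String[] args) {
            for (int n : new int[] {25, 50, 100, 200, 400}) {
                DoubleMatrix A = DoubleMatrix.randn(n, n);
                DoubleMatrix b = DoubleMatrix.randn(n, 1);
                // toArray2()/toArray() copy the data, so both solvers
                // see the same inputs.
                double[][] aj = A.toArray2();
                double[] bj = b.toArray();

                long t0 = System.nanoTime();
                Solve.solve(A, b);
                long t1 = System.nanoTime();
                gaussSolve(aj, bj);
                long t2 = System.nanoTime();

                System.out.printf("n=%d  jblas=%.2fms  pure Java=%.2fms%n",
                        n, (t1 - t0) / 1e6, (t2 - t1) / 1e6);
            }
        }
    }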

(And I don't get any SIGSEGV in this code path.)

The only thing I "miss" right now versus a QR decomposition is rank
information: my current implementation is rank-revealing, and it's
useful to know the apparent rank when the matrix is near singular.
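
If I do need it, one way to recover an apparent rank with jblas would
be to look at the singular values directly -- a sketch, and the
tolerance choice below is ad hoc:

    import org.jblas.DoubleMatrix;
    import org.jblas.Singular;

    public class ApparentRank {
        // Numerical rank: count singular values above a relative threshold,
        // similar in spirit to what a rank-revealing QR reports.
        static int apparentRank(DoubleMatrix a) {
            DoubleMatrix s = Singular.SVDValues(a); // singular values, descending
            double tol = Math.max(a.rows, a.columns) * s.get(0) * 1e-12; // ad hoc
            int rank = 0;
            for (int i = 0; i < s.length; i++)
                if (s.get(i) > tol) rank++;
            return rank;
        }

        public static void main(String[] args) {
            DoubleMatrix a = DoubleMatrix.randn(100, 100);
            // Make A exactly rank-deficient by duplicating a row.
            a.putRow(0, a.getRow(1));
            System.out.println("apparent rank = " + apparentRank(a)); // expect 99
        }
    }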

This is quite a nice result overall.

On Fri, Apr 19, 2013 at 10:44 AM, Mikio Braun <mikio.br...@tu-berlin.de> wrote:
> Hi everyone,
>
> I'll have a look at the segfault. Could you provide a small example? I
> guess it has something to do with the dimensions of the matrix and
> possibly a bug in my code.
>
> Just to clarify, Solve.solve() uses dgesv which uses LU decomposition
> to solve.
>
> Solve.solveLeastSquares() and Solve.pinv() use dgelsd, which is based
> on the SVD, as you've pointed out.
>
> There are also solveSymmetric() for symmetric matrices and
> solvePositive() for symmetric positive definite matrices, which
> should again run a bit faster in those cases.
>
> Concerning sparse matrices, that's something which is missing right
> now. However, AFAIK there are no similarly performant sparse matrix
> libraries in Fortran which you could plug in, meaning that you're
> probably stuck with the performance of other Java libs.
>
> By the way, I'll be in the Bay Area next week, so if you want we could
> meet and chat about jblas. I'm pretty open to adding new features
> which users find necessary.
>
> Best,
>
> -M
>
