Mikio I will mail the full dump to you directly, but the key part seems to be:

Stack: [0x000000010862d000,0x000000010872d000],
sp=0x000000010872be40,  free space=1019k
Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
C  [libsystem_c.dylib+0x27e03]  memmove$VARIANT$sse42+0x146
C  [jblas2424405430991183844libjblas.dylib+0x148129]  ilaenv_+0xe3
C  [jblas2424405430991183844libjblas.dylib+0x1af96]  Java_org_jblas_NativeBlas_ilaenv+0x106
...

I think you have since found my post, FWIW:
https://groups.google.com/d/msg/jblas-users/okurY2BnVhw/VqNnnmuHaNMJ

I can't reproduce it with a small test program like:

    import org.jblas.DoubleMatrix;
    import org.jblas.Solve;

    for (int i = 0; i < 100; i++) {
      DoubleMatrix dm = DoubleMatrix.randn(1000, 1000);
      Solve.pinv(dm);
    }

I've since lost the exact state of the code that reproduced it, since
I've been experimenting a lot. It could also just be that I was
calling it incorrectly somehow.

That's what I mean -- Solve.solveSymmetric() is the better choice. I
don't get an error there, and yes, it is faster.
In my case I'm already copying objects to build a DoubleMatrix and
don't need the API to .dup() it again, so I may avoid that in my
implementation and use SimpleBlas directly instead, to get a bit
more speed.
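To make that concrete, here's a rough sketch of what I have in mind. I'm assuming Solve.solveSymmetric() copies its arguments via dup() before handing them to SimpleBlas.sysv(), so calling sysv() directly should skip those copies at the cost of clobbering A and b -- which is fine when I own the matrices anyway. Corrections welcome if I've misread the API:

    import org.jblas.DoubleMatrix;
    import org.jblas.SimpleBlas;
    import org.jblas.Solve;

    public class InPlaceSolve {
      public static void main(String[] args) {
        // Build a small symmetric A (M * M^T plus a diagonal bump) and a RHS b.
        DoubleMatrix m = DoubleMatrix.randn(4, 4);
        DoubleMatrix a = m.mmul(m.transpose()).addi(DoubleMatrix.eye(4).muli(4.0));
        DoubleMatrix b = DoubleMatrix.randn(4, 1);

        // The convenience path: copies A and b internally, leaves them intact.
        DoubleMatrix x1 = Solve.solveSymmetric(a.dup(), b.dup());

        // The direct path: sysv works in place, so a and b are overwritten.
        int[] ipiv = new int[a.rows];
        DoubleMatrix x2 = SimpleBlas.sysv('u', a, ipiv, b);

        // Both solutions should agree up to rounding.
        System.out.println(x1.sub(x2).normmax());
      }
    }

The printed max-norm difference should be tiny; the only change is who pays for the copies.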

On Fri, Apr 19, 2013 at 12:04 PM, Mikio Braun <mikio.br...@tu-berlin.de> wrote:
> Hi Sean,
>
> Somehow, your post on jblas-users seems to have gone missing. Stack
> trace is nice, but is it possible to get a small piece of code which
> reproduces the error? That would be helpful.
>
>> PS Sebastian reminds me (though he clearly already said it) that, for
>> this use case, in Ax = B, A is always symmetric, and taking advantage
>> of that does in fact make it faster, and jblas packages up that
>> routine quite easily. That makes the difference.
>
> Not sure if I follow. If the matrix is symmetric, you should use
> Solve.solveSymmetric(). Do you get the same error there, or are we
> talking about different problems?
>
>> I find that out of the box, pure Java is faster until about a 100x100
>> matrix, which is still a scale that occurs in practice. With some
>> digging I think it might be made faster, by maybe optimizing away some
>> of the data copying.
>
> Probably the best way here is to have both and use the faster function
> based on the matrix size. As it is, there is a lot of copying for
> using the JNI functions. I have some plans for jblas2 which gives you
> finer control over when objects are copied, but this is still far in
> the future ;)
>
> -M
>
> --
> Dr. Mikio Braun                        email: mikio.br...@tu-berlin.de
> TU Berlin                              web: http://mikiobraun.de
> Franklinstr. 28/29                     tel: +49 30 314 78627
> 10587 Berlin, Germany
>
>
>
