Nice :-)

On Thu, Sep 18, 2014 at 4:20 PM, Andreas Noack <andreasnoackjen...@gmail.com> wrote:

> Yes. It appears so on my Mac. I just redid the timings with the same
> result.
>
> Kind regards
>
> Andreas Noack
>
> 2014-09-18 15:55 GMT-04:00 Stefan Karpinski <ste...@karpinski.org>:
>
>> I'm slightly confused – does that mean Julia is 2.4x faster in this case?
>>
>> On Thu, Sep 18, 2014 at 3:53 PM, Andreas Noack <andreasnoackjen...@gmail.com> wrote:
>>
>>> In addition, our lu calculates a partially pivoted LU and returns the L
>>> and U matrices and the vector of permutations. To get something comparable
>>> in MATLAB, you'll have to write
>>>
>>> [L,U,p] = lu(A,'vector')
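>>>
>>> For comparison, a minimal Julia sketch of the same factorization (assuming
>>> the Julia 0.3-era API, where lu returns the factors directly; the matrix
>>> size is only illustrative):
>>>
>>> A = rand(1000, 1000)    # example input matrix
>>> L, U, p = lu(A)         # partially pivoted LU, so that A[p, :] ≈ L*U
>>>
>>> Here p is the permutation vector, matching MATLAB's 'vector' option.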
>>>
>>> On my old Mac where Julia is compiled with OpenBLAS the timings are
>>>
>>> MATLAB:
>>> >> tic();for i = 1:10
>>> [L,U,p] = qr(A, 'vector');
>>> end;toc()/10
>>>
>>> ans =
>>>
>>>     3.4801
>>>
>>> Julia:
>>> julia> tic(); for i = 1:10
>>>        qr(A);
>>>        end;toc()/10
>>> elapsed time: 14.758491472 seconds
>>> 1.4758491472
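>>>
>>> (For the 2.4x Stefan asks about above: 3.4801 / 1.4758491472 ≈ 2.36, i.e.
>>> MATLAB's per-iteration time here is roughly 2.4 times Julia's.)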
>>>
>>> Kind regards
>>>
>>> Andreas Noack
>>>
>>> 2014-09-18 15:33 GMT-04:00 Jason Riedy <ja...@lovesgoodfood.com>:
>>>
>>> And Elliot Saba writes:
>>>> > The first thing you should do is run your code once to warm up the
>>>> > JIT, and then run it again to measure the actual run time, rather
>>>> > than compile time + run time.
>>>>
>>>> To be fair, he seems to be timing MATLAB in the same way, so he's
>>>> comparing systems appropriately at that level.
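>>>>
>>>> As an illustration of the warm-up pattern Elliot describes, a minimal
>>>> Julia sketch (matrix size and iteration count are only illustrative):
>>>>
>>>> A = rand(1000, 1000)
>>>> qr(A)                  # first call: pays the JIT compilation cost
>>>> tic()                  # time only the already-compiled calls
>>>> for i = 1:10
>>>>     qr(A)
>>>> end
>>>> toc()/10               # average run time per call, compile time excluded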
>>>>
>>>> It's just the tuned BLAS+LAPACK & FFTW vs. the default ones.  This
>>>> is one reason why MATLAB bundles so much.  (Another reason being
>>>> the differences in numerical results causing support calls.  It took
>>>> a long time before MATLAB gave in to per-platform-tuned libraries.)
>>>>
