
On Sat, Oct 25, 2008 at 4:04 AM, mabshoff <[EMAIL PROTECTED]> wrote:
>
> On Oct 25, 3:46 am, "David Joyner" <[EMAIL PROTECTED]> wrote:
>
> Hi,
>
>> (1) Sage also does FFT via the GSL. I guess they didn't try that,
>> but I would imagine they would be fast since they are in C.
>
> The Numpy FFT is fairly quick, and AFAIK we only build a subset of
> the possible libraries there. The one we ship, IIRC, is only really
> fast when the input length is a power of two (2^n values).
>
>> (2) I'm not sure what a toolbox actually *is*. Something like an
>> optional package with a nice GUI interface?
>>
>> (3) Related to this thread, I just got an email from Cesar (from
>> Mexico, the grad
>> student working on the RingCode class). He was at a big math
>> meeting and told people one reason why he prefers Sage over Matlab
>> is because in Matlab (and I quote from Cesar):
>>
>> "In MatLab:
>>
>> var1 = 2^10000000
>> var2 = 2^10000000 + 1
>> var1 == var2
>>
>> The result of the last line will be True!!!!"
>
> Yep, the bane of floating points. That is why Matlab has a symbolic
> toolbox which I am sure isn't exactly cheap. It certainly isn't part
> of the default config.
>
>> Also, Cesar says Matlab will often invert a singular matrix without
>> warning.
>>
>> So, I think it would be cool if the reviewer also checked that the
>> programs worked *correctly*. Apologies for the sarcasm :-)
>
> Well, in the case of the SVD benchmark, all the reviewer did is
> measure the speed of the BLAS linked against Lapack, since all the
> projects use Lapack for SVD.

+1
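Cesar's point is easy to verify: Sage/Python integers are arbitrary
precision, while Matlab's doubles overflow to Inf long before
2^10000000, and Inf == Inf + 1. A quick sketch of both behaviors:

```python
# Exact (arbitrary-precision) integers, as in Sage/Python:
var1 = 2**10000000
var2 = 2**10000000 + 1
print(var1 == var2)  # False

# What IEEE doubles do after overflow, which is why Matlab says True:
inf = float('inf')
print(inf == inf + 1)  # True
```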

Just to add to this, the source code for numpy's svd command is in
numpy-*/src/numpy/linalg/linalg.py, and after setting up some
variables it does this:

        lapack_routine = lapack_lite.dgesdd
        # First call with lwork = -1 is a workspace-size query:
        # Lapack writes the optimal workspace size into work[0].
        lwork = 1
        work = zeros((lwork,), t)
        results = lapack_routine(option, m, n, a, m, s, u, m, vt, nvt,
                                 work, -1, iwork, 0)
        # Allocate that much workspace and do the real computation.
        lwork = int(work[0])
        work = zeros((lwork,), t)
        results = lapack_routine(option, m, n, a, m, s, u, m, vt, nvt,
                                 work, lwork, iwork, 0)


I.e., it calls dgesdd in Lapack.  (I think lapack_lite just wraps the
real Lapack in Sage.)

If you look up DGESDD here, you'll see it is the standard Lapack
routine for computing the singular value decomposition:

http://www.netlib.org/lapack/double/dgesdd.f
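
For a hands-on check, the same routine is reachable through SciPy's
Lapack wrappers. This is just a sketch assuming SciPy is installed --
it is not what Sage's numpy does internally (that goes through
lapack_lite, as above) -- but it shows the same two-step workspace
dance:

```python
import numpy as np
from scipy.linalg import lapack

a = np.random.rand(4, 3)

# Same pattern as numpy's linalg.py above: query the optimal
# workspace size first, then call dgesdd for real.
lwork, info = lapack.dgesdd_lwork(a.shape[0], a.shape[1])
u, s, vt, info = lapack.dgesdd(a, lwork=int(lwork))

# s holds the singular values in descending order.
print(s)
```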

So maybe we just have a better way of building ATLAS/Lapack than the
one on that person's test machine.   To add to this, here is what I
get on sage.math (Opteron), where Matlab appears to be slightly
faster:

sage: time a = numpy.linalg.svd(numpy.random.rand(500,500),compute_uv=0)
CPU times: user 0.86 s, sys: 0.02 s, total: 0.88 s
Wall time: 0.88 s
sage: time a=matlab('svd(rand(500))')
CPU times: user 0.00 s, sys: 0.00 s, total: 0.00 s
Wall time: 0.65 s
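
Since the benchmark largely measures the underlying BLAS/Lapack
build, it's worth checking what a given numpy is actually linked
against; numpy can report this itself:

```python
import numpy as np

# Print the BLAS/Lapack configuration this numpy was built with --
# handy when comparing SVD timings across machines.
np.show_config()
```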


William


--~--~---------~--~----~------------~-------~--~----~
To post to this group, send email to sage-devel@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at http://groups.google.com/group/sage-devel
URLs: http://www.sagemath.org
-~----------~----~----~----~------~----~------~--~---
