On Jan 21, 2009, at 3:34 PM, rjf wrote:

> On Jan 21, 11:54 am, Robert Bradshaw <rober...@math.washington.edu>
> wrote:
>
>>
>>> I am sure that some Sage people have thought about such things, but
>>> probably not
>>> enough. Which is why I try to poke holes in some of these comments!
>>
>> Sage has thought about this--we have models for both:
>>
>> RDF -- The real double "field", which provides wrappers around the
>> machine's native (double) floating point arithmetic, fast but
>> possibly machine-dependent
>> RealField(n) -- n-bit mantissa real numbers, based on mpfr. Slower,
>> but every operation follows strict and reproducible rounding rules.
>>
>> There is also support for real interval fields, the quad-double
>> model (now deprecated for reasons very similar to those at the
>> start of this thread), and even "real lazy" fields whose entries
>> are computed on the fly to whatever precision is needed.
>
>  RDF, which appears to be a slow version of machine arithmetic,

Not sure what you mean here, unless you're comparing to 32-bit C floats.
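
To make that concrete, here's roughly how the two compare in a Sage
session (a sketch; the exact digits printed may vary by version):

   sage: RDF(1)/3               # native machine double, ~53-bit precision
   0.333333333333333
   sage: RealField(100)(1)/3    # 100-bit mantissa, via mpfr
   0.33333333333333333333333333333

Both do correctly rounded binary arithmetic; RDF just does it at the
hardware's fixed precision.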

> presumably even
> slower when accessed through whatever layers python has, is the
> competition
> for Java, which in this case would not be strictfp.
>
> Neither seems likely to be very favorable for traditional "scientific
> computing", but perhaps you can provide an example with some timing?
> A comparison with the same program/hardware in C might be helpful too.


Yes, manipulating individual C doubles wrapped as Python objects will
be slow, no matter how efficient the wrapping. But if you're
manipulating enough individual elements to worry about the speed,
chances are there's a higher-level structure involved. For example,
if you make a matrix over RDF, the entries are not stored as
individually wrapped objects, but as a single double*. In our case,
linear algebra is done via NumPy, which in turn uses a BLAS (with
Sage we ship ATLAS). On the other hand, if a problem isn't easily
phrased as a linear algebra/differential equations/etc. question for
which an optimized library already exists, that's where Cython comes
in: one can compile code that operates on C doubles directly and,
because it compiles down to C, is just as efficient. This also fits
the 90-10 philosophy, making it easy to optimize only the parts that
need it, instead of writing the whole program in a more restrictive
(depending on your tastes) language just so a couple of inner loops
can be fast enough.
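
(To be concrete about the matrix case, this is the sort of thing I
mean; random_matrix here is just an easy way to get a big example:

   sage: m = random_matrix(RDF, 1000)  # entries live in one double* block
   sage: p = m * m                     # dispatched to NumPy, and from
                                       # there to the BLAS (ATLAS)

No wrapping or unwrapping of individual entries happens inside the
multiply.)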

Getting back to the topic of the thread, I think one reason Python is
nice for scientific programming is that it's easy to learn, easy to
read, and easy to prototype in, and it has a rich set of libraries to
work with. On the other hand, when you need raw number-crunching
speed, there are good tools for interfacing with or writing
lower-level code (e.g. Cython, or all that's been wrapped by SciPy
and Sage).
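
As a sketch of the Cython side (made-up file and function names):
only the hot loop needs type annotations, and it then compiles to a
plain C loop over unboxed doubles:

   # sum_inv.pyx
   def sum_inv(long n):
       cdef double s = 0.0
       cdef long k
       for k in range(1, n + 1):
           s += 1.0 / k    # pure C double arithmetic in the loop
       return s            # boxed back into a Python float on return

Compile it with Cython (or use a %cython cell in the Sage notebook)
and call it from Python like any other function.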

- Robert

