These days sage.math is often operating with all 16 cores going flat
out.

When it gets like this, FLINT slows down dramatically, but MAGMA does
not. This makes timing comparisons meaningless (and slows down
development).

I'm wondering if anyone has any suggestions as to why this might be.
How can their code take the same user time whether the machine is
under load or not, while ours takes up to 50% longer?

I've checked the following things:

1) Cache hints: We are already using these, since they make a
noticeable difference. My theory is that when the kernel switches
between processes, the data of whatever computation was underway gets
evicted from cache and overwritten by the other process. Cache hints
help keep the cache primed for the currently active process. (A
minimal sketch of the kind of hint I mean is below, after this list.)

2) Nice level: MAGMA runs at a default nice level of -5, as does
FLINT. (There is a small snippet below for double-checking this from
inside a process.)

3) Memory management: a memory manager saves kernel time, but not
user time, as far as I can tell, whereas the problem I've noted
affects user time. I've experimented with replacing various parts of
the memory allocation and deallocation code with the FLINT memory
manager, but it doesn't seem to affect user time (though naturally it
cuts kernel time). (See the user/kernel timing sketch below.)

4) gcc compiler optimizations: I use -O3 -fexpensive-optimizations
-funroll-loops
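
To illustrate the kind of cache hint I have in mind for point 1),
here is a minimal sketch using GCC's __builtin_prefetch (the function,
array name and lookahead distance are made up for illustration, not
actual FLINT code):

#include <stddef.h>

void sum_coeffs(const unsigned long *coeffs, size_t n, unsigned long *res)
{
    unsigned long s = 0;
    size_t i;

    for (i = 0; i < n; i++)
    {
        /* hint: the line holding coeffs[i + 8] will be read soon;
           0 = read-only access, 1 = low temporal locality */
        if (i + 8 < n)
            __builtin_prefetch(&coeffs[i + 8], 0, 1);
        s += coeffs[i];
    }

    *res = s;
}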
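
As a sanity check on the nice levels in point 2), something like the
following (a standard getpriority() call, nothing FLINT-specific) can
be run from inside either program; errno has to be cleared first
because -1 is a legal return value:

#include <errno.h>
#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    int prio;

    errno = 0;
    prio = getpriority(PRIO_PROCESS, 0); /* 0 = the calling process */
    if (prio == -1 && errno != 0)
        perror("getpriority");
    else
        printf("current nice level: %d\n", prio);

    return 0;
}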
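
And for point 3), this is the sort of thing I mean by separating user
time from kernel time: a sketch using getrusage(), where the busy
loop is just a placeholder for whatever routine is being timed:

#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rusage before, after;
    volatile unsigned long x = 0;
    unsigned long i;
    double user, sys;

    getrusage(RUSAGE_SELF, &before);

    /* placeholder workload; replace with the code under test */
    for (i = 0; i < 100000000UL; i++)
        x += i;

    getrusage(RUSAGE_SELF, &after);

    user = (after.ru_utime.tv_sec - before.ru_utime.tv_sec)
         + (after.ru_utime.tv_usec - before.ru_utime.tv_usec) / 1e6;
    sys  = (after.ru_stime.tv_sec - before.ru_stime.tv_sec)
         + (after.ru_stime.tv_usec - before.ru_stime.tv_usec) / 1e6;

    printf("user %.3fs  kernel %.3fs\n", user, sys);

    return 0;
}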

Does anyone know of anything else I should check? Are there other
compiler optimizations I should use? Is there something else about
memory management that I don't know? Is it to do with the way our
library is linked? Could it have to do with MAGMA having root
privileges, since it was installed by root?

There has to be a simple explanation for this.

Bill Hart.

