I know from experience that trying to build programs in parallel, with 
things like

$ make -j 10

will often break them, so I tend not to do it. It's a nice idea, and 
does work in some cases, but in others one just ends up with a mess.

But Sage is a big program, and takes a *long* time to compile. The T5240 
(hostname 't2') has a couple of T2+ processors, each with 8 cores (16 
cores in total). Building 20-30 Sage .spkg files in parallel on 't2' 
would speed up the compilation process *considerably*. With the ready 
availability of multi-core processors (some 'home' machines now have 4 
cores), it seems to me that some way of building Sage more quickly would 
be a really good idea.

I don't know how long it would take to build Sage on 't2' in its present 
form, but I believe it would be more than one day, as that machine is 
currently not being used to its full potential (mainly because I'm the 
only one using it most of the time).

Another point is that doing things in parallel might allow a more 
extensive test suite to be run.

Clearly there are dependencies among the .spkg files. There's no point 
trying to build package A if it depends on a library B which has not yet 
been built.
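
To make the idea concrete, here is a rough sketch, in Python, of how a 
dependency-aware parallel build could be scheduled. The dependency table 
SPKG_DEPS and the build_spkg() function below are purely illustrative -- 
they are not Sage's real dependency graph or install command, just 
placeholders to show the scheduling logic:

import subprocess
from concurrent.futures import ThreadPoolExecutor, wait, FIRST_COMPLETED

# Hypothetical dependency table: package -> set of packages it needs first.
# (Made-up entries; the real graph would have to be written out for every
# .spkg in the distribution.)
SPKG_DEPS = {
    "zlib":   set(),
    "gmp":    set(),
    "mpfr":   {"gmp"},
    "python": {"zlib"},
    "numpy":  {"python"},
}

def build_spkg(name):
    # Placeholder for the real install command (e.g. installing one .spkg).
    subprocess.check_call(["echo", "building", name])
    return name

def parallel_build(deps, jobs=16):
    done = set()
    pending = dict(deps)              # packages not yet started
    running = {}                      # future -> package name
    with ThreadPoolExecutor(max_workers=jobs) as pool:
        while pending or running:
            # Start every package whose dependencies have all been built.
            for name in [p for p, d in pending.items() if d <= done]:
                running[pool.submit(build_spkg, name)] = name
                del pending[name]
            if not running:
                raise RuntimeError("dependency cycle: %s" % sorted(pending))
            finished, _ = wait(running, return_when=FIRST_COMPLETED)
            for fut in finished:
                name = running.pop(fut)
                fut.result()          # re-raise if that build failed
                done.add(name)

parallel_build(SPKG_DEPS, jobs=16)

With 16 cores, anything that does not depend on something still being 
built can run at the same time; packages on the critical path of the 
dependency graph still limit the total time, but everything else 
overlaps.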

I can see this would take some work, but the reward, in terms of reduced 
compilation time, could be very large. You can be sure processors are 
only going to appear with more and more cores: at one time it was one, 
my laptop has two, several home computers have 4, and 't2' has 16.


I would suggest that if this were done, it would be wise to create files 
'install-<spkg_name>.txt', so there is a separate record for each 
package, rather than using a single 'install.log'.
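
As a small sketch of that (again with 'echo' standing in for the real 
install command), each build's output would be redirected to its own 
file:

import subprocess

def build_spkg_logged(name):
    logfile = "install-%s.txt" % name
    with open(logfile, "w") as log:
        # Replace 'echo' with whatever actually installs the .spkg;
        # both stdout and stderr go to the per-package file.
        subprocess.check_call(["echo", "building", name],
                              stdout=log, stderr=subprocess.STDOUT)
    return logfile

That also fits the parallel build naturally: when several packages are 
compiling at once, their output cannot be usefully interleaved in a 
single install.log anyway.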


Thoughts?

