On Thu, 13 Dec 2007, Peter Skomoroch wrote:

This reminds me of a similar issue I had.  What approaches do you take for
large dense matrix multiplication in MPI, when the matrices are too large to
fit into cluster memory?  If I hack up something to cache intermediate
results to disk, the IO seems to drag everything to a halt and I'm looking
for a better solution.  I'd like to use some libraries like PETSc, but how
would you work around memory limitations like this (short of building a
bigger cluster)?

You can build a cluster differently, maybe -- designing a bunch of nodes
that basically just form a memory-network-memory cache.  Spending more
money on memory and network, less on CPU.  But if you have a fundamental
limitation of less aggregate memory than the size of your matrix, you
pretty much have to store it somewhere, the only question is where and
how fast the store is and how much it costs to build it.
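One way to keep that disk store from dragging everything to a halt is to block the multiply, so each panel of A and B is read from disk in big sequential chunks and reused against a whole block-row of C before being evicted. A minimal pure-Python sketch of the access pattern follows -- the file names, tiny sizes, and row-panel layout are all illustrative, not PETSc's actual out-of-core scheme:

```python
# Out-of-core blocked matrix multiply sketch (stdlib only).
# A, B, C live in row-major binary files on disk; only O(block * N)
# doubles are resident in RAM at any time.  Sizes are toy-scale here --
# the point is the blocked, sequential access pattern, not performance.
import array
import os
import tempfile

N = 4   # matrix dimension (illustrative)
B = 2   # block size; controls the RAM footprint per streamed panel

def write_matrix(path, rows):
    """Write a list-of-lists matrix to disk as row-major doubles."""
    with open(path, "wb") as f:
        for r in rows:
            array.array("d", r).tofile(f)

def read_rows(path, start, count, n):
    """Read `count` consecutive rows of an n-column matrix from disk."""
    with open(path, "rb") as f:
        f.seek(start * n * 8)          # 8 bytes per double
        a = array.array("d")
        a.fromfile(f, count * n)
    return [list(a[i * n:(i + 1) * n]) for i in range(count)]

def blocked_matmul(pa, pb, pc, n, b):
    """C = A * B, with all three matrices staged through disk."""
    write_matrix(pc, [[0.0] * n for _ in range(n)])   # zero C on disk
    for i0 in range(0, n, b):                  # block-row of A and C
        a_panel = read_rows(pa, i0, b, n)
        c_panel = read_rows(pc, i0, b, n)
        for k0 in range(0, n, b):              # matching block-row of B
            b_panel = read_rows(pb, k0, b, n)
            for i in range(b):
                for k in range(b):
                    aik = a_panel[i][k0 + k]
                    for j in range(n):
                        c_panel[i][j] += aik * b_panel[k][j]
        with open(pc, "r+b") as f:             # write finished C panel back
            f.seek(i0 * n * 8)
            for r in c_panel:
                array.array("d", r).tofile(f)

if __name__ == "__main__":
    d = tempfile.mkdtemp()
    pa, pb, pc = (os.path.join(d, f) for f in ("a.bin", "b.bin", "c.bin"))
    ident = [[1.0 if i == j else 0.0 for j in range(N)] for i in range(N)]
    m = [[float(i * N + j) for j in range(N)] for i in range(N)]
    write_matrix(pa, ident)
    write_matrix(pb, m)
    blocked_matmul(pa, pb, pc, N, B)
    print(read_rows(pc, 0, N, N) == m)   # identity * M == M
```

Each element of A and B is read n/b times instead of n times, so a bigger block size (as much as fits in RAM) directly cuts the I/O volume; that's the same trade the cache-of-memory-nodes idea is making in hardware.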

   rgb

I don't speak Fortran natively, but isn't that array
approximately 3.6 TB in size?

Oops, forgot to put the decimal in the right place.

9915^3 integers * 1 byte/integer (8 bits) / 1024^3 bytes/GB = 907 GB.

It could be done with a 64 bit kernel. Too big for PAE.
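The arithmetic is easy to sanity-check (assuming one byte per element, i.e. INTEGER*1):

```python
# Size of a 9915^3 array of one-byte integers, in binary gigabytes.
n = 9915
size_bytes = n ** 3 * 1          # 1 byte (8 bits) per element
print(size_bytes // 1024 ** 3)   # 907, matching the figure above
```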

Yeah, if you had a box with several hundred memory slots....

Which I say only semi-sarcastically.  They sound like they're coming,
they're coming.  Who knows, maybe they're here and I'm just out of
touch.

If it is a sparse matrix, then just maybe one can do something on this
scale, but otherwise, well, it's like telling Mathematica to go and
compute umpty-something factorial -- it will go out, make a heroic
effort, use all the free memory in the universe, and die valiantly
(perhaps taking down your computer with it if the kernel happens to need
some memory at a critical time when there isn't any).  Large scale
computation as a DOS attack...
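To put a number on the sparse case: store only the nonzeros and memory scales with their count, not with n^2 (or n^3). A toy dict-of-keys layout (illustrative only, not any particular library's format):

```python
# Dict-of-keys sparse storage sketch: memory grows with the number of
# nonzeros actually stored, not with the nominal matrix shape.
class DOKMatrix:
    def __init__(self, nrows, ncols):
        self.shape = (nrows, ncols)
        self.data = {}                 # (i, j) -> value; zeros omitted

    def __setitem__(self, ij, v):
        if v:
            self.data[ij] = v
        else:
            self.data.pop(ij, None)    # storing a zero frees the slot

    def __getitem__(self, ij):
        return self.data.get(ij, 0.0)

    def nnz(self):
        return len(self.data)

# A tridiagonal 9915 x 9915 matrix stores ~3n entries instead of the
# n^2 ~ 98 million a dense layout would need.
a = DOKMatrix(9915, 9915)
for i in range(9915):
    a[i, i] = 2.0
    if i:
        a[i, i - 1] = a[i - 1, i] = -1.0
print(a.nnz())   # 29743 = 3*9915 - 2
```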

--
Robert G. Brown
Duke University Dept. of Physics, Box 90305
Durham, N.C. 27708-0305
Phone(cell): 1-919-280-8443
Web: http://www.phy.duke.edu/~rgb
Lulu Bookstore: http://stores.lulu.com/store.php?fAcctID=877977
_______________________________________________
Beowulf mailing list, [email protected]
To change your subscription (digest mode or unsubscribe) visit 
http://www.beowulf.org/mailman/listinfo/beowulf
