On Fri, 25 May 2007, Jaime Perea wrote:

One alternative that I like and it integrates well with mpi
is the global arrays toolkit

http://www.emsl.pnl.gov/docs/global/

I disagree with the "integrates well with mpi" part of the statement. Global Arrays runs on top of a communication layer called ARMCI, which uses the MPI implementation only for setting up and tearing down the job (essentially just calling MPI_Init and MPI_Finalize, from what I remember); the communication itself is done directly over lower-level protocols (TCP, GM, etc.). I know because some years ago I wanted to use Global Arrays on a SCore cluster and discovered that I had to port ARMCI to PM (the low-level communication protocol of SCore)... This can create problems due to limitations imposed by the low-level protocols: for example, the MPI implementation would open a GM port while ARMCI would need to open a second one, so the per-node limit on GM ports would be reached much faster.

[ I don't want the above to sound negative towards Global Arrays or ARMCI; my intention was only to bring into the discussion a fact that was missing. ]

--
Bogdan Costescu

IWR - Interdisziplinaeres Zentrum fuer Wissenschaftliches Rechnen
Universitaet Heidelberg, INF 368, D-69120 Heidelberg, GERMANY
Telephone: +49 6221 54 8869, Telefax: +49 6221 54 8868
E-mail: [EMAIL PROTECTED]
_______________________________________________
Beowulf mailing list, [email protected]
To change your subscription (digest mode or unsubscribe) visit 
http://www.beowulf.org/mailman/listinfo/beowulf