Re: [OMPI users] Sending large broadcasts

2011-01-03 Thread David Singleton
Hi Brock, That message should only be 2MB. Are you sure it's not a mismatch of message lengths in MPI_Bcast calls? David On 01/04/2011 03:47 AM, Brock Palen wrote: I have a user who reports that sending a broadcast of 540*1080 of reals (just over 2GB) fails with this: *** An error occurred

Re: [OMPI users] Sending large broadcasts

2011-01-03 Thread Gustavo Correa
Hi Brock He's probably hitting the MPI address boundary of 2GB. A workaround is to declare a user defined type (MPI_TYPE_CONTIGUOUS, or MPI_TYPE_VECTOR), to bundle a bunch of primitive data (e.g. reals), then send (broadcast for him/her) a smaller number of those types. See this thread: http://

Re: [OMPI users] Granular locks?

2011-01-03 Thread Gijsbert Wiesenekker
On Oct 2, 2010, at 10:54 , Gijsbert Wiesenekker wrote: > > On Oct 1, 2010, at 23:24 , Gijsbert Wiesenekker wrote: > >> I have a large array that is shared between two processes. One process >> updates array elements randomly, the other process reads array elements >> randomly. Most of the tim

[OMPI users] Sending MPI-message from master to master

2011-01-03 Thread Сергей Реймеров
I haven't found any helpful information about the possibility of sending messages from a node of a cluster to that same node (for example MPI_Send(&f, 1, MPI_UNSIGNED_LONG_LONG, 0, 0, MPI_COMM_WORLD) from node #0). I wrote a program with two threads, where one thread uses MPI_Send to send a message to another thread that must
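For what it's worth, a rank sending to itself is legal in MPI; the catch is that a blocking MPI_Send to self can deadlock if the matching receive is not yet posted (small messages often go through the eager path and happen to work, but that is not portable). A minimal sketch, assuming an MPI environment (compile with mpicc, run under mpirun; not runnable standalone):

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    unsigned long long f = 42ULL, g = 0ULL;
    MPI_Request req;

    /* Post the send nonblocking so the matching receive can run in the
     * same thread; a blocking MPI_Send here could deadlock for large f. */
    MPI_Isend(&f, 1, MPI_UNSIGNED_LONG_LONG, rank, 0, MPI_COMM_WORLD, &req);
    MPI_Recv(&g, 1, MPI_UNSIGNED_LONG_LONG, rank, 0, MPI_COMM_WORLD,
             MPI_STATUS_IGNORE);
    MPI_Wait(&req, MPI_STATUS_IGNORE);

    printf("rank %d received %llu from itself\n", rank, g);
    MPI_Finalize();
    return 0;
}
```

MPI_Sendrecv with source == dest == rank is an equivalent single-call form.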

[OMPI users] Sending large broadcasts

2011-01-03 Thread Brock Palen
I have a user who reports that sending a broadcast of 540*1080 of reals (just over 2GB) fails with this: *** An error occurred in MPI_Bcast *** on communicator MPI_COMM_WORLD *** MPI_ERR_TRUNCATE: message truncated *** MPI_ERRORS_ARE_FATAL (your MPI job will now abort) I was reading the archive

Re: [OMPI users] Using MPI_Put/Get correctly?

2011-01-03 Thread Grismer, Matthew J Civ USAF AFMC AFRL/RBAT
I'm using Open MPI 1.4.3, is the bug in that version as well? -Original Message- From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On Behalf Of Barrett, Brian W Sent: Monday, January 03, 2011 11:35 AM To: Open MPI Users Subject: Re: [OMPI users] Using MPI_Put/Get correct

Re: [OMPI users] Using MPI_Put/Get correctly?

2011-01-03 Thread Barrett, Brian W
Matt - There's a known bug in the datatype engine of Open MPI 1.5 that breaks MPI one-sided when used with user-defined datatypes. Unfortunately, I don't have a timetable as to when it will be fixed. Brian On Jan 3, 2011, at 9:18 AM, Grismer, Matthew J Civ USAF AFMC AFRL/RBAT wrote: > Unf

Re: [OMPI users] Using MPI_Put/Get correctly?

2011-01-03 Thread Grismer, Matthew J Civ USAF AFMC AFRL/RBAT
Unfortunately correcting the integer type for the displacement does not fix the problem in my code, argh! So, thinking this might have something to do with the large arrays and amount of data being passed in the actual code, I modified my example (attached putbothways2.f90) so that the array sizes
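For reference, the displacement fix discussed here: in Fortran, MPI_Put's target displacement must be declared integer(kind=MPI_ADDRESS_KIND); a default integer there passes garbage on 64-bit builds. In C the parameter type is MPI_Aint. A hedged C sketch of the correct call shape (buffer names and sizes are illustrative, not taken from Matt's putbothways2.f90; assumes an MPI environment):

```c
#include <mpi.h>

/* Each rank exposes win_buf and puts one double into slot `rank`
 * of its right neighbour's window. */
int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double win_buf[64] = {0}, src = (double)rank;
    MPI_Win win;
    MPI_Win_create(win_buf, 64 * sizeof(double), sizeof(double),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    int target = (rank + 1) % size;
    MPI_Aint disp = rank;   /* displacement is MPI_Aint, not int */

    MPI_Win_fence(0, win);
    MPI_Put(&src, 1, MPI_DOUBLE, target, disp, 1, MPI_DOUBLE, win);
    MPI_Win_fence(0, win);

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}
```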

Re: [OMPI users] srun and openmpi

2011-01-03 Thread Jeff Squyres
Yo Ralph -- I see this was committed https://svn.open-mpi.org/trac/ompi/changeset/24197. Do you want to add a blurb in README about it, and/or have this executable compiled as part of the PSM MTL and then installed into $bindir (maybe named ompi-psm-keygen)? Right now, it's only compiled as

Re: [OMPI users] Windows installers of 1.5.1 - No Fortran?

2011-01-03 Thread Damien Hocking
Ah. Well, I do. Do you want me to build a set of binaries for Windows with Fortran in there? I had to do that anyway for using 1.5.1 with MUMPS. All we'd need to do is make sure we have all the flags on that you need; it takes about an hour. Damien On 03/01/2011 5:17 AM, Shiqing Fan wrote

Re: [OMPI users] Windows installers of 1.5.1 - No Fortran?

2011-01-03 Thread Shiqing Fan
Hi Damien, Unfortunately, we don't have a valid license for the Intel Fortran compiler at the moment on the machine where we built this installer. Regards, Shiqing On 12/29/2010 6:47 AM, Damien Hocking wrote: Jeff, Shiqing, anyone... I notice there's no Fortran support in the Windows binary version

Re: [OMPI users] memory consumption on rank 0 and btl_openib_receive_queues use

2011-01-03 Thread Eloi Gaudry
Hi, I'd like to know if someone has had a chance to look at the issue I reported. Thanks, and happy new year! éloi On 12/21/2010 10:58 AM, Eloi Gaudry wrote: hi, when launching a parallel computation on 128 nodes using openib and the "-mca btl_openib_receive_queues P,65536,256,192,128" option,