Re: [OMPI users] cannot restrict port numbers using btl_tcp_port_min_v4 and btl_tcp_port_range_v4

2010-12-13 Thread Tang, Hsiu-Khuern
Hi Ralph, Thanks for confirming this and for getting a patch in. Greatly appreciated! -- Best, Hsiu-Khuern. * On Sat 11:27AM -0700, 11 Dec 2010, Ralph Castain (r...@open-mpi.org) wrote: > Hmmm... well, that stinks. I did some digging and there is indeed a bug in > the 1.4 series - forgot to

Re: [OMPI users] Open MPI on Cygwin

2010-12-13 Thread Shiqing Fan
Hi Siegmar, Building Open MPI under Cygwin is not the way we recommend: it's not easy, and the build time is extremely long. Actually, if you have CMake and Visual Studio installed, then it's pretty easy to build the Open MPI binary, but of course you have to port your GNU makefiles. If

[OMPI users] how to set the connecttimeout para?

2010-12-13 Thread peifan
I have 3 nodes: one is the master node and the others are compute nodes. These nodes are deployed across the Internet (not in a cluster). When I run NPB (the NAS Parallel Benchmarks) on one node (using 2 processes), mpirun -np 2 exe, I get a successful result, but when I run on two nodes (for example

Re: [OMPI users] Help on Mpi derived datatype for class with static members

2010-12-13 Thread Riccardo Murri
Hi, On Fri, Dec 10, 2010 at 2:51 AM, Santosh Ansumali wrote: >> - the "static" data member is shared between all instances of the >>  class, so it cannot be part of the MPI datatype (it will likely be >>  at a fixed memory location); > > Yes! I agree that i is global as far

Re: [OMPI users] Help on Mpi derived datatype for class with static members

2010-12-13 Thread Jeff Squyres
You should be able to use normal MPI_TYPE_CREATE_STRUCT functionality to skip members that you don't want represented in a struct. The general idea is to instantiate one of the classes and then use the addresses of the members to compute the displacement(s) from the base. The same thing works
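A minimal sketch of that approach in C, using a hypothetical struct and skipping one member (the names and the resize step are illustrative assumptions, not taken from the original post):

#include <mpi.h>

struct particle {
    double x;      /* included in the MPI datatype */
    int    tag;    /* deliberately skipped, like a static/class-wide member */
    double mass;   /* included in the MPI datatype */
};

int main(int argc, char *argv[])
{
    MPI_Init(&argc, &argv);

    struct particle p;
    MPI_Aint base, addr_x, addr_mass;

    /* Compute member displacements from the base address of an instance. */
    MPI_Get_address(&p, &base);
    MPI_Get_address(&p.x, &addr_x);
    MPI_Get_address(&p.mass, &addr_mass);

    int          blocklens[2] = { 1, 1 };
    MPI_Aint     displs[2]    = { addr_x - base, addr_mass - base };
    MPI_Datatype types[2]     = { MPI_DOUBLE, MPI_DOUBLE };

    MPI_Datatype tmp, particle_type;
    MPI_Type_create_struct(2, blocklens, displs, types, &tmp);
    /* Resize so the extent spans the whole struct, including the skipped
       member, so that arrays of the struct stride correctly. */
    MPI_Type_create_resized(tmp, 0, sizeof(struct particle), &particle_type);
    MPI_Type_commit(&particle_type);

    /* ... use particle_type when sending/receiving arrays of struct particle ... */

    MPI_Type_free(&tmp);
    MPI_Type_free(&particle_type);
    MPI_Finalize();
    return 0;
}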

Re: [OMPI users] curious behavior during wait for broadcast: 100% cpu

2010-12-13 Thread Jeff Squyres
Also note that recent versions of the Linux kernel have changed what sched_yield() does -- it no longer does essentially what Ralph describes below. Google around to find those discussions. On Dec 9, 2010, at 4:07 PM, Ralph Castain wrote: > Sorry for delay - am occupied with my day job. > >

Re: [OMPI users] curious behavior during wait for broadcast: 100% cpu

2010-12-13 Thread Ralph Castain
Could you at least provide a one-line explanation of that statement? On Dec 13, 2010, at 7:31 AM, Jeff Squyres wrote: > Also note that recent versions of the Linux kernel have changed what > sched_yield() does -- it no longer does essentially what Ralph describes > below. Google around to

Re: [OMPI users] curious behavior during wait for broadcast: 100% cpu

2010-12-13 Thread Jeff Squyres
See the discussion on kerneltrap: http://kerneltrap.org/Linux/CFS_and_sched_yield Looks like the change came in somewhere around 2.6.23 or so...? On Dec 13, 2010, at 9:38 AM, Ralph Castain wrote: > Could you at least provide a one-line explanation of that statement? > > > On Dec 13,

Re: [OMPI users] curious behavior during wait for broadcast: 100% cpu

2010-12-13 Thread Ralph Castain
Thanks for the link! Just to clarify for the list, my original statement is essentially correct. When calling sched_yield, we give up the remaining portion of our time slice. The issue in the kernel world centers around where to put you in the scheduling cycle once you have called sched_yield.
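For readers unfamiliar with the call, here is a tiny self-contained sketch of the poll-then-yield pattern being discussed (the completion check is a stand-in, not Open MPI's actual progress engine):

#include <sched.h>
#include <stdio.h>

/* Hypothetical completion check: stands in for "has my message arrived?" */
static int message_arrived(void)
{
    static long polls = 0;
    return ++polls > 1000000;   /* pretend the message shows up after many polls */
}

int main(void)
{
    while (!message_arrived()) {
        /* Give up the remainder of our time slice. On newer CFS kernels we may
           be rescheduled almost immediately, so the CPU still shows near 100%
           utilization when nothing else wants to run. */
        sched_yield();
    }
    printf("message arrived\n");
    return 0;
}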

Re: [OMPI users] curious behavior during wait for broadcast: 100% cpu

2010-12-13 Thread Jeff Squyres
I think there *was* a decision, and it changed how sched_yield() effectively operates; it may not do what we expect any more. See this thread (the discussion of Linux/sched_yield() comes in the later messages):

Re: [OMPI users] curious behavior during wait for broadcast: 100% cpu

2010-12-13 Thread Hicham Mouline
very clear, thanks very much. -Original Message- From: "Ralph Castain" [r...@open-mpi.org] Date: 13/12/2010 03:49 PM To: "Open MPI Users" Subject: Re: [OMPI users] curious behavior during wait for broadcast: 100% cpu Thanks for the link! Just to

[OMPI users] Why? MPI_Scatter problem

2010-12-13 Thread Kechagias Apostolos
I have the code that is in the attachment. Can anybody explain how to use the scatter function? It seems that the way I'm using it doesn't do the job. #include #include #include #include int main(int argc, char *argv[]) { int error_code, err, rank, size, N, i, N1, start, end; float W, pi=0,

Re: [OMPI users] curious behavior during wait for broadcast: 100% cpu

2010-12-13 Thread Hicham Mouline
I don't understand one thing, though, and would appreciate your comments. In various interfaces, like network sockets, or threads waiting for data from somewhere, there are various solutions based on _not_ checking the state of the socket or some sort of queue continuously, but sort of getting

Re: [OMPI users] curious behavior during wait for broadcast: 100% cpu

2010-12-13 Thread Ralph Castain
OMPI does use those methods, but they can't be used for something like shared memory. So if you want the performance benefit of shared memory, then we have to poll. On Dec 13, 2010, at 9:00 AM, Hicham Mouline wrote: > I don't understand 1 thing though and would appreciate your comments. > >

Re: [OMPI users] curious behavior during wait for broadcast: 100% cpu

2010-12-13 Thread Jeff Squyres
Ralph and I chatted on the phone about this. Let's clarify a few things here for the user list: 1. It looks like we don't have this issue explicitly discussed on the FAQ. We obliquely discuss it in: http://www.open-mpi.org/faq/?category=all#oversubscribing and

Re: [OMPI users] curious behavior during wait for broadcast: 100% cpu

2010-12-13 Thread Jeff Squyres
On Dec 13, 2010, at 11:00 AM, Hicham Mouline wrote: > In various interfaces, like network sockets, or threads waiting for data from > somewhere, there are various solutions based on _not_ checking the state of > the socket or some sort of queue continuously, but sort of getting >

Re: [OMPI users] Why? MPI_Scatter problem

2010-12-13 Thread Riccardo Murri
On Mon, Dec 13, 2010 at 4:57 PM, Kechagias Apostolos wrote: > I have the code that is in the attachment. > Can anybody explain how to use scatter function? MPI_Scatter receives the data in the initial segment of the given buffer. (The receiving buffer needs to be 1/Nth of
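A minimal sketch of that usage (array length and values are hypothetical); each rank receives its chunk at the start of its own receive buffer, not at an offset of rank*chunk:

#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size;
    const int N = 8;   /* assume N is divisible by the number of ranks */

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int chunk = N / size;
    float *sendbuf = NULL;
    float *recvbuf = malloc(chunk * sizeof(float));

    if (rank == 0) {   /* only the root needs the full send buffer */
        sendbuf = malloc(N * sizeof(float));
        for (int i = 0; i < N; ++i) sendbuf[i] = (float)i;
    }

    /* The root hands out N/size elements to every rank, including itself. */
    MPI_Scatter(sendbuf, chunk, MPI_FLOAT,
                recvbuf, chunk, MPI_FLOAT,
                0, MPI_COMM_WORLD);

    printf("rank %d got %g ... %g\n", rank, recvbuf[0], recvbuf[chunk - 1]);

    free(recvbuf);
    if (rank == 0) free(sendbuf);
    MPI_Finalize();
    return 0;
}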

Re: [OMPI users] curious behavior during wait for broadcast: 100% cpu

2010-12-13 Thread David Mathog
Ralph Castain wrote: > Bottom line for users: the results remain the same. If no other process wants time, you'll continue to see near 100% utilization even if we yield because we will always poll for some time before deciding to yield. Not surprisingly, I am seeing this with recv/send too, at

Re: [OMPI users] Why? MPI_Scatter problem

2010-12-13 Thread Gus Correa
Kechagias Apostolos wrote: I have the code that is in the attachment. Can anybody explain how to use the scatter function? It seems that the way I'm using it doesn't do the job.

Re: [OMPI users] Why? MPI_Scatter problem

2010-12-13 Thread Kechagias Apostolos
I thought that every process will receive the data as is. Thanks, that solved my problem. 2010/12/13 Gus Correa > Kechagias Apostolos wrote: > >> I have the code that is in the attachment. >> Can anybody explain how to use the scatter function? >> It seems that the way I'm

Re: [OMPI users] Why? MPI_Scatter problem

2010-12-13 Thread David Zhang
That would be MPI_Bcast. On Mon, Dec 13, 2010 at 9:25 AM, Kechagias Apostolos wrote: > I thought that every process will receive the data as is. > Thanks, that solved my problem. > > 2010/12/13 Gus Correa > > Kechagias Apostolos wrote: >> >>>

Re: [OMPI users] Why? MPI_Scatter problem

2010-12-13 Thread Gus Correa
Hi Kechagias The figures in Chapter 4 of "MPI: The Complete Reference, Vol 1, 2nd Ed.", by Snir et al. are good reminders. Here are a few: http://www.dartmouth.edu/~rc/classes/intro_mpi/mpi_comm_modes2.html#top I hope this helps, Gus Correa Kechagias Apostolos wrote: I thought that every process

Re: [OMPI users] Why? MPI_Scatter problem

2010-12-13 Thread Kechagias Apostolos
Sure it helps. I had no idea about this source. I hope that it is up to date. 2010/12/13 Gus Correa > Hi Kechagias > > The figures in Chapter 4 of > "MPI: The Complete Reference, Vol 1, 2nd Ed.", > by Snir et. al. are good reminders. > > Here are a few: >

Re: [OMPI users] Why? MPI_Scatter problem

2010-12-13 Thread Gus Correa
Kechagias Apostolos wrote: Sure it helps. I had no idea about this source. I hope that it is up to date. As far as I can tell the figure is up to date. Here it is again in the MPI Forum: http://www.mpi-forum.org/docs/mpi21-report/node85.htm

[OMPI users] One-sided datatype errors

2010-12-13 Thread James Dinan
Hi, I'm getting strange behavior using datatypes in a one-sided MPI_Accumulate operation. The attached example performs an accumulate into a patch of a shared 2d matrix. It uses indexed datatypes and can be built with displacement or absolute indices (hindexed) - both cases fail. I'm
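For context, a self-contained sketch of what "accumulate into a patch through an indexed datatype" can look like (this is an illustrative reconstruction with assumed sizes and fence synchronization, not the poster's attached test case):

#include <stdio.h>
#include <mpi.h>

#define N     8   /* target matrix is N x N, row-major */
#define PATCH 4   /* accumulate into a PATCH x PATCH block at the top-left corner */

int main(int argc, char *argv[])
{
    int rank;
    double *mat;
    MPI_Win win;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Every rank exposes an N x N matrix of zeros in a window. */
    MPI_Alloc_mem(N * N * sizeof(double), MPI_INFO_NULL, &mat);
    for (int i = 0; i < N * N; ++i) mat[i] = 0.0;
    MPI_Win_create(mat, N * N * sizeof(double), sizeof(double),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    /* Indexed datatype selecting one PATCH-long block per row of the patch. */
    int blocklens[PATCH], displs[PATCH];
    for (int i = 0; i < PATCH; ++i) {
        blocklens[i] = PATCH;
        displs[i]    = i * N;   /* start of row i within the target matrix */
    }
    MPI_Datatype patch_type;
    MPI_Type_indexed(PATCH, blocklens, displs, MPI_DOUBLE, &patch_type);
    MPI_Type_commit(&patch_type);

    double ones[PATCH * PATCH];
    for (int i = 0; i < PATCH * PATCH; ++i) ones[i] = 1.0;

    /* Every rank adds a patch of ones into rank 0's matrix. */
    MPI_Win_fence(0, win);
    MPI_Accumulate(ones, PATCH * PATCH, MPI_DOUBLE,
                   0, 0, 1, patch_type, MPI_SUM, win);
    MPI_Win_fence(0, win);

    if (rank == 0)
        printf("mat[0][0] = %g\n", mat[0]);

    MPI_Type_free(&patch_type);
    MPI_Win_free(&win);
    MPI_Free_mem(mat);
    MPI_Finalize();
    return 0;
}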

[OMPI users] MPI_Bcast vs. per worker MPI_Send?

2010-12-13 Thread David Mathog
Is there a rule of thumb for when it is best to contact N workers with MPI_Bcast vs. when it is best to use a loop which cycles N times and moves the same information with MPI_Send to one worker at a time? For that matter, other than the coding semantics, is there any real difference between the

Re: [OMPI users] MPI_Bcast vs. per worker MPI_Send?

2010-12-13 Thread Eugene Loh
David Mathog wrote: Is there a rule of thumb for when it is best to contact N workers with MPI_Bcast vs. when it is best to use a loop which cycles N times and moves the same information with MPI_Send to one worker at a time? The rule of thumb is to use a collective whenever you can. The
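A sketch of the two approaches side by side (buffer size is arbitrary): the collective lets the library choose a tree or pipelined fan-out, while the hand-rolled loop serializes every transfer at the root:

#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size, data[1024] = { 0 };

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Collective: one call on every rank; the implementation picks the algorithm. */
    MPI_Bcast(data, 1024, MPI_INT, 0, MPI_COMM_WORLD);

    /* Hand-rolled equivalent: O(N) point-to-point sends, all issued by the root. */
    if (rank == 0) {
        for (int dest = 1; dest < size; ++dest)
            MPI_Send(data, 1024, MPI_INT, dest, 0, MPI_COMM_WORLD);
    } else {
        MPI_Recv(data, 1024, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }

    MPI_Finalize();
    return 0;
}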

Re: [OMPI users] MPI_Bcast vs. per worker MPI_Send?

2010-12-13 Thread David Zhang
Unless your cluster has some weird connection topology and you're trying to take advantage of that, collective is the best bet. On Mon, Dec 13, 2010 at 4:26 PM, Eugene Loh wrote: > David Mathog wrote: > > Is there a rule of thumb for when it is best to contact N workers