Unless your cluster has some weird connection topology and you're trying to
take advantage of that, collective is the best bet.
On Mon, Dec 13, 2010 at 4:26 PM, Eugene Loh wrote:
> David Mathog wrote:
>
> Is there a rule of thumb for when it is best to contact N workers with
>> MPI_Bcast vs. wh
David Mathog wrote:
Is there a rule of thumb for when it is best to contact N workers with
MPI_Bcast vs. when it is best to use a loop which cycles N times and
moves the same information with MPI_Send to one worker at a time?
The rule of thumb is to use a collective whenever you can. The
ra
Is there a rule of thumb for when it is best to contact N workers with
MPI_Bcast vs. when it is best to use a loop which cycles N times and
moves the same information with MPI_Send to one worker at a time?
For that matter, other than the coding semantics, is there any real
difference between the t
Hi,
I'm getting strange behavior using datatypes in a one-sided
MPI_Accumulate operation.
The attached example performs an accumulate into a patch of a shared 2d
matrix. It uses indexed datatypes and can be built with displacement or
absolute indices (hindexed) - both cases fail. I'm seein
Kechagias Apostolos wrote:
Sure it helps. I had no idea about this source.
I hope that it is up to date.
As far as I can tell the figure is up to date.
Here it is again in the MPI Forum:
http://www.mpi-forum.org/docs/mpi21-report/node85.htm
http://www.mpi-forum.org/docs/mpi21-report/mpi21-repor
Sure it helps. I had no idea about this source.
I hope that it is up to date.
2010/12/13 Gus Correa
> Hi Kechagias
>
> The figures in Chapter 4 of
> "MPI: The Complete Reference, Vol 1, 2nd Ed.",
> by Snir et al. are good reminders.
>
> Here are a few:
> //www.dartmouth.edu/~rc/classes/intro_mp
Hi Kechagias
The figures in Chapter 4 of
"MPI: The Complete Reference, Vol 1, 2nd Ed.",
by Snir et al. are good reminders.
Here are a few:
//www.dartmouth.edu/~rc/classes/intro_mpi/mpi_comm_modes2.html#top
I hope this helps,
Gus Correa
Kechagias Apostolos wrote:
I thought that every process
That would be MPI_Bcast
On Mon, Dec 13, 2010 at 9:25 AM, Kechagias Apostolos wrote:
> I thought that every process would receive the data as is.
> Thanks that solved my problem.
>
> 2010/12/13 Gus Correa
>
> Kechagias Apostolos wrote:
>>
>>> I have the code that is in the attachment.
>>> Can
I thought that every process would receive the data as is.
Thanks that solved my problem.
2010/12/13 Gus Correa
> Kechagias Apostolos wrote:
>
>> I have the code that is in the attachment.
>> Can anybody explain how to use the scatter function?
>> It seems that the way I'm using it doesn't do the job.
Kechagias Apostolos wrote:
I have the code that is in the attachment.
Can anybody explain how to use the scatter function?
It seems that the way I'm using it doesn't do the job.
Ralph Castain wrote:
> Bottom line for users: the results remain the same. If no other
process wants time, you'll continue to see near 100% utilization even if
we yield because we will always poll for some time before deciding to yield.
Not surprisingly, I am seeing this with recv/send too, at lea
On Mon, Dec 13, 2010 at 4:57 PM, Kechagias Apostolos
wrote:
> I have the code that is in the attachment.
> Can anybody explain how to use the scatter function?
MPI_Scatter receives the data in the initial segment of the given
buffer. (The receiving buffer needs to be 1/Nth of the send buffer.)
So, i
On Dec 13, 2010, at 11:00 AM, Hicham Mouline wrote:
> In various interfaces, like network sockets, or threads waiting for data from
> somewhere, there are various solutions based on _not_ checking the state of
> the socket or some sort of queue continuously, but sort of getting
> _interrupted_
Ralph and I chatted on the phone about this. Let's clarify a few things here
for the user list:
1. It looks like we don't have this issue explicitly discussed on the FAQ. We
obliquely discuss it in:
http://www.open-mpi.org/faq/?category=all#oversubscribing
and
http://www.open-mpi.org/faq/?cat
OMPI does use those methods, but they can't be used for something like shared
memory. So if you want the performance benefit of shared memory, then we have
to poll.
On Dec 13, 2010, at 9:00 AM, Hicham Mouline wrote:
> I don't understand 1 thing though and would appreciate your comments.
>
>
I don't understand 1 thing though and would appreciate your comments.
In various interfaces, like network sockets, or threads waiting for data from
somewhere, there are various solutions based on _not_ checking the state of the
socket or some sort of queue continuously, but sort of getting _in
I have the code that is in the attachment.
Can anybody explain how to use the scatter function?
It seems that the way I'm using it doesn't do the job.
#include
#include
#include
#include
int main(int argc, char *argv[])
{
int error_code, err, rank, size, N, i, N1, start, end;
float W, pi=0, s
very clear, thanks very much.
-----Original Message-----
From: "Ralph Castain" [r...@open-mpi.org]
Date: 13/12/2010 03:49 PM
To: "Open MPI Users"
Subject: Re: [OMPI users] curious behavior during wait for broadcast: 100% cpu
Thanks for the link!
Just to cl
I think there *was* a decision, and it effectively changed how sched_yield()
operates; it may not do what we expect any more.
See this thread (the discussion of Linux/sched_yield() comes in the later
messages):
http://www.open-mpi.org/community/lists/users/2010/07/1372
Thanks for the link!
Just to clarify for the list, my original statement is essentially correct.
When calling sched_yield, we give up the remaining portion of our time slice.
The issue in the kernel world centers around where to put you in the scheduling
cycle once you have called sched_yield.
See the discussion on kerneltrap:
http://kerneltrap.org/Linux/CFS_and_sched_yield
Looks like the change came in somewhere around 2.6.23 or so...?
On Dec 13, 2010, at 9:38 AM, Ralph Castain wrote:
> Could you at least provide a one-line explanation of that statement?
>
>
> On Dec 13, 201
Could you at least provide a one-line explanation of that statement?
On Dec 13, 2010, at 7:31 AM, Jeff Squyres wrote:
> Also note that recent versions of the Linux kernel have changed what
> sched_yield() does -- it no longer does essentially what Ralph describes
> below. Google around to fin
What version of OMPI are you using? That error message looks like something
from an ancient version - might be worth updating.
On Dec 13, 2010, at 4:04 AM, peifan wrote:
> I have 3 nodes: one is a master node and the others are computing nodes; these
> nodes are deployed on the internet (not in a cluster)
>
Also note that recent versions of the Linux kernel have changed what
sched_yield() does -- it no longer does essentially what Ralph describes below.
Google around to find those discussions.
On Dec 9, 2010, at 4:07 PM, Ralph Castain wrote:
> Sorry for delay - am occupied with my day job.
>
>
You should be able to use normal MPI_TYPE_CREATE_STRUCT functionality to skip
members that you don't want represented in a struct.
The general idea is to instantiate one of the classes and then use the
addresses of the members to compute the displacement(s) from the base. The
same thing works
Hi,
On Fri, Dec 10, 2010 at 2:51 AM, Santosh Ansumali wrote:
>> - the "static" data member is shared between all instances of the
>> class, so it cannot be part of the MPI datatype (it will likely be
>> at a fixed memory location);
>
> Yes! I agree that i is global as far as different instances
I have 3 nodes: one is a master node and the others are computing nodes; these
nodes are deployed on the internet (not in a cluster).
When I run NPB (the NAS Parallel Benchmarks) on one node (using 2 processes):
mpirun -np 2 exe.
I get a successful result, but when I run on two nodes (for example
ru
Hi Siegmar,
Building Open MPI under Cygwin is not the approach we recommend; it's not
easy, and the build time is extremely long. Actually, if you have
CMake and Visual Studio installed, then it's pretty easy to build the Open
MPI binary, but of course you have to port your GNU makefiles.
If there
Hi Ralph,
Thanks for confirming this and for getting a patch in. Greatly appreciated!
--
Best,
Hsiu-Khuern.
* On Sat 11:27AM -0700, 11 Dec 2010, Ralph Castain (r...@open-mpi.org) wrote:
> Hmmm... well, that stinks. I did some digging and there is indeed a bug in
> the 1.4 series - forgot to c