Designation: Non-Export Controlled Content
Folks;
My MPI_COMM_WORLD size is 2000. I have created a communicator
based on a small subset. I have used this communicator in MPI_Barrier calls and
it seems to work fine. Now I want to use this in MPI_Reduce. When I do this I
get errors.
…and proxies in the right group.
Does that fit your needs?
If yes, then keep in mind sensorComm is MPI_COMM_NULL on the proxy tasks,
proxyComm is MPI_COMM_NULL on the sensor tasks, and controlComm is
MPI_COMM_NULL on the dispatcher.
Cheers,
Gilles
On Monday, October 17, 2016, Marlborough, Rick <rmarlboro...@aaccorp.com> wrote:
George;
Thanks for your response. Your second sentence is a little
confusing. If my world group is P0, P1, … visible on both processes, w…
…MPI_COMM_NULL communicator, which might explain the
"invalid communicator" error you are getting.
George.
On Fri, Oct 14, 2016 at 5:33 PM, Marlborough, Rick <rmarlboro...@aaccorp.com> wrote:
Folks;
I have the following code setup. The sensorList is an array of
ints of size 1. The value it contains is 1. My comm world size is 5. The call
to MPI_Barrier fails every time with error "invalid communicator". This code is
pretty mu…
From: users [mailto:users-boun...@lists.open-mpi.org] On Behalf Of Marlborough,
Rick
Sent: Wednesday, October 12, 2016 5:44 PM
To: Open MPI Users
Subject: Re: [OMPI users] clarity on Comm_connect
Designation: Non-Export Controlled Content
...forgot to mention...
I have a…
…works! The sensors and
proxies are all spawned in 2 batches using Comm_spawn_multiple. Error message
below. Is there some configuration to enable this?
From: users [mailto:users-boun...@lists.open-mpi.org] On Behalf Of Marlborough,
Rick
Sent: …
Folks;
Trying to do an MPI_Lookup_name. The call is surrounded by a
try/catch block. Even with the try/catch block, the calling process will still
abort if the publishing process has not published the name. Is there a way to
configure/cod…
…to try a nightly snapshot of the v2.0.x branch
Cheers,
Gilles
On Tuesday, October 4, 2016, Marlborough, Rick <rmarlboro...@aaccorp.com> wrote:
Gilles;
Here is the client side code. The start command is “mpirun -n 1
client 10”, where 10 is used to size a…
…think ompi_server is required here.
Can you please post a trimmed version of your client and server, and your two
mpirun command lines?
You also need to make sure all ranks have the same root parameter when invoking
MPI_Comm_accept and MPI_Comm_connect.
Cheers,
Gilles
"Marlborough, Rick" ma
Folks;
I have been trying to get a test case up and running using a
client server scenario with a server waiting on MPI_Comm_accept and the client
trying to connect via MPI_Comm_connect. The port value is written to a file.
The client opens the file and reads the port value. I ru…
…ng out of the box with another Open MPI
version.
Bottom line, this test program is not doing what you expected.
Cheers,
Gilles
On Friday, September 30, 2016, Marlborough, Rick <rmarlboro...@aaccorp.com> wrote:
Gilles;
Thanks for your response. The network setup I hav…
…cast with different but matching
signatures
(e.g. some tasks MPI_Bcast 8000 MPI_BYTE, while some other tasks MPI_Bcast 1
vector of 8000 MPI_BYTE)
you might want to try
mpirun --mca coll ^tuned
and see if it helps
Cheers,
Gilles
On 9/30/2016 6:52 AM, Marlborough, Rick wrote:
Folks;
I am attempting to set up a task that sends large messages via the
MPI_Bcast API. I am finding that small messages work ok, anything less than 8000
bytes. Anything more than this and the whole scenario hangs, with most of the
worker processes pegged at 100% CPU usage. Tried som…