Hi developers,
I can see in the code that the part that launches processes on remote
Windows machines is not compiled on other platforms because it uses
COM.
Is there another way of launching processes on Windows from non-Windows
machines?
What would I need to do to write a daemon similar to

As promised earlier today, here are results from my Solaris platforms.
Note that there are libtool-related failures below that may be worth
pursuing.
If necessary, access to most of my machines can be arranged for
qualified persons.
== GNU compilers with {C,CXX,F77,FC}FLAGS=-mcpu=v9 on SPARCs,

Dear All,
Next is a question about "MPI_Close_port".
According to the MPI-2.2 standard, the "port_name" argument of
MPI_Close_port() is marked as 'IN'.
But in Open MPI (both trunk and 1.4.x), the content of
"port_name" is updated in MPI_Close_port().
This seems to violate the MPI standard.
The foll
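
For reference, here is a minimal C sketch of the pattern in question
(error handling omitted). Given the IN semantics, the buffer passed to
MPI_Close_port() should come back unchanged; with the trunk/1.4.x
behavior described above it does not:

    #include <stdio.h>
    #include <string.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        char port_name[MPI_MAX_PORT_NAME];
        char saved[MPI_MAX_PORT_NAME];

        MPI_Init(&argc, &argv);

        MPI_Open_port(MPI_INFO_NULL, port_name);
        memcpy(saved, port_name, sizeof(saved));

        MPI_Close_port(port_name);

        /* "port_name" is marked IN, so its contents should still be
         * identical to "saved" here; if they differ, the buffer was
         * modified by MPI_Close_port(). */
        if (memcmp(saved, port_name, sizeof(saved)) != 0) {
            printf("port_name was modified by MPI_Close_port\n");
        }

        MPI_Finalize();
        return 0;
    }
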
On 1/19/2012 5:22 PM, Paul H. Hargrove wrote:
Minor documentation nit, which might apply to the 1.5 branch as well
(didn't check).
README says:
- Open MPI does not support the Sparc v8 CPU target, which is the
default on Sun Solaris. The v8plus (32 bit) or v9 (64 bit)
targets must be used.

No reason for doing so comes to mind - I suspect the original author
started out doing a "free", then discovered that the overlying MPI code was
passing in an array and so just converted it to a memset. Either way, it really
should be the responsibility of the user's code to deal with this.
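
To make that concrete, here is a hypothetical sketch (the function name,
buffer size, and port string are invented for illustration): the close
routine treats the buffer strictly as IN, and any scrubbing is left to
the caller:

    #include <string.h>

    #define PORT_NAME_MAX 256   /* illustrative size, not the real constant */

    /* Hypothetical close routine: neither frees nor zeroes the
     * caller's storage; it only tears down internal state. */
    static void close_port(const char *port_name)
    {
        /* ... look up and release resources keyed by port_name ... */
        (void) port_name;
    }

    static void caller(void)
    {
        char port_name[PORT_NAME_MAX] = "example-port";  /* invented value */

        close_port(port_name);

        /* If the caller wants the buffer cleared, that is its own job: */
        memset(port_name, 0, sizeof(port_name));
    }
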
Guess I'm confused. The launcher is running on a Linux machine, so it has to
use a Linux service to launch the remote daemon. Can you use ssh to launch the
daemons onto the Windows machines? In other words, can you have the Windows
machine support an ssh connection?
I did a quick search and fou

With
* MLNX OFED stack tailored for GPUDirect
* RHEL + kernel patch
* MVAPICH2
it is possible to monitor GPUDirect v1 activity by observing changes
to the values in
* /sys/module/ib_core/parameters/gpu_direct_pages
* /sys/module/ib_core/parameters/gpu_direct_shares
By setting CUDA_NI
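
As a minimal sketch, assuming those two parameter files exist on the
system and each holds a single integer, a C loop like the following
could watch them; the one-second polling interval is an arbitrary
choice:

    #include <stdio.h>
    #include <unistd.h>

    /* sysfs paths from the GPUDirect-enabled ib_core module, as listed above */
    static const char *params[] = {
        "/sys/module/ib_core/parameters/gpu_direct_pages",
        "/sys/module/ib_core/parameters/gpu_direct_shares",
    };

    int main(void)
    {
        for (;;) {                       /* stop with Ctrl-C */
            for (int i = 0; i < 2; i++) {
                FILE *f = fopen(params[i], "r");
                long v;
                if (f != NULL && fscanf(f, "%ld", &v) == 1) {
                    printf("%s = %ld\n", params[i], v);
                }
                if (f != NULL) {
                    fclose(f);
                }
            }
            sleep(1);                    /* poll once per second */
        }
    }
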
You can tell it is working because your program does not hang anymore :)
Otherwise, there is not a way that I am aware of.
Rolf
PS: And I assume you mean Open MPI under your third bullet below.