Ah, I guess my original understanding of PML was wrong. Adding "-mca
pml ob1" does help to ease the problem. But the question still
remains: why did OMPI decide to use the mx BTL in the first place,
given there's no physical device onboard at all? This behavior is
completely different from the original
Hi Aurelien,
Thanks for the explanation, but I'm not following it. There's no MX
device on the test machine as I mentioned, so OMPI should not find it
at all in the first place. I'm also not able to locate the ob1 MTL.
There's the ob1 PML, but I don't understand how that's going to affect
the mx BTL.
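In case it helps, the PML/MTL/BTL components that were actually built
into an install can be listed with ompi_info, e.g.:

  ompi_info | grep -i -E 'pml|mtl|btl'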
On Jun 11, 2012, at 6:57 PM, Aurélien Bouteiller wrote:
> Hi,
>
> If some mx devices are found, the logic is not only to use the mx BTL but
> also to use the mx MTL. You can try to disable this with --mca mtl ob1.
>
Sorry, I meant --mca pml ob1
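That is, either directly on the mpirun command line or via the
corresponding environment variable before launching (./a.out below is
just a placeholder):

  mpirun --mca pml ob1 -np 2 ./a.out
  export OMPI_MCA_pml=ob1    # equivalent; read by Open MPI at startup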
> Aurelien
>
>
>
>
> On Jun 11, 2012, at 6:24 PM, Yong Qin wrote:
Hi,
If some mx devices are found, the logic is not only to use the mx BTL but also
to use the mx MTL. You can try to disable this with --mca mtl ob1.
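If you want to verify what the MX library itself sees on that node, the
mx_info utility that ships with the MX driver should list the detected
NICs (assuming the MX tools are in your PATH):

  mx_info

You can also keep the mx BTL itself out of the picture explicitly with
"--mca btl ^mx".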
Aurelien
On Jun 11, 2012, at 6:24 PM, Yong Qin wrote:
> Hi,
>
> We are migrating to Open MPI 1.6 but since 1.6 dropped support for
> Myricom
Hi,
We are migrating to Open MPI 1.6, but since 1.6 dropped support for
the Myricom GM driver, we have to switch to the MX driver. We have the
Myricom MX2G 1.2.16 driver installed. However upon testing the new
build of Open MPI on a node without the actual Myrinet device, we are
getting the following
On Jun 11, 2012, at 12:11 PM, BOUVIER Benjamin wrote:
> Wow. I thought in the first place that all combinations would be equivalent,
> but in fact, this is not the case...
> I've kept the firewalls down during all the tests.
>
>> - on node1, "mpirun --host node1,node2 ring_c"
> Works.
>
>> - on
Wow. I thought in the first place that all combinations would be equivalent,
but in fact, this is not the case...
I've kept the firewalls down during all the tests.
> - on node1, "mpirun --host node1,node2 ring_c"
Works.
> - on node1, "mpirun --host node1,node3 ring_c"
> - on node1, "mpirun --ho
On Jun 11, 2012, at 11:15 AM, BOUVIER Benjamin wrote:
> Thanks for your hints Jeff.
> I've just tried without any firewalls on involved machines, but the issue
> remains.
>
> # /etc/init.d/ip6tables status
> ip6tables: Firewall is not running.
> # /etc/init.d/iptables status
> iptables: Firewall is not running.
Hi,
Thanks for your hints, Jeff.
I've just tried without any firewalls on the involved machines, but the issue
remains.
# /etc/init.d/ip6tables status
ip6tables: Firewall is not running.
# /etc/init.d/iptables status
iptables: Firewall is not running.
The machines have the host names "node1", "node2
To start, I would ensure that all firewalling (e.g., iptables) is disabled on
all machines involved.
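For example, on RHEL/CentOS-style systems, something like the following
(run as root on every node) turns it off for the duration of the test:

  /etc/init.d/iptables stop
  /etc/init.d/ip6tables stop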
On Jun 11, 2012, at 10:16 AM, BOUVIER Benjamin wrote:
> Hi,
>
>> I'd guess that running net pipe with 3 procs may be undefined.
>
> It is indeed undefined. Running the net pipe program locally
Hi,
> I'd guess that running net pipe with 3 procs may be undefined.
It is indeed undefined. Running the net pipe program locally with 3 processes
blocks on my computer.
This issue is especially weird, as there is no problem running the example
program over the network with the MPICH2 implementation.
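As far as I understand, net pipe is a point-to-point benchmark, so only
the two-process case is really meaningful; the invocation I am aiming for
is something like this (NPmpi is the MPI binary name from my NetPIPE
build):

  mpirun -np 2 --host node1,node2 NPmpi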