On Aug 29, 2011, at 4:22 PM, Eugene Loh wrote:
> It seems to me the FAQ item
> http://www.open-mpi.org/faq/?category=large-clusters#fd-limits needs
> updating. I'm willing to give this a try, but need some help first. (I'm
> even more willing to let someone else do all this, but I'm not hold
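For background, the FAQ item in question concerns per-process file descriptor
limits, which cap how many TCP sockets each MPI process and daemon can open on
large clusters. A minimal sketch of checking and raising the limit, assuming a
bash-like shell (the 65536 value is only illustrative, and raising the hard
limit may require root or limits.conf changes):

$ ulimit -n          # current soft fd limit
$ ulimit -Hn         # hard limit, the ceiling for the soft limit
$ ulimit -n 65536    # raise the soft limit, up to the hard limit
$ mpirun --mca opal_set_max_sys_limits 1 ...   # or ask Open MPI to raise limits itself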
On Aug 29, 2011, at 11:18 PM, Eugene Loh wrote:
> Maybe someone can help me from having to think too hard.
>
> Let's say I want to max my system limits. I can say this:
>
> % mpirun --mca opal_set_max_sys_limits 1 ...
>
> Cool.
>
> Meanwhile, if I do this:
>
> % setenv OMPI_MCA_opal_set_max_sys_limits 1
You're right. Sorry about the typo. It was just corrected.
On Aug 30, 2011, at 2:27 PM, Shamis, Pavel wrote:
> Hi all,
> I'm not sure if it is relevant to this specific commit, but it is relevant
> for some of the epoch changes.
> I was not able to compile the latest trunk version on our Cray system, t
Hi all,
I'm not sure if it is relevant to this specific commit, but it is relevant for
some of the epoch changes.
I was not able to compile the latest trunk version on our Cray system; the
failure was in the ess/alps component, and to me it seems like a simple typo.
I did not have a chance to check my fix on our
Thanks for the links.
I found a link (below) comparing TIPC, TCP, and SCTP, but it uses an old
version of TIPC (1.7.3). Do you have any similar tests on the latest versions
of TIPC, TCP, and SCTP? That would be more helpful for convincing people to
use TIPC.
Another thing I am interested in is wheth
On Aug 29, 2011, at 3:51 AM, Xin He wrote:
>> -
>> $ mpirun --mca btl tcp,self --bynode -np 2 --mca btl_tcp_if_include eth0
>> hostname
>> svbu-mpi008
>> svbu-mpi009
>> $ mpirun --mca btl tcp,self --bynode -np 2 --mca btl_tcp_if_include eth0
>> IMB-MPI1 PingPong
>> #-
Yes, it is Gigabit Ethernet. I configured Open MPI again using "./configure
--disable-mpi-f90 --disable-mpi-f77 --disable-mpi-cxx --disable-vt
--disable-io-romio --prefix=/usr --with-platform=optimized"
and ran IMB-MPI1 again with "mpirun --mca btl tcp,self -n 2 --hostfile
my_hostfile --bynode ./IMB-MPI1 PingPong"
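For reference, the key knob in the quoted runs above is btl_tcp_if_include,
which pins the TCP BTL to a single interface. A minimal sketch combining it
with the hostfile run (eth0 and my_hostfile are assumptions carried over from
the messages above); on Gigabit Ethernet the PingPong bandwidth should level
off near the wire limit of roughly 125 MB/s:

$ mpirun --mca btl tcp,self --mca btl_tcp_if_include eth0 \
    --hostfile my_hostfile --bynode -np 2 ./IMB-MPI1 PingPong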
devel-boun...@open-mpi.org wrote on 08/29/2011 06:59:49 PM:
> From: Brice Goglin
> To: Open MPI Developers
> Date: 08/29/2011 07:00 PM
> Subject: Re: [OMPI devel] known limitation or bug in hwloc?
> Sent by: devel-boun...@open-mpi.org
>
> I am playing with those aspects right now (it's plann
Thanks a lot, Ralph!
Regards,
--
Nadia Derbey
Phone: +33 (0)4 76 29 77 62
devel-boun...@open-mpi.org wrote on 08/29/2011 06:12:13 PM:
> From: Ralph Castain
> To: Open MPI Developers
> Date: 08/29/2011 06:12 PM
> Subject: Re: [OMPI devel] known limitation or bug in hwloc?
> Sent by: devel-
Maybe someone can help keep me from having to think too hard.
Let's say I want to max out my system limits. I can say this:
% mpirun --mca opal_set_max_sys_limits 1 ...
Cool.
Meanwhile, if I do this:
% setenv OMPI_MCA_opal_set_max_sys_limits 1
% mpirun ...
remote processes don't see the
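For reference, the two invocations above are meant to be equivalent ways of
setting the same MCA parameter: mpirun should pick up any OMPI_MCA_* variable
from its environment and forward it to remote processes, just as if the value
had been given on the command line. A minimal sketch (csh-style setenv as in
the message above; ./a.out is a hypothetical application):

# form 1: on the mpirun command line
% mpirun --mca opal_set_max_sys_limits 1 -np 2 ./a.out

# form 2: via the environment, which mpirun should forward to remote nodes
% setenv OMPI_MCA_opal_set_max_sys_limits 1
% mpirun -np 2 ./a.out

# ompi_info should confirm the value mpirun will see
% ompi_info --param opal all | grep set_max_sys_limits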