The osu_bibw micro-benchmark from Ohio State's OMB 3.1 suite hangs when
run over OpenMPI 1.2.5 from OFED 1.3 using the OpenIB BTL if there is
insufficient lockable memory. 128MB of lockable memory gives a hang
when the test gets to 4MB messages, while 512MB is sufficient for it
to pass. I observed
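For anyone wanting to confirm what locked-memory limit the benchmark processes actually inherit, a minimal check with getrlimit(RLIMIT_MEMLOCK) could look like the sketch below (illustrative only, not part of the original report):

/* Sketch: print the RLIMIT_MEMLOCK limit this process sees.  Launching it
 * with the same mpirun command line as the benchmark shows whether the
 * 128MB / 512MB setting actually reached the remote ranks. */
#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;

    if (getrlimit(RLIMIT_MEMLOCK, &rl) != 0) {
        perror("getrlimit(RLIMIT_MEMLOCK)");
        return 1;
    }
    if (rl.rlim_cur == RLIM_INFINITY) {
        printf("locked memory limit: unlimited\n");
    } else {
        printf("locked memory limit: %llu bytes\n",
               (unsigned long long) rl.rlim_cur);
    }
    return 0;
}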
To be clear -- this looks like a different issue than what Pasha was
reporting.
On Jul 15, 2008, at 8:55 AM, Rolf vandeVaart wrote:
Lenny, I opened a ticket for something that looks the same as this.
Maybe you can add your details to it.
https://svn.open-mpi.org/trac/ompi/ticket/1386
Rolf
I created a new trac milestone today: v1.4.
So if you have stuff that you know won't make it for v1.3, but you
definitely want it in the next version, go ahead and label it for v1.4
(as opposed to the amorphous "Future" milestone, which means "we'll do
it someday -- possibly v1.4, possibly
I opened a ticket for the bug:
https://svn.open-mpi.org/trac/ompi/ticket/1389
Ralph Castain wrote:
It looks like a new issue to me, Pasha. Possibly a side consequence of the
IOF change made by Jeff and me the other day. From what I can see, it looks
like your app was a simple "hello" - correct?
If
The UD BTL currently relies on DR for reliability, though in the near
future the UD BTL is planned to have its own reliability -- so I'm fine
with DR going away.
Andrew
Jeff Squyres wrote:
Should we .ompi_ignore dr?
It's not complete and no one wants to support it. I'm thinking that we
sho
Lenny, I opened a ticket for something that looks the same as this.
Maybe you can add your details to it.
https://svn.open-mpi.org/trac/ompi/ticket/1386
Rolf
Lenny Verkhovsky wrote:
I guess it should be here, sorry.
/home/USERS/lenny/OMPI_ORTE_18850/bin/mpirun -np 2 -H witch2,witch3
./IMB-MPI1_18850 PingPong
I guess it should be here, sorry.
/home/USERS/lenny/OMPI_ORTE_18850/bin/mpirun -np 2 -H witch2,witch3
./IMB-MPI1_18850 PingPong
#---
# Intel (R) MPI Benchmark Suite V3.0v modified by Voltaire, MPI-1 part
#-
Hi George,
I got a segv with the IMB PingPong test starting from r18850 (including trunk);
r18849 works fine.
Best Regards
Lenny.
/home/USERS/lenny/OMPI_ORTE_18850/bin/mpirun -np 2 -H witch2,witch3
./IMB-MPI1_18850 PingPong
#---
# Intel (R) MPI Benchmark Suite V3.0v modified by Voltaire, MPI-1 part
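A bare-bones ping-pong in the spirit of what IMB PingPong exercises can help narrow a regression like this down without the full benchmark. The sketch below is only illustrative -- the message size and iteration count are arbitrary choices of mine, and it is not taken from IMB or from Lenny's build:

/* Minimal two-rank ping-pong, illustrative only. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(int argc, char **argv)
{
    const int iters = 1000;
    const int bytes = 4096;              /* arbitrary payload size */
    int rank, size, i;
    char *buf;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (size < 2) {
        if (rank == 0) fprintf(stderr, "run with at least 2 ranks\n");
        MPI_Finalize();
        return 1;
    }

    buf = malloc(bytes);
    memset(buf, 0, bytes);

    for (i = 0; i < iters; i++) {
        if (rank == 0) {
            MPI_Send(buf, bytes, MPI_BYTE, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, bytes, MPI_BYTE, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, bytes, MPI_BYTE, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf, bytes, MPI_BYTE, 0, 0, MPI_COMM_WORLD);
        }
    }

    if (rank == 0) printf("ping-pong completed\n");
    free(buf);
    MPI_Finalize();
    return 0;
}

If this also crashes at r18850 but passes at r18849, the problem is not specific to IMB.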
On Jul 15, 2008, at 7:30 AM, Ralph Castain wrote:
Minor clarification: we did not test RDMACM on RoadRunner.
Just for further clarification - I did, and it wasn't a particularly good
experience. Encountered several problems, none of them overwhelming, hence
my comments.
Ah -- I didn't k
Guess what - we don't always put them out there because - tada - we don't
use them! What goes out on the backend is a stripped down version of
libraries we require. Given the huge number of libraries people provide
(looking at the bigger, beyond OMPI picture), it consumes a lot of limited
disk space
On 7/15/08 5:05 AM, "Jeff Squyres" wrote:
> On Jul 14, 2008, at 3:04 PM, Ralph H. Castain wrote:
>
>> I've been quietly following this discussion, but now feel a need to jump
>> in here. I really must disagree with the idea of building either IBCM or
>> RDMACM support by default. Neithe
On Jul 14, 2008, at 3:04 PM, Ralph H. Castain wrote:
I've been quietly following this discussion, but now feel a need to jump
in here. I really must disagree with the idea of building either IBCM or
RDMACM support by default. Neither of these has been proven to reliably
work, or to be advan
I need to check on this. You may want to look at section A3.2.3 of the spec.
If you set the first byte (network order) to 0x00, and the 2nd byte to 0x01,
then you hit a 'reserved' range that probably isn't being used currently.
If you don't care what the service ID is, you can specify 0, an
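To make the byte layout concrete: the fragment below composes a 64-bit service ID whose first two bytes in network (big-endian) order are 0x00 and 0x01, i.e. the reserved range mentioned above. The low six bytes are arbitrary example values of mine and carry no meaning from the spec or from Open MPI -- this is only a sketch:

/* Compose a service ID with a 0x00 0x01 prefix in network byte order. */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint64_t sid = ((uint64_t)0x00 << 56) |   /* byte 0 on the wire */
                   ((uint64_t)0x01 << 48) |   /* byte 1 on the wire */
                   0x0000123456789abcULL;     /* bytes 2-7: illustrative only */

    /* Written out big-endian, the leading bytes are 0x00 0x01 ... */
    printf("service id = 0x%016llx\n", (unsigned long long) sid);
    return 0;
}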
It looks like a new issue to me, Pasha. Possibly a side consequence of the
IOF change made by Jeff and me the other day. From what I can see, it looks
like your app was a simple "hello" - correct?
Yep, it is a simple hello application.
If you look at the error, the problem occurs when mpirun is
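For reference, a "simple hello" of the kind Pasha describes is only a few lines of MPI; this generic sketch (not Pasha's actual test program) is enough to make each rank produce output that mpirun has to forward:

/* Generic MPI hello world, illustrative only. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    printf("hello from rank %d of %d\n", rank, size);
    MPI_Finalize();
    return 0;
}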