From: Pak Lui
Sent: Monday, February 28, 2011 11:30 AM
To: Open MPI Users
Subject: Re: [OMPI users] anybody tried OMPI with gpudirect?
Hi Brice,
You will need MLNX_OFED with GPUDirect support for this to work. I will
check whether there's a release that supports SLES and let you know.
[pak@maia001 ~]$ /sbin/modinfo ib_core
filename:
queues for parallel job.
btw, you don't need the --with-sge switch in the OMPI configure. The switch
is new in OMPI v1.3, where SGE support is no longer built by default.
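As a concrete illustration of that switch (the install prefix here is hypothetical), building a v1.3-style tree with SGE support enabled would look roughly like:

```shell
# Hypothetical build sketch: on OMPI v1.3 and later, gridengine support
# must be requested explicitly at configure time; v1.2 built it by default.
./configure --prefix=/opt/openmpi-1.3 --with-sge
make all install

# Sanity check that the gridengine components were built (assumed install path):
/opt/openmpi-1.3/bin/ompi_info | grep gridengine
```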
My $.02...
- Pak Lui
p...@penguincomputing.com
Penguin Computing
users-requ...@open-mpi.org wrote:
Date: Sat, 11 Oct 2008 07:56:02 -0400
Reuti wrote:
Hi,
On 07.07.2008 at 11:31, Romaric David wrote:
Romaric David wrote:
Pak Lui wrote:
It was fixed at one point in the trunk before v1.3 went official, but
while rolling the code from the gridengine PLM into the rsh PLM code, this
feature was left out because there were some lingering issues that I
didn't resolve, and I lost track
n your mpirun command and look for the launch
commands that mpirun uses.
Regards,
Romaric
___
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users
--
- Pak Lui
pak@sun.com
ed pre-release 1.2.6rc3, same results.
Prakashan
--
- Pak Lui
pak@sun.com
--
- Pak Lui
pak@sun.com
--
- Pak Lui
pak@sun.com
Thanks,
~Tim
--
- Pak Lui
pak@sun.com
Hi Henk,
SLIM H.A. wrote:
Dear Pak Lui
I can delete the (SGE) job with qdel -f so that it disappears from the
job list, but the application processes keep running, including the
shepherds. I have to kill them with -15.
For some reason the kill -15 does not reach mpirun. (We use
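Since `qdel -f` only removes the job from SGE's list while the processes live on, the SIGTERM has to be delivered by hand. A minimal runnable sketch of what `kill -15` does (the `sleep` stand-in is illustrative, not an actual process from this thread):

```shell
# Stand-in process playing the role of a leftover shepherd/mpirun.
sleep 60 &
pid=$!

# Deliver SIGTERM (signal 15), the same signal `kill -15 <pid>` sends,
# asking the process to exit cleanly.
kill -15 "$pid"

# Reap it and record how it ended; 128 + 15 = 143 means SIGTERM did the job.
wait "$pid" && status=0 || status=$?
echo "exit status: $status"   # prints: exit status: 143
```

On a real node one would target the leftover processes by name, e.g. `pkill -TERM -f sge_shepherd` (hypothetical invocation), escalating to `-KILL` only if SIGTERM is ignored.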
--
- Pak Lui
pak@sun.com
onent v1.2.1)
--
- Pak Lui
pak@sun.com
r computer.
--
- Pak Lui
pak@sun.com
Geoff Galitz wrote:
On Jan 24, 2007, at 7:03 AM, Pak Lui wrote:
Geoff Galitz wrote:
Hello,
On the following system:
OpenMPI 1.1.1
SGE 6.0 (with tight integration)
Scientific Linux 4.3
Dual Dual-Core Opterons
MPI jobs are oversubscribing the nodes. No matter where jobs are
launched by the scheduler, they always stack up on the first node
(node00)
(not shown)
--
Thanks,
- Pak Lui
pak@sun.com
2006 10:31, Pak Lui wrote:
> Hi, I noticed your prefix is set to the lib dir; can you try without the
> lib64 part and rerun?
>
> Eric Thibodeau wrote:
> > Hello everyone,
> >
> > Well, first off, I hope this problem I am reporting is of some
> > validity,
1: ./mspawn2: MPI_APPNUM = 1
Password:
orted: Command not found.
^C^\Quit
--
Thanks,
- Pak Lui
to call tm_init again?
If you are curious about the implementation for PBS, you can download
the source from openpbs.org (OpenPBS source: v2.3.16/src/lib/Libifl/tm.c).
--
Thanks,
- Pak Lui
pak@sun.com