On Wed, 9 May 2012, Jiri Polach wrote:
You might want to use a smaller number of processors than those made
available by SGE.
Thanks for replying. I can imagine that in some special cases it might be
useful to request N processors from SGE and then use M
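A minimal sketch of that idea, assuming a POSIX shell job script; `NSLOTS` is the slot count SGE exports inside a job, and the default value, the halving rule, and `./my_app` are placeholders for illustration:

```shell
#!/bin/sh
# SGE granted $NSLOTS slots; launch only M of them.
NSLOTS=${NSLOTS:-8}     # fallback default, for illustration only
M=$((NSLOTS / 2))       # hypothetical choice of M < N
echo "SGE granted $NSLOTS slots; launching $M ranks"
# mpirun -np "$M" ./my_app   # placeholder application
```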
Dear all,
is the "-np N" parameter needed for mpirun when running jobs under the SGE
environment? All examples in
http://www.open-mpi.org/faq/?category=running#run-n1ge-or-sge
show that "-np N" is used, but in my opinion it should be redundant:
mpirun should determine all parameters from SGE
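For what it's worth, a job script along those lines might look like the sketch below. The PE name "orte" and the application name are assumptions (sites name their parallel environments differently); with Open MPI's gridengine support, mpirun can take the slot count and host list from SGE itself:

```shell
#!/bin/sh
# Hypothetical SGE job script under a tight-integration PE.
#$ -pe orte 16   # "orte" is a placeholder PE name
#$ -cwd
echo "SGE set NSLOTS=${NSLOTS:-unset}"
# mpirun ./my_app                  # no -np: uses all granted slots
# mpirun -np "$NSLOTS" ./my_app    # equivalent, hence arguably redundant
```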
As a follow-up, the problem was with host name resolution: a change to the
Rocks environment broke reverse lookups for host names.
--
Ray Muno
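A quick sanity check for that kind of breakage, sketched with `getent` (here `localhost` stands in for an actual compute node name):

```shell
#!/bin/sh
# Forward lookup: name -> address.
ip=$(getent hosts localhost | awk '{ print $1; exit }')
echo "forward: localhost -> $ip"
# Reverse lookup: address -> name. Empty output here would indicate
# the broken reverse resolution described above.
getent hosts "$ip"
```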
Rolf Vandevaart wrote:
>>
>> PMGR_COLLECTIVE ERROR: unitialized MPI task: Missing required
>> environment variable: MPIRUN_RANK
>> PMGR_COLLECTIVE ERROR: PMGR_COLLECTIVE ERROR: unitialized MPI task:
>> Missing required environment variable: MPIRUN_RANK
>>
> I do not recognize these errors as
Ray Muno wrote:
Rolf Vandevaart wrote:
Ray Muno wrote:
Ray Muno wrote:
We are running a cluster using Rocks 5.0 and OpenMPI 1.2 (primarily).
Scheduling is done through SGE. MPI communication is over InfiniBand.
We also have OpenMPI 1.3 installed and receive
Ray Muno wrote:
> Tha give me
How about "That gives me"
>
> PMGR_COLLECTIVE ERROR: unitialized MPI task: Missing required
> environment variable: MPIRUN_RANK
> PMGR_COLLECTIVE ERROR: PMGR_COLLECTIVE ERROR: unitialized MPI task:
> Missing required environment variable: MPIRUN_RANK
>
>
Rolf Vandevaart wrote:
> Ray Muno wrote:
>> Ray Muno wrote:
>>
>>> We are running a cluster using Rocks 5.0 and OpenMPI 1.2 (primarily).
>>> Scheduling is done through SGE. MPI communication is over InfiniBand.
>>>
>>>
>>
>> We also have OpenMPI 1.3 installed and receive similar errors.-
Ray Muno wrote:
Ray Muno wrote:
We are running a cluster using Rocks 5.0 and OpenMPI 1.2 (primarily).
Scheduling is done through SGE. MPI communication is over InfiniBand.
We also have OpenMPI 1.3 installed and receive similar errors.-
This does sound like a problem with SGE.
We are running a cluster using Rocks 5.0 and OpenMPI 1.2 (primarily).
Scheduling is done through SGE. MPI communication is over InfiniBand.
We have been running with this setup for over nine months. Last week, all
user jobs stopped executing (the cluster load dropped to zero). Users can
schedule jobs
Hi,
On 04.11.2008 at 16:54, Sangamesh B wrote:
Hi all,
In Rocks-5.0 cluster, OpenMPI-1.2.6 comes by default. I guess it
gets installed through rpm.
# /opt/openmpi/bin/ompi_info | grep gridengine
MCA ras: gridengine (MCA v1.0, API v1.3, Component v1.2.6)
MCA pls: gridengine (MCA v1.0, API v1.3,
Hi,
On 19.02.2008 at 12:49, Neeraj Chourasia wrote:
I am facing a problem while calling mpirun in a loop when using
it with SGE. My SGE version is SGE6.1AR_snapshot3. The script I am
submitting via SGE is
I am not quite sure. Your AR (advance reservation) snapshot3 build is
fairly new, and it may be the source of the problem. I am not familiar
with this new SGE feature; I'd ask on the gridengine list about that
error message coming from execd.
Neeraj Chourasia wrote:
Hello everyone, I am facing a problem while calling mpirun in a
loop when using it with SGE. My SGE version is SGE6.1AR_snapshot3. The script I am
submitting via SGE is
let i=0
while [ $i -lt 100 ]
do echo
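The snippet is truncated, but a hedged reconstruction of the kind of loop it suggests might look like this; the loop body, iteration count handling, and application name are placeholders:

```shell
#!/bin/sh
# Run 100 iterations, one mpirun launch per iteration (as described above).
i=0
while [ $i -lt 100 ]
do
  echo "iteration $i"
  # mpirun -np "$NSLOTS" ./my_app   # placeholder application
  i=$((i + 1))
done
```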