Hi Reuti,
I've been unable to reproduce the issue so far.
Sorry for the inconvenience,
Eloi
On Tuesday 25 May 2010 11:32:44 Reuti wrote:
> Hi,
>
> On 25.05.2010 at 09:14, Eloi Gaudry wrote:
> > I do not reset any environment variable during job submission or job
> > handling. Is there a simple way to check that openmpi is working as
> > expected with SGE tight integration (such as displaying environment
> > variables, setting options on the command line, etc.)?
Hi Reuti,
I do not reset any environment variable during job submission or job handling.
Is there a simple way to check that openmpi is working as expected with SGE
tight integration (such as displaying environment variables, setting options on
the command line, etc.)?
Regards,
Eloi
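A minimal sketch of such a check, assuming a PE named "orte" (a placeholder for
the site's actual parallel environment) and the 1.3-series mpirun options
--display-allocation and --display-map (worth confirming with mpirun --help):
the job script can print what SGE granted and what Open MPI detects.

#!/bin/sh
#$ -pe orte 8        # "orte" is a placeholder; use the cluster's actual PE name
#$ -cwd
# What SGE allocated to this job
echo "NSLOTS = $NSLOTS"
echo "PE_HOSTFILE = $PE_HOSTFILE"
cat "$PE_HOSTFILE"
# What Open MPI detects and how it maps processes
/opt/openmpi-1.3.3/bin/mpirun --display-allocation --display-map hostname

With tight integration working, the allocation printed by mpirun should match
the contents of $PE_HOSTFILE, and hostname should run once per granted slot
even though no -np or hostfile is given on the command line.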
Hi Reuti,
Yes, the openmpi binaries used were built with the --with-sge option passed to
configure, and we only use those binaries on our cluster.
[eg@moe:~]$ /opt/openmpi-1.3.3/bin/ompi_info
Package: Open MPI root@moe Distribution
Open MPI: 1.3.3
[...]
MCA ras: gridengine
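A quick sanity check along those lines, sketched here (the exact component and
version strings will differ per install), is to grep the ompi_info output for
gridengine; an SGE-enabled build should list the gridengine ras component:

$ /opt/openmpi-1.3.3/bin/ompi_info | grep gridengine
MCA ras: gridengine (MCA v2.0, API v2.0, Component v1.3.3)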
Hi there,
I'm observing something strange on our cluster managed by SGE 6.2u4 when
launching a parallel computation on several nodes, using OpenMPI/SGE
tight-integration mode (OpenMPI 1.3.3). It seems that the SGE-allocated slots
are not used by OpenMPI, as if OpenMPI was doing its own round-robin [...]
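For reference, tight integration also assumes the job is submitted under a
parallel environment with control_slaves enabled, so that mpirun can start its
daemons via qrsh -inherit inside the granted slots. A sketch of a typical PE
definition follows; the name "orte" and the slot count are placeholders, and
the actual settings on the cluster may differ:

$ qconf -sp orte
pe_name            orte
slots              9999
user_lists         NONE
xuser_lists        NONE
start_proc_args    /bin/true
stop_proc_args     /bin/true
allocation_rule    $fill_up
control_slaves     TRUE
job_is_first_task  FALSE
urgency_slots      min

With control_slaves TRUE and no explicit hostfile passed to mpirun, the
processes should land only on the hosts and slot counts listed in
$PE_HOSTFILE, rather than on a schedule mpirun works out by itself.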