Re: [OMPI users] Openmpi SGE and BLACS

2012-01-15 Thread Conn ORourke
Found the problem. I had accidentally linked to BLACS built with mpich, not openmpi. Cheers, Conn From: Conn ORourke To: "us...@open-mpi.org" ; "terry.don...@oracle.com" Sent: Saturday, 14 January 2012, 17:42 Subject: Re: [OMPI us
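One way to catch this kind of mismatch before linking is to inspect which MPI the BLACS archive actually references and compare it with what the Open MPI wrappers will link against. A sketch with assumed install paths and library names, not taken from the original posts:

  nm /opt/blacs/lib/libblacs.a | grep -i mpi | head   # assumed path; look at the MPI symbols referenced
  mpif77 --showme                                     # Open MPI wrapper; shows the link line actually used

MPICH-internal symbols (e.g. MPIR_*) in the first listing while the wrapper points at Open MPI would indicate the mismatch Conn describes; linking against a BLACS built with the matching MPI is what fixed it here.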

Re: [OMPI users] Openmpi SGE and BLACS

2012-01-14 Thread Conn ORourke
Sent: Friday, 13 January 2012, 13:21 Subject: Re: [OMPI users] Openmpi SGE and BLACS Do you have a stack trace of where exactly things are seg faulting in blacs_pinfo? --td On 1/13/2012 8:12 AM, Conn ORourke wrote: Dear Openmpi Users, I am reserving several processors with SGE upon which

Re: [OMPI users] Openmpi SGE and BLACS

2012-01-13 Thread TERRY DONTJE
Do you have a stack trace of where exactly things are seg faulting in blacs_pinfo? --td On 1/13/2012 8:12 AM, Conn ORourke wrote: Dear Openmpi Users, I am reserving several processors with SGE upon which I want to run a number of openmpi jobs, all of which individually (and combined) use less tha
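A generic way to get the stack trace Terry asks for (a sketch, not from the thread; the binary name is a placeholder):

  ulimit -c unlimited            # allow core files in the job's shell
  mpirun -np 4 ./blacs_test      # reproduce the segfault in blacs_pinfo
  gdb ./blacs_test core          # then 'bt' at the gdb prompt prints the stack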

[OMPI users] Openmpi SGE and BLACS

2012-01-13 Thread Conn ORourke
Dear Openmpi Users, I am reserving several processors with SGE upon which I want to run a number of openmpi jobs, all of which individually (and combined) use less than the reserved number of processors. The code I am using uses BLACS, and when blacs_pinfo is called I get a seg fault. If the co
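A hypothetical job script illustrating the pattern Conn describes: reserve a block of slots with SGE and start several smaller Open MPI runs inside that single allocation. PE name, slot counts, and binaries are illustrative only.

  #!/bin/bash
  #$ -pe orte 8
  #$ -cwd
  mpirun -np 4 ./job_a &   # each run uses part of the reserved slots
  mpirun -np 4 ./job_b &
  wait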

Re: [OMPI users] OpenMPI & SGE: bash errors at mpirun

2010-05-02 Thread Reuti
On 27.04.2010 at 16:57, Edmund Sumbar wrote: On Tue, 27 Apr 2010, Frederik Himpe wrote: OpenMPI is installed in its own prefix (/shared/apps/openmpi/gcc-4.4/1.4.1), and can be loaded by the environment module (http://modules.sourceforge.net/) openmpi. Now I can successfully run this pe job:

Re: [OMPI users] OpenMPI & SGE: bash errors at mpirun

2010-04-27 Thread Edmund Sumbar
On Tue, 27 Apr 2010, Frederik Himpe wrote: OpenMPI is installed in its own prefix (/shared/apps/openmpi/gcc-4.4/1.4.1), and can be loaded by the environment module (http://modules.sourceforge.net/) openmpi. Now I can successfully run this pe job: #!/bin/bash #$ -N test #$ -q all.q #$ -pe openm

Re: [OMPI users] OpenMPI & SGE: bash errors at mpirun

2010-04-27 Thread Dave Love
Frederik Himpe writes: > bash: module: line 1: syntax error: unexpected end of file > bash: error importing function definition for `module' It's nothing to do with open-mpi -- the job hasn't even started executing at that point. Consult the archives of the SGE users list and the issue tracker.
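A commonly suggested workaround for the failed import of the `module` shell function (a sketch assuming environment-modules is installed in its conventional location; this is not the resolution posted in this thread): source the modules init script explicitly in the job script instead of relying on the exported function.

  #!/bin/bash
  #$ -N test
  #$ -pe openmpi 4
  source /etc/profile.d/modules.sh    # conventional path; may differ per site
  module add openmpi/gcc-4.4
  mpirun ./a.out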

Re: [OMPI users] OpenMPI & SGE: bash errors at mpirun

2010-04-27 Thread Frederik Himpe
On Tue, 2010-04-27 at 07:52 -0600, Ralph Castain wrote: > Looks to me like you have an error in the openmpi module file... I cannot trigger this error by running module add openmpi/gcc-4.4, so I don't have the feeling the module file in itself is erroneous. Just in case, this is what it looks lik

Re: [OMPI users] OpenMPI & SGE: bash errors at mpirun

2010-04-27 Thread Ralph Castain
Looks to me like you have an error in the openmpi module file... On Apr 27, 2010, at 6:38 AM, Frederik Himpe wrote: > I'm using SGE 6.1 and OpenMPI 1.4.1 built with gridengine support. > > I've got this parallel environment defined in SGE: > > pe_name openmpi > slots 100 >

[OMPI users] OpenMPI & SGE: bash errors at mpirun

2010-04-27 Thread Frederik Himpe
I'm using SGE 6.1 and OpenMPI 1.4.1 built with gridengine support. I've got this parallel environment defined in SGE: pe_name openmpi slots 100 user_lists NONE xuser_lists NONE start_proc_args /bin/true stop_proc_args /bin/true allocation_rule $fill_up co
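For reference, a tight-integration PE for Open MPI typically looks roughly like the following (a sketch with commonly recommended values, not Frederik's exact configuration); control_slaves TRUE is what lets Open MPI start its daemons through qrsh on the slave hosts.

  qconf -sp openmpi
  pe_name            openmpi
  slots              100
  user_lists         NONE
  xuser_lists        NONE
  start_proc_args    /bin/true
  stop_proc_args     /bin/true
  allocation_rule    $fill_up
  control_slaves     TRUE
  job_is_first_task  FALSE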

Re: [OMPI users] OpenMPI+SGE tight integration works on E6600 core duo systems but not on Q9550 quads

2009-07-08 Thread Lengyel, Florian
This was addressed to the Open MPI list; on the SGE list you suggested changing the pe allocation rule from $fill_up to $pe_slots; the pe is now [flengyel@nept OPENMPI]$ qconf -sp ompi pe_name ompi slots 999 user_lists Research xuser_lists NONE start_proc_args
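A hedged illustration of that change (PE name as in the thread; the edit itself is a sketch): $fill_up lets SGE spread a job's slots across hosts, while $pe_slots forces all of a job's slots onto a single host.

  qconf -mp ompi      # opens the PE in an editor; change
  # allocation_rule   $fill_up
  # to
  # allocation_rule   $pe_slots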

Re: [OMPI users] OpenMPI+SGE tight integration works on E6600 core duo systems but not on Q9550 quads

2009-07-08 Thread Lengyel, Florian
-Original Message- From: users-boun...@open-mpi.org on behalf of rahmani Sent: Wed 7/8/2009 1:58 AM To: Open MPI Users Subject: Re: [OMPI users] OpenMPI+SGE tight integration works on E6600 core duo systems but not on Q9550 quads ... Hi in your job file don't use "mpiru

Re: [OMPI users] OpenMPI+SGE tight integration works on E6600 core duo systems but not on Q9550 quads

2009-07-08 Thread rahmani
- Original Message - From: "Florian Lengyel" To: us...@open-mpi.org Sent: Tuesday, July 7, 2009 4:12:22 PM (GMT-0500) America/New_York Subject: [OMPI users] OpenMPI+SGE tight integration works on E6600 core duo systems but not on Q

Re: [OMPI users] OpenMPI+SGE tight integration works on E6600 core duo systems but not on Q9550 quads

2009-07-07 Thread Reuti
Hi, On 07.07.2009 at 22:12, Lengyel, Florian wrote: Hi, I may have overlooked something in the archives (not to mention Googling)--if so I apologize, however I have been unable to find info on this particular problem. OpenMPI+SGE tight integration works on E6600 core duo systems but not

[OMPI users] OpenMPI+SGE tight integration works on E6600 core duo systems but not on Q9550 quads

2009-07-07 Thread Lengyel, Florian
Hi, I may have overlooked something in the archives (not to mention Googling)--if so I apologize, however I have been unable to find info on this particular problem. OpenMPI+SGE tight integration works on E6600 core duo systems but not on Q9550 quads. Could use some troubleshooting assistance.
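A generic first troubleshooting step for this kind of host-specific failure (a sketch, not from the thread; the binary is a placeholder) is to print what SGE actually allocated before mpirun runs, since under tight integration Open MPI inherits exactly this list:

  echo "NSLOTS=$NSLOTS"
  cat "$PE_HOSTFILE"    # hosts and slot counts Open MPI will inherit from SGE
  mpirun ./a.out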

Re: [OMPI users] openmpi+sge

2008-10-03 Thread Reuti
On 03.10.2008 at 10:46, Jaime Perea wrote: Hello again. Since I already had a 6.1 version of the sge I reverted to it and included the hacks (ssh, sshd -i and qlogin_wrap), and in this way both the interactive qsh and qrsh and the batch qsub worked with openmpi. For me this is a solution, but I'm

Re: [OMPI users] openmpi+sge

2008-10-03 Thread Jaime Perea
Hello again. Since I already had a 6.1 version of the sge I reverted to it and included the hacks (ssh, sshd -i and qlogin_wrap), and in this way both the interactive qsh and qrsh and the batch qsub worked with openmpi. For me this is a solution, but I'm still curious about what was going on in 6.2.
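The "hacks" Jaime refers to are usually these ssh-based entries in the SGE global configuration (a sketch; exact wrapper paths vary by site and SGE version):

  qconf -mconf    # then set, for example:
  # rsh_command      /usr/bin/ssh
  # rsh_daemon       /usr/sbin/sshd -i
  # rlogin_command   /usr/bin/ssh
  # rlogin_daemon    /usr/sbin/sshd -i
  # qlogin_command   /path/to/qlogin_wrapper   # site-specific wrapper script
  # qlogin_daemon    /usr/sbin/sshd -i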

Re: [OMPI users] openmpi+sge

2008-10-02 Thread Rolf Vandevaart
On 10/02/08 11:18, Reuti wrote: On 02.10.2008 at 16:51, Jaime Perea wrote: Hi builtin, do I have to change them to ssh and sshd as in sge 6.1? I always used only rsh, as ssh doesn't provide a Tight Integration with correct accounting (unless you compiled SGE with -tight-ssh on your own).

Re: [OMPI users] openmpi+sge

2008-10-02 Thread Jaime Perea
Hi Well, let's try, I downloaded binaries for the sge, I was thinking of rsh, I'm going to try it after the old ssh/sshd settings and before trying to compile the sge... which I guess is not an easy task. Regards -- Jaime Perea On Thursday, 2 October 2008, Reuti wrote: > Am 02.1

Re: [OMPI users] openmpi+sge

2008-10-02 Thread Reuti
On 02.10.2008 at 16:51, Jaime Perea wrote: Hi builtin, do I have to change them to ssh and sshd as in sge 6.1? I always used only rsh, as ssh doesn't provide a Tight Integration with correct accounting (unless you compiled SGE with -tight-ssh on your own). But it would be worth a try w

Re: [OMPI users] openmpi+sge

2008-10-02 Thread Jaime Perea
Hi builtin, do I have to change them to ssh and sshd as in sge 6.1? Thanks again -- Jaime Perea On Thursday, 2 October 2008, Reuti wrote: > On 02.10.2008 at 16:12, Jaime Perea wrote: > > Hi again, thanks for the answer > > > > Actually I took the definition of the pe from the openmpi >

Re: [OMPI users] openmpi+sge

2008-10-02 Thread Reuti
On 02.10.2008 at 16:12, Jaime Perea wrote: Hi again, thanks for the answer Actually I took the definition of the pe from the openmpi webpage, in my case qconf -sp orte pe_name orte slots 24 user_lists NONE xuser_lists NONE start_proc_args /bin/true st

Re: [OMPI users] openmpi+sge

2008-10-02 Thread Jaime Perea
Hi again, thanks for the answer Actually I took the definition of the pe from the openmpi webpage, in my case qconf -sp orte pe_name orte slots 24 user_lists NONE xuser_lists NONE start_proc_args /bin/true stop_proc_args /bin/true allocation_rule

Re: [OMPI users] openmpi+sge

2008-10-02 Thread Reuti
Hi, On 02.10.2008 at 15:37, Jaime Perea wrote: Hello, I am having some problems with a combination of openmpi+sge6.2 Currently I'm working with the 1.3a1r19666 openmpi release and the AFAIK, you have to enable SGE support in Open MPI 1.3 during its compilation. myrinet gm libraries (2
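Enabling that support at build time looks roughly like this (a sketch; the prefix is a placeholder):

  ./configure --prefix=/opt/openmpi-1.3 --with-sge
  make all install
  ompi_info | grep gridengine    # should list the gridengine components if support was built in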

[OMPI users] openmpi+sge

2008-10-02 Thread Jaime Perea
Hello, I am having some problems with a combination of openmpi+sge6.2 Currently I'm working with the 1.3a1r19666 openmpi release and the myrinet gm libraries (2.1.19) but the problem was the same with the prior 1.3 version. In short, I'm able to send jobs to a queue via qrsh, more or less this wa

Re: [OMPI users] OpenMPI + SGE Problem

2007-10-17 Thread Vittorio Zaccaria
Dear Reuti and Harvey, I just tried setting control_slaves to TRUE and it works! Thank you very much, Vittorio On Oct 17, 2007, at 7:48 PM, Reuti wrote: Hi, On 17.10.2007 at 18:49, Vittorio Zaccaria wrote: I am just trying to run a very simple application using mpirun in an SGE 6 e
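The fix Vittorio describes amounts to the following (PE name as in the thread; the qconf edit itself is a sketch): allow SGE to start the slave processes that Open MPI launches via qrsh.

  qconf -mp parallel    # opens the PE in an editor; set
  # control_slaves   TRUE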

Re: [OMPI users] OpenMPI + SGE Problem

2007-10-17 Thread Reuti
Hi, On 17.10.2007 at 18:49, Vittorio Zaccaria wrote: I am just trying to run a very simple application using mpirun in an SGE 6 environment. The job is called 'example' and it is submitted to the SGE environment with the following command: > qsub -pe parallel 2 example where 'parallel'

[OMPI users] OpenMPI + SGE Problem

2007-10-17 Thread Vittorio Zaccaria
Hello, I am just trying to run a very simple application using mpirun in an SGE 6 environment. The job is called 'example' and it is submitted to the SGE environment with the following command: > qsub -pe parallel 2 example where 'parallel' is a working parallel environment. 'example' is