Re: [OMPI users] Can't run with more than two nodes in the hostfile

2014-07-16 Thread Ricardo Fernández-Perea
46 AM, Ralph Castain wrote: > Forgive me, but I am now fully confused - case 1 and case 3 appear identical to me, except for the debug-daemons flag on case 3. > On Jul 15, 2014, at 7:56 AM, Ricardo Fernández-Perea <rfernandezpe...@gmail.com> wrote: >> What I me

Re: [OMPI users] Can't run with more than two nodes in the hostfile

2014-07-15 Thread Ricardo Fernández-Perea
What I mean by "another mpi process": I have 4 nodes where there are processes that use MPI and were initiated with mpirun from the control node, already running. When I run the command against any of those nodes it executes, but when I do it against any other node it fails if the no_tree_spawn flag

Re: [OMPI users] Can't run with more than two nodes in the hostfile

2014-07-15 Thread Ricardo Fernández-Perea
On Jul 14, 2014, at 10:27 AM, Ralph Castain wrote: > I confess I haven't tested no_tree_spawn in ages, so it is quite possible it has suffered bit rot. I can try to take a look at it in a bit. > On Jul 14, 2014, at 10:13 AM, Ricardo Fernández-Perea <rfernan

Re: [OMPI users] Can't run with more than two nodes in the hostfile

2014-07-14 Thread Ricardo Fernández-Perea
ern, thus requiring that we be able to ssh from one compute node to another. You can change that to a less scalable direct mode by adding --mca plm_rsh_no_tree_spawn 1 to the cmd line. > On Jul 14, 2014, at 9:21 AM, Ricardo Fernández-Perea <rfe
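For illustration only (the application name and process count below are placeholders, not taken from this thread), the suggested flag is simply added to the mpirun invocation reported in the original message:

    /opt/openmpi/bin/mpirun --mca plm_rsh_no_tree_spawn 1 \
        --mca mtl mx --mca pml cm -hostfile hostfile -np 16 ./app

With this flag mpirun launches every daemon directly from the node it runs on instead of through the spawn tree, so ssh access is only needed from the control node to each compute node.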

[OMPI users] Can't run with more than two nodes in the hostfile

2014-07-14 Thread Ricardo Fernández-Perea
I'm trying to update to openMPI 1.8.1 through ssh and Myrinet, running a command such as /opt/openmpi/bin/mpirun --verbose --mca mtl mx --mca pml cm -hostfile hostfile -np 16. When the hostfile contains only two nodes, as host1 slots=8 max-slots=8 host2 slots=8 max-slots=8, it runs perfectly, but when the
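Laid out for readability (the hostfile entries and mpirun options are taken from the report above; the application name is a placeholder):

    # hostfile (the two-node case that works)
    host1 slots=8 max-slots=8
    host2 slots=8 max-slots=8

    /opt/openmpi/bin/mpirun --verbose --mca mtl mx --mca pml cm \
        -hostfile hostfile -np 16 ./app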

Re: [OMPI users] Myrinet optimization with OMP1.3 and macosX

2009-05-07 Thread Ricardo Fernández-Perea
endpoints to 16.) On Wed, May 6, 2009 at 7:31 PM, Scott Atchley wrote: > On May 4, 2009, at 10:54 AM, Ricardo Fernández-Perea wrote: >> I finally have the opportunity to run the imb-3.2 benchmark over Myrinet. I am running in a cluster of 16 Xserve nodes connected with myrinet 15

Re: [OMPI users] Myrinet optimization with OMP1.3 and macosX

2009-05-06 Thread Ricardo Fernández-Perea
n Costescu <bogdan.coste...@iwr.uni-heidelberg.de> wrote: > On Mon, 4 May 2009, Ricardo Fernández-Perea wrote: >> any idea where I should look for the cause? > Can you try adding to the mpirun/mpiexec command line '--mca mtl mx --mca pml cm' to specify usage of t
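An illustrative invocation of that suggestion (the binary name and process count are placeholders):

    mpirun --mca mtl mx --mca pml cm -np 8 ./app

The two parameters explicitly select the cm point-to-point layer with the MX matching transport layer, rather than letting Open MPI pick its default path.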

Re: [OMPI users] Myrinet optimization with OMP1.3 and macosX

2009-05-04 Thread Ricardo Fernández-Perea
process test is always running 1 process per node. The following tests (pingpong, pingping, sendrecv, exchange) present a strong drop in performance at the 64k packet size. Any idea where I should look for the cause? Ricardo On Fri, Mar 20, 2009 at 7:32 PM, Ricardo Fernández-Perea < rfernande

Re: [OMPI users] Myrinet optimization with OMP1.3 and macosX

2009-03-20 Thread Ricardo Fernández-Perea
hen I came back. Ricardo On Fri, Mar 20, 2009 at 5:10 PM, Scott Atchley wrote: > On Mar 20, 2009, at 11:33 AM, Ricardo Fernández-Perea wrote: >> These are the results initially: Running 1000 iterations. Length Latency(us) Bandwidth(MB/s) 0

Re: [OMPI users] Myrinet optimization with OMP1.3 and macosX

2009-03-20 Thread Ricardo Fernández-Perea
On Fri, Mar 20, 2009 at 2:21 PM, Scott Atchley wrote: > On Mar 20, 2009, at 5:59 AM, Ricardo Fernández-Perea wrote: >> Hello, I am running DL_POLY on various 8-processor Xserves with a Myrinet network, using mx-1.2.7. While I keep in the

[OMPI users] Myrinet optimization with OMP1.3 and macosX

2009-03-20 Thread Ricardo Fernández-Perea
Hello, I am running DL_POLY on various 8-processor Xserves with a Myrinet network, using mx-1.2.7. While I keep the processes on the same node they scale reasonably well, but the moment I hit the network ... I would like to try to maximize the MX network before trying to touch the program code. Is the

[OMPI users] Xgrid performance (it choose tcp when it should choose sm)

2009-03-13 Thread Ricardo Fernández-Perea
On the same machine the same job takes a lot more time when using Xgrid than with any other launch method, even though all the orted processes run on the same node. When using Xgrid it uses tcp instead of sm; is that expected, or do I have a problem? Ricardo
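For illustration only (the application name is a placeholder, not from the original message), one way to compare the two transports on a single node is to force the BTL selection explicitly:

    mpirun --mca btl self,sm -np 8 ./app     # shared memory only; should start when all ranks share a node
    mpirun --mca btl self,tcp -np 8 ./app    # tcp only, for a timing comparison

If the sm-only run starts and is clearly faster, the slowdown under Xgrid comes from transport selection rather than from the application itself.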

[OMPI users] Fwd: more XGrid Problems with openmpi1.2.9 (error find)

2009-02-27 Thread Ricardo Fernández-Perea
Found the problem: in orte_pls_xgrid_terminate_orteds, orte_pls_base_get_active_daemons is being called as orte_pls_base_get_active_daemons(&daemons, jobid) when the correct way of calling it is orte_pls_base_get_active_daemons(&daemons, jobid, attrs). Yours, Ricardo. Hi It seems to me more like time

[OMPI users] more XGrid Problems with openmpi1.2.9

2009-02-27 Thread Ricardo Fernández-Perea
Hi, it seems to me more like a timing issue. All the runs end with something similar to: Exception Type: EXC_BAD_ACCESS (SIGSEGV) Exception Codes: KERN_INVALID_ADDRESS at 0x45485308 Crashed Thread: 0 Thread 0 Crashed: 0 libSystem.B.dylib 0x95208f04 strcmp + 84 1 libopen-rte.

Re: [OMPI users] openmpi 1.2.9 with Xgrid support more information

2009-02-26 Thread Ricardo Fernández-Perea
Can anyone spot any reason for concern with this change? Yours, Ricardo. On Thu, Feb 26, 2009 at 10:40 AM, Ricardo Fernández-Perea < rfernandezpe...@gmail.com> wrote: > Yes Brian, it's in Leopard. Thanks for your interest. Ricardo > On Wed, Feb 2

Re: [OMPI users] openmpi 1.2.9 with Xgrid support more information

2009-02-26 Thread Ricardo Fernández-Perea
've been hiding trying to finish my dissertation the last couple of months. I can't offer much advice without digging into it in more detail than I have time to do in the near future. Brian. On Wed, 25 Feb 2009, Ricardo Fernández-Perea wrote:

[OMPI users] openmpi 1.2.9 with Xgrid support more information

2009-02-25 Thread Ricardo Fernández-Perea
Hi, I have checked the crash log; the result is below. If I am reading it and following the mpirun code correctly, the release of the last mca_pls_xgrid_component.client by orte_pls_xgrid_finalize causes a call to the dealloc method of PlsXGridClient, where a [connection finalize] is called that end

[OMPI users] openmpi 1.2.9 with Xgrid support

2009-02-24 Thread Ricardo Fernández-Perea
Hi. Since Xgrid support is broken at the moment in 1.3, I am trying to install 1.2.9 on an Xserve cluster. I am using the gcc compilers downloaded from http://hpc.sourceforge.net/. To be sure not to mix compilers I am using the following configure: ./configure --prefix=/opt/openmpi CC=/usr/lo
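The configure line is truncated in the archive; purely as an illustration, a complete invocation of this form might look like the sketch below (the compiler paths are placeholders for the hpc.sourceforge.net toolchain, which installs under /usr/local, and are not taken from the message):

    ./configure --prefix=/opt/openmpi \
        CC=/usr/local/bin/gcc CXX=/usr/local/bin/g++ \
        F77=/usr/local/bin/gfortran FC=/usr/local/bin/gfortran
    make all
    sudo make install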