[OMPI users] -output-filename 1234 versus --mca orte_output_filename 1234

2012-10-04 Thread Sébastien Boisvert
Hi, Is there any difference in the code path between mpiexec -n 1 -output-filename 1234 ./a.out and mpiexec -n 1 --mca orte_output_filename 1234 ./a.out ?

Re: [OMPI users] About MPI_TAG_UB

2012-09-28 Thread Sébastien Boisvert
On 28/09/12 10:50 AM, Jeff Squyres wrote: > On Sep 28, 2012, at 10:38 AM, Sébastien Boisvert wrote: > >> 1.5 us is very good. But I get 1.5 ms with shared queues (see above). > > Oh, I mis-read (I blame it on jet-lag...). > > Yes, that seems way too high. > > Y

Re: [OMPI users] About MPI_TAG_UB

2012-09-28 Thread Sébastien Boisvert
From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] >> On Behalf Of Sébastien Boisvert >> Sent: Friday, September 28, 2012 3:09 PM >> To: us...@open-mpi.org >> Subject: Re: [OMPI users] About MPI_TAG_UB >> >> Hello, >> >> My application ha

Re: [OMPI users] About MPI_TAG_UB

2012-09-28 Thread Sébastien Boisvert
Hello, On 28/09/12 10:00 AM, Jeff Squyres wrote: > On Sep 28, 2012, at 9:50 AM, Sébastien Boisvert wrote: > >> I did not know about shared queues. >> >> It does not run out of memory. ;-) > > It runs out of *registered* memory, which could be far less than your

Re: [OMPI users] About MPI_TAG_UB

2012-09-28 Thread Sébastien Boisvert
Just out of curiosity, does Open-MPI heavily utilize negative values internally for user-provided MPI tags ? If the negative tags are internal to Open-MPI, my code will not touch these private variables, right ? Sébastien On 28/09/12 08:59 AM, Jeff Squyres wrote: > On Sep 27, 2012, at 7:22 PM, Sébastien

Re: [OMPI users] About MPI_TAG_UB

2012-09-28 Thread Sébastien Boisvert
d Communication > Rechen- und Kommunikationszentrum der RWTH Aachen > Seffenter Weg 23, D 52074 Aachen (Germany) > >> -----Original Message----- >> From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] >> On Behalf Of Sébastien Boisvert >> Sent

[OMPI users] About MPI_TAG_UB

2012-09-27 Thread Sébastien Boisvert
ant ? Thanks ! *** Sébastien Boisvert Ph.D. student http://boisvert.info/
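
For reference, a minimal sketch (not taken from the thread) of querying the MPI_TAG_UB attribute on MPI_COMM_WORLD with MPI_Comm_get_attr; the MPI standard only guarantees that this upper bound is at least 32767:

  #include <mpi.h>
  #include <stdio.h>

  int main(int argc, char **argv)
  {
      MPI_Init(&argc, &argv);

      /* MPI_TAG_UB is a predefined attribute attached to MPI_COMM_WORLD. */
      int *tag_ub = NULL;
      int flag = 0;
      MPI_Comm_get_attr(MPI_COMM_WORLD, MPI_TAG_UB, &tag_ub, &flag);

      if (flag)
          printf("MPI_TAG_UB = %d\n", *tag_ub);

      MPI_Finalize();
      return 0;
  }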

[OMPI users] About the Open-MPI point-to-point messaging layers

2012-06-30 Thread Sébastien Boisvert
know. New code should only handle the case where the two MPI processes are on different nodes, right ? Thank you ! Sébastien Boisvert p.s.: I found the silly names here -> http://www.open-mpi.org/community/lists/devel/2008/05/3925.php ;)

Re: [OMPI users] Performance scaled messaging and random crashes

2012-06-30 Thread Sébastien Boisvert
: Yes, PSM is the native transport for InfiniPath. It is faster than the InfiniBand verbs support on the same hardware. What version of Open MPI are you using? On Jun 28, 2012, at 10:03 PM, Sébastien Boisvert wrote: Hello, I am getting random crashes (segmentation faults) on a super

Re: [OMPI users] Performance scaled messaging and random crashes

2012-06-29 Thread Sébastien Boisvert
: parameter "mtl_psm_path_query" (current value: , data source: default value) MCA mtl: parameter "mtl_psm_priority" (current value: <0>, data source: default value) Thank you. Sébastien Boisvert Jeff Squyres wrote: The Open MPI 1.4 series is now

Re: [OMPI users] Performance scaled messaging and random crashes

2012-06-29 Thread Sébastien Boisvert
MCA mtl: parameter "mtl_psm_ib_service_id" (current value: <0x10001175>, data source: default value) MCA mtl: parameter "mtl_psm_path_query" (current value: , data source: default value) MCA mtl: parameter "mtl_psm_priorit

Re: [OMPI users] Performance scaled messaging and random crashes

2012-06-29 Thread Sébastien Boisvert
support on the same hardware. What version of Open MPI are you using? On Jun 28, 2012, at 10:03 PM, Sébastien Boisvert wrote: Hello, I am getting random crashes (segmentation faults) on a supercomputer (Guillimin) using 3 nodes with 12 cores per node. The same program (Ray) runs without any

[OMPI users] Performance scaled messaging and random crashes

2012-06-28 Thread Sébastien Boisvert
mpiexec -n 36 -output-filename psm-bug-2012-06-26-hotfix.1 \ --mca mtl ^psm \ Ray -k 31 \ -o psm-bug-2012-06-26-hotfix.1 \ -p \ data-for-system-tests/ecoli-MiSeq/MiSeq_Ecoli_MG1655_110527_R1.fastq \ data-for-system-tests/ecoli-MiSeq/MiSeq_Ecoli_MG1655_110527_R2.fastq Sébastien Boisvert

Re: [OMPI users] RE : RE : Latency of 250 microseconds with Open-MPI 1.4.3, Mellanox Infiniband and 256 MPI ranks

2011-11-09 Thread Sébastien Boisvert
implement directly in Open-MPI as a component ? Sébastien Boisvert http://boisvert.info On 26/09/11 08:46 AM, Yevgeny Kliteynik wrote: On 26-Sep-11 11:27 AM, Yevgeny Kliteynik wrote: On 22-Sep-11 12:09 AM, Jeff Squyres wrote: On Sep 21, 2011, at 4:24 PM, Sébastien Boisvert wrote:

[OMPI users] RE : RE : Latency of 250 microseconds with Open-MPI 1.4.3, Mellanox Infiniband and 256 MPI ranks

2011-09-21 Thread Sébastien Boisvert
sers > Subject: Re: [OMPI users] RE : Latency of 250 microseconds with Open-MPI > 1.4.3, Mellanox Infiniband and 256 MPI ranks > > On Sep 21, 2011, at 3:17 PM, Sébastien Boisvert wrote: > >> Meanwhile, I contacted some people at SciNet, which is also part of Compute >

[OMPI users] RE : Latency of 250 microseconds with Open-MPI 1.4.3, Mellanox Infiniband and 256 MPI ranks

2011-09-21 Thread Sébastien Boisvert
Sent: September 20, 2011 08:14 > To: Open MPI Users > Cc: Sébastien Boisvert > Subject: Re: [OMPI users] Latency of 250 microseconds with Open-MPI 1.4.3, > Mellanox Infiniband and 256 MPI ranks > > Hi Sébastien, > > If I understand you correctly, you are running your

[OMPI users] RE : MPI hangs on multiple nodes

2011-09-19 Thread Sébastien Boisvert
Hello, Is it safe to re-use the same buffer (variable A) for MPI_Send and MPI_Recv given that MPI_Send may be eager depending on the MCA parameters ? > > > Sébastien > > From: users-boun...@open-mpi.org [users-boun...@open-mpi.org] on behalf of >
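
Not an answer from the thread, but a minimal sketch of the related MPI_Sendrecv_replace call, the standard routine that explicitly allows one buffer to be both sent from and received into (the even/odd rank pairing here is purely illustrative):

  #include <mpi.h>
  #include <stdio.h>

  int main(int argc, char **argv)
  {
      MPI_Init(&argc, &argv);

      int rank, size;
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      MPI_Comm_size(MPI_COMM_WORLD, &size);

      /* Illustrative pairing: rank 0 <-> 1, 2 <-> 3, and so on. */
      int partner = (rank % 2 == 0) ? rank + 1 : rank - 1;
      int buffer = rank;  /* the same variable is sent and then overwritten */

      if (partner >= 0 && partner < size) {
          MPI_Sendrecv_replace(&buffer, 1, MPI_INT, partner, 0,
                               partner, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
          printf("rank %d received %d\n", rank, buffer);
      }

      MPI_Finalize();
      return 0;
  }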

[OMPI users] RE : Problems with MPI_Init_Thread(...)

2011-09-19 Thread Sébastien Boisvert
Hello, You need to call MPI_Init before calling MPI_Init_thread. According to http://cw.squyres.com/columns/2004-02-CW-MPI-Mechanic.pdf (Past MPI Mechanic Columns written by Jeff Squyres), there are only 3 functions that can be called before MPI_Init, and they are: - MPI_Initialized -
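
For reference (not from the thread), a minimal sketch of requesting a thread support level with MPI_Init_thread and checking what the library actually provides; note that the MPI standard also allows MPI_Init_thread to be used on its own in place of MPI_Init:

  #include <mpi.h>
  #include <stdio.h>

  int main(int argc, char **argv)
  {
      int provided = MPI_THREAD_SINGLE;

      /* Request full multi-threading; the library may grant a lower level. */
      MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);

      if (provided < MPI_THREAD_MULTIPLE)
          printf("warning: only thread support level %d is provided\n", provided);

      MPI_Finalize();
      return 0;
  }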

[OMPI users] Latency of 250 microseconds with Open-MPI 1.4.3, Mellanox Infiniband and 256 MPI ranks

2011-09-17 Thread Sébastien Boisvert
Mellanox HCAs ? Thank you for your time. Sébastien Boisvert
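
Not part of the original message, but for context on how a point-to-point latency figure like 250 microseconds is usually measured, here is a minimal ping-pong sketch between ranks 0 and 1 (message size and iteration count are arbitrary):

  #include <mpi.h>
  #include <stdio.h>

  int main(int argc, char **argv)
  {
      MPI_Init(&argc, &argv);

      int rank;
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);

      const int iterations = 10000;  /* arbitrary */
      char byte = 0;                 /* 1-byte payload */

      MPI_Barrier(MPI_COMM_WORLD);
      double start = MPI_Wtime();

      for (int i = 0; i < iterations; i++) {
          if (rank == 0) {
              MPI_Send(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
              MPI_Recv(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
          } else if (rank == 1) {
              MPI_Recv(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
              MPI_Send(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
          }
      }

      double elapsed = MPI_Wtime() - start;
      if (rank == 0)
          printf("one-way latency: %f microseconds\n",
                 elapsed * 1e6 / (2.0 * iterations));

      MPI_Finalize();
      return 0;
  }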