Hi,
Is there any difference in the code path between
mpiexec -n 1 -output-filename 1234 ./a.out
and
mpiexec -n 1 --mca orte_output_filename 1234 ./a.out ?
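For comparison, MCA parameters can normally also be set through the environment; assuming the usual OMPI_MCA_ prefix handling, a third equivalent spelling would be:

OMPI_MCA_orte_output_filename=1234 mpiexec -n 1 ./a.out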
On 28/09/12 10:50 AM, Jeff Squyres wrote:
> On Sep 28, 2012, at 10:38 AM, Sébastien Boisvert wrote:
>
>> 1.5 us is very good. But I get 1.5 ms with shared queues (see above).
>
> Oh, I mis-read (I blame it on jet-lag...).
>
> Yes, that seems way too high.
>
> Y
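For reference, a minimal ping-pong sketch of the kind typically used to measure latencies like these, assuming two ranks on the transport under test:

/* Minimal ping-pong latency sketch: ranks 0 and 1 bounce a 1-byte
   message and report the average one-way latency. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, i;
    const int iterations = 10000;
    char byte = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double start = MPI_Wtime();
    for (i = 0; i < iterations; i++) {
        if (rank == 0) {
            MPI_Send(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    double elapsed = MPI_Wtime() - start;

    if (rank == 0)
        printf("average one-way latency: %f us\n",
               elapsed / (2.0 * iterations) * 1e6);

    MPI_Finalize();
    return 0;
}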
>> From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org]
>> On Behalf Of Sébastien Boisvert
>> Sent: Friday, September 28, 2012 3:09 PM
>> To: us...@open-mpi.org
>> Subject: Re: [OMPI users] About MPI_TAG_UB
>>
>> Hello,
>>
>> My application ha
Hello,
On 28/09/12 10:00 AM, Jeff Squyres wrote:
> On Sep 28, 2012, at 9:50 AM, Sébastien Boisvert wrote:
>
>> I did not know about shared queues.
>>
>> It does not run out of memory. ;-)
>
> It runs out of *registered* memory, which could be far less than your
Just out of curiosity, does Open-MPI make heavy use of negative values
internally for MPI tags, separate from the user-provided ones?
If the negative tags are internal to Open-MPI, my code will not touch
these private variables, right?
Sébastien
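For what it is worth, a minimal sketch of querying the MPI_TAG_UB attribute, which reports the largest tag value a program is allowed to use:

/* Query the largest valid user tag via the MPI_TAG_UB attribute. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int *tag_ub = NULL;
    int flag = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_get_attr(MPI_COMM_WORLD, MPI_TAG_UB, &tag_ub, &flag);
    if (flag)
        printf("MPI_TAG_UB = %d\n", *tag_ub);
    MPI_Finalize();
    return 0;
}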
On 28/09/12 08:59 AM, Jeff Squyres wrote:
> On Sep 27, 2012, at 7:22 PM, Sébastien
d Communication
> Rechen- und Kommunikationszentrum der RWTH Aachen
> Seffenter Weg 23, D 52074 Aachen (Germany)
>
>> -----Original Message-----
>> From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org]
>> On Behalf Of Sébastien Boisvert
>> Sent
ant ?
Thanks!
***
Sébastien Boisvert
Ph.D. student
http://boisvert.info/
know. New code should only handle the case where the two
MPI processes are on different nodes, right?
Thank you!
Sébastien Boisvert
p.s.: I found the silly names here ->
http://www.open-mpi.org/community/lists/devel/2008/05/3925.php
;)
Yes, PSM is the native transport for InfiniPath. It is faster than the
InfiniBand verbs support on the same hardware.
What version of Open MPI are you using?
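In case it helps, MTL selection is controlled with MCA parameters; assuming a standard installation, the first line below forces the PSM MTL (through the cm PML, which is what drives MTLs), while the second uses the ^ prefix to exclude it (./a.out stands in for the real application):

mpiexec -n 36 --mca pml cm --mca mtl psm ./a.out
mpiexec -n 36 --mca mtl ^psm ./a.out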
On Jun 28, 2012, at 10:03 PM, Sébastien Boisvert wrote:
Hello,
I am getting random crashes (segmentation faults) on a super
MCA mtl: parameter "mtl_psm_path_query" (current value: , data source: default value)
MCA mtl: parameter "mtl_psm_priority" (current value: <0>, data source: default value)
Thank you.
Sébastien Boisvert
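Listings like the one above can be regenerated with ompi_info; assuming the 1.4-era option syntax, something like:

ompi_info --param mtl psm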
Jeff Squyres wrote:
The Open MPI 1.4 series is now
MCA mtl: parameter "mtl_psm_ib_service_id" (current value: <0x10001175>, data source: default value)
MCA mtl: parameter "mtl_psm_path_query" (current value: , data source: default value)
MCA mtl: parameter "mtl_psm_priorit
Hello,
I am getting random crashes (segmentation faults) on a super computer
(guillimin)
using 3 nodes with 12 cores per node. The same program (Ray) runs without any
mpiexec -n 36 -output-filename psm-bug-2012-06-26-hotfix.1 \
--mca mtl ^psm \
Ray -k 31 \
-o psm-bug-2012-06-26-hotfix.1 \
-p \
data-for-system-tests/ecoli-MiSeq/MiSeq_Ecoli_MG1655_110527_R1.fastq \
data-for-system-tests/ecoli-MiSeq/MiSeq_Ecoli_MG1655_110527_R2.fastq
Sébastien Boisvert
implement directly in Open-MPI as a component?
Sébastien Boisvert
http://boisvert.info
On 26/09/11 08:46 AM, Yevgeny Kliteynik wrote:
On 26-Sep-11 11:27 AM, Yevgeny Kliteynik wrote:
On 22-Sep-11 12:09 AM, Jeff Squyres wrote:
On Sep 21, 2011, at 4:24 PM, Sébastien Boisvert wrote:
sers
> Subject: Re: [OMPI users] RE : Latency of 250 microseconds with Open-MPI
> 1.4.3, Mellanox Infiniband and 256 MPI ranks
>
> On Sep 21, 2011, at 3:17 PM, Sébastien Boisvert wrote:
>
>> Meanwhile, I contacted some people at SciNet, which is also part of Compute
>>
> Sent: September 20, 2011 08:14
> To: Open MPI Users
> Cc: Sébastien Boisvert
> Subject: Re: [OMPI users] Latency of 250 microseconds with Open-MPI 1.4.3,
> Mellanox Infiniband and 256 MPI ranks
>
> Hi Sébastien,
>
> If I understand you correctly, you are running your
Hello,
Is it safe to re-use the same buffer (variable A) for MPI_Send and MPI_Recv
given that MPI_Send may be eager depending on
the MCA parameters?
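As a sketch of the pattern being asked about: MPI_Send is blocking, so once it returns the buffer may be reused regardless of whether the message went out eagerly or via rendezvous; a minimal illustration, assuming two ranks:

/* Reuse the same buffer A for a blocking send and a later receive.
   MPI_Send is blocking: when it returns, A may be modified again,
   whether the message was sent eagerly or via rendezvous. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank;
    int A = 42;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        MPI_Send(&A, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        /* A is safe to touch again here; receive the reply into it. */
        MPI_Recv(&A, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank 0 received %d\n", A);
    } else if (rank == 1) {
        MPI_Recv(&A, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        A += 1;
        MPI_Send(&A, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}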
>
>
> Sébastien
>
> From: users-boun...@open-mpi.org [users-boun...@open-mpi.org] on behalf of
>
Hello,
You need to call MPI_Init before calling MPI_Init_thread.
According to http://cw.squyres.com/columns/2004-02-CW-MPI-Mechanic.pdf (Past
MPI Mechanic Columns written by Jeff Squyres)
there are only 3 functions that can be called before MPI_Init, and they are:
- MPI_Initialized
-
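As an aside, a minimal sketch of initializing with MPI_Init_thread and checking the thread-support level the library actually provides:

/* Request a thread-support level and check what is actually provided. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int provided = MPI_THREAD_SINGLE;

    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    if (provided < MPI_THREAD_FUNNELED)
        printf("warning: requested MPI_THREAD_FUNNELED, got level %d\n", provided);

    MPI_Finalize();
    return 0;
}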
Mellanox HCAs?
Thank you for your time.
Sébastien Boisvert