Open MPI uses clock_gettime when it is available, and falls back to gettimeofday
only when that better option cannot be found. Check that your system has
clock_gettime, and what resolution the timer actually offers.
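A quick way to check the resolution from C (a minimal sketch, nothing Open MPI
specific; link with -lrt on older glibc):

    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        struct timespec res;
        /* clock_getres reports the granularity of the requested clock */
        if (clock_getres(CLOCK_MONOTONIC, &res) == 0)
            printf("CLOCK_MONOTONIC resolution: %ld s %ld ns\n",
                   (long)res.tv_sec, res.tv_nsec);
        else
            perror("clock_getres");
        return 0;
    }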
Aurélien
--
Aurélien Bouteiller, Ph.D. ~~ https://icl.cs.utk.edu/~bouteill/
> Le 5 a
Try running with coll_base_verbose 1000, just to see which collective module
actually got loaded.
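For example (adjust np and the binary to your setup):

    mpirun --mca coll_base_verbose 1000 -np 4 ./your_app

The selection logic should report which module (tuned, basic, ...) gets chosen
on each communicator.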
Aurélien
--
Aurélien Bouteiller, Ph.D. ~~ https://icl.cs.utk.edu/~bouteill/
> On Dec 9, 2015, at 09:53, Saliya Ekanayake wrote:
>
> Hi,
>
Not by default. It can be done with a rankfile, but the observed behavior is
normal when launching two mpirun instances with the same machinefile and all
default options. You can give each mpirun its own rankfile so the two jobs land
on disjoint cores, as sketched below.
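A minimal sketch; the hostname and slot numbers are made up and must match
your machines:

    # rankfile.job1
    rank 0=node01 slot=0
    rank 1=node01 slot=1

    # rankfile.job2
    rank 0=node01 slot=2
    rank 1=node01 slot=3

    mpirun -rf rankfile.job1 -np 2 ./app1
    mpirun -rf rankfile.job2 -np 2 ./app2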
Aurélien
--
Aurélien Bouteiller, Ph.D. ~~ https://icl.cs.utk.edu/~bouteill/
> On Nov 24
You can use the mpirun --report-bindings option to see how your processes have
been mapped in your deployment. If you are unhappy with the default, you can
play with the --map-by option.
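For instance:

    mpirun --report-bindings --map-by socket --bind-to core -np 8 ./app

prints one line per rank showing which cores it is bound to.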
Aurélien
--
Aurélien Bouteiller, Ph.D. ~~ https://icl.cs.utk.edu/~bouteill/
MCA orte: parameter "orte_local_tmpdir_base" (current value: "", data source: default, level: 9 dev/all, type: string)
MCA orte: parameter "orte_remote_tmpdir_base" (current value: "", data source: default, level: 9 dev/all, type: string)
Irena,
This is a known problem in BLACS. I have pushed a patch to the scalapack devs
and I believe the latest version (the one that is integrated into scalapack
2.0) does include the fix.
Aurelien
--
Aurélien Bouteiller ~ https://icl.cs.utk.edu/~bouteill/
> On Mar 6, 2015, at 09:31, Ir
Nathan,
I think I already pushed a patch for this particular issue last month. I do not
know whether it has been backported to a release yet.
See here:
https://github.com/open-mpi/ompi/commit/ee3b0903164898750137d3b71a8f067e16521102
Aurelien
--
~~~ Aurélien Bouteiller, Ph.D
Are you sure you are not using the vader BTL?
Setting btl_base_verbose and/or sm_verbose should print some knem
initialization info.
The CMA Linux feature (which ships with most 3.1x Linux kernels) has similar
functionality, and is also supported in sm.
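For example, forcing sm (which rules vader out) with verbose output:

    mpirun --mca btl sm,self --mca btl_base_verbose 100 -np 2 ./app

should show whether knem (or CMA) was picked up at initialization.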
Aurelien
--
~~~ Aurélien Bouteiller, Ph.D. ~~~
ions requires, at runtime, the shared libraries from the glibc version
used for linking.
Of course, the resulting code does crash in getpwuid.
I know Nathan has successfully compiled for Cray machines. What's the trick?
Aurelien
--
~~~ Aurélien Bouteiller, Ph.D. ~~~
On Feb 13, 2014, at 15:23, MM wrote:
> my ompi_info says (openmpi)
> Threading support: No
>
> Does that mean it's not supported?
>
Yes, that’s what it means.
> If so, what to do?
>
You can safely ignore that information. Open MPI works for “serialized”
workloads even when thread support is disabled.
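To check at runtime what level the library actually grants, you can ask
MPI_Init_thread (a minimal sketch):

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int provided;
        /* request SERIALIZED; 'provided' reports what the library really supports */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_SERIALIZED, &provided);
        if (provided < MPI_THREAD_SERIALIZED)
            printf("thread level granted: %d\n", provided);
        MPI_Finalize();
        return 0;
    }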
an MPI
> message queue can I expect before something breaks?
>
> ---John
>
--
* Dr. Aurélien Bouteiller
* Researcher at Innovative Computing Laboratory
>Jacky
On Jun 11, 2012, at 18:57, Aurélien Bouteiller wrote:
> Hi,
>
> If some mx devices are found, the logic is not only to use the mx BTL but
> also to use the mx MTL. You can try to disable this with --mca mtl ob1.
>
Sorry, I meant --mca pml ob1
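i.e. something like:

    mpirun --mca pml ob1 -np 4 ./app

which forces the ob1 point-to-point layer and thereby bypasses the mx MTL.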
> Aurelien
>
>
>
> mpirun noticed that process rank 0 with PID 3460 on node n0007.scs00
> exited on signal 11 (Segmentation fault).
> --
>
>
> Can anybody shed some light here? It looks like omp
On Jun 1, 2012, at 09:27, Jeff Squyres wrote:
> On Jun 1, 2012, at 8:20 AM, Aurélien Bouteiller wrote:
>
>> You need to pass the following option to configure:
>> --with-devel-headers --enable-binaries
>>
>> I don't know exactly why the default is not
--
* Dr. Aurélien Bouteiller
* Researcher at Innovative Computing Laboratory
* University of Tennessee
* 1122 Volunteer Boulevard, suite 309b
* Knoxville, TN 37996
You can use the Intel MPI Benchmarks (IMB) or SKaMPI. These two programs are
designed to evaluate MPI performance.
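If you just want a rough number yourself, a bare-bones timing loop looks like
this (a sketch only; IMB and SKaMPI use a much more careful methodology):

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        enum { COUNT = 1024, ITERS = 1000 };
        double buf[COUNT] = {0}, t;           /* 8 KB message */
        int rank, i;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Barrier(MPI_COMM_WORLD);          /* line everybody up first */
        t = MPI_Wtime();
        for (i = 0; i < ITERS; i++)
            MPI_Bcast(buf, COUNT, MPI_DOUBLE, 0, MPI_COMM_WORLD);
        t = MPI_Wtime() - t;
        if (rank == 0)
            printf("avg MPI_Bcast: %g us\n", 1e6 * t / ITERS);
        MPI_Finalize();
        return 0;
    }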
Sent from my iPad
On 2012-04-04, at 08:46, anas trad wrote:
> Hi all,
>
> I need to know the time estimation of executing MPI_Bcast function on Neolith
> Cluster. Please, can anyone
) ?
--
* Dr. Aurélien Bouteiller
* Researcher at Innovative Computing Laboratory
* University of Tennessee
* 1122 Volunteer Boulevard, suite 350
* Knoxville, TN 37996
* 865 974 6321
You should consider reading about communicators in MPI.
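A collective such as MPI_Bcast involves only the processes of the communicator
you hand it, so you can confine the traffic to a subgroup. A small sketch
(assumes the usual MPI_Init/MPI_Finalize around it):

    int rank, data[4] = {1, 2, 3, 4};
    MPI_Comm sub;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    /* split even and odd ranks into two independent communicators */
    MPI_Comm_split(MPI_COMM_WORLD, rank % 2, rank, &sub);
    /* this broadcast stays entirely within the even (or odd) subgroup */
    MPI_Bcast(data, 4, MPI_INT, 0, sub);
    MPI_Comm_free(&sub);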
Aurelien
--
Aurelien Bouteiller, Ph.D.
Innovative Computing Laboratory, The University of Tennessee.
Sent from my iPad
On Aug 7, 2010, at 1:05, Randolph Pullen wrote:
> I seem to be having a problem with MPI_Bcast.
> My massive I/O int
Yves,
In Open MPI you have very fine control over how the deployment is bound
to the cores. For more information, please refer to the FAQ section describing
rankfiles: in a rankfile you can specify very precisely which rank goes on
which physical PU, as in the sketch below. For a more single-shot option,
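A minimal rankfile sketch (host names and slot numbers are placeholders;
slot=socket:core):

    rank 0=host1 slot=0:0
    rank 1=host1 slot=0:1
    rank 2=host2 slot=1:0

    mpirun -rf myrankfile -np 3 ./app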
Hi,
setting the eager limit to such a drastically high value will have the effect
of generating gigantic memory consumption for unexpected messages. Any message
you send which does not have a preposted ready recv will mallocate 150mb of
temporary storage, and will be memcopied from that intern
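The sketch below shows the receiver preposting its receive, so the eager data
can land directly in the user buffer instead of an unexpected-message buffer
(N, do_other_work and the rank assignments are made up):

    double buf[N];
    MPI_Request req;
    if (rank == 1) {
        /* prepost: the match already exists when the eager data arrives */
        MPI_Irecv(buf, N, MPI_DOUBLE, 0, 42, MPI_COMM_WORLD, &req);
        do_other_work();
        MPI_Wait(&req, MPI_STATUS_IGNORE);
    } else if (rank == 0) {
        MPI_Send(buf, N, MPI_DOUBLE, 1, 42, MPI_COMM_WORLD);
    }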
Hi Thomas,
The message you get comes from the convertor, which is in charge of
packing/unpacking the data. Because you add an extra int to the wire data
yourself, the convertor gets confused on the receiver side: it sees a
message that is not in the expected format. What you should do is describe
the extra int to MPI so that both sides agree on the wire format.
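One way is MPI_Pack/MPI_Unpack, shown below as a sketch (payload, n, comm,
dest, src and tag are placeholders):

    /* sender: pack the header int and the payload into one buffer */
    char scratch[4096];
    int pos = 0, header = 42;
    MPI_Pack(&header, 1, MPI_INT, scratch, 4096, &pos, comm);
    MPI_Pack(payload, n, MPI_DOUBLE, scratch, 4096, &pos, comm);
    MPI_Send(scratch, pos, MPI_PACKED, dest, tag, comm);

    /* receiver: unpack in the same order */
    MPI_Recv(scratch, 4096, MPI_PACKED, src, tag, comm, MPI_STATUS_IGNORE);
    pos = 0;
    MPI_Unpack(scratch, 4096, &pos, &header, 1, MPI_INT, comm);
    MPI_Unpack(scratch, 4096, &pos, payload, n, MPI_DOUBLE, comm);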
--
* Dr. Aurélien Bouteiller
--
* Dr. Aurélien Bouteiller
* Sr. Research Associate at Innovative Computing Laboratory
* University of Tennessee
* 1122 Volunteer Boulevard, suite 350
* Knoxville, TN 37996
* 865 974 6321
external name server called ompi-server. I can give you more
details if you want to try the svn version.
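The usual pattern with the standalone server is roughly (file name and process
counts are placeholders):

    ompi-server --report-uri uri.txt
    mpirun --ompi-server file:uri.txt -np 1 ./server_app
    mpirun --ompi-server file:uri.txt -np 1 ./client_app

so that MPI_Publish_name/MPI_Lookup_name in the two jobs can rendezvous
through it.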
Regards,
Aurelien
--
* Dr. Aurélien Bouteiller
* Sr. Research Associate at Innovative Computing Laboratory
* University of Tennessee
* 1122 Volunteer Boulevard, suite 350
* Knoxville, TN 37996
Cheers,
- Brian
Brian Dobbins
Yale Engineering HPC
--
* Dr. Aurélien Bouteiller
* Sr. Research Associate at Innovative Computing Laboratory
* University of Tennessee
set properly? Thanks!
Hahn
--
Hahn Kim, h...@ll.mit.edu
MIT Lincoln Laboratory
244 Wood St., Lexington, MA 02420
Tel: 781-981-0940, Fax: 781-981-5255
If you have several network cards in your system, it can sometimes get
the endpoints confused, especially if the nodes don't have the same number
of cards or don't use the same subnet for all of "eth0, eth1". You should
try to restrict Open MPI to using only one of the available networks by
using the -
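For the TCP BTL, the usual parameter for this is btl_tcp_if_include, e.g.:

    mpirun --mca btl_tcp_if_include eth0 -np 4 ./app

which limits TCP traffic to the eth0 interface (adjust the interface name to
your system).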
replaced that with OMPI. Could that be the problem?? Do I need to
change any part of my source code if I migrate from MPICH-1.2.6 to
OpenMPI-1.2.7?? Please let me know.
--- On Sat, 9/20/08, Aurélien Bouteiller
wrote:
From: Aurélien Bouteiller
Subject: Re: [OMPI users] Segmentation Fault--li
--
* Dr. Aurélien Bouteiller
* Sr. Research Associate at Innovative Computing Laboratory
* University of Tennessee
* 1122 Volunteer Boulevard, suite 350
* Knoxville, TN 37996
* 865 974 6321
ORLD,ierr)
end do
end if
--
* Dr. Aurélien Bouteiller
Do you know where to find a comprehensive document for the Opal interface
functions, and a user guide?
thanks,
Aurelien
--
* Dr. Aurélien Bouteiller
* Sr. Research Associate at Innovative Computing Laboratory
* University of Tennessee
* 1122 Volunteer Boulevard, suite 350
* Knoxville, TN 37996
comm_merge returns
(long pause inserted via parent sleep)
parent sends data to kid 1
(long pause inserted via parent sleep)
parent starts to receive data from kid 1
all children's calls to MPI_Intercomm_merge return
-- Mark
Aurélien Bouteiller wrote:
Ok, I'll check to see what happens.
--
* Dr. Aurélien Bouteiller
* Sr. Research Associate at Innovative Computing Laboratory
--
* Dr. Aurélien Bouteiller
* Sr. Research Associate at Innovative Computing Laboratory
* University of Tennessee
* 1122 Volunteer Boulevard, suite 350
* Knoxville, TN 37996
MPI_Intercomm_merge is what you are looking for.
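A minimal sketch of both sides (the child binary name and count are
placeholders):

    /* parent: spawn 4 children, then merge into one intracommunicator */
    MPI_Comm children, everyone;
    MPI_Comm_spawn("./child", MPI_ARGV_NULL, 4, MPI_INFO_NULL, 0,
                   MPI_COMM_SELF, &children, MPI_ERRCODES_IGNORE);
    MPI_Intercomm_merge(children, 0, &everyone);  /* parent ranks first */

    /* child: recover the intercommunicator to the parent and merge */
    MPI_Comm parent, everyone;
    MPI_Comm_get_parent(&parent);
    MPI_Intercomm_merge(parent, 1, &everyone);    /* children ranked after */

After the merge, 'everyone' is a single intracommunicator covering the parent
and all children.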
Aurelien
On Jul 26, 2008, at 13:23, Mark Borgerding wrote:
Okay, so I've gotten a little bit closer.
I'm using MPI_Comm_spawn to start several children processes. The
problem is that the children are in their own group, separate from
the pa
This is a harmless error, related to the fault tolerance component. If you
don't need FT, you can safely ignore it. It will disappear soon.
Aurelien
On Jul 15, 2008, at 11:22, Tom Riddle wrote:
Hi,
I wonder if anyone can shed some light on what exactly this is
referring to? At the start of a
Hi,
There is no mpd in Open MPI; mpirun will spawn everything needed for
you. Make sure all your processes call MPI_Init, not only the root
process. If you mpirun -np 10, all 10 processes need to go through MPI_Init
to allow further progression, as in the skeleton below.
If this does not solve your problem, plea
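A minimal skeleton every rank must execute (sketch):

    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);   /* every spawned process must reach this */
        /* ... application work ... */
        MPI_Finalize();
        return 0;
    }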
You can add --enable-progress-threads to the configure line. However,
please consider this a beta feature: we know for sure there are still
bugs in the current thread safety support.
Aurelien
On May 1, 2008, at 09:46, Alberto Giannetti wrote:
In message http://www.open-mpi.org/community/lists/users/2007/03
ERN:
internal error
[alberto-giannettis-computer.local:07933] *** MPI_ERRORS_ARE_FATAL
(goodbye)
Why do I have an internal error? If I try to connect to 0.1.1:2001
from the client the program hangs.
From a pretty old experiment I made, compression was giving good
results on a 10 Mbps network but was actually increasing RTT at 100 Mbps
and above. I played with all the zlib settings from 1 to 9, and
even the lowest compression setting was unable to reach decent
performance. I don't belie
).
Aurelien
On Apr 11, 2008, at 05:34, jody wrote:
Aurelien:
What is the cause of this performance penalty?
Jody
On Fri, Apr 11, 2008 at 1:44 AM, Aurélien Bouteiller
wrote:
Open MPI can manage heterogeneous systems, though you may prefer to avoid
this because it has a performance penalty. I suggest
Open MPI can manage heterogeneous systems, though you may prefer to avoid
this because it has a performance penalty. I suggest you compile on
the 32-bit machine and use the same version everywhere.
Aurelien
On Apr 10, 2008, at 18:09, clark...@clarktx.com wrote:
Thanks to those who answered my post i
If you can avoid them, it is better to avoid them. However, it is always
better to use MPI_Alltoall than to code your own all-to-all with
point-to-point, and some algorithms *need* an all-to-all
communication. What you should understand by "avoid all to all" is not to
avoid MPI_Alltoall in favor of hand-rolled exchanges, but to avoid the
all-to-all communication pattern itself when your algorithm allows it.
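For reference, a minimal MPI_Alltoall sketch (assumes rank and size were
obtained from MPI_Comm_rank/MPI_Comm_size, and stdlib.h for malloc):

    /* each rank sends one int to every peer and receives one from each */
    int *sendbuf = malloc(size * sizeof(int));
    int *recvbuf = malloc(size * sizeof(int));
    for (int i = 0; i < size; i++)
        sendbuf[i] = rank;    /* payload destined to peer i */
    MPI_Alltoall(sendbuf, 1, MPI_INT, recvbuf, 1, MPI_INT, MPI_COMM_WORLD);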
Autonoma de Barcelona - UAB
ETSE, Edifcio Q, QC/3088
http://www.caos.uab.es
Phone: +34-93-581-2888
Fax: +34-93-581-2478
--
* Dr. Aurélien Bouteiller
* Sr. Research Associate at Innovative Computing Laboratory
http://www.open-mpi.org/community/lists/users/2006/01/0480.php
Yours, Ashley Pittman.
--
* Dr. Aurélien Bouteiller
* Sr. Research Associate at Innovative Computing Laboratory
and suggestions on how to resolve these
performance issues.
Thank you very much.
--
Dr. Aurélien Bouteiller
Sr. Research Associate - Innovative Computing Laboratory
your time and looking forward to your answer(s)!
Alex
Dr. Aurélien Bouteiller
Sr. Research Associate - Innovative Computing Laboratory
Suite 350, 1122 Volunteer Boulevard
Knoxville, TN 37996
865 974 6321
--
Dr. Aurélien Bouteiller
Sr. Research Associate - Innovative Computing Laboratory
Suite 350, 1122 Volunteer Boulevard
Knoxville, TN 37996
865 974 6321
"Dynamic Statistical Profiling of Communication Activity in Distributed
Applications". They add support for piggybacking at the MPI implementation
level and report very low overheads (no surprise).
Regards,
Oleg Morajko
On Feb 1, 2008 5:08 PM, Aurélien Bouteiller
wrote:
I don't know of
--
Dr. Aurélien Bouteiller
Sr. Research Associate - Innovative Computing Laboratory
Suite 350, 1122 Volunteer Boulevard
Knoxville, TN 37996
865 974 6321
You can do it; whether it makes sense depends on your application. Load
imbalance in regular MPI applications kills performance. Therefore, if your
cluster is very heterogeneous, you might prefer a different programming
paradigm that takes care of this by nature (say, RPC).
Using --mca btl ^mx totally prevents use of the mx interface, so
everybody uses tcp (even mx-capable nodes). If you want a mixed
configuration, you have to enforce use of the ob1 pml but let the mx
btl be used where it is suitable (it will be disabled at runtime where it
can't run), as sketched below. You're pro
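Something along these lines (np and the binary are placeholders):

    mpirun --mca pml ob1 --mca btl mx,tcp,sm,self -np 8 ./app

ob1 drives the listed BTLs and falls back to tcp between nodes where mx
cannot initialize.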
led
--> Returned "Unreachable" (-12) instead of "Success" (0)
--
*** An error occurred in MPI_Init
*** before MPI was initialized
*** MPI_ERRORS_ARE_FATAL (goodbye)
*** An error occurred in MPI_Init
*** before MPI was initialized
*** MPI_ERRORS_ARE_FATAL (goodbye)
burl-ct-v4
Dr. Aurélien Bouteiller
Sr. Research Associate - Innovative Computing Laboratory
Suite 350, 1122 Volunteer Boulevard
Knoxville, TN 37996
865 974 6321