Re: [OMPI users] running openmpi in debug/verbose mode

2012-10-26 Thread Mahmood Naderan
>You can usually resolve that by configuring with --disable-dlopen Ok, I will try. So what is the purpose of enabling dlopen? Why is dlopen not disabled by default? I mean, why is the high-traffic configuration enabled by default?   Regards, Mahmood From: Ralph
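For reference, the change suggested above is made at build time on the configure line; a minimal sketch, with a placeholder install prefix:

    # Build Open MPI with its plugins linked in statically instead of
    # opened at runtime via dlopen (avoids the many file opens at startup).
    ./configure --prefix=/opt/openmpi --disable-dlopen
    make all install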

[OMPI users] OpenMPI on Windows when MPI_F77 is used from a C application

2012-10-26 Thread Mathieu Gontier
Dear all, I would like to use OpenMPI on Windows for a CFD code instead of MPICH2. My solver is developed in Fortran77 and piloted by a C++ interface; both levels call MPI functions. So, I installed OpenMPI-1.6.2-x64 on my system and compiled my code successfully. But at runtime it crashed.
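A minimal sketch of the kind of mixed-language setup described, shown in plain C; the Fortran entry-point name and its trailing underscore are assumptions, not from the original post. The key point is that MPI is initialized exactly once in the top-level driver and the communicator is converted to a Fortran handle before being passed down:

    #include <mpi.h>

    void solver_run_(MPI_Fint *comm);   /* hypothetical Fortran77 solver entry point */

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);                        /* initialize MPI exactly once */
        MPI_Fint fcomm = MPI_Comm_c2f(MPI_COMM_WORLD); /* Fortran handle for the solver */
        solver_run_(&fcomm);
        MPI_Finalize();
        return 0;
    }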

Re: [OMPI users] running openmpi in debug/verbose mode

2012-10-26 Thread Jeff Squyres
Open MPI doesn't really do much file IO at all. We do a little during startup / shutdown, but during the majority of the MPI application run, there's little/no file IO from the MPI layer. Note that the above statements assume that you are not using the MPI IO function calls. If your
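In MPI terms, those are the MPI_File_* routines; a minimal sketch of the kind of call that would generate file IO from the MPI layer (the file name is a placeholder):

    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_File fh;
        int rank, buf[4] = {0, 1, 2, 3};

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* every rank writes its own block into a shared file via MPI-IO */
        MPI_File_open(MPI_COMM_WORLD, "out.dat",
                      MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);
        MPI_File_write_at(fh, (MPI_Offset)(rank * sizeof(buf)), buf, 4, MPI_INT,
                          MPI_STATUS_IGNORE);
        MPI_File_close(&fh);

        MPI_Finalize();
        return 0;
    }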

Re: [OMPI users] ompi-clean on single executable

2012-10-26 Thread Ralph Castain
On Oct 26, 2012, at 4:14 AM, Nicolas Deladerriere wrote: > Thanks all for your comments > > Ralph > > What I was initially looking at is a tool (or an option of orte-clean) that > cleans up the mess you are talking about, but only the mess that has been >

Re: [OMPI users] System CPU of openmpi-1.7rc1

2012-10-26 Thread tmishima
Hi Ralph, thank you for your comment. I understand what you mean. As you pointed out, I have one process sleep before finalize. Then the MUMPS finalize might affect the behavior. I will remove the MUMPS finalize (and/or initialize) calls from my testing program and try again next Monday to make

Re: [OMPI users] ompi-clean on single executable

2012-10-26 Thread Nicolas Deladerriere
Thanks all for your comments. Ralph, what I was initially looking at is a tool (or an option of orte-clean) that cleans up the mess you are talking about, but only the mess that has been created by a single mpirun command. As far as I have understood, orte-clean cleans all the mess on a node associated with
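As a point of comparison, the existing tool operates on the whole node rather than on a single job:

    # sweep away leftover Open MPI processes and session directories on this
    # node, regardless of which mpirun launched them
    orte-clean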

Re: [OMPI users] System CPU of openmpi-1.7rc1

2012-10-26 Thread Ralph Castain
I'm not sure - just fishing for possible answers. When we see high CPU usage, it usually occurs during MPI communications - when a process is waiting for a message to arrive, it polls at a high rate to keep the latency as low as possible. Since you have one process "sleep" before calling the
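A minimal sketch of the situation being described: one rank sleeps before a collective while the other ranks spin inside MPI waiting for it, which shows up as high CPU usage even though no useful work is being done:

    #include <mpi.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        int rank;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0)
            sleep(30);               /* rank 0 is "asleep" before the collective */
        MPI_Barrier(MPI_COMM_WORLD); /* the other ranks poll here at full speed  */

        MPI_Finalize();
        return 0;
    }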

Re: [hwloc-users] How do I access CPUModel info string

2012-10-26 Thread Robin Scher
On 10/25/2012 3:06 PM, Samuel Thibault wrote: Robin Scher, le Thu 25 Oct 2012 23:57:38 +0200, a écrit : Do you think those could be added to hwloc? Yes: we already use cpuid for the x86 backend. That will only work on x86 hosts of course. Windows being x86 only for the time being, I'm OK