On Jun 24, 2013, at 6:40 PM, Ralph Castain wrote:
>
> > Our Windows supporter has left for greener pastures. Long term, we may
> > have an org that will want to support it. However, for now, support has
> > been dropped in 1.7.
> >
> > Sent from my iPhone
in Windows installers, Cygwin
support does not really match my expectations and constraints.
Thanks a lot,
Mathieu.
--
Mathieu Gontier
- MSN: mathieu.gont...@gmail.com
- Skype: mathieu_gontier
I do not know either :-/
On Tue, Oct 30, 2012 at 2:37 PM, Jeff Squyres wrote:
> What's errno=108 on your platform?
>
> On Oct 30, 2012, at 9:22 AM, Damien Hocking wrote:
>
> > I've never seen that, but someone else might have.
> >
> > Damien
> >
>
, 2012 at 9:35 PM, Damien wrote:
> Is there a series of error messages or anything at all that you can post
> here?
>
> Damien
>
>
> On 29/10/2012 2:30 PM, Mathieu Gontier wrote:
>
> Hi guys.
>
> Finally, I compiled with /O: the option is deprecated and, li
, Oct 29, 2012 at 7:33 PM, Mathieu Gontier
wrote:
> It crashes in the Fortran routine calling an MPI function. When I run the
> debugger, the crash seems to be in libmpi_f77.lib, but I cannot go further
> since the lib is not built in debug mode.
>
> Attached to this email are the files o
o compile it cleanly (so no /O flags) and see if it crashes.
>
> Damien
>
>
> On 26/10/2012 10:27 AM, Mathieu Gontier wrote:
>
> Dear all,
>
> I am willing to use OpenMPI on Windows for a CFD solver instead of MPICH2. My
> solver is developed in Fortran77 and piloted by a C
against libmpi_f77.lib?
Thanks for your help.
Mathieu.
--
Mathieu Gontier
- MSN: mathieu.gont...@gmail.com
- Skype: mathieu_gontier
Dear all,
Thanks a lot for your support. I only had time to test one today, but it
seems the option --mca mpi_leave_pinned 0 works on my case. I will go
further next week, but for the moment, I can submit my computation.
Thanks a lot for your help.
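For reference, a minimal sketch of how that MCA parameter can be passed, either on the mpirun command line or through the equivalent environment variable (Open MPI reads any MCA parameter from an OMPI_MCA_-prefixed variable; the solver name below is a placeholder):

```shell
# Command-line form, as used in the thread (placeholder application name):
#   mpirun --mca mpi_leave_pinned 0 -np 4 ./my_solver
# Equivalent environment-variable form, picked up by mpirun at launch:
export OMPI_MCA_mpi_leave_pinned=0
echo "$OMPI_MCA_mpi_leave_pinned"
```

The environment-variable form is convenient under a batch scheduler, where editing the mpirun line in a shared job script may not be practical.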
On 06/23/2011 03:56 PM, Mathieu Gontier
23, 2011, at 7:56 AM, Mathieu Gontier wrote:
Hello,
Thanks for the answer.
I am testing with OpenMPI-1.4.3: my computation is queued. But I did
not read anything obvious related to my issue. Have you read
anything that could solve it?
I am going to submit my computation with --mca
Supported Compilers:
pathscale/3.2.99
pathscale/3.2
pgi/10.9
pgi/10.4
intel/11.1.072
gcc/4.4.4
gcc/4.4.3
---
Let me know if that helps.
Josh
On Wed, Jun 22, 2011 at 4:16 AM, Mathieu Gontier
wr
for us, but I do not want to ask to install too many of them.
Thanks.
--
/
Mathieu Gontier
skype: mathieu_gontier /
to all of you for the help.
Best regards,
Mathieu.
On 12/16/2010 06:21 PM, Eugene Loh wrote:
Jeff Squyres wrote:
On Dec 16, 2010, at 5:14 AM, Mathieu Gontier wrote:
We have led some tests, and the option btl_sm_eager_limit has a positive
effect on the performance. Eugene, thank you
Does the env. var. work to override it?
export OMPI_MCA_btl_sm_eager_limit=40960
If so, I can deal with it.
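A quick sanity check of the environment-variable form sketched in shell (Open MPI maps any MCA parameter name to an OMPI_MCA_-prefixed variable, so this should be equivalent to passing --mca btl_sm_eager_limit 40960 on the mpirun line):

```shell
# Set the MCA parameter through the environment; mpirun and the MPI
# library read OMPI_MCA_* variables at startup.
export OMPI_MCA_btl_sm_eager_limit=40960
# Verify the value is exported before launching, e.g.:
#   mpirun -np 6 ./solver
echo "$OMPI_MCA_btl_sm_eager_limit"
```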
On 12/16/2010 11:14 AM, Mathieu Gontier wrote:
Hi all,
We have led some tests, and the option btl_sm_eager_limit has a
positive effect on the performance. Eugene, thank
.
Any idea?
On 12/06/2010 04:31 PM, Eugene Loh wrote:
Mathieu Gontier wrote:
Nevertheless, one can observe some differences between MPICH and
OpenMPI, from 25% to 100%, depending on the options we are using in
our software. Tests are run on a single SGI node with 6 or 12
processes, and thus
Is there a difference between --mca btl tcp,sm,self and --mca btl
self,sm,tcp (or not putting any explicit mca option)?
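On the ordering question, a hedged sketch: as far as I know, the btl value is a comma-separated selection set rather than a priority list, so Open MPI chooses among the listed components by their own internal priorities and the two orderings should behave identically (the solver name below is a placeholder):

```shell
# The btl list selects which transports are available; ordering within
# the list does not set preference - Open MPI ranks the listed
# components by their internal exclusivity/priority values.
export OMPI_MCA_btl=tcp,sm,self
echo "$OMPI_MCA_btl"
# Equivalent command-line forms (placeholder application name):
#   mpirun --mca btl tcp,sm,self -np 6 ./solver
#   mpirun --mca btl self,sm,tcp -np 6 ./solver
# Omitting --mca btl lets Open MPI consider all available BTL components.
```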
Best regards,
Mathieu.
On 12/05/2010 06:10 PM, Eugene Loh wrote:
Mathieu Gontier wrote:
Dear OpenMPI users
I am dealing with an arithmetic problem. In fact, I have two variants
accentuation of the OpenMPI+Ethernet loss of performance: is
it another issue in OpenMPI, or is there any option I can use?
Thank you for your help.
Mathieu.
--
Mathieu Gontier
MPI_UNIVERSE_SIZE is not set in Open MPI.
george.
On Feb 18, 2010, at 10:18 , Mathieu Gontier wrote:
Another question on the same example.
When I ask for the size of the inter-communicator (MPI_COMM_UNIVERSE in the example) between the spawner/parent and the spawned/children processes, the
my MPI_COMM_UNIVERSE be a
higher communicator including here the group of MPI_COMM_SELF and the
group of MPI_COMM_WORLD of my spawned application (./worker).
I think I missed something. Can someone help me?
Thank you.
Mathieu Gontier wrote:
Hello,
I am trying to use MPI_Comm_spawn (MPI
Hello,
I am trying to use MPI_Comm_spawn (MPI-2 standard only) and I have a
problem when I use MPI_UNIVERSE_SIZE. Here is my code:
int main( int argc, char *argv[] )
{
    int wsize=0, wrank=-1 ;
    int usize=0, urank=-1 ;
    int ier ;
    int usize_attr=0, flag=0 ;
    MPI_Comm MPI_COMM_UNIVERSE;
    ier = MPI_Init( &argc, &argv );
Squyres wrote:
On Jan 26, 2010, at 4:22 AM, Mathieu Gontier wrote:
1/ I rebuilt without --enable-dist (more secure indeed) and with explicit --without-openib/--with-openib: behaviors are better. Great.
Excellent. I didn't mention it in my prior email, but our conf
nk Jeff: the previous answers were really useful.
Jeff Squyres wrote:
On Jan 25, 2010, at 11:58 AM, Mathieu Gontier wrote:
I built OpenMPI-1.4.1 without openib support with the following configuration options:
./configure --prefix=/develop/libs/OpenMPI/openmpi-1.4.1/LINUX_GCC_4_1
Hello,
I built OpenMPI-1.4.1 without openib support with the following
configuration options:
./configure
--prefix=/develop/libs/OpenMPI/openmpi-1.4.1/LINUX_GCC_4_1_tcp_mach
--enable-static --enable-shared --enable-cxx-exceptions
--enable-mpi-f77 --disable-mpi-f90 --enable-mpi-cxx
--disable
lot of printing to stdout/stderr?
On Jan 11, 2010, at 8:00 AM, Mathieu Gontier wrote:
Hi all
I want to migrate my CFD application from MPICH-1.2.4 (ch_p4 device) to
OpenMPI-1.4. Hence, I compared the two libraries compiled with my
application and I noted OpenMPI is less effici
Hi all
I want to migrate my CFD application from MPICH-1.2.4 (ch_p4 device) to
OpenMPI-1.4. Hence, I compared the two libraries compiled with my
application and I noted OpenMPI is less efficient than MPICH on
Ethernet (170min with MPICH against 200min with OpenMPI). So, I wonder
if someone h