Hi Gilles,

On 13/07/16 08:01 PM, Gilles Gouaillardet wrote:
Eric,


Open MPI 2.0.0 has been released, so the fix should land in the v2.x
branch shortly.

ok, thanks again.


If I understand correctly, your script downloads/compiles Open MPI and then
downloads/compiles PETSc.

More precisely, for Open MPI I am cloning https://github.com/open-mpi/ompi.git, and for PETSc I just compile the latest release proven stable with our code, which is now 3.7.2.

If this is correct, then for the time being feel free to patch Open MPI
v2.x before compiling it; the fix can be downloaded at
https://patch-diff.githubusercontent.com/raw/open-mpi/ompi-release/pull/1263.patch


OK, but I think it is already included in the master branch of the clone I get... :)

Cheers,

Eric



Cheers,


Gilles


On 7/14/2016 3:37 AM, Eric Chamberland wrote:
Hi Howard,

ok, I will wait for 2.0.1rcX... ;)

I've put in place a script to download/compile Open MPI + PETSc (3.7.2)
and our code from the git repos.

Now I am in a somewhat uncomfortable situation where neither the
ompi-release.git nor the ompi.git repo is working for me.

The former gives me the errors with MPI_File_write_all_end I reported,
while the latter gives me errors like these:

[lorien:106919] [[INVALID],INVALID] ORTE_ERROR_LOG: Bad parameter in
file ess_singleton_module.c at line 167
*** An error occurred in MPI_Init_thread
*** on a NULL communicator
*** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
***    and potentially your MPI job)
[lorien:106919] Local abort before MPI_INIT completed completed
successfully, but am not able to aggregate error messages, and not
able to guarantee that all other processes were killed!

So, for my continuous integration of Open MPI I am in a no man's
land... :(

Thanks anyway for the follow-up!

Eric

On 13/07/16 07:49 AM, Howard Pritchard wrote:
Hi Eric,

Thanks very much for finding this problem. We decided that, in order to
have a reasonably timely release, we would triage issues and turn around
a new RC only if something drastic appeared. We want to fix this issue
(and it will be fixed), but we've decided to defer the fix to a 2.0.1
bug fix release.

Howard



2016-07-12 13:51 GMT-06:00 Eric Chamberland
<eric.chamberl...@giref.ulaval.ca>:

    Hi Edgard,

    I just saw that your patch got into ompi/master... any chance it
    goes into ompi-release/v2.x before rc5?

    thanks,

    Eric


    On 08/07/16 03:14 PM, Edgar Gabriel wrote:

        I think I found the problem; I filed a PR against master, and if
        that passes I will file a PR for the 2.x branch.

        Thanks!
        Edgar


        On 7/8/2016 1:14 PM, Eric Chamberland wrote:


            On 08/07/16 01:44 PM, Edgar Gabriel wrote:

                ok, but just to be able to construct a test case,
                basically what you are
                doing is

                MPI_File_write_all_begin (fh, NULL, 0, some datatype);

                MPI_File_write_all_end (fh, NULL, &status),

                is this correct?

            Yes, but with 2 processes:

            rank 0 writes something, but not rank 1...

            Other info: rank 0 didn't wait for rank 1 after
            MPI_File_write_all_end, so it continued to the next
            MPI_File_write_all_begin with a different datatype but on
            the same file...
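
            In code, the failing pattern looks roughly like this. It is
            only a minimal sketch: the file name, buffer sizes and the
            second datatype are placeholders I picked for illustration,
            not the exact ones our code uses.

            #include <mpi.h>

            int main(int argc, char **argv)
            {
                MPI_File   fh;
                MPI_Status status;
                int        rank;
                int        ibuf[4] = {1, 2, 3, 4};
                double     dbuf[4] = {1.0, 2.0, 3.0, 4.0};

                MPI_Init(&argc, &argv);
                MPI_Comm_rank(MPI_COMM_WORLD, &rank);

                /* "testfile" is a placeholder name */
                MPI_File_open(MPI_COMM_WORLD, "testfile",
                              MPI_MODE_CREATE | MPI_MODE_WRONLY,
                              MPI_INFO_NULL, &fh);

                /* first split collective: rank 0 writes data,
                   rank 1 writes nothing */
                if (rank == 0)
                    MPI_File_write_all_begin(fh, ibuf, 4, MPI_INT);
                else
                    MPI_File_write_all_begin(fh, NULL, 0, MPI_INT);
                MPI_File_write_all_end(fh,
                    rank == 0 ? (void *)ibuf : NULL, &status);

                /* no synchronization here: rank 0 goes straight on to
                   the next split collective, with a different datatype,
                   on the same file */
                if (rank == 0)
                    MPI_File_write_all_begin(fh, dbuf, 4, MPI_DOUBLE);
                else
                    MPI_File_write_all_begin(fh, NULL, 0, MPI_DOUBLE);
                MPI_File_write_all_end(fh,
                    rank == 0 ? (void *)dbuf : NULL, &status);

                MPI_File_close(&fh);
                MPI_Finalize();
                return 0;
            }

            Run under mpirun -np 2; it is this kind of sequence that
            triggers the MPI_File_write_all_end errors I reported.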

            thanks!

            Eric






