Hi Edgar,
just to let you know that I tested your fix, which has been merged into
ompi-release/v2.x (9ba667815), and it works! :)
Thanks!
Eric
On 12/07/16 04:30 PM, Edgar Gabriel wrote:
I think the decision was made to postpone the patch to 2.0.1, since the
release of 2.0.0 is imminent.
Thanks
Edgar
This looks like a new error -- something is potentially going wrong in
MPI_Request_free (or perhaps the underlying progress engine invoked by
MPI_Request_free).
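The quoted fragment does not show the failing test itself, so purely as an illustration (the communication pattern and names below are my assumptions, not the actual test), this is the kind of usage that exercises MPI_Request_free and the progress machinery underneath it:

/* Illustrative sketch only -- not the actual failing test from this thread.
 * MPI_Request_free on an active request may drive the library's internal
 * progress engine, which is where the error above is suspected. */
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, token = 42;
    MPI_Request req;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        MPI_Isend(&token, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, &req);
        MPI_Request_free(&req);   /* release the handle without waiting on it */
    } else if (rank == 1) {
        MPI_Recv(&token, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }

    MPI_Barrier(MPI_COMM_WORLD);
    MPI_Finalize();
    return 0;
}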
I think cloning at that time and running tests is absolutely fine.
We tend to track our bugs in Github issues, so if you'd like to
Thanks Ralph,
It is now *much* better: all sequential executions are working... ;)
but I still have issues with a lot of parallel tests... (but not all)
The SHA tested last night was c3c262b.
http://www.giref.ulaval.ca/~cmpgiref/dernier_ompi/2016.07.14.01h20m32s_config.log
Here is what is the
Hi Gilles,
On 13/07/16 08:01 PM, Gilles Gouaillardet wrote:
Eric,
OpenMPI 2.0.0 has been released, so the fix should land in the v2.x
branch shortly.
ok, thanks again.
If I understand correctly, your script downloads/compiles OpenMPI and then
downloads/compiles PETSc.
More precisely, for Op
Fixed on master
> On Jul 13, 2016, at 12:47 PM, Jeff Squyres (jsquyres)
> wrote:
>
> I literally just noticed that this morning (that singleton was broken on
> master), but hadn't gotten to bisecting / reporting it yet...
>
> I also haven't tested 2.0.0. I really hope singletons aren't broke
Eric,
OpenMPI 2.0.0 has been released, so the fix should land in the v2.x
branch shortly.
If I understand correctly, your script downloads/compiles OpenMPI and then
downloads/compiles PETSc.
If this is correct, then for the time being, feel free to patch Open MPI
v2.x before compiling it; the
Hi,
FYI: I've tested the SHA e28951e
From git clone launched around 01h19:
http://www.giref.ulaval.ca/~cmpgiref/dernier_ompi/2016.07.13.01h19m30s_config.log
Eric
On 13/07/16 04:01 PM, Pritchard Jr., Howard wrote:
Jeff,
I think this was fixed in PR 1227 on v2.x
Howard
Jeff,
I think this was fixed in PR 1227 on v2.x
Howard
--
Howard Pritchard
HPC-DES
Los Alamos National Laboratory
On 7/13/16, 1:47 PM, "devel on behalf of Jeff Squyres (jsquyres)"
wrote:
>I literally just noticed that this morning (that singleton was broken on
>master), but hadn't gotte
I literally just noticed that this morning (that singleton was broken on
master), but hadn't gotten to bisecting / reporting it yet...
I also haven't tested 2.0.0. I really hope singletons aren't broken then...
/me goes to test 2.0.0...
Whew -- 2.0.0 singletons are fine. :-)
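For anyone wanting to reproduce this: a "singleton" here just means launching the MPI binary directly, without mpirun, so MPI_Init has to bring up the runtime on its own. A trivial check could look like the sketch below (the file name is only an example); build it with mpicc and run the executable directly:

/* singleton.c -- minimal singleton check: compile with mpicc and run the
 * resulting binary without mpirun; MPI_Init must start the runtime itself. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    printf("singleton ok: rank %d of %d\n", rank, size);
    MPI_Finalize();
    return 0;
}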
> On Jul 13, 20
Hmmm…I see where the singleton on master might be broken - will check later
today
> On Jul 13, 2016, at 11:37 AM, Eric Chamberland
> wrote:
>
> Hi Howard,
>
> ok, I will wait for 2.0.1rcX... ;)
>
> I've put in place a script to download/compile OpenMPI+PETSc(3.7.2) and our
> code from the g
Hi Howard,
ok, I will wait for 2.0.1rcX... ;)
I've put in place a script to download/compile OpenMPI+PETSc(3.7.2) and
our code from the git repos.
Now I am in a somewhat uncomfortable situation where neither the
ompi-release.git nor the ompi.git repo is working for me.
The first gives me the
Hi Eric,
Thanks very much for finding this problem. In order to have a reasonably
timely release, we decided that we'd triage issues and only turn around a
new RC if something drastic appeared. We want to fix this issue (and it
will be fixed), but we've decided to defer the fix for this issue to a 2
I think the decision was made to postpone the patch to 2.0.1, since the
release of 2.0.0 is imminent.
Thanks
Edgar
On 7/12/2016 2:51 PM, Eric Chamberland wrote:
Hi Edgar,
I just saw that your patch got into ompi/master... any chance it goes
into ompi-release/v2.x before rc5?
thanks,
Eric
Hi Edgar,
I just saw that your patch got into ompi/master... any chance it goes
into ompi-release/v2.x before rc5?
thanks,
Eric
On 08/07/16 03:14 PM, Edgar Gabriel wrote:
I think I found the problem; I filed a PR against master, and if it
passes I will file a PR for the 2.x branch.
Th
I think I found the problem; I filed a PR against master, and if it
passes I will file a PR for the 2.x branch.
Thanks!
Edgar
On 7/8/2016 1:14 PM, Eric Chamberland wrote:
On 08/07/16 01:44 PM, Edgar Gabriel wrote:
ok, but just to be able to construct a test case, basically what you are
do
On 08/07/16 01:44 PM, Edgar Gabriel wrote:
ok, but just to be able to construct a test case, basically what you are
doing is
MPI_File_write_all_begin (fh, NULL, 0, some datatype);
MPI_File_write_all_end (fh, NULL, &status),
is this correct?
Yes, but with 2 processes:
rank 0 writes somethi
ok, but just to be able to construct a test case, basically what you are
doing is
MPI_File_write_all_begin (fh, NULL, 0, some datatype);
MPI_File_write_all_end (fh, NULL, &status),
is this correct?
Thanks
Edgar
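Putting the exchange above together, a minimal sketch of such a test case could look like the following; the file name, the datatype, and what rank 0 actually writes are assumptions on my part, since Eric's message is cut off:

/* Sketch of the reproducer discussed above: 2 processes open a shared file;
 * rank 0 writes a small buffer, rank 1 participates in the split collective
 * with a NULL buffer and a zero count. Run with: mpirun -np 2 ./a.out */
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_File fh;
    MPI_Status status;
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_File_open(MPI_COMM_WORLD, "testfile.bin",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

    if (rank == 0) {
        int data[4] = {1, 2, 3, 4};
        MPI_File_write_all_begin(fh, data, 4, MPI_INT);
        MPI_File_write_all_end(fh, data, &status);
    } else {
        MPI_File_write_all_begin(fh, NULL, 0, MPI_INT);  /* zero count, NULL buffer */
        MPI_File_write_all_end(fh, NULL, &status);
    }

    MPI_File_close(&fh);
    MPI_Finalize();
    return 0;
}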
On 7/8/2016 12:19 PM, Eric Chamberland wrote:
Hi,
On 08/07/16 12:52 PM, Edgar
Hi,
On 08/07/16 12:52 PM, Edgar Gabriel wrote:
The default MPI I/O library has changed in the 2.x release to OMPIO for
most file systems.
ok, I am now doing I/O on my own hard drive... but I can test over NFS
easily. For Lustre, I will have to produce a reduced example out of our
test suite...
The default MPI I/O library has changed in the 2.x release to OMPIO for
most file systems. I can look into that problem, any chance to get
access to the testsuite that you mentioned?
Thanks
Edgar
On 7/8/2016 11:32 AM, Eric Chamberland wrote:
Hi,
I am testing for the first time the 2.X relea