ad 5884 njoly 21w REG 0,308 13510798885996031 input.dat.lockedfile
Thanks in advance.
--
Nicolas Joly
Cluster & Computing Group
Biology IT Center
Institut Pasteur, Paris.
[attached C test program: eight #include directives (header names stripped by the archive) and the opening of main()]
might be too late for that.
No hurry. We are in the process of validating our codes with the new
ompio backend ... and we still have romio as a fallback.
Thanks again.
> Thanks for the bug report!
>
> Edgar
>
>
> On 1/18/2017 9:36 AM, Nicolas Joly wrote:
> >Hi,
the shared file pointer
was reset/initialised(?) to zero ... leading to an unexpected write
position for the "Data line" buffer.
Thanks in advance.
Regards.
--
Nicolas Joly
Cluster & Computing Group
Biology IT Center
Institut Pasteur, Paris.
[attached C test program: #include and #define directives only; the remainder is not preserved in the archive]
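As a rough illustration of the sequence described above (a header written through the shared file pointer, then "Data line" records that should land right after it), here is a minimal sketch; the output file name, line contents and use of MPI_File_write_ordered are illustrative assumptions, not the original attachment.

#include <stdio.h>
#include <string.h>
#include <mpi.h>

/* Minimal sketch: a header written through the shared file pointer,
 * followed by one "Data line" per rank written in rank order.  If the
 * shared file pointer were reset to zero in between, the data lines
 * would overwrite the header instead of following it. */
int main(int argc, char *argv[])
{
    MPI_File fh;
    int rank;
    char line[32];
    const char header[] = "Header line\n";   /* illustrative content */

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_File_open(MPI_COMM_WORLD, "output.dat",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

    /* Rank 0 writes the header; the shared file pointer now points
     * just past it. */
    if (rank == 0)
        MPI_File_write_shared(fh, header, (int)strlen(header), MPI_CHAR,
                              MPI_STATUS_IGNORE);
    MPI_Barrier(MPI_COMM_WORLD);

    /* Every rank appends its data line, in rank order, starting from
     * the current shared file pointer position. */
    snprintf(line, sizeof(line), "Data line %d\n", rank);
    MPI_File_write_ordered(fh, line, (int)strlen(line), MPI_CHAR,
                           MPI_STATUS_IGNORE);

    MPI_File_close(&fh);
    MPI_Finalize();
    return 0;
}

With a correctly maintained shared file pointer the data lines follow the header; a pointer reset to zero would instead place them at offset 0, which matches the symptom described.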
Hi,
Just noticed a typo in the MPI_Info_get_nkeys/MPI_Info_get_nthkey man
pages. The last cross-reference reads 'MPI_Info_get_valueln' where it
should be 'MPI_Info_get_valuelen'.
The attached patch should fix them. AFAIK all versions are affected
(master, 2.x and 1.10).
Thanks.
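For context, the routines involved in the corrected cross-reference are normally used together as below; a minimal sketch with an illustrative hint key, unrelated to the attached patch.

#include <stdio.h>
#include <mpi.h>

/* Minimal sketch: walk all keys of an MPI_Info object using
 * MPI_Info_get_nkeys / MPI_Info_get_nthkey / MPI_Info_get_valuelen. */
int main(int argc, char *argv[])
{
    MPI_Info info;
    int nkeys;

    MPI_Init(&argc, &argv);

    MPI_Info_create(&info);
    MPI_Info_set(info, "cb_nodes", "4");     /* illustrative key/value */

    MPI_Info_get_nkeys(info, &nkeys);
    for (int i = 0; i < nkeys; i++) {
        char key[MPI_MAX_INFO_KEY + 1];
        int valuelen, flag;

        MPI_Info_get_nthkey(info, i, key);
        MPI_Info_get_valuelen(info, key, &valuelen, &flag);
        if (flag) {
            char value[valuelen + 1];
            MPI_Info_get(info, key, valuelen, value, &flag);
            printf("%s = %s\n", key, value);
        }
    }

    MPI_Info_free(&info);
    MPI_Finalize();
    return 0;
}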
The original tool that crashed first on read, and later on write with
MPI_BOTTOM, now succeeds.
> based on previous tests posted here, it is likely that a similar bug
> needs to be fixed for other filesystems.
Thanks a lot.
> Gilles
>
>
> On 6/15/2016 12:42 AM, Nicolas Joly wrote:
> >Hi,
at sample.c:63
> Cheers,
>
> Gilles
>
> Nicolas Joly wrote:
> >On Fri, Jun 17, 2016 at 10:15:28AM +0200, Vincent Huber wrote:
> >> Dear Mr. Joly,
> >>
> >>
> >> I have tried your code on my MacBook Pro (cf. infra for details) to detail
pp/Contents/Developer/usr
> --with-gxx-include-dir=/usr/include/c++/4.2.1
> Apple LLVM version 7.0.0 (clang-700.0.72)
> Target: x86_64-apple-darwin15.5.0
> Thread model: posix
>
>
> and
>
>
> mpirun --version
> mpirun (Open MPI) 1.10.2
>
>
> ?
>
>
mio.so
#3 0x78b97400bc72 in mca_io_romio_dist_MPI_File_read () from
/usr/pkg/lib/openmpi/mca_io_romio.so
#4 0x78b988e72b38 in PMPI_File_read () from /usr/pkg/lib/libmpi.so.12
#5 0x004013a4 in main (argc=2, argv=0x7f7fff7b0f00) at sample.c:63
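For readers following the trace: the reads in this thread go through MPI_File_read with the buffer passed as MPI_BOTTOM and a datatype carrying absolute displacements. A minimal sketch of that call pattern; the file name, element count and hindexed layout are assumptions, not the original sample.c.

#include <mpi.h>

/* Minimal sketch: read into a buffer addressed via MPI_BOTTOM and a
 * datatype built from absolute addresses (MPI_Get_address +
 * MPI_Type_create_hindexed).  Error handling omitted; the input file
 * is assumed to exist. */
int main(int argc, char *argv[])
{
    MPI_File fh;
    MPI_Datatype abstype;
    double buf[100];
    MPI_Aint addr;
    int blocklen = 100;

    MPI_Init(&argc, &argv);

    /* Describe buf by its absolute address so the read can use
     * MPI_BOTTOM as its origin. */
    MPI_Get_address(buf, &addr);
    MPI_Type_create_hindexed(1, &blocklen, &addr, MPI_DOUBLE, &abstype);
    MPI_Type_commit(&abstype);

    MPI_File_open(MPI_COMM_WORLD, "input.dat", MPI_MODE_RDONLY,
                  MPI_INFO_NULL, &fh);

    /* One element of the absolute datatype, buffer = MPI_BOTTOM. */
    MPI_File_read(fh, MPI_BOTTOM, 1, abstype, MPI_STATUS_IGNORE);

    MPI_File_close(&fh);
    MPI_Type_free(&abstype);
    MPI_Finalize();
    return 0;
}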
Thanks.
--
Nicolas Joly
Cluster & Computing Group
(Open MPI) 1.10.1
I discovered it with 1.10.1, and was able to reproduce it with the older
versions 1.6.5 and 1.8.8 I had handy.
Thanks.
> > On May 19, 2016, at 9:06 AM, Nicolas Joly wrote:
> >
> >
> > Hi,
> >
> > I just discovered a small issue with MPI_Finalize(). When sanity
pthread_cond_timedwait() or pthread_cond_wait()) by another thread
results in undefined behavior.
[...]
Any expected issue in adding an opal_mutex_unlock() call before
destroying the opal_mutex_t object?
Thanks.
[1]
http://pubs.opengroup.org/onlinepubs/9699919799/functions/pthread_mutex_destroy.html
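The POSIX rule quoted above, shown as a standalone sketch with plain pthreads rather than the actual opal_mutex_t wrappers:

#include <pthread.h>

/* Minimal sketch: POSIX makes it undefined behaviour to destroy a
 * mutex that is still locked (or referenced by a thread blocked in
 * pthread_cond_*wait()).  Unlocking before the destroy keeps the
 * teardown well defined. */
int main(void)
{
    pthread_mutex_t m;

    pthread_mutex_init(&m, NULL);
    pthread_mutex_lock(&m);

    /* ... critical section during shutdown ... */

    pthread_mutex_unlock(&m);   /* release before tearing it down */
    pthread_mutex_destroy(&m);  /* now well defined per POSIX */

    return 0;
}

Unlocking first keeps the destroy well defined, which is the ordering the proposed opal_mutex_unlock() call is meant to ensure.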
--
Nicolas Joly