"Gabriel, Edgar" writes:
> a) if we detect a Lustre file system without flock support, we can
> print an error message. Completely disabling MPI I/O is not possible
> in the ompio architecture at the moment, since the Lustre component
> can disqualify itself, but the generic Unix FS component …
"Latham, Robert J." writes:
> it's hard to implement fcntl-lock-free versions of atomic mode and
> shared file pointers, so file systems like PVFS don't support those
> modes (and return an error indicating such at open time).
Ah. For some reason I thought PVFS had the support to pass the tests …
For what it's worth, I found the following from running ROMIO's tests
with OMPIO on Lustre mounted without flock (or localflock). I used 48
processes on two nodes with Lustre for tests which don't require a
specific number.
OMPIO fails tests atomicity, misc, and error on ext4; it additionally
fails …
"Gabriel, Edgar" writes:
> Ok, thanks. I usually run these tests with 4 or 8, but the major item
> is that atomicity is one of the areas that are not well supported in
> ompio (along with data representations), so a failure in those tests
> is not entirely surprising.
If it's not expected to work …
"Gabriel, Edgar" writes:
> Hm, thanks for the report, I will look into this. I did not run the
> romio tests, but the hdf5 tests are run regularly, and with 3.1.2 you
> should not have any problems on a regular Unix fs. How many processes
> did you use, and which tests did you run specifically? Th…
I said I'd report back about trying ompio on Lustre mounted without flock.
I couldn't immediately figure out how to run MTT. I tried the parallel
hdf5 tests from hdf5 1.10.3, but I got errors with that even with the
relevant environment variable set to put the files on (local) /tmp.
Then it occurred …
"Gabriel, Edgar" writes:
> It was originally for performance reasons, but this should be fixed
> at this point. I am not aware of correctness problems.
>
> However, let me try to clarify your question: what precisely do you
> mean by "MPI I/O on Lustre mounts without flock"? Was the Lustre …
Is romio preferred over ompio on Lustre for performance or correctness?
If it's relevant, the context is MPI-IO on Lustre mounts without flock,
which ompio doesn't seem to require.
Thanks.
___
users mailing list
users@lists.open-mpi.org