Thanks Rob. I'm glad you have verified this. It happens only with 2
PVFS2 servers for me. I've done some debugging on it in the past, but
only came up with a few clues.
1) It is the MPI_File_delete in hpio where it is hanging. Basically, on
the PVFS2 server, I think that the reads never get fulfilled
On Thu, Feb 23, 2006 at 12:38:26PM -0600, Avery Ching wrote:
> By the way, is the datatype branch going to make it to ROMIO at some
> point? The major bug I've been trying to fix shows up when using the
> datatype I/O branch of the PVFS2 ROMIO driver with 2 pvfs2 servers.
>
> mpiexec -n 2 ./hpio-debug -o
Thanks a lot guys. When I rebuilt everything, it (the total_completed)
was all fixed (my bad).
It would be great to add the hpio test to the nightly tests.
You might want to run the smaller version (hpio-debug) with the -v 1 -d
pvfs2:/mnt/pvfs2 options, which will test a whole bunch of
On Feb 23, 2006, at 11:19 AM, Robert Latham wrote:
On Wed, Feb 22, 2006 at 07:46:09PM -0600, Avery Ching wrote:
> Hi guys. I'm sending you a link for a somewhat complex I/O
> benchmark I wrote for doing all kinds of noncontiguous I/O through
> MPI-IO. It's called HPIO and does all kinds of weird tests. Anyway, it
> is really overkill for this small I/O bug, but the nice thing is that I
> have a verify mode
nevermind - i'm not reading carefully. -- rob
small i/o is only used in contig/contig cases, so the test you mention
isn't going to trigger what avery is describing.
rob
On Wed, Feb 22, 2006 at 03:22:35PM -0600, Avery Ching wrote:
Sure. I actually have a single client just doing a small contiguous
write of 50 bytes, but I think it occurs for pretty much any small I/O
operation I do. It seems to happen with any number of PVFS2 servers.
Both the memory request and file request structures are contig, I think.
The total_completed
Hi Avery,
Can you let me know what version of pvfs2 you're using, and also the
patterns of the reads you're doing (the memory request and file
request structures passed to sys_read)?
Thanks,
-sam
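For context, the "memory request and file request structures" Sam asks about are PVFS2 system-interface request descriptors. A hedged sketch of what a contiguous pair might look like; the function names and argument orders below are recalled from memory and not verified against any particular pvfs2 release, so treat this as illustration only:

```c
/* Illustrative fragment only -- not checked against pvfs2 headers.
 * PVFS2's request interface mirrors the MPI datatype constructors. */
PVFS_Request mem_req, file_req;

/* 50 contiguous bytes in memory and 50 contiguous bytes in the file,
 * matching the small contiguous write described in this thread. */
PVFS_Request_contiguous(50, PVFS_BYTE, &mem_req);
PVFS_Request_contiguous(50, PVFS_BYTE, &file_req);

/* These are the two structures a caller would hand to sys_read or
 * sys_write, e.g. (argument order is an assumption):
 *   PVFS_sys_read(ref, file_req, 0, buf, mem_req, &creds, &resp_io);
 */
```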
On Feb 22, 2006, at 2:05 PM, Avery Ching wrote:
Hi guys,
I've been trying to debug a nasty noncontiguous I/O problem in PVFS2
and noticed a problem with the small I/O case. It appears that the
resp_io.total_completed = 0 in the write case even though some data
seems to be written to the file. I was thinking it might be because the
small