Hi Phil,
> These patches are based on part of the functionality Murali originally
> contributed in this email thread:
>
> http://www.beowulf-underground.org/pipermail/pvfs2-developers/2005-November/001624.html
Thanks for the patches!
> This does not include any of the provenance support. It just ...
Bart,
Do tests like dbench or concurrent directory renames and listings pass
with the ncache patches?
I noticed that renames of files/directories are not handled by the patch
(in which case cached names of subdirectories/files under that path may
have to be invalidated/updated).
Thanks,
Murali
I was certainly able to identify at least one bug, I think. The small
I/O path is selected based on whether the amount of data going to the
I/O servers is below max_unexp_payload. However, when the request gets
to the server, small-io.sm calls PINT_Process_request() once and then
calls job_trov...
Sam,
Hmm, it is not fixed.
I can get you some logs if you are interested.
Perhaps Phil's bug was not related to this one... :(
Thanks,
Murali
On Wed, 14 Jun 2006, Sam Lang wrote:
>
> Hi All,
>
> I've attached a patch of the sys-io state machine from trunk that
> doesn't post the flow until the initial response is received.
On Jun 12, 2006, at 8:45 PM, Sam Lang wrote:
On Jun 12, 2006, at 3:55 PM, Bradley W Settlemyer wrote:
Is there a strong reason not to #define these strings somewhere?
I don't think so. Making them #defines would mean either replacing
the strings in the array with the defines, or creating ...
I've seen similar behavior to this using Myrinet (jazz) also. I haven't
ever gotten it with TCP though, so maybe mine was due to something a
little different (basically, it looked like a message with the tag was
sent, but never delivered). Are you certain that it's just a BMI tcp issue?
Cheers,
I went ahead and committed the changes I made (with the HANDLE_COUNT
flag).
-sam
On Jun 13, 2006, at 10:43 AM, Sam Lang wrote:
On Jun 13, 2006, at 9:14 AM, Rob Ross wrote:
Sam Lang wrote:
On Jun 12, 2006, at 10:07 PM, Rob Ross wrote:
I know we're trying to keep the # of DBs down, but w...
Hi All,
I've attached a patch of the sys-io state machine from trunk that
doesn't post the flow until the initial response is received. Phil
and Murali, can you let me know if this fixes your respective
problems? Also, if you get a chance Pete, can you see what kind of
performance degradation ...
On Jun 14, 2006, at 8:21 AM, Phil Carns wrote:
If we did this within BMI, we would be paying an extra round trip
latency time for each large TCP message, which we should probably
try to avoid.
I vote for just changing the ordering of sys-io.sm so that it does
not post write flows until a positive write ack is received from the server.
[EMAIL PROTECTED] wrote on Wed, 14 Jun 2006 15:21 +0200:
> If we did this within BMI, we would be paying an extra round trip
> latency time for each large TCP message, which we should probably try to
> avoid.
>
> I vote for just changing the ordering of sys-io.sm so that it does not
> post write flows until a positive write ack is received from the server.
I am going to try to find a brute force way to serialize I/O from each
pvfs2-client-core just to see if that solves the problem (maybe only
allowing one buffer between pvfs2-client-core and kernel, rather than
5). If that does look like it fixed the problem, then we need a more
elegant solution.
If we did this within BMI, we would be paying an extra round trip
latency time for each large TCP message, which we should probably try to
avoid.
I vote for just changing the ordering of sys-io.sm so that it does not
post write flows until a positive write ack is received from the server.
Ok. Well we screwed up here. We've either got to be able to pull that
data off the wire (presumably at the BMI layer) or we've got to ACK for
large messages (either in BMI or flow or elsewhere).
Suggestions on which approach to take and where to implement? Probably
most straightforward to do this ...
Sorry - "rendezvous" is the wrong terminology here for what is happening
within bmi_tcp at the individual message level. It doesn't implicitly
exchange control messages before putting each buffer on the wire.
bmi_tcp will send any size message without using control messages to
handshake within bmi_tcp.