At 03:12 PM 12/17/2007, Jeffrey Altman wrote:
> That is not true at all. If the CIFS interface reads 64KB at a time
> and refuses to request the next 64KB until the previous one has been
> delivered, while the CM is reading 1MB at a time, then there is significant
> overhead caused by the CIFS interface. Why should I lose hundreds of
> microseconds per 64KB simply because the CIFS protocol is dumb?
I believe you, I was just wondering aloud.
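(Purely to make the cost concrete, here is a rough sketch in C of the kind of
read-ahead that would hide that per-64KB round trip. cm_FetchRange() and the
struct are made-up names for the example, not the actual CM interface; the
point is just that sequential 64KB CIFS reads get served from a 1MB buffer
that is refilled once per megabyte instead of paying a fetch per 64KB.)

/*
 * Illustrative sketch only -- not OpenAFS code.  cm_FetchRange() is a
 * hypothetical stand-in for whatever call pulls a chunk from the file
 * server in one round trip.
 */
#include <stddef.h>
#include <string.h>

#define SMB_READ_SIZE  (64 * 1024)       /* what the CIFS client asks for */
#define CHUNK_SIZE     (1024 * 1024)     /* what the CM fetches per RPC   */

struct readahead {
    char   buf[CHUNK_SIZE];
    size_t base;        /* file offset of buf[0]              */
    size_t valid;       /* bytes of buf[] holding file data   */
};

/* hypothetical: one round trip to the file server */
extern size_t cm_FetchRange(int fd, size_t offset, char *dst, size_t len);

/* Serve one 64KB CIFS read; refill the 1MB buffer only on a miss.
 * 64KB-aligned reads never straddle a chunk, so at most one fetch
 * is paid per megabyte of sequential reading. */
size_t smb_read64k(int fd, size_t offset, char *out, struct readahead *ra)
{
    if (offset < ra->base ||
        offset + SMB_READ_SIZE > ra->base + ra->valid) {
        ra->base  = offset - (offset % CHUNK_SIZE);
        ra->valid = cm_FetchRange(fd, ra->base, ra->buf, CHUNK_SIZE);
    }
    size_t used  = offset - ra->base;
    size_t avail = used < ra->valid ? ra->valid - used : 0;
    size_t n     = avail < SMB_READ_SIZE ? avail : SMB_READ_SIZE;
    memcpy(out, ra->buf + used, n);      /* short read near EOF is fine */
    return n;
}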
> You wouldn't do file caching in the file server; you would perform
> block caching. I'm not going to cache a 250GB file, I'm going to cache
> the parts of the file that have been used recently.
True.
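(Again, just to illustrate block caching rather than whole-file caching: a
minimal LRU block cache in C, keyed by file id and chunk-aligned offset, so
only the recently used pieces of a 250GB file ever sit in memory.
fetch_chunk_from_disk() and the sizes are invented for the sketch and are not
from the fileserver code.)

/*
 * Illustrative sketch only: a tiny block cache holding a bounded number of
 * chunk-sized pieces of files, evicting the least recently used slot.
 */
#include <stdint.h>

#define CHUNK   (64 * 1024)
#define NSLOTS  256                  /* bounded memory, whatever the file size */

struct slot {
    uint64_t fid;                    /* which file                 */
    uint64_t off;                    /* chunk-aligned offset       */
    uint64_t last_used;              /* for LRU eviction           */
    int      valid;
    char     data[CHUNK];
};

static struct slot cache[NSLOTS];
static uint64_t clock_tick;

/* hypothetical backing-store read */
extern void fetch_chunk_from_disk(uint64_t fid, uint64_t off, char *dst);

/* Return the cached chunk covering (fid, off), loading it on a miss. */
char *block_cache_get(uint64_t fid, uint64_t off)
{
    int i;
    struct slot *victim = &cache[0];

    off -= off % CHUNK;
    for (i = 0; i < NSLOTS; i++) {
        if (cache[i].valid && cache[i].fid == fid && cache[i].off == off) {
            cache[i].last_used = ++clock_tick;   /* hit: refresh recency */
            return cache[i].data;
        }
        if (!cache[i].valid || cache[i].last_used < victim->last_used)
            victim = &cache[i];                  /* remember LRU candidate */
    }

    /* miss: evict the least recently used slot and fill it */
    fetch_chunk_from_disk(fid, off, victim->data);
    victim->fid = fid;
    victim->off = off;
    victim->valid = 1;
    victim->last_used = ++clock_tick;
    return victim->data;
}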
>> Not always. The previous error mentioned in the last message was from
>> just such a volume.
> Then file a bug report, because otherwise I have no idea what issues I
> should be looking for.
The error mentioned is random enough that tracking it down just leads to a
can of worms involving routers, switches, packets, server logs, etc. Filing
bug reports for non-repeatable errors is kind of like shooting in the dark.
It's a cost-benefit problem. I won't say any more about this error until it
impacts me to the point of being intolerable.
Rodney