At 02:28 PM 12/17/2007, Jeffrey Altman wrote:
For each file operation CIFS sends anywhere from three to five requests.
 As a result there is significant overhead that increases the round trip
time and limits the overall throughput.

What I'm getting at is that once a 100 MB file is opened and is being actively read from the server, if at any point the CPU is sitting idle, waiting on the file contents from the server, then there is nothing you can do to improve on that. CPUs and memory bandwidth are so fast these days that the biggest "cost" is in I/O, where the CPU sits and idles its time away. The only thing you gain by improving the fetch/request algorithm is making Windows more efficient at performing other applications' operations, not your own.
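
Just to put rough numbers on that (a quick Python sketch; every figure below is an assumption of mine, not a measurement), the throughput ceiling falls straight out of the round-trip time and the number of requests per operation, no matter how fast the CPU is:

    # Assumed numbers: a 100 MB file, 64 KB fetched per request, ~1 ms round
    # trip to the file server, and the 3-5 CIFS requests per operation that
    # Jeffrey mentions.
    FILE_SIZE   = 100 * 1024 * 1024      # bytes
    CHUNK_SIZE  = 64 * 1024              # bytes fetched per request
    RTT         = 0.001                  # seconds per round trip (assumed)
    REQS_PER_OP = 4                      # CIFS requests per operation (3-5)

    fetches   = FILE_SIZE // CHUNK_SIZE
    wait_time = fetches * REQS_PER_OP * RTT      # time spent waiting on the wire
    ceiling   = FILE_SIZE / wait_time / (1024 * 1024)

    print(f"{fetches} fetches, {wait_time:.1f} s of waiting")
    print(f"effective ceiling: about {ceiling:.0f} MB/s")

With those assumed numbers the transfer tops out around 16 MB/s, and the only way to raise the ceiling is to send fewer requests or overlap them, not to buy a faster CPU.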

> On a related note, I've always wondered why the AFS file servers do not
> have "read cache" on the server side, so when one person requests a
> file, the second person sees the same file from the servers cache.  Or
> is that handled by the server OS cache (via the swap)?

I would hope your file server's operating system provides file caching.

Block caching, not file caching. The problem here is that the OS cache cannot see inside the contents of the /vicep partition to perform file-level cache operations.
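
To make the distinction concrete, this is roughly what a file-level read cache inside the fileserver would mean: entries keyed by the AFS file ID rather than by disk block (a sketch only, with names and structure of my own invention, not anything from the fileserver code):

    from collections import OrderedDict

    class FileReadCache:
        # LRU cache of whole files, keyed by AFS FID (volume, vnode, uniquifier).
        def __init__(self, max_bytes):
            self.max_bytes = max_bytes
            self.used = 0
            self.entries = OrderedDict()          # fid -> file contents

        def read(self, fid, read_from_vicep):
            if fid in self.entries:               # second reader hits the cache
                self.entries.move_to_end(fid)
                return self.entries[fid]
            data = read_from_vicep(fid)           # first reader pays the disk I/O
            self.entries[fid] = data
            self.used += len(data)
            while self.used > self.max_bytes:     # evict least recently used file
                _, old = self.entries.popitem(last=False)
                self.used -= len(old)
            return data

The OS cache under /vicep is already doing the block half of this; what it cannot do is reason in terms of AFS files, because it never sees them.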

And when you copy the MSM or MSI to local disk it works, and when you run the install from a .readonly volume it works, even though that is a volume in which the client user only has 'rl' privileges.

Not always. The error mentioned in the previous message came from just such a volume.
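
For what it's worth, the failure looks like an ordinary byte-range lock being refused. Something along these lines, although the real installer goes through the Win32 LockFileEx API rather than Python, so treat the details (and the path) as my own illustration:

    import msvcrt, os

    # Hypothetical path to a package sitting in a read/write volume.
    fd = os.open(r"\\afs\example.org\software\package.msi", os.O_RDWR)
    os.lseek(fd, 0, os.SEEK_SET)
    try:
        # Ask for a non-blocking byte-range lock on the first 4 KB.
        msvcrt.locking(fd, msvcrt.LK_NBLCK, 4096)
    except OSError as err:
        # This is the sort of refusal the MSI engine turns into an install error.
        print("byte-range lock refused:", err)
    finally:
        os.close(fd)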

If you try installing from a read/write volume and the application can't obtain the lock it wants, you get that error.

We don't have byte range locking on the file servers.  When we do, this
will get better.

In the meantime, don't use read/write volumes for distribution of .MSI/.MSM packages.

We never do that; we always use read-only ("RO") replicas for distributing application packages, and the access is usually "system:anyuser rl" or "install_account rl". There should be no locking issues here.
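
Concretely, publishing a package volume looks like this on our side (server, partition, volume, and path names below are just examples):

    vos addsite fs1.example.org /vicepa apps.packages    # add a read-only site
    vos release apps.packages                            # push the .readonly replica
    fs setacl /afs/example.org/apps/packages system:anyuser rl

The clients install from the .readonly path, so the only rights in play are 'rl'.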

Rodney

_______________________________________________
OpenAFS-info mailing list
OpenAFS-info@openafs.org
https://lists.openafs.org/mailman/listinfo/openafs-info
