On 3/30/2011 2:11 PM, Milan Zamazal wrote:
> I use OpenAFS 1.4.12.1 from Debian in a home environment.
>
> I connect from the client machine to both a network file server running
> on another machine and a local file server running on the same machine
> as the client. I want to cache much data from the network server, so I
> use a disk cache. I don't want to cache any data from the local server
> in the disk cache because this means the data are copied from a local
> drive to the same local drive during transfer, resulting in poor
> performance.
>
> Is there a way to instruct the client to use a disk cache for remote
> files and to use a memory cache (or not to use any cache at all) for
> files stored locally? Or to solve the problem in another way?
As I am sure you are aware, the AFS client must read and write all data through the file servers via the AFS3 RPC protocol in order to perform authentication and enforce access rights. In the current implementations of the AFS client there is no way for the client to access the contents of AFS volumes directly.

Data stored in AFS volumes is location independent from the perspective of the client; volumes are permitted to move between servers while in use. As such, there is no concept of local versus remote data.

As Marc points out elsewhere in the thread, the 1.6 branch contains a "cache bypass" feature that permits the AFS cache manager to read data without storing it in the local cache. However, even if the cache manager could avoid caching data read from the local file server, there would still be significant overhead for those local reads compared to reading from a local disk file system.

I would like to hear a better description of your use case. Why is there an AFS file server running on a client machine that is a heavy consumer of data stored in AFS volumes on that same file server? Perhaps AFS as it is implemented today is not the best choice of file system for your application.

Jeffrey Altman
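For reference, a minimal sketch of the knobs involved. The client has a single cache selected when afsd starts, so the choice is per-machine, not per-fileserver; paths and sizes below are illustrative, and the `fs bypassthreshold` command is the 1.6-branch cache-bypass control mentioned in the thread.

```shell
# Config fragment (illustrative values; adapt to your installation).

# The disk cache is selected by /etc/openafs/cacheinfo,
# as mountpoint:cachedir:size-in-1K-blocks, e.g.:
#   /afs:/var/cache/openafs:500000

# A memory cache (again for the whole client, not per-volume) is chosen
# by starting afsd with -memcache instead of a disk cache:
#   afsd -memcache -blocks 65536

# On the 1.6 branch, cache bypass is set at run time by size threshold;
# reads of files larger than the threshold skip the cache:
#   fs bypassthreshold 1M
```

Note there is no option that applies one cache policy to volumes served by one file server and a different policy to volumes served by another.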
