On 9/21/2011 7:09 AM, Anders Magnusson wrote:
>
> In the hunt for oddities regarding the new IFS Windows client I have
> observed a problem causing bad performance, and hopefully someone has
> some idea about what is going on.
>
> Environment:
> Server: OpenAFS 1.4.12.1, CentOS 5.3
> Client: Windows 7, OpenAFS 1.7.1
>
> The test case is to write an ISO image (700MB) to afs from local disk.
What size is the cache? Is the ISO larger than the cache? What is the
chunksize? What is the blocksize?

> If the switch port is set to 100Mbit I will get ~3Mbyte/s, but if it is
> set to 1Gbit then I get ~10Mbyte/s.
> Both these numbers are much lower than they should be, and more
> precisely I cannot understand why the speed in 100Mbit configuration
> becomes much lower than when using 1Gbit.

More than likely it is because the RPC round trip time is slower and
therefore the latency is longer.

> Before someone asks; there are no network limits here and both client
> and server are on the same subnet.
>
> I have run tcpdump on both client and server and seen this traffic
> "pattern":
>
> For 100Mbit:
> - A data packet is sent out periodically at an almost exact rate of one
> 1472-byte packet per 420 microseconds, which gives something close to
> 3Mbyte/s
>
> For 1Gbit:
> - The same as for 100Mbit except that the packet rate is one packet
> per 91 microseconds.
>
> The ack packet from the file server is sent back 12 microseconds after
> every second data packet.

How long does it take for each StoreData RPC to complete? (A
back-of-the-envelope check of the packet-rate arithmetic above is
sketched after this message.)

> I have uninstalled the QoS module on the Windows interface.
>
> Any hints anyone? I think this smells of traffic shaping due to the
> quite exact transmit rate, but since the QoS module is uninstalled and
> the behaviour is seen on the Windows network interface I have no clue
> where it may be.
>
> A side note: Going via an SMB-AFS gateway on the same network gives
> significantly better performance.

The SMB client behavior is very different. The SMB redirector sends
data in 64K chunks to the SMB server, which are then written to the
file server semi-synchronously. As a result there is much less pressure
on the cache regardless of its size.

For the IFS client at present, all 700MB will go into the Windows page
cache and will swallow the entire AFS cache at once. Things degrade at
that point while the client waits for each RPC to complete in order to
make room for new data.

If your cache size is large enough and the file servers are responsive,
it is possible to obtain 40MB/sec write speeds on 1Gbit links.

I am aware of where the bottlenecks are, but it is going to take time
for me to address them. In the meantime I will refer people to a blog
post I wrote back in March 2008:

http://blog.secure-endpoints.com/2008/03/i-want-my-openafs-windows-client-to-be.html

Jeffrey Altman
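
A minimal Python sketch of the arithmetic behind the numbers above. The
1472-byte payload and the 420/91 microsecond inter-packet gaps are the
tcpdump figures quoted in this message; the chunk size and RPC
round-trip time in the last example are purely illustrative assumptions,
not measurements from this thread.

    #!/usr/bin/env python3
    # Back-of-the-envelope arithmetic for the packet timing quoted in
    # this thread: one 1472-byte payload every 420 us (100Mbit) or
    # every 91 us (1Gbit).

    def rate_mb_per_s(payload_bytes, gap_us):
        """Sustained rate implied by one packet of payload_bytes every gap_us."""
        return payload_bytes / gap_us  # bytes per microsecond == MB/s

    print("100Mbit:", round(rate_mb_per_s(1472, 420), 1), "MB/s")  # ~3.5
    print("1Gbit:  ", round(rate_mb_per_s(1472, 91), 1), "MB/s")   # ~16.2

    # If each chunk-sized store must complete before the next one is
    # issued, throughput is bounded by chunk_size / round-trip time.
    # The 1MiB chunk and 25ms RTT below are illustrative assumptions.
    def rpc_bound_mb_per_s(chunk_bytes, rtt_ms):
        return chunk_bytes / (rtt_ms / 1000.0) / 1e6

    print("1MiB chunk, 25ms RPC:",
          round(rpc_bound_mb_per_s(1024 * 1024, 25), 1), "MB/s")

The 100Mbit case lands on the ~3Mbyte/s that was measured, while the raw
1Gbit packet rate alone would imply roughly 16Mbyte/s; the measured
~10Mbyte/s therefore suggests idle gaps between bursts, which would be
consistent with time spent waiting for StoreData RPCs to complete.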