On 3/26/2010 2:22 PM, Holger Rauch wrote:
> Hi everybody,
> 
> I installed OpenAFS for Windows 1.5.72 on a Windows 7 Professional (64
> bit) system with all system updates applied in conjunction with
> Kerberos for Windows (KfW) and the Network Identity Manager. I
> downloaded all packages from Secure Endpoints' web site. The PC
> consists of current HW (Intel Core2Duo, 4 to 8 GB RAM, etc.).

1.5.74 is the current release.

> Unfortunately, the maximum transfer speed on Windows is about 8-10
> MB/sec when I copy local files to an OpenAFS volume. 

This is common when encryption is in use, jumbograms are disabled, and
the file server is running with its defaults.

> All involved
> network components are gigabit capable and I don't experience this
> problem when doing file transfers to a native ext3 filesystem using
> scp, for example. (I consider this comparable since the transfers are
> encrypted as well, in the case of SSH/SCP even with a much stronger
> encryption algorithm).

The performance of a secure transfer protocol is very roughly determined
by the cost of the encryption algorithm itself (fcrypt, DES, RC4, AES,
etc.), the encryption mode, and the number of encryption operations that
must be performed (that is, the size of the chunks).  For AFS without
jumbograms the data portion of a packet is under 1444 bytes per
encryption operation.  For something like SSH/SCP, which is a TCP/IP
stream protocol instead of packet based, the number of operations is
significantly smaller.
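
To put rough numbers on it, here is a back-of-the-envelope sketch (the
32 KiB chunk size used for the SSH side is an assumption for the sake of
comparison, not a measured value):

#include <stdio.h>

int main(void)
{
    /* Encryption operations needed to move 1 GiB:
     * Rx without jumbograms encrypts at most ~1444 bytes of data per
     * packet; an SSH-style stream is assumed to encrypt 32 KiB chunks. */
    const double total      = 1024.0 * 1024.0 * 1024.0;  /* 1 GiB */
    const double rx_payload = 1444.0;
    const double ssh_chunk  = 32.0 * 1024.0;

    printf("Rx/fcrypt operations:  %.0f\n", total / rx_payload);
    printf("SSH-style operations:  %.0f\n", total / ssh_chunk);
    return 0;
}

The per-operation cost of the cipher then multiplies a count that is
more than twenty times larger on the Rx side.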

The AFS Rx fcrypt algorithm also does not lend itself particularly well
to pipelined operation.

Disabling encryption of the data will result in significant performance
improvements, at the cost of sending your data in the clear.
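
For example, from a command prompt on the Windows client (fs getcrypt
reports the current setting; re-enable with "fs setcrypt on" when the
data must be protected on the wire):

    fs getcrypt
    fs setcrypt off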

> The server is a Debian Lenny system running OpenAFS 1.4.11 obtained
> from the Debian backports repository. The system is a QNAP TS-809 Pro
> with an external HD for the OS. The QNAP HDs are Seagate 7200k RPM
> drives of the Enterprise series used with SW RAID 5, totalling 6.4 TB
> in capacity. OpenAFS volumes reside on /vicep partitions which in turn
> reside on Logical Volumes (LVs). (We're only 25 users, so this scheme
> works perfectly well and has the advantage that the size of the
> underlying LV implicitly determines the quota, so I don't have to
> worry about setting OpenAFS quotas).
> 
> The questions are thus:
> 
> - Can I change the speed of the loopback adapter on Windows 7? If so, how?

The Microsoft loopback adapter has no speed (it is virtual and does not
report one).  When an adapter does not report a link speed, Windows
displays it as 10Mbit.  This has no impact on AFS performance.

> - Why does the loopback adapter's speed default to 10 Mb instead of
>   the speed of the physical interface (Gb) at all?

The loopback adapter is not associated with the physical interface.  All
AFS Rx traffic to the file server is sent over the physical interfaces.
The loopback adapter is used as a method of binding a NetBIOS name
"AFS" that is visible only to the local machine.  If the name were
published on a physical adapter, there would be no mechanism for
providing a common UNC name on all machines.
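
For example, the same path works unchanged on every client machine (the
cell name below is a placeholder, and this assumes the default "all"
submount, which maps to the root of the AFS namespace):

    net use Z: \\AFS\all
    dir \\AFS\all\your.cell.name\some\directory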

> - What's a "normal" transfer speed for OpenAFS when run in gigabit
>   network environments?

On 64-bit Win7, the SMB-to-AFS gateway is limited to approximately
65MB/second; this limit has nothing to do with the AFS Cache Manager to
File Server interface speed.

The AFS Redirector on 64-bit Win7 is currently producing sustained read
performance from the cache manager of 380MB/sec.  The AFS Redirector
implementation is not publicly available as yet.  When the 1.7 branch is
ready for code submissions the AFS Redirector code base will be
integrated and test releases will be made available.

The maximum throughput of an Rx connection (without encryption), on a
network that supports an MTU size of 9000 octets with clients and file
servers configured to use jumbograms, is somewhere between 260MB/sec and
280MB/sec.
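
If you want to confirm that a 9000-octet MTU actually holds end to end,
a non-fragmenting ping from the Windows client is a quick sanity check
(8972 = 9000 minus 28 octets of IP and ICMP headers; the host name is a
placeholder):

    ping -f -l 8972 fileserver.example.org

If that fails while smaller sizes succeed, something in the path is not
passing jumbo frames.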

---

In reading the rest of the thread, there appears to be a mixture of
reports describing Windows and Unix cache manager performance numbers.
In this e-mail you are asking specifically about Windows.  I'm not sure
that the discussion of tuning Unix cache managers is of any value to you.

On Windows, there is one type of cache.  It is a memory-mapped paging file.
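
Its size is set through the client's registry parameters; if I recall
the location correctly it is the service's Parameters key, with the
value expressed in kilobytes (please verify against the release notes
for your release):

    HKLM\SYSTEM\CurrentControlSet\Services\TransarcAFSDaemon\Parameters
        CacheSize    REG_DWORD    0x00100000    (a 1 GB cache, in KB)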

As others have mentioned, accessing lots of small files versus a small
number of large files will also result in lower bandwidth numbers.  This
is especially true on Windows because proper implementation of file
sharing semantics requires that file locks be obtained for each and
every file open.  This overhead is not present with SSH/SCP.

Jeffrey Altman

