Steve Gaarder writes:
...
Then try copying a large file from AFS to the client's local storage,
...
Now it gets weird. Iperf shows the same performance with or without
IPSEC. But if I run iperf under IPSEC, OpenAFS performance jumps back up
to normal and stays there for several minutes.
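For what it's worth, the iperf comparison can be scripted. This is only a sketch (the hostname is a placeholder, not from the thread); the awk step just shows how to pull the bandwidth figure out of a typical iperf result line for side-by-side comparison:

```shell
#!/bin/sh
# Placeholder host; run "iperf -s" on the far end first.
SERVER=afs-server.example.edu

# Measure with and without IPSEC (toggle the policy, then rerun):
#   iperf -c "$SERVER" -t 30
# A typical result line iperf prints looks like this:
sample='[  3]  0.0-30.0 sec  3.29 GBytes   941 Mbits/sec'

# Pull out just the bandwidth figure for comparison:
bw=$(echo "$sample" | awk '{print $(NF-1), $NF}')
echo "$bw"    # -> 941 Mbits/sec
```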
I fired up Wireshark and took a look. I set up IPSEC to use
authentication only, so I can still see inside the packets. What I see,
on both server and client, is this:
When performance is poor, I see two fetch-data-64 packets from the server
followed by an ACK packet from the client. There ...
I run a network of machines running Scientific Linux 6 (a
Red Hat Enterprise clone). We have both AFS and NFS file
servers. In an effort to add some security to NFS, we are
using IPSEC.
IPSEC may or may not be a good idea, but in an ideal world you
would be using NFSv4 with a kernel newer ...
I run a network of machines running Scientific Linux 6 (a Red Hat
Enterprise clone). We have both AFS and NFS file servers. In an effort
to add some security to NFS, we are using IPSEC. I have discovered that
IPSEC, specifically Red Hat's NETKEY protocol stack, sends OpenAFS
performance ...
Out of curiosity, what is the mtu in the ipsec network? is netkey implemented
similarly to ppp, namely that it encapsulates traffic and thus drops below
a standard mtu?
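A quick back-of-the-envelope on that question: AH in transport mode with HMAC-SHA1-96 adds 24 bytes per packet (12 bytes of fixed AH fields plus a 12-byte ICV), so if netkey did shave the effective MTU the way ppp does, you would expect numbers like these. A sketch only; the probe commands in the comments are the standard tools, not something reported in the thread:

```shell
#!/bin/sh
# AH transport mode (authentication only, as used above):
# 12 bytes of fixed AH header fields + 12-byte HMAC-SHA1-96 ICV.
LINK_MTU=1500
AH_OVERHEAD=24
echo "effective payload MTU under AH: $((LINK_MTU - AH_OVERHEAD))"   # -> 1476

# To probe the real path MTU with DF set
# (ping payload = MTU - 28 bytes of IP + ICMP headers):
#   ping -M do -s $((LINK_MTU - 28)) other-host
# And to see what the kernel thinks for a given peer:
#   ip route get <peer-ip>    # look for an "mtu" attribute on the route
```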
On Mon, Dec 9, 2013 at 11:24 AM, Steve Gaarder gaard...@math.cornell.edu wrote:
I run a network of machines running Scientific Linux ...
On Mon, 9 Dec 2013, Andrew Deason wrote:
On Mon, 9 Dec 2013 11:24:55 -0500 (EST)
Steve Gaarder gaard...@math.cornell.edu wrote:
Then try copying a large file from AFS to the client's local storage,
e.g. with rsync --progress. You will see performance steadily drop to
miserable levels.
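That test can be staged with a throwaway file. A minimal sketch (the /afs path in the comment is a placeholder for your cell; the runnable part uses a local stand-in directory so the commands work anywhere):

```shell
#!/bin/sh
# Stage a local source directory standing in for the AFS volume.
SRC=/tmp/afs-perf-src; DST=/tmp/afs-perf-dst
mkdir -p "$SRC" "$DST"

# Make a 100 MB test file (on a real cell this would live in AFS):
dd if=/dev/zero of="$SRC/bigfile" bs=1M count=100 2>/dev/null

# The actual test from the thread -- watch the rate in the output:
#   rsync --progress /afs/yourcell/path/bigfile /tmp/
# Local stand-in using cp, timed so you can compare runs:
time cp "$SRC/bigfile" "$DST/bigfile"
cmp -s "$SRC/bigfile" "$DST/bigfile" && echo "copy OK"
```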