That's in addition to jumbo frames (upping your MTU), right? -- Rob
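(For reference, a rough sketch of enabling jumbo frames on CentOS, assuming the bonded interface is named bond0: either on the fly with

    ifconfig bond0 mtu 9000

or persistently by adding MTU=9000 to /etc/sysconfig/network-scripts/ifcfg-bond0. The switch ports and every host on the path have to allow the same jumbo MTU, otherwise large frames just get dropped.)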
On Jul 22, 2009, at 10:24 PM, Sam Lang wrote:
On Jul 22, 2009, at 7:17 PM, Rob Ross wrote:
Related to configuration:
- Do you have jumbo frames enabled?
Also, I don't know which kernel versions CentOS 5 and 5.2 ship, but if
they're older you may need to do some manual tuning of the TCP socket
window buffers. The config file option that lets you tune those
parameters is described here:
http://www.pvfs.org/cvs/pvfs-2-8-branch-docs/doc//pvfs2-config-options.php#TCPBufferSend
It has some links to the PSC tuning guide that will probably be
useful for you. It's important to note that if you're running a
newer kernel, you probably just want to let autotuning do its thing.
-sam
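As a sketch only (the values below are made up, not recommendations): the option Sam mentions goes in the <Defaults> section of pvfs2-fs.conf, along the lines of

    TCPBufferSend 1048576
    TCPBufferReceive 1048576

(double-check the option names against the linked page for your release). On a newer kernel it's usually better to leave these unset and just confirm that kernel autotuning is enabled, e.g.:

    sysctl net.ipv4.tcp_moderate_rcvbuf
    sysctl net.ipv4.tcp_rmem net.ipv4.tcp_wmem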
- Have you adjusted the default strip size in PVFS (looks like no
from your configuration file, but you could have done this via an
attribute)? See:
http://www.pvfs.org/cvs/pvfs-2-8-branch.build/doc/pvfs2-faq/pvfs2-faq.php#SECTION00076200000000000000
Crank this up to 4194304 or so (a sketch of the config syntax follows
below this list).
- Have you read Kyle's email about flow buffer adjustments? That
can also be helpful.
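As a quick sketch of the strip size change mentioned above (from memory; the FAQ link above has the authoritative syntax for your release), it goes in a <Distribution> block inside the <Filesystem> section of pvfs2-fs.conf, e.g.:

    <Distribution>
        Name simple_stripe
        Param strip_size
        Value 4194304
    </Distribution>

The servers need a restart after the change, and I believe it only applies to newly created files.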
Maybe you just have a junk switch; see this:
http://www.pdl.cmu.edu/Incast/index.html
Regards,
Rob
On Jul 22, 2009, at 6:34 PM, Jim Kusznir wrote:
Hi:
I performed some basic tests today with pvfs2 2.8.1, all of them
through the kernel connector (i.e., a traditional filesystem mount).
My topology is as follows:
3 dedicated pvfs2 servers, serving both I/O and metadata (although all
clients are given the address of the first server in their URI). Each
I/O server has 2 gig-e connections into the gigabit network switch for
the cluster and is running ALB ethernet load balancing, so in theory
each server now has 2Gbps of throughput potential. For disk, all my
servers are running Dell PERC 6/E cards with an MD1000 array of 15
750GB SATA drives in hardware RAID-6. Each pvfs server is responsible
for just under 10TB of disk storage. Using the test command below, the
local disk on a pvfs server came out to 373MB/s.
All of my clients are connected by a single Gig-E link into the same
gigabit switch, same network. My network consists of ROCKS 5.1 (CentOS
5.2) and some just plain old CentOS 5 servers.
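(Side note, purely as a sanity check and assuming the bond interface is called bond0: the bonding driver reports its mode and slave state under /proc, e.g.

    cat /proc/net/bonding/bond0

which should show "adaptive load balancing" for ALB. Worth remembering that any single TCP stream still rides one slave NIC, so the 2Gbps per server only shows up in aggregate across multiple clients.)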
The test command was: dd if=/dev/zero of=<file>.out bs=4000K count=2800
First test: single machine to pvfs storage: 95.6 MB/s
Second test: two cluster machines to pvfs storage: 80.2 MB/s
Third test: 3 machines: 53.2 MB/s
Fourth test: 4 machines: 44.7 MB/s
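(For what it's worth, a simple way to kick off the same dd on several clients at once, assuming pdsh is installed, the clients are named compute-0-0 through compute-0-3, and the client-side mount point is /pvfs2 -- all hypothetical names here:

    pdsh -w compute-0-[0-3] 'dd if=/dev/zero of=/pvfs2/test-$(hostname).out bs=4000K count=2800'

with each client writing its own file so the runs don't collide.)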
This test surprised me greatly. My understanding was that the big
benefit of pvfs was scalability; that with 3 I/O servers, I should
reasonably expect to get at least 3x the bandwidth. Given this, I
have a theoretical 6 Gbps to my storage, yet my actual throughput did
not scale much at all... My initial single-machine run came out at a
bit under 1Gbps, and my 4-machine run came in at 1.2Gbps. Each time I
added a machine, the throughput of them all went down.
What gives? My actual local disk throughput on my I/O servers is
373MB/s and the local pvfs2 server system load never broke 1.0, so
that wasn't the bottleneck...
Here's my pvfs2-fs.conf:
<Defaults>
UnexpectedRequests 50
EventLogging none
LogStamp datetime
BMIModules bmi_tcp
FlowModules flowproto_multiqueue
PerfUpdateInterval 1000
ServerJobBMITimeoutSecs 30
ServerJobFlowTimeoutSecs 30
ClientJobBMITimeoutSecs 300
ClientJobFlowTimeoutSecs 300
ClientRetryLimit 5
ClientRetryDelayMilliSecs 2000
StorageSpace /mnt/pvfs2
LogFile /var/log/pvfs2-server.log
</Defaults>
<Aliases>
Alias pvfs2-io-0-0 tcp://pvfs2-io-0-0:3334
Alias pvfs2-io-0-1 tcp://pvfs2-io-0-1:3334
Alias pvfs2-io-0-2 tcp://pvfs2-io-0-2:3334
</Aliases>
<Filesystem>
Name pvfs2-fs
ID 62659950
RootHandle 1048576
<MetaHandleRanges>
Range pvfs2-io-0-0 4-715827885
Range pvfs2-io-0-1 715827886-1431655767
Range pvfs2-io-0-2 1431655768-2147483649
</MetaHandleRanges>
<DataHandleRanges>
Range pvfs2-io-0-0 2147483650-2863311531
Range pvfs2-io-0-1 2863311532-3579139413
Range pvfs2-io-0-2 3579139414-4294967295
</DataHandleRanges>
<StorageHints>
TroveSyncMeta yes
TroveSyncData no
</StorageHints>
</Filesystem>
--Jim
_______________________________________________
Pvfs2-users mailing list
[email protected]
http://www.beowulf-underground.org/mailman/listinfo/pvfs2-users