Linux filesystem speed comparison

2005-04-10 Thread Steven Wilton
We are running some large proxies in our Melbourne POP, and we graph the CPU
counters available in the 2.6 Linux kernel to give us an idea of what the
CPU is doing.  We noticed that the CPU was spending large amounts of time
(around 60%) in an I/O wait state, which is when the CPU is idle but there
are pending disk I/O operations.
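These counters are exposed through /proc/stat on 2.6 kernels (the fifth
value on the "cpu" line is time spent in I/O wait).  The calculation behind
the percentages is roughly the sketch below; this is only an illustration of
the counters being graphed, not the actual graphing setup:

/*
 * Sketch: derive user / system / iowait percentages from /proc/stat
 * on a 2.6 kernel.  Field order on the "cpu" line is user, nice,
 * system, idle, iowait, irq, softirq (see proc(5)).
 */
#include <stdio.h>
#include <unistd.h>

static void
read_cpu(unsigned long long v[7])
{
    FILE *f = fopen("/proc/stat", "r");
    if (!f)
        return;
    if (fscanf(f, "cpu %llu %llu %llu %llu %llu %llu %llu",
            &v[0], &v[1], &v[2], &v[3], &v[4], &v[5], &v[6]) != 7)
        fprintf(stderr, "unexpected /proc/stat format\n");
    fclose(f);
}

int
main(void)
{
    unsigned long long a[7] = { 0 }, b[7] = { 0 }, total = 0;
    int i;

    read_cpu(a);
    sleep(300);                 /* one 5 minute sample */
    read_cpu(b);
    for (i = 0; i < 7; i++)
        total += b[i] - a[i];
    if (total)
        printf("user %.1f%%  sys %.1f%%  iowait %.1f%%\n",
            100.0 * (b[0] - a[0]) / total,
            100.0 * (b[2] - a[2]) / total,
            100.0 * (b[4] - a[4]) / total);
    return 0;
}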

Some other recent tests have shown that on Linux the aufs disk type gives us
the best performance, but I wanted to see if I could reduce the amount of
I/O wait time on the proxy servers by changing the filesystem.
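(For reference, the aufs store is selected per cache directory with a
squid.conf cache_dir line of the general shape below; the size and L1/L2
values here are placeholders rather than our real settings.)

cache_dir aufs /var/spool/squid/disk1 8000 16 256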

In Perth we have 4 identical proxies (P3-500, 512MB RAM, 3x9GB cache disks,
Linux 2.6.10 kernel, squid s2_5-epoll tree), which we were running with the
ext3 filesystem.  I reformatted 3 of them with reiserfs, xfs and jfs to see
what difference each of these filesystems would make to the I/O wait.  The
mount options for each are as follows:

/dev/sdb1 on /var/spool/squid/disk1 type reiserfs (rw,noatime,notail)
/dev/sdb1 on /var/spool/squid/disk1 type xfs (rw,noatime)
/dev/sdb1 on /var/spool/squid/disk1 type ext3 (rw,noatime,data=writeback)
/dev/sdb1 on /var/spool/squid/disk1 type jfs (rw,noatime)

Below is a single set of results from the daily averages of the graphs we
have.  I have taken 10 samples of 5 minute averages over the past week, and
they come up with similar figures (the 5 minute samples are pasted at the
end of this e-mail):

Filesystem  User  Sys  IO    Req/sec  U/R   S/R   I/R
Reiser      7.6   8.4  14.1  28       0.27  0.17  0.50
Xfs         8.4   5.3  4.4   27.3     0.31  0.19  0.16
Ext3        7.6   4.4  10.4  28.2     0.27  0.16  0.15
Jfs         7.3   4.1  15.8  26.6     0.27  0.15  0.59


The numbers are as follows:
User- %CPU user
Sys - %CPU system
IO  - %CPU IO wait
Req/sec - Requests/sec for squid
U/R - User/(Req/sec)
S/R - Sys/(Req/sec)
I/R - IO /(Req/sec)
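
For example, reading the table above for xfs: I/R = 4.4 / 27.3, or roughly
0.16, while for reiser I/R = 14.1 / 28, or roughly 0.50, so reiser costs
about three times as much I/O wait per request served.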

The interesting thing is that this test shows that in a 2.6.10 kernel, XFS
is the clear winner for I/O wait, followed by ext3 writeback.  I was not
surprised to see reiser come off worse than ext3, as I have previously tried
to use reiser on our proxies (on a 2.2 kernel), and noticed that initially
the proxy was a lot quicker, but as the disk filled up, the cache
performance dropped.

I thought I'd post this to squid-dev for comments first, as I have read
other posts that say that squid+reiser is the recommended combination, and
was wondering if there are other tests that I should perform.

Steven


The 5 minute samples are as follows:

        User  Sys  IO    Req/sec  U/R   S/R   I/R

5/4 9:43am
reiser  11.5  7.3  12.2  54       0.21  0.14  0.23
xfs     12.3  7.4  4.5   49.4     0.25  0.15  0.09
ext3    12.2  8.8  10    48       0.25  0.18  0.21
jfs     9     5.3  10.2  40.9     0.22  0.13  0.25

5/4 8:23pm
reiser  11.8  8    13    46.1     0.26  0.17  0.28
xfs     12.9  8    6.2   56.5     0.23  0.14  0.11
ext3    13.4  8.8  14.3  59.7     0.22  0.15  0.24
jfs     12.4  7.8  12.8  56.2     0.22  0.14  0.23

6/4 7:23am
reiser  4.3   2.2  5.1   21.2     0.20  0.10  0.24
xfs     5.2   2.6  1.2   24.3     0.21  0.11  0.05
ext3    4.1   2.3  5.1   13.9     0.29  0.17  0.37
jfs     5.9   2.7  4.7   17       0.35  0.16  0.28

6/4 10:47am
reiser  10.9  7.6  12.3  48.1     0.23  0.16  0.26
xfs     11.5  7.5  5.6   50.7     0.23  0.15  0.11
ext3    11.1  7.4  14.1  51       0.22  0.15  0.28
jfs     10.2  6.2  13.2  42.1     0.24  0.15  0.31

6/4 12:02pm
reiser  10.1  6.2  12.5  49.7     0.20  0.12  0.25
xfs     11.6  8.3  6.4   49       0.24  0.17  0.13
ext3    12.3  8.2  14.4  48.8     0.25  0.17  0.30
jfs     10.1  6.2  11.2  41.3     0.24  0.15  0.27

6/4 15:20pm
reiser  10.2  6.8  12.5  47.8     0.21  0.14  0.26
xfs     13.9  9.9  7.6   58.9     0.24  0.17  0.13
ext3    11.9  7.9  13.5  46.9     0.25  0.17  0.29
jfs     13.4  6    13.4  41.9     0.32  0.14  0.32

7/4 07:54am
reiser  7.8   4.7  10.7  34.8     0.22  0.14  0.31
xfs     8.4   5.6  4.7   26.9     0.31  0.21  0.17
ext3    7.5   5.2  10.2  29.4     0.26  0.18  0.35
jfs     6     3.7  9.3   24.8     0.24  0.15  0.38

7/4 1:44pm
reiser  12    8.5  19.7  55.3     0.22  0.15  0.36
xfs     11.4

helper's handles

2005-04-10 Thread Evgeny Kotsuba
Hi,
I had a look at helper.c and can't find the point where srv->rfd
is closed.
helperShutdown() only has a comm_close() call for srv->wfd.

If there really is no close for srv->rfd, then at each reconfigure
N handles will be left unclosed, where N is the number of helpers
(redirectors + auths + etc.).
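
Something like the following is what I have in mind.  It is only a sketch
(helperServerClose is a made-up name), not a tested patch against helper.c:

/*
 * Sketch only: also close the read descriptor when it differs from
 * the write descriptor, so a reconfigure does not leak one handle
 * per helper.
 */
#include "squid.h"

static void
helperServerClose(helper_server * srv)
{
    if (srv->rfd >= 0 && srv->rfd != srv->wfd)
        comm_close(srv->rfd);   /* the close that appears to be missing */
    comm_close(srv->wfd);       /* what helperShutdown() already does */
}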

SY,
Evgeny Kotsuba


cache peer acl

2005-04-10 Thread Nikcolay Pelov
Hi!
I have a small network with 2 squid servers running as siblings.  I'm
using delay pools to limit bandwidth, and there are some problems.
When there is a sibling hit, the delay pool still limits the speed.  Is
there any way to define a cache peer acl to exclude sibling cache hits
from the delay pools?
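
For context, the delay pool setup looks something like this (a simplified
sketch with placeholder class and rates, not the real configuration):

# simplified sketch of the delay pool setup; values are placeholders
delay_pools 1
delay_class 1 1
delay_parameters 1 32000/32000
delay_access 1 allow all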



Re: storeUfsDirRebuildFromSwapLog

2005-04-10 Thread Henrik Nordstrom
On Sun, 10 Apr 2005, Evgeny Kotsuba wrote:
-   if (s.op <= SWAP_LOG_NOP)
-   continue;
-   if (s.op >= SWAP_LOG_MAX)
-   continue;
I didn't quite catch what those ifs do, but
\Squid\S2_5\squid-s2_5\src\fs\coss\store_dir_coss.c in
storeCossRebuildFromSwapLog() still has those.
These ifs ignore swap.state log entry types not present in this version
of Squid (i.e. entries from some future version of Squid).
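
In other words, the guard amounts to something like this (a sketch only;
the SWAP_LOG_* values come from the swap log opcode enum in the source):

/* Skip swap.state entries whose opcode this version of Squid does
 * not know about, so a log written by a newer Squid does not break
 * the rebuild. */
if (s.op <= SWAP_LOG_NOP || s.op >= SWAP_LOG_MAX)
    continue;               /* unknown or future entry type: ignore it */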

The LFS/2GB-updated code in aufs/ufs/diskd does this slightly differently,
as one more log type was added.  I did not spend time making these updates
in COSS, as the structure of COSS is not suited to huge objects.

Regards
Henrik


storeUfsDirRebuildFromSwapLog

2005-04-10 Thread Evgeny Kotsuba
Hi,
squid-s2_5\src\fs\ufs\store_dir_ufs.c  -> 
storeUfsDirRebuildFromSwapLog
--8<--
    rb->n_read++;
-   if (s.op <= SWAP_LOG_NOP)
-       continue;
-   if (s.op >= SWAP_LOG_MAX)
-       continue;
    /*
     * BC: during 2.4 development, we changed the way swap file
     * numbers are assigned and stored.  The high 16 bits used
--8<--

I didn't quite catch what those ifs do, but
\Squid\S2_5\squid-s2_5\src\fs\coss\store_dir_coss.c in
storeCossRebuildFromSwapLog() still has those.

SY,
Evgeny Kotsuba