I've come up with a better name for the concept of file and directory
fragmentation: "Filesystem Entropy". Over time, an active and volatile
filesystem moves from an organized state to a disorganized state, which
makes backups increasingly difficult.

Here are some stats which illustrate the issue:

First the development mail server:
==================================
(Jumbo frames, Nagle disabled, and tcp_xmit_hiwat/tcp_recv_hiwat set to
2097152)
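For reference, those transport settings amount to roughly the following
on Solaris (the interface name e1000g0 is just a placeholder, and
setting tcp_naglim_def to 1 is one common way to disable Nagle
globally; your NIC may also need driver support for jumbo frames):

# ifconfig e1000g0 mtu 9000
# ndd -set /dev/tcp tcp_naglim_def 1
# ndd -set /dev/tcp tcp_xmit_hiwat 2097152
# ndd -set /dev/tcp tcp_recv_hiwat 2097152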

Small-file workload (copy from ZFS over the iSCSI network to a local UFS
filesystem)
# zpool iostat 10
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
space       70.5G  29.0G      3      0   247K  59.7K
space       70.5G  29.0G    136      0  8.37M      0
space       70.5G  29.0G    115      0  6.31M      0
space       70.5G  29.0G    108      0  7.08M      0
space       70.5G  29.0G    105      0  3.72M      0
space       70.5G  29.0G    135      0  3.74M      0
space       70.5G  29.0G    155      0  6.09M      0
space       70.5G  29.0G    193      0  4.85M      0
space       70.5G  29.0G    142      0  5.73M      0
space       70.5G  29.0G    159      0  7.87M      0
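
The small-file copy above is essentially a recursive copy of the mail
spool off the pool onto local UFS, something along these lines (the
paths here are made up for illustration):

# cp -rp /space/mail /backup/mail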

Large-file workload (CD and DVD ISOs)
# zpool iostat 10
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
space       70.5G  29.0G      3      0   224K  59.8K
space       70.5G  29.0G    462      0  57.8M      0
space       70.5G  29.0G    427      0  53.5M      0
space       70.5G  29.0G    406      0  50.8M      0
space       70.5G  29.0G    430      0  53.8M      0
space       70.5G  29.0G    382      0  47.9M      0

The production mail server:
===========================
Mail system is running with 790 IMAP users logged in (low IMAP
workload).
Two backup streams are running.
No jumbo frames, Nagle enabled, tcp_xmit_hiwat/tcp_recv_hiwat set to
2097152.
    - We've never seen any effect from changing the iSCSI transport
      parameters under this small-file workload.

# zpool iostat 10
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
space       1.06T   955G     96     69  5.20M  2.69M
space       1.06T   955G    175    105  8.96M  2.22M
space       1.06T   955G    182     16  4.47M   546K
space       1.06T   955G    170     16  4.82M  1.85M
space       1.06T   955G    145    159  4.23M  3.19M
space       1.06T   955G    138     15  4.97M  92.7K
space       1.06T   955G    134     15  3.82M  1.71M
space       1.06T   955G    109    123  3.07M  3.08M
space       1.06T   955G    106     11  3.07M  1.34M
space       1.06T   955G    120     17  3.69M  1.74M

# prstat -mL
   PID USERNAME USR SYS TRP TFL DFL LCK SLP LAT VCX ICX SCL SIG PROCESS/LWPID
 12438 root      12 6.9 0.0 0.0 0.0 0.0  81 0.1 508  84  4K   0 save/1
 27399 cyrus     15 0.5 0.0 0.0 0.0 0.0  85 0.0  18  10 297   0 imapd/1
 20230 root     3.9 8.0 0.0 0.0 0.0 0.0  88 0.1 393  33  2K   0 save/1
 25913 root     0.5 3.3 0.0 0.0 0.0 0.0  96 0.0  22   2  1K   0 prstat/1
 20495 cyrus    1.1 0.2 0.0 0.0 0.5 0.0  98 0.0  14   3 191   0 imapd/1
  1051 cyrus    1.2 0.0 0.0 0.0 0.0 0.0  99 0.0  19   1  80   0 master/1
 24350 cyrus    0.5 0.5 0.0 0.0 1.4 0.0  98 0.0  57   1 484   0 lmtpd/1
 22645 cyrus    0.6 0.3 0.0 0.0 0.0 0.0  99 0.0  53   1 603   0 imapd/1
 24904 cyrus    0.3 0.4 0.0 0.0 0.0 0.0  99 0.0  66   0 863   0 imapd/1
 18139 cyrus    0.3 0.2 0.0 0.0 0.0 0.0  99 0.0  24   0 195   0 imapd/1
 21459 cyrus    0.2 0.3 0.0 0.0 0.0 0.0  99 0.0  54   0 635   0 imapd/1
 24891 cyrus    0.3 0.3 0.0 0.0 0.9 0.0  99 0.0  28   0 259   0 lmtpd/1
   388 root     0.2 0.3 0.0 0.0 0.0 0.0 100 0.0   1   1  48   0 in.routed/1
 21643 cyrus    0.2 0.3 0.0 0.0 0.2 0.0  99 0.0  49   7 540   0 imapd/1
 18684 cyrus    0.2 0.3 0.0 0.0 0.0 0.0 100 0.0  48   1 544   0 imapd/1
 25398 cyrus    0.2 0.2 0.0 0.0 0.0 0.0 100 0.0  47   0 466   0 pop3d/1
 23724 cyrus    0.2 0.2 0.0 0.0 0.0 0.0 100 0.0  47   0 540   0 imapd/1
 24909 cyrus    0.1 0.2 0.0 0.0 0.2 0.0  99 0.0  25   1 251   0 lmtpd/1
 16317 cyrus    0.2 0.2 0.0 0.0 0.0 0.0 100 0.0  37   1 495   0 imapd/1
 28243 cyrus    0.1 0.3 0.0 0.0 0.0 0.0 100 0.0  32   0 289   0 imapd/1
 20097 cyrus    0.1 0.2 0.0 0.0 0.3 0.0  99 0.0  26   5 253   0 lmtpd/1
Total: 893 processes, 1125 lwps, load averages: 1.14, 1.16, 1.16
 
-- 
Ed  
