All,

We had a situation where write speeds to a ZFS pool consisting of two 7 TB RAID5 LUNs
slowed to a crawl. We spent a good 100 man-hours troubleshooting the issue and ruling
out hardware problems. In the end, when we freed about 2 TB out of 14, performance
went back to normal (300+ MB/s, versus 3 MB/s while it was degraded).
I would like some understanding of why this happens with ZFS, as well as what
threshold we should watch out for.
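
For now I am thinking of watching pool capacity with something along these lines.
This is only a minimal sketch; the 90% warning cutoff is my own placeholder, not an
official ZFS figure, until I know the real threshold:

#!/usr/bin/env python3
# Minimal capacity check for the "beast" pool.
# NOTE: the 90% cutoff below is only a placeholder, not an official ZFS limit.
import subprocess

POOL = "beast"
WARN_PCT = 90

def pool_capacity(pool):
    # `zpool list -H -o capacity <pool>` prints something like "97%"
    out = subprocess.check_output(["zpool", "list", "-H", "-o", "capacity", pool],
                                  universal_newlines=True)
    return int(out.strip().rstrip("%"))

cap = pool_capacity(POOL)
if cap >= WARN_PCT:
    print("WARNING: pool %s is %d%% full" % (POOL, cap))
else:
    print("pool %s is %d%% full" % (POOL, cap))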
 
This was the layout when performance was at 3 MB/s:
Filesystem              size   used  avail capacity  Mounted on
beast                 130G   37K  130G   1% /mnt/backup1
beast/customer1         130G   29K  130G   1% /mnt/backup1/customer1
beast/customer1/bacula  222G   93G  130G  42% /mnt/backup1/customer1/bacula
beast/customer1/db      2.0T  1.8T  130G  94% /mnt/backup1/customer1/db
beast/customer1/fs      2.1T  1.9T  130G  94% /mnt/backup1/customer1/filesystem
beast/customer5      130G   29K  130G   1% /mnt/backup1/customer5
beast/customer5/bacula  221G   92G  130G  42% /mnt/backup1/customer5/bacula
beast/customer5/db   130G   25K  130G   1% /mnt/backup1/customer5/db
beast/customer5/fs   172G   42G  130G  25% /mnt/backup1/customer5/filesystem
beast/bacula          130G   15M  130G   1% /mnt/backup1/bacula
beast/bacula/spool    130G   34K  130G   1% /mnt/backup1/bacula/spool
beast/customer6          130G   29K  130G   1% /mnt/backup1/customer6
beast/customer6/bacula   210G   81G  130G  39% /mnt/backup1/customer6/bacula
beast/customer6/db       3.7T  3.6T  130G  97% /mnt/backup1/customer6/db
beast/customer6/fs       130G   25K  130G   1% /mnt/backup1/customer6/filesystem
beast/customer2         133G  3.6G  130G   3% /mnt/backup1/customer2
beast/customer2/bacula  1.5T  1.4T  130G  92% /mnt/backup1/customer2/bacula
beast/customer2/db      194G   65G  130G  34% /mnt/backup1/customer2/db
beast/customer2/fs      221G   92G  130G  42% /mnt/backup1/customer2/filesystem
beast/customer4         130G   29K  130G   1% /mnt/backup1/customer4
beast/customer4/bacula  1.3T  1.2T  130G  90% /mnt/backup1/customer4/bacula
beast/customer4/db      1.6T  1.5T  130G  92% /mnt/backup1/customer4/db
beast/customer4/fs      130G   25K  130G   1% /mnt/backup1/customer4/filesystem
beast/customer3    130G   26K  130G   1% /mnt/backup1/customer3
beast/customer3/bacula  2.8T  2.6T  130G  96% /mnt/backup1/customer3/bacula

From the original post (zpool iostat output):
                                           capacity     operations    bandwidth
pool                                     used  avail   read  write   read  write
--------------------------------------  -----  -----  -----  -----  -----  -----
beast                                   14.1T   366G      0    155      0  3.91M
  c7t6000402002FC424F6CF5317A00000000d0  7.07T   183G      0     31      0  16.2K
  c7t6000402002FC424F6CF5318F00000000d0  7.07T   183G      0    124      0  3.90M
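
For reference, working from the iostat numbers above: 14.1 TB used with only 366 GB
free works out to roughly

    14.1 / (14.1 + 366/1024) ~= 0.975

i.e. the pool was about 97.5% full before we freed the 2 TB.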
 
 