Hi,
I have a large pool (~50TB total, ~42TB usable), composed of 4 raidz1
volumes (of 7 x 2TB disks each):
# zpool iostat -v | grep -v c4
                capacity     operations    bandwidth
pool          used  avail   read  write   read  write
------------  -----  -----  -----  -----  -----  -----
backup        35.2T  15.3T    602    272  15.3M  11.1M
  raidz1      11.6T  1.06T    138     49  2.99M  2.33M
  raidz1      11.8T   845G    163     54  3.82M  2.57M
  raidz1      6.00T  6.62T    161     84  4.50M  3.16M
  raidz1      5.88T  6.75T    139     83  4.01M  3.09M
------------  -----  -----  -----  -----  -----  -----
Originally the pool had only the first two raidz1 vdevs; the bottom two
were added later. You can see this in the used/free figures: the first
two vdevs have ~11TB used and ~1TB free, while the newer two have
~6TB used and ~6TB free.
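To put numbers on the imbalance, here is a quick sketch computing the fill percentage of each vdev from the used/avail columns in the iostat output above (plain arithmetic, not a ZFS tool; the vdev names are just labels for this example):

```python
# Fill percentage per raidz1 vdev, using used/avail from the
# zpool iostat output above. All values converted to GB so the
# 845G figure lines up with the TB figures.
vdevs = {
    "raidz1-0": (11.6e3, 1.06e3),  # (used, avail) in GB
    "raidz1-1": (11.8e3, 845),
    "raidz1-2": (6.00e3, 6.62e3),
    "raidz1-3": (5.88e3, 6.75e3),
}

for name, (used, avail) in vdevs.items():
    pct = 100 * used / (used + avail)
    print(f"{name}: {pct:.1f}% full")
```

The first two vdevs come out at over 90% full, the newer two at under 50%, which matches the impression below that the pool is badly lopsided.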
I have hundreds of ZFS filesystems storing backups from several
servers, each with about 7 snapshots of older backups.
I have the impression that performance is degrading due to the limited
free space in the first two vdevs, especially the second, which has
only 845GB free.
Is there any way to re-stripe the pool, so I can take advantage of all
spindles across the raidz1 vdevs? Right now it looks like the newer
vdevs are doing the heavy lifting while the older two just hold old
data.
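For what it's worth, the behavior you describe is expected: roughly speaking, ZFS biases new allocations toward vdevs with more free space, so the nearly-full vdevs see few new writes. A toy model of that bias (this is NOT the real metaslab allocator, which also considers other factors; the 100GB write is a made-up figure for illustration):

```python
# Toy model: distribute an incoming write across vdevs in proportion
# to each vdev's free space. Illustrates why nearly-full vdevs in an
# unbalanced pool receive only a small share of new writes.
free_gb = {
    "raidz1-0": 1060,   # free space per vdev, from the iostat output
    "raidz1-1": 845,
    "raidz1-2": 6620,
    "raidz1-3": 6750,
}
total_free = sum(free_gb.values())
write_gb = 100  # hypothetical incoming backup data

for name, free in free_gb.items():
    share = write_gb * free / total_free
    print(f"{name}: ~{share:.1f} GB of the {write_gb} GB write")
```

Under this model the two newer vdevs absorb close to 90% of each write, which matches the bandwidth skew in the iostat output. The usual workaround (since there is no in-place re-stripe) is to rewrite the data, e.g. zfs send/receive each filesystem into a new dataset so its blocks get re-allocated across all four vdevs.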
Thanks,
Eduardo Bragatto
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss