On Aug 3, 2010, at 10:52 AM, Eduardo Bragatto wrote:
> Hi,
> 
> I have a large pool (~50TB total, ~42TB usable), composed of 4 raidz1 volumes 
> (of 7 x 2TB disks each):
> 
> # zpool iostat -v | grep -v c4

Unfortunately, zpool iostat is completely useless at describing performance.
The only thing it can do is show device bandwidth, and everyone here knows
that bandwidth is not performance, right?  Nod along, thank you.
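To look at latency rather than bandwidth, the usual tools are sketched below. Note the `-l` and `-w` flags were added to `zpool iostat` in later OpenZFS releases and aren't in 2010-era bits; on Solaris/illumos of that vintage you'd lean on plain `iostat` instead:

```shell
# Per-device service times (Solaris/illumos iostat): asvc_t is the
# average time an I/O spends active on the device -- a latency view
# that raw bandwidth numbers can't give you.
iostat -xn 5

# On newer OpenZFS builds, zpool iostat itself can report latency:
#   -l   average wait and disk latencies per vdev
#   -w   full latency histograms
zpool iostat -l -v 5
zpool iostat -w 5
```

If asvc_t on the full vdevs is much higher than on the emptier ones, that's the degradation showing up as latency, not throughput.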

>                 capacity     operations    bandwidth
> pool           used  avail   read  write   read  write
> ------------  -----  -----  -----  -----  -----  -----
> backup        35.2T  15.3T    602    272  15.3M  11.1M
>  raidz1      11.6T  1.06T    138     49  2.99M  2.33M
>  raidz1      11.8T   845G    163     54  3.82M  2.57M
>  raidz1      6.00T  6.62T    161     84  4.50M  3.16M
>  raidz1      5.88T  6.75T    139     83  4.01M  3.09M
> ------------  -----  -----  -----  -----  -----  -----
> 
> Originally there were only the first two raidz1 volumes, and the two from the 
> bottom were added later.
> 
> You can notice that by the amount of used / free space. The first two volumes 
> have ~11TB used and ~1TB free, while the other two have around ~6TB used and 
> ~6TB free.

Yes, and you can also see that writes are biased toward the raidz1 sets
that are less full.  This is exactly what you want :-)  Eventually, as the
emptier sets fill up and free space evens out, the writes will rebalance.

OTOH, reads will come from whence they were written.

> 
> I have hundreds of zfs'es storing backups from several servers. Each ZFS has 
> about 7 snapshots of older backups.
> 
> I have the impression I'm getting degradation in performance due to the 
> limited space in the first two volumes, especially the second, which has only 
> 845GB free.

Impressions work well for dating, but not so well for performance.
Does your application run faster or slower?

> 
> Is there any way to re-stripe the pool, so I can take advantage of all 
> spindles across the raidz1 volumes? Right now it looks like the newer volumes 
> are doing the heavy lifting while the other two just hold old data.

Yes, of course.  But it requires copying the data, which probably isn't 
feasible.
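A sketch of what that copy would look like, per dataset (dataset names here are hypothetical; `-R` carries the snapshots along so the backup history survives):

```shell
# Rewrite one dataset so its blocks are re-striped across all four
# raidz1 vdevs.  ZFS has no in-place rebalance; the data must be
# copied.  "backup/host1" is an example name, not from the thread.
zfs snapshot -r backup/host1@rebalance
zfs send -R backup/host1@rebalance | zfs receive backup/host1.new

# After verifying the copy, swap the new dataset into place.
zfs destroy -r backup/host1
zfs rename backup/host1.new backup/host1
```

With hundreds of datasets and ~35TB used, that's a lot of copying within the same pool, which is why it's probably not feasible here.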
 -- richard

-- 
Richard Elling
rich...@nexenta.com   +1-760-896-4422
Enterprise class storage for everyone
www.nexenta.com



_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
