Re: [zfs-discuss] ZFS Restripe

2010-08-04 Thread Bob Friesenhahn
On Tue, 3 Aug 2010, Eduardo Bragatto wrote: You're a funny guy. :) Let me re-phrase it: I'm sure I'm getting degradation in performance, as my applications are waiting more on I/O now than they used to (based on CPU utilization graphs I have). The impression part is that the reason is the

Re: [zfs-discuss] ZFS Restripe

2010-08-04 Thread Eduardo Bragatto
On Aug 4, 2010, at 12:26 AM, Richard Elling wrote: The tipping point for the change in the first fit/best fit allocation algorithm is now 96%. Previously, it was 70%. Since you don't specify which OS, build, or zpool version, I'll assume you are on something modern. I'm running Solaris 10
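For anyone following along, how close a pool is to that tipping point is just the CAP column of zpool list; nothing here assumes a particular pool name or zpool version:

# zpool list
(the CAP column reports how full each pool is, i.e. how close it sits to the 96% switch-over point, or 70% on older code)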

Re: [zfs-discuss] ZFS Restripe

2010-08-04 Thread Eduardo Bragatto
On Aug 4, 2010, at 12:20 AM, Khyron wrote: I notice you use the word "volume", which really isn't accurate or appropriate here. Yeah, it didn't seem right to me, but I wasn't sure about the nomenclature; thanks for clarifying. You may want to get a bit more specific and choose from the

Re: [zfs-discuss] ZFS Restripe

2010-08-04 Thread Bob Friesenhahn
On Wed, 4 Aug 2010, Eduardo Bragatto wrote: Checking with iostat, I noticed that the average wait time is between 40ms and 50ms for all disks. Which doesn't seem too bad. Actually, this is quite high. I would not expect such long wait times except under extreme load such as a
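For reference, the per-device numbers worth quoting here come from iostat in extended mode; the interval and count below are arbitrary:

# iostat -xn 5 3
(ignore the first sample, which averages everything since boot; wsvc_t and asvc_t are the wait-queue and active service times in milliseconds, and %b shows how busy each device is)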

Re: [zfs-discuss] ZFS Restripe

2010-08-04 Thread Eduardo Bragatto
On Aug 4, 2010, at 11:18 AM, Bob Friesenhahn wrote: Assuming that your impressions are correct, are you sure that your new disk drives are similar to the older ones? Are they an identical model? Design trade-offs now often result in larger-capacity drives with reduced performance.
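If it helps to check that without pulling the drives, the inquiry data Solaris keeps per device includes the vendor and model strings; this is a read-only query:

# iostat -En
(each device entry lists the vendor and product/model strings, plus revision, serial number and error counters, so the old and new drives can be compared directly)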

Re: [zfs-discuss] ZFS Restripe

2010-08-04 Thread Bob Friesenhahn
On Wed, 4 Aug 2010, Eduardo Bragatto wrote: I will also start using rsync v3 to reduce the memory footprint, so I might be able to give back some RAM to the ARC, and I'm thinking of maybe going to 16GB of RAM, as the pool is quite large and I'm sure more ARC wouldn't hurt. It is definitely a wise
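For sizing that, the current and maximum ARC sizes are exposed as kstats and can be read at any time; this is a generic, read-only check:

# kstat -p zfs:0:arcstats:size
# kstat -p zfs:0:arcstats:c_max
(size is the ARC's current footprint in bytes, c_max the ceiling it is allowed to grow to)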

Re: [zfs-discuss] ZFS Restripe

2010-08-04 Thread Richard Elling
On Aug 4, 2010, at 9:03 AM, Eduardo Bragatto wrote: On Aug 4, 2010, at 12:26 AM, Richard Elling wrote: The tipping point for the change in the first fit/best fit allocation algorithm is now 96%. Previously, it was 70%. Since you don't specify which OS, build, or zpool version, I'll assume

[zfs-discuss] ZFS Restripe

2010-08-03 Thread Eduardo Bragatto
Hi, I have a large pool (~50TB total, ~42TB usable), composed of 4 raidz1 volumes (of 7 x 2TB disks each):
# zpool iostat -v | grep -v c4
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----

Re: [zfs-discuss] ZFS Restripe

2010-08-03 Thread Khyron
Short answer: No. Long answer: Not without rewriting the previously written data. Data is being striped over all of the top level VDEVs, or at least it should be. But there is no way, at least not built into ZFS, to re-allocate the storage to perform I/O balancing. You would basically have to
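To make the "rewrite the previously written data" part concrete: anything that copies files to new blocks within the same pool will do the job, since the allocator spreads fresh writes across all top-level VDEVs. A rough sketch, with made-up paths and assuming enough free space to hold the temporary copy:

# rsync -a /tank/data/ /tank/data.new/
# rm -rf /tank/data && mv /tank/data.new /tank/data
(the rsync writes brand-new blocks, which land on the emptier VDEVs as well as the old ones; the mv is just a rename)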

Re: [zfs-discuss] ZFS Restripe

2010-08-03 Thread Eduardo Bragatto
On Aug 3, 2010, at 10:08 PM, Khyron wrote: Long answer: Not without rewriting the previously written data. Data is being striped over all of the top level VDEVs, or at least it should be. But there is no way, at least not built into ZFS, to re-allocate the storage to perform I/O

Re: [zfs-discuss] ZFS Restripe

2010-08-03 Thread Eduardo Bragatto
On Aug 3, 2010, at 10:57 PM, Richard Elling wrote: Unfortunately, zpool iostat is completely useless at describing performance. The only thing it can do is show device bandwidth, and everyone here knows that bandwidth is not performance, right? Nod along, thank you. I totally understand

Re: [zfs-discuss] ZFS Restripe

2010-08-03 Thread Khyron
I notice you use the word "volume", which really isn't accurate or appropriate here. If all of these VDEVs are part of the same pool, which as I recall you said they are, then writes are striped across all of them (with a bias for the more empty, aka less full, VDEVs). You probably want to zfs send the
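Spelling out the zfs send route for completeness: the dataset names below are made up, and this assumes enough free space to hold a second copy until the old dataset is destroyed.

# zfs snapshot tank/data@rebalance
# zfs send tank/data@rebalance | zfs receive tank/data.new
# zfs destroy -r tank/data
# zfs rename tank/data.new tank/data
(the receive allocates all-new blocks, so the copy ends up striped across every VDEV, including the recently added, emptier ones)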

Re: [zfs-discuss] ZFS Restripe

2010-08-03 Thread Richard Elling
On Aug 3, 2010, at 8:55 PM, Eduardo Bragatto wrote: On Aug 3, 2010, at 10:57 PM, Richard Elling wrote: Unfortunately, zpool iostat is completely useless at describing performance. The only thing it can do is show device bandwidth, and everyone here knows that bandwidth is not performance,