thanks, bill.  i killed an old filesystem.  also forgot about
arc_meta_limit.  kicked it up to 4gb from 2gb.  things are back to
normal.
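
[For the archives: a sketch of how arc_meta_limit is typically inspected and raised on a 151a-era system. The 4 GB value mirrors the fix above; these are admin/config fragments for a live Solaris-derived kernel, and the tunable name can vary between builds, so treat them as a starting point rather than a recipe.]

```shell
# Check the current ARC metadata limit (prints the value in bytes):
echo "arc_meta_limit/Z" | mdb -k

# Raise it to 4 GB on the running kernel (0x100000000 bytes).
# Takes effect immediately but is lost on reboot:
echo "arc_meta_limit/Z 0x100000000" | mdb -kw

# To make it persistent, add this line to /etc/system and reboot:
#   set zfs:zfs_arc_meta_limit = 0x100000000
```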

On Thu, Dec 15, 2011 at 1:06 PM, Bill Sommerfeld
<sommerf...@alum.mit.edu> wrote:
> On 12/15/11 09:35, milosz wrote:
>>
>> hi all,
>>
>> suddenly ran into a very odd issue with a 151a server used primarily
>> for cifs... out of (seemingly) nowhere, writes are incredibly slow,
>> often <10kb/s.  this is what zpool iostat 1 looks like when i copy a
>> big file:
>>
>> storepool   13.4T  1.07T     57      0  6.13M      0
>> storepool   13.4T  1.07T    216     91   740K  5.58M
>
> ...
>
>> any ideas?  pretty stumped.
>
>
> Behavior I've observed with multiple pools is that you will sometimes hit a
> performance wall when the pool gets too full; the system spends lots of time
> reading in metaslab metadata looking for a place to put newly-allocated
> blocks.  If you're in this mode, kernel profiling will show a lot of time
> spent in metaslab-related code.
>
> Exactly where you hit the wall seems to depend on the history of what went
> into the pool; I've seen the problem kick in with only 69%-70% usage in one
> pool that was used primarily for solaris development.
>
> The workaround turned out to be simple: delete stuff you don't need to keep.
> Once there was enough free space, write performance returned to normal.
>
> There are a few metaslab-related tunables that can be tweaked as well.
>
>                                        - Bill
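
[A quick way to confirm the diagnosis Bill describes before deleting data: profile the kernel and look for metaslab frames. These are diagnostic one-liners for a live Solaris/illumos box; the exact metaslab function names in the stacks vary by build.]

```shell
# Sample kernel stacks at 997 Hz for ~10 seconds; if writes are stalled
# in the allocator, metaslab_* frames (e.g. metaslab_alloc) dominate:
dtrace -n 'profile-997 /arg0/ { @[stack()] = count(); }
           tick-10s { exit(0); }' | grep metaslab

# lockstat gives a similar kernel profile (top 20 entries over 10s):
lockstat -kIW -D 20 sleep 10
```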
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss