Sorry everyone, this one was indeed a case of root stupidity. I had
forgotten to upgrade to OI 148, which apparently fixed the write balancer.
Duh. (I couldn't find a full changelog via Google, though.)
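
For anyone hitting this later: the skew is easy to see in zpool iostat. A
minimal sketch, assuming a pool named "tank" (substitute your own):

  # per-vdev free space and read/write ops, refreshed every second
  zpool iostat -v tank 1

The -v flag breaks the numbers down per vdev, so you can watch which
devices the writes actually land on before and after the upgrade.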
On Jun 30, 2011 3:12 PM, "Tuomas Leikola" <tuomas.leik...@gmail.com> wrote:
> Thanks for the input. This was not a case of a degraded vdev, but only a
> missing log device (which I cannot get rid of..). I'll try offlining some
> vdevs and see what happens - although this should be automatic at all
> times IMO.
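
(For the archives: by "offlining" I mean roughly the sketch below. Pool and
device names are made up - check your own zpool status first.)

  zpool status tank          # confirm layout and health before touching anything
  zpool offline tank c1t2d0  # take one disk in the overfull vdev offline
  zpool online tank c1t2d0   # bring it back once writes have evened out

  # Removing a log device needs pool version 19 or newer:
  zpool remove tank c2t0d0
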
> On Jun 30, 2011 1:25 PM, "Markus Kovero" <markus.kov...@nebula.fi> wrote:
>>
>>
>>> To me it seems that writes are not directed properly to the devices that
>>> have the most free space - almost exactly the opposite. The writes seem to
>>> go to the devices that have the _least_ free space, instead of the devices
>>> that have the most free space. The same effect that can be seen in these
>>> 60s averages can also be observed in a shorter timespan, like a second or
>>> so.
>>
>>> Is there something obvious I'm missing?
>>
>>
>> Not sure how OI should behave; I've managed to even out writes & space
>> usage between vdevs by bringing a device offline in the vdev you don't
>> want writes to end up on.
>> If you have a degraded vdev in your pool, zfs will try not to write
>> there, and this may be the case here as well, as I don't see zpool status
>> output.
>>
>> Yours
>> Markus Kovero
>>