On Apr 22, 2016, at 7:28 PM, Dan McDonald wrote:
> On Apr 22, 2016, at 1:13 PM, Richard Elling wrote:
>
>> If you're running Solaris 11 or pre-2015 OmniOS, then the old write throttle
>> is impossible to control and you'll chase your tail trying to balance
>> scrubs/resilvers against any other workload. From a control theory
>> perspective, [...]
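On illumos kernels that do have the post-2013 write throttle (OmniOS r151014
and later), scrub I/O gets its own scheduling class whose per-vdev queue
limits can be inspected and raised with mdb. A minimal sketch, assuming the
stock illumos tunable names and defaults; such writes take effect
immediately but do not survive a reboot:

echo "zfs_vdev_scrub_min_active/D" | mdb -k    # default 1
echo "zfs_vdev_scrub_max_active/D" | mdb -k    # default 2
echo "zfs_vdev_scrub_max_active/W5" | mdb -kw  # allow more concurrent scrub I/Os per vdev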
On Apr 22, 2016, at 5:00 AM, Stephan Budach wrote:
> On Apr 21, 2016, at 6:36 PM, Richard Elling wrote:
>> On Apr 21, 2016, at 7:47 AM, Chris Siebenmann wrote:
>>
>> [About ZFS scrub tunables:]
>>> Interesting read - and it surely works. If you set the tunable before
>>> you start the scrub you can immediately see the throughput being much
>>> higher than with the standard setting. [...]
On Apr 21, 2016, at 7:47 AM, Chris Siebenmann wrote:

[About ZFS scrub tunables:]
> Interesting read - and it surely works. If you set the tunable before
> you start the scrub you can immediately see the throughput being much
> higher than with the standard setting. [...]

It's perhaps worth noting here that the scrub rate shown in 'zpool
status' is a [...]
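The rate zpool status prints is presumably a running average rather than an
instantaneous figure, so a rough way to gauge the current rate is to sample
the scanned counter twice and compare. A sketch, assuming a pool named
"tank" (hypothetical) and the usual "scanned out of ... at ..." status line:

zpool status tank | grep scanned   # note the bytes scanned so far
sleep 60
zpool status tank | grep scanned   # diff against the first sample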
On Apr 19, 2016, at 11:31 PM, wuffers wrote:
> You might want to check this old thread:
> http://lists.omniti.com/pipermail/omnios-discuss/2014-July/002927.html
>
> Richard Elling had some interesting insights on how the scrub works:
>
> "So I think the pool is not scheduling scrub I/Os very well. You can
> increase the number of scrub I/Os in the sched [...]"
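The quote is cut short before naming the knob, but the usual illumos
tunables governing in-flight scrub I/O under the old scheduler can be
inspected the same way. A sketch, assuming stock illumos names and defaults:

echo "zfs_top_maxinflight/D" | mdb -k  # scrub I/Os per top-level vdev, default 32
echo "zfs_scan_idle/D" | mdb -k        # ticks of recent user I/O that count as "busy", default 50
echo "zfs_scrub_delay/D" | mdb -k      # per-I/O scrub delay on a busy pool, default 4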
On Apr 17, 2016, at 8:42 PM, Dale Ghent wrote:
> On Apr 17, 2016, at 9:07 AM, Stephan Budach wrote:
>
> Well… searching the net somewhat more thoroughly, I came across an
> archived discussion which deals with a similar issue. Somewhere down the
> conversation, this parameter got suggested:
>
> echo "zfs_scrub_delay/W0" | mdb -kw
On Apr 17, 2016, at 2:07 PM, Stephan Budach wrote:

Hi all,

I am running a scrub on an SSD-only zpool on r018. This zpool consists
of 16 iSCSI targets, which are served from two other OmniOS boxes -
currently still running r016 over 10GbE connections.

This zpool serves as an NFS share for my Oracle V [...]