Hi Theo,

znapzend can use mbuffer's ability to make a direct TCP connection to
another mbuffer instance ...
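
for anyone who wants to try that transport by hand, the pipeline looks
roughly like this (host name, port, dataset names and the -s/-m buffer
settings below are just placeholder examples, not znapzend's defaults):

  # on the receiving box: listen on a TCP port and feed zfs receive
  mbuffer -q -s 128k -m 1G -I 9090 | zfs receive tank/backup

  # on the sending box: stream the snapshot straight to the receiver
  zfs send tank/data@snap | mbuffer -q -s 128k -m 1G -O backuphost:9090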

didn't know about pv though ... neat tool!
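
from a quick look at its manpage, a minimal sketch of pv as a buffering
stage in a plain ssh pipeline (dataset and host names are made up; -B
sets the buffer size, while -s just tells the progress meter how much
data to expect):

  zfs send tank/data@snap | pv -B 256m | ssh backuphost zfs receive tank/backup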

cheers
tobi

http://www.znapzend.org
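
ps. since the digest below brings it up: znapzend keeps its backup plans
in ZFS properties on the source dataset, so you can inspect a config
with plain zfs tooling (the dataset name below is just an example):

  zfs get all tank/data | grep org.znapzend

  # or let the setup tool print it
  znapzendzetup list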

Yesterday Theo Schlossnagle wrote:

> OmniOS ships with pipeviewer (pv); if you use pv -s <several megs>, it
> would have close to the same effect as using mbuffer.
>
>
> On Mon, Aug 11, 2014 at 2:06 AM, Hafiz Rafibeyli <rafibe...@gmail.com>
> wrote:
>
> > Tobias, thank you for the great work; a backup tool for ZFS on OmniOS
> > was a missing piece.
> >
> > I think ssh will be slow for bigger datasets; as you mention, znapzend
> > 0.11 supports the use of mbuffer.
> >
> > I could not find an mbuffer package for OmniOS; could you explain how
> > to set up and use mbuffer on OmniOS, please?
> >
> > regards
> >
> >
> >
> > ----- Original Message -----
> > From: omnios-discuss-requ...@lists.omniti.com
> > To: omnios-discuss@lists.omniti.com
> > Sent: Tuesday, 29 July, 2014 10:29:42 PM
> > Subject: OmniOS-discuss Digest, Vol 28, Issue 8
> >
> > Today's Topics:
> >
> >    1. announcement znapzend a new zfs backup tool (Tobias Oetiker)
> >    2. Re: announcement znapzend a new zfs backup tool
> >       (Theo Schlossnagle)
> >    3. Re: announcement znapzend a new zfs backup tool (Saso Kiselkov)
> >    4. Re: Slow scrub performance (wuffers)
> >
> >
> > ----------------------------------------------------------------------
> >
> > Message: 1
> > Date: Tue, 29 Jul 2014 17:50:02 +0200 (CEST)
> > From: Tobias Oetiker <t...@oetiker.ch>
> > To: omnios-discuss@lists.omniti.com
> > Subject: [OmniOS-discuss] announcement znapzend a new zfs backup tool
> > Message-ID: <alpine.deb.2.02.1407291748500.6...@froburg.oetiker.ch>
> > Content-Type: TEXT/PLAIN; charset=US-ASCII
> >
> > Just out:
> >
> >  ZnapZend a Multilevel Backuptool for ZFS
> >
> > It is on Github. Check out
> >
> >  http://www.znapzend.org
> >
> > cheers
> > tobi
> >
> > --
> > Tobi Oetiker, OETIKER+PARTNER AG, Aarweg 15 CH-4600 Olten, Switzerland
> > www.oetiker.ch t...@oetiker.ch +41 62 775 9902
> >
> >
> >
> > ------------------------------
> >
> > Message: 2
> > Date: Tue, 29 Jul 2014 11:54:07 -0400
> > From: Theo Schlossnagle <je...@omniti.com>
> > To: "OmniOS-discuss@lists.omniti.com"
> >         <omnios-discuss@lists.omniti.com>
> > Subject: Re: [OmniOS-discuss] announcement znapzend a new zfs backup
> >         tool
> > Message-ID:
> >         <caclsaptc_wdb+stkw2-jzkgp7oqz4oweuwg_nnrm_xkaook...@mail.gmail.com>
> > Content-Type: text/plain; charset="utf-8"
> >
> > Awesome!
> >
> >
> > On Tue, Jul 29, 2014 at 11:50 AM, Tobias Oetiker <t...@oetiker.ch> wrote:
> >
> > > Just out:
> > >
> > >  ZnapZend a Multilevel Backuptool for ZFS
> > >
> > > It is on Github. Check out
> > >
> > >  http://www.znapzend.org
> > >
> > > cheers
> > > tobi
> > >
> > > --
> > > Tobi Oetiker, OETIKER+PARTNER AG, Aarweg 15 CH-4600 Olten, Switzerland
> > > www.oetiker.ch t...@oetiker.ch +41 62 775 9902
> > >
> > >
> >
> >
> >
> > --
> >
> > Theo Schlossnagle
> >
> > http://omniti.com/is/theo-schlossnagle
> >
> > ------------------------------
> >
> > Message: 3
> > Date: Tue, 29 Jul 2014 17:59:18 +0200
> > From: Saso Kiselkov <skiselkov...@gmail.com>
> > To: omnios-discuss@lists.omniti.com
> > Subject: Re: [OmniOS-discuss] announcement znapzend a new zfs backup
> >         tool
> > Message-ID: <53d7c4d6.5060...@gmail.com>
> > Content-Type: text/plain; charset=ISO-8859-1
> >
> > On 7/29/14, 5:50 PM, Tobias Oetiker wrote:
> > > Just out:
> > >
> > >  ZnapZend a Multilevel Backuptool for ZFS
> > >
> > > It is on Github. Check out
> > >
> > >  http://www.znapzend.org
> >
> > Neat, especially the feature that the backup config is part of a
> > dataset's properties. Very cool.
> >
> > --
> > Saso
> >
> >
> >
> > ------------------------------
> >
> > Message: 4
> > Date: Tue, 29 Jul 2014 15:29:38 -0400
> > From: wuffers <m...@wuffers.net>
> > To: Richard Elling <richard.ell...@richardelling.com>
> > Cc: omnios-discuss <omnios-discuss@lists.omniti.com>
> > Subject: Re: [OmniOS-discuss] Slow scrub performance
> > Message-ID:
> >         <ca+tr_kwx_1hn4tva+-zofjk2mn7re-nfh31smctno7tjjjf...@mail.gmail.com>
> > Content-Type: text/plain; charset="utf-8"
> >
> > Going to try to answer both responses in one message..
> >
> > > Short answer, yes. Keep in mind that
> > >
> > > 1. a scrub runs in the background (so as not to impact production I/O,
> > > this was not always the case and caused serious issues in the past with a
> > > pool being unresponsive due to a scrub)
> > >
> > > 2. a scrub essentially walks the zpool examining every transaction in
> > > order (as does a resilver)
> > >
> > > So the time to complete a scrub depends on how many write transactions
> > > since the pool was created (which is generally related to the amount of
> > > data but not always). You are limited by the random I/O capability of the
> > > disks involved. With VMs I assume this is a file server, so the I/O size
> > > will also affect performance.
> >
> >
> > I haven't noticed any slowdowns in our virtual environments, so I guess
> > it's a good thing the scrub priority is low enough not to impact workloads.
> >
> > > Run the numbers: you are scanning 24.2TB at about 5.5MB/sec, i.e.
> > > 4,613,734 seconds or 54 days. And that assumes the same rate for all
> > > of the scan. The rate will change as other I/O competes for resources.
> > >
> >
> > The number was fluctuating when I started the scrub, and I had seen it go
> > as high as 35MB/s at one point. I am certain that our Hyper-V workload has
> > increased since the last scrub, so this does make sense.
> >
> >
> > > Looks like you have a fair bit of activity going on (almost 1MB/sec of
> > > writes per spindle).
> > >
> >
> > As Richard correctly states below, this is the aggregate since boot (uptime
> > ~56 days). I have another output from iostat as per his instructions below.
> >
> >
> > > Since this is storage for VMs, I assume this is the storage server for
> > > separate compute servers? Have you tuned the block size for the file
> > > share you are using? That can make a huge difference in performance.
> > >
> >
> > Both the Hyper-V and VMware LUNs are created with 64K block sizes. From
> > what I've read of other performance and tuning articles, that is the
> > optimal block size (I did some limited testing when first configuring the
> > SAN, but results were somewhat inconclusive). Hyper-V hosts our testing
> > environment (we integrate with TFS, a MS product, so we have no choice
> > here) and probably makes up the bulk of the workload (~300+ test VMs with
> > various OSes). VMware hosts our production servers (Exchange, file servers,
> > SQL, AD, etc - ~50+ VMs).
> >
> > > I also noted that you only have a single LOG device. Best practice is
> > > to mirror log devices so you do not lose any data in flight if hit by
> > > a power outage (of course, if this server has more UPS runtime than
> > > all the clients, that may not matter).
> > >
> >
> > Actually, I do have a mirrored ZIL device; it's just disabled at this
> > time (my ZIL devices are ZeusRAMs). At some point I was troubleshooting
> > some kernel panics (which turned out to be a faulty SSD in the rpool)
> > and hadn't re-enabled it yet. Thanks for the reminder (and yes, we do
> > have a UPS as well).
> >
> > And oops.. re-attaching the ZIL as a mirror triggered a resilver,
> > suspending or canceling the scrub. I will monitor this and restart the
> > scrub if it doesn't resume by itself.
> >
> >   pool: tank
> >  state: ONLINE
> > status: One or more devices is currently being resilvered.  The pool will
> >         continue to function, possibly in a degraded state.
> > action: Wait for the resilver to complete.
> >   scan: resilver in progress since Tue Jul 29 14:48:48 2014
> >     3.89T scanned out of 24.5T at 3.06G/s, 1h55m to go
> >     0 resilvered, 15.84% done
> >
> > At least it's going very fast. EDIT: Now about 67% done as I finish writing
> > this, speed dropping to ~1.3G/s.
> >
> > >> maybe, maybe not
> > >>
> > >> this is slower than most, surely slower than desired
> > >>
> > Unfortunately, reattaching the mirror to my log device triggered a
> > resilver. Not sure if this is the desired behavior, but yes, 5.5MB/s
> > seems quite slow. Hopefully the scrub will resume where it left off
> > after the resilver.
> >
> >
> > >> The estimate is often very wrong, especially for busy systems.
> > >> If this is an older ZFS implementation, this pool is likely getting
> > >> pounded by the ZFS write throttle. There are some tunings that can be
> > >> applied, but the old write throttle is not a stable control system,
> > >> so it will always be a little bit unpredictable.
> > >>
> > >
> > The system is on r151008 (my BE states that I upgraded back in February,
> > putting me at r151008j or so), with all the pools upgraded for the new
> > enhancements and the new L2ARC compression feature activated. Reading
> > the release notes, the ZFS write throttle enhancements have been in
> > since r151008e, so I should be good there.
> >
> >
> > > # iostat -xnze
> > >>
> > >> Unfortunately, this is the performance since boot and is not suitable
> > >> for performance analysis unless the system has been rebooted in the
> > >> past 10 minutes or so. You'll need to post the second batch from
> > >> "iostat -zxCn 60 2"
> > >>
> > >
> > Ah yes, that was my mistake. Here is the output from the second batch
> > (taken before re-attaching the log mirror):
> >
> > # iostat -zxCn 60 2
> >
> >                     extended device statistics
> >     r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
> >   255.7 1077.7 6294.0 41335.1  0.0  1.9    0.0    1.4   0 153 c1
> >     5.3   23.9  118.5  811.9  0.0  0.0    0.0    1.1   0   3 c1t5000C50055F8723Bd0
> >     5.9   14.5  110.0  834.3  0.0  0.0    0.0    1.3   0   2 c1t5000C50055E66B63d0
> >     5.6   16.6  123.8  822.7  0.0  0.0    0.0    1.3   0   2 c1t5000C50055F87E73d0
> >     4.7   27.8  118.6  796.6  0.0  0.0    0.0    1.3   0   3 c1t5000C50055F8BFA3d0
> >     5.6   14.5  139.7  833.8  0.0  0.0    0.0    1.6   0   3 c1t5000C50055F9E123d0
> >     4.4   27.1  112.3  825.2  0.0  0.0    0.0    0.8   0   2 c1t5000C50055F9F0B3d0
> >     5.0   20.2  121.7  803.4  0.0  0.0    0.0    1.2   0   3 c1t5000C50055F9D3B3d0
> >     5.4   26.4  137.0  857.3  0.0  0.0    0.0    1.4   0   4 c1t5000C50055E4FDE7d0
> >     4.7   12.3  123.7  832.7  0.0  0.0    0.0    2.0   0   3 c1t5000C50055F9A607d0
> >     5.0   23.9  125.9  830.9  0.0  0.0    0.0    1.3   0   3 c1t5000C50055F8CDA7d0
> >     4.5   31.4  112.2  814.6  0.0  0.0    0.0    1.1   0   3 c1t5000C50055E65877d0
> >     5.2   24.4  130.6  872.5  0.0  0.0    0.0    1.2   0   3 c1t5000C50055F9E7D7d0
> >     4.1   21.8  103.7  797.2  0.0  0.0    0.0    1.1   0   3 c1t5000C50055FA0AF7d0
> >     5.5   24.8  129.8  802.8  0.0  0.0    0.0    1.5   0   4 c1t5000C50055F9FE87d0
> >     5.7   17.7  137.2  797.6  0.0  0.0    0.0    1.4   0   3 c1t5000C50055F9F91Bd0
> >     6.0   30.6  139.1  852.0  0.0  0.1    0.0    1.5   0   4 c1t5000C50055F9FEABd0
> >     6.1   34.1  137.8  929.2  0.0  0.1    0.0    1.9   0   6 c1t5000C50055F9F63Bd0
> >     4.1   15.9  101.8  791.4  0.0  0.0    0.0    1.6   0   3 c1t5000C50055F9F3EBd0
> >     6.4   23.2  155.2  878.6  0.0  0.0    0.0    1.1   0   3 c1t5000C50055F9F80Bd0
> >     4.5   23.5  106.2  825.4  0.0  0.0    0.0    1.1   0   3 c1t5000C50055F9FB8Bd0
> >     4.0   23.2  101.1  788.9  0.0  0.0    0.0    1.3   0   3 c1t5000C50055F9F92Bd0
> >     4.4   11.3  125.7  782.3  0.0  0.0    0.0    1.9   0   3 c1t5000C50055F8905Fd0
> >     4.6   20.4  129.2  823.0  0.0  0.0    0.0    1.5   0   3 c1t5000C50055F8D48Fd0
> >     5.1   19.7  142.9  887.2  0.0  0.0    0.0    1.7   0   3 c1t5000C50055F9F89Fd0
> >     5.6   11.4  129.1  776.0  0.0  0.0    0.0    1.9   0   3 c1t5000C50055F9EF2Fd0
> >     5.6   23.7  137.4  811.9  0.0  0.0    0.0    1.2   0   3 c1t5000C50055F8C3ABd0
> >     6.8   13.9  132.4  834.3  0.0  0.0    0.0    1.8   0   3 c1t5000C50055E66053d0
> >     5.2   26.7  126.9  857.3  0.0  0.0    0.0    1.2   0   3 c1t5000C50055E66503d0
> >     4.2   27.1  104.6  825.2  0.0  0.0    0.0    1.0   0   3 c1t5000C50055F9D3E3d0
> >     5.2   30.7  140.9  852.0  0.0  0.1    0.0    1.5   0   4 c1t5000C50055F84FB7d0
> >     5.4   16.1  124.3  791.4  0.0  0.0    0.0    1.7   0   3 c1t5000C50055F8E017d0
> >     3.8   31.4   89.7  814.6  0.0  0.0    0.0    1.1   0   4 c1t5000C50055E579F7d0
> >     4.6   27.5  116.0  796.6  0.0  0.1    0.0    1.6   0   4 c1t5000C50055E65807d0
> >     4.0   21.5   99.7  797.2  0.0  0.0    0.0    1.1   0   3 c1t5000C50055F84A97d0
> >     4.7   20.2  116.3  803.4  0.0  0.0    0.0    1.4   0   3 c1t5000C50055F87D97d0
> >     5.0   11.5  121.5  776.0  0.0  0.0    0.0    2.0   0   3 c1t5000C50055F9F637d0
> >     4.9   11.3  112.4  782.3  0.0  0.0    0.0    2.3   0   3 c1t5000C50055E65ABBd0
> >     5.3   11.8  142.5  832.7  0.0  0.0    0.0    2.4   0   3 c1t5000C50055F8BF9Bd0
> >     5.0   20.3  121.4  823.0  0.0  0.0    0.0    1.7   0   3 c1t5000C50055F8A22Bd0
> >     6.6   24.3  170.3  872.5  0.0  0.0    0.0    1.3   0   3 c1t5000C50055F9379Bd0
> >     5.8   16.3  121.7  822.7  0.0  0.0    0.0    1.3   0   2 c1t5000C50055E57A5Fd0
> >     5.3   17.7  146.5  797.6  0.0  0.0    0.0    1.4   0   3 c1t5000C50055F8CCAFd0
> >     5.7   34.1  141.5  929.2  0.0  0.1    0.0    1.7   0   5 c1t5000C50055F8B80Fd0
> >     5.5   23.8  125.7  830.9  0.0  0.0    0.0    1.2   0   3 c1t5000C50055F9FA1Fd0
> >     5.0   23.2  127.9  878.6  0.0  0.0    0.0    1.1   0   3 c1t5000C50055E65F0Fd0
> >     5.2   14.0  163.7  833.8  0.0  0.0    0.0    2.0   0   3 c1t5000C50055F8BE3Fd0
> >     4.6   18.9  122.8  887.2  0.0  0.0    0.0    1.6   0   3 c1t5000C50055F8B21Fd0
> >     5.5   23.6  137.4  825.4  0.0  0.0    0.0    1.5   0   3 c1t5000C50055F8A46Fd0
> >     4.9   24.6  116.7  802.8  0.0  0.0    0.0    1.4   0   4 c1t5000C50055F856CFd0
> >     4.9   23.4  120.8  788.9  0.0  0.0    0.0    1.4   0   3 c1t5000C50055E6606Fd0
> >   234.9  170.1 4079.9 11127.8  0.0  0.2    0.0    0.5   0   9 c2
> >   119.0   28.9 2083.8  670.8  0.0  0.0    0.0    0.3   0   3 c2t500117310015D579d0
> >   115.9   27.4 1996.1  634.2  0.0  0.0    0.0    0.3   0   3 c2t50011731001631FDd0
> >     0.0  113.8    0.0 9822.8  0.0  0.1    0.0    1.0   0   2 c2t5000A72A3007811Dd0
> >     0.1   18.5    0.0   64.8  0.0  0.0    0.0    0.0   0   0 c4
> >     0.1    9.2    0.0   32.4  0.0  0.0    0.0    0.0   0   0 c4t0d0
> >     0.0    9.2    0.0   32.4  0.0  0.0    0.0    0.0   0   0 c4t1d0
> >   229.8   58.1 3987.4 1308.0  0.0  0.1    0.0    0.3   0   6 c12
> >   114.2   27.7 1994.8  626.0  0.0  0.0    0.0    0.3   0   3 c12t500117310015D59Ed0
> >   115.5   30.4 1992.6  682.0  0.0  0.0    0.0    0.3   0   3 c12t500117310015D54Ed0
> >     0.1   17.1    0.0   64.8  0.0  0.0    0.6    0.1   0   0 rpool
> >   720.3 1298.4 14361.2 53770.8 18.7  2.3    9.3    1.1   6  68 tank
> >
> > Is 153% busy correct on c1? It seems to me that the disks are quite
> > "busy" but are handling the workload just fine (wait at 6% and asvc_t
> > at 1.1ms).
> >
> > Interestingly, this is the same output now that the resilver is running:
> >
> >                     extended device statistics
> >     r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
> >  2876.9 1041.1 25400.7 38189.1  0.0 37.9    0.0    9.7   0 2011 c1
> >    60.8   26.1  540.1  845.2  0.0  0.7    0.0    8.3   0  39 c1t5000C50055F8723Bd0
> >    58.4   14.2  511.6  740.7  0.0  0.7    0.0   10.1   0  39 c1t5000C50055E66B63d0
> >    60.2   16.3  529.3  756.1  0.0  0.8    0.0   10.1   0  41 c1t5000C50055F87E73d0
> >    57.5   24.9  527.6  841.7  0.0  0.7    0.0    9.0   0  40 c1t5000C50055F8BFA3d0
> >    57.9   14.5  543.5  765.1  0.0  0.7    0.0    9.8   0  38 c1t5000C50055F9E123d0
> >    57.9   23.9  516.6  806.9  0.0  0.8    0.0    9.3   0  40 c1t5000C50055F9F0B3d0
> >    59.8   24.6  554.1  857.5  0.0  0.8    0.0    9.6   0  42 c1t5000C50055F9D3B3d0
> >    56.5   21.0  480.4  715.7  0.0  0.7    0.0    8.9   0  37 c1t5000C50055E4FDE7d0
> >    54.8    9.7  473.5  737.9  0.0  0.7    0.0   11.2   0  39 c1t5000C50055F9A607d0
> >    55.8   20.2  457.3  708.7  0.0  0.7    0.0    9.9   0  40 c1t5000C50055F8CDA7d0
> >    57.8   28.6  487.0  796.1  0.0  0.9    0.0    9.9   0  45 c1t5000C50055E65877d0
> >    60.8   27.1  572.6  823.7  0.0  0.8    0.0    8.8   0  41 c1t5000C50055F9E7D7d0
> >    55.8   21.1  478.2  766.6  0.0  0.7    0.0    9.7   0  40 c1t5000C50055FA0AF7d0
> >    57.0   22.8  528.3  724.5  0.0  0.8    0.0    9.6   0  41 c1t5000C50055F9FE87d0
> >    56.2   10.8  465.2  715.6  0.0  0.7    0.0   10.4   0  38 c1t5000C50055F9F91Bd0
> >    59.2   29.4  524.6  740.9  0.0  0.8    0.0    8.9   0  41 c1t5000C50055F9FEABd0
> >    57.3   30.7  496.7  788.3  0.0  0.8    0.0    9.1   0  42 c1t5000C50055F9F63Bd0
> >    55.5   16.3  461.9  652.9  0.0  0.7    0.0   10.1   0  39 c1t5000C50055F9F3EBd0
> >    57.2   22.1  495.1  701.1  0.0  0.8    0.0    9.8   0  41 c1t5000C50055F9F80Bd0
> >    59.5   30.2  543.1  741.8  0.0  0.9    0.0    9.6   0  45 c1t5000C50055F9FB8Bd0
> >    56.5   25.1  515.4  786.9  0.0  0.7    0.0    8.6   0  38 c1t5000C50055F9F92Bd0
> >    61.8   12.5  540.6  790.9  0.0  0.8    0.0   10.3   0  41 c1t5000C50055F8905Fd0
> >    57.0   19.8  521.0  774.3  0.0  0.7    0.0    9.6   0  39 c1t5000C50055F8D48Fd0
> >    56.3   16.3  517.7  724.7  0.0  0.7    0.0    9.9   0  38 c1t5000C50055F9F89Fd0
> >    57.0   13.4  504.5  790.5  0.0  0.8    0.0   10.7   0  40 c1t5000C50055F9EF2Fd0
> >    55.0   26.1  477.6  845.2  0.0  0.7    0.0    8.3   0  36 c1t5000C50055F8C3ABd0
> >    57.8   14.1  518.7  740.7  0.0  0.8    0.0   10.8   0  41 c1t5000C50055E66053d0
> >    55.9   20.8  490.2  715.7  0.0  0.7    0.0    9.0   0  37 c1t5000C50055E66503d0
> >    57.0   24.1  509.7  806.9  0.0  0.8    0.0   10.0   0  41 c1t5000C50055F9D3E3d0
> >    59.1   29.2  504.1  740.9  0.0  0.8    0.0    9.3   0  44 c1t5000C50055F84FB7d0
> >    54.4   16.3  449.5  652.9  0.0  0.7    0.0   10.4   0  39 c1t5000C50055F8E017d0
> >    57.8   28.4  503.3  796.1  0.0  0.9    0.0   10.1   0  45 c1t5000C50055E579F7d0
> >    58.2   24.9  502.0  841.7  0.0  0.8    0.0    9.2   0  40 c1t5000C50055E65807d0
> >    58.2   20.7  513.4  766.6  0.0  0.8    0.0    9.8   0  41 c1t5000C50055F84A97d0
> >    56.5   24.9  508.0  857.5  0.0  0.8    0.0    9.2   0  40 c1t5000C50055F87D97d0
> >    53.4   13.5  449.9  790.5  0.0  0.7    0.0   10.7   0  38 c1t5000C50055F9F637d0
> >    57.0   11.8  503.0  790.9  0.0  0.7    0.0   10.6   0  39 c1t5000C50055E65ABBd0
> >    55.4    9.6  461.1  737.9  0.0  0.8    0.0   11.6   0  40 c1t5000C50055F8BF9Bd0
> >    55.7   19.7  484.6  774.3  0.0  0.7    0.0    9.9   0  40 c1t5000C50055F8A22Bd0
> >    57.6   27.1  518.2  823.7  0.0  0.8    0.0    8.9   0  40 c1t5000C50055F9379Bd0
> >    59.6   17.0  528.0  756.1  0.0  0.8    0.0   10.1   0  41 c1t5000C50055E57A5Fd0
> >    61.2   10.8  530.0  715.6  0.0  0.8    0.0   10.7   0  40 c1t5000C50055F8CCAFd0
> >    58.0   30.8  493.3  788.3  0.0  0.8    0.0    9.4   0  43 c1t5000C50055F8B80Fd0
> >    56.5   19.9  490.7  708.7  0.0  0.8    0.0   10.0   0  40 c1t5000C50055F9FA1Fd0
> >    56.1   22.4  484.2  701.1  0.0  0.7    0.0    9.5   0  39 c1t5000C50055E65F0Fd0
> >    59.2   14.6  560.9  765.1  0.0  0.7    0.0    9.8   0  39 c1t5000C50055F8BE3Fd0
> >    57.9   16.2  546.0  724.7  0.0  0.7    0.0   10.1   0  40 c1t5000C50055F8B21Fd0
> >    59.5   30.0  553.2  741.8  0.0  0.9    0.0    9.8   0  45 c1t5000C50055F8A46Fd0
> >    57.4   22.5  504.0  724.5  0.0  0.8    0.0    9.6   0  41 c1t5000C50055F856CFd0
> >    58.4   24.6  531.4  786.9  0.0  0.7    0.0    8.4   0  38 c1t5000C50055E6606Fd0
> >   511.0  161.4 7572.1 11260.1  0.0  0.3    0.0    0.4   0  14 c2
> >   252.3   20.1 3776.3  458.9  0.0  0.1    0.0    0.2   0   6 c2t500117310015D579d0
> >   258.8   18.0 3795.7  350.0  0.0  0.1    0.0    0.2   0   6 c2t50011731001631FDd0
> >     0.0  123.4    0.0 10451.1  0.0  0.1    0.0    1.0   0   3 c2t5000A72A3007811Dd0
> >     0.2   16.1    1.9   56.7  0.0  0.0    0.0    0.0   0   0 c4
> >     0.2    8.1    1.6   28.3  0.0  0.0    0.0    0.0   0   0 c4t0d0
> >     0.0    8.1    0.3   28.3  0.0  0.0    0.0    0.0   0   0 c4t1d0
> >   495.6  163.6 7168.9 11290.3  0.0  0.2    0.0    0.4   0  14 c12
> >     0.0  123.4    0.0 10451.1  0.0  0.1    0.0    1.0   0   3 c12t5000A72B300780FFd0
> >   248.2   18.1 3645.8  323.0  0.0  0.1    0.0    0.2   0   5 c12t500117310015D59Ed0
> >   247.4   22.1 3523.1  516.2  0.0  0.1    0.0    0.2   0   6 c12t500117310015D54Ed0
> >     0.2   14.8    1.9   56.7  0.0  0.0    0.6    0.1   0   0 rpool
> >  3883.5 1357.7 40141.6 60739.5 22.8 38.6    4.4    7.4  54 100 tank
> >
> > It is very busy, with a lot of wait % and higher asvc_t (2011% busy on
> > c1?!). I'm assuming resilvers are a lot more aggressive than scrubs.
> >
> > >> There are many variables here, the biggest of which is the current
> > >> non-scrub load.
> > >>
> > I might have lost 2 weeks of scrub time, depending on whether the scrub
> > will resume where it left off. I'll update when I can.
> >
> > ------------------------------
> >
> > End of OmniOS-discuss Digest, Vol 28, Issue 8
> > *********************************************
> >
> >
>
>
>
>

-- 
Tobi Oetiker, OETIKER+PARTNER AG, Aarweg 15 CH-4600 Olten, Switzerland
www.oetiker.ch t...@oetiker.ch +41 62 775 9902

_______________________________________________
OmniOS-discuss mailing list
OmniOS-discuss@lists.omniti.com
http://lists.omniti.com/mailman/listinfo/omnios-discuss
