Hi Richard,
Yesterday Richard Elling wrote:
>>> On Dec 22, 2013, at 4:23 PM, Tobias Oetiker t...@oetiker.ch wrote:
>>>> Hi Richard,
>>>> Yesterday Richard Elling wrote:
>>>> c) shouldn't the smarter write throttle change
>>>> https://github.com/illumos/illumos-gate/commit/69962b5647e4a8b9b14998733b765925381b727e
>>>> have helped with this by making zfs do its internal work
>>>> at a lower priority?
>>> Yes, but the default zfs_vdev_max_pending remains at 10. Once
>>> the I/Os are sent to disk, there is no priority scheduling. You
>>> should consider lowering zfs_vdev_max_pending to allow the ZIO
>>> scheduler to do a better job of rescheduling the more important
>>> I/Os.
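[For reference: on kernels from before the write-throttle rework, where this tunable still exists, it can be changed at runtime with mdb or persistently via /etc/system. A minimal sketch; the value 4 is an arbitrary example, not a recommendation:

   # runtime change, effective immediately; /W writes a 32-bit value,
   # 0t4 is mdb notation for decimal 4
   echo 'zfs_vdev_max_pending/W0t4' | mdb -kw

   # persistent across reboots
   echo 'set zfs:zfs_vdev_max_pending = 4' >> /etc/system
]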
>> the patch mentioned introduces a ton of new tunables, but it
>> removes zfs_vdev_max_pending
> Indeed, these are now zfs_vdev_max_active and friends.
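[For reference, the replacement knobs can be set the same way via /etc/system. The names and defaults below are from the illumos vdev_queue.c of that era, so verify them against your build; the point of the per-class split is that lowering the async write class relative to the sync read class lets the scheduler favor interactive reads over txg writeback:

   set zfs:zfs_vdev_max_active = 1000             # overall per-vdev cap
   set zfs:zfs_vdev_sync_read_max_active = 10     # reads applications block on
   set zfs:zfs_vdev_async_write_min_active = 1
   set zfs:zfs_vdev_async_write_max_active = 10   # lower this to favor sync reads
]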
> It is very unclear to me what your Graphite graphs are attempting to
> show. Is this data from the pool itself, or from vdevs under the pool?
> The pool-level stats are mostly useless for this analysis; we need to
> see the per-vdev stats.
the reason I am interested in this is that while the removal operation
is active, all access to data not already in the ARC is very slow ...
an 'ls' in a directory with 10 entries takes several seconds to
complete.
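[To put a number on the stall, truss with per-call time deltas on the slow command shows where the time goes; the path below is a stand-in for a directory on the affected pool:

   truss -D ls /slowpool/somedir
]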
iostat -xnd 10 returns lines like this:
                             extended device statistics
    r/s    w/s   kr/s    kw/s      wait  actv    wsvc_t  asvc_t  %w   %b  device
   85.3  265.1  350.4  3280.5      22.4   0.9      63.8     2.5   6    8  fast
    0.0    0.0    0.0     0.0       0.0   0.0       0.0     0.0   0    0  rpool
    0.0    0.2    0.0     0.0       0.0   0.0       0.0     0.0   0    0  c0t5001517BB2AD9526d0
    0.0    0.2    0.0     0.0       0.0   0.0       0.0     0.0   0    0  c0t5001517BB2AD9589d0
    0.0   92.8    0.0  1205.7       0.0   0.0       0.0     0.1   0    1  c0t5001517803D28EA3d0
    0.6  331.8    0.4   789.1       0.0   1.4       0.0     4.2   0   99  c24t5000CCA03E45E11Dd0
   11.1   25.6   49.9   302.8       0.0   0.1       0.0     3.2   0    5  c20t50014EE700052642d0
    0.3  394.6    0.2   919.5       0.0   1.2       0.0     3.1   0   99  c25t5000CCA03E45E25Dd0
   10.5   24.4   43.4   302.4       0.0   0.1       0.0     3.2   0    5  c19t50014EE70005217Ed0
    0.0    0.0    0.0     0.0       0.0   0.0       0.0     0.0   0    0  c29t5000CCA03E404FDDd0
   12.0   23.8   42.5   301.2       0.0   0.1       0.0     3.4   0    5  c15t50014EE70005248Ed0
    0.4  427.3    0.3   960.5       0.0   1.7       0.0     3.9   0   99  c28t5000CCA03E426985d0
    9.2   24.3   45.5   302.1       0.0   0.1       0.0     3.2   0    5  c18t50014EE7AAAFCC0Ad0
    0.6  380.2    0.5  1061.0       0.0   1.8       0.0     4.6   0   99  c22t5000CCA03E45E211d0
   11.1   24.8   49.1   301.2       0.0   0.1       0.0     3.0   0    5  c14t50014EE7555A792Ed0
    0.4  330.6    0.3   800.3       0.0   1.3       0.0     3.9   0   99  c26t5000CCA03E420D4Dd0
   10.4   24.7   35.8   302.7       0.0   0.1       0.0     2.6   0    4  c17t50014EE7555A7B7Ad0
    0.6  371.7    0.5   901.7       0.0   1.2       0.0     3.1   0   99  c23t5000CCA03E434C41d0
   10.4   27.3   52.0   302.1       0.0   0.1       0.0     3.1   0    5  c13t50014EE700052386d0
    0.3  347.4    0.3   766.9       0.0   1.7       0.0     4.8   0  100  c27t5000CCA03E4229ADd0
   10.6   24.2   32.3   301.3       0.0   0.1       0.0     2.8   0    5  c16t50014EE7555A7B4Ad0
    3.2 2607.4    2.3  6539.9 3610203.4  10.2 1382912.3     3.9 100  100  slow
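[The same per-vdev view from the pool's own perspective, which can be easier to line up with the vdev tree, is available with:

   zpool iostat -v slowpool 10
]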
the config of the pool is this:
NAME                         STATE     READ WRITE CKSUM
slowpool                     ONLINE       0     0     0
  raidz2-0                   ONLINE       0     0     0
    c22t5000CCA03E45E211d0   ONLINE       0     0     0
    c23t5000CCA03E434C41d0   ONLINE       0     0     0
    c24t5000CCA03E45E11Dd0   ONLINE       0     0     0
    c25t5000CCA03E45E25Dd0   ONLINE       0     0     0
    c26t5000CCA03E420D4Dd0   ONLINE       0     0     0
    c27t5000CCA03E4229ADd0   ONLINE       0     0     0
    c28t5000CCA03E426985d0   ONLINE       0     0     0
logs
  mirror-1                   ONLINE       0     0     0
    c0t5001517BB2AD9526d0s3  ONLINE       0     0     0
    c0t5001517BB2AD9589d0s3  ONLINE       0     0     0
cache
  c0t5001517803D28EA3d0s1    ONLINE       0     0     0
spares
  c29t5000CCA03E404FDDd0     AVAIL
> -- richard
--
Tobi Oetiker, OETIKER+PARTNER AG, Aarweg 15 CH-4600 Olten, Switzerland
http://it.oetiker.ch t...@oetiker.ch ++41 62 775 9902 / sb: -9900