On Sat, 4 Jul 2009, Phil Harman wrote:
> This is not a new problem. It seems that I have been banging my head
> against this from the time I started using zfs.
>
> I'd like to see mpstat 1 for each case, on an otherwise idle system,
> but then there's probably a whole lot of dtrace I'd like to do ...
> but I'm just off on vacation for a week, and this will probably have
> to be my last post on this thread until I'm back.
Shame on you for taking well-earned vacation in my time of need. :-)
'mpstat 1' output when I/O is good:
CPU minf mjf xcal intr ithr  csw icsw migr smtx srw  syscl usr sys wt idl
  0    0   0    0 1700  247 2187   11  214   11   0  10270   2   5  0  93
  1    0   0    0 1478    5 2812   18  241   10   0  18424   2   4  0  94
  2    0   0    1 1210    0 2392   60  185   19   0 301927   5  28  0  67
  3    0   0    0 3242 2320 2028   60  181    9   0 222500   3  24  0  73
CPU minf mjf xcal intr ithr  csw icsw migr smtx srw  syscl usr sys wt idl
  0    0   0    0 1862  244 2554    9  231    6   0   2880   2   3  0  95
  1    0   0    0 1158    1 2055   17  221    7   0   4479   1   3  0  96
  2    0   0    0 1037    0 2051   65  186   14   0 250211   4  24  0  73
  3    0   0    0 3037 2167 2101   62  186   11   0 251393   4  25  0  71
'mpstat 1' output when I/O is bad:
CPU minf mjf xcal intr ithr  csw icsw migr smtx srw  syscl usr sys wt idl
  0    0   0    0  859  243 1006    5  106    0   0  20733   2   3  0  95
  1    0   0    0  504   15  942   12   84    6   0  74009   3   6  0  91
  2    0   0    0  192    0  338    0   48    0   0     38   0   1  0  99
  3    0   0    0  549  376  522    1   36    0   0    135   0   2  0  98
Notice how intensely unbusy the CPU cores are when I/O is bad.
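To put a number on that, here's a quick one-liner sketch for averaging the idl column of a captured 'mpstat 1' sample (the file name mpstat.out is hypothetical; idl is field 16 per the header above):

```shell
# Average the idl column (field 16) across the per-CPU rows of one
# 'mpstat 1' sample saved in mpstat.out (hypothetical file name).
# The pattern skips the repeated "CPU minf mjf ..." header lines.
awk '/^ *[0-9]+ / { idle += $16; n++ }
     END { if (n) printf "avg idle: %.1f%%\n", idle / n }' mpstat.out
```

Run against the bad-I/O sample above it reports avg idle: 95.8%, versus roughly 82% for the good case, even though the good case is moving far more data.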
Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss