[OpenIndiana-discuss] openindiana.org website cert expired on December 27

2022-01-03 Thread Brett Dikeman
Greetings,

Could someone reach out to the group responsible for the website
and let them know that their Let's Encrypt auto-renewal setup...isn't
auto-renewing, and likely hasn't been for 1-2 months?
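
(For anyone who wants to confirm from their end, a quick check with
plain OpenSSL - nothing OpenIndiana-specific, so treat it as a sketch:

# echo | openssl s_client -connect openindiana.org:443 2>/dev/null | \
    openssl x509 -noout -enddate

...which prints a "notAfter=" line with the certificate's expiry date.)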

Brett
___
openindiana-discuss mailing list
openindiana-discuss@openindiana.org
https://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] oi_151a8 is out

2013-08-21 Thread Brett Dikeman
Could someone summarize what the ZFS changes are?

On Mon, Aug 12, 2013 at 2:25 AM, Dave Koelmeyer
 wrote:
> Wot, no-one's mentioned this yet?
>
> http://wiki.openindiana.org/oi/oi_151a_prestable8+Release+Notes
>
> --
> Dave Koelmeyer
> http://blog.davekoelmeyer.co.nz
>
>
> ___
> OpenIndiana-discuss mailing list
> OpenIndiana-discuss@openindiana.org
> http://openindiana.org/mailman/listinfo/openindiana-discuss

___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


[OpenIndiana-discuss] leaked space?

2013-07-23 Thread Brett Dikeman
What exactly does this mean? What is "leaked space"? Is it repaired by
virtue of zdb walking the filesystem, etc?

# zdb -b pool

Traversing all blocks to verify nothing leaked ...

Error counts:

errno  count
leaked space: vdev 0, offset 0x7cfeea00, size 3072
leaked space: vdev 0, offset 0x61c3507800, size 9216
leaked space: vdev 0, offset 0x61c40b9e00, size 9216
leaked space: vdev 0, offset 0x61c3f00e00, size 9216
leaked space: vdev 0, offset 0x61c4e6b600, size 9216
leaked space: vdev 0, offset 0x61c5e48a00, size 9216
leaked space: vdev 0, offset 0x61c5b47e00, size 9216
leaked space: vdev 0, offset 0x61c47e0400, size 9216
leaked space: vdev 0, offset 0x629b5c0600, size 9216
leaked space: vdev 0, offset 0x629bb86a00, size 9216
leaked space: vdev 0, offset 0x629ba3ac00, size 9216
leaked space: vdev 0, offset 0x61c6282200, size 9216
leaked space: vdev 0, offset 0x6a7c11c800, size 632832
leaked space: vdev 0, offset 0x6a7e7aae00, size 5220864
leaked space: vdev 0, offset 0x6a7e6ad000, size 158208
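
As I understand it (and I'm hedging here), zdb is strictly read-only:
it walks the block tree from the uberblock and compares what it finds
against the space maps, so "leaked space" is blocks the space maps
consider allocated but that nothing references.  Running zdb repairs
nothing, and neither of the usual follow-ups reclaims leaked blocks:

# zdb -bb pool        more verbose; adds a per-type block breakdown
# zpool scrub pool    verifies checksums against redundant copies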

___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


[OpenIndiana-discuss] can't get into system after hostname change

2011-07-14 Thread Brett Dikeman
Greetings all,

I just changed the hostname on an OpenIndiana machine by changing
/etc/hostname.(ifname), /etc/nodename, and rebooting.  Networking is
down, and worse, I can't login using known-good credentials.

On the console was an error about SMF not starting, and on each login
attempt, this appears:

"Solaris_audit getadrinfo(hobbes) failed[node name or service name not
known]: Error 0
Solaris_audit adt_get_local_address failed, no Audit IP address
available, faking loopback and error: Network is down
Login incorrect"

The system has a static IP and is not using directory services.
Obviously, this is a big problem...and I'm a bit under the gun to fix
it.  Suggestions would be most welcome; I have console access, and
possibly virtual media access, at the moment.
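
My working theory, hedged: the getaddrinfo failure suggests the new
nodename isn't resolvable locally, so from the console I plan to check
that /etc/hosts has an entry matching /etc/nodename.  A sketch (the
address below is a placeholder for the machine's real static IP):

# grep hobbes /etc/hosts
# echo '192.0.2.10   hobbes' >> /etc/hosts
# svcadm restart svc:/system/identity:node     ...or simply reboot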

Much appreciated,

Brett

___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] measuring fragmentation on ZFS?

2011-03-14 Thread Brett Dikeman
On Mon, Mar 14, 2011 at 4:23 PM, Roy Sigurd Karlsbakk  
wrote:

> If the filesystem was filled up, and you added another VDEV, the initial 
> VDEV(s) will stay full until you either destroy the pool,
> replace the drives with larger ones (given autoexpand=on) or wait for block 
> pointer rewrite, which may take a while

We've expanded all the drives in the pool.  Since fragmentation is
largely a function of free space and we had to let things get pretty
full, I imagine fragmentation is pretty bad.  I'd like to measure it
if possible.

Also, like I said, we have the option to easily move data on+off the
pool from another drive.

-B

___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


[OpenIndiana-discuss] measuring fragmentation on ZFS?

2011-03-14 Thread Brett Dikeman
Is there a way to query how much fragmentation there is in a filesystem?

We've got a large filesystem which was allowed to get really full
before it was expanded considerably, and we also had deduplication
turned on briefly, which caused the dedupe table to get very, very
big.  While the application that stores data rewrites a lot of it, the
dedupe table still has a lot of entries in it, etc.

We can easily queue up data to move it between filesystems, and have
scratch space elsewhere, so I'm thinking of moving data off and then
back on again, but if there's a painless way to see how bad the
fragmentation is (and tell if moving data off+on again makes a
difference), I'd love to know.
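
The closest thing I've found so far (hedged, since I haven't dug deep)
is dumping the metaslab space maps and eyeballing the free-segment
sizes; "data" below stands in for whatever the pool is called.  Newer
releases reportedly also expose a per-pool fragmentation property:

# zdb -mm data          lots of small, scattered free segments suggest
                        heavy free-space fragmentation
# zpool list -o name,capacity,fragmentation data    later releases only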

Thanks!
Brett
PS:Thanks to those who replied about my earlier questions regarding
ZIL and L2ARC.  I've experimented with both- thanks!

___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


[OpenIndiana-discuss] ZFS log/cache on a loopback device/file?

2011-03-09 Thread Brett Dikeman
Hi all,

We have an SSD with plenty of space on it, but when the OS was
installed, it was partitioned to use all of the disk. I'd like to use
some space on the SSD for a log or cache device for a 4-drive zpool.
Is that possible, for example by using a loopback device or file?  Or
is our only option exporting the ZFS filesystems on the SSD,
repartitioning, and replacing the filesystems?
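
A sketch of the file-backed route, hedged since I haven't tried it:
lofiadm can wrap a file in a block device, which zpool will accept.  The
file name, size, and pool names below are made up, and a cache or log
device backed by a file on another ZFS pool is reportedly risky beyond
experiments:

# mkfile 8g /rpool/l2arc.img     preallocate a file on the SSD's pool
# lofiadm -a /rpool/l2arc.img    prints the new device, e.g. /dev/lofi/1
# zpool add data cache /dev/lofi/1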

Brett

___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] anyone done a HP DL160G5 install?

2011-03-05 Thread Brett Dikeman
On Fri, Mar 4, 2011 at 7:45 PM, Gary  wrote:
> I'm not familiar with Syba cards so I looked them up and none of the
> low profile PCIe cards have Solaris drivers. I did notice one of their
> low profile cards has a Silicon Image Sil3132 chipset but I've no idea
> if that's supported either. Does the DL160G5 not have native SATA
> ports?

Yup, four.  And unfortunately, I need all four ports for the front drive
bays; the fifth port (on the Syba card) was to accommodate a boot SSD.

By the way, if anyone is considering an HP server, skip the HP RAID
cards.  A number of the low-end cards are utter, complete garbage.
One of ours (less than three years old) doesn't even support NCQ!

-B

___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


[OpenIndiana-discuss] anyone done a HP DL160G5 install?

2011-03-04 Thread Brett Dikeman
I can't for the life of me seem to get OpenIndiana text install, on
either USB or CDROM, to boot further than keyboard/language selection.
Usually I get 'silence' from the console after "Configuring devices",
but now I'm staring at a stream of bus timeout errors on ata0.  I did
a BIOS update Just In Case, but no improvement...

The only thing different from usual: an Intel SSD hanging off a Syba
SATA PCI-E card.  I'm now trying the install with that removed...

___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] equivalents for SUNWcryr/SUNWcry?

2011-01-31 Thread Brett Dikeman
On Fri, Jan 28, 2011 at 4:43 PM, Richard Lowe  wrote:
> You shouldn't need to do anything; the functionality formerly provided by
> SUNWcry* (the encryption kit) was
> folded into the base OS a couple of years ago.
>
> Forcing an install would (I think) be a really bad idea.

I meant force-installing Legato Networker, not the SUNWcry*
packages...is this what you meant as well?

-B
PS: Could someone please fix the reply-to settings for the list so
that I can choose whether to reply to the list, privately, or both?

___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


[OpenIndiana-discuss] equivalents for SUNWcryr/SUNWcry?

2011-01-28 Thread Brett Dikeman
Greetings,

We're trying to install Legato Networker, and the client requires
SUNWcryr and SUNWcry.

Is the functionality of these packages provided by an openindiana
install already, and we should force installation?  Or, are there
openindiana equivalents to the Sun packages?
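
One hedged sanity check before forcing anything: the crypto framework
that the old SUNWcry* packages delivered into can be queried directly,
e.g.:

# encrypt -l        lists the available cipher algorithms
# digest -l         likewise for digest algorithms
# cryptoadm list    shows installed providers and mechanisms

If the algorithms Networker needs show up there, the functionality is
present regardless of the package names.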

Thanks!
Brett

___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


[OpenIndiana-discuss] slow zfs scrub, fixed after reboot?

2010-12-15 Thread Brett Dikeman
Hi all,

I triggered a scrub of a ZFS pool, and it was going at a glacial
pace...about 1GB over 4 hours.  After rebooting the machine (which was
up for 16 days), the scrub is now running at more normal speeds
(100+ MB/sec, sometimes peaking at 200 MB/sec), but still not very
consistent, even with a 5-second average.

Previously, I had turned on de-duplication (turns out all our data is
unique, so the table lookups were extremely expensive), but have since
turned it off and we have a fair amount of turnover data-wise, so the
table has shrunk to a fraction of its original size.
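
To verify the table really has shrunk, zdb will print the DDT entry
counts and per-entry sizes; a hedged sketch, with "data" standing in
for the pool name:

# zdb -DD data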

Also: is it normal that I haven't seen any package updates since
switching to openindiana, shortly after it was announced?  I was
expecting at least some security releases and such...or do I need to
add another repository, similar to how opensolaris had release and dev
branches?
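
On the update question, a hedged sketch of what I'd check first: which
publisher the image points at, and a dry-run update:

# pkg publisher          shows the configured origins
# pkg image-update -nv   dry run; lists what would be updated, if anything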

Brett

___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] The name: "OpenIndiana" (part II)

2010-10-08 Thread Brett Dikeman
Initially it was said that "the community" didn't get a chance to
provide input, which turned out not to be true.  Several of us pointed
out reasons why a name change right now would be a bad idea.  Nobody
arguing for the change rebutted those reasons.

Can we move on from style for now and work on substance, like a
skinnable mp3 player written in Scheme with a Fortran decoder? :)

-B

On Fri, Oct 8, 2010 at 2:32 PM, Søren Krarup Olesen  wrote:
> Dear all,
>
> Okay, I gave the name a little thought and came up with
>
> http://raptus.dk/tmp/logo.svg
>
> Please ignore the "logo". I was just fooling around with SVG and
> playing with colours...orange and blue. The name however is nice, me
> thinks.
>
> Søren
>
> ___
> OpenIndiana-discuss mailing list
> OpenIndiana-discuss@openindiana.org
> http://openindiana.org/mailman/listinfo/openindiana-discuss
>

___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] ZFS pool slow as molasses

2010-10-07 Thread Brett Dikeman
On Thu, Oct 7, 2010 at 4:57 AM, Thorsten Heit  wrote:

> DDT-sha256-zap-unique: 5 entries, size 123986329 on disk, 118375219 in
> core
>
> DDT histogram (aggregated over all DDTs):
>
> bucket              allocated                       referenced
> __   __   __
> refcnt   blocks   LSIZE   PSIZE   DSIZE   blocks   LSIZE   PSIZE   DSIZE
> --   --   -   -   -   --   -   -   -
>     1        5      5K      5K   6.66K        5      5K      5K   6.66K

> What does this mean?

I believe it means you have five blocks of data in the de-duplication
data table, all unique (refcnt = 1.)  Probably a dotfile or something
that you didn't delete when you moved your data off.

-Brett

___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] ZFS pool slow as molasses

2010-10-06 Thread Brett Dikeman
On Wed, Oct 6, 2010 at 10:14 AM, Brett Dikeman  wrote:
> I'll have more to report in about a week, hopefully good news.  Is
> there a way to measure the size of the lookup table?

Answered my own question again, I think:

# zdb -DD data

DDT-sha256-zap-duplicate: 693 entries, size 994 on disk, 839 in core
DDT-sha256-zap-unique: 10712002 entries, size 354 on disk, 186 in core

According to the histogram also provided by zdb, the table only
contains about 1/4 of our data- 1TB out of ~4TB total.
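
Back-of-the-envelope, taking those "size" figures as bytes per entry:

10,712,002 entries x 354 bytes ≈ 3.8GB on disk
10,712,002 entries x 186 bytes ≈ 2.0GB in core

...which would explain the pain whenever the table outgrows what the
ARC can keep resident.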

The count is, in fact, going down as archives go through the
maintenance process.

By the way, there seems to be a problem with zdb under heavy disk IO:

r...@planb:/crashplan# zdb -DD data
zdb: can't open 'data': I/O error

(this was after several minutes of waiting.)

-B

___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] The name: "OpenIndiana"

2010-10-06 Thread Brett Dikeman
On Wed, Oct 6, 2010 at 10:32 AM, Hillel Lubman  wrote:

> But really - there was no open discussion about the name prior to
> OpenIndiana going public. Maybe now is a good time to run a
> contest for the distro name.

IMHO (and coming into this as someone with an outsider's view, i.e. no
involvement in the project other than as a relatively new user), the
horse is out of the barn.  The time for a new name was before the brand
was established and before all the coverage the project received.  Also,
the name makes sense and thus communicates what the project is reasonably well.

Changing the name will throw all that coverage right out the window
and confuse people.  Maybe it is a shame the name was chosen without
more community input, but it doesn't negate the problems with a name
change right now.

I suggest having some fun with mascots or code names for releases, but
urge leaving the project name alone for a bit, until the project has
some history and some mind- and market-share.

-B

___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] ZFS pool slow as molasses

2010-10-06 Thread Brett Dikeman
On Wed, Oct 6, 2010 at 4:23 AM, Albert Lee  wrote:
> On Tue, Oct 5, 2010 at 9:11 PM, Chris Mosetick  wrote:
>> If it's not too late already, my suggestion at this point is to go back to
>> ide mode in the bios, boot the machine and rsync your data to another
>> physical machine through the network, then do a clean install of openindiana
>> with your bios set to ahci mode, create new pools, then rsync your data back
>> to the new pools.  Time consuming, yes.  But virtually guaranteed to work.
>>
>> It would seem that this machine would have worked better with ahci mode when
>> you first installed opensolaris on it back in the day.  This is a lesson for
>> everyone to check their bios settings thoroughly before installing a new
>> operating system.
>
> Erm, that's a curious bit of modern folklore. How the drive was
> connected previously changes absolutely nothing.

I agree.  Aside from a few hiccups with device names changing (easy to
fix), nothing permanent is done.

> The problem here is probably from having dedup enabled. Disabling it
> does not affect the existing data, which has already been
> deduplicated. Reading any of this will incur a deduplication table
> lookup, which can be quite expensive in terms of random seeks

I've disabled it, and I believe that our backup software's
"maintenance" function re-packs client archives, so within a couple of
days (it's configurable how often this runs per-client), things will
improve, provided the server is able to chew through the archives
quickly enough.

After I enabled dedupe, I never saw the ratio change from 1.00; I
later learned archives are encrypted with a client-specific key, so
theoretically, they're almost completely unique- a worst-case scenario
for the dedup lookup, right?
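
The ratio I was watching is a pool property, for reference:

# zpool get dedupratio data
NAME  PROPERTY    VALUE  SOURCE
data  dedupratio  1.00x  -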

I'll have more to report in about a week, hopefully good news.  Is
there a way to measure the size of the lookup table?

Sidenote: could someone fix the mailman settings for the list?
They're currently set such that the Reply-To: headers make it annoying
to reply to an individual off-list, destroying the functionality of
reply vs reply-all.

-B

___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] ZFS pool slow as molasses

2010-10-05 Thread Brett Dikeman
On Tue, Oct 5, 2010 at 5:42 PM, Paul Johnston
 wrote:

> I have a Dell Optiplex 755 and when I switched in the bios from ATA to AHCI
> it wouldn't boot so I'll get another disk and do a re-install then I can

It's actually pretty easy; googling 'round: boot off the OpenIndiana
LiveCD (if you've upgraded to a higher ZFS version than OpenSolaris
supports) and import the system pool.  This fixes the device names
stored inside the pool (?) metadata:

http://wstrange.wordpress.com/2010/04/27/opensolaris-tip-switching-from-ide-to-ahci-driver/

I burned the DVD, booted off it with the system in AHCI mode, and
imported the system pool (rpool), then did the same with "data".  While
the system itself then booted fine, the data pool wasn't recognized:

# zpool status
  pool: data
 state: UNAVAIL
status: One or more devices could not be opened.  There are insufficient
replicas for the pool to continue functioning.
action: Attach the missing device and online it using 'zpool online'.
   see: http://www.sun.com/msg/ZFS-8000-3C
 scan: none requested
config:

        NAME         STATE     READ WRITE CKSUM
        data         UNAVAIL      0     0     0  insufficient replicas
          raidz1-0   UNAVAIL      0     0     0  insufficient replicas
            c7d1p0   UNAVAIL      0     0     0  cannot open
            c8d0p0   UNAVAIL      0     0     0  cannot open
            c8d1p0   UNAVAIL      0     0     0  cannot open
            c10d0p0  UNAVAIL      0     0     0  cannot open

The solution for that was to run:
zpool export data
zpool import -f data

...and then reboot, because I got:

cannot mount 'data': mountpoint or dataset is busy

Performance of the scrub is still in the 100-200KB/sec range, so the
original complaint remains.  Something's wrong.
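
(The rate figure comes from zpool's own accounting: "zpool status data"
reports a "scrub:" line with percent done and an estimated time to
completion.)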

-B

___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] ZFS pool slow as molasses

2010-10-05 Thread Brett Dikeman
On Tue, Oct 5, 2010 at 4:49 PM, Brett Dikeman  wrote:

> Flipped it over to AHCI, and the machine wouldn't boot.  The progress
> bar comes up and fills twice (blue and orange), then the screen goes
> blank and I'm back at the POST screen.

Point of clarification/additional info: all the drives (the SSD, 4 2TB
drives, and DVD drive) were detected and displayed on the POST screen.

(By the way, thank you for the quick responses- you guys are great!)

-B

___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


Re: [OpenIndiana-discuss] ZFS pool slow as molasses

2010-10-05 Thread Brett Dikeman
On Tue, Oct 5, 2010 at 4:01 PM, Julian Wiesener  wrote:
> Hi,
>
> as far as I see (device names in iostat), you're using the PATA interface to
> the SATA drives. Please look for a "native AHCI" switch in the BIOS and
> turn it on; the SATA drivers are much better implemented.

Alright, fair enough- I rebooted the server and checked the BIOS.  IDE
controller #1 was set to "IDE", not AHCI.

Flipped it over to AHCI, and the machine wouldn't boot.  The progress
bar comes up and fills twice (blue and orange), then the screen goes
blank and I'm back at the POST screen.

I put it back to "IDE", and the machine booted OK.  I'm seeing pretty
good performance from our disk backup software, but if I stop it, the
scrub (which picked up where it left off) is still running at a crawl.

I'm happy to switch to AHCI, so suggestions on getting the system to
boot with that are welcome (does a device name need to be changed
somewhere?  I don't see anything obvious in /rpool/boot/grub/menu.lst,
for example.)  In the meantime, I'm regularly seeing iostat figures
bounce between 10 and 200MB/sec on the data pool with our backup
software running, so I'm not sure why the scrub isn't capable of running at the
same rate.  If I shut down the backup software, I see figures like
this:

----------  -----  -----  -----  -----  -----  -----
data        5.55T  1.70T     69      0   163K      0
rpool       18.9G  10.6G      0      0      0      0
----------  -----  -----  -----  -----  -----  -----

Brett

___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss


[OpenIndiana-discuss] ZFS pool slow as molasses

2010-10-05 Thread Brett Dikeman
Greetings all,

I have an OpenSolaris system that I upgraded to OpenIndiana last night,
and I'm not sure whether the timing is a coincidence, but I'm having a
ton of problems with
disk performance on a 4x2TB SATA drive RAID-Z pool which is separate
from the system/boot pool (on an SSD.)  There are no SATA multipliers,
and no SAS components; all four drives are plugged into the
motherboard SATA ports, I believe.

De-duplication and compression were turned on; I disabled
de-duplication, with no effect (we weren't seeing any dedupe anyway.)
I've upgraded ZFS and the pools, with no effect.
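
For completeness, the relevant settings can be double-checked with:

# zfs get dedup,compression data
# zpool get version data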

Symptoms:
-Maintenance jobs from our backup software started taking forever
(this involves scanning a compressed archive) and the count for
simultaneous backup sessions rose as well (backups are
client-initiated every hour.)
-zpool scrub on the pool in question runs at about 100-200KB/sec,
instead of the more typical 150-200MB/sec (the SSD will complete a
scrub at normal speeds.)
-watching the drive indicators physically, all four drives are
performing what appears to be a huge amount of random IO during the
scrub.
-iowait is always zero and CPU usage is in the single digits.

Any and all suggestions are heartily accepted, particularly since I'm
a relative newbie in the Solaris world.  We're dead in the water at
the moment, and the scrub isn't going to finish in my lifetime at this
rate.  More info below.

Thanks!
Brett


uname output:
SunOS d2d 5.11 oi_147 i86pc i386 i86pc Solaris

scanpci information for the SATA controller:

pci bus 0x cardnum 0x1f function 0x02: vendor 0x8086 device 0x3a20
 Intel Corporation 82801JI (ICH10 Family) 4 port SATA IDE Controller #1

pci bus 0x cardnum 0x1f function 0x05: vendor 0x8086 device 0x3a26
 Intel Corporation 82801JI (ICH10 Family) 2 port SATA IDE Controller #2

Sample output from zpool iostat 1:

               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
data        5.55T  1.70T     53     92   126K   198K
rpool       18.9G  10.6G      0      0      0      0
----------  -----  -----  -----  -----  -----  -----
data        5.55T  1.70T     58      8   135K  13.5K
rpool       18.9G  10.6G      0      0      0      0
----------  -----  -----  -----  -----  -----  -----
data        5.55T  1.70T     69      0   164K      0
rpool       18.9G  10.6G      0      0      0      0
----------  -----  -----  -----  -----  -----  -----

Sample output from iostat:
# iostat -cxnz 1
     cpu
 us sy wt id
  0  1  0 99
                    extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    4.2    5.5  270.7   50.9  0.0  0.0    3.0    2.1   1   1 c7d0
   64.9    7.4   81.9   27.7  7.5  2.0  103.4   27.1  96  99 c8d0
   65.4    7.9   87.7   29.1  3.9  1.3   52.6   17.6  59  67 c7d1
   65.4    7.5   80.8   27.8  3.8  1.3   52.4   17.5  59  66 c10d0
   64.9    7.8   89.3   29.1  7.5  2.0  102.8   27.0  96 100 c8d1
     cpu
 us sy wt id
  0  0  0 100
                    extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
   64.0   26.0   59.5   52.0  7.1  2.0   79.0   21.9  96 100 c8d0
    0.0   39.0    0.0   76.0  0.1  0.0    1.5    0.7   1   1 c7d1
    0.0   28.0    0.0   64.5  0.0  0.0    1.1    0.7   1   1 c10d0
   63.0   25.0   64.5   46.0  7.0  2.0   80.0   22.2  95  99 c8d1
     cpu
 us sy wt id
  0  1  0 99
                    extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
   45.0   21.0   35.5   29.0  4.8  1.7   73.4   26.3  74  99 c8d0
   90.0   13.0   83.5    9.0  5.1  1.6   49.5   15.7  74  88 c7d1
   93.0   11.0   74.0    8.0  5.2  1.6   49.8   15.5  74  87 c10d0
   46.0   25.0   39.5   40.5  5.0  1.7   69.7   24.6  75 100 c8d1
     cpu
 us sy wt id
  0  1  0 99
                    extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
   72.0    0.0   57.0    0.0  8.0  2.0  111.0   27.8 100 100 c8d0
  101.0    0.0   93.0    0.0  6.6  1.8   65.5   17.7  89  90 c7d1
   96.0    0.0   78.5    0.0  6.4  1.8   66.7   18.5  87  89 c10d0
   71.0    0.0   63.5    0.0  8.0  2.0  112.6   28.2 100 100 c8d1

___
OpenIndiana-discuss mailing list
OpenIndiana-discuss@openindiana.org
http://openindiana.org/mailman/listinfo/openindiana-discuss