Re: [OmniOS-discuss] issue importing zpool on S11.1 from omniOS LUNs

2017-01-26 Thread Richard Elling

> On Jan 26, 2017, at 12:20 AM, Stephan Budach  wrote:
> 
> Hi Richard,
> 
> gotcha… read on, below…

"thin provisioning" bit you. For "thick provisioning" you’ll have a 
refreservation and/or reservation.
 — richard
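
A quick way to see which case applies on the OmniOS targets is to inspect
the backing zvol's properties; the dataset name used below is a placeholder,
not one of the actual LUN volumes from this thread:

  # zfs get volsize,refreservation,reservation tank/vsmLUN0

A sparse zvol (one created with "zfs create -s -V ...") reports
refreservation=none. Setting the refreservation back to the volume size,
e.g. "zfs set refreservation=108T tank/vsmLUN0", makes it thick provisioned
again, provided the pool actually has that much free space.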

> 
> On 26.01.17 at 00:43 Richard Elling wrote:
>> more below…
>> 
>>> On Jan 25, 2017, at 3:01 PM, Stephan Budach wrote:
>>> 
>>> Oops… I should have waited to send that message until after I rebooted the 
>>> S11.1 host…
>>> 
>>> 
>>> On 25.01.17 at 23:41 Stephan Budach wrote:
 Hi Richard,
 
 On 25.01.17 at 20:27 Richard Elling wrote:
> Hi Stephan,
> 
>> On Jan 25, 2017, at 5:54 AM, Stephan Budach wrote:
>> 
>> Hi guys,
>> 
>> I have been trying to import a zpool, based on a 3-way mirror provided by 
>> three omniOS boxes via iSCSI. This zpool had been working flawlessly 
>> until some random reboot of the S11.1 host. Since then, S11.1 has been 
>> importing this zpool without success.
>> 
>> This zpool consists of three 108TB LUNs, each based on a raidz-2 zvol… yeah 
>> I know, we shouldn't have done that in the first place, but performance 
>> was not the primary goal for that, as this one is a backup/archive pool.
>> 
>> When issuing a zpool import, it says this:
>> 
>> root@solaris11atest2:~# zpool import
>>   pool: vsmPool10
>> id: 12653649504720395171
>>  state: DEGRADED
>> status: The pool was last accessed by another system.
>> action: The pool can be imported despite missing or damaged devices.  The
>> fault tolerance of the pool may be compromised if imported.
>>see: http://support.oracle.com/msg/ZFS-8000-EY 
>> 
>> config:
>> 
>> vsmPool10  DEGRADED
>>   mirror-0 DEGRADED
>> c0t600144F07A350658569398F60001d0  DEGRADED  corrupted data
>> c0t600144F07A35066C5693A0D90001d0  DEGRADED  corrupted data
>> c0t600144F07A35001A5693A2810001d0  DEGRADED  corrupted data
>> 
>> device details:
>> 
>> c0t600144F07A350658569398F60001d0  DEGRADED  scrub/resilver needed
>> status: ZFS detected errors on this device.
>> The device is missing some data that is recoverable.
>> 
>> c0t600144F07A35066C5693A0D90001d0  DEGRADED  scrub/resilver needed
>> status: ZFS detected errors on this device.
>> The device is missing some data that is recoverable.
>> 
>> c0t600144F07A35001A5693A2810001d0  DEGRADED  scrub/resilver needed
>> status: ZFS detected errors on this device.
>> The device is missing some data that is recoverable.
>> 
>> However, when actually running zpool import -f vsmPool10, the system 
>> starts to perform a lot of writes on the LUNs and iostat reports an 
>> alarming increase in h/w errors:
>> 
>> root@solaris11atest2:~# iostat -xeM 5
>>                             extended device statistics       ---- errors ----
>> device     r/s    w/s   Mr/s   Mw/s wait actv  svc_t  %w  %b s/w h/w trn tot
>> sd0        0.0    0.0    0.0    0.0  0.0  0.0    0.0   0   0   0   0   0   0
>> sd1        0.0    0.0    0.0    0.0  0.0  0.0    0.0   0   0   0   0   0   0
>> sd2        0.0    0.0    0.0    0.0  0.0  0.0    0.0   0   0   0  71   0  71
>> sd3        0.0    0.0    0.0    0.0  0.0  0.0    0.0   0   0   0   0   0   0
>> sd4        0.0    0.0    0.0    0.0  0.0  0.0    0.0   0   0   0   0   0   0
>> sd5        0.0    0.0    0.0    0.0  0.0  0.0    0.0   0   0   0   0   0   0
>>                             extended device statistics       ---- errors ----
>> device     r/s    w/s   Mr/s   Mw/s wait actv  svc_t  %w  %b s/w h/w trn tot
>> sd0       14.2  147.3    0.7    0.4  0.2  0.1    2.0   6   9   0   0   0   0
>> sd1       14.2    8.4    0.4    0.0  0.0  0.0    0.3   0   0   0   0   0   0
>> sd2        0.0    4.2    0.0    0.0  0.0  0.0    0.0   0   0   0  92   0  92
>> sd3      157.3   46.2    2.1    0.2  0.0  0.7    3.7   0  14   0  30   0  30
>> sd4      123.9   29.4    1.6    0.1  0.0  1.7   10.9   0  36   0  40   0  40
>> sd5      142.5   43.0    2.0    0.1  0.0  1.9   10.2   0  45   0  88   0  88
>>                             extended device statistics       ---- errors ----
>> device     r/s    w/s   Mr/s   Mw/s wait actv  svc_t  %w  %b s/w h/w trn tot
>> sd0        0.0 

[OmniOS-discuss] OpenSSL now updated to 1.0.2k

2017-01-26 Thread Dan McDonald
All supported releases (r151014, r151018, r151020) now have updated OpenSSL 
from this morning's 1.0.2k update.

Please "pkg update" your supported OmniOS deployments.  This is a non-reboot 
update, but you may, depending, have to manually restart your openssl-using 
services.
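
For a rough recipe (the last step depends entirely on what you run; the
FMRI below is a placeholder, not a list of affected services):

  # pkg update
  # openssl version        <- should now report 1.0.2k
  # svcadm restart <fmri-of-a-service-linked-against-openssl>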

Dan

___
OmniOS-discuss mailing list
OmniOS-discuss@lists.omniti.com
http://lists.omniti.com/mailman/listinfo/omnios-discuss


Re: [OmniOS-discuss] Fwd: Install on Supermicro DOM=low space left

2017-01-26 Thread Peter Tribble
On Thu, Jan 26, 2017 at 2:12 PM, Olaf Marzocchi  wrote:

> But dumps can also be saved as files on a normal dataset, right? Provided
> enough space is left for them.
>

No. The dump is a two-stage process.

When the system panics, it simply drops memory into the dump volume.
(Traditionally, it used to use the swap partition.)

Then, when the system is back up, you save that dump into regular files
for subsequent analysis.
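
Sketched out, that second stage looks roughly like this (the paths are the
usual defaults; savecore normally runs automatically at boot, so the manual
invocation is only needed if that didn't happen):

  # dumpadm                        <- shows dump device and savecore directory
  # savecore                       <- copies the dump out of rpool/dump,
                                      typically as a compressed vmdump.N
  # savecore -f vmdump.0 /var/tmp  <- expand it into unix.0 / vmcore.0
  # mdb unix.0 vmcore.0            <- inspect the panic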

If you're really tight for space, and aren't worried about debugging a
panic, then disabling dumps entirely (dumpadm -d none) might be
appropriate in this case.

(As an aside, I note that current OmniOS LTS - r151014 - doesn't understand
dumpadm -e, which is a shame.)


> Olaf
>
>
>
> On 26 January 2017 at 12:38:27 CET, v...@bb-c.de wrote:
>>
>>  NAME USED  AVAIL  REFER  MOUNTPOINT
>>>  rpool/dump  41.5G  9.15G  41.5G  -
>>>  rpool/swap  4.13G  13.0G   276M  -
>>>
>>
>> The "dump" volume is much too big.  Do a
>>
>>   dumpadm -e
>>
>> This will print the "estimated" dump size.  Then add a bit, and
>> set the new dump volume size with:
>>
>>   zfs set volsize=<size> rpool/dump
>>
>> For example, on my OmniOS file server:
>>
>> # dumpadm -e
>> Estimated dump size: 4.63G
>>
>> # zfs set volsize=6G rpool/dump
>>
>> # zfs list rpool/dump
>> NAME USED  AVAIL  REFER  MOUNTPOINT
>> rpool/dump  6.00G  24.1G  6.00G  -
>>
>>> I did not change anything during the installation process, I've just
>>> accepted all defaults
>>>
>>
>> Yes.  The "traditional" installation usually sizes dump too big.
>> That is why I asked. :-)
>>
>>
>> Hope this helps -- Volker
>>
>>
> ___
> OmniOS-discuss mailing list
> OmniOS-discuss@lists.omniti.com
> http://lists.omniti.com/mailman/listinfo/omnios-discuss
>
>


-- 
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
___
OmniOS-discuss mailing list
OmniOS-discuss@lists.omniti.com
http://lists.omniti.com/mailman/listinfo/omnios-discuss


Re: [OmniOS-discuss] Fwd: Install on Supermicro DOM=low space left

2017-01-26 Thread Olaf Marzocchi
But dumps can also be saved as files on a normal dataset, right? Provided 
enough space is left for them.

Olaf



On 26 January 2017 at 12:38:27 CET, v...@bb-c.de wrote:
>> NAME USED  AVAIL  REFER  MOUNTPOINT
>> rpool/dump  41.5G  9.15G  41.5G  -
>> rpool/swap  4.13G  13.0G   276M  -
>
>The "dump" volume is much too big.  Do a
>
>  dumpadm -e
>
>This will print the "estimated" dump size.  Then add a bit, and
>set the new dump volume size with:
>
>  zfs set volsize=<size> rpool/dump
>
>For example, on my OmniOS file server:
>
># dumpadm -e
>Estimated dump size: 4.63G
>
># zfs set volsize=6G rpool/dump
>
># zfs list rpool/dump
>NAME USED  AVAIL  REFER  MOUNTPOINT
>rpool/dump  6.00G  24.1G  6.00G  -
>
>> I did not change anything during the installation process, I've just
>> accepted all defaults
>
>Yes.  The "traditional" installation usually sizes dump too big.
>That is why I asked. :-)
>
>
>Hope this helps -- Volker
>-- 
>
>Volker A. Brandt   Consulting and Support for Oracle Solaris
>Brandt & Brandt Computer GmbH   WWW: http://www.bb-c.de/
>Am Wiesenpfad 6, 53340 Meckenheim, GERMANYEmail: v...@bb-c.de
>Handelsregister: Amtsgericht Bonn, HRB 10513  Schuhgröße: 46
>Geschäftsführer: Rainer J.H. Brandt und Volker A. Brandt
>
>"When logic and proportion have fallen sloppy dead"
>___
>OmniOS-discuss mailing list
>OmniOS-discuss@lists.omniti.com
>http://lists.omniti.com/mailman/listinfo/omnios-discuss
___
OmniOS-discuss mailing list
OmniOS-discuss@lists.omniti.com
http://lists.omniti.com/mailman/listinfo/omnios-discuss


Re: [OmniOS-discuss] Fwd: Install on Supermicro DOM=low space left

2017-01-26 Thread Davide Poletto
I recall an interesting post by Chris Siebenmann about dump/swap sizes
(surprise) on OmniOS; here it is (the only comment at the time was mine :-) ):
https://utcc.utoronto.ca/~cks/space/blog/solaris/OmniOSDiskSizing?showcomments#comments

Cheers, Davide

On Thu, Jan 26, 2017 at 12:22 PM, Fábio Rabelo wrote:

> sorry, I forgot to change address to all list before send ...
>
> -- Forwarded message --
> From: Fábio Rabelo 
> Date: 2017-01-26 9:21 GMT-02:00
> Subject: Re: [OmniOS-discuss] Install on Supermicro DOM=low space left
> To: "Volker A. Brandt" 
>
>
> 2017-01-26 9:06 GMT-02:00 Volker A. Brandt :
> > Hi Fábio!
> >
> >
> >> I've just installed OmniOS on a Supermicro Motherboard with a DOM
> >> device for boot .
> >>
> >> It is working fine, no issues ...
> >>
> >> But, the 64GB DOM has just 9GB of space left
> >>
> >> Can I delete something ( temp files, compacted installed packages, etc
> >> ) to free some space ?
> >
> > You might have oversized swap and/or dump volumes.  Do a
> >
> >   zfs list -t volume
> >
> > What volume sizes are shown?
>
> NAME USED  AVAIL  REFER  MOUNTPOINT
> rpool/dump  41.5G  9.15G  41.5G  -
> rpool/swap  4.13G  13.0G   276M  -
>
> I did not change anything during the installation process, I've just
> accepted all defaults
>
>
> Fábio Rabelo
> ___
> OmniOS-discuss mailing list
> OmniOS-discuss@lists.omniti.com
> http://lists.omniti.com/mailman/listinfo/omnios-discuss
>
___
OmniOS-discuss mailing list
OmniOS-discuss@lists.omniti.com
http://lists.omniti.com/mailman/listinfo/omnios-discuss


Re: [OmniOS-discuss] Fwd: Install on Supermicro DOM=low space left

2017-01-26 Thread Stephan Budach

Hi Fábio,

On 26.01.17 at 12:22 Fábio Rabelo wrote:

sorry, I forgot to change address to all list before send ...

-- Forwarded message --
From: Fábio Rabelo 
Date: 2017-01-26 9:21 GMT-02:00
Subject: Re: [OmniOS-discuss] Install on Supermicro DOM=low space left
To: "Volker A. Brandt" 


2017-01-26 9:06 GMT-02:00 Volker A. Brandt :

Hi Fábio!



I've just installed OmniOS on a Supermicro Motherboard with a DOM
device for boot .

It is working fine, no issues ...

But, the 64GB DOM has just 9GB of space left

Can I delete something ( temp files, compacted installed packages, etc
) to free some space ?

You might have oversized swap and/or dump volumes.  Do a

   zfs list -t volume

What volume sizes are shown?

NAME USED  AVAIL  REFER  MOUNTPOINT
rpool/dump  41.5G  9.15G  41.5G  -
rpool/swap  4.13G  13.0G   276M  -

I did not change anything during the installation process, I've just
accepted all defaults



If you still want to change the size of the dump volume:

zfs set volsize=16g rpool/dump

The size depends of course on the estimated size of a core dump, but 16G 
should be way over the top.


Cheers,
Stephan


___
OmniOS-discuss mailing list
OmniOS-discuss@lists.omniti.com
http://lists.omniti.com/mailman/listinfo/omnios-discuss


Re: [OmniOS-discuss] Fwd: Install on Supermicro DOM=low space left

2017-01-26 Thread Volker A. Brandt
> NAME USED  AVAIL  REFER  MOUNTPOINT
> rpool/dump  41.5G  9.15G  41.5G  -
> rpool/swap  4.13G  13.0G   276M  -

The "dump" volume is much too big.  Do a

  dumpadm -e

This will print the "estimated" dump size.  Then add a bit, and
set the new dump volume size with:

  zfs set volsize=<size> rpool/dump

For example, on my OmniOS file server:

# dumpadm -e
Estimated dump size: 4.63G

# zfs set volsize=6G rpool/dump

# zfs list rpool/dump
NAME USED  AVAIL  REFER  MOUNTPOINT
rpool/dump  6.00G  24.1G  6.00G  -

> I did not change anything during the installation process, I've just
> accepted all defaults

Yes.  The "traditional" installation usually sizes dump too big.
That is why I asked. :-)


Hope this helps -- Volker
-- 

Volker A. Brandt   Consulting and Support for Oracle Solaris
Brandt & Brandt Computer GmbH   WWW: http://www.bb-c.de/
Am Wiesenpfad 6, 53340 Meckenheim, GERMANYEmail: v...@bb-c.de
Handelsregister: Amtsgericht Bonn, HRB 10513  Schuhgröße: 46
Geschäftsführer: Rainer J.H. Brandt und Volker A. Brandt

"When logic and proportion have fallen sloppy dead"
___
OmniOS-discuss mailing list
OmniOS-discuss@lists.omniti.com
http://lists.omniti.com/mailman/listinfo/omnios-discuss


[OmniOS-discuss] Fwd: Install on Supermicro DOM=low space left

2017-01-26 Thread Fábio Rabelo
sorry, I forgot to change address to all list before send ...

-- Forwarded message --
From: Fábio Rabelo 
Date: 2017-01-26 9:21 GMT-02:00
Subject: Re: [OmniOS-discuss] Install on Supermicro DOM=low space left
To: "Volker A. Brandt" 


2017-01-26 9:06 GMT-02:00 Volker A. Brandt :
> Hi Fábio!
>
>
>> I've just installed OmniOS on a Supermicro Motherboard with a DOM
>> device for boot .
>>
>> It is working fine, no issues ...
>>
>> But, the 64GB DOM has just 9GB of space left
>>
>> Can I delete something ( temp files, compacted installed packages, etc
>> ) to free some space ?
>
> You might have oversized swap and/or dump volumes.  Do a
>
>   zfs list -t volume
>
> What volume sizes are shown?

NAME USED  AVAIL  REFER  MOUNTPOINT
rpool/dump  41.5G  9.15G  41.5G  -
rpool/swap  4.13G  13.0G   276M  -

I did not change anything during the installation process, I've just
accepted all defaults


Fábio Rabelo
___
OmniOS-discuss mailing list
OmniOS-discuss@lists.omniti.com
http://lists.omniti.com/mailman/listinfo/omnios-discuss


Re: [OmniOS-discuss] Install on Supermicro DOM=low space left

2017-01-26 Thread Volker A. Brandt
Hi Fábio!


> I've just installed OmniOS on a Supermicro Motherboard with a DOM
> device for boot .
> 
> It is working fine, no issues ...
> 
> But, the 64GB DOM has just 9GB of space left
> 
> Can I delete something ( temp files, compacted installed packages, etc
> ) to free some space ?

You might have oversized swap and/or dump volumes.  Do a

  zfs list -t volume

What volume sizes are shown?


Regards -- Volker
-- 

Volker A. Brandt   Consulting and Support for Oracle Solaris
Brandt & Brandt Computer GmbH   WWW: http://www.bb-c.de/
Am Wiesenpfad 6, 53340 Meckenheim, GERMANYEmail: v...@bb-c.de
Handelsregister: Amtsgericht Bonn, HRB 10513  Schuhgröße: 46
Geschäftsführer: Rainer J.H. Brandt und Volker A. Brandt

"When logic and proportion have fallen sloppy dead"
___
OmniOS-discuss mailing list
OmniOS-discuss@lists.omniti.com
http://lists.omniti.com/mailman/listinfo/omnios-discuss


[OmniOS-discuss] Install on Supermicro DOM=low space left

2017-01-26 Thread Fábio Rabelo
Hi to all

I've just installed OmniOS on a Supermicro Motherboard with a DOM
device for boot .

It is working fine, no issues ...

But, the 64GB DOM has just 9GB of space left

Can I delete something ( temp files, compacted installed packages, etc
) to free some space ?

I think this 9GB of free space may become an issue in the future.


Thanks in advance for any help ...


Fábio Rabelo
___
OmniOS-discuss mailing list
OmniOS-discuss@lists.omniti.com
http://lists.omniti.com/mailman/listinfo/omnios-discuss


Re: [OmniOS-discuss] issue importing zpool on S11.1 from omniOS LUNs

2017-01-26 Thread Stephan Budach

Just for sanity… these are a couple of the errors that fmdump outputs when using -eV:

root@solaris11atest2:~# fmdump -eV
TIME   CLASS
Jan 25 2017 10:10:45.011761190 ereport.io.pciex.rc.tmp
nvlist version: 0
class = ereport.io.pciex.rc.tmp
ena = 0xff37bc9a861
detector = (embedded nvlist)
nvlist version: 0
version = 0x0
scheme = dev
device-path = /intel-iommu@0,fbffe000
(end detector)

epkt_ver = 0x1
desc = 0x21152014
size = 0x0
addr = 0xca000
hdr1 = 0x60d7
hdr2 = 0x328000
reserved = 0x1
count = 0x1
total = 0x1
event_name = The Write field in a page-table entry is Clear 
when DMA write

VID = 0x8086
DID = 0x0
RID = 0x0
SID = 0x0
SVID = 0x0
reg_ver = 0x1
platform-specific = (embedded nvlist)
nvlist version: 0
VER_REG = 0x10
CAP_REG = 0x106f0462
ECAP_REG = 0xf020fe
GCMD_REG = 0x8680
GSTS_REG = 0xc780
FSTS_REG = 0x100
FECTL_REG = 0x0
FEDATA_REG = 0xf2
FEADDR_REG = 0xfee0
FEUADDR_REG = 0x0
FRCD_REG_LOW = 0xca000
FRCD_REG_HIGH = 0x800500d7
PMEN_REG = 0x64
PLMBASE_REG = 0x68
PLMLIMIT_REG = 0x6c
PHMBASE_REG = 0x70
PHMLIMIT_REG = 0x78
(end platform-specific)

__ttl = 0x1
__tod = 0x58886b95 0xb37626

Jan 25 2017 12:28:55.712580014 ereport.io.scsi.cmd.disk.dev.rqs.derr
nvlist version: 0
class = ereport.io.scsi.cmd.disk.dev.rqs.derr
ena = 0x88985751a4a02c01
detector = (embedded nvlist)
nvlist version: 0
version = 0x0
scheme = dev
cna_dev = 0x579a001f
device-path = 
/iscsi/d...@iqn.2016-01.de.jvm.tr1206900:vsmpool12,0

(end detector)

devid = unknown
driver-assessment = info
op-code = 0x15
cdb = 0x15 0x10 0x0 0x0 0x18 0x0
pkt-reason = 0x0
pkt-state = 0x3f
pkt-stats = 0x0
stat-code = 0x2
key = 0x5
asc = 0x1a
ascq = 0x0
sense-data = 0x70 0x0 0x5 0x0 0x0 0x0 0x0 0xa 0x0 0x0 0x0 0x0 
0x1a 0x0 0x0 0x0 0x0 0x0

__ttl = 0x1
__tod = 0x5bf7 0x2a791bae

Jan 25 2017 12:32:35.072413593 ereport.io.scsi.cmd.disk.dev.rqs.derr
nvlist version: 0
class = ereport.io.scsi.cmd.disk.dev.rqs.derr
ena = 0x8bc98528b5c00801
detector = (embedded nvlist)
nvlist version: 0
version = 0x0
scheme = dev
cna_dev = 0x579a0024
device-path = 
/iscsi/d...@iqn.2016-01.de.jvm.tr1206901:vsmpool12,0

(end detector)

devid = unknown
driver-assessment = info
op-code = 0x15
cdb = 0x15 0x10 0x0 0x0 0x18 0x0
pkt-reason = 0x0
pkt-state = 0x3f
pkt-stats = 0x0
stat-code = 0x2
key = 0x5
asc = 0x1a
ascq = 0x0
sense-data = 0x70 0x0 0x5 0x0 0x0 0x0 0x0 0xa 0x0 0x0 0x0 0x0 
0x1a 0x0 0x0 0x0 0x0 0x0

__ttl = 0x1
__tod = 0x5cd3 0x450f199

Jan 25 2017 12:32:52.661439798 ereport.io.scsi.cmd.disk.dev.rqs.derr
nvlist version: 0
class = ereport.io.scsi.cmd.disk.dev.rqs.derr
ena = 0x8c0b0b5c71e00401
detector = (embedded nvlist)
nvlist version: 0
version = 0x0
scheme = dev
cna_dev = 0x579a0029
device-path = 
/iscsi/d...@iqn.2016-01.de.jvm.tr1206902:vsmpool12,0

(end detector)

devid = unknown
driver-assessment = info
op-code = 0x15
cdb = 0x15 0x10 0x0 0x0 0x18 0x0
pkt-reason = 0x0
pkt-state = 0x3f
pkt-stats = 0x0
stat-code = 0x2
key = 0x5
asc = 0x1a
ascq = 0x0
sense-data = 0x70 0x0 0x5 0x0 0x0 0x0 0x0 0xa 0x0 0x0 0x0 0x0 
0x1a 0x0 0x0 0x0 0x0 0x0

__ttl = 0x1
__tod = 0x5ce4 0x276cc536

Jan 25 2017 12:35:48.187562523 ereport.io.scsi.cmd.disk.dev.uderr
nvlist version: 0
class = ereport.io.scsi.cmd.disk.dev.uderr
ena = 0x8e98ee1dd5c00401
detector = (embedded nvlist)
nvlist version: 0
version = 0x0
scheme = dev
cna_dev = 0x579a002e
device-path = 
/iscsi/d...@iqn.2016-01.de.jvm.tr1206902:vsmpool12,0

devid = id1,sd@n600144f07a35001a5693a2810001
(end detector)

devid = id1,sd@n600144f07a35001a5693a2810001
driver-assessment = retry
op-code = 0x8a

Re: [OmniOS-discuss] issue importing zpool on S11.1 from omniOS LUNs

2017-01-26 Thread Stephan Budach

Hi Richard,

gotcha… read on, below…

On 26.01.17 at 00:43 Richard Elling wrote:

more below…

On Jan 25, 2017, at 3:01 PM, Stephan Budach wrote:


Oops… I should have waited to send that message until after I rebooted 
the S11.1 host…



On 25.01.17 at 23:41 Stephan Budach wrote:

Hi Richard,

On 25.01.17 at 20:27 Richard Elling wrote:

Hi Stephan,

On Jan 25, 2017, at 5:54 AM, Stephan Budach wrote:


Hi guys,

I have been trying to import a zpool, based on a 3-way mirror 
provided by three omniOS boxes via iSCSI. This zpool had been 
working flawlessly until some random reboot of the S11.1 host. 
Since then, S11.1 has been importing this zpool without success.


This zpool consists of three 108TB LUNs, each based on a raidz-2 zvol… 
yeah I know, we shouldn't have done that in the first place, but 
performance was not the primary goal for that, as this one is a 
backup/archive pool.


When issuing a zpool import, it says this:

root@solaris11atest2:~# zpool import
  pool: vsmPool10
id: 12653649504720395171
 state: DEGRADED
status: The pool was last accessed by another system.
action: The pool can be imported despite missing or damaged 
devices.  The

fault tolerance of the pool may be compromised if imported.
   see: http://support.oracle.com/msg/ZFS-8000-EY
config:

vsmPool10 DEGRADED
mirror-0 DEGRADED
c0t600144F07A350658569398F60001d0 DEGRADED  corrupted data
c0t600144F07A35066C5693A0D90001d0 DEGRADED  corrupted data
c0t600144F07A35001A5693A2810001d0 DEGRADED  corrupted data

device details:

c0t600144F07A350658569398F60001d0 DEGRADED 
scrub/resilver needed

status: ZFS detected errors on this device.
The device is missing some data that is recoverable.

c0t600144F07A35066C5693A0D90001d0 DEGRADED 
scrub/resilver needed

status: ZFS detected errors on this device.
The device is missing some data that is recoverable.

c0t600144F07A35001A5693A2810001d0 DEGRADED 
scrub/resilver needed

status: ZFS detected errors on this device.
The device is missing some data that is recoverable.

However, when actually running zpool import -f vsmPool10, the 
system starts to perform a lot of writes on the LUNs and iostat 
reports an alarming increase in h/w errors:


root@solaris11atest2:~# iostat -xeM 5
                            extended device statistics       ---- errors ----
device     r/s    w/s   Mr/s   Mw/s wait actv  svc_t  %w  %b s/w h/w trn tot
sd0        0.0    0.0    0.0    0.0  0.0  0.0    0.0   0   0   0   0   0   0
sd1        0.0    0.0    0.0    0.0  0.0  0.0    0.0   0   0   0   0   0   0
sd2        0.0    0.0    0.0    0.0  0.0  0.0    0.0   0   0   0  71   0  71
sd3        0.0    0.0    0.0    0.0  0.0  0.0    0.0   0   0   0   0   0   0
sd4        0.0    0.0    0.0    0.0  0.0  0.0    0.0   0   0   0   0   0   0
sd5        0.0    0.0    0.0    0.0  0.0  0.0    0.0   0   0   0   0   0   0
                            extended device statistics       ---- errors ----
device     r/s    w/s   Mr/s   Mw/s wait actv  svc_t  %w  %b s/w h/w trn tot
sd0       14.2  147.3    0.7    0.4  0.2  0.1    2.0   6   9   0   0   0   0
sd1       14.2    8.4    0.4    0.0  0.0  0.0    0.3   0   0   0   0   0   0
sd2        0.0    4.2    0.0    0.0  0.0  0.0    0.0   0   0   0  92   0  92
sd3      157.3   46.2    2.1    0.2  0.0  0.7    3.7   0  14   0  30   0  30
sd4      123.9   29.4    1.6    0.1  0.0  1.7   10.9   0  36   0  40   0  40
sd5      142.5   43.0    2.0    0.1  0.0  1.9   10.2   0  45   0  88   0  88
                            extended device statistics       ---- errors ----
device     r/s    w/s   Mr/s   Mw/s wait actv  svc_t  %w  %b s/w h/w trn tot
sd0        0.0  234.5    0.0    0.6  0.2  0.1    1.4   6  10   0   0   0   0
sd1        0.0    0.0    0.0    0.0  0.0  0.0    0.0   0   0   0   0   0   0
sd2        0.0    0.0    0.0    0.0  0.0  0.0    0.0   0   0   0  92   0  92
sd3        3.6   64.0    0.0    0.5  0.0  4.3   63.2   0  63   0 235   0 235
sd4        3.0   67.0    0.0    0.6  0.0  4.2   60.5   0  68   0 298   0 298
sd5        4.2   59.6    0.0    0.4  0.0  5.2   81.0   0  72   0 406   0 406
                            extended device statistics       ---- errors ----
device     r/s    w/s   Mr/s   Mw/s wait actv  svc_t  %w  %b s/w h/w trn tot
sd0        0.0  234.8    0.0    0.7  0.4  0.1    2.2  11  10   0   0   0   0
sd1        0.0    0.0    0.0    0.0  0.0  0.0    0.0   0   0   0   0   0   0
sd2        0.0    0.0    0.0    0.0  0.0  0.0    0.0   0   0   0  92   0  92
sd3        5.4   54.4    0.0    0.3  0.0  2.9   48.5   0  67   0 384   0 384
sd4        6.0   53.4    0.0    0.3  0.0  4.6   77.7   0  87   0 519   0 519
sd5        6.0   60.8    0.0    0.3  0.0  4.8   72.5   0  87   0 727   0 727


h/w errors are a classification of other errors. The full error list is 
available from "iostat -E" and will be important for tracking this down.

A better, more detailed analysis can be gleaned from the "fmdump -e" 
ereports that should be associated with each h/w error.
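
Concretely, something along these lines on the S11.1 host should show which
counters are climbing and the matching ereports (the class name is just one
of those already seen earlier in this thread):

  # iostat -En                     <- per-device soft/hard/transport error detail
  # fmdump -e                      <- one-line summary of recent ereports
  # fmdump -eV -c ereport.io.scsi.cmd.disk.dev.rqs.derr
                                   <- full detail for a single ereport class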