Hello *,

we're running a local zone on an iSCSI device and see its zpool fault
at each reboot of the server.
$ zpool list
NAME       SIZE  ALLOC   FREE    CAP  HEALTH  ALTROOT
data       168G   127G  40.3G    75%  ONLINE  -
iscsi1        -      -      -      -  FAULTED  -
$ zpool status iscsi1
  pool: iscsi1
 state: UNAVAIL
status: One or more devices could not be opened.  There are insufficient
        replicas for the pool to continue functioning.
action: Attach the missing device and online it using 'zpool online'.
   see: http://www.sun.com/msg/ZFS-8000-3C
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        iscsi1      UNAVAIL      0     0     0  insufficient replicas
          c8t1d0    UNAVAIL      0     0     0  cannot open

It seems that the zpool is being accessed before the iSCSI device is online.
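
A quick way to check after boot is to look at the initiator service
(the FMRI below is what it is called on our Solaris 10 box; it differs
on other releases):

$ svcs -l svc:/network/iscsi_initiator:default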

For now we work around the problem by exporting and re-importing the
faulted zpool:
$ zpool export iscsi1
$ zpool import iscsi1
$ zpool list
NAME       SIZE  ALLOC   FREE    CAP  HEALTH  ALTROOT
iscsi1    21.4G  5.66G  15.7G    26%  ONLINE  -
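
If it stays that way we will probably script the workaround along these
lines (untested sketch; the pool name and the health check are ours):

#!/sbin/sh
# Untested sketch: re-import the pool once the iSCSI initiator is up.
# Meant to run late in boot, e.g. from an rc script or a small SMF
# service that depends on svc:/network/iscsi_initiator:default.
POOL=iscsi1
if zpool list -H -o health "$POOL" 2>/dev/null | grep FAULTED >/dev/null
then
        zpool export "$POOL"
        zpool import "$POOL"
fi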

Upgrading the zpool from version 15 to version 22 didn't help.
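
One thing we have not tried yet is keeping the pool out of
/etc/zfs/zpool.cache, so that boot does not touch it at all, and
importing it explicitly once the initiator is up (assuming the
cachefile property behaves as documented):

$ zpool set cachefile=none iscsi1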

Is this a known problem?
Any hints available?

The server is running Solaris 10 (pre-Update 9) with the kernel patched to 142909-17.

- Andreas