Re: [zfs-discuss] Zpool metadata corruption from S10U9 to S11 express

2011-06-23 Thread Fajar A. Nugraha
On Fri, Jun 24, 2011 at 7:44 AM, David W. Smith  wrote:
>> Generally, the log devices are listed after the pool devices.
>> Did this pool have log devices at one time? Are they missing?
>
> Yes, the pool does have logs.  I'll include a zpool status -v below
> from when I'm booted into Solaris 10 U9.

I think what Cindy means is: does "zpool status" on Solaris Express
(when you were having the problem) list the pool devices as well?

If not, that would explain the faulted status: zfs can't find the pool
devices, so we need to track down why Solaris can't see them (probably
driver issues).

If it can see the pool devices, then the status of each device as seen
by zfs on Solaris Express would provide some useful info.

>> My sense is that if you have remnants of the same pool name on some of
>> your devices but as different pools, then you will see device problems
>> like these.

I had a similar case (though my problem was on Linux). In my case the
"solution" was to rename /etc/zfs/zpool.cache, reboot the server, then
re-import the pool.
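
For anyone who wants to try the same thing on Solaris, a minimal sketch of
that workaround might look like this (assuming the default cache file
location /etc/zfs/zpool.cache and the pool name "tank"):

# mv /etc/zfs/zpool.cache /etc/zfs/zpool.cache.old
# reboot
(after the reboot zfs has no cached pool config, so re-scan and re-import)
# zpool import
# zpool import tank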

> Please let me know if you need more info...

If you're still interested in using this pool under Solaris Express,
then we'll need the output of format and zpool import when running
Solaris Express.
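
Concretely, that would be something like the following, run while booted into
Solaris Express (feeding format an empty stdin just prints the disk list and
exits instead of entering the interactive menu):

# format < /dev/null
# zpool import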

-- 
Fajar


Re: [zfs-discuss] Zpool metadata corruption from S10U9 to S11 express

2011-06-23 Thread David W. Smith
On Thu, Jun 23, 2011 at 01:26:38PM -0700, Cindy Swearingen wrote:
> Hi David,
> 
> I see some inconsistencies between the mirrored pool tank info below
> and the device info that you included.
> 
> 1. The zpool status for tank shows some remnants of log devices (?),
> here:
> 
>   tank FAULTED  corrupted data
> logs
> 
> Generally, the log devices are listed after the pool devices.
> Did this pool have log devices at one time? Are they missing?

Yes, the pool does have logs.  I'll include a zpool status -v below
from when I'm booted into Solaris 10 U9.
> 
> # zpool status datap
>pool: datap
>   state: ONLINE
>   scrub: none requested
> config:
> 
>  NAME        STATE     READ WRITE CKSUM
>  datap   ONLINE   0 0 0
>mirror-0  ONLINE   0 0 0
>  c1t1d0  ONLINE   0 0 0
>  c1t2d0  ONLINE   0 0 0
>mirror-1  ONLINE   0 0 0
>  c1t3d0  ONLINE   0 0 0
>  c1t4d0  ONLINE   0 0 0
>  logs
>mirror-2  ONLINE   0 0 0
>  c1t5d0  ONLINE   0 0 0
>  c1t8d0  ONLINE   0 0 0
> 
> I would like to see this output:
> 
> # zpool history tank
> 
> 2. Can you include the zdb -l output for c9t57d0 because
> the zdb -l device output below is from a RAIDZ config, not
> a mirrored config, although the pool GUIDs match so I'm
> confused.

The zpool is a RAIDZ.  The logs, however, were mirrored.

> 
> I don't think this has anything to do with moving from s10u9 to S11
> express.
> 
> My sense is that if you have remnants of the same pool name on some of
> your devices but as different pools, then you will see device problems
> like these.
> 
> Thanks,
> 
> Cindy
> 

# zpool history tank
History for 'tank':
2011-02-03.10:39:16 zpool create tank raidz 
c8t60001FF0123251B20F00081D1BF1d0 c8t60001FF010DC50410E00081D1BF1d0 
c8t60001FF0123251A70D00081D1BF1d0 c8t60001FF010DC503B0C00081D1BF1d0 
c8t60001FF01232519B0B00081D1BF1d0 c8t60001FF010DC50350A00081D1BF1d0 
c8t60001FF01232518F0900081D1BF1d0 c8t60001FF010DC502F0800081D1BF1d0 
c8t60001FF0123251830700081D1BF1d0 c8t60001FF010DC502A0600081D1BF1d0
2011-02-03.10:39:23 zpool add tank raidz c8t60001FF0123251780500081D1BF1d0 
c8t60001FF010DC50240400081D1BF1d0 c8t60001FF01232516C0300081D1BF1d0 
c8t60001FF010DC501F0200081D1BF1d0 c8t60001FF0123251630100081D1BF1d0 
c8t60001FF010DC50731E00081D1BF1d0 c8t60001FF0123252051D00081D1BF1d0 
c8t60001FF010DC506D1C00081D1BF1d0 c8t60001FF0123251F91B00081D1BF1d0 
c8t60001FF010DC50661A00081D1BF1d0
2011-02-03.10:39:29 zpool add tank raidz c8t60001FF0123251EC1900081D1BF1d0 
c8t60001FF010DC50601800081D1BF1d0 c8t60001FF0123251E01700081D1BF1d0 
c8t60001FF010DC50591600081D1BF1d0 c8t60001FF0123251D41500081D1BF1d0 
c8t60001FF010DC50531400081D1BF1d0 c8t60001FF0123251C91300081D1BF1d0 
c8t60001FF010DC504D1200081D1BF1d0 c8t60001FF0123251BD1100081D1BF1d0 
c8t60001FF010DC5047181D1BF1d0
2011-02-03.10:39:36 zpool add tank raidz c8t60001FF01232525C2D00081D1BF1d0 
c8t60001FF010DC50A32C00081D1BF1d0 c8t60001FF0123252552B00081D1BF1d0 
c8t60001FF010DC509C2A00081D1BF1d0 c8t60001FF01232524D2900081D1BF1d0 
c8t60001FF010DC50962800081D1BF1d0 c8t60001FF0123252462700081D1BF1d0 
c8t60001FF010DC508E2600081D1BF1d0 c8t60001FF01232523E2500081D1BF1d0 
c8t60001FF010DC50872400081D1BF1d0
2011-02-03.10:39:43 zpool add tank raidz c8t60001FF0123252362300081D1BF1d0 
c8t60001FF010DC50812200081D1BF1d0 c8t60001FF01232522C2100081D1BF1d0 
c8t60001FF010DC507A281D1BF1d0 c8t60001FF01232521F1F00081D1BF1d0 
c8t60001FF010DC50D93C00081D1BF1d0 c8t60001FF01232528E3B00081D1BF1d0 
c8t60001FF010DC50D33A00081D1BF1d0 c8t60001FF0123252883900081D1BF1d0 
c8t60001FF010DC50CC3800081D1BF1d0
2011-02-03.10:39:50 zpool add tank raidz c8t60001FF0123252803700081D1BF1d0 
c8t60001FF010DC50C53600081D1BF1d0 c8t60001FF0123252793500081D1BF1d0 
c8t60001FF010DC50BE3400081D1BF1d0 c8t60001FF0123252723300081D1BF1d0 
c8t60001FF010DC50B83200081D1BF1d0 c8t60001FF01232526B3100081D1BF1d0 
c8t60001FF010DC50B0381D1BF1d0 c8t60001FF0123252632F00081D1BF1d0 
c8t60001FF010DC50AA2E00081D1BF1d0
2011-02-03.10:49:06 zfs create tank/test1
2011-02-03.13:12:40 zfs create tank/other
2011-02-03.13:12:52 zfs create tank/other/testing-compression
2011-02-03.13:13:05 zfs create tank/other/testing-no-compression
2011-02-03.13:13:16 zfs create tank/other/iotesting
2011-02-04.11:17:07 zpool add tank cache c1t4d0 c1t5d0 c1t6d0 c1t7d0
2011-02-04.11:18:44 zpool add tank log mirror c3t57d0 c3t58d0 mirror c3t59d0 
c3t60d0
2011-02-04.15:37:11 zfs set sharenfs=
2011-02-04.15:41:57 zfs set sharenfs=
2011-02-09.15:52:39 zfs set sharenfs=
2011-02-11.09:29:24 zpool remove tank

Re: [zfs-discuss] Zpool metadata corruption from S10U9 to S11 express

2011-06-23 Thread Cindy Swearingen

Hi David,

I see some inconsistencies between the mirrored pool tank info below
and the device info that you included.

1. The zpool status for tank shows some remnants of log devices (?),
here:

 tank FAULTED  corrupted data
   logs

Generally, the log devices are listed after the pool devices.
Did this pool have log devices at one time? Are they missing?

# zpool status datap
  pool: datap
 state: ONLINE
 scrub: none requested
config:

NAME        STATE     READ WRITE CKSUM
datap   ONLINE   0 0 0
  mirror-0  ONLINE   0 0 0
c1t1d0  ONLINE   0 0 0
c1t2d0  ONLINE   0 0 0
  mirror-1  ONLINE   0 0 0
c1t3d0  ONLINE   0 0 0
c1t4d0  ONLINE   0 0 0
logs
  mirror-2  ONLINE   0 0 0
c1t5d0  ONLINE   0 0 0
c1t8d0  ONLINE   0 0 0

I would like to see this output:

# zpool history tank

2. Can you include the zdb -l output for c9t57d0 because
the zdb -l device output below is from a RAIDZ config, not
a mirrored config, although the pool GUIDs match so I'm
confused.

I don't think this has anything to do with moving from s10u9 to S11
express.

My sense is that if you have remnants of the same pool name on some of
your devices but as different pools, then you will see device problems
like these.

Thanks,

Cindy



On 06/22/11 20:28, David W. Smith wrote:

On Wed, Jun 22, 2011 at 06:32:49PM -0700, Daniel Carosone wrote:

On Wed, Jun 22, 2011 at 12:49:27PM -0700, David W. Smith wrote:

# /home/dws# zpool import
  pool: tank
id: 13155614069147461689
 state: FAULTED
status: The pool metadata is corrupted.
action: The pool cannot be imported due to damaged devices or data.
   see: http://www.sun.com/msg/ZFS-8000-72
config:

tank FAULTED  corrupted data
logs
  mirror-6   ONLINE
    c9t57d0  ONLINE
    c9t58d0  ONLINE
  mirror-7   ONLINE
    c9t59d0  ONLINE
    c9t60d0  ONLINE

Is there something else I can do to see what is wrong.

Can you tell us more about the setup, in particular the drivers and
hardware on the path?  There may be labelling, block size, offset or
even bad drivers or other issues getting in the way, preventing ZFS
from doing what should otherwise be expected to work.   Was there
something else in the storage stack on the old OS, like a different
volume manager or some multipathing?

Can you show us the zfs labels with zdb -l /dev/foo ?

Does import -F get any further?


Original attempt when specifying the name resulted in:

# /home/dws# zpool import tank
cannot import 'tank': I/O error

Some kind of underlying driver problem odour here.

--
Dan.


The system is an x4440 with two dual-port Qlogic 8 Gbit FC cards connected to a
DDN 9900 storage unit.  There are 60 LUNs configured from the storage unit; we are
using raidz1 across these LUNs in a 9+1 configuration.  Under Solaris 10 U9,
multipathing is enabled.

For example here is one of the devices:


# luxadm display /dev/rdsk/c8t60001FF010DC50AA2E00081D1BF1d0s2
DEVICE PROPERTIES for disk: /dev/rdsk/c8t60001FF010DC50AA2E00081D1BF1d0s2
  Vendor:   DDN 
  Product ID:   S2A 9900
  Revision: 6.11

  Serial Num:   10DC50AA002E
  Unformatted capacity: 15261576.000 MBytes
  Write Cache:  Enabled
  Read Cache:   Enabled
Minimum prefetch:   0x0
Maximum prefetch:   0x0
  Device Type:  Disk device
  Path(s):

  /dev/rdsk/c8t60001FF010DC50AA2E00081D1BF1d0s2
  /devices/scsi_vhci/disk@g60001ff010dc50aa2e00081d1bf1:c,raw
   Controller                   /dev/cfg/c5
    Device Address              2401ff051232,2e
    Host controller port WWN    2101001b32bfe1d3
    Class                       secondary
    State                       ONLINE
   Controller                   /dev/cfg/c7
    Device Address              2801ff0510dc,2e
    Host controller port WWN    2101001b32bd4f8f
    Class                       primary
    State                       ONLINE


Here is the output of the zdb command:

# zdb -l /dev/dsk/c8t60001FF010DC50AA2E00081D1BF1d0s0

LABEL 0

version=22
name='tank'
state=0
txg=402415
pool_guid=13155614069147461689
hostid=799263814
hostname='Chaiten'
top_guid=7879214599529115091
guid=9439709931602673823
vdev_children=8
vdev_tree
type='raidz'
id=5
guid=7879214599529115091
nparity=1
metaslab_array=35
metaslab_shift=40
ashift=12
asize=160028491776000
is_log=0
create_txg=22
children[0]
type='disk'
id=0
guid=15738823520260019536
path='/dev/dsk/c8t60001FF01232528037000

Re: [zfs-discuss] Zpool metadata corruption from S10U9 to S11 express

2011-06-23 Thread Smith, David W.


On 6/22/11 10:28 PM, "Fajar A. Nugraha"  wrote:

> On Thu, Jun 23, 2011 at 9:28 AM, David W. Smith  wrote:
>> When I tried out Solaris 11, I just exported the pool prior to the install of
>> Solaris 11.  I was lucky in that I had mirrored the boot drive, so after I
>> had
>> installed Solaris 11 I still had the other disk in the mirror with Solaris 10
>> still
>> installed.
>> 
>> I didn't install any additional software in either environments with regards
>> to
>> volume management, etc.
>> 
>> From the format command, I did remember seeing 60 LUNs coming from the DDN,
>> and as I recall I did see multiple paths as well under Solaris 11.  I think
>> you are correct, however, in that for some reason Solaris 11 could not read
>> the devices.
>> 
> 
> So you mean the root cause of the problem is that Solaris Express failed to
> see the disks? Or are the disks available on Solaris Express as well?
> 
> When you boot with Solaris Express Live CD, what does "zpool import" show?


Under Solaris 11 Express, the disks were visible with the format command,
luxadm probe, etc.  So I'm not sure why zpool import failed, or why it
apparently could not read the devices.  I have not tried the Solaris Express
Live CD; I was booted off an installed version.
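
(If it does turn out to be a multipathing or enumeration difference, one thing
worth comparing between the two boot environments is how the LUNs and their
paths are being mapped.  This is only a sketch using the standard Solaris
MPxIO tools; these commands were not run in the thread.)

# mpathadm list lu
# stmsboot -L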

David



Re: [zfs-discuss] Zpool metadata corruption from S10U9 to S11 express

2011-06-22 Thread Fajar A. Nugraha
On Thu, Jun 23, 2011 at 9:28 AM, David W. Smith  wrote:
> When I tried out Solaris 11, I just exported the pool prior to the install of
> Solaris 11.  I was lucky in that I had mirrored the boot drive, so after I had
> installed Solaris 11 I still had the other disk in the mirror with Solaris 10 
> still
> installed.
>
> I didn't install any additional software in either environments with regards 
> to
> volume management, etc.
>
> From the format command, I did remember seeing 60 LUNs coming from the DDN, and
> as I recall I did see multiple paths as well under Solaris 11.  I think you are
> correct, however, in that for some reason Solaris 11 could not read the devices.
>

So you mean the root cause of the problem is that Solaris Express failed to
see the disks? Or are the disks available on Solaris Express as well?

When you boot with Solaris Express Live CD, what does "zpool import" show?

-- 
Fajar


Re: [zfs-discuss] Zpool metadata corruption from S10U9 to S11 express

2011-06-22 Thread David W. Smith
On Wed, Jun 22, 2011 at 06:32:49PM -0700, Daniel Carosone wrote:
> On Wed, Jun 22, 2011 at 12:49:27PM -0700, David W. Smith wrote:
> > # /home/dws# zpool import
> >   pool: tank
> > id: 13155614069147461689
> >  state: FAULTED
> > status: The pool metadata is corrupted.
> > action: The pool cannot be imported due to damaged devices or data.
> >    see: http://www.sun.com/msg/ZFS-8000-72
> > config:
> > 
> > tank FAULTED  corrupted data
> > logs
> >   mirror-6   ONLINE
> >     c9t57d0  ONLINE
> >     c9t58d0  ONLINE
> >   mirror-7   ONLINE
> >     c9t59d0  ONLINE
> >     c9t60d0  ONLINE
> > 
> > Is there something else I can do to see what is wrong.
> 
> Can you tell us more about the setup, in particular the drivers and
> hardware on the path?  There may be labelling, block size, offset or
> even bad drivers or other issues getting in the way, preventing ZFS
> from doing what should otherwise be expected to work.   Was there
> something else in the storage stack on the old OS, like a different
> volume manager or some multipathing?
> 
> Can you show us the zfs labels with zdb -l /dev/foo ?
> 
> Does import -F get any further?
> 
> > Original attempt when specifying the name resulted in:
> > 
> > # /home/dws# zpool import tank
> > cannot import 'tank': I/O error
> 
> Some kind of underlying driver problem odour here.
> 
> --
> Dan.

The system is an x4440 with two dual-port Qlogic 8 Gbit FC cards connected to a
DDN 9900 storage unit.  There are 60 LUNs configured from the storage unit; we are
using raidz1 across these LUNs in a 9+1 configuration.  Under Solaris 10 U9,
multipathing is enabled.

For example here is one of the devices:


# luxadm display /dev/rdsk/c8t60001FF010DC50AA2E00081D1BF1d0s2
DEVICE PROPERTIES for disk: /dev/rdsk/c8t60001FF010DC50AA2E00081D1BF1d0s2
  Vendor:   DDN 
  Product ID:   S2A 9900
  Revision: 6.11
  Serial Num:   10DC50AA002E
  Unformatted capacity: 15261576.000 MBytes
  Write Cache:  Enabled
  Read Cache:   Enabled
Minimum prefetch:   0x0
Maximum prefetch:   0x0
  Device Type:  Disk device
  Path(s):

  /dev/rdsk/c8t60001FF010DC50AA2E00081D1BF1d0s2
  /devices/scsi_vhci/disk@g60001ff010dc50aa2e00081d1bf1:c,raw
   Controller                   /dev/cfg/c5
    Device Address              2401ff051232,2e
    Host controller port WWN    2101001b32bfe1d3
    Class                       secondary
    State                       ONLINE
   Controller                   /dev/cfg/c7
    Device Address              2801ff0510dc,2e
    Host controller port WWN    2101001b32bd4f8f
    Class                       primary
    State                       ONLINE


Here is the output of the zdb command:

# zdb -l /dev/dsk/c8t60001FF010DC50AA2E00081D1BF1d0s0

LABEL 0

version=22
name='tank'
state=0
txg=402415
pool_guid=13155614069147461689
hostid=799263814
hostname='Chaiten'
top_guid=7879214599529115091
guid=9439709931602673823
vdev_children=8
vdev_tree
type='raidz'
id=5
guid=7879214599529115091
nparity=1
metaslab_array=35
metaslab_shift=40
ashift=12
asize=160028491776000
is_log=0
create_txg=22
children[0]
type='disk'
id=0
guid=15738823520260019536
path='/dev/dsk/c8t60001FF0123252803700081D1BF1d0s0'
devid='id1,sd@n60001ff0123252803700081d1bf1/a'
phys_path='/scsi_vhci/disk@g60001ff0123252803700081d1bf1:a'
whole_disk=1
DTL=166
create_txg=22
children[1]
type='disk'
id=1
guid=7241121769141495862
path='/dev/dsk/c8t60001FF010DC50C53600081D1BF1d0s0'
devid='id1,sd@n60001ff010dc50c53600081d1bf1/a'
phys_path='/scsi_vhci/disk@g60001ff010dc50c53600081d1bf1:a'
whole_disk=1
DTL=165
create_txg=22
children[2]
type='disk'
id=2
guid=2777230007222012140
path='/dev/dsk/c8t60001FF0123252793500081D1BF1d0s0'
devid='id1,sd@n60001ff0123252793500081d1bf1/a'
phys_path='/scsi_vhci/disk@g60001ff0123252793500081d1bf1:a'
whole_disk=1
DTL=164
create_txg=22
children[3]
type='disk'
id=3
guid=5525323314985659974
path='/dev/dsk/c8t60001FF010DC50BE3400081D1BF1d0s0'
devid='id1,sd@n60001ff010dc50be3400081d1bf1/a'
phys_path='/scsi_vhci/disk@g60001ff010dc50be3400081d1bf

Re: [zfs-discuss] Zpool metadata corruption from S10U9 to S11 express

2011-06-22 Thread Daniel Carosone
On Wed, Jun 22, 2011 at 12:49:27PM -0700, David W. Smith wrote:
> # /home/dws# zpool import
>   pool: tank
> id: 13155614069147461689
>  state: FAULTED
> status: The pool metadata is corrupted.
> action: The pool cannot be imported due to damaged devices or data.
>    see: http://www.sun.com/msg/ZFS-8000-72
> config:
> 
> tank FAULTED  corrupted data
> logs
>   mirror-6   ONLINE
>     c9t57d0  ONLINE
>     c9t58d0  ONLINE
>   mirror-7   ONLINE
>     c9t59d0  ONLINE
>     c9t60d0  ONLINE
> 
> Is there something else I can do to see what is wrong.

Can you tell us more about the setup, in particular the drivers and
hardware on the path?  There may be labelling, block size, offset or
even bad drivers or other issues getting in the way, preventing ZFS
from doing what should otherwise be expected to work.   Was there
something else in the storage stack on the old OS, like a different
volume manager or some multipathing?

Can you show us the zfs labels with zdb -l /dev/foo ?

Does import -F get any further?
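
(Spelled out, those two checks would look roughly like the following; the
device name here is one of the log devices from the zpool import output
above, and -F asks zpool for a recovery-mode import.)

# zdb -l /dev/dsk/c9t57d0s0
# zpool import -F tank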

> Original attempt when specifying the name resulted in:
> 
> # /home/dws# zpool import tank
> cannot import 'tank': I/O error

Some kind of underlying driver problem odour here.

--
Dan.




Re: [zfs-discuss] Zpool metadata corruption from S10U9 to S11 express

2011-06-22 Thread David Smith
An update:

I had mirrored my boot drive when I installed Solaris 10 U9 originally, so I
went ahead and rebooted the system from that disk instead of my Solaris 11
install.  After getting the system up, I imported the zpool and everything
worked normally.

So I guess there is some sort of incompatibility between Solaris 10 and
Solaris 11.  I would have thought that Solaris 11 could import a pool with an
older pool version.

Any other insight on importing pools between these two versions of Solaris
would be helpful.
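
(One way to check the version angle, sketched with standard zpool commands
rather than anything from the thread: "zpool get version tank" on the system
where the pool imports shows the pool's on-disk ZFS version, and "zpool
upgrade -v" on each release lists the versions that release supports.  A newer
release should normally import a pool with an older version number, so this is
more likely to rule a version mismatch out than to confirm one.)

# zpool get version tank
# zpool upgrade -v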

Thanks,

David
-- 
This message posted from opensolaris.org


[zfs-discuss] Zpool metadata corruption from S10U9 to S11 express

2011-06-22 Thread David Smith
I was recently running Solaris 10 U9 and I decided that I would like to go
to Solaris 11 Express so I exported my zpool, hoping that I would just do
an import once I had the new system installed with Solaris 11.  Now when I
try to do an import I'm getting the following:

# /home/dws# zpool import
  pool: tank
id: 13155614069147461689
 state: FAULTED
status: The pool metadata is corrupted.
action: The pool cannot be imported due to damaged devices or data.
   see: http://www.sun.com/msg/ZFS-8000-72
config:

tank FAULTED  corrupted data
logs
  mirror-6   ONLINE
    c9t57d0  ONLINE
    c9t58d0  ONLINE
  mirror-7   ONLINE
    c9t59d0  ONLINE
    c9t60d0  ONLINE

Is there something else I can do to see what is wrong.

Original attempt when specifying the name resulted in:

# /home/dws# zpool import tank
cannot import 'tank': I/O error
Destroy and re-create the pool from
a backup source.

I verified that I have all 60 of my luns.  The controller numbers have
changed, but I don't believe that should matter.

Any suggestions about getting additional information about what is happening
would be greatly appreciated.

Thanks,

David
-- 
This message posted from opensolaris.org


[zfs-discuss] Zpool metadata corruption from S10U9 to S11 express

2011-06-22 Thread David W. Smith

I was recently running Solaris 10 U9 and I decided that I would like to go
to Solaris 11 Express so I exported my zpool, hoping that I would just do
an import once I had the new system installed with Solaris 11.  Now when I
try to do an import I'm getting the following:

# /home/dws# zpool import
  pool: tank
id: 13155614069147461689
 state: FAULTED
status: The pool metadata is corrupted.
action: The pool cannot be imported due to damaged devices or data.
   see: http://www.sun.com/msg/ZFS-8000-72
config:

tank FAULTED  corrupted data
logs
  mirror-6   ONLINE
    c9t57d0  ONLINE
    c9t58d0  ONLINE
  mirror-7   ONLINE
    c9t59d0  ONLINE
    c9t60d0  ONLINE

Is there something else I can do to see what is wrong.

Original attempt when specifying the name resulted in:

# /home/dws# zpool import tank
cannot import 'tank': I/O error
Destroy and re-create the pool from
a backup source.

I verified that I have all 60 of my luns.  The controller numbers have
changed, but I don't believe that should matter.

Any suggestions about getting additional information about what is happening 
would be greatly appreciated.
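
(A few first-pass checks that fit this situation, offered only as a sketch:
point the import scan explicitly at the device directory, dump the ZFS labels
on one of the LUNs, and check the FMA error log for I/O errors recorded during
the failed import.)

# zpool import -d /dev/dsk
# zdb -l /dev/dsk/c9t57d0s0
# fmdump -eV | tail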

Thanks,

David
