hi,
  your output doesn't show it, but format(1m)'s VERIFY tries to read both
the Primary Label (PL) and the Backup Labels (BL), and VERIFY prefixes
the output you pasted with a string saying which label the info was read from.

VTOC-style disk labels written on x86/Solaris show a size that is one sector
less than if the same label had been written on SPARC/Solaris.

were all the disks formatted under the same {architecture}/Solaris config?
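
one way to check: script a pass over all 24 disks and compare the
"accessible sectors" counts from their labels.  this is only a sketch --
the c#t#d# names are placeholders for yours, and i'm assuming EFI-labeled
disks, where prtvtoc(1m)'s commented header carries an "accessible sectors"
line (VTOC labels report "accessible cylinders" instead):

#!/usr/bin/env python
# sketch: compare "accessible sectors" across a set of disks by parsing
# prtvtoc(1m) output.  DISKS holds hypothetical device names; substitute
# your own.
import re
import subprocess

DISKS = ["c1t%dd0" % i for i in range(24)]   # placeholder names

for disk in DISKS:
    out = subprocess.check_output(["prtvtoc", "/dev/rdsk/%ss0" % disk])
    m = re.search(r"(\d+)\s+accessible sectors", out.decode())
    print("%s: %s" % (disk, m.group(1) if m else "no accessible-sectors line"))

any disk whose count differs from its siblings was probably labeled under
a different config.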

lastly, from the ZFS basics I know, ZFS writes both a "header" and a
"trailer" on a volume (using round-up/round-down techniques) so that issues
like the one you're seeing get masked.  e.g., a disk's size can be "off by
one" sector, yet ZFS absorbs that rounding error in its headers/trailers.
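
for concreteness, here's a back-of-the-envelope version of that rounding.
the constants are my reading of the on-disk layout (two 256KB labels plus
a boot block up front, two 256KB labels in back, and the device size
aligned down to a 256KB label boundary, roughly what vdev_open() does) --
treat them as assumptions, not gospel:

# sketch of ZFS's usable-size ("asize") computation for a leaf vdev:
# align the raw size DOWN to a 256KB label boundary, then subtract the
# front (2 labels + boot block = 4MB) and back (2 labels = 512KB)
# reservations.  constants are my assumption of the on-disk format.
SECTOR = 512
LABEL  = 256 * 1024          # sizeof (vdev_label_t)
FRONT  = 4 * 1024 * 1024     # VDEV_LABEL_START_SIZE (2 labels + boot block)
BACK   = 2 * LABEL           # VDEV_LABEL_END_SIZE

def asize(sectors):
    osize = sectors * SECTOR
    osize -= osize % LABEL   # P2ALIGN(osize, sizeof (vdev_label_t))
    return osize - (FRONT + BACK)

# the two accessible-sector counts from the format(1m) output below:
print(asize(3907029134) == asize(3907029133))   # True: the one-sector
                                                # delta rounds away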

/andrew

On 11/05/09 15:06, Ron Mexico wrote:
> I have 24 identical Western Digital drives connected to a Dell SAS 5/E HBA.
> 
> Three of the drives list the following disk information when using the verify 
> utility under the format command:
> 
> Volume name = <        >
> ascii name  = <ATA-WDC WD2002FYPS-0-5G04-1.82TB>
> bytes/sector    =  512
> sectors = 3907029166
> accessible sectors = 3907029133
> Part      Tag    Flag     First Sector          Size          Last Sector
>   0        usr    wm                34         1.82TB           3907012749    
>   1 unassigned    wm                 0            0                0    
>   2 unassigned    wm                 0            0                0    
>   3 unassigned    wm                 0            0                0    
>   4 unassigned    wm                 0            0                0    
>   5 unassigned    wm                 0            0                0    
>   6 unassigned    wm                 0            0                0    
>   8   reserved    wm        3907012750         8.00MB           3907029133
> 
> 
> The remaining 21 drives show this disk info:
> 
> Volume name = <        >
> ascii name  = <ATA-WDC WD2002FYPS-0-5G04-1.82TB>
> bytes/sector    =  512
> sectors = 3907029166
> accessible sectors = 3907029134
> Part      Tag    Flag     First Sector          Size          Last Sector
>   0        usr    wm               256         1.82TB           3907012750    
>   1 unassigned    wm                 0            0                0    
>   2 unassigned    wm                 0            0                0    
>   3 unassigned    wm                 0            0                0    
>   4 unassigned    wm                 0            0                0    
>   5 unassigned    wm                 0            0                0    
>   6 unassigned    wm                 0            0                0    
>   8   reserved    wm        3907012751         8.00MB           3907029134    
> 
> 
> I know this isn't going to be an issue when creating a raidz pool, but if I 
> have to replace a failed disk with one that has one less accessible sector, 
> won't that cause problems?
> 
> According to the ZFS Best Practices Guide: "The size of the replacement vdev, 
> measured by usable sectors, must be the same or greater than the vdev being 
> replaced. This can be confusing when whole disks are used because different 
> models of disks may provide a different number of usable sectors."
> 
> Can anyone shed some light on this?

-- 
Andrew Rutz                                    andrew.rutz at sun.com
Solaris RPE                              Ph: (x64089) 512-401-1089
Austin, TX  78727                        Fax:         512-401-1452
