Re: [zfs-discuss] Question: ZFS + Block level SHA256 ~= almost free CAS Squishing?

2007-01-13 Thread Pawel Jakub Dawidek
On Mon, Jan 08, 2007 at 11:00:36AM -0600, [EMAIL PROTECTED] wrote:
> I have been looking at zfs source trying to get up to speed on the
> internals.  One thing that interests me about the fs is what appears to be
> a low hanging fruit for block squishing CAS (Content Addressable Storage).
> I think that in addition to lzjb compression, squishing blocks that contain
> the same data would buy a lot of space for administrators working in many
> common workflows.
[...]

I like the idea, but I'd prefer to see such an option be per-pool rather than
per-filesystem.

I found somewhere in the ZFS documentation that clones are nice to use for a
large number of diskless stations. That's fine, but after every upgrade,
more and more files are updated and fewer and fewer blocks are shared
between clones. Having such functionality for the entire pool would be a
nice optimization in this case. This doesn't actually have to be a per-pool
option, but a per-filesystem-hierarchy one, i.e. all file systems under
tank/diskless/.

I'm not yet sure how to build the list of hash-to-block mappings quickly at
boot time for large pools...
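
FWIW, ZFS property inheritance would already give a natural way to express the
per-hierarchy scope. Just as a sketch (the 'dedup' property below is
hypothetical, not an existing option), setting it once on the parent would
cover every clone underneath it:

  zfs set dedup=sha256 tank/diskless   # hypothetical property, per-hierarchy scope
  zfs get -r dedup tank/diskless       # all child file systems inherit the setting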

-- 
Pawel Jakub Dawidek   http://www.wheel.pl
[EMAIL PROTECTED]   http://www.FreeBSD.org
FreeBSD committer Am I Evil? Yes, I Am!




[zfs-discuss] Extremely poor ZFS perf and other observations

2007-01-13 Thread Anantha N. Srirama
I'm observing the following behavior in our environment (Sol10U2, E2900, 24x96, 
2x2Gbps, ...)

- I have a compressed ZFS filesystem where I'm creating a large tar file. I 
notice that the tar process is running fine (accumulating CPU, truss shows 
writes, ...) but for whatever reason the timestamp on the file doesn't change, 
nor does the file size. The same is true for 'zpool list' output: the usage 
numbers don't change for minutes at a time (a quick way to check this is 
sketched below).

- I started a tar job to the compressed ZFS filesystem reading from another 
compressed ZFS filesystem. At the same time I started copying files from 
another ZFS filesystem (same pool, same attributes) to a remote server (GigE 
connection) using scp, writing to a UFS filesystem. Guess what? My scp over 
the wire beat the pants off of the local ZFS tar session writing to a 2x2Gbps 
SAN and EMC disks!

I'm beginning to develop serious reservations about ZFS performance, 
especially with the compression feature turned on.
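
As a sanity check on the first observation (the pool and file names below are
placeholders, not the real ones), I can watch pool-level I/O while the tar
runs:

  zpool iostat pool1 5            # write bandwidth to the pool, every 5 seconds
  ls -l /pool1/fs/big.tar         # repeat a few times; size and mtime
  zfs get compressratio pool1/fs  # how well the data is compressing

If zpool iostat shows steady write activity while the ls output stands still,
the writes are at least reaching the pool, and the question becomes why the
file metadata and 'zpool list' numbers lag so far behind.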
 
 


[zfs-discuss] question about self healing

2007-01-13 Thread roland
i have come across an interesting article at:

http://www.anandtech.com/IT/showdoc.aspx?i=2859&p=5

it's about SATA vs. SAS/SCSI reliability, and says that typical desktop SATA 
drives on average experience an Unrecoverable Error every 12.5 terabytes 
written or read (an unrecoverable error rate of 1 in 10^14 bits).
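
for scale, the arithmetic behind that figure:

  10^14 bits / 8 = 1.25 x 10^13 bytes = 12.5 TB (decimal)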

since the 1TB drive is out very soon, this really makes me afraid for the data 
integrity on my backup disks, so the question is:

will zfs help detect/prevent such single-bit errors?

i'm somewhat sure that it will help if i use a raid1 setup with ZFS - its 
self-healing will detect those single-bit errors and correct them - but what 
about a single disk setup?

can zfs protect my data from such single-bit errors with a single drive?

regards
roland
 
 


Re: [zfs-discuss] question about self healing

2007-01-13 Thread James Dickens

On 1/13/07, roland [EMAIL PROTECTED] wrote:

> i have come across an interesting article at:
>
> http://www.anandtech.com/IT/showdoc.aspx?i=2859&p=5
>
> it's about SATA vs. SAS/SCSI reliability, and says that typical desktop SATA
> drives on average experience an Unrecoverable Error every 12.5 terabytes
> written or read (an unrecoverable error rate of 1 in 10^14 bits).
>
> since the 1TB drive is out very soon, this really makes me afraid for the
> data integrity on my backup disks, so the question is:
>
> will zfs help detect/prevent such single-bit errors?


zfs will detect a single-bit error, and if you are using raid - either
raidz or mirroring - it will fix the error.


> i'm somewhat sure that it will help if i use a raid1 setup with ZFS - its
> self-healing will detect those single-bit errors and correct them - but what
> about a single disk setup?


if you aren't using mirroring or raidz the error will be detected but
won't be repaired, with the possible exception of metadata blocks that hold
information about the files and disk structures: there are multiple copies
of these, and they can be used should the error occur in one of those
blocks.



> can zfs protect my data from such single-bit errors with a single drive?


nope.. but it can tell you that it has occurred.
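
a minimal way to surface those detections (the pool name 'backup' is just a
placeholder):

  zpool scrub backup       # read every block and verify it against its checksum
  zpool status -v backup   # checksum error counters plus the damaged objects

on a single-disk pool the scrub can only report the damage; with a mirror or
raidz it is repaired from a good copy.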


James Dickens
uadmin.blogspot.com


> regards
> roland




Re: [zfs-discuss] Re: question about self healing

2007-01-13 Thread James Dickens

On 1/13/07, roland [EMAIL PROTECTED] wrote:

> thanks for your infos!
>
> > > can zfs protect my data from such single-bit errors with a single drive?
> > nope.. but it can tell you that it has occurred.
>
> can it also tell (or can i use a tool to determine) which data/file is
> affected by this error (and needs repair/restore from backup)?



with current versions it prints out the inode of the damaged file; you can
use find -inum xxx to find the bad file. there is already a request for
enhancement to print the name and path of the bad file.
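
a rough illustration of that workflow (the pool name and inode number are
made up for the example):

  zpool status -v tank            # reports the object/inode numbers with errors
  find /tank -inum 12345 -print   # map the inode number back to a file path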

James





[zfs-discuss] ZFS and HDLM 5.8 ... does that coexist well ?

2007-01-13 Thread Gael

Hello,


I'm currently trying to convert a system from Solaris 10 U1 with Veritas VM
to Solaris 10 U3 with ZFS... the san portion of the server is managed by
Hitachi HDLM 5.8.

I'm seeing two distinct errors... let me know if they are classical or if I
should open a ticket (bug report)... Thanks in advance ...

When trying to create a pool with the whole luns, I'm getting the following
error


jumps8002 #format
Unknown controller 'MD21' - /etc/format.dat (15)
Unknown controller 'MD21' - /etc/format.dat (20)
Unknown controller 'MD21' - /etc/format.dat (25)
Unknown controller 'MD21' - /etc/format.dat (151)
Unknown controller 'MD21' - /etc/format.dat (155)
Unknown controller 'MD21' - /etc/format.dat (159)
Unknown controller 'MD21' - /etc/format.dat (163)
Unknown controller 'MD21' - /etc/format.dat (167)
Searching for disks...done


AVAILABLE DISK SELECTIONS:
  0. c1t0d0 SUN72G cyl 14087 alt 2 hd 24 sec 424
 /[EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL PROTECTED],0
  1. c1t1d0 SUN72G cyl 14087 alt 2 hd 24 sec 424
 /[EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL PROTECTED],0
  2. c7t50060E8004758654d0 HITACHI-OPEN-V-SUN-5006 cyl 3821 alt 2 hd
15 sec 512
 /pseudo/[EMAIL PROTECTED]/[EMAIL PROTECTED],0
  3. c7t50060E8004758654d1 HITACHI-OPEN-V-SUN-5006 cyl 3821 alt 2 hd
15 sec 512
 /pseudo/[EMAIL PROTECTED]/[EMAIL PROTECTED],1
  4. c7t50060E8004758654d2 HITACHI-OPEN-V-SUN-5006 cyl 3821 alt 2 hd
15 sec 512
 /pseudo/[EMAIL PROTECTED]/[EMAIL PROTECTED],2
  5. c7t50060E8004758654d3 HITACHI-OPEN-V-SUN-5005 cyl 3821 alt 2 hd
15 sec 512
 /pseudo/[EMAIL PROTECTED]/[EMAIL PROTECTED],3
  6. c7t50060E8004758654d4 HITACHI-OPEN-V-SUN-5005 cyl 3821 alt 2 hd
15 sec 512
 /pseudo/[EMAIL PROTECTED]/[EMAIL PROTECTED],4
  7. c7t50060E8004758654d5 HITACHI-OPEN-V-SUN-5005 cyl 3821 alt 2 hd
15 sec 512
 /pseudo/[EMAIL PROTECTED]/[EMAIL PROTECTED],5
  8. c7t50060E8004758654d6 HITACHI-OPEN-V-SUN-5005 cyl 3821 alt 2 hd
15 sec 512
 /pseudo/[EMAIL PROTECTED]/[EMAIL PROTECTED],6
  9. c7t50060E8004758654d7 HITACHI-OPEN-V-SUN-5005 cyl 3821 alt 2 hd
15 sec 512
 /pseudo/[EMAIL PROTECTED]/[EMAIL PROTECTED],7
 10. c7t50060E8004758654d8 HITACHI-OPEN-V-SUN-5005 cyl 3821 alt 2 hd
15 sec 512
 /pseudo/[EMAIL PROTECTED]/[EMAIL PROTECTED],8
- hit space for more or s to select -
jumps8002 #zpool create sanpool c7t50060E8004758654d0 c7t50060E8004758654d1
c7t50060E8004758654d2
cannot open '/dev/dsk/c7t50060E8004758654d0s0':

[..]
AVAILABLE DISK SELECTIONS:
  0. c1t0d0 SUN72G cyl 14087 alt 2 hd 24 sec 424
 /[EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL PROTECTED],0
  1. c1t1d0 SUN72G cyl 14087 alt 2 hd 24 sec 424
 /[EMAIL PROTECTED],60/[EMAIL PROTECTED]/[EMAIL PROTECTED],0
  2. c7t50060E8004758654d0 HITACHI-OPEN-V  -SUN-5006-14.00GB
 /pseudo/[EMAIL PROTECTED]/[EMAIL PROTECTED],0
[..]

The first disk listed switched to the EFI format, but not the others.
When resetting the disks to the SMI format and attempting to create the same
pool using the s0 slices (after creating them the same on each disk), I simply
get a panic...


jumps8002:/root #zpool create sanpool c7t50060E8004758654d0s0
c7t50060E8004758654d1s0 c7t50060E8004758654d2s0

panic[cpu1]/thread=30003140320: BAD TRAP: type=31 rp=2a100cbe9d0 addr=0
mmu_fsr=0 occurred in module dlmfdrv due to a NULL pointer dereference

zpool: trap type = 0x31
pid=966, pc=0x7b286518, sp=0x2a100cbe271, tstate=0x80001606, context=0x73b
g1-g7: 0, 0, 0, 300031c29c0, 1, 0, 30003140320

02a100cbe6f0 unix:die+78 (31, 2a100cbe9d0, 0, 0, 2a100cbe7b0, 1076000)
 %l0-3: 1fff 0031 0100 2000
 %l4-7: 0181a1d8 0181a000  80001606
02a100cbe7d0 unix:trap+9d4 (2a100cbe9d0, 1, 1fff, 5, 0, 1)
 %l0-3:  06000293e7a0 0031 
 %l4-7: e000  0001 0005
02a100cbe920 unix:ktl0+48 (70067f58, 30003140320, , 0,
0, 3072a08)
 %l0-3: 0007 1400 80001606 0101aa04
 %l4-7: 70067ee8 70067000  02a100cbe9d0
02a100cbea70 dlmfdrv:HSPLog_Main+704 (1110008, 42a, 2a100cbf398,
8020, 6c03e48, 0)
 %l0-3:   060001700354 70067ee8
 %l4-7: 70067ef0 060001700330 0008 0008
02a100cbeb80 dlmfdrv:dlmfdrv_ioctl+648 (1110008, 42a, 2a100cbf398,
8020, 6c03e48, 0)
 %l0-3: 02a100cbf29c 06e422f8 f00ffc00 
 %l4-7:    01860800
02a100cbf2e0 zfs:vdev_disk_open+2cc (6000294ba40, 7c00, 2a100cbf470,
1c8, 18a8708, 0)
 %l0-3: 0600016cd6c8 0016  b076
 

Re: [zfs-discuss] ZFS and HDLM 5.8 ... does that coexist well ?

2007-01-13 Thread Richard Elling

Gael wrote:

Hello,
 
I'm currently trying to convert a system from Solaris 10 U1 with Veritas 
VM to Solaris 10 U3 with ZFS... the san portion of the server is managed 
by Hitachi HDLM 5.8.
 
I'm seeing two distinct errors... let me know if they are classical or 
if I should open a ticket (bug report)... Thanks in advance ...
 
When trying to create a pool with the whole luns, I'm getting the 
following error
 
jumps8002 #format

Unknown controller 'MD21' - /etc/format.dat (15)
Unknown controller 'MD21' - /etc/format.dat (20)
Unknown controller 'MD21' - /etc/format.dat (25)
Unknown controller 'MD21' - /etc/format.dat (151)
Unknown controller 'MD21' - /etc/format.dat (155)
Unknown controller 'MD21' - /etc/format.dat (159)
Unknown controller 'MD21' - /etc/format.dat (163)
Unknown controller 'MD21' - /etc/format.dat (167)
Searching for disks...done


So, what is in your format.dat?  I haven't seen an MD21 in over 15 years.
I would have thought that we removed it from format.dat long ago...
 -- richard


Re: [zfs-discuss] ZFS and HDLM 5.8 ... does that coexist well ?

2007-01-13 Thread Eric Schrock
On Sat, Jan 13, 2007 at 12:11:26PM -0800, Richard Elling wrote:
 
 So, what is in your format.dat?  I haven't seen an MD21 in over 15 years.
 I would have thought that we removed it from format.dat long ago...
  -- richard

This sounds like:

5020503 *format* Unknown controller 'MD21' warnings

Which was due to customer error when a jumpstart symlink was
accidentally grabbing information from a Solaris 9 environment.
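
A quick way to check for that scenario (just a suggestion; the second path
below is a placeholder for wherever your jumpstart image lives):

  ls -l /etc/format.dat                               # make sure it isn't a stray symlink
  diff /etc/format.dat /path/to/jumpstart/image/etc/format.dat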

- Eric

--
Eric Schrock, Solaris Kernel Development   http://blogs.sun.com/eschrock


Re: [zfs-discuss] ZFS and HDLM 5.8 ... does that coexist well ?

2007-01-13 Thread Gael

On 1/13/07, Richard Elling [EMAIL PROTECTED] wrote:


Gael wrote:
 Hello,

 I'm currently trying to convert a system from Solaris 10 U1 with Veritas
 VM to Solaris 10 U3 with ZFS... the san portion of the server is managed
 by Hitachi HDLM 5.8.

 I'm seeing two distinct errors... let me know if they are classical or
 if I should open a ticket (bug report)... Thanks in advance ...

 When trying to create a pool with the whole luns, I'm getting the
 following error

 jumps8002 #format
 Unknown controller 'MD21' - /etc/format.dat (15)
 Unknown controller 'MD21' - /etc/format.dat (20)
 Unknown controller 'MD21' - /etc/format.dat (25)
 Unknown controller 'MD21' - /etc/format.dat (151)
 Unknown controller 'MD21' - /etc/format.dat (155)
 Unknown controller 'MD21' - /etc/format.dat (159)
 Unknown controller 'MD21' - /etc/format.dat (163)
 Unknown controller 'MD21' - /etc/format.dat (167)
 Searching for disks...done

So, what is in your format.dat?  I haven't seen an MD21 in over 15 years.
I would have thought that we removed it from format.dat long ago...
-- richard



jumps8002:/etc/apache2 #cat /etc/release
  Solaris 10 11/06 s10s_u3wos_10 SPARC
  Copyright 2006 Sun Microsystems, Inc.  All Rights Reserved.
   Use is subject to license terms.
  Assembled 14 November 2006

The file is a little bit too long to flood the list with it, here a quick
grep

jumps8002:/etc/apache2 #cat /etc/format.dat |grep MD21
# This is the list of supported disks for the Emulex MD21 controller.
   : ctlr = MD21 \
   : ctlr = MD21 \
   : ctlr = MD21 \
# This is the list of partition tables for the Emulex MD21 controller.
   : disk = Micropolis 1355 : ctlr = MD21 \
   : disk = Micropolis 1355 : ctlr = MD21 \
   : disk = Toshiba MK 156F : ctlr = MD21 \
   : disk = Micropolis 1558 : ctlr = MD21 \
   : disk = Micropolis 1558 : ctlr = MD21 \



--
Gael


Re: [zfs-discuss] ZFS and HDLM 5.8 ... does that coexist well ?

2007-01-13 Thread Gael

On 1/13/07, Eric Schrock [EMAIL PROTECTED] wrote:


On Sat, Jan 13, 2007 at 12:11:26PM -0800, Richard Elling wrote:

 So, what is in your format.dat?  I haven't seen an MD21 in over 15
years.
 I would have thought that we removed it from format.dat long ago...
  -- richard

This sounds like:

5020503 *format* Unknown controller 'MD21' warnings

Which was due to customer error when a jumpstart symlink was
accidentally grabbing information from a Solaris 9 environment.

- Eric

--
Eric Schrock, Solaris Kernel Development
http://blogs.sun.com/eschrock



Eric,

My issue is not the MD21 error, which has been in Solaris for a long time, but
the inability to create a pool with an HDLM device. I did try to search on
SunSolve but no luck on that one... same with the Hitachi site, no luck
either. Googling right now and thinking of switching to MPxIO, as I need to
get that machine back online before Monday...
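
If I do go the MPxIO route, my understanding (written from memory, to be
double-checked against the stmsboot docs) is that it boils down to:

  stmsboot -e    # enable MPxIO for fp-attached FC devices, then reboot
  stmsboot -L    # after the reboot, list the old-to-new device name mappings

after which, with HDLM out of the picture, the LUNs should show up as
scsi_vhci devices instead of going through the dlmfdrv driver.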

Regards
--
Gael


Re: [zfs-discuss] ZFS and HDLM 5.8 ... does that coexist well ?

2007-01-13 Thread Eric Schrock
On Sat, Jan 13, 2007 at 01:30:19PM -0600, Gael wrote:
 Hello,
 
 jumps8002 #zpool create sanpool c7t50060E8004758654d0 c7t50060E8004758654d1
 c7t50060E8004758654d2
 cannot open '/dev/dsk/c7t50060E8004758654d0s0':

This is a strange error, can you do a 'truss -topen' of this process?
Does the automatic EFI label work?  Does the 's0' slice exist after
labelling the disk?  Can you manually create an EFI label using format?
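
Something along these lines should show exactly which open() fails and with
what errno (the LUN name is just the first one from your earlier output):

  truss -f -topen zpool create sanpool c7t50060E8004758654d0 2>&1 | tail -30

The -f follows forked children and -topen restricts the trace to open() calls.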

 jumps8002:/root #zpool create sanpool c7t50060E8004758654d0s0
 c7t50060E8004758654d1s0 c7t50060E8004758654d2s0
 
 panic[cpu1]/thread=30003140320: BAD TRAP: type=31 rp=2a100cbe9d0 addr=0
 mmu_fsr=0 occurred in module dlmfdrv due to a NULL point
 er dereference

This is clearly a bug in the driver.  The driver is not behaving
correctly in response to either the DKIOCSETWCE or DKIOCGMEDIAINFO
ioctl().

- Eric

--
Eric Schrock, Solaris Kernel Development   http://blogs.sun.com/eschrock


Re: [zfs-discuss] ZFS and HDLM 5.8 ... does that coexist well ?

2007-01-13 Thread Gael

On 1/13/07, Eric Schrock [EMAIL PROTECTED] wrote:


On Sat, Jan 13, 2007 at 01:30:19PM -0600, Gael wrote:
 Hello,

 jumps8002 #zpool create sanpool c7t50060E8004758654d0
c7t50060E8004758654d1
 c7t50060E8004758654d2
 cannot open '/dev/dsk/c7t50060E8004758654d0s0':

This is a strange error, can you do a 'truss -topen' of this process?
Does the automatic EFI label work?  Does the 's0' slice exist after
labelling the disk?  Can you manually create an EFI label using format?




The truss output is attached to this email. After running zpool against the
whole LUNs (not specifying s0 on the command line), the first device listed
is converted to EFI; the other two remain SMI.

selecting c7t50060E8004758654d0
[disk formatted]

partition p
Current partition table (original):
Total disk sectors available: 29344222 + 16384 (reserved sectors)

Part  TagFlag First SectorSizeLast Sector
 0usrwm34  13.99GB 29344222
 1 unassignedwm 0  0  0
 2 unassignedwm 0  0  0
 3 unassignedwm 0  0  0
 4 unassignedwm 0  0  0
 5 unassignedwm 0  0  0
 6 unassignedwm 0  0  0
 8   reservedwm  29344223   8.00MB 29360606
If I go and create the s0 slice on the second lun, it works perfectly...


partition p
Current partition table (original):
Total disk cylinders available: 3821 + 2 (reserved cylinders)

Part  TagFlag CylindersSizeBlocks
 0   rootwm   0 - 3820   13.99GB(3821/0/0) 29345280
 1 unassignedwu   0   0 (0/0/0)   0
 2 backupwu   0 - 3820   13.99GB(3821/0/0) 29345280
 3 unassignedwu   0   0 (0/0/0)   0
 4 unassignedwu   0   0 (0/0/0)   0
 5 unassignedwu   0   0 (0/0/0)   0
 6 unassignedwu   0   0 (0/0/0)   0
 7 unassignedwu   0   0 (0/0/0)   0

partition label
[0] SMI Label
[1] EFI Label
Specify Label type[0]: 1
Warning: This disk has an SMI label. Changing to EFI label will erase all
current partitions.
Continue? y
partition p
Current partition table (original):
Total disk sectors available: 29344222 + 16384 (reserved sectors)

Part  TagFlag First SectorSizeLast Sector
 0   rootwm34  13.99GB 29344221
 1 unassignedwm 0  0  0
 2 unassignedwm 0  0  0
 3 unassignedwm 0  0  0
 4 unassignedwm 0  0  0
 5 unassignedwm 0  0  0
 6 unassignedwm 0  0  0
 7 unassignedwm 0  0  0
 8   reservedwm  29344222   8.00MB 29360605



jumps8002:/root #zpool create sanpool c7t50060E8004758654d0s0
 c7t50060E8004758654d1s0 c7t50060E8004758654d2s0

 panic[cpu1]/thread=30003140320: BAD TRAP: type=31 rp=2a100cbe9d0 addr=0
 mmu_fsr=0 occurred in module dlmfdrv due to a NULL point
 er dereference

This is clearly a bug in the driver.  The driver is not behaving
correctly in reponse to either the DKIOCSETWCE or DKIOCGMEDIAINFO
ioctl().

- Eric

--
Eric Schrock, Solaris Kernel Development
http://blogs.sun.com/eschrock





--
Gael
open(/var/ld/ld.config, O_RDONLY) Err#2 ENOENT
open(/lib/libzfs.so.2, O_RDONLY)  = 3
open(/lib/libnvpair.so.1, O_RDONLY)   = 3
open(/lib/libdevid.so.1, O_RDONLY)= 3
open(/lib/libefi.so.1, O_RDONLY)  = 3
open(/usr/lib/libdiskmgt.so.1, O_RDONLY)  = 3
open(/lib/libuutil.so.1, O_RDONLY)= 3
open(/lib/libumem.so.1, O_RDONLY) = 3
open(/lib/libc.so.1, O_RDONLY)= 3
open(/lib/libm.so.2, O_RDONLY)= 3
open(/lib/libdevinfo.so.1, O_RDONLY)  = 3
open(/lib/libgen.so.1, O_RDONLY)  = 3
open(/lib/libnsl.so.1, O_RDONLY)  = 3
open(/lib/libuuid.so.1, O_RDONLY) = 3
open(/lib/libadm.so.1, O_RDONLY)  = 3
open(/lib/libkstat.so.1, O_RDONLY)= 3
open(/lib/libsysevent.so.1, O_RDONLY) = 3
open(/usr/lib/libvolmgt.so.1, O_RDONLY)   = 3
open(/lib/libsec.so.1, O_RDONLY)  = 3
open(/lib/libsocket.so.1, O_RDONLY)   = 3
open(/lib/libdoor.so.1, O_RDONLY) = 3
open(/lib/libavl.so.1, O_RDONLY)  = 3
open(/platform/SUNW,Sun-Fire-V240/lib/libc_psr.so.1, O_RDONLY) = 3
open(/dev/zfs, O_RDWR)= 3
open(/etc/mnttab, O_RDONLY)   = 4

Re: [zfs-discuss] ZFS and HDLM 5.8 ... does that coexist well ?

2007-01-13 Thread Joerg Schilling
Eric Schrock [EMAIL PROTECTED] wrote:

 On Sat, Jan 13, 2007 at 12:11:26PM -0800, Richard Elling wrote:
  
  So, what is in your format.dat?  I haven't seen an MD21 in over 15 years.
  I would have thought that we removed it from format.dat long ago...
   -- richard

 This sounds like:

 5020503 *format* Unknown controller 'MD21' warnings

I don't understand these warnings, as the MD21 is the first controller that 
_really_ supports inquiry. Sformat may output some ACB-5500 warnings; that 
was the first controller that replied to inquiry but sent a completely nulled 
block, which was OK for a disk controller in those days.

Jörg

-- 
 EMail:[EMAIL PROTECTED] (home) Jörg Schilling D-13353 Berlin
   [EMAIL PROTECTED](uni)  
   [EMAIL PROTECTED] (work) Blog: http://schily.blogspot.com/
 URL:  http://cdrecord.berlios.de/old/private/ ftp://ftp.berlios.de/pub/schily


Re: Re[2]: [zfs-discuss] Replacing a drive in a raidz2 group

2007-01-13 Thread Jason J. W. Williams

Hi Robert,

Will build 54 offline the drive?

Best Regards,
Jason

On 1/13/07, Robert Milkowski [EMAIL PROTECTED] wrote:

Hello Jason,

Saturday, January 13, 2007, 12:06:57 AM, you wrote:

JJWW Hi Robert,

JJWW We've experienced luck with flaky SATA drives in our STK array by
JJWW unseating and reseating the drive to cause a reset of the firmware. It
JJWW may be a bad drive, or the firmware may just have hit a bug. Hope its
JJWW the latter! :-D

JJWW I'd be interested why the hot-spare didn't kick in. I thought the FMA
JJWW integration would detect read errors.

FMA did, but on the ZFS+FMA integration we're not there yet in U3.

--
Best regards,
 Robertmailto:[EMAIL PROTECTED]
   http://milek.blogspot.com





Re: [zfs-discuss] ZFS and HDLM 5.8 ... does that coexist well ?

2007-01-13 Thread Richard Elling

Gael wrote:

jumps8002:/etc/apache2 #cat /etc/release
   Solaris 10 11/06 s10s_u3wos_10 SPARC
   Copyright 2006 Sun Microsystems, Inc.  All Rights Reserved.
Use is subject to license terms.
   Assembled 14 November 2006
 
The file is a little bit too long to flood the list with it, here a 
quick grep


jumps8002:/etc/apache2 #cat /etc/format.dat |grep MD21
# This is the list of supported disks for the Emulex MD21 controller.
: ctlr = MD21 \
: ctlr = MD21 \
: ctlr = MD21 \
# This is the list of partition tables for the Emulex MD21 controller.
: disk = Micropolis 1355 : ctlr = MD21 \
: disk = Micropolis 1355 : ctlr = MD21 \
: disk = Toshiba MK 156F : ctlr = MD21 \
: disk = Micropolis 1558 : ctlr = MD21 \
: disk = Micropolis 1558 : ctlr = MD21 \


As I thought.  That /etc/format.dat probably didn't come from Solaris 10,
or at least I don't see those entries in NV.

FWIW, the Micropolis 1355 is a 141 MByte (!) ESDI disk.  The MD21 is an
ESDI to SCSI converter.
 -- richard


Re: [zfs-discuss] ZFS and HDLM 5.8 ... does that coexist well ?

2007-01-13 Thread Torrey McMahon

Richard Elling wrote:

Gael wrote:

jumps8002:/etc/apache2 #cat /etc/release
   Solaris 10 11/06 s10s_u3wos_10 SPARC
   Copyright 2006 Sun Microsystems, Inc.  All Rights Reserved.
Use is subject to license terms.
   Assembled 14 November 2006
 
The file is a little bit too long to flood the list with it, here a 
quick grep


jumps8002:/etc/apache2 #cat /etc/format.dat |grep MD21
# This is the list of supported disks for the Emulex MD21 controller.
: ctlr = MD21 \
: ctlr = MD21 \
: ctlr = MD21 \
# This is the list of partition tables for the Emulex MD21 controller.
: disk = Micropolis 1355 : ctlr = MD21 \
: disk = Micropolis 1355 : ctlr = MD21 \
: disk = Toshiba MK 156F : ctlr = MD21 \
: disk = Micropolis 1558 : ctlr = MD21 \
: disk = Micropolis 1558 : ctlr = MD21 \


As I thought.  That /etc/format.dat probably didn't come from Solaris 10,
or at least I don't see those entries in NV.

FWIW, the Micropolis 1355 is a 141 MByte (!) ESDI disk.  The MD21 is an
ESDI to SCSI converter.



Maybe it's time to clean that file up? Do we even need it anymore?


