Hi. I'm working on getting the most recent ZFS into FreeBSD's CVS. Because of the huge amount of changes, I decided to work on ZFS regression tests first, so I can be more or less sure nothing broke in the meantime.
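To give a taste of what such a regression test looks like, here is a minimal sketch of an exit-status assertion helper. This is not my actual harness, just an illustration, and the helper name is made up; a real test would run zpool commands instead of true/false:

```shell
#!/bin/sh
# expect_status: run a command and check that its exit status matches
# what we expect, printing ok/FAIL in regression-test style.
# (Hypothetical helper for illustration only, not the real harness.)
expect_status() {
	want=$1; shift
	"$@" >/dev/null 2>&1
	got=$?
	if [ "$got" -eq "$want" ]; then
		echo "ok: '$*' exited with $want"
	else
		echo "FAIL: '$*' exited with $got, expected $want"
	fi
}

# Demonstration with plain commands; a real test would do something
# like: expect_status 1 zpool create test mirror disk10GB disk20GB
expect_status 0 true
expect_status 1 false
```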
(Yes, I know about the ZFS test suite, but unfortunately I wasn't able to port it to FreeBSD; it was just too much work. I'm afraid it is too Solaris-specific. My tests are quite different, on the other hand, so we will just end up with more tests for ZFS, which is a good thing.)

For now I have around 2000 tests, covering only the zpool create/add/remove/offline commands. I discovered some issues already, so I'll share them while I work on the rest.

1. Inconsistent behaviour of 'zpool create' and 'zpool add' when handling mirror vdevs and log mirror vdevs.

Different disk sizes:

# zpool create test mirror <disk10GB> <disk20GB>
invalid vdev specification
use '-f' to override the following errors:
mirror contains devices of different sizes

# zpool create test <disk> log mirror <disk10GB> <disk20GB>
# echo $?
0
# zpool status -x test
pool 'test' is healthy

Mixing disks and files in the same vdev:

# zpool create test <disk> <file>
invalid vdev specification
use '-f' to override the following errors:
mismatched replication level: both disk and file vdevs are present

# zpool create test mirror <disk> <file>
invalid vdev specification
use '-f' to override the following errors:
mismatched replication level: mirror contains both files and devices

# zpool create test <disk0> log <disk1> <file>
# echo $?
0

# zpool create test <disk0> log mirror <disk1> <file>
# echo $?
0

Mixing replication levels:

# zpool create test <disk0> mirror <disk1> <disk2>
invalid vdev specification
use '-f' to override the following errors:
mismatched replication level: both disk and mirror vdevs are present

# zpool create test <disk0> log <disk1> mirror <disk2> <disk3>
# echo $?
0

# zpool create test mirror <disk0> <disk1> mirror <disk2> <disk3> <disk4>
invalid vdev specification
use '-f' to override the following errors:
mismatched replication level: both 2-way and 3-way mirror vdevs are present

# zpool create test <disk0> log mirror <disk1> <disk2> mirror <disk3> <disk4> <disk5>
# echo $?
0

All of the above also apply to the 'zpool add' command.

2. Bogus offline behaviour for N-way (N > 2) mirrors and RAIDZ2.

For an N-way mirror vdev, one should be able to offline up to N-1 components, but that is not the case:

# zpool create test mirror <disk0> <disk1> <disk2>
# zpool offline test <disk0>
# zpool offline test <disk1>
cannot offline <disk1>: no valid replicas

For a RAIDZ2 vdev, one should be able to offline up to two components, but that is also not the case:

# zpool create test raidz2 <disk0> <disk1> <disk2> <disk3>
# zpool offline test <disk0>
# zpool offline test <disk1>
cannot offline <disk1>: no valid replicas

Quite surprisingly, it works fine for log mirror vdevs:

# zpool create test <disk0> log mirror <disk1> <disk2> <disk3>
# zpool offline test <disk1>
# zpool offline test <disk2>
# zpool offline test <disk3>
cannot offline <disk3>: no valid replicas

3. Resilver reported without a reason.

# zpool create test mirror disk0 disk1
# zpool offline test disk0
# zpool export test
# zpool import test
# zpool status test | grep scrub
 scrub: resilver completed after 0h0m with 0 errors on Sun May 4 15:57:47 2008

What is ZFS trying to resilver here? I verified that disk0 is not touched (which is the expected behaviour).

4. Inconsistent 'zpool status' output for log vdevs. (I'll show only the relevant parts of 'zpool status'.)

# zpool create test disk0 log mirror disk1 disk2
# zpool offline test disk1
# zpool status test
  pool: test
 state: ONLINE
config:

	NAME        STATE     READ WRITE CKSUM
	test        ONLINE       0     0     0
	  disk0     ONLINE       0     0     0
	logs        ONLINE       0     0     0
	  mirror    DEGRADED     0     0     0
	    disk1   OFFLINE      0     0     0
	    disk2   ONLINE       0     0     0

# zpool export test
# zpool import test
# zpool status test
  pool: test
 state: DEGRADED
config:

	NAME        STATE     READ WRITE CKSUM
	test        DEGRADED     0     0     0
	  disk0     ONLINE       0     0     0
	logs        DEGRADED     0     0     0
	  mirror    DEGRADED     0     0     0
	    disk1   OFFLINE      0     0     0
	    disk2   ONLINE       0     0     0

Note how various states changed after the export/import cycle.
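For reference, the redundancy semantics the offline tests check against can be written down as a tiny helper. This is just my reading of the documented fault tolerance (an N-way mirror survives N-1 missing components, raidz one, raidz2 two); the function name and structure are my own, not part of ZFS or my test suite:

```shell
#!/bin/sh
# max_offline: how many components of a vdev one should be able to
# offline while still keeping valid replicas, by vdev type and width.
# (Illustrative sketch; names are made up.)
max_offline() {
	type=$1    # mirror | raidz | raidz2
	width=$2   # number of components in the vdev
	case $type in
	mirror) echo $((width - 1)) ;;
	raidz)  echo 1 ;;
	raidz2) echo 2 ;;
	*)      echo 0 ;;
	esac
}

# A 3-way mirror should allow 2 offlines, yet issue 2 above shows
# 'zpool offline' refusing the second one with "no valid replicas".
max_offline mirror 3
max_offline raidz2 4
```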
That's all for now; hopefully nothing more to come :)

-- 
Pawel Jakub Dawidek                       http://www.wheel.pl
pjd at FreeBSD.org                        http://www.FreeBSD.org
FreeBSD committer                         Am I Evil? Yes, I Am!