Constantine wrote:
ZFS doesn't do this.
I thought so too. ;)

Situation brief: I've got OpenSolaris 2009.06 installed on a RAID-5 array on a
controller with a 512 MB cache (as far as I can remember) and no battery backup
for the cache.

I hope the controller disabled the cache then.
Probably a good idea to run "zpool scrub rpool" to find out whether anything is broken. It will take some time; zpool status will show the progress.
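
For example (the exact status output varies by build, but the commands would be along these lines):

   zpool scrub rpool       # kick off the scrub
   zpool status rpool      # run periodically to watch progress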

On Friday a lightning bolt hit the power supply station of the colocation
company, and it turned out that their UPSes are not much more than decoration.
After the reboot, the filesystem and logs are back at their last snapshot
version.

Would also be useful to see output of: zfs list -t all -r zpool/filesystem
====================================
wi...@zeus:~/.zfs/snapshot# zfs list -t all -r rpool
NAME                                   USED  AVAIL  REFER  MOUNTPOINT
rpool                                  427G  1.37T  82.5K  /rpool
rpool/ROOT                             366G  1.37T    19K  legacy
rpool/ROOT/opensolaris                20.6M  1.37T  3.21G  /
rpool/ROOT/xvm                        8.10M  1.37T  8.24G  /
rpool/ROOT/xvm-1                       690K  1.37T  8.24G  /
rpool/ROOT/xvm-2                      35.1G  1.37T   232G  /
rpool/ROOT/xvm-3                       851K  1.37T   221G  /
rpool/ROOT/xvm-4                       331G  1.37T   221G  /
rpool/ROOT/xv...@install               144M      -  2.82G  -
rpool/ROOT/xv...@xvm                  38.3M      -  3.21G  -
rpool/ROOT/xv...@2009-07-27-01:09:14    56K      -  8.24G  -
rpool/ROOT/xv...@2009-07-27-01:09:57    56K      -  8.24G  -
rpool/ROOT/xv...@2009-09-13-23:34:54  2.30M      -   206G  -
rpool/ROOT/xv...@2009-09-13-23:35:17  1.14M      -   206G  -
rpool/ROOT/xv...@2009-09-13-23:42:12  5.72M      -   206G  -
rpool/ROOT/xv...@2009-09-13-23:42:45  5.69M      -   206G  -
rpool/ROOT/xv...@2009-09-13-23:46:25   573K      -   206G  -
rpool/ROOT/xv...@2009-09-13-23:46:34   525K      -   206G  -
rpool/ROOT/xv...@2009-09-13-23:48:11  6.51M      -   206G  -
rpool/ROOT/xv...@2010-04-22-03:50:25  24.6M      -   221G  -
rpool/ROOT/xv...@2010-04-22-03:51:28  24.6M      -   221G  -

Actually, there's 24.6 MB worth of changes to the filesystem since the last snapshot, which is coincidentally about the same as accumulated over the preceding minute, between the last two snapshots. I can't tell whether (or how much of) that happened before, versus after, the reboot, though (one way to start separating the two is sketched after the listing).

rpool/dump                            16.0G  1.37T  16.0G  -
rpool/export                          28.6G  1.37T    21K  /export
rpool/export/home                     28.6G  1.37T    21K  /export/home
rpool/export/home/wiron               28.6G  1.37T  28.6G  /export/home/wiron
rpool/swap                            16.0G  1.38T   101M  -
=====================================================
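
If you want to separate post-reboot activity from whatever was written just before the outage, one option is to take a marker snapshot now and watch how much space accrues against it. The dataset and snapshot names below are only placeholders, not taken from your system:

   zfs snapshot rpool/ROOT/<current-BE>@post-outage
   zfs list -t snapshot -r rpool/ROOT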

Normally in a power-out scenario, you will only lose asynchronous writes since the last transaction group commit, which will be up to 30 seconds worth (although normally much less), and you lose no synchronous writes.
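
If you want to check what that interval actually is on your build, and assuming the build still exposes the zfs_txg_timeout tunable (value in seconds), you can read it from the live kernel:

   echo zfs_txg_timeout/D | mdb -k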

However, I've no idea what your potentially flaky RAID array will have done. If it was using its write cache and presenting it as non-volatile (acknowledging writes that were still only in cache), then it could easily have corrupted the ZFS filesystem by getting writes out of sequence with transaction commits, and that can render the filesystem unmountable, because the back-end storage has lied to ZFS about which writes were committed. Even though you were lucky and it still mounts, it might still be corrupted, hence the suggestion to run zpool scrub (and, even more importantly, to get the RAID array fixed). Since I presume ZFS doesn't have redundant storage for this zpool, any corrupted data can't be repaired by ZFS, although it will tell you about it. Running ZFS without redundancy on flaky storage is not a good place to be.
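
Once the scrub completes, zpool status will show the vdev layout (so you can confirm whether there is any ZFS-level redundancy) along with per-device checksum error counters and, with -v, a list of any files with permanent errors:

   zpool status -v rpool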

--
Andrew Gabriel
