On Mar 15, 2010, at 10:55 AM, Gabriele Bulfon wrote:

> - In this case, the storage appliance is a legacy system based on Linux, so 
> RAID/mirroring is managed at the storage side in its own way. Being an iSCSI 
> target, this volume was mounted as a single iSCSI disk from the Solaris host 
> and prepared as a ZFS pool consisting of this single iSCSI target. The ZFS best 
> practices tell me that, to be safe in case of corruption, pools should always 
> be mirrors or raidz on 2 or more disks. In this case I considered everything 
> safe, because the mirroring and RAID were managed by the storage machine. But 
> from the Solaris host's point of view, the pool was just one disk! And maybe 
> this has been the point of failure. What is the correct way to go in this case?

I'd guess this could be because the iSCSI target wasn't honoring ZFS cache flush 
requests.
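
If you want to poke at the host side, here's a rough sketch of what I'd look at 
on Solaris (the tunable and syntax are from the usual ZFS tuning notes; whether 
the target actually honors the resulting SYNCHRONIZE CACHE commands is something 
only the storage vendor can tell you):

    # print the zfs_nocacheflush tunable; 0 means the host is issuing cache flushes
    echo zfs_nocacheflush/D | mdb -k

    # for contrast, this /etc/system line *disables* flushes on the host, which is
    # roughly how a target that silently ignores flushes behaves
    # set zfs:zfs_nocacheflush = 1

If the flushes are issued but ignored, ZFS believes its writes are on stable 
storage while they are still sitting in the appliance's write cache, and a crash 
or power loss at the wrong moment can leave the pool inconsistent.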

> - Finally, looking forward to running new storage appliances using OpenSolaris 
> and its ZFS + iscsitadm and/or COMSTAR, I feel a bit confused by the 
> possibility of having a double-ZFS situation: in this case, I would have the 
> storage ZFS filesystem divided into ZFS volumes, accessed via iSCSI by a 
> possible Solaris host that creates its own ZFS pool on top of them (...is it 
> too redundant??), and again I would fall into the same case as before (a host 
> ZFS pool connected to only one iSCSI resource).

My experience with this is at a significantly lower end, but I have had iSCSI 
shares from a ZFS NAS come up as corrupt on the client.  It's fixable if you 
have snapshots.

I've been using iSCSI to provide Time Machine targets to OS X boxes.  We had a 
client crash during writing, and upon reboot it reported the iSCSI volume as 
corrupt.  You can put whatever file system you like on the iSCSI target, 
obviously.  I believe the current OpenSolaris iSCSI implementation uses 
synchronous writes, so hopefully what happened to you wouldn't happen in this 
case.
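
Since you mentioned iscsitadm and COMSTAR: here's a minimal sketch of how I'd 
export a zvol as an iSCSI LUN (the pool and volume names are made up, and the 
COMSTAR administration guide is the authoritative reference):

    # create a ZFS volume to back the LUN
    zfs create -V 100G tank/tmvol

    # legacy iscsitgt way: one property and you're done
    zfs set shareiscsi=on tank/tmvol
    iscsitadm list target

    # COMSTAR way: register the zvol as a logical unit and expose it
    svcadm enable stmf
    svcadm enable -r svc:/network/iscsi/target:default
    sbdadm create-lu /dev/zvol/rdsk/tank/tmvol
    stmfadm add-view <GUID printed by sbdadm>
    itadm create-target

Either way the LUN ends up backed by a zvol, so you get ZFS checksums and 
snapshots on the storage side no matter what file system the client formats it 
with.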

In my case I was using HFS+ (the OS X client has to), and I couldn't repair the 
volume.  However, with a snapshot I could roll it back.  If you plan ahead, this 
should save you some restoration work (you'll need to be able to roll back all 
the files that have to be consistent together).
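
The rollback itself is just ordinary snapshot handling on the zvol backing the 
LUN (names made up again; you'll want the initiator logged out, or the target 
offlined, before rolling back so the client doesn't have the block device change 
underneath it):

    # take a snapshot of the zvol after each good backup run
    zfs snapshot tank/tmvol@2010-03-14

    # see what's available to roll back to
    zfs list -t snapshot -r tank/tmvol

    # roll back; -r also destroys any snapshots newer than the one named
    zfs rollback -r tank/tmvol@2010-03-14

Because the rollback happens at the zvol level, everything on the LUN goes back 
together, which is what takes care of the "all the files have to be consistent" 
part, though only back to the last snapshot you took.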

Good luck,
Ware
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
