On Tue, Dec 16, 2008 at 1:53 PM, Miles Nordin <car...@ivy.net> wrote:

> >>>>> "np" == Niall Power <niall.po...@sun.com> writes:
>
>    np> So I'd like to ask if this is an appropriate use of ZFS mirror
>    np> functionality?
>
> I like it a lot.
>
> I tried to set up something like that ad-hoc using a firewire disk on
> an Ultra10 at first, and then, just as you thought, tried using one
> firewire disk and one iSCSI disk to make the mirror.  It was before
> ZFS boot, so I mirrored /usr and /var only with ZFS, and / with SVM
> (internal 2GB SCSI to firewire).  I was trying to get around the 128GB
> PATA limitation in the Ultra 10.  It was a lot of silliness, but it
> was still useful even though I ran into a lot of bugs that have been
> fixed since I was trying it.  The stuff you successfully tested
> explores a lot of the problem areas I had---hangs on disconnecting,
> incomplete resilvering, both sound fixed---but iSCSI still does not
> work well because the system will ``patiently wait'' forever during
> boot for an absent iSCSI target.  On SPARC neither firewire nor iscsi
> was bootable back then, so you're in a much better spot there too than
> I was with only a single bootable SVM component and a lot of painful
> manual rescue work to do if that failed.
>
> From reading the list you might be able to do something similar with
> the StorageTek AVS/ii/geo-cluster stuff, but I haven't tried it and
> remember some problem with running it on localhost---I think you need
> two machines, just because of UI limitations.  It might resilver
> faster than ZFS though, and it's always a Plan B if you run into a
> show-stopper.  Also (if it worked at all) it solves the
> slower-performance-while-connected problem.
>
> In the long run some USB stick problems may surface because the wear
> leveling is done in 16MB sections, and you could blow your stick if
> you have a 16MB region which is ``hot''.  I wonder if parts of a zpool
> are hotter than others?  With AVS the dirty bitmap might be hot.
>
> I guess you are not really imagining sticks though, just testing with
> them.  You're imagining something more like the Time Capsule, where
> the external drive is bigger than the internal one and it'll mostly be
> used with laptops.  At home you keep a large, heavy disk which holds a
> mirror of your laptop ZFS root on one slice, plus an unredundant
> scratch pool made of the extra free space.
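>
> Concretely, I'm picturing something like this (pool and device names
> invented for the example):
>
>    zpool attach rpool c0t0d0s0 c2t0d0s0   # home disk slice joins the laptop's root mirror
>    zpool create scratch c2t0d0s1          # leftover slice becomes the unredundant scratch pool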
>
> Finally, I still don't understand the ZFS quorum rules.  What happens
> if you:
>
>  (1) boot the internal disk, change some stuff, shut down.
>
>  (2) Then boot the USB-stick/big-home-disk, change some stuff, shut down.
>
>  (3) Then boot with both disks.
>
> Corruption or a successful scrub?  Which changes survive?  Because
> people WILL do that.  Some will not even remember that they did it,
> and some will even lie and deny it.
>
>

I did something similar, although I can't say I did extensive testing.  When
verifying that both drives were working properly, I simply pulled one, booted,
checked around to make sure the system was fine, then halted.  Pulled that
drive and put the other one in, made sure everything came up fine, halted.
Finally booted with both in and did a scrub.  It did scrub, and it did so
correctly.  I guess I didn't actually verify which one's data was kept.  I
know things like the messages file had to be different between the two
boots... so that is an interesting question.
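
If I re-run the test I'll drop a marker on each side so it's obvious whose
data survives; roughly (root pool name and marker path are just examples):

   # booted from the internal disk alone:
   echo internal > /marker ; sync ; init 5

   # later, booted from the external disk alone:
   echo external > /marker ; sync ; init 5

   # finally, booted with both present:
   zpool status -v rpool
   zpool scrub rpool
   cat /marker

Whatever cat prints is the side whose changes won, and zpool status should
show whether the other side simply got resilvered or whether anything got
flagged as corrupt.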

As for the hanging (and forgive me if he said this, as I've not read the OP's
post), couldn't you simply do a detach before removing the disk, and do a
re-attach every time you wanted to re-mirror?  Then there'd be no hanging over
a missing device at boot, since the pool would believe it has every disk it
needs.  When you re-attach it, at least when I've tested this, it appears to
acknowledge the data already on the disk and only resilver what changed since
it was last attached.  Maybe what I saw was a fluke though.
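
For reference, the sequence I tested looked roughly like this (pool and
device names are just placeholders):

   zpool detach rpool c2t0d0s0            # before unplugging the external half
   # ... run on the internal disk alone for a while ...
   zpool attach rpool c0t0d0s0 c2t0d0s0   # re-mirror once it's plugged back in
   zpool status rpool                     # watch the resilver

The attach kicks off the resilver automatically, so there's nothing more to
do than wait for zpool status to show it complete.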

---Tim
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
