Re: [zfs-discuss] Re: Re: How to destroy a pool which you can't import

2006-09-07 Thread Eric Schrock
On Thu, Sep 07, 2006 at 06:31:30PM -0700, Darren Dunham wrote:
> 
> It certainly changes some semantics...
> 
> In a UFS/VxVM world, I still have filesystems referenced in /etc/vfstab.
> I might expect (although I have seen counterexamples) that if my VxVM
> group doesn't autoimport, then obviously my filesystems don't mount, and
> that will halt startup until I deal with the problem.  This is often a
> good thing.
> 
> With ZFS and non-legacy mounts, I don't really have a statement that the
> ZFS filesystem /path/to/critical/resource must be mounted at boot time
> other than the configuration of the pool.  I guess I need to make some
> more explicit dependencies for services if I want some of them to
> notice.  (Unfortunately, creating/removing dependencies takes a bit more
> work than maintaining a vfstab today).

Currently, a faulted pool will not prevent the system from coming up
(that is, the filesystem/* services will still start).  Some folks are
already thinking about how failed /etc/vfstab mounts should affect boot
(not everyone wants a failed mount to halt it).  Similar thought should
probably be given to what a faulted pool and/or dataset should mean.
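
For reference, spotting this after boot is already straightforward with
the existing CLI (output quoted from memory, so double-check it):

    # zpool status -x                 # reports only pools with problems
    all pools are healthy
    # zpool list -H -o name,health    # script-friendly: one pool per line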

- Eric

--
Eric Schrock, Solaris Kernel Development   http://blogs.sun.com/eschrock


Re: [zfs-discuss] Re: Re: How to destroy a pool which you can't import

2006-09-07 Thread Eric Schrock
On Thu, Sep 07, 2006 at 06:07:40PM -0700, Anton B. Rang wrote:
> 
> And why would we want a pool imported on another host, or not marked
> as belonging to this host, to show up as faulted? That seems an odd
> use of the word.  Unavailable, perhaps, but not faulted.
>  

That's FMA terminology, and besides wanting to stay within the same
framework, I believe it is correct.  If you have booted a machine that
claims to be the owner of a pool, only to find that it has since been
actively opened on another host, this is administrator misconfiguration.
As such, the pool is faulted, and an FMA message explaining what has
happened, along with a link to a more detailed knowledge article
explaining how to fix it, will be generated.  'Faulted' is specific FMA
terminology and carries many desired semantics with it (such as showing
up in the FMA resource cache).

Silently ignoring failure in this case is not an option.  If you want
this silent behavior, you should be using clustering software to provide
a higher-level abstraction of ownership than 'zpool.cache'.
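
Concretely, assuming a pool named 'tank' (hypothetical), the ownership
check you would be stepping around is:

    # zpool import           # scan attached devices for importable pools
    # zpool import tank      # refuses if the pool appears active elsewhere
    # zpool import -f tank   # -f overrides the check; this is the dangerous part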

- Eric

--
Eric Schrock, Solaris Kernel Development   http://blogs.sun.com/eschrock


Re: [zfs-discuss] Re: Re: How to destroy a pool which you can't import

2006-09-07 Thread Darren Dunham
> And why would we want a pool imported on another host, or not marked
> as belonging to this host, to show up as faulted? That seems an odd
> use of the word.  Unavailable, perhaps, but not faulted.

It certainly changes some semantics...

In a UFS/VxVM world, I still have filesystems referenced in /etc/vfstab.
I might expect (although I have seen counterexamples) that if my VxVM
group doesn't autoimport, then obviously my filesystems don't mount, and
that will halt startup until I deal with the problem.  This is often a
good thing.

With ZFS and non-legacy mounts, I don't really have a statement that the
ZFS filesystem /path/to/critical/resource must be mounted at boot time
other than the configuration of the pool.  I guess I need to make some
more explicit dependencies for services if I want some of them to
notice.  (Unfortunately, creating/removing dependencies takes a bit more
work than maintaining a vfstab today).
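
To make that concrete, here's an untested sketch using an SMF path
dependency, assuming a hypothetical service named
svc:/application/critical-app:

    # svccfg -s svc:/application/critical-app
    svc:/application/critical-app> addpg critical_fs dependency
    svc:/application/critical-app> setprop critical_fs/grouping = astring: require_all
    svc:/application/critical-app> setprop critical_fs/restart_on = astring: none
    svc:/application/critical-app> setprop critical_fs/type = astring: path
    svc:/application/critical-app> setprop critical_fs/entities = fmri: file://localhost/path/to/critical/resource
    svc:/application/critical-app> end
    # svcadm refresh svc:/application/critical-app

Six svccfg lines versus one vfstab line, which is rather my point.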

-- 
Darren Dunham   [EMAIL PROTECTED]
Senior Technical Consultant TAOShttp://www.taos.com/
Got some Dr Pepper?   San Francisco, CA bay area
 < This line left intentionally blank to confuse you. >


[zfs-discuss] Re: Re: How to destroy a pool which you can't import

2006-09-07 Thread Anton B. Rang
A determined administrator can always get around any checks and cause problems. 
We should do our very best to prevent data loss, though! This case is 
particularly bad since simply booting a machine can permanently damage the pool.

And why would we want a pool imported on another host, or not marked as 
belonging to this host, to show up as faulted? That seems an odd use of the 
word.  Unavailable, perhaps, but not faulted.
 
 


[zfs-discuss] Re: Re: How to destroy a pool which you can't import

2006-09-06 Thread Robert Milkowski
This could still corrupt the pool.
The customer would probably have to write their own tool that imports a
pool using libzfs without creating zpool.cache.

Alternatively, remove zpool.cache right after the pool is imported - I'm
not sure, but that should work.
 
 