[zfs-discuss] Re: How to destroy a pool which you can't import because it is in a faulted state

2006-09-07 Thread Lieven De Geyndt
I know this is not supported, but we are trying to build a safe configuration until 
ZFS is supported in Sun Cluster.
The customer has ordered Sun Cluster, but needs a workaround until the release 
date.
And I think it must be possible to set up.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: How to destroy a pool which you can't import because it is in a faulted state

2006-09-07 Thread Lieven De Geyndt
So I can manage the file system mounts/automounts using the legacy option, 
but I can't manage the auto-import of the pools. Or I would have to delete the 
zpool.cache file during boot.
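A rough sketch of that workaround, assuming the standard Solaris cache-file location (verify the path on your release): ZFS auto-imports at boot exactly the pools listed in /etc/zfs/zpool.cache, so removing the file before ZFS starts means no pool is imported automatically.

```shell
# Remove the pool cache so no pools are auto-imported on the next boot.
# /etc/zfs/zpool.cache is the standard location on Solaris; verify on your release.
rm /etc/zfs/zpool.cache

# After boot, pools must then be imported explicitly:
zpool import            # scan attached devices and list importable pools
zpool import tank       # import a specific pool by name
```

Note this has to be repeated before every boot (or scripted), since a plain `zpool import` recreates the cache entry.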
 
 


[zfs-discuss] Re: How to destroy a pool which you can't import because it is in a faulted state

2006-09-06 Thread Lieven De Geyndt
sorry guys ... RTFM did the job 

[b]Legacy Mount Points[/b]
You can manage ZFS file systems with legacy tools by setting the mountpoint 
property to legacy.
Legacy file systems must be managed through the mount and umount commands and the
/etc/vfstab file. ZFS does not automatically mount legacy file systems on boot, 
and the ZFS mount and umount commands do not operate on datasets of this type. 
The following examples show how to
set up and manage a ZFS dataset in legacy mode:
# zfs set mountpoint=legacy tank/home/eschrock
# mount -F zfs tank/home/eschrock /mnt
In particular, if you have set up separate ZFS /usr or /var file systems, you 
must indicate that they are legacy file systems. In addition, you must mount them 
by creating entries in the /etc/vfstab file.
Otherwise, the system/filesystem/local service enters maintenance mode when the 
system boots.
To automatically mount a legacy file system on boot, you must add an entry to 
the /etc/vfstab file.
The following example shows what the entry in the /etc/vfstab file might look 
like:
#device             device      mount   FS      fsck    mount   mount
#to mount           to fsck     point   type    pass    at boot options
#
tank/home/eschrock  -           /mnt    zfs     -       yes     -

Note that the device to fsck and fsck pass entries are set to -, because the 
fsck command is not applicable to ZFS file systems. For more information 
regarding data integrity and the lack of a need for fsck in ZFS, see the ZFS 
Administration Guide.
 
 


[zfs-discuss] Re: How to destroy a pool which you can't import because it is in a faulted state

2006-09-06 Thread Lieven De Geyndt
zpool create -R did its job. Thanks for the tip.

Is there a way to disable the auto-mount when you boot a system?
The customer has some kind of poor man's cluster: 
2 systems have access to an SE3510 with ZFS. 
System A was powered off as a test, and system B imported the pools.
When system A rebooted, it tried to import its pools, so 2 systems 
were accessing the same pool. This probably caused the corruption in his pool.
So how do you disable automount of ZFS pools?
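One approach, assuming the behavior documented for Solaris-era ZFS: importing a pool with an alternate root via -R does not record the pool in /etc/zfs/zpool.cache, so the host will not try to re-import it automatically after a reboot. That avoids the split-brain scenario above, at the cost of having to import manually after every boot.

```shell
# Import with an alternate root of / ; the pool mounts in its normal place,
# but the import is treated as temporary and is not written to
# /etc/zfs/zpool.cache, so it will not be repeated at the next boot.
zpool import -R / tank
```

This is a convention, not real fencing: nothing stops the other node from forcing an import of the same pool, so operational discipline (or a proper cluster framework) is still required.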
 
 


[zfs-discuss] How to destroy a pool which you can't import because it is in a faulted state

2006-09-06 Thread Lieven De Geyndt
When a pool is in a faulted state, you can't import it; even -f fails.
When you then decide to recreate the pool, you cannot execute zpool destroy, 
because the pool is not imported, and -f does not work there either.
Any idea how to get out of this situation?
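A hedged sketch of the usual escape hatch, assuming the underlying devices are still writable: since zpool destroy only works on an imported pool, you instead reuse or overwrite the devices, which discards the old pool's on-disk labels. Both options below are destructive, and the device names are illustrative only.

```shell
# Option 1: create a new pool over the same disks; -f overrides the
# "device is part of a pool" check and overwrites the old labels.
zpool create -f newtank c1t0d0 c1t1d0

# Option 2: clear the labels by hand before reusing the disks.
# ZFS stores two labels near the start and two near the end of each
# device; zeroing the first few MB (shown here) kills the front labels,
# and the end labels must be overwritten similarly for a full wipe.
dd if=/dev/zero of=/dev/rdsk/c1t0d0s0 bs=1024k count=4
```

After either step, `zpool import` should no longer list the old faulted pool on those devices.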
 
 