Re: [zfs-discuss] Re: How to destroy a pool which you can't import because it is in faulted state
Robert Milkowski wrote:
> Hello James,
>
> Thursday, September 7, 2006, 1:44:48 PM, you wrote:
>
> JCM> Lieven De Geyndt wrote:
> >> I know this is not supported. But we are trying to build a safe
> >> configuration until ZFS is supported in Sun Cluster. The customer did
> >> order SunCluster, but needs a workaround until the release date, and I
> >> think it must be possible to set up.
>
> JCM> So build them a configuration which works and is supported today, and
> JCM> design it so the migration plan which you also provide them makes it
> JCM> reasonably pain-free to move to HA-ZFS when SC 3.2 is released.
>
> Yep. A few days ago I migrated two servers with 40TB of ZFS production
> data from a non-cluster config to SC with HA-ZFS with minimal downtime
> (one-node cluster, zpool import, add second node to the cluster, put the
> pools under SC). It basically worked, though with some problems. Right
> now it works perfectly :)

Excellent!

James
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Re[2]: [zfs-discuss] Re: How to destroy a pool which you can't import because it is in faulted state
Hello James,

Thursday, September 7, 2006, 1:44:48 PM, you wrote:

JCM> Lieven De Geyndt wrote:
>> I know this is not supported. But we are trying to build a safe
>> configuration until ZFS is supported in Sun Cluster. The customer did
>> order SunCluster, but needs a workaround until the release date, and I
>> think it must be possible to set up.

JCM> So build them a configuration which works and is supported today, and
JCM> design it so the migration plan which you also provide them makes it
JCM> reasonably pain-free to move to HA-ZFS when SC 3.2 is released.

Yep. A few days ago I migrated two servers with 40TB of ZFS production
data from a non-cluster config to SC with HA-ZFS with minimal downtime
(one-node cluster, zpool import, add second node to the cluster, put the
pools under SC). It basically worked, though with some problems. Right now
it works perfectly :)

--
Best regards,
Robert      mailto:[EMAIL PROTECTED]
            http://milek.blogspot.com
Re: [zfs-discuss] Re: How to destroy a pool which you can't import because it is in faulted state
Lieven De Geyndt wrote:
> I know this is not supported. But we are trying to build a safe
> configuration until ZFS is supported in Sun Cluster. The customer did
> order SunCluster, but needs a workaround until the release date, and I
> think it must be possible to set up.

So build them a configuration which works and is supported today, and
design it so the migration plan which you also provide them makes it
reasonably pain-free to move to HA-ZFS when SC 3.2 is released.

James
[zfs-discuss] Re: How to destroy a pool which you can't import because it is in faulted state
I know this is not supported. But we are trying to build a safe
configuration until ZFS is supported in Sun Cluster. The customer did
order SunCluster, but needs a workaround until the release date, and I
think it must be possible to set up.
Re: [zfs-discuss] Re: How to destroy a pool which you can't import because it is in faulted state
On September 7, 2006 6:55:48 PM +1000 "James C. McPherson"
<[EMAIL PROTECTED]> wrote:
> Doesn't this come back to the problem which is self-induced, namely that
> they are trying a "poor man's cluster"? If you want cluster
> functionality then pay for a proper solution. If you can't afford a
> proper solution then you will *always* get hurt when you come up against
> a problem of your own making. I saw this scenario *many* times while
> working in Sun's CPRE and PTS organisations. Save yourself the hassle
> and do things right from the start.

AIUI, there is no ZFS cluster option today. SC 3.2 (with HA-ZFS) is only
in beta, so it can't be done "right" from the start with ZFS. [I'm not
disagreeing with you, though.]

-frank
Re: [zfs-discuss] Re: How to destroy a pool which you can't import because it is in faulted state
Lieven De Geyndt wrote:
> So I can manage the file system mounts/automounts using the legacy
> option, but I can't manage the auto-import of the pools. Or I should
> delete the zpool.cache file during boot.

Doesn't this come back to the problem which is self-induced, namely that
they are trying a "poor man's cluster"? If you want cluster functionality
then pay for a proper solution. If you can't afford a proper solution then
you will *always* get hurt when you come up against a problem of your own
making. I saw this scenario *many* times while working in Sun's CPRE and
PTS organisations. Save yourself the hassle and do things right from the
start.

James C. McPherson
[zfs-discuss] Re: How to destroy a pool which you can't import because it is in faulted state
So I can manage the file system mounts/automounts using the legacy option,
but I can't manage the auto-import of the pools. Or I should delete the
zpool.cache file during boot.
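[A minimal sketch of the cache-file approach described above. It assumes
the standard location /etc/zfs/zpool.cache, where ZFS records which pools
to import automatically at boot; removing it is an unsupported hack, and
the script name and hook point are hypothetical.]

```shell
#!/bin/sh
# Hypothetical early-boot script, run before the ZFS mount service.
# With no cache file present, no pools are imported automatically,
# so each node must run an explicit "zpool import <pool>" afterwards.
rm -f /etc/zfs/zpool.cache
```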
Re: [zfs-discuss] Re: How to destroy a pool which you can't import because it is in faulted state
On September 6, 2006 7:19:32 AM -0700 Lieven De Geyndt
<[EMAIL PROTECTED]> wrote:
> sorry guys ... RTF did the job
>
> Legacy Mount Points

That just means filesystems in the pool won't get mounted, not that the
pool won't be imported.

-frank
[zfs-discuss] Re: How to destroy a pool which you can't import because it is in faulted state
sorry guys ... RTF did the job

Legacy Mount Points

You can manage ZFS file systems with legacy tools by setting the
mountpoint property to legacy. Legacy file systems must be managed through
the mount and umount commands and the /etc/vfstab file. ZFS does not
automatically mount legacy file systems on boot, and the ZFS mount and
umount commands do not operate on datasets of this type. The following
examples show how to set up and manage a ZFS dataset in legacy mode:

# zfs set mountpoint=legacy tank/home/eschrock
# mount -F zfs tank/home/eschrock /mnt

In particular, if you have set up separate ZFS /usr or /var file systems,
you must indicate that they are legacy file systems. In addition, you must
mount them by creating entries in the /etc/vfstab file. Otherwise, the
system/filesystem/local service enters maintenance mode when the system
boots.

To automatically mount a legacy file system on boot, you must add an entry
to the /etc/vfstab file. The following example shows what the entry in the
/etc/vfstab file might look like:

#device             device   mount  FS    fsck  mount    mount
#to mount           to fsck  point  type  pass  at boot  options
#
tank/home/eschrock  -        /mnt   zfs   -     yes      -

Note that the device to fsck and fsck pass entries are set to -. This is
because the fsck command is not applicable to ZFS file systems. For more
information regarding data integrity and the lack of need for fsck in ZFS
Re: [zfs-discuss] Re: How to destroy a pool which you can't import because it is in faulted state
Lieven De Geyndt wrote:
> zpool create -R did the job. Thanks for the tip. Is there a way to
> disable the auto-mount when you boot a system? The customer has some
> kind of poor man's cluster: two systems have access to an SE3510 with
> ZFS. System A was powered off as a test, and system B did an import of
> the pools. When system A rebooted, it tried to import its pools, so two
> systems were accessing the same pool. This probably caused the
> corruption in the pool. So how do I disable auto-mount of ZFS pools?

Oh heck, PMC 0.0.0alpha again :(

How about

# zfs set mountpoint=none fsname

James C. McPherson
[zfs-discuss] Re: How to destroy a pool which you can't import because it is in faulted state
zpool create -R did the job. Thanks for the tip. Is there a way to disable
the auto-mount when you boot a system? The customer has some kind of poor
man's cluster: two systems have access to an SE3510 with ZFS. System A was
powered off as a test, and system B did an import of the pools. When
system A rebooted, it tried to import its pools, so two systems were
accessing the same pool. This probably caused the corruption in the pool.
So how do I disable auto-mount of ZFS pools?
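[A sketch of the -R usage referred to above, with hypothetical pool and
device names. -R sets an alternate root, and a pool created or imported
this way is not recorded in /etc/zfs/zpool.cache, so it is not
auto-imported at the next boot.]

```shell
# Create the pool under an alternate root; it will not be cached
# for automatic import.
zpool create -R /a tank c1t0d0

# Likewise, import an existing pool under an alternate root without
# adding it to the cache file.
zpool import -R /a tank
```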
[zfs-discuss] Re: How to destroy a pool which you can't import because it is in faulted state
Hi. Just re-create it, or create a new pool with the disks from the old
one and use the -f flag.
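[A sketch of this workaround, with hypothetical pool and device names.
Re-creating over the faulted pool's disks destroys its data, which is the
point when the goal is to get rid of an unimportable pool.]

```shell
# Force-create a new pool on the disks that belonged to the faulted one.
# -f overrides the "disk is part of an existing pool" safety check,
# overwriting the old pool's labels in the process.
zpool create -f newtank c1t0d0 c1t1d0
```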