Re: [zfs-discuss] ZFS booting with Solaris (2007-08)

2007-10-09 Thread Robert Milkowski
Hello Richard,

Friday, October 5, 2007, 6:41:10 PM, you wrote:

RE> Robert Milkowski wrote:
>> Hello Richard,
>>
>> Friday, September 28, 2007, 7:45:47 PM, you wrote:
>>
>> RE> Kris Kasner wrote:
>>>>> 2. Back to Solaris Volume Manager (SVM), I guess. It's too bad too,
>>>>> because I don't like it with 2 SATA disks either. There aren't enough
>>>>> drives to put the State Database Replicas on, so that if either drive
>>>>> failed, the system would reboot unattended. Unless there is a trick?
>>>>
>>>> There is a trick for this, not sure how long it's been around.
>>>> Add to /etc/system:
>>>> * Allow the system to boot if one of two root disks is missing
>>>> set md:mirrored_root_flag=1
>>
>> RE> Before you do this, please read the fine manual:
>> RE> 
>> http://docs.sun.com/app/docs/doc/819-2724/chapter2-161?l=en&a=view&q=mirrored_root_flag
>>
>> The description on docs.sun.com is somewhat misleading.
>>
>> http://cvs.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/uts/common/io/lvm/md/md_mddb.c#5659
>>5659 if (mirrored_root_flag == 1 && setno == 0 &&
>>5660 svm_bootpath[0] != 0) {
>>5661 md_clr_setstatus(setno, MD_SET_STALE);
>>
>> Looks like it has to be disk set 0, the boot path has to reside on an SVM
>> device, and mirrored_root_flag has to be set to 1.
>>
>> So if you have more than two disks in a system, just put the extra ones in
>> a separate disk set.
>>
>>
>>
>>   
RE> If we have more than 2 disks, then we have space for a 3rd metadb copy.
RE>  -- richard

Well, it depends: if it's an external JBOD, I prefer to put all disks from
that JBOD into a separate disk set - that way it's easier to move the
JBOD to another host or to re-install this host.
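
For what it's worth, the commands for that look roughly like the following
(a sketch only - the set name and drive names are made up for illustration):

```shell
# Create a named disk set owned by this host and move the JBOD disks into it.
metaset -s jbodset -a -h thishost          # create the set, register this host
metaset -s jbodset -a c2t0d0 c2t1d0 c2t2d0 # add the JBOD drives to the set

# When moving the JBOD (or re-installing the host), release the set here
# and take it on the other host:
metaset -s jbodset -r                      # release ownership
metaset -s jbodset -t                      # (run on the other host) take it
```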

-- 
Best regards,
 Robert Milkowski  mailto:[EMAIL PROTECTED]
   http://milek.blogspot.com

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS booting with Solaris (2007-08)

2007-10-06 Thread Dick Davies
On 30/09/2007, William Papolis <[EMAIL PROTECTED]> wrote:
> OK,
>
> I guess using this ...
>
>  set md:mirrored_root_flag=1
>
> for Solaris Volume Manager (SVM) is not supported and could cause problems.
>
> I guess it's back to my first idea ...
>
> With 2 disks, set up three SDRs (State Database Replicas):
>Drive 0 = 1 SDR  -> If this drive fails, auto-magically boot Drive 1
>Drive 1 = 2 SDRs -> If this drive fails, sysadmin intervention required
>
> Well, that's OK; at least 50% of the time the system won't KACK.

What you gain on the swings, you lose on the roundabouts.
But if you lose drive 1 while the system is running, it will now panic
(whereas with 50% of quorum, it would continue to run).

-- 
Rasputin :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/


Re: [zfs-discuss] ZFS booting with Solaris (2007-08)

2007-10-05 Thread Richard Elling
Robert Milkowski wrote:
> Hello Richard,
>
> Friday, September 28, 2007, 7:45:47 PM, you wrote:
>
> RE> Kris Kasner wrote:
>   
>>>> 2. Back to Solaris Volume Manager (SVM), I guess. It's too bad too,
>>>> because I don't like it with 2 SATA disks either. There aren't enough
>>>> drives to put the State Database Replicas on, so that if either drive
>>>> failed, the system would reboot unattended. Unless there is a trick?
>>>
>>> There is a trick for this, not sure how long it's been around.
>>> Add to /etc/system:
>>> * Allow the system to boot if one of two root disks is missing
>>> set md:mirrored_root_flag=1
>
> RE> Before you do this, please read the fine manual:
> RE> 
> http://docs.sun.com/app/docs/doc/819-2724/chapter2-161?l=en&a=view&q=mirrored_root_flag
>
> The description on docs.sun.com is somewhat misleading.
>
> http://cvs.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/uts/common/io/lvm/md/md_mddb.c#5659
>5659 if (mirrored_root_flag == 1 && setno == 0 &&
>5660 svm_bootpath[0] != 0) {
>5661 md_clr_setstatus(setno, MD_SET_STALE);
>
> Looks like it has to be disk set 0, the boot path has to reside on an SVM
> device, and mirrored_root_flag has to be set to 1.
>
> So if you have more than two disks in a system, just put the extra ones in
> a separate disk set.
>
>
>
>   
If we have more than 2 disks, then we have space for a 3rd metadb copy.
 -- richard



Re: [zfs-discuss] ZFS booting with Solaris (2007-08)

2007-10-05 Thread Robert Milkowski
Hello Richard,

Friday, September 28, 2007, 7:45:47 PM, you wrote:

RE> Kris Kasner wrote:
>>> 2. Back to Solaris Volume Manager (SVM), I guess. It's too bad too,
>>> because I don't like it with 2 SATA disks either. There aren't enough
>>> drives to put the State Database Replicas on, so that if either drive
>>> failed, the system would reboot unattended. Unless there is a trick?
>> 
>> There is a trick for this, not sure how long it's been around.
>> Add to /etc/system:
>> * Allow the system to boot if one of two root disks is missing
>> set md:mirrored_root_flag=1

RE> Before you do this, please read the fine manual:
RE> 
http://docs.sun.com/app/docs/doc/819-2724/chapter2-161?l=en&a=view&q=mirrored_root_flag

The description on docs.sun.com is somewhat misleading.

http://cvs.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/uts/common/io/lvm/md/md_mddb.c#5659
   5659 if (mirrored_root_flag == 1 && setno == 0 &&
   5660 svm_bootpath[0] != 0) {
   5661 md_clr_setstatus(setno, MD_SET_STALE);

Looks like it has to be disk set 0, the boot path has to reside on an SVM
device, and mirrored_root_flag has to be set to 1.

So if you have more than two disks in a system, just put the extra ones in a
separate disk set.
   

   
-- 
Best regards,
 Robert Milkowski  mailto:[EMAIL PROTECTED]
   http://milek.blogspot.com



Re: [zfs-discuss] ZFS booting with Solaris (2007-08)

2007-09-29 Thread William Papolis
OK,

I guess using this ...

 set md:mirrored_root_flag=1

for Solaris Volume Manager (SVM) is not supported and could cause problems.

I guess it's back to my first idea ...

With 2 disks, set up three SDRs (State Database Replicas):
   Drive 0 = 1 SDR  -> If this drive fails, auto-magically boot Drive 1
   Drive 1 = 2 SDRs -> If this drive fails, sysadmin intervention required

Well, that's OK; at least 50% of the time the system won't KACK.
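
For reference, that replica layout would be created with something like this
(a sketch only - the s7 metadb slices and disk names are hypothetical):

```shell
# Three state database replicas spread across two disks.
metadb -a -f c0t0d0s7    # drive 0: one replica (-f forces the initial create)
metadb -a -c 2 c0t1d0s7  # drive 1: two replicas
metadb                   # list the replicas to verify the layout
```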

Thanks for the help. I am pleasantly surprised by the level of Sun staff
involvement in helping things along. Much appreciated, Richard and other Sun staff!

Bill
 
 
This message posted from opensolaris.org


Re: [zfs-discuss] ZFS booting with Solaris (2007-08)

2007-09-28 Thread Richard Elling
Kris Kasner wrote:
>> 2. Back to Solaris Volume Manager (SVM), I guess. It's too bad too,
>> because I don't like it with 2 SATA disks either. There aren't enough
>> drives to put the State Database Replicas on, so that if either drive
>> failed, the system would reboot unattended. Unless there is a trick?
> 
> There is a trick for this, not sure how long it's been around.
> Add to /etc/system:
> * Allow the system to boot if one of two root disks is missing
> set md:mirrored_root_flag=1

Before you do this, please read the fine manual:
http://docs.sun.com/app/docs/doc/819-2724/chapter2-161?l=en&a=view&q=mirrored_root_flag

This can cause corruption and is "not supported."
  -- richard


Re: [zfs-discuss] ZFS booting with Solaris (2007-08)

2007-09-27 Thread William Papolis
Sweet!

Thank you! :)

Bill
 
 
This message posted from opensolaris.org


Re: [zfs-discuss] ZFS booting with Solaris (2007-08)

2007-09-27 Thread Kris Kasner
> 2. Back to Solaris Volume Manager (SVM), I guess. It's too bad too,
> because I don't like it with 2 SATA disks either. There aren't enough
> drives to put the State Database Replicas on, so that if either drive
> failed, the system would reboot unattended. Unless there is a trick?

There is a trick for this, not sure how long it's been around.
Add to /etc/system:
* Allow the system to boot if one of two root disks is missing
set md:mirrored_root_flag=1
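
If you want to confirm the tunable actually took effect after a reboot,
something like this should work (a sketch; assumes the md module is loaded
so the symbol resolves):

```shell
# Print the current value of the tunable from the running kernel.
# Expect 1 if the /etc/system line was picked up at boot.
echo "mirrored_root_flag/D" | mdb -k
```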

Good luck.

--Kris
-- 

Thomas Kris Kasner
Qualcomm Inc.
5775 Morehouse Drive
San Diego, CA 92121


Re: [zfs-discuss] ZFS booting with Solaris (2007-08)

2007-09-27 Thread William Papolis
OK, thanks for the tip.

I spent a day trying it out, and this is what I learned ...

1. Solaris 10 (2007-08) doesn't have the "ZFS Boot Bits" installed.
2. Only OpenSolaris build snv_62 or later has the "ZFS Boot Bits".
3. Even if I got OpenSolaris working with ZFS booting the system, with a 2-disk
SATA array I would still have trouble: if one drive fails, ZFS won't boot.
4. Further, there seem to be some issues with the SATA framework and ZFS.
Currently, it appears, it's best to use SAS or SCSI.

It's too bad; I had high hopes that I could use ZFS to boot with the latest
Solaris 10 build. With the "live updating with ZFS running" problem solved, I
was ready! On the other hand, using 2 file systems on 2 drives is more
complicated than I wanted to get.

A couple of questions ...
1. It appears that OpenSolaris has no way to get updates from Sun (meaning
Patch Manager doesn't work with it from the CLI - #/usr/sbin/smpatch get -).
So ... how do people "patch" OpenSolaris?
2. Back to Solaris Volume Manager (SVM), I guess. It's too bad too, because I
don't like it with 2 SATA disks either. There aren't enough drives to put the
State Database Replicas on, so that if either drive failed, the system would
reboot unattended. Unless there is a trick?

Thanks for the help,

Bill
 
 
This message posted from opensolaris.org


Re: [zfs-discuss] ZFS booting with Solaris (2007-08)

2007-09-27 Thread Lori Alt
William Papolis wrote:
> Hello there,
>
> I know this is an OpenSolaris forum, and likely if I used OpenSolaris I could 
> make this script work ... 
>
>  http://www.opensolaris.org/os/community/zfs/boot/zfsboot-manual/
>
> ..but I was trying to make it work with Solaris 10 (2007-08) and I got as far 
> as Step #5 before I encountered a bit of a problem.
>
> More specifically this line ... # zpool set bootfs=rootpool/rootfs rootpool
>
> As far as I can see, Solaris (2007-08) does not include the "ZFS Boot bits",
> which means it doesn't support the "bootfs" pool property.
>
> Now at this point I am a little stuck, because frankly I don't have a 100% 
> handle on exactly what I am doing, but I am thinking I can use this ...
>
> http://www.opensolaris.org/os/community/zfs/boot/zfsboot-manual/mntroot-transition/
>
> ... to boot ZFS the old "mountroot" way?
>
> Frankly I am guessing, and a better explanation would be much appreciated.
>
> Again, I know I am using Solaris 10 and not OpenSolaris. I was thinking 
> OpenSolaris wouldn't be good for a production system just yet, but I still 
> wanted to use ZFS if it's stable. Also, I figured if I posted here I would 
> get the best, quick response! Any help or explanation is much appreciated.
>
>   
You can certainly use ZFS on S10, but not as a root file system.
None of that support has been backported, so ZFS root
is not yet ready for production use.
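
A quick way to check whether a given build has the ZFS boot bits is to probe
for the bootfs pool property (a sketch; "rootpool" is a hypothetical pool name):

```shell
# On builds without ZFS boot support, the bootfs property simply
# doesn't exist and zpool reports an error for it.
zpool get bootfs rootpool || echo "bootfs property not supported on this build"
```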

Lori