[zfs-discuss] rpool mirroring

2009-06-04 Thread noz
I've been playing around with zfs root pool mirroring and came across some 
problems.

I have no problems mirroring the root pool if I have both disks attached during 
OpenSolaris installation (installer sees 2 disks).

The problem occurs when I only have one disk attached to the system during 
install.  After OpenSolaris installation completes, I attach the second disk 
and try to create a mirror but I cannot.

Here are the steps I go through:
1) install OpenSolaris onto a 16 GB disk
2) after a successful install, shut down and attach the second disk (also 16 GB)
3) fdisk -B
4) partition
5) zpool attach

Step 5 fails with a "disk too small" error.
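For concreteness, steps 3 through 5 look roughly like the following. This is a sketch rather than a transcript; the device names c7d0/c7d1 are taken from the zpool attach command quoted later in this thread:

    fdisk -B /dev/rdsk/c7d1p0           # step 3: default Solaris fdisk partition on the new disk
    format c7d1                         # step 4: create an s0 slice to match the first disk
    zpool attach rpool c7d0s0 c7d1s0    # step 5: this is the command that fails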

What I noticed about the second disk is that it has a ninth slice, labeled 
alternates, that takes up about 15 MB.  This slice doesn't exist on the first 
disk, and I believe it is what's causing the problem.  I can't figure out how 
to delete it, and I don't know why it's there.  How do I mirror the root pool 
if I don't have both disks attached during OpenSolaris installation?  
I realize I could just use a disk larger than 16 GB, but that would be a waste.
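One way to see the mismatch (my own check, not something the installer requires) is to compare the two disks' VTOCs; slice 9 shows up with the alternates tag only on the second disk:

    prtvtoc /dev/rdsk/c7d0s2    # original disk: no slice 9
    prtvtoc /dev/rdsk/c7d1s2    # new disk: shows the extra s9 alternates slice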


Re: [zfs-discuss] rpool mirroring

2009-06-04 Thread noz
 I believe slice 9 (alternates) is an older method for providing
 alternate disk blocks on x86 systems. Apparently, it can be removed by
 using the format -e command. I haven't tried this though.

format -e worked!!  It is resilvering as I type this message.  Thanks!!
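For the archive, the slice-9 removal went roughly like this; the menu details are from memory, so double-check them against your release:

    format -e                 # expert mode is what exposes slices 8 and 9
    format> partition         # after selecting the second disk (c7d1)
    partition> 9              # edit slice 9 and give it a size of 0
    partition> label          # write the updated label back to the disk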
 
 I don't think removing slice 9 will help though if these two disks
 are not identical, hence the bug.

They are identical though.  The only difference is that s0 on the second disk 
is slightly smaller than s0 on the first disk, because s9 steals about 15 MB of 
space.  So when I invoked zpool attach -f rpool c7d0s0 c7d1s0, I got the "too 
small" error.  After deleting s9, everything worked okay.
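To watch the resilver and confirm that both halves of the mirror come back ONLINE:

    zpool status rpool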

Thanks Cindy!!


Re: [zfs-discuss] separate home partition?

2009-01-09 Thread noz
 The above is very dangerous, if it will even work. The output of the
 zfs send is redirected to /tmp, which is a ramdisk. If you have enough
 space (RAM + swap), it will work, but if there is a reboot or crash
 before the zfs receive completes then everything is gone.

 Instead, do the following:
 (2) noz@holodeck:~# zfs snapshot -r rpool/export@now
 (3) noz@holodeck:~# zfs send -R rpool/export@now | zfs recv -d epool
 (4) Check that all the data looks OK in epool
 (5) noz@holodeck:~# zfs destroy -r -f rpool/export

Thanks for the tip.  Is there an easy way to do your revised step 4?  Can I use 
diff or something similar, e.g. diff rpool/export epool/export?
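Since epool/export can't mount at /export while rpool/export still exists, I'm guessing the check would need a temporary mountpoint, something like this (untested, and /mnt/verify is just a name I made up):

    zfs set mountpoint=/mnt/verify epool/export    # temporary spot so both trees are visible
    diff -r /export /mnt/verify                    # or compare ls -lR listings
    zfs set mountpoint=/export epool/export        # after the destroy in step 5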


Re: [zfs-discuss] separate home partition?

2009-01-08 Thread noz
Kyle wrote:
 So if preserving the home filesystem through re-installs is really
 important, putting the home filesystem in a separate pool may be in
 order.

My problem is similar to the original thread author's, and this scenario is 
exactly the one I had in mind.  I figured out a workable solution from the ZFS 
admin guide, but I've only tested it in VirtualBox.  I have no idea how well it 
would work if I actually had hundreds of gigabytes of data.  I also don't know 
whether my solution is the recommended way to do this, so please let me know if 
anyone has a better method.

Here's my solution:
(1) noz@holodeck:~# zpool create epool mirror c4t1d0 c4t2d0 c4t3d0

noz@holodeck:~# zfs list
NAME                     USED  AVAIL  REFER  MOUNTPOINT
epool                     69K  15.6G    18K  /epool
rpool                   3.68G  11.9G    72K  /rpool
rpool/ROOT              2.81G  11.9G    18K  legacy
rpool/ROOT/opensolaris  2.81G  11.9G  2.68G  /
rpool/dump               383M  11.9G   383M  -
rpool/export             632K  11.9G    19K  /export
rpool/export/home        612K  11.9G    19K  /export/home
rpool/export/home/noz    594K  11.9G   594K  /export/home/noz
rpool/swap               512M  12.4G  21.1M  -
noz@holodeck:~#

(2) noz@holodeck:~# zfs snapshot -r rpool/export@now
(3) noz@holodeck:~# zfs send -R rpool/export@now > /tmp/export_now
(4) noz@holodeck:~# zfs destroy -r -f rpool/export
(5) noz@holodeck:~# zfs recv -d epool < /tmp/export_now

noz@holodeck:~# zfs list
NAME                     USED  AVAIL  REFER  MOUNTPOINT
epool                    756K  15.6G    18K  /epool
epool/export             630K  15.6G    19K  /export
epool/export/home        612K  15.6G    19K  /export/home
epool/export/home/noz    592K  15.6G   592K  /export/home/noz
rpool                   3.68G  11.9G    72K  /rpool
rpool/ROOT              2.81G  11.9G    18K  legacy
rpool/ROOT/opensolaris  2.81G  11.9G  2.68G  /
rpool/dump               383M  11.9G   383M  -
rpool/swap               512M  12.4G  21.1M  -
noz@holodeck:~#

(6) noz@holodeck:~# zfs mount -a

or

(6) reboot

The only part I'm uncomfortable with is step 4, where I have to destroy rpool's 
export filesystem: trying to destroy it without the -f switch results in a 
"filesystem is active" error.
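The error does make sense, though: my own login session holds files open under /export/home/noz, so the filesystem really is active. Something like fuser should identify the culprit (I haven't tried this):

    fuser -c /export/home/noz    # list processes with files open on the mounted filesystem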


Re: [zfs-discuss] separate home partition?

2009-01-08 Thread noz
 To do step 4, you need to log in as root, or create a new user whose
 home directory is not under /export.
 
I tried to log in as root at the login screen, but it wouldn't let me; I got 
some error about roles.  Is there another way to log in as root?
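The other suggestion, a throwaway user with a home directory outside /export, would presumably look something like this (names made up, untested):

    useradd -d /var/tmpadmin -m -s /bin/bash tmpadmin
    passwd tmpadmin
    # log in as tmpadmin at the console, then become root:
    su -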