Re: [zfs-discuss] separate home "partition"?

2009-01-13 Thread Cyril Plisko
On Tue, Jan 13, 2009 at 2:42 PM, Johan Hartzenberg  wrote:
>
>
> On Fri, Jan 9, 2009 at 11:51 AM, Johan Hartzenberg 
> wrote:
>>
>>
>> I have this situation working and use my "shared" pool between Linux and
>> Solaris.  Note: the shared pool needs to reside on a whole physical disk
>> or on a primary fdisk partition.  Unless something has changed since I
>> last checked, Solaris' support for logical partitions is... not quite
>> there yet.
>>
>
> I just chanced upon the following in the SNV Build 105 change logs:

This work was removed later - see
http://hg.genunix.org/onnv-gate.hg/rev/de8038a7796e


>
> PSARC case 2006/379 : Solaris on Extended partition
> [...]



-- 
Regards,
Cyril


Re: [zfs-discuss] separate home "partition"?

2009-01-13 Thread Johan Hartzenberg
On Fri, Jan 9, 2009 at 11:51 AM, Johan Hartzenberg wrote:

>
>
> I have this situation working and use my "shared" pool between Linux and
> Solaris.  Note: the shared pool needs to reside on a whole physical disk
> or on a primary fdisk partition.  Unless something has changed since I
> last checked, Solaris' support for logical partitions is... not quite
> there yet.
>
>
I just chanced upon the following in the SNV Build 105 change logs:

PSARC case 2006/379 : Solaris on Extended partition
BUG/RFE: 6644364 Extended partitions need to be supported on Solaris
BUG/RFE: 6713308 Macro UNUSED in fdisk.h needs to be changed since id 100 is Novell Netware 286's partition ID
BUG/RFE: 6713318 Need to differentiate between Solaris old partition and Linux swap
BUG/RFE: 6745175 Partitions can be created using fdisk table with invalid partition line by "fdisk -F"
BUG/RFE: 6745740 Multiple extended partitions can be created by "fdisk -A"
Files Changed: 
update:usr/src/Makefile.lint
update:usr/src/Targetdirs
update:usr/src/cmd/boot/installgrub/Makefile
update:usr/src/cmd/boot/installgrub/installgrub.c
update:usr/src/cmd/devfsadm/disk_link.c
update:usr/src/cmd/fdisk/Makefile
update:usr/src/cmd/fdisk/fdisk.c
update:usr/src/cmd/format/Makefile
update:usr/src/cmd/format/menu_fdisk.c
update:usr/src/lib/Makefile
update:usr/src/pkgdefs/SUNWarc/prototype_i386
update:usr/src/pkgdefs/SUNWarcr/prototype_i386
update:usr/src/pkgdefs/SUNWcsl/prototype_i386
update:usr/src/pkgdefs/SUNWcslr/prototype_i386
update:usr/src/pkgdefs/SUNWhea/prototype_i386
update:usr/src/uts/common/io/cmlb.c
update:usr/src/uts/common/io/scsi/targets/sd.c
update:usr/src/uts/common/sys/cmlb_impl.h
update:usr/src/uts/common/sys/dkio.h
update:usr/src/uts/common/sys/dktp/fdisk.h
update:usr/src/uts/common/sys/scsi/targets/sddef.h
update:usr/src/uts/common/xen/io/xdf.c
update:usr/src/uts/intel/io/dktp/disk/cmdk.c
create:usr/src/lib/libfdisk/Makefile
create:usr/src/lib/libfdisk/i386/Makefile
create:usr/src/lib/libfdisk/i386/libfdisk.c
create:usr/src/lib/libfdisk/i386/libfdisk.h
create:usr/src/lib/libfdisk/i386/llib-lfdisk
create:usr/src/lib/libfdisk/i386/mapfile-vers

-- 
Any sufficiently advanced technology is indistinguishable from magic.
    Arthur C. Clarke

My blog: http://initialprogramload.blogspot.com


Re: [zfs-discuss] separate home "partition"?

2009-01-09 Thread Johan Hartzenberg
On Fri, Jan 9, 2009 at 6:25 PM, noz  wrote:

> > The above is very dangerous, if it
> > will even work. The output of the zfs send is
> > redirected to /tmp, which is a ramdisk.  If you
> > have enough space (RAM + Swap), it will work, but if
> > there is a reboot or crash before the zfs receive
> > completes then everything is gone.
>
> > Instead, do the following:
> > (2) n...@holodeck:~# zfs snapshot -r rpool/exp...@now
> > (3) n...@holodeck:~# zfs send -R rpool/exp...@now | zfs recv -d epool
> > (4) Check that all the data looks OK in epool
> > (5) n...@holodeck:~# zfs destroy -r -f rpool/export
>
> Thanks for the tip.  Is there an easy way to do your revised step 4?  Can I
> use a diff or something similar?  e.g.  diff rpool/export epool/export
>

Personally I would just browse around the structure, open a few files at
random, and consider it done.  But that is me, and my data, of which I _DO_
make backups.

You could use find to create an index of all the files, save the listing
from each pool to a file, and compare the two.  Depending on exactly how you
run the find, you might be able to just diff the files.

Of course, if you want to be really pedantic, you would do
cd /rpool/export; find . -type f | xargs cksum | sort > /rpool_checksums
cd /epool/export; find . -type f | xargs cksum | sort > /epool_checksums
diff /rpool_checksums /epool_checksums
(-type f keeps cksum away from directories, and sorting makes the two
listings line up so the diff is meaningful)

But be prepared to wait a very, very long time for the two checksum passes
to run, unless you have very little data.
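
If you have rsync at hand, a checksum-mode dry run is another option: it
copies nothing and simply lists whatever differs.  A sketch, assuming an
rsync build with these (common) flags:

  # -a recurse the trees, -c compare file contents by checksum,
  # -n dry run (report only, change nothing), -v list the files that differ
  rsync -acnv /rpool/export/ /epool/export/

If the trees match, the dry run lists no files.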

Cheers,
  _J



-- 
Any sufficiently advanced technology is indistinguishable from magic.
    Arthur C. Clarke

My blog: http://initialprogramload.blogspot.com


Re: [zfs-discuss] separate home "partition"?

2009-01-09 Thread noz
> The above is very dangerous, if it
> will even work. The output of the zfs send is
> redirected to /tmp, which is a ramdisk.  If you
> have enough space (RAM + Swap), it will work, but if
> there is a reboot or crash before the zfs receive
> completes then everything is gone.

> Instead, do the following:
> (2) n...@holodeck:~# zfs snapshot -r rpool/exp...@now
> (3) n...@holodeck:~# zfs send -R rpool/exp...@now | zfs recv -d epool
> (4) Check that all the data looks OK in epool
> (5) n...@holodeck:~# zfs destroy -r -f rpool/export

Thanks for the tip.  Is there an easy way to do your revised step 4?  Can I use 
a diff or something similar?  e.g.  diff rpool/export epool/export


Re: [zfs-discuss] separate home "partition"?

2009-01-09 Thread Johan Hartzenberg
On Fri, Jan 9, 2009 at 9:55 AM, hardware technician wrote:

> I want to create a separate home, shared, read/write zfs partition on a
> tri-boot OpenSolaris, Ubuntu, and CentOS system.  I have successfully
> created and exported the zpools that I would like to use, in Ubuntu using
> zfs-fuse.  However, when I boot into OpenSolaris and type zpool import
> with no options, the only pool offered for import is the one on the
> primary partition; I haven't been able to see or import the pool that is
> on the extended partition.  I have tried importing by both name and ID.
>
> In OpenSolaris /dev/dsk/c3d0 shows 15 slices, so I think the slices are
> there, but when I run format, select the disk, and choose the partition
> option, it doesn't show the (zfs) partitions from Linux.  In format, the
> fdisk option does recognize the (zfs) Linux partitions.  The pool that I
> was able to import is on the first partition, named c3d0p1, and is not a
> slice.
>
> Are there any ideas how I could import the other pool?
>

I have this situation working and use my "shared" pool between Linux and
Solaris.  Note: the shared pool needs to reside on a whole physical disk or
on a primary fdisk partition.  Unless something has changed since I last
checked, Solaris' support for logical partitions is... not quite there yet.

P.S. I blogged about my setup (Linux + Solaris with a Shared ZFS pool) here:
http://initialprogramload.blogspot.com/search?q=zfs-fuse+linux ...  However,
this was a long time ago and I don't know whether the statement about GRUB
ZFS support in point 3 is still true.

Apparently some bugs pertaining to time stomping between Ubuntu and Solaris
have been fixed, so you may not need to do step 4.  An alternative to step 4
is to run this in Solaris: pfexec /usr/sbin/rtc -z UTC

In addition, at point 7, use "bootadm list-menu" to find out where Solaris
has decided to save the grub menu.lst file.
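
Those two commands, spelled out (a sketch; rtc(1M) needs elevated
privileges, hence pfexec):

  # tell Solaris the hardware clock is kept in UTC, so the two operating
  # systems stop stomping each other's idea of the time
  pfexec /usr/sbin/rtc -z UTC
  # print the location of the GRUB menu.lst that Solaris actually uses
  bootadm list-menu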

-- 
Any sufficiently advanced technology is indistinguishable from magic.
    Arthur C. Clarke

My blog: http://initialprogramload.blogspot.com


Re: [zfs-discuss] separate home "partition"?

2009-01-09 Thread Johan Hartzenberg
On Fri, Jan 9, 2009 at 4:10 AM, noz  wrote:

>
> Here's my solution:
> (1) n...@holodeck:~# zpool create epool mirror c4t1d0 c4t2d0 c4t3d0
>
> n...@holodeck:~# zfs list
> NAME                     USED  AVAIL  REFER  MOUNTPOINT
> epool                     69K  15.6G    18K  /epool
> rpool                   3.68G  11.9G    72K  /rpool
> rpool/ROOT              2.81G  11.9G    18K  legacy
> rpool/ROOT/opensolaris  2.81G  11.9G  2.68G  /
> rpool/dump               383M  11.9G   383M  -
> rpool/export             632K  11.9G    19K  /export
> rpool/export/home        612K  11.9G    19K  /export/home
> rpool/export/home/noz    594K  11.9G   594K  /export/home/noz
> rpool/swap               512M  12.4G  21.1M  -
> n...@holodeck:~#
>
> (2) n...@holodeck:~# zfs snapshot -r rpool/exp...@now
> (3) n...@holodeck:~# zfs send -R rpool/exp...@now > /tmp/export_now
> (4) n...@holodeck:~# zfs destroy -r -f rpool/export
> (5) n...@holodeck:~# zfs recv -d epool < /tmp/export_now
>
The above is very dangerous, if it will even work.  The output of the zfs
send is redirected to /tmp, which is a ramdisk.  If you have enough space
(RAM + Swap), it will work, but if there is a reboot or crash before the
zfs receive completes, then everything is gone.

Instead, do the following:
(2) n...@holodeck:~# zfs snapshot -r rpool/exp...@now
(3) n...@holodeck:~# zfs send -R rpool/exp...@now | zfs recv -d epool
(4) Check that all the data looks OK in epool
(5) n...@holodeck:~# zfs destroy -r -f rpool/export
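
End to end, the safe sequence looks like this (a sketch with a made-up
snapshot name, since the names above are from noz's system):

  # 1. take a recursive snapshot of everything under rpool/export
  pfexec zfs snapshot -r rpool/export@move
  # 2. pipe it straight into the new pool - no intermediate file in /tmp
  pfexec zfs send -R rpool/export@move | pfexec zfs recv -d epool
  # 3. verify the copy under /epool before touching the source
  # 4. only then destroy the original
  pfexec zfs destroy -r -f rpool/export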


-- 
Any sufficiently advanced technology is indistinguishable from magic.
    Arthur C. Clarke

My blog: http://initialprogramload.blogspot.com


Re: [zfs-discuss] separate home "partition"?

2009-01-08 Thread hardware technician
I want to create a separate home, shared, read/write zfs partition on a
tri-boot OpenSolaris, Ubuntu, and CentOS system.  I have successfully
created and exported the zpools that I would like to use, in Ubuntu using
zfs-fuse.  However, when I boot into OpenSolaris and type zpool import with
no options, the only pool offered for import is the one on the primary
partition; I haven't been able to see or import the pool that is on the
extended partition.  I have tried importing by both name and ID.

In OpenSolaris /dev/dsk/c3d0 shows 15 slices, so I think the slices are
there, but when I run format, select the disk, and choose the partition
option, it doesn't show the (zfs) partitions from Linux.  In format, the
fdisk option does recognize the (zfs) Linux partitions.  The pool that I
was able to import is on the first partition, named c3d0p1, and is not a
slice.

Are there any ideas how I could import the other pool?
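
For reference, the import attempts described above amount to something like
this (a sketch; the pool name is hypothetical, and as the replies in this
thread note, a pool on a logical partition may simply be invisible to
Solaris at this point):

  pfexec zpool import              # scan /dev/dsk and list importable pools
  pfexec zpool import -d /dev/dsk  # same scan, directory given explicitly
  pfexec zpool import shared       # import by name (hypothetical pool name)
  pfexec zpool import 1234567890   # or by numeric pool ID (made up here)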


Re: [zfs-discuss] separate home "partition"?

2009-01-08 Thread Andre Wenas
You can edit the /etc/user_attr file.
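
Concretely: root is configured as a role on OpenSolaris, which is why the
login screen refuses it.  A sketch of the change (usermod should rewrite the
entry for you, assuming 2008.11 behaves like other Solaris releases):

  # turn the root role into a normal account that can log in directly
  pfexec usermod -K type=normal root
  # equivalently, edit /etc/user_attr and change "type=role" to
  # "type=normal" in root's entry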

Sent from my iPhone

On Jan 9, 2009, at 11:13 AM, noz  wrote:

>> To do step no. 4, you need to log in as root, or create a new user
>> whose home dir is not under /export.
>>
>> Sent from my iPhone
>>
>
> I tried to log in as root at the login screen but it wouldn't let me,
> some error about roles.  Is there another way to log in as root?


Re: [zfs-discuss] separate home "partition"?

2009-01-08 Thread noz
> To do step no. 4, you need to log in as root, or create a new user whose
> home dir is not under /export.
> 
> Sent from my iPhone
> 

I tried to log in as root at the login screen but it wouldn't let me, some
error about roles.  Is there another way to log in as root?


Re: [zfs-discuss] separate home "partition"?

2009-01-08 Thread Andre Wenas
To do step no. 4, you need to log in as root, or create a new user whose
home dir is not under /export.

Sent from my iPhone

On Jan 9, 2009, at 10:10 AM, noz  wrote:

> Kyle wrote:
>> So if preserving the home filesystem through
>> re-installs are really
>> important, putting the home filesystem in a separate
>> pool may be in
>> order.
>
> My problem is similar to the original thread author's, and this scenario
> is exactly the one I had in mind.  I figured out a workable solution
> from the zfs admin guide, but I've only tested this in virtualbox.
> I have no idea how well this would work if I actually had hundreds
> of gigabytes of data.  I also don't know if my solution is the
> recommended way to do this, so please let me know if anyone has a
> better method.
>
> Here's my solution:
> (1) n...@holodeck:~# zpool create epool mirror c4t1d0 c4t2d0 c4t3d0
>
> n...@holodeck:~# zfs list
> NAME                     USED  AVAIL  REFER  MOUNTPOINT
> epool                     69K  15.6G    18K  /epool
> rpool                   3.68G  11.9G    72K  /rpool
> rpool/ROOT              2.81G  11.9G    18K  legacy
> rpool/ROOT/opensolaris  2.81G  11.9G  2.68G  /
> rpool/dump               383M  11.9G   383M  -
> rpool/export             632K  11.9G    19K  /export
> rpool/export/home        612K  11.9G    19K  /export/home
> rpool/export/home/noz    594K  11.9G   594K  /export/home/noz
> rpool/swap               512M  12.4G  21.1M  -
> n...@holodeck:~#
>
> (2) n...@holodeck:~# zfs snapshot -r rpool/exp...@now
> (3) n...@holodeck:~# zfs send -R rpool/exp...@now > /tmp/export_now
> (4) n...@holodeck:~# zfs destroy -r -f rpool/export
> (5) n...@holodeck:~# zfs recv -d epool < /tmp/export_now
>
> n...@holodeck:~# zfs list
> NAME                     USED  AVAIL  REFER  MOUNTPOINT
> epool                    756K  15.6G    18K  /epool
> epool/export             630K  15.6G    19K  /export
> epool/export/home        612K  15.6G    19K  /export/home
> epool/export/home/noz    592K  15.6G   592K  /export/home/noz
> rpool                   3.68G  11.9G    72K  /rpool
> rpool/ROOT              2.81G  11.9G    18K  legacy
> rpool/ROOT/opensolaris  2.81G  11.9G  2.68G  /
> rpool/dump               383M  11.9G   383M  -
> rpool/swap               512M  12.4G  21.1M  -
> n...@holodeck:~#
>
> (6) n...@holodeck:~# zfs mount -a
>
> or
>
> (6) reboot
>
> The only part I'm uncomfortable with is when I have to destroy  
> rpool's export filesystem (step 4), because trying to destroy  
> without the -f switch results in a "filesystem is active" error.


Re: [zfs-discuss] separate home "partition"?

2009-01-08 Thread noz
Kyle wrote:
> So if preserving the home filesystem through
> re-installs are really
> important, putting the home filesystem in a separate
> pool may be in
> order.

My problem is similar to the original thread author's, and this scenario is exactly 
the one I had in mind.  I figured out a workable solution from the zfs admin 
guide, but I've only tested this in virtualbox.  I have no idea how well this 
would work if I actually had hundreds of gigabytes of data.  I also don't know 
if my solution is the recommended way to do this, so please let me know if 
anyone has a better method.

Here's my solution:
(1) n...@holodeck:~# zpool create epool mirror c4t1d0 c4t2d0 c4t3d0

n...@holodeck:~# zfs list
NAME                     USED  AVAIL  REFER  MOUNTPOINT
epool                     69K  15.6G    18K  /epool
rpool                   3.68G  11.9G    72K  /rpool
rpool/ROOT              2.81G  11.9G    18K  legacy
rpool/ROOT/opensolaris  2.81G  11.9G  2.68G  /
rpool/dump               383M  11.9G   383M  -
rpool/export             632K  11.9G    19K  /export
rpool/export/home        612K  11.9G    19K  /export/home
rpool/export/home/noz    594K  11.9G   594K  /export/home/noz
rpool/swap               512M  12.4G  21.1M  -
n...@holodeck:~# 

(2) n...@holodeck:~# zfs snapshot -r rpool/exp...@now
(3) n...@holodeck:~# zfs send -R rpool/exp...@now > /tmp/export_now
(4) n...@holodeck:~# zfs destroy -r -f rpool/export
(5) n...@holodeck:~# zfs recv -d epool < /tmp/export_now

n...@holodeck:~# zfs list
NAME                     USED  AVAIL  REFER  MOUNTPOINT
epool                    756K  15.6G    18K  /epool
epool/export             630K  15.6G    19K  /export
epool/export/home        612K  15.6G    19K  /export/home
epool/export/home/noz    592K  15.6G   592K  /export/home/noz
rpool                   3.68G  11.9G    72K  /rpool
rpool/ROOT              2.81G  11.9G    18K  legacy
rpool/ROOT/opensolaris  2.81G  11.9G  2.68G  /
rpool/dump               383M  11.9G   383M  -
rpool/swap               512M  12.4G  21.1M  -
n...@holodeck:~# 

(6) n...@holodeck:~# zfs mount -a

or

(6) reboot

The only part I'm uncomfortable with is when I have to destroy rpool's export 
filesystem (step 4), because trying to destroy without the -f switch results in 
a "filesystem is active" error.


Re: [zfs-discuss] separate home "partition"?

2008-12-29 Thread scott
at the risk of venturing off topic:

that looks like a good revisioning scheme. opensolaris creates a new (default) 
boot environment during the update process, which seems like a very cool 
feature. seems like. when i update my 2008.11 install, nwam breaks, apparently 
a known bug (the workaround didn't work for me, probably due to inexperience). 
"no problem" me thinks, i just boot into the old environment. nwam still 
broken. i had naively assumed that the new BE was a "delta", again, i know 
little.

anyway, back to zfs, you didn't voice any alarm at my "virtual" home folder 
scheme. regarding root partition, i'll think it over, but given my luck with 
updates, i don't imagine doing any.

thank you once again for all of your valuable input.

scott


Re: [zfs-discuss] separate home "partition"?

2008-12-29 Thread Johan Hartzenberg
On Mon, Dec 29, 2008 at 1:12 AM, scott  wrote:

> thanks for the input. since i have no interest in multibooting (virtualbox
> will suit my needs), i created a 10gb partition on my 500gb drive for
> opensolaris and reserved the rest for files (130gb worth).
>
> after installing the os and fdisking the rest of the space to solaris2, i
> created a zpool called DOCUMENTS (good tips with the upper case), which i
> then mounted to Documents in my home folder.
>
> the logic is, if i have to reinstall, i just export DOCUMENTS and re-import
> it into the reinstalled os (or import -f in a worst-case scenario).
>
> after having done all the setup, i partitioned drive 2 using identical
> cylinder locs and mirrored each into their respective pools (rpool and
> DOCUMENTS). replacing drive 1 with 2 and starting back up, everything boots
> fine and i see all my data, so it worked.
>
> obviously i'm a noob, and yet even i find my own method a little
> suspicious. i look at the disk usage analyzer and see that / is 100% used.
> while i'm sure that this is in some kind of "virtual" sense, it leaves me
> with a feeling that i've done a goofy thing.
>
> comments about this last concern are greatly appreciated!
>

Firstly, 10 GB is a bit on the lean side for a Solaris root pool.  The pool
needs to store about 6 GB of software, a swap device, and a dump device.

OpenSolaris also gives you upgrade with roll-back.  For this purpose I
reserved about 8 GB per "instance".  The way I do it is as follows:

8 GB for the current version
8 GB for current - 1.
8 GB for a "transient" version - see below
6 GB for Swap and Dump.
10 GB for some flexibility, installing software, etc.
Total for Solaris partition: 40 GB

The transient instance does not stay on the disk for long.  The upgrade
strategy is as follows:

When running on version N, and upgrading to N+1, you will still have N-1 on
disk.  Thus, space for 3 releases is needed.  A few days after upgrading to
N+1, I start to consider it to be the new N.  The old N-1 is then redundant,
and I delete it at that point.  The exception is if the new release doesn't
work to my liking.  Then I delete it, and keep the old N and N-1.

I ALWAYS keep one older release on disk - if nothing else, I've had to use
it as a "recovery" environment many times.  However, a full release is a
somewhat "expensive" recovery area.  In particular, since I am using Solaris
Express, it is possible to create a much cheaper "recovery" alternate boot
environment as follows:

Create a new boot environment (lucreate -n recovery)
Make it bootable (luactivate recovery)
Boot into it once (init 6)
Make the "old BE" active again and boot back into it.

The result is a recovery environment from which you can boot, which does not
take any disk space (other than whatever changes on disk) because it is
based on a snapshot of the existing/current boot environment.
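
As commands, the round trip looks like this (a sketch; "myBE" stands in for
whatever the current boot environment is called):

  lucreate -n recovery    # new BE, cloned from a snapshot of the current one
  luactivate recovery     # make it the default for the next boot
  init 6                  # boot into it once
  luactivate myBE         # re-activate the everyday BE
  init 6                  # and boot back into it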

I don't know the OpenSolaris upgrade mechanism yet, though I understand that
something similar is possible.

-- 
Any sufficiently advanced technology is indistinguishable from magic.
    Arthur C. Clarke

My blog: http://initialprogramload.blogspot.com


Re: [zfs-discuss] separate home "partition"?

2008-12-28 Thread scott
thanks for the input. since i have no interest in multibooting (virtualbox will 
suit my needs), i created a 10gb partition on my 500gb drive for opensolaris 
and reserved the rest for files (130gb worth).

after installing the os and fdisking the rest of the space to solaris2, i 
created a zpool called DOCUMENTS (good tips with the upper case), which i then 
mounted to Documents in my home folder.

the logic is, if i have to reinstall, i just export DOCUMENTS and re-import it 
into the reinstalled os (or import -f in a worst-case scenario).

after having done all the setup, i partitioned drive 2 using identical cylinder 
locs and mirrored each into their respective pools (rpool and DOCUMENTS). 
replacing drive 1 with 2 and starting back up, everything boots fine and i see 
all my data, so it worked.

obviously i'm a noob, and yet even i find my own method a little suspicious. i 
look at the disk usage analyzer and see that / is 100% used. while i'm sure 
that this is in some kind of "virtual" sense, it leaves me with a feeling that 
i've done a goofy thing.

comments about this last concern are greatly appreciated!


Re: [zfs-discuss] separate home "partition"?

2008-12-27 Thread Johan Hartzenberg
On Sat, Dec 27, 2008 at 7:33 AM, scott  wrote:

> do you mean a pool on a SEPARATE partition?
> --
>
That is what I do.  In particular, I have:

fdisk partition 1 = Solaris partition type 0xbf  = rpool = 40 GB
fdisk partition 2 = MSDOS partition type = SHARED zpool = 190 GB
fdisk partition 3 = 30 GB Extended partition. Logical partition 5 used for
Ubuntu Root, Logical Partition 6 = Ubuntu Swap.

This leaves me with the option of creating an fdisk partition 4 for another
operating system.

Disadvantages:
1. Partitioning means ZFS does not turn on write-caching.
2. Also there is "wasted space". (Partitioning implies pre-allocating space,
which means you have to dedicate space that you may not use)

Advantages:
1. I can import the SHARED zpool under Ubuntu and thus I have the perfect
shared space solution between the two operating systems, without having to
worry about clashing mount points which would be present if I tried to
import the root pool.
2.  If I needed to re-install, I would only wipe/destroy/touch the OS, not
my user data.
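
The handoff behind advantage 1 is a plain export/import pair (a sketch;
zfs-fuse provides the zpool command on the Ubuntu side):

  # on Solaris, before rebooting into Ubuntu:
  pfexec zpool export SHARED
  # then in Ubuntu, with zfs-fuse running:
  sudo zpool import SHARED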

I have not yet made the move from Solaris Express to OpenSolaris, so I am
still using Live Upgrade.  I generally upgrade to every new release,
sometimes to my sorrow.  But it does not touch my "SHARED data" zpool.

One other thing:  I started a "convention" of using all-capital names for my
ZFS pool names.  It makes them stand out nicely in the output of df and
mount, but in particular it distinguishes nicely between the pool name and
the mountpoint, because I then mount the "SHARED" pool on "/shared".
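
For example (a sketch; the device name is made up):

  # create the pool and set its mountpoint in one go
  pfexec zpool create -m /shared SHARED c1d0p2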

-- 
Any sufficiently advanced technology is indistinguishable from magic.
    Arthur C. Clarke

My blog: http://initialprogramload.blogspot.com


Re: [zfs-discuss] separate home "partition"?

2008-12-26 Thread scott
do you mean a pool on a SEPARATE partition?


Re: [zfs-discuss] separate home "partition"?

2008-12-26 Thread scott
i tend to bork up oses a lot, and until i get better at problem solving it's
just easier to reinstall (at least it was in my linux days).

by separate pool do you mean a pool that exists on a different
partition/device? that's the only way i can imagine caiman not overwriting
everything.


Re: [zfs-discuss] separate home "partition"?

2008-12-26 Thread Kyle McDonald

Richard Elling wrote:
> scott stanley wrote:
>> (i use the term loosely because i know that zfs likes whole volumes
>> better)
>> [...]
>
> Cool.  By default, OpenSolaris implements the prevailing best practice.
> It uses ZFS and puts the home directories in a separate file system.

And upgrades, by using ZFS snapshots, won't ever touch the ZFS home
directory FS.


The only part I'm not sure about is reinstalls.  I don't believe the
installer is capable of re-using an existing root pool while preserving the
non-root (non-boot-environment, really) filesystems, or, for that matter,
of scratch-installing into an entirely new BE and leaving the others alone.


So if preserving the home filesystem through re-installs is really
important, putting the home filesystem in a separate pool may be in order.
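
For instance (a sketch with a made-up second disk; noz's messages in this
thread walk through migrating an existing /export):

  # a pool the installer need never touch, holding the home filesystems
  pfexec zpool create hpool c1t1d0
  pfexec zfs create -o mountpoint=/export/home hpool/home
  # (assumes rpool's own export/home datasets were moved out of the way first)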


  -Kyle






Re: [zfs-discuss] separate home "partition"?

2008-12-26 Thread Richard Elling
scott stanley wrote:
> (i use the term loosely because i know that zfs likes whole volumes better)
>
> when installing ubuntu, i got in the habit of using a separate partition for 
> my home directory so that my data and gnome settings would all remain intact 
> when i reinstalled or upgraded.
>
> i'm running osol 2008.11 on an ultra 20, which has only two drives. i've got 
> all my data located in my home directory, and the two drives are cloned 
> (mirrored and bootable).
>
> i want to have a similar option as my linux setup, but want to play by the 
> zfs (best practice) rules as much as possible.
>   

Cool.  By default, OpenSolaris implements the prevailing best practice.
It uses ZFS and puts the home directories in a separate file system.
 -- richard



[zfs-discuss] separate home "partition"?

2008-12-26 Thread scott stanley
(i use the term loosely because i know that zfs likes whole volumes better)

when installing ubuntu, i got in the habit of using a separate partition for my 
home directory so that my data and gnome settings would all remain intact when 
i reinstalled or upgraded.

i'm running osol 2008.11 on an ultra 20, which has only two drives. i've got 
all my data located in my home directory, and the two drives are cloned 
(mirrored and bootable).

i want to have a similar option as my linux setup, but want to play by the zfs 
(best practice) rules as much as possible.

thanks in advance