manageBE and ZFS boot-environments WAS [Re: ZFS NAS configuration question]

2009-06-04 Thread Philipp Wuensche
Dan Naumov wrote:
 Anyone else think that this combined with freebsd-update integration
 and a simplistic menu GUI for choosing the preferred boot environment
 would make an _awesome_ addition to the base system? :)

I guess freebsd-update is not a problem; it should just be freebsd-update -b
path_to_new_boot-environment. But I can't test this, because I can't
update STABLE with freebsd-update. ;-)
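
Untested, but I imagine it would go roughly like this, with the new boot
environment already cloned and mounted at a hypothetical /newBE:

freebsd-update -b /newBE fetch     # fetch updates for the mounted BE
freebsd-update -b /newBE install   # install them into /newBE, not into /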

I wrote a small step-by-step example to show how stuff works:
http://anonsvn.h3q.com/projects/freebsd-patches/browser/manageBE/example.txt

greetings,
philipp




Re: ZFS NAS configuration question

2009-06-03 Thread Philipp Wuensche
Dan Naumov wrote:
 
 Reading that made me pause for a second and made me go WOW, this is how
 UNIX system upgrades should be done. Any hope of us lowly users ever seeing
 something like this implemented in FreeBSD? :)

I wrote a script implementing the most useful features of the Solaris
Live Upgrade. The only things missing are selecting a boot environment
from the loader and freebsd-update support, since I wrote the script on a
system running CURRENT. I use this on all my FreeBSD/ZFS boxes and it is
extremely useful!

http://anonsvn.h3q.com/projects/freebsd-patches/wiki/manageBE
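
Under the hood a boot environment is little more than a snapshot, a clone
and the pool's bootfs property. Very roughly, with a hypothetical pool
called tank (this is a hand sketch, not manageBE's actual invocation):

zfs snapshot tank/ROOT/default@pre-upgrade         # freeze the current root
zfs clone tank/ROOT/default@pre-upgrade tank/ROOT/new-be
zpool set bootfs=tank/ROOT/new-be tank             # boot the clone next time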

greetings,
philipp



Re: ZFS NAS configuration question

2009-06-03 Thread Dan Naumov
Anyone else think that this combined with freebsd-update integration
and a simplistic menu GUI for choosing the preferred boot environment
would make an _awesome_ addition to the base system? :)

- Dan Naumov


On Wed, Jun 3, 2009 at 5:42 PM, Philipp Wuensche cryx-free...@h3q.com wrote:
 I wrote a script implementing the most useful features of the solaris
 live upgrade, the only thing missing is selecting a boot-environment
 from the loader and freebsd-update support as I write the script on a
 system running current. I use this on all my freebsd-zfs boxes and it is
 extremely useful!

 http://anonsvn.h3q.com/projects/freebsd-patches/wiki/manageBE

 greetings,
 philipp


Re: ZFS NAS configuration question

2009-06-02 Thread Gerrit Kühn
On Sat, 30 May 2009 21:41:36 +0300 Dan Naumov dan.nau...@gmail.com wrote
about ZFS NAS configuration question:

DN So, this leaves me with 1 SATA port used for a FreeBSD disk and 4 SATA
DN ports available for tinkering with ZFS. 

Do you have a USB port available to boot from? A conventional USB stick (I
use 4 GB or 8GB these days, but smaller ones would certainly also do) is
enough to hold the base system on UFS, and you can give the whole of your
disks to ZFS without having to bother with booting from them.
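
Roughly, and assuming the stick shows up as da0 while the four SATA disks
are ad4 through ad10 (device names are just an example):

# base system installed on the UFS stick (da0), whole SATA disks go to ZFS
zpool create tank raidz ad4 ad6 ad8 ad10
zfs create tank/data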


cu
  Gerrit


Re: ZFS NAS configuration question

2009-06-02 Thread Dan Naumov
A USB root partition for booting off UFS is something I have considered. I
have looked around and it seems that all the install FreeBSD onto a USB
stick guides involve a lot of manual work from a fixit environment;
does sysinstall not recognise USB drives as a valid disk device to
partition/label/install FreeBSD on? If I do go with a USB boot/root, what
things should I absolutely keep on it and which are safe to move to a ZFS
pool? The idea is that in case my ZFS configuration goes bonkers for some
reason, I still have a fully workable single-user configuration to boot from
for recovery.

I haven't really used USB flash for many years, but I remember when they
first started appearing on the shelves, they became well known for their
horrible reliability (a stick would die within a year of use, etc). Have they
improved to the point of being good enough to host a root partition on,
without having to set up some crazy GEOM mirror setup using 2 of them?

- Dan Naumov



2009/6/2 Gerrit Kühn ger...@pmp.uni-hannover.de

 On Sat, 30 May 2009 21:41:36 +0300 Dan Naumov dan.nau...@gmail.com wrote
 about ZFS NAS configuration question:

 DN So, this leaves me with 1 SATA port used for a FreeBSD disk and 4 SATA
 DN ports available for tinkering with ZFS.

 Do you have a USB port available to boot from? A conventional USB stick (I
 use 4 GB or 8GB these days, but smaller ones would certainly also do) is
 enough to hold the base system on UFS, and you can give the whole of your
 disks to ZFS without having to bother with booting from them.


 cu
  Gerrit


Re: ZFS NAS configuration question

2009-06-02 Thread Daniel O'Connor
On Tue, 2 Jun 2009, Dan Naumov wrote:
 USB root partition for booting off UFS is something I have
 considered. I have looked around and it seems that all the install
 FreeBSD onto USB stick guides seem to involve a lot of manual work
 from a fixit environment, does sysinstall not recognise USB drives as
 a valid disk device to partition/label/install FreeBSD on? If I do go
 with an USB boot/root, what things I should absolutely keep on it and
 which are safe to move to a ZFS pool? The idea is that in case my
 ZFS configuration goes bonkers for some reason, I still have a fully
 workable singleuser configuration to boot from for recovery.

It should see them as SCSI disks. Note that if you plug them in after 
the installer boots, you will need to go into Options and tell it to 
rescan the devices.

 I haven't really used USB flash for many years, but I remember when
 they first started appearing on the shelves, they got well known for
 their horrible reliability (stick would die within a year of use,
 etc). Have they improved to the point of being good enough to host a
 root partition on, without having to setup some crazy GEOM mirror
 setup using 2 of them?

I would expect one to last a long time if you only use it for /boot and 
use ZFS for the rest (or even just moving /var onto ZFS would save 
heaps of writes).

Also, you could set up 2 USB sticks (install on one then dd onto the 
other) so you have a cold spare.
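
Roughly, assuming the installed stick is da0 and the spare is da1:

# block-for-block copy of the installed stick onto the cold spare
dd if=/dev/da0 of=/dev/da1 bs=1m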

-- 
Daniel O'Connor software and network engineer
for Genesis Software - http://www.gsoft.com.au
The nice thing about standards is that there
are so many of them to choose from.
  -- Andrew Tanenbaum
GPG Fingerprint - 5596 B766 97C0 0E94 4347 295E E593 DC20 7B3F CE8C




Re: ZFS NAS configuration question

2009-06-02 Thread Miroslav Lachman

Daniel O'Connor wrote:

On Tue, 2 Jun 2009, Dan Naumov wrote:


USB root partition for booting off UFS is something I have
considered. I have looked around and it seems that all the install
FreeBSD onto USB stick guides seem to involve a lot of manual work
from a fixit environment, does sysinstall not recognise USB drives as
a valid disk device to partition/label/install FreeBSD on? If I do go
with an USB boot/root, what things I should absolutely keep on it and
which are safe to move to a ZFS pool? The idea is that in case my
ZFS configuration goes bonkers for some reason, I still have a fully
workable singleuser configuration to boot from for recovery.



It should see them as SCSI disks, note that if you plug them in after 
the installer boots you will need to go into Options and tell it to 
rescan the devices.




I haven't really used USB flash for many years, but I remember when
they first started appearing on the shelves, they got well known for
their horrible reliability (stick would die within a year of use,
etc). Have they improved to the point of being good enough to host a
root partition on, without having to setup some crazy GEOM mirror
setup using 2 of them?



I would expect one to last a long time if you only use it for /boot and 
use ZFS for the rest (or even just moving /var onto ZFS would save 
heaps of writes).


I am using this setup (booting from USB with UFS) on our backup storage 
server with FreeBSD 7.2 + ZFS.
A 2GB USB flash disk contains a normal installation of the whole system, but 
it is set read-only in fstab. ZFS is used for /tmp, /var, /usr/ports, 
/usr/src, /usr/obj and storage.
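
The fstab entry for the root filesystem is basically just this (paraphrased,
the ro option being the point):

# /etc/fstab on the USB stick
/dev/ufs/2gLive   /   ufs   ro   1   1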


The root filesystem is remounted read-write only for occasional configuration 
changes, then remounted back to read-only.


Miroslav Lachman


# df -h
Filesystem                        Size  Used  Avail Capacity  Mounted on
/dev/ufs/2gLive                   1.6G  863M   642M    57%    /
devfs                             1.0K  1.0K     0B   100%    /dev
tank                              1.1T  128K   1.1T     0%    /tank
tank/system                       1.1T  128K   1.1T     0%    /tank/system
tank/system/usr                   1.1T  128K   1.1T     0%    /tank/system/usr
tank/system/tmp                   1.1T  128K   1.1T     0%    /tmp
tank/system/usr/obj               1.1T  128K   1.1T     0%    /usr/obj
tank/system/usr/ports             1.1T  218M   1.1T     0%    /usr/ports
tank/system/usr/ports/distfiles   1.1T  108M   1.1T     0%    /usr/ports/distfiles
tank/system/usr/ports/packages    1.1T  125M   1.1T     0%    /usr/ports/packages
tank/system/usr/src               1.1T  171M   1.1T     0%    /usr/src
tank/system/var                   1.1T  256K   1.1T     0%    /var
tank/system/var/db                1.1T  716M   1.1T     0%    /var/db
tank/system/var/db/pkg            1.1T  384K   1.1T     0%    /var/db/pkg
tank/system/var/log               1.1T   45M   1.1T     0%    /var/log
tank/system/var/run               1.1T  128K   1.1T     0%    /var/run
tank/vol0                         2.6T  1.5T   1.1T    57%    /vol0
tank/vol0/mon                     1.1T  128K   1.1T     0%    /vol0/mon

(some filesystems use compression, which is why ports and var are 
split into more filesystems)



Re: ZFS NAS configuration question

2009-06-02 Thread sthaug
 root filesystem is remounted read write only for some configuration 
 changes, then remounted back to read only.

Does this work reliably for you? I tried doing the remounting trick,
both for root and /usr, back in the 4.x time frame, and could never
get it to work - I would always end up with inconsistent file systems.

Steinar Haug, Nethelp consulting, sth...@nethelp.no


Re: ZFS NAS configuration question

2009-06-02 Thread Nikos Vassiliadis

sth...@nethelp.no wrote:
root filesystem is remounted read write only for some configuration 
changes, then remounted back to read only.


Does this work reliably for you? I tried doing the remounting trick,
both for root and /usr, back in the 4.x time frame. And could never
get it to work - would always end up with inconsistent file systems.


There have been many fixes in this area lately. The case where a
file system with soft updates would fail to update to read-only
is fixed in -CURRENT, and these changes have been merged to -STABLE.
It is believed to work correctly.

http://lists.freebsd.org/pipermail/freebsd-stable/2008-October/046001.html

Remounting with soft updates enabled used to be too
fragile to be useful. Now it seems very solid.

Nikos



Re: ZFS NAS configuration question

2009-06-02 Thread Zaphod Beeblebrox
On Sun, May 31, 2009 at 4:43 AM, Aristedes Maniatis a...@ish.com.au wrote:


 On 31/05/2009, at 4:41 AM, Dan Naumov wrote:

  To top that
 off, even when/if you do it right, not your entire disk goes to ZFS
 anyway, because you still do need a swap and a /boot to be non-ZFS, so
 you will have to install ZFS onto a slice and not the entire disk and
 even Sun discourages doing that.


 ZFS on root is still pretty new to FreeBSD, and until it gets ironed out
 and all the sysinstall tools support it nicely, it isn't hard to use a small
 UFS slice to get things going during boot. And there is nothing wrong with
 putting ZFS onto a slice rather than the entire disk: that is a very common
 approach.


It's worth noting that there are a few sensible appliance designs...
(although as a ZFS server, you might want 4, 8 or 16G in your appliance).

You could, for instance, boot from flash.  If your true purpose is an
appliance, this is very reasonable.  It means that your appliance boots
when no disks are attached.  Useful to instruct the appliance user how to
attach disks and do diagnostics, for instance.

My own ZFS is 5x 1.5TB disks running on a few-weeks-old 8-CURRENT.  I gave
up waiting for v13 in 7.x.  Maybe I should have waited.  But I've avoided
most of the most recent foofaraw by not tracking CURRENT incessantly.  If
I was installing new, I'd probably stick with 7.x for a server... for now.
I must admit, however, that the system seems happy with 8-CURRENT.

The system boots from a pair of drives in a gmirror.  Not because you can't
boot from ZFS, but because it's just so darn stable (and it predates the use
of ZFS).

Really there are two camps here --- booting from ZFS is the use of ZFS as
the machine's own filesystem.  This is one goal of ZFS that is somewhat
imperfect on FreeBSD at the moment.  ZFS file servers are another goal
where booting from ZFS is not really required and only marginally
beneficial.


Re: ZFS NAS configuration question

2009-06-02 Thread Miroslav Lachman

sth...@nethelp.no wrote:
root filesystem is remounted read write only for some configuration 
changes, then remounted back to read only.



Does this work reliably for you? I tried doing the remounting trick,
both for root and /usr, back in the 4.x time frame. And could never
get it to work - would always end up with inconsistent file systems.


The system has been in production since October 2008 and has never panicked 
during a remount. In this time frame, we got only two deadlocks, caused by 
earlier versions of ZFS.
At this time, files on ZFS are using 28151719 inodes; the storage holds 
daily rsync backups of a dozen webservers and a mailserver.


I am using

mount -u -o current,rw /
[do some configuration work]
sync; sync; sync;
mount -u -o current,ro /

The sync command is probably unnecessary, but I feel safer with it ;o)
(the root filesystem is not using soft updates)

Miroslav Lachman


Re: ZFS NAS configuration question

2009-06-02 Thread Dan Naumov
This reminds me. I was reading the release and upgrade notes of OpenSolaris
2009.06 and noted one thing about upgrading from a previous version to the
new one:

When you pick the upgrade OS option in the OpenSolaris installer, it
checks whether you are using a ZFS root partition and, if you are, it
intelligently offers to take a snapshot of the current root filesystem.
After you finish the upgrade and reboot, the boot menu offers you the option
of booting the newly upgraded version of the OS or, alternatively, _booting
from the snapshot taken by the upgrade installation procedure_.
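
From what I can tell the same thing can be driven by hand on OpenSolaris
with beadm, roughly like this (a sketch; the boot environment name is made up):

beadm create osol-before-upgrade     # clone the current boot environment
beadm list                           # list available boot environments
beadm activate osol-before-upgrade   # boot it on the next reboot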

Reading that made me pause for a second and made me go WOW, this is how
UNIX system upgrades should be done. Any hope of us lowly users ever seeing
something like this implemented in FreeBSD? :)

- Dan Naumov





On Tue, Jun 2, 2009 at 9:47 PM, Zaphod Beeblebrox zbee...@gmail.com wrote:



 The system boots from a pair of drives in a gmirror.  Not because you can't
 boot from ZFS, but because it's just so darn stable (and it predates the use
 of ZFS).

 Really there are two camps here --- booting from ZFS is the use of ZFS as
 the machine's own filesystem.  This is one goal of ZFS that is somewhat
 imperfect on FreeBSD at the moment.  ZFS file servers are another goal
 where booting from ZFS is not really required and only marginally
 beneficial.





Re: ZFS NAS configuration question

2009-06-02 Thread Dan Naumov
A little more info for the (perhaps) curious:

Managing Multiple Boot Environments:
http://dlc.sun.com/osol/docs/content/2009.06/getstart/bootenv.html#bootenvmgr
Introduction to Boot Environments:
http://dlc.sun.com/osol/docs/content/2009.06/snapupgrade/index.html

- Dan Naumov



On Tue, Jun 2, 2009 at 10:39 PM, Dan Naumov dan.nau...@gmail.com wrote:

 This reminds me. I was reading the release and upgrade notes of OpenSolaris 
 2009.6 and noted one thing about upgrading from a previous version to the new 
 one::

 When you pick the upgrade OS option in the OpenSolaris installer, it will 
 check if you are using a ZFS root partition and if you do, it intelligently 
 suggests to take a current snapshot of the root filesystem. After you finish 
 the upgrade and do a reboot, the boot menu offers you the option of booting 
 the new upgraded version of the OS or alternatively _booting from the 
 snapshot taken by the upgrade installation procedure_.

 Reading that made me pause for a second and made me go WOW, this is how 
 UNIX system upgrades should be done. Any hope of us lowly users ever seeing 
 something like this implemented in FreeBSD? :)

 - Dan Naumov





 On Tue, Jun 2, 2009 at 9:47 PM, Zaphod Beeblebrox zbee...@gmail.com wrote:


 The system boots from a pair of drives in a gmirror.  Not because you can't 
 boot from ZFS, but because it's just so darn stable (and it predates the use 
 of ZFS).

 Really there are two camps here --- booting from ZFS is the use of ZFS as 
 the machine's own filesystem.  This is one goal of ZFS that is somewhat 
 imperfect on FreeBSD at the moment.  ZFS file servers are another goal 
 where booting from ZFS is not really required and only marginally beneficial.





Re: ZFS NAS configuration question

2009-06-02 Thread Adam McDougall
I have a proof-of-concept system doing this.  I started with a 7.2 
install on a ZFS root, compiled world and kernel from 8, took a snapshot 
and made a clone for the 7.2 install, and proceeded to upgrade the 
current fs to 8.0.  After updating the loader.conf in the 7.2 ZFS to 
point to its own cloned fs, I can pick which one to boot with a simple 
zpool set bootfs=z/ROOT/7.2 z or zpool set bootfs=z/ROOT/8.0 z before 
rebooting.

I also tried rsyncing from an FFS-based system into a new ZFS filesystem 
in that same zpool, used DESTDIR with installkernel and installworld to 
update the imported OS to support ZFS, set up its boot loader and misc 
config files, and was able to boot from it using zpool to set it as the 
bootfs.  Somewhat like shifting around OS images in a virtualization 
environment, except it's easy to reach inside the image to 
upgrade/modify it and copy them between systems, and there is no execution 
overhead while running one since it's still on bare metal (but only one 
running OS per server, of course). This makes it very easy to swap an OS 
onto another server if you need better or lesser hardware or just want to test.
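
So switching between the two installs is basically just the following
(assuming the pool is called z, matching the dataset names above):

# each BE's own /boot/loader.conf points at its dataset, e.g.
#   vfs.root.mountfrom="zfs:z/ROOT/8.0"
zpool set bootfs=z/ROOT/8.0 z    # boot the 8.0 clone on the next reboot
zpool set bootfs=z/ROOT/7.2 z    # ...or fall back to 7.2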


Dan Naumov wrote:

This reminds me. I was reading the release and upgrade notes of OpenSolaris
2009.6 and noted one thing about upgrading from a previous version to the
new one::

When you pick the upgrade OS option in the OpenSolaris installer, it will
check if you are using a ZFS root partition and if you do, it intelligently
suggests to take a current snapshot of the root filesystem. After you finish
the upgrade and do a reboot, the boot menu offers you the option of booting
the new upgraded version of the OS or alternatively _booting from the
snapshot taken by the upgrade installation procedure_.

Reading that made me pause for a second and made me go WOW, this is how
UNIX system upgrades should be done. Any hope of us lowly users ever seeing
something like this implemented in FreeBSD? :)

- Dan Naumov





On Tue, Jun 2, 2009 at 9:47 PM, Zaphod Beeblebrox zbee...@gmail.com wrote:

  

The system boots from a pair of drives in a gmirror.  Not because you can't
boot from ZFS, but because it's just so darn stable (and it predates the use
of ZFS).

Really there are two camps here --- booting from ZFS is the use of ZFS as
the machine's own filesystem.  This is one goal of ZFS that is somewhat
imperfect on FreeBSD at the moment.  ZFS file servers are another goal
where booting from ZFS is not really required and only marginally
beneficial.







Re: ZFS NAS configuration question

2009-05-31 Thread Aristedes Maniatis


On 31/05/2009, at 4:41 AM, Dan Naumov wrote:


To top that
off, even when/if you do it right, not your entire disk goes to ZFS
anyway, because you still do need a swap and a /boot to be non-ZFS, so
you will have to install ZFS onto a slice and not the entire disk and
even Sun discourages doing that.


ZFS on root is still pretty new to FreeBSD, and until it gets ironed  
out and all the sysinstall tools support it nicely, it isn't hard to  
use a small UFS slice to get things going during boot. And there is  
nothing wrong with putting ZFS onto a slice rather than the entire  
disk: that is a very common approach.


http://www.ish.com.au/solutions/articles/freebsdzfs

Ari Maniatis



--
ish
http://www.ish.com.au
Level 1, 30 Wilson Street Newtown 2042 Australia
phone +61 2 9550 5001   fax +61 2 9550 4001
GPG fingerprint CBFB 84B4 738D 4E87 5E5C  5EFA EF6A 7D2E 3E49 102A




Re: ZFS NAS configuration question

2009-05-30 Thread Louis Mamakos
I built a system recently with 5 drives and ZFS.  I'm not booting off  
a ZFS root, though it does mount a ZFS file system once the system has  
booted from a UFS file system.  Rather than dedicate drives, I simply  
partitioned each of the drives into a 1G partition, and another  
spanning the remainder of the disk.  (In my case, all the drives are  
the same size).  I boot off a gmirror of two partitions off the first  
two drives, and then use the other 3 1G partitions on the remaining 3  
drives as swap partitions.  I take the larger partitions on each of  
the 5 drives and organize them into a raidz2 ZFS pool.
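
In rough commands the layout would be something like this (device names
illustrative; in practice I put glabel names on the slices, as described
in my follow-up):

gmirror label -v boot /dev/da0s1 /dev/da1s1          # mirror the two 1G slices
zpool create z raidz2 da0s2 da1s2 da2s2 da3s2 da4s2  # big slices into raidz2
swapon /dev/da2s1 /dev/da3s1 /dev/da4s1              # swap on the other 1G slices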


My needs are more relating to integrity of the data vs. surviving a  
disk failure without crashing.  So, I don't bother to mirror swap  
partitions to keep running in the event of a drive failure.  But  
that's a decision for you to make.


It's not too tricky to do the install; I certainly didn't need to burn  
a custom CD or anything.  There are some fine cookbooks on the net  
that talk about techniques.  For me, the tricky bit was setting up the  
geom gmirror, which you could probably do from the fixit CD or  
something.  I just did a normal install on the first drive to get a  
full FreeBSD running, and then built the mirrors on a couple of  
other drives, did an install on the mirror (make installworld  
DESTDIR=/mnt) and then just moved the drives around.  And I did a  
full installation in the 1G UFS gmirror file system, just to have a  
full environment to debug from, if necessary, rather than just a /boot.


Just some ideas..

louie


On May 30, 2009, at 2:41 PM, Dan Naumov wrote:


Hey

I am not entirely sure if this question belongs here or to another
list, so feel free to direct me elsewhere :)

Anyways, I am trying to figure out the best way to configure a NAS
system I will soon get my hands on, it's a Tranquil BBS2 (
http://www.tranquilpc-shop.co.uk/acatalog/BAREBONE_SERVERS.html ).
which has 5 SATA ports. Due to budget constraints, I have to start
small, either a single 1,5 TB drive or at most, a small 500 GB system
drive + a 1,5 TB drive to get started with ZFS. What I am looking for
is a configuration setup that would offer maximum possible storage,
while having at least _some_ redundancy and having the possibility to
grow the storage pool without having to reload the entire setup.

Using ZFS root right now seems to involve a fair bit of trickery (you
need to make an .ISO snapshot of -STABLE, burn it, boot from it,
install from within a fixit environment, boot into your ZFS root and
then make and install world again to fix the permissions). To top that
off, even when/if you do it right, not your entire disk goes to ZFS
anyway, because you still do need a swap and a /boot to be non-ZFS, so
you will have to install ZFS onto a slice and not the entire disk and
even Sun discourages doing that. Additionally, there seems to be at
least one reported case of a system failing to boot after having done
installworld on a ZFS root: the installworld process removes the old
libc, tries to install a new one, fails to apply some file flags
that ZFS doesn't support, and leaves it uninstalled, leaving the
system in an unusable state. This can be worked around, but gotchas
like this and the amount of work involved in getting the whole thing
running make me really lean towards having a smaller traditional UFS2
system disk for FreeBSD itself.

So, this leaves me with 1 SATA port used for a FreeBSD disk and 4 SATA
ports available for tinkering with ZFS. What would make the most sense
if I am starting with 1 disk for ZFS and eventually plan on having 4
and want to maximise storage, yet have SOME redundancy in case of a
disk failure? Am I stuck with 2 x 2 disk mirrors or is there some 3+1
configuration possible?

Sincerely,
- Dan Naumov


Re: ZFS NAS configuration question

2009-05-30 Thread Dan Naumov
Is the idea behind leaving 1GB unused on each disk to work around the
problem of potentially being unable to replace a failed device in a
ZFS pool because a 1TB replacement you bought actually has a lower
sector count than your previous 1TB drive (since the replacement
device has to be either the exact same size or bigger than the old
device)?

- Dan Naumov


On Sat, May 30, 2009 at 10:06 PM, Louis Mamakos lo...@transsys.com wrote:
 I built a system recently with 5 drives and ZFS.  I'm not booting off a ZFS
 root, though it does mount a ZFS file system once the system has booted from
 a UFS file system.  Rather than dedicate drives, I simply partitioned each
 of the drives into a 1G partition


Re: ZFS NAS configuration question

2009-05-30 Thread Louis Mamakos

The system that I built had 5 x 72GB SCA SCSI drives.  Just to keep my
own sanity, I decided that I'd configure the fdisk partitioning identically
across all of the drives, so that they all have a 1GB slice and a 71GB
slice.

The drives all have identical capacity, so the second 71GB slice ends up
the same on all of the drives.  I actually end up using glabel to create
a named unit of storage, so that I don't have to worry about getting
the drives inserted into the right holes.
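
The labeling itself is just one glabel call per slice, along the lines of
(matching the names in the output below):

glabel label -v boot0  /dev/da0s1
glabel label -v zpool0 /dev/da0s2
glabel label -v swap2  /dev/da2s1
# ...and so on for the remaining slices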

I figured that 1GB wasn't too far off for both swap partitions (3 of 'em)
plus a pair mirrored to boot from.

I haven't really addressed directly swapping another drive of a slightly
different size, though I've spares and I could always put a larger drive
in and create a slice at the right size.

It looks like this, with all of the slices explicitly named with glabel:

r...@droid[41] # glabel status
Name  Status  Components
 label/boot0 N/A  da0s1
label/zpool0 N/A  da0s2
 label/boot1 N/A  da1s1
label/zpool1 N/A  da1s2
 label/swap2 N/A  da2s1
label/zpool2 N/A  da2s2
 label/swap3 N/A  da3s1
label/zpool3 N/A  da3s2
 label/swap4 N/A  da4s1
label/zpool4 N/A  da4s2

And the ZFS pool references the labeled slices:

r...@droid[42] # zpool status
  pool: z
 state: ONLINE
 scrub: none requested
config:

        NAME              STATE     READ WRITE CKSUM
        z                 ONLINE       0     0     0
          raidz2          ONLINE       0     0     0
            label/zpool0  ONLINE       0     0     0
            label/zpool1  ONLINE       0     0     0
            label/zpool2  ONLINE       0     0     0
            label/zpool3  ONLINE       0     0     0
            label/zpool4  ONLINE       0     0     0

errors: No known data errors

And swap on the other ones:

r...@droid[43] # swapinfo
Device            1024-blocks     Used    Avail Capacity
/dev/label/swap4      1044192        0  1044192     0%
/dev/label/swap3      1044192        0  1044192     0%
/dev/label/swap2      1044192        0  1044192     0%
Total                 3132576        0  3132576     0%

This is the mirrored partition that the system actually boots from. This
maps physically to da0s1 and da1s1. The normal boot0 and boot1/boot2 and
loader operate typically on da0s1a, which is really /dev/mirror/boota:

r...@droid[45] # gmirror status
       Name    Status  Components
mirror/boot  COMPLETE  label/boot0
                       label/boot1

r...@droid[47] # df -t ufs
Filesystem         1024-blocks    Used   Avail Capacity  Mounted on
/dev/mirror/boota      1008582  680708  247188      73%  /bootdir

The UFS partition eventually ends up getting mounted on /bootdir:

r...@droid[51] # cat /etc/fstab
# Device                Mountpoint      FStype  Options         Dump    Pass#
zfs:z/root              /               zfs     rw              0       0
/dev/mirror/boota       /bootdir        ufs     rw,noatime      1       1
/dev/label/swap2        none            swap    sw              0       0
/dev/label/swap3        none            swap    sw              0       0
/dev/label/swap4        none            swap    sw              0       0
/dev/acd0               /cdrom          cd9660  ro,noauto       0       0


But when /boot/loader on the UFS partition reads what it thinks is
/etc/fstab, which eventually ends up in /bootdir/etc/fstab, the root file
system that's mounted is the ZFS filesystem at z/root:


r...@droid[52] # head /bootdir/etc/fstab
# Device                Mountpoint      FStype  Options         Dump    Pass#
z/root                  /               zfs     rw              0       0


And /boot on the ZFS root is symlinked into the UFS filesystem, so it
gets updated when a make installworld happens:

r...@droid[53] # ls -l /boot
lrwxr-xr-x  1 root  wheel  12 May  3 23:00 /boot@ -> bootdir/boot
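
Setting that up is a one-liner on the ZFS root (a sketch; the UFS slice is
mounted at /bootdir as above):

cd / && ln -sf bootdir/boot boot    # /boot -> bootdir/boot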

louie



On May 30, 2009, at 3:15 PM, Dan Naumov wrote:


Is the idea behind leaving 1GB unused on each disk to work around the
problem of potentially being unable to replace a failed device in a
ZFS pool because a 1TB replacement you bought actually has a lower
sector count than your previous 1TB drive (since the replacement
device has to be either of exact same size or bigger than the old
device)?

- Dan Naumov


On Sat, May 30, 2009 at 10:06 PM, Louis Mamakos lo...@transsys.com  
wrote:
I built a system recently with 5 drives and ZFS.  I'm not booting  
off a ZFS
root, though it does mount a ZFS file system once the system has  
booted from
a UFS file system.  Rather than dedicate drives, I simply  
partitioned each

of the drives into a 1G partition



