Re: ZFS boot

2008-10-11 Thread Mike Meyer
On Sat, 11 Oct 2008 20:37:10 +
"Freddie Cash" <[EMAIL PROTECTED]> wrote:
> > Most linux dists don't bother with multiple partitions any more.
> > They just have '/' and maybe a small boot partition, and that's it.
> 
> Heh, that's more proof of the difficulties inherent with old-school
> disk partitioning, compared to pooled storage setups, than an
> endorsement of using a single partition/filesystem.  :)

I think it's more likely that, given you know absolutely nothing about
what the system is going to be used for, you don't know enough to set
up the partitions intelligently, so one partition makes as much sense
as anything else. That's one of the best things about pooled storage:
you can create new file systems for new usages without having to
repartition your disk subsystem.

  http://www.mired.org/consulting.html
Independent Network/Unix/Perforce consultant, email for more information.

O< ascii ribbon campaign - stop html mail - www.asciiribbon.org
___
freebsd-hackers@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-hackers
To unsubscribe, send any mail to "[EMAIL PROTECTED]"


Re: continuous backup solution for FreeBSD

2008-10-11 Thread Mike Meyer
On Sat, 11 Oct 2008 04:24:31 -0700
Jeremy Chadwick <[EMAIL PROTECTED]> wrote:
> > I'm asking, because I want to deploy some zfs fileservers soon, and so
> > far the solution is either PXE boot, or keep one disk UFS (or boot off a 
> > USB)
> > Today's /(root+usr) is somewhere between .5 to 1Gb(kernel+debug+src),
> > and is readonly, so having 1 disk UFS seems to be a pity.
> 
> Hold on a minute.  "One disk" has nothing to do with the filesystem.
> You asked if FreeBSD could boot off of a specific filesystem, and I
> answered that -- I didn't state anything about disk counts.  Now you're
> changing the focus.  :-)
> 
> I'm pretty sure FreeBSD can boot off of gmirror setups (see above,
> boot2/loader should work off of gmirror), which means >1 disk.  You
> do not have to gmirror the entire disk, you can gmirror just a slice
> (AFAIK).
>
> I think (hope?) you can use the "remaining" (e.g. non-UFS/non-gmirror)
> part of the 2nd disk for ZFS as well, otherwise the space would go
> to waste.  The "Root on ZFS configuration" FreeBSD ZFS Wiki page
> seems to imply you can.

You mean like this:

bhuda% gmirror status
       Name    Status  Components
mirror/boot  COMPLETE  ad0s1a
                       ad1s1a
bhuda% zpool status
  pool: internal
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        internal    ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            ad0s1d  ONLINE       0     0     0
            ad1s1d  ONLINE       0     0     0

errors: No known data errors

Yes, I don't get the benefits of having /boot on a zfs partition, but
I do get the benefits of having it on a mirror: automatic duplication,
reads from either device, and I can use either device stand-alone if I
break the mirror.

Note that when FreeBSD boots from a gmirror'ed partition or disk, it
can't boot from the gmirror device itself; the boot blocks don't
understand gmirror. It can, however, boot from any of the devices
participating in the mirror. The mirror device appears after the kernel
is loaded.

Given that I have to have a separate boot partition, having swap
partitions on the drives is a win compared to swapping to a zvol. I'm
going to investigate putting /boot on an SSD of some kind so that ZFS
can have the entire disk.
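
A sketch of how a layout like the one above might be built, assuming two
fresh disks already bsdlabel'ed with matching 'a' (boot) and 'd' (data)
partitions. The device names follow the output above, and these commands
overwrite data, so treat this as illustration only:

```shell
# Mirror the two 'a' partitions for /boot.  gmirror keeps its metadata
# in each component's last sector, which is why the components stay
# usable stand-alone if the mirror is broken.
gmirror label -v boot ad0s1a ad1s1a
newfs /dev/mirror/boot

# Build the ZFS pool as a mirror over the two 'd' partitions.
zpool create internal mirror ad0s1d ad1s1d
```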

  http://www.mired.org/consulting.html
Independent Network/Unix/Perforce consultant, email for more information.

O< ascii ribbon campaign - stop html mail - www.asciiribbon.org


Re: Is it possible to recover from SEGV?

2008-10-11 Thread Nate Eldredge

On Sat, 11 Oct 2008, Yuri wrote:


Let's say I have a signal(3) handler set,
and I know exactly what instruction caused the SEGV and why.

Is there a way to access, from the signal handler, the CPU registers as
they were before the signal, modify some of them, clear the signal, and
continue from the instruction that caused the SEGV initially?


Absolutely.  Declare your signal handler as

void handler(int sig, int code, struct sigcontext *scp);

You will need to cast the pointer passed to signal(3).  struct sigcontext
is defined in <machine/signal.h>, I believe.  struct sigcontext contains
the CPU registers as they were when the faulting instruction began to 
execute.  You can modify them and then return from the signal handler. 
The program will resume the faulting instruction with the new registers. 
You can also alter the copy of the instruction pointer in the struct 
sigcontext if you want it to resume somewhere else.


There is also a libsigsegv which looks like it wraps some of this process 
in a less machine-specific way.


Out of curiosity, what are you looking to achieve with this?  And what 
architecture are you on?


--

Nate Eldredge
[EMAIL PROTECTED]


Is it possible to recover from SEGV?

2008-10-11 Thread Yuri

Let's say I have a signal(3) handler set,
and I know exactly what instruction caused the SEGV and why.

Is there a way to access, from the signal handler, the CPU registers as
they were before the signal, modify some of them, clear the signal, and
continue from the instruction that caused the SEGV initially?

I see that if the signal handler doesn't terminate the process, the signal
is generated again and again. I understand this to mean that the faulting
instruction is rerun if the signal handler didn't terminate the process.
rusage.ru_nsignals is also incremented every time the signal handler
is called.

Yuri

PS: Of course I understand why SEGVs happen in general. I am trying to
understand if it's possible to use SEGV beyond the way it's commonly used.



Re: TIME WARP! Re: HEADS UP: GCC 4.2.0 is coming

2008-10-11 Thread Garrett Cooper
On Wed, Oct 8, 2008 at 1:36 AM, Peter Wemm <[EMAIL PROTECTED]> wrote:
> On Wed, Oct 8, 2008 at 12:57 AM, O. Hartmann
> <[EMAIL PROTECTED]> wrote:
>> Alexander Kabaev wrote:
>>>
>>> On Fri, 18 May 2007 19:20:07 -0400
>>> Alexander Kabaev <[EMAIL PROTECTED]> wrote:
>>>
 HEADS UP: I will start importing GCC 4.2.0 bits in about one hour and
 plan to finish in a couple of hours after that.

 The src/ tree will be utterly broken meanwhile. I'll send an 'all
 clear' message when done.
>>>
>>> Done.
>>>
>>
>> Just for those who aren't on the cutting edge: why gcc 4.2.0 and not 4.2.1
>> as it is used in 7.X?
>>
>> Regards,
>> O.
>
> Sorry about that.  I accidentally revived a bunch of stuck email
> messages from our mailing list processing system.  These messages from
> 2007 came back to life somehow.
>
> (Hint: Mailman's 'unshunt' command doesn't give a usage message)

Apparently you mastered what Robert Zemeckis was trying to do back in
1985  XD.
-Garrett


Re: ZFS boot

2008-10-11 Thread Nate Eldredge

On Sat, 11 Oct 2008, Pegasus Mc Cleaft wrote:


FWIW, my system is amd64 with 1 G of memory, which the page implies is
insufficient.  Is it really?


This may be purely subjective, as I have never benchmarked the speeds, but
when I was first testing zfs on an i386 machine with 1 GB of RAM, I thought
the performance was mediocre. However, when I loaded the system on a
quad-core Core 2 with 8 GB of RAM, I was quite impressed. I put localized
changes in my /boot/loader.conf to give the kernel more breathing room and
disabled the prefetch for zfs.

#more loader.conf
vm.kmem_size_max="1073741824"
vm.kmem_size="1073741824"
vfs.zfs.prefetch_disable=1


I was somewhat confused by the suggestions on the wiki.  Do the kmem_size 
sysctls affect the allocation of *memory* or of *address space*?  It seems 
a bit much to reserve 1 G of memory solely for the use of the kernel, 
especially in my case when that's all I have :)  But on amd64, it's 
welcome to have terabytes of address space if it will help.



The best advice I can give is for you to find an old machine and test-bed
zfs for yourself. I personally have been pleased with it, and it has saved
my machines' data 4 times already (dying hardware, unexpected power
bounces, etc).


Sure, but if my "new" machine isn't studly enough to run it, there's no 
hope for an old machine.  So I'm trying to figure out what I actually 
need.


--

Nate Eldredge
[EMAIL PROTECTED]


Re: ZFS boot

2008-10-11 Thread Pegasus Mc Cleaft
On Saturday 11 October 2008 21:53:35 Nate Eldredge wrote:
> On Sat, 11 Oct 2008, Freddie Cash wrote:
> > On 10/11/08, Matthew Dillon <[EMAIL PROTECTED]> wrote:
> >> With regards to the traditional BSD partitioning scheme, having a
> >> separate /usr, /home, /tmp, etc... there's no reason to do that
> >> stuff any more with ZFS (or HAMMER).
> >
> > As separate partitions, no.  As separate filesystems, definitely.
> >
> > While HAMMER PFSes may not support these things yet, ZFS allows you to
> > tailor each filesystem to its purpose.  For example, you can enable
> > compression on /usr/ports, but have a separate /usr/ports/distfiles
> > and /usr/ports/work that aren't compressed.  Or /usr/src compressed
> > and /usr/obj not.  Have a small record (block) size for /usr/src, but
> > a larger one for /home.  Give each user a separate filesystem for
> > their /home/, with separate snapshot policies, quotas, and
> > reservations (initial filesystem size).
>
> All this about ZFS sounds great, and I'd like to try it out, but some of
> the bugs, etc, listed at http://wiki.freebsd.org/ZFSKnownProblems are
> rather alarming.  Even on a personal machine, I don't want these features
> at the cost of an unstable system.  Is that list still current?

I don't know if that list is completely accurate any more, but I can tell
you from my own personal experience with ZFS that it has been quite good. I
have two servers (one is my test-bed at home, the other a production server
running mostly MySQL at work) and I have never experienced the dead-locking
problem.


>
> FWIW, my system is amd64 with 1 G of memory, which the page implies is
> insufficient.  Is it really?

This may be purely subjective, as I have never benchmarked the speeds, but
when I was first testing zfs on an i386 machine with 1 GB of RAM, I thought
the performance was mediocre. However, when I loaded the system on a
quad-core Core 2 with 8 GB of RAM, I was quite impressed. I put localized
changes in my /boot/loader.conf to give the kernel more breathing room and
disabled the prefetch for zfs.

#more loader.conf
vm.kmem_size_max="1073741824"
vm.kmem_size="1073741824"
vfs.zfs.prefetch_disable=1

The best advice I can give is for you to find an old machine and test-bed
zfs for yourself. I personally have been pleased with it, and it has saved
my machines' data 4 times already (dying hardware, unexpected power
bounces, etc).

As a side note, my production machine boots off a dedicated UFS drive
(where I also have a slice for the swap). /usr, /var, /var/db and /usr/home
are zfs.  My test server at home only has /usr/home as zfs. I find it
easier, when I kill the home machine, to just reload/rebuild the OS,
rebuild the applications, and re-chown/chgrp the home directories.

Peg


Re: ZFS boot

2008-10-11 Thread Xin LI
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Hi, Matt,

Matthew Dillon wrote:
[...]
> /boot can be as complex as boot2 allows.  There's nothing preventing
> it from being RAIDed if boot2 supported that, and there's nothing
> preventing it (once you had ZFS boot capabilities) from being ZFS
> using a topology supported by boot2.  Having a separate /boot allows
> your filesystem root to use topologies boot2 would otherwise not
> support.

I believe that it's a good idea to separate / from the zpool for other
file systems, or even use a UFS /.  My experience with ZFS on my laptop
shows that disk failures can be more easily fixed if there are some
utilities available in the UFS /, even when ZFS is used as /.  Another
issue with a ZFS / is that the snapshot rollback feature generally does
not work on / since it needs the mountpoint to be unmounted.

One thing that I found very useful is the new GPT boot feature on 8.0,
which also works on older BIOSes because the protective MBR handles the
bootstrap to the actual GPT boot code.  Now we have a 15-block gptboot
that can boot FreeBSD from UFS; however, this boot code can be virtually
any size the BIOS supports, so we can embed more logic there.
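
Rough shape of that boot path on an 8.0 system (the disk name ad0 and the
partition sizes are assumptions, and gpart syntax on 8.0 snapshots may
differ slightly):

```shell
gpart create -s GPT ad0                      # GPT plus protective MBR
gpart add -b 34 -s 128 -t freebsd-boot ad0   # small boot partition
gpart bootcode -b /boot/pmbr -p /boot/gptboot -i 1 ad0
gpart add -t freebsd-ufs ad0                 # the UFS / that gptboot reads
```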

Cheers,
-BEGIN PGP SIGNATURE-
Version: GnuPG v2.0.9 (FreeBSD)

iEYEARECAAYFAkjxHV0ACgkQi+vbBBjt66CpXgCfWstsxNc3B4xOzNTxz9/kdl3Y
/WYAnjqiV5H8xQYxGgZTnwWieuG6ZZij
=LH+x
-END PGP SIGNATURE-


Re: ZFS boot

2008-10-11 Thread Freddie Cash
On 10/11/08, Danny Braniss <[EMAIL PROTECTED]> wrote:
>> > I'm asking, because I want to deploy some zfs fileservers soon, and so
>> > far the solution is either PXE boot, or keep one disk UFS (or boot off a
>> > USB)

For the servers we're deploying FreeBSD+ZFS on, mainly large backup
systems with 24 drives, we're putting / onto either CompactFlash
(using IDE adapters) or USB sticks (using internal connectors), using
gmirror to provide fail-over for /.  That way, we can boot off UFS,
have full access to single-user mode and /rescue, and use every bit of
each disk for ZFS.  Works quite nicely.

>> > Today's /(root+usr) is somewhere between .5 to 1Gb(kernel+debug+src),
>> > and is readonly, so having 1 disk UFS seems to be a pity.

/ by itself (no /usr, /home, /tmp, or /var) is under 300 MB on our
systems (FreeBSD 7-STABLE from August, amd64).  Definitely not worth
dedicating an entire 500 GB drive to, or even a single slice or
partition to.  By putting / onto separate media (like CF, USB,
whatever), you can dedicate all your harddrive space to ZFS.

> /OT
> Initially, I was not thrilled with ZFS, but once you cross the

Once you start using ZFS features, especially snapshots, it's really
hard to move to non-pooled-storage setups.  Even LVM on Linux becomes
hard to work with.  There's just no easier way to work with multi-TB
storage setups using 10+ drives.

Even for smaller systems with only 3 drives, it's so much nicer
working with pooled storage systems like ZFS.  My home server uses a 2
GB USB stick for / with 3x 120 GB drives for ZFS, with separate
filesystems for /usr, /usr/ports, /usr/src, /usr/obj, /usr/local,
/home, /var, and /tmp.  No fussing around with partition sizes ahead
of time is probably the single greatest feature, with
instant/unlimited snapshots a very close second.

>> I think (hope?) you can use the "remaining" (e.g. non-UFS/non-gmirror)
>> part of the 2nd disk for ZFS as well, otherwise the space would go
>> to waste.  The "Root on ZFS configuration" FreeBSD ZFS Wiki page
>> seems to imply you can.

I did this for a while.  3x 120 GB drives configured as:
   10 GB slice for /
    2 GB slice for swap
  108 GB slice for ZFS

The first slice was configured as a 3-way gmirror, and the last slice
was configured as a raidz pool.   But performance wasn't that great.
Moved / to a USB stick, and dedicated the entire drives to the zpool,
and things have been a lot smoother.

-- 
Freddie Cash
[EMAIL PROTECTED]


Re: ZFS boot

2008-10-11 Thread Nate Eldredge

On Sat, 11 Oct 2008, Freddie Cash wrote:


On 10/11/08, Matthew Dillon <[EMAIL PROTECTED]> wrote:

With regards to the traditional BSD partitioning scheme, having a
separate /usr, /home, /tmp, etc... there's no reason to do that stuff
any more with ZFS (or HAMMER).


As separate partitions, no.  As separate filesystems, definitely.

While HAMMER PFSes may not support these things yet, ZFS allows you to
tailor each filesystem to its purpose.  For example, you can enable
compression on /usr/ports, but have a separate /usr/ports/distfiles
and /usr/ports/work that aren't compressed.  Or /usr/src compressed
and /usr/obj not.  Have a small record (block) size for /usr/src, but
a larger one for /home.  Give each user a separate filesystem for
their /home/, with separate snapshot policies, quotas, and
reservations (initial filesystem size).


All this about ZFS sounds great, and I'd like to try it out, but some of 
the bugs, etc, listed at http://wiki.freebsd.org/ZFSKnownProblems are 
rather alarming.  Even on a personal machine, I don't want these features 
at the cost of an unstable system.  Is that list still current?


FWIW, my system is amd64 with 1 G of memory, which the page implies is 
insufficient.  Is it really?


--

Nate Eldredge
[EMAIL PROTECTED]


Re: ZFS boot

2008-10-11 Thread Freddie Cash
On 10/11/08, Matthew Dillon <[EMAIL PROTECTED]> wrote:
> With regards to the traditional BSD partitioning scheme, having a
> separate /usr, /home, /tmp, etc... there's no reason to do that stuff
> any more with ZFS (or HAMMER).

As separate partitions, no.  As separate filesystems, definitely.

While HAMMER PFSes may not support these things yet, ZFS allows you to
tailor each filesystem to its purpose.  For example, you can enable
compression on /usr/ports, but have a separate /usr/ports/distfiles
and /usr/ports/work that aren't compressed.  Or /usr/src compressed
and /usr/obj not.  Have a small record (block) size for /usr/src, but
a larger one for /home.  Give each user a separate filesystem for
their /home/, with separate snapshot policies, quotas, and
reservations (initial filesystem size).

Creating new filesystems with ZFS is as simple as "zfs create -o
mountpoint=/wherever pool/fsname".  If you put a little time into
planning the hierarchy/structure, you can take advantage of the
property-inheritance features of ZFS as well.
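
As a sketch of the tailoring described above (the pool name "tank" and
the quota numbers are made up):

```shell
zfs create -p -o compression=on tank/usr/ports
zfs create -o compression=off tank/usr/ports/distfiles
zfs create -o compression=off tank/usr/ports/work
zfs create -o recordsize=16K tank/usr/src
zfs create -p -o quota=10G tank/home/alice
```

Descendants such as tank/usr/ports/distfiles inherit their parent's
properties unless explicitly overridden, so a well-planned hierarchy
means most filesystems need no options at all.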

>You just need one, and can break it
> down into separate management domains within the filesystem
> (e.g. HAMMER PFS's).

Similar kind of idea.

> Most linux dists don't bother with multiple partitions any more.
> They just have '/' and maybe a small boot partition, and that's it.

Heh, that's more proof of the difficulties inherent with old-school
disk partitioning, compared to pooled storage setups, than an
endorsement of using a single partition/filesystem.  :)

-- 
Freddie Cash
[EMAIL PROTECTED]


Re: ZFS boot

2008-10-11 Thread Matthew Dillon
:To Matt:
:   since 'small' nowadays is big enough to hold /, what advantages are there
:in having root split up?
:also, having this split personality, what if the disk goes? the hammer/zfs
:is probably raided ...

You mean /boot + root , or do you mean /root vs /usr vs /home?  I'll
answer both.

With regards to /boot + root.  A small separate /boot partition
(256m) allows your root filesystem to use an arbitrarily complex
topology.  e.g. multiple geom layers, weird zfs setups, etc.  So
you get flexibility that you would otherwise not have if you went
with a directly-bootable ZFS root.

/boot can be as complex as boot2 allows.  There's nothing preventing
it from being RAIDed if boot2 supported that, and there's nothing
preventing it (once you had ZFS boot capabilities) from being ZFS
using a topology supported by boot2.  Having a separate /boot allows
your filesystem root to use topologies boot2 would otherwise not
support.

With regards to the traditional BSD partitioning scheme, having a
separate /usr, /home, /tmp, etc... there's no reason to do that stuff
any more with ZFS (or HAMMER).  You just need one, and can break it
down into separate management domains within the filesystem
(e.g. HAMMER PFS's).  That's a generic statement of course, there
will always be situations where you might want to partition things
out separately.

Most linux dists don't bother with multiple partitions any more.
They just have '/' and maybe a small boot partition, and that's it.

-Matt
Matthew Dillon 
<[EMAIL PROTECTED]>


ZFS boot

2008-10-11 Thread Danny Braniss

> > > > so can Freebsd boot off a ZFS root? in stable? current? ...
> > > 
> > > boot0 doesn't apply here; it cares about what's at sector 0 on the
> > > disk, not filesystems.
> > > 
> > > boot2/loader does not speak ZFS -- this is why you need the /boot UFS2
> > > partition.  This is an annoyance.
> > > 
> > > For the final "stage/step", vfs.root.mountfrom="zfs:mypool/root" in
> > > loader.conf will cause FreeBSD to mount the root filesystem from ZFS.
> > > This works fine.
> > 
> > so the answer is:
> > yes, if you have only one disk.
> > no, if you have ZFS over many disks
> > 
> > because I see no advantage in the springboard solution where ZFS is used to
> > cover several disks.
> > 
> > I'm asking, because I want to deploy some zfs fileservers soon, and so
> > far the solution is either PXE boot, or keep one disk UFS (or boot off a 
> > USB)
> > Today's /(root+usr) is somewhere between .5 to 1Gb(kernel+debug+src),
> > and is readonly, so having 1 disk UFS seems to be a pity.
> 
> Hold on a minute.  "One disk" has nothing to do with the filesystem.
> You asked if FreeBSD could boot off of a specific filesystem, and I
> answered that -- I didn't state anything about disk counts.  Now you're
> changing the focus.  :-)
> 
not intentionally, but once you mention boot0/2, bsdlabel, slice/partition ...

/OT
Initially, I was not thrilled with ZFS, but once you cross into
few-hundred-gigabyte filesystems UFS is impractical, and though old
sysadmins will have to be re-educated (zfs catch-all commands instead of
mount/fsck/export/blah...), it is the (only?) way for the new
terabyte world. So having bitten the bullet, and done some experiments,
I'm stuck as to what to do with /.
OT/

> I'm pretty sure FreeBSD can boot off of gmirror setups (see above,
> boot2/loader should work off of gmirror), which means >1 disk.  You
> do not have to gmirror the entire disk, you can gmirror just a slice
> (AFAIK).
but gmirror is not ZFS, and yes it can, why not.

> 
> I think (hope?) you can use the "remaining" (e.g. non-UFS/non-gmirror)
> part of the 2nd disk for ZFS as well, otherwise the space would go
> to waste.  The "Root on ZFS configuration" FreeBSD ZFS Wiki page
> seems to imply you can.

The idea is to use the 'free/remaining' as part of the BIG ZFS 'array'

[ED note: I've hijacked the 'Re: continuous backup solution for FreeBSD']

To Matt:
since 'small' nowadays is big enough to hold /, what advantages are there
in having root split up?
also, having this split personality, what if the disk goes? the hammer/zfs
is probably raided ...
[btw, having a small boot partition brings back bad memories: the first thing
I did on a new Compaq was boot it diskless, repartition the disk, newfs, and
I could no longer boot it :-) - part of the BIOS was there]

To Doug:
> ZFS boot is coming.
great! any time estimate? just curious, no pressure :-)

some food for thought:
In the past RAID 5 would reduce throughput considerably, though
nowadays it's hardly noticeable, so I guess my reluctance to having
a swap partition RAIDed is gone.

danny





Re: continuous backup solution for FreeBSD

2008-10-11 Thread Matthew Dillon

:> boot2/loader does not speak ZFS -- this is why you need the /boot UFS2
:> partition.  This is an annoyance.
:> 
:> For the final "stage/step", vfs.root.mountfrom="zfs:mypool/root" in
:> loader.conf will cause FreeBSD to mount the root filesystem from ZFS.
:> This works fine.
:
:so the answer is:
:   yes, if you have only one disk.
:   no, if you have ZFS over many disks
:
:because I see no advantage in the springboard solution where ZFS is used to
:cover several disks.
:
:I'm asking, because I want to deploy some zfs fileservers soon, and so
:far the solution is either PXE boot, or keep one disk UFS (or boot off a USB)
:Today's /(root+usr) is somewhere between .5 to 1Gb(kernel+debug+src),
:and is readonly, so having 1 disk UFS seems to be a pitty.
:
:danny

I think it is perfectly acceptable to have a /boot + ZFS style
solution, where /boot is a small ~256M UFS filesystem that the
system actually boots from (containing only the kernel, modules,
loader.conf, etc), and ZFS is the root filesystem.  In a running
system /boot would be mounted under the ZFS root.

All I needed was a line in /boot/loader.conf to tell the kernel
where the root FS was.  In my case, I pointed it at HAMMER.

vfs.root.mountfrom="hammer:ad0s1d"

This gives you the flexibility of being able to have as complex a
root FS as you want.

Filesystem    1K-blocks    Used    Avail Capacity  Mounted on
HAMMER_ROOT    36388864 9789440 26599424    27%    /
/dev/ad0s1a      257998  100074   137286    42%    /boot
/pfs/@@-1:1    36388864 9789440 26599424    27%    /usr
/pfs/@@-1:3    36388864 9789440 26599424    27%    /var
/pfs/@@-1:6    36388864 9789440 26599424    27%    /tmp
/pfs/@@-1:7    36388864 9789440 26599424    27%    /home
/pfs/@@-1:5    36388864 9789440 26599424    27%    /var/tmp
/pfs/@@-1:2    36388864 9789440 26599424    27%    /usr/obj
/pfs/@@-1:4    36388864 9789440 26599424    27%    /var/crash
procfs                4       4        0   100%    /proc

The /boot is small enough that it can be dealt with numerous ways,
including simple duplication if you have multiple disks (have an
adXs1a on two drives).  And if you were really that worried you
could put /boot on a SSD.  Frankly, anything that has approximately
the same MTBF as the motherboard itself is suitable; there's really
no point trying to make /boot disk-redundant when the motherboard
and memory aren't redundant.  If you have more than one HD connected
to the system, and you want boot redundancy, then you also likely
have the $$ to purchase a tiny SSD for your /boot.

The big problem trying to boot from a completely generic FS setup
is that it tends to severely limit your options.  You might want
more flexibility in your root filesystem than you could otherwise
accommodate if /boot were integrated into it.

-Matt
Matthew Dillon 
<[EMAIL PROTECTED]>


Re: continuous backup solution for FreeBSD

2008-10-11 Thread Doug Rabson


On 11 Oct 2008, at 12:07, Danny Braniss wrote:


On Sat, Oct 11, 2008 at 12:35:16PM +0200, Danny Braniss wrote:

On Fri, 10 Oct 2008 08:42:49 -0700
Jeremy Chadwick <[EMAIL PROTECTED]> wrote:


On Fri, Oct 10, 2008 at 11:29:52AM -0400, Mike Meyer wrote:

On Fri, 10 Oct 2008 07:41:11 -0700
Jeremy Chadwick <[EMAIL PROTECTED]> wrote:


On Fri, Oct 10, 2008 at 03:53:38PM +0300, Evren Yurtesen wrote:

Mike Meyer wrote:

On Fri, 10 Oct 2008 02:34:28 +0300
[EMAIL PROTECTED] wrote:


Quoting "Oliver Fromme" <[EMAIL PROTECTED]>:


These features are readily available right now on FreeBSD.
You don't have to code anything.

Well with 2 downsides,


Once you actually try and implement these solutions, you'll  
see that

your "downsides" are largely figments of your imagination.


So if it is my imagination, how can I actually convert UFS to  
ZFS
easily? Everybody seems to say that this is easy and that is  
easy.


It's not that easy.  I really don't know why people are  
telling you it

is.


Maybe because it is? Of course, it *does* require a little prior
planning, but anyone with more than a few months experience as a
sysadmin should be able to deal with it without to much hassle.

Converting some filesystems are easier than others; /home (if  
you

create one) for example is generally easy:

1) ZFS fs is called foo/home, mounted as /mnt
2) fstat, ensure nothing is using /home -- if something is,  
shut it

  down or kill it
3) rsync or cpdup /home files to /mnt
4) umount /home
5) zfs set mountpoint=/home foo/home
6) Restart said processes or daemons

"See! It's like I said! EASY!"  You can do this with /var as  
well.


Yup. Of course, if you've done it that way, you're not thinking  
ahead,

because:

Now try /usr.  Hope you've got /rescue available, because  
once /usr/lib
and /usr/libexec disappear, you're in trouble.  Good luck  
doing this in

multi-user, too.


Oops. You F'ed up. If you'd done a little planning, you would  
have
realized that / and /usr would be a bit of extra trouble, and  
planned

accordingly.

And finally, the root fs.  Whoever says "this is easy" is  
kidding

themselves; it's a pain.


Um, no, it wasn't. Of course, I've been doing this long enough  
to have
a system set up to make this kind of thing easy. My system disk  
is on

a mirror, and I do system upgrades by breaking the mirror and
upgrading one disk, making everything work, then putting the  
mirror

back together. And moving to zfs on root is a lot like a system
upgrade:

1) Break the mirror (mirrors actually, as I mirrored file  
systems).

2) Repartition the unused drive into /boot, swap & data
3) Build zfs & /boot according to the instructions on ZFSOnRoot
  wiki, just copying /boot and / at this point.
4) Boot the zfs disk in single user mode.
5) If 4 fails, boot back to the ufs disk so you're operational  
while
  you contemplate what went wrong, then repeat step 3.  
Otherwise, go

  on to step 6.
6) Create zfs file systems as appropriate (given that zfs file
  systems are cheap, and have lots of cool features that ufs
  file systems don't have, you probably want to create more than
  you had before, doing thing like putting SQL serves on their
  own file system with appropriate blocking, etc, but you'll  
want to

  have figured all this out before starting step 1).
7) Copy data from the ufs file systems to their new homes,
  not forgetting to take them out of /etc/fstab.
8) Reboot on the zfs disk.
9) Test until you're happy that everything is working properly,
  and be prepared to reboot on the ufs disk if something is  
broken.

10) Reformat the ufs disk to match the zfs one. Gmirror /boot,
   add the data partition to the zfs pool so it's mirrored, and
   you should have already been using swap.

This is 10 steps to your "easy" 6, but two of the extra steps are
testing you didn't include, and 1 of the steps is a failure  
recovery
step that shouldn't be necessary. So - one more step than your  
easy

process.


Of course, the part you seem to be (intentionally?) forgetting:  
most
people are not using gmirror.  There is no 2nd disk.  They have  
one disk
with a series of UFS2 filesystems, and they want to upgrade.   
That's how
I read Evren's "how do I do this? You say it's easy..." comment,  
and I

think his viewpoint is very reasonable.


Granted, most people don't think about system upgrades when they build
a system, so they wind up having to do extra work. In particular,
Evren is talking about spending thousands of dollars on proprietary
software, not to mention the cost of the server that all this data is
going to flow to, for a backup solution. Compared to that, the cost of
a few spare disks and the work to install them are trivial.

Yeah, this isn't something you do on a whim. On the other hand, it's
not something that any competent sysadmin would consider a pain. For a
good senior admin, it's a lot easier than doing an OS upgrade from
source, which should be the next step up from trivial.


Re: VirtualBox looks for FreeBSD developer

2008-10-11 Thread Edwin Groothuis
On Fri, Oct 10, 2008 at 09:42:04PM +0400, Dmitry Marakasov wrote:
> A little while ago I was misled by certain people and got the
> idea that VirtualBox actually works on FreeBSD, so I made a draft
> port for it. It doesn't actually work, but since I spent several
> hours hacking on it and made a bunch of (likely) useful patches, here
> it is; feel free to use it for any purpose. I hope one of the kernel
> hackers will actually make it work ;)

Have a talk with bms@ about it, he had some interesting working
code too.

Edwin

-- 
Edwin Groothuis Website: http://www.mavetju.org/
[EMAIL PROTECTED]   Weblog:  http://www.mavetju.org/weblog/



