Re: continuous backup solution for FreeBSD

2008-10-11 Thread Danny Braniss
 On Fri, 10 Oct 2008 08:42:49 -0700
 Jeremy Chadwick [EMAIL PROTECTED] wrote:
 
  On Fri, Oct 10, 2008 at 11:29:52AM -0400, Mike Meyer wrote:
   On Fri, 10 Oct 2008 07:41:11 -0700
   Jeremy Chadwick [EMAIL PROTECTED] wrote:
   
On Fri, Oct 10, 2008 at 03:53:38PM +0300, Evren Yurtesen wrote:
 Mike Meyer wrote:
 On Fri, 10 Oct 2008 02:34:28 +0300
 [EMAIL PROTECTED] wrote:

 Quoting Oliver Fromme [EMAIL PROTECTED]:

 These features are readily available right now on FreeBSD.
 You don't have to code anything.
 Well with 2 downsides,

 Once you actually try and implement these solutions, you'll see that
 your downsides are largely figments of your imagination.

 So if it is my imagination, how can I actually convert UFS to ZFS  
 easily? Everybody seems to say that this is easy and that is easy.

It's not that easy.  I really don't know why people are telling you it
is.
   
    Maybe because it is? Of course, it *does* require a little prior
    planning, but anyone with more than a few months' experience as a
    sysadmin should be able to deal with it without too much hassle.
   
Converting some filesystems is easier than converting others; /home (if you
create one), for example, is generally easy:

1) ZFS fs is called foo/home, mounted as /mnt
2) fstat, ensure nothing is using /home -- if something is, shut it
   down or kill it
3) rsync or cpdup /home files to /mnt
4) umount /home
5) zfs set mountpoint=/home foo/home
6) Restart said processes or daemons

See! It's like I said! EASY!  You can do this with /var as well.
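
Spelled out as commands, those six steps amount to something like this
minimal sketch (assuming the pool already exists and is named foo, as in
step 1, and that rsync is installed):

    zfs create -o mountpoint=/mnt foo/home    # step 1
    fstat | grep /home                        # step 2: nothing should show up
    rsync -aH /home/ /mnt/                    # step 3 (or use cpdup)
    umount /home                              # step 4; drop the /home line
                                              #   from /etc/fstab as well
    zfs set mountpoint=/home foo/home         # step 5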
   
   Yup. Of course, if you've done it that way, you're not thinking ahead,
   because:
   
Now try /usr.  Hope you've got /rescue available, because once /usr/lib
and /usr/libexec disappear, you're in trouble.  Good luck doing this in
multi-user, too.
   
   Oops. You F'ed up. If you'd done a little planning, you would have
   realized that / and /usr would be a bit of extra trouble, and planned
   accordingly.
   
And finally, the root fs.  Whoever says this is easy is kidding
themselves; it's a pain.
   
   Um, no, it wasn't. Of course, I've been doing this long enough to have
   a system set up to make this kind of thing easy. My system disk is on
   a mirror, and I do system upgrades by breaking the mirror and
   upgrading one disk, making everything work, then putting the mirror
   back together. And moving to zfs on root is a lot like a system
   upgrade:
   
   1) Break the mirror (mirrors actually, as I mirrored file systems).
    2) Repartition the unused drive into /boot, swap & data
    3) Build zfs & /boot according to the instructions on the ZFSOnRoot
   wiki page, just copying /boot and / at this point.
   4) Boot the zfs disk in single user mode.
   5) If 4 fails, boot back to the ufs disk so you're operational while
  you contemplate what went wrong, then repeat step 3. Otherwise, go
  on to step 6.
    6) Create zfs file systems as appropriate (given that zfs file
   systems are cheap, and have lots of cool features that ufs
   file systems don't have, you probably want to create more than
   you had before, doing things like putting SQL servers on their
   own file system with an appropriate record size, etc., but you'll
   want to have figured all this out before starting step 1).
   7) Copy data from the ufs file systems to their new homes,
  not forgetting to take them out of /etc/fstab.
   8) Reboot on the zfs disk.
   9) Test until you're happy that everything is working properly,
  and be prepared to reboot on the ufs disk if something is broken. 
   10) Reformat the ufs disk to match the zfs one. Gmirror /boot,
   add the data partition to the zfs pool so it's mirrored, and
   you should have already been using swap.
   
    This is 10 steps to your easy 6, but two of the extra steps are
    testing that you didn't include, and one of the steps is a failure
    recovery step that shouldn't be necessary. So: one more step than
    your easy process.
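
    For the curious, steps 1-3, 6 and 7 come down to something like the
    sketch below on a GPT-partitioned disk.  This is a sketch only: the
    device names (gm0, ad6), the pool name (tank) and the sizes are made
    up, boot-block installation is omitted, and the ZFSOnRoot wiki page
    has the authoritative recipe.

   gmirror remove gm0 ad6                  # 1: free one disk from the mirror
   gpart create -s gpt ad6                 # 2: repartition it
   gpart add -t freebsd-ufs  -s 512M ad6   #    small UFS /boot  -> ad6p1
   gpart add -t freebsd-swap -s 4G   ad6   #    swap             -> ad6p2
   gpart add -t freebsd-zfs          ad6   #    the rest for ZFS -> ad6p3
   zpool create tank ad6p3                 # 3: pool on the data partition
   zfs create tank/usr                     # 6: extra filesystems as needed
   rsync -aH /usr/ /tank/usr/              # 7: copy, then trim /etc/fstab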
  
  Of course, the part you seem to be (intentionally?) forgetting: most
  people are not using gmirror.  There is no 2nd disk.  They have one disk
  with a series of UFS2 filesystems, and they want to upgrade.  That's how
  I read Evren's "how do I do this? You say it's easy..." comment, and I
  think his viewpoint is very reasonable.
 
 Granted, most people don't think about system upgrades when they build
 a system, so they wind up having to do extra work. In particular,
 Evren is talking about spending thousands of dollars on proprietary
 software, not to mention the cost of the server that all this data is
 going to flow to, for a backup solution. Compared to that, the cost of
 a few spare disks and the work to install them are trivial.
 
   Yeah, this isn't something you do on a whim. On the other hand, it's
   not something that any competent sysadmin would consider a pain. For a
  

Re: VirtualBox looks for FreeBSD developer

2008-10-11 Thread Edwin Groothuis
On Fri, Oct 10, 2008 at 09:42:04PM +0400, Dmitry Marakasov wrote:
 A little while ago I was misled by certain people into thinking that
 VirtualBox actually works on FreeBSD, so I've made a draft port for it.
 It doesn't actually work, but since I've spent several hours hacking on
 it and made a bunch of (likely) useful patches, here it is; feel free to
 use it for any purpose. I hope one of the kernel hackers will actually
 make it work ;)

Have a talk with bms@ about it; he had some interesting working
code too.

Edwin

-- 
Edwin Groothuis Website: http://www.mavetju.org/
[EMAIL PROTECTED]   Weblog:  http://www.mavetju.org/weblog/


Re: ZFS boot

2008-10-11 Thread Matthew Dillon
:To Matt:
:   since 'small' nowadays is big enough to hold /, what advantages are
:   there in having root split up?
:also, having this split personality, what if the disk goes? the hammer/zfs
:is probably raided ...

You mean /boot + root, or do you mean root vs /usr vs /home?  I'll
answer both.

With regards to /boot + root: a small separate /boot partition
(256MB) allows your root filesystem to use an arbitrarily complex
topology, e.g. multiple geom layers, weird zfs setups, etc.  So
you get flexibility that you would otherwise not have if you went
with a directly-bootable ZFS root.

/boot can be as complex as boot2 allows.  There's nothing preventing
it from being RAIDed if boot2 supported that, and there's nothing
preventing it (once you had ZFS boot capabilities) from being ZFS
using a topology supported by boot2.  Having a separate /boot allows
your filesystem root to use topologies boot2 would otherwise not
support.
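
As a concrete illustration (a sketch only; the device names and sizes
are made up): /boot itself stays trivial, one small UFS partition that
the boot blocks understand, while the root filesystem behind it can be
as exotic as you like -- say a raidz pool spanning several disks, which
boot2 could never read directly:

    gpart add -t freebsd-ufs -s 256M da0       # becomes /boot
    zpool create tank raidz da0p2 da1p2 da2p2  # the zfs partitions on
    zfs create tank/root                       #   three disks; root lives here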

With regards to the traditional BSD partitioning scheme, having a
separate /usr, /home, /tmp, etc... there's no reason to do that stuff
any more with ZFS (or HAMMER).  You just need one filesystem, and can
break it down into separate management domains within it
(e.g. HAMMER PFS's).  That's a generic statement, of course; there
will always be situations where you might want to partition things
out separately.

Most Linux distributions don't bother with multiple partitions any more.
They just have '/' and maybe a small boot partition, and that's it.

-Matt
Matthew Dillon 
[EMAIL PROTECTED]


Re: ZFS boot

2008-10-11 Thread Freddie Cash
On 10/11/08, Matthew Dillon [EMAIL PROTECTED] wrote:
 With regards to the traditional BSD partitioning scheme, having a
 separate /usr, /home, /tmp, etc... there's no reason to do that stuff
 any more with ZFS (or HAMMER).

As separate partitions, no.  As separate filesystems, definitely.

While HAMMER PFSes may not support these things yet, ZFS allows you to
tailor each filesystem to its purpose.  For example, you can enable
compression on /usr/ports, but have a separate /usr/ports/distfiles
and /usr/ports/work that aren't compressed.  Or /usr/src compressed
and /usr/obj not.  Have a small record (block) size for /usr/src, but
a larger one for /home.  Give each user a separate filesystem for
their /home/username, with separate snapshot policies, quotas, and
reservations (guaranteed minimum space).

Creating new filesystems with ZFS is as simple as "zfs create -o
mountpoint=/wherever pool/fsname".  If you put a little time into
planning the hierarchy/structure, you can take advantage of the
property inheritance features of ZFS as well.
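
As a sketch of the examples above (the pool name tank and the sizes are
made up, and the parent datasets are assumed to already exist):

    zfs create -o compression=on  tank/usr/ports            # ports tree
    zfs create -o compression=off tank/usr/ports/distfiles
    zfs create -o compression=off tank/usr/ports/work
    zfs create -o recordsize=16K  tank/usr/src              # small records
    zfs create tank/home                                    # default 128K records
    zfs create -o quota=10G -o reservation=1G tank/home/alice

Children such as tank/home/alice inherit tank/home's properties unless
you override them, which is where planning the hierarchy pays off.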

 You just need one, and can break it
 down into separate management domains within the filesystem
 (e.g. HAMMER PFS's).

Similar kind of idea.

 Most linux dists don't bother with multiple partitions any more.
 They just have '/' and maybe a small boot partition, and that's it.

Heh, that's more proof of the difficulties inherent in old-school
disk partitioning, compared to pooled storage setups, than an
endorsement of using a single partition/filesystem.  :)

-- 
Freddie Cash
[EMAIL PROTECTED]


Re: ZFS boot

2008-10-11 Thread Xin LI

Hi, Matt,

Matthew Dillon wrote:
[...]
 /boot can be as complex as boot2 allows.  There's nothing preventing
 it from being RAIDed if boot2 supported that, and there's nothing
 preventing it (once you had ZFS boot capabilities) from being ZFS
 using a topology supported by boot2.  Having a separate /boot allows
 your filesystem root to use topologies boot2 would otherwise not
 support.

I believe that it's a good idea to separate / from the zpool for other
file systems, or even use a UFS /.  My experience with ZFS on my laptop
shows that disk failures can be more easily fixed if there are some
utilities available in the UFS /, even when ZFS is used as /.  Another
issue with a ZFS / is that the snapshot rollback feature generally does
not work on / since it needs the mountpoint to be unmounted.
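
(For a dataset that can be unmounted, the rollback itself is a one-liner;
a sketch with made-up names, assuming nothing is holding the dataset busy:

    zfs snapshot tank/data@before-upgrade
    # ... something goes wrong ...
    zfs rollback tank/data@before-upgrade

It is the dataset mounted as / that runs into the unmount problem above.)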

One thing that I found very useful is the new GPT boot feature in 8.0,
which also works with older BIOSes because the protective MBR takes care
of bootstrapping into the actual GPT boot code.  Right now we have a
15-block gptboot that can boot FreeBSD from UFS, but this boot area can be
virtually any size the BIOS supports, so we can embed more logic there.
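
For reference, putting the pieces on a disk looks something like the
following sketch (da0 and the partition index are assumptions, and the
file names may differ between releases):

    gpart add -t freebsd-boot -s 64K da0
    gpart bootcode -b /boot/pmbr -p /boot/gptboot -i 1 da0

/boot/pmbr is the protective MBR mentioned above, and /boot/gptboot is
the UFS-capable GPT boot code.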

Cheers,


Re: ZFS boot

2008-10-11 Thread Nate Eldredge

On Sat, 11 Oct 2008, Pegasus Mc Cleaft wrote:


FWIW, my system is amd64 with 1 G of memory, which the page implies is
insufficient.  Is it really?


This may be purely subjective, as I have never benchmarked the speeds, but
when I was first testing zfs on an i386 machine with 1 GB of RAM, I thought
the performance was mediocre. However, when I loaded the system on a
quad-core Core 2 with 8 GB of RAM, I was quite impressed. I put a few local
changes in my /boot/loader.conf to give the kernel more breathing room and
disabled prefetch for zfs.

# more loader.conf
# Cap the kernel memory arena at 1 GB (values are in bytes):
vm.kmem_size_max=1073741824
vm.kmem_size=1073741824
# Disable ZFS file-level prefetch:
vfs.zfs.prefetch_disable=1
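
(The settings can be read back once the system is up; a quick sketch --
vfs.zfs.arc_max is not listed in the loader.conf above, it is just the
usual companion knob for the ZFS cache:

    sysctl vm.kmem_size vm.kmem_size_max
    sysctl vfs.zfs.arc_max

The values are reported in bytes.)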


I was somewhat confused by the suggestions on the wiki.  Do the kmem_size
sysctls affect the allocation of *memory* or of *address space*?  It seems
a bit much to reserve 1 GB of memory solely for the use of the kernel,
especially in my case when that's all I have :)  But on amd64, it's
welcome to have terabytes of address space if that will help.



The best advice I can give is for you to find an old machine and test-bed
zfs for yourself. I personally have been pleased with it, and it has saved
my machines' data 4 times already (dying hardware, unexpected power
bounces, etc.).


Sure, but if my new machine isn't studly enough to run it, there's no 
hope for an old machine.  So I'm trying to figure out what I actually 
need.


--

Nate Eldredge
[EMAIL PROTECTED]


Re: Is it possible to recover from SEGV?

2008-10-11 Thread Nate Eldredge

On Sat, 11 Oct 2008, Yuri wrote:


Let's say I have a signal(3) handler set.
And I know exactly what instruction caused the SEGV and why.

Is there a way to access, from the signal handler, the CPU registers as
they were before the signal, modify some of them, clear the signal, and
continue from the instruction that originally caused the SEGV?


Absolutely.  Declare your signal handler as

void handler(int sig, int code, struct sigcontext *scp);

You will need to cast the pointer passed to signal(3).  struct sigcontext 
is defined in machine/signal.h, I believe.  struct sigcontext contains
the CPU registers as they were when the faulting instruction began to 
execute.  You can modify them and then return from the signal handler. 
The program will resume the faulting instruction with the new registers. 
You can also alter the copy of the instruction pointer in the struct 
sigcontext if you want it to resume somewhere else.
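
A minimal sketch of the same idea, using the sigaction(2)/SA_SIGINFO
interface and ucontext_t rather than the older signal(3)/struct sigcontext
form described above; the mc_rip field name assumes FreeBSD/amd64 (i386
would use mc_eip), and the recovery path is deliberately crude:

#include <signal.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <ucontext.h>

static void
recover(void)
{
        printf("survived the SIGSEGV\n");
        exit(0);
}

static void
handler(int sig, siginfo_t *info, void *ctx)
{
        ucontext_t *uc = ctx;

        (void)sig;
        (void)info;     /* info->si_addr would give the faulting address */
        /*
         * uc->uc_mcontext holds the registers as they were when the
         * faulting instruction started.  Point the saved instruction
         * pointer at recover() so that returning from the handler
         * resumes there instead of re-running the fault.
         */
        uc->uc_mcontext.mc_rip = (uintptr_t)recover;
}

int
main(void)
{
        struct sigaction sa;

        memset(&sa, 0, sizeof(sa));
        sa.sa_sigaction = handler;
        sa.sa_flags = SA_SIGINFO;
        sigemptyset(&sa.sa_mask);
        sigaction(SIGSEGV, &sa, NULL);

        *(volatile int *)0 = 1;         /* deliberately fault */
        return (1);
}

Jumping to recover() abandons the interrupted stack frame, which is fine
for a demo that exits immediately; fixing up the registers so the original
instruction can re-execute, as described above, works the same way.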


There is also libsigsegv, which looks like it wraps some of this process
in a less machine-specific way.


Out of curiosity, what are you looking to achieve with this?  And what 
architecture are you on?


--

Nate Eldredge
[EMAIL PROTECTED]