[gentoo-dev] Re: btrfs status and/was: preserve_old_lib

2012-02-25 Thread Duncan
Richard Yao posted on Fri, 24 Feb 2012 20:06:21 -0500 as excerpted:

 Have you tried ZFS? The kernel modules are in the portage tree and I am
 maintaining a FAQ regarding the status of Gentoo ZFS support at github:
 
 https://github.com/gentoofan/zfs-overlay/wiki/FAQ
 
 Data stored on ZFS is generally safe unless you go out of your way to
 lose it (e.g. put the ZIL/SLOG on a tmpfs).

I haven't.

One reason is licensing issues.  I know they resolve to some degree for 
end users who don't distribute and for those only distributing sources, 
since the gpl isn't particularly concerned in that case, but it's still 
an issue that I'd prefer not to touch, personally (nothing against others 
doing so, just not me), so no zfs here.  There's a discussion that could 
be had beyond that and I'm tempted, but here isn't the place for it.

My reason for posting wasn't really that, anyway, it was the apparently 
common misconception out there that btrfs is basically ready and that 
they're just being conservative in switching off the experimental label.  
There are several posts a week on the btrfs list from people caught out 
trying to depend on it, asking about recovery-tool status and the like -- 
things they'd already /know/ if they were using btrfs only for testing, 
its only appropriate use atm.  It's simply not ready for anything more.

Additionally in the context of gentoo-dev, the post was to say, don't 
plan on btrfs stability for anything but pre-release versions of anything 
you might be maintaining this year (kernel, btrfs-progs and grub2 
packages excepted, but they don't depend on btrfs stability, they help 
create it).

-- 
Duncan - List replies preferred.   No HTML msgs.
Every nonfree program has a lord, a master --
and if you use the program, he is your master.  Richard Stallman




Re: [gentoo-dev] rfc: virtual/modutils and module-init-tools

2012-02-25 Thread Mike Gilbert
On Sat, Feb 25, 2012 at 1:01 AM, William Hubbs willi...@gentoo.org wrote:
 If not, once the dependencies are correct, I propose
 dropping virtual/modutils from the system set.

If we drop it from the system set, the kernel modules section of the
handbook should be updated.



[gentoo-dev] Re: preserve_old_lib and I'm even more lazy

2012-02-25 Thread Duncan
Rich Freeman posted on Fri, 24 Feb 2012 22:53:50 -0500 as excerpted:

 From what I've seen as long as you keep things simple, and don't have
 heavy loads, you're at least reasonably likely to get by unscathed. I'd
 definitely keep good backups though.  Just read the mailing lists,
 or for kicks run xfstests

 Oh, and go ahead and try filling up your disk some time.  If your kernel
 is recent enough it might not panic when you get down to a few GB left.
 
 I'm eager for the rise of btrfs - it IS the filesystem of the future.
 However, that cuts both ways right now.

That's about right... along with the caveat that if something /does/ go 
wrong on your not too corner-case, generally normal, lightly loaded 
system, while there are recovery tools for /some/ situations, the normal 
distribution btrfsck is read-only.  The error-correcting btrfsck, freshly 
sort-of available but still rather hidden in the DANGER, DON'T EVER USE 
branch, is still under very heavy stress testing internally by Oracle QA.  
(As a result of those tests, there's a load of fixes headed to Linus for 
inclusion, discovered just since 3.3-rc1.  As a result of /that/, 3.3 
should be the most stable btrfs yet, but that's still far from saying 
it's stable!)

And yes, filesystem of the future DOES cut both ways, ATM.  It's an apt 
description and I too am seriously looking forward to btrfs.  But it's 
definitely NOT the filesystem of now, for sure!  =:^)

-- 
Duncan - List replies preferred.   No HTML msgs.
Every nonfree program has a lord, a master --
and if you use the program, he is your master.  Richard Stallman




[gentoo-dev] Re: preserve_old_lib and I'm even more lazy

2012-02-25 Thread Duncan
Zac Medico posted on Fri, 24 Feb 2012 20:35:24 -0800 as excerpted:

 I've been using btrfs for temp storage, for more than a year

 The only problems I've experienced are:
 
  1) Intermittent ENOSPC when unpacking lots of files. Maybe this is
 related to having compression enabled. I haven't experienced it lately,
 so maybe it's fixed in recent kernels.

This is one of those cases where many different bugs produce the same 
result.  The way btrfs allocates space is /extremely/ complicated, and 
based on what I read on-list they've been fixing bugs in it, gradually 
reducing the ENOSPC triggers, for quite some time.

Last I read, the biggest remaining known one was indeed related to 
compression, apparently tied to a race condition of some sort, with one 
bit of code reaching the ENOSPC conclusion because it finished before the 
actual processing code did.

However, apparently the same bug could be triggered on uncompressed btrfs 
if it was stressed enough (rsyncing several gigs was a common reproducer).

Last I read they hadn't fully traced that one down in btrfs itself yet, 
but they had worked around the problem by throttling things further up 
the stack, in the kernel VFS code I believe.  The reasoning was that if a 
device was so overwhelmed it clearly couldn't keep up, regardless of the 
filesystem, throttling requests at the vfs level would put less pressure 
on the filesystem code, allowing things to work smoother.  It MAY (my own 
thought here) have been another application of the buffer-bloat work -- 
simply increasing buffer size and filling it even more doesn't help, when 
the bottleneck is further down the stack, rather the reverse!

AFAIK that's the present status for 3.3.  At least that one spurious 
ENOSPC trigger remains, but they've worked around it for now with the 
throttling, so it shouldn't hit anyone but those deliberately disabling 
the throttling in order to test it further.

But with luck, the stress-testing that Oracle QA's doing ATM will have 
found the root bug and it's fixed now too.  I hope...

  2) Bug 353907 [1] which is fixed in recent kernels and coreutils.
 
 [1] https://bugs.gentoo.org/show_bug.cgi?id=353907

That one could be another head of the same race-related root bug.  In 
fact, reading it and seeing that ext4 was affected as well, I'm wondering 
if that's what triggered the introduction of the throttling at the VFS 
level.

(NB: Interesting that I wasn't the only one to see that as an invitation 
to discuss btrfs.  At least my subthread has the subject changed, so 
people who want to can ignore it, tho.  I wish that had happened here 
too, but I guess it's kind of late to try to change it with this post, 
so...)

-- 
Duncan - List replies preferred.   No HTML msgs.
Every nonfree program has a lord, a master --
and if you use the program, he is your master.  Richard Stallman




Re: [gentoo-dev] rfc: virtual/modutils and module-init-tools

2012-02-25 Thread Robin H. Johnson
On Sat, Feb 25, 2012 at 12:01:07AM -0600, William Hubbs wrote:
 The dependencies on module-init-tools in the tree should be changed to
 virtual/modutils. I am willing to do this myself if no one objects. If I
 do, should I open individual bugs for the packages?
As kernel-misc, I've fixed them all up.

 Also, this brings up another question. I replaced module-init-tools in
 the system set with virtual/modutils.  But, since it is possible to have
 a linux system with a monolithic kernel, should this even be in the
 system set? If not, once the dependencies are correct, I propose
 dropping virtual/modutils from the system set.
I think we should examine dropping virtual/modutils from system.
It'll be on most systems anyway, however: it's needed to build any
kernel, so the only place it won't be is a system with a monolithic
kernel that was built on a different host and copied over, or that
boots a kernel which isn't kept on the filesystem (common in VMs).

-- 
Robin Hugh Johnson
Gentoo Linux: Developer, Trustee & Infrastructure Lead
E-Mail : robb...@gentoo.org
GnuPG FP   : 11ACBA4F 4778E3F6 E4EDF38E B27B944E 34884E85



[gentoo-dev] Re: rfc: virtual/modutils and module-init-tools

2012-02-25 Thread Duncan
William Hubbs posted on Sat, 25 Feb 2012 00:01:07 -0600 as excerpted:

 Also, this brings up another question. I replaced module-init-tools in
 the system set with virtual/modutils.  But, since it is possible to have
 a linux system with a monolithic kernel, should this even be in the
 system set? If not, once the dependencies are correct, I propose
 dropping virtual/modutils from the system set.

FWIW, I'm one of those monolithic kernel running folks.

I'm also one of those folks with everything the PM installs on rootfs, so 
I haven't been affected by the reason for masking newer udev, and thus I 
unmasked and installed it some time ago.

As such, I got udev-181 before it depended on kmod, and thus know that 
udev-181 won't build without it.

Given that udev-181 requires kmod, and while udev itself isn't in the 
system set, it's the preferred dep of virtual/dev-manager, which IS in 
the system set...

By udev-181, the vast majority of gentoo users who use udev WILL have 
kmod installed (and not module-init-tools, since the two block each 
other) simply due to udev, regardless of whether it's also in the system 
set or a dependency of anything else.

As such, IMO virtual/modutils doesn't need to be in the system set, 
because udev pulls it in.

Since most users have udev (and it's part of the stage-3 as the preferred 
dev-manager), they'll have kmod as a dependency and given its default-
USE, they'll normally have the module-init-tools compatibility symlinks, 
so module handling will work as it always has, for them.

As such, I disagree with floppym that the handbook's kernel module 
section needs updating for this, too.  The handbook doesn't even deal 
with non-default dev-managers, nor does it mention module-init-tools, it 
just assumes it's there.  Udev, as the default dev-manager, will be 
pulling in kmod already, with its default module-init-tools compatibility 
meaning no change in documentation necessary.  Only if we're going to 
start giving users dev-manager alternatives in the handbook does it 
become an issue, and while that would be nice, I don't think it's 
necessary for this change.

That leaves those using a dev-manager other than udev in a current 
installation who are depending on the current system set listing to bring 
in module-init-tools.  I believe busybox has its own modutils as well, 
doesn't it?  So that eliminates them.  Similarly, the fbsd folks aren't 
likely to be using Linux module-init-tools, right?

That leaves those still using kernel 2.4 and devfsd, and those using 
static-dev.

Is kernel 2.4 and devfsd still a supported option?  If not, that pretty 
much eliminates it.  If it /is/ still supported, maybe this can be our 
excuse to drop it?  Is that feasible, or are there users, perhaps on some 
of the supported exotic archs, for which kernel 2.6 and udev, etc, is not 
a viable option?

That means the static-dev folks, and possibly some still on 2.4 and 
devfsd, if that's even still supported.  Static-dev could arguably pull 
in modutils as a dependency, or a news item could be created, triggered 
on static-dev being installed.  Similarly for devfsd, if it's still 
supported.

 On the other hand, if we want virtual/modutils in the system set, there
 should be no dependencies in the tree  on virtual/modutils.

Good point.  Hopefully, tho, it can simply be removed from the system set.

-- 
Duncan - List replies preferred.   No HTML msgs.
Every nonfree program has a lord, a master --
and if you use the program, he is your master.  Richard Stallman




Re: [gentoo-dev] github g.o.g.o

2012-02-25 Thread Alex Alexander
On Sat, Feb 25, 2012 at 01:55:37PM +0100, Justin wrote:
 Hi all,
 
 is there a way to do a way or two way sync between a repo on github and
 on g.o.g.o?
 
 I have the feeling that I heard of an official overlay which is operated
 like this. Could someone please point me to this overlay and the technique?

For every secondary repo you want to keep synced, add a pushurl entry

pushurl = git-repo-url

below your main url = line in the repo's .git/config file.
Git will push to all pushurl entries automatically when you git push.

We do this in the Qt overlay (with gitorious atm):

[remote origin]
fetch = +refs/heads/*:refs/remotes/origin/*
url = git://git.overlays.gentoo.org/proj/qt.git
pushurl = g...@gitorious.org:gentoo-qt/qt.git
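
If you'd rather not edit .git/config by hand, git remote set-url can add
the same entries (a sketch reusing the URLs above; note that once any
pushurl is set, pushes ignore the plain url line, so if you want to keep
pushing to the main repo as well, add it as a pushurl too):

git remote set-url --add --push origin git://git.overlays.gentoo.org/proj/qt.git
git remote set-url --add --push origin g...@gitorious.org:gentoo-qt/qt.git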

Regards,
-- 
Alex Alexander | wired
+ Gentoo Linux Developer
++ www.linuxized.com




Re: [gentoo-dev] preserve_old_lib and I'm even more lazy

2012-02-25 Thread Rich Freeman
On Sat, Feb 25, 2012 at 10:02 AM, Doug Goldstein car...@gentoo.org wrote:
 FWIW, I'll second the ZFS & btrfs suggestion.

Oh, if you need a safe COW filesystem today I'd definitely recommend
ZFS over btrfs for sure, although I suspect the people who are most
likely to take this sort of advice are also the sort of people who are
most likely to not be running Gentoo.  There are a bazillion problems
with btrfs as it stands.

However, fundamentally there is no reason to think that ZFS will
remain better in the future, once the bugs are worked out.  They're
still focusing on keeping btrfs from hosing your data - tuning
performance is not a priority yet.  However, the b-tree design of
btrfs should scale very well once the bugs are worked out.

Rich



Re: [gentoo-dev] Re: rfc: virtual/modutils and module-init-tools

2012-02-25 Thread William Hubbs
On Sat, Feb 25, 2012 at 08:44:39AM +, Duncan wrote:
 You are however correct that it'll be on most systems, at least with 
 udev-181, since udev won't build without kmod, now.  (I found that out 
 when the build broke on me due to missing kmod, as I've had udev unmasked 
 for awhile and got 181 before kmod was added as a dep.)

But, one thing about kmod is that you can turn off the command line
portions of it completely on a monolithic system since udev just uses
the library. That is actually the main reason we are transitioning over
to kmod.

You do that by putting the following in /etc/portage/package.use:

sys-apps/kmod -compat -tools

William





Re: [gentoo-dev] Re: rfc: virtual/modutils and module-init-tools

2012-02-25 Thread Walter Dnes
On Sat, Feb 25, 2012 at 08:28:23AM +, Duncan wrote:

 That leaves those using a dev-manager other than udev in a current 
 installation who are depending on the current system set listing to bring 
 in module-init-tools.  I believe busybox has its own modutils as well, 
 doesn't it?  So that eliminates them.

  Would this require tweaking the virtual/dev-manager ebuild?  Taking a
quick glance at http://busybox.net/downloads/BusyBox.html it does have
lsmod/modprobe/rmmod, but not modinfo.  Are there any ebuilds or init
scripts that use modprobe or rmmod?  If so, the ebuild should at least
have an ewarn message telling mdev users to create the necessary
symlinks to modprobe/rmmod.  Maybe even attempt to create the symlinks
if they don't already exist.
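
For what it's worth, a minimal sketch of what such an ewarn could
suggest (assuming busybox is installed at /bin/busybox; busybox picks
the applet to run from the name it's invoked as):

for tool in modprobe rmmod lsmod; do
    # only create the symlink if nothing provides the tool already
    [ -e /sbin/$tool ] || ln -s /bin/busybox /sbin/$tool
done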

  I'm not a programmer or developer, but I am running udev-less Gentoo
using busybox's mdev.  I've got a spare machine that I'm willing to use
as a guinea-pig for testing mdev under the proposed setup.

  How difficult would it be to set up an mdev-based profile, already?

-- 
Walter Dnes waltd...@waltdnes.org



Re: [gentoo-dev] preserve_old_lib and I'm even more lazy

2012-02-25 Thread Richard Yao
 Oh, if you need a safe COW filesystem today I'd definitely recommend
 ZFS over btrfs for sure, although I suspect the people who are most
 likely to take this sort of advice are also the sort of people who are
 most likely to not be running Gentoo.  There are a bazillion problems
 with btrfs as it stands.

There is significant interest in ZFS in the Gentoo community,
especially on freenode. Several veteran users are evaluating it and
others have already begun to switch from other filesystems, volume
managers and RAID solutions.

 However, fundamentally there is no reason to think that ZFS will
 remain better in the future, once the bugs are worked out.  They're
 still focusing on keeping btrfs from hosing your data - tuning
 performance is not a priority yet.  However, the b-tree design of
 btrfs should scale very well once the bugs are worked out.

ZFSOnLinux performance tuning is not a priority either, but there have
been a few patches and the performance is good. btrfs might one day
outperform ZFS in terms of single disk performance, assuming that it
does not already, but I question the usefulness of single disk
performance as a performance metric. If I add an SSD to a machine with
a ZFS pool to complement the disk, system performance will increase
many-fold. As far as I can tell, that will never be possible with
btrfs without external solutions like Google's flashcache, which about
a month ago killed an OCZ Vertex 3 within 16 days, a drive that Wyatt
in #gentoo-chat on freenode then had to replace. I imagine that its death
could have been delayed through write rate limiting, which is what ZFS
uses for L2ARC, but until you can replace the Linux page replacement
algorithm with either ARC or something comparable, flashcache will be
inferior to ZFS L2ARC. You can read more about this topic at the
following link:

http://linux-mm.org/AdvancedPageReplacement

ZFS at its core is a transactional object store and everything that
enables its use as a filesystem is implemented on top of that. ZFS
supports raidz3, zvols, L2ARC, SLOG/ZIL and endian independence, which
as far as I can tell, are things that btrfs will never support. ZFS
also has either first-party or third-party support on Solaris,
FreeBSD, Linux, Mac OS X and Windows, while btrfs appears to have no
future outside of Linux.

Lastly, ZFS' performance scaling exceeds that of any block device
based filesystem I have seen (which excludes comparisons with
tmpfs/ramfs and lustre/gpfs). The following benchmark is of a SAN
device using ZFS:

http://www.anandtech.com/show/3963/zfs-building-testing-and-benchmarking/2

While ZFS performance in that benchmark is impressive, ZFS can scale
far higher with additional disks and more SSDs. SuperMicro has a
hotswappable 72-disk enclosure that should enable ZFS to far exceed
the performance of the system that Anandtech benchmarked, provided
that it is configured with a large ARC cache and multiple vdevs each
with multiple disks, some SSDs for L2ARC and a SLC SSD-based SLOG/ZIL.
I would not be surprised if ZFS performance were to exceed 1 million
IOPS on such hardware. Nothing that I have seen planned for btrfs can
perform comparably, in any configuration.



Re: [gentoo-dev] preserve_old_lib and I'm even more lazy

2012-02-25 Thread Rich Freeman
On Sat, Feb 25, 2012 at 3:52 PM, Richard Yao r...@cs.stonybrook.edu wrote:
 ZFSOnLinux performance tuning is not a priority either, but there have
 been a few patches and the performance is good. btrfs might one day
 outperform ZFS in terms of single disk performance, assuming that it
 does not already, but I question the usefulness of single disk
 performance as a performance metric.

Why would btrfs be inferior to ZFS on multiple disks?  I can't see how
its architecture would do any worse, and the planned features are
superior to ZFS (which isn't to say that ZFS can't improve either).

Beyond the licensing issues ZFS also does not support reshaping of
raid-z, which is the only n+1 redundancy solution it offers.  Btrfs of
course does not yet support n+1 at all aside from some experimental
patches floating around, but it plans to support reshaping at some
point in time.  Of course, there is no reason you couldn't implement
reshaping for ZFS, it just hasn't happened yet.  Right now the
competition for me is with ext4+lvm+mdraid.  While I really would like
to have COW soon, I doubt I'll implement anything that doesn't support
reshaping as mdraid+lvm does.

I do realize that you can add multiple raid-zs to a zpool, but that
isn't quite enough.  If I have 4x1TB disks I'd like to be able to add
a single 1TB disk and end up with 5TB of space.  I'd rather not have
to find 3 more 1TB hard drives to hold the data on while I redo my
raid and then try to somehow sell them again.
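
For reference, the mdadm reshape I mean goes roughly like this, online
and in place (device, volume group and LV names are made up; older
mdadm may want --backup-file for the critical section of the reshape):

mdadm --add /dev/md0 /dev/sde1            # add the new disk
mdadm --grow /dev/md0 --raid-devices=5    # reshape from 4 to 5 devices
pvresize /dev/md0                         # with lvm on top: grow the PV,
lvextend -l +100%FREE /dev/vg0/data       # extend the LV,
resize2fs /dev/vg0/data                   # then grow ext4 to match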

Rich



Re: [gentoo-dev] preserve_old_lib and I'm even more lazy

2012-02-25 Thread Richard Yao
 Why would btrfs be inferior to ZFS on multiple disks?  I can't see how
 its architecture would do any worse, and the planned features are
 superior to ZFS (which isn't to say that ZFS can't improve either).

ZFS uses ARC as its page replacement algorithm, which is superior to
the LRU page replacement algorithm used by btrfs. ZFS has L2ARC and
SLOG. L2ARC lets things that would not have been evicted from ARC had
it been bigger be kept in a level-2 cache instead. The SLOG holds the
intent log (ZIL) on a fast dedicated device, so writes can be
acknowledged before they are committed to the main disks. This provides
the benefits of write sequentialization and protection against data
striped across vdevs, so the more vdevs you have, the higher your
performance goes.

These features enable ZFS performance to go to impressive heights and
the btrfs developers display no intention of following it as far as I
have seen.

 Beyond the licensing issues ZFS also does not support reshaping of
 raid-z, which is the only n+1 redundancy solution it offers.  Btrfs of
 course does not yet support n+1 at all aside from some experimental
 patches floating around, but it plans to support reshaping at some
 point in time.  Of course, there is no reason you couldn't implement
 reshaping for ZFS, it just hasn't happened yet.  Right now the
 competition for me is with ext4+lvm+mdraid.  While I really would like
 to have COW soon, I doubt I'll implement anything that doesn't support
 reshaping as mdraid+lvm does.

raidz comes in three varieties: single, double and triple parity
(raidz1/2/3). As for reshaping, ZFS is also a logical volume manager;
you can set and resize limits on ZFS datasets as you please.
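
A quick sketch (pool, disk and dataset names are made up):

zpool create tank raidz2 sdb sdc sdd sde sdf   # double-parity raidz
zfs create tank/home                           # a dataset in the pool
zfs set quota=500G tank/home                   # cap it at 500G...
zfs set quota=1T tank/home                     # ...and resize the cap later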

As for competing with ext4+lvm+mdraid, I recently migrated a server
from that exact configuration. It had 6 disks, using RAID 6. I had a
VM on it running Gentoo Hardened, in which I ran a benchmark using dd
to write zeroes to the disk. Nothing I could do with ext4+lvm+mdraid
could get performance above 20MB/sec. After switching to ZFS,
performance went to 205MB/sec; the worst I observed was 92MB/sec.
This was with 6 Samsung HD204UI hard drives.
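
The exact command wasn't given above; a typical dd write test of this
sort looks something like (path made up):

# write 8GB of zeroes, flushing to disk before reporting throughput
dd if=/dev/zero of=/mnt/test/zeros bs=1M count=8192 conv=fdatasync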

 I do realize that you can add multiple raid-zs to a zpool, but that
 isn't quite enough.  If I have 4x1TB disks I'd like to be able to add
 a single 1TB disk and end up with 5TB of space.  I'd rather not have
 to find 3 more 1TB hard drives to hold the data on while I redo my
 raid and then try to somehow sell them again.

You would probably be better served by making your additional drive
into a hotspare, but if you insist on using it, you can make it a
separate vdev, which should provide more space. To be honest, anyone
who wants to upgrade such a configuration is probably better off
getting 4x2TB disks, doing a scrub, and then replacing the disks in
the pool one at a time, alternating between replacing a disk and
resilvering the vdev. After you have finished this process, you will
have doubled the amount of space in the pool.
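
A sketch of that replace-and-resilver upgrade (pool and disk names are
made up):

zpool set autoexpand=on tank   # let the pool grow once all disks are bigger
zpool replace tank sdb sdf     # swap one 1TB disk for a 2TB one
zpool status tank              # wait for the resilver to finish
# ...then repeat the replace/resilver for each remaining disk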



[gentoo-dev] Re: rfc: virtual/modutils and module-init-tools

2012-02-25 Thread Duncan
William Hubbs posted on Sat, 25 Feb 2012 11:25:55 -0600 as excerpted:

 On Sat, Feb 25, 2012 at 08:44:39AM +, Duncan wrote:
 You are however correct that it'll be on most systems, at least with
 udev-181, since udev won't build without kmod, now.  (I found that out
 when the build broke on me due to missing kmod, as I've had udev
 unmasked for awhile and got 181 before kmod was added as a dep.)
 
 But, one thing about kmod is that you can turn off the command line
 portions of it completely on a monolithic system since udev just uses
 the library. That is actually the main reason we are transitioning over
 to kmod.
 
 You do that by putting the following in /etc/portage/package.use:
 
 sys-apps/kmod -compat -tools

Good point, and I'd done exactly that.

But current docs and @system assume modules, and on principles of least 
change for both packages and docs, I kept that assumption.

For advanced users with monolithic kernel systems, kmod as a udev dep and 
modutils removed from @system will be at once better and worse than the 
current state.  Better, since a package.use entry is far less drastic 
than a package.provided entry plus an @system-negating packages-file 
entry; worse, since previously no modutils package was necessary at all 
once the appropriate portage configs were set up, while now kmod is 
required for udev, an upstream choice made for us.  package.use can take 
care of the command line stuff, but the package is still a hard dep, 
since udev itself won't build without it.

Unless of course upstream udev provides a build-time option allowing udev 
to be built without module support, so it doesn't link kmod at all.  I've 
not actually investigated that, but I doubt they do.  It would sure be 
nice, tho, if they did.  Has a request been made, at least?  Gentoo could 
then expose that option as a USE flag in the routine fashion, which would 
make killing the kmod dep entirely possible, for those who do have 
monolithic kernels.

-- 
Duncan - List replies preferred.   No HTML msgs.
Every nonfree program has a lord, a master --
and if you use the program, he is your master.  Richard Stallman




Re: [gentoo-dev] preserve_old_lib and I'm even more lazy

2012-02-25 Thread Richard Yao
 That isn't my understanding as far as raidz reshaping goes.  You can
 create raidz's and add them to a zpool.  You can add individual
 drives/partitions to zpools.  You can remove any of these from a zpool
 at any time and have it move data into other storage areas.  However,
 you can't reshape a raidz.

ZFS is organized into pools, which are transactional object stores.
Various things can go into these transactional object stores, such as
ZFS data sets and zvols. A ZFS data set is what you would consider to
be a filesystem. A zvol is a block device on which other filesystems
can be installed. Data in a pool is stored in vdevs, which can be
files masquerading as block devices, single disks, mirrored disks or a
raidz array.
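
Concretely, that layering looks like this (names made up; the zvol
device path is as ZFSOnLinux exposes it):

zpool create tank mirror sdb sdc   # a pool backed by a mirror vdev
zfs create tank/data               # a ZFS dataset (a filesystem)
zfs create -V 10G tank/vm0         # a zvol (a block device)
mkfs.ext4 /dev/zvol/tank/vm0       # another filesystem on top of the zvol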

ZFS is designed to put data integrity first. I question how many other
volume managers are capable of recovering from a crash during a
reshape without some sort of catastrophic data loss. With that said, I
do not see the point of dwelling on this. There are things you can use
your extra disk for, but as far as storage requirements go, a single
disk does not go very far. You are better off replacing hardware if
your storage requirements grow beyond what your current disks can
handle.

 Suppose I have a system with 5x1TB hard drives.  They're merged into a
 single raidz with single-parity, so I have 4TB of space.  I want to
 add one 1TB drive to the array and have 5TB of single-parity storage.
 As far as I'm aware you can't do that with raidz.  What you could do
 is set up some other 4TB storage area (raidz or otherwise), remove the
 original raidz, recycle those drives into the new raidz, and then move
 the data back onto it.  However, doing this requires 4TB of storage
 space.  With mdadm you could do this online without the need for
 additional space as a holding area.

If you have proper backups, you should be able to destroy the pool,
make a new one and restore the backup. If you do not have backups,
then I think there are more important things to consider than your
ability to do this without them.

 ZFS is obviously a capable filesystem, but unless Oracle re-licenses
 it we'll never see it take off on Linux.  For good or bad everybody
 seems to like the monolithic kernel.  Btrfs obviously has a ways to go
 before it is a viable replacement, but I doubt Oracle would be sinking
 so much money into it if they intended to ever re-license ZFS.

I heard a statement in IRC that Oracle owns all of the next generation
filesystems, which enables them to position btrfs for the low-end and
use ZFS at the high-end. I have no way of substantiating this, but I
can say that this does appear to be the case.

With that said, ebuilds are in the portage tree and support has been
integrated into genkernel. I have a physical system booting off ZFS
(no ext4 et al) and genkernel makes kernel upgrades incredibly easy,
even when configuring my own kernel through --menuconfig. Gentoo users
in IRC are quite interested in this and they do not seem to care that
the modules are out-of-tree or that the licensing is different. As far
as I can tell, there is no need for them to care.

You might want to look at Gentoo/FreeBSD, which also supports ZFS with
a monolithic kernel design, but has no licensing issues. There is
nothing forcing any of us to use Linux and if the licensing is a
problem for you, then perhaps it would be a good idea to switch.

Also, to avoid any confusion, a proper bootloader for ZFS does not
exist in portage at this time. I hacked the boot process to enable the
system to boot off ZFS using GRUB and it will require some more work
before this is ready for inclusion into portage. I made an
announcement to the ZFSOnLinux mailing list not that long ago
explaining what I did. I was waiting until ZFS support in Gentoo
reached a few milestones before I made an announcement about it here,
although most of the stuff you need is already in-tree:

http://groups.google.com/a/zfsonlinux.org/group/zfs-discuss/browse_thread/thread/d94f597f8f4e3c88



[gentoo-dev] Re: rfc: virtual/modutils and module-init-tools

2012-02-25 Thread Duncan
Walter Dnes posted on Sat, 25 Feb 2012 15:04:22 -0500 as excerpted:

 On Sat, Feb 25, 2012 at 08:28:23AM +, Duncan wrote:
 
 That leaves those using a dev-manager other than udev in a current
 installation who are depending on the current system set listing to
 bring in module-init-tools.  I believe busybox has its own modutils as
 well, doesn't it?  So that eliminates them.
 
   Would this require tweaking the virtual/dev-manager ebuild?  Taking a
 quick glance at http://busybox.net/downloads/BusyBox.html it does have
 lsmod/modprobe/rmmod, but not modinfo.  Are there any ebuilds or init
 scripts that use modprobe or rmmod?  If so, the ebuild should at least
 have an ewarn message telling mdev users to create the necessary
 symlinks to modprobe/rmmod.  Maybe even attempt to create the symlinks
 if they don't already exist.

FWIW I don't have busybox installed either (I negate its @system entry in 
/etc/portage/profiles/packages; as my emergency boot solution I use 
either init=/bin/bash on the kernel command line, or a second copy of the 
rootfs taken when the system was generally stable, so no busybox is 
necessary), so I'm not familiar with it at all.

But as I stated, I've had module-init-tools in package.provided for quite 
some time, with no noticed ill effects.  The only deps I see on it 
presently are sys-apps/rescan-scsi-bus (itself a dep of k3b, the cd/dvd-
burning app; this will obviously be replaced with a virtual/modutils dep) 
and virtual/modutils itself.  I don't see any deps on virtual/modutils 
presently.  But of course I don't have all apps installed, either.

-- 
Duncan - List replies preferred.   No HTML msgs.
Every nonfree program has a lord, a master --
and if you use the program, he is your master.  Richard Stallman