Re: [gentoo-user] snapshots?

2016-01-13 Thread Neil Bothwick
On Tue, 12 Jan 2016 00:43:12 +0100, lee wrote:

> >> The relevant advantage of btrfs is being able to make snapshots.  Is
> >> that worth all the (potential) trouble?  Snapshots are worthless when
> >> the file system destroys them with the rest of the data.  
> >
> > You forgot the data checksumming.  
> 
> Not at all, I'm seeing it as an advantage, especially when you want to
> store large amounts of data.  Since I don't trust btrfs with that, I'm
> using ZFS.

You already have snapshots with ZFS. If you're happy with it, keep using
it.
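
For the revert-after-an-update case that started this thread, that is
roughly all there is to it (the dataset name is just an example, adjust
to your pool layout):

# zfs snapshot tank/ROOT/gentoo@pre-update    (before the update)
# zfs list -t snapshot                        (see what you have)
# zfs rollback tank/ROOT/gentoo@pre-update    (if the update went wrong)

Rolling back the dataset your running root sits on is best done from a
rescue environment, but the idea is the same.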

> > If you use hardware RAID then btrfs
> > only sees a single disk. It can still warn you of corrupt data but it
> > cannot fix it because it only has the one copy.  
> 
> or it corrupts the data itself ;)

Well, any filesystem is capable of that, and anybody is capable of making
vague comments about it.

I switched from ZFS to btrfs a while ago. ZFS is more mature, but the
licensing issues and the lack of recent source code mean it isn't really
going anywhere whereas btrfs is in the kernel and under active
development. If you're already using ZFS and happy with it, you are
probably better off sticking with it for now.


-- 
Neil Bothwick

The trouble with life is that you are halfway through it before you
realize it's a "do it yourself" thing.




Re: [gentoo-user] snapshots?

2016-01-12 Thread lee
Rich Freeman  writes:

> On Tue, Jan 5, 2016 at 5:16 PM, lee  wrote:
>> Rich Freeman  writes:
>>
>>>
>>> I would run btrfs on bare partitions and use btrfs's raid1
>>> capabilities.  You're almost certainly going to get better
>>> performance, and you get more data integrity features.
>>
>> That would require me to set up software raid with mdadm as well, for
>> the swap partition.
>
> Correct, if you don't want a panic if a single swap drive fails.
>
>>
>>> If you have a silent corruption with mdadm doing the raid1 then btrfs
>>> will happily warn you of your problem and you're going to have a
>>> really hard time fixing it,
>>
>> BTW, what do you do when you have silent corruption on a swap partition?
>> Is that possible, or does swapping use its own checksums?
>
> If the kernel pages in data from the good mirror, nothing happens.  If
> the kernel pages in data from the bad mirror, then whatever data
> happens to be there is what will get loaded and used and/or executed.
> If you're lucky the modified data will be part of unused heap or
> something.  If not, well, just about anything could happen.
>
> Nothing in this scenario will check that the data is correct, except
> for a forced scrub of the disks.  A scrub would probably detect the
> error, but I don't think mdadm has any ability to recover it.  Your
> best bet is probably to try to immediately reboot and save what you
> can, or a less-risky solution assuming you don't have anything
> critical in RAM is to just do an immediate hard reset so that there is
> no risk of bad data getting swapped in and overwriting good data on
> your normal filesystems.

Then you might be better off with no swap unless you put it on a file
system that uses checksums.

>> It's still odd.  I already have two different file systems and the
>> overhead of one kind of software raid while I would rather stick to one
>> file system.  With btrfs, I'd still have two different file systems ---
>> plus mdadm and the overhead of three different kinds of software raid.
>
> I'm not sure why you'd need two different filesystems.

btrfs and zfs

I won't put my data on btrfs for at least quite a while.

> Just btrfs for your data.  I'm not sure where you're counting three
> types of software raid either - you just have your swap.

btrfs raid is software raid, zfs raid is software raid, and mdadm is
software raid.  That makes three different software raids.

> And I don't think any of this involves any significant overhead, other
> than configuration.

mdadm does have a very significant performance overhead.  ZFS mirror
performance seems to be rather poor.  I don't know how much overhead is
involved with zfs and btrfs software raid, yet since they basically all
do the same thing, I have my doubts that the overhead is significantly
lower than the overhead of mdadm.

>> How would it be so much better to triple the software raids and to still
>> have the same number of file systems?
>
> Well, the difference would be more data integrity insofar as hardware
> failure goes, but certainly more risk of logical errors (IMO).

There would be a possibility for more data integrity for the root file
system, assuming that btrfs is as reliable as ext4 on hardware raid.  Is
it?

That's about 10GB, mostly read and not written to.  It would be a
very minor improvement, if any.

>>>> When you use hardware raid, it
>>>> can be disadvantageous compared to btrfs-raid --- and when you use it
>>>> anyway, things are suddenly much more straightforward because everything
>>>> is on raid to begin with.
>>>
>>> I'd stick with mdadm.  You're never going to run mixed
>>> btrfs/hardware-raid on a single drive,
>>
>> A single disk doesn't make for a raid.
>
> You misunderstood my statement.  If you have two drives, you can't run
> both hardware raid and btrfs raid across them.  Hardware raid setups
> don't generally support running across only part of a drive, and in
> this setup you'd have to run hardware raid on part of each of two
> single drives.

I have two drives to hold the root file system and the swap space.  The
raid controller they'd be connected to does not support using disks
partially.

>>> and the only time I'd consider
>>> hardware raid is with a high quality raid card.  You'd still have to
>>> convince me not to use mdadm even if I had one of those lying around.
>>
>> From my own experience, I can tell you that mdadm already does have
>> significant overhead when you use a raid1 of two disks and a raid5 with
>> three disks.  This overhead may be somewhat due to the SATA controller
>> not being as capable as one would expect --- yet that doesn't matter
>> because one thing you're looking at, besides reliability, is the overall
>> performance.  And the overall performance very noticeably increased when
>> I migrated from mdadm raids to hardware raids, with the same disks and
>> the same hardware, except that the raid card was added.
>
> Well, sure, the raid card probably had battery-backed cache if it was
> decent, so linux could complete its commits to RAM and not have to
> wait for the disks.

Re: [gentoo-user] snapshots?

2016-01-12 Thread lee
Neil Bothwick  writes:

> On Tue, 5 Jan 2016 18:22:59 -0500, Rich Freeman wrote:
>
>> > There's no need to use RAID for swap, it's not like it contains
>> > anything of permanent importance. Create a swap partition on each
>> > disk and let the kernel use the space as it wants.  
>> 
>> So, while I tend not to run swap on RAID, it isn't an uncommon
>> approach because if you don't put swap on raid and you have a drive
>> failure while the system is running, then you are likely to have a
>> kernel panic.  Since one of the main goals of RAID is availability, it
>> is logical to put swap on RAID.
>
> That's a point I hadn't considered, but I think I'll leave things as they
> are for now. I have three drives with a swap partition on each. My system
> uses very little swap as it is, so the chances of one of those drives
> failing exactly when something is using that particular drive is pretty
> small. There's probably more chance of my winning the lottery...

It seems far more likely for a drive to fail when it is used than when
it is not used.



Re: [gentoo-user] snapshots?

2016-01-12 Thread lee
Neil Bothwick  writes:

> On Tue, 05 Jan 2016 23:16:48 +0100, lee wrote:
>
>> > I would run btrfs on bare partitions and use btrfs's raid1
>> > capabilities.  You're almost certainly going to get better
>> > performance, and you get more data integrity features.  
>> 
>> That would require me to set up software raid with mdadm as well, for
>> the swap partition.
>
> There's no need to use RAID for swap, it's not like it contains anything
> of permanent importance. Create a swap partition on each disk and let
> the kernel use the space as it wants.

When a disk that a swap partition is on fails, the system is likely to go
down.  Raid is not a replacement for backups.

>> The relevant advantage of btrfs is being able to make snapshots.  Is
>> that worth all the (potential) trouble?  Snapshots are worthless when
>> the file system destroys them with the rest of the data.
>
> You forgot the data checksumming.

Not at all, I'm seeing it as an advantage, especially when you want to
store large amounts of data.  Since I don't trust btrfs with that, I'm
using ZFS.

A system partition of 50 or 60GB --- of which about 10GB are used --- is
not exactly storing large amounts of data, and the data on it doesn't
change much.  In this application, checksums would still be a benefit,
yet a rather small one.  So as I said, the /relevant/ advantage of btrfs
is being able to make snapshots.  And that isn't worth the trouble.

> If you use hardware RAID then btrfs
> only sees a single disk. It can still warn you of corrupt data but it
> cannot fix it because it only has the one copy.

or it corrupts the data itself ;)

>> Well, then they need to make special provisions for swap files in btrfs
>> so that we can finally get rid of the swap partitions.
>
> I think there are more important priorities, it's not like having a swap
> partition or two is a hardship or limitation.

Still needing swap partitions and removing the option to use swap files
instead simply defeats the purpose of btrfs and makes it significantly
harder to use.



Re: [OT] Re: [gentoo-user] snapshots?

2016-01-06 Thread Neil Bothwick
On Wed, 06 Jan 2016 16:56:57 +, Peter Humphrey wrote:

> > There's probably more chance of my winning the lottery...  
> 
> Hardly, since they reduced our odds by a factor of 50 x 51 x 52 x ... x
> 59 by adding ten more numbers.

The fact I was unaware of that should give an indication of my chances of
winning in the first place :)


-- 
Neil Bothwick

Biology is the only science in which multiplication means the same thing
as division.




Re: [gentoo-user] snapshots?

2016-01-06 Thread Neil Bothwick
On Tue, 5 Jan 2016 18:22:59 -0500, Rich Freeman wrote:

> > There's no need to use RAID for swap, it's not like it contains
> > anything of permanent importance. Create a swap partition on each
> > disk and let the kernel use the space as it wants.  
> 
> So, while I tend not to run swap on RAID, it isn't an uncommon
> approach because if you don't put swap on raid and you have a drive
> failure while the system is running, then you are likely to have a
> kernel panic.  Since one of the main goals of RAID is availability, it
> is logical to put swap on RAID.

That's a point I hadn't considered, but I think I'll leave things as they
are for now. I have three drives with a swap partition on each. My system
uses very little swap as it is, so the chances of one of those drives
failing exactly when something is using that particular drive is pretty
small. There's probably more chance of my winning the lottery...


-- 
Neil Bothwick

[unwieldy legal disclaimer would go here - feel free to type your own]




[OT] Re: [gentoo-user] snapshots?

2016-01-06 Thread Peter Humphrey
On Wednesday 06 January 2016 16:27:26 Neil Bothwick wrote:

> There's probably more chance of my winning the lottery...

Hardly, since they reduced our odds by a factor of 50 x 51 x 52 x ... x 59 
by adding ten more numbers.

:)

-- 
Rgds
Peter




Re: [gentoo-user] snapshots?

2016-01-05 Thread lee
Rich Freeman  writes:

> On Fri, Jan 1, 2016 at 5:42 AM, lee  wrote:
>> "Stefan G. Weichinger"  writes:
>>
>>> btrfs offers RAID-like redundancy as well, no mdadm involved here.
>>>
>>> The general recommendation now is to stay at level-1 for now. That fits
>>> your 2-disk-situation.
>>
>> Well, what shows better performance?  No btrfs-raid on hardware raid or
>> btrfs raid on JBOD?
>
> I would run btrfs on bare partitions and use btrfs's raid1
> capabilities.  You're almost certainly going to get better
> performance, and you get more data integrity features.

That would require me to set up software raid with mdadm as well, for
the swap partition.

> If you have a silent corruption with mdadm doing the raid1 then btrfs
> will happily warn you of your problem and you're going to have a
> really hard time fixing it,

BTW, what do you do when you have silent corruption on a swap partition?
Is that possible, or does swapping use its own checksums?

> [...]
>
>>>
>>> I would avoid converting and stuff.
>>>
>>> Why not try a fresh install on the new disks with btrfs?
>>
>> Why would I want to spend another year to get back to where I'm now?
>
> I wouldn't do a fresh install.  I'd just set up btrfs on the new disks
> and copy your data over (preserving attributes/etc).

That was the idea.

> I wouldn't do an in-place ext4->btrfs conversion.  I know that there
> were some regressions in that feature recently and I'm not sure where
> it stands right now.

That adds to the uncertainty of btrfs.


> [...]
>>
>> There you go, you end up with an odd setup.  I don't like /boot
>> partitions.  As well as swap partitions, they need to be on raid.  So
>> unless you use hardware raid, you end up with mdadm /and/ btrfs /and/
>> perhaps ext4, /and/ multiple partitions.
>
> [...]
> There isn't really anything painful about that setup though.

It's still odd.  I already have two different file systems and the
overhead of one kind of software raid while I would rather stick to one
file system.  With btrfs, I'd still have two different file systems ---
plus mdadm and the overhead of three different kinds of software raid.

How would it be so much better to triple the software raids and to still
have the same number of file systems?

>> When you use hardware raid, it
>> can be disadvantageous compared to btrfs-raid --- and when you use it
>> anyway, things are suddenly much more straightforward because everything
>> is on raid to begin with.
>
> I'd stick with mdadm.  You're never going to run mixed
> btrfs/hardware-raid on a single drive,

A single disk doesn't make for a raid.

> and the only time I'd consider
> hardware raid is with a high quality raid card.  You'd still have to
> convince me not to use mdadm even if I had one of those lying around.

From my own experience, I can tell you that mdadm already does have
significant overhead when you use a raid1 of two disks and a raid5 with
three disks.  This overhead may be somewhat due to the SATA controller
not being as capable as one would expect --- yet that doesn't matter
because one thing you're looking at, besides reliability, is the overall
performance.  And the overall performance very noticeably increased when
I migrated from mdadm raids to hardware raids, with the same disks and
the same hardware, except that the raid card was added.

And that was only 5 disks.  I also know that the performance with a ZFS
mirror with two disks was disappointingly poor.  Those disks aren't
exactly fast, but still.  I haven't tested yet if it changed after
adding 4 mirrored disks to the pool.  And I know that the performance of
another hardware raid5 with 6 disks was very good.

Thus I'm not convinced that software raid is the way to go.  I wish they
would make hardware ZFS (or btrfs, if it ever becomes reliable)
controllers.

Now consider:


+ candidates for hardware raid are two small disks (72GB each)
+ data on those is either mostly read, or temporary/cache-like
+ this setup works without any issues for over a year now
+ using btrfs would triple the software raids used
+ btrfs is uncertain, reliability questionable
+ mdadm would have to be added as another layer of complexity
+ the disks are SAS disks, genuinely made to be run in a hardware raid
+ the setup with hardware raid is straightforward and simple, the setup
  with btrfs is anything but


The relevant advantage of btrfs is being able to make snapshots.  Is
that worth all the (potential) trouble?  Snapshots are worthless when
the file system destroys them with the rest of the data.

> [...]
>> How's btrfs's performance when you use swap files instead of swap
>> partitions to avoid the need for mdadm?
>
> btrfs does not support swap files at present.

What happens when you try it?

> When it does you'll need to disable COW for them (using chattr)
> otherwise they'll be fragmented until your system grinds to a halt.  A
> swap file is about the worst case scenario for any 

Re: [gentoo-user] snapshots?

2016-01-05 Thread Neil Bothwick
On Tue, 05 Jan 2016 23:16:48 +0100, lee wrote:

> > I would run btrfs on bare partitions and use btrfs's raid1
> > capabilities.  You're almost certainly going to get better
> > performance, and you get more data integrity features.  
> 
> That would require me to set up software raid with mdadm as well, for
> the swap partition.

There's no need to use RAID for swap, it's not like it contains anything
of permanent importance. Create a swap partition on each disk and let
the kernel use the space as it wants.
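
If you give them all the same priority the kernel even round-robins
across them, which is as close to raid0 swap as you need; in /etc/fstab
that is something like (device names are only an example):

/dev/sda2   none   swap   sw,pri=1   0 0
/dev/sdb2   none   swap   sw,pri=1   0 0
/dev/sdc2   none   swap   sw,pri=1   0 0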
 
> The relevant advantage of btrfs is being able to make snapshots.  Is
> that worth all the (potential) trouble?  Snapshots are worthless when
> the file system destroys them with the rest of the data.

You forgot the data checksumming. If you use hardware RAID then btrfs
only sees a single disk. It can still warn you of corrupt data but it
cannot fix it because it only has the one copy.

> Well, then they need to make special provisions for swap files in btrfs
> so that we can finally get rid of the swap partitions.

I think there are more important priorities, it's not like having a swap
partition or two is a hardship or limitation.


-- 
Neil Bothwick

Being politically correct means always having to say you're sorry.




Re: [gentoo-user] snapshots?

2016-01-05 Thread Rich Freeman
On Tue, Jan 5, 2016 at 6:01 PM, Neil Bothwick  wrote:
> On Tue, 05 Jan 2016 23:16:48 +0100, lee wrote:
>
>> > I would run btrfs on bare partitions and use btrfs's raid1
>> > capabilities.  You're almost certainly going to get better
>> > performance, and you get more data integrity features.
>>
>> That would require me to set up software raid with mdadm as well, for
>> the swap partition.
>
> There's no need to use RAID for swap, it's not like it contains anything
> of permanent importance. Create a swap partition on each disk and let
> the kernel use the space as it wants.

So, while I tend not to run swap on RAID, it isn't an uncommon
approach because if you don't put swap on raid and you have a drive
failure while the system is running, then you are likely to have a
kernel panic.  Since one of the main goals of RAID is availability, it
is logical to put swap on RAID.

It is a risk thing.  If your system going down suddenly with no loss
of data in your regular filesystems isn't a huge problem (maybe this
is Google's 10,000th read-only caching server) then by all means don't
put swap on RAID.

The important thing is to understand the risks and make an informed decision.

-- 
Rich



Re: [gentoo-user] snapshots?

2016-01-05 Thread Rich Freeman
On Tue, Jan 5, 2016 at 5:16 PM, lee  wrote:
> Rich Freeman  writes:
>
>>
>> I would run btrfs on bare partitions and use btrfs's raid1
>> capabilities.  You're almost certainly going to get better
>> performance, and you get more data integrity features.
>
> That would require me to set up software raid with mdadm as well, for
> the swap partition.

Correct, if you don't want a panic if a single swap drive fails.

>
>> If you have a silent corruption with mdadm doing the raid1 then btrfs
>> will happily warn you of your problem and you're going to have a
>> really hard time fixing it,
>
> BTW, what do you do when you have silent corruption on a swap partition?
> Is that possible, or does swapping use its own checksums?

If the kernel pages in data from the good mirror, nothing happens.  If
the kernel pages in data from the bad mirror, then whatever data
happens to be there is what will get loaded and used and/or executed.
If you're lucky the modified data will be part of unused heap or
something.  If not, well, just about anything could happen.

Nothing in this scenario will check that the data is correct, except
for a forced scrub of the disks.  A scrub would probably detect the
error, but I don't think mdadm has any ability to recover it.  Your
best bet is probably to try to immediately reboot and save what you
can, or a less-risky solution assuming you don't have anything
critical in RAM is to just do an immediate hard reset so that there is
no risk of bad data getting swapped in and overwriting good data on
your normal filesystems.
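
For what it's worth, a manual scrub of an md array is just this (md0 is
only an example):

# echo check > /sys/block/md0/md/sync_action
# cat /sys/block/md0/md/mismatch_cnt

Writing "repair" instead of "check" makes the copies consistent again,
but for raid1 md simply copies one mirror over the other without any
idea which one was correct.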

> It's still odd.  I already have two different file systems and the
> overhead of one kind of software raid while I would rather stick to one
> file system.  With btrfs, I'd still have two different file systems ---
> plus mdadm and the overhead of three different kinds of software raid.

I'm not sure why you'd need two different filesystems.  Just btrfs for
your data.  I'm not sure where you're counting three types of software
raid either - you just have your swap.  And I don't think any of this
involves any significant overhead, other than configuration.

>
> How would it be so much better to triple the software raids and to still
> have the same number of file systems?

Well, the difference would be more data integrity insofar as hardware
failure goes, but certainly more risk of logical errors (IMO).

>
>>> When you use hardware raid, it
>>> can be disadvantageous compared to btrfs-raid --- and when you use it
>>> anyway, things are suddenly much more straightforward because everything
>>> is on raid to begin with.
>>
>> I'd stick with mdadm.  You're never going to run mixed
>> btrfs/hardware-raid on a single drive,
>
> A single disk doesn't make for a raid.

You misunderstood my statement.  If you have two drives, you can't run
both hardware raid and btrfs raid across them.  Hardware raid setups
don't generally support running across only part of a drive, and in
this setup you'd have to run hardware raid on part of each of two
single drives.

>
>> and the only time I'd consider
>> hardware raid is with a high quality raid card.  You'd still have to
>> convince me not to use mdadm even if I had one of those lying around.
>
> From my own experience, I can tell you that mdadm already does have
> significant overhead when you use a raid1 of two disks and a raid5 with
> three disks.  This overhead may be somewhat due to the SATA controller
> not being as capable as one would expect --- yet that doesn't matter
> because one thing you're looking at, besides reliability, is the overall
> performance.  And the overall performance very noticeably increased when
> I migrated from mdadm raids to hardware raids, with the same disks and
> the same hardware, except that the raid card was added.

Well, sure, the raid card probably had battery-backed cache if it was
decent, so linux could complete its commits to RAM and not have to
wait for the disks.

>
> And that was only 5 disks.  I also know that the performance with a ZFS
> mirror with two disks was disappointingly poor.  Those disks aren't
> exactly fast, but still.  I haven't tested yet if it changed after
> adding 4 mirrored disks to the pool.  And I know that the performance of
> another hardware raid5 with 6 disks was very good.

You're probably going to find the performance of a COW filesystem to
be inferior to that of an overwrite-in-place filesystem, simply
because the latter has to do less work.

>
> Thus I'm not convinced that software raid is the way to go.  I wish they
> would make hardware ZFS (or btrfs, if it ever becomes reliable)
> controllers.

I doubt it would perform any better.  What would that controller do
that your CPU wouldn't do?  Well, other than have battery-backed
cache, which would help in any circumstance.  If you stuck 5 raid
cards in your PC and put one drive on each card and put mdadm or ZFS
across all five it would almost certainly perform better because
you're 

Re: [gentoo-user] snapshots?

2016-01-01 Thread lee
"Stefan G. Weichinger"  writes:

> On 12/30/2015 10:14 PM, lee wrote:
>> Hi,
>> 
>> soon I'll be replacing the system disks and will copy over the existing
>> system to the new disks.  I'm wondering how much merit there would be in
>> being able to make snapshots to be able to revert back to a previous
>> state when updating software or when installing packages to just try
>> them out.
>> 
>> To be able to make snapshots, I could use btrfs on the new disks.  When
>> using btrfs, I could use the hardware RAID-1 as I do now, or I could use
>> the raid features of btrfs instead to create a RAID-1.
>> 
>> 
>> Is it worthwhile to use btrfs?
>
> Yes.
>
> ;-)
>
>> Am I going to run into problems when trying to boot from the new disks
>> when I use btrfs?
>
> Yes.
>
> ;-)
>
> well ... maybe.
>
> prepare for some learning curve. but it is worth it!

So how does that go?  Having trouble booting is something I really don't
need.

>> Am I better off using the hardware raid or software raid if I use btrfs?
>
> I would be picky here and separate "software raid" from "btrfs raid":
>
> software raid .. you think of mdadm-based software RAID as we know it in
> the linux world?

I'm referring to the software raid btrfs uses.

> btrfs offers RAID-like redundancy as well, no mdadm involved here.
>
> The general recommendation now is to stay at level-1 for now. That fits
> your 2-disk-situation.

Well, what shows better performance?  No btrfs-raid on hardware raid or
btrfs raid on JBOD?

>> Suggestions?
>
> I would avoid converting and stuff.
>
> Why not try a fresh install on the new disks with btrfs?

Why would I want to spend another year to get back to where I'm now?

> You can always step back and plug in the old disks.
> You could even add your new disks *beside the existing system and set up
> a new rootfs alongside (did that several times here).

The plan is to replace the 3.5" SAS disks with 1TB disks.  There is no
room to fit any more 3.5" disks.  Switching disks all the time is not an
option.

That's why I want to use the 2.5" SAS disks.  But I found out that I
can't fit those as planned.  Unless I tape them to the bottom of the
case or something, I'm out of options :(  However, if I tape them, I could
use four instead of two ...

> There is nearly no partitioning needed with btrfs (one of the great
> benefits).

That depends.  Try to install on btrfs when you have 4TB disks.  That
totally sucks, even without btrfs.  Add btrfs and it doesn't work at
all --- at least not with Debian, though I was thinking all the time
that if that wasn't Debian but Gentoo, it would just work ...

With 72GB disks, there's nearly no partitioning involved, either.  And
the system is currently only 20GB, including two VMs.

> I never had /boot on btrfs so far, maybe others can guide you with this.
>
> My /boot is plain extX on maybe RAID1 (differs on
> laptops/desktop/servers), I size it 500 MB to have space for multiple
> kernels (especially on dualboot-systems).
>
> Then some swap-partitions, and the rest for btrfs.

There you go, you end up with an odd setup.  I don't like /boot
partitions.  As well as swap partitions, they need to be on raid.  So
unless you use hardware raid, you end up with mdadm /and/ btrfs /and/
perhaps ext4, /and/ multiple partitions.  When you use hardware raid, it
can be disadvantageous compared to btrfs-raid --- and when you use it
anyway, things are suddenly much more straightforward because everything
is on raid to begin with.

We should be able to get away with something really straightforward,
like btrfs-raid on unpartitioned devices and special provisions in btrfs
for swap space so that we don't need extra swap partitions anymore.  The
swap space could even be allowed to grow (to some limit) and shrink back
to a starting size after a reboot.

> So you will have something like /dev/sd[ab]3 for btrfs then.

But I want straightforward :)

> Create your btrfs-"pool" with:
>
> # mkfs.btrfs -m raid1 -d raid1 /dev/sda3 /dev/sdb3
>
> Then check for your btrfs-fs with:
>
> # btrfs fi show
>
> Oh: I realize that I start writing a howto here ;-)

That doesn't work without an extra /boot partition?

How's btrfs's performance when you use swap files instead of swap
partitions to avoid the need for mdadm?

> In short:
>
> In my opinion it is worth learning to use btrfs.
> checksums, snapshots, subvolumes, compression ... bla ...
>
> It has some learning curve, especially with a distro like gentoo.
> But it is manageable.

Well, I found it pretty easy since you can always look up how to do
something.  The question is whether it's worthwhile or not.  If I had
time, I could do some testing ...

Now I understand that it's apparently not possible to simply make a
btrfs-raid1 from the two raw disks, copy the system over, install grub
and boot from that.  (I could live with swap files instead of swap
partitions.)

> As mentioned here several times I am using btrfs on >6 of my systems for
> years now. And I don't look back so far.

Re: [gentoo-user] snapshots?

2016-01-01 Thread Rich Freeman
On Fri, Jan 1, 2016 at 5:42 AM, lee  wrote:
> "Stefan G. Weichinger"  writes:
>
>> btrfs offers RAID-like redundancy as well, no mdadm involved here.
>>
>> The general recommendation now is to stay at level-1 for now. That fits
>> your 2-disk-situation.
>
> Well, what shows better performance?  No btrfs-raid on hardware raid or
> btrfs raid on JBOD?

I would run btrfs on bare partitions and use btrfs's raid1
capabilities.  You're almost certainly going to get better
performance, and you get more data integrity features.  If you have a
silent corruption with mdadm doing the raid1 then btrfs will happily
warn you of your problem and you're going to have a really hard time
fixing it, because btrfs only sees one copy of the data which is bad,
and all mdadm can tell you is that the data is inconsistent with no
idea which one is right.  You'd end up having to try to manipulate the
underlying data to figure out which one is right and fix it (the data
is all there, but you'd probably end up hex-editing your disks).  If
you were using btrfs raid1 you'd just run a scrub and it would
detect/fix the problem, since btrfs would see both copies and know
which one is right.  Then if you ever move to raid5 when that matures
you eliminate the write hole with btrfs.

>>
>> I would avoid converting and stuff.
>>
>> Why not try a fresh install on the new disks with btrfs?
>
> Why would I want to spend another year to get back to where I'm now?

I wouldn't do a fresh install.  I'd just set up btrfs on the new disks
and copy your data over (preserving attributes/etc).  Before I did
that I'd create any subvolumes you want to have on the new disks and
copy the data into them.  The only way to convert a directory into a
subvolume after the fact is to create a subvolume with the new name,
copy the directory into it, and then rename the directory and
subvolume to swap their names, then delete the old directory.  That is
time-consuming, and depending on what directory you're talking about
you might want to be in single-user or boot from a rescue disk to do
it.
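
If you do end up converting a directory after the fact, it is roughly
this (paths are only an example, and do it while nothing is writing to
the directory):

# btrfs subvolume create /mnt/new/home.subvol
# rsync -aHAX /mnt/new/home/ /mnt/new/home.subvol/
# mv /mnt/new/home /mnt/new/home.old
# mv /mnt/new/home.subvol /mnt/new/home
# rm -rf /mnt/new/home.old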

I wouldn't do an in-place ext4->btrfs conversion.  I know that there
were some regressions in that feature recently and I'm not sure where
it stands right now.

>> I never had /boot on btrfs so far, maybe others can guide you with this.
>>
>> My /boot is plain extX on maybe RAID1 (differs on
>> laptops/desktop/servers), I size it 500 MB to have space for multiple
>> kernels (especially on dualboot-systems).
>>
>> Then some swap-partitions, and the rest for btrfs.
>
> There you go, you end up with an odd setup.  I don't like /boot
> partitions.  As well as swap partitions, they need to be on raid.  So
> unless you use hardware raid, you end up with mdadm /and/ btrfs /and/
> perhaps ext4, /and/ multiple partitions.

With grub2 you can boot from btrfs.  I used to use a separate boot
partition on ext4 with btrfs for the rest, but now my /boot is on my
root partition.  I'd still partition space for a boot partition in
case you move to EFI in the future but I wouldn't bother formatting it
or setting it up right now.  As long as you're using grub2 you really
don't need to do anything special.

You DO need to partition your disks though, even if you only have one
big partition for the whole thing.  The reason is that this gives
space for grub to stick its loaders/etc on the disk.
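
For BIOS booting that just means leaving the usual gap before the first
partition (or a small BIOS boot partition on GPT) and installing grub to
both disks so either one can boot on its own, something like this (the
commands may be called grub2-install/grub2-mkconfig depending on how the
distro names them):

# grub-install /dev/sda
# grub-install /dev/sdb
# grub-mkconfig -o /boot/grub/grub.cfg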

I don't use swap.  If I did I'd probably set up an mdadm array for it.
According to the FAQ btrfs still doesn't support swap from a file.

There isn't really anything painful about that setup though.  Swap
isn't needed to boot, so openrc/systemd will start up mdadm and
activate your swap.  I'm not sure if dracut will do that during early
boot or not, but it doesn't really matter if it does.

If you have two drives I'd just set them up as:
sd[ab]1 - 1GB boot partition unformatted for future EFI
sd[ab]2 - mdadm raid1 for swap
sd[ab]3 - btrfs
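
Setting that up is only a handful of commands, something like this
(device and array names are just an example):

# mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
# mkswap /dev/md0
# swapon /dev/md0
# mkfs.btrfs -m raid1 -d raid1 /dev/sda3 /dev/sdb3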


> When you use hardware raid, it
> can be disadvantageous compared to btrfs-raid --- and when you use it
> anyway, things are suddenly much more straightforward because everything
> is on raid to begin with.

I'd stick with mdadm.  You're never going to run mixed
btrfs/hardware-raid on a single drive, and the only time I'd consider
hardware raid is with a high quality raid card.  You'd still have to
convince me not to use mdadm even if I had one of those lying around.

>> Create your btrfs-"pool" with:
>>
>> # mkfs.btrfs -m raid1 -d raid1 /dev/sda3 /dev/sdb3
>>
>> Then check for your btrfs-fs with:
>>
>> # btrfs fi show
>>
>> Oh: I realize that I start writing a howto here ;-)
>
> That doesn't work without an extra /boot partition?

It works fine without a boot partition if you're using grub2.  If you
want to use grub legacy you'll need a boot partition.

>
> How's btrfs's performance when you use swap files instead of swap
> partitions to avoid the need for mdadm?

btrfs does not support swap files at present.  When it does you'll
need to disable COW for them (using chattr) otherwise they'll be
fragmented until your system grinds to a halt.  A swap file is about
the worst case scenario for any COW filesystem.
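
Once that lands, the dance will presumably look something like this
(the path is just an example, and chattr +C only works on a file that
is still empty):

# touch /swapfile
# chattr +C /swapfile                         (disable COW before writing)
# dd if=/dev/zero of=/swapfile bs=1M count=8192
# chmod 600 /swapfile
# mkswap /swapfile
# swapon /swapfile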

[gentoo-user] snapshots?

2015-12-30 Thread lee
Hi,

soon I'll be replacing the system disks and will copy over the existing
system to the new disks.  I'm wondering how much merit there would be in
being able to make snapshots to be able to revert back to a previous
state when updating software or when installing packages to just try
them out.

To be able to make snapshots, I could use btrfs on the new disks.  When
using btrfs, I could use the hardware RAID-1 as I do now, or I could use
the raid features of btrfs instead to create a RAID-1.


Is it worthwhile to use btrfs?

Am I going to run into problems when trying to boot from the new disks
when I use btrfs?

Am I better off using the hardware raid or software raid if I use btrfs?


The installation/setup is simple: 2x3.5" are to be replaced by 2x2.5",
each 15krpm, 72GB SAS disks, so no fancy partitioning is involved.

(I need the physical space to plug in more 3.5" disks for storage.  Sure
I have considered SSDs, but they would cost 20 times as much and provide
no significant advantage in this case.)


I could just replace one disk after the other and let the hardware raid
do it all for me.  A rebuild takes only 10 minutes or so.  Then I could
convert the file system to btrfs, or leave it as is.  That might even be
the safest bet because I can't miss anything when copying.  (What the
heck do I have it for? :) )


Suggestions?



Re: [gentoo-user] snapshots?

2015-12-30 Thread Stefan G. Weichinger
On 12/30/2015 10:14 PM, lee wrote:
> Hi,
> 
> soon I'll be replacing the system disks and will copy over the existing
> system to the new disks.  I'm wondering how much merit there would be in
> being able to make snapshots to be able to revert back to a previous
> state when updating software or when installing packages to just try
> them out.
> 
> To be able to make snapshots, I could use btrfs on the new disks.  When
> using btrfs, I could use the hardware RAID-1 as I do now, or I could use
> the raid features of btrfs instead to create a RAID-1.
> 
> 
> Is it worthwhile to use btrfs?

Yes.

;-)

> Am I going to run into problems when trying to boot from the new disks
> when I use btrfs?

Yes.

;-)

well ... maybe.

prepare for some learning curve. but it is worth it!

> Am I better off using the hardware raid or software raid if I use btrfs?

I would be picky here and separate "software raid" from "btrfs raid":

software raid .. you think of mdadm-based software RAID as we know it in
the linux world?

btrfs offers RAID-like redundancy as well, no mdadm involved here.

The general recommendation now is to stay at level-1 for now. That fits
your 2-disk-situation.

> The installation/setup is simple: 2x3.5" are to be replaced by 2x2.5",
> each 15krpm, 72GB SAS disks, so no fancy partitioning is involved.
> 
> (I need the physical space to plug in more 3.5" disks for storage.  Sure
> I have considered SSDs, but they would cost 20 times as much and provide
> no significant advantage in this case.)
> 
> 
> I could just replace one disk after the other and let the hardware raid
> do it all for me.  A rebuild takes only 10 minutes or so.  Then I could
> convert the file system to btrfs, or leave it as is.  That might even be
> the safest bet because I can't miss anything when copying.  (What the
> heck do I have it for? :) )
> 
> 
> Suggestions?

I would avoid converting and stuff.

Why not try a fresh install on the new disks with btrfs?
You can always step back and plug in the old disks.
You could even add your new disks *beside the existing system and set up
a new rootfs alongside (did that several times here).

-

There is nearly no partitioning needed with btrfs (one of the great
benefits).

I never had /boot on btrfs so far, maybe others can guide you with this.

My /boot is plain extX on maybe RAID1 (differs on
laptops/desktop/servers), I size it 500 MB to have space for multiple
kernels (especially on dualboot-systems).

Then some swap-partitions, and the rest for btrfs.

So you will have something like /dev/sd[ab]3 for btrfs then.

Create your btrfs-"pool" with:

# mkfs.btrfs -m raid1 -d raid1 /dev/sda3 /dev/sdb3

Then check for your btrfs-fs with:

# btrfs fi show
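
Then mount it and carve out subvolumes; the snapshots you are after are
just one more command.  Something along these lines, where the names are
only how I tend to lay things out:

# mount /dev/sda3 /mnt/btrfs
# btrfs subvolume create /mnt/btrfs/rootfs
# btrfs subvolume snapshot -r /mnt/btrfs/rootfs /mnt/btrfs/rootfs-pre-update
# btrfs subvolume list /mnt/btrfs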

Oh: I realize that I start writing a howto here ;-)

In short:

In my opinion it is worth learning to use btrfs.
checksums, snapshots, subvolumes, compression ... bla ...

It has some learning curve, especially with a distro like gentoo.
But it is manageable.

As mentioned here several times I am using btrfs on >6 of my systems for
years now. And I don't look back so far.

-

look up:

https://btrfs.wiki.kernel.org/index.php/Using_Btrfs_with_Multiple_Devices