Re: [gentoo-user] OT: btrfs raid 5/6

2017-12-01 Thread Wols Lists
On 01/12/17 17:14, Rich Freeman wrote:
> You could run btrfs over md-raid, but other than the snapshots I think
> this loses a lot of the benefit of btrfs in the first place.  You are
> vulnerable to the write hole,

The write hole is now "fixed".

In quotes because, although journalling has now been merged and is
available, there still seem to be a few corner-case (and not-so-corner-case)
bugs that need ironing out before it's solid.
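
For anyone wanting to try it: the journal is configured at array creation
time with mdadm's --write-journal option, ideally pointing at a fast SSD
or NVMe partition. A rough sketch (device names purely illustrative):

  mdadm --create /dev/md0 --level=6 --raid-devices=4 \
        --write-journal=/dev/nvme0n1p1 \
        /dev/sda /dev/sdb /dev/sdc /dev/sdd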

Cheers,
Wol



Re: [gentoo-user] OT: btrfs raid 5/6

2017-12-01 Thread Rich Freeman
On Fri, Dec 1, 2017 at 11:58 AM, Wols Lists  wrote:
> On 27/11/17 22:30, Bill Kenworthy wrote:
>> Hi all,
>>   I need to expand two bcache-fronted 4-disk btrfs raid 10s - this
>> requires purchasing 4 drives (and one system does not have room for two
>> more drives), so I am trying to see if using raid 5 is an option.
>>
>> I have been trying to find out if btrfs raid 5/6 is stable enough to
>> use, but while there is mention of improvements in kernel 4.12, and
>> fixes for the write hole problem, I can't see any reports that it's
>> "working fine now", though there is a Phoronix article saying Oracle
>> has been using it since the fixes.
>>
>> Is anyone here successfully using btrfs raid 5/6?  What is the status
>> of scrub and self-healing?  The btrfs wiki is woefully out of date :(
>>
> Or put btrfs over md-raid?
>
> Thing is, with raid-6 over four drives, you have a 100% certainty of
> surviving a two-disk failure. With raid-10, a two-disk failure gives
> you a 33% chance of losing your array.
>

I tend to be a fan of parity raid in general for these reasons.  I'm
not sure the performance gains with raid-10 are enough to warrant the
waste of space.

With btrfs, though, I don't really see the point of "Raid-10" vs just a
pile of individual disks in raid1 mode.  Btrfs will do a so-so job of
balancing the IO across them already (they haven't really bothered to
optimize this yet).
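
That's just btrfs's normal multi-device raid1 profile, by the way -
something along these lines, with the device names purely illustrative:

  mkfs.btrfs -m raid1 -d raid1 /dev/sdb /dev/sdc /dev/sdd /dev/sde
  mount /dev/sdb /mnt/data   # any member device will do for mounting

and raid1 always keeps exactly two copies, however many devices are in
the filesystem.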

I've moved away from btrfs entirely until they sort things out.
However, I would not use btrfs for raid-5/6 under any circumstances.
That has NEVER been stable, and if anything it has gone backwards.  I'm
sure they'll sort it out sometime, but I have no idea when.  RAID-1 on
btrfs is reasonably stable, but I've still run into issues with it
(nothing that kept me from reading the data off the array, but by the
time I finally moved it to ZFS it was in a state where I couldn't run
it in anything other than degraded mode).

You could run btrfs over md-raid, but other than the snapshots I think
this loses a lot of the benefit of btrfs in the first place.  You are
vulnerable to the write hole, the ability of btrfs to recover data
with soft errors is compromised (though you can detect it still), and
you're potentially faced with more read-modify-write cycles when raid
stripes are modified.  Both zfs and btrfs were really designed to work
best on raw block devices without any layers below.  They still work
of course, but you don't get some of those optimizations since they
don't have visibility into what is happening at the disk level.
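
To make the self-healing point concrete: with btrfs on top of md there is
only one filesystem-level copy of the data, so a checksum failure can be
detected but not repaired from another copy, whereas a native btrfs raid1
scrub can rewrite the bad copy from the good one.  Roughly (device names
and mount point purely illustrative):

  # btrfs on md-raid: checksums give detection only for data
  mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sd[b-e]
  mkfs.btrfs -d single -m dup /dev/md0

  # native btrfs raid1: scrub can repair from the second copy
  mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc
  btrfs scrub start /mnt/data    # run against the mounted filesystem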

-- 
Rich



Re: [gentoo-user] OT: btrfs raid 5/6

2017-12-01 Thread Wols Lists
On 27/11/17 22:30, Bill Kenworthy wrote:
> Hi all,
>   I need to expand two bcache-fronted 4-disk btrfs raid 10s - this
> requires purchasing 4 drives (and one system does not have room for two
> more drives), so I am trying to see if using raid 5 is an option.
> 
> I have been trying to find out if btrfs raid 5/6 is stable enough to
> use, but while there is mention of improvements in kernel 4.12, and
> fixes for the write hole problem, I can't see any reports that it's
> "working fine now", though there is a Phoronix article saying Oracle
> has been using it since the fixes.
> 
> Is anyone here successfully using btrfs raid 5/6?  What is the status
> of scrub and self-healing?  The btrfs wiki is woefully out of date :(
> 
Or put btrfs over md-raid?

Thing is, with raid-6 over four drives, you have a 100% certainty of
surviving a two-disk failure. With raid-10, a two-disk failure gives you
a 33% chance of losing your array.
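
(The arithmetic behind that: four drives give 6 possible two-disk failure
combinations.  Raid-6 survives all 6.  Raid-10 is two mirrored pairs, and
2 of the 6 combinations take out both halves of the same pair, so 2/6 =
roughly 33% of double failures lose the array.)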

Cheers,
Wol



Re: [gentoo-user] OT: btrfs raid 5/6

2017-12-01 Thread J. Roeleveld
On Monday, November 27, 2017 11:30:13 PM CET Bill Kenworthy wrote:
> Hi all,
>   I need to expand two bcache-fronted 4-disk btrfs raid 10s - this
> requires purchasing 4 drives (and one system does not have room for two
> more drives), so I am trying to see if using raid 5 is an option.
> 
> I have been trying to find out if btrfs raid 5/6 is stable enough to
> use, but while there is mention of improvements in kernel 4.12, and
> fixes for the write hole problem, I can't see any reports that it's
> "working fine now", though there is a Phoronix article saying Oracle
> has been using it since the fixes.
> 
> Is anyone here successfully using btrfs raid 5/6?  What is the status
> of scrub and self-healing?  The btrfs wiki is woefully out of date :(
> 
> BillK

I have not seen any indication that BTRFS raid 5/6/.. is usable.
The last status I heard: no scrub, no rebuild when a disk fails, ...
It should work as long as all disks stay functioning, but then I wonder why
bother with anything more advanced than raid-0?

It's the lack of progress with regard to proper "raid" support in BTRFS that
made me stop considering it; I simply went with ZFS instead.
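
For comparison, a double-parity pool plus a scrub is a one-liner each on
ZFS (pool and device names purely illustrative):

  zpool create tank raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde
  zpool scrub tank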

--
Joost




Re: [gentoo-user] Re: is multi-core really worth it?

2017-12-01 Thread J. Roeleveld
On 28 November 2017 11:07:58 GMT+01:00, Raffaele Belardi wrote:
>Raffaele Belardi wrote:
>> Hi,
>> 
>> rebuilding system and world with gcc-7.2.0 on a 6-core AMD CPU I have
>> the impression that most of the ebuilds limit parallel builds to 1, 2
>> or 3 threads. I'm aware it is only an impression, I did not spend the
>> night monitoring the process, but nevertheless every time I checked
>> the load was very low.
>> 
>> Does anyone have real-world statistics of CPU usage based on a Gentoo
>> world build?
>
>I graphed the number of parallel ebuilds while doing an 'emerge -e' world
>on a 4-core CPU; the graph is attached. There is an initial peak of
>ebuilds, but I assume it is spurious data due to delayed output. Then
>there is a long interval during which few (~2) ebuilds are running. This
>may be due to lack of data (~700 MB still had to be downloaded when I
>started the emerge) or due to dependencies. Then, after ~500 merged
>packages, the number of parallel ebuilds finally rises to something very
>close to the requested 5.
>
>Note: the graph represents the number of parallel ebuilds in time, not
>the number of parallel jobs. The latter would be more interesting but
>requires a lot more effort.
>
>Note also in the log that near the seamonkey build the load rises to 15
>jobs; I suppose seamonkey and two other potentially massively parallel
>builds started with low parallelism, fooling emerge into starting all
>three of them, but then each one spawned the full -j5 jobs requested by
>MAKEOPTS. There's little emerge can do in these cases to maintain the
>load average.
>
>All of this just to convince myself that yes, it is worth it!
>
>raffaele
>
>Method:
>The relevant part of the command line:
># MAKEOPTS="-j5" EMERGE_DEFAULT_OPTS="--jobs=3 --load-average=5" emerge -e world
>on a 4-core CPU.
>In the log I substituted a +1 for every 'Emerging' and a -1 for every
>'Installing', removed the rest of the line, summed, and graphed the
>result.
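
For reference, that counting step can be done with something along these
lines, where 'build.log' is a hypothetical capture of the emerge output;
it prints a running count of concurrent ebuilds that can be fed straight
to a plotting tool:

  awk '/Emerging/ { n++; print n } /Installing/ { n--; print n }' build.log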

Add the load-average part to MAKEOPTS as well, and make will keep the job
count down when the load rises.
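
Something along these lines in /etc/portage/make.conf, for example (the
numbers are just the ones from this thread - untested, adjust to taste):

  MAKEOPTS="-j5 -l5"
  EMERGE_DEFAULT_OPTS="--jobs=3 --load-average=5"

make's -l/--load-average stops it spawning new jobs while the load is
above the limit, and emerge applies the same idea at the package level.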

--
Joost
-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.