Re: RAID1 vs RAID10 and best way to set up 6 disks

2016-06-03 Thread Mitchell Fossen
Thanks for pointing that out. So if I'm understanding correctly, with RAID1
it's just that there is one other copy of the data somewhere on some other
drive.

With RAID10, there's still only 1 other copy, but the entire "original"
disk is mirrored to another one, right?

On Fri, 2016-06-03 at 20:13 +0200, Christoph Anton Mitterer wrote:
> On Fri, 2016-06-03 at 13:10 -0500, Mitchell Fossen wrote:
> > 
> > Are there any caveats between RAID1 on all 6 vs RAID10?
> Just to be safe: RAID1 in btrfs does not mean what RAID1 means in any
> other RAID terminology.
> 
> The former keeps only two copies of the data, while the latter means
> full mirroring across all devices.
> 
> 
> Cheers,
> Chris.
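
(For the archive, a minimal sketch of how each profile could be created and
checked on six disks; the device names /dev/sd[b-g] and the mount point
/mnt/data are placeholders:)

    # btrfs "raid1": two copies of every chunk, spread across the six devices
    mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg

    # btrfs "raid10": still two copies, but striped across mirrored pairs
    mkfs.btrfs -d raid10 -m raid10 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg

    # after mounting, confirm the profile in use and the current allocation
    btrfs filesystem df /mnt/data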


RAID1 vs RAID10 and best way to set up 6 disks

2016-06-03 Thread Mitchell Fossen
Hello,

I have 6 WD Red Pro drives, 6TB each. My question is: what is
the best way to set these up?

The system and root filesystem are on a 500GB SSD, so these drives will
only be used for /home and file storage.

Are there any caveats between RAID1 on all 6 vs RAID10?

Thanks for the help,

Mitch


Re: btrfs: poor performance on deleting many large files

2015-11-25 Thread Mitchell Fossen
On Mon, 2015-11-23 at 06:29 +, Duncan wrote:

> Using subvolumes was the first recommendation I was going to make, too, 
> so you're on the right track. =:^)
> 
> Also, in case you are using it (you didn't say, but this has been 
> demonstrated to solve similar issues for others so it's worth 
> mentioning), try turning btrfs quota functionality off.  While the devs 
> are working very hard on that feature for btrfs, the fact is that it's 
> simply still buggy and doesn't work reliably anyway, in addition to 
> triggering scaling issues before they'd otherwise occur.  So my 
> recommendation has been, and remains: unless you're working directly 
> with the devs to fix quota issues (in which case, thanks!), use a 
> filesystem where quotas work reliably if you actually NEED them, and 
> if you don't, just turn them off and avoid the scaling and other 
> issues that currently come with them.
> 

I did indeed have quotas turned on for the home directories! They were there
mostly to track how much space everyone was using (since du -hs is so slow),
not to actually enforce limits, so I disabled them.
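
(For reference, a minimal sketch of the disabling step, assuming /home is the
mount point of the filesystem in question:)

    # optional: see what the qgroups were tracking before turning them off
    btrfs qgroup show /home

    # turn quota tracking off for the whole filesystem
    btrfs quota disable /home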

> As for defrag, that's quite a topic of its own, with complications 
> related to snapshots and the nocow file attribute.  Very briefly, if you 
> haven't been running it regularly or using the autodefrag mount option by 
> default, chances are your available free space is rather fragmented as 
> well, and while defrag may help, it may not reduce fragmentation to the 
> degree you'd like.  (I'd suggest using filefrag to check fragmentation, 
> but it doesn't know how to deal with btrfs compression, and will report 
> heavy fragmentation for compressed files even if they're fine.  Since you 
> use compression, that kind of eliminates using filefrag to actually see 
> what your fragmentation is.)
> Additionally, defrag isn't snapshot aware (they tried it for a few 
> kernels a couple years ago but it simply didn't scale), so if you're 
> using snapshots (as I believe Ubuntu does by default on btrfs, at least 
> taking snapshots for upgrade-in-place), using defrag on files that also 
> exist in the snapshots can dramatically increase space usage, since 
> defrag will break the reflinks to the snapshotted extents and create 
> new extents for the defragged files.
> 
> Meanwhile, the absolute worst-case fragmentation on btrfs occurs with  
> random-internal-rewrite-pattern files (as opposed to never changed, or 
> append-only).  Common examples are database files and VM images.  For 
> /relatively/ small files, up to say 256 MiB, the autodefrag mount option 
> is a reasonably effective solution, but it tends to have scaling issues 
> with files over half a GiB, so you can call this a negative recommendation for 
> trying that option with half-gig-plus internal-random-rewrite-pattern 
> files.  There are other mitigation strategies that can be used, but here 
> the subject gets complex so I'll not detail them.  Suffice it to say that 
> if the filesystem in question is used with large VM images or database 
> files and you haven't taken specific fragmentation avoidance measures, 
> that's very likely a good part of your problem right there, and you can 
> call this a hint that further research is called for.
> 
> If your half-gig-plus files are mostly write-once, however, for example 
> most media files unless you're doing heavy media editing, then autodefrag 
> could be a good option in general, as it deals well with such files and 
> with random-internal-rewrite-pattern files under a quarter gig or so.  Be 
> aware, however, that if it's enabled on an already heavily fragmented 
> filesystem (as yours likely is), it's likely to actually make performance 
> worse until it gets things under control.  Your best bet in that case, if 
> you have spare devices available to do so, is probably to create a fresh 
> btrfs and consistently use autodefrag as you populate it from the 
> existing heavily fragmented btrfs.  That way, the fragmentation never 
> has a chance to build up in the first place, and autodefrag used as a 
> routine mount option should keep it from getting bad in normal use.
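
(For reference, a minimal sketch of that migrate-to-a-fresh-btrfs approach;
the spare device /dev/sdx and the temporary mount point /mnt/new are
placeholders:)

    # fresh filesystem on the spare device, mounted with autodefrag from the start
    mkfs.btrfs /dev/sdx
    mkdir -p /mnt/new
    mount -o autodefrag,compress=lzo /dev/sdx /mnt/new

    # copy the data over while autodefrag is active, preserving attributes
    rsync -aHAX /home/ /mnt/new/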

Thanks for explaining that! Most of these files are written once and then read
from for the rest of their "lifetime" until the simulations are done and they
get archived/deleted. I'll try leaving autodefrag on and defragging directories
over the holiday weekend when no one is using the server. There is some database
usage, but I turned off COW for its folder, and since it only gets used
sporadically it shouldn't be a huge factor in day-to-day usage.
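
(Concretely, something like this is what I have in mind for the defrag pass,
plus the nocow attribute for the database folder; /home, lzo compression, and
/home/dbdir are all placeholders/assumptions:)

    # one-off recursive defrag of the home directories, recompressing with lzo
    # (as noted above, this breaks reflinks to any snapshotted extents)
    btrfs filesystem defragment -r -v -clzo /home

    # nocow for the database folder: +C only takes effect for newly created
    # files, so it has to be set on the directory before the files are created
    chattr +C /home/dbdir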

Also, is there a recommendation for relatime vs noatime mount options? I don't
believe anything that runs on the server needs to use file access times, so if
it can help with performance/disk usage I'm fine with setting it to noatime.
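
(If I do set it, the /home entry in fstab would look something like this; the
UUID is a placeholder, and compress=lzo/autodefrag are just the options
discussed above:)

    UUID=<fs-uuid>  /home  btrfs  defaults,noatime,autodefrag,compress=lzo  0  0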

I just tried copying a 70GB folder and then rm -rf'ing it, and it didn't appear
to impact performance. I plan to try some larger