Re: Monitoring Btrfs

2016-10-19 Thread Austin S. Hemmelgarn
On 2016-10-19 09:06, Anand Jain wrote: On 10/19/16 19:15, Austin S. Hemmelgarn wrote: On 2016-10-18 17:36, Anand Jain wrote: I would like to monitor my btrfs-filesystem for missing drives. This is actually correct behavior, the filesystem reports that it should have 6 devices, which

Re: Btrfs dev del

2016-10-18 Thread Austin S. Hemmelgarn
On 2016-10-18 11:02, Stefan Malte Schumacher wrote: Hello One of the drives which I added to my array two days ago was most likely already damaged when I bought it - 312 read errors while scrubbing and lots of SMART errors. I want to take the drive out, go to my hardware vendor and have it

Re: Monitoring Btrfs

2016-10-18 Thread Austin S. Hemmelgarn
On 2016-10-17 16:40, Chris Murphy wrote: May be better to use /sys/fs/btrfs//devices to find the device to monitor, and then monitor them with blktrace - maybe there's some courser granularity available there, I'm not sure. The thing is, as far as Btrfs alone is concerned, a drive can be "bad"
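The sysfs path mentioned above is normally /sys/fs/btrfs/<UUID>/devices (the UUID component appears to have been stripped by the archive). A minimal sketch of enumerating it follows; the sysfs root is parameterized only so the logic can be exercised against a fake tree, and the function name is an assumption, not anything from the thread:

```shell
#!/bin/sh
# List the member device names of a btrfs filesystem via sysfs.
# $1: filesystem UUID; $2: sysfs root (defaults to /sys/fs/btrfs).

list_btrfs_devices() {
    root=${2:-/sys/fs/btrfs}
    ls "$root/$1/devices"
}

# Typical use (UUID is a placeholder):
#   list_btrfs_devices 12345678-abcd-ef01-2345-6789abcdef01
```

Each entry under `devices` is a symlink to the block device, which is what a monitoring tool would then hand to blktrace or SMART checks.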

Re: Monitoring Btrfs

2016-10-18 Thread Austin S. Hemmelgarn
On 2016-10-17 23:23, Anand Jain wrote: I would like to monitor my btrfs-filesystem for missing drives. This is actually correct behavior, the filesystem reports that it should have 6 devices, which is how it knows a device is missing. Missing - means missing at the time of mount. So how

Re: Monitoring Btrfs

2016-10-17 Thread Austin S. Hemmelgarn
On 2016-10-17 12:44, Stefan Malte Schumacher wrote: Hello I would like to monitor my btrfs-filesystem for missing drives. On Debian mdadm uses a script in /etc/cron.daily, which calls mdadm and sends an email if anything is wrong with the array. I would like to do the same with btrfs. In my
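The mdadm-style daily check described above can be approximated for btrfs with a small cron script. The following is a sketch only: the script path, mail recipient, and the exact strings matched in the `btrfs filesystem show` output are assumptions, not anything confirmed in the thread. The parsing lives in its own function so it can be checked against canned output:

```shell
#!/bin/sh
# Hypothetical /etc/cron.daily/btrfs-check: scan `btrfs filesystem show`
# output for missing devices and mail root if anything looks wrong.

check_missing() {
    # Print any lines hinting at a missing or errored device;
    # empty output means the report looked clean.
    printf '%s\n' "$1" | grep -iE 'missing|warning'
}

main() {
    report=$(btrfs filesystem show 2>&1)
    bad=$(check_missing "$report")
    if [ -n "$bad" ]; then
        printf 'btrfs device problem detected:\n%s\n' "$bad" \
            | mail -s "btrfs alert on $(hostname)" root
    fi
}

# In the real cron job the last line would simply be:  main
```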

Re: btrfs and numa - needing drop_caches to keep speed up

2016-10-14 Thread Austin S. Hemmelgarn
On 2016-10-14 02:28, Stefan Priebe - Profihost AG wrote: Hello list, while running the same workload on two machines (single xeon and a dual xeon) both with 64GB RAM. I need to run echo 3 >/proc/sys/vm/drop_caches every 15-30 minutes to keep the speed as good as on the non-NUMA system. I'm not
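The workaround quoted above can be wrapped in a tiny helper. Writing to /proc/sys/vm/drop_caches requires root; the target file is parameterized here only so the write can be exercised without root (a sketch, not the poster's actual script):

```shell
#!/bin/sh
# Drop the page cache, dentries and inodes, as described in the thread.
# $1: target file (defaults to the real /proc/sys/vm/drop_caches).

drop_caches() {
    target=${1:-/proc/sys/vm/drop_caches}
    sync                  # flush dirty data first so dropping is effective
    echo 3 > "$target"
}

# Hypothetical crontab entry to run it every 15 minutes as root:
#   */15 * * * * root /usr/local/sbin/drop-caches.sh
```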

Re: Copy BTRFS volume to another BTRFS volume including subvolumes and snapshots

2016-10-14 Thread Austin S. Hemmelgarn
On 2016-10-13 17:21, Alberto Bursi wrote: Hi, I'm using OpenSUSE on a btrfs volume spanning 2 disks (set as raid1 for both metadata and data), no separate /home partition. The distro loves to create dozens of subvolumes for various things and makes snapshots, see: alby@openSUSE-xeon:~> sudo

Re: Unable to rescue RAID5

2016-10-14 Thread Austin S. Hemmelgarn
On 2016-10-14 06:11, Hiroshi Honda wrote: That's the proper answer. In practice... all hope isn't yet lost. I understood the proper answer. I'll take care of it in the future. Is there some step/method I can take from this situation? You should probably look at `btrfs restore`. I'm not sure
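`btrfs restore` copies readable files off an unmountable filesystem without writing to the source device. A hedged sketch of how it is typically invoked (the wrapper function, device, and destination are placeholders, not anything from the thread):

```shell
#!/bin/sh
# Sketch: pull readable files off a broken btrfs array with `btrfs restore`.
# It reads the filesystem offline and never writes to the source device.

restore_from() {
    dev=$1      # a member device of the broken filesystem, e.g. /dev/sdb
    dest=$2     # destination on a *different*, healthy filesystem
    [ -b "$dev" ] || { echo "not a block device: $dev" >&2; return 1; }
    mkdir -p "$dest"
    # -v lists files as they are recovered; -i ignores errors and keeps
    # going past damaged extents.
    btrfs restore -v -i "$dev" "$dest"
}
```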

Re: raid levels and NAS drives

2016-10-12 Thread Austin S. Hemmelgarn
On 2016-10-11 18:10, Nicholas D Steeves wrote: On Mon, Oct 10, 2016 at 08:07:53AM -0400, Austin S. Hemmelgarn wrote: On 2016-10-09 19:12, Charles Zeitler wrote: Is there any advantage to using NAS drives under RAID levels, as opposed to regular 'desktop' drives for BTRFS? [...] So

Re: RAID system with adaption to changed number of disks

2016-10-11 Thread Austin S. Hemmelgarn
On 2016-10-11 11:14, Philip Louis Moetteli wrote: Hello, I have to build a RAID 6 with the following 3 requirements: • Use different kinds of disks with different sizes. • When a disk fails and there's enough space, the RAID should be able to reconstruct itself out of the

Re: raid levels and NAS drives

2016-10-10 Thread Austin S. Hemmelgarn
On 2016-10-09 19:12, Charles Zeitler wrote: Is there any advantage to using NAS drives under RAID levels, as opposed to regular 'desktop' drives for BTRFS? Before I answer the question, it is worth explaining the differences between the marketing terms 'desktop', 'enterprise', 'NAS', and

Re: Is stability a joke? (wiki updated)

2016-09-19 Thread Austin S. Hemmelgarn
On 2016-09-19 14:27, Chris Murphy wrote: On Mon, Sep 19, 2016 at 11:38 AM, Austin S. Hemmelgarn <ahferro...@gmail.com> wrote: ReiserFS had no working fsck for all of the 8 years I used it (and still didn't last year when I tried to use it on an old disk). "Not working" here

Re: Is stability a joke? (wiki updated)

2016-09-19 Thread Austin S. Hemmelgarn
On 2016-09-19 00:08, Zygo Blaxell wrote: On Thu, Sep 15, 2016 at 01:02:43PM -0600, Chris Murphy wrote: Right, well I'm vaguely curious why ZFS, as different as it is, basically take the position that if the hardware went so batshit that they can't unwind it on a normal mount, then an fsck

Re: stability matrix

2016-09-19 Thread Austin S. Hemmelgarn
On 2016-09-19 11:27, David Sterba wrote: Hi, On Thu, Sep 15, 2016 at 04:14:04AM +0200, Christoph Anton Mitterer wrote: In general: - I think another column should be added, which tells when and for which kernel version the feature-status of each row was revised/updated the last time and

Re: Is stability a joke? (wiki updated)

2016-09-19 Thread Austin S. Hemmelgarn
On 2016-09-18 22:57, Zygo Blaxell wrote: On Fri, Sep 16, 2016 at 08:00:44AM -0400, Austin S. Hemmelgarn wrote: To be entirely honest, both zero-log and super-recover could probably be pretty easily integrated into btrfs check such that it detects when they need to be run and does so. zero-log

Re: Is stability a joke? (wiki updated)

2016-09-19 Thread Austin S. Hemmelgarn
On 2016-09-18 23:47, Zygo Blaxell wrote: On Mon, Sep 12, 2016 at 12:56:03PM -0400, Austin S. Hemmelgarn wrote: 4. File Range Cloning and Out-of-band Dedupe: Similarly, work fine if the FS is healthy. I've found issues with OOB dedup (clone/extent-same): 1. Don't dedup data that has not been

Re: RAID1 availability issue[2], Hot-spare and auto-replace

2016-09-19 Thread Austin S. Hemmelgarn
On 2016-09-18 13:28, Chris Murphy wrote: On Sun, Sep 18, 2016 at 2:34 AM, Anand Jain wrote: (updated the subject, was [1]) IMO the hot-spare feature makes most sense with the raid56, Why. ? Raid56 is not scalable, has less redundancy in almost all configurations,

Re: RAID1 availability issue[2], Hot-spare and auto-replace

2016-09-19 Thread Austin S. Hemmelgarn
On 2016-09-18 22:25, Anand Jain wrote: Chris Murphy, Thanks for writing in detail, it makes sense.. Generally a hot spare is to reduce the risk of double disk failures leading to data loss at the data centers before the data is reconstructed again for redundancy. On 09/19/2016 01:28

Re: Thoughts on btrfs RAID-1 for cold storage/archive?

2016-09-16 Thread Austin S. Hemmelgarn
necessary, I only listed it as that will provide automatic recovery of things the FEC support in dm-verity can't fix. In a situation where I can be relatively sure that the errors will be infrequent and probably not co-located, I would probably skip it myself. On Fri, Sep 16, 2016 at 7:45 AM, Austin S

Re: Is stability a joke? (wiki updated)

2016-09-16 Thread Austin S. Hemmelgarn
On 2016-09-15 17:23, Christoph Anton Mitterer wrote: On Thu, 2016-09-15 at 14:20 -0400, Austin S. Hemmelgarn wrote: 3. Fsck should be needed only for un-mountable filesystems. Ideally, we should be handling things like Windows does. Preform slightly better checking when reading data

Re: Is stability a joke? (wiki updated)

2016-09-16 Thread Austin S. Hemmelgarn
On 2016-09-15 16:26, Chris Murphy wrote: On Thu, Sep 15, 2016 at 2:16 PM, Hugo Mills <h...@carfax.org.uk> wrote: On Thu, Sep 15, 2016 at 01:02:43PM -0600, Chris Murphy wrote: On Thu, Sep 15, 2016 at 12:20 PM, Austin S. Hemmelgarn <ahferro...@gmail.com> wrote: 2. We're developing

Re: Thoughts on btrfs RAID-1 for cold storage/archive?

2016-09-16 Thread Austin S. Hemmelgarn
On 2016-09-15 22:58, Duncan wrote: E V posted on Thu, 15 Sep 2016 11:48:13 -0400 as excerpted: I'm investigating using btrfs for archiving old data and offsite storage, essentially put 2 drives in btrfs RAID-1, copy the data to the filesystem and then unmount, remove a drive and take it to an

Re: Is stability a joke? (wiki updated)

2016-09-15 Thread Austin S. Hemmelgarn
On 2016-09-15 14:01, Chris Murphy wrote: On Tue, Sep 13, 2016 at 5:35 AM, Austin S. Hemmelgarn <ahferro...@gmail.com> wrote: On 2016-09-12 16:08, Chris Murphy wrote: - btrfsck status e.g. btrfs-progs 4.7.2 still warns against using --repair, and lists it under dangerous options also;

Re: [RFC] Preliminary BTRFS Encryption

2016-09-15 Thread Austin S. Hemmelgarn
On 2016-09-15 10:06, Anand Jain wrote: Thanks for comments. Pls see inline as below. On 09/15/2016 07:37 PM, Austin S. Hemmelgarn wrote: On 2016-09-13 09:39, Anand Jain wrote: This patchset adds btrfs encryption support. The main objective of this series is to have bugs fixed and stability

Re: stability matrix

2016-09-15 Thread Austin S. Hemmelgarn
On 2016-09-15 05:49, Hans van Kranenburg wrote: On 09/15/2016 04:14 AM, Christoph Anton Mitterer wrote: Hey. As for the stability matrix... In general: - I think another column should be added, which tells when and for which kernel version the feature-status of each row was

Re: [RFC] Preliminary BTRFS Encryption

2016-09-15 Thread Austin S. Hemmelgarn
On 2016-09-13 09:39, Anand Jain wrote: This patchset adds btrfs encryption support. The main objective of this series is to have bugs fixed and stability. I have verified with fstests to confirm that there is no regression. A design write-up is coming next, however here below is the quick

Re: Filesystem forced to readonly after use

2016-09-14 Thread Austin S. Hemmelgarn
On 2016-09-13 16:39, Cesar Strauss wrote: On 13-09-2016 16:49, Austin S. Hemmelgarn wrote: I'd be kind of curious to see the results from btrfs check run without repair, but I doubt that will help narrow things down any further. Attached. As of right now, the absolute first thing I'd do

Re: Filesystem forced to readonly after use

2016-09-13 Thread Austin S. Hemmelgarn
On 2016-09-13 15:20, Cesar Strauss wrote: Hello, I have a BTRFS filesystem that is reverting to read-only after a few moments of use. There is a stack trace visible in the kernel log, which is attached. Here is my system information: # uname -a Linux rescue 4.7.2-1-ARCH #1 SMP PREEMPT Sat

Re: Security implications of btrfs receive?

2016-09-13 Thread Austin S. Hemmelgarn
On 2016-09-12 16:25, Chris Murphy wrote: On Mon, Sep 12, 2016 at 5:24 AM, Austin S. Hemmelgarn <ahferro...@gmail.com> wrote: After device discovery, specify UUID= instead of a device node. Oh yeah good point, -U --uuid is also doable. I'm not sure what the benefit is of using sysfs to

Re: Is stability a joke? (wiki updated)

2016-09-13 Thread Austin S. Hemmelgarn
On 2016-09-12 16:08, Chris Murphy wrote: On Mon, Sep 12, 2016 at 10:56 AM, Austin S. Hemmelgarn <ahferro...@gmail.com> wrote: Things listed as TBD status: 1. Seeding: Seems to work fine the couple of times I've tested it, however I've only done very light testing, and the whole f

Re: Is stability a joke? (wiki updated)

2016-09-13 Thread Austin S. Hemmelgarn
On 2016-09-12 16:44, Chris Murphy wrote: On Mon, Sep 12, 2016 at 2:35 PM, Martin Steigerwald wrote: On Monday, 12 September 2016, 23:21:09 CEST, Pasi Kärkkäinen wrote: On Mon, Sep 12, 2016 at 09:57:17PM +0200, Martin Steigerwald wrote: On Monday, 12 September 2016,

Re: Is stability a joke? (wiki updated)

2016-09-13 Thread Austin S. Hemmelgarn
On 2016-09-13 04:38, Timofey Titovets wrote: https://btrfs.wiki.kernel.org/index.php/Status I suggest marking RAID1/10 as 'mostly OK', as btrfs RAID1/10 is safe for data, but not for the applications that use it; i.e. it does not hide an I/O error even when it could be masked.

Re: Small fs

2016-09-12 Thread Austin S. Hemmelgarn
On 2016-09-12 14:46, Imran Geriskovan wrote: Wait wait wait a second: This is 256 MB SINGLE created by GPARTED, which is the replacement of MANUALLY CREATED 127MB DUP which is now non-existent.. Which I was not aware it was a DUP at the time.. Peeww... Small btrfs is full of surprises.. ;)

Re: Is stability a joke? (wiki updated)

2016-09-12 Thread Austin S. Hemmelgarn
On 2016-09-12 13:29, Filipe Manana wrote: On Mon, Sep 12, 2016 at 5:56 PM, Austin S. Hemmelgarn <ahferro...@gmail.com> wrote: On 2016-09-12 12:27, David Sterba wrote: On Mon, Sep 12, 2016 at 04:27:14PM +0200, David Sterba wrote: I therefore would like to propose that some sort of f

Re: Is stability a joke?

2016-09-12 Thread Austin S. Hemmelgarn
On 2016-09-12 12:51, David Sterba wrote: On Mon, Sep 12, 2016 at 10:54:40AM -0400, Austin S. Hemmelgarn wrote: Somebody has put that table on the wiki, so it's a good starting point. I'm not sure we can fit everything into one table, some combinations do not bring new information and we'd need

Re: Is stability a joke? (wiki updated)

2016-09-12 Thread Austin S. Hemmelgarn
On 2016-09-12 12:27, David Sterba wrote: On Mon, Sep 12, 2016 at 04:27:14PM +0200, David Sterba wrote: I therefore would like to propose that some sort of feature / stability matrix for the latest kernel is added to the wiki preferably somewhere where it is easy to find. It would be nice to

Re: Small fs

2016-09-12 Thread Austin S. Hemmelgarn
On 2016-09-12 10:51, Chris Murphy wrote: On Mon, Sep 12, 2016 at 8:09 AM, Henk Slager wrote: FWIW, I use BTRFS for /boot, but it's not for snapshotting or even the COW, it's for DUP mode and the error recovery it provides. Most people don't think about this if it hasn't

Re: Is stability a joke?

2016-09-12 Thread Austin S. Hemmelgarn
On 2016-09-12 10:27, David Sterba wrote: Hi, first, thanks for choosing a catchy subject, this always helps. While it will serve as another beating stick to those who enjoy bashing btrfs, I'm glad to see people answer in a constructive way. On Sun, Sep 11, 2016 at 10:55:21AM +0200, Waxhead

Re: Small fs

2016-09-12 Thread Austin S. Hemmelgarn
On 2016-09-12 10:09, Henk Slager wrote: FWIW, I use BTRFS for /boot, but it's not for snapshotting or even the COW, it's for DUP mode and the error recovery it provides. Most people don't think about this if it hasn't happened to them, but if you get a bad read from /boot when loading the

Re: btrfs kernel oops on mount

2016-09-12 Thread Austin S. Hemmelgarn
On 2016-09-12 09:27, Jeff Mahoney wrote: On 9/12/16 2:54 PM, Austin S. Hemmelgarn wrote: On 2016-09-12 08:33, Jeff Mahoney wrote: On 9/9/16 8:47 PM, Austin S. Hemmelgarn wrote: A couple of other things to comment about on this: 1. 'can_overcommit' (the function that the Arch kernel choked

Re: Is stability a joke?

2016-09-12 Thread Austin S. Hemmelgarn
On 2016-09-12 08:59, Michel Bouissou wrote: On Monday, 12 September 2016, 08:20:20, Austin S. Hemmelgarn wrote: FWIW, here's a list of what I personally consider stable (as in, I'm willing to bet against reduced uptime to use this stuff on production systems at work and personal systems at home

Re: Small fs

2016-09-12 Thread Austin S. Hemmelgarn
On 2016-09-12 08:54, Imran Geriskovan wrote: On 9/11/16, Chris Murphy wrote: Something else that's screwy in that bug that I just realized, why is it not defaulting to mixed-block groups on a 100MiB fallocated file? I thought mixed-bg was the default below a certain

Re: btrfs kernel oops on mount

2016-09-12 Thread Austin S. Hemmelgarn
On 2016-09-12 08:33, Jeff Mahoney wrote: On 9/9/16 8:47 PM, Austin S. Hemmelgarn wrote: A couple of other things to comment about on this: 1. 'can_overcommit' (the function that the Arch kernel choked on) is from the memory management subsystem. The fact that that's throwing a null pointer

Re: Small fs

2016-09-12 Thread Austin S. Hemmelgarn
On 2016-09-11 15:51, Martin Steigerwald wrote: On Sunday, 11 September 2016, 19:46:32 CEST, Hugo Mills wrote: On Sun, Sep 11, 2016 at 09:13:28PM +0200, Martin Steigerwald wrote: On Sunday, 11 September 2016, 16:44:23 CEST, Duncan wrote: * Metadata, and thus mixed-bg, defaults to DUP

Re: Small fs

2016-09-12 Thread Austin S. Hemmelgarn
On 2016-09-11 15:21, Martin Steigerwald wrote: On Sunday, 11 September 2016, 21:56:07 CEST, Imran Geriskovan wrote: On 9/11/16, Duncan <1i5t5.dun...@cox.net> wrote: Martin Steigerwald posted on Sun, 11 Sep 2016 17:32:44 +0200 as excerpted: What is the smallest recommended fs size for

Re: Is stability a joke?

2016-09-12 Thread Austin S. Hemmelgarn
On 2016-09-11 13:11, Duncan wrote: Martin Steigerwald posted on Sun, 11 Sep 2016 14:05:03 +0200 as excerpted: Just add another column called "Production ready". Then research / ask about production stability of each feature. The only challenge is: Who is authoritative on that? I'd certainly

Re: Is stability a joke?

2016-09-12 Thread Austin S. Hemmelgarn
On 2016-09-11 09:02, Hugo Mills wrote: On Sun, Sep 11, 2016 at 02:39:14PM +0200, Waxhead wrote: Martin Steigerwald wrote: On Sunday, 11 September 2016, 13:43:59 CEST, Martin Steigerwald wrote: Thing is: This just seems to be a "when was a feature implemented" matrix. Not when it is

Re: btrfs kernel oops on mount

2016-09-12 Thread Austin S. Hemmelgarn
On 2016-09-09 15:23, moparisthebest wrote: On 09/09/2016 02:47 PM, Austin S. Hemmelgarn wrote: On 2016-09-09 12:12, moparisthebest wrote: Hi, I'm hoping to get some help with mounting my btrfs array which quit working yesterday. My array was in the middle of a balance, about 50% remaining

Re: Security implications of btrfs receive?

2016-09-12 Thread Austin S. Hemmelgarn
On 2016-09-09 14:58, Chris Murphy wrote: On Thu, Sep 8, 2016 at 5:48 AM, Austin S. Hemmelgarn <ahferro...@gmail.com> wrote: On 2016-09-07 15:34, Chris Murphy wrote: I like the idea of matching WWN as part of the check, with a couple of caveats: 1. We need to keep in mind that i

Re: btrfs kernel oops on mount

2016-09-09 Thread Austin S. Hemmelgarn
On 2016-09-09 14:32, moparisthebest wrote: On 09/09/2016 01:51 PM, Chris Murphy wrote: On Fri, Sep 9, 2016 at 10:12 AM, moparisthebest wrote: Hi, I'm hoping to get some help with mounting my btrfs array which quit working yesterday. My array was in the middle of a

Re: btrfs kernel oops on mount

2016-09-09 Thread Austin S. Hemmelgarn
On 2016-09-09 12:12, moparisthebest wrote: Hi, I'm hoping to get some help with mounting my btrfs array which quit working yesterday. My array was in the middle of a balance, about 50% remaining, when it hit an error and remounted itself read-only [1]. btrfs fi show output [2], btrfs df output

Re: Security implications of btrfs receive?

2016-09-09 Thread Austin S. Hemmelgarn
On 2016-09-09 12:33, David Sterba wrote: On Wed, Sep 07, 2016 at 03:08:18PM -0400, Austin S. Hemmelgarn wrote: On 2016-09-07 14:07, Christoph Anton Mitterer wrote: On Wed, 2016-09-07 at 11:06 -0400, Austin S. Hemmelgarn wrote: This is an issue with any filesystem, Not really... any other

Re: Security implications of btrfs receive?

2016-09-09 Thread Austin S. Hemmelgarn
On 2016-09-09 12:18, David Sterba wrote: On Wed, Sep 07, 2016 at 07:58:30AM -0400, Austin S. Hemmelgarn wrote: On 2016-09-06 13:20, Graham Cobb wrote: Thanks to Austin and Duncan for their replies. On 06/09/16 13:15, Austin S. Hemmelgarn wrote: On 2016-09-05 05:59, Graham Cobb wrote: Does

Re: Security implications of btrfs receive?

2016-09-08 Thread Austin S. Hemmelgarn
On 2016-09-07 15:34, Chris Murphy wrote: On Wed, Sep 7, 2016 at 1:08 PM, Austin S. Hemmelgarn <ahferro...@gmail.com> wrote: I think I covered it already in the last thread on this, but the best way I see to fix the whole auto-assembly issue is: 1. Stop the damn auto-scanning of new d

Re: Security implications of btrfs receive?

2016-09-07 Thread Austin S. Hemmelgarn
On 2016-09-07 14:07, Christoph Anton Mitterer wrote: On Wed, 2016-09-07 at 11:06 -0400, Austin S. Hemmelgarn wrote: This is an issue with any filesystem, Not really... any other filesystem I'd know (not sure about ZFS) keeps working when there are UUID collisions... or at least it won't cause

Re: Security implications of btrfs receive?

2016-09-07 Thread Austin S. Hemmelgarn
On 2016-09-07 12:10, Graham Cobb wrote: On 07/09/16 16:20, Austin S. Hemmelgarn wrote: I should probably add to this that you shouldn't be accepting send/receive data streams from untrusted sources anyway. While it probably won't crash your system, it's not intended for use as something like

Re: Security implications of btrfs receive?

2016-09-07 Thread Austin S. Hemmelgarn
On 2016-09-07 07:58, Austin S. Hemmelgarn wrote: On 2016-09-06 13:20, Graham Cobb wrote: Thanks to Austin and Duncan for their replies. On 06/09/16 13:15, Austin S. Hemmelgarn wrote: On 2016-09-05 05:59, Graham Cobb wrote: Does the "path" argument of btrfs-receive mean that *all*

Re: Security implications of btrfs receive?

2016-09-07 Thread Austin S. Hemmelgarn
On 2016-09-07 10:41, Christoph Anton Mitterer wrote: On Tue, 2016-09-06 at 18:20 +0100, Graham Cobb wrote: they know the UUID of the subvolume? Unfortunately, btrfs seems to be pretty problematic when anyone knows your UUIDs... This is an issue with any filesystem, it is just a bigger issue

Re: Security implications of btrfs receive?

2016-09-07 Thread Austin S. Hemmelgarn
On 2016-09-07 10:44, Christoph Anton Mitterer wrote: On Wed, 2016-09-07 at 07:58 -0400, Austin S. Hemmelgarn wrote: if you want proper security you should be using a real container system Won't these probably use the same filesystems? That depends on how it's set up. Most container software

Re: Security implications of btrfs receive?

2016-09-07 Thread Austin S. Hemmelgarn
On 2016-09-06 13:20, Graham Cobb wrote: Thanks to Austin and Duncan for their replies. On 06/09/16 13:15, Austin S. Hemmelgarn wrote: On 2016-09-05 05:59, Graham Cobb wrote: Does the "path" argument of btrfs-receive mean that *all* operations are confined to that path? For example,

Re: [OT] Re: Balancing subvolume on a specific device

2016-09-06 Thread Austin S. Hemmelgarn
On 2016-09-02 06:55, Duncan wrote: Kai Krakow posted on Thu, 01 Sep 2016 21:45:19 +0200 as excerpted: Am Sat, 20 Aug 2016 06:30:11 + (UTC) schrieb Duncan <1i5t5.dun...@cox.net>: There's at least three other options to try to get what you mention, however. FWIW, I'm a gentooer and thus

Re: Security implications of btrfs receive?

2016-09-06 Thread Austin S. Hemmelgarn
On 2016-09-05 05:59, Graham Cobb wrote: Does anyone know of a security analysis of btrfs receive? I'm not a developer, and definitely not a security specialist, just a security minded sysadmin who has some idea what's going on, but I can at least try and answer this. I assume that just using

Re: your mail

2016-09-02 Thread Austin S. Hemmelgarn
On 2016-09-01 12:44, Kyle Gates wrote: -Original Message- From: linux-btrfs-ow...@vger.kernel.org [mailto:linux-btrfs-ow...@vger.kernel.org] On Behalf Of Austin S. Hemmelgarn Sent: Thursday, September 01, 2016 6:18 AM To: linux-btrfs@vger.kernel.org Subject: Re: your mail On 2016-09-01

Re: BTRFS constantly reports "No space left on device" even with a huge unallocated space

2016-09-01 Thread Austin S. Hemmelgarn
On 2016-09-01 13:12, Jeff Mahoney wrote: On 9/1/16 1:04 PM, Austin S. Hemmelgarn wrote: On 2016-09-01 12:34, Ronan Arraes Jardim Chagas wrote: On Thu, 2016-09-01 at 09:21 -0400, Austin S. Hemmelgarn wrote: Yes, you can just run `btrfs quota disable /` and it should work. This ironically

Re: BTRFS constantly reports "No space left on device" even with a huge unallocated space

2016-09-01 Thread Austin S. Hemmelgarn
On 2016-09-01 12:34, Ronan Arraes Jardim Chagas wrote: On Thu, 2016-09-01 at 09:21 -0400, Austin S. Hemmelgarn wrote: Yes, you can just run `btrfs quota disable /` and it should work. This ironically reiterates that one of the bigger problems with BTRFS is that distros are enabling unstable

Re: BTRFS constantly reports "No space left on device" even with a huge unallocated space

2016-09-01 Thread Austin S. Hemmelgarn
On 2016-09-01 08:57, Ronan Arraes Jardim Chagas wrote: Hi! On Wed, 2016-08-31 at 17:09 -0600, Chris Murphy wrote: OK so Ronan, I'm gonna guess the simplest work around for your problem is to disable quota support, and see if the problem happens again. Look at the output of the command

Re: Recommendation on raid5 drive error resolution

2016-09-01 Thread Austin S. Hemmelgarn
On 2016-08-31 19:04, Gareth Pye wrote: ro,degraded has mounted it nicely and my rsync of the more useful data is progressing at the speed of WiFi. There are repeated read errors from one drive still but the rsync hasn't bailed yet, which I think means there isn't any overlapping errors in any

Re: your mail

2016-09-01 Thread Austin S. Hemmelgarn
On 2016-09-01 03:44, M G Berberich wrote: On Wednesday, 31 August, Fennec Fox wrote: Linux Titanium 4.7.2-1-MANJARO #1 SMP PREEMPT Sun Aug 21 15:04:37 UTC 2016 x86_64 GNU/Linux btrfs-progs v4.7 Data, single: total=30.01GiB, used=18.95GiB System, single: total=4.00MiB, used=16.00KiB

Re: btrfs and systemd

2016-08-29 Thread Austin S. Hemmelgarn
On 2016-08-29 07:18, Imran Geriskovan wrote: I can't find any fstab setting for systemd to higher this timeout. There's just the x-systemd.device-timeout but this controls how long to wait for the device and not for the mount command. Is there any solution for big btrfs volumes and systemd?
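As the snippet notes, `x-systemd.device-timeout` only bounds the wait for the device to appear. Later systemd releases grew a separate `x-systemd.mount-timeout=` option (see systemd.mount(5)) that bounds the mount operation itself, which is the part that can take minutes on a big btrfs volume. A sketch fstab line; the UUID, mount point, and timeout value are placeholders:

```
# /etc/fstab — wait up to 10 minutes for the mount itself to finish
UUID=<fs-uuid>  /srv/big  btrfs  defaults,x-systemd.mount-timeout=600s  0  0
```

Whether this option is available depends on the installed systemd version; on older releases the fallback discussed in the thread (e.g. `noauto` plus a manual or scripted mount) still applies.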

Re: Switch raid mode without rebalance?

2016-08-26 Thread Austin S. Hemmelgarn
On 2016-08-25 18:32, Gert Menke wrote: Hi, On 2016-08-25 20:26, Justin Kilpatrick wrote: I'm not sure why you want to avoid a balance, I didn't check, but I imagined it would slow down my rsync significantly. It will slow it down, but I can't tell you exactly how much (there are too many

Re: Please disable balance auto-resume for 4.9 (or even 4.8)

2016-08-25 Thread Austin S. Hemmelgarn
On 2016-08-25 05:38, Holger Hoffstätte wrote: Automatically resuming an interrupted balance has repeatedly caused all sorts of problems because it creates a possible failure mode when a user can least use it: after a crash/power loss/sudden reboot (which, like it or not, is the de facto "fix

Re: Will Btrfs have an official command to "uncow" existing files?

2016-08-23 Thread Austin S. Hemmelgarn
On 2016-08-22 22:43, Chris Murphy wrote: On Mon, Aug 22, 2016 at 5:06 PM, Darrick J. Wong wrote: [add Dave and Christoph to cc] On Mon, Aug 22, 2016 at 04:14:19PM -0400, Jeff Mahoney wrote: On 8/21/16 2:59 PM, Tomokhov Alexander wrote: Btrfs wiki FAQ gives a link to

Re: About minimal device number for RAID5/6

2016-08-16 Thread Austin S. Hemmelgarn
On 2016-08-15 21:32, Qu Wenruo wrote: At 08/15/2016 10:10 PM, Austin S. Hemmelgarn wrote: On 2016-08-15 10:08, Anand Jain wrote: IMHO it's better to warn user about 2 devices RAID5 or 3 devices RAID6. Any comment is welcomed. Based on looking at the code, we do in fact support 2/3

Re: About minimal device number for RAID5/6

2016-08-15 Thread Austin S. Hemmelgarn
On 2016-08-15 10:32, Anand Jain wrote: On 08/15/2016 10:10 PM, Austin S. Hemmelgarn wrote: On 2016-08-15 10:08, Anand Jain wrote: IMHO it's better to warn user about 2 devices RAID5 or 3 devices RAID6. Any comment is welcomed. Based on looking at the code, we do in fact support 2/3

Re: Huge load on btrfs subvolume delete

2016-08-15 Thread Austin S. Hemmelgarn
On 2016-08-15 10:06, Daniel Caillibaud wrote: On 15/08/16 at 08:32, "Austin S. Hemmelgarn" <ahferro...@gmail.com> wrote: ASH> On 2016-08-15 06:39, Daniel Caillibaud wrote: ASH> > I'm a newbie with btrfs, and I have problems with high load after each btrfs subvolume delete

Re: About minimal device number for RAID5/6

2016-08-15 Thread Austin S. Hemmelgarn
On 2016-08-15 10:08, Anand Jain wrote: IMHO it's better to warn user about 2 devices RAID5 or 3 devices RAID6. Any comment is welcomed. Based on looking at the code, we do in fact support 2/3 devices for raid5/6 respectively. Personally, I agree that we should warn when trying to do this,

Re: How to stress test raid6 on 122 disk array

2016-08-15 Thread Austin S. Hemmelgarn
On 2016-08-15 09:39, Martin wrote: That really is the case, there's currently no way to do this with BTRFS. You have to keep in mind that the raid5/6 code only went into the mainline kernel a few versions ago, and it's still pretty immature as far as kernel code goes. I don't know when (if

Re: How to stress test raid6 on 122 disk array

2016-08-15 Thread Austin S. Hemmelgarn
On 2016-08-15 09:38, Martin wrote: Looking at the kernel log itself, you've got a ton of write errors on /dev/sdap. I would suggest checking that particular disk with smartctl, and possibly checking the other hardware involved (the storage controller and cabling). I would kind of expect BTRFS

Re: How to stress test raid6 on 122 disk array

2016-08-15 Thread Austin S. Hemmelgarn
On 2016-08-15 08:19, Martin wrote: I'm not sure what Arch does any differently to their kernels from kernel.org kernels. But bugzilla.kernel.org offers a Mainline and Fedora drop down for identifying the kernel source tree. IIRC, they're pretty close to mainline kernels. I don't think they

Re: How to stress test raid6 on 122 disk array

2016-08-15 Thread Austin S. Hemmelgarn
On 2016-08-15 08:19, Martin wrote: The smallest disk of the 122 is 500GB. Is it possible to have btrfs see each disk as only e.g. 10GB? That way I can corrupt and resilver more disks over a month. Well, at least you can easily partition the devices for that to happen. Can it be done with
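Besides partitioning the real disks as suggested above, another way to get many small test devices is sparse backing files attached as loop devices. A sketch, assuming nothing from the thread beyond the 10GB target size; paths and the helper name are made up:

```shell
#!/bin/sh
# Create N sparse backing files that can be attached as loop devices and
# used as small "disks" for btrfs raid6 stress testing.

make_test_disks() {
    dir=$1; count=$2; size=${3:-10G}
    mkdir -p "$dir"
    i=0
    while [ "$i" -lt "$count" ]; do
        truncate -s "$size" "$dir/disk$i.img"   # sparse: no space used yet
        i=$((i+1))
    done
}

# Attaching and formatting needs root (shown only as comments):
#   for f in /var/tmp/btrfs-test/disk*.img; do losetup -f "$f"; done
#   mkfs.btrfs -d raid6 -m raid6 /dev/loop0 /dev/loop1 ... /dev/loopN
```

Corrupting a "disk" is then as simple as overwriting part of its backing file, and resilvering can be repeated without touching the real 500GB drives.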

Re: Huge load on btrfs subvolume delete

2016-08-15 Thread Austin S. Hemmelgarn
On 2016-08-15 06:39, Daniel Caillibaud wrote: Hi, I'm a newbie with btrfs, and I have problems with high load after each btrfs subvolume delete. I use snapshots on lxc hosts under debian jessie with - kernel 4.6.0-0.bpo.1-amd64 - btrfs-progs 4.6.1-1~bpo8 For backup, I have each day, for each

Re: About minimal device number for RAID5/6

2016-08-15 Thread Austin S. Hemmelgarn
On 2016-08-15 03:50, Qu Wenruo wrote: Hi, Recently I found that manpage of mkfs is saying minimal device number for RAID5 and RAID6 is 2 and 3. Personally speaking, although I understand that RAID5/6 only requires 1/2 devices for parity stripe, it is still quite strange behavior. Under most

Re: checksum error in metadata node - best way to move root fs to new drive?

2016-08-15 Thread Austin S. Hemmelgarn
On 2016-08-12 11:06, Duncan wrote: Austin S. Hemmelgarn posted on Fri, 12 Aug 2016 08:04:42 -0400 as excerpted: On a file server? No, I'd ensure proper physical security is established and make sure it's properly secured against network based attacks and then not worry about it. Unless you

Re: checksum error in metadata node - best way to move root fs to new drive?

2016-08-12 Thread Austin S. Hemmelgarn
On 2016-08-11 16:23, Dave T wrote: What I have gathered so far is the following: 1. my RAM is not faulty and I feel comfortable ruling out a memory error as having anything to do with the reported problem. 2. my storage device does not seem to be faulty. I have not figured out how to do more

Re: checksum error in metadata node - best way to move root fs to new drive?

2016-08-11 Thread Austin S. Hemmelgarn
ly seen one of those in at least a few months. In general, BTRFS is moving fast enough that reports older than a kernel release cycle are generally out of date unless something confirms otherwise, but I do distinctly recall such issues being commonly reported in the past. On 10 August 2016 at 15:46,

Re: checksum error in metadata node - best way to move root fs to new drive?

2016-08-10 Thread Austin S. Hemmelgarn
On 2016-08-10 02:27, Duncan wrote: Dave T posted on Tue, 09 Aug 2016 23:27:56 -0400 as excerpted: btrfs scrub returned with uncorrectable errors. Searching in dmesg returns the following information: BTRFS warning (device dm-0): checksum error at logical N on /dev/mapper/[crypto] sector:

Re: system locked up with btrfs-transaction consuming 100% CPU

2016-08-10 Thread Austin S. Hemmelgarn
On 2016-08-09 18:20, Dave T wrote: Thank you for the info, Duncan. I will use Alt-sysrq-s alt-sysrq-u alt-sysrq-b. This is the best description / recommendation I've read on the subject. I had read about these special key sequences before but I could never remember them and I didn't fully

Re: Issue: errno:28 (No space left on device)

2016-08-09 Thread Austin S. Hemmelgarn
On 2016-08-09 07:50, Thomas wrote: Hello! First things first: Mailing lists are asynchronous. You will almost _never_ get an immediate response, and will quite often not get a response for a few hours at least. Sending a message more than once when you don't get a response does not make it

Re: "No space left on device" and balance doesn't work

2016-08-09 Thread Austin S. Hemmelgarn
On 2016-08-09 05:50, MegaBrutal wrote: 2016-06-03 14:43 GMT+02:00 Austin S. Hemmelgarn <ahferro...@gmail.com>: Also, since you're on a new enough kernel, try 'lazytime' in the mount options as well, this defers all on-disk timestamp updates for up to 24 hours or until the inode gets w
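The lazytime option referred to in this snippet can be set per-mount or made persistent in /etc/fstab. A hypothetical fstab sketch (the UUID and mount point are placeholders, and lazytime requires kernel 4.0 or newer):

```
# /etc/fstab sketch -- UUID and mount point are placeholders.
# lazytime keeps atime/mtime/ctime updates in memory and writes them
# back within ~24 hours, or whenever the inode is written out anyway,
# which cuts metadata write traffic considerably.
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /data  btrfs  defaults,lazytime  0  0
```

For an already-mounted filesystem, `mount -o remount,lazytime /data` applies the same option without a reboot.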

Re: 6TB partition, Data only 2TB - aka When you haven't hit the "usual" problem

2016-08-05 Thread Austin S. Hemmelgarn
On 2016-08-05 06:56, Lutz Vieweg wrote: On 08/04/2016 10:30 PM, Chris Murphy wrote: Keep in mind the list is rather self-selecting for problems. People who aren't having problems are unlikely to post their non-problems to the list. True, but the number of people inclined to post a bug report

Re: How to stress test raid6 on 122 disk array

2016-08-05 Thread Austin S. Hemmelgarn
On 2016-08-04 17:12, Chris Murphy wrote: On Thu, Aug 4, 2016 at 2:51 PM, Martin wrote: Thanks for the benchmark tools and tips on where the issues might be. Is Fedora 24 rawhide preferred over ArchLinux? I'm not sure what Arch does any differently to their kernels

Re: How to stress test raid6 on 122 disk array

2016-08-04 Thread Austin S. Hemmelgarn
On 2016-08-04 13:43, Martin wrote: Hi, I would like to find rare raid6 bugs in btrfs, where I have the following hw: * 2x 8 core CPU * 128GB ram * 70 FC disk array (56x 500GB + 14x 1TB SATA disks) * 24 FC or 2x SAS disk array (1TB SAS disks) * 16 FC disk array (1TB SATA disks) * 12 SAS disk

Re: Extents for a particular subvolume

2016-08-04 Thread Austin S. Hemmelgarn
On 2016-08-03 17:55, Graham Cobb wrote: On 03/08/16 21:37, Adam Borowski wrote: On Wed, Aug 03, 2016 at 08:56:01PM +0100, Graham Cobb wrote: Are there any btrfs commands (or APIs) to allow a script to create a list of all the extents referred to within a particular (mounted) subvolume? And is

Re: systemd KillUserProcesses=yes and btrfs scrub

2016-08-01 Thread Austin S. Hemmelgarn
On 2016-08-01 13:15, Chris Murphy wrote: On Mon, Aug 1, 2016 at 10:58 AM, Austin S. Hemmelgarn <ahferro...@gmail.com> wrote: On 2016-08-01 12:19, Chris Murphy wrote: On Mon, Aug 1, 2016 at 10:08 AM, Austin S. Hemmelgarn <ahferro...@gmail.com> wrote: MD and DM RAID handle this

Re: systemd KillUserProcesses=yes and btrfs scrub

2016-08-01 Thread Austin S. Hemmelgarn
On 2016-08-01 12:19, Chris Murphy wrote: On Mon, Aug 1, 2016 at 10:08 AM, Austin S. Hemmelgarn <ahferro...@gmail.com> wrote: MD and DM RAID handle this by starting kernel threads to do the scrub. They then store the info about the scrub in the array itself, so you can query it exte

Re: systemd KillUserProcesses=yes and btrfs scrub

2016-08-01 Thread Austin S. Hemmelgarn
On 2016-08-01 11:46, Chris Murphy wrote: OK I've created a new volume that's sufficiently large I can tell if the kernel workers doing the scrub are also being killed off. First, I do a scrub without logging out to get a time for an uninterrupted scrub. And then initiate a scrub which I start

Re: systemd KillUserProcesses=yes and btrfs scrub

2016-08-01 Thread Austin S. Hemmelgarn
On 2016-07-30 20:29, Chris Murphy wrote: On Sat, Jul 30, 2016 at 2:02 PM, Chris Murphy wrote: Short version: When systemd-logind login.conf KillUserProcesses=yes, and the user does "sudo btrfs scrub start" in e.g. GNOME Terminal, and Same thing with Xfce, so it's not
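The thread above is about logind's KillUserProcesses=yes reaping a scrub that was started from a terminal once the user logs out. A commonly suggested workaround (my assumption here, not something stated in the truncated snippet) is to start the scrub outside the user's session scope, for example via a templated system service:

```
# /etc/systemd/system/btrfs-scrub@.service -- hypothetical sketch.
# %f expands to the unescaped instance name as an absolute path, so
# "systemctl start btrfs-scrub@-.service" scrubs "/" and
# "systemctl start btrfs-scrub@home.service" scrubs "/home".
[Unit]
Description=btrfs scrub of %f

[Service]
Type=oneshot
# -B: stay in the foreground so systemd tracks completion and exit status
ExecStart=/usr/bin/btrfs scrub start -B %f
```

Equivalently, `sudo systemd-run btrfs scrub start -B /` launches the scrub in a transient system-level unit, which logind's session cleanup never touches.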

Re: [PATCH] btrfs-progs: Make RAID stripesize configurable

2016-07-26 Thread Austin S. Hemmelgarn
On 2016-07-26 13:14, Chris Murphy wrote: On Fri, Jul 22, 2016 at 8:58 AM, Austin S. Hemmelgarn <ahferro...@gmail.com> wrote: On 2016-07-22 09:42, Sanidhya Solanki wrote: +*stripesize=*;; +Specifies the new stripe size It'd be nice to stop conflating stripe size and stripe elemen

Re: [PATCH] btrfs-progs: add option to run balance as daemon

2016-07-26 Thread Austin S. Hemmelgarn
On 2016-07-26 13:07, David Sterba wrote: On Mon, Jul 11, 2016 at 10:44:30AM +0900, Satoru Takeuchi wrote: + chdir("/"); You should check the return value of chdir(). Otherwise we get the following warning message at the build time. Can we actually fail

Re: Any suggestions for thousands of disk image snapshots ?

2016-07-26 Thread Austin S. Hemmelgarn
On 2016-07-26 10:42, Chris Murphy wrote: On Tue, Jul 26, 2016 at 3:37 AM, Kurt Seo wrote: 2016-07-26 5:49 GMT+09:00 Chris Murphy : On Mon, Jul 25, 2016 at 1:25 AM, Kurt Seo wrote: Hi all I am currently
