Re: 4.11.1: cannot btrfs check --repair a filesystem, causes heavy memory stalls

2017-05-23 Thread Austin S. Hemmelgarn
On 2017-05-22 22:07, Chris Murphy wrote: On Mon, May 22, 2017 at 5:57 PM, Marc MERLIN wrote: On Mon, May 22, 2017 at 05:26:25PM -0600, Chris Murphy wrote: On Mon, May 22, 2017 at 10:31 AM, Marc MERLIN wrote: I already have 24GB of RAM in that machine,

Re: Btrfs/SSD

2017-05-16 Thread Austin S. Hemmelgarn
On 2017-05-16 08:21, Tomasz Torcz wrote: On Tue, May 16, 2017 at 03:58:41AM +0200, Kai Krakow wrote: On Mon, 15 May 2017 22:05:05 +0200, Tomasz Torcz wrote: My drive has # smartctl -a /dev/sda | grep LBA 241 Total_LBAs_Written 0x0032 099 099 000 Old_age

Re: Btrfs/SSD

2017-05-16 Thread Austin S. Hemmelgarn
On 2017-05-15 15:49, Kai Krakow wrote: On Mon, 15 May 2017 08:03:48 -0400, "Austin S. Hemmelgarn" <ahferro...@gmail.com> wrote: That's why I don't trust any of my data to them. But I still want the benefit of their speed. So I use SSDs mostly as frontend caches to HDDs. T

Re: "Corrected" errors persist after scrubbing

2017-05-16 Thread Austin S. Hemmelgarn
On 2017-05-16 05:53, Tom Hale wrote: Hi Chris, On 09/05/17 02:26, Chris Murphy wrote: Read errors are fixed by overwrites. If the underlying device doesn't report an error for the write command, it's assumed to succeed. Even md and LVM raid's do this. I understand assuming writes succeed in

Re: balancing every night broke balancing so now I can't balance anymore?

2017-05-15 Thread Austin S. Hemmelgarn
On 2017-05-15 04:14, Hugo Mills wrote: On Sun, May 14, 2017 at 04:16:52PM -0700, Marc MERLIN wrote: On Sun, May 14, 2017 at 09:21:11PM +, Hugo Mills wrote: 2) balance -musage=0 3) balance -musage=20 In most cases, this is going to make ENOSPC problems worse, not better. The reason for
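The usage-filtered balance commands being debated in this thread can be sketched as follows (the mount point /mnt/data is a placeholder):

```shell
# Compact only metadata chunks that are completely empty (cheap, low risk).
btrfs balance start -musage=0 /mnt/data

# Then repack metadata chunks that are less than 20% full.
btrfs balance start -musage=20 /mnt/data
```

Whether running this nightly helps or hurts ENOSPC behavior is exactly what the thread disputes; the commands themselves are standard balance filters.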

Re: Btrfs/SSD

2017-05-15 Thread Austin S. Hemmelgarn
On 2017-05-12 14:36, Kai Krakow wrote: On Fri, 12 May 2017 15:02:20 +0200, Imran Geriskovan wrote: On 5/12/17, Duncan <1i5t5.dun...@cox.net> wrote: FWIW, I'm in the market for SSDs ATM, and remembered this from a couple weeks ago so went back to find it. Thanks.

Re: Btrfs/SSD

2017-05-15 Thread Austin S. Hemmelgarn
On 2017-05-12 14:27, Kai Krakow wrote: On Tue, 18 Apr 2017 15:02:42 +0200, Imran Geriskovan <imran.gerisko...@gmail.com> wrote: On 4/17/17, Austin S. Hemmelgarn <ahferro...@gmail.com> wrote: Regarding BTRFS specifically: * Given my recently newfound understanding of what the

Re: Creating btrfs RAID on LUKS devs makes devices disappear

2017-05-12 Thread Austin S. Hemmelgarn
On 2017-05-12 09:54, Ochi wrote: On 12.05.2017 13:25, Austin S. Hemmelgarn wrote: On 2017-05-11 19:24, Ochi wrote: Hello, here is the journal.log (I hope). It's quite interesting. I rebooted the machine, performed a mkfs.btrfs on dm-{2,3,4} and dm-3 was missing afterwards (around timestamp 66

Re: Creating btrfs RAID on LUKS devs makes devices disappear

2017-05-12 Thread Austin S. Hemmelgarn
On 2017-05-11 19:24, Ochi wrote: Hello, here is the journal.log (I hope). It's quite interesting. I rebooted the machine, performed a mkfs.btrfs on dm-{2,3,4} and dm-3 was missing afterwards (around timestamp 66.*). However, I then logged into the machine from another terminal (around timestamp

Re: Qgroup reserved space like in ZFS?

2017-05-12 Thread Austin S. Hemmelgarn
On 2017-05-11 12:17, Robert Mader wrote: Hello everyone, I just wanted to ask a short question as I couldn't find a clear answer anywhere on the net, yet: Is it currently possible to reserve space for a BTRFS subvolume? There is currently no way to do this directly. However, you

Re: BTRFS converted from EXT4 becomes read-only after reboot

2017-05-08 Thread Austin S. Hemmelgarn
On 2017-05-08 12:22, Sean Greenslade wrote: On May 8, 2017 11:28:42 AM EDT, Sanidhya Solanki wrote: On Mon, 8 May 2017 10:16:44 -0400 Alexandru Guzu wrote: Sean, how would you approach the copy of the data back and forth if the OS is on it? Would a

Re: Can I see what device was used to mount btrfs?

2017-05-03 Thread Austin S. Hemmelgarn
On 2017-05-03 14:12, Andrei Borzenkov wrote: 03.05.2017 14:26, Austin S. Hemmelgarn wrote: On 2017-05-02 15:50, Goffredo Baroncelli wrote: On 2017-05-02 20:49, Adam Borowski wrote: It could be some daemon that waits for btrfs to become complete. Do we have something? Such a daemon would

Re: [PATCH 0/2] [RFC] Introduce device state 'failed'

2017-05-03 Thread Austin S. Hemmelgarn
lumes.c | 135 + fs/btrfs/volumes.h | 18 +++ 4 files changed, 255 insertions(+), 1 deletion(-) All my tests passed, and manual testing shows that it does as advertised, so for the series as a whole you can add: Tested-by: Austin S. He

Re: File system corruption, btrfsck abort

2017-05-03 Thread Austin S. Hemmelgarn
On 2017-05-03 10:17, Christophe de Dinechin wrote: On 29 Apr 2017, at 21:13, Chris Murphy wrote: On Sat, Apr 29, 2017 at 2:46 AM, Christophe de Dinechin wrote: On 28 Apr 2017, at 22:09, Chris Murphy wrote: On Fri,

Re: Can I see what device was used to mount btrfs?

2017-05-03 Thread Austin S. Hemmelgarn
On 2017-05-02 16:15, Kai Krakow wrote: On Tue, 2 May 2017 21:50:19 +0200, Goffredo Baroncelli wrote: On 2017-05-02 20:49, Adam Borowski wrote: It could be some daemon that waits for btrfs to become complete. Do we have something? Such a daemon would also have to read

Re: Can I see what device was used to mount btrfs?

2017-05-03 Thread Austin S. Hemmelgarn
On 2017-05-02 15:50, Goffredo Baroncelli wrote: On 2017-05-02 20:49, Adam Borowski wrote: It could be some daemon that waits for btrfs to become complete. Do we have something? Such a daemon would also have to read the chunk tree. I don't think that a daemon is necessary. As proof of

Re: Experiences with metadata balance/convert

2017-04-21 Thread Austin S. Hemmelgarn
On 2017-04-21 07:13, Hans van Kranenburg wrote: On 04/21/2017 12:31 PM, Hans van Kranenburg wrote: Doh, On 04/21/2017 12:26 PM, Hans van Kranenburg wrote: [...] == Thinking out of the box == Technically, converting from DUP to single could also mean: * Flipping one bit in the block group

Re: Reporting and monitoring storage events (blog)

2017-04-20 Thread Austin S. Hemmelgarn
On 2017-04-19 13:39, Chris Murphy wrote: http://www-rhstorage.rhcloud.com/blog/vpodzime/reporting-and-monitoring-storage-events I think the most useful part of this would be standardized messaging. For the exact same defect state on disk (data corruption), I get two different formatted messages

Re: Btrfs/SSD

2017-04-18 Thread Austin S. Hemmelgarn
On 2017-04-18 09:02, Imran Geriskovan wrote: On 4/17/17, Austin S. Hemmelgarn <ahferro...@gmail.com> wrote: Regarding BTRFS specifically: * Given my recently newfound understanding of what the 'ssd' mount option actually does, I'm inclined to recommend that people who are using high-end

Re: Btrfs/SSD

2017-04-18 Thread Austin S. Hemmelgarn
On 2017-04-17 15:22, Imran Geriskovan wrote: On 4/17/17, Roman Mamedov <r...@romanrm.net> wrote: "Austin S. Hemmelgarn" <ahferro...@gmail.com> wrote: * Compression should help performance and device lifetime most of the time, unless your CPU is fully utilized on a r

Re: Btrfs/SSD

2017-04-18 Thread Austin S. Hemmelgarn
On 2017-04-17 15:39, Chris Murphy wrote: On Mon, Apr 17, 2017 at 1:26 PM, Austin S. Hemmelgarn <ahferro...@gmail.com> wrote: On 2017-04-17 14:34, Chris Murphy wrote: Nope. The first paragraph applies to NVMe machine with ssd mount option. Few fragments. The second paragraph applies

Re: Btrfs/SSD

2017-04-17 Thread Austin S. Hemmelgarn
On 2017-04-17 14:34, Chris Murphy wrote: On Mon, Apr 17, 2017 at 11:13 AM, Austin S. Hemmelgarn <ahferro...@gmail.com> wrote: What is a high end SSD these days? Built-in NVMe? One with a good FTL in the firmware. At minimum, the good Samsung EVO drives, the high quality Inte

Re: compressing nocow files

2017-04-17 Thread Austin S. Hemmelgarn
On 2017-04-17 13:36, Chris Murphy wrote: HI, /dev/nvme0n1p8 on / type btrfs (rw,relatime,seclabel,ssd,space_cache,subvolid=258,subvol=/root) I've got a test folder with +C set and then copied a test file into it. $ lsattr C--
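The test being quoted (a +C directory, then a file copied into it) can be reproduced roughly like this; paths are placeholders, and the exact lsattr flag column varies by e2fsprogs version:

```shell
# Create a directory with the NOCOW attribute; new files inherit it.
mkdir /mnt/test
chattr +C /mnt/test

# A file copied in picks up +C, which disables both CoW and compression.
cp /etc/hostname /mnt/test/file
lsattr /mnt/test/file    # the C flag should appear in the attribute column
```

Note that +C only takes reliable effect on files that are empty when the attribute is set (or inherited at creation), which is why the directory is flagged first.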

Re: Btrfs/SSD

2017-04-17 Thread Austin S. Hemmelgarn
On 2017-04-17 12:58, Chris Murphy wrote: On Mon, Apr 17, 2017 at 5:53 AM, Austin S. Hemmelgarn <ahferro...@gmail.com> wrote: Regarding BTRFS specifically: * Given my recently newfound understanding of what the 'ssd' mount option actually does, I'm inclined to recommend that peop

Re: Btrfs/SSD

2017-04-17 Thread Austin S. Hemmelgarn
On 2017-04-14 07:02, Imran Geriskovan wrote: Hi, Sometime ago we had some discussion about SSDs. Within the limits of unknown/undocumented device infos, we loosely had covered data retention capability/disk age/life time interrelations, (in?)effectiveness of btrfs dup on SSDs, etc.. Now, as

Re: Deduplication tools

2017-04-13 Thread Austin S. Hemmelgarn
On 2017-04-13 07:06, Marat Khalili wrote: After reading this maillist for a while I became a bit more cautious about using various BTRFS features, so decided to ask just in case: is it safe to use out-of-band deduplication tools, and which

Re: BTRFS as a GlusterFS storage back-end, and what I've learned from using it as such.

2017-04-13 Thread Austin S. Hemmelgarn
On 2017-04-12 18:48, Duncan wrote: Austin S. Hemmelgarn posted on Wed, 12 Apr 2017 07:18:44 -0400 as excerpted: On 2017-04-12 01:49, Qu Wenruo wrote: At 04/11/2017 11:40 PM, Austin S. Hemmelgarn wrote: 4. Depending on other factors, compression can actually slow you down pretty

Re: Btrfs disk layout question

2017-04-12 Thread Austin S. Hemmelgarn
On 2017-04-12 12:44, Andrei Borzenkov wrote: 12.04.2017 14:20, Austin S. Hemmelgarn wrote: On 2017-04-12 00:18, Chris Murphy wrote: On Tue, Apr 11, 2017 at 3:00 PM, Adam Borowski <kilob...@angband.pl> wrote: On Tue, Apr 11, 2017 at 12:15:32PM -0700, Amin Hassani wrote: I am w

Re: Btrfs disk layout question

2017-04-12 Thread Austin S. Hemmelgarn
On 2017-04-12 00:18, Chris Murphy wrote: On Tue, Apr 11, 2017 at 3:00 PM, Adam Borowski wrote: On Tue, Apr 11, 2017 at 12:15:32PM -0700, Amin Hassani wrote: I am working on a project with Btrfs and I was wondering if there is any way to see the disk layout of the btrfs

Re: BTRFS as a GlusterFS storage back-end, and what I've learned from using it as such.

2017-04-12 Thread Austin S. Hemmelgarn
On 2017-04-12 01:49, Qu Wenruo wrote: At 04/11/2017 11:40 PM, Austin S. Hemmelgarn wrote: About a year ago now, I decided to set up a small storage cluster to store backups (and partially replace Dropbox for my usage, but that's a separate story). I ended up using GlusterFS as the clustering

BTRFS as a GlusterFS storage back-end, and what I've learned from using it as such.

2017-04-11 Thread Austin S. Hemmelgarn
About a year ago now, I decided to set up a small storage cluster to store backups (and partially replace Dropbox for my usage, but that's a separate story). I ended up using GlusterFS as the clustering software itself, and BTRFS as the back-end storage. GlusterFS itself is actually a pretty

Re: About free space fragmentation, metadata write amplification and (no)ssd

2017-04-11 Thread Austin S. Hemmelgarn
On 2017-04-10 18:59, Hans van Kranenburg wrote: On 04/10/2017 02:23 PM, Austin S. Hemmelgarn wrote: On 2017-04-08 16:19, Hans van Kranenburg wrote: So... today a real life story / btrfs use case example from the trenches at work... tl;dr 1) btrfs is awesome, but you have to carefully choose

Re: btrfs filesystem keeps allocating new chunks for no apparent reason

2017-04-11 Thread Austin S. Hemmelgarn
On 2017-04-11 05:55, Adam Borowski wrote: On Tue, Apr 11, 2017 at 06:01:19AM +0200, Kai Krakow wrote: Yes, I know all this. But I don't see why you still want noatime or relatime if you use lazytime, except for super-optimizing. Lazytime gives you POSIX conformity for a problem that the other

Re: btrfs filesystem keeps allocating new chunks for no apparent reason

2017-04-10 Thread Austin S. Hemmelgarn
On 2017-04-10 14:18, Kai Krakow wrote: On Mon, 10 Apr 2017 13:13:39 -0400, "Austin S. Hemmelgarn" <ahferro...@gmail.com> wrote: On 2017-04-10 12:54, Kai Krakow wrote: On Mon, 10 Apr 2017 18:44:44 +0200, Kai Krakow <hurikha...@gmail.com> wrote: On Mon, 10 Apr 2017

Re: btrfs filesystem keeps allocating new chunks for no apparent reason

2017-04-10 Thread Austin S. Hemmelgarn
On 2017-04-10 12:54, Kai Krakow wrote: On Mon, 10 Apr 2017 18:44:44 +0200, Kai Krakow <hurikha...@gmail.com> wrote: On Mon, 10 Apr 2017 08:51:38 -0400, "Austin S. Hemmelgarn" <ahferro...@gmail.com> wrote: On 2017-04-10 08:45, Kai Krakow wrote: On Mon, 10 Apr 2017

Re: btrfs filesystem keeps allocating new chunks for no apparent reason

2017-04-10 Thread Austin S. Hemmelgarn
On 2017-04-10 08:45, Kai Krakow wrote: On Mon, 10 Apr 2017 08:39:23 -0400, "Austin S. Hemmelgarn" <ahferro...@gmail.com> wrote: They've been running BTRFS with LZO compression, the SSD allocator, atime disabled, and mtime updates deferred (lazytime mount option) the whole

Re: [PATCH] btrfs: scrub: use do_div() for 64-by-32 division

2017-04-10 Thread Austin S. Hemmelgarn
On 2017-04-08 17:07, Adam Borowski wrote: Unbreaks ARM and possibly other 32-bit architectures. Fixes: 7d0ef8b4d: Btrfs: update scrub_parity to use u64 stripe_len Reported-by: Icenowy Zheng Signed-off-by: Adam Borowski --- You'd probably want to squash

Re: parity scrub on 32-bit

2017-04-10 Thread Austin S. Hemmelgarn
On 2017-04-10 04:53, Adam Borowski wrote: Hi! While messing with the division failure on current -next, I've noticed that parity scrub splats immediately on all 32-bit archs I tried. But, it's not a regression: it bisects to 5a6ac9eacb49143cbad3bbfda72263101cb1f3df (merged in 3.19) which

Re: btrfs filesystem keeps allocating new chunks for no apparent reason

2017-04-10 Thread Austin S. Hemmelgarn
On 2017-04-09 19:23, Hans van Kranenburg wrote: On 04/08/2017 01:16 PM, Hans van Kranenburg wrote: On 04/07/2017 11:25 PM, Hans van Kranenburg wrote: Ok, I'm going to revive a year old mail thread here with interesting new info: [...] Now, another surprise: From the exact moment I did mount

Re: About free space fragmentation, metadata write amplification and (no)ssd

2017-04-10 Thread Austin S. Hemmelgarn
On 2017-04-08 16:19, Hans van Kranenburg wrote: So... today a real life story / btrfs use case example from the trenches at work... tl;dr 1) btrfs is awesome, but you have to carefully choose which parts of it you want to use or avoid 2) improvements can be made, but at least the problems

Re: Volume appears full but TB's of space available

2017-04-10 Thread Austin S. Hemmelgarn
On 2017-04-08 01:12, Duncan wrote: Austin S. Hemmelgarn posted on Fri, 07 Apr 2017 07:41:22 -0400 as excerpted: 2. Results from 'btrfs scrub'. This is somewhat tricky because scrub is either asynchronous or blocks for a _long_ time. The simplest option I've found is to fire off
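The pattern alluded to here, firing off a scrub asynchronously and polling it, looks like this (a sketch; /mnt/data is a placeholder):

```shell
# Start a scrub in the background; the command returns immediately.
btrfs scrub start /mnt/data

# Poll progress and accumulated error counts while it runs.
btrfs scrub status /mnt/data

# Alternatively, block until the scrub finishes (-B) and print
# per-device statistics (-d) — this can take hours on large arrays.
btrfs scrub start -B -d /mnt/data
```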

Re: Volume appears full but TB's of space available

2017-04-07 Thread Austin S. Hemmelgarn
On 2017-04-07 13:05, John Petrini wrote: The use case actually is not Ceph, I was just drawing a comparison between Ceph's object replication strategy vs BTRFS's chunk mirroring. That's actually a really good comparison that I hadn't thought of before. From what I can tell from my limited

Re: Volume appears full but TB's of space available

2017-04-07 Thread Austin S. Hemmelgarn
On 2017-04-07 12:58, John Petrini wrote: When you say "running BTRFS raid1 on top of LVM RAID0 volumes" do you mean creating two LVM RAID-0 volumes and then putting BTRFS RAID1 on the two resulting logical volumes? Yes, although it doesn't have to be LVM, it could just as easily be MD or even

Re: Volume appears full but TB's of space available

2017-04-07 Thread Austin S. Hemmelgarn
On 2017-04-07 12:28, Chris Murphy wrote: On Fri, Apr 7, 2017 at 7:50 AM, Austin S. Hemmelgarn <ahferro...@gmail.com> wrote: If you care about both performance and data safety, I would suggest using BTRFS raid1 mode on top of LVM or MD RAID0 together with having good backups and good moni

Re: Volume appears full but TB's of space available

2017-04-07 Thread Austin S. Hemmelgarn
On 2017-04-07 12:04, Chris Murphy wrote: On Fri, Apr 7, 2017 at 5:41 AM, Austin S. Hemmelgarn <ahferro...@gmail.com> wrote: I'm rather fond of running BTRFS raid1 on top of LVM RAID0 volumes, which while it provides no better data safety than BTRFS raid10 mode, gets noticeably

Re: Volume appears full but TB's of space available

2017-04-07 Thread Austin S. Hemmelgarn
On 2017-04-07 09:28, John Petrini wrote: Hi Austin, Thanks for taking to time to provide all of this great information! Glad I could help. You've got me curious about RAID1. If I were to convert the array to RAID1 could it then sustain a multi drive failure? Or in other words do I actually

Re: Volume appears full but TB's of space available

2017-04-07 Thread Austin S. Hemmelgarn
On 2017-04-06 23:25, John Petrini wrote: Interesting. That's the first time I'm hearing this. If that's the case I feel like it's a stretch to call it RAID10 at all. It sounds a lot more like basic replication similar to Ceph only Ceph understands failure domains and therefore can be configured

Re: Need some help: "BTRFS critical (device sda): corrupt leaf, slot offset bad: block"

2017-04-04 Thread Austin S. Hemmelgarn
On 2017-04-04 09:29, Brian B wrote: On 04/04/2017 12:02 AM, Robert Krig wrote: My storage array is BTRFS Raid1 with 4x8TB Drives. Wouldn't it be possible to simply disconnect two of those drives, mount with -o degraded and still have access (even if read-only) to all my data? Just jumping on
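For reference, the experiment proposed in the quoted message would look like the sketch below (device names are placeholders). Note that BTRFS raid1 stores exactly two copies of each chunk regardless of device count, so on a 4-drive raid1 volume any chunk whose two copies both lived on the disconnected pair is unreadable:

```shell
# With two of the four raid1 members disconnected, attempt a read-only
# degraded mount; data with both copies on the missing drives is gone.
mount -o degraded,ro /dev/sda /mnt/recovery
```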

Re: Is btrfs-convert able to deal with sparse files in a ext4 filesystem?

2017-04-03 Thread Austin S. Hemmelgarn
On 2017-04-01 05:48, Kai Herlemann wrote: Hi, I have on my ext4 filesystem some sparse files, mostly images from ext4 filesystems. Is btrfs-convert (4.9.1) able to deal with sparse files or can that cause any problems? I would tend to agree with some of the other people who have commented here,

Re: mix ssd and hdd in single volume

2017-04-03 Thread Austin S. Hemmelgarn
On 2017-04-01 02:06, UGlee wrote: We are working on a small NAS server for home user. The product is equipped with a small fast SSD (around 60-120GB) and a large HDD (2T to 4T). We have two choices: 1. using bcache to accelerate io operation 2. combining SSD and HDD into a single btrfs volume.

Re: Shrinking a device - performance?

2017-03-31 Thread Austin S. Hemmelgarn
On 2017-03-30 11:55, Peter Grandi wrote: My guess is that very complex risky slow operations like that are provided by "clever" filesystem developers for "marketing" purposes, to win box-ticking competitions. That applies to those system developers who do know better; I suspect that even some

Re: Fwd: Confusion about snapshots containers

2017-03-31 Thread Austin S. Hemmelgarn
On 2017-03-30 09:07, Tim Cuthbertson wrote: On Wed, Mar 29, 2017 at 10:46 PM, Duncan <1i5t5.dun...@cox.net> wrote: Tim Cuthbertson posted on Wed, 29 Mar 2017 18:20:52 -0500 as excerpted: So, another question... Do I then leave the top level mounted all the time for snapshots, or should I

Re: Qgroups are not applied when snapshotting a subvol?

2017-03-29 Thread Austin S. Hemmelgarn
On 2017-03-29 01:38, Duncan wrote: Austin S. Hemmelgarn posted on Tue, 28 Mar 2017 07:44:56 -0400 as excerpted: On 2017-03-27 21:49, Qu Wenruo wrote: The problem is, how should we treat subvolume. Btrfs subvolume sits in the middle of directory and (logical) volume used in traditional

Re: Shrinking a device - performance?

2017-03-28 Thread Austin S. Hemmelgarn
On 2017-03-28 10:43, Peter Grandi wrote: This is going to be long because I am writing something detailed hoping pointlessly that someone in the future will find it by searching the list archives while doing research before setting up a new storage system, and they will be the kind of person

Re: Qgroups are not applied when snapshotting a subvol?

2017-03-28 Thread Austin S. Hemmelgarn
On 2017-03-28 09:53, Marat Khalili wrote: There are a couple of reasons I'm advocating the specific behavior I outlined: Some of your points are valid, but some break current behaviour and expectations or create technical difficulties. 1. It doesn't require any specific qgroup setup. By

Re: Qgroups are not applied when snapshotting a subvol?

2017-03-28 Thread Austin S. Hemmelgarn
/03/17 14:24, Austin S. Hemmelgarn wrote: On 2017-03-27 15:32, Chris Murphy wrote: How about if qgroups are enabled, then non-root user is prevented from creating new subvolumes? Or is there a way for a new nested subvolume to be included in its parent's quota, rather than the new subvolume having

Re: Qgroups are not applied when snapshotting a subvol?

2017-03-28 Thread Austin S. Hemmelgarn
On 2017-03-27 21:49, Qu Wenruo wrote: At 03/27/2017 08:01 PM, Austin S. Hemmelgarn wrote: On 2017-03-27 07:02, Moritz Sichert wrote: Am 27.03.2017 um 05:46 schrieb Qu Wenruo: At 03/27/2017 11:26 AM, Andrei Borzenkov wrote: 27.03.2017 03:39, Qu Wenruo wrote: At 03/26/2017 06:03 AM

Re: Qgroups are not applied when snapshotting a subvol?

2017-03-28 Thread Austin S. Hemmelgarn
On 2017-03-27 15:32, Chris Murphy wrote: How about if qgroups are enabled, then non-root user is prevented from creating new subvolumes? Or is there a way for a new nested subvolume to be included in its parent's quota, rather than the new subvolume having a whole new quota limit? Tricky

Re: Shrinking a device - performance?

2017-03-27 Thread Austin S. Hemmelgarn
On 2017-03-27 09:54, Christian Theune wrote: Hi, On Mar 27, 2017, at 3:50 PM, Christian Theune <c...@flyingcircus.io> wrote: Hi, On Mar 27, 2017, at 3:46 PM, Austin S. Hemmelgarn <ahferro...@gmail.com> wrote: Something I’d like to verify: does having traffic on the

Re: Shrinking a device - performance?

2017-03-27 Thread Austin S. Hemmelgarn
On 2017-03-27 09:50, Christian Theune wrote: Hi, On Mar 27, 2017, at 3:46 PM, Austin S. Hemmelgarn <ahferro...@gmail.com> wrote: Something I’d like to verify: does having traffic on the volume have the potential to delay this infinitely? I.e. does the system write to any segments that

Re: Shrinking a device - performance?

2017-03-27 Thread Austin S. Hemmelgarn
On 2017-03-27 09:24, Hugo Mills wrote: On Mon, Mar 27, 2017 at 03:20:37PM +0200, Christian Theune wrote: Hi, On Mar 27, 2017, at 3:07 PM, Hugo Mills wrote: On my hardware (consumer HDDs and SATA, RAID-1 over 6 devices), it takes about a minute to move 1 GiB of data. At

Re: Qgroups are not applied when snapshotting a subvol?

2017-03-27 Thread Austin S. Hemmelgarn
On 2017-03-27 07:02, Moritz Sichert wrote: Am 27.03.2017 um 05:46 schrieb Qu Wenruo: At 03/27/2017 11:26 AM, Andrei Borzenkov wrote: 27.03.2017 03:39, Qu Wenruo wrote: At 03/26/2017 06:03 AM, Moritz Sichert wrote: Hi, I tried to configure qgroups on a btrfs filesystem but was really

Re: backing up a file server with many subvolumes

2017-03-27 Thread Austin S. Hemmelgarn
On 2017-03-25 23:00, J. Hart wrote: I have a Btrfs filesystem on a backup server. This filesystem has a directory to hold backups for filesystems from remote machines. In this directory is a subdirectory for each machine. Under each machine subdirectory is one directory for each filesystem

Re: Cross-subvolume rename behavior

2017-03-23 Thread Austin S. Hemmelgarn
On 2017-03-23 06:09, Hugo Mills wrote: On Wed, Mar 22, 2017 at 10:37:23PM -0700, Sean Greenslade wrote: Hello, all. I'm currently tracking down the source of some strange behavior in my setup. I recognize that this isn't strictly a btrfs issue, but I figured I'd start at the bottom of the stack

Re: Thoughts on 'btrfs device stats' and security.

2017-03-17 Thread Austin S. Hemmelgarn
On 2017-03-17 15:01, Eric Sandeen wrote: On 3/17/17 11:25 AM, Austin S. Hemmelgarn wrote: I'm currently working on a plugin for colllectd [1] to track per-device per-filesystem error rates for BTRFS volumes. Overall, this is actually going quite well (I've got most of the secondary logic

Re: BTRFS Metadata Corruption Prevents Scrub and btrfs check

2017-03-17 Thread Austin S. Hemmelgarn
On 2017-03-17 15:25, John Marrett wrote: Peter, Bad news. That means that probably the disk is damaged and further issues may happen. This system has a long history, I have had a dual drive failure in the past, I managed to recover from that with ddrescue. I've subsequently copied the

Thoughts on 'btrfs device stats' and security.

2017-03-17 Thread Austin S. Hemmelgarn
I'm currently working on a plugin for colllectd [1] to track per-device per-filesystem error rates for BTRFS volumes. Overall, this is actually going quite well (I've got most of the secondary logic like matching filesystems to watch and parsing the data done already), but I've come across a
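The per-device error counters that a plugin like this parses come from `btrfs device stats`; the counter names below are the real ones, though the numbers shown are illustrative:

```shell
btrfs device stats /mnt/data
# [/dev/sda].write_io_errs    0
# [/dev/sda].read_io_errs     0
# [/dev/sda].flush_io_errs    0
# [/dev/sda].corruption_errs  0
# [/dev/sda].generation_errs  0
```

The counters are cumulative since the last reset (`btrfs device stats -z`), which is part of what makes the security question interesting: they are readable per mounted filesystem, not gated per user.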

Re: Home storage with btrfs

2017-03-13 Thread Austin S. Hemmelgarn
On 2017-03-13 07:52, Juan Orti Alcaine wrote: 2017-03-13 12:29 GMT+01:00 Hérikz Nawarro: Hello everyone, Today is safe to use btrfs for home storage? No raid, just secure storage for some files and create snapshots from it. In my humble opinion, yes. I'm running a

Re: raid1 degraded mount still produce single chunks, writeable mount not allowed

2017-03-09 Thread Austin S. Hemmelgarn
On 2017-03-09 04:49, Peter Grandi wrote: Consider the common case of a 3-member volume with a 'raid1' target profile: if the sysadm thinks that a drive should be replaced, the goal is to take it out *without* converting every chunk to 'single', because with 2-out-of-3 devices half of the chunks

Re: [PATCH v3 0/7] Chunk level degradable check

2017-03-08 Thread Austin S. Hemmelgarn
es.c | 156 - fs/btrfs/volumes.h | 37 + 6 files changed, 188 insertions(+), 101 deletions(-) Everything appears to work as advertised here, so for the patchset as a whole, you can add: Tested-by: Austin S. Hemmelgarn <ahferro...@gma

Re: raid1 degraded mount still produce single chunks, writeable mount not allowed

2017-03-06 Thread Austin S. Hemmelgarn
On 2017-03-05 14:13, Peter Grandi wrote: What makes me think that "unmirrored" 'raid1' profile chunks are "not a thing" is that it is impossible to remove explicitly a member device from a 'raid1' profile volume: first one has to 'convert' to 'single', and then the 'remove' copies back to the
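The two removal paths being contrasted in this thread can be sketched as follows (device names and mount point are placeholders):

```shell
# Direct removal from a raid1 volume with 3+ members: the kernel
# re-replicates affected chunks onto the remaining devices as part
# of the remove, with no conversion to 'single' required.
btrfs device remove /dev/sdc /mnt/data

# Replacing a suspect member in place, which likewise avoids any
# profile conversion and is usually faster than remove + add.
btrfs replace start /dev/sdc /dev/sdd /mnt/data
```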

Re: raid1 degraded mount still produce single chunks, writeable mount not allowed

2017-03-06 Thread Austin S. Hemmelgarn
On 2017-03-03 15:10, Kai Krakow wrote: On Fri, 3 Mar 2017 07:19:06 -0500, "Austin S. Hemmelgarn" <ahferro...@gmail.com> wrote: On 2017-03-03 00:56, Kai Krakow wrote: On Thu, 2 Mar 2017 11:37:53 +0100, Adam Borowski <kilob...@angband.pl> wrote: On Wed, Mar 01, 20

Re: raid1 degraded mount still produce single chunks, writeable mount not allowed

2017-03-03 Thread Austin S. Hemmelgarn
On 2017-03-03 00:56, Kai Krakow wrote: On Thu, 2 Mar 2017 11:37:53 +0100, Adam Borowski wrote: On Wed, Mar 01, 2017 at 05:30:37PM -0700, Chris Murphy wrote: [1717713.408675] BTRFS warning (device dm-8): missing devices (1) exceeds the limit (0), writeable mount is not

Re: raid1 degraded mount still produce single chunks, writeable mount not allowed

2017-03-03 Thread Austin S. Hemmelgarn
On 2017-03-02 19:47, Peter Grandi wrote: [ ... ] Meanwhile, the problem as I understand it is that at the first raid1 degraded writable mount, no single-mode chunks exist, but without the second device, they are created. [ ... ] That does not make any sense, unless there is a fundamental

Re: raid1 degraded mount still produce single chunks, writeable mount not allowed

2017-03-02 Thread Austin S. Hemmelgarn
On 2017-03-02 12:26, Andrei Borzenkov wrote: 02.03.2017 16:41, Duncan wrote: Chris Murphy posted on Wed, 01 Mar 2017 17:30:37 -0700 as excerpted: [1717713.408675] BTRFS warning (device dm-8): missing devices (1) exceeds the limit (0), writeable mount is not allowed [1717713.446453] BTRFS

Re: Low IOOP Performance

2017-02-27 Thread Austin S. Hemmelgarn
On 2017-02-27 14:15, John Marrett wrote: Liubo correctly identified direct IO as a solution for my test performance issues, with it in use I achieved 908 read and 305 write, not quite as fast as ZFS but more than adequate for my needs. I then applied Peter's recommendation of switching to raid10

Re: Downgrading kernel 4.9 to 4.4 with space_cache=v2 enabled?

2017-02-24 Thread Austin S. Hemmelgarn
On 2017-02-23 19:54, Qu Wenruo wrote: At 02/23/2017 06:51 PM, Christian Theune wrote: Hi, not sure whether it’s possible, but we tried space_cache=v2 and obviously after working fine in staging it broke in production. Or rather: we upgraded from 4.4 to 4.9 and enabled the space_cache. Our

Re: Downgrading kernel 4.9 to 4.4 with space_cache=v2 enabled?

2017-02-23 Thread Austin S. Hemmelgarn
On 2017-02-23 08:19, Christian Theune wrote: Hi, just for future reference if someone finds this thread: there is a bit of output I’m seeing with this crashing kernel (unclear whether related to btrfs or not): 31 | 02/23/2017 | 09:51:22 | OS Stop/Shutdown #0x4f | Run-time critical stop |

Re: Downgrading kernel 4.9 to 4.4 with space_cache=v2 enabled?

2017-02-23 Thread Austin S. Hemmelgarn
On 2017-02-23 05:51, Christian Theune wrote: Hi, not sure whether it’s possible, but we tried space_cache=v2 and obviously after working fine in staging it broke in production. Or rather: we upgraded from 4.4 to 4.9 and enabled the space_cache. Our production volume is around 50TiB usable

Re: Opps.. Should be 4.9/4.10 Experiences

2017-02-17 Thread Austin S. Hemmelgarn
On 2017-02-17 03:26, Duncan wrote: Imran Geriskovan posted on Thu, 16 Feb 2017 13:42:09 +0200 as excerpted: Opps.. I mean 4.9/4.10 Experiences On 2/16/17, Imran Geriskovan wrote: What are your experiences for btrfs regarding 4.10 and 4.11 kernels? I'm still on

Re: man filesystems(5) doesn't contain Btrfs

2017-02-16 Thread Austin S. Hemmelgarn
On 2017-02-16 15:36, Chris Murphy wrote: Hi, This man page contains a list for pretty much every other file system, with a oneliner description: ext4, XFS is in there, and even NTFS, but not Btrfs. Also, /etc/filesystems doesn't contain Btrfs. Anyone know if either, or both, ought to contain

Re: Way to force allocation of more metadata?

2017-02-16 Thread Austin S. Hemmelgarn
On 2017-02-16 15:13, E V wrote: It would be nice if there was an easy way to tell btrfs to allocate another metadata chunk. For example, the below fs is full due to exhausted metadata: Device size:1013.28GiB Device allocated: 1013.28GiB Device unallocated:
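There is no direct "allocate a metadata chunk" command; the usual workaround when all raw space is tied up in underused data chunks is to compact a few of them so the allocator has unallocated space to draw from. A sketch (mount point is a placeholder):

```shell
# Repack data chunks that are less than 10% full, returning their
# raw space to the unallocated pool so metadata chunks can be created.
btrfs balance start -dusage=10 /mnt/data

# Verify that unallocated space has reappeared.
btrfs filesystem usage /mnt/data
```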

Re: Unexpected behavior involving file attributes and snapshots.

2017-02-14 Thread Austin S. Hemmelgarn
On 2017-02-14 11:46, Austin S. Hemmelgarn wrote: On 2017-02-14 11:07, Chris Murphy wrote: On Tue, Feb 14, 2017 at 8:30 AM, Austin S. Hemmelgarn <ahferro...@gmail.com> wrote: I was just experimenting with snapshots on 4.9.0, and came across some unexpected behavior. The simple expla

Re: Unexpected behavior involving file attributes and snapshots.

2017-02-14 Thread Austin S. Hemmelgarn
On 2017-02-14 11:07, Chris Murphy wrote: On Tue, Feb 14, 2017 at 8:30 AM, Austin S. Hemmelgarn <ahferro...@gmail.com> wrote: I was just experimenting with snapshots on 4.9.0, and came across some unexpected behavior. The simple explanation is that if you snapshot a subvolume, any

Unexpected behavior involving file attributes and snapshots.

2017-02-14 Thread Austin S. Hemmelgarn
I was just experimenting with snapshots on 4.9.0, and came across some unexpected behavior. The simple explanation is that if you snapshot a subvolume, any files in the subvolume that have the NOCOW attribute will not have that attribute in the snapshot. Some further testing indicates that
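The reported behavior can be reproduced with a sketch like the one below (paths are placeholders; the final result is the unexpected part being reported, not documented behavior):

```shell
btrfs subvolume create /mnt/subvol
touch /mnt/subvol/file
chattr +C /mnt/subvol/file            # set NOCOW on the empty file
lsattr /mnt/subvol/file               # C attribute is present

btrfs subvolume snapshot /mnt/subvol /mnt/snap
lsattr /mnt/snap/file                 # reported: C attribute is missing
```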

Re: Help understanding autodefrag details

2017-02-13 Thread Austin S. Hemmelgarn
On 2017-02-10 09:21, Peter Zaitsev wrote: Hi, As I have been reading btrfs whitepaper it speaks about autodefrag in very generic terms - once random write in the file is detected it is put in the queue to be defragmented. Yet I could not find any specifics about this process described
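The trigger described in that thread (a random write puts the file on a queue to be defragmented) can be sketched as a toy model. This is an illustration of the idea only, not the kernel's actual autodefrag code; the class name, file name, and the "non-sequential write" heuristic are invented for the example.

```python
from collections import deque

class AutodefragModel:
    """Toy model of autodefrag's trigger: a write that is not
    sequential with the previous one marks the file for defrag."""

    def __init__(self):
        self.last_end = {}      # file -> end offset of the previous write
        self.queue = deque()    # files waiting to be defragmented
        self.queued = set()

    def record_write(self, path, offset, length):
        prev_end = self.last_end.get(path)
        # A "random" write starts somewhere other than where the last
        # write ended; that is the (simplified) trigger used here.
        if prev_end is not None and offset != prev_end and path not in self.queued:
            self.queue.append(path)
            self.queued.add(path)
        self.last_end[path] = offset + length

model = AutodefragModel()
model.record_write("db.ibd", 0, 4096)       # first write: nothing to compare
model.record_write("db.ibd", 4096, 4096)    # sequential: no trigger
model.record_write("db.ibd", 65536, 4096)   # random: file gets queued
print(list(model.queue))                    # ['db.ibd']
```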

Re: BTRFS for OLTP Databases

2017-02-13 Thread Austin S. Hemmelgarn
On 2017-02-09 22:58, Andrei Borzenkov wrote: 07.02.2017 23:47, Austin S. Hemmelgarn wrote: ... Sadly, freezefs (the generic interface based off of xfs_freeze) only works for block device snapshots. Filesystem-level snapshots need the application software to sync all its data and then stop

Re: understanding disk space usage

2017-02-09 Thread Austin S. Hemmelgarn
On 2017-02-09 08:25, Adam Borowski wrote: On Wed, Feb 08, 2017 at 11:48:04AM +0800, Qu Wenruo wrote: Just don't believe the vanilla df output for btrfs. For btrfs, unlike other fs like ext4/xfs, which allocates chunk dynamically and has different metadata/data profile, we can only get a clear
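Why plain df misleads on btrfs comes down to arithmetic: space is first allocated into chunks, and the metadata profile (e.g. dup) costs more raw bytes per logical byte than the data profile. The sketch below is an illustrative model with made-up numbers based on the df output quoted later in this thread, not btrfs's real accounting code.

```python
def usable_free(device_size, chunks):
    """Estimate genuinely writable space on a single-device
    btrfs-like layout.

    chunks: list of (allocated_raw, used_logical, raw_ratio), where
    raw_ratio is how many raw bytes one logical byte costs
    (1 for single, 2 for dup). Illustrative model only.
    """
    raw_allocated = sum(alloc for alloc, _, _ in chunks)
    unallocated = device_size - raw_allocated
    # Free space inside already-allocated chunks, in logical bytes:
    slack = sum(alloc / ratio - used for alloc, used, ratio in chunks)
    # Pessimistically assume new chunks would use the worst ratio seen:
    worst = max(ratio for _, _, ratio in chunks)
    return slack + unallocated / worst

GiB = 1024 ** 3
# A 28 GiB device: 25 GiB of data chunks (single profile, 24 GiB used)
# plus 3 GiB of metadata chunks (dup, so 1.5 GiB logical, 1.4 used).
chunks = [(25 * GiB, 24 * GiB, 1), (3 * GiB, 1.4 * GiB, 2)]
print(usable_free(28 * GiB, chunks) / GiB)  # roughly 1.1, far less than df suggests
```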

Re: csum failed, checksum error, questions

2017-02-09 Thread Austin S. Hemmelgarn
On 2017-02-08 20:42, Ian Kelling wrote: I had a file read fail repeatably, in syslog, lines like this kernel: BTRFS warning (device dm-5): csum failed ino 2241616 off 51580928 csum 4redacted expected csum 2redacted I rmed the file. Another error more recently, 5 instances which look like
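The "csum failed" warning means the checksum computed at read time no longer matches the one stored when the block was written. A minimal sketch of that comparison, using zlib's plain crc32 in place of the crc32c that btrfs actually uses:

```python
import zlib

def check_block(data, stored_csum):
    """Return (ok, computed) for one data block, mimicking the read
    path: recompute the checksum and compare with the stored one.
    btrfs uses crc32c; plain crc32 stands in for it here."""
    computed = zlib.crc32(data) & 0xFFFFFFFF
    return computed == stored_csum, computed

block = b"\x00" * 4096
good = zlib.crc32(block) & 0xFFFFFFFF       # checksum stored at write time
ok, _ = check_block(block, good)
print(ok)                                   # True

# A single flipped bit is enough to make the read fail with "csum failed":
corrupted = b"\x01" + block[1:]
ok, computed = check_block(corrupted, good)
print(ok)                                   # False
```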

Re: BTRFS and cyrus mail server

2017-02-09 Thread Austin S. Hemmelgarn
On 2017-02-09 06:49, Adam Borowski wrote: On Wed, Feb 08, 2017 at 02:21:13PM -0500, Austin S. Hemmelgarn wrote: - maybe deduplication (cyrus does it by hardlinking of same content messages now) later Deduplication beyond what Cyrus does is probably not worth it. In most cases about 10
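The Cyrus-style deduplication mentioned here (hardlinking byte-identical messages) fits in a few lines. A sketch under the assumption of identifying duplicates by sha256 of the whole file; the directory layout and file names are invented for the demo, and a production tool would verify contents rather than trust the hash alone.

```python
import hashlib
import os
import tempfile

def hardlink_duplicates(directory):
    """Replace byte-identical files with hardlinks to a single copy,
    the way Cyrus deduplicates identical messages. Returns the number
    of files that were replaced by links."""
    seen = {}     # content digest -> path of the first copy
    linked = 0
    for name in sorted(os.listdir(directory)):
        path = os.path.join(directory, name)
        with open(path, "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()
        if digest in seen:
            os.unlink(path)
            os.link(seen[digest], path)   # same inode, zero extra data
            linked += 1
        else:
            seen[digest] = path
    return linked

d = tempfile.mkdtemp()
for name in ("a.eml", "b.eml", "c.eml"):
    with open(os.path.join(d, name), "w") as f:
        f.write("same message body\n" if name != "c.eml" else "different\n")
print(hardlink_duplicates(d))                        # 1
print(os.stat(os.path.join(d, "a.eml")).st_nlink)    # 2
```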

Re: understanding disk space usage

2017-02-09 Thread Austin S. Hemmelgarn
On 2017-02-08 16:45, Peter Grandi wrote: [ ... ] The issue isn't total size, it's the difference between total size and the amount of data you want to store on it. and how well you manage chunk usage. If you're balancing regularly to compact chunks that are less than 50% full, [ ... ] BTRFS on

Re: understanding disk space usage

2017-02-08 Thread Austin S. Hemmelgarn
On 2017-02-08 09:46, Peter Grandi wrote: My system is or seems to be running out of disk space but I can't find out how or why. [ ... ] Filesystem Size Used Avail Use% Mounted on /dev/sda3 28G 26G 2.1G 93% / [ ... ] So from chunk level, your fs is already full.

Re: BTRFS and cyrus mail server

2017-02-08 Thread Austin S. Hemmelgarn
On 2017-02-08 13:38, Libor Klepáč wrote: Hello, inspired by recent discussion on BTRFS vs. databases i wanted to ask on suitability of BTRFS for hosting a Cyrus imap server spool. I haven't found any recent article on this topic. I'm preparing migration of our mailserver to Debian Stretch, ie.

Re: raid1: cannot add disk to replace faulty because can only mount fs as read-only.

2017-02-08 Thread Austin S. Hemmelgarn
On 2017-02-08 08:46, Tomasz Torcz wrote: On Wed, Feb 08, 2017 at 07:50:22AM -0500, Austin S. Hemmelgarn wrote: It is exponentially safer in BTRFS to run single data single metadata than half raid1 data half raid1 metadata. Why? To convert to profiles _designed_ for a single device

Re: BTRFS for OLTP Databases

2017-02-08 Thread Austin S. Hemmelgarn
On 2017-02-08 08:26, Martin Raiber wrote: On 08.02.2017 14:08 Austin S. Hemmelgarn wrote: On 2017-02-08 07:14, Martin Raiber wrote: Hi, On 08.02.2017 03:11 Peter Zaitsev wrote: Out of curiosity, I see one problem here: If you're doing snapshots of the live database, each snapshot leaves

Re: user_subvol_rm_allowed? Is there a user_subvol_create_deny|allowed?

2017-02-08 Thread Austin S. Hemmelgarn
On 2017-02-07 20:49, Nicholas D Steeves wrote: Dear btrfs community, Please accept my apologies in advance if I missed something in recent btrfs development; my MUA tells me I'm ~1500 unread messages out-of-date. :/ I recently read about "mount -t btrfs -o user_subvol_rm_allowed" while doing
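For context, user_subvol_rm_allowed is a mount option, so the usual place for it is fstab rather than the mount command line each time; the UUID and mountpoint below are placeholders, not values from the thread.

```
# /etc/fstab: allow unprivileged users to delete subvolumes they own
UUID=<fs-uuid>  /home  btrfs  defaults,user_subvol_rm_allowed  0  0
```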

Re: BTRFS for OLTP Databases

2017-02-08 Thread Austin S. Hemmelgarn
On 2017-02-08 07:14, Martin Raiber wrote: Hi, On 08.02.2017 03:11 Peter Zaitsev wrote: Out of curiosity, I see one problem here: If you're doing snapshots of the live database, each snapshot leaves the database files like killing the database in-flight. Like shutting the system down in the

Re: dup vs raid1 in single disk

2017-02-08 Thread Austin S. Hemmelgarn
On 2017-02-07 17:28, Kai Krakow wrote: Am Thu, 19 Jan 2017 15:02:14 -0500 schrieb "Austin S. Hemmelgarn" <ahferro...@gmail.com>: On 2017-01-19 13:23, Roman Mamedov wrote: On Thu, 19 Jan 2017 17:39:37 +0100 "Alejandro R. Mosteo" <alejan...@mosteo.com> wrote:

Re: raid1: cannot add disk to replace faulty because can only mount fs as read-only.

2017-02-08 Thread Austin S. Hemmelgarn
On 2017-02-07 22:21, Hans Deragon wrote: Greetings, On 2017-02-02 10:06, Austin S. Hemmelgarn wrote: On 2017-02-02 09:25, Adam Borowski wrote: On Thu, Feb 02, 2017 at 07:49:50AM -0500, Austin S. Hemmelgarn wrote: This is a severe bug that makes a not all that uncommon (albeit bad) use case
