[Mount time bug bounty?] was: BTRFS Mount Delay Time Graph

2018-12-04 Thread Lionel Bouton
On 03/12/2018 23:22, Hans van Kranenburg wrote: > [...] > Yes, I think that's true. See btrfs_read_block_groups in extent-tree.c: > > https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/fs/btrfs/extent-tree.c#n9982 > > What the code is doing here is starting at the [...]

Re: BTRFS Mount Delay Time Graph

2018-12-04 Thread Lionel Bouton
On 04/12/2018 03:52, Chris Murphy wrote: > On Mon, Dec 3, 2018 at 1:04 PM Lionel Bouton > wrote: >> On 03/12/2018 20:56, Lionel Bouton wrote: >>> [...] >>> Note: recently I tried upgrading from 4.9 to 4.14 kernels, various >>> tuning of the [...]

Re: BTRFS Mount Delay Time Graph

2018-12-03 Thread Lionel Bouton
On 03/12/2018 20:56, Lionel Bouton wrote: > [...] > Note: recently I tried upgrading from 4.9 to 4.14 kernels, various > tuning of the io queue (switching between classic io-schedulers and > blk-mq ones in the virtual machines) and BTRFS mount options > (space_cache [...]

Re: BTRFS Mount Delay Time Graph

2018-12-03 Thread Lionel Bouton
Hi, On 03/12/2018 19:20, Wilson, Ellis wrote: > Hi all, > > Many months ago I promised to graph how long it took to mount a BTRFS > filesystem as it grows. I finally had (made) time for this, and the > attached is the result of my testing. The image is a fairly > self-explanatory graph, [...]
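
A rough way to reproduce this kind of measurement (a sketch only; /dev/sdb and /mnt/test are placeholders, not taken from the thread):

  sync; echo 3 > /proc/sys/vm/drop_caches   # drop caches so mount re-reads metadata from disk
  time mount /dev/sdb /mnt/test             # the 'real' time is the mount delay being graphed
  umount /mnt/test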

Re: So, does btrfs check lowmem take days? weeks?

2018-06-29 Thread Lionel Bouton
Hi, On 29/06/2018 09:22, Marc MERLIN wrote: > On Fri, Jun 29, 2018 at 12:09:54PM +0500, Roman Mamedov wrote: >> On Thu, 28 Jun 2018 23:59:03 -0700 >> Marc MERLIN wrote: >> >>> I don't waste a week recreating the many btrfs send/receive relationships. >> Consider not using send/receive, and [...]

Re: Kernel 4.14 RAID5 multi disk array on bcache not mounting

2017-11-21 Thread Lionel Bouton
On 21/11/2017 23:04, Andy Leadbetter wrote: > I have a 4 disk array on top of a 120GB bcache setup, arranged as follows [...] > Upgraded today to 4.14.1 from their PPA and the 4.14 and 4.14.1 have a nasty bug affecting bcache users. See for example: [...]

Re: [PATCH v2 3/4] btrfs: Add zstd support

2017-07-06 Thread Lionel Bouton
On 06/07/2017 13:59, Austin S. Hemmelgarn wrote: > On 2017-07-05 20:25, Nick Terrell wrote: >> On 7/5/17, 12:57 PM, "Austin S. Hemmelgarn" >> wrote: >>> It's the slower compression speed that has me arguing for the >>> possibility of configurable levels on zlib. 11MB/s [...]

Re: Btrfs Compression

2017-07-06 Thread Lionel Bouton
On 06/07/2017 13:51, Austin S. Hemmelgarn wrote: > > Additionally, when you're referring to extent size, I assume you mean > the huge number of 128k extents that the FIEMAP ioctl (and at least > older versions of `filefrag`) shows for compressed files? If that's > the case, then it's [...]
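
To see what FIEMAP reports for a compressed file, something along these lines (a sketch; the path is a placeholder):

  filefrag -v /mnt/data/somefile   # lists each extent with its logical/physical offsets
  # older filefrag versions count every ~128KiB compressed extent separately,
  # which inflates the apparent fragmentation of compressed files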

Re: [PATCH 2/3] Btrfs: lzo compression must free at least PAGE_SIZE

2017-05-20 Thread Lionel Bouton
On 19/05/2017 23:15, Timofey Titovets wrote: > 2017-05-19 23:19 GMT+03:00 Lionel Bouton > <lionel-subscript...@bouton.name>: >> I was too focused on other problems and, having a fresh look at what I >> wrote, I'm embarrassed by what I read. Used pages for a given [...]

Re: [PATCH 2/3] Btrfs: lzo compression must free at least PAGE_SIZE

2017-05-19 Thread Lionel Bouton
On 19/05/2017 16:17, Lionel Bouton wrote: > Hi, > > On 19/05/2017 15:38, Timofey Titovets wrote: >> If data compression didn't free at least one PAGE_SIZE, it is useless to store >> that compressed extent >> >> Signed-off-by: Timofey Titovets <nefel [...]

Re: [PATCH 2/3] Btrfs: lzo compression must free at least PAGE_SIZE

2017-05-19 Thread Lionel Bouton
Hi, On 19/05/2017 15:38, Timofey Titovets wrote: > If data compression didn't free at least one PAGE_SIZE, it is useless to store > that compressed extent > > Signed-off-by: Timofey Titovets > --- > fs/btrfs/lzo.c | 2 +- > 1 file changed, 1 insertion(+), 1 deletion(-)

Re: balancing every night broke balancing so now I can't balance anymore?

2017-05-15 Thread Lionel Bouton
On 15/05/2017 10:14, Hugo Mills wrote: > [...] >> As for limit= I'm not sure if it would be helpful since I run this >> nightly. Anything that doesn't get done tonight due to limit would be >> done tomorrow? > I'm suggesting limit= on its own. It's a fixed amount of work > compared to [...]
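
For reference, the limit filter can be used on its own or combined with usage (a sketch; mount point and values are placeholders, and the filter needs a reasonably recent kernel and btrfs-progs):

  btrfs balance start -dusage=50,limit=10 /mnt   # relocate at most 10 data chunks that are at most 50% used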

Re: balancing every night broke balancing so now I can't balance anymore?

2017-05-14 Thread Lionel Bouton
On 14/05/2017 23:30, Kai Krakow wrote: > On Sun, 14 May 2017 22:57:26 +0200, > Lionel Bouton <lionel-subscript...@bouton.name> wrote: > >> I've coded a Ruby script which tries to balance between the cost of >> reallocating a group and the need for it. [...] >>

Re: balancing every night broke balancing so now I can't balance anymore?

2017-05-14 Thread Lionel Bouton
On 14/05/2017 22:15, Marc MERLIN wrote: > On Sun, May 14, 2017 at 09:13:35PM +0200, Hans van Kranenburg wrote: >> On 05/13/2017 10:54 PM, Marc MERLIN wrote: >>> Kernel 4.11, btrfs-progs v4.7.3 >>> >>> I run scrub and balance every night, and have been doing this for 1.5 years on this >>> filesystem. [...]

Re: help : "bad tree block start" -> btrfs forced readonly

2017-03-17 Thread Lionel Bouton
Hi, some news from the coal mine... On 17/03/2017 11:03, Lionel Bouton wrote: > [...] > I'm considering trying to use a 4-week-old snapshot of the device to > find out if it was corrupted or not instead. It will still be a pain if > it works, but rsync for less than a m[...]

Re: help : "bad tree block start" -> btrfs forced readonly

2017-03-17 Thread Lionel Bouton
On 17/03/2017 10:51, Roman Mamedov wrote: > On Fri, 17 Mar 2017 10:27:11 +0100 > Lionel Bouton <lionel-subscript...@bouton.name> wrote: > >> Hi, >> >> On 17/03/2017 09:43, Hans van Kranenburg wrote: >>> btrfs-debug-tree -b 3415463870464 [...]

Re: help : "bad tree block start" -> btrfs forced readonly

2017-03-17 Thread Lionel Bouton
Hi, On 17/03/2017 09:43, Hans van Kranenburg wrote: > btrfs-debug-tree -b 3415463870464 Here is what it gives me back: btrfs-debug-tree -b 3415463870464 /dev/sdb btrfs-progs v4.6.1 checksum verify failed on 3415463870464 found A85405B7 wanted 01010101 checksum verify failed on [...]

Re: help : "bad tree block start" -> btrfs forced readonly

2017-03-17 Thread Lionel Bouton
On 17/03/2017 05:32, Lionel Bouton wrote: > Hi, > > [...] > I'll catch some sleep right now (it's 5:28 AM here) but I'll be able to > work on this in 3 or 4 hours. I woke up to this: Mar 17 06:56:30 fileserver kernel: btree_readpage_end_io_hook: 104476 callbacks suppressed [...]

help : "bad tree block start" -> btrfs forced readonly

2017-03-16 Thread Lionel Bouton
Hi, our largest BTRFS filesystem is damaged but I'm unclear if it is recoverable or not. This is a 20TB filesystem with ~13TB used in a virtual machine using virtio-scsi backed by Ceph (Firefly 0.8.10). The following messages have become more frequent: fileserver kernel: sd 0:0:1:0: [sdb] tag#[...]

Re: BTRFS for OLTP Databases

2017-02-07 Thread Lionel Bouton
On 07/02/2017 21:47, Austin S. Hemmelgarn wrote: > On 2017-02-07 15:36, Kai Krakow wrote: >> On Tue, 7 Feb 2017 09:13:25 -0500, Peter Zaitsev wrote: >> >>> Hi Hugo, >>> >>> For the use case I'm looking for I'm interested in having snapshot(s) >>> open at all times. [...]

Re: BTRFS for OLTP Databases

2017-02-07 Thread Lionel Bouton
On 07/02/2017 21:36, Kai Krakow wrote: > [...] > I think I've read that btrfs snapshots do not guarantee single point in > time snapshots - the snapshot may be smeared across a longer period of > time while the kernel is still writing data. So parts of your writes > may still end up in the [...]

Re: BTRFS for OLTP Databases

2017-02-07 Thread Lionel Bouton
Hi Peter, On 07/02/2017 15:13, Peter Zaitsev wrote: > Hi Hugo, > > For the use case I'm looking for I'm interested in having snapshot(s) > open at all times. Imagine for example a snapshot being created every > hour and several of these snapshots kept at all times, providing quick > recovery [...]
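
One mitigation that commonly comes up for databases on BTRFS (an illustration, not a recommendation from this thread; the path is a placeholder) is disabling CoW for the database directory. Note the flag only affects files created after it is set, and nodatacow also disables checksumming and compression:

  mkdir -p /srv/db
  chattr +C /srv/db    # new files created below inherit the nodatacow attribute
  lsattr -d /srv/db    # verify the 'C' flag is set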

Re: missing checksums on reboot

2016-12-02 Thread Lionel Bouton
Hi, On 02/12/2016 20:07, Blake Lewis wrote: > Hi, all, this is my first posting to the mailing list. I am a > long-time file system guy who is just starting to take a serious > interest in btrfs. > > My company's product uses btrfs for its backing storage. We > maintain a log file to let [...]

Re: Convert from RAID 5 to 10

2016-11-29 Thread Lionel Bouton
Hi, On 29/11/2016 18:20, Florian Lindner wrote: > [...] > > * Any other advice? ;-) Don't rely on RAID too much... The degraded mode is unstable even for RAID10: you can corrupt data simply by writing to a degraded RAID10. I could reliably reproduce this on a 6-device RAID10 BTRFS [...]
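
The conversion itself is a balance with convert filters (a sketch, assuming the filesystem is mounted at /mnt and has enough devices and free space for RAID10):

  btrfs balance start -dconvert=raid10 -mconvert=raid10 /mnt
  btrfs fi df /mnt    # check the data/metadata profiles once the balance completes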

replace panic solved with add/balance/delete was: Compression and device replace on raid10 kernel panic on 4.4.6 and 4.6.x

2016-11-12 Thread Lionel Bouton
Hi, here's how I managed to recover from a BTRFS replace panic which happened even on 4.8.4. The kernel didn't seem to handle our raid10 filesystem with a missing device correctly (even though it passed a precautionary scrub before removing the device): - replace didn't work and triggered a [...]
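
The add/balance/delete sequence from the subject line, sketched as commands (device names and mount point are placeholders):

  mount -o degraded /dev/sdb /mnt     # bring the filesystem up without the failed device
  btrfs device add /dev/sdg /mnt      # add the replacement disk
  btrfs balance start /mnt            # spread replicas across all devices
  btrfs device delete missing /mnt    # finally drop the failed device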

Re: Compression and device replace on raid10 kernel panic on 4.4.6 and 4.6.x

2016-10-28 Thread Lionel Bouton
[...] the problem still made the kernel panic. Unless someone comes up with a somewhat safe way to recover from this situation I'll leave the filesystem as is (we are building a new platform where redundancy will be handled by Ceph anyway). Lionel On 27/10/2016 18:07, Lionel Bouton wrote: > Hi, > > [...]

Re: Compression and device replace on raid10 kernel panic on 4.4.6 and 4.6.x

2016-10-27 Thread Lionel Bouton
Hi, On 27/10/2016 02:50, Lionel Bouton wrote: > [...] > I'll stop for tonight and see what happens during the day. I'd like to > try a device add / delete next but I'm worried I could end up with a > completely unusable filesystem if the device delete hits the same > problem [...]

Re: Compression and device replace on raid10 kernel panic on 4.4.6 and 4.6.x

2016-10-26 Thread Lionel Bouton
Hi, On 27/10/2016 01:54, Lionel Bouton wrote: > > I'll post the final result of the btrfs replace later (it's currently at > 5.6% after 45 minutes). Result: kernel panic (so 4.8.4 didn't solve my main problem). Unfortunately I don't have a remote KVM anymore so I couldn't capture [...]

Re: Compression and device replace on raid10 kernel panic on 4.4.6 and 4.6.x

2016-10-26 Thread Lionel Bouton
Hi, On 26/10/2016 02:57, Lionel Bouton wrote: > Hi, > > I'm currently trying to recover from a disk failure on a 6-drive Btrfs > RAID10 filesystem. A "mount -o degraded" auto-resumes a current > btrfs-replace from a missing dev to a new disk. This eventually [...]

Compression and device replace on raid10 kernel panic on 4.4.6 and 4.6.x

2016-10-25 Thread Lionel Bouton
Hi, I'm currently trying to recover from a disk failure on a 6-drive Btrfs RAID10 filesystem. A "mount -o degraded" auto-resumes a current btrfs-replace from a missing dev to a new disk. This eventually triggers a kernel panic (and the panic seemed to come faster on each new boot). I managed to cancel [...]

Re: Is stability a joke?

2016-09-12 Thread Lionel Bouton
Hi, On 12/09/2016 14:59, Michel Bouissou wrote: > [...] > I never had problems with lzo compression, although I suspect that it (in > conjunction with snapshots) adds much fragmentation that may relate to the > extremely bad performance I get over time with mechanical HDs. I had about 30 btrfs [...]
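
For context, lzo compression is a mount option (an illustrative line only; device and mount point are placeholders):

  mount -o compress=lzo,noatime /dev/sdb /mnt
  # autodefrag can help with the fragmentation mentioned above, at the cost of extra background writes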

Re: btrfstune settings

2016-08-28 Thread Lionel Bouton
Hi, happy borgbackup user here. This is probably off-topic for most, but as many users are probably evaluating send/receive versus other backup solutions, I'll keep linux-btrfs in the loop. On 28/08/2016 20:10, Oliver Freyermuth wrote: >> Try borgbackup, I'm using it very successfully. It is very [...]

Re: Is "btrfs balance start" truly asynchronous?

2016-06-21 Thread Lionel Bouton
On 21/06/2016 15:17, Graham Cobb wrote: > On 21/06/16 12:51, Austin S. Hemmelgarn wrote: >> The scrub design works, but the whole state file thing has some rather >> irritating side effects and other implications, and developed out of >> requirements that aren't present for balance (it might be [...]

Re: btrfs ate my data in just two days, after a fresh install. ram and disk are ok. it still mounts, but I cannot repair

2016-05-09 Thread Lionel Bouton
Hi, On 09/05/2016 16:53, Niccolò Belli wrote: > On Sunday 8 May 2016 20:27:55 CEST, Patrik Lundquist wrote: >> Are you using any power management tweaks? > > Yes, as stated in my very first post I use TLP with > SATA_LINKPWR_ON_BAT=max_performance, but I managed to reproduce the > bug [...]
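
The SATA link power policy that TLP manages can be inspected and changed directly through sysfs (a sketch; host numbers vary from machine to machine):

  cat /sys/class/scsi_host/host*/link_power_management_policy
  echo max_performance > /sys/class/scsi_host/host0/link_power_management_policy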

Re: RAID5 Unable to remove Failing HD

2016-04-19 Thread Lionel Bouton
Hi, On 19/04/2016 11:13, Anand Jain wrote: > >>> # btrfs device delete 3 /mnt/store/ >>> ERROR: device delete by id failed: Inappropriate ioctl for device >>> >>> Were the patch sets above for btrfs-progs or for the kernel? >> [...] > > By the way, for Lionel's issue, delete missing should [...]

Re: RAID5 Unable to remove Failing HD

2016-04-18 Thread Lionel Bouton
On 18/04/2016 10:59, Lionel Bouton wrote: > [...] > So the obvious thing to do in this circumstance is to delete the drive, > forcing the filesystem to create the missing replicas in the process, and > only reboot if needed (no hotplug). Unfortunately I'm not sure of the > c[...]

Re: RAID5 Unable to remove Failing HD

2016-04-18 Thread Lionel Bouton
Hi, On 10/02/2016 10:00, Anand Jain wrote: > > > Rene, > > Thanks for the report. Fixes are in the following patch sets > > concern1: > Btrfs to fail/offline a device for write/flush error: > [PATCH 00/15] btrfs: Hot spare and Auto replace > > concern2: > User should be able to delete a [...]

Re: "/tmp/mnt.", and not honouring compression

2016-03-31 Thread Lionel Bouton
On 31/03/2016 22:49, Chris Murray wrote: > Hi, > > I'm trying to troubleshoot a ceph cluster which doesn't seem to be > honouring BTRFS compression on some OSDs. Can anyone offer some help? Is > it likely to be a ceph issue or a BTRFS one? Or something else? I've > asked on ceph-users already, [...]
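
A quick first check is whether the compress option is actually active on the mounted OSD filesystems (a sketch):

  grep btrfs /proc/mounts   # the option list should show compress=lzo (or compress=zlib) for each OSD mount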

Re: btrfs raid1 filesystem on sdcard corrupted

2016-02-25 Thread Lionel Bouton
Hi, On 25/02/2016 18:44, Hegner Robert wrote: > On 25.02.2016 at 18:34, Hegner Robert wrote: >> Hi all! >> >> I'm working on an embedded system (ARM) running from an SD card. From experience, most SD cards are not to be trusted. They are not designed for storing an operating system and [...]

Re: Major HDD performance degradation on btrfs receive

2016-02-23 Thread Lionel Bouton
On 23/02/2016 19:30, Marc MERLIN wrote: > On Tue, Feb 23, 2016 at 07:01:52PM +0100, Lionel Bouton wrote: >> Why don't you use autodefrag? If you have writable snapshots and do >> write to them heavily it would not be a good idea (depending on how >> BTRFS handles this in [...]

Re: Major HDD performance degradation on btrfs receive

2016-02-23 Thread Lionel Bouton
On 23/02/2016 18:34, Marc MERLIN wrote: > On Tue, Feb 23, 2016 at 09:26:35AM -0800, Marc MERLIN wrote: >> Label: 'dshelf2' uuid: d4a51178-c1e6-4219-95ab-5c5864695bfd >> Total devices 1 FS bytes used 4.25TiB >> devid 1 size 7.28TiB used 4.44TiB path /dev/mapper/dshelf2 >> >> [...]

Auto-rebalancing script

2016-02-14 Thread Lionel Bouton
Hi, I'm using this Ruby script to maintain my BTRFS filesystems and try to avoid them getting into a position where they can't allocate space even though there is still plenty of it. http://pastebin.com/39567Dun It seems to work well (it maintains dozens of BTRFS filesystems, running balance on [...]
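
Done by hand, the script's job boils down to periodic filtered balances, roughly like this (a sketch only; thresholds and mount point are placeholders, not the script's actual logic):

  btrfs balance start -dusage=20 -musage=20 /mnt   # only relocate chunks that are at most 20% used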

Re: Fi corruption on RAID1, generation doesn't match

2016-02-07 Thread Lionel Bouton
Hi, On 07/02/2016 14:15, Andreas Hild wrote: > Dear All, > > The file system on a RAID1 Debian server seems corrupted in a major > way, with 99% of the files not found. This was the result of a > precarious shutdown after a crash that was preceded by an accidental > misconfiguration in [...]

Re: device removal seems to be very slow (kernel 4.1.15)

2016-01-05 Thread Lionel Bouton
On 05/01/2016 14:04, David Goodwin wrote: > Using btrfs-progs 4.3.1 on a vanilla kernel.org 4.1.15 kernel. > > time btrfs device delete /dev/xvdh /backups > > real 13936m56.796s > user 0m0.000s > sys 1351m48.280s > > (which is about 9 days). > > Where: > > /dev/xvdh was 120GB in [...]

Re: btrfs: poor performance on deleting many large files

2015-12-14 Thread Lionel Bouton
On 15/12/2015 02:49, Duncan wrote: > Christoph Anton Mitterer posted on Tue, 15 Dec 2015 00:25:05 +0100 as > excerpted: > >> On Mon, 2015-12-14 at 22:30 +0100, Lionel Bouton wrote: >> >>> I use noatime and nodiratime >> FYI: noatime implies nodiratime
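
So an fstab entry only needs the former (an illustrative line; device, mount point and the extra options are placeholders):

  /dev/sdb  /data  btrfs  noatime,compress=lzo  0  0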

Re: btrfs: poor performance on deleting many large files

2015-12-14 Thread Lionel Bouton
On 14/12/2015 21:27, Austin S. Hemmelgarn wrote: > AFAIUI, the _only_ reason that that is still the default is because of > Mutt, and that won't change as long as some of the kernel developers > are using Mutt for e-mail and the Mutt developers don't realize that > what they are doing is [...]

Re: Scrub: no space left on device

2015-12-08 Thread Lionel Bouton
On 08/12/2015 16:06, Marc MERLIN wrote: > Howdy, > > Why would scrub need space and why would it cancel if there isn't enough of > it? > (kernel 4.3) > > /etc/cron.daily/btrfs-scrub: > btrfs scrub start -Bd /dev/mapper/cryptroot > scrub device /dev/mapper/cryptroot (id 1) done > scrub [...]
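
To check how much space is allocated versus actually used when chasing this kind of ENOSPC report (a sketch; the mount point is a placeholder):

  btrfs fi show       # per-device allocation
  btrfs fi df /mnt    # per-profile used vs. total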

Re: Scrub: no space left on device

2015-12-08 Thread Lionel Bouton
On 08/12/2015 16:37, Holger Hoffstätte wrote: > On 12/08/15 16:06, Marc MERLIN wrote: >> Howdy, >> >> Why would scrub need space and why would it cancel if there isn't enough of >> it? >> (kernel 4.3) >> >> /etc/cron.daily/btrfs-scrub: >> btrfs scrub start -Bd /dev/mapper/cryptroot >> scrub [...]

Re: RAID6 stable enough for production?

2015-10-14 Thread Lionel Bouton
On 14/10/2015 22:23, Donald Pearson wrote: > I would not use Raid56 in production. I've tried using it a few > different ways but have run into trouble with stability and > performance. Raid10 has been working excellently for me. Hi, could you elaborate on the stability and performance [...]

Re: RAID6 stable enough for production?

2015-10-14 Thread Lionel Bouton
On 14/10/2015 22:53, Donald Pearson wrote: > I've used it from 3.8-something to current; it does not handle drive > failure well at all, which is the point of parity raid. I had a 10-disk > Raid6 array on 4.1.1 and a drive failure put the filesystem in an > irrecoverable state. Scrub speeds are [...]

Re: btrfs says no errors, but booting gives lots of errors

2015-10-10 Thread Lionel Bouton
On 10/10/2015 16:41, cov...@ccs.covici.com wrote: > Holger Hoffstätte wrote: > >> On 10/10/15 14:46, cov...@ccs.covici.com wrote: >>> Hi. I am having lots of btrfs troubles -- I am using a 4.1.9 kernel >> Just FYI, both 4.1.9 and .10 have serious [...]

Re: btrfs says no errors, but booting gives lots of errors

2015-10-10 Thread Lionel Bouton
On 11/10/2015 01:32, cov...@ccs.covici.com wrote: > [...] > I don't know if the file in question had the correct data, I only did a > directory listing, but this makes no sense -- I did an rsync just before > booting and got all kinds of errors and the only difference is the file > system, this [...]

Re: btrfs says no errors, but booting gives lots of errors

2015-10-10 Thread Lionel Bouton
On 10/10/2015 18:55, cov...@ccs.covici.com wrote: > [...] > But do you folks have any idea about my original question? This leads me > to think that btrfs is too new or something. I've seen a recent report of a problem with btrfs-progs 4.2 confirmed as a bug in mkfs. As you created the [...]

Re: btrfs says no errors, but booting gives lots of errors

2015-10-10 Thread Lionel Bouton
On 11/10/2015 01:02, cov...@ccs.covici.com wrote: > Lionel Bouton <lionel+c...@bouton.name> wrote: > >> On 10/10/2015 18:55, cov...@ccs.covici.com wrote: >>> [...] >>> But do you folks have any idea about my original question? This leads me >>> [...]

Re: BTRFS as image store for KVM?

2015-10-05 Thread Lionel Bouton
Hi, On 04/10/2015 14:03, Lionel Bouton wrote: > [...] > This focus on single-reader RAID1 performance surprises me. > > 1/ AFAIK the kernel md RAID1 code behaves the same (last time I checked > you need 2 processes to read from 2 devices at once) and I've never seen [...]

Re: BTRFS as image store for KVM?

2015-10-04 Thread Lionel Bouton
Hi, On 04/10/2015 04:09, Duncan wrote: > Russell Coker posted on Sat, 03 Oct 2015 18:32:17 +1000 as excerpted: > >> Last time I checked a BTRFS RAID-1 filesystem would assign each process >> to read from one disk based on its PID. Every RAID-1 implementation >> that has any sort of [...]
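
The even/odd PID selection can be observed with two concurrent readers (a sketch; file paths are placeholders, and whether both mirrors get used depends on the two PIDs having different parity):

  dd if=/mnt/big1 of=/dev/null bs=1M &
  dd if=/mnt/big2 of=/dev/null bs=1M &
  wait    # watch per-device traffic with e.g. iostat -x 1 while this runs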

Re: btrfs fi defrag interfering (maybe) with Ceph OSD operation

2015-09-29 Thread Lionel Bouton
On 27/09/2015 17:34, Lionel Bouton wrote: > [...] > It's not clear to me that "btrfs fi defrag <file>" can't interfere with > another process trying to use the file. I assume basic reading and > writing is OK but there might be restrictions on unlinking/locking/using > other [...]

Re: btrfs fi defrag interfering (maybe) with Ceph OSD operation

2015-09-29 Thread Lionel Bouton
On 29/09/2015 16:49, Lionel Bouton wrote: > On 27/09/2015 17:34, Lionel Bouton wrote: >> [...] >> It's not clear to me that "btrfs fi defrag <file>" can't interfere with >> another process trying to use the file. I assume basic reading and >> writing [...]

Re: btrfs fi defrag interfering (maybe) with Ceph OSD operation

2015-09-28 Thread Lionel Bouton
On 28/09/2015 22:52, Duncan wrote: > Lionel Bouton posted on Mon, 28 Sep 2015 11:55:15 +0200 as excerpted: > >> From what I understood, filefrag doesn't know the length of each extent >> on disk but should have its position. This is enough to have a rough >> estimation [...]

Re: btrfs fi defrag interfering (maybe) with Ceph OSD operation

2015-09-28 Thread Lionel Bouton
[...] conclusions on your own). In fact I was initially aware of the (no)CoW/defragmentation/snapshot performance gotchas (I had already used BTRFS for hosting PostgreSQL slaves, for example...). But Ceph is filesystem-aware: its OSDs detect whether they are running on XFS/BTRFS and automatically activate some filesystem features. So even though I was aware of the problems that can happen on a CoW filesystem, I preferred to do actual testing with the default Ceph settings and filesystem mount options before tuning. Best regards, Lionel Bouton

btrfs fi defrag interfering (maybe) with Ceph OSD operation

2015-09-27 Thread Lionel Bouton
[...] 4.0.5 (or better, if we have the time to test a more recent kernel before rebooting: 4.1.8 and 4.2.1 are our candidates for testing right now). Best regards, Lionel Bouton