state of btrfs snapshot limitations?

2018-09-19 Thread Piotr Pawłow
Hello, > If the limit is 100 or less I'd need to use a more complicated > rotation scheme. If you just want to thin them out over time without having selected "special" monthly, yearly, etc. snapshots, then my favorite scheme is to just compare the age of a snapshot to the distance to its neighbours, …
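A minimal sketch of that rule (Python and the `ratio` knob are illustrative additions, not from the original post): a snapshot is dropped when the gap between its two neighbours is small compared to its age, so snapshot density thins out as you go back in time.

    def to_delete(snapshots, now, ratio=0.5):
        """snapshots: sorted list of datetimes, oldest first.
        Returns the snapshots that are redundant: their neighbours are
        close together relative to how old the snapshot is."""
        keep = list(snapshots)
        pruned = True
        while pruned:
            pruned = False
            for i in range(1, len(keep) - 1):   # never drop oldest or newest
                age = now - keep[i]
                gap = keep[i + 1] - keep[i - 1]
                if gap < age * ratio:
                    del keep[i]                 # neighbours cover this point
                    pruned = True
                    break
        return [s for s in snapshots if s not in keep]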

Re: Raid1 volume stuck as read-only: How to dump, recreate and restore its content?

2018-03-13 Thread Piotr Pawłow
Hello, > Put differently, 4.7 is missing a year and a half's worth of bugfixes that you > won't have when you run it to try to check or recover that btrfs that won't > mount! Do you *really* want to risk your data on bugs that were after all > discovered and fixed over a year ago? It is also mis…

Re: btrfs-image hash collision option, super slow

2017-11-13 Thread Piotr Pawłow
On 13.11.2017 at 04:42, Chris Murphy wrote: > Strange. I was using 4.3.3 and it had been running for over 9 hours at > the time I finally cancelled it. If you're compiling from source, the usual advice would be to "make clean" and make sure you're using the correct executable. If your fs is v…

Re: btrfs-image hash collision option, super slow

2017-11-12 Thread Piotr Pawłow
>> It could definitely be improved -- I believe there are some good >> (but non-trivial) algorithms for finding preimages for CRC32 checksums >> out there. It's just that btrfs-image doesn't use them. I implemented a faster method using a reverse CRC32 table, which is in btrfs-progs since rel…
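The reverse-table trick can be shown standalone (this is not the btrfs-progs code, and btrfs actually uses crc32c with a different polynomial; this sketch forges a 4-byte suffix for zlib's plain CRC-32): since every entry of the 256-entry CRC table has a unique top byte, each step of the CRC register update can be inverted.

    import zlib

    # standard reflected CRC-32 table (polynomial 0xEDB88320)
    TABLE = []
    for n in range(256):
        c = n
        for _ in range(8):
            c = (c >> 1) ^ 0xEDB88320 if c & 1 else c >> 1
        TABLE.append(c)

    # each table entry's top byte is unique, so it maps back to its index
    REV = {TABLE[i] >> 24: i for i in range(256)}

    def forge_suffix(data, target):
        """Return 4 bytes which, appended to data, make zlib.crc32 == target."""
        reg = zlib.crc32(data) ^ 0xFFFFFFFF    # undo zlib's final inversion
        want = target ^ 0xFFFFFFFF
        idxs = []
        cur = want
        for _ in range(4):                     # walk the register backwards
            i = REV[cur >> 24]
            idxs.append(i)
            cur = ((cur ^ TABLE[i]) << 8) & 0xFFFFFFFF
        out = bytearray()
        for i in reversed(idxs):               # forwards, forcing each index
            out.append((reg ^ i) & 0xFF)
            reg = (reg >> 8) ^ TABLE[i]
        return bytes(out)

    suffix = forge_suffix(b"example", 0xDEADBEEF)
    assert zlib.crc32(b"example" + suffix) == 0xDEADBEEF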

Re: joining to contribute

2017-09-03 Thread Piotr Pawłow
Hello, > > Alongside this, there's also a requirement for being able to do > round-trip send/receive while preserving the ability to do incremental > sends. This is likely to be related to the above bug-fix. I did a > complete write-up of what's happening, and what needs to happen, here: > > htt…

Re: kernel BUG at /build/linux-H5UzH8/linux-4.10.0/fs/btrfs/extent_io.c:2318

2017-08-11 Thread Piotr Pawłow
Hello, > So 4.10 isn't /too/ far out of range yet, but I'd strongly consider > upgrading (or downgrading to 4.9 LTS) as soon as it's reasonably > convenient, before 4.13 in any case. Unless you prefer to go the > distro support route, of course. I used to stick to the latest kernels back when btrf…

kernel BUG at /build/linux-H5UzH8/linux-4.10.0/fs/btrfs/extent_io.c:2318

2017-08-07 Thread Piotr Pawłow
Hello, my btrfs raid1 fs with 4 drives crashed after only one drive developed a couple of bad blocks. Seems similar to https://bugzilla.kernel.org/show_bug.cgi?id=196251 but it happened during normal usage instead of during replace. As a side note, I tried to make the system less noisy during the…

Re: Shrinking a device - performance?

2017-03-30 Thread Piotr Pawłow
> The proposed "move whole chunks" implementation helps only if > there are enough unallocated chunks "below the line". If regular > 'balance' is done on the filesystem there will be some, but that > just spreads the cost of the 'balance' across time, it does not > by itself make a «risky, difficul

Re: Shrinking a device - performance?

2017-03-30 Thread Piotr Pawłow
> As a general consideration, shrinking a large filetree online > in-place is an amazingly risky, difficult, slow operation and > should be a last desperate resort (as apparently in this case), > regardless of the filesystem type, and expecting otherwise is > "optimistic". The way btrfs is designe…

Re: Send/receive snapshot from/between backup

2016-11-17 Thread Piotr Pawłow
On 16.11.2016 21:32, René Bühlmann wrote: > >> 1. Change received UUID of S1 on SSH to match S1 UUID on USB. >> [...] > 2. Full transfer S1 from SSH to SSH' > > 3. Incremental transfer S2 from USB to SSH' (S1 as parent) > > 4. Incremental transfer S3 from Origin to SSH' (S2 as parent) > > [..] > St…

Re: Send/receive snapshot from/between backup

2016-11-02 Thread Piotr Pawłow
On 02.11.2016 15:23, René Bühlmann wrote: > Origin: S2 S3 > > USB: S1 S2 > > SSH: S1 > > Transferring S3 to USB is no problem as S2 is on both btrfs drives. But > how can I transfer S3 to SSH? If I understand correctly how send/receive works, for the incremental receive to work there must be a s…
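For reference, the shared parent is named explicitly on the sending side with -p; a sketch with made-up paths and hostname (to get S3 onto a host that only has S1, S2 must be sent first with S1 as parent, and the parent must already exist on the receiving filesystem with a matching received UUID):

    import subprocess

    # step 1 of the chain: send S2 using S1 as the common parent
    send = subprocess.Popen(
        ["btrfs", "send", "-p", "/pool/snaps/S1", "/pool/snaps/S2"],
        stdout=subprocess.PIPE)
    subprocess.run(["ssh", "backuphost", "btrfs receive /backup/snaps"],
                   stdin=send.stdout, check=True)
    send.wait()
    # then repeat with -p /pool/snaps/S2 to send S3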

Re: Timeouts copying large files to a Samba server with Btrfs

2015-12-26 Thread Piotr Pawłow
Hello, My guess is that the Samba server process submits too many queued buffers at once to be written to disk, then blocks waiting for them, and the whole operation ends up taking so long that it doesn't get back to the client in time. I've seen something similar. I could reproduce it easi…
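One mitigation along those lines (an assumption on my part, not something from the original thread) is to cap the amount of dirty page cache so writeback happens in smaller, more frequent batches instead of one huge blocking flush:

    # values are purely illustrative; requires root
    with open("/proc/sys/vm/dirty_background_bytes", "w") as f:
        f.write("67108864")      # start background writeback at 64 MiB
    with open("/proc/sys/vm/dirty_bytes", "w") as f:
        f.write("268435456")     # hard-block writers at 256 MiB dirty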

Re: btrfs und lvm-cache?

2015-12-24 Thread Piotr Pawłow
On 24.12.2015 at 16:29, Neuer User wrote: On 24.12.2015 at 15:56, Piotr Pawłow wrote: Hello, - both hdd and ssd in one LVM VG - one LV on each hdd, containing a btrfs filesystem - both btrfs LVs configured as RAID1 - the single SSD used as an LVM cache device for both HDD LVs to speed up…

Re: btrfs und lvm-cache?

2015-12-24 Thread Piotr Pawłow
Hello, - both hdd and ssd in one LVM VG - one LV on each hdd, containing a btrfs filesystem - both btrfs LVs configured as RAID1 - the single SSD used as an LVM cache device for both HDD LVs to speed up random access, where possible I have a setup like this for my /home. It works but it's a crapp…
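Roughly, the layout above could be assembled like this (VG/PV names, sizes, and the per-HDD cache-pool split are assumptions, not the original commands):

    import subprocess

    def run(*cmd):
        subprocess.run(cmd, check=True)

    # one LV per HDD, pinned to its own PV
    run("lvcreate", "-n", "disk1", "-l", "100%PVS", "vg", "/dev/sda")
    run("lvcreate", "-n", "disk2", "-l", "100%PVS", "vg", "/dev/sdb")
    # one cache pool per HDD LV, both carved out of the single SSD
    run("lvcreate", "--type", "cache-pool", "-L", "40G", "-n", "cpool1", "vg", "/dev/sdc")
    run("lvcreate", "--type", "cache-pool", "-L", "40G", "-n", "cpool2", "vg", "/dev/sdc")
    run("lvconvert", "--type", "cache", "--cachepool", "vg/cpool1", "vg/disk1")
    run("lvconvert", "--type", "cache", "--cachepool", "vg/cpool2", "vg/disk2")
    # then: mkfs.btrfs -m raid1 -d raid1 /dev/vg/disk1 /dev/vg/disk2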

Re: Cancel device remove?

2015-09-10 Thread Piotr Pawłow
Hello, Is there some way to cancel a device remove operation? I have discovered that rebooting will cancel it, but that's not always possible. What I'm after is something equivalent to cancelling a scrub. I keep running into situations where I want to pause a remove operation for speed reas…

Re: btrfs freeze/thaw when using with LVM2

2015-04-27 Thread Piotr Pawłow
On 27.04.2015 at 14:15, Hugo Mills wrote: HOWEVER, you shouldn't take LVM snapshots of a btrfs filesystem AT ALL I'd like to add that, generally, when working with LVM and BTRFS, it's probably a good idea to always use the "device=" mount option to make it scan only the specified devices instead of all…
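For example (device paths are illustrative; with a multi-device btrfs, each member is listed in its own "device=" option):

    import subprocess

    # scan only the named LVs instead of every block device in the system,
    # so a stale LVM snapshot with the same fs UUID can't be picked up
    subprocess.run(["mount", "-t", "btrfs",
                    "-o", "device=/dev/vg/btrfs1,device=/dev/vg/btrfs2",
                    "/dev/vg/btrfs1", "/mnt"], check=True)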

filesystem unmountable and btrfsck fails to repair after SATA link hiccup

2015-01-12 Thread Piotr Pawłow
Hello, Situation: btrfs on LVM on RAID5, formatted with default options, mounted with noatime,compress=lzo, kernel 3.18.1. While recovering the RAID after a drive failure, another drive got a couple of SATA link errors, and it corrupted the FS: http://pp.siedziba.pl/tmp/btrfs-corruption/kern.log.t…

Re: "Transaction commit" in btrfs sub del

2014-10-23 Thread Piotr Pawłow
On 23.10.2014 16:24, Roman Mamedov wrote: I was under the impression that the "Transaction commit:" setting in 'btrfs sub del' finally allows us to make it not return until all free space from the snapshots being deleted is completely freed up. This is not what "commit-each" or "commit-af…
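For context, a sketch of the flags being contrasted; the key point is that a committed deletion is not the same as the space being freed, since the cleaner thread reclaims the extents asynchronously afterwards (paths are made up):

    import subprocess

    # -C/--commit-each: wait for a transaction commit after each deletion;
    # -c/--commit-after: one commit after all deletions. Neither option
    # waits for the cleaner thread to actually free the space.
    subprocess.run(["btrfs", "subvolume", "delete", "--commit-after",
                    "/pool/snaps/old1", "/pool/snaps/old2"], check=True)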

Re: device balance times

2014-10-22 Thread Piotr Pawłow
On 22.10.2014 03:43, Chris Murphy wrote: On Oct 21, 2014, at 4:14 PM, Piotr Pawłow wrote: Looks normal to me. Last time I started a balance after adding a 6th device to my FS, it took 4 days to move 25 GB of data. It's untenable long-term. At some point it must be fixed. It's way…

Re: device balance times

2014-10-21 Thread Piotr Pawłow
On 21.10.2014 20:59, Tomasz Chmielewski wrote: FYI - after a disk failed and I replaced it, I ran a balance; it took almost 3 weeks to complete, for 120 GB of data: Looks normal to me. Last time I started a balance after adding a 6th device to my FS, it took 4 days to move 25 GB of data. Some…

Re: RAID1 failure and recovery

2014-09-14 Thread Piotr Pawłow
On 14.09.2014 06:44, Hugo Mills wrote: I've done this before, by accident (pulled the wrong drive, reinserted it). You can fix it by running a scrub on the device (btrfs scrub start /dev/ice, I think). Checksums are done for each 4k block, so the increase in probability of a false negative is pu…
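Back-of-the-envelope, assuming corruption looks random to a 32-bit checksum (illustrative numbers, not from the original message):

    # chance that one randomly corrupted 4 KiB block slips past crc32c
    p = 2 ** -32
    blocks_per_tib = (2 ** 40) // 4096   # ~268 million 4 KiB blocks per TiB
    print(blocks_per_tib * p)            # ~0.0625 expected misses, even if
                                         # every block in a TiB were corrupted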

Re: RAID1 failure and recovery

2014-09-13 Thread Piotr Pawłow
On 12.09.2014 12:47, Hugo Mills wrote: I've done this before, by accident (pulled the wrong drive, reinserted it). You can fix it by running a scrub on the device (btrfs scrub start /dev/ice, I think). I'd like to remind everyone that btrfs has weak checksums. It may be good for correcting an…

Re: Help! - "btrfs device delete missing" running out of space

2014-01-06 Thread Piotr Pawłow
Hello, I'm not sure what the solution is, but the issue seems to be that btrfs is laying out the RAID1 like this: [snip] Yeah, it kinda ended up like this. I think the problem stems from the fact that restoring redundancy works by relocating block groups, which rewrites all chunks instead of r…

Re: Help! - "btrfs device delete missing" running out of space

2014-01-05 Thread Piotr Pawłow
Hello, > distribution, used space on each device should accordingly be: 160, > 216, and 405. The last number should be 376, I copied the wrong one. Anyway, I deleted as much data as possible, which probably won't help in the end, but at the moment it's still going. Meanwhile, I made a script to…

Help! - "btrfs device delete missing" running out of space

2014-01-05 Thread Piotr Pawłow
Hello, I replaced a failed 500 GB disk in btrfs raid1 with 2 smaller ones - 200 and 250 GB. I started the "btrfs device delete missing" command, which has been running since Friday, and it still seems far from finished. It seems to be doing something very strange: used space on the largest drive is going d…

Re: Kernel BUG: __tree_mod_log_rewind

2013-05-25 Thread Piotr Pawłow
>> I can get btrfs to throw a kernel bug easily by running btrfs fi defrag >> on some files in 3.9.0: > > Thanks for reporting. It's a known bug (that ought to be fixed before > the 3.9 release in fact). I'm still getting a BUG in __tree_mod_log_rewind on kernels 3.9.2 and 3.10-rc2 when trying to…

Re: btrfs and LVM snapshots (Re: kernel BUG at fs/btrfs/extent-tree.c:1772)

2013-02-12 Thread Piotr Pawłow
> I can confirm that, even with a single-device btrfs filesystem. However I am > curious why you want to use the LVM snapshot capability instead of the btrfs one. You can't use btrfs snapshots on a broken FS. LVM snapshots would be useful to save the original state, before any potentially destructi…

btrfs and LVM snapshots (Re: kernel BUG at fs/btrfs/extent-tree.c:1772)

2013-02-09 Thread Piotr Pawłow
Hello, > Yeah you can't mount images, we clear out the chunk tree so nothing works. > Let me know if you run into any problems in the future. Thanks, That's surprising, I haven't seen it mentioned anywhere. With any other filesystem I could use an LVM snapshot to save the original state, but wit…
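A sketch of that idea (LV names and snapshot size are assumptions; note the "device=" caveat discussed elsewhere in this archive, since the snapshot carries a duplicate filesystem UUID):

    import subprocess

    # reserve CoW space for the snapshot, then attempt the risky repair
    # on the original /dev/vg/root
    subprocess.run(["lvcreate", "--snapshot", "-L", "20G",
                    "-n", "pre-repair", "/dev/vg/root"], check=True)
    # to roll back to the saved state afterwards:
    #   lvconvert --merge /dev/vg/pre-repair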

Re: kernel BUG at fs/btrfs/extent-tree.c:1772

2013-02-05 Thread Piotr Pawłow
> mounts before it panics, can you capture all the output and reply to this > email > with it? Thanks, I'm sorry, I could not keep it in that state any longer. I used btrfs-image to create a metadata image from both disks, then let it run without the space cache. Unfortunately, when I run qemu with these…

Re: kernel BUG at fs/btrfs/extent-tree.c:1755 (was: 1772)

2013-02-04 Thread Piotr Pawłow
Hello, situation update: I realized I hadn't tried "nospace_cache" yet, even though I had tried "clear_cache". "clear_cache" doesn't help at all. On the other hand, with "nospace_cache": # mount /dev/hdb -odevice=/dev/hdc,nospace_cache /mnt/test device label root devid 2 transid 56099 /dev/hdc devic…

Re: kernel BUG at fs/btrfs/extent-tree.c:1755 (was: 1772)

2013-02-03 Thread Piotr Pawłow
I also tried the for-linus branch; it looks like the same problem: # mount /dev/hdb -odevice=/dev/hdc /mnt/test device label root devid 2 transid 56098 /dev/hdc device label root devid 1 transid 56098 /dev/hdb btrfs: disk space caching is enabled [ cut here ] WARNING: at fs/btrfs/…

kernel BUG at fs/btrfs/extent-tree.c:1772

2013-02-03 Thread Piotr Pawłow
Hello, 1 week ago I bought a new 2 TB hard drive, created 1 partition on the whole disk, and created root and swap LVM volumes on it. I formatted root as btrfs with default options, except the metadata profile, which I set to "single". I copied all my data to it and unplugged my old hard drives. It wor…

IO failure when trying to convert degraded raid1 to single

2013-02-01 Thread Piotr Pawłow
Hello, Trying to convert a degraded raid1 to single... # mount btrfs0.img /mnt/test -oloop,degraded # btrfs filesystem balance start -mconvert=single -dconvert=single -f /mnt/test ...ends up with: Btrfs v0.20-rc1-56-g6cd836d device fsid 88c73405-12f4-4dc8-90f2-71925867d0c5 devid 1 transid 4 /…

Re: RAID 0 across SSD and HDD

2013-01-31 Thread Piotr Pawłow
With RAID-0, you'd get data striped equally across all the devices (in this case, both), up to the size of the second-largest one, at which point it'll stop allocating space. By "stop allocating space" I assume you mean it will return out-of-space errors, even though there is technically 250 GB…
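The arithmetic, with sizes assumed from the figure mentioned in the thread:

    # RAID-0 stripes until the smaller device fills up
    ssd, hdd = 250, 500              # GB, illustrative sizes
    striped = 2 * min(ssd, hdd)      # 500 GB usable, striped across both
    stranded = abs(hdd - ssd)        # 250 GB left unallocatable on the HDD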

Re: no activity in kernel.org btrfs-progs git repo?

2012-12-14 Thread Piotr Pawłow
Hello, It looks like it's been a bit more than two months since http://git.kernel.org/?p=linux/kernel/git/mason/btrfs-progs.git;a=summary Try git://github.com/josefbacik/btrfs-progs I just spent a whole day debugging btrfs-restore, fixing signed/unsigned comparisons, adding another mirror…

Re: various problems converting to RAID1

2012-12-06 Thread Piotr Pawłow
When I try btrfsck in repair mode, it fails to fix the corruption (log below). Is there any other version of btrfsck besides the one at git://git.kernel.org/pub/scm/linux/kernel/git/mason/btrfs-progs.git that I could try? (gdb) run Starting program: /home/pp/btrfs-progs/btrfsck --repair /dev/m…

Re: various problems converting to RAID1

2012-12-04 Thread Piotr Pawłow
As I was in a hurry, I forgot about some things: > I rebooted my computer and the balance started continuing its work Of course, I had deleted around 15 GB of data to free some space after noticing there was no space left, then tried to restart the balance; it didn't work, so I checked the logs, noticed the problems, and…

various problems converting to RAID1

2012-12-04 Thread Piotr Pawłow
Hello, On Saturday I added another disk to my BTRFS filesystem. I started a rebalance to convert it from m:DUP/d:single to m:RAID1/d:RAID1. I quickly noticed it started filling my logs with "btrfs: block rsv returned -28" and "slowpath" warnings from "use_block_rsv+0x198/0x1a0 [btrfs]" (ht…