On Sun, Sep 01, 2019 at 11:03:59AM +0300, Andrei Borzenkov wrote:
> 01.09.2019 6:28, Sean Greenslade wrote:
> >
> > I decided to do a bit of experimentation to test this theory. The
> > primary goal was to see if a filesystem could suffer a failed disk and
> > have that disk removed and rebalanced among the remaining disks without
> > the filesystem losing data or going read-only. Tested on kernel
> > 5.2.5-arch1-1-ARCH, progs: v5.2.1.
> >
> > I was actually quite impressed. When I ripped one of the block devices
> > out from under btrfs, the kernel started spewing tons of BTRFS errors,
> > but seemed to keep on trucking. I didn't leave it in this state for too
> > long, but I was reading, writing, and syncing the fs without issue.
> > After performing a btrfs device delete <MISSING_DEVID>, the filesystem
> > rebalanced and stopped reporting errors.
>
> How many devices did filesystem have? What profiles did original
> filesystem use and what profiles were present after deleting device?
> Just to be sure there was no silent downgrade from raid1 to dup or
> single as example.
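
The rough sequence was something like the following (device names, mount
point, and the exact failure-injection step are illustrative, not a
verbatim transcript of my test):

  # three-device raid1 for both data and metadata
  mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc /dev/sdd
  mount /dev/sdb /mnt/test

  # "fail" one device out from under the mounted fs, e.g. by detaching
  # the virtual disk from the VM, or for a SCSI/SATA disk:
  #   echo 1 > /sys/block/sdd/device/delete

  # drop the vanished device; the delete re-replicates its chunks onto
  # the two remaining disks (find the devid with: btrfs filesystem show)
  btrfs device delete <devid-of-missing-disk> /mnt/test

  # confirm nothing was silently downgraded to dup/single
  btrfs filesystem usage /mnt/test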

I did the simplest case: raid1 with 3 disks, dropping 1 disk to end up
with raid1 with 2 disks. I did check, and btrfs fi usage reported no dup
or single chunks.

> > Looks like this may be a viable
> > strategy for high-availability filesystems assuming you have adequate
> > monitoring in place to catch the disk failures quickly. I personally
> > wouldn't want to fully automate the disk deletion, but it's certainly
> > possible.
> >
>
> This would be valid strategy if we could tell btrfs to reserve enough
> spare space; but even this is not enough, every allocation btrfs does
> must be done so that enough spare space remains to reconstruct every
> other missing chunk.
>
> Actually I now ask myself - what happens when btrfs sees unusable disk
> sector(s) in some chunk? Will it automatically reconstruct content of
> this chunk somewhere else? If not, what is an option besides full device
> replacement?

As far as I can tell, btrfs has no facility for dealing with medium
errors (besides just reporting the error). I just re-ran a simple test
with a two-device raid1 where one device was removed after mounting.
Btrfs complains loudly every time a write to the missing disk fails, but
doesn't retry or redirect these writes. One half of the raid1 block
group makes it to disk; the other gets lost to the void. The chunk that
makes it to disk is still of raid1 type. (Rough reproduction steps are
below, after my sig.)

Essentially, it seems that btrfs currently has no way of marking a disk
as offline / missing / problematic post-mount. Additionally, and perhaps
more troubling, a failed chunk write will not be retried, even if there
is another disk that could accept it. I think that for my fake-hot-spare
proposal to be viable as a fault-resiliency measure, this
failed-chunk-retry logic would need to be implemented. Otherwise you're
living without data redundancy for some old data and some (or
potentially all) new data from the moment the first medium error occurs
until the moment the device delete completes successfully.

--Sean
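
P.S. For anyone who wants to reproduce the second test, the rough steps
were something like this (again, device names and the failure-injection
mechanism are illustrative):

  # two-device raid1
  mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc
  mount /dev/sdb /mnt/test

  # make one device disappear while the fs stays mounted, e.g. detach
  # the virtual disk, or for a SCSI/SATA disk:
  #   echo 1 > /sys/block/sdc/device/delete

  # generate new writes and force them out; dmesg fills with BTRFS
  # write/flush errors for the lost disk, and the copies destined for it
  # are neither retried nor redirected to the surviving disk
  dd if=/dev/urandom of=/mnt/test/testfile bs=1M count=100
  sync

  # the newly allocated chunks are still reported as raid1, even though
  # only one copy ever reached a disk
  btrfs filesystem usage /mnt/test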