On Tue, Feb 12, 2019 at 3:11 PM Zygo Blaxell
<ce3g8...@umail.furryterror.org> wrote:
>
> On Tue, Feb 12, 2019 at 02:48:38PM -0700, Chris Murphy wrote:
> > Is it possibly related to the zlib library being used on
> > Debian/Ubuntu? That you've got even one reproducer with the exact same
> > hash for the transient error case means it's not hardware or random
> > error; let alone two independent reproducers.
>
> The errors are not consistent between runs. The above pattern is quite
> common, but it is not the only possible output. Add in other processes
> reading the 'am' file at the same time and it gets very random.
>
> The bad data tends to have entire extents missing, replaced with zeros.
> That leads to a small number of possible outputs (the choices seem to be
> only to have the data or have the zeros). It does seem to be a lot more
> consistent in recent (post 4.14.80) kernels, which may be interesting.
>
> Here is an example of a diff between two copies of the 'am' file copied
> while the repro script was running, filtered through hd:
>
> # diff -u /tmp/f1 /tmp/f2
> --- /tmp/f1	2019-02-12 17:05:14.861844871 -0500
> +++ /tmp/f2	2019-02-12 17:05:16.883868402 -0500
> @@ -56,10 +56,6 @@
>  *
>  00020000  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
>  *
> -00021000  11 11 11 11 11 11 11 11  11 11 11 11 11 11 11 11  |................|
> -*
> -00022000  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
> -*
>  00023000  12 12 12 12 12 12 12 12  12 12 12 12 12 12 12 12  |................|
>  *
>  00024000  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
> @@ -268,10 +264,6 @@
>  *
>  000a0000  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
>  *
> -000a1000  11 11 11 11 11 11 11 11  11 11 11 11 11 11 11 11  |................|
> -*
> -000a2000  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
> -*
>  000a3000  12 12 12 12 12 12 12 12  12 12 12 12 12 12 12 12  |................|
>  *
>  000a4000  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
> @@ -688,10 +680,6 @@
>  *
>  001a0000  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
>  *
> -001a1000  11 11 11 11 11 11 11 11  11 11 11 11 11 11 11 11  |................|
> -*
> -001a2000  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
> -*
>  001a3000  12 12 12 12 12 12 12 12  12 12 12 12 12 12 12 12  |................|
>  *
>  001a4000  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
> @@ -1524,10 +1512,6 @@
>  *
>  003a0000  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
>  *
> -003a1000  11 11 11 11 11 11 11 11  11 11 11 11 11 11 11 11  |................|
> -*
> -003a2000  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
> -*
>  003a3000  12 12 12 12 12 12 12 12  12 12 12 12 12 12 12 12  |................|
>  *
>  003a4000  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
> @@ -3192,10 +3176,6 @@
>  *
>  007a0000  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
>  *
> -007a1000  11 11 11 11 11 11 11 11  11 11 11 11 11 11 11 11  |................|
> -*
> -007a2000  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
> -*
>  007a3000  12 12 12 12 12 12 12 12  12 12 12 12 12 12 12 12  |................|
>  *
>  007a4000  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
> @@ -5016,10 +4996,6 @@
>  *
>  00c00000  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
>  *
> -00c01000  11 11 11 11 11 11 11 11  11 11 11 11 11 11 11 11  |................|
> -*
> -00c02000  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
> -*
> [etc...you get the idea]
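For what it's worth, that two-copies-and-diff check can be automated
instead of eyeballing hd output. Here's a minimal sketch I put together
(not your repro tooling) that compares two copies of the file 4 KiB at
a time and reports which blocks differ and which side went to zeros;
the 4 KiB granularity is just an assumption matching the page-sized
runs in the hd output above:

#include <stdio.h>
#include <string.h>

#define BLOCK 4096

/* return 1 if the first n bytes of b are all zero */
static int all_zero(const unsigned char *b, size_t n)
{
	size_t i;

	for (i = 0; i < n; i++)
		if (b[i])
			return 0;
	return 1;
}

int main(int argc, char **argv)
{
	unsigned char b1[BLOCK], b2[BLOCK];
	unsigned long off = 0;
	FILE *f1, *f2;
	size_t n1, n2;

	if (argc != 3) {
		fprintf(stderr, "usage: %s copy1 copy2\n", argv[0]);
		return 1;
	}
	f1 = fopen(argv[1], "rb");
	f2 = fopen(argv[2], "rb");
	if (!f1 || !f2) {
		perror("fopen");
		return 1;
	}
	for (;;) {
		n1 = fread(b1, 1, BLOCK, f1);
		n2 = fread(b2, 1, BLOCK, f2);
		if (!n1 && !n2)
			break;
		/* flag length mismatches and content differences,
		 * noting whether either side is a run of zeros */
		if (n1 != n2 || memcmp(b1, b2, n1))
			printf("0x%08lx differs (all-zero: %d vs %d)\n",
			       off, all_zero(b1, n1), all_zero(b2, n2));
		off += BLOCK;
	}
	return 0;
}

Run it against the two snapshots (e.g. /tmp/f1 /tmp/f2) and the
"data or zeros" choice you describe should show up directly in the
all-zero flags.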
And yet the file is delivered to user space, despite the changes, as if
it's immune to checksum computation or matching. The data is clearly
different, so how is it bypassing checksumming? Data csums are based on
the original uncompressed data, correct? And holes read back as zeros,
so there are still csums covering those holes?

> I'm not sure how the zlib library is involved--sha1sum doesn't use one.
>
> > And then what happens if you do the exact same test but change to zstd
> > or lzo? No error? Strictly zlib?
>
> Same errors on all three btrfs compression algorithms (as mentioned in
> the original post from August 2018).

Obviously there is a pattern; it's not random. I just don't know what
it looks like. I've used compression for years now, mostly zstd lately
and a mix of lzo and zlib before that, but I've never hit any errors or
corruption. Then again, I also never use holes: no punched holes, and
rarely fallocated files, which I guess isn't quite the same thing as
hole punching.

So the bug you're reproducing is for sure not on the media itself; the
data is somehow transiently being interpreted differently on roughly 1
in 10 reads, but with a pattern. What about scrub? Do you get errors on
1 in 10 scrubs, or how does it manifest? No scrub errors at all?

I know very little about what parts of the kernel a file system depends
on outside of its own code (e.g. the page cache), but I wonder if the
source is something outside of Btrfs that just never gets triggered
because no other file system uses compression. Huh - what file system
uses compression *and* hole punching? squashfs? Is sparse file support
different from hole punching?
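To make that last question concrete, here is a minimal sketch of the
two cases as I understand them, using the usual fallocate(2) and
pwrite(2) interfaces (the file names, sizes, and the 0x11 fill byte
echoing the repro file are just placeholders): a sparse file gets a
hole by never writing a range, while hole punching takes a range that
already has written extents and asks the file system to drop them.

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	unsigned char buf[4096];
	int fd, i;

	/* Case 1: sparse file. Write one byte at offset 1 MiB - 1;
	 * everything below it is a hole that was simply never written. */
	fd = open("sparse-demo", O_CREAT | O_RDWR | O_TRUNC, 0644);
	pwrite(fd, "x", 1, (1 << 20) - 1);
	close(fd);

	/* Case 2: punched hole. Write real data everywhere first, then
	 * ask the file system to replace part of it with a hole.
	 * (Error checking omitted for brevity.) */
	memset(buf, 0x11, sizeof(buf));
	fd = open("punch-demo", O_CREAT | O_RDWR | O_TRUNC, 0644);
	for (i = 0; i < 256; i++)
		pwrite(fd, buf, sizeof(buf), (off_t)i * sizeof(buf));
	fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
		  4096, 8192);	/* drop two 4 KiB blocks at offset 4 KiB */
	close(fd);

	puts("both holes read back as zeros");
	return 0;
}

Reads of either hole are supposed to come back as zeros; the difference
is only whether data extents ever existed in that range, which sounds
like exactly the kind of thing that could matter here.

--
Chris Murphy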