Hi,

I just got bitten by the same problem reported back in July 2016:

  https://lists.gnu.org/archive/html/bug-tar/2016-07/msg00000.html

At the time, Joerg Schilling unilaterally refused to fix the bug,
claiming that Btrfs was broken and violated POSIX, although when asked
for a reference to back that up he never provided one.  Everyone else in
the thread disagreed with him, but the bug never got fixed.

Paul Eggert argued that there's no guarantee that st_blocks must be
nonzero for a file with nonzero data.  As an example, he pointed out
that if all of the file's data fits within the inode, it would be
reasonable to report st_blocks == 0 even though the file has data.

Others pointed out that in Linux's /proc filesystem, all files have
st_blocks == 0.  That is also the case on my system running
linux-libre-4.14.12.  Joerg claimed that his /proc filesystem reported
nonzero st_blocks, but he was the only one in the thread to observe
that.

It was also pointed out that with the advent of SEEK_HOLE and SEEK_DATA,
the st_blocks hack is no longer needed for efficiency on modern systems.

I see from the GNU maintainers file that Paul Eggert is a maintainer for
GNU tar, and Joerg Schilling is not, so I don't see why we should let
Joerg continue to prevent us from fixing this bug.

I propose that we revisit this bug and fix it.  We clearly cannot assume
that st_blocks == 0 implies that the file contains only zeroes.  This
bug is fairly serious for anyone using Btrfs and possibly other
filesystems, as it can silently lose user data.

What do you think?

      Mark
