Package: e2fsprogs
Version: 1.46~WIP.2019.10.03-1
Severity: normal
File: /sbin/mke2fs
X-Debbugs-Cc: j...@joshtriplett.org

This bug exists in both 1.45.6-1 (sid) and 1.46~WIP.2019.10.03-1
(experimental).

In the course of creating some filesystems containing Debian
installations with `mke2fs -d`, I found a bug in the `inline_data`
handling that seems to affect files with ACLs. On a default Debian
installation, /var/log/journal triggered this problem.
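
For reference, the ACL data in the test case below is in getfacl's output
format; on a default installation, something like

getfacl /var/log/journal

should show a similar ACL, and output in that format is what
`setfacl --restore=-` consumes in the reproduction steps below.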

I managed to create a minimal test case. To reproduce:

mkdir -p target/testdir
setfacl --restore=- <<EOF
# file: target/testdir/
user::rwx
group::r-x
group:adm:r-x
mask::r-x
other::r-x
default:user::rwx
default:group::r-x
default:group:adm:r-x
default:mask::r-x
default:other::r-x
EOF
mke2fs -b 4096 -I 256 -O sparse_super2,inline_data,^has_journal -d target disk.img 8M

and then run:

e2fsck -n -f disk.img

This will show:

e2fsck 1.46-WIP (03-Oct-2019)
Pass 1: Checking inodes, blocks, and sizes
Inode 12 has INLINE_DATA_FL flag but extended attribute not found.  Truncate? no

Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information

disk.img: ********** WARNING: Filesystem still has errors **********

disk.img: 12/2048 files (0.0% non-contiguous), 137/2048 blocks
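
For anyone who wants to poke at the affected inode directly, debugfs (also
from e2fsprogs) can dump its state; something along these lines should
work, using the inode number from the e2fsck output above (assuming this
debugfs has the ea_* commands, which recent versions do):

debugfs -R "stat <12>" disk.img
debugfs -R "ea_list <12>" disk.img

I'd expect the stat output to show INLINE_DATA_FL set while ea_list shows
no system.data entry, matching what e2fsck complains about.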


The kernel ext4 implementation will likewise fail to handle that inode,
with a message like this:

EXT4-fs error (device loop0): __ext4_iget:4776: inode #3215: block 56: comm ls: invalid block
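
Loop-mounting the image and accessing the affected directory should be
enough to trigger this; roughly:

mount -o loop disk.img /mnt
ls -l /mnt/testdir

(the mount point is arbitrary, and the inode and block numbers in the
message will differ depending on the image).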


The size of 8M doesn't matter; the issue reproduces with other sizes.
/etc/mke2fs.conf is unchanged from the defaults. `tune2fs -l disk.img`
shows the following:

tune2fs 1.46-WIP (03-Oct-2019)
Filesystem volume name:   <none>
Last mounted on:          <not available>
Filesystem UUID:          931e3151-83db-4c33-be5f-655c9323fab4
Filesystem magic number:  0xEF53
Filesystem revision #:    1 (dynamic)
Filesystem features:      ext_attr resize_inode dir_index sparse_super2 filetype inline_data sparse_super large_file
Filesystem flags:         signed_directory_hash 
Default mount options:    user_xattr acl
Filesystem state:         clean
Errors behavior:          Continue
Filesystem OS type:       Linux
Inode count:              2048
Block count:              2048
Reserved block count:     102
Free blocks:              1911
Free inodes:              2036
First block:              0
Block size:               4096
Fragment size:            4096
Blocks per group:         32768
Fragments per group:      32768
Inodes per group:         2048
Inode blocks per group:   128
Filesystem created:       Sat Sep 26 00:50:53 2020
Last mount time:          n/a
Last write time:          Sat Sep 26 00:50:53 2020
Mount count:              0
Maximum mount count:      -1
Last checked:             Sat Sep 26 00:50:53 2020
Check interval:           0 (<none>)
Reserved blocks uid:      0 (user root)
Reserved blocks gid:      0 (group root)
First inode:              11
Inode size:               256
Required extra isize:     32
Desired extra isize:      32
Default directory hash:   half_md4
Directory Hash Seed:      df46d104-ed26-4cea-ac61-0feff2a54622


-- System Information:
Debian Release: bullseye/sid
  APT prefers unstable-debug
  APT policy: (500, 'unstable-debug'), (500, 'unstable'), (1, 'experimental-debug'), (1, 'experimental')
Architecture: amd64 (x86_64)

Kernel: Linux 5.8.0-2-amd64 (SMP w/4 CPU threads)
Locale: LANG=C.UTF-8, LC_CTYPE=C.UTF-8 (charmap=UTF-8), LANGUAGE not set
Shell: /bin/sh linked to /bin/dash
Init: systemd (via /run/systemd/system)

Versions of packages e2fsprogs depends on:
ii  libblkid1    2.36-3+b1
ii  libc6        2.31-3
ii  libcom-err2  1.45.6-1
ii  libext2fs2   1.46~WIP.2019.10.03-1
ii  libss2       1.45.6-1
ii  libuuid1     2.36-3+b1
ii  logsave      1.45.6-1

Versions of packages e2fsprogs recommends:
pn  e2fsprogs-l10n  <none>

Versions of packages e2fsprogs suggests:
pn  e2fsck-static  <none>
pn  fuse2fs        <none>
pn  gpart          <none>
ii  parted         3.3-4

-- no debconf information
