Note that the 2023.10.30 cycle Mantic kernels were built with this
zfs-linux DKMS prebuilt and vendored in, and they passed kernel
regression testing, ADT matrix testing, and so on, for all kernels.

Mantic kernels that contain this updated zfs.ko have now mostly been
released.

 * autopkgtest pass

As seen on
https://ubuntu-archive-team.ubuntu.com/proposed-migration/mantic/update_excuses.html#zfs-linux
and confirmed by the absence of automated autopkgtest regression
comments on this bug, and also via the ADT matrix, for example
https://kernel.ubuntu.com/adt-matrix/mantic-linux-meta.html for
zfs-linux results with the generic kernel as well as all other Mantic
kernels. A sketch of reproducing such a run locally follows below.
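
A rough local reproduction, assuming an amd64 host; the image name is
the default produced by autopkgtest-buildvm-ubuntu-cloud:

  # build a mantic test VM image (writes autopkgtest-mantic-amd64.img)
  $ sudo apt install autopkgtest
  $ autopkgtest-buildvm-ubuntu-cloud -r mantic -a amd64
  # run the zfs-linux tests with only zfs-linux pulled from proposed
  $ autopkgtest zfs-linux --apt-pocket=proposed=src:zfs-linux \
      -- qemu ./autopkgtest-mantic-amd64.img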

 * kernel regression zfs testsuite pass

(Canonical-only URL, I believe)
http://10.246.75.167/2023.10.30/tc-stats-lvl1.html shows all RT testing
stats for the 2023.10.30 cycle, and the zfs results are all green
(apart from one stress-ng test case which appears to have passed but
failed to clean up within a timeout, and is being investigated).
Clicking through to the zfs test case name shows a breakdown of tests
per release / kernel flavour / target hardware / arch, etc. For this
SRU only the Mantic results matter. A quick sanity check of the
in-kernel module version is sketched below.
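
Independently of the RT dashboards, such a sanity check on a running
Mantic kernel could look like this (a sketch; the sysfs path applies
once the module is loaded):

  # version recorded in the zfs.ko shipped with the running kernel
  $ modinfo zfs | grep -E '^(version|vermagic)'
  # version reported by the module once it is loaded
  $ cat /sys/module/zfs/version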

 * zsys integration test pass

Tested that existing installs upgrade fine, and that new installs are
also possible, by upgrading the zfs-linux userspace tooling to
-proposed in a live desktop session (see the sketch below).
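
A rough sketch of that live-session upgrade to -proposed (the deb line
and package selection are illustrative; apt resolves the matching
libraries):

  # enable the proposed pocket in the live session
  $ sudo add-apt-repository -y \
      'deb http://archive.ubuntu.com/ubuntu mantic-proposed main universe'
  $ sudo apt update
  # pull only the zfs userspace tooling from proposed
  $ sudo apt install -t mantic-proposed zfsutils-linux
  $ zfs version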

 * LXD support retested

That was done by the LXD team, and they have already shipped the
upgrade to 2.2.0 final inside the LXD snap. A minimal LXD-on-ZFS smoke
test is sketched below.
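
Such a smoke test could look like this (pool and container names are
illustrative only):

  # loop-backed ZFS storage pool managed by LXD
  $ lxc storage create zfs-test zfs size=10GiB
  $ lxc launch ubuntu:mantic smoke-test --storage zfs-test
  # snapshot/restore exercises the zfs driver paths
  $ lxc snapshot smoke-test before
  $ lxc restore smoke-test before
  # clean up
  $ lxc delete -f smoke-test && lxc storage delete zfs-test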

In practice this SRU has already reached most users via LXD and via
kernel upgrades. The change this SRU release introduces is the
userspace tooling update, which was minor in this update.

Note that there will be a new zfs-linux SRU right after this one to
address the recently identified and patched issues. See
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/2044657

It is still worth releasing this SRU before staging the next one in
-proposed.

** Tags removed: verification-needed verification-needed-mantic
** Tags added: verification-done verification-done-mantic

https://bugs.launchpad.net/bugs/2040181

Title:
  upgrade zfs-linux to 2.2.0 final

Status in zfs-linux package in Ubuntu:
  Fix Released
Status in zfs-linux source package in Mantic:
  Fix Committed
Status in zfs-linux source package in Noble:
  Fix Released

Bug description:
  [ Impact ]

   * Potential data loss with zfs 2.2.0-rc3 was resolved in -rc5; this
  is a proposal to update to the final release.

   * Ubuntu Mantic shipped 2.2.0-rc3 with approximately 10 additional
  cherrypicks or kernel-team-created fixes that were contributed &
  accepted upstream. At the time this was the only way to get zfs
  working with v6.5 kernels and also drop the need for shiftfs (due to
  zfs improvements). Since we shipped this package, upstream has
  identified and fixed multiple small bugs in the subsequent RCs and
  the final release, including one bug that can lead to potential data
  loss.

   * The 2.2.0 release branch has been frozen for a long time already,
  and the outstanding fixes that Mantic does not have amount to fewer
  than 30 small patches (an approximate way to count this is sketched
  after these bullets).

   * Proposal to upgrade our build to 2.2.0 final, pick up all the
  regression fixes, and drop all the cherrypicked patches that enable
  v6.5 support. This will give us the best kernel driver to support in
  the run-up to the next Ubuntu LTS.
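
  The same kind of filtered log as shown further below, run against the
  upstream release tags (assumed here to be the openzfs tags
  zfs-2.2.0-rc3 and zfs-2.2.0 rather than the exact snapshot hashes we
  shipped), gives an approximate count of that delta:

  # count non-compat commits touching the relevant paths
  $ git log --oneline zfs-2.2.0-rc3..zfs-2.2.0 -- cmd/ lib/ module/os/linux/ | grep -cv compat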

  [ Test Plan ]

   * autopkgtest pass

   * kernel regression zfs testsuite pass

   * zsys integration test pass

   * LXD support retested

  [ Where problems could occur ]

   * The LXD snap in edge shipped the RC5 version of the zfs tooling
  until 16th October, when it was upgraded to 2.2.0 final. There are no
  known kernel-userspace incompatibilities between the RC and final
  releases, but we should explicitly test this (a quick check is
  sketched below).
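
  Such a check could be as simple as comparing what the tools and the
  loaded module report (zfs version prints both the userland and
  zfs-kmod versions):

  $ zfs version
  # or read the loaded module's version directly
  $ cat /sys/module/zfs/version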

  [ Other Info ]

   * Upstream is alerting us to the potential data loss and requesting
  an upgrade to 2.2.0-rc5 or later.

  [ Abbreviated changes being introduced ]

  $ git log --oneline 4a104ac047..95785196f2 -- cmd/ lib/ module/os/linux/ | grep -v compat
  810fc49a3e Ensure we call fput when cloning fails due to different devices.
  a80e1f1c90 zvol: Temporally disable blk-mq
  33d7c2d165 import: require force when cachefile hostid doesn't match on-disk
  8015e2ea66 Add '-u' - nomount flag for zfs set
  c53bc3837c Improve the handling of sharesmb,sharenfs properties
  e9dc31c74e Update the behavior of mountpoint property
  608741d062 Report ashift of L2ARC devices in zdb
  0ce1b2ca19 Invoke zdb by guid to avoid import errors
  0aabd6b482 ZIL: Avoid dbuf_read() in ztest_get_data()
  a199cac6cd status: report pool suspension state under failmode=continue
  729507d309 Fix occasional rsend test crashes
  3af63683fe cmd: add 'help' subcommand to zpool and zfs
  9aa1a2878e Fix incorrect expected error in ztest
  f7a07d76ee Retire z_nr_znodes
  54c6fbd378 zed: Allow autoreplace and fault LEDs for removed vdevs
  32949f2560 Relax error reporting in zpool import and zpool split
  63159e5bda checkstyle: fix action failures
  e99e684b33 zed: update zed.d/statechange-slot_off.sh
  d19304ffee zed: Add zedlet to power off slot when drive is faulted
  92f095a903 copy_file_range: fix fallback when source create on same txg
  895cb689d3 zfs_clone_range should return a descriptive error codes
  6bdc7259d1 libzfs: sendrecv: send_progress_thread: handle SIGINFO/SIGUSR1
  df8c9f351d ZIL: Second attempt to reduce scope of zl_issuer_lock.
  0ae7bfc0a4 zpool_vdev_remove() should handle EALREADY error return
  bd1eab16eb linux: zfs: ctldir: set [amc]time to snapshot's creation property
  c47f0f4417 linux/copy_file_range: properly request a fallback copy on Linux <5.3
  12f2b1f65e zdb: include cloned blocks in block statistics


