Hi Evan,
The SRU cycle has completed, and all kernels containing the Raid10 block
discard performance patches have now been released to -updates.
Note that the versions are different from the kernels in -proposed, due
to the kernel team needing to do a last-minute respin to fix two sets of
CVEs.
This bug was fixed in the package linux - 4.15.0-147.151
---
linux (4.15.0-147.151) bionic; urgency=medium
* CVE-2021-3444
- bpf: Fix truncation handling for mod32 dst reg wrt zero
* CVE-2021-3600
- SAUCE: bpf: Do not use ax register in interpreter on div/mod
- bpf:
This bug was fixed in the package linux - 5.4.0-77.86
---
linux (5.4.0-77.86) focal; urgency=medium
* UAF on CAN J1939 j1939_can_recv (LP: #1932209)
- SAUCE: can: j1939: delay release of j1939_priv after synchronize_rcu
* UAF on CAN BCM bcm_rx_handler (LP: #1931855)
This bug was fixed in the package linux - 5.8.0-59.66
---
linux (5.8.0-59.66) groovy; urgency=medium
* UAF on CAN J1939 j1939_can_recv (LP: #1932209)
- SAUCE: can: j1939: delay release of j1939_priv after synchronize_rcu
* UAF on CAN BCM bcm_rx_handler (LP: #1931855)
This bug was fixed in the package linux - 5.11.0-22.23
---
linux (5.11.0-22.23) hirsute; urgency=medium
* UAF on CAN J1939 j1939_can_recv (LP: #1932209)
- SAUCE: can: j1939: delay release of j1939_priv after synchronize_rcu
* UAF on CAN BCM bcm_rx_handler (LP: #1931855)
This bug was fixed in the package linux - 5.11.0-20.21+21.10.1
---
linux (5.11.0-20.21+21.10.1) impish; urgency=medium
* impish/linux: 5.11.0-20.21+21.10.1 -proposed tracker (LP: #1930056)
* Packaging resync (LP: #1786013)
- update dkms package versions
[ Ubuntu:
Hi Evan,
Just checking in. Are you still running 5.4.0-75-generic on your server?
Is everything nice and stable? Is your data fully intact, with no signs
of corruption at all?
My server has been running for two weeks now, and it runs fstrim every
30 minutes, and everything appears to be stable.
Hi Evan,
Great to hear things are looking good for you and that the block discard
performance is there. If possible, keep running the kernel from
-proposed for a bit longer, just to make sure nothing comes up on longer
runs.
I spent some time today performing verification on all the kernels in -proposed.
Performing verification for Bionic.
I'm going to do three rounds of verification.
The first is the testcase from this bug, showing block discard
performance.
The second is running through the regression reproducer from bug
1907262.
The third will be results from my testing with my /home
Performing verification for Focal.
I'm going to do three rounds of verification.
The first is the testcase from this bug, showing block discard
performance.
The second is running through the regression reproducer from bug
1907262.
The third will be results from my testing with my /home
Performing verification for Groovy.
I'm going to do three rounds of verification.
The first is the testcase from this bug, showing block discard
performance.
The second is running through the regression reproducer from bug
1907262.
The third will be results from my testing with my /home
Performing verification for Hirsute.
I'm going to do three rounds of verification.
The first is the testcase from this bug, showing block discard
performance.
The second is running through the regression reproducer from bug
1907262.
The third will be results from my testing with my /home
Thanks Matt. I have it installed on one machine so far and it looks good
(in the past 10 minutes). fstrim of a ~30 TB RAID 10 took 73 seconds
instead of multiple hours.
# uname -a
Linux xxx 5.4.0-75-generic #84-Ubuntu SMP Fri May 28 16:28:37 UTC 2021 x86_64
x86_64 x86_64 GNU/Linux
# df -h
Hi Evan,
The kernel team have built all of the kernels for this SRU cycle, and
have placed them into -proposed for verification.
We now need to do some thorough testing and make sure that Raid10 arrays
function with good performance, ensure data integrity and make sure we
won't be introducing any regressions.
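One way to spot-check data integrity during that testing is to checksum known data before and after a trim pass. Below is a minimal sketch, not the actual verification procedure: the paths are illustrative, it runs against a temporary directory so it is safe anywhere, and the fstrim step is commented out (on a real Raid10 array you would point workdir at the array's mount point and uncomment it):

```shell
# Write known data, (optionally) trim free space, then verify the checksum
# is unchanged. workdir is a stand-in for the Raid10 array's mount point.
workdir=$(mktemp -d)
dd if=/dev/urandom of="$workdir/blob" bs=1M count=4 status=none
sum_before=$(sha256sum "$workdir/blob" | cut -d' ' -f1)
# sudo fstrim -v "$workdir"    # on the real array: discard all free space
sum_after=$(sha256sum "$workdir/blob" | cut -d' ' -f1)
if [ "$sum_before" = "$sum_after" ]; then
    echo "data intact"
else
    echo "checksum changed: possible corruption" >&2
fi
rm -rf "$workdir"
```

Repeating this with larger data sets and real fstrim runs in between is roughly what the longer soak tests are checking for.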
This bug is awaiting verification that the kernel in -proposed solves
the problem. Please test the kernel and update this bug with the
results. If the problem is solved, change the tag 'verification-needed-
groovy' to 'verification-done-groovy'. If the problem still exists,
change the tag 'verification-needed-groovy' to 'verification-failed-groovy'.
This bug is awaiting verification that the kernel in -proposed solves
the problem. Please test the kernel and update this bug with the
results. If the problem is solved, change the tag 'verification-needed-
focal' to 'verification-done-focal'. If the problem still exists, change
the tag 'verification-needed-focal' to 'verification-failed-focal'.
This bug is awaiting verification that the kernel in -proposed solves
the problem. Please test the kernel and update this bug with the
results. If the problem is solved, change the tag 'verification-needed-
bionic' to 'verification-done-bionic'. If the problem still exists,
change the tag 'verification-needed-bionic' to 'verification-failed-bionic'.
This bug is awaiting verification that the kernel in -proposed solves
the problem. Please test the kernel and update this bug with the
results. If the problem is solved, change the tag 'verification-needed-
hirsute' to 'verification-done-hirsute'. If the problem still exists,
change the tag 'verification-needed-hirsute' to 'verification-failed-hirsute'.
Hi Evan,
As I mentioned in my previous message, I submitted the patches to the
Ubuntu kernel mailing list for SRU.
These patches have now received two ACKs [1][2] from senior kernel team
members, and the patches have now been applied [3] to the 4.15, 5.4, 5.8
and 5.11 kernels.
[1]
** Changed in: linux (Ubuntu Bionic)
Status: In Progress => Fix Committed
** Changed in: linux (Ubuntu Focal)
Status: In Progress => Fix Committed
** Changed in: linux (Ubuntu Groovy)
Status: In Progress => Fix Committed
** Changed in: linux (Ubuntu Hirsute)
Status: In Progress => Fix Committed
I have it running on two machines now that needed big RAID 10s:
# uname -rv
5.4.0-72-generic #80+TEST1896578v20210504b1-Ubuntu SMP Tue May 4 00:30:36 UTC
2021
# df -h /opt/raid
Filesystem Size Used Avail Use% Mounted on
/dev/md0 30T 208G 29T 1% /opt/raid
# cat /proc/mdstat
Hi Evan,
The patches have been submitted for SRU to the Ubuntu kernel mailing
list, for the 4.15, 5.4, 5.8 and 5.11 kernels:
[0] https://lists.ubuntu.com/archives/kernel-team/2021-May/119935.html
[1] https://lists.ubuntu.com/archives/kernel-team/2021-May/119936.html
[2]
Is there any ETA on a supported kernel with this patch?
--
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1896578
Title:
raid10: Block discard is very slow, causing severe delays for mkfs and
fstrim
I have completed most of my regression testing, and things are still looking
good. The performance of the block discard is there, and I haven't seen any
data corruption.
In particular, I have been testing against the testcase for the regression that
occurred with the previous revision of the patchset.
If anyone is interested in testing, there are new re-spins of the test kernels
available in the following ppa:
https://launchpad.net/~mruffell/+archive/ubuntu/lp1896578-test
The patches used are the ones I will be submitting for SRU, and are more
or less identical to the patches in the previous
** Description changed:
BugLink: https://bugs.launchpad.net/bugs/1896578
[Impact]
Block discard is very slow on Raid10, which causes common use cases
which invoke block discard, such as mkfs and fstrim operations, to take
a very long time.
For example, on an i3.8xlarge
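The testcase this description refers to comes down to timing mkfs and fstrim on a freshly created Raid10 array. A rough sketch, assuming four NVMe devices (the device names are illustrative, and the mdadm/mkfs steps are destructive, so the script only proceeds if those exact devices exist):

```shell
# Timing sketch for the Raid10 block discard testcase (destructive:
# wipes the listed devices). Device names are illustrative.
DEVICES="/dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1"
missing=0
for d in $DEVICES; do
    [ -b "$d" ] || missing=1
done
if [ "$missing" -eq 1 ]; then
    echo "test devices not present; nothing to do (sketch only)"
else
    mdadm --create /dev/md0 --level=10 --raid-devices=4 $DEVICES
    time mkfs.ext4 /dev/md0      # hours without the patches, seconds with them
    mkdir -p /mnt/raid
    mount /dev/md0 /mnt/raid
    time fstrim /mnt/raid        # likewise dominated by block discard speed
fi
```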
** Also affects: linux (Ubuntu Hirsute)
Importance: Undecided
Status: New
** Changed in: linux (Ubuntu Hirsute)
Status: New => In Progress
** Changed in: linux (Ubuntu Hirsute)
Importance: Undecided => Medium
** Changed in: linux (Ubuntu Hirsute)
Assignee: (unassigned)
Hi everyone,
The original patch author, Xiao Ni, has sent a V2 patchset to the linux-
raid mailing list for feedback. This new patchset fixes the problems the
previous version had: it properly calculates the discard offset for the
second and onward disks, and correctly calculates the stripe size.
** Changed in: linux (Ubuntu)
Status: Fix Released => In Progress
** Changed in: linux (Ubuntu Bionic)
Status: Fix Released => In Progress
** Changed in: linux (Ubuntu Focal)
Status: Fix Released => In Progress
** Changed in: linux (Ubuntu Groovy)
Status: Fix Released => In Progress
This bug was fixed in the package linux - 5.8.0-36.40+21.04.1
---
linux (5.8.0-36.40+21.04.1) hirsute; urgency=medium
* Packaging resync (LP: #1786013)
- update dkms package versions
[ Ubuntu: 5.8.0-36.40 ]
* debian/scripts/file-downloader does not handle positive
Hi Markus,
I am deeply sorry for causing the regression. We are aware, and tracking
the issue in bug 1907262.
The kernel team have started an emergency revert and you can expect
fixed kernels to be released in the next day or so.
On focal (5.4.0-56-generic) we are starting to see massive file system
corruption on systems updated to this kernel version.
These systems are using LVM with discards and thin provisioning on 6 or 8 NVMe
drives in a RAID10 near configuration. We are currently downgrading all systems
back to
This bug was fixed in the package linux - 4.15.0-126.129
---
linux (4.15.0-126.129) bionic; urgency=medium
* bionic/linux: 4.15.0-126.129 -proposed tracker (LP: #1905305)
* CVE-2020-4788
- SAUCE: powerpc/64s: Define MASKABLE_RELON_EXCEPTION_PSERIES_OOL
- SAUCE:
This bug was fixed in the package linux - 5.8.0-31.33
---
linux (5.8.0-31.33) groovy; urgency=medium
* groovy/linux: 5.8.0-31.33 -proposed tracker (LP: #1905299)
* Groovy 5.8 kernel hangs on boot on CPUs with eLLC (LP: #1903397)
- drm/i915: Mark initial fb obj as WT on
This bug was fixed in the package linux - 5.4.0-56.62
---
linux (5.4.0-56.62) focal; urgency=medium
* focal/linux: 5.4.0-56.62 -proposed tracker (LP: #1905300)
* CVE-2020-4788
- selftests/powerpc: rfi_flush: disable entry flush if present
- powerpc/64s: flush L1D on
Performing verification for Bionic.
I enabled -proposed and installed 4.15.0-125-generic to a i3.8xlarge AWS
instance.
From there, I followed the testcase steps:
$ uname -rv
4.15.0-125-generic #128-Ubuntu SMP Mon Nov 9 20:51:00 UTC 2020
$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda
Performing verification for Focal.
I enabled -proposed and installed 5.4.0-55-generic to a i3.8xlarge AWS
instance.
From there, I followed the testcase steps:
$ uname -rv
5.4.0-55-generic #61-Ubuntu SMP Mon Nov 9 20:49:56 UTC 2020
$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda
Performing verification for Groovy.
I enabled -proposed and installed 5.8.0-30-generic to a i3.8xlarge AWS
instance.
From there, I followed the testcase steps:
$ uname -rv
5.8.0-30-generic #32-Ubuntu SMP Mon Nov 9 21:03:15 UTC 2020
$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda
This bug is awaiting verification that the kernel in -proposed solves
the problem. Please test the kernel and update this bug with the
results. If the problem is solved, change the tag 'verification-needed-
bionic' to 'verification-done-bionic'. If the problem still exists,
change the tag 'verification-needed-bionic' to 'verification-failed-bionic'.
This bug is awaiting verification that the kernel in -proposed solves
the problem. Please test the kernel and update this bug with the
results. If the problem is solved, change the tag 'verification-needed-
focal' to 'verification-done-focal'. If the problem still exists, change
the tag 'verification-needed-focal' to 'verification-failed-focal'.
This bug is awaiting verification that the kernel in -proposed solves
the problem. Please test the kernel and update this bug with the
results. If the problem is solved, change the tag 'verification-needed-
groovy' to 'verification-done-groovy'. If the problem still exists,
change the tag 'verification-needed-groovy' to 'verification-failed-groovy'.
** Changed in: linux (Ubuntu Bionic)
Status: In Progress => Fix Committed
** Changed in: linux (Ubuntu Focal)
Status: In Progress => Fix Committed
** Changed in: linux (Ubuntu Groovy)
Status: In Progress => Fix Committed