Attached is a debdiff for glib2.0 on Focal which fixes this problem.
** Patch added: "Debdiff for glib2.0 for Focal"
https://bugs.launchpad.net/ubuntu/+source/glib2.0/+bug/1930359/+attachment/5510466/+files/lp1930359_focal.debdiff
** Tags removed: regression-update
** Tags added: sts-sponsor
** No longer affects: mutter (Ubuntu)
** No longer affects: mutter (Ubuntu Focal)
** Changed in: glib2.0 (Ubuntu)
Status: New => Fix Released
** Changed in: glib2.0 (Ubuntu Focal)
Status: New => In Progress
** Changed in: glib2.0 (Ubuntu Focal)
Importance: Undecided => High
** Description changed:
BugLink: https://bugs.launchpad.net/bugs/1934709
[Impact]
During an automatic balance, users may encounter an error when writing
the transaction log to disk, when the log tree is being parsed, which
forces the filesystem to be remounted read-only and
might be
linked to bug 1933172, which I came across while trying to reproduce the
issue in this bug.
** Affects: linux (Ubuntu)
Importance: Undecided
Status: Fix Released
** Affects: linux (Ubuntu Bionic)
Importance: Medium
Assignee: Matthew Ruffell (mruffell)
Status: In Progress
Hi Daniel,
Yes, I am sure this is the same issue that they are experiencing there,
and I now believe the issue lies in glib, and not mutter.
When we install mutter-common, it calls the libglib2.0-0 hook to
recompile the gsettings schemas.
The customer provided me with a tarball of their
Okay, the affected user has now started experiencing the issue again,
and was forced to roll mutter back on their fleet.
We managed to get some logs this time, and we have determined what is
happening.
When the user upgrades the mutter packages, particularly mutter-common,
the libglib2.0-0 hook
The affected user cannot reproduce the issue anymore. The new mutter
packages install fine in their environment, and gdm starts every time.
They have since rolled out the new mutter packages to their fleet
without any issues.
I'm going to mark this bug as invalid, as we can't reproduce the issue
Hi Evan,
The SRU cycle has completed, and all kernels containing the Raid10 block
discard performance patches have now been released to -updates.
Note that the versions are different than the kernels in -proposed, due
to the kernel team needing to do a last minute respin to fix two sets of
CVEs,
Hi Thimo,
The SRU cycle has completed, and all kernels containing the Raid10 block
discard performance patches have now been released to -updates.
Note that the versions are different than the kernels in -proposed, due
to the kernel team needing to do a last minute respin to fix two sets of
** Affects: linux (Ubuntu)
Importance: Undecided
Status: Fix Released
** Affects: linux (Ubuntu Bionic)
Importance: Medium
Assignee: Matthew Ruffell (mruffell)
Status: In Progress
** Tags: bionic sts
** Changed in: linux (Ubuntu)
Status: New => Fix Released
** Attachment added: "screenshot of working gdm on AWS"
https://bugs.launchpad.net/ubuntu/+source/mutter/+bug/1930359/+attachment/5505412/+files/Screenshot%20from%202021-06-18%2013-40-15.png
--
You received this bug notification because you are a member of Desktop
Packages, which is
I spent some time attempting to reproduce on AWS today, using a
g4dn.xlarge instance, which has a Nvidia Tesla T4 GPU, which supports
GRID.
I installed ubuntu-desktop-minimal and rebooted, and gdm started fine
with mutter 3.36.9-0ubuntu0.20.04.1. I confirmed this by looking at the
instance
Hi Evan,
Just checking in. Are you still running 5.4.0-75-generic on your server?
Is everything nice and stable? Is your data fully intact, and no signs
of corruption at all?
My server has been running for two weeks now, and it does a fstrim every
30 minutes, and everything appears to be
Hi Thimo,
Just checking in. Are you still running 5.4.0-75-generic on your server?
Is everything nice and stable? Is your data fully intact, and no signs
of corruption at all?
My server has been running for two weeks now, and it does a fstrim every
30 minutes, and everything appears to be
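The 30-minute fstrim cadence described in the check-ins above can be set up with a systemd timer. The unit names and paths below are illustrative (the bug does not show the actual test setup), following the stock fstrim.service/fstrim.timer pattern:

```ini
# /etc/systemd/system/fstrim-test.service (illustrative name)
[Unit]
Description=Run fstrim over all mounted filesystems for Raid10 discard testing

[Service]
Type=oneshot
ExecStart=/sbin/fstrim -av

# /etc/systemd/system/fstrim-test.timer (illustrative name)
[Unit]
Description=Run fstrim every 30 minutes

[Timer]
OnCalendar=*:0/30

[Install]
WantedBy=timers.target
```

Enable with `systemctl enable --now fstrim-test.timer`; `journalctl -u fstrim-test.service` then shows each trim pass.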
I built a test package based on mutter 3.36.9-0ubuntu0.20.04.1, and
reverted the three commits introduced by LP #1905825, namely:
commit: 92834d8feceeac538299a47a8c742e155de4e6e8
From: Kai-Heng Feng
Date: Mon, 21 Dec 2020 14:34:43 +0800
Subject: renderer/native: Refactor modeset boilerplate into
Hi Evan,
Great to hear things are looking good for you and that the block discard
performance is there. If possible, keep running the kernel from
-proposed for a bit longer, just to make sure nothing comes up on longer
runs.
I spent some time today performing verification on all the kernels in
Hi Thimo,
Thanks for letting me know, and great to hear that things are working as
expected. I'll check in with you in one week's time, to double check things are
still going okay.
I spent some time today performing verification on all the kernels in -proposed,
testing block discard performance
Performing verification for Bionic.
I'm going to do three rounds of verification.
The first is the testcase from this bug, showing block discard
performance.
The second is running through the regression reproducer from bug
1907262.
The third will be results from my testing with my /home
Performing verification for Focal.
I'm going to do three rounds of verification.
The first is the testcase from this bug, showing block discard
performance.
The second is running through the regression reproducer from bug
1907262.
The third will be results from my testing with my /home
Performing verification for Hirsute.
I'm going to do three rounds of verification.
The first is the testcase from this bug, showing block discard
performance.
The second is running through the regression reproducer from bug
1907262.
The third will be results from my testing with my /home
Performing verification for Groovy.
I'm going to do three rounds of verification.
The first is the testcase from this bug, showing block discard
performance.
The second is running through the regression reproducer from bug
1907262.
The third will be results from my testing with my /home
Hi Thimo,
The kernel team have built all of the kernels for this SRU cycle, and
have placed them into -proposed for verification.
We now need to do some thorough testing and make sure that Raid10 arrays
function with good performance, ensure data integrity and make sure we
won't be introducing
Hi Evan,
The kernel team have built all of the kernels for this SRU cycle, and
have placed them into -proposed for verification.
We now need to do some thorough testing and make sure that Raid10 arrays
function with good performance, ensure data integrity and make sure we
won't be introducing
** Tags added: sts
--
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1929129
Title:
if cloud-init status is not done, set_installer_password will crash
To manage notifications about this bug go to:
Just adding a note for Bionic users, that the commit "cifs: Set
CIFS_MOUNT_USE_PREFIX_PATH flag on setting cifs_sb->prepath." landed in
4.15.0-143-generic, causing the regression, and it has been reverted in
4.15.0-144-generic.
If you are experiencing any issues with 4.15.0-143-generic, please
*** This bug is a duplicate of bug 1923670 ***
https://bugs.launchpad.net/bugs/1923670
** This bug has been marked a duplicate of bug 1923670
CIFS DFS entries not accessible with 5.4.0-71.74-generic
--
You received this bug notification because you are a member of Kernel
Packages, which
Hi Chris,
Thanks for reporting! Looking at the difference between
4.15.0-142-generic and 4.15.0-143-generic, there is one commit:
ubuntu-bionic$ git log --grep "cifs"
Ubuntu-4.15.0-142.146..Ubuntu-4.15.0-143.147
commit 7dd995facbb57b35b10715a27e252c8af5a39a6c
Author: Shyam Prasad N
Date:
Hello,
Just a note that mutter 3.36.9-0ubuntu0.20.04.1 has introduced a
regression in a VMware Horizon VDI environment, where gdm fails to
start, and has taken out several hundred VDIs.
I am tracking the issue in bug 1930359.
The plan is to make a test package with the patches from bug 1905825
.
Currently looking into what landed in bug 1919143 and bug 1905825.
[Testcase]
[Where problems can occur]
** Affects: mutter (Ubuntu)
Importance: Undecided
Status: New
** Affects: mutter (Ubuntu Focal)
Importance: High
Assignee: Matthew Ruffell (mruffell)
Status
Hi Evan,
As I mentioned in my previous message, I submitted the patches to the
Ubuntu kernel mailing list for SRU.
These patches have now gotten 2 acks [1][2] from senior kernel team
members, and the patches have now been applied [3] to the 4.15, 5.4, 5.8
and 5.11 kernels.
[1]
Hi Thimo,
As I mentioned in my previous message, I submitted the patches to the
Ubuntu kernel mailing list for SRU.
These patches have now gotten 2 acks [1][2] from senior kernel team
members, and the patches have now been applied [3] to the 4.15, 5.4, 5.8
and 5.11 kernels.
[1]
Hi Evan,
The patches have been submitted for SRU to the Ubuntu kernel mailing
list, for the 4.15, 5.4, 5.8 and 5.11 kernels:
[0] https://lists.ubuntu.com/archives/kernel-team/2021-May/119935.html
[1] https://lists.ubuntu.com/archives/kernel-team/2021-May/119936.html
[2]
Hi Thimo,
Thanks for helping test! I really appreciate it. It is great to hear
that you haven't had any trouble with the test kernel.
Just a quick update on the state of the Raid10 patchset. I submitted
them for SRU for the current cycle, and the kernel team wrote back to me
asking for more
Performing verification for Groovy
I enabled -proposed and installed libpam-modules libpam-modules-bin
libpam-runtime libpam0g version 1.3.1-5ubuntu6.20.10.1
From there, I set the pam_faillock configuration in:
/etc/security/faillock.conf:
deny = 3
unlock_time = 120
and also:
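For reference, `/etc/security/faillock.conf` only sets the policy; pam_faillock also has to be stacked into the PAM auth phase. The lines below are the pattern documented in the pam_faillock(8) man page, not necessarily the exact configuration used in this verification:

```
# /etc/pam.d/common-auth (sketch, per pam_faillock(8))
auth    required                        pam_faillock.so preauth
auth    [success=1 default=bad]         pam_unix.so
auth    [default=die]                   pam_faillock.so authfail
auth    sufficient                      pam_faillock.so authsucc
```

With `deny = 3` and `unlock_time = 120`, three consecutive failures lock the account, and `faillock --user <name> --reset` (or a 120-second wait) clears the lock.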
Performing verification for Hirsute
I enabled -proposed and installed libpam-modules libpam-modules-bin
libpam-runtime libpam0g version 1.3.1-5ubuntu6.21.04.1
From there, I set the pam_faillock configuration in:
/etc/security/faillock.conf:
deny = 3
unlock_time = 120
and also:
Performing verification for Focal
I enabled -proposed and installed libpam-modules libpam-modules-bin
libpam-runtime libpam0g version 1.3.1-5ubuntu4.2
From there, I set the pam_faillock configuration in:
/etc/security/faillock.conf:
deny = 3
unlock_time = 120
and also:
Performing verification for Bionic
I enabled -proposed and installed libpam-modules libpam-modules-bin
libpam-runtime libpam0g version 1.1.8-3.6ubuntu2.18.04.3
From there, I set the pam_faillock configuration in:
/etc/security/faillock.conf:
deny = 3
unlock_time = 120
and also:
Hi Jiatong,
Thanks for emailing me, happy to answer questions anytime.
> 1. why linux-hwe-4.15.0 source code is used?
If you look closely at the oops in the description, the customer I was
working with was running:
4.15.0-106-generic #107~16.04.1-Ubuntu
This is the Xenial (16.04) HWE kernel.
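The reasoning above can be sketched in a few lines: the `~16.04` marker in the build string is what identifies a backported (HWE) build, which is why the linux-hwe-4.15.0 source tree applies rather than Bionic's linux tree.

```python
# Sketch: reading the kernel version banner quoted in the oops.
# HWE/backport builds carry a "~<series>" suffix in the build string.
banner = "4.15.0-106-generic #107~16.04.1-Ubuntu"
release, build = banner.split(" #")   # "4.15.0-106-generic", "107~16.04.1-Ubuntu"
is_hwe = "~" in build                 # backport marker present
# Take the first two dotted components after "~" as the target series
series = ".".join(build.split("~")[1].split(".")[:2]) if is_hwe else None
print(release, is_hwe, series)        # 4.15.0-106-generic True 16.04
```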
Performing verification for Groovy.
I went and generated the ssl certificates and attempted to verify them with
the openssl version 1.1.1f-1ubuntu4.3 from -updates.
ubuntu@deep-mako:~$ sudo apt-cache policy openssl | grep Installed
Installed: 1.1.1f-1ubuntu4.3
ubuntu@deep-mako:~$ mkdir
Performing verification for Focal
Generating the ssl certificates, and reproducing the problem with version
1.1.1f-1ubuntu2.3 from -updates.
ubuntu@select-lobster:~$ sudo apt-cache policy openssl | grep Installed
Installed: 1.1.1f-1ubuntu2.3
ubuntu@select-lobster:~$ mkdir reproducer
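The verification runs above generate certificates and check them with `openssl verify`. A minimal self-contained round trip looks like the following; the subject names and key sizes are illustrative, not the bug's actual reproducer:

```shell
# Create a working directory, a self-signed CA, and a leaf cert signed by it,
# then verify the leaf against the CA. All names here are made up.
mkdir -p reproducer
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=Test CA" \
    -keyout reproducer/ca.key -out reproducer/ca.crt -days 2
openssl req -newkey rsa:2048 -nodes -subj "/CN=leaf" \
    -keyout reproducer/leaf.key -out reproducer/leaf.csr
openssl x509 -req -in reproducer/leaf.csr -CA reproducer/ca.crt \
    -CAkey reproducer/ca.key -CAcreateserial \
    -out reproducer/leaf.crt -days 1
# A fixed openssl prints "reproducer/leaf.crt: OK" here
openssl verify -CAfile reproducer/ca.crt reproducer/leaf.crt
```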
and grub.cfg in correct directories
since grub 2.04 seems to enforce pedantic locations. (LP: #1928040)
Date: Tue, 11 May 2021 19:38:29 +1200
Changed-By: Matthew Ruffell
Maintainer: Colin Watson
Signed-By: Łukasz Zemczak
https://launchpad.net/ubuntu/+source/grub2-signed/1.167~16.04.2
Format: 1.8
Attached is a debdiff for grub2-signed which issues a sed to remove the
GRUB_DISTRIBUTOR=Debian line from /etc/default/grub.d/50-cloudimg-
settings.cfg
** Patch added: "Debdiff for grub2-signed on Xenial"
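The change described above amounts to a sed over the cloud image's grub defaults. Demonstrated here on a local copy of the file rather than the live /etc path (the second printf line stands in for the file's other, kept settings):

```shell
# Build a local stand-in for 50-cloudimg-settings.cfg, then drop the
# stray GRUB_DISTRIBUTOR=Debian line the way the debdiff's sed does.
mkdir -p demo
printf 'GRUB_DISTRIBUTOR=Debian\nGRUB_TIMEOUT=0\n' > demo/50-cloudimg-settings.cfg
sed -i '/^GRUB_DISTRIBUTOR=Debian$/d' demo/50-cloudimg-settings.cfg
cat demo/50-cloudimg-settings.cfg
```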
** Description changed:
[Impact]
GCE cloud instances started with images released prior to 2020-11-11
will fail to reboot when the newest grub2 2.02~beta2-36ubuntu3.32
packages are installed from -updates.
Upon reboot, the instance drops down to a grub prompt, and ceases to
boot
Public bug reported:
[Impact]
GCE cloud instances started with images released prior to 2020-11-11
will fail to reboot when the newest grub2 2.02~beta2-36ubuntu3.32
packages are installed from -updates.
Upon reboot, the instance drops down to a grub prompt, and ceases to
boot any further.
The
I have completed most of my regression testing, and things are still looking
good. The performance of the block discard is there, and I haven't seen any
data corruption.
In particular, I have been testing against the testcase for the regression that
occurred with the previous revision of the
** Description changed:
BugLink: https://bugs.launchpad.net/bugs/1896578
[Impact]
Block discard is very slow on Raid10, which causes common use cases
which invoke block discard, such as mkfs and fstrim operations, to take
a very long time.
For example, on a i3.8xlarge