[Bug 1957024] Re: pam-mkhomedir does not honor private home directories
Attaching the patch from #2 as a debdiff.

** Patch added: "private_home.debdiff"
   https://bugs.launchpad.net/ubuntu/+source/pam/+bug/1957024/+attachment/5812180/+files/private_home.debdiff

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1957024

Title:
  pam-mkhomedir does not honor private home directories

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/pam/+bug/1957024/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1957024] Re: pam-mkhomedir does not honor private home directories
** Changed in: pam (Ubuntu)
       Status: Confirmed => In Progress

** Changed in: pam (Ubuntu)
     Assignee: (unassigned) => Ponnuvel Palaniyappan (pponnuvel)
[Bug 2068743] Re: Ceph PGs get stuck in repair state
Regression potential is high, and the change is also not within the SRU
requirements:
https://canonical-sru-docs.readthedocs-hosted.com/en/latest/explanation/requirements/#minimal-changes-only

Auto repair can still be enabled - one just needs to be aware that there can
occasionally be a false-positive "repair" status, which has no functional
impact. This has been fixed from Pacific onwards. Marking as 'won't fix'.

** Changed in: ceph (Ubuntu)
       Status: New => Won't Fix
[Bug 2068743] Re: Ceph PGs get stuck in repair state
** Description changed:

  Due to bug [0], auto_repair, when enabled, doesn't work correctly and
  the PGs get stuck in repair state even when there were no scrub repairs
  needed/done by Ceph.

  It's been fixed from Pacific onwards. Needs to be backported to Octopus.

+ Upstream patch: https://github.com/ceph/ceph/pull/41258
+
+ But there has been significant refactoring in this part of the code since
+ Pacific, so the above patch can't be cherry-picked. It'd be a matter of
+ copying the idea and applying it to the Octopus code.

  [0] https://tracker.ceph.com/issues/50446

** Description changed:

  Due to bug [0], auto_repair, when enabled, doesn't work correctly and
  the PGs get stuck in repair state even when there were no scrub repairs
  needed/done by Ceph.

  It's been fixed from Pacific onwards. Needs to be backported to Octopus.

  Upstream patch: https://github.com/ceph/ceph/pull/41258

  But there has been significant refactoring in this part of the code since
  Pacific, so the above patch can't be cherry-picked. It'd be a matter of
  copying the idea and applying it to the Octopus code.

+ If done, this would be an Octopus-only SRU, as newer releases already
+ have the fix in.
+
  [0] https://tracker.ceph.com/issues/50446
[Bug 2060666] Re: [SRU] Memory leak in krb5 version 1.17
Verification has been repeated with 1.17-6ubuntu4.7 from focal-proposed
and it has been confirmed to fix the leak (memory usage is stable after
several hours, following the test procedure). Marking verification done.

** Tags removed: verification-needed verification-needed-focal
** Tags added: verification-done verification-done-focal
[Bug 2061825] Re: [SRU] ucf fails to work for local diversions on Jammy
** Changed in: ucf (Ubuntu Jammy)
       Status: In Progress => Won't Fix
[Bug 2060666] Re: [SRU] Memory leak in krb5 version 1.17
1.17-6ubuntu4.6 has superseded the previous version 1.17-6ubuntu4.5 :(

Uploading a new debdiff on top of 1.17-6ubuntu4.6.

** Patch added: "focal-new-patch.diff"
   https://bugs.launchpad.net/ubuntu/+source/krb5/+bug/2060666/+attachment/5803755/+files/focal-new-patch.diff
[Bug 2060666] Re: [SRU] Memory leak in krb5 version 1.17
Thanks, Andreas and Mitchell. All passed now.
[Bug 2060666] Re: [SRU] Memory leak in krb5 version 1.17
1.17-6ubuntu4.5 has been installed from focal-proposed and has been
confirmed to fix the leak (memory usage is stable after several hours,
following the test procedure). Marking verification done.

** Tags removed: verification-needed verification-needed-focal
** Tags added: verification-done verification-done-focal
[Bug 2039955] Re: Opening NFS tab in the dashboard leads to ceph mgr crash - orchestrator._interface.NoOrchestrator: No orchestrator configured
The proposed patch [0] to fix this has been merged in main. I have
created the backport PRs:

Reef:   https://github.com/ceph/ceph/pull/58283
Squid:  https://github.com/ceph/ceph/pull/58285
Quincy: https://github.com/ceph/ceph/pull/58284

[0] https://github.com/ceph/ceph/pull/56876
[Bug 2068743] Re: Ceph PGs get stuck in repair state
** Changed in: ceph (Ubuntu)
     Assignee: (unassigned) => Ponnuvel Palaniyappan (pponnuvel)
[Bug 2068743] [NEW] Ceph PGs get stuck in repair state
Public bug reported:

Due to bug [0], auto_repair, when enabled, doesn't work correctly and
the PGs get stuck in repair state even when there were no scrub repairs
needed/done by Ceph.

It's been fixed from Pacific onwards. Needs to be backported to Octopus.

[0] https://tracker.ceph.com/issues/50446

** Affects: ceph (Ubuntu)
     Importance: Undecided
         Status: New
[Bug 2067722] Re: cephfs-mirror package is missing systemd unit files and manpage
** Also affects: cloud-archive
   Importance: Undecided
       Status: New

** Also affects: cloud-archive/caracal
   Importance: Undecided
       Status: New

** Also affects: cloud-archive/bobcat
   Importance: Undecided
       Status: New

** Also affects: cloud-archive/yoga
   Importance: Undecided
       Status: New
[Bug 2067722] [NEW] cephfs-mirror package is missing systemd unit files and manpage
Public bug reported:

The missing unit file makes the cephfs-mirror essentially broken. This
is due to the upstream bug: https://tracker.ceph.com/issues/59682

We would have to SRU this to Squid, Reef, and Quincy.

** Affects: ceph (Ubuntu)
     Importance: Undecided
         Status: New

** Affects: ceph (Ubuntu Jammy)
     Importance: Undecided
         Status: New

** Affects: ceph (Ubuntu Mantic)
     Importance: Undecided
         Status: New

** Affects: ceph (Ubuntu Noble)
     Importance: Undecided
         Status: New

** Affects: ceph (Ubuntu Oracular)
     Importance: Undecided
         Status: New

** Tags: seg

** Tags added: seg

** Also affects: ceph (Ubuntu Noble)
   Importance: Undecided
       Status: New

** Also affects: ceph (Ubuntu Jammy)
   Importance: Undecided
       Status: New

** Also affects: ceph (Ubuntu Oracular)
   Importance: Undecided
       Status: New

** Also affects: ceph (Ubuntu Mantic)
   Importance: Undecided
       Status: New
[Bug 2028358] Re: [SRU] missing cephfs-mirror package
This has been fixed in
https://bugs.launchpad.net/ubuntu/+source/ceph/+bug/2003704

** Changed in: ceph (Ubuntu)
       Status: New => Invalid
[Bug 2061825] Re: [SRU] ucf fails to work for local diversions on Jammy
@Robie

1. The man page does allude to that:
```
-P foo, --package foo
    Don't follow dpkg-divert diversions by package foo when updating
    configuration files.
```
which implies ucf should (and does) respect/handle diversions.

2. Further, "ucf should respect dpkg-divert" (refer
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=43) indicates it is a
feature that has been supported since 3.0040 and has been accepted by
the upstream maintainer.

3. Likewise, the most recent change by the upstream maintainer also
addresses a similar issue:
```
ucf (3.0043) unstable; urgency=high

  * The argument to dpkg-divert needs to be the actual file name, not the
    fully escaped regexp safe one.
  * Bug fix: "dpkg-divert error when upgrading grub with diverted config",
    thanks to deb...@project-mindfuck.org.uk; (Closes: #962818).

 -- Manoj Srivastava  Mon, 15 Jun 2020 22:37:53 -0700
```

So I believe ucf is expected to handle diversions even if this specific
patch wasn't acked by the upstream maintainer.

> If we do decide to go ahead, then there are a few things that need
> fixing, please.

Updated the bug description as well as the changelog (re-uploaded).

FWIW, we do have NMU'd ucf versions in several Ubuntu releases in general:
```
ucf | 3.0027+nmu1 | trusty   | source, all
ucf | 3.0036      | xenial   | source, all
ucf | 3.0038      | bionic   | source, all
ucf | 3.0038+nmu1 | focal    | source, all
ucf | 3.0043      | jammy    | source, all
ucf | 3.0043+nmu1 | mantic   | source, all
ucf | 3.0043+nmu1 | noble    | source, all
ucf | 3.0043+nmu1 | oracular | source, all
```
And specifically, the NMU versions on mantic, noble, and oracular have
_this patch_ already (which I am backporting here to Jammy). So I think
it's reasonable to assume the backport is safe.

Please let me know if there are further concerns.

** Description changed:

  [ Impact ]

  When a dpkg-diversion is used to setup a package diversion and ucf for
  managing the configuration files for the chrony package, the postinst
  script of ucf fails when installing chrony.

  This issue isn't specific to chrony but can happen for any package
  whose config files are managed by ucf.

  This affects users on Jammy who use ucf. Newer versions of ucf have
  this bug fixed already.

+ "ucf should respect dpkg-divert" (refer
+ https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=43) indicates it is a
+ feature that has been supported since 3.0040 and has been accepted by
+ the upstream maintainer.
+
+ [ Test Plan ]
+
+ Common case.
+ A1. Create a Jammy container or VM
+ A2. Install chrony: apt install chrony -y
+ A3. Confirm ucf works with no failures (including syntax errors)
+ B1. Modify the configuration: /etc/chrony/chrony.conf
+ B2. Remove the chrony package and re-install
+ B3. Confirm it still works.
+
+ B. Case when a diversion is in place.
  1. Create a Jammy container or VM
  2. Setup a diversion for chrony.conf: dpkg-divert --package chrony --add --rename --divert /etc/chrony/chrony.conf.custom /etc/chrony/chrony.conf
  3. Install chrony: apt install chrony -y
  4. Notice the postinst script fail with syntax errors such as:
  ```
  Preparing to unpack .../chrony_4.2-2ubuntu2_amd64.deb ...
  Unpacking chrony (4.2-2ubuntu2) ...
  Setting up chrony (4.2-2ubuntu2) ...
  /usr/bin/ucf: 444: [: missing ]
  grep: ]: No such file or directory
  /usr/bin/ucf: 444: [: missing ]
  grep: ]: No such file or directory
  ```
  5. Install the package with the fix from the PPA: https://launchpad.net/~pponnuvel/+archive/ubuntu/ucf-jammy (to be replaced with the package from the -proposed pocket)
  6. Repeat the same from steps 1 to 4 and notice no failures at step 4.

  [ Where problems could occur ]

  The patch could introduce similar bugs if it contains similar syntax
  errors. Consequently, local diversions may not take effect for packages
  using ucf to manage configuration files.

  [ Other Info ]
-
+ Upstream bug: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=979354

  It's been fixed in version ucf/3.0043+nmu1. Lunar/Mantic/Noble all have
  the ucf version with this patch.

  Affects Jammy only and thus backported to only Jammy.

** Bug watch added: Debian Bug tracker #43
   https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=43

** Patch added: "lp2061825.debdiff"
   https://bugs.launchpad.net/ubuntu/jammy/+source/ucf/+bug/2061825/+attachment/5779403/+files/lp2061825.debdiff

** Attachment removed: "debdiff_2061825_new.txt"
   https://bugs.launchpad.net/ubuntu/jammy/+source/ucf/+bug/2061825/+attachment/5770060/+files/debdiff_2061825_new.txt

** Attachment removed: "debdiff.txt"
   https://bugs.launchpad.net/ubuntu/jammy/+source/ucf/+bug/2061825/+attachment/5767388/+files/debdiff.txt

** Changed in: ucf (Ubuntu Jammy)
       Status: Incomplete => In Progress
[Bug 2065854] [NEW] Provide a table for Ceph support timelines
Public bug reported:

Currently the "supported Ceph versions" info is available in a few
places: [0], [1], and [2]. But none of them provides complete
information. Even then, it's not straightforward to work out the EOL
dates for a given Ceph version.

Something similar to OpenStack's [2], which clearly states the timelines
for Ceph packages from both Ubuntu and UCA, would help.

[0] https://ubuntu.com/ceph/docs/supported-ceph-versions
[1] https://wiki.ubuntu.com/OpenStack/CloudArchive
[2] https://ubuntu.com/openstack/docs/supported-versions

** Affects: ceph (Ubuntu)
     Importance: Undecided
         Status: New

** Tags: documentation

** Tags added: documentation
[Bug 2060666] Re: [SRU] Memory leak in krb5 version 1.17
Attaching the debdiff for Focal.

** Attachment removed: "krb5-focal-debdiff.txt"
   https://bugs.launchpad.net/ubuntu/+source/krb5/+bug/2060666/+attachment/5777986/+files/krb5-focal-debdiff.txt

** Attachment added: "krb5-focal-debdiff.txt"
   https://bugs.launchpad.net/ubuntu/+source/krb5/+bug/2060666/+attachment/5778293/+files/krb5-focal-debdiff.txt
[Bug 2060666] Re: [SRU] Memory leak in krb5 version 1.17
Attaching the debdiff for Focal.

** Attachment added: "krb5-focal-debdiff.txt"
   https://bugs.launchpad.net/ubuntu/+source/krb5/+bug/2060666/+attachment/5777986/+files/krb5-focal-debdiff.txt
[Bug 2060666] Re: Memory leak in krb5 version 1.17
** Description changed:

- Commit 1cd2821 altered the memory
- management of krb5_gss_inquire_cred(), introducing defcred to act as
+ [ Impact ]
+
+ Commit https://github.com/krb5/krb5/commit/1cd2821c19b2b95e39d5fc2f451a035585a40fa5
+ altered the memory management of krb5_gss_inquire_cred(), introducing defcred to act as
  an owner pointer when the function must acquire a default credential.
  The commit neglected to update the code to release the default cred
  along the successful path. The old code does not trigger because
  cred_handle is now reassigned, so the default credential is leaked.

- The commit https://github.com/krb5/krb5/commit/098f874f3b50dd2c46c0a574677324b5f6f3a1a8 fixes the leak.
- It's been part of newer krb5 releases (Jammy, and Noble have the releases with the fix). Bionic doesn't have the commit that introduced the memory leak.
+ This results in a gradual increase in memory usage (a memory leak) and
+ an eventual crash.

- So this fix needs to be backported to Focal (only).
+ [ Test Plan ]
+
+ Set up 3 VMs:
+
+ 1. A Windows Server acting as Domain Controller (AD)
+ 2. A Windows machine, AD-joined, with OStress installed. (OStress is part of the RML utilities: https://learn.microsoft.com/en-us/troubleshoot/sql/tools/replay-markup-language-utility)
+ 3. SQL Server on Linux, AD-joined (configuration steps: https://learn.microsoft.com/en-us/sql/linux/sql-server-linux-ad-auth-adutil-tutorial?view=sql-server-ver16)
+
+ On the machine with OStress, create a file (name it disconnect.ini) with
+ the following content under the same folder "C:\Program Files\Microsoft
+ Corporation\RMLUtils" where OStress is installed.
+
+ disconnect.ini
+ ==
+
+ [Connection Options]
+ LoginTimeout=30
+ QuotedIdentifier=Off
+ AutocommitMode=On
+ DisconnectPct=100.0
+ MaxThreadErrors=0
+
+ [Query Options]
+ NoSQLBindCol=Off
+ NoResultDisplay=Off
+ PrepareExecute=Off
+ ExecuteAsync=Off
+ RollbackOnCancel=Off
+ QueryTimeout=0
+ QueryDelay=0
+ MaxRetries=0
+ BatchDisconnectPct=0.0
+ CancelPct=0.00
+ CancelDelay=0
+ CancelDelayMin=0
+ CursorType=
+ CursorConcurrency=
+ RowFetchDelay=0
+
+ [Replay Options]
+ Sequencing Options=global sequence
+ ::Sequencing Options=global sequence, dtc replay
+ DTC Timeout=
+ DTC Machine=(local)
+ Playback Coordinator=(local)
+ StartSeqNum=
+ StopSeqNum=
+ TimeoutFactor=1.0
+
+ Run the following command to start the load using OStress, changing the
+ server name (-S) accordingly and the number of threads (-n) as needed.
+
+ Start 4 different CMD consoles and use the following different commands, one per CMD window:
+ 1. ostress.exe -E -S -Q"select * from sys.all_objects" -q -cdisconnect.ini -n40 -r999 -oc:\temp\log01 -T146
+ 2. ostress.exe -E -S -Q"select * from sys.all_views" -q -cdisconnect.ini -n40 -r999 -oc:\temp\log02 -T146
+ 3. ostress.exe -E -S -Q"select * from sys.all_columns" -q -cdisconnect.ini -n40 -r999 -oc:\temp\log03 -T146
+ 4. ostress.exe -E -S -Q"select * from sys.all_parameters" -q -cdisconnect.ini -n40 -r999 -oc:\temp\log04 -T146
+
+ After a run of about 5 hours, the memory usage is expected to be around
+ 5G with the fix. Without the fix, it was observed to reach around ~22G
+ in 5 hours. Hence the increase in memory usage can be observed if the
+ ostress.exe programs are left to run longer.
+
+ [ Where problems could occur ]
+
+ The fix may not fix the memory leak, or could result in releasing the
+ memory early in a different code path, thus resulting in crashes.
+
+ A mitigating fact is that the fix has been in Ubuntu since at least
+ 22.04, and those releases do not exhibit any issues.
+
+ Likewise, I've previously provided the fix in a PPA
+ (https://launchpad.net/~pponnuvel/+archive/ubuntu/krb5-focal) to a user
+ who had been hit by this issue. They've tested and confirmed it fixes
+ the memory leak.
+
+ [ Other Info ]
+
+ The commit
+ https://github.com/krb5/krb5/commit/098f874f3b50dd2c46c0a574677324b5f6f3a1a8
+ fixes the leak.
+
+ The fix has been included in newer krb5 releases (Jammy and Noble have
+ the releases with the fix).
+
+ Bionic doesn't have the commit that introduced the memory leak in the
+ first place. So this will be a Focal-only backport.

** Summary changed:

- Memory leak in krb5 version 1.17
+ [SRU] Memory leak in krb5 version 1.17
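The verification comments rely on "memory usage is stable after several hours". A minimal, generic sketch of that check on the Linux side is to sample VmRSS from /proc at intervals; the pid here is this shell itself ($$) purely as a placeholder, and the sample count and interval are invented small values (a real run would use something like 300 samples at 60-second intervals against the krb5-consuming daemon):

```shell
#!/bin/sh
# Sample the resident set size (VmRSS) of a process at intervals and
# print one reading per line. Stable readings over a long run suggest
# no leak; steadily growing readings suggest one.
pid=$$        # placeholder: substitute the pid of the process under test
samples=3     # placeholder: e.g. 300 for a 5-hour run
interval=1    # placeholder: seconds between samples, e.g. 60

i=0
while [ "$i" -lt "$samples" ]; do
    # /proc/<pid>/status reports VmRSS in kB on Linux
    grep VmRSS "/proc/$pid/status"
    i=$((i + 1))
    sleep "$interval"
done
```

The readings can be redirected to a file and compared between the first and last hour of the run.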
[Bug 2009885] Re: Timeshift 21.09.1-1 broken after Rsync upgrade to 3.2.7-0ubuntu0.22.04.2
@SRU team, is this good to move to jammy from jammy-proposed?
[Bug 2057713] Re: [SRU] ceph 16.2.15
Can the packages be moved to the wallaby/focal-updates pocket now?
[Bug 2009885] Re: Timeshift 21.09.1-1 broken after Rsync upgrade to 3.2.7-0ubuntu0.22.04.2
Set the locale variables to de_DE.UTF-8. Then testing timeshift:

ubuntu@bronzor:~$ dpkg -l | grep timeshift
ii  timeshift  21.09.1-1  amd64  System restore utility
ubuntu@bronzor:~$ locale
LANG=en_US.UTF-8
LANGUAGE=en_US
LC_CTYPE="en_US.UTF-8"
LC_NUMERIC=de_DE.UTF-8
LC_TIME=de_DE.UTF-8
LC_COLLATE="en_US.UTF-8"
LC_MONETARY=de_DE.UTF-8
LC_MESSAGES="en_US.UTF-8"
LC_PAPER=de_DE.UTF-8
LC_NAME=de_DE.UTF-8
LC_ADDRESS=de_DE.UTF-8
LC_TELEPHONE=de_DE.UTF-8
LC_MEASUREMENT=de_DE.UTF-8
LC_IDENTIFICATION=de_DE.UTF-8
LC_ALL=
ubuntu@bronzor:~$ sudo timeshift --create
Mounted '/dev/sda1' at '/run/timeshift/backup'
--
Creating new snapshot...(RSYNC)
Saving to device: /dev/sda1, mounted at path: /run/timeshift/backup
Linking from snapshot: 2024-04-26_12-40-26
Synching files with rsync...
E: rsync returned an error
E: Failed to create new snapshot
Failed to create snapshot
--
Removing snapshots (incomplete):
--
Removing '2024-04-26_13-01-21'...
73.52% complete (00:00:00 remaining)
(process:1524): GLib-GIO-CRITICAL **: 13:01:33.226: g_output_stream_clear_pending: assertion 'G_IS_OUTPUT_STREAM (stream)' failed
(process:1524): GLib-GIO-CRITICAL **: 13:01:33.226: g_output_stream_clear_pending: assertion 'G_IS_OUTPUT_STREAM (stream)' failed
Removed '2024-04-26_13-01-21'
--

After installing timeshift from jammy-proposed, the same command works:

ubuntu@bronzor:~$ dpkg -l | grep timeshift
ii  timeshift  21.09.1-1ubuntu1  amd64  System restore utility
ubuntu@bronzor:~$ locale
LANG=en_US.UTF-8
LANGUAGE=en_US
LC_CTYPE="en_US.UTF-8"
LC_NUMERIC=de_DE.UTF-8
LC_TIME=de_DE.UTF-8
LC_COLLATE="en_US.UTF-8"
LC_MONETARY=de_DE.UTF-8
LC_MESSAGES="en_US.UTF-8"
LC_PAPER=de_DE.UTF-8
LC_NAME=de_DE.UTF-8
LC_ADDRESS=de_DE.UTF-8
LC_TELEPHONE=de_DE.UTF-8
LC_MEASUREMENT=de_DE.UTF-8
LC_IDENTIFICATION=de_DE.UTF-8
LC_ALL=
ubuntu@bronzor:~$ sudo timeshift --create
/dev/sda1 is mounted at: /run/timeshift/backup, options: rw,relatime,stripe=64
--
Creating new snapshot...(RSYNC)
Saving to device: /dev/sda1, mounted at path: /run/timeshift/backup
Linking from snapshot: 2024-04-26_12-40-26
Synching files with rsync...
Created control file: /run/timeshift/backup/timeshift/snapshots/2024-04-26_14-50-25/info.json
RSYNC Snapshot saved successfully (4s)
Tagged snapshot '2024-04-26_14-50-25': ondemand
--
ubuntu@bronzor:~$

** Tags removed: verification-needed verification-needed-jammy
** Tags added: verification-done verification-done-jammy
[Bug 2061825] Re: [SRU] ucf fails to work for local diversions on Jammy
Thanks, Heitor. I'll remember to run `update-maintainer` going forward!

Re. Focal: The syntax error and relevant code were introduced in 3.0040,
whereas Focal uses an older ucf. Thus Focal is unaffected. Likewise,
Lunar/Mantic/Noble have the fixed version already. So this is a
Jammy-only backport.
[Bug 2061825] Re: [SRU] ucf fails to work for local diversions on Jammy
Thanks, Dariusz! I've attached a new debdiff.

** Attachment added: "debdiff_2061825_new.txt"
   https://bugs.launchpad.net/ubuntu/+source/ucf/+bug/2061825/+attachment/5770060/+files/debdiff_2061825_new.txt
[Bug 2061825] Re: [SRU] ucf fails to work for local diversions on Jammy
Attaching the debdiff.

** Attachment added: "debdiff.txt"
   https://bugs.launchpad.net/ubuntu/+source/ucf/+bug/2061825/+attachment/5767388/+files/debdiff.txt
[Bug 2061825] Re: [SRU] ucf fails to work for local diversions on Jammy
** Tags added: sts ** Description changed: - ucf doesn't work correctly when local diversions in place. + [ Impact ] - This is due to a syntax error and has been fixed in Debian upstream: + When a dpkg-diversion is used to setup a package diversion and ucf for managing + the configuration files for chrony package, the postinst script of ucf fails + when installing chrony. - https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=979354 + This issue isn't specific to chrony but can happen for any package whose + config files are managed by ucf. - Mantic and Noble have the fixed version already. This bug doesn't exist - Focal. + This affects users on Jammy who use ucf. Newer versions of ucf have this bug + fixed already. + [ Test Plan ] - This will be a Jammy-only backport. + 1. Create a Jammy container or VM + 2. Setup a diversion for chrony.conf: dpkg-divert --package chrony --add --rename --divert /etc/chrony/chrony.conf.custom /etc/chrony/chrony.conf + 3. Install chrony: apt install chrony -y + 4. Notice the postinst script fail with syntax errors such as: + ``` + Preparing to unpack .../chrony_4.2-2ubuntu2_amd64.deb ... + Unpacking chrony (4.2-2ubuntu2) ... + Setting up chrony (4.2-2ubuntu2) ... + /usr/bin/ucf: 444: [: missing ] + grep: ]: No such file or directory + /usr/bin/ucf: 444: [: missing ] + grep: ]: No such file or directory + ``` + 5. Install the package with the fix from the PPA: https://launchpad.net/~pponnuvel/+archive/ubuntu/ucf-jammy (to be replaced with the package from the -proposed pocket) + 6. Repeat the same from steps 1 to 4 and notice no failures at step4. + + [ Where problems could occur ] + + Can further introduce similar bugs if the patch contains similar syntax + errors. Consequently local diversion may not take effect for packages + using ucf to manage configuration files. + + [ Other Info ] + + Upstream bug: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=979354 + + It's been fixed in version ucf/3.0043+nmu1. 
Lunar/Mantic/Noble all have the + ucf version with this patch. + + Affects Jammy only and thus the fix is backported only to Jammy.
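The `[: missing ]` errors in the test plan above come from a POSIX sh pitfall that is easy to reproduce outside ucf. A minimal sketch (hypothetical code, not ucf's actual line 444): `[` is an ordinary command whose final argument must be a lone `]`, so gluing `]` to an operand breaks the test, and the stray `]` can then leak into a following command such as grep.

```shell
#!/bin/sh
# Broken: "]" is glued to the operand, so "[" never receives its closing
# bracket and the shell reports: [: missing ]
sh -c '[ -e /etc] ' 2>&1 | grep -c 'missing'    # prints: 1

# Fixed: the final "]" must be a standalone argument to "[".
if [ -e /etc ]; then
    echo "test syntax ok"
fi
```

This is the bug class the Debian fix addresses: a missing space before `]` in one of ucf's test expressions.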
[Bug 2061825] Re: [SRU] ucf fails to work for local diversions on Jammy
** Summary changed: - ucf fails to work for local diversions on Jammy + [SRU] ucf fails to work for local diversions on Jammy
[Bug 2061825] [NEW] ucf fails to work for local diversions on Jammy
Public bug reported: ucf doesn't work correctly when local diversions are in place. This is due to a syntax error and has been fixed in Debian upstream: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=979354 Mantic and Noble have the fixed version already. This bug doesn't exist in Focal. This will be a Jammy-only backport. ** Affects: ucf (Ubuntu) Importance: Undecided Status: Fix Released ** Affects: ucf (Ubuntu Jammy) Importance: High Assignee: Ponnuvel Palaniyappan (pponnuvel) Status: In Progress ** Also affects: ucf (Ubuntu Jammy) Importance: Undecided Status: New ** Changed in: ucf (Ubuntu) Status: New => Fix Released ** Changed in: ucf (Ubuntu Jammy) Status: New => In Progress ** Changed in: ucf (Ubuntu Jammy) Assignee: (unassigned) => Ponnuvel Palaniyappan (pponnuvel) ** Changed in: ucf (Ubuntu Jammy) Importance: Undecided => High
[Bug 2060666] Re: Memory leak in krb5 version 1.17
** Changed in: krb5 (Ubuntu Focal) Status: New => In Progress
[Bug 2060666] Re: Memory leak in krb5 version 1.17
** Changed in: krb5 (Ubuntu Focal) Assignee: (unassigned) => Ponnuvel Palaniyappan (pponnuvel)
[Bug 2060666] [NEW] Memory leak in krb5 version 1.17
Public bug reported: Commit 1cd2821 altered the memory management of krb5_gss_inquire_cred(), introducing defcred to act as an owner pointer when the function must acquire a default credential. The commit neglected to update the code to release the default cred along the successful path; the old release code does not trigger because cred_handle is now reassigned, so the default credential is leaked. The commit https://github.com/krb5/krb5/commit/098f874f3b50dd2c46c0a574677324b5f6f3a1a8 fixes the leak. It's been part of newer krb5 releases (Jammy and Noble have the releases with the fix). Bionic doesn't have the commit that introduced the memory leak. So this fix needs to be backported to Focal (only). ** Affects: krb5 (Ubuntu) Importance: Undecided Status: New ** Affects: krb5 (Ubuntu Focal) Importance: Undecided Assignee: Ponnuvel Palaniyappan (pponnuvel) Status: New ** Tags: sts ** Tags added: sts ** Also affects: krb5 (Ubuntu Focal) Importance: Undecided Status: New
[Bug 2009885] Re: Timeshift 21.09.1-1 broken after Rsync upgrade to 3.2.7-0ubuntu0.22.04.2
** Changed in: timeshift (Ubuntu Jammy) Assignee: (unassigned) => Ponnuvel Palaniyappan (pponnuvel)
[Bug 2009885] Re: Timeshift 21.09.1-1 broken after Rsync upgrade to 3.2.7-0ubuntu0.22.04.2
** Description changed: [ Impact ] Timeshift is broken after upgrade to 21.09.1-1. This is because of a change in behaviour by rsync; rsync 3.2.4 changed how the locale is worked out. From https://download.samba.org/pub/rsync/NEWS#3.2.4 "A long-standing bug was preventing rsync from figuring out the current locale's decimal point character, which made rsync always output numbers using the "C" locale. Since this is now fixed in 3.2.4, a script that parses rsync's decimal numbers (e.g. from the verbose footer) may want to setup the environment in a way that the output continues to be in the C locale. For instance, one of the following should work fine: - export LC_ALL=C.UTF-8 + export LC_ALL=C.UTF-8 " This broke timeshift and the workaround mentioned in rsync release notes needed to be applied to fix the locale. - * justification for backporting the fix to the stable release. + * justification for backporting the fix to the stable release. While the behaviour change is external to timeshift (it's in rsync), for anyone using newer rsync it broke timeshift completely and worse, users aren't even aware that their existing snapshots and backups are no longer working. The said workaround has been applied in upstream timeshift https://github.com/teejee2008/timeshift/pull/904 But we haven't got this fix in Jammy. [ Test Plan ] It's readily reproducible on Jammy: 1. Install Ubuntu 22.04 LTS 2. Install the latest Timeshift - 3. (a) Launch Timeshift, start rsync-backup, log message said that rsync failed to create backup. -(b) Can use CLI too with `sudo timeshift --create` and see it fail. - 4. Then use the timeshift package from PPA https://launchpad.net/~pponnuvel/+archive/ubuntu/jammy-timeshift -which contains the fix and `sudo timeshift --create` will succeed. + 3. Change locale to any language with a (,) decimal separator (e.g. German) + 4. (a) Launch Timeshift, start rsync-backup, log message said that rsync failed to create backup.
+ (b) Can use CLI too with `sudo timeshift --create` and see it fail. + 5. Then use the timeshift package from PPA https://launchpad.net/~pponnuvel/+archive/ubuntu/jammy-timeshift + which contains the fix and `sudo timeshift --create` will succeed. [ Where problems could occur ] This changes the locale to "C.UTF-8". If anyone relies on the existing broken behaviour, that won't work anymore. Similarly, if rsync changes behaviour again based on locale, timeshift might start failing again. + [ Other Info ] - [ Other Info ] - I've looked into Focal, Jammy, Mantic, and Noble for this issue. Focal is using older rsync (before the locale change [0]), so it's unaffected. Both Mantic and Noble have the upstream fix [1] incorporated (fixed through new releases). So they don't have this issue either. Thus this is a Jammy-only backport of the fix [1]. [0] https://download.samba.org/pub/rsync/NEWS#3.2.4 [1] https://github.com/teejee2008/timeshift/pull/904
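The failure mode described above can be sketched without timeshift itself (a hedged illustration, using awk as a stand-in for timeshift's real parser): a tool reading numbers with C-locale rules stops at a comma, so rsync's localized decimals get silently truncated. Pinning LC_ALL, as the rsync release notes suggest, keeps the output parseable.

```shell
#!/bin/sh
# awk (running in the C locale) converts "3,14" to a number only up to
# the comma, losing the fractional part entirely.
echo '3,14' | awk '{ print $1 + 0 }'   # prints: 3
echo '3.14' | awk '{ print $1 + 0 }'   # prints: 3.14

# Workaround from the rsync NEWS entry: force the C locale before
# invoking rsync so its numeric output keeps using "." decimals.
export LC_ALL=C.UTF-8
```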
[Bug 2009885] Re: Timeshift 21.09.1-1 broken after Rsync upgrade to 3.2.7-0ubuntu0.22.04.2
** Description changed: - Timeshift version 21.09.1-1 broken after rsync upgrade to - 3.2.7-0ubuntu0.22.04.2. I had to upgrade Timeshift to version 22.11.2 - from Mint PPA to have it working again. Log messages tell rsync failed - to create backup file. The behavior to note is that in the Timeshift - GUI, the list of backups (snapshots) are not updated (refreshed). - Perhaps updating the Timeshift package in Jammy repository fix it ? + [ Impact ] + + Timeshift is broken after upgrade to 21.09.1-1. This is because of + a change in behaviour by rsync; rsync 3.2.4 changed how the locale is + worked out. From https://download.samba.org/pub/rsync/NEWS#3.2.4 + + "A long-standing bug was preventing rsync from figuring out the current + locale's decimal point character, which made rsync always output numbers + using the "C" locale. Since this is now fixed in 3.2.4, a script that + parses rsync's decimal numbers (e.g. from the verbose footer) may want + to setup the environment in a way that the output continues to be in the + C locale. For instance, one of the following should work fine: + + export LC_ALL=C.UTF-8 + " + + This broke timeshift and the workaround mentioned in rsync release + notes needed to be applied to fix the locale. + + * justification for backporting the fix to the stable release. + + While the behaviour change is external to timeshift (it's in rsync), + for anyone using newer rsync it broke timeshift completely and worse, + users aren't even aware that their existing snapshots and backups are no longer + working. + + The said workaround has been applied in upstream timeshift + https://github.com/teejee2008/timeshift/pull/904 + + But we haven't got this fix in Jammy. + + [ Test Plan ] + + It's readily reproducible on Jammy: + 1. Install Ubuntu 22.04 LTS + 2. Install the latest Timeshift + 3. (a) Launch Timeshift, start rsync-backup, log message said that rsync failed to create backup. +(b) Can use CLI too with `sudo timeshift --create` and see it fail.
+ 4. Then use the timeshift package from PPA https://launchpad.net/~pponnuvel/+archive/ubuntu/jammy-timeshift +which contains the fix and `sudo timeshift --create` will succeed. + + [ Where problems could occur ] + + This changes the locale to "C.UTF-8". If anyone relies on the existing broken behaviour, + that won't work anymore. Similarly, if rsync changes behaviour again based + on locale, timeshift might start failing again. + + + [ Other Info ] + + I've looked into Focal, Jammy, Mantic, and Noble for this issue. + + Focal is using older rsync (before the locale change [0]), so it's unaffected. + Both Mantic and Noble have the upstream fix [1] incorporated (fixed through new releases). + So they don't have this issue either. + + Thus this is a Jammy-only backport of the fix [1]. + + [0] https://download.samba.org/pub/rsync/NEWS#3.2.4 + [1] https://github.com/teejee2008/timeshift/pull/904 ** Patch added: "timeshift_jammy_diff.patch" https://bugs.launchpad.net/ubuntu/+source/timeshift/+bug/2009885/+attachment/5761248/+files/timeshift_jammy_diff.patch
[Bug 2009885] Re: Timeshift 21.09.1-1 broken after Rsync upgrade to 3.2.7-0ubuntu0.22.04.2
Mantic and Noble have newer packages of timeshift which contain the upstream fix [0]. Focal is at 3.1.3-8ubuntu0.7 which predates the rsync change (3.2.4). So this issue affects just Jammy. I've created a PPA with this fix for Jammy [1] to test. I'll do an SRU for Jammy after confirmation. [0] https://github.com/teejee2008/timeshift/pull/904 [1] https://launchpad.net/~pponnuvel/+archive/ubuntu/jammy-timeshift ** Also affects: timeshift (Ubuntu Jammy) Importance: Undecided Status: New ** Changed in: timeshift (Ubuntu Jammy) Status: New => Confirmed
[Bug 2056572] Re: [SRU] Ceph 16.2.15 point release
*** This bug is a duplicate of bug 2057713 *** https://bugs.launchpad.net/bugs/2057713 Marking as duplicate of https://bugs.launchpad.net/bugs/2057713 ** This bug has been marked a duplicate of bug 2057713 [SRU] ceph 16.2.15
[Bug 2056572] [NEW] [SRU] Ceph 16.2.15 point release
Public bug reported: Ceph 16.2.15 has been released. https://ceph.io/en/news/blog/2024/v16-2-15-pacific-released/ This is also the last point release in Pacific and contains several bug fixes. It also contains the fix for CVE-2023-43040. ** Affects: ceph (Ubuntu) Importance: Undecided Status: New ** Tags: sts ** Tags added: sts
[Bug 1975906] Re: [SRU] ceph 16.2.9
** Tags added: sts
[Bug 1975906] [NEW] [SRU] Ceph 16.2.8
Public bug reported: A Pacific point release is now available: https://docs.ceph.com/en/latest/releases/pacific/#v16-2-8-pacific ** Affects: ceph (Ubuntu) Importance: Undecided Status: New
[Bug 1970460] Re: [SRU] Avoid premature onode release
** Description changed: - The upstream bug is https://tracker.ceph.com/issues/53002 + [Impact] - It's been backported to relevant releases upstream (Octopus, Pacific, and Quincy). - Octopus 15.2.16 has the fix. So does Quincy 17.2.0. However, the latest Pacific release missed out on this fix. So needed to be SRU'ed for Pacific only. + OSDs crash randomly due to a race condition that can occur + at times. + + This was observed when an onode's removal is followed by reading + and the latter causes object release before the removal is finalized. + The root cause is an improper 'pinned' state assessment in Onode::get(). - Master tracker: https://tracker.ceph.com/issues/53002 + [Test Plan] + + Deploy a ceph cluster and write some data to the cluster. + While performing some reads again from the cluster, no crashes + are seen in any OSDs. The race condition can be mimicked + by holding one thread (under a debugger) while the other one + continues to update the 'nput' counter. + + [Where problems could occur] + + Despite the new atomic counter, it might not cover all cases + and could still introduce further data races, and/or crashes could continue + to happen. + + [Other Info] + + The upstream bug is https://tracker.ceph.com/issues/53002 + + It's been backported to relevant releases upstream (Octopus, Pacific, and + Quincy). Octopus 15.2.16 has the fix. So does Quincy 17.2.0. However, + the latest Pacific release missed out on this fix. So SRU is needed for + Pacific (only). Pacific tracker: https://tracker.ceph.com/issues/53608 Pacific PR: https://github.com/ceph/ceph/pull/44723 ** Description changed: [Impact] - OSDs crash randomly due to a race condition that can occur - at times. - - This was observed when an onode's removal is followed by reading - and the latter causes object release before the removal is finalized. - The root cause is an improper 'pinned' state assessment in Onode::get(). + OSDs crash randomly due to a race condition that can occur + at times.
+ + This was observed when an onode's removal is followed by reading + and the latter causes object release before the removal is finalized. + The root cause is an improper 'pinned' state assessment in Onode::get(). [Test Plan] - Deploy a ceph cluster and write some data to the cluster. - While performing some reads again from the cluster, no crashes - are seen in any OSDs. The race condition can be mimicked - by holding one thread (under a debugger) while the other one - continues to update the 'nput' counter. + Deploy a ceph cluster and write some data to the cluster. + While performing some reads again from the cluster, no crashes + are seen in any OSDs. The race condition can be mimicked + by holding one thread (under a debugger) while the other one + continues to update the 'nput' counter. [Where problems could occur] - Despite the new atomic counter, it might not cover all cases - and could still introduce further data races, and/or crashes could continue - to happen. - + Despite the new atomic counter, it might not cover all cases - and could still introduce further data races, and/or crashes could continue + to happen. + [Other Info] - - The upstream bug is https://tracker.ceph.com/issues/53002 It's been backported to relevant releases upstream (Octopus, Pacific, and Quincy). Octopus 15.2.16 has the fix. So does Quincy 17.2.0. However, the latest Pacific release missed out on this fix. So SRU is needed for Pacific (only). + Master tracker: https://tracker.ceph.com/issues/53002 + Pacific tracker: https://tracker.ceph.com/issues/53608 Pacific PR: https://github.com/ceph/ceph/pull/44723
[Bug 1970460] Re: Avoid premature onode release
** Description changed: The upstream bug is https://tracker.ceph.com/issues/53002 It's been backported to relevant releases upstream (Octopus, Pacific, and Quincy). Octopus 15.2.16 has the fix. So does Quincy 17.2.0. However, the latest Pacific release missed out this fix. So needed to be SRU'ed for Pacific only. + Master tracker: https://tracker.ceph.com/issues/53002 - Master tracker: https://tracker.ceph.com/issues/53608 - - Pacific tracker: https://tracker.ceph.com/issues/53002 + Pacific tracker: https://tracker.ceph.com/issues/53608 Pacific PR: https://github.com/ceph/ceph/pull/44723 ** Summary changed: - Avoid premature onode release + [SRU] Avoid premature onode release
[Bug 1970460] Re: Avoid premature onode release
Attaching debdiff for Pacific (hirsute). ** Attachment added: "debdiff.txt" https://bugs.launchpad.net/ubuntu/+source/ceph/+bug/1970460/+attachment/5584420/+files/debdiff.txt
[Bug 1970460] [NEW] Avoid premature onode release
Public bug reported: The upstream bug is https://tracker.ceph.com/issues/53002 It's been backported to relevant releases upstream (Octopus, Pacific, and Quincy). Octopus 15.2.16 has the fix. So does Quincy 17.2.0. However, the latest Pacific release missed out on this fix. So it needs to be SRU'ed for Pacific only. Master tracker: https://tracker.ceph.com/issues/53608 Pacific tracker: https://tracker.ceph.com/issues/53002 Pacific PR: https://github.com/ceph/ceph/pull/44723 ** Affects: ceph (Ubuntu) Importance: Undecided Status: New ** Tags: sts ** Tags added: sts
[Bug 1969000] Re: [SRU] mon crashes when improper json is passed to rados
** Also affects: cloud-archive Importance: Undecided Status: New ** Also affects: cloud-archive/ussuri Importance: Undecided Status: New ** Tags added: sts
[Bug 1955345] Re: Active ceph-mgr crashes on receiving report from a non-active mgr
This has been superseded by https://bugs.launchpad.net/ubuntu/+source/ceph/+bug/1964802 which is a newer Octopus release (15.2.16) and contains the commits for fixing this as well.
[Bug 1852441] Re: In bionic, one of the ceph packages installed causes chrony to auto-install even on lxd
** Changed in: ntp-charm Assignee: Ponnuvel Palaniyappan (pponnuvel) => (unassigned)
[Bug 1852441] Re: In bionic, one of the ceph packages installed causes chrony to auto-install even on lxd
I am not sure if there's anything to fix here for the ntp charm. The ntp charm shouldn't really be installed in a container. In general, it should be installed alongside a principal charm that's on a bare-metal machine. In situations like a charm upgrade, the ntp charm could end up installing the chrony package again even in a container where it was previously removed. And the ceph-mon charm could remove it again. In the case of ntp being a subordinate of ceph-mon in a container, when ceph-mon removes the chrony package, the ntp unit goes into 'blocked' state, which seems reasonable to me given it shouldn't be there in the container in the first place. So the only things I could think of are: - update the ntp charm's doc to say that it shouldn't be installed in a container - provide a clear error message if ntp is deployed on a container Thoughts?
[Bug 1955345] Re: Active ceph-mgr crashes on receiving report from a non-active mgr
** Also affects: cloud-archive Importance: Undecided Status: New ** Changed in: cloud-archive Assignee: (unassigned) => Ponnuvel Palaniyappan (pponnuvel) ** Changed in: cloud-archive Importance: Undecided => High ** Changed in: cloud-archive Status: New => In Progress ** Changed in: cloud-archive Assignee: Ponnuvel Palaniyappan (pponnuvel) => (unassigned) ** Changed in: cloud-archive Assignee: (unassigned) => Ponnuvel Palaniyappan (pponnuvel) ** Also affects: cloud-archive/ussuri Importance: Undecided Status: New
[Bug 1955345] Re: Active ceph-mgr crashes on receiving report from a non-active mgr
Attaching debdiff for Focal. ** Patch added: "focal1955345.patch" https://bugs.launchpad.net/ubuntu/+source/ceph/+bug/1955345/+attachment/5548581/+files/focal1955345.patch
[Bug 1955345] [NEW] Active ceph-mgr crashes on receiving report from a non-active mgr
Public bug reported: [Impact] An active ceph-mgr crashes and another ceph-mgr takes over and becomes the active mgr. But this could again hit the same issue and crash, and the cycle can continue indefinitely (the previously crashed ceph-mgr gets restarted by systemd). This could affect the cluster stability/usability as ceph mgr handles a number of essential operations (modules that control/change Ceph cluster behaviour, metrics, etc). [Test Plan] Deploy and operate a Ceph cluster normally. Increase the log level of mgr to 20. Observe MMgrReport sent from non-active mgrs get ignored (no crash). [Where problems could occur] Possibly the fix may not actually work and mgr may continue to crash as before. It might incorrectly ignore reports from active mgrs. [Other Info] Upstream main bug: https://tracker.ceph.com/issues/48022 Octopus backport PR: https://github.com/ceph/ceph/pull/43861 Octopus backport bug: https://tracker.ceph.com/issues/53198 This has already been fixed and is available in Pacific. So it needs backporting only for Octopus. ** Affects: ceph (Ubuntu) Importance: High Assignee: Ponnuvel Palaniyappan (pponnuvel) Status: In Progress ** Affects: ceph (Ubuntu Focal) Importance: High Assignee: Ponnuvel Palaniyappan (pponnuvel) Status: In Progress ** Tags: sts ** Changed in: ceph (Ubuntu) Assignee: (unassigned) => Ponnuvel Palaniyappan (pponnuvel) ** Changed in: ceph (Ubuntu) Status: New => In Progress ** Description changed: - [Impact] + [Impact] An active ceph-mgr crashes and another ceph-mgr takes over and becomes - the active mgr. But this could again hit the same issue and crash and the cycle - can continue indefinitely (previously crashed ceph-mgr gets restarted by - systemd). + the active mgr. But this could again hit the same issue and crash and the cycle can continue indefinitely (previously crashed ceph-mgr gets restarted by systemd).
- This could affect the cluster stability/usability as ceph mgr handles a number - of essential operations (modules that control/change Ceph cluster behaviour, - metrics, etc). + This could affect the cluster stability/usability as ceph mgr handles a + number of essential operations (modules that control/change Ceph cluster + behaviour, metrics, etc). [Test Plan] Deploy and operate a Ceph cluster normally. Increase the log level of mgr to 20. Observe MMgrReport sent from non-active mgrs get ignored (no crash). [Where problems could occur] Possibly the fix may not actually work and mgr may continue to crash as before. It might incorrectly ignore reports from active mgrs. [Other Info] - Upstream main bug: https://tracker.ceph.com/issues/48022 + Upstream main bug: https://tracker.ceph.com/issues/48022 Octopus backport PR: https://github.com/ceph/ceph/pull/43861 Octopus backport bug: https://tracker.ceph.com/issues/53198 This has already been fixed and is available in Pacific. So it needs backporting only for Octopus. ** Also affects: ceph (Ubuntu Focal) Importance: Undecided Status: New ** Changed in: ceph (Ubuntu Focal) Assignee: (unassigned) => Ponnuvel Palaniyappan (pponnuvel) ** Changed in: ceph (Ubuntu Focal) Status: New => In Progress ** Changed in: ceph (Ubuntu) Importance: Undecided => High ** Changed in: ceph (Ubuntu Focal) Importance: Undecided => High ** Tags added: sts
[Bug 1942182] Re: apt-cache-ng is taking 100% cpu
** Description changed: - Every once in a while, /usr/sbin/apt-cacher-ng is seems to get stuck in - a loop and takes up 100% of cpu. + Every once in a while, /usr/sbin/apt-cacher-ng seems to get stuck in a + loop and takes up 100% of cpu. root@magicbox:~# lsb_release -a No LSB modules are available. Distributor ID: Ubuntu Description: Ubuntu 21.04 Release: 21.04 Codename: hirsute root@magicbox:~# dpkg -l | grep apt-cacher ii apt-cacher-ng3.6.3-1 amd64caching proxy server for software repositories root@magicbox:~# ps -P 1494 PID PSR TTY STAT TIME COMMAND 1494 2 ?Ssl 2674:27 /usr/sbin/apt-cacher-ng -c /etc/apt-cacher-ng ForeGround=1 root@magicbox:~# top -p 1494 PID USER PR NIVIRTRESSHR S %CPU %MEM TIME+ COMMAND 1494 apt-cac+ 20 0 755256 10244 7392 S 100.0 0.1 2681:35 apt-cacher-ng Seems one of the threads stuck in a select/pol loop (from gdb): (gdb) bt #0 0x7f8c9495309e in epoll_wait (epfd=3, events=0x55b8c69e4eb0, maxevents=32, timeout=-1) at ../sysdeps/unix/sysv/linux/epoll_wait.c:30 #1 0x7f8c94ca8719 in ?? () from /lib/x86_64-linux-gnu/libevent-2.1.so.7 #2 0x7f8c94c9e685 in event_base_loop () from /lib/x86_64-linux-gnu/libevent-2.1.so.7 #3 0x7f8c94d8455a in acng::evabase::MainLoop() () from /lib/x86_64-linux-gnu/libsupacng.so #4 0x55b8c61f5b25 in ?? () #5 0x7f8c94863565 in __libc_start_main (main=0x55b8c61f5af0, argc=4, argv=0x7fff32802fc8, init=, fini=, rtld_fini=, stack_end=0x7fff32802fb8) at ../csu/libc-start.c:332 #6 0x55b8c61f5ede in ?? 
() (gdb) info threads Id Target Id Frame * 1LWP 1494 "apt-cacher-ng" 0x7f8c9495309e in epoll_wait (epfd=3, events=0x55b8c69e4eb0, maxevents=32, timeout=-1) at ../sysdeps/unix/sysv/linux/epoll_wait.c:30 2LWP 5052 "apt-cacher-ng" __futex_abstimed_wait_common64 (cancel=true, private=0, abstime=0x0, clockid=0, expected=0, futex_word=0x7f8c94dae090) at ../sysdeps/nptl/futex-internal.c:74 3LWP 11108 "apt-cacher-ng" __futex_abstimed_wait_common64 (cancel=true, private=0, abstime=0x0, clockid=0, expected=0, futex_word=0x7f8c8802ba98) at ../sysdeps/nptl/futex-internal.c:74 4LWP 18009 "apt-cacher-ng" __futex_abstimed_wait_common64 (cancel=true, private=0, abstime=0x0, clockid=0, expected=0, futex_word=0x7f8c8802ba98) at ../sysdeps/nptl/futex-internal.c:74 5LWP 18010 "apt-cacher-ng" __futex_abstimed_wait_common64 (cancel=true, private=0, abstime=0x0, clockid=0, expected=0, futex_word=0x7f8c8802ba98) at ../sysdeps/nptl/futex-internal.c:74 6LWP 18014 "apt-cacher-ng" __futex_abstimed_wait_common64 (cancel=true, private=0, abstime=0x7f8c7bffeca0, clockid=1, expected=0, futex_word=0x7f8c94dacd38) at ../sysdeps/nptl/futex-internal.c:74 7LWP 19462 "apt-cacher-ng" 0x7f8c94949b31 in __GI___select (nfds=15, readfds=0x7f8c915daa00, writefds=0x7f8c915daa80, exceptfds=0x0, timeout=) at ../sysdeps/unix/sysv/linux/select.c:49 8LWP 21198 "apt-cacher-ng" __futex_abstimed_wait_common64 (cancel=true, private=0, abstime=0x0, clockid=0, expected=0, futex_word=0x7f8c94dae090) at ../sysdeps/nptl/futex-internal.c:74 9LWP 22284 "apt-cacher-ng" 0x7f8c94949b31 in __GI___select (nfds=19, readfds=0x7f8c7b7fda00, writefds=0x7f8c7b7fda80, exceptfds=0x0, timeout=) at ../sysdeps/unix/sysv/linux/select.c:49 (gdb) thread 7 [Switching to thread 7 (LWP 19462)] #0 0x7f8c94949b31 in __GI___select (nfds=15, readfds=0x7f8c915daa00, writefds=0x7f8c915daa80, exceptfds=0x0, timeout=) at ../sysdeps/unix/sysv/linux/select.c:49 Download failed: Function not implemented. 
Continuing without source file ./misc/../sysdeps/unix/sysv/linux/select.c. 49../sysdeps/unix/sysv/linux/select.c: No such file or directory. (gdb) bt #0 0x7f8c94949b31 in __GI___select (nfds=15, readfds=0x7f8c915daa00, writefds=0x7f8c915daa80, exceptfds=0x0, timeout=) at ../sysdeps/unix/sysv/linux/select.c:49 #1 0x7f8c94d135f8 in acng::dlcon::Impl::ExchangeData(std::__cxx11::basic_string, std::allocator >&, std::shared_ptr&, std::__cxx11::list >&) () from /lib/x86_64-linux-gnu/libsupacng.so #2 0x7f8c94d15636 in acng::dlcon::Impl::WorkLoop() () from /lib/x86_64-linux-gnu/libsupacng.so #3 0x7f8c94b1c694 in ?? () from /lib/x86_64-linux-gnu/libstdc++.so.6 #4 0x7f8c94c64450 in start_thread (arg=0x7f8c915db640) at pthread_create.c:473 #5 0x7f8c94952d53 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95 -- You received this bug notification because you are a member of Ubuntu Bugs, which is subscribed to Ubuntu. https://bugs.launchpad.net/bugs/1942182 Title: apt-cache-ng is taking 100% cpu To manage notifications about this bug go to: https://bugs.launchpad.net/ubun
[Bug 1942182] Re: apt-cache-ng is taking 100% cpu
** Description changed: Every once in a while, /usr/sbin/apt-cacher-ng is seems to get stuck in a loop and takes up 100% of cpu. root@magicbox:~# lsb_release -a No LSB modules are available. Distributor ID: Ubuntu Description: Ubuntu 21.04 Release: 21.04 Codename: hirsute root@magicbox:~# dpkg -l | grep apt-cacher ii apt-cacher-ng3.6.3-1 amd64caching proxy server for software repositories root@magicbox:~# ps -P 1494 PID PSR TTY STAT TIME COMMAND 1494 2 ?Ssl 2674:27 /usr/sbin/apt-cacher-ng -c /etc/apt-cacher-ng ForeGround=1 root@magicbox:~# top -p 1494 - PID USER PR NIVIRTRESSHR S %CPU %MEM TIME+ COMMAND -1494 apt-cac+ 20 0 755256 10244 7392 S 100.0 0.1 2681:35 apt-cacher-ng + PID USER PR NIVIRTRESSHR S %CPU %MEM TIME+ COMMAND + 1494 apt-cac+ 20 0 755256 10244 7392 S 100.0 0.1 2681:35 apt-cacher-ng - Seems one of the threads stuck in a select/pool loop: - x7f8c9495309e in epoll_wait (epfd=3, events=0x55b8c69e4eb0, maxevents=32, timeout=-1) at ../sysdeps/unix/sysv/linux/epoll_wait.c:30 - Download failed: Function not implemented. Continuing without source file ./misc/../sysdeps/unix/sysv/linux/epoll_wait.c. - 30../sysdeps/unix/sysv/linux/epoll_wait.c: No such file or directory. + Seems one of the threads stuck in a select/pool loop in gbp: (gdb) bt #0 0x7f8c9495309e in epoll_wait (epfd=3, events=0x55b8c69e4eb0, maxevents=32, timeout=-1) at ../sysdeps/unix/sysv/linux/epoll_wait.c:30 #1 0x7f8c94ca8719 in ?? () from /lib/x86_64-linux-gnu/libevent-2.1.so.7 #2 0x7f8c94c9e685 in event_base_loop () from /lib/x86_64-linux-gnu/libevent-2.1.so.7 #3 0x7f8c94d8455a in acng::evabase::MainLoop() () from /lib/x86_64-linux-gnu/libsupacng.so #4 0x55b8c61f5b25 in ?? () #5 0x7f8c94863565 in __libc_start_main (main=0x55b8c61f5af0, argc=4, argv=0x7fff32802fc8, init=, fini=, rtld_fini=, stack_end=0x7fff32802fb8) at ../csu/libc-start.c:332 #6 0x55b8c61f5ede in ?? 
() (gdb) info threads Id Target Id Frame * 1LWP 1494 "apt-cacher-ng" 0x7f8c9495309e in epoll_wait (epfd=3, events=0x55b8c69e4eb0, maxevents=32, timeout=-1) at ../sysdeps/unix/sysv/linux/epoll_wait.c:30 2LWP 5052 "apt-cacher-ng" __futex_abstimed_wait_common64 (cancel=true, private=0, abstime=0x0, clockid=0, expected=0, futex_word=0x7f8c94dae090) at ../sysdeps/nptl/futex-internal.c:74 3LWP 11108 "apt-cacher-ng" __futex_abstimed_wait_common64 (cancel=true, private=0, abstime=0x0, clockid=0, expected=0, futex_word=0x7f8c8802ba98) at ../sysdeps/nptl/futex-internal.c:74 4LWP 18009 "apt-cacher-ng" __futex_abstimed_wait_common64 (cancel=true, private=0, abstime=0x0, clockid=0, expected=0, futex_word=0x7f8c8802ba98) at ../sysdeps/nptl/futex-internal.c:74 5LWP 18010 "apt-cacher-ng" __futex_abstimed_wait_common64 (cancel=true, private=0, abstime=0x0, clockid=0, expected=0, futex_word=0x7f8c8802ba98) at ../sysdeps/nptl/futex-internal.c:74 6LWP 18014 "apt-cacher-ng" __futex_abstimed_wait_common64 (cancel=true, private=0, abstime=0x7f8c7bffeca0, clockid=1, expected=0, futex_word=0x7f8c94dacd38) at ../sysdeps/nptl/futex-internal.c:74 7LWP 19462 "apt-cacher-ng" 0x7f8c94949b31 in __GI___select (nfds=15, readfds=0x7f8c915daa00, writefds=0x7f8c915daa80, exceptfds=0x0, timeout=) at ../sysdeps/unix/sysv/linux/select.c:49 8LWP 21198 "apt-cacher-ng" __futex_abstimed_wait_common64 (cancel=true, private=0, abstime=0x0, clockid=0, expected=0, futex_word=0x7f8c94dae090) at ../sysdeps/nptl/futex-internal.c:74 9LWP 22284 "apt-cacher-ng" 0x7f8c94949b31 in __GI___select (nfds=19, readfds=0x7f8c7b7fda00, writefds=0x7f8c7b7fda80, exceptfds=0x0, timeout=) at ../sysdeps/unix/sysv/linux/select.c:49 (gdb) thread 7 [Switching to thread 7 (LWP 19462)] #0 0x7f8c94949b31 in __GI___select (nfds=15, readfds=0x7f8c915daa00, writefds=0x7f8c915daa80, exceptfds=0x0, timeout=) at ../sysdeps/unix/sysv/linux/select.c:49 Download failed: Function not implemented. 
Continuing without source file ./misc/../sysdeps/unix/sysv/linux/select.c. 49../sysdeps/unix/sysv/linux/select.c: No such file or directory. (gdb) bt #0 0x7f8c94949b31 in __GI___select (nfds=15, readfds=0x7f8c915daa00, writefds=0x7f8c915daa80, exceptfds=0x0, timeout=) at ../sysdeps/unix/sysv/linux/select.c:49 #1 0x7f8c94d135f8 in acng::dlcon::Impl::ExchangeData(std::__cxx11::basic_string, std::allocator >&, std::shared_ptr&, std::__cxx11::list >&) () from /lib/x86_64-linux-gnu/libsupacng.so #2 0x7f8c94d15636 in acng::dlcon::Impl::WorkLoop() () from /lib/x86_64-linux-gnu/libsupacng.so #3 0x7f8c94b1c694 in ?? () from /lib/x86_64-linux-gnu/l
[Bug 1942182] Re: apt-cache-ng is taking 100% cpu
Attached apt-cacher-ng logs.
[Bug 1942182] Re: apt-cache-ng is taking 100% cpu
** Attachment added: "apt-cacher-ng.zip" https://bugs.launchpad.net/ubuntu/+source/apt-cacher-ng/+bug/1942182/+attachment/5521847/+files/apt-cacher-ng.zip
[Bug 1942182] Re: apt-cache-ng is taking 100% cpu
** Description changed: Every once in a while, /usr/sbin/apt-cacher-ng is seems to get stuck in a loop and takes up 100% of cpu. root@magicbox:~# lsb_release -a No LSB modules are available. Distributor ID: Ubuntu Description: Ubuntu 21.04 Release: 21.04 Codename: hirsute root@magicbox:~# dpkg -l | grep apt-cacher ii apt-cacher-ng3.6.3-1 amd64caching proxy server for software repositories root@magicbox:~# ps -P 1494 - PID PSR TTY STAT TIME COMMAND -1494 2 ?Ssl 2674:27 /usr/sbin/apt-cacher-ng -c /etc/apt-cacher-ng ForeGround=1 + PID PSR TTY STAT TIME COMMAND + 1494 2 ?Ssl 2674:27 /usr/sbin/apt-cacher-ng -c /etc/apt-cacher-ng ForeGround=1 + + root@magicbox:~# top -p 1494 + PID USER PR NIVIRTRESSHR S %CPU %MEM TIME+ COMMAND +1494 apt-cac+ 20 0 755256 10244 7392 S 100.0 0.1 2681:35 apt-cacher-ng Seems one of the threads stuck in a select/pool loop: x7f8c9495309e in epoll_wait (epfd=3, events=0x55b8c69e4eb0, maxevents=32, timeout=-1) at ../sysdeps/unix/sysv/linux/epoll_wait.c:30 Download failed: Function not implemented. Continuing without source file ./misc/../sysdeps/unix/sysv/linux/epoll_wait.c. 30../sysdeps/unix/sysv/linux/epoll_wait.c: No such file or directory. (gdb) bt #0 0x7f8c9495309e in epoll_wait (epfd=3, events=0x55b8c69e4eb0, maxevents=32, timeout=-1) at ../sysdeps/unix/sysv/linux/epoll_wait.c:30 #1 0x7f8c94ca8719 in ?? () from /lib/x86_64-linux-gnu/libevent-2.1.so.7 #2 0x7f8c94c9e685 in event_base_loop () from /lib/x86_64-linux-gnu/libevent-2.1.so.7 #3 0x7f8c94d8455a in acng::evabase::MainLoop() () from /lib/x86_64-linux-gnu/libsupacng.so #4 0x55b8c61f5b25 in ?? () #5 0x7f8c94863565 in __libc_start_main (main=0x55b8c61f5af0, argc=4, argv=0x7fff32802fc8, init=, fini=, rtld_fini=, stack_end=0x7fff32802fb8) at ../csu/libc-start.c:332 #6 0x55b8c61f5ede in ?? 
() (gdb) info threads - Id Target Id Frame + Id Target Id Frame * 1LWP 1494 "apt-cacher-ng" 0x7f8c9495309e in epoll_wait (epfd=3, events=0x55b8c69e4eb0, maxevents=32, timeout=-1) at ../sysdeps/unix/sysv/linux/epoll_wait.c:30 - 2LWP 5052 "apt-cacher-ng" __futex_abstimed_wait_common64 (cancel=true, private=0, abstime=0x0, clockid=0, expected=0, futex_word=0x7f8c94dae090) at ../sysdeps/nptl/futex-internal.c:74 - 3LWP 11108 "apt-cacher-ng" __futex_abstimed_wait_common64 (cancel=true, private=0, abstime=0x0, clockid=0, expected=0, futex_word=0x7f8c8802ba98) at ../sysdeps/nptl/futex-internal.c:74 - 4LWP 18009 "apt-cacher-ng" __futex_abstimed_wait_common64 (cancel=true, private=0, abstime=0x0, clockid=0, expected=0, futex_word=0x7f8c8802ba98) at ../sysdeps/nptl/futex-internal.c:74 - 5LWP 18010 "apt-cacher-ng" __futex_abstimed_wait_common64 (cancel=true, private=0, abstime=0x0, clockid=0, expected=0, futex_word=0x7f8c8802ba98) at ../sysdeps/nptl/futex-internal.c:74 - 6LWP 18014 "apt-cacher-ng" __futex_abstimed_wait_common64 (cancel=true, private=0, abstime=0x7f8c7bffeca0, clockid=1, expected=0, futex_word=0x7f8c94dacd38) at ../sysdeps/nptl/futex-internal.c:74 - 7LWP 19462 "apt-cacher-ng" 0x7f8c94949b31 in __GI___select (nfds=15, readfds=0x7f8c915daa00, writefds=0x7f8c915daa80, exceptfds=0x0, timeout=) at ../sysdeps/unix/sysv/linux/select.c:49 - 8LWP 21198 "apt-cacher-ng" __futex_abstimed_wait_common64 (cancel=true, private=0, abstime=0x0, clockid=0, expected=0, futex_word=0x7f8c94dae090) at ../sysdeps/nptl/futex-internal.c:74 - 9LWP 22284 "apt-cacher-ng" 0x7f8c94949b31 in __GI___select (nfds=19, readfds=0x7f8c7b7fda00, writefds=0x7f8c7b7fda80, exceptfds=0x0, timeout=) at ../sysdeps/unix/sysv/linux/select.c:49 + 2LWP 5052 "apt-cacher-ng" __futex_abstimed_wait_common64 (cancel=true, private=0, abstime=0x0, clockid=0, expected=0, futex_word=0x7f8c94dae090) at ../sysdeps/nptl/futex-internal.c:74 + 3LWP 11108 "apt-cacher-ng" __futex_abstimed_wait_common64 (cancel=true, 
private=0, abstime=0x0, clockid=0, expected=0, futex_word=0x7f8c8802ba98) at ../sysdeps/nptl/futex-internal.c:74 + 4LWP 18009 "apt-cacher-ng" __futex_abstimed_wait_common64 (cancel=true, private=0, abstime=0x0, clockid=0, expected=0, futex_word=0x7f8c8802ba98) at ../sysdeps/nptl/futex-internal.c:74 + 5LWP 18010 "apt-cacher-ng" __futex_abstimed_wait_common64 (cancel=true, private=0, abstime=0x0, clockid=0, expected=0, futex_word=0x7f8c8802ba98) at ../sysdeps/nptl/futex-internal.c:74 + 6LWP 18014 "apt-cacher-ng" __futex_abstimed_wait_common64 (cancel=true, private=0, abstime=0x7f8c7bffeca0, clockid=1, expected=0, futex_word=0x7f8c94dacd38) at ../sysdeps/nptl/futex-internal.c:74 + 7LWP 19462 "apt-ca
[Bug 1942182] [NEW] apt-cache-ng is taking 100% cpu
Public bug reported:

Every once in a while, /usr/sbin/apt-cacher-ng seems to get stuck in a loop and takes up 100% of cpu.

root@magicbox:~# lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 21.04
Release: 21.04
Codename: hirsute

root@magicbox:~# dpkg -l | grep apt-cacher
ii apt-cacher-ng 3.6.3-1 amd64 caching proxy server for software repositories

root@magicbox:~# ps -P 1494
PID PSR TTY STAT TIME COMMAND
1494 2 ? Ssl 2674:27 /usr/sbin/apt-cacher-ng -c /etc/apt-cacher-ng ForeGround=1

root@magicbox:~# top -p 1494
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1494 apt-cac+ 20 0 755256 10244 7392 S 100.0 0.1 2681:35 apt-cacher-ng

Seems one of the threads is stuck in a select/poll loop:

0x7f8c9495309e in epoll_wait (epfd=3, events=0x55b8c69e4eb0, maxevents=32, timeout=-1) at ../sysdeps/unix/sysv/linux/epoll_wait.c:30
Download failed: Function not implemented. Continuing without source file ./misc/../sysdeps/unix/sysv/linux/epoll_wait.c.
30 ../sysdeps/unix/sysv/linux/epoll_wait.c: No such file or directory.

(gdb) bt
#0 0x7f8c9495309e in epoll_wait (epfd=3, events=0x55b8c69e4eb0, maxevents=32, timeout=-1) at ../sysdeps/unix/sysv/linux/epoll_wait.c:30
#1 0x7f8c94ca8719 in ?? () from /lib/x86_64-linux-gnu/libevent-2.1.so.7
#2 0x7f8c94c9e685 in event_base_loop () from /lib/x86_64-linux-gnu/libevent-2.1.so.7
#3 0x7f8c94d8455a in acng::evabase::MainLoop() () from /lib/x86_64-linux-gnu/libsupacng.so
#4 0x55b8c61f5b25 in ?? ()
#5 0x7f8c94863565 in __libc_start_main (main=0x55b8c61f5af0, argc=4, argv=0x7fff32802fc8, init=, fini=, rtld_fini=, stack_end=0x7fff32802fb8) at ../csu/libc-start.c:332
#6 0x55b8c61f5ede in ??
() (gdb) info threads Id Target Id Frame * 1LWP 1494 "apt-cacher-ng" 0x7f8c9495309e in epoll_wait (epfd=3, events=0x55b8c69e4eb0, maxevents=32, timeout=-1) at ../sysdeps/unix/sysv/linux/epoll_wait.c:30 2LWP 5052 "apt-cacher-ng" __futex_abstimed_wait_common64 (cancel=true, private=0, abstime=0x0, clockid=0, expected=0, futex_word=0x7f8c94dae090) at ../sysdeps/nptl/futex-internal.c:74 3LWP 11108 "apt-cacher-ng" __futex_abstimed_wait_common64 (cancel=true, private=0, abstime=0x0, clockid=0, expected=0, futex_word=0x7f8c8802ba98) at ../sysdeps/nptl/futex-internal.c:74 4LWP 18009 "apt-cacher-ng" __futex_abstimed_wait_common64 (cancel=true, private=0, abstime=0x0, clockid=0, expected=0, futex_word=0x7f8c8802ba98) at ../sysdeps/nptl/futex-internal.c:74 5LWP 18010 "apt-cacher-ng" __futex_abstimed_wait_common64 (cancel=true, private=0, abstime=0x0, clockid=0, expected=0, futex_word=0x7f8c8802ba98) at ../sysdeps/nptl/futex-internal.c:74 6LWP 18014 "apt-cacher-ng" __futex_abstimed_wait_common64 (cancel=true, private=0, abstime=0x7f8c7bffeca0, clockid=1, expected=0, futex_word=0x7f8c94dacd38) at ../sysdeps/nptl/futex-internal.c:74 7LWP 19462 "apt-cacher-ng" 0x7f8c94949b31 in __GI___select (nfds=15, readfds=0x7f8c915daa00, writefds=0x7f8c915daa80, exceptfds=0x0, timeout=) at ../sysdeps/unix/sysv/linux/select.c:49 8LWP 21198 "apt-cacher-ng" __futex_abstimed_wait_common64 (cancel=true, private=0, abstime=0x0, clockid=0, expected=0, futex_word=0x7f8c94dae090) at ../sysdeps/nptl/futex-internal.c:74 9LWP 22284 "apt-cacher-ng" 0x7f8c94949b31 in __GI___select (nfds=19, readfds=0x7f8c7b7fda00, writefds=0x7f8c7b7fda80, exceptfds=0x0, timeout=) at ../sysdeps/unix/sysv/linux/select.c:49 (gdb) thread 7 [Switching to thread 7 (LWP 19462)] #0 0x7f8c94949b31 in __GI___select (nfds=15, readfds=0x7f8c915daa00, writefds=0x7f8c915daa80, exceptfds=0x0, timeout=) at ../sysdeps/unix/sysv/linux/select.c:49 Download failed: Function not implemented. 
Continuing without source file ./misc/../sysdeps/unix/sysv/linux/select.c. 49 ../sysdeps/unix/sysv/linux/select.c: No such file or directory. (gdb) bt #0 0x7f8c94949b31 in __GI___select (nfds=15, readfds=0x7f8c915daa00, writefds=0x7f8c915daa80, exceptfds=0x0, timeout=) at ../sysdeps/unix/sysv/linux/select.c:49 #1 0x7f8c94d135f8 in acng::dlcon::Impl::ExchangeData(std::__cxx11::basic_string, std::allocator >&, std::shared_ptr&, std::__cxx11::list >&) () from /lib/x86_64-linux-gnu/libsupacng.so #2 0x7f8c94d15636 in acng::dlcon::Impl::WorkLoop() () from /lib/x86_64-linux-gnu/libsupacng.so #3 0x7f8c94b1c694 in ?? () from /lib/x86_64-linux-gnu/libstdc++.so.6 #4 0x7f8c94c64450 in start_thread (arg=0x7f8c915db640) at pthread_create.c:473 #5 0x7f8c94952d53 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95 ** Affects: apt-cacher-ng (Ubuntu) Importance: Undecided Status: New -- You received this bug notification because you are a member of Ubuntu
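The per-thread backtraces quoted in the reports above were captured with gdb. A minimal sketch of reproducing such a capture non-interactively (the PID lookup and output path are illustrative assumptions, not from the bug report):

```shell
# Attach to the running daemon, dump every thread's stack, then detach.
pid=$(pidof apt-cacher-ng)
gdb -p "$pid" -batch \
    -ex 'info threads' \
    -ex 'thread apply all bt' > /tmp/acng-threads.txt

# A thread burning 100% CPU in a select/poll loop shows up as the same
# LWP sitting in __GI___select or epoll_wait frames across captures.
grep -c '^Thread' /tmp/acng-threads.txt
```

Repeating the capture a few seconds apart and diffing the outputs makes it easy to spot which thread never makes progress.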
[Bug 1940456] Re: [SRU] radosgw-admin's diagnostics are confusing if user data exists
** Also affects: cloud-archive/xena Importance: Undecided Status: New
** Also affects: cloud-archive/wallaby Importance: Undecided Status: New
** Also affects: cloud-archive/victoria Importance: Undecided Status: New
** Also affects: cloud-archive/ussuri Importance: Undecided Status: New
[Bug 1940456] Re: [SRU] radosgw-admin's diagnostics are confusing if user data exists
** Also affects: cloud-archive Importance: Undecided Status: New
[Bug 1940456] Re: [SRU] radosgw-admin's diagnostics are confusing if user data exists
** Also affects: ceph (Ubuntu Focal) Importance: Undecided Status: New
** Also affects: ceph (Ubuntu Hirsute) Importance: Undecided Status: New
** Also affects: ceph (Ubuntu Bionic) Importance: Undecided Status: New
[Bug 1940456] Re: [SRU] radosgw-admin's diagnostics are confusing if user data exists
** Changed in: ceph (Ubuntu) Assignee: (unassigned) => nikhil kshirsagar (nkshirsagar)
[Bug 1914584] Re: [SRU] radosgw-admin user create error message confusing if user with email already exists
Reverted the status of the bug to its previous states. As Dan Streetman pointed out, 'reuse' of this bug causes confusion and strays from the standard SRU process. I created a new bug to carry out the SRU: https://bugs.launchpad.net/ubuntu/+source/ceph/+bug/1940456

The fixed patch has already been accepted upstream and backported to the supported upstream releases as well. So this is now just a standard SRU that will be done via LP#1940456.

** Changed in: cloud-archive Status: New => Fix Released
** Changed in: ceph (Ubuntu) Status: New => Fix Released
** Changed in: ceph (Ubuntu Bionic) Status: New => Fix Released
** Changed in: ceph (Ubuntu Focal) Status: New => Fix Released
** Changed in: ceph (Ubuntu Groovy) Status: New => Fix Released
** Changed in: ceph (Ubuntu Hirsute) Status: New => Fix Released
[Bug 1940456] [NEW] [SRU] radosgw-admin's diagnostics are confusing if user data exists
Public bug reported:

This is the same as LP#1914584, but its original patch was wrong (which was found out during SRU testing) and that particular release went ahead without the patch. Since that LP is in a rather hard-to-follow/confusing state, this new bug was opened to carry out the SRU work.

** Affects: ceph (Ubuntu) Importance: Undecided Status: New
** Tags: sts
** Tags added: sts
[Bug 1914584] Re: [SRU] radosgw-admin user create error message confusing if user with email already exists
** Changed in: cloud-archive/victoria Status: Triaged => New
** Changed in: cloud-archive/train Status: Triaged => Won't Fix
** Changed in: cloud-archive/ussuri Status: Fix Committed => New
** Changed in: cloud-archive Status: Fix Released => New
** Changed in: ceph (Ubuntu) Status: Fix Released => New
** Changed in: ceph (Ubuntu Bionic) Status: Triaged => New
** Changed in: ceph (Ubuntu Focal) Status: Fix Released => New
** Changed in: ceph (Ubuntu Groovy) Status: Fix Released => New
** Changed in: ceph (Ubuntu Hirsute) Status: Fix Released => New
[Bug 1936136] Re: ceph on bcache performance regression
** Tags added: sts
[Bug 1914584] Re: [SRU] radosgw-admin user create error message confusing if user with email already exists
** Changed in: cloud-archive/queens Status: Triaged => Won't Fix
** Changed in: cloud-archive/rocky Status: Triaged => Won't Fix
** Changed in: cloud-archive/stein Status: Triaged => Won't Fix
[Bug 1914911] Re: [SRU] bluefs doesn't compact log file
Verified that log compactions occur in bluefs with read/write I/O. Attaching test notes.

** Attachment added: "queens_sru_1914911" https://bugs.launchpad.net/ubuntu/+source/ceph/+bug/1914911/+attachment/5500895/+files/queens_sru_1914911
** Tags removed: verification-queens-needed
** Tags added: verification-queens-done
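One way to check for compaction activity is via the OSD's admin socket. This is only a sketch: the counter names are assumptions based on standard bluefs perf counters, not taken from the attached test notes.

```shell
# Dump bluefs perf counters on an OSD; under sustained read/write I/O
# the log_compactions counter should increase over time (i.e. the
# bluefs log is being compacted rather than growing without bound).
ceph daemon osd.0 perf dump bluefs | grep -E '"log_compactions"|"log_bytes"'
```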
[Bug 1914584] Re: [SRU] radosgw-admin user create error message confusing if user with email already exists
This (incorrect) patch is in upstream master and was backported to Nautilus, Octopus, and Pacific. It isn't critical and doesn't break functionality, except for an incorrect error message if modifying the user's attributes fails. However, the reported problem still exists, so I've marked all verifications as failed.
[Bug 1914584] Re: [SRU] radosgw-admin user create error message confusing if user with email already exists
** Tags removed: verification-ussuri-needed
** Tags added: verification-ussuri-failed
[Bug 1914584] Re: [SRU] radosgw-admin user create error message confusing if user with email already exists
Confirmed that the patch needs re-work, so I'm marking all verifications failed. I've opened an issue in the upstream tracker [0] and submitted the patch upstream [1]. Once that gets approved, backports to Nautilus, Octopus, and Pacific will follow. I am not sure how the 15.2.11 release should be handled: whether we wait for upstream acceptance or proceed without this patch. Please let me know what you think, James/Corey.

[0] https://tracker.ceph.com/issues/50554
[1] https://github.com/ceph/ceph/pull/41065

** Bug watch added: tracker.ceph.com/issues #50554 http://tracker.ceph.com/issues/50554
[Bug 1914584] Re: [SRU] radosgw-admin user create error message confusing if user with email already exists
** Tags removed: verification-needed verification-needed-focal verification-needed-groovy
** Tags added: verification-failed verification-failed-focal verification-failed-groovy
[Bug 1914584] Re: [SRU] radosgw-admin user create error message confusing if user with email already exists
Submitted patch upstream: https://github.com/ceph/ceph/pull/41065
[Bug 1914584] Re: [SRU] radosgw-admin user create error message confusing if user with email already exists
The SRU verification test doesn't quite work as intended. I believe the problem is with the upstream patch [0]; I've asked for clarification/confirmation [1]. I'll then have to mark the verifications failed, I'm afraid. This obviously affects the 15.2.11 release as well. I'll try to get this fixed upstream as soon as possible and re-submit patches. Please let me know if there's anything else that can be (or needs to be) done.

[0] https://github.com/ceph/ceph/pull/39293
[1] https://github.com/ceph/ceph/pull/39293#issuecomment-827751233
[Bug 1911900] Re: [SRU] Active scrub blocks upmap balancer
The regression that this patch fixes wasn't introduced in (or backported to) Luminous by upstream, so this doesn't affect Bionic (confirmed by checking Ubuntu's latest Bionic source too).

** Changed in: ceph (Ubuntu Bionic)
   Status: New => Invalid
[Bug 1914584] Re: [SRU] radosgw-admin user create error message confusing if user with email already exists
Thanks, James. Do you know if this is in the SRU queue and when this patch might be committed?
[Bug 1914584] Re: [SRU] radosgw-admin user create error message confusing if user with email already exists
Attaching debdiff for Hirsute.

** Attachment added: "hirsute-debdiff.txt"
   https://bugs.launchpad.net/ubuntu/+source/ceph/+bug/1914584/+attachment/5486157/+files/hirsute-debdiff.txt
[Bug 1914584] Re: [SRU] radosgw-admin user create error message confusing if user with email already exists
Attaching debdiff for Groovy.

** Attachment added: "groovy-debdiff.txt"
   https://bugs.launchpad.net/ubuntu/+source/ceph/+bug/1914584/+attachment/5486155/+files/groovy-debdiff.txt
[Bug 1914584] Re: [SRU] radosgw-admin user create error message confusing if user with email already exists
Attaching debdiff for Focal.

** Attachment added: "focal-debidff.txt"
   https://bugs.launchpad.net/ubuntu/+source/ceph/+bug/1914584/+attachment/5486154/+files/focal-debidff.txt
[Bug 1914584] Re: [SRU] radosgw-admin user create error message confusing if user with email already exists
Hi Brian, James Page is doing a newer hirsute release in [0] which includes this fix as well. I'll upload the debdiffs for the various Ubuntu releases as well.

[0] https://bugs.launchpad.net/ubuntu/+source/ceph/+bug/1922883
[Bug 1913284] Re: [plugin] [bcache] add a new bcache plugin
SRU verification for Bionic passed. Attached a text file w/ steps and relevant info.

** Attachment added: "bcache-bionic-verification.txt"
   https://bugs.launchpad.net/ubuntu/+source/sosreport/+bug/1913284/+attachment/5483289/+files/bcache-bionic-verification.txt

** Tags removed: verification-needed verification-needed-bionic
** Tags added: verification-done verification-done-bionic
[Bug 1913284] Re: [plugin] [bcache] add a new bcache plugin
SRU verification for Groovy passed. Attached a text file w/ steps and relevant info.

** Attachment added: "bcache-groovy-verification.txt"
   https://bugs.launchpad.net/ubuntu/+source/sosreport/+bug/1913284/+attachment/5483278/+files/bcache-groovy-verification.txt

** Tags removed: verification-needed-groovy
** Tags added: verification-done-groovy
[Bug 1913284] Re: [plugin] [bcache] add a new bcache plugin
SRU verification for Focal passed. Attached a text file w/ steps and relevant info.

** Attachment added: "bcache-focal-verification2.txt"
   https://bugs.launchpad.net/ubuntu/+source/sosreport/+bug/1913284/+attachment/5483242/+files/bcache-focal-verification2.txt

** Tags removed: verification-needed-focal
** Tags added: verification-done-focal
[Bug 1910264] Re: [plugin][ceph] include time-sync-status for ceph mon
SRU verification: verification done on Bionic, Focal, and Groovy. Please refer to the comments above and let me know if there are any questions.

Thanks,
Pon
[Bug 1910264] Re: [plugin][ceph] include time-sync-status for ceph mon
Verified on Groovy with 4.1-1ubuntu0.20.10.1. Time sync status is collected as expected. Attached a reference.

** Attachment added: "time-sync-groovy-sru-verify.txt"
   https://bugs.launchpad.net/ubuntu/+source/sosreport/+bug/1910264/+attachment/5483241/+files/time-sync-groovy-sru-verify.txt

** Tags removed: verification-needed verification-needed-groovy
** Tags added: verification-done verification-done-groovy
[Bug 1910264] Re: [plugin][ceph] include time-sync-status for ceph mon
Verified on Bionic with 4.1-1ubuntu0.18.04.1. Time sync status is collected as expected. Attached a reference.

** Attachment added: "time-sync-bionic-sru-verify.txt"
   https://bugs.launchpad.net/ubuntu/+source/sosreport/+bug/1910264/+attachment/5483226/+files/time-sync-bionic-sru-verify.txt

** Tags removed: verification-needed-bionic
** Tags added: verification-done-bionic
[Bug 1910264] Re: [plugin][ceph] include time-sync-status for ceph mon
Verified on Focal with 4.1-1ubuntu0.20.04.1. Time sync status is collected as expected. Attached a reference.

** Attachment added: "time-sync-focal-sru-verify.txt"
   https://bugs.launchpad.net/ubuntu/+source/sosreport/+bug/1910264/+attachment/5483227/+files/time-sync-focal-sru-verify.txt

** Tags removed: verification-needed-focal
** Tags added: verification-done-focal
[Bug 1914911] Re: [SRU] bluefs doesn't compact log file
Hi James, I manually edited the debdiff because running debdiff produced a lot of cruft that isn't relevant to the patch (and probably caused the issue). Perhaps that's not an issue after all? I created the patch again and attached it here (it does contain the cruft I noted) and also left the release as UNRELEASED. Please verify whether this is OK. I wasn't aware that I could propose changes against git repos for SRUs; I can try that route next time :) Thanks!

** Patch added: "bluefs-bionic.debdiff"
   https://bugs.launchpad.net/cloud-archive/+bug/1914911/+attachment/5480862/+files/bluefs-bionic.debdiff
[Bug 1852441] Re: In bionic, one of the ceph packages installed causes chrony to auto-install even on lxd
** Changed in: ntp-charm
   Status: New => Confirmed
[Bug 1914911] Re: [SRU] bluefs doesn't compact log file
Attached debdiff for bionic (fixed the previous patch which had additional unnecessary changes).

** Attachment added: "bionic.debdiff.fixed"
   https://bugs.launchpad.net/ubuntu/+source/ceph/+bug/1914911/+attachment/5477289/+files/bionic.debdiff.fixed
[Bug 1914911] Re: [SRU] bluefs doesn't compact log file
** Description changed:

  [Impact]

  For a certain type of workload, bluefs might never compact the log
  file, which causes the bluefs log file to slowly grow to a huge size
  (sometimes bigger than 1TB for a 1.5TB device). The bluefs perf
  counters give more detail when this issue has happened, e.g.:

  "bluefs": {
      "gift_bytes": 811748818944,
      "reclaim_bytes": 0,
      "db_total_bytes": 888564350976,
      "db_used_bytes": 867311747072,
      "wal_total_bytes": 0,
      "wal_used_bytes": 0,
      "slow_total_bytes": 0,
      "slow_used_bytes": 0,
      "num_files": 11,
      "log_bytes": 866545131520,
      "log_compactions": 0,
      "logged_bytes": 866542977024,
      "files_written_wal": 2,
      "files_written_sst": 3,
      "bytes_written_wal": 32424281934,
      "bytes_written_sst": 25382201
  }

  This bug can eventually cause an OSD to crash and fail to restart, as
  it can't get through the bluefs replay phase during boot. We might see
  the following log when trying to restart the OSD:

  bluefs mount failed to replay log: (5) Input/output error

  As we can see, log_compactions is 0, which means the log was never
  compacted, and the log file size (log_bytes) is already 800+ GB. After
  compaction, the log file size should be reduced to around 1GB.

  [Test Case]

  Deploy a test Ceph cluster (Luminous 12.2.13, which has the bug) and
  drive I/O. Compaction doesn't get triggered often when most I/O is
  reads, so fill up the cluster initially with lots of writes and then
  issue heavy reads (no writes); the problem should then occur.
  Smaller-sized OSDs are fine, as we're only interested in filling up
  the OSD and growing the bluefs log.

  [Where problems could occur]

  This fix has been part of all upstream releases since Mimic, so it has
  had quite a lot of runtime. The changes ensure that compaction happens
  more often, but that isn't going to cause any problem; I can't see any
  real risk of regression.

  [Other Info]

- - It's only needed for Luminous (Bionic). All new releases since have this already.
- - Upstream PR: https://github.com/ceph/ceph/pull/17354
+ - It's only needed for Luminous (Bionic). All new releases since have this already.
+ - Upstream master PR: https://github.com/ceph/ceph/pull/17354
+ - Upstream Luminous PR: https://github.com/ceph/ceph/pull/34876/files
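The symptom in the counters above (log_compactions stuck at 0 while log_bytes keeps growing) can be checked programmatically. Below is a minimal sketch; the helper name and the 4 GiB threshold are illustrative assumptions, not upstream-defined values, and in a live cluster the counters would come from the OSD's perf dump rather than a hard-coded dict:

```python
# Detect the symptom described in the bug: a bluefs log that has never
# been compacted and has grown far beyond a sane size.

def bluefs_log_needs_compaction(bluefs, max_log_bytes=4 * 1024**3):
    """Return True if the bluefs log looks uncompacted and oversized.

    max_log_bytes is an illustrative threshold (4 GiB), not a limit
    defined by Ceph itself.
    """
    return bluefs["log_compactions"] == 0 and bluefs["log_bytes"] > max_log_bytes

# Sample values taken from the perf counters quoted in the description.
sample = {
    "log_bytes": 866545131520,   # ~807 GiB
    "log_compactions": 0,        # never compacted
}

print(bluefs_log_needs_compaction(sample))  # -> True for the reported counters
```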
[Bug 1917494] Re: ceph-mgr hangs in large clusters
This has been merged into upstream master and backported to Octopus [0]; Octopus release 15.2.9 contains the fix. Proposed the backport for Nautilus: https://github.com/ceph/ceph/pull/40047

[0] https://github.com/ceph/ceph/pull/38801

** Changed in: ceph (Ubuntu)
   Importance: Undecided => High

** Changed in: ceph (Ubuntu)
   Assignee: (unassigned) => Ponnuvel Palaniyappan (pponnuvel)

** Tags added: sts
[Bug 1852441] Re: In bionic, one of the ceph packages installed causes chrony to auto-install even on lxd
Quoting Ante Karamtic:

"If ntp/chrony is removed from ceph-mon, then ntp charm goes into error state if it's installed on ceph-mon units. On other machines, ntp charm detects that it's in the container and then reports that it is a container and that there's nothing to do. In case of ceph-mon, now it goes into error state because chrony is not there. So ntp charm should update the status before it checks if chrony is installed."

** Also affects: ntp-charm
   Importance: Undecided
   Status: New

** Changed in: ntp-charm
   Assignee: (unassigned) => Ponnuvel Palaniyappan (pponnuvel)
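The ordering fix suggested in the quote above can be sketched as a toy model; `assess_status` and its return strings are hypothetical illustrations of the idea, not the actual ntp charm code:

```python
# Toy sketch of the suggested fix: report the "running in a container"
# status before checking whether chrony is installed, so that a missing
# chrony inside a container is not treated as an error.

def assess_status(in_container, chrony_installed):
    if in_container:
        # Container case is decided first, as the quote suggests.
        return "active: nothing to do in a container"
    if not chrony_installed:
        return "blocked: chrony is not installed"
    return "active"

print(assess_status(True, False))   # container without chrony is not an error
print(assess_status(False, False))  # bare machine without chrony is flagged
```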
[Bug 1915705] Re: Ceph block device permission gets wrongly reverted by udev
James Page suggested this might need to be done in the packaging as well. Once the fix is finalized in charm-ceph-osd, I'll update the udev rules in debian/udev/95-ceph-osd-lvm.rules.

** Also affects: ceph (Ubuntu)
   Importance: Undecided
   Status: New
[Bug 1917651] Re: IndependentPlugin is not working
Done both - thanks, Eric!
[Bug 1917651] Re: IndependentPlugin is not working
Attached debdiff for hirsute.

** Description changed:

  [IMPACT]

  Regression found while doing the verification testing of (LP: #1913284).
  As described by my colleague Ponnuvel:

  "The problem is that IndependentPlugin is not working in Focal's
  sosreport. The bcache plugin uses IndependentPlugin (as in, it's not
  tied to any specific distro). IndependentPlugin was broken at some
  point since Bionic and has been fixed upstream. (It's working on
  Bionic in the 3.x series.)

  sosreport | 3.9.1-1ubuntu0.18.04.3 | bionic-updates
  sosreport | 4.0-1~ubuntu0.20.04.4  | focal-proposed
  sosreport | 4.0-1ubuntu2.2         | groovy-proposed
  sosreport | 4.0-1ubuntu9           | hirsute

  However, the sosreport package on Focal, Groovy, and Hirsute all have
  the broken code."

  It currently impacts plugins relying on "IndependentPlugin" to run,
  such as the bcache plugin. "IndependentPlugin" is a class for plugins
  that can run on any platform.

  [TEST PLAN]

+ The patch includes a fix for the IndependentPlugin. Currently, the
+ bcache plugin uses the IndependentPlugin, so sosreport collection has
+ to be tested on a machine with a bcache deployment. The cherry-picked
+ commit includes a number of other changes, so the --all-logs and -a
+ options would need to be used to ensure there are no other breakages.

  [WHERE PROBLEM COULD OCCUR]

+ The IndependentPlugin may still not work - in that case, it'd only
+ affect the bcache plugin. Worse, the changes could affect other plugin
+ types and cause more regressions - affecting data collection for
+ multiple plugins.

  [OTHER INFORMATION]

  Upstream bug: https://github.com/sosreport/sos/pull/2018
  Upstream commit: https://github.com/sosreport/sos/commit/a36e1b83040f3f2c63912d4601f4b33821cd4afb

** Attachment added: "hirsute-debdiff.txt"
   https://bugs.launchpad.net/ubuntu/+source/sosreport/+bug/1917651/+attachment/5473096/+files/hirsute-debdiff.txt
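The class of bug being fixed here can be illustrated with a toy model of distro-gated plugin selection. This is a simplified sketch of the idea only - the class names besides IndependentPlugin and the `plugin_enabled` helper are hypothetical, not sos's actual class hierarchy or API:

```python
# Toy model: platform-independent plugins must be enabled regardless of
# which distro-specific policy is active. The regression described above
# amounts to only the distro check being applied.

class Plugin:
    pass

class UbuntuPlugin(Plugin):
    pass

class IndependentPlugin(Plugin):
    """Runs on any platform, like the bcache plugin described above."""

def plugin_enabled(plugin_cls, distro_plugin_cls):
    # Broken behaviour (the regression) would be only:
    #   return issubclass(plugin_cls, distro_plugin_cls)
    # Fixed behaviour: independent plugins are always eligible.
    return (issubclass(plugin_cls, IndependentPlugin)
            or issubclass(plugin_cls, distro_plugin_cls))

class Bcache(IndependentPlugin):
    """Stand-in for the bcache plugin from LP: #1913284."""

print(plugin_enabled(Bcache, UbuntuPlugin))  # -> True with the fix
```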
[Bug 1913284] Re: [plugin] [bcache] add a new bcache plugin
Tested using the patch from [0] and it works (= the generated sosreport contains the bcache stats). Attached the test sos report here. So once this is re-uploaded, I'll repeat the verification.

[0] https://bugs.launchpad.net/ubuntu/+source/sosreport/+bug/1917651/comments/3

** Attachment added: "sosreport-buneary-1913284-2021-03-03-iwdqcjs.tar.xz"
   https://bugs.launchpad.net/ubuntu/+source/sosreport/+bug/1913284/+attachment/5472234/+files/sosreport-buneary-1913284-2021-03-03-iwdqcjs.tar.xz
[Bug 1917651] Re: IndependentPlugin is not working
Attached debdiff for groovy.

** Attachment added: "groovy-debdiff.txt"
   https://bugs.launchpad.net/ubuntu/+source/sosreport/+bug/1917651/+attachment/5472233/+files/groovy-debdiff.txt
[Bug 1917651] Re: IndependentPlugin is not working
** Attachment added: "focal-debdiff.txt"
   https://bugs.launchpad.net/ubuntu/+source/sosreport/+bug/1917651/+attachment/5472231/+files/focal-debdiff.txt
[Bug 1917651] Re: IndependentPlugin is not working
Attaching debdiff for focal.

** Attachment added: "focal-debdiff.txt"
   https://bugs.launchpad.net/ubuntu/+source/sosreport/+bug/1917651/+attachment/5472232/+files/focal-debdiff.txt
[Bug 1913284] Re: [plugin] [bcache] add a new bcache plugin
Thanks, Eric. I'll update #1917651 with the debdiff.