Re: PPU upload rights application for sosreport

2023-07-07 Thread Nikhil Kshirsagar
Thank you very much!

Regards,
Nikhil.

On Tue, Jul 4, 2023 at 10:26 PM Utkarsh Gupta 
wrote:

> Hello again,
>
> On Fri, Jun 30, 2023 at 2:02 PM Utkarsh Gupta
>  wrote:
> > Nikhil, I've added you to ~ubuntu-dev and a TB member shall soon
> > adjust ACL for the sosreport package.
>
> This has been done by Robie already. You're good to go now. :)
>
>
> - u
>
-- 
ubuntu-devel mailing list
ubuntu-devel@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel


[Bug 1969000] Re: [SRU] mon crashes when improper json is passed to rados

2022-05-04 Thread nikhil kshirsagar
Attaching debdiff.

** Attachment added: "debdiff_1969000"
   
https://bugs.launchpad.net/ubuntu/+source/ceph/+bug/1969000/+attachment/5586550/+files/debdiff_1969000

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1969000

Title:
  [SRU] mon crashes when improper json is passed to rados

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1969000/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1969000] Re: [SRU] mon crashes when improper json is passed to rados

2022-04-14 Thread nikhil kshirsagar
** Also affects: ceph (Ubuntu Focal)
   Importance: Undecided
   Status: New

** Changed in: ceph (Ubuntu Focal)
   Importance: Undecided => Medium


[Bug 1969000] [NEW] [SRU] mon crashes when improper json is passed to rados

2022-04-14 Thread nikhil kshirsagar
Public bug reported:

[Impact]
If improper JSON data is passed to rados, it can end up crashing the mon.

[Test Plan]
The malformed request looks like:

curl -k -H "Authorization: Basic $TOKEN" \
"https://juju-3b3d82-10-lxd-0:8003/request" -X POST -d '{"prefix":"auth
add","entity":"client.testuser02","caps":"mon '\''allow r'\'' osd
'\''allow rw pool=testpool01'\''"}'

The request status shows it is still in the queue.

[
{
"failed": [],
"finished": [],
"has_failed": false,
"id": "140576245092648",
"is_finished": false,
"is_waiting": false,
"running": [
{
"command": "auth add entity=client.testuser02 caps=mon 'allow 
r' osd 'allow rw pool=testpool01'",
"outb": "",
"outs": "" 
}
],
"state": "pending",
"waiting": []
}
]
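The status JSON above can be checked programmatically; a minimal sketch (field names and the request id are taken from the output above):

```python
import json

# Status output captured from the restful endpoint above: the request
# stays queued because the mon that should service it has crashed.
status_json = """[
    {"failed": [], "finished": [], "has_failed": false,
     "id": "140576245092648", "is_finished": false, "is_waiting": false,
     "running": [], "state": "pending", "waiting": []}
]"""

# Collect the ids of requests that are stuck in the queue.
stuck = [r["id"] for r in json.loads(status_json)
         if r["state"] == "pending" and not r["is_finished"]]
print(stuck)  # ['140576245092648']
```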

[Where problems could occur]
No problems are foreseen: the exception is raised only for malformed JSON
data, and catching and handling it is preferable to letting an uncaught
exception terminate the process.
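The catch-and-handle behavior described above can be sketched as follows. This is an illustrative Python sketch of the pattern only, not the actual Ceph fix (which lives in the mon code); `handle_command` and its return shape are invented:

```python
import json

def handle_command(payload):
    """Parse a client payload defensively: malformed JSON is reported
    back to the caller instead of raising an exception that would take
    the whole daemon down."""
    try:
        cmd = json.loads(payload)
    except json.JSONDecodeError as exc:
        # Desired behavior: reject the request but keep the process alive.
        return {"status": "error", "reason": f"invalid JSON: {exc}"}
    return {"status": "ok", "command": cmd.get("prefix")}

# A malformed payload no longer crashes the handler:
print(handle_command('{"prefix":"auth add",')["status"])  # error
print(handle_command('{"prefix":"auth add"}')["status"])  # ok
```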

[Other Info]
Reported upstream at https://tracker.ceph.com/issues/54558 (including the
reproducer and fix-testing details) and fixed through
https://github.com/ceph/ceph/pull/45547

PR for Octopus is at https://github.com/ceph/ceph/pull/45891

** Affects: ceph (Ubuntu)
 Importance: Medium
 Status: New


[Bug 1960996] Re: [sru] sos upstream 4.3

2022-03-11 Thread nikhil kshirsagar
** Tags removed: verification-needed
** Tags added: verification-done

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1960996

Title:
  [sru] sos upstream 4.3

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/sosreport/+bug/1960996/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1962733] Re: [sru] sosreport does not obfuscate a mac address when --mask is used

2022-03-10 Thread nikhil kshirsagar
** Description changed:

- sos 4.3 seems to have a regression in mac address obfuscation. We found
- this in the ubuntu package testing. The file concerned seems to be the
- etc/netplan/50-cloud-init.yaml which seems to end up with an
- unobfuscated mac address inspite of using --mask flag to sos report
- command.
- 
- 
- 
- autopkgtest run shows,
- 
- Found 1 total reports to obfuscate, processing up to 4 concurrently
- 
- sosreport-autopkgtest-2022-03-02-kluxwcz : Beginning obfuscation...
- sosreport-autopkgtest-2022-03-02-kluxwcz : Obfuscation completed 
[removed 16 unprocessable files]
- 
- Successfully obfuscated 1 report(s)
- 
- Creating compressed archive...
- 
- A mapping of obfuscated elements is available at
-   /tmp/sosreport-host0-2022-03-02-kluxwcz-private_map
- 
- Your sosreport has been generated and saved in:
-   /tmp/sosreport-host0-2022-03-02-kluxwcz-obfuscated.tar.xz
- 
-  Size 2.28MiB
-  Ownerroot
-  sha256   42db961f8cde1aa72f78afbef825d7bd54884e76996f96ce657a37fca5e1fa44
- 
- Please send this file to your support representative.
- 
- ### end stdout
- ### start extraction
- ### stop extraction
- # DONE WITH --mask #
- !!! TEST FAILED: MAC address not obfuscated in all places !!!
- /tmp/sosreport_test/etc/netplan/50-cloud-init.yaml:
macaddress: '52:54:00:12:34:56'
+ [Impact]
+ 
+ sos 4.3 has a regression in mac address obfuscation. The file
+ etc/netplan/50-cloud-init.yaml ends up with an unobfuscated mac address
+ inspite of using --mask.
+ 
+ [TEST PLAN]
+ 
+ Documentation for Special Cases:
+ https://wiki.ubuntu.com/SosreportUpdates
+ 
+ [WHERE PROBLEMS COULD OCCUR]
+ 
+ Since we are changing the regex parser code in
+ sos/cleaner/parsers/mac_parser.py we would need to ensure no other regex
+ behavior is changed. The unit tests in autopkgtest will suffice to
+ determine that.
  
  -
- 
+ [Other Info]
+ 
+ Upstream issue is https://github.com/sosreport/sos/issues/2873
+ Upstream MR is https://github.com/sosreport/sos/pull/2875
+ 
+ Reproducer details:
  sos 4.2 shows correct behavior. testing shows..
  
  /etc/netplan/50-cloud-init.yaml contains
  
  network:
- ethernets:
- ens3:
- dhcp4: true
- match:
- macaddress: '52:54:00:12:34:56'
- set-name: ens3
- version: 2
- 
- 
- 4.2 sos contains the file but with the obfuscated mac address. correct 
behavior.
- 
- # This file is generated from information provided by the datasource.  Changes
- # to it will not persist across an instance reboot.  To disable cloud-init's
- # network configuration capabilities, write a file
- # /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg with the following:
- # network: {config: disabled}
- network:
- ethernets:
- ens3:
- dhcp4: true
- match:
- macaddress: '53:4f:53:cf:3a:9e'
- set-name: ens3
- version: 2
- 
+ ethernets:
+ ens3:
+ dhcp4: true
+ match:
+ macaddress: '52:54:00:12:34:56'
+ set-name: ens3
+ version: 2
+ 
+ 4.2 sos contains the file but with the obfuscated mac address. correct
+ behavior.
+ 
+ # This file is generated from information provided by the datasource.  Changes
+ # to it will not persist across an instance reboot.  To disable cloud-init's
+ # network configuration capabilities, write a file
+ # /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg with the following:
+ # network: {config: disabled}
+ network:
+ ethernets:
+ ens3:
+ dhcp4: true
+ match:
+ macaddress: '53:4f:53:cf:3a:9e'
+ set-name: ens3
+ version: 2
  
  --
  
  4.3 testing shows the bug,
  
  the /etc/netplan/50-cloud-init.yaml contains
  
  # This file is generated from information provided by the datasource.  Changes
  # to it will not persist across an instance reboot.  To disable cloud-init's
  # network configuration capabilities, write a file
  # /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg with the following:
  # network: {config: disabled}
  network:
- ethernets:
- ens3:
- dhcp4: true
- match:
- macaddress: '52:54:00:12:34:56'
- set-name: ens3
- version: 2
- 
+ ethernets:
+ ens3:
+ dhcp4: true
+ match:
+ macaddress: '52:54:00:12:34:56'
+ set-name: ens3
+ version: 2
  
  ---
  
  generated sosreport (run with --mask) contains
  
  # This file is generated from information provided by the datasource.  Changes
  # to it will not persist across an instance reboot.  To disable cloud-init's
  # network configuration capabilities, write a file
  # /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg with the following:
  # network: {config: disabled}
  network:
- ethernets:
- ens3:
-
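The regex concern noted in [WHERE PROBLEMS COULD OCCUR] above can be illustrated with a small sketch. The pattern below is hypothetical (it is not the code in sos/cleaner/parsers/mac_parser.py); the point is that a MAC address must still match when single-quoted, as netplan writes it:

```python
import re

# Hypothetical MAC matcher: the \b boundaries let an address match even
# when it is wrapped in single quotes, as in a netplan YAML file.
MAC_RE = re.compile(r"\b(?:[0-9a-fA-F]{2}:){5}[0-9a-fA-F]{2}\b")

def mask_macs(text, replacement="53:4f:53:00:00:01"):
    """Replace every MAC-looking token with an obfuscated placeholder."""
    return MAC_RE.sub(replacement, text)

line = "            macaddress: '52:54:00:12:34:56'"
print(mask_macs(line))  # the quoted MAC is replaced
```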

[Bug 1962733] Re: [sru] sosreport does not obfuscate a mac address even with --mask is used

2022-03-10 Thread nikhil kshirsagar
** Summary changed:

- sosreport does not obfuscate a mac address even with --mask is used
+ [sru] sosreport does not obfuscate a mac address even with --mask is used

** Summary changed:

- [sru] sosreport does not obfuscate a mac address even with --mask is used
+ [sru] sosreport does not obfuscate a mac address when --mask is used

** Changed in: sosreport (Ubuntu Impish)
Milestone: None => impish-updates

** Changed in: sosreport (Ubuntu Focal)
Milestone: None => focal-updates

** Changed in: sosreport (Ubuntu Bionic)
Milestone: None => bionic-updates

** Changed in: sosreport (Ubuntu)
Milestone: None => jammy-updates

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1962733

Title:
  [sru] sosreport does not obfuscate a mac address when --mask is used

To manage notifications about this bug go to:
https://bugs.launchpad.net/sosreport/+bug/1962733/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1960996] Re: [sru] sos upstream 4.3

2022-03-08 Thread nikhil kshirsagar
I have tested sos 4.3-1ubuntu0.21.10.1 on an impish VM and verified that
everything looks OK.

I also tested hotsos on the generated sos archive.

Some details of the testing done -
https://pastebin.canonical.com/p/Ns5x94MrJY/

I also used Jorge's script, plus an additional Python script I hacked
together, to brute-force search for missing files in the new sos version:
https://gist.github.com/drencrom/c455f7ff0e55b819a7a9f1901cc6cc60 and
https://gist.github.com/drencrom/c455f7ff0e55b819a7a9f1901cc6cc60?permalink_comment_id=4090486#gistcomment-4090486

** Tags removed: verification-needed-impish
** Tags added: verification-done-impish


[Bug 1960996] Re: [sru] sos upstream 4.3

2022-03-07 Thread nikhil kshirsagar
** Tags removed: verification-needed-bionic verification-needed-focal
** Tags added: verification-done-bionic verification-done-focal


[Bug 1960996] Re: [sru] sos upstream 4.3

2022-03-07 Thread nikhil kshirsagar
I have tested sos 4.3-1ubuntu0.20.04.2 on focal (baremetal) -
https://pastebin.canonical.com/p/RD9ThxMHX3/

I've verified the generated sosreport is reasonable. I've also tested
hotsos on the generated archive.

It looks good to me.

Regards,
Nikhil.


[Bug 1960996] Re: [sru] sos upstream 4.3

2022-03-06 Thread nikhil kshirsagar
Hi Lukasz,

I have tested bionic in a VM and also a bionic container and verified
the installed package is OK.

root@juju-677128-1-lxd-0:~# dpkg -l | grep sos
ii  sosreport  4.3-1ubuntu0.18.04.1amd64
Set of tools to gather troubleshooting data from a system
root@juju-677128-1-lxd-0:~# lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:Ubuntu 18.04.6 LTS
Release:18.04
Codename:   bionic

testing:
https://pastebin.canonical.com/p/bNbcGmS2yM/

The produced sos archive looks reasonable (some Ceph commands error out
because they are not present in Luminous; that is expected).

I also ran the latest hotsos on the produced bionic archive and verified it
works fine.
https://pastebin.canonical.com/p/QHhR4hSF6b/


[Bug 1960996] Re: [sru] sos upstream 4.3

2022-03-02 Thread nikhil kshirsagar
I have opened
https://bugs.launchpad.net/ubuntu/+source/sosreport/+bug/1962733 for the
issue flagged in the autopkgtests.

I've also reported it upstream since I was able to reproduce it with
upstream code as well - https://github.com/sosreport/sos/issues/2873

** Description changed:

  [IMPACT]
  
  The sos team is pleased to announce the release of sos-4.3. This release
  includes a number of quality-of-life changes to both end user experience
  and for contributors dealing with the plugin API.
  
  [TEST PLAN]
  
  Documentation for Special Cases:
  https://wiki.ubuntu.com/SosreportUpdates
  
  [WHERE PROBLEMS COULD OCCUR]
+ 
+ * Problem found and fixed during the packaging process:
+ 
+  ** sos-help module wasn't part of the build process
+  ** sos-help man page wasn't also not part of the build process nor mention 
in main sos man page
+ 
+ Bug:
+ https://github.com/sosreport/sos/issues/2860
+ 
+ Both commits of PR need to be part of 4.3 Ubuntu package:
+ https://github.com/sosreport/sos/pull/2861
+ 
+ Known issue:
+ https://bugs.launchpad.net/ubuntu/+source/sosreport/+bug/1962733
+ 
+ [OTHER INFORMATION]
  
  Regression could occur at core functionality, which may prevent sos (or
  its subcommand to work. I consider this regression type as 'low'. That
  is generally well tested, and we would find a problem at an early stage
  during the verification phase if it is the case.
  
  On the other end, regression could happen and are some kind of expected
  at plugins levels. As of today, sos has more than 300 plugins. It is
  nearly impossible to test them all.
  
  If a regression is found in a plugin, it is rarely affecting sos core
  functionalities nor other plugins. So mainly the impact would be limited
  to that plugin. The impact being that the plugin can't or partially can
  collect the information that it is instructed to gather.
  
  A 3rd party vendor would then ask user/customer to collect the
  information manually for that particular plugins.
  
  Plugins are segmented by services and/or applications (e.g.
  openstack_keystone, bcache, system, logs, ...) in order to collect
  things accordingly to the plugin detected or intentionally requested
  for.
  
  Sosreport plugins philosophy is to (as much as possible) maintain
  backward compatibility when updating a plugin. The risk that an ancient
  version of a software has been dropped, is unlikely, unless it was
  intended to be that way for particular reasons. Certain plugin also
  support the DEB installation way and the snap one (MAAS, LXD, ...) so
  all Ubuntu standard installation types are covered.
  
- * Problem found and fixed during the packaging process:
- 
-  ** sos-help module wasn't part of the build process
-  ** sos-help man page wasn't also not part of the build process nor mention 
in main sos man page
- 
- Bug:
- https://github.com/sosreport/sos/issues/2860
- 
- Both commits of PR need to be part of 4.3 Ubuntu package:
- https://github.com/sosreport/sos/pull/2861
- 
- Known issue:
- https://bugs.launchpad.net/ubuntu/+source/sosreport/+bug/1962733
- 
- [OTHER INFORMATION]
- 
  Release note:
  https://github.com/sosreport/sos/releases/tag/4.3

** Bug watch added: github.com/sosreport/sos/issues #2873
   https://github.com/sosreport/sos/issues/2873


[Bug 1960996] Re: [sru] sos upstream 4.3

2022-03-02 Thread nikhil kshirsagar
** Description changed:

  [IMPACT]
  
  The sos team is pleased to announce the release of sos-4.3. This release
  includes a number of quality-of-life changes to both end user experience
  and for contributors dealing with the plugin API.
  
  [TEST PLAN]
  
  Documentation for Special Cases:
  https://wiki.ubuntu.com/SosreportUpdates
  
- [WHERE PROBLEM COULD OCCUR]
+ [WHERE PROBLEMD COULD OCCUR]
  
  Regression could occur at core functionality, which may prevent sos (or
  its subcommand to work. I consider this regression type as 'low'. That
  is generally well tested, and we would find a problem at an early stage
  during the verification phase if it is the case.
  
  On the other end, regression could happen and are some kind of expected
  at plugins levels. As of today, sos has more than 300 plugins. It is
  nearly impossible to test them all.
  
  If a regression is found in a plugin, it is rarely affecting sos core
  functionalities nor other plugins. So mainly the impact would be limited
  to that plugin. The impact being that the plugin can't or partially can
  collect the information that it is instructed to gather.
  
  A 3rd party vendor would then ask user/customer to collect the
  information manually for that particular plugins.
  
  Plugins are segmented by services and/or applications (e.g.
  openstack_keystone, bcache, system, logs, ...) in order to collect
  things accordingly to the plugin detected or intentionally requested
  for.
  
  Sosreport plugins philosophy is to (as much as possible) maintain
  backward compatibility when updating a plugin. The risk that an ancient
  version of a software has been dropped, is unlikely, unless it was
  intended to be that way for particular reasons. Certain plugin also
  support the DEB installation way and the snap one (MAAS, LXD, ...) so
  all Ubuntu standard installation types are covered.
  
  * Problem found and fixed during the packaging process:
  
   ** sos-help module wasn't part of the build process
   ** sos-help man page wasn't also not part of the build process nor mention 
in main sos man page
  
  Bug:
  https://github.com/sosreport/sos/issues/2860
  
  Both commits of PR need to be part of 4.3 Ubuntu package:
  https://github.com/sosreport/sos/pull/2861
  
  Known issue:
  https://bugs.launchpad.net/ubuntu/+source/sosreport/+bug/1962733
  
  [OTHER INFORMATION]
  
  Release note:
  https://github.com/sosreport/sos/releases/tag/4.3

** Description changed:

  [IMPACT]
  
  The sos team is pleased to announce the release of sos-4.3. This release
  includes a number of quality-of-life changes to both end user experience
  and for contributors dealing with the plugin API.
  
  [TEST PLAN]
  
  Documentation for Special Cases:
  https://wiki.ubuntu.com/SosreportUpdates
  
- [WHERE PROBLEMD COULD OCCUR]
+ [WHERE PROBLEMS COULD OCCUR]
  
  Regression could occur at core functionality, which may prevent sos (or
  its subcommand to work. I consider this regression type as 'low'. That
  is generally well tested, and we would find a problem at an early stage
  during the verification phase if it is the case.
  
  On the other end, regression could happen and are some kind of expected
  at plugins levels. As of today, sos has more than 300 plugins. It is
  nearly impossible to test them all.
  
  If a regression is found in a plugin, it is rarely affecting sos core
  functionalities nor other plugins. So mainly the impact would be limited
  to that plugin. The impact being that the plugin can't or partially can
  collect the information that it is instructed to gather.
  
  A 3rd party vendor would then ask user/customer to collect the
  information manually for that particular plugins.
  
  Plugins are segmented by services and/or applications (e.g.
  openstack_keystone, bcache, system, logs, ...) in order to collect
  things accordingly to the plugin detected or intentionally requested
  for.
  
  Sosreport plugins philosophy is to (as much as possible) maintain
  backward compatibility when updating a plugin. The risk that an ancient
  version of a software has been dropped, is unlikely, unless it was
  intended to be that way for particular reasons. Certain plugin also
  support the DEB installation way and the snap one (MAAS, LXD, ...) so
  all Ubuntu standard installation types are covered.
  
  * Problem found and fixed during the packaging process:
  
   ** sos-help module wasn't part of the build process
   ** sos-help man page wasn't also not part of the build process nor mention 
in main sos man page
  
  Bug:
  https://github.com/sosreport/sos/issues/2860
  
  Both commits of PR need to be part of 4.3 Ubuntu package:
  https://github.com/sosreport/sos/pull/2861
  
  Known issue:
  https://bugs.launchpad.net/ubuntu/+source/sosreport/+bug/1962733
  
  [OTHER INFORMATION]
  
  Release note:
  https://github.com/sosreport/sos/releases/tag/4.3


[Bug 1960996] Re: [sru] sos upstream 4.3

2022-03-02 Thread nikhil kshirsagar
** Description changed:

  [IMPACT]
  
  The sos team is pleased to announce the release of sos-4.3. This release
  includes a number of quality-of-life changes to both end user experience
  and for contributors dealing with the plugin API.
  
  [TEST PLAN]
  
  Documentation for Special Cases:
  https://wiki.ubuntu.com/SosreportUpdates
  
  [WHERE PROBLEM COULD OCCUR]
  
  Regression could occur at core functionality, which may prevent sos (or
  its subcommand to work. I consider this regression type as 'low'. That
  is generally well tested, and we would find a problem at an early stage
  during the verification phase if it is the case.
  
  On the other end, regression could happen and are some kind of expected
  at plugins levels. As of today, sos has more than 300 plugins. It is
  nearly impossible to test them all.
  
  If a regression is found in a plugin, it is rarely affecting sos core
  functionalities nor other plugins. So mainly the impact would be limited
  to that plugin. The impact being that the plugin can't or partially can
  collect the information that it is instructed to gather.
  
  A 3rd party vendor would then ask user/customer to collect the
  information manually for that particular plugins.
  
  Plugins are segmented by services and/or applications (e.g.
  openstack_keystone, bcache, system, logs, ...) in order to collect
  things accordingly to the plugin detected or intentionally requested
  for.
  
  Sosreport plugins philosophy is to (as much as possible) maintain
  backward compatibility when updating a plugin. The risk that an ancient
  version of a software has been dropped, is unlikely, unless it was
  intended to be that way for particular reasons. Certain plugin also
  support the DEB installation way and the snap one (MAAS, LXD, ...) so
  all Ubuntu standard installation types are covered.
  
  * Problem found and fixed during the packaging process:
  
   ** sos-help module wasn't part of the build process
   ** sos-help man page wasn't also not part of the build process nor mention 
in main sos man page
  
  Bug:
  https://github.com/sosreport/sos/issues/2860
  
  Both commits of PR need to be part of 4.3 Ubuntu package:
  https://github.com/sosreport/sos/pull/2861
  
+ Known issue:
+ https://bugs.launchpad.net/ubuntu/+source/sosreport/+bug/1962733
+ 
  [OTHER INFORMATION]
  
  Release note:
  https://github.com/sosreport/sos/releases/tag/4.3


[Bug 1962733] [NEW] sosreport does not obfuscate a mac address even with --mask is used

2022-03-02 Thread nikhil kshirsagar
Your sosreport has been generated and saved in:
/tmp/sosreport-host0-2022-03-02-abhwscl-obfuscated.tar.xz

 Size   2.27MiB
 Owner  root
 sha256 e9d19933cfed512a59790edf65f70a0139f8da162f406153c298bb093bfbd939

Please send this file to your support representative.


Let's open the file and see whether the MAC address in it is left unobfuscated:


root@autopkgtest:/tmp# cat 
sosreport-host0-2022-03-02-abhwscl/etc/netplan/50-cloud-init.yaml 
# This file is generated from information provided by the datasource.  Changes
# to it will not persist across an instance reboot.  To disable cloud-init's
# network configuration capabilities, write a file
# /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg with the following:
# network: {config: disabled}
network:
ethernets:
ens3:
dhcp4: true
match:
macaddress: '52:54:00:12:34:56'
set-name: ens3
version: 2
root@autopkgtest:/tmp# 

Note,

root@autopkgtest:/tmp# ls -lrt
total 9448
drwx--  3 root root4096 Mar  2 15:48 
systemd-private-e709306472c1435993a0b8d1f15e9dd3-systemd-resolved.service-7kMEUf
drwx--  3 root root4096 Mar  2 15:48 
systemd-private-e709306472c1435993a0b8d1f15e9dd3-systemd-timesyncd.service-FqCM6e
drwx--  3 root root4096 Mar  2 15:48 
systemd-private-e709306472c1435993a0b8d1f15e9dd3-systemd-logind.service-xFJpBh
drwx--  3 root root4096 Mar  2 15:48 
systemd-private-e709306472c1435993a0b8d1f15e9dd3-ModemManager.service-x5UZXh
-rwxr-xr-x  1 root root 691 Mar  2 15:48 eofcat
-rwxr-xr-x  1 root root 285 Mar  2 15:48 autopkgtest-reboot
-rwxr-xr-x  1 root root 269 Mar  2 15:48 autopkgtest-reboot-prepare
drwxrwxrwt  5 root root4096 Mar  2 15:48 autopkgtest.RixDKr
drwx-- 10 root root4096 Mar  2 15:48 
sosreport-autopkgtest-2022-03-02-zwngejm
drwx--  3 root root4096 Mar  2 15:48 
systemd-private-e709306472c1435993a0b8d1f15e9dd3-fwupd.service-Zasqxf
drwx--  3 root root4096 Mar  2 15:48 
systemd-private-e709306472c1435993a0b8d1f15e9dd3-upower.service-qb69Wg
-rw---  1 root root1645 Mar  2 15:49 
sosreport-host0-2022-03-02-bwcteqj-private_map
-rw---  1 root root 2389116 Mar  2 15:50 
sosreport-host0-2022-03-02-bwcteqj-obfuscated.tar.xz
drwxr-xr-x 12 root root4096 Mar  2 15:50 sosreport_test
drwx-- 12 root root4096 Mar  2 15:55 
sosreport-autopkgtest-2022-03-02-nwzytde
-rw---  1 root root 2409380 Mar  2 15:55 
sosreport-autopkgtest-2022-03-02-nwzytde.tar.xz
-rw-r--r--  1 root root  65 Mar  2 15:55 
sosreport-autopkgtest-2022-03-02-nwzytde.tar.xz.sha256
-rw---  1 root root 2411848 Mar  2 15:58 
sosreport-autopkgtest-2022-03-02-hkqkbak.tar.xz
-rw-r--r--  1 root root  65 Mar  2 15:58 
sosreport-autopkgtest-2022-03-02-hkqkbak.tar.xz.sha256
drwx-- 12 root root4096 Mar  2 15:58 sosreport-host0-2022-03-02-abhwscl
-rw---  1 root root1645 Mar  2 15:59 
sosreport-host0-2022-03-02-abhwscl-private_map <---
-rw---  1 root root 2378324 Mar  2 15:59 
sosreport-host0-2022-03-02-abhwscl-obfuscated.tar.xz
-rw---  1 root root  65 Mar  2 15:59 
sosreport-host0-2022-03-02-abhwscl-obfuscated.tar.xz.sha256


root@autopkgtest:/tmp# cat sosreport-host0-2022-03-02-abhwscl-private_map 
{
"hostname_map": {
"autopkgtest": "host0"
},
"ip_map": {
"10.0.2.0/24": "100.0.0.0/24",
"10.0.2.15/24": "100.0.0.1/24",
"10.0.2.255": "100.0.0.255",
"10.0.2.3": "100.0.0.2/24",
"91.189.89.198": "33.43.50.21",
"5.4.0.102": "80.74.90.96",
"5.4.0.100": "69.87.15.65",
"5.4.0.26": "13.16.68.51",
"224.0.0.1": "92.20.91.63",
"91.189.94.4": "42.38.68.46",
"3.192.30.10": "93.87.22.28",
"5.4.0.99": "37.44.72.50",
"10.0.2.0/28": "101.0.0.1/28",
"10.0.2.0/30": "102.0.0.1/30",
"192.168.200.1": "37.72.13.85",
"192.168.200.4": "19.35.86.99",
"192.168.200.9": "39.80.73.13",
"192.168.201.0/24": "103.0.0.1/24",
"192.168.201.0/25": "104.0.0.1/25",
"224.0.0.251": "19.45.84.66",
"239.255.255.250": "93.52.70.42",
"123.45.67.89": "92.20.45.84",
"192.168.0.133": "29.81.60.51"
},
"mac_map": {
"52:54:00:12:34:56": "53:4f:53:45:22:61", <- this mapping never made it
into the collected file
"33:33:00:00:00:16": "53:4f:53:63:ca:e1",
"33:33:00:00:00:02": "53:4f:53:46:bc:12",
"33

[Bug 1960996] Re: [sru] sos upstream 4.3

2022-02-28 Thread nikhil kshirsagar
Hello Lukasz,

Thank you for accepting the update.

The focal release already has the sos.conf changes because they landed in
4.2. However, 4.2 was not released for bionic
(https://bugs.launchpad.net/ubuntu/+source/sosreport/+bug/1941745 has it
as Confirmed but not Fix Released), so for bionic's 4.3 package I needed
to take the 4.2 changes as well as the new patches for 4.3. Hence the
additional patches.

I've mentioned in the changelog for bionic,

--

sosreport (4.3-1ubuntu0.18.04.1) bionic; urgency=medium

  * New 4.3 upstream. (LP: #1960996)

  * For more details, full release note is available here:
- https://github.com/sosreport/sos/releases/tag/4.3

  * New patches:
- d/p/0002-fix-setup-py.patch:
  Add python sos.help module, it was missed in
  upstream release.
- d/p/0003-mention-sos-help-in-sos-manpage.patch:
  Fix sos-help manpage.

  * Former patches, now fixed:
- d/p/0002-clean-prevent-parsing-ubuntu-user.patch
- d/p/0003-ubuntu-policy-fix-upload.patch
- d/p/0004-chrony-configuration-can-now-be-fragmented.patch
- d/p/0005-global-drop-plugin-version.patch
- d/p/0006-networking-check-presence-of-devlink.patch
- d/p/0007-sosnode-avoid-checksum-cleanup-if-no-archive.patch

  * d/control:
   - Add 'python3-coverage' as part of the build depends.  

  * d/rules:
   - Fix misplaced and duplicated sos.conf file in /usr/config. 

  * Remaining patches:
- d/p/0001-debian-change-tmp-dir-location.patch
 
--

Regards,
Nikhil.


[Bug 1960996] Re: [sru] sos upstream 4.3

2022-02-28 Thread nikhil kshirsagar
** Changed in: sosreport (Ubuntu Bionic)
   Importance: Low => Medium

** Changed in: sosreport (Ubuntu Bionic)
 Assignee: (unassigned) => nikhil kshirsagar (nkshirsagar)

** Changed in: sosreport (Ubuntu Bionic)
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1960996

Title:
  [sru] sos upstream 4.3

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/sosreport/+bug/1960996/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1960996] Re: [sru] sos upstream 4.3

2022-02-27 Thread nikhil kshirsagar
** Changed in: sosreport (Ubuntu Bionic)
 Assignee: nikhil kshirsagar (nkshirsagar) => (unassigned)

** Changed in: sosreport (Ubuntu Bionic)
   Importance: Medium => Low

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1960996

Title:
  [sru] sos upstream 4.3

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/sosreport/+bug/1960996/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1960996] Re: [sru] sos upstream 4.3

2022-02-18 Thread nikhil kshirsagar
** Changed in: sosreport (Ubuntu Impish)
   Status: New => In Progress

** Changed in: sosreport (Ubuntu Focal)
   Status: New => In Progress

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1960996

Title:
  [sru] sos upstream 4.3

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/sosreport/+bug/1960996/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1960996] Re: [sru] sos upstream 4.3

2022-02-18 Thread nikhil kshirsagar
Focal debdiff attached.

Regards,
Nikhil.

** Attachment added: "debdiff_focal_sos4.3"
   
https://bugs.launchpad.net/ubuntu/+source/sosreport/+bug/1960996/+attachment/5561799/+files/debdiff_focal_sos4.3

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1960996

Title:
  [sru] sos upstream 4.3

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/sosreport/+bug/1960996/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1960996] [NEW] [sru] sos upstream 4.3

2022-02-15 Thread nikhil kshirsagar
Public bug reported:

[IMPACT]

The sos team is pleased to announce the release of sos-4.3. This release
includes a number of quality-of-life changes to both end user experience
and for contributors dealing with the plugin API.

[TEST PLAN]

Documentation for Special Cases:
https://wiki.ubuntu.com/SosreportUpdates

[WHERE PROBLEM COULD OCCUR]

Regression could occur in core functionality, which may prevent sos (or
one of its subcommands) from working. I consider this regression type
as 'low': core functionality is generally well tested, and we would
find a problem at an early stage during the verification phase if that
were the case.

On the other hand, regressions can happen, and are somewhat expected,
at the plugin level. As of today, sos has more than 300 plugins; it is
nearly impossible to test them all.

If a regression is found in a plugin, it rarely affects sos core
functionality or other plugins, so the impact would mainly be limited
to that plugin: it cannot collect, or can only partially collect, the
information it is instructed to gather.

A 3rd party vendor would then ask the user/customer to collect the
information manually for that particular plugin.

Plugins are segmented by services and/or applications (e.g.
openstack_keystone, bcache, system, logs, ...) in order to collect
things according to the plugins detected or intentionally requested.

Sosreport's plugin philosophy is to maintain backward compatibility
(as much as possible) when updating a plugin. The risk that support for
an ancient version of a piece of software has been dropped is low,
unless it was intended to be that way for particular reasons. Certain
plugins also support both the DEB installation and the snap one (MAAS,
LXD, ...), so all standard Ubuntu installation types are covered.

[OTHER INFORMATION]

Release note: 
https://github.com/sosreport/sos/releases/tag/4.3

** Affects: sosreport (Ubuntu)
 Importance: Undecided
 Status: New

** Affects: sosreport (Ubuntu Bionic)
 Importance: Undecided
 Status: New

** Affects: sosreport (Ubuntu Focal)
 Importance: Undecided
 Status: New

** Affects: sosreport (Ubuntu Impish)
 Importance: Undecided
 Status: New

** Affects: sosreport (Ubuntu Jammy)
 Importance: Undecided
 Status: New

** Also affects: sosreport (Ubuntu Bionic)
   Importance: Undecided
   Status: New

** Also affects: sosreport (Ubuntu Focal)
   Importance: Undecided
   Status: New

** Also affects: sosreport (Ubuntu Jammy)
   Importance: Undecided
   Status: New

** Also affects: sosreport (Ubuntu Impish)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1960996

Title:
  [sru] sos upstream 4.3

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/sosreport/+bug/1960996/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1959649] Re: BlueFS spillover detected for particular OSDs

2022-02-03 Thread nikhil kshirsagar
** Description changed:

  This is an issue described in https://tracker.ceph.com/issues/38745,
  where ceph health details shows messages like,
  
  sudo ceph health detail
  HEALTH_WARN 3 OSD(s) experiencing BlueFS spillover; mon juju-6879b7-6-lxd-1 
is low on available space
  [WRN] BLUEFS_SPILLOVER: 3 OSD(s) experiencing BlueFS spillover <---
  osd.41 spilled over 66 MiB metadata from 'db' device (3.0 GiB used of 29 GiB) 
to slow device
  osd.96 spilled over 461 MiB metadata from 'db' device (3.0 GiB used of 29 
GiB) to slow device
  osd.105 spilled over 198 MiB metadata from 'db' device (3.0 GiB used of 29 
GiB) to slow device
  
  The bluefs spillover is very likely caused because of the rocksdb's
  level-sized issue.
  
  https://docs.ceph.com/en/latest/rados/configuration/bluestore-config-
  ref/#sizing has a statement about this leveled sizing.
  
  Between versions 15.2.6 and 15.2.10 , if the value of
  bluestore_volume_selection_policy is not set to use_some_extra, this
  issue can be faced inspite of free space available, due to the fact that
  RocksDB only uses "leveled" space on the NVME partition. The values are
  set to be 300MB, 3GB, 30GB and 300GB. Every DB space above such a limit
  will automatically end up on slow devices.
  
  There is also a discussion at www.mail-archive.com/ceph-
  us...@ceph.io/msg05782.html
  
  Running compaction on the database, i.e ceph tell osd.XX compact
  (replace XX with the OSD number) can work around the issue, but the best
  fix is to either,
  
- I am also pasting some notes Dongdong mentions on SF case 00326782,
- where the fix is to either,
+ I am also pasting some notes Dongdong mentions, where the fix is to
+ either,
  
  A. Redeploy the OSDs with a larger DB lvm/partition.
  
  OR
  
  B. Migrate to a new larger DB lvm/partition, this can be done offline
  with ceph-volume lvm migrate, please refer to
  https://docs.ceph.com/en/octopus/ceph-volume/lvm/migrate/ but it
  requires to upgrade the cluster to 15.2.14 first.
  
  A will be much safer, but more time-consuming. B will be much faster,
  but its recommended to do it on one node first and wait/monitoring for a
  couple of weeks before moving forward.
  
  As mentioned above, to avoid running into the issue even with free space
  available, the value of bluestore_volume_selection_policy should be set
  to use_some_extra for all OSDs. 15.2.6 has
  bluestore_volume_selection_policy but the default was only set to
  use_some_extra 15.2.11 onwards. (https://tracker.ceph.com/issues/47053)

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1959649

Title:
  BlueFS spillover detected for particular OSDs

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/ceph/+bug/1959649/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1959649] Re: BlueFS spillover detected for particular OSDs

2022-02-01 Thread nikhil kshirsagar
(Not a bug; there is a workaround: set
bluestore_volume_selection_policy to use_some_extra.)

** Changed in: ceph (Ubuntu)
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1959649

Title:
  BlueFS spillover detected for particular OSDs

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/ceph/+bug/1959649/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1959649] [NEW] BlueFS spillover detected for particular OSDs

2022-01-31 Thread nikhil kshirsagar
Public bug reported:

This is an issue described in https://tracker.ceph.com/issues/38745,
where ceph health detail shows messages like,

sudo ceph health detail
HEALTH_WARN 3 OSD(s) experiencing BlueFS spillover; mon juju-6879b7-6-lxd-1 is 
low on available space
[WRN] BLUEFS_SPILLOVER: 3 OSD(s) experiencing BlueFS spillover <---
osd.41 spilled over 66 MiB metadata from 'db' device (3.0 GiB used of 29 GiB) 
to slow device
osd.96 spilled over 461 MiB metadata from 'db' device (3.0 GiB used of 29 GiB) 
to slow device
osd.105 spilled over 198 MiB metadata from 'db' device (3.0 GiB used of 29 GiB) 
to slow device

The BlueFS spillover is very likely caused by RocksDB's level-sizing
behaviour.

https://docs.ceph.com/en/latest/rados/configuration/bluestore-config-
ref/#sizing has a statement about this leveled sizing.

Between versions 15.2.6 and 15.2.10, if the value of
bluestore_volume_selection_policy is not set to use_some_extra, this
issue can occur in spite of free space being available, because RocksDB
only uses "leveled" space on the NVMe partition. The level sizes are
set to 300MB, 3GB, 30GB and 300GB; any DB space above such a limit
will automatically end up on slow devices.

There is also a discussion at www.mail-archive.com/ceph-
us...@ceph.io/msg05782.html

Running compaction on the database, i.e. ceph tell osd.XX compact
(replace XX with the OSD number), can work around the issue, but per
some notes Dongdong mentions on SF case 00326782, the best fix is to
either,

A. Redeploy the OSDs with a larger DB lvm/partition.

OR

B. Migrate to a new larger DB lvm/partition, this can be done offline
with ceph-volume lvm migrate, please refer to
https://docs.ceph.com/en/octopus/ceph-volume/lvm/migrate/ but it
requires upgrading the cluster to 15.2.14 first.

A will be much safer, but more time-consuming. B will be much faster,
but it's recommended to do it on one node first and wait/monitor for a
couple of weeks before moving forward.

As mentioned above, to avoid running into the issue even with free
space available, the value of bluestore_volume_selection_policy should
be set to use_some_extra for all OSDs. 15.2.6 introduced
bluestore_volume_selection_policy, but the default was only changed to
use_some_extra from 15.2.11 onwards. (https://tracker.ceph.com/issues/47053)
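The leveled placement described above can be illustrated with a small
sketch (this is not ceph code; the level sizes and the "whole levels
only" placement rule are assumptions taken from the description above):

```python
# Illustrative sketch, not ceph source: approximate RocksDB level sizes
# that BlueFS places on the fast DB device under the "leveled" policy.
LEVELS_BYTES = [300 * 1024**2, 3 * 1024**3, 30 * 1024**3, 300 * 1024**3]

def usable_db_bytes(db_partition_bytes):
    """Return how much of the DB device is actually used when only
    whole levels that fit are placed on it; the rest spills over."""
    used = 0
    for level in LEVELS_BYTES:
        if used + level > db_partition_bytes:
            break  # this level does not fit entirely, so it spills
        used += level
    return used

# A 29 GiB DB device only holds L0+L1 (~3.3 GiB), consistent with the
# "3.0 GiB used of 29 GiB" seen in the health output above.
print(usable_db_bytes(29 * 1024**3) / 1024**3)
```

This is also why enlarging the DB device only helps once it crosses the
next level boundary, and why use_some_extra lets BlueFS use the space
in between.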

** Affects: ceph (Ubuntu)
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1959649

Title:
  BlueFS spillover detected for particular OSDs

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/ceph/+bug/1959649/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1946211] Re: [SRU] "radosgw-admin bucket limit check" has duplicate entries if bucket count exceeds 1000 (max_entries)

2021-11-01 Thread nikhil kshirsagar
Attaching debdiff built on focal for octopus


** Attachment added: "debdiff_1946211_focal"
   
https://bugs.launchpad.net/ubuntu/+source/ceph/+bug/1946211/+attachment/5537457/+files/debdiff_1946211_focal

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1946211

Title:
  [SRU] "radosgw-admin bucket limit check" has duplicate entries if
  bucket count exceeds 1000 (max_entries)

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1946211/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1946211] Re: [SRU] "radosgw-admin bucket limit check" has duplicate entries if bucket count exceeds 1000 (max_entries)

2021-11-01 Thread nikhil kshirsagar
parse 15.2.14-0ubuntu0.20.04.3 amd64 Python 3 utility 
libraries for Ceph CLI
  ii python3-ceph-common 15.2.14-0ubuntu0.20.04.3 all Python 3 utility 
libraries for Ceph
  ii python3-cephfs 15.2.14-0ubuntu0.20.04.3 amd64 Python 3 libraries for the 
Ceph libcephfs library
  ubuntu@crush-ceph-rgw01:~$ sudo apt-cache policy ceph
  ceph:
  Installed: 15.2.14-0ubuntu0.20.04.3
  Candidate: 15.2.14-0ubuntu0.20.04.3
  
  $ sudo radosgw-admin bucket list | jq .[] | wc -l
  5572
  $ sudo radosgw-admin bucket limit check | jq .[].buckets[].bucket | wc -l
  5572
  $ sudo radosgw-admin bucket limit check | jq '.[].buckets[] | 
select(.bucket=="bucket_1095")'
  {
  "bucket": "bucket_1095",
  "tenant": "",
  "num_objects": 5,
  "num_shards": 3,
  "objects_per_shard": 1,
  "fill_status": "OK"
  }
+ 
+ --
+ 
+ [Impact]
+ 
+ duplicated bucket name entries appear in the customers outputs when they
+ script the `radosgw-admin bucket limit check` commands.
+ 
+ To reproduce:
+ 
+ Create more than 1000 (default value of max_entries) buckets in a
+ cluster, and run 'radosgw-admin bucket limit check'
+ 
+ Duplicated entries are seen in the output on Octopus. For example,
+ 
+ $ sudo radosgw-admin bucket list | jq .[] | wc -l
+ 5572
+ 
+ $ sudo radosgw-admin bucket limit check | jq .[].buckets[].bucket | wc -l
+ 20572
+ 
+ [Test case]
+ 
+ Create more than 1000 buckets in a cluster, then run the 'radosgw-admin
+ bucket limit check' command. There should be no duplicated entries in
+ the output. Below is correct output, where the numbers match.
+ 
+ $ sudo radosgw-admin bucket limit check | jq .[].buckets[].bucket | wc -l
+ 5572
+ 
+ $ sudo radosgw-admin bucket list | jq .[] | wc -l
+ 5572
+ 
+ [Where problems could occur]
+ 
+ The duplicate entries could end up causing admins or even scripts to
+ assume that there are more buckets than there really are.
+ 
+ [Other Info]
+ - The patch was provided by Nikhil Kshirsagar (attached here)
+ - Upstream tracker: https://tracker.ceph.com/issues/52813
+ - Upstream PR: https://github.com/ceph/ceph/pull/43381
+ - Patched into Octopus upstream release.

** Tags added: sts-sru-needed

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1946211

Title:
  [SRU] "radosgw-admin bucket limit check" has duplicate entries if
  bucket count exceeds 1000 (max_entries)

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1946211/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1946211] Re: [SRU] "radosgw-admin bucket limit check" has duplicate entries if bucket count exceeds 1000 (max_entries)

2021-10-06 Thread nikhil kshirsagar
** Also affects: cloud-archive/ussuri
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1946211

Title:
  [SRU] "radosgw-admin bucket limit check" has duplicate entries if
  bucket count exceeds 1000 (max_entries)

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1946211/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1946211] Re: [SRU] "radosgw-admin bucket limit check" has duplicate entries if bucket count exceeds 1000 (max_entries)

2021-10-06 Thread nikhil kshirsagar
** Also affects: cloud-archive
   Importance: Undecided
   Status: New

** Also affects: ceph (Ubuntu Focal)
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1946211

Title:
  [SRU] "radosgw-admin bucket limit check" has duplicate entries if
  bucket count exceeds 1000 (max_entries)

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1946211/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1946211] [NEW] [SRU] "radosgw-admin bucket limit check" has duplicate entries if bucket count exceeds 1000 (max_entries)

2021-10-06 Thread nikhil kshirsagar
Public bug reported:

The "radosgw-admin bucket limit check" command has a bug in octopus.

Since we do not clear the bucket list in RGWRadosUser::list_buckets()
before asking for the next "max_entries", they are appended to the
existing list and we end up counting the first ones again. This causes
duplicated entries in the output of "radosgw-admin bucket limit check".

This bug is triggered if bucket count exceeds 1000 (default
max_entries).

--

$ dpkg -l | grep ceph
ii ceph 15.2.12-0ubuntu0.20.04.1 amd64 distributed storage and file system
ii ceph-base 15.2.12-0ubuntu0.20.04.1 amd64 common ceph daemon libraries and 
management tools
ii ceph-common 15.2.12-0ubuntu0.20.04.1 amd64 common utilities to mount and 
interact with a ceph storage cluster
ii ceph-mds 15.2.12-0ubuntu0.20.04.1 amd64 metadata server for the ceph 
distributed file system
ii ceph-mgr 15.2.12-0ubuntu0.20.04.1 amd64 manager for the ceph distributed 
file system
ii ceph-mgr-modules-core 15.2.12-0ubuntu0.20.04.1 all ceph manager modules 
which are always enabled
ii ceph-mon 15.2.12-0ubuntu0.20.04.1 amd64 monitor server for the ceph storage 
system
ii ceph-osd 15.2.12-0ubuntu0.20.04.1 amd64 OSD server for the ceph storage 
system
ii libcephfs2 15.2.12-0ubuntu0.20.04.1 amd64 Ceph distributed file system 
client library
ii python3-ceph-argparse 15.2.12-0ubuntu0.20.04.1 amd64 Python 3 utility 
libraries for Ceph CLI
ii python3-ceph-common 15.2.12-0ubuntu0.20.04.1 all Python 3 utility libraries 
for Ceph
ii python3-cephfs 15.2.12-0ubuntu0.20.04.1 amd64 Python 3 libraries for the 
Ceph libcephfs library

$ sudo radosgw-admin bucket list | jq .[] | wc -l
5572
$ sudo radosgw-admin bucket limit check | jq .[].buckets[].bucket | wc -l
20572
$ sudo radosgw-admin bucket limit check | jq '.[].buckets[] | 
select(.bucket=="bucket_1095")'
{
"bucket": "bucket_1095",
"tenant": "",
"num_objects": 5,
"num_shards": 3,
"objects_per_shard": 1,
"fill_status": "OK"
}
{
"bucket": "bucket_1095",
"tenant": "",
"num_objects": 5,
"num_shards": 3,
"objects_per_shard": 1,
"fill_status": "OK"
}
{
"bucket": "bucket_1095",
"tenant": "",
"num_objects": 5,
"num_shards": 3,
"objects_per_shard": 1,
"fill_status": "OK"
}
{
"bucket": "bucket_1095",
"tenant": "",
"num_objects": 5,
"num_shards": 3,
"objects_per_shard": 1,
"fill_status": "OK"
}
{
"bucket": "bucket_1095",
"tenant": "",
"num_objects": 5,
"num_shards": 3,
"objects_per_shard": 1,
"fill_status": "OK"
}
{
"bucket": "bucket_1095",
"tenant": "",
"num_objects": 5,
"num_shards": 3,
"objects_per_shard": 1,
"fill_status": "OK"
}

--

Fix proposed through https://github.com/ceph/ceph/pull/43381

diff --git a/src/rgw/rgw_sal.cc b/src/rgw/rgw_sal.cc
index 2b7a313ed91..65880a4757f 100644
--- a/src/rgw/rgw_sal.cc
+++ b/src/rgw/rgw_sal.cc
@@ -35,6 +35,7 @@ int RGWRadosUser::list_buckets(const string& marker, const 
string& end_marker,
   RGWUserBuckets ulist;
   bool is_truncated = false;
   int ret;
+  buckets.clear();

   ret = store->ctl()->user->list_buckets(info.user_id, marker, end_marker, max,
 need_stats, &ulist, &is_truncated);

--
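The effect of the missing clear can be sketched in a few lines of
illustrative Python (hypothetical names; not the actual RGW code):

```python
# Illustrative sketch of the pagination bug: the caller reuses one
# output list across paginated backend calls, and without a clear()
# the previous page's entries are counted again.
def list_all(pages, buckets, clear_first=True):
    result = []
    for page in pages:
        if clear_first:
            buckets.clear()   # the fix: reset before filling
        buckets.extend(page)  # backend appends into the shared list
        result.extend(buckets)
    return result

pages = [["b1", "b2"], ["b3"]]
buggy = list_all(pages, [], clear_first=False)  # ['b1','b2','b1','b2','b3']
fixed = list_all(pages, [], clear_first=True)   # ['b1','b2','b3']
```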

tested and verified the fix works:

$ sudo dpkg -l | grep ceph
ii ceph 15.2.14-0ubuntu0.20.04.3 amd64 distributed storage and file system
ii ceph-base 15.2.14-0ubuntu0.20.04.3 amd64 common ceph daemon libraries and 
management tools
ii ceph-common 15.2.14-0ubuntu0.20.04.3 amd64 common utilities to mount and 
interact with a ceph storage cluster
ii ceph-mds 15.2.14-0ubuntu0.20.04.3 amd64 metadata server for the ceph 
distributed file system
ii ceph-mgr 15.2.14-0ubuntu0.20.04.3 amd64 manager for the ceph distributed 
file system
ii ceph-mgr-modules-core 15.2.14-0ubuntu0.20.04.3 all ceph manager modules 
which are always enabled
ii ceph-mon 15.2.14-0ubuntu0.20.04.3 amd64 monitor server for the ceph storage 
system
ii ceph-osd 15.2.14-0ubuntu0.20.04.3 amd64 OSD server for the ceph storage 
system
ii libcephfs2 15.2.14-0ubuntu0.20.04.3 amd64 Ceph distributed file system 
client library
ii python3-ceph-argparse 15.2.14-0ubuntu0.20.04.3 amd64 Python 3 utility 
libraries for Ceph CLI
ii python3-ceph-common 15.2.14-0ubuntu0.20.04.3 all Python 3 utility libraries 
for Ceph
ii python3-cephfs 15.2.14-0ubuntu0.20.04.3 amd64 Python 3 l

[Bug 1940456] Re: [SRU] radosgw-admin's diagnostics are confusing if user data exists

2021-08-20 Thread nikhil kshirsagar
Attaching debdiff built on focal for octopus


** Description changed:

  This is same as LP#1914584 but its original patch was wrong which was
  found out in SRU tests and that particular release went ahead without
  the patch. Since that LP is kind of in hard-to-follow/confusing states,
  opened this new bug to carry out the SRU work.
+ 
+ --
+ 
+ [Impact]
+ 
+ When creating a new S3 user, the error message is confusing if the email
+ address used is already associated with another S3 account.
+ 
+ To reproduce:
+ 
+ radosgw-admin user create --uid=foo --display-name="Foo test" 
--email=bar@domain.invalid
+ #[ success ]
+ radosgw-admin user create --uid=test --display-name="AN test" 
--email=bar@domain.invalid
+ could not create user: unable to parse parameters, user id mismatch, 
operation id: foo does not match: test
+ 
+ As a result, it's completely unclear what went wrong with the user
+ creation.
+ 
+ [Test case]
+ 
+ Create an S3 account via radosgw-admin. Then create another user but use
+ the same email address - it should provide a clear description of what
+ the problem is.
+ 
+ [Where problems could occur]
+ 
+ The new message may yet be unclear or could complain that an email
+ exists even though it doesn't exist (false positive). It's an improved
+ diagnostic by checking if the email id exists. Perhaps, user creation
+ might become problematic if the fix doesn't work.
+ 
+ [Other Info]
+ - The patch was provided by Ponnuvel Palaniyappan (attached here)
+ - Upstream tracker: https://tracker.ceph.com/issues/50554
+ - Upstream PR: https://github.com/ceph/ceph/pull/41065
+ - Backported to Pacific, and Octopus upstream releases.
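The improved diagnostic described above can be sketched as follows
(hypothetical names, not the actual radosgw code; it only illustrates
the clearer duplicate-email error):

```python
# Illustrative sketch: pre-check the email before creating the user so
# the failure message names the real conflict instead of a confusing
# "user id mismatch".
def create_user(users, uid, email):
    """users: dict mapping existing uid -> email."""
    for existing_uid, existing_email in users.items():
        if existing_email == email and existing_uid != uid:
            raise ValueError(
                f"could not create user: email {email} already in use "
                f"by user {existing_uid}")
    users[uid] = email
    return uid
```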

** Attachment added: "debdiff_1914584_focal"
   
https://bugs.launchpad.net/ubuntu/+source/ceph/+bug/1940456/+attachment/5519336/+files/debdiff_1914584_focal

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1940456

Title:
  [SRU] radosgw-admin's diagnostics are confusing if user data exists

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/ceph/+bug/1940456/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1914584] Re: [SRU] radosgw-admin user create error message confusing if user with email already exists

2021-08-06 Thread nikhil kshirsagar
Attaching debdiff built on focal for octopus

** Attachment added: "debdiff_1914584_focal"
   
https://bugs.launchpad.net/ubuntu/+source/ceph/+bug/1914584/+attachment/5516357/+files/debdiff_1914584_focal

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1914584

Title:
  [SRU] radosgw-admin user create error message confusing if user with
  email already exists

To manage notifications about this bug go to:
https://bugs.launchpad.net/ceph/+bug/1914584/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs