Public bug reported:

The "radosgw-admin bucket limit check" command has a bug in octopus.

Since we do not clear the bucket list in RGWRadosUser::list_buckets()
before fetching the next "max_entries" buckets, each new batch is
appended to the entries already in the list and the earlier buckets are
counted again. This causes duplicated entries in the output of
"radosgw-admin bucket limit check".

This bug is triggered when the bucket count exceeds 1000 (the default
max_entries).
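
To illustrate the pattern, here is a standalone sketch (plain C++, not
the actual radosgw code; all names are invented for the demonstration)
of a paginated list call that appends into its output parameter without
clearing it first, which is exactly the shape of the bug:

#include <algorithm>
#include <iostream>
#include <string>
#include <vector>

static const std::vector<std::string> all_buckets = {"b1", "b2", "b3", "b4", "b5"};

// Buggy: appends each page to `out` without clearing it first.
void list_page(std::size_t marker, std::size_t max,
               std::vector<std::string>& out, bool& truncated) {
  // out.clear();  // the missing call, analogous to buckets.clear() in the fix
  std::size_t end = std::min(marker + max, all_buckets.size());
  out.insert(out.end(), all_buckets.begin() + marker, all_buckets.begin() + end);
  truncated = end < all_buckets.size();
}

int main() {
  std::vector<std::string> page;
  std::size_t marker = 0, counted = 0;
  bool truncated = true;
  while (truncated) {
    list_page(marker, 2, page, truncated);  // "max_entries" = 2
    counted += page.size();                 // re-counts every earlier page
    marker += 2;
  }
  std::cout << counted << "\n";  // prints 11 (2 + 4 + 5), not 5
  return 0;
}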

------

$ dpkg -l | grep ceph
ii ceph 15.2.12-0ubuntu0.20.04.1 amd64 distributed storage and file system
ii ceph-base 15.2.12-0ubuntu0.20.04.1 amd64 common ceph daemon libraries and management tools
ii ceph-common 15.2.12-0ubuntu0.20.04.1 amd64 common utilities to mount and interact with a ceph storage cluster
ii ceph-mds 15.2.12-0ubuntu0.20.04.1 amd64 metadata server for the ceph distributed file system
ii ceph-mgr 15.2.12-0ubuntu0.20.04.1 amd64 manager for the ceph distributed file system
ii ceph-mgr-modules-core 15.2.12-0ubuntu0.20.04.1 all ceph manager modules which are always enabled
ii ceph-mon 15.2.12-0ubuntu0.20.04.1 amd64 monitor server for the ceph storage system
ii ceph-osd 15.2.12-0ubuntu0.20.04.1 amd64 OSD server for the ceph storage system
ii libcephfs2 15.2.12-0ubuntu0.20.04.1 amd64 Ceph distributed file system client library
ii python3-ceph-argparse 15.2.12-0ubuntu0.20.04.1 amd64 Python 3 utility libraries for Ceph CLI
ii python3-ceph-common 15.2.12-0ubuntu0.20.04.1 all Python 3 utility libraries for Ceph
ii python3-cephfs 15.2.12-0ubuntu0.20.04.1 amd64 Python 3 libraries for the Ceph libcephfs library

$ sudo radosgw-admin bucket list | jq .[] | wc -l
5572
$ sudo radosgw-admin bucket limit check | jq .[].buckets[].bucket | wc -l
20572
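
(The inflated total is consistent with the appending behaviour:
assuming the limit check walks the accumulated list after each call,
with the default max_entries of 1000 it visits
1000 + 2000 + 3000 + 4000 + 5000 + 5572 = 20572 entries, and a bucket
returned in the first page, such as bucket_1095 below, is reported six
times.)
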
$ sudo radosgw-admin bucket limit check | jq '.[].buckets[] | select(.bucket=="bucket_1095")'
{
  "bucket": "bucket_1095",
  "tenant": "",
  "num_objects": 5,
  "num_shards": 3,
  "objects_per_shard": 1,
  "fill_status": "OK"
}
{
  "bucket": "bucket_1095",
  "tenant": "",
  "num_objects": 5,
  "num_shards": 3,
  "objects_per_shard": 1,
  "fill_status": "OK"
}
{
  "bucket": "bucket_1095",
  "tenant": "",
  "num_objects": 5,
  "num_shards": 3,
  "objects_per_shard": 1,
  "fill_status": "OK"
}
{
  "bucket": "bucket_1095",
  "tenant": "",
  "num_objects": 5,
  "num_shards": 3,
  "objects_per_shard": 1,
  "fill_status": "OK"
}
{
  "bucket": "bucket_1095",
  "tenant": "",
  "num_objects": 5,
  "num_shards": 3,
  "objects_per_shard": 1,
  "fill_status": "OK"
}
{
  "bucket": "bucket_1095",
  "tenant": "",
  "num_objects": 5,
  "num_shards": 3,
  "objects_per_shard": 1,
  "fill_status": "OK"
}

------------------------------------------------------------------------------

Fix proposed upstream in https://github.com/ceph/ceph/pull/43381:

diff --git a/src/rgw/rgw_sal.cc b/src/rgw/rgw_sal.cc
index 2b7a313ed91..65880a4757f 100644
--- a/src/rgw/rgw_sal.cc
+++ b/src/rgw/rgw_sal.cc
@@ -35,6 +35,7 @@ int RGWRadosUser::list_buckets(const string& marker, const string& end_marker,
   RGWUserBuckets ulist;
   bool is_truncated = false;
   int ret;
+  buckets.clear();
 
   ret = store->ctl()->user->list_buckets(info.user_id, marker, end_marker, max,
                                          need_stats, &ulist, &is_truncated);

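With buckets.clear() at the top of list_buckets(), each call returns
only the newly fetched page, so the limit check's pagination loop
visits every bucket exactly once (in the standalone sketch earlier,
uncommenting out.clear() makes the program print 5 instead of 11).
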
------------------------------------------------------------------------------

Tested and verified that the fix works:

$ sudo dpkg -l | grep ceph
ii ceph 15.2.14-0ubuntu0.20.04.3 amd64 distributed storage and file system
ii ceph-base 15.2.14-0ubuntu0.20.04.3 amd64 common ceph daemon libraries and management tools
ii ceph-common 15.2.14-0ubuntu0.20.04.3 amd64 common utilities to mount and interact with a ceph storage cluster
ii ceph-mds 15.2.14-0ubuntu0.20.04.3 amd64 metadata server for the ceph distributed file system
ii ceph-mgr 15.2.14-0ubuntu0.20.04.3 amd64 manager for the ceph distributed file system
ii ceph-mgr-modules-core 15.2.14-0ubuntu0.20.04.3 all ceph manager modules which are always enabled
ii ceph-mon 15.2.14-0ubuntu0.20.04.3 amd64 monitor server for the ceph storage system
ii ceph-osd 15.2.14-0ubuntu0.20.04.3 amd64 OSD server for the ceph storage system
ii libcephfs2 15.2.14-0ubuntu0.20.04.3 amd64 Ceph distributed file system client library
ii python3-ceph-argparse 15.2.14-0ubuntu0.20.04.3 amd64 Python 3 utility libraries for Ceph CLI
ii python3-ceph-common 15.2.14-0ubuntu0.20.04.3 all Python 3 utility libraries for Ceph
ii python3-cephfs 15.2.14-0ubuntu0.20.04.3 amd64 Python 3 libraries for the Ceph libcephfs library

$ sudo apt-cache policy ceph
ceph:
  Installed: 15.2.14-0ubuntu0.20.04.3
  Candidate: 15.2.14-0ubuntu0.20.04.3

$ sudo radosgw-admin bucket list | jq .[] | wc -l
5572
$ sudo radosgw-admin bucket limit check | jq .[].buckets[].bucket | wc -l
5572
$ sudo radosgw-admin bucket limit check | jq '.[].buckets[] | select(.bucket=="bucket_1095")'
{
  "bucket": "bucket_1095",
  "tenant": "",
  "num_objects": 5,
  "num_shards": 3,
  "objects_per_shard": 1,
  "fill_status": "OK"
}
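
As an extra spot check (assuming the same output shape as above), the
number of duplicate entries can be counted directly with jq; on a fixed
build this should print 0:

$ sudo radosgw-admin bucket limit check | jq '[.[].buckets[].bucket] | length - (unique | length)'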

** Affects: ceph (Ubuntu)
     Importance: Medium
     Assignee: nikhil kshirsagar (nkshirsagar)
         Status: Confirmed

** Patch added: "0001-rgw-clear-buckets-before-calling-list_buckets.patch"
   
https://bugs.launchpad.net/bugs/1946211/+attachment/5531032/+files/0001-rgw-clear-buckets-before-calling-list_buckets.patch

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1946211

Title:
  [SRU] "radosgw-admin bucket limit check" has duplicate entries if
  bucket count exceeds 1000 (max_entries)
