I just want to reach the latest minor version before upgrading to the next major
version :) I haven't seen this practice recommended anywhere, but I want to be
safe and limit errors as much as possible.
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe
Thanks for your answer. I was able to get the Lua debug log, but I think some
request fields don't work.
I have this Lua script, for example:
if Request.HTTP.StorageClass == 'COLD' then
    RGWDebugLog(Request.RGWOp .. " request with StorageClass: " ..
        Request.HTTP.StorageClass .. " Obj name: " .. Request.Object.Name)
end
I have a Lua script that reads the StorageClass header of any PUT request (as I
understand it):
local function isempty(input)
    return input == nil or input == ''
end

if Request.RGWOp == 'put_obj' and not isempty(Request.HTTP.StorageClass) then
    RGWDebugLog("Put_Obj with StorageClass: " .. Request.HTTP.StorageClass)
end
Then apply the script:
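For reference, scripts are applied with the `radosgw-admin script` subcommands; the file name below is illustrative, and `preRequest` is the usual context for scripts that inspect incoming requests:

```shell
# Upload the script so RGW runs it before each request is processed
# (the path ./storageclass.lua is just an example)
radosgw-admin script put --infile=./storageclass.lua --context=preRequest

# Show what is currently installed for that context
radosgw-admin script get --context=preRequest
```

After uploading, the RGW picks up the script without a daemon restart, and `RGWDebugLog` output lands in the RGW log at debug level.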
Yes, the documentation shows an example of upgrading from Nautilus to Pacific.
But I don't fully trust the Ceph docs on this, and I'm also worried that
Nautilus might not be compatible with Pacific in some monitor or OSD
operations =)
Hi, I want to upgrade my old Ceph cluster + Radosgw from v14 to v15. I'm not
using cephadm, and I'm not sure how to limit errors as much as possible during
the upgrade process.
Here are my upgrade steps:
Firstly, upgrade from 14.2.18 to 14.2.22 (the latest Nautilus release).
Then, upgrade from 14.2.22 to the latest Octopus (v15) release.
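For a manual (non-cephadm) upgrade, the usual pattern is a rolling restart in the order mons → mgrs → OSDs → RGWs, checking health between each stage. The sketch below is illustrative; the package command depends on your distro, and each restart should be done host by host:

```shell
# Keep CRUSH from rebalancing while OSDs restart (remove afterwards)
ceph osd set noout

# On each host: upgrade the packages first
apt-get install -y ceph radosgw        # or yum/dnf on EL systems

# Restart daemons in order, waiting for HEALTH_OK between stages:
systemctl restart ceph-mon.target      # all monitors, one host at a time
systemctl restart ceph-mgr.target      # then managers
systemctl restart ceph-osd.target      # then OSDs, one host at a time
systemctl restart ceph-radosgw.target  # finally the RGW daemons

# Confirm every daemon reports the new version before moving on
ceph versions

ceph osd unset noout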
The radosgw has been configured like this:
[client.rgw.ceph1]
host = ceph1
rgw_frontends = beast port=8080 ssl_port=443 ssl_certificate=/root/ssl/ca.crt
ssl_private_key=/root/ssl/ca.key
I found a log entry like this, and I thought the bucket name should be "photos":
[2023-04-19 15:48:47.0.5541s] "GET /photos/shares/
But I cannot find it:
radosgw-admin bucket stats --bucket photos
failure: 2023-04-19 15:48:53.969 7f69dce49a80 0 could not get bucket info for
bucket=photos
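One thing worth checking is whether the bucket belongs to a tenanted user: a bucket name seen in the access log can live under a tenant, in which case `bucket stats` needs the `tenant/bucket` form. The tenant name below is a made-up example:

```shell
# List every bucket the cluster knows about and look for "photos"
radosgw-admin bucket list

# If the bucket belongs to a tenant, qualify it as tenant/bucket
# ("mytenant" is hypothetical)
radosgw-admin bucket stats --bucket mytenant/photos
```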
radosgw-admin bucket stats shows 209266 objects in this bucket, but that count
includes failed multipart uploads, which also makes the size field wrong. When
I use boto3 to count objects, the bucket only has 209049. The only solution I
can find is to use a lifecycle rule to clean these
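For reference, a lifecycle rule that aborts incomplete multipart uploads can be set with the AWS CLI pointed at the RGW endpoint; the endpoint URL and retention period below are just examples:

```shell
# Abort any multipart upload still incomplete after 1 day
# (endpoint URL is illustrative)
aws s3api put-bucket-lifecycle-configuration \
  --endpoint-url http://ceph1:8080 \
  --bucket photos \
  --lifecycle-configuration '{
    "Rules": [{
      "ID": "abort-incomplete-multipart",
      "Status": "Enabled",
      "Filter": {"Prefix": ""},
      "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 1}
    }]
  }'

# Verify the rule was stored
aws s3api get-bucket-lifecycle-configuration \
  --endpoint-url http://ceph1:8080 --bucket photos
```

Once RGW's lifecycle processing runs, the aborted uploads are removed and the stats should converge toward the boto3 count.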