[ceph-users] Re: radosgw not working - upgraded from mimic to octopus

2021-01-27 Thread Youzhong Yang
Is anyone running octopus (v15)? Can you please share your experience with
radosgw-admin performance?

A simple 'radosgw-admin user list' took 11 minutes; if I use a v13.2.4
radosgw-admin binary instead, it finishes in a few seconds.
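For anyone who wants to reproduce the comparison, here is a rough, untested
sketch (the binary paths below are placeholders, not real install locations)
that times 'user list' under both versions:

# Sketch: time 'radosgw-admin user list' under two binaries.
# Both paths are placeholders for wherever each version is installed.
import subprocess
import time

for binary in ("/usr/bin/radosgw-admin",          # e.g. v15.2.8 (octopus)
               "/opt/mimic/bin/radosgw-admin"):   # e.g. v13.2.4 (mimic)
    start = time.monotonic()
    subprocess.run([binary, "user", "list"], check=True, capture_output=True)
    print(f"{binary}: {time.monotonic() - start:.1f} seconds")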

This sounds like a performance regression to me. I've already filed a bug
report (https://tracker.ceph.com/issues/48983), but so far there has been no feedback.

On Mon, Jan 25, 2021 at 10:06 AM Youzhong Yang  wrote:

> I upgraded our ceph cluster (6 bare metal nodes, 3 rgw VMs) from v13.2.4
> to v15.2.8. The mon, mgr, mds and osd daemons were all upgraded
> successfully, and everything looked good.
>
> After the radosgw daemons were upgraded, they refused to work; the log
> messages are at the end of this e-mail.
>
> Here are the things I tried:
>
> 1. I moved aside the pools for the rgw service and started from scratch
> (creating the realm, zonegroup, zone, and users), but when I tried to run
> 'radosgw-admin user create ...', it appeared to be stuck and never
> returned; other commands like 'radosgw-admin period update --commit' also
> got stuck.
>
> 2. I rolled back radosgw to the old version v13.2.4, and then everything
> worked great again.
>
> What am I missing here? Is there anything extra that needs to be done for
> rgw after upgrading from mimic to octopus?
>
> Please kindly help. Thanks.
>
> -
> 2021-01-24T09:24:10.192-0500 7f638f79f9c0  0 deferred set uid:gid to
> 64045:64045 (ceph:ceph)
> 2021-01-24T09:24:10.192-0500 7f638f79f9c0  0 ceph version 15.2.8
> (bdf3eebcd22d7d0b3dd4d5501bee5bac354d5b55) octopus (stable), process
> radosgw, pid 898
> 2021-01-24T09:24:10.192-0500 7f638f79f9c0  0 framework: civetweb
> 2021-01-24T09:24:10.192-0500 7f638f79f9c0  0 framework conf key: port,
> val: 80
> 2021-01-24T09:24:10.192-0500 7f638f79f9c0  0 framework conf key:
> num_threads, val: 1024
> 2021-01-24T09:24:10.192-0500 7f638f79f9c0  0 framework conf key:
> request_timeout_ms, val: 5
> 2021-01-24T09:24:10.192-0500 7f638f79f9c0  1 radosgw_Main not setting numa
> affinity
> 2021-01-24T09:29:10.195-0500 7f638cbcd700 -1 Initialization timeout,
> failed to initialize
> 2021-01-24T09:29:10.367-0500 7f4c213ba9c0  0 deferred set uid:gid to
> 64045:64045 (ceph:ceph)
> 2021-01-24T09:29:10.367-0500 7f4c213ba9c0  0 ceph version 15.2.8
> (bdf3eebcd22d7d0b3dd4d5501bee5bac354d5b55) octopus (stable), process
> radosgw, pid 1541
> 2021-01-24T09:29:10.367-0500 7f4c213ba9c0  0 framework: civetweb
> 2021-01-24T09:29:10.367-0500 7f4c213ba9c0  0 framework conf key: port,
> val: 80
> 2021-01-24T09:29:10.367-0500 7f4c213ba9c0  0 framework conf key:
> num_threads, val: 1024
> 2021-01-24T09:29:10.367-0500 7f4c213ba9c0  0 framework conf key:
> request_timeout_ms, val: 5
> 2021-01-24T09:29:10.367-0500 7f4c213ba9c0  1 radosgw_Main not setting numa
> affinity
> 2021-01-24T09:29:25.883-0500 7f4c213ba9c0  1 robust_notify: If at first
> you don't succeed: (110) Connection timed out
> 2021-01-24T09:29:25.883-0500 7f4c213ba9c0  0 ERROR: failed to distribute
> cache for coredumps.rgw.log:meta.history
> 2021-01-24T09:32:27.754-0500 7fcdac2bf9c0  0 deferred set uid:gid to
> 64045:64045 (ceph:ceph)
> 2021-01-24T09:32:27.754-0500 7fcdac2bf9c0  0 ceph version 15.2.8
> (bdf3eebcd22d7d0b3dd4d5501bee5bac354d5b55) octopus (stable), process
> radosgw, pid 978
> 2021-01-24T09:32:27.758-0500 7fcdac2bf9c0  0 framework: civetweb
> 2021-01-24T09:32:27.758-0500 7fcdac2bf9c0  0 framework conf key: port,
> val: 80
> 2021-01-24T09:32:27.758-0500 7fcdac2bf9c0  0 framework conf key:
> num_threads, val: 1024
> 2021-01-24T09:32:27.758-0500 7fcdac2bf9c0  0 framework conf key:
> request_timeout_ms, val: 5
> 2021-01-24T09:32:27.758-0500 7fcdac2bf9c0  1 radosgw_Main not setting numa
> affinity
> 2021-01-24T09:32:44.719-0500 7fcdac2bf9c0  1 robust_notify: If at first
> you don't succeed: (110) Connection timed out
> 2021-01-24T09:32:44.719-0500 7fcdac2bf9c0  0 ERROR: failed to distribute
> cache for coredumps.rgw.log:meta.history
>
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] radosgw not working - upgraded from mimic to octopus

2021-01-25 Thread Youzhong Yang
I upgraded our ceph cluster (6 bare metal nodes, 3 rgw VMs) from v13.2.4 to
v15.2.8. The mon, mgr, mds and osd daemons were all upgraded successfully,
and everything looked good.

After the radosgw daemons were upgraded, they refused to work; the log
messages are at the end of this e-mail.

Here are the things I tried:

1. I moved aside the pools for the rgw service and started from scratch
(creating the realm, zonegroup, zone, and users), but when I tried to run
'radosgw-admin user create ...', it appeared to be stuck and never
returned; other commands like 'radosgw-admin period update --commit' also
got stuck.

2. I rolled back radosgw to the old version v13.2.4, and then everything
worked great again.

What am I missing here? Is there anything extra that needs to be done for
rgw after upgrading from mimic to octopus?

Please kindly help. Thanks.

-
2021-01-24T09:24:10.192-0500 7f638f79f9c0  0 deferred set uid:gid to
64045:64045 (ceph:ceph)
2021-01-24T09:24:10.192-0500 7f638f79f9c0  0 ceph version 15.2.8
(bdf3eebcd22d7d0b3dd4d5501bee5bac354d5b55) octopus (stable), process
radosgw, pid 898
2021-01-24T09:24:10.192-0500 7f638f79f9c0  0 framework: civetweb
2021-01-24T09:24:10.192-0500 7f638f79f9c0  0 framework conf key: port, val:
80
2021-01-24T09:24:10.192-0500 7f638f79f9c0  0 framework conf key:
num_threads, val: 1024
2021-01-24T09:24:10.192-0500 7f638f79f9c0  0 framework conf key:
request_timeout_ms, val: 5
2021-01-24T09:24:10.192-0500 7f638f79f9c0  1 radosgw_Main not setting numa
affinity
2021-01-24T09:29:10.195-0500 7f638cbcd700 -1 Initialization timeout, failed
to initialize
2021-01-24T09:29:10.367-0500 7f4c213ba9c0  0 deferred set uid:gid to
64045:64045 (ceph:ceph)
2021-01-24T09:29:10.367-0500 7f4c213ba9c0  0 ceph version 15.2.8
(bdf3eebcd22d7d0b3dd4d5501bee5bac354d5b55) octopus (stable), process
radosgw, pid 1541
2021-01-24T09:29:10.367-0500 7f4c213ba9c0  0 framework: civetweb
2021-01-24T09:29:10.367-0500 7f4c213ba9c0  0 framework conf key: port, val:
80
2021-01-24T09:29:10.367-0500 7f4c213ba9c0  0 framework conf key:
num_threads, val: 1024
2021-01-24T09:29:10.367-0500 7f4c213ba9c0  0 framework conf key:
request_timeout_ms, val: 5
2021-01-24T09:29:10.367-0500 7f4c213ba9c0  1 radosgw_Main not setting numa
affinity
2021-01-24T09:29:25.883-0500 7f4c213ba9c0  1 robust_notify: If at first you
don't succeed: (110) Connection timed out
2021-01-24T09:29:25.883-0500 7f4c213ba9c0  0 ERROR: failed to distribute
cache for coredumps.rgw.log:meta.history
2021-01-24T09:32:27.754-0500 7fcdac2bf9c0  0 deferred set uid:gid to
64045:64045 (ceph:ceph)
2021-01-24T09:32:27.754-0500 7fcdac2bf9c0  0 ceph version 15.2.8
(bdf3eebcd22d7d0b3dd4d5501bee5bac354d5b55) octopus (stable), process
radosgw, pid 978
2021-01-24T09:32:27.758-0500 7fcdac2bf9c0  0 framework: civetweb
2021-01-24T09:32:27.758-0500 7fcdac2bf9c0  0 framework conf key: port, val:
80
2021-01-24T09:32:27.758-0500 7fcdac2bf9c0  0 framework conf key:
num_threads, val: 1024
2021-01-24T09:32:27.758-0500 7fcdac2bf9c0  0 framework conf key:
request_timeout_ms, val: 5
2021-01-24T09:32:27.758-0500 7fcdac2bf9c0  1 radosgw_Main not setting numa
affinity
2021-01-24T09:32:44.719-0500 7fcdac2bf9c0  1 robust_notify: If at first you
don't succeed: (110) Connection timed out
2021-01-24T09:32:44.719-0500 7fcdac2bf9c0  0 ERROR: failed to distribute
cache for coredumps.rgw.log:meta.history
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: setting bucket quota using admin API does not work

2020-08-31 Thread Youzhong Yang
Figured it out:
admin/bucket?quota works, but it does not seem to be documented.
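For anyone hitting the same issue, below is a rough sketch of what the
working call can look like. It is untested as written here: the endpoint,
credentials and region are placeholders, the 'requests' and
'requests_aws4auth' Python packages are assumed, and it assumes the request
is a signed PUT with the quota spec as a JSON body, as in the quoted
attempt below.

# Sketch: set a bucket quota via the RGW admin API using admin/bucket?quota.
# Endpoint, credentials and region are placeholders; an admin user with
# bucket caps is assumed.
import json

import requests
from requests_aws4auth import AWS4Auth

RGW_ENDPOINT = "http://rgw.example.com"   # placeholder RGW endpoint
ACCESS_KEY = "ADMIN_ACCESS_KEY"           # placeholder admin credentials
SECRET_KEY = "ADMIN_SECRET_KEY"

auth = AWS4Auth(ACCESS_KEY, SECRET_KEY, "us-east-1", "s3")

quota = {
    "enabled": True,
    "max_size": 1099511627776,  # 1 TiB in bytes
    "max_objects": -1,          # no object-count limit
}

resp = requests.put(
    RGW_ENDPOINT + "/admin/bucket?quota&uid=bse&bucket=test",
    auth=auth,
    data=json.dumps(quota),
)
resp.raise_for_status()
print("quota request accepted, HTTP", resp.status_code)

The change can then be confirmed with 'radosgw-admin bucket stats
--bucket=test', as in the message below.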

On Mon, Aug 31, 2020 at 4:16 PM Youzhong Yang  wrote:

> Hi all,
>
> I tried to set a bucket quota using the admin API as shown below:
>
> admin/user?quota&uid=bse&bucket=test&quota-type=bucket
>
> with a payload in JSON format:
> {
> "enabled": true,
> "max_size": 1099511627776,
> "max_size_kb": 1073741824,
> "max_objects": -1
> }
>
> It returned success, but the quota change did not happen, as confirmed by
> the 'radosgw-admin bucket stats --bucket=test' command.
>
> Am I missing something obvious? Please kindly advise/suggest.
>
> By the way, I am using ceph mimic (v13.2.4). Setting the quota with
> 'radosgw-admin quota set --bucket=${BUCK} --max-size=1T --quota-scope=bucket'
> works, but I want to do it programmatically.
>
> Thanks in advance,
> -Youzhong
>
>
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] setting bucket quota using admin API does not work

2020-08-31 Thread Youzhong Yang
 Hi all,

I tried to set a bucket quota using the admin API as shown below:

admin/user?quota&uid=bse&bucket=test&quota-type=bucket

with a payload in JSON format:
{
"enabled": true,
"max_size": 1099511627776,
"max_size_kb": 1073741824,
"max_objects": -1
}

It returned success, but the quota change did not happen, as confirmed by
the 'radosgw-admin bucket stats --bucket=test' command.

Am I missing something obvious? Please kindly advise/suggest.

By the way, I am using ceph mimic (v13.2.4). Setting the quota with
'radosgw-admin quota set --bucket=${BUCK} --max-size=1T --quota-scope=bucket'
works, but I want to do it programmatically.

Thanks in advance,
-Youzhong
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io