** Changed in: charm-ceph-radosgw
Status: Fix Committed => Fix Released
--
https://bugs.launchpad.net/bugs/1921453
Title:
  multi-zone replication doesn't work
** Changed in: charm-ceph-radosgw
Milestone: None => 21.10
--
Reviewed: https://review.opendev.org/c/openstack/charm-ceph-radosgw/+/786030
Committed:
https://opendev.org/openstack/charm-ceph-radosgw/commit/15d7a9d82729758e50efb48106f8cd1f1284210c
Submitter: "Zuul (22348)"
Branch: master
commit 15d7a9d82729758e50efb48106f8cd1f1284210c
Author: James Page
With proposed fix applied:
$ sudo radosgw-admin sync status
          realm 964d0450-a906-4726-9c1c-a2aa5d788684 (replicated)
      zonegroup f846f9a9-c075-4541-b449-3891b796f480 (us)
           zone 8e6e56a1-cdfc-49f7-a581-1f4a8059037e (us-west)
  metadata sync syncing
                full sync:
Removing the high importance - in anything other than a cut-down test
deployment this will work fine; however, a charm update is appropriate to
get consistent pool configuration.
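Once a fixed charm revision is released, picking it up is roughly the
following - a sketch only, assuming the application is deployed under the
name ceph-radosgw (older Juju releases use upgrade-charm instead of refresh):

$ juju refresh ceph-radosgw    # pull the updated charm revision
$ juju status ceph-radosgw     # confirm units settle back to active/idle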
--
The otp_pool was new in >= mimic. The radosgw daemon will create the pool
if it does not exist - in a normal deployment it is created with the
default size (3), which is fine, and then gets autotuned as needed.
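A hedged way to see what rgw auto-created - the pool name us-west.rgw.otp
below is an assumption for this zone, substitute whatever `ceph osd pool ls`
actually reports:

$ sudo ceph osd pool ls detail | grep otp        # confirm the pool exists and its settings
$ sudo ceph osd pool get us-west.rgw.otp size    # replica count (default 3)
$ sudo ceph osd pool autoscale-status            # the pg autoscaler's view of the pool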
** Changed in: ceph (Ubuntu)
Status: New => Invalid
--
OK, figured this out - the master site has an .otp pool that is configured
with size 3 (and the supplied test bundle only has single-OSD hosts).
Setting the size of this pool to 1 resolves the issue, but this does
highlight that the .otp pool is not created by the charm and gets
auto-created by rgw.
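A minimal sketch of that workaround, for a cut-down single-OSD test
deployment only (the pool name us-west.rgw.otp is an assumption - check with
`ceph osd pool ls`; recent Ceph releases additionally require
mon_allow_pool_size_one=true and --yes-i-really-mean-it before accepting
size 1):

$ sudo ceph osd pool ls | grep otp               # find the auto-created otp pool
$ sudo ceph osd pool set us-west.rgw.otp size 1  # drop replication so a single OSD can satisfy it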
"the system user is absent from the slave deployment"
Can you elaborate on this? In the procedure on docs.ceph.com there is no step
to configure the system user on the secondary site - it is supposed to come
through replication once I pull the realm.
And just to add once again - if I follow t
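For reference, the docs.ceph.com secondary-site procedure referred to above
boils down to pulling the realm and period with the primary's system user
credentials - the URL and keys below are placeholders, not values from this
deployment:

$ sudo radosgw-admin realm pull --url=http://<primary-rgw>:80 \
      --access-key=<system-access-key> --secret=<system-secret-key>
$ sudo radosgw-admin period pull --url=http://<primary-rgw>:80 \
      --access-key=<system-access-key> --secret=<system-secret-key>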
I don't believe this is a charm issue - it looks like a bug in the
metadata sync process in ceph itself.
--
It looks like the metadata sync is failing for some reason, resulting in
the secondary zone not having the right system user - which results in the
errors on the primary zone when it attempts to sync data back.
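A hedged way to confirm that from the secondary zone - the uid below is a
placeholder for whichever system user the primary was configured with:

$ sudo radosgw-admin metadata sync status                # shard state of the metadata sync
$ sudo radosgw-admin user info --uid=<system-user-uid>   # should exist once metadata has synced
$ sudo radosgw-admin sync error list                     # any recorded sync errors, including auth failures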
** Also affects: ceph (Ubuntu)
Importance: Undecided
Status: New
--