OK, I fixed it, it works.
-----Original Message-----
From: St-Germain, Sylvain (SSC/SPC)
Sent: March 9, 2021 17:41
To: St-Germain, Sylvain (SSC/SPC);
ceph-users@ceph.io
Subject: RE: Rados gateway basic pools missing
OK, in the interface, when I create a bucket the index is created automatically.
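In case it helps someone else reproduce this, a quick sketch (assuming s3cmd is already configured against the gateway; the pool name is the Octopus default, not copied from my cluster):

before the first bucket, no index pool exists yet:
# ceph osd lspools
# s3cmd mb s3://testbucket
after creating the bucket, default.rgw.buckets.index shows up:
# ceph osd lspools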
Please try reproducing the error using the latest s3cmd code from the git master branch found at:
https://github.com/s3tools/s3cmd
and have a look at the known issues list:
https://github.com/s3tools/s3cmd/wiki/Common-known-issues-and-their-solutions
If the error persists, please report the above lines (removing any private info as necessary) to:
s3tools-b...@lists.sourceforge.net
!!!!!!!!!!!!!!!!!
Hi everyone,
I just rebuilt a (test) cluster using:
OS: Ubuntu 20.04.2 LTS
Ceph: ceph version 15.2.9 (357616cbf726abb779ca75a551e8d02568e15b17) octopus (stable)
3 nodes: monitor/storage
1. The cluster looks good:
# ceph -s
cluster:
id: 9a89aa5a-1702-4f87-a99c-f94c9f2cdabd
Good morning all,
I don't know if this has happened to anyone else, but recently a user ran out of their quota for the number of objects in a bucket. The only sign I could see in the logs (tail -f /var/log/ceph/ceph-rgw-dao-wkr-01.rgw0.log) was the following:
2020-10-01T03:07:02.098+ 7fd872bf6700
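For reference, a sketch of how such a quota can be inspected and raised with radosgw-admin (the user and bucket names below are placeholders, not from our setup):

check current usage and the configured quota:
# radosgw-admin bucket stats --bucket=mybucket
# radosgw-admin user info --uid=myuser

raise the per-bucket object limit and make sure the quota is enabled:
# radosgw-admin quota set --quota-scope=bucket --uid=myuser --max-objects=500000
# radosgw-admin quota enable --quota-scope=bucket --uid=myuser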
Hi everyone,
I'm trying to configure a simple rados gateway with three nodes, and I've added a virtual IP address for the load balancer. I make calls to the gateway using S3 and SWIFT.
In the case of S3 everything works: I can make requests directly on the node where there is a gateway or use the
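For what it's worth, a minimal sketch of both kinds of call (the hostname, port, user and keys are placeholders):

S3 through the virtual IP with s3cmd:
# s3cmd --host=rgw.example.local:8080 --host-bucket=rgw.example.local:8080 --no-ssl ls

SWIFT against the same endpoint, using the gateway's v1 auth:
# swift -A http://rgw.example.local:8080/auth/1.0 -U testuser:swift -K <swift_secret_key> list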
Ouch, OK.
-----Original Message-----
From: Michael Fladischer
Sent: June 22, 2020 15:57
To: St-Germain, Sylvain (SSC/SPC);
ceph-users@ceph.io
Subject: Re: [ceph-users] Re: OSD crash with assertion
Hi Sylvain,
Yeah, that's the best and safest way to do it. The pool I wrecked
The way I did it was to create a new pool, copy the data onto it, and put the new pool in place of the old one; afterwards I deleted the former pool.
echo ""
echo " Create a new pool with erasure coding"
echo
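For anyone wanting to do the same, a sketch of the full sequence (pool names, pg counts and the EC profile are placeholders; note that rados cppool does not copy omap data, so this is not suitable for RGW index/metadata pools, and pool deletion requires mon_allow_pool_delete=true):

# ceph osd erasure-code-profile set myprofile k=2 m=1
# ceph osd pool create mypool.new 64 64 erasure myprofile
# rados cppool mypool mypool.new
# ceph osd pool rename mypool mypool.old
# ceph osd pool rename mypool.new mypool
# ceph osd pool delete mypool.old mypool.old --yes-i-really-really-mean-it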
mon_pg_warn_max_object_skew 20
4- Confirm the change
# ceph daemon mgr.`hostname -s` config show | grep mon_pg_warn_max_object_skew
"mon_pg_warn_max_object_skew": "20.00",
5- Result
# ceph health detail
HEALTH_OK
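For anyone repeating this, the change itself can be made like so (a sketch, assuming Nautilus or later, where the mgr reads this option; a mgr restart may be needed before the warning clears):

# ceph config set mgr mon_pg_warn_max_object_skew 20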
Sylvain
-----Original Message-----
From: St-Germain, Sylvain (SS
/// Problem ///
I've got a warning on my cluster that I cannot remove:
"1 pools have many more objects per pg than average"
Does somebody have some insight? I think it's normal to have this warning because I have just one pool in use, but how can