I'm attempting to install an OpenStack cluster with Ceph. It does a
cephadm install (bootstrapping overcloud-controller-0, then deploying from
there to the other two nodes).
This is a containerized install:
parameter_defaults:
  ContainerImagePrepare:
  - set:
      ceph_alertmanager_image:
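(For context, a fuller `set:` block under ContainerImagePrepare usually pins the Ceph container sources as well. The sketch below uses placeholder registry/image/tag values, not values from this deployment:)

```yaml
parameter_defaults:
  ContainerImagePrepare:
  - set:
      # Placeholder registry/image/tag values -- substitute your own.
      ceph_namespace: quay.io/ceph
      ceph_image: ceph
      ceph_tag: latest
      ceph_alertmanager_namespace: quay.io/prometheus
      ceph_alertmanager_image: alertmanager
```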
Hi,
can you be more specific about what exactly you are looking for? Are you
talking about the RocksDB size? And what is the unit for 5012? It's
really not clear to me what you're asking. And since the
recommendations vary between different use cases, you might want to
share more details about
Hi Christian,
On 01.03.2022 at 09:01, Christian Rohmann wrote:
On 28/02/2022 20:54, Sascha Vogt wrote:
Is there a way to clear the error counter on pacific? If so, how?
No, not anymore. See https://tracker.ceph.com/issues/54182
Thanks for the link. Restarting the OSD seems to clear the
Hi,
There was a recent (long) thread about this. It might give you some hints:
https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/2NT55RUMD33KLGQCDZ74WINPPQ6WN6CW/
And about the crash, it could be related to
https://tracker.ceph.com/issues/51824
Cheers, dan
On Tue, Mar 1,
Hello Dan
Thanks a lot for the answer
I do remove the snaps every day (I keep them for one month),
but the "num_strays" never seems to decrease.
I know I can do a listing of a folder with "find . -ls".
So my question is: is there a way to find the directory causing the strays,
so I can "find
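(While tracking this down, one way to watch the counter over time is to read it from the MDS perf counters. A minimal sketch in Python, assuming you have the JSON output of `ceph tell mds.<name> perf dump` saved; the counter values shown are made up for illustration:)

```python
import json

# Assumed shape of `ceph tell mds.<name> perf dump` output; only the
# stray-related counters are shown, with made-up sample values.
sample = """
{
  "mds_cache": {
    "num_strays": 990,
    "num_strays_delayed": 0,
    "strays_created": 12345,
    "strays_enqueued": 11355
  }
}
"""

perf = json.loads(sample)
cache = perf["mds_cache"]

# num_strays is the current count of stray inodes held in the MDS cache;
# strays_created/strays_enqueued are cumulative, so their difference roughly
# tracks what is still outstanding.
print("num_strays:", cache["num_strays"])
print("created:", cache["strays_created"], "enqueued:", cache["strays_enqueued"])
```

Polling this periodically (e.g. after each snapshot removal) would show whether the counter ever drains, or only ever grows.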
Hi Julian:
Thanks for your reply.
We are using tenant-enabled RGW for now. :(
I will try to use Ceph 16 as the secondary cluster to do the testing. If it
works, I will upgrade the master cluster to Ceph v16 too.
Have a good day.
Poß, Julian wrote on Tue, Mar 1, 2022 at 17:42:
> Hey,
>
> my cluster is
I am using Ceph Pacific (16.2.5).
Does anyone have an idea about my issues?
Thanks again to everyone
All the best
Arnaud
On Tue, Mar 1, 2022 at 01:04, Arnaud M wrote:
> Hello to everyone
>
> Our ceph cluster is healthy and everything seems to go well but we have a
> lot of num_strays
>
Hey,
my cluster is only a test installation, to generally verify an RGW multisite
design, so there is no production data on it.
Therefore my "solution" was to create an RGW S3 user without a tenant, so
instead of
radosgw-admin user create --tenant=test --uid=test --display-name=test
Hi,
Disclaimer: I'm in no way a Ceph expert. I have just been tinkering with Ceph/RGW
for a larger installation for a while.
My understanding is that the data between zones in a zonegroup is synced by
default, and that works well most of the time.
If you, as I had to, want to restrict what data
Hi Julian:
Could you share your solution for this? We are also trying to find a
solution for this.
Thanks
> On Mar 1, 2022, at 5:18 PM, Poß, Julian wrote:
>
> Thanks a ton for pointing this out.
> Just verified this with a rgw user without tenant, works perfectly as you
> would expect.
> I
Thanks a ton for pointing this out.
Just verified this with an RGW user without a tenant; it works perfectly, as you
would expect.
I guess I could have suspected that tenants have something to do with it, since
I spotted issues with them in the past, too.
Anyway, I got my "solution". Thanks again!
On 28/02/2022 20:54, Sascha Vogt wrote:
Is there a way to clear the error counter on pacific? If so, how?
No, not anymore. See https://tracker.ceph.com/issues/54182
Regards
Christian
ceph-users mailing list -- ceph-users@ceph.io