[ceph-users] Re: cephadm picks development/latest tagged image for daemon-base (docker.io/ceph/daemon-base:latest-pacific-devel)

2022-02-02 Thread Arun Vinod
Hi Adam, big thanks for the responses and for clarifying the global usage of the --image parameter. Even though I gave --image during bootstrap, only the mgr & mon daemons on the bootstrap host are created with that image; the rest of the daemons are created from the daemon-base image, as I mention
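A rough sketch of pinning the container image cluster-wide so the non-bootstrap daemons do not fall back to a devel-tagged daemon-base; the image tag and mon IP below are placeholders, not values from the thread:

# cephadm --image quay.io/ceph/ceph:v16.2.7 bootstrap --mon-ip 192.0.2.10   # --image is a global cephadm argument
# ceph config set global container_image quay.io/ceph/ceph:v16.2.7          # image the orchestrator uses for every other daemon
# ceph config dump | grep container_image                                   # verify which image is actually configured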

[ceph-users] Re: Moving all s3 objects from an ec pool to a replicated pool using storage classes.

2022-02-02 Thread Anthony D'Atri
This probably doesn’t solve your overall immediate problem, but these PRs, which should be in Quincy, enable Lua scripting to override any user-supplied storage class on upload. This is useful in contexts where user / client behavior is difficult to enforce but the operator wishes to direct object
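A minimal sketch of that approach, assuming the RGW Lua integration with a preRequest context and a writable Request.HTTP.StorageClass field (the script name and target class are placeholders):

# cat > force_storage_class.lua <<'EOF'
if Request.RGWOp == "put_obj" then
   -- assumed writable field: force the class regardless of what the client requested
   Request.HTTP.StorageClass = "STANDARD"
end
EOF
# radosgw-admin script put --infile=force_storage_class.lua --context=preRequest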

[ceph-users] Re: Moving all s3 objects from an ec pool to a replicated pool using storage classes.

2022-02-02 Thread Scheurer François
Hi Frederic, for your point 3, the default_storage_class from the user info is apparently ignored. Setting it on Nautilus 14.2.15 had no impact and objects were still stored with STANDARD. Another issue is that some clients like s3cmd explicitly use STANDARD by default. And even afte
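Since the server-side default appears to be ignored and s3cmd sends STANDARD on its own, a sketch of forcing the class from the client and verifying where the object landed (bucket, object and class names are placeholders):

# s3cmd put backup.tar s3://mybucket/backup.tar --storage-class=COLD_EC     # explicitly request the EC-backed class
# radosgw-admin object stat --bucket=mybucket --object=backup.tar           # manifest shows the storage class actually used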

[ceph-users] pg_autoscaler using uncompressed bytes as pool current total_bytes triggering false POOL_TARGET_SIZE_BYTES_OVERCOMMITTED warnings?

2022-02-02 Thread Christian Rohmann
Hey ceph-users, I am debugging an mgr pg_autoscaler WARN which states that target_size_bytes on a pool would overcommit the available storage. There is only one pool with a target_size_bytes value (=5T) defined, and that apparently would consume more than the available storage: --- cut --- # c
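A short sketch of how such an overcommit warning can be narrowed down and, if needed, cleared (the pool name is a placeholder):

# ceph osd pool autoscale-status                      # TARGET SIZE, RATE and the resulting capacity ratio per pool
# ceph df detail                                      # compare STORED (logical) vs USED (after compression) for the pool
# ceph osd pool set mypool target_size_bytes 0        # drop the target again if it overcommits the raw capacity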

[ceph-users] CEPH cluster stopped client I/O's when OSD host hangs

2022-02-02 Thread Prayank Saxena
Hello everyone, we have encountered an issue where the OS hanging on an OSD host causes the cluster to stop ingesting data. Below are the Ceph cluster details: Ceph Object Storage v14.2.22, no. of monitor nodes: 5, no. of RGW nodes: 5, no. of OSDs: 252 (all NVMe), OS: CentOS 7.9, kernel: 3.10.0-1160.45.1.el7
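A hedged first-aid sketch for this kind of hang, assuming the hung host's OSDs are not being marked down automatically (the OSD id is a placeholder):

# ceph status                        # look for slow ops and whether any OSDs were marked down
# ceph health detail                 # shows which OSDs peers report as unreachable
# ceph osd down osd.17               # manually mark a hung host's OSDs down so client I/O can fail over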

[ceph-users] Re: 1 bogus remapped PG (stuck pg_temp) -- how to cleanup?

2022-02-02 Thread Burkhard Linke
Hi, I've found a solution for getting rid of the stale pg_temp. I've scaled the pool up to 128 PGs (thus "covering" the pg_temp). Afterwards the remapped PG was gone. I'm currently scaling back down to 32; no extra PG (either regular or temp) so far. The pool is almost empty, so playing ar
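In command form, the workaround described above looks roughly like this (the pool name is a placeholder):

# ceph osd pool set mypool pg_num 128      # scale up so the stale pg_temp entry is superseded
# ceph osd dump | grep pg_temp             # confirm the bogus pg_temp mapping is gone
# ceph osd pool set mypool pg_num 32       # scale back down once the cluster is clean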

[ceph-users] Re: 1 bogus remapped PG (stuck pg_temp) -- how to cleanup?

2022-02-02 Thread Burkhard Linke
Hi, On 2/2/22 14:39, Konstantin Shalygin wrote: Hi, The cluster is Nautilus 14.2.22. For a long time we have had 1 bogus remapped PG, without any actual 'remapped' PGs # ceph pg dump pgs_brief | awk '{print $2}' | grep active | sort | uniq -c dumped pgs_brief 15402 active+clean 6 active+cle

[ceph-users] 1 bogus remapped PG (stuck pg_temp) -- how to cleanup?

2022-02-02 Thread Konstantin Shalygin
Hi, The cluster is Nautilus 14.2.22. For a long time we have had 1 bogus remapped PG, without any actual 'remapped' PGs # ceph pg dump pgs_brief | awk '{print $2}' | grep active | sort | uniq -c dumped pgs_brief 15402 active+clean 6 active+clean+scrubbing # ceph osd dump | grep pg_temp pg_temp
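A few additional checks that help confirm the entry is only a lingering pg_temp and not a genuinely remapped PG (the pgid is a placeholder):

# ceph pg dump pgs_brief | grep remapped      # should return nothing if no PG is really remapped
# ceph osd dump | grep pg_temp                # the stale temp mapping, with the OSDs it points at
# ceph pg 2.0 query | grep -A3 '"up"'         # compare the up and acting sets for the affected PG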

[ceph-users] Re: Pacific 16.2.6: Trying to get an RGW running for a second zonegroup in an existing realm

2022-02-02 Thread Ulrich Klein
Well, it looks like not many people have tried this. And to me it looks like a bug/omission in "ceph orch apply rgw". After digging through the setup, I figured out that the unit.run file for the new rgw.zone21 process/container doesn't get the --rgw-zonegroup (or --rgw-region) parameter for radosgw
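One possible workaround sketch, assuming the missing zonegroup can instead be supplied through the config database (realm/zonegroup/zone names are placeholders, and whether the client.rgw.zone21 section matches your daemons depends on the release):

# ceph config set client.rgw.zone21 rgw_realm myrealm
# ceph config set client.rgw.zone21 rgw_zonegroup zonegroup2       # the value cephadm does not put into unit.run
# ceph config set client.rgw.zone21 rgw_zone zone21
# ceph orch restart rgw.zone21                                     # restart the service so the daemons pick the options up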

[ceph-users] Re: Reinstalling OSD node managed by cephadm

2022-02-02 Thread Robert Sander
On 02.02.22 12:15, Manuel Holtgrewe wrote: Would this also work when renaming hosts at the same time? - remove host from ceph orch - reinstall host with different name/IP - add back host into ceph orch - use ceph osd activate as above? That could also work as long as the OSDs are still in th
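For reference, the sequence discussed here looks roughly like this in command form (hostnames and the IP are placeholders; depending on the release the host removal may need extra flags):

# ceph orch host rm oldhost                   # take the node out of the orchestrator before the reinstall
  (reinstall the OS, leaving the OSD data devices untouched)
# ceph orch host add newhost 192.0.2.21       # add it back under the new name/IP
# ceph cephadm osd activate newhost           # let cephadm adopt the existing OSDs on that host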

[ceph-users] Re: Reinstalling OSD node managed by cephadm

2022-02-02 Thread Manuel Holtgrewe
Thank you for the information. I will try this. Would this also work when renaming hosts at the same time? - remove host from ceph orch - reinstall host with different name/IP - add back host into ceph orch - use ceph osd activate as above? On Mon, Jan 31, 2022 at 10:44 AM Robert Sander wrote: