[ceph-users] error: _ASSERT_H not a pointer

2022-06-13 Thread renjianxinlover
On 6/14/2022 13:21, renjianxinlover wrote: Ceph version: v12.2.10 OS Distribution: Debian 9 Kernel Release & Version: 4.9.0-18-amd64 #1 SMP Debian 4.9.303-1 (2022-03-07) x86_64 GNU/Linux But building Ceph failed; the error snippet looks like ...

[ceph-users] Re: ceph-users Digest, Vol 113, Issue 36

2022-06-13 Thread renjianxinlover
Ceph version: v12.2.10 OS Distribution: Debian 9 Kernel Release & Version: 4.9.0-18-amd64 #1 SMP Debian 4.9.303-1 (2022-03-07) x86_64 GNU/Linux But building Ceph failed; the error snippet looks like ... [ 33%] Built target osdc Scanning dependencies of target librados_api_obj [ 33%] Building CXX
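
A minimal sketch of the usual from-source build steps for Ceph v12.2.x on Debian 9, assuming the tag is cloned with submodules; the -j value is illustrative:

    # Hedged sketch: standard Luminous-era source build; adjust parallelism to the machine.
    git clone --branch v12.2.10 --recurse-submodules https://github.com/ceph/ceph.git
    cd ceph
    ./install-deps.sh     # install the distro build dependencies
    ./do_cmake.sh         # configure an out-of-tree build under ./build
    cd build
    make -j4              # compile errors such as the _ASSERT_H one surface during this step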

[ceph-users] Re: Ceph Octopus RGW - files vanished from rados while still in bucket index

2022-06-13 Thread Boris Behrens
Hmm, I will check what the user is deleting. Maybe this is it. Do you know if this bug is new in 15.2.16? I can't share the data, but I can share the metadata: https://pastebin.com/raw/T1YYLuec For the missing files I have, the multipart object is not available in rados, but the 0-byte file is.
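
A hedged sketch of two deletion paths worth ruling out besides a plain "rados rm", namely RGW garbage collection and bucket lifecycle; the bucket name is a placeholder:

    radosgw-admin gc list --include-all          # tail/multipart objects queued for deletion
    radosgw-admin lc list                        # buckets carrying lifecycle (expiration) rules
    radosgw-admin bucket stats --bucket=BUCKET   # compare index counters with actual usage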

[ceph-users] Re: Changes to Crush Weight Causing Degraded PGs instead of Remapped

2022-06-13 Thread Wesley Dillingham
Thanks for the reply. I believe "0" vs "0.0" makes no difference. I will note it's not just changing crush weights that induces this situation: introducing upmaps manually or via the balancer also causes the PGs to become degraded instead of the expected remapped PG state. Respectfully,
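
A minimal sketch of reproducing the observation with a manual upmap; the PG id and OSD ids are placeholders, not values from the thread:

    ceph osd pg-upmap-items 2.7 3 5           # remap one copy of PG 2.7 from osd.3 to osd.5
    ceph pg dump pgs_brief | grep '^2\.7'     # expectation: remapped, not degraded
    ceph osd rm-pg-upmap-items 2.7            # undo the test mapping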

[ceph-users] Ceph Octopus RGW - files vanished from rados while still in bucket index

2022-06-13 Thread Boris Behrens
Hi everybody, are there other ways for rados objects to get removed, other than "rados -p POOL rm OBJECT"? We have a customer who has objects in the bucket index but can't download them. After checking, it seems like the rados object is gone. The Ceph cluster is running Octopus 15.2.16
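
A hedged sketch of mapping a bucket-index entry to its rados objects and checking whether they still exist; BUCKET/OBJECT and the data pool name assume the default zone layout and are placeholders:

    radosgw-admin object stat --bucket=BUCKET --object=OBJECT      # prints the manifest (head + tail/multipart oids)
    rados -p default.rgw.buckets.data stat 'OID_FROM_MANIFEST'     # "No such file or directory" confirms the rados object is gone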

[ceph-users] Copying and renaming pools

2022-06-13 Thread Pardhiv Karri
Hi, our Ceph is used as backend storage for OpenStack. We use the "images" pool for Glance and the "compute" pool for instances. We need to migrate our "images" pool, which is on HDD drives, to SSD drives. I copied all the data from the "images" pool that is on HDD disks to an "ssdimages" pool that
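
A hedged sketch of the copy-then-rename flow the post describes; pool names follow the thread, and Glance/Nova writes should be stopped before the copy:

    rados cppool images ssdimages                  # full object copy into the SSD-backed pool (does not preserve pool snapshots)
    ceph osd pool rename images images_old         # keep the original as a fallback
    ceph osd pool rename ssdimages images          # the new pool takes over the expected name
    ceph osd pool application enable images rbd    # re-tag the pool if the copy did not carry the application tag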

[ceph-users] Re: Changes to Crush Weight Causing Degraded PGs instead of Remapped

2022-06-13 Thread Eugen Block
I remember someone reporting the same thing but I can't find the thread right now. I'll try again tomorrow. Quoting Wesley Dillingham: I have a brand-new 16.2.9 cluster running BlueStore with zero client activity. I am modifying some crush weights to move PGs off of a host for testing

[ceph-users] Changes to Crush Weight Causing Degraded PGs instead of Remapped

2022-06-13 Thread Wesley Dillingham
I have a brand-new 16.2.9 cluster running BlueStore with zero client activity. I am modifying some crush weights to move PGs off of a host for testing purposes, but the result is that the PGs go into a degraded+remapped state instead of simply a remapped state. This is a strange result to me as in
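
A minimal sketch of the test described, with a placeholder OSD id; the restore value should be the weight shown by "ceph osd tree" beforehand:

    ceph osd crush reweight osd.12 0          # drain the OSD by dropping its crush weight
    ceph -s                                   # watch whether PGs report degraded or only remapped
    ceph pg dump pgs_brief | grep degraded    # list any PGs that went degraded
    ceph osd crush reweight osd.12 1.81940    # restore the original weight when done (placeholder value)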

[ceph-users] Re: something wrong with my monitor database ?

2022-06-13 Thread Eric Le Lay
On 13/06/2022 at 17:54, Eric Le Lay wrote: On 10/06/2022 at 11:58, Stefan Kooman wrote: On 6/10/22 11:41, Eric Le Lay wrote:

[ceph-users] Re: something wrong with my monitor database ?

2022-06-13 Thread Eric Le Lay
On 10/06/2022 at 11:58, Stefan Kooman wrote: On 6/10/22 11:41, Eric Le Lay wrote: Hello list, my ceph cluster was upgraded

[ceph-users] Ceph add-repo Unable to find a match epel-release

2022-06-13 Thread Kostadin Bukov
Hello ceph users, I'm trying to set up a ceph client on one of my ceph-cluster hosts. My setup is the following: - 3 bare-metal HP Synergy servers - installed the latest Ceph release, Quincy (17.2.0), using curl/cephadm - RHEL 8.6 - the ceph cluster is working fine and health status is OK compute1 is the
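
A hedged sketch of how EPEL is usually added on RHEL 8, where the "epel-release" package is not in the Red Hat repos and "dnf install epel-release" fails with "Unable to find a match"; the repo id is the one EPEL documents for x86_64:

    subscription-manager repos --enable codeready-builder-for-rhel-8-x86_64-rpms
    dnf install https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm
    cephadm add-repo --release quincy     # then retry the cephadm repo and client setup
    cephadm install ceph-common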

[ceph-users] snap-schedule reappearing

2022-06-13 Thread Stolte, Felix
Hi folks, I removed snapshot scheduling on a CephFS path (Pacific), but it reappears the next day. I didn't remove the retention for this path, though. Does the retention on a path trigger the recreation of the snap schedule if the schedule was removed? Is this intended? Regards, Felix
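
A hedged sketch using the Pacific snap-schedule module commands; the path /backup and the 7d retention spec are placeholders:

    ceph fs snap-schedule status /backup               # see which schedules and retention specs remain
    ceph fs snap-schedule remove /backup               # drop the schedules on the path
    ceph fs snap-schedule retention remove /backup 7d  # drop the retention spec as well
    ceph fs snap-schedule status /backup               # verify nothing is left that could recreate it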