renjianxinlover <renjianxinlo...@163.com>
On 6/14/2022 13:21, renjianxinlover wrote:
Ceph version: v12.2.10
OS Distribution: Debian 9
Kernel Release & Version: 4.9.0-18-amd64 #1 SMP Debian 4.9.303-1 (2022-03-07) x86_64 GNU/Linux
But building Ceph failed; the error snippet looks like:
...
[ 33%] Built target osdc
Scanning dependencies of target librados_api_obj
[ 33%] Building CXX
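For reference, building Ceph from source on a Debian host usually follows a sequence roughly like the one below (a sketch only; the exact steps for v12.2.10 may differ):

    ./install-deps.sh            # install build dependencies (script ships in the Ceph source tree)
    ./do_cmake.sh                # configure an out-of-tree build under ./build
    cd build && make -j$(nproc)  # build; the failure reported above occurs during this step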
Hmm.. I will check what the user is deleting. Maybe this is it.
Do you know if this bug is new in 15.2.16?
I can't share the data, but I can share the metadata:
https://pastebin.com/raw/T1YYLuec
For the missing files I have, the multipart file is not available in rados,
but the 0-byte file is.
Thanks for the reply. I believe regarding "0" vs "0.0" it's the same
difference. I will note it's not just changing crush weights which induces
this situation. Introducing upmaps manually or via the balancer also causes
the PGs to be degraded instead of the expected remapped PG state.
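For example, introducing an upmap by hand looks roughly like this (the pg id and osd numbers below are only placeholders, not taken from this cluster):

    # map one replica of PG 2.1a from osd.4 to osd.7
    ceph osd pg-upmap-items 2.1a 4 7
    # then check whether the affected PGs show up as remapped or degraded
    ceph pg ls remapped
    ceph pg ls degraded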
Respectfully,
Hi everybody,
are there other ways for rados objects to get removed, other than "rados -p
POOL rm OBJECT"?
We have a customer who has objects in the bucket index but can't download
them. After checking, it seems the rados object is gone.
The Ceph cluster is running Ceph Octopus 15.2.16.
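In case it helps, checking whether the backing rados object still exists typically looks something like this (bucket, object, and pool names below are placeholders, assuming a default RGW data pool):

    # ask RGW for the object's metadata/manifest
    radosgw-admin object stat --bucket=mybucket --object=myfile
    # then look for the corresponding rados object(s) in the data pool
    rados -p default.rgw.buckets.data ls | grep myfile
    rados -p default.rgw.buckets.data stat <object-name-from-manifest>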
Hi,
Our Ceph is used as backend storage for OpenStack. We use the "images" pool
for Glance and the "compute" pool for instances. We need to migrate our
images pool, which is on HDD drives, to SSD drives.
I copied all the data from the "images" pool that is on HDD disks to an
"ssdimages" pool that
I remember someone reporting the same thing but I can’t find the
thread right now. I’ll try again tomorrow.
Quoting Wesley Dillingham:
I have a brand new cluster (16.2.9) running BlueStore with zero client activity.
I am modifying some crush weights to move PGs off of a host for testing
purposes, but the result is that the PGs go into a degraded+remapped state
instead of simply a remapped state. This is a strange result to me as in
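For context, the sequence being described is roughly the following (the osd id below is a placeholder):

    # drain a host's OSD by zeroing its crush weight
    ceph osd crush reweight osd.12 0
    # expected result: PGs become remapped; observed here: degraded+remapped
    ceph -s
    ceph pg ls degraded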
On 13/06/2022 at 17:54, Eric Le Lay wrote:
On 10/06/2022 at 11:58, Stefan Kooman wrote:
On 6/10/22 11:41, Eric Le Lay wrote:
Hello list,
my ceph cluster was upgraded
Hello ceph users,
I'm trying to set up a ceph client on one of my ceph cluster hosts.
My setup is the following:
- 3 bare-metal HP Synergy servers
- installed the latest ceph release, Quincy (17.2.0), using curl/cephadm
- RHEL 8.6
- the ceph cluster is working fine and health status is OK
compute1 is the
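In case it's useful, getting client access working on a host generally boils down to something like the following (the client name, pool, and caps below are placeholders):

    # on an admin node: create a minimal conf and a keyring for the client
    ceph config generate-minimal-conf > /etc/ceph/ceph.conf
    ceph auth get-or-create client.compute1 mon 'allow r' osd 'allow rw pool=compute' \
        -o /etc/ceph/ceph.client.compute1.keyring
    # copy both files to the client host, then test with:
    ceph -s --id compute1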
Hi folks,
I removed snapshot scheduling on a CephFS path (Pacific), but it reappears the
next day. I didn't remove the retention for this path, though. Does the
retention on a path trigger re-creation of the snap schedule if it was
removed? Is this intended?
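For reference, removing both the schedule and its retention for a path looks roughly like this (the path and retention spec below are placeholders):

    # show what is currently configured for the path
    ceph fs snap-schedule status /volumes/mypath
    # remove the schedule itself
    ceph fs snap-schedule remove /volumes/mypath
    # also drop the retention policy for the path
    ceph fs snap-schedule retention remove /volumes/mypath 14d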
regards
Felix