Hi, we have random errors with RGW during backups from Veeam.
The daemons go into an error state.
Where can we find the appropriate logs about it?
I just found something related to this:
-1788> 2023-07-31T19:51:21.169+ 7f04567d3700 2 req 10656715914266436796
0.0s getting op 0
-1787>
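For the logs, a minimal sketch, assuming a cephadm deployment (the <fsid> and <daemon> placeholders come from ceph fsid and ceph orch ps --daemon-type rgw):

ceph crash ls                                   # any recorded daemon crashes
ceph config set client.rgw debug_rgw 20         # raise RGW verbosity; revert it afterwards
journalctl -u ceph-<fsid>@rgw.<daemon>.service  # per-daemon log on the host running that RGW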
Hi, we have a service that is still crashing when the S3 client (Veeam backup) starts
to write data.
Main log from the RGW service:
req 13170422438428971730 0.00886s s3:get_obj WARNING: couldn't find acl header for object, generating default
2023-07-20T14:36:45.331+ 7fa5adb4c700 -1 *** Caught
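A "*** Caught signal" line means the daemon dumped a backtrace, which the crash module keeps; a sketch for pulling it out (<crash-id> is a placeholder taken from the first command):

ceph crash ls               # list recorded crashes with their IDs
ceph crash info <crash-id>  # full metadata and backtrace for one crash
ceph crash archive-all      # acknowledge them once inspected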
Hi, we have a Ceph 17.2.6 cluster with radosgw and a couple of buckets in it.
We use it for backups with object lock directly from Veeam.
After a few backups we got:
HEALTH_WARN 2 large omap objects
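With Veeam writing into locked buckets, the large omap objects are most likely bucket index shards. A sketch for confirming that, assuming OSD logs under /var/log/ceph:

ceph health detail                                # names the pool holding the large omap objects
grep -r 'Large omap object found' /var/log/ceph/  # the OSD log records the exact object
radosgw-admin bucket limit check                  # per-bucket object counts versus index shards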
Hi, the system is still backfilling and still has the same PGs degraded.
I see that the percentage of degraded objects is at a standstill;
I mean, it has not decreased below 0.010% for days.
Is the backfilling connected to the degraded objects?
Must the system finish backfilling before it finishes recovering the degraded ones?
[WRN]
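Degraded counts normally drain as backfill proceeds, since backfill is what re-creates the missing copies. Both sets can be watched side by side with something like:

ceph pg ls degraded      # PGs still missing replicas
ceph pg ls backfilling   # PGs currently being backfilled
ceph pg dump_stuck       # anything not making progress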
Thanks, I tried to change the pg and pgp numbers to a higher value, but the PG count does not increase.
data:
pools: 8 pools, 1085 pgs
objects: 242.28M objects, 177 TiB
usage: 553 TiB used, 521 TiB / 1.0 PiB avail
pgs: 635281/726849381 objects degraded (0.087%)
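One thing worth checking: since Nautilus, pg_num is raised stepwise toward a target, and the steps are throttled while the cluster has degraded or misplaced objects, which would match the 0.087% shown above. A sketch (<pool> is a placeholder):

ceph osd pool ls detail                         # shows pg_num together with pg_num_target
ceph osd pool get <pool> pg_num
ceph config get mgr target_max_misplaced_ratio  # the throttle for the stepwise increase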
Hi to all,
using Ceph 17.2.5 I have 3 PGs in a stuck state.
ceph pg map 8.2a6
osdmap e32862 pg 8.2a6 (8.2a6) -> up [88,100,59] acting [59,100]
Looking at OSDs 88, 100, and 59 I got this:
ceph pg ls-by-osd osd.100 | grep 8.2a6
8.2a6 211004209089 00 1747979252050
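up [88,100,59] versus acting [59,100] means the PG wants three OSDs but is currently served by two while it waits for backfill to osd.88. The query output should say what is blocking it:

ceph pg 8.2a6 query   # see the recovery_state section for the blocking reason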
The main error now is:
[ERR] MGR_MODULE_ERROR: Module 'cephadm' has failed: Expecting value: line 1
column 1 (char 0)
Module 'cephadm' has failed: Expecting value: line 1 column 1 (char 0)
If we disable the cephadm module, the health becomes OK.
So is there a way to change cephadm's version?
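"Expecting value: line 1 column 1 (char 0)" is a JSON parse error inside the module. The cephadm module ships inside ceph-mgr, so its version follows the mgr daemon rather than being switchable on its own; restarting the module is usually the first thing to try:

ceph mgr fail   # fail over to a standby mgr, re-initializing all modules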
Hi, we have a cluster with 3 nodes. Each node has 4 HDDs and 1 SSD.
We would like to have one pool only on SSD and one pool only on HDD, using the
device class feature.
Here is the setup:
# buckets
host ceph01s3 {
id -3 # do not change unnecessarily
id -4 class hdd # do not
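For the class-based split, a minimal sketch (rule and pool names are placeholders):

ceph osd crush rule create-replicated rule-hdd default host hdd
ceph osd crush rule create-replicated rule-ssd default host ssd
ceph osd pool set <hdd-pool> crush_rule rule-hdd
ceph osd pool set <ssd-pool> crush_rule rule-ssd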
Looking at ceph orch upgrade check, I found this:
},
"cephadm.8d0364fef6c92fc3580b0d022e32241348e6f11a7694d2b957cdafcb9d059ff2": {
"current_id": null,
"current_name": null,
"current_version": null
},
Could this lead to the issue?
I found this with ceph orch ps:
cephadm.8d0364fef6c92fc3580b0d022e32241348e6f11a7694d2b957cdafcb9d059ff2
srvcephprod04  stopped  4m ago  -
I cannot find anything interesting in cephadm.log.
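The host-side cephadm.log only covers actions executed on that host; the orchestrator module itself logs to the cluster log. As far as I can tell, the cephadm.<digest> entry is the copy of the cephadm binary kept under /var/lib/ceph/<fsid>/ on each host, not a real daemon.

ceph log last 100 debug cephadm   # recent messages from the mgr/cephadm module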
Now the error is:
HEALTH_ERR
Module 'cephadm' has failed: 'cephadm'
Any idea how to fix it?
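"Module 'cephadm' has failed: 'cephadm'" looks like a Python KeyError inside the module. A sketch of the usual first steps:

ceph mgr module disable cephadm
ceph mgr module enable cephadm
ceph config-key ls | grep mgr/cephadm   # the module's persisted state, if deeper digging is needed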
Hi, starting an upgrade from 15.2.17 I got this error:
Module 'cephadm' has failed: Expecting value: line 1 column 1 (char 0)
The cluster was in HEALTH_OK before starting.
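Once the module is answering again, this shows where the upgrade stopped and which daemons are on which release:

ceph orch upgrade status   # current target image and progress
ceph versions              # version breakdown per daemon type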
I do not have a cache pool in it.
Ceph 16.2.11:
is it safe to enable scrub and deep scrub during backfilling?
I have long recovery/backfilling due to a new CRUSH map; backfilling is going
slowly and the deep scrub interval has expired, so I have many PGs not
deep-scrubbed in time.
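Scrub and backfill can run side by side; whether an OSD starts a scrub while it has recovery ops in flight is gated by osd_scrub_during_recovery, which defaults to false. A sketch:

ceph config get osd osd_scrub_during_recovery
ceph config set osd osd_scrub_during_recovery true   # allow scrubbing alongside recovery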
Best regards
Alessandro
Hi, we have a cluster with this ceph df:
--- RAW STORAGE ---
CLASS    SIZE     AVAIL    USED     RAW USED  %RAW USED
hdd      240 GiB  205 GiB  29 GiB   35 GiB    14.43
hddvm    1.6 TiB  1.2 TiB  277 GiB  332 GiB   20.73
TOTAL    1.8 TiB  1.4 TiB  305 GiB  366 GiB   19.91
--- POOLS ---
POOL
Hi, and thanks for the answer.
I installed 16.2.10. I did not check for the shadow trees before doing the
CRUSH map modification.
So is it expected that this looks like a new route to the algorithm that
calculates the placement of PGs and data?
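For reference, the shadow trees that device classes create can be inspected with:

ceph osd crush tree --show-shadow   # shows the per-class (~hdd, ~ssd) hierarchies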
Best regards.
Hi to all, and thanks for sharing your experience with Ceph!
We have a simple setup with 9 OSDs, all HDD, and 3 nodes, 3 OSDs per node.
We started the cluster to test how it works with HDDs, with a default and easy
bootstrap. Then we decided to add SSDs and create a pool that uses only SSDs.
In order to
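For the SSD-only pool, tagging the new OSDs by class would look roughly like this (osd.9 stands in for each new SSD OSD):

ceph osd crush rm-device-class osd.9        # clear any auto-detected class first
ceph osd crush set-device-class ssd osd.9   # tag the OSD as ssd
ceph osd tree                               # verify the CLASS column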