I set up a test cluster (Pacific 16.2.7 deployed with cephadm) with
several HDDs of two different sizes, 1.8 TB and 3.6 TB; they have weights
1.8 and 3.6, respectively, and there are 2 pools (metadata + data for
CephFS). I currently have a PG count varying from 177 to 182 for OSDs with
small disks and from
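For reference, one way to read per-OSD weights and PG counts like these is
the WEIGHT and PGS columns of:
$ ceph osd df tree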
What David said!
A couple of additional thoughts:
o Nagios (and derivatives like Icinga and check_mk) have been popular for
years. Note that they’re monitoring solutions vs metrics solutions — it’s good
to have both. One issue I’ve seen multiple times with Nagios-family monitoring
is that o
I have a problem with the snap_schedule MGR module. It seems to forget at
least parts of the configuration after the active MGR is restarted.
The following CLI commands (lines starting with '$') and their stdout (lines
starting with '>') demonstrate the problem.
$ ceph fs snap-schedule add /shar
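A hedged sketch of how the symptom can be checked (the path /shared and the
1h schedule are assumptions, not the exact commands from this report):
$ ceph fs snap-schedule add /shared 1h
$ ceph fs snap-schedule list /shared
$ ceph mgr fail          # fail over the active MGR (pass its name on older releases)
$ ceph fs snap-schedule list /shared   # parts of the schedule may be missing now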
What version of Ceph are you using? Newer versions deploy a dashboard and
prometheus module, which have some of this built in. It's a great way to
start seeing what can be done with Prometheus and the built-in exporter. Once
you learn this, if you decide you want something more robust, you can do an
ex
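As a hedged starting point (module and port are the stock defaults and may
already be set up by cephadm):
$ ceph mgr module enable prometheus
$ ceph mgr services        # prints the dashboard and prometheus endpoints
Prometheus can then scrape the exporter, which listens on port 9283 by default.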
Hi,
For anyone using VMware ESXi (6.7) with Ceph iSCSI GWs (Nautilus), I
thought you might benefit from our experience: I have finally identified
what was causing a permanent ~500 MB/s and ~4k IOPS load on our cluster,
specifically on one of our RBD images used as a VMware datastore, and it
was
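(A hedged aside for anyone chasing a similar issue: per-image load can be
watched with the rbd_support MGR module, e.g. "rbd perf image iostat <pool>"
on Nautilus or later; the pool name is a placeholder.)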
Hi.
Basic logic:
1. bucket policy transition
2. radosgw-admin gc process --include-all
3.1. rados ls -p pool | grep > bucket_objects.txt
3.2. rados listxattr -p pool objname | xargs -L1 echo rados getattr -p pool objname >> objname.txt
3.3. rados create -p pool objname
3.4. cat objname.txt | xargs -L1 e
I've got a cluster with different OSD structures; some are updated to
15.2.12 and the others are still on 15.2.9 (bluestore).
No problem so far with the cluster, but I think it's better to normalize
the situation.
*15.2.9*
drwxr-xr-x 23 ceph ceph 4096 Nov 30 15:50 ../
lrwxrwxrwx 1 ceph ceph 24 Nov
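A hedged aside: the version each daemon is actually running can be listed with:
$ ceph versions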
libradosstriper ?
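If it helps, a rough sketch of the idea (pool, object, and file names are made
up for this example): the rados CLI's --striper switch goes through
libradosstriper, the same striping layer an application can use directly
(header radosstriper/libradosstriper.h, linked with -lradosstriper) instead of
plain librados.
$ rados --striper -p testpool put bigobj /tmp/bigfile
$ rados -p testpool ls | grep bigobj    # the data lands in several smaller backing objects
$ rados --striper -p testpool get bigobj /tmp/bigfile.copy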
On 26.01.22 at 10:16, lin yunfan wrote:
> Hi,
> I know with rbd and cephfs there is a stripe setting to stripe data
> into multiple rados objects.
> Is it possible to use the librados API to stripe a large object into many
> small ones?
>
> linyunfan
Thanks for the tips!!!
>
> I would still set noout on relevant parts of the cluster in case something
> goes south and it does take longer than 2 minutes. Otherwise OSDs will
> start getting marked out after 10 minutes or so by default and then you
> have a lot of churn going on.
>
> The monitors
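For reference, a hedged example of limiting noout to one host during
maintenance (the hostname is a placeholder; a plain "ceph osd set noout"
covers the whole cluster):
$ ceph osd set-group noout host-a
# ... do the maintenance ...
$ ceph osd unset-group noout host-a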
Hi!
I restarted the mgr - it didn't help. Or do you mean something else?
> Hi,
>
> have you tried to failover the mgr service? I noticed similar
> behaviour in Octopus.
>
>
Quote from Fyodor Ustinov:
>
>> Hi!
>>
>> No one knows how to fix it?
>>
>>
>> - Original Message -
>>> From: "Fyo
Hi,
I know with rbd and cephfs there is a stripe setting to stripe data
into multiple rados objects.
Is it possible to use the librados API to stripe a large object into many
small ones?
linyunfan
Dear all,
I want to mirror a snapshot in Ceph v16.2.6 deployed with cephadm
using the stock quay.io images. My source file system has a folder
"/src/folder/x" where "/src/folder" has mode "ug=r,o=", in other words
no write permissions for the owner (root).
The sync of a snapshot "initial" now fai
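For context, a hedged sketch of the kind of mirroring setup being described
(the file system name and the choice of /src/folder as the mirrored path are
assumptions; only the path and its ug=r,o= mode come from the report):
$ ceph mgr module enable mirroring
$ ceph fs snapshot mirror enable cephfs
$ ceph fs snapshot mirror add cephfs /src/folder
$ ls -ld /src/folder       # ug=r,o=, i.e. not writable even by the owner (root)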
Hi,
have you tried to failover the mgr service? I noticed similar
behaviour in Octopus.
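For reference, the failover itself can be triggered with (the daemon name is
whatever "ceph mgr stat" reports as active):
$ ceph mgr stat
$ ceph mgr fail <active-mgr-name>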
Quote from Fyodor Ustinov:
Hi!
No one knows how to fix it?
- Original Message -
From: "Fyodor Ustinov"
To: "ceph-users"
Sent: Tuesday, 25 January, 2022 11:29:53
Subject: [ceph-users] How t
Hi!
No one knows how to fix it?
- Original Message -
> From: "Fyodor Ustinov"
> To: "ceph-users"
> Sent: Tuesday, 25 January, 2022 11:29:53
> Subject: [ceph-users] How to remove stuck daemon?
> Hi!
>
> I have a Ceph cluster, version 16.2.7, with this error:
>
> root@s-26-9-19-mon-m1:~#
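For what it's worth, a hedged sketch of the cephadm commands usually involved
in clearing a stuck daemon (the daemon name below is a placeholder, not taken
from this report):
$ ceph orch ps             # see which daemons the orchestrator thinks exist
$ ceph orch daemon rm <daemon.name> --force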