Hi,

I don't have an answer for the SNMP part; I guess you could just bring up your own SNMP daemon and configure it to your needs. As for the orchestrator backend, you have these three options (I don't know what "test_orchestrator" does, but it doesn't sound like something that should be used in production):

            enum_allowed=['cephadm', 'rook', 'test_orchestrator'],

If you intend to use the orchestrator, I suggest moving to cephadm (you can convert an existing cluster by following this guide: https://docs.ceph.com/en/latest/cephadm/adoption/). Although the orchestrator module is "on", it still requires a backend.
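Just as a rough sketch of what that could look like once the cluster has been adopted by cephadm (the spec follows the snmp-gateway documentation you already linked; the community string and trap destination below are placeholders you would replace with your own values):

# point the orchestrator at the cephadm backend
ceph mgr module enable cephadm
ceph orch set backend cephadm
ceph orch status

# then deploy the SNMP gateway from a service spec
cat > snmp-gateway.yaml <<'EOF'
service_type: snmp-gateway
placement:
  count: 1
spec:
  credentials:
    snmp_community: public            # placeholder community string
  port: 9464
  snmp_destination: 192.168.1.10:162  # placeholder trap receiver
  snmp_version: V2c
EOF
ceph orch apply -i snmp-gateway.yaml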

Regards,
Eugen

Quoting Lokendra Rathour <lokendrarath...@gmail.com>:

Hi Team,
please help with the issue raised below.


Best Regards,
Lokendra

On Wed, Dec 13, 2023 at 2:33 PM Kushagr Gupta <kushagrguptasps....@gmail.com>
wrote:

Hi Team,

*Environment:*
We have deployed a ceph setup using ceph-ansible.
Ceph-version: 18.2.0
OS: Almalinux 8.8
We have a 3-node setup.

*Queries:*

1. Is SNMP supported for ceph-ansible? Is there some other way to set up
an SNMP gateway for the ceph cluster?
2. Is there a procedure to set the backend for the ceph orchestrator via
ceph-ansible? Which backend should we use?
3. Are there any CEPH MIB files which work independently of Prometheus?


*Description:*
We are trying to perform SNMP monitoring for the ceph cluster using the
following links:

1. https://docs.ceph.com/en/quincy/cephadm/services/snmp-gateway/
2. https://www.ibm.com/docs/en/storage-ceph/7?topic=traps-deploying-snmp-gateway

But when we try to follow the steps mentioned in the above links, any
"ceph orch" command fails with the following error:
"Error ENOENT: No orchestrator configured (try `ceph orch set backend`)"

After going through the following links:
1.
https://www.ibm.com/docs/en/storage-ceph/5?topic=operations-use-ceph-orchestrator
2.
https://forum.proxmox.com/threads/ceph-mgr-orchestrator-enabled-but-showing-missing.119145/
3. https://docs.ceph.com/en/latest/mgr/orchestrator_modules/
I think that, since we have deployed the cluster using ceph-ansible, we
can't use the "ceph orch" commands.
When we checked in the cluster, the following are the enabled modules:
"
[root@storagenode1 ~]# ceph mgr module ls
MODULE
balancer           on (always on)
crash              on (always on)
devicehealth       on (always on)
orchestrator       on (always on)
pg_autoscaler      on (always on)
progress           on (always on)
rbd_support        on (always on)
status             on (always on)
telemetry          on (always on)
volumes            on (always on)
alerts             on
iostat             on
nfs                on
prometheus         on
restful            on
dashboard          -
influx             -
insights           -
localpool          -
mds_autoscaler     -
mirroring          -
osd_perf_query     -
osd_support        -
rgw                -
selftest           -
snap_schedule      -
stats              -
telegraf           -
test_orchestrator  -
zabbix             -
[root@storagenode1 ~]#
"
As can be seen above, the orchestrator module is on.

Also, we were exploring SNMP further, and as per the file
"/etc/prometheus/ceph/ceph_default_alerts.yml" on the ceph storage nodes,
the OIDs in that file represent the OIDs for ceph components exposed via
Prometheus.
For example, for the OID 1.3.6.1.4.1.50495.1.2.1.2.1:
[root@storagenode3 ~]# snmpwalk -v 2c -c 209ijvfwer0df92jd -O e 10.0.1.36 1.3.6.1.4.1.50495.1.2.1.2.1
CEPH-MIB::promHealthStatusError = No Such Object available on this agent at this OID
[root@storagenode3 ~]#
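For reference, assuming the CEPH MIB file (CEPH-MIB.txt from the ceph sources) has been copied into the local net-snmp MIB directory, the symbolic name can be translated back to its numeric OID, e.g.:

# resolve the CEPH-MIB object name to its numeric OID
snmptranslate -m +CEPH-MIB -On CEPH-MIB::promHealthStatusError
# should print .1.3.6.1.4.1.50495.1.2.1.2.1 (the OID queried above)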

Kindly help us with the same.

Thanks and regards,
Kushagra Gupta



--
~ Lokendra
skype: lokendrarathour
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io

