Could the problem be related to some faulty hardware (RAID controller, port, cable) rather than the disk itself?
Does the "faulty" disk work OK in another server?
Behnam Loghmani wrote on 21/02/18 16:09:
Hi there,
I replaced the SSD on the problematic node with a new one and reconfigured the OSDs
and the MON service
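Roughly, the replacement boiled down to something like this (OSD id 3 and /dev/sdb are placeholders for the actual id and device):

# drain and remove the old OSD
ceph osd out 3
systemctl stop ceph-osd@3
ceph osd purge 3 --yes-i-really-mean-it
# recreate it on the replacement SSD
ceph-volume lvm create --data /dev/sdb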
Ben, first of all, thanks a lot for such a quick reply! I appreciate the explanation and the pointers on
things to check!
I am new to all of this, including InfluxDB, which is why I used the wrong influx CLI to check whether
data is actually coming in. But
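For reference, a check that does show incoming points with the InfluxDB 1.x CLI looks something like this (the database name is whatever you configured for the plugin, and the measurement name is illustrative):

# list the measurements the plugin has created
influx -host localhost -database ceph -execute 'SHOW MEASUREMENTS'
# peek at the most recent points for one of them
influx -host localhost -database ceph -execute 'SELECT * FROM ceph_osd_stats ORDER BY time DESC LIMIT 5'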
Marc Roos wrote on 13/02/18 00:50:
why not use collectd? centos7 rpms should do fine.
Marc, sorry, I somehow missed your question. One reason could be that collectd is an additional
daemon, whereas the influx plugin for ceph is just an additional part of the already running system (ceph).
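In other words, the whole setup is just the module plus a few config keys; a minimal sketch (host and credentials are placeholders; key names per the influx plugin docs):

ceph mgr module enable influx
ceph config-key set mgr/influx/hostname influxdb.example.com
ceph config-key set mgr/influx/database ceph
ceph config-key set mgr/influx/username ceph
ceph config-key set mgr/influx/password secret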
Forgot to mention that the influx self-test produces reasonable output too (a long JSON list with some
metrics and timestamps), and the following lines appear in the mgr log:
2018-02-19 17:35:04.208858 7f33a50ec700 1 mgr.server reply handle_command (0) Success
2018-02-19 17:35:04.245285
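(the self-test in question being the module's built-in one:)

ceph influx self-test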
Dear Ceph users,
I am trying to enable the influx plugin for ceph following http://docs.ceph.com/docs/master/mgr/influx/
but no data arrives in the InfluxDB database. As soon as the 'ceph mgr module enable influx' command is executed on
one of the ceph mgr nodes (running on CentOS 7.4.1708) there are the following
Benjeman Meekhof wrote on 12/02/18 23:50:
In our case I think we grabbed the SRPM from Fedora and rebuilt it on
Scientific Linux (another RHEL derivative).
I've just done the same: rebuilt from the fc28 SRPM (some spec-file tuning was required to build it on
CentOS 7).
Presumably the binary
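For anyone wanting to repeat it, the rebuild was roughly as follows (assuming the package in question is python-influxdb; file names and versions are illustrative):

# install the source package and its build dependencies
rpm -ivh python-influxdb-5.0.0-1.fc28.src.rpm
yum-builddep -y ~/rpmbuild/SPECS/python-influxdb.spec
# tune the spec for el7, then build and install the result
rpmbuild -ba ~/rpmbuild/SPECS/python-influxdb.spec
yum localinstall ~/rpmbuild/RPMS/noarch/python2-influxdb-*.noarch.rpm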
Dear all,
I'd like to store ceph luminous metrics in InfluxDB. It seems the influx plugin has already been
backported to luminous:
rpm -ql ceph-mgr-12.2.2-0.el7.x86_64|grep -i influx
/usr/lib64/ceph/mgr/influx
/usr/lib64/ceph/mgr/influx/__init__.py
/usr/lib64/ceph/mgr/influx/__init__.pyc
Thanks a lot to everyone who shared thoughts and their own experience on this topic! It seems that Frédéric's input is
exactly what I've been looking for. Thanks Frédéric!
Jason Dillaman wrote on 02/02/18 19:24:
Concur that it's technically feasible by restricting access to
"rbd_id.", "rbd_header..",
Hello!
I wonder if it's possible in ceph Luminous to manage user access to rbd images on a per-image (rather than
the whole rbd pool) basis?
I need to provide rbd images to my users but would like to disable their ability to list all the images
in a pool, as well as to somehow access/use them if a ceph
Hello!
I need to automatically mount CephFS when a KVM VM boots.
I tried to follow the recommendations at http://docs.ceph.com/docs/master/cephfs/fstab/ but in
both cases (kernel mode or FUSE), as well as when specifying the mount command in /etc/rc.local, it
always fails to mount CephFS
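For reference, a kernel-client fstab line per that doc page looks like this (monitor address, user name, and key path are placeholders; _netdev makes systemd wait for the network before mounting, which is the usual fix when the mount fails at boot):

192.168.0.1:6789:/  /mnt/cephfs  ceph  name=cephfs_user,secretfile=/etc/ceph/cephfs.secret,noatime,_netdev  0 2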