The perf counter dump added an "avgtime" field which the collectd-5.7.2
ceph plugin does not understand; it prints a warning and exits:
ceph plugin: ds %s was not properly initialized.
Does anybody know of a patch to collectd that might help?
Thanks,
Yang
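For context, the "avgtime" fields come from Ceph's long-running average counters (added in Luminous): in the perf dump JSON those counters are a map of avgcount/sum/avgtime rather than a plain number, which is what trips up collectd before 5.8. A minimal sketch of the flattening a patched plugin would need to do; the sample JSON is hand-written for illustration, not real cluster output:

```python
import json

# Illustrative perf-dump fragment (not from a real cluster): long-running
# averages are exported as a map including "avgtime", which the 5.7.x
# collectd ceph plugin does not expect.
perf_dump = json.loads("""
{
  "osd": {
    "op_latency": {"avgcount": 100, "sum": 25.0, "avgtime": 0.25},
    "op_in_bytes": 4096
  }
}
""")

def flatten(counters, prefix=""):
    """Flatten nested counters into scalar metrics, roughly what a
    patched plugin would do: keep plain numbers, and split the
    avgcount/sum/avgtime maps into separate data sources."""
    flat = {}
    for name, value in counters.items():
        key = prefix + name
        if isinstance(value, dict):
            if "avgcount" in value:
                # Long-running average: export each component separately.
                for part, v in value.items():
                    flat[key + "." + part] = v
            else:
                flat.update(flatten(value, key + "."))
        else:
            flat[key] = value
    return flat

metrics = flatten(perf_dump)
print(metrics["osd.op_latency.avgtime"])  # 0.25
print(metrics["osd.op_in_bytes"])         # 4096
```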
Our existing setup is as follows and we won't be able to change the network
configuration due to security limitations:
client 1: rbd devices on 153.64.X.X network (1GE network)
client 2: rbd devices on 10.25.X.X network (10GE fast switch)
single monitor and MDS server multihomed on both 153.64.
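For reference, a multihomed monitor setup like the one above is usually expressed with multiple public networks in ceph.conf. A minimal sketch; the netmasks and the monitor address are assumptions for illustration, not taken from the message:

```ini
[global]
# Both client-facing subnets from the setup above; the /16 masks are
# assumed -- adjust to the real netmasks.
public network = 153.64.0.0/16, 10.25.0.0/16

[mon.a]
# A multihomed host still advertises a single monitor address to
# clients; the address here is hypothetical.
mon addr = 153.64.1.1:6789
```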
Our Ceph client is using the bobtail legacy tunables; in particular,
"chooseleaf_vary_r" is set to 0.
My question is how this would impact CRUSH, and hence performance, when
deploying "jewel" on the server side, together with the experimental
"bluestore" backend.
Does it only affect data placement or does it a
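One way to check what the cluster currently reports is `ceph osd crush show-tunables`, which prints JSON. A small sketch of inspecting that output offline; the sample values below are assumed, not real output from your cluster:

```python
import json

# Abridged, hand-written sample of `ceph osd crush show-tunables`
# output -- the field values are assumptions, check your own cluster.
sample = """
{
  "choose_total_tries": 50,
  "chooseleaf_descend_once": 1,
  "chooseleaf_vary_r": 0,
  "profile": "bobtail"
}
"""

tunables = json.loads(sample)

# Tunables affect data placement: raising chooseleaf_vary_r from 0 to 1
# changes some PG mappings, so switching profiles triggers rebalancing.
if tunables["chooseleaf_vary_r"] == 0:
    print("legacy chooseleaf behaviour: expect data movement if you "
          "switch to firefly/jewel tunables")
```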
See title.
We have Firefly on the client side (SLES11 SP3) and it does not seem to work
well with the "jewel" server nodes (CentOS 7).
Can somebody please provide some guidelines?
Thanks,
Yang
___
ceph-users mailing list
ceph-users@lists.ceph.com
http:/
Hi,
I am following the documentation on how to prepare and activate an OSD
with ceph-disk and ran into the following problem:
command_check_call: Running command: /usr/bin/ceph-osd --cluster ceph
--mkfs --mkkey -i 8 --monmap /var/lib/ceph/tmp/mnt.RxRUd8/activate.monmap
--osd-data /var/lib/ceph/tmp/mnt.RxRUd8