Hi Ernesto,
That makes sense - thanks for explaining why the changes were made.
As we don't use cephadm, being able to customise the data source would
be fantastic.
I've put up a tiny pull request updating the docs to mention that the data
source must be named "Dashboard1".
best regards,
Jake
Hi Marc,
While it might be tempting to rely on Ceph itself to store the monitoring
data, IMHO it would be a bad idea: the monitoring I/O load might interfere
with the cluster's performance, and, in a worst-case scenario, the monitoring
data might become unavailable (probably the exact kind of situation …
>- Now, with Cephadm, you can easily deploy a highly-available
> monitoring
Really? Sorry for asking, because I don't know anything about podman. But does
podman automatically start tasks on a different node when one is down? And does
it have some native Ceph support for external volumes to f…
Hi Jake,
AFAIC there were a couple of reasons why we changed that:
- Now, with Cephadm, you can easily deploy a highly-available monitoring
  stack (it needs a bit of polishing yet, though). If, for example, you extend
  the Prometheus instance count to 2 or 3, Cephadm will automatically
  conf…
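
For anyone following along, scaling the Prometheus instance count with
Cephadm can be done via a service spec; a minimal sketch (the placement
count of 2 here is just an example value):

```yaml
# Cephadm service spec: run two Prometheus instances
# Apply with: ceph orch apply -i prometheus.yaml
service_type: prometheus
placement:
  count: 2
```

The equivalent one-liner would be `ceph orch apply prometheus --placement="count=2"`.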