>
> Alternatively, publish all metrics to Prometheus with an fsid label; then
> you can auto-filter based on the fsid of the Ceph cluster, since the fsid
> is unique.
>

This is exactly something we are looking into: providing support for
multi-cluster monitoring and management from the Ceph Dashboard, which is
currently an ongoing PoC. Thanks for providing more context here.

As of now, I don't see a way to make this configurable in the dashboard,
but you can expect it to be added in the near future.
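
For illustration only, if every metric were published with the cluster fsid
as a label (as suggested above), the dashboard could scope its queries with
a matcher along the lines of:

    ceph_cluster_total_used_bytes{fsid="<fsid-of-this-cluster>"}

where <fsid-of-this-cluster> would be the fsid of the cluster the dashboard
is managing. This is only a sketch of the idea, not how the current queries
work.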

Regards,
Nizam

On Fri, Nov 3, 2023 at 7:54 AM Matthew Darwin <b...@mdarwin.ca> wrote:

> In my case I'm adding a label that is unique to each Ceph cluster and can
> then filter on that.  In my Ceph dashboard in Grafana I've added a
> pull-down list to select each different Ceph cluster.
>
> You need a way for me to configure which labels to filter on so I can
> match it up with how I configured Prometheus. Alternatively, publish all
> metrics to Prometheus with an fsid label; then you can auto-filter based
> on the fsid of the Ceph cluster, since the fsid is unique.
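>
> For example (just a sketch, assuming the extra label is called "cluster";
> the real name is whatever was configured on the Prometheus side), the
> pull-down can be a Grafana dashboard variable populated with
>
>     label_values(ceph_health_status, cluster)
>
> and each panel query then filters on it, e.g.
>
>     sum(rate(ceph_pool_wr{cluster="$cluster"}[1m]))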
>
> On 2023-11-02 01:03, Nizamudeen A wrote:
> >
> >     We have 4 ceph clusters going into the same prometheus instance.
> >
> > Just curious: in Prometheus, if you want to see the details for a
> > single cluster, how is it done through a query?
> >
> > For reference, these are the queries that we are currently using:
> >
> >     USEDCAPACITY = 'ceph_cluster_total_used_bytes',
> >     WRITEIOPS = 'sum(rate(ceph_pool_wr[1m]))',
> >     READIOPS = 'sum(rate(ceph_pool_rd[1m]))',
> >     READLATENCY = 'avg_over_time(ceph_osd_apply_latency_ms[1m])',
> >     WRITELATENCY = 'avg_over_time(ceph_osd_commit_latency_ms[1m])',
> >     READCLIENTTHROUGHPUT = 'sum(rate(ceph_pool_rd_bytes[1m]))',
> >     WRITECLIENTTHROUGHPUT = 'sum(rate(ceph_pool_wr_bytes[1m]))',
> >     RECOVERYBYTES = 'sum(rate(ceph_osd_recovery_bytes[1m]))'
> >
> > We might not have considered the possibility of multiple Ceph clusters
> > pointing to a single Prometheus instance. In that case there should be
> > some filtering done with a cluster ID or something similar to properly
> > identify each cluster.
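> >
> > As a rough sketch, if the metrics carried a cluster-identifying label
> > (for example the fsid, which they do not today), each of the queries
> > above would need a matcher, e.g.
> >
> >     sum(rate(ceph_pool_wr{fsid="<fsid>"}[1m]))
> >
> > Without such a matcher the sums aggregate the pools of every cluster
> > that the Prometheus instance scrapes.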
> >
> > FYI @Pedro Gonzalez Gomez <pegon...@redhat.com> @Ankush Behl
> > <anb...@redhat.com> @Aashish Sharma <aasha...@redhat.com>
> >
> > Regards,
> > Nizam
> >
> > On Mon, Oct 30, 2023 at 11:05 PM Matthew Darwin <b...@mdarwin.ca> wrote:
> >
> >     Ok, so I tried the new ceph dashboard by "set-prometheus-api-host"
> >     (note "host" and not "url") and it returns the wrong data.  We have
> >     4 ceph clusters going into the same prometheus instance.  How does
> >     it know which data to pull? Do I need to pass a promql query?
> >
> >     The capacity widget at the top right (not using prometheus) shows
> >     35% of 51 TiB used (test cluster data)... This is correct. The
> >     chart shows used capacity as 1.7 PiB, which is coming from the
> >     production cluster (incorrect).
> >
> >     Ideas?
> >
> >
> >     On 2023-10-30 11:30, Nizamudeen A wrote:
> >     > Ah yeah, probably that's why the utilization charts are empty:
> >     > they rely on the Prometheus info.
> >     >
> >     > And I raised a PR to disable the new dashboard in quincy.
> >     > https://github.com/ceph/ceph/pull/54250
> >     >
> >     > Regards,
> >     > Nizam
> >     >
> >     > On Mon, Oct 30, 2023 at 6:09 PM Matthew Darwin
> >     > <b...@mdarwin.ca> wrote:
> >     >
> >     >     Hello,
> >     >
> >     >     We're not using prometheus within ceph (the ceph dashboards
> >     >     show in our grafana, which is hosted elsewhere). The old
> >     >     dashboard showed the metrics fine, so not sure why in a patch
> >     >     release we would need to make configuration changes to get
> >     >     the same metrics... Agree it should be off by default.
> >     >
> >     >     "ceph dashboard feature disable dashboard" works to put the
> >     >     old dashboard back.  Thanks.
> >     >
> >     >     On 2023-10-30 00:09, Nizamudeen A wrote:
> >     >     > Hi Matthew,
> >     >     >
> >     >     > Is Prometheus configured in the cluster? And is the
> >     >     > PROMETHEUS_API_URL set? You can set it manually with `ceph
> >     >     > dashboard set-prometheus-api-url <url-of-prom>`.
> >     >     >
> >     >     > You can switch back to the old dashboard by toggling the
> >     >     > feature flag: `ceph dashboard feature disable dashboard`
> >     >     > and reloading the page. Probably this should have been
> >     >     > disabled by default.
> >     >     >
> >     >     > Regards,
> >     >     > Nizam
> >     >     >
> >     >     > On Sun, Oct 29, 2023, 23:04 Matthew Darwin
> >     >     > <b...@mdarwin.ca> wrote:
> >     >     >
> >     >     >> Hi all,
> >     >     >>
> >     >     >> I see 17.2.7 quincy is published as debian-bullseye
> >     >     >> packages. So I tried it on a test cluster.
> >     >     >>
> >     >     >> I must say I was not expecting the big dashboard change
> >     >     >> in a patch release.  Also, the "cluster utilization"
> >     >     >> numbers are all blank now (any way to fix it?), so the
> >     >     >> dashboard is much less usable now.
> >     >     >>
> >     >     >> Thoughts?
>
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
