Ok, so I tried the new ceph dashboard via "set-prometheus-api-host"
(note "host", not "url") and it returns the wrong data. We have 4
ceph clusters feeding into the same prometheus instance. How does the
dashboard know which cluster's data to pull? Do I need to pass a PromQL query?
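For what it's worth, Prometheus itself has no notion of "which Ceph cluster" unless the scrape configuration attaches a distinguishing label. A common approach (an assumption on my part, not something the dashboard does automatically) is to add a `cluster` label per scrape job and filter on it in PromQL. A hypothetical sketch, assuming a shared instance at `prometheus.example.com`:

```shell
# point the dashboard at the shared prometheus instance
ceph dashboard set-prometheus-api-host 'http://prometheus.example.com:9090'

# hypothetical: if each cluster's scrape job carries a distinguishing
# "cluster" label, querying one cluster's capacity would look like:
curl -s 'http://prometheus.example.com:9090/api/v1/query' \
  --data-urlencode 'query=ceph_cluster_total_bytes{cluster="cluster-a"}'
```

Without such a label, a query like `ceph_cluster_total_bytes` returns series from all four clusters at once, which could explain the wrong numbers in the widgets.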
The capacity widget at the
Hi Vahideh,
Lua scripting was added in pacific. Did you try uploading that file to a
"pacific" RGW?
What is failing there?
Yuval
On Mon, Oct 30, 2023 at 5:04 PM Vahideh Alinouri
wrote:
> Dear Ceph Users,
>
> I am requesting the backporting changes related to the nats_adapter.lua.
> This
Ah yeah, that's probably why the utilization charts are empty: they rely
on the prometheus info.
And I raised a PR to disable the new dashboard in quincy.
https://github.com/ceph/ceph/pull/54250
Regards,
Nizam
On Mon, Oct 30, 2023 at 6:09 PM Matthew Darwin wrote:
> Hello,
>
> We're not
Dear Ceph Users,
I am requesting the backporting changes related to the nats_adapter.lua.
This feature exists in releases newer than pacific, but it is not
available in the pacific version.
I would greatly appreciate it if someone from the Ceph development
team could backport this change to the pacific version.
We're happy to announce the 7th backport release in the Quincy series.
https://ceph.io/en/news/blog/2023/v17-2-7-quincy-released/
Notable Changes
---
* `ceph mgr dump` command now displays the name of the Manager module that
registered a RADOS client in the `name` field added to
In a production setup of 36 OSDs (SAS disks) totalling 180 TB allocated to a
single Ceph cluster with 3 monitors and 3 managers, there were 830 volumes and
VMs created in OpenStack with Ceph as the backend. On Sep 21, users reported
slowness in accessing the VMs.
Analysing the logs led us to
Hi guys,
Were you able to find a solution for the issue with the long heartbeat and
the slow ops warning?
Thank you so much.
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
Hi Yuval,
this is cool. Thanks for the fast reply and PR. Fingers crossed it gets merged
soon.
This would be very valuable for us and hopefully for others too.
Cheers
Stephan
Hi,
We have a ceph cluster running the reef version. We want to buy some enterprise
SSDs for our ceph cluster, and the drive size we are planning for is 1.92 TB.
For that, we have selected the Intel model. Please share your review of this
model, and if you have any other model preference, please share it with us.
another option is to enable the rgw ops log, which includes the bucket
name for each request
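For reference, the ops log is controlled by a few standard RGW options. A minimal sketch (option names are from the stock RGW configuration; the socket path is just an example, and it's worth checking your release's docs):

```shell
# enable the rgw ops log, which records the bucket name, object name,
# operation, and status for each request
ceph config set client.rgw rgw_enable_ops_log true

# send the log to a unix domain socket instead of rados objects
ceph config set client.rgw rgw_ops_log_socket_path /var/run/ceph/rgw-ops.sock
ceph config set client.rgw rgw_ops_log_rados false
```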
the http access log line that's visible at log level 1 follows a known
apache format that users can scrape, so i've resisted adding extra
s3-specific stuff like bucket/object names there. there was some
Sorry to dig up this old thread ...
On 25.01.23 10:26, Christian Rohmann wrote:
On 20/10/2022 10:12, Christian Rohmann wrote:
1) May I bring up again my remarks about the timing:
On 19/10/2022 11:46, Christian Rohmann wrote:
I believe the upload of a new release to the repo prior to the
Hello,
We're not using prometheus within ceph (ceph dashboards show in our
grafana, which is hosted elsewhere). The old dashboard showed the
metrics fine, so I'm not sure why, in a patch release, we would need to
make configuration changes to get the same metrics. Agree it should be
off by
Hi Dan,
we are currently moving all the logging into lua scripts, so it is not an
issue anymore for us.
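As a rough illustration of that approach, a request-logging script might look something like this (a minimal sketch against the RGW Lua request context; the fields follow the upstream RGW Lua docs, so verify against your release):

```lua
-- postRequest context script: log bucket/object names via RGWDebugLog
if Request.Bucket then
  RGWDebugLog("bucket: " .. Request.Bucket.Name ..
              ", object: " .. (Request.Object and Request.Object.Name or "-") ..
              ", op: " .. Request.RGWOp)
end
```

Uploaded with something like `radosgw-admin script put --infile=./log.lua --context=postRequest`.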
Thanks
ps: the ceph analyzer is really cool. plusplus
Am Sa., 28. Okt. 2023 um 22:03 Uhr schrieb Dan van der Ster <
dan.vanders...@clyso.com>:
> Hi Boris,
>
> I found that you need to use
I use ceph 17.2.6. When I deploy two separate RGW realms, each with its own
zonegroup and zone, the dashboard enables access for both object gateways and
I can create users, buckets, etc. But when I try to create a bucket in one
of the object gateways, I get the error below:
debug