Hello Christopher,
Could you please paste the logs and exceptions to this thread as well?
Regards,
Nizam
On Wed, May 8, 2024 at 11:21 PM Christopher Durham
wrote:
> Hello,
> I am using 18.2.2 on Rocky Linux 8.
>
> I am getting an HTTP 500 error when trying to hit the Ceph dashboard on Reef
>
remain empty, the numbers 1 and 0.5 on
> each.
>
> Regarding the used storage, notice the overall usage is 43.6 of 111
> TiB. That seems quite a distance from the trigger warning points of 85 and
> 95%? The default values are in use. All the OSDs are between 37% and 42%
> usage. What
Hi,
The warning and danger indicators in the capacity chart correspond to the
nearfull and full ratios set on the cluster; their default values are 85%
and 95% respectively. You can run `ceph osd dump | grep ratio` to see them.
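As a rough illustration of how those ratios relate to the usage figures in this thread (this is just a sketch with made-up helper names, not dashboard code; the 0.85/0.95 defaults match a stock cluster):

```python
def capacity_status(used_tib, total_tib, nearfull=0.85, full=0.95):
    """Classify cluster capacity the way the dashboard chart colors it.

    nearfull/full defaults mirror what `ceph osd dump | grep ratio`
    reports on a cluster with default settings (0.85 and 0.95).
    """
    frac = used_tib / total_tib
    if frac >= full:
        return "danger"
    if frac >= nearfull:
        return "warning"
    return "ok"

# The cluster in this thread: 43.6 TiB used of 111 TiB (~39%),
# well below the 85% warning threshold.
status = capacity_status(43.6, 111)
```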
When this got introduced, there was a blog post
oke suite summaries.
>
> @Radoslaw Zarzynski , @Adam King
> , @Nizamudeen A , mind having a look to ensure the
> results from the rados suite look good to you?
>
> @Venky Shankar mind having a look at the smoke
> suite? There was a resurgence of https://tracker.ceph.com/iss
dashboard approved. Our e2e specs are passing, but the suite failed because
of a different error.
cluster [WRN] Health check failed: 1 stray daemon(s) not managed by cephadm
(CEPHADM_STRAY_DAEMON)" in cluster log
On Tue, Feb 20, 2024 at 9:29 PM Yuri Weinstein wrote:
> We have restarted QE
Thanks Laura,
Raised a PR for https://tracker.ceph.com/issues/57386
https://github.com/ceph/ceph/pull/55415
On Thu, Feb 1, 2024 at 5:15 AM Laura Flores wrote:
> I reviewed the rados suite. @Adam King , @Nizamudeen A
> would appreciate a look from you, as there are some
> orc
dashboard looks good! approved.
Regards,
Nizam
On Tue, Jan 30, 2024 at 3:09 AM Yuri Weinstein wrote:
> Details of this release are summarized here:
>
> https://tracker.ceph.com/issues/64151#note-1
>
> Seeking approvals/reviews for:
>
> rados - Radek, Laura, Travis, Ernesto, Adam King
> rgw -
Understood, thank you.
On Thu, Jan 25, 2024, 20:24 Sake Ceph wrote:
> I would say drop it for the Squid release; or if you keep it in Squid but
> are going to disable it in a minor release later, please make a note in the
> release notes if the option is being removed.
> Just my 2 cents :)
>
> Best
Ah okay, thanks for the clarification.
In that case, we'll probably need to keep this TLS 1.2 fix for Squid, I guess.
I'll check and update as necessary.
On Thu, Jan 25, 2024, 20:12 Sake Ceph wrote:
> Hi Nizamudeen,
>
> Thank you for your quick response!
>
> The load balancers
Hi,
I'll re-open the PR and merge it to Quincy. Btw, I want to know if the
load balancers will support TLS 1.3 in the future, because we were
planning to completely drop TLS 1.2 support from the dashboard for
security reasons. (But so far we are planning to keep it as it is, at least
things are!) and so we have resolved
> our problem - a mis-configured port number - obvious when you think about
> it - and so I'd like to thank you once again for all of your patience and
> help
>
> Cheers
>
> Dulux-oz
> On 05/01/2024 20:39, Nizamudeen A wrote:
>
> ah
Hi,
Is it possible that this is related to https://tracker.ceph.com/issues/63927
?
Regards,
Nizam
On Fri, Jan 5, 2024 at 4:22 PM Zoltán Beck wrote:
> Hi All,
>
> we just upgraded to Reef, everything looks great, except the new
> Dashboard. The Recovery Throughput graph is empty, the
said I'm new to podman and containers -
> so, stupid Q: What is the "typical" name for a given container eg if the
> server is "node1" is the management container "mgr.node1" or something
> similar?
>
> And thanks for the help - I really *do* appreciate i
PM duluxoz wrote:
> Yeap, can do - are the relevant logs in the "usual" place or buried
> somewhere inside some sort of container (typically)? :-)
> On 05/01/2024 20:14, Nizamudeen A wrote:
>
> no, the error message is not clear enough to deduce an error. could you
>
done all that - we're now at the point of creating the iSCSI
> Target(s) for the gateway (via the Dashboard and/or the CLI: see the error
> message in the OP) - any ideas? :-)
>
> Cheers
>
> Dulux-Oz
> On 05/01/2024 19:10, Nizamudeen A wrote:
>
> Hi,
>
> You can fi
Hi,
You can find the APIs associated with iSCSI here:
https://docs.ceph.com/en/reef/mgr/ceph_api/#iscsi
If you create an iSCSI service through the dashboard or cephadm, it should add
the iSCSI gateways to the dashboard.
You can view them by issuing `ceph dashboard iscsi-gateway-list`, and you
Hi,
The new dashboard refreshes every 5 seconds (not 25 seconds). But the
Cluster Utilization chart refreshes in sync with the
scrape interval of prometheus (which is defaulted to 15s unless explicitly
changed in the prometheus configuration).
Are you seeing the whole dashboard getting refreshed
more like a production
cluster
RCs for reef, quincy and pacific
for next week when there is more time to discuss
Regards,
--
Nizamudeen A
Software Engineer
Red Hat <https://www.redhat.com/>
BUG
>
> cephadm ['--image', '
> quay.io/ceph/ceph@sha256:56984a149e89ce282e9400ca53371ff7df74b1c7f5e979b6ec651b751931483a',
> '--timeout', '895', 'list-networks']
> 2023-11-16 15:03:53,692 7f652d1e6740 DEBUG
> ------
Hello,
can you also add the mgr logs at the time of this error?
Regards,
On Thu, Nov 16, 2023 at 4:12 PM Jean-Marc FONTANA
wrote:
> Hello David,
>
> We tried what you pointed out in your message. First, it was set to
>
> "s3, s3website, swift, swift_auth, admin, sts, iam, subpub"
>
> We tried to
, 2023 at 10:33 AM Yuri Weinstein
> wrote:
>
>> Build 4 with https://github.com/ceph/ceph/pull/54224 was built and I
>> ran the tests below and asking for approvals:
>>
>> smoke - Laura
>> rados/mgr - PASSED
>> rados/dashboard - Nizamudeen
>> orc
dashboard changes are minimal and approved. And since the dashboard change
is related to the monitoring stack (prometheus..), which is something not
covered in the dashboard test suites, I don't think running it is necessary.
But maybe the cephadm suite has some monitoring-stack-related testing
ith its implementation, we thought it'd be good to
> get
> >> some community feedback around it. So please let us know what you think
> >> (the goods and the bads).
> >>
> >> Regards,
> >> --
> >>
> >> Nizamudeen A
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
dashboard approved, the test failure is a known Cypress issue which is not a
blocker.
Regards,
Nizam
On Wed, Nov 8, 2023, 21:41 Yuri Weinstein wrote:
> We merged 3 PRs and rebuilt "reef-release" (Build 2)
>
> Seeking approvals/reviews for:
>
> smoke - Laura, Radek 2 jobs failed in
o I can
> match it up with how I configured the prometheus. Alternately publish
> all metrics to prometheus with fsid label then you can auto-filter
> based on the fsid of the ceph cluster since fsid is unique.
>
> On 2023-11-02 01:03, Nizamudeen A wrote:
> >
> > We hav
s 35%
> of 51 TiB used (test cluster data)... This is correct. The chart shows
> used capacity is 1.7 PiB, which is coming from the production cluster
> (incorrect).
>
> Ideas?
>
>
> On 2023-10-30 11:30, Nizamudeen A wrote:
> > Ah yeah, probably that's why the utiliza
by default.
>
> "ceph dashboard feature disable dashboard" works to put the old
> dashboard back. Thanks.
>
> On 2023-10-30 00:09, Nizamudeen A wrote:
> > Hi Matthew,
> >
> > Is the prometheus configured in the cluster? And also the
> > PROMETH
Hi Matthew,
Is prometheus configured in the cluster? And is the
PROMETHEUS_API_URL set? You can set it manually with `ceph dashboard
set-prometheus-api-url`.
You can switch to the old Dashboard by switching the feature toggle in the
dashboard. `ceph dashboard feature disable dashboard`
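For reference, the commands above as an admin-command fragment to run against a live cluster (the Prometheus URL here is an example placeholder, not taken from this thread):

```shell
# Point the dashboard at your Prometheus instance (placeholder URL)
ceph dashboard set-prometheus-api-url http://prometheus-host:9095

# Toggle between the old and new landing pages
ceph dashboard feature disable dashboard   # fall back to the old landing page
ceph dashboard feature enable dashboard    # return to the new one
```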
dashboard approved!
On Tue, Oct 17, 2023 at 12:22 AM Yuri Weinstein wrote:
> Details of this release are summarized here:
>
> https://tracker.ceph.com/issues/63219#note-2
> Release Notes - TBD
>
> Issue https://tracker.ceph.com/issues/63192 appears to be failing several
> runs.
> Should it be
The source is prometheus. Below are the queries that we use to
populate the charts:
USEDCAPACITY = 'ceph_cluster_total_used_bytes',
WRITEIOPS = 'sum(rate(ceph_pool_wr[1m]))',
READIOPS = 'sum(rate(ceph_pool_rd[1m]))',
READLATENCY = 'avg_over_time(ceph_osd_apply_latency_ms[1m])',
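These queries go through the standard Prometheus HTTP API. A small sketch of how one might build the request URLs for them (the helper name and base URL are illustrative, not dashboard code):

```python
from urllib.parse import urlencode

# The queries quoted above, keyed by the dashboard's metric names.
QUERIES = {
    "USEDCAPACITY": "ceph_cluster_total_used_bytes",
    "WRITEIOPS": "sum(rate(ceph_pool_wr[1m]))",
    "READIOPS": "sum(rate(ceph_pool_rd[1m]))",
    "READLATENCY": "avg_over_time(ceph_osd_apply_latency_ms[1m])",
}

def query_url(base, name):
    """Build an instant-query URL against Prometheus's /api/v1/query endpoint."""
    return f"{base}/api/v1/query?{urlencode({'query': QUERIES[name]})}"

url = query_url("http://prometheus:9090", "WRITEIOPS")
```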
Hey Marc,
We have some screenshots in a blog post we did a while back:
https://ceph.io/en/news/blog/2023/landing-page/
and also in the documentation:
https://docs.ceph.com/en/latest/mgr/dashboard/#overview-of-the-dashboard-landing-page
Regards,
On Wed, Sep 13, 2023 at 5:59 PM Marc wrote:
>
Thank you Nicola,
We are collecting this feedback. For a while we weren't focusing on the
mobile view of the dashboard. If there are users relying on it, we'll look
into that as well. We will let everyone know soon about the improvements
in the UI.
Regards,
Nizam
On Mon, Sep 11, 2023 at 2:23 PM
Hey guys,
Thanks for the feedback. The new landing page is still being improved. While
we are doing that, we haven't removed the old page completely.
If you want you can switch to the old Dashboard by switching the feature
toggle in the dashboard. `ceph dashboard feature disable dashboard`
will bring
://pad.ceph.com/p/user_dev_relaunch
First topic will come from David's team
16.2.14 release
Pushing to release by this week.
Regards,
Nizam
--
Nizamudeen A
Dashboard approved!
@Laura Flores https://tracker.ceph.com/issues/62559,
this could be a dashboard issue. We'll be removing those tests from the
orch suite, because we are already checking them in the Jenkins pipeline. The
current one in the teuthology suite is a bit flaky and not reliable.
dashboard approved! failure is unrelated and tracked via
https://tracker.ceph.com/issues/58946
Regards,
Nizam
On Sun, Jul 30, 2023 at 9:16 PM Yuri Weinstein wrote:
> Details of this release are summarized here:
>
> https://tracker.ceph.com/issues/62231#note-1
>
> Seeking approvals/reviews for:
Hi,
You can upgrade the grafana version individually by setting the config_opt
for grafana container image like:
ceph config set mgr mgr/cephadm/container_image_grafana
quay.io/ceph/ceph-grafana:8.3.5
and then redeploy the grafana container again either via dashboard or
cephadm.
Regards,
Nizam
Hi Ben,
It looks like you forgot to attach the screenshots.
Regards,
Nizam
On Wed, Jun 21, 2023, 12:23 Ben wrote:
> Hi,
>
> I got many critical alerts in ceph dashboard. Meanwhile the cluster shows
> health ok status.
>
> See attached screenshot for detail. My questions are, are they real
Hey all,
Ceph Quarterly announcement [Josh and Zac]
One-page digest that may be published quarterly
Planning for 1st of June, September and December
Reef RC
https://pad.ceph.com/p/reef_scale_testing
https://pad.ceph.com/p/ceph-user-dev-monthly-minutes#L17
ETA last week of May
dashboard approved!
Regards,
Nizam
On Tue, May 2, 2023, 20:48 Yuri Weinstein wrote:
> Please review the Release Notes - https://github.com/ceph/ceph/pull/51301
>
> Still seeking approvals for:
>
> rados - Neha, Radek, Laura
> rook - Sébastien Han
> dashboard - Ernesto
>
> fs - Venky,
Dashboard LGTM!
On Sat, Mar 25, 2023 at 1:16 AM Yuri Weinstein wrote:
> Details of this release are updated here:
>
> https://tracker.ceph.com/issues/59070#note-1
> Release Notes - TBD
>
> The slowness we experienced seemed to be self-cured.
> Neha, Radek, and Laura please provide any findings
Maybe an etherpad and pinning that to #sepia channel.
On Wed, Feb 15, 2023, 23:32 Laura Flores wrote:
> I would be interested in helping catalogue errors and fixes we experience
> in the lab. Do we have a preferred platform for this cheatsheet?
>
> On Wed, Feb 15, 2023 at 11:54 A
the component leads page:
https://ceph.io/en/community/team/
- Vikhyath volunteered before, so Josh will check with him.
Regards,
--
Nizamudeen A
Hi,
I am not sure about cephadm, but if you use the ceph-dashboard, in
its host creation form you can enter a pattern like ceph[01-19], which
should add ceph01...ceph19.
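To illustrate what such a bracket pattern expands to (a hypothetical sketch of the semantics; the dashboard's actual parser may handle more cases and differ in details):

```python
import re

def expand_host_pattern(pattern):
    """Expand a bracket range like 'ceph[01-19]' into individual host names.

    Illustrative only: supports a single [start-end] numeric range and
    preserves zero-padding taken from the start of the range.
    """
    m = re.fullmatch(r"(.*)\[(\d+)-(\d+)\](.*)", pattern)
    if not m:
        return [pattern]  # no range: the pattern is a literal host name
    prefix, start, end, suffix = m.groups()
    width = len(start)  # '01' -> width 2, so names come out as ceph01, ceph02, ...
    return [f"{prefix}{i:0{width}d}{suffix}"
            for i in range(int(start), int(end) + 1)]

hosts = expand_host_pattern("ceph[01-19]")  # ceph01 ... ceph19
```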
Regards,
Nizam
On Fri, Jan 27, 2023, 23:52 E Taka <0eta...@gmail.com> wrote:
> Thanks, Ulrich, but:
>
> # ceph orch host ls
Dashboard lgtm!
Regards,
Nizam
On Fri, Jan 20, 2023, 22:09 Yuri Weinstein wrote:
> The overall progress on this release is looking much better and if we
> can approve it we can plan to publish it early next week.
>
> Still seeking approvals
>
> rados - Neha, Laura
> rook - Sébastien Han
>
dashboard approved.
Regards,
Nizam
On Thu, Dec 15, 2022 at 10:45 PM Yuri Weinstein wrote:
> Details of this release are summarized here:
>
> https://tracker.ceph.com/issues/58257#note-1
> Release Notes - TBD
>
> Seeking approvals for:
>
> rados - Neha (https://github.com/ceph/ceph/pull/49431
Hi,
Did you log in to the Grafana dashboard? For centralized logging you'll need
to log in to Grafana using your Grafana username and password. If you do that
and refresh the dashboard, I think the Loki page should be visible from the
Daemon Logs page.
Regards,
Nizam
On Wed, Nov 16, 2022 at
Great, thanks Ilya.
Regards,
On Thu, Oct 27, 2022 at 2:00 PM Ilya Dryomov wrote:
> On Thu, Oct 27, 2022 at 9:05 AM Nizamudeen A wrote:
> >
> > >
> > > lab issues blocking centos container builds and teuthology testing:
> > > * https://tracker.ceph.com
>
> lab issues blocking centos container builds and teuthology testing:
> * https://tracker.ceph.com/issues/57914
> * delays testing for 16.2.11
quay.ceph.io has been down for some days now. Not sure who is actively
maintaining the quay repos now.
At least in the ceph-dashboard, we have a
Dashboard LGTM!
On Wed, 14 Sept 2022, 01:33 Yuri Weinstein, wrote:
> Details of this release are summarized here:
>
> https://tracker.ceph.com/issues/57472#note-1
> Release Notes - https://github.com/ceph/ceph/pull/48072
>
> Seeking approvals for:
>
> rados - Neha, Travis, Ernesto, Adam
> rgw -
Hey Pardhiv,
What happens when you try a different browser than the one you are
using now? Also, can you please try logging in again after clearing the
browser cache?
Regards,
Nizamudeen
On Thu, Jan 20, 2022 at 2:38 AM Pardhiv Karri wrote:
> Hi,
>
> I installed Ceph Pacific on