On Tue, Dec 20, 2022 at 11:41 AM Casey Bodley wrote:
> thanks Yuri, rgw approved based on today's results from
>
> https://pulpito.ceph.com/yuriw-2022-12-20_15:27:49-rgw-pacific_16.2.11_RC2-distro-default-smithi/
>
> On Mon, Dec 19, 2022 at 12:08 PM Yuri Weinstein
> wrote:
>
> > If you look at t
rados runs for pacific_16.2.11_RC3 look good! Unrelated failures summarized
in https://tracker.ceph.com/issues/58257#note-1.
Thanks,
Neha
On Tue, Dec 20, 2022 at 11:43 AM Neha Ojha wrote:
>
>
> On Tue, Dec 20, 2022 at 11:41 AM Casey Bodley wrote:
>
>> thanks Yuri, rgw
still active.
> --
> *From:* Wyll Ingersoll
> *Sent:* Tuesday, January 10, 2023 1:20 PM
> *To:* Neha Ojha ; Adam Kraitman ;
> Dan Mick
> *Subject:* Re: What's happening with ceph-users?
>
> I ended up re-subscribing this morning. But it mig
Hi everyone,
This month's Ceph User + Dev Monthly meetup is on January 19, 15:00-16:00
UTC. There are some topics in the agenda regarding RGW backports; please
feel free to add other topics to
https://pad.ceph.com/p/ceph-user-dev-monthly-minutes.
Hope to see you there!
Thanks,
Neha
On Fri, Jan 20, 2023 at 12:36 PM Laura Flores wrote:
> From my end, rados looks good. All failures are known. Leaving final
> approval to Neha.
>
> On Fri, Jan 20, 2023 at 12:03 PM Ernesto Puerta
> wrote:
>
>> CCing Nizam as Dashboard lead for review & approval.
>>
>> Kind Regards,
>> Ernesto
>>
upgrades approved!
Thanks,
Neha
On Tue, Mar 28, 2023 at 12:09 PM Radoslaw Zarzynski
wrote:
> rados: approved!
>
> On Mon, Mar 27, 2023 at 7:02 PM Laura Flores wrote:
>
>> Rados review, second round:
>>
>> Failures:
>> 1. https://tracker.ceph.com/issues/58560
>> 2. https://tracker.ceph.
Hi everyone,
This is the first release candidate for Reef.
The Reef release comes with a new RocksDB version (7.9.2) [0], which
incorporates several performance improvements and features. Our internal
testing doesn't show any side effects from the new version, but we are very
eager to hear communi
Hi everyone,
This month's Ceph User + Dev Monthly meetup is on June 15, 14:00-15:00 UTC.
We'd love to share details about the first Reef release candidate. Please
feel free to add more topics to
https://pad.ceph.com/p/ceph-user-dev-monthly-minutes.
Hope to see you there!
Thanks,
Neha
Hi everyone,
Join us for the Ceph Developer Summit happening virtually from Jul 17 – 24,
2023. We'll be discussing features planned for our next release, Squid,
over the course of this summit. The schedule has been published on our
website[0] and meetings have been added to the community calenda
Hi everyone,
You are invited to join us at the User + Dev meeting this week Thursday,
February 22 at 10:00 AM Eastern Time!
Focus Topic: CephFS Snapshots Evaluation
Presented by: Enrico Bocchi and Abhishek Lekshmanan, Ceph operators from
CERN
From the presenters:
Ceph at CERN provides block, o
Hi everyone,
On behalf of the Ceph Foundation Board, I would like to announce the
creation of, and cordially invite you to, the first of a recurring series
of meetings focused solely on gathering feedback from the users of
Ceph. The overarching goal of these meetings is to elicit feedback from the
rd to your
participation in order to make this discussion more meaningful.
Thanks,
Neha
On Tue, Mar 12, 2024 at 9:00 AM Neha Ojha wrote:
> Hi everyone,
>
> On behalf of the Ceph Foundation Board, I would like to announce the
> creation of, and cordially invite you to, the first of a
On Tue, Jul 9, 2024 at 11:17 AM Yuri Weinstein wrote:
> Neha and Josh, pls do a final review and approval.
>
> Pls confirm that the Gibba/LRC upgrade is out of scope for this
>
Gibba was upgraded successfully and the LRC upgrade is out of scope for this RC.
>
> We will still need to promote files
On Wed, Jul 10, 2024 at 6:58 AM Yuri Weinstein wrote:
> We built a new branch with all the cherry-picks on top
> (https://pad.ceph.com/p/release-cherry-pick-coordination).
>
> I am rerunning fs:upgrade:
>
> https://pulpito.ceph.com/yuriw-2024-07-10_13:47:23-fs:upgrade-reef-release-distro-default-
We saw this warning once in testing
(https://tracker.ceph.com/issues/49900#note-1), but there, the problem
was different, which also led to a crash. That issue has been fixed
but if you can provide osd logs with verbose logging, we might be able
to investigate further.
Neha
On Wed, Apr 14, 2021 a
Thanks for reporting this issue. It has been fixed by
https://github.com/ceph/ceph/pull/40845 and will be released in the
next pacific point release.
Neha
On Mon, Apr 19, 2021 at 8:19 AM Behzad Khoshbakhti
wrote:
>
> thanks by commenting the ProtectClock directive, the issue is resolved.
> Thank
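For anyone hitting the same systemd sandboxing issue, a minimal sketch of
that workaround as a drop-in override (this assumes your ceph-osd unit sets
ProtectClock=true; an override keeps the fix upgrade-safe instead of
commenting the directive out of the packaged unit):

  # create a drop-in override for all OSD units
  systemctl edit ceph-osd@.service
  # in the editor, add these two lines, then save:
  #   [Service]
  #   ProtectClock=false
  systemctl daemon-reload
  systemctl restart ceph-osd@<id>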
On Fri, May 14, 2021 at 10:47 AM Andrius Jurkus
wrote:
>
> Hello, I will try to keep it sad and short :) :( PS sorry if this is a
> duplicate, I tried to post it from the web also.
>
> Today I upgraded from 16.2.3 to 16.2.4 and added a few hosts and OSDs.
> After data migration for a few hours, 1 SSD failed, th
allocator = bitmap
>
> by setting this parameter all failed OSD started.
>
> Thanks again!
>
> On 2021-05-14 21:09, Neha Ojha wrote:
> > On Fri, May 14, 2021 at 10:47 AM Andrius Jurkus
> > wrote:
> >>
> >> Hello, I will try to keep it sad and short :) :
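For reference, a sketch of how that allocator change can be applied through
the config subsystem instead of editing ceph.conf (option name per the
Pacific docs; the OSD id is a placeholder):

  # switch BlueStore to the bitmap allocator for all OSDs
  ceph config set osd bluestore_allocator bitmap
  # restart each affected OSD so the new allocator is used
  systemctl restart ceph-osd@<id>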
Hello everyone,
Given that BlueStore has been the default and more widely used
objectstore for quite some time, we would like to understand whether
we can consider deprecating FileStore in our next release, Quincy, and
remove it in the R release. There is also a proposal [0] to add a
health warni
On Wed, Jun 2, 2021 at 12:31 PM Willem Jan Withagen wrote:
>
> On 1-6-2021 21:24, Neha Ojha wrote:
> > Hello everyone,
> >
> > Given that BlueStore has been the default and more widely used
> > objectstore for quite some time, we would like to understand whether
re!
- Neha
>
> Thanks,
> Ansgar
>
> Neha Ojha wrote on Tue., June 1, 2021, 21:24:
>>
>> Hello everyone,
>>
>> Given that BlueStore has been the default and more widely used
>> objectstore for quite some time, we would like to understand whether
>
On Mon, Jun 7, 2021 at 5:24 PM Jeremy Hansen wrote:
>
>
> I’m seeing this in my health status:
>
> progress:
> Global Recovery Event (13h)
> [] (remaining: 5w)
>
> I’m not sure how this was initiated but this is a cluster with almost zero
> objects. Is the
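As a hedged workaround for a stale event like this, the mgr progress module
can be toggled, which has been reported to drop its tracked events (newer
releases also have "ceph progress clear"; availability varies by version):

  # turn the mgr progress module's reporting off and back on
  ceph progress off
  ceph progress on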
This seems to be a real issue; I created
https://tracker.ceph.com/issues/51676 to track it.
Thanks,
Neha
On Thu, Jul 8, 2021 at 8:21 AM Robert Sander
wrote:
>
> Hi,
>
> I am trying to apply the resharding to a containerized OSD (16.2.4) as
> described here:
>
> https://docs.ceph.com/en/latest/rad
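For the archives, a sketch of the resharding flow on a cephadm-managed
cluster (osd.2 is a placeholder, the sharding spec is the documented
default, and the OSD must be stopped first):

  ceph orch daemon stop osd.2
  # open a shell with the OSD's data mounted inside the container
  cephadm shell --name osd.2
  # inside that shell:
  ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-2 \
      --sharding "m(3) p(3,0-12) O(3,0-13)=block_cache={type=binned_lru} L P" reshard
  # exit the shell, then restart the OSD
  ceph orch daemon start osd.2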
Hi everyone,
I'd like to share a few updates on completed/ongoing RADOS and Crimson
projects with the community.
Significant PRs merged
- Remove allocation metadata from RocksDB - should significantly
improve small write performance
- PG Autoscaler scale-down profile - default in new clusters fo
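For anyone who wants to see what the autoscaler would do on their own
cluster, a quick check (assuming the pg_autoscaler mgr module is enabled,
which is the default):

  # per-pool PG targets, profiles, and autoscale decisions
  ceph osd pool autoscale-status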
Can we please create a bluestore tracker issue for this
(if one does not exist already), where we can start capturing all the
relevant information needed to debug this? Given that this has been
encountered in previous 16.2.* versions, it doesn't sound like a
regression in 16.2.6 to me, rather an is
On Thu, Aug 19, 2021 at 9:29 AM Jeremy Austin wrote:
>
> I cannot speak in any official capacity, but my limited experience
> (20-30TB) EC CLAY has been functioning without an error for about 2 years.
> No issues in Pacific myself yet (fingers crossed).
This is good to know! I don't recall too ma
Jonas, would you be interested in joining one of our performance
meetings and presenting some of your work there? Seems like we can
have a good discussion about further improvements to the balancer.
Thanks,
Neha
On Mon, Oct 25, 2021 at 11:39 AM Josh Salomon wrote:
>
> Hi Jonas,
>
> I have some c
Hi everyone,
We are kicking off a new monthly meeting for Ceph users to directly
interact with Ceph Developers. The high-level aim of this meeting is
to provide users with a forum to:
- share their experience running Ceph clusters
- provide feedback on Ceph versions they are using
- ask questions
Hi Luis,
On Mon, Nov 15, 2021 at 4:57 AM Luis Domingues wrote:
>
> Hi,
>
> We are currently testing the mclock scheduler in a Ceph Pacific
> cluster. We did not test it heavily, but at first glance it looks good on our
> installation. Probably better than wpq. But we still have a few qu
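For context, a minimal sketch of how the scheduler and its profiles can be
inspected and tuned (option names as documented for Pacific; verify on your
version):

  # check which op queue scheduler the OSDs use (wpq or mclock_scheduler)
  ceph config get osd osd_op_queue
  # pick a built-in mclock profile, e.g. favor client I/O over recovery
  ceph config set osd osd_mclock_profile high_client_ops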
Hi everyone,
This event is happening on November 18, 2021, 15:00-16:00 UTC - this
is an hour later than what I had sent in my earlier email (I hadn't
accounted for daylight savings change, sorry!), the calendar invite
reflects the same.
Thanks,
Neha
On Thu, Oct 28, 2021 at 11:53 AM Neha
a Services Co., Ltd.
> e: istvan.sz...@agoda.com
> -------
>
> On 2021. Nov 15., at 18:35, Neha Ojha wrote:
>
> Email received from the internet. If in doubt, don't click any link nor open
> any attachment !
>
Hi Luis,
On Wed, Dec 1, 2021 at 8:19 AM Luis Domingues wrote:
>
> We upgraded a test cluster (3 controllers + 6 osds nodes with HDD and SSDs
> for rocksdb) from last Nautilus to this 16.2.7 RC1.
>
> Upgrade went well without issues. We repaired the OSDs and no one crashed.
That's good to know!
On Mon, Nov 29, 2021 at 9:23 AM Yuri Weinstein wrote:
>
> Details of this release are summarized here:
>
> https://tracker.ceph.com/issues/53324
> Release Notes - https://github.com/ceph/ceph/pull/44131
>
> Seeking approvals for:
>
> rados - Neha
Approved, known issues:
- rados/perf failures, sho
Hi everyone,
This month's Ceph User + Dev Monthly meetup is on December 16, 2021,
15:00-16:00 UTC. Please add topics you'd like to discuss in the agenda
here: https://pad.ceph.com/p/ceph-user-dev-monthly-minutes.
Hope to see you there!
Thanks,
Neha
Hi everyone,
This month's Ceph User + Dev Monthly meetup is next Thursday, January
20, 2022, 15:00-16:00 UTC. This time we would like to hear what users
have to say about four themes of Ceph: Quality, Usability, Performance
and Ecosystem. Any kind of feedback is welcome! Please feel free to
add mo
Hi everyone,
This month's Ceph User + Dev Monthly meetup is on February 17,
15:00-16:00 UTC. Please add topics you'd like to discuss in the agenda
here: https://pad.ceph.com/p/ceph-user-dev-monthly-minutes. We are
hoping to get more feedback from users on the four major themes of
Ceph and ask them
Hi everyone,
We'd like to understand how many users are using cache tiering and in
which release.
The cache tiering code is not actively maintained, and there are known
performance issues with using it (documented in
https://docs.ceph.com/en/latest/rados/operations/cache-tiering/#a-word-of-caution
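If you're unsure whether a cluster uses cache tiering at all, one way to
check (output format varies somewhat by release):

  # pools participating in tiering show tier/cache_mode fields here
  ceph osd pool ls detail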
Starting now!
On Thu, Feb 10, 2022 at 3:00 PM Neha Ojha wrote:
>
> Hi everyone,
>
> This month's Ceph User + Dev Monthly meetup is on February 17,
> 15:00-16:00 UTC. Please add topics you'd like to discuss in the agenda
> here: https://pad.ceph.com/p/ceph-user
Hi everyone,
This month's Ceph User + Dev Monthly meetup is on March 17,
14:00-15:00 UTC (note the time change!). Please add topics you'd like
to discuss in the agenda:
https://pad.ceph.com/p/ceph-user-dev-monthly-minutes.
Hope to see you there!
Thanks,
Neha
Hi Luis,
Thanks for testing the Quincy rc and trying out the mClock settings!
Sridhar is looking into this issue and will provide his feedback as
soon as possible.
Thanks,
Neha
On Thu, Mar 3, 2022 at 5:05 AM Luis Domingues wrote:
>
> Hi all,
>
> As we are doing some tests on our lab cluster, ru
Starting now!
On Fri, Mar 18, 2022 at 6:02 AM Mike Perez wrote:
> Hi everyone
>
> On March 24 at 17:00 UTC, hear Kamoltat (Junior) Sirivadhna give a
> Ceph Tech Talk on how Teuthology, Ceph's integration test framework,
> works!
>
> https://ceph.io/en/community/tech-talks/
>
> Also, if you would
On Mon, Mar 28, 2022 at 2:48 PM Yuri Weinstein wrote:
>
> We are trying to release v17.2.0 as soon as possible
> and need to do a quick approval of tests and review of failures.
>
> Still outstanding are two PRs:
> https://github.com/ceph/ceph/pull/45673
> https://github.com/ceph/ceph/pull/45604
>
>
Recording of this talk is now available
https://www.youtube.com/watch?v=wZHcg0oVzhY.
Thanks,
Neha
On Thu, Mar 24, 2022 at 10:01 AM Neha Ojha wrote:
>
> Starting now!
>
> On Fri, Mar 18, 2022 at 6:02 AM Mike Perez wrote:
>>
>> Hi everyone
>>
>> On March 24
For the moment, Dan's workaround sounds good to me, but I'd like to
understand how we got here, in terms of the decisions that were made
by the autoscaler.
We have a config option called "target_max_misplaced_ratio" (default
value is 0.05), which is supposed to limit the number of misplaced
objects
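As a sketch, the option can be inspected and adjusted through the config
subsystem (it is consumed by mgr modules such as the balancer and the
pg_autoscaler):

  ceph config get mgr target_max_misplaced_ratio
  # allow at most 5% of objects to be misplaced at a time (the default)
  ceph config set mgr target_max_misplaced_ratio 0.05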
On Thu, Apr 14, 2022 at 12:48 PM Yuri Weinstein wrote:
>
> I am assuming approvals from Neha and Venky.
rados looks good!
> Still not sure if Sébastien Han approved rook.
>
> We are waiting for the last PR to be
> merged: https://github.com/ceph/ceph/pull/45885
I just merged it.
>
> Then all (a
release until Monday. The release notes PR is
still being worked on.
Thanks,
Neha
>
> I can make either situation work. Just let me know.
>
> On 4/14/22 15:59, Neha Ojha wrote:
> > On Thu, Apr 14, 2022 at 12:48 PM Yuri Weinstein wrote:
> >>
> >> I am assuming
On Mon, Apr 18, 2022 at 12:34 PM Ilya Dryomov wrote:
>
> On Mon, Apr 18, 2022 at 9:04 PM David Galloway wrote:
> >
> > The LRC is upgraded but the same mgr did crash during the upgrade. It is
> > running now despite the crash. Adam suspects it's due to earlier breakage.
> >
> > https://pastebi
Hi everyone,
This month's Ceph User + Dev Monthly Meetup has been canceled due to
the ongoing Ceph Developer Summit. However, we'd like to know if there
is any interest in an APAC friendly meeting. If so, we could alternate
between APAC and EMEA friendly meetings, like we do for CDM.
Thanks,
Neha
Can you check what "ceph versions" reports?
On Fri, Apr 29, 2022 at 9:15 AM Dominique Ramaekers
wrote:
>
> Hi,
>
> I never got a reply on my question. I can't seem to find how I upgrade the
> cephadm shell docker container.
>
> Any ideas?
>
> Greetings,
>
> Dominique.
>
>
> > -Original
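For reference, a hedged sketch covering both halves of this thread:
checking the running daemon versions, and pointing the cephadm shell at an
explicit container image (the image tag below is illustrative only):

  # JSON summary of which version each daemon type is running
  ceph versions
  # run the shell from a specific image instead of the cached one
  cephadm shell --image quay.io/ceph/ceph:v16.2.7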
Hi Yuri,
rados and upgrade/pacific-p2p look good to go.
On Tue, May 10, 2022 at 5:46 AM Benoît Knecht wrote:
>
> On Mon, May 09, 2022 at 07:32:59PM +1000, Brad Hubbard wrote:
> > It's the current HEAD of the pacific branch or, alternatively,
> > https://github.com/ceph/ceph-ci/tree/pacific-16.2.
Hi everyone,
This month's Ceph User + Dev Monthly meetup is on May 19, 14:00-15:00
UTC. Please add topics to the agenda:
https://pad.ceph.com/p/ceph-user-dev-monthly-minutes. We are hoping to
receive feedback on the Quincy release and hear more about your
general ops experience regarding upgrades,
Hi Cory,
Thanks for identifying the bug and creating a PR to fix it. We'll do a
retrospective on this issue to catch and avoid such regressions in the
future. At the moment, we will go ahead with a minimal 16.2.9 release
for this issue.
Thanks,
Neha
On Tue, May 17, 2022 at 5:03 AM Cory Snyder
Great, thanks for all the hard work, David and team!
- Neha
On Wed, May 25, 2022 at 12:47 PM David Galloway wrote:
>
> I was successfully able to get a 'main' build completed.
>
> This means you should be able to push your branches to ceph-ci.git and
> get a build now.
>
> Thank you for your pat
Hi everyone,
This month's Ceph User + Dev Monthly meetup is on June 16, 14:00-15:00 UTC.
Please add topics to the agenda:
https://pad.ceph.com/p/ceph-user-dev-monthly-minutes.
Hope to see you there!
Thanks,
Neha
On Wed, Jun 15, 2022 at 7:23 AM Venky Shankar wrote:
>
> On Tue, Jun 14, 2022 at 10:51 PM Yuri Weinstein wrote:
> >
> > Details of this release are summarized here:
> >
> > https://tracker.ceph.com/issues/55974
> > Release Notes - https://github.com/ceph/ceph/pull/46576
> >
> > Seeking approvals
ker.ceph.com/issues/55974#note-1
>
> On Wed, Jun 15, 2022 at 6:59 PM Neha Ojha wrote:
>>
>> On Wed, Jun 15, 2022 at 7:23 AM Venky Shankar wrote:
>> >
>> > On Tue, Jun 14, 2022 at 10:51 PM Yuri Weinstein
>> > wrote:
>> > >
On Wed, Jun 22, 2022 at 11:44 AM Laura Flores wrote:
>
> Here is the summary of RADOS failures. Everything looks good and normal to
> me! I will leave it to Neha to give final approval though.
Thanks Laura. These runs look good. We encountered
https://tracker.ceph.com/issues/56101 while upgrading
This issue should be addressed by https://github.com/ceph/ceph/pull/46860.
Thanks,
Neha
On Fri, Jun 24, 2022 at 2:53 AM Kenneth Waegeman
wrote:
>
> Hi,
>
> I’ve updated the cluster to 17.2.0, but the log is still filled with these
> entries:
>
> 2022-06-24T11:45:12.408944+02:00 osd031 ceph-osd[
Hi everyone,
This month's Ceph User + Dev Monthly meetup is on July 21, 14:00-15:00
UTC. Please add topics to the agenda:
https://pad.ceph.com/p/ceph-user-dev-monthly-minutes.
Hope to see you there!
Thanks,
Neha
On Thu, Jul 21, 2022 at 8:47 AM Ilya Dryomov wrote:
>
> On Thu, Jul 21, 2022 at 4:24 PM Yuri Weinstein wrote:
> >
> > Details of this release are summarized here:
> >
> > https://tracker.ceph.com/issues/56484
> > Release Notes - https://github.com/ceph/ceph/pull/47198
> >
> > Seeking approvals fo
h/)? It's a really
> nasty problem and I'm waiting for this to show up in Octopus.
>
> Thanks and best regards,
> =
> Frank Schilder
> AIT Risø Campus
> Bygning 109, rum S14
>
> ____
> From: Neha Ojha
> Sent:
On Mon, Jul 25, 2022 at 3:48 PM Neha Ojha wrote:
>
> Hello Frank,
>
> 15.2.17 includes
> https://github.com/ceph/ceph/pull/46611/commits/263e0fa6b3e6e1d6e7b382923a1d586d9d1ffa1b,
> which adds the capability to ceph-objectstore-tool to trim the dup ops
> that led to m
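A hedged sketch of how that tool op is invoked (the OSD must be stopped;
the path and pgid are placeholders, and the exact flags should be verified
against ceph-objectstore-tool --help on 15.2.17):

  # trim accumulated dup ops from one PG's log on a stopped OSD
  ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 \
      --op trim-pg-log-dups --pgid <pgid>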
Hi Daniel,
This issue seems to be showing up in 17.2.2, details in
https://tracker.ceph.com/issues/55304. We are currently in the process
of validating the fix https://github.com/ceph/ceph/pull/47270 and
we'll try to expedite a quick fix.
In the meantime, we have builds/images of the dev version
Hi everyone,
This month's Ceph User + Dev Monthly meetup is on August 18,
14:00-15:00 UTC. We are planning to get some user feedback on
BlueStore compression modes. Please add other topics to the agenda:
https://pad.ceph.com/p/ceph-user-dev-monthly-minutes.
Hope to see you there!
Thanks,
Neha
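As background for that discussion, a sketch of the per-pool compression
knobs (the pool name is a placeholder; valid modes are none, passive,
aggressive and force):

  # compress writes unless the client hints the data is incompressible
  ceph osd pool set <pool> compression_mode aggressive
  ceph osd pool set <pool> compression_algorithm snappy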
Hi everyone,
Here are the topics discussed in today's meeting.
- David Galloway's last CLT meeting, mixed emotions but we wish David
all the best for all his future endeavors
- Tracker upgrade postponed for now
- OVH payment logistics handover to Patrick
- Discussion on cephadm documentation
- cep
Hi Satoru,
Apologies for the delay in responding to your questions.
In the case of https://github.com/ceph/ceph/pull/45963, we caught the
bug in an upgrade test (as described in
https://tracker.ceph.com/issues/55444) and not in the rados test
suite. Our upgrade test suites are meant to be run bef
Hi everyone,
This month's Ceph User + Dev Monthly meetup is on September 15,
14:00-15:00 UTC. We would like to get some feedback on a POC that we
are working on to measure availability in Ceph. Please add other
topics to the agenda:
https://pad.ceph.com/p/ceph-user-dev-monthly-minutes.
Hope to se
Hi Yuri,
On Wed, Sep 14, 2022 at 8:02 AM Adam King wrote:
>
> orch suite failures fall under
> https://tracker.ceph.com/issues/49287
> https://tracker.ceph.com/issues/57290
> https://tracker.ceph.com/issues/57268
> https://tracker.ceph.com/issues/52321
>
> For rados/cephadm the failures are both
The new rados runs look good, rados approved!
Thanks,
Neha
On Fri, Sep 16, 2022 at 8:55 AM Nizamudeen A wrote:
>
> Dashboard LGTM!
>
> On Wed, 14 Sept 2022, 01:33 Yuri Weinstein, wrote:
>
> > Details of this release are summarized here:
> >
> > https://tracker.ceph.com/issues/57472#note-1
> > R
On Thu, Sep 22, 2022 at 12:55 PM Yuri Weinstein wrote:
>
> We are publishing a release candidate this time for users to try
> for testing only.
>
> Please note this RC had only limited testing. Full testing is being done now.
It might be worth sharing that the Gibba cluster has been upgraded to
On Mon, Sep 19, 2022 at 9:38 AM Yuri Weinstein wrote:
> Update:
>
> Remaining =>
> upgrade/octopus-x - Neha pls review/approve
>
Both the failures in
http://pulpito.front.sepia.ceph.com/yuriw-2022-09-16_16:33:35-upgrade:octopus-x-quincy-release-distro-default-smithi/
seem related to RGW. Casey,
Hi everyone,
Here are the topics discussed in today's meeting.
- What changes with the announcement about IBM [1]? Nothing changes
for the upstream Ceph community. There will be more focus on
performance and scale testing.
- 17.2.4 was released last week, no major issues reported yet. This
releas
On Tue, Jun 30, 2020 at 6:04 PM Dan Mick wrote:
>
> True. That said, the blog post points to
> http://download.ceph.com/tarballs/ where all the tarballs, including
> 15.2.4, live.
>
> On 6/30/2020 5:57 PM, Sasha Litvak wrote:
> > David,
> >
> > Download link points to 14.2.10 tarball.
> >
> > O
Hi everyone,
We are in the process of migrating from docs.ceph.com to
ceph.readthedocs.io. We enabled it in
https://github.com/ceph/ceph/pull/34499 and will now be using it by
default.
Why?
- The search feature in ceph.readthedocs.io is much better than
docs.ceph.com and allows you to search mul
On Wed, Sep 16, 2020 at 10:51 AM Sasha Litvak
wrote:
>
> I wonder if this new system allows me to choose Ceph versions. I see the
> v:latest in the right bottom corner but it seems to be the only choice so far.
Not yet, but that's where we plan to incorporate other versions.
>
> On Wed, Sep 16
On Wed, Sep 16, 2020 at 11:08 AM Marc Roos wrote:
>
>
>
> - In the future you will not be able to read the docs if you have an
> adblocker(?)
Not aware of anything of this sort.
>
>
>
> -Original Message-
> To: dev; ceph-users
> Cc: Kefu Chai
> Subject: [ceph-users] Migration to ceph.read
We'd like to verify if the network ping time monitoring feature in
14.2.5 is contributing to this problem.
It'd be great if someone could try
https://tracker.ceph.com/issues/43364#note-3 and let us know.
Thanks,
Neha
On Thu, Dec 19, 2019 at 8:48 AM Mark Nelson wrote:
>
> If you can get a wallclock
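For anyone wanting to see what that feature observes, a hedged sketch
(admin socket command added in 14.2.5; the trailing threshold in ms is
optional, and 0 shows all recorded pings):

  # dump heartbeat ping times recorded by this OSD
  ceph daemon osd.0 dump_osd_network 0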
Not yet, but we have a theory and a test build in
https://tracker.ceph.com/issues/43364#note-6, if anybody would like to
give it a try.
Thanks,
Neha
On Fri, Dec 20, 2019 at 2:31 PM Sasha Litvak
wrote:
>
> Was the root cause found and fixed? If so, will the fix be available in
> 14.2.6 or soone
. We'll get this fix out
in 14.2.6 after the holidays.
On Fri, Dec 20, 2019 at 6:24 PM Neha Ojha wrote:
>
> Not yet, but we have a theory and a test build in
> https://tracker.ceph.com/issues/43364#note-6, if anybody would like to
> give it a try.
>
> Thanks,
> Neha
>
Hi Joe,
Can you grab a wallclock profiler dump from the mgr process and share
it with us? This was useful for us to get to the root cause of the
issue in 14.2.5.
Quoting Mark's suggestion from "[ceph-users] High CPU usage by
ceph-mgr in 14.2.5" below.
If you can get a wallclock profiler on the m
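For completeness, a sketch of collecting such a dump with the gdbpmp
wallclock profiler Mark refers to (https://github.com/markhpc/gdbpmp; flags
recalled from its README, so double-check them, and the pid lookup is
illustrative):

  # sample the running ceph-mgr 1000 times and write a profile dump
  ./gdbpmp.py -p $(pidof ceph-mgr) -n 1000 -o mgr.gdbpmp
  # load and print the collected call tree
  ./gdbpmp.py -i mgr.gdbpmp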