@Kotresh Hiremath Ravishankar
Can you please help with the above?
On Fri, 17 May, 2024, 12:26 pm Akash Warkhade,
wrote:
> Hi Kotresh,
>
>
> Thanks for the reply.
> 1) There are no custom configs defined
> 2) Subtree pinning is not enabled
> 3) There were no warnings related to RADOS
>
> So wanted to k
Hi,
is there a ballpark timeline for a Squid release candidate / release?
I'm aware of this pad that tracks blockers; is that still accurate, or
should I be looking at another resource?
https://pad.ceph.com/p/squid-upgrade-failures
Thanks!
peter.
___
Hi,
We are running 3 clusters in multisite. All 3 were running Quincy 17.2.6 and
using cephadm. We upgraded one of the secondary sites to Reef 18.2.1 a couple
of weeks ago and were planning on doing the rest shortly afterwards.
We run 3 RGW daemons on separate physical hosts behind an external
Hi,
On 5/17/24 08:51, Kotresh Hiremath Ravishankar wrote:
Yes, it's already merged to the reef branch, and should be available in the
next reef release.
Please look at https://tracker.ceph.com/issues/62952
This is great news! Many thanks to all involved.
F.
___
Hi Kotresh,
Thanks for the reply.
1) There are no custom configs defined
2) Subtree pinning is not enabled
3) There were no warnings related to RADOS
So we wanted to know: in order to fix this, should we increase the default
mds_cache_memory_limit from 4 GB to 6 GB or more?
Or is there any other solution for
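For what it's worth, the limit being discussed can be inspected and changed at runtime with the standard `ceph config` commands; a minimal sketch (the 6 GiB value is just the figure mentioned above, expressed in bytes, not a recommendation):

```shell
# Check the current MDS cache memory limit (the default is 4 GiB)
ceph config get mds mds_cache_memory_limit

# Raise it for all MDS daemons to 6 GiB (6 * 1024^3 bytes)
ceph config set mds mds_cache_memory_limit 6442450944

# Confirm the running MDS daemons picked up the new value
ceph tell mds.* config get mds_cache_memory_limit
```

Note this sets the target for the MDS cache, not a hard cap; whether raising it helps depends on why the cache is growing in the first place.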