Thank you very much!
On Thu, Jun 2, 2022 at 11:23 PM Konstantin Shalygin wrote:
> The "next" release is always compatible with "previous one" clusters
>
>
> k
> Sent from my iPhone
>
> > On 2 Jun 2022, at 16:28, Jiatong Shen wrote:
> >
> > Hello,
> >
> > where can I find librbd compatibility
Is it just your deletes that are slow, or are writes and reads slow as well?
On Thu, Jun 2, 2022, 4:09 PM J-P Methot wrote:
> I'm following up on this as we upgraded to Pacific 16.2.9 and deletes
> are still incredibly slow. The pool RGW is using is a fairly small
> erasure-coded pool set at 8 + 3. Is
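In RGW, object deletion is deferred to the garbage collector, so slow deletes often show up as a GC backlog. A minimal sketch for checking this (the `radosgw-admin` subcommands and `rgw_gc_*` option names are standard Ceph, but the values and the `client.rgw` config target are illustrative assumptions, not recommendations):

```shell
# List pending GC entries, including ones not yet due for processing:
radosgw-admin gc list --include-all | head

# Inspect the current GC tuning (config target may differ per deployment):
ceph config get client.rgw rgw_gc_max_concurrent_io
ceph config get client.rgw rgw_gc_obj_min_wait

# Example: allow more concurrent GC I/O (illustrative value):
ceph config set client.rgw rgw_gc_max_concurrent_io 20

# Run a GC pass manually to see whether the backlog drains:
radosgw-admin gc process --include-all
```

These commands need to run against the live cluster / RGW host, so treat them as a starting point for diagnosis rather than a fix.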
On Thu, Jun 2, 2022 at 11:40 AM Stefan Kooman wrote:
>
> Hi,
>
> We have a CephFS filesystem holding 70 TiB of data in ~ 300 M files and
> ~ 900 M sub directories. We currently have 180 OSDs in this cluster.
>
> POOL  ID  PGS  STORED  (DATA)  (OMAP)  OBJECTS  USED (DATA)
The "next" release is always compatible with "previous one" clusters
k
Sent from my iPhone
> On 2 Jun 2022, at 16:28, Jiatong Shen wrote:
>
> Hello,
>
> where can I find the librbd compatibility matrix? For example, is an Octopus
> client compatible with a Nautilus server? Thank you.
>
> --
>
>
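There is no snippet of an official matrix in this thread, but a quick way to see what is actually in play is to compare the client-side librbd version against the versions the cluster daemons run (both commands are standard Ceph CLI; pool/image names are placeholders):

```shell
# On the client host: version of the rbd CLI / librbd in use:
rbd --version

# Against the cluster: per-daemon version breakdown:
ceph versions

# Negotiated image features can also matter across releases:
rbd info <pool>/<image> | grep features
```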
Hi,
On my test cluster, I migrated from Nautilus to Octopus and then
converted most of the daemons to cephadm. I ran into a lot of problems with
podman 1.6.4 on CentOS 7 going through an HTTPS proxy, because my servers
are on a private network.
Now, I'm unable to deploy new managers and the cluster is in
Hi,
Yesterday we hit OSD_FULL / POOL_FULL conditions for two brief moments.
Since all OSDs are present in all pools, all I/O was stalled, which impacted
a few MDS clients (they got evicted). Although the impact was limited, I
*really* would like to understand how that could happen, as it should
not
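For investigating OSD_FULL / POOL_FULL, a minimal sketch of the standard checks (the commands exist in current Ceph; the ratio values shown are the usual defaults and are only illustrative):

```shell
# The cluster-wide thresholds that trigger nearfull/backfillfull/full:
ceph osd dump | grep -E 'full_ratio|backfillfull_ratio|nearfull_ratio'

# Per-pool and raw usage, to see which pool approached capacity or quota:
ceph df detail

# The ratios can be adjusted cluster-wide if needed (illustrative values):
ceph osd set-nearfull-ratio 0.85
ceph osd set-backfillfull-ratio 0.90
ceph osd set-full-ratio 0.95
```

Uneven PG distribution can push a single OSD to full_ratio while the pool as a whole looks fine, which is one way brief OSD_FULL events happen.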
Hello,
where can I find the librbd compatibility matrix? For example, is an Octopus
client compatible with a Nautilus server? Thank you.
--
Best Regards,
Jiatong Shen
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to
Hey Angelo,
what you're asking for is "Live Migration".
https://docs.ceph.com/en/latest/rbd/rbd-live-migration/ says:
The live-migration copy process can safely run in the background while the new
target image is in use. There is currently a requirement to temporarily stop
using the source
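The flow the docs describe can be sketched as follows (the `rbd migration` subcommands are real; pool and image names here are placeholders):

```shell
# Source must be closed before "prepare"; afterwards clients open the
# *target* image while the copy runs in the background.
rbd migration prepare sourcepool/image1 targetpool/image1

# Clients now use targetpool/image1; copy the data in the background:
rbd migration execute targetpool/image1

# Once the copy completes, sever the link to the source:
rbd migration commit targetpool/image1

# (rbd migration abort reverts to the source image if needed.)
```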
Hi,
I'm currently debugging a recurring issue with multi-active MDS. The
cluster is still on Nautilus and can't be upgraded at this time. There
have been many discussions about "cache pressure" and I was able to
find the right settings a couple of times, but before I change too
much in
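For reference, these are settings commonly tuned for "clients failing to respond to cache pressure" on Nautilus; a sketch, with real option names but purely illustrative values (this is not the poster's actual configuration):

```shell
# Current MDS cache state (run on the MDS host, admin socket required):
ceph daemon mds.<name> cache status

# Raise the MDS cache memory limit (bytes; illustrative 8 GiB):
ceph config set mds mds_cache_memory_limit 8589934592

# Cap-recall behaviour, often involved in cache-pressure warnings:
ceph config get mds mds_recall_max_caps
ceph config get mds mds_recall_max_decay_rate
```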
At this stage we are not so worried about recovery since we moved to
our new pacific cluster. The problem arose during one of the nightly
syncs of the old cluster to the new cluster. However, we are quite keen
to use this as a learning opportunity to see what we can do to bring
this filesystem