Hi Jakub,
Comments inline.
On Tue, Jul 25, 2023 at 11:03 PM Jakub Petrzilka
wrote:
> Hello everyone!
>
> Recently we had a very nasty incident with one of our Ceph clusters.
>
> During a basic backfill/recovery operation caused by a faulty disk, the
> CephFS metadata pool started growing exponentially until
Hi,
First, thank you for taking the time to reply to me.
However, my question was not about user-space memory nor about cache usage; as
far as I can see, on my machines everything sums up quite nicely.
My question is: with packages, the non-cache kernel memory is around 2G to 3G,
while with Podman usage,
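One way to pin down where that non-cache kernel memory sits is to read the
kernel counters straight from /proc/meminfo; a rough sketch (Slab, SUnreclaim,
KernelStack and PageTables are usually the relevant fields here):

# Kernel-side allocations that do not count as page cache, values in kB.
grep -E '^(Slab|SReclaimable|SUnreclaim|KernelStack|PageTables|VmallocUsed):' /proc/meminfo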
On 7/24/23 23:02, Frank Schilder wrote:
Hi Xiubo,
I seem to have gotten your e-mail twice.
It's a very old kclient. It was in that state when I came to work in the
morning, and I looked at it in the afternoon. I was hoping the problem would
clear by itself.
Okay.
One correction for my last
Yes,
Check [1] and [2] for example code. I don't have a Pacific lab, but test it
and let us know how it goes.
1.- https://tracker.ceph.com/issues/18800
2.- https://github.com/aws-samples/sigv4a-signing-examples
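For a quick check, a minimal command-line sketch that forces SigV4 when
talking to RGW; the endpoint URL and bucket name below are placeholders:

# Make the awscli S3 commands sign with SigV4, then point them at RGW.
aws configure set default.s3.signature_version s3v4
aws --endpoint-url http://rgw.example.com:8080 s3 ls s3://mybucket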
On Tue, Jul 25, 2023 at 1:50 PM wrote:
> Can anyone help me?
>
> I need to know
Can anyone help me?
I need to know: does Ceph 16.2.4 support Signature V4 for the S3 API? If yes,
please guide us.
Thank you all.
Hello everyone!
Recently we had a very nasty incident with one of our Ceph clusters.
During a basic backfill/recovery operation caused by a faulty disk, the CephFS
metadata pool started growing exponentially until it used all available space
and the whole cluster died. A usage graph screenshot is attached.
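For anyone trying to catch something similar early: per-pool growth shows up
in `ceph df detail`, and a quota on the metadata pool can stop a runaway from
filling the whole cluster. A sketch, assuming the default pool name
cephfs_metadata:

ceph df detail
# Cap the metadata pool at ~500 GiB (value in bytes); name and size are examples.
ceph osd pool set-quota cephfs_metadata max_bytes 536870912000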
Hi Adam!
I guess you only want the output for port 9100?
[root@darkside1]# ss -tulpn | grep 9100
tcp LISTEN 0 128 [::]:9100 [::]:*
users:(("node_exporter",pid=9103,fd=3))
Also, this:
[root@darkside1 ~]# ps aux | grep 9103
nfsnobo+ 9103 38.4 0.0
Okay, not much info on the mon failure. The other one at least seems to be
a simple port conflict. What does `sudo netstat -tulpn` give you on that
host?
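If it does turn out to be the port conflict, finding the process that already
owns the port is usually enough; a sketch for node_exporter's default port
9100:

# Show what is already listening on 9100 and which PID holds it.
sudo ss -tulpn | grep ':9100'
sudo lsof -iTCP:9100 -sTCP:LISTEN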
On Tue, Jul 25, 2023 at 12:00 PM Renata Callado Borges <
renato.call...@incor.usp.br> wrote:
> Hi Adam!
>
>
> Thank you for your response, but
Hi Adam!
Thank you for your response, but I am still trying to figure out the
issue. I am pretty sure the problem occurs "inside" the container, and I
don't know how to get logs from there.
Just in case, this is what systemd sees:
Jul 25 12:36:32 darkside1 systemd[1]: Stopped Ceph
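If this is a cephadm deployment, the containers log to journald and the logs
can be pulled per daemon; a sketch, with the daemon name and <fsid> as
placeholders:

cephadm ls                         # list daemons/containers on this host
cephadm logs --name mon.darkside1  # journald logs for a single daemon
journalctl -u ceph-<fsid>@mon.darkside1.service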
Hi Eric,
> 1. I recommend that you *not* issue another bucket reshard until you figure
> out what’s going on.
Thanks, noted!
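Before going further, it's probably worth dumping the current reshard state; a
sketch, with the bucket name as a placeholder:

radosgw-admin reshard list
radosgw-admin reshard status --bucket=mybucket
radosgw-admin bucket stats --bucket=mybucket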
> 2. Which version of Ceph are you using?
17.2.5
I wanted to get the cluster to HEALTH_OK before upgrading. I didn't
see anything that led me to believe that an upgrade
Good,
> On 24 Jul 2023, at 20:01, Luis Domingues wrote:
>
> Of course:
>
> free -h
>               total        used        free      shared  buff/cache   available
> Mem:          125Gi        96Gi       9.8Gi       4.0Gi        19Gi       7.6Gi
> Swap:            0B          0B          0B