[ceph-users] Re: Multi-MDS

2024-04-03 Thread quag...@bol.com.br
___ ceph-users mailing list -- ceph-users@ceph.io To unsubscribe send an email to ceph-users-le...@ceph.io

[ceph-users] Multi-MDS

2024-04-02 Thread quag...@bol.com.br
Hello, I configured multi-MDS in Ceph. The parameters I set were: 3 active, 1 standby. I also applied the distributed pinning configuration at the root of the storage's mount point: setfattr -n ceph.dir.pin.distributed -v 1 / This
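A minimal sketch of the setup this message describes. The filesystem name `cephfs` and the mount point `/mnt/cephfs` are assumptions for illustration, not taken from the message:

```shell
# Allow three active MDS daemons for the filesystem; any remaining
# MDS daemons act as standbys and take over on failure.
# (Filesystem name "cephfs" is an assumption.)
ceph fs set cephfs max_mds 3

# Enable distributed ephemeral pinning at the root of the mounted
# filesystem so top-level subtrees are spread across the active ranks.
# (Mount point /mnt/cephfs is an assumption.)
setfattr -n ceph.dir.pin.distributed -v 1 /mnt/cephfs
```

With this attribute set, CephFS hashes immediate child directories of the root across the active MDS ranks instead of serving all metadata from rank 0.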

[ceph-users] node-exporter error

2024-03-20 Thread quag...@bol.com.br
Hello, after some time I am adding some more disks on a new machine in the Ceph cluster. However, one container is not coming up: the "node-exporter". Below is an excerpt from the log that reports the error: Mar 20 15:51:08 adafn02
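Hypothetical first steps for a cephadm-managed node-exporter container that fails to start; the host name `adafn02` comes from the log excerpt above, everything else is a generic sketch:

```shell
# List the daemons cephadm believes are running on the affected host,
# to confirm the node-exporter's reported state.
ceph orch ps adafn02

# Redeploy the failing monitoring daemon so cephadm recreates the
# container from scratch.
ceph orch daemon redeploy node-exporter.adafn02
```

If the redeploy fails the same way, the underlying container runtime logs on the host (e.g. via `journalctl` on `adafn02`) usually show the actual error.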

[ceph-users] Re: Reef (18.2): Some PG not scrubbed/deep scrubbed for 1 month

2024-03-20 Thread quag...@bol.com.br
Hi, I upgraded a cluster 2 weeks ago here. The situation is the same as Michel's: a lot of PGs not scrubbed/deep-scrubbed. Rafael.
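Standard Ceph CLI commands for inspecting the overdue-scrub situation described above; the PG id in the last command is a made-up example:

```shell
# Health detail lists each PG that has not been scrubbed or
# deep-scrubbed within the configured interval.
ceph health detail

# Manually trigger a deep scrub on one affected PG
# (PG id "2.1f" is a hypothetical example).
ceph pg deep-scrub 2.1f
```

This helps distinguish a scheduling backlog (scrubs slowly catching up on their own) from scrubs that never run at all.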

[ceph-users] Re: Performance improvement suggestion

2024-02-20 Thread quag...@bol.com.br

[ceph-users] Re: Performance improvement suggestion

2024-02-01 Thread quag...@bol.com.br
it's just a suggestion. If this type of functionality is not interesting, that is ok. Rafael. From: "Anthony D'Atri" Sent: 2024/02/01 12:10:30 To: quag...@bol.com.br Cc: ceph-users@ceph.io Subject: [ceph-users] Re: Performance improvement suggestion > I didn't say I would ac

[ceph-users] Re: Performance improvement suggestion

2024-02-01 Thread quag...@bol.com.br

[ceph-users] Re: Performance improvement suggestion

2024-02-01 Thread quag...@bol.com.br
Janne Johansson" Sent: 2024/02/01 04:08:05 To: anthony.da...@gmail.com Cc: acozy...@gmail.com, quag...@bol.com.br, ceph-users@ceph.io Subject: Re: [ceph-users] Re: Performance improvement suggestion > I’ve heard conflicting asserts on whether the write returns with min_size sha

[ceph-users] Re: Performance improvement suggestion

2024-02-01 Thread quag...@bol.com.br
. However, I don't think it is worthwhile to lose the functionality of the replicas. I'm just suggesting another way to increase performance without losing the functionality of replicas. Rafael. From: "Anthony D'Atri" Sent: 2024/01/31 17:04:08 To: quag...@bol.com.br Cc: ceph-use

[ceph-users] Re: Performance improvement suggestion

2024-02-01 Thread quag...@bol.com.br

[ceph-users] Re: Performance improvement suggestion

2024-02-01 Thread quag...@bol.com.br

[ceph-users] Re: Performance improvement suggestion

2024-01-31 Thread quag...@bol.com.br
Hello everybody, I would like to make a suggestion for improving performance in the Ceph architecture. I don't know if this group is the best place for it, or whether my proposal is correct. My suggestion concerns https://docs.ceph.com/en/latest/architecture/, at the end of the

[ceph-users] Performance improvement suggestion

2024-01-31 Thread quag...@bol.com.br

[ceph-users] Re: HDD cache

2023-11-09 Thread quag...@bol.com.br

[ceph-users] Re: CephFS performance

2022-11-23 Thread quag...@bol.com.br
ready sent that the cluster is configured with size=2 and min_size=1 for the data and metadata pools. If I have any more information to contribute, please let me know! Thank you, Rafael From: "David C" Sent: 2022/11/22 12:27:24 To: quag...@bol.com.br Cc: ceph-users@ceph.io Subject
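The message states the pools run with size=2 and min_size=1; a sketch of how those values would be set. The pool names `cephfs_data` and `cephfs_metadata` are assumptions, not taken from the message:

```shell
# Set replication to 2 copies, allowing I/O with only 1 copy available.
# (Pool names are assumed for illustration.)
ceph osd pool set cephfs_data size 2
ceph osd pool set cephfs_data min_size 1
ceph osd pool set cephfs_metadata size 2
ceph osd pool set cephfs_metadata min_size 1
```

Note that size=2/min_size=1 trades durability for performance: losing a single OSD while its peer is behind can lead to data loss, which is why size=3/min_size=2 is the usual recommendation.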

[ceph-users] CephFS performance

2022-10-20 Thread quag...@bol.com.br
Hello everyone, I have some considerations and doubts to raise... I work at an HPC center, and my doubts stem from performance in this environment. All clusters here were suffering from NFS performance issues, as well as the single point of failure NFS introduces. We were suffering from the