[ceph-users] Re: OSD memory usage after cephadm adoption

2023-07-17 Thread Frank Schilder
Sent: Monday, July 17, 2023 9:36 AM
To: Sridhar Seshasayee
Cc: Mark Nelson; ceph-users@ceph.io
Subject: [ceph-users] Re: OSD memory usage after cephadm adoption
> It does indeed look like the bug I hit. Thanks. Luis Domingues, Proton AG --- Original Message --- On Monday, July 17th, 2023

[ceph-users] Re: OSD memory usage after cephadm adoption

2023-07-17 Thread Luis Domingues
It does indeed look like the bug I hit. Thanks. Luis Domingues, Proton AG --- Original Message --- On Monday, July 17th, 2023 at 07:45, Sridhar Seshasayee wrote: > Hello Luis, > > Please see my response below: > > But when I took a look at the memory usage of my OSDs, I was

[ceph-users] Re: OSD memory usage after cephadm adoption

2023-07-16 Thread Sridhar Seshasayee
Hello Luis, Please see my response below: > But when I took a look at the memory usage of my OSDs, I was below that > value, by quite a bit. Looking at the OSDs themselves, I have: > > "bluestore-pricache": { > "target_bytes": 4294967296, > "mapped_bytes": 1343455232, >
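For reference, the priority-cache counters quoted here come from the OSD perf counters. A minimal way to pull them, assuming an OSD id of 0 as a placeholder and jq on the admin host:

    # What the priority cache thinks its budget is vs. what is actually mapped
    ceph tell osd.0 perf dump | jq '."bluestore-pricache"'
    # target_bytes should track the configured memory target
    ceph config show osd.0 osd_memory_target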

[ceph-users] Re: OSD memory usage after cephadm adoption

2023-07-16 Thread Luis Domingues
Hi, Thanks for your hints. I tried to play around a little bit with the configs, and now I want to set 0.7 as the default value. So I configured ceph: mgr  advanced  mgr/cephadm/autotune_memory_target_ratio  0.70
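The flattened line above is "ceph config dump" output (WHO, LEVEL, OPTION, VALUE columns). A sketch of setting the same ratio directly, assuming a cephadm-managed cluster:

    # Make 0.7 the ratio cephadm's autotuner uses when sizing osd_memory_target
    ceph config set mgr mgr/cephadm/autotune_memory_target_ratio 0.70
    # Verify it took effect
    ceph config get mgr mgr/cephadm/autotune_memory_target_ratio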

[ceph-users] Re: OSD memory usage after cephadm adoption

2023-07-11 Thread Mark Nelson
On 7/11/23 09:44, Luis Domingues wrote: "bluestore-pricache": { "target_bytes": 6713193267, "mapped_bytes": 6718742528, "unmapped_bytes": 467025920, "heap_bytes": 7185768448, "cache_bytes": 4161537138 }, Hi Luis, Looks like the mapped
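In these counters, mapped_bytes = heap_bytes - unmapped_bytes (7185768448 - 467025920 = 6718742528), i.e. the part of the tcmalloc heap not yet returned to the OS, and here it sits slightly above target_bytes. Two commands for inspecting the heap from the admin socket, again with osd.0 as a placeholder:

    # tcmalloc's own view of the heap
    ceph tell osd.0 heap stats
    # Ask tcmalloc to release unused pages back to the OS
    ceph tell osd.0 heap release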

[ceph-users] Re: OSD memory usage after cephadm adoption

2023-07-11 Thread Luis Domingues
Here you go. Perf dump: { "AsyncMessenger::Worker-0": { "msgr_recv_messages": 12239872, "msgr_send_messages": 12284221, "msgr_recv_bytes": 43759275160, "msgr_send_bytes": 61268769426, "msgr_created_connections": 754, "msgr_active_connections":
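A full perf dump runs to thousands of lines; filtering to the sections relevant to this thread keeps it readable. A sketch, assuming osd.0 and jq:

    ceph tell osd.0 perf dump | jq '{pricache: ."bluestore-pricache", msgr: ."AsyncMessenger::Worker-0"}'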

[ceph-users] Re: OSD memory usage after cephadm adoption

2023-07-11 Thread Mark Nelson
Hi Luis, Can you do a "ceph tell osd.<id> perf dump" and "ceph daemon osd.<id> dump_mempools"? Those should help us understand how much memory is being used by different parts of the OSD/bluestore and how much memory the priority cache thinks it has to work with. Mark On 7/11/23 4:57 AM, Luis
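For anyone following along, these are the two commands with a concrete (placeholder) OSD id; the tell form works from any node with admin credentials, while the daemon form must run on the host that has the OSD's admin socket:

    ceph tell osd.0 perf dump
    ceph daemon osd.0 dump_mempools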

[ceph-users] Re: OSD Memory usage

2020-11-26 Thread Seena Fallah
This is what happens with my cluster (screenshots attached). At 10:11 I turned on bluefs_buffered_io on all my OSDs and latency went back down, but throughput decreased. I had these configs on all OSDs during recovery: osd_max_backfills 1, osd_recovery_max_active 1, osd_recovery_op_priority 1. Do you have any
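For reference, the recovery throttles listed above can be set cluster-wide like this (the option names use underscores):

    ceph config set osd osd_max_backfills 1
    ceph config set osd osd_recovery_max_active 1
    ceph config set osd osd_recovery_op_priority 1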

[ceph-users] Re: OSD Memory usage

2020-11-24 Thread Seena Fallah
I added one OSD node to the cluster and got 500 MB/s throughput over my disks, 2 or 3 times better than before, but my latency rose 5 times! When I enable bluefs_buffered_io the throughput on the disks drops to 200 MB/s and my latency goes down. Is there any kernel config/tuning that should be
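A sketch of flipping the option cluster-wide; on the Nautilus releases discussed here the new value may only take effect after an OSD restart:

    ceph config set osd bluefs_buffered_io true
    # Restart one OSD at a time to avoid taking too many PGs offline, e.g.:
    systemctl restart ceph-osd@0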

[ceph-users] Re: OSD Memory usage

2020-11-23 Thread Igor Fedotov
Hi Seena, just to note - this ticket might be relevant: https://tracker.ceph.com/issues/48276 Mind leaving a comment there? Thanks, Igor On 11/23/2020 2:51 AM, Seena Fallah wrote: Now one of my OSDs got a segfault. Here is the full trace: https://paste.ubuntu.com/p/4KHcCG9YQx/ On Mon,

[ceph-users] Re: OSD Memory usage

2020-11-22 Thread Seena Fallah
Now one of my OSDs got a segfault. Here is the full trace: https://paste.ubuntu.com/p/4KHcCG9YQx/ On Mon, Nov 23, 2020 at 2:16 AM Seena Fallah wrote: > Hi all, > > After I upgraded from 14.2.9 to 14.2.14 my OSDs are using much more memory > than before! I give each OSD a 6GB memory target and
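For reference, a per-OSD target of 6 GB as described can be set cluster-wide; note that osd_memory_target is a best-effort target, not a hard limit:

    # 6 GiB = 6 * 1024^3 bytes
    ceph config set osd osd_memory_target 6442450944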