[ceph-users] Question regarding bluestore labels

2024-06-07 Thread Bailey Allison
I have a question regarding bluestore labels, specifically for a block.db partition. To make a long story short, we are currently in a position where, when checking the label of a block.db partition, it appears corrupted. I have seen another thread on here suggesting to copy the label from a
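For context, a BlueStore label can normally be read back with ceph-bluestore-tool, and it is prudent to take a byte-for-byte copy of the label region before attempting any repair. A rough sketch, assuming a hypothetical LVM device for block.db (paths are examples only):

    # Print the BlueStore label stored at the start of the device
    ceph-bluestore-tool show-label --dev /dev/ceph-db-vg/db-lv-0

    # Back up the first 4 KiB (the label region) before touching anything
    dd if=/dev/ceph-db-vg/db-lv-0 of=/root/db-label.bak bs=4096 count=1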

[ceph-users] Re: Ceph RBD, MySQL write IOPs - what is possible?

2024-06-07 Thread Anthony D'Atri
> On Jun 7, 2024, at 13:20, Mark Lehrer wrote:
>
>> * server RAM and CPU
>> * osd_memory_target
>> * OSD drive model
>
> Thanks for the reply. The servers have dual Xeon Gold 6154 CPUs with
> 384 GB

So roughly 7 vcores / HTs per OSD? Your Ceph is a recent release?

> The drives are older,
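As a quick way to answer the release question, the running versions and the OSD-per-host layout can be pulled from the cluster itself; a minimal sketch:

    # Which Ceph release each daemon is running
    ceph versions

    # OSD distribution per host, to sanity-check the cores-per-OSD ratio
    ceph osd tree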

[ceph-users] Re: Ceph RBD, MySQL write IOPs - what is possible?

2024-06-07 Thread Mark Lehrer
> server RAM and CPU
> * osd_memory_target
> * OSD drive model

Thanks for the reply. The servers have dual Xeon Gold 6154 CPUs with 384 GB. The drives are older, first gen NVMe - WDC SN620. osd_memory_target is at the default. Mellanox CX5 and SN2700 hardware. The test client is a similar
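For reference, the effective osd_memory_target can be read both from the cluster configuration database and from a running OSD (osd.0 used here only as an example):

    # Value stored centrally, if any override has been set
    ceph config get osd osd_memory_target

    # Value a specific running OSD is actually using
    ceph config show osd.0 osd_memory_target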

[ceph-users] Re: Ceph RBD, MySQL write IOPs - what is possible?

2024-06-07 Thread Anthony D'Atri
Please describe:

* server RAM and CPU
* osd_memory_target
* OSD drive model

> On Jun 7, 2024, at 11:32, Mark Lehrer wrote:
>
> I've been using MySQL on Ceph forever, and have been down this road
> before but it's been a couple of years so I wanted to see if there is
> anything new here.
>
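Most of the requested details can also be pulled straight from the OSD metadata; a sketch, assuming osd.0 and noting that exact field names can vary between releases:

    # CPU model, total memory and backing devices as reported by the OSD
    ceph osd metadata 0 | grep -E 'cpu|mem_total_kb|devices|osd_objectstore'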

[ceph-users] Ceph RBD, MySQL write IOPs - what is possible?

2024-06-07 Thread Mark Lehrer
I've been using MySQL on Ceph forever, and have been down this road before, but it's been a couple of years so I wanted to see if there is anything new here. So the TL;DR version of this email - is there a good way to improve 16K write IOPs with a small number of threads? The OSDs themselves are
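One way to establish a baseline for this workload outside of MySQL is to run 16K random writes directly against an RBD image, either with rbd bench or with fio's librbd engine. A sketch with example pool/image names:

    # 16 KiB random writes, 16 concurrent IOs, against pool rbd / image mysql-test
    rbd bench --io-type write --io-size 16K --io-pattern rand \
        --io-threads 16 --io-total 10G rbd/mysql-test

    # Roughly the same workload via fio's librbd engine
    fio --name=rbd-16k --ioengine=rbd --clientname=admin --pool=rbd \
        --rbdname=mysql-test --rw=randwrite --bs=16k --iodepth=16 \
        --numjobs=1 --direct=1 --runtime=60 --time_based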

[ceph-users] Re: Testing CEPH scrubbing / self-healing capabilities

2024-06-07 Thread Frédéric Nass
Hello Petr,

On 4 Jun 2024, at 12:13, Petr Bena petr@bena.rocks wrote:

> Hello,
>
> I wanted to try out (lab ceph setup) what exactly is going to happen
> when parts of data on OSD disk gets corrupted. I created a simple test
> where I was going through the block device data until I found
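For a lab test like this, the scrub and repair cycle can usually be driven by hand; a sketch with an example OSD id and PG id:

    # Force a deep scrub of the OSD that holds the corrupted copy
    ceph osd deep-scrub osd.1

    # List the objects the deep scrub flagged as inconsistent in a PG
    rados list-inconsistent-obj 2.1f --format=json-pretty

    # Repair the PG from the healthy replicas
    ceph pg repair 2.1f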

[ceph-users] Re: Excessively Chatty Daemons RHCS v5

2024-06-07 Thread Frédéric Nass
Hi Joshua,

These messages actually deserve more attention than you think, I believe. You may be hitting this one [1], which Mark (comment #4) also hit with 16.2.10 (RHCS 5). The PR is here: https://github.com/ceph/ceph/pull/51669

Could you try raising osd_max_scrubs to 2 or 3 (it now defaults to 3 in quincy and
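For reference, the scrub concurrency can be checked and raised at runtime through the config database (osd.0 used as an example for the read-back):

    # Current value on a running OSD
    ceph config show osd.0 osd_max_scrubs

    # Raise it for all OSDs cluster-wide
    ceph config set osd osd_max_scrubs 3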