On 12/22/21 4:23 AM, Marc wrote:
> I guess what caused the issue was high latencies on our “big” SSDs (7 TB
> drives), which got really high after the upgrade to Octopus. We split them
> into 4 OSDs each some days ago, and since then the high commit latencies on
> the OSDs and on bluestore are gone.
Hmm, but this is sort of a workaround.
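For reference, splitting one device into several OSDs is typically done with
ceph-volume's batch mode. A minimal sketch (the device path is a placeholder;
any existing OSD on the device has to be drained and zapped first):

    # create four OSDs on a single 7 TB SSD
    ceph-volume lvm batch --osds-per-device 4 /dev/nvme0n1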
>
> Thanks a lot! This is reasonable data. Do you plan to upgrade to Octopus
> anytime soon? I would be very interested in the same tests after the
> migration.
>
Hmm, not really; the idea is to discover what is going on with your
situation, so if I am having this also after upgrading, I will know.
I am still on Nautilus, albeit a tiny cluster. I would not mind doing some
tests for comparison if necessary.
>
> Hi Frank, thanks for the input. I'm still a bit sceptical, to be honest,
> that this is all, since a) our bench values are pretty stable over time
> (Nautilus times and Octopus times)
> To: Dan van der Ster
> Cc: Ceph Users
> Subject: [ceph-users] Re: 50% IOPS performance drop after upgrade from
> Nautilus 14.2.22 to Octopus 15.2.15
>
Hi Dan, Josh,
thanks for the input. We tried bluefs_buffered_io with both true and false;
no real difference to be seen (hard to say in a production cluster, maybe a
few percent).
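For anyone following along, the setting can be flipped cluster-wide with the
config database; a minimal sketch (osd.0 is just an example daemon, and
depending on the release the OSDs may need a restart to pick it up):

    # set bluefs_buffered_io for all OSDs and verify on one of them
    ceph config set osd bluefs_buffered_io false
    ceph config get osd.0 bluefs_buffered_io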
We now disabled the write cache on our SSDs and see a “felt” performance
increase, up to 17k IOPS with 4k blocks.
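Disabling the volatile write cache is done per device; a minimal sketch
(device names are placeholders, and the setting does not survive a power
cycle unless reapplied, e.g. via a udev rule):

    # SATA drives: turn off the volatile write cache
    hdparm -W 0 /dev/sdX
    # alternative that also covers SAS drives
    smartctl -s wcache,off /dev/sdX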
Hi,
It's a bit weird that you benchmark with 1024-byte writes -- or is that your
realistic use case?
That is smaller than the minimum allocation unit even for SSDs, so every
update needs a read/modify/write cycle, which slows things down
substantially.
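For comparison, it is worth re-running the benchmark at an alloc-size-aligned
block size; a minimal sketch (pool name, duration, and thread count are
placeholders):

    # 30-second 4 KiB write benchmark with 16 concurrent ops
    rados bench -p testpool 30 write -b 4096 -t 16 --no-cleanup
    # random-read the objects back, then clean up
    rados bench -p testpool 30 rand -t 16
    rados -p testpool cleanup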
Anyway, since you didn't mention it, have you disabled the write cache
on your SSDs?
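Checking the current cache state is a one-liner; a minimal sketch (device
name is a placeholder):

    # show whether the volatile write cache is currently enabled
    smartctl -g wcache /dev/sdX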