Hi Stefan
I can't recall that being the case, and unfortunately we do not keep
enough history of our performance measurements to look back.
We are on Nautilus. Please let me know your findings when you do your PG
expansion on Nautilus.
Regards
Marcel
> OK, I'm really curious if you observed
On 2020-09-01 10:51, Marcel Kuiper wrote:
> As a matter of fact we did. We doubled the storage nodes from 25 to 50.
> Total osds now 460.
>
> You want to share your thoughts on that?
OK, I'm really curious if you observed the following behaviour:
During, or shortly after the rebalance, did you
On 2020-09-02 23:50, Wido den Hollander wrote:
>
> Indeed, it shouldn't be.
>
> This config option should make it easier in a future release:
> https://github.com/ceph/ceph/commit/93e4c56ecc13560e0dad69aaa67afc3ca053fb4c
>
>
> [osd]
> osd_compact_on_start = true
>
> Then just restart the
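For what it's worth, a minimal sketch of how that would be used once running
a release that actually ships the option (the option name is taken from the
commit linked above, so treat this as untested here):

ceph config set osd osd_compact_on_start true
systemctl restart ceph-osd@<id>   # then restart the OSDs one at a time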
On 02/09/2020 12:07, Stefan Kooman wrote:
> On 2020-09-01 10:51, Marcel Kuiper wrote:
>> As a matter of fact we did. We doubled the storage nodes from 25 to 50.
>> Total osds now 460.
>>
>> You want to share your thoughts on that?
>
> Yes. We observed the same thing with expansions. The OSDs will be very
On 2020-09-01 10:51, Marcel Kuiper wrote:
> As a matter of fact we did. We doubled the storage nodes from 25 to 50.
> Total osds now 460.
>
> You want to share your thoughts on that?
Yes. We observed the same thing with expansions. The OSDs will be very
busy (with multiple threads per OSD) on
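A rough way to watch that while it is happening, assuming osd.174 as an
example id and that the RocksDB counters sit under a "rocksdb" section of
perf dump (section and counter names can differ per release):

top -H -p $(pgrep -f 'ceph-osd.*--id 174' | head -1)   # per-thread CPU of one OSD
ceph daemon osd.174 perf dump | jq .rocksdb            # RocksDB counters via the admin socket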
On 2020-08-31 14:16, Marcel Kuiper wrote:
> The compaction of the bluestore-kv's helped indeed. The response is back to
> acceptable levels
Just curious. Did you do any cluster expansion and/or PG expansion
before the slowness occurred?
Gr. Stefan
As a matter of fact we did. We doubled the storage nodes from 25 to 50.
Total osds now 460.
You want to share your thoughts on that?
Regards
Marcel
> On 2020-08-31 14:16, Marcel Kuiper wrote:
>> The compaction of the bluestore-kv's helped indeed. The response is back
>> to acceptable levels
>
The compaction of the bluestore-kv's helped indeed. The response is back to
acceptable levels
Thanks for the help
> Thank you Stefan, I'm going to give that a try
>
> Kind Regards
>
> Marcel Kuiper
>
>> On 2020-08-27 13:29, Marcel Kuiper wrote:
>>> Sorry that had to be Wido/Stefan
>>
>> What does
On 2020-08-27 13:29, Marcel Kuiper wrote:
> Sorry that had to be Wido/Stefan
What does "ceph osd df" give you? There is a column with "OMAP" and
"META". OMAP is ~ 13 B, META 26 GB in our setup. Quite a few files in
cephfs (main reason we have large OMAP).
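If you want those two columns in a script-friendly form, something like this
should work on Nautilus (the field names kb_used_omap / kb_used_meta are
assumed from the JSON output; verify with -f json-pretty first):

ceph osd df -f json | jq -r '.nodes[] | [.id, (.kb_used_omap/1048576|floor), (.kb_used_meta/1048576|floor)] | @tsv'
# prints: osd id, OMAP (GiB), META (GiB)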
>
> Another question is: how to use
Thank you Stefan, I'm going to give that a try
Kind Regards
Marcel Kuiper
> On 2020-08-27 13:29, Marcel Kuiper wrote:
>> Sorry that had to be Wido/Stefan
>
> What does "ceph osd df" give you? There is a column with "OMAP" and
> "META". OMAP is ~ 13 B, META 26 GB in our setup. Quite a few files
Sorry that had to be Wido/Stefan
Another question is: how to use the ceph-kvstore-tool to compact the
rocksdb? (can't find a lot of examples)
The WAL and DB are on a separate NVMe. The directory structure for an osd
looks like:
root@se-rc3-st8vfr2t2:/var/lib/ceph/osd# ls -l ceph-174
total
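As for compacting with ceph-kvstore-tool: the invocation appears to be
roughly the following, with the OSD stopped first and using osd 174 from the
listing above as an example:

systemctl stop ceph-osd@174
ceph-kvstore-tool bluestore-kv /var/lib/ceph/osd/ceph-174 compact
systemctl start ceph-osd@174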
Hi Wido/Joost
pg_num is 64. It is not that we use 'rados ls' for operations. We just
noticed as a difference that on this cluster it takes about 15 seconds to
return on the pools .rgw.root or rc3-se.rgw.buckets.index, while our other
clusters return almost instantaneously
Is there a way that I can
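A simple way to reproduce the comparison, using the pool names above:

time rados -p .rgw.root ls > /dev/null
time rados -p rc3-se.rgw.buckets.index ls > /dev/null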
On 2020-08-26 15:20, Marcel Kuiper wrote:
> Hi Vladimir,
>
> no it is the same on all monitors. Actually I got triggered because I got
> slow responses on my rados gateway with the radosgw-admin command and
> narrowed it down to slow responses for rados commands anywhere in the
> cluster.
Do you
On 26/08/2020 15:59, Stefan Kooman wrote:
> On 2020-08-26 15:20, Marcel Kuiper wrote:
>> Hi Vladimir,
>>
>> no it is the same on all monitors. Actually I got triggered because I got
>> slow responses on my rados gateway with the radosgw-admin command and
>> narrowed it down to slow responses for rados
Hi Vladimir,
no it is the same on all monitors. Actually I got triggered because I got
slow responses on my rados gateway with the radosgw-admin command and
narrowed it down to slow responses for rados commands anywhere in the
cluster.
The cluster is not that busy and all osds and monitors use
Hi Marcel,
Is this issue related to only one monitor?
If yes, check the overall node status: average load, disk I/O, RAM
consumption, swap size, etc. It might not be a Ceph-related issue.
Regards,
Vladimir.
On Wed, Aug 26, 2020 at 9:07 AM Marcel Kuiper wrote:
> Hi
>
> One of my clusters running
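For the node-level checks mentioned above, plain standard tools on the
monitor host are enough, for example:

uptime          # load average
iostat -x 5 3   # per-disk utilisation and latency
free -m         # RAM and swap usage
vmstat 5 3      # CPU, memory and swap activity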