> it seems to have a beneficial effect on our data node.
>
> I've seen that the 16.2.8 was out yesterday, but I'm a little confused on:
> [Revert] bluestore: set upper and lower bounds on rocksdb omap iterators (pr#46092, Neha Ojha)
> bluestore: set upper and lower bounds on rocksdb omap iterators (pr#45963, Cory Snyder)
>
> (these two lines seem related to https://tracker.ceph.com/issues/55324).
>
> One step forward, one step backward?
>
> Hubert Beaudichon
>
>
-----Original Message-----
From: Josh Baergen
Sent: Monday, May 16, 2022, 16:56
To: stéphane chalansonnet
Cc: ceph-users@ceph.io
Subject: [ceph-users] Re: Migration Nautilus to Pacific: Very high latencies (EC profile)
Hi Stéphane,
On Sat, May 14, 2022 at 4:27 AM stéphane chalansonnet
wrote:
> After a successful update from Nautilus to Pacific
Hello,
Yes, we got several slow ops stuck for many seconds.
What we noted: CPU/mem usage lower than on Nautilus
(https://drive.google.com/file/d/1NGa5sA8dlQ65ld196Ku2hm_Y0xxvfvNs/view?usp=sharing)
Same behaviour as you.
For the moment, the rebuild of one of our nodes seems to fix the latency.
In our case it appears that file deletes have a very high impact on OSD
operations. Not a significant delete either: ~20T on a 1PB-utilized
filesystem (large files as well).
We are trying to tune down CephFS delayed deletes via:
"mds_max_purge_ops": "512",
"mds_max_purge_ops_per_pg":
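If you want to experiment with these purge throttles, one way (a sketch only; the values and the MDS name `mds.a` below are illustrative, not tuned recommendations) is to set them with `ceph config` and then confirm what a running MDS actually uses:

```shell
# Illustrative only: lower the MDS purge throttles cluster-wide.
# The numeric values are examples, not recommendations.
ceph config set mds mds_max_purge_ops 512
ceph config set mds mds_max_purge_ops_per_pg 0.5

# Verify the effective value on a running daemon (mds.a is a placeholder):
ceph config show mds.a mds_max_purge_ops
```

Changes made this way apply to all MDS daemons and survive restarts, unlike `ceph tell ... injectargs`, which is transient.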
We have a newly-built Pacific (16.2.7) cluster running 8+3 EC jerasure, ~250
OSDs across 21 hosts, which has significantly lower than expected IOPS, only
doing about 30 IOPS per spinning disk (with appropriately sized SSD
bluestore DB), around ~100 PGs per OSD. Have around 100 CephFS (ceph fuse
Hello,
depending on your workload, drives, and OSD allocation size, using 3+2
can be way slower than 4+2. Maybe run a small benchmark and see whether
you get a big difference. We ran some benchmarks like that and they showed
quite ugly results in some tests. Best way to deploy EC in our
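For a small benchmark along those lines, one option (a sketch; `ecpool-3-2` and `ecpool-4-2` are placeholder names for pools you would create with the two EC profiles) is to run `rados bench` against each pool and compare:

```shell
# Placeholder pool names; create one pool per EC profile beforehand.
# 60-second write test, keeping the objects so a read test can follow:
rados bench -p ecpool-3-2 60 write --no-cleanup
rados bench -p ecpool-4-2 60 write --no-cleanup

# Random-read test against the objects written above:
rados bench -p ecpool-3-2 60 rand
rados bench -p ecpool-4-2 60 rand
```

The `--no-cleanup` flag is needed because the `rand`/`seq` read tests operate on objects left behind by a prior write run.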
Hi,
Thank you for your answer.
This is not good news if you also notice a performance decrease on your
side.
No, as far as we know, you cannot downgrade to Octopus.
Going forward seems to be the only way, so Quincy.
We have a qualification cluster so we can try on it (but full virtual
Hello,
what exact EC level do you use?
I can confirm that our internal data shows a performance drop when using
Pacific. So far Octopus is faster and better than Pacific, but I doubt you
can roll back to it. We haven't rerun our benchmarks on Quincy yet, but
according to some presentation it