[ceph-users] Re: 16.2.7 pacific QE validation status, RC1 available for testing

2021-12-03 Thread Ernesto Puerta
Neha: sure, done! Yuri: dashboard approved. Kind Regards, Ernesto On Thu, Dec 2, 2021 at 9:38 PM Neha Ojha wrote: > On Mon, Nov 29, 2021 at 9:23 AM Yuri Weinstein > wrote: > > > > Details of this release are summarized here: > > > > https://tracker.ceph.com/issues/53324 > > Release Notes - h

[ceph-users] bfq in centos 8.5 kernel

2021-12-03 Thread Dan van der Ster
Hi all, Just a heads up: we're observing very long IO stalls (tens of seconds) on OSDs using bfq in the 4.18.0-348* kernel -- especially during deep-scrubbing, but we've seen it on different clusters, with both HDD and SSD OSDs. The stalls are often long enough for the OSD to suicide with a 180s stuck op.
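A quick way to check whether an OSD's disk is on bfq and to switch it at runtime (a sketch; /dev/sda is a placeholder for whatever device backs the OSD):

    # show the active scheduler for the device (the one in brackets)
    cat /sys/block/sda/queue/scheduler
    # switch from bfq to mq-deadline for this device
    echo mq-deadline > /sys/block/sda/queue/scheduler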

[ceph-users] Removing an OSD node the right way

2021-12-03 Thread huxia...@horebdata.cn
Dear Cephers, I had to remove a failed OSD server node, and what I did is the following: 1) First, marked all OSDs on that (to-be-removed) server down and out 2) Secondly, let Ceph do backfilling and rebalancing, and waited for it to complete 3) Now I have full redundancy, so I delete those removed OSDs

[ceph-users] Re: Removing an OSD node the right way

2021-12-03 Thread Dan van der Ster
Hi, This is indeed the expected behaviour. The in/out state is used as a second weighting factor in the OSD placement algorithm, so crush weight 1 with weight 0 is not equivalent to crush weight 0. The correct way to decommission OSDs or hosts is to decrease the crush weight. Cheers, Dan On Fri, Dec 3, 2021
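A minimal sketch of the crush-weight-based drain Dan describes, assuming osd.0 is one of the OSDs on the host being decommissioned (hypothetical ID):

    # remove the OSD's crush weight so its PGs migrate elsewhere (repeat per OSD on the host)
    ceph osd crush reweight osd.0 0
    # watch recovery until all PGs are active+clean again
    ceph -s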

[ceph-users] Re: Removing an OSD node the right way

2021-12-03 Thread Boris Behrens
Hi Samuel, I tend to set the crush weight to 0, but I am not sure if this is the "correct" way: ceph osd crush reweight osd.0 0 After the rebalance I can remove them from the crush map without further rebalancing. Hope that helps. Cheers Boris On Fri, Dec 3, 2021 at 13:09, huxia...@horebdata.cn < h
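Once the rebalance has finished, the drained OSD can be removed; a sketch assuming osd.0 and a non-cephadm host where the daemon runs as a systemd unit:

    # optional sanity check that no data still depends on this OSD
    ceph osd safe-to-destroy osd.0
    # stop the daemon, then remove the OSD, its crush entry and its auth key in one step
    systemctl stop ceph-osd@0
    ceph osd purge osd.0 --yes-i-really-mean-it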

[ceph-users] Re: 16.2.7 pacific QE validation status, RC1 available for testing

2021-12-03 Thread David Orman
We've been testing RC1 since its release on our 504 OSD / 21 host test cluster with split db/wal, and have experienced no issues on upgrade or in operation so far. On Mon, Nov 29, 2021 at 11:23 AM Yuri Weinstein wrote: > Details of this release are summarized here: > > https://tracker.ceph.com/issues/53
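For reference, an OSD with split db/wal like the ones in this cluster could be created roughly like this (a sketch; the device paths are placeholders, not David's actual layout):

    # bluestore OSD with data on an HDD and db/wal on separate NVMe partitions
    ceph-volume lvm create --bluestore --data /dev/sdb \
        --block.db /dev/nvme0n1p1 --block.wal /dev/nvme0n1p2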

[ceph-users] Re: Removing an OSD node the right way

2021-12-03 Thread Janne Johansson
On Fri, Dec 3, 2021 at 13:08, huxia...@horebdata.cn wrote: > Dear Cephers, > I had to remove a failed OSD server node, and what I did is the following > 1) First, marked all OSDs on that (to-be-removed) server down and out > 2) Secondly, let Ceph do backfilling and rebalancing, and wait for completi

[ceph-users] Re: How data is stored on EC?

2021-12-03 Thread 胡 玮文
Hi Istvan, Upper-level applications may chunk data into smaller objects, typically 4M each, e.g. CephFS [1], RBD [2]. However, the max object size enforced by the OSD is configurable via osd_max_object_size, which defaults to 128M. So, to my understanding, your 100MB file will typically be chunke
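To see these limits and chunk sizes in practice (a sketch; the pool and image names are placeholders):

    # per-OSD cap on object size, 128M by default
    ceph config get osd osd_max_object_size
    # RBD images choose their object size at creation time, 4M by default
    rbd create mypool/myimage --size 10G --object-size 4M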