[ceph-users] Re: quincy v17.2.0 QE Validation status

2022-03-28 Thread Venky Shankar
On Tue, Mar 29, 2022 at 10:56 AM Venky Shankar wrote:
>
> Hey Yuri,
>
> On Tue, Mar 29, 2022 at 3:18 AM Yuri Weinstein wrote:
> >
> > We are trying to release v17.2.0 as soon as possible.
> > And need to do a quick approval of tests and review failures.
> >
> > Still outstanding are two PRs:
>

[ceph-users] Re: quincy v17.2.0 QE Validation status

2022-03-28 Thread Venky Shankar
Hey Yuri,

On Tue, Mar 29, 2022 at 3:18 AM Yuri Weinstein wrote:
>
> We are trying to release v17.2.0 as soon as possible.
> And need to do a quick approval of tests and review failures.
>
> Still outstanding are two PRs:
> https://github.com/ceph/ceph/pull/45673
> https://github.com/ceph/ceph/pu

[ceph-users] Re: quincy v17.2.0 QE Validation status

2022-03-28 Thread Neha Ojha
On Mon, Mar 28, 2022 at 2:48 PM Yuri Weinstein wrote:
>
> We are trying to release v17.2.0 as soon as possible.
> And need to do a quick approval of tests and review failures.
>
> Still outstanding are two PRs:
> https://github.com/ceph/ceph/pull/45673
> https://github.com/ceph/ceph/pull/45604
>

[ceph-users] Re: PG down, due to 3 OSD failing

2022-03-28 Thread Dan van der Ster
Hi Fulvio,

You can check (offline) which PGs are on an OSD with the list-pgs op, e.g.

  ceph-objectstore-tool --data-path /var/lib/ceph/osd/cephpa1-158/ --op list-pgs

The EC PGs have a naming convention like 85.25s1 etc. for the various k/m EC shards.

-- dan

On Mon, Mar 28, 2022 at 2:29 PM F
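A minimal sketch of the offline check Dan describes, assuming a systemd-managed, non-containerized OSD (the unit name may differ for a non-default cluster name like cephpa1) and reusing the data path quoted above:

  # ceph-objectstore-tool needs exclusive access to the OSD's object store,
  # so make sure the (already failing) OSD daemon is stopped first.
  systemctl stop ceph-osd@158

  # Print one line per PG held on this OSD; EC shards show up with the
  # sNN suffix (e.g. 85.25s1) mentioned in the reply.
  ceph-objectstore-tool \
      --data-path /var/lib/ceph/osd/cephpa1-158/ \
      --op list-pgs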

[ceph-users] PG down, due to 3 OSD failing

2022-03-28 Thread Fulvio Galeazzi
Hello,

all of a sudden, 3 of my OSDs failed, showing similar messages in the log:

  ...
  -5> 2022-03-28 14:19:02.451 7fc20fe99700  5 osd.145 pg_epoch: 616454 pg[70.2c6s1( empty local-lis/les=612106/612107 n=0 ec=148456/148456 lis/c 612106/612106 les/c/f 612107/612107/0 612106/612106/6
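Not part of the original report, but a sketch of how the affected PG can be inspected from the cluster side; the pgid 70.2c6 is taken from the pg[70.2c6s1(...)] entry in the log excerpt above:

  # Show which PGs are down/degraded and which OSDs they are waiting for.
  ceph health detail

  # Query the peering state of the base pgid from the log excerpt
  # (70.2c6s1 is shard s1 of EC pg 70.2c6).
  ceph pg 70.2c6 query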

[ceph-users] Re: RBD Exclusive lock to shared lock

2022-03-28 Thread Marc
> My use case would be an HA cluster where a VM is mapping an rbd image,
> and then it encounters some network issue. Another node of the HA
> cluster could start the VM and map again the image, but if the
> networking is fixed on the first VM that would keep using the already
> mapped ima
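Not from the thread itself, but a short sketch of how the watcher and lock state can be inspected in such a scenario; the pool/image name rbd/vm-disk-1 is only an example:

  # Which clients have the image open, and who currently holds the
  # exclusive lock? (rbd/vm-disk-1 is a made-up pool/image name.)
  rbd status rbd/vm-disk-1
  rbd lock ls rbd/vm-disk-1

  # Mapping with --exclusive disables automatic exclusive-lock
  # transitions, so the lock is not handed over to a competing client
  # while this mapping is alive.
  rbd map --exclusive rbd/vm-disk-1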

[ceph-users] Re: ceph mon failing to start

2022-03-28 Thread Dan van der Ster
Are the two running mons also running 14.2.9 ? --- dan On Mon, Mar 28, 2022 at 8:27 AM Tomáš Hodek wrote: > > Hi, I have 3 node ceph cluster (managed via proxmox). Got single node > fatal failure and replaced it. Os boots correctly, however monitor on > failed node did not start successfully; Ot