[ceph-users] Re: cephadm upgrade 16.2.10 to 16.2.11: osds crash and get stuck restarting

2023-01-25 Thread Konstantin Shalygin
Hi Zakhar,

> On 26 Jan 2023, at 08:33, Zakhar Kirpichenko wrote:
> Jan 25 23:07:53 ceph01 bash[2553123]:

[ceph-users] cephadm upgrade 16.2.10 to 16.2.11: osds crash and get stuck restarting

2023-01-25 Thread Zakhar Kirpichenko
Hi, Attempted to upgrade 16.2.10 to 16.2.11; 2 OSDs out of many started crashing in a loop on the very 1st host:

Jan 25 23:07:53 ceph01 bash[2553123]: Uptime(secs): 0.0 total, 0.0 interval
Jan 25 23:07:53 ceph01 bash[2553123]: Flush(GB): cumulative 0.000, interval 0.000
Jan 25 23:07:53 ceph01
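For anyone hitting the same loop, a minimal triage sketch (assuming a cephadm-managed cluster; the crash id is a placeholder): pause the upgrade so no further daemons are redeployed, then inspect the recorded crashes:

  # Stop cephadm from redeploying further daemons while investigating
  ceph orch upgrade pause
  # List crashes not yet acknowledged, then pull the backtrace for one
  ceph crash ls-new
  ceph crash info <crash-id>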

[ceph-users] Re: MDS stuck in "up:replay"

2023-01-25 Thread Thomas Widhalm
Hi, Sorry for the delay. As I told Venky directly, there seems to be a problem with DMARC handling on the Ceph users list, so my mail was blocked by the company I work for. I'm writing from my personal e-mail address now. Did I miss something? Venky, you said that, as soon as the
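For reference, a hedged sketch of how replay progress can be watched with the standard ceph CLI (nothing thread-specific assumed):

  # Shows each MDS rank and its state, e.g. up:replay
  ceph fs status
  # Cluster-wide view, including MDS health warnings
  ceph status
  # Compact MDS state summary
  ceph mds stat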

[ceph-users] v16.2.11 Pacific released

2023-01-25 Thread Yuri Weinstein
We're happy to announce the 11th backport release in the Pacific series. We recommend users update to this release. For detailed release notes with links & changelog, please refer to the official blog entry at https://ceph.io/en/news/blog/2023/v16-2-11-pacific-released Notable Changes
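For cephadm-managed clusters, a minimal upgrade sketch (assuming the cluster is healthy before starting):

  # Begin the rolling upgrade to the new point release
  ceph orch upgrade start --ceph-version 16.2.11
  # Follow progress until all daemons report 16.2.11
  ceph orch upgrade status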

[ceph-users] Re: Mount ceph using FQDN

2023-01-25 Thread Anthony D'Atri
One might argue that that mount command should, and that it shouldn't pass an FQDN to the kernel.

> On Jan 24, 2023, at 23:42, Konstantin Shalygin wrote:
> Hi,
> Do you think kernel should care about DNS resolution?
> k
>> On 24 Jan 2023, at 19:07, kushagra.gu...@hsc.com wrote:
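For illustration, a hedged example (mon1.example.com is a placeholder): the mount.ceph helper resolves the FQDN in userspace and hands the resulting IP to the kernel, which performs no DNS lookups itself:

  # mount.ceph resolves the name before the kernel ever sees it
  mount -t ceph mon1.example.com:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret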

[ceph-users] Re: deploying Ceph using FQDN for MON / MDS Services

2023-01-25 Thread John Mulligan
On Tuesday, January 24, 2023 9:02:41 AM EST Lokendra Rathour wrote:
> Hi Team,
>
> We have a ceph cluster with 3 storage nodes:
> 1. storagenode1 - abcd:abcd:abcd::21
> 2. storagenode2 - abcd:abcd:abcd::22
> 3. storagenode3 - abcd:abcd:abcd::23
>
> The requirement is to
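A hedged sketch of the client-side ceph.conf for this setup, using the literal IPv6 addresses from the post (an FQDN can only stand in for these if every consumer of the file can resolve it):

  [global]
  # Literal IPv6 monitor addresses; brackets are required for IPv6
  mon_host = [abcd:abcd:abcd::21],[abcd:abcd:abcd::22],[abcd:abcd:abcd::23]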

[ceph-users] Re: Status of Quincy 17.2.5 ?

2023-01-25 Thread Konstantin Shalygin
Maybe Mike can organize this release flow... CC'ed Mike Perez; I think the team needs some manager observability (a little).

k

> On 25 Jan 2023, at 16:26, Christian Rohmann wrote:
> Hey everyone,
>
> On 20/10/2022 10:12, Christian Rohmann wrote:
>> 1) May I bring up again my remarks

[ceph-users] Re: Image corrupt after restoring snapshot via Proxmox

2023-01-25 Thread Marc
> We have had a situation three times where rbd images seem to be corrupt after
> restoring a snapshot, and I'm looking for advice on how to investigate this.
>
> We're running Proxmox 7 with Ceph Octopus (Proxmox build, 15.2.17-pve1). Every
> time the problem has happened, it has happened

[ceph-users] Re: Problems with autoscaler (overlapping roots) after changing the pool class

2023-01-25 Thread Massimo Sgaravatto
I tried the following on a small testbed first:

ceph osd erasure-code-profile set profile-4-2-hdd k=4 m=2 crush-failure-domain=host crush-device-class=hdd
ceph osd crush rule create-erasure ecrule-4-2-hdd profile-4-2-hdd
ceph osd pool set ecpool-4-2 crush_rule ecrule-4-2-hdd

and indeed after
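A hedged way to verify the result (names as in the commands above): device-class rules create "shadow" roots, which is what trips the autoscaler's overlapping-roots check:

  # Shadow roots like default~hdd appear alongside the real root
  ceph osd crush tree --show-shadow
  # Confirms each pool now maps to a single (shadow) root
  ceph osd pool autoscale-status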

[ceph-users] OSDs will not start

2023-01-25 Thread Geoffrey Rhodes
Good day all, I have an issue with a few OSDs (in two different nodes) that attempt to start but fail / crash quite quickly. They are all LVM disks. I've tried upgrading software and running health checks on the hardware (nodes and disks), and there don't seem to be any issues there. Recently I've had a few
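A hedged triage sketch for LVM-backed OSDs that die right after start (the OSD id 7 is a placeholder; unit names differ on containerized deployments):

  # Confirm ceph-volume still sees the LVs and their OSD tags
  ceph-volume lvm list
  # Read the crash from the journal on the affected node
  journalctl -u ceph-osd@7 --no-pager -n 200
  # Any crash dumps the mgr crash module has collected
  ceph crash ls-new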

[ceph-users] Re: Status of Quincy 17.2.5 ?

2023-01-25 Thread Christian Rohmann
Hey everyone,

On 20/10/2022 10:12, Christian Rohmann wrote:
> 1) May I bring up again my remarks about the timing:
>
> On 19/10/2022 11:46, Christian Rohmann wrote:
>> I believe the upload of a new release to the repo prior to the announcement happens quite regularly - it might just be due to the

[ceph-users] Image corrupt after restoring snapshot via Proxmox

2023-01-25 Thread Roel van Meer
Hi, We have had a situation three times where rbd images seem to be corrupt after restoring a snapshot, and I'm looking for advice on how to investigate this. We're running Proxmox 7 with Ceph Octopus (Proxmox build, 15.2.17-pve1). Every time the problem has happened, it has happened after
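One hedged way to investigate (pool, image, and snapshot names are placeholders): checksum the snapshot and the restored image separately, so any corruption can be pinned to the restore step rather than the snapshot itself:

  # Checksum of the snapshot contents
  rbd export pool/vm-disk@snap1 - | sha256sum
  # Checksum of the image after the restore; a mismatch points at the rollback path
  rbd export pool/vm-disk - | sha256sum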