Hi,

We upgraded two months ago from 18.2.7 to 19.2.3 without any issues. We use
RBD, RGW and CephFS. The only thing I made sure of was to disable
bluestore_elastic_shared_blobs, as Wannes already mentioned, because we
needed to deploy additional OSDs.
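
For reference, here is roughly what that looked like on our side (a minimal
sketch using the standard `ceph config` commands; adapt it to your own
deployment):

    # Disable elastic shared blobs for OSDs deployed after this point
    ceph config set osd bluestore_elastic_shared_blobs false

    # Verify the value that newly created OSDs will pick up
    ceph config get osd bluestore_elastic_shared_blobs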

Regards,
Eugen

On Thu, Dec 18, 2025 at 09:07, Wannes Smet via ceph-users <
[email protected]> wrote:

> I can't directly answer your question, but I'm currently redeploying all
> my OSDs with `bluestore_elastic_shared_blobs` disabled. I can't speak for
> your situation, but you might want to investigate whether that's a good
> idea before upgrading from 18.2.7 to 19.2.3.
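>
> For what it's worth, a minimal sketch of the redeploy loop (assuming a
> cephadm-managed cluster where the orchestrator recreates each OSD from an
> existing service spec; doing one OSD at a time and waiting for recovery in
> between is the cautious approach):
>
>     # Make sure OSDs created from now on have the option disabled
>     ceph config set osd bluestore_elastic_shared_blobs false
>
>     # Remove and recreate one OSD; --replace keeps the OSD id reserved
>     # for the replacement, --zap wipes the device so it can be redeployed
>     ceph orch osd rm <osd_id> --replace --zap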
>
> AFAIK the issue has so far only been reported on EC pools, not on
> replicated pools.
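>
> If you want to confirm which of your pools are EC vs. replicated, the pool
> listing shows the type per pool, e.g.:
>
>     ceph osd pool ls detail | grep -E 'replicated|erasure'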
>
> Wannes
> ________________________________
> From: Văn Trần Quốc via ceph-users <[email protected]>
> Sent: Thursday, December 18, 2025 08:09
> To: [email protected] <[email protected]>
> Subject: [ceph-users] Upgrade Advice: Ceph Reef (18.2.7) to Squid (19.2.3)
> on Proxmox VMs
>
> Hi Ceph Users,
>
> I am planning a major upgrade for our production cluster from Reef (18.2.7)
> to Squid (19.2.3) and would like to seek advice regarding stability and
> potential risks.
>
> Infrastructure Overview:
>
> + Deployment: Cephadm.
> + Cluster Size: 7 Nodes total.
> + Hardware/Virtualization: Each node is a Virtual Machine hosted on
> Proxmox.
> + OSD Layout: 7 OSDs total (7 OSDs per node).
> + Other Daemons: 5 MONs, 7 MDSs, 3 MGRs. Services include RGW, CephFS, and
> Block Devices.
> + Pool Type: Replicated.
>
> The Concern: We are currently running stably on Reef 18.2.7. However, I
> have been following recent discussions on the mailing list and tracker
> regarding critical failures when upgrading to Squid, specifically
> concerning OSD crashes and data corruption.
> I am particularly worried about the issues reported in these threads, where
> users experienced failures during or after the upgrade:
> + https://www.mail-archive.com/[email protected]/msg30399.html
> + https://www.mail-archive.com/[email protected]/msg31238.html
> + https://tracker.ceph.com/issues/70390
>
> Given our deployment topology and the jump to a new major version (Squid),
> I have a few questions:
> + Stability & Success Stories: Is Ceph Squid 19.2.3 considered safe
> regarding the OSD crash/corruption bugs mentioned in the links above? Has
> anyone in the community successfully completed the upgrade from 18.2.7 to
> 19.2.3 without issues? Confirmation of a clean upgrade path would be very
> reassuring.
> + Upgrade Path: Are there any known regressions or critical "gotchas" when
> moving from 18.2.7 directly to 19.2.3 in a virtualized environment?
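>
> For context, the procedure I intend to follow is the standard cephadm
> orchestrated upgrade, roughly as follows (a sketch; the image tag and
> pre-checks would need to be adapted to our cluster):
>
>     # Pre-flight: the cluster should be healthy before starting
>     ceph -s
>
>     # Kick off the orchestrated upgrade to 19.2.3
>     ceph orch upgrade start --image quay.io/ceph/ceph:v19.2.3
>
>     # Follow progress
>     ceph orch upgrade status
>
>     # If something looks wrong, the upgrade can be paused and resumed
>     ceph orch upgrade pause
>     ceph orch upgrade resume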
>
> Any experiences or warnings from those running Squid in similar
> environments would be greatly appreciated.
>
> Thank you,
> Van Tran
_______________________________________________
ceph-users mailing list -- [email protected]
To unsubscribe send an email to [email protected]
