[ceph-users] Re: Another Pacific point release?
On Mon, Jul 17, 2023 at 6:26 PM David Orman wrote:
>
> I'm hoping to see at least one more, if not more than that, but I have no
> crystal ball. I definitely support this idea, and strongly suggest it's given
> some thought. There have been a lot of delays/missed releases due to all of
> the lab issues, and it's significantly impacted the release cadence for quite
> some time.

Right, we are definitely going to see 16.2.14. Whether it ends up being the
last one will be communicated in the release notes.

Thanks,

Ilya

>
> We've got a fair number of patches we intend for backport to Pacific that
> address core functionality issues impacting our customer workloads, but
> haven't been able to get released due to all of the infrastructure problems.
>
> David
>
> On Mon, Jul 17, 2023, at 05:27, Konstantin Shalygin wrote:
> > Hi,
> >
> >> On 17 Jul 2023, at 12:53, Ponnuvel Palaniyappan wrote:
> >>
> >> The typical EOL date (2023-06-01) has already passed for Pacific. Just
> >> wondering if there's going to be another Pacific point release (16.2.14) in
> >> the pipeline.
> >
> > Good point! At least, to make it possible to upgrade RBD clusters from
> > Nautilus to Pacific, it seems this release should get this backport [1]
> >
> > Also, it would be good to see an update of the information on distributions
> > (ABC QA grades) [2]
> >
> > Thanks,
> >
> > [1] https://tracker.ceph.com/issues/59538
> > [2] https://docs.ceph.com/en/quincy/start/os-recommendations/#platforms
> >
> > k
[ceph-users] Re: Another Pacific point release?
I'm hoping to see at least one more, if not more than that, but I have no
crystal ball. I definitely support this idea, and strongly suggest it's given
some thought. There have been a lot of delays/missed releases due to all of
the lab issues, and it's significantly impacted the release cadence for quite
some time.

We've got a fair number of patches we intend for backport to Pacific that
address core functionality issues impacting our customer workloads, but
haven't been able to get released due to all of the infrastructure problems.

David

On Mon, Jul 17, 2023, at 05:27, Konstantin Shalygin wrote:
> Hi,
>
>> On 17 Jul 2023, at 12:53, Ponnuvel Palaniyappan wrote:
>>
>> The typical EOL date (2023-06-01) has already passed for Pacific. Just
>> wondering if there's going to be another Pacific point release (16.2.14) in
>> the pipeline.
>
> Good point! At least, to make it possible to upgrade RBD clusters from
> Nautilus to Pacific, it seems this release should get this backport [1]
>
> Also, it would be good to see an update of the information on distributions
> (ABC QA grades) [2]
>
> Thanks,
>
> [1] https://tracker.ceph.com/issues/59538
> [2] https://docs.ceph.com/en/quincy/start/os-recommendations/#platforms
>
> k
[ceph-users] Re: Another Pacific point release?
Hi,

> On 17 Jul 2023, at 12:53, Ponnuvel Palaniyappan wrote:
>
> The typical EOL date (2023-06-01) has already passed for Pacific. Just
> wondering if there's going to be another Pacific point release (16.2.14) in
> the pipeline.

Good point! At least, to make it possible to upgrade RBD clusters from
Nautilus to Pacific, it seems this release should get this backport [1]

Also, it would be good to see an update of the information on distributions
(ABC QA grades) [2]

Thanks,

[1] https://tracker.ceph.com/issues/59538
[2] https://docs.ceph.com/en/quincy/start/os-recommendations/#platforms

k
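
For context on the Nautilus-to-Pacific upgrade path mentioned above: before
starting such an upgrade it is worth confirming that every running daemon
already reports a single release. Below is only a minimal illustrative sketch,
not part of any message in this thread; it assumes a host with the ceph CLI
and admin credentials, and relies on the JSON summary that "ceph versions"
prints, including its "overall" section.

    # Minimal pre-upgrade sanity check (sketch): verify that every running
    # daemon reports the same Ceph release before starting an upgrade.
    # Assumes the `ceph` CLI is installed and can reach the cluster with
    # admin credentials; `ceph versions` prints a JSON object whose
    # "overall" section maps version strings to daemon counts.
    import json
    import subprocess

    def running_versions() -> dict:
        out = subprocess.run(
            ["ceph", "versions"], check=True, capture_output=True, text=True
        ).stdout
        return json.loads(out)

    if __name__ == "__main__":
        overall = running_versions().get("overall", {})
        if len(overall) == 1:
            version, count = next(iter(overall.items()))
            print(f"all {count} daemons agree: {version}")
        else:
            print("mixed versions detected, finish the previous upgrade first:")
            for version, count in overall.items():
                print(f"  {count} x {version}")

On a healthy cluster the "overall" section should contain a single entry;
anything else usually means an earlier upgrade was never completed.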