[ceph-users] Re: 16.2.11 branch

2022-12-15 Thread Christian Rohmann

On 15/12/2022 10:31, Christian Rohmann wrote:


May I kindly ask for an update on how things are progressing? Mostly I
am interested in the (persisting) implications for testing new point
releases (e.g. 16.2.11) with more and more bugfixes in them.


I guess I just had not looked at the right ML; it's already being worked on:
https://lists.ceph.io/hyperkitty/list/d...@ceph.io/thread/CQPQJXD6OVTZUH43I4U3GGOP2PKYOREJ/




Sorry for the nagging,


Christian

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: 16.2.11 branch

2022-12-15 Thread Christian Rohmann

Hey Laura, Greg, all,

On 31/10/2022 17:15, Gregory Farnum wrote:

If you don't mind me asking Laura, have those issues regarding the
testing lab been resolved yet?


There are currently a lot of folks working to fix the testing lab issues.
Essentially, disk corruption affected our ability to reach quay.ceph.io.
We've made progress this morning, but we are still working to understand
the root cause of the corruption. We expect to re-deploy affected services
soon so we can resume testing for v16.2.11.

We got a note about this today, so I wanted to clarify:

For Reasons, the sepia lab we run teuthology in currently uses a Red
Hat Enterprise Virtualization stack — meaning, mostly KVM with a lot
of fancy orchestration all packaged up, backed by Gluster. (Yes,
really — a full Ceph integration was never built and at one point this
was deemed the most straightforward solution compared to running
all-up OpenStack backed by Ceph, which would have been the available
alternative.) The disk images stored in Gluster started reporting
corruption last week (though Gluster was claiming to be healthy), and
with David's departure and his backup on vacation it took a while for
the remaining team members to figure out what was going on and
identify strategies to resolve or work around it.

The relevant people have figured out a lot more of what was going on,
and Adam (David's backup) is back now so we're expecting things to
resolve more quickly at this point. And indeed the team's looking at
other options for providing this infrastructure going forward. 
-Greg



May I kindly ask for an update on how things are progressing? Mostly I
am interested in the (persisting) implications for testing new point
releases (e.g. 16.2.11) with more and more bugfixes in them.



Thanks a bunch!


Christian
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: 16.2.11 branch

2022-10-31 Thread Gregory Farnum
On Fri, Oct 28, 2022 at 8:51 AM Laura Flores  wrote:
>
> Hi Christian,
>
> > There also is https://tracker.ceph.com/versions/656 which seems to be
> > tracking
> > the open issues tagged for this particular point release.
> >
>
> Yes, thank you for providing the link.
>
> > If you don't mind me asking Laura, have those issues regarding the
> > testing lab been resolved yet?
> >
>
> There are currently a lot of folks working to fix the testing lab issues.
> Essentially, disk corruption affected our ability to reach quay.ceph.io.
> We've made progress this morning, but we are still working to understand
> the root cause of the corruption. We expect to re-deploy affected services
> soon so we can resume testing for v16.2.11.

We got a note about this today, so I wanted to clarify:

For Reasons, the sepia lab we run teuthology in currently uses a Red
Hat Enterprise Virtualization stack — meaning, mostly KVM with a lot
of fancy orchestration all packaged up, backed by Gluster. (Yes,
really — a full Ceph integration was never built and at one point this
was deemed the most straightforward solution compared to running
all-up OpenStack backed by Ceph, which would have been the available
alternative.) The disk images stored in Gluster started reporting
corruption last week (though Gluster was claiming to be healthy), and
with David's departure and his backup on vacation it took a while for
the remaining team members to figure out what was going on and
identify strategies to resolve or work around it.

The relevant people have figured out a lot more of what was going on,
and Adam (David's backup) is back now so we're expecting things to
resolve more quickly at this point. And indeed the team's looking at
other options for providing this infrastructure going forward. :)
-Greg

>
> You can follow updates on the two Tracker issues below:
>
>1. https://tracker.ceph.com/issues/57914
>2. https://tracker.ceph.com/issues/57935
>
>
> > There are quite a few bugfixes in the pending release 16.2.11 which we
> > are waiting for. TBH I was about
> > to ask if it would not be sensible to do an intermediate release and not
> > let it grow bigger and
> > bigger (with even more changes / fixes) going out at once.
> >
>
> Fixes for v16.2.11 are pretty much paused at this point; the bottleneck
> lies in getting some outstanding patches tested before they are backported.
> Whether we stop now or continue to introduce more patches, the timeframe
> for getting things tested remains the same.
>
> I hope this clears up some of the questions.
>
> Thanks,
> Laura Flores
>
>
> On Fri, Oct 28, 2022 at 9:41 AM Christian Rohmann <
> christian.rohm...@inovex.de> wrote:
>
> > On 28/10/2022 00:25, Laura Flores wrote:
> > > Hi Oleksiy,
> > >
> > > The Pacific RC has not been declared yet since there have been problems
> > > in our upstream testing lab. There is no ETA yet for v16.2.11 for that
> > > reason, but the full diff of all the patches that were included will be
> > > published to ceph.io when v16.2.11 is released. There will also be a diff
> > > published in the documentation on this page:
> > > https://docs.ceph.com/en/latest/releases/pacific/
> > >
> > > In the meantime, here is a link to the diff in commits between v16.2.10
> > > and the Pacific branch:
> > > https://github.com/ceph/ceph/compare/v16.2.10...pacific
> >
> > There also is https://tracker.ceph.com/versions/656 which seems to be
> > tracking
> > the open issues tagged for this particular point release.
> >
> >
> > If you don't mind me asking Laura, have those issues regarding the
> > testing lab been resolved yet?
> >
> > There are quite a few bugfixes in the pending release 16.2.11 which we
> > are waiting for. TBH I was about
> > to ask if it would not be sensible to do an intermediate release and not
> > let it grow bigger and
> > bigger (with even more changes / fixes)  going out at once.
> >
> >
> >
> > Regards
> >
> >
> > Christian
> >
> >
>
> --
>
> Laura Flores
>
> She/Her/Hers
>
> Software Engineer, Ceph Storage
>
> Red Hat Inc. 
>
> Chicago, IL
>
> lflo...@redhat.com
> M: +17087388804
> 
> 
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
>
>

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: 16.2.11 branch

2022-10-28 Thread Oleksiy Stashok
Thank you all! It's exactly what I needed.

On Fri, Oct 28, 2022 at 8:51 AM Laura Flores  wrote:

> Hi Christian,
>
> > There also is https://tracker.ceph.com/versions/656 which seems to be
> > tracking
> > the open issues tagged for this particular point release.
> >
>
> Yes, thank you for providing the link.
>
> > If you don't mind me asking Laura, have those issues regarding the
> > testing lab been resolved yet?
> >
>
> There are currently a lot of folks working to fix the testing lab issues.
> Essentially, disk corruption affected our ability to reach quay.ceph.io.
> We've made progress this morning, but we are still working to understand
> the root cause of the corruption. We expect to re-deploy affected services
> soon so we can resume testing for v16.2.11.
>
> You can follow updates on the two Tracker issues below:
>
>1. https://tracker.ceph.com/issues/57914
>2. https://tracker.ceph.com/issues/57935
>
>
> > There are quite a few bugfixes in the pending release 16.2.11 which we
> > are waiting for. TBH I was about
> > to ask if it would not be sensible to do an intermediate release and not
> > let it grow bigger and
> > bigger (with even more changes / fixes)  going out at once.
> >
>
> Fixes for v16.2.11 are pretty much paused at this point; the bottleneck
> lies in getting some outstanding patches tested before they are backported.
> Whether we stop now or continue to introduce more patches, the timeframe
> for getting things tested remains the same.
>
> I hope this clears up some of the questions.
>
> Thanks,
> Laura Flores
>
>
> On Fri, Oct 28, 2022 at 9:41 AM Christian Rohmann <
> christian.rohm...@inovex.de> wrote:
>
> > On 28/10/2022 00:25, Laura Flores wrote:
> > > Hi Oleksiy,
> > >
> > > The Pacific RC has not been declared yet since there have been problems
> > > in our upstream testing lab. There is no ETA yet for v16.2.11 for that
> > > reason, but the full diff of all the patches that were included will be
> > > published to ceph.io when v16.2.11 is released. There will also be a diff
> > > published in the documentation on this page:
> > > https://docs.ceph.com/en/latest/releases/pacific/
> > >
> > > In the meantime, here is a link to the diff in commits between v16.2.10
> > > and the Pacific branch:
> > > https://github.com/ceph/ceph/compare/v16.2.10...pacific
> >
> > There also is https://tracker.ceph.com/versions/656 which seems to be
> > tracking
> > the open issues tagged for this particular point release.
> >
> >
> > If you don't mind me asking Laura, have those issues regarding the
> > testing lab been resolved yet?
> >
> > There are quite a few bugfixes in the pending release 16.2.11 which we
> > are waiting for. TBH I was about
> > to ask if it would not be sensible to do an intermediate release and not
> > let it grow bigger and
> > bigger (with even more changes / fixes)  going out at once.
> >
> >
> >
> > Regards
> >
> >
> > Christian
> >
> >
>
> --
>
> Laura Flores
>
> She/Her/Hers
>
> Software Engineer, Ceph Storage
>
> Red Hat Inc. 
>
> Chicago, IL
>
> lflo...@redhat.com
> M: +17087388804
> 
> 
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: 16.2.11 branch

2022-10-28 Thread Laura Flores
Hi Christian,

> There also is https://tracker.ceph.com/versions/656 which seems to be
> tracking
> the open issues tagged for this particular point release.
>

Yes, thank you for providing the link.

> If you don't mind me asking Laura, have those issues regarding the
> testing lab been resolved yet?
>

There are currently a lot of folks working to fix the testing lab issues.
Essentially, disk corruption affected our ability to reach quay.ceph.io.
We've made progress this morning, but we are still working to understand
the root cause of the corruption. We expect to re-deploy affected services
soon so we can resume testing for v16.2.11.

You can follow updates on the two Tracker issues below:

   1. https://tracker.ceph.com/issues/57914
   2. https://tracker.ceph.com/issues/57935


> There are quite a few bugfixes in the pending release 16.2.11 which we
> are waiting for. TBH I was about
> to ask if it would not be sensible to do an intermediate release and not
> let it grow bigger and
> bigger (with even more changes / fixes)  going out at once.
>

Fixes for v16.2.11 are pretty much paused at this point; the bottleneck
lies in getting some outstanding patches tested before they are backported.
Whether we stop now or continue to introduce more patches, the timeframe
for getting things tested remains the same.

I hope this clears up some of the questions.

Thanks,
Laura Flores


On Fri, Oct 28, 2022 at 9:41 AM Christian Rohmann <
christian.rohm...@inovex.de> wrote:

> On 28/10/2022 00:25, Laura Flores wrote:
> > Hi Oleksiy,
> >
> > The Pacific RC has not been declared yet since there have been problems
> > in our upstream testing lab. There is no ETA yet for v16.2.11 for that
> > reason, but the full diff of all the patches that were included will be
> > published to ceph.io when v16.2.11 is released. There will also be a diff
> > published in the documentation on this page:
> > https://docs.ceph.com/en/latest/releases/pacific/
> >
> > In the meantime, here is a link to the diff in commits between v16.2.10
> > and the Pacific branch:
> > https://github.com/ceph/ceph/compare/v16.2.10...pacific
>
> There also is https://tracker.ceph.com/versions/656 which seems to be
> tracking
> the open issues tagged for this particular point release.
>
>
> If you don't mind me asking Laura, have those issues regarding the
> testing lab been resolved yet?
>
> There are quite a few bugfixes in the pending release 16.2.11 which we
> are waiting for. TBH I was about
> to ask if it would not be sensible to do an intermediate release and not
> let it grow bigger and
> bigger (with even more changes / fixes)  going out at once.
>
>
>
> Regards
>
>
> Christian
>
>

-- 

Laura Flores

She/Her/Hers

Software Engineer, Ceph Storage

Red Hat Inc. 

Chicago, IL

lflo...@redhat.com
M: +17087388804


___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: 16.2.11 branch

2022-10-28 Thread Christian Rohmann

On 28/10/2022 00:25, Laura Flores wrote:

Hi Oleksiy,

The Pacific RC has not been declared yet since there have been problems in
our upstream testing lab. There is no ETA yet for v16.2.11 for that reason,
but the full diff of all the patches that were included will be published
to ceph.io when v16.2.11 is released. There will also be a diff published
in the documentation on this page:
https://docs.ceph.com/en/latest/releases/pacific/

In the meantime, here is a link to the diff in commits between v16.2.10 and
the Pacific branch: https://github.com/ceph/ceph/compare/v16.2.10...pacific


There also is https://tracker.ceph.com/versions/656 which seems to be tracking
the open issues tagged for this particular point release.


If you don't mind me asking Laura, have those issues regarding the 
testing lab been resolved yet?


There are quite a few bugfixes in the pending release 16.2.11 which we
are waiting for. TBH I was about to ask whether it would not be sensible
to do an intermediate release rather than letting it grow bigger and
bigger (with even more changes / fixes) going out at once.



Regards


Christian

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: 16.2.11 branch

2022-10-27 Thread Laura Flores
Hi Oleksiy,

The Pacific RC has not been declared yet since there have been problems in
our upstream testing lab. There is no ETA yet for v16.2.11 for that reason,
but the full diff of all the patches that were included will be published
to ceph.io when v16.2.11 is released. There will also be a diff published
in the documentation on this page:
https://docs.ceph.com/en/latest/releases/pacific/

In the meantime, here is a link to the diff in commits between v16.2.10 and
the Pacific branch: https://github.com/ceph/ceph/compare/v16.2.10...pacific
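
If you would rather pull that list of commits from a script instead of the
browser, here is a minimal sketch against GitHub's compare API for the same
ref range (just an illustration, assuming Python 3 with the requests package
and an unauthenticated, rate-limited API call):

# Sketch: list the commits between v16.2.10 and the pacific branch using
# GitHub's compare endpoint -- the same range as the compare link above.
# Note the endpoint returns at most 250 commits per call, so treat this as
# a quick overview rather than the complete diff.
import requests

url = "https://api.github.com/repos/ceph/ceph/compare/v16.2.10...pacific"
resp = requests.get(url, headers={"Accept": "application/vnd.github+json"})
resp.raise_for_status()
data = resp.json()

print(f"{data['total_commits']} commits between v16.2.10 and pacific")
for commit in data["commits"]:
    sha = commit["sha"][:10]
    subject = commit["commit"]["message"].splitlines()[0]
    print(f"  {sha} {subject}")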

- Laura

On Thu, Oct 27, 2022 at 12:04 PM Oleksiy Stashok 
wrote:

> Hey guys,
>
> Could you please point me to the branch that will be used for the upcoming
> 16.2.11 release? I'd like to see the diff w/ 16.2.10 to better understand
> what was fixed.
>
> Thank you.
> Oleksiy
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
>
>

-- 

Laura Flores

She/Her/Hers

Software Engineer, Ceph Storage

Red Hat Inc. 

Chicago, IL

lflo...@redhat.com
M: +17087388804


___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io