NGCC 2018?

2018-07-23 Thread Ben Bromhead
The year has gotten away from us a little bit, but now is as good a time as
any to put out a general call for interest in an NGCC this year.

Last year Gary and Eric did an awesome job organizing it in San Antonio.
This year it might be a good idea to hold it in another city.

We at Instaclustr are happy to sponsor/organize/run it, but ultimately this
is a community event, and we only want to do it if there is a strong desire
from the community to attend and it meets the community's wider needs.

Here are a few thoughts we have had in no particular order:

   - I was thinking it might be worth doing it in SF/Bay Area around the
   dates of distributed data day (14th of September) as I know a number of
   folks will be in town for it.
   - Typically NGCC has focused on being a single day, single track
   conference with scheduled sessions and an unconference set of ad-hoc talks
   at the end. It may make sense to change this up given the pending freeze
   (maybe make this more like a commit/review fest)? Or keep it in the same
   format but focus on the 4.0 work at hand.
   - Any community members who want to get involved again in the more
   organizational side of it (Gary, Eric)?
   - Any other sponsors (doesn't have to be monetary, can be space,
   resource etc) who want to get involved?

If folks are generally happy with the overall approach, we'll post details as
soon as possible (given it's July already)!

Ben


-- 
Ben Bromhead
CTO | Instaclustr 
+1 650 284 9692
Reliability at Scale
Cassandra, Spark, Elasticsearch on AWS, Azure, GCP and Softlayer


Re: reroll the builds?

2018-07-23 Thread dinesh.jo...@yahoo.com.INVALID
I can help out with the triage / rerunning dtests if needed.
Dinesh 

On Monday, July 23, 2018, 10:22:18 AM PDT, Jason Brown 
 wrote:  
 
 I spoke with some people over here, and I'm going to spend a day doing a
quick triage of the failing dtests. There are some fixes for data loss bugs
that are critical to get out in these builds, so I'll ensure the current
failures are within an acceptable level of flakiness in order to unblock
those fixes.

Will have an update shortly ...

-Jason

On Mon, Jul 23, 2018 at 9:18 AM, Jason Brown  wrote:

> Hi all,
>
> First, thanks Joey for running the tests. Your pass/fail counts are
> basically in line with what I've seen for the last several months. (I
> don't have an aggregated list anywhere, just observations from recent runs).
>
> Second, it's beyond me why there's such inertia to actually cutting a
> release. We're getting up to almost *six months* since the last release.
> Are there any grand objections at this point?
>
> Thanks,
>
> -Jason
>
>
> On Tue, Jul 17, 2018 at 4:01 PM, Joseph Lynch 
> wrote:
>
>> We ran the tests against 3.0, 2.2 and 3.11 using circleci and there are
>> various failing dtests but all three have green unit tests.
>>
>> 3.11.3 tentative (31d5d87, test branch
>> > cassandra_3.11_temp_testing>,
>> unit tests 
>> pass, 5
>>  and 6
>> > tests/containers/8>
>> dtest failures)
>> 3.0.17 tentative (d52c7b8, test branch
>> ,
>> unit
>> tests  pass, 14
>>  and 15
>>  dtest failures)
>> 2.2.13 tentative (3482370, test branch
>> > dra/tree/2.2-testing>,
>> unit tests 
>> pass, 9
>>  and 10
>> > tests/containers/8>
>> dtest failures)
>>
>> It looks like many (~6) of the failures in 3.0.x are related to
>> snapshot_test.TestArchiveCommitlog. I'm not sure if this is abnormal.
>>
>> I don't see a good historical record to know if these are just flakes, but
>> if we only want to go on green builds perhaps we can either disable the
>> flaky tests or fix them up? If someone feels strongly we should fix
>> particular tests up please link a jira and I can take a whack at some of
>> them.
>>
>> -Joey
>>
>> On Tue, Jul 17, 2018 at 9:35 AM Michael Shuler 
>> wrote:
>>
>> > On 07/16/2018 11:27 PM, Jason Brown wrote:
>> > > Hey all,
>> > >
>> > > The recent builds were -1'd, but it appears the issues have been
>> resolved
>> > > (2.2.13 with CASSANDRA-14423, and 3.0.17 / 3.11.3 reverting
>> > > CASSANDRA-14252). Can we go ahead and reroll now?
>> >
>> > Could someone run through the tests on 2.2, 3.0, 3.11 branches and link
>> > them?  Thanks!
>> >
>> > Michael
>> >
>> > -
>> > To unsubscribe, e-mail: dev-unsubscr...@cassandra.apache.org
>> > For additional commands, e-mail: dev-h...@cassandra.apache.org
>> >
>> >
>>
>
>
  

Re: reroll the builds?

2018-07-23 Thread Jason Brown
I spoke with some people over here, and I'm going to spend a day doing a
quick triage of the failing dtests. There are some fixes for data loss bugs
that are critical to get out in these builds, so I'll ensure the current
failures are within an acceptable level of flakiness in order to unblock
those fixes.

Will have an update shortly ...

-Jason

On Mon, Jul 23, 2018 at 9:18 AM, Jason Brown  wrote:

> Hi all,
>
> First, thanks Joey for running the tests. Your pass/fail counts are
> basically in line with what I've seen for the last several months. (I
> don't have an aggregated list anywhere, just observations from recent runs).
>
> Second, it's beyond me why there's such inertia to actually cutting a
> release. We're getting up to almost *six months* since the last release.
> Are there any grand objections at this point?
>
> Thanks,
>
> -Jason
>
>
> On Tue, Jul 17, 2018 at 4:01 PM, Joseph Lynch 
> wrote:
>
>> We ran the tests against 3.0, 2.2 and 3.11 using circleci and there are
>> various failing dtests but all three have green unit tests.
>>
>> 3.11.3 tentative (31d5d87, test branch
>> > cassandra_3.11_temp_testing>,
>> unit tests 
>> pass, 5
>>  and 6
>> > tests/containers/8>
>> dtest failures)
>> 3.0.17 tentative (d52c7b8, test branch
>> ,
>> unit
>> tests  pass, 14
>>  and 15
>>  dtest failures)
>> 2.2.13 tentative (3482370, test branch
>> > dra/tree/2.2-testing>,
>> unit tests 
>> pass, 9
>>  and 10
>> > tests/containers/8>
>> dtest failures)
>>
>> It looks like many (~6) of the failures in 3.0.x are related to
>> snapshot_test.TestArchiveCommitlog. I'm not sure if this is abnormal.
>>
>> I don't see a good historical record to know if these are just flakes, but
>> if we only want to go on green builds perhaps we can either disable the
>> flaky tests or fix them up? If someone feels strongly we should fix
>> particular tests up please link a jira and I can take a whack at some of
>> them.
>>
>> -Joey
>>
>> On Tue, Jul 17, 2018 at 9:35 AM Michael Shuler 
>> wrote:
>>
>> > On 07/16/2018 11:27 PM, Jason Brown wrote:
>> > > Hey all,
>> > >
>> > > The recent builds were -1'd, but it appears the issues have been
>> resolved
>> > > (2.2.13 with CASSANDRA-14423, and 3.0.17 / 3.11.3 reverting
>> > > CASSANDRA-14252). Can we go ahead and reroll now?
>> >
>> > Could someone run through the tests on 2.2, 3.0, 3.11 branches and link
>> > them?  Thanks!
>> >
>> > Michael
>> >
>> > -
>> > To unsubscribe, e-mail: dev-unsubscr...@cassandra.apache.org
>> > For additional commands, e-mail: dev-h...@cassandra.apache.org
>> >
>> >
>>
>
>


Re: reroll the builds?

2018-07-23 Thread Jason Brown
Hi all,

First, thanks Joey for running the tests. Your pass/fail counts are
basically in line with what I've seen for the last several months. (I
don't have an aggregated list anywhere, just observations from recent runs).

Second, it's beyond me why there's such inertia to actually cutting a
release. We're getting up to almost *six months* since the last release.
Are there any grand objections at this point?

Thanks,

-Jason


On Tue, Jul 17, 2018 at 4:01 PM, Joseph Lynch  wrote:

> We ran the tests against 3.0, 2.2 and 3.11 using circleci and there are
> various failing dtests but all three have green unit tests.
>
> 3.11.3 tentative (31d5d87, test branch
>  tree/cassandra_3.11_temp_testing>,
> unit tests  pass,
> 5
>  and 6
>  >
> dtest failures)
> 3.0.17 tentative (d52c7b8, test branch
> ,
> unit
> tests  pass, 14
>  and 15
>  dtest failures)
> 2.2.13 tentative (3482370, test branch
>  cassandra/tree/2.2-testing>,
> unit tests 
> pass, 9
>  and 10
>  22#tests/containers/8>
> dtest failures)
>
> It looks like many (~6) of the failures in 3.0.x are related to
> snapshot_test.TestArchiveCommitlog. I'm not sure if this is abnormal.
>
> I don't see a good historical record to know if these are just flakes, but
> if we only want to go on green builds perhaps we can either disable the
> flaky tests or fix them up? If someone feels strongly we should fix
> particular tests up please link a jira and I can take a whack at some of
> them.
>
> -Joey
>
> On Tue, Jul 17, 2018 at 9:35 AM Michael Shuler 
> wrote:
>
> > On 07/16/2018 11:27 PM, Jason Brown wrote:
> > > Hey all,
> > >
> > > The recent builds were -1'd, but it appears the issues have been
> resolved
> > > (2.2.13 with CASSANDRA-14423, and 3.0.17 / 3.11.3 reverting
> > > CASSANDRA-14252). Can we go ahead and reroll now?
> >
> > Could someone run through the tests on 2.2, 3.0, 3.11 branches and link
> > them?  Thanks!
> >
> > Michael
> >
> > -
> > To unsubscribe, e-mail: dev-unsubscr...@cassandra.apache.org
> > For additional commands, e-mail: dev-h...@cassandra.apache.org
> >
> >
>