Re: What to do about issues that track flaky tests?

2022-09-15 Thread Brian Hulette via dev
I agree with Austin on this one: it makes sense to be realistic, but I'm
concerned about blanket-reducing the priority on all flakes. Two classes of
issues that could certainly be dropped to P2:
- Issues tracking flakes that have not been sickbayed yet (e.g.
https://github.com/apache/beam/issues/21266). These tests are still
providing signal (we should notice if one goes perma-red), and clearly the
flakes aren't so painful that anyone has felt the need to sickbay them.
- A sickbayed test, iff a breakage in the functionality it's testing would
itself be P2. This is admittedly difficult to identify (see the sketch of
sickbaying just below, for anyone unfamiliar with the term).
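To spell out what I mean by sickbaying (this is only an illustrative
sketch, not any particular test): we typically disable the flaky test with
an @Ignore pointing back at the tracking issue, so the suite stays green
but the coverage is gone. Roughly:

    import org.junit.Ignore;
    import org.junit.Test;

    public class SomeFlakyTest {  // hypothetical test class
      // Sickbayed: disabled until the flake tracked below is fixed.
      @Ignore("https://github.com/apache/beam/issues/<tracking issue>")
      @Test
      public void testSomethingFlaky() {
        // test body unchanged; it simply no longer runs or provides signal
      }
    }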

It looks like we don't have a way to label sickbayed tests (or the
inverse, currently-failing ones); maybe we should add one?
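Purely for illustration: if we added such a label (say "sickbayed" - the
name is made up here), then combining it with the "flake" label Kenn
mentions (assuming that label is applied consistently) would let a GitHub
issue search separate the two classes above, along the lines of:

    Still providing signal (not yet sickbayed):
      repo:apache/beam is:issue is:open label:flake -label:sickbayed

    No signal any more (sickbayed, the candidates for the P2 discussion):
      repo:apache/beam is:issue is:open label:flake label:sickbayed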

Another thing to note: this email reports _unassigned_ P1 issues, so
another way to remove issues from the search results would be to ensure
each flake has an owner (somehow). Maybe that's just shifting the problem,
but it could avoid the tragedy of the commons. To Manu's point, maybe those
new owners will happily discover their flake is no longer a problem.

Brian

On Wed, Sep 14, 2022 at 5:58 PM Manu Zhang  wrote:

> Agreed. I also mentioned in a previous email that some issues have been
> open for a long time (since before the migration to GitHub), and it's
> possible that those tests pass consistently now.
> We could double-check and close them, since reopening is just one click.
>
> Manu

Re: What to do about issues that track flaky tests?

2022-09-14 Thread Manu Zhang
Agreed. I also mentioned in a previous email that some issues have been
open for a long time (since before the migration to GitHub), and it's
possible that those tests pass consistently now.
We could double-check and close them, since reopening is just one click.

Manu

On Thu, Sep 15, 2022 at 6:58 AM Austin Bennett 
wrote:

> +1 to being realistic -- proper labels are worthwhile. Though, some flaky
> tests probably should be P1, and just because one isn't addressed in a
> timely manner doesn't mean it isn't a P1 - though, it does mean it wasn't
> addressed.

Re: What to do about issues that track flaky tests?

2022-09-14 Thread Austin Bennett
+1 to being realistic -- proper labels are worthwhile. Though, some flaky
tests probably should be P1, and just because one isn't addressed in a
timely manner doesn't mean it isn't a P1 - though, it does mean it wasn't
addressed.



On Wed, Sep 14, 2022 at 1:19 PM Kenneth Knowles  wrote:

> I would like to make this alert email actionable.
>
> I went through most of these issues. About half are P1 "flake" issues. I
> don't think magically expecting them to be deflaked is helpful. So I have
> a couple of ideas:
>
> 1. Exclude "flake" P1s from this email. This is what we used to do. But
> then... are they really P1s?
> 2. Make "flake" bugs P2 if they are not currently impacting our test
> signal. But then... we may have a gap in test coverage that could cause
> severe problems. Then again, something that has been P1 for a long time
> is not *really* a P1, so this is just being realistic.
>
> What do you all think?
>
> Kenn

What to do about issues that track flaky tests?

2022-09-14 Thread Kenneth Knowles
I would like to make this alert email actionable.

I went through most of these issues. About half are P1 "flake" issues. I
don't think magically expecting them to be deflaked is helpful. So I have a
couple of ideas:

1. Exclude "flake" P1s from this email (a sketch of what that would mean
for the report's query is below). This is what we used to do. But then...
are they really P1s?
2. Make "flake" bugs P2 if they are not currently impacting our test
signal. But then... we may have a gap in test coverage that could cause
severe problems. Then again, something that has been P1 for a long time is
not *really* a P1, so this is just being realistic.
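To make option 1 concrete: assuming the daily summary is driven by a
GitHub issue search over our priority labels, it would amount to adding
one exclusion to that query, roughly:

    current:  repo:apache/beam is:issue is:open label:P1 no:assignee
    proposed: repo:apache/beam is:issue is:open label:P1 no:assignee -label:flake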

What do you all think?

Kenn

On Wed, Sep 14, 2022 at 3:03 AM  wrote:

> This is your daily summary of Beam's current high priority issues that may
> need attention.
>
> See https://beam.apache.org/contribute/issue-priorities for the
> meaning and expectations around issue priorities.
>
> Unassigned P1 Issues:
>
> https://github.com/apache/beam/issues/23227 [Bug]: Python SDK
> installation cannot generate proto with protobuf 3.20.2
> https://github.com/apache/beam/issues/23179 [Bug]: Parquet size exploded
> for no apparent reason
> https://github.com/apache/beam/issues/22913 [Bug]:
> beam_PostCommit_Java_ValidatesRunner_Flink is flakey
> https://github.com/apache/beam/issues/22303 [Task]: Add tests to Kafka
> SDF and fix known and discovered issues
> https://github.com/apache/beam/issues/22299 [Bug]: JDBCIO Write freeze at
> getConnection() in WriteFn
> https://github.com/apache/beam/issues/21794 Dataflow runner creates a new
> timer whenever the output timestamp is change
> https://github.com/apache/beam/issues/21713 404s in BigQueryIO don't get
> output to Failed Inserts PCollection
> https://github.com/apache/beam/issues/21704
> beam_PostCommit_Java_DataflowV2 failures parent bug
> https://github.com/apache/beam/issues/21701
> beam_PostCommit_Java_DataflowV1 failing with a variety of flakes and errors
> https://github.com/apache/beam/issues/21700
> --dataflowServiceOptions=use_runner_v2 is broken
> https://github.com/apache/beam/issues/21696 Flink Tests failure :
> java.lang.NoClassDefFoundError: Could not initialize class
> org.apache.beam.runners.core.construction.SerializablePipelineOptions
> https://github.com/apache/beam/issues/21695 DataflowPipelineResult does
> not raise exception for unsuccessful states.
> https://github.com/apache/beam/issues/21694 BigQuery Storage API insert
> with writeResult retry and write to error table
> https://github.com/apache/beam/issues/21480 flake:
> FlinkRunnerTest.testEnsureStdoutStdErrIsRestored
> https://github.com/apache/beam/issues/21472 Dataflow streaming tests
> failing new AfterSynchronizedProcessingTime test
> https://github.com/apache/beam/issues/21471 Flakes: Failed to load cache
> entry
> https://github.com/apache/beam/issues/21470 Test flake:
> test_split_half_sdf
> https://github.com/apache/beam/issues/21469 beam_PostCommit_XVR_Flink
> flaky: Connection refused
> https://github.com/apache/beam/issues/21468
> beam_PostCommit_Python_Examples_Dataflow failing
> https://github.com/apache/beam/issues/21467 GBK and CoGBK streaming Java
> load tests failing
> https://github.com/apache/beam/issues/21465 Kafka commit offset drop data
> on failure for runners that have non-checkpointing shuffle
> https://github.com/apache/beam/issues/21463 NPE in Flink Portable
> ValidatesRunner streaming suite
> https://github.com/apache/beam/issues/21462 Flake in
> org.apache.beam.sdk.io.mqtt.MqttIOTest.testReadObject: Address already in
> use
> https://github.com/apache/beam/issues/21271 pubsublite.ReadWriteIT flaky
> in beam_PostCommit_Java_DataflowV2
> https://github.com/apache/beam/issues/21270
> org.apache.beam.sdk.transforms.CombineTest$WindowingTests.testWindowedCombineGloballyAsSingletonView
> flaky on Dataflow Runner V2
> https://github.com/apache/beam/issues/21267 WriteToBigQuery submits a
> duplicate BQ load job if a 503 error code is returned from googleapi
> https://github.com/apache/beam/issues/21266
> org.apache.beam.sdk.transforms.ParDoLifecycleTest.testTeardownCalledAfterExceptionInProcessElementStateful
> is flaky in Java ValidatesRunner Flink suite.
> https://github.com/apache/beam/issues/21262 Python AfterAny, AfterAll do
> not follow spec
> https://github.com/apache/beam/issues/21261
> org.apache.beam.runners.dataflow.worker.fn.logging.BeamFnLoggingServiceTest.testMultipleClientsFailingIsHandledGracefullyByServer
> is flaky
> https://github.com/apache/beam/issues/21260 Python DirectRunner does not
> emit data at GC time
> https://github.com/apache/beam/issues/21257 Either Create or DirectRunner
> fails to produce all elements to the following transform
> https://github.com/apache/beam/issues/21123 Multiple jobs running on
> Flink session cluster reuse the persistent Python environment.
> https://github.com/apache/beam/issues/21121
> apache_beam.examples.streaming_wordcount_it_test.StreamingWordCountIT.test_streaming_wordcount_it
> flakey
> https://github.com/apache/beam/issues/21118
>