Re: [VOTE] Release 2.43.0, release candidate #2

2022-11-14 Thread Anand Inguva via dev
+1(non-binding)

Validated the Python wordcount example on the Direct and Dataflow runners. Staging
of the Python dependencies works as expected now.
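
For anyone repeating this check, here is a minimal sketch of the kind of
wordcount validation described above, assuming the release candidate has been
installed from PyPI (e.g. "pip install apache-beam==2.43.0rc2"); the input and
output paths are illustrative:

    # Minimal wordcount-style check on the DirectRunner (the default runner);
    # assumes "pip install apache-beam==2.43.0rc2" and a local input file.
    # For Dataflow, pass the usual GCP pipeline options instead.
    import apache_beam as beam

    with beam.Pipeline() as p:
        (p
         | beam.io.ReadFromText('input.txt')           # illustrative path
         | beam.FlatMap(lambda line: line.split())
         | beam.Map(lambda word: (word, 1))
         | beam.CombinePerKey(sum)
         | beam.MapTuple(lambda word, count: '%s: %d' % (word, count))
         | beam.io.WriteToText('counts'))              # illustrative prefix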

Thanks,
Anand

On Sun, Nov 13, 2022 at 9:52 AM Chamikara Jayalath via dev <
dev@beam.apache.org> wrote:

> Hi everyone,
> Please review and vote on the release candidate #2 for the version 2.43.0,
> as follows:
> [ ] +1, Approve the release
> [ ] -1, Do not approve the release (please provide specific comments)
>
>
> Reviewers are encouraged to test their own use cases with the release
> candidate, and vote +1 if
> no issues are found.
>
> The complete staging area is available for your review, which includes:
> * GitHub Release notes [1],
> * the official Apache source release to be deployed to dist.apache.org
> [2], which is signed with the key with fingerprint
> 40C61FBE1761E5DB652A1A780CCD5EB2A718A56E [3],
> * all artifacts to be deployed to the Maven Central Repository [4],
> * source code tag "v2.43.0-RC2" [5],
> * website pull request listing the release [6], the blog post [6], and
> publishing the API reference manual [7].
> * Java artifacts were built with Gradle 7.5.1 and openjdk version
> 1.8.0_181-google-v7.
> * Python artifacts are deployed along with the source release to the
> dist.apache.org [2] and PyPI[8].
> * Go artifacts and documentation are available at pkg.go.dev [9]
> * Validation sheet with a tab for 2.43.0 release to help with validation
> [10].
> * Docker images published to Docker Hub [11].
>
> The vote will be open for at least 72 hours. It is adopted by majority
> approval, with at least 3 PMC affirmative votes.
>
> For guidelines on how to try the release in your projects, check out our
> blog post at https://beam.apache.org/blog/validate-beam-release/.
>
> Thanks,
> Cham
>
> [1] https://github.com/apache/beam/milestone/5
> [2] https://dist.apache.org/repos/dist/dev/beam/2.43.0/
> [3] https://dist.apache.org/repos/dist/release/beam/KEYS
> [4] https://repository.apache.org/content/repositories/orgapachebeam-1288/
> [5] https://github.com/apache/beam/tree/v2.43.0-RC2
> [6] https://github.com/apache/beam/pull/24044
> [7] https://github.com/apache/beam-site/pull/636
> [8] https://pypi.org/project/apache-beam/2.43.0rc2/
> [9]
> https://pkg.go.dev/github.com/apache/beam/sdks/v2@v2.43.0-RC2/go/pkg/beam
> [10]
> https://docs.google.com/spreadsheets/d/1qk-N5vjXvbcEk68GjbkSZTR8AGqyNUM-oLFo_ZXBpJw/edit#gid=1310009119
> [11] https://hub.docker.com/search?q=apache%2Fbeam=image
>


One Configuration, Many File Write Formats

2022-11-14 Thread Damon Douglas
Hello Everyone,

I hope you are doing well.  The following design document proposes a
provider for a Beam file-writing transform that supports multiple formats
through a single configuration:
bit.ly/fileioschematransformwriteprovider

For those new to Beam and Schemas, I've added a final section of
suggested prerequisite reading.  It's important that everyone can
participate in the conversation at any level of experience, even if this is
your first day learning Beam.  *Please feel invited to let me know anything
that isn't clear so this document can strive to include everyone.*

*My personal thoughts on the proposal's value*

I've witnessed many smart people and teams argue and divide over the
subject of a programming language.  Beam's multi-language support allows us
to join transforms written in various languages (currently Java, Python,
Go, and, experimentally, TypeScript) into a single unified pipeline.  It's
Beam's schemas and its processing of schema-aware objects, called Rows,
that make this unification possible.  The aforementioned proposal continues
this vision by producing file and object storage sinks via a single
language-agnostic configuration and supporting provider.

Ada Lovelace dreamed of a machine that processed objects instead of just
numbers, so that they might produce music and the human things of life.
Through Beam schema awareness, let us live Ada's dream and join our
multiple languages so that we may end our strife and produce the valuable
stuff of life.

Sincerely,

Damon


Re: Questions on primitive transforms hierarchy

2022-11-14 Thread Jan Lukavský

I don't think it is necessary in this particular case.

In general, it would be nice to document design decisions that were made 
during the history of Beam and which led to some aspects of the current 
implementation. But I'm afraid it would be rather costly and 
time-consuming. We have design docs, which should be fine for most cases.


 Jan

On 11/14/22 15:25, Sachin Agarwal via dev wrote:

Would it be helpful to add these answers to the Beam docs?

On Mon, Nov 14, 2022 at 4:35 AM Jan Lukavský  wrote:

I somehow missed these answers, Reuven and Kenn, thanks for the
discussion, it helped me clarify my understanding.

 Jan

On 10/26/22 21:10, Kenneth Knowles wrote:



On Tue, Oct 25, 2022 at 5:53 AM Jan Lukavský  wrote:

> Not quite IMO. It is a subtle difference. Perhaps these
transforms can be *implemented* using stateful DoFn, but
defining their semantics directly at a high level is more
powerful. The higher level we can make transforms, the more
flexibility we have in the runners. You *could* suggest that
we take the same approach as we do with Combine: not a
primitive, but a special transform that we optimize. You
could say that "vanilla ParDo" is a composite that has a
stateful ParDo implementation, but a runner can implement the
composite more efficiently (without a shuffle). Same with
CoGBK. You could say that there is a default expansion of
CoGBK that uses stateful DoFn (which implies a shuffle) but
that smart runners will not use that expansion.

Yes, semantics > optimizations. For optimizations Beam
already has a facility - PTransformOverride. There is no
fundamental difference about how we treat Combine wrt GBK. It
*can* be expanded using GBK, but "smart runners will not use
that expansion". This is essentially the root of this discussion.
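
As a concrete illustration of the "can be expanded using GBK" point, here
is a minimal Python sketch (not Beam's actual internal expansion; the
transform name is made up) of Combine.PerKey semantics built on GroupByKey,
which a runner could then replace via a PTransformOverride:

    import apache_beam as beam

    class NaiveCombinePerKey(beam.PTransform):
        """Toy expansion of Combine.PerKey semantics in terms of GroupByKey."""
        def __init__(self, combine_fn):
            self._combine_fn = combine_fn

        def expand(self, pcoll):
            fn = self._combine_fn
            return (pcoll
                    | beam.GroupByKey()
                    # CombineFn.apply folds all grouped values for a key.
                    | beam.MapTuple(lambda k, vs: (k, fn.apply(vs))))

    # Usage: pcoll | NaiveCombinePerKey(beam.combiners.MeanCombineFn())
    # A runner preferring its own implementation would register a
    # PTransformOverride that matches such a composite and swaps it out.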

If I rephrase it:

 a) why do we distinguish between "some" actually composite
transforms treating them as primitive, while others have
expansions, although the fundamental reasoning seems the same
for both (performance)?

It is identical to why you can choose different axioms for formal
logic and get all the same provable statements. You have to
choose something. But certainly a runner that just executes
primitives is the bare minimum and all runners are really
expected to take advantage of known composites. Before
portability, the benefit was minimal to have the runner (written
in Java) execute a transform directly vs calling a user DoFn. Now
with portability it could be huge if it avoids a Fn API crossing.

 b) is there a fundamental reason why we do not support
stateful DoFn for merging windows?

No reason. The original design was to force users to only use
"mergeable" state in a stateful DoFn for merging windows. That is
an annoying restriction that we don't really need. So I think the
best way is to have an OnMerge callback. The internal legacy Java
APIs for this are way too complex. But portability wire protocols
support it (I think?) and making a good user facing API for all
the SDKs shouldn't be too hard.
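
For context, a rough Python sketch (illustrative names, not an actual Beam
expansion) of GBK-style buffering with a stateful DoFn; it is only valid
for non-merging windows, which is exactly where an OnMerge-style callback
would be needed:

    import apache_beam as beam
    from apache_beam.coders import StrUtf8Coder
    from apache_beam.transforms.timeutil import TimeDomain
    from apache_beam.transforms.userstate import BagStateSpec, TimerSpec, on_timer

    class GbkViaStatefulDoFn(beam.DoFn):
        """Buffers (key, value) pairs and emits them per key at window end.

        With merging windows (e.g. sessions) the buffered state of the merged
        windows would have to be consolidated, which is what an OnMerge-style
        callback would provide.
        """
        BUFFER = BagStateSpec('buffer', StrUtf8Coder())  # string values for simplicity
        FLUSH = TimerSpec('flush', TimeDomain.WATERMARK)

        def process(self, element,
                    window=beam.DoFn.WindowParam,
                    buffer=beam.DoFn.StateParam(BUFFER),
                    flush=beam.DoFn.TimerParam(FLUSH)):
            _, value = element                 # element is a KV pair
            buffer.add(value)
            flush.set(window.max_timestamp())  # fire when the watermark passes the window

        @on_timer(FLUSH)
        def on_flush(self, key=beam.DoFn.KeyParam,
                     buffer=beam.DoFn.StateParam(BUFFER)):
            yield key, list(buffer.read())
            buffer.clear()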

Kenn

I feel that these are related and have historical reasons,
but I'd like to know that for sure. :)

 Jan

On 10/24/22 19:59, Kenneth Knowles wrote:



On Mon, Oct 24, 2022 at 5:51 AM Jan Lukavský
 wrote:

On 10/22/22 21:47, Reuven Lax via dev wrote:

I think we stated that CoGroupbyKey was also a
primitive, though in practice it's implemented in terms
of GroupByKey today.

On Fri, Oct 21, 2022 at 3:05 PM Kenneth Knowles
 wrote:



On Fri, Oct 21, 2022 at 5:24 AM Jan Lukavský
 wrote:

Hi,

I have some missing pieces in my understanding
of the set of Beam's primitive transforms,
which I'd like to fill. First a quick recap of
what I think is the current state. We have
(basically) the following primitive transforms:

 - DoFn (stateless, stateful, splittable)

 - Window

 - Impulse

 - GroupByKey

 - Combine


Not a primitive, just a well-defined transform that
runners can execute in special ways.


Yep, OK, agree. Performance is orthogonal to semantics.




 - Flatten (pCollections)


The rest, yes.

Inside runners, we most often transform GBK
into ReduceFn (ReduceFnRunner), which does the
actual logic for both GBK and stateful DoFn.



Re: Questions on primitive transforms hierarchy

2022-11-14 Thread Sachin Agarwal via dev
Would it be helpful to add these answers to the Beam docs?

On Mon, Nov 14, 2022 at 4:35 AM Jan Lukavský  wrote:

> I somehow missed these answers, Reuven and Kenn, thanks for the
> discussion, it helped me clarify my understanding.
>
>  Jan
> On 10/26/22 21:10, Kenneth Knowles wrote:
>
>
>
> On Tue, Oct 25, 2022 at 5:53 AM Jan Lukavský  wrote:
>
>> > Not quite IMO. It is a subtle difference. Perhaps these transforms can
>> be *implemented* using stateful DoFn, but defining their semantics directly
>> at a high level is more powerful. The higher level we can make transforms,
>> the more flexibility we have in the runners. You *could* suggest that we
>> take the same approach as we do with Combine: not a primitive, but a
>> special transform that we optimize. You could say that "vanilla ParDo" is a
>> composite that has a stateful ParDo implementation, but a runner can
>> implement the composite more efficiently (without a shuffle). Same with
>> CoGBK. You could say that there is a default expansion of CoGBK that uses
>> stateful DoFn (which implies a shuffle) but that smart runners will not use
>> that expansion.
>>
>> Yes, semantics > optimizations. For optimizations Beam already has a
>> facility - PTransformOverride. There is no fundamental difference about how
>> we treat Combine wrt GBK. It *can* be expanded using GBK, but "smart
>> runners will not use that expansion". This is essentially the root of this
>> discussion.
>>
>> If I rephrase it:
>>
>>  a) why do we distinguish between "some" actually composite transforms
>> treating them as primitive, while others have expansions, although the
>> fundamental reasoning seems the same for both (performance)?
>>
> It is identical to why you can choose different axioms for formal logic
> and get all the same provable statements. You have to choose something. But
> certainly a runner that just executes primitives is the bare minimum and
> all runners are really expected to take advantage of known composites.
> Before portability, the benefit was minimal to have the runner (written in
> Java) execute a transform directly vs calling a user DoFn. Now with
> portability it could be huge if it avoids a Fn API crossing.
>
>  b) is there a fundamental reason why we do not support stateful DoFn for
>> merging windows?
>>
> No reason. The original design was to force users to only use "mergeable"
> state in a stateful DoFn for merging windows. That is an annoying
> restriction that we don't really need. So I think the best way is to have
> an OnMerge callback. The internal legacy Java APIs for this are way too
> complex. But portability wire protocols support it (I think?) and making a
> good user facing API for all the SDKs shouldn't be too hard.
>
> Kenn
>
>
>> I feel that these are related and have historical reasons, but I'd like
>> to know that for sure. :)
>>
>>  Jan
>> On 10/24/22 19:59, Kenneth Knowles wrote:
>>
>>
>>
>> On Mon, Oct 24, 2022 at 5:51 AM Jan Lukavský  wrote:
>>
>>> On 10/22/22 21:47, Reuven Lax via dev wrote:
>>>
>>> I think we stated that CoGroupbyKey was also a primitive, though in
>>> practice it's implemented in terms of GroupByKey today.
>>>
>>> On Fri, Oct 21, 2022 at 3:05 PM Kenneth Knowles  wrote:
>>>


 On Fri, Oct 21, 2022 at 5:24 AM Jan Lukavský  wrote:

> Hi,
>
> I have some missing pieces in my understanding of the set of Beam's
> primitive transforms, which I'd like to fill. First a quick recap of what 
> I
> think is the current state. We have (basically) the following primitive
> transforms:
>
>  - DoFn (stateless, stateful, splittable)
>
>  - Window
>
>  - Impulse
>
>  - GroupByKey
>
>  - Combine
>

 Not a primitive, just a well-defined transform that runners can execute
 in special ways.

>>> Yep, OK, agree. Performance is orthogonal to semantics.
>>>
>>>

>
>
>  - Flatten (pCollections)
>

 The rest, yes.



> Inside runners, we most often transform GBK into ReduceFn
> (ReduceFnRunner), which does the actual logic for both GBK and stateful
> DoFn.
>

 ReduceFnRunner is for windowing / triggers and has a special feature to
 use a CombineFn while doing it. Nothing to do with stateful DoFn.

>>> My bad, wrong wording. The point was that *all* of the semantics of GBK
>>> and Combine can be defined in terms of stateful DoFn. There are some
>>> changes needed to stateful DoFn to support the Combine functionality. But
>>> as mentioned above - optimization is orthogonal to semantics.
>>>
>>
>> Not quite IMO. It is a subtle difference. Perhaps these transforms can be
>> *implemented* using stateful DoFn, but defining their semantics directly at
>> a high level is more powerful. The higher level we can make transforms, the
>> more flexibility we have in the runners. You *could* suggest that we take
>> the same approach as we do with Combine: not a primitive, but 

Re: Questions on primitive transforms hierarchy

2022-11-14 Thread Jan Lukavský
I somehow missed these answers, Reuven and Kenn, thanks for the 
discussion, it helped me clarify my understanding.


 Jan

On 10/26/22 21:10, Kenneth Knowles wrote:



On Tue, Oct 25, 2022 at 5:53 AM Jan Lukavský  wrote:

> Not quite IMO. It is a subtle difference. Perhaps these
transforms can be *implemented* using stateful DoFn, but defining
their semantics directly at a high level is more powerful. The
higher level we can make transforms, the more flexibility we have
in the runners. You *could* suggest that we take the same approach
as we do with Combine: not a primitive, but a special transform
that we optimize. You could say that "vanilla ParDo" is a
composite that has a stateful ParDo implementation, but a runner
can implement the composite more efficiently (without a shuffle).
Same with CoGBK. You could say that there is a default expansion
of CoGBK that uses stateful DoFn (which implies a shuffle) but
that smart runners will not use that expansion.

Yes, semantics > optimizations. For optimizations Beam already has
a facility - PTransformOverride. There is no fundamental
difference about how we treat Combine wrt GBK. It *can* be
expanded using GBK, but "smart runners will not use that
expansion". This is essentially the root of this discussion.

If I rephrase it:

 a) why do we distinguish between "some" actually composite
transforms treating them as primitive, while others have
expansions, although the fundamental reasoning seems the same for
both (performance)?

It is identical to why you can choose different axioms for formal 
logic and get all the same provable statements. You have to choose 
something. But certainly a runner that just executes primitives is the 
bare minimum and all runners are really expected to take advantage of 
known composites. Before portability, the benefit was minimal to have 
the runner (written in Java) execute a transform directly vs calling a 
user DoFn. Now with portability it could be huge if it avoids a Fn API 
crossing.


 b) is there a fundamental reason why we do not support stateful
DoFn for merging windows?

No reason. The original design was to force users to only use 
"mergeable" state in a stateful DoFn for merging windows. That is an 
annoying restriction that we don't really need. So I think the best 
way is to have an OnMerge callback. The internal legacy Java APIs for 
this are way too complex. But portability wire protocols support it (I 
think?) and making a good user facing API for all the SDKs shouldn't 
be too hard.


Kenn

I feel that these are related and have historical reasons, but I'd
like to know that for sure. :)

 Jan

On 10/24/22 19:59, Kenneth Knowles wrote:



On Mon, Oct 24, 2022 at 5:51 AM Jan Lukavský  wrote:

On 10/22/22 21:47, Reuven Lax via dev wrote:

I think we stated that CoGroupbyKey was also a primitive,
though in practice it's implemented in terms of GroupByKey
today.

On Fri, Oct 21, 2022 at 3:05 PM Kenneth Knowles
 wrote:



On Fri, Oct 21, 2022 at 5:24 AM Jan Lukavský
 wrote:

Hi,

I have some missing pieces in my understanding of
the set of Beam's primitive transforms, which I'd
like to fill. First a quick recap of what I think is
the current state. We have (basically) the following
primitive transforms:

 - DoFn (stateless, stateful, splittable)

 - Window

 - Impulse

 - GroupByKey

 - Combine


Not a primitive, just a well-defined transform that
runners can execute in special ways.


Yep, OK, agree. Performance is orthogonal to semantics.




 - Flatten (pCollections)


The rest, yes.

Inside runners, we most often transform GBK into
ReduceFn (ReduceFnRunner), which does the actual
logic for both GBK and stateful DoFn.


ReduceFnRunner is for windowing / triggers and has a
special feature to use a CombineFn while doing it.
Nothing to do with stateful DoFn.


My bad, wrong wording. The point was that *all* of the
semantics of GBK and Combine can be defined in terms of
stateful DoFn. There are some changes needed to stateful DoFn
to support the Combine functionality. But as mentioned above
- optimization is orthogonal to semantics.


Not quite IMO. It is a subtle difference. Perhaps these
transforms can be *implemented* using stateful DoFn, but defining
their semantics directly at a high level is more powerful. The
higher level we can make transforms, the more flexibility we have
in the runners. You *could* suggest that we take the same
approach as we do 

Re: bhulette stepping back (for now)

2022-11-14 Thread Alexey Romanenko
Hey Brian,

Many thanks for your contributions! Good luck with your new adventure!

—
Alexey


> On 12 Nov 2022, at 20:47, Chamikara Jayalath via dev wrote:
> 
> Good luck with your next endeavor Brian! Thanks for all the contributions to 
> Beam (and hopefully more in the future when you have time :-) )
> 
> - Cham
> 
>> On Fri, Nov 11, 2022 at 10:47 PM Moritz Mack wrote:
>> Also, thanks so much for all the great and thorough reviews! That was always 
>> much appreciated!
>> 
>> All the best, Brian
>> 
>>  
>> 
>> On 11.11.22, 23:23, "Ahmet Altay via dev" wrote:
>> 
>>  
>> 
>> 
>> Thank you for everything Brian!
>> 
>>  
>> 
>> On Fri, Nov 11, 2022 at 11:27 AM Austin Bennett wrote:
>> 
>> Thanks for everything you've done, @bhule...@apache.org!
>> 
>>  
>> 
>> On Fri, Nov 11, 2022 at 11:01 AM Pablo Estrada via dev wrote:
>> 
>> I promised I wouldn't cry so I won't. Cya!
>> 
>>  
>> 
>> On Fri, Nov 11, 2022 at 10:46 AM Robin Qiu via dev wrote:
>> 
>> Thanks for your contribution Brian! Hope you enjoy your new team!
>> 
>>  
>> 
>> Best,
>> 
>> Robin
>> 
>>  
>> 
>> On Fri, Nov 11, 2022 at 10:27 AM Kenneth Knowles wrote:
>> 
>> Your contributions have been huge. You will be missed! But have a fabulous 
>> time with BigQuery. And thank you so much for letting us know [1]
>> 
>>  
>> 
>> Kenn
>> 
>>  
>> 
>> [1] See "stepping down considerately" from 
>> https://www.apache.org/foundation/policies/conduct.html 
>> 
>>  
>> 
>> On Thu, Nov 10, 2022 at 4:00 PM Brian Hulette wrote:
>> 
>> Hi dev@beam,
>> 
>>  
>> 
>> I just wanted to let the community know that I will be stepping back from 
>> Beam development for now. I'm switching to a different team within Google 
>> next week - I will be working on BigQuery.
>> 
>>  
>> 
>> I'm removing myself from automated code review assignments [1], and won't 
>> actively monitor the beam lists anymore. That being said, I'm happy to 
>> contribute to discussions or code reviews when it would be particularly 
>> helpful, e.g. for anything relating to DataFrames/Schemas/SQL. I can always 
>> be reached at bhule...@apache.org, and 
>> @TheNeuralBit [2] on GitHub.
>> 
>>  
>> 
>> Brian
>> 
>>  
>> 
>> [1] https://github.com/apache/beam/pull/24108 
>> 
>> [2] https://github.com/TheNeuralBit 
>> 
>> 
>> 



Beam High Priority Issue Report (60)

2022-11-14 Thread beamactions
This is your daily summary of Beam's current high priority issues that may need 
attention.

See https://beam.apache.org/contribute/issue-priorities for the meaning and 
expectations around issue priorities.

Unassigned P1 Issues:

https://github.com/apache/beam/issues/24089 [Bug]: import 
org.apache.beam.sdk.metrics.Gauge metrics do not show up in stackdriver
https://github.com/apache/beam/issues/23982 [Bug]: Could not find a version 
that matches protobuf
https://github.com/apache/beam/issues/23974 [Bug]: The top result for "Beam 
godocs" in Google points to an old Godocs page of Beam
https://github.com/apache/beam/issues/23944  beam_PreCommit_Python_Cron 
regularly failing - test_pardo_large_input flaky
https://github.com/apache/beam/issues/23815 [Bug]: Neo4j tests failing
https://github.com/apache/beam/issues/23745 [Bug]: Samza 
AsyncDoFnRunnerTest.testSimplePipeline is flaky
https://github.com/apache/beam/issues/23709 [Flake]: Spark batch flakes in 
ParDoLifecycleTest.testTeardownCalledAfterExceptionInProcessElement and 
ParDoLifecycleTest.testTeardownCalledAfterExceptionInStartBundle
https://github.com/apache/beam/issues/22969 Discrepancy in behavior of 
`DoFn.process()` when `yield` is combined with `return` statement, or vice versa
https://github.com/apache/beam/issues/22913 [Bug]: 
beam_PostCommit_Java_ValidatesRunner_Flink is flakes in 
org.apache.beam.sdk.transforms.GroupByKeyTest$BasicTests.testAfterProcessingTimeContinuationTriggerUsingState
https://github.com/apache/beam/issues/22321 
PortableRunnerTestWithExternalEnv.test_pardo_large_input is regularly failing 
on jenkins
https://github.com/apache/beam/issues/21713 404s in BigQueryIO don't get output 
to Failed Inserts PCollection
https://github.com/apache/beam/issues/21561 
ExternalPythonTransformTest.trivialPythonTransform flaky
https://github.com/apache/beam/issues/21469 beam_PostCommit_XVR_Flink flaky: 
Connection refused
https://github.com/apache/beam/issues/21462 Flake in 
org.apache.beam.sdk.io.mqtt.MqttIOTest.testReadObject: Address already in use
https://github.com/apache/beam/issues/21261 
org.apache.beam.runners.dataflow.worker.fn.logging.BeamFnLoggingServiceTest.testMultipleClientsFailingIsHandledGracefullyByServer
 is flaky
https://github.com/apache/beam/issues/21260 Python DirectRunner does not emit 
data at GC time
https://github.com/apache/beam/issues/21113 
testTwoTimersSettingEachOtherWithCreateAsInputBounded flaky
https://github.com/apache/beam/issues/20976 
apache_beam.runners.portability.flink_runner_test.FlinkRunnerTestOptimized.test_flink_metrics
 is flaky
https://github.com/apache/beam/issues/20975 
org.apache.beam.runners.flink.ReadSourcePortableTest.testExecution[streaming: 
false] is flaky
https://github.com/apache/beam/issues/20974 Python GHA PreCommits flake with 
grpc.FutureTimeoutError on SDK harness startup
https://github.com/apache/beam/issues/20689 Kafka commitOffsetsInFinalize OOM 
on Flink
https://github.com/apache/beam/issues/20108 Python direct runner doesn't emit 
empty pane when it should
https://github.com/apache/beam/issues/19814 Flink streaming flakes in 
ParDoLifecycleTest.testTeardownCalledAfterExceptionInStartBundleStateful and 
ParDoLifecycleTest.testTeardownCalledAfterExceptionInProcessElementStateful
https://github.com/apache/beam/issues/19734 
WatchTest.testMultiplePollsWithManyResults flake: Outputs must be in timestamp 
order (sickbayed)
https://github.com/apache/beam/issues/19241 Python Dataflow integration tests 
should export the pipeline Job ID and console output to Jenkins Test Result 
section


P1 Issues with no update in the last week:

https://github.com/apache/beam/issues/23906 [Bug]: Dataflow jpms tests fail on 
the 2.43.0 release branch
https://github.com/apache/beam/issues/23875 [Bug]: beam.Row.__eq__ returns true 
for unequal rows
https://github.com/apache/beam/issues/23855 [FLAKY-WORKFLOW] [10535797]: THIS 
IS A TEST, PLEASE IGNORE #1
https://github.com/apache/beam/issues/23627 [Bug]: Website precommit flaky
https://github.com/apache/beam/issues/23525 [Bug]: Default PubsubMessage coder 
will drop message id and orderingKey
https://github.com/apache/beam/issues/23489 [Bug]: add DebeziumIO to the 
connectors page
https://github.com/apache/beam/issues/23306 [Bug]: BigQueryBatchFileLoads in 
python loses data when using WRITE_TRUNCATE
https://github.com/apache/beam/issues/23286 [Bug]: 
beam_PerformanceTests_InfluxDbIO_IT Flaky > 50 % Fail 
https://github.com/apache/beam/issues/22891 [Bug]: 
beam_PostCommit_XVR_PythonUsingJavaDataflow is flaky
https://github.com/apache/beam/issues/22605 [Bug]: Beam Python failure for 
dataflow_exercise_metrics_pipeline_test.ExerciseMetricsPipelineTest.test_metrics_it
https://github.com/apache/beam/issues/22299 [Bug]: JDBCIO Write freeze at 
getConnection() in WriteFn
https://github.com/apache/beam/issues/22115 [Bug]: 
apache_beam.runners.portability.portable_runner_test.PortableRunnerTestWithSubprocesses
 is flaky