[jira] [Created] (FLINK-33034) Incorrect StateBackendTestBase#testGetKeysAndNamespaces

2023-09-04 Thread Dmitriy Linevich (Jira)
Dmitriy Linevich created FLINK-33034:


 Summary: Incorrect StateBackendTestBase#testGetKeysAndNamespaces
 Key: FLINK-33034
 URL: https://issues.apache.org/jira/browse/FLINK-33034
 Project: Flink
  Issue Type: Bug
  Components: Runtime / State Backends
Affects Versions: 1.17.1, 1.15.0, 1.12.2
Reporter: Dmitriy Linevich
 Fix For: 1.17.1, 1.15.0
 Attachments: image-2023-09-05-12-51-28-203.png

In this test, the first namespace 'ns1' never appears in the state, because the 
ValueState is created incorrectly for the test. This needs to be fixed by changing 
how the ValueState is created or how the state is updated.

 

If the following check of the number of namespaces added to the state is inserted 
[here|https://github.com/apache/flink/blob/3e6a1aab0712acec3e9fcc955a28f2598f019377/flink-runtime/src/test/java/org/apache/flink/runtime/state/StateBackendTestBase.java#L501C28-L501C28]
{code:java}
assertThat(keysByNamespace.size(), is(2)); {code}
then the test fails:

!image-2023-09-05-12-51-28-203.png!
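To make the expected invariant concrete, below is a minimal plain-Java sketch of the grouping that the proposed assertion checks. The class, the (namespace, key) pair encoding, and the sample data are illustrative stand-ins, not the actual StateBackendTestBase code or Flink API.

```java
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.stream.Collectors;

public class KeysByNamespaceSketch {

    // Group keys by namespace, mirroring the keysByNamespace map that the test
    // builds from the state backend's stream of (namespace, key) entries.
    static Map<String, Set<Integer>> keysByNamespace(List<Map.Entry<String, Integer>> pairs) {
        return pairs.stream().collect(Collectors.groupingBy(
                Map.Entry::getKey,
                Collectors.mapping(Map.Entry::getValue, Collectors.toSet())));
    }

    public static void main(String[] args) {
        // What the test intends: values written under both 'ns1' and 'ns2'.
        List<Map.Entry<String, Integer>> pairs = List.of(
                Map.entry("ns1", 1), Map.entry("ns1", 2),
                Map.entry("ns2", 1), Map.entry("ns2", 3));
        // The proposed check: both namespaces must be present.
        System.out.println(keysByNamespace(pairs).size()); // prints 2
    }
}
```

With the current broken setup, no value is ever written under 'ns1', so the grouped map contains only a single namespace and the `is(2)` check fails as in the screenshot above.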



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-33033) Add haservice micro benchmark for olap

2023-09-04 Thread Fang Yong (Jira)
Fang Yong created FLINK-33033:
-

 Summary: Add haservice micro benchmark for olap
 Key: FLINK-33033
 URL: https://issues.apache.org/jira/browse/FLINK-33033
 Project: Flink
  Issue Type: Sub-task
  Components: Benchmarks
Affects Versions: 1.19.0
Reporter: Fang Yong


Add micro benchmarks of haservice for OLAP to improve the performance of 
short-lived jobs.





Re: [DISCUSS] Drop python 3.7 support in 1.19

2023-09-04 Thread Jing Ge
+1

@Dian should we add support of python 3.11

Best regards,
Jing

On Mon, Sep 4, 2023 at 3:39 PM Gabor Somogyi 
wrote:

> Thanks for all the responses!
>
> Based on the suggestions I've created the following jiras and started to
> work on them:
> * https://issues.apache.org/jira/browse/FLINK-33029
> * https://issues.apache.org/jira/browse/FLINK-33030
>
> The reason why I've split them is to separate the concerns and reduce the
> amount of code in a PR to help reviewers.
>
> BR,
> G
>
>
> On Mon, Sep 4, 2023 at 12:57 PM Sergey Nuyanzin 
> wrote:
>
> > +1,
> > Thanks for looking into this.
> >
> > On Mon, Sep 4, 2023 at 8:38 AM Gyula Fóra  wrote:
> >
> > > +1
> > > Thanks for looking into this.
> > >
> > > Gyula
> > >
> > > On Mon, Sep 4, 2023 at 8:26 AM Matthias Pohl  > > .invalid>
> > > wrote:
> > >
> > > > Thanks Gabor for looking into it. It sounds reasonable to me as well.
> > > >
> > > > +1
> > > >
> > > > On Sun, Sep 3, 2023 at 5:44 PM Márton Balassi <
> > balassi.mar...@gmail.com>
> > > > wrote:
> > > >
> > > > > Hi Gabor,
> > > > >
> > > > > Thanks for bringing this up. Similarly to when we dropped Python
> 3.6
> > > due
> > > > to
> > > > > its end of life (and added 3.10) in Flink 1.17 [1,2], it makes
> sense
> > to
> > > > > proceed to remove 3.7 and add 3.11 instead.
> > > > >
> > > > > +1.
> > > > >
> > > > > [1] https://issues.apache.org/jira/browse/FLINK-27929
> > > > > [2] https://github.com/apache/flink/pull/21699
> > > > >
> > > > > Best,
> > > > > Marton
> > > > >
> > > > > On Fri, Sep 1, 2023 at 10:39 AM Gabor Somogyi <
> > > gabor.g.somo...@gmail.com
> > > > >
> > > > > wrote:
> > > > >
> > > > > > Hi All,
> > > > > >
> > > > > > I've analyzed through part of the pyflink code and found some
> > > > improvement
> > > > > > possibilities.
> > > > > > I would like to hear voices on the idea.
> > > > > >
> > > > > > Intention:
> > > > > > * upgrade several python related versions to eliminate
> end-of-life
> > > > issues
> > > > > > and keep up with bugfixes
> > > > > > * start to add python arm64 support
> > > > > >
> > > > > > Actual situation:
> > > > > > * Flink supports the following python versions: 3.7, 3.8, 3.9,
> 3.10
> > > > > > * We use miniconda 4.7.10 (python package management system and
> > > > > environment
> > > > > > management system) which supports the following python versions:
> > 3.7,
> > > > > 3.8,
> > > > > > 3.9, 3.10
> > > > > > * Our python framework is not supporting anything but x86_64
> > > > > >
> > > > > > Issues:
> > > > > > * Python 3.7.17 is the latest security patch of the 3.7 line.
> This
> > > > > version
> > > > > > is end-of-life and is no longer supported:
> > > > > > https://www.python.org/downloads/release/python-3717/
> > > > > > * Miniconda 4.7.10 is released on 2019-07-29 which is 4 years old
> > > > already
> > > > > > and not supporting too many architectures (x86_64 and ppc64le)
> > > > > > * The latest miniconda which has real multi-arch feature set
> > supports
> > > > the
> > > > > > following python versions: 3.8, 3.9, 3.10, 3.11 and no 3.7
> support
> > > > > >
> > > > > > Suggestion to solve the issues:
> > > > > > * In 1.19 drop python 3.7 support and upgrade miniconda to the
> > latest
> > > > > > version which opens the door to other platform + python 3.11
> > support
> > > > > >
> > > > > > Please note python 3.11 support is not initiated/discussed here.
> > > > > >
> > > > > > BR,
> > > > > > G
> > > > > >
> > > > >
> > > >
> > >
> >
> >
> > --
> > Best regards,
> > Sergey
> >
>


[jira] [Created] (FLINK-33032) [JUnit5 Migration] Module: flink-table-planner (ExpressionTestBase)

2023-09-04 Thread Jiabao Sun (Jira)
Jiabao Sun created FLINK-33032:
--

 Summary: [JUnit5 Migration] Module: flink-table-planner 
(ExpressionTestBase)
 Key: FLINK-33032
 URL: https://issues.apache.org/jira/browse/FLINK-33032
 Project: Flink
  Issue Type: Sub-task
  Components: Tests
Affects Versions: 1.18.0
Reporter: Jiabao Sun


[JUnit5 Migration] Module: flink-table-planner (ExpressionTestBase)





[jira] [Created] (FLINK-33031) [JUnit5 Migration] Module: flink-table-planner (AggFunctionTestBase)

2023-09-04 Thread Jiabao Sun (Jira)
Jiabao Sun created FLINK-33031:
--

 Summary: [JUnit5 Migration] Module: flink-table-planner 
(AggFunctionTestBase)
 Key: FLINK-33031
 URL: https://issues.apache.org/jira/browse/FLINK-33031
 Project: Flink
  Issue Type: Sub-task
  Components: Tests
Affects Versions: 1.18.0
Reporter: Jiabao Sun


[JUnit5 Migration] Module: flink-table-planner (AggFunctionTestBase)





Proposal for Implementing Keyed Watermarks in Apache Flink

2023-09-04 Thread Tawfek Yasser Tawfek
Dear Apache Flink Development Team,

I hope this email finds you well. I am writing to propose an exciting new 
feature for Apache Flink that has the potential to significantly enhance its 
capabilities in handling unbounded streams of events, particularly in the 
context of event-time windowing.

As you may be aware, Apache Flink has been at the forefront of Big Data Stream 
processing engines, leveraging windowing techniques to manage unbounded event 
streams effectively. The accuracy of the results obtained from these streams 
relies heavily on the ability to gather all relevant input within a window. At 
the core of this process are watermarks, which serve as unique timestamps 
marking the progression of events in time.

However, our analysis has revealed a critical issue with the current watermark 
generation method in Apache Flink. This method, which operates at the input 
stream level, exhibits a bias towards faster sub-streams, resulting in the 
unfortunate consequence of dropped events from slower sub-streams. Our 
investigations showed that Apache Flink's conventional watermark generation 
approach led to an alarming data loss of approximately 33% when 50% of the keys 
around the median experienced delays. This loss further escalated to over 37% 
when 50% of random keys were delayed.

In response to this issue, we have authored a research paper outlining a novel 
strategy named "keyed watermarks" to address data loss and substantially 
enhance data processing accuracy, achieving at least 99% accuracy in most 
scenarios.
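To make the described bias tangible, here is a toy, self-contained Java sketch contrasting a single stream-level watermark with per-key ("keyed") watermarks. The event data, the bounded-delay watermark generation, and all names are illustrative assumptions rather than the paper's actual implementation.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class KeyedWatermarkSketch {
    // Toy event: (key, eventTime).
    record Event(String key, long ts) {}

    // Conventional: one watermark over the whole input stream, advanced by the
    // fastest sub-stream; events at or behind it are dropped as late.
    static int droppedWithGlobalWatermark(List<Event> events, long boundedDelay) {
        long wm = Long.MIN_VALUE;
        int dropped = 0;
        for (Event e : events) {
            if (e.ts() <= wm) dropped++;
            wm = Math.max(wm, e.ts() - boundedDelay);
        }
        return dropped;
    }

    // Keyed watermarks: one watermark per key, so a slow key is only compared
    // against its own progress and keeps its events.
    static int droppedWithKeyedWatermarks(List<Event> events, long boundedDelay) {
        Map<String, Long> wmByKey = new HashMap<>();
        int dropped = 0;
        for (Event e : events) {
            long wm = wmByKey.getOrDefault(e.key(), Long.MIN_VALUE);
            if (e.ts() <= wm) dropped++;
            wmByKey.put(e.key(), Math.max(wm, e.ts() - boundedDelay));
        }
        return dropped;
    }

    public static void main(String[] args) {
        // Fast key "a" races ahead; slow key "b" lags behind it.
        List<Event> events = List.of(
                new Event("a", 100), new Event("b", 10),
                new Event("a", 200), new Event("b", 20));
        System.out.println(droppedWithGlobalWatermark(events, 5));
        System.out.println(droppedWithKeyedWatermarks(events, 5));
    }
}
```

With the single global watermark, the fast key "a" pushes the watermark ahead so both of slow key "b"'s events arrive late and are dropped; with per-key watermarks, nothing is dropped.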

Moreover, we have conducted comprehensive comparative studies to evaluate the 
effectiveness of our strategy against the conventional watermark generation 
method, specifically in terms of event-time tracking accuracy.

We believe that implementing keyed watermarks in Apache Flink can greatly 
enhance its performance and reliability, making it an even more valuable tool 
for organizations dealing with complex, high-throughput data processing tasks.

We kindly request your consideration of this proposal. We would be eager to 
discuss further details, provide the full research paper, or collaborate 
closely to facilitate the integration of this feature into Apache Flink.

Thank you for your time and attention to this proposal. We look forward to the 
opportunity to contribute to the continued success and evolution of Apache 
Flink.

Best Regards,

Tawfik Yasser
Senior Teaching Assistant @ Nile University, Egypt
Email: tyas...@nu.edu.eg
LinkedIn: https://www.linkedin.com/in/tawfikyasser/


Re: [DISCUSS] FLIP-328: Allow source operators to determine isProcessingBacklog based on watermark lag

2023-09-04 Thread Jing Ge
Hi Xuannan,

Thanks for the clarification.

3. Event time and processing time are two different things. It might be rarely
used, but conceptually, users can process data from the past within a
specific time range in streaming mode. All data before that range would
be considered backlog and would need to be processed in batch mode,
like, e.g., the present perfect progressive tense in the English language.

Best regards,
Jing

On Thu, Aug 31, 2023 at 4:45 AM Xuannan Su  wrote:

> Hi Jing,
>
> Thanks for the reply.
>
> 1. You are absolutely right that the watermark lag threshold must be
> carefully set with a thorough understanding of watermark generation. It is
> crucial for users to take into account the WatermarkStrategy when setting
> the watermark lag threshold.
>
> 2. Regarding pure processing-time based stream processing jobs,
> alternative strategies will be implemented to determine whether the job is
> processing backlog data. I have outlined two possible strategies below:
>
> - Based on the source operator's state. For example, when MySQL CDC source
> is reading snapshot, it can claim isBacklog=true.
> - Based on metrics. For example, when busyTimeMsPerSecond (or
> backPressuredTimeMsPerSecond) > user_specified_threshold, then
> isBacklog=true.
>
> As for the strategies proposed in this FLIP, they rely on generated
> watermarks. Therefore, if a user intends for the job to detect backlog
> status based on the watermark, it is necessary to generate the watermark.
>
> 3. I'm afraid I'm not fully grasping your question. From my understanding,
> it should work in both cases. When event times are close to the processing
> time, resulting in watermarks close to the processing time, the job is not
> processing backlog data. On the other hand, when event times are far from
> processing time, causing watermarks to also be distant, if the lag
> surpasses the defined threshold, the job is considered processing backlog
> data.
>
> Best,
> Xuannan
>
>
> > On Aug 31, 2023, at 02:56, Jing Ge  wrote:
> >
> > Hi Xuannan,
> >
> > Thanks for the clarification. That is the part where I am trying to
> > understand your thoughts. I have some follow-up questions:
> >
> > 1. It depends strongly on the watermarkStrategy and how customized
> > watermark generation looks like. It mixes business logic with technical
> > implementation and technical data processing mode. The value of the
> > watermark lag threshold must be set very carefully. If the value is too
> > small. any time, when the watermark generation logic is changed(business
> > logic changes lead to the threshold getting exceeded), the same job might
> > be running surprisingly in backlog processing mode, i.e. a butterfly
> > effect. A comprehensive documentation is required to avoid any confusion
> > for the users.
> > 2. Like Jark already mentioned, use cases that do not have watermarks,
> > like pure processing-time based stream processing[1] are not covered. It
> is
> > more or less a trade-off solution that does not support such use cases
> and
> > appropriate documentation is required. Forcing them to explicitly
> generate
> > watermarks that are never needed just because of this does not sound
> like a
> > proper solution.
> > 3. If I am not mistaken, it only works for use cases where event times
> are
> > very close to the processing times, because the wall clock is used to
> > calculate the watermark lag and the watermark is generated based on the
> > event time.
> >
> > Best regards,
> > Jing
> >
> > [1]
> >
> https://github.com/apache/flink/blob/2c50b4e956305426f478b726d4de4a640a16b810/flink-core/src/main/java/org/apache/flink/api/common/eventtime/WatermarkStrategy.java#L236
> >
> > On Wed, Aug 30, 2023 at 4:06 AM Xuannan Su 
> wrote:
> >
> >> Hi Jing,
> >>
> >> Thank you for the suggestion.
> >>
> >> The definition of watermark lag is the same as the watermarkLag metric
> in
> >> FLIP-33[1]. More specifically, the watermark lag calculation is
> computed at
> >> the time when a watermark is emitted downstream in the following way:
> >> watermarkLag = CurrentTime - Watermark. I have added this description to
> >> the FLIP.
> >>
> >> I hope this addresses your concern.
> >>
> >> Best,
> >> Xuannan
> >>
> >> [1]
> >>
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-33%3A+Standardize+Connector+Metrics
> >>
> >>
> >>> On Aug 28, 2023, at 01:04, Jing Ge  wrote:
> >>>
> >>> Hi Xuannan,
> >>>
> >>> Thanks for the proposal. +1 for me.
> >>>
> >>> There is one tiny thing that I am not sure if I understand it
> correctly.
> >>> Since there will be many different WatermarkStrategies and different
> >>> WatermarkGenerators. Could you please update the FLIP and add the
> >>> description of how the watermark lag is calculated exactly? E.g.
> >> Watermark
> >>> lag = A - B with A is the timestamp of the watermark emitted to the
> >>> downstream and B is(this is the part I am not really sure after
> >> reading
> >>> the FLIP).
> >>>
> >>> Best regards,
> >>> Jing
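The watermark-lag definition discussed in this thread (watermarkLag = CurrentTime - Watermark, per FLIP-33) reduces to a simple threshold check. The sketch below is illustrative only; the method and parameter names are assumptions, not actual Flink API.

```java
public class WatermarkLagSketch {
    // FLIP-33 definition quoted above: watermarkLag = CurrentTime - Watermark,
    // computed at the moment a watermark is emitted downstream. The job is
    // considered to be processing backlog when the lag exceeds a
    // user-specified threshold.
    static boolean isProcessingBacklog(long currentTimeMs, long watermarkMs, long lagThresholdMs) {
        long watermarkLag = currentTimeMs - watermarkMs;
        return watermarkLag > lagThresholdMs;
    }

    public static void main(String[] args) {
        long now = 1_000_000L;
        // Watermark close to the wall clock: not backlog.
        System.out.println(isProcessingBacklog(now, now - 5_000, 60_000));
        // Watermark far behind the wall clock: backlog.
        System.out.println(isProcessingBacklog(now, now - 300_000, 60_000));
    }
}
```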

[DISCUSS] Add config to enable job stop with savepoint on exceeding tolerable checkpoint Failures

2023-09-04 Thread Dongwoo Kim
Hi all,
I have a proposal that aims to enhance a Flink application's resilience
in cases of unexpected failures in checkpoint storage like S3 or HDFS.

*[Background]*
When using self-managed S3-compatible object storage, we faced asynchronous
checkpoint failures lasting for an extended period (more than 30 minutes),
leading to multiple job restarts and causing lag in our streaming
application.


*[Current Behavior]*
Currently, when the number of checkpoint failures exceeds a predefined
tolerable limit, Flink will either restart or fail the job based on how it's
configured.
In my opinion this does not handle scenarios where the checkpoint storage
itself may be unreliable or experiencing downtime.

*[Proposed Feature]*
I propose a config that allows for a graceful job stop with a savepoint
when the tolerable checkpoint failure limit is reached.
Instead of restarting or failing the job when the tolerable checkpoint failure
limit is exceeded, the job would simply trigger stopWithSavepoint when this new
config is set to true.

This could offer the following benefits:
- Indication of Checkpoint Storage State: Exceeding tolerable checkpoint
failures could indicate unstable checkpoint storage.
- Automated Fallback Strategy: When combined with a monitoring cron job,
this feature could act as an automated fallback strategy for handling
unstable checkpoint storage.
  The job would stop safely, take a savepoint, and then you could
automatically restart it with a different checkpoint storage configured,
e.g. switching from S3 to HDFS.

For example, let's say the checkpoint path is configured to S3 and the
savepoint path is configured to HDFS.
When the new config is set to true and the tolerable checkpoint failure limit
is exceeded, the job stops with a savepoint as shown below.
We can then restart the job from that savepoint with the checkpoint storage
reconfigured to HDFS.
[image: image.png]
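A rough sketch of the proposed decision logic, assuming a hypothetical boolean flag for the new config; none of these names exist in Flink today, and the real change would presumably live near Flink's existing tolerable-failure handling.

```java
public class CheckpointFailureFallbackSketch {
    // Hypothetical flag mirroring the proposed config; this option does not
    // exist in Flink today.
    static final boolean STOP_WITH_SAVEPOINT_ON_FAILURE_LIMIT = true;

    static String onCheckpointFailure(int consecutiveFailures, int tolerableFailures) {
        if (consecutiveFailures <= tolerableFailures) {
            return "CONTINUE";
        }
        // Today Flink restarts or fails the job at this point; the proposal
        // adds a third, graceful path.
        return STOP_WITH_SAVEPOINT_ON_FAILURE_LIMIT
                ? "STOP_WITH_SAVEPOINT"
                : "RESTART_OR_FAIL";
    }

    public static void main(String[] args) {
        System.out.println(onCheckpointFailure(2, 3)); // within the tolerable limit
        System.out.println(onCheckpointFailure(4, 3)); // limit exceeded
    }
}
```

Combined with a monitoring job, the STOP_WITH_SAVEPOINT outcome could then feed the automated fallback described above: restart from the savepoint with the checkpoint storage switched, e.g. from S3 to HDFS.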


Looking forward to hearing the community's thoughts on this proposal.
I would also like to ask how the community handles long-lasting unstable
checkpoint storage issues.

Thanks in advance.

Best,
Dongwoo


Re: [DISCUSS] Drop python 3.7 support in 1.19

2023-09-04 Thread Gabor Somogyi
Thanks for all the responses!

Based on the suggestions I've created the following jiras and started to
work on them:
* https://issues.apache.org/jira/browse/FLINK-33029
* https://issues.apache.org/jira/browse/FLINK-33030

The reason why I've split them is to separate the concerns and reduce the
amount of code in a PR to help reviewers.

BR,
G


On Mon, Sep 4, 2023 at 12:57 PM Sergey Nuyanzin  wrote:

> +1,
> Thanks for looking into this.
>
> On Mon, Sep 4, 2023 at 8:38 AM Gyula Fóra  wrote:
>
> > +1
> > Thanks for looking into this.
> >
> > Gyula
> >
> > On Mon, Sep 4, 2023 at 8:26 AM Matthias Pohl  > .invalid>
> > wrote:
> >
> > > Thanks Gabor for looking into it. It sounds reasonable to me as well.
> > >
> > > +1
> > >
> > > On Sun, Sep 3, 2023 at 5:44 PM Márton Balassi <
> balassi.mar...@gmail.com>
> > > wrote:
> > >
> > > > Hi Gabor,
> > > >
> > > > Thanks for bringing this up. Similarly to when we dropped Python 3.6
> > due
> > > to
> > > > its end of life (and added 3.10) in Flink 1.17 [1,2], it makes sense
> to
> > > > proceed to remove 3.7 and add 3.11 instead.
> > > >
> > > > +1.
> > > >
> > > > [1] https://issues.apache.org/jira/browse/FLINK-27929
> > > > [2] https://github.com/apache/flink/pull/21699
> > > >
> > > > Best,
> > > > Marton
> > > >
> > > > On Fri, Sep 1, 2023 at 10:39 AM Gabor Somogyi <
> > gabor.g.somo...@gmail.com
> > > >
> > > > wrote:
> > > >
> > > > > Hi All,
> > > > >
> > > > > I've analyzed through part of the pyflink code and found some
> > > improvement
> > > > > possibilities.
> > > > > I would like to hear voices on the idea.
> > > > >
> > > > > Intention:
> > > > > * upgrade several python related versions to eliminate end-of-life
> > > issues
> > > > > and keep up with bugfixes
> > > > > * start to add python arm64 support
> > > > >
> > > > > Actual situation:
> > > > > * Flink supports the following python versions: 3.7, 3.8, 3.9, 3.10
> > > > > * We use miniconda 4.7.10 (python package management system and
> > > > environment
> > > > > management system) which supports the following python versions:
> 3.7,
> > > > 3.8,
> > > > > 3.9, 3.10
> > > > > * Our python framework is not supporting anything but x86_64
> > > > >
> > > > > Issues:
> > > > > * Python 3.7.17 is the latest security patch of the 3.7 line. This
> > > > version
> > > > > is end-of-life and is no longer supported:
> > > > > https://www.python.org/downloads/release/python-3717/
> > > > > * Miniconda 4.7.10 is released on 2019-07-29 which is 4 years old
> > > already
> > > > > and not supporting too many architectures (x86_64 and ppc64le)
> > > > > * The latest miniconda which has real multi-arch feature set
> supports
> > > the
> > > > > following python versions: 3.8, 3.9, 3.10, 3.11 and no 3.7 support
> > > > >
> > > > > Suggestion to solve the issues:
> > > > > * In 1.19 drop python 3.7 support and upgrade miniconda to the
> latest
> > > > > version which opens the door to other platform + python 3.11
> support
> > > > >
> > > > > Please note python 3.11 support is not initiated/discussed here.
> > > > >
> > > > > BR,
> > > > > G
> > > > >
> > > >
> > >
> >
>
>
> --
> Best regards,
> Sergey
>


[jira] [Created] (FLINK-33030) Add python 3.11 support

2023-09-04 Thread Gabor Somogyi (Jira)
Gabor Somogyi created FLINK-33030:
-

 Summary: Add python 3.11 support
 Key: FLINK-33030
 URL: https://issues.apache.org/jira/browse/FLINK-33030
 Project: Flink
  Issue Type: New Feature
  Components: API / Python
Affects Versions: 1.19.0
Reporter: Gabor Somogyi








[jira] [Created] (FLINK-33029) Drop python 3.7 support

2023-09-04 Thread Gabor Somogyi (Jira)
Gabor Somogyi created FLINK-33029:
-

 Summary: Drop python 3.7 support
 Key: FLINK-33029
 URL: https://issues.apache.org/jira/browse/FLINK-33029
 Project: Flink
  Issue Type: New Feature
  Components: API / Python
Affects Versions: 1.19.0
Reporter: Gabor Somogyi








[jira] [Created] (FLINK-33028) FLIP-348: Make expanding behavior of virtual metadata columns configurable

2023-09-04 Thread Timo Walther (Jira)
Timo Walther created FLINK-33028:


 Summary: FLIP-348: Make expanding behavior of virtual metadata 
columns configurable
 Key: FLINK-33028
 URL: https://issues.apache.org/jira/browse/FLINK-33028
 Project: Flink
  Issue Type: New Feature
  Components: Table SQL / API, Table SQL / Planner
Reporter: Timo Walther
Assignee: Timo Walther


Many SQL vendors expose additional metadata via so-called "pseudo columns" or 
"system columns" next to the physical columns.

However, those columns should not be selected by default when expanding SELECT 
*, also for the sake of backward compatibility. Flink SQL already offers 
pseudo columns next to the physical columns, exposed as metadata columns.

This proposal suggests to evolve the existing column design slightly to be more 
useful for platform providers.

https://cwiki.apache.org/confluence/x/_o6zDw






[DISCUSS] FLIP-360: Merging ExecutionGraphInfoStore and JobResultStore into a single component

2023-09-04 Thread Matthias Pohl
Hi everyone,
I want to open the discussion on FLIP-360 [1]. The goal of this FLIP is to
combine the two very similar components ExecutionGraphInfoStore and
JobResultStore into a single component.

The benefit of this effort would be to expose the metadata of a
globally-terminated job even in cases where the JobManager fails shortly
after the job finished. This is relevant for external checkpoint management
(like it's done in the Kubernetes Operator) which relies on the checkpoint
information to be available.

More generally, it would allow completed jobs to be listed as part of the
Flink cluster even after a JM failover. This would allow users to gain more
control over finished jobs.

The current state of the FLIP doesn't come up with a final conclusion on
the serialization format of the data (JSON vs binary). I want to emphasize
that there's also a third option which keeps both components separate and
only exposes the additional checkpoint information through the
JobResultStore.

I'm looking forward to feedback.
Best,
Matthias

PS: I might be less responsive in the next 2-3 weeks but want to initiate
the discussion, anyway.

[1]
https://cwiki.apache.org/confluence/display/FLINK/FLIP-360%3A+Merging+the+ExecutionGraphInfoStore+and+the+JobResultStore+into+a+single+component+CompletedJobStore

-- 


*Matthias Pohl*
Opensource Software Engineer, *Aiven*
matthias.p...@aiven.io|  +49 170 9869525
aiven.io
*Aiven Deutschland GmbH*
Alexanderufer 3-7, 10117 Berlin
Geschäftsführer: Oskari Saarenmaa & Hannu Valtonen
Amtsgericht Charlottenburg, HRB 209739 B


Re: [DISCUSS] Drop python 3.7 support in 1.19

2023-09-04 Thread Sergey Nuyanzin
+1,
Thanks for looking into this.

On Mon, Sep 4, 2023 at 8:38 AM Gyula Fóra  wrote:

> +1
> Thanks for looking into this.
>
> Gyula
>
> On Mon, Sep 4, 2023 at 8:26 AM Matthias Pohl  .invalid>
> wrote:
>
> > Thanks Gabor for looking into it. It sounds reasonable to me as well.
> >
> > +1
> >
> > On Sun, Sep 3, 2023 at 5:44 PM Márton Balassi 
> > wrote:
> >
> > > Hi Gabor,
> > >
> > > Thanks for bringing this up. Similarly to when we dropped Python 3.6
> due
> > to
> > > its end of life (and added 3.10) in Flink 1.17 [1,2], it makes sense to
> > > proceed to remove 3.7 and add 3.11 instead.
> > >
> > > +1.
> > >
> > > [1] https://issues.apache.org/jira/browse/FLINK-27929
> > > [2] https://github.com/apache/flink/pull/21699
> > >
> > > Best,
> > > Marton
> > >
> > > On Fri, Sep 1, 2023 at 10:39 AM Gabor Somogyi <
> gabor.g.somo...@gmail.com
> > >
> > > wrote:
> > >
> > > > Hi All,
> > > >
> > > > I've analyzed through part of the pyflink code and found some
> > improvement
> > > > possibilities.
> > > > I would like to hear voices on the idea.
> > > >
> > > > Intention:
> > > > * upgrade several python related versions to eliminate end-of-life
> > issues
> > > > and keep up with bugfixes
> > > > * start to add python arm64 support
> > > >
> > > > Actual situation:
> > > > * Flink supports the following python versions: 3.7, 3.8, 3.9, 3.10
> > > > * We use miniconda 4.7.10 (python package management system and
> > > environment
> > > > management system) which supports the following python versions: 3.7,
> > > 3.8,
> > > > 3.9, 3.10
> > > > * Our python framework is not supporting anything but x86_64
> > > >
> > > > Issues:
> > > > * Python 3.7.17 is the latest security patch of the 3.7 line. This
> > > version
> > > > is end-of-life and is no longer supported:
> > > > https://www.python.org/downloads/release/python-3717/
> > > > * Miniconda 4.7.10 is released on 2019-07-29 which is 4 years old
> > already
> > > > and not supporting too many architectures (x86_64 and ppc64le)
> > > > * The latest miniconda which has real multi-arch feature set supports
> > the
> > > > following python versions: 3.8, 3.9, 3.10, 3.11 and no 3.7 support
> > > >
> > > > Suggestion to solve the issues:
> > > > * In 1.19 drop python 3.7 support and upgrade miniconda to the latest
> > > > version which opens the door to other platform + python 3.11 support
> > > >
> > > > Please note python 3.11 support is not initiated/discussed here.
> > > >
> > > > BR,
> > > > G
> > > >
> > >
> >
>


-- 
Best regards,
Sergey


Re: [VOTE] Release flink-connector-hbase v3.0.0, release candidate 2

2023-09-04 Thread Samrat Deb
Hi, 

+1 (non-binding)

Verified NOTICE files 
Verified checksums and signatures 
Glanced through PR [1], looks good to me 

Bests, 
Samrat 

[1]https://github.com/apache/flink-web/pull/591


> On 04-Sep-2023, at 2:22 PM, Ahmed Hamdy  wrote:
> 
> Hi Martijn,
> +1 (non-binding)
> 
> - verified Checksums and signatures
> - no binaries in source
> - Checked NOTICE files contains migrated artifacts
> - tag is correct
> - Approved Web PR
> 
> Best Regards
> Ahmed Hamdy
> 
> 
> On Fri, 1 Sept 2023 at 15:35, Martijn Visser 
> wrote:
> 
>> Hi everyone,
>> 
>> Please review and vote on the release candidate #2 for the version 3.0.0,
>> as follows:
>> [ ] +1, Approve the release
>> [ ] -1, Do not approve the release (please provide specific comments)
>> 
>> The complete staging area is available for your review, which includes:
>> * JIRA release notes [1],
>> * the official Apache source release to be deployed to dist.apache.org
>> [2],
>> which are signed with the key with fingerprint
>> A5F3BCE4CBE993573EC5966A65321B8382B219AF [3],
>> * all artifacts to be deployed to the Maven Central Repository [4],
>> * source code tag v3.0.0-rc2 [5],
>> * website pull request listing the new release [6].
>> 
>> This replaces the old, cancelled vote of RC1 [7]. This version is the
>> externalized version which is compatible with Flink 1.16 and 1.17.
>> 
>> The vote will be open for at least 72 hours. It is adopted by majority
>> approval, with at least 3 PMC affirmative votes.
>> 
>> Thanks,
>> Release Manager
>> 
>> [1]
>> 
>> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12352578
>> [2]
>> 
>> https://dist.apache.org/repos/dist/dev/flink/flink-connector-hbase-3.0.0-rc2
>> [3] https://dist.apache.org/repos/dist/release/flink/KEYS
>> [4] https://repository.apache.org/content/repositories/orgapacheflink-1650
>> [5]
>> https://github.com/apache/flink-connector-hbase/releases/tag/v3.0.0-rc2
>> [6] https://github.com/apache/flink-web/pull/591
>> [7] https://lists.apache.org/thread/wbl6sc86q9s5mmz5slx4z09svh91cpr0
>> 



Re: [VOTE] Release flink-connector-hbase v3.0.0, release candidate 2

2023-09-04 Thread Sergey Nuyanzin
Hi Martijn,
thanks a lot for taking this

+1 (non-binding)

- checked signatures and checksums
- checked that no binaries
- checked NOTICE and LICENSE
- checked web PR

There is one minor thing I noticed: the connector still uses
"io.github.zentol.flink" for flink-connector-parent.
I submitted a PR [1] to fix that; however, it shouldn't be a blocker.

[1] https://github.com/apache/flink-connector-hbase/pull/18


On Mon, Sep 4, 2023 at 10:52 AM Ahmed Hamdy  wrote:

> Hi Martijn,
> +1 (non-binding)
>
> - verified Checksums and signatures
> - no binaries in source
> - Checked NOTICE files contains migrated artifacts
> - tag is correct
> - Approved Web PR
>
> Best Regards
> Ahmed Hamdy
>
>
> On Fri, 1 Sept 2023 at 15:35, Martijn Visser 
> wrote:
>
> > Hi everyone,
> >
> > Please review and vote on the release candidate #2 for the version 3.0.0,
> > as follows:
> > [ ] +1, Approve the release
> > [ ] -1, Do not approve the release (please provide specific comments)
> >
> > The complete staging area is available for your review, which includes:
> > * JIRA release notes [1],
> > * the official Apache source release to be deployed to dist.apache.org
> > [2],
> > which are signed with the key with fingerprint
> > A5F3BCE4CBE993573EC5966A65321B8382B219AF [3],
> > * all artifacts to be deployed to the Maven Central Repository [4],
> > * source code tag v3.0.0-rc2 [5],
> > * website pull request listing the new release [6].
> >
> > This replaces the old, cancelled vote of RC1 [7]. This version is the
> > externalized version which is compatible with Flink 1.16 and 1.17.
> >
> > The vote will be open for at least 72 hours. It is adopted by majority
> > approval, with at least 3 PMC affirmative votes.
> >
> > Thanks,
> > Release Manager
> >
> > [1]
> >
> >
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12352578
> > [2]
> >
> >
> https://dist.apache.org/repos/dist/dev/flink/flink-connector-hbase-3.0.0-rc2
> > [3] https://dist.apache.org/repos/dist/release/flink/KEYS
> > [4]
> https://repository.apache.org/content/repositories/orgapacheflink-1650
> > [5]
> > https://github.com/apache/flink-connector-hbase/releases/tag/v3.0.0-rc2
> > [6] https://github.com/apache/flink-web/pull/591
> > [7] https://lists.apache.org/thread/wbl6sc86q9s5mmz5slx4z09svh91cpr0
> >
>


-- 
Best regards,
Sergey


[jira] [Created] (FLINK-33027) Users should be able to change parallelism of excluded vertices

2023-09-04 Thread Gyula Fora (Jira)
Gyula Fora created FLINK-33027:
--

 Summary: Users should be able to change parallelism of excluded 
vertices
 Key: FLINK-33027
 URL: https://issues.apache.org/jira/browse/FLINK-33027
 Project: Flink
  Issue Type: Bug
  Components: Autoscaler, Kubernetes Operator
Affects Versions: kubernetes-operator-1.6.0
Reporter: Gyula Fora
 Fix For: kubernetes-operator-1.7.0


Currently it's not possible to manually override any parallelism even for 
excluded vertices. We should allow this for manually excluded ones.





[jira] [Created] (FLINK-33026) The chinese doc of sql 'Performance Tuning' has a wrong title in the index page

2023-09-04 Thread lincoln lee (Jira)
lincoln lee created FLINK-33026:
---

 Summary: The chinese doc of sql 'Performance Tuning' has a wrong 
title in the index page
 Key: FLINK-33026
 URL: https://issues.apache.org/jira/browse/FLINK-33026
 Project: Flink
  Issue Type: Improvement
  Components: Documentation
Reporter: lincoln lee
 Attachments: image-2023-09-04-13-35-20-832.png, 
image-2023-09-04-13-36-02-139.png

The chinese doc of sql 'Performance Tuning' has a wrong title in the index page
 !image-2023-09-04-13-36-02-139.png! 

 !image-2023-09-04-13-35-20-832.png! 





[jira] [Created] (FLINK-33025) BatchArrowPythonOverWindowAggregateFunctionOperatorTest.testFinishBundleTriggeredByCount fails on AZP

2023-09-04 Thread Sergey Nuyanzin (Jira)
Sergey Nuyanzin created FLINK-33025:
---

 Summary: 
BatchArrowPythonOverWindowAggregateFunctionOperatorTest.testFinishBundleTriggeredByCount
 fails on AZP
 Key: FLINK-33025
 URL: https://issues.apache.org/jira/browse/FLINK-33025
 Project: Flink
  Issue Type: Bug
  Components: API / Python
Reporter: Sergey Nuyanzin


This build 
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=52958=logs=5cae8624-c7eb-5c51-92d3-4d2dacedd221=5acec1b4-945b-59ca-34f8-168928ce5199=22618
 fails on AZP as
{noformat}
Sep 03 05:05:38 05:05:38.220 [ERROR] Failures: 
Sep 03 05:05:38 05:05:38.220 [ERROR]   
BatchArrowPythonOverWindowAggregateFunctionOperatorTest.testFinishBundleTriggeredByCount:122->ArrowPythonAggregateFunctionOperatorTestBase.assertOutputEquals:62
 
Sep 03 05:05:38 Expected size: 4 but was: 3 in:
Sep 03 05:05:38 [Record @ (undef) : +I(c1,c2,0,0,0),
Sep 03 05:05:38 Record @ (undef) : +I(c1,c4,1,0,0),
Sep 03 05:05:38 Record @ (undef) : +I(c1,c6,2,10,2)]

{noformat}




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-33024) [JUnit5 Migration] Module: flink-table-planner (TableTestBase)

2023-09-04 Thread Jiabao Sun (Jira)
Jiabao Sun created FLINK-33024:
--

 Summary: [JUnit5 Migration] Module: flink-table-planner 
(TableTestBase)
 Key: FLINK-33024
 URL: https://issues.apache.org/jira/browse/FLINK-33024
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / Planner
Affects Versions: 1.18.0
Reporter: Jiabao Sun


[JUnit5 Migration] Module: flink-table-planner (TableTestBase)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [VOTE] Release flink-connector-hbase v3.0.0, release candidate 2

2023-09-04 Thread Ahmed Hamdy
Hi Martijn,
+1 (non-binding)

- verified Checksums and signatures
- no binaries in source
- Checked NOTICE files contain migrated artifacts
- tag is correct
- Approved Web PR

Best Regards
Ahmed Hamdy


On Fri, 1 Sept 2023 at 15:35, Martijn Visser 
wrote:

> Hi everyone,
>
> Please review and vote on the release candidate #2 for the version 3.0.0,
> as follows:
> [ ] +1, Approve the release
> [ ] -1, Do not approve the release (please provide specific comments)
>
> The complete staging area is available for your review, which includes:
> * JIRA release notes [1],
> * the official Apache source release to be deployed to dist.apache.org
> [2],
> which are signed with the key with fingerprint
> A5F3BCE4CBE993573EC5966A65321B8382B219AF [3],
> * all artifacts to be deployed to the Maven Central Repository [4],
> * source code tag v3.0.0-rc2 [5],
> * website pull request listing the new release [6].
>
> This replaces the old, cancelled vote of RC1 [7]. This version is the
> externalized version which is compatible with Flink 1.16 and 1.17.
>
> The vote will be open for at least 72 hours. It is adopted by majority
> approval, with at least 3 PMC affirmative votes.
>
> Thanks,
> Release Manager
>
> [1]
>
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522=12352578
> [2]
>
> https://dist.apache.org/repos/dist/dev/flink/flink-connector-hbase-3.0.0-rc2
> [3] https://dist.apache.org/repos/dist/release/flink/KEYS
> [4] https://repository.apache.org/content/repositories/orgapacheflink-1650
> [5]
> https://github.com/apache/flink-connector-hbase/releases/tag/v3.0.0-rc2
> [6] https://github.com/apache/flink-web/pull/591
> [7] https://lists.apache.org/thread/wbl6sc86q9s5mmz5slx4z09svh91cpr0
>


[jira] [Created] (FLINK-33023) [JUnit5 Migration] Module: flink-table-planner (TableTestBase)

2023-09-04 Thread Jiabao Sun (Jira)
Jiabao Sun created FLINK-33023:
--

 Summary: [JUnit5 Migration] Module: flink-table-planner 
(TableTestBase)
 Key: FLINK-33023
 URL: https://issues.apache.org/jira/browse/FLINK-33023
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / Planner
Affects Versions: 1.18.0
Reporter: Jiabao Sun


[JUnit5 Migration] Module: flink-table-planner (TableTestBase)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-33022) When FailureEnricherUtils load FailureEnricherFactory failed should throw exception or add some error logs

2023-09-04 Thread Matt Wang (Jira)
Matt Wang created FLINK-33022:
-

 Summary: When FailureEnricherUtils load FailureEnricherFactory 
failed should throw exception or add some error logs
 Key: FLINK-33022
 URL: https://issues.apache.org/jira/browse/FLINK-33022
 Project: Flink
  Issue Type: Sub-task
  Components: Runtime / Coordination
Affects Versions: 1.18.0
Reporter: Matt Wang


If `jobmanager.failure-enrichers` is configured but the class cannot be loaded in 
FailureEnricherUtils, no exception is currently visible in the log, which makes 
the problem very inconvenient to diagnose. I suggest adding some ERROR-level 
logs, or throwing an exception directly, because silently failing to load a 
configured enricher is not an expected result.
{code:java}
@VisibleForTesting
static Collection<FailureEnricher> getFailureEnrichers(
        final Configuration configuration, final PluginManager pluginManager) {
    Set<String> includedEnrichers = getIncludedFailureEnrichers(configuration);
    LOG.info("includedEnrichers: {}", includedEnrichers);
    // When empty, NO enrichers will be started.
    if (includedEnrichers.isEmpty()) {
        return Collections.emptySet();
    }
    // TODO: here maybe load nothing
    final Iterator<FailureEnricherFactory> factoryIterator =
            pluginManager.load(FailureEnricherFactory.class);

} {code}
 
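The suggested fix could look roughly like the following JDK-only sketch, where `EnricherFactory` and `getFactories` are illustrative stand-ins for Flink's `FailureEnricherFactory` and the method above, not the actual API:

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.Collections;
import java.util.Iterator;
import java.util.List;
import java.util.Set;

class EnricherLoadingSketch {

    /** Stand-in for Flink's FailureEnricherFactory plugin interface. */
    interface EnricherFactory {}

    /**
     * Loads factories and fails loudly when enrichers were configured but none
     * could be loaded, instead of silently returning an empty collection.
     */
    static Collection<EnricherFactory> getFactories(
            Set<String> includedEnrichers, Iterator<EnricherFactory> loaded) {
        if (includedEnrichers.isEmpty()) {
            // Nothing configured: starting no enrichers is the expected outcome.
            return Collections.emptySet();
        }
        List<EnricherFactory> factories = new ArrayList<>();
        loaded.forEachRemaining(factories::add);
        if (factories.isEmpty()) {
            // Previously this situation left no trace in the logs.
            throw new IllegalStateException(
                    "jobmanager.failure-enrichers was set to "
                            + includedEnrichers
                            + " but no factory could be loaded from the plugin manager");
        }
        return factories;
    }
}
```

Whether to throw or only log at ERROR level is the open question of the ticket; the sketch throws, which makes the misconfiguration impossible to miss.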



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-33021) AWS nightly builds fails on architecture tests

2023-09-04 Thread Martijn Visser (Jira)
Martijn Visser created FLINK-33021:
--

 Summary: AWS nightly builds fails on architecture tests
 Key: FLINK-33021
 URL: https://issues.apache.org/jira/browse/FLINK-33021
 Project: Flink
  Issue Type: Bug
  Components: Connectors / AWS
Affects Versions: aws-connector-4.2.0
Reporter: Martijn Visser


https://github.com/apache/flink-connector-aws/actions/runs/6067488560/job/16459208589#step:9:879

{code:java}
Error:  Failures: 
Error:Architecture Violation [Priority: MEDIUM] - Rule 'ITCASE tests should 
use a MiniCluster resource or extension' was violated (1 times):
org.apache.flink.connector.firehose.sink.KinesisFirehoseSinkITCase does not 
satisfy: only one of the following predicates match:
* reside in a package 'org.apache.flink.runtime.*' and contain any fields that 
are static, final, and of type InternalMiniClusterExtension and annotated with 
@RegisterExtension
* reside outside of package 'org.apache.flink.runtime.*' and contain any fields 
that are static, final, and of type MiniClusterExtension and annotated with 
@RegisterExtension or are , and of type MiniClusterTestEnvironment and 
annotated with @TestEnv
* reside in a package 'org.apache.flink.runtime.*' and is annotated with 
@ExtendWith with class InternalMiniClusterExtension
* reside outside of package 'org.apache.flink.runtime.*' and is annotated with 
@ExtendWith with class MiniClusterExtension
 or contain any fields that are public, static, and of type 
MiniClusterWithClientResource and final and annotated with @ClassRule or 
contain any fields that is of type MiniClusterWithClientResource and public and 
final and not static and annotated with @Rule
[INFO] 
Error:  Tests run: 21, Failures: 1, Errors: 0, Skipped: 0
{code}
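For context, the violated rule essentially demands that an ITCASE class exposes a MiniCluster resource, for example as a static final field annotated with `@RegisterExtension`. Below is a JDK-only sketch of the kind of reflective structural check such a rule performs; the annotation and extension types are stand-ins, not JUnit's or Flink's real classes:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Field;
import java.lang.reflect.Modifier;

class MiniClusterRuleSketch {

    /** Stand-in for JUnit 5's @RegisterExtension. */
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.FIELD)
    @interface RegisterExtension {}

    /** Stand-in for Flink's MiniClusterExtension. */
    static class MiniClusterExtension {}

    /** A compliant test class: static final extension field with the annotation. */
    static class CompliantITCase {
        @RegisterExtension
        static final MiniClusterExtension MINI_CLUSTER = new MiniClusterExtension();
    }

    /** A violating test class: declares no MiniCluster resource at all. */
    static class ViolatingITCase {}

    /** True if the class declares a static final annotated MiniClusterExtension field. */
    static boolean satisfiesRule(Class<?> testClass) {
        for (Field f : testClass.getDeclaredFields()) {
            boolean staticFinal =
                    Modifier.isStatic(f.getModifiers()) && Modifier.isFinal(f.getModifiers());
            if (staticFinal
                    && f.getType() == MiniClusterExtension.class
                    && f.isAnnotationPresent(RegisterExtension.class)) {
                return true;
            }
        }
        return false;
    }
}
```

The likely fix for the failing `KinesisFirehoseSinkITCase` is therefore to add such a field (or the equivalent `@ExtendWith` annotation) rather than to change the rule.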



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-33020) OpensearchSinkTest.testAtLeastOnceSink timed out

2023-09-04 Thread Martijn Visser (Jira)
Martijn Visser created FLINK-33020:
--

 Summary: OpensearchSinkTest.testAtLeastOnceSink timed out
 Key: FLINK-33020
 URL: https://issues.apache.org/jira/browse/FLINK-33020
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Opensearch
Affects Versions: opensearch-1.0.2
Reporter: Martijn Visser


https://github.com/apache/flink-connector-opensearch/actions/runs/6061205003/job/16446139552#step:13:1029

{code:java}
Error:  Tests run: 9, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 9.837 s 
<<< FAILURE! - in 
org.apache.flink.streaming.connectors.opensearch.OpensearchSinkTest
Error:  
org.apache.flink.streaming.connectors.opensearch.OpensearchSinkTest.testAtLeastOnceSink
  Time elapsed: 5.022 s  <<< ERROR!
java.util.concurrent.TimeoutException: testAtLeastOnceSink() timed out after 5 
seconds
at 
org.junit.jupiter.engine.extension.TimeoutInvocation.createTimeoutException(TimeoutInvocation.java:70)
at 
org.junit.jupiter.engine.extension.TimeoutInvocation.proceed(TimeoutInvocation.java:59)
at 
org.junit.jupiter.engine.extension.TimeoutExtension.intercept(TimeoutExtension.java:149)
at 
org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestableMethod(TimeoutExtension.java:140)
at 
org.junit.jupiter.engine.extension.TimeoutExtension.interceptTestMethod(TimeoutExtension.java:84)
at 
org.junit.jupiter.engine.execution.ExecutableInvoker$ReflectiveInterceptorCall.lambda$ofVoidMethod$0(ExecutableInvoker.java:115)
at 
org.junit.jupiter.engine.execution.ExecutableInvoker.lambda$invoke$0(ExecutableInvoker.java:105)
at 
org.junit.jupiter.engine.execution.InvocationInterceptorChain$InterceptedInvocation.proceed(InvocationInterceptorChain.java:106)
at 
org.junit.jupiter.engine.execution.InvocationInterceptorChain.proceed(InvocationInterceptorChain.java:64)
at 
org.junit.jupiter.engine.execution.InvocationInterceptorChain.chainAndInvoke(InvocationInterceptorChain.java:45)
at 
org.junit.jupiter.engine.execution.InvocationInterceptorChain.invoke(InvocationInterceptorChain.java:37)
at 
org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:104)
at 
org.junit.jupiter.engine.execution.ExecutableInvoker.invoke(ExecutableInvoker.java:98)
at 
org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$invokeTestMethod$7(TestMethodTestDescriptor.java:214)
at 
org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
at 
org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.invokeTestMethod(TestMethodTestDescriptor.java:210)
at 
org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:135)
at 
org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.execute(TestMethodTestDescriptor.java:66)
at 
org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:151)
at 
org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
at 
org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:141)
at 
org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137)
at 
org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$9(NodeTestTask.java:139)
at 
org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
at 
org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:138)
at 
org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:95)
at java.base/java.util.ArrayList.forEach(ArrayList.java:1541)
at 
org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:41)
at 
org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:155)
at 
org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
at 
org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:141)
at 
org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137)
at 
org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$9(NodeTestTask.java:139)
at 
org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
at 
org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:138)
at 

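The timeout above is raised by junit-jupiter's `@Timeout` machinery. Mechanically it resembles running the test body on a separate thread and bounding the wait, as in this JDK-only sketch (an illustration, not JUnit's actual implementation):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

class TimeoutSketch {

    /** Runs the task and throws TimeoutException if it exceeds the budget. */
    static void runWithTimeout(Runnable task, long timeoutMillis) throws Exception {
        ExecutorService executor = Executors.newSingleThreadExecutor();
        try {
            Future<?> future = executor.submit(task);
            // Bounded wait, analogous to a test annotated with @Timeout(5).
            future.get(timeoutMillis, TimeUnit.MILLISECONDS);
        } finally {
            executor.shutdownNow(); // interrupt a still-running task
        }
    }
}
```

If the test is genuinely slow rather than deadlocked, raising the annotation's budget would make it pass; if it hangs, the thread dump at timeout is the place to look.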
[jira] [Created] (FLINK-33019) Pulsar tests hangs during nightly builds

2023-09-04 Thread Martijn Visser (Jira)
Martijn Visser created FLINK-33019:
--

 Summary: Pulsar tests hangs during nightly builds
 Key: FLINK-33019
 URL: https://issues.apache.org/jira/browse/FLINK-33019
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Pulsar
Reporter: Martijn Visser


https://github.com/apache/flink-connector-pulsar/actions/runs/6067569890/job/16459404675#step:13:25195

The thread dump shows multiple parked/sleeping threads, with no clear indicator 
of what's wrong.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-33018) GCP Pubsub PubSubConsumingTest.testStoppingConnectorWhenDeserializationSchemaIndicatesEndOfStream failed

2023-09-04 Thread Martijn Visser (Jira)
Martijn Visser created FLINK-33018:
--

 Summary: GCP Pubsub 
PubSubConsumingTest.testStoppingConnectorWhenDeserializationSchemaIndicatesEndOfStream
 failed
 Key: FLINK-33018
 URL: https://issues.apache.org/jira/browse/FLINK-33018
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Google Cloud PubSub
Affects Versions: gcp-pubsub-3.0.2
Reporter: Martijn Visser


https://github.com/apache/flink-connector-gcp-pubsub/actions/runs/6061318336/job/16446392844#step:13:507

{code:java}
[INFO] 
[INFO] Results:
[INFO] 
Error:  Failures: 
Error:
PubSubConsumingTest.testStoppingConnectorWhenDeserializationSchemaIndicatesEndOfStream:119
 
expected: ["1", "2", "3"]
 but was: ["1", "2"]
[INFO] 
Error:  Tests run: 30, Failures: 1, Errors: 0, Skipped: 0
[INFO] 
[INFO] 
{code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-33017) Nightly run for Flink Kafka connector fails

2023-09-04 Thread Martijn Visser (Jira)
Martijn Visser created FLINK-33017:
--

 Summary: Nightly run for Flink Kafka connector fails
 Key: FLINK-33017
 URL: https://issues.apache.org/jira/browse/FLINK-33017
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Kafka
Affects Versions: kafka-3.1.0
Reporter: Martijn Visser


https://github.com/apache/flink-connector-kafka/actions/runs/6061283403/job/16446313350#step:13:54462

{code:java}
2023-09-03T00:29:28.8942615Z [ERROR] Errors: 
2023-09-03T00:29:28.8942799Z [ERROR] 
FlinkKafkaConsumerBaseMigrationTest.testRestore
2023-09-03T00:29:28.8943079Z [ERROR]   Run 1: Could not initialize class 
org.apache.flink.runtime.util.config.memory.ManagedMemoryUtils
2023-09-03T00:29:28.8943342Z [ERROR]   Run 2: Could not initialize class 
org.apache.flink.runtime.util.config.memory.ManagedMemoryUtils
2023-09-03T00:29:28.8943604Z [ERROR]   Run 3: Could not initialize class 
org.apache.flink.runtime.util.config.memory.ManagedMemoryUtils
2023-09-03T00:29:28.8943903Z [ERROR]   Run 4: Could not initialize class 
org.apache.flink.runtime.util.config.memory.ManagedMemoryUtils
2023-09-03T00:29:28.8944164Z [ERROR]   Run 5: Could not initialize class 
org.apache.flink.runtime.util.config.memory.ManagedMemoryUtils
2023-09-03T00:29:28.8944419Z [ERROR]   Run 6: Could not initialize class 
org.apache.flink.runtime.util.config.memory.ManagedMemoryUtils
2023-09-03T00:29:28.8944714Z [ERROR]   Run 7: Could not initialize class 
org.apache.flink.runtime.util.config.memory.ManagedMemoryUtils
2023-09-03T00:29:28.8944970Z [ERROR]   Run 8: Could not initialize class 
org.apache.flink.runtime.util.config.memory.ManagedMemoryUtils
2023-09-03T00:29:28.8945221Z [ERROR]   Run 9: Could not initialize class 
org.apache.flink.runtime.util.config.memory.ManagedMemoryUtils
2023-09-03T00:29:28.8945294Z [INFO] 
2023-09-03T00:29:28.8945577Z [ERROR] 
FlinkKafkaConsumerBaseMigrationTest.testRestoreFromEmptyStateNoPartitions
2023-09-03T00:29:28.8945769Z [ERROR]   Run 1: 
org/apache/flink/shaded/guava31/com/google/common/collect/ImmutableList
2023-09-03T00:29:28.8946019Z [ERROR]   Run 2: Could not initialize class 
org.apache.flink.runtime.util.config.memory.ManagedMemoryUtils
2023-09-03T00:29:28.8946266Z [ERROR]   Run 3: Could not initialize class 
org.apache.flink.runtime.util.config.memory.ManagedMemoryUtils
2023-09-03T00:29:28.8946525Z [ERROR]   Run 4: Could not initialize class 
org.apache.flink.runtime.util.config.memory.ManagedMemoryUtils
2023-09-03T00:29:28.8946778Z [ERROR]   Run 5: Could not initialize class 
org.apache.flink.runtime.util.config.memory.ManagedMemoryUtils
2023-09-03T00:29:28.8947027Z [ERROR]   Run 6: Could not initialize class 
org.apache.flink.runtime.util.config.memory.ManagedMemoryUtils
2023-09-03T00:29:28.8947269Z [ERROR]   Run 7: Could not initialize class 
org.apache.flink.runtime.util.config.memory.ManagedMemoryUtils
2023-09-03T00:29:28.8947516Z [ERROR]   Run 8: Could not initialize class 
org.apache.flink.runtime.util.config.memory.ManagedMemoryUtils
2023-09-03T00:29:28.8947765Z [ERROR]   Run 9: Could not initialize class 
org.apache.flink.runtime.util.config.memory.ManagedMemoryUtils
2023-09-03T00:29:28.8947834Z [INFO] 
2023-09-03T00:29:28.8948117Z [ERROR] 
FlinkKafkaConsumerBaseMigrationTest.testRestoreFromEmptyStateWithPartitions
2023-09-03T00:29:28.8948407Z [ERROR]   Run 1: Could not initialize class 
org.apache.flink.runtime.util.config.memory.ManagedMemoryUtils
2023-09-03T00:29:28.8948660Z [ERROR]   Run 2: Could not initialize class 
org.apache.flink.runtime.util.config.memory.ManagedMemoryUtils
2023-09-03T00:29:28.8948949Z [ERROR]   Run 3: Could not initialize class 
org.apache.flink.runtime.util.config.memory.ManagedMemoryUtils
2023-09-03T00:29:28.8949192Z [ERROR]   Run 4: Could not initialize class 
org.apache.flink.runtime.util.config.memory.ManagedMemoryUtils
2023-09-03T00:29:28.8949433Z [ERROR]   Run 5: Could not initialize class 
org.apache.flink.runtime.util.config.memory.ManagedMemoryUtils
2023-09-03T00:29:28.8949673Z [ERROR]   Run 6: Could not initialize class 
org.apache.flink.runtime.util.config.memory.ManagedMemoryUtils
2023-09-03T00:29:28.8949913Z [ERROR]   Run 7: Could not initialize class 
org.apache.flink.runtime.util.config.memory.ManagedMemoryUtils
2023-09-03T00:29:28.8950155Z [ERROR]   Run 8: Could not initialize class 
org.apache.flink.runtime.util.config.memory.ManagedMemoryUtils
2023-09-03T00:29:28.8950518Z [ERROR]   Run 9: Could not initialize class 
org.apache.flink.runtime.util.config.memory.ManagedMemoryUtils
2023-09-03T00:29:28.8950598Z [INFO] 
2023-09-03T00:29:28.8950819Z [ERROR] 
FlinkKafkaProducerMigrationOperatorTest.testRestoreProducer
2023-09-03T00:29:28.8951072Z [ERROR]   Run 1: Could not initialize class 
org.apache.flink.runtime.util.config.memory.ManagedMemoryUtils
2023-09-03T00:29:28.8951318Z [ERROR]   Run 2: Could not initialize class 

Re: [DISCUSS] Drop python 3.7 support in 1.19

2023-09-04 Thread Gyula Fóra
+1
Thanks for looking into this.

Gyula

On Mon, Sep 4, 2023 at 8:26 AM Matthias Pohl 
wrote:

> Thanks Gabor for looking into it. It sounds reasonable to me as well.
>
> +1
>
> On Sun, Sep 3, 2023 at 5:44 PM Márton Balassi 
> wrote:
>
> > Hi Gabor,
> >
> > Thanks for bringing this up. Similarly to when we dropped Python 3.6 due
> to
> > its end of life (and added 3.10) in Flink 1.17 [1,2], it makes sense to
> > proceed to remove 3.7 and add 3.11 instead.
> >
> > +1.
> >
> > [1] https://issues.apache.org/jira/browse/FLINK-27929
> > [2] https://github.com/apache/flink/pull/21699
> >
> > Best,
> > Marton
> >
> > On Fri, Sep 1, 2023 at 10:39 AM Gabor Somogyi  >
> > wrote:
> >
> > > Hi All,
> > >
> > > I've analyzed part of the pyflink code and found some improvement
> > > possibilities.
> > > I would like to hear your thoughts on the idea.
> > >
> > > Intention:
> > > * upgrade several python-related versions to eliminate end-of-life
> > > issues and keep up with bugfixes
> > > * start to add python arm64 support
> > >
> > > Actual situation:
> > > * Flink supports the following python versions: 3.7, 3.8, 3.9, 3.10
> > > * We use miniconda 4.7.10 (a python package management system and
> > > environment management system) which supports the following python
> > > versions: 3.7, 3.8, 3.9, 3.10
> > > * Our python framework does not support anything but x86_64
> > >
> > > Issues:
> > > * Python 3.7.17 is the latest security patch of the 3.7 line. This
> > > version is end-of-life and is no longer supported:
> > > https://www.python.org/downloads/release/python-3717/
> > > * Miniconda 4.7.10 was released on 2019-07-29, which is already 4 years
> > > old, and supports only a few architectures (x86_64 and ppc64le)
> > > * The latest miniconda, which has a real multi-arch feature set, supports
> > > the following python versions: 3.8, 3.9, 3.10 and 3.11, with no 3.7 support
> > >
> > > Suggestion to solve the issues:
> > > * In 1.19, drop python 3.7 support and upgrade miniconda to the latest
> > > version, which opens the door to other platforms and python 3.11 support
> > >
> > > Please note python 3.11 support is not initiated/discussed here.
> > >
> > > BR,
> > > G
> > >
> >
>


Re: [DISCUSS] Drop python 3.7 support in 1.19

2023-09-04 Thread Matthias Pohl
Thanks Gabor for looking into it. It sounds reasonable to me as well.

+1

On Sun, Sep 3, 2023 at 5:44 PM Márton Balassi 
wrote:

> Hi Gabor,
>
> Thanks for bringing this up. Similarly to when we dropped Python 3.6 due to
> its end of life (and added 3.10) in Flink 1.17 [1,2], it makes sense to
> proceed to remove 3.7 and add 3.11 instead.
>
> +1.
>
> [1] https://issues.apache.org/jira/browse/FLINK-27929
> [2] https://github.com/apache/flink/pull/21699
>
> Best,
> Marton
>
> On Fri, Sep 1, 2023 at 10:39 AM Gabor Somogyi 
> wrote:
>
> > Hi All,
> >
> > I've analyzed part of the pyflink code and found some improvement
> > possibilities.
> > I would like to hear your thoughts on the idea.
> >
> > Intention:
> > * upgrade several python related versions to eliminate end-of-life issues
> > and keep up with bugfixes
> > * start to add python arm64 support
> >
> > Actual situation:
> > * Flink supports the following python versions: 3.7, 3.8, 3.9, 3.10
> > * We use miniconda 4.7.10 (a python package management system and
> > environment management system) which supports the following python
> > versions: 3.7, 3.8, 3.9, 3.10
> > * Our python framework does not support anything but x86_64
> >
> > Issues:
> > * Python 3.7.17 is the latest security patch of the 3.7 line. This
> > version is end-of-life and is no longer supported:
> > https://www.python.org/downloads/release/python-3717/
> > * Miniconda 4.7.10 was released on 2019-07-29, which is already 4 years
> > old, and supports only a few architectures (x86_64 and ppc64le)
> > * The latest miniconda, which has a real multi-arch feature set, supports
> > the following python versions: 3.8, 3.9, 3.10 and 3.11, with no 3.7 support
> >
> > Suggestion to solve the issues:
> > * In 1.19, drop python 3.7 support and upgrade miniconda to the latest
> > version, which opens the door to other platforms and python 3.11 support
> >
> > Please note python 3.11 support is not initiated/discussed here.
> >
> > BR,
> > G
> >
>