Re: Rescuing Queryable State from deprecation

2022-09-08 Thread Yuan Mei
Hey Ron,

Sorry for the late response. Thanks for taking the initiative to bring up
this topic; I appreciate your efforts to rescue Queryable State :-). I will
try to answer your questions as best I can!

*Why is Queryable State in the deprecation list?*
I've seen quite a few users bring up requests to make "state" queryable
over the past few years, and I personally feel state querying has a lot of
potential as well.
However, among all the use cases, I feel the current architecture of
queryable state is not well aligned with what really needs to be queried
(I will explain why later, in the third section). I think this is the main
reason the "queryable state" feature has not been actively maintained and
improved, which led to it being added to the deprecation list.

*What is the plan/future to support querying states?*
The motivation for introducing Queryable State was to provide external
applications with the capability to access real-time results from Flink
state without depending on external (k-v) stores. There are different
ways to achieve this. While Queryable State can support some of the use
cases, I am not sure whether that is the direction we want to head in
before we answer the following two questions:

1. What kind of data/query/result is expected from a user's perspective?
2. What does "real-time" mean here? Does it mean we have to read
uncommitted data (through queryable state)? Or, if the checkpoint interval
is short, is reading committed data (through the State Processor API) good
enough?

From my conversations with users/customers (including myself), we do feel
that there is data already stored in state that should not be wasted.
However, *Flink state is architected in a way that is optimized for fast
stream-processing access, fast failover, and rescaling. Hence, state
querying is far more complicated than simply "exposing states and making
the state accessible"*, as the current version of queryable state does.
Without scoping "state querying" well, it is difficult to make state
querying really usable and production-ready.
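To make the committed-vs-uncommitted distinction above concrete, here is a self-contained toy model (plain Java; the class and method names are invented for illustration and are not the Flink API) of a keyed store that serves both live (uncommitted) values and checkpointed (committed) snapshots, showing how a failover makes uncommitted reads unreliable:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative model only: a keyed store holding live state plus the
// last committed checkpoint. Not the Flink API.
class ToyStateStore {
    private final Map<String, Long> live = new HashMap<>();
    private Map<String, Long> committed = new HashMap<>();

    void update(String key, long value) { live.put(key, value); }

    // Checkpoint: snapshot the live state as the committed view.
    void checkpoint() { committed = new HashMap<>(live); }

    // Failover: live state is restored from the last committed checkpoint,
    // so uncommitted values served before the failure are silently undone.
    void restore() { live.clear(); live.putAll(committed); }

    Long queryUncommitted(String key) { return live.get(key); }    // queryable-state style
    Long queryCommitted(String key) { return committed.get(key); } // state-processor style
}

public class CommittedVsUncommitted {
    public static void main(String[] args) {
        ToyStateStore store = new ToyStateStore();
        store.update("count", 10);
        store.checkpoint();          // committed view: count = 10
        store.update("count", 42);   // not yet checkpointed

        System.out.println(store.queryUncommitted("count")); // 42 (may be rolled back)
        System.out.println(store.queryCommitted("count"));   // 10 (stable)

        store.restore();             // failover rolls live state back
        System.out.println(store.queryUncommitted("count")); // 10
    }
}
```

With a short checkpoint interval, the committed view lags only slightly behind the live view, which is why reading committed data can be "good enough" for many use cases.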

*What is the problem with the queryable state?*
As I mentioned before, the problem mainly lies in the architecture. Here is
a short list based on my observation:

1. The queryable state service is unavailable during a failover.
2. Rolling back to a previous checkpoint causes the state to roll back as well.
We can solve this problem by only reading committed data, but if that is the
case, why not simply use the State Processor API instead?
In queryable state's current model, it has to maintain multiple snapshots
and wait for checkpoint completion, which complicates the read model of the
state store.
3. It has to support multi-threaded reads from the state, which
complicates the access model and design.
4. Complicated query analysis (anything beyond a simple look-up) cannot be
done with the current architecture.
5. The queryable state's life cycle is bound to the life cycle of the Flink
job. If a job is restarted, the job ID changes, and the query has to be
changed correspondingly even for the same state.

All in all, queryable state is neither scalable nor highly available, and it
has the potential to affect normal data processing if read access is
excessive.

My colleagues and I have devoted ourselves to maintaining and improving
Flink's state-related components in the Flink community over the last
couple of years. For the reasons mentioned above (the past, the future, and
the problems faced by queryable state), I'd be hesitant to remove queryable
state from the deprecation list unless we have a clear vision of where
"state querying" is heading (the two questions I proposed).

At the same time, I'd be open and happy to discuss this in more detail
online/offline.


Best Regards,

Yuan




On Tue, Aug 30, 2022 at 11:33 AM Yun Tang  wrote:

> Hi Ron,
>
> From my understanding, the feature of queryable state is introduced to
> Flink very early but lacks maintainers, in other words, this feature is
> currently a toy instead of a production-ready tool.
>
> I have heard from users many times asking for this feature in the
> production environment, and I believe it will bring benefits to the Flink
> community if someone could take it.
>
>
> Best
> Yun Tang
> 
> From: Konstantin Knauf 
> Sent: Monday, August 29, 2022 17:54
> To: dev 
> Subject: Re: Rescuing Queryable State from deprecation
>
> Hi Ron,
>
> thank you for sharing your use case and your initiative to save Queryable
> State. Queryable State was deprecated due to a lack of maintainers and thus
> the community did not have resources to improve and develop it further. The
> deprecation notice was added to signal this lack of attention to our
> users.  On the other hand, I am not aware of anyone who is actively working
> towards actually dropping Queryable State. So, I don't think this will
> happen anytime soon.
>
> Personally, I see a lot of potential in Queryable State, but to make it
> really, really useful it still needs 

Re: [ANNOUNCE] New Apache Flink Committer - Caizhi Weng

2022-09-08 Thread Jing Ge
Congrats Caizhi!

Best regards,
Jing

On Fri, Sep 9, 2022 at 7:23 AM Benchao Li  wrote:

> Congratulations Caizhi!
>
> Jark Wu  于2022年9月9日周五 11:46写道:
>
> > Congrats ChaiZhi!
> >
> > Cheers,
> > Jark
> >
> > > 2022年9月9日 11:26,Lijie Wang  写道:
> > >
> > > Congratulations Caizhi
> > >
> > > Best,
> > > Lijie
> > >
> > > Yuxin Tan  于2022年9月9日周五 10:19写道:
> > >
> > >> Caizhi, Congratulations!
> > >>
> > >> Best,
> > >> Yuxin
> > >>
> > >>
> > >> Jane Chan  于2022年9月9日周五 10:09写道:
> > >>
> > >>> Congrats Caizhi
> > >>>
> > >>> Best,
> > >>> Jane
> > >>>
> > >>> On Fri, Sep 9, 2022 at 9:58 AM Xingbo Huang 
> > wrote:
> > >>>
> >  Congratulations Caizhi
> > 
> >  Best,
> >  Xingbo
> > 
> >  Jingsong Lee  于2022年9月9日周五 09:41写道:
> > 
> > > Hi everyone,
> > >
> > > On behalf of the PMC, I'm very happy to announce Caizhi Weng as a
> new
> > > Flink committer.
> > >
> > > Caizhi has been contributing to the Flink project for more than 3
> > > years, and has authored 150+ PRs. He is one of the key driving
> > >> members
> > > of the Flink Table Store subproject. He is responsible for the core
> > > design of transaction committing. Expanded the Hive ecosystem of
> > >> Flink
> > > Table Store. He also works in Flink SQL, helps solve the problems
> of
> > > ease of use and performance.
> > >
> > > Please join me in congratulating Caizhi for becoming a Flink
> > >> committer!
> > >
> > > Best,
> > > Jingsong
> > >
> > 
> > >>>
> > >>
> >
> >
>
> --
>
> Best,
> Benchao Li
>


Re: [ANNOUNCE] New Apache Flink Committer - Caizhi Weng

2022-09-08 Thread Benchao Li
Congratulations Caizhi!

Jark Wu wrote on Fri, Sep 9, 2022 at 11:46:

> Congrats ChaiZhi!
>
> Cheers,
> Jark
>
> > 2022年9月9日 11:26,Lijie Wang  写道:
> >
> > Congratulations Caizhi
> >
> > Best,
> > Lijie
> >
> > Yuxin Tan  于2022年9月9日周五 10:19写道:
> >
> >> Caizhi, Congratulations!
> >>
> >> Best,
> >> Yuxin
> >>
> >>
> >> Jane Chan  于2022年9月9日周五 10:09写道:
> >>
> >>> Congrats Caizhi
> >>>
> >>> Best,
> >>> Jane
> >>>
> >>> On Fri, Sep 9, 2022 at 9:58 AM Xingbo Huang 
> wrote:
> >>>
>  Congratulations Caizhi
> 
>  Best,
>  Xingbo
> 
>  Jingsong Lee  于2022年9月9日周五 09:41写道:
> 
> > Hi everyone,
> >
> > On behalf of the PMC, I'm very happy to announce Caizhi Weng as a new
> > Flink committer.
> >
> > Caizhi has been contributing to the Flink project for more than 3
> > years, and has authored 150+ PRs. He is one of the key driving
> >> members
> > of the Flink Table Store subproject. He is responsible for the core
> > design of transaction committing. Expanded the Hive ecosystem of
> >> Flink
> > Table Store. He also works in Flink SQL, helps solve the problems of
> > ease of use and performance.
> >
> > Please join me in congratulating Caizhi for becoming a Flink
> >> committer!
> >
> > Best,
> > Jingsong
> >
> 
> >>>
> >>
>
>

-- 

Best,
Benchao Li


Re: [ANNOUNCE] New Apache Flink Committer - Caizhi Weng

2022-09-08 Thread Yang Wang
Congrats Caizhi

Best,
Yang

Jark Wu wrote on Fri, Sep 9, 2022 at 11:46:

> Congrats ChaiZhi!
>
> Cheers,
> Jark
>
> > 2022年9月9日 11:26,Lijie Wang  写道:
> >
> > Congratulations Caizhi
> >
> > Best,
> > Lijie
> >
> > Yuxin Tan  于2022年9月9日周五 10:19写道:
> >
> >> Caizhi, Congratulations!
> >>
> >> Best,
> >> Yuxin
> >>
> >>
> >> Jane Chan  于2022年9月9日周五 10:09写道:
> >>
> >>> Congrats Caizhi
> >>>
> >>> Best,
> >>> Jane
> >>>
> >>> On Fri, Sep 9, 2022 at 9:58 AM Xingbo Huang 
> wrote:
> >>>
>  Congratulations Caizhi
> 
>  Best,
>  Xingbo
> 
>  Jingsong Lee  于2022年9月9日周五 09:41写道:
> 
> > Hi everyone,
> >
> > On behalf of the PMC, I'm very happy to announce Caizhi Weng as a new
> > Flink committer.
> >
> > Caizhi has been contributing to the Flink project for more than 3
> > years, and has authored 150+ PRs. He is one of the key driving
> >> members
> > of the Flink Table Store subproject. He is responsible for the core
> > design of transaction committing. Expanded the Hive ecosystem of
> >> Flink
> > Table Store. He also works in Flink SQL, helps solve the problems of
> > ease of use and performance.
> >
> > Please join me in congratulating Caizhi for becoming a Flink
> >> committer!
> >
> > Best,
> > Jingsong
> >
> 
> >>>
> >>
>
>


Re: [VOTE] Release 1.14.6, release candidate #1

2022-09-08 Thread Xingbo Huang
Thanks Jingsong for the check. My Mac hung for some minutes due to network
problems during the build, which may have caused the incomplete package. RC1
will be canceled and I'll create RC2 as soon as possible.

Best,
Xingbo

Jingsong Li wrote on Fri, Sep 9, 2022 at 12:17:

> -1
>
> shasum -a 512 flink-1.14.6-bin-scala_2.11.tgz
>
> 33d55347e93369164b166a54cd1625d617f3df92f5f788d87778c36bb7b2b7bc64d565d87f25c4fcb5f3c70ffa96fc89f9a774143459a6d810667534db1cd695
>  flink-1.14.6-bin-scala_2.11.tgz
>
> cat flink-1.14.6-bin-scala_2.11.tgz.sha512
>
> a2b1cac32778272c1741befa79832056d07977f9b714edccea9c1fdd7f73ba711beb5013c3b5f2794d53f74c3cfcfd19cc3145d86b17644a083edca4be0a4258
>  flink-1.14.6-bin-scala_2.11.tgz
>
> Looks like the sha is not correct.
>
> Best,
> Jingsong
>
> On Fri, Sep 9, 2022 at 9:56 AM Xingbo Huang  wrote:
> >
> > Hi everyone,
> >
> > Please review and vote on the release candidate #1 for the version
> 1.14.6,
> > as follows:
> > [ ] +1, Approve the release
> > [ ] -1, Do not approve the release (please provide specific comments)
> >
> > The complete staging area is available for your review, which includes:
> > * JIRA release notes [1],
> > * the official Apache source release and binary convenience releases to
> be
> > deployed to dist.apache.org [2], which are signed with the key with
> > fingerprint 3C2C9FFB59DF9F3E [3],
> > * all artifacts to be deployed to the Maven Central Repository [4],
> > * source code tag "release-1.14.6-rc1" [5],
> > * website pull request listing the new release and adding announcement
> blog
> > post [6].
> >
> > The vote will be open for at least 72 hours. It is adopted by majority
> > approval, with at least 3 PMC affirmative votes.
> >
> > Best,
> > Xingbo
> >
> > [1]
> >
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12351834
> > [2] https://dist.apache.org/repos/dist/dev/flink/flink-1.14.6-rc1/
> > [3] https://dist.apache.org/repos/dist/release/flink/KEYS
> > [4]
> https://repository.apache.org/content/repositories/orgapacheflink-1529/
> > [5] https://github.com/apache/flink/tree/release-1.14.6-rc1
> > [6] https://github.com/apache/flink-web/pull/569
>


Re: [VOTE] Release 1.14.6, release candidate #1

2022-09-08 Thread Jingsong Li
-1

shasum -a 512 flink-1.14.6-bin-scala_2.11.tgz
33d55347e93369164b166a54cd1625d617f3df92f5f788d87778c36bb7b2b7bc64d565d87f25c4fcb5f3c70ffa96fc89f9a774143459a6d810667534db1cd695
 flink-1.14.6-bin-scala_2.11.tgz

cat flink-1.14.6-bin-scala_2.11.tgz.sha512
a2b1cac32778272c1741befa79832056d07977f9b714edccea9c1fdd7f73ba711beb5013c3b5f2794d53f74c3cfcfd19cc3145d86b17644a083edca4be0a4258
 flink-1.14.6-bin-scala_2.11.tgz

Looks like the sha is not correct.
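For reference, this kind of mismatch can be checked mechanically. A minimal sketch (the file name is reused from this thread, but the payload here is an invented stand-in; in a real check you would download the tarball and its `.sha512` from dist.apache.org) of verifying an artifact against its detached checksum file with GNU coreutils:

```shell
# Sketch: verify a release artifact against its detached .sha512 file.
cd "$(mktemp -d)"
printf 'stand-in payload' > flink-1.14.6-bin-scala_2.11.tgz
sha512sum flink-1.14.6-bin-scala_2.11.tgz > flink-1.14.6-bin-scala_2.11.tgz.sha512

# -c recomputes the digest and compares it with the stored one; a corrupted
# or incomplete download prints FAILED and exits non-zero.
sha512sum -c flink-1.14.6-bin-scala_2.11.tgz.sha512
```

(`shasum -a 512 -c` on macOS behaves the same way as `sha512sum -c`.)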

Best,
Jingsong

On Fri, Sep 9, 2022 at 9:56 AM Xingbo Huang  wrote:
>
> Hi everyone,
>
> Please review and vote on the release candidate #1 for the version 1.14.6,
> as follows:
> [ ] +1, Approve the release
> [ ] -1, Do not approve the release (please provide specific comments)
>
> The complete staging area is available for your review, which includes:
> * JIRA release notes [1],
> * the official Apache source release and binary convenience releases to be
> deployed to dist.apache.org [2], which are signed with the key with
> fingerprint 3C2C9FFB59DF9F3E [3],
> * all artifacts to be deployed to the Maven Central Repository [4],
> * source code tag "release-1.14.6-rc1" [5],
> * website pull request listing the new release and adding announcement blog
> post [6].
>
> The vote will be open for at least 72 hours. It is adopted by majority
> approval, with at least 3 PMC affirmative votes.
>
> Best,
> Xingbo
>
> [1]
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12351834
> [2] https://dist.apache.org/repos/dist/dev/flink/flink-1.14.6-rc1/
> [3] https://dist.apache.org/repos/dist/release/flink/KEYS
> [4] https://repository.apache.org/content/repositories/orgapacheflink-1529/
> [5] https://github.com/apache/flink/tree/release-1.14.6-rc1
> [6] https://github.com/apache/flink-web/pull/569


Re: [ANNOUNCE] New Apache Flink Committer - Caizhi Weng

2022-09-08 Thread Jark Wu
Congrats Caizhi!

Cheers,
Jark

> 2022年9月9日 11:26,Lijie Wang  写道:
> 
> Congratulations Caizhi
> 
> Best,
> Lijie
> 
> Yuxin Tan  于2022年9月9日周五 10:19写道:
> 
>> Caizhi, Congratulations!
>> 
>> Best,
>> Yuxin
>> 
>> 
>> Jane Chan  于2022年9月9日周五 10:09写道:
>> 
>>> Congrats Caizhi
>>> 
>>> Best,
>>> Jane
>>> 
>>> On Fri, Sep 9, 2022 at 9:58 AM Xingbo Huang  wrote:
>>> 
 Congratulations Caizhi
 
 Best,
 Xingbo
 
 Jingsong Lee  于2022年9月9日周五 09:41写道:
 
> Hi everyone,
> 
> On behalf of the PMC, I'm very happy to announce Caizhi Weng as a new
> Flink committer.
> 
> Caizhi has been contributing to the Flink project for more than 3
> years, and has authored 150+ PRs. He is one of the key driving
>> members
> of the Flink Table Store subproject. He is responsible for the core
> design of transaction committing. Expanded the Hive ecosystem of
>> Flink
> Table Store. He also works in Flink SQL, helps solve the problems of
> ease of use and performance.
> 
> Please join me in congratulating Caizhi for becoming a Flink
>> committer!
> 
> Best,
> Jingsong
> 
 
>>> 
>> 



Re: [ANNOUNCE] New Apache Flink Committer - Caizhi Weng

2022-09-08 Thread Lijie Wang
Congratulations Caizhi

Best,
Lijie

Yuxin Tan wrote on Fri, Sep 9, 2022 at 10:19:

> Caizhi, Congratulations!
>
> Best,
> Yuxin
>
>
> Jane Chan  于2022年9月9日周五 10:09写道:
>
> > Congrats Caizhi
> >
> > Best,
> > Jane
> >
> > On Fri, Sep 9, 2022 at 9:58 AM Xingbo Huang  wrote:
> >
> > > Congratulations Caizhi
> > >
> > > Best,
> > > Xingbo
> > >
> > > Jingsong Lee  于2022年9月9日周五 09:41写道:
> > >
> > > > Hi everyone,
> > > >
> > > > On behalf of the PMC, I'm very happy to announce Caizhi Weng as a new
> > > > Flink committer.
> > > >
> > > > Caizhi has been contributing to the Flink project for more than 3
> > > > years, and has authored 150+ PRs. He is one of the key driving
> members
> > > > of the Flink Table Store subproject. He is responsible for the core
> > > > design of transaction committing. Expanded the Hive ecosystem of
> Flink
> > > > Table Store. He also works in Flink SQL, helps solve the problems of
> > > > ease of use and performance.
> > > >
> > > > Please join me in congratulating Caizhi for becoming a Flink
> committer!
> > > >
> > > > Best,
> > > > Jingsong
> > > >
> > >
> >
>


[jira] [Created] (FLINK-29238) Wrong index information will be obtained after the downstream failover in hybrid full mode

2022-09-08 Thread Weijie Guo (Jira)
Weijie Guo created FLINK-29238:
--

 Summary: Wrong index information will be obtained after the 
downstream failover in hybrid full mode
 Key: FLINK-29238
 URL: https://issues.apache.org/jira/browse/FLINK-29238
 Project: Flink
  Issue Type: Bug
  Components: Runtime / Network
Affects Versions: 1.16.0
Reporter: Weijie Guo
 Fix For: 1.16.0


Hybrid shuffle relies on an index to read spilled data from disk. Since spilled
data may still be consumed from memory, a readable status is introduced, and
for readable buffers the FileDataManager does not do the pre-load. However,
when the downstream fails over, the previous readable status is incorrectly
reused, with the result that some buffers cannot be read correctly.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [ANNOUNCE] New Apache Flink Committer - Caizhi Weng

2022-09-08 Thread Yuxin Tan
Caizhi, Congratulations!

Best,
Yuxin


Jane Chan wrote on Fri, Sep 9, 2022 at 10:09:

> Congrats Caizhi
>
> Best,
> Jane
>
> On Fri, Sep 9, 2022 at 9:58 AM Xingbo Huang  wrote:
>
> > Congratulations Caizhi
> >
> > Best,
> > Xingbo
> >
> > Jingsong Lee  于2022年9月9日周五 09:41写道:
> >
> > > Hi everyone,
> > >
> > > On behalf of the PMC, I'm very happy to announce Caizhi Weng as a new
> > > Flink committer.
> > >
> > > Caizhi has been contributing to the Flink project for more than 3
> > > years, and has authored 150+ PRs. He is one of the key driving members
> > > of the Flink Table Store subproject. He is responsible for the core
> > > design of transaction committing. Expanded the Hive ecosystem of Flink
> > > Table Store. He also works in Flink SQL, helps solve the problems of
> > > ease of use and performance.
> > >
> > > Please join me in congratulating Caizhi for becoming a Flink committer!
> > >
> > > Best,
> > > Jingsong
> > >
> >
>


Re: [ANNOUNCE] New Apache Flink Committer - Caizhi Weng

2022-09-08 Thread Jane Chan
Congrats Caizhi

Best,
Jane

On Fri, Sep 9, 2022 at 9:58 AM Xingbo Huang  wrote:

> Congratulations Caizhi
>
> Best,
> Xingbo
>
> Jingsong Lee  于2022年9月9日周五 09:41写道:
>
> > Hi everyone,
> >
> > On behalf of the PMC, I'm very happy to announce Caizhi Weng as a new
> > Flink committer.
> >
> > Caizhi has been contributing to the Flink project for more than 3
> > years, and has authored 150+ PRs. He is one of the key driving members
> > of the Flink Table Store subproject. He is responsible for the core
> > design of transaction committing. Expanded the Hive ecosystem of Flink
> > Table Store. He also works in Flink SQL, helps solve the problems of
> > ease of use and performance.
> >
> > Please join me in congratulating Caizhi for becoming a Flink committer!
> >
> > Best,
> > Jingsong
> >
>


Re: [ANNOUNCE] New Apache Flink Committer - Caizhi Weng

2022-09-08 Thread Xingbo Huang
Congratulations Caizhi

Best,
Xingbo

Jingsong Lee wrote on Fri, Sep 9, 2022 at 09:41:

> Hi everyone,
>
> On behalf of the PMC, I'm very happy to announce Caizhi Weng as a new
> Flink committer.
>
> Caizhi has been contributing to the Flink project for more than 3
> years, and has authored 150+ PRs. He is one of the key driving members
> of the Flink Table Store subproject. He is responsible for the core
> design of transaction committing. Expanded the Hive ecosystem of Flink
> Table Store. He also works in Flink SQL, helps solve the problems of
> ease of use and performance.
>
> Please join me in congratulating Caizhi for becoming a Flink committer!
>
> Best,
> Jingsong
>


[VOTE] Release 1.14.6, release candidate #1

2022-09-08 Thread Xingbo Huang
Hi everyone,

Please review and vote on the release candidate #1 for the version 1.14.6,
as follows:
[ ] +1, Approve the release
[ ] -1, Do not approve the release (please provide specific comments)

The complete staging area is available for your review, which includes:
* JIRA release notes [1],
* the official Apache source release and binary convenience releases to be
deployed to dist.apache.org [2], which are signed with the key with
fingerprint 3C2C9FFB59DF9F3E [3],
* all artifacts to be deployed to the Maven Central Repository [4],
* source code tag "release-1.14.6-rc1" [5],
* website pull request listing the new release and adding announcement blog
post [6].

The vote will be open for at least 72 hours. It is adopted by majority
approval, with at least 3 PMC affirmative votes.

Best,
Xingbo

[1]
https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12351834
[2] https://dist.apache.org/repos/dist/dev/flink/flink-1.14.6-rc1/
[3] https://dist.apache.org/repos/dist/release/flink/KEYS
[4] https://repository.apache.org/content/repositories/orgapacheflink-1529/
[5] https://github.com/apache/flink/tree/release-1.14.6-rc1
[6] https://github.com/apache/flink-web/pull/569


[ANNOUNCE] New Apache Flink Committer - Caizhi Weng

2022-09-08 Thread Jingsong Lee
Hi everyone,

On behalf of the PMC, I'm very happy to announce Caizhi Weng as a new
Flink committer.

Caizhi has been contributing to the Flink project for more than 3
years and has authored 150+ PRs. He is one of the key driving members
of the Flink Table Store subproject and is responsible for the core
design of transaction committing. He expanded the Hive ecosystem of Flink
Table Store, and he also works on Flink SQL, helping solve problems of
ease of use and performance.

Please join me in congratulating Caizhi for becoming a Flink committer!

Best,
Jingsong


[jira] [Created] (FLINK-29237) CalcITCase.testOrWithIsNullPredicate fails after update to calcite 1.27

2022-09-08 Thread Sergey Nuyanzin (Jira)
Sergey Nuyanzin created FLINK-29237:
---

 Summary: CalcITCase.testOrWithIsNullPredicate fails after update 
to calcite 1.27
 Key: FLINK-29237
 URL: https://issues.apache.org/jira/browse/FLINK-29237
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / API, Table SQL / Planner
Reporter: Sergey Nuyanzin


{noformat}
Sep 07 11:25:08 java.lang.AssertionError: 
Sep 07 11:25:08 
Sep 07 11:25:08 Results do not match for query:
Sep 07 11:25:08   
Sep 07 11:25:08 SELECT * FROM NullTable3 AS T
Sep 07 11:25:08 WHERE T.a = 1 OR T.a = 3 OR T.a IS NULL
Sep 07 11:25:08 
Sep 07 11:25:08 
Sep 07 11:25:08 Results
Sep 07 11:25:08  == Correct Result - 4 ==   == Actual Result - 2 ==
Sep 07 11:25:08  +I[1, 1, Hi]   +I[1, 1, Hi]
Sep 07 11:25:08  +I[3, 2, Hello world]  +I[3, 2, Hello world]
Sep 07 11:25:08 !+I[null, 999, NullTuple]   
Sep 07 11:25:08 !+I[null, 999, NullTuple]   
Sep 07 11:25:08 
Sep 07 11:25:08 Plan:
Sep 07 11:25:08   == Abstract Syntax Tree ==
Sep 07 11:25:08 LogicalProject(inputs=[0..2])
Sep 07 11:25:08 +- LogicalFilter(condition=[OR(=($0, 1), =($0, 3), IS 
NULL($0))])
Sep 07 11:25:08+- LogicalTableScan(table=[[default_catalog, 
default_database, NullTable3]])
Sep 07 11:25:08 
Sep 07 11:25:08 == Optimized Logical Plan ==
Sep 07 11:25:08 Calc(select=[a, b, c], where=[SEARCH(a, Sarg[1, 3; NULL AS 
TRUE])])
Sep 07 11:25:08 +- BoundedStreamScan(table=[[default_catalog, default_database, 
NullTable3]], fields=[a, b, c])
Sep 07 11:25:08 
Sep 07 11:25:08
Sep 07 11:25:08 at org.junit.Assert.fail(Assert.java:89)
Sep 07 11:25:08 at 
org.apache.flink.table.planner.runtime.utils.BatchTestBase.$anonfun$check$1(BatchTestBase.scala:154)
Sep 07 11:25:08 at 
org.apache.flink.table.planner.runtime.utils.BatchTestBase.$anonfun$check$1$adapted(BatchTestBase.scala:147)
Sep 07 11:25:08 at scala.Option.foreach(Option.scala:257)
Sep 07 11:25:08 at 
org.apache.flink.table.planner.runtime.utils.BatchTestBase.check(BatchTestBase.scala:147)
Sep 07 11:25:08 at 
org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:162)
Sep 07 11:25:08 at 
org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:562)
Sep 07 11:25:08 at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:548)
Sep 07 11:25:08 

{noformat}
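The missing two rows come down to how the `IS NULL` branch of the disjunction is evaluated. A self-contained sketch (plain Java, not Calcite or Flink code) of the intended semantics of `a = 1 OR a = 3 OR a IS NULL` over a column like the first column of `NullTable3`:

```java
import java.util.Arrays;
import java.util.List;

public class OrWithIsNull {
    // SQL semantics: "a = 1 OR a = 3 OR a IS NULL" must keep NULL rows,
    // because the IS NULL branch is TRUE for them (the "NULL AS TRUE"
    // in the Sarg of the optimized plan).
    static boolean matches(Integer a) {
        return a == null || a == 1 || a == 3;
    }

    public static void main(String[] args) {
        // Values modeled after the failure report: 1, 3, and two NULL rows,
        // plus one non-matching row.
        List<Integer> col = Arrays.asList(1, 3, null, null, 2);
        long kept = col.stream().filter(OrWithIsNull::matches).count();
        System.out.println(kept); // 4: rows 1, 3, and both NULL rows
    }
}
```

The reported "Actual Result - 2" corresponds to a filter that drops the NULL rows, i.e. one that evaluates the Sarg without honoring its NULL-AS-TRUE flag.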



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-29236) TableFactory wildcard options are not supported

2022-09-08 Thread Krishnaiah Narukulla (Jira)
Krishnaiah Narukulla created FLINK-29236:


 Summary: TableFactory wildcard options are not supported
 Key: FLINK-29236
 URL: https://issues.apache.org/jira/browse/FLINK-29236
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / API
Affects Versions: 1.15.0, 1.14.0, 1.16.0
Reporter: Krishnaiah Narukulla
 Fix For: 1.16.0


SQL API:
{code:java}
 CREATE TEMPORARY TABLE `playevents` (upload_time BIGINT, log_id STRING) WITH ( 
  'connector' = 'kafka',
  'topic' = 'topic1', 
  'properties.bootstrap.servers' = 'xxx', 
  'properties.group.id' = 'kafka-krish-test3',
  'scan.startup.mode' = 'earliest-offset', 
  'format' = 'avro-cloudera',   
  'avro-cloudera.properties.schema.registry.url' = 'yyy',
  'avro-cloudera.schema-name'='zzz'
  ) {code}
ClouderaRegistryAvroFormatFactory
{code:java}
maven.artifact(
group = "org.apache.flink",
artifact = "flink-avro-cloudera-registry",
version = "1.14.0-csadh1.6.0.1",
), {code}
returns optionalOptions as ["schema-name", "properties.*"]

https://github.com/apache/flink/blob/master/flink-table/flink-table-common/src/main/java/org/apache/flink/table/factories/FactoryUtil.java#L628
does not handle wildcard patterns, hence it throws an error.
{code:java}
Caused by: org.apache.flink.table.api.ValidationException: Unsupported options found for 'kafka'.

Unsupported options:

avro-cloudera.properties.schema.registry.url

Supported options:

avro-cloudera.properties.*
avro-cloudera.schema-name
connector
format
key.fields
key.fields-prefix
key.format
properties.bootstrap.servers
properties.group.id
property-version
scan.startup.mode
scan.startup.specific-offsets
scan.startup.timestamp-millis
scan.topic-partition-discovery.interval
sink.delivery-guarantee
sink.parallelism
sink.partitioner
sink.semantic
sink.transactional-id-prefix
topic
topic-pattern
value.fields-include
value.format
        at org.apache.flink.table.factories.FactoryUtil.validateUnconsumedKeys(FactoryUtil.java:624)
        at org.apache.flink.table.factories.FactoryUtil$FactoryHelper.validate(FactoryUtil.java:914)
        at org.apache.flink.table.factories.FactoryUtil$TableFactoryHelper.validate(FactoryUtil.java:978)
        at org.apache.flink.table.factories.FactoryUtil$FactoryHelper.validateExcept(FactoryUtil.java:938)
        at org.apache.flink.table.factories.FactoryUtil$TableFactoryHelper.validateExcept(FactoryUtil.java:978)
        at org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicTableFactory.createDynamicTableSource(KafkaDynamicTableFactory.java:176)
        at org.apache.flink.table.factories.FactoryUtil.createDynamicTableSource(FactoryUtil.java:156)
 {code}
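A wildcard-aware check is straightforward in principle. The following is a hypothetical sketch (not the actual FactoryUtil code; class and method names are invented) of matching a consumed option key against supported options that may end in `.*`:

```java
import java.util.List;

public class WildcardOptionCheck {
    // Treat a trailing ".*" in a supported option as a prefix wildcard, so
    // "avro-cloudera.properties.*" covers any key under that prefix.
    static boolean isSupported(String key, List<String> supportedOptions) {
        for (String option : supportedOptions) {
            if (option.endsWith(".*")) {
                // Keep the trailing '.' so "foo.*" does not match "foobar".
                String prefix = option.substring(0, option.length() - 1);
                if (key.startsWith(prefix)) {
                    return true;
                }
            } else if (option.equals(key)) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        List<String> supported =
                List.of("avro-cloudera.schema-name", "avro-cloudera.properties.*");
        System.out.println(
                isSupported("avro-cloudera.properties.schema.registry.url", supported)); // true
        System.out.println(isSupported("avro-cloudera.unknown", supported));             // false
    }
}
```

Under this rule, `avro-cloudera.properties.schema.registry.url` would be accepted as covered by `avro-cloudera.properties.*`, which is the behavior the ticket asks for.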



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [VOTE] FLIP-258 Guarantee binary compatibility for Public/-Evolving APIs between patch releases​

2022-09-08 Thread Alexander Fedulov
+1 (non-binding)

On Wed, Sep 7, 2022 at 9:28 PM Thomas Weise  wrote:

> +1
>
>
> On Wed, Sep 7, 2022 at 4:48 AM Danny Cranmer 
> wrote:
>
> > +1
> >
> > On Wed, 7 Sept 2022, 07:32 Zhu Zhu,  wrote:
> >
> > > +1
> > >
> > > Thanks,
> > > Zhu
> > >
> > > Jingsong Li  于2022年9月6日周二 19:49写道:
> > > >
> > > > +1
> > > >
> > > > On Tue, Sep 6, 2022 at 7:11 PM Yu Li  wrote:
> > > > >
> > > > > +1
> > > > >
> > > > > Thanks for the efforts, Chesnay
> > > > >
> > > > > Best Regards,
> > > > > Yu
> > > > >
> > > > >
> > > > > On Tue, 6 Sept 2022 at 18:17, Martijn Visser <
> > martijnvis...@apache.org
> > > >
> > > > > wrote:
> > > > >
> > > > > > +1 (binding)
> > > > > >
> > > > > > Op di 6 sep. 2022 om 11:59 schreef Xingbo Huang <
> > hxbks...@gmail.com
> > > >:
> > > > > >
> > > > > > > Thanks Chesnay for driving this,
> > > > > > >
> > > > > > > +1
> > > > > > >
> > > > > > > Best,
> > > > > > > Xingbo
> > > > > > >
> > > > > > > Xintong Song  于2022年9月6日周二 17:57写道:
> > > > > > >
> > > > > > > > +1
> > > > > > > >
> > > > > > > > Best,
> > > > > > > >
> > > > > > > > Xintong
> > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > > > On Tue, Sep 6, 2022 at 5:55 PM Konstantin Knauf <
> > > kna...@apache.org>
> > > > > > > wrote:
> > > > > > > >
> > > > > > > > > +1. Thanks, Chesnay.
> > > > > > > > >
> > > > > > > > > Am Di., 6. Sept. 2022 um 11:51 Uhr schrieb Chesnay
> Schepler <
> > > > > > > > > ches...@apache.org>:
> > > > > > > > >
> > > > > > > > > > Since no one objected in the discuss thread, let's vote!
> > > > > > > > > >
> > > > > > > > > > FLIP:
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > >
> >
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=225152857
> > > > > > > > > >
> > > > > > > > > > The vote will be open for at least 72h.
> > > > > > > > > >
> > > > > > > > > > Regards,
> > > > > > > > > > Chesnay
> > > > > > > > > >
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > --
> > > > > > > > > https://twitter.com/snntrable
> > > > > > > > > https://github.com/knaufk
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > >
> >
>


Re: Sink V2 interface replacement for GlobalCommitter

2022-09-08 Thread Krzysztof Chmielewski
Hi,
Krzysztof Chmielewski [1] from the Delta-Flink connector open-source
community here [2].

I totally agree with Steven on this. Sink V1's GlobalCommitter is exactly
what the Flink-Delta Sink needs, since it is the place where we do the
actual commit to the Delta log, which should be done from one
place/instance.

Currently I'm evaluating V2 for our connector, and having what Steven
described as a "more natural, built-in concept/support of GlobalCommitter
in the sink v2 interface" would be greatly appreciated.

Cheers,
Krzysztof Chmielewski

[1] https://github.com/kristoffSC
[2] https://github.com/delta-io/connectors/tree/master/flink
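For readers less familiar with why a single commit point matters for Delta/Iceberg-style tables: the table's transaction log is one global sequence, so parallel committers racing to append can conflict. A toy model (invented names; not the Delta, Iceberg, or Flink API) of funneling the committables from all parallel writer subtasks into one global commit per checkpoint:

```java
import java.util.ArrayList;
import java.util.List;

public class GlobalCommitToy {
    // Stand-in for a table's transaction log: one atomic append per checkpoint.
    static final List<String> transactionLog = new ArrayList<>();

    // Global committer: collects the committables produced by all parallel
    // writer subtasks for one checkpoint and commits them as a single entry,
    // so the table either sees the whole checkpoint or none of it.
    static void globalCommit(long checkpointId, List<String> committables) {
        transactionLog.add("ckp-" + checkpointId + ":" + String.join(",", committables));
    }

    public static void main(String[] args) {
        // Committables from three parallel subtasks for checkpoint 7.
        List<String> files = List.of("part-0.parquet", "part-1.parquet", "part-2.parquet");
        globalCommit(7, files);

        System.out.println(transactionLog.size()); // 1: one log entry per checkpoint
        System.out.println(transactionLog.get(0));
    }
}
```

This is the behavior the V1 GlobalCommitter provided directly, and what the thread below discusses emulating on top of the V2 interfaces.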

On Thu, Sep 8, 2022 at 19:51, Steven Wu wrote:

> Hi Yun,
>
> Thanks a lot for the reply!
>
> While we can add the global committer in the WithPostCommitTopology, the
> semantics are weird. The Commit stage actually didn't commit anything to
> the Iceberg table, and the PostCommit stage is where the Iceberg commit
> happens.
>

Re: Sink V2 interface replacement for GlobalCommitter

2022-09-08 Thread Steven Wu
Hi Yun,

Thanks a lot for the reply!

While we can add the global committer in the WithPostCommitTopology, the
semantics are odd: the commit stage does not actually commit anything to
the Iceberg table, and the post-commit stage is where the Iceberg commit
happens.
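
For concreteness, the workaround being discussed would look roughly like the
sketch below. It is only a sketch: `WithPostCommitTopology` and
`StandardSinkTopologies.addGlobalCommitter` are the Sink V2 names mentioned
in this thread, while `IcebergCommittable`, `IcebergGlobalCommitter`, and
`IcebergCommittableSerializer` are hypothetical placeholders rather than the
actual Iceberg implementation.

```java
// Sketch only -- hypothetical names, approximate signatures; not compilable
// without the Flink Sink V2 classes on the classpath.
class IcebergSink implements WithPostCommitTopology<RowData, IcebergCommittable> {

    @Override
    public void addPostCommitTopology(
            DataStream<CommittableMessage<IcebergCommittable>> committables) {
        // The actual Iceberg table commit runs here, in the *post-commit*
        // stage, while the commit stage itself stays empty -- which is the
        // semantic oddity described above.
        StandardSinkTopologies.addGlobalCommitter(
                committables,
                IcebergGlobalCommitter::new,        // hypothetical committer
                IcebergCommittableSerializer::new); // hypothetical serializer
    }

    // Writer/committer factory methods of the sink are omitted.
}
```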

I just took a quick look at DeltaLake Flink sink. It still uses the V1 sink
interface [1]. I think it might have the same issue when switching to the
V2 sink interface.

For data lake storages (like Iceberg, DeltaLake) or any storage with global
transactional commit, it would be more natural to have a built-in
concept/support of GlobalCommitter in the sink v2 interface.

Thanks,
Steven

[1]
https://github.com/delta-io/connectors/blob/master/flink/src/main/java/io/delta/flink/sink/internal/committer/DeltaGlobalCommitter.java


On Wed, Sep 7, 2022 at 2:15 AM Yun Gao  wrote:

> Hi Steven, Liwei,
> Very sorry for missing this mail and responding so late.
> I think the initial thought is indeed to use `WithPostCommitTopology` as
> a replacement of the original GlobalCommitter, and currently the adapter of
> Sink v1 on top of Sink v2 also maps the GlobalCommitter in Sink V1
> interface
> onto an implementation of `WithPostCommitTopology`.
> Since `WithPostCommitTopology` supports an arbitrary subgraph, it seems to
> me it could support both the global committer and small-file compaction. We
> might have a `WithPostCommitTopology` implementation like:
> DataStream ds = add global committer;
> if (enable file compaction) {
>  build the compaction subgraph from ds
> }
> Best,
> Yun
> [1]
> https://github.com/apache/flink/blob/a8ca381c57788cd1a1527e4ebdc19bdbcd132fc4/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/transformations/SinkV1Adapter.java#L365
> <
> https://github.com/apache/flink/blob/a8ca381c57788cd1a1527e4ebdc19bdbcd132fc4/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/transformations/SinkV1Adapter.java#L365
> >
> --
> From:Steven Wu 
> Send Time:2022 Aug. 17 (Wed.) 07:30
> To:dev ; hililiwei 
> Subject:Re: Sink V2 interface replacement for GlobalCommitter
> > Plus, it will disable the future capability of a small-file compaction
> > stage post commit.
> I should clarify this comment: if we are using the `WithPostCommitTopology`
> for the global committer, we would lose the capability of using the
> post-commit stage for small-file compaction.
> On Tue, Aug 16, 2022 at 9:53 AM Steven Wu  wrote:
> >
> > In the V1 sink interface, there is a GlobalCommitter for Iceberg. With the
> > V2 sink interface, GlobalCommitter has been deprecated in favor of
> > WithPostCommitTopology. I thought the post-commit stage was mainly for
> > async maintenance (like compaction).
> >
> > Are we supposed to do something similar to the GlobalCommittingSinkAdapter?
> > It seems like a temporary transition plan for bridging v1 sinks to v2
> > interfaces.
> >
> > private class GlobalCommittingSinkAdapter extends TwoPhaseCommittingSinkAdapter
> >         implements WithPostCommitTopology<InputT, CommT> {
> >     @Override
> >     public void addPostCommitTopology(
> >             DataStream<CommittableMessage<CommT>> committables) {
> >         StandardSinkTopologies.addGlobalCommitter(
> >                 committables,
> >                 GlobalCommitterAdapter::new,
> >                 () -> sink.getCommittableSerializer().get());
> >     }
> > }
> >
> >
> > In the Iceberg PR [1] for adopting the new sink interface, Liwei used the
> > "global" partitioner to force all committables to go to a single committer
> > task (task 0). It effectively creates a global committer disguised among
> > the parallel committers. It is a little odd and can also lead to questions
> > about why the other committer tasks are not getting any messages. Plus, it
> > will disable the future capability of a small-file compaction stage post
> > commit. Hence, I am asking what the right approach is to achieve
> > global-committer behavior.
> >
> > Thanks,
> > Steven
> >
> > [1] https://github.com/apache/iceberg/pull/4904/files#r946975047 <
> https://github.com/apache/iceberg/pull/4904/files#r946975047 >
> >
>


[jira] [Created] (FLINK-29235) CVE-2022-25857 on flink-shaded

2022-09-08 Thread Sergio Sainz (Jira)
Sergio Sainz created FLINK-29235:


 Summary: CVE-2022-25857 on flink-shaded
 Key: FLINK-29235
 URL: https://issues.apache.org/jira/browse/FLINK-29235
 Project: Flink
  Issue Type: Bug
  Components: BuildSystem / Shaded
Affects Versions: 1.15.2
Reporter: Sergio Sainz


flink-shaded uses snakeyaml v1.29, which is vulnerable to CVE-2022-25857.

Ref:

https://nvd.nist.gov/vuln/detail/CVE-2022-25857

https://repo1.maven.org/maven2/org/apache/flink/flink-shaded-jackson/2.12.4-15.0/flink-shaded-jackson-2.12.4-15.0.pom

https://github.com/apache/flink-shaded/blob/master/flink-shaded-jackson-parent/flink-shaded-jackson-2/pom.xml#L73
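
A typical mitigation (a sketch, not a reviewed patch; the pinned version is an
assumption to be checked against the snakeyaml advisory, which lists 1.31 as
the first release addressing CVE-2022-25857) would be to override the
transitive snakeyaml version in the shaded pom:

```xml
<!-- Hypothetical override for flink-shaded-jackson-parent; version assumed. -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.yaml</groupId>
      <artifactId>snakeyaml</artifactId>
      <!-- 1.31 reportedly fixes CVE-2022-25857 (DoS via deeply nested YAML) -->
      <version>1.31</version>
    </dependency>
  </dependencies>
</dependencyManagement>
```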



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-29234) Dead lock in DefaultLeaderElectionService

2022-09-08 Thread Yu Wang (Jira)
Yu Wang created FLINK-29234:
---

 Summary: Dead lock in DefaultLeaderElectionService
 Key: FLINK-29234
 URL: https://issues.apache.org/jira/browse/FLINK-29234
 Project: Flink
  Issue Type: Bug
  Components: Runtime / Coordination
Affects Versions: 1.13.5
Reporter: Yu Wang


The JobManager stops working because of a deadlock in
DefaultLeaderElectionService, which may be similar to this ticket:
https://issues.apache.org/jira/browse/FLINK-20008

Here is the jstack info:
{code:java}
Found one Java-level deadlock:
=============================
"flink-akka.actor.default-dispatcher-18":
  waiting to lock monitor 0x7f15c7eae3a8 (object 0x000678d395e8, a java.lang.Object),
  which is held by "main-EventThread"
"main-EventThread":
  waiting to lock monitor 0x7f15a3811258 (object 0x000678cf1be0, a java.lang.Object),
  which is held by "flink-akka.actor.default-dispatcher-18"

Java stack information for the threads listed above:
====================================================
"flink-akka.actor.default-dispatcher-18": at 
org.apache.flink.runtime.leaderelection.DefaultLeaderElectionService.stop(DefaultLeaderElectionService.java:104)
 - waiting to lock <0x000678d395e8> (a java.lang.Object) at 
org.apache.flink.runtime.jobmaster.JobMasterServiceLeadershipRunner.lambda$closeAsync$0(JobMasterServiceLeadershipRunner.java:147)
 at 
org.apache.flink.runtime.jobmaster.JobMasterServiceLeadershipRunner$$Lambda$735/1742012752.run(Unknown
 Source) at 
org.apache.flink.runtime.concurrent.FutureUtils.lambda$runAfterwardsAsync$18(FutureUtils.java:687)
 at 
org.apache.flink.runtime.concurrent.FutureUtils$$Lambda$736/6716561.accept(Unknown
 Source) at 
java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774)
 at 
java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750)
 at 
java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:456)
 at 
org.apache.flink.runtime.concurrent.DirectExecutorService.execute(DirectExecutorService.java:217)
 at 
java.util.concurrent.CompletableFuture$UniCompletion.claim(CompletableFuture.java:543)
 at 
java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:765)
 at 
java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750)
 at 
java.util.concurrent.CompletableFuture.uniWhenCompleteStage(CompletableFuture.java:795)
 at 
java.util.concurrent.CompletableFuture.whenCompleteAsync(CompletableFuture.java:2163)
 at 
org.apache.flink.runtime.concurrent.FutureUtils.runAfterwardsAsync(FutureUtils.java:684)
 at 
org.apache.flink.runtime.concurrent.FutureUtils.runAfterwards(FutureUtils.java:651)
 at 
org.apache.flink.runtime.jobmaster.JobMasterServiceLeadershipRunner.closeAsync(JobMasterServiceLeadershipRunner.java:143)
 - locked <0x000678cf1be0> (a java.lang.Object) at 
org.apache.flink.runtime.dispatcher.Dispatcher.terminateJob(Dispatcher.java:807)
 at 
org.apache.flink.runtime.dispatcher.Dispatcher.terminateRunningJobs(Dispatcher.java:799)
 at 
org.apache.flink.runtime.dispatcher.Dispatcher.terminateRunningJobsAndGetTerminationFuture(Dispatcher.java:812)
 at org.apache.flink.runtime.dispatcher.Dispatcher.onStop(Dispatcher.java:268) 
at 
org.apache.flink.runtime.rpc.RpcEndpoint.internalCallOnStop(RpcEndpoint.java:214)
 at 
org.apache.flink.runtime.rpc.akka.AkkaRpcActor$StartedState.terminate(AkkaRpcActor.java:563)
 at 
org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleControlMessage(AkkaRpcActor.java:186)
 at 
org.apache.flink.runtime.rpc.akka.AkkaRpcActor$$Lambda$444/1289054037.apply(Unknown
 Source) at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:26) at 
akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:21) at 
scala.PartialFunction.applyOrElse(PartialFunction.scala:123) at 
scala.PartialFunction.applyOrElse$(PartialFunction.scala:122) at 
akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:21) at 
scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171) at 
scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:172) at 
akka.actor.Actor.aroundReceive(Actor.scala:517) at 
akka.actor.Actor.aroundReceive$(Actor.scala:515) at 
akka.actor.AbstractActor.aroundReceive(AbstractActor.scala:225) at 
akka.actor.ActorCell.receiveMessage(ActorCell.scala:592) at 
akka.actor.ActorCell.invoke(ActorCell.scala:561) at 
akka.dispatch.Mailbox.processMailbox(Mailbox.scala:258) at 
akka.dispatch.Mailbox.run(Mailbox.scala:225) at 
akka.dispatch.Mailbox.exec(Mailbox.scala:235) at 
akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260) at 
akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339) 
at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979) at 
akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107) 
"main-EventThread": at 

[jira] [Created] (FLINK-29233) Flink Kubernetes Operator example/kustomize not running

2022-09-08 Thread ShiminHuang (Jira)
ShiminHuang created FLINK-29233:
---

 Summary: Flink Kubernetes Operator example/kustomize not running
 Key: FLINK-29233
 URL: https://issues.apache.org/jira/browse/FLINK-29233
 Project: Flink
  Issue Type: Bug
  Components: Kubernetes Operator
Affects Versions: kubernetes-operator-1.0.1, kubernetes-operator-1.1.0, 
kubernetes-operator-1.0.0, kubernetes-operator-0.1.0, 
kubernetes-operator-1.2.0, kubernetes-operator-1.1.1
Reporter: ShiminHuang
 Fix For: kubernetes-operator-1.2.0


h1. Question
 * I failed to pull the flink-kubernetes-operator:latest image when installing 
the Flink Kubernetes operator using the Kustomize sidecar case provided by the 
operator.
 * The following is the Helm install command and the corresponding error:

 
{code:java}
# code
helm install flink-kubernetes-operator helm/flink-kubernetes-operator -f 
flink-kubernetes-operator/examples/kustomize/sidecar/values.yaml 
--post-renderer flink-kubernetes-operator/examples/kustomize/sidecar/render 
# error
CONTAINER                  IMAGE                             READY  STATE             RESTARTS  AGE
flink-kubernetes-operator  flink-kubernetes-operator:latest  false  ImagePullBackOff  0         6m59s
flink-webhook              flink-kubernetes-operator:latest  false  ImagePullBackOff  0         6m59s
fluentbit                  fluent/fluent-bit:1.9.1-debug     true   Running           0         6m59s
{code}
 

 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-29232) File store continuous reading support from_timestamp scan mode

2022-09-08 Thread Jingsong Lee (Jira)
Jingsong Lee created FLINK-29232:


 Summary: File store continuous reading support from_timestamp scan 
mode
 Key: FLINK-29232
 URL: https://issues.apache.org/jira/browse/FLINK-29232
 Project: Flink
  Issue Type: New Feature
  Components: Table Store
Reporter: Jingsong Lee
 Fix For: table-store-0.3.0


The file store can find a suitable snapshot according to the start timestamp and 
read incremental data from it.
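
The intended selection logic can be pictured with a small illustrative sketch
(names and types here are hypothetical, not the Table Store implementation):
choose the latest snapshot committed at or before the requested start
timestamp, then read incrementally from it.

```java
import java.util.List;

class FromTimestampScanSketch {
    // A snapshot id paired with its commit timestamp (hypothetical shape).
    record Snapshot(long id, long commitMillis) {}

    // Given snapshots sorted by commit time, return the id of the latest
    // snapshot with commitMillis <= fromTimestamp, or -1 if none exists.
    // Incremental reading would then start from that snapshot.
    static long suitableSnapshot(List<Snapshot> snapshots, long fromTimestamp) {
        long candidate = -1;
        for (Snapshot s : snapshots) {
            if (s.commitMillis() <= fromTimestamp) {
                candidate = s.id();
            } else {
                break; // sorted input: later snapshots are only newer
            }
        }
        return candidate;
    }

    public static void main(String[] args) {
        List<Snapshot> snapshots = List.of(
                new Snapshot(1, 1_000L),
                new Snapshot(2, 2_000L),
                new Snapshot(3, 3_000L));
        System.out.println(suitableSnapshot(snapshots, 2_500L)); // 2
        System.out.println(suitableSnapshot(snapshots, 500L));   // -1
    }
}
```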



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [NEWS][DISCUSS] Akka moves to BSL licence

2022-09-08 Thread Chesnay Schepler
I have prepared a blogpost for the whole situation: 
https://github.com/apache/flink-web/pull/570


Any feedback/comments are highly appreciated; if you feel another 
question should be answered, or something in the post shouldn't be 
said, don't hesitate to comment.


On 07/09/2022 15:03, Etienne Chauchot wrote:

Hi all,

I'd like to share some concerning news: I've just read that Akka will 
move from the ASLv2 license to BSLv1.1 [1].


I guess this license is considered Category X [2] by the ASF and 
cannot be included in ASF projects.


Let's discuss possible solutions.

Best

Etienne

[1] 
https://www.lightbend.com/blog/why-we-are-changing-the-license-for-akka


[2] https://www.apache.org/legal/resolved.html#category-x





[jira] [Created] (FLINK-29231) PyFlink udtaf produces different results in the same sliding window

2022-09-08 Thread Xuannan Su (Jira)
Xuannan Su created FLINK-29231:
--

 Summary: PyFlink udtaf produces different results in the same 
sliding window
 Key: FLINK-29231
 URL: https://issues.apache.org/jira/browse/FLINK-29231
 Project: Flink
  Issue Type: Bug
  Components: API / Python
Affects Versions: 1.15.2
Reporter: Xuannan Su
 Attachments: image-2022-09-08-17-20-06-296.png, input, test_agg.py

It seems that PyFlink udtaf produces different results in the same sliding 
window. It can be reproduced with the given code and input. It is not always 
happening but the possibility is relatively high.

The incorrect output is the following:

!image-2022-09-08-17-20-06-296.png!

 

We can see that the output contains different `val_sum` at `window_time` 
2022-01-01 00:01:59.999.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-29230) Publish blogpost about the Akka license change

2022-09-08 Thread Chesnay Schepler (Jira)
Chesnay Schepler created FLINK-29230:


 Summary: Publish blogpost about the Akka license change
 Key: FLINK-29230
 URL: https://issues.apache.org/jira/browse/FLINK-29230
 Project: Flink
  Issue Type: Technical Debt
  Components: Project Website
Reporter: Chesnay Schepler
Assignee: Chesnay Schepler


People reached out to the project via every conceivable channel to inquire 
about the impact of the recent Akka change; let's write a blogpost to 
centralize answers to that.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [NEWS][DISCUSS] Akka moves to BSL licence

2022-09-08 Thread Etienne Chauchot

Hi Chesnay,

Yes, I came across the user thread almost at the same time you mentioned it 
here :)


That is what I thought as well: we could stick to the last ASLv2 
version as long as no CVE pops up. Alternatively, moving to a 
hypothetical ASF fork would be the easiest path for us.


Best

Etienne

Le 07/09/2022 à 16:10, Chesnay Schepler a écrit :

But fair enough, different target audiences.

For the time being we don't have to do anything. We'll just stay on 
2.6.x and see how things unfold.

We don't really care about new Akka features after all.

If a fork pops up we could switch to that, otherwise we can look into 
alternative libraries and maybe implement certain things ourselves.


As you mentioned, I've filed a ticket with LEGAL inquiring about 
whether the option in the BSL to carve out a special use grant changes 
the situation, but AFAICT at the moment it wouldn't change anything.


On 07/09/2022 16:04, Chesnay Schepler wrote:

This is already being discussed on the user ML.

See "New licensing for Akka"

On 07/09/2022 15:03, Etienne Chauchot wrote:

Hi all,

I'd like to share a concerning news. I've just read that Akka will 
move from ASFv2 license to BSLv1.1 (1)


I guess this license is considered Category X (2) by the ASF and 
cannot be included in ASF projects.


Let's discuss possible solutions.

Best

Etienne

[1] 
https://www.lightbend.com/blog/why-we-are-changing-the-license-for-akka


[2] https://www.apache.org/legal/resolved.html#category-x







Re: [DISCUSS] Releasing Flink 1.14.6

2022-09-08 Thread Xingbo Huang
Thanks everyone,

I will start with the preparation for the release.

Best,
Xingbo

Yu Li  于2022年9月8日周四 11:19写道:

> +1
>
> Best Regards,
> Yu
>
>
> On Tue, 6 Sept 2022 at 17:57, Xintong Song  wrote:
>
> > +1
> >
> > Best,
> >
> > Xintong
> >
> >
> >
> > On Tue, Sep 6, 2022 at 5:55 PM Konstantin Knauf 
> wrote:
> >
> > > Sounds good. +1.
> > >
> > > Am Di., 6. Sept. 2022 um 10:45 Uhr schrieb Jingsong Li <
> > > jingsongl...@gmail.com>:
> > >
> > > > +1 for 1.14.6
> > > >
> > > > Thanks Xingbo for driving.
> > > >
> > > > Best,
> > > > Jingsong
> > > >
> > > > On Tue, Sep 6, 2022 at 4:42 PM Xingbo Huang 
> > wrote:
> > > > >
> > > > > Hi all,
> > > > >
> > > > > I would like to start discussing releasing Flink 1.14.6.
> > > > >
> > > > > It has already been almost three months since we released 1.14.5.
> > There
> > > > are
> > > > > currently 35 tickets [1] and 33 commits [2] already resolved for
> > > > > 1.14.6, some of them quite important, such as FLINK-27399 and
> > > > > FLINK-29138.
> > > > >
> > > > > Currently, there are no issues marked as critical or blocker for
> > > 1.14.6.
> > > > > Please let me know if there are any issues you'd like to be
> included
> > in
> > > > > this release but still not merged.
> > > > >
> > > > > I would like to volunteer as a release manager for 1.14.6, and
> start
> > > the
> > > > > release process once all the issues are merged.
> > > > >
> > > > > Best,
> > > > > Xingbo
> > > > >
> > > > > [1]
> > > > >
> > > >
> > >
> >
> https://issues.apache.org/jira/issues/?jql=project%20%3D%20FLINK%20AND%20status%20in%20(Resolved%2C%20Closed)%20AND%20fixVersion%20%3D%201.14.6
> > > > > [2]
> > > >
> https://github.com/apache/flink/compare/release-1.14.5...release-1.14
> > > >
> > >
> > >
> > > --
> > > https://twitter.com/snntrable
> > > https://github.com/knaufk
> > >
> >
>


[jira] [Created] (FLINK-29229) Fix HiveServer2 Endpoint doesn't support execute statements in sync mode

2022-09-08 Thread Shengkai Fang (Jira)
Shengkai Fang created FLINK-29229:
-

 Summary: Fix HiveServer2 Endpoint doesn't support execute 
statements in sync mode
 Key: FLINK-29229
 URL: https://issues.apache.org/jira/browse/FLINK-29229
 Project: Flink
  Issue Type: Bug
Reporter: Shengkai Fang






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-29228) Align the schema of the HiveServer2 getMetadata with JDBC

2022-09-08 Thread Shengkai Fang (Jira)
Shengkai Fang created FLINK-29228:
-

 Summary: Align the schema of the HiveServer2 getMetadata with JDBC
 Key: FLINK-29228
 URL: https://issues.apache.org/jira/browse/FLINK-29228
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Hive, Table SQL / Gateway
Affects Versions: 1.16.0
Reporter: Shengkai Fang
 Fix For: 1.16.0






--
This message was sent by Atlassian Jira
(v8.20.10#820010)