Re: [VOTE] Apache Flink Kubernetes Operator Release 1.5.0, release candidate #1

2023-05-11 Thread Gyula Fóra
Very strange, I also get an error when running the examples from the RC.
It's different but could be related:

Caused by: io.fabric8.kubernetes.client.KubernetesClientException: Failure
executing: POST at: https://10.96.0.1/api/v1/namespaces/default/pods.
Message: PodList in version "v1" cannot be handled as a Pod: converting
(v1.PodList) to (core.Pod): unknown conversion. Received status:
Status(apiVersion=v1, code=400, details=null, kind=Status, message=PodList
in version "v1" cannot be handled as a Pod: converting (v1.PodList) to
(core.Pod): unknown conversion, metadata=ListMeta(_continue=null,
remainingItemCount=null, resourceVersion=null, selfLink=null,
additionalProperties={}), reason=BadRequest, status=Failure,
additionalProperties={}).
at
io.fabric8.kubernetes.client.dsl.base.OperationSupport.requestFailure(OperationSupport.java:684)
~[flink-dist-1.16.1.jar:1.16.1]
at
io.fabric8.kubernetes.client.dsl.base.OperationSupport.requestFailure(OperationSupport.java:664)
~[flink-dist-1.16.1.jar:1.16.1]
at
io.fabric8.kubernetes.client.dsl.base.OperationSupport.assertResponseCode(OperationSupport.java:615)
~[flink-dist-1.16.1.jar:1.16.1]
at
io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:558)
~[flink-dist-1.16.1.jar:1.16.1]
at
io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:521)
~[flink-dist-1.16.1.jar:1.16.1]
at
io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleCreate(OperationSupport.java:308)
~[flink-dist-1.16.1.jar:1.16.1]
at
io.fabric8.kubernetes.client.dsl.base.BaseOperation.handleCreate(BaseOperation.java:644)
~[flink-dist-1.16.1.jar:1.16.1]
at
io.fabric8.kubernetes.client.dsl.base.BaseOperation.handleCreate(BaseOperation.java:83)
~[flink-dist-1.16.1.jar:1.16.1]
at
io.fabric8.kubernetes.client.dsl.base.CreateOnlyResourceOperation.create(CreateOnlyResourceOperation.java:61)
~[flink-dist-1.16.1.jar:1.16.1]
at
org.apache.flink.kubernetes.kubeclient.Fabric8FlinkKubeClient.lambda$createTaskManagerPod$1(Fabric8FlinkKubeClient.java:163)
~[flink-dist-1.16.1.jar:1.16.1]
... 4 more

I haven't seen this so far. Will investigate today.

Gyula



On Fri, May 12, 2023 at 8:53 AM Márton Balassi 
wrote:

> Hi Jim and Ted,
>
> Thanks for the quick response. For the OpenShift issue, I would assume that
> adding the RBAC rules suggested here [1] would solve the problem; it seems fine
> to me.
>
> For the missing taskmanager, could you please share the relevant logs from
> your jobmanager pod that already shows as running? Thanks!
>
> [1]
>
> https://github.com/FairwindsOps/rbac-manager/issues/180#issuecomment-752706810
>
> On Fri, May 12, 2023 at 8:40 AM Hao t Chang  wrote:
>
> > The taskmanager pod seems to be missing. I tried the following:
> > kubectl create -f
> >
> https://github.com/jetstack/cert-manager/releases/download/v1.8.2/cert-manager.yaml
> > helm install 1.5rc1
> >
> https://dist.apache.org/repos/dist/dev/flink/flink-kubernetes-operator-1.5.0-rc1/flink-kubernetes-operator-1.5.0-helm.tgz
> > kubectl create -f
> >
> https://raw.githubusercontent.com/apache/flink-kubernetes-operator/release-1.5/examples/basic.yaml
> > kubectl get po
> > NAME                                         READY   STATUS    RESTARTS   AGE
> > basic-example-67bbc79dd9-blfn4               1/1     Running   0          2m28s
> > flink-kubernetes-operator-7bd6dcdfd4-2rshp   2/2     Running   0          4m49s
> >
>


Re: [VOTE] Apache Flink Kubernetes Operator Release 1.5.0, release candidate #1

2023-05-11 Thread Márton Balassi
Hi Jim and Ted,

Thanks for the quick response. For the OpenShift issue, I would assume that
adding the RBAC rules suggested here [1] would solve the problem; it seems fine
to me.

For the missing taskmanager, could you please share the relevant logs from
your jobmanager pod that already shows as running? Thanks!

[1]
https://github.com/FairwindsOps/rbac-manager/issues/180#issuecomment-752706810

On Fri, May 12, 2023 at 8:40 AM Hao t Chang  wrote:

> The taskmanager pod seems to be missing. I tried the following:
> kubectl create -f
> https://github.com/jetstack/cert-manager/releases/download/v1.8.2/cert-manager.yaml
> helm install 1.5rc1
> https://dist.apache.org/repos/dist/dev/flink/flink-kubernetes-operator-1.5.0-rc1/flink-kubernetes-operator-1.5.0-helm.tgz
> kubectl create -f
> https://raw.githubusercontent.com/apache/flink-kubernetes-operator/release-1.5/examples/basic.yaml
> kubectl get po
> NAME                                         READY   STATUS    RESTARTS   AGE
> basic-example-67bbc79dd9-blfn4               1/1     Running   0          2m28s
> flink-kubernetes-operator-7bd6dcdfd4-2rshp   2/2     Running   0          4m49s
>


[VOTE] Apache Flink Kubernetes Operator Release 1.5.0, release candidate #1

2023-05-11 Thread Hao t Chang
The taskmanager pod seems to be missing. I tried the following:
kubectl create -f 
https://github.com/jetstack/cert-manager/releases/download/v1.8.2/cert-manager.yaml
helm install 1.5rc1 
https://dist.apache.org/repos/dist/dev/flink/flink-kubernetes-operator-1.5.0-rc1/flink-kubernetes-operator-1.5.0-helm.tgz
kubectl create -f 
https://raw.githubusercontent.com/apache/flink-kubernetes-operator/release-1.5/examples/basic.yaml
kubectl get po
NAME                                         READY   STATUS    RESTARTS   AGE
basic-example-67bbc79dd9-blfn4               1/1     Running   0          2m28s
flink-kubernetes-operator-7bd6dcdfd4-2rshp   2/2     Running   0          4m49s


[jira] [Created] (FLINK-32065) Got NoSuchFileException when initialize source function.

2023-05-11 Thread Spongebob (Jira)
Spongebob created FLINK-32065:
-

 Summary: Got NoSuchFileException when initialize source function.
 Key: FLINK-32065
 URL: https://issues.apache.org/jira/browse/FLINK-32065
 Project: Flink
  Issue Type: Bug
  Components: Runtime / Network
Affects Versions: 1.14.4
Reporter: Spongebob
 Attachments: image-2023-05-12-14-07-45-771.png

When I submitted an application to a Flink standalone cluster, I got a
NoSuchFileException. I think it failed to create the temporary channel file, but
I am confused about the cause in this case. BTW, this issue only happens
occasionally.

!image-2023-05-12-14-07-45-771.png!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [VOTE] Apache Flink Kubernetes Operator Release 1.5.0, release candidate #1

2023-05-11 Thread Jim Busche
Hi Gyula,

Thanks for the RC-1.

I'm taking a look at the rc-1, and I'm having a problem deploying on OpenShift 
4.10 and 4.12 for some reason.

I first tried the helm install method in the default namespace, and the 
operator launches fine.  But when I try one of the example flinkdeployments 
it's hanging:



oc get flinkdep

NAME            JOB STATUS   LIFECYCLE STATE

basic-example                UPGRADING

The operator log says:
[ERROR][default/basic-example] Error during event processing ExecutionScope{ 
resource id: ResourceID{name='basic-example', namespace='default'}, version: 
41800} failed.

Caused by: 
org.apache.flink.kubernetes.shaded.io.fabric8.kubernetes.client.KubernetesClientException:
 Failure executing: POST at: 
https://172.30.0.1/apis/apps/v1/namespaces/default/deployments. Message: 
Forbidden!Configured service account doesn't have access. Service account may 
have been revoked. deployments.apps "basic-example" is forbidden: cannot set 
blockOwnerDeletion if an ownerReference refers to a resource you can't set 
finalizers on: , .

An OLM install has similar issues.

I also tried a helm install on Ted's "kind" cluster (non-OpenShift), and while
the flinkdep gets further, I still don't see the taskmanager pod as I'd
expect, and the flinkdeployment doesn't reach a Running/stable stage:


oc get pods

NAME                                         READY   STATUS    RESTARTS   AGE
basic-example-c6884ddcd-56j7b                1/1     Running   0          2m52s
flink-kubernetes-operator-7bd6dcdfd4-4lfcg   2/2     Running   0          4m51s

root@cataract1:~/FLINK/release-1.5.0-rc1# oc get flinkdep

NAME            JOB STATUS   LIFECYCLE STATE

basic-example   CREATED      DEPLOYED

I'm curious whether anyone else is having trouble with the install on Kubernetes.

Thanks, Jim



Re: Re: [DISCUSS] FLIP-305: Support atomic for CREATE TABLE AS SELECT(CTAS) statement

2023-05-11 Thread Jingsong Li
Hi Mang,

Thanks for starting this FLIP.

I have some doubts about the `TwoPhaseCatalogTable`. Generally, Flink's
design places execution in the TableFactory or directly in the
Catalog, so introducing an executable table feels a bit
strange to me. (Spark follows this style, but Flink may not.)

And for the `TwoPhase` prefix, maybe `StagedXXX`, as in Spark, would be better?

Best,
Jingsong
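
For readers following along, the stage/commit/abort contract being debated can be illustrated with a minimal, self-contained sketch. All names below (`TwoPhaseTable`, `runCtas`, and so on) are hypothetical and do not reflect the actual FLIP-305 API; the sketch only shows the commit-on-success / abort-on-failure idea that motivates giving the connector its own abort logic:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical names throughout: this illustrates only the
// stage/commit/abort contract under discussion, not the FLIP-305 API.
interface TwoPhaseTable {
    void begin();   // stage the table, e.g. create it as hidden/temporary
    void commit();  // make the table visible once the job succeeds
    void abort();   // connector-specific cleanup, e.g. delete temp files
}

class RecordingTwoPhaseTable implements TwoPhaseTable {
    final List<String> log = new ArrayList<>();
    public void begin()  { log.add("staged"); }
    public void commit() { log.add("committed"); }
    public void abort()  { log.add("aborted"); }
}

public class TwoPhaseDemo {
    // Drive a CTAS-like job: commit on success, abort on failure.
    public static List<String> runCtas(RecordingTwoPhaseTable table, Runnable job) {
        table.begin();
        try {
            job.run();
            table.commit();
        } catch (RuntimeException e) {
            table.abort();
        }
        return table.log;
    }

    public static void main(String[] args) {
        System.out.println(runCtas(new RecordingTwoPhaseTable(), () -> {}));
        System.out.println(runCtas(new RecordingTwoPhaseTable(),
                () -> { throw new RuntimeException("job failed"); }));
    }
}
```

The point of pushing `abort()` into the table object is exactly the one raised later in this thread: only the connector knows what staged state (temporary files, hidden tables) needs cleaning up, so a generic "drop the table" fallback in the Catalog is not enough.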

On Wed, May 10, 2023 at 9:29 PM Mang Zhang  wrote:
>
> Hi Ron,
>
>
> First of all, thank you for your reply!
> After our offline communication, I understand your point mainly concerns the
> compilePlan scenario; however, compilePlanSql currently does not support
> non-INSERT statements, otherwise it throws an exception:
> >Unsupported SQL query! compilePlanSql() only accepts a single SQL statement 
> >of type INSERT
> But it's a good point that I will seriously consider.
> Non-atomic CTAS can be supported relatively easily,
> but atomic CTAS needs more adaptation work, so I'm going to leave it as is
> and follow up with a separate issue to implement CTAS support for
> compilePlanSql.
>
>
>
>
>
>
> --
>
> Best regards,
> Mang Zhang
>
>
>
>
>
> At 2023-04-23 17:52:07, "liu ron"  wrote:
> >Hi, Mang
> >
> >I have a question about the implementation details. For the atomicity case,
> >since the target table is not created before the JobGraph is generated, but
> >then the target table is required to exist when optimizing plan to generate
> >the JobGraph. So how do you solve this problem?
> >
> >Best,
> >Ron
> >
> >yuxia  wrote on Thu, Apr 20, 2023 at 09:35:
> >
> >> Share some insights about the new TwoPhaseCatalogTable proposed after
> >> offline discussion with Mang.
> >> The main and most important reason is that the TwoPhaseCatalogTable enables
> >> external connectors to implement their own logic for commit / abort.
> >> In FLIP-218, for atomic CTAS, the Catalog would just drop the table
> >> when the job fails. That's not ideal, as it's too generic to work well.
> >> For example, some connectors need to clean up temporary files in the
> >> abort method, and the actual connector knows the specific logic for
> >> aborting.
> >>
> >> Best regards,
> >> Yuxia
> >>
> >>
> >> From: "zhangmang1" 
> >> To: "dev" , "Jing Ge" 
> >> Cc: "ron9 liu" , "lincoln 86xy" <
> >> lincoln.8...@gmail.com>, luoyu...@alumni.sjtu.edu.cn
> >> Sent: Wednesday, April 19, 2023, 3:13:36 PM
> >> Subject: Re:Re: [DISCUSS] FLIP-305: Support atomic for CREATE TABLE AS
> >> SELECT(CTAS) statement
> >>
> >> hi, Jing
> >> Thank you for your reply.
> >> >1. It looks like you found another way to design the atomic CTAS with new
> >> >serializable TwoPhaseCatalogTable instead of making Catalog serializable
> >> as
> >> >described in FLIP-218. Did I understand correctly?
> >> Yes, when I was implementing the FLIP-218 solution, I encountered problems
> >> with Catalog/CatalogTable serialization and deserialization; for example,
> >> after deserialization the CatalogTable could not be converted to a Hive
> >> Table. Also, Catalog serialization is still a heavy operation, but it may
> >> not actually be necessary; we just need Create Table.
> >> Therefore, the TwoPhaseCatalogTable approach is proposed, which also
> >> facilitates the implementation of subsequent data lake, ReplaceTable,
> >> and other functions.
> >>
> >> >2. I am a little bit confused about the isStreamingMode parameter of
> >> >Catalog#twoPhaseCreateTable(...), since it is the selector argument(code
> >> >smell) we should commonly avoid in the public interface. According to the
> >> >FLIP,  isStreamingMode will be used by the Catalog to determine whether to
> >> >support atomic or not. With this selector argument, there will be two
> >> >different logics built within one method and it is hard to follow without
> >> >reading the code or the doc carefully(another concern is to keep the doc
> >> >and code alway be consistent) i.e. sometimes there will be no difference
> >> by
> >> >using true/false isStreamingMode, sometimes they are quite different -
> >> >atomic vs. non-atomic. Another question is, before we call
> >> >Catalog#twoPhaseCreateTable(...), we have to know the value of
> >> >isStreamingMode. In case only non-atomic is supported for streaming mode,
> >> >we could just follow FLIP-218 instead of (twistedly) calling
> >> >Catalog#twoPhaseCreateTable(...) with a false isStreamingMode. Did I miss
> >> >anything here?
> >> Here's what I think about this issue: atomic CTAS should be the default
> >> behavior, falling back to non-atomic CTAS only if atomicity is completely
> >> unattainable. Atomic CTAS will bring a better experience to users.
> >> Flink is already a stream-batch unified engine. In our company, Kwai, many
> >> users are also using Flink to do batch data processing, but still running
> >> in stream mode.
> >> The boundary between stream and batch is gradually blurring; stream-mode
> >> jobs may also FINISH, so I added the isStreamingMode parameter, which
> >> provides different atomicity implementations in batch and stream modes.
> >> Not only to

Re:Scala Compilation Errors with Flink code in IntelliJ. Builds fine with Maven command line.

2023-05-11 Thread Wencong Liu



Hi Brandon Wright,


I think you could try the following actions in the IntelliJ IDE:
First, execute the command "mvn clean install -Dfast -DskipTests=true
-Dscala-2.12" in the terminal.
Second, in "File -> Invalidate Caches", select all options and restart the IDE.
Finally, click "Reload" in the Maven plugin, and wait until the
reloading process is finished.
If it does not work after these actions, you could try repeating them.


Best,


Wencong Liu











At 2023-05-12 07:16:09, "Brandon Wright"  
wrote:
>I cloned the Flink git repository (master branch), configured a Java 8 JDK, and
>I can build the Flink project successfully with:
>
>mvn clean package -DskipTests
>
>However, when I load the project into IntelliJ, and try to compile the project 
>and run the Scala tests in the IDE I get a lot of compilation errors with the 
>existing Scala code like:
>
>./flink/flink-scala/src/test/scala/org/apache/flink/api/scala/DeltaIterationSanityCheckTest.scala:33:41
>could not find implicit value for evidence parameter of type 
>org.apache.flink.api.common.typeinfo.TypeInformation[(Int, String)]
>val solutionInput = env.fromElements((1, "1"))
>
>and
>
>./flink/flink-table/flink-table-api-scala/src/test/scala/org/apache/flink/table/types/extraction/DataTypeExtractorScalaTest.scala:39:7
>overloaded method value assertThatThrownBy with alternatives:
>(x$1: org.assertj.core.api.ThrowableAssert.ThrowingCallable,x$2: String,x$3: 
>Object*)org.assertj.core.api.AbstractThrowableAssert[_, _ <: Throwable] 
>(x$1: 
>org.assertj.core.api.ThrowableAssert.ThrowingCallable)org.assertj.core.api.AbstractThrowableAssert[_,
> _ <: Throwable]
>cannot be applied to (() => Unit)
>assertThatThrownBy(() => runExtraction(testSpec))
>
>Clearly, the same code is compiling when using the Maven build via command 
>line, so this must be some kind of environment/config issue. I'd like to get the
>code building within IntelliJ so I can use the debugger and step through unit 
>tests. I don't want to make source changes quite yet. I'd like to just step 
>through the code as it is.
>
>My first guess is the IntelliJ IDE is using the wrong version of the Scala 
>compiler. In IntelliJ, in "Project Structure" -> "Platform Settings" -> 
>"Global Libraries", I have "scala-sdk-2.12.7" configured and nothing else. I 
>believe that's the specific version of Scala that the Flink code is intended 
>to compile with. I've checked all the project settings and preferences and I 
>don't see any other places I can configure or even verify which version of 
>Scala is being used.
>
>Additional points:
>
>- I can run/debug Java unit tests via the IntelliJ IDE, but not Scala unit 
>tests.
>- If I do "Build" -> "Rebuild Project", I get Scala compilation errors as 
>mentioned above, but no Java errors. The Java code seems to compile 
>successfully.
>- I'm using the current version of IntelliJ 2023.1.1 Ultimate with the Scala 
>plugin installed.
>- I've read and followed the instructions on 
>https://nightlies.apache.org/flink/flink-docs-master/docs/flinkdev/ide_setup/. 
>These docs don't mention specifying the version of the Scala compiler at all.
>- This is a clean repo on "master" branch with absolutely zero changes.
>- In IntelliJ, in "Project Structure" -> "Project Settings" -> "Project", I've 
>chosen a Java 8 JDK, which I presume is the best choice for building Flink 
>code today
>
>Thanks for any help!


Scala Compilation Errors with Flink code in IntelliJ. Builds fine with Maven command line.

2023-05-11 Thread Brandon Wright
I cloned the Flink git repository (master branch), configured a Java 8 JDK, and
I can build the Flink project successfully with:

mvn clean package -DskipTests

However, when I load the project into IntelliJ, and try to compile the project 
and run the Scala tests in the IDE I get a lot of compilation errors with the 
existing Scala code like:

./flink/flink-scala/src/test/scala/org/apache/flink/api/scala/DeltaIterationSanityCheckTest.scala:33:41
could not find implicit value for evidence parameter of type 
org.apache.flink.api.common.typeinfo.TypeInformation[(Int, String)]
val solutionInput = env.fromElements((1, "1"))

and

./flink/flink-table/flink-table-api-scala/src/test/scala/org/apache/flink/table/types/extraction/DataTypeExtractorScalaTest.scala:39:7
overloaded method value assertThatThrownBy with alternatives:
(x$1: org.assertj.core.api.ThrowableAssert.ThrowingCallable,x$2: String,x$3: 
Object*)org.assertj.core.api.AbstractThrowableAssert[_, _ <: Throwable] 
(x$1: 
org.assertj.core.api.ThrowableAssert.ThrowingCallable)org.assertj.core.api.AbstractThrowableAssert[_,
 _ <: Throwable]
cannot be applied to (() => Unit)
assertThatThrownBy(() => runExtraction(testSpec))

Clearly, the same code is compiling when using the Maven build via command 
line, so this must be some kind of environment/config issue. I'd like to get the
code building within IntelliJ so I can use the debugger and step through unit 
tests. I don't want to make source changes quite yet. I'd like to just step 
through the code as it is.

My first guess is the IntelliJ IDE is using the wrong version of the Scala 
compiler. In IntelliJ, in "Project Structure" -> "Platform Settings" -> "Global 
Libraries", I have "scala-sdk-2.12.7" configured and nothing else. I believe 
that's the specific version of Scala that the Flink code is intended to compile 
with. I've checked all the project settings and preferences and I don't see any 
other places I can configure or even verify which version of Scala is being 
used.

Additional points:

- I can run/debug Java unit tests via the IntelliJ IDE, but not Scala unit 
tests.
- If I do "Build" -> "Rebuild Project", I get Scala compilation errors as 
mentioned above, but no Java errors. The Java code seems to compile 
successfully.
- I'm using the current version of IntelliJ 2023.1.1 Ultimate with the Scala 
plugin installed.
- I've read and followed the instructions on 
https://nightlies.apache.org/flink/flink-docs-master/docs/flinkdev/ide_setup/. 
These docs don't mention specifying the version of the Scala compiler at all.
- This is a clean repo on "master" branch with absolutely zero changes.
- In IntelliJ, in "Project Structure" -> "Project Settings" -> "Project", I've 
chosen a Java 8 JDK, which I presume is the best choice for building Flink code 
today

Thanks for any help!

Re: [DISCUSS] FLIP-278: Hybrid Source Connector

2023-05-11 Thread Ilya Soin
Hi, Ran Tao.
Thanks for the reply!

I agree that a way to manage inconsistent field names / numbers will need to be 
provided and that for POC it’s enough to support the case where the batch and 
streaming schemas are consistent.

However, in the example I provided, the batch and streaming schemas are
consistent: JSONs stored in S3 have exactly the same structure as JSONs in
Kafka. As an end user, I'd expect this to work without providing
schema.fields.mappings or schema.virtual-fields. But because Flink's optimizer
removes unused fields from internal records in batch mode, the problem of
inconsistent schemas arises at runtime. Do you have an idea how to tackle this
in a way that wouldn't require users to add redundant configs in the hybrid
source definition?
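
As a side note, the rename step behind the `schema.fields.mappings` option described in the quoted reply below can be sketched in a few lines. The class and method names here are made up for illustration only, and this says nothing about how (or whether) the proposal handles the optimizer's field pruning described above:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class FieldMappingDemo {

    // Derive a child source's schema from the DDL field order plus a
    // rename map (DDL name -> child source's real field name). Fields
    // absent from the map keep their DDL name.
    public static String[] childSchema(String[] ddlFields, Map<String, String> renames) {
        String[] out = new String[ddlFields.length];
        for (int i = 0; i < ddlFields.length; i++) {
            out[i] = renames.getOrDefault(ddlFields[i], ddlFields[i]);
        }
        return out;
    }

    public static void main(String[] args) {
        String[] ddl = {"f0", "f1", "f2"};

        Map<String, String> batch = new LinkedHashMap<>();
        batch.put("f0", "a");
        batch.put("f2", "c");
        // The batch source really exposes a, f1, c; its extra field f3 is skipped.
        System.out.println(String.join(", ", childSchema(ddl, batch)));

        Map<String, String> streaming = new LinkedHashMap<>();
        streaming.put("f1", "b");
        System.out.println(String.join(", ", childSchema(ddl, streaming)));
    }
}
```

This reproduces the example from the reply: for DDL fields f0, f1, f2, the derived batch schema is a, f1, c and the derived streaming schema is f0, b, f2.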

__
Best regards,
Ilya Soin

> On 11 May 2023, at 04:44, Ran Tao  wrote:
> 
> Hi, Ilya.
> Thanks for your opinions!
> 
> You are right; in fact, in addition to different field numbers,
> the names may also be different.
> Currently, we can also support inconsistent schemas, as was discussed in
> the previous design;
> for example, we can provide a `schema.fields.mappings` parameter.
> 
> Suppose we have different schemas like below:
> the actual batch fields are: a, f1, c, f3
> the actual streaming fields are: f0, b, f2 (one field fewer)
> 
> 
> 1. About inconsistent field names
> 
> If the user DDL is: f0, f1, f2
> `schema.fields.mappings`='[{"f0":"a","f2":"c"},{"f1":"b"}]'
> 
> then in the hybrid table source, the generated child batch schema is: a, f1, c,
> and the streaming schema is: f0, b, f2; we pass them to the final child table
> sources. (Note: we don't use the batch f3 field; it is simply skipped.)
> 
> 2. About inconsistent field numbers
> 
> If the user DDL is: f0, f1, f2, f3
> `schema.fields.mappings`='[{"f0":"a","f2":"c"},{"f1":"b"}]'
> 
> then in the hybrid table source, the generated child batch schema is: a, f1, c,
> f3,
> and the streaming schema has 2 options:
> 
> 1. set f0, b, f2, f3 and pass them to the final child table source. (If the
> child source format is k-v mode, f3 will be null.)
> 
> 2. add an option, e.g. `schema.virtual-fields`='[[],["f3"]]', meaning the
> streaming source's field f3 does not exist. Then the hybrid table source
> actively sets null for the streaming field f3 and just passes f0, b, f2 to
> the child source to read the real data.
> 
> In short, we can use `schema.fields.mappings` to deal with inconsistent
> field names, and pass extra fields to the child source (which come back as
> null) to deal with inconsistent field numbers (or add a
> `schema.virtual-fields` option).
> 
> But in order to maintain consistency with the current DataStream API,
> we currently support the case where the batch and streaming schemas are
> consistent.
> I will update the POC PR, then you can re-run your case. WDYT?
> 
> 
> Best Regards,
> Ran Tao
> 
> 
> 
> Ilya Soin  wrote on Thursday, May 11, 2023 at 03:12:
> 
>> Hi devs,
>> 
>> I think for this approach to work, the internal record schema generated by
>> Flink must be exactly the same for batch and stream records, because at
>> runtime Flink will use the same serializer to send them downstream.
>> However, it’s not always the case, because in batch mode Flink’s optimizer
>> may realize that some fields are never actually used, so the records will
>> not contain those fields. Such optimizations may not be done in the
>> streaming mode, so records coming from the realtime source will have more
>> fields. In that case, after switching to the realtime source, the job will
>> fail, because record serializer expects records with the batch schema, but
>> instead receives records with more fields and doesn’t know how to serialize
>> them.
>> 
>> Consider the following DDL:
>> CREATE TABLE hybrid_table
>> (
>>trade ROW(
>>`openTime` BIGINT,
>>`closeTime` BIGINT),
>>server  STRING,
>>tradeTime as to_timestamp(from_unixtime(trade.openTime)),
>>WATERMARK FOR tradeTime AS tradeTime - INTERVAL '1' MINUTE
>> )
>>WITH (
>>'connector' = 'hybrid',
>>'source-identifiers' = 'historical,realtime',
>>'historical.connector' = 'filesystem',
>>'historical.path' = 's3://path.to.daa',
>>'historical.format' = 'json',
>>'realtime.connector' = 'kafka',
>>'realtime.topic' = 'trades',
>>'realtime.properties.bootstrap.servers' = '...',
>>'realtime.properties.group.id' = 'flink.tv',
>>'realtime.format' = 'json',
>>'realtime.scan.startup.mode' = 'earliest-offset'
>>)
>> This query will fail:
>> 
>> select server from hybrid_table
>> 
>> But this query will work:
>> 
>> select * from hybrid_table
>> 
>> In the first query internal records in the batch source will only have 2
>> fields: server and trade. But in the streaming source they will have all
>> the fields described in the schema. When switching to the realtime source,
>> the job fails because record serializer expects reco

[DISCUSS] FLIP-312: Add Yarn ACLs to Flink Containers

2023-05-11 Thread Archit Goyal
Hi all,

I am opening this thread to discuss the proposal to support Yarn ACLs for
Flink containers, which has been documented in FLIP-312.

This FLIP proposes applying the Yarn application ACL mechanism to Flink
containers so that specific rights can be granted to users other than the one
running the Flink application job. This will restrict other users in two ways:

  *   viewing logs through the Resource Manager job history
  *   killing the application

Please feel free to reply to this email thread and share your opinions.

Thanks,
Archit Goyal



[jira] [Created] (FLINK-32064) Add sub-directory of test output file for JsonPlanTest to indicate the plan's version

2023-05-11 Thread Jane Chan (Jira)
Jane Chan created FLINK-32064:
-

 Summary: Add sub-directory of test output file  for JsonPlanTest 
to indicate the plan's version
 Key: FLINK-32064
 URL: https://issues.apache.org/jira/browse/FLINK-32064
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / Planner
Affects Versions: 1.18.0
Reporter: Jane Chan
 Fix For: 1.18.0








[VOTE] Apache Flink Kubernetes Operator Release 1.5.0, release candidate #1

2023-05-11 Thread Gyula Fóra
Hi everyone,

Please review and vote on the release candidate #1 for the version 1.5.0 of
Apache Flink Kubernetes Operator,
as follows:
[ ] +1, Approve the release
[ ] -1, Do not approve the release (please provide specific comments)

**Release Overview**

As an overview, the release consists of the following:
a) Kubernetes Operator canonical source distribution (including the
Dockerfile), to be deployed to the release repository at dist.apache.org
b) Kubernetes Operator Helm Chart to be deployed to the release repository
at dist.apache.org
c) Maven artifacts to be deployed to the Maven Central Repository
d) Docker image to be pushed to dockerhub

**Staging Areas to Review**

The staging areas containing the above mentioned artifacts are as follows,
for your review:
* All artifacts for a,b) can be found in the corresponding dev repository
at dist.apache.org [1]
* All artifacts for c) can be found at the Apache Nexus Repository [2]
* The docker image for d) is staged on github [3]

All artifacts are signed with the key 21F06303B87DAFF1 [4]

Other links for your review:
* JIRA release notes [5]
* source code tag "release-1.5.0-rc1" [6]
* PR to update the website Downloads page to include Kubernetes Operator links
[7]

**Vote Duration**

The voting time will run for at least 72 hours.
It is adopted by majority approval, with at least 3 PMC affirmative votes.

**Note on Verification**

You can follow the basic verification guide here [8].
Note that you don't need to verify everything yourself, but please make
note of what you have tested together with your +/- vote.

Cheers!
Gyula Fora

[1]
https://dist.apache.org/repos/dist/dev/flink/flink-kubernetes-operator-1.5.0-rc1/
[2] https://repository.apache.org/content/repositories/orgapacheflink-1632/
[3] ghcr.io/apache/flink-kubernetes-operator:6f08894
[4] https://dist.apache.org/repos/dist/release/flink/KEYS
[5]
https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12352931
[6]
https://github.com/apache/flink-kubernetes-operator/tree/release-1.5.0-rc1
[7] https://github.com/apache/flink-web/pull/647
[8] https://cwiki.apache.org/confluence/display/FLINK/Verifying+a+Flink+Kubernetes+Operator+Release


[jira] [Created] (FLINK-32063) AWS CI mvn compile fails to cast objects to parent type.

2023-05-11 Thread Ahmed Hamdy (Jira)
Ahmed Hamdy created FLINK-32063:
---

 Summary: AWS CI mvn compile fails to cast objects to parent type.
 Key: FLINK-32063
 URL: https://issues.apache.org/jira/browse/FLINK-32063
 Project: Flink
  Issue Type: Bug
  Components: Connectors / AWS, Tests
Reporter: Ahmed Hamdy


h2. Description

AWS Connectors CI fails to cast {{TestSinkInitContext}} into the base type
{{InitContext}}.

- Failure
https://github.com/apache/flink-connector-aws/actions/runs/4924790308/jobs/8841458606?pr=70
 







[jira] [Created] (FLINK-32062) Expose MetricGroup in FlinkResourceListener interface to allow users to create custom metrics

2023-05-11 Thread Tamir Sagi (Jira)
Tamir Sagi created FLINK-32062:
--

 Summary: Expose MetricGroup in FlinkResourceListener interface to 
allow users to create custom metrics
 Key: FLINK-32062
 URL: https://issues.apache.org/jira/browse/FLINK-32062
 Project: Flink
  Issue Type: Improvement
  Components: Kubernetes Operator
Reporter: Tamir Sagi


The operator supports a pluggable {{FlinkResourceListener}}, which provides
events and deployment status. However, this interface does not expose the
MetricManager or any other way to create custom meters.
This means that if users would like to create custom metrics per deployment
(failure rate, scaling counter, or any other per-event metric), there is no
way to attach them via the operator metric system.

There are some basic metrics created per namespace in

{{org.apache.flink.kubernetes.operator.metrics.FlinkDeploymentMetrics}}

My suggestion is to expose either the operator metric manager or another entity
that provides a way to create meters (and internally registers them via the
MetricManager) in the FlinkResourceListener interface.
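
A rough sketch of what such a listener-facing metric hook could look like follows. Every name here is illustrative — none of these types exist in the operator today; the sketch only shows the shape of the suggestion: hand each listener callback a metric group it can register counters on.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

// All names hypothetical: a minimal stand-in for a metric group that a
// listener callback could use to lazily register and update counters.
public class ListenerMetricsDemo {

    static class MetricGroup {
        final Map<String, AtomicLong> counters = new ConcurrentHashMap<>();
        AtomicLong counter(String name) {
            return counters.computeIfAbsent(name, k -> new AtomicLong());
        }
    }

    // A listener callback that receives the metric group alongside the event.
    interface ResourceListener {
        void onStatusUpdated(String resourceName, MetricGroup metrics);
    }

    public static void main(String[] args) {
        MetricGroup group = new MetricGroup();
        ResourceListener listener =
                (name, metrics) -> metrics.counter(name + ".status-updates").incrementAndGet();

        listener.onStatusUpdated("basic-example", group);
        listener.onStatusUpdated("basic-example", group);
        System.out.println(group.counter("basic-example.status-updates").get());
    }
}
```

In the real operator, the group handed to the listener would presumably delegate to the existing metric registry rather than a plain map; this only illustrates the API surface being requested.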





[jira] [Created] (FLINK-32061) Resource metric groups are not cleaned up on removal

2023-05-11 Thread Maximilian Michels (Jira)
Maximilian Michels created FLINK-32061:
--

 Summary: Resource metric groups are not cleaned up on removal
 Key: FLINK-32061
 URL: https://issues.apache.org/jira/browse/FLINK-32061
 Project: Flink
  Issue Type: Bug
  Components: Autoscaler, Kubernetes Operator
Reporter: Maximilian Michels
Assignee: Maximilian Michels
 Fix For: kubernetes-operator-1.5.0


Not cleaning up leaks memory.





Re: [DISCUSS] FLIP-310:use VARINT and ZIGZAG to encode ROWDATA in state

2023-05-11 Thread Xiaogang Zhou
Hi Zakelly,

Thanks for the reply. I have added some more information to the FLIP document.
Please help review it and let me know if further evidence is needed.

https://cwiki.apache.org/confluence/display/FLINK/%5BWIP%5DFLIP-310%3Ause+VARINT+and+ZIGZAG+to+encode+ROWDATA+in+state#id-[WIP]FLIP310:useVARINTandZIGZAGtoencodeROWDATAinstate-TestPlan
[image: image.png]
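
For readers unfamiliar with the encoding being proposed, here is a generic sketch of ZigZag plus base-128 varint encoding (the scheme popularized by Protocol Buffers, which the FLIP proposes to apply when serializing RowData fields). This is an illustration only, not the FLIP's actual serializer code:

```java
import java.io.ByteArrayOutputStream;

public class VarIntZigZag {

    // ZigZag-map a signed long onto an unsigned long so that values of
    // small magnitude (positive or negative) get small encodings.
    public static long zigZagEncode(long v) {
        return (v << 1) ^ (v >> 63);
    }

    public static long zigZagDecode(long v) {
        return (v >>> 1) ^ -(v & 1);
    }

    // Base-128 varint: 7 payload bits per byte, high bit set on every
    // byte except the last, least-significant group first.
    public static byte[] writeVarLong(long v) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        while ((v & ~0x7FL) != 0) {
            out.write((int) ((v & 0x7FL) | 0x80L));
            v >>>= 7;
        }
        out.write((int) v);
        return out.toByteArray();
    }

    public static void main(String[] args) {
        for (long s : new long[] {0, 1, -1, 150, -150}) {
            int len = writeVarLong(zigZagEncode(s)).length;
            System.out.println(s + " encodes to " + len + " byte(s)");
        }
    }
}
```

Small-magnitude values, positive or negative, encode to one or two bytes instead of a fixed eight, which is the kind of state-size reduction the FLIP is targeting.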

Zakelly Lan  wrote on Wed, May 10, 2023 at 02:03:

> Hi Xiaogang Zhou,
>
> Thanks for driving this!
>
> I'm pretty interested in your verification test, could you please
> provide more details? AFAIK, the performance is related to the format
> of user data and the state size of the RocksDB, as well as the memory
> setup (determines the proportion of IO required to process data).
>
>
> Best regards,
> Zakelly
>
> On Fri, May 5, 2023 at 3:42 PM Xiaogang Zhou
>  wrote:
> >
> > Hi Guys,
> >
> > I have created a FLIP WIKI FLIP-310
> > <
> https://cwiki.apache.org/confluence/display/FLINK/%5BWIP%5DFLIP-310%3Ause+VARINT+and+ZIGZAG+to+encode+ROWDATA+in+state
> >,
> > and documented my thinking about using the varint format in FLINK state
> > to improve the FLINK state performance.
> >
> > Would you please help review and let me know what you think?
>


[jira] [Created] (FLINK-32060) Migrate subclasses of BatchAbstractTestBase in table and other modules to JUnit5

2023-05-11 Thread Yuxin Tan (Jira)
Yuxin Tan created FLINK-32060:
-

 Summary: Migrate subclasses of BatchAbstractTestBase in table and 
other modules to JUnit5
 Key: FLINK-32060
 URL: https://issues.apache.org/jira/browse/FLINK-32060
 Project: Flink
  Issue Type: Sub-task
  Components: Tests
Affects Versions: 1.18.0
Reporter: Yuxin Tan


Migrate subclasses of BatchAbstractTestBase in table and other modules to
JUnit5.





[jira] [Created] (FLINK-32059) Migrate subclasses of BatchAbstractTestBase in batch.sql.agg and batch.sql.join to JUnit5

2023-05-11 Thread Yuxin Tan (Jira)
Yuxin Tan created FLINK-32059:
-

 Summary: Migrate subclasses of BatchAbstractTestBase in 
batch.sql.agg and batch.sql.join to JUnit5
 Key: FLINK-32059
 URL: https://issues.apache.org/jira/browse/FLINK-32059
 Project: Flink
  Issue Type: Sub-task
  Components: Tests
Affects Versions: 1.18.0
Reporter: Yuxin Tan


Migrate subclasses of BatchAbstractTestBase in batch.sql.agg and batch.sql.join 
to JUnit5.





[jira] [Created] (FLINK-32058) Migrate subclasses of BatchAbstractTestBase in runtime.batch.sql to JUnit5

2023-05-11 Thread Yuxin Tan (Jira)
Yuxin Tan created FLINK-32058:
-

 Summary: Migrate subclasses of BatchAbstractTestBase in 
runtime.batch.sql to JUnit5
 Key: FLINK-32058
 URL: https://issues.apache.org/jira/browse/FLINK-32058
 Project: Flink
  Issue Type: Improvement
  Components: Tests
Affects Versions: 1.18.0
Reporter: Yuxin Tan


Migrate subclasses of BatchAbstractTestBase in runtime.batch.sql to JUnit5.





Re: [DISCUSS] Release Flink 1.16.2

2023-05-11 Thread Martijn Visser
+1 - much appreciated

On Thu, May 11, 2023 at 9:24 AM Xintong Song  wrote:

> +1
>
> I'll help with the steps that require PMC privileges.
>
> Best,
>
> Xintong
>
>
>
> On Thu, May 11, 2023 at 3:13 PM Jingsong Li 
> wrote:
>
> > +1 for releasing 1.16.2
> >
> > Best,
> > Jingsong
> >
> > On Thu, May 11, 2023 at 1:28 PM Gyula Fóra  wrote:
> > >
> > > +1 for the release
> > >
> > > Gyula
> > >
> > > On Thu, 11 May 2023 at 05:08, weijie guo 
> > wrote:
> > >
> > > > [1]
> > > >
> > > >
> >
> https://issues.apache.org/jira/browse/FLINK-31092?jql=project%20%3D%20FLINK%20AND%20fixVersion%20%3D%201.16.2%20%20and%20resolution%20%20!%3D%20%20Unresolved%20order%20by%20priority%20DESC
> > > >
> > > > [2]
> > > >
> > > >
> >
> https://issues.apache.org/jira/browse/FLINK-31092?jql=project%20%3D%20FLINK%20AND%20fixVersion%20%3D%201.16.2%20and%20resolution%20%20!%3D%20Unresolved%20%20and%20priority%20in%20(Blocker%2C%20Critical)%20ORDER%20by%20priority%20%20DESC
> > > >
> > > > [3] https://issues.apache.org/jira/browse/FLINK-31293
> > > >
> > > > [4] https://issues.apache.org/jira/browse/FLINK-32027
> > > >
> > > > [5] https://issues.apache.org/jira/projects/FLINK/versions/12352765
> > > >
> > > >
> > > >
> > > >
> > > > weijie guo  wrote on Thu, May 11, 2023 at 11:06:
> > > >
> > > > > Hi all,
> > > > >
> > > > >
> > > > > I would like to discuss creating a new 1.16 patch release (1.16.2).
> > The
> > > > > last 1.16 release is over three months old, and since then, 99
> > tickets
> > > > have
> > > > >  been closed [1], of which 30 are blocker/critical [2].  Some
> > > > > of them are quite important, such as FLINK-31293 [3] and
> FLINK-32027
> > [4].
> > > > >
> > > > >
> > > > >
> > > > > I am not aware of any unresolved blockers and there are no
> > in-progress
> > > > tickets [5].
> > > > > Please let me know if there are any issues you'd like to be
> included
> > in
> > > > > this release but still not merged.
> > > > >
> > > > >
> > > > >
> > > > > If the community agrees to create this new patch release, I could
> > > > volunteer as the release manager
> > > > >  and Xintong can help with actions that require a PMC role.
> > > > >
> > > > > Best regards,
> > > > >
> > > > > Weijie
> > > > >
> > > >
> >
>


Re: [DISCUSS] Release Flink 1.17.1

2023-05-11 Thread Martijn Visser
+1, thanks for volunteering!

On Thu, May 11, 2023 at 9:23 AM Xintong Song  wrote:

> +1
>
> I'll help with the steps that require PMC privileges.
>
> Best,
>
> Xintong
>
>
>
> On Thu, May 11, 2023 at 3:12 PM Jingsong Li 
> wrote:
>
> > +1 for releasing 1.17.1
> >
> > Best,
> > Jingsong
> >
> > On Thu, May 11, 2023 at 1:29 PM Gyula Fóra  wrote:
> > >
> > > +1 for the release
> > >
> > > Gyula
> > >
> > > On Thu, 11 May 2023 at 05:35, Yun Tang  wrote:
> > >
> > > > +1 for release flink-1.17.1
> > > >
> > > > The blocker issue might cause silent incorrect data, it's better to
> > have a
> > > > fix release ASAP.
> > > >
> > > >
> > > > Best
> > > > Yun Tang
> > > > 
> > > > From: weijie guo 
> > > > Sent: Thursday, May 11, 2023 11:08
> > > > To: dev@flink.apache.org ;
> tonysong...@gmail.com
> > <
> > > > tonysong...@gmail.com>
> > > > Subject: [DISCUSS] Release Flink 1.17.1
> > > >
> > > > Hi all,
> > > >
> > > >
> > > > I would like to discuss creating a new 1.17 patch release (1.17.1).
> The
> > > > last 1.17 release is nearly two months old, and since then, 66
> tickets
> > have
> > > > been closed [1], of which 14 are blocker/critical [2].  Some of them
> > are
> > > > quite important, such as FLINK-31293 [3] and  FLINK-32027 [4].
> > > >
> > > >
> > > > I am not aware of any unresolved blockers and there are no
> in-progress
> > > > tickets [5].
> > > > Please let me know if there are any issues you'd like to be included
> in
> > > > this release but still not merged.
> > > >
> > > >
> > > > If the community agrees to create this new patch release, I could
> > > > volunteer as the release manager
> > > >  and Xintong can help with actions that require a PMC role.
> > > >
> > > >
> > > > Thanks,
> > > >
> > > > Weijie
> > > >
> > > >
> > > > [1]
> > > >
> > > >
> >
> https://issues.apache.org/jira/browse/FLINK-32027?jql=project%20%3D%20FLINK%20AND%20fixVersion%20%3D%201.17.1%20%20and%20resolution%20%20!%3D%20%20Unresolved%20order%20by%20priority%20DESC
> > > >
> > > > [2]
> > > >
> > > >
> >
> https://issues.apache.org/jira/browse/FLINK-31273?jql=project%20%3D%20FLINK%20AND%20fixVersion%20%3D%201.17.1%20and%20resolution%20%20!%3D%20Unresolved%20%20and%20priority%20in%20(Blocker%2C%20Critical)%20ORDER%20by%20priority%20%20DESC
> > > >
> > > > [3] https://issues.apache.org/jira/browse/FLINK-31293
> > > >
> > > > [4] https://issues.apache.org/jira/browse/FLINK-32027
> > > >
> > > > [5] https://issues.apache.org/jira/projects/FLINK/versions/12352886
> > > >
> >
>


[jira] [Created] (FLINK-32057) Autoscaler should use the new vertex resource api in 1.18

2023-05-11 Thread Gyula Fora (Jira)
Gyula Fora created FLINK-32057:
--

 Summary: Autoscaler should use the new vertex resource api in 1.18
 Key: FLINK-32057
 URL: https://issues.apache.org/jira/browse/FLINK-32057
 Project: Flink
  Issue Type: New Feature
  Components: Autoscaler, Kubernetes Operator
Reporter: Gyula Fora
Assignee: Gyula Fora


Flink 1.18 introduces a new REST API for changing vertex parallelisms on the
fly with the adaptive scheduler.

We should build support for this into the operator autoscaler, as it has the
potential to significantly improve rescale times and job stability.
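A rough sketch of what such a call could look like from the autoscaler. The endpoint path and JSON payload below are assumptions for illustration and may not match the final 1.18 API:

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class RescaleSketch {
    // Builds a request that asks the adaptive scheduler to rescale one
    // vertex. Endpoint path and body shape are hypothetical.
    static HttpRequest buildRescaleRequest(String restUrl, String jobId,
                                           String vertexId, int newParallelism) {
        String body = String.format(
            "{\"%s\": {\"parallelism\": {\"upperBound\": %d}}}",
            vertexId, newParallelism);
        return HttpRequest.newBuilder()
            .uri(URI.create(restUrl + "/jobs/" + jobId + "/resource-requirements"))
            .header("Content-Type", "application/json")
            .PUT(HttpRequest.BodyPublishers.ofString(body))
            .build();
    }

    public static void main(String[] args) {
        // The request would be sent with java.net.http.HttpClient; here we
        // only build and inspect it.
        HttpRequest req = buildRescaleRequest(
            "http://localhost:8081", "job-1", "vertex-a", 4);
        System.out.println(req.method() + " " + req.uri());
    }
}
```

The key point for rescale times is that this declares new requirements against a running job, instead of the stop/redeploy cycle the operator uses today.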





[jira] [Created] (FLINK-32056) Update the used Pulsar connector in flink-python to 4.0.0

2023-05-11 Thread Martijn Visser (Jira)
Martijn Visser created FLINK-32056:
--

 Summary: Update the used Pulsar connector in flink-python to 4.0.0
 Key: FLINK-32056
 URL: https://issues.apache.org/jira/browse/FLINK-32056
 Project: Flink
  Issue Type: Bug
  Components: API / Python, Connectors / Pulsar
Affects Versions: 1.18.0, 1.17.1
Reporter: Martijn Visser
Assignee: Martijn Visser


flink-python still references and tests flink-connector-pulsar:3.0.0, while it
should be using flink-connector-pulsar:4.0.0, since the newer version is the
only one compatible with Flink 1.17 and does not rely on flink-shaded.





[jira] [Created] (FLINK-32055) Migrate all subclasses of BatchAbstractTestBase to JUnit5

2023-05-11 Thread Yuxin Tan (Jira)
Yuxin Tan created FLINK-32055:
-

 Summary: Migrate all subclasses of BatchAbstractTestBase to JUnit5
 Key: FLINK-32055
 URL: https://issues.apache.org/jira/browse/FLINK-32055
 Project: Flink
  Issue Type: Improvement
  Components: Tests
Affects Versions: 1.18.0
Reporter: Yuxin Tan








Re: [DISCUSS] Release Flink 1.17.1

2023-05-11 Thread Xintong Song
+1

I'll help with the steps that require PMC privileges.

Best,

Xintong



On Thu, May 11, 2023 at 3:12 PM Jingsong Li  wrote:

> +1 for releasing 1.17.1
>
> Best,
> Jingsong
>
> On Thu, May 11, 2023 at 1:29 PM Gyula Fóra  wrote:
> >
> > +1 for the release
> >
> > Gyula
> >
> > On Thu, 11 May 2023 at 05:35, Yun Tang  wrote:
> >
> > > +1 for release flink-1.17.1
> > >
> > > The blocker issue might cause silent incorrect data, it's better to
> have a
> > > fix release ASAP.
> > >
> > >
> > > Best
> > > Yun Tang
> > > 
> > > From: weijie guo 
> > > Sent: Thursday, May 11, 2023 11:08
> > > To: dev@flink.apache.org ; tonysong...@gmail.com
> <
> > > tonysong...@gmail.com>
> > > Subject: [DISCUSS] Release Flink 1.17.1
> > >
> > > Hi all,
> > >
> > >
> > > I would like to discuss creating a new 1.17 patch release (1.17.1). The
> > > last 1.17 release is nearly two months old, and since then, 66 tickets
> have
> > > been closed [1], of which 14 are blocker/critical [2].  Some of them
> are
> > > quite important, such as FLINK-31293 [3] and  FLINK-32027 [4].
> > >
> > >
> > > I am not aware of any unresolved blockers and there are no in-progress
> > > tickets [5].
> > > Please let me know if there are any issues you'd like to be included in
> > > this release but still not merged.
> > >
> > >
> > > If the community agrees to create this new patch release, I could
> > > volunteer as the release manager
> > >  and Xintong can help with actions that require a PMC role.
> > >
> > >
> > > Thanks,
> > >
> > > Weijie
> > >
> > >
> > > [1]
> > >
> > >
> https://issues.apache.org/jira/browse/FLINK-32027?jql=project%20%3D%20FLINK%20AND%20fixVersion%20%3D%201.17.1%20%20and%20resolution%20%20!%3D%20%20Unresolved%20order%20by%20priority%20DESC
> > >
> > > [2]
> > >
> > >
> https://issues.apache.org/jira/browse/FLINK-31273?jql=project%20%3D%20FLINK%20AND%20fixVersion%20%3D%201.17.1%20and%20resolution%20%20!%3D%20Unresolved%20%20and%20priority%20in%20(Blocker%2C%20Critical)%20ORDER%20by%20priority%20%20DESC
> > >
> > > [3] https://issues.apache.org/jira/browse/FLINK-31293
> > >
> > > [4] https://issues.apache.org/jira/browse/FLINK-32027
> > >
> > > [5] https://issues.apache.org/jira/projects/FLINK/versions/12352886
> > >
>


Re: [DISCUSS] Release Flink 1.16.2

2023-05-11 Thread Xintong Song
+1

I'll help with the steps that require PMC privileges.

Best,

Xintong



On Thu, May 11, 2023 at 3:13 PM Jingsong Li  wrote:

> +1 for releasing 1.16.2
>
> Best,
> Jingsong
>
> On Thu, May 11, 2023 at 1:28 PM Gyula Fóra  wrote:
> >
> > +1 for the release
> >
> > Gyula
> >
> > On Thu, 11 May 2023 at 05:08, weijie guo 
> wrote:
> >
> > > [1]
> > >
> > >
> https://issues.apache.org/jira/browse/FLINK-31092?jql=project%20%3D%20FLINK%20AND%20fixVersion%20%3D%201.16.2%20%20and%20resolution%20%20!%3D%20%20Unresolved%20order%20by%20priority%20DESC
> > >
> > > [2]
> > >
> > >
> https://issues.apache.org/jira/browse/FLINK-31092?jql=project%20%3D%20FLINK%20AND%20fixVersion%20%3D%201.16.2%20and%20resolution%20%20!%3D%20Unresolved%20%20and%20priority%20in%20(Blocker%2C%20Critical)%20ORDER%20by%20priority%20%20DESC
> > >
> > > [3] https://issues.apache.org/jira/browse/FLINK-31293
> > >
> > > [4] https://issues.apache.org/jira/browse/FLINK-32027
> > >
> > > [5] https://issues.apache.org/jira/projects/FLINK/versions/12352765
> > >
> > >
> > >
> > >
> > > weijie guo  wrote on Thu, May 11, 2023 at 11:06:
> > >
> > > > Hi all,
> > > >
> > > >
> > > > I would like to discuss creating a new 1.16 patch release (1.16.2).
> The
> > > > last 1.16 release is over three months old, and since then, 99
> tickets
> > > have
> > > >  been closed [1], of which 30 are blocker/critical [2].  Some
> > > > of them are quite important, such as FLINK-31293 [3] and FLINK-32027
> [4].
> > > >
> > > >
> > > >
> > > > I am not aware of any unresolved blockers and there are no
> in-progress
> > > tickets [5].
> > > > Please let me know if there are any issues you'd like to be included
> in
> > > > this release but still not merged.
> > > >
> > > >
> > > >
> > > > If the community agrees to create this new patch release, I could
> > > volunteer as the release manager
> > > >  and Xintong can help with actions that require a PMC role.
> > > >
> > > > Best regards,
> > > >
> > > > Weijie
> > > >
> > >
>


Re: [DISCUSS] Release Flink 1.16.2

2023-05-11 Thread Jingsong Li
+1 for releasing 1.16.2

Best,
Jingsong

On Thu, May 11, 2023 at 1:28 PM Gyula Fóra  wrote:
>
> +1 for the release
>
> Gyula
>
> On Thu, 11 May 2023 at 05:08, weijie guo  wrote:
>
> > [1]
> >
> > https://issues.apache.org/jira/browse/FLINK-31092?jql=project%20%3D%20FLINK%20AND%20fixVersion%20%3D%201.16.2%20%20and%20resolution%20%20!%3D%20%20Unresolved%20order%20by%20priority%20DESC
> >
> > [2]
> >
> > https://issues.apache.org/jira/browse/FLINK-31092?jql=project%20%3D%20FLINK%20AND%20fixVersion%20%3D%201.16.2%20and%20resolution%20%20!%3D%20Unresolved%20%20and%20priority%20in%20(Blocker%2C%20Critical)%20ORDER%20by%20priority%20%20DESC
> >
> > [3] https://issues.apache.org/jira/browse/FLINK-31293
> >
> > [4] https://issues.apache.org/jira/browse/FLINK-32027
> >
> > [5] https://issues.apache.org/jira/projects/FLINK/versions/12352765
> >
> >
> >
> >
> > weijie guo  wrote on Thu, May 11, 2023 at 11:06:
> >
> > > Hi all,
> > >
> > >
> > > I would like to discuss creating a new 1.16 patch release (1.16.2). The
> > > last 1.16 release is over three months old, and since then, 99 tickets
> > have
> > >  been closed [1], of which 30 are blocker/critical [2].  Some
> > > of them are quite important, such as FLINK-31293 [3] and FLINK-32027 [4].
> > >
> > >
> > >
> > > I am not aware of any unresolved blockers and there are no in-progress
> > tickets [5].
> > > Please let me know if there are any issues you'd like to be included in
> > > this release but still not merged.
> > >
> > >
> > >
> > > If the community agrees to create this new patch release, I could
> > volunteer as the release manager
> > >  and Xintong can help with actions that require a PMC role.
> > >
> > > Best regards,
> > >
> > > Weijie
> > >
> >


Re: [DISCUSS] Release Flink 1.17.1

2023-05-11 Thread Jingsong Li
+1 for releasing 1.17.1

Best,
Jingsong

On Thu, May 11, 2023 at 1:29 PM Gyula Fóra  wrote:
>
> +1 for the release
>
> Gyula
>
> On Thu, 11 May 2023 at 05:35, Yun Tang  wrote:
>
> > +1 for release flink-1.17.1
> >
> > The blocker issue might cause silent incorrect data, it's better to have a
> > fix release ASAP.
> >
> >
> > Best
> > Yun Tang
> > 
> > From: weijie guo 
> > Sent: Thursday, May 11, 2023 11:08
> > To: dev@flink.apache.org ; tonysong...@gmail.com <
> > tonysong...@gmail.com>
> > Subject: [DISCUSS] Release Flink 1.17.1
> >
> > Hi all,
> >
> >
> > I would like to discuss creating a new 1.17 patch release (1.17.1). The
> > last 1.17 release is nearly two months old, and since then, 66 tickets have
> > been closed [1], of which 14 are blocker/critical [2].  Some of them are
> > quite important, such as FLINK-31293 [3] and  FLINK-32027 [4].
> >
> >
> > I am not aware of any unresolved blockers and there are no in-progress
> > tickets [5].
> > Please let me know if there are any issues you'd like to be included in
> > this release but still not merged.
> >
> >
> > If the community agrees to create this new patch release, I could
> > volunteer as the release manager
> >  and Xintong can help with actions that require a PMC role.
> >
> >
> > Thanks,
> >
> > Weijie
> >
> >
> > [1]
> >
> > https://issues.apache.org/jira/browse/FLINK-32027?jql=project%20%3D%20FLINK%20AND%20fixVersion%20%3D%201.17.1%20%20and%20resolution%20%20!%3D%20%20Unresolved%20order%20by%20priority%20DESC
> >
> > [2]
> >
> > https://issues.apache.org/jira/browse/FLINK-31273?jql=project%20%3D%20FLINK%20AND%20fixVersion%20%3D%201.17.1%20and%20resolution%20%20!%3D%20Unresolved%20%20and%20priority%20in%20(Blocker%2C%20Critical)%20ORDER%20by%20priority%20%20DESC
> >
> > [3] https://issues.apache.org/jira/browse/FLINK-31293
> >
> > [4] https://issues.apache.org/jira/browse/FLINK-32027
> >
> > [5] https://issues.apache.org/jira/projects/FLINK/versions/12352886
> >


[jira] [Created] (FLINK-32054) ElasticsearchSinkITCase.testElasticsearchSink fails on AZP

2023-05-11 Thread Sergey Nuyanzin (Jira)
Sergey Nuyanzin created FLINK-32054:
---

 Summary: ElasticsearchSinkITCase.testElasticsearchSink fails on AZP
 Key: FLINK-32054
 URL: https://issues.apache.org/jira/browse/FLINK-32054
 Project: Flink
  Issue Type: Bug
  Components: Connectors / ElasticSearch
Affects Versions: 1.16.1
Reporter: Sergey Nuyanzin


Test ElasticsearchSinkITCase.testElasticsearchSink fails on AZP
{noformat}
May 11 02:00:56 Caused by: org.elasticsearch.client.ResponseException: 
org.elasticsearch.client.ResponseException: method [HEAD], host 
[http://172.17.0.1:50560], URI [/], status line [HTTP/1.1 503 Service 
Unavailable]
May 11 02:00:56 at 
org.elasticsearch.client.RestClient$1.completed(RestClient.java:552)
May 11 02:00:56 at 
org.elasticsearch.client.RestClient$1.completed(RestClient.java:537)
May 11 02:00:56 at 
org.apache.http.concurrent.BasicFuture.completed(BasicFuture.java:122)
May 11 02:00:56 at 
org.apache.http.impl.nio.client.DefaultClientExchangeHandlerImpl.responseCompleted(DefaultClientExchangeHandlerImpl.java:177)
May 11 02:00:56 at 
org.apache.http.nio.protocol.HttpAsyncRequestExecutor.processResponse(HttpAsyncRequestExecutor.java:436)
May 11 02:00:56 at 
org.apache.http.nio.protocol.HttpAsyncRequestExecutor.responseReceived(HttpAsyncRequestExecutor.java:309)
May 11 02:00:56 at 
org.apache.http.impl.nio.DefaultNHttpClientConnection.consumeInput(DefaultNHttpClientConnection.java:255)
May 11 02:00:56 at 
org.apache.http.impl.nio.client.InternalIODispatch.onInputReady(InternalIODispatch.java:81)
May 11 02:00:56 at 
org.apache.http.impl.nio.client.InternalIODispatch.onInputReady(InternalIODispatch.java:39)
May 11 02:00:56 at 
org.apache.http.impl.nio.reactor.AbstractIODispatch.inputReady(AbstractIODispatch.java:114)
May 11 02:00:56 at 
org.apache.http.impl.nio.reactor.BaseIOReactor.readable(BaseIOReactor.java:162)
May 11 02:00:56 at 
org.apache.http.impl.nio.reactor.AbstractIOReactor.processEvent(AbstractIOReactor.java:337)
May 11 02:00:56 at 
org.apache.http.impl.nio.reactor.AbstractIOReactor.processEvents(AbstractIOReactor.java:315)
May 11 02:00:56 at 
org.apache.http.impl.nio.reactor.AbstractIOReactor.execute(AbstractIOReactor.java:276)
May 11 02:00:56 at 
org.apache.http.impl.nio.reactor.BaseIOReactor.execute(BaseIOReactor.java:104)
May 11 02:00:56 at 
org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor$Worker.run(AbstractMultiworkerIOReactor.java:588)
May 11 02:00:56 ... 1 more
May 11 02:00:56 

{noformat}


