[jira] [Created] (FLINK-30310) Re-enable e2e test error check

2022-12-05 Thread Gabor Somogyi (Jira)
Gabor Somogyi created FLINK-30310:
-

 Summary: Re-enable e2e test error check
 Key: FLINK-30310
 URL: https://issues.apache.org/jira/browse/FLINK-30310
 Project: Flink
  Issue Type: Bug
  Components: Kubernetes Operator
Reporter: Gabor Somogyi


In FLINK-30307 the e2e test error check has been turned off temporarily. We must 
re-enable it after the release.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[RESULT][VOTE] FLIP-273: Improve the Catalog API to Support ALTER TABLE syntax

2022-12-05 Thread Shengkai Fang
Hi, dev.

FLIP-273: Improve the Catalog API to Support ALTER TABLE syntax[1] has been
accepted.

There were 4 binding and 5 non-binding votes, as follows:

Jingsong Li (binding),
Dong Lin (binding),
Yaroslav Tkachenko (non-binding),
yuxia (non-binding),
Paul Lam (non-binding),
Jark Wu (binding),
Zheng Yu Chen (non-binding),
Jing Ge (non-binding),
Shengkai Fang (binding)

There were no votes against it.

Best,
Shengkai

[1]
https://cwiki.apache.org/confluence/display/FLINK/FLIP-273%3A+Improve+the+Catalog+API+to+Support+ALTER+TABLE+syntax


Re: [VOTE] FLIP-273: Improve Catalog API to Support ALTER TABLE syntax

2022-12-05 Thread Shengkai Fang
Hi, all.

Thanks for all your votes. +1 (binding) from my side.

Best,
Shengkai

Jing Ge  wrote on Tue, Dec 6, 2022 at 07:58:

> +1(no-binding)
>
> On Mon, Dec 5, 2022 at 3:29 PM Zheng Yu Chen  wrote:
>
> > +1(no-binding)
> >
> > Jark Wu  wrote on Mon, Dec 5, 2022 at 11:00:
> >
> > > +1 (binding)
> > >
> > > Best,
> > > Jark
> > >
> > > On Fri, 2 Dec 2022 at 10:11, Paul Lam  wrote:
> > >
> > > > +1 (non-binding)
> > > >
> > > > Best,
> > > > Paul Lam
> > > >
> > > > > On Dec 2, 2022, at 09:17, yuxia  wrote:
> > > > >
> > > > > +1 (non-binding)
> > > > >
> > > > > Best regards,
> > > > > Yuxia
> > > > >
> > > > > - Original Message -
> > > > > From: "Yaroslav Tkachenko" 
> > > > > To: "dev" 
> > > > > Sent: Friday, December 2, 2022, 12:27:24 AM
> > > > > Subject: Re: [VOTE] FLIP-273: Improve Catalog API to Support ALTER TABLE
> > > > syntax
> > > > >
> > > > > +1 (non-binding).
> > > > >
> > > > > Looking forward to it!
> > > > >
> > > > > On Thu, Dec 1, 2022 at 5:06 AM Dong Lin 
> wrote:
> > > > >
> > > > >> +1 (binding)
> > > > >>
> > > > >> Thanks for the FLIP!
> > > > >>
> > > > >> On Thu, Dec 1, 2022 at 12:20 PM Shengkai Fang 
> > > > wrote:
> > > > >>
> > > > >>> Hi All,
> > > > >>>
> > > > >>> Thanks for all the feedback so far. Based on the discussion[1] we
> > > seem
> > > > >>> to have a consensus, so I would like to start a vote on FLIP-273.
> > > > >>>
> > > > >>> The vote will last for at least 72 hours (Dec 5th at 13:00 GMT,
> > > > >>> excluding weekend days) unless there is an objection or
> > insufficient
> > > > >> votes.
> > > > >>>
> > > > >>> Best,
> > > > >>> Shengkai
> > > > >>>
> > > > >>> [1]
> > > > >>>
> > > > >>>
> > > > >>
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-273%3A+Improve+the+Catalog+API+to+Support+ALTER+TABLE+syntax
> > > > >>> [2]
> > https://lists.apache.org/thread/2v4kh2bpzvk049zdxb687q7o1pcmnnnw
> > > > >>>
> > > > >>
> > > >
> > > >
> > >
> >
> >
> > --
> > Best
> >
> > ConradJam
> >
>


[jira] [Created] (FLINK-30309) Add the ability to supply custom SslContexts for netty client/servers.

2022-12-05 Thread Steve Niemitz (Jira)
Steve Niemitz created FLINK-30309:
-

 Summary: Add the ability to supply custom SslContexts for netty 
client/servers.
 Key: FLINK-30309
 URL: https://issues.apache.org/jira/browse/FLINK-30309
 Project: Flink
  Issue Type: Improvement
  Components: Runtime / Network
Reporter: Steve Niemitz


The existing Flink configuration supports only simple SSL configuration via 
keystore/truststore settings. For more advanced configuration options it 
is desirable to be able to instead provide the entire pre-configured 
SslContext, as well as allow it to be reloaded if needed (e.g., when the certificate 
material rotates while running).
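
For illustration, a fully pre-configured netty SslContext can be assembled outside of Flink, e.g. from PEM files. The sketch below shows what such a supplier could look like; the SslContextProvider hook is hypothetical (not an existing Flink interface), while the SslContextBuilder calls are plain netty API.

{code:java}
import io.netty.handler.ssl.SslContext;
import io.netty.handler.ssl.SslContextBuilder;

import javax.net.ssl.SSLException;
import java.io.File;

/**
 * Sketch only: build a fully pre-configured netty SslContext from PEM files.
 * The SslContextProvider hook is hypothetical and merely illustrates how such
 * a context could be handed to the netty client/server stack.
 */
public final class CustomSslContextSketch {

    /** Hypothetical supplier of a (reloadable) client-side SslContext. */
    public interface SslContextProvider {
        SslContext clientSslContext() throws SSLException;
    }

    public static SslContextProvider fromPemFiles(File trustedCerts, File certChain, File privateKey) {
        // Re-reading the files on every call is what would allow rotated
        // certificate material to be picked up without restarting the job.
        return () -> SslContextBuilder.forClient()
                .trustManager(trustedCerts)         // trusted CA certificates (PEM)
                .keyManager(certChain, privateKey)  // client certificate chain + private key (PEM)
                .build();
    }
}
{code}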



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-30308) ClassCastException: class java.io.ObjectStreamClass$Caches$1 cannot be cast to class java.util.Map is showing in the logs when the job shuts down

2022-12-05 Thread Dian Fu (Jira)
Dian Fu created FLINK-30308:
---

 Summary: ClassCastException: class 
java.io.ObjectStreamClass$Caches$1 cannot be cast to class java.util.Map is 
showing in the logs when the job shuts down
 Key: FLINK-30308
 URL: https://issues.apache.org/jira/browse/FLINK-30308
 Project: Flink
  Issue Type: Bug
  Components: API / Python
Reporter: Dian Fu
Assignee: Dian Fu
 Fix For: 1.17.0, 1.16.1, 1.15.4


{code:java}
2022-12-05 18:26:40,229 WARN  
org.apache.flink.streaming.api.operators.AbstractStreamOperator [] - Failed to 
clean up the leaking objects.
java.lang.ClassCastException: class java.io.ObjectStreamClass$Caches$1 cannot 
be cast to class java.util.Map (java.io.ObjectStreamClass$Caches$1 and 
java.util.Map are in module java.base of loader 'bootstrap')
at 
org.apache.flink.streaming.api.utils.ClassLeakCleaner.clearCache(ClassLeakCleaner.java:58)
 
~[blob_p-a72e14b9030c3ca0d3d0a8fc6e70166c7419d431-f7f18b2164971cb6798db9ab762feabd:1.15.0]
at 
org.apache.flink.streaming.api.utils.ClassLeakCleaner.cleanUpLeakingClasses(ClassLeakCleaner.java:39)
 
~[blob_p-a72e14b9030c3ca0d3d0a8fc6e70166c7419d431-f7f18b2164971cb6798db9ab762feabd:1.15.0]
at 
org.apache.flink.streaming.api.operators.python.AbstractPythonFunctionOperator.close(AbstractPythonFunctionOperator.java:142)
 
~[blob_p-a72e14b9030c3ca0d3d0a8fc6e70166c7419d431-f7f18b2164971cb6798db9ab762feabd:1.15.0]
at 
org.apache.flink.streaming.api.operators.python.AbstractExternalPythonFunctionOperator.close(AbstractExternalPythonFunctionOperator.java:73)
 
~[blob_p-a72e14b9030c3ca0d3d0a8fc6e70166c7419d431-f7f18b2164971cb6798db9ab762feabd:1.15.0]
at 
org.apache.flink.streaming.runtime.tasks.StreamOperatorWrapper.close(StreamOperatorWrapper.java:163)
 ~[flink-dist-1.15.2.jar:1.15.2]
at 
org.apache.flink.streaming.runtime.tasks.RegularOperatorChain.closeAllOperators(RegularOperatorChain.java:125)
 ~[flink-dist-1.15.2.jar:1.15.2]
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.closeAllOperators(StreamTask.java:997)
 ~[flink-dist-1.15.2.jar:1.15.2]
at org.apache.flink.util.IOUtils.closeAll(IOUtils.java:254) 
~[flink-dist-1.15.2.jar:1.15.2]
at 
org.apache.flink.core.fs.AutoCloseableRegistry.doClose(AutoCloseableRegistry.java:72)
 ~[flink-dist-1.15.2.jar:1.15.2]
at 
org.apache.flink.util.AbstractAutoCloseableRegistry.close(AbstractAutoCloseableRegistry.java:127)
 ~[flink-dist-1.15.2.jar:1.15.2]
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.cleanUp(StreamTask.java:916)
 ~[flink-dist-1.15.2.jar:1.15.2]
at 
org.apache.flink.runtime.taskmanager.Task.lambda$restoreAndInvoke$0(Task.java:930)
 ~[flink-dist-1.15.2.jar:1.15.2]
at 
org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:948)
 [flink-dist-1.15.2.jar:1.15.2]
at 
org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:930) 
[flink-dist-1.15.2.jar:1.15.2]
at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:741) 
[flink-dist-1.15.2.jar:1.15.2]
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:563) 
[flink-dist-1.15.2.jar:1.15.2]
at java.lang.Thread.run(Unknown Source) [?:?]{code}

Reported in Slack: 
https://apache-flink.slack.com/archives/C03G7LJTS2G/p1670265131083639?thread_ts=1670265114.640369&cid=C03G7LJTS2G
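
For illustration, a defensive variant of the cache cleanup could guard the cast before clearing. This is only a sketch of the pattern (not the actual fix); on JDK 9+ it additionally needs --add-opens java.base/java.io=ALL-UNNAMED to make the field accessible at all.

{code:java}
import java.lang.reflect.Field;
import java.util.Map;

/** Sketch only: clear an ObjectStreamClass cache field, but only when it really is a Map. */
final class ClassLeakCleanerSketch {

    static void clearCacheIfPossible(String fieldName) throws Exception {
        Class<?> cachesClass = Class.forName("java.io.ObjectStreamClass$Caches");
        Field cacheField = cachesClass.getDeclaredField(fieldName);
        cacheField.setAccessible(true);
        Object cache = cacheField.get(null);
        if (cache instanceof Map) {
            // Older JDKs: the cache is a ConcurrentHashMap and can be cleared directly.
            ((Map<?, ?>) cache).clear();
        }
        // Newer JDKs: the field holds a different cache type (e.g. Caches$1),
        // so skip it instead of casting blindly and hitting the ClassCastException above.
    }
}
{code}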



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [VOTE] FLIP-273: Improve Catalog API to Support ALTER TABLE syntax

2022-12-05 Thread Jing Ge
+1(no-binding)

On Mon, Dec 5, 2022 at 3:29 PM Zheng Yu Chen  wrote:

> +1(no-binding)
>
> Jark Wu  wrote on Mon, Dec 5, 2022 at 11:00:
>
> > +1 (binding)
> >
> > Best,
> > Jark
> >
> > On Fri, 2 Dec 2022 at 10:11, Paul Lam  wrote:
> >
> > > +1 (non-binding)
> > >
> > > Best,
> > > Paul Lam
> > >
> > > > On Dec 2, 2022, at 09:17, yuxia  wrote:
> > > >
> > > > +1 (non-binding)
> > > >
> > > > Best regards,
> > > > Yuxia
> > > >
> > > > - Original Message -
> > > > From: "Yaroslav Tkachenko" 
> > > > To: "dev" 
> > > > Sent: Friday, December 2, 2022, 12:27:24 AM
> > > > Subject: Re: [VOTE] FLIP-273: Improve Catalog API to Support ALTER TABLE
> > > syntax
> > > >
> > > > +1 (non-binding).
> > > >
> > > > Looking forward to it!
> > > >
> > > > On Thu, Dec 1, 2022 at 5:06 AM Dong Lin  wrote:
> > > >
> > > >> +1 (binding)
> > > >>
> > > >> Thanks for the FLIP!
> > > >>
> > > >> On Thu, Dec 1, 2022 at 12:20 PM Shengkai Fang 
> > > wrote:
> > > >>
> > > >>> Hi All,
> > > >>>
> > > >>> Thanks for all the feedback so far. Based on the discussion[1] we
> > seem
> > > >>> to have a consensus, so I would like to start a vote on FLIP-273.
> > > >>>
> > > >>> The vote will last for at least 72 hours (Dec 5th at 13:00 GMT,
> > > >>> excluding weekend days) unless there is an objection or
> insufficient
> > > >> votes.
> > > >>>
> > > >>> Best,
> > > >>> Shengkai
> > > >>>
> > > >>> [1]
> > > >>>
> > > >>>
> > > >>
> > >
> >
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-273%3A+Improve+the+Catalog+API+to+Support+ALTER+TABLE+syntax
> > > >>> [2]
> https://lists.apache.org/thread/2v4kh2bpzvk049zdxb687q7o1pcmnnnw
> > > >>>
> > > >>
> > >
> > >
> >
>
>
> --
> Best
>
> ConradJam
>


[jira] [Created] (FLINK-30307) Turn off e2e test error check temporarily

2022-12-05 Thread Gabor Somogyi (Jira)
Gabor Somogyi created FLINK-30307:
-

 Summary: Turn off e2e test error check temporarily
 Key: FLINK-30307
 URL: https://issues.apache.org/jira/browse/FLINK-30307
 Project: Flink
  Issue Type: Bug
  Components: Kubernetes Operator
Reporter: Gabor Somogyi






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-30306) Audit utils can expose potentially sensitive information

2022-12-05 Thread Alexis Sarda-Espinosa (Jira)
Alexis Sarda-Espinosa created FLINK-30306:
-

 Summary: Audit utils can expose potentially sensitive information
 Key: FLINK-30306
 URL: https://issues.apache.org/jira/browse/FLINK-30306
 Project: Flink
  Issue Type: Bug
  Components: Kubernetes Operator
Affects Versions: kubernetes-operator-1.2.0
Reporter: Alexis Sarda-Espinosa


I see events being logged by 
{{org.apache.flink.kubernetes.operator.listener.AuditUtils}} along the lines of 
">>> Event  | Info| SPECCHANGED | UPGRADE change(s) detected". This 
logs the entire new spec, which can contain sensitive information that has been 
injected from a Kubernetes secret.
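
For illustration only, the kind of masking that could be applied before the spec is logged might look like the sketch below; the key-name heuristics are assumptions, not the operator's actual behavior.

{code:java}
import java.util.LinkedHashMap;
import java.util.Map;

/**
 * Sketch only: mask values whose keys look sensitive before a (flattened) spec
 * is logged. The key heuristics are assumptions for illustration.
 */
final class SpecLoggingSketch {

    private static final String MASK = "******";

    static Map<String, Object> maskSensitiveEntries(Map<String, Object> flattenedSpec) {
        Map<String, Object> masked = new LinkedHashMap<>();
        flattenedSpec.forEach((key, value) -> {
            String lower = key.toLowerCase();
            boolean sensitive =
                    lower.contains("secret") || lower.contains("password") || lower.contains("token");
            masked.put(key, sensitive ? MASK : value);
        });
        return masked;
    }
}
{code}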



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[ANNOUNCE] Apache flink-connector-dynamodb 3.0.0 released

2022-12-05 Thread Danny Cranmer
The Apache Flink community is very happy to announce the release of Apache
flink-connector-aws 3.0.0. This release includes a new Amazon DynamoDB
connector.

Apache Flink® is an open-source stream processing framework for
distributed, high-performing, always-available, and accurate data streaming
applications.

The release is available for download at:
https://flink.apache.org/downloads.html

The full release notes are available in Jira:
https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12352277

Flink 1.16 documentation can be found here:
- Datastream:
https://nightlies.apache.org/flink/flink-docs-release-1.16/docs/connectors/datastream/dynamodb/
- Table API:
https://nightlies.apache.org/flink/flink-docs-release-1.16/docs/connectors/table/dynamodb/

We would like to thank all contributors of the Apache Flink community who
made this release possible!

Regards,
Danny Cranmer


[jira] [Created] (FLINK-30305) Operator deletes HA metadata during stateful upgrade, preventing potential manual rollback

2022-12-05 Thread Alexis Sarda-Espinosa (Jira)
Alexis Sarda-Espinosa created FLINK-30305:
-

 Summary: Operator deletes HA metadata during stateful upgrade, 
preventing potential manual rollback
 Key: FLINK-30305
 URL: https://issues.apache.org/jira/browse/FLINK-30305
 Project: Flink
  Issue Type: Bug
  Components: Kubernetes Operator
Affects Versions: kubernetes-operator-1.2.0
Reporter: Alexis Sarda-Espinosa


I was testing resiliency of jobs with Kubernetes-based HA enabled, upgrade mode 
= {{savepoint}}, and with _automatic_ rollback _disabled_ in the operator. 
After the job was running, I purposely created an erroneous spec by changing my 
pod template to include an entry in {{envFrom -> secretRef}} with a name that 
doesn't exist. Schema validation passed, so the operator tried to upgrade the 
job, and I see this in the logs:

{noformat}
>>> Status | Info| UPGRADING   | The resource is being upgraded
Deleting deployment with terminated application before new deployment
Deleting JobManager deployment and HA metadata.
{noformat}

Afterwards, even after removing the non-existent entry from my pod template, the 
operator can no longer propagate the new spec because "Job is not running yet 
and HA metadata is not available, waiting for upgradeable state".




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [VOTE] Release flink-connector-aws v4.0.0, release candidate #1

2022-12-05 Thread Danny Cranmer
This release is officially cancelled due to the discovery of a critical bug
[1].

I will fix and follow up with an RC.

[1] https://issues.apache.org/jira/browse/FLINK-30304

On Mon, Dec 5, 2022 at 3:25 PM Danny Cranmer 
wrote:

> Hi everyone,
> Please review and vote on the release candidate #1 for the version 4.0.0,
> as follows:
> [ ] +1, Approve the release
> [ ] -1, Do not approve the release (please provide specific comments)
>
> This release externalizes the Kinesis Data Streams and Kinesis Data
> Firehose connector to the flink-connector-aws repository.
>
> The complete staging area is available for your review, which includes:
> * JIRA release notes [1],
> * the official Apache source release to be deployed to dist.apache.org
> [2], which are signed with the key with fingerprint 125FD8DB [3],
> * all artifacts to be deployed to the Maven Central Repository [4],
> * source code tag v4.0.0-rc1 [5],
> * website pull request listing the new release [6].
>
> The vote will be open for at least 72 hours (Thursday 8th December 16:00
> UTC). It is adopted by majority approval, with at least 3 PMC affirmative
> votes.
>
> Thanks,
> Danny
>
> [1]
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12352538
> [2]
> https://dist.apache.org/repos/dist/dev/flink/flink-connector-aws-4.0.0-rc1/
> [3] https://dist.apache.org/repos/dist/release/flink/KEYS
> [4]
> https://repository.apache.org/content/repositories/orgapacheflink-1558/
> [5] https://github.com/apache/flink-connector-aws/releases/tag/v4.0.0-rc1
> [6] https://github.com/apache/flink-web/pull/592
>
>
>


[jira] [Created] (FLINK-30304) Possible Deadlock in Kinesis/Firehose/DynamoDB Connector

2022-12-05 Thread Danny Cranmer (Jira)
Danny Cranmer created FLINK-30304:
-

 Summary: Possible Deadlock in Kinesis/Firehose/DynamoDB Connector
 Key: FLINK-30304
 URL: https://issues.apache.org/jira/browse/FLINK-30304
 Project: Flink
  Issue Type: Technical Debt
Reporter: Danny Cranmer
 Fix For: aws-connector-4.0.0


AWS sinks based on the Async Sink can enter a deadlock situation if the AWS async 
client throws an error outside of the future. We observed this with a local 
application:

 
{code:java}

    
java.lang.NullPointerException
at 
org.apache.flink.kinesis.shaded.software.amazon.awssdk.http.nio.netty.internal.utils.NettyUtils.closedChannelMessage(NettyUtils.java:135)
at 
org.apache.flink.kinesis.shaded.software.amazon.awssdk.http.nio.netty.internal.utils.NettyUtils.decorateException(NettyUtils.java:71)
at 
org.apache.flink.kinesis.shaded.software.amazon.awssdk.http.nio.netty.internal.NettyRequestExecutor.handleFailure(NettyRequestExecutor.java:310)
at 
org.apache.flink.kinesis.shaded.software.amazon.awssdk.http.nio.netty.internal.NettyRequestExecutor.makeRequestListener(NettyRequestExecutor.java:189)
at 
org.apache.flink.kinesis.shaded.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:578)
at 
org.apache.flink.kinesis.shaded.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:552)
at 
org.apache.flink.kinesis.shaded.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:491)
at 
org.apache.flink.kinesis.shaded.io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:616)
at 
org.apache.flink.kinesis.shaded.io.netty.util.concurrent.DefaultPromise.setFailure0(DefaultPromise.java:609)
at 
org.apache.flink.kinesis.shaded.io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:117)
at 
org.apache.flink.kinesis.shaded.software.amazon.awssdk.http.nio.netty.internal.CancellableAcquireChannelPool.lambda$acquire$1(CancellableAcquireChannelPool.java:58)
at 
org.apache.flink.kinesis.shaded.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:578)
at 
org.apache.flink.kinesis.shaded.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:552)
at 
org.apache.flink.kinesis.shaded.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:491)
at 
org.apache.flink.kinesis.shaded.io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:616)
at 
org.apache.flink.kinesis.shaded.io.netty.util.concurrent.DefaultPromise.setFailure0(DefaultPromise.java:609)
at 
org.apache.flink.kinesis.shaded.io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:117)
at 
org.apache.flink.kinesis.shaded.software.amazon.awssdk.http.nio.netty.internal.HealthCheckedChannelPool.ensureAcquiredChannelIsHealthy(HealthCheckedChannelPool.java:114)
at 
org.apache.flink.kinesis.shaded.software.amazon.awssdk.http.nio.netty.internal.HealthCheckedChannelPool.lambda$tryAcquire$1(HealthCheckedChannelPool.java:97)
at 
org.apache.flink.kinesis.shaded.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:578)
at 
org.apache.flink.kinesis.shaded.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:571)
at 
org.apache.flink.kinesis.shaded.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:550)
at 
org.apache.flink.kinesis.shaded.io.netty.util.concurrent.DefaultPromise.access$200(DefaultPromise.java:35)
at 
org.apache.flink.kinesis.shaded.io.netty.util.concurrent.DefaultPromise$1.run(DefaultPromise.java:502)
at 
org.apache.flink.kinesis.shaded.io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:164)
at 
org.apache.flink.kinesis.shaded.io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:469)
at 
org.apache.flink.kinesis.shaded.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:500)
at 
org.apache.flink.kinesis.shaded.io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986)
at 
org.apache.flink.kinesis.shaded.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at java.base/java.lang.Thread.run(Thread.java:829){code}
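
A minimal sketch of the defensive pattern, assuming a generic async call: an exception thrown synchronously (like the NullPointerException above) is converted into a failed future, so the sink's completion handling and in-flight bookkeeping always run. This illustrates the pattern only, not the actual connector fix.

{code:java}
import java.util.concurrent.CompletableFuture;
import java.util.function.Supplier;

/**
 * Sketch only: make sure an exception thrown synchronously by an async client
 * still surfaces through the returned future, so completion handlers are
 * always invoked.
 */
final class SafeAsyncCall {

    static <T> CompletableFuture<T> callSafely(Supplier<CompletableFuture<T>> asyncCall) {
        try {
            return asyncCall.get();
        } catch (Throwable t) {
            // The client threw outside of the future; convert it into a failed
            // future instead of losing the completion signal.
            CompletableFuture<T> failed = new CompletableFuture<>();
            failed.completeExceptionally(t);
            return failed;
        }
    }
}
{code}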



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[VOTE] Release flink-connector-aws v4.0.0, release candidate #1

2022-12-05 Thread Danny Cranmer
Hi everyone,
Please review and vote on the release candidate #1 for the version 4.0.0,
as follows:
[ ] +1, Approve the release
[ ] -1, Do not approve the release (please provide specific comments)

This release externalizes the Kinesis Data Streams and Kinesis Data
Firehose connector to the flink-connector-aws repository.

The complete staging area is available for your review, which includes:
* JIRA release notes [1],
* the official Apache source release to be deployed to dist.apache.org [2],
which are signed with the key with fingerprint 125FD8DB [3],
* all artifacts to be deployed to the Maven Central Repository [4],
* source code tag v4.0.0-rc1 [5],
* website pull request listing the new release [6].

The vote will be open for at least 72 hours (Thursday 8th December 16:00
UTC). It is adopted by majority approval, with at least 3 PMC affirmative
votes.

Thanks,
Danny

[1]
https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12352538
[2]
https://dist.apache.org/repos/dist/dev/flink/flink-connector-aws-4.0.0-rc1/
[3] https://dist.apache.org/repos/dist/release/flink/KEYS
[4] https://repository.apache.org/content/repositories/orgapacheflink-1558/
[5] https://github.com/apache/flink-connector-aws/releases/tag/v4.0.0-rc1
[6] https://github.com/apache/flink-web/pull/592


[jira] [Created] (FLINK-30303) Support max column width in sql client

2022-12-05 Thread Shammon (Jira)
Shammon created FLINK-30303:
---

 Summary: Support max column width in sql client
 Key: FLINK-30303
 URL: https://issues.apache.org/jira/browse/FLINK-30303
 Project: Flink
  Issue Type: Improvement
  Components: Table SQL / Client
Affects Versions: 1.17.0
Reporter: Shammon


Currently, users can use `sql-client.display.max-column-width` to set the maximum 
column width in the sql-client for streaming queries; this should be supported in batch mode too.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [VOTE] FLIP-273: Improve Catalog API to Support ALTER TABLE syntax

2022-12-05 Thread Zheng Yu Chen
+1(no-binding)

Jark Wu  wrote on Mon, Dec 5, 2022 at 11:00:

> +1 (binding)
>
> Best,
> Jark
>
> On Fri, 2 Dec 2022 at 10:11, Paul Lam  wrote:
>
> > +1 (non-binding)
> >
> > Best,
> > Paul Lam
> >
> > > On Dec 2, 2022, at 09:17, yuxia  wrote:
> > >
> > > +1 (non-binding)
> > >
> > > Best regards,
> > > Yuxia
> > >
> > > - Original Message -
> > > From: "Yaroslav Tkachenko" 
> > > To: "dev" 
> > > Sent: Friday, December 2, 2022, 12:27:24 AM
> > > Subject: Re: [VOTE] FLIP-273: Improve Catalog API to Support ALTER TABLE
> > syntax
> > >
> > > +1 (non-binding).
> > >
> > > Looking forward to it!
> > >
> > > On Thu, Dec 1, 2022 at 5:06 AM Dong Lin  wrote:
> > >
> > >> +1 (binding)
> > >>
> > >> Thanks for the FLIP!
> > >>
> > >> On Thu, Dec 1, 2022 at 12:20 PM Shengkai Fang 
> > wrote:
> > >>
> > >>> Hi All,
> > >>>
> > >>> Thanks for all the feedback so far. Based on the discussion[1] we
> seem
> > >>> to have a consensus, so I would like to start a vote on FLIP-273.
> > >>>
> > >>> The vote will last for at least 72 hours (Dec 5th at 13:00 GMT,
> > >>> excluding weekend days) unless there is an objection or insufficient
> > >> votes.
> > >>>
> > >>> Best,
> > >>> Shengkai
> > >>>
> > >>> [1]
> > >>>
> > >>>
> > >>
> >
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-273%3A+Improve+the+Catalog+API+to+Support+ALTER+TABLE+syntax
> > >>> [2] https://lists.apache.org/thread/2v4kh2bpzvk049zdxb687q7o1pcmnnnw
> > >>>
> > >>
> >
> >
>


-- 
Best

ConradJam


Re: [VOTE] Release flink-connector-hbase v3.0.0, release candidate #1

2022-12-05 Thread Ferenc Csaky
Hi Martijn,

+1 (non-binding)

- Verified hashes/signatures
- Maven repo content LGTM
- No binaries in the source archive
- Built source/tests pass
- Tag exists in GH
- Reviewed web PR

Thanks,
F


--- Original Message ---
On Friday, December 2nd, 2022 at 14:04, Martijn Visser 
 wrote:


> 
> 
> Hi everyone,
> Please review and vote on the release candidate #1 for the
> flink-connector-hbase version v3.0.0, as follows:
> [ ] +1, Approve the release
> [ ] -1, Do not approve the release (please provide specific comments)
> 
> Note: This is the first externalized version of the HBase connector.
> 
> The complete staging area is available for your review, which includes:
> * JIRA release notes [1],
> * the official Apache source release to be deployed to dist.apache.org [2],
> which are signed with the key with fingerprint
> A5F3BCE4CBE993573EC5966A65321B8382B219AF [3],
> * all artifacts to be deployed to the Maven Central Repository [4],
> * source code tag v3.0.0-rc1 [5],
> * website pull request listing the new release [6].
> 
> The vote will be open for at least 72 hours. It is adopted by majority
> approval, with at least 3 PMC affirmative votes.
> 
> Thanks,
> Martijn
> 
> https://twitter.com/MartijnVisser82
> https://github.com/MartijnVisser
> 
> [1]
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12352578
> [2]
> https://dist.apache.org/repos/dist/dev/flink/flink-connector-hbase-3.0.0-rc1/
> [3] https://dist.apache.org/repos/dist/release/flink/KEYS
> [4] https://repository.apache.org/content/repositories/orgapacheflink-1555/
> [5] https://github.com/apache/flink-connector-hbase/releases/tag/v3.0.0-rc1
> [6] https://github.com/apache/flink-web/pull/591


[jira] [Created] (FLINK-30302) WatermarkAssignerChangelogNormalizeTransposeRuleTest failed

2022-12-05 Thread Matthias Pohl (Jira)
Matthias Pohl created FLINK-30302:
-

 Summary: WatermarkAssignerChangelogNormalizeTransposeRuleTest 
failed
 Key: FLINK-30302
 URL: https://issues.apache.org/jira/browse/FLINK-30302
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Planner
Affects Versions: 1.17.0
Reporter: Matthias Pohl


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=43715&view=logs&j=0c940707-2659-5648-cbe6-a1ad63045f0a&t=075c2716-8010-5565-fe08-3c4bb45824a4&l=11792

{code}
Dec 05 04:26:28 [ERROR] 
org.apache.flink.table.planner.plan.rules.physical.stream.WatermarkAssignerChangelogNormalizeTransposeRuleTest.testPushdownCalcNotAffectChangelogNormalizeKey
  Time elapsed: 0.601 s  <<< FAILURE!
Dec 05 04:26:28 org.junit.ComparisonFailure: 
Dec 05 04:26:28 optimized rel plan expected:<...c(select=[a, b, f], 
[changelogMode=[I])
Dec 05 04:26:28 +- TemporalJoin(joinType=[InnerJoin], where=[AND(=(a, a0), 
__TEMPORAL_JOIN_CONDITION(ingestion_time, ingestion_time0, 
__TEMPORAL_JOIN_CONDITION_PRIMARY_KEY(a0), __TEMPORAL_JOIN_LEFT_KEY(a), 
__TEMPORAL_JOIN_RIGHT_KEY(a0)))], select=[ingestion_time, a, b, 
ingestion_time0, a0, f], changelogMode=[I])
Dec 05 04:26:28:- Exchange(distribution=[hash[a]], changelogMode=[I])
Dec 05 04:26:28:  +- WatermarkAssigner(rowtime=[ingestion_time], 
watermark=[ingestion_time], changelogMode=[I])
Dec 05 04:26:28: +- Calc(select=[CAST(ingestion_time AS TIMESTAMP(3) 
*ROWTIME*) AS ingestion_time, a, b], changelogMode=[I])
Dec 05 04:26:28:+- TableSourceScan(table=[[default_catalog, 
default_database, t1]], fields=[a, b, ingestion_time], changelogMode=[I])
Dec 05 04:26:28+- Exchange(distribution=[hash[a]], 
changelogMode=[I,UB,UA,D])
Dec 05 04:26:28   +- Calc(select=[ingestion_time, a, f], where=[f], 
changelogMode=[I,UB,UA,D])
Dec 05 04:26:28  +- ChangelogNormalize(key=[a], 
changelogMode=[I,UB,UA,D])
Dec 05 04:26:28 +- Exchange(distribution=[hash[a]], 
changelogMode=[I,UA,D])
Dec 05 04:26:28+- WatermarkAssigner(rowtime=[ingestion_time], 
watermark=[ingestion_time], changelogMode=[I,UA,D])
Dec 05 04:26:28   +- Calc(select=[CAST(ingestion_time AS 
TIMESTAMP(3) *ROWTIME*) AS ingestion_time, a, f], changelogMode=[I,UA,D])
Dec 05 04:26:28]  +-...> but was:<...c(select=[a, b, f], 
[where=[f], changelogMode=[I])
Dec 05 04:26:28 +- TemporalJoin(joinType=[InnerJoin], where=[AND(=(a, a0), 
__TEMPORAL_JOIN_CONDITION(ingestion_time, ingestion_time0, 
__TEMPORAL_JOIN_CONDITION_PRIMARY_KEY(a0), __TEMPORAL_JOIN_LEFT_KEY(a), 
__TEMPORAL_JOIN_RIGHT_KEY(a0)))], select=[ingestion_time, a, b, 
ingestion_time0, a0, f], changelogMode=[I])
Dec 05 04:26:28:- Exchange(distribution=[hash[a]], changelogMode=[I])
Dec 05 04:26:28:  +- WatermarkAssigner(rowtime=[ingestion_time], 
watermark=[ingestion_time], changelogMode=[I])
Dec 05 04:26:28: +- Calc(select=[CAST(ingestion_time AS TIMESTAMP(3) 
*ROWTIME*) AS ingestion_time, a, b], changelogMode=[I])
Dec 05 04:26:28:+- TableSourceScan(table=[[default_catalog, 
default_database, t1]], fields=[a, b, ingestion_time], changelogMode=[I])
Dec 05 04:26:28+- Exchange(distribution=[hash[a]], changelogMode=[I,UA,D])
Dec 05 04:26:28   +- ChangelogNormalize(key=[a], changelogMode=[I,UA,D])
Dec 05 04:26:28  +- Exchange(distribution=[hash[a]], 
changelogMode=[I,UA,D])
Dec 05 04:26:28 +- WatermarkAssigner(rowtime=[ingestion_time], 
watermark=[ingestion_time], changelogMode=[I,UA,D])
Dec 05 04:26:28+- Calc(select=[CAST(ingestion_time AS 
TIMESTAMP(3) *ROWTIME*) AS ingestion_time, a, f], changelogMode=[I,UA,D])
Dec 05 04:26:28 ]  +-...>
{code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-30301) TaskExecutorTest.testSharedResourcesLifecycle failed with TaskException

2022-12-05 Thread Matthias Pohl (Jira)
Matthias Pohl created FLINK-30301:
-

 Summary: TaskExecutorTest.testSharedResourcesLifecycle failed with 
TaskException
 Key: FLINK-30301
 URL: https://issues.apache.org/jira/browse/FLINK-30301
 Project: Flink
  Issue Type: Bug
  Components: Runtime / Coordination
Affects Versions: 1.17.0
Reporter: Matthias Pohl


This seems to be a follow-up of FLINK-30275. Same test but different test 
failure:
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=43709&view=logs&j=77a9d8e1-d610-59b3-fc2a-4766541e0e33&t=125e07e7-8de0-5c6c-a541-a567415af3ef&l=7479
{code}
Dec 05 03:59:18 at 
org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:114)
Dec 05 03:59:18 at 
org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:86)
Dec 05 03:59:18 at 
org.junit.platform.launcher.core.DefaultLauncherSession$DelegatingLauncher.execute(DefaultLauncherSession.java:86)
Dec 05 03:59:18 at 
org.junit.platform.launcher.core.SessionPerRequestLauncher.execute(SessionPerRequestLauncher.java:53)
Dec 05 03:59:18 at 
org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.lambda$execute$1(JUnitPlatformProvider.java:199)
Dec 05 03:59:18 at 
java.util.Iterator.forEachRemaining(Iterator.java:116)
Dec 05 03:59:18 at 
org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.execute(JUnitPlatformProvider.java:193)
Dec 05 03:59:18 at 
org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invokeAllTests(JUnitPlatformProvider.java:154)
Dec 05 03:59:18 at 
org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invoke(JUnitPlatformProvider.java:120)
Dec 05 03:59:18 at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:428)
Dec 05 03:59:18 at 
org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:162)
Dec 05 03:59:18 at 
org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:562)
Dec 05 03:59:18 at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:548)
Dec 05 03:59:18 Caused by: 
org.apache.flink.runtime.taskexecutor.exceptions.TaskException: Cannot find 
task to stop for execution 
096b33c46c225fd4af41a9484b64c7fe_010f83ce510d70707aaf04c441173b70_0_0.
Dec 05 03:59:18 at 
org.apache.flink.runtime.taskexecutor.TaskExecutor.cancelTask(TaskExecutor.java:864)
Dec 05 03:59:18 ... 53 more
{code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [DISCUSS] FLIP-277: Native GlueCatalog Support in Flink

2022-12-05 Thread Dong Lin
Hi Samrat,

Thanks for the FLIP!

Since this is the first proposal for adding a vendor-specific catalog
library in Flink, I think maybe we should also externalize those catalog
libraries similar to how we are externalizing connector libraries. It is
likely that we might want to add catalogs for other vendors in the future.
Externalizing those catalogs can make Flink development more scalable in
the long term.

It is mentioned in the FLIP that there will be two types of SdkHttpClient
supported based on the catalog option http-client.type. Is http-client.type
a public config for the GlueCatalog? If yes, can we add this config to the
"Configurations" section and explain how users should choose the client
type?
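
(For readers less familiar with the SDK, a sketch of how an http-client.type style option could map to AWS SDK v2 clients is below. The option values "urlconnection" and "apache" are assumptions for illustration and not necessarily what the FLIP defines; the builder calls are standard AWS SDK v2 API.)

{code:java}
import software.amazon.awssdk.http.SdkHttpClient;
import software.amazon.awssdk.http.apache.ApacheHttpClient;
import software.amazon.awssdk.http.urlconnection.UrlConnectionHttpClient;

/**
 * Sketch only: map an "http-client.type" style option to an SdkHttpClient.
 * The option values are assumptions for illustration.
 */
final class GlueHttpClientSketch {

    static SdkHttpClient createHttpClient(String httpClientType) {
        switch (httpClientType) {
            case "urlconnection":
                // Lightweight client based on java.net.HttpURLConnection.
                return UrlConnectionHttpClient.builder().build();
            case "apache":
                // Apache HttpClient based implementation with connection pooling.
                return ApacheHttpClient.builder().build();
            default:
                throw new IllegalArgumentException("Unsupported http-client.type: " + httpClientType);
        }
    }
}
{code}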

Regards,
Dong


On Sat, Dec 3, 2022 at 12:31 PM Samrat Deb  wrote:

> Hi everyone,
>
> I would like to open a discussion[1] on providing GlueCatalog support
> in Flink.
> Currently, Flink offers 3 major types of catalog[2], of which only
> HiveCatalog is a persistent catalog backed by the Hive Metastore. We would like
> to introduce GlueCatalog in Flink, offering another option for users that
> will be persistent in nature. The AWS Glue Data Catalog is a centralized data
> catalog in the AWS cloud that provides integrations with many different
> connectors[3]. A Flink GlueCatalog can use the features provided by Glue and
> create strong integration with other services in the cloud.
>
> [1]
>
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-277%3A+Native+GlueCatalog+Support+in+Flink
>
> [2]
>
> https://nightlies.apache.org/flink/flink-docs-master/docs/dev/table/catalogs/
>
> [3]
>
> https://docs.aws.amazon.com/glue/latest/dg/components-overview.html#data-catalog-intro
>
> [4] https://issues.apache.org/jira/browse/FLINK-29549
>
> Bests
> Samrat
>


[jira] [Created] (FLINK-30300) Building wheels on macos took longer than expected

2022-12-05 Thread Matthias Pohl (Jira)
Matthias Pohl created FLINK-30300:
-

 Summary: Building wheels on macos took longer than expected
 Key: FLINK-30300
 URL: https://issues.apache.org/jira/browse/FLINK-30300
 Project: Flink
  Issue Type: Bug
  Components: API / Python, Build System / Azure Pipelines, Test 
Infrastructure
Affects Versions: 1.17.0
Reporter: Matthias Pohl


Looks like {{build_wheels_on_macos}} reached the Azure-specific limit of 1h 
runtime for this job:
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=43700&view=logs&j=f73b5736-8355-5390-ec71-4dfdec0ce6c5&t=90f7230e-bf5a-531b-8566-ad48d3e03bbb&l=176



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [DISCUSS] FLIP-275: Support Remote SQL Client Based on SQL Gateway

2022-12-05 Thread godfrey he
Hi Zelin,

Thanks for driving this discussion.

I have a few comments,

> Add RowFormat to ResultSet to indicate the format of rows.
We should not require the SqlGateway server to meet the display
requirements of a CliClient, because different CliClients may have
different display styles. The server just needs to return the data,
and the CliClient prints the result as needed. So RowFormat is not needed.

> Add ContentType to ResultSet to indicate what kind of data the result 
> contains.
At first sight, the values of ContentType overlap. For example, a SELECT
query will return QUERY_RESULT, but it also has a JOB_ID. OTHER is too
ambiguous; I don't know which kind of query will return OTHER.
I recommend returning the concrete type for each statement, such as
"CREATE TABLE" for "create table xx (...) with ()" and
"SELECT" for "select * from xxx". The statement type can be maintained
in `Operation`s.

> Error Handling
I think the current design of the error handling mechanism can meet the
requirements of the CliClient; we can get the root cause from
the stack (see ErrorResponseBody#errors). If it becomes a common
requirement (for many clients) in the future,
we can introduce this interface.

> Runtime REST API Modification for Local Client Migration
I think this part is over-engineered; it belongs to optimization.
The client does not require very high performance, and the current design
can already meet our needs.
If we find performance problems in the future, we can do such optimizations then.

Best,
Godfrey

yu zelin  wrote on Mon, Dec 5, 2022 at 11:11:
>
> Hi, Shammon
>
> Thanks for your feedback. I think it’s good to support a jdbc-sdk. However,
> it's not supported on the gateway side yet. In my opinion, this FLIP is more
> concerned with the SQL Client. How about putting “supporting jdbc-sdk” in
> ‘Future Work’? We can discuss how to implement it in another thread.
>
> Best,
> Yu Zelin
> > On Dec 2, 2022, at 18:12, Shammon FY  wrote:
> >
> > Hi zelin
> >
> > Thanks for driving this discussion.
> >
> > I notice in the FLIP that the sql-client will interact with the sql-gateway via a
> > `REST Client` in the `Executor`; how about introducing a jdbc-sdk for the
> > sql-gateway?
> >
> > Then the sql-client could connect to the gateway with the jdbc-sdk, and on the
> > other hand, other applications and tools such as JMeter could use the jdbc-sdk
> > to connect to the sql-gateway too.
> >
> > Best,
> > Shammon
> >
> >
> > On Fri, Dec 2, 2022 at 4:10 PM yu zelin  wrote:
> >
> >> Hi Jim,
> >>
> >> Thanks for your feedback!
> >>
> >>> Should this configuration be mentioned in the FLIP?
> >>
> >> Sure.
> >>
> >>> some way for the server to be able to limit the number of requests it
> >> receives.
> >> I’m sorry, but this FLIP is dedicated to implementing the Remote mode, so
> >> we
> >> didn't consider this much. I think the option is enough currently.
> >> I will add
> >> the improvement suggestions to the ‘Future Work’.
> >>
> >>> I wonder if two other options are possible
> >>
> >> Forwarding the raw format to the gateway and then to the client is possible. The
> >> raw
> >> results from the sink are in ‘CollectResultIterator#bufferedResult’. First, we
> >> can find
> >> a way to get this result without wrapping it. Second, we can construct an
> >> ‘InternalTypeInfo’
> >> using the schema information (the data’s logical type). After
> >> construction, we can get the ‘TypeSerializer’ to deserialize the raw
> >> result.
> >>
> >>
> >>
> >>
> >>> 2022年12月1日 04:54,Jim Hughes  写道:
> >>>
> >>> Hi Yu,
> >>>
> >>> Thanks for moving my comments to this thread!  Also, thank you for
> >>> answering my questions; it is helping me understand the SQL Gateway
> >>> better.
> >>>
> >>> 5.
>  Our idea is to introduce a new session option (like
> >>> 'sql-client.result.fetch-interval') to control
> >>> the fetching requests sending frequency. What do you think?
> >>>
> >>> Should this configuration be mentioned in the FLIP?
> >>>
> >>> One slight concern I have with having 'sql-client.result.fetch-interval'
> >> as
> >>> a session configuration is that users could set it low and cause the
> >> client
> >>> to send a large volume of requests to the SQL gateway.
> >>>
> >>> Generally, I'd like to see some way for the server to be able to limit
> >> the
> >>> number of requests it receives.  If that really needs to be done by a
> >> proxy
> >>> in front of the SQL gateway, that is fine as well.  (To be clear, I don't
> >>> think my concern here should be blocking in any way.)
> >>>
> >>> 7.
>  What is the serialization lifecycle for results?
> >>>
> >>> I wonder if two other options are possible:
> >>> 3) Could the Gateway just forward the result byte array?  (Or does the
> >>> Gateway need to deserialize the response in order to understand it for
> >> some
> >>> reason?)
> >>> 4) Could the JobManager prepare the results in JSON?  (Or similarly could
> >>> the Client read the format which the JobManager sends?)
> >>>
> >>> Thanks again!
> >>>
> >>> Cheers,
> >>>
> >>> Jim
> >>>
> >>> On Wed, Nov 

[jira] [Created] (FLINK-30299) TaskManagerRunnerTest fails with 239 exit code (i.e. FatalExitExceptionHandler was called)

2022-12-05 Thread Matthias Pohl (Jira)
Matthias Pohl created FLINK-30299:
-

 Summary: TaskManagerRunnerTest fails with 239 exit code (i.e. 
FatalExitExceptionHandler was called)
 Key: FLINK-30299
 URL: https://issues.apache.org/jira/browse/FLINK-30299
 Project: Flink
  Issue Type: Bug
  Components: Runtime / Coordination
Affects Versions: 1.16.0
Reporter: Matthias Pohl


We're again experiencing the 239 exit code being caused by 
{{FatalExitExceptionHandler}} being called.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-30298) KafkaTableITCase.testStartFromGroupOffsetsNone failed due to timeout

2022-12-05 Thread Matthias Pohl (Jira)
Matthias Pohl created FLINK-30298:
-

 Summary: KafkaTableITCase.testStartFromGroupOffsetsNone failed due 
to timeout
 Key: FLINK-30298
 URL: https://issues.apache.org/jira/browse/FLINK-30298
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Kafka
Affects Versions: 1.17.0
Reporter: Matthias Pohl


KafkaTableITCase.testStartFromGroupOffsetsNone failed due to timeout in the 
following build:
{code:java}
Dec 03 01:07:31 org.assertj.core.error.AssertJMultipleFailuresError: 
Dec 03 01:07:31 
Dec 03 01:07:31 Multiple Failures (1 failure)
Dec 03 01:07:31 -- failure 1 --
Dec 03 01:07:31 [Any cause is instance of class 'class 
org.apache.kafka.clients.consumer.NoOffsetForPartitionException'] 
Dec 03 01:07:31 Expecting any element of:
Dec 03 01:07:31   [java.lang.IllegalStateException: Fail to create topic 
[groupOffset_json_dc640086-d1f1-48b8-ad7a-f83d33b6a03c partitions: 4 
replication factor: 1].
Dec 03 01:07:31 at 
org.apache.flink.streaming.connectors.kafka.table.KafkaTableTestBase.createTestTopic(KafkaTableTestBase.java:143)
Dec 03 01:07:31 at 
org.apache.flink.streaming.connectors.kafka.table.KafkaTableITCase.startFromGroupOffset(KafkaTableITCase.java:881)
Dec 03 01:07:31 at 
org.apache.flink.streaming.connectors.kafka.table.KafkaTableITCase.testStartFromGroupOffsetsWithNoneResetStrategy(KafkaTableITCase.java:981)
Dec 03 01:07:31 ...(64 remaining lines not displayed - this can be 
changed with Assertions.setMaxStackTraceElementsDisplayed),
Dec 03 01:07:31 java.util.concurrent.ExecutionException: 
org.apache.kafka.common.errors.TimeoutException: The request timed out.
Dec 03 01:07:31 at 
java.base/java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:395)
Dec 03 01:07:31 at 
java.base/java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1999)
Dec 03 01:07:31 at 
org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:165)
Dec 03 01:07:31 ...(67 remaining lines not displayed - this can be 
changed with Assertions.setMaxStackTraceElementsDisplayed),
Dec 03 01:07:31 org.apache.kafka.common.errors.TimeoutException: The 
request timed out.
Dec 03 01:07:31 ]
Dec 03 01:07:31 to satisfy the given assertions requirements but none did:
[...] {code}
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=43692&view=logs&j=fa307d6d-91b1-5ab6-d460-ef50f552b1fe&t=21eae189-b04c-5c04-662b-17dc80ffc83a&l=37708



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-30297) Alibaba001 not responding

2022-12-05 Thread Matthias Pohl (Jira)
Matthias Pohl created FLINK-30297:
-

 Summary: Alibaba001 not responding
 Key: FLINK-30297
 URL: https://issues.apache.org/jira/browse/FLINK-30297
 Project: Flink
  Issue Type: Bug
  Components: Build System / Azure Pipelines, Test Infrastructure
Affects Versions: 1.15.3, 1.16.0, 1.17.0
Reporter: Matthias Pohl


{{Alibaba001}} seems to be corrupted causing build failures like [that 
one|https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=43692&view=logs&j=4eda0b4a-bd0d-521a-0916-8285b9be9bb5]:
{code}
##[error]We stopped hearing from agent AlibabaCI001-agent01. Verify the agent 
machine is running and has a healthy network connection. Anything that 
terminates an agent process, starves it for CPU, or blocks its network access 
can cause this error. For more information, see: 
https://go.microsoft.com/fwlink/?linkid=846610
Agent: AlibabaCI001-agent01
Started: Sat at 1:48 AM
Duration: 1h 3m 27s
{code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-30296) In the situation of flink on yarn, how to control the number of TM in each nodemanager?

2022-12-05 Thread StarBoy1005 (Jira)
StarBoy1005 created FLINK-30296:
---

 Summary: In the situation of flink on yarn, how to control the 
number of TM in each nodemanager?
 Key: FLINK-30296
 URL: https://issues.apache.org/jira/browse/FLINK-30296
 Project: Flink
  Issue Type: Improvement
  Components: Table SQL / API
Reporter: StarBoy1005


Help: I use the sql-client to execute a batch query from Hive as a YARN job with 
parallelism=450 and taskmanager.numberOfTaskSlots=1, but the 450 TaskManagers are not 
evenly deployed across the 15 NodeManagers (hosts); some NodeManagers have 28 and some have 34. 
I'm sure the load average and other resources of the 15 hosts are exactly the same. 
I want to know whether there is a config that can control the maximum number of 
TaskManagers per NodeManager.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-30295) Develop MariaDbCatalog to connect Flink with MariaDb tables and ecosystem

2022-12-05 Thread Samrat Deb (Jira)
Samrat Deb created FLINK-30295:
--

 Summary: Develop MariaDbCatalog to connect Flink with MariaDb 
tables and ecosystem
 Key: FLINK-30295
 URL: https://issues.apache.org/jira/browse/FLINK-30295
 Project: Flink
  Issue Type: New Feature
  Components: Connectors / JDBC
Reporter: Samrat Deb






--
This message was sent by Atlassian Jira
(v8.20.10#820010)