Re: Contributor

2020-04-25 Thread Benchao Li
Hi,

Welcome to the community!
There is no separate contributor permission anymore; you can simply create the
issue and ping a committer to ask to be assigned.
BTW, there is a document about how to contribute[1].

[1] https://flink.apache.org/contributing/how-to-contribute.html

On Sat, Apr 25, 2020 at 11:31 AM, 流动的联系 <1060591...@qq.com> wrote:

> Hi Guys,
>
>
> I want to contribute to Apache Flink.
> Would you please give me the permission as a contributor?
> My JIRA ID is ziping.



-- 

Benchao Li
School of Electronics Engineering and Computer Science, Peking University
Tel:+86-15650713730
Email: libenc...@gmail.com; libenc...@pku.edu.cn


How to reproduce the issue locally

2020-04-25 Thread Manish G
Hi,

While working on an issue, is there a specific approach to quickly
reproduce the issue locally?

With regards
Manish


[ANNOUNCE] Apache Flink 1.9.3 released

2020-04-25 Thread Dian Fu
Hi everyone,

The Apache Flink community is very happy to announce the release of Apache
Flink 1.9.3, which is the third bugfix release for the Apache Flink 1.9
series.

Apache Flink® is an open-source stream processing framework for
distributed, high-performing, always-available, and accurate data streaming
applications.

The release is available for download at:
https://flink.apache.org/downloads.html

Please check out the release blog post for an overview of the improvements
for this bugfix release:
https://flink.apache.org/news/2020/04/24/release-1.9.3.html

The full release notes are available in Jira:
https://issues.apache.org/jira/projects/FLINK/versions/12346867

We would like to thank all contributors of the Apache Flink community who
made this release possible!
Also great thanks to @Jincheng for helping finalize this release.

Regards,
Dian


[jira] [Created] (FLINK-17377) support variable substitution

2020-04-25 Thread Guangbin Zhu (Jira)
Guangbin Zhu created FLINK-17377:


 Summary: support variable substitution
 Key: FLINK-17377
 URL: https://issues.apache.org/jira/browse/FLINK-17377
 Project: Flink
  Issue Type: New Feature
  Components: Table SQL / API, Table SQL / Client
Affects Versions: 1.10.0
Reporter: Guangbin Zhu


Flink SQL does not support variable substitution like Hive does.
{code:sql}
SET k1=v1;
SELECT * FROM my_table WHERE key = ${k1};
{code}
This is a common and very useful feature that Flink should support.
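
As an illustration of the requested behavior (the class below is a hypothetical sketch of Hive-style substitution, not Flink or Hive source code), a SQL client could expand ${...} references from previous SET statements before handing the statement to the parser:

{code:java}
import java.util.HashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

/** Hypothetical sketch of Hive-style ${var} substitution for SQL statements. */
public class VariableSubstitutor {

    private static final Pattern VAR = Pattern.compile("\\$\\{([^}]+)\\}");

    private final Map<String, String> vars = new HashMap<>();

    /** Stores a variable, e.g. parsed from a "SET k1=v1;" statement. */
    public void set(String key, String value) {
        vars.put(key, value);
    }

    /** Replaces every ${name} occurrence with its stored value. */
    public String substitute(String sql) {
        Matcher m = VAR.matcher(sql);
        StringBuffer out = new StringBuffer();
        while (m.find()) {
            String value = vars.get(m.group(1));
            if (value == null) {
                throw new IllegalArgumentException("Undefined variable: " + m.group(1));
            }
            m.appendReplacement(out, Matcher.quoteReplacement(value));
        }
        m.appendTail(out);
        return out.toString();
    }
}
{code}

With {{set("k1", "v1")}}, the query above would reach the parser as {{SELECT * FROM my_table WHERE key = v1}}.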

 





Re: New contributor

2020-04-25 Thread Hequn Cheng
Welcome, Etienne :)

Best,
Hequn

On Fri, Apr 24, 2020 at 10:04 PM Etienne Chauchot 
wrote:

> Hi Till,
>
> Looking forward too ...
>
> Thanks
>
> Etienne
>
> On 24/04/2020 15:09, Till Rohrmann wrote:
> > Hi Etienne,
> >
> > welcome to the Flink community. Looking forward to working with you on
> > Flink :-)
> >
> > Cheers,
> > Till
> >
> > On Fri, Apr 24, 2020 at 11:20 AM Etienne Chauchot 
> > wrote:
> >
> >> Hi everyone,
> >>
> >> Let me introduce myself, I'm Etienne Chauchot, I'm an Apache Beam
> >> committer and a PMC member and I would like to start working on Flink as
> >> well.
> >>
> So far I have only done 3 simple Flink PRs, but that will grow :)
> >>
> >> https://github.com/apache/flink/pull/11886
> >>
> >> https://github.com/apache/flink/pull/11740
> >>
> >> https://github.com/apache/flink/pull/11703
> >>
> >> Here is a link to my blog: https://echauchot.blogspot.com/
> >>
> >> Best.
> >>
> >> Etienne
> >>
> >>
>


Re: New contributor

2020-04-25 Thread Forward Xu
welcome, Etienne

Best,
Forward

On Sat, Apr 25, 2020 at 8:23 PM Hequn Cheng wrote:

> Welcome, Etienne :)
>
> Best,
> Hequn
>
> On Fri, Apr 24, 2020 at 10:04 PM Etienne Chauchot 
> wrote:
>
> > Hi Till,
> >
> > Looking forward too ...
> >
> > Thanks
> >
> > Etienne
> >
> > On 24/04/2020 15:09, Till Rohrmann wrote:
> > > Hi Etienne,
> > >
> > > welcome to the Flink community. Looking forward to working with you on
> > > Flink :-)
> > >
> > > Cheers,
> > > Till
> > >
> > > On Fri, Apr 24, 2020 at 11:20 AM Etienne Chauchot <
> echauc...@apache.org>
> > > wrote:
> > >
> > >> Hi everyone,
> > >>
> > >> Let me introduce myself, I'm Etienne Chauchot, I'm an Apache Beam
> > >> committer and a PMC member and I would like to start working on Flink
> as
> > >> well.
> > >>
> > >> So far I have only done 3 simple Flink PRs, but that will grow :)
> > >>
> > >> https://github.com/apache/flink/pull/11886
> > >>
> > >> https://github.com/apache/flink/pull/11740
> > >>
> > >> https://github.com/apache/flink/pull/11703
> > >>
> > >> Here is a link to my blog: https://echauchot.blogspot.com/
> > >>
> > >> Best.
> > >>
> > >> Etienne
> > >>
> > >>
> >
>


[jira] [Created] (FLINK-17378) KafkaProducerExactlyOnceITCase>KafkaProducerTestBase.testExactlyOnceCustomOperator unstable

2020-04-25 Thread Robert Metzger (Jira)
Robert Metzger created FLINK-17378:
--

 Summary: 
KafkaProducerExactlyOnceITCase>KafkaProducerTestBase.testExactlyOnceCustomOperator
 unstable
 Key: FLINK-17378
 URL: https://issues.apache.org/jira/browse/FLINK-17378
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Kafka
Affects Versions: 1.11.0
Reporter: Robert Metzger


CI run: 
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=221&view=logs&j=c5f0071e-1851-543e-9a45-9ac140befc32&t=684b1416-4c17-504e-d5ab-97ee44e08a20

{code}
2020-04-25T00:41:01.4191956Z 00:41:01,418 [Source: Custom Source -> Map -> Sink: Unnamed (1/1)] INFO  org.apache.flink.streaming.connectors.kafka.internal.FlinkKafkaInternalProducer [] - Flushing new partitions
2020-04-25T00:41:01.4194268Z 00:41:01,418 [FailingIdentityMapper Status Printer] INFO  org.apache.flink.streaming.connectors.kafka.testutils.FailingIdentityMapper [] - > Failing mapper  0: count=690, totalCount=1000
2020-04-25T00:41:01.4589519Z org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
2020-04-25T00:41:01.4590089Z     at org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:147)
2020-04-25T00:41:01.4590748Z     at org.apache.flink.runtime.minicluster.MiniCluster.executeJobBlocking(MiniCluster.java:659)
2020-04-25T00:41:01.4591524Z     at org.apache.flink.streaming.util.TestStreamEnvironment.execute(TestStreamEnvironment.java:77)
2020-04-25T00:41:01.4592062Z     at org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1643)
2020-04-25T00:41:01.4592597Z     at org.apache.flink.test.util.TestUtils.tryExecute(TestUtils.java:35)
2020-04-25T00:41:01.4593092Z     at org.apache.flink.streaming.connectors.kafka.KafkaProducerTestBase.testExactlyOnce(KafkaProducerTestBase.java:370)
2020-04-25T00:41:01.4593680Z     at org.apache.flink.streaming.connectors.kafka.KafkaProducerTestBase.testExactlyOnceCustomOperator(KafkaProducerTestBase.java:317)
2020-04-25T00:41:01.4594450Z     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
2020-04-25T00:41:01.4595076Z     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
2020-04-25T00:41:01.4595794Z     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
2020-04-25T00:41:01.4596622Z     at java.lang.reflect.Method.invoke(Method.java:498)
2020-04-25T00:41:01.4597501Z     at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
2020-04-25T00:41:01.4598396Z     at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
2020-04-25T00:41:01.460Z     at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
2020-04-25T00:41:01.4603082Z     at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
2020-04-25T00:41:01.4604023Z     at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
2020-04-25T00:41:01.4604590Z     at org.junit.rules.RunRules.evaluate(RunRules.java:20)
2020-04-25T00:41:01.4605225Z     at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
2020-04-25T00:41:01.4605902Z     at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
2020-04-25T00:41:01.4606591Z     at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
2020-04-25T00:41:01.4607468Z     at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
2020-04-25T00:41:01.4608577Z     at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
2020-04-25T00:41:01.4609030Z     at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
2020-04-25T00:41:01.4609460Z     at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
2020-04-25T00:41:01.4609842Z     at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
2020-04-25T00:41:01.4610270Z     at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
2020-04-25T00:41:01.4610727Z     at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
2020-04-25T00:41:01.4611147Z     at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
2020-04-25T00:41:01.4611628Z     at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
2020-04-25T00:41:01.4612011Z     at org.junit.rules.RunRules.evaluate(RunRules.java:20)
2020-04-25T00:41:01.4612415Z     at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
2020-04-25T00:41:01.4612841Z     at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
2020-04-25T00:41:01.4613325Z     at org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
2020-04-25T00:41:01.4613810Z     at org.apache.maven.surefire.junit4.

Re: [ANNOUNCE] Apache Flink 1.9.3 released

2020-04-25 Thread Hequn Cheng
@Dian, thanks a lot for the release and for being the release manager.
Also thanks to everyone who made this release possible!

Best,
Hequn

On Sat, Apr 25, 2020 at 7:57 PM Dian Fu  wrote:

> Hi everyone,
>
> The Apache Flink community is very happy to announce the release of Apache
> Flink 1.9.3, which is the third bugfix release for the Apache Flink 1.9
> series.
>
> Apache Flink® is an open-source stream processing framework for
> distributed, high-performing, always-available, and accurate data streaming
> applications.
>
> The release is available for download at:
> https://flink.apache.org/downloads.html
>
> Please check out the release blog post for an overview of the improvements
> for this bugfix release:
> https://flink.apache.org/news/2020/04/24/release-1.9.3.html
>
> The full release notes are available in Jira:
> https://issues.apache.org/jira/projects/FLINK/versions/12346867
>
> We would like to thank all contributors of the Apache Flink community who
> made this release possible!
> Also great thanks to @Jincheng for helping finalize this release.
>
> Regards,
> Dian
>


[jira] [Created] (FLINK-17379) testScaleDownBeforeFirstCheckpoint Error reading field 'api_versions': Error reading field 'max_version':

2020-04-25 Thread Robert Metzger (Jira)
Robert Metzger created FLINK-17379:
--

 Summary: testScaleDownBeforeFirstCheckpoint Error reading field 
'api_versions': Error reading field 'max_version': 
 Key: FLINK-17379
 URL: https://issues.apache.org/jira/browse/FLINK-17379
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Kafka, Tests
Affects Versions: 1.11.0
Reporter: Robert Metzger


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=145&view=logs&j=d44f43ce-542c-597d-bf94-b0718c71e5e8&t=34f486e1-e1e4-5dd2-9c06-bfdd9b9c74a8

{code}
2020-04-23T19:46:02.5792736Z [ERROR] Tests run: 12, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 141.725 s <<< FAILURE! - in org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer011ITCase
2020-04-23T19:46:02.5794014Z [ERROR] testScaleDownBeforeFirstCheckpoint(org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer011ITCase)  Time elapsed: 15.209 s  <<< ERROR!
2020-04-23T19:46:02.5795134Z org.apache.kafka.common.protocol.types.SchemaException: Error reading field 'api_versions': Error reading field 'max_version': java.nio.BufferUnderflowException
2020-04-23T19:46:02.5795722Z     at org.apache.kafka.common.protocol.types.Schema.read(Schema.java:75)
2020-04-23T19:46:02.5796132Z     at org.apache.kafka.common.protocol.ApiKeys.parseResponse(ApiKeys.java:163)
2020-04-23T19:46:02.5796569Z     at org.apache.kafka.common.protocol.ApiKeys$1.parseResponse(ApiKeys.java:54)
2020-04-23T19:46:02.5797065Z     at org.apache.kafka.clients.NetworkClient.parseStructMaybeUpdateThrottleTimeMetrics(NetworkClient.java:560)
2020-04-23T19:46:02.5797567Z     at org.apache.kafka.clients.NetworkClient.handleCompletedReceives(NetworkClient.java:657)
2020-04-23T19:46:02.5798020Z     at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:442)
2020-04-23T19:46:02.5798486Z     at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:232)
2020-04-23T19:46:02.5799025Z     at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:208)
2020-04-23T19:46:02.5799621Z     at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:199)
2020-04-23T19:46:02.5800166Z     at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.awaitMetadataUpdate(ConsumerNetworkClient.java:134)
2020-04-23T19:46:02.5800718Z     at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:309)
2020-04-23T19:46:02.5801267Z     at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1078)
2020-04-23T19:46:02.5801726Z     at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1043)
2020-04-23T19:46:02.5802263Z     at org.apache.flink.streaming.connectors.kafka.KafkaTestEnvironmentImpl.getAllRecordsFromTopic(KafkaTestEnvironmentImpl.java:257)
2020-04-23T19:46:02.5802990Z     at org.apache.flink.streaming.connectors.kafka.KafkaTestBase.assertExactlyOnceForTopic(KafkaTestBase.java:274)
2020-04-23T19:46:02.5803551Z     at org.apache.flink.streaming.connectors.kafka.KafkaTestBase.assertExactlyOnceForTopic(KafkaTestBase.java:248)
2020-04-23T19:46:02.5804170Z     at org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer011ITCase.testScaleDownBeforeFirstCheckpoint(FlinkKafkaProducer011ITCase.java:366)
2020-04-23T19:46:02.5804804Z     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
2020-04-23T19:46:02.5805244Z     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
2020-04-23T19:46:02.5805711Z     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
2020-04-23T19:46:02.5806109Z     at java.lang.reflect.Method.invoke(Method.java:498)
2020-04-23T19:46:02.5806516Z     at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
2020-04-23T19:46:02.5806977Z     at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
2020-04-23T19:46:02.5807443Z     at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
2020-04-23T19:46:02.5807908Z     at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
2020-04-23T19:46:02.5808346Z     at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
2020-04-23T19:46:02.5808757Z     at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
2020-04-23T19:46:02.5809125Z     at org.junit.rules.RunRules.evaluate(RunRules.java:20)
2020-04-23T19:46:02.5809541Z     at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
2020-04-23T19:46:02.5809966Z     at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
2020-04-23T19:46:02.5810405Z     at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4

[jira] [Created] (FLINK-17380) runAllDeletesTest: "Memory records is not writable"

2020-04-25 Thread Robert Metzger (Jira)
Robert Metzger created FLINK-17380:
--

 Summary: runAllDeletesTest: "Memory records is not writable"
 Key: FLINK-17380
 URL: https://issues.apache.org/jira/browse/FLINK-17380
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Kafka
Affects Versions: 1.10.0
Reporter: Robert Metzger


CI Run: https://travis-ci.org/github/apache/flink/jobs/678707334

{code}
23:27:15,112 [main] ERROR org.apache.flink.streaming.connectors.kafka.Kafka09ITCase -

Test testAllDeletes(org.apache.flink.streaming.connectors.kafka.Kafka09ITCase) failed with:
org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
    at org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:147)
    at org.apache.flink.runtime.minicluster.MiniCluster.executeJobBlocking(MiniCluster.java:648)
    at org.apache.flink.streaming.util.TestStreamEnvironment.execute(TestStreamEnvironment.java:77)
    at org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1620)
    at org.apache.flink.streaming.connectors.kafka.KafkaConsumerTestBase.runAllDeletesTest(KafkaConsumerTestBase.java:1475)
    at org.apache.flink.streaming.connectors.kafka.Kafka09ITCase.testAllDeletes(Kafka09ITCase.java:117)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.base/java.lang.reflect.Method.invoke(Method.java:566)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
    at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
    at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
    at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
    at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
    at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: org.apache.flink.runtime.JobException: Recovery is suppressed by NoRestartBackoffTimeStrategy
    at org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.handleFailure(ExecutionFailureHandler.java:110)
    at org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.getFailureHandlingResult(ExecutionFailureHandler.java:76)
    at org.apache.flink.runtime.scheduler.DefaultScheduler.handleTaskFailure(DefaultScheduler.java:192)
    at org.apache.flink.runtime.scheduler.DefaultScheduler.maybeHandleTaskFailure(DefaultScheduler.java:186)
    at org.apache.flink.runtime.scheduler.DefaultScheduler.updateTaskExecutionStateInternal(DefaultScheduler.java:180)
    at org.apache.flink.runtime.scheduler.SchedulerBase.updateTaskExecutionState(SchedulerBase.java:496)
    at org.apache.flink.runtime.jobmaster.JobMaster.updateTaskExecutionState(JobMaster.java:380)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.base/java.lang.reflect.Method.invoke(Method.java:566)
    at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcInvocation(AkkaRpcActor.java:284)
    at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:199)
    at org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:74)
    at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:152)
    at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:26)
    at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:21)
    at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:123)
    at akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:21)
    at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:170)
    at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)
    at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)
    at akka.actor.Actor$class.aroundReceive(Actor.scala:51

Unable to self-assign issue

2020-04-25 Thread Manish G
Hi,

I want to assign issue 17376 to myself, but it seems I don't have the
rights to do it.

Manish


[jira] [Created] (FLINK-17381) travis_controller prints wrong mvn version

2020-04-25 Thread Benchao Li (Jira)
Benchao Li created FLINK-17381:
--

 Summary: travis_controller prints wrong mvn version
 Key: FLINK-17381
 URL: https://issues.apache.org/jira/browse/FLINK-17381
 Project: Flink
  Issue Type: Improvement
  Components: Build System
Reporter: Benchao Li


We use `mvn -version` in `travis_controller.sh`, where it should be `run_mvn
-version`. This doesn't affect the build result; it just prints the wrong
Maven version information.

cc [~chesnay]





Re: [DISCUSS] Removing deprecated state methods in 1.11

2020-04-25 Thread Yu Li
+1 (belated). Removing long-deprecated code helps keep the code base clean
and reduces the maintenance effort.

Best Regards,
Yu


On Sat, 25 Apr 2020 at 00:45, Stephan Ewen  wrote:

> I'll turn this into a ticket for 1.11 then.
>
> Thanks!
>
> On Thu, Apr 23, 2020 at 4:13 PM Aljoscha Krettek 
> wrote:
>
> > Definitely +1! I'm always game for decreasing the API surface if it
> > doesn't decrease functionality.
> >
> > Aljoscha
> >
> > On 23.04.20 14:18, DONG, Weike wrote:
> > > Hi Stephan,
> > >
> > > +1 for the removal, as there are so many deprecated methods scattered
> > > around, making APIs a little bit messy and confusing.
> > >
> > > Best,
> > > Weike
> > >
> > > On Thu, Apr 23, 2020 at 8:07 PM Stephan Ewen wrote:
> > >
> > >> Hi all!
> > >>
> > >> There are a bunch of deprecated methods for state access:
> > >>
> > >>- RuntimeContext.getFoldingState(...)
> > >>- OperatorStateStore.getSerializableListState(...)
> > >>- OperatorStateStore.getOperatorState(...)
> > >>
> > >> All of them have been deprecated for around three years now. All have
> > good
> > >> alternatives.
> > >>
> > >> Time to remove them in 1.11?
> > >>
> > >> Best,
> > >> Stephan
> > >>
> > >
> >
> >
>
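
For reference, the non-deprecated replacements look roughly like this hedged sketch (AggregatingState supersedes FoldingState, and getListState with an explicit descriptor supersedes the untyped getSerializableListState/getOperatorState getters); all descriptor names below are made up:

{code:java}
import org.apache.flink.api.common.functions.AggregateFunction;
import org.apache.flink.api.common.state.AggregatingStateDescriptor;
import org.apache.flink.api.common.state.ListState;
import org.apache.flink.api.common.state.ListStateDescriptor;
import org.apache.flink.api.common.state.OperatorStateStore;
import org.apache.flink.api.common.typeinfo.Types;

public class StateAlternatives {

    // Instead of RuntimeContext.getFoldingState(...): AggregatingState, whose
    // AggregateFunction cleanly separates input, accumulator and output types.
    static final AggregatingStateDescriptor<Long, Long, Long> SUM_DESC =
            new AggregatingStateDescriptor<>(
                    "sum",
                    new AggregateFunction<Long, Long, Long>() {
                        @Override public Long createAccumulator() { return 0L; }
                        @Override public Long add(Long value, Long acc) { return acc + value; }
                        @Override public Long getResult(Long acc) { return acc; }
                        @Override public Long merge(Long a, Long b) { return a + b; }
                    },
                    Types.LONG);

    // Instead of OperatorStateStore.getSerializableListState(...) and
    // getOperatorState(...): an explicit descriptor passed to getListState,
    // so the serializer is declared up front instead of defaulting to Java
    // serialization. The store comes from initializeState(...).
    static ListState<Long> buffered(OperatorStateStore store) throws Exception {
        return store.getListState(new ListStateDescriptor<>("buffered-elements", Types.LONG));
    }
}
{code}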


[jira] [Created] (FLINK-17382) Flink should support dynamic log level setting

2020-04-25 Thread Xingxing Di (Jira)
Xingxing Di created FLINK-17382:
---

 Summary: Flink should support dynamic log level setting
 Key: FLINK-17382
 URL: https://issues.apache.org/jira/browse/FLINK-17382
 Project: Flink
  Issue Type: New Feature
  Components: Runtime / REST, Runtime / Web Frontend
Affects Versions: 1.9.2, 1.7.2
Reporter: Xingxing Di


Flink should support dynamically changing the log level at runtime; currently
we cannot do that through the Flink UI.

 

 





Re: Unable to self-assign issue

2020-04-25 Thread Yu Li
Hi Manish,

Thanks for being interested in contributing to Flink!

It is our bylaw that only committers can assign JIRA tickets [1], and you
can find the reasoning behind it in this ML discussion [2].

For FLINK-17376, since there's already a discussion and conclusion in the ML
[3], I think it's OK to start preparing a PR for it. However, since Stephan
is driving this work, I'd like to double-check with him whether he is
already working on the PR before assigning it to you; I hope you can
understand.

I've already left a message in the JIRA and will help review the PR when
there is one. Let's follow this up in JIRA.

Best Regards,
Yu

[1] https://flink.apache.org/contributing/contribute-code.html
[2]
http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-A-more-restrictive-JIRA-workflow-td27344.html
[3]
http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-Removing-deprecated-state-methods-in-1-11-td40651.html

On Sat, 25 Apr 2020 at 20:45, Manish G  wrote:

> Hi,
>
> I want to assign issue 17376 to myself, but it seems I don't have the
> rights to do it.
>
> Manish
>


Re: [DISCUSS] Exact feature freeze date

2020-04-25 Thread Yu Li
+1 for extending the feature freeze to May 15th.

Best Regards,
Yu


On Fri, 24 Apr 2020 at 14:43, Yuan Mei  wrote:

> +1
>
> On Thu, Apr 23, 2020 at 4:10 PM Stephan Ewen  wrote:
>
> > Hi all!
> >
> > I want to bring up a discussion about when we want to do the feature
> freeze
> > for 1.11.
> >
> > When kicking off the release cycle, we tentatively set the date to end of
> > April, which would be in one week.
> >
> > I can say from the features I am involved with (FLIP-27, FLIP-115,
> > reviewing some state backend improvements, etc.) that it would be helpful
> > to have two additional weeks.
> >
> > When looking at various other feature threads, my feeling is that there
> are
> > more contributors and committers that could use a few more days.
> > The last two months were quite exceptional, and we did lose a bit of
> > development speed here and there.
> >
> > How do you think about making *May 15th* the feature freeze?
> >
> > Best,
> > Stephan
> >
>


Re: [DISCUSS] FLIP-36 - Support Interactive Programming in Flink Table API

2020-04-25 Thread Xuannan Su
Hi Becket,

You are right. It makes sense to treat the retry of job 2 as an ordinary job,
and the config does introduce some unnecessary confusion. Thank you for your
comment. I will update the FLIP.

Best,
Xuannan
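
For context, the workflow under discussion looks roughly like the sketch below. cache() and invalidateCache() are FLIP-36's proposed additions, not released API, and the table names and aggregation are made up:

{code:java}
import static org.apache.flink.table.api.Expressions.$;

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.TableEnvironment;

public class CacheSketch {
    public static void main(String[] args) {
        TableEnvironment tEnv =
                TableEnvironment.create(EnvironmentSettings.newInstance().build());

        // Job 1: the job that first computes the table also materializes
        // ("caches") its result as an intermediate result in the cluster.
        Table preprocessed = tEnv.from("Orders").filter($("amount").isGreater(100));
        Table cached = preprocessed.cache(); // proposed FLIP-36 API

        // Job 2: later jobs built on `cached` read the intermediate result.
        // If it is gone (e.g. after a TM failure), job 2 falls back to the
        // full original DAG and re-creates the cache as a byproduct -- the
        // conclusion reached in this thread.
        cached.groupBy($("user"))
                .select($("user"), $("amount").sum().as("total"))
                .execute();

        // Explicitly releases the intermediate result.
        cached.invalidateCache(); // proposed FLIP-36 API
    }
}
{code}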

On Sat, Apr 25, 2020 at 7:44 AM Becket Qin  wrote:

> Hi Xuannan,
>
> Suppose a user submits job 1, which generates a cached intermediate result,
> and later submits job 2, which should ideally use that intermediate result.
> In that case, if job 2 fails due to the missing intermediate result, job 2
> should be retried with its full DAG. When job 2 then runs, it will also
> re-generate the cache. However, once job 2 has fallen back to the original
> DAG, should it just be treated as an ordinary job that follows the recovery
> strategy? Having a separate configuration seems a little confusing. In
> other words, re-generating the cache is just a byproduct of running the
> full DAG of job 2, not its main purpose. It is just like when job 1 runs to
> generate the cache: it does not have a separate retry config to make sure
> the cache is generated. If it fails, it just fails like
> an ordinary job.
>
> What do you think?
>
> Thanks,
>
> Jiangjie (Becket) Qin
>
> On Fri, Apr 24, 2020 at 5:00 PM Xuannan Su  wrote:
>
> > Hi Becket,
> >
> > The intermediate result will indeed be automatically re-generated by
> > resubmitting the original DAG. And that job could fail as well. In that
> > case, we need to decide if we should resubmit the original DAG to
> > re-generate the intermediate result or give up and throw an exception to
> > the user. And the config indicates how many resubmits should happen
> > before giving up.
> >
> > Thanks,
> > Xuannan
> >
> > On Fri, Apr 24, 2020 at 4:19 PM Becket Qin  wrote:
> >
> > > Hi Xuannan,
> > >
> > > > I am not entirely sure if I understand the cases you mentioned. The
> > > > users can use the cached table object returned by the .cache() method
> > > > in another job and it should read the intermediate result. The
> > > > intermediate result can be gone in the following three cases: 1. the
> > > > user explicitly calls the invalidateCache() method, 2. the
> > > > TableEnvironment is closed, 3. a failure happens on the TM. When that
> > > > happens, the intermediate result will not be available unless it is
> > > > re-generated.
> > >
> > >
> > > What confused me was why we need to have a *cache.retries.max* config.
> > > Shouldn't the missing intermediate result always be automatically
> > > re-generated if it is gone?
> > >
> > > Thanks,
> > >
> > > Jiangjie (Becket) Qin
> > >
> > >
> > > On Fri, Apr 24, 2020 at 3:59 PM Xuannan Su 
> > wrote:
> > >
> > > > Hi Becket,
> > > >
> > > > Thanks for the comments.
> > > >
> > > > On Fri, Apr 24, 2020 at 9:12 AM Becket Qin 
> > wrote:
> > > >
> > > > > Hi Xuannan,
> > > > >
> > > > > Thanks for picking up the FLIP. It looks good to me overall. Some
> > quick
> > > > > comments / questions below:
> > > > >
> > > > > 1. Do we also need changes in the Java API?
> > > > >
> > > >
> > > > Yes, the public interface changes of Table and TableEnvironment
> > > > should also be made in the Java API.
> > > >
> > > >
> > > > > 2. What are the cases that users may want to retry reading the
> > > > > intermediate result? It seems that once the intermediate result has
> > > > > gone, it will not be available later without being generated again,
> > > > > right?
> > > > >
> > > >
> > > > I am not entirely sure if I understand the cases you mentioned. The
> > > > users can use the cached table object returned by the .cache() method
> > > > in another job and it should read the intermediate result. The
> > > > intermediate result can be gone in the following three cases: 1. the
> > > > user explicitly calls the invalidateCache() method, 2. the
> > > > TableEnvironment is closed, 3. a failure happens on the TM. When that
> > > > happens, the intermediate result will not be available unless it is
> > > > re-generated.
> > > >
> > > > > 3. In the "semantic of cache() method" section, the description "The
> > > > > semantic of the *cache()* method is a little different depending on
> > > > > whether auto caching is enabled or not." seems not explained.
> > > > >
> > > >
> > > > This line is actually outdated and should be removed, as we are not
> > > > adding the auto caching functionality in this FLIP. Auto caching will
> > > > be added in the future, and the semantics of cache() when auto caching
> > > > is enabled will be discussed in detail in a new FLIP. I will remove the
> > > > description to avoid further confusion.
> > > >
> > > >
> > > > > Thanks,
> > > > >
> > > > > Jiangjie (Becket) Qin
> > > > >
> > > > >
> > > > >
> > > > > On Wed, Apr 22, 2020 at 4:00 PM Xuannan Su 
> > > > wrote:
> > > > >
> > > > > > Hi folks,
> > > > > >
> > > > > > I'd like to start the discussion about FLIP-36 Support
> Interactive
> > > > > > Programming in Flink Table API
> > > > > >
> >

Re: [ANNOUNCE] Apache Flink 1.9.3 released

2020-04-25 Thread jincheng sun
Thanks for your great job, Dian!

Best,
Jincheng


On Sat, Apr 25, 2020 at 8:30 PM Hequn Cheng wrote:

> @Dian, thanks a lot for the release and for being the release manager.
> Also thanks to everyone who made this release possible!
>
> Best,
> Hequn
>
> On Sat, Apr 25, 2020 at 7:57 PM Dian Fu  wrote:
>
>> Hi everyone,
>>
>> The Apache Flink community is very happy to announce the release of
>> Apache Flink 1.9.3, which is the third bugfix release for the Apache Flink
>> 1.9 series.
>>
>> Apache Flink® is an open-source stream processing framework for
>> distributed, high-performing, always-available, and accurate data streaming
>> applications.
>>
>> The release is available for download at:
>> https://flink.apache.org/downloads.html
>>
>> Please check out the release blog post for an overview of the
>> improvements for this bugfix release:
>> https://flink.apache.org/news/2020/04/24/release-1.9.3.html
>>
>> The full release notes are available in Jira:
>> https://issues.apache.org/jira/projects/FLINK/versions/12346867
>>
>> We would like to thank all contributors of the Apache Flink community who
>> made this release possible!
>> Also great thanks to @Jincheng for helping finalize this release.
>>
>> Regards,
>> Dian
>>
>


[jira] [Created] (FLINK-17383) flink legacy planner should not use CollectionEnvironment any more

2020-04-25 Thread godfrey he (Jira)
godfrey he created FLINK-17383:
--

 Summary: flink legacy planner should not use CollectionEnvironment 
any more
 Key: FLINK-17383
 URL: https://issues.apache.org/jira/browse/FLINK-17383
 Project: Flink
  Issue Type: Improvement
  Components: Table SQL / Legacy Planner
Reporter: godfrey he
 Fix For: 1.11.0


As discussed in https://github.com/apache/flink/pull/11794,
{{CollectionEnvironment}} is not a good practice, as it does not go through
all the steps that a regular user program would. We should change the tests
to use {{LocalEnvironment}}.

The commit "Introduce CollectionPipelineExecutor for CollectionEnvironment
([c983ac9|https://github.com/apache/flink/commit/c983ac9c49b7b58394574efdde4f39e8d33a8582])"
should also be reverted at that point.
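
For illustration, the kind of switch this implies in the affected tests (both factory methods are existing DataSet API; the snippet itself is hypothetical):

{code:java}
import org.apache.flink.api.java.ExecutionEnvironment;

public class LocalEnvironmentExample {
    public static void main(String[] args) throws Exception {
        // Before (the pattern this ticket removes): collection execution
        // skips serialization and the regular scheduling/execution stack.
        // ExecutionEnvironment env = ExecutionEnvironment.createCollectionsEnvironment();

        // After: a local environment runs the full pipeline in-process,
        // going through the same steps as a regular user program.
        ExecutionEnvironment env = ExecutionEnvironment.createLocalEnvironment();
        env.fromElements(1, 2, 3).print();
    }
}
{code}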





[jira] [Created] (FLINK-17384) support read hbase conf dir from flink.conf just like hadoop_conf

2020-04-25 Thread jackylau (Jira)
jackylau created FLINK-17384:


 Summary: support read hbase conf dir from flink.conf just like 
hadoop_conf
 Key: FLINK-17384
 URL: https://issues.apache.org/jira/browse/FLINK-17384
 Project: Flink
  Issue Type: Bug
  Components: Connectors / HBase, Deployment / Scripts
Affects Versions: 1.10.0
Reporter: jackylau
 Fix For: 1.11.0


Hi all,

When a user interacts with HBase using SQL, they currently have to do two things:
 # export HBASE_CONF_DIR
 # add the HBase libs to the Flink lib directory (because the HBase connector 
doesn't ship a client jar)

I think this should be optimized.

For 1), we should support reading the HBase conf dir from the Flink 
configuration in config.sh, just like the Hadoop conf.

For 2), we should support HBASE_CLASSPATH in config.sh. In case of jar 
conflicts, such as Guava, we should also provide a flink-hbase-shaded module, 
as is done for Hadoop.





[jira] [Created] (FLINK-17385) Fix precision problem when converting JDBC numeric into Flink decimal type

2020-04-25 Thread Jark Wu (Jira)
Jark Wu created FLINK-17385:
---

 Summary: Fix precision problem when converting JDBC numeric into 
Flink decimal type
 Key: FLINK-17385
 URL: https://issues.apache.org/jira/browse/FLINK-17385
 Project: Flink
  Issue Type: Bug
  Components: Connectors / JDBC, Table SQL / Ecosystem
Reporter: Jark Wu


This is reported in the mailing list: 
http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/JDBC-error-on-numeric-conversion-because-of-DecimalType-MIN-PRECISION-td34668.html
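
For context, Flink's DecimalType only accepts precisions in [1, 38], while JDBC metadata can report values outside that range (e.g. precision 0 for an unconstrained PostgreSQL NUMERIC). A hedged sketch of the kind of defensive mapping involved; the helper and the fallback scale are illustrative, not the committed fix:

{code:java}
import org.apache.flink.table.types.logical.DecimalType;

public class JdbcDecimalMapping {

    /**
     * Illustrative mapping from JDBC NUMERIC metadata to a Flink DecimalType.
     * The DecimalType constructor rejects precisions outside
     * [MIN_PRECISION, MAX_PRECISION] = [1, 38], which is the reported failure.
     */
    public static DecimalType fromJdbcNumeric(int precision, int scale) {
        if (precision < DecimalType.MIN_PRECISION || precision > DecimalType.MAX_PRECISION) {
            // Fallback for unconstrained NUMERIC columns; 38/18 is an
            // arbitrary wide default chosen for this sketch.
            return new DecimalType(DecimalType.MAX_PRECISION, 18);
        }
        return new DecimalType(precision, scale);
    }
}
{code}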







[jira] [Created] (FLINK-17386) Exception in HadoopSecurityContextFactory.createContext while no shaded-hadoop-lib provided.

2020-04-25 Thread Wenlong Lyu (Jira)
Wenlong Lyu created FLINK-17386:
---

 Summary: Exception in HadoopSecurityContextFactory.createContext 
while no shaded-hadoop-lib provided.
 Key: FLINK-17386
 URL: https://issues.apache.org/jira/browse/FLINK-17386
 Project: Flink
  Issue Type: Bug
Reporter: Wenlong Lyu


java.io.IOException: Process execution failed due error. Error output:
java.lang.NoClassDefFoundError: Could not initialize class org.apache.hadoop.security.UserGroupInformation
    at org.apache.flink.runtime.security.contexts.HadoopSecurityContextFactory.createContext(HadoopSecurityContextFactory.java:59)
    at org.apache.flink.runtime.security.SecurityUtils.installContext(SecurityUtils.java:92)
    at org.apache.flink.runtime.security.SecurityUtils.install(SecurityUtils.java:60)
    at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:964)

    at com.alibaba.flink.vvr.util.AutoClosableProcess$AutoClosableProcessBuilder.runBlocking(AutoClosableProcess.java:144)
    at com.alibaba.flink.vvr.util.AutoClosableProcess$AutoClosableProcessBuilder.runBlocking(AutoClosableProcess.java:126)
    at com.alibaba.flink.vvr.VVRCompileTest.runSingleJobCompileCheck(VVRCompileTest.java:173)
    at com.alibaba.flink.vvr.VVRCompileTest.lambda$runJobsCompileCheck$0(VVRCompileTest.java:101)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1147)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:622)
    at java.lang.Thread.run(Thread.java:834)

I think this is because an exception is thrown in the static initializer block
of UserGroupInformation; should we catch Throwable instead of Exception in
HadoopSecurityContextFactory#createContext?
[~rongrong], what do you think?
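
A minimal, self-contained demonstration of why the proposed change matters (generic Java, not Flink source): a failing static initializer surfaces as an Error, which catch (Exception) does not intercept:

{code:java}
public class CatchThrowableDemo {

    static class BrokenInit {
        // Fails during class initialization, like UserGroupInformation above.
        static { if (true) throw new RuntimeException("static init failed"); }
        static void touch() {}
    }

    public static void main(String[] args) {
        try {
            BrokenInit.touch(); // triggers ExceptionInInitializerError
        } catch (Exception e) {
            System.out.println("never reached: Errors are not Exceptions");
        } catch (Throwable t) {
            // Only this branch catches it -- hence the suggestion to catch
            // Throwable in HadoopSecurityContextFactory#createContext.
            System.out.println("caught: " + t);
        }
    }
}
{code}

Subsequent accesses to such a broken class throw the NoClassDefFoundError ("Could not initialize class ...") seen in the log above.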





Integration of DataSketches into Flink

2020-04-25 Thread leerho
Hello All,

I am a committer on DataSketches.apache.org and am just learning about Flink.
Since Flink is designed for stateful stream processing, I would think it would
make sense to have the DataSketches library integrated into its core, so that
all users of Flink could take advantage of these advanced streaming
algorithms. If there is interest in the Flink community in this capability,
please contact us at d...@datasketches.apache.org or on our
datasketches-dev Slack channel.
Cheers,
Lee.