[GitHub] [flink] flinkbot edited a comment on pull request #14868: [FLINK-21326][runtime] Optimize building topology when initializing ExecutionGraph

2021-03-02 Thread GitBox


flinkbot edited a comment on pull request #14868:
URL: https://github.com/apache/flink/pull/14868#issuecomment-773192044


   
   ## CI report:
   
   * 016599e0bd95c07d502810e2e3129e4f6cb82184 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13989)
 
   * db743bed71c9d0577fc64a9ac10178e0d48fca50 UNKNOWN
   
   
   ## Bot commands
   The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #15070: [FLINK-21542][docs] Add documentation for supporting INSERT INTO spec…

2021-03-02 Thread GitBox


flinkbot commented on pull request #15070:
URL: https://github.com/apache/flink/pull/15070#issuecomment-789518377


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 1cc18dd530f68e0a44ffab9a891e7425094123bd (Wed Mar 03 
07:58:20 UTC 2021)
   
   **Warnings:**
* Documentation files were touched, but no `docs/content.zh/` files: Update 
Chinese documentation or file Jira ticket.
* **This pull request references an unassigned [Jira 
ticket](https://issues.apache.org/jira/browse/FLINK-21542).** According to the 
[code contribution 
guide](https://flink.apache.org/contributing/contribute-code.html), tickets 
need to be assigned before starting with the implementation work.
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
   The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

   ## Bot commands
   The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-21542) Add documentation for supporting INSERT INTO specific columns

2021-03-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21542?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-21542:
---
Labels: pull-request-available  (was: )

> Add documentation for supporting INSERT INTO specific columns
> -
>
> Key: FLINK-21542
> URL: https://issues.apache.org/jira/browse/FLINK-21542
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / API
>Reporter: Jark Wu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.13.0
>
>
> We have supported INSERT INTO specific columns since FLINK-18726, but have 
> not added documentation for it yet. 
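
For reference, a minimal sketch of the feature the documentation will cover 
(the table name and connector are illustrative, not from the ticket):

{code:java}
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class InsertIntoColumnsExample {
    public static void main(String[] args) {
        TableEnvironment tEnv =
                TableEnvironment.create(
                        EnvironmentSettings.newInstance().inStreamingMode().build());

        // Illustrative sink table; column `c` gets no value in the partial insert below.
        tEnv.executeSql(
                "CREATE TABLE target (a INT, b STRING, c DOUBLE) WITH ('connector' = 'blackhole')");

        // FLINK-18726: INSERT INTO may list a subset of the target's columns;
        // the remaining columns are filled with NULL.
        tEnv.executeSql("INSERT INTO target (a, b) VALUES (1, 'flink')");
    }
}
{code}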



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] docete opened a new pull request #15070: [FLINK-21542][docs] Add documentation for supporting INSERT INTO spec…

2021-03-02 Thread GitBox


docete opened a new pull request #15070:
URL: https://github.com/apache/flink/pull/15070


   …ific columns
   
   
   ## What is the purpose of the change
   We have supported INSERT INTO specific columns since 
[FLINK-18726](https://issues.apache.org/jira/browse/FLINK-18726); this PR adds 
documentation for it.
   
   
   ## Brief change log
   
   - 1cc18dd Add documentation for supporting INSERT INTO specific columns
   
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / **no**)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
 - The serializers: (yes / **no** / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / **no** 
/ don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (yes / **no** / don't know)
 - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (**yes** / no)
 - If yes, how is the feature documented? (not applicable / docs / 
**JavaDocs** / not documented)
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-21553) WindowDistinctAggregateITCase#testHopWindow_Cube is unstable

2021-03-02 Thread Dawid Wysakowicz (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-21553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17294354#comment-17294354
 ] 

Dawid Wysakowicz commented on FLINK-21553:
--

Is FLINK-21482 the root cause? The test fails quite frequently. How do you feel 
about reverting the change?

> WindowDistinctAggregateITCase#testHopWindow_Cube is unstable
> 
>
> Key: FLINK-21553
> URL: https://issues.apache.org/jira/browse/FLINK-21553
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Reporter: Jark Wu
>Assignee: Andy
>Priority: Blocker
>  Labels: test-stability
> Fix For: 1.13.0
>
> Attachments: screenshot-1.png
>
>
> See 
> https://dev.azure.com/imjark/Flink/_build/results?buildId=422&view=logs&j=d1352042-8a7d-50b6-3946-a85d176b7981&t=b2322052-d503-5552-81e2-b3a532a1d7e8
>  !screenshot-1.png! 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] tillrohrmann commented on a change in pull request #14662: [FLINK-20675][checkpointing] Ensure asynchronous checkpoint failure could fail the job by default

2021-03-02 Thread GitBox


tillrohrmann commented on a change in pull request #14662:
URL: https://github.com/apache/flink/pull/14662#discussion_r586187938



##
File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/jobmaster/JobMasterTest.java
##
@@ -353,13 +359,17 @@ public void declineCheckpoint(DeclineCheckpoint declineCheckpoint) {
         RpcCheckpointResponder rpcCheckpointResponder =
                 new RpcCheckpointResponder(jobMasterGateway);
         rpcCheckpointResponder.declineCheckpoint(
-                jobGraph.getJobID(), new ExecutionAttemptID(), 1, userException);
+                jobGraph.getJobID(), new ExecutionAttemptID(), 1, checkpointException);
 
         Throwable throwable =
                 declineCheckpointMessageFuture.get(
                         testingTimeout.toMilliseconds(), TimeUnit.MILLISECONDS);
-        assertThat(throwable, instanceOf(SerializedThrowable.class));
-        assertThat(throwable.getMessage(), equalTo(userException.getMessage()));
+        assertThat(throwable, instanceOf(CheckpointException.class));
+        Optional<Throwable> throwableWithMessage =
+                ExceptionUtils.findThrowableWithMessage(throwable, userException.getMessage());
+        assertTrue(throwableWithMessage.isPresent());
+        assertThat(
+                throwableWithMessage.get().getMessage(), equalTo(userException.getMessage()));

Review comment:
   But why do we start a `JobMaster` in order to test this kind of behaviour? 
What we are effectively testing here is that the passed `CheckpointException` 
still contains the cause it was constructed with. The test seems prohibitively 
expensive for such a trivial behavioural test. A lighter-weight alternative is 
sketched below.
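
   A Flink-free sketch of that lighter-weight check (the `RuntimeException` 
below stands in for `CheckpointException`, and the loop mirrors what 
`ExceptionUtils.findThrowableWithMessage` does):

```java
public class CauseChainCheck {
    public static void main(String[] args) {
        Exception userException = new Exception("expected test exception");
        // Stand-in for the CheckpointException wrapping the user exception.
        RuntimeException checkpointException =
                new RuntimeException("checkpoint declined", userException);

        // Walk the cause chain looking for the original message, without
        // starting any runtime components such as a JobMaster.
        boolean found = false;
        for (Throwable t = checkpointException; t != null; t = t.getCause()) {
            if ("expected test exception".equals(t.getMessage())) {
                found = true;
                break;
            }
        }
        System.out.println(found); // prints: true
    }
}
```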





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-21135) Reactive Mode: Change Adaptive Scheduler to set infinite parallelism in JobGraph

2021-03-02 Thread Robert Metzger (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-21135?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17294352#comment-17294352
 ] 

Robert Metzger commented on FLINK-21135:


Yes! The Jira description is actually out of sync with the specification in the 
FLIP, where we define exactly what you said.

> Reactive Mode: Change Adaptive Scheduler to set infinite parallelism in 
> JobGraph
> 
>
> Key: FLINK-21135
> URL: https://issues.apache.org/jira/browse/FLINK-21135
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination
>Reporter: Robert Metzger
>Assignee: Robert Metzger
>Priority: Major
> Fix For: 1.13.0
>
>
> For Reactive Mode, the scheduler needs to change the parallelism and 
> maxParallelism of the submitted job graph to its max value (2^15).
> + check if an unsupported flag is enabled in the submitted jobgraph or 
> configuration (unaligned checkpoints)
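
A rough sketch of that override (the helper method and its placement are 
hypothetical; only the 2^15 bound comes from the ticket):

{code:java}
import org.apache.flink.runtime.jobgraph.JobGraph;
import org.apache.flink.runtime.jobgraph.JobVertex;

public final class ReactiveModeParallelismSketch {

    // Flink's upper bound for maxParallelism: 2^15 = 32768.
    private static final int UPPER_BOUND = 1 << 15;

    /** Pin every vertex of the submitted JobGraph to the maximum parallelism. */
    static void overrideParallelism(JobGraph jobGraph) {
        for (JobVertex vertex : jobGraph.getVertices()) {
            vertex.setParallelism(UPPER_BOUND);
            vertex.setMaxParallelism(UPPER_BOUND);
        }
    }
}
{code}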



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-21550) ZooKeeperHaServicesTest.testSimpleClose fail

2021-03-02 Thread Till Rohrmann (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21550?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Till Rohrmann updated FLINK-21550:
--
Priority: Critical  (was: Major)

> ZooKeeperHaServicesTest.testSimpleClose fail
> 
>
> Key: FLINK-21550
> URL: https://issues.apache.org/jira/browse/FLINK-21550
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.12.3
>Reporter: Guowei Ma
>Priority: Critical
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=13956&view=logs&j=3b6ec2fd-a816-5e75-c775-06fb87cb6670&t=2aff8966-346f-518f-e6ce-de64002a5034
> {code:java}
> [ERROR] 
> testSimpleClose(org.apache.flink.runtime.highavailability.zookeeper.ZooKeeperHaServicesTest)
>  Time elapsed: 9.265 s <<< ERROR! java.util.concurrent.TimeoutException: 
> Listener was not notified about a new leader within 2000ms at 
> org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition(CommonTestUtils.java:151)
>  at 
> org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition(CommonTestUtils.java:136)
>  at 
> org.apache.flink.runtime.leaderelection.TestingRetrievalBase.waitForNewLeader(TestingRetrievalBase.java:53)
>  at 
> org.apache.flink.runtime.highavailability.zookeeper.ZooKeeperHaServicesTest.runCleanupTest(ZooKeeperHaServicesTest.java:195)
>  at 
> org.apache.flink.runtime.highavailability.zookeeper.ZooKeeperHaServicesTest.testSimpleClose(ZooKeeperHaServicesTest.java:100)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) 
> at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55) at 
> org.junit.rules.RunRules.evaluate(RunRules.java:20) at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325) at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>  at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) 
> at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-21577) SimpleType.simpleTypeFrom(...) complains with "Collection is empty"

2021-03-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-21577:
---
Labels: pull-request-available  (was: )

> SimpleType.simpleTypeFrom(...) complains with "Collection is empty"
> ---
>
> Key: FLINK-21577
> URL: https://issues.apache.org/jira/browse/FLINK-21577
> Project: Flink
>  Issue Type: Bug
>  Components: Stateful Functions
>Reporter: Tzu-Li (Gordon) Tai
>Assignee: Tzu-Li (Gordon) Tai
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: statefun-3.0.0
>
>
> This is caused by the {{EnumSet.copyOf}} method call at:
> https://github.com/apache/flink-statefun/blob/master/statefun-sdk-java/src/main/java/org/apache/flink/statefun/sdk/java/types/SimpleType.java#L57
> That expects the collection to be non-empty.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink-statefun] tzulitai opened a new pull request #206: [FLINK-21577] [java] Fix instantiation error with SimpleType.simpleTypeFrom()

2021-03-02 Thread GitBox


tzulitai opened a new pull request #206:
URL: https://github.com/apache/flink-statefun/pull/206


   This was failing with a `Collection is empty` error when attempting to copy 
the enum set of `TypeCharacteristic`s.
   
   I think the only reason we need to do a copy is if users instantiate a 
`SimpleType` directly with the constructor (and not the `simpleTypeFrom` / 
`simpleImmutableTypeFrom` factory methods).
   IMO, we can probably expect users to just use the factory methods and not 
the constructor.
   If they need to do anything more complex, they should just create a `Type` 
subclass directly.
   
   Therefore, this PR makes the constructor private to enforce instantiating a 
`SimpleType` via the factory methods.
   This allows the constructor to be simpler and removes the 
`EnumSet.copyOf` operation.
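
   For reference, a standalone demo of the JDK behaviour behind the failure 
(the enum name is borrowed from the SDK; everything else is plain JDK):

```java
import java.util.Collections;
import java.util.EnumSet;
import java.util.Set;

public class EnumSetCopyDemo {

    enum TypeCharacteristic { IMMUTABLE_VALUES }

    public static void main(String[] args) {
        // EnumSet.copyOf(Collection) infers the enum class from the first
        // element, so an empty non-EnumSet collection throws.
        try {
            EnumSet.copyOf(Collections.<TypeCharacteristic>emptyList());
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage()); // prints: Collection is empty
        }

        // Copying an EnumSet (even an empty one) is safe, as is noneOf:
        Set<TypeCharacteristic> copy = EnumSet.copyOf(EnumSet.noneOf(TypeCharacteristic.class));
        System.out.println(copy.isEmpty()); // prints: true
    }
}
```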



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] wangyang0918 commented on pull request #15056: [FLINK-21128][k8s] Introduce dedicated scripts for native K8s integration

2021-03-02 Thread GitBox


wangyang0918 commented on pull request #15056:
URL: https://github.com/apache/flink/pull/15056#issuecomment-789513555


   Rebased onto the master branch and resolved conflicts.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] wangyang0918 removed a comment on pull request #15056: [FLINK-21128][k8s] Introduce dedicated scripts for native K8s integration

2021-03-02 Thread GitBox


wangyang0918 removed a comment on pull request #15056:
URL: https://github.com/apache/flink/pull/15056#issuecomment-789468060


   Rebased onto the master branch and resolved conflicts.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-21558) DeclarativeSlotManager starts logging "Could not fulfill resource requirements of job xxx"

2021-03-02 Thread Till Rohrmann (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21558?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Till Rohrmann updated FLINK-21558:
--
Parent: FLINK-21075
Issue Type: Sub-task  (was: Bug)

> DeclarativeSlotManager starts logging "Could not fulfill resource 
> requirements of job xxx"
> --
>
> Key: FLINK-21558
> URL: https://issues.apache.org/jira/browse/FLINK-21558
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination
>Affects Versions: 1.13.0
>Reporter: Robert Metzger
>Priority: Critical
> Fix For: 1.13.0
>
>
> While testing the reactive mode, I noticed that my job started normally, but 
> after a few minutes, it started logging this:
> {code}
> 2021-03-02 13:36:48,075 INFO  
> org.apache.flink.runtime.executiongraph.ExecutionGraph   [] - Source: 
> Custom Source -> Timestamps/Watermarks (2/3) 
> (061b652dabc0ecfc83c942ee3e127ecd) switched from DEPLOYING to RUNNING.
> 2021-03-02 13:36:48,076 INFO  
> org.apache.flink.runtime.executiongraph.ExecutionGraph   [] - 
> Window(GlobalWindows(), DeltaTrigger, TimeEvictor, ComparableAggregator, 
> PassThroughWindowFunction) -> Sink: Print to Std. Out (2/3) 
> (6a715e3c70754aafa0b91332b69a736d) switched from DEPLOYING to RUNNING.
> 2021-03-02 13:36:48,077 INFO  
> org.apache.flink.runtime.executiongraph.ExecutionGraph   [] - 
> Window(GlobalWindows(), DeltaTrigger, TimeEvictor, ComparableAggregator, 
> PassThroughWindowFunction) -> Sink: Print to Std. Out (1/3) 
> (8655874da6905d13c01927a282ed2ce0) switched from DEPLOYING to RUNNING.
> 2021-03-02 13:36:48,080 INFO  
> org.apache.flink.runtime.executiongraph.ExecutionGraph   [] - 
> Window(GlobalWindows(), DeltaTrigger, TimeEvictor, ComparableAggregator, 
> PassThroughWindowFunction) -> Sink: Print to Std. Out (3/3) 
> (9514718713ffa453c43a7e7efde9920a) switched from DEPLOYING to RUNNING.
> 2021-03-02 13:40:28,893 WARN  
> org.apache.flink.runtime.resourcemanager.slotmanager.DeclarativeSlotManager 
> [] - Could not fulfill resource requirements of job 
> 1283d12b281c35f33f3602611ef43b35.
> 2021-03-02 13:40:29,474 WARN  
> org.apache.flink.runtime.resourcemanager.slotmanager.DeclarativeSlotManager 
> [] - Could not fulfill resource requirements of job 
> 1283d12b281c35f33f3602611ef43b35.
> 2021-03-02 13:40:29,475 WARN  
> org.apache.flink.runtime.resourcemanager.slotmanager.DeclarativeSlotManager 
> [] - Could not fulfill resource requirements of job 
> 1283d12b281c35f33f3602611ef43b35.
> 2021-03-02 13:40:29,475 WARN  
> org.apache.flink.runtime.resourcemanager.slotmanager.DeclarativeSlotManager 
> [] - Could not fulfill resource requirements of job 
> 1283d12b281c35f33f3602611ef43b35.
> 2021-03-02 13:40:39,495 WARN  
> org.apache.flink.runtime.resourcemanager.slotmanager.DeclarativeSlotManager 
> [] - Could not fulfill resource requirements of job 
> 1283d12b281c35f33f3602611ef43b35.
> 2021-03-02 13:40:39,496 WARN  
> org.apache.flink.runtime.resourcemanager.slotmanager.DeclarativeSlotManager 
> [] - Could not fulfill resource requirements of job 
> 1283d12b281c35f33f3602611ef43b35.
> 2021-03-02 13:40:39,497 WARN  
> org.apache.flink.runtime.resourcemanager.slotmanager.DeclarativeSlotManager 
> [] - Could not fulfill resource requirements of job 
> 1283d12b281c35f33f3602611ef43b35.
> 2021-03-02 13:40:49,518 WARN  
> org.apache.flink.runtime.resourcemanager.slotmanager.DeclarativeSlotManager 
> [] - Could not fulfill resource requirements of job 
> 1283d12b281c35f33f3602611ef43b35.
> 2021-03-02 13:40:49,518 WARN  
> org.apache.flink.runtime.resourcemanager.slotmanager.DeclarativeSlotManager 
> [] - Could not fulfill resource requirements of job 
> 1283d12b281c35f33f3602611ef43b35.
> 2021-03-02 13:40:49,519 WARN  
> org.apache.flink.runtime.resourcemanager.slotmanager.DeclarativeSlotManager 
> [] - Could not fulfill resource requirements of job 
> 1283d12b281c35f33f3602611ef43b35.
> 2021-03-02 13:40:59,536 WARN  
> org.apache.flink.runtime.resourcemanager.slotmanager.DeclarativeSlotManager 
> [] - Could not fulfill resource requirements of job 
> 1283d12b281c35f33f3602611ef43b35.
> 2021-03-02 13:40:59,536 WARN  
> org.apache.flink.runtime.resourcemanager.slotmanager.DeclarativeSlotManager 
> [] - Could not fulfill resource requirements of job 
> 1283d12b281c35f33f3602611ef43b35.
> 2021-03-02 13:40:59,537 WARN  
> org.apache.flink.runtime.resourcemanager.slotmanager.DeclarativeSlotManager 
> [] - Could not fulfill resource requirements of job 
> 1283d12b281c35f33f3602611ef43b35.
> 2021-03-02 13:41:09,556 WARN  
> org.apache.flink.runtime.resourcemanager.slotmanager.DeclarativeSlotManager 
> [] - Could not fulfill resource requirements of job 
> 1283d12b281c35f33f3602611ef43b35.
> 2021-03-02 13:41:09,557 WARN  
> 

[jira] [Commented] (FLINK-21436) Speed ​​up the restore of UnionListState

2021-03-02 Thread fanrui (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-21436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17294349#comment-17294349
 ] 

fanrui commented on FLINK-21436:


h1. Some test results from increasing the `state.backend.fs.memory-threshold` configuration


With a source parallelism of 2000 and 2000 Kafka partitions, I tried increasing 
`state.backend.fs.memory-threshold` to 20K, so that the offset state is sent to 
the JM through ByteStreamStateHandle, reducing the number of HDFS files.

Unfortunately, after trying many times, the restore is still unsuccessful. The 
JM memory is 30G, and the heap memory is 22G.

The JM GC pressure was very high during restore, resulting in a maximum CPU 
usage of over 3000% and an average CPU usage of over 500%.

The failure is usually caused by:
{code}
akka.pattern.AskTimeoutException: Ask timed out on 
[Actor[akka.tcp://flink@host:36509/user/taskmanager_0#1922704998]] after [6 
ms]. Message of type [org.apache.flink.runtime.rpc.messages.RemoteRpcInvocation]. 
A typical reason for `AskTimeoutException` is that the recipient actor didn't 
send a reply.
{code}

Exception screenshot: !akka timeout Exception.png!

FlameGraph:

[^JM 启动火焰图.svg]

 

I hope the community can run similar tests, if there is time. The test scenario 
is relatively simple; it only needs to meet the following conditions:
- source parallelism is greater than 2000
- the Kafka partition count is greater than 2000
- `state.backend.fs.memory-threshold` is increased to 20K
- the job processing logic is as simple as possible

I am happy to provide more details about the test environment and conditions. 

If there is a problem with my testing process, please correct me.
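
For anyone reproducing this, a minimal sketch of raising the threshold 
programmatically (the configuration key is real; where the Configuration is 
passed depends on the deployment):

{code:java}
import org.apache.flink.configuration.Configuration;

public class MemoryThresholdSketch {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        // State below this size is shipped inline to the JM as a
        // ByteStreamStateHandle instead of as a separate file on HDFS.
        conf.setString("state.backend.fs.memory-threshold", "20kb");
        System.out.println(conf.getString("state.backend.fs.memory-threshold", "1kb"));
    }
}
{code}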

> Speed ​​up the restore of UnionListState
> 
>
> Key: FLINK-21436
> URL: https://issues.apache.org/jira/browse/FLINK-21436
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Checkpointing, Runtime / State Backends
>Affects Versions: 1.13.0
>Reporter: fanrui
>Priority: Major
> Attachments: JM 启动火焰图.svg, akka timeout Exception.png
>
>
> h1. 1. Problem introduction and cause analysis
> Problem description: the restore of UnionListState under high parallelism 
> takes more than 2 minutes.
> h2. The reason:
> 2000 subtasks write 2000 files during checkpoint, and each subtask needs to 
> read 2000 files during restore.
>  2000*2000 = 4 million, so 4 million small files need to be read from HDFS 
> during restore. HDFS becomes the bottleneck, making the restore particularly 
> time-consuming.
> h1. 2. Optimization ideas
> Under normal circumstances, the UnionListState state is relatively small. A 
> typical usage scenario is Kafka offset information.
>  When restoring, the JM can directly read all 2000 small files, merge the 
> UnionListState into a byte array, and send it to all TMs, avoiding frequent 
> HDFS access by the TMs.
> h1. 3. Benefits after optimization
> Before optimization: with 2000 parallelism, the Kafka offset restore takes 90~130 s.
>  After optimization: with 2000 parallelism, the Kafka offset restore takes less than 1 s.
> h1. 4. Risk points
> A UnionListState that is too big puts too much pressure on the JM.
> Solution 1:
>  Add a configuration option that decides whether to enable this feature. The 
> default is false, which means the old behavior is used. When the user sets it 
> to true, the JM will merge.
> Solution 2:
> The above configuration is not required, which is equivalent to enabling 
> merge by default.
> The JM checks the size of the state before merging; if it is below a 
> threshold, the state is considered small and is sent to all TMs through 
> ByteStreamStateHandle.
> If the threshold is exceeded, the state is considered large. In this case, 
> the JM writes an HDFS file and sends a FileStateHandle to all TMs, and the 
> TMs can read this file.
>  
> Note: Most of the scenarios where Flink uses UnionListState are Kafka offsets 
> (small state). In theory, most jobs are risk-free.
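
A schematic sketch of Solution 2's decision logic (every name below is a 
hypothetical stand-in, not Flink's actual restore code):

{code:java}
/** Hypothetical sketch of the JM-side merge decision in Solution 2. */
public final class UnionStateMergeSketch {

    interface StateHandle {}

    static final class ByteStreamStateHandle implements StateHandle {
        final byte[] bytes;
        ByteStreamStateHandle(byte[] bytes) { this.bytes = bytes; }
    }

    static final class FileStateHandle implements StateHandle {
        final String path;
        FileStateHandle(String path) { this.path = path; }
    }

    /** Small merged state is broadcast inline; large state goes through one shared file. */
    static StateHandle handleForBroadcast(byte[] mergedState, long thresholdBytes) {
        if (mergedState.length <= thresholdBytes) {
            return new ByteStreamStateHandle(mergedState); // sent directly to all TMs
        }
        // Write a single file and let every TM read it (path is illustrative).
        return new FileStateHandle("hdfs:///checkpoints/union-state-merged");
    }
}
{code}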



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-21436) Speed ​​up the restore of UnionListState

2021-03-02 Thread fanrui (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fanrui updated FLINK-21436:
---
Attachment: JM 启动火焰图.svg

> Speed ​​up the restore of UnionListState
> 
>
> Key: FLINK-21436
> URL: https://issues.apache.org/jira/browse/FLINK-21436
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Checkpointing, Runtime / State Backends
>Affects Versions: 1.13.0
>Reporter: fanrui
>Priority: Major
> Attachments: JM 启动火焰图.svg, akka timeout Exception.png
>
>
> h1. 1. Problem introduction and cause analysis
> Problem description: the restore of UnionListState under high parallelism 
> takes more than 2 minutes.
> h2. The reason:
> 2000 subtasks write 2000 files during checkpoint, and each subtask needs to 
> read 2000 files during restore.
>  2000*2000 = 4 million, so 4 million small files need to be read from HDFS 
> during restore. HDFS becomes the bottleneck, making the restore particularly 
> time-consuming.
> h1. 2. Optimization ideas
> Under normal circumstances, the UnionListState state is relatively small. A 
> typical usage scenario is Kafka offset information.
>  When restoring, the JM can directly read all 2000 small files, merge the 
> UnionListState into a byte array, and send it to all TMs, avoiding frequent 
> HDFS access by the TMs.
> h1. 3. Benefits after optimization
> Before optimization: with 2000 parallelism, the Kafka offset restore takes 90~130 s.
>  After optimization: with 2000 parallelism, the Kafka offset restore takes less than 1 s.
> h1. 4. Risk points
> A UnionListState that is too big puts too much pressure on the JM.
> Solution 1:
>  Add a configuration option that decides whether to enable this feature. The 
> default is false, which means the old behavior is used. When the user sets it 
> to true, the JM will merge.
> Solution 2:
> The above configuration is not required, which is equivalent to enabling 
> merge by default.
> The JM checks the size of the state before merging; if it is below a 
> threshold, the state is considered small and is sent to all TMs through 
> ByteStreamStateHandle.
> If the threshold is exceeded, the state is considered large. In this case, 
> the JM writes an HDFS file and sends a FileStateHandle to all TMs, and the 
> TMs can read this file.
>  
> Note: Most of the scenarios where Flink uses UnionListState are Kafka offsets 
> (small state). In theory, most jobs are risk-free.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-21436) Speed ​​up the restore of UnionListState

2021-03-02 Thread fanrui (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fanrui updated FLINK-21436:
---
Attachment: akka timeout Exception.png

> Speed ​​up the restore of UnionListState
> 
>
> Key: FLINK-21436
> URL: https://issues.apache.org/jira/browse/FLINK-21436
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Checkpointing, Runtime / State Backends
>Affects Versions: 1.13.0
>Reporter: fanrui
>Priority: Major
> Attachments: JM 启动火焰图.svg, akka timeout Exception.png
>
>
> h1. 1. Problem introduction and cause analysis
> Problem description: the restore of UnionListState under high parallelism 
> takes more than 2 minutes.
> h2. The reason:
> 2000 subtasks write 2000 files during checkpoint, and each subtask needs to 
> read 2000 files during restore.
>  2000*2000 = 4 million, so 4 million small files need to be read from HDFS 
> during restore. HDFS becomes the bottleneck, making the restore particularly 
> time-consuming.
> h1. 2. Optimization ideas
> Under normal circumstances, the UnionListState state is relatively small. A 
> typical usage scenario is Kafka offset information.
>  When restoring, the JM can directly read all 2000 small files, merge the 
> UnionListState into a byte array, and send it to all TMs, avoiding frequent 
> HDFS access by the TMs.
> h1. 3. Benefits after optimization
> Before optimization: with 2000 parallelism, the Kafka offset restore takes 90~130 s.
>  After optimization: with 2000 parallelism, the Kafka offset restore takes less than 1 s.
> h1. 4. Risk points
> A UnionListState that is too big puts too much pressure on the JM.
> Solution 1:
>  Add a configuration option that decides whether to enable this feature. The 
> default is false, which means the old behavior is used. When the user sets it 
> to true, the JM will merge.
> Solution 2:
> The above configuration is not required, which is equivalent to enabling 
> merge by default.
> The JM checks the size of the state before merging; if it is below a 
> threshold, the state is considered small and is sent to all TMs through 
> ByteStreamStateHandle.
> If the threshold is exceeded, the state is considered large. In this case, 
> the JM writes an HDFS file and sends a FileStateHandle to all TMs, and the 
> TMs can read this file.
>  
> Note: Most of the scenarios where Flink uses UnionListState are Kafka offsets 
> (small state). In theory, most jobs are risk-free.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-21578) Closeable Sink Committer/GlobalCommitter were created to function in one step during job graph composition

2021-03-02 Thread Kezhu Wang (Jira)
Kezhu Wang created FLINK-21578:
--

 Summary: Closeable Sink Committer/GlobalCommitter were created to 
function in one step during job graph composition
 Key: FLINK-21578
 URL: https://issues.apache.org/jira/browse/FLINK-21578
 Project: Flink
  Issue Type: Improvement
  Components: API / DataStream
Affects Versions: 1.13.0
Reporter: Kezhu Wang


Normally, functions/operators are created in the job graph composition phase 
for serialization and transmission. They are "opened" in the Flink cluster to 
function. This two-step procedure succeeds in that there is no resource-cleanup 
requirement in the job graph composition phase.

However, {{Committer}} and {{GlobalCommitter}} have no such "open" operation, 
yet they are created in the job graph composition phase.

The following are fixes I could imagine, if we agree that this is problematic:
 # Add {{open}} or a similar method to these two classes.
 # Add {{hasCommitter}}, {{hasGlobalCommitter}} to {{Sink}} and make 
{{createCommitter}} and the other factory methods not optional (enforce this at 
runtime).

Personally, I slightly prefer the second approach, since it touches fewer code 
paths in the job graph composition phase. But the first approach has the 
advantage that it could be a non-breaking change; a sketch of it is shown below.

There might be other approaches though.
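
A hedged sketch of the first fix (the commit signature below only approximates 
the current Sink API):

{code:java}
// Hypothetical sketch of fix 1: an explicit lifecycle method, so that
// instances created during job graph composition hold no live resources
// until they are opened on the cluster.
import java.io.IOException;
import java.util.List;

public interface Committer<CommT> extends AutoCloseable {

    /** Hypothetical addition: acquire resources on the cluster, not at composition time. */
    default void open() throws Exception {}

    /** Commits the given committables, returning the ones that need to be retried. */
    List<CommT> commit(List<CommT> committables) throws IOException, InterruptedException;
}
{code}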

cc [~guoweima] [~gaoyunhaii]  [~aljoscha]  [~kkl0u]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #15069: [FLINK-21576][runtime] Remove legacy ExecutionVertex#getPreferredLocations()

2021-03-02 Thread GitBox


flinkbot edited a comment on pull request #15069:
URL: https://github.com/apache/flink/pull/15069#issuecomment-789495972


   
   ## CI report:
   
   * 91cd7acb32af72a0f4d1aba2735fb9e7c4130eff Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=14038)
 
   
   
   ## Bot commands
   The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #15062: [FLINK-21549][table-planner-blink] Support json serialization/deserialization for the push-down result of DynamicTableSource and Dyna

2021-03-02 Thread GitBox


flinkbot edited a comment on pull request #15062:
URL: https://github.com/apache/flink/pull/15062#issuecomment-788890431


   
   ## CI report:
   
   * bc8213ca609dc86c8d3383502a60b5894ea50678 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=14037)
 
   
   
   ## Bot commands
   The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] KarmaGYZ commented on pull request #15032: [FLINK-21511][Connectors/Elasticsearch]Fix BulkProcessor hangs for threads deadlocked

2021-03-02 Thread GitBox


KarmaGYZ commented on pull request #15032:
URL: https://github.com/apache/flink/pull/15032#issuecomment-789506319


   @zhangmeng0426 The commit message could be changed to "[FLINK-21511][es] 
Bump Elasticsearch6 version to 6.8.12". You can try to rerun the CI.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] wenlong88 commented on a change in pull request #15062: [FLINK-21549][table-planner-blink] Support json serialization/deserialization for the push-down result of DynamicTableSource an

2021-03-02 Thread GitBox


wenlong88 commented on a change in pull request #15062:
URL: https://github.com/apache/flink/pull/15062#discussion_r586163272



##
File path: 
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/plan/utils/RexNodeExtractor.scala
##
@@ -108,18 +130,15 @@ object RexNodeExtractor extends Logging {
     // converts the cnf condition to a list of AND conditions
     val conjunctions = RelOptUtil.conjunctions(cnf)
 
-    val convertedExpressions = new mutable.ArrayBuffer[Expression]
+    val convertibleRexNodes = new mutable.ArrayBuffer[RexNode]
     val unconvertedRexNodes = new mutable.ArrayBuffer[RexNode]
-    val inputNames = inputFieldNames.asScala.toArray
-    val converter = new RexNodeToExpressionConverter(
-      rexBuilder, inputNames, functionCatalog, catalogManager, timeZone)
     conjunctions.asScala.foreach(rex => {
       rex.accept(converter) match {
-        case Some(expression) => convertedExpressions += expression

Review comment:
   there is a regression here: a RexNode which cannot be converted to an 
expression will be treated as a convertible RexNode

##
File path: 
flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/plan/abilities/source/FilterPushDownSpec.java
##
@@ -0,0 +1,105 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.planner.plan.abilities.source;
+
+import org.apache.flink.table.api.TableException;
+import org.apache.flink.table.connector.source.DynamicTableSource;
+import org.apache.flink.table.connector.source.abilities.SupportsFilterPushDown;
+import org.apache.flink.table.expressions.Expression;
+import org.apache.flink.table.expressions.resolver.ExpressionResolver;
+import org.apache.flink.table.planner.calcite.FlinkTypeFactory;
+import org.apache.flink.table.planner.plan.utils.RexNodeToExpressionConverter;
+
+import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.annotation.JsonCreator;
+import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.annotation.JsonProperty;
+import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.annotation.JsonTypeName;
+
+import org.apache.calcite.rex.RexBuilder;
+import org.apache.calcite.rex.RexNode;
+
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Optional;
+import java.util.TimeZone;
+import java.util.stream.Collectors;
+
+import static org.apache.flink.util.Preconditions.checkNotNull;
+
+/**
+ * A sub-class of {@link SourceAbilitySpec} that can not only serialize/deserialize the filter
+ * to/from JSON, but also can push the filter into a {@link SupportsFilterPushDown}.
+ */
+@JsonTypeName("FilterPushDown")
+public class FilterPushDownSpec implements SourceAbilitySpec {
+    public static final String FIELD_NAME_PREDICATES = "predicates";
+
+    @JsonProperty(FIELD_NAME_PREDICATES)
+    private final List<RexNode> predicates;
+
+    @JsonCreator
+    public FilterPushDownSpec(@JsonProperty(FIELD_NAME_PREDICATES) List<RexNode> predicates) {
+        this.predicates = new ArrayList<>(checkNotNull(predicates));
+    }
+
+    @Override
+    public void apply(DynamicTableSource tableSource, SourceAbilityContext context) {
+        apply(predicates, tableSource, context);

Review comment:
   I think we need to check that all predicates are accepted by the source 
after deserialization





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] Thesharing commented on a change in pull request #14868: [FLINK-21326][runtime] Optimize building topology when initializing ExecutionGraph

2021-03-02 Thread GitBox


Thesharing commented on a change in pull request #14868:
URL: https://github.com/apache/flink/pull/14868#discussion_r586086067



##
File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/executiongraph/EdgeManagerBuildUtil.java
##
@@ -0,0 +1,143 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License
+ */
+
+package org.apache.flink.runtime.executiongraph;
+
+import org.apache.flink.runtime.jobgraph.DistributionPattern;
+import org.apache.flink.runtime.jobgraph.IntermediateResultPartitionID;
+import org.apache.flink.runtime.scheduler.strategy.ConsumedPartitionGroup;
+import org.apache.flink.runtime.scheduler.strategy.ConsumerVertexGroup;
+import org.apache.flink.runtime.scheduler.strategy.ExecutionVertexID;
+
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.List;
+import java.util.stream.Collectors;
+
+/** Utilities for building {@link EdgeManager}. */
+public class EdgeManagerBuildUtil {
+
+    public static void connectVertexToResult(
+            ExecutionJobVertex vertex,
+            IntermediateResult ires,
+            int inputNumber,
+            DistributionPattern distributionPattern) {
+
+        switch (distributionPattern) {
+            case POINTWISE:
+                connectPointwise(vertex.getTaskVertices(), ires, inputNumber);
+                break;
+            case ALL_TO_ALL:
+                connectAllToAll(vertex.getTaskVertices(), ires, inputNumber);
+                break;
+            default:
+                throw new RuntimeException("Unrecognized distribution pattern.");
+        }
+    }
+
+    private static void connectAllToAll(
+            ExecutionVertex[] taskVertices, IntermediateResult ires, int inputNumber) {
+
+        ConsumedPartitionGroup consumedPartitions =
+                new ConsumedPartitionGroup(
+                        Arrays.stream(ires.getPartitions())
+                                .map(IntermediateResultPartition::getPartitionId)
+                                .collect(Collectors.toList()));
+        for (ExecutionVertex ev : taskVertices) {
+            ev.addConsumedPartitions(consumedPartitions, inputNumber);
+        }
+
+        ConsumerVertexGroup vertices =
+                new ConsumerVertexGroup(
+                        Arrays.stream(taskVertices)
+                                .map(ExecutionVertex::getID)
+                                .collect(Collectors.toList()));
+        for (IntermediateResultPartition partition : ires.getPartitions()) {
+            partition.addConsumers(vertices);
+        }
+    }
+
+    private static void connectPointwise(
+            ExecutionVertex[] taskVertices, IntermediateResult ires, int inputNumber) {
+
+        final int sourceCount = ires.getPartitions().length;
+        final int targetCount = taskVertices.length;
+
+        if (sourceCount == targetCount) {
+            for (int i = 0; i < sourceCount; i++) {
+                ExecutionVertex executionVertex = taskVertices[i];
+                IntermediateResultPartition partition = ires.getPartitions()[i];
+
+                ConsumerVertexGroup consumerVertexGroup =
+                        new ConsumerVertexGroup(executionVertex.getID());
+                partition.addConsumers(consumerVertexGroup);
+
+                ConsumedPartitionGroup consumedPartitionGroup =
+                        new ConsumedPartitionGroup(partition.getPartitionId());
+                executionVertex.addConsumedPartitions(consumedPartitionGroup, inputNumber);
+            }
+        } else if (sourceCount > targetCount) {
+            for (int index = 0; index < targetCount; index++) {
+
+                ExecutionVertex executionVertex = taskVertices[index];
+                ConsumerVertexGroup consumerVertexGroup =
+                        new ConsumerVertexGroup(executionVertex.getID());
+
+                int start = index * sourceCount / targetCount;
+                int end = (index + 1) * sourceCount / targetCount;
+
+                List<IntermediateResultPartitionID> consumedPartitions =
+                        new ArrayList<>(end - start);
+
+                for (int i = start; i < end; i++) {
+                    IntermediateResultPartition partition = 

[jira] [Commented] (FLINK-21542) Add documentation for supporting INSERT INTO specific columns

2021-03-02 Thread Zhenghua Gao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-21542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17294341#comment-17294341
 ] 

Zhenghua Gao commented on FLINK-21542:
--

Is it enough to update the syntax of the [INSERT 
statement|https://ci.apache.org/projects/flink/flink-docs-release-1.12/dev/table/sql/insert.html#syntax]?
 

> Add documentation for supporting INSERT INTO specific columns
> -
>
> Key: FLINK-21542
> URL: https://issues.apache.org/jira/browse/FLINK-21542
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / API
>Reporter: Jark Wu
>Priority: Major
> Fix For: 1.13.0
>
>
> We have supported INSERT INTO specific columns since FLINK-18726, but have 
> not added documentation for it yet. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] Thesharing removed a comment on pull request #15069: [FLINK-21576][runtime] Remove legacy ExecutionVertex#getPreferredLocations()

2021-03-02 Thread GitBox


Thesharing removed a comment on pull request #15069:
URL: https://github.com/apache/flink/pull/15069#issuecomment-789489512


   Since there's no reference to `Execution#calculatePreferredLocations`, I 
think it's safe to remove it. I'll rebase my PR 
[FLINK-21326](https://github.com/apache/flink/pull/14868) after this PR is 
merged. 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #15069: [FLINK-21576][runtime] Remove legacy ExecutionVertex#getPreferredLocations()

2021-03-02 Thread GitBox


flinkbot commented on pull request #15069:
URL: https://github.com/apache/flink/pull/15069#issuecomment-789495972


   
   ## CI report:
   
   * 91cd7acb32af72a0f4d1aba2735fb9e7c4130eff UNKNOWN
   
   
   ## Bot commands
   The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #15062: [FLINK-21549][table-planner-blink] Support json serialization/deserialization for the push-down result of DynamicTableSource and Dyna

2021-03-02 Thread GitBox


flinkbot edited a comment on pull request #15062:
URL: https://github.com/apache/flink/pull/15062#issuecomment-788890431


   
   ## CI report:
   
   * 8cb2f9ce280a52bf5be662c458d8b9005ca1728a Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=14000)
 
   * bc8213ca609dc86c8d3383502a60b5894ea50678 UNKNOWN
   
   
   ## Bot commands
   The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #15063: [FLINK-21434][python] Fix encoding error when using the fast coder to encode a row-type field containing more than 14 fields

2021-03-02 Thread GitBox


flinkbot edited a comment on pull request #15063:
URL: https://github.com/apache/flink/pull/15063#issuecomment-788890531


   
   ## CI report:
   
   * 2effd19d74334e7da661d4774eedbaf303cb3ba8 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=14024)
 
   * 2b27ed5151bf2515e5ca23063032269c944b8bea Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=14035)
 
   
   
   ## Bot commands
   The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #15056: [FLINK-21128][k8s] Introduce dedicated scripts for native K8s integration

2021-03-02 Thread GitBox


flinkbot edited a comment on pull request #15056:
URL: https://github.com/apache/flink/pull/15056#issuecomment-788712841


   
   ## CI report:
   
   * ce5ae5eb11368269976029a47eded3951a4702d5 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=14005)
 
   * a69af3ac7eb1b4453c1d43601636f5777dfea11c Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=14034)
 
   
   
   ## Bot commands
   The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #15018: [FLINK-21460][table api] Use Configuration to create TableEnvironment

2021-03-02 Thread GitBox


flinkbot edited a comment on pull request #15018:
URL: https://github.com/apache/flink/pull/15018#issuecomment-785717193


   
   ## CI report:
   
   * 8da9084ba0c7dad6d6b14252ee8ae2d5879f9070 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=14025)
 
   
   
   ## Bot commands
   The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #15028: [FLINK-21115][python] Add AggregatingState and corresponding StateDescriptor for Python DataStream API

2021-03-02 Thread GitBox


flinkbot edited a comment on pull request #15028:
URL: https://github.com/apache/flink/pull/15028#issuecomment-786507942


   
   ## CI report:
   
   * 75560398365a84a3493325a4ccb6b55ab667becb Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=14023)
 
   * 7144f6b8dcc5bf64d21c92da85a8e61c909fbf63 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=14033)
 
   
   
   ## Bot commands
   The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #14878: [FLINK-21245][table-planner-blink] Support StreamExecCalc json serialization/deserialization

2021-03-02 Thread GitBox


flinkbot edited a comment on pull request #14878:
URL: https://github.com/apache/flink/pull/14878#issuecomment-773880848


   
   ## CI report:
   
   * 7eed4a448147c00eb56b70e588eaa770a473996a UNKNOWN
   * 0e30f3541608abedd18beece54297e6815d6ae3c Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13998)
 
   * bbe20063990703285bdfa7c37958151641534426 UNKNOWN
   
   
   ## Bot commands
   The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Closed] (FLINK-21502) Reduce frequency of global re-allocate resources

2021-03-02 Thread Xintong Song (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xintong Song closed FLINK-21502.

Fix Version/s: 1.13.0
   Resolution: Done

Merged via:
* master (1.13): 7602d2e89d2b1e65694f19ba67e77d1dd03e0abe

> Reduce frequency of global re-allocate resources
> 
>
> Key: FLINK-21502
> URL: https://issues.apache.org/jira/browse/FLINK-21502
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination
>Reporter: Xintong Song
>Assignee: Yangze Guo
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.13.0
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-21502) Reduce frequency of global re-allocate resources

2021-03-02 Thread Xintong Song (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xintong Song updated FLINK-21502:
-
Component/s: (was: Runtime / Checkpointing)
 Runtime / Coordination

> Reduce frequency of global re-allocate resources
> 
>
> Key: FLINK-21502
> URL: https://issues.apache.org/jira/browse/FLINK-21502
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination
>Reporter: Xintong Song
>Assignee: Yangze Guo
>Priority: Major
>  Labels: pull-request-available
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] xintongsong closed pull request #15047: [FLINK-21502][coordination] Reduce frequency of global re-allocate re…

2021-03-02 Thread GitBox


xintongsong closed pull request #15047:
URL: https://github.com/apache/flink/pull/15047


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (FLINK-21577) SimpleType.simpleTypeFrom(...) complains with "Collection is empty"

2021-03-02 Thread Tzu-Li (Gordon) Tai (Jira)
Tzu-Li (Gordon) Tai created FLINK-21577:
---

 Summary: SimpleType.simpleTypeFrom(...) complains with "Collection 
is empty"
 Key: FLINK-21577
 URL: https://issues.apache.org/jira/browse/FLINK-21577
 Project: Flink
  Issue Type: Bug
  Components: Stateful Functions
Reporter: Tzu-Li (Gordon) Tai
Assignee: Tzu-Li (Gordon) Tai
 Fix For: statefun-3.0.0


This is caused by the {{EnumSet.copyOf}} method call at:
https://github.com/apache/flink-statefun/blob/master/statefun-sdk-java/src/main/java/org/apache/flink/statefun/sdk/java/types/SimpleType.java#L57

That expects the collection to be non-empty.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] Thesharing commented on pull request #15069: [FLINK-21576][runtime] Remove legacy ExecutionVertex#getPreferredLocations()

2021-03-02 Thread GitBox


Thesharing commented on pull request #15069:
URL: https://github.com/apache/flink/pull/15069#issuecomment-789489512


   Since there's no reference to `Execution#calculatePreferredLocations`, I 
think it's safe to remove it. I'll rebase my PR 
[FLINK-21326](https://github.com/apache/flink/pull/14868) after this PR is 
merged. 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-11536) cannot start standalone cluster - failed to bind to /0.0.0.0:6123

2021-03-02 Thread nobleyd (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-11536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17294328#comment-17294328
 ] 

nobleyd commented on FLINK-11536:
-

[~yunta] Hi. I also hit this problem, but my taskmanager did not fail; it just 
did not work...

> cannot start standalone cluster - failed to bind to /0.0.0.0:6123
> -
>
> Key: FLINK-11536
> URL: https://issues.apache.org/jira/browse/FLINK-11536
> Project: Flink
>  Issue Type: Bug
>Reporter: Hongkai Wu
>Priority: Major
>
> I'm deploying a standalone flink cluster with version 1.7 on EC2. I install 
> java openjdk9.
> Once I run bin/start-cluster.sh, the log says:
> 2019-02-06 07:52:12,460 ERROR akka.remote.transport.netty.NettyTransport - 
> failed to bind to /0.0.0.0:6123, shutting down Netty transport
> 2019-02-06 07:52:12,466 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint - 
> Shutting StandaloneSessionClusterEntrypoint down with application status 
> FAILED. Diagnostics java.net.BindException: Could not start actor system on 
> any port in port range 6123
>  
> How can I fix this?
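
A quick way to diagnose this (a hedged sketch, assuming a Linux host and the 
default setup; the commands and the config key below are general knowledge, 
not taken from this thread) is to check what already holds the JobManager RPC 
port and, if it is taken, move Flink to a free one:

{code}
# Show the process currently bound to port 6123:
sudo lsof -i :6123
# or: sudo netstat -tlnp | grep 6123

# If another process owns the port, pick a free one in conf/flink-conf.yaml:
# jobmanager.rpc.port: 6124
{code}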



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #15063: [FLINK-21434][python] Fix encoding error when using the fast coder to encode a row-type field containing more than 14 fields

2021-03-02 Thread GitBox


flinkbot edited a comment on pull request #15063:
URL: https://github.com/apache/flink/pull/15063#issuecomment-788890531


   
   ## CI report:
   
   * 2effd19d74334e7da661d4774eedbaf303cb3ba8 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=14024)
 
   * 2b27ed5151bf2515e5ca23063032269c944b8bea UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #15069: [FLINK-21576][runtime] Remove legacy ExecutionVertex#getPreferredLocations()

2021-03-02 Thread GitBox


flinkbot commented on pull request #15069:
URL: https://github.com/apache/flink/pull/15069#issuecomment-789485774


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 91cd7acb32af72a0f4d1aba2735fb9e7c4130eff (Wed Mar 03 
06:57:00 UTC 2021)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer of PMC member is required Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #15056: [FLINK-21128][k8s] Introduce dedicated scripts for native K8s integration

2021-03-02 Thread GitBox


flinkbot edited a comment on pull request #15056:
URL: https://github.com/apache/flink/pull/15056#issuecomment-788712841


   
   ## CI report:
   
   * ce5ae5eb11368269976029a47eded3951a4702d5 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=14005)
 
   * a69af3ac7eb1b4453c1d43601636f5777dfea11c UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #15057: [FLINK-21552][runtime] Unreserve managed memory if OpaqueMemoryResour…

2021-03-02 Thread GitBox


flinkbot edited a comment on pull request #15057:
URL: https://github.com/apache/flink/pull/15057#issuecomment-788739246


   
   ## CI report:
   
   * 5d6398e75a471f89c8e00df6968d6a32493175c3 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13979)
 
   * c8faef9781f725ac613f04a4bbb650b6d6cba105 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=14032)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #15047: [FLINK-21502][coordination] Reduce frequency of global re-allocate re…

2021-03-02 Thread GitBox


flinkbot edited a comment on pull request #15047:
URL: https://github.com/apache/flink/pull/15047#issuecomment-787603325


   
   ## CI report:
   
   * ec740e36bf9c727139ed108ac44c9aca0f7c6838 UNKNOWN
   * ab02a8b97b3c96f61a5c7d1c0c747c1b3fcde881 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=14026)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #15028: [FLINK-21115][python] Add AggregatingState and corresponding StateDescriptor for Python DataStream API

2021-03-02 Thread GitBox


flinkbot edited a comment on pull request #15028:
URL: https://github.com/apache/flink/pull/15028#issuecomment-786507942


   
   ## CI report:
   
   * 75560398365a84a3493325a4ccb6b55ab667becb Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=14023)
 
   * 7144f6b8dcc5bf64d21c92da85a8e61c909fbf63 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-21576) Remove ExecutionVertex#getPreferredLocations

2021-03-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21576?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-21576:
---
Labels: pull-request-available  (was: )

> Remove ExecutionVertex#getPreferredLocations
> 
>
> Key: FLINK-21576
> URL: https://issues.apache.org/jira/browse/FLINK-21576
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Coordination
>Affects Versions: 1.13.0
>Reporter: Zhu Zhu
>Assignee: Zhu Zhu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.13.0
>
>
> {{ExecutionVertex#getPreferredLocations()}} is superseded by 
> {{DefaultPreferredLocationsRetriever}} and is no longer used. Hence, we can 
> remove it.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] zhuzhurk opened a new pull request #15069: [FLINK-21576][runtime] Remove legacy ExecutionVertex#getPreferredLocations()

2021-03-02 Thread GitBox


zhuzhurk opened a new pull request #15069:
URL: https://github.com/apache/flink/pull/15069


   ## What is the purpose of the change
   
   ExecutionVertex#getPreferredLocations() is superseded by 
DefaultPreferredLocationsRetriever and is no longer used. Hence, we can remove 
it. Its test ExecutionVertexLocalityTest is superseded by 
DefaultPreferredLocationsRetrieverTest.
   
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / **no**)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
 - The serializers: (yes / **no** / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / **no** 
/ don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (yes / **no** / 
don't know)
 - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / **no**)
 - If yes, how is the feature documented? (**not applicable** / docs / 
JavaDocs / not documented)
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] godfreyhe commented on a change in pull request #14878: [FLINK-21245][table-planner-blink] Support StreamExecCalc json serialization/deserialization

2021-03-02 Thread GitBox


godfreyhe commented on a change in pull request #14878:
URL: https://github.com/apache/flink/pull/14878#discussion_r586155957



##
File path: 
flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/plan/nodes/exec/serde/RelDataTypeJsonSerializer.java
##
@@ -0,0 +1,159 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.planner.plan.nodes.exec.serde;
+
+import org.apache.flink.table.planner.plan.schema.GenericRelDataType;
+import org.apache.flink.table.planner.plan.schema.RawRelDataType;
+import org.apache.flink.table.planner.plan.schema.StructuredRelDataType;
+import org.apache.flink.table.planner.plan.schema.TimeIndicatorRelDataType;
+import org.apache.flink.table.types.logical.TimestampKind;
+import org.apache.flink.table.types.logical.TypeInformationRawType;
+import org.apache.flink.table.utils.EncodingUtils;
+
+import 
org.apache.flink.shaded.jackson2.com.fasterxml.jackson.core.JsonGenerator;
+import 
org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.SerializerProvider;
+import 
org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.ser.std.StdSerializer;
+
+import org.apache.calcite.rel.type.RelDataType;
+import org.apache.calcite.rel.type.RelDataTypeField;
+import org.apache.calcite.sql.type.ArraySqlType;
+import org.apache.calcite.sql.type.MapSqlType;
+import org.apache.calcite.sql.type.MultisetSqlType;
+import org.apache.calcite.sql.type.SqlTypeName;
+
+import java.io.IOException;
+
+/**
+ * JSON serializer for {@link RelDataType}. Refer to {@link RelDataTypeJsonDeserializer}
+ * for the deserializer.
+ */
+public class RelDataTypeJsonSerializer extends StdSerializer<RelDataType> {
+private static final long serialVersionUID = 1L;
+
+public static final String FIELD_NAME_TYPE_NAME = "typeName";
+public static final String FIELD_NAME_FILED_NAME = "fieldName";
+public static final String FIELD_NAME_NULLABLE = "nullable";
+public static final String FIELD_NAME_PRECISION = "precision";
+public static final String FIELD_NAME_SCALE = "scale";
+public static final String FIELD_NAME_FIELDS = "fields";
+public static final String FIELD_NAME_STRUCT_KIND = "structKind";
+public static final String FIELD_NAME_TIMESTAMP_KIND = "timestampKind";
+public static final String FIELD_NAME_ELEMENT = "element";
+public static final String FIELD_NAME_KEY = "key";
+public static final String FIELD_NAME_VALUE = "value";
+public static final String FIELD_NAME_TYPE_INFO = "typeInfo";
+public static final String FIELD_NAME_RAW_TYPE = "rawType";
+public static final String FIELD_NAME_STRUCTURED_TYPE = "structuredType";
+
+public RelDataTypeJsonSerializer() {
+super(RelDataType.class);
+}
+
+@Override
+public void serialize(
+RelDataType relDataType,
+JsonGenerator jsonGenerator,
+SerializerProvider serializerProvider)
+throws IOException {
+jsonGenerator.writeStartObject();
+serialize(relDataType, jsonGenerator);
+jsonGenerator.writeEndObject();
+}
+
+private void serialize(RelDataType relDataType, JsonGenerator gen) throws 
IOException {
+if (relDataType instanceof TimeIndicatorRelDataType) {
+TimeIndicatorRelDataType timeIndicatorType = 
(TimeIndicatorRelDataType) relDataType;
+gen.writeStringField(
+FIELD_NAME_TIMESTAMP_KIND,
+timeIndicatorType.isEventTime()
+? TimestampKind.ROWTIME.name()
+: TimestampKind.PROCTIME.name());
+gen.writeBooleanField(FIELD_NAME_NULLABLE, 
relDataType.isNullable());
+} else if (relDataType instanceof StructuredRelDataType) {
+StructuredRelDataType structuredType = (StructuredRelDataType) 
relDataType;
+gen.writeObjectField(FIELD_NAME_STRUCTURED_TYPE, 
structuredType.getStructuredType());
+} else if (relDataType.isStruct()) {
+gen.writeStringField(FIELD_NAME_STRUCT_KIND, 
relDataType.getStructKind().name());
+gen.writeBooleanField(FIELD_NAME_NULLABLE, 
relDataType.isNullable());
+
+
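
For readers unfamiliar with the pattern, a minimal sketch of how such a Jackson 
StdSerializer is typically wired into an ObjectMapper (plain Jackson imports 
for brevity, whereas the Flink code uses the shaded 
org.apache.flink.shaded.jackson2 packages; the factory class name is 
illustrative):

import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.module.SimpleModule;

import org.apache.calcite.rel.type.RelDataType;

public class RelDataTypeMapperFactory {
    // Builds an ObjectMapper that serializes RelDataType through the custom
    // serializer from the diff above.
    public static ObjectMapper create() {
        SimpleModule module = new SimpleModule();
        module.addSerializer(RelDataType.class, new RelDataTypeJsonSerializer());
        ObjectMapper mapper = new ObjectMapper();
        mapper.registerModule(module);
        return mapper;
    }
}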

[jira] [Assigned] (FLINK-21576) Remove ExecutionVertex#getPreferredLocations

2021-03-02 Thread Zhu Zhu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21576?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhu Zhu reassigned FLINK-21576:
---

Assignee: Zhu Zhu

> Remove ExecutionVertex#getPreferredLocations
> 
>
> Key: FLINK-21576
> URL: https://issues.apache.org/jira/browse/FLINK-21576
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Coordination
>Affects Versions: 1.13.0
>Reporter: Zhu Zhu
>Assignee: Zhu Zhu
>Priority: Major
> Fix For: 1.13.0
>
>
> {{ExecutionVertex#getPreferredLocations()}} is superseded by 
> {{DefaultPreferredLocationsRetriever}} and is no longer used. Hence, we can 
> remove it.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-21576) Remove ExecutionVertex#getPreferredLocations

2021-03-02 Thread Zhu Zhu (Jira)
Zhu Zhu created FLINK-21576:
---

 Summary: Remove ExecutionVertex#getPreferredLocations
 Key: FLINK-21576
 URL: https://issues.apache.org/jira/browse/FLINK-21576
 Project: Flink
  Issue Type: Improvement
  Components: Runtime / Coordination
Affects Versions: 1.13.0
Reporter: Zhu Zhu
 Fix For: 1.13.0


{{ExecutionVertex#getPreferredLocations()}} is superseded by 
{{DefaultPreferredLocationsRetriever}} and is no longer used. Hence, we can 
remove it.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] wenlong88 commented on a change in pull request #14878: [FLINK-21245][table-planner-blink] Support StreamExecCalc json serialization/deserialization

2021-03-02 Thread GitBox


wenlong88 commented on a change in pull request #14878:
URL: https://github.com/apache/flink/pull/14878#discussion_r586102431



##
File path: 
flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/plan/nodes/exec/serde/RelDataTypeJsonSerializer.java
##
@@ -0,0 +1,159 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.planner.plan.nodes.exec.serde;
+
+import org.apache.flink.table.planner.plan.schema.GenericRelDataType;
+import org.apache.flink.table.planner.plan.schema.RawRelDataType;
+import org.apache.flink.table.planner.plan.schema.StructuredRelDataType;
+import org.apache.flink.table.planner.plan.schema.TimeIndicatorRelDataType;
+import org.apache.flink.table.types.logical.TimestampKind;
+import org.apache.flink.table.types.logical.TypeInformationRawType;
+import org.apache.flink.table.utils.EncodingUtils;
+
+import 
org.apache.flink.shaded.jackson2.com.fasterxml.jackson.core.JsonGenerator;
+import 
org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.SerializerProvider;
+import 
org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.ser.std.StdSerializer;
+
+import org.apache.calcite.rel.type.RelDataType;
+import org.apache.calcite.rel.type.RelDataTypeField;
+import org.apache.calcite.sql.type.ArraySqlType;
+import org.apache.calcite.sql.type.MapSqlType;
+import org.apache.calcite.sql.type.MultisetSqlType;
+import org.apache.calcite.sql.type.SqlTypeName;
+
+import java.io.IOException;
+
+/**
+ * JSON serializer for {@link RelDataType}. Refer to {@link RelDataTypeJsonDeserializer}
+ * for the deserializer.
+ */
+public class RelDataTypeJsonSerializer extends StdSerializer<RelDataType> {
+private static final long serialVersionUID = 1L;
+
+public static final String FIELD_NAME_TYPE_NAME = "typeName";
+public static final String FIELD_NAME_FILED_NAME = "fieldName";
+public static final String FIELD_NAME_NULLABLE = "nullable";
+public static final String FIELD_NAME_PRECISION = "precision";
+public static final String FIELD_NAME_SCALE = "scale";
+public static final String FIELD_NAME_FIELDS = "fields";
+public static final String FIELD_NAME_STRUCT_KIND = "structKind";
+public static final String FIELD_NAME_TIMESTAMP_KIND = "timestampKind";
+public static final String FIELD_NAME_ELEMENT = "element";
+public static final String FIELD_NAME_KEY = "key";
+public static final String FIELD_NAME_VALUE = "value";
+public static final String FIELD_NAME_TYPE_INFO = "typeInfo";
+public static final String FIELD_NAME_RAW_TYPE = "rawType";
+public static final String FIELD_NAME_STRUCTURED_TYPE = "structuredType";
+
+public RelDataTypeJsonSerializer() {
+super(RelDataType.class);
+}
+
+@Override
+public void serialize(
+RelDataType relDataType,
+JsonGenerator jsonGenerator,
+SerializerProvider serializerProvider)
+throws IOException {
+jsonGenerator.writeStartObject();
+serialize(relDataType, jsonGenerator);
+jsonGenerator.writeEndObject();
+}
+
+private void serialize(RelDataType relDataType, JsonGenerator gen) throws 
IOException {
+if (relDataType instanceof TimeIndicatorRelDataType) {
+TimeIndicatorRelDataType timeIndicatorType = 
(TimeIndicatorRelDataType) relDataType;
+gen.writeStringField(
+FIELD_NAME_TIMESTAMP_KIND,
+timeIndicatorType.isEventTime()
+? TimestampKind.ROWTIME.name()
+: TimestampKind.PROCTIME.name());
+gen.writeBooleanField(FIELD_NAME_NULLABLE, 
relDataType.isNullable());
+} else if (relDataType instanceof StructuredRelDataType) {
+StructuredRelDataType structuredType = (StructuredRelDataType) 
relDataType;
+gen.writeObjectField(FIELD_NAME_STRUCTURED_TYPE, 
structuredType.getStructuredType());
+} else if (relDataType.isStruct()) {
+gen.writeStringField(FIELD_NAME_STRUCT_KIND, 
relDataType.getStructKind().name());
+gen.writeBooleanField(FIELD_NAME_NULLABLE, 
relDataType.isNullable());
+
+

[jira] [Created] (FLINK-21575) Replace InputFormat with BulkFormat in HivePartition Reader

2021-03-02 Thread Leonard Xu (Jira)
Leonard Xu created FLINK-21575:
--

 Summary: Replace InputFormat with BulkFormat in HivePartition 
Reader
 Key: FLINK-21575
 URL: https://issues.apache.org/jira/browse/FLINK-21575
 Project: Flink
  Issue Type: Task
  Components: Connectors / Hive
Reporter: Leonard Xu


Currently the HivePartition reader (`HiveInputFormatPartitionReader`) still uses the 
legacy `InputFormat` interface; we can migrate it to the new `BulkFormat` interface.
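
For context, a rough sketch of how the two read loops differ (interface shapes 
recalled from the 1.12-era APIs; placeholder names such as format, bulkFormat, 
split, config, reuse and consume() are assumptions for illustration, not the 
actual migration):

{code:java}
// Legacy InputFormat: record-at-a-time pull loop.
format.open(split);
while (!format.reachedEnd()) {
    consume(format.nextRecord(reuse));
}
format.close();

// New BulkFormat: batch-oriented reader with checkpointable read positions.
try (BulkFormat.Reader<RowData> reader = bulkFormat.createReader(config, split)) {
    BulkFormat.RecordIterator<RowData> batch;
    while ((batch = reader.readBatch()) != null) {
        RecordAndPosition<RowData> record;
        while ((record = batch.next()) != null) {
            consume(record.getRecord());
        }
        batch.releaseBatch();
    }
}
{code}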



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #15068: [FLINK-21523][Connectors / Hive] Bug fix: ArrayIndexOutOfBoundsException occurs while run a hive strea…

2021-03-02 Thread GitBox


flinkbot edited a comment on pull request #15068:
URL: https://github.com/apache/flink/pull/15068#issuecomment-789459239


   
   ## CI report:
   
   * 624bf85a6820617768b5a7a20032e2001641cd27 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=14031)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #15057: [FLINK-21552][runtime] Unreserve managed memory if OpaqueMemoryResour…

2021-03-02 Thread GitBox


flinkbot edited a comment on pull request #15057:
URL: https://github.com/apache/flink/pull/15057#issuecomment-788739246


   
   ## CI report:
   
   * 5d6398e75a471f89c8e00df6968d6a32493175c3 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13979)
 
   * c8faef9781f725ac613f04a4bbb650b6d6cba105 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #15039: [FLINK-19763][metrics][tests] Add testNonHeapMetricUsageNotStatic and refine testMetaspaceMetricUsageNotStatic

2021-03-02 Thread GitBox


flinkbot edited a comment on pull request #15039:
URL: https://github.com/apache/flink/pull/15039#issuecomment-786732475


   
   ## CI report:
   
   * 82fd66b56c3f1f934f3d1f8164d6284afb44d0c2 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=14022)
 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13990)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] wangyang0918 commented on pull request #15056: [FLINK-21128][k8s] Introduce dedicated scripts for native K8s integration

2021-03-02 Thread GitBox


wangyang0918 commented on pull request #15056:
URL: https://github.com/apache/flink/pull/15056#issuecomment-789468060


   Rebase master branch and resolve conflicts.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-21523) ArrayIndexOutOfBoundsException occurs while run a hive streaming job with partitioned table source

2021-03-02 Thread Rui Li (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-21523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17294299#comment-17294299
 ] 

Rui Li commented on FLINK-21523:


So if {{HivePartitionFetcherContextBase}} needs the full field names/types, we 
should call {{HiveTableSource::getTableSchema}} rather than 
{{getProducedTableSchema}} when constructing the context. [~Leonard Xu] what do 
you think?
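
To make the failure mode concrete, here is a minimal standalone sketch (the 
names are illustrative, this is not the actual Flink code): when the fetcher 
context is built from the produced (projected) schema, the partition column is 
missing from the field arrays, the index lookup returns -1, and the array 
access fails exactly as in the quoted stack trace.

{code:java}
import java.util.Arrays;
import java.util.List;

public class PartitionLookupDemo {
    public static void main(String[] args) {
        // Produced/projected schema: the partition column "day" was pruned.
        List<String> producedFieldNames = Arrays.asList("dpi", "uid");
        String[] fieldTypes = {"int", "bigint"};

        // A toHiveTablePartition-style lookup of the partition column:
        int idx = producedFieldNames.indexOf("day"); // -1: projected out
        System.out.println(fieldTypes[idx]);         // ArrayIndexOutOfBoundsException: -1
    }
}
{code}

Building the context from {{getTableSchema}} would keep the partition columns 
in those arrays.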

> ArrayIndexOutOfBoundsException occurs while run a hive streaming job with 
> partitioned table source 
> ---
>
> Key: FLINK-21523
> URL: https://issues.apache.org/jira/browse/FLINK-21523
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Affects Versions: 1.12.1
>Reporter: zouyunhe
>Priority: Major
>  Labels: pull-request-available
>
> we have two hive tables, the DDL as below
> {code:java}
> //test_tbl5
> create table test.test_tbl5
>  (dpi int, 
>   uid bigint) 
> partitioned by( day string, hour string) stored as parquet;
> //test_tbl3
> create table test.test_tbl3(
>   dpi int,
>  uid bigint,
>  itime timestamp) stored as parquet;{code}
>  then add a partition to test_tbl5, 
> {code:java}
> alter table test_tbl5 add partition(day='2021-02-27',hour='12');
> {code}
> we start a flink streaming job to read hive table test_tbl5 and write the 
> data into test_tbl3; the job's SQL is as follows 
> {code:java}
> set test_tbl5.streaming-source.enable = true;
> insert into hive.test.test_tbl3 select dpi, uid, 
> cast(to_timestamp('2020-08-09 00:00:00') as timestamp(9)) from 
> hive.test.test_tbl5 where `day` = '2021-02-27';
> {code}
> and we see the exception thrown
> {code:java}
> 2021-02-28 22:33:16,553 ERROR 
> org.apache.flink.runtime.source.coordinator.SourceCoordinatorContext - 
> Exception while handling result from async call in SourceCoordinator-Source: 
> HiveSource-test.test_tbl5. Triggering job 
> failover.org.apache.flink.connectors.hive.FlinkHiveException: Failed to 
> enumerate filesat 
> org.apache.flink.connectors.hive.ContinuousHiveSplitEnumerator.handleNewSplits(ContinuousHiveSplitEnumerator.java:152)
>  ~[flink-connector-hive_2.12-1.12.1.jar:1.12.1]at 
> org.apache.flink.runtime.source.coordinator.ExecutorNotifier.lambda$null$4(ExecutorNotifier.java:136)
>  ~[flink-dist_2.12-1.12.1.jar:1.12.1]at 
> org.apache.flink.util.ThrowableCatchingRunnable.run(ThrowableCatchingRunnable.java:40)
>  [flink-dist_2.12-1.12.1.jar:1.12.1]at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  [?:1.8.0_60]at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [?:1.8.0_60]at java.lang.Thread.run(Thread.java:745) [?:1.8.0_60]Caused 
> by: java.lang.ArrayIndexOutOfBoundsException: -1at 
> org.apache.flink.connectors.hive.util.HivePartitionUtils.toHiveTablePartition(HivePartitionUtils.java:184)
>  ~[flink-connector-hive_2.12-1.12.1.jar:1.12.1]at 
> org.apache.flink.connectors.hive.HiveTableSource$HiveContinuousPartitionFetcherContext.toHiveTablePartition(HiveTableSource.java:417)
>  ~[flink-connector-hive_2.12-1.12.1.jar:1.12.1]at 
> org.apache.flink.connectors.hive.ContinuousHiveSplitEnumerator$PartitionMonitor.call(ContinuousHiveSplitEnumerator.java:237)
>  ~[flink-connector-hive_2.12-1.12.1.jar:1.12.1]at 
> org.apache.flink.connectors.hive.ContinuousHiveSplitEnumerator$PartitionMonitor.call(ContinuousHiveSplitEnumerator.java:177)
>  ~[flink-connector-hive_2.12-1.12.1.jar:1.12.1]at 
> org.apache.flink.runtime.source.coordinator.ExecutorNotifier.lambda$notifyReadyAsync$5(ExecutorNotifier.java:133)
>  ~[flink-dist_2.12-1.12.1.jar:1.12.1]at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[?:1.8.0_60]at 
> java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) 
> ~[?:1.8.0_60]at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>  ~[?:1.8.0_60]at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>  ~[?:1.8.0_60]... 3 more{code}
> it seems the partitioned field is not found in the source table field list.
>   



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-21574) WindowDistinctAggregateITCase.testCumulateWindow_GroupingSets unstable

2021-03-02 Thread Guowei Ma (Jira)
Guowei Ma created FLINK-21574:
-

 Summary: 
WindowDistinctAggregateITCase.testCumulateWindow_GroupingSets unstable
 Key: FLINK-21574
 URL: https://issues.apache.org/jira/browse/FLINK-21574
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Planner
Affects Versions: 1.13.0
Reporter: Guowei Ma


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=14014&view=logs&j=f66801b3-5d8b-58b4-03aa-cc67e0663d23&t=1abe556e-1530-599d-b2c7-b8c00d549e53
{code:java}
[ERROR]   WindowDistinctAggregateITCase.testCumulateWindow_GroupingSets:660 
expected:<...:10,7,21.09,6.0,1.0,[4
1,null,2020-10-10T00:00,2020-10-10T00:00:15,7,21.09,6.0,1.0,4]
1,null,2020-10-10T0...> but was:<...:10,7,21.09,6.0,1.0,[5
1,null,2020-10-10T00:00,2020-10-10T00:00:15,7,21.09,6.0,1.0,5]
1,null,2020-10-10T0...>

{code}




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-21523) ArrayIndexOutOfBoundsException occurs while run a hive streaming job with partitioned table source

2021-03-02 Thread zouyunhe (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-21523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17294296#comment-17294296
 ] 

zouyunhe commented on FLINK-21523:
--

I printed some debug logs; the partition field is indeed not found in the 
projected fields, just like you said [~lirui].

> ArrayIndexOutOfBoundsException occurs while run a hive streaming job with 
> partitioned table source 
> ---
>
> Key: FLINK-21523
> URL: https://issues.apache.org/jira/browse/FLINK-21523
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Affects Versions: 1.12.1
>Reporter: zouyunhe
>Priority: Major
>  Labels: pull-request-available
>
> we have two hive tables, the DDL as below
> {code:java}
> //test_tbl5
> create table test.test_tbl5
>  (dpi int, 
>   uid bigint) 
> partitioned by( day string, hour string) stored as parquet;
> //test_tbl3
> create table test.test_tbl3(
>   dpi int,
>  uid bigint,
>  itime timestamp) stored as parquet;{code}
>  then add a partition to test_tbl5, 
> {code:java}
> alter table test_tbl5 add partition(day='2021-02-27',hour='12');
> {code}
> we start a flink streaming job to read hive table test_tbl5 and write the 
> data into test_tbl3; the job's SQL is as follows 
> {code:java}
> set test_tbl5.streaming-source.enable = true;
> insert into hive.test.test_tbl3 select dpi, uid, 
> cast(to_timestamp('2020-08-09 00:00:00') as timestamp(9)) from 
> hive.test.test_tbl5 where `day` = '2021-02-27';
> {code}
> and we see the exception thrown
> {code:java}
> 2021-02-28 22:33:16,553 ERROR 
> org.apache.flink.runtime.source.coordinator.SourceCoordinatorContext - 
> Exception while handling result from async call in SourceCoordinator-Source: 
> HiveSource-test.test_tbl5. Triggering job 
> failover.org.apache.flink.connectors.hive.FlinkHiveException: Failed to 
> enumerate filesat 
> org.apache.flink.connectors.hive.ContinuousHiveSplitEnumerator.handleNewSplits(ContinuousHiveSplitEnumerator.java:152)
>  ~[flink-connector-hive_2.12-1.12.1.jar:1.12.1]at 
> org.apache.flink.runtime.source.coordinator.ExecutorNotifier.lambda$null$4(ExecutorNotifier.java:136)
>  ~[flink-dist_2.12-1.12.1.jar:1.12.1]at 
> org.apache.flink.util.ThrowableCatchingRunnable.run(ThrowableCatchingRunnable.java:40)
>  [flink-dist_2.12-1.12.1.jar:1.12.1]at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  [?:1.8.0_60]at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [?:1.8.0_60]at java.lang.Thread.run(Thread.java:745) [?:1.8.0_60]Caused 
> by: java.lang.ArrayIndexOutOfBoundsException: -1at 
> org.apache.flink.connectors.hive.util.HivePartitionUtils.toHiveTablePartition(HivePartitionUtils.java:184)
>  ~[flink-connector-hive_2.12-1.12.1.jar:1.12.1]at 
> org.apache.flink.connectors.hive.HiveTableSource$HiveContinuousPartitionFetcherContext.toHiveTablePartition(HiveTableSource.java:417)
>  ~[flink-connector-hive_2.12-1.12.1.jar:1.12.1]at 
> org.apache.flink.connectors.hive.ContinuousHiveSplitEnumerator$PartitionMonitor.call(ContinuousHiveSplitEnumerator.java:237)
>  ~[flink-connector-hive_2.12-1.12.1.jar:1.12.1]at 
> org.apache.flink.connectors.hive.ContinuousHiveSplitEnumerator$PartitionMonitor.call(ContinuousHiveSplitEnumerator.java:177)
>  ~[flink-connector-hive_2.12-1.12.1.jar:1.12.1]at 
> org.apache.flink.runtime.source.coordinator.ExecutorNotifier.lambda$notifyReadyAsync$5(ExecutorNotifier.java:133)
>  ~[flink-dist_2.12-1.12.1.jar:1.12.1]at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[?:1.8.0_60]at 
> java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) 
> ~[?:1.8.0_60]at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>  ~[?:1.8.0_60]at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>  ~[?:1.8.0_60]... 3 more{code}
> it seems the partitioned field is not found in the source table field list.
>   



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-21428) AdaptiveSchedulerSlotSharingITCase.testSchedulingOfJobRequiringSlotSharing fail

2021-03-02 Thread Guowei Ma (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-21428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17294295#comment-17294295
 ] 

Guowei Ma commented on FLINK-21428:
---

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=14014&view=logs&j=4d4a0d10-fca2-5507-8eed-c07f0bdf4887&t=c2734c79-73b6-521c-e85a-67c7ecae9107

> AdaptiveSchedulerSlotSharingITCase.testSchedulingOfJobRequiringSlotSharing 
> fail
> ---
>
> Key: FLINK-21428
> URL: https://issues.apache.org/jira/browse/FLINK-21428
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.13.0
>Reporter: Guowei Ma
>Assignee: Matthias
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.13.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=13510&view=logs&j=d8d26c26-7ec2-5ed2-772e-7a1a1eb8317c&t=be5fb08e-1ad7-563c-4f1a-a97ad4ce4865
> {code:java}
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
> 23.313 s <<< FAILURE! - in 
> org.apache.flink.runtime.scheduler.declarative.DeclarativeSchedulerSlotSharingITCase
>  [ERROR] 
> testSchedulingOfJobRequiringSlotSharing(org.apache.flink.runtime.scheduler.declarative.DeclarativeSchedulerSlotSharingITCase)
>  Time elapsed: 20.683 s <<< ERROR! 
> org.apache.flink.runtime.client.JobExecutionException: Job execution failed. 
> at 
> org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:144)
>  at 
> org.apache.flink.runtime.scheduler.declarative.DeclarativeSchedulerSlotSharingITCase.runJob(DeclarativeSchedulerSlotSharingITCase.java:83)
>  at 
> org.apache.flink.runtime.scheduler.declarative.DeclarativeSchedulerSlotSharingITCase.testSchedulingOfJobRequiringSlotSharing(DeclarativeSchedulerSlotSharingITCase.java:71)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45) 
> at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55) at 
> org.junit.rules.RunRules.evaluate(RunRules.java:20) at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325) at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>  at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-20723) testCassandraPojoNoAnnotatedKeyspaceAtLeastOnceSink failed due to NoHostAvailableException

2021-03-02 Thread Guowei Ma (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20723?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17294294#comment-17294294
 ] 

Guowei Ma commented on FLINK-20723:
---

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=14014&view=logs&j=e1276d0f-df12-55ec-86b5-c0ad597d83c9&t=906e9244-f3be-5604-1979-e767c8a6f6d9

> testCassandraPojoNoAnnotatedKeyspaceAtLeastOnceSink failed due to 
> NoHostAvailableException
> --
>
> Key: FLINK-20723
> URL: https://issues.apache.org/jira/browse/FLINK-20723
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Cassandra
>Affects Versions: 1.11.3, 1.13.0
> Environment: 
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=11137&view=results
>Reporter: Matthias
>Priority: Major
>  Labels: test-stability
>
> [Build 
> 20201221.17|https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=11137&view=results]
>  failed due to {{NoHostAvailableException}}:
> {code}
> [ERROR] Tests run: 17, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
> 167.927 s <<< FAILURE! - in 
> org.apache.flink.streaming.connectors.cassandra.CassandraConnectorITCase
> [ERROR] 
> testCassandraPojoNoAnnotatedKeyspaceAtLeastOnceSink(org.apache.flink.streaming.connectors.cassandra.CassandraConnectorITCase)
>   Time elapsed: 12.234 s  <<< ERROR!
> com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) 
> tried for query failed (tried: /127.0.0.1:9042 
> (com.datastax.driver.core.exceptions.OperationTimedOutException: [/127.0.0.1] 
> Timed out waiting for server response))
>   at 
> com.datastax.driver.core.exceptions.NoHostAvailableException.copy(NoHostAvailableException.java:84)
>   at 
> com.datastax.driver.core.exceptions.NoHostAvailableException.copy(NoHostAvailableException.java:37)
>   at 
> com.datastax.driver.core.DriverThrowables.propagateCause(DriverThrowables.java:37)
>   at 
> com.datastax.driver.core.DefaultResultSetFuture.getUninterruptibly(DefaultResultSetFuture.java:245)
>   at 
> com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:63)
>   at 
> com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:39)
>   at 
> org.apache.flink.streaming.connectors.cassandra.CassandraConnectorITCase.createTable(CassandraConnectorITCase.java:221)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
>   at 
> 

[GitHub] [flink] flinkbot commented on pull request #15068: [FLINK-21523][Connectors / Hive] Bug fix: ArrayIndexOutOfBoundsException occurs while run a hive strea…

2021-03-02 Thread GitBox


flinkbot commented on pull request #15068:
URL: https://github.com/apache/flink/pull/15068#issuecomment-789459239


   
   ## CI report:
   
   * 624bf85a6820617768b5a7a20032e2001641cd27 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #15067: [FLINK-21178] Task failure will not trigger master hook's reset()

2021-03-02 Thread GitBox


flinkbot edited a comment on pull request #15067:
URL: https://github.com/apache/flink/pull/15067#issuecomment-789445509


   
   ## CI report:
   
   * 2748160748dcde3538202320ec4b87e5d07b775f Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=14030)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-15493) FlinkKafkaInternalProducerITCase.testProducerWhenCommitEmptyPartitionsToOutdatedTxnCoordinator failed on travis

2021-03-02 Thread Guowei Ma (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17294293#comment-17294293
 ] 

Guowei Ma commented on FLINK-15493:
---

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=14015&view=logs&j=4be4ed2b-549a-533d-aa33-09e28e360cc8&t=f09203c9-1af8-53a6-da0c-2e60f5418512

> FlinkKafkaInternalProducerITCase.testProducerWhenCommitEmptyPartitionsToOutdatedTxnCoordinator
>  failed on travis
> ---
>
> Key: FLINK-15493
> URL: https://issues.apache.org/jira/browse/FLINK-15493
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka, Tests
>Affects Versions: 1.10.0, 1.13.0
>Reporter: Dian Fu
>Assignee: Jiangjie Qin
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.13.0
>
>
> FlinkKafkaInternalProducerITCase.testProducerWhenCommitEmptyPartitionsToOutdatedTxnCoordinator
>  failed on travis with the following exception:
> {code}
> Test 
> testProducerWhenCommitEmptyPartitionsToOutdatedTxnCoordinator(org.apache.flink.streaming.connectors.kafka.FlinkKafkaInternalProducerITCase)
>  failed with: org.junit.runners.model.TestTimedOutException: test timed out 
> after 3 milliseconds at java.lang.Object.wait(Native Method) at 
> java.lang.Object.wait(Object.java:502) at 
> org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:92)
>  at 
> org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:260)
>  at 
> org.apache.flink.streaming.connectors.kafka.KafkaTestEnvironmentImpl.createTestTopic(KafkaTestEnvironmentImpl.java:177)
>  at 
> org.apache.flink.streaming.connectors.kafka.KafkaTestEnvironment.createTestTopic(KafkaTestEnvironment.java:115)
>  at 
> org.apache.flink.streaming.connectors.kafka.KafkaTestBase.createTestTopic(KafkaTestBase.java:197)
>  at 
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaInternalProducerITCase.testProducerWhenCommitEmptyPartitionsToOutdatedTxnCoordinator(FlinkKafkaInternalProducerITCase.java:176)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
>  at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
>  at java.util.concurrent.FutureTask.run(FutureTask.java:266) at 
> java.lang.Thread.run(Thread.java:748)
> {code}
> instance: [https://api.travis-ci.org/v3/job/633307060/log.txt]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-21523) ArrayIndexOutOfBoundsException occurs while run a hive streaming job with partitioned table source

2021-03-02 Thread Rui Li (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-21523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17294292#comment-17294292
 ] 

Rui Li commented on FLINK-21523:


I guess it's because the projection pushdown doesn't include the partition 
column, so we don't have it in the field name/type arrays.

> ArrayIndexOutOfBoundsException occurs while run a hive streaming job with 
> partitioned table source 
> ---
>
> Key: FLINK-21523
> URL: https://issues.apache.org/jira/browse/FLINK-21523
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Affects Versions: 1.12.1
>Reporter: zouyunhe
>Priority: Major
>  Labels: pull-request-available
>
> we have two hive tables, the DDL as below
> {code:java}
> //test_tbl5
> create table test.test_tbl5
>  (dpi int, 
>   uid bigint) 
> partitioned by( day string, hour string) stored as parquet;
> //test_tbl3
> create table test.test_tbl3(
>   dpi int,
>  uid bigint,
>  itime timestamp) stored as parquet;{code}
>  then add a partition to test_tbl5, 
> {code:java}
> alter table test_tbl5 add partition(day='2021-02-27',hour='12');
> {code}
> we start a flink streaming job to read hive table test_tbl5 and write the 
> data into test_tbl3; the job's SQL is as follows 
> {code:java}
> set test_tbl5.streaming-source.enable = true;
> insert into hive.test.test_tbl3 select dpi, uid, 
> cast(to_timestamp('2020-08-09 00:00:00') as timestamp(9)) from 
> hive.test.test_tbl5 where `day` = '2021-02-27';
> {code}
> and we see the exception thrown
> {code:java}
> 2021-02-28 22:33:16,553 ERROR 
> org.apache.flink.runtime.source.coordinator.SourceCoordinatorContext - 
> Exception while handling result from async call in SourceCoordinator-Source: 
> HiveSource-test.test_tbl5. Triggering job 
> failover.org.apache.flink.connectors.hive.FlinkHiveException: Failed to 
> enumerate filesat 
> org.apache.flink.connectors.hive.ContinuousHiveSplitEnumerator.handleNewSplits(ContinuousHiveSplitEnumerator.java:152)
>  ~[flink-connector-hive_2.12-1.12.1.jar:1.12.1]at 
> org.apache.flink.runtime.source.coordinator.ExecutorNotifier.lambda$null$4(ExecutorNotifier.java:136)
>  ~[flink-dist_2.12-1.12.1.jar:1.12.1]at 
> org.apache.flink.util.ThrowableCatchingRunnable.run(ThrowableCatchingRunnable.java:40)
>  [flink-dist_2.12-1.12.1.jar:1.12.1]at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  [?:1.8.0_60]at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  [?:1.8.0_60]at java.lang.Thread.run(Thread.java:745) [?:1.8.0_60]Caused 
> by: java.lang.ArrayIndexOutOfBoundsException: -1at 
> org.apache.flink.connectors.hive.util.HivePartitionUtils.toHiveTablePartition(HivePartitionUtils.java:184)
>  ~[flink-connector-hive_2.12-1.12.1.jar:1.12.1]at 
> org.apache.flink.connectors.hive.HiveTableSource$HiveContinuousPartitionFetcherContext.toHiveTablePartition(HiveTableSource.java:417)
>  ~[flink-connector-hive_2.12-1.12.1.jar:1.12.1]at 
> org.apache.flink.connectors.hive.ContinuousHiveSplitEnumerator$PartitionMonitor.call(ContinuousHiveSplitEnumerator.java:237)
>  ~[flink-connector-hive_2.12-1.12.1.jar:1.12.1]at 
> org.apache.flink.connectors.hive.ContinuousHiveSplitEnumerator$PartitionMonitor.call(ContinuousHiveSplitEnumerator.java:177)
>  ~[flink-connector-hive_2.12-1.12.1.jar:1.12.1]at 
> org.apache.flink.runtime.source.coordinator.ExecutorNotifier.lambda$notifyReadyAsync$5(ExecutorNotifier.java:133)
>  ~[flink-dist_2.12-1.12.1.jar:1.12.1]at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> ~[?:1.8.0_60]at 
> java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) 
> ~[?:1.8.0_60]at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>  ~[?:1.8.0_60]at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>  ~[?:1.8.0_60]... 3 more{code}
> it seems the partitioned field is not found in the source table field list.
>   



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-21553) WindowDistinctAggregateITCase#testHopWindow_Cube is unstable

2021-03-02 Thread Guowei Ma (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-21553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17294291#comment-17294291
 ] 

Guowei Ma commented on FLINK-21553:
---

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=14013&view=logs&j=e25d5e7e-2a9c-5589-4940-0b638d75a414&t=a6e0f756-5bb9-5ea8-a468-5f60db442a29

> WindowDistinctAggregateITCase#testHopWindow_Cube is unstable
> 
>
> Key: FLINK-21553
> URL: https://issues.apache.org/jira/browse/FLINK-21553
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Reporter: Jark Wu
>Assignee: Andy
>Priority: Blocker
>  Labels: test-stability
> Fix For: 1.13.0
>
> Attachments: screenshot-1.png
>
>
> See 
> https://dev.azure.com/imjark/Flink/_build/results?buildId=422&view=logs&j=d1352042-8a7d-50b6-3946-a85d176b7981&t=b2322052-d503-5552-81e2-b3a532a1d7e8
>  !screenshot-1.png! 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-20329) Elasticsearch7DynamicSinkITCase hangs

2021-03-02 Thread Guowei Ma (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17294290#comment-17294290
 ] 

Guowei Ma commented on FLINK-20329:
---

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=14011&view=logs&j=d44f43ce-542c-597d-bf94-b0718c71e5e8&t=03dca39c-73e8-5aaf-601d-328ae5c35f20

> Elasticsearch7DynamicSinkITCase hangs
> -
>
> Key: FLINK-20329
> URL: https://issues.apache.org/jira/browse/FLINK-20329
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / ElasticSearch
>Affects Versions: 1.12.0
>Reporter: Dian Fu
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.13.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=10052&view=logs&j=d44f43ce-542c-597d-bf94-b0718c71e5e8&t=03dca39c-73e8-5aaf-601d-328ae5c35f20
> {code}
> 2020-11-24T16:04:05.9260517Z [INFO] Running 
> org.apache.flink.streaming.connectors.elasticsearch.table.Elasticsearch7DynamicSinkITCase
> 2020-11-24T16:19:25.5481231Z 
> ==
> 2020-11-24T16:19:25.5483549Z Process produced no output for 900 seconds.
> 2020-11-24T16:19:25.5484064Z 
> ==
> 2020-11-24T16:19:25.5484498Z 
> ==
> 2020-11-24T16:19:25.5484882Z The following Java processes are running (JPS)
> 2020-11-24T16:19:25.5485475Z 
> ==
> 2020-11-24T16:19:25.5694497Z Picked up JAVA_TOOL_OPTIONS: 
> -XX:+HeapDumpOnOutOfMemoryError
> 2020-11-24T16:19:25.7263048Z 16192 surefirebooter5057948964630155904.jar
> 2020-11-24T16:19:25.7263515Z 18566 Jps
> 2020-11-24T16:19:25.7263709Z 959 Launcher
> 2020-11-24T16:19:25.7411148Z 
> ==
> 2020-11-24T16:19:25.7427013Z Printing stack trace of Java process 16192
> 2020-11-24T16:19:25.7427369Z 
> ==
> 2020-11-24T16:19:25.7484365Z Picked up JAVA_TOOL_OPTIONS: 
> -XX:+HeapDumpOnOutOfMemoryError
> 2020-11-24T16:19:26.0848776Z 2020-11-24 16:19:26
> 2020-11-24T16:19:26.0849578Z Full thread dump OpenJDK 64-Bit Server VM 
> (25.275-b01 mixed mode):
> 2020-11-24T16:19:26.0849831Z 
> 2020-11-24T16:19:26.0850185Z "Attach Listener" #32 daemon prio=9 os_prio=0 
> tid=0x7fc148001000 nid=0x48e7 waiting on condition [0x]
> 2020-11-24T16:19:26.0850595Zjava.lang.Thread.State: RUNNABLE
> 2020-11-24T16:19:26.0850814Z 
> 2020-11-24T16:19:26.0851375Z "testcontainers-ryuk" #31 daemon prio=5 
> os_prio=0 tid=0x7fc251232000 nid=0x3fb0 in Object.wait() 
> [0x7fc1012c4000]
> 2020-11-24T16:19:26.0854688Zjava.lang.Thread.State: TIMED_WAITING (on 
> object monitor)
> 2020-11-24T16:19:26.0855379Z  at java.lang.Object.wait(Native Method)
> 2020-11-24T16:19:26.0855844Z  at 
> org.testcontainers.utility.ResourceReaper.lambda$null$1(ResourceReaper.java:142)
> 2020-11-24T16:19:26.0857272Z  - locked <0x8e2bd2d0> (a 
> java.util.ArrayList)
> 2020-11-24T16:19:26.0857977Z  at 
> org.testcontainers.utility.ResourceReaper$$Lambda$93/1981729428.run(Unknown 
> Source)
> 2020-11-24T16:19:26.0858471Z  at 
> org.rnorth.ducttape.ratelimits.RateLimiter.doWhenReady(RateLimiter.java:27)
> 2020-11-24T16:19:26.0858961Z  at 
> org.testcontainers.utility.ResourceReaper.lambda$start$2(ResourceReaper.java:133)
> 2020-11-24T16:19:26.0859422Z  at 
> org.testcontainers.utility.ResourceReaper$$Lambda$92/40191541.run(Unknown 
> Source)
> 2020-11-24T16:19:26.0859788Z  at java.lang.Thread.run(Thread.java:748)
> 2020-11-24T16:19:26.0860030Z 
> 2020-11-24T16:19:26.0860371Z "process reaper" #24 daemon prio=10 os_prio=0 
> tid=0x7fc0f803b800 nid=0x3f92 waiting on condition [0x7fc10296e000]
> 2020-11-24T16:19:26.0860913Zjava.lang.Thread.State: TIMED_WAITING 
> (parking)
> 2020-11-24T16:19:26.0861387Z  at sun.misc.Unsafe.park(Native Method)
> 2020-11-24T16:19:26.0862495Z  - parking to wait for  <0x8814bf30> (a 
> java.util.concurrent.SynchronousQueue$TransferStack)
> 2020-11-24T16:19:26.0863253Z  at 
> java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
> 2020-11-24T16:19:26.0863760Z  at 
> java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
> 2020-11-24T16:19:26.0864274Z  at 
> java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
> 2020-11-24T16:19:26.0864762Z  at 
> java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941)
> 2020-11-24T16:19:26.0865299Z  at 
> 

[jira] [Updated] (FLINK-21523) ArrayIndexOutOfBoundsException occurs while run a hive streaming job with partitioned table source

2021-03-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21523?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-21523:
---
Labels: pull-request-available  (was: )

> ArrayIndexOutOfBoundsException occurs while run a hive streaming job with 
> partitioned table source 
> ---
>
> Key: FLINK-21523
> URL: https://issues.apache.org/jira/browse/FLINK-21523
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Affects Versions: 1.12.1
>Reporter: zouyunhe
>Priority: Major
>  Labels: pull-request-available
>
> we have two hive table, the ddl as below
> {code:java}
> //test_tbl5
> create table test.test_5
>  (dpi int, 
>   uid bigint) 
> partitioned by( day string, hour string) stored as parquet;
> //test_tbl3
> create table test.test_3(
>   dpi int,
>  uid bigint,
>  itime timestamp) stored as parquet;{code}
>  then add a partition to test_tbl5, 
> {code:java}
> alter table test_tbl5 add partition(day='2021-02-27',hour='12');
> {code}
> we start a Flink streaming job to read hive table test_tbl5 and write the 
> data into test_tbl3. The job's SQL is:
> {code:java}
> set test_tbl5.streaming-source.enable = true;
> insert into hive.test.test_tbl3 select dpi, uid, 
> cast(to_timestamp('2020-08-09 00:00:00') as timestamp(9)) from 
> hive.test.test_tbl5 where `day` = '2021-02-27';
> {code}
> and we see the following exception thrown:
> {code:java}
> 2021-02-28 22:33:16,553 ERROR org.apache.flink.runtime.source.coordinator.SourceCoordinatorContext - Exception while handling result from async call in SourceCoordinator-Source: HiveSource-test.test_tbl5. Triggering job failover.
> org.apache.flink.connectors.hive.FlinkHiveException: Failed to enumerate files
> at org.apache.flink.connectors.hive.ContinuousHiveSplitEnumerator.handleNewSplits(ContinuousHiveSplitEnumerator.java:152) ~[flink-connector-hive_2.12-1.12.1.jar:1.12.1]
> at org.apache.flink.runtime.source.coordinator.ExecutorNotifier.lambda$null$4(ExecutorNotifier.java:136) ~[flink-dist_2.12-1.12.1.jar:1.12.1]
> at org.apache.flink.util.ThrowableCatchingRunnable.run(ThrowableCatchingRunnable.java:40) [flink-dist_2.12-1.12.1.jar:1.12.1]
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_60]
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_60]
> at java.lang.Thread.run(Thread.java:745) [?:1.8.0_60]
> Caused by: java.lang.ArrayIndexOutOfBoundsException: -1
> at org.apache.flink.connectors.hive.util.HivePartitionUtils.toHiveTablePartition(HivePartitionUtils.java:184) ~[flink-connector-hive_2.12-1.12.1.jar:1.12.1]
> at org.apache.flink.connectors.hive.HiveTableSource$HiveContinuousPartitionFetcherContext.toHiveTablePartition(HiveTableSource.java:417) ~[flink-connector-hive_2.12-1.12.1.jar:1.12.1]
> at org.apache.flink.connectors.hive.ContinuousHiveSplitEnumerator$PartitionMonitor.call(ContinuousHiveSplitEnumerator.java:237) ~[flink-connector-hive_2.12-1.12.1.jar:1.12.1]
> at org.apache.flink.connectors.hive.ContinuousHiveSplitEnumerator$PartitionMonitor.call(ContinuousHiveSplitEnumerator.java:177) ~[flink-connector-hive_2.12-1.12.1.jar:1.12.1]
> at org.apache.flink.runtime.source.coordinator.ExecutorNotifier.lambda$notifyReadyAsync$5(ExecutorNotifier.java:133) ~[flink-dist_2.12-1.12.1.jar:1.12.1]
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[?:1.8.0_60]
> at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) ~[?:1.8.0_60]
> at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) ~[?:1.8.0_60]
> at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) ~[?:1.8.0_60]
> ... 3 more{code}
> it seems the partitioned field is not found in the source table's field list.
>   



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot commented on pull request #15068: [FLINK-21523][Connectors / Hive] ArrayIndexOutOfBoundsException occurs while run a hive strea…

2021-03-02 Thread GitBox


flinkbot commented on pull request #15068:
URL: https://github.com/apache/flink/pull/15068#issuecomment-789449204


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 624bf85a6820617768b5a7a20032e2001641cd27 (Wed Mar 03 
05:40:18 UTC 2021)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
* **Invalid pull request title: No valid Jira ID provided**
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #15067: [FLINK-21178] Task failure will not trigger master hook's reset()

2021-03-02 Thread GitBox


flinkbot commented on pull request #15067:
URL: https://github.com/apache/flink/pull/15067#issuecomment-789445509


   
   ## CI report:
   
   * 2748160748dcde3538202320ec4b87e5d07b775f UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] KevinyhZou opened a new pull request #15068: ArrayIndexOutOfBoundsException occurs while run a hive strea…

2021-03-02 Thread GitBox


KevinyhZou opened a new pull request #15068:
URL: https://github.com/apache/flink/pull/15068


   ## What is the purpose of the change
   
   Bug fix for an ArrayIndexOutOfBoundsException that occurs while running a hive streaming job 
with a partitioned table source: the partition fields are not found in the fields 
provided by the context (HiveContinuousPartitionFetcherContext), so we add the 
field names and types to it.
   
   ## Brief change log
   
   Get the partition field names and types from the catalog base table, and put them 
into the context (HiveContinuousPartitionFetcherContext) when fetching hive 
partitions.
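
   As a standalone illustration of the idea (all names here are hypothetical, not 
the actual connector code), resolving partition columns against the table's full 
field list looks roughly like this:

   ```java
   import java.util.Arrays;
   import java.util.List;

   public class PartitionFieldResolver {

       // Resolve each partition key to its index in the table's full field list.
       // A key missing from the schema is exactly what produced the -1 index
       // (ArrayIndexOutOfBoundsException) described in FLINK-21523.
       public static int[] partitionFieldIndices(List<String> fieldNames, List<String> partitionKeys) {
           return partitionKeys.stream()
                   .mapToInt(key -> {
                       int idx = fieldNames.indexOf(key);
                       if (idx < 0) {
                           // fail fast instead of letting -1 flow into an array access
                           throw new IllegalStateException("Partition column not in schema: " + key);
                       }
                       return idx;
                   })
                   .toArray();
       }

       public static void main(String[] args) {
           // schema of test_tbl5: data columns plus partition columns (day, hour)
           List<String> fields = Arrays.asList("dpi", "uid", "day", "hour");
           int[] indices = partitionFieldIndices(fields, Arrays.asList("day", "hour"));
           System.out.println(Arrays.toString(indices)); // [2, 3]
       }
   }
   ```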
   
   ## Verifying this change
   
   This change added tests and can be verified as follows:
 - Tested manually by running a Flink streaming job against a hive partitioned 
table source
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency):   no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`:  no
 - The serializers:  no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper:  no
 - The S3 file system connector:  no
   
   ## Documentation
   
 - Does this pull request introduce a new feature?  no
 - If yes, how is the feature documented? (not applicable / docs / JavaDocs 
/ not documented)
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-21543) when using FIFO compaction, I found sst being deleted on the first checkpoint

2021-03-02 Thread xiaogang zhou (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-21543?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17294288#comment-17294288
 ] 

xiaogang zhou commented on FLINK-21543:
---

Also, can we support a FIFO auto-compaction Java API? I think it is very common 
for someone to want to use RocksDB as a cache.
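
For reference, a minimal sketch of what is already possible today without a 
dedicated API, assuming the {{RocksDBOptionsFactory}} interface of Flink 1.12 and 
mirroring the options from the issue description; it can be installed via 
{{RocksDBStateBackend#setRocksDBOptions(...)}}:

{code:java}
import org.apache.flink.contrib.streaming.state.RocksDBOptionsFactory;
import org.rocksdb.ColumnFamilyOptions;
import org.rocksdb.CompactionOptionsFIFO;
import org.rocksdb.CompactionStyle;
import org.rocksdb.DBOptions;

import java.util.Collection;

public class FifoCompactionOptionsFactory implements RocksDBOptionsFactory {

    @Override
    public DBOptions createDBOptions(DBOptions currentOptions, Collection<AutoCloseable> handlesToClose) {
        return currentOptions;
    }

    @Override
    public ColumnFamilyOptions createColumnOptions(
            ColumnFamilyOptions currentOptions, Collection<AutoCloseable> handlesToClose) {
        CompactionOptionsFIFO fifo = new CompactionOptionsFIFO();
        fifo.setMaxTableFilesSize(8L * 1024 * 1024 * 1024); // 8 GB, as in the report
        fifo.setAllowCompaction(true);
        handlesToClose.add(fifo); // release the native object together with the column family
        return currentOptions
                .setCompactionStyle(CompactionStyle.FIFO)
                .setCompactionOptionsFIFO(fifo)
                .setLevel0FileNumCompactionTrigger(8);
    }
}
{code}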

> when using FIFO compaction, I found sst being deleted on the first checkpoint
> -
>
> Key: FLINK-21543
> URL: https://issues.apache.org/jira/browse/FLINK-21543
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / State Backends
>Reporter: xiaogang zhou
>Priority: Major
> Attachments: LOG (2), image-2021-03-03-11-35-11-458.png, 
> image-2021-03-03-13-09-01-695.png
>
>
> 2021/03/01-18:51:01.202049 7f59042fc700 (Original Log Time 
> 2021/03/01-18:51:01.200883) [/compaction/compaction_picker_fifo.cc:107] 
> [_timer_state/processing_user-timers] FIFO compaction: picking file 1710 with 
> creation time 0 for deletion
>  
> the configuration is like 
> currentOptions.setCompactionStyle(getCompactionStyle());
>  currentOptions.setLevel0FileNumCompactionTrigger(8);
>  // 
> currentOptions.setMaxTableFilesSizeFIFO(MemorySize.parse("2gb").getBytes());
>  CompactionOptionsFIFO compactionOptionsFIFO = new CompactionOptionsFIFO();
>  
> compactionOptionsFIFO.setMaxTableFilesSize(MemorySize.parse("8gb").getBytes());
>  compactionOptionsFIFO.setAllowCompaction(true);
>  
> the rocksdb version is 
> {code:xml}
> <dependency>
>    <groupId>io.github.myasuka</groupId>
>    <artifactId>frocksdbjni</artifactId>
>    <version>6.10.2-ververica-3.0</version>
> </dependency>
> {code}
> I think the problem is caused by the table properties being lost in the snapshot. Can 
> anyone suggest how I can avoid this problem?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot commented on pull request #15067: [FLINK-21178] Task failure will not trigger master hook's reset()

2021-03-02 Thread GitBox


flinkbot commented on pull request #15067:
URL: https://github.com/apache/flink/pull/15067#issuecomment-789442851


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 2748160748dcde3538202320ec4b87e5d07b775f (Wed Mar 03 
05:27:32 UTC 2021)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
* **This pull request references an unassigned [Jira 
ticket](https://issues.apache.org/jira/browse/FLINK-21178).** According to the 
[code contribution 
guide](https://flink.apache.org/contributing/contribute-code.html), tickets 
need to be assigned before starting with the implementation work.
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] crazyzhou opened a new pull request #15067: [FLINK-21178] Task failure will not trigger master hook's reset()

2021-03-02 Thread GitBox


crazyzhou opened a new pull request #15067:
URL: https://github.com/apache/flink/pull/15067


   
   
   ## What is the purpose of the change
   
   This supersedes #14890 and fixes the failing master builds.
   
   The FLINK-20222 fix introduced a regression: the master hook's 
`reset()` is called only in the global recovery case, which causes a test failure 
in the [Pravega Flink connector](https://github.com/pravega/flink-connectors).
   
   ## Brief change log
   
   - Always reset the master hooks on restore.
   
   ## Verifying this change
   
   This change is covered by the newly added test `testResetCalledInRegionRecovery`.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (**no**)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (**no**)
 - The serializers: (**no**)
 - The runtime per-record code paths (performance sensitive): (**no**)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (**yes**)
 - The S3 file system connector: (**no**)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (**no**)



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #15060: [FLINK-19944][Connectors / Hive]Support sink parallelism configuratio…

2021-03-02 Thread GitBox


flinkbot edited a comment on pull request #15060:
URL: https://github.com/apache/flink/pull/15060#issuecomment-788795982


   
   ## CI report:
   
   * 1ecb25b4192385c392fe43e0b3d7f3244ba9ca84 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=14029)
 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13993)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-21543) when using FIFO compaction, I found sst being deleted on the first checkpoint

2021-03-02 Thread xiaogang zhou (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-21543?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17294281#comment-17294281
 ] 

xiaogang zhou commented on FLINK-21543:
---

!image-2021-03-03-13-09-01-695.png!

Mainly I want to ask why this creation time is read out as 0.

> when using FIFO compaction, I found sst being deleted on the first checkpoint
> -
>
> Key: FLINK-21543
> URL: https://issues.apache.org/jira/browse/FLINK-21543
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / State Backends
>Reporter: xiaogang zhou
>Priority: Major
> Attachments: LOG (2), image-2021-03-03-11-35-11-458.png, 
> image-2021-03-03-13-09-01-695.png
>
>
> 2021/03/01-18:51:01.202049 7f59042fc700 (Original Log Time 
> 2021/03/01-18:51:01.200883) [/compaction/compaction_picker_fifo.cc:107] 
> [_timer_state/processing_user-timers] FIFO compaction: picking file 1710 with 
> creation time 0 for deletion
>  
> the configuration is like 
> currentOptions.setCompactionStyle(getCompactionStyle());
>  currentOptions.setLevel0FileNumCompactionTrigger(8);
>  // 
> currentOptions.setMaxTableFilesSizeFIFO(MemorySize.parse("2gb").getBytes());
>  CompactionOptionsFIFO compactionOptionsFIFO = new CompactionOptionsFIFO();
>  
> compactionOptionsFIFO.setMaxTableFilesSize(MemorySize.parse("8gb").getBytes());
>  compactionOptionsFIFO.setAllowCompaction(true);
>  
> the rocksdb version is 
> {code:xml}
> <dependency>
>    <groupId>io.github.myasuka</groupId>
>    <artifactId>frocksdbjni</artifactId>
>    <version>6.10.2-ververica-3.0</version>
> </dependency>
> {code}
> I think the problem is caused by the table properties being lost in the snapshot. Can 
> anyone suggest how I can avoid this problem?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-21543) when using FIFO compaction, I found sst being deleted on the first checkpoint

2021-03-02 Thread xiaogang zhou (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21543?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiaogang zhou updated FLINK-21543:
--
Attachment: image-2021-03-03-13-09-01-695.png

> when using FIFO compaction, I found sst being deleted on the first checkpoint
> -
>
> Key: FLINK-21543
> URL: https://issues.apache.org/jira/browse/FLINK-21543
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / State Backends
>Reporter: xiaogang zhou
>Priority: Major
> Attachments: LOG (2), image-2021-03-03-11-35-11-458.png, 
> image-2021-03-03-13-09-01-695.png
>
>
> 2021/03/01-18:51:01.202049 7f59042fc700 (Original Log Time 
> 2021/03/01-18:51:01.200883) [/compaction/compaction_picker_fifo.cc:107] 
> [_timer_state/processing_user-timers] FIFO compaction: picking file 1710 with 
> creation time 0 for deletion
>  
> the configuration is like 
> currentOptions.setCompactionStyle(getCompactionStyle());
>  currentOptions.setLevel0FileNumCompactionTrigger(8);
>  // 
> currentOptions.setMaxTableFilesSizeFIFO(MemorySize.parse("2gb").getBytes());
>  CompactionOptionsFIFO compactionOptionsFIFO = new CompactionOptionsFIFO();
>  
> compactionOptionsFIFO.setMaxTableFilesSize(MemorySize.parse("8gb").getBytes());
>  compactionOptionsFIFO.setAllowCompaction(true);
>  
> the rocksdb version is 
> {code:xml}
> <dependency>
>    <groupId>io.github.myasuka</groupId>
>    <artifactId>frocksdbjni</artifactId>
>    <version>6.10.2-ververica-3.0</version>
> </dependency>
> {code}
> I think the problem is caused by the table properties being lost in the snapshot. Can 
> anyone suggest how I can avoid this problem?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #15060: [FLINK-19944][Connectors / Hive]Support sink parallelism configuratio…

2021-03-02 Thread GitBox


flinkbot edited a comment on pull request #15060:
URL: https://github.com/apache/flink/pull/15060#issuecomment-788795982


   
   ## CI report:
   
   * 1ecb25b4192385c392fe43e0b3d7f3244ba9ca84 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13993)
 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=14029)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] shouweikun commented on pull request #15060: [FLINK-19944][Connectors / Hive]Support sink parallelism configuratio…

2021-03-02 Thread GitBox


shouweikun commented on pull request #15060:
URL: https://github.com/apache/flink/pull/15060#issuecomment-789420079


   @flinkbot run azure



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-21570) Add Job ID to RuntimeContext

2021-03-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21570?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-21570:
---
Labels: pull-request-available  (was: )

> Add Job ID to RuntimeContext
> 
>
> Key: FLINK-21570
> URL: https://issues.apache.org/jira/browse/FLINK-21570
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Task
>Reporter: Roman Khachatryan
>Assignee: Roman Khachatryan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.13.0
>
>
> (the issue added retroactively after the PR was merged for reference)
>  
> There are some cases (e.g. 
> [1|http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/How-to-get-flink-JobId-in-runtime-td36756.html],
>  2) where the job ID needs to be accessed from user code (a function).
>  Existing workarounds don't look clean (reliable).
>  
> One solution discussed offline is to add {{Optional<JobID>}} to the 
> {{RuntimeContext}} (the latter already contains some information of the same 
> level, such as subtask index).
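
A minimal usage sketch, assuming the accessor is exposed as 
{{RuntimeContext#getJobId()}} returning {{Optional<JobID>}} as described above 
(the function name below is illustrative):

{code:java}
import org.apache.flink.api.common.JobID;
import org.apache.flink.api.common.functions.RichMapFunction;

public class JobIdTaggingMap extends RichMapFunction<String, String> {
    @Override
    public String map(String value) {
        // The Optional is expected to be empty only on paths without a real job,
        // e.g. collection execution; everywhere else a JobID should be present.
        String jobId = getRuntimeContext().getJobId()
                .map(JobID::toString)
                .orElse("no-job-id");
        return jobId + ": " + value;
    }
}
{code}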



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] kezhuw commented on pull request #15053: [FLINK-21570][runtime] Add job ID to RuntimeContext

2021-03-02 Thread GitBox


kezhuw commented on pull request #15053:
URL: https://github.com/apache/flink/pull/15053#issuecomment-789410967


   Hi @rkhachatryan, I am a bit worried about whether the type `Optional<JobID>` 
is the right choice, for these reasons:
   
   * Most `getJobId` callers expect a non-nullable `JobID`.
   * The only nullable path, `CollectionEnvironment`/`CollectionExecutor`, could also be 
shaped to return a plan-global job id (or one created before execution if 
`Plan.jobId` is null). Since `getJobId` is a brand-new API, this does not hurt 
anyone.
   * After `DataSet` is phased out, that only path will also be phased out. Then 
it will be really confusing that `getJobId` returns an optional value at that 
stage.
   
   cc @AHeise  @aljoscha  @StephanEwen 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] crazyzhou commented on pull request #14890: [FLINK-21178][Runtime/Checkpointing] Task failure should trigger master hook's reset()

2021-03-02 Thread GitBox


crazyzhou commented on pull request #14890:
URL: https://github.com/apache/flink/pull/14890#issuecomment-789410205


   > I have reverted this commit to make master compile again. Please open 
another PR to fix the conflicts.
   
   Sorry for not rebasing onto master; I can raise another PR based on the latest 
master today.
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] wuchong commented on pull request #14890: [FLINK-21178][Runtime/Checkpointing] Task failure should trigger master hook's reset()

2021-03-02 Thread GitBox


wuchong commented on pull request #14890:
URL: https://github.com/apache/flink/pull/14890#issuecomment-789406159


   I have reverted this commit to make master compile again. Please open another 
PR to fix the conflicts. 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] wangyang0918 commented on pull request #14890: [FLINK-21178][Runtime/Checkpointing] Task failure should trigger master hook's reset()

2021-03-02 Thread GitBox


wangyang0918 commented on pull request #14890:
URL: https://github.com/apache/flink/pull/14890#issuecomment-789404500


   cc @becketqin It seems that this PR breaks the master branch, because 
`CheckpointCoordinatorBuilder#setJobId` has been removed. The Flink master 
branch currently cannot compile successfully.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-21543) when using FIFO compaction, I found sst being deleted on the first checkpoint

2021-03-02 Thread xiaogang zhou (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-21543?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17294250#comment-17294250
 ] 

xiaogang zhou commented on FLINK-21543:
---

!image-2021-03-03-11-35-11-458.png!

the evidence starts from here

> when using FIFO compaction, I found sst being deleted on the first checkpoint
> -
>
> Key: FLINK-21543
> URL: https://issues.apache.org/jira/browse/FLINK-21543
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / State Backends
>Reporter: xiaogang zhou
>Priority: Major
> Attachments: LOG (2), image-2021-03-03-11-35-11-458.png
>
>
> 2021/03/01-18:51:01.202049 7f59042fc700 (Original Log Time 
> 2021/03/01-18:51:01.200883) [/compaction/compaction_picker_fifo.cc:107] 
> [_timer_state/processing_user-timers] FIFO compaction: picking file 1710 with 
> creation time 0 for deletion
>  
> the configuration is like 
> currentOptions.setCompactionStyle(getCompactionStyle());
>  currentOptions.setLevel0FileNumCompactionTrigger(8);
>  // 
> currentOptions.setMaxTableFilesSizeFIFO(MemorySize.parse("2gb").getBytes());
>  CompactionOptionsFIFO compactionOptionsFIFO = new CompactionOptionsFIFO();
>  
> compactionOptionsFIFO.setMaxTableFilesSize(MemorySize.parse("8gb").getBytes());
>  compactionOptionsFIFO.setAllowCompaction(true);
>  
> the rocksdb version is 
> {code:xml}
> <dependency>
>    <groupId>io.github.myasuka</groupId>
>    <artifactId>frocksdbjni</artifactId>
>    <version>6.10.2-ververica-3.0</version>
> </dependency>
> {code}
> I think the problem is caused by the table properties being lost in the snapshot. Can 
> anyone suggest how I can avoid this problem?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-21543) when using FIFO compaction, I found sst being deleted on the first checkpoint

2021-03-02 Thread xiaogang zhou (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21543?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiaogang zhou updated FLINK-21543:
--
Attachment: image-2021-03-03-11-35-11-458.png

> when using FIFO compaction, I found sst being deleted on the first checkpoint
> -
>
> Key: FLINK-21543
> URL: https://issues.apache.org/jira/browse/FLINK-21543
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / State Backends
>Reporter: xiaogang zhou
>Priority: Major
> Attachments: LOG (2), image-2021-03-03-11-35-11-458.png
>
>
> 2021/03/01-18:51:01.202049 7f59042fc700 (Original Log Time 
> 2021/03/01-18:51:01.200883) [/compaction/compaction_picker_fifo.cc:107] 
> [_timer_state/processing_user-timers] FIFO compaction: picking file 1710 with 
> creation time 0 for deletion
>  
> the configuration is like 
> currentOptions.setCompactionStyle(getCompactionStyle());
>  currentOptions.setLevel0FileNumCompactionTrigger(8);
>  // 
> currentOptions.setMaxTableFilesSizeFIFO(MemorySize.parse("2gb").getBytes());
>  CompactionOptionsFIFO compactionOptionsFIFO = new CompactionOptionsFIFO();
>  
> compactionOptionsFIFO.setMaxTableFilesSize(MemorySize.parse("8gb").getBytes());
>  compactionOptionsFIFO.setAllowCompaction(true);
>  
> the rocksdb version is 
> {code:xml}
> <dependency>
>    <groupId>io.github.myasuka</groupId>
>    <artifactId>frocksdbjni</artifactId>
>    <version>6.10.2-ververica-3.0</version>
> </dependency>
> {code}
> I think the problem is caused by the table properties being lost in the snapshot. Can 
> anyone suggest how I can avoid this problem?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-21543) when using FIFO compaction, I found sst being deleted on the first checkpoint

2021-03-02 Thread xiaogang zhou (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-21543?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17294247#comment-17294247
 ] 

xiaogang zhou commented on FLINK-21543:
---

[~yunta] As level compaction can trigger too many compactions on every snapshot, 
our job cannot run in that situation, because compaction consumes all the CPU 
resources. 

Our situation is 1 write and 1 read afterwards... we do not need to merge files. In the 
Flink configuration, the FIFO style is mentioned. 

FIFO compaction normally works fine. But when recovering from a checkpoint, 
I found this issue, and I attached the rocksdb log.

 

I think in our case, FIFO is the only way. Can you please review?

> when using FIFO compaction, I found sst being deleted on the first checkpoint
> -
>
> Key: FLINK-21543
> URL: https://issues.apache.org/jira/browse/FLINK-21543
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / State Backends
>Reporter: xiaogang zhou
>Priority: Major
> Attachments: LOG (2)
>
>
> 2021/03/01-18:51:01.202049 7f59042fc700 (Original Log Time 
> 2021/03/01-18:51:01.200883) [/compaction/compaction_picker_fifo.cc:107] 
> [_timer_state/processing_user-timers] FIFO compaction: picking file 1710 with 
> creation time 0 for deletion
>  
> the configuration is like 
> currentOptions.setCompactionStyle(getCompactionStyle());
>  currentOptions.setLevel0FileNumCompactionTrigger(8);
>  // 
> currentOptions.setMaxTableFilesSizeFIFO(MemorySize.parse("2gb").getBytes());
>  CompactionOptionsFIFO compactionOptionsFIFO = new CompactionOptionsFIFO();
>  
> compactionOptionsFIFO.setMaxTableFilesSize(MemorySize.parse("8gb").getBytes());
>  compactionOptionsFIFO.setAllowCompaction(true);
>  
> the rocksdb version is 
> {code:xml}
> <dependency>
>    <groupId>io.github.myasuka</groupId>
>    <artifactId>frocksdbjni</artifactId>
>    <version>6.10.2-ververica-3.0</version>
> </dependency>
> {code}
> I think the problem is caused by the table properties being lost in the snapshot. Can 
> anyone suggest how I can avoid this problem?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-21543) when using FIFO compaction, I found sst being deleted on the first checkpoint

2021-03-02 Thread xiaogang zhou (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21543?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiaogang zhou updated FLINK-21543:
--
Attachment: LOG (2)

> when using FIFO compaction, I found sst being deleted on the first checkpoint
> -
>
> Key: FLINK-21543
> URL: https://issues.apache.org/jira/browse/FLINK-21543
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / State Backends
>Reporter: xiaogang zhou
>Priority: Major
> Attachments: LOG (2)
>
>
> 2021/03/01-18:51:01.202049 7f59042fc700 (Original Log Time 
> 2021/03/01-18:51:01.200883) [/compaction/compaction_picker_fifo.cc:107] 
> [_timer_state/processing_user-timers] FIFO compaction: picking file 1710 with 
> creation time 0 for deletion
>  
> the configuration is like 
> currentOptions.setCompactionStyle(getCompactionStyle());
>  currentOptions.setLevel0FileNumCompactionTrigger(8);
>  // 
> currentOptions.setMaxTableFilesSizeFIFO(MemorySize.parse("2gb").getBytes());
>  CompactionOptionsFIFO compactionOptionsFIFO = new CompactionOptionsFIFO();
>  
> compactionOptionsFIFO.setMaxTableFilesSize(MemorySize.parse("8gb").getBytes());
>  compactionOptionsFIFO.setAllowCompaction(true);
>  
> the rocksdb version is 
> {code:xml}
> <dependency>
>    <groupId>io.github.myasuka</groupId>
>    <artifactId>frocksdbjni</artifactId>
>    <version>6.10.2-ververica-3.0</version>
> </dependency>
> {code}
> I think the problem is caused by the table properties being lost in the snapshot. Can 
> anyone suggest how I can avoid this problem?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] xiaoHoly commented on pull request #14531: [FLINK-20777][Connector][Kafka] Property "partition.discovery.interval.ms" shoule be enabled by default for unbounded mode, and disable for

2021-03-02 Thread GitBox


xiaoHoly commented on pull request #14531:
URL: https://github.com/apache/flink/pull/14531#issuecomment-789396324


   Hi @becketqin, is this PR still under review?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (FLINK-21573) Support expression reuse in codegen

2021-03-02 Thread Benchao Li (Jira)
Benchao Li created FLINK-21573:
--

 Summary: Support expression reuse in codegen
 Key: FLINK-21573
 URL: https://issues.apache.org/jira/browse/FLINK-21573
 Project: Flink
  Issue Type: Improvement
  Components: Table SQL / Planner
Reporter: Benchao Li


Currently there is no expression reuse in codegen, and this may result in more 
CPU overhead in some cases. E.g.
{code:java}
SELECT my_map['key1'] as key1, my_map['key2'] as key2, my_map['key3'] as key3
FROM (
  SELECT dump_json_to_map(col1) as my_map
  FROM T
)
{code}
`dump_json_to_map` will be called 3 times.
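
For illustration, a hand-written Java sketch of the difference (hypothetical 
shapes, not actual planner codegen output):

{code:java}
import java.util.Map;
import java.util.function.Function;

public class ExpressionReuseSketch {

    // Shape of the generated projection today: the common sub-expression
    // dump_json_to_map(col1) is evaluated once per output field, i.e. three times.
    static String withoutReuse(String col1, Function<String, Map<String, String>> dumpJsonToMap) {
        String key1 = dumpJsonToMap.apply(col1).get("key1");
        String key2 = dumpJsonToMap.apply(col1).get("key2");
        String key3 = dumpJsonToMap.apply(col1).get("key3");
        return key1 + "," + key2 + "," + key3;
    }

    // With expression reuse, the sub-expression is evaluated once and the
    // resulting term is shared by all three accesses.
    static String withReuse(String col1, Function<String, Map<String, String>> dumpJsonToMap) {
        Map<String, String> myMap = dumpJsonToMap.apply(col1);
        return myMap.get("key1") + "," + myMap.get("key2") + "," + myMap.get("key3");
    }
}
{code}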



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (FLINK-21298) Support 'USE MODULES' syntax

2021-03-02 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21298?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu closed FLINK-21298.
---
Resolution: Fixed

Fixed in master: 57decce1e52127b800226daf0a4706496994d9bb

> Support 'USE MODULES' syntax
> 
>
> Key: FLINK-21298
> URL: https://issues.apache.org/jira/browse/FLINK-21298
> Project: Flink
>  Issue Type: Sub-task
>  Components: Documentation, Table SQL / API
>Affects Versions: 1.13.0
>Reporter: Jane Chan
>Assignee: Jane Chan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.13.0
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] wuchong merged pull request #15005: [FLINK-21298][table] Support 'USE MODULES' syntax both in SQL parser, TableEnvironment and SQL CLI

2021-03-02 Thread GitBox


wuchong merged pull request #15005:
URL: https://github.com/apache/flink/pull/15005


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Closed] (FLINK-21572) USE DATABASE & USE CATALOG fails with quoted identifiers containing characters to be escaped in Flink SQL client.

2021-03-02 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21572?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu closed FLINK-21572.
---
Resolution: Duplicate

> USE DATABASE & USE CATALOG fails with quoted identifiers containing 
> characters to be escaped in Flink SQL client. 
> --
>
> Key: FLINK-21572
> URL: https://issues.apache.org/jira/browse/FLINK-21572
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Client
>Affects Versions: 1.12.0, 1.12.1
>Reporter: Xiaoguang Sun
>Priority: Minor
> Attachments: image-2021-03-03-10-38-27-341.png, 
> image-2021-03-03-10-38-59-521.png
>
>
> SQL Client fails when the catalog name or database name is quoted and contains 
> characters that must be escaped; for example, the pulsar-flink connector uses 
> `tenant/namespace` as the database name. 
> It was introduced with [PR Flink 
> 18621|https://github.com/apache/flink/pull/12923]. The reason is that SQL 
> statements restored from a parsed SQL operation were not quoted even if the user 
> actually quoted them. It can easily be fixed by escaping the database name and 
> catalog name before using them, like this:
> {code:java}
> // code java
>  public class SqlUseCatalog extends SqlCall {
>  
> @@ -63,6 +65,6 @@ public class SqlUseCatalog extends SqlCall {
>      }
>  
>      public String catalogName() {
> -        return catalogName.getSimple();
> +        return escapeIdentifier(catalogName.getSimple());
>      }
>  }
> @@ -57,7 +59,9 @@ public class SqlUseDatabase extends SqlCall {
>      }
>  
>      public String[] fullDatabaseName() {
> -        return databaseName.names.toArray(new String[0]);
> +        return databaseName.names.stream()
> +                .map(EncodingUtils::escapeIdentifier)
> +                .toArray(String[]::new);
>      }{code}
> !image-2021-03-03-10-38-59-521.png!
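
For illustration, a local stand-in that mimics the {{EncodingUtils.escapeIdentifier}} 
behavior described above (wrap the identifier in backticks, doubling any embedded 
backtick); this is a sketch, not Flink's actual class:

{code:java}
public class IdentifierEscaping {

    // Wrap in backticks and double embedded backticks, so statements
    // rebuilt from a parsed operation stay parseable on the round trip.
    static String escapeIdentifier(String identifier) {
        return "`" + identifier.replace("`", "``") + "`";
    }

    public static void main(String[] args) {
        // the pulsar-flink style database name from the report
        System.out.println("USE " + escapeIdentifier("tenant/namespace"));
        // prints: USE `tenant/namespace`
    }
}
{code}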



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-20977) USE DATABASE & USE CATALOG fails with quoted identifiers containing characters to be escaped in Flink SQL client

2021-03-02 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20977?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu updated FLINK-20977:

Summary: USE DATABASE & USE CATALOG fails with quoted identifiers 
containing characters to be escaped in Flink SQL client  (was: can not use 
`use` command to switch database )

> USE DATABASE & USE CATALOG fails with quoted identifiers containing 
> characters to be escaped in Flink SQL client
> 
>
> Key: FLINK-20977
> URL: https://issues.apache.org/jira/browse/FLINK-20977
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Client
>Affects Versions: 1.12.0
>Reporter: Jun Zhang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.13.0, 1.12.3
>
>
> I have a database whose name is mod. When I use `use mod` to switch to the 
> db, the system throws an exception; even when I surround the name with backticks, 
> it still does not work.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (FLINK-21553) WindowDistinctAggregateITCase#testHopWindow_Cube is unstable

2021-03-02 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu reassigned FLINK-21553:
---

Assignee: Andy

> WindowDistinctAggregateITCase#testHopWindow_Cube is unstable
> 
>
> Key: FLINK-21553
> URL: https://issues.apache.org/jira/browse/FLINK-21553
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Reporter: Jark Wu
>Assignee: Andy
>Priority: Blocker
>  Labels: test-stability
> Fix For: 1.13.0
>
> Attachments: screenshot-1.png
>
>
> See 
> https://dev.azure.com/imjark/Flink/_build/results?buildId=422=logs=d1352042-8a7d-50b6-3946-a85d176b7981=b2322052-d503-5552-81e2-b3a532a1d7e8
>  !screenshot-1.png! 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-21553) WindowDistinctAggregateITCase#testHopWindow_Cube is unstable

2021-03-02 Thread Andy (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-21553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17294240#comment-17294240
 ] 

Andy commented on FLINK-21553:
--

Could you assign the issue to me? I would like to find out the root cause.

> WindowDistinctAggregateITCase#testHopWindow_Cube is unstable
> 
>
> Key: FLINK-21553
> URL: https://issues.apache.org/jira/browse/FLINK-21553
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Reporter: Jark Wu
>Priority: Blocker
>  Labels: test-stability
> Fix For: 1.13.0
>
> Attachments: screenshot-1.png
>
>
> See 
> https://dev.azure.com/imjark/Flink/_build/results?buildId=422=logs=d1352042-8a7d-50b6-3946-a85d176b7981=b2322052-d503-5552-81e2-b3a532a1d7e8
>  !screenshot-1.png! 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #15047: [FLINK-21502][coordination] Reduce frequency of global re-allocate re…

2021-03-02 Thread GitBox


flinkbot edited a comment on pull request #15047:
URL: https://github.com/apache/flink/pull/15047#issuecomment-787603325


   
   ## CI report:
   
   * ec740e36bf9c727139ed108ac44c9aca0f7c6838 UNKNOWN
   * 030b3a45312dddf7ba6e9de8fcfb8448f6af98ec Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13985)
 
   * ab02a8b97b3c96f61a5c7d1c0c747c1b3fcde881 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=14026)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #15018: [FLINK-21460][table api] Use Configuration to create TableEnvironment

2021-03-02 Thread GitBox


flinkbot edited a comment on pull request #15018:
URL: https://github.com/apache/flink/pull/15018#issuecomment-785717193


   
   ## CI report:
   
   * 79dcd94be9865e972c7be940d6e6d4ae35426fd0 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=14007)
 
   * 8da9084ba0c7dad6d6b14252ee8ae2d5879f9070 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=14025)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-21566) Improve error message for "Unsupported casting"

2021-03-02 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-21566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17294239#comment-17294239
 ] 

Jark Wu commented on FLINK-21566:
-

Improving error messages is always good. However, currently the "Unsupported 
casting" exception is thrown during code generation where the code location 
information is lost. 


> Improve error message for "Unsupported casting"
> ---
>
> Key: FLINK-21566
> URL: https://issues.apache.org/jira/browse/FLINK-21566
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / API
>Affects Versions: 1.12.1
>Reporter: Nico Kruber
>Priority: Major
>
> In a situation like from FLINK-21565, neither the error message {{Unsupported 
> casting from TINYINT to INTERVAL SECOND(3)}}, nor the exception trace (see 
> FLINK-21565) gives you a good hint on where the error is, especially if you 
> have many statements with TINYINTs or operations on these.
> The query parser could highlight the location of the error inside the SQL 
> statement that the user provided.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-21565) Support more integer types in TIMESTAMPADD

2021-03-02 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-21565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17294238#comment-17294238
 ] 

Jark Wu commented on FLINK-21565:
-

Sounds good to me.

> Support more integer types in TIMESTAMPADD
> --
>
> Key: FLINK-21565
> URL: https://issues.apache.org/jira/browse/FLINK-21565
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / API
>Affects Versions: 1.12.1
>Reporter: Nico Kruber
>Priority: Major
> Attachments: flights-21565.csv
>
>
> At the moment, {{TIMESTAMPADD}} does not seem to support {{SMALLINT}} or 
> {{TINYINT}} types which should be perfectly suitable for auto-conversion (in 
> contrast to BIGINT or floating numbers where I would expect the user to cast 
> it appropriately).
> With the attached file, executing these lines
> {code}
> CREATE TABLE `flights` (
>   `_YEAR` CHAR(4),
>   `_MONTH` CHAR(2),
>   `_DAY` CHAR(2),
>   `_SCHEDULED_DEPARTURE` CHAR(4),
>   `SCHEDULED_DEPARTURE` AS TO_TIMESTAMP(`_YEAR` || '-' || `_MONTH` || '-' || 
> `_DAY` || ' ' || SUBSTRING(`_SCHEDULED_DEPARTURE` FROM 0 FOR 2) || ':' || 
> SUBSTRING(`_SCHEDULED_DEPARTURE` FROM 3) || ':00'),
>   `_DEPARTURE_TIME` CHAR(4),
>   `DEPARTURE_DELAY` TINYINT,
>   `DEPARTURE_TIME` AS TIMESTAMPADD(MINUTE, `DEPARTURE_DELAY`, 
> TO_TIMESTAMP(`_YEAR` || '-' || `_MONTH` || '-' || `_DAY` || ' ' || 
> SUBSTRING(`_SCHEDULED_DEPARTURE` FROM 0 FOR 2) || ':' || 
> SUBSTRING(`_SCHEDULED_DEPARTURE` FROM 3) || ':00'))
> ) WITH (
>   'connector' = 'filesystem',
>   'path' = 'file:///tmp/kaggle-flight-delay/flights-21565.csv',
>   'format' = 'csv'
> );
> SELECT * FROM flights;
> {code}
> currently fail with the following exception (similarly for {{SMALLINT}}):
> {code}
> org.apache.flink.table.planner.codegen.CodeGenException: Unsupported casting 
> from TINYINT to INTERVAL SECOND(3).
>   at 
> org.apache.flink.table.planner.codegen.calls.ScalarOperatorGens$.numericCasting(ScalarOperatorGens.scala:2352)
>  ~[flink-table-blink_2.12-1.12.1.jar:1.12.1]
>   at 
> org.apache.flink.table.planner.codegen.calls.ScalarOperatorGens$.generateBinaryArithmeticOperator(ScalarOperatorGens.scala:93)
>  ~[flink-table-blink_2.12-1.12.1.jar:1.12.1]
>   at 
> org.apache.flink.table.planner.codegen.ExprCodeGenerator.generateCallExpression(ExprCodeGenerator.scala:590)
>  ~[flink-table-blink_2.12-1.12.1.jar:1.12.1]
>   at 
> org.apache.flink.table.planner.codegen.ExprCodeGenerator.visitCall(ExprCodeGenerator.scala:529)
>  ~[flink-table-blink_2.12-1.12.1.jar:1.12.1]
>   at 
> org.apache.flink.table.planner.codegen.ExprCodeGenerator.visitCall(ExprCodeGenerator.scala:56)
>  ~[flink-table-blink_2.12-1.12.1.jar:1.12.1]
>   at org.apache.calcite.rex.RexCall.accept(RexCall.java:174) 
> ~[flink-table_2.12-1.12.1.jar:1.12.1]
>   at 
> org.apache.flink.table.planner.codegen.ExprCodeGenerator.$anonfun$visitCall$2(ExprCodeGenerator.scala:526)
>  ~[flink-table-blink_2.12-1.12.1.jar:1.12.1]
>   at 
> scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:233) 
> ~[flink-dist_2.12-1.12.1.jar:1.12.1]
>   at 
> scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:58) 
> ~[flink-dist_2.12-1.12.1.jar:1.12.1]
>   at 
> scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:51) 
> ~[flink-dist_2.12-1.12.1.jar:1.12.1]
>   at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47) 
> ~[flink-dist_2.12-1.12.1.jar:1.12.1]
>   at scala.collection.TraversableLike.map(TraversableLike.scala:233) 
> ~[flink-dist_2.12-1.12.1.jar:1.12.1]
>   at scala.collection.TraversableLike.map$(TraversableLike.scala:226) 
> ~[flink-dist_2.12-1.12.1.jar:1.12.1]
>   at scala.collection.AbstractTraversable.map(Traversable.scala:104) 
> ~[flink-dist_2.12-1.12.1.jar:1.12.1]
>   at 
> org.apache.flink.table.planner.codegen.ExprCodeGenerator.visitCall(ExprCodeGenerator.scala:517)
>  ~[flink-table-blink_2.12-1.12.1.jar:1.12.1]
>   at 
> org.apache.flink.table.planner.codegen.ExprCodeGenerator.visitCall(ExprCodeGenerator.scala:56)
>  ~[flink-table-blink_2.12-1.12.1.jar:1.12.1]
>   at org.apache.calcite.rex.RexCall.accept(RexCall.java:174) 
> ~[flink-table_2.12-1.12.1.jar:1.12.1]
>   at 
> org.apache.flink.table.planner.codegen.ExprCodeGenerator.$anonfun$visitCall$2(ExprCodeGenerator.scala:526)
>  ~[flink-table-blink_2.12-1.12.1.jar:1.12.1]
>   at 
> scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:233) 
> ~[flink-dist_2.12-1.12.1.jar:1.12.1]
>   at 
> scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:58) 
> ~[flink-dist_2.12-1.12.1.jar:1.12.1]
>   at 
> scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:51) 
> ~[flink-dist_2.12-1.12.1.jar:1.12.1]
>   at 

[jira] [Commented] (FLINK-21563) Support using computed columns when defining new computed columns

2021-03-02 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-21563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17294237#comment-17294237
 ] 

Jark Wu commented on FLINK-21563:
-

Not sure how the SQL standard and other database vendors handle this. I just 
checked MySQL, and it supports this. 

{code}
mysql> CREATE TABLE triangle (
->   sidea DOUBLE,
->   sideb DOUBLE,
->   sidec DOUBLE AS (SQRT(sidea * sidea + sideb * sideb)),
->   sided DOUBLE AS (sidec+1)
-> );
Query OK, 0 rows affected (0.01 sec)

mysql> INSERT INTO triangle (sidea, sideb) VALUES(1,1),(3,4),(6,8);
Query OK, 3 rows affected (0.01 sec)
Records: 3  Duplicates: 0  Warnings: 0

mysql> select * from triangle;
+-------+-------+--------------------+-------------------+
| sidea | sideb | sidec              | sided             |
+-------+-------+--------------------+-------------------+
|     1 |     1 | 1.4142135623730951 | 2.414213562373095 |
|     3 |     4 |                  5 |                 6 |
|     6 |     8 |                 10 |                11 |
+-------+-------+--------------------+-------------------+
3 rows in set (0.00 sec)

{code}

> Support using computed columns when defining new computed columns
> -
>
> Key: FLINK-21563
> URL: https://issues.apache.org/jira/browse/FLINK-21563
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / API
>Affects Versions: 1.11.3
>Reporter: Nico Kruber
>Priority: Major
> Attachments: flights-21563.csv
>
>
> To avoid code duplications, it would be nice to be able to use computed 
> columns in later computations of new computed columns, e.g.
> {code}
> CREATE TABLE `flights` (
>   `_YEAR` CHAR(4),
>   `_MONTH` CHAR(2),
>   `_DAY` CHAR(2),
>   `_SCHEDULED_DEPARTURE` CHAR(4),
>   `SCHEDULED_DEPARTURE` AS TO_TIMESTAMP(`_YEAR` || '-' || `_MONTH` || '-' || 
> `_DAY` || ' ' || SUBSTRING(`_SCHEDULED_DEPARTURE` FROM 0 FOR 2) || ':' || 
> SUBSTRING(`_SCHEDULED_DEPARTURE` FROM 3) || ':00'),
>   `_DEPARTURE_TIME` CHAR(4),
>   `DEPARTURE_DELAY` SMALLINT,
>   `DEPARTURE_TIME` AS TIMESTAMPADD(MINUTE, CAST(`DEPARTURE_DELAY` AS INT), 
> SCHEDULED_DEPARTURE)
> )...
> {code}
> Otherwise, a user would have to repeat these calculations over and over again 
> which is not that maintainable.
> Currently, for a minimal working example with the attached input file, it 
> would look like this, e.g. in the SQL CLI:
> {code}
> CREATE TABLE `flights` (
>   `_YEAR` CHAR(4),
>   `_MONTH` CHAR(2),
>   `_DAY` CHAR(2),
>   `_SCHEDULED_DEPARTURE` CHAR(4),
>   `SCHEDULED_DEPARTURE` AS TO_TIMESTAMP(`_YEAR` || '-' || `_MONTH` || '-' || 
> `_DAY` || ' ' || SUBSTRING(`_SCHEDULED_DEPARTURE` FROM 0 FOR 2) || ':' || 
> SUBSTRING(`_SCHEDULED_DEPARTURE` FROM 3) || ':00'),
>   `_DEPARTURE_TIME` CHAR(4),
>   `DEPARTURE_DELAY` SMALLINT,
>   `DEPARTURE_TIME` AS TIMESTAMPADD(MINUTE, CAST(`DEPARTURE_DELAY` AS INT), 
> TO_TIMESTAMP(`_YEAR` || '-' || `_MONTH` || '-' || `_DAY` || ' ' || 
> SUBSTRING(`_SCHEDULED_DEPARTURE` FROM 0 FOR 2) || ':' || 
> SUBSTRING(`_SCHEDULED_DEPARTURE` FROM 3) || ':00'))
> ) WITH (
>   'connector' = 'filesystem',
>   'path' = 'file:///tmp/kaggle-flight-delay/flights-21563.csv',
>   'format' = 'csv'
> );
> SELECT * FROM flights;
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-21543) when using FIFO compaction, I found sst being deleted on the first checkpoint

2021-03-02 Thread Yun Tang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-21543?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17294236#comment-17294236
 ] 

Yun Tang commented on FLINK-21543:
--

[~zhoujira86] First of all, do not use any third-party, unofficial RocksDB jar 
package; we never promised that it would work well.
Secondly, as FIFO compaction can lose data quietly, we have never tried 
it in Flink and lack the kind of experience you are asking about here.

> when using FIFO compaction, I found sst being deleted on the first checkpoint
> -
>
> Key: FLINK-21543
> URL: https://issues.apache.org/jira/browse/FLINK-21543
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / State Backends
>Reporter: xiaogang zhou
>Priority: Major
>
> 2021/03/01-18:51:01.202049 7f59042fc700 (Original Log Time 
> 2021/03/01-18:51:01.200883) [/compaction/compaction_picker_fifo.cc:107] 
> [_timer_state/processing_user-timers] FIFO compaction: picking file 1710 with 
> creation time 0 for deletion
>  
> the configuration is like 
> currentOptions.setCompactionStyle(getCompactionStyle());
>  currentOptions.setLevel0FileNumCompactionTrigger(8);
>  // 
> currentOptions.setMaxTableFilesSizeFIFO(MemorySize.parse("2gb").getBytes());
>  CompactionOptionsFIFO compactionOptionsFIFO = new CompactionOptionsFIFO();
>  
> compactionOptionsFIFO.setMaxTableFilesSize(MemorySize.parse("8gb").getBytes());
>  compactionOptionsFIFO.setAllowCompaction(true);
>  
> the rocksdb version is 
> {code:xml}
> <dependency>
>    <groupId>io.github.myasuka</groupId>
>    <artifactId>frocksdbjni</artifactId>
>    <version>6.10.2-ververica-3.0</version>
> </dependency>
> {code}
> I think the problem is caused by the table properties being lost in the snapshot. Can 
> anyone suggest how I can avoid this problem?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

