[GitHub] [flink] flinkbot commented on pull request #15249: [FLINK-21794][metrics] Support retrieving slot details via rest api

2021-03-16 Thread GitBox


flinkbot commented on pull request #15249:
URL: https://github.com/apache/flink/pull/15249#issuecomment-800816109


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 2dc50c51f1d32acaa043173e508a9a3090e2fe03 (Wed Mar 17 
05:53:20 UTC 2021)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
   ## Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-19938) Implement shuffle data read scheduling for sort-merge blocking shuffle

2021-03-16 Thread Yingjie Cao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17303119#comment-17303119
 ] 

Yingjie Cao commented on FLINK-19938:
-

[~dahaishuantuoba] I will update the PR this week; it will be merged into 1.13. 
Since the PR is still under review and parts of the implementation may change 
before it is merged, you can also try it out after the merge.

> Implement shuffle data read scheduling for sort-merge blocking shuffle
> --
>
> Key: FLINK-19938
> URL: https://issues.apache.org/jira/browse/FLINK-19938
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Network
>Reporter: Yingjie Cao
>Assignee: Yingjie Cao
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.13.0
>
>
> As described in 
> [https://cwiki.apache.org/confluence/display/FLINK/FLIP-148%3A+Introduce+Sort-Merge+Based+Blocking+Shuffle+to+Flink.]
>  shuffle IO scheduling is important for performance. We'd like to introduce 
> it to sort-merge shuffle first.
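The issue above is about scheduling disk reads. As a toy illustration of the idea (names and structure are my own, not Flink's actual implementation), one can queue shuffle read requests and serve each round in ascending file offset order, so the disk sees near-sequential access instead of random seeks across reducers:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.PriorityQueue;

/**
 * Conceptual sketch of offset-ordered IO scheduling for a sort-merge
 * blocking shuffle. NOT Flink's implementation; all names are illustrative.
 */
public class OffsetOrderedReadScheduler {

    /** A pending read of {@code length} bytes at {@code offset} in the spilled file. */
    public static final class ReadRequest {
        final long offset;
        final int length;

        public ReadRequest(long offset, int length) {
            this.offset = offset;
            this.length = length;
        }
    }

    // Requests are polled in ascending offset order, approximating one
    // sequential sweep over the file per scheduling round.
    private final PriorityQueue<ReadRequest> pending =
            new PriorityQueue<>(Comparator.comparingLong((ReadRequest r) -> r.offset));

    public void submit(ReadRequest request) {
        pending.add(request);
    }

    /** Drains all pending requests in the order they would hit the disk. */
    public List<Long> scheduleRound() {
        List<Long> servedOffsets = new ArrayList<>();
        while (!pending.isEmpty()) {
            servedOffsets.add(pending.poll().offset);
        }
        return servedOffsets;
    }
}
```

Requests arriving in the order 300, 100, 200 would be served as 100, 200, 300 — one forward pass over the file rather than three seeks.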



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] xintongsong commented on a change in pull request #15246: [FLINK-21728] Do not release segments in MemoryManager#release(Collection) if they have been released

2021-03-16 Thread GitBox


xintongsong commented on a change in pull request #15246:
URL: https://github.com/apache/flink/pull/15246#discussion_r595662005



##
File path: 
flink-core/src/main/java/org/apache/flink/core/memory/HybridMemorySegment.java
##
@@ -131,6 +131,9 @@
 
 @Override
 public void free() {
+if (isFreed()) {
+throw new IllegalStateException("HybridMemorySegment can be freed 
only once!");
+}

Review comment:
   I'd suggest scoping this change out of this PR.
   
   I'm afraid there are currently other causes that can lead to a segment being 
freed multiple times. That's also what the CI failures suggest.
   
   As discussed 
[here](https://issues.apache.org/jira/browse/FLINK-21419?focusedCommentId=17301350&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-17301350),
 we are planning to enable this check-and-fail only for CI and gradually hunt 
down all the misuse cases.
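The "check-and-fail only for CI" plan from the review thread can be sketched in plain Java as follows. This is NOT Flink's `HybridMemorySegment`; the flag name and structure are illustrative assumptions:

```java
/**
 * Simplified sketch of a free() guard that is strict only when a flag
 * (e.g. set for CI runs) is enabled. Illustrative; not Flink code.
 */
public class GuardedSegment {

    // Hypothetical switch: strict double-free detection is opted into for CI
    // runs via a system property, so production jobs keep the lenient behavior
    // while remaining misuse cases are hunted down.
    static final boolean STRICT_FREE_CHECK = Boolean.getBoolean("ci.strictFreeCheck");

    private boolean freed;

    public boolean isFreed() {
        return freed;
    }

    public void free() {
        if (freed) {
            if (STRICT_FREE_CHECK) {
                // CI mode: fail loudly so the offending call site is found.
                throw new IllegalStateException("Segment can be freed only once!");
            }
            return; // lenient mode: a second free() is tolerated for now
        }
        freed = true;
    }
}
```

With the property unset, a double `free()` is a no-op; with `-Dci.strictFreeCheck=true`, the second call throws and the CI failure points at the misuse.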









[GitHub] [flink] wuchong commented on pull request #15195: [FLINK-19609][table-planner-blink] Support streaming window join in planner

2021-03-16 Thread GitBox


wuchong commented on pull request #15195:
URL: https://github.com/apache/flink/pull/15195#issuecomment-800812750


   I appended a commit to improve the exception message and also rebased the 
branch. If there are no objections, I will merge it once the build passes. 







[GitHub] [flink] wuchong commented on a change in pull request #15195: [FLINK-19609][table-planner-blink] Support streaming window join in planner

2021-03-16 Thread GitBox


wuchong commented on a change in pull request #15195:
URL: https://github.com/apache/flink/pull/15195#discussion_r595727043



##
File path: 
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/plan/utils/WindowJoinUtil.scala
##
@@ -182,26 +183,65 @@ object WindowJoinUtil {
 }
 
 // Validate join
+def getLeftFieldNames() = join.getLeft.getRowType.getFieldNames.toList
+
+def getRightFieldNames() = join.getRight.getRowType.getFieldNames.toList
+
 if (windowStartEqualityLeftKeys.nonEmpty && 
windowEndEqualityLeftKeys.nonEmpty) {
   if (
 leftWindowProperties.getTimeAttributeType != 
rightWindowProperties.getTimeAttributeType) {
+
+def timeAttributeTypeStr(isRowTime: Boolean): String = {
+  if (isRowTime) "ROWTIME" else "PROCTIME"

Review comment:
   We can just print the logical type: we will soon support TIMESTAMP_LTZ as a 
time attribute, so the logical types may be different. 

##
File path: 
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/plan/utils/WindowJoinUtil.scala
##
@@ -182,26 +183,65 @@ object WindowJoinUtil {
 }
 
 // Validate join
+def getLeftFieldNames() = join.getLeft.getRowType.getFieldNames.toList
+
+def getRightFieldNames() = join.getRight.getRowType.getFieldNames.toList
+
 if (windowStartEqualityLeftKeys.nonEmpty && 
windowEndEqualityLeftKeys.nonEmpty) {
   if (
 leftWindowProperties.getTimeAttributeType != 
rightWindowProperties.getTimeAttributeType) {
+
+def timeAttributeTypeStr(isRowTime: Boolean): String = {
+  if (isRowTime) "ROWTIME" else "PROCTIME"
+}
+
 throw new TableException(
   "Currently, window join doesn't support different time attribute 
type of left and " +
 "right inputs.\n" +
-s"The left time attribute type is 
${leftWindowProperties.getTimeAttributeType}.\n" +
-s"The right time attribute type is 
${rightWindowProperties.getTimeAttributeType}.")
+s"The left time attribute type is " +
+s"${timeAttributeTypeStr(leftWindowProperties.isRowtime)}.\n" +
+s"The right time attribute type is " +
+s"${timeAttributeTypeStr(rightWindowProperties.isRowtime)}.")
   } else if (leftWindowProperties.getWindowSpec != 
rightWindowProperties.getWindowSpec) {
+
+def windowSpecToStr(
+inputFieldNames: Seq[String],
+windowStartIdx: Int,
+windowEndIdx: Int,
+windowSpec: WindowSpec): String = {
+  val windowing = s"win_start=[${inputFieldNames(windowStartIdx)}]" +
+s", win_end=[${inputFieldNames(windowEndIdx)}]"
+  windowSpec.toSummaryString(windowing)

Review comment:
   WindowSpec doesn't contain window_start and window_end column 
information, so I think we don't need to print them. 

##
File path: 
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/plan/utils/WindowJoinUtil.scala
##
@@ -182,26 +183,65 @@ object WindowJoinUtil {
 }
 
 // Validate join
+def getLeftFieldNames() = join.getLeft.getRowType.getFieldNames.toList
+
+def getRightFieldNames() = join.getRight.getRowType.getFieldNames.toList

Review comment:
   A local variable is enough?









[jira] [Updated] (FLINK-21794) Support retrieving slot details via rest api

2021-03-16 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-21794:
---
Labels: pull-request-available  (was: )

> Support retrieving slot details via rest api 
> -
>
> Key: FLINK-21794
> URL: https://issues.apache.org/jira/browse/FLINK-21794
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / REST
>Reporter: Xintong Song
>Assignee: Yangze Guo
>Priority: Major
>  Labels: pull-request-available
>
> It would be helpful to allow retrieving detailed information about slots via the 
> rest api:
>  * JobID that the slot is assigned to
>  * Slot resources (for dynamic slot allocation)
> Such information should be displayed on the web UI once fine-grained resource 
> management is enabled in the future.
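To make the request concrete, a response of roughly this shape would cover the two points in the issue. The field names and values below are hypothetical, not the schema that was eventually merged:

```json
{
  "slots": [
    {
      "jobId": "5e20cb6b0f661408fe4bdf547b8f79f8",
      "resource": {
        "cpuCores": 1.0,
        "taskHeapMemory": "128 mb",
        "taskOffHeapMemory": "0 bytes",
        "managedMemory": "64 mb",
        "networkMemory": "32 mb"
      }
    }
  ]
}
```

With dynamic slot allocation, slots on one TaskManager may carry different resource profiles, which is why a per-slot `resource` object rather than a single slot count is needed.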





[GitHub] [flink] KarmaGYZ opened a new pull request #15249: [FLINK-21794][metrics] Support retrieving slot details via rest api

2021-03-16 Thread GitBox


KarmaGYZ opened a new pull request #15249:
URL: https://github.com/apache/flink/pull/15249


   
   
   ## What is the purpose of the change
   
   *(For example: This pull request makes task deployment go through the blob 
server, rather than through RPC. That way we avoid re-transferring them on each 
deployment (during recovery).)*
   
   
   ## Brief change log
   
   *(for example:)*
 - *The TaskInfo is stored in the blob store on job creation time as a 
persistent artifact*
 - *Deployments RPC transmits only the blob storage reference*
 - *TaskManagers retrieve the TaskInfo from the blob cache*
   
   
   ## Verifying this change
   
   *(Please pick either of the following options)*
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   *(or)*
   
   This change is already covered by existing tests, such as *(please describe 
tests)*.
   
   *(or)*
   
   This change added tests and can be verified as follows:
   
   *(example:)*
 - *Added integration tests for end-to-end deployment with large payloads 
(100MB)*
 - *Extended integration test for recovery after master (JobManager) 
failure*
 - *Added test that validates that TaskInfo is transferred only once across 
recoveries*
 - *Manually verified the change by running a 4 node cluster with 2 
JobManagers and 4 TaskManagers, a stateful streaming program, and killing one 
JobManager and two TaskManagers during the execution, verifying that recovery 
happens correctly.*
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / no)
 - The serializers: (yes / no / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / no / 
don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (yes / no / don't 
know)
 - The S3 file system connector: (yes / no / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / no)
 - If yes, how is the feature documented? (not applicable / docs / JavaDocs 
/ not documented)
   







[jira] [Commented] (FLINK-21838) Retranslate "JDBC SQL Connector" page into Chinese

2021-03-16 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-21838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17303101#comment-17303101
 ] 

Jark Wu commented on FLINK-21838:
-

I think we can copy the content translated by FLINK-18383.

> Retranslate "JDBC SQL Connector" page into Chinese
> --
>
> Key: FLINK-21838
> URL: https://issues.apache.org/jira/browse/FLINK-21838
> Project: Flink
>  Issue Type: Task
>  Components: chinese-translation, Documentation, Table SQL / Ecosystem
>Reporter: jjiey
>Priority: Major
>
> The page url is 
> [https://ci.apache.org/projects/flink/flink-docs-master/zh/dev/table/connectors/jdbc.html]
> The markdown file is located in 
> 'docs/content.zh/docs/connectors/table/jdbc.md' now.
>  
> The doc is still in English on the master branch after being translated by 
> FLINK-18383
>  
> I think it may be caused by [Migrate Flink docs from Jekyll to 
> Hugo|https://github.com/apache/flink/pull/14903]
> And you can see 
> [5b42f7100bb6501cd83e8acbf879fbd25661ed4a|https://github.com/apache/flink/commit/5b42f7100bb6501cd83e8acbf879fbd25661ed4a]:
> 1) delete 'docs/dev/table/connectors/jdbc.zh.md'
> 2) add 'docs/content.zh/docs/connectors/table/jdbc.md'





[jira] [Commented] (FLINK-21838) Retranslate "JDBC SQL Connector" page into Chinese

2021-03-16 Thread jjiey (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-21838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17303103#comment-17303103
 ] 

jjiey commented on FLINK-21838:
---

cc [~jark].

I am willing to do it. Could you assign it to me? Thank you.






[jira] [Assigned] (FLINK-21838) Retranslate "JDBC SQL Connector" page into Chinese

2021-03-16 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu reassigned FLINK-21838:
---

Assignee: jjiey






[jira] [Updated] (FLINK-21838) Retranslate "JDBC SQL Connector" page into Chinese

2021-03-16 Thread jjiey (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jjiey updated FLINK-21838:
--
Description: 
The page url is 
[https://ci.apache.org/projects/flink/flink-docs-master/zh/dev/table/connectors/jdbc.html|https://ci.apache.org/projects/flink/flink-docs-master/zh/dev/table/connectors/jdbc.html.]

The markdown file is located in 'docs/content.zh/docs/connectors/table/jdbc.md' 
now.

 

The doc is still in English on the master branch after being translated by 

FLINK-18383

 

I think it may be caused by [Migrate Flink docs from Jekyll to 
Hugo|https://github.com/apache/flink/pull/14903]

And you can see 
[5b42f7100bb6501cd83e8acbf879fbd25661ed4a|https://github.com/apache/flink/commit/5b42f7100bb6501cd83e8acbf879fbd25661ed4a]:

1) delete 'docs/dev/table/connectors/jdbc.zh.md'

2) add 'docs/content.zh/docs/connectors/table/jdbc.md'

  was:
The page url is 
[https://ci.apache.org/projects/flink/flink-docs-master/zh/dev/table/connectors/jdbc.html.]

The markdown file is located in 'docs/content.zh/docs/connectors/table/jdbc.md' 
now.

 

The doc is still in English on the master branch after being translated by 

FLINK-18383

 

I think it may be caused by [Migrate Flink docs from Jekyll to 
Hugo|https://github.com/apache/flink/pull/14903]

You can see 
[5b42f7100bb6501cd83e8acbf879fbd25661ed4a|https://github.com/apache/flink/commit/5b42f7100bb6501cd83e8acbf879fbd25661ed4a]:

1) delete 'docs/dev/table/connectors/jdbc.zh.md'

2) add 'docs/content.zh/docs/connectors/table/jdbc.md'







[jira] [Updated] (FLINK-21838) Retranslate "JDBC SQL Connector" page into Chinese

2021-03-16 Thread jjiey (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jjiey updated FLINK-21838:
--
Description: 
The page url is 
[https://ci.apache.org/projects/flink/flink-docs-master/zh/dev/table/connectors/jdbc.html.]

The markdown file is located in 'docs/content.zh/docs/connectors/table/jdbc.md' 
now.

 

The doc is still in English on the master branch after being translated by 

FLINK-18383

 

I think it may be caused by [Migrate Flink docs from Jekyll to 
Hugo|https://github.com/apache/flink/pull/14903]

 

 

You can see 
[5b42f7100bb6501cd83e8acbf879fbd25661ed4a|https://github.com/apache/flink/commit/5b42f7100bb6501cd83e8acbf879fbd25661ed4a]:

1) delete 'docs/dev/table/connectors/jdbc.zh.md'

2) add 'docs/content.zh/docs/connectors/table/jdbc.md'

  was:
The page url is 
[https://ci.apache.org/projects/flink/flink-docs-master/zh/dev/table/connectors/jdbc.html.]

The markdown file is located in 'docs/content.zh/docs/connectors/table/jdbc.md' 
now.

 

The doc is still in English on the master branch after being translated by 

FLINK-18383

 

I think it may be caused by [Migrate Flink docs from Jekyll to 
Hugo|[https://github.com/apache/flink/pull/14903].

You can see 
[5b42f7100bb6501cd83e8acbf879fbd25661ed4a|[https://github.com/apache/flink/commit/5b42f7100bb6501cd83e8acbf879fbd25661ed4a]:

1) delete 'docs/dev/table/connectors/jdbc.zh.md'

2) add 'docs/content.zh/docs/connectors/table/jdbc.md'







[jira] [Updated] (FLINK-21838) Retranslate "JDBC SQL Connector" page into Chinese

2021-03-16 Thread jjiey (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jjiey updated FLINK-21838:
--
Description: 
The page url is 
[https://ci.apache.org/projects/flink/flink-docs-master/zh/dev/table/connectors/jdbc.html.]

The markdown file is located in 'docs/content.zh/docs/connectors/table/jdbc.md' 
now.

 

The doc is still in English on the master branch after being translated by 

FLINK-18383

 

I think it may be caused by [Migrate Flink docs from Jekyll to 
Hugo|https://github.com/apache/flink/pull/14903]

You can see 
[5b42f7100bb6501cd83e8acbf879fbd25661ed4a|https://github.com/apache/flink/commit/5b42f7100bb6501cd83e8acbf879fbd25661ed4a]:

1) delete 'docs/dev/table/connectors/jdbc.zh.md'

2) add 'docs/content.zh/docs/connectors/table/jdbc.md'

  was:
The page url is 
[https://ci.apache.org/projects/flink/flink-docs-master/zh/dev/table/connectors/jdbc.html.]

The markdown file is located in 'docs/content.zh/docs/connectors/table/jdbc.md' 
now.

 

The doc is still in English on the master branch after being translated by 

FLINK-18383

 

I think it may be caused by [Migrate Flink docs from Jekyll to 
Hugo|https://github.com/apache/flink/pull/14903]

 

 

You can see 
[5b42f7100bb6501cd83e8acbf879fbd25661ed4a|https://github.com/apache/flink/commit/5b42f7100bb6501cd83e8acbf879fbd25661ed4a]:

1) delete 'docs/dev/table/connectors/jdbc.zh.md'

2) add 'docs/content.zh/docs/connectors/table/jdbc.md'







[jira] [Created] (FLINK-21839) SinkFunction snapshotState don't snapshot all data when trigger a stop-with-drain savepoint

2021-03-16 Thread Darcy Lin (Jira)
Darcy Lin created FLINK-21839:
-

 Summary: SinkFunction snapshotState don't snapshot all data when 
trigger a stop-with-drain savepoint
 Key: FLINK-21839
 URL: https://issues.apache.org/jira/browse/FLINK-21839
 Project: Flink
  Issue Type: Bug
  Components: Runtime / Checkpointing
Affects Versions: 1.12.2
Reporter: Darcy Lin
 Attachments: TestSink.java

I discovered this problem while developing Flink code: my custom sink does not 
send all of the data produced by an event-time window when a stop-with-drain 
savepoint is triggered.

TestSink.java is an example showing that SinkFunction#invoke() continues to run 
after snapshotState() has executed when a stop-with-drain savepoint is triggered 
via the REST API.
{code:java}
// TestSink.java log
sink open
invoke: 0
invoke: 1
invoke: 2
invoke: 3
invoke: 4
invoke: 5
invoke: 6
invoke: 7
invoke: 8
invoke: 9
...
invoke: 425
invoke: 426
invoke: 427
snapshotState
invoke: 428 // It should be executed before snapshotState.
sink close{code}
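To see why the ordering in the log matters, here is a self-contained model (plain Java, not Flink's actual SinkFunction API) of a sink that buffers records and flushes them in snapshotState(): any record delivered after the snapshot but before close() stays in the buffer and is never part of the savepoint.

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Simplified model of a buffering sink. Illustrative only; it does not use
 * Flink interfaces, it just mirrors the invoke()/snapshotState() call order
 * reported in FLINK-21839.
 */
public class BufferingSinkModel {
    private final List<Integer> buffer = new ArrayList<>();
    private final List<Integer> flushedToExternalSystem = new ArrayList<>();

    /** Models SinkFunction#invoke: records are first buffered. */
    public void invoke(int record) {
        buffer.add(record);
    }

    /** Models snapshotState: flush everything buffered so far. */
    public void snapshotState() {
        flushedToExternalSystem.addAll(buffer);
        buffer.clear();
    }

    /** Records still buffered at close() were never flushed or snapshotted. */
    public List<Integer> lostOnClose() {
        return new ArrayList<>(buffer);
    }

    public List<Integer> flushed() {
        return flushedToExternalSystem;
    }
}
```

Replaying the log above (invoke 0..427, snapshotState, invoke 428, close) against this model leaves record 428 in the buffer at close time, i.e. lost from the savepoint's point of view.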





[jira] [Updated] (FLINK-21838) Retranslate "JDBC SQL Connector" page into Chinese

2021-03-16 Thread jjiey (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jjiey updated FLINK-21838:
--
Description: 
The page url is 
[https://ci.apache.org/projects/flink/flink-docs-master/zh/dev/table/connectors/jdbc.html.]

The markdown file is located in 'docs/content.zh/docs/connectors/table/jdbc.md' 
now.

 

The doc is still in English on the master branch after being translated by 

FLINK-18383

 

I think it may be caused by [Migrate Flink docs from Jekyll to 
Hugo|[https://github.com/apache/flink/pull/14903].

You can see 
[5b42f7100bb6501cd83e8acbf879fbd25661ed4a|[https://github.com/apache/flink/commit/5b42f7100bb6501cd83e8acbf879fbd25661ed4a]:

1) delete 'docs/dev/table/connectors/jdbc.zh.md'

2) add 'docs/content.zh/docs/connectors/table/jdbc.md'

  was:
The page url is 
[https://ci.apache.org/projects/flink/flink-docs-master/zh/dev/table/connectors/jdbc.html.]

The markdown file is located in 'docs/content.zh/docs/connectors/table/jdbc.md' 
now.

 

The doc is still in English on the master branch after being translated by 

FLINK-18383

 

I think it may be caused by [Migrate Flink docs from Jekyll to 
Hugo|[https://github.com/apache/flink/pull/14903]].

You can see 
[5b42f7100bb6501cd83e8acbf879fbd25661ed4a|[https://github.com/apache/flink/commit/5b42f7100bb6501cd83e8acbf879fbd25661ed4a]]:

1) delete 'docs/dev/table/connectors/jdbc.zh.md'

2) add 'docs/content.zh/docs/connectors/table/jdbc.md'







[jira] [Updated] (FLINK-21838) Retranslate "JDBC SQL Connector" page into Chinese

2021-03-16 Thread jjiey (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jjiey updated FLINK-21838:
--
Description: 
The page url is 
[https://ci.apache.org/projects/flink/flink-docs-master/zh/dev/table/connectors/jdbc.html.]

The markdown file is located in 'docs/content.zh/docs/connectors/table/jdbc.md' 
now.

 

The doc is still in English on the master branch after being translated by 

FLINK-18383

 

I think it may be caused by [Migrate Flink docs from Jekyll to 
Hugo|[https://github.com/apache/flink/pull/14903]].

You can see 
[5b42f7100bb6501cd83e8acbf879fbd25661ed4a|[https://github.com/apache/flink/commit/5b42f7100bb6501cd83e8acbf879fbd25661ed4a]]:

1) delete 'docs/dev/table/connectors/jdbc.zh.md'

2) add 'docs/content.zh/docs/connectors/table/jdbc.md'

  was:
The page url is 
[https://ci.apache.org/projects/flink/flink-docs-master/zh/dev/table/connectors/jdbc.html.]

The markdown file is located in 'docs/content.zh/docs/connectors/table/jdbc.md' 
now.

 

The doc is still in English on the master branch after being translated by 

FLINK-18383

 

I think it may be caused by [Migrate Flink docs from Jekyll to 
Hugo|[https://github.com/apache/flink/pull/14903]].

You can see 
[5b42f7100bb6501cd83e8acbf879fbd25661ed4a|[https://github.com/apache/flink/commit/5b42f7100bb6501cd83e8acbf879fbd25661ed4a]]:

1) delete 'docs/dev/table/connectors/jdbc.zh.md'

2) add 'docs/content.zh/docs/connectors/table/jdbc.md'







[jira] [Updated] (FLINK-21838) Retranslate "JDBC SQL Connector" page into Chinese

2021-03-16 Thread jjiey (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jjiey updated FLINK-21838:
--
Description: 
The page url is 
[https://ci.apache.org/projects/flink/flink-docs-master/zh/dev/table/connectors/jdbc.html.]

The markdown file is located in 'docs/content.zh/docs/connectors/table/jdbc.md' 
now.

 

The doc is still in English on the master branch after being translated by 

FLINK-18383

 

I think it may be caused by [Migrate Flink docs from Jekyll to 
Hugo|[https://github.com/apache/flink/pull/14903]].

You can see 
[5b42f7100bb6501cd83e8acbf879fbd25661ed4a|[https://github.com/apache/flink/commit/5b42f7100bb6501cd83e8acbf879fbd25661ed4a]]:

1) delete 'docs/dev/table/connectors/jdbc.zh.md'

2) add 'docs/content.zh/docs/connectors/table/jdbc.md'

  was:
The page url is 
[https://ci.apache.org/projects/flink/flink-docs-master/zh/dev/table/connectors/jdbc.html.]

The markdown file is located in 'docs/content.zh/docs/connectors/table/jdbc.md' 
now.

 

The doc is still in English on the master branch after being translated by 

FLINK-18383

 

I think it may be caused by [Migrate Flink docs from Jekyll to 
Hugo|[https://github.com/apache/flink/pull/14903]|https://github.com/apache/flink/pull/14903).]

You can see 
[5b42f7100bb6501cd83e8acbf879fbd25661ed4a|[https://github.com/apache/flink/commit/5b42f7100bb6501cd83e8acbf879fbd25661ed4a]|https://github.com/apache/flink/commit/5b42f7100bb6501cd83e8acbf879fbd25661ed4a):]

1) delete 'docs/dev/table/connectors/jdbc.zh.md'

2) add 'docs/content.zh/docs/connectors/table/jdbc.md'


> Retranslate "JDBC SQL Connector" page into Chinese
> --
>
> Key: FLINK-21838
> URL: https://issues.apache.org/jira/browse/FLINK-21838
> Project: Flink
>  Issue Type: Task
>  Components: chinese-translation, Documentation, Table SQL / Ecosystem
>Reporter: jjiey
>Priority: Major
>
> The page url is 
> [https://ci.apache.org/projects/flink/flink-docs-master/zh/dev/table/connectors/jdbc.html]
> The markdown file is located in 
> 'docs/content.zh/docs/connectors/table/jdbc.md' now.
>
> The doc is still in English on the master branch after being translated by 
> FLINK-18383.
>
> I think it may be caused by [Migrate Flink docs from Jekyll to 
> Hugo|https://github.com/apache/flink/pull/14903].
> You can see 
> [5b42f7100bb6501cd83e8acbf879fbd25661ed4a|https://github.com/apache/flink/commit/5b42f7100bb6501cd83e8acbf879fbd25661ed4a]:
> 1) delete 'docs/dev/table/connectors/jdbc.zh.md'
> 2) add 'docs/content.zh/docs/connectors/table/jdbc.md'





[jira] [Updated] (FLINK-21838) Retranslate "JDBC SQL Connector" page into Chinese

2021-03-16 Thread jjiey (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jjiey updated FLINK-21838:
--
Description: 
The page url is 
[https://ci.apache.org/projects/flink/flink-docs-master/zh/dev/table/connectors/jdbc.html]

The markdown file is located in 'docs/content.zh/docs/connectors/table/jdbc.md' 
now.

The doc is still in English on the master branch after being translated by 
FLINK-18383.

I think it may be caused by [Migrate Flink docs from Jekyll to 
Hugo|https://github.com/apache/flink/pull/14903].

You can see 
[5b42f7100bb6501cd83e8acbf879fbd25661ed4a|https://github.com/apache/flink/commit/5b42f7100bb6501cd83e8acbf879fbd25661ed4a]:

1) delete 'docs/dev/table/connectors/jdbc.zh.md'

2) add 'docs/content.zh/docs/connectors/table/jdbc.md'

  was:
The page url is 
[https://ci.apache.org/projects/flink/flink-docs-master/zh/dev/table/connectors/jdbc.html.]

The markdown file is located in 'docs/content.zh/docs/connectors/table/jdbc.md' 
now.

 

The doc is still in English on the master branch after being translated by 

[FLINK-18383|https://issues.apache.org/jira/browse/FLINK-18383]

 

I think it may be caused by [Migrate Flink docs from Jekyll to 
Hugo|[https://github.com/apache/flink/pull/14903].|https://github.com/apache/flink/pull/14903).]

You can see 
[5b42f7100bb6501cd83e8acbf879fbd25661ed4a|[https://github.com/apache/flink/commit/5b42f7100bb6501cd83e8acbf879fbd25661ed4a]:|https://github.com/apache/flink/commit/5b42f7100bb6501cd83e8acbf879fbd25661ed4a):]

1) delete 'docs/dev/table/connectors/jdbc.zh.md'

2) add 'docs/content.zh/docs/connectors/table/jdbc.md'


> Retranslate "JDBC SQL Connector" page into Chinese
> --
>
> Key: FLINK-21838
> URL: https://issues.apache.org/jira/browse/FLINK-21838
> Project: Flink
>  Issue Type: Task
>  Components: chinese-translation, Documentation, Table SQL / Ecosystem
>Reporter: jjiey
>Priority: Major
>
> The page url is 
> [https://ci.apache.org/projects/flink/flink-docs-master/zh/dev/table/connectors/jdbc.html]
> The markdown file is located in 
> 'docs/content.zh/docs/connectors/table/jdbc.md' now.
>
> The doc is still in English on the master branch after being translated by 
> FLINK-18383.
>
> I think it may be caused by [Migrate Flink docs from Jekyll to 
> Hugo|https://github.com/apache/flink/pull/14903].
> You can see 
> [5b42f7100bb6501cd83e8acbf879fbd25661ed4a|https://github.com/apache/flink/commit/5b42f7100bb6501cd83e8acbf879fbd25661ed4a]:
> 1) delete 'docs/dev/table/connectors/jdbc.zh.md'
> 2) add 'docs/content.zh/docs/connectors/table/jdbc.md'





[jira] [Created] (FLINK-21838) Retranslate "JDBC SQL Connector" page into Chinese

2021-03-16 Thread jjiey (Jira)
jjiey created FLINK-21838:
-

 Summary: Retranslate "JDBC SQL Connector" page into Chinese
 Key: FLINK-21838
 URL: https://issues.apache.org/jira/browse/FLINK-21838
 Project: Flink
  Issue Type: Task
  Components: chinese-translation, Documentation, Table SQL / Ecosystem
Reporter: jjiey


The page url is 
[https://ci.apache.org/projects/flink/flink-docs-master/zh/dev/table/connectors/jdbc.html]

The markdown file is located in 'docs/content.zh/docs/connectors/table/jdbc.md' 
now.

The doc is still in English on the master branch after being translated by 
[FLINK-18383|https://issues.apache.org/jira/browse/FLINK-18383].

I think it may be caused by [Migrate Flink docs from Jekyll to 
Hugo|https://github.com/apache/flink/pull/14903].

You can see 
[5b42f7100bb6501cd83e8acbf879fbd25661ed4a|https://github.com/apache/flink/commit/5b42f7100bb6501cd83e8acbf879fbd25661ed4a]:

1) delete 'docs/dev/table/connectors/jdbc.zh.md'

2) add 'docs/content.zh/docs/connectors/table/jdbc.md'





[GitHub] [flink] flinkbot edited a comment on pull request #15109: [FLINK-20955][connectors/hbase] HBase connector using new connector API

2021-03-16 Thread GitBox


flinkbot edited a comment on pull request #15109:
URL: https://github.com/apache/flink/pull/15109#issuecomment-792338377


   
   ## CI report:
   
   * 2ff251bcfcde490f0c60b79e15b005cf24d3b906 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=14848)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #15239: [FLINK-21811][blink-table-planner]Support StreamExecJoin json serialization/deserialization

2021-03-16 Thread GitBox


flinkbot edited a comment on pull request #15239:
URL: https://github.com/apache/flink/pull/15239#issuecomment-800209760


   
   ## CI report:
   
   * 09b034d05b368af2fe07892f6a445156eb33e3e4 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=14862)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #15195: [FLINK-19609][table-planner-blink] Support streaming window join in planner

2021-03-16 Thread GitBox


flinkbot edited a comment on pull request #15195:
URL: https://github.com/apache/flink/pull/15195#issuecomment-798871942


   
   ## CI report:
   
   * 44281b2314b6c5bf6b2cfc3387164b9aa7efcc74 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=14706)
 
   * 693860368d8c271691f65cdd37644d99df7786a5 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=14861)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] zhuzhurk commented on a change in pull request #15088: [FLINK-21328] Optimize the initialization of DefaultExecutionTopology

2021-03-16 Thread GitBox


zhuzhurk commented on a change in pull request #15088:
URL: https://github.com/apache/flink/pull/15088#discussion_r595706191



##
File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/strategy/SchedulingResultPartition.java
##
@@ -44,4 +46,14 @@
  * @return result partition state
  */
 ResultPartitionState getState();
+
+/**
+ * Get the grouped {@link ExecutionVertexID}.

Review comment:
   Seems there is a mistake in the fix. Please take another look.









[GitHub] [flink] zhuzhurk commented on a change in pull request #15088: [FLINK-21328] Optimize the initialization of DefaultExecutionTopology

2021-03-16 Thread GitBox


zhuzhurk commented on a change in pull request #15088:
URL: https://github.com/apache/flink/pull/15088#discussion_r595704981



##
File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/strategy/SchedulingExecutionVertex.java
##
@@ -37,4 +39,11 @@
  * @return state of the execution vertex
  */
 ExecutionState getState();
+
+/**
+ * Get the {@link ConsumedPartitionGroup}s.
+ *
+ * @return list of {@link ConsumedPartitionGroup}s
+ */
+List<ConsumedPartitionGroup> getConsumerPartitionGroups();

Review comment:
   `getConsumerPartitionGroups` -> `getConsumedPartitionGroups `

##
File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/scheduler/strategy/SchedulingResultPartition.java
##
@@ -44,4 +46,14 @@
  * @return result partition state
  */
 ResultPartitionState getState();
+
+/**
+ * Get the grouped {@link ExecutionVertexID}.

Review comment:
   Seems there is a mistake in the fix. 









[jira] [Commented] (FLINK-21836) Introduce RegexOperationConverter

2021-03-16 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-21836?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17303084#comment-17303084
 ] 

Jark Wu commented on FLINK-21836:
-

The class name sounds related to {{QueryOperationConverter}}.

> Introduce RegexOperationConverter
> -
>
> Key: FLINK-21836
> URL: https://issues.apache.org/jira/browse/FLINK-21836
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Affects Versions: 1.13.0
>Reporter: Shengkai Fang
>Priority: Major
>
> The {{RegexOperationConverter}} is responsible for converting a statement to 
> an {{Operation}} via regex. 





[GitHub] [flink] flinkbot edited a comment on pull request #15239: [FLINK-21811][blink-table-planner]Support StreamExecJoin json serialization/deserialization

2021-03-16 Thread GitBox


flinkbot edited a comment on pull request #15239:
URL: https://github.com/apache/flink/pull/15239#issuecomment-800209760


   
   ## CI report:
   
   * ef525b07fd333ab2a29f432684c15c719819ab91 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=14824)
 
   * 09b034d05b368af2fe07892f6a445156eb33e3e4 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #15236: [FLINK-21818][table] Refactor SlicingWindowAggOperatorBuilder to accept serializer instead of LogicalType

2021-03-16 Thread GitBox


flinkbot edited a comment on pull request #15236:
URL: https://github.com/apache/flink/pull/15236#issuecomment-800154343


   
   ## CI report:
   
   * a4b23175ae924c5ea608ecdd5d3c6f3751d1b252 UNKNOWN
   * da23c74b9c1bb5a857a0777de9a9ac4b063122e0 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=14806)
 
   * d2404ab56c12185a521eb2c512df610c01680cf5 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=14856)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #15213: [FLINK-21774][sql-client] Do not display column names when return set is emtpy in SQL Client

2021-03-16 Thread GitBox


flinkbot edited a comment on pull request #15213:
URL: https://github.com/apache/flink/pull/15213#issuecomment-799172628


   
   ## CI report:
   
   * c2cc618f010e55bcaa8157221b4dcaf3f5bd3231 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=14788)
 
   * d236292f98ae9d87cc5f7d50160a830a3f34f30e Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=14859)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] wsry commented on a change in pull request #15192: [FLINK-21777][network] Replace the 4M data writing cache of sort-merge shuffle with writev system call

2021-03-16 Thread GitBox


wsry commented on a change in pull request #15192:
URL: https://github.com/apache/flink/pull/15192#discussion_r595705324



##
File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/PartitionedFileWriter.java
##
@@ -202,26 +190,51 @@ private void flushIndexBuffer() throws IOException {
 }
 
 /**
- * Writes a {@link Buffer} of the given subpartition to the this {@link 
PartitionedFile}. In a
- * data region, all data of the same subpartition must be written together.
+ * Writes a list of {@link Buffer}s to this {@link PartitionedFile}. It 
guarantees that after
+ * the return of this method, the target buffers can be released. In a 
data region, all data of
+ * the same subpartition must be written together.
  *
- * Note: The caller is responsible for recycling the target buffer and 
releasing the failed
+ * Note: The caller is responsible for recycling the target buffers and 
releasing the failed
  * {@link PartitionedFile} if any exception occurs.
  */
-public void writeBuffer(Buffer target, int targetSubpartition) throws 
IOException {
+public void writeBuffers(List<SortBuffer.BufferWithChannel> bufferWithChannels)
+throws IOException {
 checkState(!isFinished, "File writer is already finished.");
 checkState(!isClosed, "File writer is already closed.");
 
-if (targetSubpartition != currentSubpartition) {
-checkState(
-subpartitionBuffers[targetSubpartition] == 0,
-"Must write data of the same channel together.");
-subpartitionOffsets[targetSubpartition] = totalBytesWritten;
-currentSubpartition = targetSubpartition;
+if (bufferWithChannels.isEmpty()) {
+return;
 }
 
-totalBytesWritten += writeToByteChannel(dataFileChannel, target, 
writeDataCache, header);
-++subpartitionBuffers[targetSubpartition];
+long expectedBytes = 0;
+ByteBuffer[] bufferWithHeaders = new ByteBuffer[2 * 
bufferWithChannels.size()];
+
+for (int i = 0; i < bufferWithChannels.size(); i++) {
+SortBuffer.BufferWithChannel bufferWithChannel = 
bufferWithChannels.get(i);
+Buffer buffer = bufferWithChannel.getBuffer();
+int subpartitionIndex = bufferWithChannel.getChannelIndex();
+if (subpartitionIndex != currentSubpartition) {
+checkState(
+subpartitionBuffers[subpartitionIndex] == 0,
+"Must write data of the same channel together.");
+subpartitionOffsets[subpartitionIndex] = totalBytesWritten;
+currentSubpartition = subpartitionIndex;
+}
+
+ByteBuffer header = BufferReaderWriterUtil.allocatedHeaderBuffer();
+BufferReaderWriterUtil.getByteChannelBufferHeader(buffer, header);
+bufferWithHeaders[2 * i] = header;
+bufferWithHeaders[2 * i + 1] = buffer.getNioBufferReadable();
+
+int numBytes = header.remaining() + buffer.readableBytes();
+expectedBytes += numBytes;
+totalBytesWritten += numBytes;
+++subpartitionBuffers[subpartitionIndex];
+}
+
+if (dataFileChannel.write(bufferWithHeaders) < expectedBytes) {

Review comment:
   The documentation of FileChannel.write does not say that it guarantees to 
write all data out. BufferReaderWriterUtil already does the same thing, and 
there are comments there explaining why we do that. I can extract the logic and 
the corresponding comments from BufferReaderWriterUtil into a method and call 
that method directly here.
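The write-fully loop under discussion can be sketched as follows. This is a minimal illustration of the technique, not Flink's `BufferReaderWriterUtil` helper; the class and method names are made up:

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.GatheringByteChannel;

final class WriteUtil {
    private WriteUtil() {}

    // A gathering write may return after writing only part of the data, so we
    // keep calling write() until every buffer in the array is fully drained.
    static void writeFully(GatheringByteChannel channel, ByteBuffer[] buffers)
            throws IOException {
        long remaining = 0;
        for (ByteBuffer buffer : buffers) {
            remaining += buffer.remaining();
        }
        while (remaining > 0) {
            // write() advances each buffer's position by the bytes it consumed.
            remaining -= channel.write(buffers);
        }
    }
}
```

On a blocking FileChannel each call makes progress, so the loop terminates once all header and data buffers are on disk.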









[GitHub] [flink] flinkbot edited a comment on pull request #15195: [FLINK-19609][table-planner-blink] Support streaming window join in planner

2021-03-16 Thread GitBox


flinkbot edited a comment on pull request #15195:
URL: https://github.com/apache/flink/pull/15195#issuecomment-798871942


   
   ## CI report:
   
   * 44281b2314b6c5bf6b2cfc3387164b9aa7efcc74 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=14706)
 
   * 693860368d8c271691f65cdd37644d99df7786a5 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] wuchong commented on a change in pull request #15197: [FLINK-21462][sql client] Use configuration to store the option and value in Sql client

2021-03-16 Thread GitBox


wuchong commented on a change in pull request #15197:
URL: https://github.com/apache/flink/pull/15197#discussion_r595704999



##
File path: 
flink-table/flink-sql-client/src/test/java/org/apache/flink/table/client/cli/CliClientITCase.java
##
@@ -155,6 +155,19 @@ public void testSqlStatements() throws IOException {
 private static final String ERROR_BEGIN = "\u001B[31;1m";
 private static final String ERROR_END = "\u001B[0m";
 
+private static String getInputFromPath(String sqlPath) throws IOException {
+URL url = CliClientITCase.class.getResource("/" + sqlPath);
+String in = IOUtils.toString(url, StandardCharsets.UTF_8);
+// replace the placeholder with specified value if exists
+return StringUtils.replaceEach(
+in,
+new String[] {"$VAR_PIPELINE_JARS", "$VAR_REST_PORT"},
+new String[] {
+udfDependency.toString(),
+
miniClusterResource.getClientConfiguration().get(PORT).toString()

Review comment:
   I think there will be more replacement variables in the future; maintaining 
them in two separate lists is not good. 
   Personally, I prefer to have a private static `Map<String, String> 
REPLACE_VARS` and initialize the map in the `setup` method. 
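A minimal sketch of the suggested map-based approach; the class name, placeholder values, and `substitute` helper are illustrative, not the test's actual code:

```java
import java.util.LinkedHashMap;
import java.util.Map;

final class SqlScriptVars {
    // Single source of truth for placeholder -> value; initialized once,
    // e.g. in a @BeforeClass setup method. Values here are placeholders
    // for the real UDF jar path and the mini-cluster's REST port.
    static final Map<String, String> REPLACE_VARS = new LinkedHashMap<>();

    static {
        REPLACE_VARS.put("$VAR_PIPELINE_JARS", "/path/to/udf.jar");
        REPLACE_VARS.put("$VAR_REST_PORT", "8081");
    }

    // Replaces every registered placeholder in the script content.
    static String substitute(String in) {
        for (Map.Entry<String, String> entry : REPLACE_VARS.entrySet()) {
            in = in.replace(entry.getKey(), entry.getValue());
        }
        return in;
    }
}
```

Adding a new variable then only touches the static initializer, instead of two parallel arrays that must stay index-aligned.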









[jira] [Updated] (FLINK-21837) Support StreamExecIntervalJoin/StreamExecLookupJoin/StreamExecTemporalJoin json ser/de

2021-03-16 Thread Terry Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21837?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Terry Wang updated FLINK-21837:
---
Summary: Support 
StreamExecIntervalJoin/StreamExecLookupJoin/StreamExecTemporalJoin json ser/de  
(was: Support 
StreamExecIntervalJoin/StreamExecLookupJoin/StreamExecTemporalJoin json ser/des)

> Support StreamExecIntervalJoin/StreamExecLookupJoin/StreamExecTemporalJoin 
> json ser/de
> --
>
> Key: FLINK-21837
> URL: https://issues.apache.org/jira/browse/FLINK-21837
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: Terry Wang
>Priority: Major
> Fix For: 1.13.0
>
>
> Support StreamExecIntervalJoin/StreamExecLookupJoin/StreamExecTemporalJoin 
> json ser/des





[jira] [Updated] (FLINK-21833) TemporalRowTimeJoinOperator State Leak Although configure idle.state.retention.time

2021-03-16 Thread lynn1.zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lynn1.zhang updated FLINK-21833:

Description: 
The TemporalRowTimeJoinOperator will lead to unlimited state growth even though 
idle.state.retention.time is configured.

I have found the bug, and fixed it.

!image-2021-03-17-11-06-21-768.png!

  was:
Use TemporalRowTimeJoinOperator feature will lead to unlimited data expansion, 
although configure idle.state.retention.time

I have found the bug, and fixed it.

!image-2021-03-17-11-06-21-768.png!


> TemporalRowTimeJoinOperator State Leak Although configure 
> idle.state.retention.time
> ---
>
> Key: FLINK-21833
> URL: https://issues.apache.org/jira/browse/FLINK-21833
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.12.2
>Reporter: lynn1.zhang
>Priority: Major
>  Labels: pull-request-available
> Attachments: image-2021-03-17-11-06-21-768.png
>
>
> The TemporalRowTimeJoinOperator will lead to unlimited state growth even 
> though idle.state.retention.time is configured.
> I have found the bug, and fixed it.
> !image-2021-03-17-11-06-21-768.png!





[jira] [Created] (FLINK-21837) Support StreamExecIntervalJoin/StreamExecLookupJoin/StreamExecTemporalJoin json ser/des

2021-03-16 Thread Terry Wang (Jira)
Terry Wang created FLINK-21837:
--

 Summary: Support 
StreamExecIntervalJoin/StreamExecLookupJoin/StreamExecTemporalJoin json ser/des
 Key: FLINK-21837
 URL: https://issues.apache.org/jira/browse/FLINK-21837
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / Planner
Reporter: Terry Wang
 Fix For: 1.13.0


Support StreamExecIntervalJoin/StreamExecLookupJoin/StreamExecTemporalJoin json 
ser/des





[jira] [Updated] (FLINK-21836) Introduce RegexOperationConverter

2021-03-16 Thread Shengkai Fang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21836?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shengkai Fang updated FLINK-21836:
--
Description: The {{RegexOperationConverter}} is responsible to convert 
statement to {{Operation}} by regex.   (was: **The \{{RegexOperationConverter}} 
is responsible to convert statement to \{{Operation}} by regex. )

> Introduce RegexOperationConverter
> -
>
> Key: FLINK-21836
> URL: https://issues.apache.org/jira/browse/FLINK-21836
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Affects Versions: 1.13.0
>Reporter: Shengkai Fang
>Priority: Major
>
> The {{RegexOperationConverter}} is responsible for converting a statement to 
> an {{Operation}} via regex. 





[jira] [Created] (FLINK-21836) Introduce RegexOperationConverter

2021-03-16 Thread Shengkai Fang (Jira)
Shengkai Fang created FLINK-21836:
-

 Summary: Introduce RegexOperationConverter
 Key: FLINK-21836
 URL: https://issues.apache.org/jira/browse/FLINK-21836
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / Planner
Affects Versions: 1.13.0
Reporter: Shengkai Fang


The {{RegexOperationConverter}} is responsible for converting a statement to 
an {{Operation}} via regex. 





[GitHub] [flink] wuchong commented on a change in pull request #15197: [FLINK-21462][sql client] Use configuration to store the option and value in Sql client

2021-03-16 Thread GitBox


wuchong commented on a change in pull request #15197:
URL: https://github.com/apache/flink/pull/15197#discussion_r595701924



##
File path: flink-table/flink-sql-client/src/test/resources/sql/set.q
##
@@ -44,6 +44,14 @@ CREATE TABLE hive_table (
 
 # list the configured configuration
 set;
+execution.attached=true
+execution.savepoint.ignore-unclaimed-state=false
+execution.shutdown-on-attached-exit=false
+execution.target=remote
+jobmanager.rpc.address=localhost

Review comment:
   This may also be different on different machines. 









[jira] [Comment Edited] (FLINK-21103) E2e tests time out on azure

2021-03-16 Thread Guowei Ma (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-21103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17303073#comment-17303073
 ] 

Guowei Ma edited comment on FLINK-21103 at 3/17/21, 4:22 AM:
-

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=14845=logs=08866332-78f7-59e4-4f7e-49a56faa3179=7f606211-1454-543c-70ab-c7a028a1ce8c=1366


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=14847=results


was (Author: maguowei):
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=14845=logs=08866332-78f7-59e4-4f7e-49a56faa3179=7f606211-1454-543c-70ab-c7a028a1ce8c=1366

> E2e tests time out on azure
> ---
>
> Key: FLINK-21103
> URL: https://issues.apache.org/jira/browse/FLINK-21103
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / Azure Pipelines, Tests
>Affects Versions: 1.11.3, 1.12.1, 1.13.0
>Reporter: Dawid Wysakowicz
>Priority: Blocker
>  Labels: test-stability
> Fix For: 1.13.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=12377=logs=c88eea3b-64a0-564d-0031-9fdcd7b8abee=ff888d9b-cd34-53cc-d90f-3e446d355529
> {code}
> Creating worker2 ... done
> Jan 22 13:16:17 Waiting for hadoop cluster to come up. We have been trying 
> for 0 seconds, retrying ...
> Jan 22 13:16:22 Waiting for hadoop cluster to come up. We have been trying 
> for 5 seconds, retrying ...
> Jan 22 13:16:27 Waiting for hadoop cluster to come up. We have been trying 
> for 10 seconds, retrying ...
> Jan 22 13:16:32 Waiting for hadoop cluster to come up. We have been trying 
> for 15 seconds, retrying ...
> Jan 22 13:16:37 Waiting for hadoop cluster to come up. We have been trying 
> for 20 seconds, retrying ...
> Jan 22 13:16:43 Waiting for hadoop cluster to come up. We have been trying 
> for 26 seconds, retrying ...
> Jan 22 13:16:48 Waiting for hadoop cluster to come up. We have been trying 
> for 31 seconds, retrying ...
> Jan 22 13:16:53 Waiting for hadoop cluster to come up. We have been trying 
> for 36 seconds, retrying ...
> Jan 22 13:16:58 Waiting for hadoop cluster to come up. We have been trying 
> for 41 seconds, retrying ...
> Jan 22 13:17:03 Waiting for hadoop cluster to come up. We have been trying 
> for 46 seconds, retrying ...
> Jan 22 13:17:08 We only have 0 NodeManagers up. We have been trying for 0 
> seconds, retrying ...
> 21/01/22 13:17:10 INFO client.RMProxy: Connecting to ResourceManager at 
> master.docker-hadoop-cluster-network/172.19.0.3:8032
> 21/01/22 13:17:11 INFO client.AHSProxy: Connecting to Application History 
> server at master.docker-hadoop-cluster-network/172.19.0.3:10200
> Jan 22 13:17:11 We now have 2 NodeManagers up.
> ==
> === WARNING: This E2E Run took already 80% of the allocated time budget of 
> 250 minutes ===
> ==
> ==
> === WARNING: This E2E Run will time out in the next few minutes. Starting to 
> upload the log output ===
> ==
> ##[error]The task has timed out.
> Async Command Start: Upload Artifact
> Uploading 1 files
> File upload succeed.
> Upload '/tmp/_e2e_watchdog.output.0' to file container: 
> '#/11824779/e2e-timeout-logs'
> Associated artifact 140921 with build 12377
> Async Command End: Upload Artifact
> Async Command Start: Upload Artifact
> Uploading 1 files
> File upload succeed.
> Upload '/tmp/_e2e_watchdog.output.1' to file container: 
> '#/11824779/e2e-timeout-logs'
> Associated artifact 140921 with build 12377
> Async Command End: Upload Artifact
> Async Command Start: Upload Artifact
> Uploading 1 files
> File upload succeed.
> Upload '/tmp/_e2e_watchdog.output.2' to file container: 
> '#/11824779/e2e-timeout-logs'
> Associated artifact 140921 with build 12377
> Async Command End: Upload Artifact
> Finishing: Run e2e tests
> {code}





[GitHub] [flink] wsry commented on a change in pull request #15192: [FLINK-21777][network] Replace the 4M data writing cache of sort-merge shuffle with writev system call

2021-03-16 Thread GitBox


wsry commented on a change in pull request #15192:
URL: https://github.com/apache/flink/pull/15192#discussion_r595702663



##
File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/SortMergeResultPartition.java
##
@@ -212,31 +269,46 @@ private void flushCurrentSortBuffer() throws IOException {
 if (currentSortBuffer.hasRemaining()) {
 fileWriter.startNewRegion();
 
+List<BufferWithChannel> toWrite = new ArrayList<>();
+Queue<MemorySegment> segments = getWriteBuffers();
+
 while (currentSortBuffer.hasRemaining()) {
-BufferWithChannel bufferWithChannel =
-currentSortBuffer.copyIntoSegment(writeBuffer);
-Buffer buffer = bufferWithChannel.getBuffer();
-int subpartitionIndex = bufferWithChannel.getChannelIndex();
+if (segments.isEmpty()) {
+fileWriter.writeBuffers(toWrite);
+toWrite.clear();
+segments = getWriteBuffers();
+}
 
-writeCompressedBufferIfPossible(buffer, subpartitionIndex);
+BufferWithChannel bufferWithChannel =
+currentSortBuffer.copyIntoSegment(checkNotNull(segments.poll()));
+toWrite.add(compressBufferIfPossible(bufferWithChannel));
 }
+
+fileWriter.writeBuffers(toWrite);
 }
 
 currentSortBuffer.release();
 }
 
-private void writeCompressedBufferIfPossible(Buffer buffer, int targetSubpartition)
-throws IOException {
-updateStatistics(buffer, targetSubpartition);
+private Queue<MemorySegment> getWriteBuffers() {
+synchronized (lock) {
+checkState(!writeBuffers.isEmpty(), "Task has been canceled.");
+return new ArrayDeque<>(writeBuffers);
+}
+}
 
-try {
-if (canBeCompressed(buffer)) {
-buffer = bufferCompressor.compressToIntermediateBuffer(buffer);
-}
-fileWriter.writeBuffer(buffer, targetSubpartition);
-} finally {
-buffer.recycleBuffer();
+private BufferWithChannel compressBufferIfPossible(BufferWithChannel bufferWithChannel) {
+Buffer buffer = bufferWithChannel.getBuffer();
+int channelIndex = bufferWithChannel.getChannelIndex();
+
+updateStatistics(buffer, channelIndex);
+
+if (!canBeCompressed(buffer)) {
+return bufferWithChannel;
 }
+
+buffer = checkNotNull(bufferCompressor).compressToOriginalBuffer(buffer);

Review comment:
   We have only one IntermediateBuffer in the compressor. After this patch, we may cache multiple data buffers in the result partition, so the single IntermediateBuffer cannot be shared by multiple buffers.
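A minimal, self-contained illustration of the constraint described above (the class and "compression" logic are invented for this sketch, not Flink's actual compressor API): when every compress call returns the same shared intermediate buffer, each call overwrites the previous result, so at most one compressed buffer can be alive at a time.

```java
import java.util.Arrays;

public class SharedCompressBufferSketch {
    // One intermediate buffer shared by all compress calls: each call
    // overwrites the previous result, so callers must not keep more than one
    // "compressed" buffer alive at a time.
    static final byte[] INTERMEDIATE = new byte[16];

    static byte[] compressToIntermediate(byte[] data) {
        // "compression" stand-in: copy the input into the shared buffer
        Arrays.fill(INTERMEDIATE, (byte) 0);
        System.arraycopy(data, 0, INTERMEDIATE, 0, data.length);
        return INTERMEDIATE; // the same array is returned every time
    }

    public static void main(String[] args) {
        byte[] first = compressToIntermediate(new byte[] {1, 2, 3});
        byte[] second = compressToIntermediate(new byte[] {9, 9});
        // Both references point at the same array; the first result is gone.
        if (first != second) throw new AssertionError("expected the shared buffer");
        if (first[2] == 3) throw new AssertionError("first result should be overwritten");
        System.out.println("ok");
    }
}
```

This is why caching multiple data buffers in the result partition forces each cached buffer to own its memory instead of borrowing the compressor's single scratch buffer.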





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (FLINK-21835) RocksDBStateBackendReaderKeyedStateITCase fail

2021-03-16 Thread Guowei Ma (Jira)
Guowei Ma created FLINK-21835:
-

 Summary: RocksDBStateBackendReaderKeyedStateITCase fail
 Key: FLINK-21835
 URL: https://issues.apache.org/jira/browse/FLINK-21835
 Project: Flink
  Issue Type: Bug
  Components: Runtime / State Backends
Affects Versions: 1.12.2
Reporter: Guowei Ma


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=14847=logs=b2f046ab-ae17-5406-acdc-240be7e870e4=93e5ae06-d194-513d-ba8d-150ef6da1d7c=8873


{code:java}

at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
at java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175)
Caused by: akka.pattern.AskTimeoutException: Ask timed out on [Actor[akka://flink/user/rpc/dispatcher_2#-390401339]] after [1 ms]. Message of type [org.apache.flink.runtime.rpc.messages.LocalFencedMessage]. A typical reason for `AskTimeoutException` is that the recipient actor didn't send a reply.
at akka.pattern.PromiseActorRef$$anonfun$2.apply(AskSupport.scala:635)
at akka.pattern.PromiseActorRef$$anonfun$2.apply(AskSupport.scala:635)
at akka.pattern.PromiseActorRef$$anonfun$1.apply$mcV$sp(AskSupport.scala:648)
at akka.actor.Scheduler$$anon$4.run(Scheduler.scala:205)
at scala.concurrent.Future$InternalCallbackExecutor$.unbatchedExecute(Future.scala:601)
at scala.concurrent.BatchingExecutor$class.execute(BatchingExecutor.scala:109)
at scala.concurrent.Future$InternalCallbackExecutor$.execute(Future.scala:599)
at akka.actor.LightArrayRevolverScheduler$TaskHolder.executeTask(LightArrayRevolverScheduler.scala:328)
at akka.actor.LightArrayRevolverScheduler$$anon$4.executeBucket$1(LightArrayRevolverScheduler.scala:279)
at akka.actor.LightArrayRevolverScheduler$$anon$4.nextTick(LightArrayRevolverScheduler.scala:283)
at akka.actor.LightArrayRevolverScheduler$$anon$4.run(LightArrayRevolverScheduler.scala:235)
at java.lang.Thread.run(Thread.java:748)

{code}




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-21707) Job is possible to hang when restarting a FINISHED task with POINTWISE BLOCKING consumers

2021-03-16 Thread Zhu Zhu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21707?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhu Zhu updated FLINK-21707:

Fix Version/s: 1.12.3
   1.13.0

> Job is possible to hang when restarting a FINISHED task with POINTWISE 
> BLOCKING consumers
> -
>
> Key: FLINK-21707
> URL: https://issues.apache.org/jira/browse/FLINK-21707
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.12.2, 1.13.0
>Reporter: Zhu Zhu
>Assignee: Zhu Zhu
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.13.0, 1.12.3
>
>
> A job may hang when restarting a FINISHED task with POINTWISE BLOCKING 
> consumers. This is because 
> {{PipelinedRegionSchedulingStrategy#onExecutionStateChange()}} will try to 
> schedule all the consumer tasks/regions of the finished *ExecutionJobVertex*, 
> even though the regions are not the exact consumers of the finished 
> *ExecutionVertex*. In this case, some of the regions can be in non-CREATED 
> state because they are not connected to nor affected by the restarted tasks. 
> However, {{PipelinedRegionSchedulingStrategy#maybeScheduleRegion()}} does not 
> allow to schedule a non-CREATED region and will throw an Exception and breaks 
> the scheduling of all the other regions. One example to show this problem 
> case can be found at 
> [PipelinedRegionSchedulingITCase#testRecoverFromPartitionException 
> |https://github.com/zhuzhurk/flink/commit/1eb036b6566c5cb4958d9957ba84dc78ce62a08c].
> To fix the problem, we can add a filter in 
> {{PipelinedRegionSchedulingStrategy#onExecutionStateChange()}} to only 
> trigger the scheduling of regions in CREATED state.
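The proposed fix boils down to a state filter over the consumer regions. A minimal sketch (all names below are illustrative stand-ins, not Flink's real scheduler classes): only regions still in CREATED state become scheduling candidates, so regions untouched by the restart are skipped instead of triggering an exception.

```java
import java.util.ArrayList;
import java.util.List;

public class RegionFilterSketch {
    enum RegionState { CREATED, SCHEDULED, RUNNING, FINISHED }

    static class Region {
        final String id;
        final RegionState state;
        Region(String id, RegionState state) { this.id = id; this.state = state; }
    }

    // Sketch of the proposed fix: when a producer finishes, only consumer
    // regions still in CREATED state are candidates for scheduling.
    static List<Region> filterSchedulableRegions(List<Region> consumerRegions) {
        List<Region> result = new ArrayList<>();
        for (Region region : consumerRegions) {
            if (region.state == RegionState.CREATED) {
                result.add(region);
            }
        }
        return result;
    }

    public static void main(String[] args) {
        List<Region> regions = new ArrayList<>();
        regions.add(new Region("r1", RegionState.CREATED));
        regions.add(new Region("r2", RegionState.RUNNING)); // unaffected by the restart
        List<Region> toSchedule = filterSchedulableRegions(regions);
        if (toSchedule.size() != 1 || !"r1".equals(toSchedule.get(0).id)) {
            throw new AssertionError("only CREATED regions should be scheduled");
        }
        System.out.println("ok");
    }
}
```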





[GitHub] [flink] beyond1920 commented on pull request #15195: [FLINK-19609][table-planner-blink] Support streaming window join in planner

2021-03-16 Thread GitBox


beyond1920 commented on pull request #15195:
URL: https://github.com/apache/flink/pull/15195#issuecomment-800781664


   @wuchong Thanks a lot, I've updated based on your comment.







[jira] [Assigned] (FLINK-19614) Further optimization of sort-merge based blocking shuffle

2021-03-16 Thread Zhu Zhu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19614?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhu Zhu reassigned FLINK-19614:
---

Assignee: Yingjie Cao

> Further optimization of sort-merge based blocking shuffle
> -
>
> Key: FLINK-19614
> URL: https://issues.apache.org/jira/browse/FLINK-19614
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Network
>Reporter: Yingjie Cao
>Assignee: Yingjie Cao
>Priority: Major
> Fix For: 1.13.0
>
>
> FLINK-19582 introduces a basic sort-merge based blocking shuffle 
> implementation. We can further optimize it based on the approaches proposed 
> in 
> https://docs.google.com/document/d/1mpekX6aAHJhBsQ0pS9MxDiFQjHQIuaJH0GAQHh0GlJ0/edit?usp=sharing.
> This is the umbrella ticket for the optimizations.





[jira] [Updated] (FLINK-19614) Further optimization of sort-merge based blocking shuffle

2021-03-16 Thread Zhu Zhu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19614?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhu Zhu updated FLINK-19614:

Fix Version/s: 1.13.0

> Further optimization of sort-merge based blocking shuffle
> -
>
> Key: FLINK-19614
> URL: https://issues.apache.org/jira/browse/FLINK-19614
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Network
>Reporter: Yingjie Cao
>Priority: Major
> Fix For: 1.13.0
>
>
> FLINK-19582 introduces a basic sort-merge based blocking shuffle 
> implementation. We can further optimize it based on the approaches proposed 
> in 
> https://docs.google.com/document/d/1mpekX6aAHJhBsQ0pS9MxDiFQjHQIuaJH0GAQHh0GlJ0/edit?usp=sharing.
> This is the umbrella ticket for the optimizations.





[jira] [Assigned] (FLINK-20758) Use region file mechanism for shuffle data reading before we switch to managed memory

2021-03-16 Thread Zhu Zhu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20758?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhu Zhu reassigned FLINK-20758:
---

Assignee: Yingjie Cao

> Use region file mechanism for shuffle data reading before we switch to 
> managed memory
> -
>
> Key: FLINK-20758
> URL: https://issues.apache.org/jira/browse/FLINK-20758
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Network
>Affects Versions: 1.12.0
>Reporter: Yingjie Cao
>Assignee: Yingjie Cao
>Priority: Major
>  Labels: usability
>
> FLINK-15981 implemented a region-file-based data reader to solve the direct 
> memory OOM issue introduced by the usage of unmanaged direct memory, but only 
> for BoundedBlockingResultPartition. We can introduce it to the sort-merge 
> based blocking shuffle to avoid a similar direct memory OOM problem, which 
> would improve usability a lot.





[jira] [Assigned] (FLINK-19938) Implement shuffle data read scheduling for sort-merge blocking shuffle

2021-03-16 Thread Zhu Zhu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhu Zhu reassigned FLINK-19938:
---

Assignee: Yingjie Cao

> Implement shuffle data read scheduling for sort-merge blocking shuffle
> --
>
> Key: FLINK-19938
> URL: https://issues.apache.org/jira/browse/FLINK-19938
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Network
>Reporter: Yingjie Cao
>Assignee: Yingjie Cao
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.13.0
>
>
> As described in 
> [https://cwiki.apache.org/confluence/display/FLINK/FLIP-148%3A+Introduce+Sort-Merge+Based+Blocking+Shuffle+to+Flink.]
>  shuffle IO scheduling is important for performance. We'd like to introduce 
> it to sort-merge shuffle first.





[jira] [Closed] (FLINK-21778) Use heap memory instead of direct memory as index entry cache for sort-merge shuffle

2021-03-16 Thread Zhu Zhu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhu Zhu closed FLINK-21778.
---
Resolution: Fixed

done via f165c7261d6f90a1390efcc3b98a00ae60a67ef3

> Use heap memory instead of direct memory as index entry cache for sort-merge 
> shuffle
> 
>
> Key: FLINK-21778
> URL: https://issues.apache.org/jira/browse/FLINK-21778
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Network
>Affects Versions: 1.12.0
>Reporter: Yingjie Cao
>Assignee: Yingjie Cao
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.13.0
>
>
> Currently, the sort-merge shuffle implementation uses a piece of direct 
> memory as index entry cache for acceleration. We can use heap memory instead 
> to reduce the usage of direct memory which further reduces the possibility of 
> OutOfMemoryError.
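For illustration, the heap-versus-direct distinction comes down to which `ByteBuffer` allocator is used. This hypothetical helper (not Flink's actual code) shows the one-line allocation difference; heap buffers are accounted against the JVM heap and collected by the garbage collector, while direct buffers count against the direct-memory limit that triggers the OutOfMemoryError mentioned above.

```java
import java.nio.ByteBuffer;

public class IndexCacheSketch {
    // Illustrative only: switching the index-entry cache from direct to heap
    // memory is a one-line allocation change.
    static ByteBuffer allocateIndexCache(int capacity, boolean useHeap) {
        return useHeap ? ByteBuffer.allocate(capacity) : ByteBuffer.allocateDirect(capacity);
    }

    public static void main(String[] args) {
        ByteBuffer heap = allocateIndexCache(4096, true);
        ByteBuffer direct = allocateIndexCache(4096, false);
        // Heap buffers are backed by a byte[]; direct buffers are not.
        if (!heap.hasArray()) throw new AssertionError("heap buffer should be array-backed");
        if (direct.hasArray()) throw new AssertionError("direct buffer should not be array-backed");
        System.out.println("ok");
    }
}
```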





[jira] [Assigned] (FLINK-20757) Optimize data broadcast for sort-merge shuffle

2021-03-16 Thread Zhu Zhu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20757?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhu Zhu reassigned FLINK-20757:
---

Assignee: Yingjie Cao

> Optimize data broadcast for sort-merge shuffle
> --
>
> Key: FLINK-20757
> URL: https://issues.apache.org/jira/browse/FLINK-20757
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Network
>Affects Versions: 1.12.0
>Reporter: Yingjie Cao
>Assignee: Yingjie Cao
>Priority: Major
> Fix For: 1.13.0
>
>
> For data broadcast, we can copy the record only once when writing data into 
> the SortBuffer. Besides, we can write only one copy of the data when spilling 
> to disk. These optimizations can improve the performance of data broadcast.
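A toy sketch of the copy-once idea (illustrative only, not Flink's SortBuffer implementation): the naive approach clones the broadcast record once per subpartition, while the optimized one stores a single copy that every subpartition references.

```java
import java.util.ArrayList;
import java.util.List;

public class BroadcastCopySketch {
    // Naive broadcast: N subpartitions get N independent copies of the bytes.
    static List<byte[]> broadcastNaive(byte[] record, int numSubpartitions) {
        List<byte[]> copies = new ArrayList<>();
        for (int i = 0; i < numSubpartitions; i++) {
            copies.add(record.clone());
        }
        return copies;
    }

    // Optimized broadcast: one copy of the bytes, N references to it.
    static List<byte[]> broadcastShared(byte[] record, int numSubpartitions) {
        List<byte[]> refs = new ArrayList<>();
        for (int i = 0; i < numSubpartitions; i++) {
            refs.add(record);
        }
        return refs;
    }

    public static void main(String[] args) {
        byte[] record = {1, 2, 3};
        List<byte[]> naive = broadcastNaive(record, 4);
        List<byte[]> shared = broadcastShared(record, 4);
        if (naive.get(0) == naive.get(1)) throw new AssertionError("naive should copy");
        if (shared.get(0) != shared.get(1)) throw new AssertionError("shared should alias");
        System.out.println("ok");
    }
}
```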





[jira] [Assigned] (FLINK-21777) Replace the 4M data writing cache of sort-merge shuffle with writev system call

2021-03-16 Thread Zhu Zhu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21777?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhu Zhu reassigned FLINK-21777:
---

Assignee: Yingjie Cao

> Replace the 4M data writing cache of sort-merge shuffle with writev system 
> call
> ---
>
> Key: FLINK-21777
> URL: https://issues.apache.org/jira/browse/FLINK-21777
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Network
>Affects Versions: 1.12.0
>Reporter: Yingjie Cao
>Assignee: Yingjie Cao
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.13.0
>
>
> Currently, the sort-merge shuffle implementation uses 4M unmanaged direct 
> memory as cache for data writing. It can be replaced by the writev system 
> call which can reduce the unmanaged direct memory usage without any 
> performance loss.
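In Java, the writev system call is exposed through NIO gathering writes (`GatheringByteChannel.write(ByteBuffer[])`), which hand multiple buffers to the kernel in a single call instead of first copying them into an intermediate cache. A minimal sketch under that assumption (the helper name is illustrative, not Flink's actual writer):

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.ArrayList;
import java.util.List;

public class GatheringWriteSketch {
    // Writes all buffers via gathering writes (writev under the hood),
    // avoiding a separate copy into an intermediate write cache.
    static long writeBuffers(FileChannel channel, List<ByteBuffer> buffers) throws IOException {
        ByteBuffer[] array = buffers.toArray(new ByteBuffer[0]);
        long total = 0;
        for (ByteBuffer buffer : array) total += buffer.remaining();
        long written = 0;
        while (written < total) {
            written += channel.write(array); // one syscall may cover many buffers
        }
        return written;
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("writev-sketch", ".bin");
        try (FileChannel channel = FileChannel.open(tmp, StandardOpenOption.WRITE)) {
            List<ByteBuffer> buffers = new ArrayList<>();
            buffers.add(ByteBuffer.wrap("hello ".getBytes()));
            buffers.add(ByteBuffer.wrap("world".getBytes()));
            long written = writeBuffers(channel, buffers);
            if (written != 11) throw new AssertionError("expected 11 bytes, got " + written);
        }
        Files.delete(tmp);
        System.out.println("ok");
    }
}
```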





[jira] [Created] (FLINK-21834) org.apache.flink.core.fs.AbstractRecoverableWriterTest.testResumeWithWrongOffset fail

2021-03-16 Thread Guowei Ma (Jira)
Guowei Ma created FLINK-21834:
-

 Summary: 
org.apache.flink.core.fs.AbstractRecoverableWriterTest.testResumeWithWrongOffset
 fail
 Key: FLINK-21834
 URL: https://issues.apache.org/jira/browse/FLINK-21834
 Project: Flink
  Issue Type: Bug
  Components: FileSystems
Affects Versions: 1.12.2
Reporter: Guowei Ma


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=14847=logs=3d12d40f-c62d-5ec4-6acc-0efe94cc3e89=5d6e4255-0ea8-5e2a-f52c-c881b7872361=10893

Maybe we need to print what the exception is when `recover` is called.
{code:java}
java.lang.AssertionError
at org.junit.Assert.fail(Assert.java:86)
at org.junit.Assert.fail(Assert.java:95)
at org.apache.flink.core.fs.AbstractRecoverableWriterTest.testResumeWithWrongOffset(AbstractRecoverableWriterTest.java:381)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
at org.junit.rules.RunRules.evaluate(RunRules.java:20)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
at org.junit.rules.RunRules.evaluate(RunRules.java:20)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
at org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)

{code}






[jira] [Assigned] (FLINK-21778) Use heap memory instead of direct memory as index entry cache for sort-merge shuffle

2021-03-16 Thread Zhu Zhu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhu Zhu reassigned FLINK-21778:
---

Assignee: Yingjie Cao

> Use heap memory instead of direct memory as index entry cache for sort-merge 
> shuffle
> 
>
> Key: FLINK-21778
> URL: https://issues.apache.org/jira/browse/FLINK-21778
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Network
>Affects Versions: 1.12.0
>Reporter: Yingjie Cao
>Assignee: Yingjie Cao
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.13.0
>
>
> Currently, the sort-merge shuffle implementation uses a piece of direct 
> memory as index entry cache for acceleration. We can use heap memory instead 
> to reduce the usage of direct memory which further reduces the possibility of 
> OutOfMemoryError.





[GitHub] [flink] zhuzhurk merged pull request #15193: [FLINK-21778][network] Use heap memory instead of direct memory as index entry cache for sort-merge shuffle

2021-03-16 Thread GitBox


zhuzhurk merged pull request #15193:
URL: https://github.com/apache/flink/pull/15193


   







[GitHub] [flink] zhuzhurk commented on pull request #15193: [FLINK-21778][network] Use heap memory instead of direct memory as index entry cache for sort-merge shuffle

2021-03-16 Thread GitBox


zhuzhurk commented on pull request #15193:
URL: https://github.com/apache/flink/pull/15193#issuecomment-800780230


   Merging.







[GitHub] [flink] flinkbot edited a comment on pull request #15247: [FLINK-21833][Table SQL / Runtime] state leak

2021-03-16 Thread GitBox


flinkbot edited a comment on pull request #15247:
URL: https://github.com/apache/flink/pull/15247#issuecomment-800773807


   
   ## CI report:
   
   * 26b7237eea7690de13b6b8d6a655b27964987a2b Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=14857)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #15248: [FLINK-21382][doc] Update documentation for standalone Flink on Kubernetes with standby JobManagers

2021-03-16 Thread GitBox


flinkbot edited a comment on pull request #15248:
URL: https://github.com/apache/flink/pull/15248#issuecomment-800773858


   
   ## CI report:
   
   * 3f30a89df59a0feb08314808efdb6e6e3b99e3a6 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=14858)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #15231: [FLINK-21805][table-planner-blink] Support json ser/de for StreamExecRank, StreamExecLimit and StreamExecSortLimit

2021-03-16 Thread GitBox


flinkbot edited a comment on pull request #15231:
URL: https://github.com/apache/flink/pull/15231#issuecomment-800077062


   
   ## CI report:
   
   * fdc2f6479eb9e31e96f80c019b9f02fdc4cd9541 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=14816)
 
   * 6284c51302491e962e3e00ae535d5828f2797d59 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=14855)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #15236: [FLINK-21818][table] Refactor SlicingWindowAggOperatorBuilder to accept serializer instead of LogicalType

2021-03-16 Thread GitBox


flinkbot edited a comment on pull request #15236:
URL: https://github.com/apache/flink/pull/15236#issuecomment-800154343


   
   ## CI report:
   
   * a4b23175ae924c5ea608ecdd5d3c6f3751d1b252 UNKNOWN
   * da23c74b9c1bb5a857a0777de9a9ac4b063122e0 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=14806)
 
   * d2404ab56c12185a521eb2c512df610c01680cf5 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #15213: [FLINK-21774][sql-client] Do not display column names when return set is emtpy in SQL Client

2021-03-16 Thread GitBox


flinkbot edited a comment on pull request #15213:
URL: https://github.com/apache/flink/pull/15213#issuecomment-799172628


   
   ## CI report:
   
   * c2cc618f010e55bcaa8157221b4dcaf3f5bd3231 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=14788)
 
   * d236292f98ae9d87cc5f7d50160a830a3f34f30e UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #15197: [FLINK-21462][sql client] Use configuration to store the option and value in Sql client

2021-03-16 Thread GitBox


flinkbot edited a comment on pull request #15197:
URL: https://github.com/apache/flink/pull/15197#issuecomment-798871985


   
   ## CI report:
   
   * 546bb52a009fee535df450c4ce569f1d8019ff6a UNKNOWN
   * a126b0d57b528509a8a9292d218df984542a745d Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=14827)
 
   * 734f3d41b850e2db3edef894d5037c90134de85f Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=14854)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[jira] [Commented] (FLINK-21798) Guard MemorySegment against multiple frees.

2021-03-16 Thread Kezhu Wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-21798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17303080#comment-17303080
 ] 

Kezhu Wang commented on FLINK-21798:


[~xintongsong] You are right. It is always good to catch multiple-frees. 
There are already places in {{MemorySegment}} forbidding "access after freed", 
so there is no excuse to leave multiple-frees behind.

> Guard MemorySegment against multiple frees.
> ---
>
> Key: FLINK-21798
> URL: https://issues.apache.org/jira/browse/FLINK-21798
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Coordination
>Reporter: Xintong Song
>Priority: Critical
>  Labels: Umbrella
> Fix For: 1.13.0
>
>
> As discussed in FLINK-21419, freeing a memory segment multiple times 
> usually indicates that the ownership of the segment is unclear. It would be 
> good to gradually get rid of all such multiple-frees.
> This ticket serves as an umbrella for detected multiple-free cases.
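A minimal sketch of such a guard (illustrative, not Flink's actual MemorySegment): an atomic flag makes the first free win and any later free fail fast, which turns a silent double-free into a detectable error.

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class SegmentFreeGuardSketch {
    static class Segment {
        private final AtomicBoolean freed = new AtomicBoolean(false);

        void free() {
            // compareAndSet lets exactly one caller win the race to free.
            if (!freed.compareAndSet(false, true)) {
                throw new IllegalStateException("MemorySegment already freed");
            }
            // release the underlying memory here
        }

        boolean isFreed() {
            return freed.get();
        }
    }

    public static void main(String[] args) {
        Segment segment = new Segment();
        segment.free();
        boolean caught = false;
        try {
            segment.free(); // the second free must be rejected
        } catch (IllegalStateException expected) {
            caught = true;
        }
        if (!caught || !segment.isFreed()) throw new AssertionError("double free not detected");
        System.out.println("ok");
    }
}
```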





[GitHub] [flink] wuchong commented on a change in pull request #15213: [FLINK-21774][sql-client] Do not display column names when return set is emtpy in SQL Client

2021-03-16 Thread GitBox


wuchong commented on a change in pull request #15213:
URL: https://github.com/apache/flink/pull/15213#discussion_r595699406



##
File path: 
flink-table/flink-table-common/src/main/java/org/apache/flink/table/utils/PrintUtils.java
##
@@ -116,65 +116,60 @@ public static void printAsTableauForm(
 String nullColumn,
 boolean deriveColumnWidthByType,
 boolean printRowKind) {
-final List<TableColumn> columns = tableSchema.getTableColumns();
-String[] columnNames = columns.stream().map(TableColumn::getName).toArray(String[]::new);
-if (printRowKind) {
-columnNames =
-Stream.concat(Stream.of(ROW_KIND_COLUMN), 
Arrays.stream(columnNames))
-.toArray(String[]::new);
-}
-
-final int[] colWidths;
-if (deriveColumnWidthByType) {
-colWidths =
-columnWidthsByType(
-columns,
-maxColumnWidth,
-nullColumn,
-printRowKind ? ROW_KIND_COLUMN : null);
-} else {
-final List<Row> rows = new ArrayList<>();
-final List<String[]> content = new ArrayList<>();
-content.add(columnNames);
-while (it.hasNext()) {
-Row row = it.next();
-rows.add(row);
-content.add(rowToString(row, nullColumn, printRowKind));
+if (it.hasNext()) {

Review comment:
   Personally, I don't like big if-else blocks. Could you return from the method early when it is empty?
   
   

##
File path: 
flink-table/flink-table-api-java/src/main/java/org/apache/flink/table/api/internal/TableEnvironmentImpl.java
##
@@ -628,12 +628,19 @@ public boolean dropTemporaryView(String path) {
 
 @Override
 public String[] listUserDefinedFunctions() {
-return functionCatalog.getUserDefinedFunctions();
+return sortFunctions(functionCatalog.getUserDefinedFunctions());
 }
 
 @Override
 public String[] listFunctions() {
-return functionCatalog.getFunctions();
+return sortFunctions(functionCatalog.getFunctions());

Review comment:
   I think we can simply use `Arrays.sort(functionCatalog.getFunctions())`. The array 
returned by `getFunctions` should never be null. 
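For reference, `Arrays.sort` sorts the array in place, using natural (lexicographic) ordering for `String`. A small sketch of the suggested approach (the helper below is illustrative, not the actual `TableEnvironmentImpl` code):

```java
import java.util.Arrays;

public class SortFunctionsSketch {
    // Sketch of the reviewer's suggestion: sort the function names with
    // Arrays.sort instead of a dedicated helper. A defensive copy is taken
    // here so the caller's array is left untouched.
    static String[] listFunctions(String[] fromCatalog) {
        String[] functions = Arrays.copyOf(fromCatalog, fromCatalog.length);
        Arrays.sort(functions); // natural (lexicographic) String ordering
        return functions;
    }

    public static void main(String[] args) {
        String[] sorted = listFunctions(new String[] {"upper", "abs", "concat"});
        if (!Arrays.equals(sorted, new String[] {"abs", "concat", "upper"})) {
            throw new AssertionError("expected lexicographic order");
        }
        System.out.println("ok");
    }
}
```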









[jira] [Commented] (FLINK-21728) DegreesWithExceptionITCase crash

2021-03-16 Thread Guowei Ma (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-21728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17303077#comment-17303077
 ] 

Guowei Ma commented on FLINK-21728:
---

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=14845=logs=72d4811f-9f0d-5fd0-014a-0bc26b72b642=c1d93a6a-ba91-515d-3196-2ee8019fbda7=13627

> DegreesWithExceptionITCase crash
> 
>
> Key: FLINK-21728
> URL: https://issues.apache.org/jira/browse/FLINK-21728
> Project: Flink
>  Issue Type: Sub-task
>  Components: Library / Graph Processing (Gelly)
>Affects Versions: 1.13.0
>Reporter: Guowei Ma
>Priority: Blocker
>  Labels: pull-request-available, test-stability
> Fix For: 1.13.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=14422=logs=ce8f3cc3-c1ea-5281-f5eb-df9ebd24947f=f266c805-9429-58ed-2f9e-482e7b82f58b





[jira] [Commented] (FLINK-21745) JobMasterTest.testReconnectionAfterDisconnect hangs on azure

2021-03-16 Thread Guowei Ma (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-21745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17303079#comment-17303079
 ] 

Guowei Ma commented on FLINK-21745:
---

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=14845=logs=0e7be18f-84f2-53f0-a32d-4a5e4a174679=7030a106-e977-5851-a05e-535de648c9c9=8972

> JobMasterTest.testReconnectionAfterDisconnect hangs on azure
> 
>
> Key: FLINK-21745
> URL: https://issues.apache.org/jira/browse/FLINK-21745
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.13.0
>Reporter: Dawid Wysakowicz
>Priority: Major
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=14500=logs=0e7be18f-84f2-53f0-a32d-4a5e4a174679=7030a106-e977-5851-a05e-535de648c9c9=8884
> {code}
> {code}





[jira] [Commented] (FLINK-21416) FileBufferReaderITCase.testSequentialReading fails on azure

2021-03-16 Thread Guowei Ma (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-21416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17303074#comment-17303074
 ] 

Guowei Ma commented on FLINK-21416:
---

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=14845=logs=59c257d0-c525-593b-261d-e96a86f1926b=b93980e3-753f-5433-6a19-13747adae66a=6948

> FileBufferReaderITCase.testSequentialReading fails on azure
> ---
>
> Key: FLINK-21416
> URL: https://issues.apache.org/jira/browse/FLINK-21416
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Network
>Affects Versions: 1.13.0
>Reporter: Dawid Wysakowicz
>Assignee: Guo Weijie
>Priority: Critical
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=13473=logs=59c257d0-c525-593b-261d-e96a86f1926b=b93980e3-753f-5433-6a19-13747adae66a
> {code}
> org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
>   at 
> org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:144)
>   at 
> org.apache.flink.runtime.minicluster.MiniCluster.executeJobBlocking(MiniCluster.java:811)
>   at 
> org.apache.flink.runtime.io.network.partition.FileBufferReaderITCase.testSequentialReading(FileBufferReaderITCase.java:128)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at org.junit.runners.Suite.runChild(Suite.java:128)
>   at org.junit.runners.Suite.runChild(Suite.java:27)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
> Caused by: org.apache.flink.runtime.JobException: Recovery is suppressed by 
> NoRestartBackoffTimeStrategy
>   at 
> org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.handleFailure(ExecutionFailureHandler.java:117)
>   at 
> org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.getFailureHandlingResult(ExecutionFailureHandler.java:79)
>   at 
> 

[jira] [Commented] (FLINK-21103) E2e tests time out on azure

2021-03-16 Thread Guowei Ma (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-21103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17303073#comment-17303073
 ] 

Guowei Ma commented on FLINK-21103:
---

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=14845=logs=08866332-78f7-59e4-4f7e-49a56faa3179=7f606211-1454-543c-70ab-c7a028a1ce8c=1366

> E2e tests time out on azure
> ---
>
> Key: FLINK-21103
> URL: https://issues.apache.org/jira/browse/FLINK-21103
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / Azure Pipelines, Tests
>Affects Versions: 1.11.3, 1.12.1, 1.13.0
>Reporter: Dawid Wysakowicz
>Priority: Blocker
>  Labels: test-stability
> Fix For: 1.13.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=12377=logs=c88eea3b-64a0-564d-0031-9fdcd7b8abee=ff888d9b-cd34-53cc-d90f-3e446d355529
> {code}
> Creating worker2 ... done
> Jan 22 13:16:17 Waiting for hadoop cluster to come up. We have been trying 
> for 0 seconds, retrying ...
> Jan 22 13:16:22 Waiting for hadoop cluster to come up. We have been trying 
> for 5 seconds, retrying ...
> Jan 22 13:16:27 Waiting for hadoop cluster to come up. We have been trying 
> for 10 seconds, retrying ...
> Jan 22 13:16:32 Waiting for hadoop cluster to come up. We have been trying 
> for 15 seconds, retrying ...
> Jan 22 13:16:37 Waiting for hadoop cluster to come up. We have been trying 
> for 20 seconds, retrying ...
> Jan 22 13:16:43 Waiting for hadoop cluster to come up. We have been trying 
> for 26 seconds, retrying ...
> Jan 22 13:16:48 Waiting for hadoop cluster to come up. We have been trying 
> for 31 seconds, retrying ...
> Jan 22 13:16:53 Waiting for hadoop cluster to come up. We have been trying 
> for 36 seconds, retrying ...
> Jan 22 13:16:58 Waiting for hadoop cluster to come up. We have been trying 
> for 41 seconds, retrying ...
> Jan 22 13:17:03 Waiting for hadoop cluster to come up. We have been trying 
> for 46 seconds, retrying ...
> Jan 22 13:17:08 We only have 0 NodeManagers up. We have been trying for 0 
> seconds, retrying ...
> 21/01/22 13:17:10 INFO client.RMProxy: Connecting to ResourceManager at 
> master.docker-hadoop-cluster-network/172.19.0.3:8032
> 21/01/22 13:17:11 INFO client.AHSProxy: Connecting to Application History 
> server at master.docker-hadoop-cluster-network/172.19.0.3:10200
> Jan 22 13:17:11 We now have 2 NodeManagers up.
> ==
> === WARNING: This E2E Run took already 80% of the allocated time budget of 
> 250 minutes ===
> ==
> ==
> === WARNING: This E2E Run will time out in the next few minutes. Starting to 
> upload the log output ===
> ==
> ##[error]The task has timed out.
> Async Command Start: Upload Artifact
> Uploading 1 files
> File upload succeed.
> Upload '/tmp/_e2e_watchdog.output.0' to file container: 
> '#/11824779/e2e-timeout-logs'
> Associated artifact 140921 with build 12377
> Async Command End: Upload Artifact
> Async Command Start: Upload Artifact
> Uploading 1 files
> File upload succeed.
> Upload '/tmp/_e2e_watchdog.output.1' to file container: 
> '#/11824779/e2e-timeout-logs'
> Associated artifact 140921 with build 12377
> Async Command End: Upload Artifact
> Async Command Start: Upload Artifact
> Uploading 1 files
> File upload succeed.
> Upload '/tmp/_e2e_watchdog.output.2' to file container: 
> '#/11824779/e2e-timeout-logs'
> Associated artifact 140921 with build 12377
> Async Command End: Upload Artifact
> Finishing: Run e2e tests
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-21832) Avro Confluent Schema Registry nightly end-to-end fail

2021-03-16 Thread Guowei Ma (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21832?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guowei Ma updated FLINK-21832:
--
Affects Version/s: 1.13.0

> Avro Confluent Schema Registry nightly end-to-end   fail
> 
>
> Key: FLINK-21832
> URL: https://issues.apache.org/jira/browse/FLINK-21832
> Project: Flink
>  Issue Type: Bug
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Affects Versions: 1.12.2, 1.13.0
>Reporter: Guowei Ma
>Priority: Major
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=14793=logs=c88eea3b-64a0-564d-0031-9fdcd7b8abee=ff888d9b-cd34-53cc-d90f-3e446d355529=20126
> The watchdog could not kill the download successfully.
> {code:java}
>  60  296M   60  179M0 0   235k  0  0:21:28  0:13:01  0:08:27  
> 238kMar 16 13:33:35 Test (pid: 13982) did not finish after 900 seconds.
> Mar 16 13:33:35 Printing Flink logs and killing it:
> cat: 
> '/home/vsts/work/1/s/flink-dist/target/flink-1.12-SNAPSHOT-bin/flink-1.12-SNAPSHOT/log/*':
>  No such file or directory
> {code}
> Because the watchdog exited, the test case failed:
> {code:java}
> Mar 16 13:42:37 Stopping job timeout watchdog (with pid=13983)
> /home/vsts/work/1/s/flink-end-to-end-tests/test-scripts/common.sh: line 809: 
> kill: (13983) - No such process
> Mar 16 13:42:37 [FAIL] Test script contains errors.
> Mar 16 13:42:37 Checking for errors...
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-21832) Avro Confluent Schema Registry nightly end-to-end fail

2021-03-16 Thread Guowei Ma (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21832?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guowei Ma updated FLINK-21832:
--
Description: 
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=14793=logs=c88eea3b-64a0-564d-0031-9fdcd7b8abee=ff888d9b-cd34-53cc-d90f-3e446d355529=20126

The watchdog could not kill the download successfully.
{code:java}
 60  296M   60  179M0 0   235k  0  0:21:28  0:13:01  0:08:27  
238kMar 16 13:33:35 Test (pid: 13982) did not finish after 900 seconds.
Mar 16 13:33:35 Printing Flink logs and killing it:
cat: 
'/home/vsts/work/1/s/flink-dist/target/flink-1.12-SNAPSHOT-bin/flink-1.12-SNAPSHOT/log/*':
 No such file or directory

{code}

Because the watchdog exited, the test case failed:
{code:java}
Mar 16 13:42:37 Stopping job timeout watchdog (with pid=13983)
/home/vsts/work/1/s/flink-end-to-end-tests/test-scripts/common.sh: line 809: 
kill: (13983) - No such process
Mar 16 13:42:37 [FAIL] Test script contains errors.
Mar 16 13:42:37 Checking for errors...

{code}


  was:
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=14793=logs=c88eea3b-64a0-564d-0031-9fdcd7b8abee=ff888d9b-cd34-53cc-d90f-3e446d355529=20126

I downloaded the logs, but there are no logs after 2021-03-16 13:42:27,807.

{code:java}
2021-03-16 13:42:22,112 WARN  org.apache.kafka.clients.consumer.ConsumerConfig  
   [] - The configuration 'value.serializer' was supplied but isn't a 
known config.
2021-03-16 13:42:22,112 WARN  org.apache.kafka.clients.consumer.ConsumerConfig  
   [] - The configuration 'transaction.timeout.ms' was supplied but 
isn't a known config.
2021-03-16 13:42:22,112 WARN  org.apache.kafka.clients.consumer.ConsumerConfig  
   [] - The configuration 'key.serializer' was supplied but isn't a 
known config.
2021-03-16 13:42:22,114 INFO  org.apache.kafka.common.utils.AppInfoParser   
   [] - Kafka version: 2.4.1
2021-03-16 13:42:22,114 INFO  org.apache.kafka.common.utils.AppInfoParser   
   [] - Kafka commitId: c57222ae8cd7866b
2021-03-16 13:42:22,114 INFO  org.apache.kafka.common.utils.AppInfoParser   
   [] - Kafka startTimeMs: 1615902142112
2021-03-16 13:42:22,127 INFO  org.apache.kafka.clients.consumer.KafkaConsumer   
   [] - [Consumer clientId=consumer-myconsumer-2, groupId=myconsumer] 
Subscribed to partition(s): test-avro-input-0
2021-03-16 13:42:22,133 INFO  
org.apache.kafka.clients.consumer.internals.SubscriptionState [] - [Consumer 
clientId=consumer-myconsumer-2, groupId=myconsumer] Seeking to EARLIEST offset 
of partition test-avro-input-0
2021-03-16 13:42:22,153 INFO  org.apache.kafka.clients.Metadata 
   [] - [Consumer clientId=consumer-myconsumer-2, groupId=myconsumer] 
Cluster ID: kpJ9rApqS5OBn18olsdihQ
2021-03-16 13:42:22,167 INFO  
org.apache.kafka.clients.consumer.internals.SubscriptionState [] - [Consumer 
clientId=consumer-myconsumer-2, groupId=myconsumer] Resetting offset for 
partition test-avro-input-0 to offset 0.
2021-03-16 13:42:27,807 INFO  
org.apache.kafka.clients.consumer.internals.AbstractCoordinator [] - [Consumer 
clientId=consumer-myconsumer-2, groupId=myconsumer] Discovered group 
coordinator fv-az101-48.internal.cloudapp.net:9092 (id: 2147483647 rack: null)
{code}



> Avro Confluent Schema Registry nightly end-to-end   fail
> 
>
> Key: FLINK-21832
> URL: https://issues.apache.org/jira/browse/FLINK-21832
> Project: Flink
>  Issue Type: Bug
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Affects Versions: 1.12.2
>Reporter: Guowei Ma
>Priority: Major
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=14793=logs=c88eea3b-64a0-564d-0031-9fdcd7b8abee=ff888d9b-cd34-53cc-d90f-3e446d355529=20126
> The watchdog could not kill the download successfully.
> {code:java}
>  60  296M   60  179M0 0   235k  0  0:21:28  0:13:01  0:08:27  
> 238kMar 16 13:33:35 Test (pid: 13982) did not finish after 900 seconds.
> Mar 16 13:33:35 Printing Flink logs and killing it:
> cat: 
> '/home/vsts/work/1/s/flink-dist/target/flink-1.12-SNAPSHOT-bin/flink-1.12-SNAPSHOT/log/*':
>  No such file or directory
> {code}
> Because the watchdog exited, the test case failed:
> {code:java}
> Mar 16 13:42:37 Stopping job timeout watchdog (with pid=13983)
> /home/vsts/work/1/s/flink-end-to-end-tests/test-scripts/common.sh: line 809: 
> kill: (13983) - No such process
> Mar 16 13:42:37 [FAIL] Test script contains errors.
> Mar 16 13:42:37 Checking for errors...
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-21832) Avro Confluent Schema Registry nightly end-to-end fail

2021-03-16 Thread Guowei Ma (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-21832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17303071#comment-17303071
 ] 

Guowei Ma commented on FLINK-21832:
---

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=14820=logs=c88eea3b-64a0-564d-0031-9fdcd7b8abee=ff888d9b-cd34-53cc-d90f-3e446d355529=17913

> Avro Confluent Schema Registry nightly end-to-end   fail
> 
>
> Key: FLINK-21832
> URL: https://issues.apache.org/jira/browse/FLINK-21832
> Project: Flink
>  Issue Type: Bug
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Affects Versions: 1.12.2
>Reporter: Guowei Ma
>Priority: Major
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=14793=logs=c88eea3b-64a0-564d-0031-9fdcd7b8abee=ff888d9b-cd34-53cc-d90f-3e446d355529=20126
> The watchdog could not kill the download successfully.
> {code:java}
>  60  296M   60  179M0 0   235k  0  0:21:28  0:13:01  0:08:27  
> 238kMar 16 13:33:35 Test (pid: 13982) did not finish after 900 seconds.
> Mar 16 13:33:35 Printing Flink logs and killing it:
> cat: 
> '/home/vsts/work/1/s/flink-dist/target/flink-1.12-SNAPSHOT-bin/flink-1.12-SNAPSHOT/log/*':
>  No such file or directory
> {code}
> Because the watchdog exited, the test case failed:
> {code:java}
> Mar 16 13:42:37 Stopping job timeout watchdog (with pid=13983)
> /home/vsts/work/1/s/flink-end-to-end-tests/test-scripts/common.sh: line 809: 
> kill: (13983) - No such process
> Mar 16 13:42:37 [FAIL] Test script contains errors.
> Mar 16 13:42:37 Checking for errors...
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] zuston commented on a change in pull request #15185: [FLINK-21768][clients]Optimize system.exit() logic of CliFrontend

2021-03-16 Thread GitBox


zuston commented on a change in pull request #15185:
URL: https://github.com/apache/flink/pull/15185#discussion_r595696243



##
File path: 
flink-clients/src/main/java/org/apache/flink/client/cli/CliFrontend.java
##
@@ -1124,19 +1124,19 @@ public static void main(final String[] args) {
 final List<CustomCommandLine> customCommandLines =
 loadCustomCommandLines(configuration, configurationDirectory);
 
+int retCode = 31;
 try {
 final CliFrontend cli = new CliFrontend(configuration, 
customCommandLines);
 
 SecurityUtils.install(new 
SecurityConfiguration(cli.configuration));
-int retCode =
-SecurityUtils.getInstalledContext().runSecured(() -> 
cli.parseAndRun(args));
-System.exit(retCode);

Review comment:
   Yes.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Comment Edited] (FLINK-21798) Guard MemorySegment against multiple frees.

2021-03-16 Thread Xintong Song (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-21798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17303069#comment-17303069
 ] 

Xintong Song edited comment on FLINK-21798 at 3/17/21, 3:55 AM:


Currently, unsafe segments are only allocated via {{MemoryManager}}. However, 
I'm not sure we want such a restriction; I don't see a good reason to forbid 
unsafe usages outside {{MemoryManager}} in the future.

Moreover, while multiple frees on a heap/direct segment may not be as severe as 
on an unsafe segment, they are still worth preventing. Those segments cannot 
be guarded by {{MemoryManager}}.


was (Author: xintongsong):
Currently, unsafe segments are only allocated via {{MemoryManager}}. However, 
I'm not sure we want such a restriction; I don't see a good reason to forbid 
unsafe usages outside {{MemoryManager}}.

Moreover, while multiple frees on a heap/direct segment may not be as severe as 
on an unsafe segment, they are still worth preventing. Those segments cannot 
be guarded by {{MemoryManager}}.

> Guard MemorySegment against multiple frees.
> ---
>
> Key: FLINK-21798
> URL: https://issues.apache.org/jira/browse/FLINK-21798
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Coordination
>Reporter: Xintong Song
>Priority: Critical
>  Labels: Umbrella
> Fix For: 1.13.0
>
>
> As discussed in FLINK-21419, freeing a memory segment multiple times usually 
> indicates that the ownership of the segment is unclear. It would be good to 
> gradually get rid of all such multiple frees.
> This ticket serves as an umbrella for detected multiple-free cases.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-21798) Guard MemorySegment against multiple frees.

2021-03-16 Thread Xintong Song (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-21798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17303069#comment-17303069
 ] 

Xintong Song commented on FLINK-21798:
--

Currently, unsafe segments are only allocated via {{MemoryManager}}. However, 
I'm not sure we want such a restriction; I don't see a good reason to forbid 
unsafe usages outside {{MemoryManager}}.

Moreover, while multiple frees on a heap/direct segment may not be as severe as 
on an unsafe segment, they are still worth preventing. Those segments cannot 
be guarded by {{MemoryManager}}.

> Guard MemorySegment against multiple frees.
> ---
>
> Key: FLINK-21798
> URL: https://issues.apache.org/jira/browse/FLINK-21798
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Coordination
>Reporter: Xintong Song
>Priority: Critical
>  Labels: Umbrella
> Fix For: 1.13.0
>
>
> As discussed in FLINK-21419, freeing a memory segment multiple times usually 
> indicates that the ownership of the segment is unclear. It would be good to 
> gradually get rid of all such multiple frees.
> This ticket serves as an umbrella for detected multiple-free cases.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
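The guard proposed in FLINK-21798 could, in principle, be sketched with a compare-and-set flag so that a second {{free()}} is a detectable no-op. This is a hypothetical illustration, not Flink's actual {{MemorySegment}} code:

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical sketch (not Flink's MemorySegment): an atomic flag
// ensures the underlying memory is released exactly once, and lets
// callers detect a double free instead of silently corrupting state.
class GuardedSegment {
    private final AtomicBoolean freed = new AtomicBoolean(false);

    /** Returns true only for the first successful free. */
    boolean free() {
        // Only the caller that wins the CAS actually releases the memory.
        return freed.compareAndSet(false, true);
    }

    public static void main(String[] args) {
        GuardedSegment seg = new GuardedSegment();
        System.out.println(seg.free()); // first free succeeds
        System.out.println(seg.free()); // second free is detected
    }
}
```

A real implementation would release the memory inside the winning branch; the flag alone only makes the double free observable.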


[GitHub] [flink] flinkbot commented on pull request #15248: [FLINK-21382][doc] Update documentation for standalone Flink on Kubernetes with standby JobManagers

2021-03-16 Thread GitBox


flinkbot commented on pull request #15248:
URL: https://github.com/apache/flink/pull/15248#issuecomment-800773858


   
   ## CI report:
   
   * 3f30a89df59a0feb08314808efdb6e6e3b99e3a6 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #15247: [FLINK-21833][Table SQL / Runtime] state leak

2021-03-16 Thread GitBox


flinkbot commented on pull request #15247:
URL: https://github.com/apache/flink/pull/15247#issuecomment-800773807


   
   ## CI report:
   
   * 26b7237eea7690de13b6b8d6a655b27964987a2b UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #15233: [FLINK-21815][table-planner-blink] Support json ser/de for StreamExecUnion

2021-03-16 Thread GitBox


flinkbot edited a comment on pull request #15233:
URL: https://github.com/apache/flink/pull/15233#issuecomment-800090052


   
   ## CI report:
   
   * 643cf2451ef1a7df46999f2c5cebb36c76f41c75 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=14805)
 
   * eeb3175f76962360e7966fb7664b03a05e170622 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=14853)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #15231: [FLINK-21805][table-planner-blink] Support json ser/de for StreamExecRank, StreamExecLimit and StreamExecSortLimit

2021-03-16 Thread GitBox


flinkbot edited a comment on pull request #15231:
URL: https://github.com/apache/flink/pull/15231#issuecomment-800077062


   
   ## CI report:
   
   * fdc2f6479eb9e31e96f80c019b9f02fdc4cd9541 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=14816)
 
   * 6284c51302491e962e3e00ae535d5828f2797d59 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #15215: [FLINK-21785][table-planner-blink] Support json ser/de for StreamExecCorrelate

2021-03-16 Thread GitBox


flinkbot edited a comment on pull request #15215:
URL: https://github.com/apache/flink/pull/15215#issuecomment-799415392


   
   ## CI report:
   
   * bc8ac05b4db95e83d09acd6764c24f5a65f1ff9b Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=14799)
 
   * 4be10922d380460d6166cc9b4d0ceab1ed611a7f Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=14852)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] SteNicholas commented on pull request #15213: [FLINK-21774][sql-client] Do not display column names when return set is emtpy in SQL Client

2021-03-16 Thread GitBox


SteNicholas commented on pull request #15213:
URL: https://github.com/apache/flink/pull/15213#issuecomment-800773465


   @wuchong , thanks for the minor comments. I have followed the suggestion for 
`listFunctions` and `listUserDefinedFunctions`. Please check again.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Assigned] (FLINK-21768) Optimize system.exit() logic of CliFrontend

2021-03-16 Thread Yang Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Wang reassigned FLINK-21768:
-

Assignee: Junfan Zhang

> Optimize system.exit() logic of CliFrontend
> ---
>
> Key: FLINK-21768
> URL: https://issues.apache.org/jira/browse/FLINK-21768
> Project: Flink
>  Issue Type: Improvement
>  Components: Command Line Client
>Reporter: Junfan Zhang
>Assignee: Junfan Zhang
>Priority: Major
>  Labels: pull-request-available
>
> h2. Why 
> We encountered a problem when Oozie was integrated with the Flink Batch 
> Action. Oozie uses a launcher job to start the Flink client, which submits 
> the Flink job to Hadoop YARN. 
> When the Flink client finishes, Oozie reads its exit code to determine the 
> job submission status and then does some extra work.
> So how does Oozie catch {{System.exit()}}? It implements a JDK 
> SecurityManager. ([Oozie related code 
> link|https://github.com/apache/oozie/blob/f1e01a9e155692aa5632f4573ab1b3ebeab7ef45/sharelib/oozie/src/main/java/org/apache/oozie/action/hadoop/security/LauncherSecurityManager.java#L24]).
>  
> Now, when the Flink client finishes successfully, it calls the 
> {{System.exit(0)}} ([Flink related code 
> link|https://github.com/apache/flink/blob/195298aea327b3f98d9852121f0f146368696300/flink-clients/src/main/java/org/apache/flink/client/cli/CliFrontend.java#L1133])
>  method. 
> The JVM then uses the LauncherSecurityManager (implemented by Oozie) to 
> handle {{System.exit(0)}}, which triggers 
> {{LauncherSecurityManager.checkExit()}} and throws an exception. 
> Finally, the Flink client catches that {{throwable}} and calls 
> {{System.exit(31)}} ([related code 
> link|https://github.com/apache/flink/blob/195298aea327b3f98d9852121f0f146368696300/flink-clients/src/main/java/org/apache/flink/client/cli/CliFrontend.java#L1139])
>  again. This causes Oozie to misjudge the status of the Flink job.
> Admittedly this is a corner case; in most scenarios the situation described 
> above will not happen, but it is still worth optimizing the client exit 
> logic. Besides, I think the problem above may also exist in other frameworks 
> such as linkedin/azkaban and apache/airflow, which use the Flink client to 
> submit batch jobs.
> Flink related code:
> {code:java}
> public static void main(final String[] args) {
> EnvironmentInformation.logEnvironmentInfo(LOG, "Command Line Client", 
> args);
> // 1. find the configuration directory
> final String configurationDirectory = 
> getConfigurationDirectoryFromEnv();
> // 2. load the global configuration
> final Configuration configuration =
> GlobalConfiguration.loadConfiguration(configurationDirectory);
> // 3. load the custom command lines
> final List<CustomCommandLine> customCommandLines =
> loadCustomCommandLines(configuration, configurationDirectory);
> try {
> final CliFrontend cli = new CliFrontend(configuration, 
> customCommandLines);
> SecurityUtils.install(new 
> SecurityConfiguration(cli.configuration));
> int retCode =
> SecurityUtils.getInstalledContext().runSecured(() -> 
> cli.parseAndRun(args));
> System.exit(retCode);
> } catch (Throwable t) {
> final Throwable strippedThrowable =
> ExceptionUtils.stripException(t, 
> UndeclaredThrowableException.class);
> LOG.error("Fatal error while running command line interface.", 
> strippedThrowable);
> strippedThrowable.printStackTrace();
> System.exit(31);
> }
> }
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
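A minimal illustration of the direction discussed in FLINK-21768 and PR #15185: compute the return code in one place and call {{System.exit()}} exactly once, so a SecurityManager-thrown exception from the first exit cannot trigger a second, misleading {{System.exit(31)}}. The names here ({{ClientMain}}, {{parseAndRun}}) are hypothetical stand-ins for the real {{CliFrontend}} logic:

```java
// Illustrative sketch (not the actual PR): the return code defaults to
// failure (31), is overwritten on success, and the JVM exits from a
// single point. parseAndRun is a stand-in for the real CLI logic.
final class ClientMain {
    static int runClient(String[] args) {
        int retCode = 31; // assume failure by default
        try {
            retCode = parseAndRun(args); // hypothetical client entry point
        } catch (Throwable t) {
            t.printStackTrace(); // keep retCode = 31
        }
        return retCode;
    }

    static int parseAndRun(String[] args) {
        return args.length == 0 ? 0 : 1; // stand-in for real parsing
    }

    public static void main(String[] args) {
        // In the real client this would be System.exit(runClient(args)).
        System.out.println("exit code: " + runClient(args));
    }
}
```

Keeping a single exit point also makes the behavior under an exit-trapping SecurityManager (like Oozie's launcher) deterministic: the launcher observes one exit code, not two.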


[GitHub] [flink] xintongsong commented on a change in pull request #15112: [FLINK-21480][runtime] Respect external resources from resource requirements

2021-03-16 Thread GitBox


xintongsong commented on a change in pull request #15112:
URL: https://github.com/apache/flink/pull/15112#discussion_r595681050



##
File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/clusterframework/types/ResourceProfile.java
##
@@ -640,15 +642,18 @@ public Builder setNetworkMemoryMB(int networkMemoryMB) {
 return this;
 }
 
-public Builder addExtendedResource(String name, Resource 
extendedResource) {
-this.extendedResources.put(name, extendedResource);
+// Add the given extended resource, the old value with the same 
resource name will be
+// override if present.
+public Builder setExtendedResource(Resource extendedResource) {
+this.extendedResources.put(extendedResource.getName(), 
extendedResource);
 return this;
 }
 
-public Builder addExtendedResources(Map<String, Resource> 
extendedResources) {
-if (extendedResources != null) {
-this.extendedResources.putAll(extendedResources);
-}
+// Add the given extended resources, this will override all the 
previous records.

Review comment:
   ```
   Add the given extended resources. This will discard all the previous added 
extended resources.
   ```

##
File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/clusterframework/TaskExecutorProcessSpec.java
##
@@ -109,18 +120,27 @@ public TaskExecutorProcessSpec(
 networkMemSize,
 managedMemorySize),
 new JvmMetaspaceAndOverhead(jvmMetaspaceSize, jvmOverheadSize),
-1);
+1,
+extendedResources);
 }
 
 protected TaskExecutorProcessSpec(
 CPUResource cpuCores,
 TaskExecutorFlinkMemory flinkMemory,
 JvmMetaspaceAndOverhead jvmMetaspaceAndOverhead,
-int numSlots) {
+int numSlots,
+Collection<ExternalResource> extendedResources) {
 
 super(flinkMemory, jvmMetaspaceAndOverhead);
 this.cpuCores = cpuCores;
 this.numSlots = numSlots;
+this.extendedResources =
+Preconditions.checkNotNull(extendedResources).stream()
+.filter(resource -> !resource.isZero())
+.collect(Collectors.toMap(ExternalResource::getName, 
Function.identity()));
+Preconditions.checkState(

Review comment:
   Same for `WorkerResourceSpec` and `TaskExecutorResourceSpec`.

##
File path: 
flink-core/src/main/java/org/apache/flink/api/common/resources/Resource.java
##
@@ -51,50 +52,51 @@ protected Resource(String name, BigDecimal value) {
 this.value = value;
 }
 
-public Resource merge(Resource other) {
+public T merge(T other) {
 checkNotNull(other, "Cannot merge with null resources");
 checkArgument(getClass() == other.getClass(), "Merge with different resource type");
-checkArgument(name.equals(other.name), "Merge with different resource name");
+checkArgument(name.equals(other.getName()), "Merge with different resource name");
 
-return create(value.add(other.value));
+return create(value.add(other.getValue()));
 }
 
-public Resource subtract(Resource other) {
+public T subtract(T other) {
 checkNotNull(other, "Cannot subtract null resources");
 checkArgument(getClass() == other.getClass(), "Minus with different resource type");
-checkArgument(name.equals(other.name), "Minus with different resource name");
+checkArgument(name.equals(other.getName()), "Minus with different resource name");
 checkArgument(
-value.compareTo(other.value) >= 0,
+value.compareTo(other.getValue()) >= 0,
 "Try to subtract a larger resource from this one.");
 
-return create(value.subtract(other.value));
+return create(value.subtract(other.getValue()));
 }
 
-public Resource multiply(BigDecimal multiplier) {
+public T multiply(BigDecimal multiplier) {
 return create(value.multiply(multiplier));
 }
 
-public Resource multiply(int multiplier) {
+public T multiply(int multiplier) {
 return multiply(BigDecimal.valueOf(multiplier));
 }
 
-public Resource divide(BigDecimal by) {
+public T divide(BigDecimal by) {
 return create(value.divide(by, 16, RoundingMode.DOWN));
 }
 
-public Resource divide(int by) {
+public T divide(int by) {
 return divide(BigDecimal.valueOf(by));
 }
 
 @Override
+@SuppressWarnings("unchecked")

Review comment:
   It would be nice to keep the scope of `@SuppressWarnings` as small as 
possible. In this case, the single statement `T other = (T) o;` should be 
enough.
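
   To illustrate the reviewer's point: `@SuppressWarnings` may be placed on a local-variable declaration, so only the single unchecked cast is exempted rather than the whole method. A minimal sketch (class and method names are illustrative):

```java
import java.util.List;

// Sketch: scope @SuppressWarnings("unchecked") to the one statement that
// needs it (a local-variable declaration) instead of the enclosing method.
public class NarrowSuppression {
    public static List<String> asStringList(Object o) {
        // Only this cast is exempt from unchecked warnings; the rest of the
        // method is still fully checked by the compiler.
        @SuppressWarnings("unchecked")
        List<String> strings = (List<String>) o;
        return strings;
    }
}
```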

##
File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/clusterframework/TaskExecutorProcessSpec.java

[GitHub] [flink] wangyang0918 commented on a change in pull request #15185: [FLINK-21768][clients]Optimize system.exit() logic of CliFrontend

2021-03-16 Thread GitBox


wangyang0918 commented on a change in pull request #15185:
URL: https://github.com/apache/flink/pull/15185#discussion_r595688406



##
File path: 
flink-clients/src/main/java/org/apache/flink/client/cli/CliFrontend.java
##
@@ -1124,19 +1124,19 @@ public static void main(final String[] args) {
 final List<CustomCommandLine> customCommandLines =
 loadCustomCommandLines(configuration, configurationDirectory);
 
+int retCode = 31;
 try {
 final CliFrontend cli = new CliFrontend(configuration, customCommandLines);

 SecurityUtils.install(new SecurityConfiguration(cli.configuration));
-int retCode =
-SecurityUtils.getInstalledContext().runSecured(() -> cli.parseAndRun(args));
-System.exit(retCode);

Review comment:
   Do you mean that Oozie will handle the system exit signal and throw an 
exception here? After that, we would go into the `catch` block and call 
system exit again.
   
   So you are suggesting to ensure that `System.exit` is called only once, right?
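
   The single-exit pattern under discussion — compute the return code through all paths and call `System.exit` exactly once at the end — can be sketched as follows; `ExitOnce` is an illustrative stand-in, not `CliFrontend`'s actual code:

```java
// Sketch: collect the return code from both the success and failure paths,
// so the caller performs exactly one System.exit. Wrappers (such as Oozie)
// that intercept the exit then cannot trigger a second exit call from a
// catch block.
public class ExitOnce {
    static int run(Runnable body) {
        int retCode = 31; // pessimistic default, as in the diff above
        try {
            body.run();
            retCode = 0;
        } catch (Throwable t) {
            t.printStackTrace();
            // keep the non-zero default; do NOT call System.exit here
        }
        return retCode;
    }
}
```

   In a `main` method this becomes `System.exit(run(...))` as the very last statement, the only exit point.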





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-21833) TemporalRowTimeJoinOperator State Leak Although configure idle.state.retention.time

2021-03-16 Thread Leonard Xu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-21833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17303059#comment-17303059
 ] 

Leonard Xu commented on FLINK-21833:


[~zicat] Thanks for the report, I'll help review this PR

> TemporalRowTimeJoinOperator State Leak Although configure 
> idle.state.retention.time
> ---
>
> Key: FLINK-21833
> URL: https://issues.apache.org/jira/browse/FLINK-21833
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.12.2
>Reporter: lynn1.zhang
>Priority: Major
>  Labels: pull-request-available
> Attachments: image-2021-03-17-11-06-21-768.png
>
>
> Using the TemporalRowTimeJoinOperator feature leads to unlimited data 
> expansion, even though idle.state.retention.time is configured.
> I have found the bug and fixed it.
> !image-2021-03-17-11-06-21-768.png!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] JingsongLi commented on a change in pull request #15236: [FLINK-21818][table] Refactor SlicingWindowAggOperatorBuilder to accept serializer instead of LogicalType

2021-03-16 Thread GitBox


JingsongLi commented on a change in pull request #15236:
URL: https://github.com/apache/flink/pull/15236#discussion_r595686521



##
File path: 
flink-table/flink-table-runtime-blink/src/main/java/org/apache/flink/table/runtime/typeutils/InternalTypeInfo.java
##
@@ -120,6 +120,10 @@ public RowDataSerializer toRowSerializer() {
 return (RowDataSerializer) typeSerializer;
 }
 
+public AbstractRowDataSerializer<RowData> toAbstractRowSerializer() {

Review comment:
   Cast outside?









[GitHub] [flink] JingsongLi commented on a change in pull request #15236: [FLINK-21818][table] Refactor SlicingWindowAggOperatorBuilder to accept serializer instead of LogicalType

2021-03-16 Thread GitBox


JingsongLi commented on a change in pull request #15236:
URL: https://github.com/apache/flink/pull/15236#discussion_r595686332



##
File path: 
flink-table/flink-table-runtime-blink/src/main/java/org/apache/flink/table/runtime/operators/aggregate/window/combines/CombineRecordsFunction.java
##
@@ -59,8 +57,11 @@
 /** Whether to copy key and input record, because key and record are 
reused. */
 private final boolean requiresCopy;
 
+/** Serializer to copy key if required. */
+private AbstractRowDataSerializer keySerializer;

Review comment:
    Now the types are:
   ```
   private AbstractRowDataSerializer<RowData> inputSerializer; // Need toBinaryRow
   private PagedTypeSerializer<RowData> keySerializer; // Need PagedTypeSerializer in BytesHashMap
   private TypeSerializer<RowData> accSerializer;
   ```









[GitHub] [flink] JingsongLi commented on a change in pull request #15236: [FLINK-21818][table] Refactor SlicingWindowAggOperatorBuilder to accept serializer instead of LogicalType

2021-03-16 Thread GitBox


JingsongLi commented on a change in pull request #15236:
URL: https://github.com/apache/flink/pull/15236#discussion_r595684946



##
File path: 
flink-table/flink-table-runtime-blink/src/main/java/org/apache/flink/table/runtime/typeutils/WindowKeySerializer.java
##
@@ -109,10 +106,9 @@ public void copy(DataInputView source, DataOutputView target) throws IOException
 @Override
 public int serializeToPages(WindowKey record, AbstractPagedOutputView target)
 throws IOException {
-int windowSkip = checkSkipWriteForWindowPart(target);
 target.writeLong(record.getWindow());
-int keySkip = keySerializer.serializeToPages(record.getKey(), target);
-return windowSkip + keySkip;
+keySerializer.serializeToPages(record.getKey(), target);
+return 0;

Review comment:
   It is OK; `keySerializer.serializeToPages` will skip if needed.









[GitHub] [flink] flinkbot commented on pull request #15248: [FLINK-21382][doc] Update documentation for standalone Flink on Kubernetes with standby JobManagers

2021-03-16 Thread GitBox


flinkbot commented on pull request #15248:
URL: https://github.com/apache/flink/pull/15248#issuecomment-800762378


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 3f30a89df59a0feb08314808efdb6e6e3b99e3a6 (Wed Mar 17 
03:20:18 UTC 2021)
   
✅no warnings
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
 Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   







[jira] [Commented] (FLINK-21820) JDBC connector shouldn't read all rows in per statement by default

2021-03-16 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-21820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17303056#comment-17303056
 ] 

Jark Wu commented on FLINK-21820:
-

Not sure about a default of 10; it would lead to too many I/O round trips.
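
The trade-off can be made concrete with a rough sketch, assuming one driver-database round trip per fetched batch (a simplification; real drivers vary):

```java
// Sketch: number of driver<->database round trips for a full table scan,
// assuming one round trip per fetched batch. fetchSize == 0 is treated as
// "fetch all rows at once" (the current connector default), which trades
// round trips for memory and risks OOM on large tables.
public class FetchSizeMath {
    static long roundTrips(long totalRows, int fetchSize) {
        if (fetchSize <= 0) {
            return 1; // everything buffered in a single batch
        }
        return (totalRows + fetchSize - 1) / fetchSize; // ceiling division
    }
}
```

For a million-row scan, a fetch size of 10 means 100,000 round trips — the concern raised above — while something in the hundreds or thousands balances memory against I/O.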

> JDBC connector shouldn't read all rows in per statement by default
> --
>
> Key: FLINK-21820
> URL: https://issues.apache.org/jira/browse/FLINK-21820
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / JDBC, Table SQL / Ecosystem
>Reporter: Leonard Xu
>Priority: Major
>
> The default value for the JDBC option 'scan.fetch-size' is 0, which means all 
> rows are read per statement; this may lead to OOM or I/O timeouts.
> We'd better set a reasonable default value.
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-21382) Standalone K8s documentation does not explain usage of standby JobManagers

2021-03-16 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-21382:
---
Labels: pull-request-available  (was: )

> Standalone K8s documentation does not explain usage of standby JobManagers
> --
>
> Key: FLINK-21382
> URL: https://issues.apache.org/jira/browse/FLINK-21382
> Project: Flink
>  Issue Type: Improvement
>  Components: Deployment / Kubernetes, Documentation
>Affects Versions: 1.12.1, 1.13.0
>Reporter: Till Rohrmann
>Assignee: Yang Wang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.13.0, 1.12.3
>
>
> Our [standalone K8s 
> documentation|https://ci.apache.org/projects/flink/flink-docs-release-1.12/deployment/resource-providers/standalone/kubernetes.html#high-availability-with-standalone-kubernetes]
>  mentions how to configure K8s HA services. It does not mention that this 
> only works with a single JobManager. When using standby JobManagers, then the 
> given deployment yamls won't work because the {{jobmanager.rpc.address}} is 
> configured to be the {{jobmanager}} service.
> Changing the configuration to work is surprisingly difficult because of a 
> lack of documentation. Moreover, it is quite difficult to pass in custom 
> configuration values when using a ConfigMap for sharing Flink's 
> {{flink-conf.yaml}}. The problem is that mounted ConfigMaps are not writable 
> from a pod perspective. See [this 
> answer|https://stackoverflow.com/a/66228073/4815083] for how one could 
> achieve it.
> I think we could improve our documentation to explain our users how to 
> configure a standalone HA cluster with standby JobManagers.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-21833) TemporalRowTimeJoinOperator State Leak Although configure idle.state.retention.time

2021-03-16 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-21833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17303054#comment-17303054
 ] 

Jark Wu commented on FLINK-21833:
-

cc [~Leonard Xu]


> TemporalRowTimeJoinOperator State Leak Although configure 
> idle.state.retention.time
> ---
>
> Key: FLINK-21833
> URL: https://issues.apache.org/jira/browse/FLINK-21833
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.12.2
>Reporter: lynn1.zhang
>Priority: Major
>  Labels: pull-request-available
> Attachments: image-2021-03-17-11-06-21-768.png
>
>
> Using the TemporalRowTimeJoinOperator feature leads to unlimited data 
> expansion, even though idle.state.retention.time is configured.
> I have found the bug and fixed it.
> !image-2021-03-17-11-06-21-768.png!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] wangyang0918 opened a new pull request #15248: [FLINK-21382][doc] Update documentation for standalone Flink on Kubernetes with standby JobManagers

2021-03-16 Thread GitBox


wangyang0918 opened a new pull request #15248:
URL: https://github.com/apache/flink/pull/15248


   This PR updates the documentation for standalone Flink on Kubernetes to 
cover HA with standby JobManagers.
   
   Note: Even if we have just one JobManager, we should use the pod IP 
instead of the Kubernetes service when HA is enabled. This is also the current 
behavior of the native Kubernetes integration.







[GitHub] [flink] flinkbot commented on pull request #15247: [FLINK-21833][Table SQL / Runtime] state leak

2021-03-16 Thread GitBox


flinkbot commented on pull request #15247:
URL: https://github.com/apache/flink/pull/15247#issuecomment-800760925


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 26b7237eea7690de13b6b8d6a655b27964987a2b (Wed Mar 17 
03:16:34 UTC 2021)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
* **This pull request references an unassigned [Jira 
ticket](https://issues.apache.org/jira/browse/FLINK-21833).** According to the 
[code contribution 
guide](https://flink.apache.org/contributing/contribute-code.html), tickets 
need to be assigned before starting with the implementation work.
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
 Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   







[GitHub] [flink] flinkbot edited a comment on pull request #15233: [FLINK-21815][table-planner-blink] Support json ser/de for StreamExecUnion

2021-03-16 Thread GitBox


flinkbot edited a comment on pull request #15233:
URL: https://github.com/apache/flink/pull/15233#issuecomment-800090052


   
   ## CI report:
   
   * 643cf2451ef1a7df46999f2c5cebb36c76f41c75 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=14805)
 
   * eeb3175f76962360e7966fb7664b03a05e170622 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #15219: [FLINK-21802][table-planner-blink]LogicalTypeJsonDeserializer/Serializer support custom RowType/MapType/ArrayType/MultisetType

2021-03-16 Thread GitBox


flinkbot edited a comment on pull request #15219:
URL: https://github.com/apache/flink/pull/15219#issuecomment-799585147


   
   ## CI report:
   
   * 8a320ad9b0780b4b18c23c68314ffd2b750f259c UNKNOWN
   * 3fe7a96770ce45c4f9130ba16d31cb38341df563 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=14850)
 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=14804)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #15215: [FLINK-21785][table-planner-blink] Support json ser/de for StreamExecCorrelate

2021-03-16 Thread GitBox


flinkbot edited a comment on pull request #15215:
URL: https://github.com/apache/flink/pull/15215#issuecomment-799415392


   
   ## CI report:
   
   * bc8ac05b4db95e83d09acd6764c24f5a65f1ff9b Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=14799)
 
   * 4be10922d380460d6166cc9b4d0ceab1ed611a7f UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #15197: [FLINK-21462][sql client] Use configuration to store the option and value in Sql client

2021-03-16 Thread GitBox


flinkbot edited a comment on pull request #15197:
URL: https://github.com/apache/flink/pull/15197#issuecomment-798871985


   
   ## CI report:
   
   * 546bb52a009fee535df450c4ce569f1d8019ff6a UNKNOWN
   * a126b0d57b528509a8a9292d218df984542a745d Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=14827)
 
   * 734f3d41b850e2db3edef894d5037c90134de85f UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[jira] [Updated] (FLINK-21833) TemporalRowTimeJoinOperator State Leak Although configure idle.state.retention.time

2021-03-16 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-21833:
---
Labels: pull-request-available  (was: )

> TemporalRowTimeJoinOperator State Leak Although configure 
> idle.state.retention.time
> ---
>
> Key: FLINK-21833
> URL: https://issues.apache.org/jira/browse/FLINK-21833
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.12.2
>Reporter: lynn1.zhang
>Priority: Major
>  Labels: pull-request-available
> Attachments: image-2021-03-17-11-06-21-768.png
>
>
> Using the TemporalRowTimeJoinOperator feature leads to unlimited data 
> expansion, even though idle.state.retention.time is configured.
> I have found the bug and fixed it.
> !image-2021-03-17-11-06-21-768.png!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] zicat opened a new pull request #15247: [FLINK-21833][Table SQL / Runtime] state leak

2021-03-16 Thread GitBox


zicat opened a new pull request #15247:
URL: https://github.com/apache/flink/pull/15247


   
   
   ## What is the purpose of the change
   
   *(For example: This pull request makes task deployment go through the blob 
server, rather than through RPC. That way we avoid re-transferring them on each 
deployment (during recovery).)*
   
   
   ## Brief change log
   
   *(for example:)*
 - *The TaskInfo is stored in the blob store on job creation time as a 
persistent artifact*
 - *Deployments RPC transmits only the blob storage reference*
 - *TaskManagers retrieve the TaskInfo from the blob cache*
   
   
   ## Verifying this change
   
   *(Please pick either of the following options)*
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   *(or)*
   
   This change is already covered by existing tests, such as *(please describe 
tests)*.
   
   *(or)*
   
   This change added tests and can be verified as follows:
   
   *(example:)*
 - *Added integration tests for end-to-end deployment with large payloads 
(100MB)*
 - *Extended integration test for recovery after master (JobManager) 
failure*
 - *Added test that validates that TaskInfo is transferred only once across 
recoveries*
 - *Manually verified the change by running a 4 node cluser with 2 
JobManagers and 4 TaskManagers, a stateful streaming program, and killing one 
JobManager and two TaskManagers during the execution, verifying that recovery 
happens correctly.*
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / no)
 - The serializers: (yes / no / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / no / 
don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (yes / no / don't 
know)
 - The S3 file system connector: (yes / no / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / no)
 - If yes, how is the feature documented? (not applicable / docs / JavaDocs 
/ not documented)
   







[GitHub] [flink] flinkbot edited a comment on pull request #14916: [FLINK-21345][Table SQL / Planner] Fix BUG of Union All join Temporal…

2021-03-16 Thread GitBox


flinkbot edited a comment on pull request #14916:
URL: https://github.com/apache/flink/pull/14916#issuecomment-776614443


   
   ## CI report:
   
   * cf1e072ffe4e9f9b0dfede859c72eabbde6428e9 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=14773)
 
   * 43e18527d2fe38a1570699c06365ac377132b20b Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=14849)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[jira] [Commented] (FLINK-21103) E2e tests time out on azure

2021-03-16 Thread Guowei Ma (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-21103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17303052#comment-17303052
 ] 

Guowei Ma commented on FLINK-21103:
---

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=14791=results

> E2e tests time out on azure
> ---
>
> Key: FLINK-21103
> URL: https://issues.apache.org/jira/browse/FLINK-21103
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / Azure Pipelines, Tests
>Affects Versions: 1.11.3, 1.12.1, 1.13.0
>Reporter: Dawid Wysakowicz
>Priority: Blocker
>  Labels: test-stability
> Fix For: 1.13.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=12377=logs=c88eea3b-64a0-564d-0031-9fdcd7b8abee=ff888d9b-cd34-53cc-d90f-3e446d355529
> {code}
> Creating worker2 ... done
> Jan 22 13:16:17 Waiting for hadoop cluster to come up. We have been trying 
> for 0 seconds, retrying ...
> Jan 22 13:16:22 Waiting for hadoop cluster to come up. We have been trying 
> for 5 seconds, retrying ...
> Jan 22 13:16:27 Waiting for hadoop cluster to come up. We have been trying 
> for 10 seconds, retrying ...
> Jan 22 13:16:32 Waiting for hadoop cluster to come up. We have been trying 
> for 15 seconds, retrying ...
> Jan 22 13:16:37 Waiting for hadoop cluster to come up. We have been trying 
> for 20 seconds, retrying ...
> Jan 22 13:16:43 Waiting for hadoop cluster to come up. We have been trying 
> for 26 seconds, retrying ...
> Jan 22 13:16:48 Waiting for hadoop cluster to come up. We have been trying 
> for 31 seconds, retrying ...
> Jan 22 13:16:53 Waiting for hadoop cluster to come up. We have been trying 
> for 36 seconds, retrying ...
> Jan 22 13:16:58 Waiting for hadoop cluster to come up. We have been trying 
> for 41 seconds, retrying ...
> Jan 22 13:17:03 Waiting for hadoop cluster to come up. We have been trying 
> for 46 seconds, retrying ...
> Jan 22 13:17:08 We only have 0 NodeManagers up. We have been trying for 0 
> seconds, retrying ...
> 21/01/22 13:17:10 INFO client.RMProxy: Connecting to ResourceManager at 
> master.docker-hadoop-cluster-network/172.19.0.3:8032
> 21/01/22 13:17:11 INFO client.AHSProxy: Connecting to Application History 
> server at master.docker-hadoop-cluster-network/172.19.0.3:10200
> Jan 22 13:17:11 We now have 2 NodeManagers up.
> ==
> === WARNING: This E2E Run took already 80% of the allocated time budget of 
> 250 minutes ===
> ==
> ==
> === WARNING: This E2E Run will time out in the next few minutes. Starting to 
> upload the log output ===
> ==
> ##[error]The task has timed out.
> Async Command Start: Upload Artifact
> Uploading 1 files
> File upload succeed.
> Upload '/tmp/_e2e_watchdog.output.0' to file container: 
> '#/11824779/e2e-timeout-logs'
> Associated artifact 140921 with build 12377
> Async Command End: Upload Artifact
> Async Command Start: Upload Artifact
> Uploading 1 files
> File upload succeed.
> Upload '/tmp/_e2e_watchdog.output.1' to file container: 
> '#/11824779/e2e-timeout-logs'
> Associated artifact 140921 with build 12377
> Async Command End: Upload Artifact
> Async Command Start: Upload Artifact
> Uploading 1 files
> File upload succeed.
> Upload '/tmp/_e2e_watchdog.output.2' to file container: 
> '#/11824779/e2e-timeout-logs'
> Associated artifact 140921 with build 12377
> Async Command End: Upload Artifact
> Finishing: Run e2e tests
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (FLINK-21532) Make CatalogTableImpl#toProperties and CatalogTableImpl#fromProperties case sensitive

2021-03-16 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21532?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu closed FLINK-21532.
---
Fix Version/s: 1.13.0
   Resolution: Fixed

Fixed in master: 2863ed015865316399f8e9d00fd51c6a22c28bbf

> Make CatalogTableImpl#toProperties and CatalogTableImpl#fromProperties case 
> sensitive
> -
>
> Key: FLINK-21532
> URL: https://issues.apache.org/jira/browse/FLINK-21532
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Reporter: Lsw_aka_laplace
>Assignee: Lsw_aka_laplace
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.13.0
>
>
> For now, both the legacy Table/SQL API and implementation and the current 
> Table/SQL API and implementation are case sensitive when it comes to the 
> keys of table properties. Though all-lower-case keys are highly recommended, 
> keys in upper case still work well.
> But the following case in the current code seems a little misleading.
> Given a DDL statement:
> """
>  create table a (f string) with ('K1' = 'xxx')
> """
> The property map of the corresponding `CatalogTableImpl` is Map(K1->xxx).
> After calling `CatalogTableImpl#toProperties` and then 
> `CatalogTableImpl#fromProperties`, we get a `CatalogTableImpl` with the 
> property Map(k1->xxx): the upper-case letter has been converted to lower 
> case.
>  
> After reading the code, the reason seems to be that the two methods 
> mentioned above normalize keys by default, and this cannot be configured.
>  
> An easy fix is `DescriptorProperties descriptorProperties = new 
> DescriptorProperties(false)`, which means keys are not normalized to lower 
> case.
>  
> However, for fear that this could break some underlying rules and cause 
> unexpected incompatibility, this part should be well discussed.
> What's more, a better alternative is welcome.
>  
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
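
The lower-casing round trip described in the issue can be reproduced with a self-contained sketch (plain maps, not Flink's actual `DescriptorProperties`); the `normalizeKeys` flag mirrors the proposed `new DescriptorProperties(false)`:

```java
import java.util.HashMap;
import java.util.Locale;
import java.util.Map;

// Sketch of the round-trip problem: a serializer that normalizes keys to
// lower case cannot restore 'K1' = 'xxx'. Making normalization optional
// (normalizeKeys = false) preserves the original key casing.
public class CaseSensitiveProps {
    static Map<String, String> roundTrip(Map<String, String> props, boolean normalizeKeys) {
        Map<String, String> out = new HashMap<>();
        for (Map.Entry<String, String> e : props.entrySet()) {
            String key = normalizeKeys ? e.getKey().toLowerCase(Locale.ROOT) : e.getKey();
            out.put(key, e.getValue());
        }
        return out;
    }
}
```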


[GitHub] [flink] wuchong merged pull request #15234: [FLINK-21532][Table SQL/API]Make CatalogTableImpl#toProperties and Ca…

2021-03-16 Thread GitBox


wuchong merged pull request #15234:
URL: https://github.com/apache/flink/pull/15234


   






