[GitHub] [flink] flinkbot edited a comment on pull request #15013: [FLINK-21490][datastream] Make job graph generation deterministic in respect to hashes of input nodes.

2021-02-24 Thread GitBox


flinkbot edited a comment on pull request #15013:
URL: https://github.com/apache/flink/pull/15013#issuecomment-785429756


   
   ## CI report:
   
   * f2c1726aadbac68116f40e49698b6fa2457fd4e4 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13726)
 
   * e8d9945955c0730486f3d64da4001b03bfe3be66 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13748)
 
   * e618698ebd320e7c1830b1b2a4c3aa0854ab5112 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #15014: [FLINK-21490][datastream] Make job graph generation deterministic in respect to hashes of input nodes. [1.12]

2021-02-24 Thread GitBox


flinkbot edited a comment on pull request #15014:
URL: https://github.com/apache/flink/pull/15014#issuecomment-785429830


   
   ## CI report:
   
   * 076ef8d7f17282ee1be79f57612a4e4c70a472f9 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13727)
 
   * 4c79ec889a7f5eb36773adf4e8c61f6108acaa50 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13749)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #14994: [FLINK-21452][connector/common] Stop snapshotting registered readers in source coordinator.

2021-02-24 Thread GitBox


flinkbot edited a comment on pull request #14994:
URL: https://github.com/apache/flink/pull/14994#issuecomment-784019705


   
   ## CI report:
   
   * d4f4152b9881a9349eae5c5549016f8e9d87da9b Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13647)
 
   * 82266e5cbe7a60169f070e232ee10ee57a1d9bd5 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13747)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #15018: [FLINK-21460][table api] Use Configuration to create TableEnvironment

2021-02-24 Thread GitBox


flinkbot commented on pull request #15018:
URL: https://github.com/apache/flink/pull/15018#issuecomment-785696441


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit a44e1a4752e0b561b37b6403073e161157975a0e (Thu Feb 25 
07:53:30 UTC 2021)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
* **This pull request references an unassigned [Jira 
ticket](https://issues.apache.org/jira/browse/FLINK-21460).** According to the 
[code contribution 
guide](https://flink.apache.org/contributing/contribute-code.html), tickets 
need to be assigned before starting with the implementation work.
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-21499) maven package hang when using multithread on jdk 11

2021-02-24 Thread Matthias (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-21499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17290754#comment-17290754
 ] 

Matthias commented on FLINK-21499:
--

Ok, I missed that FLINK-20092 already tackles this issue. You can find 
follow-ups on this discussion in that issue's comment section.

> maven package hang when using multithread on jdk 11
> ---
>
> Key: FLINK-21499
> URL: https://issues.apache.org/jira/browse/FLINK-21499
> Project: Flink
>  Issue Type: Bug
>  Components: Build System
>Affects Versions: 1.12.1
>Reporter: Zhengqi Zhang
>Priority: Major
> Attachments: 82210_jstack.txt, image-2021-02-25-14-27-46-087.png
>
>
> When I add the -t parameter to the MVN clean package command, the compilation 
> gets stuck. By turning on the Maven Debug log, you can observe a large number 
> of repeated logs like the one below, seemingly stuck in an endless loop.
>  
> I printed the thread stack and attached it.
>  
> !image-2021-02-25-14-27-46-087.png|width=1438,height=876!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-21460) Use Configuration to create TableEnvironment

2021-02-24 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21460?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-21460:
---
Labels: pull-request-available  (was: )

> Use Configuration to create TableEnvironment
> 
>
> Key: FLINK-21460
> URL: https://issues.apache.org/jira/browse/FLINK-21460
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Affects Versions: 1.13.0
>Reporter: Shengkai Fang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.13.0
>
>
> We can use new options {{table.planner}}, {{execution.runtime-mode}} to 
> create table environment. However, it's not allowed to modify the planner 
> type or execution mode when table environment is built.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] fsk119 opened a new pull request #15018: [FLINK-21460][table api] Use Configuration to create TableEnvironment

2021-02-24 Thread GitBox


fsk119 opened a new pull request #15018:
URL: https://github.com/apache/flink/pull/15018


   
   
   ## What is the purpose of the change
   
   *Allow to create table environment from `Configuration`. However, it's not 
allowed to modify the planner type or execution mode when table environment is 
built.*
   
   
   ## Brief change log
   
 - *Add `TableEnvironment#create(Configuration)`*
 - *Add method to convert between `Configuration` and `EnvironmentSettings`*
   
   ## Verifying this change
   
 - *Add test about transform between `EnvironmentSetting` and 
`Configuration`*
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / **no**)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (**yes** / no)
 - The serializers: (yes / **no** / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / **no** 
/ don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (yes / **no** / 
don't know)
 - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (**yes** / no)
 - If yes, how is the feature documented? (not applicable / docs / 
**JavaDocs** / not documented)
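   
   A minimal usage sketch of the proposed API. The option keys follow the FLINK-21460 description and `TableEnvironment#create(Configuration)` is the method added by this PR; the concrete option values and the example table are illustrative assumptions only:
   
```java
import org.apache.flink.configuration.Configuration;
import org.apache.flink.table.api.TableEnvironment;

public class ConfigurationBasedTableEnvironment {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Option keys taken from the issue description; the values ("blink",
        // "batch") are illustrative and cannot be changed once the environment
        // has been built, as described above.
        conf.setString("table.planner", "blink");
        conf.setString("execution.runtime-mode", "batch");

        // Proposed API from this PR: derive the environment from the Configuration.
        TableEnvironment tEnv = TableEnvironment.create(conf);

        tEnv.executeSql("CREATE TABLE src (id INT) WITH ('connector' = 'datagen')");
        tEnv.executeSql("SELECT id FROM src").print();
    }
}
```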
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-21499) maven package hang when using multithread on jdk 11

2021-02-24 Thread Matthias (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-21499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17290751#comment-17290751
 ] 

Matthias commented on FLINK-21499:
--

Hi [~Tony Giao], thanks for reporting this issue. Could you provide the exact 
{{mvn}} command you executed? You mention the {{-t}} option in the description 
but talk about multi-threading in the headline, which makes me think that you 
are actually talking about {{-T}} here. Looking into MSHADE-384 (which seems to 
be related) supports this assumption. Could you verify (and fix) the issue 
accordingly?

Another question: is this behavior only happening when running {{package}} on 
the overall project or could we pin this down to sub-modules as well using 
{{-DskipTests}} or even {{compile}} (as you mentioned that the behavior happens 
during compilation)? I'm asking because I wasn't able to reproduce it using the 
{{flink-walkthrough}} module mentioned in your screenshot.

> maven package hang when using multithread on jdk 11
> ---
>
> Key: FLINK-21499
> URL: https://issues.apache.org/jira/browse/FLINK-21499
> Project: Flink
>  Issue Type: Bug
>  Components: Build System
>Affects Versions: 1.12.1
>Reporter: Zhengqi Zhang
>Priority: Major
> Attachments: 82210_jstack.txt, image-2021-02-25-14-27-46-087.png
>
>
> When I add the -t parameter to the MVN clean package command, the compilation 
> gets stuck. By turning on the Maven Debug log, you can observe a large number 
> of repeated logs like the one below, seemingly stuck in an endless loop.
>  
> I printed the thread stack and attached it.
>  
> !image-2021-02-25-14-27-46-087.png|width=1438,height=876!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (FLINK-21499) maven package hang when using multithread on jdk 11

2021-02-24 Thread Chesnay Schepler (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21499?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler closed FLINK-21499.

Resolution: Duplicate

> maven package hang when using multithread on jdk 11
> ---
>
> Key: FLINK-21499
> URL: https://issues.apache.org/jira/browse/FLINK-21499
> Project: Flink
>  Issue Type: Bug
>  Components: Build System
>Affects Versions: 1.12.1
>Reporter: Zhengqi Zhang
>Priority: Major
> Attachments: 82210_jstack.txt, image-2021-02-25-14-27-46-087.png
>
>
> When I add the -t parameter to the MVN clean package command, the compilation 
> gets stuck. By turning on the Maven Debug log, you can observe a large number 
> of repeated logs like the one below, seemingly stuck in an endless loop.
>  
> I printed the thread stack and attached it.
>  
> !image-2021-02-25-14-27-46-087.png|width=1438,height=876!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #15014: [FLINK-21490][datastream] Make job graph generation deterministic in respect to hashes of input nodes. [1.12]

2021-02-24 Thread GitBox


flinkbot edited a comment on pull request #15014:
URL: https://github.com/apache/flink/pull/15014#issuecomment-785429830


   
   ## CI report:
   
   * 076ef8d7f17282ee1be79f57612a4e4c70a472f9 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13727)
 
   * 4c79ec889a7f5eb36773adf4e8c61f6108acaa50 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #15013: [FLINK-21490][datastream] Make job graph generation deterministic in respect to hashes of input nodes.

2021-02-24 Thread GitBox


flinkbot edited a comment on pull request #15013:
URL: https://github.com/apache/flink/pull/15013#issuecomment-785429756


   
   ## CI report:
   
   * f2c1726aadbac68116f40e49698b6fa2457fd4e4 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13726)
 
   * e8d9945955c0730486f3d64da4001b03bfe3be66 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #15003: [FLINK-21482][table-planner-blink] Support grouping set syntax for WindowAggregate

2021-02-24 Thread GitBox


flinkbot edited a comment on pull request #15003:
URL: https://github.com/apache/flink/pull/15003#issuecomment-785052606


   
   ## CI report:
   
   * 931a8b3776e71f09ecdcd74b1851dbc0ae035c6e Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13734)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #14994: [FLINK-21452][connector/common] Stop snapshotting registered readers in source coordinator.

2021-02-24 Thread GitBox


flinkbot edited a comment on pull request #14994:
URL: https://github.com/apache/flink/pull/14994#issuecomment-784019705


   
   ## CI report:
   
   * d4f4152b9881a9349eae5c5549016f8e9d87da9b Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13647)
 
   * 82266e5cbe7a60169f070e232ee10ee57a1d9bd5 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #14977: [FLINK-18726][table-planner-blink] Support INSERT INTO specific colum…

2021-02-24 Thread GitBox


flinkbot edited a comment on pull request #14977:
URL: https://github.com/apache/flink/pull/14977#issuecomment-783054931


   
   ## CI report:
   
   * cbb43f980354f2a108ec2f40fe4e7c194c7f73a1 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13746)
 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13672)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #14775: [FLINK-20964][python] Introduce PythonStreamGroupWindowAggregateOperator

2021-02-24 Thread GitBox


flinkbot edited a comment on pull request #14775:
URL: https://github.com/apache/flink/pull/14775#issuecomment-768234660


   
   ## CI report:
   
   * d8f3c75d291d0050ee56f82aa019418718bf87a5 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13732)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-21499) maven package hang when using multithread on jdk 11

2021-02-24 Thread Matthias (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21499?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias updated FLINK-21499:
-
Component/s: Build System

> maven package hang when using multithread on jdk 11
> ---
>
> Key: FLINK-21499
> URL: https://issues.apache.org/jira/browse/FLINK-21499
> Project: Flink
>  Issue Type: Bug
>  Components: Build System
>Affects Versions: 1.12.1
>Reporter: Zhengqi Zhang
>Priority: Major
> Attachments: 82210_jstack.txt, image-2021-02-25-14-27-46-087.png
>
>
> When I add the -t parameter to the MVN clean package command, the compilation 
> gets stuck. By turning on the Maven Debug log, you can observe a large number 
> of repeated logs like the one below, seemingly stuck in an endless loop.
>  
> I printed the thread stack and attached it.
>  
> !image-2021-02-25-14-27-46-087.png|width=1438,height=876!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-21490) UnalignedCheckpointITCase fails on azure

2021-02-24 Thread Arvid Heise (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-21490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17290742#comment-17290742
 ] 

Arvid Heise commented on FLINK-21490:
-

The error is probably test-only:
For some reason the test does not terminate after 10 successful checkpoints (to 
be investigated).

{noformat}
12:21:43,173 [Checkpoint Timer] INFO  
org.apache.flink.runtime.checkpoint.CheckpointCoordinator[] - Triggering 
checkpoint 3165 (type=CHECKPOINT) @ 1614169303172 for job 
78d8cb678ee2304d517c9e42bff43aea.
{noformat}

I suspect that we overflow {{MAX_INT}} in {{value}}, and then {{checkHeader}} 
fails as it uses the upper 4 bytes of the long. We have already hardened that 
part to give a meaningful exception in the {{UCRescaleITCase}}, but it might be 
a good idea to extract that to this ticket as that test will only go into 
master.

So for now I'd harden the test. There is also a related issue with unions that 
I initially suspected.
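
To make the suspected failure mode concrete, here is a toy sketch. The field layout and names are assumptions for illustration, not Flink's actual record format: once the counter in the lower 4 bytes wraps past its 32-bit limit, the carry spills into the upper 4 bytes and the header check fails.

{code:java}
public class HeaderOverflowSketch {
    // Hypothetical layout: a magic header in the upper 4 bytes, a counter in the lower 4 bytes.
    private static final long HEADER = 0xABCD1234L << 32;

    static long pack(long counter) {
        return HEADER | (counter & 0xFFFFFFFFL);
    }

    static boolean checkHeader(long value) {
        // Mirrors the idea of validating only the upper 4 bytes of the long.
        return (value >>> 32) == (HEADER >>> 32);
    }

    public static void main(String[] args) {
        long atLimit = pack(0xFFFFFFFFL); // counter at its 32-bit limit
        long overflowed = atLimit + 1;    // the carry corrupts the upper 4 bytes

        System.out.println(checkHeader(atLimit));    // true
        System.out.println(checkHeader(overflowed)); // false -- header check now fails
    }
}
{code}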

> UnalignedCheckpointITCase fails on azure
> 
>
> Key: FLINK-21490
> URL: https://issues.apache.org/jira/browse/FLINK-21490
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Affects Versions: 1.13.0
>Reporter: Dawid Wysakowicz
>Priority: Major
>  Labels: pull-request-available, test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=13682&view=logs&j=34f41360-6c0d-54d3-11a1-0292a2def1d9&t=2d56e022-1ace-542f-bf1a-b37dd63243f2
> {code}
> org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
>   at 
> org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:144)
>   at 
> org.apache.flink.runtime.minicluster.MiniClusterJobClient.lambda$getJobExecutionResult$3(MiniClusterJobClient.java:137)
>   at 
> java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:616)
>   at 
> java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:591)
>   at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
>   at 
> java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
>   at 
> org.apache.flink.runtime.rpc.akka.AkkaInvocationHandler.lambda$invokeRpc$0(AkkaInvocationHandler.java:237)
>   at 
> java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774)
>   at 
> java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750)
>   at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
>   at 
> java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
>   at 
> org.apache.flink.runtime.concurrent.FutureUtils$1.onComplete(FutureUtils.java:1066)
>   at akka.dispatch.OnComplete.internal(Future.scala:264)
>   at akka.dispatch.OnComplete.internal(Future.scala:261)
>   at akka.dispatch.japi$CallbackBridge.apply(Future.scala:191)
>   at akka.dispatch.japi$CallbackBridge.apply(Future.scala:188)
>   at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36)
>   at 
> org.apache.flink.runtime.concurrent.Executors$DirectExecutionContext.execute(Executors.java:73)
>   at 
> scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:44)
>   at 
> scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:252)
>   at akka.pattern.PromiseActorRef.$bang(AskSupport.scala:572)
>   at 
> akka.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:22)
>   at 
> akka.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:21)
>   at scala.concurrent.Future$$anonfun$andThen$1.apply(Future.scala:436)
>   at scala.concurrent.Future$$anonfun$andThen$1.apply(Future.scala:435)
>   at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36)
>   at 
> akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:55)
>   at 
> akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:91)
>   at 
> akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply(BatchingExecutor.scala:91)
>   at 
> akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply(BatchingExecutor.scala:91)
>   at 
> scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
>   at 
> akka.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:90)
>   at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:40)
>   at 
> akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:44)
>   at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
>   at 
> 

[jira] [Commented] (FLINK-21473) Migrate ParquetInputFormat to BulkFormat interface

2021-02-24 Thread Jingsong Lee (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-21473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17290741#comment-17290741
 ] 

Jingsong Lee commented on FLINK-21473:
--

[~ZhenqiuHuang] I think it is good.

> Migrate ParquetInputFormat to BulkFormat interface
> --
>
> Key: FLINK-21473
> URL: https://issues.apache.org/jira/browse/FLINK-21473
> Project: Flink
>  Issue Type: Improvement
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Affects Versions: 1.12.1
>Reporter: Zhenqiu Huang
>Assignee: Zhenqiu Huang
>Priority: Minor
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-21497) FileSourceTextLinesITCase.testBoundedTextFileSourceWithJobManagerFailover fail

2021-02-24 Thread Matthias (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-21497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17290738#comment-17290738
 ] 

Matthias commented on FLINK-21497:
--

Thanks [~maguowei] for reporting this test instability. I'm moving this issue 
into FLINK-21075, where we collect issues related to the current FLIP-160 work.

> FileSourceTextLinesITCase.testBoundedTextFileSourceWithJobManagerFailover fail
> --
>
> Key: FLINK-21497
> URL: https://issues.apache.org/jira/browse/FLINK-21497
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination
>Affects Versions: 1.13.0
>Reporter: Guowei Ma
>Priority: Major
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=13722&view=logs&j=a5ef94ef-68c2-57fd-3794-dc108ed1c495&t=9c1ddabe-d186-5a2c-5fcc-f3cafb3ec699
> {code:java}
> 2021-02-24T22:47:55.4844360Z java.lang.RuntimeException: Failed to fetch next 
> result
> 2021-02-24T22:47:55.4847421Z  at 
> org.apache.flink.streaming.api.operators.collect.CollectResultIterator.nextResultFromFetcher(CollectResultIterator.java:109)
> 2021-02-24T22:47:55.4848395Z  at 
> org.apache.flink.streaming.api.operators.collect.CollectResultIterator.hasNext(CollectResultIterator.java:80)
> 2021-02-24T22:47:55.4849262Z  at 
> org.apache.flink.connector.file.src.FileSourceTextLinesITCase.testBoundedTextFileSource(FileSourceTextLinesITCase.java:148)
> 2021-02-24T22:47:55.4850030Z  at 
> org.apache.flink.connector.file.src.FileSourceTextLinesITCase.testBoundedTextFileSourceWithJobManagerFailover(FileSourceTextLinesITCase.java:108)
> 2021-02-24T22:47:55.4850780Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2021-02-24T22:47:55.4851322Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2021-02-24T22:47:55.4858977Z  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2021-02-24T22:47:55.4860737Z  at 
> java.lang.reflect.Method.invoke(Method.java:498)
> 2021-02-24T22:47:55.4861855Z  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> 2021-02-24T22:47:55.4862873Z  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 2021-02-24T22:47:55.4863598Z  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> 2021-02-24T22:47:55.4864289Z  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> 2021-02-24T22:47:55.4864937Z  at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
> 2021-02-24T22:47:55.4865570Z  at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
> 2021-02-24T22:47:55.4866152Z  at 
> org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
> 2021-02-24T22:47:55.4866670Z  at 
> org.junit.rules.RunRules.evaluate(RunRules.java:20)
> 2021-02-24T22:47:55.4867172Z  at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
> 2021-02-24T22:47:55.4867765Z  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> 2021-02-24T22:47:55.4868588Z  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
> 2021-02-24T22:47:55.4869683Z  at 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> 2021-02-24T22:47:55.4886595Z  at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> 2021-02-24T22:47:55.4887656Z  at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> 2021-02-24T22:47:55.4888451Z  at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> 2021-02-24T22:47:55.4889199Z  at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> 2021-02-24T22:47:55.4889845Z  at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
> 2021-02-24T22:47:55.4890447Z  at 
> org.junit.rules.RunRules.evaluate(RunRules.java:20)
> 2021-02-24T22:47:55.4891037Z  at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:363)
> 2021-02-24T22:47:55.4891604Z  at 
> org.junit.runners.Suite.runChild(Suite.java:128)
> 2021-02-24T22:47:55.4892235Z  at 
> org.junit.runners.Suite.runChild(Suite.java:27)
> 2021-02-24T22:47:55.4892959Z  at 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> 2021-02-24T22:47:55.4893573Z  at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> 2021-02-24T22:47:55.4894216Z  at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> 2021-02-24T22:47:55.4894824Z  at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> 2021-02-24T22:47:55.4895425Z  at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> 2021-02-24T22:47:55.4896027Z  

[jira] [Updated] (FLINK-21497) FileSourceTextLinesITCase.testBoundedTextFileSourceWithJobManagerFailover fail

2021-02-24 Thread Matthias (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21497?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias updated FLINK-21497:
-
Parent: FLINK-21075
Issue Type: Sub-task  (was: Bug)

> FileSourceTextLinesITCase.testBoundedTextFileSourceWithJobManagerFailover fail
> --
>
> Key: FLINK-21497
> URL: https://issues.apache.org/jira/browse/FLINK-21497
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination
>Affects Versions: 1.13.0
>Reporter: Guowei Ma
>Priority: Major
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=13722&view=logs&j=a5ef94ef-68c2-57fd-3794-dc108ed1c495&t=9c1ddabe-d186-5a2c-5fcc-f3cafb3ec699
> {code:java}
> 2021-02-24T22:47:55.4844360Z java.lang.RuntimeException: Failed to fetch next 
> result
> 2021-02-24T22:47:55.4847421Z  at 
> org.apache.flink.streaming.api.operators.collect.CollectResultIterator.nextResultFromFetcher(CollectResultIterator.java:109)
> 2021-02-24T22:47:55.4848395Z  at 
> org.apache.flink.streaming.api.operators.collect.CollectResultIterator.hasNext(CollectResultIterator.java:80)
> 2021-02-24T22:47:55.4849262Z  at 
> org.apache.flink.connector.file.src.FileSourceTextLinesITCase.testBoundedTextFileSource(FileSourceTextLinesITCase.java:148)
> 2021-02-24T22:47:55.4850030Z  at 
> org.apache.flink.connector.file.src.FileSourceTextLinesITCase.testBoundedTextFileSourceWithJobManagerFailover(FileSourceTextLinesITCase.java:108)
> 2021-02-24T22:47:55.4850780Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2021-02-24T22:47:55.4851322Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2021-02-24T22:47:55.4858977Z  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2021-02-24T22:47:55.4860737Z  at 
> java.lang.reflect.Method.invoke(Method.java:498)
> 2021-02-24T22:47:55.4861855Z  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> 2021-02-24T22:47:55.4862873Z  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 2021-02-24T22:47:55.4863598Z  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> 2021-02-24T22:47:55.4864289Z  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> 2021-02-24T22:47:55.4864937Z  at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
> 2021-02-24T22:47:55.4865570Z  at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
> 2021-02-24T22:47:55.4866152Z  at 
> org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
> 2021-02-24T22:47:55.4866670Z  at 
> org.junit.rules.RunRules.evaluate(RunRules.java:20)
> 2021-02-24T22:47:55.4867172Z  at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
> 2021-02-24T22:47:55.4867765Z  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> 2021-02-24T22:47:55.4868588Z  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
> 2021-02-24T22:47:55.4869683Z  at 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> 2021-02-24T22:47:55.4886595Z  at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> 2021-02-24T22:47:55.4887656Z  at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> 2021-02-24T22:47:55.4888451Z  at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> 2021-02-24T22:47:55.4889199Z  at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> 2021-02-24T22:47:55.4889845Z  at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
> 2021-02-24T22:47:55.4890447Z  at 
> org.junit.rules.RunRules.evaluate(RunRules.java:20)
> 2021-02-24T22:47:55.4891037Z  at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:363)
> 2021-02-24T22:47:55.4891604Z  at 
> org.junit.runners.Suite.runChild(Suite.java:128)
> 2021-02-24T22:47:55.4892235Z  at 
> org.junit.runners.Suite.runChild(Suite.java:27)
> 2021-02-24T22:47:55.4892959Z  at 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> 2021-02-24T22:47:55.4893573Z  at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> 2021-02-24T22:47:55.4894216Z  at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> 2021-02-24T22:47:55.4894824Z  at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> 2021-02-24T22:47:55.4895425Z  at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> 2021-02-24T22:47:55.4896027Z  at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:363)
> 2021-02-24T22:47:55.4896638Z  at 
> 

[GitHub] [flink] wuchong commented on a change in pull request #14725: [FLINK-20977] Fix use the "USE DATABASE" command bug

2021-02-24 Thread GitBox


wuchong commented on a change in pull request #14725:
URL: https://github.com/apache/flink/pull/14725#discussion_r582595061



##
File path: 
flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/cli/SqlCommandParser.java
##
@@ -256,7 +256,7 @@ private static SqlCommandCall parseBySqlParser(Parser 
sqlParser, String stmt) {
 
 USE_CATALOG,
 
-USE,
+USE("USE\\s+(.*)", SINGLE_OPERAND),

Review comment:
   I agree this can fix the problem. However, this falls back to the old way, 
which uses a regex to parse the SQL. We should use the TableEnvironment to 
parse common SQL statements. 
   
   Therefore, a simpler fix would be to add quotes around identifiers in 
`org.apache.flink.table.client.cli.CliClient#callUseDatabase`.
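
   A minimal sketch of the quoting idea described above. The helper name and its placement are hypothetical; only the escaping/quoting approach is illustrated, not the actual CliClient code:

```java
// Quote the database identifier before handing the statement to the parser,
// so reserved words such as `default` or `mod` are accepted.
private static String buildUseDatabaseStatement(String databaseName) {
    // Escape backticks inside the name, then wrap the whole identifier in backticks.
    String escaped = databaseName.replace("`", "``");
    return "USE `" + escaped + "`";
}
```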





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-20977) can not use `use` command to switch database

2021-02-24 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20977?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu updated FLINK-20977:

Fix Version/s: 1.12.2

> can not use `use` command to switch database 
> -
>
> Key: FLINK-20977
> URL: https://issues.apache.org/jira/browse/FLINK-20977
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Client
>Affects Versions: 1.12.0
>Reporter: Jun Zhang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.12.2, 1.13.0
>
>
> I have a database whose name is mod. When I use `use mod` to switch to the 
> db, the system throws an exception; even when I surround the name with 
> backticks, it still does not work.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (FLINK-21494) Could not execute statement 'USE `default`' in Flink SQL client

2021-02-24 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21494?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu closed FLINK-21494.
---
Fix Version/s: (was: 1.12.2)
   (was: 1.13.0)
   Resolution: Duplicate

> Could not execute statement 'USE `default`' in Flink SQL client
> ---
>
> Key: FLINK-21494
> URL: https://issues.apache.org/jira/browse/FLINK-21494
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Client
>Affects Versions: 1.12.1
>Reporter: Zheng Hu
>Priority: Major
> Attachments: stacktrace.txt
>
>
> I have two databases in my iceberg catalog,  one is `default`, another one is 
> `test_db`.  While I cannot switch to use the `default` database because of 
> the Flink SQL parser bug: 
> {code}
> Flink SQL> show databases;
> default
> test_db
>  
> Flink SQL> use `default`;
> [ERROR] Could not execute SQL statement. Reason:
> org.apache.flink.sql.parser.impl.ParseException: Incorrect syntax near the 
> keyword 'USE' at line 1, column 1.
> Was expecting one of:
>     "ABS" ...
>     "ALTER" ...
>     "ARRAY" ...
>     "AVG" ...
>     "CALL" ...
>     "CARDINALITY" ...
>     "CASE" ...
>     "CAST" ...
>     "CEIL" ...
>     "CEILING" ...
>     "CHAR_LENGTH" ...
>     "CHARACTER_LENGTH" ...
>     "CLASSIFIER" ...
>     "COALESCE" ...
>     "COLLECT" ...
>     "CONVERT" ...
>     "COUNT" ...
>     "COVAR_POP" ...
>     "COVAR_SAMP" ...
>     "CREATE" ...
>     "CUME_DIST" ...
>     "CURRENT" ...
>     "CURRENT_CATALOG" ...
>     "CURRENT_DATE" ...
>     "CURRENT_DEFAULT_TRANSFORM_GROUP" ...
>     "CURRENT_PATH" ...
>     "CURRENT_ROLE" ...
>     "CURRENT_SCHEMA" ...
>     "CURRENT_TIME" ...
>     "CURRENT_TIMESTAMP" ...
>     "CURRENT_USER" ...
>     "CURSOR" ...
>     "DATE" ...
>     "DELETE" ...
>     "DENSE_RANK" ...
>     "DESCRIBE" ...
>     "DROP" ...
>     "ELEMENT" ...
>     "EVERY" ...
>     "EXISTS" ...
>     "EXP" ...
>     "EXPLAIN" ...
>     "EXTRACT" ...
>     "FALSE" ...
>     "FIRST_VALUE" ...
>     "FLOOR" ...
>     "FUSION" ...
>     "GROUPING" ...
>     "HOUR" ...
>     "INSERT" ...
>     "INTERSECTION" ...
>     "INTERVAL" ...
>     "JSON_ARRAY" ...
>     "JSON_ARRAYAGG" ...
>     "JSON_EXISTS" ...
>     "JSON_OBJECT" ...
>     "JSON_OBJECTAGG" ...
>     "JSON_QUERY" ...
>     "JSON_VALUE" ...
>     "LAG" ...
>     "LAST_VALUE" ...
>     "LEAD" ...
>     "LEFT" ...
>     "LN" ...
>     "LOCALTIME" ...
>     "LOCALTIMESTAMP" ...
>     "LOWER" ...
>     "MATCH_NUMBER" ...
>     "MAX" ...
>     "MERGE" ...
>     "MIN" ...
>     "MINUTE" ...
>     "MOD" ...
>     "MONTH" ...
>     "MULTISET" ...
>     "NEW" ...
>     "NEXT" ...
>     "NOT" ...
>     "NTH_VALUE" ...
>     "NTILE" ...
>     "NULL" ...
>     "NULLIF" ...
>     "OCTET_LENGTH" ...
>     "OVERLAY" ...
>     "PERCENT_RANK" ...
>     "PERIOD" ...
>     "POSITION" ...
>     "POWER" ...
>     "PREV" ...
>     "RANK" ...
>     "REGR_COUNT" ...
>     "REGR_SXX" ...
>     "REGR_SYY" ...
>     "RESET" ...
>     "RIGHT" ...
>     "ROW" ...
>     "ROW_NUMBER" ...
>     "RUNNING" ...
>     "SECOND" ...
>     "SELECT" ...
>     "SESSION_USER" ...
>     "SET" ...
>     "SOME" ...
>     "SPECIFIC" ...
>     "SQRT" ...
>     "STDDEV_POP" ...
>     "STDDEV_SAMP" ...
>     "SUBSTRING" ...
>     "SUM" ...
>     "SYSTEM_USER" ...
>     "TABLE" ...
>     "TIME" ...
>     "TIMESTAMP" ...
>     "TRANSLATE" ...
>     "TRIM" ...
>     "TRUE" ...
>     "TRUNCATE" ...
>     "UNKNOWN" ...
>     "UPDATE" ...
>     "UPPER" ...
>     "UPSERT" ...
>     "USER" ...
>     "VALUES" ...
>     "VAR_POP" ...
>     "VAR_SAMP" ...
>     "WITH" ...
>     "YEAR" ...
>      ...
>      ...
>      ...
>      ...
>      ...
>      ...
>      ...
>      ...
>      ...
>     "(" ...
>      ...
>      ...
>      ...
>      ...
>     "?" ...
>     "+" ...
>     "-" ...
>      ...
>      ...
>      ...
>      ...
>      ...
>      ...
>     "SHOW" ...
>     "USE"  ...
>     "USE"  ...
>     "USE"  ...
>     "USE"  ...
>     "USE"  ...
>     "USE"  ...
> {code}
> It's OK to switch to use `test_db`. 
> {code}
> Flink SQL> use `test_db`;
> Flink SQL> show tables; 
> [INFO] Result was empty.
> {code}
> The stacktrace is here: 
> https://issues.apache.org/jira/secure/attachment/13021173/stacktrace.txt



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-21473) Migrate ParquetInputFormat to BulkFormat interface

2021-02-24 Thread Zhenqiu Huang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-21473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17290733#comment-17290733
 ] 

Zhenqiu Huang commented on FLINK-21473:
---

[~lzljs3620320]
Based on the existing implementation, I am going to split a class 
AbstractParquetInputFormat out of ParquetVectorizedInputFormat, so that the 
reader creation logic can be reused in both ParquetVectorizedInputFormat and 
ParquetInputFormat. What do you think?
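
A rough sketch of the intended class structure. Apart from the two existing class names mentioned above, everything here (the abstract class, method signature, generics) is an assumption for illustration:

{code:java}
abstract class AbstractParquetInputFormat<T> {
    // Shared reader-creation logic that both concrete formats would reuse.
    protected Object createParquetReader(String filePath) {
        throw new UnsupportedOperationException("sketch only");
    }
}

class ParquetVectorizedInputFormat<T> extends AbstractParquetInputFormat<T> {
    // Existing vectorized, columnar-batch reading would stay here.
}

class ParquetInputFormat<T> extends AbstractParquetInputFormat<T> {
    // New BulkFormat-based implementation reusing the shared reader creation.
}
{code}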

> Migrate ParquetInputFormat to BulkFormat interface
> --
>
> Key: FLINK-21473
> URL: https://issues.apache.org/jira/browse/FLINK-21473
> Project: Flink
>  Issue Type: Improvement
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Affects Versions: 1.12.1
>Reporter: Zhenqiu Huang
>Assignee: Zhenqiu Huang
>Priority: Minor
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #15017: [FLINK-12607][rest] Introduce a REST API that returns the maxParallelism

2021-02-24 Thread GitBox


flinkbot edited a comment on pull request #15017:
URL: https://github.com/apache/flink/pull/15017#issuecomment-785661508


   
   ## CI report:
   
   * 9a96593aa6cea51f09336a3b284528813d648828 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13745)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #15004: [FLINK-21253][table-planner-blink] Support grouping set syntax for GroupWindowAggregate

2021-02-24 Thread GitBox


flinkbot edited a comment on pull request #15004:
URL: https://github.com/apache/flink/pull/15004#issuecomment-785052726


   
   ## CI report:
   
   * 3af09855fc47130d93b11e6d6ba1c3dedb6574a5 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13733)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #14977: [FLINK-18726][table-planner-blink] Support INSERT INTO specific colum…

2021-02-24 Thread GitBox


flinkbot edited a comment on pull request #14977:
URL: https://github.com/apache/flink/pull/14977#issuecomment-783054931


   
   ## CI report:
   
   * cbb43f980354f2a108ec2f40fe4e7c194c7f73a1 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13672)
 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13746)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] Jiayi-Liao edited a comment on pull request #15016: [FLINK-21413][state] Clean TtlMapState and TtlListState after all elements are expired

2021-02-24 Thread GitBox


Jiayi-Liao edited a comment on pull request #15016:
URL: https://github.com/apache/flink/pull/15016#issuecomment-785653274


   @carp84 It took me some time to dig into how to test this change, but 
unfortunately there may not be an easy way to achieve this. 
   
   Take the `TtlListState` as an example, in `TtlStateTestBase`, Flink uses 
`TtlStateTestContextBase.isOriginalEmptyValue()` to test whether the state is 
cleared, which is `Objects.equals(emptyValue, getOriginal());` in 
TtlListState's testing. To test my change, I need to verify the result of 
`stateTable.get(currentNamespace);`, but unit testing uses 
`MockInternalKvState.getInternal()` which uses 
`computeIfAbsent(currentNamespace, n -> emptyValue.get())`  and never returns 
null in `TtlListState`. 
   
   If we want to test the change, we might need to change the `emptyValue` in 
`MockInternalListState`, `MockInternalMapState` and the related 
`TtlStateTestContextBase`, which may affect a lot of tests. 
   
   Do you have any better idea about this ?
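   
   For reference, a small standalone sketch of the masking effect described above. The class and variable names are simplified stand-ins, not the actual test classes:
   
```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

public class ComputeIfAbsentSketch {
    public static void main(String[] args) {
        Map<String, String> stateTable = new HashMap<>();
        Supplier<String> emptyValue = () -> "";

        // Mirrors the mock's getInternal(): a missing entry falls back to the empty
        // value, so the caller can never observe null even after a full cleanup.
        String viaMock = stateTable.computeIfAbsent("ns", n -> emptyValue.get());

        // A plain lookup, as the real state table behaves after the cleanup.
        String viaStateTable = stateTable.get("other-ns");

        System.out.println(viaMock);        // "" (never null)
        System.out.println(viaStateTable);  // null (what the change under test produces)
    }
}
```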



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] docete commented on pull request #14977: [FLINK-18726][table-planner-blink] Support INSERT INTO specific colum…

2021-02-24 Thread GitBox


docete commented on pull request #14977:
URL: https://github.com/apache/flink/pull/14977#issuecomment-785662256


   @flinkbot run azure



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] Jiayi-Liao edited a comment on pull request #15016: [FLINK-21413][state] Clean TtlMapState and TtlListState after all elements are expired

2021-02-24 Thread GitBox


Jiayi-Liao edited a comment on pull request #15016:
URL: https://github.com/apache/flink/pull/15016#issuecomment-785653274


   @carp84 It took me some time to dig into how to test this change, but 
unfortunately there may not be an easy way to achieve this. 
   
   Take the `TtlListState` as an example, in `TtlStateTestBase`, Flink uses 
`TtlStateTestContextBase.isOriginalEmptyValue()` to test whether the state is 
cleared, which is `Objects.equals(emptyValue, getOriginal());` in 
TtlListState's testing. To test my change, I need to verify the result of 
`stateTable.get(currentNamespace);`, but unit testing uses 
`MockInternalKvState.getInternal()` which uses 
`computeIfAbsent(currentNamespace, n -> emptyValue.get())`  and never returns 
null in `TtlListState`. 
   
   If we want to test the change, we might need to change the `emptyValue` in 
`MockInternalListState`, `MockInternalMapState` and the related 
`TtlStateTestContextBase`, which may affect a lot of tests. 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #15017: [FLINK-12607][rest] Introduce a REST API that returns the maxParallelism

2021-02-24 Thread GitBox


flinkbot commented on pull request #15017:
URL: https://github.com/apache/flink/pull/15017#issuecomment-785661508


   
   ## CI report:
   
   * 9a96593aa6cea51f09336a3b284528813d648828 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-12607) Introduce a REST API that returns the maxParallelism of a job

2021-02-24 Thread John Phelan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-12607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17290724#comment-17290724
 ] 

John Phelan commented on FLINK-12607:
-

Hey [~rmetzger] [~trohrmann], I opened a [GitHub 
PR|https://github.com/apache/flink/pull/15017] for this.

I think the max parallelism values make sense. Bumping the 
`ArchivedExecutionConfig` `serialVersionUID` seems prudent.

It looks like the REST API documentation updates automatically? Is that true?

I was able to build and test manually, but it seems Maven tests can't run well 
under WSL per the [dev mailing list 
discussion|http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/First-development-steps-broken-build-workarounds-tp48619p48622.html].
 Will the test suite run against the PR, or is there some other way I can 
trigger tests against the PR?

I'm excited about the change!


> Introduce a REST API that returns the maxParallelism of a job
> -
>
> Key: FLINK-12607
> URL: https://issues.apache.org/jira/browse/FLINK-12607
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / REST
>Affects Versions: 1.6.3
>Reporter: Akshay Kanfade
>Assignee: John Phelan
>Priority: Minor
>  Labels: pull-request-available, starter
>
> Today, Flink does not offer any way to get the maxParallelism for a job and 
> its operators through any of the REST APIs. Since the internal state 
> already tracks maxParallelism for a job and its operators, we should expose 
> this via the REST APIs so that application developers can get more insights 
> into the current state.
> There can be two approaches on how we can do this -
> Approach 1 :
> Modify the existing rest API response model to additionally expose a new 
> field 'maxParallelism'. Some of the REST APIs that would be affected by this
> |h5. */jobs/:jobid/vertices/:vertexid*|
> |h5. */jobs/:jobid*|
>  
> Approach 2 :
> Create a new REST API that would only return maxParallelism for a job and 
> it's operators.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #14962: [FLINK-18789][sql-client] Use TableEnvironment#executeSql method to execute insert statement in sql client

2021-02-24 Thread GitBox


flinkbot edited a comment on pull request #14962:
URL: https://github.com/apache/flink/pull/14962#issuecomment-781306516


   
   ## CI report:
   
   * 2591b6a0dafe996b1a6343965826e68bf4fffe35 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13703)
 
   * 5cc8acfd49aa8c77a440d6fdf401a454914c645f Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13744)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-21460) Use Configuration to create TableEnvironment

2021-02-24 Thread Shengkai Fang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21460?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shengkai Fang updated FLINK-21460:
--
Description: We can use new options {{table.planner}}, 
{{execution.runtime-mode}} to create table environment. However, it's not 
allowed to modify the planner type or execution mode when table environment is 
built.  (was: We can use new options {{table.planner}}, 
{{execution.runtime-mode}} to create table environment.)

> Use Configuration to create TableEnvironment
> 
>
> Key: FLINK-21460
> URL: https://issues.apache.org/jira/browse/FLINK-21460
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Affects Versions: 1.13.0
>Reporter: Shengkai Fang
>Priority: Major
> Fix For: 1.13.0
>
>
> We can use new options {{table.planner}}, {{execution.runtime-mode}} to 
> create table environment. However, it's not allowed to modify the planner 
> type or execution mode when table environment is built.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #14840: [FLINK-21231][sql-client] add "SHOW VIEWS" to SQL client

2021-02-24 Thread GitBox


flinkbot edited a comment on pull request #14840:
URL: https://github.com/apache/flink/pull/14840#issuecomment-772110005


   
   ## CI report:
   
   * 273a8f60782ae2c65704efdac92d0202f7fae2f0 Azure: 
[CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13728)
 
   * 144583814392c82fd760bb6252508dba4f78cf50 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13738)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #14741: [FLINK-21021][python] Bump Beam to 2.27.0

2021-02-24 Thread GitBox


flinkbot edited a comment on pull request #14741:
URL: https://github.com/apache/flink/pull/14741#issuecomment-766340784


   
   ## CI report:
   
   * a2be9bed8bfcbccc245f9f01c77c22c1f6e8bd31 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=12438)
 
   * c349fa0f9ef5ae136a6dc7616fe0e8e93a2b6133 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13743)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-21460) Use Configuration to create TableEnvironment

2021-02-24 Thread Shengkai Fang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21460?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shengkai Fang updated FLINK-21460:
--
Summary: Use Configuration to create TableEnvironment  (was: Introduce 
option `table.planner`, `execution.runtime-mode`)

> Use Configuration to create TableEnvironment
> 
>
> Key: FLINK-21460
> URL: https://issues.apache.org/jira/browse/FLINK-21460
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Affects Versions: 1.13.0
>Reporter: Shengkai Fang
>Priority: Major
> Fix For: 1.13.0
>
>
> We can use the new options {{table.planner}} and {{execution.runtime-mode}} to 
> create the table environment.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot commented on pull request #15017: [FLINK-12607][rest] Introduce a REST API that returns the maxParallelism

2021-02-24 Thread GitBox


flinkbot commented on pull request #15017:
URL: https://github.com/apache/flink/pull/15017#issuecomment-785658072


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 9a96593aa6cea51f09336a3b284528813d648828 (Thu Feb 25 
06:44:15 UTC 2021)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-12607) Introduce a REST API that returns the maxParallelism of a job

2021-02-24 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-12607?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-12607:
---
Labels: pull-request-available starter  (was: starter)

> Introduce a REST API that returns the maxParallelism of a job
> -
>
> Key: FLINK-12607
> URL: https://issues.apache.org/jira/browse/FLINK-12607
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / REST
>Affects Versions: 1.6.3
>Reporter: Akshay Kanfade
>Assignee: John Phelan
>Priority: Minor
>  Labels: pull-request-available, starter
>
> Today, Flink does not offer any way to get the maxParallelism for a job and 
> it's operators through any of the REST APIs. Since, the internal state 
> already tracks maxParallelism for a job and it's operators, we should expose 
> this via the REST APIs so that application developer can get more insights on 
> the current state.
> There can be two approaches on how we can do this -
> Approach 1 :
> Modify the existing rest API response model to additionally expose a new 
> field 'maxParallelism'. Some of the REST APIs that would be affected by this
> |h5. */jobs/:jobid/vertices/:vertexid*|
> |h5. */jobs/:jobid*|
>  
> Approach 2 :
> Create a new REST API that would only return maxParallelism for a job and 
> it's operators.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] bytesandwich opened a new pull request #15017: [FLINK-12607][rest] Introduce a REST API that returns the maxParallelism

2021-02-24 Thread GitBox


bytesandwich opened a new pull request #15017:
URL: https://github.com/apache/flink/pull/15017


   ## What is the purpose of the change
   
   Add max parallelism to the REST API for reporting purposes.
   
   ## Brief change log
   
   Add max-parallelism to:
   - JobVertexDetailsHandler puts maxParallelism into JobVertexDetailsInfo 
from jobVertex
   - JobDetailsHandler puts:
   - maxParallelism into JobDetailsInfo from ExecutionGraph's archived 
execution config
   - maxParallelism into JobVertexDetailsInfo from AccessExecutionJobVertex
   
   - ArchivedExecutionConfig
   
   - JobDetailsInfo as "job-max-parallelism"
   - JobDetailsInfo.JobVertexDetailsInfo as "max-parallelism"
   - JobVertexDetailsInfo as "max-parallelism"
   
   All have max parallelism added to equals and hash (except 
ArchivedExecutionConfig)
   
   Resets ArchivedExecutionConfig's `serialVersionUID`
   
   Also adds the field to the Angular (web UI) type definitions.
   
   
   
   
   
   ## Verifying this change
   
   This change is already covered by existing tests, such as 
`JobDetailsInfoTest` and `JobVertexDetailsInfoTest`. As I'm developing on the 
Windows Subsystem for Linux, my Maven build works but I am unable to run the 
tests locally. Is there an automation that runs them to check this PR? 
   
   I did manually run the REST API against a local cluster (started via 
bin/start-cluster.sh) running the TopSpeedWindowing example, with 
`env.getConfig().setMaxParallelism(4)` and `.setMaxParallelism(2);` on one 
operator.
   
   In both `/jobs` and `/jobs/:id` I saw that "job-max-parallelism" was set to 4 
and "max-parallelism" was set to "2" for the specific vertex and "4" for others.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no but I feel like it should?
 - The serializers: yes the ArchivedExecutionConfig's java serializability
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? slightly
 - If yes, how is the feature documented? Needs a follow up for the REST 
api docs.
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] Jiayi-Liao edited a comment on pull request #15016: [FLINK-21413][state] Clean TtlMapState and TtlListState after all elements are expired

2021-02-24 Thread GitBox


Jiayi-Liao edited a comment on pull request #15016:
URL: https://github.com/apache/flink/pull/15016#issuecomment-785653274


   @carp84 It took me some time to dig into how to test this change, but 
unfortunately there may not be an easy way to achieve this. 
   
   Take the `TtlListState` as an example, in `TtlStateTestBase`, Flink uses 
`TtlStateTestContextBase.isOriginalEmptyValue()` to test whether the state is 
cleared, which is `Objects.equals(emptyValue, getOriginal());` in 
TtlListState's testing. To test my change, I need to verify the result of 
`AbstractHeapAppendingState.getInternal()`, but unit testing uses 
`MockInternalKvState.getInternal()` in `getOriginal()`, which never returns 
null in `TtlListState`. 
   
   To test the change, we might need to change the `emptyValue` in 
`MockInternalListState`, `MockInternalMapState` and the related 
`TtlStateTestContextBase`, which may affect a lot of tests. 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] Jiayi-Liao commented on pull request #15016: [FLINK-21413][state] Clean TtlMapState and TtlListState after all elements are expired

2021-02-24 Thread GitBox


Jiayi-Liao commented on pull request #15016:
URL: https://github.com/apache/flink/pull/15016#issuecomment-785653274


   @carp84 It took me some time to dig into how to test this change, but 
unfortunately there may not be an easy way to achieve this. 
   
   Take the `TtlListState` as an example, in `TtlStateTestBase`, Flink uses 
`TtlStateTestContextBase.isOriginalEmptyValue()` to test whether the state is 
cleared, which is `Objects.equals(emptyValue, getOriginal());` in 
TtlListState's testing. To test my change, I need to verify the result of 
`AbstractHeapAppendingState.getInternal()`, but unit testing uses 
`MockInternalKvState.getInternal()`, which never returns null in 
`TtlListState`. 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #15015: [FLINK-21479][coordination] Provide read-only interface of TaskManagerTracker to ResourceAllocationStrategy

2021-02-24 Thread GitBox


flinkbot edited a comment on pull request #15015:
URL: https://github.com/apache/flink/pull/15015#issuecomment-785556721


   
   ## CI report:
   
   * 122ce5ed1d4f393686491f5be80dacb320020e3e Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13737)
 
   * 69b32e4892cee2e19cd2be8a50f9a791828ec9e7 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13741)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #15016: [FLINK-21413][state] Clean TtlMapState and TtlListState after all elements are expired

2021-02-24 Thread GitBox


flinkbot edited a comment on pull request #15016:
URL: https://github.com/apache/flink/pull/15016#issuecomment-785640058


   
   ## CI report:
   
   * 02bd50240636971f1b38f6ca0e2940200de2453a Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13742)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #14962: [FLINK-18789][sql-client] Use TableEnvironment#executeSql method to execute insert statement in sql client

2021-02-24 Thread GitBox


flinkbot edited a comment on pull request #14962:
URL: https://github.com/apache/flink/pull/14962#issuecomment-781306516


   
   ## CI report:
   
   * 2591b6a0dafe996b1a6343965826e68bf4fffe35 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13703)
 
   * 5cc8acfd49aa8c77a440d6fdf401a454914c645f UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-21231) add "SHOW VIEWS" to SQL client

2021-02-24 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu updated FLINK-21231:

Fix Version/s: 1.13.0

> add "SHOW VIEWS" to SQL client
> --
>
> Key: FLINK-21231
> URL: https://issues.apache.org/jira/browse/FLINK-21231
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / Client
>Reporter: tim yu
>Assignee: tim yu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.13.0
>
>
> The SQL client cannot run the "SHOW VIEWS" statement now. We should add the "SHOW 
> VIEWS" implementation to it.
>



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #14741: [FLINK-21021][python] Bump Beam to 2.27.0

2021-02-24 Thread GitBox


flinkbot edited a comment on pull request #14741:
URL: https://github.com/apache/flink/pull/14741#issuecomment-766340784


   
   ## CI report:
   
   * a2be9bed8bfcbccc245f9f01c77c22c1f6e8bd31 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=12438)
 
   * c349fa0f9ef5ae136a6dc7616fe0e8e93a2b6133 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-16441) Allow users to override flink-conf parameters from SQL CLI environment

2021-02-24 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16441?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu updated FLINK-16441:

Parent: FLINK-21454
Issue Type: Sub-task  (was: Improvement)

> Allow users to override flink-conf parameters from SQL CLI environment
> --
>
> Key: FLINK-16441
> URL: https://issues.apache.org/jira/browse/FLINK-16441
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Client
>Reporter: Gyula Fora
>Assignee: Gyula Fora
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.13.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> There is currently no way of overriding Flink configuration parameters when 
> using the SQL CLI.
> The configuration section of the env yaml should provide a way of doing so, as 
> this is a very important requirement for multi-user/multi-app Flink client 
> environments.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-21499) maven package hang when using multithread on jdk 11

2021-02-24 Thread Zhengqi Zhang (Jira)
Zhengqi Zhang created FLINK-21499:
-

 Summary: maven package hang when using multithread on jdk 11
 Key: FLINK-21499
 URL: https://issues.apache.org/jira/browse/FLINK-21499
 Project: Flink
  Issue Type: Bug
Affects Versions: 1.12.1
Reporter: Zhengqi Zhang
 Attachments: 82210_jstack.txt, image-2021-02-25-14-27-46-087.png

When I add the -T (multi-threaded build) parameter to the mvn clean package 
command, the compilation gets stuck. With the Maven debug log turned on, you can 
observe a large number of repeated log lines like the one below, seemingly stuck 
in an endless loop.

 

I printed the thread stack and attached it.

 

!image-2021-02-25-14-27-46-087.png|width=1438,height=876!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (FLINK-10229) Support listing of views

2021-02-24 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-10229?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu closed FLINK-10229.
---
Resolution: Duplicate

> Support listing of views
> 
>
> Key: FLINK-10229
> URL: https://issues.apache.org/jira/browse/FLINK-10229
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / Client
>Reporter: Timo Walther
>Assignee: vinoyang
>Priority: Major
>  Labels: pull-request-available
>
> FLINK-10163 added initial support of views for the SQL Client. According to 
> other database vendors, views are listed in {{SHOW TABLES}}. However, 
> there should be a way of listing only the views. We can support the {{SHOW 
> VIEWS}} command.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (FLINK-21146) 【SQL】Flink SQL Client not support specify the queue to submit the job

2021-02-24 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21146?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu closed FLINK-21146.
---
Resolution: Fixed

> 【SQL】Flink SQL Client not support specify the queue to submit the job
> -
>
> Key: FLINK-21146
> URL: https://issues.apache.org/jira/browse/FLINK-21146
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Client
>Affects Versions: 1.12.0
>Reporter: zhisheng
>Priority: Major
>
> In Hive we can submit the job to a specific YARN queue like: 
> {code:java}
> set mapreduce.job.queuename=queue1;
> {code}
>  
>  
> With spark-sql we can submit the job to a specific YARN queue like: 
> {code:java}
> spark-sql --queue xxx {code}
>  
> but the Flink SQL Client cannot specify which queue the job is submitted to; it 
> defaults to the `default` queue. This is not friendly in a production environment.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot commented on pull request #15016: [FLINK-21413][state] Clean TtlMapState and TtlListState after all elements are expired

2021-02-24 Thread GitBox


flinkbot commented on pull request #15016:
URL: https://github.com/apache/flink/pull/15016#issuecomment-785640058


   
   ## CI report:
   
   * 02bd50240636971f1b38f6ca0e2940200de2453a UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #15015: [FLINK-21479][coordination] Provide read-only interface of TaskManagerTracker to ResourceAllocationStrategy

2021-02-24 Thread GitBox


flinkbot edited a comment on pull request #15015:
URL: https://github.com/apache/flink/pull/15015#issuecomment-785556721


   
   ## CI report:
   
   * 122ce5ed1d4f393686491f5be80dacb320020e3e Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13737)
 
   * 69b32e4892cee2e19cd2be8a50f9a791828ec9e7 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #14958: [FLINK-18550][sql-client] use TableResult#collect to get select result for sql client

2021-02-24 Thread GitBox


flinkbot edited a comment on pull request #14958:
URL: https://github.com/apache/flink/pull/14958#issuecomment-781106758


   
   ## CI report:
   
   * bc24ab2e5c9ae437824349f8ac207b57b754ec96 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13702)
 
   * 9e7bfb51a1084576e38ad700b626bb7e776eab2d Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13740)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #14868: [FLINK-21326][runtime] Optimize building topology when initializing ExecutionGraph

2021-02-24 Thread GitBox


flinkbot edited a comment on pull request #14868:
URL: https://github.com/apache/flink/pull/14868#issuecomment-773192044


   
   ## CI report:
   
   * 62df715ec67bf1785157ca0a056c44e52765e49c Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13671)
 
   * e19dc30e8891b756dbdf528f62ac8c77f3a18182 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13739)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #15016: [FLINK-21413][state] Clean TtlMapState and TtlListState after all elements are expired

2021-02-24 Thread GitBox


flinkbot commented on pull request #15016:
URL: https://github.com/apache/flink/pull/15016#issuecomment-785633040


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 02bd50240636971f1b38f6ca0e2940200de2453a (Thu Feb 25 
05:53:12 UTC 2021)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-21413) TtlMapState and TtlListState cannot be clean completely with Filesystem StateBackend

2021-02-24 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-21413:
---
Labels: pull-request-available  (was: )

> TtlMapState and TtlListState cannot be clean completely with Filesystem 
> StateBackend
> 
>
> Key: FLINK-21413
> URL: https://issues.apache.org/jira/browse/FLINK-21413
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / State Backends
>Affects Versions: 1.9.0
>Reporter: Jiayi Liao
>Assignee: Jiayi Liao
>Priority: Major
>  Labels: pull-request-available
> Attachments: image-2021-02-19-11-13-58-672.png
>
>
> Take the #TtlMapState as an example,
>  
> {code:java}
> public Map<UK, TtlValue<UV>> getUnexpiredOrNull(@Nonnull Map<UK, TtlValue<UV>> ttlValue) {
>     Map<UK, TtlValue<UV>> unexpired = new HashMap<>();
>     TypeSerializer<TtlValue<UV>> valueSerializer =
>             ((MapSerializer<UK, TtlValue<UV>>) original.getValueSerializer()).getValueSerializer();
>     for (Map.Entry<UK, TtlValue<UV>> e : ttlValue.entrySet()) {
>         if (!expired(e.getValue())) {
>             // we have to do the defensive copy to update the value
>             unexpired.put(e.getKey(), valueSerializer.copy(e.getValue()));
>         }
>     }
>     return ttlValue.size() == unexpired.size() ? ttlValue : unexpired;
> }
> {code}
>  
> The returned value will never be null and the #StateEntry will exist 
> forever, which leads to a memory leak if the key range of the stream is very 
> large. Below we can see that 20+ million uncleared TtlStateMap entries can take 
> up several GB of memory.
>  
> !image-2021-02-19-11-13-58-672.png!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] Jiayi-Liao opened a new pull request #15016: [FLINK-21413][state] Clean TtlMapState and TtlListState after all elements are expired

2021-02-24 Thread GitBox


Jiayi-Liao opened a new pull request #15016:
URL: https://github.com/apache/flink/pull/15016


   ## What is the purpose of the change
   
   Solve the memory leak in #TtlMapState and #TtlListState when using #FsStateBackend 
with the incremental cleanup feature. 
   
   ## Brief change log
   
   Changes in #TtlMapState and #TtlListState  
   
   * In the #getUnexpiredOrNull method, return null if all the elements are expired (see the sketch below)
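
   A standalone sketch of the cleanup rule named in the change log above. This is an 
illustration of the approach with simplified types, not the actual patch (the real 
`TtlMapState` / `TtlListState` code also handles serializers and TTL timestamps):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Predicate;

public class UnexpiredOrNullSketch {

    // Returns the unexpired subset, or null when *all* entries have expired, so the
    // state backend can drop the whole state entry instead of keeping an empty map.
    static <K, V> Map<K, V> getUnexpiredOrNull(Map<K, V> entries, Predicate<V> expired) {
        Map<K, V> unexpired = new HashMap<>();
        for (Map.Entry<K, V> e : entries.entrySet()) {
            if (!expired.test(e.getValue())) {
                unexpired.put(e.getKey(), e.getValue());
            }
        }
        if (unexpired.isEmpty()) {
            // Previously an empty map was returned here, which left the entry behind forever.
            return null;
        }
        return entries.size() == unexpired.size() ? entries : unexpired;
    }

    public static void main(String[] args) {
        Map<String, Long> state = new HashMap<>();
        state.put("a", 1L);
        state.put("b", 2L);
        // Treat every value as expired: the sketch returns null instead of an empty map.
        System.out.println(getUnexpiredOrNull(state, value -> true));
    }
}
```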
   
   ## Verifying this change
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no
 - If yes, how is the feature documented? not applicable
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #14958: [FLINK-18550][sql-client] use TableResult#collect to get select result for sql client

2021-02-24 Thread GitBox


flinkbot edited a comment on pull request #14958:
URL: https://github.com/apache/flink/pull/14958#issuecomment-781106758


   
   ## CI report:
   
   * bc24ab2e5c9ae437824349f8ac207b57b754ec96 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13702)
 
   * 9e7bfb51a1084576e38ad700b626bb7e776eab2d UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-21498) Avoid copying when converting byte[] to ByteString in StateFun

2021-02-24 Thread Tzu-Li (Gordon) Tai (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tzu-Li (Gordon) Tai updated FLINK-21498:

Issue Type: Task  (was: Bug)

> Avoid copying when converting byte[] to ByteString in StateFun
> --
>
> Key: FLINK-21498
> URL: https://issues.apache.org/jira/browse/FLINK-21498
> Project: Flink
>  Issue Type: Task
>  Components: Stateful Functions
>Reporter: Tzu-Li (Gordon) Tai
>Assignee: Tzu-Li (Gordon) Tai
>Priority: Major
>
> There are a few places in StateFun where we can be more efficient with byte[] 
> to Protobuf ByteString conversions, by just wrapping the byte[] instead of 
> copying, since we know that the byte array can no longer be mutated.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #14868: [FLINK-21326][runtime] Optimize building topology when initializing ExecutionGraph

2021-02-24 Thread GitBox


flinkbot edited a comment on pull request #14868:
URL: https://github.com/apache/flink/pull/14868#issuecomment-773192044


   
   ## CI report:
   
   * 62df715ec67bf1785157ca0a056c44e52765e49c Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13671)
 
   * e19dc30e8891b756dbdf528f62ac8c77f3a18182 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (FLINK-21498) Avoid copying when converting byte[] to ByteString in StateFun

2021-02-24 Thread Tzu-Li (Gordon) Tai (Jira)
Tzu-Li (Gordon) Tai created FLINK-21498:
---

 Summary: Avoid copying when converting byte[] to ByteString in 
StateFun
 Key: FLINK-21498
 URL: https://issues.apache.org/jira/browse/FLINK-21498
 Project: Flink
  Issue Type: Bug
  Components: Stateful Functions
Reporter: Tzu-Li (Gordon) Tai
Assignee: Tzu-Li (Gordon) Tai


There are a few places in StateFun where we can be more efficient with byte[] to 
Protobuf ByteString conversions, by just wrapping the byte[] instead of 
copying, since we know that the byte array can no longer be mutated.
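
For illustration only, a minimal sketch of the copy-vs-wrap difference, assuming 
protobuf-java's UnsafeByteOperations; which StateFun call sites are changed is not 
listed in this ticket:

{code:java}
import com.google.protobuf.ByteString;
import com.google.protobuf.UnsafeByteOperations;

public class ByteStringWrapSketch {
    public static void main(String[] args) {
        byte[] payload = new byte[] {1, 2, 3};

        // Copies the array: always safe, but pays for an extra allocation and copy.
        ByteString copied = ByteString.copyFrom(payload);

        // Wraps the array without copying: only valid when the byte[] is known to
        // never be mutated afterwards, which is the situation described above.
        ByteString wrapped = UnsafeByteOperations.unsafeWrap(payload);

        System.out.println(copied.size() + " / " + wrapped.size());
    }
}
{code}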



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] wuchong commented on a change in pull request #15005: [FLINK-21298][table] Support 'USE MODULES' syntax both in SQL parser, TableEnvironment and SQL CLI

2021-02-24 Thread GitBox


wuchong commented on a change in pull request #15005:
URL: https://github.com/apache/flink/pull/15005#discussion_r582530305



##
File path: 
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/calcite/FlinkPlannerImpl.scala
##
@@ -36,11 +35,11 @@ import org.apache.calcite.sql.validate.SqlValidator
 import org.apache.calcite.sql.{SqlExplain, SqlKind, SqlNode, SqlOperatorTable}
 import org.apache.calcite.sql2rel.{SqlRexConvertletTable, SqlToRelConverter}
 import org.apache.calcite.tools.{FrameworkConfig, RelConversionException}
+import org.apache.flink.sql.parser.ddl.SqlUseModules

Review comment:
   Please reorder the imports. 

##
File path: 
flink-table/flink-table-planner-blink/src/test/scala/org/apache/flink/table/api/TableEnvironmentTest.scala
##
@@ -33,6 +33,7 @@ import org.apache.flink.types.Row
 import org.apache.calcite.plan.RelOptUtil
 import org.apache.calcite.sql.SqlExplainLevel
 import org.apache.flink.core.testutils.FlinkMatchers.containsMessage
+import org.apache.flink.table.module.ModuleEntry

Review comment:
   Please reorder the imports. 

##
File path: 
flink-table/flink-table-planner-blink/src/test/scala/org/apache/flink/table/api/TableEnvironmentTest.scala
##
@@ -592,6 +593,66 @@ class TableEnvironmentTest {
 tableEnv.executeSql("UNLOAD MODULE dummy")
   }
 
+  @Test
+  def testExecuteSqlWithUseModules(): Unit = {
+tableEnv.executeSql("LOAD MODULE dummy")
+assert(tableEnv.listModules().sameElements(Array[String]("core", "dummy")))

Review comment:
   Personally, I don't like Scala `assert` because it doesn't provide 
mismatch information when an assertion fails. Besides, `sameElements` sounds like 
it doesn't care about the element order, but the order is critical here. 
Therefore, it would be better to use `assertArrayEquals`. We may also need to 
update the tests added in the previous PR. 
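
   For illustration (not part of the PR), a small JUnit 4 example of the two assertion 
styles compared above; `assertArrayEquals` is order-sensitive and reports the first 
mismatching index, while a plain boolean assert only reports that the condition failed:

```java
import static org.junit.Assert.assertArrayEquals;
import static org.junit.Assert.assertTrue;

import java.util.Arrays;
import org.junit.Test;

public class ModuleListAssertionExample {

    @Test
    public void comparesOrderedModuleNames() {
        String[] expected = {"core", "dummy"};
        String[] actual = {"core", "dummy"};

        // Order-sensitive comparison with a descriptive failure message.
        assertArrayEquals(expected, actual);

        // The discouraged style: on failure this only says the condition was false.
        assertTrue(Arrays.equals(expected, actual));
    }
}
```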

##
File path: 
flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/gateway/local/LocalExecutor.java
##
@@ -428,6 +429,13 @@ public ProgramTargetDescriptor executeUpdate(String 
sessionId, String statement)
 return executeUpdateInternal(sessionId, context, statement);
 }
 
+@VisibleForTesting
+List<ModuleEntry> listFullModules(String sessionId) throws SqlExecutionException {

Review comment:
   I think we will need this interface in the next `SHOW FULL MODULES` PR, 
so we can add this method to the base interface and implement it alongside 
`listModules`.

##
File path: flink-table/flink-sql-parser-hive/src/main/codegen/data/Parser.tdd
##
@@ -129,6 +130,7 @@
 "LINES"
 "LOAD"
 "LOCATION"
+"MODULES"

Review comment:
   We should also add `MODULES` to the non-reserved keywords, otherwise it 
will break user jobs which use `modules` as a column name. For example, `select 
a, b, modules from T` will fail to parse. 





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] Thesharing commented on a change in pull request #14868: [FLINK-21326][runtime] Optimize building topology when initializing ExecutionGraph

2021-02-24 Thread GitBox


Thesharing commented on a change in pull request #14868:
URL: https://github.com/apache/flink/pull/14868#discussion_r582556815



##
File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/executiongraph/EdgeManagerBuildUtil.java
##
@@ -0,0 +1,174 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License
+ */
+
+package org.apache.flink.runtime.executiongraph;
+
+import org.apache.flink.runtime.jobgraph.DistributionPattern;
+import org.apache.flink.runtime.jobgraph.IntermediateResultPartitionID;
+import org.apache.flink.runtime.scheduler.strategy.ConsumedPartitionGroup;
+import org.apache.flink.runtime.scheduler.strategy.ConsumerVertexGroup;
+import org.apache.flink.runtime.scheduler.strategy.ExecutionVertexID;
+
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.List;
+import java.util.stream.Collectors;
+
+/** Utilities for building {@link EdgeManager}. */
+public class EdgeManagerBuildUtil {
+
+public static void registerToExecutionEdgeManager(
+ExecutionVertex[] taskVertices,
+IntermediateResult ires,
+int inputNumber,
+DistributionPattern distributionPattern) {
+
+switch (distributionPattern) {
+case POINTWISE:
+connectPointwise(taskVertices, ires, inputNumber);
+break;
+case ALL_TO_ALL:
+connectAllToAll(taskVertices, ires, inputNumber);
+break;
+default:
+throw new RuntimeException("Unrecognized distribution pattern.");
+}
+}
+
+private static void connectAllToAll(
+ExecutionVertex[] taskVertices, IntermediateResult ires, int 
inputNumber) {
+
+ConsumedPartitionGroup consumedPartitions =
+new ConsumedPartitionGroup(
+Arrays.stream(ires.getPartitions())
+.map(IntermediateResultPartition::getPartitionId)
+.collect(Collectors.toList()));
+for (ExecutionVertex ev : taskVertices) {
+ev.setConsumedPartitions(consumedPartitions, inputNumber);
+}
+
+ConsumerVertexGroup vertices =
+new ConsumerVertexGroup(
+Arrays.stream(taskVertices)
+.map(ExecutionVertex::getID)
+.collect(Collectors.toList()));
+for (IntermediateResultPartition partition : ires.getPartitions()) {
+partition.setConsumers(vertices);
+}
+}
+
+private static void connectPointwise(
+ExecutionVertex[] taskVertices, IntermediateResult ires, int 
inputNumber) {
+
+final int sourceCount = ires.getPartitions().length;
+final int targetCount = taskVertices.length;
+
+if (sourceCount == targetCount) {
+for (int i = 0; i < sourceCount; i++) {
+ExecutionVertex executionVertex = taskVertices[i];
+IntermediateResultPartition partition = 
ires.getPartitions()[i];
+
+ConsumerVertexGroup consumerVertexGroup =
+new ConsumerVertexGroup(executionVertex.getID());
+partition.setConsumers(consumerVertexGroup);
+
+ConsumedPartitionGroup consumedPartitionGroup =
+new ConsumedPartitionGroup(partition.getPartitionId());
+executionVertex.setConsumedPartitions(consumedPartitionGroup, 
inputNumber);
+}
+} else if (sourceCount > targetCount) {
+for (int index = 0; index < targetCount; index++) {
+
+ExecutionVertex executionVertex = taskVertices[index];
+ConsumerVertexGroup consumerVertexGroup =
+new ConsumerVertexGroup(executionVertex.getID());
+
+List<IntermediateResultPartitionID> consumedPartitions =
+new ArrayList<>(sourceCount / targetCount + 1);
+
+if (sourceCount % targetCount == 0) {

Review comment:
   Thanks for the suggestion. I've improved the `PointwisePatternTest` and 
the logic in `EdgeManagerBuildUtil`.






[jira] [Assigned] (FLINK-21478) Lookup joins should deal with any intermediate table scans correctly

2021-02-24 Thread godfrey he (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

godfrey he reassigned FLINK-21478:
--

Assignee: Caizhi Weng

> Lookup joins should deal with any intermediate table scans correctly
> 
>
> Key: FLINK-21478
> URL: https://issues.apache.org/jira/browse/FLINK-21478
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Affects Versions: 1.12.0, 1.13.0
>Reporter: Caizhi Weng
>Assignee: Caizhi Weng
>Priority: Major
>
> This subtask is more complex because it has to deal with temporal joins with 
> a view. Currently the general rules for temporal joins do not deal with this.
> But 99.9% of users will only perform lookup joins with just the lookup table 
> source. Temporal joins with views are rare and we do not need to hurry for 
> this subtask.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (FLINK-21477) Lookup joins should deal with intermediate table scans containing just the table source correctly

2021-02-24 Thread godfrey he (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21477?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

godfrey he reassigned FLINK-21477:
--

Assignee: Caizhi Weng

> Lookup joins should deal with intermediate table scans containing just the 
> table source correctly
> -
>
> Key: FLINK-21477
> URL: https://issues.apache.org/jira/browse/FLINK-21477
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Affects Versions: 1.12.0, 1.13.0
>Reporter: Caizhi Weng
>Assignee: Caizhi Weng
>Priority: Major
> Fix For: 1.13.0, 1.12.3
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (FLINK-21495) Flink 1.12.0 execute hive sql GeneratedExpression ERROR

2021-02-24 Thread jiayue.yu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21495?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jiayue.yu closed FLINK-21495.
-
Resolution: Not A Bug

> Flink 1.12.0 execute hive sql  GeneratedExpression ERROR
> 
>
> Key: FLINK-21495
> URL: https://issues.apache.org/jira/browse/FLINK-21495
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.12.0
>Reporter: jiayue.yu
>Priority: Major
>
> SQL :
> SELECT
> user_id,
> sum(CASE when order_type = 0 and order_status in (1,3,4,11) and ins_category 
> = 3 then 1 end) as sdb_normal_medical_pay_num
> from hive.shuidi_dwb.dwb_sdb_order_info_full_d
> where dt = date_sub(CURRENT_DATE,1)
> and valid = 1
> group by user_id
> limit 10
>  
> Exception:
> Exception in thread "main" 
> org.apache.flink.table.planner.codegen.CodeGenException: Unable to find 
> common type of GeneratedExpression(field$26,isNull$24,,STRING,None) and 
> ArrayBuffer(GeneratedExpression(((int) 3),false,,INT NOT 
> NULL,Some(3))).Exception in thread "main" 
> org.apache.flink.table.planner.codegen.CodeGenException: Unable to find 
> common type of GeneratedExpression(field$26,isNull$24,,STRING,None) and 
> ArrayBuffer(GeneratedExpression(((int) 3),false,,INT NOT NULL,Some(3))). at 
> org.apache.flink.table.planner.codegen.calls.ScalarOperatorGens$.$anonfun$generateIn$2(ScalarOperatorGens.scala:307)
>  at scala.Option.orElse(Option.scala:289) at 
> org.apache.flink.table.planner.codegen.calls.ScalarOperatorGens$.generateIn(ScalarOperatorGens.scala:307)
>  at 
> org.apache.flink.table.planner.codegen.ExprCodeGenerator.generateCallExpression(ExprCodeGenerator.scala:724)
>  at 
> org.apache.flink.table.planner.codegen.ExprCodeGenerator.visitCall(ExprCodeGenerator.scala:507)
>  at 
> org.apache.flink.table.planner.codegen.ExprCodeGenerator.visitCall(ExprCodeGenerator.scala:56)
>  at org.apache.calcite.rex.RexCall.accept(RexCall.java:174) at 
> org.apache.flink.table.planner.codegen.ExprCodeGenerator.$anonfun$visitCall$2(ExprCodeGenerator.scala:526)
>  at 
> scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:233) at 
> scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:58) at 
> scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:51) at 
> scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47) at 
> scala.collection.TraversableLike.map(TraversableLike.scala:233) at 
> scala.collection.TraversableLike.map$(TraversableLike.scala:226) at 
> scala.collection.AbstractTraversable.map(Traversable.scala:104) at 
> org.apache.flink.table.planner.codegen.ExprCodeGenerator.visitCall(ExprCodeGenerator.scala:517)
>  at 
> org.apache.flink.table.planner.codegen.ExprCodeGenerator.visitCall(ExprCodeGenerator.scala:56)
>  at org.apache.calcite.rex.RexCall.accept(RexCall.java:174) at 
> org.apache.flink.table.planner.codegen.ExprCodeGenerator.$anonfun$visitCall$2(ExprCodeGenerator.scala:526)
>  at 
> scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:233) at 
> scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:58) at 
> scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:51) at 
> scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47) at 
> scala.collection.TraversableLike.map(TraversableLike.scala:233) at 
> scala.collection.TraversableLike.map$(TraversableLike.scala:226) at 
> scala.collection.AbstractTraversable.map(Traversable.scala:104) at 
> org.apache.flink.table.planner.codegen.ExprCodeGenerator.visitCall(ExprCodeGenerator.scala:517)
>  at 
> org.apache.flink.table.planner.codegen.ExprCodeGenerator.visitCall(ExprCodeGenerator.scala:56)
>  at org.apache.calcite.rex.RexCall.accept(RexCall.java:174) at 
> org.apache.flink.table.planner.codegen.ExprCodeGenerator.generateExpression(ExprCodeGenerator.scala:155)
>  at 
> org.apache.flink.table.planner.codegen.CalcCodeGenerator$.$anonfun$generateProcessCode$5(CalcCodeGenerator.scala:143)
>  at 
> scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:233) at 
> scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:58) at 
> scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:51) at 
> scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47) at 
> scala.collection.TraversableLike.map(TraversableLike.scala:233) at 
> scala.collection.TraversableLike.map$(TraversableLike.scala:226) at 
> scala.collection.AbstractTraversable.map(Traversable.scala:104) at 
> org.apache.flink.table.planner.codegen.CalcCodeGenerator$.produceProjectionCode$1(CalcCodeGenerator.scala:143)
>  at 
> org.apache.flink.table.planner.codegen.CalcCodeGenerator$.generateProcessCode(CalcCodeGenerator.scala:190)
>  at 
> 

[jira] [Closed] (FLINK-21496) Upgrade Testcontainers to 1.15.1 in Stateful Functions

2021-02-24 Thread Tzu-Li (Gordon) Tai (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tzu-Li (Gordon) Tai closed FLINK-21496.
---
Resolution: Fixed

flink-statefun/master: ec762a7413c3f94470ee13d57edc47350feb1569

> Upgrade Testcontainers to 1.15.1 in Stateful Functions
> --
>
> Key: FLINK-21496
> URL: https://issues.apache.org/jira/browse/FLINK-21496
> Project: Flink
>  Issue Type: Bug
>  Components: Stateful Functions
>Reporter: Tzu-Li (Gordon) Tai
>Assignee: Tzu-Li (Gordon) Tai
>Priority: Critical
> Fix For: statefun-3.0.0
>
>
> The E2E tests in CI are currently failing for StateFun; they started failing 
> recently due to GitHub Actions upgrading their Docker version to 20.10.2+.
> Due to this upgrade, our current Testcontainers version 1.12.x is no longer 
> compatible, since that version relies on a deprecated Docker API that no 
> longer exists in Docker version 20.10.2 (API version 1.41).
> Full description of the issue: 
> https://github.com/testcontainers/testcontainers-java/issues/3574



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #14840: [FLINK-21231][sql-client] add "SHOW VIEWS" to SQL client

2021-02-24 Thread GitBox


flinkbot edited a comment on pull request #14840:
URL: https://github.com/apache/flink/pull/14840#issuecomment-772110005


   
   ## CI report:
   
   * cacb257a38d5fcd818813031187816922e6d3dcb Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13029)
 
   * 273a8f60782ae2c65704efdac92d0202f7fae2f0 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13728)
 
   * 144583814392c82fd760bb6252508dba4f78cf50 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13738)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #15006: [Flink-21485][sql-client] Simplify the ExecutionContext

2021-02-24 Thread GitBox


flinkbot edited a comment on pull request #15006:
URL: https://github.com/apache/flink/pull/15006#issuecomment-785114095


   
   ## CI report:
   
   * f6a9d79eb7ef2cc93837e0370ba91b62eb17ba41 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13704)
 
   * 4cd5563d44a62aec2b8db80cfe8da650fddf4ffa UNKNOWN
   * 4cc39b300203dd122980e6ef22080746834fcc9b Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13736)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #14840: [FLINK-21231][sql-client] add "SHOW VIEWS" to SQL client

2021-02-24 Thread GitBox


flinkbot edited a comment on pull request #14840:
URL: https://github.com/apache/flink/pull/14840#issuecomment-772110005


   
   ## CI report:
   
   * cacb257a38d5fcd818813031187816922e6d3dcb Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13029)
 
   * 273a8f60782ae2c65704efdac92d0202f7fae2f0 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13728)
 
   * 144583814392c82fd760bb6252508dba4f78cf50 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] yulei0824 commented on pull request #14840: [FLINK-21231][sql-client] add "SHOW VIEWS" to SQL client

2021-02-24 Thread GitBox


yulei0824 commented on pull request #14840:
URL: https://github.com/apache/flink/pull/14840#issuecomment-785598428


   Hi @wuchong, I will rebase again. 
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (FLINK-21497) FileSourceTextLinesITCase.testBoundedTextFileSourceWithJobManagerFailover fail

2021-02-24 Thread Guowei Ma (Jira)
Guowei Ma created FLINK-21497:
-

 Summary: 
FileSourceTextLinesITCase.testBoundedTextFileSourceWithJobManagerFailover fail
 Key: FLINK-21497
 URL: https://issues.apache.org/jira/browse/FLINK-21497
 Project: Flink
  Issue Type: Bug
  Components: Runtime / Coordination
Affects Versions: 1.13.0
Reporter: Guowei Ma


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=13722=logs=a5ef94ef-68c2-57fd-3794-dc108ed1c495=9c1ddabe-d186-5a2c-5fcc-f3cafb3ec699
{code:java}
2021-02-24T22:47:55.4844360Z java.lang.RuntimeException: Failed to fetch next 
result
2021-02-24T22:47:55.4847421Zat 
org.apache.flink.streaming.api.operators.collect.CollectResultIterator.nextResultFromFetcher(CollectResultIterator.java:109)
2021-02-24T22:47:55.4848395Zat 
org.apache.flink.streaming.api.operators.collect.CollectResultIterator.hasNext(CollectResultIterator.java:80)
2021-02-24T22:47:55.4849262Zat 
org.apache.flink.connector.file.src.FileSourceTextLinesITCase.testBoundedTextFileSource(FileSourceTextLinesITCase.java:148)
2021-02-24T22:47:55.4850030Zat 
org.apache.flink.connector.file.src.FileSourceTextLinesITCase.testBoundedTextFileSourceWithJobManagerFailover(FileSourceTextLinesITCase.java:108)
2021-02-24T22:47:55.4850780Zat 
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
2021-02-24T22:47:55.4851322Zat 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
2021-02-24T22:47:55.4858977Zat 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
2021-02-24T22:47:55.4860737Zat 
java.lang.reflect.Method.invoke(Method.java:498)
2021-02-24T22:47:55.4861855Zat 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
2021-02-24T22:47:55.4862873Zat 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
2021-02-24T22:47:55.4863598Zat 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
2021-02-24T22:47:55.4864289Zat 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
2021-02-24T22:47:55.4864937Zat 
org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
2021-02-24T22:47:55.4865570Zat 
org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
2021-02-24T22:47:55.4866152Zat 
org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
2021-02-24T22:47:55.4866670Zat 
org.junit.rules.RunRules.evaluate(RunRules.java:20)
2021-02-24T22:47:55.4867172Zat 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
2021-02-24T22:47:55.4867765Zat 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
2021-02-24T22:47:55.4868588Zat 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
2021-02-24T22:47:55.4869683Zat 
org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
2021-02-24T22:47:55.4886595Zat 
org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
2021-02-24T22:47:55.4887656Zat 
org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
2021-02-24T22:47:55.4888451Zat 
org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
2021-02-24T22:47:55.4889199Zat 
org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
2021-02-24T22:47:55.4889845Zat 
org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
2021-02-24T22:47:55.4890447Zat 
org.junit.rules.RunRules.evaluate(RunRules.java:20)
2021-02-24T22:47:55.4891037Zat 
org.junit.runners.ParentRunner.run(ParentRunner.java:363)
2021-02-24T22:47:55.4891604Zat 
org.junit.runners.Suite.runChild(Suite.java:128)
2021-02-24T22:47:55.4892235Zat 
org.junit.runners.Suite.runChild(Suite.java:27)
2021-02-24T22:47:55.4892959Zat 
org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
2021-02-24T22:47:55.4893573Zat 
org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
2021-02-24T22:47:55.4894216Zat 
org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
2021-02-24T22:47:55.4894824Zat 
org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
2021-02-24T22:47:55.4895425Zat 
org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
2021-02-24T22:47:55.4896027Zat 
org.junit.runners.ParentRunner.run(ParentRunner.java:363)
2021-02-24T22:47:55.4896638Zat 
org.apache.maven.surefire.junitcore.JUnitCore.run(JUnitCore.java:55)
2021-02-24T22:47:55.4897378Zat 
org.apache.maven.surefire.junitcore.JUnitCoreWrapper.createRequestAndRun(JUnitCoreWrapper.java:137)
2021-02-24T22:47:55.4898342Zat 
org.apache.maven.surefire.junitcore.JUnitCoreWrapper.executeEager(JUnitCoreWrapper.java:107)
2021-02-24T22:47:55.4899204Zat 

[GitHub] [flink] flinkbot edited a comment on pull request #15006: [Flink-21485][sql-client] Simplify the ExecutionContext

2021-02-24 Thread GitBox


flinkbot edited a comment on pull request #15006:
URL: https://github.com/apache/flink/pull/15006#issuecomment-785114095


   
   ## CI report:
   
   * f6a9d79eb7ef2cc93837e0370ba91b62eb17ba41 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13704)
 
   * 4cd5563d44a62aec2b8db80cfe8da650fddf4ffa UNKNOWN
   * 4cc39b300203dd122980e6ef22080746834fcc9b UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #15015: [FLINK-21479][coordination] Provide read-only interface of TaskManagerTracker to ResourceAllocationStrategy

2021-02-24 Thread GitBox


flinkbot edited a comment on pull request #15015:
URL: https://github.com/apache/flink/pull/15015#issuecomment-785556721


   
   ## CI report:
   
   * 122ce5ed1d4f393686491f5be80dacb320020e3e Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13737)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Closed] (FLINK-18454) Add a code contribution section about how to look for what to contribute

2021-02-24 Thread Zhu Zhu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhu Zhu closed FLINK-18454.
---
Resolution: Done

done via:
6c778cb8e85f1ea2b200fec82fa6e7036ef7b947
8f66969d3fcb7c0ee92750c8a6fa068834ca1cfd


> Add a code contribution section about how to look for what to contribute
> 
>
> Key: FLINK-18454
> URL: https://issues.apache.org/jira/browse/FLINK-18454
> Project: Flink
>  Issue Type: Task
>  Components: Project Website
>Reporter: Andrey Zagrebin
>Assignee: Zhu Zhu
>Priority: Major
>  Labels: pull-request-available
>
> This section is to give general advice about browsing open Jira issues and
> starter tasks.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] wuchong commented on a change in pull request #14958: [FLINK-18550][sql-client] use TableResult#collect to get select result for sql client

2021-02-24 Thread GitBox


wuchong commented on a change in pull request #14958:
URL: https://github.com/apache/flink/pull/14958#discussion_r582523520



##
File path: 
flink-table/flink-sql-client/src/test/java/org/apache/flink/table/client/cli/CliTableauResultViewTest.java
##
@@ -286,7 +320,7 @@ public void testFailedBatchResult() {
 
 @Test
 public void testStreamingResult() {
-        ResultDescriptor resultDescriptor = new ResultDescriptor("", schema, true, true);
+        ResultDescriptor resultDescriptor = new ResultDescriptor("", schema, true, true, false);

Review comment:
   The last flag should be true. 

##
File path: 
flink-table/flink-sql-client/src/main/java/org/apache/flink/table/client/cli/CliClient.java
##
@@ -654,11 +654,7 @@ private void callSelect(SqlCommandCall cmdCall) {
         if (resultDesc.isTableauMode()) {
             try (CliTableauResultView tableauResultView =
                     new CliTableauResultView(terminal, executor, sessionId, resultDesc)) {
-                if (resultDesc.isMaterialized()) {
-                    tableauResultView.displayBatchResults();
-                } else {
-                    tableauResultView.displayStreamResults();
-                }
+                tableauResultView.displayResults(resultDesc.isStreamingMode());

Review comment:
   The `CliTableauResultView` already holds the `resultDesc`, so there is 
no need to pass the streaming mode again. 
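
   A rough sketch of that idea, for illustration only (simplified stand-in classes, not the real flink-sql-client API):

       // Illustrative only: the view keeps the descriptor it was constructed with
       // and derives the display mode from it, so callers need no extra flag.
       class ResultDescriptorSketch {
           private final boolean streamingMode;

           ResultDescriptorSketch(boolean streamingMode) {
               this.streamingMode = streamingMode;
           }

           boolean isStreamingMode() {
               return streamingMode;
           }
       }

       class TableauResultViewSketch {
           private final ResultDescriptorSketch resultDescriptor;

           TableauResultViewSketch(ResultDescriptorSketch resultDescriptor) {
               this.resultDescriptor = resultDescriptor;
           }

           // The caller simply invokes displayResults(); the branch lives inside the view.
           void displayResults() {
               if (resultDescriptor.isStreamingMode()) {
                   displayStreamResults();
               } else {
                   displayBatchResults();
               }
           }

           private void displayStreamResults() {
               System.out.println("streaming display");
           }

           private void displayBatchResults() {
               System.out.println("batch display");
           }
       }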






This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #15015: [FLINK-21479][coordination] Provide read-only interface of TaskManagerTracker to ResourceAllocationStrategy

2021-02-24 Thread GitBox


flinkbot commented on pull request #15015:
URL: https://github.com/apache/flink/pull/15015#issuecomment-785556721


   
   ## CI report:
   
   * 122ce5ed1d4f393686491f5be80dacb320020e3e UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #15006: [Flink-21485][sql-client] Simplify the ExecutionContext

2021-02-24 Thread GitBox


flinkbot edited a comment on pull request #15006:
URL: https://github.com/apache/flink/pull/15006#issuecomment-785114095


   
   ## CI report:
   
   * f6a9d79eb7ef2cc93837e0370ba91b62eb17ba41 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13704)
 
   * 4cd5563d44a62aec2b8db80cfe8da650fddf4ffa UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #14740: [FLINK-21067][runtime][checkpoint] Modify the logic of computing which tasks to trigger/ack/commit to support finished tasks

2021-02-24 Thread GitBox


flinkbot edited a comment on pull request #14740:
URL: https://github.com/apache/flink/pull/14740#issuecomment-766340750


   
   ## CI report:
   
   * c7e6b28b249f85cf52740d5201a769e0982a60aa UNKNOWN
   * bebd298009b12a9d5ac6518902f5534f8e00ff32 UNKNOWN
   * eb6c10b0d339bfc92a540314e7c58cbf11a70dd9 UNKNOWN
   * 1b4d1fc172e44377cbde71a71f34ea7f17b722ce Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13685)
 
   * d2b929d9a6f8f9ce142d94ef8be40d8e70e289a1 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13735)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-21495) Flink 1.12.0 execute hive sql GeneratedExpression ERROR

2021-02-24 Thread jiayue.yu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-21495?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17290655#comment-17290655
 ] 

jiayue.yu commented on FLINK-21495:
---

When the SELECT CASE WHEN expression contains order_status in (1,3,4,11), it
fails with CodeGenException: Unable to find common type of
GeneratedExpression(field$26,isNull$24,,STRING,None).

> Flink 1.12.0 execute hive sql  GeneratedExpression ERROR
> 
>
> Key: FLINK-21495
> URL: https://issues.apache.org/jira/browse/FLINK-21495
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.12.0
>Reporter: jiayue.yu
>Priority: Major
>
> SQL :
> SELECT
> user_id,
> sum(CASE when order_type = 0 and order_status in (1,3,4,11) and ins_category 
> = 3 then 1 end) as sdb_normal_medical_pay_num
> from hive.shuidi_dwb.dwb_sdb_order_info_full_d
> where dt = date_sub(CURRENT_DATE,1)
> and valid = 1
> group by user_id
> limit 10
>  
> Exception:
> Exception in thread "main" 
> org.apache.flink.table.planner.codegen.CodeGenException: Unable to find 
> common type of GeneratedExpression(field$26,isNull$24,,STRING,None) and 
> ArrayBuffer(GeneratedExpression(((int) 3),false,,INT NOT 
> NULL,Some(3))).Exception in thread "main" 
> org.apache.flink.table.planner.codegen.CodeGenException: Unable to find 
> common type of GeneratedExpression(field$26,isNull$24,,STRING,None) and 
> ArrayBuffer(GeneratedExpression(((int) 3),false,,INT NOT NULL,Some(3))). at 
> org.apache.flink.table.planner.codegen.calls.ScalarOperatorGens$.$anonfun$generateIn$2(ScalarOperatorGens.scala:307)
>  at scala.Option.orElse(Option.scala:289) at 
> org.apache.flink.table.planner.codegen.calls.ScalarOperatorGens$.generateIn(ScalarOperatorGens.scala:307)
>  at 
> org.apache.flink.table.planner.codegen.ExprCodeGenerator.generateCallExpression(ExprCodeGenerator.scala:724)
>  at 
> org.apache.flink.table.planner.codegen.ExprCodeGenerator.visitCall(ExprCodeGenerator.scala:507)
>  at 
> org.apache.flink.table.planner.codegen.ExprCodeGenerator.visitCall(ExprCodeGenerator.scala:56)
>  at org.apache.calcite.rex.RexCall.accept(RexCall.java:174) at 
> org.apache.flink.table.planner.codegen.ExprCodeGenerator.$anonfun$visitCall$2(ExprCodeGenerator.scala:526)
>  at 
> scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:233) at 
> scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:58) at 
> scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:51) at 
> scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47) at 
> scala.collection.TraversableLike.map(TraversableLike.scala:233) at 
> scala.collection.TraversableLike.map$(TraversableLike.scala:226) at 
> scala.collection.AbstractTraversable.map(Traversable.scala:104) at 
> org.apache.flink.table.planner.codegen.ExprCodeGenerator.visitCall(ExprCodeGenerator.scala:517)
>  at 
> org.apache.flink.table.planner.codegen.ExprCodeGenerator.visitCall(ExprCodeGenerator.scala:56)
>  at org.apache.calcite.rex.RexCall.accept(RexCall.java:174) at 
> org.apache.flink.table.planner.codegen.ExprCodeGenerator.$anonfun$visitCall$2(ExprCodeGenerator.scala:526)
>  at 
> scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:233) at 
> scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:58) at 
> scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:51) at 
> scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47) at 
> scala.collection.TraversableLike.map(TraversableLike.scala:233) at 
> scala.collection.TraversableLike.map$(TraversableLike.scala:226) at 
> scala.collection.AbstractTraversable.map(Traversable.scala:104) at 
> org.apache.flink.table.planner.codegen.ExprCodeGenerator.visitCall(ExprCodeGenerator.scala:517)
>  at 
> org.apache.flink.table.planner.codegen.ExprCodeGenerator.visitCall(ExprCodeGenerator.scala:56)
>  at org.apache.calcite.rex.RexCall.accept(RexCall.java:174) at 
> org.apache.flink.table.planner.codegen.ExprCodeGenerator.generateExpression(ExprCodeGenerator.scala:155)
>  at 
> org.apache.flink.table.planner.codegen.CalcCodeGenerator$.$anonfun$generateProcessCode$5(CalcCodeGenerator.scala:143)
>  at 
> scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:233) at 
> scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:58) at 
> scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:51) at 
> scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47) at 
> scala.collection.TraversableLike.map(TraversableLike.scala:233) at 
> scala.collection.TraversableLike.map$(TraversableLike.scala:226) at 
> scala.collection.AbstractTraversable.map(Traversable.scala:104) at 
> 

[GitHub] [flink] wangyang0918 commented on a change in pull request #14629: [FLINK-15656][k8s] Support pod template for native kubernetes integration

2021-02-24 Thread GitBox


wangyang0918 commented on a change in pull request #14629:
URL: https://github.com/apache/flink/pull/14629#discussion_r582506793



##
File path: 
flink-kubernetes/src/main/java/org/apache/flink/kubernetes/KubernetesResourceManagerDriver.java
##
@@ -100,6 +104,17 @@ protected void initializeInternal() throws Exception {
     kubeClientOpt =
             Optional.of(kubeClientFactory.fromConfiguration(flinkConfig, getIoExecutor()));
     podsWatchOpt = watchTaskManagerPods();
+    taskManagerPodTemplate =
+            flinkConfig
+                    .getOptional(KubernetesConfigOptions.TASK_MANAGER_POD_TEMPLATE)
+                    .map(
+                            ignore ->
+                                    KubernetesUtils.loadPodFromTemplateFile(
+                                            kubeClientOpt.get(),
+                                            KubernetesUtils.getTaskManagerPodTemplateFileInPod(
+                                                    flinkConfig),
+                                            Constants.MAIN_CONTAINER_NAME))
+                    .orElse(new FlinkPod.Builder().build());

Review comment:
   Makes sense.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink-web] HuangXingBo opened a new pull request #422: Add xingbo to community page

2021-02-24 Thread GitBox


HuangXingBo opened a new pull request #422:
URL: https://github.com/apache/flink-web/pull/422


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #15003: [FLINK-21482][table-planner-blink] Support grouping set syntax for WindowAggregate

2021-02-24 Thread GitBox


flinkbot edited a comment on pull request #15003:
URL: https://github.com/apache/flink/pull/15003#issuecomment-785052606


   
   ## CI report:
   
   * b677d152aea25a4ba389e7713248a78484d942a6 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13695)
 
   * 931a8b3776e71f09ecdcd74b1851dbc0ae035c6e Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13734)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #15015: [FLINK-21479][coordination] Provide read-only interface of TaskManagerTracker to ResourceAllocationStrategy

2021-02-24 Thread GitBox


flinkbot commented on pull request #15015:
URL: https://github.com/apache/flink/pull/15015#issuecomment-785551035


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 122ce5ed1d4f393686491f5be80dacb320020e3e (Thu Feb 25 
03:32:30 UTC 2021)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #14740: [FLINK-21067][runtime][checkpoint] Modify the logic of computing which tasks to trigger/ack/commit to support finished tasks

2021-02-24 Thread GitBox


flinkbot edited a comment on pull request #14740:
URL: https://github.com/apache/flink/pull/14740#issuecomment-766340750


   
   ## CI report:
   
   * c7e6b28b249f85cf52740d5201a769e0982a60aa UNKNOWN
   * bebd298009b12a9d5ac6518902f5534f8e00ff32 UNKNOWN
   * eb6c10b0d339bfc92a540314e7c58cbf11a70dd9 UNKNOWN
   * 1b4d1fc172e44377cbde71a71f34ea7f17b722ce Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13685)
 
   * d2b929d9a6f8f9ce142d94ef8be40d8e70e289a1 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (FLINK-21496) Upgrade Testcontainers to 1.15.1 in Stateful Functions

2021-02-24 Thread Tzu-Li (Gordon) Tai (Jira)
Tzu-Li (Gordon) Tai created FLINK-21496:
---

 Summary: Upgrade Testcontainers to 1.15.1 in Stateful Functions
 Key: FLINK-21496
 URL: https://issues.apache.org/jira/browse/FLINK-21496
 Project: Flink
  Issue Type: Bug
  Components: Stateful Functions
Reporter: Tzu-Li (Gordon) Tai
Assignee: Tzu-Li (Gordon) Tai
 Fix For: statefun-3.0.0


The E2E tests in CI are currently failing for StateFun; they started failing recently 
due to GitHub Actions upgrading their Docker version to 20.10.2+.
Due to this upgrade, our current Testcontainers version 1.12.x is no longer 
compatible, since that version relies on a deprecated Docker API that no longer 
exists in Docker version 20.10.2 (API version 1.41).

Full description of the issue: 
https://github.com/testcontainers/testcontainers-java/issues/3574



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-21495) Flink 1.12.0 execute hive sql GeneratedExpression ERROR

2021-02-24 Thread jiayue.yu (Jira)
jiayue.yu created FLINK-21495:
-

 Summary: Flink 1.12.0 execute hive sql  GeneratedExpression ERROR
 Key: FLINK-21495
 URL: https://issues.apache.org/jira/browse/FLINK-21495
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Planner
Affects Versions: 1.12.0
Reporter: jiayue.yu


SQL :

SELECT
user_id,
sum(CASE when order_type = 0 and order_status in (1,3,4,11) and ins_category = 
3 then 1 end) as sdb_normal_medical_pay_num
from hive.shuidi_dwb.dwb_sdb_order_info_full_d
where dt = date_sub(CURRENT_DATE,1)
and valid = 1
group by user_id
limit 10

 

Exception:

Exception in thread "main" 
org.apache.flink.table.planner.codegen.CodeGenException: Unable to find common 
type of GeneratedExpression(field$26,isNull$24,,STRING,None) and 
ArrayBuffer(GeneratedExpression(((int) 3),false,,INT NOT 
NULL,Some(3))).Exception in thread "main" 
org.apache.flink.table.planner.codegen.CodeGenException: Unable to find common 
type of GeneratedExpression(field$26,isNull$24,,STRING,None) and 
ArrayBuffer(GeneratedExpression(((int) 3),false,,INT NOT NULL,Some(3))). at 
org.apache.flink.table.planner.codegen.calls.ScalarOperatorGens$.$anonfun$generateIn$2(ScalarOperatorGens.scala:307)
 at scala.Option.orElse(Option.scala:289) at 
org.apache.flink.table.planner.codegen.calls.ScalarOperatorGens$.generateIn(ScalarOperatorGens.scala:307)
 at 
org.apache.flink.table.planner.codegen.ExprCodeGenerator.generateCallExpression(ExprCodeGenerator.scala:724)
 at 
org.apache.flink.table.planner.codegen.ExprCodeGenerator.visitCall(ExprCodeGenerator.scala:507)
 at 
org.apache.flink.table.planner.codegen.ExprCodeGenerator.visitCall(ExprCodeGenerator.scala:56)
 at org.apache.calcite.rex.RexCall.accept(RexCall.java:174) at 
org.apache.flink.table.planner.codegen.ExprCodeGenerator.$anonfun$visitCall$2(ExprCodeGenerator.scala:526)
 at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:233) 
at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:58) at 
scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:51) at 
scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47) at 
scala.collection.TraversableLike.map(TraversableLike.scala:233) at 
scala.collection.TraversableLike.map$(TraversableLike.scala:226) at 
scala.collection.AbstractTraversable.map(Traversable.scala:104) at 
org.apache.flink.table.planner.codegen.ExprCodeGenerator.visitCall(ExprCodeGenerator.scala:517)
 at 
org.apache.flink.table.planner.codegen.ExprCodeGenerator.visitCall(ExprCodeGenerator.scala:56)
 at org.apache.calcite.rex.RexCall.accept(RexCall.java:174) at 
org.apache.flink.table.planner.codegen.ExprCodeGenerator.$anonfun$visitCall$2(ExprCodeGenerator.scala:526)
 at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:233) 
at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:58) at 
scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:51) at 
scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47) at 
scala.collection.TraversableLike.map(TraversableLike.scala:233) at 
scala.collection.TraversableLike.map$(TraversableLike.scala:226) at 
scala.collection.AbstractTraversable.map(Traversable.scala:104) at 
org.apache.flink.table.planner.codegen.ExprCodeGenerator.visitCall(ExprCodeGenerator.scala:517)
 at 
org.apache.flink.table.planner.codegen.ExprCodeGenerator.visitCall(ExprCodeGenerator.scala:56)
 at org.apache.calcite.rex.RexCall.accept(RexCall.java:174) at 
org.apache.flink.table.planner.codegen.ExprCodeGenerator.generateExpression(ExprCodeGenerator.scala:155)
 at 
org.apache.flink.table.planner.codegen.CalcCodeGenerator$.$anonfun$generateProcessCode$5(CalcCodeGenerator.scala:143)
 at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:233) 
at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:58) at 
scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:51) at 
scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47) at 
scala.collection.TraversableLike.map(TraversableLike.scala:233) at 
scala.collection.TraversableLike.map$(TraversableLike.scala:226) at 
scala.collection.AbstractTraversable.map(Traversable.scala:104) at 
org.apache.flink.table.planner.codegen.CalcCodeGenerator$.produceProjectionCode$1(CalcCodeGenerator.scala:143)
 at 
org.apache.flink.table.planner.codegen.CalcCodeGenerator$.generateProcessCode(CalcCodeGenerator.scala:190)
 at 
org.apache.flink.table.planner.codegen.CalcCodeGenerator$.generateCalcOperator(CalcCodeGenerator.scala:59)
 at 
org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecCalc.translateToPlanInternal(StreamExecCalc.scala:84)
 at 
org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecCalc.translateToPlanInternal(StreamExecCalc.scala:39)
 at 

[jira] [Updated] (FLINK-21479) Provide read-only interface of TaskManagerTracker to ResourceAllocationStrategy

2021-02-24 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21479?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-21479:
---
Labels: pull-request-available  (was: )

> Provide read-only interface of TaskManagerTracker to 
> ResourceAllocationStrategy
> ---
>
> Key: FLINK-21479
> URL: https://issues.apache.org/jira/browse/FLINK-21479
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Checkpointing
>Reporter: Xintong Song
>Assignee: Yangze Guo
>Priority: Major
>  Labels: pull-request-available
>
> This is a predecessor for optimizing performance of 
> {{ResourceAllocationStrategy}} (FLINK-21174).
> To optimize the performance, we will need to build and maintain an index for 
> registered/pending resources. As the strategy is designed to be stateless, we 
> propose to build and maintain the index in {{TaskManagerTracker}}, providing 
> only access methods to the strategy.
> To decouple index access from the common {{FineGrainedSlotManager}} 
> workflow, while preventing the strategy from directly modifying the state, 
> we can introduce a read-only interface of {{TaskManagerTracker}} and pass it 
> to the strategy. In this way, we can easily extend the read-only interface to 
> provide more index-accessing methods in the future.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] KarmaGYZ commented on pull request #15015: [FLINK-21479][coordination] Provide read-only interface of TaskManagerTracker to ResourceAllocationStrategy

2021-02-24 Thread GitBox


KarmaGYZ commented on pull request #15015:
URL: https://github.com/apache/flink/pull/15015#issuecomment-785543977


   cc @xintongsong 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] KarmaGYZ opened a new pull request #15015: [FLINK-21479][coordination] Provide read-only interface of TaskManagerTracker to ResourceAllocationStrategy

2021-02-24 Thread GitBox


KarmaGYZ opened a new pull request #15015:
URL: https://github.com/apache/flink/pull/15015


   
   
   ## What is the purpose of the change
   
   This is a predecessor for optimizing performance of 
ResourceAllocationStrategy (FLINK-21174).
   
   To optimize the performance, we will need to build and maintain an index for 
registered/pending resources. As the strategy is designed to be stateless, we 
propose to build and maintain the index in the TaskManagerTracker, providing only 
access methods to the strategy.
   
   To decouple index access from the common FineGrainedSlotManager workflow, 
while preventing the strategy from directly modifying the state, we can 
introduce a read-only interface of TaskManagerTracker and pass it to the 
strategy. In this way, we can easily extend the read-only interface to provide 
more index-accessing methods in the future.
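   
   A minimal, self-contained sketch of this pattern (all names below are illustrative placeholders, not the actual Flink interfaces):
   
       import java.util.Collection;
       import java.util.Collections;
       import java.util.HashMap;
       import java.util.Map;
   
       // Read-only view handed to the stateless strategy; it only exposes lookups.
       interface ReadOnlyTaskManagerTrackerSketch {
           Collection<String> getRegisteredTaskManagers();
       }
   
       // The tracker owns the mutable state and index; only the slot manager workflow mutates it.
       class TaskManagerTrackerSketch implements ReadOnlyTaskManagerTrackerSketch {
           private final Map<String, Integer> registeredSlots = new HashMap<>();
   
           void addTaskManager(String id, int slots) {
               registeredSlots.put(id, slots);
           }
   
           void removeTaskManager(String id) {
               registeredSlots.remove(id);
           }
   
           @Override
           public Collection<String> getRegisteredTaskManagers() {
               return Collections.unmodifiableCollection(registeredSlots.keySet());
           }
       }
   
       // The strategy only ever sees the read-only interface, so it cannot modify the state.
       class ResourceAllocationStrategySketch {
           int countRegisteredTaskManagers(ReadOnlyTaskManagerTrackerSketch tracker) {
               return tracker.getRegisteredTaskManagers().size();
           }
       }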
   
   ## Verifying this change
   
   This change is already covered by existing tests.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Comment Edited] (FLINK-21456) TableResult#print() should correctly stringify values of TIMESTAMP type in SQL format

2021-02-24 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-21456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17290639#comment-17290639
 ] 

Jark Wu edited comment on FLINK-21456 at 2/25/21, 3:10 AM:
---

The simplest way to fix all data types would be to add an implicit CAST TO STRING for 
every column. This makes sure the string representation is exactly the same 
as the SQL behavior. However, this only works for the SQL Client, not for 
{{TableResult#print()}}.


was (Author: jark):
A simplest to fix all data types can be add a implicit CAST TO STRING for every 
column. This can make sure the string representation is exactly the same with 
SQL behavior. However, this can only work for SQL Client, not work for 
{{TableResult#print()}}.

> TableResult#print() should correctly stringify values of TIMESTAMP type in 
> SQL format
> -
>
> Key: FLINK-21456
> URL: https://issues.apache.org/jira/browse/FLINK-21456
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Reporter: Jark Wu
>Priority: Major
>
> Currently {{TableResult#print()}} simply uses {{Object#toString()}} as the 
> string representation of the fields. This is not SQL compliant, because for 
> TIMESTAMP and TIMESTAMP_LTZ, the string representation should be {{2021-02-23 
> 17:30:00}} instead of {{2021-02-23T17:30:00Z}}.
> Note: we may need to update {{PrintUtils#rowToString(Row)}} and also the SQL 
> Client, which invokes this method. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-21456) TableResult#print() should correctly stringify values of TIMESTAMP type in SQL format

2021-02-24 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-21456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17290639#comment-17290639
 ] 

Jark Wu commented on FLINK-21456:
-

The simplest way to fix all data types would be to add an implicit CAST TO STRING for 
every column. This makes sure the string representation is exactly the same 
as the SQL behavior. However, this only works for the SQL Client, not for 
{{TableResult#print()}}.
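
For illustration only (a standalone Java snippet, not Flink code), the gap between the default {{Object#toString()}} rendering and the SQL-style rendering the issue asks for:

{code}
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;

public class TimestampRenderingSketch {
    public static void main(String[] args) {
        LocalDateTime ts = LocalDateTime.of(2021, 2, 23, 17, 30, 0);

        // Default toString() uses the ISO-8601 form with a 'T' separator.
        System.out.println(ts); // prints 2021-02-23T17:30

        // SQL-style rendering uses a space separator and full seconds.
        System.out.println(
                ts.format(DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss"))); // prints 2021-02-23 17:30:00
    }
}
{code}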

> TableResult#print() should correctly stringify values of TIMESTAMP type in 
> SQL format
> -
>
> Key: FLINK-21456
> URL: https://issues.apache.org/jira/browse/FLINK-21456
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Reporter: Jark Wu
>Priority: Major
>
> Currently {{TableResult#print()}} simply uses {{Object#toString()}} as the 
> string representation of the fields. This is not SQL compliant, because for 
> TIMESTAMP and TIMESTAMP_LTZ, the string representation should be {{2021-02-23 
> 17:30:00}} instead of {{2021-02-23T17:30:00Z}}.
> Note: we may need to update {{PrintUtils#rowToString(Row)}} and also the SQL 
> Client, which invokes this method. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink-web] zhuzhurk merged pull request #420: [FLINK-18454] Add Chinese translation for "how to look for what to contribute"

2021-02-24 Thread GitBox


zhuzhurk merged pull request #420:
URL: https://github.com/apache/flink-web/pull/420


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink-web] zhuzhurk commented on pull request #420: [FLINK-18454] Add Chinese translation for "how to look for what to contribute"

2021-02-24 Thread GitBox


zhuzhurk commented on pull request #420:
URL: https://github.com/apache/flink-web/pull/420#issuecomment-785542493


   Thanks for reviewing! @gaoyunhaii 
   Merging.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #15003: [FLINK-21482][table-planner-blink] Support grouping set syntax for WindowAggregate

2021-02-24 Thread GitBox


flinkbot edited a comment on pull request #15003:
URL: https://github.com/apache/flink/pull/15003#issuecomment-785052606


   
   ## CI report:
   
   * b677d152aea25a4ba389e7713248a78484d942a6 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13695)
 
   * 931a8b3776e71f09ecdcd74b1851dbc0ae035c6e UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #15004: [FLINK-21253][table-planner-blink] Support grouping set syntax for GroupWindowAggregate

2021-02-24 Thread GitBox


flinkbot edited a comment on pull request #15004:
URL: https://github.com/apache/flink/pull/15004#issuecomment-785052726


   
   ## CI report:
   
   * fd725646ab4c7d7174203dbf481901093265f3d8 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13696)
 
   * 3af09855fc47130d93b11e6d6ba1c3dedb6574a5 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13733)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #14775: [FLINK-20964][python] Introduce PythonStreamGroupWindowAggregateOperator

2021-02-24 Thread GitBox


flinkbot edited a comment on pull request #14775:
URL: https://github.com/apache/flink/pull/14775#issuecomment-768234660


   
   ## CI report:
   
   * f2494304b2c16c02b89ee96864ff1e61f446f203 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13697)
 
   * d8f3c75d291d0050ee56f82aa019418718bf87a5 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=13732)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-21494) Could not execute statement 'USE `default`' in Flink SQL client

2021-02-24 Thread Zheng Hu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21494?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zheng Hu updated FLINK-21494:
-
Description: 
I have two databases in my iceberg catalog: one is `default`, the other is 
`test_db`. However, I cannot switch to the `default` database because of the 
following Flink SQL parser bug: 



{code}

Flink SQL> show databases;

default

test_db

 

Flink SQL> use `default`;

[ERROR] Could not execute SQL statement. Reason:

org.apache.flink.sql.parser.impl.ParseException: Incorrect syntax near the 
keyword 'USE' at line 1, column 1.

Was expecting one of:

    "ABS" ...

    "ALTER" ...

    "ARRAY" ...

    "AVG" ...

    "CALL" ...

    "CARDINALITY" ...

    "CASE" ...

    "CAST" ...

    "CEIL" ...

    "CEILING" ...

    "CHAR_LENGTH" ...

    "CHARACTER_LENGTH" ...

    "CLASSIFIER" ...

    "COALESCE" ...

    "COLLECT" ...

    "CONVERT" ...

    "COUNT" ...

    "COVAR_POP" ...

    "COVAR_SAMP" ...

    "CREATE" ...

    "CUME_DIST" ...

    "CURRENT" ...

    "CURRENT_CATALOG" ...

    "CURRENT_DATE" ...

    "CURRENT_DEFAULT_TRANSFORM_GROUP" ...

    "CURRENT_PATH" ...

    "CURRENT_ROLE" ...

    "CURRENT_SCHEMA" ...

    "CURRENT_TIME" ...

    "CURRENT_TIMESTAMP" ...

    "CURRENT_USER" ...

    "CURSOR" ...

    "DATE" ...

    "DELETE" ...

    "DENSE_RANK" ...

    "DESCRIBE" ...

    "DROP" ...

    "ELEMENT" ...

    "EVERY" ...

    "EXISTS" ...

    "EXP" ...

    "EXPLAIN" ...

    "EXTRACT" ...

    "FALSE" ...

    "FIRST_VALUE" ...

    "FLOOR" ...

    "FUSION" ...

    "GROUPING" ...

    "HOUR" ...

    "INSERT" ...

    "INTERSECTION" ...

    "INTERVAL" ...

    "JSON_ARRAY" ...

    "JSON_ARRAYAGG" ...

    "JSON_EXISTS" ...

    "JSON_OBJECT" ...

    "JSON_OBJECTAGG" ...

    "JSON_QUERY" ...

    "JSON_VALUE" ...

    "LAG" ...

    "LAST_VALUE" ...

    "LEAD" ...

    "LEFT" ...

    "LN" ...

    "LOCALTIME" ...

    "LOCALTIMESTAMP" ...

    "LOWER" ...

    "MATCH_NUMBER" ...

    "MAX" ...

    "MERGE" ...

    "MIN" ...

    "MINUTE" ...

    "MOD" ...

    "MONTH" ...

    "MULTISET" ...

    "NEW" ...

    "NEXT" ...

    "NOT" ...

    "NTH_VALUE" ...

    "NTILE" ...

    "NULL" ...

    "NULLIF" ...

    "OCTET_LENGTH" ...

    "OVERLAY" ...

    "PERCENT_RANK" ...

    "PERIOD" ...

    "POSITION" ...

    "POWER" ...

    "PREV" ...

    "RANK" ...

    "REGR_COUNT" ...

    "REGR_SXX" ...

    "REGR_SYY" ...

    "RESET" ...

    "RIGHT" ...

    "ROW" ...

    "ROW_NUMBER" ...

    "RUNNING" ...

    "SECOND" ...

    "SELECT" ...

    "SESSION_USER" ...

    "SET" ...

    "SOME" ...

    "SPECIFIC" ...

    "SQRT" ...

    "STDDEV_POP" ...

    "STDDEV_SAMP" ...

    "SUBSTRING" ...

    "SUM" ...

    "SYSTEM_USER" ...

    "TABLE" ...

    "TIME" ...

    "TIMESTAMP" ...

    "TRANSLATE" ...

    "TRIM" ...

    "TRUE" ...

    "TRUNCATE" ...

    "UNKNOWN" ...

    "UPDATE" ...

    "UPPER" ...

    "UPSERT" ...

    "USER" ...

    "VALUES" ...

    "VAR_POP" ...

    "VAR_SAMP" ...

    "WITH" ...

    "YEAR" ...

     ...

     ...

     ...

     ...

     ...

     ...

     ...

     ...

     ...

    "(" ...

     ...

     ...

     ...

     ...

    "?" ...

    "+" ...

    "-" ...

     ...

     ...

     ...

     ...

     ...

     ...

    "SHOW" ...

    "USE"  ...

    "USE"  ...

    "USE"  ...

    "USE"  ...

    "USE"  ...

    "USE"  ...

{code}


It's OK to switch to `test_db`. 


{code}
Flink SQL> use `test_db`;

Flink SQL> show tables; 
[INFO] Result was empty.
{code}


The stacktrace is here: 

  was:
I have two databases in my iceberg catalog,  one is `default`, another one is 
`test_db`.  While I cannot switch to use the `default` database because of the 
Flink SQL parser bug: 



{code}

Flink SQL> show databases;

default

test_db

 

Flink SQL> use `default`;

[ERROR] Could not execute SQL statement. Reason:

org.apache.flink.sql.parser.impl.ParseException: Incorrect syntax near the 
keyword 'USE' at line 1, column 1.

Was expecting one of:

    "ABS" ...

    "ALTER" ...

    "ARRAY" ...

    "AVG" ...

    "CALL" ...

    "CARDINALITY" ...

    "CASE" ...

    "CAST" ...

    "CEIL" ...

    "CEILING" ...

    "CHAR_LENGTH" ...

    "CHARACTER_LENGTH" ...

    "CLASSIFIER" ...

    "COALESCE" ...

    "COLLECT" ...

    "CONVERT" ...

    "COUNT" ...

    "COVAR_POP" ...

    "COVAR_SAMP" ...

    "CREATE" ...

    "CUME_DIST" ...

    "CURRENT" ...

    "CURRENT_CATALOG" ...

    "CURRENT_DATE" ...

    "CURRENT_DEFAULT_TRANSFORM_GROUP" ...

    "CURRENT_PATH" ...

    "CURRENT_ROLE" ...

    "CURRENT_SCHEMA" ...

    "CURRENT_TIME" ...

    "CURRENT_TIMESTAMP" ...

    "CURRENT_USER" ...

    "CURSOR" ...

    "DATE" ...

    "DELETE" ...

    "DENSE_RANK" ...

    "DESCRIBE" ...

    "DROP" ...

    "ELEMENT" ...

    "EVERY" ...

    "EXISTS" ...

    "EXP" ...

    "EXPLAIN" ...

    "EXTRACT" ...

    

[jira] [Updated] (FLINK-21494) Could not execute statement 'USE `default`' in Flink SQL client

2021-02-24 Thread Zheng Hu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21494?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zheng Hu updated FLINK-21494:
-
Description: 
I have two databases in my iceberg catalog: one is `default`, the other is 
`test_db`. However, I cannot switch to the `default` database because of the 
following Flink SQL parser bug: 



{code}

Flink SQL> show databases;

default

test_db

 

Flink SQL> use `default`;

[ERROR] Could not execute SQL statement. Reason:

org.apache.flink.sql.parser.impl.ParseException: Incorrect syntax near the 
keyword 'USE' at line 1, column 1.

Was expecting one of:

    "ABS" ...

    "ALTER" ...

    "ARRAY" ...

    "AVG" ...

    "CALL" ...

    "CARDINALITY" ...

    "CASE" ...

    "CAST" ...

    "CEIL" ...

    "CEILING" ...

    "CHAR_LENGTH" ...

    "CHARACTER_LENGTH" ...

    "CLASSIFIER" ...

    "COALESCE" ...

    "COLLECT" ...

    "CONVERT" ...

    "COUNT" ...

    "COVAR_POP" ...

    "COVAR_SAMP" ...

    "CREATE" ...

    "CUME_DIST" ...

    "CURRENT" ...

    "CURRENT_CATALOG" ...

    "CURRENT_DATE" ...

    "CURRENT_DEFAULT_TRANSFORM_GROUP" ...

    "CURRENT_PATH" ...

    "CURRENT_ROLE" ...

    "CURRENT_SCHEMA" ...

    "CURRENT_TIME" ...

    "CURRENT_TIMESTAMP" ...

    "CURRENT_USER" ...

    "CURSOR" ...

    "DATE" ...

    "DELETE" ...

    "DENSE_RANK" ...

    "DESCRIBE" ...

    "DROP" ...

    "ELEMENT" ...

    "EVERY" ...

    "EXISTS" ...

    "EXP" ...

    "EXPLAIN" ...

    "EXTRACT" ...

    "FALSE" ...

    "FIRST_VALUE" ...

    "FLOOR" ...

    "FUSION" ...

    "GROUPING" ...

    "HOUR" ...

    "INSERT" ...

    "INTERSECTION" ...

    "INTERVAL" ...

    "JSON_ARRAY" ...

    "JSON_ARRAYAGG" ...

    "JSON_EXISTS" ...

    "JSON_OBJECT" ...

    "JSON_OBJECTAGG" ...

    "JSON_QUERY" ...

    "JSON_VALUE" ...

    "LAG" ...

    "LAST_VALUE" ...

    "LEAD" ...

    "LEFT" ...

    "LN" ...

    "LOCALTIME" ...

    "LOCALTIMESTAMP" ...

    "LOWER" ...

    "MATCH_NUMBER" ...

    "MAX" ...

    "MERGE" ...

    "MIN" ...

    "MINUTE" ...

    "MOD" ...

    "MONTH" ...

    "MULTISET" ...

    "NEW" ...

    "NEXT" ...

    "NOT" ...

    "NTH_VALUE" ...

    "NTILE" ...

    "NULL" ...

    "NULLIF" ...

    "OCTET_LENGTH" ...

    "OVERLAY" ...

    "PERCENT_RANK" ...

    "PERIOD" ...

    "POSITION" ...

    "POWER" ...

    "PREV" ...

    "RANK" ...

    "REGR_COUNT" ...

    "REGR_SXX" ...

    "REGR_SYY" ...

    "RESET" ...

    "RIGHT" ...

    "ROW" ...

    "ROW_NUMBER" ...

    "RUNNING" ...

    "SECOND" ...

    "SELECT" ...

    "SESSION_USER" ...

    "SET" ...

    "SOME" ...

    "SPECIFIC" ...

    "SQRT" ...

    "STDDEV_POP" ...

    "STDDEV_SAMP" ...

    "SUBSTRING" ...

    "SUM" ...

    "SYSTEM_USER" ...

    "TABLE" ...

    "TIME" ...

    "TIMESTAMP" ...

    "TRANSLATE" ...

    "TRIM" ...

    "TRUE" ...

    "TRUNCATE" ...

    "UNKNOWN" ...

    "UPDATE" ...

    "UPPER" ...

    "UPSERT" ...

    "USER" ...

    "VALUES" ...

    "VAR_POP" ...

    "VAR_SAMP" ...

    "WITH" ...

    "YEAR" ...

     ...

     ...

     ...

     ...

     ...

     ...

     ...

     ...

     ...

    "(" ...

     ...

     ...

     ...

     ...

    "?" ...

    "+" ...

    "-" ...

     ...

     ...

     ...

     ...

     ...

     ...

    "SHOW" ...

    "USE"  ...

    "USE"  ...

    "USE"  ...

    "USE"  ...

    "USE"  ...

    "USE"  ...

{code}


It's OK to switch to `test_db`. 


{code}
Flink SQL> use `test_db`;

Flink SQL> show tables; 
[INFO] Result was empty.
{code}


The stacktrace is here: 
https://issues.apache.org/jira/secure/attachment/13021173/stacktrace.txt

  was:
I have two databases in my iceberg catalog,  one is `default`, another one is 
`test_db`.  While I cannot switch to use the `default` database because of the 
Flink SQL parser bug: 



{code}

Flink SQL> show databases;

default

test_db

 

Flink SQL> use `default`;

[ERROR] Could not execute SQL statement. Reason:

org.apache.flink.sql.parser.impl.ParseException: Incorrect syntax near the 
keyword 'USE' at line 1, column 1.

Was expecting one of:

    "ABS" ...

    "ALTER" ...

    "ARRAY" ...

    "AVG" ...

    "CALL" ...

    "CARDINALITY" ...

    "CASE" ...

    "CAST" ...

    "CEIL" ...

    "CEILING" ...

    "CHAR_LENGTH" ...

    "CHARACTER_LENGTH" ...

    "CLASSIFIER" ...

    "COALESCE" ...

    "COLLECT" ...

    "CONVERT" ...

    "COUNT" ...

    "COVAR_POP" ...

    "COVAR_SAMP" ...

    "CREATE" ...

    "CUME_DIST" ...

    "CURRENT" ...

    "CURRENT_CATALOG" ...

    "CURRENT_DATE" ...

    "CURRENT_DEFAULT_TRANSFORM_GROUP" ...

    "CURRENT_PATH" ...

    "CURRENT_ROLE" ...

    "CURRENT_SCHEMA" ...

    "CURRENT_TIME" ...

    "CURRENT_TIMESTAMP" ...

    "CURRENT_USER" ...

    "CURSOR" ...

    "DATE" ...

    "DELETE" ...

    "DENSE_RANK" ...

    "DESCRIBE" ...

    "DROP" ...

    "ELEMENT" ...

    "EVERY" ...

    
