[GitHub] [flink] flinkbot edited a comment on pull request #12975: [FLINK-18691]add HiveCatalog Construction method with HiveConf

2020-07-23 Thread GitBox


flinkbot edited a comment on pull request #12975:
URL: https://github.com/apache/flink/pull/12975#issuecomment-663313007


   
   ## CI report:
   
   * 601b0925c922bf81816f325ba6375ac881847630 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4842)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12917: [WIP] [FLINK-18355][tests] Simplify tests of SlotPoolImpl

2020-07-23 Thread GitBox


flinkbot edited a comment on pull request #12917:
URL: https://github.com/apache/flink/pull/12917#issuecomment-659940433


   
   ## CI report:
   
   * 1c5870c57b64436014900d67be2005395e007a52 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4852)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-18517) kubernetes session test failed with "java.net.SocketException: Broken pipe"

2020-07-23 Thread Yang Wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17164155#comment-17164155
 ] 

Yang Wang commented on FLINK-18517:
---

[~dian.fu] [~trohrmann] I think it is the same as FLINK-17416. We need to 
bump the fabric8 version from 4.5.2 to 4.9.2 in release-1.10 to fix the 
compatibility issue with JDK 8u252.

I will do it.

> kubernetes session test failed with "java.net.SocketException: Broken pipe"
> ---
>
> Key: FLINK-18517
> URL: https://issues.apache.org/jira/browse/FLINK-18517
> Project: Flink
>  Issue Type: Test
>  Components: Deployment / Kubernetes, Tests
>Affects Versions: 1.10.1
>Reporter: Dian Fu
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.10.2
>
>
> It failed on release-1.10 branch:
> https://travis-ci.org/github/apache/flink/jobs/705554778
> Exception message:
> {code}
> 2020-07-07 01:54:17,173 ERROR org.apache.flink.client.cli.CliFrontend  
>  - Error while running the command.
> org.apache.flink.client.program.ProgramInvocationException: The main method 
> caused an error: Operation: [get]  for kind: [Service]  with name: 
> [flink-native-k8s-session-1-rest]  in namespace: [default]  failed.
>   at 
> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:335)
>   at 
> org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:205)
>   at 
> org.apache.flink.client.ClientUtils.executeProgram(ClientUtils.java:138)
>   at 
> org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:670)
>   at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:213)
>   at 
> org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:901)
>   at 
> org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:974)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1836)
>   at 
> org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
>   at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:974)
> Caused by: io.fabric8.kubernetes.client.KubernetesClientException: Operation: 
> [get]  for kind: [Service]  with name: [flink-native-k8s-session-1-rest]  in 
> namespace: [default]  failed.
>   at 
> io.fabric8.kubernetes.client.KubernetesClientException.launderThrowable(KubernetesClientException.java:64)
>   at 
> io.fabric8.kubernetes.client.KubernetesClientException.launderThrowable(KubernetesClientException.java:72)
>   at 
> io.fabric8.kubernetes.client.dsl.base.BaseOperation.getMandatory(BaseOperation.java:231)
>   at 
> io.fabric8.kubernetes.client.dsl.base.BaseOperation.get(BaseOperation.java:164)
>   at 
> org.apache.flink.kubernetes.kubeclient.Fabric8FlinkKubeClient.getService(Fabric8FlinkKubeClient.java:299)
>   at 
> org.apache.flink.kubernetes.kubeclient.Fabric8FlinkKubeClient.getRestService(Fabric8FlinkKubeClient.java:240)
>   at 
> org.apache.flink.kubernetes.kubeclient.Fabric8FlinkKubeClient.getRestEndpoint(Fabric8FlinkKubeClient.java:205)
>   at 
> org.apache.flink.kubernetes.KubernetesClusterDescriptor.lambda$createClusterClientProvider$0(KubernetesClusterDescriptor.java:88)
>   at 
> org.apache.flink.kubernetes.KubernetesClusterDescriptor.retrieve(KubernetesClusterDescriptor.java:118)
>   at 
> org.apache.flink.kubernetes.KubernetesClusterDescriptor.retrieve(KubernetesClusterDescriptor.java:59)
>   at 
> org.apache.flink.client.deployment.executors.AbstractSessionClusterExecutor.execute(AbstractSessionClusterExecutor.java:63)
>   at 
> org.apache.flink.api.java.ExecutionEnvironment.executeAsync(ExecutionEnvironment.java:962)
>   at 
> org.apache.flink.client.program.ContextEnvironment.executeAsync(ContextEnvironment.java:108)
>   at 
> org.apache.flink.client.program.ContextEnvironment.execute(ContextEnvironment.java:58)
>   at 
> org.apache.flink.examples.java.wordcount.WordCount.main(WordCount.java:93)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:321)
>   ... 11 more
> Caused by: java.net.SocketException: Broken pipe (Write failed)
>   at 

[GitHub] [flink] flinkbot edited a comment on pull request #12919: [FLINK-16048][avro] Support read/write confluent schema registry avro…

2020-07-23 Thread GitBox


flinkbot edited a comment on pull request #12919:
URL: https://github.com/apache/flink/pull/12919#issuecomment-659928275


   
   ## CI report:
   
   * d1e4ba7690be134e21d193fbc1cb01aa51aaeb9b Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4703)
 
   * f0da3cee91d22ec20cbba1b6c5be45da1440cf05 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4713)
 
   * f006afeec4c8ee25dfe12b944e2cf4260239ca1e UNKNOWN
   * 9104e12b0394cd6d578d2380ca4554b75e6e00f9 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4730)
 
   * 9b557c718fe731e8d5c58e7c5d9c3452a245ee5a Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4769)
 
   * 71218ee49095663a641e56889831536a2a2e69ea Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4818)
 
   * 9d8870894b4d9d434c45b58339985aed3b76a8be Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4851)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12917: [WIP] [FLINK-18355][tests] Simplify tests of SlotPoolImpl

2020-07-23 Thread GitBox


flinkbot edited a comment on pull request #12917:
URL: https://github.com/apache/flink/pull/12917#issuecomment-659940433


   
   ## CI report:
   
   * d5bb3bc311fdf4dbd815d718ba8ff51021110b94 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4849)
 
   * 1c5870c57b64436014900d67be2005395e007a52 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12919: [FLINK-16048][avro] Support read/write confluent schema registry avro…

2020-07-23 Thread GitBox


flinkbot edited a comment on pull request #12919:
URL: https://github.com/apache/flink/pull/12919#issuecomment-659928275


   
   ## CI report:
   
   * d1e4ba7690be134e21d193fbc1cb01aa51aaeb9b Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4703)
 
   * f0da3cee91d22ec20cbba1b6c5be45da1440cf05 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4713)
 
   * f006afeec4c8ee25dfe12b944e2cf4260239ca1e UNKNOWN
   * 9104e12b0394cd6d578d2380ca4554b75e6e00f9 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4730)
 
   * 9b557c718fe731e8d5c58e7c5d9c3452a245ee5a Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4769)
 
   * 71218ee49095663a641e56889831536a2a2e69ea Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4818)
 
   * 6839c54eedcdca926b8304782fabcb0dc529c5a6 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4847)
 
   * 9d8870894b4d9d434c45b58339985aed3b76a8be Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4851)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-15649) Support mounting volumes

2020-07-23 Thread Yang Wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17164145#comment-17164145
 ] 

Yang Wang commented on FLINK-15649:
---

[~azagrebin] [~felixzheng] I agree that we could start with the pod template 
first, so that users can easily benefit from the advanced K8s features. Adding 
some examples (e.g. volume mount, init container, sidecar container, etc.) to 
the documentation is a good suggestion. If users still find it inconvenient to 
use, we could go back and rethink adding these features as Flink config 
options.
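
For illustration only, here is a minimal sketch of the kind of pod spec such a 
template would declare (an emptyDir volume mounted into the main container), 
written with the fabric8 builder API that the native K8s integration already 
uses. The container, image and volume names below are placeholders, not Flink 
constants.

{code:java}
import io.fabric8.kubernetes.api.model.Pod;
import io.fabric8.kubernetes.api.model.PodBuilder;

// Sketch only: the shape of a pod template that mounts an emptyDir volume into the
// main container. All names below are placeholders, not Flink constants.
public class PodTemplateSketch {

    public static Pod volumeMountExample() {
        return new PodBuilder()
            .withNewMetadata()
                .withName("taskmanager-pod-template")
            .endMetadata()
            .withNewSpec()
                .addNewContainer()
                    .withName("flink-main-container")
                    .withImage("flink:1.11")
                    .addNewVolumeMount()
                        .withName("scratch")
                        .withMountPath("/opt/flink/scratch")
                    .endVolumeMount()
                .endContainer()
                .addNewVolume()
                    .withName("scratch")
                    .withNewEmptyDir()
                    .endEmptyDir()
                .endVolume()
            .endSpec()
            .build();
    }
}
{code}

Whether this lives in a YAML template file or in builder code is secondary; the 
point is that the template, not Flink config options, carries the volume, init 
container and sidecar details.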

> Support mounting volumes 
> -
>
> Key: FLINK-15649
> URL: https://issues.apache.org/jira/browse/FLINK-15649
> Project: Flink
>  Issue Type: Sub-task
>  Components: Deployment / Kubernetes
>Reporter: Canbin Zheng
>Priority: Major
>  Labels: pull-request-available
>
> Add support for mounting K8S volumes, including emptydir, hostpath, pv etc.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #12919: [FLINK-16048][avro] Support read/write confluent schema registry avro…

2020-07-23 Thread GitBox


flinkbot edited a comment on pull request #12919:
URL: https://github.com/apache/flink/pull/12919#issuecomment-659928275


   
   ## CI report:
   
   * d1e4ba7690be134e21d193fbc1cb01aa51aaeb9b Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4703)
 
   * f0da3cee91d22ec20cbba1b6c5be45da1440cf05 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4713)
 
   * f006afeec4c8ee25dfe12b944e2cf4260239ca1e UNKNOWN
   * 9104e12b0394cd6d578d2380ca4554b75e6e00f9 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4730)
 
   * 9b557c718fe731e8d5c58e7c5d9c3452a245ee5a Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4769)
 
   * 71218ee49095663a641e56889831536a2a2e69ea Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4818)
 
   * 6839c54eedcdca926b8304782fabcb0dc529c5a6 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4847)
 
   * 9d8870894b4d9d434c45b58339985aed3b76a8be UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Comment Edited] (FLINK-16827) StreamExecTemporalSort should require a distribution trait in StreamExecTemporalSortRule

2020-07-23 Thread Benchao Li (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17156784#comment-17156784
 ] 

Benchao Li edited comment on FLINK-16827 at 7/24/20, 4:42 AM:
--

Fixed via

66353f27c4c6481443d1f04a8f23e7f98dd7beda (1.12.0)

076a474cee465d3fc3267e2d2367bfdb59fce1d4 (1.11.2)


was (Author: libenchao):
Fixed via 66353f27c4c6481443d1f04a8f23e7f98dd7beda (1.12.0)

> StreamExecTemporalSort should require a distribution trait in 
> StreamExecTemporalSortRule
> 
>
> Key: FLINK-16827
> URL: https://issues.apache.org/jira/browse/FLINK-16827
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner, Table SQL / Runtime
>Affects Versions: 1.9.1
> Environment: flink on yarn
> !image-2020-03-27-21-23-13-648.png!
>Reporter: wuchangjun
>Assignee: Benchao Li
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.12.0
>
> Attachments: image-2020-03-27-21-22-21-122.png, 
> image-2020-03-27-21-22-44-191.png, image-2020-03-27-21-23-13-648.png
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Flink reads Kafka data and sorts it by a time field. With parallelism greater 
> than one, it throws the following null pointer exception; with a parallelism of 
> one, processing is normal.
> !image-2020-03-27-21-22-21-122.png!
>  
> !image-2020-03-27-21-22-44-191.png!
>  
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-16827) StreamExecTemporalSort should require a distribution trait in StreamExecTemporalSortRule

2020-07-23 Thread Benchao Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16827?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benchao Li updated FLINK-16827:
---
Fix Version/s: 1.11.2

> StreamExecTemporalSort should require a distribution trait in 
> StreamExecTemporalSortRule
> 
>
> Key: FLINK-16827
> URL: https://issues.apache.org/jira/browse/FLINK-16827
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner, Table SQL / Runtime
>Affects Versions: 1.9.1
> Environment: flink on yarn
> !image-2020-03-27-21-23-13-648.png!
>Reporter: wuchangjun
>Assignee: Benchao Li
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.12.0, 1.11.2
>
> Attachments: image-2020-03-27-21-22-21-122.png, 
> image-2020-03-27-21-22-44-191.png, image-2020-03-27-21-23-13-648.png
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Flink reads Kafka data and sorts it by a time field. With parallelism greater 
> than one, it throws the following null pointer exception; with a parallelism of 
> one, processing is normal.
> !image-2020-03-27-21-22-21-122.png!
>  
> !image-2020-03-27-21-22-44-191.png!
>  
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] libenchao closed pull request #12969: [BP-1.11][FLINK-16827][table-planner-blink] StreamExecTemporalSort should requ…

2020-07-23 Thread GitBox


libenchao closed pull request #12969:
URL: https://github.com/apache/flink/pull/12969


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] libenchao commented on pull request #12969: [BP-1.11][FLINK-16827][table-planner-blink] StreamExecTemporalSort should requ…

2020-07-23 Thread GitBox


libenchao commented on pull request #12969:
URL: https://github.com/apache/flink/pull/12969#issuecomment-663343964


   Merged via 076a474cee465d3fc3267e2d2367bfdb59fce1d4



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-18674) Support to bridge Transformation (DataStream) with FLIP-95 interface?

2020-07-23 Thread godfrey he (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18674?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17164136#comment-17164136
 ] 

godfrey he commented on FLINK-18674:


I think we can provide another API (an advanced API) which could support 
Transformation (or DataStream), and let the connector handle state 
compatibility, parallelism configuration and message-ordering guarantees via 
that advanced API.
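
As a purely hypothetical sketch of what such an advanced API could look like 
(the interface name and method below are made up for illustration and do not 
exist in Flink), a FLIP-95 scan runtime provider could expose the underlying 
DataStream directly:

{code:java}
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.connector.source.ScanTableSource;
import org.apache.flink.table.data.RowData;

// Hypothetical sketch only -- not an existing Flink interface. The idea: a
// ScanRuntimeProvider variant that hands back a DataStream, so existing
// DataStream-based connectors could be bridged into the FLIP-95 stack.
public interface DataStreamScanProvider extends ScanTableSource.ScanRuntimeProvider {

    /** Builds the source as a DataStream of internal rows on the given environment. */
    DataStream<RowData> produceDataStream(StreamExecutionEnvironment execEnv);
}
{code}

The planner would still have to wire the returned stream into the generated 
pipeline, which is exactly where the state-compatibility, parallelism and 
ordering concerns raised in this thread come in.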

> Support to bridge Transformation (DataStream) with FLIP-95 interface?
> -
>
> Key: FLINK-18674
> URL: https://issues.apache.org/jira/browse/FLINK-18674
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Reporter: Jark Wu
>Priority: Major
>
> A user complained in the user ML [1] that the old connector logic is hard to 
> migrate to the FLIP-95 interfaces, because they heavily used DataStream in 
> the TableSource/TableSink and it is not possible to replace it with the new 
> interfaces right now. 
> This issue can be used to collect the user requirements around 
> DataStream/Transformation + FLIP-95. We can also evaluate/discuss whether and 
> how to support it. 
> [1]: http://apache-flink.147419.n8.nabble.com/1-11Flink-SQL-API-tp5261.html



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #12917: [WIP] [FLINK-18355][tests] Simplify tests of SlotPoolImpl

2020-07-23 Thread GitBox


flinkbot edited a comment on pull request #12917:
URL: https://github.com/apache/flink/pull/12917#issuecomment-659940433


   
   ## CI report:
   
   * d5bb3bc311fdf4dbd815d718ba8ff51021110b94 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4849)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Comment Edited] (FLINK-18698) org.apache.flink.sql.parser.utils.ParserResource compile error

2020-07-23 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17164128#comment-17164128
 ] 

Jark Wu edited comment on FLINK-18698 at 7/24/20, 4:14 AM:
---

You should build the source code first with {{mvn clean install -DskipTests}}; 
{{FlinkSqlParserImpl}} is generated code. 


was (Author: jark):
You should build the source first {{mvn clean install -DskipTests}}, the 
{{FlinkSqlParserImpl}} is code generated. 

> org.apache.flink.sql.parser.utils.ParserResource compile error
> --
>
> Key: FLINK-18698
> URL: https://issues.apache.org/jira/browse/FLINK-18698
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.12.0
>Reporter: 毛宗良
>Priority: Major
> Attachments: image-2020-07-24-11-42-09-880.png
>
>
> org.apache.flink.sql.parser.utils.ParserResource in flink-sql-parser imports 
> org.apache.flink.sql.parser.impl.ParseException, which could not be found.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-18698) org.apache.flink.sql.parser.utils.ParserResource compile error

2020-07-23 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17164128#comment-17164128
 ] 

Jark Wu commented on FLINK-18698:
-

You should build the source first {{mvn clean install -DskipTests}}, the 
{{FlinkSqlParserImpl}} is code generated. 

> org.apache.flink.sql.parser.utils.ParserResource compile error
> --
>
> Key: FLINK-18698
> URL: https://issues.apache.org/jira/browse/FLINK-18698
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.12.0
>Reporter: 毛宗良
>Priority: Major
> Attachments: image-2020-07-24-11-42-09-880.png
>
>
> org.apache.flink.sql.parser.utils.ParserResource in flink-sql-parser imports 
> org.apache.flink.sql.parser.impl.ParseException, which could not be found.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #12919: [FLINK-16048][avro] Support read/write confluent schema registry avro…

2020-07-23 Thread GitBox


flinkbot edited a comment on pull request #12919:
URL: https://github.com/apache/flink/pull/12919#issuecomment-659928275


   
   ## CI report:
   
   * d1e4ba7690be134e21d193fbc1cb01aa51aaeb9b Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4703)
 
   * f0da3cee91d22ec20cbba1b6c5be45da1440cf05 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4713)
 
   * f006afeec4c8ee25dfe12b944e2cf4260239ca1e UNKNOWN
   * 9104e12b0394cd6d578d2380ca4554b75e6e00f9 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4730)
 
   * 9b557c718fe731e8d5c58e7c5d9c3452a245ee5a Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4769)
 
   * 71218ee49095663a641e56889831536a2a2e69ea Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4818)
 
   * 6839c54eedcdca926b8304782fabcb0dc529c5a6 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4847)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12917: [WIP] [FLINK-18355][tests] Simplify tests of SlotPoolImpl

2020-07-23 Thread GitBox


flinkbot edited a comment on pull request #12917:
URL: https://github.com/apache/flink/pull/12917#issuecomment-659940433


   
   ## CI report:
   
   * db1412f0ca469ea1b9f8cc3b7dcc78d1b60762cf Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4757)
 
   * d5bb3bc311fdf4dbd815d718ba8ff51021110b94 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-18545) Sql api cannot special flink job name

2020-07-23 Thread godfrey he (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18545?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

godfrey he updated FLINK-18545:
---
Fix Version/s: 1.11.2
   1.12.0

> Sql api cannot special flink job name
> -
>
> Key: FLINK-18545
> URL: https://issues.apache.org/jira/browse/FLINK-18545
> Project: Flink
>  Issue Type: Improvement
>  Components: Client / Job Submission, Table SQL / API
>Affects Versions: 1.11.0
> Environment: execute sql : 
> StreamTableEnvironment.executeSql("insert into user_log_sink select user_id, 
> item_id, category_id, behavior, ts from user_log")
> current job name :  org.apache.flink.table.api.internal.TableEnvironmentImpl
> {code:java}
> public TableResult executeInternal(List<ModifyOperation> operations) {
>     List<Transformation<?>> transformations = translate(operations);
>     List<String> sinkIdentifierNames = extractSinkIdentifierNames(operations);
>     String jobName = "insert-into_" + String.join(",", sinkIdentifierNames);
>     Pipeline pipeline = execEnv.createPipeline(transformations, tableConfig, jobName);
>     try {
>         JobClient jobClient = execEnv.executeAsync(pipeline);
>         TableSchema.Builder builder = TableSchema.builder();
>         Object[] affectedRowCounts = new Long[operations.size()];
>         for (int i = 0; i < operations.size(); ++i) {
>             // use sink identifier name as field name
>             builder.field(sinkIdentifierNames.get(i), DataTypes.BIGINT());
>             affectedRowCounts[i] = -1L;
>         }
>         return TableResultImpl.builder()
>                 .jobClient(jobClient)
>                 .resultKind(ResultKind.SUCCESS_WITH_CONTENT)
>                 .tableSchema(builder.build())
>                 .data(Collections.singletonList(Row.of(affectedRowCounts)))
>                 .build();
>     } catch (Exception e) {
>         throw new TableException("Failed to execute sql", e);
>     }
> }
> {code}
>Reporter: venn wu
>Priority: Critical
> Fix For: 1.12.0, 1.11.2
>
>
> In Flink 1.11.0, {color:#172b4d}StreamTableEnvironment.executeSql(sql){color} 
> will plan and execute the job immediately, and the job name is fixed as 
> "insert-into_sink-table-name". But we have multiple SQL jobs that insert into 
> the same sink table, so this is not very friendly. 
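
A possible workaround sketch (not verified against 1.11.0; whether the generic 
{{pipeline.name}} option takes precedence over the generated "insert-into_..." 
name depends on the Flink version) is to set the pipeline name on the table 
configuration before calling {{executeSql}}:

{code:java}
import org.apache.flink.configuration.PipelineOptions;
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

// Workaround sketch only: set "pipeline.name" before executeSql(). Whether it
// overrides the generated "insert-into_..." job name depends on the Flink version.
public class JobNameWorkaround {

    public static void main(String[] args) {
        TableEnvironment tEnv = TableEnvironment.create(
                EnvironmentSettings.newInstance().inStreamingMode().build());
        tEnv.getConfig().getConfiguration().setString(PipelineOptions.NAME, "user-log-sync");

        // DDL for user_log and user_log_sink omitted.
        tEnv.executeSql(
                "INSERT INTO user_log_sink "
                        + "SELECT user_id, item_id, category_id, behavior, ts FROM user_log");
    }
}
{code}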



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-18674) Support to bridge Transformation (DataStream) with FLIP-95 interface?

2020-07-23 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18674?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17164123#comment-17164123
 ] 

Jark Wu commented on FLINK-18674:
-

cc [~twalthr], what's your opinion on this? 

My concern is that DataStream is a black box to Flink SQL, so it will be 
difficult to support state compatibility, parallelism configuration and message 
ordering guarantees in the future. 

> Support to bridge Transformation (DataStream) with FLIP-95 interface?
> -
>
> Key: FLINK-18674
> URL: https://issues.apache.org/jira/browse/FLINK-18674
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Reporter: Jark Wu
>Priority: Major
>
> A user complained in the user ML [1] that the old connector logic is hard to 
> migrate to the FLIP-95 interfaces, because they heavily used DataStream in 
> the TableSource/TableSink and it is not possible to replace it with the new 
> interfaces right now. 
> This issue can be used to collect the user requirements around 
> DataStream/Transformation + FLIP-95. We can also evaluate/discuss whether and 
> how to support it. 
> [1]: http://apache-flink.147419.n8.nabble.com/1-11Flink-SQL-API-tp5261.html



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #12972: [FLINK-18552][tests] Update migration tests in master to cover migration for 1.11

2020-07-23 Thread GitBox


flinkbot edited a comment on pull request #12972:
URL: https://github.com/apache/flink/pull/12972#issuecomment-663126426


   
   ## CI report:
   
   * f87aff8c9cb475648ab35540ca22f65d0c077800 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4831)
 
   * 784fab54a9750ee607b9eb4d38123736f1c408ff Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4848)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12919: [FLINK-16048][avro] Support read/write confluent schema registry avro…

2020-07-23 Thread GitBox


flinkbot edited a comment on pull request #12919:
URL: https://github.com/apache/flink/pull/12919#issuecomment-659928275


   
   ## CI report:
   
   * d1e4ba7690be134e21d193fbc1cb01aa51aaeb9b Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4703)
 
   * f0da3cee91d22ec20cbba1b6c5be45da1440cf05 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4713)
 
   * f006afeec4c8ee25dfe12b944e2cf4260239ca1e UNKNOWN
   * 9104e12b0394cd6d578d2380ca4554b75e6e00f9 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4730)
 
   * 22ed53e6e047b379e0ee568298600afd9283b2b8 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4762)
 
   * 9b557c718fe731e8d5c58e7c5d9c3452a245ee5a Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4769)
 
   * 71218ee49095663a641e56889831536a2a2e69ea Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4818)
 
   * 6839c54eedcdca926b8304782fabcb0dc529c5a6 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4847)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-18698) org.apache.flink.sql.parser.utils.ParserResource compile error

2020-07-23 Thread Jira


[ 
https://issues.apache.org/jira/browse/FLINK-18698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17164119#comment-17164119
 ] 

毛宗良 commented on FLINK-18698:
-

org.apache.flink.sql.parser.TableApiIdentifierParsingTest has the same error, 
and org.apache.flink.sql.parser.impl.FlinkSqlParserImpl could not be found. 
Maybe some code was not committed.

> org.apache.flink.sql.parser.utils.ParserResource compile error
> --
>
> Key: FLINK-18698
> URL: https://issues.apache.org/jira/browse/FLINK-18698
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.12.0
>Reporter: 毛宗良
>Priority: Major
> Attachments: image-2020-07-24-11-42-09-880.png
>
>
> org.apache.flink.sql.parser.utils.ParserResource in flink-sql-parser imports 
> org.apache.flink.sql.parser.impl.ParseException, which could not be found.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-7289) Memory allocation of RocksDB can be problematic in container environments

2020-07-23 Thread Harsh Singh (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-7289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17164117#comment-17164117
 ] 

Harsh Singh commented on FLINK-7289:


Hi [~liyu], is there also a way to enable Direct IO 
([https://github.com/facebook/rocksdb/wiki/Direct-IO]) in the Flink RocksDB 
state backend? I am on Flink 1.9 and couldn't find a way to enable it.
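
For what it's worth, here is a rough sketch of how this could be wired up on 1.9 
(assuming the 1.9-era {{OptionsFactory}} interface and the RocksDB 
{{setUseDirectReads}} / {{setUseDirectIoForFlushAndCompaction}} setters; please 
verify against your exact Flink and RocksDB versions):

{code:java}
import org.apache.flink.contrib.streaming.state.OptionsFactory;
import org.rocksdb.ColumnFamilyOptions;
import org.rocksdb.DBOptions;

// Sketch only (Flink 1.9-era API; later versions use RocksDBOptionsFactory instead):
// enable RocksDB direct I/O through custom DBOptions.
public class DirectIoOptionsFactory implements OptionsFactory {

    @Override
    public DBOptions createDBOptions(DBOptions currentOptions) {
        return currentOptions
                .setUseDirectReads(true)                     // direct I/O for reads
                .setUseDirectIoForFlushAndCompaction(true);  // direct I/O for flush/compaction
    }

    @Override
    public ColumnFamilyOptions createColumnOptions(ColumnFamilyOptions currentOptions) {
        return currentOptions;
    }

    // Usage sketch:
    //   RocksDBStateBackend backend = new RocksDBStateBackend("hdfs:///flink/checkpoints");
    //   backend.setOptions(new DirectIoOptionsFactory());
}
{code}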

> Memory allocation of RocksDB can be problematic in container environments
> -
>
> Key: FLINK-7289
> URL: https://issues.apache.org/jira/browse/FLINK-7289
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / State Backends
>Affects Versions: 1.2.0, 1.3.0, 1.4.0, 1.7.2, 1.8.2, 1.9.0
>Reporter: Stefan Richter
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.10.0
>
> Attachments: completeRocksdbConfig.txt
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Flink's RocksDB based state backend allocates native memory. The amount of 
> allocated memory by RocksDB is not under the control of Flink or the JVM and 
> can (theoretically) grow without limits.
> In container environments, this can be problematic because the process can 
> exceed the memory budget of the container, and the process will get killed. 
> Currently, there is no other option than trusting RocksDB to be well behaved 
> and to follow its memory configurations. However, limiting RocksDB's memory 
> usage is not as easy as setting a single limit parameter. The memory limit is 
> determined by an interplay of several configuration parameters, which is 
> almost impossible to get right for users. Even worse, multiple RocksDB 
> instances can run inside the same process and make reasoning about the 
> configuration also dependent on the Flink job.
> Some information about the memory management in RocksDB can be found here:
> https://github.com/facebook/rocksdb/wiki/Memory-usage-in-RocksDB
> https://github.com/facebook/rocksdb/wiki/RocksDB-Tuning-Guide
> We should try to figure out ways to help users in one or more of the 
> following ways:
> - Some way to autotune or calculate the RocksDB configuration.
> - Conservative default values.
> - Additional documentation.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-18698) org.apache.flink.sql.parser.utils.ParserResource compile error

2020-07-23 Thread Jira
毛宗良 created FLINK-18698:
---

 Summary: org.apache.flink.sql.parser.utils.ParserResource compile 
error
 Key: FLINK-18698
 URL: https://issues.apache.org/jira/browse/FLINK-18698
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / API
Affects Versions: 1.12.0
Reporter: 毛宗良
 Attachments: image-2020-07-24-11-42-09-880.png

org.apache.flink.sql.parser.utils.ParserResource in flink-sql-parser imports 
org.apache.flink.sql.parser.impl.ParseException, which could not be found.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (FLINK-18666) Update japicmp configuration for 1.11.1

2020-07-23 Thread Dian Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18666?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu closed FLINK-18666.
---
Resolution: Fixed

1.11: 4efb3b716350d431fd943070ea8d87832d347a2f

> Update japicmp configuration for 1.11.1
> ---
>
> Key: FLINK-18666
> URL: https://issues.apache.org/jira/browse/FLINK-18666
> Project: Flink
>  Issue Type: Task
>  Components: Build System
>Reporter: Dian Fu
>Assignee: Dian Fu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.2
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] dianfu edited a comment on pull request #12951: [FLINK-18666][build] Update japicmp configuration for 1.11.1

2020-07-23 Thread GitBox


dianfu edited a comment on pull request #12951:
URL: https://github.com/apache/flink/pull/12951#issuecomment-663332472


   closed via 
https://github.com/apache/flink/commit/4efb3b716350d431fd943070ea8d87832d347a2f



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] dianfu closed pull request #12951: [FLINK-18666][build] Update japicmp configuration for 1.11.1

2020-07-23 Thread GitBox


dianfu closed pull request #12951:
URL: https://github.com/apache/flink/pull/12951


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] dianfu commented on pull request #12951: [FLINK-18666][build] Update japicmp configuration for 1.11.1

2020-07-23 Thread GitBox


dianfu commented on pull request #12951:
URL: https://github.com/apache/flink/pull/12951#issuecomment-663332472


   closes via 
https://github.com/apache/flink/commit/4efb3b716350d431fd943070ea8d87832d347a2f



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12972: [FLINK-18552][tests] Update migration tests in master to cover migration for 1.11

2020-07-23 Thread GitBox


flinkbot edited a comment on pull request #12972:
URL: https://github.com/apache/flink/pull/12972#issuecomment-663126426


   
   ## CI report:
   
   * f87aff8c9cb475648ab35540ca22f65d0c077800 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4831)
 
   * 784fab54a9750ee607b9eb4d38123736f1c408ff UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12955: [FLINK-18632][table-planner-blink] Assign row kind from input to outp…

2020-07-23 Thread GitBox


flinkbot edited a comment on pull request #12955:
URL: https://github.com/apache/flink/pull/12955#issuecomment-662294282


   
   ## CI report:
   
   * 72d503c5afb9de060729cb9eab7cc9da2c4f2da1 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4753)
 
   * bede1bb56ff21b1e38fc1e3cc86db03a5d3b9423 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4845)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-17159) ES6 ElasticsearchSinkITCase unstable

2020-07-23 Thread Dian Fu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17164111#comment-17164111
 ] 

Dian Fu commented on FLINK-17159:
-

[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=4821=logs=d44f43ce-542c-597d-bf94-b0718c71e5e8=03dca39c-73e8-5aaf-601d-328ae5c35f20]

{code}
2020-07-23T19:17:21.9040167Z Caused by: 
org.elasticsearch.ElasticsearchStatusException: method [HEAD], host 
[http://127.0.0.1:9200], URI [/], status line [HTTP/1.1 503 Service Unavailable]
2020-07-23T19:17:21.9040839Zat 
org.elasticsearch.client.RestHighLevelClient.parseResponseException(RestHighLevelClient.java:625)
2020-07-23T19:17:21.9041512Zat 
org.elasticsearch.client.RestHighLevelClient.performRequest(RestHighLevelClient.java:535)
2020-07-23T19:17:21.9042010Zat 
org.elasticsearch.client.RestHighLevelClient.ping(RestHighLevelClient.java:275)
2020-07-23T19:17:21.9042651Zat 
org.apache.flink.streaming.connectors.elasticsearch6.Elasticsearch6ApiCallBridge.verifyClientConnection(Elasticsearch6ApiCallBridge.java:137)
2020-07-23T19:17:21.9043348Zat 
org.apache.flink.streaming.connectors.elasticsearch6.Elasticsearch6ApiCallBridge.verifyClientConnection(Elasticsearch6ApiCallBridge.java:47)
2020-07-23T19:17:21.9043970Zat 
org.apache.flink.streaming.connectors.elasticsearch.ElasticsearchSinkBase.open(ElasticsearchSinkBase.java:296)
{code}

> ES6 ElasticsearchSinkITCase unstable
> 
>
> Key: FLINK-17159
> URL: https://issues.apache.org/jira/browse/FLINK-17159
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / ElasticSearch, Tests
>Affects Versions: 1.11.0, 1.12.0
>Reporter: Chesnay Schepler
>Priority: Critical
>  Labels: test-stability
>
> [https://dev.azure.com/rmetzger/Flink/_build/results?buildId=7482=logs=64110e28-73be-50d7-9369-8750330e0bf1=aa84fb9a-59ae-5696-70f7-011bc086e59b]
> {code:java}
> 2020-04-15T02:37:04.4289477Z [ERROR] 
> testElasticsearchSinkWithSmile(org.apache.flink.streaming.connectors.elasticsearch6.ElasticsearchSinkITCase)
>   Time elapsed: 0.145 s  <<< ERROR!
> 2020-04-15T02:37:04.4290310Z 
> org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
> 2020-04-15T02:37:04.4290790Z  at 
> org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:147)
> 2020-04-15T02:37:04.4291404Z  at 
> org.apache.flink.runtime.minicluster.MiniCluster.executeJobBlocking(MiniCluster.java:659)
> 2020-04-15T02:37:04.4291956Z  at 
> org.apache.flink.streaming.util.TestStreamEnvironment.execute(TestStreamEnvironment.java:77)
> 2020-04-15T02:37:04.4292548Z  at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1643)
> 2020-04-15T02:37:04.4293254Z  at 
> org.apache.flink.streaming.connectors.elasticsearch.ElasticsearchSinkTestBase.runElasticSearchSinkTest(ElasticsearchSinkTestBase.java:128)
> 2020-04-15T02:37:04.4293990Z  at 
> org.apache.flink.streaming.connectors.elasticsearch.ElasticsearchSinkTestBase.runElasticsearchSinkSmileTest(ElasticsearchSinkTestBase.java:106)
> 2020-04-15T02:37:04.4295096Z  at 
> org.apache.flink.streaming.connectors.elasticsearch6.ElasticsearchSinkITCase.testElasticsearchSinkWithSmile(ElasticsearchSinkITCase.java:45)
> 2020-04-15T02:37:04.4295923Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2020-04-15T02:37:04.4296489Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2020-04-15T02:37:04.4297076Z  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2020-04-15T02:37:04.4297513Z  at 
> java.lang.reflect.Method.invoke(Method.java:498)
> 2020-04-15T02:37:04.4297951Z  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> 2020-04-15T02:37:04.4298688Z  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 2020-04-15T02:37:04.4299374Z  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> 2020-04-15T02:37:04.4300069Z  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> 2020-04-15T02:37:04.4300960Z  at 
> org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
> 2020-04-15T02:37:04.4301705Z  at 
> org.junit.rules.RunRules.evaluate(RunRules.java:20)
> 2020-04-15T02:37:04.4302204Z  at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
> 2020-04-15T02:37:04.4302661Z  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> 2020-04-15T02:37:04.4303234Z  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
> 2020-04-15T02:37:04.4303706Z  at 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> 

[jira] [Commented] (FLINK-18681) The jar package version conflict causes the task to continue to increase and grab resources

2020-07-23 Thread wangtaiyang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17164110#comment-17164110
 ] 

wangtaiyang commented on FLINK-18681:
-

Although the error is caused by the user's jar dependency, this behavior of 
Flink occupies all the resources of the cluster, which is not normal at all. The 
task does not die either; it just stays stuck there forever.

> The jar package version conflict causes the task to continue to increase and 
> grab resources
> ---
>
> Key: FLINK-18681
> URL: https://issues.apache.org/jira/browse/FLINK-18681
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.11.0
>Reporter: wangtaiyang
>Priority: Major
>
> When I submit a Flink task to YARN, the default resource configuration is 
> 1G & 1 core, but in fact this task keeps requesting more resources: 2 cores, 
> 3 cores, and so on, up to 200 cores. Then I looked at the JM log and found the 
> following error:
> {code:java}
> // code placeholder
> java.lang.NoSuchMethodError: 
> org.apache.commons.cli.Option.builder(Ljava/lang/String;)Lorg/apache/commons/cli/Option$Builder;java.lang.NoSuchMethodError:
>  
> org.apache.commons.cli.Option.builder(Ljava/lang/String;)Lorg/apache/commons/cli/Option$Builder;
>  at 
> org.apache.flink.runtime.entrypoint.parser.CommandLineOptions.(CommandLineOptions.java:28)
>  ~[flink-dist_2.11-1.11.1.jar:1.11.1] at 
> org.apache.flink.runtime.clusterframework.BootstrapTools.lambda$getDynamicPropertiesAsString$0(BootstrapTools.java:648)
>  ~[flink-dist_2.11-1.11.1.jar:1.11.1] at 
> java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:267) 
> ~[?:1.8.0_191]
> ...
> java.lang.NoClassDefFoundError: Could not initialize class 
> org.apache.flink.runtime.entrypoint.parser.CommandLineOptionsjava.lang.NoClassDefFoundError:
>  Could not initialize class 
> org.apache.flink.runtime.entrypoint.parser.CommandLineOptions at 
> org.apache.flink.runtime.clusterframework.BootstrapTools.lambda$getDynamicPropertiesAsString$0(BootstrapTools.java:648)
>  ~[flink-dist_2.11-1.11.1.jar:1.11.1] at 
> java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:267) 
> ~[?:1.8.0_191] at 
> java.util.HashMap$KeySpliterator.forEachRemaining(HashMap.java:1553) 
> ~[?:1.8.0_191] at 
> java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481) 
> ~[?:1.8.0_191]{code}
> Finally, it is confirmed that this is caused by a commons-cli version 
> conflict, but the failing task has not stopped and will continue to grab more 
> and more resources. Is this a bug?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Issue Comment Deleted] (FLINK-18681) The jar package version conflict causes the task to continue to increase and grab resources

2020-07-23 Thread wangtaiyang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

wangtaiyang updated FLINK-18681:

Comment: was deleted

(was: Although the error is caused by the user's jar dependency, this behavior of Flink takes up all the resources of the cluster. Is that normal?)

> The jar package version conflict causes the task to continue to increase and 
> grab resources
> ---
>
> Key: FLINK-18681
> URL: https://issues.apache.org/jira/browse/FLINK-18681
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.11.0
>Reporter: wangtaiyang
>Priority: Major
>
> When I submit a Flink task to YARN, the default resource configuration is 
> 1G & 1 core, but in fact this task keeps requesting more resources: 2 cores, 
> 3 cores, and so on, up to 200 cores. Then I looked at the JM log and found the 
> following error:
> {code:java}
> // code placeholder
> java.lang.NoSuchMethodError: 
> org.apache.commons.cli.Option.builder(Ljava/lang/String;)Lorg/apache/commons/cli/Option$Builder;java.lang.NoSuchMethodError:
>  
> org.apache.commons.cli.Option.builder(Ljava/lang/String;)Lorg/apache/commons/cli/Option$Builder;
>  at 
> org.apache.flink.runtime.entrypoint.parser.CommandLineOptions.(CommandLineOptions.java:28)
>  ~[flink-dist_2.11-1.11.1.jar:1.11.1] at 
> org.apache.flink.runtime.clusterframework.BootstrapTools.lambda$getDynamicPropertiesAsString$0(BootstrapTools.java:648)
>  ~[flink-dist_2.11-1.11.1.jar:1.11.1] at 
> java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:267) 
> ~[?:1.8.0_191]
> ...
> java.lang.NoClassDefFoundError: Could not initialize class 
> org.apache.flink.runtime.entrypoint.parser.CommandLineOptionsjava.lang.NoClassDefFoundError:
>  Could not initialize class 
> org.apache.flink.runtime.entrypoint.parser.CommandLineOptions at 
> org.apache.flink.runtime.clusterframework.BootstrapTools.lambda$getDynamicPropertiesAsString$0(BootstrapTools.java:648)
>  ~[flink-dist_2.11-1.11.1.jar:1.11.1] at 
> java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:267) 
> ~[?:1.8.0_191] at 
> java.util.HashMap$KeySpliterator.forEachRemaining(HashMap.java:1553) 
> ~[?:1.8.0_191] at 
> java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481) 
> ~[?:1.8.0_191]{code}
> Finally, it is confirmed that this is caused by a commons-cli version 
> conflict, but the failing task has not stopped and will continue to grab more 
> and more resources. Is this a bug?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-17274) Maven: Premature end of Content-Length delimited message body

2020-07-23 Thread Dian Fu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17164109#comment-17164109
 ] 

Dian Fu commented on FLINK-17274:
-

[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=4826=logs=52b61abe-a3cc-5bde-cc35-1bbe89bb7df5=54421a62-0c80-5aad-3319-094ff69180bb]

> Maven: Premature end of Content-Length delimited message body
> -
>
> Key: FLINK-17274
> URL: https://issues.apache.org/jira/browse/FLINK-17274
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / Azure Pipelines
>Reporter: Robert Metzger
>Assignee: Robert Metzger
>Priority: Critical
> Fix For: 1.12.0
>
>
> CI: 
> https://dev.azure.com/rmetzger/Flink/_build/results?buildId=7786=logs=52b61abe-a3cc-5bde-cc35-1bbe89bb7df5=54421a62-0c80-5aad-3319-094ff69180bb
> {code}
> [ERROR] Failed to execute goal on project 
> flink-connector-elasticsearch7_2.11: Could not resolve dependencies for 
> project 
> org.apache.flink:flink-connector-elasticsearch7_2.11:jar:1.11-SNAPSHOT: Could 
> not transfer artifact org.apache.lucene:lucene-sandbox:jar:8.3.0 from/to 
> alicloud-mvn-mirror 
> (http://mavenmirror.alicloud.dak8s.net:/repository/maven-central/): GET 
> request of: org/apache/lucene/lucene-sandbox/8.3.0/lucene-sandbox-8.3.0.jar 
> from alicloud-mvn-mirror failed: Premature end of Content-Length delimited 
> message body (expected: 289920; received: 239832 -> [Help 1]
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-18681) The jar package version conflict causes the task to continue to increase and grab resources

2020-07-23 Thread wangtaiyang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17164107#comment-17164107
 ] 

wangtaiyang commented on FLINK-18681:
-

Although the error is caused by the user's jar dependency, this behavior of Flink takes up all the resources of the cluster. Is that normal?

> The jar package version conflict causes the task to continue to increase and 
> grab resources
> ---
>
> Key: FLINK-18681
> URL: https://issues.apache.org/jira/browse/FLINK-18681
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.11.0
>Reporter: wangtaiyang
>Priority: Major
>
> When I submit a Flink task to YARN, the default resource configuration is 
> 1G & 1 core, but in fact this task keeps requesting more resources: 2 cores, 
> 3 cores, and so on, up to 200 cores. Then I looked at the JM log and found the 
> following error:
> {code:java}
> // code placeholder
> java.lang.NoSuchMethodError: 
> org.apache.commons.cli.Option.builder(Ljava/lang/String;)Lorg/apache/commons/cli/Option$Builder;java.lang.NoSuchMethodError:
>  
> org.apache.commons.cli.Option.builder(Ljava/lang/String;)Lorg/apache/commons/cli/Option$Builder;
>  at 
> org.apache.flink.runtime.entrypoint.parser.CommandLineOptions.(CommandLineOptions.java:28)
>  ~[flink-dist_2.11-1.11.1.jar:1.11.1] at 
> org.apache.flink.runtime.clusterframework.BootstrapTools.lambda$getDynamicPropertiesAsString$0(BootstrapTools.java:648)
>  ~[flink-dist_2.11-1.11.1.jar:1.11.1] at 
> java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:267) 
> ~[?:1.8.0_191]
> ...
> java.lang.NoClassDefFoundError: Could not initialize class 
> org.apache.flink.runtime.entrypoint.parser.CommandLineOptionsjava.lang.NoClassDefFoundError:
>  Could not initialize class 
> org.apache.flink.runtime.entrypoint.parser.CommandLineOptions at 
> org.apache.flink.runtime.clusterframework.BootstrapTools.lambda$getDynamicPropertiesAsString$0(BootstrapTools.java:648)
>  ~[flink-dist_2.11-1.11.1.jar:1.11.1] at 
> java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:267) 
> ~[?:1.8.0_191] at 
> java.util.HashMap$KeySpliterator.forEachRemaining(HashMap.java:1553) 
> ~[?:1.8.0_191] at 
> java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481) 
> ~[?:1.8.0_191]{code}
> Finally, it is confirmed that this is caused by a commons-cli version 
> conflict, but the failing task has not stopped and will continue to grab more 
> and more resources. Is this a bug?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-17730) HadoopS3RecoverableWriterITCase.testRecoverAfterMultiplePersistsStateWithMultiPart times out

2020-07-23 Thread Dian Fu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17164106#comment-17164106
 ] 

Dian Fu commented on FLINK-17730:
-

master: 
[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=4837=logs=3d12d40f-c62d-5ec4-6acc-0efe94cc3e89=5d6e4255-0ea8-5e2a-f52c-c881b7872361]

> HadoopS3RecoverableWriterITCase.testRecoverAfterMultiplePersistsStateWithMultiPart
>  times out
> 
>
> Key: FLINK-17730
> URL: https://issues.apache.org/jira/browse/FLINK-17730
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / Azure Pipelines, FileSystems, Tests
>Reporter: Robert Metzger
>Assignee: Robert Metzger
>Priority: Major
>  Labels: pull-request-available, test-stability
> Fix For: 1.11.0, 1.12.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=1374=logs=d44f43ce-542c-597d-bf94-b0718c71e5e8=34f486e1-e1e4-5dd2-9c06-bfdd9b9c74a8
> After 5 minutes 
> {code}
> 2020-05-15T06:56:38.1688341Z "main" #1 prio=5 os_prio=0 
> tid=0x7fa10800b800 nid=0x1161 runnable [0x7fa110959000]
> 2020-05-15T06:56:38.1688709Zjava.lang.Thread.State: RUNNABLE
> 2020-05-15T06:56:38.1689028Z  at 
> java.net.SocketInputStream.socketRead0(Native Method)
> 2020-05-15T06:56:38.1689496Z  at 
> java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
> 2020-05-15T06:56:38.1689921Z  at 
> java.net.SocketInputStream.read(SocketInputStream.java:171)
> 2020-05-15T06:56:38.1690316Z  at 
> java.net.SocketInputStream.read(SocketInputStream.java:141)
> 2020-05-15T06:56:38.1690723Z  at 
> sun.security.ssl.InputRecord.readFully(InputRecord.java:465)
> 2020-05-15T06:56:38.1691196Z  at 
> sun.security.ssl.InputRecord.readV3Record(InputRecord.java:593)
> 2020-05-15T06:56:38.1691608Z  at 
> sun.security.ssl.InputRecord.read(InputRecord.java:532)
> 2020-05-15T06:56:38.1692023Z  at 
> sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:975)
> 2020-05-15T06:56:38.1692558Z  - locked <0xb94644f8> (a 
> java.lang.Object)
> 2020-05-15T06:56:38.1692946Z  at 
> sun.security.ssl.SSLSocketImpl.readDataRecord(SSLSocketImpl.java:933)
> 2020-05-15T06:56:38.1693371Z  at 
> sun.security.ssl.AppInputStream.read(AppInputStream.java:105)
> 2020-05-15T06:56:38.1694151Z  - locked <0xb9464d20> (a 
> sun.security.ssl.AppInputStream)
> 2020-05-15T06:56:38.1694908Z  at 
> org.apache.http.impl.io.SessionInputBufferImpl.streamRead(SessionInputBufferImpl.java:137)
> 2020-05-15T06:56:38.1695475Z  at 
> org.apache.http.impl.io.SessionInputBufferImpl.read(SessionInputBufferImpl.java:198)
> 2020-05-15T06:56:38.1696007Z  at 
> org.apache.http.impl.io.ContentLengthInputStream.read(ContentLengthInputStream.java:176)
> 2020-05-15T06:56:38.1696509Z  at 
> org.apache.http.conn.EofSensorInputStream.read(EofSensorInputStream.java:135)
> 2020-05-15T06:56:38.1696993Z  at 
> com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:90)
> 2020-05-15T06:56:38.1697466Z  at 
> com.amazonaws.event.ProgressInputStream.read(ProgressInputStream.java:180)
> 2020-05-15T06:56:38.1698069Z  at 
> com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:90)
> 2020-05-15T06:56:38.1698567Z  at 
> com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:90)
> 2020-05-15T06:56:38.1699041Z  at 
> com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:90)
> 2020-05-15T06:56:38.1699624Z  at 
> com.amazonaws.event.ProgressInputStream.read(ProgressInputStream.java:180)
> 2020-05-15T06:56:38.1700090Z  at 
> com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:90)
> 2020-05-15T06:56:38.1700584Z  at 
> com.amazonaws.util.LengthCheckInputStream.read(LengthCheckInputStream.java:107)
> 2020-05-15T06:56:38.1701282Z  at 
> com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:90)
> 2020-05-15T06:56:38.1701800Z  at 
> com.amazonaws.services.s3.internal.S3AbortableInputStream.read(S3AbortableInputStream.java:125)
> 2020-05-15T06:56:38.1702328Z  at 
> com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:90)
> 2020-05-15T06:56:38.1702804Z  at 
> org.apache.hadoop.fs.s3a.S3AInputStream.lambda$read$3(S3AInputStream.java:445)
> 2020-05-15T06:56:38.1703270Z  at 
> org.apache.hadoop.fs.s3a.S3AInputStream$$Lambda$42/1204178174.execute(Unknown 
> Source)
> 2020-05-15T06:56:38.1703677Z  at 
> org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:109)
> 2020-05-15T06:56:38.1704090Z  at 
> org.apache.hadoop.fs.s3a.Invoker.lambda$retry$3(Invoker.java:260)
> 2020-05-15T06:56:38.1704607Z  at 
> org.apache.hadoop.fs.s3a.Invoker$$Lambda$23/1991724700.execute(Unknown Source)
> 2020-05-15T06:56:38.1705115Z  at 
> 

[jira] [Commented] (FLINK-18681) The jar package version conflict causes the task to continue to increase and grab resources

2020-07-23 Thread wangtaiyang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17164104#comment-17164104
 ] 

wangtaiyang commented on FLINK-18681:
-

The problem has been solved, but do you really think it is normal for Flink to 
keep requesting resources without limit until the cluster is fully occupied? I 
was shocked.

> The jar package version conflict causes the task to continue to increase and 
> grab resources
> ---
>
> Key: FLINK-18681
> URL: https://issues.apache.org/jira/browse/FLINK-18681
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.11.0
>Reporter: wangtaiyang
>Priority: Major
>
> When I submit a Flink job to YARN, the default resource configuration is 
> 1G and 1 core, but in fact this job keeps increasing its resources: 2 cores, 
> 3 cores, and so on, up to 200 cores. Then I looked at the JM log and found the 
> following error:
> {code:java}
> // code placeholder
> java.lang.NoSuchMethodError: 
> org.apache.commons.cli.Option.builder(Ljava/lang/String;)Lorg/apache/commons/cli/Option$Builder;java.lang.NoSuchMethodError:
>  
> org.apache.commons.cli.Option.builder(Ljava/lang/String;)Lorg/apache/commons/cli/Option$Builder;
>  at 
> org.apache.flink.runtime.entrypoint.parser.CommandLineOptions.(CommandLineOptions.java:28)
>  ~[flink-dist_2.11-1.11.1.jar:1.11.1] at 
> org.apache.flink.runtime.clusterframework.BootstrapTools.lambda$getDynamicPropertiesAsString$0(BootstrapTools.java:648)
>  ~[flink-dist_2.11-1.11.1.jar:1.11.1] at 
> java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:267) 
> ~[?:1.8.0_191]
> ...
> java.lang.NoClassDefFoundError: Could not initialize class 
> org.apache.flink.runtime.entrypoint.parser.CommandLineOptionsjava.lang.NoClassDefFoundError:
>  Could not initialize class 
> org.apache.flink.runtime.entrypoint.parser.CommandLineOptions at 
> org.apache.flink.runtime.clusterframework.BootstrapTools.lambda$getDynamicPropertiesAsString$0(BootstrapTools.java:648)
>  ~[flink-dist_2.11-1.11.1.jar:1.11.1] at 
> java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:267) 
> ~[?:1.8.0_191] at 
> java.util.HashMap$KeySpliterator.forEachRemaining(HashMap.java:1553) 
> ~[?:1.8.0_191] at 
> java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481) 
> ~[?:1.8.0_191]{code}
> Finally, it was confirmed that this is caused by a commons-cli version 
> conflict, but the job that reported the error has not stopped and keeps 
> grabbing more and more resources. Is this a bug?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-16048) Support read/write confluent schema registry avro data from Kafka

2020-07-23 Thread Danny Chen (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17164105#comment-17164105
 ] 

Danny Chen commented on FLINK-16048:


Personally I do not have a strong preference either; it seems most of us support 
avro-confluent, so let's use it.

> Support read/write confluent schema registry avro data  from Kafka
> --
>
> Key: FLINK-16048
> URL: https://issues.apache.org/jira/browse/FLINK-16048
> Project: Flink
>  Issue Type: Improvement
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile), Table 
> SQL / Ecosystem
>Affects Versions: 1.11.0
>Reporter: Leonard Xu
>Assignee: Danny Chen
>Priority: Major
>  Labels: pull-request-available, usability
> Fix For: 1.12.0
>
>
> *The background*
> I found that the SQL Kafka connector cannot consume Avro data serialized by 
> `KafkaAvroSerializer`; it can only consume Row data with an Avro schema, because 
> we use `AvroRowDeserializationSchema/AvroRowSerializationSchema` to 
> serialize/deserialize data in `AvroRowFormatFactory`. 
> I think we should support this because `KafkaAvroSerializer` is very common 
> in Kafka, and someone ran into the same question on Stack Overflow [1].
> [[1]https://stackoverflow.com/questions/56452571/caused-by-org-apache-avro-avroruntimeexception-malformed-data-length-is-negat/56478259|https://stackoverflow.com/questions/56452571/caused-by-org-apache-avro-avroruntimeexception-malformed-data-length-is-negat/56478259]
> *The format details*
> _The factory identifier (or format id)_
> There are 2 candidates now ~
> - {{avro-sr}}: the pattern borrowed from KSQL {{JSON_SR}} format [1]
> - {{avro-confluent}}: the pattern borrowed from Clickhouse {{AvroConfluent}} 
> [2]
> Personally I would prefer {{avro-sr}} because it is more concise, and Confluent 
> is a company name, which I think is not that suitable for a format name.
> _The format attributes_
> || Options || required || Remark ||
> | schema-registry.url | true | URL to connect to schema registry service |
> | schema-registry.subject | false | Subject name to write to the Schema 
> Registry service, required for sink |



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (FLINK-16048) Support read/write confluent schema registry avro data from Kafka

2020-07-23 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17162611#comment-17162611
 ] 

Jark Wu edited comment on FLINK-16048 at 7/24/20, 3:15 AM:
---

I personally prefer {{avro-confluent}} over {{avro-sr}}, because the Kafka 
converter class name also points to it: {{io.confluent.connect.avro.AvroConverter}}. 

Besides, Debezium supports writing in Avro format using the Apicurio Registry Avro 
converter or the Confluent Registry Avro converter [1]. So {{debezium-avro-sr}} 
would be confusing in this case.

[1]: https://debezium.io/documentation/reference/1.2/configuration/avro.html



was (Author: jark):
I personally prefer {{avro-confluent}} over {{avro-sr}}, because the Kafka 
converter class name also points to it: {{io.confluent.connect.avro.AvroConverter}}. 

Besides, Debezium supports writing in Avro format using the Apicurio Registry Avro 
converter or the Confluent Registry Avro converter [1]. So {{debezium-avro-sr}} 
would be confusing in this case.


> Support read/write confluent schema registry avro data  from Kafka
> --
>
> Key: FLINK-16048
> URL: https://issues.apache.org/jira/browse/FLINK-16048
> Project: Flink
>  Issue Type: Improvement
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile), Table 
> SQL / Ecosystem
>Affects Versions: 1.11.0
>Reporter: Leonard Xu
>Assignee: Danny Chen
>Priority: Major
>  Labels: pull-request-available, usability
> Fix For: 1.12.0
>
>
> *The background*
> I found that the SQL Kafka connector cannot consume Avro data serialized by 
> `KafkaAvroSerializer`; it can only consume Row data with an Avro schema, because 
> we use `AvroRowDeserializationSchema/AvroRowSerializationSchema` to 
> serialize/deserialize data in `AvroRowFormatFactory`. 
> I think we should support this because `KafkaAvroSerializer` is very common 
> in Kafka, and someone ran into the same question on Stack Overflow [1].
> [[1]https://stackoverflow.com/questions/56452571/caused-by-org-apache-avro-avroruntimeexception-malformed-data-length-is-negat/56478259|https://stackoverflow.com/questions/56452571/caused-by-org-apache-avro-avroruntimeexception-malformed-data-length-is-negat/56478259]
> *The format details*
> _The factory identifier (or format id)_
> There are 2 candidates now ~
> - {{avro-sr}}: the pattern borrowed from KSQL {{JSON_SR}} format [1]
> - {{avro-confluent}}: the pattern borrowed from Clickhouse {{AvroConfluent}} 
> [2]
> Personally I would prefer {{avro-sr}} because it is more concise, and Confluent 
> is a company name, which I think is not that suitable for a format name.
> _The format attributes_
> || Options || required || Remark ||
> | schema-registry.url | true | URL to connect to schema registry service |
> | schema-registry.subject | false | Subject name to write to the Schema 
> Registry service, required for sink |



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] Thesharing commented on a change in pull request #12917: [WIP] [FLINK-18355][tests] Simplify tests of SlotPoolImpl

2020-07-23 Thread GitBox


Thesharing commented on a change in pull request #12917:
URL: https://github.com/apache/flink/pull/12917#discussion_r459833196



##
File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/jobmaster/slotpool/SlotPoolImplTest.java
##
@@ -365,19 +343,26 @@ public void testReleaseResource() throws Exception {
 
assertTrue(slotPool.offerSlot(taskManagerLocation, 
taskManagerGateway, slotOffer));
 
-   LogicalSlot slot1 = future1.get(1, TimeUnit.SECONDS);
+   PhysicalSlot slot1 = future1.get(1, TimeUnit.SECONDS);
assertTrue(future1.isDone());
assertFalse(future2.isDone());
 
final CompletableFuture releaseFuture = new 
CompletableFuture<>();
-   final DummyPayload dummyPayload = new 
DummyPayload(releaseFuture);
 
-   slot1.tryAssignPayload(dummyPayload);
+   SingleLogicalSlot logicalSlot = 
SingleLogicalSlot.allocateFromPhysicalSlot(
+   requestId1,
+   slot1,
+   Locality.UNKNOWN,
+   new DummySlotOwner(),
+   true
+   );
+
+   logicalSlot.tryAssignPayload(new 
DummyPayload(releaseFuture));
 

slotPool.releaseTaskManager(taskManagerLocation.getResourceID(), null);
 
-   releaseFuture.get();
-   assertFalse(slot1.isAlive());
+   releaseFuture.get(1, TimeUnit.SECONDS);

Review comment:
   I'll remove them.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12955: [FLINK-18632][table-planner-blink] Assign row kind from input to outp…

2020-07-23 Thread GitBox


flinkbot edited a comment on pull request #12955:
URL: https://github.com/apache/flink/pull/12955#issuecomment-662294282


   
   ## CI report:
   
   * 72d503c5afb9de060729cb9eab7cc9da2c4f2da1 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4753)
 
   * bede1bb56ff21b1e38fc1e3cc86db03a5d3b9423 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] lzy3261944 commented on pull request #12955: [FLINK-18632][table-planner-blink] Assign row kind from input to outp…

2020-07-23 Thread GitBox


lzy3261944 commented on pull request #12955:
URL: https://github.com/apache/flink/pull/12955#issuecomment-663327750


   I've simplified the test case; the last one was a little ambiguous.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-18697) Adding flink-table-api-java-bridge_2.11 to a Flink job kills the IDE logging

2020-07-23 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18697?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu updated FLINK-18697:

Fix Version/s: 1.11.2
   1.12.0

> Adding flink-table-api-java-bridge_2.11 to a Flink job kills the IDE logging
> 
>
> Key: FLINK-18697
> URL: https://issues.apache.org/jira/browse/FLINK-18697
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.11.0
>Reporter: Robert Metzger
>Priority: Major
> Fix For: 1.12.0, 1.11.2
>
>
> Steps to reproduce:
> - Set up a Flink project using a Maven archetype
> - Add "flink-table-api-java-bridge_2.11" as a dependency
> - Running Flink won't produce any log output
> Probable cause:
> "flink-table-api-java-bridge_2.11" has a dependency to 
> "org.apache.flink:flink-streaming-java_2.11:test-jar:tests:1.11.0", which 
> contains a "log4j2-test.properties" file.
> When I disable Log4j2 debugging (with "-Dlog4j2.debug"), I see the following 
> line:
> {code}
> DEBUG StatusLogger Reconfiguration complete for context[name=3d4eac69] at URI 
> jar:file:/Users/robert/.m2/repository/org/apache/flink/flink-streaming-java_2.11/1.11.0/flink-streaming-java_2.11-1.11.0-tests.jar!/log4j2-test.properties
>  (org.apache.logging.log4j.core.LoggerContext@568bf312) with optional 
> ClassLoader: null
> {code}
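
A minimal, JDK-only sketch (no Flink or Log4j2 API involved) that lists every log4j2-test.properties visible on the classpath, which makes it easy to spot the copy leaking out of a test-jar dependency:

{code:java}
import java.net.URL;
import java.util.Enumeration;

public class FindTestLog4jConfigs {
    public static void main(String[] args) throws Exception {
        // Every jar on the classpath that ships a log4j2-test.properties shows up here;
        // with the bridge dependency added, the flink-streaming-java tests jar is one of them.
        Enumeration<URL> resources = FindTestLog4jConfigs.class.getClassLoader()
                .getResources("log4j2-test.properties");
        while (resources.hasMoreElements()) {
            System.out.println(resources.nextElement());
        }
    }
}
{code}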



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (FLINK-18697) Adding flink-table-api-java-bridge_2.11 to a Flink job kills the IDE logging

2020-07-23 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18697?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu reassigned FLINK-18697:
---

Assignee: Jark Wu

> Adding flink-table-api-java-bridge_2.11 to a Flink job kills the IDE logging
> 
>
> Key: FLINK-18697
> URL: https://issues.apache.org/jira/browse/FLINK-18697
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.11.0
>Reporter: Robert Metzger
>Assignee: Jark Wu
>Priority: Major
> Fix For: 1.12.0, 1.11.2
>
>
> Steps to reproduce:
> - Set up a Flink project using a Maven archetype
> - Add "flink-table-api-java-bridge_2.11" as a dependency
> - Running Flink won't produce any log output
> Probable cause:
> "flink-table-api-java-bridge_2.11" has a dependency to 
> "org.apache.flink:flink-streaming-java_2.11:test-jar:tests:1.11.0", which 
> contains a "log4j2-test.properties" file.
> When I disable Log4j2 debugging (with "-Dlog4j2.debug"), I see the following 
> line:
> {code}
> DEBUG StatusLogger Reconfiguration complete for context[name=3d4eac69] at URI 
> jar:file:/Users/robert/.m2/repository/org/apache/flink/flink-streaming-java_2.11/1.11.0/flink-streaming-java_2.11-1.11.0-tests.jar!/log4j2-test.properties
>  (org.apache.logging.log4j.core.LoggerContext@568bf312) with optional 
> ClassLoader: null
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-18697) Adding flink-table-api-java-bridge_2.11 to a Flink job kills the IDE logging

2020-07-23 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17164101#comment-17164101
 ] 

Jark Wu commented on FLINK-18697:
-

I think so.

> Adding flink-table-api-java-bridge_2.11 to a Flink job kills the IDE logging
> 
>
> Key: FLINK-18697
> URL: https://issues.apache.org/jira/browse/FLINK-18697
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.11.0
>Reporter: Robert Metzger
>Priority: Major
>
> Steps to reproduce:
> - Set up a Flink project using a Maven archetype
> - Add "flink-table-api-java-bridge_2.11" as a dependency
> - Running Flink won't produce any log output
> Probable cause:
> "flink-table-api-java-bridge_2.11" has a dependency to 
> "org.apache.flink:flink-streaming-java_2.11:test-jar:tests:1.11.0", which 
> contains a "log4j2-test.properties" file.
> When I disable Log4j2 debugging (with "-Dlog4j2.debug"), I see the following 
> line:
> {code}
> DEBUG StatusLogger Reconfiguration complete for context[name=3d4eac69] at URI 
> jar:file:/Users/robert/.m2/repository/org/apache/flink/flink-streaming-java_2.11/1.11.0/flink-streaming-java_2.11-1.11.0-tests.jar!/log4j2-test.properties
>  (org.apache.logging.log4j.core.LoggerContext@568bf312) with optional 
> ClassLoader: null
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-17730) HadoopS3RecoverableWriterITCase.testRecoverAfterMultiplePersistsStateWithMultiPart times out

2020-07-23 Thread Dian Fu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17164100#comment-17164100
 ] 

Dian Fu commented on FLINK-17730:
-

1.11 branch:
[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=4822=logs=d44f43ce-542c-597d-bf94-b0718c71e5e8=34f486e1-e1e4-5dd2-9c06-bfdd9b9c74a8]

> HadoopS3RecoverableWriterITCase.testRecoverAfterMultiplePersistsStateWithMultiPart
>  times out
> 
>
> Key: FLINK-17730
> URL: https://issues.apache.org/jira/browse/FLINK-17730
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / Azure Pipelines, FileSystems, Tests
>Reporter: Robert Metzger
>Assignee: Robert Metzger
>Priority: Major
>  Labels: pull-request-available, test-stability
> Fix For: 1.11.0, 1.12.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=1374=logs=d44f43ce-542c-597d-bf94-b0718c71e5e8=34f486e1-e1e4-5dd2-9c06-bfdd9b9c74a8
> After 5 minutes 
> {code}
> 2020-05-15T06:56:38.1688341Z "main" #1 prio=5 os_prio=0 
> tid=0x7fa10800b800 nid=0x1161 runnable [0x7fa110959000]
> 2020-05-15T06:56:38.1688709Zjava.lang.Thread.State: RUNNABLE
> 2020-05-15T06:56:38.1689028Z  at 
> java.net.SocketInputStream.socketRead0(Native Method)
> 2020-05-15T06:56:38.1689496Z  at 
> java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
> 2020-05-15T06:56:38.1689921Z  at 
> java.net.SocketInputStream.read(SocketInputStream.java:171)
> 2020-05-15T06:56:38.1690316Z  at 
> java.net.SocketInputStream.read(SocketInputStream.java:141)
> 2020-05-15T06:56:38.1690723Z  at 
> sun.security.ssl.InputRecord.readFully(InputRecord.java:465)
> 2020-05-15T06:56:38.1691196Z  at 
> sun.security.ssl.InputRecord.readV3Record(InputRecord.java:593)
> 2020-05-15T06:56:38.1691608Z  at 
> sun.security.ssl.InputRecord.read(InputRecord.java:532)
> 2020-05-15T06:56:38.1692023Z  at 
> sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:975)
> 2020-05-15T06:56:38.1692558Z  - locked <0xb94644f8> (a 
> java.lang.Object)
> 2020-05-15T06:56:38.1692946Z  at 
> sun.security.ssl.SSLSocketImpl.readDataRecord(SSLSocketImpl.java:933)
> 2020-05-15T06:56:38.1693371Z  at 
> sun.security.ssl.AppInputStream.read(AppInputStream.java:105)
> 2020-05-15T06:56:38.1694151Z  - locked <0xb9464d20> (a 
> sun.security.ssl.AppInputStream)
> 2020-05-15T06:56:38.1694908Z  at 
> org.apache.http.impl.io.SessionInputBufferImpl.streamRead(SessionInputBufferImpl.java:137)
> 2020-05-15T06:56:38.1695475Z  at 
> org.apache.http.impl.io.SessionInputBufferImpl.read(SessionInputBufferImpl.java:198)
> 2020-05-15T06:56:38.1696007Z  at 
> org.apache.http.impl.io.ContentLengthInputStream.read(ContentLengthInputStream.java:176)
> 2020-05-15T06:56:38.1696509Z  at 
> org.apache.http.conn.EofSensorInputStream.read(EofSensorInputStream.java:135)
> 2020-05-15T06:56:38.1696993Z  at 
> com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:90)
> 2020-05-15T06:56:38.1697466Z  at 
> com.amazonaws.event.ProgressInputStream.read(ProgressInputStream.java:180)
> 2020-05-15T06:56:38.1698069Z  at 
> com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:90)
> 2020-05-15T06:56:38.1698567Z  at 
> com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:90)
> 2020-05-15T06:56:38.1699041Z  at 
> com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:90)
> 2020-05-15T06:56:38.1699624Z  at 
> com.amazonaws.event.ProgressInputStream.read(ProgressInputStream.java:180)
> 2020-05-15T06:56:38.1700090Z  at 
> com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:90)
> 2020-05-15T06:56:38.1700584Z  at 
> com.amazonaws.util.LengthCheckInputStream.read(LengthCheckInputStream.java:107)
> 2020-05-15T06:56:38.1701282Z  at 
> com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:90)
> 2020-05-15T06:56:38.1701800Z  at 
> com.amazonaws.services.s3.internal.S3AbortableInputStream.read(S3AbortableInputStream.java:125)
> 2020-05-15T06:56:38.1702328Z  at 
> com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:90)
> 2020-05-15T06:56:38.1702804Z  at 
> org.apache.hadoop.fs.s3a.S3AInputStream.lambda$read$3(S3AInputStream.java:445)
> 2020-05-15T06:56:38.1703270Z  at 
> org.apache.hadoop.fs.s3a.S3AInputStream$$Lambda$42/1204178174.execute(Unknown 
> Source)
> 2020-05-15T06:56:38.1703677Z  at 
> org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:109)
> 2020-05-15T06:56:38.1704090Z  at 
> org.apache.hadoop.fs.s3a.Invoker.lambda$retry$3(Invoker.java:260)
> 2020-05-15T06:56:38.1704607Z  at 
> org.apache.hadoop.fs.s3a.Invoker$$Lambda$23/1991724700.execute(Unknown Source)
> 2020-05-15T06:56:38.1705115Z  at 
> 

[GitHub] [flink] maozl commented on pull request #12950: [FLINK-18640][jdbc]Fix table name with schema parse error

2020-07-23 Thread GitBox


maozl commented on pull request #12950:
URL: https://github.com/apache/flink/pull/12950#issuecomment-663326461


   The 1.12 version has some other changes.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] maozl closed pull request #12950: [FLINK-18640][jdbc]Fix table name with schema parse error

2020-07-23 Thread GitBox


maozl closed pull request #12950:
URL: https://github.com/apache/flink/pull/12950


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12919: [FLINK-16048][avro] Support read/write confluent schema registry avro…

2020-07-23 Thread GitBox


flinkbot edited a comment on pull request #12919:
URL: https://github.com/apache/flink/pull/12919#issuecomment-659928275


   
   ## CI report:
   
   * d1e4ba7690be134e21d193fbc1cb01aa51aaeb9b Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4703)
 
   * f0da3cee91d22ec20cbba1b6c5be45da1440cf05 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4713)
 
   * f006afeec4c8ee25dfe12b944e2cf4260239ca1e UNKNOWN
   * 9104e12b0394cd6d578d2380ca4554b75e6e00f9 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4730)
 
   * 22ed53e6e047b379e0ee568298600afd9283b2b8 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4762)
 
   * 9b557c718fe731e8d5c58e7c5d9c3452a245ee5a Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4769)
 
   * 71218ee49095663a641e56889831536a2a2e69ea Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4818)
 
   * 6839c54eedcdca926b8304782fabcb0dc529c5a6 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] dianfu commented on pull request #12970: [FLINK-18341][walkthroughs] Drop remaining table walkthrough archetypes

2020-07-23 Thread GitBox


dianfu commented on pull request #12970:
URL: https://github.com/apache/flink/pull/12970#issuecomment-663324805


   @sjwiesman Thanks for the PR. LGTM.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12976: [FLINK-18667][docs] Data types documentation misunderstand users

2020-07-23 Thread GitBox


flinkbot edited a comment on pull request #12976:
URL: https://github.com/apache/flink/pull/12976#issuecomment-663319572


   
   ## CI report:
   
   * dc15e87e4eabe2cf6dec6b34ec2d4e20f9fbc15b Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4844)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-18681) The jar package version conflict causes the task to continue to increase and grab resources

2020-07-23 Thread Xintong Song (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17164088#comment-17164088
 ] 

Xintong Song commented on FLINK-18681:
--

Hi [~apach...@163.com],
I don't think this is a bug in Flink. The problem is usually caused by 
improper packaging of the user program.
* Have you compiled Flink into your program jar? Ideally, all Flink 
dependencies should be {{provided}} except for the connectors. See also [this 
doc|https://ci.apache.org/projects/flink/flink-docs-release-1.11/dev/project-configuration.html].
* Does your project depend (directly or indirectly) on {{commons-cli}}? If yes, 
you would need to make it provided/excluded to avoid the conflict. A quick way to 
check which jar the conflicting class is actually loaded from is sketched below.
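
A minimal diagnostic sketch (plain JDK, nothing Flink-specific) for that check: it prints the jar the conflicting class is loaded from at runtime.

{code:java}
public class WhichJar {
    public static void main(String[] args) throws Exception {
        // If this points into your application jar rather than into Flink's own jars,
        // your build is shipping a conflicting commons-cli version.
        Class<?> option = Class.forName("org.apache.commons.cli.Option");
        System.out.println(option.getProtectionDomain().getCodeSource().getLocation());
    }
}
{code}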

> The jar package version conflict causes the task to continue to increase and 
> grab resources
> ---
>
> Key: FLINK-18681
> URL: https://issues.apache.org/jira/browse/FLINK-18681
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.11.0
>Reporter: wangtaiyang
>Priority: Major
>
> When I submit a Flink job to YARN, the default resource configuration is 
> 1G and 1 core, but in fact this job keeps increasing its resources: 2 cores, 
> 3 cores, and so on, up to 200 cores. Then I looked at the JM log and found the 
> following error:
> {code:java}
> // code placeholder
> java.lang.NoSuchMethodError: 
> org.apache.commons.cli.Option.builder(Ljava/lang/String;)Lorg/apache/commons/cli/Option$Builder;java.lang.NoSuchMethodError:
>  
> org.apache.commons.cli.Option.builder(Ljava/lang/String;)Lorg/apache/commons/cli/Option$Builder;
>  at 
> org.apache.flink.runtime.entrypoint.parser.CommandLineOptions.(CommandLineOptions.java:28)
>  ~[flink-dist_2.11-1.11.1.jar:1.11.1] at 
> org.apache.flink.runtime.clusterframework.BootstrapTools.lambda$getDynamicPropertiesAsString$0(BootstrapTools.java:648)
>  ~[flink-dist_2.11-1.11.1.jar:1.11.1] at 
> java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:267) 
> ~[?:1.8.0_191]
> ...
> java.lang.NoClassDefFoundError: Could not initialize class 
> org.apache.flink.runtime.entrypoint.parser.CommandLineOptionsjava.lang.NoClassDefFoundError:
>  Could not initialize class 
> org.apache.flink.runtime.entrypoint.parser.CommandLineOptions at 
> org.apache.flink.runtime.clusterframework.BootstrapTools.lambda$getDynamicPropertiesAsString$0(BootstrapTools.java:648)
>  ~[flink-dist_2.11-1.11.1.jar:1.11.1] at 
> java.util.stream.ReferencePipeline$7$1.accept(ReferencePipeline.java:267) 
> ~[?:1.8.0_191] at 
> java.util.HashMap$KeySpliterator.forEachRemaining(HashMap.java:1553) 
> ~[?:1.8.0_191] at 
> java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481) 
> ~[?:1.8.0_191]{code}
> Finally, it was confirmed that this is caused by a commons-cli version 
> conflict, but the job that reported the error has not stopped and keeps 
> grabbing more and more resources. Is this a bug?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot commented on pull request #12976: [FLINK-18667][docs] Data types documentation misunderstand users

2020-07-23 Thread GitBox


flinkbot commented on pull request #12976:
URL: https://github.com/apache/flink/pull/12976#issuecomment-663319572


   
   ## CI report:
   
   * dc15e87e4eabe2cf6dec6b34ec2d4e20f9fbc15b UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] Thesharing commented on a change in pull request #12917: [WIP] [FLINK-18355][tests] Simplify tests of SlotPoolImpl

2020-07-23 Thread GitBox


Thesharing commented on a change in pull request #12917:
URL: https://github.com/apache/flink/pull/12917#discussion_r459821712



##
File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/jobmaster/slotpool/SlotPoolImplTest.java
##
@@ -576,7 +550,6 @@ public void testDiscardIdleSlotIfReleasingFailed() throws 
Exception {
try (TestingSlotPoolImpl slotPool = createSlotPoolImpl(clock)) {
 
setupSlotPool(slotPool, resourceManagerGateway, 
mainThreadExecutor);

Review comment:
   Because in this commit there is no `createAndSetUpSlotPool(Clock clock)` 
method in the `SlotPoolImpl`-related test cases. In the following commits, I replace 
all usages of `createAndSetUpSlotPool` with `SlotPoolBuilder`.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] Thesharing commented on a change in pull request #12917: [WIP] [FLINK-18355][tests] Simplify tests of SlotPoolImpl

2020-07-23 Thread GitBox


Thesharing commented on a change in pull request #12917:
URL: https://github.com/apache/flink/pull/12917#discussion_r459821120



##
File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/jobmaster/slotpool/SlotPoolImplTest.java
##
@@ -365,19 +343,26 @@ public void testReleaseResource() throws Exception {
 
assertTrue(slotPool.offerSlot(taskManagerLocation, 
taskManagerGateway, slotOffer));
 
-   LogicalSlot slot1 = future1.get(1, TimeUnit.SECONDS);
+   PhysicalSlot slot1 = future1.get(1, TimeUnit.SECONDS);
assertTrue(future1.isDone());
assertFalse(future2.isDone());
 
final CompletableFuture releaseFuture = new 
CompletableFuture<>();
-   final DummyPayload dummyPayload = new 
DummyPayload(releaseFuture);
 
-   slot1.tryAssignPayload(dummyPayload);
+   SingleLogicalSlot logicalSlot = 
SingleLogicalSlot.allocateFromPhysicalSlot(
+   requestId1,
+   slot1,
+   Locality.UNKNOWN,
+   new DummySlotOwner(),
+   true
+   );
+
+   logicalSlot.tryAssignPayload(new 
DummyPayload(releaseFuture));
 

slotPool.releaseTaskManager(taskManagerLocation.getResourceID(), null);
 
-   releaseFuture.get();
-   assertFalse(slot1.isAlive());
+   releaseFuture.get(1, TimeUnit.SECONDS);

Review comment:
   I added a timeout to `releaseFuture.get()` to make sure this line won't block 
the test run indefinitely.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Closed] (FLINK-18487) New table source factory omits unrecognized properties silently

2020-07-23 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18487?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee closed FLINK-18487.

Fix Version/s: 1.12.0
   Resolution: Fixed

master: 73210cc0f712158ec939ef3ad7dec52a921aad7c

> New table source factory omits unrecognized properties silently
> ---
>
> Key: FLINK-18487
> URL: https://issues.apache.org/jira/browse/FLINK-18487
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.11.0
>Reporter: Benchao Li
>Assignee: Jingsong Lee
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.12.0
>
>
> For the following DDL, we just omit the unrecognized property 
> 'records-per-second'.
> {code:sql}
> CREATE TABLE MyDataGen (
>   int_field int,
>   double_field double,
>   string_field varchar
> ) WITH (
>   'connector' = 'datagen',
>   'records-per-second' = '1'  -- should be rows-per-second
> )
> {code}
> IMO, we should throw an exception to tell users that they used a wrong 
> property. 
>  CC [~jark] [~twalthr]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] JingsongLi merged pull request #12864: [FLINK-18487][table] Datagen and Blackhole factory omits unrecognized properties silently

2020-07-23 Thread GitBox


JingsongLi merged pull request #12864:
URL: https://github.com/apache/flink/pull/12864


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Closed] (FLINK-18687) ProjectionCodeGenerator#generateProjectionExpression should remove for loop optimization

2020-07-23 Thread Caizhi Weng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18687?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Caizhi Weng closed FLINK-18687.
---
Resolution: Duplicate

> ProjectionCodeGenerator#generateProjectionExpression should remove for loop 
> optimization
> 
>
> Key: FLINK-18687
> URL: https://issues.apache.org/jira/browse/FLINK-18687
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.11.0
>Reporter: Caizhi Weng
>Priority: Critical
> Fix For: 1.12.0
>
>
> If too many fields of the same type are projected, 
> {{ProjectionCodeGenerator#generateProjectionExpression}} currently performs a 
> "for loop optimization": instead of generating code separately for each 
> field, the fields are squashed into a single for loop.
> However, if the indices of the fields with the same type are not continuous, 
> this optimization will not write the fields in ascending index order. This is not 
> acceptable because {{BinaryWriter}}s expect users to write fields in 
> ascending index order (that is to say, we *have to* first write field 0, then 
> field 1, then ...); otherwise the variable-length area of two binary rows 
> holding the same data might differ (sketched below). Although we can use the 
> {{getXX}} methods of {{BinaryRow}} to read the fields correctly, state for 
> streaming jobs compares state keys by their binary bits, not by the contents of 
> the keys, so we need to make sure the binary bits of two binary rows are 
> identical whenever they contain the same data.
> What's worse, since the current implementation of 
> {{ProjectionCodeGenerator#generateProjectionExpression}} uses a Scala 
> {{HashMap}}, the key order of the map might differ across workers; even if the 
> projection does not meet the condition to be optimized, it will still be 
> affected by this bug.
> What I suggest is to simply remove this optimization, because keeping it would 
> require that fields of the same type have continuous indices, which is a very 
> strict and rare condition.
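
For readers unfamiliar with the write-order contract mentioned above, a minimal sketch, assuming the blink runtime classes {{BinaryRowData}}/{{BinaryRowWriter}}/{{StringData}} from Flink 1.11 (the package names are taken from that release and may differ in other versions):

{code:java}
import org.apache.flink.table.data.StringData;
import org.apache.flink.table.data.binary.BinaryRowData;
import org.apache.flink.table.data.writer.BinaryRowWriter;

public class WriteOrderSketch {
    public static void main(String[] args) {
        BinaryRowData row = new BinaryRowData(3);
        BinaryRowWriter writer = new BinaryRowWriter(row);

        // Variable-length fields are appended to the variable-length region in the order
        // they are written. Writing field 2 before field 1 would therefore yield a
        // different byte layout for the same logical data, which breaks comparisons that
        // look at the binary bits (e.g. keyed state), even though getString(...) still works.
        writer.writeInt(0, 42);
        writer.writeString(1, StringData.fromString("a"));
        writer.writeString(2, StringData.fromString("b"));
        writer.complete();
    }
}
{code}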



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] Thesharing commented on a change in pull request #12917: [WIP] [FLINK-18355][tests] Simplify tests of SlotPoolImpl

2020-07-23 Thread GitBox


Thesharing commented on a change in pull request #12917:
URL: https://github.com/apache/flink/pull/12917#discussion_r459819964



##
File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/jobmaster/slotpool/SlotPoolImplTest.java
##
@@ -275,16 +254,14 @@ public void testOfferSlot() throws Exception {
 

resourceManagerGateway.setRequestSlotConsumer(slotRequestFuture::complete);
 
-   try (SlotPoolImpl slotPool = createSlotPoolImpl()) {
-   setupSlotPool(slotPool, resourceManagerGateway, 
mainThreadExecutor);
-   Scheduler scheduler = setupScheduler(slotPool, 
mainThreadExecutor);
+   try (SlotPoolImpl slotPool = createAndSetUpSlotPool()) {

slotPool.registerTaskManager(taskManagerLocation.getResourceID());
 
-   CompletableFuture future = 
scheduler.allocateSlot(
-   new SlotRequestId(),
-   new DummyScheduledUnit(),
-   SlotProfile.noLocality(DEFAULT_TESTING_PROFILE),
-   timeout);
+   SlotRequestId requestId = new SlotRequestId();
+   CompletableFuture future = 
requestNewAllocatedSlot(
+   slotPool,
+   requestId
+   );

Review comment:
   It would be reused in `slotPool.releaseSlot(requestId, null)` in the 
line below.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12975: [FLINK-18691]add HiveCatalog Construction method with HiveConf

2020-07-23 Thread GitBox


flinkbot edited a comment on pull request #12975:
URL: https://github.com/apache/flink/pull/12975#issuecomment-663313007


   
   ## CI report:
   
   * 601b0925c922bf81816f325ba6375ac881847630 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4842)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #12976: [FLINK-18667][docs] Data types documentation misunderstand users

2020-07-23 Thread GitBox


flinkbot commented on pull request #12976:
URL: https://github.com/apache/flink/pull/12976#issuecomment-663315576


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit dc15e87e4eabe2cf6dec6b34ec2d4e20f9fbc15b (Fri Jul 24 
02:01:19 UTC 2020)
   
✅no warnings
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-18667) Data Types documentation misunderstand users

2020-07-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18667?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-18667:
---
Labels: pull-request-available starter  (was: starter)

> Data Types documentation misunderstand users
> 
>
> Key: FLINK-18667
> URL: https://issues.apache.org/jira/browse/FLINK-18667
> Project: Flink
>  Issue Type: Task
>  Components: Documentation
>Reporter: Jingsong Lee
>Assignee: Shengkai Fang
>Priority: Major
>  Labels: pull-request-available, starter
> Fix For: 1.11.2
>
>
> In 
> [https://ci.apache.org/projects/flink/flink-docs-release-1.11/dev/table/types.html]
> In the "New Blink Planner" tab it says:
> > {{CHAR}} and {{VARCHAR}} are not supported yet.
> But in the blink planner, users can write VARCHAR and CHAR.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] fsk119 opened a new pull request #12976: [FLINK-18667][docs] Data types documentation misunderstand users

2020-07-23 Thread GitBox


fsk119 opened a new pull request #12976:
URL: https://github.com/apache/flink/pull/12976


   
   
   ## What is the purpose of the change
   
   *Fix document error.*
   
   
   ## Brief change log
   
 - *Fix document error*
   
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / **no**)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
 - The serializers: (yes / **no** / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / **no** 
/ don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (yes / **no** / 
don't know)
 - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / **no**)
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #12975: [FLINK-18691]add HiveCatalog Construction method with HiveConf

2020-07-23 Thread GitBox


flinkbot commented on pull request #12975:
URL: https://github.com/apache/flink/pull/12975#issuecomment-663313007


   
   ## CI report:
   
   * 601b0925c922bf81816f325ba6375ac881847630 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-18691) add HiveCatalog Construction method with HiveConf

2020-07-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18691?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-18691:
---
Labels: pull-request-available  (was: )

> add HiveCatalog Construction method with HiveConf
> -
>
> Key: FLINK-18691
> URL: https://issues.apache.org/jira/browse/FLINK-18691
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Hive
>Affects Versions: 1.11.1
>Reporter: Jun Zhang
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 1.12.0
>
>
> Currently HiveCatalog has two public constructors. Both require a 
> hiveConfDir argument, the path to a local Hive configuration file. But when we 
> submit a job in Application mode, the job is submitted on the master node of the 
> cluster, where there may be no Hive configuration at all, so we cannot provide a 
> local Hive conf path. We therefore add a public constructor that takes a 
> HiveConf directly, which is more convenient for users.
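
For illustration, a sketch of how the proposed constructor could be used from a job submitted in Application mode. The constructor signature is an assumption (modeled on the existing constructors, with a {{HiveConf}} in place of {{hiveConfDir}}); whatever the PR finally adds is authoritative.

{code:java}
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;
import org.apache.flink.table.catalog.hive.HiveCatalog;
import org.apache.hadoop.hive.conf.HiveConf;

public class HiveConfCatalogSketch {
    public static void main(String[] args) {
        TableEnvironment tableEnv = TableEnvironment.create(
                EnvironmentSettings.newInstance().useBlinkPlanner().inBatchMode().build());

        // Build the HiveConf programmatically instead of reading a local hive-site.xml,
        // which may not exist on the cluster master node in Application mode.
        HiveConf hiveConf = new HiveConf();
        hiveConf.setVar(HiveConf.ConfVars.METASTOREURIS, "thrift://metastore-host:9083");

        // Hypothetical signature proposed by this ticket: name, default database, HiveConf, Hive version.
        HiveCatalog catalog = new HiveCatalog("myhive", "default", hiveConf, "2.3.6");
        tableEnv.registerCatalog("myhive", catalog);
        tableEnv.useCatalog("myhive");
    }
}
{code}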



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot commented on pull request #12975: [FLINK-18691]add HiveCatalog Construction method with HiveConf

2020-07-23 Thread GitBox


flinkbot commented on pull request #12975:
URL: https://github.com/apache/flink/pull/12975#issuecomment-663308170


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 601b0925c922bf81816f325ba6375ac881847630 (Fri Jul 24 
01:22:01 UTC 2020)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
* **This pull request references an unassigned [Jira 
ticket](https://issues.apache.org/jira/browse/FLINK-18691).** According to the 
[code contribution 
guide](https://flink.apache.org/contributing/contribute-code.html), tickets 
need to be assigned before starting with the implementation work.
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] zhangjun0x01 opened a new pull request #12975: add HiveCatalog Construction method with HiveConf

2020-07-23 Thread GitBox


zhangjun0x01 opened a new pull request #12975:
URL: https://github.com/apache/flink/pull/12975


   
   
   ## What is the purpose of the change
   
   *add HiveCatalog Construction method with HiveConf*
   
   
   ## Brief change log
   
   *(for example:)*
 - *The TaskInfo is stored in the blob store on job creation time as a 
persistent artifact*
 - *Deployments RPC transmits only the blob storage reference*
 - *TaskManagers retrieve the TaskInfo from the blob cache*
   
   
   ## Verifying this change
   
   *(Please pick either of the following options)*
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   *(or)*
   
   This change is already covered by existing tests, such as *(please describe 
tests)*.
   
   *(or)*
   
   This change added tests and can be verified as follows:
   
   *(example:)*
 - *Added integration tests for end-to-end deployment with large payloads 
(100MB)*
 - *Extended integration test for recovery after master (JobManager) 
failure*
 - *Added test that validates that TaskInfo is transferred only once across 
recoveries*
 - *Manually verified the change by running a 4 node cluser with 2 
JobManagers and 4 TaskManagers, a stateful streaming program, and killing one 
JobManager and two TaskManagers during the execution, verifying that recovery 
happens correctly.*
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / no)
 - The serializers: (yes / no / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / no / 
don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (yes / no / don't 
know)
 - The S3 file system connector: (yes / no / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / no)
 - If yes, how is the feature documented? (not applicable / docs / JavaDocs 
/ not documented)
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Closed] (FLINK-18693) AvroSerializationSchema does not work with types generated by avrohugger

2020-07-23 Thread Leonard Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18693?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leonard Xu closed FLINK-18693.
--
Resolution: Duplicate

Hi [~aljoscha],
I'm closing this one; it looks like a duplicate of FLINK-18692.

> AvroSerializationSchema does not work with types generated by avrohugger
> 
>
> Key: FLINK-18693
> URL: https://issues.apache.org/jira/browse/FLINK-18693
> Project: Flink
>  Issue Type: Bug
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Reporter: Aljoscha Krettek
>Assignee: Aljoscha Krettek
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.10.2, 1.12.0, 1.11.1
>
>
> The main problem is that the code in {{SpecificData.createSchema()}} tries to 
> reflectively read the {{SCHEMA$}} field, which is normally present in Avro 
> generated classes. However, avrohugger generates this field on a companion 
> object, which the reflective Java code therefore does not find (see the sketch below).
> This is also described in these ML threads:
>  * 
> [https://lists.apache.org/thread.html/5db58c7d15e4e9aaa515f935be3b342fe036e97d32e1fb0f0d1797ee@%3Cuser.flink.apache.org%3E]
>  * 
> [https://lists.apache.org/thread.html/cf1c5b8fa7f095739438807de9f2497e04ffe55237c5dea83355112d@%3Cuser.flink.apache.org%3E]
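
To make the failure mode concrete, a small reflection sketch of the lookup described above. {{JavaUser}} and {{ScalaUser}} are hypothetical generated types (avro-tools and avrohugger output respectively), used only for illustration:

{code:java}
import org.apache.avro.Schema;

public class SchemaFieldLookup {
    public static void main(String[] args) throws Exception {
        // Avro-generated Java classes expose a public static SCHEMA$ field, so the
        // reflective lookup used by SpecificData succeeds:
        Schema javaSchema = (Schema) JavaUser.class.getField("SCHEMA$").get(null);
        System.out.println(javaSchema);

        // avrohugger puts SCHEMA$ on the Scala companion object, not on the case class
        // itself, so the same lookup throws NoSuchFieldException here:
        Schema scalaSchema = (Schema) ScalaUser.class.getField("SCHEMA$").get(null);
        System.out.println(scalaSchema);
    }
}
{code}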



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #12916: [FLINK-11547][flink-connector-kinesis] Fix JsonMappingException in DynamoDBStreamsSchema

2020-07-23 Thread GitBox


flinkbot edited a comment on pull request #12916:
URL: https://github.com/apache/flink/pull/12916#issuecomment-659762134


   
   ## CI report:
   
   * f5c161b9cbf0df4cbf0e9f9efd08d1b5b3edb47e Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4568)
 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4813)
 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4733)
 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4814)
 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4839)
 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4841)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12916: [FLINK-11547][flink-connector-kinesis] Fix JsonMappingException in DynamoDBStreamsSchema

2020-07-23 Thread GitBox


flinkbot edited a comment on pull request #12916:
URL: https://github.com/apache/flink/pull/12916#issuecomment-659762134


   
   ## CI report:
   
   * f5c161b9cbf0df4cbf0e9f9efd08d1b5b3edb47e Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4568)
 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4813)
 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4733)
 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4814)
 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4841)
 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4839)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12916: [FLINK-11547][flink-connector-kinesis] Fix JsonMappingException in DynamoDBStreamsSchema

2020-07-23 Thread GitBox


flinkbot edited a comment on pull request #12916:
URL: https://github.com/apache/flink/pull/12916#issuecomment-659762134


   
   ## CI report:
   
   * f5c161b9cbf0df4cbf0e9f9efd08d1b5b3edb47e Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4568)
 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4813)
 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4733)
 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4814)
 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4839)
 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4841)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] tweise edited a comment on pull request #12916: [FLINK-11547][flink-connector-kinesis] Fix JsonMappingException in DynamoDBStreamsSchema

2020-07-23 Thread GitBox


tweise edited a comment on pull request #12916:
URL: https://github.com/apache/flink/pull/12916#issuecomment-663282255


   Actually all test runs show an error when you click through, even when they 
are marked `SUCCESS` in the status here. That's quite confusing. The error has 
nothing to do with the change in this PR.
   ```
   Run e2e tests
   ...
   ##[error]Bash exited with code '2'.
   ```



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Comment Edited] (FLINK-18686) Getting the emit time when `table.exec.emit.early-fire.enabled` is true

2020-07-23 Thread hailong wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17164026#comment-17164026
 ] 

hailong wang edited comment on FLINK-18686 at 7/23/20, 11:38 PM:
-

In my use case, I want to compute the PV from midnight up to the current time, 
refreshed every minute, over a one-day window. The SQL can be:
{code:java}
COUNT(*) GROUP BY USER, TUMBLE(pt, INTERVAL '1' DAY);
SET `table.exec.emit.early-fire.delay` = `6 ms`;
{code}
And I want to use the fire time to know exactly when the PV was emitted.


was (Author: hailong wang):
In my business, I want to compute the PV from zero to the current time every 
minute for one day. SQL can be,

 
{code:java}
COUNT(*) GROUP BY USER, TUMBLE(pt, INTERVAL '1' DAY);
SET `table.exec.emit.early-fire.delay` = `6 ms`;
{code}
And I want to use the fire time to know when was the PV exactly.

 

> Getting the emit time when `table.exec.emit.early-fire.enabled` is true
> ---
>
> Key: FLINK-18686
> URL: https://issues.apache.org/jira/browse/FLINK-18686
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Affects Versions: 1.11.0
>Reporter: hailong wang
>Priority: Major
> Fix For: 1.12.0
>
>
> We can turn `table.exec.emit.early-fire.enabled`  on to let window 
> early-fire. But users always want to get the emit time.
> So can we support auxiliary Function to support this, may be like 
> TUMBLE_EMIT, HOP_EMIT?
>  
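
For reference, a minimal sketch of how the existing early-fire behaviour can be enabled 
from Java today. It assumes the Blink planner's experimental {{table.exec.emit.*}} options; 
the TUMBLE_EMIT/HOP_EMIT functions proposed above do not exist yet, and the 60 s delay 
below is only illustrative:
{code:java}
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class EarlyFireExample {

	public static void main(String[] args) {
		TableEnvironment tEnv = TableEnvironment.create(
			EnvironmentSettings.newInstance().useBlinkPlanner().inStreamingMode().build());

		// Emit partial results of a long (e.g. daily) window periodically instead of
		// waiting for the window to close. The emit time itself is not exposed, which
		// is what this ticket asks for.
		tEnv.getConfig().getConfiguration()
			.setString("table.exec.emit.early-fire.enabled", "true");
		tEnv.getConfig().getConfiguration()
			.setString("table.exec.emit.early-fire.delay", "60 s");
	}
}
{code}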



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-18686) Getting the emit time when `table.exec.emit.early-fire.enabled` is true

2020-07-23 Thread hailong wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17164026#comment-17164026
 ] 

hailong wang commented on FLINK-18686:
--

In my use case, I want to compute the PV from midnight up to the current time, 
refreshed every minute, over a one-day window. The SQL can be:
{code:java}
COUNT(*) GROUP BY USER, TUMBLE(pt, INTERVAL '1' DAY);
SET `table.exec.emit.early-fire.delay` = `6 ms`;
{code}
And I want to use the fire time to know exactly when the PV was emitted.

 

> Getting the emit time when `table.exec.emit.early-fire.enabled` is true
> ---
>
> Key: FLINK-18686
> URL: https://issues.apache.org/jira/browse/FLINK-18686
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Affects Versions: 1.11.0
>Reporter: hailong wang
>Priority: Major
> Fix For: 1.12.0
>
>
> We can turn `table.exec.emit.early-fire.enabled`  on to let window 
> early-fire. But users always want to get the emit time.
> So can we support auxiliary Function to support this, may be like 
> TUMBLE_EMIT, HOP_EMIT?
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] tweise commented on pull request #12916: [FLINK-11547][flink-connector-kinesis] Fix JsonMappingException in DynamoDBStreamsSchema

2020-07-23 Thread GitBox


tweise commented on pull request #12916:
URL: https://github.com/apache/flink/pull/12916#issuecomment-663282511


   @flinkbot run azure
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] tweise commented on pull request #12916: [FLINK-11547][flink-connector-kinesis] Fix JsonMappingException in DynamoDBStreamsSchema

2020-07-23 Thread GitBox


tweise commented on pull request #12916:
URL: https://github.com/apache/flink/pull/12916#issuecomment-663282255


   Actually all test runs show a failure when you click through, regardless of 
what is shown in the status here. That's quite confusing. The failures also 
have nothing to do with the change in this PR.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12962: [FLINK-18694] Add unaligned checkpoint config to web ui

2020-07-23 Thread GitBox


flinkbot edited a comment on pull request #12962:
URL: https://github.com/apache/flink/pull/12962#issuecomment-662526701


   
   ## CI report:
   
   * 0b680957f9828af8d87dc3a10a31a5a5112c9b96 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4840)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12916: [FLINK-11547][flink-connector-kinesis] Fix JsonMappingException in DynamoDBStreamsSchema

2020-07-23 Thread GitBox


flinkbot edited a comment on pull request #12916:
URL: https://github.com/apache/flink/pull/12916#issuecomment-659762134


   
   ## CI report:
   
   * f5c161b9cbf0df4cbf0e9f9efd08d1b5b3edb47e Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4568)
 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4813)
 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4733)
 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4814)
 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4839)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12962: [FLINK-18694] Add unaligned checkpoint config to web ui

2020-07-23 Thread GitBox


flinkbot edited a comment on pull request #12962:
URL: https://github.com/apache/flink/pull/12962#issuecomment-662526701


   
   ## CI report:
   
   *  Unknown: [CANCELED](TBD) 
   * 0b680957f9828af8d87dc3a10a31a5a5112c9b96 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4840)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] tinder-dthomson commented on pull request #12916: [FLINK-11547][flink-connector-kinesis] Fix JsonMappingException in DynamoDBStreamsSchema

2020-07-23 Thread GitBox


tinder-dthomson commented on pull request #12916:
URL: https://github.com/apache/flink/pull/12916#issuecomment-663260476


   @tweise success! What are the next steps?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-16048) Support read/write confluent schema registry avro data from Kafka

2020-07-23 Thread Seth Wiesman (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17164003#comment-17164003
 ] 

Seth Wiesman commented on FLINK-16048:
--

I honestly don’t have a strong preference here. I originally said avro-sr 
because that’s what it’s called in ksql and I like using preexisting names when 
possible. That said, you and Jark have made a strong case, especially for 
debezium which I could realistically see being supported soon. 

+1 for avro-confluent

> Support read/write confluent schema registry avro data  from Kafka
> --
>
> Key: FLINK-16048
> URL: https://issues.apache.org/jira/browse/FLINK-16048
> Project: Flink
>  Issue Type: Improvement
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile), Table 
> SQL / Ecosystem
>Affects Versions: 1.11.0
>Reporter: Leonard Xu
>Assignee: Danny Chen
>Priority: Major
>  Labels: pull-request-available, usability
> Fix For: 1.12.0
>
>
> *The background*
> I found that the SQL Kafka connector cannot consume Avro data serialized by 
> `KafkaAvroSerializer` and can only consume Row data with an Avro schema, because 
> we use `AvroRowDeserializationSchema`/`AvroRowSerializationSchema` to serialize and 
> deserialize data in `AvroRowFormatFactory`. 
> I think we should support this because `KafkaAvroSerializer` is very common in 
> Kafka, and someone ran into the same question on Stack Overflow [1].
> [[1]https://stackoverflow.com/questions/56452571/caused-by-org-apache-avro-avroruntimeexception-malformed-data-length-is-negat/56478259|https://stackoverflow.com/questions/56452571/caused-by-org-apache-avro-avroruntimeexception-malformed-data-length-is-negat/56478259]
> *The format details*
> _The factory identifier (or format id)_
> There are two candidates now:
> - {{avro-sr}}: the pattern borrowed from KSQL {{JSON_SR}} format [1]
> - {{avro-confluent}}: the pattern borrowed from Clickhouse {{AvroConfluent}} 
> [2]
> Personally I would prefer {{avro-sr}} because it is more concise, and Confluent is 
> a company name, which I think is not that suitable for a format name.
> _The format attributes_
> || Options || required || Remark ||
> | schema-registry.url | true | URL to connect to schema registry service |
> | schema-registry.subject | false | Subject name to write to the Schema 
> Registry service, required for sink |
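
For illustration only, a sketch of how such a table might eventually be declared from 
Java, assuming the {{avro-confluent}} identifier discussed above and option keys derived 
from the table (connector options follow the existing Kafka SQL connector; the final 
format option names may differ):
{code:java}
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class ConfluentAvroTableSketch {

	public static void main(String[] args) {
		TableEnvironment tEnv = TableEnvironment.create(
			EnvironmentSettings.newInstance().useBlinkPlanner().inStreamingMode().build());

		// Hypothetical DDL: 'avro-confluent' and its schema-registry option are the
		// proposal from this ticket, not a released format.
		tEnv.executeSql(
			"CREATE TABLE user_events (\n" +
			"  user_id STRING,\n" +
			"  event_time TIMESTAMP(3)\n" +
			") WITH (\n" +
			"  'connector' = 'kafka',\n" +
			"  'topic' = 'user_events',\n" +
			"  'properties.bootstrap.servers' = 'localhost:9092',\n" +
			"  'format' = 'avro-confluent',\n" +
			"  'avro-confluent.schema-registry.url' = 'http://localhost:8081'\n" +
			")");
	}
}
{code}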



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #12962: [FLINK-18694] Add unaligned checkpoint config to web ui

2020-07-23 Thread GitBox


flinkbot edited a comment on pull request #12962:
URL: https://github.com/apache/flink/pull/12962#issuecomment-662526701


   
   ## CI report:
   
   *  Unknown: [CANCELED](TBD) 
   * 0b680957f9828af8d87dc3a10a31a5a5112c9b96 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12945: [FLINK-18629] Add type to ConnectedStreams#keyBy

2020-07-23 Thread GitBox


flinkbot edited a comment on pull request #12945:
URL: https://github.com/apache/flink/pull/12945#issuecomment-661810553


   
   ## CI report:
   
   * b5aef5007f883d62f11c2e4247f2f5682e9b081e Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4836)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] kbohinski commented on pull request #12962: [FLINK-18694] Add unaligned checkpoint config to web ui

2020-07-23 Thread GitBox


kbohinski commented on pull request #12962:
URL: https://github.com/apache/flink/pull/12962#issuecomment-663256091


   @flinkbot run azure



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12916: [FLINK-11547][flink-connector-kinesis] Fix JsonMappingException in DynamoDBStreamsSchema

2020-07-23 Thread GitBox


flinkbot edited a comment on pull request #12916:
URL: https://github.com/apache/flink/pull/12916#issuecomment-659762134


   
   ## CI report:
   
   * f5c161b9cbf0df4cbf0e9f9efd08d1b5b3edb47e Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4568)
 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4813)
 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4733)
 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4839)
 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4814)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] tweise commented on pull request #12916: [FLINK-11547][flink-connector-kinesis] Fix JsonMappingException in DynamoDBStreamsSchema

2020-07-23 Thread GitBox


tweise commented on pull request #12916:
URL: https://github.com/apache/flink/pull/12916#issuecomment-663251038


   The bot command is working now, let's wait for the test result. I see 
successful CI runs in other PRs.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12916: [FLINK-11547][flink-connector-kinesis] Fix JsonMappingException in DynamoDBStreamsSchema

2020-07-23 Thread GitBox


flinkbot edited a comment on pull request #12916:
URL: https://github.com/apache/flink/pull/12916#issuecomment-659762134


   
   ## CI report:
   
   * f5c161b9cbf0df4cbf0e9f9efd08d1b5b3edb47e Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4568)
 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4813)
 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4733)
 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4814)
 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4839)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-18697) Adding flink-table-api-java-bridge_2.11 to a Flink job kills the IDE logging

2020-07-23 Thread Chesnay Schepler (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17163982#comment-17163982
 ] 

Chesnay Schepler commented on FLINK-18697:
--

So this is just missing the test scope?

> Adding flink-table-api-java-bridge_2.11 to a Flink job kills the IDE logging
> 
>
> Key: FLINK-18697
> URL: https://issues.apache.org/jira/browse/FLINK-18697
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.11.0
>Reporter: Robert Metzger
>Priority: Major
>
> Steps to reproduce:
> - Set up a Flink project using a Maven archetype
> - Add "flink-table-api-java-bridge_2.11" as a dependency
> - Run Flink; it won't produce any log output
> Probable cause:
> "flink-table-api-java-bridge_2.11" has a dependency to 
> "org.apache.flink:flink-streaming-java_2.11:test-jar:tests:1.11.0", which 
> contains a "log4j2-test.properties" file.
> When I disable Log4j2 debugging (with "-Dlog4j2.debug"), I see the following 
> line:
> {code}
> DEBUG StatusLogger Reconfiguration complete for context[name=3d4eac69] at URI 
> jar:file:/Users/robert/.m2/repository/org/apache/flink/flink-streaming-java_2.11/1.11.0/flink-streaming-java_2.11-1.11.0-tests.jar!/log4j2-test.properties
>  (org.apache.logging.log4j.core.LoggerContext@568bf312) with optional 
> ClassLoader: null
> {code}
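
If the answer to the question above is yes, the fix would presumably be to add the test 
scope to that test-jar dependency. A sketch of what the scoped dependency could look like 
(coordinates taken from the description; this is an assumption, not the merged change):
{code:xml}
<!-- Hypothetical: scoping the test-jar dependency to test so that its
     log4j2-test.properties no longer leaks onto user classpaths. -->
<dependency>
	<groupId>org.apache.flink</groupId>
	<artifactId>flink-streaming-java_2.11</artifactId>
	<version>1.11.0</version>
	<type>test-jar</type>
	<scope>test</scope>
</dependency>
{code}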



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] tweise commented on pull request #12916: [FLINK-11547][flink-connector-kinesis] Fix JsonMappingException in DynamoDBStreamsSchema

2020-07-23 Thread GitBox


tweise commented on pull request #12916:
URL: https://github.com/apache/flink/pull/12916#issuecomment-663238635


   @flinkbot run azure
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12974: [FLINK-18552][tests] Update migration tests in master to cover migration for 1.10

2020-07-23 Thread GitBox


flinkbot edited a comment on pull request #12974:
URL: https://github.com/apache/flink/pull/12974#issuecomment-663137580


   
   ## CI report:
   
   * 0b1fb23a1422af5285d0e7d3d58c2ef61056bf2d Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4834)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12970: [FLINK-18341][walkthroughs] Drop remaining table walkthrough archetypes

2020-07-23 Thread GitBox


flinkbot edited a comment on pull request #12970:
URL: https://github.com/apache/flink/pull/12970#issuecomment-663126281


   
   ## CI report:
   
   * 29d992adaad37d914a272df74642ff740dd84e1e UNKNOWN
   * 6b557f3c579efd309884b9abc9037bc224fd4086 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4829)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] tinder-dthomson commented on pull request #12916: [FLINK-11547][flink-connector-kinesis] Fix JsonMappingException in DynamoDBStreamsSchema

2020-07-23 Thread GitBox


tinder-dthomson commented on pull request #12916:
URL: https://github.com/apache/flink/pull/12916#issuecomment-663233131


   @rmetzger there appears to be something broken with the CI. Is there 
anything we can do to move this forward?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12971: [FLINK-18552][tests] Update migration tests in master to cover migration for 1.10

2020-07-23 Thread GitBox


flinkbot edited a comment on pull request #12971:
URL: https://github.com/apache/flink/pull/12971#issuecomment-663126356


   
   ## CI report:
   
   * 90910c6d4a96d15fb339cd8cddc92498148fc283 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4830)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12973: [BP-1.11][FLINK-18341][walkthroughs] Drop remaining table walkthrough archetypes

2020-07-23 Thread GitBox


flinkbot edited a comment on pull request #12973:
URL: https://github.com/apache/flink/pull/12973#issuecomment-663126564


   
   ## CI report:
   
   * fc19a7ebd0a13f9d4335a51d6b633f72ff03b28a Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4832)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12969: [BP-1.11][FLINK-16827][table-planner-blink] StreamExecTemporalSort should requ…

2020-07-23 Thread GitBox


flinkbot edited a comment on pull request #12969:
URL: https://github.com/apache/flink/pull/12969#issuecomment-663025372


   
   ## CI report:
   
   * 3e165c51359e78bf3a59aaef69a8585fb3331e57 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4820)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12906: [FLINK-18606][java-streaming] Remove unused generic parameter from SinkFunction.Context

2020-07-23 Thread GitBox


flinkbot edited a comment on pull request #12906:
URL: https://github.com/apache/flink/pull/12906#issuecomment-658724114


   
   ## CI report:
   
   * c7da082db0dd1f9e73412c2d399dbfb4144abe88 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4819)
 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4672)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12920: [FLINK-18536][kinesis] Adding enhanced fan-out related configurations.

2020-07-23 Thread GitBox


flinkbot edited a comment on pull request #12920:
URL: https://github.com/apache/flink/pull/12920#issuecomment-659996635


   
   ## CI report:
   
   * 68304c8bc1e5ccae037269302bd4c15ea41dc7a8 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4816)
 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4679)
 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4815)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] zhuzhurk commented on a change in pull request #12917: [WIP] [FLINK-18355][tests] Simplify tests of SlotPoolImpl

2020-07-23 Thread GitBox


zhuzhurk commented on a change in pull request #12917:
URL: https://github.com/apache/flink/pull/12917#discussion_r459635923



##
File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/jobmaster/slotpool/SlotPoolImplTest.java
##
@@ -245,27 +226,25 @@ public void testAllocateWithFreeSlot() throws Exception {
 
assertTrue(slotPool.offerSlot(taskManagerLocation, 
taskManagerGateway, slotOffer));
 
-   LogicalSlot slot1 = future1.get(1, TimeUnit.SECONDS);
+   PhysicalSlot slot1 = future1.get(1, TimeUnit.SECONDS);
assertTrue(future1.isDone());
 
// return this slot to pool
-   slot1.releaseSlot();
+   slotPool.releaseSlot(requestId1, null);

Review comment:
   Looks to me there is no need to go through allocating slot1, offering the slot 
and then releasing it.
   Offering the slot alone would be enough to add a free slot.
   I think we can simplify this, maybe in a separate commit.

##
File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/jobmaster/slotpool/SlotPoolImplTest.java
##
@@ -123,17 +119,11 @@ public void testAllocateSimpleSlot() throws Exception {
CompletableFuture slotRequestFuture = new 
CompletableFuture<>();

resourceManagerGateway.setRequestSlotConsumer(slotRequestFuture::complete);
 
-   try (SlotPoolImpl slotPool = createSlotPoolImpl()) {
-   setupSlotPool(slotPool, resourceManagerGateway, 
mainThreadExecutor);
-   Scheduler scheduler = setupScheduler(slotPool, 
mainThreadExecutor);
+   try (SlotPoolImpl slotPool = createAndSetUpSlotPool()) {

slotPool.registerTaskManager(taskManagerLocation.getResourceID());
 
SlotRequestId requestId = new SlotRequestId();
-   CompletableFuture future = 
scheduler.allocateSlot(
-   requestId,
-   new DummyScheduledUnit(),
-   SlotProfile.noLocality(DEFAULT_TESTING_PROFILE),
-   timeout);
+   CompletableFuture future = 
requestNewAllocatedSlot(slotPool, requestId);

Review comment:
   can be final

##
File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/jobmaster/slotpool/SlotPoolImplTest.java
##
@@ -145,10 +135,10 @@ public void testAllocateSimpleSlot() throws Exception {
 
assertTrue(slotPool.offerSlot(taskManagerLocation, 
taskManagerGateway, slotOffer));
 
-   LogicalSlot slot = future.get(1, TimeUnit.SECONDS);
+   PhysicalSlot physicalSlot = future.get(1, 
TimeUnit.SECONDS);

Review comment:
   can be final

##
File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/jobmaster/slotpool/SlotPoolImplTest.java
##
@@ -275,16 +254,14 @@ public void testOfferSlot() throws Exception {
 

resourceManagerGateway.setRequestSlotConsumer(slotRequestFuture::complete);
 
-   try (SlotPoolImpl slotPool = createSlotPoolImpl()) {
-   setupSlotPool(slotPool, resourceManagerGateway, 
mainThreadExecutor);
-   Scheduler scheduler = setupScheduler(slotPool, 
mainThreadExecutor);
+   try (SlotPoolImpl slotPool = createAndSetUpSlotPool()) {

slotPool.registerTaskManager(taskManagerLocation.getResourceID());
 
-   CompletableFuture future = 
scheduler.allocateSlot(
-   new SlotRequestId(),
-   new DummyScheduledUnit(),
-   SlotProfile.noLocality(DEFAULT_TESTING_PROFILE),
-   timeout);
+   SlotRequestId requestId = new SlotRequestId();
+   CompletableFuture future = 
requestNewAllocatedSlot(
+   slotPool,
+   requestId
+   );

Review comment:
   ```suggestion
CompletableFuture future = 
requestNewAllocatedSlot(
slotPool,
new SlotRequestId()
);
   ```
   requestId is not reused

##
File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/jobmaster/slotpool/SlotPoolImplTest.java
##
@@ -893,14 +866,6 @@ protected boolean matchesSafely(AllocatedSlotInfo item, 
Description mismatchDesc
};
}
 
-   private CompletableFuture allocateSlot(Scheduler 
scheduler, SlotRequestId slotRequestId) {
-   return scheduler.allocateSlot(
-   slotRequestId,
-   new DummyScheduledUnit(),
-   

[GitHub] [flink] kl0u commented on a change in pull request #12791: [FLINK-18362][FLINK-13838][yarn] Add yarn.ship-archives to support LocalResourceType.ARCHIVE

2020-07-23 Thread GitBox


kl0u commented on a change in pull request #12791:
URL: https://github.com/apache/flink/pull/12791#discussion_r459662195



##
File path: 
flink-yarn-tests/src/test/java/org/apache/flink/yarn/testjob/YarnTestArchiveJob.java
##
@@ -0,0 +1,137 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.yarn.testjob;
+
+import org.apache.flink.api.common.typeinfo.TypeInformation;
+import org.apache.flink.api.java.typeutils.ResultTypeQueryable;
+import org.apache.flink.configuration.Configuration;
+import org.apache.flink.runtime.jobgraph.JobGraph;
+import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
+import org.apache.flink.streaming.api.functions.sink.DiscardingSink;
+import org.apache.flink.streaming.api.functions.source.RichSourceFunction;
+import org.apache.flink.yarn.configuration.YarnConfigOptions;
+
+import org.apache.flink.shaded.guava18.com.google.common.collect.ImmutableList;
+
+import org.apache.commons.compress.archivers.tar.TarArchiveEntry;
+import org.apache.commons.compress.archivers.tar.TarArchiveOutputStream;
+import org.apache.commons.compress.compressors.gzip.GzipCompressorOutputStream;
+import org.apache.commons.compress.utils.IOUtils;
+
+import java.io.BufferedInputStream;
+import java.io.BufferedOutputStream;
+import java.io.File;
+import java.io.FileInputStream;
+import java.io.FileOutputStream;
+import java.io.IOException;
+import java.nio.file.Files;
+import java.nio.file.Paths;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+import static org.apache.flink.util.Preconditions.checkArgument;
+
+/**
+ * Testing job for localizing resources of LocalResourceType.ARCHIVE in per 
job cluster mode.
+ */
+public class YarnTestArchiveJob {
+   private static final List<String> LIST = ImmutableList.of("test1", 
"test2");
+
+   private static final Map<String, String> srcFiles = new HashMap<String, String>() {{
+   put("local1.txt", "Local text Content1");
+   put("local2.txt", "Local text Content2");
+   }};
+
+   private static void archiveFilesInDirectory(File directory, String 
target) throws IOException {
+
+   for (Map.Entry<String, String> entry : srcFiles.entrySet()) {
+   Files.write(Paths.get(directory.getAbsolutePath() + 
File.separator + entry.getKey()),
+   entry.getValue().getBytes());
+   }
+
+   try (FileOutputStream fos = new FileOutputStream(target);
+   GzipCompressorOutputStream gos = new 
GzipCompressorOutputStream(new BufferedOutputStream(fos));
+   TarArchiveOutputStream taros = new 
TarArchiveOutputStream(gos)) {
+   
taros.setLongFileMode(TarArchiveOutputStream.LONGFILE_GNU);
+   for (File f : directory.listFiles()) {
+   taros.putArchiveEntry(new TarArchiveEntry(f,
+   directory.getName() + File.separator + 
f.getName()));
+   try (FileInputStream fis = new 
FileInputStream(f);
+   BufferedInputStream bis = new 
BufferedInputStream(fis)) {
+   IOUtils.copy(bis, taros);
+   taros.closeArchiveEntry();
+   }
+   }
+   }
+   }
+
+   public static JobGraph getArchiveJobGraph(File testDirectory, 
Configuration config) throws IOException {
+
+   final String archive = 
testDirectory.getAbsolutePath().concat(".tar.gz");
+   final String localizedPath = 
testDirectory.getName().concat(".tar.gz") + File.separator + 
testDirectory.getName();
+
+   archiveFilesInDirectory(testDirectory, archive);
+
+   final StreamExecutionEnvironment env = 
StreamExecutionEnvironment.getExecutionEnvironment();
+
+   env.addSource(new SourceFunctionWithArchive(LIST, 
localizedPath, TypeInformation.of(String.class)))
+   .setParallelism(1)
+   .addSink(new 
