[GitHub] [flink] flinkbot edited a comment on pull request #14675: [FLINK-18383][docs]translate "JDBC SQL Connector" documentation into …

2021-01-20 Thread GitBox


flinkbot edited a comment on pull request #14675:
URL: https://github.com/apache/flink/pull/14675#issuecomment-761769906


   
   ## CI report:
   
   * f5b73f302997a6f8e5f211efea381a513218145d Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=12289)
   * 76e878bbb4c4e2d71523a5a2eb993d083a4eb0c5 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=12295)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] RocMarshal removed a comment on pull request #14702: [FLINK-21042][docs] Correct the error in the code example in section 'aggregate-functions' of table-page 'udfs'.

2021-01-20 Thread GitBox


RocMarshal removed a comment on pull request #14702:
URL: https://github.com/apache/flink/pull/14702#issuecomment-763384117


   @appleyuchi Hi, could you help me review this PR if you have any free time? Thanks!







[GitHub] [flink] RocMarshal removed a comment on pull request #14671: [hotfix][docs] Fix typo in 'docs/dev/table/functions/udfs.md & udfs.zh.md'.

2021-01-20 Thread GitBox


RocMarshal removed a comment on pull request #14671:
URL: https://github.com/apache/flink/pull/14671#issuecomment-762580819


   @appleyuchi Hi, could you help me review this PR if you have any free time? Thanks!







[GitHub] [flink] flinkbot edited a comment on pull request #14659: [FLINK-20931][coordination] Remove globalModVersion from ExecutionGraph

2021-01-20 Thread GitBox


flinkbot edited a comment on pull request #14659:
URL: https://github.com/apache/flink/pull/14659#issuecomment-760700709


   
   ## CI report:
   
   * cf3f4fd97e1252a06a982cdefafaf7cf9012f0b2 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=12288)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] wangyang0918 commented on pull request #14692: [FLINK-20944][k8s] Do not resolve the rest endpoint address when the service exposed type is ClusterIP

2021-01-20 Thread GitBox


wangyang0918 commented on pull request #14692:
URL: https://github.com/apache/flink/pull/14692#issuecomment-763706950


   cc @tillrohrmann I have updated this PR.
   
   After this change, we can start a Flink application/session cluster on native K8s from inside or outside the Kubernetes cluster when using `ClusterIP`, and the client will log the following:
   
   ```
   ... ...
   2021-01-20 23:10:21,045 WARN  org.apache.flink.kubernetes.KubernetesClusterDescriptor  [] - Please note that Flink client operation(e.g. cancel, list, stop, savepoint, etc.) won't work from outside the Kubernetes cluster.
   2021-01-20 23:10:21,338 INFO  org.apache.flink.kubernetes.KubernetesClusterDescriptor  [] - Create flink application cluster k8s-ha-app-1 successfully, JobManager Web Interface: http://k8s-ha-app-1-rest.default:8081
   ```







[GitHub] [flink] RocMarshal commented on pull request #14702: [FLINK-21042][docs] Correct the error in the code example in section 'aggregate-functions' of table-page 'udfs'.

2021-01-20 Thread GitBox


RocMarshal commented on pull request #14702:
URL: https://github.com/apache/flink/pull/14702#issuecomment-763734982


   @wuchong Could you help me review this PR? Thanks.







[GitHub] [flink] flinkbot edited a comment on pull request #14675: [FLINK-18383][docs]translate "JDBC SQL Connector" documentation into …

2021-01-20 Thread GitBox


flinkbot edited a comment on pull request #14675:
URL: https://github.com/apache/flink/pull/14675#issuecomment-761769906


   
   ## CI report:
   
   * cb14f11086c52f0e1b432181decc1f232599fa2b Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=12283)
   * f5b73f302997a6f8e5f211efea381a513218145d Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=12289)
   * 76e878bbb4c4e2d71523a5a2eb993d083a4eb0c5 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #14692: [FLINK-20944][k8s] Do not resolve the rest endpoint address when the service exposed type is ClusterIP

2021-01-20 Thread GitBox


flinkbot edited a comment on pull request #14692:
URL: https://github.com/apache/flink/pull/14692#issuecomment-762669595


   
   ## CI report:
   
   * 08992db9099458db6540eb207cb924faac072617 UNKNOWN
   * df11e9635734c1cd78788093cd45d5a32c78db66 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=12293)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #14675: [FLINK-18383][docs]translate "JDBC SQL Connector" documentation into …

2021-01-20 Thread GitBox


flinkbot edited a comment on pull request #14675:
URL: https://github.com/apache/flink/pull/14675#issuecomment-761769906


   
   ## CI report:
   
   * cb14f11086c52f0e1b432181decc1f232599fa2b Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=12283)
   * f5b73f302997a6f8e5f211efea381a513218145d Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=12289)
   * 76e878bbb4c4e2d71523a5a2eb993d083a4eb0c5 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=12295)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] tillrohrmann commented on a change in pull request #14692: [FLINK-20944][k8s] Do not resolve the rest endpoint address when the service exposed type is ClusterIP

2021-01-20 Thread GitBox


tillrohrmann commented on a change in pull request #14692:
URL: https://github.com/apache/flink/pull/14692#discussion_r561147706



##
File path: flink-kubernetes/src/main/java/org/apache/flink/kubernetes/KubernetesClusterDescriptor.java
##
@@ -112,18 +112,27 @@ public String getClusterDescription() {
                 return new RestClusterClient<>(
                         configuration,
                         clusterId,
-                        new StandaloneClientHAServices(
-                                HighAvailabilityServicesUtils.getWebMonitorAddress(
-                                        configuration,
-                                        HighAvailabilityServicesUtils.AddressResolution
-                                                .TRY_ADDRESS_RESOLUTION)));
+                        new StandaloneClientHAServices(getWebMonitorAddress(configuration)));
             } catch (Exception e) {
                 throw new RuntimeException(
                         new ClusterRetrieveException("Could not create the RestClusterClient.", e));
             }
         };
     }
 
+    private String getWebMonitorAddress(Configuration configuration) throws Exception {
+        HighAvailabilityServicesUtils.AddressResolution resolution =
+                HighAvailabilityServicesUtils.AddressResolution.TRY_ADDRESS_RESOLUTION;
+        if (configuration.get(KubernetesConfigOptions.REST_SERVICE_EXPOSED_TYPE)
+                == KubernetesConfigOptions.ServiceExposedType.ClusterIP) {
+            resolution = HighAvailabilityServicesUtils.AddressResolution.NO_ADDRESS_RESOLUTION;
+            LOG.warn(
+                    "Please note that Flink client operation(e.g. cancel, list, stop,"
+                            + " savepoint, etc.) won't work from outside the Kubernetes cluster.");
Review comment:
   Let's add that this is due to having chosen `ClusterIP` for the `KubernetesConfigOptions.REST_SERVICE_EXPOSED_TYPE`.
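The reviewer's suggestion could be folded into the warning roughly as follows. This is a hypothetical sketch, not the actual follow-up commit: the helper class below stands in for the real logger and `KubernetesConfigOptions` classes, though the option key `kubernetes.rest-service.exposed.type` is Flink's real key for `REST_SERVICE_EXPOSED_TYPE`.

```java
// Sketch of the reworded warning: mention that the limitation stems from
// having chosen ClusterIP for the rest service exposed type.
public class RestExposedTypeWarning {

    // Mirrors KubernetesConfigOptions.REST_SERVICE_EXPOSED_TYPE.key().
    static final String OPTION_KEY = "kubernetes.rest-service.exposed.type";

    /** Returns the warning text for ClusterIP, or an empty string otherwise. */
    public static String warningFor(String exposedType) {
        if (!"ClusterIP".equals(exposedType)) {
            return "";
        }
        return String.format(
                "Please note that Flink client operations (e.g. cancel, list, stop, "
                        + "savepoint) won't work from outside the Kubernetes cluster "
                        + "since '%s' has been set to %s.",
                OPTION_KEY, exposedType);
    }

    public static void main(String[] args) {
        System.out.println(warningFor("ClusterIP"));
    }
}
```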









[GitHub] [flink] flinkbot edited a comment on pull request #14692: [FLINK-20944][k8s] Do not resolve the rest endpoint address when the service exposed type is ClusterIP

2021-01-20 Thread GitBox


flinkbot edited a comment on pull request #14692:
URL: https://github.com/apache/flink/pull/14692#issuecomment-762669595


   
   ## CI report:
   
   * 5f74749ef6ff2e0092a6b563a0261fe234a54fbc Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=12236)
   * 08992db9099458db6540eb207cb924faac072617 UNKNOWN
   * df11e9635734c1cd78788093cd45d5a32c78db66 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #14704: [FLINK-21044][connectors/jdbc] Use more random bytes in Xid

2021-01-20 Thread GitBox


flinkbot edited a comment on pull request #14704:
URL: https://github.com/apache/flink/pull/14704#issuecomment-763375645


   
   ## CI report:
   
   * 22d5bc0d99357efcaee56367171e73f6779fc235 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=12290)
   * 9c22c37e550ce6df617bd3c5f2091d7d10aa62ad UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #14704: [FLINK-21044][connectors/jdbc] Use more random bytes in Xid

2021-01-20 Thread GitBox


flinkbot edited a comment on pull request #14704:
URL: https://github.com/apache/flink/pull/14704#issuecomment-763375645


   
   ## CI report:
   
   * 22d5bc0d99357efcaee56367171e73f6779fc235 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=12290)
   * 9c22c37e550ce6df617bd3c5f2091d7d10aa62ad Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=12294)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #13797: [FLINK-19465][runtime / statebackends] Add CheckpointStorage interface and wire through runtime

2021-01-20 Thread GitBox


flinkbot edited a comment on pull request #13797:
URL: https://github.com/apache/flink/pull/13797#issuecomment-716854695


   
   ## CI report:
   
   * c06b3dcd4eb4045f66352c8b65c14a7b60163ecd UNKNOWN
   * 185e46fd046c5f39a87ac3583ad559b065a275e1 UNKNOWN
   * 06f933a05574758aa9c2472f7650b36f802e79cf Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=12292)
   * e59969cd8feed88248332fd4c2b63ff873bb093a UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] alpinegizmo commented on pull request #14698: [hotfix][javadoc] Correct the wrong spelling 'perserves' to 'preserves'.

2021-01-20 Thread GitBox


alpinegizmo commented on pull request #14698:
URL: https://github.com/apache/flink/pull/14698#issuecomment-763839952


   This is fine. Merging ...







[GitHub] [flink] alpinegizmo merged pull request #14698: [hotfix][javadoc] Correct the wrong spelling 'perserves' to 'preserves'.

2021-01-20 Thread GitBox


alpinegizmo merged pull request #14698:
URL: https://github.com/apache/flink/pull/14698


   







[GitHub] [flink-web] xintongsong opened a new pull request #413: Add Apache Flink release 1.10.3

2021-01-20 Thread GitBox


xintongsong opened a new pull request #413:
URL: https://github.com/apache/flink-web/pull/413


   







[GitHub] [flink] flinkbot edited a comment on pull request #14692: [FLINK-20944][k8s] Do not resolve the rest endpoint address when the service exposed type is ClusterIP

2021-01-20 Thread GitBox


flinkbot edited a comment on pull request #14692:
URL: https://github.com/apache/flink/pull/14692#issuecomment-762669595


   
   ## CI report:
   
   * 5f74749ef6ff2e0092a6b563a0261fe234a54fbc Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=12236)
   * 08992db9099458db6540eb207cb924faac072617 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #14692: [FLINK-20944][k8s] Do not resolve the rest endpoint address when the service exposed type is ClusterIP

2021-01-20 Thread GitBox


flinkbot edited a comment on pull request #14692:
URL: https://github.com/apache/flink/pull/14692#issuecomment-762669595


   
   ## CI report:
   
   * 5f74749ef6ff2e0092a6b563a0261fe234a54fbc Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=12236)
   * 08992db9099458db6540eb207cb924faac072617 UNKNOWN
   * df11e9635734c1cd78788093cd45d5a32c78db66 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=12293)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #14705: [FLINK-21019] Bump Netty 4 to 4.1.46

2021-01-20 Thread GitBox


flinkbot edited a comment on pull request #14705:
URL: https://github.com/apache/flink/pull/14705#issuecomment-763437202


   
   ## CI report:
   
   * 750cdaae9bc4a432e8d061413ea7011270a3a326 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=12291)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[jira] [Commented] (FLINK-21036) Consider removing automatic configuration of number of slots from docker

2021-01-20 Thread Till Rohrmann (Jira)


[ https://issues.apache.org/jira/browse/FLINK-21036?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17268723#comment-17268723 ]

Till Rohrmann commented on FLINK-21036:
---

I am also in favour of setting the number of slots to {{1}} if it has not been specified differently by the caller. Ideally, this should happen via a configuration option.

> Consider removing automatic configuration of number of slots from docker
> 
>
> Key: FLINK-21036
> URL: https://issues.apache.org/jira/browse/FLINK-21036
> Project: Flink
>  Issue Type: Improvement
>  Components: Deployment / Scripts
>Reporter: Chesnay Schepler
>Priority: Major
> Fix For: 1.13.0
>
>
> The {{docker-entrypoint.sh}} supports setting the number of task slots via 
> the {{TASK_MANAGER_NUMBER_OF_TASK_SLOTS}} environment variable, which 
> defaults to the number of cpu cores via {{$(grep -c ^processor 
> /proc/cpuinfo)}}.
> The environment variable itself is redundant nowadays since we introduced 
> {{FLINK_PROPERTIES}}, and is no longer documented.
> Defaulting to the number of CPU cores can be considered convenience, but it 
> seems odd to have this specific to docker while the distribution defaults to 
> {{1}}.
> The bigger issue in my mind though is that this creates a configuration 
> mismatch between the Job- and TaskManager processes; the ResourceManager 
> specifically needs to know how many slots a worker has to make decisions 
> about redundancy and allocating resources.
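As a concrete illustration of the discussion above, the slot count can already be pinned explicitly through the `FLINK_PROPERTIES` mechanism mentioned in the issue, rather than inherited from the container's CPU count. A minimal sketch; `taskmanager.numberOfTaskSlots` is Flink's standard option key, while the surrounding invocation is illustrative:

```yaml
# Fragment appended to flink-conf.yaml by docker-entrypoint.sh when passed
# via the FLINK_PROPERTIES environment variable, e.g.
#   docker run -e FLINK_PROPERTIES="taskmanager.numberOfTaskSlots: 1" flink taskmanager
# Pinning the value avoids the CPU-count default discussed above.
taskmanager.numberOfTaskSlots: 1
```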



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #13797: [FLINK-19465][runtime / statebackends] Add CheckpointStorage interface and wire through runtime

2021-01-20 Thread GitBox


flinkbot edited a comment on pull request #13797:
URL: https://github.com/apache/flink/pull/13797#issuecomment-716854695


   
   ## CI report:
   
   * c06b3dcd4eb4045f66352c8b65c14a7b60163ecd UNKNOWN
   * 185e46fd046c5f39a87ac3583ad559b065a275e1 UNKNOWN
   * e59969cd8feed88248332fd4c2b63ff873bb093a Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=12296)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] RocMarshal commented on pull request #14698: [hotfix][javadoc] Correct the wrong spelling 'perserves' to 'preserves'.

2021-01-20 Thread GitBox


RocMarshal commented on pull request #14698:
URL: https://github.com/apache/flink/pull/14698#issuecomment-763736003


   @alpinegizmo Could you help me review this PR? Thanks.







[GitHub] [flink] flinkbot edited a comment on pull request #13797: [FLINK-19465][runtime / statebackends] Add CheckpointStorage interface and wire through runtime

2021-01-20 Thread GitBox


flinkbot edited a comment on pull request #13797:
URL: https://github.com/apache/flink/pull/13797#issuecomment-716854695


   
   ## CI report:
   
   * c06b3dcd4eb4045f66352c8b65c14a7b60163ecd UNKNOWN
   * 185e46fd046c5f39a87ac3583ad559b065a275e1 UNKNOWN
   * e59969cd8feed88248332fd4c2b63ff873bb093a Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=12296)
   * 374508f59bd49283e374899444597ca761ce49c2 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #13797: [FLINK-19465][runtime / statebackends] Add CheckpointStorage interface and wire through runtime

2021-01-20 Thread GitBox


flinkbot edited a comment on pull request #13797:
URL: https://github.com/apache/flink/pull/13797#issuecomment-716854695


   
   ## CI report:
   
   * c06b3dcd4eb4045f66352c8b65c14a7b60163ecd UNKNOWN
   * 185e46fd046c5f39a87ac3583ad559b065a275e1 UNKNOWN
   * e59969cd8feed88248332fd4c2b63ff873bb093a Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=12296)
   * 374508f59bd49283e374899444597ca761ce49c2 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=12298)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[jira] [Assigned] (FLINK-20676) Remove deprecated command "native-k8s" in docker-entrypoint.sh

2021-01-20 Thread Chesnay Schepler (Jira)


 [ https://issues.apache.org/jira/browse/FLINK-20676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Chesnay Schepler reassigned FLINK-20676:


Assignee: Chesnay Schepler

> Remove deprecated command "native-k8s" in docker-entrypoint.sh
> --
>
> Key: FLINK-20676
> URL: https://issues.apache.org/jira/browse/FLINK-20676
> Project: Flink
>  Issue Type: Improvement
>  Components: flink-docker
>Reporter: Yang Wang
>Assignee: Chesnay Schepler
>Priority: Blocker
> Fix For: 1.13.0
>
>
> In FLINK-20650, we have marked "native-k8s" as deprecated in 
> docker-entrypoint.sh and set the environments for native modes for all 
> pass-through commands. The deprecated command "native-k8s" should be removed 
> in the next major release (1.13).





[GitHub] [flink] flinkbot edited a comment on pull request #14704: [FLINK-21044][connectors/jdbc] Use more random bytes in Xid

2021-01-20 Thread GitBox


flinkbot edited a comment on pull request #14704:
URL: https://github.com/apache/flink/pull/14704#issuecomment-763375645


   
   ## CI report:
   
   * 9c22c37e550ce6df617bd3c5f2091d7d10aa62ad Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=12294)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink-docker] zentol opened a new pull request #61: [FLINK-20676] Remove deprecated 'native-k8s' command

2021-01-20 Thread GitBox


zentol opened a new pull request #61:
URL: https://github.com/apache/flink-docker/pull/61


   







[jira] [Closed] (FLINK-20915) Move docker-entrypoint.sh logic into distribution

2021-01-20 Thread Chesnay Schepler (Jira)


 [ https://issues.apache.org/jira/browse/FLINK-20915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Chesnay Schepler closed FLINK-20915.

Fix Version/s: (was: 1.12.2)
   (was: 1.13.0)
   Resolution: Won't Fix

Going for an alternative approach.

> Move docker-entrypoint.sh logic into distribution
> -
>
> Key: FLINK-20915
> URL: https://issues.apache.org/jira/browse/FLINK-20915
> Project: Flink
>  Issue Type: Improvement
>  Components: Deployment / Scripts
>Reporter: Chesnay Schepler
>Assignee: Chesnay Schepler
>Priority: Major
>  Labels: pull-request-available
>
> Copy most of the existing docker-entrypoint.sh into the distribution and 
> defer to it from the docker image. This will make it easier for us to align 
> the distribution and docker entrypoints, hopefully preventing disasters like 
> the 1.12 release.
> The only things remaining in the image are things that are intrinsically tied 
> to the image, like the jmalloc switch.
> Over time we should look into reducing the amount of docker-specific logic in 
> this script.
> For example,
> * setting various config options at startup could be moved into the image 
> creation by modifying flink-conf.yaml right away (in a way, creating a custom 
> docker distribution)
> * docker-specific environment variables (for plugins and the number of slots 
> for task executors) could be supported in all modes





[GitHub] [flink] flinkbot edited a comment on pull request #14675: [FLINK-18383][docs]translate "JDBC SQL Connector" documentation into …

2021-01-20 Thread GitBox


flinkbot edited a comment on pull request #14675:
URL: https://github.com/apache/flink/pull/14675#issuecomment-761769906


   
   ## CI report:
   
   * 76e878bbb4c4e2d71523a5a2eb993d083a4eb0c5 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=12295)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink-docker] zentol closed pull request #54: [FLINK-20915][docker] Move docker entrypoint to distribution

2021-01-20 Thread GitBox


zentol closed pull request #54:
URL: https://github.com/apache/flink-docker/pull/54


   







[jira] [Updated] (FLINK-20676) Remove deprecated command "native-k8s" in docker-entrypoint.sh

2021-01-20 Thread ASF GitHub Bot (Jira)


 [ https://issues.apache.org/jira/browse/FLINK-20676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

ASF GitHub Bot updated FLINK-20676:
---
Labels: pull-request-available  (was: )

> Remove deprecated command "native-k8s" in docker-entrypoint.sh
> --
>
> Key: FLINK-20676
> URL: https://issues.apache.org/jira/browse/FLINK-20676
> Project: Flink
>  Issue Type: Improvement
>  Components: flink-docker
>Reporter: Yang Wang
>Assignee: Chesnay Schepler
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.13.0
>
>
> In FLINK-20650, we have marked "native-k8s" as deprecated in 
> docker-entrypoint.sh and set the environments for native modes for all 
> pass-through commands. The deprecated command "native-k8s" should be removed 
> in the next major release (1.13).





[GitHub] [flink-docker] zentol merged pull request #57: [FLINK-21034] Refactor jemalloc switch to environment variable

2021-01-20 Thread GitBox


zentol merged pull request #57:
URL: https://github.com/apache/flink-docker/pull/57


   







[GitHub] [flink] flinkbot edited a comment on pull request #13797: [FLINK-19465][runtime / statebackends] Add CheckpointStorage interface and wire through runtime

2021-01-20 Thread GitBox


flinkbot edited a comment on pull request #13797:
URL: https://github.com/apache/flink/pull/13797#issuecomment-716854695


   
   ## CI report:
   
   * c06b3dcd4eb4045f66352c8b65c14a7b60163ecd UNKNOWN
   * 185e46fd046c5f39a87ac3583ad559b065a275e1 UNKNOWN
   * 374508f59bd49283e374899444597ca761ce49c2 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=12298)
 
   * e6bf6df6365fc746d7054553b252a35c0a324a15 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=12302)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] zentol merged pull request #14705: [FLINK-21019] Bump Netty 4 to 4.1.46

2021-01-20 Thread GitBox


zentol merged pull request #14705:
URL: https://github.com/apache/flink/pull/14705


   







[jira] [Closed] (FLINK-21019) Bump Netty 4 to 4.1.58

2021-01-20 Thread Chesnay Schepler (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21019?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chesnay Schepler closed FLINK-21019.

Resolution: Fixed

master:
562d9fab85960214fa20b84b19003cfad71de04b..e687629a31b4637761d76c4859f8e65e370e55c2

> Bump Netty 4 to 4.1.58
> --
>
> Key: FLINK-21019
> URL: https://issues.apache.org/jira/browse/FLINK-21019
> Project: Flink
>  Issue Type: Improvement
>  Components: API / Python, Connectors / Cassandra, Connectors / 
> ElasticSearch, Connectors / HBase
>Reporter: Dian Fu
>Assignee: Huang Xingbo
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.13.0
>
>
> Our current Netty version (4.1.44) is vulnerable for at least this CVE:
> [https://nvd.nist.gov/vuln/detail/CVE-2020-11612]
> Bumping to 4.1.46+ should solve it.





[GitHub] [flink] flinkbot edited a comment on pull request #13797: [FLINK-19465][runtime / statebackends] Add CheckpointStorage interface and wire through runtime

2021-01-20 Thread GitBox


flinkbot edited a comment on pull request #13797:
URL: https://github.com/apache/flink/pull/13797#issuecomment-716854695


   
   ## CI report:
   
   * c06b3dcd4eb4045f66352c8b65c14a7b60163ecd UNKNOWN
   * 185e46fd046c5f39a87ac3583ad559b065a275e1 UNKNOWN
   * 374508f59bd49283e374899444597ca761ce49c2 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=12298)
 
   * e6bf6df6365fc746d7054553b252a35c0a324a15 UNKNOWN
   
   







[jira] [Updated] (FLINK-21036) Consider removing automatic configuration of number of slots from docker

2021-01-20 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21036?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-21036:
---
Labels: pull-request-available  (was: )

> Consider removing automatic configuration of number of slots from docker
> 
>
> Key: FLINK-21036
> URL: https://issues.apache.org/jira/browse/FLINK-21036
> Project: Flink
>  Issue Type: Improvement
>  Components: Deployment / Scripts
>Reporter: Chesnay Schepler
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.13.0
>
>
> The {{docker-entrypoint.sh}} supports setting the number of task slots via 
> the {{TASK_MANAGER_NUMBER_OF_TASK_SLOTS}} environment variable, which 
> defaults to the number of cpu cores via {{$(grep -c ^processor 
> /proc/cpuinfo)}}.
> The environment variable itself is redundant nowadays since we introduced 
> {{FLINK_PROPERTIES}}, and is no longer documented.
> Defaulting to the number of CPU cores can be considered a convenience, but it 
> seems odd to have this specific to docker while the distribution defaults to 
> {{1}}.
> The bigger issue in my mind though is that this creates a configuration 
> mismatch between the Job- and TaskManager processes; the ResourceManager 
> specifically needs to know how many slots a worker has to make decisions 
> about redundancy and allocating resources.
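The mismatch of defaults described above can be sketched in a few lines (an illustration only, with invented names; not the actual entrypoint script or Flink code):

```java
// Hypothetical illustration of the two defaults the issue contrasts:
// the docker entrypoint derived the slot count from the number of CPU
// cores, while the distribution defaults taskmanager.numberOfTaskSlots to 1.
class SlotDefaults {
    // Roughly what `grep -c ^processor /proc/cpuinfo` yields in docker-entrypoint.sh.
    static int dockerDefault() {
        return Runtime.getRuntime().availableProcessors();
    }

    // The default number of task slots shipped with the distribution.
    static int distributionDefault() {
        return 1;
    }
}
```

On any multi-core host the two methods disagree, which is exactly the Job-/TaskManager configuration mismatch the issue points out.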





[GitHub] [flink-docker] zentol opened a new pull request #60: [FLINK-21036] Remove automatic configuration of number of slots

2021-01-20 Thread GitBox


zentol opened a new pull request #60:
URL: https://github.com/apache/flink-docker/pull/60


   Removes an odd special code path for taskmanagers started in docker.







[GitHub] [flink] flinkbot edited a comment on pull request #14692: [FLINK-20944][k8s] Do not resolve the rest endpoint address when the service exposed type is ClusterIP

2021-01-20 Thread GitBox


flinkbot edited a comment on pull request #14692:
URL: https://github.com/apache/flink/pull/14692#issuecomment-762669595


   
   ## CI report:
   
   * 08992db9099458db6540eb207cb924faac072617 UNKNOWN
   * 05f8747452c37c1a52e2df2692f530ca2afeee1e Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=12309)
 
   
   







[jira] [Updated] (FLINK-21058) Add SHOW PARTITION syntax support for flink

2021-01-20 Thread Jira


 [ 
https://issues.apache.org/jira/browse/FLINK-21058?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

孙铮 updated FLINK-21058:
---
Summary: Add SHOW PARTITION syntax support for flink  (was: The Show 
Partitinos command is supported under the Flink dialect to show all partitions)

> Add SHOW PARTITION syntax support for flink
> ---
>
> Key: FLINK-21058
> URL: https://issues.apache.org/jira/browse/FLINK-21058
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / Planner
>Affects Versions: 1.12.0
>Reporter: 孙铮
>Priority: Minor
> Fix For: 1.13.0
>
>
> Support the SHOW PARTITIONS command under the Flink dialect to show all 
> partitions.





[GitHub] [flink] flinkbot edited a comment on pull request #14699: [FLINK-21011][table-planner-blink] Separate implementation of StreamExecIntervalJoin

2021-01-20 Thread GitBox


flinkbot edited a comment on pull request #14699:
URL: https://github.com/apache/flink/pull/14699#issuecomment-763287840


   
   ## CI report:
   
   * c463989c4c6afdc6a85485822d1350f6ab771a30 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=12304)
 
   * 316311b1eb3e13ae6ececc71d75c02b95403214d UNKNOWN
   
   







[GitHub] [flink] zhuzhurk merged pull request #14659: [FLINK-20931][coordination] Remove globalModVersion from ExecutionGraph

2021-01-20 Thread GitBox


zhuzhurk merged pull request #14659:
URL: https://github.com/apache/flink/pull/14659


   







[jira] [Updated] (FLINK-21059) KafkaSourceEnumerator does not honor consumer properties

2021-01-20 Thread Thomas Weise (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21059?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Weise updated FLINK-21059:
-
Description: Enumerator fails when SSL is required because the user 
provided properties are not used to construct the admin client. Empty 
properties are created and all provided properties except bootstrap.servers are 
ignored.  (was: Enumerator fails when SSL is required because the user provided 
properties are not used to construct the admin client.
)

> KafkaSourceEnumerator does not honor consumer properties
> 
>
> Key: FLINK-21059
> URL: https://issues.apache.org/jira/browse/FLINK-21059
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.13.0
>Reporter: Thomas Weise
>Assignee: Thomas Weise
>Priority: Major
>
> Enumerator fails when SSL is required because the user provided properties 
> are not used to construct the admin client. Empty properties are created and 
> all provided properties except bootstrap.servers are ignored.
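The bug pattern described here can be illustrated with plain `java.util.Properties` (class and method names are invented for illustration; this is not the actual enumerator code):

```java
import java.util.Properties;

// Hypothetical sketch of the bug: building the admin client config from
// scratch drops user-provided settings such as SSL options, while copying
// the full user properties keeps them.
class EnumeratorProps {
    // Buggy pattern: only bootstrap.servers survives.
    static Properties buggyAdminConfig(Properties userProps) {
        Properties adminProps = new Properties();
        adminProps.setProperty("bootstrap.servers",
                userProps.getProperty("bootstrap.servers"));
        return adminProps;
    }

    // Fixed pattern: honor all user-provided consumer properties.
    static Properties fixedAdminConfig(Properties userProps) {
        Properties adminProps = new Properties();
        adminProps.putAll(userProps);
        return adminProps;
    }
}
```

With an SSL setup, `security.protocol` (and the keystore/truststore settings) would be silently lost in the buggy variant, which is why the enumerator fails when SSL is required.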





[jira] [Updated] (FLINK-21057) Streaming checkpointing with small interval leads app to hang

2021-01-20 Thread Yun Tang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21057?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Tang updated FLINK-21057:
-
Component/s: (was: Runtime / Checkpointing)
 Connectors / Kafka

> Streaming checkpointing with small interval leads app to hang
> -
>
> Key: FLINK-21057
> URL: https://issues.apache.org/jira/browse/FLINK-21057
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.11.3
> Environment: * streaming app
> * flink cluster in standalone-job / application mode
> * 1.11.3 Flink version
> * jobmanager --> 1 instance
> * taskmanager --> 1 instance
> * parallelism --> 2
>Reporter: Nazar Volynets
>Priority: Major
> Attachments: jobmanager.log, taskmanager.log
>
>
> There is a simple streaming app with enabled checkpointing:
>  * statebackend --> RocksDB
>  * mode --> EXACTLY_ONCE
> STRs:
>  1. Run Flink cluster in standalone-job / application mode (with embedded 
> streaming app)
>  2. Get error
>  3. Wait 1 min
>  4. Stop Flink cluster
>  5. Repeat steps from 1 to 3 until the error occurs:
> {code:java|title=taskmanager}
> org.apache.kafka.common.KafkaException: Unexpected error in 
> InitProducerIdResponse; Producer attempted an operation with an old epoch. 
> Either there is a newer producer with the same transactionalId, or the 
> producer's transaction has been expired by the broker.
> flink-kafka-mirror-maker-jobmanager | at 
> org.apache.kafka.clients.producer.internals.TransactionManager$InitProducerIdHandler.handleResponse(TransactionManager.java:1352)
>  ~[?:?]
> flink-kafka-mirror-maker-jobmanager | at 
> org.apache.kafka.clients.producer.internals.TransactionManager$TxnRequestHandler.onComplete(TransactionManager.java:1260)
>  ~[?:?]
> flink-kafka-mirror-maker-jobmanager | at 
> org.apache.kafka.clients.ClientResponse.onComplete(ClientResponse.java:109) 
> ~[?:?]
> flink-kafka-mirror-maker-jobmanager | at 
> org.apache.kafka.clients.NetworkClient.completeResponses(NetworkClient.java:572)
>  ~[?:?]
> flink-kafka-mirror-maker-jobmanager | at 
> org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:564) ~[?:?]
> flink-kafka-mirror-maker-jobmanager | at 
> org.apache.kafka.clients.producer.internals.Sender.maybeSendAndPollTransactionalRequest(Sender.java:414)
>  ~[?:?]
> flink-kafka-mirror-maker-jobmanager | at 
> org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:312) 
> ~[?:?]
> flink-kafka-mirror-maker-jobmanager | at 
> org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:239) ~[?:?]
> flink-kafka-mirror-maker-jobmanager | at java.lang.Thread.run(Unknown 
> Source) ~[?:?]
> {code}
> It is obvious 
> Please find below:
>  * streaming app code base (example)
>  * attached logs
>  ** jobmanager
>  ** taskmanager
> *Example*
> +App+
> {code:java|title=build.gradle (dependencies)}
> ...
> ext {
>   ...
>   javaVersion = '11'
>   flinkVersion = '1.12.0'
>   scalaBinaryVersion = '2.11'
>   ...
> }
> dependencies {
>   ...
>   implementation 
> "org.apache.flink:flink-streaming-java_${scalaBinaryVersion}:${flinkVersion}"
>   implementation 
> "org.apache.flink:flink-clients_${scalaBinaryVersion}:${flinkVersion}"
>   implementation 
> "org.apache.flink:flink-statebackend-rocksdb_${scalaBinaryVersion}:${flinkVersion}"
>   ...
> }
> {code}
> {code:java|title=App}
> public static void main(String[] args) {
>   ...
>   StreamExecutionEnvironment env = 
> StreamExecutionEnvironment.getExecutionEnvironment();
>   env.setParallelism(2);
>   env.enableCheckpointing(500);
>   env.setStateBackend(new 
> RocksDBStateBackend("file:///xxx/config/checkpoints/rocksdb", true));
>   
> env.getCheckpointConfig().setCheckpointingMode(CheckpointingMode.EXACTLY_ONCE);
>   env.getCheckpointConfig().setMinPauseBetweenCheckpoints(1000);
>   env.getCheckpointConfig().setCheckpointTimeout(60);
>   env.getCheckpointConfig().setMaxConcurrentCheckpoints(1);
>   FlinkKafkaConsumer consumer = createConsumer();
>   FlinkKafkaProducer producer = createProducer();
>   env
> .addSource(consumer)
> .uid("kafka-consumer")
> .addSink(producer)
> .uid("kafka-producer")
>   ;
>   env.execute();
> }
> public static FlinkKafkaConsumer createConsumer() {
>   ...
>   Properties props = new Properties();
>   props.setProperty(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, 
> "kafka-source-1:9091");
>   ... // nothing special
>   props.setProperty(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "true");
>   FlinkKafkaConsumer consumer = new FlinkKafkaConsumer<>("topic-1", 
> new RecordKafkaDerSchema(), props);
>   ... // RecordKafkaDerSchema --> custom schema is used to copy not only 
> message body but message key too
>   ... // 

[jira] [Created] (FLINK-21061) Introduce RemotePersistedValue construct in Stateful Functions

2021-01-20 Thread Tzu-Li (Gordon) Tai (Jira)
Tzu-Li (Gordon) Tai created FLINK-21061:
---

 Summary: Introduce RemotePersistedValue construct in Stateful 
Functions
 Key: FLINK-21061
 URL: https://issues.apache.org/jira/browse/FLINK-21061
 Project: Flink
  Issue Type: Task
Reporter: Tzu-Li (Gordon) Tai
Assignee: Tzu-Li (Gordon) Tai


To prepare for the new cross-language type system upcoming in Stateful 
Functions, we need a new {{RemotePersistedValue}} state construct to support 
this in the runtime.

A {{RemotePersistedValue}} should translate to a Flink {{ValueState}} that is 
essentially a byte array value, and persists a {{typeUrl}} meta-info in state 
serializer snapshots to represent the type of the remote value.
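At runtime the proposed construct boils down to roughly the following (a minimal sketch with assumed names; not Stateful Functions' actual API):

```java
// Hypothetical sketch of RemotePersistedValue: a byte[] value that would
// be backed by a Flink ValueState of byte[], plus a typeUrl recorded in
// state serializer snapshots to describe the remote value's type.
class RemotePersistedValueSketch {
    private final String typeUrl; // persisted as state meta-info
    private byte[] value;         // would live in a ValueState<byte[]>

    RemotePersistedValueSketch(String typeUrl) {
        this.typeUrl = typeUrl;
    }

    String typeUrl() { return typeUrl; }

    byte[] get() { return value; }

    void set(byte[] bytes) { value = bytes; }
}
```

Keeping the state as opaque bytes plus a type URL is what lets remote (cross-language) functions interpret the value without the JVM knowing its concrete type.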





[GitHub] [flink] flinkbot edited a comment on pull request #14691: [FLINK-21018] Update checkpoint related documentation for UI

2021-01-20 Thread GitBox


flinkbot edited a comment on pull request #14691:
URL: https://github.com/apache/flink/pull/14691#issuecomment-762651310


   
   ## CI report:
   
   * 6a766507fc023a0642ab36f6eca289e479b4b84b Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=12310)
 
   
   







[GitHub] [flink] flinkbot edited a comment on pull request #14708: [FLINK-21054][table-runtime-blink] Implement mini-batch optimized slicing window aggregate operator

2021-01-20 Thread GitBox


flinkbot edited a comment on pull request #14708:
URL: https://github.com/apache/flink/pull/14708#issuecomment-763516524


   
   ## CI report:
   
   * 480de3a5924eb8a615e173e3044a2e6cf22e16e4 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=12284)
 
   * e00fb2507a257ad657632047b3d2ec2e5b38a737 UNKNOWN
   
   







[GitHub] [flink] flinkbot commented on pull request #14710: [FLINK-19393][docs-zh] Translate the 'SQL Hints' page of 'Table API & SQL' into Chinese

2021-01-20 Thread GitBox


flinkbot commented on pull request #14710:
URL: https://github.com/apache/flink/pull/14710#issuecomment-764262487


   
   ## CI report:
   
   * da3f3f6a21306e3b4fa3cfc7b79ef34d825f5efb UNKNOWN
   
   







[GitHub] [flink] sv3ndk commented on pull request #14672: [FLINK-20998][docs] - flink-raw.jar no longer mentioned + prettier maven xml dependencies

2021-01-20 Thread GitBox


sv3ndk commented on pull request #14672:
URL: https://github.com/apache/flink/pull/14672#issuecomment-764362584


   Hi,
   Do you need more input from me on this one?







[jira] [Commented] (FLINK-20998) flink-raw-1.12.jar does not exist

2021-01-20 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17269045#comment-17269045
 ] 

Jark Wu commented on FLINK-20998:
-

Fixed in
 - master: b607ef2ca9eedb4a82656587a052cda11f7088e2
 - release-1.12: TODO

> flink-raw-1.12.jar does not exist
> -
>
> Key: FLINK-20998
> URL: https://issues.apache.org/jira/browse/FLINK-20998
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation, Table SQL / Ecosystem
>Affects Versions: 1.12.0
>Reporter: Svend Vanderveken
>Assignee: Svend Vanderveken
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 1.13.0, 1.12.2
>
>
> [Flink Raw format 
> documentation|https://ci.apache.org/projects/flink/flink-docs-release-1.12/dev/table/connectors/formats/raw.html#dependencies]
>  currently states that the {{flink-raw-x.yz.jar}} is required in order to use 
> this format.
> As I understand, such jar does not exist though:
>  * [broken link to 
> mavenrepository|https://mvnrepository.com/artifact/org.apache.flink/flink-raw]
>  * [the list of format in the flink repo on github does not contain 
> raw|https://github.com/apache/flink/tree/master/flink-formats]
> The raw format seems to currently be present [in the blink table 
> planner|https://github.com/apache/flink/tree/master/flink-table/flink-table-runtime-blink/src/main/java/org/apache/flink/table/formats/raw]
> I guess either documentation should be updated, or a specific jar should to 
> be created? I'm happy to help with either.
>  





[GitHub] [flink] wuchong merged pull request #14672: [FLINK-20998][docs] - flink-raw.jar no longer mentioned + prettier maven xml dependencies

2021-01-20 Thread GitBox


wuchong merged pull request #14672:
URL: https://github.com/apache/flink/pull/14672


   







[jira] [Created] (FLINK-21059) KafkaSourceEnumerator does not honor consumer properties

2021-01-20 Thread Thomas Weise (Jira)
Thomas Weise created FLINK-21059:


 Summary: KafkaSourceEnumerator does not honor consumer properties
 Key: FLINK-21059
 URL: https://issues.apache.org/jira/browse/FLINK-21059
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Kafka
Affects Versions: 1.13.0
Reporter: Thomas Weise
Assignee: Thomas Weise


Enumerator fails when SSL is required because the user provided properties are 
not used to construct the admin client.






[jira] [Closed] (FLINK-20931) Remove globalModVersion from ExecutionGraph

2021-01-20 Thread Zhu Zhu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20931?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhu Zhu closed FLINK-20931.
---
Resolution: Done

Done via 759d7347ddccc576b4f039e99ac27af1df9ab260

> Remove globalModVersion from ExecutionGraph
> ---
>
> Key: FLINK-20931
> URL: https://issues.apache.org/jira/browse/FLINK-20931
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination
>Affects Versions: 1.13.0
>Reporter: Zhu Zhu
>Assignee: Nicholas Jiang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.13.0
>
>
> {{ExecutionGraph#globalModVersion}} is no longer used after legacy scheduler 
> is removed.
> {{ExecutionVertexVersioner}} is used instead for concurrent global-local and 
> local-local failures' handling.





[GitHub] [flink] flinkbot edited a comment on pull request #14611: [FLINK-13194][docs] Add explicit clarification about thread-safety of state

2021-01-20 Thread GitBox


flinkbot edited a comment on pull request #14611:
URL: https://github.com/apache/flink/pull/14611#issuecomment-758104213


   
   ## CI report:
   
   * 528835d01452384bd0f3708d31817812e246b230 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=12275)
 
   * 145e976456e1f3d765fbcb5f4e34e256949ae1ec UNKNOWN
   
   







[jira] [Closed] (FLINK-21041) Introduce ExecNodeGraph to wrap the ExecNode topology

2021-01-20 Thread godfrey he (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21041?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

godfrey he closed FLINK-21041.
--
Resolution: Fixed

Fixed in 1.13.0: 842f5cd703f1c31c85f6c4500ea54490924f7a92

> Introduce ExecNodeGraph to wrap the ExecNode topology
> -
>
> Key: FLINK-21041
> URL: https://issues.apache.org/jira/browse/FLINK-21041
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: godfrey he
>Assignee: godfrey he
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.13.0
>
>
> Currently, we use a {{List}} of {{ExecNode}}s to represent the {{ExecNode}} 
> topology. As we will introduce more features (such as serializing/deserializing 
> {{ExecNode}}s), it's better to introduce a unified class to represent 
> the topology.
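The refactoring described above amounts to something like the following (a sketch under assumed names; not the actual planner code):

```java
import java.util.Collections;
import java.util.List;

// Hypothetical sketch: wrap the flat list of root ExecNodes in a dedicated
// graph class, so later features such as serialization can hang off one
// topology object instead of a bare List.
class ExecNodeGraphSketch<N> {
    private final List<N> rootNodes;

    ExecNodeGraphSketch(List<N> rootNodes) {
        this.rootNodes = Collections.unmodifiableList(rootNodes);
    }

    List<N> getRootNodes() {
        return rootNodes;
    }
}
```

A single wrapper type also gives a natural place for versioning metadata when the plan is persisted.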





[jira] [Commented] (FLINK-21056) Streaming checkpointing is failing occasionally

2021-01-20 Thread Yun Tang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-21056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17269075#comment-17269075
 ] 

Yun Tang commented on FLINK-21056:
--

[~nvolynets] From the exception message "{{Timeout expired after 
6milliseconds while awaiting InitProducerId}}", we can see this appears to be a 
problem in Kafka (maybe the same as KAFKA-8803). 
The Flink connector calls the Kafka producer during the synchronous phase of a 
checkpoint, and the Kafka producer just reports "{{Timeout expired after 
6milliseconds while awaiting InitProducerId}}", leading to the checkpoint 
failure. This is not a bug in the Flink framework; you could ask about this 
problem on the Flink user@ mailing list.

> Streaming checkpointing is failing occasionally
> ---
>
> Key: FLINK-21056
> URL: https://issues.apache.org/jira/browse/FLINK-21056
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Affects Versions: 1.11.3
> Environment: * streaming app
> * flink cluster in standalone-job / application mode
> * 1.11.3 Flink version
> * jobmanager --> 1 instance
> * taskmanager --> 1 instance
> * parallelism --> 2
>Reporter: Nazar Volynets
>Priority: Major
> Attachments: jobmanager.log, taskmanager.log
>
>
> There is a simple streaming app with enabled checkpointing:
>  * statebackend --> RocksDB
>  * mode --> EXACTLY_ONCE
> STRs:
>  1. Run Flink cluster in standalone-job / application mode (with embedded 
> streaming app)
>  2. Wait 10 minutes
>  3. Restart Flink cluster (& consequently streaming app)
>  4. Repeat steps from #1 to #3 until you get a checkpointing error
> {code:java|title=taskmanager}
> 2021-01-19 12:09:39,719 INFO  
> org.apache.flink.streaming.runtime.tasks.SubtaskCheckpointCoordinatorImpl [] 
> - Could not complete snapshot 21 for operator Source: Custom Source -> Sink: 
> Unnamed (1/2). Failure reason: Checkpoint was declined.
> org.apache.flink.runtime.checkpoint.CheckpointException: Could not complete 
> snapshot 21 for operator Source: Custom Source -> Sink: Unnamed (1/2). 
> Failure reason: Checkpoint was declined.
> ...
> Caused by: org.apache.flink.util.SerializedThrowable: Timeout expired after 
> 6milliseconds while awaiting InitProducerId
> {code}
> Based on the stack trace it is quite tricky to determine the root cause.
> Please find below:
>  * streaming app code base (example)
>  * attached logs
>  ** jobmanager
>  ** taskmanager
> *Example*
> +App+
> {code:java|title=build.gradle (dependencies)}
> ...
> ext {
>   ...
>   javaVersion = '11'
>   flinkVersion = '1.12.0'
>   scalaBinaryVersion = '2.11'
>   ...
> }
> dependencies {
>   ...
>   implementation 
> "org.apache.flink:flink-streaming-java_${scalaBinaryVersion}:${flinkVersion}"
>   implementation 
> "org.apache.flink:flink-clients_${scalaBinaryVersion}:${flinkVersion}"
>   implementation 
> "org.apache.flink:flink-statebackend-rocksdb_${scalaBinaryVersion}:${flinkVersion}"
>   ...
> }
> {code}
> {code:java|title=App}
> public static void main(String[] args) {
>   ...
>   StreamExecutionEnvironment env = 
> StreamExecutionEnvironment.getExecutionEnvironment();
>   env.setParallelism(2);
>   env.enableCheckpointing(1);
>   env.setStateBackend(new 
> RocksDBStateBackend("file:///xxx/config/checkpoints/rocksdb", true));
>   
> env.getCheckpointConfig().setCheckpointingMode(CheckpointingMode.EXACTLY_ONCE);
>   env.getCheckpointConfig().setMinPauseBetweenCheckpoints(1000);
>   env.getCheckpointConfig().setCheckpointTimeout(60);
>   env.getCheckpointConfig().setMaxConcurrentCheckpoints(1);
>   FlinkKafkaConsumer consumer = createConsumer();
>   FlinkKafkaProducer producer = createProducer();
>   env
> .addSource(consumer)
> .uid("kafka-consumer")
> .addSink(producer)
> .uid("kafka-producer")
>   ;
>   env.execute();
> }
> public static FlinkKafkaConsumer createConsumer() {
>   ...
>   Properties props = new Properties();
>   props.setProperty(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, 
> "kafka-source-1:9091");
>   ... // nothing special
>   props.setProperty(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "true");
>   FlinkKafkaConsumer consumer = new FlinkKafkaConsumer<>("topic-1", 
> new RecordKafkaDerSchema(), props);
>   ... // RecordKafkaDerSchema --> custom schema is used to copy not only 
> message body but message key too
>   ... // SimpleStringSchema --> can be used instead to reproduce issue
>   consumer.setStartFromGroupOffsets();
>   consumer.setCommitOffsetsOnCheckpoints(true);
>   return consumer;
> }
> public static FlinkKafkaProducer createProducer() {
>   ...
>   Properties props = new Properties();
>   ...
>   props.setProperty(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, 
> "kafka-target-1:9094");
>   props.setProperty(ProducerConfig.MAX_IN_FLIGHT_REQUESTS_PER_CONNECTION, 

[jira] [Closed] (FLINK-21056) Streaming checkpointing is failing occasionally

2021-01-20 Thread Yun Tang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Tang closed FLINK-21056.

Resolution: Invalid

> Streaming checkpointing is failing occasionally
> ---
>
> Key: FLINK-21056
> URL: https://issues.apache.org/jira/browse/FLINK-21056
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Affects Versions: 1.11.3
> Environment: * streaming app
> * flink cluster in standalone-job / application mode
> * 1.11.3 Flink version
> * jobmanager --> 1 instance
> * taskmanager --> 1 instance
> * parallelism --> 2
>Reporter: Nazar Volynets
>Priority: Major
> Attachments: jobmanager.log, taskmanager.log
>
>
> There is a simple streaming app with enabled checkpointing:
>  * statebackend --> RocksDB
>  * mode --> EXACTLY_ONCE
> STRs:
>  1. Run Flink cluster in standalone-job / application mode (with embedded 
> streaming app)
>  2. Wait 10 minutes
>  3. Restart Flink cluster (& consequently streaming app)
>  4. Repeat steps from #1 to #3 until you get a checkpointing error
> {code:java|title=taskmanager}
> 2021-01-19 12:09:39,719 INFO  
> org.apache.flink.streaming.runtime.tasks.SubtaskCheckpointCoordinatorImpl [] 
> - Could not complete snapshot 21 for operator Source: Custom Source -> Sink: 
> Unnamed (1/2). Failure reason: Checkpoint was declined.
> org.apache.flink.runtime.checkpoint.CheckpointException: Could not complete 
> snapshot 21 for operator Source: Custom Source -> Sink: Unnamed (1/2). 
> Failure reason: Checkpoint was declined.
> ...
> Caused by: org.apache.flink.util.SerializedThrowable: Timeout expired after 
> 6milliseconds while awaiting InitProducerId
> {code}
> Based on the stack trace it is quite tricky to determine the root cause.
> Please find below:
>  * streaming app code base (example)
>  * attached logs
>  ** jobmanager
>  ** taskmanager
> *Example*
> +App+
> {code:java|title=build.gradle (dependencies)}
> ...
> ext {
>   ...
>   javaVersion = '11'
>   flinkVersion = '1.12.0'
>   scalaBinaryVersion = '2.11'
>   ...
> }
> dependencies {
>   ...
>   implementation 
> "org.apache.flink:flink-streaming-java_${scalaBinaryVersion}:${flinkVersion}"
>   implementation 
> "org.apache.flink:flink-clients_${scalaBinaryVersion}:${flinkVersion}"
>   implementation 
> "org.apache.flink:flink-statebackend-rocksdb_${scalaBinaryVersion}:${flinkVersion}"
>   ...
> }
> {code}
> {code:java|title=App}
> public static void main(String[] args) {
>   ...
>   StreamExecutionEnvironment env = 
> StreamExecutionEnvironment.getExecutionEnvironment();
>   env.setParallelism(2);
>   env.enableCheckpointing(1);
>   env.setStateBackend(new 
> RocksDBStateBackend("file:///xxx/config/checkpoints/rocksdb", true));
>   
> env.getCheckpointConfig().setCheckpointingMode(CheckpointingMode.EXACTLY_ONCE);
>   env.getCheckpointConfig().setMinPauseBetweenCheckpoints(1000);
>   env.getCheckpointConfig().setCheckpointTimeout(60);
>   env.getCheckpointConfig().setMaxConcurrentCheckpoints(1);
>   FlinkKafkaConsumer consumer = createConsumer();
>   FlinkKafkaProducer producer = createProducer();
>   env
> .addSource(consumer)
> .uid("kafka-consumer")
> .addSink(producer)
> .uid("kafka-producer")
>   ;
>   env.execute();
> }
> public static FlinkKafkaConsumer createConsumer() {
>   ...
>   Properties props = new Properties();
>   props.setProperty(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, 
> "kafka-source-1:9091");
>   ... // nothing special
>   props.setProperty(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "true");
>   FlinkKafkaConsumer consumer = new FlinkKafkaConsumer<>("topic-1", 
> new RecordKafkaDerSchema(), props);
>   ... // RecordKafkaDerSchema --> custom schema is used to copy not only 
> message body but message key too
>   ... // SimpleStringSchema --> can be used instead to reproduce issue
>   consumer.setStartFromGroupOffsets();
>   consumer.setCommitOffsetsOnCheckpoints(true);
>   return consumer;
> }
> public static FlinkKafkaProducer createProducer() {
>   ...
>   Properties props = new Properties();
>   ...
>   props.setProperty(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, 
> "kafka-target-1:9094");
>   props.setProperty(ProducerConfig.MAX_IN_FLIGHT_REQUESTS_PER_CONNECTION, 
> "1");
>   props.setProperty(ProducerConfig.ACKS_CONFIG, "all");
>   props.setProperty(ProducerConfig.BATCH_SIZE_CONFIG, "16384");
>   props.setProperty(ProducerConfig.LINGER_MS_CONFIG, "1000");
>   props.setProperty(ProducerConfig.REQUEST_TIMEOUT_MS_CONFIG, "2000");
>   props.setProperty(ProducerConfig.DELIVERY_TIMEOUT_MS_CONFIG, "9000");
>   props.setProperty(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true");
>   props.setProperty(ProducerConfig.TRANSACTIONAL_ID_CONFIG, "xxx"); // 
> ignored due to expected behaviour - 
> 
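
A common cause of the "Timeout expired ... while awaiting InitProducerId" failure reported above (not confirmed in this thread, so treat it as a hypothesis) is the producer's `transaction.timeout.ms` exceeding the broker's `transaction.max.timeout.ms`: Flink's exactly-once Kafka producer defaults to 1 hour, while Kafka brokers default to 15 minutes. A minimal sketch of aligning the two, assuming the broker default is unchanged:

```java
import java.util.Properties;

public class TxnTimeoutConfig {
    // Broker default transaction.max.timeout.ms is 900000 ms (15 minutes);
    // a producer whose transaction.timeout.ms exceeds it is rejected, which
    // can surface as an InitProducerId timeout during checkpointing.
    static final long BROKER_MAX_TXN_TIMEOUT_MS = 900_000L;

    static Properties producerProps(String bootstrapServers) {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", bootstrapServers);
        // Keep the producer's transaction timeout within the broker limit.
        props.setProperty("transaction.timeout.ms",
                String.valueOf(BROKER_MAX_TXN_TIMEOUT_MS));
        props.setProperty("enable.idempotence", "true");
        return props;
    }

    public static void main(String[] args) {
        Properties p = producerProps("kafka-target-1:9094");
        long txnTimeout = Long.parseLong(p.getProperty("transaction.timeout.ms"));
        if (txnTimeout > BROKER_MAX_TXN_TIMEOUT_MS) {
            throw new AssertionError("transaction.timeout.ms exceeds broker limit");
        }
        System.out.println("transaction.timeout.ms=" + txnTimeout);
    }
}
```

Alternatively, the broker's `transaction.max.timeout.ms` can be raised instead of lowering the producer timeout.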

[GitHub] [flink] flinkbot edited a comment on pull request #14611: [FLINK-13194][docs] Add explicit clarification about thread-safety of state

2021-01-20 Thread GitBox


flinkbot edited a comment on pull request #14611:
URL: https://github.com/apache/flink/pull/14611#issuecomment-758104213


   
   ## CI report:
   
   * 528835d01452384bd0f3708d31817812e246b230 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=12275)
 
   * 145e976456e1f3d765fbcb5f4e34e256949ae1ec Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=12316)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-21062) Meeting NPE when using the dynamic Index in elasticsearch connector

2021-01-20 Thread xiaozilong (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21062?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiaozilong updated FLINK-21062:
---
Description: 
The program throws an NPE when using a dynamic index in the Elasticsearch 
connector. 

The DDL like:
{code:java}
create table bigoflow_logs_output(
  jobName VARCHAR,
  userName VARCHAR,
  proctime TIMESTAMP
) with (
  'connector' = 'elasticsearch-7',
  'hosts' = 'http://127.0.0.1:9400',
  'index' = 'flink2es-{proctime|yyyy-MM-dd}'
);
{code}
The problem may be that the method `AbstractTimeIndexGenerator#open()` is not 
called when `AbstractTimeIndexGenerator` is initialized.
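
As a sketch of the suspected lifecycle bug (class and method names here are hypothetical, not the connector's actual implementation): a generator whose `DateTimeFormatter` is only created in `open()` throws exactly this NPE if `generate()` runs before `open()` is called.

```java
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;

class TimeIndexGenerator {
    private final String prefix;
    private final String pattern;
    private transient DateTimeFormatter formatter; // stays null until open()

    TimeIndexGenerator(String prefix, String pattern) {
        this.prefix = prefix;
        this.pattern = pattern;
    }

    // The runtime must call open() before the first generate(); skipping it
    // leaves `formatter` null, and LocalDateTime.format(null) throws the
    // "java.lang.NullPointerException: formatter" seen in the stack trace.
    void open() {
        this.formatter = DateTimeFormatter.ofPattern(pattern);
    }

    String generate(LocalDateTime ts) {
        return prefix + ts.format(formatter);
    }

    public static void main(String[] args) {
        TimeIndexGenerator g = new TimeIndexGenerator("flink2es-", "yyyy-MM-dd");
        g.open(); // without this line, generate() throws NullPointerException
        System.out.println(g.generate(LocalDateTime.of(2021, 1, 20, 12, 0)));
    }
}
```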

The exception stack is as follows: 
{code:java}
java.lang.NullPointerException: formatter
at java.util.Objects.requireNonNull(Objects.java:228) ~[?:1.8.0_60]
at java.time.LocalDateTime.format(LocalDateTime.java:1751) ~[?:1.8.0_60]
at 
org.apache.flink.streaming.connectors.elasticsearch.table.IndexGeneratorFactory.lambda$createFormatFunction$27972a5d$3(IndexGeneratorFactory.java:161)
 ~[flink-connector-elasticsearch-base_2.12-1.11.0.jar:1.11.0]
at 
org.apache.flink.streaming.connectors.elasticsearch.table.IndexGeneratorFactory$1.generate(IndexGeneratorFactory.java:118)
 [flink-connector-elasticsearch-base_2.12-1.11.0.jar:1.11.0]
at 
org.apache.flink.streaming.connectors.elasticsearch.table.RowElasticsearchSinkFunction.processUpsert(RowElasticsearchSinkFunction.java:103)
 [flink-connector-elasticsearch-base_2.12-1.11.0.jar:1.11.0]
at 
org.apache.flink.streaming.connectors.elasticsearch.table.RowElasticsearchSinkFunction.process(RowElasticsearchSinkFunction.java:79)
 [flink-connector-elasticsearch-base_2.12-1.11.0.jar:1.11.0]
at 
org.apache.flink.streaming.connectors.elasticsearch.table.RowElasticsearchSinkFunction.process(RowElasticsearchSinkFunction.java:44)
 [flink-connector-elasticsearch-base_2.12-1.11.0.jar:1.11.0]
at 
org.apache.flink.streaming.connectors.elasticsearch.ElasticsearchSinkBase.invoke(ElasticsearchSinkBase.java:310)
 [flink-connector-elasticsearch-base_2.12-1.11.0.jar:1.11.0]
at 
org.apache.flink.table.runtime.operators.sink.SinkOperator.processElement(SinkOperator.java:86)
 [flink-table-blink_2.12-1.11.0.jar:1.11.0]
at 
org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.pushToOperator(OperatorChain.java:717)
 [flink-dist_2.12-1.11.0.jar:1.11.0]
at 
org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:692)
 [flink-dist_2.12-1.11.0.jar:1.11.0]
at 
org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:672)
 [flink-dist_2.12-1.11.0.jar:1.11.0]
at org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:52) [flink-dist_2.12-1.11.0.jar:1.11.0]
at org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:30) [flink-dist_2.12-1.11.0.jar:1.11.0]
at StreamExecCalc$14.processElement(Unknown Source) [flink-table-blink_2.12-1.11.0.jar:?]
at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.pushToOperator(OperatorChain.java:717) [flink-dist_2.12-1.11.0.jar:1.11.0]
at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:692) [flink-dist_2.12-1.11.0.jar:1.11.0]
at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:672) [flink-dist_2.12-1.11.0.jar:1.11.0]
at org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:52) [flink-dist_2.12-1.11.0.jar:1.11.0]
at org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:30) [flink-dist_2.12-1.11.0.jar:1.11.0]
at org.apache.flink.streaming.api.operators.StreamSourceContexts$NonTimestampContext.collect(StreamSourceContexts.java:104) [flink-dist_2.12-1.11.0.jar:1.11.0]
at org.apache.flink.streaming.api.operators.StreamSourceContexts$NonTimestampContext.collectWithTimestamp(StreamSourceContexts.java:111) [flink-dist_2.12-1.11.0.jar:1.11.0]
at org.apache.flink.streaming.connectors.kafka.internals.AbstractFetcher.emitRecordsWithTimestamps(AbstractFetcher.java:352) [flink-connector-kafka-base_2.12-1.11.0.jar:1.11.0]
at org.apache.flink.streaming.connectors.kafka.internal.KafkaFetcher.partitionConsumerRecordsHandler(KafkaFetcher.java:244) [flink-connector-kafka_2.12-1.11.0.jar:1.11.0]
at org.apache.flink.streaming.connectors.kafka.internal.KafkaFetcher.runFetchLoop(KafkaFetcher.java:200) [flink-connector-kafka_2.12-1.11.0.jar:1.11.0]
at org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase.run(FlinkKafkaConsumerBase.java:813) [flink-connector-kafka-base_2.12-1.11.0.jar:1.11.0]
at org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:100) [flink-dist_2.12-1.11.0.jar:1.11.0]
at 

[jira] [Comment Edited] (FLINK-6042) Display last n exceptions/causes for job restarts in Web UI

2021-01-20 Thread Matthias (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-6042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17269092#comment-17269092
 ] 

Matthias edited comment on FLINK-6042 at 1/21/21, 7:34 AM:
---

We have two approaches (which we discussed offline) to implement this feature:
 # The {{JobExceptionsHandler}} does most of the work by iterating over the 
{{ArchivedExecution}}s of the passed {{ArchivedExecutionGraph}}. 
{{ArchivedExecutions}} provide the time (through 
{{ArchivedExecution.stateTimestamps}}) and the thrown exception 
({{ArchivedExecution.failureCause}}). The {{SchedulerNG}} implementation would 
need to collect a mapping of {{failureCause}} to {{ExecutionAttemptID}} and 
pass it over to the {{JobExceptionsHandler}} along the 
{{ArchivedExecutionGraph}}. This would enable the handler to group exceptions 
that happened due to the same failure case.
 ** +Pros:+
 *** This approach has the advantage of using mostly code that is already there.
 *** No extra code in the {{SchedulerBase}} implementation.
 ** Cons:
 *** It does not support restarts of the {{ExecutionGraph}}. This restart 
functionality is planned for the declarative scheduler which we're currently 
working on (see 
[FLIP-160|https://cwiki.apache.org/confluence/display/FLINK/FLIP-160%3A+Declarative+Scheduler]).
 Only the most recent {{ExecutionGraph}} (and, therefore, its exceptions) is 
provided.
 *** There might be modifications necessary to the internally used data 
structures allowing random access based on {{ExecutionAttemptID}} instead of 
iterating over collections.
 # The collection of exceptions happens in the scheduler. The mapping of root 
cause to related exceptions is then passed over to the 
{{JobExceptionsHandler}}. The exceptions can be collected as they appear.
 ** +Pros:+
 *** It makes it easier to port this functionality into the declarative 
scheduler of FLIP-160. We don't need to think of a history of 
{{ArchivedExecutionGraphs}} for now. Restarts of the {{ExecutionGraph}} are 
hidden away from the {{JobExceptionsHandler}}.
 ** +Cons:+
 *** The {{SchedulerBase}} code base grows once more which increases complexity.

We decided to go with option 2 for now. This makes it easier for us to 
implement the functionality into the declarative scheduler of FLIP-160.
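
The bounded "last n exceptions" history that option 2 implies could be sketched as follows (class and field names are hypothetical, chosen for illustration): the scheduler appends each root cause as it occurs and evicts the oldest entry once the limit is reached, and the handler serves a snapshot.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

class ExceptionHistory {
    static final class Entry {
        final long timestamp;
        final String failureCause;
        Entry(long timestamp, String failureCause) {
            this.timestamp = timestamp;
            this.failureCause = failureCause;
        }
    }

    private final int maxSize;
    private final Deque<Entry> entries = new ArrayDeque<>();

    ExceptionHistory(int maxSize) {
        this.maxSize = maxSize;
    }

    // Called by the scheduler as failures occur; keeps only the last n.
    void add(long timestamp, String failureCause) {
        if (entries.size() == maxSize) {
            entries.removeFirst(); // evict the oldest entry
        }
        entries.addLast(new Entry(timestamp, failureCause));
    }

    // Snapshot handed to the JobExceptionsHandler for the REST response.
    List<Entry> snapshot() {
        return new ArrayList<>(entries);
    }

    public static void main(String[] args) {
        ExceptionHistory history = new ExceptionHistory(2);
        history.add(1L, "first");
        history.add(2L, "second");
        history.add(3L, "third"); // evicts "first"
        System.out.println(history.snapshot().size()); // prints 2
    }
}
```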


was (Author: mapohl):
We have two approaches (which we discussed offline) to implement this feature:
 # The {{JobExceptionsHandler}} does most of the work by iterating over the 
{{ArchivedExecution}}s of the passed {{ArchivedExecutionGraph}}. 
{{ArchivedExecutions}} provide the time (through 
{{ArchivedExecution.stateTimestamps}}) and the thrown exception 
({{ArchivedExecution.failureCause}}). The {{SchedulerNG}} implementation would 
need to collect a mapping of {{failureCause}} to {{ExecutionAttemptID}} and 
pass it over to the {{JobExceptionsHandler}} along the 
{{ArchivedExecutionGraph}}. This would enable the handler to group exceptions 
that happened due to the same failure case.
 ** +Pros:+
 *** This approach has the advantage of using mostly code that is already there.
 *** No extra code in the {{SchedulerBase}} implementation.
 ** Cons:
 *** It does not support restarts of the {{ExecutionGraph}}. This restart 
functionality is planned for the declarative scheduler which we're currently 
working on (see 
[FLIP-160|https://cwiki.apache.org/confluence/display/FLINK/FLIP-160%3A+Declarative+Scheduler]).
 Only the most recent {{ExecutionGraph}} (and, therefore, its exceptions) is 
provided.
 *** There might be modifications necessary to the internally used data 
structures allowing random access based on {{ExecutionAttemptID}} instead of 
iterating over collections.

 # The collection of exceptions happens in the scheduler. The mapping of root 
cause to related exceptions is then passed over to the 
{{JobExceptionsHandler}}. The exceptions can be collected as they appear.
 ** +Pros:+
 *** It makes it easier to port this functionality into the declarative 
scheduler of FLIP-160. We don't need to think of a history of 
{{ArchivedExecutionGraphs}} for now. Restarts of the {{ExecutionGraph}} are 
hidden away from the {{JobExceptionsHandler}}.
 ** +Cons:+
 *** The {{SchedulerBase}} code base grows once more which increases complexity.

We decided to go with option 2 for now. This makes it easier for us to 
implement the functionality into the declarative scheduler of FLIP-160.

> Display last n exceptions/causes for job restarts in Web UI
> ---
>
> Key: FLINK-6042
> URL: https://issues.apache.org/jira/browse/FLINK-6042
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Coordination, Runtime / Web Frontend
>Affects Versions: 1.3.0
>Reporter: Till Rohrmann
>Assignee: Matthias
>Priority: Major
>  Labels: 

[jira] [Comment Edited] (FLINK-6042) Display last n exceptions/causes for job restarts in Web UI

2021-01-20 Thread Matthias (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-6042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17269092#comment-17269092
 ] 

Matthias edited comment on FLINK-6042 at 1/21/21, 7:34 AM:
---

We have two approaches (which we discussed offline) to implement this feature:
 # The {{JobExceptionsHandler}} does most of the work by iterating over the 
{{ArchivedExecution}}s of the passed {{ArchivedExecutionGraph}}. 
{{ArchivedExecutions}} provide the time (through 
{{ArchivedExecution.stateTimestamps}}) and the thrown exception 
({{ArchivedExecution.failureCause}}). The {{SchedulerNG}} implementation would 
need to collect a mapping of {{failureCause}} to {{ExecutionAttemptID}} and 
pass it over to the {{JobExceptionsHandler}} along the 
{{ArchivedExecutionGraph}}. This would enable the handler to group exceptions 
that happened due to the same failure case.
 ** +Pros:+
 *** This approach has the advantage of using mostly code that is already there.
 *** No extra code in the {{SchedulerBase}} implementation.
 ** Cons:
 *** It does not support restarts of the {{ExecutionGraph}}. This restart 
functionality is planned for the declarative scheduler which we're currently 
working on (see 
[FLIP-160|https://cwiki.apache.org/confluence/display/FLINK/FLIP-160%3A+Declarative+Scheduler]).
 Only the most recent {{ExecutionGraph}} (and, therefore, its exceptions) is 
provided.
 *** There might be modifications necessary to the internally used data 
structures allowing random access based on {{ExecutionAttemptID}} instead of 
iterating over collections.

 # The collection of exceptions happens in the scheduler. The mapping of root 
cause to related exceptions is then passed over to the 
{{JobExceptionsHandler}}. The exceptions can be collected as they appear.
 ** +Pros:+
 *** It makes it easier to port this functionality into the declarative 
scheduler of FLIP-160. We don't need to think of a history of 
{{ArchivedExecutionGraphs}} for now. Restarts of the {{ExecutionGraph}} are 
hidden away from the {{JobExceptionsHandler}}.
 ** +Cons:+
 *** The {{SchedulerBase}} code base grows once more which increases complexity.

We decided to go with option 2 for now. This makes it easier for us to 
implement the functionality into the declarative scheduler of FLIP-160.


was (Author: mapohl):
We have two approaches (which we discussed offline) to implement this feature:
 # The {{JobExceptionsHandler}} does most of the work by iterating over the 
{{ArchivedExecution}}s of the passed {{ArchivedExecutionGraph}}. 
{{ArchivedExecutions}} provide the time (through 
{{ArchivedExecution.stateTimestamps}}) and the thrown exception 
({{ArchivedExecution.failureCause}}). The {{SchedulerNG}} implementation would 
need to collect a mapping of {{failureCause}} to {{ExecutionAttemptID}} and 
pass it over to the {{JobExceptionsHandler}} along the 
{{ArchivedExecutionGraph}}. This would enable the handler to group exceptions 
that happened due to the same failure case.
+Pros:+ 
- This approach has the advantage of using mostly code that is already there.
- No extra code in the {{SchedulerBase}} implementation.
+Cons:+ 
- It does not support restarts of the {{ExecutionGraph}}. This restart 
functionality is planned for the declarative scheduler which we're currently 
working on (see 
[FLIP-160|https://cwiki.apache.org/confluence/display/FLINK/FLIP-160%3A+Declarative+Scheduler]).
 Only the most recent {{ExecutionGraph}} (and, therefore, its exceptions) is 
provided.
- There might be modifications necessary to the internally used data structures 
allowing random access based on {{ExecutionAttemptID}} instead of iterating 
over collections.
 # The collection of exceptions happens in the scheduler. The mapping of root 
cause to related exceptions is then passed over to the 
{{JobExceptionsHandler}}. The exceptions can be collected as they appear.
+Pros:+ 
- It makes it easier to port this functionality into the declarative 
scheduler of FLIP-160. We don't need to think of a history of 
{{ArchivedExecutionGraphs}} for now. Restarts of the {{ExecutionGraph}} are 
hidden away from the {{JobExceptionsHandler}}.
+Cons:+
- The {{SchedulerBase}} code base grows once more which increases complexity.

We decided to go with option 2 for now. This makes it easier for us to 
implement the functionality into the declarative scheduler of FLIP-160.

> Display last n exceptions/causes for job restarts in Web UI
> ---
>
> Key: FLINK-6042
> URL: https://issues.apache.org/jira/browse/FLINK-6042
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Coordination, Runtime / Web Frontend
>Affects Versions: 1.3.0
>Reporter: Till Rohrmann
>Assignee: Matthias
>Priority: Major
>  Labels: pull-request-available
>
> Users 

[GitHub] [flink] flinkbot edited a comment on pull request #14613: [FLINK-20935][yarn] can't write flink configuration to tmp file and a…

2021-01-20 Thread GitBox


flinkbot edited a comment on pull request #14613:
URL: https://github.com/apache/flink/pull/14613#issuecomment-758462709


   
   ## CI report:
   
   * fed0cc6ecf3d5bceb33a6bd68a32c77fde5180dd Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=11907)
 
   * e28099f2684a1ec8706290ca52d982b3d7b266da UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (FLINK-21058) The Show Partitions command is supported under the Flink dialect to show all partitions

2021-01-20 Thread Jira
孙铮 created FLINK-21058:
--

 Summary: The Show Partitions command is supported under the Flink 
dialect to show all partitions
 Key: FLINK-21058
 URL: https://issues.apache.org/jira/browse/FLINK-21058
 Project: Flink
  Issue Type: New Feature
  Components: Table SQL / Planner
Affects Versions: 1.12.0
Reporter: 孙铮
 Fix For: 1.13.0


The Show Partitions command is supported under the Flink dialect to show all 
partitions



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-21059) KafkaSourceEnumerator does not honor consumer properties

2021-01-20 Thread Thomas Weise (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-21059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17269063#comment-17269063
 ] 

Thomas Weise commented on FLINK-21059:
--

{code:java}
Caused by: java.util.concurrent.ExecutionException: 
org.apache.kafka.common.errors.TimeoutException: Timed out waiting for a node 
assignment.
at 
org.apache.kafka.common.internals.KafkaFutureImpl.wrapAndThrow(KafkaFutureImpl.java:45)
 ~[kafka-clients-2.4.1.jar:?]
at 
org.apache.kafka.common.internals.KafkaFutureImpl.access$000(KafkaFutureImpl.java:32)
 ~[kafka-clients-2.4.1.jar:?]
at 
org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:89)
 ~[kafka-clients-2.4.1.jar:?]
at 
org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:260) 
~[kafka-clients-2.4.1.jar:?]
at 
org.apache.flink.connector.kafka.source.enumerator.subscriber.TopicListSubscriber.getPartitionChanges(TopicListSubscriber.java:57)
 ~[flink-connector-kafka_2.12-1.13-20210119.215918-55.jar:1.13-SNAPSHOT]
at 
org.apache.flink.connector.kafka.source.enumerator.KafkaSourceEnumerator.discoverAndInitializePartitionSplit(KafkaSourceEnumerator.java:196)
 ~[flink-connector-kafka_2.12-1.13-20210119.215918-55.jar:1.13-SNAPSHOT]
{code}

> KafkaSourceEnumerator does not honor consumer properties
> 
>
> Key: FLINK-21059
> URL: https://issues.apache.org/jira/browse/FLINK-21059
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.13.0
>Reporter: Thomas Weise
>Assignee: Thomas Weise
>Priority: Major
>
> Enumerator fails when SSL is required because the user provided properties 
> are not used to construct the admin client.
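> 
> A minimal sketch of the fix direction (the method name is hypothetical, not the actual connector API): derive the admin-client configuration from the full user-supplied properties instead of copying only `bootstrap.servers`, so that security settings such as `security.protocol` and `ssl.*` survive.

```java
import java.util.Properties;

class AdminClientProps {
    // Copy everything the user supplied; building a fresh Properties with
    // only bootstrap.servers drops security.protocol, ssl.*, sasl.*, etc.,
    // which makes the enumerator's admin client fail against SSL brokers.
    static Properties forAdminClient(Properties userProps) {
        Properties adminProps = new Properties();
        adminProps.putAll(userProps);
        return adminProps;
    }

    public static void main(String[] args) {
        Properties user = new Properties();
        user.setProperty("bootstrap.servers", "broker:9093");
        user.setProperty("security.protocol", "SSL");
        Properties admin = forAdminClient(user);
        System.out.println(admin.getProperty("security.protocol")); // prints SSL
    }
}
```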



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] tweise opened a new pull request #14711: [FLINK-21059][kafka] KafkaSourceEnumerator does not honor consumer properties

2021-01-20 Thread GitBox


tweise opened a new pull request #14711:
URL: https://github.com/apache/flink/pull/14711


   ## What is the purpose of the change
   
   *Ensure that KafkaSourceEnumerator honors the user provided consumer 
properties*
   
   ## Verifying this change
   
   I tested this change on an internal Kafka cluster; we should probably also 
add unit test coverage. Putting this up for discussion.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / **no**)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
 - The serializers: (yes / **no** / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / **no** 
/ don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (yes / **no** / 
don't know)
 - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / **no**)
 - If yes, how is the feature documented? (not applicable / docs / JavaDocs 
/ not documented)
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-21059) KafkaSourceEnumerator does not honor consumer properties

2021-01-20 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21059?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-21059:
---
Labels: pull-request-available  (was: )

> KafkaSourceEnumerator does not honor consumer properties
> 
>
> Key: FLINK-21059
> URL: https://issues.apache.org/jira/browse/FLINK-21059
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.13.0
>Reporter: Thomas Weise
>Assignee: Thomas Weise
>Priority: Major
>  Labels: pull-request-available
>
> Enumerator fails when SSL is required because the user provided properties 
> are not used to construct the admin client. Empty properties are created and 
> all provided properties except bootstrap.servers are ignored.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #14699: [FLINK-21011][table-planner-blink] Separate implementation of StreamExecIntervalJoin

2021-01-20 Thread GitBox


flinkbot edited a comment on pull request #14699:
URL: https://github.com/apache/flink/pull/14699#issuecomment-763287840


   
   ## CI report:
   
   * c463989c4c6afdc6a85485822d1350f6ab771a30 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=12304)
 
   * 316311b1eb3e13ae6ececc71d75c02b95403214d Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=12314)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (FLINK-21060) Could not create the DispatcherResourceManagerComponent

2021-01-20 Thread Spongebob (Jira)
Spongebob created FLINK-21060:
-

 Summary: Could not create the DispatcherResourceManagerComponent
 Key: FLINK-21060
 URL: https://issues.apache.org/jira/browse/FLINK-21060
 Project: Flink
  Issue Type: Bug
  Components: API / Core
Affects Versions: 1.12.0
 Environment: flink: 1.12.0

hive: 3.1.2
Reporter: Spongebob
 Attachments: yarn.log

I set multiple sinks to Hive tables in a Flink application and deployed it on 
YARN. If I run in detached mode, the application runs successfully but sinks 
nothing. If I run without detached mode, the application throws this exception: 
Could not create the DispatcherResourceManagerComponent. The attachment is the 
log of this application run without detached mode.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #14711: [FLINK-21059][kafka] KafkaSourceEnumerator does not honor consumer properties

2021-01-20 Thread GitBox


flinkbot edited a comment on pull request #14711:
URL: https://github.com/apache/flink/pull/14711#issuecomment-764431519


   
   ## CI report:
   
   * 745be8be878cbc0eb4094daa66ecce499d5ac585 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=12318)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] wuchong commented on pull request #14672: [FLINK-20998][docs] - flink-raw.jar no longer mentioned + prettier maven xml dependencies

2021-01-20 Thread GitBox


wuchong commented on pull request #14672:
URL: https://github.com/apache/flink/pull/14672#issuecomment-764445821


   Yes, you are right. You can just cherry-pick the merged commit to your dev 
branch based on the latest release-1.12 branch. You may need to resolve any 
conflicts that arise. 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (FLINK-21063) ElasticsearchSinkITCase fails while setting up ES cluster

2021-01-20 Thread Dawid Wysakowicz (Jira)
Dawid Wysakowicz created FLINK-21063:


 Summary: ElasticsearchSinkITCase fails while setting up ES cluster
 Key: FLINK-21063
 URL: https://issues.apache.org/jira/browse/FLINK-21063
 Project: Flink
  Issue Type: Bug
  Components: Connectors / ElasticSearch
Affects Versions: 1.13.0
Reporter: Dawid Wysakowicz
 Fix For: 1.13.0


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=12300=logs=d44f43ce-542c-597d-bf94-b0718c71e5e8=34f486e1-e1e4-5dd2-9c06-bfdd9b9c74a8

{code}
2021-01-20T22:39:29.0031362Z ElasticsearchStatusException[method [HEAD], host 
[http://127.0.0.1:9200], URI [/], status line [HTTP/1.1 503 Service 
Unavailable]]; nested: ResponseException[method [HEAD], host 
[http://127.0.0.1:9200], URI [/], status line [HTTP/1.1 503 Service 
Unavailable]]; nested: ResponseException[method [HEAD], host 
[http://127.0.0.1:9200], URI [/], status line [HTTP/1.1 503 Service 
Unavailable]];
2021-01-20T22:39:29.0034068Zat 
org.elasticsearch.client.RestHighLevelClient.parseResponseException(RestHighLevelClient.java:625)
2021-01-20T22:39:29.0035014Z    at org.elasticsearch.client.RestHighLevelClient.performRequest(RestHighLevelClient.java:535)
2021-01-20T22:39:29.0035875Z    at org.elasticsearch.client.RestHighLevelClient.ping(RestHighLevelClient.java:275)
2021-01-20T22:39:29.0036779Z    at org.apache.flink.streaming.connectors.elasticsearch6.ElasticsearchSinkITCase.ensureClusterIsUp(ElasticsearchSinkITCase.java:45)
2021-01-20T22:39:29.0037961Z    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
2021-01-20T22:39:29.0038621Z    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
2021-01-20T22:39:29.0039362Z    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
2021-01-20T22:39:29.0040036Z    at java.lang.reflect.Method.invoke(Method.java:498)
2021-01-20T22:39:29.0040667Z    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
2021-01-20T22:39:29.0041366Z    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
2021-01-20T22:39:29.0042163Z    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
2021-01-20T22:39:29.0042920Z    at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
2021-01-20T22:39:29.0043722Z    at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
2021-01-20T22:39:29.0044370Z    at org.junit.rules.RunRules.evaluate(RunRules.java:20)
2021-01-20T22:39:29.0044946Z    at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
2021-01-20T22:39:29.0045604Z    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
2021-01-20T22:39:29.0046332Z    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
2021-01-20T22:39:29.0046980Z    at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
2021-01-20T22:39:29.0047692Z    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
2021-01-20T22:39:29.0048311Z    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
2021-01-20T22:39:29.0048938Z    at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
2021-01-20T22:39:29.0049539Z    at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
2021-01-20T22:39:29.0050147Z    at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
2021-01-20T22:39:29.0050847Z    at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
2021-01-20T22:39:29.0051503Z    at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
2021-01-20T22:39:29.0052119Z    at org.junit.rules.RunRules.evaluate(RunRules.java:20)
2021-01-20T22:39:29.0052735Z    at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
2021-01-20T22:39:29.0053532Z    at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
2021-01-20T22:39:29.0054281Z    at org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
2021-01-20T22:39:29.0055040Z    at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
2021-01-20T22:39:29.0055747Z    at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
2021-01-20T22:39:29.0056545Z    at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
2021-01-20T22:39:29.0057550Z    at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
2021-01-20T22:39:29.0058297Z    at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
2021-01-20T22:39:29.0058974Z    at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)

[jira] [Updated] (FLINK-21020) Bump Jackson to 2.12.1

2021-01-20 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21020?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-21020:
---
Labels: pull-request-available  (was: )

> Bump Jackson to 2.12.1
> --
>
> Key: FLINK-21020
> URL: https://issues.apache.org/jira/browse/FLINK-21020
> Project: Flink
>  Issue Type: Improvement
>  Components: BuildSystem / Shaded
>Affects Versions: shaded-9.0, shaded-10.0, shaded-11.0, shaded-12.0
>Reporter: Dian Fu
>Assignee: Huang Xingbo
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.13.0, 1.11.4, 1.12.2
>
>
> Our current Jackson version (2.10.1) is vulnerable for at least this CVE:
> [https://nvd.nist.gov/vuln/detail/CVE-2020-25649]
> Bumping it to 2.10.5.1+ should address this issue.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink-shaded] HuangXingBo opened a new pull request #93: [FLINK-21020][jackson] Bump version to 2.12.1

2021-01-20 Thread GitBox


HuangXingBo opened a new pull request #93:
URL: https://github.com/apache/flink-shaded/pull/93


   This fixes the following vulnerability
   https://nvd.nist.gov/vuln/detail/CVE-2020-25649
   
   which has been fixed in 2.10.5.1
   https://github.com/FasterXML/jackson/wiki/Jackson-Release-2.10#micro-patches
   
   To avoid releasing new versions too frequently, we upgrade Jackson all the 
way to 2.12.1.
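For downstream builds that want the same fix without waiting for a flink-shaded release, a hedged sketch of what the bump looks like as a Maven pin (the flink-shaded build itself manages this version through its own property, so the coordinates below are only an illustration of the 2.12.1 upgrade described above):

```xml
<dependencyManagement>
  <dependencies>
    <!-- Pin jackson-databind to a release that contains the CVE-2020-25649 fix -->
    <dependency>
      <groupId>com.fasterxml.jackson.core</groupId>
      <artifactId>jackson-databind</artifactId>
      <version>2.12.1</version>
    </dependency>
  </dependencies>
</dependencyManagement>
```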



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-21058) Add SHOW PARTITION syntax support for flink

2021-01-20 Thread Jira


[ 
https://issues.apache.org/jira/browse/FLINK-21058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17269057#comment-17269057
 ] 

孙铮 commented on FLINK-21058:


[~jark] Yes, I want to add SHOW PARTITION syntax support to Flink. Could you 
assign it to me? Thanks!

> Add SHOW PARTITION syntax support for flink
> ---
>
> Key: FLINK-21058
> URL: https://issues.apache.org/jira/browse/FLINK-21058
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / Planner
>Affects Versions: 1.12.0
>Reporter: 孙铮
>Priority: Minor
> Fix For: 1.13.0
>
>
> The Show Partitions command is supported under the Flink dialect to show all 
> partitions





[jira] [Commented] (FLINK-21059) KafkaSourceEnumerator does not honor consumer properties

2021-01-20 Thread Thomas Weise (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-21059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17269065#comment-17269065
 ] 

Thomas Weise commented on FLINK-21059:
--

In this case

security.protocol = PLAINTEXT

was used instead of

security.protocol = SSL

 

> KafkaSourceEnumerator does not honor consumer properties
> 
>
> Key: FLINK-21059
> URL: https://issues.apache.org/jira/browse/FLINK-21059
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.13.0
>Reporter: Thomas Weise
>Assignee: Thomas Weise
>Priority: Major
>
> Enumerator fails when SSL is required because the user provided properties 
> are not used to construct the admin client.



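A minimal sketch of the fix this issue calls for: user-provided properties must be applied on top of any defaults when constructing the admin client's configuration, otherwise a default such as security.protocol=PLAINTEXT silently wins over the user's SSL setting. The class and method names here are hypothetical, not the actual KafkaSourceEnumerator code:

```java
import java.util.Properties;

public class AdminClientProps {
    // Build the properties for the enumerator's admin client.
    // Applying userProps last ensures user settings take precedence
    // over any connector-internal defaults.
    static Properties adminClientProps(Properties userProps) {
        Properties props = new Properties();
        props.setProperty("security.protocol", "PLAINTEXT"); // illustrative default
        props.putAll(userProps); // user-provided settings override the defaults
        return props;
    }

    public static void main(String[] args) {
        Properties user = new Properties();
        user.setProperty("security.protocol", "SSL");
        // The user's SSL setting survives the merge.
        System.out.println(adminClientProps(user).getProperty("security.protocol"));
    }
}
```

The bug described above is the inverse of this merge order: the enumerator built its admin client from internal defaults alone, so the user's SSL configuration was never seen.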


[jira] [Updated] (FLINK-21058) Add SHOW PARTITION syntax support for flink

2021-01-20 Thread Jira


 [ 
https://issues.apache.org/jira/browse/FLINK-21058?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

孙铮 updated FLINK-21058:
---
Description: I want to add SHOW PARTITION syntax support to flink  (was: 
The Show Partitinos command is supported under the Flink dialect to show all 
partitions)

> Add SHOW PARTITION syntax support for flink
> ---
>
> Key: FLINK-21058
> URL: https://issues.apache.org/jira/browse/FLINK-21058
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / Planner
>Affects Versions: 1.12.0
>Reporter: 孙铮
>Priority: Minor
> Fix For: 1.13.0
>
>
> I want to add SHOW PARTITION syntax support to flink





[GitHub] [flink] godfreyhe merged pull request #14700: [FLINK-21041][table-planner-blink] Introduce ExecNodeGraph to wrap the ExecNode topology

2021-01-20 Thread GitBox


godfreyhe merged pull request #14700:
URL: https://github.com/apache/flink/pull/14700


   







[GitHub] [flink] flinkbot commented on pull request #14711: [FLINK-21059][kafka] KafkaSourceEnumerator does not honor consumer properties

2021-01-20 Thread GitBox


flinkbot commented on pull request #14711:
URL: https://github.com/apache/flink/pull/14711#issuecomment-764423225


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 93b4ec3111f193bdeb091c11100ecf916389527e (Thu Jan 21 
06:49:49 UTC 2021)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   







[jira] [Commented] (FLINK-6042) Display last n exceptions/causes for job restarts in Web UI

2021-01-20 Thread Matthias (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-6042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17269092#comment-17269092
 ] 

Matthias commented on FLINK-6042:
-

We have two approaches (which we discussed offline) to implement this feature:
{{ArchivedExecution}}s of the passed {{ArchivedExecutionGraph}}. 
{{ArchivedExecutions}} provide the time (through 
{{ArchivedExecution.stateTimestamps}}) and the thrown exception 
({{ArchivedExecution.failureCause}}). The {{SchedulerNG}} implementation would 
need to collect a mapping of {{failureCause}} to {{ExecutionAttemptID}} and 
pass it over to the {{JobExceptionsHandler}} along with the 
{{ArchivedExecutionGraph}}. This would enable the handler to group exceptions 
that happened due to the same failure cause.
+Pros:+ 
- This approach has the advantage of using mostly code that is already there.
- No extra code in the {{SchedulerBase}} implementation.
+Cons:+ 
- It does not support restarts of the {{ExecutionGraph}}. This restart 
functionality is planned for the declarative scheduler which we're currently 
working on (see 
[FLIP-160|https://cwiki.apache.org/confluence/display/FLINK/FLIP-160%3A+Declarative+Scheduler]).
 Only the most recent {{ExecutionGraph}} (and, therefore, its exceptions) is 
provided.
- There might be modifications necessary to the internally used data structures 
allowing random access based on {{ExecutionAttemptID}} instead of iterating 
over collections.
 # The collection of exceptions happens in the scheduler. The mapping of root 
cause to related exceptions is then passed over to the 
{{JobExceptionsHandler}}. The exceptions can be collected as they appear.
+Pros:+ 
- It makes it easier to port this functionality into the declarative 
scheduler of FLIP-160. We don't need to think of a history of 
{{ArchivedExecutionGraphs}} for now. Restarts of the {{ExecutionGraph}} are 
hidden away from the {{JobExceptionsHandler}}.
+Cons:+
- The {{SchedulerBase}} code base grows once more which increases complexity.

We decided to go with option 2 for now. This makes it easier for us to 
implement the functionality into the declarative scheduler of FLIP-160.
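As a rough illustration of option 2, the scheduler-side collection amounts to a bounded history of root-cause entries, evicting the oldest once the limit is reached, similar in spirit to the {{EvictingBoundedList}} mentioned in the issue description. The class below is a simplified stand-in, not the actual scheduler code:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Keeps the last n root-cause exceptions observed by the scheduler,
// evicting the oldest entry once the bound is reached.
public class ExceptionHistory {
    private final int maxSize;
    private final Deque<String> entries = new ArrayDeque<>();

    public ExceptionHistory(int maxSize) {
        this.maxSize = maxSize;
    }

    public void add(String failureCause) {
        if (entries.size() == maxSize) {
            entries.removeFirst(); // evict the oldest failure
        }
        entries.addLast(failureCause);
    }

    public List<String> latest() {
        return new ArrayList<>(entries);
    }

    public static void main(String[] args) {
        ExceptionHistory history = new ExceptionHistory(2);
        history.add("first failure");
        history.add("second failure");
        history.add("third failure"); // evicts "first failure"
        System.out.println(history.latest());
    }
}
```

Because the history lives in the scheduler rather than being reconstructed from {{ArchivedExecution}}s, it survives restarts of the {{ExecutionGraph}}, which is the main advantage of option 2 noted above.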

> Display last n exceptions/causes for job restarts in Web UI
> ---
>
> Key: FLINK-6042
> URL: https://issues.apache.org/jira/browse/FLINK-6042
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Coordination, Runtime / Web Frontend
>Affects Versions: 1.3.0
>Reporter: Till Rohrmann
>Assignee: Matthias
>Priority: Major
>  Labels: pull-request-available
>
> Users requested that it would be nice to see the last {{n}} exceptions 
> causing a job restart in the Web UI. This will help to more easily debug and 
> operate a job.
> We could store the root causes for failures similar to how prior executions 
> are stored in the {{ExecutionVertex}} using the {{EvictingBoundedList}} and 
> then serve this information via the Web UI.





[GitHub] [flink] flinkbot edited a comment on pull request #14710: [FLINK-19393][docs-zh] Translate the 'SQL Hints' page of 'Table API & SQL' into Chinese

2021-01-20 Thread GitBox


flinkbot edited a comment on pull request #14710:
URL: https://github.com/apache/flink/pull/14710#issuecomment-764262487


   
   ## CI report:
   
   * da3f3f6a21306e3b4fa3cfc7b79ef34d825f5efb Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=12312)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #14708: [FLINK-21054][table-runtime-blink] Implement mini-batch optimized slicing window aggregate operator

2021-01-20 Thread GitBox


flinkbot edited a comment on pull request #14708:
URL: https://github.com/apache/flink/pull/14708#issuecomment-763516524


   
   ## CI report:
   
   * 480de3a5924eb8a615e173e3044a2e6cf22e16e4 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=12284)
 
   * e00fb2507a257ad657632047b3d2ec2e5b38a737 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=12311)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] wuchong commented on pull request #14672: [FLINK-20998][docs] - flink-raw.jar no longer mentioned + prettier maven xml dependencies

2021-01-20 Thread GitBox


wuchong commented on pull request #14672:
URL: https://github.com/apache/flink/pull/14672#issuecomment-764388301


   @sv3ndk , sorry, I do think I missed to merge this PR. 
   
   Could you help to open a pull request for release-1.12 ? 







[jira] [Commented] (FLINK-21058) The Show Partitions command is supported under the Flink dialect to show all partitions

2021-01-20 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-21058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17269047#comment-17269047
 ] 

Jark Wu commented on FLINK-21058:
-

Do you mean you want to support {{SHOW PARTITION}} syntax? Because AFAIK this 
syntax is not supported yet.

> The Show Partitions command is supported under the Flink dialect to show all 
> partitions
> ---
>
> Key: FLINK-21058
> URL: https://issues.apache.org/jira/browse/FLINK-21058
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / Planner
>Affects Versions: 1.12.0
>Reporter: 孙铮
>Priority: Minor
> Fix For: 1.13.0
>
>
> The Show Partitions command is supported under the Flink dialect to show all 
> partitions





[jira] [Commented] (FLINK-19445) Several tests for HBase connector 1.4 failed with "NoSuchMethodError: com.google.common.base.Preconditions.checkArgument(ZLjava/lang/String;Ljava/lang/Object;)V"

2021-01-20 Thread Robert Metzger (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17269068#comment-17269068
 ] 

Robert Metzger commented on FLINK-19445:


This seems very similar, and fixed recently: FLINK-21006. Maybe it's not fixed?

> Several tests for HBase connector 1.4 failed with "NoSuchMethodError: 
> com.google.common.base.Preconditions.checkArgument(ZLjava/lang/String;Ljava/lang/Object;)V"
> -
>
> Key: FLINK-19445
> URL: https://issues.apache.org/jira/browse/FLINK-19445
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / HBase
>Affects Versions: 1.12.0, 1.13.0
>Reporter: Dian Fu
>Assignee: Gyula Fora
>Priority: Blocker
>  Labels: pull-request-available, test-stability
> Fix For: 1.12.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=7042=logs=ba53eb01-1462-56a3-8e98-0dd97fbcaab5=bfbc6239-57a0-5db0-63f3-41551b4f7d51
> {code}
> 2020-09-28T21:28:29.4171075Z Running 
> org.apache.flink.connector.hbase1.HBaseTablePlanTest
> 2020-09-28T21:28:31.0367584Z Tests run: 5, Failures: 0, Errors: 1, Skipped: 
> 0, Time elapsed: 1.62 sec <<< FAILURE! - in 
> org.apache.flink.connector.hbase1.HBaseTablePlanTest
> 2020-09-28T21:28:31.0368925Z 
> testProjectionPushDown(org.apache.flink.connector.hbase1.HBaseTablePlanTest)  
> Time elapsed: 0.031 sec  <<< ERROR!
> 2020-09-28T21:28:31.0369805Z org.apache.flink.table.api.ValidationException: 
> 2020-09-28T21:28:31.0370409Z Unable to create a source for reading table 
> 'default_catalog.default_database.hTable'.
> 2020-09-28T21:28:31.0370707Z 
> 2020-09-28T21:28:31.0370976Z Table options are:
> 2020-09-28T21:28:31.0371204Z 
> 2020-09-28T21:28:31.0371528Z 'connector'='hbase-1.4'
> 2020-09-28T21:28:31.0371871Z 'table-name'='my_table'
> 2020-09-28T21:28:31.0372255Z 'zookeeper.quorum'='localhost:2021'
> 2020-09-28T21:28:31.0372812Z  at 
> org.apache.flink.table.factories.FactoryUtil.createTableSource(FactoryUtil.java:125)
> 2020-09-28T21:28:31.0373359Z  at 
> org.apache.flink.table.planner.plan.schema.CatalogSourceTable.buildTableScan(CatalogSourceTable.scala:135)
> 2020-09-28T21:28:31.0373905Z  at 
> org.apache.flink.table.planner.plan.schema.CatalogSourceTable.toRel(CatalogSourceTable.scala:78)
> 2020-09-28T21:28:31.0374390Z  at 
> org.apache.calcite.sql2rel.SqlToRelConverter.toRel(SqlToRelConverter.java:3492)
> 2020-09-28T21:28:31.0375224Z  at 
> org.apache.calcite.sql2rel.SqlToRelConverter.convertIdentifier(SqlToRelConverter.java:2415)
> 2020-09-28T21:28:31.0375867Z  at 
> org.apache.calcite.sql2rel.SqlToRelConverter.convertFrom(SqlToRelConverter.java:2102)
> 2020-09-28T21:28:31.0376479Z  at 
> org.apache.flink.table.planner.calcite.FlinkPlannerImpl$$anon$1.convertFrom(FlinkPlannerImpl.scala:181)
> 2020-09-28T21:28:31.0377077Z  at 
> org.apache.calcite.sql2rel.SqlToRelConverter.convertFrom(SqlToRelConverter.java:2051)
> 2020-09-28T21:28:31.0377593Z  at 
> org.apache.flink.table.planner.calcite.FlinkPlannerImpl$$anon$1.convertFrom(FlinkPlannerImpl.scala:181)
> 2020-09-28T21:28:31.0378114Z  at 
> org.apache.calcite.sql2rel.SqlToRelConverter.convertSelectImpl(SqlToRelConverter.java:661)
> 2020-09-28T21:28:31.0378622Z  at 
> org.apache.calcite.sql2rel.SqlToRelConverter.convertSelect(SqlToRelConverter.java:642)
> 2020-09-28T21:28:31.0379132Z  at 
> org.apache.calcite.sql2rel.SqlToRelConverter.convertQueryRecursive(SqlToRelConverter.java:3345)
> 2020-09-28T21:28:31.0379872Z  at 
> org.apache.calcite.sql2rel.SqlToRelConverter.convertQuery(SqlToRelConverter.java:568)
> 2020-09-28T21:28:31.0380477Z  at 
> org.apache.flink.table.planner.calcite.FlinkPlannerImpl.org$apache$flink$table$planner$calcite$FlinkPlannerImpl$$rel(FlinkPlannerImpl.scala:196)
> 2020-09-28T21:28:31.0381128Z  at 
> org.apache.flink.table.planner.calcite.FlinkPlannerImpl.rel(FlinkPlannerImpl.scala:154)
> 2020-09-28T21:28:31.0381666Z  at 
> org.apache.flink.table.planner.operations.SqlToOperationConverter.toQueryOperation(SqlToOperationConverter.java:823)
> 2020-09-28T21:28:31.0382264Z  at 
> org.apache.flink.table.planner.operations.SqlToOperationConverter.convertSqlQuery(SqlToOperationConverter.java:795)
> 2020-09-28T21:28:31.0382968Z  at 
> org.apache.flink.table.planner.operations.SqlToOperationConverter.convert(SqlToOperationConverter.java:250)
> 2020-09-28T21:28:31.0383550Z  at 
> org.apache.flink.table.planner.delegation.ParserImpl.parse(ParserImpl.java:78)
> 2020-09-28T21:28:31.0384172Z  at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.sqlQuery(TableEnvironmentImpl.java:640)
> 2020-09-28T21:28:31.0384700Z  at 
> 

[GitHub] [flink] sv3ndk commented on pull request #14672: [FLINK-20998][docs] - flink-raw.jar no longer mentioned + prettier maven xml dependencies

2021-01-20 Thread GitBox


sv3ndk commented on pull request #14672:
URL: https://github.com/apache/flink/pull/14672#issuecomment-764426318


   Thanks for the merge @wuchong 
   
   > Could you help to open a pull request for release-1.12 ?
   
   Gladly. I'm new here so I want to make sure I do this correctly: all I 
should do is open another PR with the same commits to the branch called 
release-1.12, is this correct?
   







[GitHub] [flink] flinkbot commented on pull request #14711: [FLINK-21059][kafka] KafkaSourceEnumerator does not honor consumer properties

2021-01-20 Thread GitBox


flinkbot commented on pull request #14711:
URL: https://github.com/apache/flink/pull/14711#issuecomment-764431519


   
   ## CI report:
   
   * 745be8be878cbc0eb4094daa66ecce499d5ac585 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[jira] [Created] (FLINK-21062) Meeting NPE when using the dynamic Index in elasticsearch connector

2021-01-20 Thread xiaozilong (Jira)
xiaozilong created FLINK-21062:
--

 Summary: Meeting NPE when using the dynamic Index in elasticsearch 
connector
 Key: FLINK-21062
 URL: https://issues.apache.org/jira/browse/FLINK-21062
 Project: Flink
  Issue Type: Bug
  Components: Connectors / ElasticSearch
Affects Versions: 1.12.0
Reporter: xiaozilong


The program throws an NPE when using a dynamic index in the elasticsearch 
connector. 

The DDL like:
{code:java}
create table bigoflow_logs_output(
  jobName VARCHAR,
  userName VARCHAR,
  proctime TIMESTAMP
) with (
  'connector' = 'elasticsearch-7',
  'hosts' = 'http://127.0.0.1:9400',
  'index' = 'flink2es-{proctime|yyyy-MM-dd}'
);
{code}

The problem may be that the method `AbstractTimeIndexGenerator#open()` is not 
called when `AbstractTimeIndexGenerator` is initialized.
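A minimal sketch of the suspected bug: a time-based index generator whose `DateTimeFormatter` is only created in `open()` will throw exactly the reported "NullPointerException: formatter" if the caller forgets to invoke `open()` before generating an index. The class below is a simplified stand-in for `AbstractTimeIndexGenerator`, not the actual Flink code:

```java
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;

public class TimeIndexGenerator {
    private final String prefix;
    private final String dateFormat;
    private DateTimeFormatter formatter; // stays null until open() is called

    public TimeIndexGenerator(String prefix, String dateFormat) {
        this.prefix = prefix;
        this.dateFormat = dateFormat;
    }

    // Mirrors the lifecycle method the issue suspects is never invoked.
    public void open() {
        this.formatter = DateTimeFormatter.ofPattern(dateFormat);
    }

    public String generate(LocalDateTime ts) {
        // If open() was skipped, formatter is null and LocalDateTime.format
        // throws NullPointerException: formatter (via Objects.requireNonNull).
        return prefix + ts.format(formatter);
    }

    public static void main(String[] args) {
        TimeIndexGenerator gen = new TimeIndexGenerator("flink2es-", "yyyy-MM-dd");
        gen.open(); // omitting this line reproduces the reported NPE
        System.out.println(gen.generate(LocalDateTime.of(2021, 1, 20, 0, 0)));
    }
}
```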



The exception stack is as follows: 
java.lang.NullPointerException: formatter
    at java.util.Objects.requireNonNull(Objects.java:228) ~[?:1.8.0_60]
    at java.time.LocalDateTime.format(LocalDateTime.java:1751) ~[?:1.8.0_60]
    at org.apache.flink.streaming.connectors.elasticsearch.table.IndexGeneratorFactory.lambda$createFormatFunction$27972a5d$3(IndexGeneratorFactory.java:161) ~[flink-connector-elasticsearch-base_2.12-1.11.0.jar:1.11.0]
    at org.apache.flink.streaming.connectors.elasticsearch.table.IndexGeneratorFactory$1.generate(IndexGeneratorFactory.java:118) [flink-connector-elasticsearch-base_2.12-1.11.0.jar:1.11.0]
    at org.apache.flink.streaming.connectors.elasticsearch.table.RowElasticsearchSinkFunction.processUpsert(RowElasticsearchSinkFunction.java:103) [flink-connector-elasticsearch-base_2.12-1.11.0.jar:1.11.0]
    at org.apache.flink.streaming.connectors.elasticsearch.table.RowElasticsearchSinkFunction.process(RowElasticsearchSinkFunction.java:79) [flink-connector-elasticsearch-base_2.12-1.11.0.jar:1.11.0]
    at org.apache.flink.streaming.connectors.elasticsearch.table.RowElasticsearchSinkFunction.process(RowElasticsearchSinkFunction.java:44) [flink-connector-elasticsearch-base_2.12-1.11.0.jar:1.11.0]
    at org.apache.flink.streaming.connectors.elasticsearch.ElasticsearchSinkBase.invoke(ElasticsearchSinkBase.java:310) [flink-connector-elasticsearch-base_2.12-1.11.0.jar:1.11.0]
    at org.apache.flink.table.runtime.operators.sink.SinkOperator.processElement(SinkOperator.java:86) [flink-table-blink_2.12-1.11.0.jar:1.11.0]
    at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.pushToOperator(OperatorChain.java:717) [flink-dist_2.12-1.11.0.jar:1.11.0]
    at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:692) [flink-dist_2.12-1.11.0.jar:1.11.0]
    at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:672) [flink-dist_2.12-1.11.0.jar:1.11.0]
    at org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:52) [flink-dist_2.12-1.11.0.jar:1.11.0]
    at org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:30) [flink-dist_2.12-1.11.0.jar:1.11.0]
    at StreamExecCalc$14.processElement(Unknown Source) [flink-table-blink_2.12-1.11.0.jar:?]
    at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.pushToOperator(OperatorChain.java:717) [flink-dist_2.12-1.11.0.jar:1.11.0]
    at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:692) [flink-dist_2.12-1.11.0.jar:1.11.0]
    at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:672) [flink-dist_2.12-1.11.0.jar:1.11.0]
    at org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:52) [flink-dist_2.12-1.11.0.jar:1.11.0]
    at org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:30) [flink-dist_2.12-1.11.0.jar:1.11.0]
    at org.apache.flink.streaming.api.operators.StreamSourceContexts$NonTimestampContext.collect(StreamSourceContexts.java:104) [flink-dist_2.12-1.11.0.jar:1.11.0]
    at org.apache.flink.streaming.api.operators.StreamSourceContexts$NonTimestampContext.collectWithTimestamp(StreamSourceContexts.java:111) [flink-dist_2.12-1.11.0.jar:1.11.0]
    at org.apache.flink.streaming.connectors.kafka.internals.AbstractFetcher.emitRecordsWithTimestamps(AbstractFetcher.java:352) [flink-connector-kafka-base_2.12-1.11.0.jar:1.11.0]
    at org.apache.flink.streaming.connectors.kafka.internal.KafkaFetcher.partitionConsumerRecordsHandler(KafkaFetcher.java:244) [flink-connector-kafka_2.12-1.11.0.jar:1.11.0]
    at org.apache.flink.streaming.connectors.kafka.internal.KafkaFetcher.runFetchLoop(KafkaFetcher.java:200) [flink-connector-kafka_2.12-1.11.0.jar:1.11.0]
    at 

[jira] [Commented] (FLINK-16947) ArtifactResolutionException: Could not transfer artifact. Entry [...] has not been leased from this pool

2021-01-20 Thread Dawid Wysakowicz (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16947?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17269103#comment-17269103
 ] 

Dawid Wysakowicz commented on FLINK-16947:
--

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=12303=logs=fc5181b0-e452-5c8f-68de-1097947f6483=62110053-334f-5295-a0ab-80dd7e2babbf

> ArtifactResolutionException: Could not transfer artifact.  Entry [...] has 
> not been leased from this pool
> -
>
> Key: FLINK-16947
> URL: https://issues.apache.org/jira/browse/FLINK-16947
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / Azure Pipelines
>Reporter: Piotr Nowojski
>Priority: Critical
>  Labels: test-stability
>
> https://dev.azure.com/rmetzger/Flink/_build/results?buildId=6982=logs=c88eea3b-64a0-564d-0031-9fdcd7b8abee=1e2bbe5b-4657-50be-1f07-d84bfce5b1f5
> Build of flink-metrics-availability-test failed with:
> {noformat}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-surefire-plugin:2.22.1:test (end-to-end-tests) 
> on project flink-metrics-availability-test: Unable to generate classpath: 
> org.apache.maven.artifact.resolver.ArtifactResolutionException: Could not 
> transfer artifact org.apache.maven.surefire:surefire-grouper:jar:2.22.1 
> from/to google-maven-central 
> (https://maven-central-eu.storage-download.googleapis.com/maven2/): Entry 
> [id:13][route:{s}->https://maven-central-eu.storage-download.googleapis.com:443][state:null]
>  has not been leased from this pool
> [ERROR] org.apache.maven.surefire:surefire-grouper:jar:2.22.1
> [ERROR] 
> [ERROR] from the specified remote repositories:
> [ERROR] google-maven-central 
> (https://maven-central-eu.storage-download.googleapis.com/maven2/, 
> releases=true, snapshots=false),
> [ERROR] apache.snapshots (https://repository.apache.org/snapshots, 
> releases=false, snapshots=true)
> [ERROR] Path to dependency:
> [ERROR] 1) dummy:dummy:jar:1.0
> [ERROR] 2) org.apache.maven.surefire:surefire-junit47:jar:2.22.1
> [ERROR] 3) org.apache.maven.surefire:common-junit48:jar:2.22.1
> [ERROR] 4) org.apache.maven.surefire:surefire-grouper:jar:2.22.1
> [ERROR] -> [Help 1]
> [ERROR] 
> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
> switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR] 
> [ERROR] For more information about the errors and possible solutions, please 
> read the following articles:
> [ERROR] [Help 1] 
> http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
> [ERROR] 
> [ERROR] After correcting the problems, you can resume the build with the 
> command
> [ERROR]   mvn  -rf :flink-metrics-availability-test
> {noformat}





[GitHub] [flink] KarmaGYZ commented on a change in pull request #14647: [FLINK-20835] Implement FineGrainedSlotManager

2021-01-20 Thread GitBox


KarmaGYZ commented on a change in pull request #14647:
URL: https://github.com/apache/flink/pull/14647#discussion_r561489225



##
File path: 
flink-core/src/main/java/org/apache/flink/configuration/ClusterOptions.java
##
@@ -111,6 +111,19 @@
 .withDescription(
 "Defines whether the cluster uses declarative 
resource management.");
 
+@Documentation.ExcludeFromDocumentation
+public static final ConfigOption<Boolean> 
ENABLE_FINE_GRAINED_RESOURCE_MANAGEMENT =
+
ConfigOptions.key("cluster.fine-grained-resource-management.enabled")
+.booleanType()
+.defaultValue(false)
+.withDescription(
+"Defines whether the cluster uses fine-grained 
resource management.");
+
+public static boolean isFineGrainedResourceManagementEnabled(Configuration 
configuration) {

Review comment:
   I think it's just about personal taste.
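The two styles under discussion, reading the option inline at each call site versus wrapping the read in a static helper as the patch does, can be sketched as follows. The `Configuration` class here is a trivial stand-in for Flink's real one, used only to make the contrast concrete:

```java
import java.util.HashMap;
import java.util.Map;

public class FineGrainedFlag {
    // Trivial stand-in for Flink's Configuration class.
    static class Configuration {
        private final Map<String, Boolean> values = new HashMap<>();

        Boolean get(String key, Boolean defaultValue) {
            return values.getOrDefault(key, defaultValue);
        }

        void set(String key, boolean value) {
            values.put(key, value);
        }
    }

    static final String KEY = "cluster.fine-grained-resource-management.enabled";

    // Style 2: a static helper, as in the reviewed patch. Callers don't need
    // to know the option's key or default value.
    static boolean isFineGrainedResourceManagementEnabled(Configuration conf) {
        return conf.get(KEY, false);
    }

    public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Style 1: read the option inline at the call site.
        System.out.println(conf.get(KEY, false));
        conf.set(KEY, true);
        System.out.println(isFineGrainedResourceManagementEnabled(conf));
    }
}
```

The helper centralizes the key and default in one place, at the cost of one more method on the options class, which is why the reviewer calls the choice a matter of taste.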









[GitHub] [flink] flinkbot edited a comment on pull request #14699: [FLINK-21011][table-planner-blink] Separate implementation of StreamExecIntervalJoin

2021-01-20 Thread GitBox


flinkbot edited a comment on pull request #14699:
URL: https://github.com/apache/flink/pull/14699#issuecomment-763287840


   
   ## CI report:
   
   * e9d2494f67bf130a6a0da0c9a1433195b79a7c45 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=12287)
 
   * c463989c4c6afdc6a85485822d1350f6ab771a30 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=12304)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] wuchong merged pull request #14706: [FLINK-20913][hive]Improve HiveConf creation for release-1.12

2021-01-20 Thread GitBox


wuchong merged pull request #14706:
URL: https://github.com/apache/flink/pull/14706


   







[GitHub] [flink] wenlong88 commented on a change in pull request #14699: [FLINK-21011][table-planner-blink] Separate implementation of StreamExecIntervalJoin

2021-01-20 Thread GitBox


wenlong88 commented on a change in pull request #14699:
URL: https://github.com/apache/flink/pull/14699#discussion_r561513490



##
File path: 
flink-table/flink-table-planner-blink/src/main/scala/org/apache/flink/table/planner/plan/nodes/physical/stream/StreamPhysicalIntervalJoin.scala
##
@@ -0,0 +1,94 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.planner.plan.nodes.physical.stream
+
+import org.apache.flink.table.api.TableException
+import org.apache.flink.table.planner.calcite.FlinkTypeFactory
+import org.apache.flink.table.planner.plan.nodes.common.CommonPhysicalJoin
+import org.apache.flink.table.planner.plan.nodes.exec.{ExecEdge, ExecNode}
+import org.apache.flink.table.planner.plan.nodes.exec.stream.StreamExecIntervalJoin
+import org.apache.flink.table.planner.plan.utils.IntervalJoinUtil.WindowBounds
+import org.apache.flink.table.planner.plan.utils.PythonUtil.containsPythonCall
+import org.apache.flink.table.planner.plan.utils.RelExplainUtil.preferExpressionFormat
+
+import org.apache.calcite.plan._
+import org.apache.calcite.rel.core.{Join, JoinRelType}
+import org.apache.calcite.rel.{RelNode, RelWriter}
+import org.apache.calcite.rex.RexNode
+
+import scala.collection.JavaConversions._
+
+/**
+  * Stream physical RelNode for a time interval stream join.
+  */
+class StreamPhysicalIntervalJoin(
+cluster: RelOptCluster,
+traitSet: RelTraitSet,
+leftRel: RelNode,
+rightRel: RelNode,
+joinType: JoinRelType,
+condition: RexNode,

Review comment:
   I have added back the originalCondition in the physical node.









[jira] [Closed] (FLINK-20913) Improve new HiveConf(jobConf, HiveConf.class)

2021-01-20 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu closed FLINK-20913.
---
Resolution: Fixed

> Improve new HiveConf(jobConf, HiveConf.class)
> -
>
> Key: FLINK-20913
> URL: https://issues.apache.org/jira/browse/FLINK-20913
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Affects Versions: 1.12.0, 1.12.1, 1.12.2
> Environment: Hive 2.0.1
> Flink 1.12.0
> Query with SQL client
>Reporter: Xingxing Di
>Assignee: Xingxing Di
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.13.0, 1.11.4, 1.12.2
>
>
> When we query hive tables, we got an exception in  
> org.apache.flink.connectors.hive.util.HivePartitionUtils#getAllPartitions
> Exception:
>  
> {code:java}
> org.apache.thrift.transport.TTransportException
> {code}
>  
> SQL: 
> {code:java}
> select * from dxx1 limit 1;
> {code}
>  
> After debugging, we found that new HiveConf will override the configurations in 
> jobConf; in my case `hive.metastore.sasl.enabled` was reset to `false`, which 
> is unexpected.
> {code:java}
> // org.apache.flink.connectors.hive.util.HivePartitionUtils
> new HiveConf(jobConf, HiveConf.class){code}
>  
> *I think we should add a HiveConfUtils to create HiveConf, which would be 
> like this:*
>  
> {code:java}
> HiveConf hiveConf = new HiveConf(jobConf, HiveConf.class);
> hiveConf.addResource(jobConf);{code}
> The above code fixes the error; I will make a PR if this improvement is 
> acceptable.
>  
> Here is the detail error stack:
> {code:java}
> 2021-01-10 17:27:11,995 WARN  org.apache.flink.table.client.cli.CliClient     
>              [] - Could not execute SQL statement.2021-01-10 17:27:11,995 
> WARN  org.apache.flink.table.client.cli.CliClient                  [] - Could 
> not execute SQL 
> statement.org.apache.flink.table.client.gateway.SqlExecutionException: 
> Invalid SQL query. at 
> org.apache.flink.table.client.gateway.local.LocalExecutor.executeQueryInternal(LocalExecutor.java:527)
>  ~[flink-sql-client_2.11-1.12.0.jar:1.12.0] at 
> org.apache.flink.table.client.gateway.local.LocalExecutor.executeQuery(LocalExecutor.java:365)
>  ~[flink-sql-client_2.11-1.12.0.jar:1.12.0] at 
> org.apache.flink.table.client.cli.CliClient.callSelect(CliClient.java:634) 
> ~[flink-sql-client_2.11-1.12.0.jar:1.12.0] at 
> org.apache.flink.table.client.cli.CliClient.callCommand(CliClient.java:324) 
> ~[flink-sql-client_2.11-1.12.0.jar:1.12.0] at 
> java.util.Optional.ifPresent(Optional.java:159) [?:1.8.0_202] at 
> org.apache.flink.table.client.cli.CliClient.open(CliClient.java:216) 
> [flink-sql-client_2.11-1.12.0.jar:1.12.0] at 
> org.apache.flink.table.client.SqlClient.openCli(SqlClient.java:141) 
> [flink-sql-client_2.11-1.12.0.jar:1.12.0] at 
> org.apache.flink.table.client.SqlClient.start(SqlClient.java:114) 
> [flink-sql-client_2.11-1.12.0.jar:1.12.0] at 
> org.apache.flink.table.client.SqlClient.main(SqlClient.java:196) 
> [flink-sql-client_2.11-1.12.0.jar:1.12.0]Caused by: 
> org.apache.flink.connectors.hive.FlinkHiveException: Failed to collect all 
> partitions from hive metaStore at 
> org.apache.flink.connectors.hive.util.HivePartitionUtils.getAllPartitions(HivePartitionUtils.java:142)
>  ~[flink-connector-hive_2.11-1.12.0.jar:1.12.0] at 
> org.apache.flink.connectors.hive.HiveTableSource.getDataStream(HiveTableSource.java:133)
>  ~[flink-connector-hive_2.11-1.12.0.jar:1.12.0] at 
> org.apache.flink.connectors.hive.HiveTableSource$1.produceDataStream(HiveTableSource.java:119)
>  ~[flink-connector-hive_2.11-1.12.0.jar:1.12.0] at 
> org.apache.flink.table.planner.plan.nodes.common.CommonPhysicalTableSourceScan.createSourceTransformation(CommonPhysicalTableSourceScan.scala:88)
>  ~[flink-table-blink_2.11-1.12.0.jar:1.12.0] at 
> org.apache.flink.table.planner.plan.nodes.physical.batch.BatchExecTableSourceScan.translateToPlanInternal(BatchExecTableSourceScan.scala:94)
>  ~[flink-table-blink_2.11-1.12.0.jar:1.12.0] at 
> org.apache.flink.table.planner.plan.nodes.physical.batch.BatchExecTableSourceScan.translateToPlanInternal(BatchExecTableSourceScan.scala:44)
>  ~[flink-table-blink_2.11-1.12.0.jar:1.12.0] at 
> org.apache.flink.table.planner.plan.nodes.exec.ExecNode$class.translateToPlan(ExecNode.scala:59)
>  ~[flink-table-blink_2.11-1.12.0.jar:1.12.0] at 
> org.apache.flink.table.planner.plan.nodes.physical.batch.BatchExecTableSourceScan.translateToPlan(BatchExecTableSourceScan.scala:44)
>  ~[flink-table-blink_2.11-1.12.0.jar:1.12.0] at 
> org.apache.flink.table.planner.plan.nodes.physical.batch.BatchExecLimit.translateToPlanInternal(BatchExecLimit.scala:105)
>  ~[flink-table-blink_2.11-1.12.0.jar:1.12.0] at 
> 
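The override behaviour this issue describes can be illustrated with a small self-contained model. `Conf`, `newHiveConf`, and the hard-coded default below are stand-ins invented for illustration (the real `HiveConf`/`JobConf` come from Hive and Hadoop); the sketch only models why constructing a HiveConf from a jobConf can lose jobConf values, and why re-adding the jobConf as a resource afterwards restores them:

```java
import java.util.HashMap;
import java.util.Map;

public class HiveConfSketch {

    // Tiny stand-in for a Hadoop-style configuration object.
    public static class Conf {
        final Map<String, String> props = new HashMap<>();

        public String get(String key) { return props.get(key); }
        public void set(String key, String value) { props.put(key, value); }

        // Later resources win, mirroring the "last resource added takes
        // precedence" semantics of Hadoop's Configuration.addResource.
        public void addResource(Conf other) { props.putAll(other.props); }
    }

    // Models `new HiveConf(jobConf, HiveConf.class)`: Hive's own defaults are
    // applied after copying the jobConf, clobbering keys the jobConf carried.
    public static Conf newHiveConf(Conf jobConf) {
        Conf hiveConf = new Conf();
        hiveConf.addResource(jobConf);
        hiveConf.set("hive.metastore.sasl.enabled", "false"); // Hive default wins
        return hiveConf;
    }

    public static void main(String[] args) {
        Conf jobConf = new Conf();
        jobConf.set("hive.metastore.sasl.enabled", "true");

        Conf broken = newHiveConf(jobConf);
        System.out.println(broken.get("hive.metastore.sasl.enabled")); // false - value lost

        // The proposed fix: re-apply the jobConf so its values take precedence.
        Conf fixed = newHiveConf(jobConf);
        fixed.addResource(jobConf);
        System.out.println(fixed.get("hive.metastore.sasl.enabled")); // true - restored
    }
}
```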

[jira] [Comment Edited] (FLINK-20913) Improve new HiveConf(jobConf, HiveConf.class)

2021-01-20 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17267695#comment-17267695
 ] 

Jark Wu edited comment on FLINK-20913 at 1/21/21, 3:05 AM:
---

Fixed in
 - master: ccc306f398f1b74681f12667de855b940dc61498
 - release-1.12: fb93fdbe4eaa8149ef4f0e61a94d9d9a32fc5306
 - release-1.11: 0767fd8d709adbc01328084f9b7099fce6d26846


was (Author: jark):
Fixed in
 - master: ccc306f398f1b74681f12667de855b940dc61498


[GitHub] [flink] wuchong merged pull request #14707: [FLINK-20913][hive]Improve HiveConf creation for release-1.11

2021-01-20 Thread GitBox


wuchong merged pull request #14707:
URL: https://github.com/apache/flink/pull/14707


   







[jira] [Updated] (FLINK-21018) Follow up of FLINK-20488 to update checkpoint related documentation for UI

2021-01-20 Thread Yuan Mei (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuan Mei updated FLINK-21018:
-
Attachment: 屏幕快照 2021-01-21 上午11.38.25.png

> Follow up of FLINK-20488 to update checkpoint related documentation for UI
> --
>
> Key: FLINK-21018
> URL: https://issues.apache.org/jira/browse/FLINK-21018
> Project: Flink
>  Issue Type: Task
>Reporter: Yuan Mei
>Priority: Major
>  Labels: pull-request-available
> Attachments: 屏幕快照 2021-01-21 上午11.38.02.png, 屏幕快照 2021-01-21 
> 上午11.38.25.png
>
>
> Follow up of FLINK-20488 to update checkpoint-related documentation as well.
> https://ci.apache.org/projects/flink/flink-docs-stable/ops/monitoring/checkpoint_monitoring.html#checkpoint-details



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-21018) Follow up of FLINK-20488 to update checkpoint related documentation for UI

2021-01-20 Thread Yuan Mei (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuan Mei updated FLINK-21018:
-
Attachment: 屏幕快照 2021-01-21 上午11.38.02.png



[GitHub] [flink] flinkbot commented on pull request #14710: [FLINK-19393][docs-zh] Translate the 'SQL Hints' page of 'Table API & SQL' into Chinese

2021-01-20 Thread GitBox


flinkbot commented on pull request #14710:
URL: https://github.com/apache/flink/pull/14710#issuecomment-764230111


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit da3f3f6a21306e3b4fa3cfc7b79ef34d825f5efb (Thu Jan 21 
04:04:52 UTC 2021)
   
✅no warnings
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   







[GitHub] [flink] flinkbot edited a comment on pull request #14691: [FLINK-21018] Update checkpoint related documentation for UI

2021-01-20 Thread GitBox


flinkbot edited a comment on pull request #14691:
URL: https://github.com/apache/flink/pull/14691#issuecomment-762651310


   
   ## CI report:
   
   * c02e54d3e599a05ebb96cbb66aaade1e058da0fe Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=12218)
 
   * 6a766507fc023a0642ab36f6eca289e479b4b84b UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   






