[jira] [Updated] (FLINK-20798) Using PVC as high-availability.storageDir could not work

2020-12-29 Thread Yang Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20798?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Wang updated FLINK-20798:
--
Description: 
I deployed Flink on Kubernetes using a PVC as the high-availability storageDir. From the JobManager logs, leader election succeeded, but the web UI keeps showing that an election is in progress.

When deploying standalone Flink on Kubernetes and configuring the 
{{high-availability.storageDir}} to a mounted PVC directory, the Flink web UI 
cannot be visited normally. It shows "Service temporarily unavailable 
due to an ongoing leader election. Please refresh".

 

The following are the related JobManager logs.

```

2020-12-29T06:45:54.177850394Z 2020-12-29 14:45:54,177 DEBUG 
io.fabric8.kubernetes.client.extended.leaderelection.LeaderElector [] - Leader 
election started
 2020-12-29T06:45:54.177855303Z 2020-12-29 14:45:54,177 DEBUG 
io.fabric8.kubernetes.client.extended.leaderelection.LeaderElector [] - 
Attempting to acquire leader lease 'ConfigMapLock: default - 
mta-flink-resourcemanager-leader (6f6479c6-86cc-4d62-84f9-37ff968bd0e5)'...
 2020-12-29T06:45:54.178668055Z 2020-12-29 14:45:54,178 DEBUG 
io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager [] - WebSocket 
successfully opened
 2020-12-29T06:45:54.178895963Z 2020-12-29 14:45:54,178 INFO 
org.apache.flink.runtime.leaderretrieval.DefaultLeaderRetrievalService [] - 
Starting DefaultLeaderRetrievalService with 
KubernetesLeaderRetrievalDriver{configMapName='mta-flink-resourcemanager-leader'}.
 2020-12-29T06:45:54.179327491Z 2020-12-29 14:45:54,179 DEBUG 
io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager [] - 
Connecting websocket ... 
io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager@6d303498
 2020-12-29T06:45:54.230081993Z 2020-12-29 14:45:54,229 DEBUG 
io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager [] - WebSocket 
successfully opened
 2020-12-29T06:45:54.230202329Z 2020-12-29 14:45:54,230 INFO 
org.apache.flink.runtime.leaderretrieval.DefaultLeaderRetrievalService [] - 
Starting DefaultLeaderRetrievalService with 
KubernetesLeaderRetrievalDriver{configMapName='mta-flink-dispatcher-leader'}.
 2020-12-29T06:45:54.230219281Z 2020-12-29 14:45:54,229 DEBUG 
io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager [] - WebSocket 
successfully opened
 2020-12-29T06:45:54.230353912Z 2020-12-29 14:45:54,230 INFO 
org.apache.flink.runtime.leaderelection.DefaultLeaderElectionService [] - 
Starting DefaultLeaderElectionService with 
KubernetesLeaderElectionDriver{configMapName='mta-flink-resourcemanager-leader'}.
 2020-12-29T06:45:54.237004177Z 2020-12-29 14:45:54,236 DEBUG 
io.fabric8.kubernetes.client.extended.leaderelection.LeaderElector [] - Leader 
changed from null to 6f6479c6-86cc-4d62-84f9-37ff968bd0e5
 2020-12-29T06:45:54.237024655Z 2020-12-29 14:45:54,236 INFO 
org.apache.flink.kubernetes.kubeclient.resources.KubernetesLeaderElector [] - 
New leader elected 6f6479c6-86cc-4d62-84f9-37ff968bd0e5 for 
mta-flink-restserver-leader.
 2020-12-29T06:45:54.237027811Z 2020-12-29 14:45:54,236 DEBUG 
io.fabric8.kubernetes.client.extended.leaderelection.LeaderElector [] - 
Successfully Acquired leader lease 'ConfigMapLock: default - 
mta-flink-restserver-leader (6f6479c6-86cc-4d62-84f9-37ff968bd0e5)'
 2020-12-29T06:45:54.237297376Z 2020-12-29 14:45:54,237 DEBUG 
org.apache.flink.runtime.leaderelection.DefaultLeaderElectionService [] - Grant 
leadership to contender 
http://mta-flink-jobmanager:8081 with 
session ID 9587e13f-322f-4cd5-9fff-b4941462be0f.
 2020-12-29T06:45:54.237353551Z 2020-12-29 14:45:54,237 INFO 
org.apache.flink.runtime.dispatcher.DispatcherRestEndpoint [] - 
http://mta-flink-jobmanager:8081 was 
granted leadership with leaderSessionID=9587e13f-322f-4cd5-9fff-b4941462be0f
 2020-12-29T06:45:54.237440354Z 2020-12-29 14:45:54,237 DEBUG 
org.apache.flink.runtime.leaderelection.DefaultLeaderElectionService [] - 
Confirm leader session ID 9587e13f-322f-4cd5-9fff-b4941462be0f for leader 
http://mta-flink-jobmanager:8081.
 2020-12-29T06:45:54.254555127Z 2020-12-29 14:45:54,254 DEBUG 
io.fabric8.kubernetes.client.extended.leaderelection.LeaderElector [] - Leader 
changed from null to 6f6479c6-86cc-4d62-84f9-37ff968bd0e5
 2020-12-29T06:45:54.254588299Z 2020-12-29 14:45:54,254 INFO 
org.apache.flink.kubernetes.kubeclient.resources.KubernetesLeaderElector [] - 
New leader elected 6f6479c6-86cc-4d62-84f9-37ff968bd0e5 for 
mta-flink-resourcemanager-leader.
 2020-12-29T06:45:54.254628053Z 2020-12-29 14:45:54,254 DEBUG 
io.fabric8.kubernetes.client.extended.leaderelection.LeaderElector [] - 
Successfully Acquired leader lease 'ConfigMapLock: default - 
mta-flink-resourcemanager-leader (6f6479c6-86cc-4d62-84f9-37ff968bd0e5)'
 2020-12-29T06:45:54.254871569Z 2020-12-29 14:45:54,254 DEBUG 

```
[jira] [Commented] (FLINK-20798) Using PVC as high-availability.storageDir could not work

2020-12-29 Thread Yang Wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17256365#comment-17256365
 ] 

Yang Wang commented on FLINK-20798:
---

[~hayden zhou] I do not find any suspicious logs in the attached log file. The 
rest endpoint "http://mta-flink-jobmanager:8081" is granted leadership 
successfully. After the leadership is granted, I believe you should be able to 
visit the web UI normally.

 

For {{FileSystemHaServiceFactory}}, it is not supported in Flink right now. But 
I think you could provide your own implementation and configure the HA mode 
accordingly. Please remember to replace the Deployment with a StatefulSet for 
the JobManager.
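For reference, enabling Flink's native Kubernetes HA in 1.12 looks roughly like the following flink-conf.yaml fragment (a sketch based on the 1.12 configuration keys; the PVC mount path /flink-ha is a hypothetical example):

```yaml
# Kubernetes HA services; the storageDir must be a path every JobManager can
# reach, e.g. a PVC volume mounted into the JobManager pod at /flink-ha.
kubernetes.cluster-id: mta-flink
high-availability: org.apache.flink.kubernetes.highavailability.KubernetesHaServicesFactory
high-availability.storageDir: file:///flink-ha
```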

> Using PVC as high-availability.storageDir could not work
> 
>
> Key: FLINK-20798
> URL: https://issues.apache.org/jira/browse/FLINK-20798
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / Kubernetes
>Affects Versions: 1.12.0
> Environment: FLINK 1.12.0
>Reporter: hayden zhou
>Priority: Major
> Attachments: flink.log
>
>
> I deployed Flink on Kubernetes using a PVC as the high-availability 
> storageDir. From the JobManager logs, leader election succeeded, but the web 
> UI keeps showing that an election is in progress.
>  
> The following are the related JobManager logs.
> ```
> 2020-12-29T06:45:54.177850394Z 2020-12-29 14:45:54,177 DEBUG 
> io.fabric8.kubernetes.client.extended.leaderelection.LeaderElector [] - 
> Leader election started
> 2020-12-29T06:45:54.177855303Z 2020-12-29 14:45:54,177 DEBUG 
> io.fabric8.kubernetes.client.extended.leaderelection.LeaderElector [] - 
> Attempting to acquire leader lease 'ConfigMapLock: default - 
> mta-flink-resourcemanager-leader (6f6479c6-86cc-4d62-84f9-37ff968bd0e5)'...
> 2020-12-29T06:45:54.178668055Z 2020-12-29 14:45:54,178 DEBUG 
> io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager [] - 
> WebSocket successfully opened
> 2020-12-29T06:45:54.178895963Z 2020-12-29 14:45:54,178 INFO 
> org.apache.flink.runtime.leaderretrieval.DefaultLeaderRetrievalService [] - 
> Starting DefaultLeaderRetrievalService with 
> KubernetesLeaderRetrievalDriver{configMapName='mta-flink-resourcemanager-leader'}.
> 2020-12-29T06:45:54.179327491Z 2020-12-29 14:45:54,179 DEBUG 
> io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager [] - 
> Connecting websocket ... 
> io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager@6d303498
> 2020-12-29T06:45:54.230081993Z 2020-12-29 14:45:54,229 DEBUG 
> io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager [] - 
> WebSocket successfully opened
> 2020-12-29T06:45:54.230202329Z 2020-12-29 14:45:54,230 INFO 
> org.apache.flink.runtime.leaderretrieval.DefaultLeaderRetrievalService [] - 
> Starting DefaultLeaderRetrievalService with 
> KubernetesLeaderRetrievalDriver{configMapName='mta-flink-dispatcher-leader'}.
> 2020-12-29T06:45:54.230219281Z 2020-12-29 14:45:54,229 DEBUG 
> io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager [] - 
> WebSocket successfully opened
> 2020-12-29T06:45:54.230353912Z 2020-12-29 14:45:54,230 INFO 
> org.apache.flink.runtime.leaderelection.DefaultLeaderElectionService [] - 
> Starting DefaultLeaderElectionService with 
> KubernetesLeaderElectionDriver{configMapName='mta-flink-resourcemanager-leader'}.
> 2020-12-29T06:45:54.237004177Z 2020-12-29 14:45:54,236 DEBUG 
> io.fabric8.kubernetes.client.extended.leaderelection.LeaderElector [] - 
> Leader changed from null to 6f6479c6-86cc-4d62-84f9-37ff968bd0e5
> 2020-12-29T06:45:54.237024655Z 2020-12-29 14:45:54,236 INFO 
> org.apache.flink.kubernetes.kubeclient.resources.KubernetesLeaderElector [] - 
> New leader elected 6f6479c6-86cc-4d62-84f9-37ff968bd0e5 for 
> mta-flink-restserver-leader.
> 2020-12-29T06:45:54.237027811Z 2020-12-29 14:45:54,236 DEBUG 
> io.fabric8.kubernetes.client.extended.leaderelection.LeaderElector [] - 
> Successfully Acquired leader lease 'ConfigMapLock: default - 
> mta-flink-restserver-leader (6f6479c6-86cc-4d62-84f9-37ff968bd0e5)'
> 2020-12-29T06:45:54.237297376Z 2020-12-29 14:45:54,237 DEBUG 
> org.apache.flink.runtime.leaderelection.DefaultLeaderElectionService [] - 
> Grant leadership to contender http://mta-flink-jobmanager:8081 with session 
> ID 9587e13f-322f-4cd5-9fff-b4941462be0f.
> 2020-12-29T06:45:54.237353551Z 2020-12-29 14:45:54,237 INFO 
> org.apache.flink.runtime.dispatcher.DispatcherRestEndpoint [] - 
> http://mta-flink-jobmanager:8081 was granted leadership with 
> leaderSessionID=9587e13f-322f-4cd5-9fff-b4941462be0f
> 2020-12-29T06:45:54.237440354Z 2020-12-29 14:45:54,237 DEBUG 
> org.apache.flink.runtime.leaderelection.DefaultLeaderElectionService [] - 
> Confirm leader session ID 9587e13f-322f-4cd5-9fff-b4941462be0f for leader 
> http://mta-flink-jobmanager:8081.
> 2020-12-29T06:45:54.254555127Z 2020-12-29 14:45:54,254 DEBUG 
> io.fabric8.kubernetes.client.extended.leaderelection.LeaderElector [] - 
> Leader 

[GitHub] [flink] flinkbot edited a comment on pull request #14464: [FLINK-20385][canal][json] Allow to read metadata for canal-json format

2020-12-29 Thread GitBox


flinkbot edited a comment on pull request #14464:
URL: https://github.com/apache/flink/pull/14464#issuecomment-749543461


   
   ## CI report:
   
   * e81abd70d44d6ce7221d4cd2f44d31deab78ea9a UNKNOWN
   * 554e0a2c2ada42836d65397b163bad657806e8c2 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=11456)
 
   * 3f3b88d98c525b635179b6a9098a2634a1ffb42c Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=11483)
 
   * 5a5ef1f0bb5e1dcc094ad5454890d13ddf55c646 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] wuchong commented on pull request #14464: [FLINK-20385][canal][json] Allow to read metadata for canal-json format

2020-12-29 Thread GitBox


wuchong commented on pull request #14464:
URL: https://github.com/apache/flink/pull/14464#issuecomment-752361626


   I helped to beautify the format. Will merge this once Azure passes. 







[jira] [Updated] (FLINK-20816) NotifyCheckpointAbortedITCase failed due to timeout

2020-12-29 Thread Matthias (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias updated FLINK-20816:
-
Description: 
[This 
build|https://dev.azure.com/mapohl/flink/_build/results?buildId=152=logs=0a15d512-44ac-5ba5-97ab-13a5d066c22c=634cd701-c189-5dff-24cb-606ed884db87=4245]
 failed because {{NotifyCheckpointAbortedITCase}} timed out.
{code}
2020-12-29T21:48:40.9430511Z [INFO] Running 
org.apache.flink.test.checkpointing.NotifyCheckpointAbortedITCase
2020-12-29T21:50:28.0087043Z [ERROR] Tests run: 2, Failures: 0, Errors: 1, 
Skipped: 0, Time elapsed: 107.062 s <<< FAILURE! - in 
org.apache.flink.test.checkpointing.NotifyCheckpointAbortedITCase
2020-12-29T21:50:28.0087961Z [ERROR] 
testNotifyCheckpointAborted[unalignedCheckpointEnabled 
=true](org.apache.flink.test.checkpointing.NotifyCheckpointAbortedITCase)  Time 
elapsed: 104.044 s  <<< ERROR!
2020-12-29T21:50:28.0088619Z org.junit.runners.model.TestTimedOutException: 
test timed out after 10 milliseconds
2020-12-29T21:50:28.0088972Zat java.lang.Object.wait(Native Method)
2020-12-29T21:50:28.0089267Zat java.lang.Object.wait(Object.java:502)
2020-12-29T21:50:28.0089633Zat 
org.apache.flink.core.testutils.OneShotLatch.await(OneShotLatch.java:61)
2020-12-29T21:50:28.0090458Zat 
org.apache.flink.test.checkpointing.NotifyCheckpointAbortedITCase.verifyAllOperatorsNotifyAborted(NotifyCheckpointAbortedITCase.java:200)
2020-12-29T21:50:28.0091313Zat 
org.apache.flink.test.checkpointing.NotifyCheckpointAbortedITCase.testNotifyCheckpointAborted(NotifyCheckpointAbortedITCase.java:183)
2020-12-29T21:50:28.0091819Zat 
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
2020-12-29T21:50:28.0092199Zat 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
2020-12-29T21:50:28.0092675Zat 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
2020-12-29T21:50:28.0093095Zat 
java.lang.reflect.Method.invoke(Method.java:498)
2020-12-29T21:50:28.0093495Zat 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
2020-12-29T21:50:28.0093980Zat 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
2020-12-29T21:50:28.009Zat 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
2020-12-29T21:50:28.0094917Zat 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
2020-12-29T21:50:28.0095663Zat 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
2020-12-29T21:50:28.0096221Zat 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
2020-12-29T21:50:28.0096675Zat 
java.util.concurrent.FutureTask.run(FutureTask.java:266)
2020-12-29T21:50:28.0097022Zat java.lang.Thread.run(Thread.java:748)
{code}

The branch contained changes from FLINK-20594 and FLINK-20595. These issues 
remove code that is no longer used and should have affected only unit tests. 
[The previous 
build|https://dev.azure.com/mapohl/flink/_build/results?buildId=151=results]
 containing all the changes except for 
[9c57c37|https://github.com/XComp/flink/commit/9c57c37c50733a1f592a4fc5e492b22be80d8279]
 passed.
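The test blocks in Flink's OneShotLatch until the abort notification arrives. As context, the await-with-timeout behavior can be sketched with plain java.util.concurrent (a hypothetical stand-in, not Flink's actual OneShotLatch implementation):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

// Single-use latch sketch: await() blocks until trigger() has been called once.
class SimpleOneShotLatch {
    private final CountDownLatch latch = new CountDownLatch(1);

    void trigger() {
        latch.countDown();
    }

    void await(long timeoutMillis) throws InterruptedException, TimeoutException {
        if (!latch.await(timeoutMillis, TimeUnit.MILLISECONDS)) {
            // This is the condition that surfaces as TestTimedOutException above.
            throw new TimeoutException("not triggered within " + timeoutMillis + " ms");
        }
    }
}

public class LatchDemo {
    public static void main(String[] args) throws Exception {
        SimpleOneShotLatch latch = new SimpleOneShotLatch();
        new Thread(latch::trigger).start(); // stands in for the operator notifying the abort
        latch.await(5_000);                 // the test would block here until notified
        System.out.println("triggered");
    }
}
```

If the notification never arrives, as in the failing build, await() times out instead of returning.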


[GitHub] [flink] flinkbot edited a comment on pull request #14464: [FLINK-20385][canal][json] Allow to read metadata for canal-json format

2020-12-29 Thread GitBox


flinkbot edited a comment on pull request #14464:
URL: https://github.com/apache/flink/pull/14464#issuecomment-749543461


   
   ## CI report:
   
   * e81abd70d44d6ce7221d4cd2f44d31deab78ea9a UNKNOWN
   * 554e0a2c2ada42836d65397b163bad657806e8c2 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=11456)
 
   * 3f3b88d98c525b635179b6a9098a2634a1ffb42c Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=11483)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[jira] [Commented] (FLINK-20301) Flink sql 1.10 : Legacy Decimal and decimal for Array that is not Compatible

2020-12-29 Thread hehuiyuan (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17256349#comment-17256349
 ] 

hehuiyuan commented on FLINK-20301:
---

Array & Array are not compatible.

> Flink sql 1.10 : Legacy Decimal and decimal  for Array  that is not Compatible
> --
>
> Key: FLINK-20301
> URL: https://issues.apache.org/jira/browse/FLINK-20301
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.10.0
>Reporter: hehuiyuan
>Priority: Minor
> Attachments: image-2020-11-23-23-48-02-102.png, 
> image-2020-12-24-18-45-26-403.png
>
>
> The error log:
>  
> {code:java}
> Exception in thread "main" org.apache.flink.table.api.ValidationException: 
> Type ARRAY of table field 'numbers' does not match with the 
> physical type ARRAY of the 'numbers' field of 
> the TableSource return type.Exception in thread "main" 
> org.apache.flink.table.api.ValidationException: Type ARRAY 
> of table field 'numbers' does not match with the physical type 
> ARRAY of the 'numbers' field of the TableSource 
> return type. at 
> org.apache.flink.table.utils.TypeMappingUtils.lambda$checkPhysicalLogicalTypeCompatible$4(TypeMappingUtils.java:160)
>  at 
> org.apache.flink.table.utils.TypeMappingUtils.checkPhysicalLogicalTypeCompatible(TypeMappingUtils.java:185)
>  at 
> org.apache.flink.table.utils.TypeMappingUtils.lambda$computeInCompositeType$8(TypeMappingUtils.java:246)
>  at java.util.stream.Collectors.lambda$toMap$58(Collectors.java:1321) at 
> java.util.stream.ReduceOps$3ReducingSink.accept(ReduceOps.java:169) at 
> java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1382)
>  at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481) at 
> java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471) 
> at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708) 
> at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) at 
> java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499) at 
> org.apache.flink.table.utils.TypeMappingUtils.computeInCompositeType(TypeMappingUtils.java:228)
>  at 
> org.apache.flink.table.utils.TypeMappingUtils.computePhysicalIndices(TypeMappingUtils.java:206)
>  at 
> org.apache.flink.table.utils.TypeMappingUtils.computePhysicalIndicesOrTimeAttributeMarkers(TypeMappingUtils.java:110)
>  at 
> org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecTableSourceScan.computeIndexMapping(StreamExecTableSourceScan.scala:212)
>  at 
> org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecTableSourceScan.translateToPlanInternal(StreamExecTableSourceScan.scala:107)
>  at 
> org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecTableSourceScan.translateToPlanInternal(StreamExecTableSourceScan.scala:62)
>  at 
> org.apache.flink.table.planner.plan.nodes.exec.ExecNode$class.translateToPlan(ExecNode.scala:58)
>  at 
> org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecTableSourceScan.translateToPlan(StreamExecTableSourceScan.scala:62)
>  at 
> org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecSink.translateToTransformation(StreamExecSink.scala:184)
>  at 
> org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecSink.translateToPlanInternal(StreamExecSink.scala:118)
>  at 
> org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecSink.translateToPlanInternal(StreamExecSink.scala:48)
>  at 
> org.apache.flink.table.planner.plan.nodes.exec.ExecNode$class.translateToPlan(ExecNode.scala:58)
>  at 
> org.apache.flink.table.planner.plan.nodes.physical.stream.StreamExecSink.translateToPlan(StreamExecSink.scala:48)
>  at 
> org.apache.flink.table.planner.delegation.StreamPlanner$$anonfun$translateToPlan$1.apply(StreamPlanner.scala:60)
>  at 
> org.apache.flink.table.planner.delegation.StreamPlanner$$anonfun$translateToPlan$1.apply(StreamPlanner.scala:59)
>  at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>  at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>  at scala.collection.Iterator$class.foreach(Iterator.scala:893) at 
> scala.collection.AbstractIterator.foreach(Iterator.scala:1336) at 
> scala.collection.IterableLike$class.foreach(IterableLike.scala:72) at 
> scala.collection.AbstractIterable.foreach(Iterable.scala:54) at 
> scala.collection.TraversableLike$class.map(TraversableLike.scala:234) at 
> scala.collection.AbstractTraversable.map(Traversable.scala:104) at 
> org.apache.flink.table.planner.delegation.StreamPlanner.translateToPlan(StreamPlanner.scala:59)
>  at 
> org.apache.flink.table.planner.delegation.PlannerBase.translate(PlannerBase.scala:153)
>  

[jira] [Created] (FLINK-20816) NotifyCheckpointAbortedITCase failed due to timeout

2020-12-29 Thread Matthias (Jira)
Matthias created FLINK-20816:


 Summary: NotifyCheckpointAbortedITCase failed due to timeout
 Key: FLINK-20816
 URL: https://issues.apache.org/jira/browse/FLINK-20816
 Project: Flink
  Issue Type: Bug
  Components: Runtime / Checkpointing
Affects Versions: 1.13.0
Reporter: Matthias


[This 
build|https://dev.azure.com/mapohl/flink/_build/results?buildId=152=logs=0a15d512-44ac-5ba5-97ab-13a5d066c22c=634cd701-c189-5dff-24cb-606ed884db87=4245]
 failed because {{NotifyCheckpointAbortedITCase}} timed out.
{code}
2020-12-29T21:48:40.9430511Z [INFO] Running 
org.apache.flink.test.checkpointing.NotifyCheckpointAbortedITCase
2020-12-29T21:50:28.0087043Z [ERROR] Tests run: 2, Failures: 0, Errors: 1, 
Skipped: 0, Time elapsed: 107.062 s <<< FAILURE! - in 
org.apache.flink.test.checkpointing.NotifyCheckpointAbortedITCase
2020-12-29T21:50:28.0087961Z [ERROR] 
testNotifyCheckpointAborted[unalignedCheckpointEnabled 
=true](org.apache.flink.test.checkpointing.NotifyCheckpointAbortedITCase)  Time 
elapsed: 104.044 s  <<< ERROR!
2020-12-29T21:50:28.0088619Z org.junit.runners.model.TestTimedOutException: 
test timed out after 10 milliseconds
2020-12-29T21:50:28.0088972Zat java.lang.Object.wait(Native Method)
2020-12-29T21:50:28.0089267Zat java.lang.Object.wait(Object.java:502)
2020-12-29T21:50:28.0089633Zat 
org.apache.flink.core.testutils.OneShotLatch.await(OneShotLatch.java:61)
2020-12-29T21:50:28.0090458Zat 
org.apache.flink.test.checkpointing.NotifyCheckpointAbortedITCase.verifyAllOperatorsNotifyAborted(NotifyCheckpointAbortedITCase.java:200)
2020-12-29T21:50:28.0091313Zat 
org.apache.flink.test.checkpointing.NotifyCheckpointAbortedITCase.testNotifyCheckpointAborted(NotifyCheckpointAbortedITCase.java:183)
2020-12-29T21:50:28.0091819Zat 
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
2020-12-29T21:50:28.0092199Zat 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
2020-12-29T21:50:28.0092675Zat 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
2020-12-29T21:50:28.0093095Zat 
java.lang.reflect.Method.invoke(Method.java:498)
2020-12-29T21:50:28.0093495Zat 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
2020-12-29T21:50:28.0093980Zat 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
2020-12-29T21:50:28.009Zat 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
2020-12-29T21:50:28.0094917Zat 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
2020-12-29T21:50:28.0095663Zat 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
2020-12-29T21:50:28.0096221Zat 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
2020-12-29T21:50:28.0096675Zat 
java.util.concurrent.FutureTask.run(FutureTask.java:266)
2020-12-29T21:50:28.0097022Zat java.lang.Thread.run(Thread.java:748)
{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-20814) The CEP code is not running properly

2020-12-29 Thread huqingwen (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17256346#comment-17256346
 ] 

huqingwen commented on FLINK-20814:
---

I have the same problem.

> The CEP code is not running properly
> 
>
> Key: FLINK-20814
> URL: https://issues.apache.org/jira/browse/FLINK-20814
> Project: Flink
>  Issue Type: Bug
>  Components: Library / CEP
>Affects Versions: 1.12.0
> Environment: flink1.12.0
> jdk1.8
>Reporter: little-tomato
>Priority: Blocker
>
> The CEP code runs properly on Flink 1.11.2, but it does not work properly 
> on Flink 1.12.0.
> Can somebody help me?
> StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
> // Source stream. (Note: the generic type parameters below were lost in the
> // mail archive and have been restored from the FlinkCEP API.)
> DataStream<TemperatureEvent> input = env.fromElements(
> new TemperatureEvent(1, "Device01", 22.0), new TemperatureEvent(1, "Device01", 27.1),
> new TemperatureEvent(2, "Device01", 28.1), new TemperatureEvent(1, "Device01", 22.2),
> new TemperatureEvent(3, "Device01", 22.1), new TemperatureEvent(1, "Device02", 22.3),
> new TemperatureEvent(4, "Device02", 22.1), new TemperatureEvent(1, "Device02", 22.4),
> new TemperatureEvent(5, "Device02", 22.7), new TemperatureEvent(1, "Device02", 27.0),
> new TemperatureEvent(6, "Device02", 30.0));
>
> Pattern<TemperatureEvent, TemperatureEvent> warningPattern = Pattern
> .<TemperatureEvent>begin("start")
> .subtype(TemperatureEvent.class)
> .where(new SimpleCondition<TemperatureEvent>() {
> @Override
> public boolean filter(TemperatureEvent subEvent) {
> return subEvent.getTemperature() >= 26.0;
> }
> })
> .where(new SimpleCondition<TemperatureEvent>() {
> @Override
> public boolean filter(TemperatureEvent subEvent) {
> return subEvent.getMachineName().equals("Device02");
> }
> })
> .within(Time.seconds(10));
>
> DataStream<Alert> patternStream = CEP.pattern(input, warningPattern)
> .select(new RichPatternSelectFunction<TemperatureEvent, Alert>() {
> private static final long serialVersionUID = 1L;
>
> @Override
> public void open(Configuration parameters) throws Exception {
> System.out.println(getRuntimeContext().getUserCodeClassLoader());
> }
>
> @Override
> public Alert select(Map<String, List<TemperatureEvent>> event) throws Exception {
> return new Alert("Temperature Rise Detected: " + event.get("start")
> + " on machine name: " + event.get("start"));
> }
> });
> patternStream.print();
> env.execute("CEP on Temperature Sensor");
> it should be output(on flink1.11.2):
> Alert [message=Temperature Rise Detected: [TemperatureEvent 
> [getTemperature()=27.0, getMachineName=Device02]] on machine name: 
> [TemperatureEvent [getTemperature()=27.0, getMachineName=Device02]]]
> Alert [message=Temperature Rise Detected: [TemperatureEvent 
> [getTemperature()=30.0, getMachineName=Device02]] on machine name: 
> [TemperatureEvent [getTemperature()=30.0, getMachineName=Device02]]]





[jira] [Comment Edited] (FLINK-20815) Elasticsearch7DynamicSinkITCase failed caused by pulling docker image error

2020-12-29 Thread zlzhang0122 (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17256344#comment-17256344
 ] 

zlzhang0122 edited comment on FLINK-20815 at 12/30/20, 7:19 AM:


This problem may be related to FLINK-20279.


was (Author: zlzhang0122):
This problem maybe relate to 
[FLINK-20279|https://issues.apache.org/jira/browse/FLINK-20279].
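For context, the root cause ("Could not find a valid Docker environment. Please see logs and check configuration") points at the build agent rather than at the Elasticsearch image itself. A minimal pre-flight probe before running Testcontainers-based ITs, assuming a POSIX shell and the `docker` CLI on the agent (an assumption, not something verified in this build), could look like:

```shell
#!/bin/sh
# Probe the Docker daemon before Testcontainers-based integration tests.
# Testcontainers raises "Could not find a valid Docker environment" when no
# client strategy can reach a daemon, so failing fast here gives a clearer signal.
if docker info >/dev/null 2>&1; then
  status=ok
else
  status=unavailable
fi
echo "docker daemon: $status"
```

If the daemon is unavailable, skipping the ES integration tests (instead of letting them error in class initialization) keeps the build report readable.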

> Elasticsearch7DynamicSinkITCase failed caused by pulling docker image error
> ---
>
> Key: FLINK-20815
> URL: https://issues.apache.org/jira/browse/FLINK-20815
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / ElasticSearch
>Affects Versions: 1.12.0
>Reporter: zlzhang0122
>Priority: Major
>
> {code:java}
> org.apache.flink.streaming.connectors.elasticsearch.table.Elasticsearch7DynamicSinkITCase
>  Time elapsed: 0.347 s <<< ERROR! java.lang.ExceptionInInitializerError  at 
> sun.misc.Unsafe.ensureClassInitialized(Native Method)  at 
> sun.reflect.UnsafeFieldAccessorFactory.newFieldAccessor(UnsafeFieldAccessorFactory.java:43)
>   at 
> sun.reflect.ReflectionFactory.newFieldAccessor(ReflectionFactory.java:156)  
> at java.lang.reflect.Field.acquireFieldAccessor(Field.java:1088)  at 
> java.lang.reflect.Field.getFieldAccessor(Field.java:1069)  at 
> java.lang.reflect.Field.get(Field.java:393)  at 
> org.junit.runners.model.FrameworkField.get(FrameworkField.java:73)  at 
> org.junit.runners.model.TestClass.getAnnotatedFieldValues(TestClass.java:230) 
>  at org.junit.runners.ParentRunner.classRules(ParentRunner.java:255)  at 
> org.junit.runners.ParentRunner.withClassRules(ParentRunner.java:244)  at 
> org.junit.runners.ParentRunner.classBlock(ParentRunner.java:194)  at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:362)  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)  
> at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418) 
> Caused by: org.testcontainers.containers.ContainerFetchException: Can't get 
> Docker image: 
> RemoteDockerImage(imageName=docker.elastic.co/elasticsearch/elasticsearch-oss:7.5.1,
>  imagePullPolicy=DefaultPullPolicy())  at 
> org.testcontainers.containers.GenericContainer.getDockerImageName(GenericContainer.java:1278)
>   at 
> org.testcontainers.containers.GenericContainer.logger(GenericContainer.java:612)
>   at 
> org.testcontainers.elasticsearch.ElasticsearchContainer.<init>(ElasticsearchContainer.java:73)
>   at 
> org.apache.flink.streaming.connectors.elasticsearch.table.Elasticsearch7DynamicSinkITCase.<clinit>(Elasticsearch7DynamicSinkITCase.java:72)
>   ... 20 more Caused by: java.lang.IllegalStateException: Could not find a 
> valid Docker environment. Please see logs and check configuration  at 
> org.testcontainers.dockerclient.DockerClientProviderStrategy.lambda$getFirstValidStrategy$7(DockerClientProviderStrategy.java:214)
>   at java.util.Optional.orElseThrow(Optional.java:290)  at 
> org.testcontainers.dockerclient.DockerClientProviderStrategy.getFirstValidStrategy(DockerClientProviderStrategy.java:206)
>   at 
> org.testcontainers.DockerClientFactory.getOrInitializeStrategy(DockerClientFactory.java:134)
>   at 
> org.testcontainers.DockerClientFactory.client(DockerClientFactory.java:176)  
> at 
> org.testcontainers.LazyDockerClient.getDockerClient(LazyDockerClient.java:14) 
>  at 
> org.testcontainers.LazyDockerClient.listImagesCmd(LazyDockerClient.java:12)  
> at 
> org.testcontainers.images.LocalImagesCache.maybeInitCache(LocalImagesCache.java:68)
>   at org.testcontainers.images.LocalImagesCache.get(LocalImagesCache.java:32) 
>  at 
> org.testcontainers.images.AbstractImagePullPolicy.shouldPull(AbstractImagePullPolicy.java:18)
>   at 
> org.testcontainers.images.RemoteDockerImage.resolve(RemoteDockerImage.java:66)
>   at 
> org.testcontainers.images.RemoteDockerImage.resolve(RemoteDockerImage.java:27)
>   at 
> org.testcontainers.utility.LazyFuture.getResolvedValue(LazyFuture.java:17)  
> at org.testcontainers.utility.LazyFuture.get(LazyFuture.java:39)  at 
> org.testcontainers.containers.GenericContainer.getDockerImageName(GenericContainer.java:1276)
>   ... 23 more [INFO] Running 
> 

[jira] [Commented] (FLINK-20815) Elasticsearch7DynamicSinkITCase failed caused by pulling docker image error

2020-12-29 Thread zlzhang0122 (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17256344#comment-17256344
 ] 

zlzhang0122 commented on FLINK-20815:
-

This problem may be related to 
[FLINK-20279|https://issues.apache.org/jira/browse/FLINK-20279].

> Elasticsearch7DynamicSinkITCase failed caused by pulling docker image error
> ---
>
> Key: FLINK-20815
> URL: https://issues.apache.org/jira/browse/FLINK-20815
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / ElasticSearch
>Affects Versions: 1.12.0
>Reporter: zlzhang0122
>Priority: Major
>
> {code:java}
> org.apache.flink.streaming.connectors.elasticsearch.table.Elasticsearch7DynamicSinkITCase
>  Time elapsed: 0.347 s <<< ERROR! java.lang.ExceptionInInitializerError  at 
> sun.misc.Unsafe.ensureClassInitialized(Native Method)  at 
> sun.reflect.UnsafeFieldAccessorFactory.newFieldAccessor(UnsafeFieldAccessorFactory.java:43)
>   at 
> sun.reflect.ReflectionFactory.newFieldAccessor(ReflectionFactory.java:156)  
> at java.lang.reflect.Field.acquireFieldAccessor(Field.java:1088)  at 
> java.lang.reflect.Field.getFieldAccessor(Field.java:1069)  at 
> java.lang.reflect.Field.get(Field.java:393)  at 
> org.junit.runners.model.FrameworkField.get(FrameworkField.java:73)  at 
> org.junit.runners.model.TestClass.getAnnotatedFieldValues(TestClass.java:230) 
>  at org.junit.runners.ParentRunner.classRules(ParentRunner.java:255)  at 
> org.junit.runners.ParentRunner.withClassRules(ParentRunner.java:244)  at 
> org.junit.runners.ParentRunner.classBlock(ParentRunner.java:194)  at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:362)  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)  
> at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418) 
> Caused by: org.testcontainers.containers.ContainerFetchException: Can't get 
> Docker image: 
> RemoteDockerImage(imageName=docker.elastic.co/elasticsearch/elasticsearch-oss:7.5.1,
>  imagePullPolicy=DefaultPullPolicy())  at 
> org.testcontainers.containers.GenericContainer.getDockerImageName(GenericContainer.java:1278)
>   at 
> org.testcontainers.containers.GenericContainer.logger(GenericContainer.java:612)
>   at 
> org.testcontainers.elasticsearch.ElasticsearchContainer.<init>(ElasticsearchContainer.java:73)
>   at 
> org.apache.flink.streaming.connectors.elasticsearch.table.Elasticsearch7DynamicSinkITCase.<clinit>(Elasticsearch7DynamicSinkITCase.java:72)
>   ... 20 more Caused by: java.lang.IllegalStateException: Could not find a 
> valid Docker environment. Please see logs and check configuration  at 
> org.testcontainers.dockerclient.DockerClientProviderStrategy.lambda$getFirstValidStrategy$7(DockerClientProviderStrategy.java:214)
>   at java.util.Optional.orElseThrow(Optional.java:290)  at 
> org.testcontainers.dockerclient.DockerClientProviderStrategy.getFirstValidStrategy(DockerClientProviderStrategy.java:206)
>   at 
> org.testcontainers.DockerClientFactory.getOrInitializeStrategy(DockerClientFactory.java:134)
>   at 
> org.testcontainers.DockerClientFactory.client(DockerClientFactory.java:176)  
> at 
> org.testcontainers.LazyDockerClient.getDockerClient(LazyDockerClient.java:14) 
>  at 
> org.testcontainers.LazyDockerClient.listImagesCmd(LazyDockerClient.java:12)  
> at 
> org.testcontainers.images.LocalImagesCache.maybeInitCache(LocalImagesCache.java:68)
>   at org.testcontainers.images.LocalImagesCache.get(LocalImagesCache.java:32) 
>  at 
> org.testcontainers.images.AbstractImagePullPolicy.shouldPull(AbstractImagePullPolicy.java:18)
>   at 
> org.testcontainers.images.RemoteDockerImage.resolve(RemoteDockerImage.java:66)
>   at 
> org.testcontainers.images.RemoteDockerImage.resolve(RemoteDockerImage.java:27)
>   at 
> org.testcontainers.utility.LazyFuture.getResolvedValue(LazyFuture.java:17)  
> at org.testcontainers.utility.LazyFuture.get(LazyFuture.java:39)  at 
> org.testcontainers.containers.GenericContainer.getDockerImageName(GenericContainer.java:1276)
>   ... 23 more [INFO] Running 
> org.apache.flink.streaming.connectors.elasticsearch7.ElasticsearchSinkITCase 
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.4 

[jira] [Created] (FLINK-20815) Elasticsearch7DynamicSinkITCase failed caused by pulling docker image error

2020-12-29 Thread zlzhang0122 (Jira)
zlzhang0122 created FLINK-20815:
---

 Summary: Elasticsearch7DynamicSinkITCase failed caused by pulling 
docker image error
 Key: FLINK-20815
 URL: https://issues.apache.org/jira/browse/FLINK-20815
 Project: Flink
  Issue Type: Bug
  Components: Connectors / ElasticSearch
Affects Versions: 1.12.0
Reporter: zlzhang0122


{code:java}
org.apache.flink.streaming.connectors.elasticsearch.table.Elasticsearch7DynamicSinkITCase
 Time elapsed: 0.347 s <<< ERROR! java.lang.ExceptionInInitializerError  at 
sun.misc.Unsafe.ensureClassInitialized(Native Method)  at 
sun.reflect.UnsafeFieldAccessorFactory.newFieldAccessor(UnsafeFieldAccessorFactory.java:43)
  at sun.reflect.ReflectionFactory.newFieldAccessor(ReflectionFactory.java:156) 
 at java.lang.reflect.Field.acquireFieldAccessor(Field.java:1088)  at 
java.lang.reflect.Field.getFieldAccessor(Field.java:1069)  at 
java.lang.reflect.Field.get(Field.java:393)  at 
org.junit.runners.model.FrameworkField.get(FrameworkField.java:73)  at 
org.junit.runners.model.TestClass.getAnnotatedFieldValues(TestClass.java:230)  
at org.junit.runners.ParentRunner.classRules(ParentRunner.java:255)  at 
org.junit.runners.ParentRunner.withClassRules(ParentRunner.java:244)  at 
org.junit.runners.ParentRunner.classBlock(ParentRunner.java:194)  at 
org.junit.runners.ParentRunner.run(ParentRunner.java:362)  at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
  at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
  at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
  at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159) 
 at 
org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
  at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
  at 
org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)  
at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418) 
Caused by: org.testcontainers.containers.ContainerFetchException: Can't get 
Docker image: 
RemoteDockerImage(imageName=docker.elastic.co/elasticsearch/elasticsearch-oss:7.5.1,
 imagePullPolicy=DefaultPullPolicy())  at 
org.testcontainers.containers.GenericContainer.getDockerImageName(GenericContainer.java:1278)
  at 
org.testcontainers.containers.GenericContainer.logger(GenericContainer.java:612)
  at 
org.testcontainers.elasticsearch.ElasticsearchContainer.<init>(ElasticsearchContainer.java:73)
  at 
org.apache.flink.streaming.connectors.elasticsearch.table.Elasticsearch7DynamicSinkITCase.<clinit>(Elasticsearch7DynamicSinkITCase.java:72)
  ... 20 more Caused by: java.lang.IllegalStateException: Could not find a 
valid Docker environment. Please see logs and check configuration  at 
org.testcontainers.dockerclient.DockerClientProviderStrategy.lambda$getFirstValidStrategy$7(DockerClientProviderStrategy.java:214)
  at java.util.Optional.orElseThrow(Optional.java:290)  at 
org.testcontainers.dockerclient.DockerClientProviderStrategy.getFirstValidStrategy(DockerClientProviderStrategy.java:206)
  at 
org.testcontainers.DockerClientFactory.getOrInitializeStrategy(DockerClientFactory.java:134)
  at 
org.testcontainers.DockerClientFactory.client(DockerClientFactory.java:176)  at 
org.testcontainers.LazyDockerClient.getDockerClient(LazyDockerClient.java:14)  
at org.testcontainers.LazyDockerClient.listImagesCmd(LazyDockerClient.java:12)  
at 
org.testcontainers.images.LocalImagesCache.maybeInitCache(LocalImagesCache.java:68)
  at org.testcontainers.images.LocalImagesCache.get(LocalImagesCache.java:32)  
at 
org.testcontainers.images.AbstractImagePullPolicy.shouldPull(AbstractImagePullPolicy.java:18)
  at 
org.testcontainers.images.RemoteDockerImage.resolve(RemoteDockerImage.java:66)  
at 
org.testcontainers.images.RemoteDockerImage.resolve(RemoteDockerImage.java:27)  
at org.testcontainers.utility.LazyFuture.getResolvedValue(LazyFuture.java:17)  
at org.testcontainers.utility.LazyFuture.get(LazyFuture.java:39)  at 
org.testcontainers.containers.GenericContainer.getDockerImageName(GenericContainer.java:1276)
  ... 23 more [INFO] Running 
org.apache.flink.streaming.connectors.elasticsearch7.ElasticsearchSinkITCase 
[ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.4 s 
<<< FAILURE! - in 
org.apache.flink.streaming.connectors.elasticsearch7.ElasticsearchSinkITCase 
[ERROR] 
org.apache.flink.streaming.connectors.elasticsearch7.ElasticsearchSinkITCase 
Time elapsed: 0.4 s <<< ERROR! java.lang.ExceptionInInitializerError  at 
sun.misc.Unsafe.ensureClassInitialized(Native Method)  at 
sun.reflect.UnsafeFieldAccessorFactory.newFieldAccessor(UnsafeFieldAccessorFactory.java:43)
  at sun.reflect.ReflectionFactory.newFieldAccessor(ReflectionFactory.java:156) 
 at 

[GitHub] [flink] flinkbot edited a comment on pull request #14464: [FLINK-20385][canal][json] Allow to read metadata for canal-json format

2020-12-29 Thread GitBox


flinkbot edited a comment on pull request #14464:
URL: https://github.com/apache/flink/pull/14464#issuecomment-749543461


   
   ## CI report:
   
   * e81abd70d44d6ce7221d4cd2f44d31deab78ea9a UNKNOWN
   * 554e0a2c2ada42836d65397b163bad657806e8c2 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=11456)
 
   * 3f3b88d98c525b635179b6a9098a2634a1ffb42c UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-20814) The CEP code is not running properly

2020-12-29 Thread little-tomato (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17256341#comment-17256341
 ] 

little-tomato commented on FLINK-20814:
---

my pom file:

<dependencies>
  <dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-clients_2.12</artifactId>
    <version>1.12.0</version>
  </dependency>
  <dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-connector-kafka_2.12</artifactId>
    <version>1.12.0</version>
  </dependency>
  <dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-streaming-java_2.12</artifactId>
    <version>1.12.0</version>
  </dependency>
  <dependency>
    <groupId>com.alibaba</groupId>
    <artifactId>fastjson</artifactId>
    <version>1.2.73</version>
  </dependency>
  <dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-cep_2.12</artifactId>
    <version>1.12.0</version>
  </dependency>
</dependencies>

> The CEP code is not running properly
> 
>
> Key: FLINK-20814
> URL: https://issues.apache.org/jira/browse/FLINK-20814
> Project: Flink
>  Issue Type: Bug
>  Components: Library / CEP
>Affects Versions: 1.12.0
> Environment: flink1.12.0
> jdk1.8
>Reporter: little-tomato
>Priority: Blocker
>
> The CEP code runs properly on Flink 1.11.2, but it does not work properly on Flink 1.12.0.
> Can somebody help me?
>
> StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
>
> // DataStream : source
> DataStream<TemperatureEvent> input = env.fromElements(
>         new TemperatureEvent(1, "Device01", 22.0),
>         new TemperatureEvent(1, "Device01", 27.1), new TemperatureEvent(2, "Device01", 28.1),
>         new TemperatureEvent(1, "Device01", 22.2), new TemperatureEvent(3, "Device01", 22.1),
>         new TemperatureEvent(1, "Device02", 22.3), new TemperatureEvent(4, "Device02", 22.1),
>         new TemperatureEvent(1, "Device02", 22.4), new TemperatureEvent(5, "Device02", 22.7),
>         new TemperatureEvent(1, "Device02", 27.0), new TemperatureEvent(6, "Device02", 30.0));
>
> Pattern<TemperatureEvent, ?> warningPattern = Pattern.<TemperatureEvent>begin("start")
>         .subtype(TemperatureEvent.class)
>         .where(new SimpleCondition<TemperatureEvent>() {
>             @Override
>             public boolean filter(TemperatureEvent subEvent) {
>                 if (subEvent.getTemperature() >= 26.0) {
>                     return true;
>                 }
>                 return false;
>             }
>         }).where(new SimpleCondition<TemperatureEvent>() {
>             @Override
>             public boolean filter(TemperatureEvent subEvent) {
>                 if (subEvent.getMachineName().equals("Device02")) {
>                     return true;
>                 }
>                 return false;
>             }
>         }).within(Time.seconds(10));
>
> DataStream<Alert> patternStream = CEP.pattern(input, warningPattern)
>         .select(new RichPatternSelectFunction<TemperatureEvent, Alert>() {
>             private static final long serialVersionUID = 1L;
>
>             @Override
>             public void open(Configuration parameters) throws Exception {
>                 System.out.println(getRuntimeContext().getUserCodeClassLoader());
>             }
>
>             @Override
>             public Alert select(Map<String, List<TemperatureEvent>> event) throws Exception {
>                 return new Alert("Temperature Rise Detected: "
>                         + event.get("start") + " on machine name: " + event.get("start"));
>             }
>         });
>
> patternStream.print();
> env.execute("CEP on Temperature Sensor");
> The expected output (as produced on Flink 1.11.2):
> Alert [message=Temperature Rise Detected: [TemperatureEvent 
> [getTemperature()=27.0, getMachineName=Device02]] on machine name: 
> [TemperatureEvent [getTemperature()=27.0, getMachineName=Device02]]]
> Alert [message=Temperature Rise Detected: [TemperatureEvent 
> [getTemperature()=30.0, getMachineName=Device02]] on machine name: 
> [TemperatureEvent [getTemperature()=30.0, getMachineName=Device02]]]





[jira] [Updated] (FLINK-19635) HBaseConnectorITCase.testTableSourceSinkWithDDL is unstable with a result mismatch

2020-12-29 Thread Leonard Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leonard Xu updated FLINK-19635:
---
Fix Version/s: 1.13.0

> HBaseConnectorITCase.testTableSourceSinkWithDDL is unstable with a result 
> mismatch
> --
>
> Key: FLINK-19635
> URL: https://issues.apache.org/jira/browse/FLINK-19635
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / HBase
>Affects Versions: 1.12.0
>Reporter: Robert Metzger
>Assignee: Leonard Xu
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.12.0, 1.13.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=7562=logs=d44f43ce-542c-597d-bf94-b0718c71e5e8=03dca39c-73e8-5aaf-601d-328ae5c35f20
> {code}
> 2020-10-14T04:35:36.9268975Z testTableSourceSinkWithDDL[planner = 
> BLINK_PLANNER, legacy = 
> false](org.apache.flink.connector.hbase2.HBaseConnectorITCase)  Time elapsed: 
> 3.131 sec  <<< FAILURE!
> 2020-10-14T04:35:36.9276246Z java.lang.AssertionError: 
> expected:<[1,10,Hello-1,100,1.01,false,Welt-1,2019-08-18T19:00,2019-08-18,19:00,12345678.0001,
>  
> 2,20,Hello-2,200,2.02,true,Welt-2,2019-08-18T19:01,2019-08-18,19:01,12345678.0002,
>  
> 3,30,Hello-3,300,3.03,false,Welt-3,2019-08-18T19:02,2019-08-18,19:02,12345678.0003,
>  
> 4,40,null,400,4.04,true,Welt-4,2019-08-18T19:03,2019-08-18,19:03,12345678.0004,
>  
> 5,50,Hello-5,500,5.05,false,Welt-5,2019-08-19T19:10,2019-08-19,19:10,12345678.0005,
>  
> 6,60,Hello-6,600,6.06,true,Welt-6,2019-08-19T19:20,2019-08-19,19:20,12345678.0006,
>  
> 7,70,Hello-7,700,7.07,false,Welt-7,2019-08-19T19:30,2019-08-19,19:30,12345678.0007,
>  
> 8,80,null,800,8.08,true,Welt-8,2019-08-19T19:40,2019-08-19,19:40,12345678.0008]>
>  but 
> was:<[1,10,Hello-1,100,1.01,false,Welt-1,2019-08-18T19:00,2019-08-18,19:00,12345678.0001,
>  
> 2,20,Hello-2,200,2.02,true,Welt-2,2019-08-18T19:01,2019-08-18,19:01,12345678.0002,
>  
> 3,30,Hello-3,300,3.03,false,Welt-3,2019-08-18T19:02,2019-08-18,19:02,12345678.0003]>
> 2020-10-14T04:35:36.9281340Z  at org.junit.Assert.fail(Assert.java:88)
> 2020-10-14T04:35:36.9282023Z  at 
> org.junit.Assert.failNotEquals(Assert.java:834)
> 2020-10-14T04:35:36.9328385Z  at 
> org.junit.Assert.assertEquals(Assert.java:118)
> 2020-10-14T04:35:36.9338939Z  at 
> org.junit.Assert.assertEquals(Assert.java:144)
> 2020-10-14T04:35:36.9339880Z  at 
> org.apache.flink.connector.hbase2.HBaseConnectorITCase.testTableSourceSinkWithDDL(HBaseConnectorITCase.java:449)
> 2020-10-14T04:35:36.9341003Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> {code}





[jira] [Created] (FLINK-20814) The CEP code is not running properly

2020-12-29 Thread little-tomato (Jira)
little-tomato created FLINK-20814:
-

 Summary: The CEP code is not running properly
 Key: FLINK-20814
 URL: https://issues.apache.org/jira/browse/FLINK-20814
 Project: Flink
  Issue Type: Bug
  Components: Library / CEP
Affects Versions: 1.12.0
 Environment: flink1.12.0
jdk1.8

Reporter: little-tomato


The CEP code runs properly on Flink 1.11.2, but it does not work properly on Flink 1.12.0.
Can somebody help me?

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

// DataStream : source
DataStream<TemperatureEvent> input = env.fromElements(
        new TemperatureEvent(1, "Device01", 22.0),
        new TemperatureEvent(1, "Device01", 27.1), new TemperatureEvent(2, "Device01", 28.1),
        new TemperatureEvent(1, "Device01", 22.2), new TemperatureEvent(3, "Device01", 22.1),
        new TemperatureEvent(1, "Device02", 22.3), new TemperatureEvent(4, "Device02", 22.1),
        new TemperatureEvent(1, "Device02", 22.4), new TemperatureEvent(5, "Device02", 22.7),
        new TemperatureEvent(1, "Device02", 27.0), new TemperatureEvent(6, "Device02", 30.0));

Pattern<TemperatureEvent, ?> warningPattern = Pattern.<TemperatureEvent>begin("start")
        .subtype(TemperatureEvent.class)
        .where(new SimpleCondition<TemperatureEvent>() {
            @Override
            public boolean filter(TemperatureEvent subEvent) {
                if (subEvent.getTemperature() >= 26.0) {
                    return true;
                }
                return false;
            }
        }).where(new SimpleCondition<TemperatureEvent>() {
            @Override
            public boolean filter(TemperatureEvent subEvent) {
                if (subEvent.getMachineName().equals("Device02")) {
                    return true;
                }
                return false;
            }
        }).within(Time.seconds(10));

DataStream<Alert> patternStream = CEP.pattern(input, warningPattern)
        .select(new RichPatternSelectFunction<TemperatureEvent, Alert>() {
            private static final long serialVersionUID = 1L;

            @Override
            public void open(Configuration parameters) throws Exception {
                System.out.println(getRuntimeContext().getUserCodeClassLoader());
            }

            @Override
            public Alert select(Map<String, List<TemperatureEvent>> event) throws Exception {
                return new Alert("Temperature Rise Detected: "
                        + event.get("start") + " on machine name: " + event.get("start"));
            }
        });

patternStream.print();

env.execute("CEP on Temperature Sensor");

The expected output (as produced on Flink 1.11.2):
Alert [message=Temperature Rise Detected: [TemperatureEvent 
[getTemperature()=27.0, getMachineName=Device02]] on machine name: 
[TemperatureEvent [getTemperature()=27.0, getMachineName=Device02]]]
Alert [message=Temperature Rise Detected: [TemperatureEvent 
[getTemperature()=30.0, getMachineName=Device02]] on machine name: 
[TemperatureEvent [getTemperature()=30.0, getMachineName=Device02]]]
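For what it's worth, the two chained {{.where(...)}} calls in Flink CEP combine as a logical AND, so the pattern should only match events with temperature >= 26.0 AND machine name "Device02". This plain-Java sketch (a hypothetical mirror of the reporter's `TemperatureEvent`, not Flink code) reproduces that filter and agrees with the expected 1.11.2 output above:

```java
import java.util.ArrayList;
import java.util.List;

public class CepConditionSketch {
    static final class TemperatureEvent {
        final int id;
        final String machineName;
        final double temperature;

        TemperatureEvent(int id, String machineName, double temperature) {
            this.id = id;
            this.machineName = machineName;
            this.temperature = temperature;
        }
    }

    // Mirrors the AND of the two chained .where(...) conditions:
    // temperature >= 26.0 AND machineName equals "Device02".
    static List<String> matches(List<TemperatureEvent> events) {
        List<String> matched = new ArrayList<>();
        for (TemperatureEvent e : events) {
            if (e.temperature >= 26.0 && e.machineName.equals("Device02")) {
                matched.add(e.machineName + "@" + e.temperature);
            }
        }
        return matched;
    }

    public static void main(String[] args) {
        List<TemperatureEvent> input = List.of(
                new TemperatureEvent(1, "Device01", 22.0), new TemperatureEvent(1, "Device01", 27.1),
                new TemperatureEvent(2, "Device01", 28.1), new TemperatureEvent(1, "Device02", 22.3),
                new TemperatureEvent(1, "Device02", 27.0), new TemperatureEvent(6, "Device02", 30.0));
        System.out.println(matches(input)); // [Device02@27.0, Device02@30.0]
    }
}
```

Since the conditions themselves select the right events, the 1.11.2-vs-1.12.0 difference presumably lies elsewhere (e.g. time semantics around {{within()}}), not in the predicates.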






[jira] [Commented] (FLINK-20026) Jdbc connector support regular expression

2020-12-29 Thread jiawen xiao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17256337#comment-17256337
 ] 

jiawen xiao commented on FLINK-20026:
-

SGTM, please assign it to me; I want to try.

> Jdbc connector support regular expression
> -
>
> Key: FLINK-20026
> URL: https://issues.apache.org/jira/browse/FLINK-20026
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / JDBC, Table SQL / Ecosystem
>Affects Versions: 1.11.2
>Reporter: Peihui He
>Priority: Major
> Fix For: 1.13.0
>
>
> When there is a large amount of data, we divide the tables by month.
> [https://ci.apache.org/projects/flink/flink-docs-release-1.11/dev/table/connectors/jdbc.html#table-name]
> so it would be nice to support regular expressions for table-name.
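To make the request concrete, here is a plain-Java sketch (hypothetical table names and pattern; this is an illustration of the proposed behavior, not existing connector code) of resolving month-partitioned physical tables against one regular expression, which is roughly what a regex-capable table-name option would do:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Pattern;

public class TableNameRegexSketch {
    // Select the physical tables a regex 'table-name' could expand to.
    static List<String> resolve(List<String> physicalTables, String tableNameRegex) {
        Pattern p = Pattern.compile(tableNameRegex);
        List<String> out = new ArrayList<>();
        for (String t : physicalTables) {
            if (p.matcher(t).matches()) {
                out.add(t);
            }
        }
        return out;
    }

    public static void main(String[] args) {
        // Hypothetical monthly partitions: orders_2020_01 .. orders_2020_12.
        List<String> tables = List.of("orders_2020_01", "orders_2020_12", "orders_bak");
        System.out.println(resolve(tables, "orders_2020_(0[1-9]|1[0-2])"));
        // prints [orders_2020_01, orders_2020_12]
    }
}
```

The connector would still need to decide how to union the matched tables (and how to handle newly created months), which is where most of the design work lies.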





[jira] [Commented] (FLINK-20777) Default value of property "partition.discovery.interval.ms" is not as documented in new Kafka Source

2020-12-29 Thread jiawen xiao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17256334#comment-17256334
 ] 

jiawen xiao commented on FLINK-20777:
-

Hi [~renqs], maybe we need more people's opinions.

Hi [~jark], WDYT?

I'm not sure whether constantly checking the Kafka metadata would have a performance 
impact.

> Default value of property "partition.discovery.interval.ms" is not as 
> documented in new Kafka Source
> 
>
> Key: FLINK-20777
> URL: https://issues.apache.org/jira/browse/FLINK-20777
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Reporter: Qingsheng Ren
>Priority: Major
> Fix For: 1.12.1
>
>
> The default value of the property "partition.discovery.interval.ms" is documented 
> as 30 seconds in {{KafkaSourceOptions}}, but it is set to -1 in 
> {{KafkaSourceBuilder}} if the user does not pass this property explicitly. 
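The mismatch can be sketched in plain Java (the property key is from the ticket; the fallback logic below is an assumption mirroring the reported behavior, not the actual KafkaSourceBuilder code):

```java
import java.util.Properties;

public class PartitionDiscoveryDefaultSketch {
    static final String KEY = "partition.discovery.interval.ms";

    // Reported behavior: when the user does not set the property, the builder
    // falls back to -1 (partition discovery disabled) rather than the
    // documented default of 30 seconds.
    static long resolvedIntervalMs(Properties userProps) {
        return Long.parseLong(userProps.getProperty(KEY, "-1"));
    }

    public static void main(String[] args) {
        Properties props = new Properties();
        System.out.println(resolvedIntervalMs(props)); // -1, not the documented 30000
        props.setProperty(KEY, "30000");
        System.out.println(resolvedIntervalMs(props)); // 30000 once set explicitly
    }
}
```

So until the code and the docs agree, setting the property explicitly on the builder is the safe workaround for users who rely on periodic partition discovery.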





[jira] [Updated] (FLINK-19635) HBaseConnectorITCase.testTableSourceSinkWithDDL is unstable with a result mismatch

2020-12-29 Thread Leonard Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leonard Xu updated FLINK-19635:
---
Fix Version/s: (was: 1.12.0)
   1.12.1

> HBaseConnectorITCase.testTableSourceSinkWithDDL is unstable with a result 
> mismatch
> --
>
> Key: FLINK-19635
> URL: https://issues.apache.org/jira/browse/FLINK-19635
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / HBase
>Affects Versions: 1.12.0
>Reporter: Robert Metzger
>Assignee: Leonard Xu
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.12.1
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=7562=logs=d44f43ce-542c-597d-bf94-b0718c71e5e8=03dca39c-73e8-5aaf-601d-328ae5c35f20
> {code}
> 2020-10-14T04:35:36.9268975Z testTableSourceSinkWithDDL[planner = 
> BLINK_PLANNER, legacy = 
> false](org.apache.flink.connector.hbase2.HBaseConnectorITCase)  Time elapsed: 
> 3.131 sec  <<< FAILURE!
> 2020-10-14T04:35:36.9276246Z java.lang.AssertionError: 
> expected:<[1,10,Hello-1,100,1.01,false,Welt-1,2019-08-18T19:00,2019-08-18,19:00,12345678.0001,
>  
> 2,20,Hello-2,200,2.02,true,Welt-2,2019-08-18T19:01,2019-08-18,19:01,12345678.0002,
>  
> 3,30,Hello-3,300,3.03,false,Welt-3,2019-08-18T19:02,2019-08-18,19:02,12345678.0003,
>  
> 4,40,null,400,4.04,true,Welt-4,2019-08-18T19:03,2019-08-18,19:03,12345678.0004,
>  
> 5,50,Hello-5,500,5.05,false,Welt-5,2019-08-19T19:10,2019-08-19,19:10,12345678.0005,
>  
> 6,60,Hello-6,600,6.06,true,Welt-6,2019-08-19T19:20,2019-08-19,19:20,12345678.0006,
>  
> 7,70,Hello-7,700,7.07,false,Welt-7,2019-08-19T19:30,2019-08-19,19:30,12345678.0007,
>  
> 8,80,null,800,8.08,true,Welt-8,2019-08-19T19:40,2019-08-19,19:40,12345678.0008]>
>  but 
> was:<[1,10,Hello-1,100,1.01,false,Welt-1,2019-08-18T19:00,2019-08-18,19:00,12345678.0001,
>  
> 2,20,Hello-2,200,2.02,true,Welt-2,2019-08-18T19:01,2019-08-18,19:01,12345678.0002,
>  
> 3,30,Hello-3,300,3.03,false,Welt-3,2019-08-18T19:02,2019-08-18,19:02,12345678.0003]>
> 2020-10-14T04:35:36.9281340Z  at org.junit.Assert.fail(Assert.java:88)
> 2020-10-14T04:35:36.9282023Z  at 
> org.junit.Assert.failNotEquals(Assert.java:834)
> 2020-10-14T04:35:36.9328385Z  at 
> org.junit.Assert.assertEquals(Assert.java:118)
> 2020-10-14T04:35:36.9338939Z  at 
> org.junit.Assert.assertEquals(Assert.java:144)
> 2020-10-14T04:35:36.9339880Z  at 
> org.apache.flink.connector.hbase2.HBaseConnectorITCase.testTableSourceSinkWithDDL(HBaseConnectorITCase.java:449)
> 2020-10-14T04:35:36.9341003Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> {code}





[jira] [Updated] (FLINK-19635) HBaseConnectorITCase.testTableSourceSinkWithDDL is unstable with a result mismatch

2020-12-29 Thread Leonard Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leonard Xu updated FLINK-19635:
---
Fix Version/s: (was: 1.12.1)
   1.12.0

> HBaseConnectorITCase.testTableSourceSinkWithDDL is unstable with a result 
> mismatch
> --
>
> Key: FLINK-19635
> URL: https://issues.apache.org/jira/browse/FLINK-19635
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / HBase
>Affects Versions: 1.12.0
>Reporter: Robert Metzger
>Assignee: Leonard Xu
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.12.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=7562&view=logs&j=d44f43ce-542c-597d-bf94-b0718c71e5e8&t=03dca39c-73e8-5aaf-601d-328ae5c35f20
> {code}
> 2020-10-14T04:35:36.9268975Z testTableSourceSinkWithDDL[planner = 
> BLINK_PLANNER, legacy = 
> false](org.apache.flink.connector.hbase2.HBaseConnectorITCase)  Time elapsed: 
> 3.131 sec  <<< FAILURE!
> 2020-10-14T04:35:36.9276246Z java.lang.AssertionError: 
> expected:<[1,10,Hello-1,100,1.01,false,Welt-1,2019-08-18T19:00,2019-08-18,19:00,12345678.0001,
>  
> 2,20,Hello-2,200,2.02,true,Welt-2,2019-08-18T19:01,2019-08-18,19:01,12345678.0002,
>  
> 3,30,Hello-3,300,3.03,false,Welt-3,2019-08-18T19:02,2019-08-18,19:02,12345678.0003,
>  
> 4,40,null,400,4.04,true,Welt-4,2019-08-18T19:03,2019-08-18,19:03,12345678.0004,
>  
> 5,50,Hello-5,500,5.05,false,Welt-5,2019-08-19T19:10,2019-08-19,19:10,12345678.0005,
>  
> 6,60,Hello-6,600,6.06,true,Welt-6,2019-08-19T19:20,2019-08-19,19:20,12345678.0006,
>  
> 7,70,Hello-7,700,7.07,false,Welt-7,2019-08-19T19:30,2019-08-19,19:30,12345678.0007,
>  
> 8,80,null,800,8.08,true,Welt-8,2019-08-19T19:40,2019-08-19,19:40,12345678.0008]>
>  but 
> was:<[1,10,Hello-1,100,1.01,false,Welt-1,2019-08-18T19:00,2019-08-18,19:00,12345678.0001,
>  
> 2,20,Hello-2,200,2.02,true,Welt-2,2019-08-18T19:01,2019-08-18,19:01,12345678.0002,
>  
> 3,30,Hello-3,300,3.03,false,Welt-3,2019-08-18T19:02,2019-08-18,19:02,12345678.0003]>
> 2020-10-14T04:35:36.9281340Z  at org.junit.Assert.fail(Assert.java:88)
> 2020-10-14T04:35:36.9282023Z  at 
> org.junit.Assert.failNotEquals(Assert.java:834)
> 2020-10-14T04:35:36.9328385Z  at 
> org.junit.Assert.assertEquals(Assert.java:118)
> 2020-10-14T04:35:36.9338939Z  at 
> org.junit.Assert.assertEquals(Assert.java:144)
> 2020-10-14T04:35:36.9339880Z  at 
> org.apache.flink.connector.hbase2.HBaseConnectorITCase.testTableSourceSinkWithDDL(HBaseConnectorITCase.java:449)
> 2020-10-14T04:35:36.9341003Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> {code}





[jira] [Reopened] (FLINK-19635) HBaseConnectorITCase.testTableSourceSinkWithDDL is unstable with a result mismatch

2020-12-29 Thread Leonard Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leonard Xu reopened FLINK-19635:


> HBaseConnectorITCase.testTableSourceSinkWithDDL is unstable with a result 
> mismatch
> --
>
> Key: FLINK-19635
> URL: https://issues.apache.org/jira/browse/FLINK-19635
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / HBase
>Affects Versions: 1.12.0
>Reporter: Robert Metzger
>Assignee: Leonard Xu
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.12.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=7562&view=logs&j=d44f43ce-542c-597d-bf94-b0718c71e5e8&t=03dca39c-73e8-5aaf-601d-328ae5c35f20
> {code}
> 2020-10-14T04:35:36.9268975Z testTableSourceSinkWithDDL[planner = 
> BLINK_PLANNER, legacy = 
> false](org.apache.flink.connector.hbase2.HBaseConnectorITCase)  Time elapsed: 
> 3.131 sec  <<< FAILURE!
> 2020-10-14T04:35:36.9276246Z java.lang.AssertionError: 
> expected:<[1,10,Hello-1,100,1.01,false,Welt-1,2019-08-18T19:00,2019-08-18,19:00,12345678.0001,
>  
> 2,20,Hello-2,200,2.02,true,Welt-2,2019-08-18T19:01,2019-08-18,19:01,12345678.0002,
>  
> 3,30,Hello-3,300,3.03,false,Welt-3,2019-08-18T19:02,2019-08-18,19:02,12345678.0003,
>  
> 4,40,null,400,4.04,true,Welt-4,2019-08-18T19:03,2019-08-18,19:03,12345678.0004,
>  
> 5,50,Hello-5,500,5.05,false,Welt-5,2019-08-19T19:10,2019-08-19,19:10,12345678.0005,
>  
> 6,60,Hello-6,600,6.06,true,Welt-6,2019-08-19T19:20,2019-08-19,19:20,12345678.0006,
>  
> 7,70,Hello-7,700,7.07,false,Welt-7,2019-08-19T19:30,2019-08-19,19:30,12345678.0007,
>  
> 8,80,null,800,8.08,true,Welt-8,2019-08-19T19:40,2019-08-19,19:40,12345678.0008]>
>  but 
> was:<[1,10,Hello-1,100,1.01,false,Welt-1,2019-08-18T19:00,2019-08-18,19:00,12345678.0001,
>  
> 2,20,Hello-2,200,2.02,true,Welt-2,2019-08-18T19:01,2019-08-18,19:01,12345678.0002,
>  
> 3,30,Hello-3,300,3.03,false,Welt-3,2019-08-18T19:02,2019-08-18,19:02,12345678.0003]>
> 2020-10-14T04:35:36.9281340Z  at org.junit.Assert.fail(Assert.java:88)
> 2020-10-14T04:35:36.9282023Z  at 
> org.junit.Assert.failNotEquals(Assert.java:834)
> 2020-10-14T04:35:36.9328385Z  at 
> org.junit.Assert.assertEquals(Assert.java:118)
> 2020-10-14T04:35:36.9338939Z  at 
> org.junit.Assert.assertEquals(Assert.java:144)
> 2020-10-14T04:35:36.9339880Z  at 
> org.apache.flink.connector.hbase2.HBaseConnectorITCase.testTableSourceSinkWithDDL(HBaseConnectorITCase.java:449)
> 2020-10-14T04:35:36.9341003Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> {code}





[jira] [Commented] (FLINK-19635) HBaseConnectorITCase.testTableSourceSinkWithDDL is unstable with a result mismatch

2020-12-29 Thread Leonard Xu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17256332#comment-17256332
 ] 

Leonard Xu commented on FLINK-19635:


oops, let me reopen this

> HBaseConnectorITCase.testTableSourceSinkWithDDL is unstable with a result 
> mismatch
> --
>
> Key: FLINK-19635
> URL: https://issues.apache.org/jira/browse/FLINK-19635
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / HBase
>Affects Versions: 1.12.0
>Reporter: Robert Metzger
>Assignee: Leonard Xu
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.12.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=7562&view=logs&j=d44f43ce-542c-597d-bf94-b0718c71e5e8&t=03dca39c-73e8-5aaf-601d-328ae5c35f20
> {code}
> 2020-10-14T04:35:36.9268975Z testTableSourceSinkWithDDL[planner = 
> BLINK_PLANNER, legacy = 
> false](org.apache.flink.connector.hbase2.HBaseConnectorITCase)  Time elapsed: 
> 3.131 sec  <<< FAILURE!
> 2020-10-14T04:35:36.9276246Z java.lang.AssertionError: 
> expected:<[1,10,Hello-1,100,1.01,false,Welt-1,2019-08-18T19:00,2019-08-18,19:00,12345678.0001,
>  
> 2,20,Hello-2,200,2.02,true,Welt-2,2019-08-18T19:01,2019-08-18,19:01,12345678.0002,
>  
> 3,30,Hello-3,300,3.03,false,Welt-3,2019-08-18T19:02,2019-08-18,19:02,12345678.0003,
>  
> 4,40,null,400,4.04,true,Welt-4,2019-08-18T19:03,2019-08-18,19:03,12345678.0004,
>  
> 5,50,Hello-5,500,5.05,false,Welt-5,2019-08-19T19:10,2019-08-19,19:10,12345678.0005,
>  
> 6,60,Hello-6,600,6.06,true,Welt-6,2019-08-19T19:20,2019-08-19,19:20,12345678.0006,
>  
> 7,70,Hello-7,700,7.07,false,Welt-7,2019-08-19T19:30,2019-08-19,19:30,12345678.0007,
>  
> 8,80,null,800,8.08,true,Welt-8,2019-08-19T19:40,2019-08-19,19:40,12345678.0008]>
>  but 
> was:<[1,10,Hello-1,100,1.01,false,Welt-1,2019-08-18T19:00,2019-08-18,19:00,12345678.0001,
>  
> 2,20,Hello-2,200,2.02,true,Welt-2,2019-08-18T19:01,2019-08-18,19:01,12345678.0002,
>  
> 3,30,Hello-3,300,3.03,false,Welt-3,2019-08-18T19:02,2019-08-18,19:02,12345678.0003]>
> 2020-10-14T04:35:36.9281340Z  at org.junit.Assert.fail(Assert.java:88)
> 2020-10-14T04:35:36.9282023Z  at 
> org.junit.Assert.failNotEquals(Assert.java:834)
> 2020-10-14T04:35:36.9328385Z  at 
> org.junit.Assert.assertEquals(Assert.java:118)
> 2020-10-14T04:35:36.9338939Z  at 
> org.junit.Assert.assertEquals(Assert.java:144)
> 2020-10-14T04:35:36.9339880Z  at 
> org.apache.flink.connector.hbase2.HBaseConnectorITCase.testTableSourceSinkWithDDL(HBaseConnectorITCase.java:449)
> 2020-10-14T04:35:36.9341003Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> {code}





[GitHub] [flink] xintongsong commented on a change in pull request #8952: [FLINK-10868][flink-runtime] Add failure rater for resource manager

2020-12-29 Thread GitBox


xintongsong commented on a change in pull request #8952:
URL: https://github.com/apache/flink/pull/8952#discussion_r549959072



##
File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/failurerate/TimestampBasedFailureRater.java
##
@@ -0,0 +1,93 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.failurerate;
+
+import org.apache.flink.annotation.VisibleForTesting;
+import org.apache.flink.api.common.time.Time;
+import org.apache.flink.runtime.execution.ThresholdExceedException;
+
+import java.util.ArrayDeque;
+import java.util.Queue;
+import java.util.function.Supplier;
+
+/**
+ * A timestamp queue based failure rater implementation.
+ */
+public class TimestampBasedFailureRater implements ThresholdMeter {

Review comment:
   I believe a clear abstraction of the concepts should take priority over the 
implementation details (what the error messages say, how the factories should 
look, etc.). Compared with `Meter`, conceptually the only additional 
function this implementation class provides is `checkAgainstThreshold`.
   
   Concerning the implementation issues you mentioned, they should be adjusted 
with respect to the underlying concepts.
   * `checkAgainstThreshold` can only say that the current rate `x` has reached 
the threshold `y`; we can mention that the rate refers to container start 
failures in `ActiveResourceManager` when logging.
   * We can change `fromConfiguration(Configuration config)` to 
`withMinuteThreshold(double threshold)` and parse the configuration option in 
`ActiveResourceManagerFactory`.
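As a rough illustration of the abstraction discussed above, here is a minimal sketch of a timestamp-queue meter that exposes only a `withMinuteThreshold` factory and a generic `checkAgainstThreshold`. The class and method bodies are assumptions for illustration, not Flink's actual implementation:

```java
import java.util.ArrayDeque;
import java.util.Deque;

/**
 * Sketch of a timestamp-queue-based threshold meter (illustrative
 * names, not Flink's actual classes).
 */
public class SlidingWindowThresholdMeter {

    private static final long WINDOW_MILLIS = 60_000L;

    private final double maxEventsPerMinute;
    private final Deque<Long> timestamps = new ArrayDeque<>();

    private SlidingWindowThresholdMeter(double maxEventsPerMinute) {
        this.maxEventsPerMinute = maxEventsPerMinute;
    }

    /** Factory taking the already-parsed threshold; config parsing stays outside. */
    public static SlidingWindowThresholdMeter withMinuteThreshold(double threshold) {
        return new SlidingWindowThresholdMeter(threshold);
    }

    /** Records one event (e.g. a failure) at the given time. */
    public void markEvent(long nowMillis) {
        timestamps.addLast(nowMillis);
    }

    /** Number of events within the last minute; evicts expired timestamps. */
    public double currentRate(long nowMillis) {
        while (!timestamps.isEmpty()
                && timestamps.peekFirst() < nowMillis - WINDOW_MILLIS) {
            timestamps.pollFirst();
        }
        return timestamps.size();
    }

    /**
     * Only reports that rate x reached threshold y; callers attach the
     * domain context (e.g. "container start failures") when logging.
     */
    public void checkAgainstThreshold(long nowMillis) {
        double rate = currentRate(nowMillis);
        if (rate > maxEventsPerMinute) {
            throw new IllegalStateException(
                    "Current rate " + rate + " exceeds threshold " + maxEventsPerMinute);
        }
    }
}
```

The meter itself stays agnostic of what the events mean; the caller supplies the domain-specific message.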





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-20800) HadoopViewFileSystemTruncateTest failure caused by AssumptionViolatedException

2020-12-29 Thread zlzhang0122 (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20800?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17256330#comment-17256330
 ] 

zlzhang0122 commented on FLINK-20800:
-

[~chesnay] Thanks for your reply. You are right: the Hadoop version in the pom 
is 2.4.1 while the test needs Hadoop 2.7+. If the test is run from IntelliJ, it 
fails.
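As a hedged illustration of why the test is skipped on 2.4.1, the 2.7+ requirement boils down to a major/minor version comparison like the following (a hypothetical helper, not Flink's actual guard, which uses a JUnit assumption):

```java
public class HadoopVersionCheck {

    // Returns true when a "major.minor[.patch]" version string is at least
    // the given major/minor pair. Illustrative only.
    static boolean atLeast(String version, int major, int minor) {
        String[] parts = version.split("\\.");
        int maj = Integer.parseInt(parts[0]);
        int min = Integer.parseInt(parts[1]);
        return maj > major || (maj == major && min >= minor);
    }

    public static void main(String[] args) {
        // Hadoop 2.4.1 (the pom default) fails the 2.7+ check, so the test is skipped.
        System.out.println(atLeast("2.4.1", 2, 7)); // prints false
        System.out.println(atLeast("2.7.3", 2, 7)); // prints true
    }
}
```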

> HadoopViewFileSystemTruncateTest failure caused by AssumptionViolatedException
> --
>
> Key: FLINK-20800
> URL: https://issues.apache.org/jira/browse/FLINK-20800
> Project: Flink
>  Issue Type: Bug
>  Components: FileSystems
>Affects Versions: 1.12.0
>Reporter: zlzhang0122
>Priority: Major
>
> [Build|https://dev.azure.com/mapohl/flink/_build/results?buildId=118&view=results]
>  failed in HadoopViewFileSystemTruncateTest due to an 
> AssumptionViolatedException:
> {code:java}
> // code placeholder
> org.junit.AssumptionViolatedException: got: <false>, expected: is <true>
> org.junit.AssumptionViolatedException: got: <false>, expected: is <true>
>  at org.junit.Assume.assumeThat(Assume.java:95) at 
> org.junit.Assume.assumeTrue(Assume.java:41) at 
> org.apache.flink.runtime.fs.hdfs.HadoopViewFileSystemTruncateTest.testHadoopVersion(HadoopViewFileSystemTruncateTest.java:71)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) 
> at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) 
> at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48) at 
> org.junit.rules.RunRules.evaluate(RunRules.java:20) at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:363) at 
> org.junit.runner.JUnitCore.run(JUnitCore.java:137) at 
> com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68)
>  at 
> com.intellij.rt.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:33)
>  at 
> com.intellij.rt.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:230)
>  at com.intellij.rt.junit.JUnitStarter.main(JUnitStarter.java:58)
> Process finished with exit code 0
> {code}





[jira] [Created] (FLINK-20813) SQL Client should create modules with user class loader

2020-12-29 Thread Rui Li (Jira)
Rui Li created FLINK-20813:
--

 Summary: SQL Client should create modules with user class loader
 Key: FLINK-20813
 URL: https://issues.apache.org/jira/browse/FLINK-20813
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Client
Reporter: Rui Li
 Fix For: 1.13.0








[GitHub] [flink] flinkbot edited a comment on pull request #14512: FLINK-20359 Added Owner Reference to Job Manager in native kubernetes

2020-12-29 Thread GitBox


flinkbot edited a comment on pull request #14512:
URL: https://github.com/apache/flink/pull/14512#issuecomment-751906693


   
   ## CI report:
   
   * 655b2992fd97e20ef142287bfd435832c469261d Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=11481)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[jira] [Commented] (FLINK-20809) Limit push down with Hive table doesn't work when using with filter

2020-12-29 Thread Rui Li (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17256302#comment-17256302
 ] 

Rui Li commented on FLINK-20809:


Sure, I'll take a look.

> Limit push down with Hive table doesn't work when using with filter
> ---
>
> Key: FLINK-20809
> URL: https://issues.apache.org/jira/browse/FLINK-20809
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Affects Versions: 1.12.0
>Reporter: Jun Zhang
>Priority: Major
> Fix For: 1.13.0
>
>
> When I use Flink SQL to query a Hive table like this:
> {code:java}
> // select * from hive_table where id = 1 limit 1
> {code}
>  
> I found that when the SQL contains conditions in the WHERE clause, the limit 
> push down does not take effect.
> Judging from the comment in the source code, I think it should be pushed 
> down. Is this a bug?
> [the comment 
> |https://github.com/apache/flink/blob/master/flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/plan/rules/logical/PushLimitIntoTableSourceScanRule.java#L64]





[jira] [Assigned] (FLINK-20812) flink connector hbase(1.4,2.2) too few control parameters provided

2020-12-29 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20812?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu reassigned FLINK-20812:
---

Assignee: WeiNan Zhao

> flink connector hbase(1.4,2.2) too few control parameters provided
> --
>
> Key: FLINK-20812
> URL: https://issues.apache.org/jira/browse/FLINK-20812
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / HBase, Table SQL / Ecosystem
>Reporter: WeiNan Zhao
>Assignee: WeiNan Zhao
>Priority: Major
> Fix For: 1.13.0
>
>
> When I use a CDH cluster, I need Kerberos authentication and have to add 
> some HBase Kerberos parameters, but the current HBase connector does not 
> provide an entry for them. I wonder if this can be changed; if possible, I 
> can submit a PR for the HBase connector.
> e.g. HBase parameters:
> hbase.security.authentication='kerberos',
> hbase.master.kerberos.principal='...',
> hbase.kerberos.regionserver.principal='...',
> hbase.security.auth.enable = 'true',
> hbase.sasl.clientconfig = 'Client'





[jira] [Commented] (FLINK-20812) flink connector hbase(1.4,2.2) too few control parameters provided

2020-12-29 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20812?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17256299#comment-17256299
 ] 

Jark Wu commented on FLINK-20812:
-

Assigned this issue to you [~ZhaoWeiNan].


> flink connector hbase(1.4,2.2) too few control parameters provided
> --
>
> Key: FLINK-20812
> URL: https://issues.apache.org/jira/browse/FLINK-20812
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / HBase, Table SQL / Ecosystem
>Reporter: WeiNan Zhao
>Assignee: WeiNan Zhao
>Priority: Major
> Fix For: 1.13.0
>
>
> When I use a CDH cluster, I need Kerberos authentication and have to add 
> some HBase Kerberos parameters, but the current HBase connector does not 
> provide an entry for them. I wonder if this can be changed; if possible, I 
> can submit a PR for the HBase connector.
> e.g. HBase parameters:
> hbase.security.authentication='kerberos',
> hbase.master.kerberos.principal='...',
> hbase.kerberos.regionserver.principal='...',
> hbase.security.auth.enable = 'true',
> hbase.sasl.clientconfig = 'Client'





[jira] [Commented] (FLINK-20348) Make "schema-registry.subject" optional for Kafka sink with avro-confluent format

2020-12-29 Thread zhuxiaoshang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17256298#comment-17256298
 ] 

zhuxiaoshang commented on FLINK-20348:
--

[~jark], I would like to have a try.

> Make "schema-registry.subject" optional for Kafka sink with avro-confluent 
> format
> -
>
> Key: FLINK-20348
> URL: https://issues.apache.org/jira/browse/FLINK-20348
> Project: Flink
>  Issue Type: Improvement
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile), Table 
> SQL / Ecosystem
>Reporter: Jark Wu
>Priority: Major
>  Labels: sprint
> Fix For: 1.13.0
>
>
> Currently, the configuration "schema-registry.subject" in the avro-confluent 
> format is required by the sink. However, it is quite verbose to set it 
> manually. By default, it can be set to {{-key}} and {{-value}} 
> when used with the kafka or upsert-kafka connector. This also makes the 
> 'avro-confluent' format handier and work better with the Kafka/Confluent 
> ecosystem. 
> {code:sql}
> CREATE TABLE kafka_gmv (
>   day_str STRING,
>   gmv BIGINT,
>   PRIMARY KEY (day_str) NOT ENFORCED
> ) WITH (
> 'connector' = 'upsert-kafka',
> 'topic' = 'kafka_gmv',
> 'properties.bootstrap.servers' = 'localhost:9092',
> -- 'key.format' = 'raw',
> 'key.format' = 'avro-confluent',
> 'key.avro-confluent.schema-registry.url' = 'http://localhost:8181',
> 'key.avro-confluent.schema-registry.subject' = 'kafka_gmv-key',
> 'value.format' = 'avro-confluent',
> 'value.avro-confluent.schema-registry.url' = 'http://localhost:8181',
> 'value.avro-confluent.schema-registry.subject' = 'kafka_gmv-value'
> );
> {code}





[GitHub] [flink] wuchong commented on pull request #14508: [FLINK-20773][format] Support allow-unescaped-control-chars option for JSON format.

2020-12-29 Thread GitBox


wuchong commented on pull request #14508:
URL: https://github.com/apache/flink/pull/14508#issuecomment-752336588


   Hi @V1ncentzzZ , please avoid using `git pull` or `git merge` in your dev 
branch. To bring the latest master branch into your dev branch, a better way 
is to use `git rebase`. IDEA also provides a visual interface for that.







[jira] [Commented] (FLINK-20812) flink connector hbase(1.4,2.2) too few control parameters provided

2020-12-29 Thread WeiNan Zhao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20812?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17256297#comment-17256297
 ] 

WeiNan Zhao commented on FLINK-20812:
-

I agree with your approach

> flink connector hbase(1.4,2.2) too few control parameters provided
> --
>
> Key: FLINK-20812
> URL: https://issues.apache.org/jira/browse/FLINK-20812
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / HBase, Table SQL / Ecosystem
>Reporter: WeiNan Zhao
>Priority: Major
> Fix For: 1.13.0
>
>
> When I use a CDH cluster, I need Kerberos authentication and have to add 
> some HBase Kerberos parameters, but the current HBase connector does not 
> provide an entry for them. I wonder if this can be changed; if possible, I 
> can submit a PR for the HBase connector.
> e.g. HBase parameters:
> hbase.security.authentication='kerberos',
> hbase.master.kerberos.principal='...',
> hbase.kerberos.regionserver.principal='...',
> hbase.security.auth.enable = 'true',
> hbase.sasl.clientconfig = 'Client'



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (FLINK-20458) Translate page 'SQL-gettingStarted' into Chinese

2020-12-29 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu closed FLINK-20458.
---
Resolution: Fixed

Fixed in master: bd403e2cd33490db7aa5dcde72ae53dc46d304a3

> Translate page 'SQL-gettingStarted' into Chinese
> 
>
> Key: FLINK-20458
> URL: https://issues.apache.org/jira/browse/FLINK-20458
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Documentation, Table SQL / Ecosystem
>Reporter: CaoZhen
>Assignee: jjiey
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 1.13.0
>
>
> Translate the doc located in "docs/dev/table/sql/gettingStarted.zh.md"





[GitHub] [flink] wuchong merged pull request #14437: [FLINK-20458][docs] translate gettingStarted.zh.md and correct spelling errors in gettingStarted.md

2020-12-29 Thread GitBox


wuchong merged pull request #14437:
URL: https://github.com/apache/flink/pull/14437


   







[GitHub] [flink] flinkbot edited a comment on pull request #14392: [FLINK-20606][connectors/hive, table sql] sql cli with hive catalog c…

2020-12-29 Thread GitBox


flinkbot edited a comment on pull request #14392:
URL: https://github.com/apache/flink/pull/14392#issuecomment-745400662


   
   ## CI report:
   
   * a366565d2ef37c57eee2d17f5718b2f0e6f173a8 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=11479)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[jira] [Updated] (FLINK-20809) Limit push down with Hive table doesn't work when using with filter

2020-12-29 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20809?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu updated FLINK-20809:

Summary: Limit push down with Hive table doesn't work when using with 
filter  (was: the limit push down invalid when use filter )

> Limit push down with Hive table doesn't work when using with filter
> ---
>
> Key: FLINK-20809
> URL: https://issues.apache.org/jira/browse/FLINK-20809
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Affects Versions: 1.12.0
>Reporter: Jun Zhang
>Priority: Major
> Fix For: 1.13.0
>
>
> When I use Flink SQL to query a Hive table like this:
> {code:java}
> // select * from hive_table where id = 1 limit 1
> {code}
>  
> I found that when the SQL contains conditions in the WHERE clause, the limit 
> push down does not take effect.
> Judging from the comment in the source code, I think it should be pushed 
> down. Is this a bug?
> [the comment 
> |https://github.com/apache/flink/blob/master/flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/plan/rules/logical/PushLimitIntoTableSourceScanRule.java#L64]





[jira] [Commented] (FLINK-20809) Limit push down with Hive table doesn't work when using with filter

2020-12-29 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17256295#comment-17256295
 ] 

Jark Wu commented on FLINK-20809:
-

cc [~lirui], could you help to check this?

> Limit push down with Hive table doesn't work when using with filter
> ---
>
> Key: FLINK-20809
> URL: https://issues.apache.org/jira/browse/FLINK-20809
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Affects Versions: 1.12.0
>Reporter: Jun Zhang
>Priority: Major
> Fix For: 1.13.0
>
>
> When I use Flink SQL to query a Hive table like this:
> {code:java}
> // select * from hive_table where id = 1 limit 1
> {code}
>  
> I found that when the SQL contains conditions in the WHERE clause, the limit 
> push down does not take effect.
> Judging from the comment in the source code, I think it should be pushed 
> down. Is this a bug?
> [the comment 
> |https://github.com/apache/flink/blob/master/flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/plan/rules/logical/PushLimitIntoTableSourceScanRule.java#L64]





[jira] [Updated] (FLINK-20809) the limit push down invalid when use filter

2020-12-29 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20809?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu updated FLINK-20809:

Component/s: (was: Table SQL / API)
 Connectors / Hive

> the limit push down invalid when use filter 
> 
>
> Key: FLINK-20809
> URL: https://issues.apache.org/jira/browse/FLINK-20809
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Affects Versions: 1.12.0
>Reporter: Jun Zhang
>Priority: Major
> Fix For: 1.13.0
>
>
> When I use Flink SQL to query a Hive table like this:
> {code:java}
> // select * from hive_table where id = 1 limit 1
> {code}
>  
> I found that when the SQL contains conditions in the WHERE clause, the limit 
> push down does not take effect.
> Judging from the comment in the source code, I think it should be pushed 
> down. Is this a bug?
> [the comment 
> |https://github.com/apache/flink/blob/master/flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/plan/rules/logical/PushLimitIntoTableSourceScanRule.java#L64]





[jira] [Updated] (FLINK-20812) flink connector hbase(1.4,2.2) too few control parameters provided

2020-12-29 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20812?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu updated FLINK-20812:

Affects Version/s: (was: 1.13.0)
   (was: 1.11.3)
   (was: 1.12.0)
   (was: 1.10.1)

> flink connector hbase(1.4,2.2) too few control parameters provided
> --
>
> Key: FLINK-20812
> URL: https://issues.apache.org/jira/browse/FLINK-20812
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / HBase, Table SQL / Ecosystem
>Reporter: WeiNan Zhao
>Priority: Major
> Fix For: 1.13.0
>
>
> When I use a CDH cluster, I need Kerberos authentication and have to add 
> some HBase Kerberos parameters, but the current HBase connector does not 
> provide an entry for them. I wonder if this can be changed; if possible, I 
> can submit a PR for the HBase connector.
> e.g. HBase parameters:
> hbase.security.authentication='kerberos',
> hbase.master.kerberos.principal='...',
> hbase.kerberos.regionserver.principal='...',
> hbase.security.auth.enable = 'true',
> hbase.sasl.clientconfig = 'Client'





[jira] [Updated] (FLINK-20812) flink connector hbase(1.4,2.2) too few control parameters provided

2020-12-29 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20812?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu updated FLINK-20812:

Component/s: Table SQL / Ecosystem

> flink connector hbase(1.4,2.2) too few control parameters provided
> --
>
> Key: FLINK-20812
> URL: https://issues.apache.org/jira/browse/FLINK-20812
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / HBase, Table SQL / Ecosystem
>Affects Versions: 1.10.1, 1.12.0, 1.11.3, 1.13.0
>Reporter: WeiNan Zhao
>Priority: Major
>  Labels: flink-connector
> Fix For: 1.13.0
>
>
> When I use a CDH cluster, I need Kerberos authentication and have to add 
> some HBase Kerberos parameters, but the current HBase connector does not 
> provide an entry for them. I wonder if this can be changed; if possible, I 
> can submit a PR for the HBase connector.
> e.g. HBase parameters:
> hbase.security.authentication='kerberos',
> hbase.master.kerberos.principal='...',
> hbase.kerberos.regionserver.principal='...',
> hbase.security.auth.enable = 'true',
> hbase.sasl.clientconfig = 'Client'





[jira] [Commented] (FLINK-20812) flink connector hbase(1.4,2.2) too few control parameters provided

2020-12-29 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20812?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17256294#comment-17256294
 ] 

Jark Wu commented on FLINK-20812:
-

I think we can do this like the Kafka connector, which provides a 
{{properties.*}} configuration that can pass through all the Kafka configs: 
https://ci.apache.org/projects/flink/flink-docs-master/dev/table/connectors/kafka.html#properties

{code}
'properties.hbase.security.authentication'='kerberos',
'properties.hbase.master.kerberos.principal'='...',
{code}

> flink connector hbase(1.4,2.2) too few control parameters provided
> --
>
> Key: FLINK-20812
> URL: https://issues.apache.org/jira/browse/FLINK-20812
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / HBase
>Affects Versions: 1.10.1, 1.12.0, 1.11.3, 1.13.0
>Reporter: WeiNan Zhao
>Priority: Major
>  Labels: flink-connector
> Fix For: 1.13.0
>
>
> When using a CDH cluster, I need Kerberos authentication, so I have to pass 
> some Kerberos-related HBase parameters, but the current HBase connector does 
> not provide an entry point for them. I wonder if this could be changed; if 
> so, I can submit a PR for the HBase connector.
> e.g. HBase parameters:
> hbase.security.authentication='kerberos',
> hbase.master.kerberos.principal='...',
> hbase.kerberos.regionserver.principal='...',
> hbase.security.auth.enable = 'true',
> hbase.sasl.clientconfig = 'Client'



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-20812) flink connector hbase(1.4,2.2) too few control parameters provided

2020-12-29 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20812?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu updated FLINK-20812:

Labels:   (was: flink-connector)

> flink connector hbase(1.4,2.2) too few control parameters provided
> --
>
> Key: FLINK-20812
> URL: https://issues.apache.org/jira/browse/FLINK-20812
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / HBase, Table SQL / Ecosystem
>Affects Versions: 1.10.1, 1.12.0, 1.11.3, 1.13.0
>Reporter: WeiNan Zhao
>Priority: Major
> Fix For: 1.13.0
>
>
> When using a CDH cluster, I need Kerberos authentication, so I have to pass 
> some Kerberos-related HBase parameters, but the current HBase connector does 
> not provide an entry point for them. I wonder if this could be changed; if 
> so, I can submit a PR for the HBase connector.
> e.g. HBase parameters:
> hbase.security.authentication='kerberos',
> hbase.master.kerberos.principal='...',
> hbase.kerberos.regionserver.principal='...',
> hbase.security.auth.enable = 'true',
> hbase.sasl.clientconfig = 'Client'



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (FLINK-20488) Show checkpoint type in the UI (AC/UC) for each subtask

2020-12-29 Thread Yuan Mei (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17256293#comment-17256293
 ] 

Yuan Mei edited comment on FLINK-20488 at 12/30/20, 5:37 AM:
-

A bit more details:

 

At a high level, people want to understand whether a checkpoint completed as 
an aligned or an unaligned checkpoint, so at the very least we should expose 
the completed checkpoint's type as {{ALIGNED_CHECKPOINT}} or 
{{UN_ALIGNED_CHECKPOINT}}.

For each type, there are separate cases:

{{ALIGNED_CHECKPOINT}} :
 *case1:* aligned with no timeout (unaligned checkpoint is disabled);
 *case2:* aligned without being timed out (unaligned checkpoint is enabled, but 
alignment duration < timeout)

{{UN_ALIGNED_CHECKPOINT}} :
 *case3:* unaligned with no timeout (unaligned is enabled, but not allowed to 
be switched back to aligned)
 *case4:* unaligned due to being timed out (unaligned checkpoint is enabled, 
alignment duration > timeout)

*case 5:* we have an additional case where we are NOT in {{ExactlyOnceMode}}
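The five cases above can be sketched as a small decision function. This is a hypothetical illustration, not Flink's actual implementation; the class, method, and parameter names (`classify`, `exactlyOnce`, `alignmentTimedOut`) are all assumptions made for the sketch.

```java
// Hypothetical sketch of mapping a completed checkpoint to the proposed
// type labels; cases 3 and 4 collapse into one "alignment gave up/timed out"
// flag for simplicity.
public class CheckpointTypeClassifier {

    enum CheckpointType { ALIGNED_CHECKPOINT, UN_ALIGNED_CHECKPOINT, NOT_EXACTLY_ONCE }

    static CheckpointType classify(boolean exactlyOnce,
                                   boolean unalignedEnabled,
                                   boolean alignmentTimedOut) {
        if (!exactlyOnce) {
            return CheckpointType.NOT_EXACTLY_ONCE;        // case 5
        }
        if (!unalignedEnabled) {
            return CheckpointType.ALIGNED_CHECKPOINT;      // case 1
        }
        // Unaligned is enabled: the checkpoint still completes aligned
        // unless the alignment phase timed out (or was never allowed).
        return alignmentTimedOut
                ? CheckpointType.UN_ALIGNED_CHECKPOINT     // cases 3/4
                : CheckpointType.ALIGNED_CHECKPOINT;       // case 2
    }
}
```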


was (Author: ym):
A bit more details:

 

On the high-level, people want to understand whether the checkpoint is 
completed as an aligned or unaligned checkpoint; so at least we should have 
types of checkpoints completed as {{ALIGNED_CHECKPOINT}}  or 
{{UN_ALIGNED_CHECKPOINT}}

For each type, there are separate cases:

{{ALIGNED_CHECKPOINT}} :
*case1:* aligned with no timeout (unaligned checkpoint is disabled);
*case2:* aligned without being timed out (unaligned checkpoint is enabled, but 
alignment duration < timeout)

{{UN_ALIGNED_CHECKPOINT}} :
*case3:* unaligned with no timeout (unaligned is enabled, but not allowed to be 
switched back to aligned)
*case4:* unaligned due to being timed out (unaligned checkpoint is enabled, 
alignment duration > timeout)

*case 5:* we have an additional case where we are NOT in {{ExactlyOnceMode}}

> Show checkpoint type in the UI (AC/UC) for each subtask
> ---
>
> Key: FLINK-20488
> URL: https://issues.apache.org/jira/browse/FLINK-20488
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Checkpointing, Runtime / Configuration, 
> Runtime / Web Frontend
>Affects Versions: 1.12.0
>Reporter: Roman Khachatryan
>Priority: Major
>
> A follow-up ticket after FLINK-19681 to address issues not directly related 
> to checkpointing (see 
> [discussion|https://github.com/apache/flink/pull/13827#discussion_r527794600]).
>  
> In the UI, show checkpoint type for each subtask; on a checkpoint level 
> display unaligned if at least one subtask did UC.
> That should ease debugging of the checkpointing issues. 
>  
> Disabling propagation moved to FLINK-20548.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-20488) Show checkpoint type in the UI (AC/UC) for each subtask

2020-12-29 Thread Yuan Mei (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17256293#comment-17256293
 ] 

Yuan Mei commented on FLINK-20488:
--

A bit more details:

 

At a high level, people want to understand whether a checkpoint completed as 
an aligned or an unaligned checkpoint, so at the very least we should expose 
the completed checkpoint's type as {{ALIGNED_CHECKPOINT}} or 
{{UN_ALIGNED_CHECKPOINT}}.

For each type, there are separate cases:

{{ALIGNED_CHECKPOINT}} :
*case1:* aligned with no timeout (unaligned checkpoint is disabled);
*case2:* aligned without being timed out (unaligned checkpoint is enabled, but 
alignment duration < timeout)

{{UN_ALIGNED_CHECKPOINT}} :
*case3:* unaligned with no timeout (unaligned is enabled, but not allowed to be 
switched back to aligned)
*case4:* unaligned due to being timed out (unaligned checkpoint is enabled, 
alignment duration > timeout)

*case 5:* we have an additional case where we are NOT in {{ExactlyOnceMode}}

> Show checkpoint type in the UI (AC/UC) for each subtask
> ---
>
> Key: FLINK-20488
> URL: https://issues.apache.org/jira/browse/FLINK-20488
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Checkpointing, Runtime / Configuration, 
> Runtime / Web Frontend
>Affects Versions: 1.12.0
>Reporter: Roman Khachatryan
>Priority: Major
>
> A follow-up ticket after FLINK-19681 to address issues not directly related 
> to checkpointing (see 
> [discussion|https://github.com/apache/flink/pull/13827#discussion_r527794600]).
>  
> In the UI, show checkpoint type for each subtask; on a checkpoint level 
> display unaligned if at least one subtask did UC.
> That should ease debugging of the checkpointing issues. 
>  
> Disabling propagation moved to FLINK-20548.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (FLINK-18983) Job doesn't changed to failed if close function has blocked

2020-12-29 Thread Yuan Mei (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17256291#comment-17256291
 ] 

Yuan Mei edited comment on FLINK-18983 at 12/30/20, 5:34 AM:
-

Since [~liuyufei] mentioned that they already have an internal solution for 
this problem, I would downgrade the priority of this ticket a bit for now.

But I think [~liuyufei] made a good point, so I wrote a PR to explicitly 
differentiate between "task cancellation" and "task failure".

If users have similar requirements, they can add timeout logic in the 
`close()` function to work around the problem for now.

 

[https://github.com/apache/flink/pull/14460]
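The suggested workaround of adding timeout logic to `close()` could look roughly like the following. This is a minimal sketch under the assumption that the cleanup work can be delegated to a helper thread; `TimedClose` and `closeWithTimeout` are hypothetical names, not part of any Flink API.

```java
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class TimedClose {

    // Run potentially blocking cleanup with an upper bound, so a stuck
    // close() cannot keep the task from reaching a terminal state.
    // Returns true if the cleanup finished within the timeout.
    static boolean closeWithTimeout(Runnable cleanup, long timeoutMs) {
        ExecutorService executor = Executors.newSingleThreadExecutor();
        try {
            executor.submit(cleanup).get(timeoutMs, TimeUnit.MILLISECONDS);
            return true;                       // cleanup finished in time
        } catch (TimeoutException | InterruptedException e) {
            return false;                      // gave up waiting
        } catch (ExecutionException e) {
            // Surface failures thrown by the cleanup itself.
            throw new RuntimeException(e.getCause());
        } finally {
            executor.shutdownNow();            // best-effort interrupt
        }
    }
}
```

A user-defined function's `close()` would call `closeWithTimeout(this::doCleanup, timeoutMs)` and decide whether to log or rethrow on timeout.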


was (Author: ym):
As [~liuyufei] mentioned that they already have an internal solution for this 
problem, I would downgrade the priority of this ticket a bit for now. 

But I think [~liuyufei] made a point, so I wrote a PR to differentiate the 
differences between "task cancellation" and "task failure" explicitly.

 

[https://github.com/apache/flink/pull/14460]

> Job doesn't changed to failed if close function has blocked
> ---
>
> Key: FLINK-18983
> URL: https://issues.apache.org/jira/browse/FLINK-18983
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Task
>Affects Versions: 1.11.0, 1.12.0
>Reporter: YufeiLiu
>Priority: Major
>
> If an operator throws an exception, it breaks the processing loop and all 
> operators are disposed. But the task state never switches to FAILED if it 
> blocks in Function.close, so the JobMaster cannot learn the final state and 
> trigger a restart.
> Tasks have a {{TaskCancelerWatchDog}} to kill the process when cancellation 
> times out, but it doesn't work for FAILED tasks. The task thread will hang 
> forever at:
> org.apache.flink.streaming.runtime.tasks.StreamTask#cleanUpInvoke
> Test case:
> {code:java}
> Configuration configuration = new Configuration();
> configuration.set(TaskManagerOptions.TASK_CANCELLATION_TIMEOUT, 1L);
> StreamExecutionEnvironment env = 
> StreamExecutionEnvironment.createLocalEnvironment(2, configuration);
> env.addSource(...)
>   .process(new ProcessFunction<String, String>() {
>   @Override
>   public void processElement(String value, Context ctx, 
> Collector<String> out) throws Exception {
>   if (getRuntimeContext().getIndexOfThisSubtask() == 0) {
>   throw new RuntimeException();
>   }
>   }
>   @Override
>   public void close() throws Exception {
>   if (getRuntimeContext().getIndexOfThisSubtask() == 0) {
>   Thread.sleep(1000);
>   }
>   }
>   }).setParallelism(2)
>   .print();
> {code}
> In this case, the job blocks in the close action and never changes to FAILED.
> If the thread with subtaskIndex == 1 is changed to sleep instead, the TM 
> exits after TASK_CANCELLATION_TIMEOUT.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-18983) Job doesn't changed to failed if close function has blocked

2020-12-29 Thread Yuan Mei (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17256291#comment-17256291
 ] 

Yuan Mei commented on FLINK-18983:
--

Since [~liuyufei] mentioned that they already have an internal solution for 
this problem, I would downgrade the priority of this ticket a bit for now.

But I think [~liuyufei] made a good point, so I wrote a PR to explicitly 
differentiate between "task cancellation" and "task failure".

 

[https://github.com/apache/flink/pull/14460]

> Job doesn't changed to failed if close function has blocked
> ---
>
> Key: FLINK-18983
> URL: https://issues.apache.org/jira/browse/FLINK-18983
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Task
>Affects Versions: 1.11.0, 1.12.0
>Reporter: YufeiLiu
>Priority: Major
>
> If an operator throws an exception, it breaks the processing loop and all 
> operators are disposed. But the task state never switches to FAILED if it 
> blocks in Function.close, so the JobMaster cannot learn the final state and 
> trigger a restart.
> Tasks have a {{TaskCancelerWatchDog}} to kill the process when cancellation 
> times out, but it doesn't work for FAILED tasks. The task thread will hang 
> forever at:
> org.apache.flink.streaming.runtime.tasks.StreamTask#cleanUpInvoke
> Test case:
> {code:java}
> Configuration configuration = new Configuration();
> configuration.set(TaskManagerOptions.TASK_CANCELLATION_TIMEOUT, 1L);
> StreamExecutionEnvironment env = 
> StreamExecutionEnvironment.createLocalEnvironment(2, configuration);
> env.addSource(...)
>   .process(new ProcessFunction<String, String>() {
>   @Override
>   public void processElement(String value, Context ctx, 
> Collector<String> out) throws Exception {
>   if (getRuntimeContext().getIndexOfThisSubtask() == 0) {
>   throw new RuntimeException();
>   }
>   }
>   @Override
>   public void close() throws Exception {
>   if (getRuntimeContext().getIndexOfThisSubtask() == 0) {
>   Thread.sleep(1000);
>   }
>   }
>   }).setParallelism(2)
>   .print();
> {code}
> In this case, the job blocks in the close action and never changes to FAILED.
> If the thread with subtaskIndex == 1 is changed to sleep instead, the TM 
> exits after TASK_CANCELLATION_TIMEOUT.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-20812) flink connector hbase(1.4,2.2) too few control parameters provided

2020-12-29 Thread WeiNan Zhao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20812?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

WeiNan Zhao updated FLINK-20812:

Labels: flink-connector  (was: )

> flink connector hbase(1.4,2.2) too few control parameters provided
> --
>
> Key: FLINK-20812
> URL: https://issues.apache.org/jira/browse/FLINK-20812
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / HBase
>Affects Versions: 1.10.1, 1.12.0, 1.11.3, 1.13.0
>Reporter: WeiNan Zhao
>Priority: Major
>  Labels: flink-connector
> Fix For: 1.13.0
>
>
> When using a CDH cluster, I need Kerberos authentication, so I have to pass 
> some Kerberos-related HBase parameters, but the current HBase connector does 
> not provide an entry point for them. I wonder if this could be changed; if 
> so, I can submit a PR for the HBase connector.
> e.g. HBase parameters:
> hbase.security.authentication='kerberos',
> hbase.master.kerberos.principal='...',
> hbase.kerberos.regionserver.principal='...',
> hbase.security.auth.enable = 'true',
> hbase.sasl.clientconfig = 'Client'



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-20812) flink connector hbase(1.4,2.2) too few control parameters provided

2020-12-29 Thread WeiNan Zhao (Jira)
WeiNan Zhao created FLINK-20812:
---

 Summary: flink connector hbase(1.4,2.2) too few control parameters 
provided
 Key: FLINK-20812
 URL: https://issues.apache.org/jira/browse/FLINK-20812
 Project: Flink
  Issue Type: Improvement
  Components: Connectors / HBase
Affects Versions: 1.11.3, 1.12.0, 1.10.1, 1.13.0
Reporter: WeiNan Zhao
 Fix For: 1.13.0


When using a CDH cluster, I need Kerberos authentication, so I have to pass 
some Kerberos-related HBase parameters, but the current HBase connector does 
not provide an entry point for them. I wonder if this could be changed; if so, 
I can submit a PR for the HBase connector.

e.g. HBase parameters:

hbase.security.authentication='kerberos',
hbase.master.kerberos.principal='...',
hbase.kerberos.regionserver.principal='...',
hbase.security.auth.enable = 'true',
hbase.sasl.clientconfig = 'Client'



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] HuangZhenQiu commented on pull request #8952: [FLINK-10868][flink-runtime] Add failure rater for resource manager

2020-12-29 Thread GitBox


HuangZhenQiu commented on pull request #8952:
URL: https://github.com/apache/flink/pull/8952#issuecomment-752331980


   @xintongsong 
   In the current implementation, I chose the class name 
TimestampBasedFailureRater rather than ThresholdMeter, because a 
container-failure-related error message and config key are used in the class, 
so it is not generic enough to be named ThresholdMeter. As for the formatting 
rules, I used spotless to format the code after rebasing onto master, so the 
code should already conform to the proposed rules.
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] HuangZhenQiu commented on a change in pull request #8952: [FLINK-10868][flink-runtime] Add failure rater for resource manager

2020-12-29 Thread GitBox


HuangZhenQiu commented on a change in pull request #8952:
URL: https://github.com/apache/flink/pull/8952#discussion_r549938770



##
File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/failurerate/TimestampBasedFailureRater.java
##
@@ -0,0 +1,93 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.failurerate;
+
+import org.apache.flink.annotation.VisibleForTesting;
+import org.apache.flink.api.common.time.Time;
+import org.apache.flink.runtime.execution.ThresholdExceedException;
+
+import java.util.ArrayDeque;
+import java.util.Queue;
+import java.util.function.Supplier;
+
+/**
+ * A timestamp queue based failure rater implementation.
+ */
+public class TimestampBasedFailureRater implements ThresholdMeter {

Review comment:
   Yes, agree we don't need to have three layers. The latest PR contains 
only two layers, Meter and TimestampBasedFailureRater. Why I don't want to use 
ThresholdMeter as the implementation class name? This two method checkThreshold 
and fromConfiguration has container failure related error message and 
configuration key. It is not generic enough to put into ThresholdMeter. 
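A timestamp-queue-based failure rater of the kind discussed here can be sketched as follows. This is an illustrative reconstruction in the spirit of the PR, not its actual code; the class name, method names, and the exception type are assumptions, and timestamps are injected explicitly to keep it deterministic.

```java
import java.util.ArrayDeque;

// Sliding-window failure rater: failures are recorded as timestamps in a
// queue, stale entries are pruned, and the current count is compared against
// a configured threshold.
public class FailureRater {

    private final ArrayDeque<Long> failureTimestamps = new ArrayDeque<>();
    private final int maxFailuresPerInterval;
    private final long intervalMs;

    public FailureRater(int maxFailuresPerInterval, long intervalMs) {
        this.maxFailuresPerInterval = maxFailuresPerInterval;
        this.intervalMs = intervalMs;
    }

    // Record one failure at the given time.
    public void markFailure(long nowMs) {
        failureTimestamps.addLast(nowMs);
    }

    // Number of failures inside the sliding window ending at nowMs.
    public int currentFailureCount(long nowMs) {
        while (!failureTimestamps.isEmpty()
                && failureTimestamps.peekFirst() <= nowMs - intervalMs) {
            failureTimestamps.pollFirst();   // drop entries outside the window
        }
        return failureTimestamps.size();
    }

    // Throws when the rate exceeds the threshold, mirroring the
    // checkThreshold idea from the review discussion.
    public void checkThreshold(long nowMs) {
        if (currentFailureCount(nowMs) > maxFailuresPerInterval) {
            throw new IllegalStateException(
                    "Failure rate exceeded: more than " + maxFailuresPerInterval
                            + " failures in " + intervalMs + " ms");
        }
    }
}
```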





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-20777) Default value of property "partition.discovery.interval.ms" is not as documented in new Kafka Source

2020-12-29 Thread Jiangjie Qin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17256288#comment-17256288
 ] 

Jiangjie Qin commented on FLINK-20777:
--

From a Kafka user's perspective, I would expect partition changes to be picked 
up by the Flink job automatically, so I don't have to restart the Flink job. 
Therefore, enabling partition discovery by default seems reasonable to me.

[~873925...@qq.com] I am curious about the case that people don't want to start 
consuming from the new partitions right away. Do you mind sharing some concrete 
scenarios? Thanks.
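The mismatch described in this ticket can be illustrated with a small sketch. This is not the actual `KafkaSourceBuilder` code; the class and method names below are hypothetical, and only the fallback behavior (absent property resolves to -1 rather than the documented 30 seconds) mirrors the report.

```java
import java.util.Properties;

public class KafkaSourceDefaults {

    static final String PARTITION_DISCOVERY_KEY = "partition.discovery.interval.ms";
    static final long DOCUMENTED_DEFAULT_MS = 30_000L;  // what the option docs claim
    static final long BUILDER_FALLBACK_MS = -1L;        // what the builder actually uses

    // Mirrors the reported behavior: an absent property disables discovery
    // (-1) instead of falling back to the documented 30-second default.
    static long resolveDiscoveryInterval(Properties props) {
        String value = props.getProperty(PARTITION_DISCOVERY_KEY);
        return value == null ? BUILDER_FALLBACK_MS : Long.parseLong(value);
    }
}
```

The fix would be to make the fallback agree with the documented default (or to document -1 as the default).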

> Default value of property "partition.discovery.interval.ms" is not as 
> documented in new Kafka Source
> 
>
> Key: FLINK-20777
> URL: https://issues.apache.org/jira/browse/FLINK-20777
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Reporter: Qingsheng Ren
>Priority: Major
> Fix For: 1.12.1
>
>
> The default value of property "partition.discovery.interval.ms" is documented 
> as 30 seconds in {{KafkaSourceOptions}}, but it is set to -1 in 
> {{KafkaSourceBuilder}} if the user doesn't pass this property explicitly. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (FLINK-19635) HBaseConnectorITCase.testTableSourceSinkWithDDL is unstable with a result mismatch

2020-12-29 Thread Huang Xingbo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17256282#comment-17256282
 ] 

Huang Xingbo edited comment on FLINK-19635 at 12/30/20, 4:42 AM:
-

[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=11476=logs=d44f43ce-542c-597d-bf94-b0718c71e5e8=03dca39c-73e8-5aaf-601d-328ae5c35f20]

The failed case happened again. Should we reopen this issue?


was (Author: hxbks2ks):
[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=11476=logs=d44f43ce-542c-597d-bf94-b0718c71e5e8=03dca39c-73e8-5aaf-601d-328ae5c35f20]

The failed case happened again. Does we reopen this issue?

> HBaseConnectorITCase.testTableSourceSinkWithDDL is unstable with a result 
> mismatch
> --
>
> Key: FLINK-19635
> URL: https://issues.apache.org/jira/browse/FLINK-19635
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / HBase
>Affects Versions: 1.12.0
>Reporter: Robert Metzger
>Assignee: Leonard Xu
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.12.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=7562=logs=d44f43ce-542c-597d-bf94-b0718c71e5e8=03dca39c-73e8-5aaf-601d-328ae5c35f20
> {code}
> 2020-10-14T04:35:36.9268975Z testTableSourceSinkWithDDL[planner = 
> BLINK_PLANNER, legacy = 
> false](org.apache.flink.connector.hbase2.HBaseConnectorITCase)  Time elapsed: 
> 3.131 sec  <<< FAILURE!
> 2020-10-14T04:35:36.9276246Z java.lang.AssertionError: 
> expected:<[1,10,Hello-1,100,1.01,false,Welt-1,2019-08-18T19:00,2019-08-18,19:00,12345678.0001,
>  
> 2,20,Hello-2,200,2.02,true,Welt-2,2019-08-18T19:01,2019-08-18,19:01,12345678.0002,
>  
> 3,30,Hello-3,300,3.03,false,Welt-3,2019-08-18T19:02,2019-08-18,19:02,12345678.0003,
>  
> 4,40,null,400,4.04,true,Welt-4,2019-08-18T19:03,2019-08-18,19:03,12345678.0004,
>  
> 5,50,Hello-5,500,5.05,false,Welt-5,2019-08-19T19:10,2019-08-19,19:10,12345678.0005,
>  
> 6,60,Hello-6,600,6.06,true,Welt-6,2019-08-19T19:20,2019-08-19,19:20,12345678.0006,
>  
> 7,70,Hello-7,700,7.07,false,Welt-7,2019-08-19T19:30,2019-08-19,19:30,12345678.0007,
>  
> 8,80,null,800,8.08,true,Welt-8,2019-08-19T19:40,2019-08-19,19:40,12345678.0008]>
>  but 
> was:<[1,10,Hello-1,100,1.01,false,Welt-1,2019-08-18T19:00,2019-08-18,19:00,12345678.0001,
>  
> 2,20,Hello-2,200,2.02,true,Welt-2,2019-08-18T19:01,2019-08-18,19:01,12345678.0002,
>  
> 3,30,Hello-3,300,3.03,false,Welt-3,2019-08-18T19:02,2019-08-18,19:02,12345678.0003]>
> 2020-10-14T04:35:36.9281340Z  at org.junit.Assert.fail(Assert.java:88)
> 2020-10-14T04:35:36.9282023Z  at 
> org.junit.Assert.failNotEquals(Assert.java:834)
> 2020-10-14T04:35:36.9328385Z  at 
> org.junit.Assert.assertEquals(Assert.java:118)
> 2020-10-14T04:35:36.9338939Z  at 
> org.junit.Assert.assertEquals(Assert.java:144)
> 2020-10-14T04:35:36.9339880Z  at 
> org.apache.flink.connector.hbase2.HBaseConnectorITCase.testTableSourceSinkWithDDL(HBaseConnectorITCase.java:449)
> 2020-10-14T04:35:36.9341003Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-19635) HBaseConnectorITCase.testTableSourceSinkWithDDL is unstable with a result mismatch

2020-12-29 Thread Huang Xingbo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17256282#comment-17256282
 ] 

Huang Xingbo commented on FLINK-19635:
--

[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=11476=logs=d44f43ce-542c-597d-bf94-b0718c71e5e8=03dca39c-73e8-5aaf-601d-328ae5c35f20]

The failed case happened again. Should we reopen this issue?

> HBaseConnectorITCase.testTableSourceSinkWithDDL is unstable with a result 
> mismatch
> --
>
> Key: FLINK-19635
> URL: https://issues.apache.org/jira/browse/FLINK-19635
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / HBase
>Affects Versions: 1.12.0
>Reporter: Robert Metzger
>Assignee: Leonard Xu
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.12.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=7562=logs=d44f43ce-542c-597d-bf94-b0718c71e5e8=03dca39c-73e8-5aaf-601d-328ae5c35f20
> {code}
> 2020-10-14T04:35:36.9268975Z testTableSourceSinkWithDDL[planner = 
> BLINK_PLANNER, legacy = 
> false](org.apache.flink.connector.hbase2.HBaseConnectorITCase)  Time elapsed: 
> 3.131 sec  <<< FAILURE!
> 2020-10-14T04:35:36.9276246Z java.lang.AssertionError: 
> expected:<[1,10,Hello-1,100,1.01,false,Welt-1,2019-08-18T19:00,2019-08-18,19:00,12345678.0001,
>  
> 2,20,Hello-2,200,2.02,true,Welt-2,2019-08-18T19:01,2019-08-18,19:01,12345678.0002,
>  
> 3,30,Hello-3,300,3.03,false,Welt-3,2019-08-18T19:02,2019-08-18,19:02,12345678.0003,
>  
> 4,40,null,400,4.04,true,Welt-4,2019-08-18T19:03,2019-08-18,19:03,12345678.0004,
>  
> 5,50,Hello-5,500,5.05,false,Welt-5,2019-08-19T19:10,2019-08-19,19:10,12345678.0005,
>  
> 6,60,Hello-6,600,6.06,true,Welt-6,2019-08-19T19:20,2019-08-19,19:20,12345678.0006,
>  
> 7,70,Hello-7,700,7.07,false,Welt-7,2019-08-19T19:30,2019-08-19,19:30,12345678.0007,
>  
> 8,80,null,800,8.08,true,Welt-8,2019-08-19T19:40,2019-08-19,19:40,12345678.0008]>
>  but 
> was:<[1,10,Hello-1,100,1.01,false,Welt-1,2019-08-18T19:00,2019-08-18,19:00,12345678.0001,
>  
> 2,20,Hello-2,200,2.02,true,Welt-2,2019-08-18T19:01,2019-08-18,19:01,12345678.0002,
>  
> 3,30,Hello-3,300,3.03,false,Welt-3,2019-08-18T19:02,2019-08-18,19:02,12345678.0003]>
> 2020-10-14T04:35:36.9281340Z  at org.junit.Assert.fail(Assert.java:88)
> 2020-10-14T04:35:36.9282023Z  at 
> org.junit.Assert.failNotEquals(Assert.java:834)
> 2020-10-14T04:35:36.9328385Z  at 
> org.junit.Assert.assertEquals(Assert.java:118)
> 2020-10-14T04:35:36.9338939Z  at 
> org.junit.Assert.assertEquals(Assert.java:144)
> 2020-10-14T04:35:36.9339880Z  at 
> org.apache.flink.connector.hbase2.HBaseConnectorITCase.testTableSourceSinkWithDDL(HBaseConnectorITCase.java:449)
> 2020-10-14T04:35:36.9341003Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (FLINK-20798) Using PVC as high-availability.storageDir could not work

2020-12-29 Thread hayden zhou (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17256280#comment-17256280
 ] 

hayden zhou edited comment on FLINK-20798 at 12/30/20, 4:39 AM:


[^flink.log] this is the log.

My flink-conf.yaml:

kubernetes.cluster-id: mta-flink
high-availability: org.apache.flink.kubernetes.highavailability.KubernetesHaServicesFactory
high-availability.storageDir: file:///opt/flink/nfs/ha

Can I just use FileSystemHaServiceFactory to replace KubernetesHaServicesFactory 
in the ConfigMap? 

was (Author: hayden zhou):
[^flink.log] this is the log

> Using PVC as high-availability.storageDir could not work
> 
>
> Key: FLINK-20798
> URL: https://issues.apache.org/jira/browse/FLINK-20798
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / Kubernetes
>Affects Versions: 1.12.0
> Environment: FLINK 1.12.0
>Reporter: hayden zhou
>Priority: Major
> Attachments: flink.log
>
>
> I deployed Flink on Kubernetes using a PVC as the high-availability 
> storageDir. The JobManager logs show that the leader election succeeded, but 
> the web UI keeps showing that the election is in progress.
>  
> Below are the JobManager logs:
> ```
> 2020-12-29T06:45:54.177850394Z 2020-12-29 14:45:54,177 DEBUG 
> io.fabric8.kubernetes.client.extended.leaderelection.LeaderElector [] - 
> Leader election started
> 2020-12-29T06:45:54.177855303Z 2020-12-29 14:45:54,177 DEBUG 
> io.fabric8.kubernetes.client.extended.leaderelection.LeaderElector [] - 
> Attempting to acquire leader lease 'ConfigMapLock: default - 
> mta-flink-resourcemanager-leader (6f6479c6-86cc-4d62-84f9-37ff968bd0e5)'...
> 2020-12-29T06:45:54.178668055Z 2020-12-29 14:45:54,178 DEBUG 
> io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager [] - 
> WebSocket successfully opened
> 2020-12-29T06:45:54.178895963Z 2020-12-29 14:45:54,178 INFO 
> org.apache.flink.runtime.leaderretrieval.DefaultLeaderRetrievalService [] - 
> Starting DefaultLeaderRetrievalService with 
> KubernetesLeaderRetrievalDriver\{configMapName='mta-flink-resourcemanager-leader'}.
> 2020-12-29T06:45:54.179327491Z 2020-12-29 14:45:54,179 DEBUG 
> io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager [] - 
> Connecting websocket ... 
> io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager@6d303498
> 2020-12-29T06:45:54.230081993Z 2020-12-29 14:45:54,229 DEBUG 
> io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager [] - 
> WebSocket successfully opened
> 2020-12-29T06:45:54.230202329Z 2020-12-29 14:45:54,230 INFO 
> org.apache.flink.runtime.leaderretrieval.DefaultLeaderRetrievalService [] - 
> Starting DefaultLeaderRetrievalService with 
> KubernetesLeaderRetrievalDriver\{configMapName='mta-flink-dispatcher-leader'}.
> 2020-12-29T06:45:54.230219281Z 2020-12-29 14:45:54,229 DEBUG 
> io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager [] - 
> WebSocket successfully opened
> 2020-12-29T06:45:54.230353912Z 2020-12-29 14:45:54,230 INFO 
> org.apache.flink.runtime.leaderelection.DefaultLeaderElectionService [] - 
> Starting DefaultLeaderElectionService with 
> KubernetesLeaderElectionDriver\{configMapName='mta-flink-resourcemanager-leader'}.
> 2020-12-29T06:45:54.237004177Z 2020-12-29 14:45:54,236 DEBUG 
> io.fabric8.kubernetes.client.extended.leaderelection.LeaderElector [] - 
> Leader changed from null to 6f6479c6-86cc-4d62-84f9-37ff968bd0e5
> 2020-12-29T06:45:54.237024655Z 2020-12-29 14:45:54,236 INFO 
> org.apache.flink.kubernetes.kubeclient.resources.KubernetesLeaderElector [] - 
> New leader elected 6f6479c6-86cc-4d62-84f9-37ff968bd0e5 for 
> mta-flink-restserver-leader.
> 2020-12-29T06:45:54.237027811Z 2020-12-29 14:45:54,236 DEBUG 
> io.fabric8.kubernetes.client.extended.leaderelection.LeaderElector [] - 
> Successfully Acquired leader lease 'ConfigMapLock: default - 
> mta-flink-restserver-leader (6f6479c6-86cc-4d62-84f9-37ff968bd0e5)'
> 2020-12-29T06:45:54.237297376Z 2020-12-29 14:45:54,237 DEBUG 
> org.apache.flink.runtime.leaderelection.DefaultLeaderElectionService [] - 
> Grant leadership to contender http://mta-flink-jobmanager:8081 with session 
> ID 9587e13f-322f-4cd5-9fff-b4941462be0f.
> 2020-12-29T06:45:54.237353551Z 2020-12-29 14:45:54,237 INFO 
> org.apache.flink.runtime.dispatcher.DispatcherRestEndpoint [] - 
> http://mta-flink-jobmanager:8081 was granted leadership with 
> leaderSessionID=9587e13f-322f-4cd5-9fff-b4941462be0f
> 2020-12-29T06:45:54.237440354Z 2020-12-29 14:45:54,237 DEBUG 
> org.apache.flink.runtime.leaderelection.DefaultLeaderElectionService [] - 
> Confirm leader session ID 9587e13f-322f-4cd5-9fff-b4941462be0f for leader 
> http://mta-flink-jobmanager:8081.
> 2020-12-29T06:45:54.254555127Z 2020-12-29 14:45:54,254 DEBUG 
> io.fabric8.kubernetes.client.extended.leaderelection.LeaderElector [] - 
> Leader changed from null to 

[jira] [Commented] (FLINK-20798) Using PVC as high-availability.storageDir could not work

2020-12-29 Thread hayden zhou (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17256281#comment-17256281
 ] 

hayden zhou commented on FLINK-20798:
-

Can we chat on WeChat? My WeChat account is `zmfk2009`.

> Using PVC as high-availability.storageDir could not work
> 
>
> Key: FLINK-20798
> URL: https://issues.apache.org/jira/browse/FLINK-20798
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / Kubernetes
>Affects Versions: 1.12.0
> Environment: FLINK 1.12.0
>Reporter: hayden zhou
>Priority: Major
> Attachments: flink.log
>
>
> I deployed Flink on Kubernetes using a PVC as the high-availability 
> storageDir. The JobManager logs show that the leader election succeeded, but 
> the web UI keeps showing that the election is in progress.
>  
> Below are the JobManager logs:
> ```
> 2020-12-29T06:45:54.177850394Z 2020-12-29 14:45:54,177 DEBUG 
> io.fabric8.kubernetes.client.extended.leaderelection.LeaderElector [] - 
> Leader election started
> 2020-12-29T06:45:54.177855303Z 2020-12-29 14:45:54,177 DEBUG 
> io.fabric8.kubernetes.client.extended.leaderelection.LeaderElector [] - 
> Attempting to acquire leader lease 'ConfigMapLock: default - 
> mta-flink-resourcemanager-leader (6f6479c6-86cc-4d62-84f9-37ff968bd0e5)'...
> 2020-12-29T06:45:54.178668055Z 2020-12-29 14:45:54,178 DEBUG 
> io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager [] - 
> WebSocket successfully opened
> 2020-12-29T06:45:54.178895963Z 2020-12-29 14:45:54,178 INFO 
> org.apache.flink.runtime.leaderretrieval.DefaultLeaderRetrievalService [] - 
> Starting DefaultLeaderRetrievalService with 
> KubernetesLeaderRetrievalDriver\{configMapName='mta-flink-resourcemanager-leader'}.
> 2020-12-29T06:45:54.179327491Z 2020-12-29 14:45:54,179 DEBUG 
> io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager [] - 
> Connecting websocket ... 
> io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager@6d303498
> 2020-12-29T06:45:54.230081993Z 2020-12-29 14:45:54,229 DEBUG 
> io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager [] - 
> WebSocket successfully opened
> 2020-12-29T06:45:54.230202329Z 2020-12-29 14:45:54,230 INFO 
> org.apache.flink.runtime.leaderretrieval.DefaultLeaderRetrievalService [] - 
> Starting DefaultLeaderRetrievalService with 
> KubernetesLeaderRetrievalDriver\{configMapName='mta-flink-dispatcher-leader'}.
> 2020-12-29T06:45:54.230219281Z 2020-12-29 14:45:54,229 DEBUG 
> io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager [] - 
> WebSocket successfully opened
> 2020-12-29T06:45:54.230353912Z 2020-12-29 14:45:54,230 INFO 
> org.apache.flink.runtime.leaderelection.DefaultLeaderElectionService [] - 
> Starting DefaultLeaderElectionService with 
> KubernetesLeaderElectionDriver\{configMapName='mta-flink-resourcemanager-leader'}.
> 2020-12-29T06:45:54.237004177Z 2020-12-29 14:45:54,236 DEBUG 
> io.fabric8.kubernetes.client.extended.leaderelection.LeaderElector [] - 
> Leader changed from null to 6f6479c6-86cc-4d62-84f9-37ff968bd0e5
> 2020-12-29T06:45:54.237024655Z 2020-12-29 14:45:54,236 INFO 
> org.apache.flink.kubernetes.kubeclient.resources.KubernetesLeaderElector [] - 
> New leader elected 6f6479c6-86cc-4d62-84f9-37ff968bd0e5 for 
> mta-flink-restserver-leader.
> 2020-12-29T06:45:54.237027811Z 2020-12-29 14:45:54,236 DEBUG 
> io.fabric8.kubernetes.client.extended.leaderelection.LeaderElector [] - 
> Successfully Acquired leader lease 'ConfigMapLock: default - 
> mta-flink-restserver-leader (6f6479c6-86cc-4d62-84f9-37ff968bd0e5)'
> 2020-12-29T06:45:54.237297376Z 2020-12-29 14:45:54,237 DEBUG 
> org.apache.flink.runtime.leaderelection.DefaultLeaderElectionService [] - 
> Grant leadership to contender http://mta-flink-jobmanager:8081 with session 
> ID 9587e13f-322f-4cd5-9fff-b4941462be0f.
> 2020-12-29T06:45:54.237353551Z 2020-12-29 14:45:54,237 INFO 
> org.apache.flink.runtime.dispatcher.DispatcherRestEndpoint [] - 
> http://mta-flink-jobmanager:8081 was granted leadership with 
> leaderSessionID=9587e13f-322f-4cd5-9fff-b4941462be0f
> 2020-12-29T06:45:54.237440354Z 2020-12-29 14:45:54,237 DEBUG 
> org.apache.flink.runtime.leaderelection.DefaultLeaderElectionService [] - 
> Confirm leader session ID 9587e13f-322f-4cd5-9fff-b4941462be0f for leader 
> http://mta-flink-jobmanager:8081.
> 2020-12-29T06:45:54.254555127Z 2020-12-29 14:45:54,254 DEBUG 
> io.fabric8.kubernetes.client.extended.leaderelection.LeaderElector [] - 
> Leader changed from null to 6f6479c6-86cc-4d62-84f9-37ff968bd0e5
> 2020-12-29T06:45:54.254588299Z 2020-12-29 14:45:54,254 INFO 
> org.apache.flink.kubernetes.kubeclient.resources.KubernetesLeaderElector [] - 
> New leader elected 6f6479c6-86cc-4d62-84f9-37ff968bd0e5 for 
> mta-flink-resourcemanager-leader.
> 2020-12-29T06:45:54.254628053Z 2020-12-29 14:45:54,254 DEBUG 
> io.fabric8.kubernetes.client.extended.leaderelection.LeaderElector 

[jira] [Commented] (FLINK-20798) Using PVC as high-availability.storageDir could not work

2020-12-29 Thread hayden zhou (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17256280#comment-17256280
 ] 

hayden zhou commented on FLINK-20798:
-

[^flink.log] This is the log.

> Using PVC as high-availability.storageDir could not work
> 
>
> Key: FLINK-20798
> URL: https://issues.apache.org/jira/browse/FLINK-20798
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / Kubernetes
>Affects Versions: 1.12.0
> Environment: FLINK 1.12.0
>Reporter: hayden zhou
>Priority: Major
> Attachments: flink.log
>
>
> I deployed Flink on Kubernetes using a PVC as the high-availability storageDir. 
> The JobManager logs show that leader election succeeded, but the web UI keeps 
> reporting that the election is still in progress.
>  
> Below is the JobManager log:
> ```
> 2020-12-29T06:45:54.177850394Z 2020-12-29 14:45:54,177 DEBUG 
> io.fabric8.kubernetes.client.extended.leaderelection.LeaderElector [] - 
> Leader election started
> 2020-12-29T06:45:54.177855303Z 2020-12-29 14:45:54,177 DEBUG 
> io.fabric8.kubernetes.client.extended.leaderelection.LeaderElector [] - 
> Attempting to acquire leader lease 'ConfigMapLock: default - 
> mta-flink-resourcemanager-leader (6f6479c6-86cc-4d62-84f9-37ff968bd0e5)'...
> 2020-12-29T06:45:54.178668055Z 2020-12-29 14:45:54,178 DEBUG 
> io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager [] - 
> WebSocket successfully opened
> 2020-12-29T06:45:54.178895963Z 2020-12-29 14:45:54,178 INFO 
> org.apache.flink.runtime.leaderretrieval.DefaultLeaderRetrievalService [] - 
> Starting DefaultLeaderRetrievalService with 
> KubernetesLeaderRetrievalDriver\{configMapName='mta-flink-resourcemanager-leader'}.
> 2020-12-29T06:45:54.179327491Z 2020-12-29 14:45:54,179 DEBUG 
> io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager [] - 
> Connecting websocket ... 
> io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager@6d303498
> 2020-12-29T06:45:54.230081993Z 2020-12-29 14:45:54,229 DEBUG 
> io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager [] - 
> WebSocket successfully opened
> 2020-12-29T06:45:54.230202329Z 2020-12-29 14:45:54,230 INFO 
> org.apache.flink.runtime.leaderretrieval.DefaultLeaderRetrievalService [] - 
> Starting DefaultLeaderRetrievalService with 
> KubernetesLeaderRetrievalDriver\{configMapName='mta-flink-dispatcher-leader'}.
> 2020-12-29T06:45:54.230219281Z 2020-12-29 14:45:54,229 DEBUG 
> io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager [] - 
> WebSocket successfully opened
> 2020-12-29T06:45:54.230353912Z 2020-12-29 14:45:54,230 INFO 
> org.apache.flink.runtime.leaderelection.DefaultLeaderElectionService [] - 
> Starting DefaultLeaderElectionService with 
> KubernetesLeaderElectionDriver\{configMapName='mta-flink-resourcemanager-leader'}.
> 2020-12-29T06:45:54.237004177Z 2020-12-29 14:45:54,236 DEBUG 
> io.fabric8.kubernetes.client.extended.leaderelection.LeaderElector [] - 
> Leader changed from null to 6f6479c6-86cc-4d62-84f9-37ff968bd0e5
> 2020-12-29T06:45:54.237024655Z 2020-12-29 14:45:54,236 INFO 
> org.apache.flink.kubernetes.kubeclient.resources.KubernetesLeaderElector [] - 
> New leader elected 6f6479c6-86cc-4d62-84f9-37ff968bd0e5 for 
> mta-flink-restserver-leader.
> 2020-12-29T06:45:54.237027811Z 2020-12-29 14:45:54,236 DEBUG 
> io.fabric8.kubernetes.client.extended.leaderelection.LeaderElector [] - 
> Successfully Acquired leader lease 'ConfigMapLock: default - 
> mta-flink-restserver-leader (6f6479c6-86cc-4d62-84f9-37ff968bd0e5)'
> 2020-12-29T06:45:54.237297376Z 2020-12-29 14:45:54,237 DEBUG 
> org.apache.flink.runtime.leaderelection.DefaultLeaderElectionService [] - 
> Grant leadership to contender http://mta-flink-jobmanager:8081 with session 
> ID 9587e13f-322f-4cd5-9fff-b4941462be0f.
> 2020-12-29T06:45:54.237353551Z 2020-12-29 14:45:54,237 INFO 
> org.apache.flink.runtime.dispatcher.DispatcherRestEndpoint [] - 
> http://mta-flink-jobmanager:8081 was granted leadership with 
> leaderSessionID=9587e13f-322f-4cd5-9fff-b4941462be0f
> 2020-12-29T06:45:54.237440354Z 2020-12-29 14:45:54,237 DEBUG 
> org.apache.flink.runtime.leaderelection.DefaultLeaderElectionService [] - 
> Confirm leader session ID 9587e13f-322f-4cd5-9fff-b4941462be0f for leader 
> http://mta-flink-jobmanager:8081.
> 2020-12-29T06:45:54.254555127Z 2020-12-29 14:45:54,254 DEBUG 
> io.fabric8.kubernetes.client.extended.leaderelection.LeaderElector [] - 
> Leader changed from null to 6f6479c6-86cc-4d62-84f9-37ff968bd0e5
> 2020-12-29T06:45:54.254588299Z 2020-12-29 14:45:54,254 INFO 
> org.apache.flink.kubernetes.kubeclient.resources.KubernetesLeaderElector [] - 
> New leader elected 6f6479c6-86cc-4d62-84f9-37ff968bd0e5 for 
> mta-flink-resourcemanager-leader.
> 2020-12-29T06:45:54.254628053Z 2020-12-29 14:45:54,254 DEBUG 
> io.fabric8.kubernetes.client.extended.leaderelection.LeaderElector [] - 
> Successfully 

[jira] [Updated] (FLINK-20798) Using PVC as high-availability.storageDir could not work

2020-12-29 Thread hayden zhou (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20798?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

hayden zhou updated FLINK-20798:

Attachment: flink.log

> Using PVC as high-availability.storageDir could not work
> 
>
> Key: FLINK-20798
> URL: https://issues.apache.org/jira/browse/FLINK-20798
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / Kubernetes
>Affects Versions: 1.12.0
> Environment: FLINK 1.12.0
>Reporter: hayden zhou
>Priority: Major
> Attachments: flink.log
>
>
> I deployed Flink on Kubernetes using a PVC as the high-availability storageDir. 
> The JobManager logs show that leader election succeeded, but the web UI keeps 
> reporting that the election is still in progress.
>  
> Below is the JobManager log:
> ```
> 2020-12-29T06:45:54.177850394Z 2020-12-29 14:45:54,177 DEBUG 
> io.fabric8.kubernetes.client.extended.leaderelection.LeaderElector [] - 
> Leader election started
> 2020-12-29T06:45:54.177855303Z 2020-12-29 14:45:54,177 DEBUG 
> io.fabric8.kubernetes.client.extended.leaderelection.LeaderElector [] - 
> Attempting to acquire leader lease 'ConfigMapLock: default - 
> mta-flink-resourcemanager-leader (6f6479c6-86cc-4d62-84f9-37ff968bd0e5)'...
> 2020-12-29T06:45:54.178668055Z 2020-12-29 14:45:54,178 DEBUG 
> io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager [] - 
> WebSocket successfully opened
> 2020-12-29T06:45:54.178895963Z 2020-12-29 14:45:54,178 INFO 
> org.apache.flink.runtime.leaderretrieval.DefaultLeaderRetrievalService [] - 
> Starting DefaultLeaderRetrievalService with 
> KubernetesLeaderRetrievalDriver\{configMapName='mta-flink-resourcemanager-leader'}.
> 2020-12-29T06:45:54.179327491Z 2020-12-29 14:45:54,179 DEBUG 
> io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager [] - 
> Connecting websocket ... 
> io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager@6d303498
> 2020-12-29T06:45:54.230081993Z 2020-12-29 14:45:54,229 DEBUG 
> io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager [] - 
> WebSocket successfully opened
> 2020-12-29T06:45:54.230202329Z 2020-12-29 14:45:54,230 INFO 
> org.apache.flink.runtime.leaderretrieval.DefaultLeaderRetrievalService [] - 
> Starting DefaultLeaderRetrievalService with 
> KubernetesLeaderRetrievalDriver\{configMapName='mta-flink-dispatcher-leader'}.
> 2020-12-29T06:45:54.230219281Z 2020-12-29 14:45:54,229 DEBUG 
> io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager [] - 
> WebSocket successfully opened
> 2020-12-29T06:45:54.230353912Z 2020-12-29 14:45:54,230 INFO 
> org.apache.flink.runtime.leaderelection.DefaultLeaderElectionService [] - 
> Starting DefaultLeaderElectionService with 
> KubernetesLeaderElectionDriver\{configMapName='mta-flink-resourcemanager-leader'}.
> 2020-12-29T06:45:54.237004177Z 2020-12-29 14:45:54,236 DEBUG 
> io.fabric8.kubernetes.client.extended.leaderelection.LeaderElector [] - 
> Leader changed from null to 6f6479c6-86cc-4d62-84f9-37ff968bd0e5
> 2020-12-29T06:45:54.237024655Z 2020-12-29 14:45:54,236 INFO 
> org.apache.flink.kubernetes.kubeclient.resources.KubernetesLeaderElector [] - 
> New leader elected 6f6479c6-86cc-4d62-84f9-37ff968bd0e5 for 
> mta-flink-restserver-leader.
> 2020-12-29T06:45:54.237027811Z 2020-12-29 14:45:54,236 DEBUG 
> io.fabric8.kubernetes.client.extended.leaderelection.LeaderElector [] - 
> Successfully Acquired leader lease 'ConfigMapLock: default - 
> mta-flink-restserver-leader (6f6479c6-86cc-4d62-84f9-37ff968bd0e5)'
> 2020-12-29T06:45:54.237297376Z 2020-12-29 14:45:54,237 DEBUG 
> org.apache.flink.runtime.leaderelection.DefaultLeaderElectionService [] - 
> Grant leadership to contender http://mta-flink-jobmanager:8081 with session 
> ID 9587e13f-322f-4cd5-9fff-b4941462be0f.
> 2020-12-29T06:45:54.237353551Z 2020-12-29 14:45:54,237 INFO 
> org.apache.flink.runtime.dispatcher.DispatcherRestEndpoint [] - 
> http://mta-flink-jobmanager:8081 was granted leadership with 
> leaderSessionID=9587e13f-322f-4cd5-9fff-b4941462be0f
> 2020-12-29T06:45:54.237440354Z 2020-12-29 14:45:54,237 DEBUG 
> org.apache.flink.runtime.leaderelection.DefaultLeaderElectionService [] - 
> Confirm leader session ID 9587e13f-322f-4cd5-9fff-b4941462be0f for leader 
> http://mta-flink-jobmanager:8081.
> 2020-12-29T06:45:54.254555127Z 2020-12-29 14:45:54,254 DEBUG 
> io.fabric8.kubernetes.client.extended.leaderelection.LeaderElector [] - 
> Leader changed from null to 6f6479c6-86cc-4d62-84f9-37ff968bd0e5
> 2020-12-29T06:45:54.254588299Z 2020-12-29 14:45:54,254 INFO 
> org.apache.flink.kubernetes.kubeclient.resources.KubernetesLeaderElector [] - 
> New leader elected 6f6479c6-86cc-4d62-84f9-37ff968bd0e5 for 
> mta-flink-resourcemanager-leader.
> 2020-12-29T06:45:54.254628053Z 2020-12-29 14:45:54,254 DEBUG 
> io.fabric8.kubernetes.client.extended.leaderelection.LeaderElector [] - 
> Successfully Acquired leader lease 'ConfigMapLock: default 

[jira] [Commented] (FLINK-20321) Get NPE when using AvroDeserializationSchema to deserialize null input

2020-12-29 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17256279#comment-17256279
 ] 

Jark Wu commented on FLINK-20321:
-

I have assigned this issue to you, [~xwang51]. 
Contributions are welcome anytime, [~sampadsaha5]!

> Get NPE when using AvroDeserializationSchema to deserialize null input
> --
>
> Key: FLINK-20321
> URL: https://issues.apache.org/jira/browse/FLINK-20321
> Project: Flink
>  Issue Type: Bug
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile), Table 
> SQL / Ecosystem
>Affects Versions: 1.12.0
>Reporter: Shengkai Fang
>Assignee: Sampad Kumar Saha
>Priority: Major
>  Labels: sprint, starter
> Fix For: 1.13.0
>
>
> You can reproduce the bug by adding the code into the 
> {{AvroDeserializationSchemaTest}}.
> The code follows
> {code:java}
> @Test
>   public void testSpecificRecord2() throws Exception {
>   DeserializationSchema deserializer = 
> AvroDeserializationSchema.forSpecific(Address.class);
>   Address deserializedAddress = deserializer.deserialize(null);
>   assertEquals(null, deserializedAddress);
>   }
> {code}
> Exception stack:
> {code:java}
> java.lang.NullPointerException
>   at 
> org.apache.flink.formats.avro.utils.MutableByteArrayInputStream.setBuffer(MutableByteArrayInputStream.java:43)
>   at 
> org.apache.flink.formats.avro.AvroDeserializationSchema.deserialize(AvroDeserializationSchema.java:131)
>   at 
> org.apache.flink.formats.avro.AvroDeserializationSchemaTest.testSpecificRecord2(AvroDeserializationSchemaTest.java:69)
> {code}
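A null guard before the buffer is touched would avoid this NPE. Below is a hypothetical, simplified sketch; the `deserialize` method here only stands in for `AvroDeserializationSchema.deserialize` and is not Flink's actual fix:

```java
import java.io.ByteArrayInputStream;

public class NullSafeDeserialize {
    // Stand-in for AvroDeserializationSchema.deserialize: short-circuits on
    // null input instead of letting MutableByteArrayInputStream.setBuffer NPE.
    static String deserialize(byte[] message) {
        if (message == null) {
            return null; // a null record deserializes to null
        }
        // Real code would decode Avro here; we just read the bytes back.
        ByteArrayInputStream in = new ByteArrayInputStream(message);
        byte[] buf = new byte[message.length];
        in.read(buf, 0, buf.length);
        return new String(buf);
    }

    public static void main(String[] args) {
        if (deserialize(null) != null) throw new AssertionError();
        if (!"addr".equals(deserialize("addr".getBytes()))) throw new AssertionError();
        System.out.println("ok");
    }
}
```

With such a guard, the test above would pass, since `assertEquals(null, deserializedAddress)` expects a null result for null input.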



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (FLINK-18654) Correct missleading documentation in "Partitioned Scan" section of JDBC connector

2020-12-29 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu reassigned FLINK-18654:
---

Assignee: jiawen xiao

> Correct missleading documentation in "Partitioned Scan" section of JDBC 
> connector
> -
>
> Key: FLINK-18654
> URL: https://issues.apache.org/jira/browse/FLINK-18654
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / JDBC, Documentation, Table SQL / Ecosystem
>Affects Versions: 1.11.0
>Reporter: Jark Wu
>Assignee: jiawen xiao
>Priority: Major
> Fix For: 1.13.0, 1.11.4
>
>
> In 
> https://ci.apache.org/projects/flink/flink-docs-master/dev/table/connectors/jdbc.html#partitioned-scan
> > Notice that scan.partition.lower-bound and scan.partition.upper-bound are 
> > just used to decide the partition stride, not for filtering the rows in 
> > table. So all rows in the table will be partitioned and returned.
> The "not for filtering the rows in table" part is not correct: if partition 
> bounds are defined, only rows within the bound range are scanned. 
> Besides, it might be better to add a practical suggestion, for 
> example: 
> "If it is a batch job, it is also doable to get the max and min values 
> first before submitting the Flink job."
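To make the partition-stride arithmetic concrete, here is a hedged sketch with made-up bounds; the splitting logic is illustrative, and Flink's actual implementation may differ:

```java
public class PartitionedScanSketch {
    public static void main(String[] args) {
        // Hypothetical values for scan.partition.{lower-bound,upper-bound,num}.
        long lower = 1, upper = 100;
        int numPartitions = 4;
        // The bounds decide the stride: 25 ids per partition here.
        long stride = (upper - lower + 1) / numPartitions;
        for (int i = 0; i < numPartitions; i++) {
            long from = lower + i * stride;
            long to = (i == numPartitions - 1) ? upper : from + stride - 1;
            // Each partition would issue roughly:
            //   SELECT ... WHERE partition_column BETWEEN from AND to
            System.out.println("partition " + i + ": [" + from + ", " + to + "]");
        }
    }
}
```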





[jira] [Assigned] (FLINK-20321) Get NPE when using AvroDeserializationSchema to deserialize null input

2020-12-29 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu reassigned FLINK-20321:
---

Assignee: Xue Wang  (was: Sampad Kumar Saha)

> Get NPE when using AvroDeserializationSchema to deserialize null input
> --
>
> Key: FLINK-20321
> URL: https://issues.apache.org/jira/browse/FLINK-20321
> Project: Flink
>  Issue Type: Bug
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile), Table 
> SQL / Ecosystem
>Affects Versions: 1.12.0
>Reporter: Shengkai Fang
>Assignee: Xue Wang
>Priority: Major
>  Labels: sprint, starter
> Fix For: 1.13.0
>
>
> You can reproduce the bug by adding the code into the 
> {{AvroDeserializationSchemaTest}}.
> The code follows
> {code:java}
> @Test
>   public void testSpecificRecord2() throws Exception {
>   DeserializationSchema deserializer = 
> AvroDeserializationSchema.forSpecific(Address.class);
>   Address deserializedAddress = deserializer.deserialize(null);
>   assertEquals(null, deserializedAddress);
>   }
> {code}
> Exception stack:
> {code:java}
> java.lang.NullPointerException
>   at 
> org.apache.flink.formats.avro.utils.MutableByteArrayInputStream.setBuffer(MutableByteArrayInputStream.java:43)
>   at 
> org.apache.flink.formats.avro.AvroDeserializationSchema.deserialize(AvroDeserializationSchema.java:131)
>   at 
> org.apache.flink.formats.avro.AvroDeserializationSchemaTest.testSpecificRecord2(AvroDeserializationSchemaTest.java:69)
> {code}





[jira] [Updated] (FLINK-20026) Jdbc connector support regular expression

2020-12-29 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20026?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu updated FLINK-20026:

Component/s: Table SQL / Ecosystem

> Jdbc connector support regular expression
> -
>
> Key: FLINK-20026
> URL: https://issues.apache.org/jira/browse/FLINK-20026
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / JDBC, Table SQL / Ecosystem
>Affects Versions: 1.11.2
>Reporter: Peihui He
>Priority: Major
> Fix For: 1.13.0
>
>
> When there is a large amount of data, we divide the tables by month.
> [https://ci.apache.org/projects/flink/flink-docs-release-1.11/dev/table/connectors/jdbc.html#table-name]
> so it's nice to support regular expression for table-name.





[jira] [Commented] (FLINK-20026) Jdbc connector support regular expression

2020-12-29 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17256278#comment-17256278
 ] 

Jark Wu commented on FLINK-20026:
-

I'm fine with this feature. But we should clarify some validations, e.g. such 
a table can't be used as a lookup table or a sink table. Besides, maybe we can 
add a 
{{table-pattern}} option to accept such regular expressions, just like 
{{topic-pattern}} in the Kafka connector. 
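A minimal sketch of matching monthly-sharded table names with such a pattern; the `orders_\d{6}` regex and the table names are invented for illustration, and `table-pattern` is only a proposed option name:

```java
import java.util.Arrays;
import java.util.List;
import java.util.regex.Pattern;
import java.util.stream.Collectors;

public class TablePatternSketch {
    public static void main(String[] args) {
        // Monthly-sharded tables the hypothetical table-pattern should select.
        List<String> tables = Arrays.asList(
                "orders_202011", "orders_202012", "orders_bak", "users");
        Pattern tablePattern = Pattern.compile("orders_\\d{6}");
        List<String> matched = tables.stream()
                .filter(t -> tablePattern.matcher(t).matches())
                .collect(Collectors.toList());
        System.out.println(matched); // [orders_202011, orders_202012]
    }
}
```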

> Jdbc connector support regular expression
> -
>
> Key: FLINK-20026
> URL: https://issues.apache.org/jira/browse/FLINK-20026
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / JDBC
>Affects Versions: 1.11.2
>Reporter: Peihui He
>Priority: Major
> Fix For: 1.13.0
>
>
> When there is a large amount of data, we divide the tables by month.
> [https://ci.apache.org/projects/flink/flink-docs-release-1.11/dev/table/connectors/jdbc.html#table-name]
> so it's nice to support regular expression for table-name.





[jira] [Updated] (FLINK-20810) abstract JdbcSchema structure and derive it from JdbcDialect.

2020-12-29 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu updated FLINK-20810:

Component/s: Table SQL / Ecosystem

>  abstract JdbcSchema structure and derive it from JdbcDialect.
> --
>
> Key: FLINK-20810
> URL: https://issues.apache.org/jira/browse/FLINK-20810
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / JDBC, Table SQL / Ecosystem
>Affects Versions: 1.11.1, 1.11.2, 1.11.3
>Reporter: jiawen xiao
>Priority: Major
> Fix For: 1.13.0
>
>
> related https://issues.apache.org/jira/browse/FLINK-20386?filter=12349963





[jira] [Updated] (FLINK-20386) ClassCastException when lookup join a JDBC table on INT UNSIGNED column

2020-12-29 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu updated FLINK-20386:

Fix Version/s: 1.13.0

> ClassCastException when lookup join a JDBC table on INT UNSIGNED column
> ---
>
> Key: FLINK-20386
> URL: https://issues.apache.org/jira/browse/FLINK-20386
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / JDBC, Table SQL / Ecosystem
>Affects Versions: 1.12.0, 1.11.2
>Reporter: Jark Wu
>Assignee: jiawen xiao
>Priority: Major
> Fix For: 1.13.0
>
>  Time Spent: 336h
>  Remaining Estimate: 0h
>
> The primary key of the MySQL is an INT UNSIGNED column, but declared INT in 
> Flink. 
> I know the 
> [docs|https://ci.apache.org/projects/flink/flink-docs-release-1.11/dev/table/connectors/jdbc.html#data-type-mapping]
>  say it should be declared BIGINT in Flink; however, it would be better not 
> to fail the job. 
> At least, the exception is hard for users to understand. We can also check 
> the schema before starting the job. 
> {code}
> java.lang.ClassCastException: java.lang.Long cannot be cast to 
> java.lang.Integer
>   at 
> org.apache.flink.table.data.GenericRowData.getInt(GenericRowData.java:149) 
> ~[flink-table-blink_2.11-1.11-vvr-2.1.1-SNAPSHOT.jar:1.11-vvr-2.1.1-SNAPSHOT]
>   at JoinTableFuncCollector$6460.collect(Unknown Source) ~[?:?]
>   at 
> org.apache.flink.table.functions.TableFunction.collect(TableFunction.java:203)
>  ~[flink-table-blink_2.11-1.11-vvr-2.1.1-SNAPSHOT.jar:1.11-vvr-2.1.1-SNAPSHOT]
>   at 
> org.apache.flink.connector.jdbc.table.JdbcRowDataLookupFunction.eval(JdbcRowDataLookupFunction.java:162)
>  ~[?:?]
> {code}
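The cast failure can be reproduced outside Flink. Below is a hedged illustration, assuming (as the ticket suggests) that the JDBC driver hands back a `java.lang.Long` for an INT UNSIGNED column:

```java
public class UnsignedIntCastSketch {
    public static void main(String[] args) {
        // Assumption: the driver returns a Long for an INT UNSIGNED column;
        // 4294967295 is INT UNSIGNED's maximum value.
        Object fromDriver = 4294967295L;

        // Declaring the Flink column BIGINT maps it to Long: works.
        long asBigint = (Long) fromDriver;

        // Declaring it INT makes the runtime cast to Integer: fails.
        boolean failed = false;
        try {
            int asInt = (Integer) fromDriver;
        } catch (ClassCastException e) {
            failed = true; // java.lang.Long cannot be cast to java.lang.Integer
        }
        System.out.println("bigint ok=" + (asBigint == 4294967295L)
                + ", int failed=" + failed);
    }
}
```

This is why a schema check before the job starts (or a clearer error message) would help users more than the raw `ClassCastException`.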





[GitHub] [flink] xintongsong commented on pull request #8952: [FLINK-10868][flink-runtime] Add failure rater for resource manager

2020-12-29 Thread GitBox


xintongsong commented on pull request #8952:
URL: https://github.com/apache/flink/pull/8952#issuecomment-752320806


   One more thing: we need to update the PR w.r.t. the community's new 
formatting rules.
   Please check this 
[thread](http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/ANNOUNCE-New-formatting-rules-are-now-in-effect-td47441.html).



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] xintongsong commented on pull request #8952: [FLINK-10868][flink-runtime] Add failure rater for resource manager

2020-12-29 Thread GitBox


xintongsong commented on pull request #8952:
URL: https://github.com/apache/flink/pull/8952#issuecomment-752320177


   @HuangZhenQiu,
   Sorry for the late response.
   I don't think we need to split the 100 LoC into `ThresholdMeter`, 
`TimestampBasedThresholdMeter`, and `FailureRater`.
   On the contrary, I think we should not have the concepts `FailureRater` and 
`TimestampBasedThresholdMeter` at all.
   Please see my other comment 
[here](https://github.com/apache/flink/pull/8952#discussion_r549927391).







[GitHub] [flink] xintongsong commented on a change in pull request #8952: [FLINK-10868][flink-runtime] Add failure rater for resource manager

2020-12-29 Thread GitBox


xintongsong commented on a change in pull request #8952:
URL: https://github.com/apache/flink/pull/8952#discussion_r549927391



##
File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/failurerate/TimestampBasedFailureRater.java
##
@@ -0,0 +1,93 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.failurerate;
+
+import org.apache.flink.annotation.VisibleForTesting;
+import org.apache.flink.api.common.time.Time;
+import org.apache.flink.runtime.execution.ThresholdExceedException;
+
+import java.util.ArrayDeque;
+import java.util.Queue;
+import java.util.function.Supplier;
+
+/**
+ * A timestamp queue based failure rater implementation.
+ */
+public class TimestampBasedFailureRater implements ThresholdMeter {

Review comment:
   I believe the three layers you mean are the following.
   1. Meter (interface)
   2. ThresholdMeter (interface)
   3. TimestampBasedThresholdMeter (implementation)
   
   My question is: why do we need 2) and 3) separately? Do we really need the 
concept `TimestampBasedThresholdMeter`? What if we consider `ThresholdMeter` an 
implementation of `Meter` and have the following two layers?
   1. Meter (interface)
   2. ThresholdMeter (implementation)
   
   I'm proposing this because:
   * There's no other implementation of `ThresholdMeter` except for 
`TimestampBasedThresholdMeter`.
   * Looking into the implementation of `TimestampBasedFailureRater`, there's 
nothing specifically related to *Failure*, and shouldn't the concept *Meter* 
already be *TimestampBased*?
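For what the flattened two-layer proposal could look like, here is an illustrative sketch of `ThresholdMeter` as a concrete sliding-window implementation; the names and signatures are hypothetical, not Flink's actual API:

```java
import java.util.ArrayDeque;

// Sketch: a concrete meter that records event timestamps and reports whether
// the count within a sliding window exceeds a threshold.
public class ThresholdMeterSketch {
    private final ArrayDeque<Long> timestamps = new ArrayDeque<>();
    private final long windowMillis;
    private final int maxEvents;

    public ThresholdMeterSketch(long windowMillis, int maxEvents) {
        this.windowMillis = windowMillis;
        this.maxEvents = maxEvents;
    }

    public void markEvent(long nowMillis) {
        timestamps.addLast(nowMillis);
    }

    public boolean exceedsThreshold(long nowMillis) {
        // Evict timestamps that fell out of the sliding window before counting.
        while (!timestamps.isEmpty()
                && timestamps.peekFirst() <= nowMillis - windowMillis) {
            timestamps.pollFirst();
        }
        return timestamps.size() > maxEvents;
    }

    public static void main(String[] args) {
        ThresholdMeterSketch meter = new ThresholdMeterSketch(10_000, 2);
        meter.markEvent(0);
        meter.markEvent(1_000);
        System.out.println(meter.exceedsThreshold(2_000));  // 2 events: not over
        meter.markEvent(2_000);
        System.out.println(meter.exceedsThreshold(3_000));  // 3 events: over
        System.out.println(meter.exceedsThreshold(20_000)); // window expired
    }
}
```

Note the timestamp queue here is exactly the kind of detail that makes a separate `TimestampBased*` interface layer unnecessary: it is an implementation choice, not a contract.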
   









[GitHub] [flink] xintongsong commented on a change in pull request #8952: [FLINK-10868][flink-runtime] Add failure rater for resource manager

2020-12-29 Thread GitBox


xintongsong commented on a change in pull request #8952:
URL: https://github.com/apache/flink/pull/8952#discussion_r549927391



##
File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/failurerate/TimestampBasedFailureRater.java
##
@@ -0,0 +1,93 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.failurerate;
+
+import org.apache.flink.annotation.VisibleForTesting;
+import org.apache.flink.api.common.time.Time;
+import org.apache.flink.runtime.execution.ThresholdExceedException;
+
+import java.util.ArrayDeque;
+import java.util.Queue;
+import java.util.function.Supplier;
+
+/**
+ * A timestamp queue based failure rater implementation.
+ */
+public class TimestampBasedFailureRater implements ThresholdMeter {

Review comment:
   I believe the three layers you mean are the following.
   1. Meter (interface)
   2. ThresholdMeter (interface)
   3. TimestampBasedThresholdMeter (implementation)
   
   My question is: why do we need 2) and 3) separately? What if we consider 
`ThresholdMeter` an implementation of `Meter` and have the following two layers?
   1. Meter (interface)
   2. ThresholdMeter (implementation)
   
   I'm proposing this because:
   * There's no other implementation of `ThresholdMeter` except for 
`TimestampBasedThresholdMeter`.
   * Looking into the implementation of `TimestampBasedFailureRater`, there's 
nothing specifically related to *Failure*, and shouldn't the concept *Meter* 
already be *TimestampBased*?
   









[GitHub] [flink] leonardBang commented on pull request #14280: [FLINK-20110][table-planner] Support 'merge' method for first_value a…

2020-12-29 Thread GitBox


leonardBang commented on pull request #14280:
URL: https://github.com/apache/flink/pull/14280#issuecomment-752319197


   Could you help rebase this PR, @wangxlong? The master branch merged a big 
formatting PR, which caused the conflict.







[GitHub] [flink] zoucao closed pull request #14479: [FLINK-20505][yarn]let yarn descriptor support negative file length

2020-12-29 Thread GitBox


zoucao closed pull request #14479:
URL: https://github.com/apache/flink/pull/14479


   







[jira] [Assigned] (FLINK-17855) UDF with parameter Array(Row) can not work

2020-12-29 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17855?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu reassigned FLINK-17855:
---

Assignee: Nicholas Jiang

> UDF with parameter Array(Row) can not work
> --
>
> Key: FLINK-17855
> URL: https://issues.apache.org/jira/browse/FLINK-17855
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Reporter: Jingsong Lee
>Assignee: Nicholas Jiang
>Priority: Major
>  Labels: sprint
> Fix For: 1.13.0
>
>
> {code:java}
> public String eval(Row[] rows) {
>   ...
> }
> {code}
> This cannot work.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (FLINK-20800) HadoopViewFileSystemTruncateTest failure caused by AssumptionViolatedException

2020-12-29 Thread zlzhang0122 (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20800?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17255878#comment-17255878
 ] 

zlzhang0122 edited comment on FLINK-20800 at 12/30/20, 3:35 AM:


-I think this problem can be solved by adding a config to fsViewConf.-


was (Author: zlzhang0122):
I think this problem can be solved by adding a config to fsViewConf.

> HadoopViewFileSystemTruncateTest failure caused by AssumptionViolatedException
> --
>
> Key: FLINK-20800
> URL: https://issues.apache.org/jira/browse/FLINK-20800
> Project: Flink
>  Issue Type: Bug
>  Components: FileSystems
>Affects Versions: 1.12.0
>Reporter: zlzhang0122
>Priority: Major
>
> [Build|https://dev.azure.com/mapohl/flink/_build/results?buildId=118=results]
>  failed due to HadoopViewFileSystemTruncateTest caused by a 
> AssumptionViolatedException:
> {code:java}
> // code placeholder
> org.junit.AssumptionViolatedException: got: , expected: is 
> org.junit.AssumptionViolatedException: got: , expected: is 
>  at org.junit.Assume.assumeThat(Assume.java:95) at 
> org.junit.Assume.assumeTrue(Assume.java:41) at 
> org.apache.flink.runtime.fs.hdfs.HadoopViewFileSystemTruncateTest.testHadoopVersion(HadoopViewFileSystemTruncateTest.java:71)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) 
> at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) 
> at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48) at 
> org.junit.rules.RunRules.evaluate(RunRules.java:20) at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:363) at 
> org.junit.runner.JUnitCore.run(JUnitCore.java:137) at 
> com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68)
>  at 
> com.intellij.rt.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:33)
>  at 
> com.intellij.rt.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:230)
>  at com.intellij.rt.junit.JUnitStarter.main(JUnitStarter.java:58)Process 
> finished with exit code 0
> {code}
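The failure above is an aborted assumption, not an assertion failure. A minimal, self-contained sketch of the mechanism (illustrative names; not the actual JUnit or Flink test code) looks like this:

```java
// Illustrative sketch: an "assumption" check that skips a test when the
// runtime Hadoop version is too old. In JUnit this surfaces as an
// AssumptionViolatedException (a skip) rather than a test failure.
public class VersionAssumptionSketch {
    static class AssumptionViolated extends RuntimeException {
        AssumptionViolated(String msg) { super(msg); }
    }

    /** Mirrors JUnit's assumeTrue: abort (skip) instead of fail when false. */
    static void assumeTrue(boolean condition, String message) {
        if (!condition) {
            throw new AssumptionViolated(message);
        }
    }

    /** FileSystem#truncate was introduced in Hadoop 2.7. */
    static boolean supportsTruncate(int major, int minor) {
        return major > 2 || (major == 2 && minor >= 7);
    }

    public static void main(String[] args) {
        // On an older Hadoop this throws, and the runner reports a skip.
        assumeTrue(supportsTruncate(2, 7), "Hadoop without truncate support");
        System.out.println("assumption held; test body would run");
    }
}
```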





[jira] [Closed] (FLINK-20505) Yarn provided lib does not work with http paths.

2020-12-29 Thread Xintong Song (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20505?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xintong Song closed FLINK-20505.

Resolution: Won't Do

As discussed, we will not support HTTP paths for the shared libs.

Instead, we will explore supporting HTTP paths for ship files/archives, in 
FLINK-20811.

> Yarn provided lib does not work with http paths.
> 
>
> Key: FLINK-20505
> URL: https://issues.apache.org/jira/browse/FLINK-20505
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / YARN
>Affects Versions: 1.12.0, 1.11.2
>Reporter: Xintong Song
>Assignee: zoucao
>Priority: Major
>  Labels: pull-request-available
>
> If an http path is used for provided lib, the following exception will be 
> thrown on the resource manager side:
> {code:java}
> 2020-12-04 17:01:28.955 ERROR org.apache.flink.yarn.YarnResourceManager - 
> Could not start TaskManager in container containerXX.
> org.apache.flink.util.FlinkException: Error to parse 
> YarnLocalResourceDescriptor from YarnLocalResourceDescriptor{key=X.jar, 
> path=https://XXX.jar, size=-1, modificationTime=0, visibility=APPLICATION}
>     at 
> org.apache.flink.yarn.YarnLocalResourceDescriptor.fromString(YarnLocalResourceDescriptor.java:99)
>     at 
> org.apache.flink.yarn.Utils.decodeYarnLocalResourceDescriptorListFromString(Utils.java:721)
>     at org.apache.flink.yarn.Utils.createTaskExecutorContext(Utils.java:626)
>     at 
> org.apache.flink.yarn.YarnResourceManager.getOrCreateContainerLaunchContext(YarnResourceManager.java:746)
>     at 
> org.apache.flink.yarn.YarnResourceManager.createTaskExecutorLaunchContext(YarnResourceManager.java:726)
>     at 
> org.apache.flink.yarn.YarnResourceManager.startTaskExecutorInContainer(YarnResourceManager.java:500)
>     at 
> org.apache.flink.yarn.YarnResourceManager.onContainersOfResourceAllocated(YarnResourceManager.java:455)
>     at 
> org.apache.flink.yarn.YarnResourceManager.lambda$onContainersAllocated$1(YarnResourceManager.java:415)
> {code}
> The problem is that `HttpFileSystem#getFileStatus` returns a file status with 
> length `-1`, while `YarnLocalResourceDescriptor` does not accept the 
> negative file length.
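A minimal sketch of that failure mode, assuming a hypothetical descriptor format and parser (not Flink's actual `YarnLocalResourceDescriptor` code): a serialized `size=-1` coming from an HTTP file status fails a non-negative check during parsing.

```java
/**
 * Illustrative sketch only. Parses "key=...,path=...,size=..." and rejects a
 * negative size, mirroring how a descriptor serialized with size=-1 (as an
 * HTTP file status would produce) fails to parse on the resource manager side.
 */
public class ResourceDescriptorSketch {
    final String key;
    final String path;
    final long size;

    ResourceDescriptorSketch(String key, String path, long size) {
        this.key = key;
        this.path = path;
        this.size = size;
    }

    static ResourceDescriptorSketch fromString(String s) {
        String[] parts = s.split(",");
        String key = parts[0].split("=", 2)[1];
        String path = parts[1].split("=", 2)[1];
        long size = Long.parseLong(parts[2].split("=", 2)[1]);
        if (size < 0) {
            // The reported bug: a negative length is treated as malformed input.
            throw new IllegalArgumentException("Error to parse descriptor: negative size " + size);
        }
        return new ResourceDescriptorSketch(key, path, size);
    }
}
```

With this shape, `fromString("key=lib.jar,path=https://host/lib.jar,size=-1")` throws, while any non-negative size parses normally.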





[jira] [Updated] (FLINK-20798) Using PVC as high-availability.storageDir could not work

2020-12-29 Thread Yang Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20798?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Wang updated FLINK-20798:
--
Summary: Using PVC as high-availability.storageDir could not work  (was: 
Service temporarily unavailable due to an ongoing leader election. Please 
refresh.)

> Using PVC as high-availability.storageDir could not work
> 
>
> Key: FLINK-20798
> URL: https://issues.apache.org/jira/browse/FLINK-20798
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / Kubernetes
>Affects Versions: 1.12.0
> Environment: FLINK 1.12.0
>Reporter: hayden zhou
>Priority: Major
>
> I deployed Flink on Kubernetes using a PVC as the high-availability 
> storageDir. From the JobManager logs, the leader election succeeded, but the 
> web UI keeps showing that the election is in progress.
>  
> The following are the JobManager logs:
> ```
> 2020-12-29T06:45:54.177850394Z 2020-12-29 14:45:54,177 DEBUG 
> io.fabric8.kubernetes.client.extended.leaderelection.LeaderElector [] - 
> Leader election started
> 2020-12-29T06:45:54.177855303Z 2020-12-29 14:45:54,177 DEBUG 
> io.fabric8.kubernetes.client.extended.leaderelection.LeaderElector [] - 
> Attempting to acquire leader lease 'ConfigMapLock: default - 
> mta-flink-resourcemanager-leader (6f6479c6-86cc-4d62-84f9-37ff968bd0e5)'...
> 2020-12-29T06:45:54.178668055Z 2020-12-29 14:45:54,178 DEBUG 
> io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager [] - 
> WebSocket successfully opened
> 2020-12-29T06:45:54.178895963Z 2020-12-29 14:45:54,178 INFO 
> org.apache.flink.runtime.leaderretrieval.DefaultLeaderRetrievalService [] - 
> Starting DefaultLeaderRetrievalService with 
> KubernetesLeaderRetrievalDriver\{configMapName='mta-flink-resourcemanager-leader'}.
> 2020-12-29T06:45:54.179327491Z 2020-12-29 14:45:54,179 DEBUG 
> io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager [] - 
> Connecting websocket ... 
> io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager@6d303498
> 2020-12-29T06:45:54.230081993Z 2020-12-29 14:45:54,229 DEBUG 
> io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager [] - 
> WebSocket successfully opened
> 2020-12-29T06:45:54.230202329Z 2020-12-29 14:45:54,230 INFO 
> org.apache.flink.runtime.leaderretrieval.DefaultLeaderRetrievalService [] - 
> Starting DefaultLeaderRetrievalService with 
> KubernetesLeaderRetrievalDriver\{configMapName='mta-flink-dispatcher-leader'}.
> 2020-12-29T06:45:54.230219281Z 2020-12-29 14:45:54,229 DEBUG 
> io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager [] - 
> WebSocket successfully opened
> 2020-12-29T06:45:54.230353912Z 2020-12-29 14:45:54,230 INFO 
> org.apache.flink.runtime.leaderelection.DefaultLeaderElectionService [] - 
> Starting DefaultLeaderElectionService with 
> KubernetesLeaderElectionDriver\{configMapName='mta-flink-resourcemanager-leader'}.
> 2020-12-29T06:45:54.237004177Z 2020-12-29 14:45:54,236 DEBUG 
> io.fabric8.kubernetes.client.extended.leaderelection.LeaderElector [] - 
> Leader changed from null to 6f6479c6-86cc-4d62-84f9-37ff968bd0e5
> 2020-12-29T06:45:54.237024655Z 2020-12-29 14:45:54,236 INFO 
> org.apache.flink.kubernetes.kubeclient.resources.KubernetesLeaderElector [] - 
> New leader elected 6f6479c6-86cc-4d62-84f9-37ff968bd0e5 for 
> mta-flink-restserver-leader.
> 2020-12-29T06:45:54.237027811Z 2020-12-29 14:45:54,236 DEBUG 
> io.fabric8.kubernetes.client.extended.leaderelection.LeaderElector [] - 
> Successfully Acquired leader lease 'ConfigMapLock: default - 
> mta-flink-restserver-leader (6f6479c6-86cc-4d62-84f9-37ff968bd0e5)'
> 2020-12-29T06:45:54.237297376Z 2020-12-29 14:45:54,237 DEBUG 
> org.apache.flink.runtime.leaderelection.DefaultLeaderElectionService [] - 
> Grant leadership to contender http://mta-flink-jobmanager:8081 with session 
> ID 9587e13f-322f-4cd5-9fff-b4941462be0f.
> 2020-12-29T06:45:54.237353551Z 2020-12-29 14:45:54,237 INFO 
> org.apache.flink.runtime.dispatcher.DispatcherRestEndpoint [] - 
> http://mta-flink-jobmanager:8081 was granted leadership with 
> leaderSessionID=9587e13f-322f-4cd5-9fff-b4941462be0f
> 2020-12-29T06:45:54.237440354Z 2020-12-29 14:45:54,237 DEBUG 
> org.apache.flink.runtime.leaderelection.DefaultLeaderElectionService [] - 
> Confirm leader session ID 9587e13f-322f-4cd5-9fff-b4941462be0f for leader 
> http://mta-flink-jobmanager:8081.
> 2020-12-29T06:45:54.254555127Z 2020-12-29 14:45:54,254 DEBUG 
> io.fabric8.kubernetes.client.extended.leaderelection.LeaderElector [] - 
> Leader changed from null to 6f6479c6-86cc-4d62-84f9-37ff968bd0e5
> 2020-12-29T06:45:54.254588299Z 2020-12-29 14:45:54,254 INFO 
> org.apache.flink.kubernetes.kubeclient.resources.KubernetesLeaderElector [] - 
> New leader elected 6f6479c6-86cc-4d62-84f9-37ff968bd0e5 for 
> mta-flink-resourcemanager-leader.
> 2020-12-29T06:45:54.254628053Z 2020-12-29 14:45:54,254 DEBUG 
> 

[jira] [Created] (FLINK-20811) Support HTTP paths for yarn ship files/archives

2020-12-29 Thread Xintong Song (Jira)
Xintong Song created FLINK-20811:


 Summary: Support HTTP paths for yarn ship files/archives
 Key: FLINK-20811
 URL: https://issues.apache.org/jira/browse/FLINK-20811
 Project: Flink
  Issue Type: New Feature
  Components: Deployment / YARN
Reporter: Xintong Song


Flink's Yarn integration supports shipping workload-specific local 
files/directories/archives to the Yarn cluster.

As discussed in FLINK-20505, it would be helpful to support directly 
downloading contents from HTTP paths to the Yarn cluster, so that users won't 
need to first download the contents locally and then upload them to the Yarn 
cluster.

 





[jira] [Closed] (FLINK-20797) can flink on k8s use pv using NFS and pvc as the hight avalibility storagedir

2020-12-29 Thread Yang Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20797?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Wang closed FLINK-20797.
-
Resolution: Duplicate

> can flink on k8s use pv using NFS and pvc as the hight avalibility storagedir
> -
>
> Key: FLINK-20797
> URL: https://issues.apache.org/jira/browse/FLINK-20797
> Project: Flink
>  Issue Type: New Feature
>  Components: Client / Job Submission
> Environment: FLINK 1.12.0
>  
>Reporter: hayden zhou
>Priority: Major
>
> I want to deploy Flink on k8s in HA mode without deploying an HDFS cluster. 
> I have an NFS server, so I created a PV that uses NFS as the backend storage 
> and a PVC for the deployment to mount.
> This is my Flink ConfigMap:
> ```
> kubernetes.cluster-id: mta-flink
>  high-availability: org.apache.flink.kubernetes.highavailability.KubernetesHaServicesFactory
>  high-availability.storageDir: file:///opt/flink/nfs/ha
> ```
> and this is my jobmanager yaml file:
> ```
> volumeMounts:
>  - name: flink-config-volume
>  mountPath: /opt/flink/conf
>  - name: flink-nfs
>  mountPath: /opt/flink/nfs
>  securityContext:
>  runAsUser:  # refers to user _flink_ from official flink image, change 
> if necessary
>  #fsGroup: 
>  volumes:
>  - name: flink-config-volume
>  configMap:
>  name: mta-flink-config
>  items:
>  - key: flink-conf.yaml
>  path: flink-conf.yaml
>  - key: log4j-console.properties
>  path: log4j-console.properties
>  - name: flink-nfs
>  persistentVolumeClaim:
>  claimName: mta-flink-nfs-pvc
> ```
> It can be deployed successfully, but if I browse the jobmanager:8081 web UI, 
> I get the result below:
> ```
> {"errors": ["Service temporarily unavailable due to an ongoing leader 
> election. Please refresh."]}
> ```
>  
> Can a PVC be used as `high-availability.storageDir`? If it can, how do I fix 
> this error?





[jira] [Commented] (FLINK-20797) can flink on k8s use pv using NFS and pvc as the hight avalibility storagedir

2020-12-29 Thread Yang Wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17256256#comment-17256256
 ] 

Yang Wang commented on FLINK-20797:
---

I will close this ticket; let's keep the discussion under FLINK-20798.

> can flink on k8s use pv using NFS and pvc as the hight avalibility storagedir
> -
>
> Key: FLINK-20797
> URL: https://issues.apache.org/jira/browse/FLINK-20797
> Project: Flink
>  Issue Type: New Feature
>  Components: Client / Job Submission
> Environment: FLINK 1.12.0
>  
>Reporter: hayden zhou
>Priority: Major
>
> I want to deploy Flink on k8s in HA mode without deploying an HDFS cluster. 
> I have an NFS server, so I created a PV that uses NFS as the backend storage 
> and a PVC for the deployment to mount.
> This is my Flink ConfigMap:
> ```
> kubernetes.cluster-id: mta-flink
>  high-availability: org.apache.flink.kubernetes.highavailability.KubernetesHaServicesFactory
>  high-availability.storageDir: file:///opt/flink/nfs/ha
> ```
> and this is my jobmanager yaml file:
> ```
> volumeMounts:
>  - name: flink-config-volume
>  mountPath: /opt/flink/conf
>  - name: flink-nfs
>  mountPath: /opt/flink/nfs
>  securityContext:
>  runAsUser:  # refers to user _flink_ from official flink image, change 
> if necessary
>  #fsGroup: 
>  volumes:
>  - name: flink-config-volume
>  configMap:
>  name: mta-flink-config
>  items:
>  - key: flink-conf.yaml
>  path: flink-conf.yaml
>  - key: log4j-console.properties
>  path: log4j-console.properties
>  - name: flink-nfs
>  persistentVolumeClaim:
>  claimName: mta-flink-nfs-pvc
> ```
> It can be deployed successfully, but if I browse the jobmanager:8081 web UI, 
> I get the result below:
> ```
> {"errors": ["Service temporarily unavailable due to an ongoing leader 
> election. Please refresh."]}
> ```
>  
> Can a PVC be used as `high-availability.storageDir`? If it can, how do I fix 
> this error?





[jira] [Commented] (FLINK-20797) can flink on k8s use pv using NFS and pvc as the hight avalibility storagedir

2020-12-29 Thread Yang Wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17256255#comment-17256255
 ] 

Yang Wang commented on FLINK-20797:
---

Mounting a PVC to replace the distributed storage for HA should work. Please 
make sure you mount the same ReadWriteMany PVC on all the JobManager/TaskManager 
pods, so that it behaves like a distributed storage.
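For reference, a minimal PVC sketch with the access mode mentioned above. The storage size and the commented StorageClass are assumptions and must match your cluster's NFS-backed PV; the claim name is the one used in the yaml quoted below.

```
# Hypothetical PVC sketch: accessModes must include ReadWriteMany so that all
# JobManager/TaskManager pods can mount the same volume concurrently.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mta-flink-nfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi        # assumption: pick a size appropriate for HA metadata
  # storageClassName: nfs  # assumption: an NFS-backed StorageClass exists
```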

> can flink on k8s use pv using NFS and pvc as the hight avalibility storagedir
> -
>
> Key: FLINK-20797
> URL: https://issues.apache.org/jira/browse/FLINK-20797
> Project: Flink
>  Issue Type: New Feature
>  Components: Client / Job Submission
> Environment: FLINK 1.12.0
>  
>Reporter: hayden zhou
>Priority: Major
>
> I want to deploy Flink on k8s in HA mode without deploying an HDFS cluster. 
> I have an NFS server, so I created a PV that uses NFS as the backend storage 
> and a PVC for the deployment to mount.
> This is my Flink ConfigMap:
> ```
> kubernetes.cluster-id: mta-flink
>  high-availability: org.apache.flink.kubernetes.highavailability.KubernetesHaServicesFactory
>  high-availability.storageDir: file:///opt/flink/nfs/ha
> ```
> and this is my jobmanager yaml file:
> ```
> volumeMounts:
>  - name: flink-config-volume
>  mountPath: /opt/flink/conf
>  - name: flink-nfs
>  mountPath: /opt/flink/nfs
>  securityContext:
>  runAsUser:  # refers to user _flink_ from official flink image, change 
> if necessary
>  #fsGroup: 
>  volumes:
>  - name: flink-config-volume
>  configMap:
>  name: mta-flink-config
>  items:
>  - key: flink-conf.yaml
>  path: flink-conf.yaml
>  - key: log4j-console.properties
>  path: log4j-console.properties
>  - name: flink-nfs
>  persistentVolumeClaim:
>  claimName: mta-flink-nfs-pvc
> ```
> It can be deployed successfully, but if I browse the jobmanager:8081 web UI, 
> I get the result below:
> ```
> {"errors": ["Service temporarily unavailable due to an ongoing leader 
> election. Please refresh."]}
> ```
>  
> Can a PVC be used as `high-availability.storageDir`? If it can, how do I fix 
> this error?





[jira] [Commented] (FLINK-20798) Service temporarily unavailable due to an ongoing leader election. Please refresh.

2020-12-29 Thread Yang Wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17256254#comment-17256254
 ] 

Yang Wang commented on FLINK-20798:
---

Could you share the complete logs for JobManager? From the pieces of logs you 
provided, it seems that the rest endpoint is granted leadership successfully.

> Service temporarily unavailable due to an ongoing leader election. Please 
> refresh.
> --
>
> Key: FLINK-20798
> URL: https://issues.apache.org/jira/browse/FLINK-20798
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / Kubernetes
>Affects Versions: 1.12.0
> Environment: FLINK 1.12.0
>Reporter: hayden zhou
>Priority: Major
>
> I deployed Flink on Kubernetes using a PVC as the high-availability 
> storageDir. From the JobManager logs, the leader election succeeded, but the 
> web UI keeps showing that the election is in progress.
>  
> The following are the JobManager logs:
> ```
> 2020-12-29T06:45:54.177850394Z 2020-12-29 14:45:54,177 DEBUG 
> io.fabric8.kubernetes.client.extended.leaderelection.LeaderElector [] - 
> Leader election started
> 2020-12-29T06:45:54.177855303Z 2020-12-29 14:45:54,177 DEBUG 
> io.fabric8.kubernetes.client.extended.leaderelection.LeaderElector [] - 
> Attempting to acquire leader lease 'ConfigMapLock: default - 
> mta-flink-resourcemanager-leader (6f6479c6-86cc-4d62-84f9-37ff968bd0e5)'...
> 2020-12-29T06:45:54.178668055Z 2020-12-29 14:45:54,178 DEBUG 
> io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager [] - 
> WebSocket successfully opened
> 2020-12-29T06:45:54.178895963Z 2020-12-29 14:45:54,178 INFO 
> org.apache.flink.runtime.leaderretrieval.DefaultLeaderRetrievalService [] - 
> Starting DefaultLeaderRetrievalService with 
> KubernetesLeaderRetrievalDriver\{configMapName='mta-flink-resourcemanager-leader'}.
> 2020-12-29T06:45:54.179327491Z 2020-12-29 14:45:54,179 DEBUG 
> io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager [] - 
> Connecting websocket ... 
> io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager@6d303498
> 2020-12-29T06:45:54.230081993Z 2020-12-29 14:45:54,229 DEBUG 
> io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager [] - 
> WebSocket successfully opened
> 2020-12-29T06:45:54.230202329Z 2020-12-29 14:45:54,230 INFO 
> org.apache.flink.runtime.leaderretrieval.DefaultLeaderRetrievalService [] - 
> Starting DefaultLeaderRetrievalService with 
> KubernetesLeaderRetrievalDriver\{configMapName='mta-flink-dispatcher-leader'}.
> 2020-12-29T06:45:54.230219281Z 2020-12-29 14:45:54,229 DEBUG 
> io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager [] - 
> WebSocket successfully opened
> 2020-12-29T06:45:54.230353912Z 2020-12-29 14:45:54,230 INFO 
> org.apache.flink.runtime.leaderelection.DefaultLeaderElectionService [] - 
> Starting DefaultLeaderElectionService with 
> KubernetesLeaderElectionDriver\{configMapName='mta-flink-resourcemanager-leader'}.
> 2020-12-29T06:45:54.237004177Z 2020-12-29 14:45:54,236 DEBUG 
> io.fabric8.kubernetes.client.extended.leaderelection.LeaderElector [] - 
> Leader changed from null to 6f6479c6-86cc-4d62-84f9-37ff968bd0e5
> 2020-12-29T06:45:54.237024655Z 2020-12-29 14:45:54,236 INFO 
> org.apache.flink.kubernetes.kubeclient.resources.KubernetesLeaderElector [] - 
> New leader elected 6f6479c6-86cc-4d62-84f9-37ff968bd0e5 for 
> mta-flink-restserver-leader.
> 2020-12-29T06:45:54.237027811Z 2020-12-29 14:45:54,236 DEBUG 
> io.fabric8.kubernetes.client.extended.leaderelection.LeaderElector [] - 
> Successfully Acquired leader lease 'ConfigMapLock: default - 
> mta-flink-restserver-leader (6f6479c6-86cc-4d62-84f9-37ff968bd0e5)'
> 2020-12-29T06:45:54.237297376Z 2020-12-29 14:45:54,237 DEBUG 
> org.apache.flink.runtime.leaderelection.DefaultLeaderElectionService [] - 
> Grant leadership to contender http://mta-flink-jobmanager:8081 with session 
> ID 9587e13f-322f-4cd5-9fff-b4941462be0f.
> 2020-12-29T06:45:54.237353551Z 2020-12-29 14:45:54,237 INFO 
> org.apache.flink.runtime.dispatcher.DispatcherRestEndpoint [] - 
> http://mta-flink-jobmanager:8081 was granted leadership with 
> leaderSessionID=9587e13f-322f-4cd5-9fff-b4941462be0f
> 2020-12-29T06:45:54.237440354Z 2020-12-29 14:45:54,237 DEBUG 
> org.apache.flink.runtime.leaderelection.DefaultLeaderElectionService [] - 
> Confirm leader session ID 9587e13f-322f-4cd5-9fff-b4941462be0f for leader 
> http://mta-flink-jobmanager:8081.
> 2020-12-29T06:45:54.254555127Z 2020-12-29 14:45:54,254 DEBUG 
> io.fabric8.kubernetes.client.extended.leaderelection.LeaderElector [] - 
> Leader changed from null to 6f6479c6-86cc-4d62-84f9-37ff968bd0e5
> 2020-12-29T06:45:54.254588299Z 2020-12-29 14:45:54,254 INFO 
> org.apache.flink.kubernetes.kubeclient.resources.KubernetesLeaderElector [] - 
> New leader elected 6f6479c6-86cc-4d62-84f9-37ff968bd0e5 for 
> mta-flink-resourcemanager-leader.
> 

[jira] [Commented] (FLINK-15656) Support user-specified pod templates

2020-12-29 Thread Yang Wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17256252#comment-17256252
 ] 

Yang Wang commented on FLINK-15656:
---

[~lublinsky] I just gave some examples here. Actually, these two yaml files are 
typical pod yamls; you can set any field that is supported by K8s directly.

> Support user-specified pod templates
> 
>
> Key: FLINK-15656
> URL: https://issues.apache.org/jira/browse/FLINK-15656
> Project: Flink
>  Issue Type: Sub-task
>  Components: Deployment / Kubernetes
>Reporter: Canbin Zheng
>Assignee: Yang Wang
>Priority: Major
> Fix For: 1.13.0
>
>
> The current approach of introducing new configuration options for each aspect 
> of pod specification a user might wish is becoming unwieldy, we have to 
> maintain more and more Flink side Kubernetes configuration options and users 
> have to learn the gap between the declarative model used by Kubernetes and 
> the configuration model used by Flink. It's a great improvement to allow 
> users to specify pod templates as central places for all customization needs 
> for the jobmanager and taskmanager pods.





[jira] [Commented] (FLINK-17598) Implement FileSystemHAServices for native K8s setups

2020-12-29 Thread Yang Wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17256251#comment-17256251
 ] 

Yang Wang commented on FLINK-17598:
---

I noticed that you have created the tickets FLINK-20798 and FLINK-20797. I will 
take a look soon; let's keep the discussion under FLINK-20798.

> Implement FileSystemHAServices for native K8s setups
> 
>
> Key: FLINK-17598
> URL: https://issues.apache.org/jira/browse/FLINK-17598
> Project: Flink
>  Issue Type: Sub-task
>  Components: Deployment / Kubernetes, Runtime / Coordination
>Reporter: Canbin Zheng
>Priority: Major
>
> At the moment we use Zookeeper as a distributed coordinator for implementing 
> JobManager high availability services. But in the cloud-native environment, 
> there is a trend that more and more users prefer to use *Kubernetes* as the 
> underlying scheduler backend while *Storage Object* as the Storage medium, 
> both of these two services don't require Zookeeper deployment.
> As a result, in the K8s setups, people have to deploy and maintain their 
> Zookeeper clusters for solving JobManager SPOF. This ticket proposes to 
> provide a simplified FileSystem HA implementation with the leader-election 
> removed, which saves the efforts of Zookeeper deployment.
> To achieve this, we plan to 
> # Introduce a {{FileSystemHaServices}} which implements the 
> {{HighAvailabilityServices}}.
> # Replace Deployment with StatefulSet to ensure *at most one* semantics, 
> preventing potential concurrent access to the underlying FileSystem.





[jira] [Commented] (FLINK-17598) Implement FileSystemHAServices for native K8s setups

2020-12-29 Thread hayden zhou (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17256246#comment-17256246
 ] 

hayden zhou commented on FLINK-17598:
-

[~fly_in_gis] I am trying to deploy Flink on k8s in HA mode, using a PV as the 
HA `storageDir`. It seems the leader election succeeds, but I get the error 
"Service temporarily unavailable due to an ongoing leader election. Please 
refresh." when I submit a job.

> Implement FileSystemHAServices for native K8s setups
> 
>
> Key: FLINK-17598
> URL: https://issues.apache.org/jira/browse/FLINK-17598
> Project: Flink
>  Issue Type: Sub-task
>  Components: Deployment / Kubernetes, Runtime / Coordination
>Reporter: Canbin Zheng
>Priority: Major
>
> At the moment we use Zookeeper as a distributed coordinator for implementing 
> JobManager high availability services. But in the cloud-native environment, 
> there is a trend that more and more users prefer to use *Kubernetes* as the 
> underlying scheduler backend while *Storage Object* as the Storage medium, 
> both of these two services don't require Zookeeper deployment.
> As a result, in the K8s setups, people have to deploy and maintain their 
> Zookeeper clusters for solving JobManager SPOF. This ticket proposes to 
> provide a simplified FileSystem HA implementation with the leader-election 
> removed, which saves the efforts of Zookeeper deployment.
> To achieve this, we plan to 
> # Introduce a {{FileSystemHaServices}} which implements the 
> {{HighAvailabilityServices}}.
> # Replace Deployment with StatefulSet to ensure *at most one* semantics, 
> preventing potential concurrent access to the underlying FileSystem.





[jira] [Comment Edited] (FLINK-17598) Implement FileSystemHAServices for native K8s setups

2020-12-29 Thread hayden zhou (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17256246#comment-17256246
 ] 

hayden zhou edited comment on FLINK-17598 at 12/30/20, 3:07 AM:


[~fly_in_gis] I am trying to deploy Flink on k8s in HA mode, using a PV as the 
HA `storageDir`. It seems the leader election succeeds, but I get the error 
"Service temporarily unavailable due to an ongoing leader election. Please 
refresh." when I submit a job.

details as below:

https://stackoverflow.com/questions/65487789/can-flink-on-k8s-use-pv-using-nfs-and-pvc-as-the-high-avalibility-storagedir


was (Author: hayden zhou):
[~fly_in_gis] I am trying to deploy Flink on k8s in HA mode, using a PV as the 
HA `storageDir`. It seems the leader election succeeds, but I get the error 
"Service temporarily unavailable due to an ongoing leader election. Please 
refresh." when I submit a job.

> Implement FileSystemHAServices for native K8s setups
> 
>
> Key: FLINK-17598
> URL: https://issues.apache.org/jira/browse/FLINK-17598
> Project: Flink
>  Issue Type: Sub-task
>  Components: Deployment / Kubernetes, Runtime / Coordination
>Reporter: Canbin Zheng
>Priority: Major
>
> At the moment we use Zookeeper as a distributed coordinator for implementing 
> JobManager high availability services. But in the cloud-native environment, 
> there is a trend that more and more users prefer to use *Kubernetes* as the 
> underlying scheduler backend while *Storage Object* as the Storage medium, 
> both of these two services don't require Zookeeper deployment.
> As a result, in the K8s setups, people have to deploy and maintain their 
> Zookeeper clusters for solving JobManager SPOF. This ticket proposes to 
> provide a simplified FileSystem HA implementation with the leader-election 
> removed, which saves the efforts of Zookeeper deployment.
> To achieve this, we plan to 
> # Introduce a {{FileSystemHaServices}} which implements the 
> {{HighAvailabilityServices}}.
> # Replace Deployment with StatefulSet to ensure *at most one* semantics, 
> preventing potential concurrent access to the underlying FileSystem.





[jira] [Updated] (FLINK-20810) abstract JdbcSchema structure and derive it from JdbcDialect.

2020-12-29 Thread jiawen xiao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jiawen xiao updated FLINK-20810:

Description: related 
https://issues.apache.org/jira/browse/FLINK-20386?filter=12349963

>  abstract JdbcSchema structure and derive it from JdbcDialect.
> --
>
> Key: FLINK-20810
> URL: https://issues.apache.org/jira/browse/FLINK-20810
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / JDBC
>Affects Versions: 1.11.1, 1.11.2, 1.11.3
> Environment: related 
> https://issues.apache.org/jira/browse/FLINK-20386?filter=12349963
>Reporter: jiawen xiao
>Priority: Major
> Fix For: 1.13.0
>
>
> related https://issues.apache.org/jira/browse/FLINK-20386?filter=12349963





[jira] [Updated] (FLINK-20810) abstract JdbcSchema structure and derive it from JdbcDialect.

2020-12-29 Thread jiawen xiao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

jiawen xiao updated FLINK-20810:

Environment: (was: related 
https://issues.apache.org/jira/browse/FLINK-20386?filter=12349963)

>  abstract JdbcSchema structure and derive it from JdbcDialect.
> --
>
> Key: FLINK-20810
> URL: https://issues.apache.org/jira/browse/FLINK-20810
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / JDBC
>Affects Versions: 1.11.1, 1.11.2, 1.11.3
>Reporter: jiawen xiao
>Priority: Major
> Fix For: 1.13.0
>
>
> related https://issues.apache.org/jira/browse/FLINK-20386?filter=12349963



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-20810) abstract JdbcSchema structure and derive it from JdbcDialect.

2020-12-29 Thread jiawen xiao (Jira)
jiawen xiao created FLINK-20810:
---

 Summary:  abstract JdbcSchema structure and derive it from 
JdbcDialect.
 Key: FLINK-20810
 URL: https://issues.apache.org/jira/browse/FLINK-20810
 Project: Flink
  Issue Type: Sub-task
  Components: Connectors / JDBC
Affects Versions: 1.11.3, 1.11.2, 1.11.1
 Environment: related 
https://issues.apache.org/jira/browse/FLINK-20386?filter=12349963
Reporter: jiawen xiao
 Fix For: 1.13.0






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-20329) Elasticsearch7DynamicSinkITCase hangs

2020-12-29 Thread Huang Xingbo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17256244#comment-17256244
 ] 

Huang Xingbo commented on FLINK-20329:
--

[https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=11474=logs=961f8f81-6b52-53df-09f6-7291a2e4af6a=60581941-0138-53c0-39fe-86d62be5f407]

 

> Elasticsearch7DynamicSinkITCase hangs
> -
>
> Key: FLINK-20329
> URL: https://issues.apache.org/jira/browse/FLINK-20329
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / ElasticSearch
>Affects Versions: 1.12.0
>Reporter: Dian Fu
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.13.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=10052=logs=d44f43ce-542c-597d-bf94-b0718c71e5e8=03dca39c-73e8-5aaf-601d-328ae5c35f20
> {code}
> 2020-11-24T16:04:05.9260517Z [INFO] Running 
> org.apache.flink.streaming.connectors.elasticsearch.table.Elasticsearch7DynamicSinkITCase
> 2020-11-24T16:19:25.5481231Z 
> ==
> 2020-11-24T16:19:25.5483549Z Process produced no output for 900 seconds.
> 2020-11-24T16:19:25.5484064Z 
> ==
> 2020-11-24T16:19:25.5484498Z 
> ==
> 2020-11-24T16:19:25.5484882Z The following Java processes are running (JPS)
> 2020-11-24T16:19:25.5485475Z 
> ==
> 2020-11-24T16:19:25.5694497Z Picked up JAVA_TOOL_OPTIONS: 
> -XX:+HeapDumpOnOutOfMemoryError
> 2020-11-24T16:19:25.7263048Z 16192 surefirebooter5057948964630155904.jar
> 2020-11-24T16:19:25.7263515Z 18566 Jps
> 2020-11-24T16:19:25.7263709Z 959 Launcher
> 2020-11-24T16:19:25.7411148Z 
> ==
> 2020-11-24T16:19:25.7427013Z Printing stack trace of Java process 16192
> 2020-11-24T16:19:25.7427369Z 
> ==
> 2020-11-24T16:19:25.7484365Z Picked up JAVA_TOOL_OPTIONS: 
> -XX:+HeapDumpOnOutOfMemoryError
> 2020-11-24T16:19:26.0848776Z 2020-11-24 16:19:26
> 2020-11-24T16:19:26.0849578Z Full thread dump OpenJDK 64-Bit Server VM 
> (25.275-b01 mixed mode):
> 2020-11-24T16:19:26.0849831Z 
> 2020-11-24T16:19:26.0850185Z "Attach Listener" #32 daemon prio=9 os_prio=0 
> tid=0x7fc148001000 nid=0x48e7 waiting on condition [0x]
> 2020-11-24T16:19:26.0850595Zjava.lang.Thread.State: RUNNABLE
> 2020-11-24T16:19:26.0850814Z 
> 2020-11-24T16:19:26.0851375Z "testcontainers-ryuk" #31 daemon prio=5 
> os_prio=0 tid=0x7fc251232000 nid=0x3fb0 in Object.wait() 
> [0x7fc1012c4000]
> 2020-11-24T16:19:26.0854688Zjava.lang.Thread.State: TIMED_WAITING (on 
> object monitor)
> 2020-11-24T16:19:26.0855379Z  at java.lang.Object.wait(Native Method)
> 2020-11-24T16:19:26.0855844Z  at 
> org.testcontainers.utility.ResourceReaper.lambda$null$1(ResourceReaper.java:142)
> 2020-11-24T16:19:26.0857272Z  - locked <0x8e2bd2d0> (a 
> java.util.ArrayList)
> 2020-11-24T16:19:26.0857977Z  at 
> org.testcontainers.utility.ResourceReaper$$Lambda$93/1981729428.run(Unknown 
> Source)
> 2020-11-24T16:19:26.0858471Z  at 
> org.rnorth.ducttape.ratelimits.RateLimiter.doWhenReady(RateLimiter.java:27)
> 2020-11-24T16:19:26.0858961Z  at 
> org.testcontainers.utility.ResourceReaper.lambda$start$2(ResourceReaper.java:133)
> 2020-11-24T16:19:26.0859422Z  at 
> org.testcontainers.utility.ResourceReaper$$Lambda$92/40191541.run(Unknown 
> Source)
> 2020-11-24T16:19:26.0859788Z  at java.lang.Thread.run(Thread.java:748)
> 2020-11-24T16:19:26.0860030Z 
> 2020-11-24T16:19:26.0860371Z "process reaper" #24 daemon prio=10 os_prio=0 
> tid=0x7fc0f803b800 nid=0x3f92 waiting on condition [0x7fc10296e000]
> 2020-11-24T16:19:26.0860913Zjava.lang.Thread.State: TIMED_WAITING 
> (parking)
> 2020-11-24T16:19:26.0861387Z  at sun.misc.Unsafe.park(Native Method)
> 2020-11-24T16:19:26.0862495Z  - parking to wait for  <0x8814bf30> (a 
> java.util.concurrent.SynchronousQueue$TransferStack)
> 2020-11-24T16:19:26.0863253Z  at 
> java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
> 2020-11-24T16:19:26.0863760Z  at 
> java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
> 2020-11-24T16:19:26.0864274Z  at 
> java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
> 2020-11-24T16:19:26.0864762Z  at 
> java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941)
> 2020-11-24T16:19:26.0865299Z  at 
> 

[jira] [Commented] (FLINK-20026) Jdbc connector support regular expression

2020-12-29 Thread jiawen xiao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17256243#comment-17256243
 ] 

jiawen xiao commented on FLINK-20026:
-

cc [~jark], WDYT?

> Jdbc connector support regular expression
> -
>
> Key: FLINK-20026
> URL: https://issues.apache.org/jira/browse/FLINK-20026
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / JDBC
>Affects Versions: 1.11.2
>Reporter: Peihui He
>Priority: Major
> Fix For: 1.13.0
>
>
> When there is a large amount of data, we divide the tables by month.
> [https://ci.apache.org/projects/flink/flink-docs-release-1.11/dev/table/connectors/jdbc.html#table-name]
> so it's nice to support regular expression for table-name.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-20321) Get NPE when using AvroDeserializationSchema to deserialize null input

2020-12-29 Thread Xue Wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17256240#comment-17256240
 ] 

Xue Wang commented on FLINK-20321:
--

Thank you, [~sampadsaha5]. I've worked with Avro Serdes and Confluent Schema 
Registry during several projects. I'll definitely take this opportunity to 
learn more about Flink and contribute to the community. Feel free to get back 
when you have time, contributions are always welcome. Cheers!

> Get NPE when using AvroDeserializationSchema to deserialize null input
> --
>
> Key: FLINK-20321
> URL: https://issues.apache.org/jira/browse/FLINK-20321
> Project: Flink
>  Issue Type: Bug
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile), Table 
> SQL / Ecosystem
>Affects Versions: 1.12.0
>Reporter: Shengkai Fang
>Assignee: Sampad Kumar Saha
>Priority: Major
>  Labels: sprint, starter
> Fix For: 1.13.0
>
>
> You can reproduce the bug by adding the code into the 
> {{AvroDeserializationSchemaTest}}.
> The code follows
> {code:java}
> @Test
>   public void testSpecificRecord2() throws Exception {
>   DeserializationSchema deserializer = 
> AvroDeserializationSchema.forSpecific(Address.class);
>   Address deserializedAddress = deserializer.deserialize(null);
>   assertEquals(null, deserializedAddress);
>   }
> {code}
> Exception stack:
> {code:java}
> java.lang.NullPointerException
>   at 
> org.apache.flink.formats.avro.utils.MutableByteArrayInputStream.setBuffer(MutableByteArrayInputStream.java:43)
>   at 
> org.apache.flink.formats.avro.AvroDeserializationSchema.deserialize(AvroDeserializationSchema.java:131)
>   at 
> org.apache.flink.formats.avro.AvroDeserializationSchemaTest.testSpecificRecord2(AvroDeserializationSchemaTest.java:69)
> {code}
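A minimal illustration of the kind of guard this issue asks for. This is a hypothetical standalone wrapper, not Flink's actual `AvroDeserializationSchema` API: the idea is simply to short-circuit on null input before the underlying buffer is touched.

```java
import java.util.function.Function;

// Hypothetical null-tolerant wrapper: returns null for null input instead of
// letting the inner deserializer hit a NullPointerException on the buffer.
public class NullSafeDeserializer<T> {
    private final Function<byte[], T> inner;

    public NullSafeDeserializer(Function<byte[], T> inner) {
        this.inner = inner;
    }

    public T deserialize(byte[] message) {
        return message == null ? null : inner.apply(message);
    }
}
```

With such a guard, `deserialize(null)` returns null, matching the `assertEquals(null, deserializedAddress)` expectation in the reproduction above.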



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-18654) Correct missleading documentation in "Partitioned Scan" section of JDBC connector

2020-12-29 Thread jiawen xiao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17256241#comment-17256241
 ] 

jiawen xiao commented on FLINK-18654:
-

Hi [~jark], I am happy to solve this problem. I will fix it.

> Correct missleading documentation in "Partitioned Scan" section of JDBC 
> connector
> -
>
> Key: FLINK-18654
> URL: https://issues.apache.org/jira/browse/FLINK-18654
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / JDBC, Documentation, Table SQL / Ecosystem
>Affects Versions: 1.11.0
>Reporter: Jark Wu
>Priority: Major
> Fix For: 1.13.0, 1.11.4
>
>
> In 
> https://ci.apache.org/projects/flink/flink-docs-master/dev/table/connectors/jdbc.html#partitioned-scan
> > Notice that scan.partition.lower-bound and scan.partition.upper-bound are 
> > just used to decide the partition stride, not for filtering the rows in 
> > table. So all rows in the table will be partitioned and returned.
> The "not for filtering the rows in table" part is not correct. Actually, if 
> partition bounds are defined, only rows in the bound range are scanned. 
> Besides, maybe it would be better to add a practical suggestion, for 
> example: 
> "If it is a batch job, it is also doable to get the max and min values 
> first before submitting the Flink job."
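For reference, a sketch of what a partitioned scan DDL looks like (table and column names are made up; the option keys are from the linked 1.11 JDBC connector documentation):

```sql
CREATE TABLE orders (
  id BIGINT,
  amount DECIMAL(10, 2)
) WITH (
  'connector' = 'jdbc',
  'url' = 'jdbc:mysql://localhost:3306/mydb',
  'table-name' = 'orders',
  'scan.partition.column' = 'id',       -- must be a numeric, date, or timestamp column
  'scan.partition.num' = '4',           -- number of parallel read partitions
  'scan.partition.lower-bound' = '1',   -- per this issue: also bounds the scanned rows
  'scan.partition.upper-bound' = '10000'
)
```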



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-20809) the limit push down invalid when use filter

2020-12-29 Thread Jun Zhang (Jira)
Jun Zhang created FLINK-20809:
-

 Summary: the limit push down invalid when use filter 
 Key: FLINK-20809
 URL: https://issues.apache.org/jira/browse/FLINK-20809
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / API
Affects Versions: 1.12.0
Reporter: Jun Zhang
 Fix For: 1.13.0


When I use Flink SQL to query a Hive table, like this:
{code:java}
select * from hive_table where id = 1 limit 1
{code}
 
When the SQL contains query conditions in the where clause, I found that the 
limit push down does not take effect.

Looking at the comment in the source code, I think the limit should still be 
pushed down. Is this a bug?

[the comment 
|https://github.com/apache/flink/blob/master/flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/plan/rules/logical/PushLimitIntoTableSourceScanRule.java#L64]




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-15352) develop MySQLCatalog to connect Flink with MySQL tables and ecosystem

2020-12-29 Thread jiawen xiao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17256234#comment-17256234
 ] 

jiawen xiao commented on FLINK-15352:
-

Hi [~phoenixjiangnan], I think this is a clear task and I am very 
interested. Please assign it to me.

> develop MySQLCatalog  to connect Flink with MySQL tables and ecosystem
> --
>
> Key: FLINK-15352
> URL: https://issues.apache.org/jira/browse/FLINK-15352
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / JDBC
>Reporter: Bowen Li
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #14512: FLINK-20359 Added Owner Reference to Job Manager in native kubernetes

2020-12-29 Thread GitBox


flinkbot edited a comment on pull request #14512:
URL: https://github.com/apache/flink/pull/14512#issuecomment-751906693


   
   ## CI report:
   
   * ef398a6aca683bb22074d7a72abd6a328981610e Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=11468)
 
   * 655b2992fd97e20ef142287bfd435832c469261d Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=11481)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-20801) Using asynchronous methods in operators causes serialization problems

2020-12-29 Thread lafeier (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17256230#comment-17256230
 ] 

lafeier commented on FLINK-20801:
-

Well, I tried Async I/O and it didn't meet my needs. This issue is a result of 
Kryo serialization concurrency. Will this issue be resolved in the future?

> Using asynchronous methods in operators causes serialization problems
> -
>
> Key: FLINK-20801
> URL: https://issues.apache.org/jira/browse/FLINK-20801
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Network
>Affects Versions: 1.11.2
> Environment: * flink 1.11.2
>  * java8
>  * windows
>Reporter: lafeier
>Priority: Major
>
> Using asynchronous methods in operators causes serialization problems.
> Exceptions are indeterminate, for example:
> {code:java}
> java.io.IOException: Corrupt stream, found tag: 21java.io.IOException: 
> Corrupt stream, found tag: 21 at 
> org.apache.flink.streaming.runtime.streamrecord.StreamElementSerializer.deserialize(StreamElementSerializer.java:217)
>  at 
> org.apache.flink.streaming.runtime.streamrecord.StreamElementSerializer.deserialize(StreamElementSerializer.java:46)
>  at 
> org.apache.flink.runtime.plugable.NonReusingDeserializationDelegate.read(NonReusingDeserializationDelegate.java:55)
>  at 
> org.apache.flink.runtime.io.network.api.serialization.NonSpanningWrapper.readInto(NonSpanningWrapper.java:335)
>  at 
> org.apache.flink.runtime.io.network.api.serialization.SpillingAdaptiveSpanningRecordDeserializer.readNonSpanningRecord(SpillingAdaptiveSpanningRecordDeserializer.java:108)
>  at 
> org.apache.flink.runtime.io.network.api.serialization.SpillingAdaptiveSpanningRecordDeserializer.getNextRecord(SpillingAdaptiveSpanningRecordDeserializer.java:85)
>  at 
> org.apache.flink.streaming.runtime.io.StreamTaskNetworkInput.emitNext(StreamTaskNetworkInput.java:146)
>  at 
> org.apache.flink.streaming.runtime.io.StreamOneInputProcessor.processInput(StreamOneInputProcessor.java:67)
>  at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.processInput(StreamTask.java:351)
>  at 
> org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxStep(MailboxProcessor.java:191)
>  at 
> org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:181)
>  at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.runMailboxLoop(StreamTask.java:566)
>  at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:536)
>  at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:721) at 
> org.apache.flink.runtime.taskmanager.Task.run(Task.java:546) at 
> java.lang.Thread.run(Thread.java:748)
> {code}
>  
> {code:java}
> java.lang.RuntimeException: Cannot instantiate 
> class.java.lang.RuntimeException: Cannot instantiate class. at 
> org.apache.flink.api.java.typeutils.runtime.PojoSerializer.deserialize(PojoSerializer.java:385)
>  at 
> org.apache.flink.streaming.runtime.streamrecord.StreamElementSerializer.deserialize(StreamElementSerializer.java:205)
>  at 
> org.apache.flink.streaming.runtime.streamrecord.StreamElementSerializer.deserialize(StreamElementSerializer.java:46)
>  at 
> org.apache.flink.runtime.plugable.NonReusingDeserializationDelegate.read(NonReusingDeserializationDelegate.java:55)
>  at 
> org.apache.flink.runtime.io.network.api.serialization.NonSpanningWrapper.readInto(NonSpanningWrapper.java:335)
>  at 
> org.apache.flink.runtime.io.network.api.serialization.SpillingAdaptiveSpanningRecordDeserializer.readNonSpanningRecord(SpillingAdaptiveSpanningRecordDeserializer.java:108)
>  at 
> org.apache.flink.runtime.io.network.api.serialization.SpillingAdaptiveSpanningRecordDeserializer.getNextRecord(SpillingAdaptiveSpanningRecordDeserializer.java:85)
>  at 
> org.apache.flink.streaming.runtime.io.StreamTaskNetworkInput.emitNext(StreamTaskNetworkInput.java:146)
>  at 
> org.apache.flink.streaming.runtime.io.StreamOneInputProcessor.processInput(StreamOneInputProcessor.java:67)
>  at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.processInput(StreamTask.java:351)
>  at 
> org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxStep(MailboxProcessor.java:191)
>  at 
> org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:181)
>  at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.runMailboxLoop(StreamTask.java:566)
>  at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:536)
>  at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:721) at 
> org.apache.flink.runtime.taskmanager.Task.run(Task.java:546) at 
> java.lang.Thread.run(Thread.java:748)Caused by: 
> java.lang.ClassNotFoundException: e11     name12     
> {code}
> 
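The root cause described above — a user-spawned thread emitting records concurrently with the task thread — can be illustrated without Flink at all. The sketch below uses a simplified tag+payload framing that is only loosely modeled on `StreamElementSerializer` (the framing is an assumption for illustration): one write interleaved mid-record destroys the stream's framing, producing exactly the kind of "found tag: N" corruption seen in the stack traces.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.IOException;

public class InterleavedWriteDemo {
    // One framed record: a 1-byte tag followed by a 4-byte big-endian int
    // payload (a simplified stand-in for Flink's serializer framing).
    static byte[] record(byte tag, int value) {
        return new byte[] {
            tag,
            (byte) (value >>> 24), (byte) (value >>> 16),
            (byte) (value >>> 8), (byte) value
        };
    }

    // Simulate a second writer preempting the first mid-record: the first
    // record's bytes are split and another whole record lands inside them.
    static byte[] interleave(byte[] a, byte[] b) {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        bos.write(a, 0, 3);             // task thread writes 3 of 5 bytes...
        bos.write(b, 0, b.length);      // ...user thread writes a whole record...
        bos.write(a, 3, a.length - 3);  // ...task thread writes the rest
        return bos.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        byte[] corrupted = interleave(record((byte) 7, 0x01010101),
                                      record((byte) 7, 0x02020202));
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(corrupted));
        System.out.println("tag=" + in.readByte());      // first tag still parses
        System.out.println("value=" + in.readInt());     // payload is a mix of both records
        System.out.println("next tag=" + in.readByte()); // bogus tag: framing destroyed
    }
}
```

The reader recovers a valid-looking first tag, then a garbage payload, then a bogus tag, mirroring the nondeterministic "Corrupt stream, found tag: 21" and "Cannot instantiate class" failures: which exception surfaces depends on where the interleave happens to land.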

[GitHub] [flink] V1ncentzzZ commented on pull request #14508: [FLINK-20773][format] Support allow-unescaped-control-chars option for JSON format.

2020-12-29 Thread GitBox


V1ncentzzZ commented on pull request #14508:
URL: https://github.com/apache/flink/pull/14508#issuecomment-752300734


   Hi @fsk119, Would you help review this?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #14512: FLINK-20359 Added Owner Reference to Job Manager in native kubernetes

2020-12-29 Thread GitBox


flinkbot edited a comment on pull request #14512:
URL: https://github.com/apache/flink/pull/14512#issuecomment-751906693


   
   ## CI report:
   
   * ef398a6aca683bb22074d7a72abd6a328981610e Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=11468)
 
   * 655b2992fd97e20ef142287bfd435832c469261d UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #14392: [FLINK-20606][connectors/hive, table sql] sql cli with hive catalog c…

2020-12-29 Thread GitBox


flinkbot edited a comment on pull request #14392:
URL: https://github.com/apache/flink/pull/14392#issuecomment-745400662


   
   ## CI report:
   
   *  Unknown: [CANCELED](TBD) 
   * a366565d2ef37c57eee2d17f5718b2f0e6f173a8 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=11479)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-20693) Port BatchExecPythonCorrelate and StreamExecPythonCorrelate to Java

2020-12-29 Thread Dian Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20693?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu updated FLINK-20693:

Component/s: API / Python

> Port BatchExecPythonCorrelate and StreamExecPythonCorrelate to Java
> ---
>
> Key: FLINK-20693
> URL: https://issues.apache.org/jira/browse/FLINK-20693
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / Python, Table SQL / Planner
>Reporter: godfrey he
>Assignee: Huang Xingbo
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.13.0
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (FLINK-20693) Port BatchExecPythonCorrelate and StreamExecPythonCorrelate to Java

2020-12-29 Thread Dian Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20693?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu closed FLINK-20693.
---
  Assignee: Huang Xingbo
Resolution: Fixed

Merged to master via 99c70580d9b3a2cba505d1fd7883b99f815c0d25

> Port BatchExecPythonCorrelate and StreamExecPythonCorrelate to Java
> ---
>
> Key: FLINK-20693
> URL: https://issues.apache.org/jira/browse/FLINK-20693
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: godfrey he
>Assignee: Huang Xingbo
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.13.0
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

