[jira] [Commented] (FLINK-27576) Flink will request a new pod when the JM pod is deleted, but will remove it when the TaskExecutor exceeds the idle timeout

2022-05-16 Thread zhisheng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-27576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17537400#comment-17537400
 ] 

zhisheng commented on FLINK-27576:
--

Hi [~aitozi], have you started a PR?

> Flink will request a new pod when the JM pod is deleted, but will remove it 
> when the TaskExecutor exceeds the idle timeout 
> --
>
> Key: FLINK-27576
> URL: https://issues.apache.org/jira/browse/FLINK-27576
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / Kubernetes
>Affects Versions: 1.12.0
>Reporter: zhisheng
>Priority: Major
> Attachments: image-2022-05-11-20-06-58-955.png, 
> image-2022-05-11-20-08-01-739.png, jobmanager_log.txt
>
>
> Flink 1.12.0 with HA (ZooKeeper) and checkpointing enabled: when I use kubectl 
> to delete the JM pod, the job requests a new JM pod and fails over from the 
> last checkpoint, which is fine. But it also requests new TM pods that are not 
> actually used; the new TM pods are closed once the TaskExecutor exceeds the 
> idle timeout. The job actually reuses the old TMs, so why does it need to 
> request new TM pods? Will the job fail if the cluster has no resources for the 
> new TMs? Can we optimize this and reuse the old TMs directly?
>  
> [^jobmanager_log.txt]
> ^!image-2022-05-11-20-06-58-955.png!^
> ^!image-2022-05-11-20-08-01-739.png|width=857,height=324!^
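
For reference, the idle timeout that reclaims the unused TM pods mentioned above 
is configurable; a minimal flink-conf.yaml sketch (the option name is from the 
Flink 1.12 docs, the value shown is the default):

{code}
# How long the ResourceManager keeps an idle TaskExecutor before releasing
# its pod, in milliseconds (default 30000).
resourcemanager.taskmanager-timeout: 30000
{code}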



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (FLINK-27576) Flink will request a new pod when the JM pod is deleted, but will remove it when the TaskExecutor exceeds the idle timeout

2022-05-11 Thread zhisheng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-27576?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhisheng updated FLINK-27576:
-
Description: 
Flink 1.12.0 with HA (ZooKeeper) and checkpointing enabled: when I use kubectl 
to delete the JM pod, the job requests a new JM pod and fails over from the 
last checkpoint, which is fine. But it also requests new TM pods that are not 
actually used; the new TM pods are closed once the TaskExecutor exceeds the 
idle timeout. The job actually reuses the old TMs, so why does it need to 
request new TM pods? Will the job fail if the cluster has no resources for the 
new TMs? Can we optimize this and reuse the old TMs directly?

 

[^jobmanager_log.txt]

^!image-2022-05-11-20-06-58-955.png!^

^!image-2022-05-11-20-08-01-739.png|width=857,height=324!^

  was:
Flink 1.12.0 with HA (ZooKeeper) and checkpointing enabled: when I use kubectl 
to delete the JM pod, the job requests a new JM pod and fails over from the 
last checkpoint, which is fine. But it also requests new TM pods that are not 
actually used; the new TM pods are closed once the TaskExecutor exceeds the 
idle timeout. The job actually reuses the old TMs, so why does it need to 
request new TM pods? Will the job fail if the cluster has no resources for the 
new TMs? Can we optimize this and reuse the old TMs directly?

 

[^jobmanager_log.txt]

^!image-2022-05-11-20-06-58-955.png!^

^!image-2022-05-11-20-08-01-739.png!^


> Flink will request a new pod when the JM pod is deleted, but will remove it 
> when the TaskExecutor exceeds the idle timeout 
> --
>
> Key: FLINK-27576
> URL: https://issues.apache.org/jira/browse/FLINK-27576
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / Kubernetes
>Affects Versions: 1.12.0
>Reporter: zhisheng
>Priority: Major
> Attachments: image-2022-05-11-20-06-58-955.png, 
> image-2022-05-11-20-08-01-739.png, jobmanager_log.txt
>
>
> Flink 1.12.0 with HA (ZooKeeper) and checkpointing enabled: when I use kubectl 
> to delete the JM pod, the job requests a new JM pod and fails over from the 
> last checkpoint, which is fine. But it also requests new TM pods that are not 
> actually used; the new TM pods are closed once the TaskExecutor exceeds the 
> idle timeout. The job actually reuses the old TMs, so why does it need to 
> request new TM pods? Will the job fail if the cluster has no resources for the 
> new TMs? Can we optimize this and reuse the old TMs directly?
>  
> [^jobmanager_log.txt]
> ^!image-2022-05-11-20-06-58-955.png!^
> ^!image-2022-05-11-20-08-01-739.png|width=857,height=324!^



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Created] (FLINK-27576) Flink will request a new pod when the JM pod is deleted, but will remove it when the TaskExecutor exceeds the idle timeout

2022-05-11 Thread zhisheng (Jira)
zhisheng created FLINK-27576:


 Summary: Flink will request a new pod when the JM pod is deleted, but 
will remove it when the TaskExecutor exceeds the idle timeout 
 Key: FLINK-27576
 URL: https://issues.apache.org/jira/browse/FLINK-27576
 Project: Flink
  Issue Type: Bug
  Components: Deployment / Kubernetes
Affects Versions: 1.12.0
Reporter: zhisheng
 Attachments: image-2022-05-11-20-06-58-955.png, 
image-2022-05-11-20-08-01-739.png, jobmanager_log.txt

Flink 1.12.0 with HA (ZooKeeper) and checkpointing enabled: when I use kubectl 
to delete the JM pod, the job requests a new JM pod and fails over from the 
last checkpoint, which is fine. But it also requests new TM pods that are not 
actually used; the new TM pods are closed once the TaskExecutor exceeds the 
idle timeout. The job actually reuses the old TMs, so why does it need to 
request new TM pods? Will the job fail if the cluster has no resources for the 
new TMs? Can we optimize this and reuse the old TMs directly?

 

[^jobmanager_log.txt]

^!image-2022-05-11-20-06-58-955.png!^

^!image-2022-05-11-20-08-01-739.png!^



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Commented] (FLINK-26884) Move Elasticsearch connector to external connector repository

2022-03-31 Thread zhisheng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17515674#comment-17515674
 ] 

zhisheng commented on FLINK-26884:
--

[~martijnvisser] thanks for your quick reply, I get it.

> Move Elasticsearch connector to external connector repository
> -
>
> Key: FLINK-26884
> URL: https://issues.apache.org/jira/browse/FLINK-26884
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / ElasticSearch
>Reporter: Jing Ge
>Assignee: Jing Ge
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-26884) Move Elasticsearch connector to external connector repository

2022-03-30 Thread zhisheng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17514761#comment-17514761
 ] 

zhisheng commented on FLINK-26884:
--

I've seen similar discussions on the dev mailing list before; I always thought 
it would mean moving all the connector modules to a single new external 
connector repository, e.g.: 
[https://github.com/apache/flink-connectors] 

Then the new connector repository would contain all the connectors.

> Move Elasticsearch connector to external connector repository
> -
>
> Key: FLINK-26884
> URL: https://issues.apache.org/jira/browse/FLINK-26884
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / ElasticSearch
>Reporter: Jing Ge
>Assignee: Jing Ge
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-26884) Move Elasticsearch connector to external connector repository

2022-03-30 Thread zhisheng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17514758#comment-17514758
 ] 

zhisheng commented on FLINK-26884:
--

Hi, why move the ES connector to an external connector 
repository ([https://github.com/apache/flink-connector-elasticsearch])? 

Will other connectors also be moved to external connector repositories? 
e.g.:

Kafka connector moved to [https://github.com/apache/flink-connector-kafka]?

HBase connector moved to [https://github.com/apache/flink-connector-hbase]?

 

> Move Elasticsearch connector to external connector repository
> -
>
> Key: FLINK-26884
> URL: https://issues.apache.org/jira/browse/FLINK-26884
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / ElasticSearch
>Reporter: Jing Ge
>Assignee: Jing Ge
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-26601) Flink HBase HBaseRowDataAsyncLookupFunction threadPool not closed, causing the container to have many 'hbase-async-lookup-worker' threads

2022-03-10 Thread zhisheng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhisheng updated FLINK-26601:
-
Description: 
[https://github.com/apache/flink/blob/master/flink-connectors/flink-connector-hbase-2.2/src/main/java/org/apache/flink/connector/hbase2/source/HBaseRowDataAsyncLookupFunction.java#L111]

The thread pool is created in the open() function but is not gracefully shut 
down in the close() function, so every job failover creates a new thread pool.

!image-2022-03-11-15-32-58-084.png|width=516,height=288!

!image-2022-03-11-15-32-23-721.png|width=532,height=274!

!image-2022-03-11-15-31-19-206.png|width=555,height=369!

 

  was:
[https://github.com/apache/flink/blob/master/flink-connectors/flink-connector-hbase-2.2/src/main/java/org/apache/flink/connector/hbase2/source/HBaseRowDataAsyncLookupFunction.java#L111]

The thread pool is created in the open() function but is not gracefully shut 
down in the close() function, so every job failover creates a new thread pool.

!image-2022-03-11-15-32-58-084.png!

!image-2022-03-11-15-32-23-721.png!

!image-2022-03-11-15-31-19-206.png!

 


> Flink HBase HBaseRowDataAsyncLookupFunction threadPool not closed, causing 
> the container to have many 'hbase-async-lookup-worker' threads
> 
>
> Key: FLINK-26601
> URL: https://issues.apache.org/jira/browse/FLINK-26601
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / HBase
>Affects Versions: 1.13.0
>Reporter: zhisheng
>Priority: Major
> Attachments: image-2022-03-11-15-31-19-206.png, 
> image-2022-03-11-15-32-23-721.png, image-2022-03-11-15-32-46-701.png, 
> image-2022-03-11-15-32-58-084.png
>
>
> [https://github.com/apache/flink/blob/master/flink-connectors/flink-connector-hbase-2.2/src/main/java/org/apache/flink/connector/hbase2/source/HBaseRowDataAsyncLookupFunction.java#L111]
> The thread pool is created in the open() function but is not gracefully shut 
> down in the close() function, so every job failover creates a new thread pool.
> !image-2022-03-11-15-32-58-084.png|width=516,height=288!
> !image-2022-03-11-15-32-23-721.png|width=532,height=274!
> !image-2022-03-11-15-31-19-206.png|width=555,height=369!
>  



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-26601) Flink HBase HBaseRowDataAsyncLookupFunction threadPool not closed, causing the container to have many 'hbase-async-lookup-worker' threads

2022-03-10 Thread zhisheng (Jira)
zhisheng created FLINK-26601:


 Summary: Flink HBase HBaseRowDataAsyncLookupFunction threadPool 
not closed, causing the container to have many 'hbase-async-lookup-worker' threads
 Key: FLINK-26601
 URL: https://issues.apache.org/jira/browse/FLINK-26601
 Project: Flink
  Issue Type: Bug
  Components: Connectors / HBase
Affects Versions: 1.13.0
Reporter: zhisheng
 Attachments: image-2022-03-11-15-31-19-206.png, 
image-2022-03-11-15-32-23-721.png, image-2022-03-11-15-32-46-701.png, 
image-2022-03-11-15-32-58-084.png

[https://github.com/apache/flink/blob/master/flink-connectors/flink-connector-hbase-2.2/src/main/java/org/apache/flink/connector/hbase2/source/HBaseRowDataAsyncLookupFunction.java#L111]

The thread pool is created in the open() function but is not gracefully shut 
down in the close() function, so every job failover creates a new thread pool.

!image-2022-03-11-15-32-58-084.png!

!image-2022-03-11-15-32-23-721.png!

!image-2022-03-11-15-31-19-206.png!
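
A minimal sketch of the kind of fix being suggested, assuming the pool is kept 
in a field named threadPool (the field name is illustrative; 
ExecutorUtils.gracefulShutdown is an existing Flink utility):

{code:java}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.TimeUnit;

import org.apache.flink.util.ExecutorUtils;

// Sketch: shut the pool down in close() so each failover does not leak a
// fresh set of 'hbase-async-lookup-worker' threads.
@Override
public void close() {
    if (threadPool != null && !threadPool.isShutdown()) {
        // Waits up to the timeout for running lookups, then forces shutdownNow().
        ExecutorUtils.gracefulShutdown(10L, TimeUnit.SECONDS, threadPool);
    }
}
{code}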

 



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-26248) flink job does not recover from the latest checkpoint on native k8s

2022-02-21 Thread zhisheng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17495933#comment-17495933
 ] 

zhisheng commented on FLINK-26248:
--

[~wangyang0918] thank you very much, I closed this issue.

> flink job does not recover from the latest checkpoint on native k8s
> --
>
> Key: FLINK-26248
> URL: https://issues.apache.org/jira/browse/FLINK-26248
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / Kubernetes
>Affects Versions: 1.12.0
>Reporter: zhisheng
>Priority: Major
> Attachments: image-2022-02-22-10-26-53-699.png
>
>
> flink version: 1.12.0
> ha: zk
> mode: application mode
> native k8s
>  
> 1. If the Flink job starts from a savepoint and runs for a while, and I then 
> delete the JM pod, the job restarts from the old savepoint instead of the 
> latest checkpoint. This is not what I want.
> If I delete the TM pod, the job restarts from the latest checkpoint, which is 
> what I want.
>  
> 2. If the job starts without a savepoint and runs for a while, and I then 
> delete the JM pod, the job restarts from the earliest position instead of the 
> latest checkpoint.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Closed] (FLINK-26248) flink job does not recover from the latest checkpoint on native k8s

2022-02-21 Thread zhisheng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhisheng closed FLINK-26248.

Resolution: Not A Bug

> flink job does not recover from the latest checkpoint on native k8s
> --
>
> Key: FLINK-26248
> URL: https://issues.apache.org/jira/browse/FLINK-26248
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / Kubernetes
>Affects Versions: 1.12.0
>Reporter: zhisheng
>Priority: Major
> Attachments: image-2022-02-22-10-26-53-699.png
>
>
> flink version: 1.12.0
> ha: zk
> mode: application mode
> native k8s
>  
> 1. If the Flink job starts from a savepoint and runs for a while, and I then 
> delete the JM pod, the job restarts from the old savepoint instead of the 
> latest checkpoint. This is not what I want.
> If I delete the TM pod, the job restarts from the latest checkpoint, which is 
> what I want.
>  
> 2. If the job starts without a savepoint and runs for a while, and I then 
> delete the JM pod, the job restarts from the earliest position instead of the 
> latest checkpoint.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-26248) flink job does not recover from the latest checkpoint on native k8s

2022-02-21 Thread zhisheng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17495867#comment-17495867
 ] 

zhisheng commented on FLINK-26248:
--

Thanks [~wangyang0918]. I have a question: if one of my jobs always uses the 
same cluster.id, and I repeatedly stop the job and then resume it from state, is 
there any problem with the job state and the HA-related data in this case? I 
find that the ZooKeeper data, HDFS checkpoint data, and HDFS HA data always end 
up in the same folders. Do I need to use a new cluster.id every time I start the 
same job? Similar to YARN, which uses a new application id each time, so those 
directories (ZK data / HDFS checkpoint data / HDFS HA data) are unique.

 

 

> flink job does not recover from the latest checkpoint on native k8s
> --
>
> Key: FLINK-26248
> URL: https://issues.apache.org/jira/browse/FLINK-26248
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / Kubernetes
>Affects Versions: 1.12.0
>Reporter: zhisheng
>Priority: Major
> Attachments: image-2022-02-22-10-26-53-699.png
>
>
> flink version: 1.12.0
> ha: zk
> mode: application mode
> native k8s
>  
> 1. If the Flink job starts from a savepoint and runs for a while, and I then 
> delete the JM pod, the job restarts from the old savepoint instead of the 
> latest checkpoint. This is not what I want.
> If I delete the TM pod, the job restarts from the latest checkpoint, which is 
> what I want.
>  
> 2. If the job starts without a savepoint and runs for a while, and I then 
> delete the JM pod, the job restarts from the earliest position instead of the 
> latest checkpoint.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-20219) Rethink the HA related ZNodes/ConfigMap clean up for session cluster

2022-02-21 Thread zhisheng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17495865#comment-17495865
 ] 

zhisheng commented on FLINK-20219:
--

Thanks [~wangyang0918]. 

> Rethink the HA related ZNodes/ConfigMap clean up for session cluster
> 
>
> Key: FLINK-20219
> URL: https://issues.apache.org/jira/browse/FLINK-20219
> Project: Flink
>  Issue Type: Improvement
>  Components: Deployment / Kubernetes, Deployment / Scripts, Runtime / 
> Coordination
>Affects Versions: 1.12.0
>Reporter: Yang Wang
>Assignee: Yang Wang
>Priority: Major
>  Labels: pull-request-available, stale-assigned
> Attachments: image-2022-02-22-10-54-03-439.png
>
>
> When I am testing the Kubernetes HA service, I realize that ConfigMap clean 
> up for a session cluster (both standalone and native) is not very easy.
>  * For the native K8s session, we suggest our users stop it via {{echo 
> 'stop' | ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=<ClusterID> 
> -Dexecution.attached=true}}. Currently, this has the same effect as {{kubectl 
> delete deploy <ClusterID>}}. It will not clean up the leader 
> ConfigMaps (e.g. ResourceManager, Dispatcher, RestServer, JobManager). Even 
> though there are no running jobs before the stop, we still get some retained 
> ConfigMaps. So when and how should we clean up the retained ConfigMaps? Should 
> the user do it manually? Or we could provide some utilities in the Flink 
> client.
>  * For the standalone session, I think it is reasonable for the users to do 
> the HA ConfigMap clean up manually.
>  
> We could use the following command to do the manual clean up.
> {{kubectl delete cm 
> --selector='app=<ClusterID>,configmap-type=high-availability'}}
>  
> Note: This is not a problem for a Flink application cluster, since we can do 
> the clean up automatically when all the running jobs in the application have 
> reached a terminal state (e.g. FAILED, CANCELED, FINISHED) and then destroy 
> the Flink cluster.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-20219) Rethink the HA related ZNodes/ConfigMap clean up for session cluster

2022-02-21 Thread zhisheng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhisheng updated FLINK-20219:
-
Attachment: image-2022-02-22-10-54-03-439.png

> Rethink the HA related ZNodes/ConfigMap clean up for session cluster
> 
>
> Key: FLINK-20219
> URL: https://issues.apache.org/jira/browse/FLINK-20219
> Project: Flink
>  Issue Type: Improvement
>  Components: Deployment / Kubernetes, Deployment / Scripts, Runtime / 
> Coordination
>Affects Versions: 1.12.0
>Reporter: Yang Wang
>Assignee: Yang Wang
>Priority: Major
>  Labels: pull-request-available, stale-assigned
> Attachments: image-2022-02-22-10-54-03-439.png
>
>
> When I am testing the Kubernetes HA service, I realize that ConfigMap clean 
> up for a session cluster (both standalone and native) is not very easy.
>  * For the native K8s session, we suggest our users stop it via {{echo 
> 'stop' | ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=<ClusterID> 
> -Dexecution.attached=true}}. Currently, this has the same effect as {{kubectl 
> delete deploy <ClusterID>}}. It will not clean up the leader 
> ConfigMaps (e.g. ResourceManager, Dispatcher, RestServer, JobManager). Even 
> though there are no running jobs before the stop, we still get some retained 
> ConfigMaps. So when and how should we clean up the retained ConfigMaps? Should 
> the user do it manually? Or we could provide some utilities in the Flink 
> client.
>  * For the standalone session, I think it is reasonable for the users to do 
> the HA ConfigMap clean up manually.
>  
> We could use the following command to do the manual clean up.
> {{kubectl delete cm 
> --selector='app=<ClusterID>,configmap-type=high-availability'}}
>  
> Note: This is not a problem for a Flink application cluster, since we can do 
> the clean up automatically when all the running jobs in the application have 
> reached a terminal state (e.g. FAILED, CANCELED, FINISHED) and then destroy 
> the Flink cluster.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-20219) Rethink the HA related ZNodes/ConfigMap clean up for session cluster

2022-02-21 Thread zhisheng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17495831#comment-17495831
 ] 

zhisheng commented on FLINK-20219:
--

[~wangyang0918] hello, I found out that it's not just the session cluster that 
has this problem; Flink on native K8s in application mode also has it. I deleted 
the cluster.id deployment, and the job's HA ConfigMap still exists.

!image-2022-02-22-10-54-03-439.png!
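
For anyone hitting the same leftovers, a clean-up sketch based on the selector 
from the issue description (the cluster id and namespace below are placeholders):

{code}
# List the retained HA ConfigMaps of one cluster id, then delete them once
# the deployment is really gone.
kubectl get cm -n my-namespace --selector='app=my-cluster-id,configmap-type=high-availability'
kubectl delete cm -n my-namespace --selector='app=my-cluster-id,configmap-type=high-availability'
{code}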

 

> Rethink the HA related ZNodes/ConfigMap clean up for session cluster
> 
>
> Key: FLINK-20219
> URL: https://issues.apache.org/jira/browse/FLINK-20219
> Project: Flink
>  Issue Type: Improvement
>  Components: Deployment / Kubernetes, Deployment / Scripts, Runtime / 
> Coordination
>Affects Versions: 1.12.0
>Reporter: Yang Wang
>Assignee: Yang Wang
>Priority: Major
>  Labels: pull-request-available, stale-assigned
> Attachments: image-2022-02-22-10-54-03-439.png
>
>
> When I am testing the Kubernetes HA service, I realize that ConfigMap clean 
> up for a session cluster (both standalone and native) is not very easy.
>  * For the native K8s session, we suggest our users stop it via {{echo 
> 'stop' | ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=<ClusterID> 
> -Dexecution.attached=true}}. Currently, this has the same effect as {{kubectl 
> delete deploy <ClusterID>}}. It will not clean up the leader 
> ConfigMaps (e.g. ResourceManager, Dispatcher, RestServer, JobManager). Even 
> though there are no running jobs before the stop, we still get some retained 
> ConfigMaps. So when and how should we clean up the retained ConfigMaps? Should 
> the user do it manually? Or we could provide some utilities in the Flink 
> client.
>  * For the standalone session, I think it is reasonable for the users to do 
> the HA ConfigMap clean up manually.
>  
> We could use the following command to do the manual clean up.
> {{kubectl delete cm 
> --selector='app=<ClusterID>,configmap-type=high-availability'}}
>  
> Note: This is not a problem for a Flink application cluster, since we can do 
> the clean up automatically when all the running jobs in the application have 
> reached a terminal state (e.g. FAILED, CANCELED, FINISHED) and then destroy 
> the Flink cluster.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-26248) flink job does not recover from the latest checkpoint on native k8s

2022-02-21 Thread zhisheng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhisheng updated FLINK-26248:
-
Attachment: image-2022-02-22-10-26-53-699.png

> flink job does not recover from the latest checkpoint on native k8s
> --
>
> Key: FLINK-26248
> URL: https://issues.apache.org/jira/browse/FLINK-26248
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / Kubernetes
>Affects Versions: 1.12.0
>Reporter: zhisheng
>Priority: Major
> Attachments: image-2022-02-22-10-26-53-699.png
>
>
> flink version: 1.12.0
> ha: zk
> mode: application mode
> native k8s
>  
> 1. If the Flink job starts from a savepoint and runs for a while, and I then 
> delete the JM pod, the job restarts from the old savepoint instead of the 
> latest checkpoint. This is not what I want.
> If I delete the TM pod, the job restarts from the latest checkpoint, which is 
> what I want.
>  
> 2. If the job starts without a savepoint and runs for a while, and I then 
> delete the JM pod, the job restarts from the earliest position instead of the 
> latest checkpoint.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-26248) flink job does not recover from the latest checkpoint on native k8s

2022-02-18 Thread zhisheng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17494589#comment-17494589
 ] 

zhisheng commented on FLINK-26248:
--

Found this: https://issues.apache.org/jira/browse/FLINK-19358

 

> flink job does not recover from the latest checkpoint on native k8s
> --
>
> Key: FLINK-26248
> URL: https://issues.apache.org/jira/browse/FLINK-26248
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / Kubernetes
>Affects Versions: 1.12.0
>Reporter: zhisheng
>Priority: Major
>
> flink version: 1.12.0
> ha: zk
> mode: application mode
> native k8s
>  
> 1. If the Flink job starts from a savepoint and runs for a while, and I then 
> delete the JM pod, the job restarts from the old savepoint instead of the 
> latest checkpoint. This is not what I want.
> If I delete the TM pod, the job restarts from the latest checkpoint, which is 
> what I want.
>  
> 2. If the job starts without a savepoint and runs for a while, and I then 
> delete the JM pod, the job restarts from the earliest position instead of the 
> latest checkpoint.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Comment Edited] (FLINK-26248) flink job does not recover from the latest checkpoint on native k8s

2022-02-18 Thread zhisheng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17494573#comment-17494573
 ] 

zhisheng edited comment on FLINK-26248 at 2/18/22, 12:28 PM:
-

Thanks [~wangyang0918] for the quick response. Sorry, I just found that I had 
not set high-availability=zookeeper in the flink-conf.yaml file. 

 

I found that if I set HA, the job id is 00000000000000000000000000000000, and 
all the job checkpoint files will be in the 
/flink/checkpoints/00000000000000000000000000000000/ folder. Will job 
checkpointing and restarts be affected? What should I do to solve this problem?


was (Author: zhisheng):
Thanks [~wangyang0918] for the quick response. Sorry, I just found that I had 
not set high-availability=zookeeper in the flink-conf.yaml file. 

 

I found that if I set HA, the job id is 00000000000000000000000000000000, and 
all the job checkpoint files will be in the 
/flink/checkpoints/00000000000000000000000000000000/ folder; job checkpointing 
and restarts will be affected. What should I do to solve this problem?

> flink job does not recover from the latest checkpoint on native k8s
> --
>
> Key: FLINK-26248
> URL: https://issues.apache.org/jira/browse/FLINK-26248
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / Kubernetes
>Affects Versions: 1.12.0
>Reporter: zhisheng
>Priority: Major
>
> flink version: 1.12.0
> ha: zk
> mode: application mode
> native k8s
>  
> 1. If the Flink job starts from a savepoint and runs for a while, and I then 
> delete the JM pod, the job restarts from the old savepoint instead of the 
> latest checkpoint. This is not what I want.
> If I delete the TM pod, the job restarts from the latest checkpoint, which is 
> what I want.
>  
> 2. If the job starts without a savepoint and runs for a while, and I then 
> delete the JM pod, the job restarts from the earliest position instead of the 
> latest checkpoint.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Comment Edited] (FLINK-26248) flink job does not recover from the latest checkpoint on native k8s

2022-02-18 Thread zhisheng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17494573#comment-17494573
 ] 

zhisheng edited comment on FLINK-26248 at 2/18/22, 12:26 PM:
-

Thanks [~wangyang0918] for the quick response. Sorry, I just found that I had 
not set high-availability=zookeeper in the flink-conf.yaml file. 

 

I found that if I set HA, the job id is 00000000000000000000000000000000, and 
all the job checkpoint files will be in the 
/flink/checkpoints/00000000000000000000000000000000/ folder; job checkpointing 
and restarts will be affected. What should I do to solve this problem?


was (Author: zhisheng):
Thanks [~wangyang0918] for the quick response. Sorry, I just found that I had 
not set high-availability=zookeeper in the flink-conf.yaml file. 

 

I found that if I set HA, the job id is 00000000000000000000000000000000, and 
the job checkpoint folder will be the same; job checkpointing and restarts will 
be affected. What should I do to solve this problem?

> flink job does not recover from the latest checkpoint on native k8s
> --
>
> Key: FLINK-26248
> URL: https://issues.apache.org/jira/browse/FLINK-26248
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / Kubernetes
>Affects Versions: 1.12.0
>Reporter: zhisheng
>Priority: Major
>
> flink version: 1.12.0
> ha: zk
> mode: application mode
> native k8s
>  
> 1. If the Flink job starts from a savepoint and runs for a while, and I then 
> delete the JM pod, the job restarts from the old savepoint instead of the 
> latest checkpoint. This is not what I want.
> If I delete the TM pod, the job restarts from the latest checkpoint, which is 
> what I want.
>  
> 2. If the job starts without a savepoint and runs for a while, and I then 
> delete the JM pod, the job restarts from the earliest position instead of the 
> latest checkpoint.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Comment Edited] (FLINK-26248) flink job does not recover from the latest checkpoint on native k8s

2022-02-18 Thread zhisheng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17494573#comment-17494573
 ] 

zhisheng edited comment on FLINK-26248 at 2/18/22, 12:25 PM:
-

Thanks [~wangyang0918] for the quick response. Sorry, I just found that I had 
not set high-availability=zookeeper in the flink-conf.yaml file. 

 

I found that if I set HA, the job id is 00000000000000000000000000000000, and 
the job checkpoint folder will be the same; job checkpointing and restarts will 
be affected. What should I do to solve this problem?


was (Author: zhisheng):
Thanks [~wangyang0918] for the quick response. Sorry, I just found that I had 
not set high-availability=zookeeper in the flink-conf.yaml file. 

 

I found that if I set HA, the job id is 00000000000000000000000000000000, and 
the job checkpoint and HA folders will be the same; job checkpointing and 
restarts will be affected. What should I do to solve this problem?

> flink job does not recover from the latest checkpoint on native k8s
> --
>
> Key: FLINK-26248
> URL: https://issues.apache.org/jira/browse/FLINK-26248
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / Kubernetes
>Affects Versions: 1.12.0
>Reporter: zhisheng
>Priority: Major
>
> flink version: 1.12.0
> ha: zk
> mode: application mode
> native k8s
>  
> 1. If the Flink job starts from a savepoint and runs for a while, and I then 
> delete the JM pod, the job restarts from the old savepoint instead of the 
> latest checkpoint. This is not what I want.
> If I delete the TM pod, the job restarts from the latest checkpoint, which is 
> what I want.
>  
> 2. If the job starts without a savepoint and runs for a while, and I then 
> delete the JM pod, the job restarts from the earliest position instead of the 
> latest checkpoint.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-26248) flink job does not recover from the latest checkpoint on native k8s

2022-02-18 Thread zhisheng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17494573#comment-17494573
 ] 

zhisheng commented on FLINK-26248:
--

Thanks [~wangyang0918] for the quick response. Sorry, I just found that I had 
not set high-availability=zookeeper in the flink-conf.yaml file. 

 

I found that if I set HA, the job id is 00000000000000000000000000000000, and 
the job checkpoint and HA folders will be the same; job checkpointing and 
restarts will be affected. What should I do to solve this problem?
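
For completeness, a minimal flink-conf.yaml sketch of the ZooKeeper HA settings 
that were missing (the quorum address is a placeholder; the storageDir matches 
the start command posted below):

{code}
high-availability: zookeeper
high-availability.zookeeper.quorum: zk-host-1:2181,zk-host-2:2181,zk-host-3:2181
high-availability.storageDir: hdfs:///flink/ha/k8s
{code}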

> flink job does not recover from the latest checkpoint on native k8s
> --
>
> Key: FLINK-26248
> URL: https://issues.apache.org/jira/browse/FLINK-26248
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / Kubernetes
>Affects Versions: 1.12.0
>Reporter: zhisheng
>Priority: Major
>
> flink version: 1.12.0
> ha: zk
> mode: application mode
> native k8s
>  
> 1. If the Flink job starts from a savepoint and runs for a while, and I then 
> delete the JM pod, the job restarts from the old savepoint instead of the 
> latest checkpoint. This is not what I want.
> If I delete the TM pod, the job restarts from the latest checkpoint, which is 
> what I want.
>  
> 2. If the job starts without a savepoint and runs for a while, and I then 
> delete the JM pod, the job restarts from the earliest position instead of the 
> latest checkpoint.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (FLINK-26248) flink job does not recover from the latest checkpoint on native k8s

2022-02-18 Thread zhisheng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-26248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhisheng updated FLINK-26248:
-
Description: 
flink version: 1.12.0

ha: zk

mode: application mode

native k8s

 

1. If the Flink job starts from a savepoint and runs for a while, and I then 
delete the JM pod, the job restarts from the old savepoint instead of the 
latest checkpoint. This is not what I want.

If I delete the TM pod, the job restarts from the latest checkpoint, which is 
what I want.

 

2. If the job starts without a savepoint and runs for a while, and I then 
delete the JM pod, the job restarts from the earliest position instead of the 
latest checkpoint.

  was:
flink version: 1.12.0

ha: zk

mode: application mode

native k8s

 

The Flink job starts from a savepoint and runs for a while; then I delete the 
JM pod, and the job restarts from the old savepoint instead of the latest 
checkpoint. This is not what I want.

If I delete the TM pod, the job restarts from the latest checkpoint, which is 
what I want.


> flink job does not recover from the latest checkpoint on native k8s
> --
>
> Key: FLINK-26248
> URL: https://issues.apache.org/jira/browse/FLINK-26248
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / Kubernetes
>Affects Versions: 1.12.0
>Reporter: zhisheng
>Priority: Major
>
> flink version: 1.12.0
> ha: zk
> mode: application mode
> native k8s
>  
> 1. If the Flink job starts from a savepoint and runs for a while, and I then 
> delete the JM pod, the job restarts from the old savepoint instead of the 
> latest checkpoint. This is not what I want.
> If I delete the TM pod, the job restarts from the latest checkpoint, which is 
> what I want.
>  
> 2. If the job starts without a savepoint and runs for a while, and I then 
> delete the JM pod, the job restarts from the earliest position instead of the 
> latest checkpoint.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-26248) flink job does not recover from the latest checkpoint on native k8s

2022-02-18 Thread zhisheng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17494550#comment-17494550
 ] 

zhisheng commented on FLINK-26248:
--

The JM log after deleting the TM pod:

 
{code:java}
2022-02-18 11:12:39,319 INFO  org.apache.flink.runtime.checkpoint.CheckpointCoordinator [] - Triggering checkpoint 910 (type=CHECKPOINT) @ 1645182759315 for job 7422577421a74b52bb14d76b076728f0.
2022-02-18 11:12:40,204 INFO  org.apache.flink.runtime.checkpoint.CheckpointCoordinator [] - Completed checkpoint 910 for job 7422577421a74b52bb14d76b076728f0 (99869 bytes in 879 ms).
2022-02-18 11:12:41,319 INFO  org.apache.flink.runtime.checkpoint.CheckpointCoordinator [] - Triggering checkpoint 911 (type=CHECKPOINT) @ 1645182761315 for job 7422577421a74b52bb14d76b076728f0.
2022-02-18 11:12:41,703 WARN  akka.remote.transport.netty.NettyTransport [] - Remote connection to [/10.74.2.229:48000] failed with java.io.IOException: Connection reset by peer
2022-02-18 11:12:41,709 WARN  akka.remote.ReliableDeliverySupervisor [] - Association with remote system [akka.tcp://flink@10.74.2.229:6122] has failed, address is now gated for [50] ms. Reason: [Disassociated] 
2022-02-18 11:12:41,713 WARN  akka.remote.ReliableDeliverySupervisor [] - Association with remote system [akka.tcp://flink-metrics@10.74.2.229:40339] has failed, address is now gated for [50] ms. Reason: [Disassociated] 
2022-02-18 11:12:42,647 WARN  org.apache.flink.runtime.resourcemanager.active.ActiveResourceManager [] - Worker state-machine-cluster-taskmanager-1-1 is terminated. Diagnostics: Pod terminated, container termination statuses: [flink-task-manager(exitCode=143, reason=Error, message=null)]
2022-02-18 11:12:42,651 WARN  org.apache.flink.runtime.resourcemanager.active.ActiveResourceManager [] - Closing TaskExecutor connection state-machine-cluster-taskmanager-1-1 because: Pod terminated, container termination statuses: [flink-task-manager(exitCode=143, reason=Error, message=null)]
2022-02-18 11:12:42,666 WARN  org.apache.flink.runtime.executiongraph.ExecutionGraph [] - Source: Custom Source (1/1) (044fe8cb5885eb314fae6f509e0bab3b) switched from RUNNING to FAILED on state-machine-cluster-taskmanager-1-1 @ 10.74.2.229 (dataPort=42891).
java.lang.Exception: Pod terminated, container termination statuses: [flink-task-manager(exitCode=143, reason=Error, message=null)]
    at org.apache.flink.runtime.resourcemanager.active.ActiveResourceManager.onWorkerTerminated(ActiveResourceManager.java:219) ~[flink-dist_2.11-1.12.0.jar:1.12.0]
    at org.apache.flink.kubernetes.KubernetesResourceManagerDriver.lambda$terminatedPodsInMainThread$3(KubernetesResourceManagerDriver.java:278) ~[flink-dist_2.11-1.12.0.jar:1.12.0]
    at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRunAsync(AkkaRpcActor.java:404) ~[flink-dist_2.11-1.12.0.jar:1.12.0]
    at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:197) ~[flink-dist_2.11-1.12.0.jar:1.12.0]
    at org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:74) ~[flink-dist_2.11-1.12.0.jar:1.12.0]
    at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:154) ~[flink-dist_2.11-1.12.0.jar:1.12.0]
    at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:26) [flink-dist_2.11-1.12.0.jar:1.12.0]
    at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:21) [flink-dist_2.11-1.12.0.jar:1.12.0]
    at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:123) [flink-dist_2.11-1.12.0.jar:1.12.0]
    at akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:21) [flink-dist_2.11-1.12.0.jar:1.12.0]
    at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:170) [flink-dist_2.11-1.12.0.jar:1.12.0]
    at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171) [flink-dist_2.11-1.12.0.jar:1.12.0]
    at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171) [flink-dist_2.11-1.12.0.jar:1.12.0]
    at akka.actor.Actor$class.aroundReceive(Actor.scala:517) [flink-dist_2.11-1.12.0.jar:1.12.0]
    at akka.actor.AbstractActor.aroundReceive(AbstractActor.scala:225) [flink-dist_2.11-1.12.0.jar:1.12.0]
    at akka.actor.ActorCell.receiveMessage(ActorCell.scala:592) [flink-dist_2.11-1.12.0.jar:1.12.0]
    at akka.actor.ActorCell.invoke(ActorCell.scala:561) [flink-dist_2.11-1.12.0.jar:1.12.0]
    at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:258) [flink-dist_2.11-1.12.0.jar:1.12.0]
    at akka.dispatch.Mailbox.run(Mailbox.scala:225) [flink-dist_2.11-1.12.0.jar:1.12.0]
    at akka.dispatch.Mailbox.exec(Mailbox.scala:235) [flink-dist_2.11-1.12.0.jar:1.12.0]
    at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260) [flink-dist_2.11-1.12.0.jar:1.12.0]
{code}

[jira] [Commented] (FLINK-26248) flink job does not recover from the latest checkpoint on native k8s

2022-02-18 Thread zhisheng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-26248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17494543#comment-17494543
 ] 

zhisheng commented on FLINK-26248:
--

The job start command:
{code:java}
./bin/flink run-application -p 1 -t kubernetes-application \
  -Dkubernetes.cluster-id=state-machine-cluster \
  -Dtaskmanager.memory.process.size=1024m \
  -Dkubernetes.taskmanager.cpu=0.5 \
  -Dtaskmanager.numberOfTaskSlots=1 \
  -Dkubernetes.container.image=harbor.cn/flink/statemachine:v0.0.6 \
  -Dkubernetes.namespace=hke-flink \
  -Dkubernetes.jobmanager.service-account=flink \
  -Dkubernetes.container.image.pull-secrets=docker-registry-test \
  -Dkubernetes.jobmanager.node-selector=kubernetes.io/role:flink-node \
  -Dkubernetes.taskmanager.node-selector=kubernetes.io/role:flink-node \
  -Dkubernetes.rest-service.exposed.type=NodePort \
  -Dhigh-availability.storageDir=hdfs:///flink/ha/k8s \
  local:///opt/flink/usrlib/StateMachineExample.jar
{code}
 

The JM log:

 
{code:java}
2022-02-18 10:41:44,930 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint        [] - 
2022-02-18 10:41:44,987 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint        [] -  Starting KubernetesApplicationClusterEntrypoint (Version: 1.12.0, Scala: 2.11, Rev:a41d55f, Date:2021-12-09T10:38:36+01:00)
2022-02-18 10:41:44,987 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint        [] -  OS current user: flink
2022-02-18 10:41:45,590 WARN  org.apache.hadoop.util.NativeCodeLoader                     [] - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2022-02-18 10:41:45,714 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint        [] -  Current Hadoop/Kerberos user: flink
2022-02-18 10:41:45,714 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint        [] -  JVM: OpenJDK 64-Bit Server VM - Oracle Corporation - 1.8/25.322-b06
2022-02-18 10:41:45,715 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint        [] -  Maximum heap size: 1024 MiBytes
2022-02-18 10:41:45,715 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint        [] -  JAVA_HOME: /usr/local/openjdk-8
2022-02-18 10:41:45,717 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint        [] -  Hadoop version: 2.7.3
2022-02-18 10:41:45,717 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint        [] -  JVM Options:
2022-02-18 10:41:45,717 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint        [] -     -Xmx1073741824
2022-02-18 10:41:45,717 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint        [] -     -Xms1073741824
2022-02-18 10:41:45,717 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint        [] -     -XX:MaxMetaspaceSize=268435456
2022-02-18 10:41:45,717 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint        [] -     -Denv=dev
2022-02-18 10:41:45,717 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint        [] -     -Dlog4j2.formatMsgNoLookups=true
2022-02-18 10:41:45,718 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint        [] -     -Xloggc:./gc-%t.log
2022-02-18 10:41:45,718 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint        [] -     -XX:+PrintGCDetails
2022-02-18 10:41:45,718 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint        [] -     -XX:-OmitStackTraceInFastThrow
2022-02-18 10:41:45,718 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint        [] -     -XX:+PrintGCTimeStamps
2022-02-18 10:41:45,718 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint        [] -     -XX:+PrintHeapAtGC
2022-02-18 10:41:45,718 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint        [] -     -XX:+PrintGCDateStamps
2022-02-18 10:41:45,718 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint        [] -     -XX:+UseGCLogFileRotation
2022-02-18 10:41:45,718 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint        [] -     -XX:+PrintReferenceGC
2022-02-18 10:41:45,718 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint        [] -     -XX:+PrintTenuringDistribution
2022-02-18 10:41:45,718 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint        [] -     -XX:NumberOfGCLogFiles=5
2022-02-18 10:41:45,718 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint        [] -     -XX:GCLogFileSize=20M
2022-02-18 10:41:45,718 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint        [] -     -XX:+PrintPromotionFailure
2022-02-18 10:41:45,718 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint        [] -     -XX:+PrintGCCause
2022-02-18 10:41:45,718 INFO  org.apache.flink.runtime.entrypoint.ClusterEntrypoint        [] -     -XX:+UseG1GC
2022-02-18 10:41:45,719 INFO  
{code}

[jira] [Created] (FLINK-26248) flink job does not recover from the latest checkpoint on native k8s

2022-02-18 Thread zhisheng (Jira)
zhisheng created FLINK-26248:


 Summary: flink job does not recover from the latest checkpoint on native k8s
 Key: FLINK-26248
 URL: https://issues.apache.org/jira/browse/FLINK-26248
 Project: Flink
  Issue Type: Bug
  Components: Deployment / Kubernetes
Affects Versions: 1.12.0
Reporter: zhisheng


flink version: 1.12.0

ha: zk

mode: application mode

native k8s

 

The Flink job starts from a savepoint and runs for a while; then I delete the 
JM pod, and the job restarts from the old savepoint instead of the latest 
checkpoint. This is not what I want.

If I delete the TM pod, the job restarts from the latest checkpoint, which is 
what I want.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-25057) Streaming File Sink writing to HDFS

2021-11-26 Thread zhisheng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17449438#comment-17449438
 ] 

zhisheng commented on FLINK-25057:
--

This looks like https://issues.apache.org/jira/browse/FLINK-23725; I also hit 
this case.

> Streaming File Sink writing to HDFS
> 
>
> Key: FLINK-25057
> URL: https://issues.apache.org/jira/browse/FLINK-25057
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / FileSystem
>Affects Versions: 1.12.1
>Reporter: hanjie
>Priority: Major
>
> When I first start the Flink task:
>     *First part file example:*
>        part-0-0
>        part-0-1
>       .part-0-2.inprogress.952eb958-dac9-4f2c-b92f-9084ed536a1c
>   I cancel the Flink task, then restart it without a savepoint or checkpoint 
> and let it run for a while.
>    *Second part file example:*
>           part-0-0
>           part-0-1
>           .part-0-2.inprogress.952eb958-dac9-4f2c-b92f-9084ed536a1c
>           .part-0-0.inprogress.0e2f234b-042d-4232-a5f7-c980f04ca82d
>     '.part-0-2.inprogress.952eb958-dac9-4f2c-b92f-9084ed536a1c' is not 
> renamed, and the bucket index starts from zero again.
>     Looking at the related code, restarting the task needs a savepoint or 
> checkpoint. I chose a savepoint, and the above problem disappeared in my 
> third test.
>     But if I use an expired savepoint, the task throws an exception:
>     java.io.FileNotFoundException: File does not exist: 
> /ns-hotel/hotel_sa_log/stream/sa_cpc_ad_log_list_detail_dwd/2021-11-25/.part-6-1537.inprogress.cd9c756a-1756-4dc5-9325-485fe99a2803
>         at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1309)
>         at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1301)
>         at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>         at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1317)
>         at org.apache.hadoop.fs.FileSystem.resolvePath(FileSystem.java:752)
>         at org.apache.hadoop.fs.FilterFileSystem.resolvePath(FilterFileSystem.java:153)
>         at org.apache.hadoop.fs.viewfs.ChRootedFileSystem.resolvePath(ChRootedFileSystem.java:373)
>         at org.apache.hadoop.fs.viewfs.ViewFileSystem.resolvePath(ViewFileSystem.java:243)
>         at org.apache.flink.runtime.fs.hdfs.HadoopRecoverableFsDataOutputStream.revokeLeaseByFileSystem(HadoopRecoverableFsDataOutputStream.java:327)
>         at org.apache.flink.runtime.fs.hdfs.HadoopRecoverableFsDataOutputStream.safelyTruncateFile(HadoopRecoverableFsDataOutputStream.java:163)
>         at org.apache.flink.runtime.fs.hdfs.HadoopRecoverableFsDataOutputStream.<init>(HadoopRecoverableFsDataOutputStream.java:88)
>         at org.apache.flink.runtime.fs.hdfs.HadoopRecoverableWriter.recover(HadoopRecoverableWriter.java:86)
>         at org.apache.flink.streaming.api.functions.sink.filesystem.OutputStreamBasedPartFileWriter$OutputStreamBasedBucketWriter.resumeInProgressFileFrom(OutputStreamBasedPartFileWriter.java:104)
>         at org.apache.flink.streaming.api.functions.sink.filesyst
>   The task sets 'execution.checkpointing.interval' to 1 min, and I trigger a 
> savepoint every five minutes.
>    Asking everyone for a possible solution.
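
For reference, the recovery path under discussion is resuming from a still 
existing savepoint or retained checkpoint; a sketch with a placeholder path:

{code}
# Restart from a savepoint so in-progress part files can be recovered instead
# of the bucket index starting from zero again.
./bin/flink run -s hdfs:///savepoints/savepoint-abc123 my-job.jar
{code}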



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-24528) Flink HBase Async Lookup throws NPE if rowkey is null

2021-10-12 Thread zhisheng (Jira)
zhisheng created FLINK-24528:


 Summary: Flink HBase Async Lookup throws NPE if rowkey is null
 Key: FLINK-24528
 URL: https://issues.apache.org/jira/browse/FLINK-24528
 Project: Flink
  Issue Type: Bug
  Components: Connectors / HBase
Affects Versions: 1.13.0
Reporter: zhisheng


With a Flink SQL DDL-created HBase table, if 'lookup.async' = 'true' is set and 
the rowkey is null, an NPE may be thrown:
{code:java}

2021-10-12 21:11:07,100 INFO  org.apache.flink.connector.hbase2.source.HBaseRowDataAsyncLookupFunction [] - start close ...
2021-10-12 21:11:07,100 INFO  org.apache.flink.connector.hbase2.source.HBaseRowDataAsyncLookupFunction [] - start close ...
2021-10-12 21:11:07,103 WARN  org.apache.flink.runtime.taskmanager.Task                    [] - LookupJoin(table=[default_catalog.default_database.dim_user_guid_relation], joinType=[LeftOuterJoin], async=[true], lookup=[rowkey=userGuid], select=[userGuid, last_time, rowkey, cf]) -> Calc(select=[userGuid AS user_guid, cf.user_new_id AS user_new_id, last_time AS usr_pwtx_ectx_driver_last_seek_order_time, _UTF-16LE'prfl.usr' AS metric]) -> Sink: Sink(table=[default_catalog.default_database.print_table], fields=[user_guid, user_new_id, usr_pwtx_ectx_driver_last_seek_order_time, metric]) (1/1)#0 (06bf3d7b0c341101e070796e20f7e571) switched from RUNNING to FAILED.
java.lang.NullPointerException: null
    at org.apache.flink.hbase.shaded.org.apache.hadoop.hbase.client.RawAsyncTableImpl.get(RawAsyncTableImpl.java:249) ~[flink-sql-connector-hbase-2.2_2.11-1.12.0.jar:1.12.0]
    at org.apache.flink.hbase.shaded.org.apache.hadoop.hbase.client.AsyncTableImpl.get(AsyncTableImpl.java:96) ~[flink-sql-connector-hbase-2.2_2.11-1.12.0.jar:1.12.0]
    at org.apache.flink.connector.hbase2.source.HBaseRowDataAsyncLookupFunction.fetchResult(HBaseRowDataAsyncLookupFunction.java:187) ~[flink-sql-connector-hbase-2.2_2.11-1.12.0.jar:1.12.0]
    at org.apache.flink.connector.hbase2.source.HBaseRowDataAsyncLookupFunction.eval(HBaseRowDataAsyncLookupFunction.java:174) ~[flink-sql-connector-hbase-2.2_2.11-1.12.0.jar:1.12.0]
    at LookupFunction$24.asyncInvoke(Unknown Source) ~[?:?]
    at org.apache.flink.table.runtime.operators.join.lookup.AsyncLookupJoinRunner.asyncInvoke(AsyncLookupJoinRunner.java:139) ~[flink-table-blink_2.11-1.12.0.jar:1.12.0]
    at org.apache.flink.table.runtime.operators.join.lookup.AsyncLookupJoinRunner.asyncInvoke(AsyncLookupJoinRunner.java:53) ~[flink-table-blink_2.11-1.12.0.jar:1.12.0]
    at org.apache.flink.streaming.api.operators.async.AsyncWaitOperator.processElement(AsyncWaitOperator.java:195) ~[flink-dist_2.11-1.12.0.jar:1.12.0]
    at org.apache.flink.streaming.runtime.tasks.OneInputStreamTask$StreamTaskNetworkOutput.emitRecord(OneInputStreamTask.java:193) ~[flink-dist_2.11-1.12.0.jar:1.12.0]
    at org.apache.flink.streaming.runtime.io.StreamTaskNetworkInput.processElement(StreamTaskNetworkInput.java:179) ~[flink-dist_2.11-1.12.0.jar:1.12.0]
    at org.apache.flink.streaming.runtime.io.StreamTaskNetworkInput.emitNext(StreamTaskNetworkInput.java:152) ~[flink-dist_2.11-1.12.0.jar:1.12.0]
    at org.apache.flink.streaming.runtime.io.StreamOneInputProcessor.processInput(StreamOneInputProcessor.java:67) ~[flink-dist_2.11-1.12.0.jar:1.12.0]
    at org.apache.flink.streaming.runtime.tasks.StreamTask.processInput(StreamTask.java:372) ~[flink-dist_2.11-1.12.0.jar:1.12.0]
    at org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:186) ~[flink-dist_2.11-1.12.0.jar:1.12.0]
    at org.apache.flink.streaming.runtime.tasks.StreamTask.runMailboxLoop(StreamTask.java:575) ~[flink-dist_2.11-1.12.0.jar:1.12.0]
    at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:539) ~[flink-dist_2.11-1.12.0.jar:1.12.0]
    at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:722) [flink-dist_2.11-1.12.0.jar:1.12.0]
    at org.apache.flink.runtime.taskmanager.Task.run(Task.java:547) [flink-dist_2.11-1.12.0.jar:1.12.0]
    at java.lang.Thread.run(Thread.java:748) [?:1.8.0_201]
    Suppressed: java.lang.Exception: java.lang.NoClassDefFoundError: org/apache/flink/hbase/shaded/org/apache/commons/io/IOUtils
        at org.apache.flink.streaming.runtime.tasks.StreamTask.runAndSuppressThrowable(StreamTask.java:723) ~[flink-dist_2.11-1.12.0.jar:1.12.0]
        at org.apache.flink.streaming.runtime.tasks.StreamTask.cleanUpInvoke(StreamTask.java:643) ~[flink-dist_2.11-1.12.0.jar:1.12.0]
        at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:552) ~[flink-dist_2.11-1.12.0.jar:1.12.0]
        at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:722) [flink-dist_2.11-1.12.0.jar:1.12.0]
        at org.apache.flink.runtime.taskmanager.Task.run(Task.java:547) [flink-dist_2.11-1.12.0.jar:1.12.0]
        at java.lang.Thread.run(Thread.java:748) [?:1.8.0_201]
    Caused by: java.lang.NoClassDefFoundError: 
{code}
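
A sketch of the obvious guard, following the eval()/fetchResult() names visible 
in the stack trace (the signature is approximated from those frames; the actual 
fix may differ):

{code:java}
import java.util.Collection;
import java.util.Collections;
import java.util.concurrent.CompletableFuture;

import org.apache.flink.table.data.RowData;

// Sketch: in HBaseRowDataAsyncLookupFunction, a null rowkey can never match,
// so complete the lookup with an empty result instead of calling HBase.
public void eval(CompletableFuture<Collection<RowData>> future, Object rowKey) {
    if (rowKey == null) {
        future.complete(Collections.emptyList());
        return;
    }
    fetchResult(future, 0, rowKey); // existing retry entry point per the stack trace
}
{code}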

[jira] [Commented] (FLINK-20527) Support side output for late events in Flink SQL

2021-03-22 Thread zhisheng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17305921#comment-17305921
 ] 

zhisheng commented on FLINK-20527:
--

+1 for this feature

> Support side output for late events in Flink SQL
> 
>
> Key: FLINK-20527
> URL: https://issues.apache.org/jira/browse/FLINK-20527
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / API
>Affects Versions: 1.11.2
>Reporter: tonychan
>Priority: Major
>
> The DataStream API has watermarks and can add a side output; how can we add a 
> side output for late events with Flink SQL watermarks? 
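
For comparison, this is what the requested capability looks like in the 
DataStream API (a sketch; the event/result types and the aggregate function are 
illustrative):

{code:java}
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator;
import org.apache.flink.streaming.api.windowing.assigners.TumblingEventTimeWindows;
import org.apache.flink.streaming.api.windowing.time.Time;
import org.apache.flink.util.OutputTag;

// Route events that arrive after the watermark to a side output.
OutputTag<Event> lateTag = new OutputTag<Event>("late-events") {};

SingleOutputStreamOperator<Result> result = events
        .keyBy(e -> e.getKey())
        .window(TumblingEventTimeWindows.of(Time.minutes(5)))
        .sideOutputLateData(lateTag)
        .aggregate(new MyAggregate());

DataStream<Event> lateEvents = result.getSideOutput(lateTag);
{code}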



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-21493) Flink Command Line Interface Document can add command usage

2021-02-24 Thread zhisheng (Jira)
zhisheng created FLINK-21493:


 Summary: Flink Command Line Interface Document can add command 
usage
 Key: FLINK-21493
 URL: https://issues.apache.org/jira/browse/FLINK-21493
 Project: Flink
  Issue Type: Improvement
  Components: Documentation
Affects Versions: 1.12.0
Reporter: zhisheng


The Flink 1.11 documentation has command usage: 
[https://ci.apache.org/projects/flink/flink-docs-release-1.11/ops/cli.html#usage]

But in Flink 1.12 the usage was removed and only the CLI actions are listed: 
[https://ci.apache.org/projects/flink/flink-docs-release-1.12/deployment/cli.html#cli-actions]

Sometimes finding an action's parameters and usage information is inconvenient; 
could the community add the original usage documentation back?
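
In the meantime, the same usage text can be printed by the CLI itself:

{code}
# Running the client without arguments lists the available actions;
# per-action usage is printed with --help.
./bin/flink
./bin/flink run --help
{code}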



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-21219) FlinkKafkaConsumer ignores offset overrides for new topics when restoring from savepoint.

2021-02-01 Thread zhisheng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-21219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17276292#comment-17276292
 ] 

zhisheng commented on FLINK-21219:
--

Duplicate of https://issues.apache.org/jira/browse/FLINK-10806 and 
https://issues.apache.org/jira/browse/FLINK-16865

> FlinkKafkaConsumer ignores offset overrides for new topics when restoring 
> from savepoint.
> -
>
> Key: FLINK-21219
> URL: https://issues.apache.org/jira/browse/FLINK-21219
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.12.1
>Reporter: Dominik Wosiński
>Priority: Major
>
> Currently, when FlinkKafkaConsumer is restored from savepoint, the following 
> code will handle topics that do not have offsets committed (for example if a 
> new topic was added):
> {noformat}
> if (restoredState != null) {
>     for (KafkaTopicPartition partition : allPartitions) {
>         if (!restoredState.containsKey(partition)) {
>             restoredState.put(partition, KafkaTopicPartitionStateSentinel.EARLIEST_OFFSET);
>         }
>     }
> }
> {noformat}
>  
> So if we have a KafkaConsumer with topicPattern and the pattern is changed, 
> new topis will always start from earliest offset, even if originally the 
> setting was different.
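
For reference, a minimal sketch of a consumer setup affected by this; the 
topic pattern, servers, and group id are illustrative, not from the ticket:
{code:java}
import java.util.Properties;
import java.util.regex.Pattern;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;

Properties props = new Properties();
props.setProperty("bootstrap.servers", "localhost:9092");
props.setProperty("group.id", "demo-group");

// Subscribes by pattern, so topics created later can match as well.
FlinkKafkaConsumer<String> consumer =
        new FlinkKafkaConsumer<>(
                Pattern.compile("orders-.*"),
                new SimpleStringSchema(),
                props);

// This override only takes effect when the job starts WITHOUT restored state.
// On restore, partitions absent from the snapshot are seeded with
// EARLIEST_OFFSET by the code quoted above, so newly matched topics ignore it.
consumer.setStartFromLatest();
{code}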



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-21143) 【runtime】flink job use the lib jars instead of the `yarn.provided.lib.dirs` config jars

2021-01-27 Thread zhisheng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-21143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17272699#comment-17272699
 ] 

zhisheng commented on FLINK-21143:
--

First, I configured `execution.target` in flink-conf.yaml; it is set to `yarn-per-job`.

 

!image-2021-01-27-16-55-06-104.png|width=645,height=433!

 

Second, the test throws an exception in the Flink SQL client:

!image-2021-01-27-16-56-47-400.png|width=586,height=346!

The following are the files in lib/: 

!image-2021-01-27-16-58-43-372.png|width=764,height=347!

The following are the files in HDFS: 

!image-2021-01-27-17-00-38-661.png|width=1062,height=456!

 

> Are you sure that it could work for the old version without having connector 
> jar in the client lib directory?

 

If the Flink client lib directory does not have any connector jar, it may 
throw an exception like the one in the second picture above.

 

HDFS        lib directory    Status

new jar     old jar          job can submit and run, but with an exception

new jar     no jar           job cannot submit

new jar     new jar          job can submit and run, works well

 

> 【runtime】flink job use the lib jars instead of the `yarn.provided.lib.dirs` 
> config jars
> ---
>
> Key: FLINK-21143
> URL: https://issues.apache.org/jira/browse/FLINK-21143
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / YARN, Runtime / Configuration
>Affects Versions: 1.12.0
>Reporter: zhisheng
>Priority: Major
> Attachments: flink-deploy-sql-client-.log, 
> image-2021-01-27-16-53-11-255.png, image-2021-01-27-16-55-06-104.png, 
> image-2021-01-27-16-56-47-400.png, image-2021-01-27-16-58-43-372.png, 
> image-2021-01-27-17-00-01-553.png, image-2021-01-27-17-00-38-661.png
>
>
> Flink 1.12.0: I used the `yarn.provided.lib.dirs` config to speed up job 
> startup, so I uploaded all jars to HDFS. But when I update the jars in HDFS 
> (not in flink-1.12.0/lib/), newly submitted jobs still use the lib/ jars 
> instead of the new HDFS jars.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-21143) 【runtime】flink job use the lib jars instead of the `yarn.provided.lib.dirs` config jars

2021-01-27 Thread zhisheng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhisheng updated FLINK-21143:
-
Attachment: image-2021-01-27-17-00-38-661.png

> 【runtime】flink job use the lib jars instead of the `yarn.provided.lib.dirs` 
> config jars
> ---
>
> Key: FLINK-21143
> URL: https://issues.apache.org/jira/browse/FLINK-21143
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / YARN, Runtime / Configuration
>Affects Versions: 1.12.0
>Reporter: zhisheng
>Priority: Major
> Attachments: flink-deploy-sql-client-.log, 
> image-2021-01-27-16-53-11-255.png, image-2021-01-27-16-55-06-104.png, 
> image-2021-01-27-16-56-47-400.png, image-2021-01-27-16-58-43-372.png, 
> image-2021-01-27-17-00-01-553.png, image-2021-01-27-17-00-38-661.png
>
>
> Flink 1.12.0: I used the `yarn.provided.lib.dirs` config to speed up job 
> startup, so I uploaded all jars to HDFS. But when I update the jars in HDFS 
> (not in flink-1.12.0/lib/), newly submitted jobs still use the lib/ jars 
> instead of the new HDFS jars.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-21143) 【runtime】flink job use the lib jars instead of the `yarn.provided.lib.dirs` config jars

2021-01-27 Thread zhisheng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhisheng updated FLINK-21143:
-
Attachment: image-2021-01-27-17-00-01-553.png

> 【runtime】flink job use the lib jars instead of the `yarn.provided.lib.dirs` 
> config jars
> ---
>
> Key: FLINK-21143
> URL: https://issues.apache.org/jira/browse/FLINK-21143
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / YARN, Runtime / Configuration
>Affects Versions: 1.12.0
>Reporter: zhisheng
>Priority: Major
> Attachments: flink-deploy-sql-client-.log, 
> image-2021-01-27-16-53-11-255.png, image-2021-01-27-16-55-06-104.png, 
> image-2021-01-27-16-56-47-400.png, image-2021-01-27-16-58-43-372.png, 
> image-2021-01-27-17-00-01-553.png
>
>
> Flink 1.12.0: I used the `yarn.provided.lib.dirs` config to speed up job 
> startup, so I uploaded all jars to HDFS. But when I update the jars in HDFS 
> (not in flink-1.12.0/lib/), newly submitted jobs still use the lib/ jars 
> instead of the new HDFS jars.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-21143) 【runtime】flink job use the lib jars instead of the `yarn.provided.lib.dirs` config jars

2021-01-27 Thread zhisheng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhisheng updated FLINK-21143:
-
Attachment: image-2021-01-27-16-58-43-372.png

> 【runtime】flink job use the lib jars instead of the `yarn.provided.lib.dirs` 
> config jars
> ---
>
> Key: FLINK-21143
> URL: https://issues.apache.org/jira/browse/FLINK-21143
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / YARN, Runtime / Configuration
>Affects Versions: 1.12.0
>Reporter: zhisheng
>Priority: Major
> Attachments: flink-deploy-sql-client-.log, 
> image-2021-01-27-16-53-11-255.png, image-2021-01-27-16-55-06-104.png, 
> image-2021-01-27-16-56-47-400.png, image-2021-01-27-16-58-43-372.png
>
>
> Flink 1.12.0: I used the `yarn.provided.lib.dirs` config to speed up job 
> startup, so I uploaded all jars to HDFS. But when I update the jars in HDFS 
> (not in flink-1.12.0/lib/), newly submitted jobs still use the lib/ jars 
> instead of the new HDFS jars.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-21143) 【runtime】flink job use the lib jars instead of the `yarn.provided.lib.dirs` config jars

2021-01-27 Thread zhisheng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhisheng updated FLINK-21143:
-
Attachment: image-2021-01-27-16-56-47-400.png

> 【runtime】flink job use the lib jars instead of the `yarn.provided.lib.dirs` 
> config jars
> ---
>
> Key: FLINK-21143
> URL: https://issues.apache.org/jira/browse/FLINK-21143
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / YARN, Runtime / Configuration
>Affects Versions: 1.12.0
>Reporter: zhisheng
>Priority: Major
> Attachments: flink-deploy-sql-client-.log, 
> image-2021-01-27-16-53-11-255.png, image-2021-01-27-16-55-06-104.png, 
> image-2021-01-27-16-56-47-400.png
>
>
> Flink 1.12.0: I used the `yarn.provided.lib.dirs` config to speed up job 
> startup, so I uploaded all jars to HDFS. But when I update the jars in HDFS 
> (not in flink-1.12.0/lib/), newly submitted jobs still use the lib/ jars 
> instead of the new HDFS jars.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-21143) 【runtime】flink job use the lib jars instead of the `yarn.provided.lib.dirs` config jars

2021-01-27 Thread zhisheng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhisheng updated FLINK-21143:
-
Attachment: image-2021-01-27-16-55-06-104.png

> 【runtime】flink job use the lib jars instead of the `yarn.provided.lib.dirs` 
> config jars
> ---
>
> Key: FLINK-21143
> URL: https://issues.apache.org/jira/browse/FLINK-21143
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / YARN, Runtime / Configuration
>Affects Versions: 1.12.0
>Reporter: zhisheng
>Priority: Major
> Attachments: flink-deploy-sql-client-.log, 
> image-2021-01-27-16-53-11-255.png, image-2021-01-27-16-55-06-104.png
>
>
> Flink 1.12.0: I used the `yarn.provided.lib.dirs` config to speed up job 
> startup, so I uploaded all jars to HDFS. But when I update the jars in HDFS 
> (not in flink-1.12.0/lib/), newly submitted jobs still use the lib/ jars 
> instead of the new HDFS jars.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-21143) 【runtime】flink job use the lib jars instead of the `yarn.provided.lib.dirs` config jars

2021-01-27 Thread zhisheng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhisheng updated FLINK-21143:
-
Attachment: image-2021-01-27-16-53-11-255.png

> 【runtime】flink job use the lib jars instead of the `yarn.provided.lib.dirs` 
> config jars
> ---
>
> Key: FLINK-21143
> URL: https://issues.apache.org/jira/browse/FLINK-21143
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / YARN, Runtime / Configuration
>Affects Versions: 1.12.0
>Reporter: zhisheng
>Priority: Major
> Attachments: flink-deploy-sql-client-.log, 
> image-2021-01-27-16-53-11-255.png
>
>
> Flink 1.12.0: I used the `yarn.provided.lib.dirs` config to speed up job 
> startup, so I uploaded all jars to HDFS. But when I update the jars in HDFS 
> (not in flink-1.12.0/lib/), newly submitted jobs still use the lib/ jars 
> instead of the new HDFS jars.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-21143) 【runtime】flink job use the lib jars instead of the `yarn.provided.lib.dirs` config jars

2021-01-26 Thread zhisheng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-21143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17272600#comment-17272600
 ] 

zhisheng commented on FLINK-21143:
--

[~fly_in_gis] yes, the 
/data/HDATA/yarn/local/filecache/40/flink-sql-connector-hbase-2.2_2.11-1.12.0.jar
 file is a local jar in the YARN local cache directory. Also, it is a public 
local resource.

The file is the same as 
hdfs:///flink/composite-lib/flink-1.12.0/flink-sql-connector-hbase-2.2_2.11-1.12.0.jar

Maybe you can run a test, for example:

1. Don't add any connector jar to the Flink lib/ directory.

2. Configure yarn.provided.lib.dirs in flink-conf.yaml and add the Flink SQL 
HBase connector to HDFS.

3. Start the SQL client, create an HBase table, and then select from the 
table. It may throw an exception like: [ERROR] Could not execute SQL statement. 
Reason:
org.apache.flink.table.api.ValidationException: Could not find any factory for 
identifier 'hbase-2.2' that implements 
'org.apache.flink.table.factories.DynamicTableFactory' in the classpath.

If I add flink-sql-connector-hbase-2.2_2.11-1.12.0.jar to lib/, it runs, so I 
suspect the Flink job uses the lib jars instead of the `yarn.provided.lib.dirs` 
jars.

 

> 【runtime】flink job use the lib jars instead of the `yarn.provided.lib.dirs` 
> config jars
> ---
>
> Key: FLINK-21143
> URL: https://issues.apache.org/jira/browse/FLINK-21143
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / YARN, Runtime / Configuration
>Affects Versions: 1.12.0
>Reporter: zhisheng
>Priority: Major
> Attachments: flink-deploy-sql-client-.log
>
>
> Flink 1.12.0: I used the `yarn.provided.lib.dirs` config to speed up job 
> startup, so I uploaded all jars to HDFS. But when I update the jars in HDFS 
> (not in flink-1.12.0/lib/), newly submitted jobs still use the lib/ jars 
> instead of the new HDFS jars.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-21143) 【runtime】flink job use the lib jars instead of the `yarn.provided.lib.dirs` config jars

2021-01-26 Thread zhisheng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhisheng updated FLINK-21143:
-
Attachment: (was: flink-deploy-sql-client.log)

> 【runtime】flink job use the lib jars instead of the `yarn.provided.lib.dirs` 
> config jars
> ---
>
> Key: FLINK-21143
> URL: https://issues.apache.org/jira/browse/FLINK-21143
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / YARN, Runtime / Configuration
>Affects Versions: 1.12.0
>Reporter: zhisheng
>Priority: Major
> Attachments: flink-deploy-sql-client-.log
>
>
> Flink 1.12.0: I used the `yarn.provided.lib.dirs` config to speed up job 
> startup, so I uploaded all jars to HDFS. But when I update the jars in HDFS 
> (not in flink-1.12.0/lib/), newly submitted jobs still use the lib/ jars 
> instead of the new HDFS jars.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-21143) 【runtime】flink job use the lib jars instead of the `yarn.provided.lib.dirs` config jars

2021-01-26 Thread zhisheng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-21143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17272524#comment-17272524
 ] 

zhisheng commented on FLINK-21143:
--

[~fly_in_gis] The job can be submitted successfully, and after submitting, I 
checked that the file handles of the corresponding TaskManager process do point 
to files localized from HDFS, e.g.: 
/data/HDATA/yarn/local/filecache/40/flink-sql-connector-hbase-2.2_2.11-1.12.0.jar

But it seems the business logic of the job still does not read the latest jar. 
Once I keep the jar under lib/ consistent with the jar in HDFS, the problem is 
gone, so I suspect it is still reading the local jar.

> 【runtime】flink job use the lib jars instead of the `yarn.provided.lib.dirs` 
> config jars
> ---
>
> Key: FLINK-21143
> URL: https://issues.apache.org/jira/browse/FLINK-21143
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / YARN, Runtime / Configuration
>Affects Versions: 1.12.0
>Reporter: zhisheng
>Priority: Major
> Attachments: flink-deploy-sql-client-.log, flink-deploy-sql-client.log
>
>
> Flink 1.12.0: I used the `yarn.provided.lib.dirs` config to speed up job 
> startup, so I uploaded all jars to HDFS. But when I update the jars in HDFS 
> (not in flink-1.12.0/lib/), newly submitted jobs still use the lib/ jars 
> instead of the new HDFS jars.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-21146) 【SQL】Flink SQL Client not support specify the queue to submit the job

2021-01-26 Thread zhisheng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-21146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17272522#comment-17272522
 ] 

zhisheng commented on FLINK-21146:
--

[~jark] good (y)

> 【SQL】Flink SQL Client not support specify the queue to submit the job
> -
>
> Key: FLINK-21146
> URL: https://issues.apache.org/jira/browse/FLINK-21146
> Project: Flink
>  Issue Type: Improvement
>  Components: Deployment / YARN, Table SQL / Client
>Affects Versions: 1.12.0
>Reporter: zhisheng
>Priority: Major
>
> We can submit a job to a specific yarn queue in Hive like: 
> {code:java}
> set mapreduce.job.queuename=queue1;
> {code}
>  
>  
> we can submit a spark-sql job to a specific yarn queue like: 
> {code:java}
> spark-sql --queue xxx {code}
>  
> but the Flink SQL Client cannot specify which queue the job is submitted to; 
> the default is the `default` queue. This is not friendly in a production env.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-21143) 【runtime】flink job use the lib jars instead of the `yarn.provided.lib.dirs` config jars

2021-01-26 Thread zhisheng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhisheng updated FLINK-21143:
-
Attachment: flink-deploy-sql-client-.log

> 【runtime】flink job use the lib jars instead of the `yarn.provided.lib.dirs` 
> config jars
> ---
>
> Key: FLINK-21143
> URL: https://issues.apache.org/jira/browse/FLINK-21143
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / YARN, Runtime / Configuration
>Affects Versions: 1.12.0
>Reporter: zhisheng
>Priority: Major
> Attachments: flink-deploy-sql-client-.log, flink-deploy-sql-client.log
>
>
> Flink 1.12.0: I used the `yarn.provided.lib.dirs` config to speed up job 
> startup, so I uploaded all jars to HDFS. But when I update the jars in HDFS 
> (not in flink-1.12.0/lib/), newly submitted jobs still use the lib/ jars 
> instead of the new HDFS jars.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-21143) 【runtime】flink job use the lib jars instead of the `yarn.provided.lib.dirs` config jars

2021-01-26 Thread zhisheng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-21143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17272100#comment-17272100
 ] 

zhisheng commented on FLINK-21143:
--

[^flink-deploy-sql-client-.log]

 
thanks [~xintongsong] [~fly_in_gis], let me describe my problem in detail:

I configured yarn.provided.lib.dirs: hdfs:///flink/composite-lib/flink-1.12.0

The previous version of flink-sql-connector-hbase-2.2_2.11-1.12.0.jar is under 
the Flink client directory, and later I added a new version of 
flink-sql-connector-hbase-2.2_2.11-1.12.0.jar to HDFS.

When I started the SQL client and executed SQL, an exception was reported. The 
exception is caused by a problem in the previous version of the jar. The new 
version of the jar (in HDFS) has fixed the problem, but the local jar (in Flink 
lib/) has not been updated. I have added the log file; it may help you. ^^
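
For reference, the same setting expressed through the configuration API; a 
minimal sketch, assuming Flink 1.12's YarnConfigOptions (the HDFS path is the 
one from this ticket):
{code:java}
import java.util.Collections;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.yarn.configuration.YarnConfigOptions;

Configuration conf = new Configuration();
// Same effect as `yarn.provided.lib.dirs: hdfs:///flink/composite-lib/flink-1.12.0`
// in flink-conf.yaml: jars are shipped from HDFS and localized via the YARN cache.
conf.set(YarnConfigOptions.PROVIDED_LIB_DIRS,
        Collections.singletonList("hdfs:///flink/composite-lib/flink-1.12.0"));
{code}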

> 【runtime】flink job use the lib jars instead of the `yarn.provided.lib.dirs` 
> config jars
> ---
>
> Key: FLINK-21143
> URL: https://issues.apache.org/jira/browse/FLINK-21143
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / YARN, Runtime / Configuration
>Affects Versions: 1.12.0
>Reporter: zhisheng
>Priority: Major
> Attachments: flink-deploy-sql-client-.log, flink-deploy-sql-client.log
>
>
> Flink 1.12.0: I used the `yarn.provided.lib.dirs` config to speed up job 
> startup, so I uploaded all jars to HDFS. But when I update the jars in HDFS 
> (not in flink-1.12.0/lib/), newly submitted jobs still use the lib/ jars 
> instead of the new HDFS jars.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Issue Comment Deleted] (FLINK-21143) 【runtime】flink job use the lib jars instead of the `yarn.provided.lib.dirs` config jars

2021-01-26 Thread zhisheng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhisheng updated FLINK-21143:
-
Comment: was deleted

(was: [^flink-deploy-sql-client.log]

thanks [~xintongsong] [~fly_in_gis], let me describe my problem in detail:

I configured yarn.provided.lib.dirs: hdfs:///flink/composite-lib/flink-1.12.0

The previous version of flink-sql-connector-hbase-2.2_2.11-1.12.0.jar is under 
the Flink client directory, and later I added a new version of 
flink-sql-connector-hbase-2.2_2.11-1.12.0.jar to HDFS.

When I started the SQL client and executed SQL, an exception was reported. The 
exception is caused by a problem in the previous version of the jar. The new 
version of the jar (in HDFS) has fixed the problem, but the local jar (in Flink 
lib/) has not been updated. I have added the log file; it may help you. ^^ )

> 【runtime】flink job use the lib jars instead of the `yarn.provided.lib.dirs` 
> config jars
> ---
>
> Key: FLINK-21143
> URL: https://issues.apache.org/jira/browse/FLINK-21143
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / YARN, Runtime / Configuration
>Affects Versions: 1.12.0
>Reporter: zhisheng
>Priority: Major
> Attachments: flink-deploy-sql-client.log
>
>
> Flink 1.12.0: I used the `yarn.provided.lib.dirs` config to speed up job 
> startup, so I uploaded all jars to HDFS. But when I update the jars in HDFS 
> (not in flink-1.12.0/lib/), newly submitted jobs still use the lib/ jars 
> instead of the new HDFS jars.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-21143) 【runtime】flink job use the lib jars instead of the `yarn.provided.lib.dirs` config jars

2021-01-26 Thread zhisheng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-21143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17272083#comment-17272083
 ] 

zhisheng commented on FLINK-21143:
--

[^flink-deploy-sql-client.log]

thanks [~xintongsong] [~fly_in_gis], let me describe my problem in detail:

I configured yarn.provided.lib.dirs: hdfs:///flink/composite-lib/flink-1.12.0

The previous version of flink-sql-connector-hbase-2.2_2.11-1.12.0.jar is under 
the Flink client directory, and later I added a new version of 
flink-sql-connector-hbase-2.2_2.11-1.12.0.jar to HDFS.

When I started the SQL client and executed SQL, an exception was reported. The 
exception is caused by a problem in the previous version of the jar. The new 
version of the jar (in HDFS) has fixed the problem, but the local jar (in Flink 
lib/) has not been updated. I have added the log file; it may help you. ^^ 

> 【runtime】flink job use the lib jars instead of the `yarn.provided.lib.dirs` 
> config jars
> ---
>
> Key: FLINK-21143
> URL: https://issues.apache.org/jira/browse/FLINK-21143
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / YARN, Runtime / Configuration
>Affects Versions: 1.12.0
>Reporter: zhisheng
>Priority: Major
> Attachments: flink-deploy-sql-client.log
>
>
> Flink 1.12.0: I used the `yarn.provided.lib.dirs` config to speed up job 
> startup, so I uploaded all jars to HDFS. But when I update the jars in HDFS 
> (not in flink-1.12.0/lib/), newly submitted jobs still use the lib/ jars 
> instead of the new HDFS jars.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-21143) 【runtime】flink job use the lib jars instead of the `yarn.provided.lib.dirs` config jars

2021-01-26 Thread zhisheng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhisheng updated FLINK-21143:
-
Attachment: flink-deploy-sql-client.log

> 【runtime】flink job use the lib jars instead of the `yarn.provided.lib.dirs` 
> config jars
> ---
>
> Key: FLINK-21143
> URL: https://issues.apache.org/jira/browse/FLINK-21143
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / YARN, Runtime / Configuration
>Affects Versions: 1.12.0
>Reporter: zhisheng
>Priority: Major
> Attachments: flink-deploy-sql-client.log
>
>
> Flink 1.12.0: I used the `yarn.provided.lib.dirs` config to speed up job 
> startup, so I uploaded all jars to HDFS. But when I update the jars in HDFS 
> (not in flink-1.12.0/lib/), newly submitted jobs still use the lib/ jars 
> instead of the new HDFS jars.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-21146) 【SQL】Flink SQL Client not support specify the queue to submit the job

2021-01-26 Thread zhisheng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-21146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17272066#comment-17272066
 ] 

zhisheng commented on FLINK-21146:
--

hi [~jark], setting a fixed configuration does not seem to be a very friendly 
way. Production environments are generally isolated, and users from different 
departments submit jobs to different queues. 

> 【SQL】Flink SQL Client not support specify the queue to submit the job
> -
>
> Key: FLINK-21146
> URL: https://issues.apache.org/jira/browse/FLINK-21146
> Project: Flink
>  Issue Type: Improvement
>  Components: Deployment / YARN, Table SQL / Client
>Affects Versions: 1.12.0
>Reporter: zhisheng
>Priority: Major
>
> We can submit a job to a specific yarn queue in Hive like: 
> {code:java}
> set mapreduce.job.queuename=queue1;
> {code}
>  
>  
> we can submit a spark-sql job to a specific yarn queue like: 
> {code:java}
> spark-sql --queue xxx {code}
>  
> but the Flink SQL Client cannot specify which queue the job is submitted to; 
> the default is the `default` queue. This is not friendly in a production env.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-21146) 【SQL】Flink SQL Client not support specify the queue to submit the job

2021-01-25 Thread zhisheng (Jira)
zhisheng created FLINK-21146:


 Summary: 【SQL】Flink SQL Client not support specify the queue to 
submit the job
 Key: FLINK-21146
 URL: https://issues.apache.org/jira/browse/FLINK-21146
 Project: Flink
  Issue Type: Improvement
  Components: Deployment / YARN, Table SQL / Client
Affects Versions: 1.12.0
Reporter: zhisheng


We can submit a job to a specific yarn queue in Hive like: 
{code:java}
set mapreduce.job.queuename=queue1;
{code}
 

we can submit a spark-sql job to a specific yarn queue like: 
{code:java}
spark-sql --queue xxx {code}
 

but the Flink SQL Client cannot specify which queue the job is submitted to; 
the default is the `default` queue. This is not friendly in a production 
environment.
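
For reference, the configuration option that a per-session switch would need 
to set; a minimal sketch, assuming Flink 1.12's YarnConfigOptions (the queue 
name is illustrative):
{code:java}
import org.apache.flink.configuration.Configuration;
import org.apache.flink.yarn.configuration.YarnConfigOptions;

Configuration conf = new Configuration();
// Same effect as `yarn.application.queue: queue1` in flink-conf.yaml; the ask
// in this ticket is to make it settable per job from the SQL Client instead.
conf.set(YarnConfigOptions.APPLICATION_QUEUE, "queue1");
{code}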



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-21143) 【runtime】flink job use the lib jars instead of the `yarn.provided.lib.dirs` config jars

2021-01-25 Thread zhisheng (Jira)
zhisheng created FLINK-21143:


 Summary: 【runtime】flink job use the lib jars instead of the 
`yarn.provided.lib.dirs` config jars
 Key: FLINK-21143
 URL: https://issues.apache.org/jira/browse/FLINK-21143
 Project: Flink
  Issue Type: Bug
  Components: Deployment / YARN, Runtime / Configuration
Affects Versions: 1.12.0
Reporter: zhisheng


Flink 1.12.0: I used the `yarn.provided.lib.dirs` config to speed up job 
startup, so I uploaded all jars to HDFS. But when I update the jars in HDFS 
(not in flink-1.12.0/lib/), newly submitted jobs still use the lib/ jars 
instead of the new HDFS jars.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-19944) Support sink parallelism configuration to Hive connector

2020-12-08 Thread zhisheng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17245909#comment-17245909
 ] 

zhisheng commented on FLINK-19944:
--

hi [~neighborhood], have you started on this subtask?

> Support sink parallelism configuration to Hive connector
> 
>
> Key: FLINK-19944
> URL: https://issues.apache.org/jira/browse/FLINK-19944
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Hive
>Reporter: zhuxiaoshang
>Assignee: Lsw_aka_laplace
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-19946) Support sink parallelism configuration to Hbase connector

2020-12-04 Thread zhisheng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17243875#comment-17243875
 ] 

zhisheng commented on FLINK-19946:
--

[~jark] got it

> Support sink parallelism configuration to Hbase connector
> -
>
> Key: FLINK-19946
> URL: https://issues.apache.org/jira/browse/FLINK-19946
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / HBase
>Reporter: zhuxiaoshang
>Assignee: zhuxiaoshang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.12.0
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-20481) java.lang.ClassCastException: java.lang.Long cannot be cast to java.lang.Integer

2020-12-04 Thread zhisheng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17243866#comment-17243866
 ] 

zhisheng commented on FLINK-20481:
--

[~873925...@qq.com] yes, i compile the mster branch

> java.lang.ClassCastException: java.lang.Long cannot be cast to 
> java.lang.Integer
> 
>
> Key: FLINK-20481
> URL: https://issues.apache.org/jira/browse/FLINK-20481
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / JDBC, Table SQL / Ecosystem
>Affects Versions: 1.12.0
>Reporter: zhisheng
>Priority: Major
> Attachments: image-2020-12-04-15-24-08-732.png, 
> image-2020-12-04-15-26-49-307.png
>
>
> MySQL table SQL:
> {code:java}
> DROP TABLE IF EXISTS `yarn_app_logs_count`;
> 
> CREATE TABLE `yarn_app_logs_count` (
>   `id` int(11) unsigned NOT NULL AUTO_INCREMENT,
>   `app_id` varchar(50) DEFAULT NULL,
>   `count` bigint(11) DEFAULT NULL,
>   PRIMARY KEY (`id`)
> ) ENGINE=InnoDB DEFAULT CHARSET=utf8;
> 
> INSERT INTO `yarn_app_logs_count` (`id`, `app_id`, `count`)
> VALUES (1, 'application_1575453055442_3188', 2);
> {code}
> Flink SQL DDL and SQL:
> {code:java}
> CREATE TABLE yarn_app_logs_count (
>   id INT,
>   app_id STRING,
>   `count` BIGINT
> ) WITH (
>    'connector' = 'jdbc',
>    'url' = 'jdbc:mysql://localhost:3306/zhisheng',
>    'table-name' = 'yarn_app_logs_count',
>    'username' = 'root',
>    'password' = '123456'
> );
> select * from yarn_app_logs_count;
> {code}
>  
> If the id type is INT, it throws an exception:
> {code:java}
> 2020-12-04 15:15:23
> org.apache.flink.runtime.JobException: Recovery is suppressed by NoRestartBackoffTimeStrategy
>     at org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.handleFailure(ExecutionFailureHandler.java:116)
>     at org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.getFailureHandlingResult(ExecutionFailureHandler.java:78)
>     at org.apache.flink.runtime.scheduler.DefaultScheduler.handleTaskFailure(DefaultScheduler.java:224)
>     at org.apache.flink.runtime.scheduler.DefaultScheduler.maybeHandleTaskFailure(DefaultScheduler.java:217)
>     at org.apache.flink.runtime.scheduler.DefaultScheduler.updateTaskExecutionStateInternal(DefaultScheduler.java:208)
>     at org.apache.flink.runtime.scheduler.SchedulerBase.updateTaskExecutionState(SchedulerBase.java:533)
>     at org.apache.flink.runtime.scheduler.SchedulerNG.updateTaskExecutionState(SchedulerNG.java:89)
>     at org.apache.flink.runtime.jobmaster.JobMaster.updateTaskExecutionState(JobMaster.java:419)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:498)
>     at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcInvocation(AkkaRpcActor.java:286)
>     at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:201)
>     at org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:74)
>     at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:154)
>     at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:26)
>     at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:21)
>     at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:123)
>     at akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:21)
>     at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:170)
>     at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)
>     at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)
>     at akka.actor.Actor$class.aroundReceive(Actor.scala:517)
>     at akka.actor.AbstractActor.aroundReceive(AbstractActor.scala:225)
>     at akka.actor.ActorCell.receiveMessage(ActorCell.scala:592)
>     at akka.actor.ActorCell.invoke(ActorCell.scala:561)
>     at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:258)
>     at akka.dispatch.Mailbox.run(Mailbox.scala:225)
>     at akka.dispatch.Mailbox.exec(Mailbox.scala:235)
>     at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
>     at akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
>     at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
>     at akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> Caused by: java.lang.ClassCastException: java.lang.Long cannot be cast to java.lang.Integer
>     at org.apache.flink.table.data.GenericRowData.getInt(GenericRowData.java:149)
>     at 

[jira] [Comment Edited] (FLINK-20481) java.lang.ClassCastException: java.lang.Long cannot be cast to java.lang.Integer

2020-12-04 Thread zhisheng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17243866#comment-17243866
 ] 

zhisheng edited comment on FLINK-20481 at 12/4/20, 9:11 AM:


[~873925...@qq.com] yes, i compile the master branch


was (Author: zhisheng):
[~873925...@qq.com] yes, i compile the mster branch

> java.lang.ClassCastException: java.lang.Long cannot be cast to 
> java.lang.Integer
> 
>
> Key: FLINK-20481
> URL: https://issues.apache.org/jira/browse/FLINK-20481
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / JDBC, Table SQL / Ecosystem
>Affects Versions: 1.12.0
>Reporter: zhisheng
>Priority: Major
> Attachments: image-2020-12-04-15-24-08-732.png, 
> image-2020-12-04-15-26-49-307.png
>
>
> MySQL table SQL:
> {code:java}
> DROP TABLE IF EXISTS `yarn_app_logs_count`;
> 
> CREATE TABLE `yarn_app_logs_count` (
>   `id` int(11) unsigned NOT NULL AUTO_INCREMENT,
>   `app_id` varchar(50) DEFAULT NULL,
>   `count` bigint(11) DEFAULT NULL,
>   PRIMARY KEY (`id`)
> ) ENGINE=InnoDB DEFAULT CHARSET=utf8;
> 
> INSERT INTO `yarn_app_logs_count` (`id`, `app_id`, `count`)
> VALUES (1, 'application_1575453055442_3188', 2);
> {code}
> Flink SQL DDL and SQL:
> {code:java}
> CREATE TABLE yarn_app_logs_count (
>   id INT,
>   app_id STRING,
>   `count` BIGINT
> ) WITH (
>    'connector' = 'jdbc',
>    'url' = 'jdbc:mysql://localhost:3306/zhisheng',
>    'table-name' = 'yarn_app_logs_count',
>    'username' = 'root',
>    'password' = '123456'
> );
> select * from yarn_app_logs_count;
> {code}
>  
> If the id type is INT, it throws an exception:
> {code:java}
> 2020-12-04 15:15:23
> org.apache.flink.runtime.JobException: Recovery is suppressed by NoRestartBackoffTimeStrategy
>     at org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.handleFailure(ExecutionFailureHandler.java:116)
>     at org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.getFailureHandlingResult(ExecutionFailureHandler.java:78)
>     at org.apache.flink.runtime.scheduler.DefaultScheduler.handleTaskFailure(DefaultScheduler.java:224)
>     at org.apache.flink.runtime.scheduler.DefaultScheduler.maybeHandleTaskFailure(DefaultScheduler.java:217)
>     at org.apache.flink.runtime.scheduler.DefaultScheduler.updateTaskExecutionStateInternal(DefaultScheduler.java:208)
>     at org.apache.flink.runtime.scheduler.SchedulerBase.updateTaskExecutionState(SchedulerBase.java:533)
>     at org.apache.flink.runtime.scheduler.SchedulerNG.updateTaskExecutionState(SchedulerNG.java:89)
>     at org.apache.flink.runtime.jobmaster.JobMaster.updateTaskExecutionState(JobMaster.java:419)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:498)
>     at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcInvocation(AkkaRpcActor.java:286)
>     at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:201)
>     at org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:74)
>     at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:154)
>     at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:26)
>     at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:21)
>     at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:123)
>     at akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:21)
>     at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:170)
>     at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)
>     at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)
>     at akka.actor.Actor$class.aroundReceive(Actor.scala:517)
>     at akka.actor.AbstractActor.aroundReceive(AbstractActor.scala:225)
>     at akka.actor.ActorCell.receiveMessage(ActorCell.scala:592)
>     at akka.actor.ActorCell.invoke(ActorCell.scala:561)
>     at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:258)
>     at akka.dispatch.Mailbox.run(Mailbox.scala:225)
>     at akka.dispatch.Mailbox.exec(Mailbox.scala:235)
>     at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
>     at akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
>     at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
>     at akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> Caused by: java.lang.ClassCastException: java.lang.Long cannot be cast to 

[jira] [Updated] (FLINK-20481) java.lang.ClassCastException: java.lang.Long cannot be cast to java.lang.Integer

2020-12-03 Thread zhisheng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhisheng updated FLINK-20481:
-
Description: 
MySQL table SQL:
{code:java}
DROP TABLE IF EXISTS `yarn_app_logs_count`;

CREATE TABLE `yarn_app_logs_count` (
  `id` int(11) unsigned NOT NULL AUTO_INCREMENT,
  `app_id` varchar(50) DEFAULT NULL,
  `count` bigint(11) DEFAULT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

INSERT INTO `yarn_app_logs_count` (`id`, `app_id`, `count`)
VALUES (1, 'application_1575453055442_3188', 2);
{code}
Flink SQL DDL and SQL:
{code:java}
CREATE TABLE yarn_app_logs_count (
  id INT,
  app_id STRING,
  `count` BIGINT
) WITH (
   'connector' = 'jdbc',
   'url' = 'jdbc:mysql://localhost:3306/zhisheng',
   'table-name' = 'yarn_app_logs_count',
   'username' = 'root',
   'password' = '123456'
);

select * from yarn_app_logs_count;
{code}
 

If the id type is INT, it throws an exception:
{code:java}
2020-12-04 15:15:23
org.apache.flink.runtime.JobException: Recovery is suppressed by NoRestartBackoffTimeStrategy
    at org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.handleFailure(ExecutionFailureHandler.java:116)
    at org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.getFailureHandlingResult(ExecutionFailureHandler.java:78)
    at org.apache.flink.runtime.scheduler.DefaultScheduler.handleTaskFailure(DefaultScheduler.java:224)
    at org.apache.flink.runtime.scheduler.DefaultScheduler.maybeHandleTaskFailure(DefaultScheduler.java:217)
    at org.apache.flink.runtime.scheduler.DefaultScheduler.updateTaskExecutionStateInternal(DefaultScheduler.java:208)
    at org.apache.flink.runtime.scheduler.SchedulerBase.updateTaskExecutionState(SchedulerBase.java:533)
    at org.apache.flink.runtime.scheduler.SchedulerNG.updateTaskExecutionState(SchedulerNG.java:89)
    at org.apache.flink.runtime.jobmaster.JobMaster.updateTaskExecutionState(JobMaster.java:419)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcInvocation(AkkaRpcActor.java:286)
    at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:201)
    at org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:74)
    at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:154)
    at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:26)
    at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:21)
    at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:123)
    at akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:21)
    at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:170)
    at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)
    at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)
    at akka.actor.Actor$class.aroundReceive(Actor.scala:517)
    at akka.actor.AbstractActor.aroundReceive(AbstractActor.scala:225)
    at akka.actor.ActorCell.receiveMessage(ActorCell.scala:592)
    at akka.actor.ActorCell.invoke(ActorCell.scala:561)
    at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:258)
    at akka.dispatch.Mailbox.run(Mailbox.scala:225)
    at akka.dispatch.Mailbox.exec(Mailbox.scala:235)
    at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
    at akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
    at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
    at akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
Caused by: java.lang.ClassCastException: java.lang.Long cannot be cast to java.lang.Integer
    at org.apache.flink.table.data.GenericRowData.getInt(GenericRowData.java:149)
    at org.apache.flink.table.data.RowData.lambda$createFieldGetter$245ca7d1$6(RowData.java:267)
    at org.apache.flink.table.data.RowData.lambda$createFieldGetter$25774257$1(RowData.java:317)
    at org.apache.flink.table.runtime.typeutils.RowDataSerializer.copyRowData(RowDataSerializer.java:166)
    at org.apache.flink.table.runtime.typeutils.RowDataSerializer.copy(RowDataSerializer.java:129)
    at org.apache.flink.table.runtime.typeutils.RowDataSerializer.copy(RowDataSerializer.java:50)
    at org.apache.flink.streaming.runtime.tasks.CopyingChainingOutput.pushToOperator(CopyingChainingOutput.java:69)
    at org.apache.flink.streaming.runtime.tasks.CopyingChainingOutput.collect(CopyingChainingOutput.java:46)
    at org.apache.flink.streaming.runtime.tasks.CopyingChainingOutput.collect(CopyingChainingOutput.java:26)
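
A possible workaround, assuming the root cause is that MySQL `int(11) 
unsigned` values come back from the JDBC driver as java.lang.Long: declare the 
column as BIGINT on the Flink side. A minimal sketch (connection settings 
copied from the ticket):
{code:java}
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

TableEnvironment tEnv = TableEnvironment.create(
        EnvironmentSettings.newInstance().inStreamingMode().build());

// `id` is declared BIGINT because MySQL UNSIGNED INT can exceed the Java
// Integer range, so the JDBC connector materializes it as a Long.
tEnv.executeSql(
        "CREATE TABLE yarn_app_logs_count (\n"
                + "  id BIGINT,\n"
                + "  app_id STRING,\n"
                + "  `count` BIGINT\n"
                + ") WITH (\n"
                + "  'connector' = 'jdbc',\n"
                + "  'url' = 'jdbc:mysql://localhost:3306/zhisheng',\n"
                + "  'table-name' = 'yarn_app_logs_count',\n"
                + "  'username' = 'root',\n"
                + "  'password' = '123456'\n"
                + ")");
tEnv.executeSql("SELECT * FROM yarn_app_logs_count").print();
{code}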

[jira] [Created] (FLINK-20481) java.lang.ClassCastException: java.lang.Long cannot be cast to java.lang.Integer

2020-12-03 Thread zhisheng (Jira)
zhisheng created FLINK-20481:


 Summary: java.lang.ClassCastException: java.lang.Long cannot be 
cast to java.lang.Integer
 Key: FLINK-20481
 URL: https://issues.apache.org/jira/browse/FLINK-20481
 Project: Flink
  Issue Type: Bug
  Components: Connectors / JDBC, Table SQL / Ecosystem
Affects Versions: 1.12.0
Reporter: zhisheng
 Attachments: image-2020-12-04-15-24-08-732.png, 
image-2020-12-04-15-26-49-307.png

MySQL table SQL:
{code:java}
DROP TABLE IF EXISTS `yarn_app_logs_count`;

CREATE TABLE `yarn_app_logs_count` (
  `id` int(11) unsigned NOT NULL AUTO_INCREMENT,
  `app_id` varchar(50) DEFAULT NULL,
  `count` bigint(11) DEFAULT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

INSERT INTO `yarn_app_logs_count` (`id`, `app_id`, `count`)
VALUES (1, 'application_1575453055442_3188', 2);
{code}
Flink SQL DDL:
{code:java}
CREATE TABLE yarn_app_logs_count (
  id INT,
  app_id STRING,
  `count` BIGINT
) WITH (
   'connector' = 'jdbc',
   'url' = 'jdbc:mysql://localhost:3306/zhisheng',
   'table-name' = 'yarn_app_logs_count',
   'username' = 'root',
   'password' = '123456'
);

select * from yarn_app_logs_count;
{code}
 

If the id type is INT, it throws an exception:
{code:java}
2020-12-04 15:15:23
org.apache.flink.runtime.JobException: Recovery is suppressed by NoRestartBackoffTimeStrategy
    at org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.handleFailure(ExecutionFailureHandler.java:116)
    at org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.getFailureHandlingResult(ExecutionFailureHandler.java:78)
    at org.apache.flink.runtime.scheduler.DefaultScheduler.handleTaskFailure(DefaultScheduler.java:224)
    at org.apache.flink.runtime.scheduler.DefaultScheduler.maybeHandleTaskFailure(DefaultScheduler.java:217)
    at org.apache.flink.runtime.scheduler.DefaultScheduler.updateTaskExecutionStateInternal(DefaultScheduler.java:208)
    at org.apache.flink.runtime.scheduler.SchedulerBase.updateTaskExecutionState(SchedulerBase.java:533)
    at org.apache.flink.runtime.scheduler.SchedulerNG.updateTaskExecutionState(SchedulerNG.java:89)
    at org.apache.flink.runtime.jobmaster.JobMaster.updateTaskExecutionState(JobMaster.java:419)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcInvocation(AkkaRpcActor.java:286)
    at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:201)
    at org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:74)
    at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:154)
    at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:26)
    at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:21)
    at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:123)
    at akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:21)
    at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:170)
    at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)
    at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)
    at akka.actor.Actor$class.aroundReceive(Actor.scala:517)
    at akka.actor.AbstractActor.aroundReceive(AbstractActor.scala:225)
    at akka.actor.ActorCell.receiveMessage(ActorCell.scala:592)
    at akka.actor.ActorCell.invoke(ActorCell.scala:561)
    at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:258)
    at akka.dispatch.Mailbox.run(Mailbox.scala:225)
    at akka.dispatch.Mailbox.exec(Mailbox.scala:235)
    at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
    at akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
    at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
    at akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
Caused by: java.lang.ClassCastException: java.lang.Long cannot be cast to java.lang.Integer
    at org.apache.flink.table.data.GenericRowData.getInt(GenericRowData.java:149)
    at org.apache.flink.table.data.RowData.lambda$createFieldGetter$245ca7d1$6(RowData.java:267)
    at org.apache.flink.table.data.RowData.lambda$createFieldGetter$25774257$1(RowData.java:317)
    at org.apache.flink.table.runtime.typeutils.RowDataSerializer.copyRowData(RowDataSerializer.java:166)
    at org.apache.flink.table.runtime.typeutils.RowDataSerializer.copy(RowDataSerializer.java:129)
    at org.apache.flink.table.runtime.typeutils.RowDataSerializer.copy(RowDataSerializer.java:50)
    at 

[jira] [Commented] (FLINK-19946) Support sink parallelism configuration to Hbase connector

2020-12-03 Thread zhisheng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17243746#comment-17243746
 ] 

zhisheng commented on FLINK-19946:
--

[~ZhuShang] [~jark] hi, I found that HBaseUpsertTableSink doesn't use the 
'sink.parallelism' config; it still uses 'dataStream.getParallelism()' as the 
parallelism. 
[https://github.com/apache/flink/blob/master/flink-connectors/flink-connector-hbase-2.2/src/main/java/org/apache/flink/connector/hbase2/sink/HBaseUpsertTableSink.java#L103]

Does this place need to be modified?
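
For context, a minimal sketch of how the option is surfaced in Flink 1.12; the 
declaration below mirrors FactoryUtil.SINK_PARALLELISM and is illustrative. A 
sink factory is expected to read this option and pass it to its runtime 
provider instead of falling back to dataStream.getParallelism():
{code:java}
import org.apache.flink.configuration.ConfigOption;
import org.apache.flink.configuration.ConfigOptions;

// Illustrative re-declaration of the shared 'sink.parallelism' option.
public static final ConfigOption<Integer> SINK_PARALLELISM =
        ConfigOptions.key("sink.parallelism")
                .intType()
                .noDefaultValue()
                .withDescription("Defines a custom parallelism for the sink.");
{code}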

> Support sink parallelism configuration to Hbase connector
> -
>
> Key: FLINK-19946
> URL: https://issues.apache.org/jira/browse/FLINK-19946
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / HBase
>Reporter: zhuxiaoshang
>Assignee: zhuxiaoshang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.12.0
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-20473) when get metrics option, its hard to see the full name unless u choosed

2020-12-03 Thread zhisheng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17243673#comment-17243673
 ] 

zhisheng commented on FLINK-20473:
--

+1

> when get metrics option, its hard to see the full name unless u choosed
> ---
>
> Key: FLINK-20473
> URL: https://issues.apache.org/jira/browse/FLINK-20473
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Web Frontend
>Affects Versions: 1.11.2
>Reporter: tonychan
>Priority: Major
> Attachments: image-2020-12-04-09-45-16-219.png
>
>
> I wish there were a more friendly way to see the full name 
> !image-2020-12-04-09-45-16-219.png!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-19943) Support sink parallelism configuration to ElasticSearch connector

2020-12-01 Thread zhisheng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17242126#comment-17242126
 ] 

zhisheng commented on FLINK-19943:
--

[~lzljs3620320] hi, will this subtask be merged into 1.12?

> Support sink parallelism configuration to ElasticSearch connector
> -
>
> Key: FLINK-19943
> URL: https://issues.apache.org/jira/browse/FLINK-19943
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / ElasticSearch
>Reporter: CloseRiver
>Assignee: wgcn
>Priority: Major
>  Labels: pull-request-available
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-20414) 【HBase】FileNotFoundException: File /tmp/hbase-deploy/hbase/lib does not exist

2020-12-01 Thread zhisheng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17242045#comment-17242045
 ] 

zhisheng commented on FLINK-20414:
--

[~jark] yes, I solved it, but the exception is not friendly: if the log level 
is 'INFO', I can't see the DEBUG log.
{code:java}
2020-11-30 10:49:45,772 DEBUG org.apache.flink.hbase.shaded.org.apache.hadoop.hbase.util.DynamicClassLoader [] - Finding class again: org.apache.hadoop.hbase.regionserver.NoSuchColumnFamilyException
2020-11-30 10:49:45,774 DEBUG org.apache.flink.hbase.shaded.org.apache.hadoop.hbase.util.DynamicClassLoader [] - Class org.apache.hadoop.hbase.regionserver.NoSuchColumnFamilyException not found - using dynamical class loader
2020-11-30 10:49:45,774 {code}
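
If the warning itself is the concern, HBase's dynamic class loader can be 
disabled on the client side. A minimal sketch, assuming the standard HBase 
settings `hbase.use.dynamic.jars` and `hbase.dynamic.jars.dir` (worth 
verifying against the HBase version shaded into the connector):
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

Configuration hbaseConf = HBaseConfiguration.create();
// Stop DynamicClassLoader from probing ${hbase.rootdir}/lib on every class
// miss, which is what produces the "Failed to check remote dir status" WARN.
hbaseConf.setBoolean("hbase.use.dynamic.jars", false);
// Or point it at a directory that actually exists:
hbaseConf.set("hbase.dynamic.jars.dir", "hdfs:///flink/hbase-lib");
{code}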

> 【HBase】FileNotFoundException: File /tmp/hbase-deploy/hbase/lib does not exist
> -
>
> Key: FLINK-20414
> URL: https://issues.apache.org/jira/browse/FLINK-20414
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / HBase, Table SQL / Ecosystem
>Affects Versions: 1.12.0
>Reporter: zhisheng
>Priority: Major
> Fix For: 1.12.0
>
>
> {code:java}
> CREATE TABLE yarn_log_datagen_test_hbase_sink (
>  appid INT,
>  message STRING
> ) WITH (
>  'connector' = 'datagen',
>  'rows-per-second'='10',
>  'fields.appid.kind'='random',
>  'fields.appid.min'='1',
>  'fields.appid.max'='1000',
>  'fields.message.length'='100'
> );
> CREATE TABLE hbase_test1 (
>  rowkey INT,
>  family1 ROW
> ) WITH (
>  'connector' = 'hbase-1.4',
>  'table-name' = 'test_flink',
>  'zookeeper.quorum' = 'xxx:2181',
>  'sink.parallelism' = '2',
>  'sink.buffer-flush.interval' = '1',
>  'sink.buffer-flush.max-rows' = '1',
>  'sink.buffer-flush.max-size' = '1'
> );
> INSERT INTO hbase_test1 SELECT appid, ROW(message) FROM 
> yarn_log_datagen_test_hbase_sink;
> {code}
> I ran the SQL and it throws an exception; data is not written into HBase. I 
> added flink-sql-connector-hbase-1.4_2.11-1.12-SNAPSHOT.jar to the lib folder.
>  
> {code:java}
> 2020-11-30 10:49:45,772 DEBUG org.apache.flink.hbase.shaded.org.apache.hadoop.hbase.util.DynamicClassLoader [] - Finding class again: org.apache.hadoop.hbase.regionserver.NoSuchColumnFamilyException
> 2020-11-30 10:49:45,774 DEBUG org.apache.flink.hbase.shaded.org.apache.hadoop.hbase.util.DynamicClassLoader [] - Class org.apache.hadoop.hbase.regionserver.NoSuchColumnFamilyException not found - using dynamical class loader
> 2020-11-30 10:49:45,774 DEBUG org.apache.flink.hbase.shaded.org.apache.hadoop.hbase.util.DynamicClassLoader [] - Finding class: org.apache.hadoop.hbase.regionserver.NoSuchColumnFamilyException
> 2020-11-30 10:49:45,774 DEBUG org.apache.flink.hbase.shaded.org.apache.hadoop.hbase.util.DynamicClassLoader [] - Loading new jar files, if any
> 2020-11-30 10:49:45,776 WARN  org.apache.flink.hbase.shaded.org.apache.hadoop.hbase.util.DynamicClassLoader [] - Failed to check remote dir status /tmp/hbase-deploy/hbase/lib
> java.io.FileNotFoundException: File /tmp/hbase-deploy/hbase/lib does not exist.
>     at org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:795) ~[hadoop-hdfs-2.7.3.4.jar:?]
>     at org.apache.hadoop.hdfs.DistributedFileSystem.access$700(DistributedFileSystem.java:106) ~[hadoop-hdfs-2.7.3.4.jar:?]
>     at org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:853) ~[hadoop-hdfs-2.7.3.4.jar:?]
>     at org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:849) ~[hadoop-hdfs-2.7.3.4.jar:?]
>     at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) ~[hadoop-common-2.7.3.jar:?]
>     at org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:860) ~[hadoop-hdfs-2.7.3.4.jar:?]
>     at org.apache.flink.hbase.shaded.org.apache.hadoop.hbase.util.DynamicClassLoader.loadNewJars(DynamicClassLoader.java:206) [flink-sql-connector-hbase-1.4_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
>     at org.apache.flink.hbase.shaded.org.apache.hadoop.hbase.util.DynamicClassLoader.tryRefreshClass(DynamicClassLoader.java:168) [flink-sql-connector-hbase-1.4_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
>     at org.apache.flink.hbase.shaded.org.apache.hadoop.hbase.util.DynamicClassLoader.loadClass(DynamicClassLoader.java:140) [flink-sql-connector-hbase-1.4_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
>     at java.lang.Class.forName0(Native Method) ~[?:1.8.0_92]
>     at java.lang.Class.forName(Class.java:348) [?:1.8.0_92]
>     at org.apache.flink.hbase.shaded.org.apache.hadoop.hbase.protobuf.ProtobufUtil.toException(ProtobufUtil.java:1753) [flink-sql-connector-hbase-1.4_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
>     at 

[jira] [Closed] (FLINK-20414) 【HBase】FileNotFoundException: File /tmp/hbase-deploy/hbase/lib does not exist

2020-11-29 Thread zhisheng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20414?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhisheng closed FLINK-20414.

Resolution: Fixed

> 【HBase】FileNotFoundException: File /tmp/hbase-deploy/hbase/lib does not exist
> -
>
> Key: FLINK-20414
> URL: https://issues.apache.org/jira/browse/FLINK-20414
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / HBase, Table SQL / Ecosystem
>Affects Versions: 1.12.0
>Reporter: zhisheng
>Priority: Major
> Fix For: 1.12.0
>
>
> {code:java}
> CREATE TABLE yarn_log_datagen_test_hbase_sink (
>  appid INT,
>  message STRING
> ) WITH (
>  'connector' = 'datagen',
>  'rows-per-second'='10',
>  'fields.appid.kind'='random',
>  'fields.appid.min'='1',
>  'fields.appid.max'='1000',
>  'fields.message.length'='100'
> );
> CREATE TABLE hbase_test1 (
>  rowkey INT,
>  family1 ROW
> ) WITH (
>  'connector' = 'hbase-1.4',
>  'table-name' = 'test_flink',
>  'zookeeper.quorum' = 'xxx:2181',
>  'sink.parallelism' = '2',
>  'sink.buffer-flush.interval' = '1',
>  'sink.buffer-flush.max-rows' = '1',
>  'sink.buffer-flush.max-size' = '1'
> );
> INSERT INTO hbase_test1 SELECT appid, ROW(message) FROM 
> yarn_log_datagen_test_hbase_sink;
> {code}
> I run the sql, has exception, and data is not write into hbase, i add the  
> flink-sql-connector-hbase-1.4_2.11-1.12-SNAPSHOT.jar  in the lib folder
>  
> {code:java}
> 2020-11-30 10:49:45,772 DEBUG 
> org.apache.flink.hbase.shaded.org.apache.hadoop.hbase.util.DynamicClassLoader 
> [] - Finding class again: 
> org.apache.hadoop.hbase.regionserver.NoSuchColumnFamilyException2020-11-30 
> 10:49:45,774 DEBUG 
> org.apache.flink.hbase.shaded.org.apache.hadoop.hbase.util.DynamicClassLoader 
> [] - Class org.apache.hadoop.hbase.regionserver.NoSuchColumnFamilyException 
> not found - using dynamical class loader2020-11-30 10:49:45,774 DEBUG 
> org.apache.flink.hbase.shaded.org.apache.hadoop.hbase.util.DynamicClassLoader 
> [] - Finding class: 
> org.apache.hadoop.hbase.regionserver.NoSuchColumnFamilyException2020-11-30 
> 10:49:45,774 DEBUG 
> org.apache.flink.hbase.shaded.org.apache.hadoop.hbase.util.DynamicClassLoader 
> [] - Loading new jar files, if any2020-11-30 10:49:45,776 WARN  
> org.apache.flink.hbase.shaded.org.apache.hadoop.hbase.util.DynamicClassLoader 
> [] - Failed to check remote dir status 
> /tmp/hbase-deploy/hbase/libjava.io.FileNotFoundException: File 
> /tmp/hbase-deploy/hbase/lib does not exist.at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:795)
>  ~[hadoop-hdfs-2.7.3.4.jar:?]at 
> org.apache.hadoop.hdfs.DistributedFileSystem.access$700(DistributedFileSystem.java:106)
>  ~[hadoop-hdfs-2.7.3.4.jar:?]at 
> org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:853)
>  ~[hadoop-hdfs-2.7.3.4.jar:?]at 
> org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:849)
>  ~[hadoop-hdfs-2.7.3.4.jar:?]at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>  ~[hadoop-common-2.7.3.jar:?]at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:860)
>  ~[hadoop-hdfs-2.7.3.4.jar:?]at 
> org.apache.flink.hbase.shaded.org.apache.hadoop.hbase.util.DynamicClassLoader.loadNewJars(DynamicClassLoader.java:206)
>  [flink-sql-connector-hbase-1.4_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]at 
> org.apache.flink.hbase.shaded.org.apache.hadoop.hbase.util.DynamicClassLoader.tryRefreshClass(DynamicClassLoader.java:168)
>  [flink-sql-connector-hbase-1.4_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]at 
> org.apache.flink.hbase.shaded.org.apache.hadoop.hbase.util.DynamicClassLoader.loadClass(DynamicClassLoader.java:140)
>  [flink-sql-connector-hbase-1.4_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]at 
> java.lang.Class.forName0(Native Method) ~[?:1.8.0_92]at 
> java.lang.Class.forName(Class.java:348) [?:1.8.0_92]at 
> org.apache.flink.hbase.shaded.org.apache.hadoop.hbase.protobuf.ProtobufUtil.toException(ProtobufUtil.java:1753)
>  [flink-sql-connector-hbase-1.4_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]at 
> org.apache.flink.hbase.shaded.org.apache.hadoop.hbase.protobuf.ResponseConverter.getResults(ResponseConverter.java:157)
>  [flink-sql-connector-hbase-1.4_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]at 
> org.apache.flink.hbase.shaded.org.apache.hadoop.hbase.client.MultiServerCallable.call(MultiServerCallable.java:180)
>  [flink-sql-connector-hbase-1.4_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]at 
> org.apache.flink.hbase.shaded.org.apache.hadoop.hbase.client.MultiServerCallable.call(MultiServerCallable.java:53)
>  [flink-sql-connector-hbase-1.4_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]at 
> 
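A note on the resolution: the DEBUG/WARN lines above come from HBase's DynamicClassLoader, which tries to list a remote jar directory (by default ${hbase.rootdir}/lib, here /tmp/hbase-deploy/hbase/lib) while deserializing a server-side exception. The class it is resolving, NoSuchColumnFamilyException, suggests the real failure was that the target table lacked the written column family; the missing-directory warning is only a symptom. A minimal sketch of how one might silence the warning, assuming the HBase 1.4 client honors the hbase.use.dynamic.jars switch (an assumption to verify against the HBase version bundled in the connector):

{code:java}
// Hypothetical sketch, not from the ticket: disable HBase's DynamicClassLoader
// so the client never lists the remote jar dir ${hbase.rootdir}/lib.
// Assumes the "hbase.use.dynamic.jars" key is honored by the HBase 1.4 client.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class DisableDynamicJars {
    public static Configuration hbaseConf() {
        Configuration conf = HBaseConfiguration.create();
        conf.setBoolean("hbase.use.dynamic.jars", false); // skip remote jar lookup
        return conf;
    }
}
{code}

With the SQL connector the equivalent setting would have to go into the hbase-site.xml on the client classpath, since the connector builds its own Configuration.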

[jira] [Updated] (FLINK-20414) 【HBase】FileNotFoundException: File /tmp/hbase-deploy/hbase/lib does not exist

2020-11-29 Thread zhisheng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20414?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhisheng updated FLINK-20414:
-
Summary: 【HBase】FileNotFoundException: File /tmp/hbase-deploy/hbase/lib 
does not exist  (was: 【HBase】)

> 【HBase】FileNotFoundException: File /tmp/hbase-deploy/hbase/lib does not exist
> -
>
> Key: FLINK-20414
> URL: https://issues.apache.org/jira/browse/FLINK-20414
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / HBase
>Affects Versions: 1.12.0
>Reporter: zhisheng
>Priority: Major
> Fix For: 1.12.0
>
>
> {code:java}
> CREATE TABLE yarn_log_datagen_test_hbase_sink (
>  appid INT,
>  message STRING
> ) WITH (
>  'connector' = 'datagen',
>  'rows-per-second'='10',
>  'fields.appid.kind'='random',
>  'fields.appid.min'='1',
>  'fields.appid.max'='1000',
>  'fields.message.length'='100'
> );
> CREATE TABLE hbase_test1 (
>  rowkey INT,
>  family1 ROW
> ) WITH (
>  'connector' = 'hbase-1.4',
>  'table-name' = 'test_flink',
>  'zookeeper.quorum' = 'xxx:2181',
>  'sink.parallelism' = '2',
>  'sink.buffer-flush.interval' = '1',
>  'sink.buffer-flush.max-rows' = '1',
>  'sink.buffer-flush.max-size' = '1'
> );
> INSERT INTO hbase_test1 SELECT appid, ROW(message) FROM 
> yarn_log_datagen_test_hbase_sink;
> {code}
> I ran the SQL and got an exception; the data was not written into HBase. I had
> added flink-sql-connector-hbase-1.4_2.11-1.12-SNAPSHOT.jar to the lib folder.
>  
> {code:java}
> 2020-11-30 10:49:45,772 DEBUG 
> org.apache.flink.hbase.shaded.org.apache.hadoop.hbase.util.DynamicClassLoader 
> [] - Finding class again: 
> org.apache.hadoop.hbase.regionserver.NoSuchColumnFamilyException2020-11-30 
> 10:49:45,774 DEBUG 
> org.apache.flink.hbase.shaded.org.apache.hadoop.hbase.util.DynamicClassLoader 
> [] - Class org.apache.hadoop.hbase.regionserver.NoSuchColumnFamilyException 
> not found - using dynamical class loader2020-11-30 10:49:45,774 DEBUG 
> org.apache.flink.hbase.shaded.org.apache.hadoop.hbase.util.DynamicClassLoader 
> [] - Finding class: 
> org.apache.hadoop.hbase.regionserver.NoSuchColumnFamilyException2020-11-30 
> 10:49:45,774 DEBUG 
> org.apache.flink.hbase.shaded.org.apache.hadoop.hbase.util.DynamicClassLoader 
> [] - Loading new jar files, if any2020-11-30 10:49:45,776 WARN  
> org.apache.flink.hbase.shaded.org.apache.hadoop.hbase.util.DynamicClassLoader 
> [] - Failed to check remote dir status 
> /tmp/hbase-deploy/hbase/libjava.io.FileNotFoundException: File 
> /tmp/hbase-deploy/hbase/lib does not exist.at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:795)
>  ~[hadoop-hdfs-2.7.3.4.jar:?]at 
> org.apache.hadoop.hdfs.DistributedFileSystem.access$700(DistributedFileSystem.java:106)
>  ~[hadoop-hdfs-2.7.3.4.jar:?]at 
> org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:853)
>  ~[hadoop-hdfs-2.7.3.4.jar:?]at 
> org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:849)
>  ~[hadoop-hdfs-2.7.3.4.jar:?]at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>  ~[hadoop-common-2.7.3.jar:?]at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:860)
>  ~[hadoop-hdfs-2.7.3.4.jar:?]at 
> org.apache.flink.hbase.shaded.org.apache.hadoop.hbase.util.DynamicClassLoader.loadNewJars(DynamicClassLoader.java:206)
>  [flink-sql-connector-hbase-1.4_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]at 
> org.apache.flink.hbase.shaded.org.apache.hadoop.hbase.util.DynamicClassLoader.tryRefreshClass(DynamicClassLoader.java:168)
>  [flink-sql-connector-hbase-1.4_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]at 
> org.apache.flink.hbase.shaded.org.apache.hadoop.hbase.util.DynamicClassLoader.loadClass(DynamicClassLoader.java:140)
>  [flink-sql-connector-hbase-1.4_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]at 
> java.lang.Class.forName0(Native Method) ~[?:1.8.0_92]at 
> java.lang.Class.forName(Class.java:348) [?:1.8.0_92]at 
> org.apache.flink.hbase.shaded.org.apache.hadoop.hbase.protobuf.ProtobufUtil.toException(ProtobufUtil.java:1753)
>  [flink-sql-connector-hbase-1.4_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]at 
> org.apache.flink.hbase.shaded.org.apache.hadoop.hbase.protobuf.ResponseConverter.getResults(ResponseConverter.java:157)
>  [flink-sql-connector-hbase-1.4_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]at 
> org.apache.flink.hbase.shaded.org.apache.hadoop.hbase.client.MultiServerCallable.call(MultiServerCallable.java:180)
>  [flink-sql-connector-hbase-1.4_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]at 
> org.apache.flink.hbase.shaded.org.apache.hadoop.hbase.client.MultiServerCallable.call(MultiServerCallable.java:53)
>  

[jira] [Created] (FLINK-20414) 【HBase】

2020-11-29 Thread zhisheng (Jira)
zhisheng created FLINK-20414:


 Summary: 【HBase】
 Key: FLINK-20414
 URL: https://issues.apache.org/jira/browse/FLINK-20414
 Project: Flink
  Issue Type: Bug
  Components: Connectors / HBase
Affects Versions: 1.12.0
Reporter: zhisheng
 Fix For: 1.12.0


{code:java}
CREATE TABLE yarn_log_datagen_test_hbase_sink (
 appid INT,
 message STRING
) WITH (
 'connector' = 'datagen',
 'rows-per-second'='10',
 'fields.appid.kind'='random',
 'fields.appid.min'='1',
 'fields.appid.max'='1000',
 'fields.message.length'='100'
);

CREATE TABLE hbase_test1 (
 rowkey INT,
 family1 ROW
) WITH (
 'connector' = 'hbase-1.4',
 'table-name' = 'test_flink',
 'zookeeper.quorum' = 'xxx:2181',
 'sink.parallelism' = '2',
 'sink.buffer-flush.interval' = '1',
 'sink.buffer-flush.max-rows' = '1',
 'sink.buffer-flush.max-size' = '1'
);

INSERT INTO hbase_test1 SELECT appid, ROW(message) FROM 
yarn_log_datagen_test_hbase_sink;

{code}
I ran the SQL and got an exception; the data was not written into HBase. I had
added flink-sql-connector-hbase-1.4_2.11-1.12-SNAPSHOT.jar to the lib folder.

 
{code:java}
2020-11-30 10:49:45,772 DEBUG 
org.apache.flink.hbase.shaded.org.apache.hadoop.hbase.util.DynamicClassLoader 
[] - Finding class again: 
org.apache.hadoop.hbase.regionserver.NoSuchColumnFamilyException2020-11-30 
10:49:45,774 DEBUG 
org.apache.flink.hbase.shaded.org.apache.hadoop.hbase.util.DynamicClassLoader 
[] - Class org.apache.hadoop.hbase.regionserver.NoSuchColumnFamilyException not 
found - using dynamical class loader2020-11-30 10:49:45,774 DEBUG 
org.apache.flink.hbase.shaded.org.apache.hadoop.hbase.util.DynamicClassLoader 
[] - Finding class: 
org.apache.hadoop.hbase.regionserver.NoSuchColumnFamilyException2020-11-30 
10:49:45,774 DEBUG 
org.apache.flink.hbase.shaded.org.apache.hadoop.hbase.util.DynamicClassLoader 
[] - Loading new jar files, if any2020-11-30 10:49:45,776 WARN  
org.apache.flink.hbase.shaded.org.apache.hadoop.hbase.util.DynamicClassLoader 
[] - Failed to check remote dir status 
/tmp/hbase-deploy/hbase/libjava.io.FileNotFoundException: File 
/tmp/hbase-deploy/hbase/lib does not exist.at 
org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:795)
 ~[hadoop-hdfs-2.7.3.4.jar:?]at 
org.apache.hadoop.hdfs.DistributedFileSystem.access$700(DistributedFileSystem.java:106)
 ~[hadoop-hdfs-2.7.3.4.jar:?]at 
org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:853)
 ~[hadoop-hdfs-2.7.3.4.jar:?]at 
org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:849)
 ~[hadoop-hdfs-2.7.3.4.jar:?]at 
org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
 ~[hadoop-common-2.7.3.jar:?]at 
org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:860)
 ~[hadoop-hdfs-2.7.3.4.jar:?]at 
org.apache.flink.hbase.shaded.org.apache.hadoop.hbase.util.DynamicClassLoader.loadNewJars(DynamicClassLoader.java:206)
 [flink-sql-connector-hbase-1.4_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]at 
org.apache.flink.hbase.shaded.org.apache.hadoop.hbase.util.DynamicClassLoader.tryRefreshClass(DynamicClassLoader.java:168)
 [flink-sql-connector-hbase-1.4_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]at 
org.apache.flink.hbase.shaded.org.apache.hadoop.hbase.util.DynamicClassLoader.loadClass(DynamicClassLoader.java:140)
 [flink-sql-connector-hbase-1.4_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]at 
java.lang.Class.forName0(Native Method) ~[?:1.8.0_92]at 
java.lang.Class.forName(Class.java:348) [?:1.8.0_92]at 
org.apache.flink.hbase.shaded.org.apache.hadoop.hbase.protobuf.ProtobufUtil.toException(ProtobufUtil.java:1753)
 [flink-sql-connector-hbase-1.4_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]at 
org.apache.flink.hbase.shaded.org.apache.hadoop.hbase.protobuf.ResponseConverter.getResults(ResponseConverter.java:157)
 [flink-sql-connector-hbase-1.4_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]at 
org.apache.flink.hbase.shaded.org.apache.hadoop.hbase.client.MultiServerCallable.call(MultiServerCallable.java:180)
 [flink-sql-connector-hbase-1.4_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]at 
org.apache.flink.hbase.shaded.org.apache.hadoop.hbase.client.MultiServerCallable.call(MultiServerCallable.java:53)
 [flink-sql-connector-hbase-1.4_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]at 
org.apache.flink.hbase.shaded.org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:219)
 [flink-sql-connector-hbase-1.4_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]at 
org.apache.flink.hbase.shaded.org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl$SingleServerRequestRunnable.run(AsyncProcess.java:806)
 [flink-sql-connector-hbase-1.4_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]at 

[jira] [Commented] (FLINK-20143) use `yarn.provided.lib.dirs` config deploy job failed in yarn per job mode

2020-11-17 Thread zhisheng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17233539#comment-17233539
 ] 

zhisheng commented on FLINK-20143:
--

Thanks [~fly_in_gis], it works well now. Thanks [~kkl0u] too. Will this be
pushed to 1.12.0?
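
For reference, the same option can also be set programmatically when building a cluster deployment; a minimal sketch assuming Flink 1.12's YarnConfigOptions.PROVIDED_LIB_DIRS (verify the option name against your version):

{code:java}
// Hypothetical sketch: set yarn.provided.lib.dirs in code instead of via
// -yD / -D. Assumes org.apache.flink.yarn.configuration.YarnConfigOptions
// from flink-yarn 1.12.
import java.util.Collections;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.yarn.configuration.YarnConfigOptions;

public class ProvidedLibDirs {
    public static Configuration withProvidedLibDirs() {
        Configuration conf = new Configuration();
        conf.set(YarnConfigOptions.PROVIDED_LIB_DIRS,
                Collections.singletonList("hdfs:///flink/flink-1.12-SNAPSHOT/lib"));
        return conf;
    }
}
{code}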

> use `yarn.provided.lib.dirs` config deploy job failed in yarn per job mode
> --
>
> Key: FLINK-20143
> URL: https://issues.apache.org/jira/browse/FLINK-20143
> Project: Flink
>  Issue Type: Bug
>  Components: Client / Job Submission, Deployment / YARN
>Affects Versions: 1.12.0, 1.11.2
>Reporter: zhisheng
>Assignee: Yang Wang
>Priority: Major
>  Labels: pull-request-available
> Attachments: image-2020-11-13-17-21-47-751.png, 
> image-2020-11-13-17-22-06-111.png, image-2020-11-13-18-43-55-188.png
>
>
> Deploying a Flink job to YARN with the following command failed:
> {code:java}
> ./bin/flink run -m yarn-cluster -d -ynm flink-1.12-test -ytm 3g -yjm 3g -yD 
> yarn.provided.lib.dirs=hdfs:///flink/flink-1.12-SNAPSHOT/lib
> ./examples/streaming/StateMachineExample.jar
> {code}
> log:
> {code:java}
> $ ./bin/flink run -m yarn-cluster -d -ynm flink-1.12-test -ytm 3g -yjm 3g -yD 
> yarn.provided.lib.dirs=hdfs:///flink/flink-1.12-SNAPSHOT/lib    
> ./examples/streaming/StateMachineExample.jar$ ./bin/flink run -m yarn-cluster 
> -d -ynm flink-1.12-test -ytm 3g -yjm 3g -yD 
> yarn.provided.lib.dirs=hdfs:///flink/flink-1.12-SNAPSHOT/lib    
> ./examples/streaming/StateMachineExample.jarSLF4J: Class path contains 
> multiple SLF4J bindings.SLF4J: Found binding in 
> [jar:file:/data1/app/flink-1.12-SNAPSHOT/lib/log4j-slf4j-impl-2.12.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]SLF4J:
>  Found binding in 
> [jar:file:/data1/app/hadoop-2.7.3-snappy-32core12disk/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]SLF4J:
>  Found binding in 
> [jar:file:/data1/app/hadoop-2.7.3-snappy-32core12disk/share/hadoop/tools/lib/hadoop-aliyun-2.9.2-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]SLF4J:
>  See http://www.slf4j.org/codes.html#multiple_bindings for an 
> explanation.SLF4J: Actual binding is of type 
> [org.apache.logging.slf4j.Log4jLoggerFactory]2020-11-13 16:14:30,347 INFO  
> org.apache.flink.yarn.cli.FlinkYarnSessionCli                [] - Dynamic 
> Property set: 
> yarn.provided.lib.dirs=hdfs:///flink/flink-1.12-SNAPSHOT/lib2020-11-13 
> 16:14:30,347 INFO  org.apache.flink.yarn.cli.FlinkYarnSessionCli              
>   [] - Dynamic Property set: 
> yarn.provided.lib.dirs=hdfs:///flink/flink-1.12-SNAPSHOT/libUsage with 
> built-in data generator: StateMachineExample [--error-rate 
> ] [--sleep ]Usage 
> with Kafka: StateMachineExample --kafka-topic  [--brokers 
> ]Options for both the above setups: [--backend ] 
> [--checkpoint-dir ] [--async-checkpoints ] 
> [--incremental-checkpoints ] [--output  OR null for 
> stdout]
> Using standalone source with error rate 0.00 and sleep delay 1 millis
> 2020-11-13 16:14:30,706 WARN  
> org.apache.flink.yarn.configuration.YarnLogConfigUtil        [] - The 
> configuration directory ('/data1/app/flink-1.12-SNAPSHOT/conf') already 
> contains a LOG4J config file.If you want to use logback, then please delete 
> or rename the log configuration file.2020-11-13 16:14:30,947 INFO  
> org.apache.hadoop.yarn.client.AHSProxy                       [] - Connecting 
> to Application History server at 
> FAT-hadoopuat-69117.vm.dc01.tech/10.69.1.17:102002020-11-13 16:14:30,958 INFO 
>  org.apache.flink.yarn.YarnClusterDescriptor                  [] - No path 
> for the flink jar passed. Using the location of class 
> org.apache.flink.yarn.YarnClusterDescriptor to locate the jar2020-11-13 
> 16:14:31,065 INFO  
> org.apache.hadoop.yarn.client.ConfiguredRMFailoverProxyProvider [] - Failing 
> over to rm22020-11-13 16:14:31,130 INFO  
> org.apache.flink.yarn.YarnClusterDescriptor                  [] - The 
> configured JobManager memory is 3072 MB. YARN will allocate 4096 MB to make 
> up an integer multiple of its minimum allocation memory (2048 MB, configured 
> via 'yarn.scheduler.minimum-allocation-mb'). The extra 1024 MB may not be 
> used by Flink.2020-11-13 16:14:31,130 INFO  
> org.apache.flink.yarn.YarnClusterDescriptor                  [] - The 
> configured TaskManager memory is 3072 MB. YARN will allocate 4096 MB to make 
> up an integer multiple of its minimum allocation memory (2048 MB, configured 
> via 'yarn.scheduler.minimum-allocation-mb'). The extra 1024 MB may not be 
> used by Flink.2020-11-13 16:14:31,130 INFO  
> org.apache.flink.yarn.YarnClusterDescriptor                  [] - Cluster 
> specification: ClusterSpecification{masterMemoryMB=3072, 
> taskManagerMemoryMB=3072, slotsPerTaskManager=2}2020-11-13 

[jira] [Commented] (FLINK-20109) java.lang.ClassNotFoundException: org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer

2020-11-13 Thread zhisheng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17231377#comment-17231377
 ] 

zhisheng commented on FLINK-20109:
--

Could you please show your SQL and DDL?
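
(In the meantime, a minimal, hypothetical check for whether the connector class is visible to the current class loader; the class name is taken from the report below:)

{code:java}
// Hypothetical diagnostic, not from the ticket: try to resolve the Kafka
// connector class from the context class loader. A ClassNotFoundException
// here reproduces the error in the report.
public class ConnectorVisibilityCheck {
    public static void main(String[] args) {
        String cls = "org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer";
        try {
            Class.forName(cls, false, Thread.currentThread().getContextClassLoader());
            System.out.println(cls + " is visible");
        } catch (ClassNotFoundException e) {
            System.out.println(cls + " is NOT visible: " + e);
        }
    }
}
{code}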

> java.lang.ClassNotFoundException: 
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer
> 
>
> Key: FLINK-20109
> URL: https://issues.apache.org/jira/browse/FLINK-20109
> Project: Flink
>  Issue Type: Bug
>  Components: Client / Job Submission
>Affects Versions: 1.11.2
> Environment: macos 10.14.6 
> flink 1.11.2
>Reporter: xuhaoran
>Priority: Major
>
> Following the example on the official website, loading the Kafka connector
> requires downloading flink-sql-connector-kafka_2.11-1.11.2.jar.
> The startup command is then ./sql-client.sh embedded -lib xxx/
> Then I create a Kafka table with a simple SQL DDL statement.
> Everything executes fine.
>  
> But if flink-sql-connector-kafka_2.11-1.11.2.jar is placed on the classpath
> and ./sql-client.sh embedded is started, ps -ef | grep java also shows that
> the current classpath has loaded flink-sql-connector-kafka_2.11-1.11.2.jar.
> Running the earlier Kafka table DDL and then SELECT * FROM the table
> fails with java.lang.ClassNotFoundException:
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer
> I suspect there is a problem with the class loader.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (FLINK-20143) use `yarn.provided.lib.dirs` config deploy job failed in yarn per job mode

2020-11-13 Thread zhisheng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17231369#comment-17231369
 ] 

zhisheng edited comment on FLINK-20143 at 11/13/20, 10:51 AM:
--

{code:java}
2020-11-13 18:46:43,014 INFO  org.apache.flink.client.cli.CliFrontend          
            [] - 
2020-11-13
 18:46:43,014 INFO  org.apache.flink.client.cli.CliFrontend                     
 [] - 
2020-11-13
 18:46:43,019 INFO  org.apache.flink.client.cli.CliFrontend                     
 [] -  Starting Command Line Client (Version: 1.12-SNAPSHOT, Scala: 2.11, 
Rev:c55420b, Date:2020-11-05T05:29:49+01:00)2020-11-13 18:46:43,019 INFO  
org.apache.flink.client.cli.CliFrontend                      [] -  OS current 
user: deploy2020-11-13 18:46:43,415 INFO  
org.apache.flink.client.cli.CliFrontend                      [] -  Current 
Hadoop/Kerberos user: deploy2020-11-13 18:46:43,416 INFO  
org.apache.flink.client.cli.CliFrontend                      [] -  JVM: Java 
HotSpot(TM) 64-Bit Server VM - Oracle Corporation - 1.8/25.92-b142020-11-13 
18:46:43,416 INFO  org.apache.flink.client.cli.CliFrontend                      
[] -  Maximum heap size: 7136 MiBytes2020-11-13 18:46:43,416 INFO  
org.apache.flink.client.cli.CliFrontend                      [] -  JAVA_HOME: 
/app/jdk/2020-11-13 18:46:43,418 INFO  org.apache.flink.client.cli.CliFrontend  
                    [] -  Hadoop version: 2.7.32020-11-13 18:46:43,418 INFO  
org.apache.flink.client.cli.CliFrontend                      [] -  JVM 
Options:2020-11-13 18:46:43,418 INFO  org.apache.flink.client.cli.CliFrontend   
                   [] -     
-Dlog.file=/data1/app/flink-1.12-SNAPSHOT/log/flink-deploy-client-FAT-hadoopuat-69120.vm.dc01.
 .tech.log2020-11-13 18:46:43,418 INFO  org.apache.flink.client.cli.CliFrontend 
                     [] -     
-Dlog4j.configuration=file:/data1/app/flink-1.12-SNAPSHOT/conf/log4j-cli.properties2020-11-13
 18:46:43,418 INFO  org.apache.flink.client.cli.CliFrontend                     
 [] -     
-Dlog4j.configurationFile=file:/data1/app/flink-1.12-SNAPSHOT/conf/log4j-cli.properties2020-11-13
 18:46:43,418 INFO  org.apache.flink.client.cli.CliFrontend                     
 [] -     
-Dlogback.configurationFile=file:/data1/app/flink-1.12-SNAPSHOT/conf/logback.xml2020-11-13
 18:46:43,419 INFO  org.apache.flink.client.cli.CliFrontend                     
 [] -  Program Arguments:2020-11-13 18:46:43,420 INFO  
org.apache.flink.client.cli.CliFrontend                      [] -     
run2020-11-13 18:46:43,420 INFO  org.apache.flink.client.cli.CliFrontend        
              [] -     -t2020-11-13 18:46:43,421 INFO  
org.apache.flink.client.cli.CliFrontend                      [] -     
yarn-per-job2020-11-13 18:46:43,421 INFO  
org.apache.flink.client.cli.CliFrontend                      [] -     
-Dexecution.attached=false2020-11-13 18:46:43,421 INFO  
org.apache.flink.client.cli.CliFrontend                      [] -     
-Dyarn.provided.lib.dirs=hdfs:///flink/flink-1.12-SNAPSHOT/lib2020-11-13 
18:46:43,421 INFO  org.apache.flink.client.cli.CliFrontend                      
[] -     ./examples/streaming/StateMachineExample.jar2020-11-13 18:46:43,421 
INFO  org.apache.flink.client.cli.CliFrontend                      [] -  
Classpath: 

[jira] [Commented] (FLINK-20143) use `yarn.provided.lib.dirs` config deploy job failed in yarn per job mode

2020-11-13 Thread zhisheng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17231369#comment-17231369
 ] 

zhisheng commented on FLINK-20143:
--

{code:java}
2020-11-13 18:46:43,014 INFO  org.apache.flink.client.cli.CliFrontend          
            [] - 
2020-11-13
 18:46:43,014 INFO  org.apache.flink.client.cli.CliFrontend                     
 [] - 
2020-11-13
 18:46:43,019 INFO  org.apache.flink.client.cli.CliFrontend                     
 [] -  Starting Command Line Client (Version: 1.12-SNAPSHOT, Scala: 2.11, 
Rev:c55420b, Date:2020-11-05T05:29:49+01:00)2020-11-13 18:46:43,019 INFO  
org.apache.flink.client.cli.CliFrontend                      [] -  OS current 
user: deploy2020-11-13 18:46:43,415 INFO  
org.apache.flink.client.cli.CliFrontend                      [] -  Current 
Hadoop/Kerberos user: deploy2020-11-13 18:46:43,416 INFO  
org.apache.flink.client.cli.CliFrontend                      [] -  JVM: Java 
HotSpot(TM) 64-Bit Server VM - Oracle Corporation - 1.8/25.92-b142020-11-13 
18:46:43,416 INFO  org.apache.flink.client.cli.CliFrontend                      
[] -  Maximum heap size: 7136 MiBytes2020-11-13 18:46:43,416 INFO  
org.apache.flink.client.cli.CliFrontend                      [] -  JAVA_HOME: 
/app/jdk/2020-11-13 18:46:43,418 INFO  org.apache.flink.client.cli.CliFrontend  
                    [] -  Hadoop version: 2.7.32020-11-13 18:46:43,418 INFO  
org.apache.flink.client.cli.CliFrontend                      [] -  JVM 
Options:2020-11-13 18:46:43,418 INFO  org.apache.flink.client.cli.CliFrontend   
                   [] -     
-Dlog.file=/data1/app/flink-1.12-SNAPSHOT/log/flink-deploy-client-FAT-hadoopuat-69120.vm.dc01.
 .tech.log2020-11-13 18:46:43,418 INFO  org.apache.flink.client.cli.CliFrontend 
                     [] -     
-Dlog4j.configuration=file:/data1/app/flink-1.12-SNAPSHOT/conf/log4j-cli.properties2020-11-13
 18:46:43,418 INFO  org.apache.flink.client.cli.CliFrontend                     
 [] -     
-Dlog4j.configurationFile=file:/data1/app/flink-1.12-SNAPSHOT/conf/log4j-cli.properties2020-11-13
 18:46:43,418 INFO  org.apache.flink.client.cli.CliFrontend                     
 [] -     
-Dlogback.configurationFile=file:/data1/app/flink-1.12-SNAPSHOT/conf/logback.xml2020-11-13
 18:46:43,419 INFO  org.apache.flink.client.cli.CliFrontend                     
 [] -  Program Arguments:2020-11-13 18:46:43,420 INFO  
org.apache.flink.client.cli.CliFrontend                      [] -     
run2020-11-13 18:46:43,420 INFO  org.apache.flink.client.cli.CliFrontend        
              [] -     -t2020-11-13 18:46:43,421 INFO  
org.apache.flink.client.cli.CliFrontend                      [] -     
yarn-per-job2020-11-13 18:46:43,421 INFO  
org.apache.flink.client.cli.CliFrontend                      [] -     
-Dexecution.attached=false2020-11-13 18:46:43,421 INFO  
org.apache.flink.client.cli.CliFrontend                      [] -     
-Dyarn.provided.lib.dirs=hdfs:///flink/flink-1.12-SNAPSHOT/lib2020-11-13 
18:46:43,421 INFO  org.apache.flink.client.cli.CliFrontend                      
[] -     ./examples/streaming/StateMachineExample.jar2020-11-13 18:46:43,421 
INFO  org.apache.flink.client.cli.CliFrontend                      [] -  
Classpath: 

[jira] [Commented] (FLINK-20143) use `yarn.provided.lib.dirs` config deploy job failed in yarn per job mode

2020-11-13 Thread zhisheng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17231365#comment-17231365
 ] 

zhisheng commented on FLINK-20143:
--

!image-2020-11-13-18-43-55-188.png!

 

It does not have any log, as I said just now ;)

> use `yarn.provided.lib.dirs` config deploy job failed in yarn per job mode
> --
>
> Key: FLINK-20143
> URL: https://issues.apache.org/jira/browse/FLINK-20143
> Project: Flink
>  Issue Type: Bug
>  Components: Client / Job Submission, Deployment / YARN
>Affects Versions: 1.12.0
>Reporter: zhisheng
>Priority: Major
> Attachments: image-2020-11-13-17-21-47-751.png, 
> image-2020-11-13-17-22-06-111.png, image-2020-11-13-18-43-55-188.png
>
>
> Deploying a Flink job to YARN with the following command failed:
> {code:java}
> ./bin/flink run -m yarn-cluster -d -ynm flink-1.12-test -ytm 3g -yjm 3g -yD 
> yarn.provided.lib.dirs=hdfs:///flink/flink-1.12-SNAPSHOT/lib
> ./examples/streaming/StateMachineExample.jar
> {code}
> log:
> {code:java}
> $ ./bin/flink run -m yarn-cluster -d -ynm flink-1.12-test -ytm 3g -yjm 3g -yD 
> yarn.provided.lib.dirs=hdfs:///flink/flink-1.12-SNAPSHOT/lib    
> ./examples/streaming/StateMachineExample.jar$ ./bin/flink run -m yarn-cluster 
> -d -ynm flink-1.12-test -ytm 3g -yjm 3g -yD 
> yarn.provided.lib.dirs=hdfs:///flink/flink-1.12-SNAPSHOT/lib    
> ./examples/streaming/StateMachineExample.jarSLF4J: Class path contains 
> multiple SLF4J bindings.SLF4J: Found binding in 
> [jar:file:/data1/app/flink-1.12-SNAPSHOT/lib/log4j-slf4j-impl-2.12.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]SLF4J:
>  Found binding in 
> [jar:file:/data1/app/hadoop-2.7.3-snappy-32core12disk/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]SLF4J:
>  Found binding in 
> [jar:file:/data1/app/hadoop-2.7.3-snappy-32core12disk/share/hadoop/tools/lib/hadoop-aliyun-2.9.2-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]SLF4J:
>  See http://www.slf4j.org/codes.html#multiple_bindings for an 
> explanation.SLF4J: Actual binding is of type 
> [org.apache.logging.slf4j.Log4jLoggerFactory]2020-11-13 16:14:30,347 INFO  
> org.apache.flink.yarn.cli.FlinkYarnSessionCli                [] - Dynamic 
> Property set: 
> yarn.provided.lib.dirs=hdfs:///flink/flink-1.12-SNAPSHOT/lib2020-11-13 
> 16:14:30,347 INFO  org.apache.flink.yarn.cli.FlinkYarnSessionCli              
>   [] - Dynamic Property set: 
> yarn.provided.lib.dirs=hdfs:///flink/flink-1.12-SNAPSHOT/libUsage with 
> built-in data generator: StateMachineExample [--error-rate 
> ] [--sleep ]Usage 
> with Kafka: StateMachineExample --kafka-topic  [--brokers 
> ]Options for both the above setups: [--backend ] 
> [--checkpoint-dir ] [--async-checkpoints ] 
> [--incremental-checkpoints ] [--output  OR null for 
> stdout]
> Using standalone source with error rate 0.00 and sleep delay 1 millis
> 2020-11-13 16:14:30,706 WARN  
> org.apache.flink.yarn.configuration.YarnLogConfigUtil        [] - The 
> configuration directory ('/data1/app/flink-1.12-SNAPSHOT/conf') already 
> contains a LOG4J config file.If you want to use logback, then please delete 
> or rename the log configuration file.2020-11-13 16:14:30,947 INFO  
> org.apache.hadoop.yarn.client.AHSProxy                       [] - Connecting 
> to Application History server at 
> FAT-hadoopuat-69117.vm.dc01.tech/10.69.1.17:102002020-11-13 16:14:30,958 INFO 
>  org.apache.flink.yarn.YarnClusterDescriptor                  [] - No path 
> for the flink jar passed. Using the location of class 
> org.apache.flink.yarn.YarnClusterDescriptor to locate the jar2020-11-13 
> 16:14:31,065 INFO  
> org.apache.hadoop.yarn.client.ConfiguredRMFailoverProxyProvider [] - Failing 
> over to rm22020-11-13 16:14:31,130 INFO  
> org.apache.flink.yarn.YarnClusterDescriptor                  [] - The 
> configured JobManager memory is 3072 MB. YARN will allocate 4096 MB to make 
> up an integer multiple of its minimum allocation memory (2048 MB, configured 
> via 'yarn.scheduler.minimum-allocation-mb'). The extra 1024 MB may not be 
> used by Flink.2020-11-13 16:14:31,130 INFO  
> org.apache.flink.yarn.YarnClusterDescriptor                  [] - The 
> configured TaskManager memory is 3072 MB. YARN will allocate 4096 MB to make 
> up an integer multiple of its minimum allocation memory (2048 MB, configured 
> via 'yarn.scheduler.minimum-allocation-mb'). The extra 1024 MB may not be 
> used by Flink.2020-11-13 16:14:31,130 INFO  
> org.apache.flink.yarn.YarnClusterDescriptor                  [] - Cluster 
> specification: ClusterSpecification{masterMemoryMB=3072, 
> taskManagerMemoryMB=3072, slotsPerTaskManager=2}2020-11-13 16:14:31,681 WARN  
> org.apache.hadoop.hdfs.shortcircuit.DomainSocketFactory      [] - The 
> 

[jira] [Updated] (FLINK-20143) use `yarn.provided.lib.dirs` config deploy job failed in yarn per job mode

2020-11-13 Thread zhisheng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhisheng updated FLINK-20143:
-
Attachment: image-2020-11-13-18-43-55-188.png

> use `yarn.provided.lib.dirs` config deploy job failed in yarn per job mode
> --
>
> Key: FLINK-20143
> URL: https://issues.apache.org/jira/browse/FLINK-20143
> Project: Flink
>  Issue Type: Bug
>  Components: Client / Job Submission, Deployment / YARN
>Affects Versions: 1.12.0
>Reporter: zhisheng
>Priority: Major
> Attachments: image-2020-11-13-17-21-47-751.png, 
> image-2020-11-13-17-22-06-111.png, image-2020-11-13-18-43-55-188.png
>
>
> Deploying a Flink job to YARN with the following command failed:
> {code:java}
> ./bin/flink run -m yarn-cluster -d -ynm flink-1.12-test -ytm 3g -yjm 3g -yD 
> yarn.provided.lib.dirs=hdfs:///flink/flink-1.12-SNAPSHOT/lib
> ./examples/streaming/StateMachineExample.jar
> {code}
> log:
> {code:java}
> $ ./bin/flink run -m yarn-cluster -d -ynm flink-1.12-test -ytm 3g -yjm 3g -yD 
> yarn.provided.lib.dirs=hdfs:///flink/flink-1.12-SNAPSHOT/lib    
> ./examples/streaming/StateMachineExample.jar$ ./bin/flink run -m yarn-cluster 
> -d -ynm flink-1.12-test -ytm 3g -yjm 3g -yD 
> yarn.provided.lib.dirs=hdfs:///flink/flink-1.12-SNAPSHOT/lib    
> ./examples/streaming/StateMachineExample.jarSLF4J: Class path contains 
> multiple SLF4J bindings.SLF4J: Found binding in 
> [jar:file:/data1/app/flink-1.12-SNAPSHOT/lib/log4j-slf4j-impl-2.12.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]SLF4J:
>  Found binding in 
> [jar:file:/data1/app/hadoop-2.7.3-snappy-32core12disk/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]SLF4J:
>  Found binding in 
> [jar:file:/data1/app/hadoop-2.7.3-snappy-32core12disk/share/hadoop/tools/lib/hadoop-aliyun-2.9.2-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]SLF4J:
>  See http://www.slf4j.org/codes.html#multiple_bindings for an 
> explanation.SLF4J: Actual binding is of type 
> [org.apache.logging.slf4j.Log4jLoggerFactory]2020-11-13 16:14:30,347 INFO  
> org.apache.flink.yarn.cli.FlinkYarnSessionCli                [] - Dynamic 
> Property set: 
> yarn.provided.lib.dirs=hdfs:///flink/flink-1.12-SNAPSHOT/lib2020-11-13 
> 16:14:30,347 INFO  org.apache.flink.yarn.cli.FlinkYarnSessionCli              
>   [] - Dynamic Property set: 
> yarn.provided.lib.dirs=hdfs:///flink/flink-1.12-SNAPSHOT/libUsage with 
> built-in data generator: StateMachineExample [--error-rate 
> ] [--sleep ]Usage 
> with Kafka: StateMachineExample --kafka-topic  [--brokers 
> ]Options for both the above setups: [--backend ] 
> [--checkpoint-dir ] [--async-checkpoints ] 
> [--incremental-checkpoints ] [--output  OR null for 
> stdout]
> Using standalone source with error rate 0.00 and sleep delay 1 millis
> 2020-11-13 16:14:30,706 WARN  
> org.apache.flink.yarn.configuration.YarnLogConfigUtil        [] - The 
> configuration directory ('/data1/app/flink-1.12-SNAPSHOT/conf') already 
> contains a LOG4J config file.If you want to use logback, then please delete 
> or rename the log configuration file.2020-11-13 16:14:30,947 INFO  
> org.apache.hadoop.yarn.client.AHSProxy                       [] - Connecting 
> to Application History server at 
> FAT-hadoopuat-69117.vm.dc01.tech/10.69.1.17:102002020-11-13 16:14:30,958 INFO 
>  org.apache.flink.yarn.YarnClusterDescriptor                  [] - No path 
> for the flink jar passed. Using the location of class 
> org.apache.flink.yarn.YarnClusterDescriptor to locate the jar2020-11-13 
> 16:14:31,065 INFO  
> org.apache.hadoop.yarn.client.ConfiguredRMFailoverProxyProvider [] - Failing 
> over to rm22020-11-13 16:14:31,130 INFO  
> org.apache.flink.yarn.YarnClusterDescriptor                  [] - The 
> configured JobManager memory is 3072 MB. YARN will allocate 4096 MB to make 
> up an integer multiple of its minimum allocation memory (2048 MB, configured 
> via 'yarn.scheduler.minimum-allocation-mb'). The extra 1024 MB may not be 
> used by Flink.2020-11-13 16:14:31,130 INFO  
> org.apache.flink.yarn.YarnClusterDescriptor                  [] - The 
> configured TaskManager memory is 3072 MB. YARN will allocate 4096 MB to make 
> up an integer multiple of its minimum allocation memory (2048 MB, configured 
> via 'yarn.scheduler.minimum-allocation-mb'). The extra 1024 MB may not be 
> used by Flink.2020-11-13 16:14:31,130 INFO  
> org.apache.flink.yarn.YarnClusterDescriptor                  [] - Cluster 
> specification: ClusterSpecification{masterMemoryMB=3072, 
> taskManagerMemoryMB=3072, slotsPerTaskManager=2}2020-11-13 16:14:31,681 WARN  
> org.apache.hadoop.hdfs.shortcircuit.DomainSocketFactory      [] - The 
> short-circuit local reads feature cannot be used because libhadoop cannot be 

[jira] [Comment Edited] (FLINK-20143) use `yarn.provided.lib.dirs` config deploy job failed in yarn per job mode

2020-11-13 Thread zhisheng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17231346#comment-17231346
 ] 

zhisheng edited comment on FLINK-20143 at 11/13/20, 10:12 AM:
--

{code:java}
$ ./bin/flink run -t yarn-per-job -Dexecution.attached=false 
-Dyarn.provided.lib.dirs="hdfs:///flink/flink-1.12-SNAPSHOT/lib" 
./examples/streaming/StateMachineExample.jar

SLF4J: Class path contains multiple SLF4J bindings.SLF4J: Found binding in 
[jar:file:/data1/app/flink-1.12-SNAPSHOT/lib/log4j-slf4j-impl-2.12.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]SLF4J:
 Found binding in 
[jar:file:/data1/app/hadoop-2.7.3-snappy-32core12disk/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]SLF4J:
 Found binding in 
[jar:file:/data1/app/hadoop-2.7.3-snappy-32core12disk/share/hadoop/tools/lib/hadoop-aliyun-2.9.2-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]SLF4J:
 See http://www.slf4j.org/codes.html#multiple_bindings for an 
explanation.SLF4J: Actual binding is of type 
[org.apache.logging.slf4j.Log4jLoggerFactory]Usage with built-in data 
generator: StateMachineExample [--error-rate 
] [--sleep ]Usage 
with Kafka: StateMachineExample --kafka-topic  [--brokers 
]Options for both the above setups: [--backend ] 
[--checkpoint-dir ] [--async-checkpoints ] 
[--incremental-checkpoints ] [--output  OR null for 
stdout]
Using standalone source with error rate 0.00 and sleep delay 1 millis
2020-11-13 18:05:51,974 WARN  
org.apache.flink.yarn.configuration.YarnLogConfigUtil        [] - The 
configuration directory ('/data1/app/flink-1.12-SNAPSHOT/conf') already 
contains a LOG4J config file.If you want to use logback, then please delete or 
rename the log configuration file.2020-11-13 18:05:52,202 INFO  
org.apache.hadoop.yarn.client.AHSProxy                       [] - Connecting to 
Application History server at 
FAT-hadoopuat-69117.vm.dc01.tech/10.69.1.17:102002020-11-13 18:05:52,213 INFO  
org.apache.flink.yarn.YarnClusterDescriptor                  [] - No path for 
the flink jar passed. Using the location of class 
org.apache.flink.yarn.YarnClusterDescriptor to locate the jar2020-11-13 
18:05:52,324 INFO  
org.apache.hadoop.yarn.client.ConfiguredRMFailoverProxyProvider [] - Failing 
over to rm22020-11-13 18:05:52,387 INFO  
org.apache.flink.yarn.YarnClusterDescriptor                  [] - The 
configured JobManager memory is 1600 MB. YARN will allocate 2048 MB to make up 
an integer multiple of its minimum allocation memory (2048 MB, configured via 
'yarn.scheduler.minimum-allocation-mb'). The extra 448 MB may not be used by 
Flink.2020-11-13 18:05:52,388 INFO  org.apache.flink.yarn.YarnClusterDescriptor 
                 [] - The configured TaskManager memory is 1728 MB. YARN will 
allocate 2048 MB to make up an integer multiple of its minimum allocation 
memory (2048 MB, configured via 'yarn.scheduler.minimum-allocation-mb'). The 
extra 320 MB may not be used by Flink.2020-11-13 18:05:52,388 INFO  
org.apache.flink.yarn.YarnClusterDescriptor                  [] - Cluster 
specification: ClusterSpecification{masterMemoryMB=2048, 
taskManagerMemoryMB=1728, slotsPerTaskManager=2}2020-11-13 18:05:52,932 WARN  
org.apache.hadoop.hdfs.shortcircuit.DomainSocketFactory      [] - The 
short-circuit local reads feature cannot be used because libhadoop cannot be 
loaded.2020-11-13 18:05:55,076 INFO  
org.apache.flink.yarn.YarnClusterDescriptor                  [] - Submitting 
application master application_1599741232083_220112020-11-13 18:05:55,307 INFO  
org.apache.hadoop.yarn.client.api.impl.YarnClientImpl        [] - Submitted 
application application_1599741232083_220112020-11-13 18:05:55,308 INFO  
org.apache.flink.yarn.YarnClusterDescriptor                  [] - Waiting for 
the cluster to be allocated2020-11-13 18:05:55,310 INFO  
org.apache.flink.yarn.YarnClusterDescriptor                  [] - Deploying 
cluster, current state ACCEPTED
 The program 
finished with the following exception:
org.apache.flink.client.program.ProgramInvocationException: The main method 
caused an error: Could not deploy Yarn job cluster. at 
org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:330)
 at 
org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:198)
 at org.apache.flink.client.ClientUtils.executeProgram(ClientUtils.java:114) at 
org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:743) at 
org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:242) at 
org.apache.flink.client.cli.CliFrontend.parseAndRun(CliFrontend.java:971) at 
org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:1047) 
at java.security.AccessController.doPrivileged(Native Method) at 

[jira] [Comment Edited] (FLINK-20143) use `yarn.provided.lib.dirs` config deploy job failed in yarn per job mode

2020-11-13 Thread zhisheng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17231346#comment-17231346
 ] 

zhisheng edited comment on FLINK-20143 at 11/13/20, 10:10 AM:
--

{code:java}
$ ./bin/flink run -t yarn-per-job -Dexecution.attached=false 
-Dyarn.provided.lib.dirs="hdfs:///flink/flink-1.12-SNAPSHOT/lib" 
./examples/streaming/StateMachineExample.jar$ ./bin/flink run -t yarn-per-job 
-Dexecution.attached=false 
-Dyarn.provided.lib.dirs="hdfs:///flink/flink-1.12-SNAPSHOT/lib" 
./examples/streaming/StateMachineExample.jar

SLF4J: Class path contains multiple SLF4J bindings.SLF4J: Found binding in 
[jar:file:/data1/app/flink-1.12-SNAPSHOT/lib/log4j-slf4j-impl-2.12.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]SLF4J:
 Found binding in 
[jar:file:/data1/app/hadoop-2.7.3-snappy-32core12disk/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]SLF4J:
 Found binding in 
[jar:file:/data1/app/hadoop-2.7.3-snappy-32core12disk/share/hadoop/tools/lib/hadoop-aliyun-2.9.2-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]SLF4J:
 See http://www.slf4j.org/codes.html#multiple_bindings for an 
explanation.SLF4J: Actual binding is of type 
[org.apache.logging.slf4j.Log4jLoggerFactory]Usage with built-in data 
generator: StateMachineExample [--error-rate 
] [--sleep ]Usage 
with Kafka: StateMachineExample --kafka-topic  [--brokers 
]Options for both the above setups: [--backend ] 
[--checkpoint-dir ] [--async-checkpoints ] 
[--incremental-checkpoints ] [--output  OR null for 
stdout]
Using standalone source with error rate 0.00 and sleep delay 1 millis
2020-11-13 18:05:51,974 WARN  
org.apache.flink.yarn.configuration.YarnLogConfigUtil        [] - The 
configuration directory ('/data1/app/flink-1.12-SNAPSHOT/conf') already 
contains a LOG4J config file.If you want to use logback, then please delete or 
rename the log configuration file.2020-11-13 18:05:52,202 INFO  
org.apache.hadoop.yarn.client.AHSProxy                       [] - Connecting to 
Application History server at 
FAT-hadoopuat-69117.vm.dc01.hellocloud.tech/10.69.1.17:102002020-11-13 
18:05:52,213 INFO  org.apache.flink.yarn.YarnClusterDescriptor                  
[] - No path for the flink jar passed. Using the location of class 
org.apache.flink.yarn.YarnClusterDescriptor to locate the jar2020-11-13 
18:05:52,324 INFO  
org.apache.hadoop.yarn.client.ConfiguredRMFailoverProxyProvider [] - Failing 
over to rm22020-11-13 18:05:52,387 INFO  
org.apache.flink.yarn.YarnClusterDescriptor                  [] - The 
configured JobManager memory is 1600 MB. YARN will allocate 2048 MB to make up 
an integer multiple of its minimum allocation memory (2048 MB, configured via 
'yarn.scheduler.minimum-allocation-mb'). The extra 448 MB may not be used by 
Flink.2020-11-13 18:05:52,388 INFO  org.apache.flink.yarn.YarnClusterDescriptor 
                 [] - The configured TaskManager memory is 1728 MB. YARN will 
allocate 2048 MB to make up an integer multiple of its minimum allocation 
memory (2048 MB, configured via 'yarn.scheduler.minimum-allocation-mb'). The 
extra 320 MB may not be used by Flink.2020-11-13 18:05:52,388 INFO  
org.apache.flink.yarn.YarnClusterDescriptor                  [] - Cluster 
specification: ClusterSpecification{masterMemoryMB=2048, 
taskManagerMemoryMB=1728, slotsPerTaskManager=2}2020-11-13 18:05:52,932 WARN  
org.apache.hadoop.hdfs.shortcircuit.DomainSocketFactory      [] - The 
short-circuit local reads feature cannot be used because libhadoop cannot be 
loaded.2020-11-13 18:05:55,076 INFO  
org.apache.flink.yarn.YarnClusterDescriptor                  [] - Submitting 
application master application_1599741232083_220112020-11-13 18:05:55,307 INFO  
org.apache.hadoop.yarn.client.api.impl.YarnClientImpl        [] - Submitted 
application application_1599741232083_220112020-11-13 18:05:55,308 INFO  
org.apache.flink.yarn.YarnClusterDescriptor                  [] - Waiting for 
the cluster to be allocated2020-11-13 18:05:55,310 INFO  
org.apache.flink.yarn.YarnClusterDescriptor                  [] - Deploying 
cluster, current state ACCEPTED
 The program 
finished with the following exception:
org.apache.flink.client.program.ProgramInvocationException: The main method 
caused an error: Could not deploy Yarn job cluster. at 
org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:330)
 at 
org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:198)
 at org.apache.flink.client.ClientUtils.executeProgram(ClientUtils.java:114) at 
org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:743) at 
org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:242) at 

[jira] [Updated] (FLINK-20143) use `yarn.provided.lib.dirs` config deploy job failed in yarn per job mode

2020-11-13 Thread zhisheng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhisheng updated FLINK-20143:
-
Description: 
Deploying a Flink job to YARN with the following command failed:
{code:java}
./bin/flink run -m yarn-cluster -d -ynm flink-1.12-test -ytm 3g -yjm 3g -yD 
yarn.provided.lib.dirs=hdfs:///flink/flink-1.12-SNAPSHOT/lib
./examples/streaming/StateMachineExample.jar
{code}
log:
{code:java}
$ ./bin/flink run -m yarn-cluster -d -ynm flink-1.12-test -ytm 3g -yjm 3g -yD 
yarn.provided.lib.dirs=hdfs:///flink/flink-1.12-SNAPSHOT/lib    
./examples/streaming/StateMachineExample.jar$ ./bin/flink run -m yarn-cluster 
-d -ynm flink-1.12-test -ytm 3g -yjm 3g -yD 
yarn.provided.lib.dirs=hdfs:///flink/flink-1.12-SNAPSHOT/lib    
./examples/streaming/StateMachineExample.jarSLF4J: Class path contains multiple 
SLF4J bindings.SLF4J: Found binding in 
[jar:file:/data1/app/flink-1.12-SNAPSHOT/lib/log4j-slf4j-impl-2.12.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]SLF4J:
 Found binding in 
[jar:file:/data1/app/hadoop-2.7.3-snappy-32core12disk/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]SLF4J:
 Found binding in 
[jar:file:/data1/app/hadoop-2.7.3-snappy-32core12disk/share/hadoop/tools/lib/hadoop-aliyun-2.9.2-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]SLF4J:
 See http://www.slf4j.org/codes.html#multiple_bindings for an 
explanation.SLF4J: Actual binding is of type 
[org.apache.logging.slf4j.Log4jLoggerFactory]2020-11-13 16:14:30,347 INFO  
org.apache.flink.yarn.cli.FlinkYarnSessionCli                [] - Dynamic 
Property set: 
yarn.provided.lib.dirs=hdfs:///flink/flink-1.12-SNAPSHOT/lib2020-11-13 
16:14:30,347 INFO  org.apache.flink.yarn.cli.FlinkYarnSessionCli                
[] - Dynamic Property set: 
yarn.provided.lib.dirs=hdfs:///flink/flink-1.12-SNAPSHOT/libUsage with built-in 
data generator: StateMachineExample [--error-rate 
] [--sleep ]Usage 
with Kafka: StateMachineExample --kafka-topic  [--brokers 
]Options for both the above setups: [--backend ] 
[--checkpoint-dir ] [--async-checkpoints ] 
[--incremental-checkpoints ] [--output  OR null for 
stdout]
Using standalone source with error rate 0.00 and sleep delay 1 millis
2020-11-13 16:14:30,706 WARN  
org.apache.flink.yarn.configuration.YarnLogConfigUtil        [] - The 
configuration directory ('/data1/app/flink-1.12-SNAPSHOT/conf') already 
contains a LOG4J config file.If you want to use logback, then please delete or 
rename the log configuration file.2020-11-13 16:14:30,947 INFO  
org.apache.hadoop.yarn.client.AHSProxy                       [] - Connecting to 
Application History server at 
FAT-hadoopuat-69117.vm.dc01.tech/10.69.1.17:102002020-11-13 16:14:30,958 INFO  
org.apache.flink.yarn.YarnClusterDescriptor                  [] - No path for 
the flink jar passed. Using the location of class 
org.apache.flink.yarn.YarnClusterDescriptor to locate the jar2020-11-13 
16:14:31,065 INFO  
org.apache.hadoop.yarn.client.ConfiguredRMFailoverProxyProvider [] - Failing 
over to rm22020-11-13 16:14:31,130 INFO  
org.apache.flink.yarn.YarnClusterDescriptor                  [] - The 
configured JobManager memory is 3072 MB. YARN will allocate 4096 MB to make up 
an integer multiple of its minimum allocation memory (2048 MB, configured via 
'yarn.scheduler.minimum-allocation-mb'). The extra 1024 MB may not be used by 
Flink.2020-11-13 16:14:31,130 INFO  org.apache.flink.yarn.YarnClusterDescriptor 
                 [] - The configured TaskManager memory is 3072 MB. YARN will 
allocate 4096 MB to make up an integer multiple of its minimum allocation 
memory (2048 MB, configured via 'yarn.scheduler.minimum-allocation-mb'). The 
extra 1024 MB may not be used by Flink.2020-11-13 16:14:31,130 INFO  
org.apache.flink.yarn.YarnClusterDescriptor                  [] - Cluster 
specification: ClusterSpecification{masterMemoryMB=3072, 
taskManagerMemoryMB=3072, slotsPerTaskManager=2}2020-11-13 16:14:31,681 WARN  
org.apache.hadoop.hdfs.shortcircuit.DomainSocketFactory      [] - The 
short-circuit local reads feature cannot be used because libhadoop cannot be 
loaded.2020-11-13 16:14:33,417 INFO  
org.apache.flink.yarn.YarnClusterDescriptor                  [] - Submitting 
application master application_1599741232083_219902020-11-13 16:14:33,446 INFO  
org.apache.hadoop.yarn.client.api.impl.YarnClientImpl        [] - Submitted 
application application_1599741232083_219902020-11-13 16:14:33,446 INFO  
org.apache.flink.yarn.YarnClusterDescriptor                  [] - Waiting for 
the cluster to be allocated2020-11-13 16:14:33,448 INFO  
org.apache.flink.yarn.YarnClusterDescriptor                  [] - Deploying 
cluster, current state ACCEPTED
 The program 
finished with the following exception:
org.apache.flink.client.program.ProgramInvocationException: The main 

[jira] [Comment Edited] (FLINK-20143) use `yarn.provided.lib.dirs` config deploy job failed in yarn per job mode

2020-11-13 Thread zhisheng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17231346#comment-17231346
 ] 

zhisheng edited comment on FLINK-20143 at 11/13/20, 10:07 AM:
--

{code:java}
$ ./bin/flink run -t yarn-per-job -Dexecution.attached=false 
-Dyarn.provided.lib.dirs="hdfs:///flink/flink-1.12-SNAPSHOT/lib" 
./examples/streaming/StateMachineExample.jar$ ./bin/flink run -t yarn-per-job 
-Dexecution.attached=false 
-Dyarn.provided.lib.dirs="hdfs:///flink/flink-1.12-SNAPSHOT/lib" 
./examples/streaming/StateMachineExample.jarSLF4J: Class path contains multiple 
SLF4J bindings.SLF4J: Found binding in 
[jar:file:/data1/app/flink-1.12-SNAPSHOT/lib/log4j-slf4j-impl-2.12.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]SLF4J:
 Found binding in 
[jar:file:/data1/app/hadoop-2.7.3-snappy-32core12disk/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]SLF4J:
 Found binding in 
[jar:file:/data1/app/hadoop-2.7.3-snappy-32core12disk/share/hadoop/tools/lib/hadoop-aliyun-2.9.2-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]SLF4J:
 See http://www.slf4j.org/codes.html#multiple_bindings for an 
explanation.SLF4J: Actual binding is of type 
[org.apache.logging.slf4j.Log4jLoggerFactory]Usage with built-in data 
generator: StateMachineExample [--error-rate 
] [--sleep ]Usage 
with Kafka: StateMachineExample --kafka-topic  [--brokers 
]Options for both the above setups: [--backend ] 
[--checkpoint-dir ] [--async-checkpoints ] 
[--incremental-checkpoints ] [--output  OR null for 
stdout]
Using standalone source with error rate 0.00 and sleep delay 1 millis
2020-11-13 18:05:51,974 WARN  
org.apache.flink.yarn.configuration.YarnLogConfigUtil        [] - The 
configuration directory ('/data1/app/flink-1.12-SNAPSHOT/conf') already 
contains a LOG4J config file.If you want to use logback, then please delete or 
rename the log configuration file.2020-11-13 18:05:52,202 INFO  
org.apache.hadoop.yarn.client.AHSProxy                       [] - Connecting to 
Application History server at 
FAT-hadoopuat-69117.vm.dc01.tech/10.69.1.17:102002020-11-13 18:05:52,213 INFO  
org.apache.flink.yarn.YarnClusterDescriptor                  [] - No path for 
the flink jar passed. Using the location of class 
org.apache.flink.yarn.YarnClusterDescriptor to locate the jar2020-11-13 
18:05:52,324 INFO  
org.apache.hadoop.yarn.client.ConfiguredRMFailoverProxyProvider [] - Failing 
over to rm22020-11-13 18:05:52,387 INFO  
org.apache.flink.yarn.YarnClusterDescriptor                  [] - The 
configured JobManager memory is 1600 MB. YARN will allocate 2048 MB to make up 
an integer multiple of its minimum allocation memory (2048 MB, configured via 
'yarn.scheduler.minimum-allocation-mb'). The extra 448 MB may not be used by 
Flink.2020-11-13 18:05:52,388 INFO  org.apache.flink.yarn.YarnClusterDescriptor 
                 [] - The configured TaskManager memory is 1728 MB. YARN will 
allocate 2048 MB to make up an integer multiple of its minimum allocation 
memory (2048 MB, configured via 'yarn.scheduler.minimum-allocation-mb'). The 
extra 320 MB may not be used by Flink.2020-11-13 18:05:52,388 INFO  
org.apache.flink.yarn.YarnClusterDescriptor                  [] - Cluster 
specification: ClusterSpecification{masterMemoryMB=2048, 
taskManagerMemoryMB=1728, slotsPerTaskManager=2}2020-11-13 18:05:52,932 WARN  
org.apache.hadoop.hdfs.shortcircuit.DomainSocketFactory      [] - The 
short-circuit local reads feature cannot be used because libhadoop cannot be 
loaded.2020-11-13 18:05:55,076 INFO  
org.apache.flink.yarn.YarnClusterDescriptor                  [] - Submitting 
application master application_1599741232083_220112020-11-13 18:05:55,307 INFO  
org.apache.hadoop.yarn.client.api.impl.YarnClientImpl        [] - Submitted 
application application_1599741232083_220112020-11-13 18:05:55,308 INFO  
org.apache.flink.yarn.YarnClusterDescriptor                  [] - Waiting for 
the cluster to be allocated2020-11-13 18:05:55,310 INFO  
org.apache.flink.yarn.YarnClusterDescriptor                  [] - Deploying 
cluster, current state ACCEPTED
 The program 
finished with the following exception:
org.apache.flink.client.program.ProgramInvocationException: The main method 
caused an error: Could not deploy Yarn job cluster. at 
org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:330)
 at 
org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:198)
 at org.apache.flink.client.ClientUtils.executeProgram(ClientUtils.java:114) at 
org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:743) at 
org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:242) at 
org.apache.flink.client.cli.CliFrontend.parseAndRun(CliFrontend.java:971) at 

[jira] [Commented] (FLINK-20143) use `yarn.provided.lib.dirs` config deploy job failed in yarn per job mode

2020-11-13 Thread zhisheng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17231346#comment-17231346
 ] 

zhisheng commented on FLINK-20143:
--

{code:java}
$ ./bin/flink run -t yarn-per-job -Dexecution.attached=false -Dyarn.provided.lib.dirs="hdfs:///flink/flink-1.12-SNAPSHOT/lib" ./examples/streaming/StateMachineExample.jar
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/data1/app/flink-1.12-SNAPSHOT/lib/log4j-slf4j-impl-2.12.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/data1/app/hadoop-2.7.3-snappy-32core12disk/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/data1/app/hadoop-2.7.3-snappy-32core12disk/share/hadoop/tools/lib/hadoop-aliyun-2.9.2-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Usage with built-in data generator: StateMachineExample [--error-rate <probability-of-error>] [--sleep <sleep-per-record-in-ms>]
Usage with Kafka: StateMachineExample --kafka-topic <topic> [--brokers <brokers>]
Options for both the above setups: [--backend <file|rocks>] [--checkpoint-dir <filepath>] [--async-checkpoints <true|false>] [--incremental-checkpoints <true|false>] [--output <filepath> OR null for stdout]
Using standalone source with error rate 0.00 and sleep delay 1 millis
2020-11-13 18:05:51,974 WARN  org.apache.flink.yarn.configuration.YarnLogConfigUtil        [] - The configuration directory ('/data1/app/flink-1.12-SNAPSHOT/conf') already contains a LOG4J config file. If you want to use logback, then please delete or rename the log configuration file.
2020-11-13 18:05:52,202 INFO  org.apache.hadoop.yarn.client.AHSProxy                       [] - Connecting to Application History server at FAT-hadoopuat-69117.vm.dc01.hellocloud.tech/10.69.1.17:10200
2020-11-13 18:05:52,213 INFO  org.apache.flink.yarn.YarnClusterDescriptor                  [] - No path for the flink jar passed. Using the location of class org.apache.flink.yarn.YarnClusterDescriptor to locate the jar
2020-11-13 18:05:52,324 INFO  org.apache.hadoop.yarn.client.ConfiguredRMFailoverProxyProvider [] - Failing over to rm2
2020-11-13 18:05:52,387 INFO  org.apache.flink.yarn.YarnClusterDescriptor                  [] - The configured JobManager memory is 1600 MB. YARN will allocate 2048 MB to make up an integer multiple of its minimum allocation memory (2048 MB, configured via 'yarn.scheduler.minimum-allocation-mb'). The extra 448 MB may not be used by Flink.
2020-11-13 18:05:52,388 INFO  org.apache.flink.yarn.YarnClusterDescriptor                  [] - The configured TaskManager memory is 1728 MB. YARN will allocate 2048 MB to make up an integer multiple of its minimum allocation memory (2048 MB, configured via 'yarn.scheduler.minimum-allocation-mb'). The extra 320 MB may not be used by Flink.
2020-11-13 18:05:52,388 INFO  org.apache.flink.yarn.YarnClusterDescriptor                  [] - Cluster specification: ClusterSpecification{masterMemoryMB=2048, taskManagerMemoryMB=1728, slotsPerTaskManager=2}
2020-11-13 18:05:52,932 WARN  org.apache.hadoop.hdfs.shortcircuit.DomainSocketFactory      [] - The short-circuit local reads feature cannot be used because libhadoop cannot be loaded.
2020-11-13 18:05:55,076 INFO  org.apache.flink.yarn.YarnClusterDescriptor                  [] - Submitting application master application_1599741232083_22011
2020-11-13 18:05:55,307 INFO  org.apache.hadoop.yarn.client.api.impl.YarnClientImpl        [] - Submitted application application_1599741232083_22011
2020-11-13 18:05:55,308 INFO  org.apache.flink.yarn.YarnClusterDescriptor                  [] - Waiting for the cluster to be allocated
2020-11-13 18:05:55,310 INFO  org.apache.flink.yarn.YarnClusterDescriptor                  [] - Deploying cluster, current state ACCEPTED
 The program finished with the following exception:
org.apache.flink.client.program.ProgramInvocationException: The main method caused an error: Could not deploy Yarn job cluster.
	at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:330)
	at org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:198)
	at org.apache.flink.client.ClientUtils.executeProgram(ClientUtils.java:114)
	at org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:743)
	at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:242)
	at org.apache.flink.client.cli.CliFrontend.parseAndRun(CliFrontend.java:971)
	at

[jira] [Commented] (FLINK-20143) use `yarn.provided.lib.dirs` config deploy job failed in yarn per job mode

2020-11-13 Thread zhisheng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17231321#comment-17231321
 ] 

zhisheng commented on FLINK-20143:
--

Are there any other ways to keep the job configuration backward compatible? [~kkl0u]
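
For illustration, one possible way to keep per-job settings compatible is to move them from the old YARN CLI shortcuts into flink-conf.yaml; a minimal sketch, assuming the Flink 1.12 option keys (the key names below are assumptions to verify against your version):

{code:java}
# Hypothetical flink-conf.yaml fragment replacing -yjm / -ytm / -ynm.
# Key names assume the Flink 1.12 documentation; verify before use.
jobmanager.memory.process.size: 3g
taskmanager.memory.process.size: 3g
yarn.application.name: flink-1.12-test
yarn.provided.lib.dirs: hdfs:///flink/flink-1.12-SNAPSHOT/lib
{code}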

> use `yarn.provided.lib.dirs` config deploy job failed in yarn per job mode
> --
>
> Key: FLINK-20143
> URL: https://issues.apache.org/jira/browse/FLINK-20143
> Project: Flink
>  Issue Type: Bug
>  Components: Client / Job Submission, Deployment / YARN
>Affects Versions: 1.12.0
>Reporter: zhisheng
>Priority: Major
> Attachments: image-2020-11-13-17-21-47-751.png, 
> image-2020-11-13-17-22-06-111.png
>
>
> Deploying a Flink job to YARN with the following command fails:
> {code:java}
> ./bin/flink run -m yarn-cluster -d -ynm flink-1.12-test -ytm 3g -yjm 3g -yD 
> yarn.provided.lib.dirs=hdfs:///flink/flink-1.12-SNAPSHOT/lib
> ./examples/streaming/StateMachineExample.jar
> {code}
> log:
> {code:java}
> $ ./bin/flink run -m yarn-cluster -d -ynm flink-1.12-test -ytm 3g -yjm 3g -yD yarn.provided.lib.dirs=hdfs:///flink/flink-1.12-SNAPSHOT/lib ./examples/streaming/StateMachineExample.jar
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in [jar:file:/data1/app/flink-1.12-SNAPSHOT/lib/log4j-slf4j-impl-2.12.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in [jar:file:/data1/app/hadoop-2.7.3-snappy-32core12disk/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in [jar:file:/data1/app/hadoop-2.7.3-snappy-32core12disk/share/hadoop/tools/lib/hadoop-aliyun-2.9.2-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
> SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
> 2020-11-13 16:14:30,347 INFO  org.apache.flink.yarn.cli.FlinkYarnSessionCli                [] - Dynamic Property set: yarn.provided.lib.dirs=hdfs:///flink/flink-1.12-SNAPSHOT/lib
> Usage with built-in data generator: StateMachineExample [--error-rate <probability-of-error>] [--sleep <sleep-per-record-in-ms>]
> Usage with Kafka: StateMachineExample --kafka-topic <topic> [--brokers <brokers>]
> Options for both the above setups: [--backend <file|rocks>] [--checkpoint-dir <filepath>] [--async-checkpoints <true|false>] [--incremental-checkpoints <true|false>] [--output <filepath> OR null for stdout]
> Using standalone source with error rate 0.00 and sleep delay 1 millis
> 2020-11-13 16:14:30,706 WARN  org.apache.flink.yarn.configuration.YarnLogConfigUtil        [] - The configuration directory ('/data1/app/flink-1.12-SNAPSHOT/conf') already contains a LOG4J config file. If you want to use logback, then please delete or rename the log configuration file.
> 2020-11-13 16:14:30,947 INFO  org.apache.hadoop.yarn.client.AHSProxy                       [] - Connecting to Application History server at FAT-hadoopuat-69117.vm.dc01.hellocloud.tech/10.69.1.17:10200
> 2020-11-13 16:14:30,958 INFO  org.apache.flink.yarn.YarnClusterDescriptor                  [] - No path for the flink jar passed. Using the location of class org.apache.flink.yarn.YarnClusterDescriptor to locate the jar
> 2020-11-13 16:14:31,065 INFO  org.apache.hadoop.yarn.client.ConfiguredRMFailoverProxyProvider [] - Failing over to rm2
> 2020-11-13 16:14:31,130 INFO  org.apache.flink.yarn.YarnClusterDescriptor                  [] - The configured JobManager memory is 3072 MB. YARN will allocate 4096 MB to make up an integer multiple of its minimum allocation memory (2048 MB, configured via 'yarn.scheduler.minimum-allocation-mb'). The extra 1024 MB may not be used by Flink.
> 2020-11-13 16:14:31,130 INFO  org.apache.flink.yarn.YarnClusterDescriptor                  [] - The configured TaskManager memory is 3072 MB. YARN will allocate 4096 MB to make up an integer multiple of its minimum allocation memory (2048 MB, configured via 'yarn.scheduler.minimum-allocation-mb'). The extra 1024 MB may not be used by Flink.
> 2020-11-13 16:14:31,130 INFO  org.apache.flink.yarn.YarnClusterDescriptor                  [] - Cluster specification: ClusterSpecification{masterMemoryMB=3072, taskManagerMemoryMB=3072, slotsPerTaskManager=2}
> 2020-11-13 16:14:31,681 WARN  org.apache.hadoop.hdfs.shortcircuit.DomainSocketFactory      [] - The short-circuit local reads feature cannot

[jira] [Commented] (FLINK-20143) use `yarn.provided.lib.dirs` config deploy job failed in yarn per job mode

2020-11-13 Thread zhisheng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17231319#comment-17231319
 ] 

zhisheng commented on FLINK-20143:
--

In our production environment we have many Flink jobs, and every job is configured with the -ytm, -yjm and -ynm options. Upgrading to 1.12 would require changing a lot of them. [~kkl0u]

> use `yarn.provided.lib.dirs` config deploy job failed in yarn per job mode
> --
>
> Key: FLINK-20143
> URL: https://issues.apache.org/jira/browse/FLINK-20143
> Project: Flink
>  Issue Type: Bug
>  Components: Client / Job Submission, Deployment / YARN
>Affects Versions: 1.12.0
>Reporter: zhisheng
>Priority: Major
> Attachments: image-2020-11-13-17-21-47-751.png, 
> image-2020-11-13-17-22-06-111.png
>
>
> Deploying a Flink job to YARN with the following command fails:
> {code:java}
> ./bin/flink run -m yarn-cluster -d -ynm flink-1.12-test -ytm 3g -yjm 3g -yD 
> yarn.provided.lib.dirs=hdfs:///flink/flink-1.12-SNAPSHOT/lib
> ./examples/streaming/StateMachineExample.jar
> {code}
> log:
> {code:java}
> $ ./bin/flink run -m yarn-cluster -d -ynm flink-1.12-test -ytm 3g -yjm 3g -yD yarn.provided.lib.dirs=hdfs:///flink/flink-1.12-SNAPSHOT/lib ./examples/streaming/StateMachineExample.jar
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in [jar:file:/data1/app/flink-1.12-SNAPSHOT/lib/log4j-slf4j-impl-2.12.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in [jar:file:/data1/app/hadoop-2.7.3-snappy-32core12disk/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in [jar:file:/data1/app/hadoop-2.7.3-snappy-32core12disk/share/hadoop/tools/lib/hadoop-aliyun-2.9.2-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
> SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
> 2020-11-13 16:14:30,347 INFO  org.apache.flink.yarn.cli.FlinkYarnSessionCli                [] - Dynamic Property set: yarn.provided.lib.dirs=hdfs:///flink/flink-1.12-SNAPSHOT/lib
> Usage with built-in data generator: StateMachineExample [--error-rate <probability-of-error>] [--sleep <sleep-per-record-in-ms>]
> Usage with Kafka: StateMachineExample --kafka-topic <topic> [--brokers <brokers>]
> Options for both the above setups: [--backend <file|rocks>] [--checkpoint-dir <filepath>] [--async-checkpoints <true|false>] [--incremental-checkpoints <true|false>] [--output <filepath> OR null for stdout]
> Using standalone source with error rate 0.00 and sleep delay 1 millis
> 2020-11-13 16:14:30,706 WARN  org.apache.flink.yarn.configuration.YarnLogConfigUtil        [] - The configuration directory ('/data1/app/flink-1.12-SNAPSHOT/conf') already contains a LOG4J config file. If you want to use logback, then please delete or rename the log configuration file.
> 2020-11-13 16:14:30,947 INFO  org.apache.hadoop.yarn.client.AHSProxy                       [] - Connecting to Application History server at FAT-hadoopuat-69117.vm.dc01.hellocloud.tech/10.69.1.17:10200
> 2020-11-13 16:14:30,958 INFO  org.apache.flink.yarn.YarnClusterDescriptor                  [] - No path for the flink jar passed. Using the location of class org.apache.flink.yarn.YarnClusterDescriptor to locate the jar
> 2020-11-13 16:14:31,065 INFO  org.apache.hadoop.yarn.client.ConfiguredRMFailoverProxyProvider [] - Failing over to rm2
> 2020-11-13 16:14:31,130 INFO  org.apache.flink.yarn.YarnClusterDescriptor                  [] - The configured JobManager memory is 3072 MB. YARN will allocate 4096 MB to make up an integer multiple of its minimum allocation memory (2048 MB, configured via 'yarn.scheduler.minimum-allocation-mb'). The extra 1024 MB may not be used by Flink.
> 2020-11-13 16:14:31,130 INFO  org.apache.flink.yarn.YarnClusterDescriptor                  [] - The configured TaskManager memory is 3072 MB. YARN will allocate 4096 MB to make up an integer multiple of its minimum allocation memory (2048 MB, configured via 'yarn.scheduler.minimum-allocation-mb'). The extra 1024 MB may not be used by Flink.
> 2020-11-13 16:14:31,130 INFO  org.apache.flink.yarn.YarnClusterDescriptor                  [] - Cluster specification: ClusterSpecification{masterMemoryMB=3072, taskManagerMemoryMB=3072, slotsPerTaskManager=2}
> 2020-11-13 16:14:31,681 WARN

[jira] [Commented] (FLINK-20143) use `yarn.provided.lib.dirs` config deploy job failed in yarn per job mode

2020-11-13 Thread zhisheng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17231317#comment-17231317
 ] 

zhisheng commented on FLINK-20143:
--

 
{code:java}
./bin/flink run -m yarn-cluster -d  
-Dyarn.provided.lib.dirs="hdfs:///flink/flink-1.12-SNAPSHOT/lib" 
./examples/streaming/StateMachineExample.jar
{code}
I used this command (with -ynm flink-1.12-test, -ytm 3g and -yjm 3g removed), and it runs fine.
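
For comparison, a hedged sketch of the same submission with the memory and name settings re-added as generic -D dynamic properties instead of the old -y shortcuts (the option keys are assumptions based on the documented 1.12 configuration names, not verified here):

{code:java}
# Sketch only: -ynm / -ytm / -yjm rewritten as dynamic properties.
./bin/flink run -m yarn-cluster -d \
  -Dyarn.application.name=flink-1.12-test \
  -Djobmanager.memory.process.size=3g \
  -Dtaskmanager.memory.process.size=3g \
  -Dyarn.provided.lib.dirs="hdfs:///flink/flink-1.12-SNAPSHOT/lib" \
  ./examples/streaming/StateMachineExample.jar
{code}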

 

> use `yarn.provided.lib.dirs` config deploy job failed in yarn per job mode
> --
>
> Key: FLINK-20143
> URL: https://issues.apache.org/jira/browse/FLINK-20143
> Project: Flink
>  Issue Type: Bug
>  Components: Client / Job Submission, Deployment / YARN
>Affects Versions: 1.12.0
>Reporter: zhisheng
>Priority: Major
> Attachments: image-2020-11-13-17-21-47-751.png, 
> image-2020-11-13-17-22-06-111.png
>
>
> Deploying a Flink job to YARN with the following command fails:
> {code:java}
> ./bin/flink run -m yarn-cluster -d -ynm flink-1.12-test -ytm 3g -yjm 3g -yD 
> yarn.provided.lib.dirs=hdfs:///flink/flink-1.12-SNAPSHOT/lib
> ./examples/streaming/StateMachineExample.jar
> {code}
> log:
> {code:java}
> $ ./bin/flink run -m yarn-cluster -d -ynm flink-1.12-test -ytm 3g -yjm 3g -yD yarn.provided.lib.dirs=hdfs:///flink/flink-1.12-SNAPSHOT/lib ./examples/streaming/StateMachineExample.jar
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in [jar:file:/data1/app/flink-1.12-SNAPSHOT/lib/log4j-slf4j-impl-2.12.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in [jar:file:/data1/app/hadoop-2.7.3-snappy-32core12disk/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in [jar:file:/data1/app/hadoop-2.7.3-snappy-32core12disk/share/hadoop/tools/lib/hadoop-aliyun-2.9.2-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
> SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
> 2020-11-13 16:14:30,347 INFO  org.apache.flink.yarn.cli.FlinkYarnSessionCli                [] - Dynamic Property set: yarn.provided.lib.dirs=hdfs:///flink/flink-1.12-SNAPSHOT/lib
> Usage with built-in data generator: StateMachineExample [--error-rate <probability-of-error>] [--sleep <sleep-per-record-in-ms>]
> Usage with Kafka: StateMachineExample --kafka-topic <topic> [--brokers <brokers>]
> Options for both the above setups: [--backend <file|rocks>] [--checkpoint-dir <filepath>] [--async-checkpoints <true|false>] [--incremental-checkpoints <true|false>] [--output <filepath> OR null for stdout]
> Using standalone source with error rate 0.00 and sleep delay 1 millis
> 2020-11-13 16:14:30,706 WARN  org.apache.flink.yarn.configuration.YarnLogConfigUtil        [] - The configuration directory ('/data1/app/flink-1.12-SNAPSHOT/conf') already contains a LOG4J config file. If you want to use logback, then please delete or rename the log configuration file.
> 2020-11-13 16:14:30,947 INFO  org.apache.hadoop.yarn.client.AHSProxy                       [] - Connecting to Application History server at FAT-hadoopuat-69117.vm.dc01.hellocloud.tech/10.69.1.17:10200
> 2020-11-13 16:14:30,958 INFO  org.apache.flink.yarn.YarnClusterDescriptor                  [] - No path for the flink jar passed. Using the location of class org.apache.flink.yarn.YarnClusterDescriptor to locate the jar
> 2020-11-13 16:14:31,065 INFO  org.apache.hadoop.yarn.client.ConfiguredRMFailoverProxyProvider [] - Failing over to rm2
> 2020-11-13 16:14:31,130 INFO  org.apache.flink.yarn.YarnClusterDescriptor                  [] - The configured JobManager memory is 3072 MB. YARN will allocate 4096 MB to make up an integer multiple of its minimum allocation memory (2048 MB, configured via 'yarn.scheduler.minimum-allocation-mb'). The extra 1024 MB may not be used by Flink.
> 2020-11-13 16:14:31,130 INFO  org.apache.flink.yarn.YarnClusterDescriptor                  [] - The configured TaskManager memory is 3072 MB. YARN will allocate 4096 MB to make up an integer multiple of its minimum allocation memory (2048 MB, configured via 'yarn.scheduler.minimum-allocation-mb'). The extra 1024 MB may not be used by Flink.
> 2020-11-13 16:14:31,130 INFO  org.apache.flink.yarn.YarnClusterDescriptor                  [] - Cluster specification: ClusterSpecification{masterMemoryMB=3072,

[jira] [Commented] (FLINK-20143) use `yarn.provided.lib.dirs` config deploy job failed in yarn per job mode

2020-11-13 Thread zhisheng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17231300#comment-17231300
 ] 

zhisheng commented on FLINK-20143:
--

[~kkl0u] yes, -ytm and -yjm do not take effect. I created an issue for this a few days ago: https://issues.apache.org/jira/browse/FLINK-19973

> use `yarn.provided.lib.dirs` config deploy job failed in yarn per job mode
> --
>
> Key: FLINK-20143
> URL: https://issues.apache.org/jira/browse/FLINK-20143
> Project: Flink
>  Issue Type: Bug
>  Components: Client / Job Submission, Deployment / YARN
>Affects Versions: 1.12.0
>Reporter: zhisheng
>Priority: Major
> Attachments: image-2020-11-13-17-21-47-751.png, 
> image-2020-11-13-17-22-06-111.png
>
>
> Deploying a Flink job to YARN with the following command fails:
> {code:java}
> ./bin/flink run -m yarn-cluster -d -ynm flink-1.12-test -ytm 3g -yjm 3g -yD 
> yarn.provided.lib.dirs=hdfs:///flink/flink-1.12-SNAPSHOT/lib
> ./examples/streaming/StateMachineExample.jar
> {code}
> log:
> {code:java}
> $ ./bin/flink run -m yarn-cluster -d -ynm flink-1.12-test -ytm 3g -yjm 3g -yD yarn.provided.lib.dirs=hdfs:///flink/flink-1.12-SNAPSHOT/lib ./examples/streaming/StateMachineExample.jar
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in [jar:file:/data1/app/flink-1.12-SNAPSHOT/lib/log4j-slf4j-impl-2.12.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in [jar:file:/data1/app/hadoop-2.7.3-snappy-32core12disk/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in [jar:file:/data1/app/hadoop-2.7.3-snappy-32core12disk/share/hadoop/tools/lib/hadoop-aliyun-2.9.2-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
> SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
> 2020-11-13 16:14:30,347 INFO  org.apache.flink.yarn.cli.FlinkYarnSessionCli                [] - Dynamic Property set: yarn.provided.lib.dirs=hdfs:///flink/flink-1.12-SNAPSHOT/lib
> Usage with built-in data generator: StateMachineExample [--error-rate <probability-of-error>] [--sleep <sleep-per-record-in-ms>]
> Usage with Kafka: StateMachineExample --kafka-topic <topic> [--brokers <brokers>]
> Options for both the above setups: [--backend <file|rocks>] [--checkpoint-dir <filepath>] [--async-checkpoints <true|false>] [--incremental-checkpoints <true|false>] [--output <filepath> OR null for stdout]
> Using standalone source with error rate 0.00 and sleep delay 1 millis
> 2020-11-13 16:14:30,706 WARN  org.apache.flink.yarn.configuration.YarnLogConfigUtil        [] - The configuration directory ('/data1/app/flink-1.12-SNAPSHOT/conf') already contains a LOG4J config file. If you want to use logback, then please delete or rename the log configuration file.
> 2020-11-13 16:14:30,947 INFO  org.apache.hadoop.yarn.client.AHSProxy                       [] - Connecting to Application History server at FAT-hadoopuat-69117.vm.dc01.hellocloud.tech/10.69.1.17:10200
> 2020-11-13 16:14:30,958 INFO  org.apache.flink.yarn.YarnClusterDescriptor                  [] - No path for the flink jar passed. Using the location of class org.apache.flink.yarn.YarnClusterDescriptor to locate the jar
> 2020-11-13 16:14:31,065 INFO  org.apache.hadoop.yarn.client.ConfiguredRMFailoverProxyProvider [] - Failing over to rm2
> 2020-11-13 16:14:31,130 INFO  org.apache.flink.yarn.YarnClusterDescriptor                  [] - The configured JobManager memory is 3072 MB. YARN will allocate 4096 MB to make up an integer multiple of its minimum allocation memory (2048 MB, configured via 'yarn.scheduler.minimum-allocation-mb'). The extra 1024 MB may not be used by Flink.
> 2020-11-13 16:14:31,130 INFO  org.apache.flink.yarn.YarnClusterDescriptor                  [] - The configured TaskManager memory is 3072 MB. YARN will allocate 4096 MB to make up an integer multiple of its minimum allocation memory (2048 MB, configured via 'yarn.scheduler.minimum-allocation-mb'). The extra 1024 MB may not be used by Flink.
> 2020-11-13 16:14:31,130 INFO  org.apache.flink.yarn.YarnClusterDescriptor                  [] - Cluster specification: ClusterSpecification{masterMemoryMB=3072, taskManagerMemoryMB=3072, slotsPerTaskManager=2}
> 2020-11-13 16:14:31,681 WARN

[jira] [Commented] (FLINK-20143) use `yarn.provided.lib.dirs` config deploy job failed in yarn per job mode

2020-11-13 Thread zhisheng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17231287#comment-17231287
 ] 

zhisheng commented on FLINK-20143:
--

!image-2020-11-13-17-21-47-751.png!

 

!image-2020-11-13-17-22-06-111.png!

> use `yarn.provided.lib.dirs` config deploy job failed in yarn per job mode
> --
>
> Key: FLINK-20143
> URL: https://issues.apache.org/jira/browse/FLINK-20143
> Project: Flink
>  Issue Type: Bug
>  Components: Client / Job Submission, Deployment / YARN
>Affects Versions: 1.12.0
>Reporter: zhisheng
>Priority: Major
> Attachments: image-2020-11-13-17-21-47-751.png, 
> image-2020-11-13-17-22-06-111.png
>
>
> Deploying a Flink job to YARN with the following command fails:
> {code:java}
> ./bin/flink run -m yarn-cluster -d -ynm flink-1.12-test -ytm 3g -yjm 3g -yD 
> yarn.provided.lib.dirs=hdfs:///flink/flink-1.12-SNAPSHOT/lib
> ./examples/streaming/StateMachineExample.jar
> {code}
> log:
> {code:java}
> $ ./bin/flink run -m yarn-cluster -d -ynm flink-1.12-test -ytm 3g -yjm 3g -yD yarn.provided.lib.dirs=hdfs:///flink/flink-1.12-SNAPSHOT/lib ./examples/streaming/StateMachineExample.jar
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in [jar:file:/data1/app/flink-1.12-SNAPSHOT/lib/log4j-slf4j-impl-2.12.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in [jar:file:/data1/app/hadoop-2.7.3-snappy-32core12disk/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in [jar:file:/data1/app/hadoop-2.7.3-snappy-32core12disk/share/hadoop/tools/lib/hadoop-aliyun-2.9.2-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
> SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
> 2020-11-13 16:14:30,347 INFO  org.apache.flink.yarn.cli.FlinkYarnSessionCli                [] - Dynamic Property set: yarn.provided.lib.dirs=hdfs:///flink/flink-1.12-SNAPSHOT/lib
> Usage with built-in data generator: StateMachineExample [--error-rate <probability-of-error>] [--sleep <sleep-per-record-in-ms>]
> Usage with Kafka: StateMachineExample --kafka-topic <topic> [--brokers <brokers>]
> Options for both the above setups: [--backend <file|rocks>] [--checkpoint-dir <filepath>] [--async-checkpoints <true|false>] [--incremental-checkpoints <true|false>] [--output <filepath> OR null for stdout]
> Using standalone source with error rate 0.00 and sleep delay 1 millis
> 2020-11-13 16:14:30,706 WARN  org.apache.flink.yarn.configuration.YarnLogConfigUtil        [] - The configuration directory ('/data1/app/flink-1.12-SNAPSHOT/conf') already contains a LOG4J config file. If you want to use logback, then please delete or rename the log configuration file.
> 2020-11-13 16:14:30,947 INFO  org.apache.hadoop.yarn.client.AHSProxy                       [] - Connecting to Application History server at FAT-hadoopuat-69117.vm.dc01.hellocloud.tech/10.69.1.17:10200
> 2020-11-13 16:14:30,958 INFO  org.apache.flink.yarn.YarnClusterDescriptor                  [] - No path for the flink jar passed. Using the location of class org.apache.flink.yarn.YarnClusterDescriptor to locate the jar
> 2020-11-13 16:14:31,065 INFO  org.apache.hadoop.yarn.client.ConfiguredRMFailoverProxyProvider [] - Failing over to rm2
> 2020-11-13 16:14:31,130 INFO  org.apache.flink.yarn.YarnClusterDescriptor                  [] - The configured JobManager memory is 3072 MB. YARN will allocate 4096 MB to make up an integer multiple of its minimum allocation memory (2048 MB, configured via 'yarn.scheduler.minimum-allocation-mb'). The extra 1024 MB may not be used by Flink.
> 2020-11-13 16:14:31,130 INFO  org.apache.flink.yarn.YarnClusterDescriptor                  [] - The configured TaskManager memory is 3072 MB. YARN will allocate 4096 MB to make up an integer multiple of its minimum allocation memory (2048 MB, configured via 'yarn.scheduler.minimum-allocation-mb'). The extra 1024 MB may not be used by Flink.
> 2020-11-13 16:14:31,130 INFO  org.apache.flink.yarn.YarnClusterDescriptor                  [] - Cluster specification: ClusterSpecification{masterMemoryMB=3072, taskManagerMemoryMB=3072, slotsPerTaskManager=2}
> 2020-11-13 16:14:31,681 WARN  org.apache.hadoop.hdfs.shortcircuit.DomainSocketFactory      [] - The short-circuit local reads feature

[jira] [Updated] (FLINK-20143) use `yarn.provided.lib.dirs` config deploy job failed in yarn per job mode

2020-11-13 Thread zhisheng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhisheng updated FLINK-20143:
-
Attachment: image-2020-11-13-17-22-06-111.png

> use `yarn.provided.lib.dirs` config deploy job failed in yarn per job mode
> --
>
> Key: FLINK-20143
> URL: https://issues.apache.org/jira/browse/FLINK-20143
> Project: Flink
>  Issue Type: Bug
>  Components: Client / Job Submission, Deployment / YARN
>Affects Versions: 1.12.0
>Reporter: zhisheng
>Priority: Major
> Attachments: image-2020-11-13-17-21-47-751.png, 
> image-2020-11-13-17-22-06-111.png
>
>
> Deploying a Flink job to YARN with the following command fails:
> {code:java}
> ./bin/flink run -m yarn-cluster -d -ynm flink-1.12-test -ytm 3g -yjm 3g -yD 
> yarn.provided.lib.dirs=hdfs:///flink/flink-1.12-SNAPSHOT/lib
> ./examples/streaming/StateMachineExample.jar
> {code}
> log:
> {code:java}
> $ ./bin/flink run -m yarn-cluster -d -ynm flink-1.12-test -ytm 3g -yjm 3g -yD yarn.provided.lib.dirs=hdfs:///flink/flink-1.12-SNAPSHOT/lib ./examples/streaming/StateMachineExample.jar
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in [jar:file:/data1/app/flink-1.12-SNAPSHOT/lib/log4j-slf4j-impl-2.12.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in [jar:file:/data1/app/hadoop-2.7.3-snappy-32core12disk/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in [jar:file:/data1/app/hadoop-2.7.3-snappy-32core12disk/share/hadoop/tools/lib/hadoop-aliyun-2.9.2-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
> SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
> 2020-11-13 16:14:30,347 INFO  org.apache.flink.yarn.cli.FlinkYarnSessionCli                [] - Dynamic Property set: yarn.provided.lib.dirs=hdfs:///flink/flink-1.12-SNAPSHOT/lib
> Usage with built-in data generator: StateMachineExample [--error-rate <probability-of-error>] [--sleep <sleep-per-record-in-ms>]
> Usage with Kafka: StateMachineExample --kafka-topic <topic> [--brokers <brokers>]
> Options for both the above setups: [--backend <file|rocks>] [--checkpoint-dir <filepath>] [--async-checkpoints <true|false>] [--incremental-checkpoints <true|false>] [--output <filepath> OR null for stdout]
> Using standalone source with error rate 0.00 and sleep delay 1 millis
> 2020-11-13 16:14:30,706 WARN  org.apache.flink.yarn.configuration.YarnLogConfigUtil        [] - The configuration directory ('/data1/app/flink-1.12-SNAPSHOT/conf') already contains a LOG4J config file. If you want to use logback, then please delete or rename the log configuration file.
> 2020-11-13 16:14:30,947 INFO  org.apache.hadoop.yarn.client.AHSProxy                       [] - Connecting to Application History server at FAT-hadoopuat-69117.vm.dc01.hellocloud.tech/10.69.1.17:10200
> 2020-11-13 16:14:30,958 INFO  org.apache.flink.yarn.YarnClusterDescriptor                  [] - No path for the flink jar passed. Using the location of class org.apache.flink.yarn.YarnClusterDescriptor to locate the jar
> 2020-11-13 16:14:31,065 INFO  org.apache.hadoop.yarn.client.ConfiguredRMFailoverProxyProvider [] - Failing over to rm2
> 2020-11-13 16:14:31,130 INFO  org.apache.flink.yarn.YarnClusterDescriptor                  [] - The configured JobManager memory is 3072 MB. YARN will allocate 4096 MB to make up an integer multiple of its minimum allocation memory (2048 MB, configured via 'yarn.scheduler.minimum-allocation-mb'). The extra 1024 MB may not be used by Flink.
> 2020-11-13 16:14:31,130 INFO  org.apache.flink.yarn.YarnClusterDescriptor                  [] - The configured TaskManager memory is 3072 MB. YARN will allocate 4096 MB to make up an integer multiple of its minimum allocation memory (2048 MB, configured via 'yarn.scheduler.minimum-allocation-mb'). The extra 1024 MB may not be used by Flink.
> 2020-11-13 16:14:31,130 INFO  org.apache.flink.yarn.YarnClusterDescriptor                  [] - Cluster specification: ClusterSpecification{masterMemoryMB=3072, taskManagerMemoryMB=3072, slotsPerTaskManager=2}
> 2020-11-13 16:14:31,681 WARN  org.apache.hadoop.hdfs.shortcircuit.DomainSocketFactory      [] - The short-circuit local reads feature cannot be used because libhadoop cannot be loaded.
> 2020-11-13

[jira] [Updated] (FLINK-20143) use `yarn.provided.lib.dirs` config deploy job failed in yarn per job mode

2020-11-13 Thread zhisheng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhisheng updated FLINK-20143:
-
Attachment: image-2020-11-13-17-21-47-751.png

> use `yarn.provided.lib.dirs` config deploy job failed in yarn per job mode
> --
>
> Key: FLINK-20143
> URL: https://issues.apache.org/jira/browse/FLINK-20143
> Project: Flink
>  Issue Type: Bug
>  Components: Client / Job Submission, Deployment / YARN
>Affects Versions: 1.12.0
>Reporter: zhisheng
>Priority: Major
> Attachments: image-2020-11-13-17-21-47-751.png
>
>
> Deploying a Flink job to YARN with the following command fails:
> {code:java}
> ./bin/flink run -m yarn-cluster -d -ynm flink-1.12-test -ytm 3g -yjm 3g -yD 
> yarn.provided.lib.dirs=hdfs:///flink/flink-1.12-SNAPSHOT/lib
> ./examples/streaming/StateMachineExample.jar
> {code}
> log:
> {code:java}
> $ ./bin/flink run -m yarn-cluster -d -ynm flink-1.12-test -ytm 3g -yjm 3g -yD yarn.provided.lib.dirs=hdfs:///flink/flink-1.12-SNAPSHOT/lib ./examples/streaming/StateMachineExample.jar
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in [jar:file:/data1/app/flink-1.12-SNAPSHOT/lib/log4j-slf4j-impl-2.12.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in [jar:file:/data1/app/hadoop-2.7.3-snappy-32core12disk/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in [jar:file:/data1/app/hadoop-2.7.3-snappy-32core12disk/share/hadoop/tools/lib/hadoop-aliyun-2.9.2-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
> SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
> 2020-11-13 16:14:30,347 INFO  org.apache.flink.yarn.cli.FlinkYarnSessionCli                [] - Dynamic Property set: yarn.provided.lib.dirs=hdfs:///flink/flink-1.12-SNAPSHOT/lib
> Usage with built-in data generator: StateMachineExample [--error-rate <probability-of-error>] [--sleep <sleep-per-record-in-ms>]
> Usage with Kafka: StateMachineExample --kafka-topic <topic> [--brokers <brokers>]
> Options for both the above setups: [--backend <file|rocks>] [--checkpoint-dir <filepath>] [--async-checkpoints <true|false>] [--incremental-checkpoints <true|false>] [--output <filepath> OR null for stdout]
> Using standalone source with error rate 0.00 and sleep delay 1 millis
> 2020-11-13 16:14:30,706 WARN  org.apache.flink.yarn.configuration.YarnLogConfigUtil        [] - The configuration directory ('/data1/app/flink-1.12-SNAPSHOT/conf') already contains a LOG4J config file. If you want to use logback, then please delete or rename the log configuration file.
> 2020-11-13 16:14:30,947 INFO  org.apache.hadoop.yarn.client.AHSProxy                       [] - Connecting to Application History server at FAT-hadoopuat-69117.vm.dc01.hellocloud.tech/10.69.1.17:10200
> 2020-11-13 16:14:30,958 INFO  org.apache.flink.yarn.YarnClusterDescriptor                  [] - No path for the flink jar passed. Using the location of class org.apache.flink.yarn.YarnClusterDescriptor to locate the jar
> 2020-11-13 16:14:31,065 INFO  org.apache.hadoop.yarn.client.ConfiguredRMFailoverProxyProvider [] - Failing over to rm2
> 2020-11-13 16:14:31,130 INFO  org.apache.flink.yarn.YarnClusterDescriptor                  [] - The configured JobManager memory is 3072 MB. YARN will allocate 4096 MB to make up an integer multiple of its minimum allocation memory (2048 MB, configured via 'yarn.scheduler.minimum-allocation-mb'). The extra 1024 MB may not be used by Flink.
> 2020-11-13 16:14:31,130 INFO  org.apache.flink.yarn.YarnClusterDescriptor                  [] - The configured TaskManager memory is 3072 MB. YARN will allocate 4096 MB to make up an integer multiple of its minimum allocation memory (2048 MB, configured via 'yarn.scheduler.minimum-allocation-mb'). The extra 1024 MB may not be used by Flink.
> 2020-11-13 16:14:31,130 INFO  org.apache.flink.yarn.YarnClusterDescriptor                  [] - Cluster specification: ClusterSpecification{masterMemoryMB=3072, taskManagerMemoryMB=3072, slotsPerTaskManager=2}
> 2020-11-13 16:14:31,681 WARN  org.apache.hadoop.hdfs.shortcircuit.DomainSocketFactory      [] - The short-circuit local reads feature cannot be used because libhadoop cannot be loaded.
> 2020-11-13 16:14:33,417 INFO

[jira] [Commented] (FLINK-20143) use `yarn.provided.lib.dirs` config deploy job failed in yarn per job mode

2020-11-13 Thread zhisheng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17231284#comment-17231284
 ] 

zhisheng commented on FLINK-20143:
--

[~kkl0u] There are no JobManager or TaskManager logs for it.
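
When the containers never come up, the aggregated YARN logs are usually the only place to look; a minimal sketch, with a placeholder application id (the id below is illustrative, not taken from this run):

{code:java}
# Fetch whatever container logs YARN aggregated for the failed submission.
yarn logs -applicationId application_1599741232083_21990
{code}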

> use `yarn.provided.lib.dirs` config deploy job failed in yarn per job mode
> --
>
> Key: FLINK-20143
> URL: https://issues.apache.org/jira/browse/FLINK-20143
> Project: Flink
>  Issue Type: Bug
>  Components: Client / Job Submission, Deployment / YARN
>Affects Versions: 1.12.0
>Reporter: zhisheng
>Priority: Major
>
> Deploying a Flink job to YARN with the following command fails:
> {code:java}
> ./bin/flink run -m yarn-cluster -d -ynm flink-1.12-test -ytm 3g -yjm 3g -yD 
> yarn.provided.lib.dirs=hdfs:///flink/flink-1.12-SNAPSHOT/lib
> ./examples/streaming/StateMachineExample.jar
> {code}
> log:
> {code:java}
> $ ./bin/flink run -m yarn-cluster -d -ynm flink-1.12-test -ytm 3g -yjm 3g -yD yarn.provided.lib.dirs=hdfs:///flink/flink-1.12-SNAPSHOT/lib ./examples/streaming/StateMachineExample.jar
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in [jar:file:/data1/app/flink-1.12-SNAPSHOT/lib/log4j-slf4j-impl-2.12.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in [jar:file:/data1/app/hadoop-2.7.3-snappy-32core12disk/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in [jar:file:/data1/app/hadoop-2.7.3-snappy-32core12disk/share/hadoop/tools/lib/hadoop-aliyun-2.9.2-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
> SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
> 2020-11-13 16:14:30,347 INFO  org.apache.flink.yarn.cli.FlinkYarnSessionCli                [] - Dynamic Property set: yarn.provided.lib.dirs=hdfs:///flink/flink-1.12-SNAPSHOT/lib
> Usage with built-in data generator: StateMachineExample [--error-rate <probability-of-error>] [--sleep <sleep-per-record-in-ms>]
> Usage with Kafka: StateMachineExample --kafka-topic <topic> [--brokers <brokers>]
> Options for both the above setups: [--backend <file|rocks>] [--checkpoint-dir <filepath>] [--async-checkpoints <true|false>] [--incremental-checkpoints <true|false>] [--output <filepath> OR null for stdout]
> Using standalone source with error rate 0.00 and sleep delay 1 millis
> 2020-11-13 16:14:30,706 WARN  org.apache.flink.yarn.configuration.YarnLogConfigUtil        [] - The configuration directory ('/data1/app/flink-1.12-SNAPSHOT/conf') already contains a LOG4J config file. If you want to use logback, then please delete or rename the log configuration file.
> 2020-11-13 16:14:30,947 INFO  org.apache.hadoop.yarn.client.AHSProxy                       [] - Connecting to Application History server at FAT-hadoopuat-69117.vm.dc01.hellocloud.tech/10.69.1.17:10200
> 2020-11-13 16:14:30,958 INFO  org.apache.flink.yarn.YarnClusterDescriptor                  [] - No path for the flink jar passed. Using the location of class org.apache.flink.yarn.YarnClusterDescriptor to locate the jar
> 2020-11-13 16:14:31,065 INFO  org.apache.hadoop.yarn.client.ConfiguredRMFailoverProxyProvider [] - Failing over to rm2
> 2020-11-13 16:14:31,130 INFO  org.apache.flink.yarn.YarnClusterDescriptor                  [] - The configured JobManager memory is 3072 MB. YARN will allocate 4096 MB to make up an integer multiple of its minimum allocation memory (2048 MB, configured via 'yarn.scheduler.minimum-allocation-mb'). The extra 1024 MB may not be used by Flink.
> 2020-11-13 16:14:31,130 INFO  org.apache.flink.yarn.YarnClusterDescriptor                  [] - The configured TaskManager memory is 3072 MB. YARN will allocate 4096 MB to make up an integer multiple of its minimum allocation memory (2048 MB, configured via 'yarn.scheduler.minimum-allocation-mb'). The extra 1024 MB may not be used by Flink.
> 2020-11-13 16:14:31,130 INFO  org.apache.flink.yarn.YarnClusterDescriptor                  [] - Cluster specification: ClusterSpecification{masterMemoryMB=3072, taskManagerMemoryMB=3072, slotsPerTaskManager=2}
> 2020-11-13 16:14:31,681 WARN  org.apache.hadoop.hdfs.shortcircuit.DomainSocketFactory      [] - The short-circuit local reads feature cannot be used because libhadoop cannot be loaded.
> 2020-11-13 16:14:33,417 INFO

[jira] [Created] (FLINK-20143) use `yarn.provided.lib.dirs` config deploy job failed in yarn per job mode

2020-11-13 Thread zhisheng (Jira)
zhisheng created FLINK-20143:


 Summary: use `yarn.provided.lib.dirs` config deploy job failed in 
yarn per job mode
 Key: FLINK-20143
 URL: https://issues.apache.org/jira/browse/FLINK-20143
 Project: Flink
  Issue Type: Bug
  Components: Client / Job Submission, Deployment / YARN
Affects Versions: 1.12.0
Reporter: zhisheng


Deploying a Flink job to YARN with the following command fails:
{code:java}
./bin/flink run -m yarn-cluster -d -ynm flink-1.12-test -ytm 3g -yjm 3g -yD 
yarn.provided.lib.dirs=hdfs:///flink/flink-1.12-SNAPSHOT/lib
./examples/streaming/StateMachineExample.jar
{code}
log:
{code:java}
$ ./bin/flink run -m yarn-cluster -d -ynm flink-1.12-test -ytm 3g -yjm 3g -yD yarn.provided.lib.dirs=hdfs:///flink/flink-1.12-SNAPSHOT/lib ./examples/streaming/StateMachineExample.jar
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/data1/app/flink-1.12-SNAPSHOT/lib/log4j-slf4j-impl-2.12.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/data1/app/hadoop-2.7.3-snappy-32core12disk/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/data1/app/hadoop-2.7.3-snappy-32core12disk/share/hadoop/tools/lib/hadoop-aliyun-2.9.2-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
2020-11-13 16:14:30,347 INFO  org.apache.flink.yarn.cli.FlinkYarnSessionCli                [] - Dynamic Property set: yarn.provided.lib.dirs=hdfs:///flink/flink-1.12-SNAPSHOT/lib
Usage with built-in data generator: StateMachineExample [--error-rate <probability-of-error>] [--sleep <sleep-per-record-in-ms>]
Usage with Kafka: StateMachineExample --kafka-topic <topic> [--brokers <brokers>]
Options for both the above setups: [--backend <file|rocks>] [--checkpoint-dir <filepath>] [--async-checkpoints <true|false>] [--incremental-checkpoints <true|false>] [--output <filepath> OR null for stdout]
Using standalone source with error rate 0.00 and sleep delay 1 millis
2020-11-13 16:14:30,706 WARN  org.apache.flink.yarn.configuration.YarnLogConfigUtil        [] - The configuration directory ('/data1/app/flink-1.12-SNAPSHOT/conf') already contains a LOG4J config file. If you want to use logback, then please delete or rename the log configuration file.
2020-11-13 16:14:30,947 INFO  org.apache.hadoop.yarn.client.AHSProxy                       [] - Connecting to Application History server at FAT-hadoopuat-69117.vm.dc01.hellocloud.tech/10.69.1.17:10200
2020-11-13 16:14:30,958 INFO  org.apache.flink.yarn.YarnClusterDescriptor                  [] - No path for the flink jar passed. Using the location of class org.apache.flink.yarn.YarnClusterDescriptor to locate the jar
2020-11-13 16:14:31,065 INFO  org.apache.hadoop.yarn.client.ConfiguredRMFailoverProxyProvider [] - Failing over to rm2
2020-11-13 16:14:31,130 INFO  org.apache.flink.yarn.YarnClusterDescriptor                  [] - The configured JobManager memory is 3072 MB. YARN will allocate 4096 MB to make up an integer multiple of its minimum allocation memory (2048 MB, configured via 'yarn.scheduler.minimum-allocation-mb'). The extra 1024 MB may not be used by Flink.
2020-11-13 16:14:31,130 INFO  org.apache.flink.yarn.YarnClusterDescriptor                  [] - The configured TaskManager memory is 3072 MB. YARN will allocate 4096 MB to make up an integer multiple of its minimum allocation memory (2048 MB, configured via 'yarn.scheduler.minimum-allocation-mb'). The extra 1024 MB may not be used by Flink.
2020-11-13 16:14:31,130 INFO  org.apache.flink.yarn.YarnClusterDescriptor                  [] - Cluster specification: ClusterSpecification{masterMemoryMB=3072, taskManagerMemoryMB=3072, slotsPerTaskManager=2}
2020-11-13 16:14:31,681 WARN  org.apache.hadoop.hdfs.shortcircuit.DomainSocketFactory      [] - The short-circuit local reads feature cannot be used because libhadoop cannot be loaded.
2020-11-13 16:14:33,417 INFO  org.apache.flink.yarn.YarnClusterDescriptor                  [] - Submitting application master application_1599741232083_21990
2020-11-13 16:14:33,446 INFO  org.apache.hadoop.yarn.client.api.impl.YarnClientImpl        [] - Submitted application application_1599741232083_21990
2020-11-13 16:14:33,446 INFO  org.apache.flink.yarn.YarnClusterDescriptor                  [] - Waiting for the cluster to be allocated
2020-11-13 16:14:33,448 INFO

[jira] [Commented] (FLINK-20109) java.lang.ClassNotFoundException: org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer

2020-11-12 Thread zhisheng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17230623#comment-17230623
 ] 

zhisheng commented on FLINK-20109:
--

Did you restart the SQL client or the cluster after adding the jar?
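
For context, a minimal sketch of the two ways the connector jar is typically picked up (paths are illustrative; -l is the SQL client's library option):

{code:java}
# Option 1: point the SQL client at a directory of connector jars.
./bin/sql-client.sh embedded -l /path/to/connector-jars/

# Option 2: drop the jar into Flink's lib/ and restart both the
# cluster and the SQL client so the new classpath is picked up.
cp flink-sql-connector-kafka_2.11-1.11.2.jar ./lib/
{code}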

> java.lang.ClassNotFoundException: 
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer
> 
>
> Key: FLINK-20109
> URL: https://issues.apache.org/jira/browse/FLINK-20109
> Project: Flink
>  Issue Type: Bug
>  Components: Client / Job Submission
>Affects Versions: 1.11.2
> Environment: macos 10.14.6 
> flink 1.11.2
>Reporter: xuhaoran
>Priority: Major
>
> Following the example on the official website, loading the Kafka connector requires downloading flink-sql-connector-kafka_2.11-1.11.2.jar.
> The startup command is ./sql-client.sh embedded -lib xxx/.
> Then creating a simple SQL statement to create a Kafka table and executing it all works fine.
>  
> But if flink-sql-connector-kafka_2.11-1.11.2.jar is placed on the classpath and the client is started with ./sql-client.sh embedded, then ps -ef | grep java also shows that the current classpath has loaded flink-sql-connector-kafka_2.11-1.11.2.jar.
> Running the earlier CREATE TABLE for Kafka and then select * from table fails with:
> java.lang.ClassNotFoundException:
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer
> I suspect there is a problem with the class loader.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-19973) 【Flink-Deployment】YARN CLI Parameter doesn't work when set `execution.target: yarn-per-job` config

2020-11-08 Thread zhisheng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17228314#comment-17228314
 ] 

zhisheng commented on FLINK-19973:
--

[~kkl0u] closed

> 【Flink-Deployment】YARN CLI Parameter doesn't work when set `execution.target: 
> yarn-per-job` config
> --
>
> Key: FLINK-19973
> URL: https://issues.apache.org/jira/browse/FLINK-19973
> Project: Flink
>  Issue Type: Bug
>  Components: Command Line Client, Deployment / YARN
>Affects Versions: 1.12.0
>Reporter: zhisheng
>Assignee: Kostas Kloudas
>Priority: Major
> Attachments: image-2020-11-04-20-58-49-738.png, 
> image-2020-11-04-21-00-06-180.png
>
>
> When I use the flink-sql-client to deploy a job to YARN (per-job mode), I set
> `execution.target: yarn-per-job` in flink-conf.yaml and the job is deployed to YARN.
>  
> When I deploy a jar job to YARN, the command is `./bin/flink run -m
> yarn-cluster -ynm flink-1.12-test -ytm 3g -yjm 3g
> examples/streaming/StateMachineExample.jar`. It deploys fine, but the
> `-ynm`, `-ytm 3g` and `-yjm 3g` options don't work.
>  
> !image-2020-11-04-20-58-49-738.png|width=912,height=235!
>  
>  
> When I remove the config `execution.target: yarn-per-job`, it works well.
>  
> !image-2020-11-04-21-00-06-180.png|width=1047,height=150!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (FLINK-19973) 【Flink-Deployment】YARN CLI Parameter doesn't work when set `execution.target: yarn-per-job` config

2020-11-08 Thread zhisheng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhisheng closed FLINK-19973.

Resolution: Not A Bug

> 【Flink-Deployment】YARN CLI Parameter doesn't work when set `execution.target: 
> yarn-per-job` config
> --
>
> Key: FLINK-19973
> URL: https://issues.apache.org/jira/browse/FLINK-19973
> Project: Flink
>  Issue Type: Bug
>  Components: Command Line Client, Deployment / YARN
>Affects Versions: 1.12.0
>Reporter: zhisheng
>Assignee: Kostas Kloudas
>Priority: Major
> Attachments: image-2020-11-04-20-58-49-738.png, 
> image-2020-11-04-21-00-06-180.png
>
>
> When I use the flink-sql-client to deploy a job to YARN (per-job mode), I set
> `execution.target: yarn-per-job` in flink-conf.yaml and the job is deployed to YARN.
>  
> When I deploy a jar job to YARN, the command is `./bin/flink run -m
> yarn-cluster -ynm flink-1.12-test -ytm 3g -yjm 3g
> examples/streaming/StateMachineExample.jar`. It deploys fine, but the
> `-ynm`, `-ytm 3g` and `-yjm 3g` options don't work.
>  
> !image-2020-11-04-20-58-49-738.png|width=912,height=235!
>  
>  
> When I remove the config `execution.target: yarn-per-job`, it works well.
>  
> !image-2020-11-04-21-00-06-180.png|width=1047,height=150!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (FLINK-19995) 【Flink SQL Client】Use Flink Kafka Connector in SQL-Client has more than one exception

2020-11-05 Thread zhisheng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19995?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhisheng closed FLINK-19995.

Resolution: Won't Fix

> 【Flink SQL Client】Use Flink Kafka Connector in SQL-Client has more than one 
> exception
> -
>
> Key: FLINK-19995
> URL: https://issues.apache.org/jira/browse/FLINK-19995
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Ecosystem
>Affects Versions: 1.12.0
>Reporter: zhisheng
>Priority: Major
> Attachments: image-2020-11-05-17-35-10-103.png, 
> image-2020-11-05-17-37-21-610.png, image-2020-11-05-17-40-05-630.png, 
> image-2020-11-05-17-41-01-319.png, image-2020-11-05-17-57-38-381.png
>
>
> When I add flink-sql-connector-kafka_2.11-1.12-SNAPSHOT.jar to lib, running a
> SQL job throws an exception, as in picture 2:
>  
> !image-2020-11-05-17-35-10-103.png|width=658,height=251!
> !image-2020-11-05-17-37-21-610.png|width=648,height=479!
> {code:java}
> [ERROR] Could not execute SQL statement. Reason:
> java.lang.ClassNotFoundException: 
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer
> {code}
> When I add flink-connector-kafka_2.11-1.12-SNAPSHOT.jar to lib instead, it
> throws a different exception:
>  
> !image-2020-11-05-17-41-01-319.png|width=629,height=238!
> !image-2020-11-05-17-40-05-630.png|width=658,height=400!
> {code:java}
> [ERROR] Could not execute SQL statement. Reason:
> java.lang.ClassNotFoundException: 
> org.apache.kafka.common.serialization.ByteArrayDeserializer
> {code}
> If I add both jars, it returns an exception too:
> {code:java}
> [ERROR] Could not execute SQL statement. Reason:
> java.lang.ClassNotFoundException: 
> org.apache.kafka.common.serialization.ByteArrayDeserializer
> {code}
> ddl & sql:
>  
> {code:java}
> CREATE TABLE UserBehavior (
>  user_id BIGINT,
>  item_id BIGINT,
>  behavior CHAR(2),
>  `time` BIGINT
> ) WITH (
>  'connector' = 'kafka',
>  'topic' = 'user_behavior',
>  'properties.bootstrap.servers' = 'localhost:9092',
>  'properties.group.id' = 'user_behavior_flink',
>  'format' = 'json',
>  'json.ignore-parse-errors' = 'true',  
>  'scan.startup.mode' = 'earliest-offset',
>  'scan.topic-partition-discovery.interval' = '1'
> );
> select * from UserBehavior;{code}
>  
> I found the same problem reported at
> [http://apache-flink.147419.n8.nabble.com/sql-cli-sql-td7530.html]
>  
> I see there are two dependency conflicts:
>  
> !image-2020-11-05-17-57-38-381.png|width=1328,height=711!
> I tried to resolve the conflict, but it didn't work:
>  
> {code:java}
> ➜  flink-1.12-SNAPSHOT jar -tf 
> ./lib/flink-sql-connector-kafka_2.11-1.12-SNAPSHOT.jar |grep 'ConsumerRecord'
> org/apache/flink/kafka/shaded/org/apache/kafka/clients/consumer/ConsumerRecords$ConcatenatedIterable$1.class
> org/apache/flink/kafka/shaded/org/apache/kafka/clients/consumer/ConsumerRecord.class
> org/apache/flink/kafka/shaded/org/apache/kafka/clients/consumer/ConsumerRecords$ConcatenatedIterable.class
> org/apache/flink/kafka/shaded/org/apache/kafka/clients/consumer/ConsumerRecords.class
> ➜  flink-1.12-SNAPSHOT jar -tf 
> ./lib/flink-sql-connector-kafka_2.11-1.12-SNAPSHOT.jar |grep 
> 'FlinkKafkaConsumer'
> org/apache/flink/streaming/connectors/kafka/FlinkKafkaConsumer.class
> org/apache/flink/streaming/connectors/kafka/FlinkKafkaConsumerBase$1.class
> org/apache/flink/streaming/connectors/kafka/FlinkKafkaConsumerBase.class
> org/apache/flink/streaming/connectors/kafka/FlinkKafkaConsumerBase$2.class
> ➜  flink-1.12-SNAPSHOT jar -tf 
> ./lib/flink-sql-connector-kafka_2.11-1.12-SNAPSHOT.jar |grep 
> 'ByteArrayDeserializer'
> org/apache/flink/kafka/shaded/org/apache/kafka/common/serialization/ByteArrayDeserializer.class
> ➜  flink-1.12-SNAPSHOT jar -tf 
> ./lib/flink-sql-connector-kafka_2.11-1.12-SNAPSHOT.jar |grep 
> 'FlinkKafkaConsumer'
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-19995) 【Flink SQL Client】Use Flink Kafka Connector in SQL-Client has more than one exception

2020-11-05 Thread zhisheng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17227133#comment-17227133
 ] 

zhisheng commented on FLINK-19995:
--

I restarted the SQL Client (but not the Flink cluster) and it is solved now. Thanks [~jark] [~hailong wang], I will close this issue.

> 【Flink SQL Client】Use Flink Kafka Connector in SQL-Client has more than one 
> exception
> -
>
> Key: FLINK-19995
> URL: https://issues.apache.org/jira/browse/FLINK-19995
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Ecosystem
>Affects Versions: 1.12.0
>Reporter: zhisheng
>Priority: Major
> Attachments: image-2020-11-05-17-35-10-103.png, 
> image-2020-11-05-17-37-21-610.png, image-2020-11-05-17-40-05-630.png, 
> image-2020-11-05-17-41-01-319.png, image-2020-11-05-17-57-38-381.png
>
>
> When I add flink-sql-connector-kafka_2.11-1.12-SNAPSHOT.jar to lib, running a
> SQL job throws an exception, as in picture 2:
>  
> !image-2020-11-05-17-35-10-103.png|width=658,height=251!
> !image-2020-11-05-17-37-21-610.png|width=648,height=479!
> {code:java}
> [ERROR] Could not execute SQL statement. Reason:
> java.lang.ClassNotFoundException: 
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer
> {code}
> When I add flink-connector-kafka_2.11-1.12-SNAPSHOT.jar to lib instead, it 
> fails with another exception
>  
> !image-2020-11-05-17-41-01-319.png|width=629,height=238!
> !image-2020-11-05-17-40-05-630.png|width=658,height=400!
> {code:java}
> [ERROR] Could not execute SQL statement. Reason:
> java.lang.ClassNotFoundException: 
> org.apache.kafka.common.serialization.ByteArrayDeserializer
> {code}
> If I add both jars, it returns an exception too
> {code:java}
> [ERROR] Could not execute SQL statement. Reason:
> java.lang.ClassNotFoundException: 
> org.apache.kafka.common.serialization.ByteArrayDeserializer
> {code}
> DDL & SQL:
>  
> {code:java}
> CREATE TABLE UserBehavior (
>  user_id BIGINT,
>  item_id BIGINT,
>  behavior CHAR(2),
>  `time` BIGINT
> ) WITH (
>  'connector' = 'kafka',
>  'topic' = 'user_behavior',
>  'properties.bootstrap.servers' = 'localhost:9092',
>  'properties.group.id' = 'user_behavior_flink',
>  'format' = 'json',
>  'json.ignore-parse-errors' = 'true',  
>  'scan.startup.mode' = 'earliest-offset',
>  'scan.topic-partition-discovery.interval' = '1'
> );
> select * from UserBehavior;{code}
>  
> I found the same problem at 
> [http://apache-flink.147419.n8.nabble.com/sql-cli-sql-td7530.html]
>  
> I see there are two dependency conflicts
>  
> !image-2020-11-05-17-57-38-381.png|width=1328,height=711!
> I tried to resolve the conflict, but it didn't work
>  
> {code:java}
> ➜  flink-1.12-SNAPSHOT jar -tf 
> ./lib/flink-sql-connector-kafka_2.11-1.12-SNAPSHOT.jar |grep 'ConsumerRecord'
> org/apache/flink/kafka/shaded/org/apache/kafka/clients/consumer/ConsumerRecords$ConcatenatedIterable$1.class
> org/apache/flink/kafka/shaded/org/apache/kafka/clients/consumer/ConsumerRecord.class
> org/apache/flink/kafka/shaded/org/apache/kafka/clients/consumer/ConsumerRecords$ConcatenatedIterable.class
> org/apache/flink/kafka/shaded/org/apache/kafka/clients/consumer/ConsumerRecords.class
> ➜  flink-1.12-SNAPSHOT jar -tf 
> ./lib/flink-sql-connector-kafka_2.11-1.12-SNAPSHOT.jar |grep 
> 'FlinkKafkaConsumer'
> org/apache/flink/streaming/connectors/kafka/FlinkKafkaConsumer.class
> org/apache/flink/streaming/connectors/kafka/FlinkKafkaConsumerBase$1.class
> org/apache/flink/streaming/connectors/kafka/FlinkKafkaConsumerBase.class
> org/apache/flink/streaming/connectors/kafka/FlinkKafkaConsumerBase$2.class
> ➜  flink-1.12-SNAPSHOT jar -tf 
> ./lib/flink-sql-connector-kafka_2.11-1.12-SNAPSHOT.jar |grep 
> 'ByteArrayDeserializer'
> org/apache/flink/kafka/shaded/org/apache/kafka/common/serialization/ByteArrayDeserializer.class
> ➜  flink-1.12-SNAPSHOT jar -tf 
> ./lib/flink-sql-connector-kafka_2.11-1.12-SNAPSHOT.jar |grep 
> 'FlinkKafkaConsumer'
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-19995) 【Flink SQL Client】Use Flink Kafka Connector in SQL-Client has more than one exception

2020-11-05 Thread zhisheng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19995?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhisheng updated FLINK-19995:
-
Description: 
When I add flink-sql-connector-kafka_2.11-1.12-SNAPSHOT.jar to lib and run a SQL 
job, it throws an exception as in picture 2

 

!image-2020-11-05-17-35-10-103.png|width=658,height=251!

!image-2020-11-05-17-37-21-610.png|width=648,height=479!
{code:java}
[ERROR] Could not execute SQL statement. Reason:
java.lang.ClassNotFoundException: 
org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer
{code}
When I add flink-connector-kafka_2.11-1.12-SNAPSHOT.jar to lib instead, it fails 
with another exception

 

!image-2020-11-05-17-41-01-319.png|width=629,height=238!

!image-2020-11-05-17-40-05-630.png|width=658,height=400!
{code:java}
[ERROR] Could not execute SQL statement. Reason:
java.lang.ClassNotFoundException: 
org.apache.kafka.common.serialization.ByteArrayDeserializer
{code}
If I add both jars, it returns an exception too
{code:java}
[ERROR] Could not execute SQL statement. Reason:
java.lang.ClassNotFoundException: 
org.apache.kafka.common.serialization.ByteArrayDeserializer
{code}
DDL & SQL:

 
{code:java}
CREATE TABLE UserBehavior (
 user_id BIGINT,
 item_id BIGINT,
 behavior CHAR(2),
 `time` BIGINT
) WITH (
 'connector' = 'kafka',
 'topic' = 'user_behavior',
 'properties.bootstrap.servers' = 'localhost:9092',
 'properties.group.id' = 'user_behavior_flink',
 'format' = 'json',
 'json.ignore-parse-errors' = 'true',  
 'scan.startup.mode' = 'earliest-offset',
 'scan.topic-partition-discovery.interval' = '1'
);

select * from UserBehavior;{code}
 

I found the same problem at 
[http://apache-flink.147419.n8.nabble.com/sql-cli-sql-td7530.html]

 

I see there are two dependency conflicts

 

!image-2020-11-05-17-57-38-381.png|width=1328,height=711!

I tried to resolve the conflict, but it didn't work

 
{code:java}
➜  flink-1.12-SNAPSHOT jar -tf 
./lib/flink-sql-connector-kafka_2.11-1.12-SNAPSHOT.jar |grep 'ConsumerRecord'
org/apache/flink/kafka/shaded/org/apache/kafka/clients/consumer/ConsumerRecords$ConcatenatedIterable$1.class
org/apache/flink/kafka/shaded/org/apache/kafka/clients/consumer/ConsumerRecord.class
org/apache/flink/kafka/shaded/org/apache/kafka/clients/consumer/ConsumerRecords$ConcatenatedIterable.class
org/apache/flink/kafka/shaded/org/apache/kafka/clients/consumer/ConsumerRecords.class
➜  flink-1.12-SNAPSHOT jar -tf 
./lib/flink-sql-connector-kafka_2.11-1.12-SNAPSHOT.jar |grep 
'FlinkKafkaConsumer'
org/apache/flink/streaming/connectors/kafka/FlinkKafkaConsumer.class
org/apache/flink/streaming/connectors/kafka/FlinkKafkaConsumerBase$1.class
org/apache/flink/streaming/connectors/kafka/FlinkKafkaConsumerBase.class
org/apache/flink/streaming/connectors/kafka/FlinkKafkaConsumerBase$2.class
➜  flink-1.12-SNAPSHOT jar -tf 
./lib/flink-sql-connector-kafka_2.11-1.12-SNAPSHOT.jar |grep 
'ByteArrayDeserializer'
org/apache/flink/kafka/shaded/org/apache/kafka/common/serialization/ByteArrayDeserializer.class
➜  flink-1.12-SNAPSHOT jar -tf 
./lib/flink-sql-connector-kafka_2.11-1.12-SNAPSHOT.jar |grep 
'FlinkKafkaConsumer'
{code}

  was:
When I add flink-sql-connector-kafka_2.11-1.12-SNAPSHOT.jar to lib and run a SQL 
job, it throws an exception as in picture 2

 

!image-2020-11-05-17-35-10-103.png|width=658,height=251!

!image-2020-11-05-17-37-21-610.png|width=648,height=479!
{code:java}
[ERROR] Could not execute SQL statement. Reason:
java.lang.ClassNotFoundException: 
org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer
{code}
When I add flink-connector-kafka_2.11-1.12-SNAPSHOT.jar to lib instead, it fails 
with another exception

 

!image-2020-11-05-17-41-01-319.png|width=629,height=238!

!image-2020-11-05-17-40-05-630.png|width=658,height=400!
{code:java}
[ERROR] Could not execute SQL statement. Reason:
java.lang.ClassNotFoundException: 
org.apache.kafka.common.serialization.ByteArrayDeserializer
{code}
If I add both jars, it returns an exception too
{code:java}
[ERROR] Could not execute SQL statement. Reason:
java.lang.ClassNotFoundException: 
org.apache.kafka.common.serialization.ByteArrayDeserializer
{code}
DDL & SQL:

 
{code:java}
CREATE TABLE UserBehavior (
 user_id BIGINT,
 item_id BIGINT,
 behavior CHAR(2),
 `time` BIGINT
) WITH (
 'connector' = 'kafka',
 'topic' = 'user_behavior',
 'properties.bootstrap.servers' = 'localhost:9092',
 'properties.group.id' = 'user_behavior_flink',
 'format' = 'json',
 'json.ignore-parse-errors' = 'true',  
 'scan.startup.mode' = 'earliest-offset',
 'scan.topic-partition-discovery.interval' = '1'
);

select * from UserBehavior;{code}
 

I found the same problem at 
[http://apache-flink.147419.n8.nabble.com/sql-cli-sql-td7530.html]

 

I see there are two dependency conflicts

 

!image-2020-11-05-17-57-38-381.png|width=1328,height=711!

I tried to resolve the conflict, but it didn't work

 

 


> 【Flink SQL Client】Use Flink Kafka Connector in SQL-Client has more than one 
> exception
> 

[jira] [Updated] (FLINK-19995) 【Flink SQL Client】Use Flink Kafka Connector in SQL-Client has more than one exception

2020-11-05 Thread zhisheng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19995?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhisheng updated FLINK-19995:
-
Description: 
When I add flink-sql-connector-kafka_2.11-1.12-SNAPSHOT.jar to lib and run a SQL 
job, it throws an exception as in picture 2

 

!image-2020-11-05-17-35-10-103.png|width=658,height=251!

!image-2020-11-05-17-37-21-610.png|width=648,height=479!
{code:java}
[ERROR] Could not execute SQL statement. Reason:
java.lang.ClassNotFoundException: 
org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer
{code}
When I add flink-connector-kafka_2.11-1.12-SNAPSHOT.jar to lib instead, it fails 
with another exception

 

!image-2020-11-05-17-41-01-319.png|width=629,height=238!

!image-2020-11-05-17-40-05-630.png|width=658,height=400!
{code:java}
[ERROR] Could not execute SQL statement. Reason:
java.lang.ClassNotFoundException: 
org.apache.kafka.common.serialization.ByteArrayDeserializer
{code}
If I add both jars, it returns an exception too
{code:java}
[ERROR] Could not execute SQL statement. Reason:
java.lang.ClassNotFoundException: 
org.apache.kafka.common.serialization.ByteArrayDeserializer
{code}
DDL & SQL:

 
{code:java}
CREATE TABLE UserBehavior (
 user_id BIGINT,
 item_id BIGINT,
 behavior CHAR(2),
 `time` BIGINT
) WITH (
 'connector' = 'kafka',
 'topic' = 'user_behavior',
 'properties.bootstrap.servers' = 'localhost:9092',
 'properties.group.id' = 'user_behavior_flink',
 'format' = 'json',
 'json.ignore-parse-errors' = 'true',  
 'scan.startup.mode' = 'earliest-offset',
 'scan.topic-partition-discovery.interval' = '1'
);

select * from UserBehavior;{code}
 

I found the same problem at 
[http://apache-flink.147419.n8.nabble.com/sql-cli-sql-td7530.html]

 

I see there are two dependency conflicts

 

!image-2020-11-05-17-57-38-381.png|width=1328,height=711!

I tried to resolve the conflict, but it didn't work

 

 

  was:
When I add flink-sql-connector-kafka_2.11-1.12-SNAPSHOT.jar to lib and run a SQL 
job, it throws an exception as in picture 2

 

!image-2020-11-05-17-35-10-103.png|width=658,height=251!

!image-2020-11-05-17-37-21-610.png|width=648,height=479!
{code:java}
[ERROR] Could not execute SQL statement. Reason:
java.lang.ClassNotFoundException: 
org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer
{code}
When I add flink-connector-kafka_2.11-1.12-SNAPSHOT.jar to lib instead, it fails 
with another exception

 

!image-2020-11-05-17-41-01-319.png|width=629,height=238!

!image-2020-11-05-17-40-05-630.png|width=658,height=400!
{code:java}
[ERROR] Could not execute SQL statement. Reason:
java.lang.ClassNotFoundException: 
org.apache.kafka.common.serialization.ByteArrayDeserializer
{code}
If I add both jars, it returns an exception too
{code:java}
[ERROR] Could not execute SQL statement. Reason:
java.lang.ClassNotFoundException: 
org.apache.kafka.common.serialization.ByteArrayDeserializer
{code}
DDL & SQL:

 
{code:java}
CREATE TABLE UserBehavior (
 user_id BIGINT,
 item_id BIGINT,
 behavior CHAR(2),
 `time` BIGINT
) WITH (
 'connector' = 'kafka',
 'topic' = 'user_behavior',
 'properties.bootstrap.servers' = 'localhost:9092',
 'properties.group.id' = 'user_behavior_flink',
 'format' = 'json',
 'json.ignore-parse-errors' = 'true',  
 'scan.startup.mode' = 'earliest-offset',
 'scan.topic-partition-discovery.interval' = '1'
);

select * from UserBehavior;{code}
 

I found the same problem at 
[http://apache-flink.147419.n8.nabble.com/sql-cli-sql-td7530.html]

 

I see there are two dependency conflicts

 

!image-2020-11-05-17-57-38-381.png|width=1328,height=711!


> 【Flink SQL Client】Use Flink Kafka Connector in SQL-Client has more than one 
> exception
> -
>
> Key: FLINK-19995
> URL: https://issues.apache.org/jira/browse/FLINK-19995
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka, Table SQL / Client
>Affects Versions: 1.12.0
>Reporter: zhisheng
>Priority: Major
> Attachments: image-2020-11-05-17-35-10-103.png, 
> image-2020-11-05-17-37-21-610.png, image-2020-11-05-17-40-05-630.png, 
> image-2020-11-05-17-41-01-319.png, image-2020-11-05-17-57-38-381.png
>
>
> When I add flink-sql-connector-kafka_2.11-1.12-SNAPSHOT.jar to lib and run a 
> SQL job, it throws an exception as in picture 2
>  
> !image-2020-11-05-17-35-10-103.png|width=658,height=251!
> !image-2020-11-05-17-37-21-610.png|width=648,height=479!
> {code:java}
> [ERROR] Could not execute SQL statement. Reason:
> java.lang.ClassNotFoundException: 
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer
> {code}
> When I add flink-connector-kafka_2.11-1.12-SNAPSHOT.jar to lib instead, it 
> fails with another exception
>  
> !image-2020-11-05-17-41-01-319.png|width=629,height=238!
> !image-2020-11-05-17-40-05-630.png|width=658,height=400!
> {code:java}
> [ERROR] Could not execute SQL statement. 

[jira] [Updated] (FLINK-19995) 【Flink SQL Client】Use Flink Kafka Connector in SQL-Client has more than one exception

2020-11-05 Thread zhisheng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19995?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhisheng updated FLINK-19995:
-
Summary: 【Flink SQL Client】Use Flink Kafka Connector in SQL-Client has more 
than one exception  (was: 【Flink SQL Client】Use Flink Kafka Connector has more 
than one exception)

> 【Flink SQL Client】Use Flink Kafka Connector in SQL-Client has more than one 
> exception
> -
>
> Key: FLINK-19995
> URL: https://issues.apache.org/jira/browse/FLINK-19995
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka, Table SQL / Client
>Affects Versions: 1.12.0
>Reporter: zhisheng
>Priority: Major
> Attachments: image-2020-11-05-17-35-10-103.png, 
> image-2020-11-05-17-37-21-610.png, image-2020-11-05-17-40-05-630.png, 
> image-2020-11-05-17-41-01-319.png, image-2020-11-05-17-57-38-381.png
>
>
> When I add flink-sql-connector-kafka_2.11-1.12-SNAPSHOT.jar to lib and run a 
> SQL job, it throws an exception as in picture 2
>  
> !image-2020-11-05-17-35-10-103.png|width=658,height=251!
> !image-2020-11-05-17-37-21-610.png|width=648,height=479!
> {code:java}
> [ERROR] Could not execute SQL statement. Reason:
> java.lang.ClassNotFoundException: 
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer
> {code}
> When I add flink-connector-kafka_2.11-1.12-SNAPSHOT.jar to lib instead, it 
> fails with another exception
>  
> !image-2020-11-05-17-41-01-319.png|width=629,height=238!
> !image-2020-11-05-17-40-05-630.png|width=658,height=400!
> {code:java}
> [ERROR] Could not execute SQL statement. Reason:
> java.lang.ClassNotFoundException: 
> org.apache.kafka.common.serialization.ByteArrayDeserializer
> {code}
> If I add both jars, it returns an exception too
> {code:java}
> [ERROR] Could not execute SQL statement. Reason:
> java.lang.ClassNotFoundException: 
> org.apache.kafka.common.serialization.ByteArrayDeserializer
> {code}
> DDL & SQL:
>  
> {code:java}
> CREATE TABLE UserBehavior (
>  user_id BIGINT,
>  item_id BIGINT,
>  behavior CHAR(2),
>  `time` BIGINT
> ) WITH (
>  'connector' = 'kafka',
>  'topic' = 'user_behavior',
>  'properties.bootstrap.servers' = 'localhost:9092',
>  'properties.group.id' = 'user_behavior_flink',
>  'format' = 'json',
>  'json.ignore-parse-errors' = 'true',  
>  'scan.startup.mode' = 'earliest-offset',
>  'scan.topic-partition-discovery.interval' = '1'
> );
> select * from UserBehavior;{code}
>  
> I found the same problem at 
> [http://apache-flink.147419.n8.nabble.com/sql-cli-sql-td7530.html]
>  
> I see there are two dependency conflicts
>  
> !image-2020-11-05-17-57-38-381.png|width=1328,height=711!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-19995) 【Flink SQL Client】Use Flink Kafka Connector has more than one exception

2020-11-05 Thread zhisheng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19995?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhisheng updated FLINK-19995:
-
Summary: 【Flink SQL Client】Use Flink Kafka Connector has more than one 
exception  (was: 【Flink SQL Client】Use Flink Kafka Connector has more one 
exception)

> 【Flink SQL Client】Use Flink Kafka Connector has more than one exception
> ---
>
> Key: FLINK-19995
> URL: https://issues.apache.org/jira/browse/FLINK-19995
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka, Table SQL / Client
>Affects Versions: 1.12.0
>Reporter: zhisheng
>Priority: Major
> Attachments: image-2020-11-05-17-35-10-103.png, 
> image-2020-11-05-17-37-21-610.png, image-2020-11-05-17-40-05-630.png, 
> image-2020-11-05-17-41-01-319.png, image-2020-11-05-17-57-38-381.png
>
>
> When I add flink-sql-connector-kafka_2.11-1.12-SNAPSHOT.jar to lib and run a 
> SQL job, it throws an exception as in picture 2
>  
> !image-2020-11-05-17-35-10-103.png|width=658,height=251!
> !image-2020-11-05-17-37-21-610.png|width=648,height=479!
> {code:java}
> [ERROR] Could not execute SQL statement. Reason:
> java.lang.ClassNotFoundException: 
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer
> {code}
> When I add flink-connector-kafka_2.11-1.12-SNAPSHOT.jar to lib instead, it 
> fails with another exception
>  
> !image-2020-11-05-17-41-01-319.png|width=629,height=238!
> !image-2020-11-05-17-40-05-630.png|width=658,height=400!
> {code:java}
> [ERROR] Could not execute SQL statement. Reason:
> java.lang.ClassNotFoundException: 
> org.apache.kafka.common.serialization.ByteArrayDeserializer
> {code}
> If I add both jars, it returns an exception too
> {code:java}
> [ERROR] Could not execute SQL statement. Reason:
> java.lang.ClassNotFoundException: 
> org.apache.kafka.common.serialization.ByteArrayDeserializer
> {code}
> DDL & SQL:
>  
> {code:java}
> CREATE TABLE UserBehavior (
>  user_id BIGINT,
>  item_id BIGINT,
>  behavior CHAR(2),
>  `time` BIGINT
> ) WITH (
>  'connector' = 'kafka',
>  'topic' = 'user_behavior',
>  'properties.bootstrap.servers' = 'localhost:9092',
>  'properties.group.id' = 'user_behavior_flink',
>  'format' = 'json',
>  'json.ignore-parse-errors' = 'true',  
>  'scan.startup.mode' = 'earliest-offset',
>  'scan.topic-partition-discovery.interval' = '1'
> );
> select * from UserBehavior;{code}
>  
> I found the same problem at 
> [http://apache-flink.147419.n8.nabble.com/sql-cli-sql-td7530.html]
>  
> I see there are two dependency conflicts
>  
> !image-2020-11-05-17-57-38-381.png|width=1328,height=711!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-19995) 【Flink SQL Client】Use Flink Kafka Connector has more one exception

2020-11-05 Thread zhisheng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19995?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhisheng updated FLINK-19995:
-
 Attachment: image-2020-11-05-17-57-38-381.png
Description: 
When I add flink-sql-connector-kafka_2.11-1.12-SNAPSHOT.jar to lib and run a SQL 
job, it throws an exception as in picture 2

 

!image-2020-11-05-17-35-10-103.png|width=658,height=251!

!image-2020-11-05-17-37-21-610.png|width=648,height=479!
{code:java}
[ERROR] Could not execute SQL statement. Reason:
java.lang.ClassNotFoundException: 
org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer
{code}
When I add flink-connector-kafka_2.11-1.12-SNAPSHOT.jar to lib instead, it fails 
with another exception

 

!image-2020-11-05-17-41-01-319.png|width=629,height=238!

!image-2020-11-05-17-40-05-630.png|width=658,height=400!
{code:java}
[ERROR] Could not execute SQL statement. Reason:
java.lang.ClassNotFoundException: 
org.apache.kafka.common.serialization.ByteArrayDeserializer
{code}
If I add both jars, it returns an exception too
{code:java}
[ERROR] Could not execute SQL statement. Reason:
java.lang.ClassNotFoundException: 
org.apache.kafka.common.serialization.ByteArrayDeserializer
{code}
DDL & SQL:

 
{code:java}
CREATE TABLE UserBehavior (
 user_id BIGINT,
 item_id BIGINT,
 behavior CHAR(2),
 `time` BIGINT
) WITH (
 'connector' = 'kafka',
 'topic' = 'user_behavior',
 'properties.bootstrap.servers' = 'localhost:9092',
 'properties.group.id' = 'user_behavior_flink',
 'format' = 'json',
 'json.ignore-parse-errors' = 'true',  
 'scan.startup.mode' = 'earliest-offset',
 'scan.topic-partition-discovery.interval' = '1'
);

select * from UserBehavior;{code}
 

I found the same problem at 
[http://apache-flink.147419.n8.nabble.com/sql-cli-sql-td7530.html]

 

I see there are two dependency conflicts

 

!image-2020-11-05-17-57-38-381.png|width=1328,height=711!

  was:
When I add flink-sql-connector-kafka_2.11-1.12-SNAPSHOT.jar to lib and run a SQL 
job, it throws an exception as in picture 2

 

!image-2020-11-05-17-35-10-103.png|width=658,height=251!

!image-2020-11-05-17-37-21-610.png|width=648,height=479!
{code:java}
[ERROR] Could not execute SQL statement. Reason:
java.lang.ClassNotFoundException: 
org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer
{code}
When I add flink-connector-kafka_2.11-1.12-SNAPSHOT.jar to lib instead, it fails 
with another exception

 

!image-2020-11-05-17-41-01-319.png|width=629,height=238!

!image-2020-11-05-17-40-05-630.png|width=658,height=400!
{code:java}
[ERROR] Could not execute SQL statement. Reason:
java.lang.ClassNotFoundException: 
org.apache.kafka.common.serialization.ByteArrayDeserializer
{code}
If I add both jars, it returns an exception too
{code:java}
[ERROR] Could not execute SQL statement. Reason:
java.lang.ClassNotFoundException: 
org.apache.kafka.common.serialization.ByteArrayDeserializer
{code}
DDL & SQL:

 
{code:java}
CREATE TABLE UserBehavior (
 user_id BIGINT,
 item_id BIGINT,
 behavior CHAR(2),
 `time` BIGINT
) WITH (
 'connector' = 'kafka',
 'topic' = 'user_behavior',
 'properties.bootstrap.servers' = 'localhost:9092',
 'properties.group.id' = 'user_behavior_flink',
 'format' = 'json',
 'json.ignore-parse-errors' = 'true',  
 'scan.startup.mode' = 'earliest-offset',
 'scan.topic-partition-discovery.interval' = '1'
);

select * from UserBehavior;{code}
 

I found the same problem at 
[http://apache-flink.147419.n8.nabble.com/sql-cli-sql-td7530.html]


> 【Flink SQL Client】Use Flink Kafka Connector has more one exception
> --
>
> Key: FLINK-19995
> URL: https://issues.apache.org/jira/browse/FLINK-19995
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka, Table SQL / Client
>Affects Versions: 1.12.0
>Reporter: zhisheng
>Priority: Major
> Attachments: image-2020-11-05-17-35-10-103.png, 
> image-2020-11-05-17-37-21-610.png, image-2020-11-05-17-40-05-630.png, 
> image-2020-11-05-17-41-01-319.png, image-2020-11-05-17-57-38-381.png
>
>
> When I add flink-sql-connector-kafka_2.11-1.12-SNAPSHOT.jar to lib and run a 
> SQL job, it throws an exception as in picture 2
>  
> !image-2020-11-05-17-35-10-103.png|width=658,height=251!
> !image-2020-11-05-17-37-21-610.png|width=648,height=479!
> {code:java}
> [ERROR] Could not execute SQL statement. Reason:
> java.lang.ClassNotFoundException: 
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer
> {code}
> When I add flink-connector-kafka_2.11-1.12-SNAPSHOT.jar to lib instead, it 
> fails with another exception
>  
> !image-2020-11-05-17-41-01-319.png|width=629,height=238!
> !image-2020-11-05-17-40-05-630.png|width=658,height=400!
> {code:java}
> [ERROR] Could not execute SQL statement. Reason:
> java.lang.ClassNotFoundException: 
> org.apache.kafka.common.serialization.ByteArrayDeserializer
> {code}
> If I add both jars, it returns an exception 

[jira] [Updated] (FLINK-19995) 【Flink SQL Client】Use Flink Kafka Connector has more one exception

2020-11-05 Thread zhisheng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19995?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhisheng updated FLINK-19995:
-
Description: 
When I add flink-sql-connector-kafka_2.11-1.12-SNAPSHOT.jar to lib and run a SQL 
job, it throws an exception as in picture 2

 

!image-2020-11-05-17-35-10-103.png|width=658,height=251!

!image-2020-11-05-17-37-21-610.png|width=648,height=479!
{code:java}
[ERROR] Could not execute SQL statement. Reason:
java.lang.ClassNotFoundException: 
org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer
{code}
When I add flink-connector-kafka_2.11-1.12-SNAPSHOT.jar to lib instead, it fails 
with another exception

 

!image-2020-11-05-17-41-01-319.png|width=629,height=238!

!image-2020-11-05-17-40-05-630.png|width=658,height=400!
{code:java}
[ERROR] Could not execute SQL statement. Reason:
java.lang.ClassNotFoundException: 
org.apache.kafka.common.serialization.ByteArrayDeserializer
{code}
If I add both jars, it returns an exception too
{code:java}
[ERROR] Could not execute SQL statement. Reason:
java.lang.ClassNotFoundException: 
org.apache.kafka.common.serialization.ByteArrayDeserializer
{code}
DDL & SQL:

 
{code:java}
CREATE TABLE UserBehavior (
 user_id BIGINT,
 item_id BIGINT,
 behavior CHAR(2),
 `time` BIGINT
) WITH (
 'connector' = 'kafka',
 'topic' = 'user_behavior',
 'properties.bootstrap.servers' = 'localhost:9092',
 'properties.group.id' = 'user_behavior_flink',
 'format' = 'json',
 'json.ignore-parse-errors' = 'true',  
 'scan.startup.mode' = 'earliest-offset',
 'scan.topic-partition-discovery.interval' = '1'
);

select * from UserBehavior;{code}
 

I found the same problem at 
[http://apache-flink.147419.n8.nabble.com/sql-cli-sql-td7530.html]

  was:
When I add flink-sql-connector-kafka_2.11-1.12-SNAPSHOT.jar to lib and run a SQL 
job, it throws an exception as in picture 2

 

!image-2020-11-05-17-35-10-103.png|width=658,height=251!

!image-2020-11-05-17-37-21-610.png|width=812,height=600!
{code:java}
[ERROR] Could not execute SQL statement. Reason:
java.lang.ClassNotFoundException: 
org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer
{code}
When I add flink-connector-kafka_2.11-1.12-SNAPSHOT.jar to lib instead, it fails 
with another exception

 

!image-2020-11-05-17-41-01-319.png|width=841,height=318!

!image-2020-11-05-17-40-05-630.png|width=955,height=581!
{code:java}
[ERROR] Could not execute SQL statement. Reason:
java.lang.ClassNotFoundException: 
org.apache.kafka.common.serialization.ByteArrayDeserializer
{code}
If I add both jars, it returns an exception too
{code:java}
[ERROR] Could not execute SQL statement. Reason:
java.lang.ClassNotFoundException: 
org.apache.kafka.common.serialization.ByteArrayDeserializer
{code}
DDL & SQL:

 
{code:java}
CREATE TABLE UserBehavior (
 user_id BIGINT,
 item_id BIGINT,
 behavior CHAR(2),
 `time` BIGINT
) WITH (
 'connector' = 'kafka',
 'topic' = 'user_behavior',
 'properties.bootstrap.servers' = 'localhost:9092',
 'properties.group.id' = 'user_behavior_flink',
 'format' = 'json',
 'json.ignore-parse-errors' = 'true',  
 'scan.startup.mode' = 'earliest-offset',
 'scan.topic-partition-discovery.interval' = '1'
);

select * from UserBehavior;{code}
 

I found the same problem at 
http://apache-flink.147419.n8.nabble.com/sql-cli-sql-td7530.html


> 【Flink SQL Client】Use Flink Kafka Connector has more one exception
> --
>
> Key: FLINK-19995
> URL: https://issues.apache.org/jira/browse/FLINK-19995
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka, Table SQL / Client
>Affects Versions: 1.12.0
>Reporter: zhisheng
>Priority: Major
> Attachments: image-2020-11-05-17-35-10-103.png, 
> image-2020-11-05-17-37-21-610.png, image-2020-11-05-17-40-05-630.png, 
> image-2020-11-05-17-41-01-319.png
>
>
> When I add flink-sql-connector-kafka_2.11-1.12-SNAPSHOT.jar to lib and run a 
> SQL job, it throws an exception as in picture 2
>  
> !image-2020-11-05-17-35-10-103.png|width=658,height=251!
> !image-2020-11-05-17-37-21-610.png|width=648,height=479!
> {code:java}
> [ERROR] Could not execute SQL statement. Reason:
> java.lang.ClassNotFoundException: 
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer
> {code}
> When I add flink-connector-kafka_2.11-1.12-SNAPSHOT.jar to lib instead, it 
> fails with another exception
>  
> !image-2020-11-05-17-41-01-319.png|width=629,height=238!
> !image-2020-11-05-17-40-05-630.png|width=658,height=400!
> {code:java}
> [ERROR] Could not execute SQL statement. Reason:
> java.lang.ClassNotFoundException: 
> org.apache.kafka.common.serialization.ByteArrayDeserializer
> {code}
> If I add both jars, it returns an exception too
> {code:java}
> [ERROR] Could not execute SQL statement. Reason:
> java.lang.ClassNotFoundException: 
> org.apache.kafka.common.serialization.ByteArrayDeserializer
> {code}
> DDL & SQL:
>  
> 

[jira] [Updated] (FLINK-19995) 【Flink SQL Client】Use Flink Kafka Connector has more one exception

2020-11-05 Thread zhisheng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19995?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhisheng updated FLINK-19995:
-
Description: 
When I add flink-sql-connector-kafka_2.11-1.12-SNAPSHOT.jar to lib and run a SQL 
job, it throws an exception as in picture 2

 

!image-2020-11-05-17-35-10-103.png|width=658,height=251!

!image-2020-11-05-17-37-21-610.png|width=812,height=600!
{code:java}
[ERROR] Could not execute SQL statement. Reason:
java.lang.ClassNotFoundException: 
org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer
{code}
When I add flink-connector-kafka_2.11-1.12-SNAPSHOT.jar to lib instead, it fails 
with another exception

 

!image-2020-11-05-17-41-01-319.png|width=841,height=318!

!image-2020-11-05-17-40-05-630.png|width=955,height=581!
{code:java}
[ERROR] Could not execute SQL statement. Reason:
java.lang.ClassNotFoundException: 
org.apache.kafka.common.serialization.ByteArrayDeserializer
{code}
If I add both jars, it returns an exception too
{code:java}
[ERROR] Could not execute SQL statement. Reason:
java.lang.ClassNotFoundException: 
org.apache.kafka.common.serialization.ByteArrayDeserializer
{code}
DDL & SQL:

 
{code:java}
CREATE TABLE UserBehavior (
 user_id BIGINT,
 item_id BIGINT,
 behavior CHAR(2),
 `time` BIGINT
) WITH (
 'connector' = 'kafka',
 'topic' = 'user_behavior',
 'properties.bootstrap.servers' = 'localhost:9092',
 'properties.group.id' = 'user_behavior_flink',
 'format' = 'json',
 'json.ignore-parse-errors' = 'true',  
 'scan.startup.mode' = 'earliest-offset',
 'scan.topic-partition-discovery.interval' = '1'
);

select * from UserBehavior;{code}
 

I found the same problem at 
http://apache-flink.147419.n8.nabble.com/sql-cli-sql-td7530.html

  was:
When I add flink-sql-connector-kafka_2.11-1.12-SNAPSHOT.jar to lib and run a SQL 
job, it throws an exception as in picture 2

 

!image-2020-11-05-17-35-10-103.png|width=658,height=251!

!image-2020-11-05-17-37-21-610.png|width=812,height=600!
{code:java}
[ERROR] Could not execute SQL statement. Reason:
java.lang.ClassNotFoundException: 
org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer
{code}
When I add flink-connector-kafka_2.11-1.12-SNAPSHOT.jar to lib instead, it fails 
with another exception

 

!image-2020-11-05-17-41-01-319.png|width=841,height=318!

!image-2020-11-05-17-40-05-630.png|width=955,height=581!
{code:java}
[ERROR] Could not execute SQL statement. Reason:
java.lang.ClassNotFoundException: 
org.apache.kafka.common.serialization.ByteArrayDeserializer
{code}
If I add both jars, it returns an exception too
{code:java}
[ERROR] Could not execute SQL statement. Reason:
java.lang.ClassNotFoundException: 
org.apache.kafka.common.serialization.ByteArrayDeserializer
{code}
DDL & SQL:

 
{code:java}
CREATE TABLE UserBehavior (
 user_id BIGINT,
 item_id BIGINT,
 behavior CHAR(2),
 `time` BIGINT
) WITH (
 'connector' = 'kafka',
 'topic' = 'user_behavior',
 'properties.bootstrap.servers' = 'localhost:9092',
 'properties.group.id' = 'user_behavior_flink',
 'format' = 'json',
 'json.ignore-parse-errors' = 'true',  
 'scan.startup.mode' = 'earliest-offset',
 'scan.topic-partition-discovery.interval' = '1'
);

select * from UserBehavior;{code}
 


> 【Flink SQL Client】Use Flink Kafka Connector has more one exception
> --
>
> Key: FLINK-19995
> URL: https://issues.apache.org/jira/browse/FLINK-19995
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka, Table SQL / Client
>Affects Versions: 1.12.0
>Reporter: zhisheng
>Priority: Major
> Attachments: image-2020-11-05-17-35-10-103.png, 
> image-2020-11-05-17-37-21-610.png, image-2020-11-05-17-40-05-630.png, 
> image-2020-11-05-17-41-01-319.png
>
>
> When I add flink-sql-connector-kafka_2.11-1.12-SNAPSHOT.jar to lib and run a 
> SQL job, it throws an exception as in picture 2
>  
> !image-2020-11-05-17-35-10-103.png|width=658,height=251!
> !image-2020-11-05-17-37-21-610.png|width=812,height=600!
> {code:java}
> [ERROR] Could not execute SQL statement. Reason:
> java.lang.ClassNotFoundException: 
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer
> {code}
> When I add flink-connector-kafka_2.11-1.12-SNAPSHOT.jar to lib instead, it 
> fails with another exception
>  
> !image-2020-11-05-17-41-01-319.png|width=841,height=318!
> !image-2020-11-05-17-40-05-630.png|width=955,height=581!
> {code:java}
> [ERROR] Could not execute SQL statement. Reason:
> java.lang.ClassNotFoundException: 
> org.apache.kafka.common.serialization.ByteArrayDeserializer
> {code}
> If I add both jars, it returns an exception too
> {code:java}
> [ERROR] Could not execute SQL statement. Reason:
> java.lang.ClassNotFoundException: 
> org.apache.kafka.common.serialization.ByteArrayDeserializer
> {code}
> DDL & SQL:
>  
> {code:java}
> CREATE TABLE UserBehavior (
>  user_id BIGINT,
>  item_id BIGINT,
>  behavior 

[jira] [Created] (FLINK-19995) 【Flink SQL Client】Use Flink Kafka Connector has more one exception

2020-11-05 Thread zhisheng (Jira)
zhisheng created FLINK-19995:


 Summary: 【Flink SQL Client】Use Flink Kafka Connector has more one 
exception
 Key: FLINK-19995
 URL: https://issues.apache.org/jira/browse/FLINK-19995
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Kafka, Table SQL / Client
Affects Versions: 1.12.0
Reporter: zhisheng
 Attachments: image-2020-11-05-17-35-10-103.png, 
image-2020-11-05-17-37-21-610.png, image-2020-11-05-17-40-05-630.png, 
image-2020-11-05-17-41-01-319.png

When I add flink-sql-connector-kafka_2.11-1.12-SNAPSHOT.jar to lib and run a SQL 
job, it throws an exception as in picture 2

 

!image-2020-11-05-17-35-10-103.png|width=658,height=251!

!image-2020-11-05-17-37-21-610.png|width=812,height=600!
{code:java}
[ERROR] Could not execute SQL statement. Reason:
java.lang.ClassNotFoundException: 
org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer
{code}
When I add flink-connector-kafka_2.11-1.12-SNAPSHOT.jar to lib instead, it fails 
with another exception

 

!image-2020-11-05-17-41-01-319.png|width=841,height=318!

!image-2020-11-05-17-40-05-630.png|width=955,height=581!
{code:java}
[ERROR] Could not execute SQL statement. Reason:
java.lang.ClassNotFoundException: 
org.apache.kafka.common.serialization.ByteArrayDeserializer
{code}
If I add both jars, it returns an exception too
{code:java}
[ERROR] Could not execute SQL statement. Reason:
java.lang.ClassNotFoundException: 
org.apache.kafka.common.serialization.ByteArrayDeserializer
{code}
DDL & SQL:

 
{code:java}
CREATE TABLE UserBehavior (
 user_id BIGINT,
 item_id BIGINT,
 behavior CHAR(2),
 `time` BIGINT
) WITH (
 'connector' = 'kafka',
 'topic' = 'user_behavior',
 'properties.bootstrap.servers' = 'localhost:9092',
 'properties.group.id' = 'user_behavior_flink',
 'format' = 'json',
 'json.ignore-parse-errors' = 'true',  
 'scan.startup.mode' = 'earliest-offset',
 'scan.topic-partition-discovery.interval' = '1'
);

select * from UserBehavior;{code}
 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-19973) 【Flink-Deployment】YARN CLI Parameter doesn't work

2020-11-04 Thread zhisheng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhisheng updated FLINK-19973:
-
Description: 
When I use the flink-sql-client to deploy a job to YARN (per-job mode), I set 
`execution.target: yarn-per-job` in flink-conf.yaml, and the job deploys to YARN.

 

When I deploy a jar job to YARN, the command is `./bin/flink run -m yarn-cluster 
-ynm flink-1.12-test -ytm 3g -yjm 3g 
examples/streaming/StateMachineExample.jar`; it deploys OK, but `-ynm`, `-ytm 3g` 
and `-yjm 3g` don't take effect.

 

!image-2020-11-04-20-58-49-738.png|width=912,height=235!

 

 

When I remove the `execution.target: yarn-per-job` config, it works well.

 

!image-2020-11-04-21-00-06-180.png|width=1047,height=150!

  was:
When I use the flink-sql-client to deploy a job to YARN (per-job mode), I set 
`execution.target: yarn-per-job` in flink-conf.yaml, and the job deploys to YARN.

 

When I deploy a jar job to YARN, the command is `./bin/flink run -m yarn-cluster 
-ynm flink-1.12-test -ytm 3g -yjm 3g 
examples/streaming/StateMachineExample.jar`; it deploys OK, but `-ynm`, `-ytm 3g` 
and `-yjm 3g` don't take effect.

 

!image-2020-11-04-20-58-49-738.png!

 

 

When I remove the `execution.target: yarn-per-job` config, it works well.

 

!image-2020-11-04-21-00-06-180.png!


> 【Flink-Deployment】YARN CLI Parameter doesn't work
> -
>
> Key: FLINK-19973
> URL: https://issues.apache.org/jira/browse/FLINK-19973
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / YARN
>Affects Versions: 1.12.0
>Reporter: zhisheng
>Priority: Major
> Attachments: image-2020-11-04-20-58-49-738.png, 
> image-2020-11-04-21-00-06-180.png
>
>
> When I use the flink-sql-client to deploy a job to YARN (per-job mode), I set 
> `execution.target: yarn-per-job` in flink-conf.yaml, and the job deploys to YARN.
>  
> When I deploy a jar job to YARN, the command is `./bin/flink run -m 
> yarn-cluster -ynm flink-1.12-test -ytm 3g -yjm 3g 
> examples/streaming/StateMachineExample.jar`; it deploys OK, but `-ynm`, `-ytm 
> 3g` and `-yjm 3g` don't take effect.
>  
> !image-2020-11-04-20-58-49-738.png|width=912,height=235!
>  
>  
> When I remove the `execution.target: yarn-per-job` config, it works well.
>  
> !image-2020-11-04-21-00-06-180.png|width=1047,height=150!
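For comparison, a minimal sketch of passing the same resources via the generic -D options, which are the ones honored once `execution.target` is set (assuming Flink 1.12 configuration keys; the values mirror the report above):

{code:java}
# minimal sketch, assuming Flink 1.12 config keys; the generic -D options
# replace the legacy -ynm/-ytm/-yjm flags that execution.target disables
./bin/flink run -t yarn-per-job \
  -Dyarn.application.name=flink-1.12-test \
  -Djobmanager.memory.process.size=3g \
  -Dtaskmanager.memory.process.size=3g \
  examples/streaming/StateMachineExample.jar
{code}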



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-19973) 【Flink-Deployment】YARN CLI Parameter doesn't work when set `execution.target: yarn-per-job` config

2020-11-04 Thread zhisheng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhisheng updated FLINK-19973:
-
Summary: 【Flink-Deployment】YARN CLI Parameter doesn't work when set 
`execution.target: yarn-per-job` config  (was: 【Flink-Deployment】YARN CLI 
Parameter doesn't work)

> 【Flink-Deployment】YARN CLI Parameter doesn't work when set `execution.target: 
> yarn-per-job` config
> --
>
> Key: FLINK-19973
> URL: https://issues.apache.org/jira/browse/FLINK-19973
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / YARN
>Affects Versions: 1.12.0
>Reporter: zhisheng
>Priority: Major
> Attachments: image-2020-11-04-20-58-49-738.png, 
> image-2020-11-04-21-00-06-180.png
>
>
> When I use the flink-sql-client to deploy a job to YARN (per-job mode), I set 
> `execution.target: yarn-per-job` in flink-conf.yaml, and the job deploys to YARN.
>  
> When I deploy a jar job to YARN, the command is `./bin/flink run -m 
> yarn-cluster -ynm flink-1.12-test -ytm 3g -yjm 3g 
> examples/streaming/StateMachineExample.jar`; it deploys OK, but `-ynm`, `-ytm 
> 3g` and `-yjm 3g` don't take effect.
>  
> !image-2020-11-04-20-58-49-738.png|width=912,height=235!
>  
>  
> When I remove the `execution.target: yarn-per-job` config, it works well.
>  
> !image-2020-11-04-21-00-06-180.png|width=1047,height=150!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-19973) 【Flink-Deployment】YARN CLI Parameter doesn't work

2020-11-04 Thread zhisheng (Jira)
zhisheng created FLINK-19973:


 Summary: 【Flink-Deployment】YARN CLI Parameter doesn't work
 Key: FLINK-19973
 URL: https://issues.apache.org/jira/browse/FLINK-19973
 Project: Flink
  Issue Type: Bug
  Components: Deployment / YARN
Affects Versions: 1.12.0
Reporter: zhisheng
 Attachments: image-2020-11-04-20-58-49-738.png, 
image-2020-11-04-21-00-06-180.png

When I use the flink-sql-client to deploy a job to YARN (per-job mode), I set 
`execution.target: yarn-per-job` in flink-conf.yaml, and the job deploys to YARN.

 

When I deploy a jar job to YARN, the command is `./bin/flink run -m yarn-cluster 
-ynm flink-1.12-test -ytm 3g -yjm 3g 
examples/streaming/StateMachineExample.jar`; it deploys OK, but `-ynm`, `-ytm 3g` 
and `-yjm 3g` don't take effect.

 

!image-2020-11-04-20-58-49-738.png!

 

 

When I remove the `execution.target: yarn-per-job` config, it works well.

 

!image-2020-11-04-21-00-06-180.png!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-18449) Make topic discovery and partition discovery configurable for FlinkKafkaConsumer in Table API

2020-07-05 Thread zhisheng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17151727#comment-17151727
 ] 

zhisheng commented on FLINK-18449:
--

In FlinkKafkaConsumer, the constructors are as follows:

 
{code:java}
// single topic
public FlinkKafkaConsumer(String topic, KafkaDeserializationSchema<T> deserializer, Properties props) {
   this(Collections.singletonList(topic), deserializer, props);
}

// topic list
public FlinkKafkaConsumer(List<String> topics, DeserializationSchema<T> deserializer, Properties props) {
   this(topics, new KafkaDeserializationSchemaWrapper<>(deserializer), props);
}

// topic pattern
public FlinkKafkaConsumer(Pattern subscriptionPattern, DeserializationSchema<T> valueDeserializer, Properties props) {
   this(null, subscriptionPattern, new KafkaDeserializationSchemaWrapper<>(valueDeserializer), props);
}{code}
There are already three types. 
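A minimal DataStream sketch of the pattern variant together with partition discovery (the topic pattern and the 5s interval are example values, not from this issue):

{code:java}
import java.util.Properties;
import java.util.regex.Pattern;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase;

public class PatternSubscriptionSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092");
        props.setProperty("group.id", "demo");
        // a non-negative interval enables topic and partition discovery
        props.setProperty(FlinkKafkaConsumerBase.KEY_PARTITION_DISCOVERY_INTERVAL_MILLIS, "5000");

        // pattern variant: newly created topics matching the regex are picked up
        FlinkKafkaConsumer<String> consumer = new FlinkKafkaConsumer<>(
                Pattern.compile("user_behavior.*"), new SimpleStringSchema(), props);
    }
}
{code}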

> Make topic discovery and partition discovery configurable for 
> FlinkKafkaConsumer in Table API
> -
>
> Key: FLINK-18449
> URL: https://issues.apache.org/jira/browse/FLINK-18449
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka, Table SQL / API
>Affects Versions: 1.11.0
>Reporter: Shengkai Fang
>Assignee: Shengkai Fang
>Priority: Major
> Fix For: 1.12.0
>
>
> In the streaming API, we can use a regex to find topics and enable partition 
> discovery by setting a non-negative value for 
> `{{flink.partition-discovery.interval-millis}}`. However, it does not work in 
> the Table API. I think we can add options such as 'topic-regex' and 
> '{{partition-discovery.interval-millis}}' to the WITH block for users.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (FLINK-18449) Make topic discovery and partition discovery configurable for FlinkKafkaConsumer in Table API

2020-07-03 Thread zhisheng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17151165#comment-17151165
 ] 

zhisheng edited comment on FLINK-18449 at 7/4/20, 4:23 AM:
---

What about this, [~jark]? I have made a change like the one below at my company.

 
{code:java}
'topic-type' = 'single/list/pattern' // for topic type
'topic' = 'topic/topic-1, topic-2,..., topic-n/topic*', // for topic
'topic-partition-discovery.interval' = '5s' // for both topic discovery
{code}
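A hypothetical DDL using these proposed options could look as follows (the option names are this proposal, not part of any released Flink version):

{code:java}
-- hypothetical sketch: 'topic-type' and 'topic-partition-discovery.interval'
-- are proposed option names, not a released Flink API
CREATE TABLE UserBehavior (
 user_id BIGINT,
 item_id BIGINT
) WITH (
 'connector' = 'kafka',
 'topic-type' = 'pattern',
 'topic' = 'user_behavior.*',
 'topic-partition-discovery.interval' = '5s',
 'properties.bootstrap.servers' = 'localhost:9092',
 'format' = 'json'
);
{code}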


was (Author: zhisheng):
What about this, [~jark]? I have made a change like the one below at my company.

 
{code:java}
'topic-type' = 'single/list/pattern' // for topic
'topic' = 'topic/topic-1, topic-2,..., topic-n/topic*', // for topic
'topic-partition-discovery.interval' = '5s' // for both topic discovery
{code}

> Make topic discovery and partition discovery configurable for 
> FlinkKafkaConsumer in Table API
> -
>
> Key: FLINK-18449
> URL: https://issues.apache.org/jira/browse/FLINK-18449
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka, Table SQL / API
>Affects Versions: 1.11.0
>Reporter: Shengkai Fang
>Assignee: Shengkai Fang
>Priority: Major
> Fix For: 1.12.0
>
>
> In the streaming API, we can use a regex to find topics and enable partition 
> discovery by setting a non-negative value for 
> `{{flink.partition-discovery.interval-millis}}`. However, it does not work in 
> the Table API. I think we can add options such as 'topic-regex' and 
> '{{partition-discovery.interval-millis}}' to the WITH block for users.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (FLINK-18449) Make topic discovery and partition discovery configurable for FlinkKafkaConsumer in Table API

2020-07-03 Thread zhisheng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17151165#comment-17151165
 ] 

zhisheng edited comment on FLINK-18449 at 7/4/20, 4:23 AM:
---

What about this, [~jark]? I have made a change like the one below at my company.

 
{code:java}
'topic-type' = 'single/list/pattern' // for topic
'topic' = 'topic/topic-1, topic-2,..., topic-n/topic*', // for topic
'topic-partition-discovery.interval' = '5s' // for both topic discovery
{code}


was (Author: zhisheng):
What about this, [~jark]? I have made a change like the one below at my company.

 


{code:java}
'topic-type' = 'single/list/pattern' // for topic
'type'topic' = 'topic/topic-1, topic-2,..., topic-n/topic*', // for topic
'topic-partition-discovery.interval' = '5s' // for both topic discovery
{code}

> Make topic discovery and partition discovery configurable for 
> FlinkKafkaConsumer in Table API
> -
>
> Key: FLINK-18449
> URL: https://issues.apache.org/jira/browse/FLINK-18449
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka, Table SQL / API
>Affects Versions: 1.11.0
>Reporter: Shengkai Fang
>Assignee: Shengkai Fang
>Priority: Major
> Fix For: 1.12.0
>
>
> In the streaming API, we can use a regex to find topics and enable partition 
> discovery by setting a non-negative value for 
> `{{flink.partition-discovery.interval-millis}}`. However, it does not work in 
> the Table API. I think we can add options such as 'topic-regex' and 
> '{{partition-discovery.interval-millis}}' to the WITH block for users.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

