[jira] [Created] (FLINK-33296) Split flink-dist.jar into independent dependency packages

2023-10-17 Thread JieFang.He (Jira)
JieFang.He created FLINK-33296:
--

 Summary: Split flink-dist.jar into independent dependency packages
 Key: FLINK-33296
 URL: https://issues.apache.org/jira/browse/FLINK-33296
 Project: Flink
  Issue Type: Improvement
Reporter: JieFang.He


Flink currently ships flink-dist as a single shaded package. Can it be split into 
independent dependency packages?

This would make it easier to replace individual dependencies.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-28287) Should TaskManagerRunner need a ShutdownHook

2022-07-06 Thread JieFang.He (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-28287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

JieFang.He updated FLINK-28287:
---
Affects Version/s: 1.14.0

> Should TaskManagerRunner need a ShutdownHook
> 
>
> Key: FLINK-28287
> URL: https://issues.apache.org/jira/browse/FLINK-28287
> Project: Flink
>  Issue Type: Improvement
>Affects Versions: 1.14.0, 1.14.5, 1.15.1
>Reporter: JieFang.He
>Priority: Major
>
> TaskManagerRunner has a close() method, but it is not called when the process stops.
> Some resources in TaskManagerRunner register their own ShutdownHook, but some, 
> such as rpcSystem, do not, which causes the temporary file 
> flink-rpc-akka_*.jar to not be deleted on shutdown.
> Should TaskManagerRunner register a ShutdownHook that calls close() to 
> release all resources?
>  
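A minimal sketch of the kind of hook described above, using only plain JDK APIs; the Runner class and the names below are illustrative stand-ins, not Flink's actual TaskManagerRunner wiring:

{code:java}
import java.util.concurrent.atomic.AtomicBoolean;

public final class ShutdownHookExample {

    // Stand-in for a component like TaskManagerRunner that exposes close().
    static final class Runner implements AutoCloseable {
        private final AtomicBoolean closed = new AtomicBoolean(false);

        @Override
        public void close() {
            if (closed.compareAndSet(false, true)) {
                // Release resources here, e.g. delete extracted temporary jars.
                System.out.println("runner closed, temporary files cleaned up");
            }
        }
    }

    public static void main(String[] args) {
        Runner runner = new Runner();
        // Register a JVM shutdown hook so close() also runs on normal exit or SIGTERM.
        Runtime.getRuntime().addShutdownHook(new Thread(runner::close, "runner-shutdown-hook"));
        // ... normal work ...
    }
}
{code}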



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-28287) Should TaskManagerRunner need a ShutdownHook

2022-07-06 Thread JieFang.He (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-28287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

JieFang.He updated FLINK-28287:
---
Affects Version/s: 1.15.1
   1.14.5
   (was: 1.14.0)

> Should TaskManagerRunner need a ShutdownHook
> 
>
> Key: FLINK-28287
> URL: https://issues.apache.org/jira/browse/FLINK-28287
> Project: Flink
>  Issue Type: Improvement
>Affects Versions: 1.14.5, 1.15.1
>Reporter: JieFang.He
>Priority: Major
>
> TaskManagerRunner has a close() method, but it is not called when the process stops.
> Some resources in TaskManagerRunner register their own ShutdownHook, but some, 
> such as rpcSystem, do not, which causes the temporary file 
> flink-rpc-akka_*.jar to not be deleted on shutdown.
> Should TaskManagerRunner register a ShutdownHook that calls close() to 
> release all resources?
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (FLINK-28287) Should TaskManagerRunner need a ShutdownHook

2022-06-28 Thread JieFang.He (Jira)

JieFang.He updated an issue

Flink /  FLINK-28287
Should TaskManagerRunner need a ShutdownHook

Change By: JieFang.He

TaskManagerRunner has a close() method, but it is not called when the process stops. Some resources in TaskManagerRunner register their own ShutdownHook, but some, such as rpcSystem, do not, which causes the temporary file flink-rpc-akka_*.jar to not be deleted on shutdown. Should TaskManagerRunner register a ShutdownHook that calls close() to release all resources?


--
This message was sent by Atlassian Jira
(v8.20.10#820010-sha1:ace47f9)



[jira] [Created] (FLINK-28287) Should TaskManagerRunner need a ShutdownHook

2022-06-28 Thread JieFang.He (Jira)

JieFang.He created an issue

Flink /  FLINK-28287
Should TaskManagerRunner need a ShutdownHook

Issue Type: Improvement
Affects Versions: 1.14.0
Assignee: Unassigned
Created: 29/Jun/22 00:06
Priority: Major
Reporter: JieFang.He

TaskManagerRunner has a close() method, but it is not called when the process stops. Some resources in TaskManagerRunner register their own ShutdownHook, but some, such as rpcSystem, do not, which causes the temporary file flink-rpc-akka_*.jar to not be deleted on shutdown. Should TaskManagerRunner register a ShutdownHook that calls close()?

[jira] [Commented] (FLINK-24697) Kafka table source cannot change the auto.offset.reset setting for 'group-offsets' startup mode

2022-02-23 Thread JieFang.He (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-24697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17497197#comment-17497197
 ] 

JieFang.He commented on FLINK-24697:


Should the default value be changed to LATEST? The problem described in 
FLINK-24681 still exists if auto.offset.reset is not set.

> Kafka table source cannot change the auto.offset.reset setting for 
> 'group-offsets' startup mode
> ---
>
> Key: FLINK-24697
> URL: https://issues.apache.org/jira/browse/FLINK-24697
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kafka
>Affects Versions: 1.14.0, 1.15.0
>Reporter: Hang Ruan
>Assignee: Hang Ruan
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 1.15.0
>
>
> Because Flink 1.13 SQL does not use the new Source API from FLIP-27, the 
> behavior when starting from group offsets in Flink 1.13 falls back to the Kafka 
> 'auto.offset.reset' default value (latest) when the 'auto.offset.reset' 
> configuration is not set in the table options. But in Flink 1.13 we could change 
> the behavior by setting 'auto.offset.reset' to other values; see the method 
> {{setStartFromGroupOffsets}} in the class {{FlinkKafkaConsumerBase}}.
> Flink 1.14 uses the new Source API, but we have no way to change the default 
> 'auto.offset.reset' value when using the 'group-offsets' startup mode. In the 
> DataStream API, we could change it by 
> `kafkaSourceBuilder.setStartingOffsets(OffsetsInitializer.committedOffsets(OffsetResetStrategy))`.
> So we need a way to change the auto offset reset configuration.
> The design is that when 'auto.offset.reset' is set, the 'group-offsets' 
> startup mode will use the provided auto offset reset strategy, or else the 'none' 
> reset strategy, in order to be consistent with the DataStream API.
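For reference, a hedged sketch of the DataStream-API workaround mentioned above; the broker address, topic, and group id are placeholder values, and the fallback strategy shown here is LATEST:

{code:java}
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;
import org.apache.kafka.clients.consumer.OffsetResetStrategy;

// Start from the committed group offsets, falling back to LATEST when the
// consumer group has no committed offset yet.
KafkaSource<String> source =
        KafkaSource.<String>builder()
                .setBootstrapServers("kafkaserver:9092")   // placeholder
                .setTopics("intopic")                      // placeholder
                .setGroupId("testGroup")                   // placeholder
                .setStartingOffsets(
                        OffsetsInitializer.committedOffsets(OffsetResetStrategy.LATEST))
                .setValueOnlyDeserializer(new SimpleStringSchema())
                .build();
{code}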



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (FLINK-19754) Cannot have more than one execute() or executeAsync() call in a single environment.

2021-04-08 Thread JieFang.He (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17317076#comment-17317076
 ] 

JieFang.He commented on FLINK-19754:


[~jark] My problem is not an exception on StreamTableEnvironment with the Web UI. The 
situation is that my job has both a DataStream part and a SQL part, and each of them 
needs its own execute call to run (StreamExecutionEnvironment.execute and 
StreamTableEnvironment.execute). Submitting with flink run works well, but the REST API 
can only work with one of them; having both may cause the exception "Cannot have more 
than one execute() or executeAsync() call in a single environment". It seems 
that the REST API can only support one execute per job.

> Cannot have more than one execute() or executeAsync() call in a single 
> environment.
> ---
>
> Key: FLINK-19754
> URL: https://issues.apache.org/jira/browse/FLINK-19754
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.11.2
>Reporter: little-tomato
>Priority: Major
>
> I run this code on my standalone cluster. When I submit the job, the error log 
> is as follows:
> {code}
> 2020-10-20 11:53:42,969 WARN 
> org.apache.flink.client.deployment.application.DetachedApplicationRunner [] - 
> Could not execute application: 
>  org.apache.flink.client.program.ProgramInvocationException: The main method 
> caused an error: Cannot have more than one execute() or executeAsync() call 
> in a single environment.
>  at 
> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:302)
>  ~[flink-clients_2.12-1.11.0.jar:1.11.0]
>  at 
> org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:198)
>  ~[flink-clients_2.12-1.11.0.jar:1.11.0]
>  at org.apache.flink.client.ClientUtils.executeProgram(ClientUtils.java:149) 
> ~[flink-clients_2.12-1.11.0.jar:1.11.0]
>  at 
> org.apache.flink.client.deployment.application.DetachedApplicationRunner.tryExecuteJobs(DetachedApplicationRunner.java:78)
>  ~[flink-clients_2.12-1.11.0.jar:1.11.0]
>  at 
> org.apache.flink.client.deployment.application.DetachedApplicationRunner.run(DetachedApplicationRunner.java:67)
>  ~[flink-clients_2.12-1.11.0.jar:1.11.0]
>  at 
> org.apache.flink.runtime.webmonitor.handlers.JarRunHandler.lambda$handleRequest$0(JarRunHandler.java:100)
>  ~[flink-dist_2.12-1.11.2.jar:1.11.2]
>  at 
> java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1590)
>  [?:1.8.0_221]
>  at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> [?:1.8.0_221]
>  at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_221]
>  at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>  [?:1.8.0_221]
>  at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>  [?:1.8.0_221]
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  [?:1.8.0_221]
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  [?:1.8.0_221]
>  at java.lang.Thread.run(Thread.java:748) [?:1.8.0_221]
>  Caused by: org.apache.flink.util.FlinkRuntimeException: Cannot have more 
> than one execute() or executeAsync() call in a single environment.
>  at 
> org.apache.flink.client.program.StreamContextEnvironment.validateAllowedExecution(StreamContextEnvironment.java:139)
>  ~[flink-clients_2.12-1.11.0.jar:1.11.0]
>  at 
> org.apache.flink.client.program.StreamContextEnvironment.executeAsync(StreamContextEnvironment.java:127)
>  ~[flink-clients_2.12-1.11.0.jar:1.11.0]
>  at 
> org.apache.flink.client.program.StreamContextEnvironment.execute(StreamContextEnvironment.java:76)
>  ~[flink-clients_2.12-1.11.0.jar:1.11.0]
>  at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1697)
>  ~[flink-dist_2.12-1.11.2.jar:1.11.2]
>  at cn.cuiot.dmp.ruleengine.job.RuleEngineJob.main(RuleEngineJob.java:556) 
> ~[?:?]
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_221]
>  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> ~[?:1.8.0_221]
>  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  ~[?:1.8.0_221]
>  at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_221]
>  at 
> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:288)
>  ~[flink-clients_2.12-1.11.0.jar:1.11.0]
> {code}
> my code is:
> {code:java}
> final StreamExecutionEnvironment env = 
> StreamExecutionEnvironment.getExecutionEnvironment();
>  EnvironmentSettings bsSettings = 
> E

[jira] [Comment Edited] (FLINK-19754) Cannot have more than one execute() or executeAsync() call in a single environment.

2021-04-08 Thread JieFang.He (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17317029#comment-17317029
 ] 

JieFang.He edited comment on FLINK-19754 at 4/8/21, 9:47 AM:
-

It happens when using the Web UI or the REST API to submit a job that has more than one 
execute call, like this:
{code:java}
StreamExecutionEnvironment bsEnv = 
StreamExecutionEnvironment.getExecutionEnvironment();
EnvironmentSettings bsSettings = 
EnvironmentSettings.newInstance().useBlinkPlanner().inStreamingMode().build();
StreamTableEnvironment bsTableEnv = StreamTableEnvironment.create(bsEnv, 
bsSettings);

bsTableEnv.executeSql(insertSql);
bsEnv.execute("bsEnv");
{code}
but it can run with flink run. Are there any restrictions on the REST API?
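A hedged sketch continuing the snippet above, applicable only when the SQL INSERT is the sole pipeline in the program (not an official workaround): executeSql() on an INSERT already submits its own job, so the second execute() call, which triggers the error discussed here, can simply be dropped.

{code:java}
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;

StreamExecutionEnvironment bsEnv =
        StreamExecutionEnvironment.getExecutionEnvironment();
EnvironmentSettings bsSettings =
        EnvironmentSettings.newInstance().useBlinkPlanner().inStreamingMode().build();
StreamTableEnvironment bsTableEnv = StreamTableEnvironment.create(bsEnv, bsSettings);

String insertSql = "INSERT INTO ...";   // placeholder, as in the snippet above

bsTableEnv.executeSql(insertSql);       // submits the SQL job on its own
// bsEnv.execute("bsEnv");              // omitted: would be a second execute() in this environment
{code}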


was (Author: hejiefang):
It happens when using the Web UI or the REST API to submit a job that has more than one 
execute call, like this:
{code:java}
bsTableEnv.executeSql(insertSql);
bsEnv.execute("bsEnv");
{code}
but it can run with flink run. Are there any restrictions on the REST API?

> Cannot have more than one execute() or executeAsync() call in a single 
> environment.
> ---
>
> Key: FLINK-19754
> URL: https://issues.apache.org/jira/browse/FLINK-19754
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.11.2
>Reporter: little-tomato
>Priority: Major
>
> I run this code on my standalone cluster. When I submit the job, the error log 
> is as follows:
> {code}
> 2020-10-20 11:53:42,969 WARN 
> org.apache.flink.client.deployment.application.DetachedApplicationRunner [] - 
> Could not execute application: 
>  org.apache.flink.client.program.ProgramInvocationException: The main method 
> caused an error: Cannot have more than one execute() or executeAsync() call 
> in a single environment.
>  at 
> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:302)
>  ~[flink-clients_2.12-1.11.0.jar:1.11.0]
>  at 
> org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:198)
>  ~[flink-clients_2.12-1.11.0.jar:1.11.0]
>  at org.apache.flink.client.ClientUtils.executeProgram(ClientUtils.java:149) 
> ~[flink-clients_2.12-1.11.0.jar:1.11.0]
>  at 
> org.apache.flink.client.deployment.application.DetachedApplicationRunner.tryExecuteJobs(DetachedApplicationRunner.java:78)
>  ~[flink-clients_2.12-1.11.0.jar:1.11.0]
>  at 
> org.apache.flink.client.deployment.application.DetachedApplicationRunner.run(DetachedApplicationRunner.java:67)
>  ~[flink-clients_2.12-1.11.0.jar:1.11.0]
>  at 
> org.apache.flink.runtime.webmonitor.handlers.JarRunHandler.lambda$handleRequest$0(JarRunHandler.java:100)
>  ~[flink-dist_2.12-1.11.2.jar:1.11.2]
>  at 
> java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1590)
>  [?:1.8.0_221]
>  at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> [?:1.8.0_221]
>  at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_221]
>  at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>  [?:1.8.0_221]
>  at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>  [?:1.8.0_221]
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  [?:1.8.0_221]
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  [?:1.8.0_221]
>  at java.lang.Thread.run(Thread.java:748) [?:1.8.0_221]
>  Caused by: org.apache.flink.util.FlinkRuntimeException: Cannot have more 
> than one execute() or executeAsync() call in a single environment.
>  at 
> org.apache.flink.client.program.StreamContextEnvironment.validateAllowedExecution(StreamContextEnvironment.java:139)
>  ~[flink-clients_2.12-1.11.0.jar:1.11.0]
>  at 
> org.apache.flink.client.program.StreamContextEnvironment.executeAsync(StreamContextEnvironment.java:127)
>  ~[flink-clients_2.12-1.11.0.jar:1.11.0]
>  at 
> org.apache.flink.client.program.StreamContextEnvironment.execute(StreamContextEnvironment.java:76)
>  ~[flink-clients_2.12-1.11.0.jar:1.11.0]
>  at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1697)
>  ~[flink-dist_2.12-1.11.2.jar:1.11.2]
>  at cn.cuiot.dmp.ruleengine.job.RuleEngineJob.main(RuleEngineJob.java:556) 
> ~[?:?]
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_221]
>  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> ~[?:1.8.0_221]
>  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  ~[?:1.8.0_221]
>  at java.lang.reflect.Method.invoke(Method.ja

[jira] [Commented] (FLINK-19754) Cannot have more than one execute() or executeAsync() call in a single environment.

2021-04-08 Thread JieFang.He (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17317029#comment-17317029
 ] 

JieFang.He commented on FLINK-19754:


It happens when using the Web UI or the REST API to submit a job that has more than one 
execute call, like this:
{code:java}
bsTableEnv.executeSql(insertSql);
bsEnv.execute("bsEnv");
{code}
but it can run with flink run. Are there any restrictions on the REST API?

> Cannot have more than one execute() or executeAsync() call in a single 
> environment.
> ---
>
> Key: FLINK-19754
> URL: https://issues.apache.org/jira/browse/FLINK-19754
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.11.2
>Reporter: little-tomato
>Priority: Major
>
> I run this code on my standalone cluster. When I submit the job, the error log 
> is as follows:
> {code}
> 2020-10-20 11:53:42,969 WARN 
> org.apache.flink.client.deployment.application.DetachedApplicationRunner [] - 
> Could not execute application: 
>  org.apache.flink.client.program.ProgramInvocationException: The main method 
> caused an error: Cannot have more than one execute() or executeAsync() call 
> in a single environment.
>  at 
> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:302)
>  ~[flink-clients_2.12-1.11.0.jar:1.11.0]
>  at 
> org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:198)
>  ~[flink-clients_2.12-1.11.0.jar:1.11.0]
>  at org.apache.flink.client.ClientUtils.executeProgram(ClientUtils.java:149) 
> ~[flink-clients_2.12-1.11.0.jar:1.11.0]
>  at 
> org.apache.flink.client.deployment.application.DetachedApplicationRunner.tryExecuteJobs(DetachedApplicationRunner.java:78)
>  ~[flink-clients_2.12-1.11.0.jar:1.11.0]
>  at 
> org.apache.flink.client.deployment.application.DetachedApplicationRunner.run(DetachedApplicationRunner.java:67)
>  ~[flink-clients_2.12-1.11.0.jar:1.11.0]
>  at 
> org.apache.flink.runtime.webmonitor.handlers.JarRunHandler.lambda$handleRequest$0(JarRunHandler.java:100)
>  ~[flink-dist_2.12-1.11.2.jar:1.11.2]
>  at 
> java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1590)
>  [?:1.8.0_221]
>  at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> [?:1.8.0_221]
>  at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_221]
>  at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>  [?:1.8.0_221]
>  at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>  [?:1.8.0_221]
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  [?:1.8.0_221]
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  [?:1.8.0_221]
>  at java.lang.Thread.run(Thread.java:748) [?:1.8.0_221]
>  Caused by: org.apache.flink.util.FlinkRuntimeException: Cannot have more 
> than one execute() or executeAsync() call in a single environment.
>  at 
> org.apache.flink.client.program.StreamContextEnvironment.validateAllowedExecution(StreamContextEnvironment.java:139)
>  ~[flink-clients_2.12-1.11.0.jar:1.11.0]
>  at 
> org.apache.flink.client.program.StreamContextEnvironment.executeAsync(StreamContextEnvironment.java:127)
>  ~[flink-clients_2.12-1.11.0.jar:1.11.0]
>  at 
> org.apache.flink.client.program.StreamContextEnvironment.execute(StreamContextEnvironment.java:76)
>  ~[flink-clients_2.12-1.11.0.jar:1.11.0]
>  at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1697)
>  ~[flink-dist_2.12-1.11.2.jar:1.11.2]
>  at cn.cuiot.dmp.ruleengine.job.RuleEngineJob.main(RuleEngineJob.java:556) 
> ~[?:?]
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_221]
>  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> ~[?:1.8.0_221]
>  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  ~[?:1.8.0_221]
>  at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_221]
>  at 
> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:288)
>  ~[flink-clients_2.12-1.11.0.jar:1.11.0]
> {code}
> my code is:
> {code:java}
> final StreamExecutionEnvironment env = 
> StreamExecutionEnvironment.getExecutionEnvironment();
>  EnvironmentSettings bsSettings = 
> EnvironmentSettings.newInstance().useBlinkPlanner().inStreamingMode().build();
>  StreamTableEnvironment tEnv = StreamTableEnvironment.create(env, bsSettings);
>  ...
>  FlinkKafkaConsumer<String> myConsumer = new 
> FlinkKafkaConsumer<>("kafkatopic", new SimpleStringSchema(), 
> pro

[jira] [Closed] (FLINK-21841) Can not find kafka-connect with sql-kafka-connector

2021-03-18 Thread JieFang.He (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21841?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

JieFang.He closed FLINK-21841.
--
Resolution: Not A Problem

> Can not find kafka-connect with sql-kafka-connector
> ---
>
> Key: FLINK-21841
> URL: https://issues.apache.org/jira/browse/FLINK-21841
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka, Table SQL / Ecosystem
>Affects Versions: 1.11.1
>Reporter: JieFang.He
>Priority: Major
>
>  
> When using the Kafka SQL connector with a fat jar (bundling 
> flink-sql-connector-kafka_2.11 in the user jar) on Flink 1.11.1, like
> {code:java}
> CREATE TABLE user_behavior (
>  user_id INT,
>  action STRING,
>  province INT,
>  ts TIMESTAMP(3)
> ) WITH (
>  'connector' = 'kafka',
>  'topic' = 'intopic',
>  'properties.bootstrap.servers' = 'kafkaserver:9092',
>  'properties.group.id' = 'testGroup',
>  'format' = 'csv',
>  'scan.startup.mode' = 'earliest-offset'
> )
> {code}
>  I get an exception
> {code:java}
> Caused by: org.apache.flink.table.api.ValidationException: Cannot discover a 
> connector using option ''connector'='kafka''.
> at 
> org.apache.flink.table.factories.FactoryUtil.getDynamicTableFactory(FactoryUtil.java:329)
> at 
> org.apache.flink.table.factories.FactoryUtil.createTableSource(FactoryUtil.java:118)
> ... 35 more
> Caused by: org.apache.flink.table.api.ValidationException: Could not find any 
> factory for identifier 'kafka' that implements 
> 'org.apache.flink.table.factories.DynamicTableSourceFactory' in the 
> classpath.Available factory identifiers are:datagen
> {code}
> It looks like the issue 
> [FLINK-18076|https://issues.apache.org/jira/browse/FLINK-18076] does not deal 
> with all exceptions
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-21841) Can not find kafka-connect with sql-kafka-connector

2021-03-18 Thread JieFang.He (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-21841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17304609#comment-17304609
 ] 

JieFang.He commented on FLINK-21841:


[~jark] Thank you for your answers. It works. I used the wrong Maven plugin
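For context, a hedged sketch of the shade-based packaging commonly used instead of maven-assembly-plugin's jar-with-dependencies: maven-shade-plugin's ServicesResourceTransformer merges META-INF/services entries, which is what lets FactoryUtil discover KafkaDynamicTableFactory in a fat jar (the plugin version is omitted here and should be chosen to match the project):

{code:xml}
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>shade</goal>
      </goals>
      <configuration>
        <transformers>
          <!-- Merges META-INF/services files from all shaded dependencies. -->
          <transformer implementation="org.apache.maven.plugins.shade.resource.ServicesResourceTransformer"/>
        </transformers>
      </configuration>
    </execution>
  </executions>
</plugin>
{code}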

> Can not find kafka-connect with sql-kafka-connector
> ---
>
> Key: FLINK-21841
> URL: https://issues.apache.org/jira/browse/FLINK-21841
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka, Table SQL / Ecosystem
>Affects Versions: 1.11.1
>Reporter: JieFang.He
>Priority: Major
>
>  
> When using the Kafka SQL connector with a fat jar (bundling 
> flink-sql-connector-kafka_2.11 in the user jar) on Flink 1.11.1, like
> {code:java}
> CREATE TABLE user_behavior (
>  user_id INT,
>  action STRING,
>  province INT,
>  ts TIMESTAMP(3)
> ) WITH (
>  'connector' = 'kafka',
>  'topic' = 'intopic',
>  'properties.bootstrap.servers' = 'kafkaserver:9092',
>  'properties.group.id' = 'testGroup',
>  'format' = 'csv',
>  'scan.startup.mode' = 'earliest-offset'
> )
> {code}
>  I get an exception
> {code:java}
> Caused by: org.apache.flink.table.api.ValidationException: Cannot discover a 
> connector using option ''connector'='kafka''.
> at 
> org.apache.flink.table.factories.FactoryUtil.getDynamicTableFactory(FactoryUtil.java:329)
> at 
> org.apache.flink.table.factories.FactoryUtil.createTableSource(FactoryUtil.java:118)
> ... 35 more
> Caused by: org.apache.flink.table.api.ValidationException: Could not find any 
> factory for identifier 'kafka' that implements 
> 'org.apache.flink.table.factories.DynamicTableSourceFactory' in the 
> classpath.Available factory identifiers are:datagen
> {code}
> It looks like the issue 
> [FLINK-18076|https://issues.apache.org/jira/browse/FLINK-18076] does not deal 
> with all exceptions
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (FLINK-21841) Can not find kafka-connect with sql-kafka-connector

2021-03-18 Thread JieFang.He (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-21841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17303802#comment-17303802
 ] 

JieFang.He edited comment on FLINK-21841 at 3/18/21, 9:13 AM:
--

I use the Maven plugin "maven-assembly-plugin" to package the dependencies and 
the code together into one jar. The pom looks like this:
{code:java}
<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-sql-connector-kafka_2.11</artifactId>
  <version>1.11.1</version>
</dependency>
<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-connector-kafka_2.11</artifactId>
  <version>1.11.1</version>
  <scope>provided</scope>
</dependency>
...
<plugin>
  <artifactId>maven-assembly-plugin</artifactId>
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>single</goal>
      </goals>
    </execution>
  </executions>
  <configuration>
    <descriptorRefs>
      <descriptorRef>jar-with-dependencies</descriptorRef>
    </descriptorRefs>
    <archive>
      <manifest>
        <addClasspath>true</addClasspath>
        <classpathPrefix>lib/</classpathPrefix>
        <mainClass>DimTable.DimTable</mainClass>
      </manifest>
    </archive>
  </configuration>
</plugin>
{code}
I also find that the jdbc-connector has no problem; only the kafka-connector gets 
the exception, and there is also no problem when using the kafka-connector with the 
DataStream API.

It seems that FactoryUtil did not find the KafkaDynamicTableFactory.

 


was (Author: hejiefang):
I use the Maven plugin "maven-assembly-plugin" to package the dependencies and 
the code together into one jar. The pom looks like this:
{code:java}
<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-sql-connector-kafka_2.11</artifactId>
  <version>1.11.1</version>
</dependency>
<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-connector-kafka_2.11</artifactId>
  <version>1.11.1</version>
  <scope>provided</scope>
</dependency>
...
<plugin>
  <artifactId>maven-assembly-plugin</artifactId>
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>single</goal>
      </goals>
    </execution>
  </executions>
  <configuration>
    <descriptorRefs>
      <descriptorRef>jar-with-dependencies</descriptorRef>
    </descriptorRefs>
    <archive>
      <manifest>
        <addClasspath>true</addClasspath>
        <classpathPrefix>lib/</classpathPrefix>
        <mainClass>DimTable.DimTable</mainClass>
      </manifest>
    </archive>
  </configuration>
</plugin>
{code}
I also find that the jdbc-connector has no problem; only the kafka-connector gets 
the exception, and there is also no problem when using the kafka-connector with the 
DataStream API.

It seems that FactoryUtil did not find the KafkaDynamicTableFactory.

 

> Can not find kafka-connect with sql-kafka-connector
> ---
>
> Key: FLINK-21841
> URL: https://issues.apache.org/jira/browse/FLINK-21841
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka, Table SQL / Ecosystem
>Affects Versions: 1.11.1
>Reporter: JieFang.He
>Priority: Major
>
>  
> When using the Kafka SQL connector with a fat jar (bundling 
> flink-sql-connector-kafka_2.11 in the user jar) on Flink 1.11.1, like
> {code:java}
> CREATE TABLE user_behavior (
>  user_id INT,
>  action STRING,
>  province INT,
>  ts TIMESTAMP(3)
> ) WITH (
>  'connector' = 'kafka',
>  'topic' = 'intopic',
>  'properties.bootstrap.servers' = 'kafkaserver:9092',
>  'properties.group.id' = 'testGroup',
>  'format' = 'csv',
>  'scan.startup.mode' = 'earliest-offset'
> )
> {code}
>  I get an exception
> {code:java}
> Caused by: org.apache.flink.table.api.ValidationException: Cannot discover a 
> connector using option ''connector'='kafka''.
> at 
> org.apache.flink.table.factories.FactoryUtil.getDynamicTableFactory(FactoryUtil.java:329)
> at 
> org.apache.flink.table.factories.FactoryUtil.createTableSource(FactoryUtil.java:118)
> ... 35 more
> Caused by: org.apache.flink.table.api.ValidationException: Could not find any 
> factory for identifier 'kafka' that implements 
> 'org.apache.flink.table.factories.DynamicTableSourceFactory' in the 
> classpath.Available factory identifiers are:datagen
> {code}
> It looks like the issue 
> [FLINK-18076|https://issues.apache.org/jira/browse/FLINK-18076] does not deal 
> with all exceptions
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (FLINK-21841) Can not find kafka-connect with sql-kafka-connector

2021-03-18 Thread JieFang.He (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-21841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17303802#comment-17303802
 ] 

JieFang.He edited comment on FLINK-21841 at 3/18/21, 9:12 AM:
--

I use the Maven plugin "maven-assembly-plugin" to package the dependencies and 
the code together into one jar. The pom looks like this:
{code:java}
<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-sql-connector-kafka_2.11</artifactId>
  <version>1.11.1</version>
</dependency>
<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-connector-kafka_2.11</artifactId>
  <version>1.11.1</version>
  <scope>provided</scope>
</dependency>
...
<plugin>
  <artifactId>maven-assembly-plugin</artifactId>
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>single</goal>
      </goals>
    </execution>
  </executions>
  <configuration>
    <descriptorRefs>
      <descriptorRef>jar-with-dependencies</descriptorRef>
    </descriptorRefs>
    <archive>
      <manifest>
        <addClasspath>true</addClasspath>
        <classpathPrefix>lib/</classpathPrefix>
        <mainClass>DimTable.DimTable</mainClass>
      </manifest>
    </archive>
  </configuration>
</plugin>
{code}
I also find that the jdbc-connector has no problem; only the kafka-connector gets 
the exception, and there is also no problem when using the kafka-connector with the 
DataStream API.

It seems that FactoryUtil did not find the KafkaDynamicTableFactory.

 


was (Author: hejiefang):
I use the Maven plugin "maven-assembly-plugin" to package the dependencies and 
the code together into one jar. The pom looks like this:
{code:java}
<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-sql-connector-kafka_2.11</artifactId>
  <version>1.11.1</version>
</dependency>
<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-connector-kafka_2.11</artifactId>
  <version>1.11.1</version>
  <scope>provided</scope>
</dependency>
...
<plugin>
  <artifactId>maven-assembly-plugin</artifactId>
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>single</goal>
      </goals>
    </execution>
  </executions>
  <configuration>
    <descriptorRefs>
      <descriptorRef>jar-with-dependencies</descriptorRef>
    </descriptorRefs>
    <archive>
      <manifest>
        <addClasspath>true</addClasspath>
        <classpathPrefix>lib/</classpathPrefix>
        <mainClass>DimTable.DimTable</mainClass>
      </manifest>
    </archive>
  </configuration>
</plugin>
{code}
I also find that the jdbc-connector has no problem; only the kafka-connector gets 
the exception, and there is also no problem when using the kafka-connector with the 
DataStream API.

 

> Can not find kafka-connect with sql-kafka-connector
> ---
>
> Key: FLINK-21841
> URL: https://issues.apache.org/jira/browse/FLINK-21841
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka, Table SQL / Ecosystem
>Affects Versions: 1.11.1
>Reporter: JieFang.He
>Priority: Major
>
>  
> When using the Kafka SQL connector with a fat jar (bundling 
> flink-sql-connector-kafka_2.11 in the user jar) on Flink 1.11.1, like
> {code:java}
> CREATE TABLE user_behavior (
>  user_id INT,
>  action STRING,
>  province INT,
>  ts TIMESTAMP(3)
> ) WITH (
>  'connector' = 'kafka',
>  'topic' = 'intopic',
>  'properties.bootstrap.servers' = 'kafkaserver:9092',
>  'properties.group.id' = 'testGroup',
>  'format' = 'csv',
>  'scan.startup.mode' = 'earliest-offset'
> )
> {code}
>  I get an exception
> {code:java}
> Caused by: org.apache.flink.table.api.ValidationException: Cannot discover a 
> connector using option ''connector'='kafka''.
> at 
> org.apache.flink.table.factories.FactoryUtil.getDynamicTableFactory(FactoryUtil.java:329)
> at 
> org.apache.flink.table.factories.FactoryUtil.createTableSource(FactoryUtil.java:118)
> ... 35 more
> Caused by: org.apache.flink.table.api.ValidationException: Could not find any 
> factory for identifier 'kafka' that implements 
> 'org.apache.flink.table.factories.DynamicTableSourceFactory' in the 
> classpath.Available factory identifiers are:datagen
> {code}
> It looks like the issue 
> [FLINK-18076|https://issues.apache.org/jira/browse/FLINK-18076] does not deal 
> with all exceptions
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-21841) Can not find kafka-connect with sql-kafka-connector

2021-03-18 Thread JieFang.He (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21841?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

JieFang.He updated FLINK-21841:
---
Description: 
 

When using the Kafka SQL connector with a fat jar (bundling 
flink-sql-connector-kafka_2.11 in the user jar) on Flink 1.11.1, like
{code:java}
CREATE TABLE user_behavior (
 user_id INT,
 action STRING,
 province INT,
 ts TIMESTAMP(3)
) WITH (
 'connector' = 'kafka',
 'topic' = 'intopic',
 'properties.bootstrap.servers' = 'kafkaserver:9092',
 'properties.group.id' = 'testGroup',
 'format' = 'csv',
 'scan.startup.mode' = 'earliest-offset'
)
{code}
 I get an exception
{code:java}
Caused by: org.apache.flink.table.api.ValidationException: Cannot discover a 
connector using option ''connector'='kafka''.
at 
org.apache.flink.table.factories.FactoryUtil.getDynamicTableFactory(FactoryUtil.java:329)
at 
org.apache.flink.table.factories.FactoryUtil.createTableSource(FactoryUtil.java:118)
... 35 more
Caused by: org.apache.flink.table.api.ValidationException: Could not find any 
factory for identifier 'kafka' that implements 
'org.apache.flink.table.factories.DynamicTableSourceFactory' in the 
classpath.Available factory identifiers are:datagen

{code}
It looks like the issue 
[FLINK-18076|https://issues.apache.org/jira/browse/FLINK-18076] does not deal 
with all exceptions

 

  was:
 

When using the Kafka SQL connector with a fat jar (bundling 
flink-sql-connector-kafka_2.11 in the user jar) on Flink 1.11.1, like
{code:java}
CREATE TABLE user_behavior (
 user_id INT,
 action STRING,
 province INT,
 ts TIMESTAMP(3)
) WITH (
 'connector' = 'kafka',
 'topic' = 'intopic',
 'properties.bootstrap.servers' = 'kafkaserver:9092',
 'properties.group.id' = 'testGroup',
 'format' = 'csv',
 'scan.startup.mode' = 'earliest-offset'
)
{code}
 I get an exception
{code:java}
Caused by: org.apache.flink.table.api.ValidationException: Cannot discover a 
connector using option ''connector'='kafka''.
at 
org.apache.flink.table.factories.FactoryUtil.getDynamicTableFactory(FactoryUtil.java:329)
at 
org.apache.flink.table.factories.FactoryUtil.createTableSource(FactoryUtil.java:118)
... 35 more
Caused by: org.apache.flink.table.api.ValidationException: Could not find any 
factory for identifier 'kafka' that implements 
'org.apache.flink.table.factories.DynamicTableSourceFactory' in the 
classpath.Available factory identifiers are:datagen

{code}
It looks like the issue [FLINK-18076|http://example.com/] does not deal with all 
exceptions

 


> Can not find kafka-connect with sql-kafka-connector
> ---
>
> Key: FLINK-21841
> URL: https://issues.apache.org/jira/browse/FLINK-21841
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka, Table SQL / Ecosystem
>Affects Versions: 1.11.1
>Reporter: JieFang.He
>Priority: Major
>
>  
> When using the Kafka SQL connector with a fat jar (bundling 
> flink-sql-connector-kafka_2.11 in the user jar) on Flink 1.11.1, like
> {code:java}
> CREATE TABLE user_behavior (
>  user_id INT,
>  action STRING,
>  province INT,
>  ts TIMESTAMP(3)
> ) WITH (
>  'connector' = 'kafka',
>  'topic' = 'intopic',
>  'properties.bootstrap.servers' = 'kafkaserver:9092',
>  'properties.group.id' = 'testGroup',
>  'format' = 'csv',
>  'scan.startup.mode' = 'earliest-offset'
> )
> {code}
>  I get an exception
> {code:java}
> Caused by: org.apache.flink.table.api.ValidationException: Cannot discover a 
> connector using option ''connector'='kafka''.
> at 
> org.apache.flink.table.factories.FactoryUtil.getDynamicTableFactory(FactoryUtil.java:329)
> at 
> org.apache.flink.table.factories.FactoryUtil.createTableSource(FactoryUtil.java:118)
> ... 35 more
> Caused by: org.apache.flink.table.api.ValidationException: Could not find any 
> factory for identifier 'kafka' that implements 
> 'org.apache.flink.table.factories.DynamicTableSourceFactory' in the 
> classpath.Available factory identifiers are:datagen
> {code}
> It looks like the issue 
> [FLINK-18076|https://issues.apache.org/jira/browse/FLINK-18076] does not deal 
> with all exceptions
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (FLINK-21841) Can not find kafka-connect with sql-kafka-connector

2021-03-17 Thread JieFang.He (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-21841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17303802#comment-17303802
 ] 

JieFang.He edited comment on FLINK-21841 at 3/18/21, 2:09 AM:
--

I use the Maven plugin "maven-assembly-plugin" to package the dependencies and 
the code together into one jar. The pom looks like this:
{code:java}
<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-sql-connector-kafka_2.11</artifactId>
  <version>1.11.1</version>
</dependency>
<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-connector-kafka_2.11</artifactId>
  <version>1.11.1</version>
  <scope>provided</scope>
</dependency>
...
<plugin>
  <artifactId>maven-assembly-plugin</artifactId>
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>single</goal>
      </goals>
    </execution>
  </executions>
  <configuration>
    <descriptorRefs>
      <descriptorRef>jar-with-dependencies</descriptorRef>
    </descriptorRefs>
    <archive>
      <manifest>
        <addClasspath>true</addClasspath>
        <classpathPrefix>lib/</classpathPrefix>
        <mainClass>DimTable.DimTable</mainClass>
      </manifest>
    </archive>
  </configuration>
</plugin>
{code}
I also find that the jdbc-connector has no problem; only the kafka-connector gets 
the exception, and there is also no problem when using the kafka-connector with the 
DataStream API.

 


was (Author: hejiefang):
I use the Maven plugin "maven-assembly-plugin" to package the dependencies and 
the code together into one jar. The pom looks like this:
{code:java}
<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-sql-connector-kafka_2.11</artifactId>
  <version>1.11.1</version>
</dependency>
<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-connector-kafka_2.11</artifactId>
  <version>1.11.1</version>
  <scope>provided</scope>
</dependency>
...
<plugin>
  <artifactId>maven-assembly-plugin</artifactId>
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>single</goal>
      </goals>
    </execution>
  </executions>
  <configuration>
    <descriptorRefs>
      <descriptorRef>jar-with-dependencies</descriptorRef>
    </descriptorRefs>
    <archive>
      <manifest>
        <addClasspath>true</addClasspath>
        <classpathPrefix>lib/</classpathPrefix>
        <mainClass>DimTable.DimTable</mainClass>
      </manifest>
    </archive>
  </configuration>
</plugin>
{code}
I also find that the jdbc-connector has no problem; only the kafka-connector gets 
the exception, and there is also no problem when using the kafka-connector with the 
DataStream API.

 

> Can not find kafka-connect with sql-kafka-connector
> ---
>
> Key: FLINK-21841
> URL: https://issues.apache.org/jira/browse/FLINK-21841
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka, Table SQL / Ecosystem
>Affects Versions: 1.11.1
>Reporter: JieFang.He
>Priority: Major
>
>  
> When using the Kafka SQL connector with a fat jar (bundling 
> flink-sql-connector-kafka_2.11 in the user jar) on Flink 1.11.1, like
> {code:java}
> CREATE TABLE user_behavior (
>  user_id INT,
>  action STRING,
>  province INT,
>  ts TIMESTAMP(3)
> ) WITH (
>  'connector' = 'kafka',
>  'topic' = 'intopic',
>  'properties.bootstrap.servers' = 'kafkaserver:9092',
>  'properties.group.id' = 'testGroup',
>  'format' = 'csv',
>  'scan.startup.mode' = 'earliest-offset'
> )
> {code}
>  I get an exception
> {code:java}
> Caused by: org.apache.flink.table.api.ValidationException: Cannot discover a 
> connector using option ''connector'='kafka''.
> at 
> org.apache.flink.table.factories.FactoryUtil.getDynamicTableFactory(FactoryUtil.java:329)
> at 
> org.apache.flink.table.factories.FactoryUtil.createTableSource(FactoryUtil.java:118)
> ... 35 more
> Caused by: org.apache.flink.table.api.ValidationException: Could not find any 
> factory for identifier 'kafka' that implements 
> 'org.apache.flink.table.factories.DynamicTableSourceFactory' in the 
> classpath.Available factory identifiers are:datagen
> {code}
> It looks like the issue [FLINK-18076|http://example.com/] does not deal with 
> all exceptions
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-21841) Can not find kafka-connect with sql-kafka-connector

2021-03-17 Thread JieFang.He (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-21841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17303802#comment-17303802
 ] 

JieFang.He commented on FLINK-21841:


I use the Maven plugin "maven-assembly-plugin" to package the dependencies and 
the code together into one jar. The pom looks like this:
{code:java}
<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-sql-connector-kafka_2.11</artifactId>
  <version>1.11.1</version>
</dependency>
<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-connector-kafka_2.11</artifactId>
  <version>1.11.1</version>
  <scope>provided</scope>
</dependency>
...
<plugin>
  <artifactId>maven-assembly-plugin</artifactId>
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>single</goal>
      </goals>
    </execution>
  </executions>
  <configuration>
    <descriptorRefs>
      <descriptorRef>jar-with-dependencies</descriptorRef>
    </descriptorRefs>
    <archive>
      <manifest>
        <addClasspath>true</addClasspath>
        <classpathPrefix>lib/</classpathPrefix>
        <mainClass>DimTable.DimTable</mainClass>
      </manifest>
    </archive>
  </configuration>
</plugin>
{code}
I also find that the jdbc-connector has no problem; only the kafka-connector gets 
the exception, and there is also no problem when using the kafka-connector with the 
DataStream API.

 

> Can not find kafka-connect with sql-kafka-connector
> ---
>
> Key: FLINK-21841
> URL: https://issues.apache.org/jira/browse/FLINK-21841
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka, Table SQL / Ecosystem
>Affects Versions: 1.11.1
>Reporter: JieFang.He
>Priority: Major
>
>  
> When using the Kafka SQL connector with a fat jar (bundling 
> flink-sql-connector-kafka_2.11 in the user jar) on Flink 1.11.1, like
> {code:java}
> CREATE TABLE user_behavior (
>  user_id INT,
>  action STRING,
>  province INT,
>  ts TIMESTAMP(3)
> ) WITH (
>  'connector' = 'kafka',
>  'topic' = 'intopic',
>  'properties.bootstrap.servers' = 'kafkaserver:9092',
>  'properties.group.id' = 'testGroup',
>  'format' = 'csv',
>  'scan.startup.mode' = 'earliest-offset'
> )
> {code}
>  I get an exception
> {code:java}
> Caused by: org.apache.flink.table.api.ValidationException: Cannot discover a 
> connector using option ''connector'='kafka''.
> at 
> org.apache.flink.table.factories.FactoryUtil.getDynamicTableFactory(FactoryUtil.java:329)
> at 
> org.apache.flink.table.factories.FactoryUtil.createTableSource(FactoryUtil.java:118)
> ... 35 more
> Caused by: org.apache.flink.table.api.ValidationException: Could not find any 
> factory for identifier 'kafka' that implements 
> 'org.apache.flink.table.factories.DynamicTableSourceFactory' in the 
> classpath.Available factory identifiers are:datagen
> {code}
> It looks like the issue [FLINK-18076|http://example.com/] does not deal with 
> all exceptions
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-21841) Can not find kafka-connect with sql-kafka-connector

2021-03-17 Thread JieFang.He (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21841?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

JieFang.He updated FLINK-21841:
---
  Component/s: Connectors / Kafka
Affects Version/s: 1.11.1

> Can not find kafka-connect with sql-kafka-connector
> ---
>
> Key: FLINK-21841
> URL: https://issues.apache.org/jira/browse/FLINK-21841
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.11.1
>Reporter: JieFang.He
>Priority: Major
>
>  
> When using the Kafka SQL connector with a fat jar (bundling 
> flink-sql-connector-kafka_2.11 in the user jar) on Flink 1.11.1, like
> {code:java}
> CREATE TABLE user_behavior (
>  user_id INT,
>  action STRING,
>  province INT,
>  ts TIMESTAMP(3)
> ) WITH (
>  'connector' = 'kafka',
>  'topic' = 'intopic',
>  'properties.bootstrap.servers' = 'kafkaserver:9092',
>  'properties.group.id' = 'testGroup',
>  'format' = 'csv',
>  'scan.startup.mode' = 'earliest-offset'
> )
> {code}
>  I get an exception
> {code:java}
> Caused by: org.apache.flink.table.api.ValidationException: Cannot discover a 
> connector using option ''connector'='kafka''.
> at 
> org.apache.flink.table.factories.FactoryUtil.getDynamicTableFactory(FactoryUtil.java:329)
> at 
> org.apache.flink.table.factories.FactoryUtil.createTableSource(FactoryUtil.java:118)
> ... 35 more
> Caused by: org.apache.flink.table.api.ValidationException: Could not find any 
> factory for identifier 'kafka' that implements 
> 'org.apache.flink.table.factories.DynamicTableSourceFactory' in the 
> classpath.Available factory identifiers are:datagen
> {code}
> It looks like the issue [FLINK-18076|http://example.com/] does not deal with 
> all exceptions
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-21841) Can not find kafka-connect with sql-kafka-connector

2021-03-17 Thread JieFang.He (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21841?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

JieFang.He updated FLINK-21841:
---
Description: 
 

When using the Kafka SQL connector with a fat jar (bundling 
flink-sql-connector-kafka_2.11 in the user jar) on Flink 1.11.1, like
{code:java}
CREATE TABLE user_behavior (
 user_id INT,
 action STRING,
 province INT,
 ts TIMESTAMP(3)
) WITH (
 'connector' = 'kafka',
 'topic' = 'intopic',
 'properties.bootstrap.servers' = 'kafkaserver:9092',
 'properties.group.id' = 'testGroup',
 'format' = 'csv',
 'scan.startup.mode' = 'earliest-offset'
)
{code}
 I get an exception
{code:java}
Caused by: org.apache.flink.table.api.ValidationException: Cannot discover a 
connector using option ''connector'='kafka''.
at 
org.apache.flink.table.factories.FactoryUtil.getDynamicTableFactory(FactoryUtil.java:329)
at 
org.apache.flink.table.factories.FactoryUtil.createTableSource(FactoryUtil.java:118)
... 35 more
Caused by: org.apache.flink.table.api.ValidationException: Could not find any 
factory for identifier 'kafka' that implements 
'org.apache.flink.table.factories.DynamicTableSourceFactory' in the 
classpath.Available factory identifiers are:datagen

{code}
It looks like the issue [FLINK-18076|http://example.com/] does not deal with all 
exceptions

 

  was:
 

When using the Kafka SQL connector with a fat jar (bundling 
flink-sql-connector-kafka_2.11 in the user jar) on Flink 1.11.1, like

 
{code:java}
CREATE TABLE user_behavior (
 user_id INT,
 action STRING,
 province INT,
 ts TIMESTAMP(3)
) WITH (
 'connector' = 'kafka',
 'topic' = 'intopic',
 'properties.bootstrap.servers' = 'kafkaserver:9092',
 'properties.group.id' = 'testGroup',
 'format' = 'csv',
 'scan.startup.mode' = 'earliest-offset'
)
{code}
 

I get an exception

 
{code:java}
Caused by: org.apache.flink.table.api.ValidationException: Cannot discover a 
connector using option ''connector'='kafka''.
at 
org.apache.flink.table.factories.FactoryUtil.getDynamicTableFactory(FactoryUtil.java:329)
at 
org.apache.flink.table.factories.FactoryUtil.createTableSource(FactoryUtil.java:118)
... 35 more
Caused by: org.apache.flink.table.api.ValidationException: Could not find any 
factory for identifier 'kafka' that implements 
'org.apache.flink.table.factories.DynamicTableSourceFactory' in the 
classpath.Available factory identifiers are:datagen

{code}
It looks like the issue [FLINK-18076|http://example.com/] does not deal with all 
exceptions

 


> Can not find kafka-connect with sql-kafka-connector
> ---
>
> Key: FLINK-21841
> URL: https://issues.apache.org/jira/browse/FLINK-21841
> Project: Flink
>  Issue Type: Bug
>Reporter: JieFang.He
>Priority: Major
>
>  
> When using the Kafka SQL connector with a fat jar (bundling 
> flink-sql-connector-kafka_2.11 in the user jar) on Flink 1.11.1, like
> {code:java}
> CREATE TABLE user_behavior (
>  user_id INT,
>  action STRING,
>  province INT,
>  ts TIMESTAMP(3)
> ) WITH (
>  'connector' = 'kafka',
>  'topic' = 'intopic',
>  'properties.bootstrap.servers' = 'kafkaserver:9092',
>  'properties.group.id' = 'testGroup',
>  'format' = 'csv',
>  'scan.startup.mode' = 'earliest-offset'
> )
> {code}
>  I get an exception
> {code:java}
> Caused by: org.apache.flink.table.api.ValidationException: Cannot discover a 
> connector using option ''connector'='kafka''.
> at 
> org.apache.flink.table.factories.FactoryUtil.getDynamicTableFactory(FactoryUtil.java:329)
> at 
> org.apache.flink.table.factories.FactoryUtil.createTableSource(FactoryUtil.java:118)
> ... 35 more
> Caused by: org.apache.flink.table.api.ValidationException: Could not find any 
> factory for identifier 'kafka' that implements 
> 'org.apache.flink.table.factories.DynamicTableSourceFactory' in the 
> classpath.Available factory identifiers are:datagen
> {code}
> It looks like the issue [FLINK-18076|http://example.com/] does not deal with 
> all exceptions
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-21841) Can not find kafka-connect with sql-kafka-connector

2021-03-17 Thread JieFang.He (Jira)
JieFang.He created FLINK-21841:
--

 Summary: Can not find kafka-connect with sql-kafka-connector
 Key: FLINK-21841
 URL: https://issues.apache.org/jira/browse/FLINK-21841
 Project: Flink
  Issue Type: Bug
Reporter: JieFang.He


 

When using the Kafka SQL connector with a fat jar (bundling 
flink-sql-connector-kafka_2.11 in the user jar) on Flink 1.11.1, like

 
{code:java}
CREATE TABLE user_behavior (
 user_id INT,
 action STRING,
 province INT,
 ts TIMESTAMP(3)
) WITH (
 'connector' = 'kafka',
 'topic' = 'intopic',
 'properties.bootstrap.servers' = 'kafkaserver:9092',
 'properties.group.id' = 'testGroup',
 'format' = 'csv',
 'scan.startup.mode' = 'earliest-offset'
)
{code}
 

I get an exception

 
{code:java}
Caused by: org.apache.flink.table.api.ValidationException: Cannot discover a 
connector using option ''connector'='kafka''.
at 
org.apache.flink.table.factories.FactoryUtil.getDynamicTableFactory(FactoryUtil.java:329)
at 
org.apache.flink.table.factories.FactoryUtil.createTableSource(FactoryUtil.java:118)
... 35 more
Caused by: org.apache.flink.table.api.ValidationException: Could not find any 
factory for identifier 'kafka' that implements 
'org.apache.flink.table.factories.DynamicTableSourceFactory' in the 
classpath.Available factory identifiers are:datagen

{code}
It looks like the issue [FLINK-18076|http://example.com/] does not deal with all 
exceptions

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-16542) Nothing on the HistoryServer

2021-03-15 Thread JieFang.He (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16542?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

JieFang.He updated FLINK-16542:
---
Attachment: (was: HistoryServer.JPG)

> Nothing on the HistoryServer
> 
>
> Key: FLINK-16542
> URL: https://issues.apache.org/jira/browse/FLINK-16542
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Web Frontend
>Affects Versions: 1.9.2
>Reporter: JieFang.He
>Priority: Major
>
> I use Flink 1.9.2. When I open the address of the HistoryServer, the page is blank.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-19067) resource_manager and dispatcher register on different nodes in HA mode will cause FileNotFoundException

2021-03-15 Thread JieFang.He (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

JieFang.He updated FLINK-19067:
---
Description: 
When running examples/batch/WordCount.jar, it fails with the following exception:
{code:java}
Caused by: java.io.FileNotFoundException: 
/blob/job_d29414828f614d5466e239be4d3889ac/blob_p-a2ebe1c5aa160595f214b4bd0f39d80e42ee2e93-f458f1c12dc023e78d25f191de1d7c4b
 (No such file or directory)
 at java.io.FileInputStream.open0(Native Method)
 at java.io.FileInputStream.open(FileInputStream.java:195)
 at java.io.FileInputStream.<init>(FileInputStream.java:138)
 at 
org.apache.flink.core.fs.local.LocalDataInputStream.<init>(LocalDataInputStream.java:50)
 at 
org.apache.flink.core.fs.local.LocalFileSystem.open(LocalFileSystem.java:143)
 at 
org.apache.flink.runtime.blob.FileSystemBlobStore.get(FileSystemBlobStore.java:105)
 at 
org.apache.flink.runtime.blob.FileSystemBlobStore.get(FileSystemBlobStore.java:87)
 at 
org.apache.flink.runtime.blob.BlobServer.getFileInternal(BlobServer.java:501)
 at 
org.apache.flink.runtime.blob.BlobServerConnection.get(BlobServerConnection.java:231)
 at 
org.apache.flink.runtime.blob.BlobServerConnection.run(BlobServerConnection.java:117)
{code}
 

I think the reason is that the job files are uploaded to the dispatcher node, but 
the task fetches the job files from the resource_manager node. So in HA mode it is 
necessary to ensure they are on the same node

 

  was:
When running examples/batch/WordCount.jar, it fails with the following exception:
{code:java}
Caused by: java.io.FileNotFoundException: 
/data2/flink/storageDir/default/blob/job_d29414828f614d5466e239be4d3889ac/blob_p-a2ebe1c5aa160595f214b4bd0f39d80e42ee2e93-f458f1c12dc023e78d25f191de1d7c4b
 (No such file or directory)
 at java.io.FileInputStream.open0(Native Method)
 at java.io.FileInputStream.open(FileInputStream.java:195)
 at java.io.FileInputStream.<init>(FileInputStream.java:138)
 at 
org.apache.flink.core.fs.local.LocalDataInputStream.<init>(LocalDataInputStream.java:50)
 at 
org.apache.flink.core.fs.local.LocalFileSystem.open(LocalFileSystem.java:143)
 at 
org.apache.flink.runtime.blob.FileSystemBlobStore.get(FileSystemBlobStore.java:105)
 at 
org.apache.flink.runtime.blob.FileSystemBlobStore.get(FileSystemBlobStore.java:87)
 at 
org.apache.flink.runtime.blob.BlobServer.getFileInternal(BlobServer.java:501)
 at 
org.apache.flink.runtime.blob.BlobServerConnection.get(BlobServerConnection.java:231)
 at 
org.apache.flink.runtime.blob.BlobServerConnection.run(BlobServerConnection.java:117)
{code}
 

I think the reason is that the job files are uploaded to the dispatcher node, but 
the task fetches the job files from the resource_manager node. So in HA mode it is 
necessary to ensure they are on the same node

 


> resource_manager and dispatcher register on different nodes in HA mode will 
> cause FileNotFoundException
> ---
>
> Key: FLINK-19067
> URL: https://issues.apache.org/jira/browse/FLINK-19067
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.11.1
>Reporter: JieFang.He
>Assignee: Robert Metzger
>Priority: Major
>
> When running examples/batch/WordCount.jar, it fails with the following exception:
> {code:java}
> Caused by: java.io.FileNotFoundException: 
> /blob/job_d29414828f614d5466e239be4d3889ac/blob_p-a2ebe1c5aa160595f214b4bd0f39d80e42ee2e93-f458f1c12dc023e78d25f191de1d7c4b
>  (No such file or directory)
>  at java.io.FileInputStream.open0(Native Method)
>  at java.io.FileInputStream.open(FileInputStream.java:195)
>  at java.io.FileInputStream.<init>(FileInputStream.java:138)
>  at 
> org.apache.flink.core.fs.local.LocalDataInputStream.<init>(LocalDataInputStream.java:50)
>  at 
> org.apache.flink.core.fs.local.LocalFileSystem.open(LocalFileSystem.java:143)
>  at 
> org.apache.flink.runtime.blob.FileSystemBlobStore.get(FileSystemBlobStore.java:105)
>  at 
> org.apache.flink.runtime.blob.FileSystemBlobStore.get(FileSystemBlobStore.java:87)
>  at 
> org.apache.flink.runtime.blob.BlobServer.getFileInternal(BlobServer.java:501)
>  at 
> org.apache.flink.runtime.blob.BlobServerConnection.get(BlobServerConnection.java:231)
>  at 
> org.apache.flink.runtime.blob.BlobServerConnection.run(BlobServerConnection.java:117)
> {code}
>  
> I think the reason is that the job files are uploaded to the dispatcher node, but 
> the task fetches the job files from the resource_manager node. So in HA mode it is 
> necessary to ensure they are on the same node
>  
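A hedged note on the configuration side: the path in the stack trace above is node-local, while in HA setups the HA/blob storage directory is expected to live on a filesystem reachable from every JobManager node. A minimal flink-conf.yaml sketch (host names and paths are illustrative):

{code}
high-availability: zookeeper
high-availability.zookeeper.quorum: zk-host:2181
# Must be a shared/distributed filesystem visible to all JobManager nodes,
# not a node-local path such as /data2/flink/storageDir.
high-availability.storageDir: hdfs:///flink/ha/
{code}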



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Issue Comment Deleted] (FLINK-19067) resource_manager and dispatcher register on different nodes in HA mode will cause FileNotFoundException

2021-03-15 Thread JieFang.He (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

JieFang.He updated FLINK-19067:
---
Comment: was deleted

(was: [~rmetzger] I found the same exception on the original version.

It seems that the client gets the incorrect blob node.

The exception on the client shows that the client tries to get the file from 
node-01, but I find that the blob file was created on node-02

Here is the exception on the client
{code:java}
Caused by: java.io.IOException: Failed to fetch BLOB 
0d04922e319f520b84a2f20c9b4556e0/p-903092697dde4fa408ba1f52fae34c5e876a997a-94c6b64d36b76c695392475a3c72cfab
 from ***-deployer-hejiefang01/172.17.0.4:32101 and store it under 
/data3/***/flink/tmp/blobStore-523c19be-109b-4b7b-9934-b78b0281ffd4/incoming/temp-0002
{code}
But I find the file is on node-02
{code:java}
[root@***-deployer-hejiefang02 default]# for i in {1..10}; do ll -R; sleep 
1;done
total 36
drwxr-xr-x 3 mr users  4096 Sep 17 01:28 blob
-rw-r--r-- 1 mr users 32576 Sep 17 01:28 submittedJobGraph202f6020058b
./blob:
total 4
drwxr-xr-x 2 mr users 4096 Sep 17 01:28 job_0d04922e319f520b84a2f20c9b4556e0
./blob/job_0d04922e319f520b84a2f20c9b4556e0:
total 12
-rw-r--r-- 1 mr users 9401 Sep 17 01:28 
blob_p-903092697dde4fa408ba1f52fae34c5e876a997a-94c6b64d36b76c695392475a3c72cfab
[root@***-deployer-hejiefang02 default]# 
{code}
Node-01 has no file
{code:java}
[root@***-deployer-hejiefang01 default]# for i in {1..10}; do ll -R; sleep 
1;done
.:
total 0
.:
total 0
.:
[root@***-deployer-hejiefang01 default]# 
{code}
And the information in ZooKeeper shows that the dispatcher is on node-02, while 
resource_manager and rest_server are on node-01
{code:java}
[zk: localhost:2181(CONNECTED) 0] get /flink/default/leader/dispatcher_lock
??wGEakka.tcp://flink@***-deployer-hejiefang02:37241/user/rpc/dispatcher_1srjava.util.UUIDm?/J
[zk: localhost:2181(CONNECTED) 1] get 
/flink/default/leader/resource_manager_lock
??wLJakka.tcp://flink@***-deployer-hejiefang01:30717/user/rpc/resourcemanager_0srjava.util.UUIDm?/J
[zk: localhost:2181(CONNECTED) 2] get /flink/default/leader/rest_server_lock
??w&$http://***-deployer-hejiefang01:8181srjava.util.UUIDm?/J
{code}
[^flink-jobmanager-deployer-hejiefang01.log]

 )

> resource_manager and dispatcher register on different nodes in HA mode will 
> cause FileNotFoundException
> ---
>
> Key: FLINK-19067
> URL: https://issues.apache.org/jira/browse/FLINK-19067
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.11.1
>Reporter: JieFang.He
>Assignee: Robert Metzger
>Priority: Major
>
> When run examples/batch/WordCount.jar,it will fail with the exception:
> {code:java}
> Caused by: java.io.FileNotFoundException: 
> /data2/flink/storageDir/default/blob/job_d29414828f614d5466e239be4d3889ac/blob_p-a2ebe1c5aa160595f214b4bd0f39d80e42ee2e93-f458f1c12dc023e78d25f191de1d7c4b
>  (No such file or directory)
>  at java.io.FileInputStream.open0(Native Method)
>  at java.io.FileInputStream.open(FileInputStream.java:195)
>  at java.io.FileInputStream.(FileInputStream.java:138)
>  at 
> org.apache.flink.core.fs.local.LocalDataInputStream.(LocalDataInputStream.java:50)
>  at 
> org.apache.flink.core.fs.local.LocalFileSystem.open(LocalFileSystem.java:143)
>  at 
> org.apache.flink.runtime.blob.FileSystemBlobStore.get(FileSystemBlobStore.java:105)
>  at 
> org.apache.flink.runtime.blob.FileSystemBlobStore.get(FileSystemBlobStore.java:87)
>  at 
> org.apache.flink.runtime.blob.BlobServer.getFileInternal(BlobServer.java:501)
>  at 
> org.apache.flink.runtime.blob.BlobServerConnection.get(BlobServerConnection.java:231)
>  at 
> org.apache.flink.runtime.blob.BlobServerConnection.run(BlobServerConnection.java:117)
> {code}
>  
> I think the reason is that the jobFiles are upload to the dispatcher node,but 
> the task get jobFiles from resource_manager node. So in HA mode, it need to 
> ensure they are on one node
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Issue Comment Deleted] (FLINK-19067) resource_manager and dispatcher register on different nodes in HA mode will cause FileNotFoundException

2021-03-15 Thread JieFang.He (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

JieFang.He updated FLINK-19067:
---
Comment: was deleted

(was: I deleted some logs that contain custom information.

It is a custom build, but it only adds some peripheral content and changes the
version suffix. The processing flow is the same.

I will try to reproduce it with the original version and report back.


Thank you for your reply)

> resource_manager and dispatcher register on different nodes in HA mode will 
> cause FileNotFoundException
> ---
>
> Key: FLINK-19067
> URL: https://issues.apache.org/jira/browse/FLINK-19067
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.11.1
>Reporter: JieFang.He
>Assignee: Robert Metzger
>Priority: Major
>
> When run examples/batch/WordCount.jar,it will fail with the exception:
> {code:java}
> Caused by: java.io.FileNotFoundException: 
> /data2/flink/storageDir/default/blob/job_d29414828f614d5466e239be4d3889ac/blob_p-a2ebe1c5aa160595f214b4bd0f39d80e42ee2e93-f458f1c12dc023e78d25f191de1d7c4b
>  (No such file or directory)
>  at java.io.FileInputStream.open0(Native Method)
>  at java.io.FileInputStream.open(FileInputStream.java:195)
>  at java.io.FileInputStream.(FileInputStream.java:138)
>  at 
> org.apache.flink.core.fs.local.LocalDataInputStream.(LocalDataInputStream.java:50)
>  at 
> org.apache.flink.core.fs.local.LocalFileSystem.open(LocalFileSystem.java:143)
>  at 
> org.apache.flink.runtime.blob.FileSystemBlobStore.get(FileSystemBlobStore.java:105)
>  at 
> org.apache.flink.runtime.blob.FileSystemBlobStore.get(FileSystemBlobStore.java:87)
>  at 
> org.apache.flink.runtime.blob.BlobServer.getFileInternal(BlobServer.java:501)
>  at 
> org.apache.flink.runtime.blob.BlobServerConnection.get(BlobServerConnection.java:231)
>  at 
> org.apache.flink.runtime.blob.BlobServerConnection.run(BlobServerConnection.java:117)
> {code}
>  
> I think the reason is that the jobFiles are upload to the dispatcher node,but 
> the task get jobFiles from resource_manager node. So in HA mode, it need to 
> ensure they are on one node
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Issue Comment Deleted] (FLINK-19067) resource_manager and dispatcher register on different nodes in HA mode will cause FileNotFoundException

2021-03-15 Thread JieFang.He (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

JieFang.He updated FLINK-19067:
---
Comment: was deleted

(was: The JobManager log is the same as the CLI output.

[^flink-jobmanager-deployer-hejiefang01.log])

> resource_manager and dispatcher register on different nodes in HA mode will 
> cause FileNotFoundException
> ---
>
> Key: FLINK-19067
> URL: https://issues.apache.org/jira/browse/FLINK-19067
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.11.1
>Reporter: JieFang.He
>Assignee: Robert Metzger
>Priority: Major
>
> When run examples/batch/WordCount.jar,it will fail with the exception:
> {code:java}
> Caused by: java.io.FileNotFoundException: 
> /data2/flink/storageDir/default/blob/job_d29414828f614d5466e239be4d3889ac/blob_p-a2ebe1c5aa160595f214b4bd0f39d80e42ee2e93-f458f1c12dc023e78d25f191de1d7c4b
>  (No such file or directory)
>  at java.io.FileInputStream.open0(Native Method)
>  at java.io.FileInputStream.open(FileInputStream.java:195)
>  at java.io.FileInputStream.(FileInputStream.java:138)
>  at 
> org.apache.flink.core.fs.local.LocalDataInputStream.(LocalDataInputStream.java:50)
>  at 
> org.apache.flink.core.fs.local.LocalFileSystem.open(LocalFileSystem.java:143)
>  at 
> org.apache.flink.runtime.blob.FileSystemBlobStore.get(FileSystemBlobStore.java:105)
>  at 
> org.apache.flink.runtime.blob.FileSystemBlobStore.get(FileSystemBlobStore.java:87)
>  at 
> org.apache.flink.runtime.blob.BlobServer.getFileInternal(BlobServer.java:501)
>  at 
> org.apache.flink.runtime.blob.BlobServerConnection.get(BlobServerConnection.java:231)
>  at 
> org.apache.flink.runtime.blob.BlobServerConnection.run(BlobServerConnection.java:117)
> {code}
>  
> I think the reason is that the jobFiles are upload to the dispatcher node,but 
> the task get jobFiles from resource_manager node. So in HA mode, it need to 
> ensure they are on one node
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Issue Comment Deleted] (FLINK-19067) resource_manager and dispatcher register on different nodes in HA mode will cause FileNotFoundException

2021-03-15 Thread JieFang.He (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

JieFang.He updated FLINK-19067:
---
Comment: was deleted

(was: [~rmetzger]

After testing again, this seems to have nothing to do with the standby JobManager:
when it happens, job submission fails on all nodes. That it previously failed only
on the standby node appears to have been accidental.

Also, not every job submission fails; the failure is probabilistic.

The log shows that there is no blob file on the server, but the WordCount example
does not seem to need these files, and the successful jobs did not generate them.

My resource_manager_lock is on the standby JobManager and the others are on the
active JobManager; I don't know whether this has any impact:
{code:java}
2020-09-03 19:15:11,566 INFO [main-SendThread(deployer-hejiefang03:2181)] 
[ClientCnxn]: Socket connection established, initiating session, client: 
/172.17.0.8:45164, server: deployer-hejiefang03/172.17.0.9:2181
2020-09-03 19:15:11,573 INFO [main-SendThread(deployer-hejiefang03:2181)] 
[ClientCnxn]: Session establishment complete on server 
deployer-hejiefang03/172.17.0.9:2181, sessionid = 0x3000e154ba9000a, negotiated 
timeout = 6
2020-09-03 19:15:11,573 INFO [main-EventThread] [ConnectionStateManager]: State 
change: CONNECTED
2020-09-03 19:15:11,578 INFO [main-EventThread] [EnsembleTracker]: New config 
event received: {server.1=deployer-hejiefang01:2888:3888:participant, 
version=0, server.3=deployer-hejiefang03:2888:3888:participant, 
server.2=deployer-hejiefang02:2888:3888:participant}
2020-09-03 19:15:11,578 ERROR [main-EventThread] [EnsembleTracker]: Invalid 
config event received: {server.1=deployer-hejiefang01:2888:3888:participant, 
version=0, server.3=deployer-hejiefang03:2888:3888:participant, 
server.2=deployer-hejiefang02:2888:3888:participant}
2020-09-03 19:15:11,579 INFO [main-EventThread] [EnsembleTracker]: New config 
event received: {server.1=deployer-hejiefang01:2888:3888:participant, 
version=0, server.3=deployer-hejiefang03:2888:3888:participant, 
server.2=deployer-hejiefang02:2888:3888:participant}
2020-09-03 19:15:11,579 ERROR [main-EventThread] [EnsembleTracker]: Invalid 
config event received: {server.1=deployer-hejiefang01:2888:3888:participant, 
version=0, server.3=deployer-hejiefang03:2888:3888:participant, 
server.2=deployer-hejiefang02:2888:3888:participant}
2020-09-03 19:15:11,668 INFO [Curator-Framework-0] [CuratorFrameworkImpl]: 
backgroundOperationsLoop exiting
2020-09-03 19:15:11,772 INFO [ForkJoinPool.commonPool-worker-57] [ZooKeeper]: 
Session: 0x3000e154ba9000a closed
2020-09-03 19:15:11,772 INFO [main-EventThread] [ClientCnxn]: EventThread shut 
down for session: 0x3000e154ba9000a
2020-09-03 19:15:11,775 ERROR [main] [CliFrontend]: Error while running the 
command.
org.apache.flink.client.program.ProgramInvocationException: The main method 
caused an error: org.apache.flink.client.program.ProgramInvocationException: 
Job failed (JobID: 8b242015d603f46e82a5f1299399b669)
at 
org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:302)
 ~[flink-dist_2.11-1.11.1.jar:1.11.1]
at 
org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:198)
 ~[flink-dist_2.11-1.11.1.jar:1.11.1]
at 
org.apache.flink.client.ClientUtils.executeProgram(ClientUtils.java:149) 
~[flink-dist_2.11-1.11.1.jar:1.11.1]
at 
org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:699) 
~[flink-dist_2.11-1.11.1.jar:1.11.1]
at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:232) 
~[flink-dist_2.11-1.11.1.jar:1.11.1]
at 
org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:916) 
~[flink-dist_2.11-1.11.1.jar:1.11.1]
at 
org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:992) 
~[flink-dist_2.11-1.11.1.jar:1.11.1]
at java.security.AccessController.doPrivileged(Native Method) 
~[?:1.8.0_163]
at javax.security.auth.Subject.doAs(Subject.java:422) [?:1.8.0_163]
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
 [flink-shaded-hadoop-3-uber-3.2.1-11.0.jar:?]
at 
org.apache.flink.runtime.security.contexts.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
 [flink-dist_2.11-1.11.1.jar:1.11.1]
at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:992) 
[flink-dist_2.11-1.11.1.jar:1.11.1]
Caused by: java.util.concurrent.ExecutionException: 
org.apache.flink.client.program.ProgramInvocationException: Job failed (JobID: 
8b242015d603f46e82a5f1299399b669)
at 
java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357) 
~[?:1.8.0_163]
at 
java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1895) 
~[?:1.8.0_163]
at 
org.apache.flink.client.program.ContextEnvironment.getJo

[jira] [Updated] (FLINK-19067) resource_manager and dispatcher register on different nodes in HA mode will cause FileNotFoundException

2021-03-15 Thread JieFang.He (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

JieFang.He updated FLINK-19067:
---
Attachment: (was: flink-jobmanager-deployer-hejiefang02.log)

> resource_manager and dispatcher register on different nodes in HA mode will 
> cause FileNotFoundException
> ---
>
> Key: FLINK-19067
> URL: https://issues.apache.org/jira/browse/FLINK-19067
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.11.1
>Reporter: JieFang.He
>Assignee: Robert Metzger
>Priority: Major
>
> When run examples/batch/WordCount.jar,it will fail with the exception:
> {code:java}
> Caused by: java.io.FileNotFoundException: 
> /data2/flink/storageDir/default/blob/job_d29414828f614d5466e239be4d3889ac/blob_p-a2ebe1c5aa160595f214b4bd0f39d80e42ee2e93-f458f1c12dc023e78d25f191de1d7c4b
>  (No such file or directory)
>  at java.io.FileInputStream.open0(Native Method)
>  at java.io.FileInputStream.open(FileInputStream.java:195)
>  at java.io.FileInputStream.(FileInputStream.java:138)
>  at 
> org.apache.flink.core.fs.local.LocalDataInputStream.(LocalDataInputStream.java:50)
>  at 
> org.apache.flink.core.fs.local.LocalFileSystem.open(LocalFileSystem.java:143)
>  at 
> org.apache.flink.runtime.blob.FileSystemBlobStore.get(FileSystemBlobStore.java:105)
>  at 
> org.apache.flink.runtime.blob.FileSystemBlobStore.get(FileSystemBlobStore.java:87)
>  at 
> org.apache.flink.runtime.blob.BlobServer.getFileInternal(BlobServer.java:501)
>  at 
> org.apache.flink.runtime.blob.BlobServerConnection.get(BlobServerConnection.java:231)
>  at 
> org.apache.flink.runtime.blob.BlobServerConnection.run(BlobServerConnection.java:117)
> {code}
>  
> I think the reason is that the jobFiles are upload to the dispatcher node,but 
> the task get jobFiles from resource_manager node. So in HA mode, it need to 
> ensure they are on one node
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-19067) resource_manager and dispatcher register on different nodes in HA mode will cause FileNotFoundException

2021-03-15 Thread JieFang.He (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

JieFang.He updated FLINK-19067:
---
Attachment: (was: flink-taskmanager-deployer-hejiefang02.log)

> resource_manager and dispatcher register on different nodes in HA mode will 
> cause FileNotFoundException
> ---
>
> Key: FLINK-19067
> URL: https://issues.apache.org/jira/browse/FLINK-19067
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.11.1
>Reporter: JieFang.He
>Assignee: Robert Metzger
>Priority: Major
>
> When run examples/batch/WordCount.jar,it will fail with the exception:
> {code:java}
> Caused by: java.io.FileNotFoundException: 
> /data2/flink/storageDir/default/blob/job_d29414828f614d5466e239be4d3889ac/blob_p-a2ebe1c5aa160595f214b4bd0f39d80e42ee2e93-f458f1c12dc023e78d25f191de1d7c4b
>  (No such file or directory)
>  at java.io.FileInputStream.open0(Native Method)
>  at java.io.FileInputStream.open(FileInputStream.java:195)
>  at java.io.FileInputStream.(FileInputStream.java:138)
>  at 
> org.apache.flink.core.fs.local.LocalDataInputStream.(LocalDataInputStream.java:50)
>  at 
> org.apache.flink.core.fs.local.LocalFileSystem.open(LocalFileSystem.java:143)
>  at 
> org.apache.flink.runtime.blob.FileSystemBlobStore.get(FileSystemBlobStore.java:105)
>  at 
> org.apache.flink.runtime.blob.FileSystemBlobStore.get(FileSystemBlobStore.java:87)
>  at 
> org.apache.flink.runtime.blob.BlobServer.getFileInternal(BlobServer.java:501)
>  at 
> org.apache.flink.runtime.blob.BlobServerConnection.get(BlobServerConnection.java:231)
>  at 
> org.apache.flink.runtime.blob.BlobServerConnection.run(BlobServerConnection.java:117)
> {code}
>  
> I think the reason is that the jobFiles are upload to the dispatcher node,but 
> the task get jobFiles from resource_manager node. So in HA mode, it need to 
> ensure they are on one node
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-19067) resource_manager and dispatcher register on different nodes in HA mode will cause FileNotFoundException

2021-03-15 Thread JieFang.He (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

JieFang.He updated FLINK-19067:
---
Attachment: (was: flink-jobmanager-deployer-hejiefang01.log)

> resource_manager and dispatcher register on different nodes in HA mode will 
> cause FileNotFoundException
> ---
>
> Key: FLINK-19067
> URL: https://issues.apache.org/jira/browse/FLINK-19067
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.11.1
>Reporter: JieFang.He
>Assignee: Robert Metzger
>Priority: Major
>
> When run examples/batch/WordCount.jar,it will fail with the exception:
> {code:java}
> Caused by: java.io.FileNotFoundException: 
> /data2/flink/storageDir/default/blob/job_d29414828f614d5466e239be4d3889ac/blob_p-a2ebe1c5aa160595f214b4bd0f39d80e42ee2e93-f458f1c12dc023e78d25f191de1d7c4b
>  (No such file or directory)
>  at java.io.FileInputStream.open0(Native Method)
>  at java.io.FileInputStream.open(FileInputStream.java:195)
>  at java.io.FileInputStream.(FileInputStream.java:138)
>  at 
> org.apache.flink.core.fs.local.LocalDataInputStream.(LocalDataInputStream.java:50)
>  at 
> org.apache.flink.core.fs.local.LocalFileSystem.open(LocalFileSystem.java:143)
>  at 
> org.apache.flink.runtime.blob.FileSystemBlobStore.get(FileSystemBlobStore.java:105)
>  at 
> org.apache.flink.runtime.blob.FileSystemBlobStore.get(FileSystemBlobStore.java:87)
>  at 
> org.apache.flink.runtime.blob.BlobServer.getFileInternal(BlobServer.java:501)
>  at 
> org.apache.flink.runtime.blob.BlobServerConnection.get(BlobServerConnection.java:231)
>  at 
> org.apache.flink.runtime.blob.BlobServerConnection.run(BlobServerConnection.java:117)
> {code}
>  
> I think the reason is that the jobFiles are upload to the dispatcher node,but 
> the task get jobFiles from resource_manager node. So in HA mode, it need to 
> ensure they are on one node
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-19067) resource_manager and dispatcher register on different nodes in HA mode will cause FileNotFoundException

2021-03-15 Thread JieFang.He (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

JieFang.He updated FLINK-19067:
---
Attachment: (was: flink-taskmanager-deployer-hejiefang01.log)

> resource_manager and dispatcher register on different nodes in HA mode will 
> cause FileNotFoundException
> ---
>
> Key: FLINK-19067
> URL: https://issues.apache.org/jira/browse/FLINK-19067
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.11.1
>Reporter: JieFang.He
>Assignee: Robert Metzger
>Priority: Major
>
> When run examples/batch/WordCount.jar,it will fail with the exception:
> {code:java}
> Caused by: java.io.FileNotFoundException: 
> /data2/flink/storageDir/default/blob/job_d29414828f614d5466e239be4d3889ac/blob_p-a2ebe1c5aa160595f214b4bd0f39d80e42ee2e93-f458f1c12dc023e78d25f191de1d7c4b
>  (No such file or directory)
>  at java.io.FileInputStream.open0(Native Method)
>  at java.io.FileInputStream.open(FileInputStream.java:195)
>  at java.io.FileInputStream.(FileInputStream.java:138)
>  at 
> org.apache.flink.core.fs.local.LocalDataInputStream.(LocalDataInputStream.java:50)
>  at 
> org.apache.flink.core.fs.local.LocalFileSystem.open(LocalFileSystem.java:143)
>  at 
> org.apache.flink.runtime.blob.FileSystemBlobStore.get(FileSystemBlobStore.java:105)
>  at 
> org.apache.flink.runtime.blob.FileSystemBlobStore.get(FileSystemBlobStore.java:87)
>  at 
> org.apache.flink.runtime.blob.BlobServer.getFileInternal(BlobServer.java:501)
>  at 
> org.apache.flink.runtime.blob.BlobServerConnection.get(BlobServerConnection.java:231)
>  at 
> org.apache.flink.runtime.blob.BlobServerConnection.run(BlobServerConnection.java:117)
> {code}
>  
> I think the reason is that the jobFiles are upload to the dispatcher node,but 
> the task get jobFiles from resource_manager node. So in HA mode, it need to 
> ensure they are on one node
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Issue Comment Deleted] (FLINK-19067) resource_manager and dispatcher register on different nodes in HA mode will cause FileNotFoundException

2021-03-15 Thread JieFang.He (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

JieFang.He updated FLINK-19067:
---
Comment: was deleted

(was: I think the reason is that the job files are uploaded to the dispatcher
node, but the task gets the job files from the resource_manager node. So in HA
mode, it needs to be ensured that they are on one node.

When the job is submitted to the REST endpoint, the REST endpoint then submits
it to the dispatcher, so the job files are put on the dispatcher node.

Here is the log
{code:java}
2020-12-31 00:42:29,255 DEBUG [BLOB connection for /127.0.0.1:43576] 
[FileSystemBlobStore]: Copying from 
/tmp/blobStore-5fe6b952-fece-4590-b637-04f19d8121dd/job_22ea0e6f01a4d7667aa1077e9bffe759/blob_p-b8b09e59290d54dbc0bfd5c7672d424e7f2c5178-3c4b3b70d11b7ca1164a615928023109
 to 
/data2/zdh/flink/storageDir/default/blob/job_22ea0e6f01a4d7667aa1077e9bffe759/blob_p-b8b09e59290d54dbc0bfd5c7672d424e7f2c5178-3c4b3b70d11b7ca1164a615928023109.
{code}
and here is the code in JobSubmitHandler
{code:java}
CompletableFuture jobSubmissionFuture = 
finalizedJobGraphFuture.thenCompose(jobGraph -> gateway.submitJob(jobGraph, 
timeout));
{code}
The gateway is defined in DefaultDispatcherResourceManagerComponentFactory:
{code:java}
final LeaderGatewayRetriever dispatcherGatewayRetriever = 
new RpcGatewayRetriever<>(
   rpcService,
   DispatcherGateway.class,
   DispatcherId::fromUuid,
   10,
   Time.milliseconds(50L));
{code}
{code:java}
webMonitorEndpoint = restEndpointFactory.createRestEndpoint(
   configuration,
   dispatcherGatewayRetriever,
   resourceManagerGatewayRetriever,
   blobServer,
   executor,
   metricFetcher,
   highAvailabilityServices.getClusterRestEndpointLeaderElectionService(),
   fatalErrorHandler);
{code}
 

But the task gets the job files from the resource_manager.

The relevant code is in BlobClient.downloadFromBlobServer:
{code:java}
static void downloadFromBlobServer(
  @Nullable JobID jobId,
  BlobKey blobKey,
  File localJarFile,
  InetSocketAddress serverAddress,
  Configuration blobClientConfig,
  int numFetchRetries) throws IOException {
..
  catch (Throwable t) {
 String message = "Failed to fetch BLOB " + jobId + "/" + blobKey + " 
from " + serverAddress +
" and store it under " + localJarFile.getAbsolutePath();
{code}
The serverAddress is defined in TaskExecutor:
{code:java}
final InetSocketAddress blobServerAddress = new InetSocketAddress(
   clusterInformation.getBlobServerHostname(),
   clusterInformation.getBlobServerPort());

blobCacheService.setBlobServerAddress(blobServerAddress);
{code}
The clusterInformation is defined in ResourceManagerRegistrationListener:
{code:java}
if (resourceManagerConnection == connection) {
   try {
  establishResourceManagerConnection(
 resourceManagerGateway,
 resourceManagerId,
 taskExecutorRegistrationId,
 clusterInformation);
{code}
Where ResourceManagerRegistrationListener is used:
{code:java}
resourceManagerConnection =
   new TaskExecutorToResourceManagerConnection(
  log,
  getRpcService(),
  taskManagerConfiguration.getRetryingRegistrationConfiguration(),
  resourceManagerAddress.getAddress(),
  resourceManagerAddress.getResourceManagerId(),
  getMainThreadExecutor(),
  new ResourceManagerRegistrationListener(),
  taskExecutorRegistration);
{code}
 

 )
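
For readers following the deleted comment above, here is a minimal, Flink-free
Java sketch of the described mismatch. All directory names and the blob key are
hypothetical stand-ins (the real paths live under /data2/flink/storageDir on the
two JobManager hosts); the point is only that a blob written on one node's local
disk cannot be found by a node that looks on its own local disk.
{code:java}
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.nio.file.Files;

public class BlobNodeMismatchSketch {
    public static void main(String[] args) throws IOException {
        // Stand-ins for the node-local blob storage directories of two JobManager nodes.
        File dispatcherNodeBlobDir = Files.createTempDirectory("node02-blob").toFile();
        File resourceManagerNodeBlobDir = Files.createTempDirectory("node01-blob").toFile();

        // Upload path: the blob lands on the dispatcher node's local disk.
        String blobName = "blob_p-example-key";
        Files.write(new File(dispatcherNodeBlobDir, blobName).toPath(), new byte[] {1, 2, 3});

        // Download path: the TaskExecutor asks the node it learned from the
        // ResourceManager's ClusterInformation, which looks on its own local disk.
        File requested = new File(resourceManagerNodeBlobDir, blobName);
        try (FileInputStream in = new FileInputStream(requested)) {
            System.out.println("read " + in.read());
        } catch (IOException e) {
            // Prints a FileNotFoundException analogous to the ones in the stack traces above.
            System.out.println(e);
        }
    }
}
{code}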

> resource_manager and dispatcher register on different nodes in HA mode will 
> cause FileNotFoundException
> ---
>
> Key: FLINK-19067
> URL: https://issues.apache.org/jira/browse/FLINK-19067
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.11.1
>Reporter: JieFang.He
>Assignee: Robert Metzger
>Priority: Major
> Attachments: flink-jobmanager-deployer-hejiefang01.log, 
> flink-jobmanager-deployer-hejiefang02.log, 
> flink-taskmanager-deployer-hejiefang01.log, 
> flink-taskmanager-deployer-hejiefang02.log
>
>
> When run examples/batch/WordCount.jar,it will fail with the exception:
> {code:java}
> Caused by: java.io.FileNotFoundException: 
> /data2/flink/storageDir/default/blob/job_d29414828f614d5466e239be4d3889ac/blob_p-a2ebe1c5aa160595f214b4bd0f39d80e42ee2e93-f458f1c12dc023e78d25f191de1d7c4b
>  (No such file or directory)
>  at java.io.FileInputStream.open0(Native Method)
>  at java.io.FileInputStream.open(FileInputStream.java:195)
>  at java.io.FileInputStream.(FileInputStream.java:138)
>  at 
> org.apache.flink.core.fs.local.LocalDataInputStream.(LocalDataInputStream.java:50)
>  at 
> org.apache.flink.core.fs.local.LocalFileSystem.open(LocalFileSystem.java:143)
>  at 
> org.apache.flink.runtime.blob.FileSystemBlobStore.get(FileSystemBlobStore.java:105)
>  at 
> org.apache.flink.runtime.blob.FileSystemBlobStore.get

[jira] [Commented] (FLINK-19067) resource_manager and dispatcher register on different nodes in HA mode will cause FileNotFoundException

2021-01-17 Thread JieFang.He (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17266965#comment-17266965
 ] 

JieFang.He commented on FLINK-19067:


Thank you for your answers and suggestions

> resource_manager and dispatcher register on different nodes in HA mode will 
> cause FileNotFoundException
> ---
>
> Key: FLINK-19067
> URL: https://issues.apache.org/jira/browse/FLINK-19067
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.11.1
>Reporter: JieFang.He
>Assignee: Robert Metzger
>Priority: Major
> Attachments: flink-jobmanager-deployer-hejiefang01.log, 
> flink-jobmanager-deployer-hejiefang02.log, 
> flink-taskmanager-deployer-hejiefang01.log, 
> flink-taskmanager-deployer-hejiefang02.log
>
>
> When run examples/batch/WordCount.jar,it will fail with the exception:
> {code:java}
> Caused by: java.io.FileNotFoundException: 
> /data2/flink/storageDir/default/blob/job_d29414828f614d5466e239be4d3889ac/blob_p-a2ebe1c5aa160595f214b4bd0f39d80e42ee2e93-f458f1c12dc023e78d25f191de1d7c4b
>  (No such file or directory)
>  at java.io.FileInputStream.open0(Native Method)
>  at java.io.FileInputStream.open(FileInputStream.java:195)
>  at java.io.FileInputStream.(FileInputStream.java:138)
>  at 
> org.apache.flink.core.fs.local.LocalDataInputStream.(LocalDataInputStream.java:50)
>  at 
> org.apache.flink.core.fs.local.LocalFileSystem.open(LocalFileSystem.java:143)
>  at 
> org.apache.flink.runtime.blob.FileSystemBlobStore.get(FileSystemBlobStore.java:105)
>  at 
> org.apache.flink.runtime.blob.FileSystemBlobStore.get(FileSystemBlobStore.java:87)
>  at 
> org.apache.flink.runtime.blob.BlobServer.getFileInternal(BlobServer.java:501)
>  at 
> org.apache.flink.runtime.blob.BlobServerConnection.get(BlobServerConnection.java:231)
>  at 
> org.apache.flink.runtime.blob.BlobServerConnection.run(BlobServerConnection.java:117)
> {code}
>  
> I think the reason is that the jobFiles are upload to the dispatcher node,but 
> the task get jobFiles from resource_manager node. So in HA mode, it need to 
> ensure they are on one node
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-19067) resource_manager and dispatcher register on different nodes in HA mode will cause FileNotFoundException

2021-01-08 Thread JieFang.He (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

JieFang.He updated FLINK-19067:
---
Description: 
When running examples/batch/WordCount.jar, it will fail with the exception:
{code:java}
Caused by: java.io.FileNotFoundException: 
/data2/flink/storageDir/default/blob/job_d29414828f614d5466e239be4d3889ac/blob_p-a2ebe1c5aa160595f214b4bd0f39d80e42ee2e93-f458f1c12dc023e78d25f191de1d7c4b
 (No such file or directory)
 at java.io.FileInputStream.open0(Native Method)
 at java.io.FileInputStream.open(FileInputStream.java:195)
 at java.io.FileInputStream.(FileInputStream.java:138)
 at 
org.apache.flink.core.fs.local.LocalDataInputStream.(LocalDataInputStream.java:50)
 at 
org.apache.flink.core.fs.local.LocalFileSystem.open(LocalFileSystem.java:143)
 at 
org.apache.flink.runtime.blob.FileSystemBlobStore.get(FileSystemBlobStore.java:105)
 at 
org.apache.flink.runtime.blob.FileSystemBlobStore.get(FileSystemBlobStore.java:87)
 at 
org.apache.flink.runtime.blob.BlobServer.getFileInternal(BlobServer.java:501)
 at 
org.apache.flink.runtime.blob.BlobServerConnection.get(BlobServerConnection.java:231)
 at 
org.apache.flink.runtime.blob.BlobServerConnection.run(BlobServerConnection.java:117)
{code}
 

I think the reason is that the job files are uploaded to the dispatcher node, but
the task gets the job files from the resource_manager node. So in HA mode, it
needs to be ensured that they are on one node.

 

  was:
1、When run examples/batch/WordCount.jar,it will fail with the exception:

Caused by: java.io.FileNotFoundException: 
/data2/flink/storageDir/default/blob/job_d29414828f614d5466e239be4d3889ac/blob_p-a2ebe1c5aa160595f214b4bd0f39d80e42ee2e93-f458f1c12dc023e78d25f191de1d7c4b
 (No such file or directory)
 at java.io.FileInputStream.open0(Native Method)
 at java.io.FileInputStream.open(FileInputStream.java:195)
 at java.io.FileInputStream.(FileInputStream.java:138)
 at 
org.apache.flink.core.fs.local.LocalDataInputStream.(LocalDataInputStream.java:50)
 at 
org.apache.flink.core.fs.local.LocalFileSystem.open(LocalFileSystem.java:143)
 at 
org.apache.flink.runtime.blob.FileSystemBlobStore.get(FileSystemBlobStore.java:105)
 at 
org.apache.flink.runtime.blob.FileSystemBlobStore.get(FileSystemBlobStore.java:87)
 at 
org.apache.flink.runtime.blob.BlobServer.getFileInternal(BlobServer.java:501)
 at 
org.apache.flink.runtime.blob.BlobServerConnection.get(BlobServerConnection.java:231)
 at 
org.apache.flink.runtime.blob.BlobServerConnection.run(BlobServerConnection.java:117)

 

 


> resource_manager and dispatcher register on different nodes in HA mode will 
> cause FileNotFoundException
> ---
>
> Key: FLINK-19067
> URL: https://issues.apache.org/jira/browse/FLINK-19067
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.11.1
>Reporter: JieFang.He
>Priority: Major
> Attachments: flink-jobmanager-deployer-hejiefang01.log, 
> flink-jobmanager-deployer-hejiefang02.log, 
> flink-taskmanager-deployer-hejiefang01.log, 
> flink-taskmanager-deployer-hejiefang02.log
>
>
> When run examples/batch/WordCount.jar,it will fail with the exception:
> {code:java}
> Caused by: java.io.FileNotFoundException: 
> /data2/flink/storageDir/default/blob/job_d29414828f614d5466e239be4d3889ac/blob_p-a2ebe1c5aa160595f214b4bd0f39d80e42ee2e93-f458f1c12dc023e78d25f191de1d7c4b
>  (No such file or directory)
>  at java.io.FileInputStream.open0(Native Method)
>  at java.io.FileInputStream.open(FileInputStream.java:195)
>  at java.io.FileInputStream.(FileInputStream.java:138)
>  at 
> org.apache.flink.core.fs.local.LocalDataInputStream.(LocalDataInputStream.java:50)
>  at 
> org.apache.flink.core.fs.local.LocalFileSystem.open(LocalFileSystem.java:143)
>  at 
> org.apache.flink.runtime.blob.FileSystemBlobStore.get(FileSystemBlobStore.java:105)
>  at 
> org.apache.flink.runtime.blob.FileSystemBlobStore.get(FileSystemBlobStore.java:87)
>  at 
> org.apache.flink.runtime.blob.BlobServer.getFileInternal(BlobServer.java:501)
>  at 
> org.apache.flink.runtime.blob.BlobServerConnection.get(BlobServerConnection.java:231)
>  at 
> org.apache.flink.runtime.blob.BlobServerConnection.run(BlobServerConnection.java:117)
> {code}
>  
> I think the reason is that the jobFiles are upload to the dispatcher node,but 
> the task get jobFiles from resource_manager node. So in HA mode, it need to 
> ensure they are on one node
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-19067) resource_manager and dispatcher register on different nodes will cause FileNotFoundException

2021-01-08 Thread JieFang.He (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

JieFang.He updated FLINK-19067:
---
Summary: resource_manager and dispatcher register on different nodes will 
cause FileNotFoundException  (was: resource_manager and dispatcher register on 
different nodes will cause )

> resource_manager and dispatcher register on different nodes will cause 
> FileNotFoundException
> 
>
> Key: FLINK-19067
> URL: https://issues.apache.org/jira/browse/FLINK-19067
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.11.1
>Reporter: JieFang.He
>Priority: Major
> Attachments: flink-jobmanager-deployer-hejiefang01.log, 
> flink-jobmanager-deployer-hejiefang02.log, 
> flink-taskmanager-deployer-hejiefang01.log, 
> flink-taskmanager-deployer-hejiefang02.log
>
>
> 1、When run examples/batch/WordCount.jar,it will fail with the exception:
> Caused by: java.io.FileNotFoundException: 
> /data2/flink/storageDir/default/blob/job_d29414828f614d5466e239be4d3889ac/blob_p-a2ebe1c5aa160595f214b4bd0f39d80e42ee2e93-f458f1c12dc023e78d25f191de1d7c4b
>  (No such file or directory)
>  at java.io.FileInputStream.open0(Native Method)
>  at java.io.FileInputStream.open(FileInputStream.java:195)
>  at java.io.FileInputStream.(FileInputStream.java:138)
>  at 
> org.apache.flink.core.fs.local.LocalDataInputStream.(LocalDataInputStream.java:50)
>  at 
> org.apache.flink.core.fs.local.LocalFileSystem.open(LocalFileSystem.java:143)
>  at 
> org.apache.flink.runtime.blob.FileSystemBlobStore.get(FileSystemBlobStore.java:105)
>  at 
> org.apache.flink.runtime.blob.FileSystemBlobStore.get(FileSystemBlobStore.java:87)
>  at 
> org.apache.flink.runtime.blob.BlobServer.getFileInternal(BlobServer.java:501)
>  at 
> org.apache.flink.runtime.blob.BlobServerConnection.get(BlobServerConnection.java:231)
>  at 
> org.apache.flink.runtime.blob.BlobServerConnection.run(BlobServerConnection.java:117)
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-19067) resource_manager and dispatcher register on different nodes in HA mode will cause FileNotFoundException

2021-01-08 Thread JieFang.He (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

JieFang.He updated FLINK-19067:
---
Summary: resource_manager and dispatcher register on different nodes in HA 
mode will cause FileNotFoundException  (was: resource_manager and dispatcher 
register on different nodes will cause FileNotFoundException)

> resource_manager and dispatcher register on different nodes in HA mode will 
> cause FileNotFoundException
> ---
>
> Key: FLINK-19067
> URL: https://issues.apache.org/jira/browse/FLINK-19067
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.11.1
>Reporter: JieFang.He
>Priority: Major
> Attachments: flink-jobmanager-deployer-hejiefang01.log, 
> flink-jobmanager-deployer-hejiefang02.log, 
> flink-taskmanager-deployer-hejiefang01.log, 
> flink-taskmanager-deployer-hejiefang02.log
>
>
> 1、When run examples/batch/WordCount.jar,it will fail with the exception:
> Caused by: java.io.FileNotFoundException: 
> /data2/flink/storageDir/default/blob/job_d29414828f614d5466e239be4d3889ac/blob_p-a2ebe1c5aa160595f214b4bd0f39d80e42ee2e93-f458f1c12dc023e78d25f191de1d7c4b
>  (No such file or directory)
>  at java.io.FileInputStream.open0(Native Method)
>  at java.io.FileInputStream.open(FileInputStream.java:195)
>  at java.io.FileInputStream.(FileInputStream.java:138)
>  at 
> org.apache.flink.core.fs.local.LocalDataInputStream.(LocalDataInputStream.java:50)
>  at 
> org.apache.flink.core.fs.local.LocalFileSystem.open(LocalFileSystem.java:143)
>  at 
> org.apache.flink.runtime.blob.FileSystemBlobStore.get(FileSystemBlobStore.java:105)
>  at 
> org.apache.flink.runtime.blob.FileSystemBlobStore.get(FileSystemBlobStore.java:87)
>  at 
> org.apache.flink.runtime.blob.BlobServer.getFileInternal(BlobServer.java:501)
>  at 
> org.apache.flink.runtime.blob.BlobServerConnection.get(BlobServerConnection.java:231)
>  at 
> org.apache.flink.runtime.blob.BlobServerConnection.run(BlobServerConnection.java:117)
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-19067) resource_manager and dispatcher register on different nodes will cause

2021-01-08 Thread JieFang.He (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

JieFang.He updated FLINK-19067:
---
Summary: resource_manager and dispatcher register on different nodes will 
cause   (was: FileNotFoundException when run flink examples)

> resource_manager and dispatcher register on different nodes will cause 
> ---
>
> Key: FLINK-19067
> URL: https://issues.apache.org/jira/browse/FLINK-19067
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.11.1
>Reporter: JieFang.He
>Priority: Major
> Attachments: flink-jobmanager-deployer-hejiefang01.log, 
> flink-jobmanager-deployer-hejiefang02.log, 
> flink-taskmanager-deployer-hejiefang01.log, 
> flink-taskmanager-deployer-hejiefang02.log
>
>
> 1、When run examples/batch/WordCount.jar,it will fail with the exception:
> Caused by: java.io.FileNotFoundException: 
> /data2/flink/storageDir/default/blob/job_d29414828f614d5466e239be4d3889ac/blob_p-a2ebe1c5aa160595f214b4bd0f39d80e42ee2e93-f458f1c12dc023e78d25f191de1d7c4b
>  (No such file or directory)
>  at java.io.FileInputStream.open0(Native Method)
>  at java.io.FileInputStream.open(FileInputStream.java:195)
>  at java.io.FileInputStream.(FileInputStream.java:138)
>  at 
> org.apache.flink.core.fs.local.LocalDataInputStream.(LocalDataInputStream.java:50)
>  at 
> org.apache.flink.core.fs.local.LocalFileSystem.open(LocalFileSystem.java:143)
>  at 
> org.apache.flink.runtime.blob.FileSystemBlobStore.get(FileSystemBlobStore.java:105)
>  at 
> org.apache.flink.runtime.blob.FileSystemBlobStore.get(FileSystemBlobStore.java:87)
>  at 
> org.apache.flink.runtime.blob.BlobServer.getFileInternal(BlobServer.java:501)
>  at 
> org.apache.flink.runtime.blob.BlobServerConnection.get(BlobServerConnection.java:231)
>  at 
> org.apache.flink.runtime.blob.BlobServerConnection.run(BlobServerConnection.java:117)
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-20855) Calculating numBuckets exceeds the maximum value of int and got a negative number

2021-01-05 Thread JieFang.He (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20855?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

JieFang.He updated FLINK-20855:
---
Description: 
When I run the 500 GB TPC-DS benchmark, I get an exception:
{code:java}
Caused by: java.lang.IllegalArgumentException
at 
org.apache.flink.util.Preconditions.checkArgument(Preconditions.java:123)
at 
org.apache.flink.table.runtime.hashtable.LongHashPartition.setNewBuckets(LongHashPartition.java:223)
at 
org.apache.flink.table.runtime.hashtable.LongHashPartition.(LongHashPartition.java:176)
at 
org.apache.flink.table.runtime.hashtable.LongHybridHashTable.buildTableFromSpilledPartition(LongHybridHashTable.java:432)
at 
org.apache.flink.table.runtime.hashtable.LongHybridHashTable.prepareNextPartition(LongHybridHashTable.java:354)
at 
org.apache.flink.table.runtime.hashtable.LongHybridHashTable.nextMatching(LongHybridHashTable.java:145)
at LongHashJoinOperator$40166.endInput2$(Unknown Source)
at LongHashJoinOperator$40166.endInput(Unknown Source)
at 
org.apache.flink.streaming.runtime.tasks.StreamOperatorWrapper.endOperatorInput(StreamOperatorWrapper.java:101)
at 
org.apache.flink.streaming.runtime.tasks.OperatorChain.endHeadOperatorInput(OperatorChain.java:278)
at 
org.apache.flink.streaming.runtime.io.StreamTwoInputProcessor.checkFinished(StreamTwoInputProcessor.java:211)
at 
org.apache.flink.streaming.runtime.io.StreamTwoInputProcessor.processInput(StreamTwoInputProcessor.java:183)
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.processInput(StreamTask.java:349)
at 
org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxStep(MailboxProcessor.java:191)
at 
org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:181)
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.runMailboxLoop(StreamTask.java:564)
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:534)
at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:721)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:546)
at java.lang.Thread.run(Thread.java:748)
{code}
The reason is that when calculating numBuckets in LongHashPartition, the
intermediate result exceeds the maximum value of int and becomes a negative number:
{code:java}
LongHashPartition(
  LongHybridHashTable longTable,
  int partitionNum,
  BinaryRowDataSerializer buildSideSerializer,
  int bucketNumSegs,
  int recursionLevel,
  List buffers,
  int lastSegmentLimit) {
   this(longTable, buildSideSerializer, listToArray(buffers));
   this.partitionNum = partitionNum;
   this.recursionLevel = recursionLevel;

   int numBuckets = MathUtils.roundDownToPowerOf2(bucketNumSegs * segmentSize / 
16);
   MemorySegment[] buckets = new MemorySegment[bucketNumSegs];
   for (int i = 0; i < bucketNumSegs; i++) {
  buckets[i] = longTable.nextSegment();
   }
   setNewBuckets(buckets, numBuckets);
   this.finalBufferLimit = lastSegmentLimit;
}
{code}
A way to avoid the exception is to adjust the calculation order

change
{code:java}
int numBuckets = MathUtils.roundDownToPowerOf2(bucketNumSegs * segmentSize / 
16);
{code}
to
{code:java}
int numBuckets = MathUtils.roundDownToPowerOf2(segmentSize / 16 * 
bucketNumSegs);
{code}
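
As a worked illustration of the overflow (the segment size and segment count
below are assumed example values, not figures from the failing job): with a 32 KB
segment and 70,000 bucket segments, the multiplication-first form wraps around in
32-bit arithmetic, while the division-first form stays in range. Dividing first is
exact whenever the segment size is a multiple of 16.
{code:java}
public class NumBucketsOverflowExample {
    public static void main(String[] args) {
        // Assumed example values: a 32 KB segment and 70,000 bucket segments.
        int segmentSize = 32 * 1024;   // 32768
        int bucketNumSegs = 70_000;

        // Original order: 70_000 * 32_768 = 2_293_760_000 exceeds Integer.MAX_VALUE
        // (2_147_483_647) and wraps to -2_001_207_296; dividing by 16 keeps it negative.
        System.out.println(bucketNumSegs * segmentSize / 16);   // -125075456

        // Reordered as proposed above: divide first, then multiply.
        System.out.println(segmentSize / 16 * bucketNumSegs);   // 2048 * 70000 = 143360000
    }
}
{code}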

  was:
When i run the TPCDS of 500G,i get a exception
{code:java}
Caused by: java.lang.IllegalArgumentException
at 
org.apache.flink.util.Preconditions.checkArgument(Preconditions.java:123)
at 
org.apache.flink.table.runtime.hashtable.LongHashPartition.setNewBuckets(LongHashPartition.java:223)
at 
org.apache.flink.table.runtime.hashtable.LongHashPartition.(LongHashPartition.java:176)
at 
org.apache.flink.table.runtime.hashtable.LongHybridHashTable.buildTableFromSpilledPartition(LongHybridHashTable.java:432)
at 
org.apache.flink.table.runtime.hashtable.LongHybridHashTable.prepareNextPartition(LongHybridHashTable.java:354)
at 
org.apache.flink.table.runtime.hashtable.LongHybridHashTable.nextMatching(LongHybridHashTable.java:145)
at LongHashJoinOperator$40166.endInput2$(Unknown Source)
at LongHashJoinOperator$40166.endInput(Unknown Source)
at 
org.apache.flink.streaming.runtime.tasks.StreamOperatorWrapper.endOperatorInput(StreamOperatorWrapper.java:101)
at 
org.apache.flink.streaming.runtime.tasks.OperatorChain.endHeadOperatorInput(OperatorChain.java:278)
at 
org.apache.flink.streaming.runtime.io.StreamTwoInputProcessor.checkFinished(StreamTwoInputProcessor.java:211)
at 
org.apache.flink.streaming.runtime.io.StreamTwoInputProcessor.processInput(StreamTwoInputProcessor.java:183)
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.processInput(StreamTask.java:349)
at 
org.apache.flink.streaming.ru

[jira] [Created] (FLINK-20855) Calculating numBuckets exceeds the maximum value of int and got a negative number

2021-01-05 Thread JieFang.He (Jira)
JieFang.He created FLINK-20855:
--

 Summary: Calculating numBuckets exceeds the maximum value of int 
and got a negative number
 Key: FLINK-20855
 URL: https://issues.apache.org/jira/browse/FLINK-20855
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Runtime
Affects Versions: 1.11.1, 1.12.0
Reporter: JieFang.He


When I run the 500 GB TPC-DS benchmark, I get an exception:
{code:java}
Caused by: java.lang.IllegalArgumentException
at 
org.apache.flink.util.Preconditions.checkArgument(Preconditions.java:123)
at 
org.apache.flink.table.runtime.hashtable.LongHashPartition.setNewBuckets(LongHashPartition.java:223)
at 
org.apache.flink.table.runtime.hashtable.LongHashPartition.(LongHashPartition.java:176)
at 
org.apache.flink.table.runtime.hashtable.LongHybridHashTable.buildTableFromSpilledPartition(LongHybridHashTable.java:432)
at 
org.apache.flink.table.runtime.hashtable.LongHybridHashTable.prepareNextPartition(LongHybridHashTable.java:354)
at 
org.apache.flink.table.runtime.hashtable.LongHybridHashTable.nextMatching(LongHybridHashTable.java:145)
at LongHashJoinOperator$40166.endInput2$(Unknown Source)
at LongHashJoinOperator$40166.endInput(Unknown Source)
at 
org.apache.flink.streaming.runtime.tasks.StreamOperatorWrapper.endOperatorInput(StreamOperatorWrapper.java:101)
at 
org.apache.flink.streaming.runtime.tasks.OperatorChain.endHeadOperatorInput(OperatorChain.java:278)
at 
org.apache.flink.streaming.runtime.io.StreamTwoInputProcessor.checkFinished(StreamTwoInputProcessor.java:211)
at 
org.apache.flink.streaming.runtime.io.StreamTwoInputProcessor.processInput(StreamTwoInputProcessor.java:183)
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.processInput(StreamTask.java:349)
at 
org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxStep(MailboxProcessor.java:191)
at 
org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:181)
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.runMailboxLoop(StreamTask.java:564)
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:534)
at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:721)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:546)
at java.lang.Thread.run(Thread.java:748)
{code}
The reason is that when calculating numBuckets in LongHashPartition, the
intermediate result exceeds the maximum value of int and becomes a negative number:
{code:java}
LongHashPartition(
  LongHybridHashTable longTable,
  int partitionNum,
  BinaryRowDataSerializer buildSideSerializer,
  int bucketNumSegs,
  int recursionLevel,
  List buffers,
  int lastSegmentLimit) {
   this(longTable, buildSideSerializer, listToArray(buffers));
   this.partitionNum = partitionNum;
   this.recursionLevel = recursionLevel;

   int numBuckets = MathUtils.roundDownToPowerOf2(bucketNumSegs * segmentSize / 
16);
   MemorySegment[] buckets = new MemorySegment[bucketNumSegs];
   for (int i = 0; i < bucketNumSegs; i++) {
  buckets[i] = longTable.nextSegment();
   }
   setNewBuckets(buckets, numBuckets);
   this.finalBufferLimit = lastSegmentLimit;
}
{code}
A way to avoid the exception is to adjust the calculation order

change
{code:java}
int numBuckets = MathUtils.roundDownToPowerOf2(bucketNumSegs * segmentSize / 
16);
{code}
to
{code:java}
int numBuckets = MathUtils.roundDownToPowerOf2(bucketNumSegs / 16 * 
segmentSize);
{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (FLINK-19067) FileNotFoundException when run flink examples

2020-12-30 Thread JieFang.He (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17256814#comment-17256814
 ] 

JieFang.He edited comment on FLINK-19067 at 12/31/20, 3:39 AM:
---

I think the reason is that the job files are uploaded to the dispatcher node, but
the task gets the job files from the resource_manager node. So in HA mode, it
needs to be ensured that they are on one node.

When the job is submitted to the REST endpoint, the REST endpoint then submits
it to the dispatcher, so the job files are put on the dispatcher node.

Here is the log
{code:java}
2020-12-31 00:42:29,255 DEBUG [BLOB connection for /127.0.0.1:43576] 
[FileSystemBlobStore]: Copying from 
/tmp/blobStore-5fe6b952-fece-4590-b637-04f19d8121dd/job_22ea0e6f01a4d7667aa1077e9bffe759/blob_p-b8b09e59290d54dbc0bfd5c7672d424e7f2c5178-3c4b3b70d11b7ca1164a615928023109
 to 
/data2/zdh/flink/storageDir/default/blob/job_22ea0e6f01a4d7667aa1077e9bffe759/blob_p-b8b09e59290d54dbc0bfd5c7672d424e7f2c5178-3c4b3b70d11b7ca1164a615928023109.
{code}
and here is the code in JobSubmitHandler
{code:java}
CompletableFuture jobSubmissionFuture = 
finalizedJobGraphFuture.thenCompose(jobGraph -> gateway.submitJob(jobGraph, 
timeout));
{code}
The gateway is defined in DefaultDispatcherResourceManagerComponentFactory:
{code:java}
final LeaderGatewayRetriever dispatcherGatewayRetriever = 
new RpcGatewayRetriever<>(
   rpcService,
   DispatcherGateway.class,
   DispatcherId::fromUuid,
   10,
   Time.milliseconds(50L));
{code}
{code:java}
webMonitorEndpoint = restEndpointFactory.createRestEndpoint(
   configuration,
   dispatcherGatewayRetriever,
   resourceManagerGatewayRetriever,
   blobServer,
   executor,
   metricFetcher,
   highAvailabilityServices.getClusterRestEndpointLeaderElectionService(),
   fatalErrorHandler);
{code}
 

But the task gets the job files from the resource_manager.

The relevant code is in BlobClient.downloadFromBlobServer:
{code:java}
static void downloadFromBlobServer(
  @Nullable JobID jobId,
  BlobKey blobKey,
  File localJarFile,
  InetSocketAddress serverAddress,
  Configuration blobClientConfig,
  int numFetchRetries) throws IOException {
..
  catch (Throwable t) {
 String message = "Failed to fetch BLOB " + jobId + "/" + blobKey + " 
from " + serverAddress +
" and store it under " + localJarFile.getAbsolutePath();
{code}
The serverAddress is defined in TaskExecutor:
{code:java}
final InetSocketAddress blobServerAddress = new InetSocketAddress(
   clusterInformation.getBlobServerHostname(),
   clusterInformation.getBlobServerPort());

blobCacheService.setBlobServerAddress(blobServerAddress);
{code}
The clusterInformation is defined in ResourceManagerRegistrationListener:
{code:java}
if (resourceManagerConnection == connection) {
   try {
  establishResourceManagerConnection(
 resourceManagerGateway,
 resourceManagerId,
 taskExecutorRegistrationId,
 clusterInformation);
{code}
Where ResourceManagerRegistrationListener is used:
{code:java}
resourceManagerConnection =
   new TaskExecutorToResourceManagerConnection(
  log,
  getRpcService(),
  taskManagerConfiguration.getRetryingRegistrationConfiguration(),
  resourceManagerAddress.getAddress(),
  resourceManagerAddress.getResourceManagerId(),
  getMainThreadExecutor(),
  new ResourceManagerRegistrationListener(),
  taskExecutorRegistration);
{code}
 

 


was (Author: hejiefang):
I think the reason is that the jobFiles are upload to the dispatcher node,but 
the task get jobFiles from resource_manager node

When the job submit to the rest,the rest then submit it to dispatcher,the 
jobFiles are the put to dispatcher

Here is the log
{code:java}
2020-12-31 00:42:29,255 DEBUG [BLOB connection for /127.0.0.1:43576] 
[FileSystemBlobStore]: Copying from 
/tmp/blobStore-5fe6b952-fece-4590-b637-04f19d8121dd/job_22ea0e6f01a4d7667aa1077e9bffe759/blob_p-b8b09e59290d54dbc0bfd5c7672d424e7f2c5178-3c4b3b70d11b7ca1164a615928023109
 to 
/data2/zdh/flink/storageDir/default/blob/job_22ea0e6f01a4d7667aa1077e9bffe759/blob_p-b8b09e59290d54dbc0bfd5c7672d424e7f2c5178-3c4b3b70d11b7ca1164a615928023109.
{code}
and here is the code in JobSubmitHandler
{code:java}
CompletableFuture jobSubmissionFuture = 
finalizedJobGraphFuture.thenCompose(jobGraph -> gateway.submitJob(jobGraph, 
timeout));
{code}
the gateway is define in DefaultDispatcherResourceManagerComponentFactory
{code:java}
final LeaderGatewayRetriever dispatcherGatewayRetriever = 
new RpcGatewayRetriever<>(
   rpcService,
   DispatcherGateway.class,
   DispatcherId::fromUuid,
   10,
   Time.milliseconds(50L));
{code}
{code:java}
webMonitorEndpoint = restEndpointFactory.createRestEndpoint(
   configuration,
   dispatcherGatewayRetriever,
   resourceManagerGatewayRetriever,
   blobServer,
   executor,
   metricFetcher,
   highAvailabilityServices.getClusterRestEndpointLeaderElectionService(),
   fatalErrorHand

[jira] [Comment Edited] (FLINK-19067) FileNotFoundException when run flink examples

2020-12-30 Thread JieFang.He (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17256814#comment-17256814
 ] 

JieFang.He edited comment on FLINK-19067 at 12/31/20, 3:36 AM:
---

I think the reason is that the job files are uploaded to the dispatcher node, but
the task gets the job files from the resource_manager node.

When the job is submitted to the REST endpoint, the REST endpoint then submits
it to the dispatcher, so the job files are put on the dispatcher node.

Here is the log
{code:java}
2020-12-31 00:42:29,255 DEBUG [BLOB connection for /127.0.0.1:43576] 
[FileSystemBlobStore]: Copying from 
/tmp/blobStore-5fe6b952-fece-4590-b637-04f19d8121dd/job_22ea0e6f01a4d7667aa1077e9bffe759/blob_p-b8b09e59290d54dbc0bfd5c7672d424e7f2c5178-3c4b3b70d11b7ca1164a615928023109
 to 
/data2/zdh/flink/storageDir/default/blob/job_22ea0e6f01a4d7667aa1077e9bffe759/blob_p-b8b09e59290d54dbc0bfd5c7672d424e7f2c5178-3c4b3b70d11b7ca1164a615928023109.
{code}
and here is the code in JobSubmitHandler
{code:java}
CompletableFuture jobSubmissionFuture = 
finalizedJobGraphFuture.thenCompose(jobGraph -> gateway.submitJob(jobGraph, 
timeout));
{code}
The gateway is defined in DefaultDispatcherResourceManagerComponentFactory:
{code:java}
final LeaderGatewayRetriever dispatcherGatewayRetriever = 
new RpcGatewayRetriever<>(
   rpcService,
   DispatcherGateway.class,
   DispatcherId::fromUuid,
   10,
   Time.milliseconds(50L));
{code}
{code:java}
webMonitorEndpoint = restEndpointFactory.createRestEndpoint(
   configuration,
   dispatcherGatewayRetriever,
   resourceManagerGatewayRetriever,
   blobServer,
   executor,
   metricFetcher,
   highAvailabilityServices.getClusterRestEndpointLeaderElectionService(),
   fatalErrorHandler);
{code}
 

But the task gets the job files from the resource_manager.

The relevant code is in BlobClient.downloadFromBlobServer:
{code:java}
static void downloadFromBlobServer(
  @Nullable JobID jobId,
  BlobKey blobKey,
  File localJarFile,
  InetSocketAddress serverAddress,
  Configuration blobClientConfig,
  int numFetchRetries) throws IOException {
..
  catch (Throwable t) {
 String message = "Failed to fetch BLOB " + jobId + "/" + blobKey + " 
from " + serverAddress +
" and store it under " + localJarFile.getAbsolutePath();
{code}
The serverAddress is defined in TaskExecutor:
{code:java}
final InetSocketAddress blobServerAddress = new InetSocketAddress(
   clusterInformation.getBlobServerHostname(),
   clusterInformation.getBlobServerPort());

blobCacheService.setBlobServerAddress(blobServerAddress);
{code}
The clusterInformation is defined in ResourceManagerRegistrationListener:
{code:java}
if (resourceManagerConnection == connection) {
   try {
  establishResourceManagerConnection(
 resourceManagerGateway,
 resourceManagerId,
 taskExecutorRegistrationId,
 clusterInformation);
{code}
Where ResourceManagerRegistrationListener is used:
{code:java}
resourceManagerConnection =
   new TaskExecutorToResourceManagerConnection(
  log,
  getRpcService(),
  taskManagerConfiguration.getRetryingRegistrationConfiguration(),
  resourceManagerAddress.getAddress(),
  resourceManagerAddress.getResourceManagerId(),
  getMainThreadExecutor(),
  new ResourceManagerRegistrationListener(),
  taskExecutorRegistration);
{code}
 

 


was (Author: hejiefang):
I think the reason is that the jobFiles are upload to the dispatcher node,but 
the task get jobFiles from resource_manager node

When the job submit to the rest,the rest then submit it to dispatcher,the 
jobFiles are the put to dispatcher

Here is the log

 
{code:java}
2020-12-31 00:42:29,255 DEBUG [BLOB connection for /127.0.0.1:43576] 
[FileSystemBlobStore]: Copying from 
/tmp/blobStore-5fe6b952-fece-4590-b637-04f19d8121dd/job_22ea0e6f01a4d7667aa1077e9bffe759/blob_p-b8b09e59290d54dbc0bfd5c7672d424e7f2c5178-3c4b3b70d11b7ca1164a615928023109
 to 
/data2/zdh/flink/storageDir/default/blob/job_22ea0e6f01a4d7667aa1077e9bffe759/blob_p-b8b09e59290d54dbc0bfd5c7672d424e7f2c5178-3c4b3b70d11b7ca1164a615928023109.
{code}
and here is the code in JobSubmitHandler

 

 
{code:java}
CompletableFuture jobSubmissionFuture = 
finalizedJobGraphFuture.thenCompose(jobGraph -> gateway.submitJob(jobGraph, 
timeout));
{code}
the gateway is define in DefaultDispatcherResourceManagerComponentFactory

 
{code:java}
final LeaderGatewayRetriever dispatcherGatewayRetriever = 
new RpcGatewayRetriever<>(
   rpcService,
   DispatcherGateway.class,
   DispatcherId::fromUuid,
   10,
   Time.milliseconds(50L));
{code}
 
{code:java}
webMonitorEndpoint = restEndpointFactory.createRestEndpoint(
   configuration,
   dispatcherGatewayRetriever,
   resourceManagerGatewayRetriever,
   blobServer,
   executor,
   metricFetcher,
   highAvailabilityServices.getClusterRestEndpointLeaderElectionService(),
   fatalErrorHandler);
{code}
 

 

But the Task get the jo

[jira] [Commented] (FLINK-19067) FileNotFoundException when run flink examples

2020-12-30 Thread JieFang.He (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17256814#comment-17256814
 ] 

JieFang.He commented on FLINK-19067:


I think the reason is that the job files are uploaded to the dispatcher node, but
the task gets the job files from the resource_manager node.

When the job is submitted to the REST endpoint, the REST endpoint then submits
it to the dispatcher, so the job files are put on the dispatcher node.

Here is the log

 
{code:java}
2020-12-31 00:42:29,255 DEBUG [BLOB connection for /127.0.0.1:43576] 
[FileSystemBlobStore]: Copying from 
/tmp/blobStore-5fe6b952-fece-4590-b637-04f19d8121dd/job_22ea0e6f01a4d7667aa1077e9bffe759/blob_p-b8b09e59290d54dbc0bfd5c7672d424e7f2c5178-3c4b3b70d11b7ca1164a615928023109
 to 
/data2/zdh/flink/storageDir/default/blob/job_22ea0e6f01a4d7667aa1077e9bffe759/blob_p-b8b09e59290d54dbc0bfd5c7672d424e7f2c5178-3c4b3b70d11b7ca1164a615928023109.
{code}
And here is the code in JobSubmitHandler:

 

 
{code:java}
CompletableFuture<Acknowledge> jobSubmissionFuture =
    finalizedJobGraphFuture.thenCompose(jobGraph -> gateway.submitJob(jobGraph, timeout));
{code}
The gateway is defined in DefaultDispatcherResourceManagerComponentFactory:

 
{code:java}
final LeaderGatewayRetriever<DispatcherGateway> dispatcherGatewayRetriever =
    new RpcGatewayRetriever<>(
   rpcService,
   DispatcherGateway.class,
   DispatcherId::fromUuid,
   10,
   Time.milliseconds(50L));
{code}
 
{code:java}
webMonitorEndpoint = restEndpointFactory.createRestEndpoint(
   configuration,
   dispatcherGatewayRetriever,
   resourceManagerGatewayRetriever,
   blobServer,
   executor,
   metricFetcher,
   highAvailabilityServices.getClusterRestEndpointLeaderElectionService(),
   fatalErrorHandler);
{code}
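To make the path from the REST endpoint to the dispatcher easier to follow, here is a simplified stand-in for what the chain above amounts to; the interfaces are assumptions for the example, not Flink's real signatures:
{code:java}
import java.util.concurrent.CompletableFuture;

// The REST handler resolves the current dispatcher leader through a gateway
// retriever and submits the job graph to it, so the uploaded job files land on
// whichever node currently runs the dispatcher.
final class SubmissionPathSketch {

    interface DispatcherGatewayLike {
        CompletableFuture<Void> submitJob(String jobGraphLocation);
    }

    interface GatewayRetrieverLike {
        CompletableFuture<DispatcherGatewayLike> getFuture();
    }

    static CompletableFuture<Void> submitThroughRest(
            GatewayRetrieverLike dispatcherRetriever, String jobGraphLocation) {
        return dispatcherRetriever
                .getFuture()
                .thenCompose(gateway -> gateway.submitJob(jobGraphLocation));
    }
}
{code}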
 

 

But the Task gets the jobFiles from the resource_manager.

The code of BlobClient.downloadFromBlobServer:

 
{code:java}
static void downloadFromBlobServer(
  @Nullable JobID jobId,
  BlobKey blobKey,
  File localJarFile,
  InetSocketAddress serverAddress,
  Configuration blobClientConfig,
  int numFetchRetries) throws IOException {
..
  catch (Throwable t) {
 String message = "Failed to fetch BLOB " + jobId + "/" + blobKey + " 
from " + serverAddress +
" and store it under " + localJarFile.getAbsolutePath();
{code}
The serverAddress is defined in TaskExecutor:

 

 
{code:java}
final InetSocketAddress blobServerAddress = new InetSocketAddress(
   clusterInformation.getBlobServerHostname(),
   clusterInformation.getBlobServerPort());

blobCacheService.setBlobServerAddress(blobServerAddress);
{code}
The clusterInformation is defined in ResourceManagerRegistrationListener:

 
{code:java}
if (resourceManagerConnection == connection) {
   try {
  establishResourceManagerConnection(
 resourceManagerGateway,
 resourceManagerId,
 taskExecutorRegistrationId,
 clusterInformation);
{code}
Where ResourceManagerRegistrationListener is used:

 
{code:java}
resourceManagerConnection =
   new TaskExecutorToResourceManagerConnection(
  log,
  getRpcService(),
  taskManagerConfiguration.getRetryingRegistrationConfiguration(),
  resourceManagerAddress.getAddress(),
  resourceManagerAddress.getResourceManagerId(),
  getMainThreadExecutor(),
  new ResourceManagerRegistrationListener(),
  taskExecutorRegistration);
{code}
 

 

> FileNotFoundException when run flink examples
> -
>
> Key: FLINK-19067
> URL: https://issues.apache.org/jira/browse/FLINK-19067
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.11.1
>Reporter: JieFang.He
>Priority: Major
> Attachments: flink-jobmanager-deployer-hejiefang01.log, 
> flink-jobmanager-deployer-hejiefang02.log, 
> flink-taskmanager-deployer-hejiefang01.log, 
> flink-taskmanager-deployer-hejiefang02.log
>
>
> 1、When run examples/batch/WordCount.jar,it will fail with the exception:
> Caused by: java.io.FileNotFoundException: 
> /data2/flink/storageDir/default/blob/job_d29414828f614d5466e239be4d3889ac/blob_p-a2ebe1c5aa160595f214b4bd0f39d80e42ee2e93-f458f1c12dc023e78d25f191de1d7c4b
>  (No such file or directory)
>  at java.io.FileInputStream.open0(Native Method)
>  at java.io.FileInputStream.open(FileInputStream.java:195)
>  at java.io.FileInputStream.(FileInputStream.java:138)
>  at 
> org.apache.flink.core.fs.local.LocalDataInputStream.(LocalDataInputStream.java:50)
>  at 
> org.apache.flink.core.fs.local.LocalFileSystem.open(LocalFileSystem.java:143)
>  at 
> org.apache.flink.runtime.blob.FileSystemBlobStore.get(FileSystemBlobStore.java:105)
>  at 
> org.apache.flink.runtime.blob.FileSystemBlobStore.get(FileSystemBlobStore.java:87)
>  at 
> org.apache.flink.runtime.blob.BlobServer.getFileInternal(BlobServer.java:501)
>  at 
> org.apache.flink.runtime.blob.BlobServerCo

[jira] [Comment Edited] (FLINK-19067) FileNotFoundException when run flink examples

2020-09-16 Thread JieFang.He (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17196832#comment-17196832
 ] 

JieFang.He edited comment on FLINK-19067 at 9/16/20, 9:50 AM:
--

[~rmetzger] I find the same exception on the original version.

It seems that the client gets the incorrect blob node.

The exception on the client shows that the client wants to get the file on node-01, but I find that the blob file is created on node-02.

Here is the exception on the client:
{code:java}
Caused by: java.io.IOException: Failed to fetch BLOB 
0d04922e319f520b84a2f20c9b4556e0/p-903092697dde4fa408ba1f52fae34c5e876a997a-94c6b64d36b76c695392475a3c72cfab
 from ***-deployer-hejiefang01/172.17.0.4:32101 and store it under 
/data3/***/flink/tmp/blobStore-523c19be-109b-4b7b-9934-b78b0281ffd4/incoming/temp-0002
{code}
But I find the file is on node-02
{code:java}
[root@***-deployer-hejiefang02 default]# for i in {1..10}; do ll -R; sleep 
1;done
total 36
drwxr-xr-x 3 mr users  4096 Sep 17 01:28 blob
-rw-r--r-- 1 mr users 32576 Sep 17 01:28 submittedJobGraph202f6020058b
./blob:
total 4
drwxr-xr-x 2 mr users 4096 Sep 17 01:28 job_0d04922e319f520b84a2f20c9b4556e0
./blob/job_0d04922e319f520b84a2f20c9b4556e0:
total 12
-rw-r--r-- 1 mr users 9401 Sep 17 01:28 
blob_p-903092697dde4fa408ba1f52fae34c5e876a997a-94c6b64d36b76c695392475a3c72cfab
[root@***-deployer-hejiefang02 default]# 
{code}
Node-01 has no file
{code:java}
[root@***-deployer-hejiefang01 default]# for i in {1..10}; do ll -R; sleep 
1;done
.:
total 0
.:
total 0
.:
[root@***-deployer-hejiefang01 default]# 
{code}
And the information in ZooKeeper shows that the dispatcher is on node-02, while the resource_manager and rest_server are on node-01:
{code:java}
[zk: localhost:2181(CONNECTED) 0] get /flink/default/leader/dispatcher_lock
??wGEakka.tcp://flink@***-deployer-hejiefang02:37241/user/rpc/dispatcher_1srjava.util.UUIDm?/J
[zk: localhost:2181(CONNECTED) 1] get 
/flink/default/leader/resource_manager_lock
??wLJakka.tcp://flink@***-deployer-hejiefang01:30717/user/rpc/resourcemanager_0srjava.util.UUIDm?/J
[zk: localhost:2181(CONNECTED) 2] get /flink/default/leader/rest_server_lock
??w&$http://***-deployer-hejiefang01:8181srjava.util.UUIDm?/J
{code}
[^flink-jobmanager-deployer-hejiefang01.log]
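Since the dispatcher and the blob server end up on different JobManager nodes here, and the storage paths in the logs look node-local, one direction that may be worth checking is whether high-availability.storageDir points at a filesystem every JobManager node can reach. A minimal configuration sketch, with the HDFS path purely as an assumption for the example:
{code:java}
import org.apache.flink.configuration.Configuration;
import org.apache.flink.configuration.HighAvailabilityOptions;

// Sketch only: point the HA storage directory at a shared filesystem so that
// blob files written on the dispatcher node are readable from the node that
// currently serves them.
public class SharedHaStorageConfigSketch {

    public static void main(String[] args) {
        Configuration conf = new Configuration();
        conf.setString(HighAvailabilityOptions.HA_MODE, "zookeeper");
        conf.setString(HighAvailabilityOptions.HA_STORAGE_PATH, "hdfs:///flink/ha");
        System.out.println(conf.getString(HighAvailabilityOptions.HA_STORAGE_PATH));
    }
}
{code}
The same keys can of course be set in flink-conf.yaml instead of programmatically.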

 


was (Author: hejiefang):
[~rmetzger] I find the same exception on the original version.

It seems that the client gets the incorrect blob node.

The exception on the client shows that the client wants to get the file on node-01, but I find that the blob file is created on node-02.

Here is the exception on the client:

 
{code:java}
Caused by: java.io.IOException: Failed to fetch BLOB 
0d04922e319f520b84a2f20c9b4556e0/p-903092697dde4fa408ba1f52fae34c5e876a997a-94c6b64d36b76c695392475a3c72cfab
 from ***-deployer-hejiefang01/172.17.0.4:32101 and store it under 
/data3/***/flink/tmp/blobStore-523c19be-109b-4b7b-9934-b78b0281ffd4/incoming/temp-0002
{code}
But I find the file is on node-02

 

 
{code:java}
[root@***-deployer-hejiefang02 default]# for i in {1..10}; do ll -R; sleep 
1;done
total 36
drwxr-xr-x 3 mr users  4096 Sep 17 01:28 blob
-rw-r--r-- 1 mr users 32576 Sep 17 01:28 submittedJobGraph202f6020058b
./blob:
total 4
drwxr-xr-x 2 mr users 4096 Sep 17 01:28 job_0d04922e319f520b84a2f20c9b4556e0
./blob/job_0d04922e319f520b84a2f20c9b4556e0:
total 12
-rw-r--r-- 1 mr users 9401 Sep 17 01:28 
blob_p-903092697dde4fa408ba1f52fae34c5e876a997a-94c6b64d36b76c695392475a3c72cfab
[root@***-deployer-hejiefang02 default]# 
{code}
Node-01 has no file
{code:java}
[root@***-deployer-hejiefang01 default]# for i in {1..10}; do ll -R; sleep 
1;done
.:
total 0
.:
total 0
.:
[root@***-deployer-hejiefang01 default]# 
{code}
And the information in ZooKeeper shows that the dispatcher is on node-02, while the resource_manager and rest_server are on node-01:

 

 
{code:java}
[zk: localhost:2181(CONNECTED) 0] get /flink/default/leader/dispatcher_lock
??wGEakka.tcp://flink@***-deployer-hejiefang02:37241/user/rpc/dispatcher_1srjava.util.UUIDm?/J
[zk: localhost:2181(CONNECTED) 1] get 
/flink/default/leader/resource_manager_lock
??wLJakka.tcp://flink@***-deployer-hejiefang01:30717/user/rpc/resourcemanager_0srjava.util.UUIDm?/J
[zk: localhost:2181(CONNECTED) 2] get /flink/default/leader/rest_server_lock
??w&$http://***-deployer-hejiefang01:8181srjava.util.UUIDm?/J
{code}
[^flink-jobmanager-deployer-hejiefang01.log]

 

> FileNotFoundException when run flink examples
> -
>
> Key: FLINK-19067
> URL: https://issues.apache.org/jira/browse/FLINK-19067
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.11.1
>Reporter: JieFang.He
>Priority: Major
> Attachments: flink-jobmanager-deployer-hejiefang01.log, 
> flink-jobmanager-deployer-hej

[jira] [Commented] (FLINK-19067) FileNotFoundException when run flink examples

2020-09-16 Thread JieFang.He (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17196832#comment-17196832
 ] 

JieFang.He commented on FLINK-19067:


[~rmetzger] I find the same exception on the original version.

It seems that the client gets the incorrect blob node.

The exception on the client shows that the client wants to get the file on node-01, but I find that the blob file is created on node-02.

Here is the exception on the client:

 
{code:java}
Caused by: java.io.IOException: Failed to fetch BLOB 
0d04922e319f520b84a2f20c9b4556e0/p-903092697dde4fa408ba1f52fae34c5e876a997a-94c6b64d36b76c695392475a3c72cfab
 from ***-deployer-hejiefang01/172.17.0.4:32101 and store it under 
/data3/***/flink/tmp/blobStore-523c19be-109b-4b7b-9934-b78b0281ffd4/incoming/temp-0002
{code}
But I find the file is on node-02

 

 
{code:java}
[root@***-deployer-hejiefang02 default]# for i in {1..10}; do ll -R; sleep 
1;done
total 36
drwxr-xr-x 3 mr users  4096 Sep 17 01:28 blob
-rw-r--r-- 1 mr users 32576 Sep 17 01:28 submittedJobGraph202f6020058b
./blob:
total 4
drwxr-xr-x 2 mr users 4096 Sep 17 01:28 job_0d04922e319f520b84a2f20c9b4556e0
./blob/job_0d04922e319f520b84a2f20c9b4556e0:
total 12
-rw-r--r-- 1 mr users 9401 Sep 17 01:28 
blob_p-903092697dde4fa408ba1f52fae34c5e876a997a-94c6b64d36b76c695392475a3c72cfab
[root@***-deployer-hejiefang02 default]# 
{code}
Node-01 has no file
{code:java}
[root@***-deployer-hejiefang01 default]# for i in {1..10}; do ll -R; sleep 
1;done
.:
total 0
.:
total 0
.:
[root@***-deployer-hejiefang01 default]# 
{code}
And the information in ZooKeeper shows that the dispatcher is on node-02, while the resource_manager and rest_server are on node-01:

 

 
{code:java}
[zk: localhost:2181(CONNECTED) 0] get /flink/default/leader/dispatcher_lock
??wGEakka.tcp://flink@***-deployer-hejiefang02:37241/user/rpc/dispatcher_1srjava.util.UUIDm?/J
[zk: localhost:2181(CONNECTED) 1] get 
/flink/default/leader/resource_manager_lock
??wLJakka.tcp://flink@***-deployer-hejiefang01:30717/user/rpc/resourcemanager_0srjava.util.UUIDm?/J
[zk: localhost:2181(CONNECTED) 2] get /flink/default/leader/rest_server_lock
??w&$http://***-deployer-hejiefang01:8181srjava.util.UUIDm?/J
{code}
[^flink-jobmanager-deployer-hejiefang01.log]

 

> FileNotFoundException when run flink examples
> -
>
> Key: FLINK-19067
> URL: https://issues.apache.org/jira/browse/FLINK-19067
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.11.1
>Reporter: JieFang.He
>Priority: Major
> Attachments: flink-jobmanager-deployer-hejiefang01.log, 
> flink-jobmanager-deployer-hejiefang02.log, 
> flink-taskmanager-deployer-hejiefang01.log, 
> flink-taskmanager-deployer-hejiefang02.log
>
>
> 1、When run examples/batch/WordCount.jar,it will fail with the exception:
> Caused by: java.io.FileNotFoundException: 
> /data2/flink/storageDir/default/blob/job_d29414828f614d5466e239be4d3889ac/blob_p-a2ebe1c5aa160595f214b4bd0f39d80e42ee2e93-f458f1c12dc023e78d25f191de1d7c4b
>  (No such file or directory)
>  at java.io.FileInputStream.open0(Native Method)
>  at java.io.FileInputStream.open(FileInputStream.java:195)
>  at java.io.FileInputStream.(FileInputStream.java:138)
>  at 
> org.apache.flink.core.fs.local.LocalDataInputStream.(LocalDataInputStream.java:50)
>  at 
> org.apache.flink.core.fs.local.LocalFileSystem.open(LocalFileSystem.java:143)
>  at 
> org.apache.flink.runtime.blob.FileSystemBlobStore.get(FileSystemBlobStore.java:105)
>  at 
> org.apache.flink.runtime.blob.FileSystemBlobStore.get(FileSystemBlobStore.java:87)
>  at 
> org.apache.flink.runtime.blob.BlobServer.getFileInternal(BlobServer.java:501)
>  at 
> org.apache.flink.runtime.blob.BlobServerConnection.get(BlobServerConnection.java:231)
>  at 
> org.apache.flink.runtime.blob.BlobServerConnection.run(BlobServerConnection.java:117)
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-19067) FileNotFoundException when run flink examples

2020-09-16 Thread JieFang.He (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

JieFang.He updated FLINK-19067:
---
Attachment: flink-taskmanager-deployer-hejiefang02.log
flink-taskmanager-deployer-hejiefang01.log
flink-jobmanager-deployer-hejiefang02.log
flink-jobmanager-deployer-hejiefang01.log

> FileNotFoundException when run flink examples
> -
>
> Key: FLINK-19067
> URL: https://issues.apache.org/jira/browse/FLINK-19067
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.11.1
>Reporter: JieFang.He
>Priority: Major
> Attachments: flink-jobmanager-deployer-hejiefang01.log, 
> flink-jobmanager-deployer-hejiefang02.log, 
> flink-taskmanager-deployer-hejiefang01.log, 
> flink-taskmanager-deployer-hejiefang02.log
>
>
> 1、When run examples/batch/WordCount.jar,it will fail with the exception:
> Caused by: java.io.FileNotFoundException: 
> /data2/flink/storageDir/default/blob/job_d29414828f614d5466e239be4d3889ac/blob_p-a2ebe1c5aa160595f214b4bd0f39d80e42ee2e93-f458f1c12dc023e78d25f191de1d7c4b
>  (No such file or directory)
>  at java.io.FileInputStream.open0(Native Method)
>  at java.io.FileInputStream.open(FileInputStream.java:195)
>  at java.io.FileInputStream.(FileInputStream.java:138)
>  at 
> org.apache.flink.core.fs.local.LocalDataInputStream.(LocalDataInputStream.java:50)
>  at 
> org.apache.flink.core.fs.local.LocalFileSystem.open(LocalFileSystem.java:143)
>  at 
> org.apache.flink.runtime.blob.FileSystemBlobStore.get(FileSystemBlobStore.java:105)
>  at 
> org.apache.flink.runtime.blob.FileSystemBlobStore.get(FileSystemBlobStore.java:87)
>  at 
> org.apache.flink.runtime.blob.BlobServer.getFileInternal(BlobServer.java:501)
>  at 
> org.apache.flink.runtime.blob.BlobServerConnection.get(BlobServerConnection.java:231)
>  at 
> org.apache.flink.runtime.blob.BlobServerConnection.run(BlobServerConnection.java:117)
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-19067) FileNotFoundException when run flink examples

2020-09-16 Thread JieFang.He (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

JieFang.He updated FLINK-19067:
---
Attachment: (was: flink-jobmanager-deployer-hejiefang02.log)

> FileNotFoundException when run flink examples
> -
>
> Key: FLINK-19067
> URL: https://issues.apache.org/jira/browse/FLINK-19067
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.11.1
>Reporter: JieFang.He
>Priority: Major
> Attachments: flink-jobmanager-deployer-hejiefang01.log, 
> flink-jobmanager-deployer-hejiefang02.log, 
> flink-taskmanager-deployer-hejiefang01.log, 
> flink-taskmanager-deployer-hejiefang02.log
>
>
> 1、When run examples/batch/WordCount.jar,it will fail with the exception:
> Caused by: java.io.FileNotFoundException: 
> /data2/flink/storageDir/default/blob/job_d29414828f614d5466e239be4d3889ac/blob_p-a2ebe1c5aa160595f214b4bd0f39d80e42ee2e93-f458f1c12dc023e78d25f191de1d7c4b
>  (No such file or directory)
>  at java.io.FileInputStream.open0(Native Method)
>  at java.io.FileInputStream.open(FileInputStream.java:195)
>  at java.io.FileInputStream.(FileInputStream.java:138)
>  at 
> org.apache.flink.core.fs.local.LocalDataInputStream.(LocalDataInputStream.java:50)
>  at 
> org.apache.flink.core.fs.local.LocalFileSystem.open(LocalFileSystem.java:143)
>  at 
> org.apache.flink.runtime.blob.FileSystemBlobStore.get(FileSystemBlobStore.java:105)
>  at 
> org.apache.flink.runtime.blob.FileSystemBlobStore.get(FileSystemBlobStore.java:87)
>  at 
> org.apache.flink.runtime.blob.BlobServer.getFileInternal(BlobServer.java:501)
>  at 
> org.apache.flink.runtime.blob.BlobServerConnection.get(BlobServerConnection.java:231)
>  at 
> org.apache.flink.runtime.blob.BlobServerConnection.run(BlobServerConnection.java:117)
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-19067) FileNotFoundException when run flink examples

2020-09-16 Thread JieFang.He (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

JieFang.He updated FLINK-19067:
---
Attachment: (was: flink-jobmanager-deployer-hejiefang01.log)

> FileNotFoundException when run flink examples
> -
>
> Key: FLINK-19067
> URL: https://issues.apache.org/jira/browse/FLINK-19067
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.11.1
>Reporter: JieFang.He
>Priority: Major
> Attachments: flink-jobmanager-deployer-hejiefang01.log, 
> flink-jobmanager-deployer-hejiefang02.log, 
> flink-taskmanager-deployer-hejiefang01.log, 
> flink-taskmanager-deployer-hejiefang02.log
>
>
> 1、When run examples/batch/WordCount.jar,it will fail with the exception:
> Caused by: java.io.FileNotFoundException: 
> /data2/flink/storageDir/default/blob/job_d29414828f614d5466e239be4d3889ac/blob_p-a2ebe1c5aa160595f214b4bd0f39d80e42ee2e93-f458f1c12dc023e78d25f191de1d7c4b
>  (No such file or directory)
>  at java.io.FileInputStream.open0(Native Method)
>  at java.io.FileInputStream.open(FileInputStream.java:195)
>  at java.io.FileInputStream.(FileInputStream.java:138)
>  at 
> org.apache.flink.core.fs.local.LocalDataInputStream.(LocalDataInputStream.java:50)
>  at 
> org.apache.flink.core.fs.local.LocalFileSystem.open(LocalFileSystem.java:143)
>  at 
> org.apache.flink.runtime.blob.FileSystemBlobStore.get(FileSystemBlobStore.java:105)
>  at 
> org.apache.flink.runtime.blob.FileSystemBlobStore.get(FileSystemBlobStore.java:87)
>  at 
> org.apache.flink.runtime.blob.BlobServer.getFileInternal(BlobServer.java:501)
>  at 
> org.apache.flink.runtime.blob.BlobServerConnection.get(BlobServerConnection.java:231)
>  at 
> org.apache.flink.runtime.blob.BlobServerConnection.run(BlobServerConnection.java:117)
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-19067) FileNotFoundException when run flink examples

2020-09-16 Thread JieFang.He (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

JieFang.He updated FLINK-19067:
---
Attachment: (was: flink-taskmanager-deployer-hejiefang01.log)

> FileNotFoundException when run flink examples
> -
>
> Key: FLINK-19067
> URL: https://issues.apache.org/jira/browse/FLINK-19067
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.11.1
>Reporter: JieFang.He
>Priority: Major
> Attachments: flink-jobmanager-deployer-hejiefang01.log, 
> flink-jobmanager-deployer-hejiefang02.log, 
> flink-taskmanager-deployer-hejiefang01.log, 
> flink-taskmanager-deployer-hejiefang02.log
>
>
> 1、When run examples/batch/WordCount.jar,it will fail with the exception:
> Caused by: java.io.FileNotFoundException: 
> /data2/flink/storageDir/default/blob/job_d29414828f614d5466e239be4d3889ac/blob_p-a2ebe1c5aa160595f214b4bd0f39d80e42ee2e93-f458f1c12dc023e78d25f191de1d7c4b
>  (No such file or directory)
>  at java.io.FileInputStream.open0(Native Method)
>  at java.io.FileInputStream.open(FileInputStream.java:195)
>  at java.io.FileInputStream.(FileInputStream.java:138)
>  at 
> org.apache.flink.core.fs.local.LocalDataInputStream.(LocalDataInputStream.java:50)
>  at 
> org.apache.flink.core.fs.local.LocalFileSystem.open(LocalFileSystem.java:143)
>  at 
> org.apache.flink.runtime.blob.FileSystemBlobStore.get(FileSystemBlobStore.java:105)
>  at 
> org.apache.flink.runtime.blob.FileSystemBlobStore.get(FileSystemBlobStore.java:87)
>  at 
> org.apache.flink.runtime.blob.BlobServer.getFileInternal(BlobServer.java:501)
>  at 
> org.apache.flink.runtime.blob.BlobServerConnection.get(BlobServerConnection.java:231)
>  at 
> org.apache.flink.runtime.blob.BlobServerConnection.run(BlobServerConnection.java:117)
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-19067) FileNotFoundException when run flink examples

2020-09-16 Thread JieFang.He (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

JieFang.He updated FLINK-19067:
---
Attachment: (was: flink-taskmanager-deployer-hejiefang02.log)

> FileNotFoundException when run flink examples
> -
>
> Key: FLINK-19067
> URL: https://issues.apache.org/jira/browse/FLINK-19067
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.11.1
>Reporter: JieFang.He
>Priority: Major
> Attachments: flink-jobmanager-deployer-hejiefang01.log, 
> flink-jobmanager-deployer-hejiefang02.log, 
> flink-taskmanager-deployer-hejiefang01.log, 
> flink-taskmanager-deployer-hejiefang02.log
>
>
> 1、When run examples/batch/WordCount.jar,it will fail with the exception:
> Caused by: java.io.FileNotFoundException: 
> /data2/flink/storageDir/default/blob/job_d29414828f614d5466e239be4d3889ac/blob_p-a2ebe1c5aa160595f214b4bd0f39d80e42ee2e93-f458f1c12dc023e78d25f191de1d7c4b
>  (No such file or directory)
>  at java.io.FileInputStream.open0(Native Method)
>  at java.io.FileInputStream.open(FileInputStream.java:195)
>  at java.io.FileInputStream.(FileInputStream.java:138)
>  at 
> org.apache.flink.core.fs.local.LocalDataInputStream.(LocalDataInputStream.java:50)
>  at 
> org.apache.flink.core.fs.local.LocalFileSystem.open(LocalFileSystem.java:143)
>  at 
> org.apache.flink.runtime.blob.FileSystemBlobStore.get(FileSystemBlobStore.java:105)
>  at 
> org.apache.flink.runtime.blob.FileSystemBlobStore.get(FileSystemBlobStore.java:87)
>  at 
> org.apache.flink.runtime.blob.BlobServer.getFileInternal(BlobServer.java:501)
>  at 
> org.apache.flink.runtime.blob.BlobServerConnection.get(BlobServerConnection.java:231)
>  at 
> org.apache.flink.runtime.blob.BlobServerConnection.run(BlobServerConnection.java:117)
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-19067) FileNotFoundException when run flink examples

2020-09-16 Thread JieFang.He (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

JieFang.He updated FLINK-19067:
---
Attachment: flink-taskmanager-deployer-hejiefang02.log
flink-taskmanager-deployer-hejiefang01.log
flink-jobmanager-deployer-hejiefang02.log
flink-jobmanager-deployer-hejiefang01.log

> FileNotFoundException when run flink examples
> -
>
> Key: FLINK-19067
> URL: https://issues.apache.org/jira/browse/FLINK-19067
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.11.1
>Reporter: JieFang.He
>Priority: Major
> Attachments: flink-jobmanager-deployer-hejiefang01.log, 
> flink-jobmanager-deployer-hejiefang02.log, 
> flink-taskmanager-deployer-hejiefang01.log, 
> flink-taskmanager-deployer-hejiefang02.log
>
>
> 1、When run examples/batch/WordCount.jar,it will fail with the exception:
> Caused by: java.io.FileNotFoundException: 
> /data2/flink/storageDir/default/blob/job_d29414828f614d5466e239be4d3889ac/blob_p-a2ebe1c5aa160595f214b4bd0f39d80e42ee2e93-f458f1c12dc023e78d25f191de1d7c4b
>  (No such file or directory)
>  at java.io.FileInputStream.open0(Native Method)
>  at java.io.FileInputStream.open(FileInputStream.java:195)
>  at java.io.FileInputStream.(FileInputStream.java:138)
>  at 
> org.apache.flink.core.fs.local.LocalDataInputStream.(LocalDataInputStream.java:50)
>  at 
> org.apache.flink.core.fs.local.LocalFileSystem.open(LocalFileSystem.java:143)
>  at 
> org.apache.flink.runtime.blob.FileSystemBlobStore.get(FileSystemBlobStore.java:105)
>  at 
> org.apache.flink.runtime.blob.FileSystemBlobStore.get(FileSystemBlobStore.java:87)
>  at 
> org.apache.flink.runtime.blob.BlobServer.getFileInternal(BlobServer.java:501)
>  at 
> org.apache.flink.runtime.blob.BlobServerConnection.get(BlobServerConnection.java:231)
>  at 
> org.apache.flink.runtime.blob.BlobServerConnection.run(BlobServerConnection.java:117)
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-19067) FileNotFoundException when run flink examples

2020-09-16 Thread JieFang.He (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

JieFang.He updated FLINK-19067:
---
Attachment: (was: flink-jobmanager-deployer-hejiefang01.log)

> FileNotFoundException when run flink examples
> -
>
> Key: FLINK-19067
> URL: https://issues.apache.org/jira/browse/FLINK-19067
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.11.1
>Reporter: JieFang.He
>Priority: Major
>
> 1、When run examples/batch/WordCount.jar,it will fail with the exception:
> Caused by: java.io.FileNotFoundException: 
> /data2/flink/storageDir/default/blob/job_d29414828f614d5466e239be4d3889ac/blob_p-a2ebe1c5aa160595f214b4bd0f39d80e42ee2e93-f458f1c12dc023e78d25f191de1d7c4b
>  (No such file or directory)
>  at java.io.FileInputStream.open0(Native Method)
>  at java.io.FileInputStream.open(FileInputStream.java:195)
>  at java.io.FileInputStream.(FileInputStream.java:138)
>  at 
> org.apache.flink.core.fs.local.LocalDataInputStream.(LocalDataInputStream.java:50)
>  at 
> org.apache.flink.core.fs.local.LocalFileSystem.open(LocalFileSystem.java:143)
>  at 
> org.apache.flink.runtime.blob.FileSystemBlobStore.get(FileSystemBlobStore.java:105)
>  at 
> org.apache.flink.runtime.blob.FileSystemBlobStore.get(FileSystemBlobStore.java:87)
>  at 
> org.apache.flink.runtime.blob.BlobServer.getFileInternal(BlobServer.java:501)
>  at 
> org.apache.flink.runtime.blob.BlobServerConnection.get(BlobServerConnection.java:231)
>  at 
> org.apache.flink.runtime.blob.BlobServerConnection.run(BlobServerConnection.java:117)
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-19067) FileNotFoundException when run flink examples

2020-09-07 Thread JieFang.He (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17191686#comment-17191686
 ] 

JieFang.He commented on FLINK-19067:


I deleted some logs that contain custom information.

It is a custom build, but it only adds some peripheral content and changes the version suffix. The processing flow is the same.

I will try to reproduce it with the original version and report again.


Thank you for your reply

> FileNotFoundException when run flink examples
> -
>
> Key: FLINK-19067
> URL: https://issues.apache.org/jira/browse/FLINK-19067
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.11.1
>Reporter: JieFang.He
>Priority: Major
> Attachments: flink-jobmanager-deployer-hejiefang01.log
>
>
> 1、When run examples/batch/WordCount.jar,it will fail with the exception:
> Caused by: java.io.FileNotFoundException: 
> /data2/flink/storageDir/default/blob/job_d29414828f614d5466e239be4d3889ac/blob_p-a2ebe1c5aa160595f214b4bd0f39d80e42ee2e93-f458f1c12dc023e78d25f191de1d7c4b
>  (No such file or directory)
>  at java.io.FileInputStream.open0(Native Method)
>  at java.io.FileInputStream.open(FileInputStream.java:195)
>  at java.io.FileInputStream.(FileInputStream.java:138)
>  at 
> org.apache.flink.core.fs.local.LocalDataInputStream.(LocalDataInputStream.java:50)
>  at 
> org.apache.flink.core.fs.local.LocalFileSystem.open(LocalFileSystem.java:143)
>  at 
> org.apache.flink.runtime.blob.FileSystemBlobStore.get(FileSystemBlobStore.java:105)
>  at 
> org.apache.flink.runtime.blob.FileSystemBlobStore.get(FileSystemBlobStore.java:87)
>  at 
> org.apache.flink.runtime.blob.BlobServer.getFileInternal(BlobServer.java:501)
>  at 
> org.apache.flink.runtime.blob.BlobServerConnection.get(BlobServerConnection.java:231)
>  at 
> org.apache.flink.runtime.blob.BlobServerConnection.run(BlobServerConnection.java:117)
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-19067) FileNotFoundException when run flink examples

2020-09-07 Thread JieFang.He (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

JieFang.He updated FLINK-19067:
---
Attachment: (was: flink-mr-jobmanager-deployer-hejiefang01.log)

> FileNotFoundException when run flink examples
> -
>
> Key: FLINK-19067
> URL: https://issues.apache.org/jira/browse/FLINK-19067
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.11.1
>Reporter: JieFang.He
>Priority: Major
> Attachments: flink-jobmanager-deployer-hejiefang01.log
>
>
> 1、When run examples/batch/WordCount.jar,it will fail with the exception:
> Caused by: java.io.FileNotFoundException: 
> /data2/flink/storageDir/default/blob/job_d29414828f614d5466e239be4d3889ac/blob_p-a2ebe1c5aa160595f214b4bd0f39d80e42ee2e93-f458f1c12dc023e78d25f191de1d7c4b
>  (No such file or directory)
>  at java.io.FileInputStream.open0(Native Method)
>  at java.io.FileInputStream.open(FileInputStream.java:195)
>  at java.io.FileInputStream.(FileInputStream.java:138)
>  at 
> org.apache.flink.core.fs.local.LocalDataInputStream.(LocalDataInputStream.java:50)
>  at 
> org.apache.flink.core.fs.local.LocalFileSystem.open(LocalFileSystem.java:143)
>  at 
> org.apache.flink.runtime.blob.FileSystemBlobStore.get(FileSystemBlobStore.java:105)
>  at 
> org.apache.flink.runtime.blob.FileSystemBlobStore.get(FileSystemBlobStore.java:87)
>  at 
> org.apache.flink.runtime.blob.BlobServer.getFileInternal(BlobServer.java:501)
>  at 
> org.apache.flink.runtime.blob.BlobServerConnection.get(BlobServerConnection.java:231)
>  at 
> org.apache.flink.runtime.blob.BlobServerConnection.run(BlobServerConnection.java:117)
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-19067) FileNotFoundException when run flink examples

2020-09-07 Thread JieFang.He (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

JieFang.He updated FLINK-19067:
---
Attachment: (was: flink-mr-jobmanager-deployer-hejiefang01.log)

> FileNotFoundException when run flink examples
> -
>
> Key: FLINK-19067
> URL: https://issues.apache.org/jira/browse/FLINK-19067
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.11.1
>Reporter: JieFang.He
>Priority: Major
> Attachments: flink-jobmanager-deployer-hejiefang01.log
>
>
> 1、When run examples/batch/WordCount.jar,it will fail with the exception:
> Caused by: java.io.FileNotFoundException: 
> /data2/flink/storageDir/default/blob/job_d29414828f614d5466e239be4d3889ac/blob_p-a2ebe1c5aa160595f214b4bd0f39d80e42ee2e93-f458f1c12dc023e78d25f191de1d7c4b
>  (No such file or directory)
>  at java.io.FileInputStream.open0(Native Method)
>  at java.io.FileInputStream.open(FileInputStream.java:195)
>  at java.io.FileInputStream.(FileInputStream.java:138)
>  at 
> org.apache.flink.core.fs.local.LocalDataInputStream.(LocalDataInputStream.java:50)
>  at 
> org.apache.flink.core.fs.local.LocalFileSystem.open(LocalFileSystem.java:143)
>  at 
> org.apache.flink.runtime.blob.FileSystemBlobStore.get(FileSystemBlobStore.java:105)
>  at 
> org.apache.flink.runtime.blob.FileSystemBlobStore.get(FileSystemBlobStore.java:87)
>  at 
> org.apache.flink.runtime.blob.BlobServer.getFileInternal(BlobServer.java:501)
>  at 
> org.apache.flink.runtime.blob.BlobServerConnection.get(BlobServerConnection.java:231)
>  at 
> org.apache.flink.runtime.blob.BlobServerConnection.run(BlobServerConnection.java:117)
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-19067) FileNotFoundException when run flink examples

2020-09-07 Thread JieFang.He (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

JieFang.He updated FLINK-19067:
---
Attachment: (was: flink-jobmanager-deployer-hejiefang01.log)

> FileNotFoundException when run flink examples
> -
>
> Key: FLINK-19067
> URL: https://issues.apache.org/jira/browse/FLINK-19067
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.11.1
>Reporter: JieFang.He
>Priority: Major
> Attachments: flink-jobmanager-deployer-hejiefang01.log
>
>
> 1、When run examples/batch/WordCount.jar,it will fail with the exception:
> Caused by: java.io.FileNotFoundException: 
> /data2/flink/storageDir/default/blob/job_d29414828f614d5466e239be4d3889ac/blob_p-a2ebe1c5aa160595f214b4bd0f39d80e42ee2e93-f458f1c12dc023e78d25f191de1d7c4b
>  (No such file or directory)
>  at java.io.FileInputStream.open0(Native Method)
>  at java.io.FileInputStream.open(FileInputStream.java:195)
>  at java.io.FileInputStream.(FileInputStream.java:138)
>  at 
> org.apache.flink.core.fs.local.LocalDataInputStream.(LocalDataInputStream.java:50)
>  at 
> org.apache.flink.core.fs.local.LocalFileSystem.open(LocalFileSystem.java:143)
>  at 
> org.apache.flink.runtime.blob.FileSystemBlobStore.get(FileSystemBlobStore.java:105)
>  at 
> org.apache.flink.runtime.blob.FileSystemBlobStore.get(FileSystemBlobStore.java:87)
>  at 
> org.apache.flink.runtime.blob.BlobServer.getFileInternal(BlobServer.java:501)
>  at 
> org.apache.flink.runtime.blob.BlobServerConnection.get(BlobServerConnection.java:231)
>  at 
> org.apache.flink.runtime.blob.BlobServerConnection.run(BlobServerConnection.java:117)
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-19067) FileNotFoundException when run flink examples

2020-09-07 Thread JieFang.He (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

JieFang.He updated FLINK-19067:
---
Attachment: (was: flink-mr-jobmanager-deployer-hejiefang01.log)

> FileNotFoundException when run flink examples
> -
>
> Key: FLINK-19067
> URL: https://issues.apache.org/jira/browse/FLINK-19067
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.11.1
>Reporter: JieFang.He
>Priority: Major
> Attachments: flink-jobmanager-deployer-hejiefang01.log, 
> flink-jobmanager-deployer-hejiefang01.log, 
> flink-mr-jobmanager-deployer-hejiefang01.log, 
> flink-mr-jobmanager-deployer-hejiefang01.log
>
>
> 1、When run examples/batch/WordCount.jar,it will fail with the exception:
> Caused by: java.io.FileNotFoundException: 
> /data2/flink/storageDir/default/blob/job_d29414828f614d5466e239be4d3889ac/blob_p-a2ebe1c5aa160595f214b4bd0f39d80e42ee2e93-f458f1c12dc023e78d25f191de1d7c4b
>  (No such file or directory)
>  at java.io.FileInputStream.open0(Native Method)
>  at java.io.FileInputStream.open(FileInputStream.java:195)
>  at java.io.FileInputStream.(FileInputStream.java:138)
>  at 
> org.apache.flink.core.fs.local.LocalDataInputStream.(LocalDataInputStream.java:50)
>  at 
> org.apache.flink.core.fs.local.LocalFileSystem.open(LocalFileSystem.java:143)
>  at 
> org.apache.flink.runtime.blob.FileSystemBlobStore.get(FileSystemBlobStore.java:105)
>  at 
> org.apache.flink.runtime.blob.FileSystemBlobStore.get(FileSystemBlobStore.java:87)
>  at 
> org.apache.flink.runtime.blob.BlobServer.getFileInternal(BlobServer.java:501)
>  at 
> org.apache.flink.runtime.blob.BlobServerConnection.get(BlobServerConnection.java:231)
>  at 
> org.apache.flink.runtime.blob.BlobServerConnection.run(BlobServerConnection.java:117)
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-19067) FileNotFoundException when run flink examples

2020-09-07 Thread JieFang.He (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17191609#comment-17191609
 ] 

JieFang.He commented on FLINK-19067:


The log in JobManager is the same as CLI

[^flink-jobmanager-deployer-hejiefang01.log]

> FileNotFoundException when run flink examples
> -
>
> Key: FLINK-19067
> URL: https://issues.apache.org/jira/browse/FLINK-19067
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.11.1
>Reporter: JieFang.He
>Priority: Major
> Attachments: flink-jobmanager-deployer-hejiefang01.log, 
> flink-jobmanager-deployer-hejiefang01.log, 
> flink-mr-jobmanager-deployer-hejiefang01.log, 
> flink-mr-jobmanager-deployer-hejiefang01.log, 
> flink-mr-jobmanager-deployer-hejiefang01.log
>
>
> 1、When run examples/batch/WordCount.jar,it will fail with the exception:
> Caused by: java.io.FileNotFoundException: 
> /data2/flink/storageDir/default/blob/job_d29414828f614d5466e239be4d3889ac/blob_p-a2ebe1c5aa160595f214b4bd0f39d80e42ee2e93-f458f1c12dc023e78d25f191de1d7c4b
>  (No such file or directory)
>  at java.io.FileInputStream.open0(Native Method)
>  at java.io.FileInputStream.open(FileInputStream.java:195)
>  at java.io.FileInputStream.(FileInputStream.java:138)
>  at 
> org.apache.flink.core.fs.local.LocalDataInputStream.(LocalDataInputStream.java:50)
>  at 
> org.apache.flink.core.fs.local.LocalFileSystem.open(LocalFileSystem.java:143)
>  at 
> org.apache.flink.runtime.blob.FileSystemBlobStore.get(FileSystemBlobStore.java:105)
>  at 
> org.apache.flink.runtime.blob.FileSystemBlobStore.get(FileSystemBlobStore.java:87)
>  at 
> org.apache.flink.runtime.blob.BlobServer.getFileInternal(BlobServer.java:501)
>  at 
> org.apache.flink.runtime.blob.BlobServerConnection.get(BlobServerConnection.java:231)
>  at 
> org.apache.flink.runtime.blob.BlobServerConnection.run(BlobServerConnection.java:117)
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-19067) FileNotFoundException when run flink examples

2020-09-07 Thread JieFang.He (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

JieFang.He updated FLINK-19067:
---
Attachment: flink-jobmanager-deployer-hejiefang01.log

> FileNotFoundException when run flink examples
> -
>
> Key: FLINK-19067
> URL: https://issues.apache.org/jira/browse/FLINK-19067
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.11.1
>Reporter: JieFang.He
>Priority: Major
> Attachments: flink-jobmanager-deployer-hejiefang01.log, 
> flink-jobmanager-deployer-hejiefang01.log, 
> flink-mr-jobmanager-deployer-hejiefang01.log, 
> flink-mr-jobmanager-deployer-hejiefang01.log
>
>
> 1、When run examples/batch/WordCount.jar,it will fail with the exception:
> Caused by: java.io.FileNotFoundException: 
> /data2/flink/storageDir/default/blob/job_d29414828f614d5466e239be4d3889ac/blob_p-a2ebe1c5aa160595f214b4bd0f39d80e42ee2e93-f458f1c12dc023e78d25f191de1d7c4b
>  (No such file or directory)
>  at java.io.FileInputStream.open0(Native Method)
>  at java.io.FileInputStream.open(FileInputStream.java:195)
>  at java.io.FileInputStream.(FileInputStream.java:138)
>  at 
> org.apache.flink.core.fs.local.LocalDataInputStream.(LocalDataInputStream.java:50)
>  at 
> org.apache.flink.core.fs.local.LocalFileSystem.open(LocalFileSystem.java:143)
>  at 
> org.apache.flink.runtime.blob.FileSystemBlobStore.get(FileSystemBlobStore.java:105)
>  at 
> org.apache.flink.runtime.blob.FileSystemBlobStore.get(FileSystemBlobStore.java:87)
>  at 
> org.apache.flink.runtime.blob.BlobServer.getFileInternal(BlobServer.java:501)
>  at 
> org.apache.flink.runtime.blob.BlobServerConnection.get(BlobServerConnection.java:231)
>  at 
> org.apache.flink.runtime.blob.BlobServerConnection.run(BlobServerConnection.java:117)
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Issue Comment Deleted] (FLINK-19067) FileNotFoundException when run flink examples

2020-09-07 Thread JieFang.He (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

JieFang.He updated FLINK-19067:
---
Comment: was deleted

(was: The log in JobManager is the same as CLI

[^flink-jobmanager-deployer-hejiefang01.log])

> FileNotFoundException when run flink examples
> -
>
> Key: FLINK-19067
> URL: https://issues.apache.org/jira/browse/FLINK-19067
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.11.1
>Reporter: JieFang.He
>Priority: Major
> Attachments: flink-jobmanager-deployer-hejiefang01.log, 
> flink-mr-jobmanager-deployer-hejiefang01.log, 
> flink-mr-jobmanager-deployer-hejiefang01.log, 
> flink-mr-jobmanager-deployer-hejiefang01.log
>
>
> 1、When run examples/batch/WordCount.jar,it will fail with the exception:
> Caused by: java.io.FileNotFoundException: 
> /data2/flink/storageDir/default/blob/job_d29414828f614d5466e239be4d3889ac/blob_p-a2ebe1c5aa160595f214b4bd0f39d80e42ee2e93-f458f1c12dc023e78d25f191de1d7c4b
>  (No such file or directory)
>  at java.io.FileInputStream.open0(Native Method)
>  at java.io.FileInputStream.open(FileInputStream.java:195)
>  at java.io.FileInputStream.(FileInputStream.java:138)
>  at 
> org.apache.flink.core.fs.local.LocalDataInputStream.(LocalDataInputStream.java:50)
>  at 
> org.apache.flink.core.fs.local.LocalFileSystem.open(LocalFileSystem.java:143)
>  at 
> org.apache.flink.runtime.blob.FileSystemBlobStore.get(FileSystemBlobStore.java:105)
>  at 
> org.apache.flink.runtime.blob.FileSystemBlobStore.get(FileSystemBlobStore.java:87)
>  at 
> org.apache.flink.runtime.blob.BlobServer.getFileInternal(BlobServer.java:501)
>  at 
> org.apache.flink.runtime.blob.BlobServerConnection.get(BlobServerConnection.java:231)
>  at 
> org.apache.flink.runtime.blob.BlobServerConnection.run(BlobServerConnection.java:117)
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (FLINK-19067) FileNotFoundException when run flink examples

2020-09-07 Thread JieFang.He (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17191607#comment-17191607
 ] 

JieFang.He edited comment on FLINK-19067 at 9/7/20, 9:45 AM:
-

The log in JobManager is the same as CLI

[^flink-jobmanager-deployer-hejiefang01.log]


was (Author: hejiefang):
The log in JobManager is the same as CLI

[^flink-jobmanager-deployer-hejiefang01.log]

> FileNotFoundException when run flink examples
> -
>
> Key: FLINK-19067
> URL: https://issues.apache.org/jira/browse/FLINK-19067
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.11.1
>Reporter: JieFang.He
>Priority: Major
> Attachments: flink-jobmanager-deployer-hejiefang01.log, 
> flink-mr-jobmanager-deployer-hejiefang01.log, 
> flink-mr-jobmanager-deployer-hejiefang01.log, 
> flink-mr-jobmanager-deployer-hejiefang01.log
>
>
> 1、When run examples/batch/WordCount.jar,it will fail with the exception:
> Caused by: java.io.FileNotFoundException: 
> /data2/flink/storageDir/default/blob/job_d29414828f614d5466e239be4d3889ac/blob_p-a2ebe1c5aa160595f214b4bd0f39d80e42ee2e93-f458f1c12dc023e78d25f191de1d7c4b
>  (No such file or directory)
>  at java.io.FileInputStream.open0(Native Method)
>  at java.io.FileInputStream.open(FileInputStream.java:195)
>  at java.io.FileInputStream.(FileInputStream.java:138)
>  at 
> org.apache.flink.core.fs.local.LocalDataInputStream.(LocalDataInputStream.java:50)
>  at 
> org.apache.flink.core.fs.local.LocalFileSystem.open(LocalFileSystem.java:143)
>  at 
> org.apache.flink.runtime.blob.FileSystemBlobStore.get(FileSystemBlobStore.java:105)
>  at 
> org.apache.flink.runtime.blob.FileSystemBlobStore.get(FileSystemBlobStore.java:87)
>  at 
> org.apache.flink.runtime.blob.BlobServer.getFileInternal(BlobServer.java:501)
>  at 
> org.apache.flink.runtime.blob.BlobServerConnection.get(BlobServerConnection.java:231)
>  at 
> org.apache.flink.runtime.blob.BlobServerConnection.run(BlobServerConnection.java:117)
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Issue Comment Deleted] (FLINK-19067) FileNotFoundException when run flink examples

2020-09-07 Thread JieFang.He (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

JieFang.He updated FLINK-19067:
---
Comment: was deleted

(was: The log in JobManager is the same as CLI

[^flink-mr-jobmanager-deployer-hejiefang01.log])

> FileNotFoundException when run flink examples
> -
>
> Key: FLINK-19067
> URL: https://issues.apache.org/jira/browse/FLINK-19067
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.11.1
>Reporter: JieFang.He
>Priority: Major
> Attachments: flink-jobmanager-deployer-hejiefang01.log, 
> flink-mr-jobmanager-deployer-hejiefang01.log, 
> flink-mr-jobmanager-deployer-hejiefang01.log, 
> flink-mr-jobmanager-deployer-hejiefang01.log
>
>
> 1、When run examples/batch/WordCount.jar,it will fail with the exception:
> Caused by: java.io.FileNotFoundException: 
> /data2/flink/storageDir/default/blob/job_d29414828f614d5466e239be4d3889ac/blob_p-a2ebe1c5aa160595f214b4bd0f39d80e42ee2e93-f458f1c12dc023e78d25f191de1d7c4b
>  (No such file or directory)
>  at java.io.FileInputStream.open0(Native Method)
>  at java.io.FileInputStream.open(FileInputStream.java:195)
>  at java.io.FileInputStream.(FileInputStream.java:138)
>  at 
> org.apache.flink.core.fs.local.LocalDataInputStream.(LocalDataInputStream.java:50)
>  at 
> org.apache.flink.core.fs.local.LocalFileSystem.open(LocalFileSystem.java:143)
>  at 
> org.apache.flink.runtime.blob.FileSystemBlobStore.get(FileSystemBlobStore.java:105)
>  at 
> org.apache.flink.runtime.blob.FileSystemBlobStore.get(FileSystemBlobStore.java:87)
>  at 
> org.apache.flink.runtime.blob.BlobServer.getFileInternal(BlobServer.java:501)
>  at 
> org.apache.flink.runtime.blob.BlobServerConnection.get(BlobServerConnection.java:231)
>  at 
> org.apache.flink.runtime.blob.BlobServerConnection.run(BlobServerConnection.java:117)
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-19067) FileNotFoundException when run flink examples

2020-09-07 Thread JieFang.He (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

JieFang.He updated FLINK-19067:
---
Attachment: flink-jobmanager-deployer-hejiefang01.log

> FileNotFoundException when run flink examples
> -
>
> Key: FLINK-19067
> URL: https://issues.apache.org/jira/browse/FLINK-19067
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.11.1
>Reporter: JieFang.He
>Priority: Major
> Attachments: flink-jobmanager-deployer-hejiefang01.log, 
> flink-mr-jobmanager-deployer-hejiefang01.log, 
> flink-mr-jobmanager-deployer-hejiefang01.log, 
> flink-mr-jobmanager-deployer-hejiefang01.log
>
>
> 1、When run examples/batch/WordCount.jar,it will fail with the exception:
> Caused by: java.io.FileNotFoundException: 
> /data2/flink/storageDir/default/blob/job_d29414828f614d5466e239be4d3889ac/blob_p-a2ebe1c5aa160595f214b4bd0f39d80e42ee2e93-f458f1c12dc023e78d25f191de1d7c4b
>  (No such file or directory)
>  at java.io.FileInputStream.open0(Native Method)
>  at java.io.FileInputStream.open(FileInputStream.java:195)
>  at java.io.FileInputStream.(FileInputStream.java:138)
>  at 
> org.apache.flink.core.fs.local.LocalDataInputStream.(LocalDataInputStream.java:50)
>  at 
> org.apache.flink.core.fs.local.LocalFileSystem.open(LocalFileSystem.java:143)
>  at 
> org.apache.flink.runtime.blob.FileSystemBlobStore.get(FileSystemBlobStore.java:105)
>  at 
> org.apache.flink.runtime.blob.FileSystemBlobStore.get(FileSystemBlobStore.java:87)
>  at 
> org.apache.flink.runtime.blob.BlobServer.getFileInternal(BlobServer.java:501)
>  at 
> org.apache.flink.runtime.blob.BlobServerConnection.get(BlobServerConnection.java:231)
>  at 
> org.apache.flink.runtime.blob.BlobServerConnection.run(BlobServerConnection.java:117)
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-19067) FileNotFoundException when run flink examples

2020-09-07 Thread JieFang.He (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17191607#comment-17191607
 ] 

JieFang.He commented on FLINK-19067:


The log in JobManager is the same as CLI

[^flink-jobmanager-deployer-hejiefang01.log]

> FileNotFoundException when run flink examples
> -
>
> Key: FLINK-19067
> URL: https://issues.apache.org/jira/browse/FLINK-19067
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.11.1
>Reporter: JieFang.He
>Priority: Major
> Attachments: flink-mr-jobmanager-deployer-hejiefang01.log, 
> flink-mr-jobmanager-deployer-hejiefang01.log, 
> flink-mr-jobmanager-deployer-hejiefang01.log
>
>
> 1、When run examples/batch/WordCount.jar,it will fail with the exception:
> Caused by: java.io.FileNotFoundException: 
> /data2/flink/storageDir/default/blob/job_d29414828f614d5466e239be4d3889ac/blob_p-a2ebe1c5aa160595f214b4bd0f39d80e42ee2e93-f458f1c12dc023e78d25f191de1d7c4b
>  (No such file or directory)
>  at java.io.FileInputStream.open0(Native Method)
>  at java.io.FileInputStream.open(FileInputStream.java:195)
>  at java.io.FileInputStream.(FileInputStream.java:138)
>  at 
> org.apache.flink.core.fs.local.LocalDataInputStream.(LocalDataInputStream.java:50)
>  at 
> org.apache.flink.core.fs.local.LocalFileSystem.open(LocalFileSystem.java:143)
>  at 
> org.apache.flink.runtime.blob.FileSystemBlobStore.get(FileSystemBlobStore.java:105)
>  at 
> org.apache.flink.runtime.blob.FileSystemBlobStore.get(FileSystemBlobStore.java:87)
>  at 
> org.apache.flink.runtime.blob.BlobServer.getFileInternal(BlobServer.java:501)
>  at 
> org.apache.flink.runtime.blob.BlobServerConnection.get(BlobServerConnection.java:231)
>  at 
> org.apache.flink.runtime.blob.BlobServerConnection.run(BlobServerConnection.java:117)
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-19067) FileNotFoundException when run flink examples

2020-09-07 Thread JieFang.He (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17191605#comment-17191605
 ] 

JieFang.He commented on FLINK-19067:


The log in JobManager is the same as CLI

[^flink-mr-jobmanager-deployer-hejiefang01.log]

> FileNotFoundException when run flink examples
> -
>
> Key: FLINK-19067
> URL: https://issues.apache.org/jira/browse/FLINK-19067
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.11.1
>Reporter: JieFang.He
>Priority: Major
> Attachments: flink-mr-jobmanager-deployer-hejiefang01.log, 
> flink-mr-jobmanager-deployer-hejiefang01.log
>
>
> 1、When run examples/batch/WordCount.jar,it will fail with the exception:
> Caused by: java.io.FileNotFoundException: 
> /data2/flink/storageDir/default/blob/job_d29414828f614d5466e239be4d3889ac/blob_p-a2ebe1c5aa160595f214b4bd0f39d80e42ee2e93-f458f1c12dc023e78d25f191de1d7c4b
>  (No such file or directory)
>  at java.io.FileInputStream.open0(Native Method)
>  at java.io.FileInputStream.open(FileInputStream.java:195)
>  at java.io.FileInputStream.(FileInputStream.java:138)
>  at 
> org.apache.flink.core.fs.local.LocalDataInputStream.(LocalDataInputStream.java:50)
>  at 
> org.apache.flink.core.fs.local.LocalFileSystem.open(LocalFileSystem.java:143)
>  at 
> org.apache.flink.runtime.blob.FileSystemBlobStore.get(FileSystemBlobStore.java:105)
>  at 
> org.apache.flink.runtime.blob.FileSystemBlobStore.get(FileSystemBlobStore.java:87)
>  at 
> org.apache.flink.runtime.blob.BlobServer.getFileInternal(BlobServer.java:501)
>  at 
> org.apache.flink.runtime.blob.BlobServerConnection.get(BlobServerConnection.java:231)
>  at 
> org.apache.flink.runtime.blob.BlobServerConnection.run(BlobServerConnection.java:117)
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Issue Comment Deleted] (FLINK-19067) FileNotFoundException when run flink examples

2020-09-07 Thread JieFang.He (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

JieFang.He updated FLINK-19067:
---
Comment: was deleted

(was: The log in JobManager is the same as CLI

[^flink-mr-jobmanager-deployer-hejiefang01.log])

> FileNotFoundException when run flink examples
> -
>
> Key: FLINK-19067
> URL: https://issues.apache.org/jira/browse/FLINK-19067
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.11.1
>Reporter: JieFang.He
>Priority: Major
> Attachments: flink-mr-jobmanager-deployer-hejiefang01.log, 
> flink-mr-jobmanager-deployer-hejiefang01.log, 
> flink-mr-jobmanager-deployer-hejiefang01.log
>
>
> 1、When run examples/batch/WordCount.jar,it will fail with the exception:
> Caused by: java.io.FileNotFoundException: 
> /data2/flink/storageDir/default/blob/job_d29414828f614d5466e239be4d3889ac/blob_p-a2ebe1c5aa160595f214b4bd0f39d80e42ee2e93-f458f1c12dc023e78d25f191de1d7c4b
>  (No such file or directory)
>  at java.io.FileInputStream.open0(Native Method)
>  at java.io.FileInputStream.open(FileInputStream.java:195)
>  at java.io.FileInputStream.(FileInputStream.java:138)
>  at 
> org.apache.flink.core.fs.local.LocalDataInputStream.(LocalDataInputStream.java:50)
>  at 
> org.apache.flink.core.fs.local.LocalFileSystem.open(LocalFileSystem.java:143)
>  at 
> org.apache.flink.runtime.blob.FileSystemBlobStore.get(FileSystemBlobStore.java:105)
>  at 
> org.apache.flink.runtime.blob.FileSystemBlobStore.get(FileSystemBlobStore.java:87)
>  at 
> org.apache.flink.runtime.blob.BlobServer.getFileInternal(BlobServer.java:501)
>  at 
> org.apache.flink.runtime.blob.BlobServerConnection.get(BlobServerConnection.java:231)
>  at 
> org.apache.flink.runtime.blob.BlobServerConnection.run(BlobServerConnection.java:117)
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-19067) FileNotFoundException when run flink examples

2020-09-07 Thread JieFang.He (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

JieFang.He updated FLINK-19067:
---
Attachment: flink-mr-jobmanager-deployer-hejiefang01.log

> FileNotFoundException when run flink examples
> -
>
> Key: FLINK-19067
> URL: https://issues.apache.org/jira/browse/FLINK-19067
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.11.1
>Reporter: JieFang.He
>Priority: Major
> Attachments: flink-mr-jobmanager-deployer-hejiefang01.log, 
> flink-mr-jobmanager-deployer-hejiefang01.log, 
> flink-mr-jobmanager-deployer-hejiefang01.log
>
>
> 1、When run examples/batch/WordCount.jar,it will fail with the exception:
> Caused by: java.io.FileNotFoundException: 
> /data2/flink/storageDir/default/blob/job_d29414828f614d5466e239be4d3889ac/blob_p-a2ebe1c5aa160595f214b4bd0f39d80e42ee2e93-f458f1c12dc023e78d25f191de1d7c4b
>  (No such file or directory)
>  at java.io.FileInputStream.open0(Native Method)
>  at java.io.FileInputStream.open(FileInputStream.java:195)
>  at java.io.FileInputStream.(FileInputStream.java:138)
>  at 
> org.apache.flink.core.fs.local.LocalDataInputStream.(LocalDataInputStream.java:50)
>  at 
> org.apache.flink.core.fs.local.LocalFileSystem.open(LocalFileSystem.java:143)
>  at 
> org.apache.flink.runtime.blob.FileSystemBlobStore.get(FileSystemBlobStore.java:105)
>  at 
> org.apache.flink.runtime.blob.FileSystemBlobStore.get(FileSystemBlobStore.java:87)
>  at 
> org.apache.flink.runtime.blob.BlobServer.getFileInternal(BlobServer.java:501)
>  at 
> org.apache.flink.runtime.blob.BlobServerConnection.get(BlobServerConnection.java:231)
>  at 
> org.apache.flink.runtime.blob.BlobServerConnection.run(BlobServerConnection.java:117)
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-19067) FileNotFoundException when run flink examples

2020-09-07 Thread JieFang.He (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17191603#comment-17191603
 ] 

JieFang.He commented on FLINK-19067:


The log in JobManager is the same as CLI

[^flink-mr-jobmanager-deployer-hejiefang01.log]

> FileNotFoundException when run flink examples
> -
>
> Key: FLINK-19067
> URL: https://issues.apache.org/jira/browse/FLINK-19067
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.11.1
>Reporter: JieFang.He
>Priority: Major
> Attachments: flink-mr-jobmanager-deployer-hejiefang01.log, 
> flink-mr-jobmanager-deployer-hejiefang01.log
>
>
> 1、When run examples/batch/WordCount.jar,it will fail with the exception:
> Caused by: java.io.FileNotFoundException: 
> /data2/flink/storageDir/default/blob/job_d29414828f614d5466e239be4d3889ac/blob_p-a2ebe1c5aa160595f214b4bd0f39d80e42ee2e93-f458f1c12dc023e78d25f191de1d7c4b
>  (No such file or directory)
>  at java.io.FileInputStream.open0(Native Method)
>  at java.io.FileInputStream.open(FileInputStream.java:195)
>  at java.io.FileInputStream.<init>(FileInputStream.java:138)
>  at 
> org.apache.flink.core.fs.local.LocalDataInputStream.<init>(LocalDataInputStream.java:50)
>  at 
> org.apache.flink.core.fs.local.LocalFileSystem.open(LocalFileSystem.java:143)
>  at 
> org.apache.flink.runtime.blob.FileSystemBlobStore.get(FileSystemBlobStore.java:105)
>  at 
> org.apache.flink.runtime.blob.FileSystemBlobStore.get(FileSystemBlobStore.java:87)
>  at 
> org.apache.flink.runtime.blob.BlobServer.getFileInternal(BlobServer.java:501)
>  at 
> org.apache.flink.runtime.blob.BlobServerConnection.get(BlobServerConnection.java:231)
>  at 
> org.apache.flink.runtime.blob.BlobServerConnection.run(BlobServerConnection.java:117)
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-19067) FileNotFoundException when run flink examples

2020-09-07 Thread JieFang.He (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

JieFang.He updated FLINK-19067:
---
Attachment: flink-mr-jobmanager-deployer-hejiefang01.log

> FileNotFoundException when run flink examples
> -
>
> Key: FLINK-19067
> URL: https://issues.apache.org/jira/browse/FLINK-19067
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.11.1
>Reporter: JieFang.He
>Priority: Major
> Attachments: flink-mr-jobmanager-deployer-hejiefang01.log, 
> flink-mr-jobmanager-deployer-hejiefang01.log
>
>
> 1、When run examples/batch/WordCount.jar,it will fail with the exception:
> Caused by: java.io.FileNotFoundException: 
> /data2/flink/storageDir/default/blob/job_d29414828f614d5466e239be4d3889ac/blob_p-a2ebe1c5aa160595f214b4bd0f39d80e42ee2e93-f458f1c12dc023e78d25f191de1d7c4b
>  (No such file or directory)
>  at java.io.FileInputStream.open0(Native Method)
>  at java.io.FileInputStream.open(FileInputStream.java:195)
>  at java.io.FileInputStream.(FileInputStream.java:138)
>  at 
> org.apache.flink.core.fs.local.LocalDataInputStream.(LocalDataInputStream.java:50)
>  at 
> org.apache.flink.core.fs.local.LocalFileSystem.open(LocalFileSystem.java:143)
>  at 
> org.apache.flink.runtime.blob.FileSystemBlobStore.get(FileSystemBlobStore.java:105)
>  at 
> org.apache.flink.runtime.blob.FileSystemBlobStore.get(FileSystemBlobStore.java:87)
>  at 
> org.apache.flink.runtime.blob.BlobServer.getFileInternal(BlobServer.java:501)
>  at 
> org.apache.flink.runtime.blob.BlobServerConnection.get(BlobServerConnection.java:231)
>  at 
> org.apache.flink.runtime.blob.BlobServerConnection.run(BlobServerConnection.java:117)
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Issue Comment Deleted] (FLINK-19067) FileNotFoundException when run flink examples

2020-09-07 Thread JieFang.He (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

JieFang.He updated FLINK-19067:
---
Comment: was deleted

(was: [^flink-mr-jobmanager-deployer-hejiefang01.log]

The log in JobManager is the same as CLI)

> FileNotFoundException when run flink examples
> -
>
> Key: FLINK-19067
> URL: https://issues.apache.org/jira/browse/FLINK-19067
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.11.1
>Reporter: JieFang.He
>Priority: Major
> Attachments: flink-mr-jobmanager-deployer-hejiefang01.log
>
>
> 1、When run examples/batch/WordCount.jar,it will fail with the exception:
> Caused by: java.io.FileNotFoundException: 
> /data2/flink/storageDir/default/blob/job_d29414828f614d5466e239be4d3889ac/blob_p-a2ebe1c5aa160595f214b4bd0f39d80e42ee2e93-f458f1c12dc023e78d25f191de1d7c4b
>  (No such file or directory)
>  at java.io.FileInputStream.open0(Native Method)
>  at java.io.FileInputStream.open(FileInputStream.java:195)
>  at java.io.FileInputStream.<init>(FileInputStream.java:138)
>  at 
> org.apache.flink.core.fs.local.LocalDataInputStream.<init>(LocalDataInputStream.java:50)
>  at 
> org.apache.flink.core.fs.local.LocalFileSystem.open(LocalFileSystem.java:143)
>  at 
> org.apache.flink.runtime.blob.FileSystemBlobStore.get(FileSystemBlobStore.java:105)
>  at 
> org.apache.flink.runtime.blob.FileSystemBlobStore.get(FileSystemBlobStore.java:87)
>  at 
> org.apache.flink.runtime.blob.BlobServer.getFileInternal(BlobServer.java:501)
>  at 
> org.apache.flink.runtime.blob.BlobServerConnection.get(BlobServerConnection.java:231)
>  at 
> org.apache.flink.runtime.blob.BlobServerConnection.run(BlobServerConnection.java:117)
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-19067) FileNotFoundException when run flink examples

2020-09-07 Thread JieFang.He (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17191596#comment-17191596
 ] 

JieFang.He commented on FLINK-19067:


[^flink-mr-jobmanager-deployer-hejiefang01.log]

The log in JobManager is the same as CLI

> FileNotFoundException when run flink examples
> -
>
> Key: FLINK-19067
> URL: https://issues.apache.org/jira/browse/FLINK-19067
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.11.1
>Reporter: JieFang.He
>Priority: Major
> Attachments: flink-mr-jobmanager-deployer-hejiefang01.log
>
>
> 1、When run examples/batch/WordCount.jar,it will fail with the exception:
> Caused by: java.io.FileNotFoundException: 
> /data2/storageDir/default/blob/job_d29414828f614d5466e239be4d3889ac/blob_p-a2ebe1c5aa160595f214b4bd0f39d80e42ee2e93-f458f1c12dc023e78d25f191de1d7c4b
>  (No such file or directory)
>  at java.io.FileInputStream.open0(Native Method)
>  at java.io.FileInputStream.open(FileInputStream.java:195)
>  at java.io.FileInputStream.<init>(FileInputStream.java:138)
>  at 
> org.apache.flink.core.fs.local.LocalDataInputStream.<init>(LocalDataInputStream.java:50)
>  at 
> org.apache.flink.core.fs.local.LocalFileSystem.open(LocalFileSystem.java:143)
>  at 
> org.apache.flink.runtime.blob.FileSystemBlobStore.get(FileSystemBlobStore.java:105)
>  at 
> org.apache.flink.runtime.blob.FileSystemBlobStore.get(FileSystemBlobStore.java:87)
>  at 
> org.apache.flink.runtime.blob.BlobServer.getFileInternal(BlobServer.java:501)
>  at 
> org.apache.flink.runtime.blob.BlobServerConnection.get(BlobServerConnection.java:231)
>  at 
> org.apache.flink.runtime.blob.BlobServerConnection.run(BlobServerConnection.java:117)
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-19067) FileNotFoundException when run flink examples

2020-09-07 Thread JieFang.He (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

JieFang.He updated FLINK-19067:
---
Description: 
1、When run examples/batch/WordCount.jar,it will fail with the exception:

Caused by: java.io.FileNotFoundException: 
/data2/flink/storageDir/default/blob/job_d29414828f614d5466e239be4d3889ac/blob_p-a2ebe1c5aa160595f214b4bd0f39d80e42ee2e93-f458f1c12dc023e78d25f191de1d7c4b
 (No such file or directory)
 at java.io.FileInputStream.open0(Native Method)
 at java.io.FileInputStream.open(FileInputStream.java:195)
 at java.io.FileInputStream.<init>(FileInputStream.java:138)
 at 
org.apache.flink.core.fs.local.LocalDataInputStream.<init>(LocalDataInputStream.java:50)
 at 
org.apache.flink.core.fs.local.LocalFileSystem.open(LocalFileSystem.java:143)
 at 
org.apache.flink.runtime.blob.FileSystemBlobStore.get(FileSystemBlobStore.java:105)
 at 
org.apache.flink.runtime.blob.FileSystemBlobStore.get(FileSystemBlobStore.java:87)
 at 
org.apache.flink.runtime.blob.BlobServer.getFileInternal(BlobServer.java:501)
 at 
org.apache.flink.runtime.blob.BlobServerConnection.get(BlobServerConnection.java:231)
 at 
org.apache.flink.runtime.blob.BlobServerConnection.run(BlobServerConnection.java:117)

 

 

  was:
1、When run examples/batch/WordCount.jar,it will fail with the exception:

Caused by: java.io.FileNotFoundException: 
/data2/storageDir/default/blob/job_d29414828f614d5466e239be4d3889ac/blob_p-a2ebe1c5aa160595f214b4bd0f39d80e42ee2e93-f458f1c12dc023e78d25f191de1d7c4b
 (No such file or directory)
 at java.io.FileInputStream.open0(Native Method)
 at java.io.FileInputStream.open(FileInputStream.java:195)
 at java.io.FileInputStream.<init>(FileInputStream.java:138)
 at 
org.apache.flink.core.fs.local.LocalDataInputStream.<init>(LocalDataInputStream.java:50)
 at 
org.apache.flink.core.fs.local.LocalFileSystem.open(LocalFileSystem.java:143)
 at 
org.apache.flink.runtime.blob.FileSystemBlobStore.get(FileSystemBlobStore.java:105)
 at 
org.apache.flink.runtime.blob.FileSystemBlobStore.get(FileSystemBlobStore.java:87)
 at 
org.apache.flink.runtime.blob.BlobServer.getFileInternal(BlobServer.java:501)
 at 
org.apache.flink.runtime.blob.BlobServerConnection.get(BlobServerConnection.java:231)
 at 
org.apache.flink.runtime.blob.BlobServerConnection.run(BlobServerConnection.java:117)

 

 


> FileNotFoundException when run flink examples
> -
>
> Key: FLINK-19067
> URL: https://issues.apache.org/jira/browse/FLINK-19067
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.11.1
>Reporter: JieFang.He
>Priority: Major
> Attachments: flink-mr-jobmanager-deployer-hejiefang01.log
>
>
> 1、When run examples/batch/WordCount.jar,it will fail with the exception:
> Caused by: java.io.FileNotFoundException: 
> /data2/flink/storageDir/default/blob/job_d29414828f614d5466e239be4d3889ac/blob_p-a2ebe1c5aa160595f214b4bd0f39d80e42ee2e93-f458f1c12dc023e78d25f191de1d7c4b
>  (No such file or directory)
>  at java.io.FileInputStream.open0(Native Method)
>  at java.io.FileInputStream.open(FileInputStream.java:195)
>  at java.io.FileInputStream.<init>(FileInputStream.java:138)
>  at 
> org.apache.flink.core.fs.local.LocalDataInputStream.<init>(LocalDataInputStream.java:50)
>  at 
> org.apache.flink.core.fs.local.LocalFileSystem.open(LocalFileSystem.java:143)
>  at 
> org.apache.flink.runtime.blob.FileSystemBlobStore.get(FileSystemBlobStore.java:105)
>  at 
> org.apache.flink.runtime.blob.FileSystemBlobStore.get(FileSystemBlobStore.java:87)
>  at 
> org.apache.flink.runtime.blob.BlobServer.getFileInternal(BlobServer.java:501)
>  at 
> org.apache.flink.runtime.blob.BlobServerConnection.get(BlobServerConnection.java:231)
>  at 
> org.apache.flink.runtime.blob.BlobServerConnection.run(BlobServerConnection.java:117)
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-19067) FileNotFoundException when run flink examples

2020-09-07 Thread JieFang.He (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

JieFang.He updated FLINK-19067:
---
Attachment: flink-mr-jobmanager-deployer-hejiefang01.log

> FileNotFoundException when run flink examples
> -
>
> Key: FLINK-19067
> URL: https://issues.apache.org/jira/browse/FLINK-19067
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.11.1
>Reporter: JieFang.He
>Priority: Major
> Attachments: flink-mr-jobmanager-deployer-hejiefang01.log
>
>
> 1、When run examples/batch/WordCount.jar,it will fail with the exception:
> Caused by: java.io.FileNotFoundException: 
> /data2/storageDir/default/blob/job_d29414828f614d5466e239be4d3889ac/blob_p-a2ebe1c5aa160595f214b4bd0f39d80e42ee2e93-f458f1c12dc023e78d25f191de1d7c4b
>  (No such file or directory)
>  at java.io.FileInputStream.open0(Native Method)
>  at java.io.FileInputStream.open(FileInputStream.java:195)
>  at java.io.FileInputStream.<init>(FileInputStream.java:138)
>  at 
> org.apache.flink.core.fs.local.LocalDataInputStream.<init>(LocalDataInputStream.java:50)
>  at 
> org.apache.flink.core.fs.local.LocalFileSystem.open(LocalFileSystem.java:143)
>  at 
> org.apache.flink.runtime.blob.FileSystemBlobStore.get(FileSystemBlobStore.java:105)
>  at 
> org.apache.flink.runtime.blob.FileSystemBlobStore.get(FileSystemBlobStore.java:87)
>  at 
> org.apache.flink.runtime.blob.BlobServer.getFileInternal(BlobServer.java:501)
>  at 
> org.apache.flink.runtime.blob.BlobServerConnection.get(BlobServerConnection.java:231)
>  at 
> org.apache.flink.runtime.blob.BlobServerConnection.run(BlobServerConnection.java:117)
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-19067) FileNotFoundException when run flink examples

2020-09-06 Thread JieFang.He (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

JieFang.He updated FLINK-19067:
---
Description: 
1、When run examples/batch/WordCount.jar,it will fail with the exception:

Caused by: java.io.FileNotFoundException: 
/data2/storageDir/default/blob/job_d29414828f614d5466e239be4d3889ac/blob_p-a2ebe1c5aa160595f214b4bd0f39d80e42ee2e93-f458f1c12dc023e78d25f191de1d7c4b
 (No such file or directory)
 at java.io.FileInputStream.open0(Native Method)
 at java.io.FileInputStream.open(FileInputStream.java:195)
 at java.io.FileInputStream.<init>(FileInputStream.java:138)
 at 
org.apache.flink.core.fs.local.LocalDataInputStream.<init>(LocalDataInputStream.java:50)
 at 
org.apache.flink.core.fs.local.LocalFileSystem.open(LocalFileSystem.java:143)
 at 
org.apache.flink.runtime.blob.FileSystemBlobStore.get(FileSystemBlobStore.java:105)
 at 
org.apache.flink.runtime.blob.FileSystemBlobStore.get(FileSystemBlobStore.java:87)
 at 
org.apache.flink.runtime.blob.BlobServer.getFileInternal(BlobServer.java:501)
 at 
org.apache.flink.runtime.blob.BlobServerConnection.get(BlobServerConnection.java:231)
 at 
org.apache.flink.runtime.blob.BlobServerConnection.run(BlobServerConnection.java:117)

 

 

  was:
1、When run examples/batch/WordCount.jar,it will fail with the exception:

Caused by: java.io.FileNotFoundException: 
/data2/storageDir/default/blob/job_d29414828f614d5466e239be4d3889ac/blob_p-a2ebe1c5aa160595f214b4bd0f39d80e42ee2e93-f458f1c12dc023e78d25f191de1d7c4b
 (No such file or directory)
 at java.io.FileInputStream.open0(Native Method)
 at java.io.FileInputStream.open(FileInputStream.java:195)
 at java.io.FileInputStream.<init>(FileInputStream.java:138)
 at 
org.apache.flink.core.fs.local.LocalDataInputStream.<init>(LocalDataInputStream.java:50)
 at 
org.apache.flink.core.fs.local.LocalFileSystem.open(LocalFileSystem.java:143)
 at 
org.apache.flink.runtime.blob.FileSystemBlobStore.get(FileSystemBlobStore.java:105)
 at 
org.apache.flink.runtime.blob.FileSystemBlobStore.get(FileSystemBlobStore.java:87)
 at 
org.apache.flink.runtime.blob.BlobServer.getFileInternal(BlobServer.java:501)
 at 
org.apache.flink.runtime.blob.BlobServerConnection.get(BlobServerConnection.java:231)
 at 
org.apache.flink.runtime.blob.BlobServerConnection.run(BlobServerConnection.java:117)

 

2、Run examples success on other nodes

3、After run success on the other node, it can run success on the Standby 
JobManager. But run again will fail

 


> FileNotFoundException when run flink examples
> -
>
> Key: FLINK-19067
> URL: https://issues.apache.org/jira/browse/FLINK-19067
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.11.1
>Reporter: JieFang.He
>Priority: Major
>
> 1、When run examples/batch/WordCount.jar,it will fail with the exception:
> Caused by: java.io.FileNotFoundException: 
> /data2/storageDir/default/blob/job_d29414828f614d5466e239be4d3889ac/blob_p-a2ebe1c5aa160595f214b4bd0f39d80e42ee2e93-f458f1c12dc023e78d25f191de1d7c4b
>  (No such file or directory)
>  at java.io.FileInputStream.open0(Native Method)
>  at java.io.FileInputStream.open(FileInputStream.java:195)
>  at java.io.FileInputStream.<init>(FileInputStream.java:138)
>  at 
> org.apache.flink.core.fs.local.LocalDataInputStream.<init>(LocalDataInputStream.java:50)
>  at 
> org.apache.flink.core.fs.local.LocalFileSystem.open(LocalFileSystem.java:143)
>  at 
> org.apache.flink.runtime.blob.FileSystemBlobStore.get(FileSystemBlobStore.java:105)
>  at 
> org.apache.flink.runtime.blob.FileSystemBlobStore.get(FileSystemBlobStore.java:87)
>  at 
> org.apache.flink.runtime.blob.BlobServer.getFileInternal(BlobServer.java:501)
>  at 
> org.apache.flink.runtime.blob.BlobServerConnection.get(BlobServerConnection.java:231)
>  at 
> org.apache.flink.runtime.blob.BlobServerConnection.run(BlobServerConnection.java:117)
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-19067) FileNotFoundException when run flink examples

2020-09-06 Thread JieFang.He (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

JieFang.He updated FLINK-19067:
---
Description: 
1、When run examples/batch/WordCount.jar,it will fail with the exception:

Caused by: java.io.FileNotFoundException: 
/data2/storageDir/default/blob/job_d29414828f614d5466e239be4d3889ac/blob_p-a2ebe1c5aa160595f214b4bd0f39d80e42ee2e93-f458f1c12dc023e78d25f191de1d7c4b
 (No such file or directory)
 at java.io.FileInputStream.open0(Native Method)
 at java.io.FileInputStream.open(FileInputStream.java:195)
 at java.io.FileInputStream.<init>(FileInputStream.java:138)
 at 
org.apache.flink.core.fs.local.LocalDataInputStream.<init>(LocalDataInputStream.java:50)
 at 
org.apache.flink.core.fs.local.LocalFileSystem.open(LocalFileSystem.java:143)
 at 
org.apache.flink.runtime.blob.FileSystemBlobStore.get(FileSystemBlobStore.java:105)
 at 
org.apache.flink.runtime.blob.FileSystemBlobStore.get(FileSystemBlobStore.java:87)
 at 
org.apache.flink.runtime.blob.BlobServer.getFileInternal(BlobServer.java:501)
 at 
org.apache.flink.runtime.blob.BlobServerConnection.get(BlobServerConnection.java:231)
 at 
org.apache.flink.runtime.blob.BlobServerConnection.run(BlobServerConnection.java:117)

 

2、Run examples success on other nodes

3、After run success on the other node, it can run success on the Standby 
JobManager. But run again will fail

 

  was:
1、When run examples/batch/WordCount.jar on standby JobManager,it will fail with 
the exception:

Caused by: java.io.FileNotFoundException: 
/data2/storageDir/default/blob/job_d29414828f614d5466e239be4d3889ac/blob_p-a2ebe1c5aa160595f214b4bd0f39d80e42ee2e93-f458f1c12dc023e78d25f191de1d7c4b
 (No such file or directory)
 at java.io.FileInputStream.open0(Native Method)
 at java.io.FileInputStream.open(FileInputStream.java:195)
 at java.io.FileInputStream.<init>(FileInputStream.java:138)
 at 
org.apache.flink.core.fs.local.LocalDataInputStream.<init>(LocalDataInputStream.java:50)
 at 
org.apache.flink.core.fs.local.LocalFileSystem.open(LocalFileSystem.java:143)
 at 
org.apache.flink.runtime.blob.FileSystemBlobStore.get(FileSystemBlobStore.java:105)
 at 
org.apache.flink.runtime.blob.FileSystemBlobStore.get(FileSystemBlobStore.java:87)
 at 
org.apache.flink.runtime.blob.BlobServer.getFileInternal(BlobServer.java:501)
 at 
org.apache.flink.runtime.blob.BlobServerConnection.get(BlobServerConnection.java:231)
 at 
org.apache.flink.runtime.blob.BlobServerConnection.run(BlobServerConnection.java:117)

 

2、Run examples success on other nodes

3、After run success on the other node, it can run success on the Standby 
JobManager. But run again will fail

 


> FileNotFoundException when run flink examples
> -
>
> Key: FLINK-19067
> URL: https://issues.apache.org/jira/browse/FLINK-19067
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.11.1
>Reporter: JieFang.He
>Priority: Major
>
> 1、When run examples/batch/WordCount.jar,it will fail with the exception:
> Caused by: java.io.FileNotFoundException: 
> /data2/storageDir/default/blob/job_d29414828f614d5466e239be4d3889ac/blob_p-a2ebe1c5aa160595f214b4bd0f39d80e42ee2e93-f458f1c12dc023e78d25f191de1d7c4b
>  (No such file or directory)
>  at java.io.FileInputStream.open0(Native Method)
>  at java.io.FileInputStream.open(FileInputStream.java:195)
>  at java.io.FileInputStream.<init>(FileInputStream.java:138)
>  at 
> org.apache.flink.core.fs.local.LocalDataInputStream.<init>(LocalDataInputStream.java:50)
>  at 
> org.apache.flink.core.fs.local.LocalFileSystem.open(LocalFileSystem.java:143)
>  at 
> org.apache.flink.runtime.blob.FileSystemBlobStore.get(FileSystemBlobStore.java:105)
>  at 
> org.apache.flink.runtime.blob.FileSystemBlobStore.get(FileSystemBlobStore.java:87)
>  at 
> org.apache.flink.runtime.blob.BlobServer.getFileInternal(BlobServer.java:501)
>  at 
> org.apache.flink.runtime.blob.BlobServerConnection.get(BlobServerConnection.java:231)
>  at 
> org.apache.flink.runtime.blob.BlobServerConnection.run(BlobServerConnection.java:117)
>  
> 2、Run examples success on other nodes
> 3、After run success on the other node, it can run success on the Standby 
> JobManager. But run again will fail
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-19067) FileNotFoundException when run flink examples

2020-09-06 Thread JieFang.He (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

JieFang.He updated FLINK-19067:
---
Summary: FileNotFoundException when run flink examples  (was: 
FileNotFoundException when run flink examples on standby JobManager)

> FileNotFoundException when run flink examples
> -
>
> Key: FLINK-19067
> URL: https://issues.apache.org/jira/browse/FLINK-19067
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.11.1
>Reporter: JieFang.He
>Priority: Major
>
> 1、When run examples/batch/WordCount.jar on standby JobManager,it will fail 
> with the exception:
> Caused by: java.io.FileNotFoundException: 
> /data2/storageDir/default/blob/job_d29414828f614d5466e239be4d3889ac/blob_p-a2ebe1c5aa160595f214b4bd0f39d80e42ee2e93-f458f1c12dc023e78d25f191de1d7c4b
>  (No such file or directory)
>  at java.io.FileInputStream.open0(Native Method)
>  at java.io.FileInputStream.open(FileInputStream.java:195)
>  at java.io.FileInputStream.<init>(FileInputStream.java:138)
>  at 
> org.apache.flink.core.fs.local.LocalDataInputStream.<init>(LocalDataInputStream.java:50)
>  at 
> org.apache.flink.core.fs.local.LocalFileSystem.open(LocalFileSystem.java:143)
>  at 
> org.apache.flink.runtime.blob.FileSystemBlobStore.get(FileSystemBlobStore.java:105)
>  at 
> org.apache.flink.runtime.blob.FileSystemBlobStore.get(FileSystemBlobStore.java:87)
>  at 
> org.apache.flink.runtime.blob.BlobServer.getFileInternal(BlobServer.java:501)
>  at 
> org.apache.flink.runtime.blob.BlobServerConnection.get(BlobServerConnection.java:231)
>  at 
> org.apache.flink.runtime.blob.BlobServerConnection.run(BlobServerConnection.java:117)
>  
> 2、Run examples success on other nodes
> 3、After run success on the other node, it can run success on the Standby 
> JobManager. But run again will fail
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (FLINK-19067) FileNotFoundException when run flink examples on standby JobManager

2020-09-06 Thread JieFang.He (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17189861#comment-17189861
 ] 

JieFang.He edited comment on FLINK-19067 at 9/7/20, 3:00 AM:
-

[~rmetzger]

After testing again, this appears to have nothing to do with the standby
JobManager: when it happens, the job submission fails on every node. The last
time, when it failed only on the standby node, that seems to have been a
coincidence.

Also, not every job submission fails; the failure is probabilistic.

The log shows that the blob file does not exist on the server, but the
WordCount example does not seem to need these files; the successful jobs did
not generate them.

My resource_manager_lock is on the standby JobManager and the other locks are
on the active JobManager; I don't know whether this has any impact.
{code:java}
2020-09-03 19:15:11,566 INFO [main-SendThread(deployer-hejiefang03:2181)] 
[ClientCnxn]: Socket connection established, initiating session, client: 
/172.17.0.8:45164, server: deployer-hejiefang03/172.17.0.9:2181
2020-09-03 19:15:11,573 INFO [main-SendThread(deployer-hejiefang03:2181)] 
[ClientCnxn]: Session establishment complete on server 
deployer-hejiefang03/172.17.0.9:2181, sessionid = 0x3000e154ba9000a, negotiated 
timeout = 6
2020-09-03 19:15:11,573 INFO [main-EventThread] [ConnectionStateManager]: State 
change: CONNECTED
2020-09-03 19:15:11,578 INFO [main-EventThread] [EnsembleTracker]: New config 
event received: {server.1=deployer-hejiefang01:2888:3888:participant, 
version=0, server.3=deployer-hejiefang03:2888:3888:participant, 
server.2=deployer-hejiefang02:2888:3888:participant}
2020-09-03 19:15:11,578 ERROR [main-EventThread] [EnsembleTracker]: Invalid 
config event received: {server.1=deployer-hejiefang01:2888:3888:participant, 
version=0, server.3=deployer-hejiefang03:2888:3888:participant, 
server.2=deployer-hejiefang02:2888:3888:participant}
2020-09-03 19:15:11,579 INFO [main-EventThread] [EnsembleTracker]: New config 
event received: {server.1=deployer-hejiefang01:2888:3888:participant, 
version=0, server.3=deployer-hejiefang03:2888:3888:participant, 
server.2=deployer-hejiefang02:2888:3888:participant}
2020-09-03 19:15:11,579 ERROR [main-EventThread] [EnsembleTracker]: Invalid 
config event received: {server.1=deployer-hejiefang01:2888:3888:participant, 
version=0, server.3=deployer-hejiefang03:2888:3888:participant, 
server.2=deployer-hejiefang02:2888:3888:participant}
2020-09-03 19:15:11,668 INFO [Curator-Framework-0] [CuratorFrameworkImpl]: 
backgroundOperationsLoop exiting
2020-09-03 19:15:11,772 INFO [ForkJoinPool.commonPool-worker-57] [ZooKeeper]: 
Session: 0x3000e154ba9000a closed
2020-09-03 19:15:11,772 INFO [main-EventThread] [ClientCnxn]: EventThread shut 
down for session: 0x3000e154ba9000a
2020-09-03 19:15:11,775 ERROR [main] [CliFrontend]: Error while running the 
command.
org.apache.flink.client.program.ProgramInvocationException: The main method 
caused an error: org.apache.flink.client.program.ProgramInvocationException: 
Job failed (JobID: 8b242015d603f46e82a5f1299399b669)
at 
org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:302)
 ~[flink-dist_2.11-1.11.1.jar:1.11.1]
at 
org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:198)
 ~[flink-dist_2.11-1.11.1.jar:1.11.1]
at 
org.apache.flink.client.ClientUtils.executeProgram(ClientUtils.java:149) 
~[flink-dist_2.11-1.11.1.jar:1.11.1]
at 
org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:699) 
~[flink-dist_2.11-1.11.1.jar:1.11.1]
at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:232) 
~[flink-dist_2.11-1.11.1.jar:1.11.1]
at 
org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:916) 
~[flink-dist_2.11-1.11.1.jar:1.11.1]
at 
org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:992) 
~[flink-dist_2.11-1.11.1.jar:1.11.1]
at java.security.AccessController.doPrivileged(Native Method) 
~[?:1.8.0_163]
at javax.security.auth.Subject.doAs(Subject.java:422) [?:1.8.0_163]
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
 [flink-shaded-hadoop-3-uber-3.2.1-11.0.jar:?]
at 
org.apache.flink.runtime.security.contexts.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
 [flink-dist_2.11-1.11.1.jar:1.11.1]
at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:992) 
[flink-dist_2.11-1.11.1.jar:1.11.1]
Caused by: java.util.concurrent.ExecutionException: 
org.apache.flink.client.program.ProgramInvocationException: Job failed (JobID: 
8b242015d603f46e82a5f1299399b669)
at 
java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357) 
~[?:1.8.0_163]
at 
java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1895) 
~[?:1.8.

[jira] [Comment Edited] (FLINK-19067) FileNotFoundException when run flink examples on standby JobManager

2020-09-02 Thread JieFang.He (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17189861#comment-17189861
 ] 

JieFang.He edited comment on FLINK-19067 at 9/3/20, 6:21 AM:
-

[~rmetzger]

After testing again, this appears to have nothing to do with the standby
JobManager: when it happens, the job submission fails on every node. The last
time, when it failed only on the standby node, that seems to have been a
coincidence.

Also, not every job submission fails; the failure is probabilistic.

The log shows that the blob file does not exist on the server, but the
WordCount example does not seem to need these files; the successful jobs did
not generate them.

Also, if --output is set to an HDFS path, it never fails.

My resource_manager_lock is on the standby JobManager and the other locks are
on the active JobManager; I don't know whether this has any impact.
{code:java}
2020-09-03 19:15:11,566 INFO [main-SendThread(deployer-hejiefang03:2181)] 
[ClientCnxn]: Socket connection established, initiating session, client: 
/172.17.0.8:45164, server: deployer-hejiefang03/172.17.0.9:2181
2020-09-03 19:15:11,573 INFO [main-SendThread(deployer-hejiefang03:2181)] 
[ClientCnxn]: Session establishment complete on server 
deployer-hejiefang03/172.17.0.9:2181, sessionid = 0x3000e154ba9000a, negotiated 
timeout = 6
2020-09-03 19:15:11,573 INFO [main-EventThread] [ConnectionStateManager]: State 
change: CONNECTED
2020-09-03 19:15:11,578 INFO [main-EventThread] [EnsembleTracker]: New config 
event received: {server.1=deployer-hejiefang01:2888:3888:participant, 
version=0, server.3=deployer-hejiefang03:2888:3888:participant, 
server.2=deployer-hejiefang02:2888:3888:participant}
2020-09-03 19:15:11,578 ERROR [main-EventThread] [EnsembleTracker]: Invalid 
config event received: {server.1=deployer-hejiefang01:2888:3888:participant, 
version=0, server.3=deployer-hejiefang03:2888:3888:participant, 
server.2=deployer-hejiefang02:2888:3888:participant}
2020-09-03 19:15:11,579 INFO [main-EventThread] [EnsembleTracker]: New config 
event received: {server.1=deployer-hejiefang01:2888:3888:participant, 
version=0, server.3=deployer-hejiefang03:2888:3888:participant, 
server.2=deployer-hejiefang02:2888:3888:participant}
2020-09-03 19:15:11,579 ERROR [main-EventThread] [EnsembleTracker]: Invalid 
config event received: {server.1=deployer-hejiefang01:2888:3888:participant, 
version=0, server.3=deployer-hejiefang03:2888:3888:participant, 
server.2=deployer-hejiefang02:2888:3888:participant}
2020-09-03 19:15:11,668 INFO [Curator-Framework-0] [CuratorFrameworkImpl]: 
backgroundOperationsLoop exiting
2020-09-03 19:15:11,772 INFO [ForkJoinPool.commonPool-worker-57] [ZooKeeper]: 
Session: 0x3000e154ba9000a closed
2020-09-03 19:15:11,772 INFO [main-EventThread] [ClientCnxn]: EventThread shut 
down for session: 0x3000e154ba9000a
2020-09-03 19:15:11,775 ERROR [main] [CliFrontend]: Error while running the 
command.
org.apache.flink.client.program.ProgramInvocationException: The main method 
caused an error: org.apache.flink.client.program.ProgramInvocationException: 
Job failed (JobID: 8b242015d603f46e82a5f1299399b669)
at 
org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:302)
 ~[flink-dist_2.11-1.11.1.jar:1.11.1]
at 
org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:198)
 ~[flink-dist_2.11-1.11.1.jar:1.11.1]
at 
org.apache.flink.client.ClientUtils.executeProgram(ClientUtils.java:149) 
~[flink-dist_2.11-1.11.1.jar:1.11.1]
at 
org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:699) 
~[flink-dist_2.11-1.11.1.jar:1.11.1]
at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:232) 
~[flink-dist_2.11-1.11.1.jar:1.11.1]
at 
org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:916) 
~[flink-dist_2.11-1.11.1.jar:1.11.1]
at 
org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:992) 
~[flink-dist_2.11-1.11.1.jar:1.11.1]
at java.security.AccessController.doPrivileged(Native Method) 
~[?:1.8.0_163]
at javax.security.auth.Subject.doAs(Subject.java:422) [?:1.8.0_163]
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
 [flink-shaded-hadoop-3-uber-3.2.1-11.0.jar:?]
at 
org.apache.flink.runtime.security.contexts.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
 [flink-dist_2.11-1.11.1.jar:1.11.1]
at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:992) 
[flink-dist_2.11-1.11.1.jar:1.11.1]
Caused by: java.util.concurrent.ExecutionException: 
org.apache.flink.client.program.ProgramInvocationException: Job failed (JobID: 
8b242015d603f46e82a5f1299399b669)
at 
java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357) 
~[?:1.8.0_163]
at 
java.util.concurrent.CompletableFu

[jira] [Comment Edited] (FLINK-19067) FileNotFoundException when run flink examples on standby JobManager

2020-09-02 Thread JieFang.He (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17189861#comment-17189861
 ] 

JieFang.He edited comment on FLINK-19067 at 9/3/20, 6:14 AM:
-

[~rmetzger]

After testing again, this appears to have nothing to do with the standby
JobManager: when it happens, the job submission fails on every node. The last
time, when it failed only on the standby node, that seems to have been a
coincidence.

Also, not every job submission fails; the failure is probabilistic.

The log shows that the blob file does not exist on the server, but the
WordCount example does not seem to need these files; the successful jobs did
not generate them.

My resource_manager_lock is on the standby JobManager and the other locks are
on the active JobManager; I don't know whether this has any impact.
{code:java}
2020-09-03 19:15:11,566 INFO [main-SendThread(deployer-hejiefang03:2181)] 
[ClientCnxn]: Socket connection established, initiating session, client: 
/172.17.0.8:45164, server: deployer-hejiefang03/172.17.0.9:2181
2020-09-03 19:15:11,573 INFO [main-SendThread(deployer-hejiefang03:2181)] 
[ClientCnxn]: Session establishment complete on server 
deployer-hejiefang03/172.17.0.9:2181, sessionid = 0x3000e154ba9000a, negotiated 
timeout = 6
2020-09-03 19:15:11,573 INFO [main-EventThread] [ConnectionStateManager]: State 
change: CONNECTED
2020-09-03 19:15:11,578 INFO [main-EventThread] [EnsembleTracker]: New config 
event received: {server.1=deployer-hejiefang01:2888:3888:participant, 
version=0, server.3=deployer-hejiefang03:2888:3888:participant, 
server.2=deployer-hejiefang02:2888:3888:participant}
2020-09-03 19:15:11,578 ERROR [main-EventThread] [EnsembleTracker]: Invalid 
config event received: {server.1=deployer-hejiefang01:2888:3888:participant, 
version=0, server.3=deployer-hejiefang03:2888:3888:participant, 
server.2=deployer-hejiefang02:2888:3888:participant}
2020-09-03 19:15:11,579 INFO [main-EventThread] [EnsembleTracker]: New config 
event received: {server.1=deployer-hejiefang01:2888:3888:participant, 
version=0, server.3=deployer-hejiefang03:2888:3888:participant, 
server.2=deployer-hejiefang02:2888:3888:participant}
2020-09-03 19:15:11,579 ERROR [main-EventThread] [EnsembleTracker]: Invalid 
config event received: {server.1=deployer-hejiefang01:2888:3888:participant, 
version=0, server.3=deployer-hejiefang03:2888:3888:participant, 
server.2=deployer-hejiefang02:2888:3888:participant}
2020-09-03 19:15:11,668 INFO [Curator-Framework-0] [CuratorFrameworkImpl]: 
backgroundOperationsLoop exiting
2020-09-03 19:15:11,772 INFO [ForkJoinPool.commonPool-worker-57] [ZooKeeper]: 
Session: 0x3000e154ba9000a closed
2020-09-03 19:15:11,772 INFO [main-EventThread] [ClientCnxn]: EventThread shut 
down for session: 0x3000e154ba9000a
2020-09-03 19:15:11,775 ERROR [main] [CliFrontend]: Error while running the 
command.
org.apache.flink.client.program.ProgramInvocationException: The main method 
caused an error: org.apache.flink.client.program.ProgramInvocationException: 
Job failed (JobID: 8b242015d603f46e82a5f1299399b669)
at 
org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:302)
 ~[flink-dist_2.11-1.11.1.jar:1.11.1]
at 
org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:198)
 ~[flink-dist_2.11-1.11.1.jar:1.11.1]
at 
org.apache.flink.client.ClientUtils.executeProgram(ClientUtils.java:149) 
~[flink-dist_2.11-1.11.1.jar:1.11.1]
at 
org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:699) 
~[flink-dist_2.11-1.11.1.jar:1.11.1]
at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:232) 
~[flink-dist_2.11-1.11.1.jar:1.11.1]
at 
org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:916) 
~[flink-dist_2.11-1.11.1.jar:1.11.1]
at 
org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:992) 
~[flink-dist_2.11-1.11.1.jar:1.11.1]
at java.security.AccessController.doPrivileged(Native Method) 
~[?:1.8.0_163]
at javax.security.auth.Subject.doAs(Subject.java:422) [?:1.8.0_163]
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
 [flink-shaded-hadoop-3-uber-3.2.1-11.0.jar:?]
at 
org.apache.flink.runtime.security.contexts.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
 [flink-dist_2.11-1.11.1.jar:1.11.1]
at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:992) 
[flink-dist_2.11-1.11.1.jar:1.11.1]
Caused by: java.util.concurrent.ExecutionException: 
org.apache.flink.client.program.ProgramInvocationException: Job failed (JobID: 
8b242015d603f46e82a5f1299399b669)
at 
java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357) 
~[?:1.8.0_163]
at 
java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1895) 
~[?:1.8.

[jira] [Commented] (FLINK-19067) FileNotFoundException when run flink examples on standby JobManager

2020-09-02 Thread JieFang.He (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17189861#comment-17189861
 ] 

JieFang.He commented on FLINK-19067:


[~rmetzger]

After testing again, this appears to have nothing to do with the standby
JobManager: when it happens, the job submission fails on every node. The last
time, when it failed only on the standby node, that seems to have been a
coincidence.

Also, not every job submission fails; the failure is probabilistic.

The log shows that the blob file does not exist on the server, but the
WordCount example does not seem to need these files; the successful jobs did
not generate them.

My resource_manager_lock is on the standby JobManager and the other locks are
on the active JobManager; I don't know whether this has any impact.
{code:java}
2020-09-03 19:15:11,566 INFO [main-SendThread(-deployer-hejiefang03:2181)] 
[ClientCnxn]: Socket connection established, initiating session, client: 
/172.17.0.8:45164, server: -deployer-hejiefang03/172.17.0.9:2181
2020-09-03 19:15:11,573 INFO [main-SendThread(-deployer-hejiefang03:2181)] 
[ClientCnxn]: Session establishment complete on server 
-deployer-hejiefang03/172.17.0.9:2181, sessionid = 0x3000e154ba9000a, 
negotiated timeout = 6
2020-09-03 19:15:11,573 INFO [main-EventThread] [ConnectionStateManager]: State 
change: CONNECTED
2020-09-03 19:15:11,578 INFO [main-EventThread] [EnsembleTracker]: New config 
event received: {server.1=-deployer-hejiefang01:2888:3888:participant, 
version=0, server.3=-deployer-hejiefang03:2888:3888:participant, 
server.2=-deployer-hejiefang02:2888:3888:participant}
2020-09-03 19:15:11,578 ERROR [main-EventThread] [EnsembleTracker]: Invalid 
config event received: {server.1=-deployer-hejiefang01:2888:3888:participant, 
version=0, server.3=-deployer-hejiefang03:2888:3888:participant, 
server.2=-deployer-hejiefang02:2888:3888:participant}
2020-09-03 19:15:11,579 INFO [main-EventThread] [EnsembleTracker]: New config 
event received: {server.1=-deployer-hejiefang01:2888:3888:participant, 
version=0, server.3=-deployer-hejiefang03:2888:3888:participant, 
server.2=-deployer-hejiefang02:2888:3888:participant}
2020-09-03 19:15:11,579 ERROR [main-EventThread] [EnsembleTracker]: Invalid 
config event received: {server.1=-deployer-hejiefang01:2888:3888:participant, 
version=0, server.3=-deployer-hejiefang03:2888:3888:participant, 
server.2=-deployer-hejiefang02:2888:3888:participant}
2020-09-03 19:15:11,668 INFO [Curator-Framework-0] [CuratorFrameworkImpl]: 
backgroundOperationsLoop exiting
2020-09-03 19:15:11,772 INFO [ForkJoinPool.commonPool-worker-57] [ZooKeeper]: 
Session: 0x3000e154ba9000a closed
2020-09-03 19:15:11,772 INFO [main-EventThread] [ClientCnxn]: EventThread shut 
down for session: 0x3000e154ba9000a
2020-09-03 19:15:11,775 ERROR [main] [CliFrontend]: Error while running the 
command.
org.apache.flink.client.program.ProgramInvocationException: The main method 
caused an error: org.apache.flink.client.program.ProgramInvocationException: 
Job failed (JobID: 8b242015d603f46e82a5f1299399b669)
at 
org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:302)
 ~[flink-dist_2.11-1.11.1.jar:1.11.1]
at 
org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:198)
 ~[flink-dist_2.11-1.11.1.jar:1.11.1]
at 
org.apache.flink.client.ClientUtils.executeProgram(ClientUtils.java:149) 
~[flink-dist_2.11-1.11.1.jar:1.11.1]
at 
org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:699) 
~[flink-dist_2.11-1.11.1.jar:1.11.1]
at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:232) 
~[flink-dist_2.11-1.11.1.jar:1.11.1]
at 
org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:916) 
~[flink-dist_2.11-1.11.1.jar:1.11.1]
at 
org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:992) 
~[flink-dist_2.11-1.11.1.jar:1.11.1]
at java.security.AccessController.doPrivileged(Native Method) 
~[?:1.8.0_163]
at javax.security.auth.Subject.doAs(Subject.java:422) [?:1.8.0_163]
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
 [flink-shaded-hadoop-3-uber-3.2.1-11.0.jar:?]
at 
org.apache.flink.runtime.security.contexts.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
 [flink-dist_2.11-1.11.1.jar:1.11.1]
at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:992) 
[flink-dist_2.11-1.11.1.jar:1.11.1]
Caused by: java.util.concurrent.ExecutionException: 
org.apache.flink.client.program.ProgramInvocationException: Job failed (JobID: 
8b242015d603f46e82a5f1299399b669)
at 
java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357) 
~[?:1.8.0_163]
at 
java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1895) 
~[?:1.8.0_163]
at 
org.apache.fli

[jira] [Issue Comment Deleted] (FLINK-19067) FileNotFoundException when run flink examples on standby JobManager

2020-09-02 Thread JieFang.He (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

JieFang.He updated FLINK-19067:
---
Comment: was deleted

(was: [~rmetzger]  This situation does not always happen. My cluster no longer 
appears after reinstallation. I am now trying to reproduce the problem.)

> FileNotFoundException when run flink examples on standby JobManager
> ---
>
> Key: FLINK-19067
> URL: https://issues.apache.org/jira/browse/FLINK-19067
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.11.1
>Reporter: JieFang.He
>Priority: Major
>
> 1、When run examples/batch/WordCount.jar on standby JobManager,it will fail 
> with the exception:
> Caused by: java.io.FileNotFoundException: 
> /data2/storageDir/default/blob/job_d29414828f614d5466e239be4d3889ac/blob_p-a2ebe1c5aa160595f214b4bd0f39d80e42ee2e93-f458f1c12dc023e78d25f191de1d7c4b
>  (No such file or directory)
>  at java.io.FileInputStream.open0(Native Method)
>  at java.io.FileInputStream.open(FileInputStream.java:195)
>  at java.io.FileInputStream.<init>(FileInputStream.java:138)
>  at 
> org.apache.flink.core.fs.local.LocalDataInputStream.<init>(LocalDataInputStream.java:50)
>  at 
> org.apache.flink.core.fs.local.LocalFileSystem.open(LocalFileSystem.java:143)
>  at 
> org.apache.flink.runtime.blob.FileSystemBlobStore.get(FileSystemBlobStore.java:105)
>  at 
> org.apache.flink.runtime.blob.FileSystemBlobStore.get(FileSystemBlobStore.java:87)
>  at 
> org.apache.flink.runtime.blob.BlobServer.getFileInternal(BlobServer.java:501)
>  at 
> org.apache.flink.runtime.blob.BlobServerConnection.get(BlobServerConnection.java:231)
>  at 
> org.apache.flink.runtime.blob.BlobServerConnection.run(BlobServerConnection.java:117)
>  
> 2、Run examples success on other nodes
> 3、After run success on the other node, it can run success on the Standby 
> JobManager. But run again will fail
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-19067) FileNotFoundException when run flink examples on standby JobManager

2020-09-02 Thread JieFang.He (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17189783#comment-17189783
 ] 

JieFang.He commented on FLINK-19067:


[~rmetzger]  This situation does not always happen. My cluster no longer 
appears after reinstallation. I am now trying to reproduce the problem.

> FileNotFoundException when run flink examples on standby JobManager
> ---
>
> Key: FLINK-19067
> URL: https://issues.apache.org/jira/browse/FLINK-19067
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.11.1
>Reporter: JieFang.He
>Priority: Major
>
> 1、When run examples/batch/WordCount.jar on standby JobManager,it will fail 
> with the exception:
> Caused by: java.io.FileNotFoundException: 
> /data2/storageDir/default/blob/job_d29414828f614d5466e239be4d3889ac/blob_p-a2ebe1c5aa160595f214b4bd0f39d80e42ee2e93-f458f1c12dc023e78d25f191de1d7c4b
>  (No such file or directory)
>  at java.io.FileInputStream.open0(Native Method)
>  at java.io.FileInputStream.open(FileInputStream.java:195)
>  at java.io.FileInputStream.<init>(FileInputStream.java:138)
>  at 
> org.apache.flink.core.fs.local.LocalDataInputStream.<init>(LocalDataInputStream.java:50)
>  at 
> org.apache.flink.core.fs.local.LocalFileSystem.open(LocalFileSystem.java:143)
>  at 
> org.apache.flink.runtime.blob.FileSystemBlobStore.get(FileSystemBlobStore.java:105)
>  at 
> org.apache.flink.runtime.blob.FileSystemBlobStore.get(FileSystemBlobStore.java:87)
>  at 
> org.apache.flink.runtime.blob.BlobServer.getFileInternal(BlobServer.java:501)
>  at 
> org.apache.flink.runtime.blob.BlobServerConnection.get(BlobServerConnection.java:231)
>  at 
> org.apache.flink.runtime.blob.BlobServerConnection.run(BlobServerConnection.java:117)
>  
> 2、Run examples success on other nodes
> 3、After run success on the other node, it can run success on the Standby 
> JobManager. But run again will fail
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-19067) FileNotFoundException when run flink examples on standby JobManager

2020-08-27 Thread JieFang.He (Jira)
JieFang.He created FLINK-19067:
--

 Summary: FileNotFoundException when run flink examples on standby 
JobManager
 Key: FLINK-19067
 URL: https://issues.apache.org/jira/browse/FLINK-19067
 Project: Flink
  Issue Type: Bug
Affects Versions: 1.11.1
Reporter: JieFang.He


1、When running examples/batch/WordCount.jar on the standby JobManager, it fails with 
the exception:

Caused by: java.io.FileNotFoundException: 
/data2/storageDir/default/blob/job_d29414828f614d5466e239be4d3889ac/blob_p-a2ebe1c5aa160595f214b4bd0f39d80e42ee2e93-f458f1c12dc023e78d25f191de1d7c4b
 (No such file or directory)
 at java.io.FileInputStream.open0(Native Method)
 at java.io.FileInputStream.open(FileInputStream.java:195)
 at java.io.FileInputStream.<init>(FileInputStream.java:138)
 at 
org.apache.flink.core.fs.local.LocalDataInputStream.<init>(LocalDataInputStream.java:50)
 at 
org.apache.flink.core.fs.local.LocalFileSystem.open(LocalFileSystem.java:143)
 at 
org.apache.flink.runtime.blob.FileSystemBlobStore.get(FileSystemBlobStore.java:105)
 at 
org.apache.flink.runtime.blob.FileSystemBlobStore.get(FileSystemBlobStore.java:87)
 at 
org.apache.flink.runtime.blob.BlobServer.getFileInternal(BlobServer.java:501)
 at 
org.apache.flink.runtime.blob.BlobServerConnection.get(BlobServerConnection.java:231)
 at 
org.apache.flink.runtime.blob.BlobServerConnection.run(BlobServerConnection.java:117)

 

2、Running the examples succeeds on the other nodes

3、After a successful run on another node, it can also run successfully on the standby 
JobManager, but running it again fails (see the diagnostic sketch below)
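The trace above ends in LocalFileSystem.open, i.e. FileSystemBlobStore resolved the 
configured storage directory to a node-local path rather than to a file system shared 
by all JobManagers, which would explain why a blob written on one node cannot be found 
on another. A minimal diagnostic sketch along those lines (not part of Flink; the path 
is illustrative and assumed to match high-availability.storageDir), to be run on each 
JobManager host:
{code:java}
import org.apache.flink.core.fs.FileSystem;
import org.apache.flink.core.fs.Path;

/**
 * Hypothetical check, not existing Flink code: shows which file system the
 * HA storage directory resolves to on this node and whether it is visible.
 */
public class BlobStorageCheck {

    public static void main(String[] args) throws Exception {
        // Assumed to match high-availability.storageDir in flink-conf.yaml.
        String storageDir = args.length > 0 ? args[0] : "file:///data2/storageDir";

        Path root = new Path(storageDir);
        FileSystem fs = FileSystem.get(root.toUri());

        // LocalFileSystem here on a multi-node HA setup means every JobManager
        // sees its own private directory, so blobs written by the active
        // JobManager cannot be found on the others.
        System.out.println("Resolved file system: " + fs.getClass().getName());
        System.out.println("File system URI     : " + fs.getUri());
        System.out.println("Directory exists    : " + fs.exists(root));
    }
}
{code}
If the resolved file system turns out to be the local one, pointing 
high-availability.storageDir at a location every JobManager can reach (for example an 
HDFS or NFS path) would be the first thing to verify.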

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-18210) The tmpdir will be clean up when stop historyserver

2020-06-12 Thread JieFang.He (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

JieFang.He updated FLINK-18210:
---
Affects Version/s: 1.12.0

> The tmpdir will be clean up when stop historyserver
> ---
>
> Key: FLINK-18210
> URL: https://issues.apache.org/jira/browse/FLINK-18210
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Web Frontend
>Affects Versions: 1.9.2, 1.12.0
>Reporter: JieFang.He
>Priority: Major
>
> The tmpdir (configured by historyserver.web.tmpdir) is cleaned up when the 
> HistoryServer is stopped. But the directory may be shared, so it would be better to 
> delete only the files created by the HistoryServer.
> Part of the code to stop historyserver is shown below
> {code:java}
> try {
>LOG.info("Removing web dashboard root cache directory {}", webDir);
>FileUtils.deleteDirectory(webDir);
> } catch (Throwable t) {
>LOG.warn("Error while deleting web root directory {}", webDir, t);
> }
> {code}
>  FileUtils.deleteDirectory:
> {code:java}
> // empty the directory first
> try {
>cleanDirectoryInternal(directory);
> }
> {code}
> cleanDirectoryInternal:
> {code:java}
> // remove all files in the directory
> for (File file : files) {
>if (file != null) {
>   deleteFileOrDirectory(file);
>}
> }
> {code}
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-18210) The tmpdir will be clean up when stop historyserver

2020-06-09 Thread JieFang.He (Jira)
JieFang.He created FLINK-18210:
--

 Summary: The tmpdir will be clean up when stop historyserver
 Key: FLINK-18210
 URL: https://issues.apache.org/jira/browse/FLINK-18210
 Project: Flink
  Issue Type: Improvement
  Components: Runtime / Web Frontend
Affects Versions: 1.9.2
Reporter: JieFang.He


The tmpdir (configured by historyserver.web.tmpdir) is cleaned up when the 
HistoryServer is stopped. But the directory may be shared, so it would be better to 
delete only the files created by the HistoryServer.


Part of the code to stop historyserver is shown below
{code:java}
try {
   LOG.info("Removing web dashboard root cache directory {}", webDir);
   FileUtils.deleteDirectory(webDir);
} catch (Throwable t) {
   LOG.warn("Error while deleting web root directory {}", webDir, t);
}
{code}
 FileUtils.deleteDirectory:
{code:java}
// empty the directory first
try {
   cleanDirectoryInternal(directory);
}
{code}
cleanDirectoryInternal:
{code:java}
// remove all files in the directory
for (File file : files) {
   if (file != null) {
  deleteFileOrDirectory(file);
   }
}
{code}
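One possible direction for the improvement, sketched here as an assumption rather than 
as existing Flink code: have the HistoryServer remember which paths it created inside 
the (possibly shared) tmpdir and delete only those on shutdown, instead of calling 
FileUtils.deleteDirectory on the whole directory.
{code:java}
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.util.ArrayDeque;
import java.util.Deque;

/**
 * Hypothetical helper, not part of Flink: records the paths the HistoryServer
 * creates inside a shared tmpdir and removes only those when it stops.
 */
class OwnedFilesTracker {

    // Newest entries are deleted first, so files are removed before the
    // directories that contain them.
    private final Deque<File> createdPaths = new ArrayDeque<>();

    File createDirectory(File parent, String name) throws IOException {
        File dir = new File(parent, name);
        Files.createDirectories(dir.toPath());
        createdPaths.push(dir);
        return dir;
    }

    File createFile(File dir, String name) throws IOException {
        File file = new File(dir, name);
        Files.createFile(file.toPath());
        createdPaths.push(file);
        return file;
    }

    /** Deletes only what this tracker created; files from other users stay. */
    void cleanUp() {
        while (!createdPaths.isEmpty()) {
            // File#delete() refuses to remove a non-empty directory, i.e. a
            // directory that still holds someone else's files; leaving it in
            // place is exactly the behaviour asked for here.
            createdPaths.pop().delete();
        }
    }
}
{code}
Alternatively, the HistoryServer could place its cache in a freshly created, uniquely 
named subdirectory of historyserver.web.tmpdir and keep deleting that subdirectory 
recursively; either way the shared parent directory would survive a stop.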
 

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-17676) Is there some way to rollback the .out file of TaskManager

2020-05-14 Thread JieFang.He (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17107054#comment-17107054
 ] 

JieFang.He commented on FLINK-17676:


[~gjy] [~karmagyz], Yes, I made a mistake. In some cases, mainly in testing, 
the print API may be used, which causes this file to grow indefinitely.

> Is there some way to rollback the .out file of TaskManager
> --
>
> Key: FLINK-17676
> URL: https://issues.apache.org/jira/browse/FLINK-17676
> Project: Flink
>  Issue Type: Improvement
>Reporter: JieFang.He
>Priority: Major
>
> When the .print() API is used, all results are written to the .out file, but there 
> is no way to roll the .out file over.
>  
> out in flink-daemon.sh
> {code:java}
> // $JAVA_RUN $JVM_ARGS ${FLINK_ENV_JAVA_OPTS} "${log_setting[@]}" -classpath 
> "`manglePathList "$FLINK_TM_CLASSPATH:$INTERNAL_HADOOP_CLASSPATHS"`" 
> ${CLASS_TO_RUN} "${ARGS[@]}" > "$out" 200<&- 2>&1 < /dev/null &
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-17676) Is there some way to rollback the .out file of TaskManager

2020-05-13 Thread JieFang.He (Jira)
JieFang.He created FLINK-17676:
--

 Summary: Is there some way to rollback the .out file of TaskManager
 Key: FLINK-17676
 URL: https://issues.apache.org/jira/browse/FLINK-17676
 Project: Flink
  Issue Type: Improvement
Reporter: JieFang.He


When the .print() API is used, all results are written to the .out file, but there 
is no way to roll the .out file over.

 

out in flink-daemon.sh
{code:java}
// $JAVA_RUN $JVM_ARGS ${FLINK_ENV_JAVA_OPTS} "${log_setting[@]}" -classpath 
"`manglePathList "$FLINK_TM_CLASSPATH:$INTERNAL_HADOOP_CLASSPATHS"`" 
${CLASS_TO_RUN} "${ARGS[@]}" > "$out" 200<&- 2>&1 < /dev/null &
{code}
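Since the redirect above sends the process's stdout straight to the .out file, the 
file is not covered by the log4j rotation that handles the .log files. One job-side 
workaround is to route test output through the logging framework instead of print(), 
so the normal log rolling applies. A minimal sketch of that idea (the sink class is 
illustrative, not part of Flink):
{code:java}
import org.apache.flink.streaming.api.functions.sink.SinkFunction;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

/**
 * Illustrative sink, not part of Flink: writes records through SLF4J, so they
 * end up in the rotated taskmanager .log files instead of the unbounded .out
 * file that print() feeds via stdout.
 */
public class LoggingSink<T> implements SinkFunction<T> {

    private static final Logger LOG = LoggerFactory.getLogger(LoggingSink.class);

    @Override
    public void invoke(T value, Context context) {
        LOG.info("{}", value);
    }
}

// usage in a job, instead of stream.print():
//   stream.addSink(new LoggingSink<>());
{code}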
 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-16542) Nothing on the HistoryServer

2020-03-11 Thread JieFang.He (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16542?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

JieFang.He updated FLINK-16542:
---
Attachment: (was: 故障.JPG)

> Nothing on the HistoryServer
> 
>
> Key: FLINK-16542
> URL: https://issues.apache.org/jira/browse/FLINK-16542
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Web Frontend
>Affects Versions: 1.9.2
>Reporter: JieFang.He
>Priority: Major
> Attachments: HistoryServer.JPG
>
>
> I use Flink 1.9.2. When I open the address of the HistoryServer, it is blank



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-16542) Nothing on the HistoryServer

2020-03-11 Thread JieFang.He (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16542?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

JieFang.He updated FLINK-16542:
---
Attachment: HistoryServer.JPG

> Nothing on the HistoryServer
> 
>
> Key: FLINK-16542
> URL: https://issues.apache.org/jira/browse/FLINK-16542
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Web Frontend
>Affects Versions: 1.9.2
>Reporter: JieFang.He
>Priority: Major
> Attachments: HistoryServer.JPG
>
>
> I use Flink 1.9.2. When I open the address of the HistoryServer, it is blank



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-16542) Nothing on the HistoryServer

2020-03-11 Thread JieFang.He (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16542?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

JieFang.He updated FLINK-16542:
---
Attachment: 故障.JPG

> Nothing on the HistoryServer
> 
>
> Key: FLINK-16542
> URL: https://issues.apache.org/jira/browse/FLINK-16542
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Web Frontend
>Affects Versions: 1.9.2
>Reporter: JieFang.He
>Priority: Major
> Attachments: 故障.JPG
>
>
> I use Flink 1.9.2. When I open the address of the HistoryServer, it is blank



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-16542) Nothing on the HistoryServer

2020-03-11 Thread JieFang.He (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16542?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

JieFang.He updated FLINK-16542:
---
Description: I use the 1.9.2 Flink,When i open the address of 
HistoryServer, it is blank  (was: I use the 1.9.2 Flink,When i open the address 
of **HistoryServer, it is blank)

> Nothing on the HistoryServer
> 
>
> Key: FLINK-16542
> URL: https://issues.apache.org/jira/browse/FLINK-16542
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Web Frontend
>Affects Versions: 1.9.2
>Reporter: JieFang.He
>Priority: Major
>
> I use Flink 1.9.2. When I open the address of the HistoryServer, it is blank



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-16542) Nothing on the HistoryServer

2020-03-11 Thread JieFang.He (Jira)
JieFang.He created FLINK-16542:
--

 Summary: Nothing on the HistoryServer
 Key: FLINK-16542
 URL: https://issues.apache.org/jira/browse/FLINK-16542
 Project: Flink
  Issue Type: Bug
  Components: Runtime / Web Frontend
Affects Versions: 1.9.2
Reporter: JieFang.He


I use Flink 1.9.2. When I open the address of the HistoryServer, it is blank



--
This message was sent by Atlassian Jira
(v8.3.4#803005)