[
https://issues.apache.org/jira/browse/FLINK-7643?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16192630#comment-16192630
]
ASF GitHub Bot commented on FLINK-7643:
---------------------------------------
Github user StephanEwen commented on the issue:
https://github.com/apache/flink/pull/4776
@bowenli86 We need to fail lazily, because Flink should always be able to
work without MapR FS or HDFS being on the classpath.
With the current change, you can start Flink without any Hadoop
dependencies and it works fine; it only fails once you actually try to use HDFS.
Failing eagerly would mean Flink always fails to start when the MapR or Hadoop
classes are not on the classpath.
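The lazy-failure idea can be sketched roughly as follows (a hypothetical
illustration, not Flink's actual code; the class and method names here are
invented for the example): the optional dependency is looked up via
reflection only when its scheme is first requested, so a missing class
surfaces at use time rather than at process startup.

```java
// Hypothetical sketch of lazy classpath failure (not Flink's real API).
public class LazyFsLoader {

    // Resolve a filesystem for the given scheme. Hadoop classes are
    // only touched when an "hdfs" filesystem is actually requested.
    public static Object getFileSystem(String scheme) {
        if ("file".equals(scheme)) {
            // The local filesystem needs no Hadoop classes at all.
            return new Object();
        }
        try {
            // Fails with ClassNotFoundException only now, not at startup.
            return Class.forName("org.apache.hadoop.fs.FileSystem");
        } catch (ClassNotFoundException e) {
            throw new UnsupportedOperationException(
                "Scheme '" + scheme + "' needs Hadoop on the classpath", e);
        }
    }

    public static void main(String[] args) {
        // Startup and local filesystem use succeed without Hadoop...
        System.out.println("started: " + (getFileSystem("file") != null));
        // ...and only asking for "hdfs" can fail, lazily.
        try {
            getFileSystem("hdfs");
            System.out.println("hdfs available");
        } catch (UnsupportedOperationException e) {
            System.out.println("hdfs failed lazily");
        }
    }
}
```

An eager design would instead run the `Class.forName` lookup in a static
initializer, making the whole process fail whenever Hadoop is absent.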
BTW: That behavior is the same as right now - I did not change it, just
> Configure FileSystems only once
> -------------------------------
>
> Key: FLINK-7643
> URL: https://issues.apache.org/jira/browse/FLINK-7643
> Project: Flink
> Issue Type: Bug
> Components: State Backends, Checkpointing
> Affects Versions: 1.4.0
> Reporter: Ufuk Celebi
> Assignee: Stephan Ewen
>
> HadoopFileSystem always reloads GlobalConfiguration, which potentially leads
> to a lot of noise in the logs, because this happens on each checkpoint.
> Instead, file systems should be configured once upon process startup, when
> the configuration is loaded.
> This will also increase efficiency of checkpoints, as it avoids redundant
> parsing for each data chunk.
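> The configure-once pattern described above can be sketched like this (a
> minimal, hypothetical illustration, not Flink's actual implementation):
> the expensive configuration parse runs exactly once per process, no
> matter how many checkpoints ask for it.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch (not Flink's code): parse configuration once at
// startup instead of reloading it on every checkpoint.
public class ConfigureOnce {

    // Counts how often the expensive parse actually runs.
    static final AtomicInteger loads = new AtomicInteger();

    // The JVM initializes Holder.CONFIG exactly once, on first access.
    static class Holder {
        static final String CONFIG = loadConfiguration();
    }

    static String loadConfiguration() {
        loads.incrementAndGet(); // stands in for expensive file parsing
        return "parsed-config";
    }

    static String getConfig() {
        return Holder.CONFIG;
    }

    public static void main(String[] args) {
        // Simulate many checkpoints; the config is parsed only once.
        for (int i = 0; i < 1000; i++) {
            getConfig();
        }
        System.out.println("loads=" + loads.get());
    }
}
```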
--
This message was sent by Atlassian JIRA
(v6.4.14#64029)