Github user yew1eb commented on the issue:

    https://github.com/apache/flink/pull/6118
  
    Yes, this is a Hadoop file-system discovery issue (similar case:
    https://stackoverflow.com/questions/17265002/hadoop-no-filesystem-for-scheme-file).

    But if the Flink job depends on `hadoop-common` while the Flink cluster uses
    HDFS to store checkpoints, the job throws an error when initializing the
    file system for the checkpoint:
    
![image](https://user-images.githubusercontent.com/4133864/40985981-c101bb3a-6917-11e8-82a4-5c62e2fd7ec0.png)
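
    For context, here is a quick classpath probe (a hypothetical helper class, not part of Flink or Hadoop) that lists which Hadoop `FileSystem` implementations `java.util.ServiceLoader` can actually discover. When the job jar bundles its own `hadoop-common`, its `META-INF/services/org.apache.hadoop.fs.FileSystem` entries can shadow the ones the cluster ships, so expected schemes simply disappear:
    ```
    import java.util.ServiceLoader;

    import org.apache.hadoop.fs.FileSystem;

    public class FsDiscoveryProbe {
        public static void main(String[] args) {
            // ServiceLoader reads META-INF/services/org.apache.hadoop.fs.FileSystem
            // from every jar on the classpath; a bundled hadoop-common can
            // override the entries the cluster provides.
            for (FileSystem fs : ServiceLoader.load(FileSystem.class)) {
                System.out.println(fs.getClass().getName());
            }
        }
    }
    ```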
    
    I think we should improve the `load file system factories` part.
    See this snippet from `org.apache.flink.core.fs.FileSystem`:
    ```
        /** All available file system factories. */
        private static final List<FileSystemFactory> RAW_FACTORIES = loadFileSystems();

        /** Mapping of file system schemes to the corresponding factories,
         * populated in {@link FileSystem#initialize(Configuration)}. */
        private static final HashMap<String, FileSystemFactory> FS_FACTORIES = new HashMap<>();

        /** The default factory that is used when no scheme matches. */
        private static final FileSystemFactory FALLBACK_FACTORY = loadHadoopFsFactory();
    ```
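
    As a rough sketch of my reading of that discovery step (hedged, not the actual Flink implementation): the factories come from `java.util.ServiceLoader`, and any scheme without a registered factory silently falls through to the Hadoop fallback, which is exactly where the error above surfaces:
    ```
    import java.util.ArrayList;
    import java.util.List;
    import java.util.Map;
    import java.util.ServiceLoader;

    import org.apache.flink.core.fs.FileSystemFactory;

    class FactoryLoadingSketch {

        // Discover every factory whose META-INF/services entry is visible
        // to the current classloader.
        static List<FileSystemFactory> loadFileSystems() {
            List<FileSystemFactory> factories = new ArrayList<>();
            for (FileSystemFactory factory : ServiceLoader.load(FileSystemFactory.class)) {
                factories.add(factory);
            }
            return factories;
        }

        // Resolve a scheme; with no match, everything goes to the Hadoop
        // fallback, so a shadowed hadoop-common only fails at this point.
        static FileSystemFactory factoryFor(
                String scheme,
                Map<String, FileSystemFactory> fsFactories,
                FileSystemFactory fallbackFactory) {
            FileSystemFactory factory = fsFactories.get(scheme);
            return factory != null ? factory : fallbackFactory;
        }
    }
    ```
    Making `loadFileSystems()` report which factories were found, and from which classloader, would make this failure mode much easier to diagnose.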
    
    @StephanEwen, what do you think about this?
    


