[ 
https://issues.apache.org/jira/browse/FLINK-15378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Piotr Nowojski updated FLINK-15378:
-----------------------------------
    Fix Version/s:     (was: 1.17.0)

> StreamFileSystemSink should support multiple HDFS plugins.
> -----------------------------------------------------------
>
>                 Key: FLINK-15378
>                 URL: https://issues.apache.org/jira/browse/FLINK-15378
>             Project: Flink
>          Issue Type: Improvement
>          Components: Connectors / FileSystem, FileSystems
>    Affects Versions: 1.9.2, 1.10.0
>            Reporter: ouyangwulin
>            Priority: Minor
>              Labels: auto-deprioritized-major, pull-request-available
>         Attachments: jobmananger.log
>
>          Time Spent: 10m
>  Remaining Estimate: 0h
>
> [As reported on the mailing list|https://lists.apache.org/thread.html/7a6b1e341bde0ef632a82f8d46c9c93da358244b6bac0d8d544d11cb%40%3Cuser.flink.apache.org%3E]
> Request 1:  FileSystem plugins should not affect the default YARN dependencies.
> Request 2:  StreamFileSystemSink should support multiple HDFS plugins under the 
> same scheme.
> Problem description:
>     When I put a filesystem plugin into FLINK_HOME/plugins, where the class 
> '*com.filesystem.plugin.FileSystemFactoryEnhance*' implements 
> '*FileSystemFactory*', the JM on startup calls 
> FileSystem.initialize(configuration, 
> PluginUtils.createPluginManagerFromRootFolder(configuration)) to load the 
> factories into the map FileSystem#FS_FACTORIES, whose key is only the 
> scheme. The TM/JM use the local Hadoop conf A, while the user code in the 
> filesystem plugin uses Hadoop conf B; conf A and conf B point to different 
> Hadoop clusters. The JM then fails to start, because the BlobServer in the 
> JM loads conf B to get the filesystem. The full log is attached as an 
> appendix.
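>  
> To make the failure mode concrete, here is a minimal, illustrative 
> sketch (not Flink's actual code; plain Strings stand in for 
> FileSystemFactory instances): with a map keyed only by scheme, 
> whichever factory is registered last for 'hdfs' wins, so every lookup, 
> including the BlobServer's, resolves the plugin's conf B.
> {code:java}
> import java.util.HashMap;
> import java.util.Map;
> 
> public class SchemeOnlyKeyDemo {
>     public static void main(String[] args) {
>         // Strings stand in for FileSystemFactory instances.
>         Map<String, String> fsFactories = new HashMap<>();
>         fsFactories.put("hdfs", "default factory (Hadoop conf A)");
>         // The plugin factory registers under the same scheme and
>         // silently replaces the default one.
>         fsFactories.put("hdfs", "plugin factory (Hadoop conf B)");
>         // Prints the conf B factory -- the conf A one is gone.
>         System.out.println(fsFactories.get("hdfs"));
>     }
> }
> {code}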
>  
> Proposed resolution:
>     Use the scheme plus a specific identifier as the key for 
> FileSystem#FS_FACTORIES.
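>  
> A minimal sketch of the proposed keying, assuming a hypothetical 
> FactoryKey class (the names are illustrative, not Flink's actual API): 
> pairing the scheme with a plugin-specific identifier lets two HDFS 
> factories backed by different Hadoop confs be registered side by side.
> {code:java}
> import java.util.HashMap;
> import java.util.Map;
> import java.util.Objects;
> 
> public class CompositeKeyDemo {
>     // Hypothetical composite key: scheme + plugin identifier.
>     static final class FactoryKey {
>         final String scheme;     // e.g. "hdfs"
>         final String identifier; // distinguishes plugins with the same scheme
> 
>         FactoryKey(String scheme, String identifier) {
>             this.scheme = scheme;
>             this.identifier = identifier;
>         }
> 
>         @Override
>         public boolean equals(Object o) {
>             return o instanceof FactoryKey
>                     && scheme.equals(((FactoryKey) o).scheme)
>                     && identifier.equals(((FactoryKey) o).identifier);
>         }
> 
>         @Override
>         public int hashCode() {
>             return Objects.hash(scheme, identifier);
>         }
>     }
> 
>     public static void main(String[] args) {
>         // Strings stand in for FileSystemFactory instances.
>         Map<FactoryKey, String> fsFactories = new HashMap<>();
>         fsFactories.put(new FactoryKey("hdfs", "default"), "Hadoop conf A");
>         fsFactories.put(new FactoryKey("hdfs", "pluginB"), "Hadoop conf B");
>         // Both factories coexist; the BlobServer can keep using conf A.
>         System.out.println(fsFactories.get(new FactoryKey("hdfs", "default")));
>         System.out.println(fsFactories.get(new FactoryKey("hdfs", "pluginB")));
>     }
> }
> {code}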
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
