GitHub user NicoK opened a pull request:

    https://github.com/apache/flink/pull/4939

    [FLINK-4228][yarn/s3a] fix yarn resource upload s3a defaultFs

    ## What is the purpose of the change
    
    If YARN is configured to use the `s3a` default file system, uploading the 
Flink jars fails because Hadoop's 
`org.apache.hadoop.fs.FileSystem#copyFromLocalFile()` does not recurse into 
the given `lib` folder.
    
    ## Brief change log
    
    - implement our own recursive upload (based on #2288)
    - add unit tests to verify its behaviour for both `hdfs://` and `s3://` 
(via S3A) resource uploads
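
    The recursive upload mentioned above can be sketched roughly as follows. 
This is only an illustration of the directory walk: it uses `java.nio.file` 
in place of Hadoop's `FileSystem` API, and `uploadRecursively` is a 
hypothetical name, not the method this PR actually adds.

    ```java
    import java.io.IOException;
    import java.nio.file.DirectoryStream;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.StandardCopyOption;

    public class RecursiveUpload {

        // Sketch of the idea: since copyFromLocalFile() does not descend into
        // sub-directories on some FileSystem implementations (e.g. S3A), the
        // caller has to walk the local tree itself and copy file by file.
        static void uploadRecursively(Path localDir, Path targetDir) throws IOException {
            Files.createDirectories(targetDir);
            try (DirectoryStream<Path> entries = Files.newDirectoryStream(localDir)) {
                for (Path entry : entries) {
                    Path target = targetDir.resolve(entry.getFileName().toString());
                    if (Files.isDirectory(entry)) {
                        // recurse into sub-directories instead of skipping them
                        uploadRecursively(entry, target);
                    } else {
                        Files.copy(entry, target, StandardCopyOption.REPLACE_EXISTING);
                    }
                }
            }
        }

        public static void main(String[] args) throws IOException {
            // build a small "lib" tree with a nested directory
            Path src = Files.createTempDirectory("lib");
            Files.createDirectories(src.resolve("sub"));
            Files.writeString(src.resolve("flink-dist.jar"), "jar");
            Files.writeString(src.resolve("sub").resolve("log4j.jar"), "jar");

            Path dst = Files.createTempDirectory("staging");
            uploadRecursively(src, dst);

            System.out.println(Files.exists(dst.resolve("flink-dist.jar")));
            System.out.println(Files.exists(dst.resolve("sub").resolve("log4j.jar")));
        }
    }
    ```

    In the real change the copy targets a remote `FileSystem` (HDFS or S3A) 
rather than the local disk, but the recursion structure is the same.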
    
    ## Verifying this change
    
    This change added tests and can be verified as follows:
    
    - added a unit test for HDFS uploads via our `MiniDFSCluster`
    - added integration test to verify S3 uploads (via the S3A filesystem 
implementation of the `flink-s3-fs-hadoop` sub-project)
    - manually verified the change on YARN with both S3A and HDFS configured 
as the default file system
    
    ## Does this pull request potentially affect one of the following parts:
    
      - Dependencies (does it add or upgrade a dependency): (yes - internally)
      - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
      - The serializers: (no)
      - The runtime per-record code paths (performance sensitive): (no)
      - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (yes)
    
    ## Documentation
    
      - Does this pull request introduce a new feature? (no)
      - If yes, how is the feature documented? (JavaDocs)


You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/NicoK/flink flink-4228

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/flink/pull/4939.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #4939
    
----
commit 5d31f41e0e480820e9fec1efa84e5725364a136d
Author: Nico Kruber <n...@data-artisans.com>
Date:   2017-11-02T18:38:48Z

    [hotfix][s3] fix HadoopS3FileSystemITCase leaving test directories behind 
in S3

commit bf47d376397a8e64625a031468d5f5d0a5486238
Author: Nico Kruber <n...@data-artisans.com>
Date:   2016-11-09T20:04:50Z

    [FLINK-4228][yarn/s3] fix for yarn staging with s3a defaultFs
    
    + includes a new unit test for recursive uploads to hdfs:// targets
    + add a unit test for recursive file uploads to s3:// via s3a

----

