[ https://issues.apache.org/jira/browse/HADOOP-13972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16362135#comment-16362135 ]

ASF GitHub Bot commented on HADOOP-13972:
-----------------------------------------

GitHub user ssonker opened a pull request:

    https://github.com/apache/hadoop/pull/339

    HADOOP-13972 Supporting per-account configuration for ADL

    

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/ssonker/hadoop trunk

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/hadoop/pull/339.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #339
    
----
commit 6f401a301d56a459af5d90c10d7a32e480d36915
Author: Sharad Sonker <ssonker@...>
Date:   2018-02-13T10:43:34Z

    HADOOP-13972 Supporting per-account configuration for ADL

----


> ADLS to support per-store configuration
> ---------------------------------------
>
>                 Key: HADOOP-13972
>                 URL: https://issues.apache.org/jira/browse/HADOOP-13972
>             Project: Hadoop Common
>          Issue Type: Improvement
>          Components: fs/adl
>    Affects Versions: 3.0.0-alpha2
>            Reporter: John Zhuge
>            Priority: Major
>
> Useful when distcp needs to access two Data Lake stores with different SPIs.
> A workaround is to grant the same SPI access permission to both stores, but 
> that is not always feasible.
> One idea is to embed the store name in the configuration property names, 
> e.g., {{dfs.adls.oauth2.<store>.client.id}}. The per-store key would be 
> consulted first, falling back to the global key if it is not set.
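
For illustration only, a minimal Java sketch of the fallback described in the
quoted description, assuming the key pattern it suggests
({{dfs.adls.oauth2.<store>.client.id}}). The helper and property names below are
hypothetical and may not match what the pull request actually implements.

    // Sketch of per-store key resolution with fallback to the global key.
    // Key pattern taken from the issue description; names are illustrative only.
    import org.apache.hadoop.conf.Configuration;

    public class PerStoreConfigSketch {

      // Consult the per-store key first, then fall back to the global key.
      static String resolve(Configuration conf, String store, String baseKey) {
        // e.g. baseKey     = "dfs.adls.oauth2.client.id"
        //      perStoreKey = "dfs.adls.oauth2.<store>.client.id"
        String prefix = "dfs.adls.oauth2.";
        String suffix = baseKey.substring(prefix.length());
        String perStoreKey = prefix + store + "." + suffix;

        String value = conf.get(perStoreKey);
        return (value != null) ? value : conf.get(baseKey);
      }

      public static void main(String[] args) {
        Configuration conf = new Configuration(false);
        conf.set("dfs.adls.oauth2.client.id", "global-client");
        conf.set("dfs.adls.oauth2.storeA.client.id", "storeA-client");

        // storeA has a per-store key, storeB falls back to the global key.
        System.out.println(resolve(conf, "storeA", "dfs.adls.oauth2.client.id"));
        System.out.println(resolve(conf, "storeB", "dfs.adls.oauth2.client.id"));
      }
    }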


