[https://issues.apache.org/jira/browse/HADOOP-18330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17566029#comment-17566029]

Ashutosh Pant commented on HADOOP-18330:
----------------------------------------

[~ste...@apache.org] Sorry for creating another PR, but I was finally able to 
test the changes and resolve the errors in them. I tested by packaging the jar 
and using it to read from and write to an S3 bucket. Please review [GitHub 
Pull Request #4557|https://github.com/apache/hadoop/pull/4557] and let me know 
if any changes need to be made!

 

> S3AFileSystem removes Path when calling createS3Client
> ------------------------------------------------------
>
>                 Key: HADOOP-18330
>                 URL: https://issues.apache.org/jira/browse/HADOOP-18330
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: fs/s3
>    Affects Versions: 3.3.0, 3.3.1, 3.3.2, 3.3.3
>            Reporter: Ashutosh Pant
>            Assignee: Ashutosh Pant
>            Priority: Minor
>              Labels: pull-request-available
>          Time Spent: 1h
>  Remaining Estimate: 0h
>
> When using Hadoop and Spark to read/write data from an S3 bucket such as 
> s3a://bucket/path with a custom credentials provider, the path is removed 
> from the s3a URI and the credentials provider fails because the full path 
> is gone.
> In Spark 3.2, the client was created as:
> s3 = ReflectionUtils.newInstance(s3ClientFactoryClass, conf)
>     .createS3Client(name, bucket, credentials);
> But in Spark 3.3.3, it is created as:
> s3 = ReflectionUtils.newInstance(s3ClientFactoryClass, conf)
>     .createS3Client(getUri(), parameters);
> getUri() removes the path from the s3a URI.
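
To illustrate the failure mode described in the quoted report, here is a 
minimal, hypothetical sketch of a custom credentials provider that depends on 
the path component of the filesystem URI (the class name and the lookup helper 
are made up; it assumes S3A constructs the provider through a 
(URI, Configuration) constructor). Once getUri() strips the path, 
uri.getPath() is empty and the lookup cannot succeed:

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import com.amazonaws.auth.AWSCredentials;
import com.amazonaws.auth.AWSCredentialsProvider;

// Hypothetical illustration only: a provider that derives credentials from
// the object-store path rather than from the bucket alone.
public class PathScopedCredentialsProvider implements AWSCredentialsProvider {
  private final String path;

  // S3A can instantiate credential providers via a (URI, Configuration)
  // constructor; the URI it passes is the filesystem URI.
  public PathScopedCredentialsProvider(URI uri, Configuration conf) {
    // With the path stripped (s3a://bucket only), getPath() is empty.
    this.path = uri != null ? uri.getPath() : null;
  }

  @Override
  public AWSCredentials getCredentials() {
    if (path == null || path.isEmpty() || "/".equals(path)) {
      throw new IllegalStateException(
          "s3a URI has no path component; cannot resolve path-scoped credentials");
    }
    return lookupCredentialsForPath(path); // hypothetical helper
  }

  @Override
  public void refresh() {
    // No cached state in this sketch.
  }

  // Placeholder for whatever service a real provider would consult.
  private AWSCredentials lookupCredentialsForPath(String p) {
    throw new UnsupportedOperationException("sketch only: " + p);
  }
}

The point is simply that once getUri() drops the path, a provider of this 
shape has nothing to key on, which matches the behaviour reported above.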


