[ 
https://issues.apache.org/jira/browse/SPARK-38958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17817558#comment-17817558
 ] 

ASF GitHub Bot commented on SPARK-38958:
----------------------------------------

AbhiAMZ commented on code in PR #6550:
URL: https://github.com/apache/hadoop/pull/6550#discussion_r1490209084


##########
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/AWSClientConfig.java:
##########
@@ -407,6 +413,48 @@ private static void initSigner(Configuration conf,
     }
   }
 
+  /**
+   * Configures custom request headers for the given AWS service.
+   *
+   * @param conf hadoop configuration
+   * @param clientConfig client configuration to update
+   * @param awsServiceIdentifier service name
+   */
+  private static void initRequestHeaders(Configuration conf,
+      ClientOverrideConfiguration.Builder clientConfig, String awsServiceIdentifier) {
+    String configKey = null;
+    switch (awsServiceIdentifier) {
+      case AWS_SERVICE_IDENTIFIER_S3:
+        configKey = CUSTOM_HEADERS_S3;
+        break;
+      case AWS_SERVICE_IDENTIFIER_STS:
+        configKey = CUSTOM_HEADERS_STS;
+        break;
+      default:
+        // Nothing to do. No custom header configuration exists for this service.
+    }
+    if (configKey != null) {
+      String[] customHeaders = conf.getTrimmedStrings(configKey);
+      if (customHeaders == null || customHeaders.length == 0) {
+        LOG.debug("No custom headers specified");
+        return;
+      }
+
+      for (String customHeader : customHeaders) {

Review Comment:
   Unknown or malformed header entries are silently ignored here.
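
For illustration, a minimal, self-contained sketch of what the truncated loop body might do: parse each configured entry as "Header-Name=value", warn on malformed entries instead of dropping them silently, and register the rest on the client builder. The helper class name, the entry format, and the warning behaviour are assumptions for this sketch, not the PR's actual code; ClientOverrideConfiguration.Builder#putHeader is the AWS SDK v2 call that attaches a header to every request.

import org.apache.hadoop.conf.Configuration;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import software.amazon.awssdk.core.client.config.ClientOverrideConfiguration;

public final class CustomHeadersSketch {
  private static final Logger LOG =
      LoggerFactory.getLogger(CustomHeadersSketch.class);

  private CustomHeadersSketch() {
  }

  // Sketch only: parses entries of the assumed form "Header-Name=value" and
  // applies them to the client override configuration.
  static void applyCustomHeaders(Configuration conf, String configKey,
      ClientOverrideConfiguration.Builder clientConfig) {
    String[] customHeaders = conf.getTrimmedStrings(configKey);
    if (customHeaders.length == 0) {
      LOG.debug("No custom headers specified");
      return;
    }
    for (String customHeader : customHeaders) {
      int eq = customHeader.indexOf('=');
      if (eq <= 0 || eq == customHeader.length() - 1) {
        // Surface entries that do not match the expected format rather than
        // ignoring them, which is the concern raised in the review comment.
        LOG.warn("Ignoring malformed custom header entry: {}", customHeader);
        continue;
      }
      String name = customHeader.substring(0, eq).trim();
      String value = customHeader.substring(eq + 1).trim();
      // putHeader(name, value) makes the SDK send this header on every
      // request issued by clients built with this override configuration.
      clientConfig.putHeader(name, value);
    }
  }
}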





> Override S3 Client in Spark Write/Read calls
> --------------------------------------------
>
>                 Key: SPARK-38958
>                 URL: https://issues.apache.org/jira/browse/SPARK-38958
>             Project: Spark
>          Issue Type: New Feature
>          Components: Spark Core
>    Affects Versions: 3.2.1
>            Reporter: Hershal
>            Priority: Major
>              Labels: pull-request-available
>
> Hello,
> I have been working to use spark to read and write data to S3. Unfortunately, 
> there are a few S3 headers that I need to add to my spark read/write calls. 
> After much looking, I have not found a way to replace the S3 client that 
> spark uses to make the read/write calls. I also have not found a 
> configuration that allows me to pass in S3 headers. Here is an example of 
> some common S3 request headers 
> ([https://docs.aws.amazon.com/AmazonS3/latest/API/RESTCommonRequestHeaders.html]).
>  Does there already exist functionality to add S3 headers to spark read/write 
> calls or pass in a custom client that would pass these headers on every 
> read/write request? Appreciate the help and feedback
>  
> Thanks,



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org
