[ https://issues.apache.org/jira/browse/HADOOP-18708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17870515#comment-17870515 ]

ASF GitHub Bot commented on HADOOP-18708:
-----------------------------------------

steveloughran commented on code in PR #6164:
URL: https://github.com/apache/hadoop/pull/6164#discussion_r1701839472


##########
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java:
##########
@@ -1034,6 +1035,21 @@ private void bindAWSClient(URI name, boolean dtEnabled) throws IOException {
         S3_CLIENT_FACTORY_IMPL, DEFAULT_S3_CLIENT_FACTORY_IMPL,
         S3ClientFactory.class);
 
+    S3ClientFactory clientFactory;
+    CSEMaterials cseMaterials = null;
+
+    if (isCSEEnabled) {
+      String kmsKeyId = getS3EncryptionKey(bucket, conf, true);
+
+      cseMaterials  = new CSEMaterials()
+          .withCSEKeyType(CSEMaterials.CSEKeyType.KMS)
+          .withKmsKeyId(kmsKeyId);
+
+    clientFactory = ReflectionUtils.newInstance(EncryptionS3ClientFactory.class, conf);

Review Comment:
   So if encryption is set, no custom client factory (used in places for unit
tests) works. Proposed: warn if CSE is enabled and the value of
`s3ClientFactoryClass` is not the default.
   
   The other way would be to choose the default client factory based on the
encryption flag, so tests will always override it (the documentation will have
to cover this). I think I prefer that approach.
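
   A minimal sketch of that second option, for illustration only: derive the
default factory class from the CSE flag, so that a factory supplied through
`fs.s3a.s3.client.factory.impl` (e.g. by tests) still takes precedence. The
names come from the diff and existing constants; the exact wiring here is an
assumption, not the final patch.

```java
// Sketch only (not the final patch): pick the *default* factory class from
// the CSE flag; a custom factory set in configuration still overrides it.
Class<? extends S3ClientFactory> defaultFactory = isCSEEnabled
    ? EncryptionS3ClientFactory.class
    : DEFAULT_S3_CLIENT_FACTORY_IMPL;
Class<? extends S3ClientFactory> factoryClass = conf.getClass(
    S3_CLIENT_FACTORY_IMPL, defaultFactory, S3ClientFactory.class);
S3ClientFactory clientFactory = ReflectionUtils.newInstance(factoryClass, conf);
```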



##########
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java:
##########
@@ -1034,6 +1035,21 @@ private void bindAWSClient(URI name, boolean dtEnabled) throws IOException {
         S3_CLIENT_FACTORY_IMPL, DEFAULT_S3_CLIENT_FACTORY_IMPL,
         S3ClientFactory.class);
 
+    S3ClientFactory clientFactory;
+    CSEMaterials cseMaterials = null;

Review Comment:
   This stuff MUST pick up what is in EncryptionSecrets (which you can extend,
but which MUST NOT use any AWS SDK classes). That is what gets passed around
with delegation tokens, so it lets you pass secrets with a job without the
cluster having the config.
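
   For illustration, a rough sketch of what picking the key up from the
delegation-token secrets might look like. The `getEncryptionSecrets()` call and
the `EncryptionSecrets` getters used below are assumptions, and
`EncryptionSecrets` itself stays free of AWS SDK types.

```java
// Sketch only: build the CSE materials from the EncryptionSecrets that travel
// with delegation tokens, instead of re-reading the bucket configuration.
// Accessor names are assumptions for illustration.
EncryptionSecrets secrets = getEncryptionSecrets();
CSEMaterials cseMaterials = null;
if (secrets.getEncryptionMethod() == S3AEncryptionMethods.CSE_KMS) {
  cseMaterials = new CSEMaterials()
      .withCSEKeyType(CSEMaterials.CSEKeyType.KMS)
      .withKmsKeyId(secrets.getEncryptionKey());
}
```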



##########
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3ABlockOutputStream.java:
##########
@@ -876,11 +877,17 @@ private void uploadBlockAsync(final S3ADataBlocks.DataBlock block,
             ? RequestBody.fromFile(uploadData.getFile())
             : RequestBody.fromInputStream(uploadData.getUploadStream(), size);
 
-        request = writeOperationHelper.newUploadPartRequestBuilder(
+        UploadPartRequest.Builder requestBuilder = writeOperationHelper.newUploadPartRequestBuilder(
             key,
             uploadId,
             currentPartNumber,
-            size).build();
+            size);
+
+        if (isLast) {

Review Comment:
   Add this as a param to pass down to WriteOperationHelper and let the
factory do the work.
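
   A hedged sketch of that suggestion: hand `isLast` to
`WriteOperationHelper.newUploadPartRequestBuilder()` and let the request
factory deal with it. The extra boolean parameter shown here is an assumed
signature, not the one in the PR.

```java
// Sketch only: the extra isLast parameter is an assumption; the idea is that
// WriteOperationHelper forwards it to the request factory, which applies any
// CSE-specific handling for the final part.
request = writeOperationHelper.newUploadPartRequestBuilder(
        key,
        uploadId,
        currentPartNumber,
        isLast,
        size)
    .build();
```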





> AWS SDK V2 - Implement CSE
> --------------------------
>
>                 Key: HADOOP-18708
>                 URL: https://issues.apache.org/jira/browse/HADOOP-18708
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: 3.4.0
>            Reporter: Ahmar Suhail
>            Assignee: Syed Shameerur Rahman
>            Priority: Major
>              Labels: pull-request-available
>
> S3 Encryption client for SDK V2 is now available, so add client side 
> encryption back in. 


