steveloughran commented on code in PR #6164:
URL: https://github.com/apache/hadoop/pull/6164#discussion_r1701839472
##########
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java:
##########
@@ -1034,6 +1035,21 @@ private void bindAWSClient(URI name, boolean dtEnabled) throws IOException {
         S3_CLIENT_FACTORY_IMPL,
         DEFAULT_S3_CLIENT_FACTORY_IMPL,
         S3ClientFactory.class);
+    S3ClientFactory clientFactory;
+    CSEMaterials cseMaterials = null;
+
+    if (isCSEEnabled) {
+      String kmsKeyId = getS3EncryptionKey(bucket, conf, true);
+
+      cseMaterials = new CSEMaterials()
+          .withCSEKeyType(CSEMaterials.CSEKeyType.KMS)
+          .withKmsKeyId(kmsKeyId);
+
+      clientFactory = ReflectionUtils.newInstance(EncryptionS3ClientFactory.class, conf);

Review Comment:
   so if encryption is set, no custom client factory (done in places for unit tests) will work.

   proposed: warn if CSE is enabled and the value of `s3ClientFactoryClass` is not the default.

   the other way would be to choose the default client factory based on the encryption flag, so tests will always override it (the documentation will have to cover it). I think I prefer that; there is a sketch of it below.

##########
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java:
##########
@@ -1034,6 +1035,21 @@ private void bindAWSClient(URI name, boolean dtEnabled) throws IOException {
         S3_CLIENT_FACTORY_IMPL,
         DEFAULT_S3_CLIENT_FACTORY_IMPL,
         S3ClientFactory.class);
+    S3ClientFactory clientFactory;
+    CSEMaterials cseMaterials = null;

Review Comment:
   this stuff MUST pick up what is in EncryptionSecrets (which you can extend, but which MUST NOT use any AWS SDK classes). this is what gets passed round with delegation tokens, so it lets you pass secrets with a job even when the cluster doesn't have the config (sketch below).

##########
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3ABlockOutputStream.java:
##########
@@ -876,11 +877,17 @@ private void uploadBlockAsync(final S3ADataBlocks.DataBlock block,
             ? RequestBody.fromFile(uploadData.getFile())
             : RequestBody.fromInputStream(uploadData.getUploadStream(), size);

-        request = writeOperationHelper.newUploadPartRequestBuilder(
+        UploadPartRequest.Builder requestBuilder = writeOperationHelper.newUploadPartRequestBuilder(
             key,
             uploadId,
             currentPartNumber,
-            size).build();
+            size);
+
+        if (isLast) {

Review Comment:
   add this as a param to pass down to WriteOperationHelper and let the factory do the work (sketch below).
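For the client factory comment above, a minimal sketch of the option I prefer: derive the default factory class from the encryption flag, so an explicit setting (e.g. from a unit test) still wins. The fallback to `DefaultS3ClientFactory` and the exact warning wording are illustrative assumptions, not the final code:

```java
// sketch only: default the factory class from the CSE flag, so an explicit
// fs.s3a.s3.client.factory.impl setting (e.g. in a unit test) still overrides it.
Class<? extends S3ClientFactory> defaultFactory = isCSEEnabled
    ? EncryptionS3ClientFactory.class
    : DefaultS3ClientFactory.class;
Class<? extends S3ClientFactory> s3ClientFactoryClass = conf.getClass(
    S3_CLIENT_FACTORY_IMPL, defaultFactory, S3ClientFactory.class);
// and warn if CSE is on but the configured factory cannot encrypt
if (isCSEEnabled
    && !EncryptionS3ClientFactory.class.isAssignableFrom(s3ClientFactoryClass)) {
  LOG.warn("Client-side encryption is enabled but {} = {};"
          + " the factory must create an encrypting S3 client",
      S3_CLIENT_FACTORY_IMPL, s3ClientFactoryClass.getName());
}
S3ClientFactory clientFactory =
    ReflectionUtils.newInstance(s3ClientFactoryClass, conf);
```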
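For the EncryptionSecrets comment: roughly what picking the key up from the secrets rather than straight from the config could look like. The `getEncryptionSecrets()`, `hasEncryptionKey()` and `getEncryptionKey()` accessors are assumptions for illustration; whatever the real names, the point is that the materials come from the object serialized into delegation tokens, not from `conf`:

```java
// sketch, accessor names assumed: build the CSE materials from the
// EncryptionSecrets that travel with delegation tokens, so a job can carry
// the KMS key id even when the cluster config does not hold it.
EncryptionSecrets secrets = getEncryptionSecrets();
if (isCSEEnabled && secrets.hasEncryptionKey()) {
  cseMaterials = new CSEMaterials()
      .withCSEKeyType(CSEMaterials.CSEKeyType.KMS)
      .withKmsKeyId(secrets.getEncryptionKey());
}
```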
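And for the S3ABlockOutputStream hunk, the param-passing variant; the extra last-part argument to `newUploadPartRequestBuilder` is the proposed change, not the current signature:

```java
// sketch: thread the last-part flag down so the request factory sets
// whatever the SDK needs; no builder mutation at the call site.
UploadPartRequest request = writeOperationHelper.newUploadPartRequestBuilder(
    key,
    uploadId,
    currentPartNumber,
    isLast,    // proposed new parameter
    size)
    .build();
```

That keeps the SDK-specific request construction in one place, alongside where the other request options are applied.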