Mukund Thakur created HADOOP-17192:
--------------------------------------
Summary: ITestS3AHugeFilesSSECDiskBlocks failing because of bucket overrides
Key: HADOOP-17192
URL: https://issues.apache.org/jira/browse/HADOOP-17192
Project: Hadoop Common
Issue Type: Bug
Components: fs/s3
Affects Versions: 3.3.0
Reporter: Mukund Thakur
If we set the conf "fs.s3a.bucket.mthakur-data.server-side-encryption.key" in
our test config, the tests in ITestS3AHugeFilesSSECDiskBlocks fail: the
per-bucket configuration is propagated over the base configuration, so it
overwrites the value of the encryption key set by the test here:
[https://github.com/apache/hadoop/blob/81da221c757bef9ec35cd190f14b2f872324c661/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/scale/ITestS3AHugeFilesSSECDiskBlocks.java#L51]
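For illustration (the key values below are made up, and the per-bucket value is presumably something like a KMS key ARN, whose ':' characters are what the SSE-C base64 decoder rejects), the clash looks like this:

```xml
<!-- Set by the test base configuration: a valid base64 SSE-C key
     (hypothetical value). -->
<property>
  <name>fs.s3a.server-side-encryption.key</name>
  <value>SGVsbG8tYmFzZTY0LWtleS1leGFtcGxlPT0=</value>
</property>

<!-- Per-bucket override from the local test config. It is propagated
     over the base value when the filesystem is created; an ARN-style
     value contains ':' and is not valid base64, hence the
     IllegalArgumentException below. -->
<property>
  <name>fs.s3a.bucket.mthakur-data.server-side-encryption.key</name>
  <value>arn:aws:kms:eu-west-1:123456789012:key/0000-example</value>
</property>
```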
Full stack trace:
{code:java}
java.lang.IllegalArgumentException: Invalid base 64 character: ':'
	at com.amazonaws.util.Base64Codec.pos(Base64Codec.java:242)
	at com.amazonaws.util.Base64Codec.decode4bytes(Base64Codec.java:151)
	at com.amazonaws.util.Base64Codec.decode(Base64Codec.java:230)
	at com.amazonaws.util.Base64.decode(Base64.java:112)
	at com.amazonaws.services.s3.AmazonS3Client.populateSSE_C(AmazonS3Client.java:4379)
	at com.amazonaws.services.s3.AmazonS3Client.getObjectMetadata(AmazonS3Client.java:1318)
	at org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$getObjectMetadata$6(S3AFileSystem.java:1920)
	at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:407)
	at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:370)
	at org.apache.hadoop.fs.s3a.S3AFileSystem.getObjectMetadata(S3AFileSystem.java:1913)
	at org.apache.hadoop.fs.s3a.S3AFileSystem.getObjectMetadata(S3AFileSystem.java:1889)
	at org.apache.hadoop.fs.s3a.S3AFileSystem.s3GetFileStatus(S3AFileSystem.java:3027)
	at org.apache.hadoop.fs.s3a.S3AFileSystem.innerGetFileStatus(S3AFileSystem.java:2958)
	at org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:2842)
	at org.apache.hadoop.fs.s3a.S3AFileSystem.innerMkdirs(S3AFileSystem.java:2798)
	at org.apache.hadoop.fs.s3a.S3AFileSystem.mkdirs(S3AFileSystem.java:2772)
	at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:2369)
	at org.apache.hadoop.fs.contract.AbstractFSContractTestBase.mkdirs(AbstractFSContractTestBase.java:361)
	at org.apache.hadoop.fs.contract.AbstractFSContractTestBase.setup(AbstractFSContractTestBase.java:203)
	at org.apache.hadoop.fs.s3a.AbstractS3ATestBase.setup(AbstractS3ATestBase.java:59)
	at org.apache.hadoop.fs.s3a.scale.S3AScaleTestBase.setup(S3AScaleTestBase.java:90)
	at org.apache.hadoop.fs.s3a.scale.AbstractSTestS3AHugeFiles.setup(AbstractSTestS3AHugeFiles.java:78)
	at org.apache.hadoop.fs.s3a.scale.ITestS3AHugeFilesSSECDiskBlocks.setup(ITestS3AHugeFilesSSECDiskBlocks.java:41)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
	at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
	at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
	at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.lang.Thread.run(Thread.java:748){code}
I am not sure if we need to worry too much about this. We can just fix the
local test config.
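The propagation that causes this can be sketched with a plain map. This is a simplified stand-in for the real per-bucket propagation in S3AUtils, not the actual implementation: any "fs.s3a.bucket.BUCKET.option" entry overwrites the corresponding "fs.s3a.option" base value, so the key the test just set is silently replaced.

```java
import java.util.HashMap;
import java.util.Map;

// Simplified illustration of per-bucket option propagation (not the
// real S3AUtils code): each fs.s3a.bucket.BUCKET.option value is copied
// over the base fs.s3a.option value.
public class BucketOverrideDemo {
    static Map<String, String> propagate(Map<String, String> conf, String bucket) {
        Map<String, String> out = new HashMap<>(conf);
        String prefix = "fs.s3a.bucket." + bucket + ".";
        for (Map.Entry<String, String> e : conf.entrySet()) {
            if (e.getKey().startsWith(prefix)) {
                // Per-bucket value wins over the base value.
                out.put("fs.s3a." + e.getKey().substring(prefix.length()),
                        e.getValue());
            }
        }
        return out;
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>();
        // Base value the test sets (hypothetical valid base64 SSE-C key):
        conf.put("fs.s3a.server-side-encryption.key", "dGVzdGtleQ==");
        // Per-bucket override from the local test config (hypothetical
        // ARN-style value; not valid base64 because of the ':'s):
        conf.put("fs.s3a.bucket.mthakur-data.server-side-encryption.key",
                 "arn:aws:kms:eu-west-1:123456789012:key/example");
        Map<String, String> effective = propagate(conf, "mthakur-data");
        // Prints the ARN, not the base64 key the test set.
        System.out.println(effective.get("fs.s3a.server-side-encryption.key"));
    }
}
```

If this helper is available on the branch, having the test setup clear both the base and per-bucket values before setting the SSE-C key (e.g. via S3ATestUtils.removeBaseAndBucketOverrides) would make the test robust against such local overrides.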
CC [[email protected]]
--
This message was sent by Atlassian Jira
(v8.3.4#803005)