[ https://issues.apache.org/jira/browse/HADOOP-18684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17706817#comment-17706817 ]

ASF GitHub Bot commented on HADOOP-18684:
-----------------------------------------

steveloughran commented on code in PR #5521:
URL: https://github.com/apache/hadoop/pull/5521#discussion_r1153027000


##########
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AUrlScheme.java:
##########
@@ -0,0 +1,31 @@
+package org.apache.hadoop.fs.s3a;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.junit.Test;
+
+import java.io.IOException;
+
+public class ITestS3AUrlScheme extends AbstractS3ATestBase{
+
+  @Override
+  public void setup() throws Exception {
+    super.setup();
+  }
+
+  @Override
+  protected Configuration createConfiguration() {
+    Configuration conf = super.createConfiguration();
+    conf.set("fs.s3a.impl", "org.apache.hadoop.fs.s3a.S3AFileSystem");

Review Comment:
   this is the default. why are you setting it again?



##########
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AUrlScheme.java:
##########
@@ -0,0 +1,31 @@
+package org.apache.hadoop.fs.s3a;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.junit.Test;
+
+import java.io.IOException;
+
+public class ITestS3AUrlScheme extends AbstractS3ATestBase{
+
+  @Override

Review Comment:
   not needed



##########
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AUrlScheme.java:
##########
@@ -0,0 +1,31 @@
+package org.apache.hadoop.fs.s3a;

Review Comment:
   yetus will reject this without the asf copyright header
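   For reference, the header Yetus checks for is the standard Apache License 2.0 boilerplate; in Hadoop's Java sources it appears as a leading comment before the `package` declaration, along these lines (quoting the standard ASF text, not this file):

   ```java
   /**
    * Licensed to the Apache Software Foundation (ASF) under one
    * or more contributor license agreements.  See the NOTICE file
    * distributed with this work for additional information
    * regarding copyright ownership.  The ASF licenses this file
    * to you under the Apache License, Version 2.0 (the
    * "License"); you may not use this file except in compliance
    * with the License.  You may obtain a copy of the License at
    *
    *     http://www.apache.org/licenses/LICENSE-2.0
    *
    * Unless required by applicable law or agreed to in writing, software
    * distributed under the License is distributed on an "AS IS" BASIS,
    * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    * See the License for the specific language governing permissions and
    * limitations under the License.
    */
   ```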



##########
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AUrlScheme.java:
##########
@@ -0,0 +1,31 @@
+package org.apache.hadoop.fs.s3a;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.junit.Test;
+
+import java.io.IOException;
+
+public class ITestS3AUrlScheme extends AbstractS3ATestBase{
+
+  @Override
+  public void setup() throws Exception {
+    super.setup();
+  }
+
+  @Override
+  protected Configuration createConfiguration() {
+    Configuration conf = super.createConfiguration();
+    conf.set("fs.s3a.impl", "org.apache.hadoop.fs.s3a.S3AFileSystem");
+    return conf;
+  }
+
+  @Test
+  public void testFSScheme() throws IOException {
+    FileSystem fs = getFileSystem();
+    assertEquals(fs.getScheme(), "s3a");

Review Comment:
   1. JUnit's assertEquals has the order (expected, actual), so the error 
messages read correctly on failure
   2. use AssertJ's assertThat() for new tests please - it's far more powerful. 
while it takes a while to learn, smaller tests are the place to do it.
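   A toy, JDK-only illustration of point 1 (the `failureMessage` helper here is a hypothetical stand-in, not JUnit's actual implementation): swapping expected and actual never changes whether the assertion passes, but it makes the failure message claim the test wanted the wrong value.

   ```java
   import java.util.Objects;

   public class AssertOrderDemo {
       // Minimal stand-in for assertEquals(Object expected, Object actual):
       // returns null on a match, otherwise a JUnit-style failure message.
       static String failureMessage(Object expected, Object actual) {
           if (Objects.equals(expected, actual)) {
               return null; // assertion would pass
           }
           return "expected:<" + expected + "> but was:<" + actual + ">";
       }

       public static void main(String[] args) {
           String scheme = "gs"; // pretend the filesystem returned the wrong scheme
           // Correct order (expected first): the message blames the actual value.
           System.out.println(failureMessage("s3a", scheme));
           // prints: expected:<s3a> but was:<gs>
           // Swapped order: the message now claims the test expected <gs>,
           // which misleads whoever reads the failure.
           System.out.println(failureMessage(scheme, "s3a"));
           // prints: expected:<gs> but was:<s3a>
       }
   }
   ```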



##########
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AUrlScheme.java:
##########
@@ -0,0 +1,31 @@
+package org.apache.hadoop.fs.s3a;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.junit.Test;
+
+import java.io.IOException;

Review Comment:
   check your import rules. they MUST be
   ```
   java.*
   javax.*
   --
   not-asf.*
   --
   org.apache.*
   --
   statics
   ```


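   Applied to the imports quoted in the hunk above, that ordering would look roughly like this (a sketch of the rule as stated, not verified against Hadoop's checkstyle configuration):

   ```java
   // java.* (and javax.*) group first
   import java.io.IOException;

   // non-ASF third-party group next
   import org.junit.Test;

   // org.apache.* group last (static imports, if any, after that)
   import org.apache.hadoop.conf.Configuration;
   import org.apache.hadoop.fs.FileSystem;
   import org.apache.hadoop.fs.Path;
   ```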

##########
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3A.java:
##########
@@ -37,7 +37,8 @@ public class S3A extends DelegateToFileSystem {
 
   public S3A(URI theUri, Configuration conf)
       throws IOException, URISyntaxException {
-    super(theUri, new S3AFileSystem(), conf, "s3a", false);
+    super(theUri, new S3AFileSystem(), conf,
+        theUri.getScheme().isEmpty() ? "s3a" : theUri.getScheme(), false);

Review Comment:
   move this to a constant in `org.apache.hadoop.fs.s3a.impl.InternalConstants` 
and then you can reference it in tests. 
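   A JDK-only sketch of the suggestion (the constant name is an assumption; the review proposes placing it in `org.apache.hadoop.fs.s3a.impl.InternalConstants`): hoist the "s3a" literal into a shared constant and fall back to it when the URI carries no scheme. Note that `java.net.URI.getScheme()` returns null, not an empty string, when the URI has no scheme, so a null guard is included here as well.

   ```java
   import java.net.URI;

   public class SchemeFallbackSketch {
       // Hypothetical constant; the review suggests a shared location so
       // tests can reference the same value.
       public static final String DEFAULT_SCHEME = "s3a";

       // Mirrors the ternary in the patched S3A constructor: keep the URI's
       // own scheme when present, otherwise use the default.
       static String effectiveScheme(URI uri) {
           String scheme = uri.getScheme();
           return (scheme == null || scheme.isEmpty()) ? DEFAULT_SCHEME : scheme;
       }

       public static void main(String[] args) {
           System.out.println(effectiveScheme(URI.create("gs://bucket/path")));
           // prints: gs
           System.out.println(effectiveScheme(URI.create("/relative/path")));
           // prints: s3a  (no scheme on the URI, so the default applies)
       }
   }
   ```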





> Fix S3A filesystem such that the scheme matches the URI scheme
> --------------------------------------------------------------
>
>                 Key: HADOOP-18684
>                 URL: https://issues.apache.org/jira/browse/HADOOP-18684
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 3.3.5
>            Reporter: Harshit Gupta
>            Priority: Major
>              Labels: pull-request-available
>
> Certain codepaths use the FileContext APIs to perform FS-based operations, 
> such as YARN log aggregation. While trying to reuse the S3A connector for 
> GCS-based workloads, YARN log aggregation was not happening. Upon further 
> investigation it was observed that the FileContext APIs have hardcoded URI 
> scheme checks that need to be disabled/updated to make S3A compatible with 
> non-AWS stores.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
