[jira] [Commented] (HADOOP-12956) Inevitable Log4j2 migration via slf4j

2019-03-21 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-12956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16798775#comment-16798775
 ] 

Akira Ajisaka commented on HADOOP-12956:


We also need to fix HDFS-12829.

> Inevitable Log4j2 migration via slf4j
> -
>
> Key: HADOOP-12956
> URL: https://issues.apache.org/jira/browse/HADOOP-12956
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Gopal V
>Assignee: Haohui Mai
>Priority: Major
>
> {{5 August 2015 -- The Apache Logging Services™ Project Management Committee 
> (PMC) has announced that the Log4j™ 1.x logging framework has reached its end 
> of life (EOL) and is no longer officially supported.}}
> https://blogs.apache.org/foundation/entry/apache_logging_services_project_announces
> A framework-wide upgrade to log4j2 has to be coordinated, partly for the 
> improved performance that log4j2 brings.
> https://logging.apache.org/log4j/2.x/manual/async.html#Performance



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12956) Inevitable Log4j2 migration via slf4j

2019-03-21 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-12956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16798773#comment-16798773
 ] 

Akira Ajisaka commented on HADOOP-12956:


Hi [~smeng], we should not resolve this umbrella JIRA for now.
We would like to make the log4j1 dependency optional and add optional 
dependencies for log4j2.
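The optional-dependency arrangement described here might look like the following pom fragment (a sketch only; the version properties and the choice of a log4j2 slf4j binding artifact are illustrative assumptions, not taken from the actual Hadoop pom):

```xml
<!-- Sketch: log4j1 kept as an optional dependency so downstream consumers
     must opt in explicitly rather than inheriting it transitively. -->
<dependency>
  <groupId>log4j</groupId>
  <artifactId>log4j</artifactId>
  <version>${log4j.version}</version>
  <optional>true</optional>
</dependency>
<!-- Optional log4j2 binding for the slf4j facade. -->
<dependency>
  <groupId>org.apache.logging.log4j</groupId>
  <artifactId>log4j-slf4j-impl</artifactId>
  <version>${log4j2.version}</version>
  <optional>true</optional>
</dependency>
```

With both marked `<optional>true</optional>`, applications choose exactly one logging backend at deployment time instead of getting log4j1 transitively.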







[GitHub] [hadoop] aajisaka commented on issue #299: Support Log4j 2 and Logback

2019-03-21 Thread GitBox
aajisaka commented on issue #299: Support Log4j 2 and Logback
URL: https://github.com/apache/hadoop/pull/299#issuecomment-475513092
 
 
   Thanks @rogers for the PR.
   After applying the patch, `mvn install -DskipTests` fails.
   
   ```
   [ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-compiler-plugin:3.1:testCompile 
(default-testCompile) on project hadoop-hdfs-client: Compilation failure: 
Compilation failure: 
   [ERROR] 
/Users/aajisaka/git/ghe.corp/hadoop-mirror/hadoop-hdfs-project/hadoop-hdfs-client/src/test/java/org/apache/hadoop/hdfs/util/TestByteArrayManager.java:[27,24]
 cannot find symbol
   [ERROR]   symbol:   class Level
   [ERROR]   location: package org.apache.log4j
   [ERROR] 
/Users/aajisaka/git/ghe.corp/hadoop-mirror/hadoop-hdfs-project/hadoop-hdfs-client/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestConfiguredFailoverProxyProvider.java:[88,21]
 cannot access org.apache.log4j.Logger
   [ERROR]   class file for org.apache.log4j.Logger not found
   ```
   
This is because many modules in Apache Hadoop call the log4j1 API directly. We 
need to remove these callers across all of the projects.
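The before/after shape of such a caller migration, plus a small JDK-only probe for whether log4j1 classes are still reachable, might look like this (a sketch; `TestByteArrayManager` is simply the class named in the error above, and the slf4j lines assume the org.slf4j API jar is on the classpath):

```java
// Before (direct log4j1 API -- this is what breaks once log4j1 is removed):
//   import org.apache.log4j.Level;
//   import org.apache.log4j.Logger;
//   Logger.getLogger(TestByteArrayManager.class).setLevel(Level.ALL);
//
// After (slf4j facade; level configuration moves into the binding's config file):
//   import org.slf4j.Logger;
//   import org.slf4j.LoggerFactory;
//   private static final Logger LOG = LoggerFactory.getLogger(TestByteArrayManager.class);

public class Log4j1Probe {
    /** Returns true if the named class is loadable on the current classpath. */
    static boolean classPresent(String name) {
        try {
            Class.forName(name);
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // On a classpath without log4j1 this prints "false".
        System.out.println(classPresent("org.apache.log4j.Logger"));
    }
}
```

A probe like this can flag modules that still resolve log4j1 classes before the dependency is dropped for real.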


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16206) Migrate from Log4j1 to Log4j2

2019-03-21 Thread Ryu Kobayashi (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16798764#comment-16798764
 ] 

Ryu Kobayashi commented on HADOOP-16206:


Nice ticket!

> Migrate from Log4j1 to Log4j2
> -
>
> Key: HADOOP-16206
> URL: https://issues.apache.org/jira/browse/HADOOP-16206
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Akira Ajisaka
>Priority: Major
>
> This sub-task is to remove the log4j1 dependency and add a log4j2 dependency.






[jira] [Created] (HADOOP-16206) Migrate from Log4j1 to Log4j2

2019-03-21 Thread Akira Ajisaka (JIRA)
Akira Ajisaka created HADOOP-16206:
--

 Summary: Migrate from Log4j1 to Log4j2
 Key: HADOOP-16206
 URL: https://issues.apache.org/jira/browse/HADOOP-16206
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Akira Ajisaka


This sub-task is to remove the log4j1 dependency and add a log4j2 dependency.






[GitHub] [hadoop] hadoop-yetus commented on issue #627: HDDS-1299. Support TokenIssuer interface for running jobs with OzoneFileSystem.

2019-03-21 Thread GitBox
hadoop-yetus commented on issue #627: HDDS-1299. Support TokenIssuer interface 
for running jobs with OzoneFileSystem.
URL: https://github.com/apache/hadoop/pull/627#issuecomment-475496757
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 70 | Docker mode activated. |
   ||| _ Prechecks _ |
   | 0 | yamllint | 0 | yamllint was not available. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 76 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1298 | trunk passed |
   | +1 | compile | 129 | trunk passed |
   | +1 | checkstyle | 36 | trunk passed |
   | +1 | mvnsite | 190 | trunk passed |
   | +1 | shadedclient | 808 | branch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/dist |
   | +1 | findbugs | 190 | trunk passed |
   | +1 | javadoc | 141 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 15 | Maven dependency ordering for patch |
   | -1 | mvninstall | 25 | dist in the patch failed. |
   | +1 | compile | 109 | the patch passed |
   | +1 | javac | 109 | the patch passed |
   | +1 | checkstyle | 27 | the patch passed |
   | +1 | hadolint | 1 | There were no new hadolint issues. |
   | +1 | mvnsite | 138 | the patch passed |
   | +1 | shellcheck | 2 | There were no new shellcheck issues. |
   | +1 | shelldocs | 17 | There were no new shelldocs issues. |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 876 | patch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/dist |
   | +1 | findbugs | 201 | the patch passed |
   | +1 | javadoc | 121 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 39 | common in the patch passed. |
   | +1 | unit | 28 | client in the patch passed. |
   | +1 | unit | 60 | ozone-manager in the patch passed. |
   | +1 | unit | 165 | ozonefs in the patch passed. |
   | +1 | unit | 23 | dist in the patch passed. |
   | +1 | asflicense | 36 | The patch does not generate ASF License warnings. |
   | | | 5149 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-627/9/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/627 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  yamllint  shellcheck  
shelldocs  hadolint  |
   | uname | Linux 3bd80b9a2b6b 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri 
Oct 5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 90afc9a |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | shellcheck | v0.4.6 |
   | findbugs | v3.1.0-RC1 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-627/9/artifact/out/patch-mvninstall-hadoop-ozone_dist.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-627/9/testReport/ |
   | Max. process+thread count | 2906 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/client 
hadoop-ozone/ozone-manager hadoop-ozone/ozonefs hadoop-ozone/dist U: 
hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-627/9/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] bharatviswa504 commented on issue #634: HDDS-939. Add S3 access check to Ozone manager. Contributed by Ajay Kumar.

2019-03-21 Thread GitBox
bharatviswa504 commented on issue #634: HDDS-939. Add S3 access check to Ozone 
manager. Contributed by Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/634#issuecomment-475485998
 
 
   > > I think that if the awsAccessKey does not have a realm and has just the 
name, we will not see the issue.
   > 
   > That will not work, since the same user name can exist in different 
domains for completely different users in real life.
   
   Thank You @anuengineer for info.
   





[GitHub] [hadoop] hadoop-yetus commented on issue #597: HDFS-3246: pRead equivalent for direct read path

2019-03-21 Thread GitBox
hadoop-yetus commented on issue #597: HDFS-3246: pRead equivalent for direct 
read path
URL: https://github.com/apache/hadoop/pull/597#issuecomment-475482557
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 26 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 5 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 61 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1040 | trunk passed |
   | +1 | compile | 934 | trunk passed |
   | +1 | checkstyle | 196 | trunk passed |
   | +1 | mvnsite | 249 | trunk passed |
   | +1 | shadedclient | 1156 | branch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-hdfs-project/hadoop-hdfs-native-client |
   | +1 | findbugs | 312 | trunk passed |
   | +1 | javadoc | 165 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 18 | Maven dependency ordering for patch |
   | +1 | mvninstall | 155 | the patch passed |
   | +1 | compile | 873 | the patch passed |
   | +1 | cc | 873 | the patch passed |
   | +1 | javac | 873 | the patch passed |
   | -0 | checkstyle | 183 | root: The patch generated 2 new + 114 unchanged - 
5 fixed = 116 total (was 119) |
   | +1 | mvnsite | 211 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 600 | patch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-hdfs-project/hadoop-hdfs-native-client |
   | +1 | findbugs | 340 | the patch passed |
   | +1 | javadoc | 160 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 503 | hadoop-common in the patch passed. |
   | +1 | unit | 126 | hadoop-hdfs-client in the patch passed. |
   | -1 | unit | 4713 | hadoop-hdfs in the patch failed. |
   | +1 | unit | 366 | hadoop-hdfs-native-client in the patch passed. |
   | +1 | asflicense | 40 | The patch does not generate ASF License warnings. |
   | | | 12234 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.web.TestWebHdfsTimeouts |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-597/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/597 |
   | JIRA Issue | HDFS-3246 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  cc  |
   | uname | Linux cdc24f282597 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 90afc9a |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-597/4/artifact/out/diff-checkstyle-root.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-597/4/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-597/4/testReport/ |
   | Max. process+thread count | 4894 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common 
hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs 
hadoop-hdfs-project/hadoop-hdfs-native-client U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-597/4/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] anuengineer commented on issue #634: HDDS-939. Add S3 access check to Ozone manager. Contributed by Ajay Kumar.

2019-03-21 Thread GitBox
anuengineer commented on issue #634: HDDS-939. Add S3 access check to Ozone 
manager. Contributed by Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/634#issuecomment-475474021
 
 
   > I think that if the awsAccessKey does not have a realm and has just the 
name, we will not see the issue.
   That will not work, since the same user name can exist in different domains 
for completely different users in real life.
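The collision being described can be sketched with a toy short-name mapping (hypothetical demo code, not Ozone's actual resolver; `shortName` is an illustrative helper):

```java
public class ShortNameDemo {
    /** Drops the realm from a Kerberos-style principal, e.g. "ajay@EXAMPLE.COM" -> "ajay". */
    static String shortName(String principal) {
        int at = principal.indexOf('@');
        return at < 0 ? principal : principal.substring(0, at);
    }

    public static void main(String[] args) {
        // Two distinct users in different realms collapse to the same name
        // once the realm is stripped -- exactly the ambiguity described above.
        System.out.println(shortName("ajay@EXAMPLE.COM"));    // ajay
        System.out.println(shortName("ajay@CORP.OTHER.COM")); // ajay
    }
}
```

Keeping the realm in the awsAccessKey avoids this, at the cost of longer keys.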
   





[GitHub] [hadoop] anuengineer edited a comment on issue #634: HDDS-939. Add S3 access check to Ozone manager. Contributed by Ajay Kumar.

2019-03-21 Thread GitBox
anuengineer edited a comment on issue #634: HDDS-939. Add S3 access check to 
Ozone manager. Contributed by Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/634#issuecomment-475474021
 
 
   > I think that if the awsAccessKey does not have a realm and has just the 
name, we will not see the issue.
   
   That will not work, since the same user name can exist in different 
domains for completely different users in real life.
   





[jira] [Commented] (HADOOP-16180) LocalFileSystem throw Malformed input or input contains unmappable characters

2019-03-21 Thread Yuming Wang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16798638#comment-16798638
 ] 

Yuming Wang commented on HADOOP-16180:
--

I'm not sure. Maybe it's the way we use it.

> LocalFileSystem throw Malformed input or input contains unmappable characters
> -
>
> Key: HADOOP-16180
> URL: https://issues.apache.org/jira/browse/HADOOP-16180
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.8.0, 3.2.0
>Reporter: Yuming Wang
>Priority: Major
>
> How to reproduce:
> {code:java}
> export LANG=
> export LC_CTYPE="POSIX"
> export LC_NUMERIC="POSIX"
> export LC_TIME="POSIX"
> export LC_COLLATE="POSIX"
> export LC_MONETARY="POSIX"
> export LC_MESSAGES="POSIX"
> export LC_PAPER="POSIX"
> export LC_NAME="POSIX"
> export LC_ADDRESS="POSIX"
> export LC_TELEPHONE="POSIX"
> export LC_MEASUREMENT="POSIX"
> export LC_IDENTIFICATION="POSIX"
> git clone https://github.com/apache/spark.git && cd spark && git checkout 
> v2.4.0
> build/sbt "hive/testOnly *.HiveDDLSuite" -Phive -Phadoop-2.7 
> -Dhadoop.version=2.8.0
> {code}
> Stack trace:
> {noformat}
> Caused by: sbt.ForkMain$ForkError: java.nio.file.InvalidPathException: 
> Malformed input or input contains unmappable characters: 
> /home/jenkins/workspace/SparkPullRequestBuilder@2/target/tmp/warehouse-15474fdf-0808-40ab-946d-1309fb05bf26/DaTaBaSe_I.db/tab_ı
>   at sun.nio.fs.UnixPath.encode(UnixPath.java:147)
>   at sun.nio.fs.UnixPath.<init>(UnixPath.java:71)
>   at sun.nio.fs.UnixFileSystem.getPath(UnixFileSystem.java:281)
>   at java.io.File.toPath(File.java:2234)
>   at 
> org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.getLastAccessTime(RawLocalFileSystem.java:683)
>   at 
> org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.<init>(RawLocalFileSystem.java:694)
>   at 
> org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:664)
>   at 
> org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:987)
>   at 
> org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:656)
>   at 
> org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:454)
>   at org.apache.hadoop.hive.metastore.Warehouse.isDir(Warehouse.java:520)
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_table_core(HiveMetaStore.java:1436)
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_table_with_environment_context(HiveMetaStore.java:1503)
>   ... 112 more{noformat}
> [https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/103328/testReport/org.apache.spark.sql.hive.execution/HiveCatalogedDDLSuite/basic_DDL_using_locale_tr___caseSensitive_true/]
> [https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/103328/testReport/org.apache.spark.sql.hive.execution/HiveDDLSuite/create_Hive_serde_table_and_view_with_unicode_columns_and_comment/]
>  
> It works before https://issues.apache.org/jira/browse/HADOOP-12045.
> We could workaround it by resetting locale:
> {code:java}
> export LANG=en_US.UTF-8
> export LC_CTYPE="en_US.UTF-8"
> export LC_NUMERIC="en_US.UTF-8"
> export LC_TIME="en_US.UTF-8"
> export LC_COLLATE="en_US.UTF-8"
> export LC_MONETARY="en_US.UTF-8"
> export LC_MESSAGES="en_US.UTF-8"
> export LC_PAPER="en_US.UTF-8"
> export LC_NAME="en_US.UTF-8"
> export LC_ADDRESS="en_US.UTF-8"
> export LC_TELEPHONE="en_US.UTF-8"
> export LC_MEASUREMENT="en_US.UTF-8"
> export LC_IDENTIFICATION="en_US.UTF-8"
> {code}
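The failure is a default-charset issue: under the POSIX locale the JVM's default charset is effectively US-ASCII, so `File.toPath()` cannot encode the dotless `ı` in `tab_ı`. A minimal JDK-only illustration (hypothetical demo code, not from Hadoop):

```java
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;

public class PathEncodingDemo {
    /** True if the given charset can encode every character in the file name. */
    static boolean encodable(String fileName, Charset cs) {
        return cs.newEncoder().canEncode(fileName);
    }

    public static void main(String[] args) {
        String name = "tab_\u0131"; // dotless i from the Turkish-locale test above
        // US-ASCII (the effective default under LANG=POSIX) cannot represent it,
        // which is what surfaces as InvalidPathException in UnixPath.encode();
        // UTF-8 (the en_US.UTF-8 workaround) can.
        System.out.println(encodable(name, StandardCharsets.US_ASCII)); // false
        System.out.println(encodable(name, StandardCharsets.UTF_8));    // true
    }
}
```

This is why exporting a UTF-8 locale before launching the JVM, as in the workaround above, makes the test pass.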






[GitHub] [hadoop] hadoop-yetus commented on issue #634: HDDS-939. Add S3 access check to Ozone manager. Contributed by Ajay Kumar.

2019-03-21 Thread GitBox
hadoop-yetus commented on issue #634: HDDS-939. Add S3 access check to Ozone 
manager. Contributed by Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/634#issuecomment-475465485
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 24 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 36 | Maven dependency ordering for branch |
   | +1 | mvninstall | 965 | trunk passed |
   | +1 | compile | 97 | trunk passed |
   | +1 | checkstyle | 27 | trunk passed |
   | +1 | mvnsite | 68 | trunk passed |
   | +1 | shadedclient | 759 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 99 | trunk passed |
   | +1 | javadoc | 55 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 14 | Maven dependency ordering for patch |
   | +1 | mvninstall | 69 | the patch passed |
   | +1 | compile | 94 | the patch passed |
   | +1 | javac | 94 | the patch passed |
   | -0 | checkstyle | 23 | hadoop-ozone: The patch generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) |
   | +1 | mvnsite | 57 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 739 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 111 | the patch passed |
   | +1 | javadoc | 54 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 35 | common in the patch passed. |
   | +1 | unit | 36 | s3gateway in the patch passed. |
   | +1 | asflicense | 28 | The patch does not generate ASF License warnings. |
   | | | 3451 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-634/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/634 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  |
   | uname | Linux d225cfcc9fd7 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 90afc9a |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-634/1/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-634/1/testReport/ |
   | Max. process+thread count | 445 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/s3gateway U: hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-634/1/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] ajayydv opened a new pull request #634: HDDS-939. Add S3 access check to Ozone manager. Contributed by Ajay Kumar.

2019-03-21 Thread GitBox
ajayydv opened a new pull request #634: HDDS-939. Add S3 access check to Ozone 
manager. Contributed by Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/634
 
 
   





[jira] [Commented] (HADOOP-16147) Allow CopyListing sequence file keys and values to be more easily customized

2019-03-21 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16147?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16798584#comment-16798584
 ] 

Steve Loughran commented on HADOOP-16147:
-

makes sense. Are you really sure you couldn't come up with a test? 

> Allow CopyListing sequence file keys and values to be more easily customized
> 
>
> Key: HADOOP-16147
> URL: https://issues.apache.org/jira/browse/HADOOP-16147
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools/distcp
>Reporter: Andrew Olson
>Assignee: Andrew Olson
>Priority: Major
> Attachments: HADOOP-16147-001.patch, HADOOP-16147-002.patch
>
>
> We have encountered a scenario where, when using the Crunch library to run a 
> distributed copy (CRUNCH-660, CRUNCH-675) at the conclusion of a job we need 
> to dynamically rename target paths to the preferred destination output part 
> file names, rather than retaining the original source path names.
> A custom CopyListing implementation appears to be the proper solution for 
> this. However, the place where the current SimpleCopyListing logic needs to be 
> adjusted is in a private method (writeToFileListing), so a relatively large 
> portion of the class would need to be cloned.
> To minimize the amount of code duplication required for such a custom 
> implementation, we propose adding two new protected methods to the 
> CopyListing class, that can be used to change the actual keys and/or values 
> written to the copy listing sequence file: 
> {noformat}
> protected Text getFileListingKey(Path sourcePathRoot, CopyListingFileStatus 
> fileStatus);
> protected CopyListingFileStatus getFileListingValue(CopyListingFileStatus 
> fileStatus);
> {noformat}
> The SimpleCopyListing class would then be modified to consume these methods 
> as follows:
> {noformat}
> fileListWriter.append(
>getFileListingKey(sourcePathRoot, fileStatus),
>getFileListingValue(fileStatus));
> {noformat}
> The default implementations would simply preserve the present behavior of the 
> SimpleCopyListing class, and could reside in either CopyListing or 
> SimpleCopyListing, whichever is preferable.
> {noformat}
> protected Text getFileListingKey(Path sourcePathRoot, CopyListingFileStatus 
> fileStatus) {
>return new Text(DistCpUtils.getRelativePath(sourcePathRoot, 
> fileStatus.getPath()));
> }
> protected CopyListingFileStatus getFileListingValue(CopyListingFileStatus 
> fileStatus) {
>return fileStatus;
> }
> {noformat}
> Please let me know if this proposal seems to be on the right track. If so I 
> can provide a patch.
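The extension point being proposed is a classic template-method hook. A self-contained sketch of that pattern follows (stand-in types only; `Listing` and its string keys are hypothetical simplifications, not the real distcp `CopyListing`/`CopyListingFileStatus` classes):

```java
public class ListingHookDemo {
    /** Stand-in for SimpleCopyListing with the two proposed protected hooks. */
    static class Listing {
        protected String getFileListingKey(String root, String path) {
            return path.substring(root.length()); // default: relative path, as today
        }
        protected String getFileListingValue(String path) {
            return path; // default: status written unchanged
        }
        /** Stand-in for the append inside writeToFileListing. */
        final String entryFor(String root, String path) {
            return getFileListingKey(root, path) + " -> " + getFileListingValue(path);
        }
    }

    public static void main(String[] args) {
        // A subclass overrides only the key to rename targets, e.g. to part files.
        Listing renamingListing = new Listing() {
            private int part = 0;
            @Override protected String getFileListingKey(String root, String path) {
                return String.format("/part-%05d", part++);
            }
        };
        System.out.println(renamingListing.entryFor("/src", "/src/a.txt")); // /part-00000 -> /src/a.txt
    }
}
```

Because the write site calls the hooks rather than inlining the logic, a custom listing overrides one or two small methods instead of cloning the private `writeToFileListing`.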






[GitHub] [hadoop] hadoop-yetus commented on issue #627: HDDS-1299. Support TokenIssuer interface for running jobs with OzoneFileSystem.

2019-03-21 Thread GitBox
hadoop-yetus commented on issue #627: HDDS-1299. Support TokenIssuer interface 
for running jobs with OzoneFileSystem.
URL: https://github.com/apache/hadoop/pull/627#issuecomment-475450824
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 26 | Docker mode activated. |
   ||| _ Prechecks _ |
   | 0 | yamllint | 0 | yamllint was not available. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 64 | Maven dependency ordering for branch |
   | +1 | mvninstall | 984 | trunk passed |
   | +1 | compile | 102 | trunk passed |
   | +1 | checkstyle | 33 | trunk passed |
   | +1 | mvnsite | 188 | trunk passed |
   | +1 | shadedclient | 675 | branch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/dist |
   | +1 | findbugs | 185 | trunk passed |
   | +1 | javadoc | 136 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 16 | Maven dependency ordering for patch |
   | -1 | mvninstall | 22 | dist in the patch failed. |
   | +1 | compile | 96 | the patch passed |
   | +1 | javac | 96 | the patch passed |
   | +1 | checkstyle | 25 | the patch passed |
   | +1 | hadolint | 1 | There were no new hadolint issues. |
   | +1 | mvnsite | 140 | the patch passed |
   | +1 | shellcheck | 0 | There were no new shellcheck issues. |
   | +1 | shelldocs | 19 | There were no new shelldocs issues. |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 757 | patch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/dist |
   | +1 | findbugs | 223 | the patch passed |
   | +1 | javadoc | 132 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 38 | common in the patch passed. |
   | +1 | unit | 30 | client in the patch passed. |
   | +1 | unit | 45 | ozone-manager in the patch passed. |
   | +1 | unit | 85 | ozonefs in the patch passed. |
   | +1 | unit | 25 | dist in the patch passed. |
   | +1 | asflicense | 33 | The patch does not generate ASF License warnings. |
   | | | 4400 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: https://builds.apache.org/job/hadoop-multibranch/job/PR-627/8/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/627 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle yamllint shellcheck shelldocs hadolint |
   | uname | Linux 044083feb274 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 90afc9a |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | shellcheck | v0.4.6 |
   | findbugs | v3.1.0-RC1 |
   | mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-627/8/artifact/out/patch-mvninstall-hadoop-ozone_dist.txt |
   | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-627/8/testReport/ |
   | Max. process+thread count | 3050 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/client hadoop-ozone/ozone-manager hadoop-ozone/ozonefs hadoop-ozone/dist U: hadoop-ozone |
   | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-627/8/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #597: HDFS-3246: pRead equivalent for direct read path

2019-03-21 Thread GitBox
hadoop-yetus commented on issue #597: HDFS-3246: pRead equivalent for direct 
read path
URL: https://github.com/apache/hadoop/pull/597#issuecomment-475446037
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 41 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 5 new or modified test files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 23 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1177 | trunk passed |
   | +1 | compile | 1018 | trunk passed |
   | +1 | checkstyle | 211 | trunk passed |
   | +1 | mvnsite | 239 | trunk passed |
   | +1 | shadedclient | 1212 | branch has no errors when building and testing our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: hadoop-hdfs-project/hadoop-hdfs-native-client |
   | +1 | findbugs | 355 | trunk passed |
   | +1 | javadoc | 180 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 22 | Maven dependency ordering for patch |
   | +1 | mvninstall | 162 | the patch passed |
   | +1 | compile | 973 | the patch passed |
   | +1 | cc | 973 | the patch passed |
   | +1 | javac | 973 | the patch passed |
   | +1 | checkstyle | 215 | root: The patch generated 0 new + 116 unchanged - 3 fixed = 116 total (was 119) |
   | +1 | mvnsite | 226 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 742 | patch has no errors when building and testing our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: hadoop-hdfs-project/hadoop-hdfs-native-client |
   | +1 | findbugs | 373 | the patch passed |
   | +1 | javadoc | 178 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 508 | hadoop-common in the patch passed. |
   | +1 | unit | 116 | hadoop-hdfs-client in the patch passed. |
   | -1 | unit | 5845 | hadoop-hdfs in the patch failed. |
   | +1 | unit | 404 | hadoop-hdfs-native-client in the patch passed. |
   | +1 | asflicense | 47 | The patch does not generate ASF License warnings. |
   | | | 14082 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.web.TestWebHdfsTimeouts |
   |   | hadoop.hdfs.TestDFSInotifyEventInputStreamKerberized |
   |   | hadoop.hdfs.server.mover.TestMover |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: https://builds.apache.org/job/hadoop-multibranch/job/PR-597/3/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/597 |
   | JIRA Issue | HDFS-3246 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle cc |
   | uname | Linux e97235a00ae8 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 548997d |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-597/3/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt |
   | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-597/3/testReport/ |
   | Max. process+thread count | 2968 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs hadoop-hdfs-project/hadoop-hdfs-native-client U: . |
   | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-597/3/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] steveloughran commented on a change in pull request #628: HADOOP-16186. NPE in ITestS3AFileSystemContract teardown in DynamoDBMetadataStore.lambda$listChildren

2019-03-21 Thread GitBox
steveloughran commented on a change in pull request #628: HADOOP-16186. NPE in 
ITestS3AFileSystemContract teardown in DynamoDBMetadataStore.lambda$listChildren
URL: https://github.com/apache/hadoop/pull/628#discussion_r267995021
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/DynamoDBMetadataStore.java
 ##
 @@ -647,9 +650,19 @@ public DirListingMetadata listChildren(final Path path) 
throws IOException {
   LOG.trace("Listing table {} in region {} for {} returning {}",
   tableName, region, path, metas);
 
-  return (metas.isEmpty() && dirPathMeta == null)
-  ? null
-  : new DirListingMetadata(path, metas, isAuthoritative,
+  if (!metas.isEmpty() && dirPathMeta == null) {
+// We handle this case as the directory is deleted.
+LOG.warn("Directory metadata is null, but the list of the "
 
 Review comment:
   we need to review the message so it makes sense to someone who doesn't know the DB structures, and match it with a stack trace in the troubleshooting doc.





[GitHub] [hadoop] steveloughran commented on issue #628: HADOOP-16186. NPE in ITestS3AFileSystemContract teardown in DynamoDBMetadataStore.lambda$listChildren

2019-03-21 Thread GitBox
steveloughran commented on issue #628: HADOOP-16186. NPE in 
ITestS3AFileSystemContract teardown in DynamoDBMetadataStore.lambda$listChildren
URL: https://github.com/apache/hadoop/pull/628#issuecomment-475441686
 
 
   as usual: which S3 endpoint did you test with; which s3guard build options, 
etc...





[jira] [Issue Comment Deleted] (HADOOP-11452) Make FileSystem.rename(path, path, options) public, specified, tested

2019-03-21 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-11452?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-11452:

Comment: was deleted

(was: sorry, I'd missed this was up for review. Don't feel shy about reminding 
me -everyone else does!

Anyway, will try and look at this. At a quick glance: it's time we did this; I 
like what you've done with the exception strings and updated the filesystem 
docs. 

What this will need to get in is testing. 

We're now handling github PRs better, and can review stuff there as well as 
test with yetus. Would you mind submitting a rebased-to-trunk PR that way? Put 
this JIRA in the title and it'll get wired up)

> Make FileSystem.rename(path, path, options) public, specified, tested
> -
>
> Key: HADOOP-11452
> URL: https://issues.apache.org/jira/browse/HADOOP-11452
> Project: Hadoop Common
>  Issue Type: Task
>  Components: fs
>Affects Versions: 2.7.3
>Reporter: Yi Liu
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-11452-001.patch, HADOOP-11452-002.patch, 
> HADOOP-14452-004.patch, HADOOP-14452-branch-2-003.patch
>
>
> Currently in {{FileSystem}}, {{rename}} with _Rename options_ is protected 
> and with _deprecated_ annotation. And the default implementation is not 
> atomic.
> So this method is not able to be used outside. On the other hand, HDFS has a 
> good and atomic implementation. (Also an interesting thing in {{DFSClient}}, 
> the _deprecated_ annotations for these two methods are opposite).
> It makes sense to make public for {{rename}} with _Rename options_, since 
> it's atomic for rename+overwrite, also it saves RPC calls if user desires 
> rename+overwrite.






[jira] [Commented] (HADOOP-11452) Make FileSystem.rename(path, path, options) public, specified, tested

2019-03-21 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-11452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16798544#comment-16798544
 ] 

Steve Loughran commented on HADOOP-11452:
-

sorry, I'd missed this was up for review. Don't feel shy about reminding me 
-everyone else does!

Anyway, will try and look at this. At a quick glance: it's time we did this; I 
like what you've done with the exception strings and updated the filesystem 
docs. 

What this will need to get in is testing. 

We're now handling github PRs better, and can review stuff there as well as 
test with yetus. Would you mind submitting a rebased-to-trunk PR that way? Put 
this JIRA in the title and it'll get wired up

> Make FileSystem.rename(path, path, options) public, specified, tested
> -
>
> Key: HADOOP-11452
> URL: https://issues.apache.org/jira/browse/HADOOP-11452
> Project: Hadoop Common
>  Issue Type: Task
>  Components: fs
>Affects Versions: 2.7.3
>Reporter: Yi Liu
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-11452-001.patch, HADOOP-11452-002.patch, 
> HADOOP-14452-004.patch, HADOOP-14452-branch-2-003.patch
>
>
> Currently in {{FileSystem}}, {{rename}} with _Rename options_ is protected 
> and with _deprecated_ annotation. And the default implementation is not 
> atomic.
> So this method is not able to be used outside. On the other hand, HDFS has a 
> good and atomic implementation. (Also an interesting thing in {{DFSClient}}, 
> the _deprecated_ annotations for these two methods are opposite).
> It makes sense to make public for {{rename}} with _Rename options_, since 
> it's atomic for rename+overwrite, also it saves RPC calls if user desires 
> rename+overwrite.
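The rename+overwrite contract described above can be illustrated against an in-memory map. This is only a sketch of the intended semantics (a single atomic replace instead of delete followed by rename, saving an RPC); `RenameSketch` is a hypothetical stand-in, not the HDFS implementation:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of rename-with-OVERWRITE semantics over an in-memory "filesystem".
// With overwrite, the destination is replaced in one step; without it, an
// existing destination causes the rename to be refused. Illustration only.
public class RenameSketch {
    public static boolean rename(Map<String, String> fs, String src, String dst,
                                 boolean overwrite) {
        if (!fs.containsKey(src)) {
            return false;                       // nothing to rename
        }
        if (fs.containsKey(dst) && !overwrite) {
            return false;                       // destination exists, no overwrite
        }
        fs.put(dst, fs.remove(src));            // single atomic replace
        return true;
    }

    // Demo: rename /a over an existing /b; returns the final content of /b,
    // or "rename-refused" if the rename was rejected.
    public static String demo(boolean overwrite) {
        Map<String, String> fs = new HashMap<>();
        fs.put("/a", "data");
        fs.put("/b", "old");
        return rename(fs, "/a", "/b", overwrite) ? fs.get("/b") : "rename-refused";
    }
}
```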






[GitHub] [hadoop] xiaoyuyao commented on a change in pull request #627: HDDS-1299. Support TokenIssuer interface for running jobs with OzoneFileSystem.

2019-03-21 Thread GitBox
xiaoyuyao commented on a change in pull request #627: HDDS-1299. Support 
TokenIssuer interface for running jobs with OzoneFileSystem.
URL: https://github.com/apache/hadoop/pull/627#discussion_r267989214
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
 ##
 @@ -1486,7 +1484,9 @@ private static UserGroupInformation getRemoteUser() 
throws IOException {
 realUser = new Text(ugi.getRealUser().getUserName());
   }
 
-  return delegationTokenMgr.createToken(owner, renewer, realUser);
+  token = delegationTokenMgr.createToken(owner, renewer, realUser);
+  LOG.debug("OmDelegationToken: {} created.", token);
 
 Review comment:
   Remove this one and change the trace log to debug log in SecretManager.





[GitHub] [hadoop] xiaoyuyao commented on a change in pull request #627: HDDS-1299. Support TokenIssuer interface for running jobs with OzoneFileSystem.

2019-03-21 Thread GitBox
xiaoyuyao commented on a change in pull request #627: HDDS-1299. Support 
TokenIssuer interface for running jobs with OzoneFileSystem.
URL: https://github.com/apache/hadoop/pull/627#discussion_r267983572
 
 

 ##
 File path: 
hadoop-ozone/ozonefs/src/main/java/org/apache/hadoop/fs/ozone/OzoneClientAdapterImpl.java
 ##
 @@ -289,7 +291,7 @@ public boolean hasNextKey(String key) {
   @Override
   public Token getDelegationToken(String renewer)
   throws IOException {
-if (!securityEnabled) {
+if (!securityEnabled || renewer == null) {
 
 Review comment:
   I don't think a null renewer will be a legit case. HDFS (DistributedFileSystem/DFSClient) returns null in this case, which matches the behavior implemented for Ozone.
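The convention discussed here can be sketched in isolation. `TokenSketch` below is a hypothetical stand-in, not the actual `OzoneClientAdapterImpl`: no token is issued when security is disabled or when no renewer is supplied, mirroring the DistributedFileSystem/DFSClient behavior.

```java
// Sketch of the delegation-token convention discussed above. Returns null
// (no token) when security is off or the renewer is null. The String return
// is a placeholder for a real Token<OzoneTokenIdentifier>; illustration only.
public class TokenSketch {
    public static String getDelegationToken(boolean securityEnabled, String renewer) {
        if (!securityEnabled || renewer == null) {
            return null;                    // no token issued in either case
        }
        return "token-for-" + renewer;      // placeholder for a real token
    }
}
```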





[GitHub] [hadoop] xiaoyuyao commented on a change in pull request #627: HDDS-1299. Support TokenIssuer interface for running jobs with OzoneFileSystem.

2019-03-21 Thread GitBox
xiaoyuyao commented on a change in pull request #627: HDDS-1299. Support 
TokenIssuer interface for running jobs with OzoneFileSystem.
URL: https://github.com/apache/hadoop/pull/627#discussion_r267981777
 
 

 ##
 File path: hadoop-ozone/dist/src/main/compose/ozonesecure-mr/docker-config
 ##
 @@ -0,0 +1,176 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+OZONE-SITE.XML_ozone.om.address=om
+OZONE-SITE.XML_ozone.om.http-address=om:9874
+OZONE-SITE.XML_ozone.scm.names=scm
+OZONE-SITE.XML_ozone.enabled=True
+OZONE-SITE.XML_ozone.scm.datanode.id=/data/datanode.id
+OZONE-SITE.XML_ozone.scm.block.client.address=scm
+OZONE-SITE.XML_ozone.metadata.dirs=/data/metadata
+OZONE-SITE.XML_ozone.handler.type=distributed
+OZONE-SITE.XML_ozone.scm.client.address=scm
+OZONE-SITE.XML_hdds.block.token.enabled=true
+OZONE-SITE.XML_ozone.replication=1
+OZONE-SITE.XML_hdds.scm.kerberos.principal=scm/s...@example.com
+OZONE-SITE.XML_hdds.scm.kerberos.keytab.file=/etc/security/keytabs/scm.keytab
+OZONE-SITE.XML_ozone.om.kerberos.principal=om/o...@example.com
 
 Review comment:
   We only use _HOST for servers that need to scale to more than one instance, such as DN and NM. For all the other principals we use the name directly, to avoid unnecessary DNS resolution issues in the docker environment.
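For the scalable services, the `_HOST` placeholder in a principal is expanded to the local fully-qualified hostname at startup (Hadoop does this in `SecurityUtil.getServerPrincipal`). A minimal self-contained sketch of that substitution; `PrincipalSketch` is a hypothetical helper, and the lower-casing detail is an assumption about the real behavior:

```java
import java.util.Locale;

// Sketch of _HOST expansion in a Kerberos principal, in the spirit of
// Hadoop's SecurityUtil.getServerPrincipal. The hostname is lower-cased
// here; hypothetical helper for illustration only.
public class PrincipalSketch {
    public static String expand(String principal, String fqdn) {
        // dn/_HOST@EXAMPLE.COM + dn1.example.com -> dn/dn1.example.com@EXAMPLE.COM
        return principal.replace("_HOST", fqdn.toLowerCase(Locale.US));
    }
}
```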





[GitHub] [hadoop] xiaoyuyao commented on a change in pull request #627: HDDS-1299. Support TokenIssuer interface for running jobs with OzoneFileSystem.

2019-03-21 Thread GitBox
xiaoyuyao commented on a change in pull request #627: HDDS-1299. Support 
TokenIssuer interface for running jobs with OzoneFileSystem.
URL: https://github.com/apache/hadoop/pull/627#discussion_r267981835
 
 

 ##
 File path: hadoop-ozone/dist/src/main/compose/ozonesecure-mr/docker-config
 ##
 @@ -0,0 +1,176 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+OZONE-SITE.XML_ozone.om.address=om
+OZONE-SITE.XML_ozone.om.http-address=om:9874
+OZONE-SITE.XML_ozone.scm.names=scm
+OZONE-SITE.XML_ozone.enabled=True
+OZONE-SITE.XML_ozone.scm.datanode.id=/data/datanode.id
+OZONE-SITE.XML_ozone.scm.block.client.address=scm
+OZONE-SITE.XML_ozone.metadata.dirs=/data/metadata
+OZONE-SITE.XML_ozone.handler.type=distributed
+OZONE-SITE.XML_ozone.scm.client.address=scm
+OZONE-SITE.XML_hdds.block.token.enabled=true
+OZONE-SITE.XML_ozone.replication=1
+OZONE-SITE.XML_hdds.scm.kerberos.principal=scm/s...@example.com
+OZONE-SITE.XML_hdds.scm.kerberos.keytab.file=/etc/security/keytabs/scm.keytab
+OZONE-SITE.XML_ozone.om.kerberos.principal=om/o...@example.com
+OZONE-SITE.XML_ozone.om.kerberos.keytab.file=/etc/security/keytabs/om.keytab
+OZONE-SITE.XML_ozone.s3g.keytab.file=/etc/security/keytabs/HTTP.keytab
+OZONE-SITE.XML_ozone.s3g.authentication.kerberos.principal=HTTP/s...@example.com
+
+OZONE-SITE.XML_ozone.security.enabled=true
+OZONE-SITE.XML_hdds.scm.http.kerberos.principal=HTTP/s...@example.com
+OZONE-SITE.XML_hdds.scm.http.kerberos.keytab=/etc/security/keytabs/HTTP.keytab
+OZONE-SITE.XML_ozone.om.http.kerberos.principal=HTTP/o...@example.com
 
 Review comment:
   Same as above.





[GitHub] [hadoop] xiaoyuyao commented on a change in pull request #627: HDDS-1299. Support TokenIssuer interface for running jobs with OzoneFileSystem.

2019-03-21 Thread GitBox
xiaoyuyao commented on a change in pull request #627: HDDS-1299. Support 
TokenIssuer interface for running jobs with OzoneFileSystem.
URL: https://github.com/apache/hadoop/pull/627#discussion_r267981817
 
 

 ##
 File path: hadoop-ozone/dist/src/main/compose/ozonesecure-mr/docker-config
 ##
 @@ -0,0 +1,176 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+OZONE-SITE.XML_ozone.om.address=om
+OZONE-SITE.XML_ozone.om.http-address=om:9874
+OZONE-SITE.XML_ozone.scm.names=scm
+OZONE-SITE.XML_ozone.enabled=True
+OZONE-SITE.XML_ozone.scm.datanode.id=/data/datanode.id
+OZONE-SITE.XML_ozone.scm.block.client.address=scm
+OZONE-SITE.XML_ozone.metadata.dirs=/data/metadata
+OZONE-SITE.XML_ozone.handler.type=distributed
+OZONE-SITE.XML_ozone.scm.client.address=scm
+OZONE-SITE.XML_hdds.block.token.enabled=true
+OZONE-SITE.XML_ozone.replication=1
+OZONE-SITE.XML_hdds.scm.kerberos.principal=scm/s...@example.com
 
 Review comment:
   Same as above.





[jira] [Commented] (HADOOP-15604) Bulk commits of S3A MPUs place needless excessive load on S3 & S3Guard

2019-03-21 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16798509#comment-16798509
 ] 

Steve Loughran commented on HADOOP-15604:
-

Moving to on-demand DDB should help with this a bit.

> Bulk commits of S3A MPUs place needless excessive load on S3 & S3Guard
> --
>
> Key: HADOOP-15604
> URL: https://issues.apache.org/jira/browse/HADOOP-15604
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Major
>
> When ~50 files are being committed, each in its own thread from the commit 
> pool, the DDB repo is probably being overloaded by a single process doing 
> task commit. We should be backing off more, especially given that failing on 
> a write could leave the store inconsistent with the FS (renames, etc.).
> It would be nice to have some tests proving that the I/O thresholds are the 
> reason for unprocessed items in the DynamoDB metadata store
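The backing off suggested above is typically exponential with a cap. A self-contained sketch of such a delay schedule; the class name and constants are illustrative, not S3Guard's actual retry policy:

```java
// Exponential-backoff sketch for retrying batched DDB writes that returned
// unprocessed items: double the wait on each attempt, up to a ceiling.
// baseMillis/capMillis are illustrative values, not S3Guard's real settings.
public class BackoffSketch {
    public static long delayMillis(int attempt, long baseMillis, long capMillis) {
        long delay = baseMillis << Math.min(attempt, 30); // double per attempt
        return Math.min(delay, capMillis);                // never exceed the cap
    }
}
```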






[GitHub] [hadoop] xiaoyuyao commented on a change in pull request #627: HDDS-1299. Support TokenIssuer interface for running jobs with OzoneFileSystem.

2019-03-21 Thread GitBox
xiaoyuyao commented on a change in pull request #627: HDDS-1299. Support 
TokenIssuer interface for running jobs with OzoneFileSystem.
URL: https://github.com/apache/hadoop/pull/627#discussion_r267980864
 
 

 ##
 File path: 
hadoop-ozone/dist/src/main/compose/ozonesecure-mr/docker-image/docker-krb5/Dockerfile-krb5
 ##
 @@ -0,0 +1,35 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+FROM openjdk:8u191-jdk-alpine3.9
 
 Review comment:
   We will build official kdc image later in a separate JIRA to avoid 
duplication.





[GitHub] [hadoop] xiaoyuyao commented on a change in pull request #627: HDDS-1299. Support TokenIssuer interface for running jobs with OzoneFileSystem.

2019-03-21 Thread GitBox
xiaoyuyao commented on a change in pull request #627: HDDS-1299. Support 
TokenIssuer interface for running jobs with OzoneFileSystem.
URL: https://github.com/apache/hadoop/pull/627#discussion_r267980430
 
 

 ##
 File path: 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/rpc/RpcClient.java
 ##
 @@ -452,12 +457,11 @@ public void removeBucketAcls(
 Token token =
 ozoneManagerClient.getDelegationToken(renewer);
 if (token != null) {
-  Text dtService =
-  getOMProxyProvider().getProxy().getDelegationTokenService();
   token.setService(dtService);
-  LOG.debug("Created token {}", token);
+  LOG.debug("Created token {} for dtService {}", token, dtService);
 } else {
-  LOG.debug("Cannot get ozone delegation token from {}", renewer);
+  LOG.debug("Cannot get ozone delegation token from {} for service {}",
 
 Review comment:
   Fixed in next commit.





[GitHub] [hadoop] xiaoyuyao commented on a change in pull request #627: HDDS-1299. Support TokenIssuer interface for running jobs with OzoneFileSystem.

2019-03-21 Thread GitBox
xiaoyuyao commented on a change in pull request #627: HDDS-1299. Support 
TokenIssuer interface for running jobs with OzoneFileSystem.
URL: https://github.com/apache/hadoop/pull/627#discussion_r267979701
 
 

 ##
 File path: 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/ObjectStore.java
 ##
 @@ -50,6 +52,7 @@
* The proxy used for connecting to the cluster and perform
* client operations.
*/
+  // TODO: remove rest api and client
 
 Review comment:
   We already have one: https://issues.apache.org/jira/browse/HDDS-738





[jira] [Commented] (HADOOP-16118) S3Guard to support on-demand DDB tables

2019-03-21 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16798499#comment-16798499
 ] 

Steve Loughran commented on HADOOP-16118:
-

Proposed Changes

h3. Awareness: 
* fix tests to downgrade
* s3guard bucket-info command to report DDB table with read==write==0 as PAYG
* error messages to be meaningful
* troubleshooting 

h3. Adoption
* create DDB table supports PAYG (may need SDK update)
* it's the default
* including the test containers
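For the bucket-info reporting rule above, the check itself is simple: a table whose provisioned read and write capacities are both zero is billed per request. A hypothetical sketch of that classification (not the real `S3GuardTool` code):

```java
// Sketch of the proposed bucket-info reporting: a DDB table with
// provisioned read == write == 0 is on-demand (pay-as-you-go).
// Hypothetical helper and output strings, for illustration only.
public class CapacitySketch {
    public static String describe(long readCapacity, long writeCapacity) {
        if (readCapacity == 0 && writeCapacity == 0) {
            return "billing-mode=pay-per-request";
        }
        return "read=" + readCapacity + " write=" + writeCapacity;
    }
}
```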


> S3Guard to support on-demand DDB tables
> ---
>
> Key: HADOOP-16118
> URL: https://issues.apache.org/jira/browse/HADOOP-16118
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Priority: Major
>
> AWS now supports [on demand DDB 
> capacity|https://aws.amazon.com/blogs/aws/amazon-dynamodb-on-demand-no-capacity-planning-and-pay-per-request-pricing/]
>  
> This has lowest cost and best scalability, so could be the default capacity. 
> + add a new option to set-capacity.
> Will depend on an SDK update: created HADOOP-16117.






[jira] [Updated] (HADOOP-16205) Backporting ABFS driver from trunk to branch 2.0

2019-03-21 Thread Esfandiar Manii (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Esfandiar Manii updated HADOOP-16205:
-
Description: Back porting ABFS driver from trunk to 2.0  (was: Commit the 
core code of the ABFS connector (HADOOP-15407) to its development branch)

> Backporting ABFS driver from trunk to branch 2.0
> 
>
> Key: HADOOP-16205
> URL: https://issues.apache.org/jira/browse/HADOOP-16205
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 2.0.0-alpha
>Reporter: Esfandiar Manii
>Assignee: Esfandiar Manii
>Priority: Major
>
> Back porting ABFS driver from trunk to 2.0






[jira] [Updated] (HADOOP-16205) Backporting ABFS driver from trunk to branch 2.0

2019-03-21 Thread Esfandiar Manii (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Esfandiar Manii updated HADOOP-16205:
-
Target Version/s: 2.0.0-alpha  (was: 3.2.0)

> Backporting ABFS driver from trunk to branch 2.0
> 
>
> Key: HADOOP-16205
> URL: https://issues.apache.org/jira/browse/HADOOP-16205
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 2.0.0-alpha
>Reporter: Esfandiar Manii
>Assignee: Esfandiar Manii
>Priority: Major
>
> Back porting ABFS driver from trunk to 2.0






[jira] [Updated] (HADOOP-16205) Backporting ABFS driver from trunk to branch 2.0

2019-03-21 Thread Esfandiar Manii (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Esfandiar Manii updated HADOOP-16205:
-
Affects Version/s: (was: 3.2.0)
   2.0.0-alpha

> Backporting ABFS driver from trunk to branch 2.0
> 
>
> Key: HADOOP-16205
> URL: https://issues.apache.org/jira/browse/HADOOP-16205
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 2.0.0-alpha
>Reporter: Esfandiar Manii
>Assignee: Esfandiar Manii
>Priority: Major
>
> Commit the core code of the ABFS connector (HADOOP-15407) to its development 
> branch






[jira] [Updated] (HADOOP-16205) Backporting ABFS driver from trunk to branch 2.0

2019-03-21 Thread Esfandiar Manii (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Esfandiar Manii updated HADOOP-16205:
-
Fix Version/s: (was: HADOOP-15407)

> Backporting ABFS driver from trunk to branch 2.0
> 
>
> Key: HADOOP-16205
> URL: https://issues.apache.org/jira/browse/HADOOP-16205
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Esfandiar Manii
>Assignee: Esfandiar Manii
>Priority: Major
>
> Commit the core code of the ABFS connector (HADOOP-15407) to its development 
> branch






[jira] [Assigned] (HADOOP-16205) Backporting ABFS driver from trunk to branch 2.0

2019-03-21 Thread Esfandiar Manii (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Esfandiar Manii reassigned HADOOP-16205:


Assignee: Esfandiar Manii  (was: Da Zhou)

> Backporting ABFS driver from trunk to branch 2.0
> 
>
> Key: HADOOP-16205
> URL: https://issues.apache.org/jira/browse/HADOOP-16205
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Esfandiar Manii
>Assignee: Esfandiar Manii
>Priority: Major
> Fix For: HADOOP-15407
>
>
> Commit the core code of the ABFS connector (HADOOP-15407) to its development 
> branch






[jira] [Created] (HADOOP-16205) Backporting ABFS driver from trunk to branch 2.0

2019-03-21 Thread Esfandiar Manii (JIRA)
Esfandiar Manii created HADOOP-16205:


 Summary: Backporting ABFS driver from trunk to branch 2.0
 Key: HADOOP-16205
 URL: https://issues.apache.org/jira/browse/HADOOP-16205
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/azure
Affects Versions: 3.2.0
Reporter: Esfandiar Manii
Assignee: Da Zhou
 Fix For: HADOOP-15407


Commit the core code of the ABFS connector (HADOOP-15407) to its development 
branch






[jira] [Commented] (HADOOP-16156) [Clean-up] Remove NULL check before instanceof and fix checkstyle in InnerNodeImpl

2019-03-21 Thread Daryn Sharp (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16798449#comment-16798449
 ] 

Daryn Sharp commented on HADOOP-16156:
--

Patch looks ok and I'm not objecting to the changes. IMHO, it's not a good 
idea to reformat for the mere sake of reformatting. All it will do is cause 
someone else a merge headache or even a migraine...

That said, if this patch goes in: {{index !=-1}} is missing a space between 
the = and the -.
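The spacing nit above is mechanically enforceable. A minimal checkstyle fragment using the standard WhitespaceAround check (whether Hadoop's own checkstyle.xml already enables it for these tokens is not shown here) would be:

```xml
<module name="Checker">
  <module name="TreeWalker">
    <!-- flags binary operators not surrounded by whitespace, e.g. `index !=-1` -->
    <module name="WhitespaceAround"/>
  </module>
</module>
```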

> [Clean-up] Remove NULL check before instanceof and fix checkstyle in 
> InnerNodeImpl
> --
>
> Key: HADOOP-16156
> URL: https://issues.apache.org/jira/browse/HADOOP-16156
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Shweta
>Assignee: Shweta
>Priority: Minor
> Attachments: HADOOP-16156.001.patch, HADOOP-16156.002.patch, 
> HADOOP-16156.003.patch
>
>







[GitHub] [hadoop] ajayydv merged pull request #633: HDDS-1321. TestOzoneManagerHttpServer depends on hard-coded port numb…

2019-03-21 Thread GitBox
ajayydv merged pull request #633: HDDS-1321. TestOzoneManagerHttpServer depends 
on hard-coded port numb…
URL: https://github.com/apache/hadoop/pull/633
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ajayydv commented on issue #633: HDDS-1321. TestOzoneManagerHttpServer depends on hard-coded port numb…

2019-03-21 Thread GitBox
ajayydv commented on issue #633: HDDS-1321. TestOzoneManagerHttpServer depends 
on hard-coded port numb…
URL: https://github.com/apache/hadoop/pull/633#issuecomment-475406398
 
 
   +1





[GitHub] [hadoop] hadoop-yetus commented on issue #626: HDDS-1262. In OM HA OpenKey and initiateMultipartUpload call Should happen only leader OM.

2019-03-21 Thread GitBox
hadoop-yetus commented on issue #626: HDDS-1262. In OM HA OpenKey and 
initiateMultipartUpload call Should happen only leader OM.
URL: https://github.com/apache/hadoop/pull/626#issuecomment-475400297
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 25 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 54 | Maven dependency ordering for branch |
   | +1 | mvninstall | 977 | trunk passed |
   | +1 | compile | 99 | trunk passed |
   | +1 | checkstyle | 25 | trunk passed |
   | +1 | mvnsite | 96 | trunk passed |
   | +1 | shadedclient | 756 | branch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/integration-test |
   | +1 | findbugs | 97 | trunk passed |
   | +1 | javadoc | 63 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 11 | Maven dependency ordering for patch |
   | +1 | mvninstall | 92 | the patch passed |
   | +1 | compile | 94 | the patch passed |
   | +1 | cc | 94 | the patch passed |
   | +1 | javac | 94 | the patch passed |
   | +1 | checkstyle | 22 | the patch passed |
   | +1 | mvnsite | 82 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 751 | patch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/integration-test |
   | +1 | findbugs | 116 | the patch passed |
   | -1 | javadoc | 32 | hadoop-ozone_common generated 1 new + 1 unchanged - 0 
fixed = 2 total (was 1) |
   ||| _ Other Tests _ |
   | +1 | unit | 33 | common in the patch passed. |
   | +1 | unit | 43 | ozone-manager in the patch passed. |
   | -1 | unit | 989 | integration-test in the patch failed. |
   | +1 | asflicense | 28 | The patch does not generate ASF License warnings. |
   | | | 4537 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.scm.TestXceiverClientMetrics |
   |   | hadoop.ozone.client.rpc.TestFailureHandlingByClient |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.hdds.scm.pipeline.TestNodeFailure |
   |   | hadoop.hdds.scm.pipeline.TestNode2PipelineMap |
   |   | hadoop.hdds.scm.pipeline.TestSCMRestart |
   |   | hadoop.ozone.client.rpc.TestBCSID |
   |   | hadoop.ozone.client.rpc.TestContainerStateMachineFailures |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-626/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/626 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  cc  |
   | uname | Linux 1d6d0c034eb6 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 548997d |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-626/6/artifact/out/diff-javadoc-javadoc-hadoop-ozone_common.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-626/6/artifact/out/patch-unit-hadoop-ozone_integration-test.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-626/6/testReport/ |
   | Max. process+thread count | 4281 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/ozone-manager 
hadoop-ozone/integration-test U: hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-626/6/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] hadoop-yetus commented on issue #633: HDDS-1321. TestOzoneManagerHttpServer depends on hard-coded port numb…

2019-03-21 Thread GitBox
hadoop-yetus commented on issue #633: HDDS-1321. TestOzoneManagerHttpServer 
depends on hard-coded port numb…
URL: https://github.com/apache/hadoop/pull/633#issuecomment-475396364
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 45 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1211 | trunk passed |
   | +1 | compile | 55 | trunk passed |
   | +1 | checkstyle | 20 | trunk passed |
   | +1 | mvnsite | 34 | trunk passed |
   | +1 | shadedclient | 780 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 45 | trunk passed |
   | +1 | javadoc | 19 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 33 | the patch passed |
   | +1 | compile | 25 | the patch passed |
   | +1 | javac | 25 | the patch passed |
   | +1 | checkstyle | 14 | the patch passed |
   | +1 | mvnsite | 26 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 802 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 53 | the patch passed |
   | +1 | javadoc | 17 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 63 | ozone-manager in the patch passed. |
   | +1 | asflicense | 25 | The patch does not generate ASF License warnings. |
   | | | 3354 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-633/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/633 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  |
   | uname | Linux c664e0ade47b 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri 
Oct 5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 548997d |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-633/3/testReport/ |
   | Max. process+thread count | 352 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/ozone-manager U: hadoop-ozone/ozone-manager |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-633/3/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] hadoop-yetus commented on issue #633: HDDS-1321. TestOzoneManagerHttpServer depends on hard-coded port numb…

2019-03-21 Thread GitBox
hadoop-yetus commented on issue #633: HDDS-1321. TestOzoneManagerHttpServer 
depends on hard-coded port numb…
URL: https://github.com/apache/hadoop/pull/633#issuecomment-475396348
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 49 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1229 | trunk passed |
   | +1 | compile | 55 | trunk passed |
   | +1 | checkstyle | 20 | trunk passed |
   | +1 | mvnsite | 34 | trunk passed |
   | +1 | shadedclient | 765 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 47 | trunk passed |
   | +1 | javadoc | 21 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 35 | the patch passed |
   | +1 | compile | 24 | the patch passed |
   | +1 | javac | 24 | the patch passed |
   | +1 | checkstyle | 13 | the patch passed |
   | +1 | mvnsite | 28 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 801 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 53 | the patch passed |
   | +1 | javadoc | 20 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 66 | ozone-manager in the patch passed. |
   | +1 | asflicense | 29 | The patch does not generate ASF License warnings. |
   | | | 3381 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-633/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/633 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  |
   | uname | Linux 28910357a9dc 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri 
Oct 5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 548997d |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-633/2/testReport/ |
   | Max. process+thread count | 350 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/ozone-manager U: hadoop-ozone/ozone-manager |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-633/2/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] hadoop-yetus commented on issue #633: HDDS-1321. TestOzoneManagerHttpServer depends on hard-coded port numb…

2019-03-21 Thread GitBox
hadoop-yetus commented on issue #633: HDDS-1321. TestOzoneManagerHttpServer 
depends on hard-coded port numb…
URL: https://github.com/apache/hadoop/pull/633#issuecomment-475394461
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 25 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 997 | trunk passed |
   | +1 | compile | 48 | trunk passed |
   | +1 | checkstyle | 16 | trunk passed |
   | +1 | mvnsite | 29 | trunk passed |
   | +1 | shadedclient | 732 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 38 | trunk passed |
   | +1 | javadoc | 19 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 33 | the patch passed |
   | +1 | compile | 23 | the patch passed |
   | +1 | javac | 23 | the patch passed |
   | +1 | checkstyle | 15 | the patch passed |
   | +1 | mvnsite | 25 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | -1 | shadedclient | 1087 | patch has errors when building and testing our 
client artifacts. |
   | +1 | findbugs | 48 | the patch passed |
   | +1 | javadoc | 18 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 44 | ozone-manager in the patch passed. |
   | +1 | asflicense | 26 | The patch does not generate ASF License warnings. |
   | | | 3315 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-633/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/633 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  |
   | uname | Linux 2890069db7a0 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 548997d |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-633/1/testReport/ |
   | Max. process+thread count | 411 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/ozone-manager U: hadoop-ozone/ozone-manager |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-633/1/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] ajayydv commented on a change in pull request #627: HDDS-1299. Support TokenIssuer interface for running jobs with OzoneFileSystem.

2019-03-21 Thread GitBox
ajayydv commented on a change in pull request #627: HDDS-1299. Support 
TokenIssuer interface for running jobs with OzoneFileSystem.
URL: https://github.com/apache/hadoop/pull/627#discussion_r267913003
 
 

 ##
 File path: 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/ObjectStore.java
 ##
 @@ -50,6 +52,7 @@
* The proxy used for connecting to the cluster and perform
* client operations.
*/
+  // TODO: remove rest api and client
 
 Review comment:
   Shall we file a jira for this targeting 0.5? 





[GitHub] [hadoop] ajayydv commented on a change in pull request #627: HDDS-1299. Support TokenIssuer interface for running jobs with OzoneFileSystem.

2019-03-21 Thread GitBox
ajayydv commented on a change in pull request #627: HDDS-1299. Support 
TokenIssuer interface for running jobs with OzoneFileSystem.
URL: https://github.com/apache/hadoop/pull/627#discussion_r267927555
 
 

 ##
 File path: 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/rpc/RpcClient.java
 ##
 @@ -452,12 +457,11 @@ public void removeBucketAcls(
 Token token =
 ozoneManagerClient.getDelegationToken(renewer);
 if (token != null) {
-  Text dtService =
-  getOMProxyProvider().getProxy().getDelegationTokenService();
   token.setService(dtService);
-  LOG.debug("Created token {}", token);
+  LOG.debug("Created token {} for dtService {}", token, dtService);
 } else {
-  LOG.debug("Cannot get ozone delegation token from {}", renewer);
+  LOG.debug("Cannot get ozone delegation token from {} for service {}",
 
 Review comment:
   Shall we rephrase it to something like "Cannot get ozone delegation token 
for renewer: {} from service: {}"?





[GitHub] [hadoop] ajayydv commented on a change in pull request #627: HDDS-1299. Support TokenIssuer interface for running jobs with OzoneFileSystem.

2019-03-21 Thread GitBox
ajayydv commented on a change in pull request #627: HDDS-1299. Support 
TokenIssuer interface for running jobs with OzoneFileSystem.
URL: https://github.com/apache/hadoop/pull/627#discussion_r267932913
 
 

 ##
 File path: hadoop-ozone/dist/src/main/compose/ozonesecure-mr/docker-config
 ##
 @@ -0,0 +1,176 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+OZONE-SITE.XML_ozone.om.address=om
+OZONE-SITE.XML_ozone.om.http-address=om:9874
+OZONE-SITE.XML_ozone.scm.names=scm
+OZONE-SITE.XML_ozone.enabled=True
+OZONE-SITE.XML_ozone.scm.datanode.id=/data/datanode.id
+OZONE-SITE.XML_ozone.scm.block.client.address=scm
+OZONE-SITE.XML_ozone.metadata.dirs=/data/metadata
+OZONE-SITE.XML_ozone.handler.type=distributed
+OZONE-SITE.XML_ozone.scm.client.address=scm
+OZONE-SITE.XML_hdds.block.token.enabled=true
+OZONE-SITE.XML_ozone.replication=1
+OZONE-SITE.XML_hdds.scm.kerberos.principal=scm/s...@example.com
+OZONE-SITE.XML_hdds.scm.kerberos.keytab.file=/etc/security/keytabs/scm.keytab
+OZONE-SITE.XML_ozone.om.kerberos.principal=om/o...@example.com
+OZONE-SITE.XML_ozone.om.kerberos.keytab.file=/etc/security/keytabs/om.keytab
+OZONE-SITE.XML_ozone.s3g.keytab.file=/etc/security/keytabs/HTTP.keytab
+OZONE-SITE.XML_ozone.s3g.authentication.kerberos.principal=HTTP/s...@example.com
 
 Review comment:
   Replace it with _HOST?
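   For context: Hadoop expands the literal `_HOST` token in a service 
principal to the daemon's own canonical hostname at startup, so one config 
line works on every node. The suggested change would look roughly like the 
following (realm and key names hypothetical, mirroring the lines above):

   ```
   OZONE-SITE.XML_hdds.scm.kerberos.principal=scm/_HOST@EXAMPLE.COM
   OZONE-SITE.XML_ozone.om.kerberos.principal=om/_HOST@EXAMPLE.COM
   ```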





[GitHub] [hadoop] ajayydv commented on a change in pull request #627: HDDS-1299. Support TokenIssuer interface for running jobs with OzoneFileSystem.

2019-03-21 Thread GitBox
ajayydv commented on a change in pull request #627: HDDS-1299. Support 
TokenIssuer interface for running jobs with OzoneFileSystem.
URL: https://github.com/apache/hadoop/pull/627#discussion_r267938948
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
 ##
 @@ -1486,7 +1484,9 @@ private static UserGroupInformation getRemoteUser() 
throws IOException {
 realUser = new Text(ugi.getRealUser().getUserName());
   }
 
-  return delegationTokenMgr.createToken(owner, renewer, realUser);
+  token = delegationTokenMgr.createToken(owner, renewer, realUser);
+  LOG.debug("OmDelegationToken: {} created.", token);
 
 Review comment:
   We already have a trace statement inside SecretManager. Shall we skip this 
debug statement?





[GitHub] [hadoop] ajayydv commented on a change in pull request #627: HDDS-1299. Support TokenIssuer interface for running jobs with OzoneFileSystem.

2019-03-21 Thread GitBox
ajayydv commented on a change in pull request #627: HDDS-1299. Support 
TokenIssuer interface for running jobs with OzoneFileSystem.
URL: https://github.com/apache/hadoop/pull/627#discussion_r267937938
 
 

 ##
 File path: 
hadoop-ozone/ozonefs/src/main/java/org/apache/hadoop/fs/ozone/OzoneClientAdapterImpl.java
 ##
 @@ -289,7 +291,7 @@ public boolean hasNextKey(String key) {
   @Override
   public Token getDelegationToken(String renewer)
   throws IOException {
-if (!securityEnabled) {
+if (!securityEnabled || renewer == null) {
 
 Review comment:
   Issuing a token even when no renewer is passed seems to be a legitimate case.





[GitHub] [hadoop] ajayydv commented on a change in pull request #627: HDDS-1299. Support TokenIssuer interface for running jobs with OzoneFileSystem.

2019-03-21 Thread GitBox
ajayydv commented on a change in pull request #627: HDDS-1299. Support 
TokenIssuer interface for running jobs with OzoneFileSystem.
URL: https://github.com/apache/hadoop/pull/627#discussion_r267933112
 
 

 ##
 File path: hadoop-ozone/dist/src/main/compose/ozonesecure-mr/docker-config
 ##
 @@ -0,0 +1,176 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+OZONE-SITE.XML_ozone.om.address=om
+OZONE-SITE.XML_ozone.om.http-address=om:9874
+OZONE-SITE.XML_ozone.scm.names=scm
+OZONE-SITE.XML_ozone.enabled=True
+OZONE-SITE.XML_ozone.scm.datanode.id=/data/datanode.id
+OZONE-SITE.XML_ozone.scm.block.client.address=scm
+OZONE-SITE.XML_ozone.metadata.dirs=/data/metadata
+OZONE-SITE.XML_ozone.handler.type=distributed
+OZONE-SITE.XML_ozone.scm.client.address=scm
+OZONE-SITE.XML_hdds.block.token.enabled=true
+OZONE-SITE.XML_ozone.replication=1
+OZONE-SITE.XML_hdds.scm.kerberos.principal=scm/s...@example.com
 
 Review comment:
   Replace hardcoded hostname with _HOST?
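Hadoop resolves the `_HOST` token in a configured principal against the local hostname at startup, so one config file works on every node. A minimal self-contained sketch of that substitution (the real implementation lives in Hadoop's `SecurityUtil.getServerPrincipal`; the realm and hostname below are illustrative, not taken from the docker-config above):

```java
/**
 * Sketch of _HOST substitution in a Kerberos principal such as
 * "scm/_HOST@EXAMPLE.COM": the _HOST component is replaced with the
 * node's hostname, lower-cased per Kerberos convention.
 */
public class HostPatternSketch {
    static final String HOSTNAME_PATTERN = "_HOST";

    static String replaceHostPattern(String principal, String hostname) {
        String[] parts = principal.split("[/@]");
        if (parts.length < 2 || !HOSTNAME_PATTERN.equals(parts[1])) {
            return principal; // no _HOST component, use as configured
        }
        return parts[0] + "/" + hostname.toLowerCase()
                + (parts.length > 2 ? "@" + parts[2] : "");
    }

    public static void main(String[] args) {
        System.out.println(
            replaceHostPattern("scm/_HOST@EXAMPLE.COM", "scm1.example.com"));
        // scm/scm1.example.com@EXAMPLE.COM
    }
}
```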




[GitHub] [hadoop] ajayydv commented on a change in pull request #627: HDDS-1299. Support TokenIssuer interface for running jobs with OzoneFileSystem.

2019-03-21 Thread GitBox
ajayydv commented on a change in pull request #627: HDDS-1299. Support 
TokenIssuer interface for running jobs with OzoneFileSystem.
URL: https://github.com/apache/hadoop/pull/627#discussion_r267933244
 
 

 ##
 File path: hadoop-ozone/dist/src/main/compose/ozonesecure-mr/docker-config
 ##
 @@ -0,0 +1,176 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+OZONE-SITE.XML_ozone.om.address=om
+OZONE-SITE.XML_ozone.om.http-address=om:9874
+OZONE-SITE.XML_ozone.scm.names=scm
+OZONE-SITE.XML_ozone.enabled=True
+OZONE-SITE.XML_ozone.scm.datanode.id=/data/datanode.id
+OZONE-SITE.XML_ozone.scm.block.client.address=scm
+OZONE-SITE.XML_ozone.metadata.dirs=/data/metadata
+OZONE-SITE.XML_ozone.handler.type=distributed
+OZONE-SITE.XML_ozone.scm.client.address=scm
+OZONE-SITE.XML_hdds.block.token.enabled=true
+OZONE-SITE.XML_ozone.replication=1
+OZONE-SITE.XML_hdds.scm.kerberos.principal=scm/s...@example.com
+OZONE-SITE.XML_hdds.scm.kerberos.keytab.file=/etc/security/keytabs/scm.keytab
+OZONE-SITE.XML_ozone.om.kerberos.principal=om/o...@example.com
+OZONE-SITE.XML_ozone.om.kerberos.keytab.file=/etc/security/keytabs/om.keytab
+OZONE-SITE.XML_ozone.s3g.keytab.file=/etc/security/keytabs/HTTP.keytab
+OZONE-SITE.XML_ozone.s3g.authentication.kerberos.principal=HTTP/s...@example.com
+
+OZONE-SITE.XML_ozone.security.enabled=true
+OZONE-SITE.XML_hdds.scm.http.kerberos.principal=HTTP/s...@example.com
+OZONE-SITE.XML_hdds.scm.http.kerberos.keytab=/etc/security/keytabs/HTTP.keytab
+OZONE-SITE.XML_ozone.om.http.kerberos.principal=HTTP/o...@example.com
 
 Review comment:
   same as above.




[GitHub] [hadoop] ajayydv commented on a change in pull request #627: HDDS-1299. Support TokenIssuer interface for running jobs with OzoneFileSystem.

2019-03-21 Thread GitBox
ajayydv commented on a change in pull request #627: HDDS-1299. Support 
TokenIssuer interface for running jobs with OzoneFileSystem.
URL: https://github.com/apache/hadoop/pull/627#discussion_r267933056
 
 

 ##
 File path: hadoop-ozone/dist/src/main/compose/ozonesecure-mr/docker-config
 ##
 @@ -0,0 +1,176 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+OZONE-SITE.XML_ozone.om.address=om
+OZONE-SITE.XML_ozone.om.http-address=om:9874
+OZONE-SITE.XML_ozone.scm.names=scm
+OZONE-SITE.XML_ozone.enabled=True
+OZONE-SITE.XML_ozone.scm.datanode.id=/data/datanode.id
+OZONE-SITE.XML_ozone.scm.block.client.address=scm
+OZONE-SITE.XML_ozone.metadata.dirs=/data/metadata
+OZONE-SITE.XML_ozone.handler.type=distributed
+OZONE-SITE.XML_ozone.scm.client.address=scm
+OZONE-SITE.XML_hdds.block.token.enabled=true
+OZONE-SITE.XML_ozone.replication=1
+OZONE-SITE.XML_hdds.scm.kerberos.principal=scm/s...@example.com
+OZONE-SITE.XML_hdds.scm.kerberos.keytab.file=/etc/security/keytabs/scm.keytab
+OZONE-SITE.XML_ozone.om.kerberos.principal=om/o...@example.com
 
 Review comment:
   Replace hardcoded hostname with _HOST?




[GitHub] [hadoop] ajayydv commented on a change in pull request #627: HDDS-1299. Support TokenIssuer interface for running jobs with OzoneFileSystem.

2019-03-21 Thread GitBox
ajayydv commented on a change in pull request #627: HDDS-1299. Support 
TokenIssuer interface for running jobs with OzoneFileSystem.
URL: https://github.com/apache/hadoop/pull/627#discussion_r267937938
 
 

 ##
 File path: 
hadoop-ozone/ozonefs/src/main/java/org/apache/hadoop/fs/ozone/OzoneClientAdapterImpl.java
 ##
 @@ -289,7 +291,7 @@ public boolean hasNextKey(String key) {
   @Override
   public Token getDelegationToken(String renewer)
   throws IOException {
-if (!securityEnabled) {
+if (!securityEnabled || renewer == null) {
 
 Review comment:
   Issuing a token even if the renewer is not passed seems to be a legitimate case.
   The existing setRenewer implementation already normalizes a null renewer:

   public void setRenewer(Text renewer) {
     if (renewer == null) {
       this.renewer = new Text();
     } else {
       HadoopKerberosName renewerKrbName =
           new HadoopKerberosName(renewer.toString());
       try {
         this.renewer = new Text(renewerKrbName.getShortName());
       } catch (IOException e) {
         throw new RuntimeException(e);
       }
     }
   }
   




[GitHub] [hadoop] ajayydv commented on a change in pull request #627: HDDS-1299. Support TokenIssuer interface for running jobs with OzoneFileSystem.

2019-03-21 Thread GitBox
ajayydv commented on a change in pull request #627: HDDS-1299. Support 
TokenIssuer interface for running jobs with OzoneFileSystem.
URL: https://github.com/apache/hadoop/pull/627#discussion_r267934260
 
 

 ##
 File path: 
hadoop-ozone/dist/src/main/compose/ozonesecure-mr/docker-image/docker-krb5/Dockerfile-krb5
 ##
 @@ -0,0 +1,35 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+FROM openjdk:8u191-jdk-alpine3.9
 
 Review comment:
   Shall we reuse the docker file in the ../ozonesecure/docker-image dir instead?
This way we have to maintain only one set of docker files.




[GitHub] [hadoop] hadoop-yetus commented on issue #600: HDFS-14348: Fix JNI exception handling issues in libhdfs

2019-03-21 Thread GitBox
hadoop-yetus commented on issue #600: HDFS-14348: Fix JNI exception handling 
issues in libhdfs
URL: https://github.com/apache/hadoop/pull/600#issuecomment-475382564
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 29 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 975 | trunk passed |
   | +1 | compile | 94 | trunk passed |
   | +1 | mvnsite | 19 | trunk passed |
   | +1 | shadedclient | 1674 | branch has no errors when building and testing 
our client artifacts. |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 13 | the patch passed |
   | +1 | compile | 91 | the patch passed |
   | +1 | cc | 91 | the patch passed |
   | +1 | javac | 91 | the patch passed |
   | +1 | mvnsite | 15 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 681 | patch has no errors when building and testing 
our client artifacts. |
   ||| _ Other Tests _ |
   | +1 | unit | 345 | hadoop-hdfs-native-client in the patch passed. |
   | +1 | asflicense | 28 | The patch does not generate ASF License warnings. |
   | | | 3002 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-600/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/600 |
   | JIRA Issue | HDFS-14348 |
   | Optional Tests |  dupname  asflicense  compile  cc  mvnsite  javac  unit  |
   | uname | Linux 0d2992db4512 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 548997d |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-600/6/testReport/ |
   | Max. process+thread count | 449 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: 
hadoop-hdfs-project/hadoop-hdfs-native-client |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-600/6/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




[GitHub] [hadoop] hadoop-yetus commented on issue #626: HDDS-1262. In OM HA OpenKey and initiateMultipartUpload call Should happen only leader OM.

2019-03-21 Thread GitBox
hadoop-yetus commented on issue #626: HDDS-1262. In OM HA OpenKey and 
initiateMultipartUpload call Should happen only leader OM.
URL: https://github.com/apache/hadoop/pull/626#issuecomment-475381176
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 27 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 53 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1062 | trunk passed |
   | +1 | compile | 97 | trunk passed |
   | +1 | checkstyle | 26 | trunk passed |
   | +1 | mvnsite | 98 | trunk passed |
   | +1 | shadedclient | 764 | branch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/integration-test |
   | +1 | findbugs | 95 | trunk passed |
   | +1 | javadoc | 65 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 11 | Maven dependency ordering for patch |
   | +1 | mvninstall | 99 | the patch passed |
   | +1 | compile | 95 | the patch passed |
   | +1 | cc | 95 | the patch passed |
   | +1 | javac | 95 | the patch passed |
   | +1 | checkstyle | 20 | the patch passed |
   | +1 | mvnsite | 80 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 712 | patch has no errors when building and testing 
our client artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: 
hadoop-ozone/integration-test |
   | +1 | findbugs | 104 | the patch passed |
   | -1 | javadoc | 32 | hadoop-ozone_common generated 1 new + 1 unchanged - 0 
fixed = 2 total (was 1) |
   ||| _ Other Tests _ |
   | +1 | unit | 37 | common in the patch passed. |
   | +1 | unit | 39 | ozone-manager in the patch passed. |
   | -1 | unit | 640 | integration-test in the patch failed. |
   | +1 | asflicense | 33 | The patch does not generate ASF License warnings. |
   | | | 4246 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestBCSID |
   |   | hadoop.ozone.client.rpc.TestFailureHandlingByClient |
   |   | hadoop.ozone.client.rpc.TestContainerStateMachineFailures |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-626/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/626 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  cc  |
   | uname | Linux 5a520c24500b 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / a99eb80 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-626/5/artifact/out/diff-javadoc-javadoc-hadoop-ozone_common.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-626/5/artifact/out/patch-unit-hadoop-ozone_integration-test.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-626/5/testReport/ |
   | Max. process+thread count | 3840 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/ozone-manager 
hadoop-ozone/integration-test U: hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-626/5/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




[GitHub] [hadoop] arp7 commented on issue #620: HDDS-1205. Refactor ReplicationManager to handle QUASI_CLOSED contain…

2019-03-21 Thread GitBox
arp7 commented on issue #620: HDDS-1205. Refactor ReplicationManager to handle 
QUASI_CLOSED contain…
URL: https://github.com/apache/hadoop/pull/620#issuecomment-475381105
 
 
   +1 from me. Not sure why Yetus is failing either.




[GitHub] [hadoop] sahilTakiar commented on a change in pull request #597: HDFS-3246: pRead equivalent for direct read path

2019-03-21 Thread GitBox
sahilTakiar commented on a change in pull request #597: HDFS-3246: pRead 
equivalent for direct read path
URL: https://github.com/apache/hadoop/pull/597#discussion_r267932717
 
 

 ##
 File path: 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/CryptoStreamsTestBase.java
 ##
 @@ -129,6 +130,32 @@ private void preadCheck(PositionedReadable in) throws 
Exception {
 Assert.assertArrayEquals(result, expectedData);
   }
 
+  private int byteBufferPreadAll(ByteBufferPositionedReadable in,
+ ByteBuffer buf) throws IOException {
+int n = 0;
+int total = 0;
+while (n != -1) {
 
 Review comment:
   IIUC the HDFS read APIs make the same guarantees as 
`InputStream#read(byte[])`, which returns -1 if there is no more data to read.
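The read-until-minus-one convention mentioned above can be sketched with a plain `InputStream`; this is an illustrative loop under that convention, not the code under review:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.UncheckedIOException;

/**
 * Sketch of the read-loop convention: like InputStream#read(byte[]), a read
 * API that returns -1 at end of stream lets the caller accumulate the total
 * bytes read with a simple loop.
 */
public class ReadAllSketch {
    static int readAll(InputStream in, byte[] buf) {
        int total = 0;
        int n = 0;
        try {
            // -1 is the end-of-stream sentinel, so keep reading until we see it
            // or the destination buffer is full.
            while (n != -1 && total < buf.length) {
                n = in.read(buf, total, buf.length - total);
                if (n > 0) {
                    total += n;
                }
            }
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        return total;
    }

    public static void main(String[] args) {
        byte[] buf = new byte[8];
        int total = readAll(new ByteArrayInputStream("hello".getBytes()), buf);
        System.out.println(total); // 5
    }
}
```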




[GitHub] [hadoop] sahilTakiar commented on a change in pull request #597: HDFS-3246: pRead equivalent for direct read path

2019-03-21 Thread GitBox
sahilTakiar commented on a change in pull request #597: HDFS-3246: pRead 
equivalent for direct read path
URL: https://github.com/apache/hadoop/pull/597#discussion_r267932621
 
 

 ##
 File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/CryptoInputStream.java
 ##
 @@ -375,7 +397,70 @@ private void decrypt(long position, byte[] buffer, int 
offset, int length)
   returnDecryptor(decryptor);
 }
   }
-  
+
+  /**
+   * Decrypt n bytes in buf starting at start. Output is also put into buf
+   * starting at current position. buf.position() and buf.limit() should be
+   * unchanged after decryption. It is thread-safe.
+   *
+   * 
+   *   This method decrypts the input buf chunk-by-chunk and writes the
+   *   decrypted output back into the input buf. It uses two local buffers
+   *   taken from the {@link #bufferPool} to assist in this process: one is
+   *   designated as the input buffer and it stores a single chunk of the
+   *   given buf, the other is designated as the output buffer, which stores
+   *   the output of decrypting the input buffer. Both buffers are of size
+   *   {@link #bufferSize}.
+   * 
+   *
+   * 
+   *   Decryption is done by using a {@link Decryptor} and the
+   *   {@link #decrypt(Decryptor, ByteBuffer, ByteBuffer, byte)} method. Once
+   *   the decrypted data is written into the output buffer, it is copied back
+   *   into buf. Both buffers are returned back into the pool once the entire
+   *   buf is decrypted.
+   * 
+   */
+  private void decrypt(long position, ByteBuffer buf, int n, int start)
+  throws IOException {
+ByteBuffer localInBuffer = getBuffer();
+ByteBuffer localOutBuffer = getBuffer();
+final int pos = buf.position();
+final int limit = buf.limit();
+int len = 0;
+Decryptor localDecryptor = null;
+try {
+  localDecryptor = getDecryptor();
+  byte[] localIV = initIV.clone();
+  updateDecryptor(localDecryptor, position, localIV);
+  byte localPadding = getPadding(position);
+  // Set proper position for inputdata.
+  localInBuffer.position(localPadding);
+
+  while (len < n) {
+buf.position(start + len);
+buf.limit(start + len + Math.min(n - len, localInBuffer.remaining()));
+localInBuffer.put(buf);
+// Do decryption
+try {
+  decrypt(localDecryptor, localInBuffer, localOutBuffer, localPadding);
+  buf.position(start + len);
+  buf.limit(limit);
 
 Review comment:
   Well the invariant you want to preserve is that `buf.put(inputBuf)` should 
only be called with the original value of `buf.limit()` so that you don't 
exceed the given limit. Using `start + len + Math.min(n - len, 
localInBuffer.remaining())` as the limit could violate this if `n + start > 
buf.limit()`.
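The invariant described above reduces to clamping the per-chunk limit against the buffer's original limit. A self-contained sketch of that computation (the names mirror the discussion; the numbers are illustrative):

```java
/**
 * Sketch of the limit invariant: when decrypting buf in chunks, the
 * temporary limit for each chunk must never exceed the original
 * buf.limit(), even when n + start would overshoot it.
 */
public class ChunkLimitSketch {
    /** Next chunk limit for a copy starting at start+len, clamped to origLimit. */
    static int nextChunkLimit(int start, int len, int n, int chunk, int origLimit) {
        int unclamped = start + len + Math.min(n - len, chunk);
        return Math.min(unclamped, origLimit); // preserve the invariant
    }

    public static void main(String[] args) {
        // n + start > origLimit: without the clamp the limit would be 12 > 10.
        System.out.println(nextChunkLimit(4, 0, 8, 8, 10)); // 10
    }
}
```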




[GitHub] [hadoop] sahilTakiar commented on a change in pull request #597: HDFS-3246: pRead equivalent for direct read path

2019-03-21 Thread GitBox
sahilTakiar commented on a change in pull request #597: HDFS-3246: pRead 
equivalent for direct read path
URL: https://github.com/apache/hadoop/pull/597#discussion_r267932658
 
 

 ##
 File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/ByteBufferPositionedReadable.java
 ##
 @@ -0,0 +1,64 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.fs;
+
+import java.io.IOException;
+import java.nio.ByteBuffer;
+
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.classification.InterfaceStability;
+
+/**
+ * Implementers of this interface provide a positioned read API that writes to 
a
+ * {@link ByteBuffer} rather than a {@code byte[]}.
+ *
+ * @see PositionedReadable
+ * @see ByteBufferReadable
+ */
+@InterfaceAudience.Public
+@InterfaceStability.Evolving
+public interface ByteBufferPositionedReadable {
+  /**
+   * Reads up to {@code buf.remaining()} bytes into buf from a given position
+   * in the file and returns the number of bytes read. Callers should use
+   * {@code buf.limit(...)} to control the size of the desired read and
+   * {@code buf.position(...)} to control the offset into the buffer the data
+   * should be written to.
+   * 
+   * After a successful call, {@code buf.position()} will be advanced by the
+   * number of bytes read and {@code buf.limit()} should be unchanged.
+   * 
+   * In the case of an exception, the values of {@code buf.position()} and
+   * {@code buf.limit()} are undefined, and callers should be prepared to
+   * recover from this eventuality.
 
 Review comment:
   I agree, but I copied this from `ByteBufferReadable`, so I think we should 
leave it for now, and if we want to lift this limitation, then we can do so for 
both `ByteBufferReadable` and `ByteBufferPositionedReadable` in another JIRA.




[GitHub] [hadoop] sahilTakiar commented on a change in pull request #597: HDFS-3246: pRead equivalent for direct read path

2019-03-21 Thread GitBox
sahilTakiar commented on a change in pull request #597: HDFS-3246: pRead 
equivalent for direct read path
URL: https://github.com/apache/hadoop/pull/597#discussion_r267932721
 
 

 ##
 File path: 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs/hdfs.c
 ##
 @@ -316,6 +331,17 @@ void hdfsFileDisableDirectRead(hdfsFile file)
 file->flags &= ~HDFS_FILE_SUPPORTS_DIRECT_READ;
 }
 
+int hdfsFileUsesDirectPread(hdfsFile file)
+{
+return !!(file->flags & HDFS_FILE_SUPPORTS_DIRECT_PREAD);
 
 Review comment:
   It was copied from `hdfsFileUsesDirectRead`, but I agree it hard to 
understand so I cleaned it up.




[GitHub] [hadoop] sahilTakiar commented on a change in pull request #597: HDFS-3246: pRead equivalent for direct read path

2019-03-21 Thread GitBox
sahilTakiar commented on a change in pull request #597: HDFS-3246: pRead 
equivalent for direct read path
URL: https://github.com/apache/hadoop/pull/597#discussion_r267932692
 
 

 ##
 File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/ByteBufferPositionedReadable.java
 ##
 @@ -0,0 +1,64 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.fs;
+
+import java.io.IOException;
+import java.nio.ByteBuffer;
+
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.classification.InterfaceStability;
+
+/**
+ * Implementers of this interface provide a positioned read API that writes to 
a
+ * {@link ByteBuffer} rather than a {@code byte[]}.
+ *
+ * @see PositionedReadable
+ * @see ByteBufferReadable
+ */
+@InterfaceAudience.Public
+@InterfaceStability.Evolving
+public interface ByteBufferPositionedReadable {
+  /**
+   * Reads up to {@code buf.remaining()} bytes into buf from a given position
+   * in the file and returns the number of bytes read. Callers should use
+   * {@code buf.limit(...)} to control the size of the desired read and
+   * {@code buf.position(...)} to control the offset into the buffer the data
+   * should be written to.
+   * 
+   * After a successful call, {@code buf.position()} will be advanced by the
+   * number of bytes read and {@code buf.limit()} should be unchanged.
+   * 
+   * In the case of an exception, the values of {@code buf.position()} and
+   * {@code buf.limit()} are undefined, and callers should be prepared to
+   * recover from this eventuality.
+   * 
+   * Many implementations will throw {@link UnsupportedOperationException}, so
+   * callers that are not confident in support for this method from the
+   * underlying filesystem should be prepared to handle that exception.
 
 Review comment:
   Yes, the code is there, updated the javadoc to reflect this.




[GitHub] [hadoop] sahilTakiar commented on a change in pull request #597: HDFS-3246: pRead equivalent for direct read path

2019-03-21 Thread GitBox
sahilTakiar commented on a change in pull request #597: HDFS-3246: pRead 
equivalent for direct read path
URL: https://github.com/apache/hadoop/pull/597#discussion_r267932559
 
 

 ##
 File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/CryptoInputStream.java
 ##
 @@ -341,6 +343,26 @@ public int read(long position, byte[] buffer, int offset, 
int length)
   "positioned read.");
 }
   }
+
+   /** Positioned read using ByteBuffers. It is thread-safe */
+  @Override
+  public int read(long position, final ByteBuffer buf)
+  throws IOException {
+checkStream();
+try {
+  int pos = buf.position();
+  final int n = ((ByteBufferPositionedReadable) in).read(position, buf);
+  if (n > 0) {
+// This operation does not change the current offset of the file
+decrypt(position, buf, n, pos);
+  }
+
+  return n;
+} catch (ClassCastException e) {
 
 Review comment:
   Yeah, it's probably not the most efficient way to do things, but that's how 
all the other methods handle the same situation: `PositionedReadable`, `Seekable`, 
etc. Additionally, the exception-handling code wouldn't be on the hot path. So in 
this case I would prefer to keep the code consistent with the rest of the class.
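
   The consistency argument — attempt the cast and translate a `ClassCastException` into `UnsupportedOperationException` — can be sketched outside Hadoop like this. The interface and wrapper below are illustrative stand-ins, not the real `CryptoInputStream`:

   ```java
   import java.io.IOException;
   import java.io.InputStream;
   import java.nio.ByteBuffer;

   public class WrapperStreamSketch {
       // Hypothetical stand-in for the optional capability interface.
       interface ByteBufferPositionedReadable {
           int read(long position, ByteBuffer buf) throws IOException;
       }

       private final InputStream in;
       WrapperStreamSketch(InputStream in) { this.in = in; }

       // The pattern under discussion: cast optimistically, and translate a
       // ClassCastException into UnsupportedOperationException off the hot path.
       public int read(long position, ByteBuffer buf) throws IOException {
           try {
               return ((ByteBufferPositionedReadable) in).read(position, buf);
           } catch (ClassCastException e) {
               throw new UnsupportedOperationException(
                   "This stream does not support byte-buffer positioned reads.");
           }
       }

       public static void main(String[] args) throws IOException {
           WrapperStreamSketch s =
               new WrapperStreamSketch(new java.io.ByteArrayInputStream(new byte[8]));
           try {
               s.read(0, ByteBuffer.allocate(4));
           } catch (UnsupportedOperationException e) {
               System.out.println(e.getMessage()); // the wrapped stream lacks the capability
           }
       }
   }
   ```

   An `instanceof` check up front would avoid the exception entirely, at the cost of diverging from how the surrounding class already handles `PositionedReadable` and `Seekable`.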





[GitHub] [hadoop] arp7 opened a new pull request #633: HDDS-1321. TestOzoneManagerHttpServer depends on hard-coded port numb…

2019-03-21 Thread GitBox
arp7 opened a new pull request #633: HDDS-1321. TestOzoneManagerHttpServer 
depends on hard-coded port numb…
URL: https://github.com/apache/hadoop/pull/633
 
 
   …ers. Contributed by Arpit Agarwal.
   
   Change-Id: If17c851b4aea7070064069a3596a144ad80d284c





[GitHub] [hadoop] hadoop-yetus commented on issue #632: HDDS-1255. Refactor ozone acceptance test to allow run in secure mode. Contributed by Ajay Kumar.

2019-03-21 Thread GitBox
hadoop-yetus commented on issue #632: HDDS-1255. Refactor ozone acceptance test 
to allow run in secure mode. Contributed by Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/632#issuecomment-475369409
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 26 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1203 | trunk passed |
   | +1 | compile | 66 | trunk passed |
   | +1 | mvnsite | 25 | trunk passed |
   | +1 | shadedclient | 721 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 20 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | -1 | mvninstall | 21 | dist in the patch failed. |
   | +1 | compile | 19 | the patch passed |
   | +1 | javac | 19 | the patch passed |
   | +1 | mvnsite | 21 | the patch passed |
   | +1 | shellcheck | 0 | There were no new shellcheck issues. |
   | +1 | shelldocs | 15 | There were no new shelldocs issues. |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 801 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 17 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 22 | dist in the patch passed. |
   | +1 | asflicense | 29 | The patch does not generate ASF License warnings. |
   | | | 3148 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-632/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/632 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  shellcheck  shelldocs  |
   | uname | Linux f0b787d89b57 4.4.0-139-generic #165~14.04.1-Ubuntu SMP Wed 
Oct 31 10:55:11 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / a99eb80 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | shellcheck | v0.4.6 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-632/1/artifact/out/patch-mvninstall-hadoop-ozone_dist.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-632/1/testReport/ |
   | Max. process+thread count | 340 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/dist U: hadoop-ozone/dist |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-632/1/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] hadoop-yetus commented on issue #595: HDFS-14304: High lock contention on hdfsHashMutex in libhdfs

2019-03-21 Thread GitBox
hadoop-yetus commented on issue #595: HDFS-14304: High lock contention on 
hdfsHashMutex in libhdfs
URL: https://github.com/apache/hadoop/pull/595#issuecomment-475367550
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 26 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 985 | trunk passed |
   | +1 | compile | 92 | trunk passed |
   | +1 | mvnsite | 24 | trunk passed |
   | +1 | shadedclient | 1768 | branch has no errors when building and testing 
our client artifacts. |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 14 | the patch passed |
   | +1 | compile | 91 | the patch passed |
   | +1 | cc | 91 | the patch passed |
   | +1 | javac | 91 | the patch passed |
   | +1 | mvnsite | 18 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 729 | patch has no errors when building and testing 
our client artifacts. |
   ||| _ Other Tests _ |
   | +1 | unit | 334 | hadoop-hdfs-native-client in the patch passed. |
   | +1 | asflicense | 28 | The patch does not generate ASF License warnings. |
   | | | 3142 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-595/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/595 |
   | JIRA Issue | HDFS-14304 |
   | Optional Tests |  dupname  asflicense  compile  cc  mvnsite  javac  unit  |
   | uname | Linux 9acfaac08d71 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / a99eb80 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-595/5/testReport/ |
   | Max. process+thread count | 425 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: 
hadoop-hdfs-project/hadoop-hdfs-native-client |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-595/5/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[jira] [Commented] (HADOOP-16147) Allow CopyListing sequence file keys and values to be more easily customized

2019-03-21 Thread Andrew Olson (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16147?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16798335#comment-16798335
 ] 

Andrew Olson commented on HADOOP-16147:
---

[~ste...@apache.org] would you mind taking a look at this? thanks

> Allow CopyListing sequence file keys and values to be more easily customized
> 
>
> Key: HADOOP-16147
> URL: https://issues.apache.org/jira/browse/HADOOP-16147
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools/distcp
>Reporter: Andrew Olson
>Assignee: Andrew Olson
>Priority: Major
> Attachments: HADOOP-16147-001.patch, HADOOP-16147-002.patch
>
>
> We have encountered a scenario where, when using the Crunch library to run a 
> distributed copy (CRUNCH-660, CRUNCH-675) at the conclusion of a job we need 
> to dynamically rename target paths to the preferred destination output part 
> file names, rather than retaining the original source path names.
> A custom CopyListing implementation appears to be the proper solution for 
> this. However the place where the current SimpleCopyListing logic needs to be 
> adjusted is in a private method (writeToFileListing), so a relatively large 
> portion of the class would need to be cloned.
> To minimize the amount of code duplication required for such a custom 
> implementation, we propose adding two new protected methods to the 
> CopyListing class, that can be used to change the actual keys and/or values 
> written to the copy listing sequence file: 
> {noformat}
> protected Text getFileListingKey(Path sourcePathRoot, CopyListingFileStatus 
> fileStatus);
> protected CopyListingFileStatus getFileListingValue(CopyListingFileStatus 
> fileStatus);
> {noformat}
> The SimpleCopyListing class would then be modified to consume these methods 
> as follows,
> {noformat}
> fileListWriter.append(
>getFileListingKey(sourcePathRoot, fileStatus),
>getFileListingValue(fileStatus));
> {noformat}
> The default implementations would simply preserve the present behavior of the 
> SimpleCopyListing class, and could reside in either CopyListing or 
> SimpleCopyListing, whichever is preferable.
> {noformat}
> protected Text getFileListingKey(Path sourcePathRoot, CopyListingFileStatus 
> fileStatus) {
>return new Text(DistCpUtils.getRelativePath(sourcePathRoot, 
> fileStatus.getPath()));
> }
> protected CopyListingFileStatus getFileListingValue(CopyListingFileStatus 
> fileStatus) {
>return fileStatus;
> }
> {noformat}
> Please let me know if this proposal seems to be on the right track. If so I 
> can provide a patch.
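
The hook design proposed above is a template-method pattern: the write loop consumes overridable key/value hooks so a subclass only replaces what it needs. A minimal self-contained sketch (class and method names here are illustrative, not the actual CopyListing API):

```java
import java.util.ArrayList;
import java.util.List;

public class ListingSketch {
    static class SimpleListing {
        final List<String> entries = new ArrayList<>();
        // The write loop consumes the hooks instead of hard-coding key/value.
        void write(String root, String path) {
            entries.add(key(root, path) + "=" + value(path));
        }
        // Default hooks preserve the present behavior.
        protected String key(String root, String path) {
            return path.startsWith(root) ? path.substring(root.length()) : path;
        }
        protected String value(String path) { return path; }
    }

    // A custom listing overrides only the hook it needs, instead of
    // cloning the whole write loop.
    static class RenamingListing extends SimpleListing {
        @Override protected String value(String path) {
            return path.replace("part-m-", "output-");
        }
    }

    public static void main(String[] args) {
        SimpleListing l = new RenamingListing();
        l.write("/src", "/src/part-m-00000");
        System.out.println(l.entries.get(0)); // /part-m-00000=/src/output-00000
    }
}
```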



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16144) Create a Hadoop RPC based KMS client

2019-03-21 Thread Anu Engineer (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16798328#comment-16798328
 ] 

Anu Engineer commented on HADOOP-16144:
---

[~jojochuang] Thanks for letting us know. I will be glad to take care of this.

> Create a Hadoop RPC based KMS client
> 
>
> Key: HADOOP-16144
> URL: https://issues.apache.org/jira/browse/HADOOP-16144
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: kms
>Reporter: Wei-Chiu Chuang
>Assignee: Anu Engineer
>Priority: Major
>
> Create a new KMS client implementation that speaks Hadoop RPC.






[GitHub] [hadoop] ajayydv opened a new pull request #632: HDDS-1255. Refactor ozone acceptance test to allow run in secure mode. Contributed by Ajay Kumar.

2019-03-21 Thread GitBox
ajayydv opened a new pull request #632: HDDS-1255. Refactor ozone acceptance 
test to allow run in secure mode. Contributed by Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/632
 
 
   





[GitHub] [hadoop] steveloughran commented on issue #630: HADOOP-15999 improve test resilience and probes

2019-03-21 Thread GitBox
steveloughran commented on issue #630: HADOOP-15999 improve test resilience and 
probes
URL: https://github.com/apache/hadoop/pull/630#issuecomment-475350828
 
 
   failed tests are about a directory not existing on the test VM/container
   ```
    java.nio.file.NoSuchFileException: 
/testptch/hadoop/hadoop-common-project/hadoop-common/target/test/data/4/test3157601392604950757
at 
sun.nio.fs.UnixException.translateToIOException(UnixException.java:86)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
at 
sun.nio.fs.UnixFileSystemProvider.createDirectory(UnixFileSystemProvider.java:384)
at java.nio.file.Files.createDirectory(Files.java:674)
at java.nio.file.TempFileHelper.create(TempFileHelper.java:136)
at 
java.nio.file.TempFileHelper.createTempDirectory(TempFileHelper.java:173)
at java.nio.file.Files.createTempDirectory(Files.java:950)
at 
org.apache.hadoop.util.TestDiskChecker.createTempDir(TestDiskChecker.java:153)
at 
org.apache.hadoop.util.TestDiskChecker._checkDirs(TestDiskChecker.java:158)
at 
org.apache.hadoop.util.TestDiskChecker.testCheckDir_notReadable(TestDiskChecker.java:121)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   ```
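
   One defensive fix for this class of failure is to (re)create the base directory before asking for a temp directory under it — `Files.createTempDirectory(dir, prefix)` throws `NoSuchFileException` when `dir` itself is missing. A minimal sketch using plain `java.nio.file` (an illustration of the failure mode, not the actual patch):

   ```java
   import java.io.IOException;
   import java.nio.file.Files;
   import java.nio.file.Path;
   import java.nio.file.Paths;

   public class TempDirDemo {
       // Ensure the parent exists before creating a temp dir under it.
       static Path createTempDirUnder(Path base) throws IOException {
           Files.createDirectories(base); // no-op if it already exists
           return Files.createTempDirectory(base, "test");
       }

       public static void main(String[] args) throws IOException {
           Path base = Paths.get(System.getProperty("java.io.tmpdir"), "demo-base");
           Path tmp = createTempDirUnder(base);
           System.out.println(Files.isDirectory(tmp)); // true
       }
   }
   ```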





[GitHub] [hadoop] sahilTakiar commented on a change in pull request #595: HDFS-14304: High lock contention on hdfsHashMutex in libhdfs

2019-03-21 Thread GitBox
sahilTakiar commented on a change in pull request #595: HDFS-14304: High lock 
contention on hdfsHashMutex in libhdfs
URL: https://github.com/apache/hadoop/pull/595#discussion_r267898894
 
 

 ##
 File path: 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs/jclasses.c
 ##
 @@ -0,0 +1,135 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include "exception.h"
+#include "jclasses.h"
+#include "jni_helper.h"
+#include "os/mutexes.h"
+
+#include 
+
+/**
+ * Whether initCachedClasses has been called or not. Protected by the mutex
+ * jclassInitMutex.
+ */
+static int jclassesInitialized = 0;
+
+struct javaClassAndName {
 
 Review comment:
   Done





[GitHub] [hadoop] sahilTakiar commented on a change in pull request #595: HDFS-14304: High lock contention on hdfsHashMutex in libhdfs

2019-03-21 Thread GitBox
sahilTakiar commented on a change in pull request #595: HDFS-14304: High lock 
contention on hdfsHashMutex in libhdfs
URL: https://github.com/apache/hadoop/pull/595#discussion_r267898825
 
 

 ##
 File path: 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs/hdfs.c
 ##
 @@ -2343,11 +2330,11 @@ int hdfsChmod(hdfsFS fs, const char *path, short mode)
 }
 
 // construct jPerm = FsPermission.createImmutable(short mode);
-jthr = constructNewObjectOfClass(env, &jPermObj,
-HADOOP_FSPERM,"(S)V",jmode);
+jthr = constructNewObjectOfCachedClass(env, &jPermObj, JC_FS_PERMISSION,
+"(S)V",jmode);
 if (jthr) {
 ret = printExceptionAndFree(env, jthr, PRINT_EXC_ALL,
 
 Review comment:
   Fixed





[GitHub] [hadoop] sahilTakiar commented on a change in pull request #595: HDFS-14304: High lock contention on hdfsHashMutex in libhdfs

2019-03-21 Thread GitBox
sahilTakiar commented on a change in pull request #595: HDFS-14304: High lock 
contention on hdfsHashMutex in libhdfs
URL: https://github.com/apache/hadoop/pull/595#discussion_r267898728
 
 

 ##
 File path: 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs/jni_helper.c
 ##
 @@ -623,8 +643,7 @@ jthrowable hadoopConfSetStr(JNIEnv *env, jobject 
jConfiguration,
 if (jthr)
 goto done;
 jthr = invokeMethod(env, NULL, INSTANCE, jConfiguration,
-"org/apache/hadoop/conf/Configuration", "set", 
-"(Ljava/lang/String;Ljava/lang/String;)V",
+JC_CONFIGURATION, "set","(Ljava/lang/String;Ljava/lang/String;)V",
 
 Review comment:
   Fixed





[GitHub] [hadoop] sahilTakiar commented on a change in pull request #595: HDFS-14304: High lock contention on hdfsHashMutex in libhdfs

2019-03-21 Thread GitBox
sahilTakiar commented on a change in pull request #595: HDFS-14304: High lock 
contention on hdfsHashMutex in libhdfs
URL: https://github.com/apache/hadoop/pull/595#discussion_r267898862
 
 

 ##
 File path: 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs/hdfs.c
 ##
 @@ -2555,28 +2541,26 @@ static jthrowable hadoopRzOptionsGetEnumSet(JNIEnv 
*env,
 goto done;
 }
 if (opts->skipChecksums) {
-jthr = fetchEnumInstance(env, READ_OPTION,
+jthr = fetchEnumInstance(env, HADOOP_RO,
   "SKIP_CHECKSUMS", &enumInst);
 if (jthr) {
 goto done;
 }
-jthr = invokeMethod(env, &jVal, STATIC, NULL,
-"java/util/EnumSet", "of",
-"(Ljava/lang/Enum;)Ljava/util/EnumSet;", enumInst);
+jthr = invokeMethod(env, &jVal, STATIC, NULL, JC_ENUM_SET,
+"of", "(Ljava/lang/Enum;)Ljava/util/EnumSet;", enumInst);
 if (jthr) {
 goto done;
 }
 enumSetObj = jVal.l;
 } else {
-jclass clazz = (*env)->FindClass(env, READ_OPTION);
+jclass clazz = (*env)->FindClass(env, HADOOP_RO);
 if (!clazz) {
 jthr = newRuntimeError(env, "failed "
-"to find class for %s", READ_OPTION);
+"to find class for %s", HADOOP_RO);
 goto done;
 }
-jthr = invokeMethod(env, &jVal, STATIC, NULL,
-"java/util/EnumSet", "noneOf",
-"(Ljava/lang/Class;)Ljava/util/EnumSet;", clazz);
+jthr = invokeMethod(env, &jVal, STATIC, NULL, JC_ENUM_SET,
 
 Review comment:
   Fixed





[GitHub] [hadoop] sahilTakiar commented on a change in pull request #595: HDFS-14304: High lock contention on hdfsHashMutex in libhdfs

2019-03-21 Thread GitBox
sahilTakiar commented on a change in pull request #595: HDFS-14304: High lock 
contention on hdfsHashMutex in libhdfs
URL: https://github.com/apache/hadoop/pull/595#discussion_r267898778
 
 

 ##
 File path: 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs/jclasses.c
 ##
 @@ -0,0 +1,135 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include "exception.h"
+#include "jclasses.h"
+#include "jni_helper.h"
+#include "os/mutexes.h"
+
+#include 
+
+/**
+ * Whether initCachedClasses has been called or not. Protected by the mutex
+ * jclassInitMutex.
+ */
+static int jclassesInitialized = 0;
+
+struct javaClassAndName {
+jclass javaClass;
+const char *className;
+};
+
+/**
+ * A collection of commonly used jclass objects that are used throughout
+ * libhdfs. The jclasses are loaded immediately after the JVM is created (see
+ * initCachedClasses). The array is indexed using CachedJavaClass.
+ */
+struct javaClassAndName cachedJavaClasses[NUM_CACHED_CLASSES];
+
+/**
+ * Helper method that creates and sets a jclass object given a class name.
+ * Returns a jthrowable on error, NULL otherwise.
+ */
+static jthrowable initCachedClass(JNIEnv *env, const char *className,
+jclass *cachedJclass) {
+assert(className != NULL && "Found a CachedJavaClass without a class "
+"name");
+jthrowable jthr = NULL;
+jclass tempLocalClassRef;
+tempLocalClassRef = (*env)->FindClass(env, className);
+if (!tempLocalClassRef) {
+jthr = getPendingExceptionAndClear(env);
+goto done;
+}
+*cachedJclass = (jclass) (*env)->NewGlobalRef(env, tempLocalClassRef);
+if (!*cachedJclass) {
+jthr = getPendingExceptionAndClear(env);
+goto done;
+}
+done:
+destroyLocalReference(env, tempLocalClassRef);
+return jthr;
+}
+
+jthrowable initCachedClasses(JNIEnv* env) {
+mutexLock(&jclassInitMutex);
+if (!jclassesInitialized) {
+// Set all the class names
+cachedJavaClasses[JC_CONFIGURATION].className =
+"org/apache/hadoop/conf/Configuration";
+cachedJavaClasses[JC_PATH].className =
+"org/apache/hadoop/fs/Path";
+cachedJavaClasses[JC_FILE_SYSTEM].className =
+"org/apache/hadoop/fs/FileSystem";
+cachedJavaClasses[JC_FS_STATUS].className =
+"org/apache/hadoop/fs/FsStatus";
+cachedJavaClasses[JC_FILE_UTIL].className =
+"org/apache/hadoop/fs/FileUtil";
+cachedJavaClasses[JC_BLOCK_LOCATION].className =
+"org/apache/hadoop/fs/BlockLocation";
+cachedJavaClasses[JC_DFS_HEDGED_READ_METRICS].className =
+"org/apache/hadoop/hdfs/DFSHedgedReadMetrics";
+cachedJavaClasses[JC_DISTRIBUTED_FILE_SYSTEM].className =
+"org/apache/hadoop/hdfs/DistributedFileSystem";
+cachedJavaClasses[JC_FS_DATA_INPUT_STREAM].className =
+"org/apache/hadoop/fs/FSDataInputStream";
+cachedJavaClasses[JC_FS_DATA_OUTPUT_STREAM].className =
+"org/apache/hadoop/fs/FSDataOutputStream";
+cachedJavaClasses[JC_FILE_STATUS].className =
+"org/apache/hadoop/fs/FileStatus";
+cachedJavaClasses[JC_FS_PERMISSION].className =
+"org/apache/hadoop/fs/permission/FsPermission";
+cachedJavaClasses[JC_READ_STATISTICS].className =
+"org/apache/hadoop/hdfs/ReadStatistics";
+cachedJavaClasses[JC_HDFS_DATA_INPUT_STREAM].className =
+"org/apache/hadoop/hdfs/client/HdfsDataInputStream";
+cachedJavaClasses[JC_DOMAIN_SOCKET].className =
+"org/apache/hadoop/net/unix/DomainSocket";
+cachedJavaClasses[JC_URI].className =
+"java/net/URI";
+cachedJavaClasses[JC_BYTE_BUFFER].className =
+"java/nio/ByteBuffer";
+cachedJavaClasses[JC_ENUM_SET].className =
+"java/util/EnumSet";
+cachedJavaClasses[JC_EXCEPTION_UTILS].className =
+"org/apache/commons/lang3/exception/ExceptionUtils";
+
+// Create and set the jclass objects based on the class na

[GitHub] [hadoop] sahilTakiar commented on a change in pull request #595: HDFS-14304: High lock contention on hdfsHashMutex in libhdfs

2019-03-21 Thread GitBox
sahilTakiar commented on a change in pull request #595: HDFS-14304: High lock 
contention on hdfsHashMutex in libhdfs
URL: https://github.com/apache/hadoop/pull/595#discussion_r267898627
 
 

 ##
 File path: 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs/jni_helper.c
 ##
 @@ -200,43 +185,115 @@ jthrowable invokeMethod(JNIEnv *env, jvalue *retval, 
MethType methType,
 return NULL;
 }
 
-jthrowable constructNewObjectOfClass(JNIEnv *env, jobject *out, const char 
*className, 
-  const char *ctorSignature, ...)
+jthrowable findClassAndInvokeMethod(JNIEnv *env, jvalue *retval,
+MethType methType, jobject instObj, const char *className,
+const char *methName, const char *methSignature, ...)
 {
+jclass cls = NULL;
+jthrowable jthr = NULL;
+
 va_list args;
-jclass cls;
-jmethodID mid; 
+va_start(args, methSignature);
+
+jthr = validateMethodType(env, methType);
+if (jthr) {
+goto done;
+}
+
+cls = (*env)->FindClass(env, className);
+if (!cls) {
+jthr = getPendingExceptionAndClear(env);
+goto done;
+}
+
+jthr = invokeMethodOnJclass(env, retval, methType, instObj, cls,
+className, methName, methSignature, args);
+
+done:
+va_end(args);
+destroyLocalReference(env, cls);
+return jthr;
+}
+
+jthrowable invokeMethod(JNIEnv *env, jvalue *retval, MethType methType,
+jobject instObj, CachedJavaClass class,
+const char *methName, const char *methSignature, ...)
+{
+jthrowable jthr;
+
+va_list args;
+va_start(args, methSignature);
+
+jthr = invokeMethodOnJclass(env, retval, methType, instObj,
+getJclass(class), getClassName(class), methName, methSignature,
+args);
+
+va_end(args);
+return jthr;
+}
+
+static jthrowable constructNewObjectOfJclass(JNIEnv *env,
+jobject *out, jclass cls, const char *className,
+const char *ctorSignature, va_list args) {
+jmethodID mid;
 jobject jobj;
 jthrowable jthr;
 
-jthr = globalClassReference(className, env, &cls);
+jthr = methodIdFromClass(cls, className, "", ctorSignature, INSTANCE,
+env, &mid);
 if (jthr)
 return jthr;
-jthr = methodIdFromClass(className, "", ctorSignature, 
-INSTANCE, env, &mid);
-if (jthr)
-return jthr;
-va_start(args, ctorSignature);
 jobj = (*env)->NewObjectV(env, cls, mid, args);
-va_end(args);
 if (!jobj)
 return getPendingExceptionAndClear(env);
 *out = jobj;
 return NULL;
 }
 
-
-jthrowable methodIdFromClass(const char *className, const char *methName, 
-const char *methSignature, MethType methType, 
-JNIEnv *env, jmethodID *out)
+jthrowable constructNewObjectOfClass(JNIEnv *env, jobject *out,
+const char *className, const char *ctorSignature, ...)
 {
+va_list args;
 jclass cls;
+jthrowable jthr = NULL;
+
+cls = (*env)->FindClass(env, className);
+if (!cls) {
+jthr = getPendingExceptionAndClear(env);
+goto done;
 
 Review comment:
   Fixed





[jira] [Reopened] (HADOOP-16058) S3A tests to include Terasort

2019-03-21 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16058?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran reopened HADOOP-16058:
-

Reopening for a backport to branch-3.2.

> S3A tests to include Terasort
> -
>
> Key: HADOOP-16058
> URL: https://issues.apache.org/jira/browse/HADOOP-16058
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-16058-001.patch, HADOOP-16058-002.patch, 
> HADOOP-16058-002.patch, HADOOP-16058-002.patch
>
>
> Add S3A tests to run terasort for the magic and directory committers.
> MAPREDUCE-7091 is a requirement for this
> Bonus feature: print the results to see which committers are faster in the 
> specific test setup. As that's a function of latency to the store, bandwidth 
> and size of jobs, it's not at all meaningful, just interesting.






[GitHub] [hadoop] steveloughran closed pull request #577: HADOOP-16058 S3A to support terasort

2019-03-21 Thread GitBox
steveloughran closed pull request #577: HADOOP-16058 S3A to support terasort
URL: https://github.com/apache/hadoop/pull/577
 
 
   





[GitHub] [hadoop] steveloughran commented on issue #577: HADOOP-16058 S3A to support terasort

2019-03-21 Thread GitBox
steveloughran commented on issue #577: HADOOP-16058 S3A to support terasort
URL: https://github.com/apache/hadoop/pull/577#issuecomment-475348515
 
 
   VS Code seems to have a GitHub plugin for in-IDE reviewing; I need to play 
with that. If you really can review and comment from within an IDE, that'd be 
wonderful.





[GitHub] [hadoop] hadoop-yetus commented on issue #630: HADOOP-15999 improve test resilience and probes

2019-03-21 Thread GitBox
hadoop-yetus commented on issue #630: HADOOP-15999 improve test resilience and 
probes
URL: https://github.com/apache/hadoop/pull/630#issuecomment-475346876
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | 0 | reexec | 28 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 22 | Maven dependency ordering for branch |
   | +1 | mvninstall | 993 | trunk passed |
   | +1 | compile | 961 | trunk passed |
   | +1 | checkstyle | 194 | trunk passed |
   | +1 | mvnsite | 127 | trunk passed |
   | +1 | shadedclient | 1055 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 154 | trunk passed |
   | +1 | javadoc | 104 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 20 | Maven dependency ordering for patch |
   | +1 | mvninstall | 76 | the patch passed |
   | +1 | compile | 891 | the patch passed |
   | +1 | javac | 891 | the patch passed |
   | -0 | checkstyle | 188 | root: The patch generated 3 new + 6 unchanged - 0 
fixed = 9 total (was 6) |
   | +1 | mvnsite | 125 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 670 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 171 | the patch passed |
   | +1 | javadoc | 101 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 504 | hadoop-common in the patch failed. |
   | +1 | unit | 290 | hadoop-aws in the patch passed. |
   | +1 | asflicense | 34 | The patch does not generate ASF License warnings. |
   | | | 6655 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.util.TestDiskChecker |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-630/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/630 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  |
   | uname | Linux 7e8ff71b7686 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 9f1c017 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-630/2/artifact/out/diff-checkstyle-root.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-630/2/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-630/2/testReport/ |
   | Max. process+thread count | 1345 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws 
U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-630/2/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[jira] [Commented] (HADOOP-15870) S3AInputStream.remainingInFile should use nextReadPos

2019-03-21 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16798301#comment-16798301
 ] 

Steve Loughran commented on HADOOP-15870:
-

This patch should be ready to get in. Can everyone interested review and retest 
it? Thanks.

> S3AInputStream.remainingInFile should use nextReadPos
> -
>
> Key: HADOOP-15870
> URL: https://issues.apache.org/jira/browse/HADOOP-15870
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.4, 3.1.1
>Reporter: Shixiong Zhu
>Assignee: lqjacklee
>Priority: Major
> Attachments: HADOOP-15870-002.patch, HADOOP-15870-003.patch, 
> HADOOP-15870-004.patch, HADOOP-15870-005.patch
>
>
> Otherwise `remainingInFile` will not change after `seek`.
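The issue above can be sketched with a minimal, hypothetical model of the lazy-seek behaviour. None of this is the actual S3AInputStream code: `CONTENT_LENGTH`, `nextReadPos`, and the arithmetic are illustrative stand-ins for the real fields.

```java
// Hypothetical sketch: with lazy seek, only nextReadPos moves on seek();
// the wrapped HTTP stream repositions on the next read(). If remainingInFile()
// were computed from the wrapped stream's position, it would not change after
// seek(); computing it from nextReadPos reflects the seek immediately.
public class RemainingInFileSketch {
    static final long CONTENT_LENGTH = 100;  // illustrative object size
    static long nextReadPos = 0;             // where the next read() will start

    static void seek(long targetPos) {
        nextReadPos = targetPos;             // lazy: no I/O happens here
    }

    static long remainingInFile() {
        // proposed behaviour: base the answer on nextReadPos
        return CONTENT_LENGTH - nextReadPos;
    }

    public static void main(String[] args) {
        seek(40);
        System.out.println(remainingInFile()); // prints 60
    }
}
```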






[jira] [Updated] (HADOOP-16186) NPE in ITestS3AFileSystemContract teardown in DynamoDBMetadataStore.lambda$listChildren

2019-03-21 Thread Gabor Bota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16186?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-16186:

Status: Patch Available  (was: In Progress)

> NPE in ITestS3AFileSystemContract teardown in  
> DynamoDBMetadataStore.lambda$listChildren
> 
>
> Key: HADOOP-16186
> URL: https://issues.apache.org/jira/browse/HADOOP-16186
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Gabor Bota
>Priority: Major
>
> Test run options. NPE in test teardown
> {code}
> -Dparallel-tests -DtestsThreadCount=6 -Ds3guard -Ddynamodb
> {code}
> If you look at the code, it's *exactly* the place fixed in HADOOP-15827, a 
> change which HADOOP-15947 reverted. 
> There's clearly some code path that can surface and cause failures in 
> some situations, and having multiple patches switching between the && and || 
> operators isn't going to fix it






[jira] [Comment Edited] (HADOOP-16158) DistCp supports checksum validation when copy blocks in parallel

2019-03-21 Thread Kai Xie (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16158?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16798264#comment-16798264
 ] 

Kai Xie edited comment on HADOOP-16158 at 3/21/19 5:06 PM:
---

Hi [~ste...@apache.org],

The patch (004) is ready for review. When you have time, would you mind taking 
a look?

The proposed fix adds a checksum validation in CopyCommitter when chunks 
are concatenated back into one. The validation can be skipped if the config 
skipCrc is set.

I'll also provide a patch for branch-2, since trunk has backward-incompatible 
changes (in the DistCpOptions constructor).


was (Author: kai33):
Hi [~ste...@apache.org],

the patch (004) is ready for review. when you have time, would you mind taking 
a look?

the fix proposed is to add a checksum validation in CopyCommitter when chunks 
are concatenated back to one. And the validation can be skipped if the config 
skipCrc is set.

I'll also provide a patch for branch-2, since trunk has backward incompatible 
changes.

> DistCp supports checksum validation when copy blocks in parallel
> 
>
> Key: HADOOP-16158
> URL: https://issues.apache.org/jira/browse/HADOOP-16158
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools/distcp
>Affects Versions: 3.2.0, 2.9.2
>Reporter: Kai Xie
>Assignee: Kai Xie
>Priority: Major
> Attachments: HADOOP-16158-001.patch, HADOOP-16158-002.patch, 
> HADOOP-16158-003.patch, HADOOP-16158-004.patch
>
>
> Copying blocks in parallel (enabled when blocks per chunk > 0) is a great 
> DistCp improvement that can hugely speed up copying big files. 
> But its checksum validation is skipped, e.g. in 
> `RetriableFileCopyCommand.java`
>  
> {code:java}
> if (!source.isSplit()) {
>   compareCheckSums(sourceFS, source.getPath(), sourceChecksum,
>   targetFS, targetPath);
> }
> {code}
> and this could result in checksum/data mismatch without notifying 
> developers/users (e.g. HADOOP-16049).
> I'd like to provide a patch to add the checksum validation.
>  






[jira] [Assigned] (HADOOP-15272) Update Guava, see what breaks

2019-03-21 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran reassigned HADOOP-15272:
---

Assignee: Gabor Bota

> Update Guava, see what breaks
> -
>
> Key: HADOOP-15272
> URL: https://issues.apache.org/jira/browse/HADOOP-15272
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Affects Versions: 3.1.0
>Reporter: Steve Loughran
>Assignee: Gabor Bota
>Priority: Major
>
> We're still on Guava 11; the last attempt at an update (HADOOP-10101) failed 
> to take
> The HBase 2 version of ATS should permit this, at least for its profile.






[GitHub] [hadoop] hadoop-yetus commented on issue #624: HADOOP-15999. S3Guard: Better support for out-of-band operations

2019-03-21 Thread GitBox
hadoop-yetus commented on issue #624: HADOOP-15999. S3Guard: Better support for 
out-of-band operations
URL: https://github.com/apache/hadoop/pull/624#issuecomment-475318380
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | 0 | reexec | 28 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 58 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1063 | trunk passed |
   | +1 | compile | 1074 | trunk passed |
   | +1 | checkstyle | 213 | trunk passed |
   | +1 | mvnsite | 142 | trunk passed |
   | +1 | shadedclient | 1064 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 148 | trunk passed |
   | +1 | javadoc | 87 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 23 | Maven dependency ordering for patch |
   | +1 | mvninstall | 72 | the patch passed |
   | +1 | compile | 928 | the patch passed |
   | +1 | javac | 928 | the patch passed |
   | -0 | checkstyle | 188 | root: The patch generated 3 new + 6 unchanged - 0 
fixed = 9 total (was 6) |
   | +1 | mvnsite | 115 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 643 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 169 | the patch passed |
   | +1 | javadoc | 86 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 555 | hadoop-common in the patch passed. |
   | +1 | unit | 273 | hadoop-aws in the patch passed. |
   | +1 | asflicense | 39 | The patch does not generate ASF License warnings. |
   | | | 6837 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-624/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/624 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  |
   | uname | Linux cca3c80700fe 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 9f1c017 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-624/2/artifact/out/diff-checkstyle-root.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-624/2/testReport/ |
   | Max. process+thread count | 1358 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws 
U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-624/2/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[jira] [Commented] (HADOOP-16158) DistCp supports checksum validation when copy blocks in parallel

2019-03-21 Thread Kai Xie (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16158?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16798264#comment-16798264
 ] 

Kai Xie commented on HADOOP-16158:
--

Hi [~ste...@apache.org],

The patch (004) is ready for review. When you have time, would you mind taking 
a look?

The proposed fix adds a checksum validation in CopyCommitter when chunks 
are concatenated back into one. The validation can be skipped if the config 
skipCrc is set.

I'll also provide a patch for branch-2, since trunk has backward-incompatible 
changes.

> DistCp supports checksum validation when copy blocks in parallel
> 
>
> Key: HADOOP-16158
> URL: https://issues.apache.org/jira/browse/HADOOP-16158
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools/distcp
>Affects Versions: 3.2.0, 2.9.2
>Reporter: Kai Xie
>Assignee: Kai Xie
>Priority: Major
> Attachments: HADOOP-16158-001.patch, HADOOP-16158-002.patch, 
> HADOOP-16158-003.patch, HADOOP-16158-004.patch
>
>
> Copying blocks in parallel (enabled when blocks per chunk > 0) is a great 
> DistCp improvement that can hugely speed up copying big files. 
> But its checksum validation is skipped, e.g. in 
> `RetriableFileCopyCommand.java`
>  
> {code:java}
> if (!source.isSplit()) {
>   compareCheckSums(sourceFS, source.getPath(), sourceChecksum,
>   targetFS, targetPath);
> }
> {code}
> and this could result in checksum/data mismatch without notifying 
> developers/users (e.g. HADOOP-16049).
> I'd like to provide a patch to add the checksum validation.
>  
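The proposed fix (a checksum comparison in CopyCommitter after the chunks of a file are concatenated, skippable via skipCrc) can be sketched roughly as follows. The method name, the byte-array checksum representation, and the exception type are illustrative assumptions, not the actual DistCp API.

```java
import java.util.Arrays;

// Hypothetical sketch of a committer-side check: once the chunks of one file
// have been concatenated back into a single target file, compare source and
// target checksums unless the user explicitly asked to skip CRC validation.
public class ConcatChecksumSketch {
    static void validateChecksums(byte[] sourceChecksum,
                                  byte[] targetChecksum,
                                  boolean skipCrc) {
        if (skipCrc) {
            return; // user opted out, mirroring -skipcrccheck
        }
        if (!Arrays.equals(sourceChecksum, targetChecksum)) {
            throw new RuntimeException(
                "Checksum mismatch after concatenating chunks");
        }
    }

    public static void main(String[] args) {
        byte[] source = {1, 2, 3};
        byte[] target = {1, 2, 3};
        validateChecksums(source, target, false);            // matching: passes
        validateChecksums(source, new byte[]{9}, true);      // skipped: no check
        System.out.println("ok");
    }
}
```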






[GitHub] [hadoop] steveloughran opened a new pull request #631: HADOOP-16058. S3A tests to include Terasort.

2019-03-21 Thread GitBox
steveloughran opened a new pull request #631: HADOOP-16058. S3A tests to 
include Terasort.
URL: https://github.com/apache/hadoop/pull/631
 
 
   HADOOP-16058. S3A tests to include Terasort.
   
   Contributed by Steve Loughran.
   
   This includes
- HADOOP-15890. Some S3A committer tests don't match ITest* pattern; don't 
run in maven
- MAPREDUCE-7090. BigMapOutput example doesn't work with paths off cluster 
fs
- MAPREDUCE-7091. Terasort on S3A to switch to new committers
- MAPREDUCE-7092. MR examples to work better against cloud stores
   
   This is the branch-3.2 patch. Testing: run the new S3A ITests 





[GitHub] [hadoop] hadoop-yetus commented on issue #600: HDFS-14348: Fix JNI exception handling issues in libhdfs

2019-03-21 Thread GitBox
hadoop-yetus commented on issue #600: HDFS-14348: Fix JNI exception handling 
issues in libhdfs
URL: https://github.com/apache/hadoop/pull/600#issuecomment-475295794
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | 0 | reexec | 41 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1084 | trunk passed |
   | +1 | compile | 120 | trunk passed |
   | +1 | mvnsite | 20 | trunk passed |
   | +1 | shadedclient | 1974 | branch has no errors when building and testing 
our client artifacts. |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 16 | the patch passed |
   | +1 | compile | 107 | the patch passed |
   | +1 | cc | 107 | the patch passed |
   | +1 | javac | 107 | the patch passed |
   | +1 | mvnsite | 19 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 821 | patch has no errors when building and testing 
our client artifacts. |
   ||| _ Other Tests _ |
   | +1 | unit | 434 | hadoop-hdfs-native-client in the patch passed. |
   | +1 | asflicense | 33 | The patch does not generate ASF License warnings. |
   | | | 3566 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-600/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/600 |
   | JIRA Issue | HDFS-14348 |
   | Optional Tests |  dupname  asflicense  compile  cc  mvnsite  javac  unit  |
   | uname | Linux 2b31a429b499 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri 
Oct 5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 9f1c017 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-600/5/testReport/ |
   | Max. process+thread count | 306 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: 
hadoop-hdfs-project/hadoop-hdfs-native-client |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-600/5/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] bgaborg commented on issue #624: HADOOP-15999. S3Guard: Better support for out-of-band operations

2019-03-21 Thread GitBox
bgaborg commented on issue #624: HADOOP-15999. S3Guard: Better support for 
out-of-band operations
URL: https://github.com/apache/hadoop/pull/624#issuecomment-475290572
 
 
   The second run succeeded, no errors.





[GitHub] [hadoop] bgaborg edited a comment on issue #624: HADOOP-15999. S3Guard: Better support for out-of-band operations

2019-03-21 Thread GitBox
bgaborg edited a comment on issue #624: HADOOP-15999. S3Guard: Better support 
for out-of-band operations
URL: https://github.com/apache/hadoop/pull/624#issuecomment-475288182
 
 
   I never got any issues running ITestS3GuardOutOfBandOperations (only with 
the local store, and because of the reference held in the cache), but running 
it with your commit gave me the following error: 
   
   > [ERROR]   
ITestS3GuardOutOfBandOperations.testListingLongerLengthOverwrite:251->overwriteFileInListing:399->verifyFileStatusAsExpected:439->Assert.assertNotEquals:199->Assert.failEquals:185->Assert.fail:88
 File length in authoritative table with
   > Raw: 
S3AFileStatus{path=s3a://cloudera-dev-gabor-ireland/dir-fcd509ca-0db4-426a-9c4f-b972a4949a5d/file-1-fcd509ca-0db4-426a-9c4f-b972a4949a5d;
 isDirectory=false; length=15; replication=1; blocksize=33554432; 
modification_time=1553182592000; access_time=0; owner=gaborbota; 
group=gaborbota; permission=rw-rw-rw-; isSymlink=false; hasAcl=false; 
isEncrypted=false; isErasureCoded=false} isEmptyDirectory=FALSE
   > Guarded: 
S3AFileStatus{path=s3a://cloudera-dev-gabor-ireland/dir-fcd509ca-0db4-426a-9c4f-b972a4949a5d/file-1-fcd509ca-0db4-426a-9c4f-b972a4949a5d;
 isDirectory=false; length=15; replication=1; blocksize=33554432; 
modification_time=1553182592000; access_time=0; owner=gaborbota; 
group=gaborbota; permission=rw-rw-rw-; isSymlink=false; hasAcl=false; 
isEncrypted=false; isErasureCoded=false} isEmptyDirectory=FALSE. Actual: 15
   
   Maybe something is still inconsistent?





[GitHub] [hadoop] bgaborg commented on issue #624: HADOOP-15999. S3Guard: Better support for out-of-band operations

2019-03-21 Thread GitBox
bgaborg commented on issue #624: HADOOP-15999. S3Guard: Better support for 
out-of-band operations
URL: https://github.com/apache/hadoop/pull/624#issuecomment-475288182
 
 
   I never got any issues running ITestS3GuardOutOfBandOperations (only with 
the local store, and because of the reference held in the cache), but running 
it with your commit gave me the following error: 
   
   > [ERROR]   
ITestS3GuardOutOfBandOperations.testListingLongerLengthOverwrite:251->overwriteFileInListing:399->verifyFileStatusAsExpected:439->Assert.assertNotEquals:199->Assert.failEquals:185->Assert.fail:88
 File length in authoritative table with
   > Raw: 
S3AFileStatus{path=s3a://cloudera-dev-gabor-ireland/dir-fcd509ca-0db4-426a-9c4f-b972a4949a5d/file-1-fcd509ca-0db4-426a-9c4f-b972a4949a5d;
 isDirectory=false; length=15; replication=1; blocksize=33554432; 
modification_time=1553182592000; access_time=0; owner=gaborbota; 
group=gaborbota; permission=rw-rw-rw-; isSymlink=false; hasAcl=false; 
isEncrypted=false; isErasureCoded=false} isEmptyDirectory=FALSE
   > Guarded: 
S3AFileStatus{path=s3a://cloudera-dev-gabor-ireland/dir-fcd509ca-0db4-426a-9c4f-b972a4949a5d/file-1-fcd509ca-0db4-426a-9c4f-b972a4949a5d;
 isDirectory=false; length=15; replication=1; blocksize=33554432; 
modification_time=1553182592000; access_time=0; owner=gaborbota; 
group=gaborbota; permission=rw-rw-rw-; isSymlink=false; hasAcl=false; 
isEncrypted=false; isErasureCoded=false} isEmptyDirectory=FALSE. Actual: 15





[GitHub] [hadoop] bgaborg commented on issue #624: HADOOP-15999. S3Guard: Better support for out-of-band operations

2019-03-21 Thread GitBox
bgaborg commented on issue #624: HADOOP-15999. S3Guard: Better support for 
out-of-band operations
URL: https://github.com/apache/hadoop/pull/624#issuecomment-475270062
 
 
   I'll do another check on your modifications, run some tests with the local 
and DDB stores, and get back with the results.





[GitHub] [hadoop] sahilTakiar commented on a change in pull request #600: HDFS-14348: Fix JNI exception handling issues in libhdfs

2019-03-21 Thread GitBox
sahilTakiar commented on a change in pull request #600: HDFS-14348: Fix JNI 
exception handling issues in libhdfs
URL: https://github.com/apache/hadoop/pull/600#discussion_r267805522
 
 

 ##
 File path: 
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs/hdfs.c
 ##
 @@ -2512,17 +2509,35 @@ int hadoopRzOptionsSetByteBufferPool(
   if (jthr) {
   printExceptionAndFree(env, jthr, PRINT_EXC_ALL,
   "hadoopRzOptionsSetByteBufferPool(className=%s): ", className);
-  errno = EINVAL;
-  return -1;
+  ret = EINVAL;
+  goto done;
   }
-  opts->byteBufferPool = (*env)->NewGlobalRef(env, byteBufferPool);
-  if (!opts->byteBufferPool) {
-  printExceptionAndFree(env, jthr, PRINT_EXC_ALL,
+  // Only set opts->byteBufferPool if creating a global reference is
+  // successful
+  globalByteBufferPool = (*env)->NewGlobalRef(env, byteBufferPool);
+  if (!globalByteBufferPool) {
+  printPendingExceptionAndFree(env, PRINT_EXC_ALL,
   "hadoopRzOptionsSetByteBufferPool(className=%s): ",
   className);
-  errno = EINVAL;
-  return -1;
+  ret = EINVAL;
+  goto done;
+  }
+  // Delete any previous ByteBufferPool we had before setting a new one.
+  if (opts->byteBufferPool) {
+  (*env)->DeleteGlobalRef(env, opts->byteBufferPool);
   }
+  opts->byteBufferPool = globalByteBufferPool;
+} else if (opts->byteBufferPool) {
+// If the specified className is NULL, delete any previous
+// ByteBufferPool we had.
+(*env)->DeleteGlobalRef(env, opts->byteBufferPool);
 
 Review comment:
   Yeah done.





[GitHub] [hadoop] noslowerdna commented on a change in pull request #606: HADOOP-16190. S3A copyFile operation to include source versionID or etag in the copy request

2019-03-21 Thread GitBox
noslowerdna commented on a change in pull request #606: HADOOP-16190. S3A 
copyFile operation to include source versionID or etag in the copy request
URL: https://github.com/apache/hadoop/pull/606#discussion_r267775945
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
 ##
 @@ -2864,7 +2865,7 @@ public String getCanonicalServiceName() {
* @throws IOException Other IO problems
*/
 
 Review comment:
   Add `@return` javadoc





[jira] [Commented] (HADOOP-16156) [Clean-up] Remove NULL check before instanceof and fix checkstyle in InnerNodeImpl

2019-03-21 Thread Daniel Templeton (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16798121#comment-16798121
 ] 

Daniel Templeton commented on HADOOP-16156:
---

After a careful review, I have a couple more comments:
# Looks like this change was unnecessary:{code}  return new 
InnerNodeImpl(parentName,
getPath(this), this, this.getLevel()+1);{code}  That line was not over 
the 80 char limit.  Now, if I were in a mood to reformat code, I'd add a space 
on either side of that plus, which would put it over the limit.  You can go 
either way, but the change in the current patch is superfluous.
# If you're going to reformat this line:{code}if (loc == null || 
loc.length() == 0) return this;{code} then you may as well put some parens 
around the clauses in the conditional.
# If you're going to reformat these lines:{code}if (childnode == null) 
return null; // non-existing node
if (path.length == 1) return childnode;{code} you should probably combine 
them into a single conditional.  If {{childnode == null}}, then {{return 
childnode}} is the same as {{return null}}.  If you do, be sure to add a 
comment to explain it.
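Point 3 above — folding the null check and the single-element path check into one conditional — can be illustrated with a simplified stand-in for InnerNodeImpl. The map-backed lookup and the method names here are hypothetical; only the control-flow shape matches the suggestion.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical illustration of the reviewer's point: when childnode == null,
// "return childnode" is the same as "return null", so the non-existing-node
// case and the fully-resolved-path case can share a single return.
public class CombinedReturnSketch {
    static Map<String, String> children = new HashMap<>();

    static String getLoc(String[] path) {
        String childnode = children.get(path[0]);
        // Combined conditional: covers both the non-existing node (null)
        // and the path-fully-resolved case in one return.
        if (childnode == null || path.length == 1) {
            return childnode;
        }
        return "descend:" + childnode; // placeholder for the recursive descent
    }

    public static void main(String[] args) {
        children.put("rack1", "node-rack1");
        System.out.println(getLoc(new String[]{"rack1"}));   // prints node-rack1
        System.out.println(getLoc(new String[]{"missing"})); // prints null
    }
}
```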

> [Clean-up] Remove NULL check before instanceof and fix checkstyle in 
> InnerNodeImpl
> --
>
> Key: HADOOP-16156
> URL: https://issues.apache.org/jira/browse/HADOOP-16156
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Shweta
>Assignee: Shweta
>Priority: Minor
> Attachments: HADOOP-16156.001.patch, HADOOP-16156.002.patch, 
> HADOOP-16156.003.patch
>
>







[GitHub] [hadoop] hadoop-yetus commented on a change in pull request #606: HADOOP-16190. S3A copyFile operation to include source versionID or etag in the copy request

2019-03-21 Thread GitBox
hadoop-yetus commented on a change in pull request #606: HADOOP-16190. S3A 
copyFile operation to include source versionID or etag in the copy request
URL: https://github.com/apache/hadoop/pull/606#discussion_r267769368
 
 

 ##
 File path: 
hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/testing.md
 ##
 @@ -288,6 +288,57 @@ For the default test dataset, hosted in the `landsat-pds` 
bucket, this is:
 
 ```
 
+##  Testing against versioned buckets
+
+AWS S3 and some third party stores support versioned buckets.
+
+Hadoop is adding awareness of this, including 
 
 Review comment:
   whitespace:end of line
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services





[GitHub] [hadoop] hadoop-yetus commented on issue #606: HADOOP-16190. S3A copyFile operation to include source versionID or etag in the copy request

2019-03-21 Thread GitBox
hadoop-yetus commented on issue #606: HADOOP-16190. S3A copyFile operation to 
include source versionID or etag in the copy request
URL: https://github.com/apache/hadoop/pull/606#issuecomment-475239524
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 29 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1020 | trunk passed |
   | +1 | compile | 30 | trunk passed |
   | +1 | checkstyle | 25 | trunk passed |
   | +1 | mvnsite | 36 | trunk passed |
   | +1 | shadedclient | 700 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 46 | trunk passed |
   | +1 | javadoc | 27 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 29 | the patch passed |
   | +1 | compile | 28 | the patch passed |
   | +1 | javac | 28 | the patch passed |
   | +1 | checkstyle | 20 | the patch passed |
   | +1 | mvnsite | 33 | the patch passed |
   | -1 | whitespace | 0 | The patch has 1 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply |
   | +1 | shadedclient | 742 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | findbugs | 50 | the patch passed |
   | +1 | javadoc | 23 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 273 | hadoop-aws in the patch passed. |
   | +1 | asflicense | 32 | The patch does not generate ASF License warnings. |
   | | | 3212 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-606/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/606 |
   | Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall 
 mvnsite  unit  shadedclient  findbugs  checkstyle  |
   | uname | Linux 2dc13fc24f1d 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 9f1c017 |
   | maven | version: Apache Maven 3.3.9 |
   | Default Java | 1.8.0_191 |
   | findbugs | v3.1.0-RC1 |
   | whitespace | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-606/3/artifact/out/whitespace-eol.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-606/3/testReport/ |
   | Max. process+thread count | 440 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-606/3/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[jira] [Commented] (HADOOP-16085) S3Guard: use object version or etags to protect against inconsistent read after replace/overwrite

2019-03-21 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16798111#comment-16798111
 ] 

Steve Loughran commented on HADOOP-16085:
-

I've got a short term patch for HADOOP-16190 which I'd like reviewed and in 
first, as it includes instructions on testing setup and is straightforward to 
backport. Reviews encouraged: https://github.com/apache/hadoop/pull/606

The tracking of etag/version in S3Guard should line up with it.

bq. We need to provide these tests to show that the improvement is backward 
compatible so no need to do any update manually.

We wouldn't be able to run mixed clients without it, and that's going to 
happen. Which is something to bear in mind: old clients will be adding files 
without etags or version markers. If you can do a unified rollout of clients 
(e.g. an EC2-hosted cluster where all VMs are on the same image), all will 
be well, but in a more heterogeneous cluster, that can't hold. It would be good 
to test for that with some workflow of:

* create a file, versioned
* remove the version column values for that entry (may need some s3guard test 
methods here)
* work with the file again

That's to simulate an overwrite.

bq. S3 eventual consistency manifests in a way where one read sees the new 
version and the next read sees the old version. This seems unlikely but I don't 
think there is any guarantee it couldn't happen.

I've seen that happen with OpenStack Swift. It's why all our contract tests are 
set to have unique names across test cases: to avoid accidental contamination 
of subsequent tests with data from old runs. For S3 Select, well, users will 
just have to deal with it. "Don't overwrite a file you are querying."
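The protection being discussed can be illustrated with a toy model (illustrative Python only; `MetadataStore` and `VersionedStore` are hypothetical stand-ins, not the S3Guard API): the metadata store records the version id at write time, and reads pin to that version instead of "latest", so an eventually consistent read after an overwrite cannot serve the old copy.

```python
class VersionedStore:
    """Toy versioned object store: every put keeps the prior copies."""
    def __init__(self):
        self.history = {}  # key -> list of (version_id, data)

    def put(self, key, data):
        versions = self.history.setdefault(key, [])
        vid = len(versions)
        versions.append((vid, data))
        return vid

    def get(self, key, version_id=None):
        versions = self.history[key]
        if version_id is None:
            # simulate the worst case of eventual consistency:
            # an unpinned read may serve the oldest copy
            return versions[0][1]
        return dict(versions)[version_id]


class MetadataStore:
    """Stand-in for the S3Guard table: remembers the last version written."""
    def __init__(self):
        self.latest = {}


def guarded_put(store, meta, key, data):
    meta.latest[key] = store.put(key, data)


def guarded_get(store, meta, key):
    # read pinned to the version recorded at write time
    return store.get(key, version_id=meta.latest[key])
```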







> S3Guard: use object version or etags to protect against inconsistent read 
> after replace/overwrite
> -
>
> Key: HADOOP-16085
> URL: https://issues.apache.org/jira/browse/HADOOP-16085
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Ben Roling
>Priority: Major
> Attachments: HADOOP-16085-003.patch, HADOOP-16085_002.patch, 
> HADOOP-16085_3.2.0_001.patch
>
>
> Currently S3Guard doesn't track S3 object versions.  If a file is written in 
> S3A with S3Guard and then subsequently overwritten, there is no protection 
> against the next reader seeing the old version of the file instead of the new 
> one.
> It seems like the S3Guard metadata could track the S3 object version.  When a 
> file is created or updated, the object version could be written to the 
> S3Guard metadata.  When a file is read, the read out of S3 could be performed 
> by object version, ensuring the correct version is retrieved.
> I don't have a lot of direct experience with this yet, but this is my 
> impression from looking through the code.  My organization is looking to 
> shift some datasets stored in HDFS over to S3 and is concerned about this 
> potential issue as there are some cases in our codebase that would do an 
> overwrite.
> I imagine this idea may have been considered before but I couldn't quite 
> track down any JIRAs discussing it.  If there is one, feel free to close this 
> with a reference to it.
> Am I understanding things correctly?  Is this idea feasible?  Any feedback 
> that could be provided would be appreciated.  We may consider crafting a 
> patch.






[jira] [Created] (HADOOP-16204) ABFS tests to include terasort

2019-03-21 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-16204:
---

 Summary: ABFS tests to include terasort
 Key: HADOOP-16204
 URL: https://issues.apache.org/jira/browse/HADOOP-16204
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/azure, test
Affects Versions: 3.3.0
Reporter: Steve Loughran


With MAPREDUCE-7092 in, all the MR examples can be run against object stores, 
even when the cluster fs is just file://.

Running these against ABFS helps validate that the store works with these 
workflows.






[GitHub] [hadoop] steveloughran commented on issue #606: HADOOP-16190. S3A copyFile operation to include source versionID or etag in the copy request

2019-03-21 Thread GitBox
steveloughran commented on issue #606: HADOOP-16190. S3A copyFile operation to 
include source versionID or etag in the copy request
URL: https://github.com/apache/hadoop/pull/606#issuecomment-475231024
 
 
   * tested at scale against a versioned bucket; all happy apart from those 
dynamo db failures. 
   * Added a section on setting up a versioned bucket for testing, especially 
setting up a rule to delete old files. You will pay for a day's storage of 
all the data generated on every test run: your bill is O(runs), with scale test 
runs costing more. But after 24h with no tests, there is no data to bill for.
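A lifecycle rule of the kind described might look like the following. This is a sketch under stated assumptions: the rule names and one-day windows are illustrative, and the (commented-out) boto3 call and bucket name are hypothetical; check your own retention needs before applying anything like it.

```python
# Sketch: a lifecycle configuration that expires objects, noncurrent
# (overwritten) versions, and stale multipart uploads after one day, so a
# versioned test bucket only bills for roughly a day of generated data.
lifecycle = {
    "Rules": [
        {
            "ID": "expire-old-test-data",   # illustrative rule name
            "Status": "Enabled",
            "Filter": {"Prefix": ""},       # apply to the whole bucket
            "Expiration": {"Days": 1},
            "NoncurrentVersionExpiration": {"NoncurrentDays": 1},
            "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 1},
        }
    ]
}

# To apply (hypothetical bucket name, requires AWS credentials):
# import boto3
# boto3.client("s3").put_bucket_lifecycle_configuration(
#     Bucket="my-versioned-test-bucket", LifecycleConfiguration=lifecycle)
```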
   
   FWIW, I'll set up different buckets with different policies, and have my dev 
hadoop-trunk source tree running unversioned, but the separate branch I use for 
committing work set up to test against versioned buckets. At least once we've 
got the version-aware delete-fake-dirs code in.





[jira] [Commented] (HADOOP-16144) Create a Hadoop RPC based KMS client

2019-03-21 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16798100#comment-16798100
 ] 

Wei-Chiu Chuang commented on HADOOP-16144:
--

Hey guys. Really really appreciate the work. I am unfortunately being preempted 
by other high prio tasks. In any case, there's enough work in this project for 
multiple contributors working in parallel.

> Create a Hadoop RPC based KMS client
> 
>
> Key: HADOOP-16144
> URL: https://issues.apache.org/jira/browse/HADOOP-16144
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: kms
>Reporter: Wei-Chiu Chuang
>Assignee: Anu Engineer
>Priority: Major
>
> Create a new KMS client implementation that speaks Hadoop RPC.





