[GitHub] [hadoop] hadoop-yetus commented on issue #1088: HDDS-1689. Implement S3 Create Bucket request to use Cache and DoubleBuffer.

2019-07-12 Thread GitBox
hadoop-yetus commented on issue #1088: HDDS-1689. Implement S3 Create Bucket 
request to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/1088#issuecomment-511080754
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | 0 | reexec | 88 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 5 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 77 | Maven dependency ordering for branch |
   | +1 | mvninstall | 550 | trunk passed |
   | +1 | compile | 268 | trunk passed |
   | +1 | checkstyle | 74 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 949 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 168 | trunk passed |
   | 0 | spotbugs | 374 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 604 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 33 | Maven dependency ordering for patch |
   | +1 | mvninstall | 508 | the patch passed |
   | +1 | compile | 271 | the patch passed |
   | +1 | cc | 271 | the patch passed |
   | +1 | javac | 271 | the patch passed |
   | -0 | checkstyle | 39 | hadoop-ozone: The patch generated 2 new + 0 
unchanged - 0 fixed = 2 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 751 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 193 | the patch passed |
   | +1 | findbugs | 678 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 358 | hadoop-hdds in the patch passed. |
   | -1 | unit | 2912 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 53 | The patch does not generate ASF License warnings. |
   | | | 8777 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdds.scm.pipeline.TestRatisPipelineCreateAndDestory |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStream |
   |   | hadoop.ozone.client.rpc.TestCommitWatcher |
   |   | hadoop.ozone.client.rpc.TestFailureHandlingByClient |
   |   | hadoop.ozone.client.rpc.TestKeyInputStream |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=18.09.7 Server=18.09.7 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1088/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1088 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc |
   | uname | Linux d73f0f46d68f 4.15.0-52-generic #56-Ubuntu SMP Tue Jun 4 
22:49:08 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 4a70a0d |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1088/2/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1088/2/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1088/2/testReport/ |
   | Max. process+thread count | 4875 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-ozone/common 
hadoop-ozone/ozone-manager U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1088/2/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #1063: HDDS-1775. Make OM KeyDeletingService compatible with HA model

2019-07-12 Thread GitBox
hadoop-yetus commented on issue #1063: HDDS-1775. Make OM KeyDeletingService 
compatible with HA model
URL: https://github.com/apache/hadoop/pull/1063#issuecomment-511079591
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | 0 | reexec | 102 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 4 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 27 | Maven dependency ordering for branch |
   | +1 | mvninstall | 697 | trunk passed |
   | +1 | compile | 343 | trunk passed |
   | +1 | checkstyle | 101 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 1018 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 192 | trunk passed |
   | 0 | spotbugs | 444 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 706 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 24 | Maven dependency ordering for patch |
   | +1 | mvninstall | 735 | the patch passed |
   | +1 | compile | 309 | the patch passed |
   | +1 | cc | 309 | the patch passed |
   | +1 | javac | 309 | the patch passed |
   | -0 | checkstyle | 41 | hadoop-ozone: The patch generated 5 new + 0 
unchanged - 0 fixed = 5 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | -1 | whitespace | 0 | The patch has 1 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply |
   | +1 | shadedclient | 741 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 101 | hadoop-ozone generated 1 new + 12 unchanged - 0 fixed 
= 13 total (was 12) |
   | +1 | findbugs | 609 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 364 | hadoop-hdds in the patch passed. |
   | -1 | unit | 2958 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 40 | The patch does not generate ASF License warnings. |
   | | | 9387 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.om.TestSecureOzoneManager |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.om.TestOmMetrics |
   |   | hadoop.ozone.client.rpc.TestCloseContainerHandlingByClient |
   |   | hadoop.ozone.client.rpc.TestWatchForCommit |
   |   | hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
   |   | hadoop.hdds.scm.pipeline.TestSCMRestart |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=18.09.7 Server=18.09.7 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1063/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1063 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc |
   | uname | Linux 3edb16226876 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 4a70a0d |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1063/3/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | whitespace | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1063/3/artifact/out/whitespace-eol.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1063/3/artifact/out/diff-javadoc-javadoc-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1063/3/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1063/3/testReport/ |
   | Max. process+thread count | 4995 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/ozone-manager 
hadoop-ozone/integration-test U: hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1063/3/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



[GitHub] [hadoop] hadoop-yetus commented on a change in pull request #1063: HDDS-1775. Make OM KeyDeletingService compatible with HA model

2019-07-12 Thread GitBox
hadoop-yetus commented on a change in pull request #1063: HDDS-1775. Make OM 
KeyDeletingService compatible with HA model
URL: https://github.com/apache/hadoop/pull/1063#discussion_r303188169
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
 ##
 @@ -405,9 +406,12 @@ private OzoneManager(OzoneConfiguration conf) throws 
IOException,
 omRpcServer = getRpcServer(conf);
 omRpcAddress = updateRPCListenAddress(configuration,
 OZONE_OM_ADDRESS_KEY, omNodeRpcAddr, omRpcServer);
+
 this.scmClient = new ScmClient(scmBlockClient, scmContainerClient);
-keyManager = new KeyManagerImpl(scmClient, metadataManager,
-configuration, omStorage.getOmId(), blockTokenMgr, getKmsProvider());
+
+keyManager = new KeyManagerImpl(this, scmClient, configuration,
+omStorage.getOmId());
+
 
 Review comment:
   whitespace:end of line
   





[GitHub] [hadoop] hadoop-yetus commented on issue #1088: HDDS-1689. Implement S3 Create Bucket request to use Cache and DoubleBuffer.

2019-07-12 Thread GitBox
hadoop-yetus commented on issue #1088: HDDS-1689. Implement S3 Create Bucket 
request to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/1088#issuecomment-511077933
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | 0 | reexec | 38 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 5 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 79 | Maven dependency ordering for branch |
   | +1 | mvninstall | 496 | trunk passed |
   | +1 | compile | 242 | trunk passed |
   | +1 | checkstyle | 61 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 778 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 147 | trunk passed |
   | 0 | spotbugs | 315 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 497 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 74 | Maven dependency ordering for patch |
   | +1 | mvninstall | 452 | the patch passed |
   | +1 | compile | 351 | the patch passed |
   | +1 | cc | 351 | the patch passed |
   | +1 | javac | 351 | the patch passed |
   | -0 | checkstyle | 35 | hadoop-ozone: The patch generated 2 new + 0 
unchanged - 0 fixed = 2 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 625 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 162 | the patch passed |
   | +1 | findbugs | 600 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 282 | hadoop-hdds in the patch passed. |
   | -1 | unit | 2039 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 48 | The patch does not generate ASF License warnings. |
   | | | 7252 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.ozone.client.rpc.TestContainerStateMachineFailures |
   |   | hadoop.ozone.client.rpc.TestCloseContainerHandlingByClient |
   |   | hadoop.ozone.client.rpc.TestCommitWatcher |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=18.09.7 Server=18.09.7 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1088/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1088 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc |
   | uname | Linux 22528066b727 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 4a70a0d |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1088/1/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1088/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1088/1/testReport/ |
   | Max. process+thread count | 5307 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-ozone/common 
hadoop-ozone/ozone-manager U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1088/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[jira] [Commented] (HADOOP-16404) ABFS default blocksize change(256MB from 512MB)

2019-07-12 Thread Da Zhou (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16884255#comment-16884255
 ] 

Da Zhou commented on HADOOP-16404:
--

Arun ran into some environment issues, so I ran the tests with this patch. 
Looks good to me. +1.

All tests passed against my US West account:
Tests run: 42, Failures: 0, Errors: 0, Skipped: 0
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0
Tests run: 392, Failures: 2, Errors: 0, Skipped: 207

> ABFS default blocksize change(256MB from 512MB)
> ---
>
> Key: HADOOP-16404
> URL: https://issues.apache.org/jira/browse/HADOOP-16404
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.1.2
>Reporter: Arun Singh
>Assignee: Arun Singh
>Priority: Major
>  Labels: patch
> Fix For: 3.1.2
>
> Attachments: HADOOP-16404.patch
>
>
> We intend to change the default blocksize of the ABFS driver to 256MB from 
> 512MB.
> After changing the blocksize, we performed a series of tests (Spark Tera, 
> Spark DFSIO, TPCDS on Hive) and have seen consistent improvements on the 
> order of 4-5%.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #823: HADOOP-16315. ABFS: transform full UPN for named user in AclStatus

2019-07-12 Thread GitBox
hadoop-yetus commented on issue #823: HADOOP-16315. ABFS: transform full UPN 
for named user in AclStatus
URL: https://github.com/apache/hadoop/pull/823#issuecomment-511069737
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | 0 | reexec | 38 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1179 | trunk passed |
   | +1 | compile | 29 | trunk passed |
   | +1 | checkstyle | 19 | trunk passed |
   | +1 | mvnsite | 35 | trunk passed |
   | +1 | shadedclient | 723 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 21 | trunk passed |
   | 0 | spotbugs | 54 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | +1 | findbugs | 52 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 41 | the patch passed |
   | +1 | compile | 26 | the patch passed |
   | +1 | javac | 26 | the patch passed |
   | +1 | checkstyle | 18 | the patch passed |
   | +1 | mvnsite | 28 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 737 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 19 | the patch passed |
   | +1 | findbugs | 53 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 79 | hadoop-azure in the patch passed. |
   | +1 | asflicense | 24 | The patch does not generate ASF License warnings. |
   | | | 3218 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=18.09.7 Server=18.09.7 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-823/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/823 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux f3a099cb0b34 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 4a70a0d |
   | Default Java | 1.8.0_212 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-823/3/testReport/ |
   | Max. process+thread count | 417 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-823/3/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] DadanielZ commented on issue #823: HADOOP-16315. ABFS: transform full UPN for named user in AclStatus

2019-07-12 Thread GitBox
DadanielZ commented on issue #823: HADOOP-16315. ABFS: transform full UPN for 
named user in AclStatus
URL: https://github.com/apache/hadoop/pull/823#issuecomment-511058397
 
 
   Reran the tests, all passed:
   xns account:
   Tests run: 41, Failures: 0, Errors: 0, Skipped: 0
   Tests run: 393, Failures: 0, Errors: 0, Skipped: 25
   Tests run: 190, Failures: 0, Errors: 0, Skipped: 23
   
   non-xns account:
   Tests run: 41, Failures: 0, Errors: 0, Skipped: 0
   Tests run: 393, Failures: 0, Errors: 0, Skipped: 207
   Tests run: 190, Failures: 0, Errors: 0, Skipped: 15





[GitHub] [hadoop] xiaoyuyao closed pull request #1066: HDDS-1776. Fix image name in some ozone docker-compose files. Contrib…

2019-07-12 Thread GitBox
xiaoyuyao closed pull request #1066: HDDS-1776. Fix image name in some ozone 
docker-compose files. Contrib…
URL: https://github.com/apache/hadoop/pull/1066
 
 
   





[GitHub] [hadoop] bharatviswa504 opened a new pull request #1088: HDDS-1689. Implement S3 Create Bucket request to use Cache and DoubleBuffer.

2019-07-12 Thread GitBox
bharatviswa504 opened a new pull request #1088: HDDS-1689. Implement S3 Create 
Bucket request to use Cache and DoubleBuffer.
URL: https://github.com/apache/hadoop/pull/1088
 
 
   





[GitHub] [hadoop] xiaoyuyao commented on issue #1074: HDDS-1544. Support default Acls for volume, bucket, keys and prefix. Contributed by Ajay Kumar.

2019-07-12 Thread GitBox
xiaoyuyao commented on issue #1074: HDDS-1544. Support default Acls for volume, 
bucket, keys and prefix. Contributed by Ajay Kumar.
URL: https://github.com/apache/hadoop/pull/1074#issuecomment-511053477
 
 
   The TestOzoneNativeAuthorizer failure is related: the prefix lock is not 
reentrant. The fix is to refactor PrefixManagerImpl#getLongestPrefixPath into 
a helper function that does not acquire the prefix lock.
   
   In PrefixManagerImpl#setAcl, we should call the helper function without 
taking the lock.
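
   The refactor described above can be sketched as follows. This is a 
hypothetical illustration, not the actual PrefixManagerImpl code: the class 
and method names are modeled on the comment, the prefix-matching logic is a 
placeholder, and a ReentrantReadWriteLock stands in for OM's own 
(non-reentrant) prefix lock.

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Hypothetical sketch of the pattern: the public method acquires the prefix
// lock and delegates to a lock-free helper; callers that already hold the
// lock (e.g. setAcl) invoke the helper directly, so the lock is never
// re-acquired -- re-acquisition would self-deadlock on a non-reentrant lock.
class PrefixManagerSketch {
  private final ReentrantReadWriteLock prefixLock = new ReentrantReadWriteLock();

  // Public API: takes the lock, then delegates to the helper.
  public String getLongestPrefixPath(String path) {
    prefixLock.readLock().lock();
    try {
      return getLongestPrefixPathHelper(path);
    } finally {
      prefixLock.readLock().unlock();
    }
  }

  // Helper with the real logic; assumes the caller handles locking.
  // (Placeholder logic: returns the parent path of the given key.)
  String getLongestPrefixPathHelper(String path) {
    int idx = path.lastIndexOf('/');
    return idx > 0 ? path.substring(0, idx) : path;
  }

  // setAcl already holds the prefix (write) lock, so it calls the helper
  // rather than the lock-taking public method.
  public String setAcl(String path) {
    prefixLock.writeLock().lock();
    try {
      return getLongestPrefixPathHelper(path);
    } finally {
      prefixLock.writeLock().unlock();
    }
  }
}
```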





[GitHub] [hadoop] hanishakoneru commented on issue #1063: HDDS-1775. Make OM KeyDeletingService compatible with HA model

2019-07-12 Thread GitBox
hanishakoneru commented on issue #1063: HDDS-1775. Make OM KeyDeletingService 
compatible with HA model
URL: https://github.com/apache/hadoop/pull/1063#issuecomment-511052962
 
 
   Fixed failing tests and added unit tests.





[jira] [Commented] (HADOOP-16428) Distcp don't make use of S3a Committers, be it magic or staging

2019-07-12 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16884156#comment-16884156
 ] 

Steve Loughran commented on HADOOP-16428:
-

I'm saying: it's hard to show any benefits over the -direct option. It's what 
I do.

Put differently: what problem do you have that the -direct option doesn't 
address?

> Distcp don't make use of S3a Committers, be it magic or staging
> ---
>
> Key: HADOOP-16428
> URL: https://issues.apache.org/jira/browse/HADOOP-16428
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3, tools/distcp
>Affects Versions: 3.1.1
>Reporter: Sahil Kaw
>Priority: Minor
>
> Currently, I don't see Distcp make use of S3a Committers, be it Magic or 
> Staging and I have noticed most of the jobs which use MapReduce frameworks 
> use S3 committers except distcp. Distcp makes use of the FileOutputCommitter 
> even if S3a committer parameters are specified in the core-site.xml. Is this 
> by design? If yes, can someone please explain the reason for that. Are there 
> any limitations or potential risks of using S3a committers with Distcp? 
> I know there is a "-direct" option that can be used with the 
> FileOutputCommitter in order to avoid renaming while committing for object 
> stores. But if anyone can shed some light on the current limitations of S3a 
> committers with distcp and reason for choosing FileOutputCommitters for 
> Distcp over S3a committers, it would be helpful.  Thanks






[GitHub] [hadoop] hadoop-yetus commented on issue #1071: HDDS-1779. TestWatchForCommit tests are flaky.

2019-07-12 Thread GitBox
hadoop-yetus commented on issue #1071: HDDS-1779. TestWatchForCommit tests are 
flaky.
URL: https://github.com/apache/hadoop/pull/1071#issuecomment-510999876
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | 0 | reexec | 57 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 473 | trunk passed |
   | +1 | compile | 269 | trunk passed |
   | +1 | checkstyle | 74 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 864 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 163 | trunk passed |
   | 0 | spotbugs | 316 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 505 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 450 | the patch passed |
   | +1 | compile | 268 | the patch passed |
   | +1 | javac | 268 | the patch passed |
   | -0 | checkstyle | 42 | hadoop-ozone: The patch generated 2 new + 0 
unchanged - 0 fixed = 2 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 682 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 162 | the patch passed |
   | +1 | findbugs | 520 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 338 | hadoop-hdds in the patch passed. |
   | -1 | unit | 2469 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 41 | The patch does not generate ASF License warnings. |
   | | | 7595 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineCreateAndDestory |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineProvider |
   |   | hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestCloseContainerHandlingByClient |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=18.09.7 Server=18.09.7 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1071/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1071 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux d6815f71900a 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 4a70a0d |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1071/1/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1071/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1071/1/testReport/ |
   | Max. process+thread count | 5231 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/integration-test U: 
hadoop-ozone/integration-test |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1071/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[jira] [Commented] (HADOOP-16428) Distcp don't make use of S3a Committers, be it magic or staging

2019-07-12 Thread Sahil Kaw (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16884021#comment-16884021
 ] 

Sahil Kaw commented on HADOOP-16428:


Thanks [~ste...@apache.org], so that means that because of the different task 
profile of distcp (which you explained above), it can't use the S3 committers, 
and your suggestion is to look into "the multipart upload API of Hadoop 3.3 and 
the ability to upload different blocks in parallel and coalesce them at the 
end". Is my understanding right?

> Distcp don't make use of S3a Committers, be it magic or staging
> ---
>
> Key: HADOOP-16428
> URL: https://issues.apache.org/jira/browse/HADOOP-16428
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3, tools/distcp
>Affects Versions: 3.1.1
>Reporter: Sahil Kaw
>Priority: Minor
>
> Currently, I don't see Distcp make use of S3a committers, be it Magic or 
> Staging, and I have noticed that most jobs which use the MapReduce framework 
> use S3 committers, except distcp. Distcp makes use of the FileOutputCommitter 
> even if S3a committer parameters are specified in core-site.xml. Is this 
> by design? If yes, can someone please explain the reason for that? Are there 
> any limitations or potential risks of using S3a committers with Distcp? 
> I know there is a "-direct" option that can be used with the 
> FileOutputCommitter in order to avoid renaming while committing for object 
> stores. But if anyone can shed some light on the current limitations of S3a 
> committers with distcp and the reason for choosing FileOutputCommitter for 
> Distcp over S3a committers, it would be helpful.  Thanks
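For reference, selecting an S3A committer for ordinary MapReduce jobs is done with configuration along these lines (a sketch based on the documented S3A committer settings; verify the values against the Hadoop version in use). The report above is that distcp ends up on FileOutputCommitter regardless of this configuration.

{code:xml}
<!-- core-site.xml: route s3a output through the S3A committer factory
     and pick a committer implementation. -->
<property>
  <name>mapreduce.outputcommitter.factory.scheme.s3a</name>
  <value>org.apache.hadoop.fs.s3a.commit.S3ACommitterFactory</value>
</property>
<property>
  <name>fs.s3a.committer.name</name>
  <!-- one of: file (classic), directory, partitioned, magic -->
  <value>magic</value>
</property>
<property>
  <!-- the magic committer additionally needs magic path support enabled -->
  <name>fs.s3a.committer.magic.enabled</name>
  <value>true</value>
</property>
{code}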



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13363) Upgrade protobuf from 2.5.0 to something newer

2019-07-12 Thread Tsuyoshi Ozawa (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-13363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16883990#comment-16883990
 ] 

Tsuyoshi Ozawa commented on HADOOP-13363:
-

I'm happy that Anu, Yikun, and Steven have resumed the conversation :-) 

I don't know whether this is a good time to do the upgrade. Maybe the most difficult 
part of this task is to get consensus among us, because the upgrade can 
disrupt other projects which depend on Apache Hadoop, as Steve said. In my 
experience, a lesson I learned from the Guava update (though I count 
it as a failure because the patch was reverted) is that we should keep 
the dependencies on common libraries even if Apache Hadoop itself doesn't use 
them. 

So, a safer way for the ecosystem that I came up with is as follows:
1. Shade the updated protobuf version, e.g. protobuf v3.
2. Gradually replace the existing places where protobuf v2.5 is used with protobuf 
v3. This can be done on a non-master branch. Here, we retain the dependency on 
protobuf v2.5, because other projects may use it.
3. Announce when the dependency will be deleted. 
4. Remove the dependency in a future version.

This kind of gradual replacement approach might be acceptable to the Hadoop 
ecosystem, I think.
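Step 1 above (shading) can be sketched with the maven-shade-plugin's relocation feature. This is only an illustrative fragment, not the actual Hadoop build change; the relocated package prefix is a hypothetical name chosen for the example.

{code:xml}
<!-- pom.xml fragment: bundle protobuf-java 3.x and relocate its packages
     so the shaded copy cannot clash with the protobuf 2.5 that downstream
     projects still see on the classpath. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <executions>
    <execution>
      <phase>package</phase>
      <goals><goal>shade</goal></goals>
      <configuration>
        <relocations>
          <relocation>
            <pattern>com.google.protobuf</pattern>
            <!-- hypothetical target package for the relocated classes -->
            <shadedPattern>org.apache.hadoop.thirdparty.protobuf</shadedPattern>
          </relocation>
        </relocations>
      </configuration>
    </execution>
  </executions>
</plugin>
{code}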

> Upgrade protobuf from 2.5.0 to something newer
> --
>
> Key: HADOOP-13363
> URL: https://issues.apache.org/jira/browse/HADOOP-13363
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.0.0-alpha1, 3.0.0-alpha2
>Reporter: Allen Wittenauer
>Priority: Major
>  Labels: security
> Attachments: HADOOP-13363.001.patch, HADOOP-13363.002.patch, 
> HADOOP-13363.003.patch, HADOOP-13363.004.patch, HADOOP-13363.005.patch
>
>
> Standard protobuf 2.5.0 does not work properly on many platforms.  (See, for 
> example, https://gist.github.com/BennettSmith/7111094 ).  In order to 
> avoid crazy workarounds in the build environment, and given that 2.5.0 is 
> slowly disappearing as a standard installable package even for 
> Linux/x86, we need to either upgrade, self-bundle, or something else.






[GitHub] [hadoop] hadoop-yetus commented on issue #1087: ContainerStateMachine should have its own executors for executing applyTransaction calls

2019-07-12 Thread GitBox
hadoop-yetus commented on issue #1087: ContainerStateMachine should have its 
own executors for executing applyTransaction calls
URL: https://github.com/apache/hadoop/pull/1087#issuecomment-510957462
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 59 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 467 | trunk passed |
   | +1 | compile | 250 | trunk passed |
   | +1 | checkstyle | 75 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 874 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 153 | trunk passed |
   | 0 | spotbugs | 305 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 493 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 456 | the patch passed |
   | +1 | compile | 277 | the patch passed |
   | +1 | javac | 277 | the patch passed |
   | -0 | checkstyle | 30 | hadoop-hdds: The patch generated 8 new + 0 
unchanged - 0 fixed = 8 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 697 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 154 | the patch passed |
   | +1 | findbugs | 520 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 339 | hadoop-hdds in the patch passed. |
   | -1 | unit | 2405 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 52 | The patch does not generate ASF License warnings. |
   | | | 7502 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineCreateAndDestory |
   |   | hadoop.ozone.client.rpc.TestCommitWatcher |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineProvider |
   |   | hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestCloseContainerHandlingByClient |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=18.09.7 Server=18.09.7 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1087/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1087 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux a56bca2dfbd0 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 190e434 |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1087/1/artifact/out/diff-checkstyle-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1087/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1087/1/testReport/ |
   | Max. process+thread count | 5352 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/container-service U: 
hadoop-hdds/container-service |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1087/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[jira] [Updated] (HADOOP-16417) abfs can't access storage account without password

2019-07-12 Thread Jose Luis Pedrosa (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16417?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jose Luis Pedrosa updated HADOOP-16417:
---
Description: 
*** NOTE: apparently a workaround is to use any string as the password; Azure 
will allow access to an open storage account even with a wrong password.


It does not seem possible to access storage accounts without passwords using 
abfs, but it is possible using wasb.

 

This sample code (Spark based) illustrates it: the following code using 
abfs_path will throw an exception
{noformat}
Exception in thread "main" java.lang.IllegalArgumentException: Invalid account 
key.
	at org.apache.hadoop.fs.azurebfs.services.SharedKeyCredentials.<init>(SharedKeyCredentials.java:70)
	at org.apache.hadoop.fs.azurebfs.AzureBlobFileSystemStore.initializeClient(AzureBlobFileSystemStore.java:812)
	at org.apache.hadoop.fs.azurebfs.AzureBlobFileSystemStore.<init>(AzureBlobFileSystemStore.java:149)
	at org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem.initialize(AzureBlobFileSystem.java:108)
	at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3303)
	at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:124)
	at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3352)
	at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3320)
	at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:479)
	at org.apache.hadoop.fs.Path.getFileSystem(Path.java:361)
{noformat}
While using the wasb_path works normally:
{code:java}
import org.apache.spark.api.java.function.FilterFunction;
import org.apache.spark.sql.RuntimeConfig;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;

public class SimpleApp {

    static String blob_account_name = "azureopendatastorage";
    static String blob_container_name = "gfsweatherdatacontainer";
    static String blob_relative_path = "GFSWeather/GFSProcessed";
    static String blob_sas_token = "";
    static String abfs_path =
        "abfs://" + blob_container_name + "@" + blob_account_name + ".dfs.core.windows.net/" + blob_relative_path;
    static String wasbs_path =
        "wasbs://" + blob_container_name + "@" + blob_account_name + ".blob.core.windows.net/" + blob_relative_path;

    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder().appName("NOAAGFS Run").getOrCreate();
        configureAzureHadoopConnetor(spark);
        RuntimeConfig conf = spark.conf();

        conf.set("fs.azure.account.key." + blob_account_name + ".dfs.core.windows.net", blob_sas_token);
        conf.set("fs.azure.account.key." + blob_account_name + ".blob.core.windows.net", blob_sas_token);

        System.out.println("Creating parquet dataset");
        Dataset<Row> logData = spark.read().parquet(abfs_path);

        System.out.println("Creating temp view");
        logData.createOrReplaceTempView("source");

        System.out.println("SQL");
        spark.sql("SELECT * FROM source LIMIT 10").show();
        spark.stop();
    }

    public static void configureAzureHadoopConnetor(SparkSession session) {
        RuntimeConfig conf = session.conf();

        conf.set("fs.AbstractFileSystem.wasb.impl", "org.apache.hadoop.fs.azure.Wasb");
        conf.set("fs.AbstractFileSystem.wasbs.impl", "org.apache.hadoop.fs.azure.Wasbs");
        conf.set("fs.wasb.impl", "org.apache.hadoop.fs.azure.NativeAzureFileSystem");
        conf.set("fs.wasbs.impl", "org.apache.hadoop.fs.azure.NativeAzureFileSystem$Secure");

        conf.set("fs.azure.secure.mode", false);

        conf.set("fs.abfs.impl", "org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem");
        conf.set("fs.abfss.impl", "org.apache.hadoop.fs.azurebfs.SecureAzureBlobFileSystem");

        conf.set("fs.AbstractFileSystem.abfs.impl", "org.apache.hadoop.fs.azurebfs.Abfs");
        conf.set("fs.AbstractFileSystem.abfss.impl", "org.apache.hadoop.fs.azurebfs.Abfss");

        // Works in conjunction with fs.azure.secure.mode. Setting this config to true
        // results in fs.azure.NativeAzureFileSystem using the local SAS key generation,
        // where the SAS keys are generated in the same process as fs.azure.NativeAzureFileSystem.
        // If the fs.azure.secure.mode flag is set to false, this flag has no effect.
        conf.set("fs.azure.local.sas.key.mode", false);
    }
}
{code}
Sample build.gradle
{noformat}
plugins {
id 'java'
}

group 'org.samples'
version '1.0-SNAPSHOT'

sourceCompatibility = 1.8

repositories {
mavenCentral()
}

dependencies {
compile  'org.apache.spark:spark-sql_2.12:2.4.3'
}
{noformat}

  was:
It does not seem possible to access storage accounts without passwords using 
abfs, but it is possible using wasb.

 

This sample code (Spark based) to illustrate, the following code using 
abfs_path with throw an exception
{noformat}
Exception in thread 

[GitHub] [hadoop] hadoop-yetus commented on issue #1086: HADOOP-16341. ShutDownHookManager: Regressed performance on Hook remo…

2019-07-12 Thread GitBox
hadoop-yetus commented on issue #1086: HADOOP-16341. ShutDownHookManager: 
Regressed performance on Hook remo…
URL: https://github.com/apache/hadoop/pull/1086#issuecomment-510953576
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 83 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1252 | trunk passed |
   | +1 | compile | 1158 | trunk passed |
   | +1 | checkstyle | 45 | trunk passed |
   | +1 | mvnsite | 85 | trunk passed |
   | +1 | shadedclient | 888 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 69 | trunk passed |
   | 0 | spotbugs | 149 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 146 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 70 | the patch passed |
   | +1 | compile | 1110 | the patch passed |
   | +1 | javac | 1110 | the patch passed |
   | -0 | checkstyle | 44 | hadoop-common-project/hadoop-common: The patch 
generated 2 new + 2 unchanged - 0 fixed = 4 total (was 2) |
   | +1 | mvnsite | 80 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 726 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 67 | the patch passed |
   | +1 | findbugs | 160 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 572 | hadoop-common in the patch failed. |
   | +1 | asflicense | 45 | The patch does not generate ASF License warnings. |
   | | | 6682 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.util.TestShutdownHookManager |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=18.09.7 Server=18.09.7 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1086/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1086 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 7c540dd37401 4.15.0-52-generic #56-Ubuntu SMP Tue Jun 4 
22:49:08 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 190e434 |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1086/2/artifact/out/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1086/2/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1086/2/testReport/ |
   | Max. process+thread count | 1668 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1086/2/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] hadoop-yetus commented on issue #1086: HADOOP-16341. ShutDownHookManager: Regressed performance on Hook remo…

2019-07-12 Thread GitBox
hadoop-yetus commented on issue #1086: HADOOP-16341. ShutDownHookManager: 
Regressed performance on Hook remo…
URL: https://github.com/apache/hadoop/pull/1086#issuecomment-510952612
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 72 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1288 | trunk passed |
   | +1 | compile | 1163 | trunk passed |
   | +1 | checkstyle | 47 | trunk passed |
   | +1 | mvnsite | 92 | trunk passed |
   | +1 | shadedclient | 896 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 64 | trunk passed |
   | 0 | spotbugs | 129 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 126 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 56 | the patch passed |
   | +1 | compile | 1140 | the patch passed |
   | +1 | javac | 1140 | the patch passed |
   | -0 | checkstyle | 46 | hadoop-common-project/hadoop-common: The patch 
generated 2 new + 2 unchanged - 0 fixed = 4 total (was 2) |
   | +1 | mvnsite | 88 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 745 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 62 | the patch passed |
   | +1 | findbugs | 140 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 592 | hadoop-common in the patch failed. |
   | +1 | asflicense | 49 | The patch does not generate ASF License warnings. |
   | | | 6735 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.util.TestShutdownHookManager |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=18.09.7 Server=18.09.7 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1086/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1086 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 1c6568919351 4.15.0-52-generic #56-Ubuntu SMP Tue Jun 4 
22:49:08 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 190e434 |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1086/1/artifact/out/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1086/1/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1086/1/testReport/ |
   | Max. process+thread count | 1728 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1086/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] anuengineer merged pull request #1035: HDDS-1735. Create separate unit and integration test executor dev-support script

2019-07-12 Thread GitBox
anuengineer merged pull request #1035: HDDS-1735. Create separate unit and 
integration test executor dev-support script
URL: https://github.com/apache/hadoop/pull/1035
 
 
   





[GitHub] [hadoop] hadoop-yetus commented on issue #1085: HDDS-1785. OOM error in Freon due to the concurrency handling

2019-07-12 Thread GitBox
hadoop-yetus commented on issue #1085: HDDS-1785. OOM error in Freon due to the 
concurrency handling
URL: https://github.com/apache/hadoop/pull/1085#issuecomment-510942107
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 71 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 482 | trunk passed |
   | +1 | compile | 248 | trunk passed |
   | +1 | checkstyle | 68 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 905 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 154 | trunk passed |
   | 0 | spotbugs | 317 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 504 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 535 | the patch passed |
   | +1 | compile | 310 | the patch passed |
   | +1 | javac | 310 | the patch passed |
   | +1 | checkstyle | 79 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 724 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 153 | the patch passed |
   | +1 | findbugs | 608 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 353 | hadoop-hdds in the patch passed. |
   | -1 | unit | 198 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 36 | The patch does not generate ASF License warnings. |
   | | | 5596 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.ozone.om.ratis.TestOzoneManagerDoubleBufferWithOMResponse |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=18.09.7 Server=18.09.7 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1085/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1085 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux e9107caa267d 4.15.0-48-generic #51-Ubuntu SMP Wed Apr 3 
08:28:49 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 190e434 |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1085/2/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1085/2/testReport/ |
   | Max. process+thread count | 370 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/tools U: hadoop-ozone/tools |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1085/2/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[jira] [Commented] (HADOOP-16417) abfs can't access storage account without password

2019-07-12 Thread Masatake Iwasaki (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16883919#comment-16883919
 ] 

Masatake Iwasaki commented on HADOOP-16417:
---

WASB allows accessing a public container of a BlobStorage account without 
{{fs.azure.account.key.ACCOUNT_NAME.blob.core.windows.net}}, or with it set to 
an empty string.
{noformat}
$ bin/hadoop fs -ls 
wasb://mypubliccontai...@myblobaccount.blob.core.windows.net/
{noformat}
For a public container of a BlobStorage or StorageV2 account without a hierarchical 
namespace, ABFS works with neither no configuration nor an empty string as the 
shared key, as [~jlpedrosa] reported.
{noformat}
$ bin/hadoop fs -ls abfs://mypubliccontai...@myblobaccount.dfs.core.windows.net/
ls: Configuration property myblob.dfs.core.windows.net not found.
{noformat}
Current ABFS does not support anonymous access and uses AuthType.SharedKey as the 
default if {{fs.azure.account.auth.type.ACCOUNT_NAME.dfs.core.windows.net}} is 
not specified. I'm trying to add {{AuthType.Anonymous}} or something to handle 
this case. Just sending the request without an Authorization header does not seem to work.
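For context, the per-account auth type mentioned above is normally configured like this (a sketch; {{SharedKey}} is the documented default, while {{Anonymous}} is only a proposed value in this issue and does not exist yet; the account name is taken from the example above).

{code:xml}
<!-- core-site.xml: per-account ABFS authentication selection. -->
<property>
  <name>fs.azure.account.auth.type.myblobaccount.dfs.core.windows.net</name>
  <!-- existing values include SharedKey (default), OAuth, Custom, SAS;
       an Anonymous value is what this issue proposes to add -->
  <value>SharedKey</value>
</property>
<property>
  <!-- required by SharedKey auth; this is the key ABFS fails to find
       for anonymous/public containers -->
  <name>fs.azure.account.key.myblobaccount.dfs.core.windows.net</name>
  <value>ACCOUNT_KEY_BASE64</value>
</property>
{code}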

> abfs can't access storage account without password
> --
>
> Key: HADOOP-16417
> URL: https://issues.apache.org/jira/browse/HADOOP-16417
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Jose Luis Pedrosa
>Assignee: Masatake Iwasaki
>Priority: Minor
>
> It does not seem possible to access storage accounts without passwords using 
> abfs, but it is possible using wasb.
>  
> This sample code (Spark based) to illustrate, the following code using 
> abfs_path with throw an exception
> {noformat}
> Exception in thread "main" java.lang.IllegalArgumentException: Invalid 
> account key.
> at org.apache.hadoop.fs.azurebfs.services.SharedKeyCredentials.<init>(SharedKeyCredentials.java:70)
> at org.apache.hadoop.fs.azurebfs.AzureBlobFileSystemStore.initializeClient(AzureBlobFileSystemStore.java:812)
> at org.apache.hadoop.fs.azurebfs.AzureBlobFileSystemStore.<init>(AzureBlobFileSystemStore.java:149)
> at org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem.initialize(AzureBlobFileSystem.java:108)
> at 
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3303)
> at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:124)
> at 
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3352)
> at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3320)
> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:479)
> at org.apache.hadoop.fs.Path.getFileSystem(Path.java:361)
> {noformat}
>   While using the wasb_path will work normally,
> {code:java}
> import org.apache.spark.api.java.function.FilterFunction;
> import org.apache.spark.sql.RuntimeConfig;
> import org.apache.spark.sql.SparkSession;
> import org.apache.spark.sql.Dataset;
> import org.apache.spark.sql.Row;
> public class SimpleApp {
> static String blob_account_name = "azureopendatastorage";
> static String blob_container_name = "gfsweatherdatacontainer";
> static String blob_relative_path = "GFSWeather/GFSProcessed";
> static String blob_sas_token = "";
> static String abfs_path = 
> "abfs://"+blob_container_name+"@"+blob_account_name+".dfs.core.windows.net/"+blob_relative_path;
> static String wasbs_path = "wasbs://"+blob_container_name + 
> "@"+blob_account_name+".blob.core.windows.net/" + blob_relative_path;
> public static void main(String[] args) {
>
> SparkSession spark = SparkSession.builder().appName("NOAAGFS 
> Run").getOrCreate();
> configureAzureHadoopConnetor(spark);
> RuntimeConfig conf = spark.conf();
> 
> conf.set("fs.azure.account.key."+blob_account_name+".dfs.core.windows.net", 
> blob_sas_token);
> 
> conf.set("fs.azure.account.key."+blob_account_name+".blob.core.windows.net", 
> blob_sas_token);
> System.out.println("Creating parquet dataset");
> Dataset<Row> logData = spark.read().parquet(abfs_path);
> System.out.println("Creating temp view");
> logData.createOrReplaceTempView("source");
> System.out.println("SQL");
> spark.sql("SELECT * FROM source LIMIT 10").show();
> spark.stop();
> }
> public static void configureAzureHadoopConnetor(SparkSession session) {
> RuntimeConfig conf = session.conf();
> 
> conf.set("fs.AbstractFileSystem.wasb.impl","org.apache.hadoop.fs.azure.Wasb");
> 
> conf.set("fs.AbstractFileSystem.wasbs.impl","org.apache.hadoop.fs.azure.Wasbs");
> 
> conf.set("fs.wasb.impl","org.apache.hadoop.fs.azure.NativeAzureFileSystem");
> 
> 

[GitHub] [hadoop] zeroflag commented on issue #1086: HADOOP-16341. ShutDownHookManager: Regressed performance on Hook remo…

2019-07-12 Thread GitBox
zeroflag commented on issue #1086: HADOOP-16341. ShutDownHookManager: Regressed 
performance on Hook remo…
URL: https://github.com/apache/hadoop/pull/1086#issuecomment-510932548
 
 
   @steveloughran could you review it? It's the same as #940 but with the fix 
for the flaky test.





[jira] [Updated] (HADOOP-16428) Distcp don't make use of S3a Committers, be it magic or staging

2019-07-12 Thread Sahil Kaw (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16428?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sahil Kaw updated HADOOP-16428:
---
Summary: Distcp don't make use of S3a Committers, be it magic or staging  
(was: Distcp make use of S3a Committers, be it magic or staging)

> Distcp don't make use of S3a Committers, be it magic or staging
> ---
>
> Key: HADOOP-16428
> URL: https://issues.apache.org/jira/browse/HADOOP-16428
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3, tools/distcp
>Affects Versions: 3.1.1
>Reporter: Sahil Kaw
>Priority: Minor
>
> Currently, I don't see Distcp make use of S3a Committers, be it Magic or 
> Staging and I have noticed most of the jobs which use MapReduce frameworks 
> use S3 committers except distcp. Distcp makes use of the FileOutputCommitter 
> even if S3a committer parameters are specified in the core-site.xml. Is this 
> by design? If yes, can someone please explain the reason for that. Are there 
> any limitations or potential risks of using S3a committers with Distcp? 
> I know there is a "-direct" option that can be used with the
> FileOutputCommitter in order to avoid renaming while committing for object
> stores. But if anyone can shed some light on the current limitations of S3a
> committers with distcp and the reason for choosing FileOutputCommitter for
> Distcp over S3a committers, it would be helpful. Thanks



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] elek commented on issue #1032: [HDDS-1201] Reporting corrupted containers info to SCM

2019-07-12 Thread GitBox
elek commented on issue #1032: [HDDS-1201] Reporting corrupted containers info 
to SCM
URL: https://github.com/apache/hadoop/pull/1032#issuecomment-510924348
 
 
   Oh, no worries at all. It's a problem with the build system if these minor 
problems are not clearly visible immediately on the PR.
   
   I just tried to share how it can be avoided, but I am also thinking about 
documenting it better on the wiki.
   
   (ps: I am also experimenting with helper scripts such as 
`./hadoop-ozone/dev-support/checks/checkstyle.sh` to make it easier to run the 
checks locally. For me it helps a lot, but the checkstyle report will be more 
usable after HDDS-1735...) 





[GitHub] [hadoop] adoroszlai commented on a change in pull request #1076: HDDS-1782. Add an option to MiniOzoneChaosCluster to read files multiple times. Contributed by Mukul Kumar Singh.

2019-07-12 Thread GitBox
adoroszlai commented on a change in pull request #1076: HDDS-1782. Add an 
option to MiniOzoneChaosCluster to read files multiple times. Contributed by 
Mukul Kumar Singh.
URL: https://github.com/apache/hadoop/pull/1076#discussion_r303020363
 
 

 ##
 File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/chaos/TestProbability.java
 ##
 @@ -0,0 +1,39 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.ozone.chaos;
+
+import org.apache.commons.lang3.RandomUtils;
+
+/**
+ * Class to keep track of test probability.
+ */
+public class TestProbability {
+  private int pct;
+
+  private TestProbability(int pct) {
+this.pct = pct;
+  }
+
+  public boolean isTrue() {
+return (RandomUtils.nextInt() * pct / 100) == 1;
 
 Review comment:
   Thanks.





[GitHub] [hadoop] adoroszlai commented on issue #1085: HDDS-1785. OOM error in Freon due to the concurrency handling

2019-07-12 Thread GitBox
adoroszlai commented on issue #1085: HDDS-1785. OOM error in Freon due to the 
concurrency handling
URL: https://github.com/apache/hadoop/pull/1085#issuecomment-510916323
 
 
   @elek @iamcaoxudong please review





[GitHub] [hadoop] lokeshj1703 opened a new pull request #1087: ContainerStateMachine should have its own executors for executing applyTransaction calls

2019-07-12 Thread GitBox
lokeshj1703 opened a new pull request #1087: ContainerStateMachine should have 
its own executors for executing applyTransaction calls
URL: https://github.com/apache/hadoop/pull/1087
 
 
   Currently ContainerStateMachine uses the executors provided by 
XceiverServerRatis for executing applyTransaction calls. This results in 
two or more ContainerStateMachines sharing the same set of executors. Delay or 
load in one ContainerStateMachine would then adversely affect the performance of 
the other state machines. It is better to have a separate set of 
executors for each ContainerStateMachine.
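
   A minimal sketch of the idea (class and method names here are illustrative, 
not the actual patch): each state machine owns a private executor pool, so a 
slow transaction in one instance cannot starve the others.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Illustrative sketch: each state machine owns its own executor pool
// instead of borrowing a pool shared across all state machines.
public class PerStateMachineExecutorDemo {
  static class StateMachine {
    private final ExecutorService applyExecutor;

    StateMachine(int threads) {
      // Pool is private to this instance, not shared via the server.
      this.applyExecutor = Executors.newFixedThreadPool(threads);
    }

    CompletableFuture<String> applyTransaction(String txn) {
      // Work submitted here only competes with this instance's own load.
      return CompletableFuture.supplyAsync(() -> "applied:" + txn, applyExecutor);
    }

    void close() {
      applyExecutor.shutdown();
    }
  }

  public static void main(String[] args) {
    StateMachine sm1 = new StateMachine(2);
    StateMachine sm2 = new StateMachine(2);
    // A slow transaction on sm1 does not occupy sm2's threads.
    System.out.println(sm1.applyTransaction("t1").join());  // applied:t1
    System.out.println(sm2.applyTransaction("t2").join());  // applied:t2
    sm1.close();
    sm2.close();
  }
}
```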





[GitHub] [hadoop] zeroflag opened a new pull request #1086: HADOOP-16341. ShutDownHookManager: Regressed performance on Hook remo…

2019-07-12 Thread GitBox
zeroflag opened a new pull request #1086: HADOOP-16341. ShutDownHookManager: 
Regressed performance on Hook remo…
URL: https://github.com/apache/hadoop/pull/1086
 
 
   …vals after HADOOP-15679
   
   Continuation of #940. Taking it over from Gopal.





[GitHub] [hadoop] hgadre commented on issue #1032: [HDDS-1201] Reporting corrupted containers info to SCM

2019-07-12 Thread GitBox
hgadre commented on issue #1032: [HDDS-1201] Reporting corrupted containers 
info to SCM
URL: https://github.com/apache/hadoop/pull/1032#issuecomment-510906972
 
 
   @elek sorry about that. I have filed HDDS-1794 to fix this.





[jira] [Created] (HADOOP-16430) S3AFilesystem.delete to incrementally update s3guard with deletions

2019-07-12 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-16430:
---

 Summary: S3AFilesystem.delete to incrementally update s3guard with 
deletions
 Key: HADOOP-16430
 URL: https://issues.apache.org/jira/browse/HADOOP-16430
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Reporter: Steve Loughran


Currently S3AFilesystem.delete() only updates S3Guard at the end of a paged 
delete operation. This makes it slow when there are many thousands of files to 
delete, and it increases the window of vulnerability to failures.

Preferred:

* after every bulk DELETE call is issued to S3, queue the (async) delete of all 
entries in that request.
* at the end of the delete, await the completion of these operations.
* inside S3AFS, also do the delete across threads, so that different HTTPS 
connections can be used.

This should maximise DDB throughput against tables which aren't IO limited.

When executed against small IOP-limited tables, the parallel DDB DELETE batches 
will trigger a lot of throttling events; we should make sure these aren't going 
to trigger failures.
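
The queue-then-await flow above can be sketched as follows (names are 
illustrative, not the real S3A internals):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Illustrative sketch: after each paged bulk DELETE is issued to S3, the
// matching metastore entries are removed asynchronously, instead of updating
// the metastore once at the very end of the whole operation.
public class IncrementalDeleteDemo {

  // Returns the number of metastore entries removed, awaiting all queued
  // async updates before returning.
  static int deleteWithIncrementalMetastoreUpdates(List<List<String>> pages) {
    ExecutorService pool = Executors.newFixedThreadPool(4);
    List<String> removed = Collections.synchronizedList(new ArrayList<>());
    List<CompletableFuture<Void>> pending = new ArrayList<>();
    for (List<String> page : pages) {
      // 1. the bulk DELETE for this page goes to S3 here (elided), then
      // 2. the async metastore update for the same keys is queued.
      pending.add(CompletableFuture.runAsync(() -> removed.addAll(page), pool));
    }
    // 3. at the end of the delete, await completion of the queued updates.
    CompletableFuture.allOf(pending.toArray(new CompletableFuture[0])).join();
    pool.shutdown();
    return removed.size();
  }

  public static void main(String[] args) {
    // Two "pages" of keys, as a paged delete would produce them.
    List<List<String>> pages = List.of(List.of("a", "b"), List.of("c"));
    System.out.println(deleteWithIncrementalMetastoreUpdates(pages));  // 3
  }
}
```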






[jira] [Created] (HADOOP-16429) DynamoDBMetaStore deleteSubtree to delete leaf nodes first

2019-07-12 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-16429:
---

 Summary: DynamoDBMetaStore deleteSubtree to delete leaf nodes first
 Key: HADOOP-16429
 URL: https://issues.apache.org/jira/browse/HADOOP-16429
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 3.3.0
Reporter: Steve Loughran


In {{deleteSubtree(path)}}, the DynamoDB metastore walks down the tree, 
returning elements to delete. But it deletes parent entries before their 
children, so if an operation fails partway through, there will be orphans.

Better: have DescendantsIterator return all the leaf nodes before their parents, 
so the deletion is done bottom-up.

Also: push the deletions off into their own async queue/pool so that they don't 
become the bottleneck of the process.
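
Since a top-down walk is a topological order (every parent precedes its 
children), simply reversing it yields a child-before-parent deletion order. A 
small sketch (method names are illustrative):

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Illustrative sketch: delete paths deepest-first (children before parents),
// so a partial failure cannot leave orphaned children behind.
public class BottomUpDeleteDemo {

  // Reverse a top-down subtree walk so every child precedes its parent.
  static List<String> bottomUpOrder(List<String> topDownWalk) {
    Deque<String> stack = new ArrayDeque<>();
    for (String path : topDownWalk) {
      stack.push(path);  // last pushed comes out first
    }
    return new ArrayList<>(stack);
  }

  public static void main(String[] args) {
    // A top-down walk, as a descendants iterator would produce it today.
    List<String> walk = List.of("/a", "/a/b", "/a/b/file1", "/a/c");
    System.out.println(bottomUpOrder(walk));
    // [/a/c, /a/b/file1, /a/b, /a]
  }
}
```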






[GitHub] [hadoop] hadoop-yetus commented on issue #1085: HDDS-1785. OOM error in Freon due to the concurrency handling

2019-07-12 Thread GitBox
hadoop-yetus commented on issue #1085: HDDS-1785. OOM error in Freon due to the 
concurrency handling
URL: https://github.com/apache/hadoop/pull/1085#issuecomment-510894889
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 39 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 456 | trunk passed |
   | +1 | compile | 241 | trunk passed |
   | +1 | checkstyle | 65 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 803 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 173 | trunk passed |
   | 0 | spotbugs | 303 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 595 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 427 | the patch passed |
   | +1 | compile | 271 | the patch passed |
   | +1 | javac | 271 | the patch passed |
   | +1 | checkstyle | 81 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 657 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 149 | the patch passed |
   | -1 | findbugs | 321 | hadoop-ozone generated 1 new + 0 unchanged - 0 fixed 
= 1 total (was 0) |
   ||| _ Other Tests _ |
   | +1 | unit | 277 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1730 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 50 | The patch does not generate ASF License warnings. |
   | | | 6789 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hadoop-ozone |
   |  |  Exception is caught when Exception is not thrown in 
org.apache.hadoop.ozone.freon.RandomKeyGenerator.createKey(long)  At 
RandomKeyGenerator.java:is not thrown in 
org.apache.hadoop.ozone.freon.RandomKeyGenerator.createKey(long)  At 
RandomKeyGenerator.java:[line 728] |
   | Failed junit tests | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.container.ozoneimpl.TestOzoneContainer |
   |   | hadoop.ozone.client.rpc.TestWatchForCommit |
   |   | hadoop.ozone.client.rpc.TestCommitWatcher |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestCloseContainerHandlingByClient |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=18.09.7 Server=18.09.7 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1085/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1085 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 0f7d4659da4b 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / f9fab9f |
   | Default Java | 1.8.0_212 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1085/1/artifact/out/new-findbugs-hadoop-ozone.html
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1085/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1085/1/testReport/ |
   | Max. process+thread count | 5295 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/tools U: hadoop-ozone/tools |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1085/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[jira] [Commented] (HADOOP-11890) Uber-JIRA: Hadoop should support IPv6

2019-07-12 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-11890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16883825#comment-16883825
 ] 

Steve Loughran commented on HADOOP-11890:
-

bq. Anything trying to semi-intelligently split/join authorities by a colon as 
problematic.

Fortunately we cut that from s3a; abfs does take authorities, but it doesn't 
look for a :pass, AFAIK.

# It's time
# Inevitably, it'd be unstable at first, but that doesn't mean it shouldn't be 
done
# Someone needs to volunteer to do this for 3.3+
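
The colon-splitting hazard is easy to demonstrate: an IPv6 literal contains 
colons itself, which is why RFC 3986 requires the bracketed host form. A small 
example (the URI is made up for illustration):

```java
import java.net.URI;

// Illustrative example of why naively splitting an authority on ':' breaks
// for IPv6: the address itself contains colons, so the host must be
// bracketed and parsed with a URI-aware parser.
public class Ipv6AuthorityDemo {
  public static void main(String[] args) {
    URI uri = URI.create("hdfs://[2001:db8::1]:8020/user/data");
    // java.net.URI understands bracketed IPv6 hosts, so host and port
    // separate cleanly; a plain split on ':' would be ambiguous.
    System.out.println(uri.getHost());
    System.out.println(uri.getPort());  // 8020
  }
}
```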

> Uber-JIRA: Hadoop should support IPv6
> -
>
> Key: HADOOP-11890
> URL: https://issues.apache.org/jira/browse/HADOOP-11890
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: net
>Reporter: Nate Edel
>Assignee: Nate Edel
>Priority: Major
>  Labels: ipv6
> Attachments: hadoop_2.7.3_ipv6_commits.txt
>
>
> Hadoop currently treats IPv6 as unsupported.  Track related smaller issues to 
> support IPv6.
> (Current case here is mainly HBase on HDFS, so any suggestions about other 
> test cases/workload are really appreciated.)






[jira] [Commented] (HADOOP-16415) Speed up S3A test runs

2019-07-12 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16883813#comment-16883813
 ] 

Steve Loughran commented on HADOOP-16415:
-

My thinking on this:

We're running the same operation (MR or terasort sequence) with different 
cluster configs, which is exactly what parameterized test runs can do. So we 
just need a parameterization which declares everything a specific test run 
needs: 
* config options
* extra callbacks on validation
* expected outcomes

Then we have a test which brings up the mini YARN cluster in static setup, 
destroys it in teardown, and runs the sets parameterized.

The only thing we'd need to do is implement a significantly more complex 
parameterization than normal, with each parameter being a class declaring 
everything needed for that run. Ideally, one which we could share between the 
Terasort and the TestMRJob tests.
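
The pattern can be sketched in plain Java (all names here are hypothetical, 
not the actual test classes): one shared static "cluster", plus a list of 
parameter objects, each carrying its config, validation callback, and expected 
outcome.

```java
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// Illustrative sketch of a parameterized run: one shared cluster, many
// parameter objects, each declaring everything one run needs.
public class ParameterizedRunDemo {
  // Stands in for the mini YARN cluster brought up once per suite.
  static String sharedCluster;

  static class TestRun {
    final String name;
    final Map<String, String> conf;   // config options for this run
    final Consumer<String> validate;  // extra validation callback

    TestRun(String name, Map<String, String> conf, Consumer<String> validate) {
      this.name = name;
      this.conf = conf;
      this.validate = validate;
    }
  }

  public static void main(String[] args) {
    sharedCluster = "mini-yarn";  // static setup, done once for all runs
    List<TestRun> runs = List.of(
        new TestRun("magic", Map.of("fs.s3a.committer.name", "magic"),
            out -> { if (!out.contains("magic")) throw new AssertionError(); }),
        new TestRun("staging", Map.of("fs.s3a.committer.name", "staging"),
            out -> { if (!out.contains("staging")) throw new AssertionError(); }));
    for (TestRun run : runs) {
      // The same job logic executes once per parameter set.
      String outcome = sharedCluster + ":" + run.conf.get("fs.s3a.committer.name");
      run.validate.accept(outcome);
      System.out.println(run.name + " ok");
    }
    sharedCluster = null;  // static teardown, done once
  }
}
```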



> Speed up S3A test runs
> --
>
> Key: HADOOP-16415
> URL: https://issues.apache.org/jira/browse/HADOOP-16415
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Priority: Major
>
> S3A Test runs are way too slow.
> Speed them by
> * reducing test setup/teardown costs
> * eliminating obsolete test cases
> * merge small tests into larger ones.
> One thing i see is that the main S3A test cases create and destroy new FS 
> instances; There's both a setup and teardown cost there, but it does 
> guarantee better isolation.
> Maybe if we know all test cases in a specific suite need the same options, we 
> can manage that better; demand create the FS but only delete it in an 
> @Afterclass method. That'd give us the OO-inheritance based setup of tests, 
> but mean only one instance is done per suite






[GitHub] [hadoop] hadoop-yetus commented on issue #1076: HDDS-1782. Add an option to MiniOzoneChaosCluster to read files multiple times. Contributed by Mukul Kumar Singh.

2019-07-12 Thread GitBox
hadoop-yetus commented on issue #1076: HDDS-1782. Add an option to 
MiniOzoneChaosCluster to read files multiple times. Contributed by Mukul Kumar 
Singh.
URL: https://github.com/apache/hadoop/pull/1076#issuecomment-510887805
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 38 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | 0 | shelldocs | 0 | Shelldocs was not available. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 5 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 483 | trunk passed |
   | +1 | compile | 273 | trunk passed |
   | +1 | checkstyle | 84 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 790 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 156 | trunk passed |
   | 0 | spotbugs | 330 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 527 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 459 | the patch passed |
   | +1 | compile | 271 | the patch passed |
   | +1 | javac | 271 | the patch passed |
   | -0 | checkstyle | 49 | hadoop-ozone: The patch generated 3 new + 0 
unchanged - 0 fixed = 3 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | shellcheck | 0 | There were no new shellcheck issues. |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 689 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 156 | the patch passed |
   | +1 | findbugs | 534 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 309 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1687 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 40 | The patch does not generate ASF License warnings. |
   | | | 6862 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestWatchForCommit |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=18.09.7 Server=18.09.7 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1076/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1076 |
   | Optional Tests | dupname asflicense mvnsite unit shellcheck shelldocs 
compile javac javadoc mvninstall shadedclient findbugs checkstyle |
   | uname | Linux 0856e1e9aa5a 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / f9fab9f |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1076/2/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1076/2/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1076/2/testReport/ |
   | Max. process+thread count | 5263 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/integration-test U: 
hadoop-ozone/integration-test |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1076/2/console |
   | versions | git=2.7.4 maven=3.3.9 shellcheck=0.4.6 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[jira] [Commented] (HADOOP-16425) S3Guard fsck: Export MetadataStore and S3 bucket hierarchy

2019-07-12 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16425?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16883809#comment-16883809
 ] 

Steve Loughran commented on HADOOP-16425:
-

HADOOP-16384 does this, though without any promises of stability of format or 
escaping of characters

> S3Guard fsck: Export MetadataStore and S3 bucket hierarchy
> --
>
> Key: HADOOP-16425
> URL: https://issues.apache.org/jira/browse/HADOOP-16425
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Major
>
> The export should be done in a human-readable format like csv






[jira] [Updated] (HADOOP-16428) Distcp make use of S3a Committers, be it magic or staging

2019-07-12 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16428?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16428:

Fix Version/s: (was: 3.1.2)

> Distcp make use of S3a Committers, be it magic or staging
> -
>
> Key: HADOOP-16428
> URL: https://issues.apache.org/jira/browse/HADOOP-16428
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3, tools/distcp
>Affects Versions: 3.1.1
>Reporter: Sahil Kaw
>Priority: Minor
>
> Currently, I don't see Distcp make use of S3a Committers, be it Magic or 
> Staging and I have noticed most of the jobs which use MapReduce frameworks 
> use S3 committers except distcp. Distcp makes use of the FileOutputCommitter 
> even if S3a committer parameters are specified in the core-site.xml. Is this 
> by design? If yes, can someone please explain the reason for that. Are there 
> any limitations or potential risks of using S3a committers with Distcp? 
> I know there is a "-direct" option that can be used with the
> FileOutputCommitter in order to avoid renaming while committing for object
> stores. But if anyone can shed some light on the current limitations of S3a
> committers with distcp and the reason for choosing FileOutputCommitter for
> Distcp over S3a committers, it would be helpful. Thanks






[jira] [Updated] (HADOOP-16428) Distcp make use of S3a Committers, be it magic or staging

2019-07-12 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16428?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16428:

Component/s: tools/distcp

> Distcp make use of S3a Committers, be it magic or staging
> -
>
> Key: HADOOP-16428
> URL: https://issues.apache.org/jira/browse/HADOOP-16428
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3, tools/distcp
>Affects Versions: 3.1.1
>Reporter: Sahil Kaw
>Priority: Minor
> Fix For: 3.1.2
>
>
> Currently, I don't see Distcp make use of S3a Committers, be it Magic or 
> Staging and I have noticed most of the jobs which use MapReduce frameworks 
> use S3 committers except distcp. Distcp makes use of the FileOutputCommitter 
> even if S3a committer parameters are specified in the core-site.xml. Is this 
> by design? If yes, can someone please explain the reason for that. Are there 
> any limitations or potential risks of using S3a committers with Distcp? 
> I know there is a "-direct" option that can be used with the
> FileOutputCommitter in order to avoid renaming while committing for object
> stores. But if anyone can shed some light on the current limitations of S3a
> committers with distcp and the reason for choosing FileOutputCommitter for
> Distcp over S3a committers, it would be helpful. Thanks






[jira] [Commented] (HADOOP-16428) Distcp make use of S3a Committers, be it magic or staging

2019-07-12 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16883808#comment-16883808
 ] 

Steve Loughran commented on HADOOP-16428:
-

The s3a committers aim to eliminate the two renames which take place on task 
commit, and they use various devious techniques to pass commit information from 
the workers to the driver. They support the MapReduce 2.0 APIs only.

Distcp may use MapReduce, but it's got a very different task profile: first the 
files to copy are listed, then the workers upload each in turn, then rename it 
into place. The rename is there so that an incomplete upload isn't visible.

Distcp with -direct does no renames, and you don't get the incomplete uploads, 
so I don't think there's any reason to put effort in here. If someone were to, 
they could look at the multipart upload API of Hadoop 3.3 and the ability to 
upload different blocks in parallel and coalesce them at the end.

A key distcp limitation is that you can't use change detection between source 
and dest if the two stores have different checksum algorithms/values; something 
to track the values there across jobs would be good.

> Distcp make use of S3a Committers, be it magic or staging
> -
>
> Key: HADOOP-16428
> URL: https://issues.apache.org/jira/browse/HADOOP-16428
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.1.1
>Reporter: Sahil Kaw
>Priority: Minor
> Fix For: 3.1.2
>
>
> Currently, I don't see Distcp make use of S3a Committers, be it Magic or 
> Staging and I have noticed most of the jobs which use MapReduce frameworks 
> use S3 committers except distcp. Distcp makes use of the FileOutputCommitter 
> even if S3a committer parameters are specified in the core-site.xml. Is this 
> by design? If yes, can someone please explain the reason for that. Are there 
> any limitations or potential risks of using S3a committers with Distcp? 
> I know there is a "-direct" option that can be used with the
> FileOutputCommitter in order to avoid renaming while committing for object
> stores. But if anyone can shed some light on the current limitations of S3a
> committers with distcp and the reason for choosing FileOutputCommitter for
> Distcp over S3a committers, it would be helpful. Thanks






[GitHub] [hadoop] hadoop-yetus commented on issue #1083: HDDS-1791. Update network-tests/src/test/blockade/README.md file

2019-07-12 Thread GitBox
hadoop-yetus commented on issue #1083: HDDS-1791. Update 
network-tests/src/test/blockade/README.md file
URL: https://github.com/apache/hadoop/pull/1083#issuecomment-510877178
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 62 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 11 | Maven dependency ordering for branch |
   | +1 | mvninstall | 506 | trunk passed |
   | +1 | compile | 250 | trunk passed |
   | +1 | checkstyle | 65 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 788 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 144 | trunk passed |
   | 0 | spotbugs | 322 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 514 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 22 | Maven dependency ordering for patch |
   | +1 | mvninstall | 446 | the patch passed |
   | +1 | compile | 247 | the patch passed |
   | +1 | javac | 247 | the patch passed |
   | +1 | checkstyle | 32 | The patch passed checkstyle in hadoop-hdds |
   | +1 | checkstyle | 36 | hadoop-ozone: The patch generated 0 new + 0 
unchanged - 4 fixed = 0 total (was 4) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 648 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 154 | the patch passed |
   | +1 | findbugs | 544 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 410 | hadoop-hdds in the patch passed. |
   | -1 | unit | 3885 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 42 | The patch does not generate ASF License warnings. |
   | | | 8960 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestBlockOutputStream |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.hdds.scm.pipeline.TestRatisPipelineCreateAndDestory |
   |   | hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
   |   | hadoop.ozone.om.TestOzoneManagerHA |
   |   | hadoop.ozone.TestMiniOzoneCluster |
   |   | hadoop.ozone.om.TestOzoneManager |
   |   | hadoop.ozone.client.rpc.TestCloseContainerHandlingByClient |
   |   | hadoop.ozone.client.rpc.TestReadRetries |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=18.09.7 Server=18.09.7 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1083/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1083 |
   | Optional Tests | dupname asflicense mvnsite unit compile javac javadoc 
mvninstall shadedclient findbugs checkstyle |
   | uname | Linux 630f37ba5827 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / f9fab9f |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1083/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1083/1/testReport/ |
   | Max. process+thread count | 4240 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/fault-injection-test/network-tests 
hadoop-ozone/integration-test U: hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1083/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] elek closed pull request #1082: HDDS-1790. Fix checkstyle issues in TestDataScrubber.

2019-07-12 Thread GitBox
elek closed pull request #1082: HDDS-1790. Fix checkstyle issues in 
TestDataScrubber.
URL: https://github.com/apache/hadoop/pull/1082
 
 
   





[GitHub] [hadoop] elek closed pull request #1083: HDDS-1791. Update network-tests/src/test/blockade/README.md file

2019-07-12 Thread GitBox
elek closed pull request #1083: HDDS-1791. Update 
network-tests/src/test/blockade/README.md file
URL: https://github.com/apache/hadoop/pull/1083
 
 
   





[GitHub] [hadoop] elek commented on issue #1035: HDDS-1735. Create separate unit and integration test executor dev-support script

2019-07-12 Thread GitBox
elek commented on issue #1035: HDDS-1735. Create separate unit and integration 
test executor dev-support script
URL: https://github.com/apache/hadoop/pull/1035#issuecomment-510870453
 
 
   > @elek I was trying to merge, but seems like we have some conflicts. 
Perhaps due to the fact that I merged a patch from nanda, also there is an 
author check warning.
   
   Yes, I rebased it on top of trunk. 
   
   The author check is a false positive: one of the scripts greps for "@author" 
tags, so it necessarily contains "@author" as a string. I changed it to use 
string concatenation ("@a" + "uthor") to keep Yetus happy.
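A minimal, hypothetical sketch of that trick (class and method names are invented, this is not the actual dev-support script): the forbidden tag is assembled from two halves at runtime, so the checker's own source file would pass a naive grep for the literal string.

```java
public class AuthorTagCheck {
    // Build "@author" by concatenation so this source file never contains
    // the literal tag that the precommit check greps for.
    static final String TAG = "@a" + "uthor";

    static boolean containsAuthorTag(String line) {
        return line.contains(TAG);
    }

    public static void main(String[] args) {
        // Test input is also assembled at runtime, for the same reason.
        String offending = "/** " + "@a" + "uthor someone */";
        System.out.println(containsAuthorTag(offending));   // true
        System.out.println(containsAuthorTag("// plain"));  // false
    }
}
```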
   





[GitHub] [hadoop] hadoop-yetus commented on issue #1084: HDDS-1492. Generated chunk size name too long.

2019-07-12 Thread GitBox
hadoop-yetus commented on issue #1084: HDDS-1492. Generated chunk size name too 
long.
URL: https://github.com/apache/hadoop/pull/1084#issuecomment-510867630
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 36 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 6 | Maven dependency ordering for branch |
   | -1 | mvninstall | 9 | hadoop-hdds in trunk failed. |
   | -1 | mvninstall | 9 | hadoop-ozone in trunk failed. |
   | -1 | compile | 8 | hadoop-hdds in trunk failed. |
   | +1 | checkstyle | 75 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 832 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 165 | trunk passed |
   | 0 | spotbugs | 309 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 510 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 75 | Maven dependency ordering for patch |
   | +1 | mvninstall | 431 | the patch passed |
   | +1 | compile | 262 | the patch passed |
   | -1 | javac | 94 | hadoop-hdds generated 14 new + 0 unchanged - 0 fixed = 
14 total (was 0) |
   | -0 | checkstyle | 32 | hadoop-hdds: The patch generated 2 new + 0 
unchanged - 0 fixed = 2 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | -1 | whitespace | 0 | The patch has 116 line(s) that end in whitespace. 
Use git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply |
   | -1 | whitespace | 1 | The patch 1800  line(s) with tabs. |
   | +1 | shadedclient | 642 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 148 | the patch passed |
   | -1 | findbugs | 79 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | +1 | unit | 276 | hadoop-hdds in the patch passed. |
   | -1 | unit | 53 | hadoop-ozone in the patch failed. |
   | -1 | asflicense | 35 | The patch generated 1 ASF License warnings. |
   | | | 4282 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=18.09.7 Server=18.09.7 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1084/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1084 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 17acb20d2ff5 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / f9fab9f |
   | Default Java | 1.8.0_212 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1084/1/artifact/out/branch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1084/1/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1084/1/artifact/out/branch-compile-hadoop-hdds.txt
 |
   | javac | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1084/1/artifact/out/diff-compile-javac-hadoop-hdds.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1084/1/artifact/out/diff-checkstyle-hadoop-hdds.txt
 |
   | whitespace | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1084/1/artifact/out/whitespace-eol.txt
 |
   | whitespace | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1084/1/artifact/out/whitespace-tabs.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1084/1/artifact/out/patch-findbugs-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1084/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1084/1/testReport/ |
   | asflicense | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1084/1/artifact/out/patch-asflicense-problems.txt
 |
   | Max. process+thread count | 476 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/client hadoop-ozone/client U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1084/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



[GitHub] [hadoop] steveloughran commented on issue #1003: HADOOP-16384: Avoid inconsistencies between DDB and S3

2019-07-12 Thread GitBox
steveloughran commented on issue #1003: HADOOP-16384: Avoid inconsistencies 
between DDB and S3
URL: https://github.com/apache/hadoop/pull/1003#issuecomment-510862114
 
 
   thanks for all the testing; committed to trunk





[GitHub] [hadoop] steveloughran closed pull request #1003: HADOOP-16384: Avoid inconsistencies between DDB and S3

2019-07-12 Thread GitBox
steveloughran closed pull request #1003: HADOOP-16384: Avoid inconsistencies 
between DDB and S3
URL: https://github.com/apache/hadoop/pull/1003
 
 
   





[jira] [Resolved] (HADOOP-16406) ITestDynamoDBMetadataStore.testProvisionTable times out intermittently

2019-07-12 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16406?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-16406.
-
   Resolution: Fixed
Fix Version/s: 3.3.0

> ITestDynamoDBMetadataStore.testProvisionTable times out intermittently
> --
>
> Key: HADOOP-16406
> URL: https://issues.apache.org/jira/browse/HADOOP-16406
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 3.3.0
>
>
> Sometimes on test runs, ITestDynamoDBMetadataStore.testProvisionTable times 
> out because AWS takes too long to resize a table.
> {code}
> [ERROR] 
> testProvisionTable(org.apache.hadoop.fs.s3a.s3guard.ITestDynamoDBMetadataStore)
>   Time elapsed: 100.011 s  <<< ERROR!
> org.junit.runners.model.TestTimedOutException: test timed out after 10 
> milliseconds
>   at 
> org.apache.hadoop.fs.s3a.s3guard.ITestDynamoDBMetadataStore.testProvisionTable(ITestDynamoDBMetadataStore.java:963)
> {code}
> Given we are moving off provisioned IO to on-demand, I propose cutting this 
> test entirely






[jira] [Resolved] (HADOOP-16397) Hadoop S3Guard Prune command to support a -tombstone option.

2019-07-12 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-16397.
-
   Resolution: Fixed
Fix Version/s: 3.3.0

> Hadoop S3Guard Prune command to support a -tombstone option.
> 
>
> Key: HADOOP-16397
> URL: https://issues.apache.org/jira/browse/HADOOP-16397
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 3.3.0
>
>
> HADOOP-16279 added purging of tombstones as an explicit option, but its only 
> used in tests.
> By adding a {{-tombstone}} option to prune, we can purge all old tombstones 
> from a store. I actually think this is worth doing on a regular basis.






[jira] [Commented] (HADOOP-16384) S3A: Avoid inconsistencies between DDB and S3

2019-07-12 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16883744#comment-16883744
 ] 

Hudson commented on HADOOP-16384:
-

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #16902 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16902/])
HADOOP-16384: S3A: Avoid inconsistencies between DDB and S3. (stevel: rev 
b15ef7dc3d91c6d50fa515158104fba29f43e6b0)
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/BulkOperationState.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/ITestDynamoDBMetadataStore.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
* (add) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/DumpS3GuardDynamoTable.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractRootDirectoryTest.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3GuardEmptyDirs.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/PathMetadataDynamoDBTranslation.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/commit/AbstractITCommitMRJob.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/service/launcher/ServiceLaunchException.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3GuardListConsistency.java
* (add) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/S3GuardTableAccess.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/AbstractS3ATestBase.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/impl/ITestPartialRenamesDeletes.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/RenameTracker.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/TestDynamoDBMiscOperations.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/MetadataStoreTestBase.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/ContractTestUtils.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/MetadataStore.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/DynamoDBMetadataStore.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/S3GuardTool.java
* (add) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/AbstractS3GuardDynamoDBDiagnostic.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/PathOrderComparators.java
* (add) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/PurgeS3GuardDynamoTable.java
* (edit) hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/testing.md
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/AbstractS3GuardToolTestBase.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/ITestS3GuardToolDynamoDB.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractRootDir.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3ALocatedFileStatus.java
* (edit) hadoop-tools/hadoop-aws/src/test/resources/log4j.properties
* (edit) hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/s3guard.md
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/WriteOperationHelper.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/service/launcher/ServiceLauncher.java
* (add) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/ITestS3GuardDDBRootOperations.java
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Listing.java
* (edit) hadoop-tools/hadoop-aws/pom.xml
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/ITestDynamoDBMetadataStoreScale.java


> S3A: Avoid inconsistencies between DDB and S3
> -
>
> Key: HADOOP-16384
> URL: https://issues.apache.org/jira/browse/HADOOP-16384
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: hwdev-ireland-new.csv
>
>
> HADOOP-15183 added detection and rejection of prune updates when the store is 
> inconsistent (i.e. when it tries to update an entry twice in the same 
> operation, the second time with one that is inconsistent with the first)
> Now that we can detect this, we should 

[jira] [Resolved] (HADOOP-16384) S3A: Avoid inconsistencies between DDB and S3

2019-07-12 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-16384.
-
   Resolution: Fixed
Fix Version/s: 3.3.0

> S3A: Avoid inconsistencies between DDB and S3
> -
>
> Key: HADOOP-16384
> URL: https://issues.apache.org/jira/browse/HADOOP-16384
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: hwdev-ireland-new.csv
>
>
> HADOOP-15183 added detection and rejection of prune updates when the store is 
> inconsistent (i.e. when it tries to update an entry twice in the same 
> operation, the second time with one that is inconsistent with the first)
> Now that we can detect this, we should address it. We are lucky here in that 
> my DDB table is currently inconsistent: prune is failing. 
> Plan
> # new test to run in the sequential phase, which does a s3guard prune against 
> the bucket used in tests
> # use this to identify/debug the issue
> # replicate the problem in the ITestDDBMetastore tests
> # decide what to do in this world. Tell the user to run fsck? skip?






[jira] [Updated] (HADOOP-16384) S3A: Avoid inconsistencies between DDB and S3

2019-07-12 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16384:

Summary: S3A: Avoid inconsistencies between DDB and S3  (was: HADOOP-16384: 
S3A: Avoid inconsistencies between DDB and S3)

> S3A: Avoid inconsistencies between DDB and S3
> -
>
> Key: HADOOP-16384
> URL: https://issues.apache.org/jira/browse/HADOOP-16384
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: hwdev-ireland-new.csv
>
>
> HADOOP-15183 added detection and rejection of prune updates when the store is 
> inconsistent (i.e. when it tries to update an entry twice in the same 
> operation, the second time with one that is inconsistent with the first)
> Now that we can detect this, we should address it. We are lucky here in that 
> my DDB table is currently inconsistent: prune is failing. 
> Plan
> # new test to run in the sequential phase, which does a s3guard prune against 
> the bucket used in tests
> # use this to identify/debug the issue
> # replicate the problem in the ITestDDBMetastore tests
> # decide what to do in this world. Tell the user to run fsck? skip?






[GitHub] [hadoop] adoroszlai opened a new pull request #1085: HDDS-1785. OOM error in Freon due to the concurrency handling

2019-07-12 Thread GitBox
adoroszlai opened a new pull request #1085: HDDS-1785. OOM error in Freon due 
to the concurrency handling
URL: https://github.com/apache/hadoop/pull/1085
 
 
   ## What changes were proposed in this pull request?
   
   Change concurrency in Freon `RandomKeyGenerator`:
   
* create a worker for each thread
* let each worker create volumes, buckets and keys, without limiting 
"inner" objects to specific "outer" ones (e.g. a key may be created in any bucket)
   
   Workers coordinate the items they create using "global" counters.
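A hypothetical, stripped-down illustration of that coordination scheme (names invented; this is not the actual `RandomKeyGenerator` code): workers draw work items from a shared atomic counter instead of each owning a fixed (volume, bucket) subtree, so no thread blocks waiting for a specific "outer" object.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

public class FreonSketch {
    // Each worker claims the next global key index; the index determines
    // which bucket the key lands in, so any worker may fill any bucket.
    public static long run(int volumes, int buckets, int keysPerBucket,
                           int threads) throws InterruptedException {
        long totalKeys = (long) volumes * buckets * keysPerBucket;
        AtomicLong nextKey = new AtomicLong();   // "global" work counter
        AtomicLong created = new AtomicLong();   // stands in for real key writes
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        for (int t = 0; t < threads; t++) {
            pool.submit(() -> {
                long i;
                while ((i = nextKey.getAndIncrement()) < totalKeys) {
                    // Key i maps to bucket (i % (volumes * buckets));
                    // here we only count it instead of writing data.
                    created.incrementAndGet();
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
        return created.get();
    }

    public static void main(String[] args) throws Exception {
        // Mirrors one of the runs below: 10 volumes x 10 buckets x 500 keys.
        System.out.println(run(10, 10, 500, 50)); // prints 50000
    }
}
```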
   
   https://issues.apache.org/jira/browse/HDDS-1785
   
   ## How was this patch tested?
   
   Tested with various numbers of volumes/buckets/threads.
   
   ```
   $ ozone freon rk --numOfVolumes 1 --numOfBuckets 100 --numOfKeys 5 
--numOfThreads 1 --replicationType=RATIS --factor=THREE
   ...
   Number of Volumes created: 1
   Number of Buckets created: 100
   Number of Keys added: 500
   Ratis replication factor: THREE
   Ratis replication type: RATIS
   Average Time spent in volume creation: 00:00:00,100
   Average Time spent in bucket creation: 00:00:00,304
   Average Time spent in key creation: 00:00:01,556
   Average Time spent in key write: 00:00:53,509
   Total bytes written: 512
   Total Execution time: 00:01:01,537
   ```
   
   ```
   $ ozone freon rk --numOfVolumes 1 --numOfBuckets 100 --numOfKeys 5 
--numOfThreads 50 --replicationType=RATIS --factor=THREE
   ...
   Number of Volumes created: 1
   Number of Buckets created: 100
   Number of Keys added: 500
   Ratis replication factor: THREE
   Ratis replication type: RATIS
   Average Time spent in volume creation: 00:00:00,003
   Average Time spent in bucket creation: 00:00:00,229
   Average Time spent in key creation: 00:00:00,273
   Average Time spent in key write: 00:00:10,375
   Total bytes written: 512
   Total Execution time: 00:00:16,872
   ```
   
   ```
   $ ozone freon rk --numOfVolumes 10 --numOfBuckets 10 --numOfKeys 500 
--numOfThreads 50 --replicationType=RATIS --factor=THREE
   ...
   Number of Volumes created: 10
   Number of Buckets created: 100
   Number of Keys added: 5
   Ratis replication factor: THREE
   Ratis replication type: RATIS
   Average Time spent in volume creation: 00:00:00,052
   Average Time spent in bucket creation: 00:00:00,240
   Average Time spent in key creation: 00:00:30,742
   Average Time spent in key write: 00:10:04,146
   Total bytes written: 51200
   Total Execution time: 00:10:42,463
   ```
   
   ```
   $ ozone freon rk --numOfVolumes 100 --numOfBuckets 100 --numOfKeys 2 
--numOfThreads 50 --replicationType=RATIS --factor=THREE
   ...
   Number of Volumes created: 100
   Number of Buckets created: 1
   Number of Keys added: 2
   Ratis replication factor: THREE
   Ratis replication type: RATIS
   Average Time spent in volume creation: 00:00:00,266
   Average Time spent in bucket creation: 00:00:06,388
   Average Time spent in key creation: 00:00:09,324
   Average Time spent in key write: 00:03:44,925
   Total bytes written: 20480
   Total Execution time: 00:04:11,735
   ```





[jira] [Commented] (HADOOP-16384) HADOOP-16384: S3A: Avoid inconsistencies between DDB and S3

2019-07-12 Thread Steve Loughran (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16883737#comment-16883737
 ] 

Steve Loughran commented on HADOOP-16384:
-

Given a +1 by Gabor in the PR; committing with a clearer title

> HADOOP-16384: S3A: Avoid inconsistencies between DDB and S3
> ---
>
> Key: HADOOP-16384
> URL: https://issues.apache.org/jira/browse/HADOOP-16384
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: hwdev-ireland-new.csv
>
>
> HADOOP-15183 added detection and rejection of prune updates when the store is 
> inconsistent (i.e. when it tries to update an entry twice in the same 
> operation, the second time with one that is inconsistent with the first)
> Now that we can detect this, we should address it. We are lucky here in that 
> my DDB table is currently inconsistent: prune is failing. 
> Plan
> # new test to run in the sequential phase, which does a s3guard prune against 
> the bucket used in tests
> # use this to identify/debug the issue
> # replicate the problem in the ITestDDBMetastore tests
> # decide what to do in this world. Tell the user to run fsck? skip?






[jira] [Updated] (HADOOP-16384) HADOOP-16384: S3A: Avoid inconsistencies between DDB and S3

2019-07-12 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-16384:

Summary: HADOOP-16384: S3A: Avoid inconsistencies between DDB and S3  (was: 
ITestS3AContractRootDir failing.)

> HADOOP-16384: S3A: Avoid inconsistencies between DDB and S3
> ---
>
> Key: HADOOP-16384
> URL: https://issues.apache.org/jira/browse/HADOOP-16384
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: hwdev-ireland-new.csv
>
>
> HADOOP-15183 added detection and rejection of prune updates when the store is 
> inconsistent (i.e. when it tries to update an entry twice in the same 
> operation, the second time with one that is inconsistent with the first)
> Now that we can detect this, we should address it. We are lucky here in that 
> my DDB table is currently inconsistent: prune is failing. 
> Plan
> # new test to run in the sequential phase, which does a s3guard prune against 
> the bucket used in tests
> # use this to identify/debug the issue
> # replicate the problem in the ITestDDBMetastore tests
> # decide what to do in this world. Tell the user to run fsck? skip?






[GitHub] [hadoop] bshashikant commented on a change in pull request #1071: HDDS-1779. TestWatchForCommit tests are flaky.

2019-07-12 Thread GitBox
bshashikant commented on a change in pull request #1071: HDDS-1779. 
TestWatchForCommit tests are flaky.
URL: https://github.com/apache/hadoop/pull/1071#discussion_r302947780
 
 

 ##
 File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestWatchForCommit.java
 ##
 @@ -343,61 +349,24 @@ public void testWatchForCommitForRetryfailure() throws 
Exception {
 cluster.shutdownHddsDatanode(pipeline.getNodes().get(1));
 // again write data with more than max buffer limit. This wi
 try {
-  // just watch for a lo index which in not updated in the commitInfo Map
-  xceiverClient.watchForCommit(index + 1, 2);
+  // just watch for a log index which in not updated in the commitInfo Map
+  // as well as there is no logIndex generate in Ratis.
+  // The basic idea here is just to test if its throws an exception.
+  xceiverClient
+  .watchForCommit(index + new Random().nextInt(100) + 10, 2);
 
 Review comment:
   The idea here is to run the test with a unique number each time, so that 
any possible hacks/errors get caught.





[GitHub] [hadoop] bshashikant commented on a change in pull request #1071: HDDS-1779. TestWatchForCommit tests are flaky.

2019-07-12 Thread GitBox
bshashikant commented on a change in pull request #1071: HDDS-1779. 
TestWatchForCommit tests are flaky.
URL: https://github.com/apache/hadoop/pull/1071#discussion_r302947487
 
 

 ##
 File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestWatchForCommit.java
 ##
 @@ -303,10 +305,14 @@ public void testWatchForCommitWithSmallerTimeoutValue() 
throws Exception {
 cluster.shutdownHddsDatanode(pipeline.getNodes().get(0));
 cluster.shutdownHddsDatanode(pipeline.getNodes().get(1));
 try {
-  // just watch for a lo index which in not updated in the commitInfo Map
-  xceiverClient.watchForCommit(index + 1, 3000);
+  // just watch for a log index which in not updated in the commitInfo Map
+  // as well as there is no logIndex generate in Ratis.
+  // The basic idea here is just to test if its throws an exception.
+  xceiverClient
+  .watchForCommit(index + new Random().nextInt(100) + 10, 3000);
   Assert.fail("expected exception not thrown");
 } catch (Exception e) {
+  System.out.println("exception " + e);
 
 Review comment:
   Addressed in the latest commit.





[GitHub] [hadoop] bgaborg commented on issue #1003: HADOOP-16384: Avoid inconsistencies between DDB and S3

2019-07-12 Thread GitBox
bgaborg commented on issue #1003: HADOOP-16384: Avoid inconsistencies between 
DDB and S3
URL: https://github.com/apache/hadoop/pull/1003#issuecomment-510856266
 
 
   +1; these improvements will make the s3a connector more stable.





[GitHub] [hadoop] steveloughran commented on issue #1003: HADOOP-16384: Avoid inconsistencies between DDB and S3

2019-07-12 Thread GitBox
steveloughran commented on issue #1003: HADOOP-16384: Avoid inconsistencies 
between DDB and S3
URL: https://github.com/apache/hadoop/pull/1003#issuecomment-510855815
 
 
   Thanks, Gabor. You do explicitly need to do that +1 for the record.





[GitHub] [hadoop] mukul1987 commented on a change in pull request #1076: HDDS-1782. Add an option to MiniOzoneChaosCluster to read files multiple times. Contributed by Mukul Kumar Singh.

2019-07-12 Thread GitBox
mukul1987 commented on a change in pull request #1076: HDDS-1782. Add an option 
to MiniOzoneChaosCluster to read files multiple times. Contributed by Mukul 
Kumar Singh.
URL: https://github.com/apache/hadoop/pull/1076#discussion_r302944648
 
 

 ##
 File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/chaos/TestProbability.java
 ##
 @@ -0,0 +1,39 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.ozone.chaos;
+
+import org.apache.commons.lang3.RandomUtils;
+
+/**
+ * Class to keep track of test probability.
+ */
+public class TestProbability {
+  private int pct;
+
+  private TestProbability(int pct) {
+this.pct = pct;
+  }
+
+  public boolean isTrue() {
+return (RandomUtils.nextInt() * pct / 100) == 1;
 
 Review comment:
   Thanks for the review @adoroszlai, this is fixed as part of the update to 
this pull request.





[GitHub] [hadoop] hadoop-yetus commented on issue #1082: HDDS-1790. Fix checkstyle issues in TestDataScrubber.

2019-07-12 Thread GitBox
hadoop-yetus commented on issue #1082: HDDS-1790. Fix checkstyle issues in 
TestDataScrubber.
URL: https://github.com/apache/hadoop/pull/1082#issuecomment-510851545
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 40 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 498 | trunk passed |
   | +1 | compile | 263 | trunk passed |
   | +1 | checkstyle | 78 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 882 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 150 | trunk passed |
   | 0 | spotbugs | 198 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | -1 | findbugs | 26 | hadoop-ozone in trunk failed. |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 435 | the patch passed |
   | +1 | compile | 233 | the patch passed |
   | +1 | javac | 233 | the patch passed |
   | +1 | checkstyle | 33 | The patch passed checkstyle in hadoop-hdds |
   | +1 | checkstyle | 34 | hadoop-ozone: The patch generated 0 new + 0 
unchanged - 4 fixed = 0 total (was 4) |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 611 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 140 | the patch passed |
   | +1 | findbugs | 504 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 273 | hadoop-hdds in the patch passed. |
   | -1 | unit | 1763 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 42 | The patch does not generate ASF License warnings. |
   | | | 6328 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.client.rpc.TestFailureHandlingByClient |
   |   | 
hadoop.ozone.container.common.statemachine.commandhandler.TestCloseContainerByPipeline
 |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestWatchForCommit |
   |   | hadoop.ozone.client.rpc.TestCommitWatcher |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestCloseContainerHandlingByClient |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=18.09.7 Server=18.09.7 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1082/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1082 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux bc04395a2c6e 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 9119ed0 |
   | Default Java | 1.8.0_212 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1082/1/artifact/out/branch-findbugs-hadoop-ozone.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1082/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1082/1/testReport/ |
   | Max. process+thread count | 5387 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/integration-test U: 
hadoop-ozone/integration-test |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1082/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] bshashikant opened a new pull request #1084: HDDS-1492. Generated chunk size name too long.

2019-07-12 Thread GitBox
bshashikant opened a new pull request #1084: HDDS-1492. Generated chunk size 
name too long.
URL: https://github.com/apache/hadoop/pull/1084
 
 
   





[GitHub] [hadoop] hadoop-yetus commented on issue #1035: HDDS-1735. Create separate unit and integration test executor dev-support script

2019-07-12 Thread GitBox
hadoop-yetus commented on issue #1035: HDDS-1735. Create separate unit and 
integration test executor dev-support script
URL: https://github.com/apache/hadoop/pull/1035#issuecomment-510844929
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 38 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | 0 | shelldocs | 0 | Shelldocs was not available. |
   | 0 | @author | 0 | Skipping @author checks as author.sh has been patched. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 469 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 732 | branch has no errors when building and testing 
our client artifacts. |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 426 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | shellcheck | 1 | The patch generated 0 new + 0 unchanged - 7 fixed = 
0 total (was 7) |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 632 | patch has no errors when building and testing 
our client artifacts. |
   ||| _ Other Tests _ |
   | +1 | unit | 98 | hadoop-hdds in the patch passed. |
   | +1 | unit | 176 | hadoop-ozone in the patch passed. |
   | +1 | asflicense | 41 | The patch does not generate ASF License warnings. |
   | | | 2788 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=18.09.7 Server=18.09.7 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1035/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1035 |
   | Optional Tests | dupname asflicense mvnsite unit shellcheck shelldocs |
   | uname | Linux 36fd912879ea 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 9119ed0 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1035/6/testReport/ |
   | Max. process+thread count | 446 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone U: hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1035/6/console |
   | versions | git=2.7.4 maven=3.3.9 shellcheck=0.4.6 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] hadoop-yetus commented on issue #1035: HDDS-1735. Create separate unit and integration test executor dev-support script

2019-07-12 Thread GitBox
hadoop-yetus commented on issue #1035: HDDS-1735. Create separate unit and 
integration test executor dev-support script
URL: https://github.com/apache/hadoop/pull/1035#issuecomment-510837233
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 69 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | 0 | shelldocs | 1 | Shelldocs was not available. |
   | 0 | @author | 0 | Skipping @author checks as author.sh has been patched. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 50 | Maven dependency ordering for branch |
   | +1 | mvninstall | 532 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | -1 | pylint | 8 | Error running pylint. Please check pylint stderr files. |
   | +1 | shadedclient | 792 | branch has no errors when building and testing 
our client artifacts. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 32 | Maven dependency ordering for patch |
   | +1 | mvninstall | 474 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | -1 | pylint | 8 | Error running pylint. Please check pylint stderr files. |
   | +1 | pylint | 8 | There were no new pylint issues. |
   | +1 | shellcheck | 0 | The patch generated 0 new + 0 unchanged - 7 fixed = 
0 total (was 7) |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 669 | patch has no errors when building and testing 
our client artifacts. |
   ||| _ Other Tests _ |
   | +1 | unit | 99 | hadoop-hdds in the patch passed. |
   | +1 | unit | 175 | hadoop-ozone in the patch passed. |
   | +1 | asflicense | 40 | The patch does not generate ASF License warnings. |
   | | | 3152 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=18.09.7 Server=18.09.7 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1035/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1035 |
   | Optional Tests | dupname asflicense mvnsite unit shellcheck shelldocs 
pylint |
   | uname | Linux 3e3d75757cf8 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 9119ed0 |
   | pylint | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1035/5/artifact/out/branch-pylint-stderr.txt
 |
   | pylint | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1035/5/artifact/out/patch-pylint-stderr.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1035/5/testReport/ |
   | Max. process+thread count | 423 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone hadoop-ozone/fault-injection-test/network-tests 
U: hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1035/5/console |
   | versions | git=2.7.4 maven=3.3.9 shellcheck=0.4.6 pylint=1.9.2 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] lokeshj1703 commented on issue #1072: HDDS-1766. ContainerStateMachine is unable to increment lastAppliedTermIndex. Contributed by Mukul Kumar Singh.

2019-07-12 Thread GitBox
lokeshj1703 commented on issue #1072: HDDS-1766. ContainerStateMachine is 
unable to increment lastAppliedTermIndex. Contributed by  Mukul Kumar Singh.
URL: https://github.com/apache/hadoop/pull/1072#issuecomment-510837155
 
 
   @mukul1987 The changes look good to me. +1.





[GitHub] [hadoop] mukul1987 commented on a change in pull request #1072: HDDS-1766. ContainerStateMachine is unable to increment lastAppliedTermIndex. Contributed by Mukul Kumar Singh.

2019-07-12 Thread GitBox
mukul1987 commented on a change in pull request #1072: HDDS-1766. 
ContainerStateMachine is unable to increment lastAppliedTermIndex. Contributed 
by  Mukul Kumar Singh.
URL: https://github.com/apache/hadoop/pull/1072#discussion_r302923915
 
 

 ##
 File path: 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/ContainerStateMachine.java
 ##
 @@ -554,12 +556,12 @@ private ByteString getCachedStateMachineData(Long 
logIndex, long term,
   }
 } catch (Exception e) {
   metrics.incNumReadStateMachineFails();
-  LOG.error("unable to read stateMachineData:" + e);
+  LOG.error("{} unable to read stateMachineData:", gid, e);
   return completeExceptionally(e);
 }
   }
 
-  private void updateLastApplied() {
+  private synchronized void updateLastApplied() {
 
 Review comment:
   HDDS-1792 will fix this by using ConcurrentHashSet.
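As a sketch of that direction (a hypothetical class, not the actual ContainerStateMachine code), a thread-safe applied-index tracker can be built on `ConcurrentHashMap.newKeySet()`, the JDK's concurrent hash set: apply threads record indexes without blocking each other, and only the small step that advances the last-applied pointer over a contiguous run stays synchronized.

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of the HDDS-1792 idea: record applied log indexes
// in a concurrent set so apply threads never block on add().
public class AppliedIndexTracker {
    private final Set<Long> applied = ConcurrentHashMap.newKeySet();
    private long lastApplied = -1;

    public void markApplied(long index) {
        applied.add(index);  // safe under concurrent calls
        advance();
    }

    // Advance lastApplied across any contiguous run of applied indexes.
    private synchronized void advance() {
        long next = lastApplied + 1;
        while (applied.remove(next)) {
            lastApplied = next;
            next++;
        }
    }

    public synchronized long getLastApplied() {
        return lastApplied;
    }
}
```

Out-of-order applies simply park in the set until the gap below them is filled; `getLastApplied()` only ever moves past a fully contiguous prefix.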





[GitHub] [hadoop] bgaborg removed a comment on issue #1079: HADOOP-16380: test to show that it is the root directory where the "tombstone problem" can be replicated

2019-07-12 Thread GitBox
bgaborg removed a comment on issue #1079: HADOOP-16380: test to show that it is 
the root directory where the "tombstone problem" can be replicated
URL: https://github.com/apache/hadoop/pull/1079#issuecomment-510833679
 
 
   thanks Steve for creating a test that reproduces the issue. I'm looking into 
how to solve this.





[GitHub] [hadoop] mukul1987 commented on a change in pull request #1076: HDDS-1782. Add an option to MiniOzoneChaosCluster to read files multiple times. Contributed by Mukul Kumar Singh.

2019-07-12 Thread GitBox
mukul1987 commented on a change in pull request #1076: HDDS-1782. Add an option 
to MiniOzoneChaosCluster to read files multiple times. Contributed by Mukul 
Kumar Singh.
URL: https://github.com/apache/hadoop/pull/1076#discussion_r302922551
 
 

 ##
 File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/chaos/TestProbability.java
 ##
 @@ -0,0 +1,39 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.ozone.chaos;
+
+import org.apache.commons.lang3.RandomUtils;
+
+/**
+ * Class to keep track of test probability.
+ */
+public class TestProbability {
+  private int pct;
+
+  private TestProbability(int pct) {
+this.pct = pct;
+  }
+
+  public boolean isTrue() {
+return (RandomUtils.nextInt() * pct / 100) == 1;
 
 Review comment:
   Sorry, my bad. This is a coding error. I had quickly tried something in the 
notepad. Let me update the patch soon.
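For reference, a fixed version of the probability check might look like the sketch below (a hypothetical rewrite, not the committed fix). The quoted `RandomUtils.nextInt() * pct / 100 == 1` is almost never true, whereas drawing a number in `[0, 100)` and comparing it to `pct` is true `pct`% of the time; `ThreadLocalRandom` stands in for commons-lang3 `RandomUtils` to keep the sketch JDK-only:

```java
import java.util.concurrent.ThreadLocalRandom;

// Hypothetical corrected version of TestProbability's check.
public class ProbabilitySketch {
    private final int pct;

    private ProbabilitySketch(int pct) {
        this.pct = pct;
    }

    public static ProbabilitySketch valueOf(int pct) {
        return new ProbabilitySketch(pct);
    }

    // True roughly pct% of the time: a uniform draw in [0, 100)
    // is below pct with probability pct/100.
    public boolean isTrue() {
        return ThreadLocalRandom.current().nextInt(100) < pct;
    }

    public static void main(String[] args) {
        ProbabilitySketch p = ProbabilitySketch.valueOf(30);
        int hits = 0;
        int trials = 100_000;
        for (int i = 0; i < trials; i++) {
            if (p.isTrue()) {
                hits++;
            }
        }
        System.out.println("hit rate: " + 100.0 * hits / trials + "%");
    }
}
```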





[GitHub] [hadoop] bgaborg commented on issue #1079: HADOOP-16380: test to show that it is the root directory where the "tombstone problem" can be replicated

2019-07-12 Thread GitBox
bgaborg commented on issue #1079: HADOOP-16380: test to show that it is the 
root directory where the "tombstone problem" can be replicated
URL: https://github.com/apache/hadoop/pull/1079#issuecomment-510833679
 
 
   thanks Steve for creating a test that reproduces the issue. I'm looking into 
how to solve this.





[GitHub] [hadoop] nandakumar131 opened a new pull request #1083: HDDS-1791. Update network-tests/src/test/blockade/README.md file

2019-07-12 Thread GitBox
nandakumar131 opened a new pull request #1083: HDDS-1791. Update 
network-tests/src/test/blockade/README.md file
URL: https://github.com/apache/hadoop/pull/1083
 
 
   
`hadoop-ozone/fault-injection-test/network-tests/src/test/blockade/README.md` 
has to be updated after #1068 





[GitHub] [hadoop] nandakumar131 opened a new pull request #1082: HDDS-1790. Fix checkstyle issues in TestDataScrubber.

2019-07-12 Thread GitBox
nandakumar131 opened a new pull request #1082: HDDS-1790. Fix checkstyle issues 
in TestDataScrubber.
URL: https://github.com/apache/hadoop/pull/1082
 
 
   There are 4 Checkstyle issues in TestDataScrubber that have to be fixed
   ```
   [ERROR] 
src/test/java/org/apache/hadoop/ozone/dn/scrubber/TestDataScrubber.java:[157] 
(sizes) LineLength: Line is longer than 80 characters (found 81).
   [ERROR] 
src/test/java/org/apache/hadoop/ozone/dn/scrubber/TestDataScrubber.java:[161] 
(sizes) LineLength: Line is longer than 80 characters (found 82).
   [ERROR] 
src/test/java/org/apache/hadoop/ozone/dn/scrubber/TestDataScrubber.java:[167] 
(sizes) LineLength: Line is longer than 80 characters (found 85).
   [ERROR] 
src/test/java/org/apache/hadoop/ozone/dn/scrubber/TestDataScrubber.java:[187] 
(sizes) LineLength: Line is longer than 80 characters (found 104).
   ```





[GitHub] [hadoop] elek closed pull request #1029: HDDS-1384. TestBlockOutputStreamWithFailures is failing

2019-07-12 Thread GitBox
elek closed pull request #1029: HDDS-1384. TestBlockOutputStreamWithFailures is 
failing
URL: https://github.com/apache/hadoop/pull/1029
 
 
   





[jira] [Commented] (HADOOP-13363) Upgrade protobuf from 2.5.0 to something newer

2019-07-12 Thread Yikun Jiang (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-13363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16883662#comment-16883662
 ] 

Yikun Jiang commented on HADOOP-13363:
--

I also prefer upgrading protobuf to v3; protobuf v3 is becoming more and more 
stable in distributions and has better multi-arch support.

> Upgrade protobuf from 2.5.0 to something newer
> --
>
> Key: HADOOP-13363
> URL: https://issues.apache.org/jira/browse/HADOOP-13363
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.0.0-alpha1, 3.0.0-alpha2
>Reporter: Allen Wittenauer
>Priority: Major
>  Labels: security
> Attachments: HADOOP-13363.001.patch, HADOOP-13363.002.patch, 
> HADOOP-13363.003.patch, HADOOP-13363.004.patch, HADOOP-13363.005.patch
>
>
> Standard protobuf 2.5.0 does not work properly on many platforms.  (See, for 
> example, https://gist.github.com/BennettSmith/7111094 ).  In order for us to 
> avoid crazy work arounds in the build environment and the fact that 2.5.0 is 
> starting to slowly disappear as a standard install-able package for even 
> Linux/x86, we need to either upgrade or self bundle or something else.






[GitHub] [hadoop] nandakumar131 commented on a change in pull request #1008: HDDS-1713. ReplicationManager fail to find proper node topology based…

2019-07-12 Thread GitBox
nandakumar131 commented on a change in pull request #1008: HDDS-1713. 
ReplicationManager fail to find proper node topology based…
URL: https://github.com/apache/hadoop/pull/1008#discussion_r302894199
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMDatanodeHeartbeatDispatcher.java
 ##
 @@ -99,6 +99,9 @@ public SCMDatanodeHeartbeatDispatcher(NodeManager 
nodeManager,
   commands = nodeManager.getCommandQueue(dnID);
 
 } else {
+  // Get the datanode details again from node manager with the topology 
info
+  // for registered datanodes.
+  datanodeDetails = nodeManager.getNode(datanodeDetails.getIpAddress());
 
 Review comment:
   +1 on changing it to `uuid -> location` and maintaining a map for `uuid -> 
ip/dns`.
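A minimal sketch of that suggestion (hypothetical class and method names, not the actual NodeManager API): key the topology location by the stable datanode UUID and keep a separate `uuid -> ip/dns` map, so a heartbeating datanode is re-resolved by UUID rather than by IP address:

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: topology info keyed by the stable datanode UUID,
// with a separate uuid -> ip/dns map for address lookups.
public class NodeRegistrySketch {
    private final Map<UUID, String> uuidToLocation = new ConcurrentHashMap<>();
    private final Map<UUID, String> uuidToAddress = new ConcurrentHashMap<>();

    public void register(UUID id, String ipOrDns, String networkLocation) {
        uuidToAddress.put(id, ipOrDns);
        uuidToLocation.put(id, networkLocation);
    }

    // Resolve the topology location for a heartbeating datanode by UUID,
    // which stays valid even if the node's IP or DNS name changes.
    public String getLocation(UUID id) {
        return uuidToLocation.get(id);
    }

    public String getAddress(UUID id) {
        return uuidToAddress.get(id);
    }
}
```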





[GitHub] [hadoop] elek commented on issue #1029: HDDS-1384. TestBlockOutputStreamWithFailures is failing

2019-07-12 Thread GitBox
elek commented on issue #1029: HDDS-1384. TestBlockOutputStreamWithFailures is 
failing
URL: https://github.com/apache/hadoop/pull/1029#issuecomment-510809738
 
 
   Thanks @arp7 for the review, I am merging it to trunk right now.
   Remaining unit test failures are not related (AssertionErrors + timeout); 
the original problem was fixed (44a8b9f).





[GitHub] [hadoop] elek commented on issue #1032: [HDDS-1201] Reporting corrupted containers info to SCM

2019-07-12 Thread GitBox
elek commented on issue #1032: [HDDS-1201] Reporting corrupted containers info 
to SCM
URL: https://github.com/apache/hadoop/pull/1032#issuecomment-510806166
 
 
   This patch introduced new checkstyle errors.
   
   I recommend adding the ozone label to the jira to get clear reports from 
the supplementary Jenkins instance.
   
   Contributors (who have no write access to the repository) can add the ozone 
label by posting a new comment with the "/label ozone" message.





[GitHub] [hadoop] nandakumar131 merged pull request #1080: HDDS-1752 Use concurrent set implementation for node to pipelines ma…

2019-07-12 Thread GitBox
nandakumar131 merged pull request #1080:  HDDS-1752 Use concurrent set 
implementation for node to pipelines ma…
URL: https://github.com/apache/hadoop/pull/1080
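
A hedged sketch of the technique named in the PR title — backing a node-to-pipelines map with concurrent sets via `ConcurrentHashMap.newKeySet()`. Class and key names here (`Node2PipelineMap` with `String` keys) are simplified stand-ins, not the actual SCM types:

```java
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative node -> pipelines map using concurrent set values, so
// concurrent adds from heartbeat handlers need no external locking.
public class Node2PipelineMap {
  private final Map<String, Set<String>> node2pipelines =
      new ConcurrentHashMap<>();

  public void addPipeline(String node, String pipeline) {
    // newKeySet() returns a thread-safe Set view backed by a ConcurrentHashMap
    node2pipelines.computeIfAbsent(node, k -> ConcurrentHashMap.newKeySet())
        .add(pipeline);
  }

  public Set<String> getPipelines(String node) {
    return node2pipelines.getOrDefault(node, Set.of());
  }
}
```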
 
 
   





[GitHub] [hadoop] zeroflag commented on issue #940: HADOOP-16341. ShutDownHookManager: Regressed performance on Hook remo…

2019-07-12 Thread GitBox
zeroflag commented on issue #940: HADOOP-16341. ShutDownHookManager: Regressed 
performance on Hook remo…
URL: https://github.com/apache/hadoop/pull/940#issuecomment-510801697
 
 
   retest this please





[jira] [Commented] (HADOOP-16428) Distcp make use of S3a Committers, be it magic or staging

2019-07-12 Thread Sahil Kaw (JIRA)


[ https://issues.apache.org/jira/browse/HADOOP-16428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16883565#comment-16883565 ]

Sahil Kaw commented on HADOOP-16428:


[~ste...@apache.org] Can you please help me with this? Thanks a lot in advance 
for your time.

> Distcp make use of S3a Committers, be it magic or staging
> -
>
> Key: HADOOP-16428
> URL: https://issues.apache.org/jira/browse/HADOOP-16428
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.1.1
>Reporter: Sahil Kaw
>Priority: Minor
> Fix For: 3.1.2
>
>
> Currently, I don't see Distcp make use of S3a Committers, be it Magic or
> Staging, and I have noticed that most jobs which use the MapReduce framework
> use S3 committers, except distcp. Distcp uses the FileOutputCommitter even
> if S3a committer parameters are specified in core-site.xml. Is this by
> design? If yes, can someone please explain the reason for that? Are there
> any limitations or potential risks of using S3a committers with Distcp?
> I know there is a "-direct" option that can be used with the
> FileOutputCommitter in order to avoid renaming while committing for object
> stores. But if anyone can shed some light on the current limitations of S3a
> committers with distcp, and the reason for choosing FileOutputCommitter for
> Distcp over S3a committers, it would be helpful. Thanks



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16428) Distcp make use of S3a Committers, be it magic or staging

2019-07-12 Thread Sahil Kaw (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16428?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sahil Kaw updated HADOOP-16428:
---
Description: 
Currently, I don't see Distcp make use of S3a Committers, be it Magic or
Staging, and I have noticed that most jobs which use the MapReduce framework
use S3 committers, except distcp. Distcp uses the FileOutputCommitter even
if S3a committer parameters are specified in core-site.xml. Is this by
design? If yes, can someone please explain the reason for that? Are there
any limitations or potential risks of using S3a committers with Distcp?

I know there is a "-direct" option that can be used with the
FileOutputCommitter in order to avoid renaming while committing for object
stores. But if anyone can shed some light on the current limitations of S3a
committers with distcp, and the reason for choosing FileOutputCommitter for
Distcp over S3a committers, it would be helpful. Thanks

  was:
Currently, I don't see Distcp make use of S3a Committers, be it Magic or
Staging. It uses the FileOutputCommitter even if S3a committer parameters
are specified in core-site.xml. Is this by design? If yes, can someone
please explain the reason for that? Are there any limitations or potential
risks of using S3a committers with Distcp?

I know there is a "-direct" option that can be used with the
FileOutputCommitter in order to avoid renaming while committing. But if
anyone can shed some light on the current limitations of S3a committers
with distcp, it would be helpful. Thanks


> Distcp make use of S3a Committers, be it magic or staging
> -
>
> Key: HADOOP-16428
> URL: https://issues.apache.org/jira/browse/HADOOP-16428
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.1.1
>Reporter: Sahil Kaw
>Priority: Minor
> Fix For: 3.1.2
>
>
> Currently, I don't see Distcp make use of S3a Committers, be it Magic or
> Staging, and I have noticed that most jobs which use the MapReduce framework
> use S3 committers, except distcp. Distcp uses the FileOutputCommitter even
> if S3a committer parameters are specified in core-site.xml. Is this by
> design? If yes, can someone please explain the reason for that? Are there
> any limitations or potential risks of using S3a committers with Distcp?
> I know there is a "-direct" option that can be used with the
> FileOutputCommitter in order to avoid renaming while committing for object
> stores. But if anyone can shed some light on the current limitations of S3a
> committers with distcp, and the reason for choosing FileOutputCommitter for
> Distcp over S3a committers, it would be helpful. Thanks






[jira] [Created] (HADOOP-16428) Distcp make use of S3a Committers, be it magic or staging

2019-07-12 Thread Sahil Kaw (JIRA)
Sahil Kaw created HADOOP-16428:
--

 Summary: Distcp make use of S3a Committers, be it magic or staging
 Key: HADOOP-16428
 URL: https://issues.apache.org/jira/browse/HADOOP-16428
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/s3
Affects Versions: 3.1.1
Reporter: Sahil Kaw
 Fix For: 3.1.2


Currently, I don't see Distcp make use of S3a Committers, be it Magic or
Staging. It uses the FileOutputCommitter even if S3a committer parameters
are specified in core-site.xml. Is this by design? If yes, can someone
please explain the reason for that? Are there any limitations or potential
risks of using S3a committers with Distcp?

I know there is a "-direct" option that can be used with the
FileOutputCommitter in order to avoid renaming while committing. But if
anyone can shed some light on the current limitations of S3a committers
with distcp, it would be helpful. Thanks
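
For context, a sketch of the kind of core-site.xml settings the description refers to — the documented S3A committer switches (here selecting the magic committer; the exact values for a given cluster are an assumption):

```xml
<!-- Select an S3A committer (directory, partitioned, or magic). -->
<property>
  <name>fs.s3a.committer.name</name>
  <value>magic</value>
</property>
<!-- Route s3a:// output through the S3A committer factory. -->
<property>
  <name>mapreduce.outputcommitter.factory.scheme.s3a</name>
  <value>org.apache.hadoop.fs.s3a.commit.S3ACommitterFactory</value>
</property>
```

The report is that distcp falls back to FileOutputCommitter even with these set.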


